We may soon say goodbye to the “uncanny valley”: the unsettling effect when animations of humans look almost, but not quite, human, and instead come across as creepy. A group of computer science researchers at MIT has come up with new computing techniques that they say will make it easier than ever to create highly realistic animation in games and movies.
Moving objects almost always appear a little blurry in photographs, a quality that translates onto the medium of motion pictures. In fact, if moving images do not blur, a video ends up looking “surprisingly unconvincing,” or much like Clay-mation, according to MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
That’s why game and movie animators try to replicate this blur to make their work appear as natural and film-like as possible. But from a technical standpoint, it’s actually much more difficult to recreate blur than it is to produce sharp images.
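To see why blur is the expensive case, consider the brute-force approach: each pixel's color is the average of many shading evaluations spread across the interval the virtual shutter is open, so every blurred pixel costs many times what a sharp one does. The sketch below illustrates this with a toy `shade` function standing in for a full renderer evaluation; the function and sample counts are illustrative assumptions, not anyone's actual pipeline.

```python
import math

def shade(x, t):
    # Hypothetical shading function: the color seen through pixel x
    # at time t. In a real renderer this is an expensive evaluation.
    return 0.5 + 0.5 * math.sin(x + 3.0 * t)

def render_pixel_sharp(x):
    # A sharp frame needs just one shading evaluation per pixel,
    # at a single instant in time.
    return shade(x, 0.5)

def render_pixel_blurred(x, num_samples=100):
    # Brute-force motion blur: average many shading evaluations
    # across the shutter interval [0, 1). Each sample costs as much
    # as an entire sharp pixel, which is why blur is so much harder
    # to produce than a crisp image.
    total = 0.0
    for i in range(num_samples):
        t = (i + 0.5) / num_samples  # sample time within the frame
        total += shade(x, t)
    return total / num_samples
```

The blurred pixel needed 100 shading evaluations where the sharp one needed a single call; the two papers described below attack exactly that cost.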
The researchers at the computer graphics group of MIT’s CSAIL have developed new computing techniques for reproducing this natural blur, according to a news release published by the university Monday. At the upcoming SIGGRAPH 2011 computer graphics conference, to be held in Vancouver, British Columbia, this August, the MIT researchers plan to present two papers describing the new blur computation techniques they’ve developed.
In the first paper, the university says:
The researchers make the simplifying assumption that the way in which light reflects off a moving object doesn’t change over the course of a single frame. For each pixel in the final image, their algorithm still averages the colors of multiple points on objects’ surfaces, but it calculates those colors only once. The researchers found a way to represent the relationship between the color calculations and the shapes of the associated objects as entries in a table. For each pixel in the final image, the algorithm simply looks up the corresponding values in the table. That drastically simplifies the calculation but has little effect on the final image.
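The idea in that first paper can be sketched in a few lines: if shading is assumed constant over a frame, each surface point's color needs to be computed only once and stored, and the per-pixel blur average becomes a series of cheap lookups. This is a minimal illustration of that lookup-table strategy, not the paper's actual algorithm; the `shade` function and point IDs are invented for the example.

```python
def shade(point_id):
    # Hypothetical per-point shading; in this setting it is the
    # expensive part, so it should run only once per frame.
    return (point_id * 37 % 100) / 100.0

def build_shading_table(point_ids):
    # Under the simplifying assumption that shading does not change
    # over the frame, compute each surface point's color exactly once.
    return {pid: shade(pid) for pid in point_ids}

def blurred_pixel(contributing_points, table):
    # Motion blur still averages many surface points per pixel, but
    # each color is now a table lookup instead of a fresh shading
    # computation.
    return sum(table[pid] for pid in contributing_points) / len(contributing_points)
```

The average itself is unchanged, which is why the simplification has little visible effect on the final image; only the cost of obtaining each color drops.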
In the second paper,
[The researchers have developed a method that] reduces the computational burden of determining which rays of light would reach an imagined lens. To produce convincing motion blur, digital animators might ordinarily consider the contributions that more than 100 discrete points on the surfaces of moving objects make to the color value of a single pixel. Lehtinen and his colleagues’ algorithm instead looks at a smaller number of points — maybe 16 or so — and makes an educated guess about the color values of the points in between. The result: A frame of digital video that would ordinarily take about an hour to render might instead take about 10 minutes.
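The second paper's strategy can likewise be sketched: evaluate only a handful of expensive samples and fill in the rest by interpolation. The linear interpolation below is a stand-in for the paper's far more sophisticated reconstruction, and `expensive_color` is an invented placeholder for tracing a light path; the sample counts echo the article's figures of roughly 100 versus 16.

```python
def expensive_color(t):
    # Hypothetical stand-in for tracing a ray of light at time t.
    return t * t

def blur_dense(n=128):
    # Baseline: evaluate the expensive function at every sample.
    return sum(expensive_color(i / (n - 1)) for i in range(n)) / n

def blur_sparse(n_sparse=16, n_total=128):
    # Evaluate only a few samples ("knots"), then make an educated
    # guess about the values in between via linear interpolation.
    knots = [expensive_color(i / (n_sparse - 1)) for i in range(n_sparse)]
    total = 0.0
    for i in range(n_total):
        u = i / (n_total - 1) * (n_sparse - 1)
        lo = min(int(u), n_sparse - 2)
        frac = u - lo
        total += knots[lo] * (1 - frac) + knots[lo + 1] * frac
    return total / n_total
```

Here the sparse version makes 16 expensive calls instead of 128 yet lands very close to the dense answer, mirroring the hour-to-ten-minutes speedup the researchers report.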
The demand for blurring techniques is so strong that several major special-effects companies have already contacted the researchers about the work, even though the papers have yet to be presented, according to Jaakko Lehtinen, who worked on both projects as a postdoc and is now a senior research scientist with graphics-chip manufacturer Nvidia. With the continuing blockbuster success of high-end, effects-heavy films like Avatar, it makes sense that there’s strong market demand for the most cutting-edge rendering technologies.
Images courtesy of Jaakko Lehtinen