Yesterday at GDC, Activision Blizzard director of graphics research and development Javier van der Pahlen and researcher Jorge Jimenez presented the research they and their colleagues Etienne Danvoye, Bernardo Antoniazzi, Zbynek Kysela, and Mike Eheler have been conducting on next-generation character modeling and animation, ostensibly for use in Activision Blizzard's next-generation titles in development.
"In the real world," van der Pahlen said, "nothing is perfect." Van der Pahlen went on to explain that perfection and uniformity are the two biggest contributors to the effect known as the Uncanny Valley. The current barriers to realistic in-game character models have to do with recreating imperfection, breaking up uniform surfaces, and animating facial movement in a more believable way, and van der Pahlen presentation broke down ways that Activision Blizzard studios will likely deal with these problems in next-generation development.
The key to more believable, realistic skin on characters, van der Pahlen said, is a more refined sense of light playing off of skin surfaces. Done properly, he asserted, this requires only a small amount of extra processing. Van der Pahlen outlined a series of methods for adding "micro-geometry" to faces and skin, which create subtle but noticeable improvements. He stressed that artists and engineers need to focus their attention at the microscopic level; that detail can be worked into existing techniques and used to add variation and unpredictability to lighting on skin surfaces. Van der Pahlen even advised using a technique called "cavity mapping" to simulate the way human pores interact with lighting.
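The presentation didn't include source code, but the general idea behind cavity mapping can be sketched in a few lines: a grayscale "cavity" value sampled per pixel scales down the specular term so highlights stay out of pores. The Python below is illustrative only; the function names, lighting model, and numbers are assumptions, not Activision Blizzard's implementation.

```python
import numpy as np

def shade_skin_specular(normal, light_dir, view_dir, cavity, gloss=64.0):
    """Blinn-Phong-style specular term attenuated by a per-pixel cavity value.

    `cavity` is sampled from a "cavity map" (0 = deep pore, 1 = flat skin);
    multiplying it into the specular term keeps highlights out of pores,
    breaking up an otherwise uniform sheen at the micro level.
    """
    n = normal / np.linalg.norm(normal)
    h = light_dir + view_dir
    h = h / np.linalg.norm(h)                  # half vector
    spec = max(np.dot(n, h), 0.0) ** gloss     # basic specular lobe
    return spec * cavity                       # pores receive less specular

# Same surface point and lights: full highlight on smooth skin, damped in a pore.
L = np.array([0.0, 0.3, 1.0]); L /= np.linalg.norm(L)
V = np.array([0.0, 0.0, 1.0])
N = np.array([0.0, 0.0, 1.0])
print(shade_skin_specular(N, L, V, cavity=1.0))   # smooth skin
print(shade_skin_specular(N, L, V, cavity=0.2))   # inside a pore
```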
Van der Pahlen went on to explain the need for more effective methods of subsurface scattering. When light hits a surface, it often penetrates that surface to varying degrees before reflecting back outward, which changes the way it appears. Skin is even more complex, van der Pahlen stressed, because it is composed of multiple layers of differing transparency. He noted the constant give and take between finding effective ways of simulating how light behaves on characters and the system resources available to artists, and said that he and his colleagues are experimenting with ways to mix existing techniques, such as pairing memory-heavy implementations that look good up close with more efficient techniques that look fine at a distance.
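As a rough illustration of that kind of mixing, a renderer can crossfade between an expensive up-close subsurface-scattering result and a cheaper distant one based on camera distance. The sketch below is a minimal, hypothetical example of that blend; the distances and colors are made up, and nothing here reflects the team's actual shaders.

```python
def blend_sss_by_distance(near_result, far_result, distance, near=0.5, far=3.0):
    """Crossfade between an expensive subsurface-scattering result (used up
    close) and a cheaper approximation (used at range).

    `near_result` / `far_result` are RGB tuples already shaded by the two
    techniques; `near` and `far` are illustrative distances in meters.
    """
    t = min(max((distance - near) / (far - near), 0.0), 1.0)  # 0 near, 1 far
    return tuple((1.0 - t) * a + t * b for a, b in zip(near_result, far_result))

# Up close the detailed result dominates; at distance the cheap one takes over.
detailed = (0.80, 0.55, 0.48)   # e.g. a multi-layer diffusion result
cheap    = (0.78, 0.57, 0.50)   # e.g. a pre-blurred diffuse lookup
print(blend_sss_by_distance(detailed, cheap, distance=0.6))
print(blend_sss_by_distance(detailed, cheap, distance=5.0))
```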
Eyes are the most difficult part of the body, van der Pahlen continued. Because eyes are made of different kinds of tissue, some of it particularly gelatinous and transparent, all of the considerations that apply to skin apply here as well. But eyes are also wet surfaces with multiple densities that reflect light differently. Artists need to consider what makes an eye look wet, van der Pahlen stressed. Activision Blizzard's team found that the main difference between a dry and a wet eye is the level of distortion of objects in its reflection. In response, the panelists created three different predetermined "geometries" for wetness to simulate that variation. They also found that blurring the reflections and specular highlights (particularly bright points of reflected light) in the eye changes how wet the eye looks.
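The talk didn't spell out exactly how blur maps to wetness, but the mechanism itself is simple to sketch: a blur radius, driven by a wetness control, smears the reflected image and its highlights. The Python below is a toy 1D version of that idea, with the direction of the wetness-to-blur mapping left as an artistic choice rather than something taken from the presentation.

```python
def blur_reflection(reflection_row, blur_radius):
    """Box-blur a row of reflected brightness values.

    The radius would be driven by a wetness parameter elsewhere; how that
    parameter maps to radius is an assumption, not the presenters' math.
    """
    out = []
    n = len(reflection_row)
    for i in range(n):
        lo, hi = max(0, i - blur_radius), min(n, i + blur_radius + 1)
        window = reflection_row[lo:hi]
        out.append(sum(window) / len(window))
    return out

# Crisp (radius 0) vs. smeared (radius 2) reflection of a bright highlight.
row = [0.0, 0.0, 1.0, 0.0, 0.0]
print(blur_reflection(row, 0))   # sharp highlight
print(blur_reflection(row, 2))   # blurred highlight reads as a different wetness
```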
Realistic eyes also need the eyelids to be reflected in most situations. In hard lighting, local reflections, i.e., a clear, direct reflection of the eyelid in the eye itself, are important. Van der Pahlen strongly asserted that small details contribute to an overall sense of believability. In keeping with this, the minor amount of refraction (the bending and redirecting of light as it passes through transparent surfaces and bounces off others) that occurs as light travels through the iris of the eye can create caustics: think of the sharp patterns of light created by sun passing through a glass of water, or light reflecting off of a swimming pool.
Simulating caustics creates an additional sense of depth. Van der Pahlen said that the current solution for effects like this is to fake them, but believably: the panelists used a 3D texture to simulate the way light behaves in the iris as a stopgap on the way to more dynamic, true material simulation. Van der Pahlen also presented a means of independently controlling the appearance of the veins and sclera in the eye to simulate redness, which could be combined with wetness for a variety of effects, such as tears.
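No shader code was shown here either, but the independent redness and wetness controls can be pictured as two scalar parameters feeding a blend, something like the hypothetical sketch below; the blend math and colors are assumptions, not the presenters' material model.

```python
import numpy as np

def shade_sclera(base_color, vein_color, redness, wetness):
    """Blend a vein layer over the sclera by a `redness` control, and raise a
    wet sheen by a separate `wetness` control.

    Because the two controls are independent, redness and wetness can be
    combined, e.g. for a teary, irritated eye. Illustrative math only.
    """
    base = np.asarray(base_color, dtype=float)
    veins = np.asarray(vein_color, dtype=float)
    color = (1.0 - redness) * base + redness * veins   # more redness, more vein
    sheen = 0.1 + 0.6 * wetness                        # wetter eye, brighter sheen
    return np.clip(color + sheen, 0.0, 1.0)

print(shade_sclera((0.92, 0.90, 0.88), (0.75, 0.25, 0.22), redness=0.1, wetness=0.2))
print(shade_sclera((0.92, 0.90, 0.88), (0.75, 0.25, 0.22), redness=0.7, wetness=0.9))
```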
Van der Pahlen's colleague Jorge Jimenez concluded the presentation by discussing the necessity of aggressive anti-aliasing for proper, believable character rendering. Jimenez explained that the pixel crawl of aliased images breaks up the edges of characters and damages suspension of disbelief. He went on to explain how certain kinds of temporal anti-aliasing, which blend information from one frame into the next to smooth jagged edges, allowed the team to improve both performance and overall accuracy (Activision Blizzard's team was using SMAA T2x, specifically). Jimenez also said that developers should consider devoting more resources to faces when they fill most of the player's screen, and fewer as faces move farther from the player's camera.
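At its core, temporal anti-aliasing of this kind accumulates each new, slightly jittered frame into a running history. The sketch below shows only that exponential blend; a real SMAA T2x-style resolve also reprojects the history with motion vectors and clamps it against the current frame to avoid ghosting, details omitted here as assumptions beyond what the talk covered.

```python
import numpy as np

def temporal_resolve(current_frame, history_frame, blend=0.1):
    """Blend the newly rendered (jittered) frame into the accumulated history.

    Only the core exponential blend is shown; reprojection and history
    clamping from a full temporal AA pipeline are left out.
    """
    return blend * current_frame + (1.0 - blend) * history_frame

# A jagged edge (hard 0/1 step) softens as frames with sub-pixel jitter accumulate.
history = np.array([0.0, 0.0, 1.0, 1.0])
for jittered in (np.array([0.0, 0.5, 1.0, 1.0]),
                 np.array([0.0, 0.0, 0.5, 1.0])):
    history = temporal_resolve(jittered, history)
print(history)
```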
Finally, Jimenez demonstrated the ways he and his colleagues are using data-driven approaches to create more believable facial animations, explaining that strict hand animation is both incredibly time-consuming and, ultimately, insufficient for realistic character models. The demonstration at GDC involved a facial capture that used 30,000 points of information across the actor's face and head, approximately 30 times the number used in games now. The panelists' solution was to create heat maps of "stress points": areas that moved more than others while the actor spoke and emoted for the camera. Jimenez explained that this could be combined with proper linking of the internal "bones" in the model's face to result in a better overall performance.
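Jimenez didn't detail the math behind those heat maps, but one plausible way to score a "stress" value per capture point is to total how far each point travels over a performance, as in the hypothetical sketch below.

```python
import numpy as np

def stress_heat_map(tracked_points):
    """Given tracked capture points of shape (frames, points, 3), return one
    scalar per point measuring how much it moved over the performance.

    Summing frame-to-frame displacement is one plausible metric; the talk did
    not specify the exact measure behind its "stress point" heat maps.
    """
    deltas = np.diff(tracked_points, axis=0)          # per-frame motion vectors
    movement = np.linalg.norm(deltas, axis=2).sum(0)  # total path length per point
    return movement / movement.max()                  # normalize to 0..1 for a heat map

# Toy capture: point 0 barely moves, point 1 (say, a lip corner) moves a lot.
frames = np.array([[[0.00, 0.0, 0.0], [0.0, 0.0, 0.0]],
                   [[0.01, 0.0, 0.0], [0.5, 0.2, 0.0]],
                   [[0.02, 0.0, 0.0], [0.1, 0.6, 0.0]]])
print(stress_heat_map(frames))
```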