Real-time facial animation from high-resolution scans, driven by video performance capture and rendered in a reproducible, game-ready pipeline. This collaborative work incorporates expression blending for the face, extensions to photoreal eye and skin rendering [Jimenez et al., Real-Time Live! SIGGRAPH 2012], and real-time ambient shadows.
The actor was scanned in 30 high-resolution expressions using a Light Stage [Ghosh et al., SIGGRAPH Asia 2011], from which eight were chosen for real-time performance rendering. The actor's performance clips were captured at 30 fps under flat lighting using a multi-camera rig. Expression UVs were interactively corresponded to the neutral expression and retopologized to an artist mesh.
The offline animation solver builds a performance graph representing dense GPU optical flow between video frames and the eight expression scans. The graph is pruned by analyzing the correlation between the video and the expression scans over 12 facial regions. Dense optical flow and 3D triangulation are then computed, yielding per-frame, spatially varying blendshape weights that approximate the performance.
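The final step above amounts to combining the expression scans with per-vertex (rather than global) blend weights. A minimal sketch of that evaluation, assuming NumPy arrays and a hypothetical `blend_expressions` helper (the authors' actual solver is not shown in the abstract):

```python
import numpy as np

def blend_expressions(neutral, expressions, weights):
    """Blend expression shapes with spatially varying (per-vertex) weights.

    neutral:     (V, 3) neutral-pose vertex positions
    expressions: (E, V, 3) expression scan vertex positions
    weights:     (V, E) blend weight of each expression at each vertex
    """
    # Deltas of each expression relative to the neutral pose: (E, V, 3)
    deltas = expressions - neutral[None, :, :]
    # For each vertex v: neutral[v] + sum_e weights[v, e] * deltas[e, v]
    return neutral + np.einsum('ve,evk->vk', weights, deltas)
```

With a single column of weights set to 1, this reproduces the corresponding expression scan exactly; intermediate weights interpolate, and different facial regions can favor different expressions in the same frame.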
Mesh animation is transferred to standard bone animation on a game-ready 4k mesh using a bone-weight and transform solver, which optimizes the smooth skinning weights and the bone-animated transforms to maximize correspondence between the game mesh and the reference animated mesh. Surface stress values are used to blend albedo, specular, normal, and displacement maps from the high-resolution scans per vertex at run time. DX11 rendering includes subsurface scattering, translucency, eye refraction and caustics, physically based two-lobe specular reflection with microstructure, depth of field, antialiasing, and grain.
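One half of such a solver can be posed as a linear least-squares problem: with the skinning weights held fixed, linear blend skinning is linear in the bone transforms, so the per-bone affine transforms that best reproduce a reference frame have a closed-form fit. The sketch below illustrates that step only (the function name and the single-frame, weights-fixed setup are illustrative assumptions, not the authors' actual solver, which also optimizes the weights):

```python
import numpy as np

def solve_bone_transforms(rest, target, weights):
    """Least-squares fit of per-bone 3x4 affine transforms so that linear
    blend skinning of `rest` with fixed `weights` best matches `target`.

    rest:    (V, 3) rest-pose vertex positions of the game mesh
    target:  (V, 3) reference animated positions for one frame
    weights: (V, B) smooth skinning weights (rows sum to 1)
    """
    V, B = weights.shape
    rest_h = np.hstack([rest, np.ones((V, 1))])          # homogeneous (V, 4)
    # LBS: v' = sum_b w_vb * (M_b @ p_v) is linear in the entries of M_b,
    # so stack all B transforms into one design matrix of shape (V, B*4).
    A = (weights[:, :, None] * rest_h[:, None, :]).reshape(V, B * 4)
    M, *_ = np.linalg.lstsq(A, target, rcond=None)       # (B*4, 3)
    return M.reshape(B, 4, 3).transpose(0, 2, 1)         # (B, 3, 4)
```

In a full solver this fit would alternate with a weight-update step over all frames; here it shows why fixing one set of unknowns makes the other set cheap to solve.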
As this pipeline is new, several elements are still in progress. By delivery time it will include eyelashes, eyelid bulge, displacement shading, ambient transmittance, and several other dynamic effects.
USC Institute for Creative Technologies
Joe Alter, Inc.
Javier von der Pahlen