Real-Time Live!

Tuesday, 23 July, 5:30 PM - 7:00 PM
Session Chair:

Digital Ira: High-Resolution Facial Performance Playback

Real-time facial animation from high-resolution scans, driven by video performance capture and rendered in a reproducible, game-ready pipeline. This collaborative work incorporates expression blending for the face, extensions to photoreal eye and skin rendering [Jimenez et al., Real-Time Live! SIGGRAPH 2012], and real-time ambient shadows.

The actor was scanned in 30 high-resolution expressions using a Light Stage [Ghosh et al., SIGGRAPH Asia 2011], from which eight were chosen for real-time performance rendering. The actor's performance clips were captured at 30 fps under flat lighting conditions using a multi-camera rig. Expression UVs were interactively corresponded to the neutral expression, which was retopologized to an artist mesh.

The offline animation solver creates a performance graph representing dense GPU optical flow between the video frames and the eight expressions. The graph is pruned by analyzing the correlation between the video and the expression scans over 12 facial regions. Dense optical flow and 3D triangulation are then computed, yielding per-frame, spatially varying blendshape weights that approximate the performance.
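
As a rough, hypothetical illustration of the final fitting step, the sketch below solves per-region blendshape weights for one frame by least squares against target positions recovered from flow and triangulation; the array layouts and the clamping are assumptions, not the authors' actual solver.

import numpy as np

def solve_region_weights(neutral, expressions, target, region_vertex_ids):
    """Least-squares fit of blendshape weights over one facial region.

    neutral:           (V, 3) neutral-expression vertex positions
    expressions:       (E, V, 3) scanned expression vertex positions (E = 8 here)
    target:            (V, 3) per-frame positions recovered from optical flow
    region_vertex_ids: indices of the vertices belonging to this region
    """
    idx = np.asarray(region_vertex_ids)
    # Stack each expression's delta from neutral as one column of A.
    A = np.stack([(e - neutral)[idx].ravel() for e in expressions], axis=1)
    b = (target - neutral)[idx].ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(w, 0.0, 1.0)  # keep weights in a plausible blend range

# Solving one weight vector per region (12 in this pipeline) and interpolating
# them across the face yields spatially varying weights for the frame.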

Mesh animation is transferred to standard bone animation on a game-ready 4k mesh using a bone-weight and transform solver. This solver optimizes the smooth skinning weights and the bone-animated transforms to maximize the correspondence between the game mesh and the reference animated mesh. Surface stress values are used to blend albedo, specular, normal, and displacement maps from the high-resolution scans per vertex at run time. DX11 rendering includes subsurface scattering, translucency, eye refraction and caustics, physically based two-lobe specular reflection with microstructure, depth of field, antialiasing, and film grain.
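
As a rough illustration of the objective such a solver minimizes, the sketch below applies linear blend skinning with candidate weights and transforms and sums the squared error against the reference animated mesh; the array layouts and names are assumptions, not the production code.

import numpy as np

def skin_vertices(rest, weights, transforms):
    """Linear blend skinning.

    rest:       (V, 3) rest-pose vertex positions
    weights:    (V, B) smooth skinning weights (rows sum to 1)
    transforms: (B, 3, 4) per-bone affine transforms [R | t] for one frame
    """
    rest_h = np.concatenate([rest, np.ones((rest.shape[0], 1))], axis=1)  # (V, 4)
    per_bone = np.einsum('bij,vj->bvi', transforms, rest_h)               # (B, V, 3)
    return np.einsum('vb,bvi->vi', weights, per_bone)

def skinning_error(rest, weights, transform_frames, reference_frames):
    """Sum of squared distances between the skinned game mesh and the
    reference animated mesh over all frames."""
    return sum(np.sum((skin_vertices(rest, weights, T) - ref) ** 2)
               for T, ref in zip(transform_frames, reference_frames))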

Because this pipeline is new, many elements are still in progress. By show time, it will also include eyelashes, eyelid bulge, displacement shading, ambient transmittance, and several other dynamic effects.

Oleg Alexander
Graham Fyffe
Jay Busch
Xueming Yu
Ryosuke Ichikari
Paul Graham
Koki Nagano
Andrew Jones
Paul Debevec
USC Institute for Creative Technologies

Joe Alter
Joe Alter, Inc.

Jorge Jimenez
Etienne Danvoye
Bernardo Antoniazzi
Mike Eheler
Zbynek Kysela
Xian-Chun Wu
Javier von der Pahlen
Activision, Incorporated

Massive Destruction in Real Time

Currently, the most popular approach to fracturing objects in games is to pre-fracture models and replace a mesh with its fractured pieces at run time. This new approach instead uses pre-defined fracture patterns. A fracture pattern is a decomposition of a large, rectangular block into non-overlapping pieces and can be designed by an artist, created procedurally, or simulated. When an object is to be fractured at run time, the pattern is aligned with the impact location and used as a stencil to decompose the model into pieces. Fracture patterns give artists more control over the fracture process than more physically based approaches such as finite elements. The method is also much faster than a full simulation, yet less tedious than pre-fracturing game assets.
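
To make the stencil idea concrete, here is a deliberately simplified sketch that aligns a pre-defined pattern with the impact point and buckets the model's volumetric elements by nearest pattern cell; the actual method cuts the mesh against the cell boundaries, so the names and data layout here are illustrative only.

import numpy as np

def apply_fracture_pattern(element_centroids, pattern_cell_centers, impact_point):
    """Assign each mesh element to a pattern cell aligned at the impact.

    element_centroids:    (N, 3) centroids of the model's volumetric elements
    pattern_cell_centers: (C, 3) cell centers of the pre-defined pattern,
                          authored around the origin
    impact_point:         (3,) world-space impact location
    Returns a list of element-index arrays, one per resulting piece.
    """
    cells = pattern_cell_centers + impact_point            # align pattern to impact
    d = np.linalg.norm(element_centroids[:, None, :] - cells[None, :, :], axis=2)
    owner = np.argmin(d, axis=1)                           # nearest-cell stencil
    return [np.where(owner == c)[0] for c in range(len(cells))]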

This demo shows a scene of a Roman arena with one million vertices and half a million faces being destroyed by user-guided meteors. To add to the realism of the scene, dust is generated when the model fractures and simulated with a separate 3D fluid solver; it follows a flow field that is influenced by the motion of the fractured pieces. The rigid-body simulation, the fluid simulation, and the rendering all run in parallel on two NVIDIA GTX 690 GPUs at a constant rate of over 30 fps, even at high levels of destruction of the original arena.
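
A hedged sketch of the dust coupling described above: particles spawned at fracture sites are advected through the fluid's velocity field and dragged along by nearby fragments. The sampling interface and falloff are placeholders, not the demo's implementation.

import numpy as np

def advect_dust(dust_positions, sample_flow_velocity, fragment_positions,
                fragment_velocities, dt, influence_radius=1.0):
    """Move dust particles one time step.

    dust_positions:       (P, 3) particle positions
    sample_flow_velocity: callable (P, 3) -> (P, 3) sampling the fluid grid
    fragment_*:           (F, 3) rigid-piece centers and velocities
    """
    v = sample_flow_velocity(dust_positions)
    # Let nearby fragments drag the surrounding dust along.
    for fp, fv in zip(fragment_positions, fragment_velocities):
        d = np.linalg.norm(dust_positions - fp, axis=1)
        falloff = np.clip(1.0 - d / influence_radius, 0.0, 1.0)[:, None]
        v = v + falloff * fv
    return dust_positions + dt * v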

This fracture method is described in more detail in the corresponding SIGGRAPH 2013 Technical Paper: Real-Time Dynamic Fracture With Volumetric Approximate Convex Decompositions.

Matthias Müller-Fischer
NVIDIA Corporation

Nuttapong Chentanez
NVIDIA Corporation

Tae-Yong Kim
NVIDIA Corporation

Bryan Galdrikian
NVIDIA Corporation

Real-Time Crowd Direction With Creation: Horde

Creation: Horde features a powerful animation system that dynamically stitches sequences together to deliver plausible, natural motion for characters. Horde scales from one to thousands of characters in real time, making it applicable to previz and virtual-production scenarios as well as large-scale crowd work.

The semi-procedural locomotion system in Horde enables limbed creatures to navigate over complex terrain such as stairs or rough surfaces while being driven by source animation clips. This combination of keyframe animation and proceduralism allows characters to convey the intent of the artist while adapting the motion to their environment. Character skeletons are driven by customizable solvers that enable traditional animation to be layered with procedural motion such as secondary dynamics, cloth, and muscle simulation.
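
As a minimal illustration of layering procedural motion over keyframes, the sketch below lifts a keyframed foot trajectory onto sampled terrain height; the function and parameter names are hypothetical, and a real setup would feed the corrected target into the leg's IK solver.

def layered_foot_height(keyframed_foot_height, ground_height_at_foot,
                        clearance=0.02):
    """Blend authored animation with a procedural ground correction."""
    procedural_floor = ground_height_at_foot + clearance
    # Keep the artist's motion, but never let the foot sink below the terrain.
    return max(keyframed_foot_height, procedural_floor)

# A full solver would layer further procedural motion (secondary dynamics,
# cloth, muscle simulation) on top of this corrected pose.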

Horde takes full advantage of the performance capabilities of Fabric Engine's Creation Platform. Not only do these rich characters play back in real time, but you can also work with thousands of them. This real-time capability allows crowd artists to make many more changes and iterations than with traditional crowd systems. Horde front-loads creative decision making by enabling directors to see the effect of their change requests immediately. Through deep integration with Maya and Softimage, Horde provides high-level controls for fine direction of agents through nulls, enabling rapid setup of formations and pathing. Horde also provides controls for directing groups of agents through a time-based painting system. With a simple brush stroke, a director can send armies into battle, start a Mexican wave at a stadium, or manage background characters on an individual basis.
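
One way such a time-based painting system could ripple a command through a crowd, sketched with assumed data layouts: each agent's trigger time is derived from its position along the painted stroke.

import numpy as np

def wave_trigger_times(agent_positions, stroke_start, stroke_direction,
                       wave_speed=5.0):
    """agent_positions: (N, 3); stroke_*: (3,); returns (N,) start times."""
    d = np.asarray(stroke_direction, dtype=float)
    d = d / np.linalg.norm(d)
    offsets = (agent_positions - stroke_start) @ d   # distance along the stroke
    return np.clip(offsets, 0.0, None) / wave_speed  # farther agents react later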


Philip Taylor
Fabric Engine Inc.

Shadertoy: Live Coding for Reactive Shaders

At SIGGRAPH 2012, after Beautypi showed visuals on stage that reacted to music and to controllers the audience could interact with, many people expressed interest in building similar visuals. For SIGGRAPH 2013, Beautypi introduces Shadertoy, a web tool that allows developers all over the globe to push pixels from code to screen using WebGL. Developers create shaders in a live-coding environment that lets them see the final result as they code. The creations can react to various inputs, such as music, videos, time of day, or even a webcam.
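
For readers unfamiliar with the model, the sketch below imitates on the CPU what a Shadertoy fragment shader does: one function maps (pixel, time) to a color. On the site the same idea runs as a GLSL shader in WebGL; this Python stand-in is purely illustrative.

import numpy as np

def shade(width, height, t):
    """Return an (H, W, 3) image for time t, evaluated per pixel."""
    y, x = np.mgrid[0:height, 0:width]
    u = x / width
    v = y / height
    # Animated color gradient, analogous to a classic starter shader.
    r = 0.5 + 0.5 * np.cos(t + u * 6.2831)
    g = 0.5 + 0.5 * np.cos(t + v * 6.2831 + 2.0)
    b = 0.5 + 0.5 * np.cos(t + (u + v) * 6.2831 + 4.0)
    return np.stack([r, g, b], axis=-1)

frame = shade(320, 180, t=1.5)  # one frame; a render loop would vary t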

In addition to the coding tool, Shadertoy.com introduces a social platform for sharing feedback and promoting great work. The site opened on 18 February, and within a week it had received more than 150 pieces showing a great variety of rendering techniques, from post-processing effects, procedural raymarchers, and raytracers to demoscene pieces. It is a place for professionals and students alike to learn and teach about visuals, interaction, reactivity, procedural modeling, GPU internals, and shading.

This demonstration introduces the platform and the technology, then shows how to use it to create a shiny real-time piece live.

Inigo Quilez
Beautypi

Pol Jeremias
Beautypi

Slice:Drop - Collaborative Medical Imaging in the Browser

Traditionally, medical image data is visualized and processed with highly specialized software on dedicated workstations. Although such software is quite feature-rich, it is often OS-specific and overly complex, requiring a steep learning curve and a large time investment. Significantly, there is no concept of real-time image sharing or collaboration in this field. Clinical or research findings from visualizations are described completely out-of-band, and collaborators need to independently render and interact with the image data to discover similar findings.

Slice:Drop is a set of technologies built on open-source components that allows simple, intuitive rendering of a wide range of medical-image formats directly in the browser. Using existing and freely available middleware, it allows real-time sharing of linked image-session data among any number of browsers: interacting with a visualization on one device updates the identical visualization on every other linked browser, and any WebGL-capable browser can join and interact with the data in real time. The technology is built on XTK, the open-source, MIT-licensed JavaScript library for scientific visualization with WebGL, and is incorporated in the web site slicedrop.com. Sharing of image data is enabled using the Dropbox API.
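
A conceptual sketch of the linked-session idea (Slice:Drop itself is JavaScript built on XTK, so this Python is only illustrative): each interaction serializes a small view state that the other linked browsers receive and apply. The renderer interface shown is assumed.

import json

def encode_view_state(camera_position, camera_up, volume_window, volume_level):
    """Pack the parameters one client changed into a shareable message."""
    return json.dumps({
        "camera": {"position": camera_position, "up": camera_up},
        "volume": {"window": volume_window, "level": volume_level},
    })

def apply_view_state(message, renderer):
    """Apply a received state to a local renderer object (interface assumed)."""
    state = json.loads(message)
    renderer.set_camera(state["camera"]["position"], state["camera"]["up"])
    renderer.set_window_level(state["volume"]["window"], state["volume"]["level"])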

This Real-Time Live! demonstration shows real-time sharing and interaction with medical data in a single shared session among linked laptops, workstations, smartphones, and tablets. This conceptually simple technology can enable new classes of medical imaging, reduce the obstacles to sharing data, and result in simpler, easier collaboration on clinical and research findings.

Daniel Haehn
Boston Children's Hospital

Nicolas Rannou
Boston Children's Hospital

Rudolph Pienaar
Boston Children's Hospital

P. Ellen Grant
Boston Children's Hospital

Unreal Engine 4 Infiltrator Demonstration

First revealed at Game Developers Conference 2013, Epic Games’ Unreal Engine 4 Infiltrator technical demonstration runs completely in-engine, in real time. Infiltrator showcases Epic’s visual target for the next generation of video games, with high-end rendering features including physically based materials and lighting, full-scene HDR reflections, advanced GPU particle simulation, adaptive detail with artist-programmable tessellation and displacement, dynamically lit particles that emit and receive light, and thousands of dynamic lights with tiled deferred shading. In addition, Unreal Engine 4 supports IES profiles for incorporation of photometric lights and high-quality temporal anti-aliasing.
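
As background on one of the listed features, the sketch below shows the light-binning step typically behind tiled deferred shading: the screen is divided into tiles, and each tile keeps only the lights whose projected bounds touch it, so shading loops over a short per-tile list rather than thousands of lights. The screen-space light bounds are assumed inputs; this is a generic illustration, not Unreal Engine 4 code.

def build_tile_light_lists(screen_w, screen_h, light_rects, tile_size=16):
    """light_rects: list of (x_min, y_min, x_max, y_max) per light, in pixels.
    Returns a dict mapping (tile_x, tile_y) -> list of light indices."""
    tiles = {}
    for i, (x0, y0, x1, y1) in enumerate(light_rects):
        tx0, ty0 = max(0, int(x0) // tile_size), max(0, int(y0) // tile_size)
        tx1 = min((screen_w - 1) // tile_size, int(x1) // tile_size)
        ty1 = min((screen_h - 1) // tile_size, int(y1) // tile_size)
        for ty in range(ty0, ty1 + 1):
            for tx in range(tx0, tx1 + 1):
                tiles.setdefault((tx, ty), []).append(i)
    return tiles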

Dana Cowley
Epic Games, Inc.

Brian Karis
Epic Games, Inc.

Square

Explore the details of a real-time demoscene animation using a LEAP controller: one finger navigates the camera inside the running animation, while a second finger adjusts the parameters of the underlying mandelbox fractal. The project was made with custom software: Tooll2.
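
For context, a commonly used mandelbox distance estimator is sketched below; the scale term is the kind of parameter a second finger could adjust. This is the generic formulation, not Tooll2's implementation.

import numpy as np

def mandelbox_distance(p, scale=2.0, iterations=12,
                       min_radius2=0.25, fixed_radius2=1.0):
    """Approximate distance from point p (length-3 array) to the fractal."""
    c = np.array(p, dtype=float)
    z = c.copy()
    dr = 1.0
    for _ in range(iterations):
        z = np.clip(z, -1.0, 1.0) * 2.0 - z           # box fold
        r2 = float(np.dot(z, z))
        if r2 < min_radius2:                          # sphere fold
            factor = fixed_radius2 / min_radius2
        elif r2 < fixed_radius2:
            factor = fixed_radius2 / r2
        else:
            factor = 1.0
        z *= factor
        dr *= factor
        z = z * scale + c                             # scale and translate
        dr = dr * abs(scale) + 1.0
    return float(np.linalg.norm(z)) / abs(dr)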

Thomas Mann
Still

Daniel Szymanski
Framefield GmbH

Andreas Rose
Framefield GmbH

Wolf Budgenhagen
Still

Adding More Life to Your Characters With TressFX

Crystal Dynamics and AMD made headlines earlier this year with the first fully simulated and rendered head of hair in a video game completely driven by the GPU, which added a new dimension of life to Lara Croft in the re-imagined Tomb Raider. This presentation demonstrates how to achieve exceptional hair rendering with DirectX 11's compute pipeline and shows that it really is possible to breathe new life into our characters with something as simple as a well-styled head of hair.

The presentation summarizes content-creation aspects and the tools available for authoring hair. It includes a brief overview of the setup and simulation, followed by a look at shading options that help make the hair look more realistic (most of which are based on work presented at previous SIGGRAPH conferences), and concludes with a discussion of the various issues and optimization considerations, so attendees can understand some of the problems they are likely to face.
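
As a rough idea of the per-strand work involved, the sketch below integrates one strand with Verlet integration and edge-length constraints on the CPU; TressFX performs comparable per-strand simulation (with additional shape constraints) in DirectX 11 compute shaders, so this is illustrative only.

import numpy as np

def step_strand(pos, prev_pos, rest_lengths, dt, gravity=(0.0, -9.8, 0.0),
                constraint_iterations=4):
    """pos, prev_pos: (N, 3) current and previous vertex positions of a strand
    (vertex 0 is the root and stays pinned); rest_lengths: (N-1,) segment lengths."""
    g = np.asarray(gravity)
    new_pos = pos + (pos - prev_pos) + g * dt * dt       # Verlet integration
    new_pos[0] = pos[0]                                  # pin the root vertex
    for _ in range(constraint_iterations):               # enforce segment lengths
        for i in range(len(rest_lengths)):
            seg = new_pos[i + 1] - new_pos[i]
            length = np.linalg.norm(seg)
            if length < 1e-8:
                continue
            corr = (length - rest_lengths[i]) * seg / length
            if i == 0:
                new_pos[i + 1] -= corr                   # root does not move
            else:
                new_pos[i] += 0.5 * corr
                new_pos[i + 1] -= 0.5 * corr
    return new_pos, pos                                  # (current, previous) for next step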

Jason Lacroix
Square Enix Co., Ltd.

Spontaneous Fantasia

This demo pushes the envelope of visual and animation creation as a performing art with new physical and software interfaces. It creates and explores an improvised stereoscopic 3D world that takes the audience on an immersive visual-musical odyssey.

J-Walt Adamczyk
Spontaneous Fantasia