Integration with your pipeline
DDH avatars are based on a streamlined version of FACS, with a focus on reducing the amount of work needed to build and rig the model. Only 30 blend shapes are required, and they can be produced from a set of 20 scanned expressions, with the whole model coming in under 5,000 polys.
The ultralight facial rig consists of 30 blend shapes, with 5 simple clusters for refined movement. Our process does not need complex mouth bags or eye systems, vastly reducing the time needed to build and rig the character.
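The document doesn't specify how the 30 shapes are combined at run time; a minimal sketch of the standard linear blend-shape model (all names and the toy mesh are hypothetical) would look like this:

```python
import numpy as np

def evaluate_blendshapes(neutral, deltas, weights):
    """Standard linear blend-shape evaluation: the neutral mesh plus a
    weighted sum of per-shape vertex deltas (delta = target - neutral)."""
    result = neutral.copy()
    for delta, w in zip(deltas, weights):
        result += w * delta
    return result

# Toy mesh with 3 vertices; one "smile" shape applied at half strength.
neutral = np.zeros((3, 3))
smile = np.array([[0.0, 1.0, 0.0]] * 3)  # every vertex moves up 1 unit
posed = evaluate_blendshapes(neutral, [smile], [0.5])
```

With 30 shapes this is a single 30-weight vector per frame, which is why so few shapes keep the rig light.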
One camera performance capture
That's right: all you need is a single standard 2K camera (even a smartphone) to capture your subject's facial performance. Our patented technology does the rest. You can create the required videos in a few simple steps, and the DDH process will produce a video-as-texture locked to and synced with the 3D model's animations.
Videogrammetry has its place, and in many cases it's a speedy way to capture volumetric human performance. It also has its hindrances, such as baked-in lighting and larger file sizes. Our process combines the best of video-as-texture with the versatility of a rigged 3D asset. Because scene lighting is applied on top of our albedo map rather than baked in, the character can be placed into any dynamically lit environment. A rigged 3D asset also opens up adaptive procedural animation. The DDH process is extremely light, with file sizes coming in at around 5 MB per minute.
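Keeping a video texture "locked to and synced with" the rig comes down to mapping animation playback time onto a video frame index. A minimal sketch, assuming a fixed-rate clip (the function name and defaults are hypothetical, not DDH's API):

```python
def video_frame_for_anim_time(anim_time_s, video_fps=30.0, num_frames=900):
    """Map animation playback time (seconds) to the video-texture frame
    index, clamping at the last frame so the texture never runs past
    the end of the clip."""
    frame = int(anim_time_s * video_fps)
    return max(0, min(frame, num_frames - 1))
```

At 30 fps, one second of animation lands on frame 30; past the clip's end the index clamps to the final frame instead of wrapping.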
Physically based rendering
Our DDH process uses PBR shading and lighting models to accurately represent real-world materials, seamlessly combining animation and video. In run-time applications, our process requires only one draw call.
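The text only states that PBR is used, not which shading model; for illustration, here is the Trowbridge-Reitz (GGX) normal distribution term found at the core of most real-time PBR specular models (a generic reference formula, not DDH's shader):

```python
import math

def ggx_ndf(n_dot_h, roughness):
    """GGX (Trowbridge-Reitz) microfacet normal distribution function.
    Uses the common alpha = roughness^2 parameterisation, so a2 is
    roughness to the fourth power."""
    a2 = roughness ** 4
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)
```

At full roughness the distribution flattens out: `ggx_ndf(1.0, 1.0)` evaluates to 1/π, the fully diffuse limit of the microfacet lobe.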
The DDH avatar can be assembled in both Unreal and Unity, or rendered in any animation software. Our characters can run in high-detail cinematics with path tracing and advanced shaders, and an optimised workflow lets them run in real-time scenarios, including mobile 6DoF (e.g. Oculus Quest).