For indie studios looking to improve their facial animation, this method lets you take video capture of a facial performance and use it for three purposes:
1) The video performance of your subject’s face can be tracked for animation without facial tracking markers, which means you can generate an animation file from clean, marker-less footage using existing tools like Faceware.
2) That same clean video is repurposed in the DDH pipeline as a tracing layer, enabling fast and accurate clean-up of the automated facial animation and allowing a less experienced animator to achieve professional results.
3) The clean video is reprojected onto the geometry as a texture, completing the portrait and adding a layer of fine detail such as face wrinkles and the inner mouth. This adds all that fine micro-detail to your character without any extra animation or geometry: the crow’s feet, the furrowed brows, the mouth interior, and the tongue no longer need to be animated extra geometry, just video as a texture (see the sketch just after this list). This replicates the level of fine detail that AAA studio avatars achieve without the lengthy specialized labour, producing very lightweight, low-poly characters that can even run in URP on the Quest.
YES!
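To make the video-as-texture step concrete, here is a minimal sketch of wiring an image-sequence export of the clean capture into a face material in Maya with maya.cmds. The material name (`faceMat`) and the `take01` sequence are placeholders for this example, not part of the DDH tooling.

```python
# A minimal sketch, assuming Maya (maya.cmds), a face material named
# 'faceMat' (hypothetical), and the clean capture exported as an
# image sequence such as take01.0001.png.
import maya.cmds as cmds

# File texture that steps through the image sequence frame by frame.
tex = cmds.shadingNode('file', asTexture=True)
cmds.setAttr(tex + '.fileTextureName',
             'sourceimages/take01.0001.png', type='string')
cmds.setAttr(tex + '.useFrameExtension', True)  # advance with the timeline

# Drive the face material's colour with the footage, so wrinkles and
# the inner mouth come from video rather than animated geometry.
cmds.connectAttr(tex + '.outColor', 'faceMat.color', force=True)
```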
This process relies on a setup in Maya and Character Creator 3. These are complex applications that call for someone with 5+ years of experience, and for the DDH process we recommend a senior-level animator with FACS experience. With the right initial setup, our process allows for very fast and accurate animation clean-up that can be accomplished by a junior Maya artist (soon to be automated).
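As a rough illustration of the kind of clean-up pass that could be scripted (and eventually automated), the sketch below runs a simple moving average over blendshape animation curves in Maya to knock down tracker jitter. The `faceBlendShape` node name is hypothetical; real clean-up in the pipeline is animator-driven against the tracing layer.

```python
# A minimal sketch, assuming Maya (maya.cmds) and a blendshape node
# here called 'faceBlendShape' (hypothetical).
import maya.cmds as cmds

def smooth_channel(attr, window=2):
    """Moving-average an animation curve to reduce tracker jitter;
    `attr` is e.g. 'faceBlendShape.browInnerUp'."""
    values = cmds.keyframe(attr, query=True, valueChange=True) or []
    smoothed = []
    for i in range(len(values)):
        lo, hi = max(0, i - window), min(len(values), i + window + 1)
        smoothed.append(sum(values[lo:hi]) / (hi - lo))
    for i, v in enumerate(smoothed):
        cmds.keyframe(attr, edit=True, index=(i, i), valueChange=v)

# Run the pass over every weight channel of the blendshape node.
for alias in cmds.listAttr('faceBlendShape.weight', multi=True) or []:
    smooth_channel('faceBlendShape.' + alias)
```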
In an effort to standardize the process of producing simple, plug-and-play characters, we have chosen the CC3 pipeline to take advantage of a standard rig to build on. This lets studios that don’t have big budgets for custom rigs and blendshapes connect their facial expression blendshapes to a pre-built FACS rig. CC3 also has other handy plugin tools like grooming, clothing, and LOD optimisation for gaming applications.
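To give a flavour of what connecting tracked expressions to a pre-built FACS rig looks like in script form, here is a sketch that keys FACS Action Unit weights from a tracker onto CC3-style blendshape channels in Maya. The AU-to-channel table and the node name are illustrative only; check your own CC3 export for the real channel aliases.

```python
# A minimal sketch, assuming Maya (maya.cmds), a CC3 character whose
# facial blendshapes live on a node here called 'CC3_FaceShapes', and
# example channel names that may differ from your rig.
import maya.cmds as cmds

# Map FACS Action Units from the tracker to blendshape channels.
AU_TO_CC3 = {
    'AU01': 'Brow_Raise_Inner_L',  # inner brow raiser (name is an example)
    'AU04': 'Brow_Drop_L',         # brow lowerer
    'AU12': 'Mouth_Smile_L',       # lip corner puller
    'AU26': 'Jaw_Open',            # jaw drop
}

def apply_au_frame(frame, au_weights):
    """Key one tracked frame of AU weights (0..1) onto the rig."""
    for au, weight in au_weights.items():
        channel = AU_TO_CC3.get(au)
        if channel:
            cmds.setKeyframe('CC3_FaceShapes.' + channel,
                             time=frame, value=weight)
```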
No
Not yet!
The DDH avatar's performance is based on pre-recorded video capture. The character cannot be puppeteered manually; it gives a 'canned' performance. You can, however, combine performances so that the character responds dynamically to input and performs a series of predetermined actions.
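For illustration only, here is a minimal Python sketch of that idea: a tiny state machine that picks the next pre-recorded clip from the current clip and an input trigger. The clip names and triggers are invented for the example.

```python
# Each canned performance is a clip; triggers choose what plays next.
CLIPS = {
    'idle':       {'greet': 'wave_hello', 'question': 'answer_a'},
    'wave_hello': {'done': 'idle'},
    'answer_a':   {'done': 'idle'},
}

def next_clip(current, trigger):
    """Return the clip to play for `trigger`, or stay on the current one."""
    return CLIPS.get(current, {}).get(trigger, current)

state = 'idle'
for event in ['greet', 'done', 'question', 'done']:
    state = next_clip(state, event)
    print('play clip:', state)  # hand the clip to your playback system
```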

