The Dynamic Digital Human (DDH) matches the performance and image quality of expensive high-fidelity avatars at a fraction of the production time and cost. DDH’s patent-pending pipeline combines animated 3D models with live-action facial video performance. Our avatars show all the fine features that make them look real without the need for GPU-intensive, high-poly 3D models, or the large file sizes and lighting limitations of videogrammetry.
Few digital human services or automatic animation tools show their digital humans speaking, let alone singing. Even the most detailed digital human can break the illusion with poor lip sync. To us, speech is one of the most compelling forms of engagement and communication. Our digital human performs perfect lip sync with expression and emotion because the DDH pipeline produces a real, one-to-one human performance.
Lip-sync speech example.
The DDH avatar is built to be as efficient as possible. At only 5 MB per minute, it’s ready to run on 6DoF mobile processors such as the Quest, the HoloLens, or the phone in your hand. The DDH avatar is far less processor-heavy than a traditional AAA game-ready avatar and much higher fidelity than procedural or DIY animation plugins.
Runtime render in Unreal for virtual production
Runtime render in Unity URP for Meta Quest 2
Runtime render on iPhone 12 with ARKit
For downloadable demos, join our mailing list!