
DDH Image Processing

This process creates the facial projection for Maya to animate with. It also creates the textures that combine the projection and UV together to create the final UV atlas that is used in game engines.

For this section, you will need the DDH Image Processing Plugin.

In this section, you will learn about:

Analyzing video footage

You will be using the following programs:

Install DDH Image Processor Plugin

  1. Install Python 3.9.5 from https://www.python.org
     

  2. In your folder, find the file install_admin.bat. Right-click it and choose Run as administrator.
     

  3. The installer window will appear, showing the progress. If the installer fails, follow the instructions it suggests and try the installation again.
     

  4. When the installer succeeds, it will have installed several Maya scripts, a Photoshop plugin and action, and an After Effects plugin.
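The installer depends on the Python version installed in step 1. A quick way to confirm that the interpreter on your machine is recent enough before running install_admin.bat (a minimal sketch; the version-check helper is our own, not part of the installer):

```python
import sys

def python_version_ok(required=(3, 9)):
    """Return True if the running interpreter meets the required (major, minor) version."""
    return sys.version_info[:2] >= required

if __name__ == "__main__":
    version = sys.version.split()[0]
    if python_version_ok():
        print(f"Python {version} OK")
    else:
        print(f"Python {version} is too old - install Python 3.9.5")
```

Run this with the same `python` the installer will use; if it reports the version as too old, reinstall Python before retrying.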

Pipeline Production Outline

Retargeting Video Footage

This outline describes the optimal way to work with a production team to analyze, stabilize, retarget, and animate characters efficiently. This workflow lets you start from unstabilized videos and stabilize them during creation.

Faceware Analyzer (Standalone)

Adobe After Effects (AAE)

After Effects DDH Template instructions:

AAE Project Settings (Adjust for new users)

  1. Install the rd scripts (have the programming students place them in the Scripts folder)
     

          A) RD Scripts link

     

          B) This script bulk-changes composition lengths and sizes in After Effects
     

  2. Unzip the folder and place it in your After Effects 2021 installation folder, under Support Files > Scripts:



     

  3. Restart After Effects
     

  4. Select the imported video and establish the frame length
     

  5. Select the compositions you would like to edit in the “Project” panel
     

  6. Go to File > Scripts - the scripts should now be loaded in

          A) Select rd_CompSetter.jsx

          B) Uncheck everything except Duration

02_DDH Image Processing_ae.jpg
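Step 2 above (placing the unzipped scripts into the After Effects Scripts folder) can also be done with a short script. This is a sketch only - the install path is an assumption for a default Windows After Effects 2021 install, so adjust it for your machine and version:

```python
import shutil
from pathlib import Path

# Hypothetical default install path - adjust for your After Effects version and drive.
AE_SCRIPTS = Path(r"C:\Program Files\Adobe\Adobe After Effects 2021\Support Files\Scripts")

def install_scripts(unzipped_folder, scripts_dir=AE_SCRIPTS):
    """Copy every .jsx script from the unzipped rd scripts folder into the
    After Effects Scripts folder, returning the names that were copied."""
    src = Path(unzipped_folder)
    copied = []
    for jsx in sorted(src.glob("*.jsx")):
        shutil.copy2(jsx, Path(scripts_dir) / jsx.name)
        copied.append(jsx.name)
    return copied
```

Remember that After Effects must still be restarted (step 3) before the scripts appear under File > Scripts.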

1. Stabilize footage

1.1 Copy the stabilization track to the mouth cutout

1.2 Apply the mouth-cutout XML file of landmarks to the face using SIRTPlugins/ FaceTracker

02_DDH Image Processing Doc 1.3.jpg

1.1 Launch Adobe After Effects


1.2 Go to the menu bar. Click File > Import > File. Navigate to where the original camera footage is stored and click Import.
 

1.3 Drag the footage file into the Composition panel.

02_DDH Image Processing Doc 1.4.jpg

1.4 Go to the menu bar and click Window > Tracker. This opens the panel needed to track and stabilize the footage.

02_DDH Image Processing Doc 1.5.jpg

1.5 Select Stabilize Motion in the Tracker panel. The box that appears with a + target in the middle is the motion tracker, named Track Point 1. When selected and moved around, the tracker magnifies the area of the footage beneath it.

02_DDH Image Processing Doc 1.6.jpg

1.6 Drag Track Point 1 to the inner corner of an eye. Choose a point for it to rest on - this is the point that will remain consistent for stabilization. Drag the edit points of Track Point 1’s rectangles into tall, thin rectangles:

1.7 Click the Play button next to Analyze in the Tracker panel. As the video plays, Track Point 1 will move away from its original position (figure 1). Eliminate this movement by pressing the Stop button in the Tracker panel and dragging the point back to its original position (figure 2). Press Play and repeat this process with other points that fall out of place (figure 3).

02_DDH Image Processing Doc 1.71.jpg
02_DDH Image Processing Doc 1.72.jpg
02_DDH Image Processing Doc 1.73.jpg
02_DDH Image Processing Doc 1.8.jpg

1.8 Once the end of the video has been reached, click Apply in the Tracker panel. The Motion Tracker Apply Options menu appears. Select the option X and Y in the Apply Dimensions drop-down menu. Press OK.
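Conceptually, Stabilize Motion pins the tracked point in place by moving each frame opposite to the point's drift. This is not After Effects code, just a minimal sketch of the arithmetic the tracker applies:

```python
def stabilize_offsets(track):
    """Given per-frame (x, y) positions of the tracked point, return the
    per-frame translation that pins the point to its first-frame position."""
    x0, y0 = track[0]
    return [(x0 - x, y0 - y) for (x, y) in track]

# Example: the eye-corner point drifts right and down over three frames...
offsets = stabilize_offsets([(100, 50), (102, 53), (105, 51)])
# ...so the footage must shift left and up by the same amounts to compensate.
print(offsets)  # [(0, 0), (-2, -3), (-5, -1)]
```

This is why a badly tracked frame (step 1.7) shows up as a visible jump: a wrong tracked position produces a wrong compensating offset for that frame.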

1.9 Now that the footage has been stabilized by the motion tracker, play through the footage and take note of any large shifts and jumps in the footage that remain. These remaining shifts and jumps can be eliminated in two steps:
 

1.9.1 In the timeline: delete unnecessary anchor points between two anchor positions (figure 1) by highlighting them and pressing the Delete key. Determine which frames shift too much by pressing the Take Snapshot button in the viewport (figure 2) on a frame you would like to reference, then pressing the Show Snapshot button (figure 3) on a different frame to compare the position of your motion tracker. Delete unnecessary anchor points between two frames that are in the same - or a very similar - position.

02_DDH Image Processing Doc 1.9a.jpg
02_DDH Image Processing Doc 1.9d.jpg
02_DDH Image Processing Doc 1.9b.jpg
02_DDH Image Processing Doc 1.9c.jpg

1.9.2 In the timeline: to eliminate minor drifting between the remaining anchor points, add a position key (figure 1) on a frame you would like to reference and press the Take Snapshot button. On a different frame, use the Show Snapshot button and arrow keys on your keyboard to move the footage until it matches the referenced frame. Repeat as needed for the length of the footage.

1.10 Once this is complete, the footage is stabilized and ready to be used in the Image Processing Workflow. Go to the menu bar and click File > Save As and save the stabilized footage as [Footage Name]_Stable.

2. Copy Stabilize to Mouth

Using the After Effects DDH FaceTracker Plugin

2.1 Make sure the composition's duration, frame rate, and resolution match those of the Faceware XML file.
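That consistency check can be automated by reading the values out of the XML. The attribute layout below is a hypothetical example, not the actual Faceware schema - inspect your own XML file and adjust the element and attribute names to match:

```python
import xml.etree.ElementTree as ET

def read_clip_settings(xml_text):
    """Pull frame count, frame rate and resolution from a hypothetical
    Faceware-style XML root so they can be compared to the AE composition."""
    root = ET.fromstring(xml_text)
    return {
        "frames": int(root.get("frames")),
        "fps": float(root.get("fps")),
        "width": int(root.get("width")),
        "height": int(root.get("height")),
    }

# Hypothetical sample file contents:
sample = '<tracking frames="240" fps="24.0" width="1920" height="1080"/>'
print(read_clip_settings(sample))
```

If any of these values disagree with the composition settings, fix the composition before adding the FaceTracker effect; mismatches are the usual cause of masks landing in the wrong place.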
 

2.2 Select the layer and add the DDH plugin's FaceTracker effect.

2.3 Click the Browse button in the effect and select a Faceware XML file.

2.4 Go to the first frame in the sequence.

2.5 While the layer is selected:

2.5.1 Select a face feature.

2.5.2 Click the "Create Mask" button.

i) The plugin will create a smooth mask around the selected feature.
2.5.3 Select the corner vertices and convert them with the Convert Vertex tool.

2.5.4 Click the "Track Mask" button.

i) The plugin will create all the keyframes.
 Link to Mask Example

ii) If the keyframes don't match the composition size, select the mask path and scale all keys uniformly to the size of the composition (using Alt + left-click).

iii) Make sure Layer 3 is enabled; it fills in the missing screen information for the mouth mask layer.
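The uniform scale in step ii is just multiplying every mask vertex by a single factor. A minimal sketch of that operation (assuming the tracked size and the composition share the same aspect ratio):

```python
def scale_mask_path(points, src_size, comp_size):
    """Uniformly scale 2D mask vertices from the size they were tracked at
    to the composition size, using one factor for both axes."""
    factor = comp_size[0] / src_size[0]  # assumes matching aspect ratios
    return [(x * factor, y * factor) for (x, y) in points]

# Keys tracked at 960x540, scaled up to a 1920x1080 composition:
print(scale_mask_path([(100.0, 200.0), (480.0, 270.0)], (960, 540), (1920, 1080)))
# [(200.0, 400.0), (960.0, 540.0)]
```

Scaling non-uniformly (different factors per axis) would distort the mask shape, which is why the keys must be scaled uniformly.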

3. Mouth Mask

Creates an automated mask for rendering the Content Aware Fill

3.1 Using the UV_Snapshot (UV_FacePostion), adjust Layer 3 (copyStabilize to mouth) to match the head position

3.1.1 Line up the eyes and chin to get a rough size and placement.

02_DDH Image Processing Doc 3.1.jpg

3.1.2 Hide the UV position layer and expose the black-and-white mask for cropping the mouth.

02_DDH Image Processing Doc 3.2.jpg

4. Content Aware Fill

Creates an edge layer to correct texture errors from the head projection

4.1 Hide the Fill Layer (Layer 1)

 

4.2 Select Layer 2 and go to Window > Content Aware Fill

4.2.1 Fill Method: Object

      i) Lighting Correction: Moderate

4.2.2 Range: Work Area

4.2.3 Click Generate Fill Layer

4.3 Change the Fill Layer's (Layer 1) blending mode to Lighten

02_DDH Image Processing Doc 4.1.jpg
02_DDH Image Processing Doc 4.2.jpg

5. Render Head Projection

Create an animation reference layer for retargeting inside Maya. Blend the UV and animation together for seamless projection and reprojection on characters.

02_DDH Image Processing Doc 5.1.jpg

5.1 Paint out the eye and mouth details inside Photoshop (Layer 8)

02_DDH Image Processing Doc 5.2.jpg

5.2 Add detail to the image using a bump or detail extract in Photoshop

02_DDH Image Processing Doc 5.3.jpg

5.3 Layer 6: refine the mask to fit the character's profile

5.4 Adjust Shadow/Highlight and Lumetri Color to blend out the shadows and highlights

 

5.5 Adjust the Refine Hard Matte Shift Edge to an appropriate edge. Adjust the Decontamination (Extend Where Smoothed) settings to smooth the edges.

02_DDH Image Processing Doc 5.41.jpg
02_DDH Image Processing Doc 5.5.jpg

5.6 Layer 5: use the painted-out face layer to overlay and neutralize colour on the image

02_DDH Image Processing Doc 5.6.jpg

5.7 Layers 3 and 4: using the painted-out face, blend the edges with the Darker Color and Lighter Color blending modes
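Darker Color and Lighter Color pick, per pixel, whichever source is darker or lighter overall, rather than comparing channel by channel as Darken/Lighten do. A per-pixel sketch of the idea (here overall brightness is approximated by the channel sum; the exact comparison After Effects uses may differ):

```python
def darker_color(a, b):
    """Return whichever RGB pixel is darker overall (by channel sum),
    in the spirit of the Darker Color blend mode."""
    return a if sum(a) <= sum(b) else b

def lighter_color(a, b):
    """Return whichever RGB pixel is lighter overall (by channel sum)."""
    return a if sum(a) >= sum(b) else b

skin = (180, 140, 120)   # painted-out face pixel
edge = (90, 70, 60)      # dark projection-edge pixel
print(darker_color(skin, edge))   # (90, 70, 60)
print(lighter_color(skin, edge))  # (180, 140, 120)
```

Because whole pixels are chosen rather than mixed, these modes let the painted-out face replace edge artifacts without introducing new in-between colours.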

5.8 Layers 1 and 2: put the colour back into the eyes and lips using masks and the original layers on top of the footage

 

5.9 Render the head projection

5.9.1 Add to the render queue

5.9.2 Select Lossless and change the format to TIFF Sequence

5.9.3 Select the output and render to sourceimages/Takes/<TakeName>/<TakeName>_HeadProjection.[####].tif

5.9.4 Save in a subfolder named <TakeName>_HeadProjection

*Don't use underscores before the frame numbers of files; use dots, e.g. Droids_HeadProjection.0001.tif*
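The naming rule above (a dot before the zero-padded frame number, never an underscore) can be generated programmatically. A small sketch of the convention; the helper name is ours, not part of the pipeline tools:

```python
from pathlib import PurePosixPath

def frame_path(take, pass_name, frame, padding=4):
    """Build sourceimages/Takes/<TakeName>/<TakeName>_<Pass>.<frame>.tif with a
    dot-separated, zero-padded frame counter (no underscore before the number)."""
    name = f"{take}_{pass_name}.{frame:0{padding}d}.tif"
    return str(PurePosixPath("sourceimages/Takes") / take / name)

print(frame_path("Droids", "HeadProjection", 1))
# sourceimages/Takes/Droids/Droids_HeadProjection.0001.tif
```

Renderers and game engines typically detect image sequences by the `name.####.ext` pattern, which is why an underscore in place of the dot breaks sequence detection.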

6. Render Mouth Fill

Cleans up edge errors between the teeth and lips

6.1 Replace the Content Aware Fill with a new fill layer

6.2 Copy and paste Lumetri Color to the layer to match colour discrepancies

6.3 Copy and paste Refine Soft Matte to soften the edges

6.3.1 Adjust the parameters to suit the new texture

6.4 Adjust the Liquify warp to tighten the nostrils and nose inward

02_DDH Image Processing Doc 6.1.jpg

6.5 Render the mouth fill

6.5.1 Add to the render queue

6.5.2 Select Lossless and change the format to TIFF Sequence

6.5.3 Select the output and render to sourceimages/Takes/<TakeName>/<TakeName>_HeadMouthFill.[####].tif

6.5.4 Save in a subfolder named <TakeName>_HeadMouthFill

*Don't use underscores before the frame numbers of files; use dots, e.g. Droids_HeadMouthFill.0001.tif*

6.6 Go to the Maya Retarget Documentation, then come back and continue below
 

6.7 Import the image sequences

7. Spec Composite

Fixes the spec colour to occlude the head and exaggerate the mouth and eye specularity

