Last October we checked out a fascinating animation technique that transfers facial movements from one person to another in real time. Disney Research’s FaceDirector, on the other hand, blends two different takes of an actor’s face into a single customizable performance.
FaceDirector synchronizes two source performances based on facial cues – the eyebrows, eyes, nose and lips – and the actor’s dialogue. The resulting hybrid face can then be customized in terms of which of the two source performances is visible at any given time. For example, in the image above, the synthesized performance shows the actress switching multiple times between an angry and a happy expression, even though she actually recorded only one happy take and one angry take. The idea is to let filmmakers save resources by dialing in the desired performance in post-production instead of scheduling reshoots. As you’ll see in the video below, FaceDirector can also be used to overdub both the audio and the video of flubbed dialogue.
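For the curious, here’s a minimal sketch of the core idea in Python – not Disney’s actual algorithm, just a conceptual illustration. It assumes the two takes have already been temporally aligned (the hard part, which FaceDirector derives from the audio and facial cues mentioned above) and simply cross-fades them frame by frame under a control curve. The names happy_take, angry_take and blend_curve are hypothetical stand-ins.

```python
# Conceptual sketch only: NOT Disney's published FaceDirector pipeline.
# It illustrates the basic idea of blending two temporally aligned takes
# of the same line with a per-frame control weight.
import numpy as np

def blend_takes(take_a: np.ndarray, take_b: np.ndarray,
                weights: np.ndarray) -> np.ndarray:
    """Cross-fade two synchronized takes frame by frame.

    take_a, take_b: arrays of shape (frames, height, width, channels),
                    already aligned so frame i of each take shows the
                    same point in the dialogue.
    weights:        per-frame values in [0, 1]; 0 shows take_a only,
                    1 shows take_b only, values in between mix the two.
    """
    w = weights[:, None, None, None]  # broadcast weight over all pixels
    return (1.0 - w) * take_a + w * take_b

# Toy data standing in for two aligned 90-frame takes (hypothetical).
rng = np.random.default_rng(0)
happy_take = rng.random((90, 64, 64, 3))
angry_take = rng.random((90, 64, 64, 3))

# A control curve that switches from "happy" to "angry" mid-performance,
# ramping over ten frames so the transition is gradual, not a hard cut.
blend_curve = np.clip((np.arange(90) - 40) / 10.0, 0.0, 1.0)

hybrid = blend_takes(happy_take, angry_take, blend_curve)
print(hybrid.shape)  # (90, 64, 64, 3)
```

A whole-frame cross-fade like this is just a stand-in; the real system composites the takes far more carefully so that, for instance, the mouth can follow one take while the brow follows another.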
The researchers acknowledge that FaceDirector is far from perfect. For instance, it has trouble blending performances whose facial cues are drastically different, e.g. when one take has the actor’s lips closed while the other has them wide open. It’s also hampered by anything that covers the actor’s face, such as eyeglasses or even hair. You can download their paper from Disney Research’s website.
[via Reddit]