In a League of their Own

23 June 2021
Visual effects artists and animators at Scanline create facial performances for Zack Snyder’s Justice League

Zack Snyder’s Justice League, the director’s cut of the superhero film released in 2017, still has the DC Extended Universe’s Batman, Superman, Wonder Woman, Cyborg, Aquaman, and the Flash attempting to save the world from Steppenwolf and the Parademons. For the update, though, Snyder added new scenes and characters, including Martian Manhunter / Calvin Swanwick, played by actor Harry Lennix.

Warner Bros., which had bowed to fan pressure to support this director’s cut, released Zack Snyder’s Justice League on the HBO Max streaming service in March. The film achieved a 71 percent critics’ score on Rotten Tomatoes and a 94 percent audience score.

John 'D.J.' Des Jardin supervised the visual effects created by Double Negative, Rodeo FX, Weta Digital and Scanline VFX. They, along with other studios, had all worked on the 2017 film. 

For Scanline’s visual effects crew, Zack Snyder’s Justice League provided an opportunity to exercise the studio’s increasing emphasis on character animation. Scanline visual effects supervisor Julius Lechner led a team of approximately 400 artists, with creatures supervisor Jim Su heading character development and Clem Yip heading the character animation teams. Fifty artists at Scanline concentrated on Martian Manhunter and Steppenwolf, two CG characters that demanded sophisticated facial performances in a short timeframe during a global pandemic. To capture and retarget the actors’ performances onto the digital characters, the crew used three techniques, with software from Faceware Technologies supporting each variation across six sequences.

The pandemic was in full force during production, so the Scanline crew had limited options for capturing Harry Lennix’s performance. Lechner was working from home in Vancouver, Canada. Lennix was in New York. 

“We couldn’t use a helmet cam, so we decided to use two regular cameras aimed toward his head during an ADR session,” Lechner says. “We also had a camera for his body to help get his neck and shoulders correct when he turned his head. The idea was to read Harry’s face in Martian Manhunter, but make him look alien. Even though the faces are different, we wanted him to be recognizable.”

Once Scanline had the captured footage, lead performance TD Brendan Rogers trained Faceware’s Analyzer to track the performance by selecting a few frames within the shot. 

“We always let the software do an auto-track first and then pick a minimum of three frames with the most extreme differences,” Su says. “One might be a squash, another a jaw open, and another a stretch. Analyzer solves that in 10 or 20 seconds. Then if some parts need more detail, we add another training frame. We do that until we have the right amount. Just enough so the machine learning can understand. It’s not like traditional keyframing or tracking; it’s a nonlinear solve using machine learning.” 
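Su’s “most extreme differences” idea can be illustrated with a simple, hypothetical heuristic. This is not Faceware’s actual algorithm, and the feature vectors are invented; it is just a sketch of one way to pick the handful of frames whose facial poses differ the most, so the learner sees the widest range of expressions:

```python
# Hypothetical sketch: greedy selection of "most extreme" training frames.
# NOT Faceware's algorithm -- just an illustration of choosing frames whose
# facial-feature vectors are farthest apart (e.g. a squash, a jaw-open,
# a stretch), so machine learning sees the widest range of poses.

def pick_training_frames(features, count=3):
    """features: list of per-frame feature vectors (lists of floats).
    Returns indices of `count` frames chosen greedily for mutual distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    # Seed with the frame farthest from the sequence's average pose.
    avg = [sum(col) / len(features) for col in zip(*features)]
    chosen = [max(range(len(features)), key=lambda i: dist(features[i], avg))]

    while len(chosen) < count:
        # Add the frame farthest from everything chosen so far.
        best = max(
            (i for i in range(len(features)) if i not in chosen),
            key=lambda i: min(dist(features[i], features[j]) for j in chosen),
        )
        chosen.append(best)
    return sorted(chosen)
```

As in the workflow Su describes, more frames can be added iteratively wherever the solve still lacks detail.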

You can see the result on a digital model of Lennix’s face and head. Because they didn’t have a scan of Lennix, Ismael Alabado modeled both digital Lennix and Martian Manhunter, and created the facial blend shapes.

Once analyzed, tracked performances move from an actor’s head onto a character’s head. There, Faceware’s Retargeter applies the movement to controllers on an animation rig – in this case, a Maya rig.

“We usually have the same artist who managed the Analyzer also work with the Retargeter,” Su says. “Once the keys are baked down, it’s ready for an animator to add his or her own embellishments.”
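The “keys baked down” step can be sketched in plain Python. In production this would key controllers on a Maya rig; here the controller names and ranges are invented, and the rig is just a dictionary. The idea is the same: each normalized tracker channel is mapped onto its controller’s range, one key per frame, ready for an animator to embellish:

```python
# Hypothetical sketch of "baking keys": mapping normalized tracker values
# (0..1 per facial channel) onto a character rig's controller ranges, one
# key per frame. Controller names and ranges are invented for illustration;
# a real pipeline would set keyframes on a Maya rig instead.

CONTROLLER_RANGES = {
    "jaw_open":   (0.0, 35.0),   # e.g. degrees of jaw rotation
    "brow_raise": (0.0, 1.0),    # e.g. a blendshape weight
}

def bake_keys(tracked, ranges=CONTROLLER_RANGES):
    """tracked: {channel: [normalized value per frame]} -> baked keyframes."""
    baked = {}
    for channel, values in tracked.items():
        lo, hi = ranges[channel]
        # One key per frame, remapped into the controller's working range.
        baked[channel] = [lo + v * (hi - lo) for v in values]
    return baked
```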

Rogers estimates that using traditional keyframing, it typically takes two weeks to do a 250-frame shot. With Faceware, they could generate a first pass that gave them 70 percent of the final performance in a day. That freed up time for animators to refine the quality. 

For Martian Manhunter, though, rather than sending the retargeted performance to character animators, Rogers and his team of four Faceware artists created the final performance, handling the refinement as well as the analyzing and retargeting.

“We felt it streamlined the performance,” Su says.

Steppenwolf’s facial performance required a different process. The crew started with stereo footage captured with a DI4D system in 2015 for the earlier film. But ultimately all the Steppenwolf shots created at Scanline for this film were achieved with the help of Faceware technology.

“We ran the camera footage from 2017 through Analyzer and it worked,” Su says. “110 frames, 15 training frames. Because Faceware uses a 2D solve, it didn’t matter that we were working with stereo headcam footage.” 

Retargeter then moved the data onto the digital Steppenwolf’s Maya rig. On this project, the retargeted performances traveled on to character animators for refinement. Greg Coelho was the facial blendshape artist.

Steppenwolf also gave the Scanline artists an opportunity to try realtime facial animation using Faceware’s Studio software. For some pick-up footage of Steppenwolf, Scanline used Faceware’s Mark III wireless headcam system to capture in-house animators.

“We built a realtime asset in MotionBuilder and streamed facial and body motion into MotionBuilder,” Su says. “MotionBuilder streamed it back out to the monitor for our performer to see his facial performance on Steppenwolf in realtime. These were non-speaking shots – generic things like snarls. He could change his performance based on what he saw on the monitor, on the actual digital character.”
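The realtime loop Su describes is, in essence, a per-frame relay: facial values stream in, get applied to the character, and the posed result is echoed straight back to a monitor so the performer can adjust. The sketch below is a hypothetical, pure-Python illustration of that loop; all names are invented, and the real pipeline streamed through MotionBuilder:

```python
# Hypothetical sketch of the realtime feedback loop described above:
# facial values stream in, pose the character rig, and the result is sent
# back out so the performer sees it immediately and can adapt. All function
# names are invented; the real setup streamed through MotionBuilder.

def relay_frames(frame_source, apply_to_rig, send_to_monitor):
    """Pull frames from a stream, pose the rig, echo each pose to a monitor.
    Returns the number of frames relayed."""
    frames_sent = 0
    for facial_values in frame_source:
        pose = apply_to_rig(facial_values)   # retarget onto the character
        send_to_monitor(pose)                # performer sees the result live
        frames_sent += 1
    return frames_sent
```

The key design point is latency: because each frame is posed and displayed before the next arrives, the performer gets immediate feedback on the digital character rather than reviewing takes after the fact.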

During the last five or six years, Scanline, which built its reputation on environmental visual effects and simulations, has also been evolving into a character animation studio, as seen in its recent work on Godzilla vs. Kong and this latest rendition of Justice League. Both projects employed Faceware’s technology.

“We’ve done more and more character work each year,” Lechner says. “But this, along with Godzilla vs Kong, is the most extensive – the one with the most talking characters and facial performances. We had a short time frame, but the work is great. We didn’t have to cut any corners.”

If you want to learn to animate faces on production-quality software for FREE, click on the link below to get started.

START ANIMATING TODAY
