Interview: Hazelight Studios Talks Facial Animation on It Takes Two

30 November, 2021

We recently sat down with Sweden-based Hazelight Studios, a Faceware customer, to dive into the facial animation and mocap methods used on their latest cooperative multiplayer title, It Takes Two, which was published this year by Electronic Arts under the EA Originals label. The Hazelight team previously used Faceware's technology on A Way Out. What follows is the full version of that interview, along with some behind-the-scenes images from their development studio.

Emil

Can you tell us about It Takes Two from a production standpoint, e.g. how long did it take to create? How many people were working on the project? How many artists and animators?

After we had delivered A Way Out, which had our full attention until the very last minute, we started the project that would become It Takes Two. In fact, there was a very short overlap. The new project would keep us busy for 3 years of development. When it began, the studio had about 40 employees, and we peaked at around 75 during the project. The animation team grew at a similar rate, from 4 to 7, and throughout the entire project we had 2 technical animators. The environment art team grew from 5 to 8, and we had 3 character artists.


Did the art style of the game present any challenges with facial capture in It Takes Two?

It Takes Two mixes two different styles: most of the content takes place in the fantasy setting, and some in the more realistic real-world setting. In the fantasy world, we generally tried to exaggerate the movements of both body and face in a more theatrical way. Due to the sheer amount of content we needed to produce for the cutscenes, it became obvious that we had to make the style work with motion capture, since keyframing everything would take too much time. Finding the balance in style between the captured cutscenes and the mostly keyframed gameplay was a challenge. In the fantasy setting, the characters perform a lot of unrealistic moves that would be impossible even for the most experienced stunt actors, so we ended up mixing a fair amount of keyframe animation into the performance capture. This also meant we needed to mix multiple types of face animation within the scenes.


Most parts of a scene would have facial animation from the performance capture, and some parts – often the intros or outros – would have facial animation from either ADR (Automated Dialogue Replacement) recordings or from FaceFX, which we additionally used to solve face animation from audio alone when we could get away with it. It was a challenge to systematically keep track of which face type goes where, so that we ended up with the correct mix and maintained sync.
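
To make that bookkeeping concrete, here is a minimal sketch of how per-scene face-source tracking might look. The FaceSource names, frame ranges, and scene layout are illustrative assumptions on our part, not Hazelight's actual tooling:

```python
from enum import Enum, auto

class FaceSource(Enum):
    PERFORMANCE_CAPTURE = auto()  # face video from the headcam shoot
    ADR = auto()                  # face video from a follow-up ADR session
    FACEFX = auto()               # face animation solved from audio only

# Hypothetical per-scene map of (start_frame, end_frame, source) segments.
scene_face_sources = {
    "cutscene_example": [
        (0,    240,  FaceSource.FACEFX),               # intro solved from audio
        (240,  1800, FaceSource.PERFORMANCE_CAPTURE),  # main performance
        (1800, 1920, FaceSource.ADR),                  # re-recorded outro line
    ],
}

def face_source_at(scene: str, frame: int) -> FaceSource:
    """Look up which face-animation source covers a given frame."""
    for start, end, source in scene_face_sources[scene]:
        if start <= frame < end:
            return source
    raise ValueError(f"no face source covers frame {frame} in {scene}")

print(face_source_at("cutscene_example", 300))  # FaceSource.PERFORMANCE_CAPTURE
```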

It Takes Two is very different from A Way Out stylistically. Were there any challenges that you encountered during facial capture (that you didn’t run into previously)?

Although we made some updates to the tools to enhance the experience of working with face animation, the underlying technology did not change much. We animate a set of predefined poses based on FACS (the Facial Action Coding System) that run on joints, very similar to how you would work with blendshapes but more optimized. We could use this system for all characters in It Takes Two except the book character Hakim. Since Hakim's face is on the belly of the character, there's a lot of other deformation going on there, which would clash if we additionally had the face deforming on joints. We managed to solve this by running most of his face poses as blendshapes instead.
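
Conceptually, a joint-driven pose system like this blends weighted per-pose joint offsets onto a rest pose, much as blendshapes blend vertex deltas. The toy sketch below illustrates the idea; the joint names, poses, and offsets are our own invention, not Hazelight's rig:

```python
# Hypothetical rest positions for a few face joints (x, y, z).
rest_pose = {
    "jaw":          (0.0, -1.0, 0.0),
    "lip_corner_l": (0.6, -0.5, 0.2),
    "lip_corner_r": (-0.6, -0.5, 0.2),
}

# Hypothetical FACS-style poses stored as per-joint translation offsets.
poses = {
    "jawOpen": {"jaw": (0.0, -0.8, 0.0)},
    "smile":   {"lip_corner_l": (0.1, 0.3, 0.0),
                "lip_corner_r": (-0.1, 0.3, 0.0)},
}

def evaluate(weights: dict) -> dict:
    """Blend weighted pose offsets onto the rest pose, joint by joint,
    the same way blendshape deltas are summed onto a base mesh."""
    result = {joint: list(pos) for joint, pos in rest_pose.items()}
    for pose_name, weight in weights.items():
        for joint, offset in poses[pose_name].items():
            for axis in range(3):
                result[joint][axis] += weight * offset[axis]
    return {joint: tuple(pos) for joint, pos in result.items()}

print(evaluate({"jawOpen": 0.5, "smile": 1.0}))
```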


More granularly, what were you trying to achieve with your character animation and in particular, your facial animation? Did you meet your goals?

When creating a game with such a big focus on the narrative, it's very important for us to convey the emotions of the characters in their encounters, so that you as a player can create and develop a bond with them while the story unfolds. Facial animation obviously plays a big role in expressing that emotion, and with the joint effort of all departments on our team, I think we managed to achieve our goals.


Can you tell us about your facial animation pipeline, e.g. each phase of the process and which Faceware products and people you used?

Our aim is always to record as many actors as possible with performance capture, as long as the actor also performs the voice of the character. We record everything from a proprietary software application, which handles and assembles body, audio, and face capture, all linked together by timecode. The AJA Ki Pro face recorder, which comes bundled with Faceware's Mark III headcams, fits very well into that pipeline as it can be fully controlled by code.
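
Linking the three streams by timecode boils down to converting each stream's start timecode into frames and offsetting them to a common sync point. A minimal sketch, with an assumed frame rate and made-up take data:

```python
FPS = 30  # illustrative frame rate; the actual stage setup is not specified

def timecode_to_frames(tc: str, fps: int = FPS) -> int:
    """Convert an 'HH:MM:SS:FF' timecode string to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

# Hypothetical take: each capture stream is stamped with its start timecode.
take = {
    "body":  "14:05:10:00",
    "audio": "14:05:09:12",
    "face":  "14:05:11:06",
}

# Align every stream to the latest-starting one so they share a frame zero.
starts = {stream: timecode_to_frames(tc) for stream, tc in take.items()}
sync_point = max(starts.values())
offsets = {stream: sync_point - start for stream, start in starts.items()}
print(offsets)  # frames to trim from the head of each stream
```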

Based on the selected captures and their time ranges, we generate trimmed-down face videos and Faceware Analyzer work files using the Analyzer Python SDK. Although Analyzer does a great job with the initial track, it still requires some manual tracking work on our part.
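
The trimming half of that step can be sketched with standard tooling; the Analyzer call below is only a placeholder, since the interview doesn't show the actual SDK entry points, and the file names and take selects are hypothetical:

```python
import subprocess
from pathlib import Path

def trim_face_video(src: Path, start: str, end: str, dst: Path) -> None:
    """Cut a face video down to the selected range with ffmpeg.
    Stream copy (-c copy) is fast but cuts on keyframes only."""
    subprocess.run(
        ["ffmpeg", "-i", str(src), "-ss", start, "-to", end,
         "-c", "copy", str(dst)],
        check=True,
    )

def make_analyzer_work_file(video: Path) -> None:
    """Placeholder for the Faceware Analyzer Python SDK call that would
    create a work file for this video; the real API is not shown here."""
    raise NotImplementedError("wire up the Analyzer SDK here")

# Hypothetical selected takes: (source video, start time, end time).
selects = [
    (Path("raw/take_012_face.mov"), "00:01:04", "00:02:31"),
]

for src, start, end in selects:
    dst = src.with_name(src.stem + "_trimmed.mov")
    trim_face_video(src, start, end, dst)
    # make_analyzer_work_file(dst)  # enable once the SDK call is implemented
```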

Once the tracking is completed, all files go through a batch procedure in Faceware Retargeter (which also comes with a Python API) that outputs an individual .fbx file for each face video, containing only the face animation. We only use the Retargeter interface when defining the retargeting model for each actor-to-character relationship.
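
The batch step itself is plain file orchestration; the retarget call below is a placeholder for the Retargeter Python API, whose real signature isn't given in the interview, and the folder layout and work-file extension are assumptions:

```python
from pathlib import Path

def retarget_to_fbx(work_file: Path, character: str, out_fbx: Path) -> None:
    """Placeholder for the Faceware Retargeter Python API call that would
    solve a tracked work file onto a character rig; real API not shown."""
    raise NotImplementedError("wire up the Retargeter API here")

tracked_dir = Path("analyzer/tracked")  # hypothetical folder layout
out_dir = Path("retargeted")
out_dir.mkdir(parents=True, exist_ok=True)

for work_file in sorted(tracked_dir.glob("*.fwt")):  # extension assumed
    out_fbx = out_dir / (work_file.stem + "_face.fbx")
    retarget_to_fbx(work_file, character="cody_fantasy", out_fbx=out_fbx)
```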


The face animations are then injected into our MotionBuilder scenes, where any timing changes have been cached and are automatically compensated for to keep everything in sync. In MotionBuilder, we have an interface for polishing the facial animation, which we start using once the cameras have been locked down for the scenes, so we know exactly where to put our efforts.
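
One way to picture that compensation: the cached edits can be thought of as a mapping from original scene time to edited scene time, and every face keyframe is pushed through it. The edit representation below is our assumption, not Hazelight's actual cache format:

```python
# Hypothetical cached edit list: (orig_start, orig_end, new_start) spans
# recording how the scene was re-timed after the face was captured.
edits = [
    (0.0, 4.0, 0.0),  # first four seconds unchanged
    (4.0, 9.0, 6.5),  # a 2.5 s pause was inserted at t = 4 s
]

def remap_time(t: float) -> float:
    """Map an original keyframe time onto the edited scene timeline."""
    for orig_start, orig_end, new_start in edits:
        if orig_start <= t < orig_end:
            return new_start + (t - orig_start)
    raise ValueError(f"time {t} falls in a span that was cut")

face_keys = [0.5, 3.9, 4.2, 8.0]  # illustrative keyframe times (seconds)
print([round(remap_time(t), 2) for t in face_keys])  # [0.5, 3.9, 6.7, 10.5]
```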

When we record motion capture without the actor who performs the final voice of the character, or when scenes are keyframed instead, we make use of ADR. The voice actor then delivers the voice and face performance on top of the already existing body movements. After recording the face video, it goes through a process very similar to the one described above, but slightly more complex: ADR often ends up with selects from different recordings and needs more re-timing to match the existing performance. Maintaining the sync systematically was a challenge, but well worth it!


How did Faceware’s software and hardware help you achieve your goals?

From the very first days of Hazelight, I’ve always had the ambition to maximize automation instead of adding more people to handle problems or redundant tasks wherever possible. That’s one of my favourite things about Faceware: it comes with an API that enables us to do just that and to keep the entire process in-house.

Are there any anecdotal stories you can share with us that relate to the facial animation process and how Faceware’s software or hardware have helped?

Joseph Balderrama, who performed both the body and voice acting for one of the main characters, Cody, also had the perfect voice for the book character Hakim. Ideally, from a production point of view, we would have found a unique actor for Hakim, which would have enabled us to do performance capture for him as well, but that was not the case. This increased the complexity of these scenes with a follow-up ADR session. Capturing a book might sound strange in the first place, but it worked out surprisingly well! Being a very energetic character, Hakim was a challenge for ADR, but Joseph pulled it off great. With Faceware Retargeter, it’s possible to create multiple presets for how the face should retarget from an actor to any character, and by doing so, we could get his face to retarget well for fantasy Cody, real-world Cody, and Hakim.
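
That preset idea reduces to a lookup keyed by the actor-and-character pair, so one actor’s tracked performance can be solved onto several rigs. A toy illustration; the names and file paths are ours:

```python
# Toy table of actor-to-character retargeting presets; one tracked
# performance can be solved onto several rigs by switching presets.
retarget_presets = {
    ("Joseph Balderrama", "cody_fantasy"): "presets/jb_to_cody_fantasy.json",
    ("Joseph Balderrama", "cody_real"):    "presets/jb_to_cody_real.json",
    ("Joseph Balderrama", "hakim"):        "presets/jb_to_hakim.json",
}

def preset_for(actor: str, character: str) -> str:
    """Return the retargeting preset for an actor/character pairing."""
    return retarget_presets[(actor, character)]

print(preset_for("Joseph Balderrama", "hakim"))
```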


From A Way Out to It Takes Two, you used different Faceware hardware. Can you tell us about the changes and how they may have helped in the production process? In particular, how did the Mark III headcam make the facial animation process easier or better? Were there significant differences between using the Indie Cam and the Mark III?

In the development of A Way Out, we didn’t do any performance capture; instead, we relied heavily on ADR recordings afterwards. We captured ADR with a more indie-style approach, using GoPro cameras with attached SyncBac Pro modules to embed timecode in the videos. To maintain sync through edits, we had a pipeline similar to the one mentioned above. Since we didn’t use performance capture and only had one source of face capture, the pipeline could be a bit simpler.


For It Takes Two, we built our own internal motion capture studio, so upgrading our facial capture hardware was an obvious step. I got the chance to evaluate a couple of Faceware’s Mark III headcams and quickly felt that they could meet the criteria I had set up for the new stage.

The Mark III offers much better potential for integration into our pipeline, and a better fit and feel for the helmet itself. The real-time preview from the face recorder is far superior as well. I think there’s an advantage to Faceware’s approach of building the system from components that are largely standard in similar industries. During sequential days of recordings, it’s important for me to have strong confidence in both the hardware and the software. Although we never had hardware issues, I know that, except for the camera itself, most things that could break down can quickly be acquired and replaced. I think Faceware and the Mark III setup have found a good balance where I can trust the system and keep my focus on the performance going on live at the stage.

We want to thank Hazelight Studios for taking the time to answer our questions.
