Motion CaptVRe


By Jem Alexander

With recent improvements to hardware and software, plus an increasing emphasis on real-time, interactive performance capture, we could be on the precipice of a golden age for motion capture technology. Jem Alexander investigates recent progress in mo-cap technology.

Motion capture, and the way it is used in game development, is improving rapidly. No longer used solely by animators to record and store an actor’s performance, the technology is expanding into new areas. Those working with it on a daily basis are excited to see where this might lead.

Technology’s inevitable march forward means that motion capture can now occur in realtime, with an actor’s movements being instantly reflected in a game. Not only does this benefit animators by streamlining their process, but it also opens doors to other applications, like virtual reality. The HTC Vive, Oculus Rift and PlayStation VR all take advantage of motion capture technology to allow players to interact with virtual worlds. Not as advanced as that used in an animation studio, perhaps, but with time this can only improve.
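As a rough illustration of what "streaming an actor's movements into a game" involves at the lowest level, the sketch below packs and unpacks per-joint rotations as a binary frame, the kind of payload a capture rig might send to an engine every tick. The frame layout here is entirely made up for illustration; it is not any vendor's actual protocol.

```python
import struct

# Hypothetical wire format for one streamed mo-cap frame:
# joint count (uint16), then per joint a quaternion (4 little-endian float32s).

def pack_frame(rotations):
    """Serialise a list of per-joint quaternions (x, y, z, w) into bytes."""
    payload = struct.pack("<H", len(rotations))
    for quat in rotations:
        payload += struct.pack("<4f", *quat)
    return payload

def unpack_frame(payload):
    """Decode a frame back into a list of per-joint quaternions."""
    (count,) = struct.unpack_from("<H", payload, 0)
    offset = 2
    rotations = []
    for _ in range(count):
        rotations.append(struct.unpack_from("<4f", payload, offset))
        offset += 16  # four float32s per joint
    return rotations

# Two joints: an identity rotation and a ~90-degree yaw.
frame = pack_frame([(0.0, 0.0, 0.0, 1.0), (0.0, 0.7071, 0.0, 0.7071)])
joints = unpack_frame(frame)
```

In a real pipeline each frame would arrive over a socket at the capture rate and be retargeted onto the game character's skeleton before rendering.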

“The biggest advance in mo-cap, in my opinion, is linked to the VR push,” says Alexandre Pechev, CEO of motion capture middleware provider IKinema.

“With the advances in VR hardware, motion capture technologies have moved to our living rooms and offices. Mo-cap will inevitably become part of our everyday life.”

Motion capture studio Audiomotion’s managing director, Brian Mitchell, agrees. “The fact we can stream live data through to game engines has had a massive effect. Matched with VR, this means developers can really let loose with their creativity,” he says.

Only recently have the many individual advances in motion capture technology combined into a great leap forward for the industry.

“I think the biggest development this year has been the jump in combined technologies and approaches in realtime full performance capture,” says Derek Potter, head of product management at Vicon Motion Systems. “There’s been steady progress over the past five years in the area of realtime full performance capture. However, what we’ve seen previously is progress on single fronts.

“What we find really exciting about the past year is seeing different developments coming together. It feels like this is moving these types of captures from being investigative to being more fully realised and production ready.”

It’s not just the games industry that is enjoying these developments. Motion capture is the same wherever it is used and everyone is learning from one another. “Different industries use the same game engines for realtime rendering, the same mo-cap hardware and the same solving and retargeting technologies for producing the final product – animation in realtime,” says IKinema’s Pechev. “We are already at a point where the entertainment sector is sharing hardware, technologies, approaches and tools.”

Dr. Colin Urquhart, CEO of Dimensional Imaging, sees this as a good thing for everyone. Especially since some areas of the entertainment industry have been at it a little longer than us. “The use of helmet mounted camera systems for full performance capture was pioneered on movie projects such as Avatar and Planet of the Apes, but is now becoming widespread for use on video game projects,” he says. “People see how effective this technology is in movies and expect the same effect in a video game.”

Full performance capture like this, which simultaneously records body and facial movements, leads to much more realistic actions and expressions – something that can really affect your immersion in a world or your feelings for a character. Peter Busch, VP of business development at Faceware Technologies, a motion capture company that focuses on facial animation, says that “characters in today’s games are already pushing the realism envelope when it comes to facial performances, but that will only increase with time. Look for more realistic facial movement and in particular, eye movement, in the games of tomorrow.”

“It’s one thing to watch an animated character in a game,” Busch continues. “It’s quite another to interact with one. Today, we’re able to interact with characters animated in realtime via live performances or kiosks at theme parks. It’s rudimentary, but it’s effective.

“Tomorrow, we’ll be able to interact with player-driven characters or AI-driven avatars, in game, in realtime. Imagine saying something to a character in a game and having them respond to you as they would in real life. This will change the face of games.”

Virtual reality will be a huge beneficiary of these improvements to facial animation, as developers scramble their way out of the uncanny valley. Classic NPCs feel significantly more like dead-eyed mannequins within a VR environment and improvements in this area could go a long way towards truer immersion and deeper connections with characters and worlds.

“The key to an engaging experience, especially in a new medium such as VR, is connecting users with a powerful story and characters by drawing upon the emotional, character-driven, and nuanced performance from the faces of the actors,” says Busch. “[With today’s technology] studios are able to capture the entire facial performance, including micro expressions, and display those subtleties in convincing ways.”

Vicon’s Derek Potter is in agreement and suggests that improvements in motion capture could mitigate the effect of the uncanny valley, if not remove it completely. “Nothing immerses like the eyes,” he says. “The one thing that each new generation of console has provided is more power. More power means better, more realistic visuals. Motion capture provides the ability to transpose ‘truer’ movements onto characters.

“The mind is a brilliant machine and one of the things that it does amazingly well is to let us know when something ‘doesn’t look quite right’. This lessens the immersiveness of the gaming experience. What excites us about the next generation of consoles is the increasing ability to render realistic graphics, combined with the motion capture industry’s progress in capturing finer and more realistic movements to let us all sink a little deeper into the game. I think this is amazing, both as someone working in motion capture and as a gamer.”

As advances continue to be made to motion capture technology and software, we’re seeing a drop in price for mo-cap solutions. “I think it’s easier for developers to incorporate motion capture into their projects than ever,” says Dimensional Imaging’s Urquhart.

“Mo-cap used to be a tool used exclusively by big studios or developers on big budget projects. This is not the case anymore. There are many tools on the market for capturing an individual’s face and body performance. Mo-cap tech doesn’t have to be a premium product.”

This accessibility helpfully comes at a time when even small indie developers might be looking into motion capture solutions, thanks to the recent release of consumer virtual reality devices. “VR controllers act as a realtime mo-cap source of data,” says IKinema’s Pechev. “Using this to see your virtual body reacting to your real movements changes completely the quality of immersion.”

“Using mo-cap to see your own avatar allows you to actually feel the environment as well as see it,” says Audiomotion’s Mitchell. “It really does turn your stomach when you see your own foot moving closer to the cliff edge.”
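Treating VR controllers as a mo-cap source means inferring a whole limb's pose from a single tracked end-point, which is an inverse kinematics problem. Production solvers like IKinema's are far more sophisticated, but the core idea can be sketched with a planar two-bone solver based on the law of cosines. This is a simplified illustration, not any middleware's actual algorithm.

```python
import math

def two_bone_ik(target_x, target_y, upper_len, lower_len):
    """Solve a planar two-bone chain (e.g. shoulder-elbow-wrist) so the end
    effector reaches (target_x, target_y), measured from the chain root.
    Returns (shoulder, elbow) angles in radians."""
    dist = math.hypot(target_x, target_y)
    # Clamp to the reachable range so the solver never fails outright.
    dist = max(abs(upper_len - lower_len), min(dist, upper_len + lower_len))
    # Law of cosines gives the interior elbow angle (pi = arm fully straight).
    cos_elbow = (upper_len**2 + lower_len**2 - dist**2) / (2 * upper_len * lower_len)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    # The shoulder angle is the direction to the target, corrected by the
    # triangle's angle at the shoulder corner.
    cos_shoulder = (upper_len**2 + dist**2 - lower_len**2) / (2 * upper_len * dist)
    shoulder = math.atan2(target_y, target_x) - math.acos(max(-1.0, min(1.0, cos_shoulder)))
    return shoulder, elbow
```

Fed with a controller's tracked position every frame, even this toy solver will keep a virtual elbow and wrist plausibly placed, which is what sells the "seeing your own body" effect Mitchell describes.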

With motion capture now more powerful, accessible and interactive than ever, where does the industry go from here?

“The demand for mo-cap is definitely going to increase over the next few years and games projects will require better and better acting talent and direction to meet this demand,” says Dimensional Imaging’s Urquhart.

“Improving on the realism means increasing the fidelity of motion capture, particularly of facial motion capture. Using techniques such as surface capture instead of marker based or facial-feature based approaches can help with this.”

Faceware’s Peter Busch has plans in mind to solve the issues inherent in facial motion capture, particularly for VR. “The next frontier is creating even more realistic and immersive experiences across the board, including new experiences like realtime user-driven VR characters that are realistic in every aspect, right down to the user’s facial expressions,” he says.

“Current cameras are insufficient for capturing true social VR. There is no way to capture a VR user’s entire face while playing VR. Our interactive team is actively developing hardware and software solutions that we’re planning to launch this coming year.”

Vicon’s Derek Potter believes there’s still plenty of work for providers to do to make mo-cap the best it can be. “I think there are three ways that providers can help in continuing to push motion and performance capture forward,” he says. “[Companies] like Vicon need to continue to do what we’ve been doing steadily each year, which is to continue to improve the fundamentals of the motion data provided. The accuracy, reliability, overall quality and speed of the data are all vital.

“Secondly, as motion capture matures as a technology, we need to see it not only as its own entity but also as part of a bigger machine. We need to continue our efforts to integrate and provide more open access to the data provided by the system. The third thing we need to do is to listen to some bright lights in the field who have been pushing techniques forward recently. The technology needs to be pliable enough to fit the new and evolving needs that performance capture demands.”

Meanwhile, IKinema’s Alexandre Pechev won’t be happy until motion capture technology is perfected. “[I want] better and simpler (ideally non-invasive) types of motion capture combined with realtime solvers that automatically improve issues and deliver perfect animation,” he says, ending with some blue sky thinking that in the past would have been a better fit for a sci-fi novel than a game development magazine.

“My dream is to see a mo-cap system that reads brainwaves to deliver the muscle movements of all bones. Maybe one day…”

Faceware Roundtable Discussion: Visuals of the Future


By Matthew Jarvis

We’re fewer than three years into the latest console generation, but already another major graphical revolution is afoot as VR, mobile advancements and superpowered versions of the PS4 and Xbox One push fidelity forwards. Matthew Jarvis asks Epic, Unity, Crytek, Faceware and Geomerics what’s next?

Alongside Apple’s product range and the Sugababes line-up, computer graphics are one of the quickest-changing elements in modern culture. Games from even a few years ago can appear instantly dated after a new shader or lighting model is introduced, making it crucial for devs to be at the forefront of today’s visual technology.

Yet, as with all of development, taking full advantage of console, PC and mobile hardware comes at a cost – a price that continues to rise with the growing fidelity of in-game environments and latest methods being used to achieve them.

“Triple-A games with fantastic graphics are becoming more and more complex to develop and require increasingly larger and larger teams and longer development time,” observes Niklas Smedberg, technical director for platform partnerships at Unreal Engine firm Epic Games. “This means that it is more and more important to work efficiently and smartly in order to avoid having your team size or budget explode.

“Technologies that have stood out from the rest include various new anti-aliasing techniques like temporal AA, which has made a huge impact on visual quality.”

Unity’s Veselin Efremov, who directed the engine firm’s Adam tech demo, believes “the latest push is around adding layers of depth and evocative power to environments and virtual worlds”.

“There are things such as screen space reflections, and real-time global illumination,” he says. “It can all be done with a relatively low memory footprint.

“Meanwhile, physically-based rendering is at its core a new approach to the lighting pipeline, simulating the natural way light and real-world materials interact. PBR shines when you’re pursuing realism.”

It is important to work efficiently to avoid having team size or budget explode.

Supporting the latest methods of mastering specific optical elements are new underlying APIs that are propelling the entire sector forward.

“The expanding performance budget of each platform means that every generation developers can build larger worlds with more geometry, richer textures and more advanced effects,” says Chris Porthouse, GM of lighting specialist Geomerics. “The advent of thin APIs such as Metal and Vulkan is giving developers access to finer control of the performance potential of these devices.”

Although the inflating scale and ambitions of titles have led to a need for greater investment in graphical technology, Dario Sancho-Pradel, lead programmer at CryEngine outlet Crytek, highlights the advancements in making tools more accessible and easy to deploy.

“Recent technologies are helping character and environment artists to produce more complex and interesting assets in a shorter amount of time,” he says. “Some examples that come to mind are procedural material generation, multi-channel texturing and photogrammetry.”

Peter Busch, VP of business development at Faceware, concurs that “content creation tools are getting easier to use, less expensive, and now support multiple languages”.

“That means that more people will have access to the hardware and software needed to create good facial performances,” he says. “That, in turn, means facial animation, whether pre-rendered or rendered in real time, will appear in more indie games.”



For decades, it was a widespread belief that indie studios couldn’t hold a candle to the graphical might of triple-A powerhouses. The last few years have destroyed this myth, as the increasing power and falling price of development software have closed the gap, allowing devs including The Chinese Room and Frictional Games to produce outstandingly good-looking titles such as Everybody’s Gone To The Rapture and Soma.

“Development tools have advanced tremendously in the past several years,” says Efremov. “There’s really very little stopping a creator from bringing their vision to market; it takes less time and fewer resources than ever before.

“What really distinguishes big triple-A productions is content and an emphasis on realistic graphics. There are ways smaller studios can replicate triple-A quality, such as PBR-scanned texture set libraries, terrain creation tools, automated texturing tools and markerless motion capture solutions.”

Smedberg offers a reminder that a strong style can be as effective as pure graphical horsepower.

“Use something you feel comfortable with and that lets you be most productive,” he advises. “Choose an art style that fits you best and where you can excel. All developers should take the time to use graphics debuggers and profilers. The more you can learn about the tools and your rendering, the better your game will look.”

Porthouse agrees that “workflow efficiency is everything to smaller studios”.

“Any tool that can decrease the time it takes to perform a particular process saves money and leaves developers free to improve other areas of the game,” he explains.

Sancho-Pradel says: “Small studios need to be particularly pragmatic. If they use a third-party engine, they should be aware of the strengths and pitfalls and design their game around them. Using techniques such as photogrammetry, procedural material generation and multi-channel texturing is affordable even in low-budget productions and can significantly increase the quality of the assets if the art direction of the game is aligned with such techniques.”

Each year the challenge of delivering visibly better graphics increases and workflow efficiency is the key to success.


As thread upon thread of ‘PC Master Race’ discussions attest, the trusty desktop has long been considered the flagbearer for bleeding-edge graphics, with the static hardware of consoles acting as a line in the sand beyond which visuals can only toe so far. This could all be set to change, as Xbox’s newly announced Project Scorpio and the PS4 Neo look primed to blur the lines between console generations, in line with the continually evolving nature of PC.

“As hardware manufacturers produce new platforms, development teams will happily eat up the extra performance budget to generate higher fidelity visuals,” Porthouse predicts. “Yet step changes in hardware do not come annually and for every release a franchise’s studio needs to demonstrate graphical improvements even when new hardware is not available.”

John Elliot, technical director of Unity’s Spotlight team, suggests: “For large studios, there is a great opportunity to stand out by pushing the new hardware to the maximum. For smaller studios, more power provides opportunities in different areas. Many will find creative ways to drive interesting and new visuals.”

Among the headline features of the Xbox Scorpio are high dynamic range and the ability to output at 4K resolutions. With many titles yet to achieve 60 frames per second performance at 1080p on console, will these new display options even be of interest to devs?

“This was the only option they had to move forward,” Smedberg proposes. “HDR requires new display hardware and 4K requires more performance. While more performance is always better, I am concerned about fragmentation – that consoles will end up like PCs, with tons of different performance and feature levels that developers have to figure out how to make our games play nice with.”

Elliot backs the introduction of HDR and 4K as “interesting features that absolutely should matter to devs”.

“While the jump to 4K is much less than that from SD to HD TV, it is still something that gamers will soon come to expect,” he forecasts. “You just have to look at how quickly 1080p became the standard for TV to know that there will be huge demand for 4K content.

“HDR really does provide an amazing boost in visual quality, but it is much harder to visualise what this means until you have actually seen it. I’m sure it will quickly become a standard requirement and, as with any new hardware, there is a great opportunity for the first games to support it to get noticed in the marketplace.”


As well as HDR and 4K, a major turning point with Scorpio and Neo is improved performance for VR hardware. With good reason: the nascent medium is reliant on flawless performance. It’s a challenge that remains so even with the almost limitless potential of PC, as devs work to balance realistic visuals with hitch-free delivery.

“Photorealism is not essential for VR, but if one of your goals is to immerse the player in a highly realistic world, then the game will have to use a highly optimised rendering pipeline in order to hit the framerate required by the VR platform while rendering in stereo high-detailed models, thousands of draw calls and complex shaders,” observes Sancho-Pradel. “DirectX12 and Vulkan are designed to give engineers much more control in terms of memory, command submission and multi-GPU support, which can translate into more optimal rendering pipelines.”

Busch echoes that absorbing players into a VR experience is vital for devs – and visuals play a major role in doing so. 

“Without believable, immersive experiences, games will face an uphill battle in VR,” he cautions. “In a fully immersive environment people are ‘in’ the content – which only means that the details, framerate and level of engagement are a daunting task. Mix that with a quagmire of VR hardware, and it is a difficult landscape to develop in – to say the least.”

Smedberg drives home the point that performance is virtual reality’s unavoidable challenge, and offers technical advice to devs looking to perfect their game’s smoothness.

“The performance requirement for VR is critically important,” he confirms. “If you miss your target framerate here and there in a regular game, you might have a few unhappy customers. If you miss your target framerate in a VR game, your customers may not just stop playing your game – they may stop playing VR altogether.

“With VR you have to design your game graphics with a lower visual fidelity in mind, because you have twice as many pixels to fill and need to have it all rendered in less than 10ms – instead of 16 or 33ms. Many tried and proven rendering techniques won’t work, like sprite particles, because in VR you can see that they’re just flat squares.

“Better technology can help; faster and more efficient rendering APIs could help your game run much faster on the CPU – you can make more drawcalls and spend more time on AI or physics. GPUs could take advantage of similarities in the shader between the two eye views and run faster on the GPU.”
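Smedberg's arithmetic can be made concrete: stereo rendering roughly doubles the pixel fill per frame while the higher refresh rate shrinks the time budget. The sketch below works through those numbers; the 1080x1200-per-eye panel figure matches the launch-era headsets, but treat all the constants as illustrative.

```python
def frame_budget_ms(refresh_hz):
    """Time available to render one frame at a given display refresh rate."""
    return 1000.0 / refresh_hz

def stereo_pixels_per_second(eye_width, eye_height, refresh_hz):
    """Pixel fill demand when rendering one view per eye."""
    return eye_width * eye_height * 2 * refresh_hz

# Conventional console targets versus a typical 90Hz VR headset.
console_30 = frame_budget_ms(30)   # ~33.3 ms per frame
console_60 = frame_budget_ms(60)   # ~16.7 ms per frame
vr_90 = frame_budget_ms(90)        # ~11.1 ms, before compositor overhead

# Fill demand: stereo 1080x1200-per-eye at 90Hz vs mono 1080p at 60Hz.
vr_fill = stereo_pixels_per_second(1080, 1200, 90)
flat_fill = 1920 * 1080 * 60 * 1  # single view
```

The raw refresh-rate budget at 90Hz is about 11ms; once the VR runtime's compositing and time-warp overhead is subtracted, the practical budget drops below 10ms, which is the figure Smedberg cites.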

There is a great opportunity for the first games to support HDR to get noticed in the marketplace.


With a new semi-generation of consoles around the corner, virtual reality continuing to redefine long-established development philosophy and technology always set to take the next major graphical step, what can developers expect to rock the visual world?

“Lighting continues to be one of the most effective, emotive and evocative tools in an artist’s bench,” suggests Efremov. “Real-time global illumination, physically-based materials, lights, and cameras/lenses will all carry us much closer to worlds and environments that truly replicate our own.

“Machine learning is such an exciting new area of research, already put to practical use in so many other industries. The possibilities in the future are countless; simulations for populating and simulating game worlds based on photographic reference, expanding animation performances, creating 3D assets, altering the visual style of a game, new levels of AI and so on.”

Porthouse echoes Efremov’s sentiment that lighting enhancements will be one of the key drivers helping to refine visual fidelity.

“As teams perfect the graphics of their increasingly large world games, lighting and weather effects will play an important part,” he says. “In two to three years’ time all exterior environments will be lit with a real-time day/night cycle with believable and consistent bounced lighting, and gamers will be able to seamlessly transition between indoor and outdoor scenes. Each year the challenge of delivering visibly better graphics increases and workflow efficiency is the key to success.”

Smedberg highlights hardware evolution as a strong foundation for devs to expand their graphical ambitions.

“We may see PC graphics hardware that can run at double speed, using 16-bit floating point instead of 32-bit floating point,” he forecasts. “This doubling of performance comes in addition to any other performance increases, like more cores and higher clock frequencies.

“On the rendering technology side, I’m looking forward to using some forms of ray-tracing – not traditional full-scene ray-tracing, but as a partial tool for some specific rendering features.”
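The 16-bit floating point format Smedberg mentions halves storage and bandwidth per value, which is one half of why GPUs can run it at double rate; the trade-off is precision, roughly three decimal digits instead of seven. Python's standard `struct` module supports IEEE 754 half precision (format code `"e"`), so the trade can be demonstrated without any GPU at all:

```python
import struct

def to_half_and_back(x):
    """Round-trip a Python float through IEEE 754 half precision (2 bytes)."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

# Half precision stores each value in 2 bytes instead of float32's 4,
# but quantises aggressively: pi survives only to about 3 decimal digits.
pi_half = to_half_and_back(3.14159)
error = abs(pi_half - 3.14159)
```

For graphics workloads like colour and normal data, that quantisation is usually invisible, which is why it is an attractive place to spend the format's doubled throughput.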

Smedberg concludes by reiterating that while pure power is sure to drive visual punch onwards, its benefits are voided without strong design to match.

“Graphics technology is so capable and powerful today that developers have an incredible freedom to choose whatever art style they like,” he enthuses. “Creators can and should choose a style that they feel passionate about and that fits their gameplay and budget goals.”