Faceware Co-Sponsors We Are Code: VR Hackathon


We’re proud to be a sponsor of the We Are Code: VR Hackathon, together with companies like Oculus, this Labor Day weekend!

Event Details

Register now for the Dhat Stone Academy/Priority We Are Code VR Hackathon! We are enlisting students in Compton to compete with students in Oakland. They will engage in a computer coding competition to create the best mobile app, virtual reality game, and website ideas that solve a social or economic problem in their communities.

Film actor, dancer and singer Ben Vereen will be on-site in Compton, California on both competition days to entertain and give an inspirational talk.

There will be virtual reality technology workshops by NanoVR, and animation development workshops from the famous Grammy Award-winning animator Leo Sullivan, producer of Fat Albert and the Cosby Kids.

At the We Are Code Virtual Reality Hackathon, students will also experience Exhibition Virtual Space, a futuristic glass structure with waterfalls, grassy hills, and lakes. Compton and Oakland students will communicate and interact through a fully immersive virtual reality experience.

We are thrilled to announce that Oculus will be sponsoring the prizes for the first-place winners with VR gear and other cool stuff!

Winners will receive cash prizes, T-shirts, Google Cardboard VR viewers, gadgets, free coding classes, and a chance to turn their winning app idea into a business!

Join us!

 

WHERE
NASA Columbia Memorial Space Center – 12400 Columbia Way, Downey, CA 90242

Faceware Roundtable Discussion: Visuals of the Future


By Matthew Jarvis

We’re fewer than three years into the latest console generation, but already another major graphical revolution is afoot as VR, mobile advancements and superpowered versions of the PS4 and Xbox One push fidelity forwards. Matthew Jarvis asks Epic, Unity, Crytek, Faceware and Geomerics what’s next?

Alongside Apple’s product range and the Sugababes line-up, computer graphics are one of the quickest-changing elements in modern culture. Games from even a few years ago can appear instantly dated after a new shader or lighting model is introduced, making it crucial for devs to be at the forefront of today’s visual technology.

Yet, as with all of development, taking full advantage of console, PC and mobile hardware comes at a cost – a price that continues to rise with the growing fidelity of in-game environments and latest methods being used to achieve them.

“Triple-A games with fantastic graphics are becoming more and more complex to develop and require increasingly larger and larger teams and longer development time,” observes Niklas Smedberg, technical director for platform partnerships at Unreal Engine firm Epic Games. “This means that it is more and more important to work efficiently and smartly in order to avoid having your team size or budget explode.

“Technologies that have stood out from the rest include various new anti-aliasing techniques like temporal AA, which has made a huge impact on visual quality.”

Unity’s Veselin Efremov, who directed the engine firm’s Adam tech demo, believes “the latest push is around adding layers of depth and evocative power to environments and virtual worlds”.

“There are things such as screen space reflections, and real-time global illumination,” he says. “It can all be done with a relatively low memory footprint.

“Meanwhile, physically-based rendering is at its core a new approach to the lighting pipeline, simulating the natural way light and real-world materials interact. PBR shines when you’re pursuing realism.”

It is important to work efficiently to avoid having team size or budget explode.

Supporting the latest methods of mastering specific optical elements are new underlying APIs that are propelling the entire sector forward.

“The expanding performance budget of each platform means that every generation developers can build larger worlds with more geometry, richer textures and more advanced effects,” says Chris Porthouse, GM of lighting specialist Geomerics. “The advent of thin APIs such as Metal and Vulkan is giving developers access to finer control of the performance potential of these devices.”

Although the inflating scale and ambitions of titles have led to a need for greater investment in graphical technology, Dario Sancho-Pradel, lead programmer at CryEngine outlet Crytek, highlights the advancements in making tools more accessible and easy to deploy.

“Recent technologies are helping character and environment artists to produce more complex and interesting assets in a shorter amount of time,” he says. “Some examples that come to mind are procedural material generation, multi-channel texturing and photogrammetry.”

Peter Busch, VP of business development at Faceware, concurs that “content creation tools are getting easier to use, less expensive, and now support multiple languages”.

“That means that more people will have access to the hardware and software needed to create good facial performances,” he says. “That, in turn, means facial animation, whether pre-rendered or rendered in real time, will appear in more indie games.”


SMALL BUT MIGHTY

For decades, it was a widespread belief that indie studios couldn’t hold a candle to the graphical might of triple-A powerhouses. The last few years have destroyed this myth, as the increasing power and falling price of development software have closed the gap, allowing devs including The Chinese Room and Frictional Games to produce outstandingly good-looking titles such as Everybody’s Gone To The Rapture and Soma.

“Development tools have advanced tremendously in the past several years,” says Efremov. “There’s really very little stopping a creator from bringing their vision to market; it takes less time and fewer resources than ever before.

“What really distinguishes big triple-A productions is content and an emphasis on realistic graphics. There are ways smaller studios can replicate triple-A quality, such as PBR-scanned texture set libraries, terrain creation tools, automated texturing tools and markerless motion capture solutions.”

Smedberg offers a reminder that a strong style can be as effective as pure graphical horsepower.

“Use something you feel comfortable with and that lets you be most productive,” he advises. “Choose an art style that fits you best and where you can excel. All developers should take the time to use graphics debuggers and profilers. The more you can learn about the tools and your rendering, the better your game will look.”

Porthouse agrees that “workflow efficiency is everything to smaller studios”.

“Any tool that can decrease the time it takes to perform a particular process saves money and leaves developers free to improve other areas of the game,” he explains.

Sancho-Pradel says: “Small studios need to be particularly pragmatic. If they use a third-party engine, they should be aware of the strengths and pitfalls and design their game around them. Using techniques such as photogrammetry, procedural material generation and multi-channel texturing is affordable even in low-budget productions and can significantly increase the quality of the assets if the art direction of the game is aligned with such techniques.”

Each year the challenge of delivering visibly better graphics increases and workflow efficiency is the key to success.

CONSOLE CALIBRE

As thread upon thread of ‘PC Master Race’ discussions attest, the trusty desktop has long been considered the flagbearer for bleeding-edge graphics, with the static hardware of consoles acting as a line in the sand beyond which visuals can only toe so far. This could all be set to change, as Xbox’s newly announced Project Scorpio and the PS4 Neo look primed to blur the lines between console generations, in line with the continually evolving nature of PC.

“As hardware manufacturers produce new platforms, development teams will happily eat up the extra performance budget to generate higher fidelity visuals,” Porthouse predicts. “Yet step changes in hardware do not come annually and for every release a franchise’s studio needs to demonstrate graphical improvements even when new hardware is not available.”

John Elliot (pictured), technical director of Unity’s Spotlight team, suggests: “For large studios, there is a great opportunity to stand out by pushing the new hardware to the maximum. For smaller studios, more power provides opportunities in different areas. Many will find creative ways to drive interesting and new visuals.”

Among the headline features of the Xbox Scorpio are high dynamic range and the ability to output at 4K resolutions. With many titles yet to achieve 60 frames per second performance at 1080p on console, will these new display options even be of interest to devs?

“This was the only option they had to move forward,” Smedberg proposes. “HDR requires new display hardware and 4K requires more performance. While more performance is always better, I am concerned about fragmentation – that consoles will end up like PCs, with tons of different performance and feature levels that developers have to figure out how to make our games play nice with.”

Elliot backs the introduction of HDR and 4K as “interesting features that absolutely should matter to devs”.

“While the jump to 4K is much less than that from SD to HD TV, it is still something that gamers will soon come to expect,” he forecasts. “You just have to look at how quickly 1080p became the standard for TV to know that there will be huge demand for 4K content.

“HDR really does provide an amazing boost in visual quality, but it is much harder to visualise what this means until you have actually seen it. I’m sure it will quickly become a standard requirement and, as with any new hardware, there is a great opportunity for the first games to support it to get noticed in the marketplace.”
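As a rough back-of-envelope check on that comparison (a sketch assuming NTSC-style SD at 720x480, 1080p HD, and UHD 4K as the reference resolutions, which are assumptions rather than figures from the article), the raw pixel counts work out roughly as follows:

    # Approximate pixel counts behind the SD -> HD -> 4K jumps.
    # Resolutions below are assumed reference points.
    resolutions = {
        "SD (720x480)": (720, 480),
        "HD (1920x1080)": (1920, 1080),
        "4K UHD (3840x2160)": (3840, 2160),
    }

    pixels = {name: w * h for name, (w, h) in resolutions.items()}
    for name, count in pixels.items():
        print(f"{name}: {count:,} pixels")

    print(f"SD -> HD: {pixels['HD (1920x1080)'] / pixels['SD (720x480)']:.1f}x more pixels")
    print(f"HD -> 4K: {pixels['4K UHD (3840x2160)'] / pixels['HD (1920x1080)']:.1f}x more pixels")

By these counts, HD has roughly six times the pixels of SD while 4K has roughly four times the pixels of HD, which squares with Elliot’s point that the 4K jump, though significant, is smaller than the SD-to-HD transition.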

THE REALITY OF VR

As well as HDR and 4K, a major turning point with Scorpio and Neo is improved performance for VR hardware. With good reason: the nascent medium is reliant on flawless performance. It’s a challenge that remains so even with the almost limitless potential of PC, as devs work to balance realistic visuals with hitch-free delivery.

“Photorealism is not essential for VR, but if one of your goals is to immerse the player in a highly realistic world, then the game will have to use a highly optimised rendering pipeline in order to hit the framerate required by the VR platform while rendering, in stereo, highly detailed models, thousands of draw calls and complex shaders,” observes Sancho-Pradel. “DirectX12 and Vulkan are designed to give engineers much more control in terms of memory, command submission and multi-GPU support, which can translate into more optimal rendering pipelines.”

Busch echoes that absorbing players into a VR experience is vital for devs – and visuals play a major role in doing so. 

“Without believable, immersive experiences, games will face an uphill battle in VR,” he cautions. “In a fully immersive environment people are ‘in’ the content – which only means that the details, framerate and level of engagement are a daunting task. Mix that with a quagmire of VR hardware, and it is a difficult landscape to develop in – to say the least.”

Smedberg drives home the point that performance is virtual reality’s unavoidable challenge, and offers technical advice to devs looking to perfect their game’s smoothness.

“The performance requirement for VR is critically important,” he confirms. “If you miss your target framerate here and there in a regular game, you might have a few unhappy customers. If you miss your target framerate in a VR game, your customers may not just stop playing your game – they may stop playing VR altogether.

“With VR you have to design your game graphics with a lower visual fidelity in mind, because you have twice as many pixels to fill and need to have it all rendered in less than 10ms – instead of 16 or 33ms. Many tried and proven rendering techniques won’t work, like sprite particles, because in VR you can see that they’re just flat squares.

“Better technology can help; faster and more efficient rendering APIs could help your game run much faster on the CPU – you can make more drawcalls and spend more time on AI or physics. GPUs could take advantage of similarities in the shader between the two eye views and run faster on the GPU.”
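Smedberg’s frame-budget figures follow directly from the target framerates involved. As a quick illustrative calculation (assuming 30 and 60 fps as the traditional console targets and the 90 Hz refresh common to current PC headsets, with the sub-10ms figure presumably leaving headroom for the VR compositor, which is an assumption here):

    # Frame-time budgets implied by common target framerates.
    # 33 ms and 16 ms correspond to 30 and 60 fps; most current VR headsets
    # target 90 Hz (~11 ms), and the "less than 10 ms" figure presumably
    # leaves headroom for compositor/reprojection work (an assumption).
    for fps in (30, 60, 90, 120):
        budget_ms = 1000.0 / fps
        print(f"{fps:>3} fps -> {budget_ms:5.2f} ms per frame")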

There is a great opportunity for the first games to support HDR to get noticed in the marketplace.

RESOLUTION REVOLUTION

With a new semi-generation of consoles around the corner, virtual reality continuing to redefine long-established development philosophy and technology always set to take the next major graphical step, what can developers expect to rock the visual world?

“Lighting continues to be one of the most effective, emotive and evocative tools in an artist’s bench,” suggests Efremov. “Real-time global illumination, physically-based materials, lights, and cameras/lenses will all carry us much closer to worlds and environments that truly replicate our own.

“Machine learning is such an exciting new area of research, already put to practical use in so many other industries. The possibilities in the future are countless; simulations for populating and simulating game worlds based on photographic reference, expanding animation performances, creating 3D assets, altering the visual style of a game, new levels of AI and so on.”

Porthouse echoes Efremov’s sentiment that lighting enhancements will be one of the key drivers helping to refine visual fidelity.

“As teams perfect the graphics of their increasingly large world games, lighting and weather effects will play an important part,” he says. “In two to three years’ time all exterior environments will be lit with a real-time day/night cycle with believable and consistent bounced lighting, and gamers will be able to seamlessly transition between indoor and outdoor scenes. Each year the challenge of delivering visibly better graphics increases and workflow efficiency is the key to success.”

Smedberg (pictured) highlights hardware evolution as a strong foundation for devs to expand their graphical ambitions.

“We may see PC graphics hardware that can run at double speed, using 16-bit floating point instead of 32-bit floating point,” he forecasts. “This doubling of performance comes in addition to any other performance increases, like more cores and higher clock frequencies.

“On the rendering technology side, I’m looking forward to using some forms of ray-tracing – not traditional full-scene ray-tracing, but as a partial tool for some specific rendering features.”

Smedberg concludes by reiterating that while pure power is sure to drive visual punch onwards, its benefits are voided without strong design to match.

“Graphics technology is so capable and powerful today that developers have an incredible freedom to choose whatever art style they like,” he enthuses. “Creators can and should choose a style that they feel passionate about and that fits their gameplay and budget goals.”

How Hasbro Animated Mr. Monopoly’s Facebook Live Broadcast in Real Time



 

Facebook Live is proving to be a powerful tool for sharing moments in real time — but what happens when the user is actually a cartoon? Hasbro recently used a trio of different technologies to allow an animated Mr. Monopoly (the game’s top-hat-wearing, mustachioed mascot) to share a live announcement, interact with commenters, and answer questions in real time.

Last week’s live video, which announced the Monopoly Ultimate Banking Game that replaces paper money with bank cards, used not one but several different technologies to create the animation in real time. The video included facial expressions as well as full body movements and personalized interaction with viewers.


“In the Monopoly Facebook Live broadcast, Mr. Monopoly was brought to life with the help of a combination of physical motion capture, facial motion capture, and real-time rendering,” Rebecca Hollander, the Monopoly brand’s director of marketing, told Digital Trends in an email. “What you see and hear from Mr. Monopoly in the video is completely live and the character also spoke directly to fans, answering questions and acknowledging their comments during the broadcast.”

Mr. Monopoly’s whole-body movements were created using a wearable motion capture device from X-Sens. The system captured the performer’s movements using a series of sensors, including arm and leg bands and a cap. As the performer moved around inside Hasbro’s Cake Mix Studios, the animated Mr. Monopoly moved around the cartoon stage.

A wearable captured the larger motions, but what about the facial expressions? Faceware Live 2.0 was used to analyze video of the performer’s face and translate it onto the animated character. The system quickly calibrates the person’s facial features, identifying where everything from the lips to the eyebrows is. Then the system continues to follow those facial markers, translating them to the animated character. Unlike the wearable, the facial expressions are monitored with video and a computer system.
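The paragraph above describes a three-stage flow: calibrate the performer’s neutral face, track the facial landmarks frame to frame, then map their movement onto the character. The sketch below illustrates that loop conceptually; it is not the Faceware Live API, and every function, name, and value in it is hypothetical.

    # Conceptual calibrate -> track -> retarget loop (NOT the Faceware Live API;
    # all names and values here are hypothetical and purely illustrative).

    NEUTRAL = {"brow_left": 0.50, "brow_right": 0.50, "mouth_corner_left": 0.30}

    def calibrate(first_frame):
        """Locate the performer's neutral landmark positions (stubbed)."""
        return dict(NEUTRAL)

    def track_landmarks(frame_index, previous):
        """Follow the same landmarks into the next frame (stubbed with fake motion)."""
        return {name: value + 0.01 for name, value in previous.items()}

    def retarget(landmarks, neutral):
        """Turn landmark offsets from neutral into character animation controls."""
        return {name: landmarks[name] - neutral[name] for name in neutral}

    neutral = calibrate(first_frame=0)
    landmarks = neutral
    for frame_index in range(1, 4):          # stands in for a live video feed
        landmarks = track_landmarks(frame_index, landmarks)
        controls = retarget(landmarks, neutral)
        print(frame_index, controls)         # here the engine would drive the rig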

While X-Sens and Faceware were used to capture the motion, the Unreal game engine helped put it all together, mixing the motion of Mr. Monopoly with the animated Scottie the dog in the “video crew.”

“A challenge with using animation with a live broadcast is that there are many layers of technology that have to work together to deliver a quality final product,” Hollander said. “There are also many moving parts that come together to make it work, for example, creating an engaging script and understanding how facial and body movements translate into an animated character on screen. It was also important for Mr. Monopoly to answer questions and engage in real time to give an authentic experience to our fans.”

Hollander said the 80-year-old brand is continually looking for ways to stay fresh and relevant — and Facebook’s live feature was a good way to do just that while engaging with fans in real time.

Along with the release of the new game with bank cards, the live announcement also included a new sweepstakes where participants can win up to $20,580 (the amount of cash inside a traditional Monopoly game).
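For the curious, the $20,580 figure matches what you get from the bank of a standard modern US Monopoly set, assuming the usual 30 bills of each of the seven denominations (an assumption here, not a detail from the announcement):

    # Total cash in a standard modern US Monopoly set,
    # assuming 30 bills of each denomination.
    denominations = [1, 5, 10, 20, 50, 100, 500]
    bills_per_denomination = 30

    total = bills_per_denomination * sum(denominations)
    print(f"${total:,}")  # -> $20,580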


Faceware Tech Adds to the Magic in Kingsglaive Final Fantasy XV


It’s always a thrill to see our technology used in productions. Check out the official trailer and intro clip of Kingsglaive Final Fantasy XV – from Sony Pictures Home Entertainment.

And the Official Teaser Trailer…

Computer Graphics World Selects Faceware to SIGGRAPH “Best of Show” List



The excitement and hustle and bustle of SIGGRAPH 2016 is now behind us. After taking a bit of time to review all the information collected from the show, the CGW staff is ready to name the winners of the CGW Silver Edge Awards, given to companies whose products and vision have the potential to greatly impact the industry.

This year, after much consideration, the following companies and their respective technologies have earned the designation of best of show at the 43rd annual SIGGRAPH conference and exhibition:

AMD’s Radeon Pro WX Series: Professional graphics cards that are optimized for open-source software and tuned for the demands of modern content creation. Also, AMD’s Radeon Pro SSG: A transformative solution that will offer greater memory capacity for real-time postproduction.

Autodesk’s Maya 2017: Packed with many new features, including the Arnold renderer, which was recently acquired by Autodesk, as well as new tools for animators, updates to the motion graphics toolset, and more.

Faceware’s Faceware Interactive technology: This new interactive division is focusing on hardware and software that will enable characters to interact with people in real time (ideal for controlling characters in VR).

Maxon’s Cinema 4D R18: A big new release featuring enhancements to the MoGraph toolset, including the new Voronoi fracture system, object tracking within the motion tracker, and more.

Meta Company’s Meta 2: Augmented-reality headset that may give Microsoft’s HoloLens a run for its money.

Nvidia’s Quadro P6000: A VR graphics card said to be the fastest pro card available, harnessing 3,840 CUDA cores. Kudos also for making Mental Ray available directly from Nvidia, running on the GPU. And for its SDK updates, particularly the VRWorks 360 SDK.

Pixar’s RenderMan 21: Touted as the biggest RenderMan release in years, V21 gives users access to some of the same technology used on the studio’s latest feature film, including access to shaders and light types. Also, Pixar announced the open-source release of Universal Scene Description (USD), technology used for the interchange of complex 3D graphics data through various DCC tools.

The Foundry’s Cara VR: A VR plug-in toolset for Nuke. Also, the company should be applauded for shipping Katana to Windows as well as Linux.

WorldViz’s VizConnect: For enabling multi-platform use of VR across multiple systems, including the Oculus Rift and HTC Vive.

Special recognition: While not all of the game engine companies had a presence on the show floor, three in particular deserve special recognition for the work on their engines in advancing real-time graphics and interactivity: Epic Games, Unity Technologies, and Crytek.


3D Animation Workshop Offered by CGSociety


Our friends over at CGSociety are offering an amazing animation course we wanted to share:

Character Facial Rigging for Production

Course Instructor – Wade Ryer

Wade Ryer has over 10 years of experience in animated feature films. His credits include How To Train Your Dragon 2, Rise of The Guardians, Kung Fu Panda 2, The Tale of Despereaux and The Ant Bully. He has worked for such studios as DreamWorks Animation, Method Studios, Framestore, Electronic Arts, DNA Productions and ReelFX Creative Studios.

Course Overview

Learn how to rig character faces and bring them to life with Wade Ryer. Wade has over 10 years of experience in animated feature films such as How To Train Your Dragon 2, Rise of The Guardians, Kung Fu Panda 2, The Tale of Despereaux and The Ant Bully. He has worked for such studios as DreamWorks Animation, Method Studios, Framestore, Electronic Arts, DNA Productions and ReelFX Creative Studios.

Upon completion of this eight-week CGWorkshop, you will have a full understanding of how to create a functional, animator-friendly character face rig ready for its close-up on the big screen! Wade will give you guidance along the way, industry-level tips and tricks, and personal feedback on your character’s facial rig.

CGSociety Animation Course

Note: The format of this class will be Forum replies for Assignments and Forum replies for student Q&As.

Faceware Technologies Launches Faceware Interactive Division


New division, Faceware Interactive, creates facial mocap technologies that enable virtual humans and characters to interact with people in real time.

Los Angeles, CA – July 20, 2016 – Faceware Technologies, the leading innovator and most experienced provider of markerless 3D facial motion capture solutions, today announced FACEWARE INTERACTIVE. The new division is focused on the development of software and hardware that can be used in the creation of digital characters with whom real people can interact. Faceware will be showcasing some of its early work in this area at SIGGRAPH 2016, Booth 322.

Faceware’s software technology identifies the movement of an actor’s face from video and applies that movement to a computer-generated character. Together with its head-mounted and stationary cameras, Faceware’s technology is used successfully in award-winning movies like The Curious Case of Benjamin Button and The Walk, and top-grossing games like Grand Theft Auto III-V, NBA 2K10-2K16, Destiny, Batman: Arkham Knight, Call of Duty: Advanced Warfare and DOOM.

Now imagine those characters interacting with people in real time, or someone acting “through” virtual humans or avatars in real time, like virtual puppeteers. The use cases for this are numerous, and include:

  • Live performances that incorporate digital characters. Digital characters can be “puppeted” in real time, allowing interaction with live audiences and people.
  • Person-driven avatars in VR and AR. Users can stream their own personas into digital and virtual worlds—perfect for training applications as well as interactive chat functionality.
  • Digital characters interacting in real time on kiosk screens in theme parks and shopping malls.
  • Animated content that can be created instantly anywhere. Believability of the characters will be driven by the live and interactive nature of the performances, e.g. kids can meet and talk to Elsa from Frozen or have a conversation with Bart Simpson.

Faceware technology is enabling Grace, a CG virtual reality music experience that combines bleeding-edge rendering in Unreal with full performance capture and mind-blowing 3D spatial audio.

Faceware, and its sister company Image Metrics, have been developing technologies that enable just those types of experiences. The first of these efforts can be seen in Image Metrics’ work on L’Oreal’s MakeUp Genius app and Nissan’s DieHardFan app, the panel at RTX Australia (which let fans interact with the characters Yang and Ruby from the animated web series RWBY in real time), and in the VR games Paranormal Activity from VRWerx, Grace from MacInnes Scott and Here They Lie VR from Sony.

In order to focus more effort in this growing area, Faceware formed this new division in the company and is investing in research and development of both software and hardware to further enable interactive experiences in the public and professional space.

“By now, we’re all familiar with watching digital characters in movies and games. To us, interacting with and through those characters so people can connect on a deeper level is the logical next step,” said Peter Busch, vice president of business development at Faceware Technologies. “We have much of the underlying technology in place and early efforts point to a promising future full of growth. While we can’t share product information yet, we’ll have some exciting news to share in the near future.”

See Faceware Interactive at SIGGRAPH 2016

To find out more about Faceware’s technology and how it can be used to create live, interactive content, visit Faceware at SIGGRAPH 2016 (Booth 322), go to www.facewaretech.com or contact sales@facewaretech.com.

 

# # #

 

About Faceware Technologies

Faceware Technologies Inc. (FTI), established in 2012 after years as part of leading facial tracking and augmented reality company Image Metrics, is dedicated to meeting the needs of professional animators in the video game, film, television, and commercial industries. The company’s Faceware Facial Motion Capture product line has been utilized in the production of hundreds of video game titles, feature films, music videos, commercials, television shows, and stage plays, and is the leading facial animation solution provider for clients such as Double Negative, Digital Domain, Blur Studios, Activision-Blizzard, Rockstar Games, Microsoft, 2K Sports, Electronic Arts, Ubisoft, Sega, Sony, Bethesda, Motion Theory and Moving Picture Company. Faceware’s product line consists of the Faceware GoPro and Pro HD Headcam and Tripod Capture Systems; Faceware Analyzer, which allows clients to analyze and process their own performance videos; Faceware Retargeter, an Autodesk plugin which allows users to create facial motion capture data at a much faster rate than traditional methods; and Faceware Live, the real-time facial capture and animation solution.

 

© 2016. Faceware Technologies, Inc. All rights reserved. “Faceware” is a registered trademark of Faceware Technologies, Inc. All other trademarks are the property of their respective owner(s).

See Kit Harington Filming ‘Call of Duty: Infinite Warfare’


It’s always exciting to see our hardware on these amazing actors! We’re counting the days till we get to play the upcoming Call of Duty: Infinite Warfare by our friends over at Infinity Ward.

Kit Harington in ‘Call of Duty: Infinite Warfare’


Published by Madeline Boardman • @ml_boardman

Game of Thrones star Kit Harington is eyeing a new game: The British actor will appear in the upcoming Call of Duty: Infinite Warfare. Harington will take a villainous turn in the latest installment from the video game behemoth, though much of his role is still being kept under wraps. See photos of Harington working on Call of Duty: Infinite Warfare in London, ahead.


Image Credit: Ian Gavin/Getty Images for Activision

 

We’ll Be At The Motion Capture Summit UK


Motion Capture Summit UK is heralded as “the most exciting event in mocap.”

The Motion Capture Summit UK is June 25-26, 2016 and will be held at Centroid Studios in London. We’ll be at this exciting event demoing Faceware’s latest headcams and software that have been used in such recent AAA titles as Mirror’s Edge Catalyst, XCOM 2, EA Sports UFC 2, DOOM, and more.

This two-day event will unite technology and drama for an intensive learning experience for actors, directors, animators and technicians. Attendees will be working on a professional motion capture stage and will be taught by a group of industry veterans with specialization in all areas of mocap.

Our colleagues at The Mocap Vaults have an incredible line-up of tutors from various disciplines within the industry for this symposium. Here are just a few of the industry leaders who will be instructing and interacting at the Summit:

  • Simon Kay – Motion Capture Supervisor with Oscar-winning VFX company Double Negative
  • John Dower – co-founder of The Mocap Vaults and a director with over twenty years’ experience in film, television and video games
  • Gareth Taylor – professional freelance actor, movement director and teacher

Whether you want to work in front of the cameras or behind them, this gathering of the greatest minds in the business will surely be a momentous weekend of training and development!

To give you an idea of what’s coming, check out this scene from The Motion Capture Summit USA – Performed, directed and crewed by Summit students – June 2015.

What’s behind The Motion Capture Summit UK?

What is the impetus for this international Summit of mocap movers and shakers? According to the rockstars at The Mocap Vaults, it’s to create a new wave of mocap superstars that are trained, prepped and ready for the job. With instructors from various fields of mocap sharing knowledge and precious advice, attendees will share in a new, higher level of performance capture than ever before.

We’ve heard there are still a few openings to attend The Motion Capture Summit UK as an animator, director or technician – or as an observer, but space is very limited due to the unique immersion activities.

We’ll see you in the UK!

 

Analyzer 3.1 Released


This minor release of Analyzer includes around two dozen fixes and improvements that resolve most of the initial issues users have been seeing with Analyzer 3.0. If you were having intermittent issues with Parameterizing, this new build should fix that for you. Release notes are listed below.

This update can be downloaded from our website in the ‘Downloads’ section if you have a current Faceware Subscription. As always, if you have questions or need any assistance please email us at support@facewaretech.com.

3.1 Release Notes:

  • Fixed several issues that were preventing some users from being able to Parameterize
  • Fixed an issue where you sometimes couldn’t create a new Job from an Image Sequence
  • Fixed an issue with Trimming that was occurring in certain languages
  • Fixed an issue where the Start and End time were not displaying properly with Image Sequences
  • Fixed an issue where the Height and Width of the image was not being saved properly when rotating your input video
  • Fixed an issue where the wrong error string was displaying if your .FWT file already existed when creating a new Job
  • Fixed an issue where AnalyzerBatch was not checking for an existing FWT folder before creating a new Job
  • Fixed an issue where Trimming in AnalyzerBatch was sometimes failing using drop-frame time-code
  • Fixed an issue where AnalyzerBatch would sometimes crash after ExportLandmarkPositions finished
  • Fixed an issue where the End time in Trimming was not displaying properly
  • Fixed an issue where NewJob in AnalyzerBatch was not displaying the correct exit code
  • Trimming in New Job has been optimized to work more quickly
  • Start and End time in Trimming will now update properly when selecting a new input video
  • Optimized the CountValidFrame command in AnalyzerBatch
  • Parameterization in AnalyzerBatch will now initialize the Web Service URL before attempting to contact the server
  • Fixed an issue where the CountValidFrame command in AnalyzerBatch was not working properly with a list of videos
  • Fixed a crash when specifying a directory path instead of a file path for the FWT parameter in AnalyzerBatch NewJob
  • Fixed an issue where AnalyzerBatch NewJob was not reporting the correct error when using an unsupported video codec