January 12, 2016 Mike Wilson

Virtual Reality performance capture with HTC Vive & Perception Neuron

It has been a pretty exciting few months over here at Cloudhead Games.

We have been developing a system to capture MoCap data from a performer while they are “in game,” in VR, totally immersed in the environment in which their performance will live.

It is awesome! I will explain why.

First, let’s back up.

Motion Capture (MoCap) is the process of capturing the movements of an actor and then translating that captured data onto a 3D model. It is a (normally) difficult and (normally) expensive way of animating a character, but it allows for a unique, extremely precise performance which exudes life. Manual animation has its place, to be sure, and we will be employing a mix of both talents in The Gallery, but there is nothing quite like seeing a character in VR, brought to life with MoCap data. Standing face to face with a character animated with MoCap data is slightly unnerving, profound, and an experience that would be difficult, if not impossible, to replicate outside of VR.


Perception Neuron, MoCap for dummies

After getting our mitts on a Perception Neuron suit from Noitom, we were excited to see the results that their inexpensive, yet robust solution could provide. After a short ramp-up time we were able to capture motion data comparable to what we would expect from a MoCap system that lives far outside of our budget. The Perception Neuron system let us quickly capture MoCap data, within the studio, on the fly. This accessibility and short turnaround loop gave us the flexibility to iterate with this new technology and stub in animations to ensure that they work perfectly in engine. It also allowed us to experiment and find uses for animation that we otherwise would not have discovered.


Dan expertly demonstrating the dreaded S Pose; it’s hard on your glutes.

Capturing MoCap data in studio is great, but why not also use the resources of the studio to enhance the performance? It was a no-brainer that we were going to put the actor in VR so they could reference the locations where they would be performing, but then the question became: why take them out? After a few tests we determined that the actor would not have to leave VR for us to capture their performance, and we found that performing in VR let the actor play off their surroundings in a way that a simple reference would not allow.

We demonstrated our approach to Adrian Hough, a seasoned actor with MoCap experience (extensively on Assassin’s Creed), and he was as excited for this new merging of technologies as we were. He agreed to lend his talent to our game and give life to our main antagonist, The Watcher. So the preparation began. Beyond designing the in-game environments in which these scenes would be staged, we also had to design custom MoCap performance rigs for the actor. We set about iterating on a system that would accommodate the actor as much as possible in-engine.


Adrian, a super nice guy who excels at acting like super mean guys.

Dan, our cinematic designer, created the custom performance rigs in Unity. In each rig, the actor’s movements triggered moving platforms and assorted actions, allowing the environment to respond to the actor without further input from an outside operator. Essentially, we designed and scripted an entirely new game, JUST for the actor’s performance. I won’t dive too heavily into the process of designing performance rigs, but Dan pulled out some sort of dark magic to fit those scenes in a 15×15 capture space without causing the actors to puke with aggressive vigor.
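To make the idea a little more concrete, here is a minimal sketch of what one of those rig elements might look like. This is an assumption on our part, not our actual project code: a trigger volume that, when the actor’s tracked body enters it, starts a platform moving toward a target. The “Actor” tag, field names, and speed value are all hypothetical.

```csharp
// Hypothetical sketch of a performance-rig element: when the actor's tracked
// body enters a trigger volume, a platform starts moving so the environment
// responds without any input from an outside operator.
using UnityEngine;

public class ActorTriggeredPlatform : MonoBehaviour
{
    public Transform platform;   // the platform to move when triggered
    public Transform target;     // where the platform should end up
    public float speed = 1.0f;   // metres per second

    private bool triggered;

    private void OnTriggerEnter(Collider other)
    {
        // Fire once, the first time the actor steps into the volume.
        if (!triggered && other.CompareTag("Actor"))
            triggered = true;
    }

    private void Update()
    {
        if (triggered)
            platform.position = Vector3.MoveTowards(
                platform.position, target.position, speed * Time.deltaTime);
    }
}
```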


Kyle helping Adrian find his mark.

The performance rigs included a robust cue-card system for the actor, which allowed us to improvise line changes on the fly and accommodate changes requested by the actor. This cue-card system proved invaluable and made the shoot incredibly efficient. The actors came prepared with their lines memorized, but in the future that may not always be a requirement; after all, it’s not as if the Player can see the cue cards once we turn them off in engine.
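A cue-card system like this could be as simple as the sketch below: world-space text the actor can read in the headset, advanced by the operator, and switched off entirely for the shipped game. The key bindings, field names, and TextMesh display are hypothetical choices for illustration, not a description of our actual tooling.

```csharp
// Rough sketch of an operator-driven cue-card system, assuming a simple
// keyboard setup: arrow keys step through lines, and the cards can be hidden
// "in engine" so the Player never sees them.
using UnityEngine;

public class CueCards : MonoBehaviour
{
    public TextMesh cardDisplay;   // world-space text the actor reads in VR
    public string[] lines;         // the scene's lines, editable on the fly
    private int index;

    private void Update()
    {
        if (Input.GetKeyDown(KeyCode.RightArrow))   // operator: next cue
            index = Mathf.Min(index + 1, lines.Length - 1);
        if (Input.GetKeyDown(KeyCode.LeftArrow))    // operator: previous cue
            index = Mathf.Max(index - 1, 0);
        if (Input.GetKeyDown(KeyCode.H))            // hide/show the cards
            cardDisplay.gameObject.SetActive(!cardDisplay.gameObject.activeSelf);

        if (cardDisplay.gameObject.activeSelf && lines.Length > 0)
            cardDisplay.text = lines[index];
    }
}
```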

A few moments required us to trigger in-game events externally, so that the environment could react to the performer where the action lacked a movement cue. In a way, the performer and the scene operator were dancing. The actor set the pace of a scene through their performance, while the operator controlled the responses of the environment through action cues. It took a few takes for both the actor and the operator to find a rhythm in each scene, but the end result feels both dynamic and natural. You can feel the human touch in moments that otherwise would have been timed events.
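As a sketch of what those operator action cues might look like, assuming a keyboard-driven setup (the cue list, key mapping, and class names below are hypothetical examples, not our actual tooling), each cue simply fires an event that the environment is wired to respond to:

```csharp
// Hypothetical operator cue board: the operator watches the performance and
// fires environment responses (doors, lights, set pieces) in time with the
// actor, wherever the action lacks a movement cue of its own.
using UnityEngine;
using UnityEngine.Events;

[System.Serializable]
public class ActionCue
{
    public string name;          // e.g. "Door slams", "Lights dim"
    public KeyCode key;          // key the operator presses
    public UnityEvent onFired;   // environment response hooked up in the editor
}

public class OperatorCues : MonoBehaviour
{
    public ActionCue[] cues;

    private void Update()
    {
        foreach (var cue in cues)
            if (Input.GetKeyDown(cue.key))
                cue.onFired.Invoke();   // let the scene react to the performer
    }
}
```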


Watching Adrian perform a scene; no joke to make here, it was just fun to watch.

Joel, our resident audio wizard, captured the voice performance while the actor performed in the scene. What you are seeing in game is a whole performance: audio and MoCap, timed by the operator while the actor is fully immersed in the scene.


Joel capturing sound with his bare hands; he’s that good.

We wanted the actor to feel as if they were acting on a stage, able to see the environment, the Player (we used a stand-in model), and experience the space as the character would. We gave the actor a fully realized world to perform in, free of the strain of an abstracted environment, a greenscreen, or real-world distractions. Based on the feedback from the actors, our VR performance method worked extremely well for them.


Perception Neuron guys . . . percepting.

In the future we will build in more systems to give the actor greater control over their performance rig, tighten the process loop so we can review performances in (near) real time in engine, and create robust environments that interact with the actor’s performance in more meaningful ways. We look forward to the future of MoCap in VR, and are excited to offer a new type of performance capture for the actors who lend their talents to The Gallery. We have big ideas for this technology, and are chomping at the bit to expand on it down the line.


Rye Green performing as the mysterious Professor.

Thanks to the actors who took this leap and joined us on this adventure, and to everyone who helped on shoot days to make them a success.

The future of storytelling in VR is awesome, and we are stoked to continue pushing the boundaries.

A huge thanks to Noitom for sending over their team to help us out on shoot day, even though no issues arose!

Thanks to Maingear, who supply us with the gear required to make all of our dreams come true!

Massive thanks to HTC and Valve for creating an amazing VR device that allows us to make real adventure.

