Using HoloLens to Visualise Motion Capture Performances in Realtime

Motion capture techniques are used throughout the VFX industry, but a new project from mo-cap specialist and developer Jasper Brekelmans brings live augmented reality previews to motion capture performances, letting dances be viewed in realtime, overlaid onto reality.

Visual effects studios in film and television now use motion capture, the process of digitally recording a human actor’s movements, as a matter of course. But developer Jasper Brekelmans is experimenting with Microsoft’s augmented reality visor, HoloLens, as a way to let directors and performers see these digital performances overlaid onto reality in realtime.

Last time we checked in with Brekelmans, he was demonstrating an AR integration he had built between the HoloLens and Autodesk’s popular character animation package MotionBuilder.

Brekelmans is now working with the WholoDance project, an organisation dedicated to digitally capturing the art of dance for both cultural and research purposes. Teaming up with Motek Entertainment, which provides the motion capture expertise and facilities, Brekelmans has built a way to wirelessly stream motion capture data, as it’s being generated, to a HoloLens visor, which then renders an augmented avatar mimicking the dancer’s movements overlaid onto reality – all tracked in 3D space.


For Brekelmans it was a natural evolution of his prior work. He used his existing technology for streaming data from Autodesk’s MotionBuilder, running on a desktop computer, to the Microsoft HoloLens, which allowed the visualisation of a life-size 3D character in realtime on the motion capture stage.
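Brekelmans hasn’t published his wire protocol, so the Python sketch below only illustrates the general pattern such a pipeline follows: sample the skeleton’s joint transforms each frame and broadcast them over the local network. The JSON format, joint names, host address and port are all hypothetical, and UDP is assumed because a stale mocap frame is better dropped than retransmitted.

```python
import json
import socket
import time

def sample_skeleton_frame(t):
    """Hypothetical joint data; a real pipeline would pull this from the
    mocap software's streaming interface each frame."""
    return {
        "timestamp": t,
        "joints": {
            "hips":      {"pos": [0.0, 0.95, 0.0], "rot": [0, 0, 0, 1]},
            "left_foot": {"pos": [-0.1, 0.0, 0.0], "rot": [0, 0, 0, 1]},
            # ...remaining joints of the tracked skeleton
        },
    }

def stream_frames(host="192.168.1.50", port=9763, fps=60):
    """Broadcast one skeleton frame per tick as a UDP datagram.

    UDP is assumed because a late mocap frame is useless: better to
    drop it than stall the stream waiting for a retransmission.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = 1.0 / fps
    t = 0.0
    while True:
        frame = sample_skeleton_frame(t)
        sock.sendto(json.dumps(frame).encode("utf-8"), (host, port))
        time.sleep(interval)
        t += interval

if __name__ == "__main__":
    stream_frames()
```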

“The first thing we noticed when giving the HoloLens to a dancer was that they were instantly mesmerized,” says Brekelmans. “Right after that they immediately started analyzing their own performance, much like professional athletes do with similar analyzing equipment. Nuances of how the hips moved during balancing, or how footwork looked, for example, became much more apparent and clear when walking around a life-size 3D character in motion than when watching the same thing on a 2D screen.”


An Xbox controller was used to give the user various controls, like offsetting the 3D avatar in physical space, toggling between the different visualizations, and tuning visual parameters. Given the amount of control needed, the team opted for a “known device” instead of relying on the gaze, air tap and voice commands commonly used with the HoloLens.
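The article doesn’t detail the exact button mapping, so the sketch below assumes one plausible layout: the left stick repositions the hologram on the floor plane, a face button cycles visualizations, and an analog trigger tunes a continuous visual parameter. All names and constants here are hypothetical.

```python
from dataclasses import dataclass, field

NUM_VISUALIZATIONS = 4  # hypothetical count of selectable render modes
PAN_SPEED = 0.5         # metres per second at full stick deflection

@dataclass
class AvatarSettings:
    """State the controller manipulates: world offset and display options."""
    offset: list = field(default_factory=lambda: [0.0, 0.0, 0.0])
    visualization: int = 0
    opacity: float = 1.0

def apply_gamepad(settings, stick_x, stick_y, a_pressed, trigger, dt):
    # Left stick slides the hologram across the floor plane so the
    # avatar can be repositioned relative to the physical stage.
    settings.offset[0] += stick_x * PAN_SPEED * dt
    settings.offset[2] += stick_y * PAN_SPEED * dt

    # The A button cycles through the available visualizations.
    if a_pressed:
        settings.visualization = (settings.visualization + 1) % NUM_VISUALIZATIONS

    # The trigger tunes a continuous visual parameter, here opacity.
    settings.opacity = max(0.1, min(1.0, trigger))
    return settings
```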


“Even though the HoloLens is specifically designed for projecting holograms further away from the user, we did experiment with the performer wearing the device while being tracked,” says Brekelmans. “This would allow them to see graphics overlaid on their hands and feet, for example. The display technology wasn’t ideal for this purpose (yet) but it did give us things to experiment with to form new ideas for the future.”

So where might this experimental technique lead in the future? “… we would love to play with other devices, including ones that are tethered to a desktop machine utilizing the full power of high-end GPUs. As well as use it in a Visual Effects context, visualizing 3D characters, set extensions and such in the studio or even on location.”


Emteq Aims to Humanize VR by Capturing Your Facial Expressions

As researchers and VR start-ups make great strides in increasing the sense of presence, through larger fields of view, lower latency and more natural input, a Brighton-based start-up has announced a new technology that can track a user’s facial expressions without markers.

Emteq, which was founded with $1.5 million of seed funding, hopes its technology, which combines miniature sensors and artificial intelligence to let digital avatars mirror the user’s expressions and mood, will enhance social engagement in the growing social VR space.

Emteq’s technology, called “Faceteq”, builds a range of biometric sensors, including electromyography (EMG), electrooculography (EOG) and heart rate monitors, into the faceplate of VR headsets such as the Oculus Rift; the EMG sensors track the electrical signals generated by the movement of facial muscles.

Unlike camera-based facial tracking systems, it also registers movement in the eye, forehead and cheek muscles hidden underneath the VR headset. Although the technology has clear applications for gaming and social VR apps such as vTime and the upcoming Facebook VR platform, the founders also plan to use the recorded data to analyse how audiences react to regular films and TV advertising.
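Emteq hasn’t disclosed how Faceteq’s machine learning maps raw sensor readings to expressions, but a common approach in the EMG literature is to reduce short windows of the multichannel signal to per-channel activation features and feed them to a trained classifier. The sketch below assumes that generic approach; the channel count, window length and expression labels are all hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

SAMPLE_RATE = 1000          # Hz; surface EMG is typically sampled around 1 kHz
WINDOW = SAMPLE_RATE // 4   # 250 ms analysis windows
N_CHANNELS = 8              # hypothetical number of faceplate sensor sites

def emg_features(window):
    """Reduce one (WINDOW, N_CHANNELS) block of raw EMG to per-channel
    RMS amplitude, a standard proxy for muscle activation level."""
    return np.sqrt(np.mean(window.astype(float) ** 2, axis=0))

def train_expression_model(windows, labels):
    """Fit a classifier on calibration data: a list of EMG windows and
    matching expression labels such as "smile", "frown", "neutral"."""
    X = np.array([emg_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=100)
    clf.fit(X, labels)
    return clf

def classify(clf, window):
    """Predict the expression label for one new window of sensor data."""
    return clf.predict([emg_features(window)])[0]
```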


Graeme Cox, chief executive and co-founder of Emteq and a serial tech entrepreneur, said: “Our machine learning combined with facial sensor technologies represents a significant leap forward in functionality and form for developers looking to introduce human expression and mood into a digital environment.

“Imagine playing an immersive role playing game where you need to emotionally engage with a character to progress. With our technology, that is possible – it literally enables your computer to know when you are having a bad day or when you are tired.”

See Also: FOVE Debuts Latest Design for Eye Tracking VR Headset

Emteq joins a small but growing group of companies hoping to bring the user’s facial expressions into VR applications. FOVE’s headset tracks a user’s eye movements, allowing people to take actions through eye gaze and blinking. It also points towards a more natural way of viewing scenes in VR, such as shifting focus with our eyes as we do in reality.

In July we reported that Kickstarter project Veeso was aiming to be first to market with a mobile VR headset that tracks your face’s movements, using head-mounted infrared cameras to track both your eyes and mouth for a greater sense of presence in virtual reality. Unfortunately that project was cancelled due to lack of funding, but what is exciting about Emteq is that its technology won’t be restricted to one headset.

What matters now is whether companies such as Emteq can garner enough support from developers and produce the required plugins for game engines such as Unity and Unreal to unlock the technology’s true potential.

Road to VR will be visiting Emteq within the next few weeks for a closer look and to try the technology first hand.
