PoseVR Is A VR Animation Tool From Disney

Disney has shown off an internal tool for posing and animating 3D models from inside VR. Called PoseVR, the tool is an experimental project exploring the potential of VR for 3D creation. Disney has not indicated that this will be released or even further developed.

Disney describes the tool as:

Removing inefficiency and distractions to allow artists to focus on their craft is one of our core tenets that drives innovation. PoseVR is an experimental project established to demonstrate the potential of VR as a tool to pose and animate CG characters.

A multidisciplinary team composed of engineers and animators developed and tested PoseVR to invent a functioning, posable rig in VR and to test assumptions on design and workflow. This informed us how to expand our current workflows while also showing the benefits and potential of VR for our future animation toolsets.

Animation was also a core focus of MARUI’s VR plugins for Maya and Blender. Using your hands to directly manipulate parts of the model that should move can be far more intuitive than the current approach of trying to move and rotate elements in 3D space with a mouse & keyboard.

Companies across the 3D creation industry are coming to the same conclusion: VR is perfectly suited for animation. While this tool isn’t being released publicly, we expect many like it to emerge in the coming years. This is a workflow which VR will almost certainly disrupt.

Disney Developing AR Costumes For Theme Park Visitors

Disney have been involved in immersive technology in a number of ways, from way back in the days of the ‘Disney Quest’ arcades to recent work by the Disney Research team on virtual reality (VR) and mixed reality (MR). The House of Mouse are now aiming to bring augmented reality (AR) to their theme park guests.

When visiting one of Disney’s famous parks, one of the things everyone wants to do is get some great photographs. The new AR system that Disney is working on will let guests appear as their favourite Disney character without needing to put on a physical costume.

The system is being worked on by Disney Research, the technological development arm of the company, who have previously worked on other immersive projects, such as the mixed reality bench.

Disney Research have released a study describing an application that lets guests wear an AR costume for photo opportunities. The application is designed to let guests see themselves wearing an outfit based on their favourite Disney character, including the famous Disney Princess line, or alternatively a Jedi or an Avenger.

The advantage of the AR-based system is that guests get a virtual ‘costume’ that fits perfectly, without having to change into a physical one, which of course removes worries about hygiene or fabric allergies.

The report states: “Imagine taking a selfie and magically wearing your favorite character or hero’s suit. While we did see digital cloth added onto people in the past, it was often with a depth camera such as a [Microsoft] Kinect, which is not always reliable in outdoor conditions, and is not as widespread as monocular cameras on mobile devices.”

Disney AR Poser

The report goes on to suggest that the technology could be extended beyond selfies or PhotoPass shots, with AR smart glasses letting guests see enhanced theme park shows in which performers look exactly like the Disney character they are portraying.

For further coverage on new AR technology, keep checking back with VRFocus.

Disney Research Reveal AR Poser, A New Photo-Ops Program

A number of the scientific and technological breakthroughs within the immersive industry are found in the more unlikely of places. This includes Disney Research, who work on a number of different applications and technologies to help push the industry forward. Their newest development is an augmented reality (AR) technology which allows users to pose with, or as, a digital avatar for enhanced photo opportunities.

AR Poser

The program, which has the full title of “AR Poser: Automatically Augmenting Mobile Pictures with Digital Avatars Imitating Poses,” is an example of how technology can be used to enhance photo opportunities and selfies within the entertainment industry. During use, AR Poser interprets the pose of a human subject in front of the camera using 2D pose estimation. This is done with the RGB information received from the camera; once it has that information, it matches the detected pose against a set of predetermined 3D poses in its library and projects the closest match possible.
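
The article doesn’t spell out how the matching step works, but the description (estimate a 2D pose, then pick the closest entry from a finite pose library) maps naturally onto a nearest-neighbour search. The Python sketch below is purely illustrative and not Disney’s implementation; the joint layout, library poses, and distance metric are all assumptions.

```python
import numpy as np

def normalize(keypoints):
    """Center the 2D keypoints and scale to unit size so the match
    is invariant to where the subject stands and how large they appear."""
    pts = np.asarray(keypoints, dtype=float)
    pts -= pts.mean(axis=0)                # remove translation
    scale = np.linalg.norm(pts)
    return pts / scale if scale > 0 else pts

def closest_library_pose(detected_2d, pose_library):
    """Return the library entry whose stored 2D projection is nearest
    (in Euclidean distance) to the detected pose.

    pose_library: list of (name, projection) pairs, where each
    projection is an (n_joints, 2) array precomputed from a 3D pose.
    """
    query = normalize(detected_2d)
    best_name, best_dist = None, float("inf")
    for name, proj in pose_library:
        dist = np.linalg.norm(query - normalize(proj))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name, best_dist

# Illustrative 3-joint example: a detected pose is matched against
# two canned library poses.
library = [
    ("arms_up",   [[0, 2], [-1, 3], [1, 3]]),
    ("arms_down", [[0, 2], [-1, 1], [1, 1]]),
]
print(closest_library_pose([[0, 2], [-1, 2.9], [1, 3.1]], library))
```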

The current capabilities of the AR Poser application are only the start, with the limited functionality expected to expand as the program is developed. In its current operation, the image must include a real-world marker so that the camera position can be gauged and depth determined. The poses and shapes built into the program’s library are also limited in the current version. These, along with a number of other areas, are all planned to be developed further to allow for a more robust and stable application capable of delivering a much wider range of features.
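
The article doesn’t explain how the marker is used, but recovering camera pose and scale from a marker of known size is a standard Perspective-n-Point (PnP) problem. As a rough sketch of that general technique (not necessarily what AR Poser does), OpenCV’s solvePnP can recover the camera’s rotation and translation from the marker’s known corner geometry; the marker dimensions, detected pixel coordinates, and camera intrinsics below are placeholder values.

```python
import numpy as np
import cv2  # OpenCV

# Known 3D corner positions of a square marker, 10 cm on a side,
# expressed in the marker's own coordinate frame (metres).
marker_3d = np.array([[-0.05, -0.05, 0.0],
                      [ 0.05, -0.05, 0.0],
                      [ 0.05,  0.05, 0.0],
                      [-0.05,  0.05, 0.0]], dtype=np.float32)

# Where those corners were detected in the photo (pixel coordinates);
# placeholder values standing in for a real marker detector.
marker_2d = np.array([[310.0, 250.0],
                      [410.0, 252.0],
                      [408.0, 352.0],
                      [312.0, 350.0]], dtype=np.float32)

# Assumed pinhole intrinsics for the phone camera.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)  # assume no lens distortion

# Recover the camera's rotation and translation relative to the marker.
ok, rvec, tvec = cv2.solvePnP(marker_3d, marker_2d, K, dist_coeffs)
if ok:
    # tvec gives the marker position in camera space, which fixes the
    # scale and depth needed to place a life-size avatar into the photo.
    print("camera-to-marker translation (m):", tvec.ravel())
```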

AR Poser

One of the challenges the program needs to overcome in the future is operating effectively across the many different hardware setups that are out there. As mobile devices would be the best target platform, Disney Research are looking to make the solution cloud-based so as to offload some of the hardware requirements and processing, allowing for a wider set of supported devices. Currently, the whole process of interpreting an image and projecting the digital avatar takes only about two seconds.
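
Disney Research haven’t published the cloud architecture, but the offloading idea itself is simple: the phone uploads a photo, a server performs the expensive pose estimation and matching, and the phone only has to display the result. A hypothetical client-side sketch follows; the endpoint URL and response fields are invented for illustration.

```python
import requests  # standard HTTP client

def request_avatar_overlay(image_path, server="https://example.invalid/ar-poser"):
    """Upload a photo to a (hypothetical) pose-matching service and
    return its JSON response. Server-side, the heavy lifting -- pose
    estimation and library matching -- runs on cloud hardware, so the
    phone only needs to display the returned composite."""
    with open(image_path, "rb") as f:
        resp = requests.post(server, files={"image": f}, timeout=10)
    resp.raise_for_status()
    return resp.json()  # e.g. {"pose": "arms_up", "overlay_url": "..."}
```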

Though still in the early stages, AR Poser has the chance to lead to numerous advancements in AR picture modification and photo-taking opportunities. Disney Research have released a short video showcasing AR Poser which you can view below, and for all the latest from the division in the future, keep reading VRFocus.

GDC 2018: Disney Is Converting Movie Scripts Into VR In Real-Time With Cardinal

Disney Research’s latest experiments with VR can turn a written script into a VR experience with the help of its new Cardinal system.

Revealed at this week’s GDC event in a session attended by Variety, Cardinal is able to take natural language scripts and turn them into pre-visualizations in VR. The software can detect actions and characters and turn them into animations in scenes that filmmakers can enter with VR, as if attending a virtual rehearsal on set. This gives them a feel for the mechanics of a scene before anyone has walked in front of a camera or started production on animated movies. They can even add their own voice recordings for live readings.
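
Disney hasn’t detailed how Cardinal parses scripts, but the first step it implies (detecting characters in natural-language text and the actions they perform) can be illustrated with a toy extraction pass. The regex-based sketch below is purely illustrative; a production system would rely on far more robust natural language processing.

```python
import re

# Toy extraction of (character, action) pairs from screenplay-style
# text: a stand-in for the natural-language understanding Cardinal
# would need before it can animate a scene.
ACTION_RE = re.compile(r"^(?P<char>[A-Z][a-z]+) (?P<verb>\w+s)\b(?P<rest>.*)$")

script = [
    "Mickey walks to the door.",
    "Minnie waves at Mickey.",
    "The rain keeps falling.",  # no simple subject-verb match; skipped
]

for line in script:
    m = ACTION_RE.match(line)
    if m:
        # Each match could be mapped to a character rig and an
        # animation clip ("walks" -> walk cycle, "waves" -> wave).
        print(f"character={m['char']!r:10} action={m['verb']!r}")
```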

Story Image Credit: Janko Roettgers

The aim with Cardinal is to drastically reduce the amount of time pre-visualization takes during pre-production of a movie. According to Digital Platforms Group Lead Sasha Schriber, the system is able to take “script to storyboard to animation in real-time.”

We weren’t in the session ourselves to hear about the workings behind the system, but Disney says there’s still lots of work to be done on it. Eventually, though, Disney Research sees the platform being used in the wider industry. Schriber also sees the system being used outside of filmmaking, telling attendees at the GDC session: “They asked us: what about us?”

New Procedural Speech Animation From Disney Research Could Make for More Realistic VR Avatars

A new paper authored by researchers from Disney Research and several universities describes a new approach to procedural speech animation based on deep learning. The system samples audio recordings of human speech and uses them to automatically generate matching mouth animation. The method has applications ranging from increased efficiency in animation pipelines to making social VR interactions more convincing by animating the speech of avatars in real-time.

Researchers from Disney Research, University of East Anglia, California Institute of Technology, and Carnegie Mellon University, have authored a paper titled A Deep Learning Approach for Generalized Speech Animation. The paper describes a system which has been trained with a ‘deep learning / neural network’ approach, using eight hours of reference footage (2,543 sentences) from a single speaker to teach the system the shape the mouth should make during various units of speech (called phonemes) and combinations thereof.

Below: The face on the right is the reference footage. The left face is overlaid with a mouth generated from the system based only on the audio input, after training with the video.

The trained system can then be used to analyze audio from any speaker and automatically generate the corresponding mouth shapes, which can then be applied to a face model for automated speech animation. The researchers say the system is speaker-independent and can “approximate other languages.”

We introduce a simple and effective deep learning approach to automatically generate natural looking speech animation that synchronizes to input speech. Our approach uses a sliding window predictor that learns arbitrary nonlinear mappings from phoneme label input sequences to mouth movements in a way that accurately captures natural motion and visual coarticulation effects. Our deep learning approach enjoys several attractive properties: it runs in real-time, requires minimal parameter tuning, generalizes well to novel input speech sequences, is easily edited to create stylized and emotional speech, and is compatible with existing animation retargeting approaches.
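
The ‘sliding window predictor’ the abstract mentions can be illustrated with a toy sketch: each frame of mouth-shape parameters is predicted from a fixed window of surrounding phoneme labels, which is what lets the mapping capture coarticulation (neighbouring sounds influencing the current mouth shape). The phoneme inventory, window size, and randomly initialised network below are stand-ins, not the trained model from the paper.

```python
import numpy as np

PHONEMES = ["sil", "ah", "b", "k", "m", "oo", "s", "t"]  # toy inventory
WINDOW = 5           # phoneme labels of context per prediction
N_MOUTH_PARAMS = 16  # e.g. coefficients of a mouth-shape model

rng = np.random.default_rng(0)

def one_hot(label):
    v = np.zeros(len(PHONEMES))
    v[PHONEMES.index(label)] = 1.0
    return v

# An untrained stand-in for the learned nonlinear mapping: a single
# hidden-layer network from a flattened window of one-hot phonemes
# to mouth-shape parameters.
W1 = rng.normal(size=(WINDOW * len(PHONEMES), 64))
W2 = rng.normal(size=(64, N_MOUTH_PARAMS))

def predict_mouth(phoneme_seq):
    """Slide a fixed-size window over the phoneme sequence and emit
    one mouth-parameter vector per window position."""
    frames = []
    for i in range(len(phoneme_seq) - WINDOW + 1):
        window = phoneme_seq[i:i + WINDOW]
        x = np.concatenate([one_hot(p) for p in window])
        hidden = np.tanh(x @ W1)     # nonlinearity
        frames.append(hidden @ W2)   # mouth parameters for this frame
    return np.array(frames)

# A short utterance as a toy phoneme string, padded with silence.
seq = ["sil", "sil", "s", "ah", "s", "t", "sil", "sil"]
print(predict_mouth(seq).shape)  # (4, 16): one mouth pose per window
```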

Creating speech animation which matches an audio recording for a CGI character is typically done by hand by a skilled animator. And while this system falls short of the sort of high fidelity speech animation you’d expect from major CGI productions, it could certainly be used as an automated first-pass in such productions or used to add passable speech animation in places where it might otherwise be impractical, such as NPC dialogue in a large RPG, or for low budget projects that would benefit from speech animation but don’t have the means to hire an animator (instructional/training videos, academic projects, etc.).

In the case of VR, the system could be used to make social VR avatars more realistic by animating the avatar’s mouth in real-time as the user speaks. True mouth tracking (optical or otherwise) would be the most accurate method for animating an avatar’s speech, but a procedural speech animation system like this one could be a practical stopgap if / until mouth tracking hardware becomes widespread.

Some social VR apps are already using various systems for animating mouths; Oculus also provides a lip sync plugin for Unity which aims to animate avatar mouths based on audio input. However, this new system based on deep learning appears to provide significantly higher detail and accuracy in speech animation than other approaches we’ve seen thus far.

Disney Unveils Mixed Reality Magic Bench

Disney has historically had a great interest in new visual entertainment technologies. The House of Mouse has been involved in virtual reality (VR) since the 90s, at one point running an immersive arcade at one of its parks promoting VR and AR technologies. In more modern times, Disney has been involved in research and development of VR technology, and has just unveiled one of its breakthroughs – the mixed reality (MR) Magic Bench.

One of the fantasies at any of the Disney theme parks is that children can ‘meet’ the characters from their favourite Disney movies. Instead of actors in costumes, though, Disney now have another option. The Magic Bench allows users to sit down on a bench and see and hear a character sat next to them. In one of the current versions of the Magic Bench, one of the animated characters hands the user an orb. Using haptics, the user can feel the other character move around on the bench, or, in one encounter, feel an animated donkey kick the side of the bench.

The aim of the project by Disney Research was to create a mixed reality experience that didn’t require any additional equipment or set-up time, a ‘walk up and play’ experience. “This demonstrates [human-computer interaction] in its simplest form: a person walks up to a computer, and the computer hands the person an object,” the researchers write in their paper describing the Magic Bench. “Our mantra for this project was: hear a character coming, see them enter the space, and feel them sit next to you,” added Moshe Mahler, principal digital artist at Disney Research.

It’s not difficult to imagine this sort of technology being implemented at any of Disney’s theme parks, giving customers the chance to really interact with famous animated characters from Disney’s catalogue.

A video demonstration of the Magic Bench technology is available to view below.

VRFocus will bring you further information on Disney’s Magic Bench as it becomes available.

Disney Reveals How to Catch a Real Ball in Virtual Reality

Disney’s Research arm have revealed a system where a user can catch a real, physical ball while wearing a headset and still immersed in virtual reality (VR).

Disney Research have released a video showing how someone wearing a head mounted display (HMD) who can only see the virtual landscape can still catch a real ball that is thrown at them. The technique uses position sensors to generate a projected trajectory for the user in the headset. A virtual version of the ball is then projected in real-time into the virtual world, allowing the VR user to catch the ball, even though the physical ball is not visible.
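
The article doesn’t reproduce the estimator, but one plausible way to generate a ‘projected trajectory’ from position-sensor samples is to fit the tracked positions to ballistic motion under gravity and extrapolate forward in time. A minimal sketch of that approach, using fabricated sample data, follows.

```python
import numpy as np

G = np.array([0.0, -9.81, 0.0])  # gravity (m/s^2), y is up

def fit_ballistic(times, positions):
    """Least-squares fit of initial position and velocity, assuming
    p(t) = p0 + v0*t + 0.5*g*t^2. Returns (p0, v0)."""
    t = np.asarray(times)
    p = np.asarray(positions) - 0.5 * G * t[:, None] ** 2  # remove gravity term
    A = np.stack([np.ones_like(t), t], axis=1)             # [1, t] design matrix
    coef, *_ = np.linalg.lstsq(A, p, rcond=None)
    return coef[0], coef[1]                                # p0, v0

def predict(p0, v0, t):
    """Extrapolate the fitted trajectory to future time t."""
    return p0 + v0 * t + 0.5 * G * t ** 2

# Fabricated tracker samples from the first 0.2 s of a toss.
ts = np.array([0.00, 0.05, 0.10, 0.15, 0.20])
true_p0, true_v0 = np.array([0, 1.5, 0]), np.array([1.0, 4.0, 2.0])
ps = np.array([predict(true_p0, true_v0, t) for t in ts])

p0, v0 = fit_ballistic(ts, ps)
print(predict(p0, v0, 0.6))  # rendered ball position 0.6 s into flight
```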

The system uses some ‘predictive assistance’ by showing the VR user various overlays that make it easier to catch. The video shows a ball being thrown between two people, and the person in the VR headset is able to catch the ball every time.

Matthew Pan and Günter Niemeyer, from Disney Research Los Angeles, have released a paper that contains full details of this VR experiment, including the full set of equations used to create the virtual ball. The simulated ball was rendered at 120fps on an Oculus Rift CV1 headset, in an environment with a minimal amount of decoration in order to keep the framerate high.

It’s not currently clear what the further applications of this technology are, but the researchers at Disney say in their paper that it provides “a valuable insight which informs how interactions with dynamic objects can be achieved while users are immersed in VR.”

The researchers also say it has opened up other avenues for future work in their field.

You can find the full text of the research paper at the Disney Research website. You can watch the video demonstration of the technology below.

VRFocus will keep you up-to-date on academic research into VR.

Disney Research Shows How VR Can Be Used to Study Human Perception

Researchers from Disney use a virtual reality headset and a real ball tracked in VR to understand the perceptual factors behind the seemingly simple act of catching a thrown object. The researchers say the work sets a foundation for more complex and meaningful dynamic interaction between users in VR and real-world objects.

Disney Research, an arm of The Walt Disney Company, does broad research that applies across the company’s media and entertainment efforts. With Disney known as one of the pioneering companies to employ VR in the out-of-home space prior to the contemporary era of VR, it should come as no surprise that the company continues to explore this space.

In a new paper from Disney Research, a VR headset is used in conjunction with a high-end tracking system to recreate the simple experience of catching a real thrown ball in VR. But why? With the experience being simulated in virtual reality, the researchers are free to easily modify it—in ways largely impossible in real life—as they see fit to study different aspects of the perception and action of catching a thrown object.

In this case, researchers Matthew K.X.J. Pan and Günter Niemeyer started with a visualization that showed just a ball flying through the air as you’d see in real life. Using VR they were able to add a virtual trajectory line to the ball’s path, and even remove the ball completely, to see what happened when the user had only the trajectory line to rely on. The video heading this article summarizes the tests they performed.

Perhaps most interesting is what happened when they removed the ball and the trajectory line and only showed the user a target of where the ball would land. Doing this led to a distinct difference in the user’s method of catching the ball.

Whereas the ball and trajectory visualization resulted in a natural catching motion, with only the target to rely on, the user switched to what the authors described as a more “robotic” motion to align their hand to where the ball would land, and then wait for it to arrive.
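
One way such a landing target could be computed (the paper’s actual equations aren’t reproduced in the article) is to take a fitted initial state, as in the earlier trajectory sketch, and solve the quadratic for the moment the ball descends through catch height. The catch height and initial state below are illustrative.

```python
import numpy as np

def landing_target(p0, v0, catch_height, g=9.81):
    """Solve -0.5*g*t^2 + v0_y*t + (p0_y - catch_height) = 0 for the
    later (descending) root, then return the position at that time."""
    a, b, c = -0.5 * g, v0[1], p0[1] - catch_height
    disc = b * b - 4 * a * c
    if disc < 0:
        return None, None  # ball never reaches that height
    t = (-b - np.sqrt(disc)) / (2 * a)  # larger root, since a < 0
    return p0 + v0 * t + 0.5 * np.array([0.0, -g, 0.0]) * t ** 2, t

# Ball launched from 1.5 m with upward velocity; target drawn at 1.2 m.
pos, t = landing_target(np.array([0.0, 1.5, 0.0]), np.array([1.0, 4.0, 2.0]), 1.2)
print(f"show target at {pos} (ball arrives in {t:.2f} s)")
```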

This suggests that our brains don’t simply calculate the endpoint of the trajectory and then move our hand to the point to catch it, like a computer might; instead it seems as if we continuously perceive the motion of the ball and synchronize our movements with it in some way. The authors elaborate:

20 [of 132] of these tosses were made with only the [visualization of the] virtual ball which most closely matches how balls are caught in the physical world. In this condition, 95% of balls were caught, indicating that our system allows users to catch reliably. Video and screen capture footage indicate that during the catch, the user visually focuses on the trajectory of the ball and does not keep their hands within viewing range until just before the catch. From this evidence, it can be inferred that proprioception is used to position the hands using visual and depth cues of the ball.

Catching with the other visualizations did not seem to affect catching behaviors, except in the cases where the virtual ball was not rendered: the removal of the virtual ball from the VR scene seems to allow the catcher’s hands to reach the catch location much earlier prior to catching. The most apparent explanation for this phenomenon lies with the observation that the user is forced to alter catching strategy: the catcher has to rely on the target point/trajectory and so the motor task has changed from a catching task, which had required higher brain functions to estimate the trajectory, to a simpler, visually guided pointing task requiring no estimation at all.

The researchers say that their work aims to study how people can interact dynamically with real objects in VR, with this first study of a simple task laying the groundwork for more complex interactions:

…combining virtual and physical dynamic interactions to enrich virtual reality experiences is feasible. […] We believe this work provides valuable insight which informs how interactions with dynamic objects can be achieved while users are immersed in VR. As a result of these preliminary findings, we have discovered many more avenues for future work in dynamic object interactions in VR.

Which in research talk means, ‘you better bet we’ll be studying this further!’
