Google is Exploring Ways to Let You Animate Your ‘Blocks’ Creations

Google’s team at Daydream Labs has been prototyping ways of animating objects and characters in Blocks, the company’s recently released VR modelling tool. In a recent entry on the Google Blog, senior UX engineer Logan Olson describes how it could give users the power to “create expressive animations without needing to learn complex animation software.”

With its low-poly aesthetic and simple menu systems, Blocks is perhaps the least intimidating 3D modelling tool currently available for VR, and the Daydream Labs team looked to retain that approachability as they prototyped animation systems during their ‘one-week hackathon’. Olson explains that this boils down to three steps: preparing the model, controlling it, and finally recording sequences for playback.

 

First, the static models created in Blocks require some ‘prep’: adding appropriate control points and joints for inverse kinematics (for models with a rigid skeleton), or for a ‘shape matching’ technique that works better for ‘sentient blobs’ or anything with a less defined shape, and is good for ‘wiggling’. Olson explains that there is a short setup process for shape matching, but it “could eventually be automated”.
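The blog doesn’t detail Google’s implementation, but ‘shape matching’ is a well-known soft-body technique (popularised by Müller et al.’s meshless shape matching): each frame, the deformed points are pulled toward the best rigid fit of the model’s rest shape, producing exactly the springy ‘wiggling’ Olson describes. A minimal Python/NumPy sketch of that idea, where the function names and stiffness value are purely illustrative:

```python
import numpy as np

def shape_match_goals(rest, current):
    """Compute per-point goal positions: the best rigid transform of the
    rest shape fitted to the current (possibly deformed) point positions."""
    c0 = rest.mean(axis=0)        # rest-shape centroid
    c = current.mean(axis=0)      # current centroid
    q = rest - c0                 # rest positions, centred
    p = current - c               # current positions, centred
    A = p.T @ q                   # covariance-like matrix
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt                    # closest pure rotation (polar decomposition)
    if np.linalg.det(R) < 0:      # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt
    return q @ R.T + c            # goal_i = R * q_i + current centroid

def step(rest, current, stiffness=0.5):
    """Pull each point a fraction of the way toward its rigid goal."""
    goals = shape_match_goals(rest, current)
    return current + stiffness * (goals - current)
```

Because the goals are a rigid fit, a model that has only been rotated and translated is left untouched, while any bending or stretching is gradually relaxed back toward the rest shape.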

Once prepared, controlling the movement is where VR is at its most intuitive: the motion-tracked hardware means a simple form of motion capture is readily available, although it’s not always appropriate, depending on what’s being animated. Olson references Mindshow, a creative app due to launch into open beta soon, which embraces this ‘puppeteering’ technique. “People loved ‘becoming’ the object when in direct control,” writes Olson. “Many would role-play as the character when using this interface.”

Alternatively, you can simply grab specific control points of objects and manipulate them, which also works well with multiple users, or you can directly pose the skeleton for keyframes, which Olson notes is ‘much more intuitive’ than traditional apps due to the spatial awareness and control afforded in VR.

Finally, recording and playing back movements could be done with ‘pose-to-pose’ or ‘live-looping’: the former builds complex animations from a sequence of keyframe poses, while the latter records movement in real time and plays it back in a repeating loop, which suits simpler animations. “Press the record button, move, press the button again, and you’re done—the animation starts looping,” writes Olson. “We got these two characters dancing in under a minute.”
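The live-looping flow Olson describes (record between two button presses, then loop the clip) maps onto a very small state machine. A hypothetical sketch, treating poses as opaque values captured once per frame:

```python
class LiveLoopRecorder:
    """Minimal live-looping sketch: press record, capture a pose each
    frame, press again to stop; playback then repeats the clip forever."""

    def __init__(self):
        self.frames = []       # recorded poses
        self.recording = False
        self.t = 0             # playback cursor

    def toggle_record(self):
        """Handle the record button: start a fresh take, or stop and loop."""
        if not self.recording:
            self.frames = []   # starting a new take discards the old one
        self.recording = not self.recording
        self.t = 0

    def tick(self, live_pose=None):
        """Call once per frame; returns the pose to display this frame."""
        if self.recording:
            self.frames.append(live_pose)
            return live_pose   # show the live motion while recording
        if self.frames:
            pose = self.frames[self.t % len(self.frames)]
            self.t += 1        # loop playback of the recorded clip
            return pose
        return live_pose       # nothing recorded yet: pass through
```

The modulo on the playback cursor is what makes the animation “start looping” the moment recording stops.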

As a proof of concept, the experimentation appears to be a success, although it will likely require further refinement before the team considers rolling out these features into a future Blocks update.

The post Google is Exploring Ways to Let You Animate Your ‘Blocks’ Creations appeared first on Road to VR.

Google Tests Interactive Learning with VR Espresso Machine, “People learned faster and better in VR”

The team behind Daydream, Google’s mobile VR platform, is currently conducting experiments with the aim of broadening virtual reality’s use cases to include more interactive learning. With an experimental VR espresso maker at the ready, the team says “people learned faster and better in VR” than by watching training videos when put to the test on how to brew the real thing.

Participants were divided into two groups: one had access to YouTube videos, while the other used a VR training prototype featuring a 3D model of an espresso machine, replete with buttons, turnable knobs, and a steam wand for frothing milk. The team gave everyone as much time as they wanted to go over the steps on how to make espresso.

image courtesy Google

The Daydream team then put the would-be baristas to the test with a real espresso machine. At the end, they gave people a detailed report on how they’d done, including an analysis of the quality of their coffee. According to the experiment, participants in the YouTube tutorial group typically went through the physical task three times, while participants using the VR training method typically went through it twice before obtaining a passing result.


“We were excited to find out that people learned faster and better in VR,” says Google software engineer Ian MacGillivray in a blog post. “Both the number of mistakes made and the time to complete an espresso were significantly lower for those trained in VR (although, in fairness, our tasting panel wasn’t terribly impressed with the espressos made by either group!) It’s impossible to tell from one experiment, of course, but these early results are promising.”

Admittedly, the test wasn’t perfect. MacGillivray says espresso wasn’t a great choice to begin with, as the physical sensation of tamping, or getting the right density of coffee grounds in the metal portafilter, “simply can’t be replicated with a haptic buzz.”

People also don’t listen to instructions or warnings. Voice-overs, written instructions, hints, tutorials on how to use the controller—all of it fell by the wayside once a VR newcomer was popped into the headset. “No matter what warning we flashed if someone virtually touched a hot steam nozzle, they frequently got too close to it in the real world, and we needed a chaperone at the ready to grab their hand away.”

The team says that VR platforms aren’t quite ready when it comes to acquiring certain types of skills either, and contends that VR gloves with better tracking and haptics will be necessary before the medium can move beyond the ‘moving things and pressing buttons’ phase it’s in currently. There’s also the difficulty of giving users freedom of choice, as every choice the Daydream team allowed the user to make caused the number of paths through the tutorial to grow exponentially. “In the end, it was much easier to model the trainer like a video game, where every object has its own state. So instead of the trainer keeping track of all the steps the user did in order (“user has added milk to cup”), we had it track whether a key step had been achieved (“cup contains milk”),” says MacGillivray.
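The state-tracking approach MacGillivray describes can be sketched in a few lines: rather than recording the ordered sequence of steps the user performed, each object-state condition flips when achieved, and completion is just a check over the world state, so the path count stops mattering. All condition and action names here are invented for illustration:

```python
class Trainer:
    """State-based tutorial check, 'like a video game': each object has
    its own state, and a step counts as done when its condition holds,
    regardless of the order in which actions were taken."""

    def __init__(self):
        # Key conditions the tutorial cares about (hypothetical names).
        self.world = {
            "portafilter_tamped": False,
            "shot_pulled": False,
            "cup_contains_milk": False,
        }

    def on_action(self, action):
        """Actions mutate object state; the trainer never records order."""
        effects = {
            "tamp_grounds": "portafilter_tamped",
            "pull_shot": "shot_pulled",
            "pour_milk": "cup_contains_milk",
        }
        if action in effects:
            self.world[effects[action]] = True

    def complete(self):
        """The espresso is 'done' when every key condition is satisfied."""
        return all(self.world.values())
```

Because the trainer only checks resulting states ("cup contains milk") rather than a step history ("user has added milk to cup"), every ordering of the same actions reaches completion without enumerating the paths.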

The team considers the VR espresso training prototype a success, saying that, at the very least, VR is a more useful way to introduce people to a new skill, which can then easily be revisited in VR once context has been established in the physical world.

The post Google Tests Interactive Learning with VR Espresso Machine, “People learned faster and better in VR” appeared first on Road to VR.

Mixed Reality VR Videos Become More Expressive Thanks to Google Research and Daydream Labs

Conveying what it’s actually like in virtual reality (VR) has been one of the biggest hurdles to adoption, with the most common video technique being a combination of green screen and mixed reality (MR) technology. While this helps viewers see what VR players are engaged in, one barrier remained: the headset itself. Now Google Research and Daydream Labs have unveiled a new digital technique that allows a user’s face to be seen whilst wearing a head-mounted display (HMD).

What Google has come up with is a way to make a headset seem transparent, so that viewers watching an MR video can see the range of emotions being portrayed by the player. To do this, the development team uses a combination of 3D vision, machine learning and graphics techniques to build a model of the person’s face, capturing various facial variations. Then a modified HTC Vive containing SMI eye-tracking tech is used to record gaze-related data. This is all blended together to give the illusion of seeing a user’s face whilst they play in a virtual world.


The Google Research Blog goes into much greater detail, with research scientist Vivek Kwatra and software engineers Christian Frueh and Avneesh Sud explaining the future applications of the technology: “Headset removal is poised to enhance communication and social interaction in VR itself with diverse applications like VR video conference meetings, multiplayer VR gaming, and exploration with friends and family. Going from an utterly blank headset to being able to see, with photographic realism, the faces of fellow VR users promises to be a significant transition in the VR world.”

The project will be an ongoing collaboration between Google Research, Daydream Labs and the YouTube team, with the technology set to become available across select YouTube Spaces for creators in the future.

For the latest updates from Google Research and Daydream Labs, keep reading VRFocus.