Experimental Controllers From Microsoft Research Offer A New Way To Feel Objects In VR

We’re eager to go hands-on with Valve’s new prototype Vive controllers, but these new experiments from Microsoft’s Research division might be even more exciting.

These ‘High-fidelity 3D Haptic Shape Rendering on Handheld Virtual Reality Controllers’, named NormalTouch and TextureTouch, were designed by Hrvoje Benko, Christian Holz, Mike Sinclair, and Eyal Ofek. They’re also position-tracked, though the title is long enough already. The first controller features a platform not dissimilar to the analogue sticks seen on a gamepad. Rather than you pushing the stick around to move through a game world, though, the stick rises, lowers, and tilts to match the surfaces and objects you interact with in the virtual world.

As you can see in the video, if you were to run a finger along a table the stick would remain flat, but when your hand travels over an object it adapts to replicate that change. As a finger passes over a ball, the stick tilts and moves with your hand, simulating the curved shape of the object. What’s more, force feedback lets you test the stiffness of surfaces and stops your hand passing through an item, so a balloon would give way under light pressure while a block of concrete would push back firmly, for example. It can even be used to push objects.
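
To make the mechanism concrete, here is a minimal sketch of the general idea behind normal-based shape rendering: find the contact point under the tracked fingertip, tilt the platform to match the local surface normal, and scale force with penetration depth and stiffness. The spherical object, the stiffness value, and the tilt mapping below are illustrative assumptions, not details from the researchers' paper.

```python
# Minimal sketch of normal-based haptic shape rendering, the general idea behind
# a tilting-platform controller like NormalTouch (not Microsoft's code).
# A single spherical object and a penalty-based stiffness model are assumptions.
import math

SPHERE_CENTER = (0.0, 0.0, 0.0)   # hypothetical ball in the scene (meters)
SPHERE_RADIUS = 0.05
STIFFNESS_N_PER_M = 400.0         # concrete would be stiff, a balloon soft

def render_contact(finger_pos):
    """Return (pitch_deg, roll_deg, force_newtons) for the platform."""
    dx = finger_pos[0] - SPHERE_CENTER[0]
    dy = finger_pos[1] - SPHERE_CENTER[1]
    dz = finger_pos[2] - SPHERE_CENTER[2]
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    if dist == 0.0 or dist >= SPHERE_RADIUS:
        return 0.0, 0.0, 0.0              # no contact: platform stays flat
    nx, nz = dx / dist, dz / dist         # horizontal components of the surface normal
    # Tilt the platform so its face roughly matches the local surface normal.
    pitch = math.degrees(math.asin(max(-1.0, min(1.0, nx))))
    roll = math.degrees(math.asin(max(-1.0, min(1.0, nz))))
    # Penalty force grows with penetration depth, scaled by object stiffness.
    penetration = SPHERE_RADIUS - dist
    force = STIFFNESS_N_PER_M * penetration
    return pitch, roll, force

if __name__ == "__main__":
    # Fingertip slightly inside the top of the ball, a bit off-center.
    print(render_contact((0.01, 0.045, 0.0)))
```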

TextureTouch, meanwhile, uses a 4×4 matrix of actuated pins to better replicate the surface of objects, applying feedback as you drag your hand across them. The prototypes’ creators have tested the controllers and published their findings online: in tasks like tracing a virtual object with a finger, these controllers produced better ratings than controllers using vibrotactile feedback or visual feedback alone.
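
In the same spirit, a 4×4 pin display can be thought of as sampling a virtual height field under the fingertip and clamping each sample to the pins' travel. The grid spacing, pin travel, and bump-shaped surface below are made-up values for illustration, not TextureTouch's specifications.

```python
# Sketch of driving a 4x4 pin array from a virtual height field (the idea
# behind TextureTouch-style texture rendering; all numbers are assumptions).
# Each pin extends to the sampled surface height under its grid cell.
PIN_GRID = 4
PIN_PITCH_M = 0.0025      # assumed spacing between pins under the fingertip
MAX_EXTENSION_M = 0.002   # assumed pin travel

def surface_height(x, y):
    """Hypothetical virtual surface: a shallow bump centered at (5 mm, 5 mm)."""
    return max(0.0, 0.002 - ((x - 0.005) ** 2 + (y - 0.005) ** 2) * 50)

def pin_extensions(finger_x, finger_y):
    """Sample the surface under the fingertip and clamp to pin travel."""
    rows = []
    for i in range(PIN_GRID):
        row = []
        for j in range(PIN_GRID):
            x = finger_x + (i - (PIN_GRID - 1) / 2) * PIN_PITCH_M
            y = finger_y + (j - (PIN_GRID - 1) / 2) * PIN_PITCH_M
            row.append(min(MAX_EXTENSION_M, surface_height(x, y)))
        rows.append(row)
    return rows

if __name__ == "__main__":
    for row in pin_extensions(0.005, 0.005):
        print(["%.4f" % h for h in row])
```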

There’s no denying these are intriguing solutions to VR’s current feedback problem, though they’re far away from any sort of consumer implementation. For now, the best we have is the vibration feedback provided by the Oculus Touch and HTC Vive wands.

File these controllers away with other interesting experiments, like Oculus Research’s haptic feedback prototypes.

The High-end VR Room of the Future Looks Like This

Today’s VR systems are both fantastic and restrictive: they blow you away, but it’s clear how far they have to go. The HTC Vive is arguably the best out there, but having to buy a souped-up laptop just to run it, paying full price for brief games that feel more like demos, and trailing a huge cable off your head and fumbling to mount trackers on your ceiling…it’s not ideal. But it’s still incredible enough to give a taste of where it’s headed.

Here’s my best guess of what the future high-end VR setup looks like. I’m an early-stage VC focused on virtual and augmented reality, so I pieced this together based on the forward-thinking pitches and demos I’ve been lucky enough to see through my work, plus a lifetime of burning through sci-fi and video games. Check out the bottom of this post for a list of VR inspiration.

A side note: AR will be much bigger than VR, in both the diversity of use cases and market size (analysts predict $30B for VR versus $90B for AR by 2020), but I still believe that most homes will have a dedicated VR space for total immersion.

Body movement

Let’s start from the ground up. Forget the room scale debate: the VR setup of the future moves with you. Maybe it uses an omnidirectional treadmill that adjusts speed and incline based on viewer inputs. To be truly immersive, it needs to be around 8 feet by 8 feet, given that the average sprint stride length for men—the longest possible stride variant—is 93 inches. That gives users more than enough space to walk, run, and even sprint while in VR. Or maybe a section of the floor itself serves as the treadmill, raised up as a platform that controls pitch, yaw, roll, and speed.
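
As a rough illustration of the control problem (not any shipping product's algorithm), an omnidirectional treadmill could run a simple re-centering loop: measure how far the user has drifted from the middle of the platform and drive the belt to cancel that drift. The platform size comes from the stride figure above; the gain is an arbitrary assumption.

```python
# Toy sketch of the kind of control loop an omnidirectional treadmill might
# run: drive the belt so the user drifts back toward the platform's center.
# Gains, platform size, and the proportional scheme are all assumptions.
PLATFORM_FT = 8.0            # 93 in is about 7.75 ft of sprint stride, rounded up
RECENTER_GAIN = 1.5          # belt ft/s per ft of drift from center

def belt_velocity(user_x_ft, user_y_ft):
    """Belt velocity (vx, vy) that counteracts the user's drift from center."""
    half = PLATFORM_FT / 2
    # Clamp positions to the platform so a tracking glitch can't spike belt speed.
    x = max(-half, min(half, user_x_ft))
    y = max(-half, min(half, user_y_ft))
    return -RECENTER_GAIN * x, -RECENTER_GAIN * y

if __name__ == "__main__":
    print(belt_velocity(1.0, -2.5))   # user a step forward and off to one side
```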

Of course, not everyone wants to—or can—be on their feet for long periods, and plenty of immersive entertainment, like watching movies, is sedentary. VR experiences will support a seated and reclining mode when appropriate and shouldn’t be more complicated than pulling up a standard chair. Movement in these modes will likely employ similar mechanics to those we’re beginning to see today, like teleportation via gesture or gaze.

Tactile feedback

Next up is the bodysuit. To mimic the tactile feedback that you experience in real life, you’ll need sensors and haptics all over your body or at least in significant areas, like the face, hands, and feet. Focused, acute pulses simulate sharp points; broader, more distributed ones can simulate sensations like dipping into water. For those who want to push immersion further, optional climate controls mirror environmental conditions (within a safe temperature range).

The first hardware generation attempting to solve the body-feedback problem will likely use full bodysuits with haptic responses aligned to the VR experience. The suit’s gloves will simulate gripping objects by restricting finger movement: wrap your hands around a hard plastic cup in VR, and your gloves will freeze at the point where you can’t squeeze any further. Squishier objects will have more give. It’s possible that putting on a full suit will be too much effort for most people, and they’ll find that hand and facial coverage is enough to give them the immersion level they want. We have more nerve receptors in our fingers than anywhere else in the body (besides our feet and lips), so covering tactile input in the hands may be enough to make the mind suspend its disbelief while in VR.
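
A rough sketch of that grip-clamp idea, assuming a simple linear compliance model (squeeze force divided by stiffness gives how far an object deforms): the gloves lock finger closure at the deformed surface, so a stiff cup barely yields while a balloon squashes almost flat. All numbers below are invented for illustration.

```python
# Sketch of the "grip clamp" idea described above: the glove stops finger
# flexion at an object's surface, with softer objects allowing more squeeze.
# The linear compliance model and the example numbers are assumptions.
def allowed_closure(grip_force_n, object_radius_m, stiffness_n_per_m):
    """Radius at which the fingers lock for a given squeeze force."""
    give = grip_force_n / stiffness_n_per_m     # softer objects deform more
    give = min(give, object_radius_m)           # can't squeeze past the center
    return object_radius_m - give

if __name__ == "__main__":
    hard_cup = allowed_closure(grip_force_n=10, object_radius_m=0.04,
                               stiffness_n_per_m=5000)   # barely compresses
    balloon = allowed_closure(grip_force_n=10, object_radius_m=0.04,
                              stiffness_n_per_m=150)     # lots of give
    print(hard_cup, balloon)
```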

Control options

Our future VR setups won’t need controllers. Steve Jobs once said about using the human hand for interaction that “God gave us ten styluses…let’s not invent another.” Imagine the same touch and motion-based actions we’re used to on mobile phones, only happening in the air with our hands while we’re in VR. Need to hold something as part of a VR experience? Your gloves mimic the width and feel of a gun in a first-person shooter, the handle of a scalpel in a surgery simulator, or the stitching on a football…all while you’re empty-handed. Voice UI can supplement gestures with more detailed natural language commands.

Weight is tougher to simulate in VR. The suit could stiffen and slow a user’s movement corresponding to the relative weight of an object: e.g., stooping to pick up a piece of furniture would force a slow stand-up, versus an unaffected stand-up for a feather. Haptic feedback, movement speed in VR, and other techniques could add to the weight effect. For an in-depth discussion of the weight problem in VR today, see “
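
One simple way to think about that stiffen-and-slow approach is as a speed multiplier applied to the suit's tracked motion, decreasing with the held object's mass. The mapping below is a made-up example to illustrate the idea, not a published technique.

```python
# Sketch of the movement-scaling idea for simulating weight: the heavier the
# held object, the more the suit slows the wearer's stand-up or arm motion.
# The linear mapping and its constants are illustrative assumptions.
def motion_speed_scale(object_kg, max_kg=40.0, min_scale=0.25):
    """Return a multiplier in [min_scale, 1.0] applied to rendered motion speed."""
    load = min(max(object_kg, 0.0), max_kg) / max_kg
    return 1.0 - (1.0 - min_scale) * load

if __name__ == "__main__":
    print(motion_speed_scale(0.01))   # a feather: essentially unaffected
    print(motion_speed_scale(35.0))   # a piece of furniture: a slow stand-up
```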

VR inspiration:

• Manus VR
• Neurodigital
• Handpose
• Dexta Robotics
• Minority Report (film)

AR and VR viewers:

• HTC Vive (VR)
• Snow Crash (novel)

Omnidirectional treadmills:

• Virtuix Omni
• Cyberith Virtualizer
• Ready Player One (novel)

Haptic feedback bodysuits:

• Teslasuit
• Nullspace VR (hand, arm, and chest coverage)
• Mindmaze
• Emotiv
• 8i
• Star Trek (TV show)

Eye tracking:

• Fove
• Eyefluence
• VisiSonics
• RedPill VR

Scent creation:

• Cyrano/ONotes

(Disclosure: the author is a VC at Accomplice, which is a sponsor of Upload.)

Oculus Is Offering A $250,000 Grant For Perception Research

Last week, Oculus Chief Scientist Michael Abrash stood on the stage at Oculus Connect 3 and talked about where he thinks VR will be in five years’ time. He made some bold predictions that will take a lot of work and resources to achieve.

That’s why Oculus Research is launching a $250,000 grant initiative to advance work in a few key areas. Unlike the company’s large investments in content, this money will be split among a maximum of three research proposals in vision and cognitive science. The research will be carried out over the next one to two years, and submissions should come from academic institutions.

Oculus is looking to make progress in very specific fields with this money, and the findings from successful applicants will be released to the public. The company has outlined what it’s hoping to find in a Call for Research.

The first area the company is looking at is ‘Self-motion in VR’. That doesn’t mean new locomotion techniques, but rather the ways that information sources like a wider field of view affect users’ behavior in “three-dimensional scenes”. “More specifically,” the call notes, “we are interested in how these cues to depth may change the way the visual system uses other sources of shape information…to recover the three-dimensional layout of the virtual or augmented scene”.

Oculus is also looking for a team to develop a way to generate a ‘dataset of binocular eye movements’ within the real world. You might remember Abrash speaking about the complexity of delivering perfect eye-tracking in his talk last week, and this might be related. “While eye movements generated in laboratory settings are well studied,” the call reads, “much less data is available about eye movements in the natural world or in virtual reality.”

‘Multisensory studies’ is next. Oculus wants to understand why VR and AR experiences that cover multiple senses are so much more compelling than those that address a single one. “We would like to determine what features and characteristics make multisensory information so valuable in AR/VR,” the call notes.

Finally, Oculus is interested in “biological motion related to social signaling”. Again, this relates to another part of Abrash’s talk, this time concerning virtual humans. The company wants to establish the gestures, facial expressions, eye movements and other factors that are essential to communicating our intended messages beyond mere words. With a clear understanding of this, we could see more life-like avatars.

Submissions need to be emailed to callforresearch@oculus.com and will be a maximum of five pages in length, outlining methods, budget, and estimated timelines. Reviews for proposals will begin on October 25th and successful applicants will be contacted on November 1st.

Dexta’s ‘Dextarity Integration Engine’ Lets Devs Build Tactile VR Worlds

Dexta’s exoskeleton glove is a haptic input system designed to give users a sense of touch in VR. New videos show how the company’s ‘Dextarity Interaction Engine’ gives developers the tools they need to make the virtual world push back.

The subtleties of our sense of touch are taken for granted most of the time; you’re unlikely to have given much thought to just how much information is passed on through the infinitely subtle interactions our fingers and hands have with real-world objects. Dexta’s Dexmo exoskeleton gloves are designed to emulate those subtleties, not only to add immersion to virtual reality experiences, but to give the user the naturalistic, human, tactile guidance we take for granted in reality and miss horribly when it’s gone in VR.

We wrote recently about the Dexmo exoskeleton gloves, quizzing Dexta Robotics’ CEO Aler Gu on what makes Dexmo tick from a hardware perspective and briefly touching upon development APIs. Now, Dexta has released two new videos demonstrating what it calls the ‘Dextarity Interaction Engine’ (DexIE for short), an extended set of developer tools and algorithms that define a sort of haptic language through which devs can communicate events and interactions, and help build a virtual physical interface of sorts.

One of the bigger issues is that when your mind is immersed in a virtual space, with your hands’ actions faithfully reproduced, it’s jarring when your virtual avatar passes through objects that have no physicality in the virtual world. DexIE gives developers options to lend solidity to those objects by ‘preventing’ your fingers’ physical position from penetrating the mesh of a polygonal object.
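
A common way to achieve this in haptics is the proxy (or "god object") approach: render the fingertip at a valid position on the object's surface rather than at the tracked position inside it. The sketch below applies that idea to a simple sphere collider; DexIE's actual algorithms aren't public, so this is only an illustration of the concept.

```python
# Sketch of the proxy ("god object") idea: the rendered fingertip is held on
# the surface of a solid object even when the tracked fingertip has pushed
# inside it. A sphere collider keeps the example simple; this is not DexIE code.
import math

def constrain_to_sphere(tracked, center, radius):
    """Return the position to render: the tracked point, or its surface projection."""
    d = [tracked[i] - center[i] for i in range(3)]
    dist = math.sqrt(sum(c * c for c in d))
    if dist >= radius or dist == 0.0:
        return list(tracked)                 # outside the object: no change
    scale = radius / dist                    # push the point back out to the surface
    return [center[i] + d[i] * scale for i in range(3)]

if __name__ == "__main__":
    # Tracked fingertip 1 cm inside a 5 cm ball at the origin.
    print(constrain_to_sphere((0.0, 0.04, 0.0), (0.0, 0.0, 0.0), 0.05))
```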

Dexta CEO Aler Gu, speaking to us via email, says: “The guiding principle in hand interaction is that the mesh of hands and objects shouldn’t penetrate each other. Once our brain observe that, it will immediately know it’s not real. And that really breaks the immersion,” adding, “There are also a lot of the problems which only appear when you have the right hardware for it. For example, when you are using the Vive controller to pick something up, there is no physics. You are basically ‘sticking’ the object using 2 sets of 3d coordinates.”

With Dexmo, however, Dexta thinks its device makes all the difference. Gu tells us: “There are multiple contact points that actually performs physical interaction with 3d objects, which is why “switch between hands interaction” is actually very difficult to implement. Fortunately our engineers found a way to solve that problem after all. We also have an “Object selection indicator” and “Invalid interaction Indicator” to help further improve the user experience.”

Dexmo gloves are, naturally, limited in that they can only apply opposing force to your hand’s digits, and only within a certain range. But Dexta believes its “complex algorithms” go a long way toward giving VR users that instant hand-object collision notification.

Elsewhere the haptic events are more subtle: emulating the physical ‘click’ of a button or switch, or conveying the weight of an object playfully flipped by your fingers, for example. Then there’s the downright useful tactile sizing of objects, such as picking up a screw and sensing its dimensions before fitting it to a nearby table.
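
A button "click" is often modeled as a force-versus-travel curve: resistance ramps up to a break point and then suddenly drops, which an exoskeleton glove can render as a detent. The curve below is a toy example with invented numbers, not Dexta's implementation.

```python
# Sketch of a button "click" force profile of the sort described above:
# resistance builds as the finger presses, then collapses past a break point.
# Curve shape and all constants are made up for illustration.
def button_force(press_depth_mm, break_mm=2.0, peak_n=3.0, post_n=0.8):
    """Resisting force (N) as a function of how far the button is pressed."""
    if press_depth_mm < 0.0:
        return 0.0
    if press_depth_mm <= break_mm:
        return peak_n * press_depth_mm / break_mm   # ramp up to the break point
    return post_n                                   # sudden drop renders the "click"

if __name__ == "__main__":
    for depth in (0.5, 1.0, 2.0, 2.1, 3.0):
        print(depth, button_force(depth))
```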

Gu tells us what they aimed to convey with the above video:

    This is a parallel comparison between Vive and Dexmo for certain common tasks in VR. The user is requested to pull a lever, turn a knob, and press some buttons. Dexmo can simulate the physical presence of the lever, knob and the different layers of stiffness for the buttons. When the force feedback is combined with the sound, graphics, it really leaps to the next level of immersion.

    We have had a total of 40 volunteers trying out this demo, and 100% of them agreed that Dexmo provides a more immersive VR experience. Another thing we have observed during our user studies is that, when people are using the Vive, we have to teach them how to use Vive controllers to operate the widgets; With Dexmo, it is very natural and intuitive. People reach out their hands and it just works. we don’t have to say anything. So that was a really encouraging feedback for us.

    The demo you are seeing in the video is actually made by one of our software engineers who spent only 3 month playing with Unity. You can imagine the magical experience that developers with 5 years of experiences can pull out with this Device!

It’s undeniably all very cool. However, we still have questions over cost, tracking, and availability to market, some of which we touched upon in our recent interview with Aler Gu. It’s also another example of a peripheral that will require specific software integration in order to reach its potential, which is why these videos are important, of course: proving to potential developer partners that Dexta has already done a lot of the hard work for them.
