Quantifying Touch on 15 Dimensions with SynTouch

SynTouch has created a system called the SynTouch Standard that can quantify the sense of touch on fifteen different dimensions, and they're one of the most impressive haptic start-ups that I've seen so far. SynTouch isn't creating haptic displays per se, but they are capturing the data that will be vital for other VR haptic companies working towards a display that's capable of simulating a wide variety of different textures. SynTouch lists Oculus as one of their partners, and they're also providing their data to a number of other unannounced haptic companies.


I had a chance to talk with Matt Borzage, head of development and one of the co-founders of SynTouch, at CES, where we talked about the 15 different dimensions of the SynTouch Standard across the five major areas of Texture, Compliance, Friction, Thermal, and Adhesive. This research was originally funded by DARPA in order to add the feeling of touch to prosthetics, and the founders have backgrounds in biomedical engineering. But their mechanical process of objectively measuring the different dimensions of textures has a lot of applications in virtual reality, since it creates a baseline of input data for haptic displays.

Here’s a comparison of denim and a sponge across the 15 dimensions of the SynTouch Standard:
[Spider plot: denim vs. sponge across the 15 dimensions of the SynTouch Standard]
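
To make the comparison concrete, here is a minimal Python sketch of how a 15-dimension touch profile could be represented and compared. The five areas come from the interview, but the sub-dimension counts and the denim/sponge numbers are invented placeholders, not SynTouch's actual schema or measurements.

```python
from dataclasses import dataclass
from math import sqrt

# The five major areas named in the interview; the sub-dimension counts and
# values below are hypothetical placeholders, not the real SynTouch Standard.
AREAS = ("texture", "compliance", "friction", "thermal", "adhesive")

@dataclass
class MaterialProfile:
    name: str
    dimensions: dict  # area -> list of normalized scores, 15 values in total

    def vector(self) -> list:
        return [v for area in AREAS for v in self.dimensions[area]]

def profile_distance(a: MaterialProfile, b: MaterialProfile) -> float:
    """Euclidean distance between two 15-dimension touch profiles."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a.vector(), b.vector())))

denim = MaterialProfile("denim", {
    "texture": [0.7, 0.6, 0.5], "compliance": [0.3, 0.4, 0.2],
    "friction": [0.6, 0.5, 0.4], "thermal": [0.5, 0.4],
    "adhesive": [0.2, 0.3, 0.1, 0.2],
})
sponge = MaterialProfile("sponge", {
    "texture": [0.4, 0.3, 0.5], "compliance": [0.9, 0.8, 0.9],
    "friction": [0.5, 0.6, 0.7], "thermal": [0.2, 0.3],
    "adhesive": [0.4, 0.5, 0.3, 0.4],
})
print(f"denim vs. sponge distance: {profile_distance(denim, sponge):.2f}")
```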

SynTouch has found a great niche in the haptics space by already providing a lot of insight and value to a number of companies looking at the ergonomics of industrial design. They're a company to watch in the VR space as more and more haptics companies try to solve some of the hardest engineering problems around creating a generalized haptic device for VR.



The Future of VR Arcades with VRsenal

HTC announced the Vive Tracker at CES this year, which will enable a range of VR peripherals targeted at everything from consumers to high-end virtual reality arcades. One of the higher-end peripherals that debuted was VRsenal's VR-15, which has built-in haptics and the same weight distribution as an M-15 and AR-15. I had a chance to catch up with VRsenal CEO Ben Davenport, who talked about targeting the digital out-of-home entertainment and VR arcade market with integrated solutions: commercial off-the-shelf VR hardware, customized VR backpacks and haptic vests, and top-of-the-line gun peripherals with an integrated Vive Tracker.


While VR hardware is expected to continually improve with each successive generation, Davenport makes the claim that limited real estate within the home will drive consumers to VR arcades, which will be able to provide more compelling experiences given the extra space. He says that competitive VR games are limited by teleportation and locomotion constraints, and that being able to physically move around large spaces will open up the types of social interactions that are possible with laser tag or paintball.

He expects to see a return to the golden era of arcades, when they could provide a more compelling and visceral experience than what's possible with consumer VR at home. High-end haptic devices will also likely be a differentiating factor, as the passive haptic feedback from the VR-15 peripheral combined with embodied gameplay is able to deliver a compelling experience that people will be willing to pay for. He also expects people to eventually go through non-gaming and non-entertainment virtual and augmented experiences while they are co-located in the same physical environment.



Tricking the Brain is the Only Way to Achieve a Total Haptics Solution

Deep in the basement of the Sands Expo Hall at CES was an area of emerging technologies called Eureka Park, which had a number of VR start-ups hoping to connect with suppliers, manufacturers, investors, or media in order to launch a product or idea. There was an early-stage haptic start-up called Go Touch VR showing off a haptic ring that simulated the type of pressure your finger might feel when pressing a button. I'd say that their demo was still firmly within the uncanny valley of awkwardness, but CEO Eric Vezzoli has a Ph.D. in haptics and was able to articulate an ambitious vision and technical roadmap towards a low-cost and low-fidelity haptics solution.


Vezzoli quoted haptics guru Vincent Hayward as claiming that haptics is an ‘infinite degree of freedom problem’ that can never be 100% solved, but that the best approach to get as close as possible is to trick the brain. Go Touch VR is aiming to provide a minimum viable way to trick the brain starting with simulating user interactions like button presses.

I had a chance to catch up with Vezzoli at CES, where we talked about the future challenges of haptics in VR, including the 400-800 Hz frequency response of fingers, the mechanical limits of nanometer-accurate skin displacement, the ergonomic limitations of haptic suits, and the possibility of fusing touch and vibration feedback with force-feedback haptic exoskeletons.
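
As a rough illustration of the first of those constraints, here is a minimal sketch that generates a short, decaying sine burst inside the 400-800 Hz band that fingertips respond to; the 600 Hz center frequency, envelope, and 20 ms duration are arbitrary assumptions for illustration, not Go Touch VR's actual drive signal.

```python
import numpy as np

def button_press_burst(freq_hz: float = 600.0, duration_s: float = 0.02,
                       sample_rate: int = 48_000, decay: float = 200.0) -> np.ndarray:
    """Short decaying sine burst as a hypothetical button-press haptic cue.

    600 Hz sits inside the 400-800 Hz fingertip-sensitivity band mentioned in
    the interview; the exponential envelope and 20 ms duration are arbitrary.
    """
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    return np.sin(2 * np.pi * freq_hz * t) * np.exp(-decay * t)

burst = button_press_burst()  # would be streamed to whatever actuator driver is in use
print(burst.shape)            # (960,) samples at 48 kHz
```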

SEE ALSO
Hands-on: 4 Experimental Haptic Feedback Systems at SIGGRAPH 2016



OSSIC CEO on Why the Future of Music is Immersive & Interactive

OSSIC debuted their latest OSSIC X headphone prototype at CES this year with one of the best immersive audio demos that I've heard yet. OSSIC CEO Jason Riggs told me that their headphones do a dynamic calibration of your ears in order to render near-field audio that is customized to your anatomy, and they had a new interactive audio sandbox environment where you could do a live mix of audio objects in a 360-degree environment at different heights and depths. OSSIC was also a participant in Abbey Road Studios' Red incubator looking at the future of music production, and Riggs makes the bold prediction that the future of music is going to be both immersive and interactive.


We do a deep dive into immersive audio on today's podcast, where Riggs explains in detail their audio rendering pipeline and how their dynamic calibration of ear anatomy enables their integrated hardware to replicate near-field audio objects better than any software-only solution. When audio objects are within 1 meter, they use a dynamic head-related transfer function (HRTF) in order to calculate the proper interaural time differences (ITD) and interaural level differences (ILD) that are unique to your ear anatomy. Their dynamic calibration also helps to localize high-frequency sounds above 1-2 kHz when they are in front of, above, or behind you.
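
For a sense of what one of those cues looks like numerically, here is a minimal sketch of the textbook Woodworth spherical-head approximation for ITD. This is not OSSIC's calibration pipeline; a measured, personalized HRTF captures far more than the single head-radius parameter assumed here.

```python
from math import sin, radians

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def woodworth_itd(azimuth_deg: float, head_radius_m: float = 0.0875) -> float:
    """Interaural time difference (seconds) for a far-field source.

    Woodworth's spherical-head approximation, valid for azimuths of 0-90
    degrees from straight ahead: ITD = (r / c) * (theta + sin(theta)).
    A personalized head radius is one simple way anatomy shifts this cue;
    a full HRTF also encodes the spectral (pinna) cues discussed above.
    """
    theta = radians(azimuth_deg)
    return (head_radius_m / SPEED_OF_SOUND) * (theta + sin(theta))

print(f"{woodworth_itd(90.0) * 1e6:.0f} microseconds")  # roughly 660 us for a source at the side
```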

SEE ALSO
Up Close With Sennheiser's $1,700 VR Microphone

Riggs says that they've been collaborating with Abbey Road Studios in order to figure out the future of music, which he believes is going to be both immersive and interactive. There are two ends of the audio production spectrum, ranging from pure live capture to pure audio production, which happens to mirror the difference between passive 360 video capture and interactive, real-time CGI games. Right now the music industry is solidly in static, multi-channel audio, but Riggs says the future tools of audio production are going to look more like a real-time game engine than the existing fixed-perspective, flat-world audio mixing boards.

OSSIC has started by figuring out the production pipeline for the passive, pure live capture end of the spectrum first. They've been using higher-order ambisonic microphones like the 32-element em32 Eigenmike microphone array from mh acoustics, which can capture a lot more spatial resolution than a standard 4-channel, first-order ambisonic microphone. Both of these approaches capture a sound-sphere shell of a location with all of its direct and reflected sound properties, which can transport you to another place.
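
For reference, here is what "first-order" means in practice: a minimal sketch that encodes a mono signal into the classic 4-channel B-format (W, X, Y, Z). Higher-order microphones like the Eigenmike add more spherical-harmonic channels, which is where the extra spatial resolution comes from. This is textbook ambisonics, not OSSIC's or mh acoustics' actual processing chain.

```python
import numpy as np

def encode_first_order(signal: np.ndarray, azimuth_deg: float, elevation_deg: float) -> np.ndarray:
    """Encode a mono signal into first-order ambisonic B-format (W, X, Y, Z)."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    w = signal / np.sqrt(2.0)              # omnidirectional component
    x = signal * np.cos(az) * np.cos(el)   # front-back figure-of-eight
    y = signal * np.sin(az) * np.cos(el)   # left-right figure-of-eight
    z = signal * np.sin(el)                # up-down figure-of-eight
    return np.stack([w, x, y, z])

mono = np.random.randn(48_000)             # one second of placeholder audio at 48 kHz
bformat = encode_first_order(mono, azimuth_deg=45.0, elevation_deg=10.0)
print(bformat.shape)                       # (4, 48000)
```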

But Riggs says that there’s a limited amount of depth information that can be captured and transmitted with this type of passive and non-volumetric ambisonic recording. The other end of the spectrum is pure audio production, which can do volumetric audio that is real-time and interactive by using audio objects in a simulated 3D space. OSSIC produced an interactive audio demo using Unity that is able to produce audio in the near-field of less than 1 meter distance.
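
To illustrate the object-based end of the spectrum, here is a minimal sketch of how a renderer might handle an audio object's distance, with a conventional inverse-distance rolloff outside a 1 meter radius and a near-field region inside it. This mirrors common game-engine conventions rather than the actual behavior of OSSIC's Unity demo.

```python
import numpy as np

def object_gain(distance_m: float, min_distance_m: float = 1.0) -> float:
    """Inverse-distance gain for an audio object.

    Beyond min_distance the level falls off as 1/d; inside it the object is in
    the near field, where a binaural renderer would switch to distance-dependent
    HRTF cues rather than simple attenuation.
    """
    return 1.0 if distance_m <= min_distance_m else min_distance_m / distance_m

listener = np.array([0.0, 0.0, 0.0])
source = np.array([0.5, 0.2, 0.0])  # an audio object inside the 1 m near field
print(object_gain(float(np.linalg.norm(source - listener))))  # 1.0
```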

The future of interactive music faces a tension similar to the one between 360 videos and interactive game environments: it's difficult to balance the user's agency with the process of creating authored compositions. Some ways to incorporate interactivity into a music experience are to let the user live mix an existing authored composition with audio objects in 3D space, or to play an audio-reactive game like AudioShield that creates dynamic gameplay based upon the unique sound profile of each piece of music. Both approaches engage the agency of the user, but neither actually provides a meaningful way for the user to affect how the composition unfolds.

Finding that balance between authorship and interactivity is one of the biggest open questions about the future of music, and no one really knows what it will look like. The only thing Riggs knows for sure is that real-time game engines like Unity or Unreal are going to be much better suited to facilitate this type of interaction than the existing production tools of channel-based music.

Multi-channel ambisonic formats are becoming more standardized on the 360-video platforms from Facebook and Google's YouTube, but their output is still only binaural stereo. Riggs says that he's been working behind the scenes to enable higher-fidelity outputs for integrated immersive hardware solutions like the OSSIC X, since those platforms currently aren't using the best spatialization process to get the best performance out of the OSSIC headphones.

On the other end of the spectrum, pure production, there is not yet an emerging standard for an open object-based audio format. Riggs hopes that one will eventually come, along with plugins for OSSIC headphones and software that can dynamically change the reflective properties of a virtualized room or dynamically modulate the properties of the audio objects.

As game engines eventually move to physics-based audio propagation models where sound is constructed in real time, Riggs says that they will still need good spatialization through integrated hardware and software solutions; otherwise it will just sound like good reverb without any localization cues.

SEE ALSO
Nvidia's VRWorks Audio Brings Physically Based 3D GPU Accelerated Sound

At this point, audio still takes a backseat to the visuals with a limited CPU budget of 2-3%, and Riggs hopes that a series of audio demos in 2017 will show the power of properly spatialized audio. OSSIC's interactive sound demo at CES was the most impressive example of audio spatialization that I've heard so far, and they're shaping up to be a real leader in immersive audio. Riggs said they've gotten a lot of feedback from game studios that they don't want to use a customized audio production solution from OSSIC; they want to keep their existing production pipelines and have OSSIC be compatible with them. So VR developers should be getting more information about how best to integrate with the OSSIC hardware in 2017, as the OSSIC X headphones are set to start shipping in the spring of this year.
