Embody Brings Personalised Spatial Audio to beyerdynamic’s MMX 300 Gaming Headphones

Spatial audio is an integral part of virtual reality (VR) experiences, helping ground you in digital worlds that are becoming ever more expansive and complex. At last year’s Game Developers Conference (GDC), Embody unveiled Immerse, its AI-driven personalised spatial audio system, a software solution that tailors sound to a person’s unique ear shape. Today, gamers can access that system via a collaboration with beyerdynamic and its latest MMX 300 headphones.

Immerse’s customized in-game audio works by using a photo of your ear to identify its unique features; an AI algorithm then predicts how sound reflects and refracts around them, creating realistic audio tailored to you. Embody has worked closely with beyerdynamic engineers to create a perceptually accurate reproduction of game audio, which in turn should improve your spatial awareness.
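Under the hood, systems like this typically render sound by convolving each source with a head-related impulse response (HRIR): one filter per ear describing how your anatomy colours sound arriving from a given direction. Embody's exact pipeline isn't public; the sketch below only illustrates the general principle, with made-up two-tap impulse responses standing in for measured, personalised HRIRs.

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Render a mono signal to binaural stereo by convolving it with
    a per-ear head-related impulse response (HRIR)."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Toy HRIRs: a source to the listener's right reaches the right ear
# slightly earlier and louder than the left ear.
hrir_right = np.array([1.0, 0.0])   # direct path, full level
hrir_left = np.array([0.0, 0.6])    # one-sample delay, attenuated

mono = np.array([1.0, 0.0, 0.0])
stereo = spatialize(mono, hrir_left, hrir_right)
print(stereo)  # right channel leads and is louder -> perceived on the right
```

A personalised system would swap in HRIRs predicted from the ear photo rather than these hand-written taps; the rendering step itself stays the same.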

Whether you’re playing VR or non-VR videogames, being able to accurately hear the direction a sound is coming from can greatly affect the experience. Immerse allows gamers to adjust several settings, including moving the sound field closer or further away depending on the title.

“Embody and beyerdynamic share the passion for creating stunning sound experiences,” said Xavier Presser, Gaming Product Manager at beyerdynamic in a statement. “Our headphones make truly professional audio available for gamers and Immerse personalized spatial audio raises the bar for virtual surround simulations. Immerse with beyerdynamic uncovers a new facet of the MMX 300 that will again change your way of gaming.”

Embody demonstrating Immerse at GDC 2019. Image credit: Embody

“Immerse personalized spatial audio is the perfect match with beyerdynamic’s design emphasis on delivering precision audio,” said Kapil Jain, CEO at Embody. “With the power of MMX300 and Immerse combined, gone is the hesitation to take a shot because you hear exactly where your enemies lie. Once you try Immerse with beyerdynamic’s MMX 300 headset you’ll never want to play a game without it.”

Having personalized spatial audio does come at a cost, with the beyerdynamic MMX 300 headset currently retailing for €239.00/$299.00 (down from €299.00/$349.00), while the Immerse software costs $14.99/£13.99 for one year or $39.99/£36.99 for seven years.

Embody has yet to confirm when further headphones will be supported, or whether the integrated headphones found on the Vive Pro or Valve Index will be. When those details are available, VRFocus will let you know.


Oculus Audio SDK Update Adds Geometry-Based Sound Propagation

The latest update to the Oculus Audio SDK adds the long-awaited dynamic audio propagation feature.

The Audio SDK spatializes audio sources in real time using head-related transfer functions (HRTFs). It also allows for volumetric and ambisonic sounds. This new update improves how it handles reflections and occlusion.

The Old Behavior

The spatializer originally simulated audio reflections by assuming a predefined rectangular room around the user. That, however, assumed the user stood at the center of the room, and it breaks down as the user moves around a scene.

In early 2018 a feature called Dynamic Room Modeling was added, allowing developers to define the current room as a positioned 3D box. When the user moves to a new room, the developer can update the box for the new space.

This required a relatively large amount of effort on the developer’s part, however, and only fully works in perfectly rectangular spaces. It also couldn’t model the transition between differently sized spaces, such as going from inside to outside.

The New Update

The new update accurately models occlusion and reflections of sound in real time based on the scene geometry. The developer simply needs to tag each object with an acoustic material so the engine knows how much sound it should absorb or reflect. Materials like carpet absorb far more than materials like metal.
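The effect of material tagging can be sketched numerically. The absorption coefficients below are illustrative ballpark values, not the SDK's actual (frequency-dependent) tables: each bounce keeps a fraction (1 − α) of the incident energy, so a carpeted room kills reflections after a few bounces while a metal room keeps ringing.

```python
# Illustrative absorption coefficients (fraction of energy absorbed
# per bounce); real engine material tables vary with frequency.
ABSORPTION = {"carpet": 0.6, "drywall": 0.1, "metal": 0.05}

def energy_after_bounces(material, bounces, energy=1.0):
    """Energy remaining after repeated reflections off one material."""
    alpha = ABSORPTION[material]
    return energy * (1.0 - alpha) ** bounces

for mat in ABSORPTION:
    print(mat, round(energy_after_bounces(mat, 5), 4))
# After 5 bounces, carpet retains about 1% of the energy while
# metal still retains roughly three quarters of it.
```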

How the Audio SDK now ‘sees’ a scene

Valve’s competing Steam Audio has had geometry-based occlusion since late last year, but its reflections have to be prebaked. Facebook’s new update brings VR audio to a new level of realism by modelling reverb in real time. The simulation performs well even on mobile VR with many sound sources, which will be important for the upcoming Oculus Quest.

UPDATE: article previously stated that Steam Audio had feature parity. Thanks to reddit user /u/Hethree for the correction.


The post Oculus Audio SDK Update Adds Geometry-Based Sound Propagation appeared first on UploadVR.

Sennheiser Partner With Magic Leap for Spatial Audio

Though often overlooked, clever use of audio can elevate an immersive experience from something good into something amazing. In an attempt to bring this experience to its augmented reality (AR) system, Magic Leap are partnering with audio company Sennheiser to bring spatial audio technology to its platform.

Sennheiser will be bringing its AMBEO technology to Magic Leap in order to improve the immersion of AR. The AMBEO technology uses a process called ‘transparent audio’, which allows sound from the real world to be mixed in with sound from a user’s headphones, making it like an audio version of AR.

While Sennheiser did not confirm what product the new technology would be applied to, according to Venture Beat the company did mention that it would be discussing further news at the LEAP developers conference in October.

“Our spatial computing platform is uniquely designed in how it blends the digital world seamlessly and respectfully with the physical world,” Magic Leap product chief Omar Khan said. “The spatial soundfield is an integral part of the spatial sensory experience. This is why we partnered with Sennheiser … to help explore and enhance our spatial audio accessory solutions.”

“As we enter a new era of spatial computing … it thrills us to bring our AMBEO spatial audio expertise to drive forward this emerging field,” Sennheiser Ambeo boss Veronique Larcher said. “[We also plan on] working closely with the creative community.”

Some analysts say that by using the Sennheiser technology to handle audio, Magic Leap is freer to focus on the visual experience, and the technology might also help developers build better sound into AR experiences.

Magic Leap One Reveal

Magic Leap has started to ship its first product, the Magic Leap One Creator Edition. The response has been somewhat lacklustre, with many people criticising the quality of the AR experience, including Oculus founder Palmer Luckey.

For further coverage on Magic Leap and spatial sound, keep checking back with VRFocus.

HTC Vive Introduce Spatial Audio SDK

In order to create a truly immersive virtual reality (VR) experience, developers need to pay attention to the visuals, but also to the audio. Though frequently not given the recognition it deserves, sound and audio design can have a real impact on player perceptions. To make things easier for developers, HTC Vive are introducing the 3DSP audio SDK.

The HTC Vive 3DSP SDK uses a range of cutting-edge features to deliver realistic and immersive audio to users by taking into account real-world factors such as distance, geometry and background noise.

The 3DSP Software development kit (SDK) offers developers the following key features:

  • Higher Order Ambisonics (HOA) with very low computing power.
  • Head-Related Transfer Function (HRTF) based on refined real-world modeling (horizontally and vertically) resulting in a better algorithm that is applied to all sound filters and effects.
  • Room Audio simulates the reflection and reverberation of a real space.
  • Hi-Res Audio Settings source files and playback.
  • Distance Model based on real-world modelling.
  • Geometric occlusion requires no Unity collider; the covered area is calculated automatically.

A number of audio techniques are utilised in the HTC Vive 3DSP SDK to offer more immersive sound quality. Ambisonics is a technique which uses a full-sphere surround sound to simulate spatial sound, while a Distance Model simulates how sounds become quieter the further away you are. An Occlusion model is also in place to represent how sound changes when objects are in the way.
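The distance model described above is, at its simplest, the inverse-distance law used by most game audio engines: gain halves each time a source's distance doubles past a reference distance. The sketch below is a generic illustration with a hypothetical 1 m reference distance, not the 3DSP SDK's actual curve.

```python
def inverse_distance_gain(distance, ref_distance=1.0, min_distance=0.1):
    """Generic inverse-distance attenuation: unity gain inside the
    reference distance, then gain falls off as ref_distance / distance."""
    d = max(distance, min_distance)  # avoid blow-up right at the listener
    return min(1.0, ref_distance / d)

for d in (0.5, 1.0, 2.0, 4.0, 8.0):
    print(f"{d:>4} m -> gain {inverse_distance_gain(d):.3f}")
# 1 m -> 1.000, 2 m -> 0.500, 4 m -> 0.250, 8 m -> 0.125
```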

The HTC Vive 3DSP SDK supports room effects, room reverberation and reflection, and acoustic occlusion to represent the different ways sound can react based on the geometry of a location. All of this has been optimised for use with the HTC Vive Pro headphones, though other headphones and head-mounted displays are also supported.

For further information on the HTC Vive 3DSP Audio SDK, you can visit the community forum. Future coverage of new VR tools and software will be covered here on VRFocus.

Valve’s ‘Steam Audio’ Spatial Sound System Gets Support for AMD TrueAudio Next, FMOD

Steam Audio, Valve’s free made-for-VR spatial audio system, has added support for AMD’s GPU-based TrueAudio Next technology, as well as the popular FMOD audio software.

AMD TrueAudio Next

Today Valve announced the release of Steam Audio 2.0 beta 13, which adds support for AMD’s TrueAudio Next technology. TrueAudio Next allows developers to run certain audio-processing tasks on supported AMD GPUs, including convolution, a method of filtering audio to layer in spatial and environmental effects. As described in Valve’s announcement of the new feature:

Audio engines used for games and VR usually support convolution reverb: various combinations of audio data can be filtered using convolution, to add reverb, after the user has specified an [impulse response] in some way. This is usually real-time convolution, which means that audio is filtered as it is being played. [Impulse responses] used for reverb are often quite long: representing the IR for an airplane hangar with a 5-second reverb at a 48 kHz sampling rate requires (5 * 48,000) = 240,000 audio samples = around 935 KB. Real-time convolution is a very compute-intensive operation, and audio engines usually allow only 2-4 convolution reverb filters running simultaneously, to keep CPU usage within acceptable limits. Convolution reverb can model a wide range of acoustic phenomena, and leads to an increased sense of presence, which is especially important in VR. However, because it is a compute-intensive algorithm, it is often avoided in favor of less compute-intensive, less detailed reverb algorithms.
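The arithmetic in Valve's quote is easy to verify. Assuming 32-bit float samples (4 bytes each, a common internal format; the quote doesn't state the sample width), a 5-second impulse response at 48 kHz works out very close to the quoted figure:

```python
seconds = 5
sample_rate = 48_000
bytes_per_sample = 4  # assuming 32-bit float samples

samples = seconds * sample_rate
size_kb = samples * bytes_per_sample / 1024

print(samples)            # 240000 samples, matching Valve's quote
print(round(size_kb, 1))  # ~937.5 KB, close to the quoted "around 935 KB"
```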

TrueAudio Next uses the GPU compute capabilities of supported AMD GPUs to accelerate convolution, allowing developers to increase the acoustic complexity and detail in their games and VR applications using convolution reverb.

TrueAudio Next also supports something called Resource Reservation, which allows developers to dedicate a portion of the GPU’s processing power (up to 25%) to spatial audio, effectively offering parallel processing of image rendering and audio processing. This can be enabled and disabled on the fly, allowing full use of the GPU for image processing when needed.

Image courtesy Valve, AMD

Valve says that the OpenCL-based convolution processing running on the GPU via TrueAudio Next shows significant gains over the same convolution tasks running on the CPU, which means you’re not only offloading processing to the GPU, but also freeing up extra CPU processing for other tasks.

Image courtesy Valve

While Valve advises that “Steam Audio’s CPU-based convolution is highly efficient, and is sufficient for a wide variety of games and VR applications […],” they note that “[TrueAudio Next] allows us to give developers the option of reaching an even higher level of auditory complexity: increasing the number of sources, or the Ambisonics order for indirect sound, or the IR length, etc.”

The announcement goes into detail about the potential performance benefits and impacts of TrueAudio Next, and specifies that the technology is supported by the following GPUs on Windows 10: RX 470 Series, RX 480 Series, RX 570 Series, RX 580 Series, R9 Fury, R9 Fury X, and Pro Duo.

Valve, which doesn’t often wade into GPU-specific technologies, also addresses an important question: why?

There are two main reasons why we chose to support TrueAudio Next in Steam® Audio:

  • If we can provide developers with more flexibility in choosing how their audio processing workloads are balanced on a user’s PC, we want to do that. TrueAudio Next allows developers to choose how CPU and GPU resources work together to provide a compelling audio experience on the user’s PC.
  • If we can allow developers to offer an optional higher level of audio detail with their existing content on PCs that are powerful enough, we want to do that. With Steam® Audio, developers just specify higher settings to be used if a TAN-capable GPU is installed on the user’s PC. Developers do not have to re-author any content to work with TAN. Steam® Audio can be used to seamlessly select either CPU- or GPU-based convolution, depending on the user’s hardware configuration.

Steam® Audio’s support for TrueAudio Next does not in any way restrict Steam® Audio to specific hardware or platforms. We hope that our support for TrueAudio Next encourages hardware and platform vendors to provide more options for developers to balance their audio workloads against graphics, physics, and other workloads, which in turn will help them create better audio experiences for their users.

FMOD Studio

As of Steam Audio beta 12, released in January, the software now offers a plugin for the popular FMOD audio authoring tool. Valve says the plugin “lets you use the full range of spatial audio functionality available in Steam Audio—including HRTF, occlusion, physics-based sound propagation, and baking—to projects that use FMOD Studio.”

The post Valve’s ‘Steam Audio’ Spatial Sound System Gets Support for AMD TrueAudio Next, FMOD appeared first on Road to VR.

Google Releases ‘Resonance Audio’, a New Multi-Platform Spatial Audio SDK

Google today released a new spatial audio software development kit called ‘Resonance Audio’, a cross-platform tool based on technology from their existing VR Audio SDK. Resonance Audio aims to make VR and AR development easier across mobile and desktop platforms.

Google’s spatial audio support for VR is well-established, having introduced the technology to the Cardboard SDK in January 2016, and bringing their audio rendering engine to the main Google VR SDK in May 2016, which saw several improvements in the Daydream 2.0 update earlier this year. Google’s existing VR SDK audio engine already supported multiple platforms, but with platform-specific documentation on how to implement the features. In February, a post on Google’s official blog recognised the “confusing and time-consuming” battle of working with various audio tools, and described the development of streamlined FMOD and Wwise plugins for multiple platforms on both Unity and Unreal Engine.

Image courtesy Google

The new Resonance Audio SDK consolidates these efforts, working ‘at scale’ across mobile and desktop platforms, which should simplify development workflows for spatial audio in any VR/AR game or experience. According to the press release provided to Road to VR, the new SDK supports “the most popular game engines, audio engines, and digital audio workstations” running on Android, iOS, Windows, MacOS, and Linux. Google are providing integrations for “Unity, Unreal Engine, FMOD, Wwise, and DAWs,” along with “native APIs for C/C++, Java, Objective-C, and the web.”

This broader cross-platform support means that developers can implement one sound design for their experience that should perform consistently on both mobile and desktop platforms. In order to achieve this on mobile, where CPU resources are often very limited for audio, Resonance Audio features scalable performance using “highly optimized digital signal processing algorithms based on higher order Ambisonics to spatialize hundreds of simultaneous 3D sound sources, without compromising audio quality.” A new feature in Unity for precomputing reverb effects for a given environment also ‘significantly reduces’ CPU usage during playback.

Much like the existing VR Audio SDK, Resonance Audio is able to model complex sound environments, allowing control over the direction of acoustic wave propagation from individual sound sources. The width of each source can be specified, from a single point to a wall of sound. The SDK will also automatically render near-field effects for sound sources within arm’s reach of the user. Near-field audio rendering takes acoustic diffraction into account, as sound waves travel across the head. By using precise HRTFs, the accuracy of close sound source positioning can be increased. The team have also released an ‘Ambisonic recording tool’ to spatially capture sound design directly within Unity, which can be saved to a file for use elsewhere, such as game engines or YouTube videos.
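Resonance Audio's spatialization is built on higher-order Ambisonics; the idea is easiest to see at first order, where a mono source is encoded into four channels whose weights depend only on the source's direction. The sketch below is a generic first-order encoder (ACN channel order, SN3D normalisation, a common convention), not Resonance Audio's internal code.

```python
import math

def encode_foa(sample, azimuth, elevation):
    """Encode a mono sample into first-order ambisonics (ACN order
    W, Y, Z, X with SN3D normalisation). Angles in radians; azimuth 0
    is straight ahead, positive azimuth is to the listener's left."""
    w = sample                                            # omnidirectional
    y = sample * math.sin(azimuth) * math.cos(elevation)  # left-right
    z = sample * math.sin(elevation)                      # up-down
    x = sample * math.cos(azimuth) * math.cos(elevation)  # front-back
    return (w, y, z, x)

# A full-scale sample directly in front of the listener:
print(encode_foa(1.0, 0.0, 0.0))  # (1.0, 0.0, 0.0, 1.0)
# The same sample 90 degrees to the left puts all directional
# energy into the Y (left-right) channel:
print(encode_foa(1.0, math.pi / 2, 0.0))
```

A decoder (or binaural renderer) then turns these channels into speaker or headphone feeds; rotating the whole soundfield for head tracking is just a matrix multiply on the four channels, which is why the format scales well on mobile.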

Resonance Audio documentation is now available on the new developer site.

For PC VR users, Google just dropped Audio Factory on Steam, letting Rift and Vive owners get a taste of an experience that implements the new Resonance Audio SDK. Daydream users can try it out here too.

The post Google Releases ‘Resonance Audio’, a New Multi-Platform Spatial Audio SDK appeared first on Road to VR.

Google’s Audio Factory Released on Steam

Audio Factory, which demonstrates Google’s spatial audio engine, was released for Daydream and Cardboard in May of this year. Now the search engine giant has released the free app on Steam for Oculus Rift and HTC Vive.

Audio Factory Showcases the Spatial Audio Engine’s Capabilities

The Daydream title Audio Factory demonstrates the capabilities of Google’s Resonance Audio SDK. In this colourful VR experience you wander through several factory floors, completing playful interactive tasks. According to Google, the virtual visitor learns first-hand how spatial audio can intensify an immersive experience. Various techniques are used to create a three-dimensional soundscape: Google’s solution also accounts for objects that stand between the player and a sound source and swallow the sound. Just as in the real world, high tones are reduced more than low frequencies; the bass still gets through.
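This frequency-dependent occlusion damping, where high frequencies lose far more energy than low ones, is commonly approximated by running the occluded source through a low-pass filter. The sketch below uses a generic one-pole low-pass, not Google's actual filter, and shows an 8 kHz tone being attenuated far more heavily than a 100 Hz tone.

```python
import math

def one_pole_lowpass(signal, cutoff_hz, sample_rate=48_000):
    """Generic one-pole low-pass: attenuates high frequencies more
    than low ones, mimicking sound blocked by an obstacle."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)
    out, y = [], 0.0
    for x in signal:
        y = (1.0 - a) * x + a * y
        out.append(y)
    return out

def peak(signal):
    return max(abs(s) for s in signal)

sr = 48_000
low = [math.sin(2 * math.pi * 100 * n / sr) for n in range(sr)]
high = [math.sin(2 * math.pi * 8000 * n / sr) for n in range(sr)]

# "Occlude" both tones with a 500 Hz low-pass: the 100 Hz tone
# passes nearly untouched, the 8 kHz tone is heavily attenuated.
print(round(peak(one_pole_lowpass(low, 500)), 3))
print(round(peak(one_pole_lowpass(high, 500)), 3))
```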

Although the application displays the Daydream controllers in VR, it maps them to the HTC and Oculus Touch controllers. Audio Factory also requires wired headphones, as Bluetooth earphones would introduce too much latency. The app is available for Google Cardboard and Daydream, and now also for Oculus Rift and HTC Vive via Steam.

The post Audio Factory von Google auf Steam erschienen appeared first on VR∙Nerds.

InstaVR Introduces Spatial Audio Support to Improve its Online VR Creation Platform

Audio forms an important and oft-neglected part of the immersive experience. The audio experience isn’t just about music; it also covers sound effects, dialogue and background sounds that form a vital part of creating an atmosphere. InstaVR have recognised the need for more support for immersive audio, and so have launched Spatial Audio Support.

Spatial Audio allows sounds to be presented to the virtual reality (VR) viewer from both inside and outside their field of view. This allows developers to draw the user’s attention to a particular area or object using a directional sound cue.
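At its simplest, a directional cue like this can be produced with constant-power stereo panning; full spatial audio adds elevation and HRTF filtering on top. The helper below is an illustrative sketch, not InstaVR's implementation.

```python
import math

def constant_power_pan(sample, pan):
    """Pan a mono sample across stereo. pan runs from -1.0 (hard left)
    to 1.0 (hard right); total power stays constant, so perceived
    loudness doesn't dip when the source is in the centre."""
    angle = (pan + 1.0) * math.pi / 4  # map [-1, 1] -> [0, pi/2]
    return (sample * math.cos(angle), sample * math.sin(angle))

# A cue panned hard right steers the listener's attention rightward:
left, right = constant_power_pan(1.0, 1.0)
print(round(left, 3), round(right, 3))  # 0.0 1.0
# Centred, both channels carry equal level at -3 dB:
centre = constant_power_pan(1.0, 0.0)
print(round(centre[0], 3), round(centre[1], 3))  # 0.707 0.707
```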

The spatial audio feature has already been used by some professionals, such as Steven Poe, a VR Producer for Mandala Health, a company that produces healthcare treatment for pain management and psychology. “The soundscape is more than 50% of the brain’s transference. We hear sound faster and more accurately than we see. Unfortunately the ability to playback spatial audio has been lagging compared to advancements in 360 video,” noted Poe. “InstaVR solved that by allowing VR Producers to upload spatial audio files once and deliver across all the popular VR viewing platforms.”

“We’re very proud that our clients can now incorporate spatial audio into their VR experiences,” said Daniel Haga, Founder and President of InstaVR. “Our goal has always been to empower clients to create the most immersive, engaging, and awe-inspiring virtual reality. Supporting spatial audio gives our most savvy clients the opportunity to do just that.”

Further information on Spatial Audio can be found at the InstaVR website.

Last year InstaVR secured $2 million USD in a funding round led by The VR Fund to build its web-based solution for the authoring of 360-degree VR apps. This February the company announced the software would support HTC Vive, with Japanese firm Toyota being among the first to utilise the new option.

VRFocus will bring you further news on Spatial Audio and other developments in immersive audio as it becomes available.

Steam Audio Promises More Realistic Sound For Games And VR

Valve today released a new tool called Steam Audio, the fruit of its purchase of Impulsonic in January. The tech promises sound that responds realistically to a virtual environment, an improvement over standard 3D audio.

In the demo video below you can hear the audio change as a player moves around a virtual room. In particular, some of the sound is partially blocked by a wall, noticeably altering its tone and volume.

It is a pretty sparse example, but when applied to VR you would have sound that more realistically matches your actions and behavior. Crouching to hide underneath a desk as an alien creeps nearby takes on new meaning when the creature’s steps sound slightly different depending on how much of the desk is blocking or reflecting the sound before it hits your ears.

From the Steam Audio page:

Reflections and reverb can add a lot to spatial audio. Steam Audio uses the actual scene geometry to simulate reverb. This lets users sense the scene around them through subtle sound cues, an important addition to VR audio. This physics-based reverb can handle many important scenarios that don’t easily fit within a simple box-model.

Steam Audio applies physics-based reverb by simulating how sound bounces off of the different objects in the scene, based on their acoustic material properties (a carpet doesn’t reflect as much sound as a large pane of glass, for example). Simulations can run in real-time, so the reverb can respond easily to design changes. Add furniture to a room, or change a wall from brick to drywall, and you can hear the difference.
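The audible difference from swapping a wall's material can be estimated with Sabine's classic reverberation formula, RT60 ≈ 0.161·V / Σ(Sᵢ·αᵢ). The absorption coefficients below are typical published mid-band values, not Steam Audio's internal numbers; the point is simply that re-surfacing one wall measurably changes the decay time.

```python
# Sabine's formula: RT60 = 0.161 * V / sum(S_i * alpha_i)
# Typical mid-band absorption coefficients (illustrative only):
ALPHA = {"brick": 0.03, "drywall": 0.10,
         "concrete_floor": 0.02, "ceiling_tile": 0.60}

def rt60(volume_m3, surfaces):
    """Estimate reverberation time; surfaces is a list of
    (area_m2, material) pairs."""
    absorption = sum(area * ALPHA[mat] for area, mat in surfaces)
    return 0.161 * volume_m3 / absorption

# A 5 x 4 x 3 m room with all-brick walls...
room = [(20, "concrete_floor"), (20, "ceiling_tile"),
        (15, "brick"), (15, "brick"), (12, "brick"), (12, "brick")]
rt_brick = rt60(60, room)

# ...then change one 5 x 3 m wall from brick to drywall:
room[2] = (15, "drywall")
rt_drywall = rt60(60, room)

print(round(rt_brick, 2), "->", round(rt_drywall, 2))  # shorter tail
```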

The tech supports PC, Mac, Linux and Android. It is launching with Unity integration but Unreal Engine 4 support is coming soon with other software integrations planned as well.
