NVIDIA Shows How Physically-based Audio Can Greatly Enhance VR Immersion

Positional audio for VR experiences, where sounds seem to come from the correct direction, has long been understood as an important part of making VR immersive. But knowing which direction sounds are coming from is only one part of the immersive audio equation. Getting that directional audio to interact in realistic ways with the virtual environment itself is the next challenge, and getting it right can make VR spaces feel far more real.

Positional audio in some form or another is integrated into most VR applications today (some with better integrations and mixes than others). Positional audio tells you the direction of each sound source, but it tells you nothing about the environment the sound is located in, something we are unconsciously tuned to understand as our ears and brain interpret direct sounds mixed with the reverberations, reflections, diffractions, and more complex audio interactions that change with the shape of the environment around us and the materials it is made of. Sound alone can give us a tremendous sense of space even without a corresponding visual component. Needless to say, getting this right is important to making VR maximally immersive, and that’s where physically-based audio comes in.

Photo courtesy NVIDIA

Physically-based audio is a simulation of virtual sounds in a virtual environment, encompassing both directional audio and the audio’s interactions with scene geometry and materials. Traditionally these simulations have been too resource-intensive to run quickly and accurately enough for real-time gaming. NVIDIA’s solution is to run those calculations on its powerful GPUs, fast enough, the company says, for real-time use even in high-performance VR applications. In the video heading this article you can hear how much information about the physical shape of the scene can be derived from the audio alone. Definitely use headphones to get the proper effect; it’s an impressive demonstration, especially toward the end of the video, when occlusion is demonstrated as the viewpoint moves around the corner from the sound source.

That’s the idea behind the company’s VRWorks Audio SDK, which was released today during the GTC 2017 conference as part of the company’s VRWorks suite of tools for enhancing VR applications on NVIDIA GPUs. In addition to the SDK, which can be used to build physically-based audio into any application, NVIDIA is also making VRWorks Audio available as a plugin for Unreal Engine 4 (and we’ll likely see the same for Unity soon), making it easy for developers to begin working with physically-based audio in VR.


The company says that VRWorks Audio is the “only hardware-accelerated and path-traced audio solution that creates a complete acoustic image of the environment in real time without requiring any ‘pre-baked’ knowledge of the scene. As the scene is loaded by the application, the acoustic model is built and updated on the fly. And audio effect filters are generated and applied on the sound source waveforms.”
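
The “filters are generated and applied on the sound source waveforms” step boils down to convolution: once the simulation yields an impulse response for a source/listener pair, the dry waveform is convolved with it to produce what the listener hears. Below is a minimal sketch assuming a short response and time-domain convolution; the function name and test data are illustrative, not part of the VRWorks API.

```cpp
#include <cstdio>
#include <vector>

// Direct (time-domain) convolution of a dry source waveform with an
// impulse response produced by an acoustic simulation. Production
// engines use FFT-based convolution for long responses, but the
// principle is the same. Illustrative sketch, not VRWorks code.
std::vector<float> applyAcousticFilter(const std::vector<float>& dry,
                                       const std::vector<float>& ir) {
    std::vector<float> wet(dry.size() + ir.size() - 1, 0.0f);
    for (std::size_t n = 0; n < dry.size(); ++n)
        for (std::size_t k = 0; k < ir.size(); ++k)
            wet[n + k] += dry[n] * ir[k];
    return wet;
}

int main() {
    // A unit impulse followed by a quieter, delayed echo: the kind of
    // response a direct path plus one wall reflection would produce.
    std::vector<float> ir  = {1.0f, 0.0f, 0.0f, 0.4f};
    std::vector<float> dry = {0.5f, -0.5f, 0.25f};
    for (float s : applyAcousticFilter(dry, ir)) std::printf("%.3f ", s);
    std::printf("\n");
    return 0;
}
```

Because the acoustic model is rebuilt on the fly, the impulse response itself changes as the scene or the listener moves, which is what makes effects like the occlusion moment in the video possible.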

VRWorks Audio repurposes the company’s OptiX ray-tracing engine, which is typically used to render high-fidelity, physically-based graphics. For VRWorks Audio, the system casts invisible rays representing sound wave propagation, tracing each path from its origin, through the surfaces it interacts with, and on to its arrival at the listener.
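
To make the ray-tracing analogy concrete, here is a toy sketch of what such a trace produces for each sound path: a delay derived from the path length and a gain derived from distance and surface absorption. It uses the classic image-source trick for a single wall; the geometry, constants, and method are simplifying assumptions, not how OptiX or VRWorks Audio is actually implemented.

```cpp
#include <cmath>
#include <cstdio>

// Toy image-source sketch of the per-path quantities a path-traced
// audio simulation computes: a delay (from path length) and a gain
// (from distance and surface absorption). Real systems trace many
// rays against full scene geometry; the single wall here (the plane
// z = 0) and all constants are illustrative assumptions.

struct Vec3 { float x, y, z; };

static float dist(Vec3 a, Vec3 b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

int main() {
    const float kSpeedOfSound = 343.0f; // m/s in air
    const float kAbsorption   = 0.3f;   // fraction absorbed by the wall

    Vec3 source   = {0.0f, 0.0f, 2.0f};
    Vec3 listener = {4.0f, 0.0f, 1.5f};

    // Direct path: straight line from source to listener.
    float dDirect = dist(source, listener);

    // First-order reflection off the wall z = 0, via the image-source
    // trick: mirror the source across the wall and draw a straight line.
    Vec3 image = {source.x, source.y, -source.z};
    float dReflect = dist(image, listener);

    // Each path contributes a delayed, attenuated copy of the source
    // waveform; summing many such paths yields the impulse response.
    std::printf("direct : delay %.4f s, gain %.3f\n",
                dDirect / kSpeedOfSound, 1.0f / dDirect);
    std::printf("reflect: delay %.4f s, gain %.3f\n",
                dReflect / kSpeedOfSound,
                (1.0f - kAbsorption) / dReflect);
    return 0;
}
```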


Road to VR is a proud media sponsor of GTC 2017.


NVIDIA Launches VRWorks Audio and 360 Video SDK at GTC

Today sees the start of NVIDIA’s GPU Technology Conference (GTC) at the San Jose Convention Center, California. Virtual reality (VR) will play a big part in the proceedings, with multiple sessions taking place as well as announcements. The first two announcements are the VRWorks Audio software development kit (SDK) and the VRWorks 360 Video SDK, both seeing their first public release.

The VRWorks Audio SDK features real-time OptiX ray tracing of audio in virtual environments, along with a plugin for VRWorks Audio in Unreal Engine 4. While standard VR audio provides an accurate 3D position for each audio source within a virtual environment, NVIDIA VRWorks Audio makes that even more immersive by modelling sound propagation phenomena such as reflection, refraction and diffraction.

The release consists of a set of C APIs, available now on GitHub, for integration into any engine or application. GTC attendees can also experience a live demonstration of VRWorks Audio technology at the VR Village.
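
For a sense of what a set of C APIs for engine integration typically looks like in practice, here is a hypothetical sketch. Every identifier below is invented and stubbed out so the example compiles; it is not the real VRWorks Audio interface, which is documented with the SDK on GitHub.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical shape of a C-style acoustic API wired into an engine:
// build an acoustic context from scene geometry, update the listener
// and sources each frame, and pull back filtered audio. All names are
// invented for illustration; this is NOT the actual VRWorks Audio API.

struct AcousticContext { /* opaque engine-side state would live here */ };

AcousticContext* acCreateContext() { return new AcousticContext; }
void acDestroyContext(AcousticContext* ctx) { delete ctx; }

// Register scene geometry and a per-surface absorption coefficient.
void acAddMesh(AcousticContext*, const float* /*verts*/, int /*vertCount*/,
               float /*absorption*/) { /* stub */ }

// Per-frame updates: where the listener and each source sit in the scene.
void acSetListener(AcousticContext*, const float* /*pos*/) { /* stub */ }
void acSetSource(AcousticContext*, int /*id*/, const float* /*pos*/) { /* stub */ }

// Trace paths for one source and filter its dry waveform accordingly.
void acProcess(AcousticContext*, int /*id*/,
               const float* dry, float* wet, int samples) {
    for (int i = 0; i < samples; ++i) wet[i] = dry[i]; // stub: passthrough
}

int main() {
    AcousticContext* ctx = acCreateContext();
    float quad[12] = {0,0,0, 1,0,0, 1,1,0, 0,1,0};
    acAddMesh(ctx, quad, 4, 0.3f);

    float listener[3] = {0.0f, 0.0f, 1.0f};
    float source[3]   = {2.0f, 0.0f, 1.0f};
    acSetListener(ctx, listener);
    acSetSource(ctx, 0, source);

    std::vector<float> dry(256, 0.5f), wet(256, 0.0f);
    acProcess(ctx, 0, dry.data(), wet.data(), (int)dry.size());
    std::printf("processed %zu samples\n", wet.size());
    acDestroyContext(ctx);
    return 0;
}
```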


At the conference NVIDIA will also be demoing, for the first time, the VRWorks 360 Video SDK’s real-time stereo stitching. Utilising two Quadro P6000 GPUs, the company will showcase how it is able to stitch footage from eight 4K cameras in stereo and in real time using Z CAM’s V1 PRO rig.
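
As a back-of-the-envelope illustration of why this is a real-time challenge, the raw input for such a rig runs to roughly two gigapixels per second. The frame rate and pixel format below are assumptions for the sake of the arithmetic, not Z CAM V1 PRO specifications.

```cpp
#include <cstdio>

// Rough input bandwidth for eight 4K cameras feeding a real-time
// stereo stitch. Frame rate and bytes-per-pixel are assumed values.
int main() {
    const long long width = 3840, height = 2160; // 4K UHD per camera
    const int cameras = 8;
    const int fps = 30;          // assumed capture rate
    const int bytesPerPixel = 3; // assumed 8-bit RGB

    long long pixelsPerSecond = width * height * cameras * fps;
    double gigabytesPerSecond =
        pixelsPerSecond * (double)bytesPerPixel / 1e9;

    std::printf("input: %.2f Gpixels/s, ~%.1f GB/s raw\n",
                pixelsPerSecond / 1e9, gigabytesPerSecond);
    return 0;
}
```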

“The fact that NVIDIA manages to stitch 4K 360 stereoscopic video in real time, making livestreaming possible, changes the production pipeline and enables entirely new use cases in VR,” said Kinson Loo, CEO of Z CAM.

From today, the first public beta of the VRWorks 360 Video SDK is available to all developers from developer.nvidia.com/vr. Today’s launch covers the VRWorks 360 Video SDK in mono; the stereo version will be released soon.

VRFocus will continue its coverage of GTC 2017, reporting back with all of the latest updates and announcements.