G’Audio Lab Makes 360 Video More Immersive with New Audio Streaming

Watching concerts, gigs and events in 360-degree video is far more immersive and interesting than standard TV broadcasts or even the usual HD streams; it makes you feel like you’re there in the moment. It’s the kind of experience that requires more than just stereo audio to be truly convincing, and G’Audio Lab is now moving forward with its spatial audio livestreaming solution, built specifically for 360 video.

G’Audio Lab’s new solution pushes past the current limitations of audio streaming for 360 video, letting listeners genuinely take in the sound of arenas and stadiums during live concerts and sports events.

“Livestreaming, high-fidelity spatial audio will bring a totally new level of presence to live events,” says CEO and Co-founder of G’Audio Lab, Henney Oh. “Through the magic of VR, you can now experience the sound of concerts, eSports and team sports as if you have the best seat in the house.”

Currently, the best audio experience you can expect from 360 video comes from traditional stereo and surround formats. Surround sound is excellent in most cases, but in 360 video it leaves something to be desired; true spatial audio is a step above. That’s where G’Audio Lab steps in with Sol Livestreaming, which uses spatial audio technology to give users a complete, lifelike 3D audio experience.

Listeners will be able to pinpoint where sounds come from and get a real sense of what it’s like to be there in person; as users who have already tried 3D spatial audio will attest, it is a transformative experience. The technology will even respond to the movement of the user’s head, so the sound field stays anchored to the scene as the listener looks around.

Sol Livestreaming integration delivers an Ambisonics audio signal that accurately envelops the listener in the atmosphere of the environment for a more immersive experience. Ambisonics can now be carried in the AAC codec, making the technology easy to adopt on existing devices.
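To make this concrete, here is a minimal sketch of how a mono signal can be encoded into a first-order Ambisonic (B-format) stream and how the resulting sound field can be rotated to follow the listener’s head. This is illustrative only, not G’Audio Lab’s actual pipeline; the ACN channel order, SN3D normalisation and function names are assumptions.

```typescript
// First-order Ambisonics, ACN channel order [W, Y, Z, X], SN3D normalisation.
// Illustrative sketch only -- not G'Audio Lab's implementation.

type FoaSample = [number, number, number, number]; // [W, Y, Z, X]

// Encode one mono sample arriving from (azimuth, elevation), in radians.
function encodeFoa(sample: number, azimuth: number, elevation: number): FoaSample {
  const cosEl = Math.cos(elevation);
  return [
    sample,                             // W: omnidirectional component
    sample * Math.sin(azimuth) * cosEl, // Y: left-right component
    sample * Math.sin(elevation),       // Z: up-down component
    sample * Math.cos(azimuth) * cosEl, // X: front-back component
  ];
}

// Head tracking: counter-rotate the whole field by the listener's yaw so
// sources stay anchored to the world as the head turns.
function rotateByHeadYaw([w, y, z, x]: FoaSample, yaw: number): FoaSample {
  const c = Math.cos(yaw);
  const s = Math.sin(yaw);
  return [w, c * y - s * x, z, s * y + c * x];
}
```

One rotation like this is applied to the whole mixed bed, regardless of how many instruments, crowds or commentators it contains, which is why head-tracked playback stays cheap on consumer devices.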

The future definitely sounds incredibly bright for 360 video thanks to spatial audio, though some of us may need to upgrade our headphones. For more on 360 video, make sure to keep reading VRFocus.

Google Releases ‘Resonance Audio’, a New Multi-Platform Spatial Audio SDK

Google today released a new spatial audio software development kit called ‘Resonance Audio’, a cross-platform tool based on technology from their existing VR Audio SDK. Resonance Audio aims to make VR and AR development easier across mobile and desktop platforms.

Google’s spatial audio support for VR is well-established: the company introduced the technology to the Cardboard SDK in January 2016 and brought its audio rendering engine to the main Google VR SDK in May 2016, with several improvements arriving in the Daydream 2.0 update earlier this year. Google’s existing VR SDK audio engine already supported multiple platforms, but with platform-specific documentation on how to implement its features. In February, a post on Google’s official blog recognised the “confusing and time-consuming” battle of working with various audio tools, and described the development of streamlined FMOD and Wwise plugins for multiple platforms on both Unity and Unreal Engine.


The new Resonance Audio SDK consolidates these efforts, working ‘at scale’ across mobile and desktop platforms, which should simplify development workflows for spatial audio in any VR/AR game or experience. According to the press release provided to Road to VR, the new SDK supports “the most popular game engines, audio engines, and digital audio workstations” running on Android, iOS, Windows, MacOS, and Linux. Google are providing integrations for “Unity, Unreal Engine, FMOD, Wwise, and DAWs,” along with “native APIs for C/C++, Java, Objective-C, and the web.”
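As a rough illustration of the web integration, the sketch below follows the shape of the Resonance Audio Web SDK’s getting-started example: build a scene on top of a Web Audio context, route its binaural output to the destination, and attach an ordinary media element as a positioned 3D source. The import path and asset name are assumptions for illustration.

```typescript
// Minimal Resonance Audio Web SDK usage (browser). The package name below is
// assumed; the SDK is also distributed as a prebuilt script bundle.
import ResonanceAudio from 'resonance-audio';

const audioContext = new AudioContext();

// Create a scene and route its binaural output to the headphones/speakers.
const scene = new ResonanceAudio(audioContext);
scene.output.connect(audioContext.destination);

// Feed an ordinary media element into a spatialised source.
const audioElement = document.createElement('audio');
audioElement.src = 'example.wav'; // placeholder asset
const elementSource = audioContext.createMediaElementSource(audioElement);

const source = scene.createSource();
elementSource.connect(source.input);

// Place the source one metre to the listener's left, at ear height.
source.setPosition(-1, 0, 0);
audioElement.play();
```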

This broader cross-platform support means that developers can implement one sound design for their experience and expect it to perform consistently on both mobile and desktop platforms. In order to achieve this on mobile, where CPU resources are often very limited for audio, Resonance Audio features scalable performance using “highly optimized digital signal processing algorithms based on higher order Ambisonics to spatialize hundreds of simultaneous 3D sound sources, without compromising audio quality.” In broad terms, each source is cheaply encoded into a shared Ambisonic bed, and the expensive binaural rendering step then runs once on that bed rather than once per source, which keeps CPU cost roughly flat as source counts grow. A new feature in Unity for precomputing reverb effects for a given environment also ‘significantly reduces’ CPU usage during playback.
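For reference, an order-N Ambisonic bed carries (N + 1)² channels no matter how many sources were mixed into it, which is the property the scalability claim rests on:

```typescript
// Channels in an order-N Ambisonic bed: (N + 1)^2, independent of how many
// sound sources have been encoded into it.
const ambisonicChannels = (order: number): number => (order + 1) ** 2;

console.log(ambisonicChannels(1)); // 4  (first order / B-format)
console.log(ambisonicChannels(3)); // 16 (third order)
```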

Much like the existing VR Audio SDK, Resonance Audio is able to model complex sound environments, allowing control over the direction of acoustic wave propagation from individual sound sources. The width of each source can be specified, from a single point to a wall of sound. The SDK will also automatically render near-field effects for sound sources within arm’s reach of the user; near-field rendering accounts for the acoustic diffraction that occurs as sound waves travel across the head, and precise HRTFs improve the positioning accuracy of close sources. The team have also released an ‘Ambisonic recording tool’ that spatially captures sound design directly within Unity and saves it to a file for use elsewhere, such as game engines or YouTube videos.
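Resonance Audio’s documentation describes per-source directivity with two controls, a pattern shape (alpha) and a sharpness; the sketch below is a generic cardioid-family model of that kind, not the SDK’s actual code.

```typescript
// Generic cardioid-family directivity model. alpha = 0 is omnidirectional,
// alpha = 0.5 a cardioid, alpha = 1 a figure-of-eight; higher sharpness
// narrows the pattern. Illustrative only -- not Resonance Audio's source.
function directivityGain(theta: number, alpha: number, sharpness: number): number {
  // theta: angle (radians) between the source's forward axis and the listener.
  const pattern = (1 - alpha) + alpha * Math.cos(theta);
  return Math.abs(pattern) ** sharpness;
}

console.log(directivityGain(Math.PI / 2, 0.5, 1)); // 0.5: cardioid, 90 degrees off-axis
```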

Resonance Audio documentation is now available on the new developer site.

For PC VR users, Google just dropped Audio Factory on Steam, letting Rift and Vive owners get a taste of an experience built with the new Resonance Audio SDK; it is also available for Daydream users.
