Crowdfunded Off-Ear Speakers ‘VR Ears’ Delayed Until Summer 2021

VR Ears was successfully crowdfunded back in May, garnering nearly $200,000 over the course of its month-long Kickstarter campaign. Now developer Rebuff Reality says its off-ear speaker accessory for VR headsets will ship a few months later than previously planned.

The project’s Kickstarter says VR Ears offers “high performance audio” via its premium off-ear speakers and built-in Digital Amplifier and Signal Processor.

Featuring a clip-on design, it also supports a wide array of devices including Oculus Rift, Rift S, Oculus Quest, Quest 2, HTC Vive, Vive Pro, Vive Cosmos, PSVR, Valve Index, and Pimax 8K’s rigid headstrap variant.

VR Ears was slated to start shipping in December of this year; however, the Miami-based team now says it will officially begin shipping on July 15th, 2021.

The creators cite difficulties making hardware improvements during the global supply chain disruption caused by the COVID-19 pandemic.

Here’s the full statement to Kickstarter backers from Rebuff Reality:

Hi VR Ears Backers,

We have had a long journey this year making a ton of improvements to the design and audio performance of VR Ears, while at the same time dealing with the impact COVID-19 has had on the global supply chain. We are finally ready to begin the tooling process and release a firm shipment date. VR Ears will ship July 15, 2021, with full support for Oculus Quest 2, forward compatibility for all leading VR headsets, and standalone with our HeadStrap accessory.

We know this is not what you were hoping for, we feel the same way. All the funds we’ve gathered have been put to good use improving the product in the best way possible. We take to heart all the support the community has given us this year, just as we have done for TrackStrap, VR Power, VR Shell, and all our other products. VR Ears will be simply awesome. After using VR Ears, you won’t know how you lived without them, and won’t be able to go back to anything else.

Stay tuned for more updates as the tooling and validation process moves forward.

Rebuff Your Reality,

Joe Sciacchetano

Founder and CEO

It’s important to note that the tooling and validation processes come with their own challenges, so it’s slightly puzzling that Rebuff Reality can give such a precise release date this far ahead of having the final product in hand.

Granted, the company has prior manufacturing experience, offering a line of products such as VR Power, an external battery pack and counterweight for Quest and Quest 2, and VR Shell, an exterior faceplate protector for the original Quest. Still, the delay is a shame for VR users hoping to upgrade to an audio system similar to what Valve Index features natively.

For latecomers to the Kickstarter, Rebuff Reality is currently taking pre-orders on Indiegogo. Early Bird tiers are still available at around 40% off the $150 MSRP, which comes to $89 (€75) for a pair of VR Ears and a single set of clips.

The post Crowdfunded Off-Ear Speakers ‘VR Ears’ Delayed Until Summer 2021 appeared first on Road to VR.

VR Ears Doubles Kickstarter Funding Goal In Less Than 24 Hours

And just like that, Rebuff Reality’s VR Ears peripheral is funded on Kickstarter. Twice over, even.

Yesterday we reported on the launch of the crowdfunding campaign for this VR audio upgrade peripheral. Rebuff was looking to raise $30,000 by May 21. At the time of writing, the campaign has doubled that total, reaching over $60,000 with the help of over 600 backers. All of the 500 super early bird units have been claimed too.


Guess people really want to upgrade their VR audio, huh?

VR Ears essentially takes the off-ear speaker design of the Valve Index and brings it to other headsets like the Oculus Quest, Oculus Rift S, original HTC Vive, and PSVR. It features two speakers that attach to either side of your headset and can be adjusted to suit you. We got to try the kit briefly at CES earlier in the year and were impressed with the results, though we haven’t sampled the final product.

With the super early bird tier now used up, you can effectively pre-order a pair of the speakers for $89 in another early bird offer, limited to 2,000 backers. This includes clips and cables for a single headset.

With the initial goal settled, the company is now lining up a range of extra peripherals as stretch goals, including universal headstraps, carrying cases and more. The campaign suggests backers will get 30% off these products. If it hits $1,000,000, though, Rebuff says it will give all of them away for free to those who backed the super early bird tier and above.

Rebuff is expecting to start shipping VR Ears in November 2020, though people who back now are more likely to receive a unit in December.

The post VR Ears Doubles Kickstarter Funding Goal In Less Than 24 Hours appeared first on UploadVR.

Oculus Brings More Lifelike Sound Propagation to Audio SDK 1.34

Despite being oft overlooked in favor of more attention-grabbing visuals, audio is an essential component of creating presence in VR. In its quest to create increasingly lifelike audio in VR environments, Oculus recently pushed out an update to its Audio SDK that lets developers create more realistic audio by generating real-time reverb and occlusion based on the app’s geometry.

Now in beta, the so-called ‘Audio Propagation’ tool arrives in the Oculus Audio SDK 1.34 update, which produces more accurate audio propagation with “minimal set up,” the company says in a developer blog post.

The Audio Propagation tool generates reverb from game geometry, so to change how a scene sounds, developers simply tag the applicable meshes in the scene and select an acoustic material for each one; options include plaster on brick, ceramic tile, and carpet, to name a few.

The update also comes with reverb models for several types of spaces, including indoor, outdoor, and asymmetrical spaces, setting it apart from conventional reverb solutions.

Facebook Reality Labs previously teased some of this in their OC5 developer talk entitled ‘Spatial Audio for Oculus Quest and Beyond’.

The video goes on to explain that state of the art ‘AAA’ games tend to implement a work-intensive process of adding independent reverb presets for each room in a scene and then fading between them as the user moves from one room to another—hardly how sound travels in the physical world. Some developers implement a portal system to handle occlusion problems as well.

Oculus’ solution is real-time rather than prebaked; it’s touted as quicker for developers to implement, and it allows dynamic geometry, like a door opening or closing, to still produce correctly reverberated audio.

SEE ALSO
VR Headset Growth on Steam Makes Biggest Leap Yet, Eclipses Linux Steam Users

Valve’s Steam Audio plugin provides both baked and real-time options; however, the company says in its lengthy Unity set-up guide that real-time simulation “incurs a CPU overhead.” Just how much overhead Oculus’ solution incurs, we aren’t sure at this time.

This isn’t Oculus’ first go at more realistic audio. Previously, the Audio SDK included something the company calls ‘the shoebox model’, which essentially created a standard-sized cube around you that direct sounds would then bounce off of.

Oculus provides Audio Propagation guides for both Unity and Unreal Engine. While we haven’t experienced the results for ourselves yet, we’re hoping the company’s stalwart support of real-time, geometry-based audio propagation will become a standard in VR games and apps to come.

The post Oculus Brings More Lifelike Sound Propagation to Audio SDK 1.34 appeared first on Road to VR.

Oculus Audio SDK Update Adds Geometry-Based Sound Propagation


The latest update to the Oculus Audio SDK adds the long awaited dynamic audio propagation feature.

The Audio SDK spatializes audio sources in real time using head-related transfer functions (HRTF). It also allows for volumetric and ambisonic sounds. This new update improves how it handles reflections and occlusion.
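For readers unfamiliar with HRTF rendering, the core idea can be sketched in a few lines of Python. This is a toy illustration with made-up impulse responses, not the Oculus SDK’s actual API: each ear gets the mono source convolved with its own head-related impulse response (HRIR).

```python
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Convolve a mono source with per-ear head-related impulse
    responses (HRIRs) to produce a binaural (left/right) signal."""
    return np.stack([np.convolve(mono, hrir_left),
                     np.convolve(mono, hrir_right)])

# Toy HRIRs: the far ear hears a delayed, attenuated copy of the sound.
hrir_l = np.array([1.0, 0.0, 0.0])
hrir_r = np.array([0.0, 0.0, 0.6])
binaural = spatialize(np.array([1.0, 0.5]), hrir_l, hrir_r)
```

Real HRTF sets are measured responses hundreds of samples long, and the SDK interpolates between them as the head moves; the principle, though, is exactly this per-ear filtering.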

The Old Behavior

The spatializer originally simulated audio reflections by assuming a predefined rectangular room around the user. That, however, assumed the user was at the center of the room, and it breaks down as the user moves around a scene.

In early 2018 a feature called Dynamic Room Modeling was added. This allows developers to define the current room as a 3D rectangle with a position. When the user changes to a new room the developer can update the rectangle for the new space.

This required a relatively large amount of effort on the developer’s part, however, and only fully works in perfectly rectangular spaces. It also couldn’t model the transition between differently sized spaces, such as going from inside to outside.

The New Update

The new update accurately models occlusion and reflections of sound in real time based on the scene geometry. The developer simply needs to tag each object with an acoustic material to let it know how it should absorb or reflect sound. Materials like carpet will absorb far more than materials like metal.
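As a rough sketch of what material tagging buys the simulation, here is a toy energy model in Python. The coefficients are illustrative placeholders, not Oculus’ real material data:

```python
# Illustrative per-bounce absorption coefficients (0 = perfect mirror,
# 1 = absorbs everything); not the SDK's actual values.
MATERIALS = {"carpet": 0.6, "plaster": 0.05, "metal": 0.02}

def energy_after_bounces(incident, material, bounces):
    """Sound energy remaining after reflecting off a tagged surface
    the given number of times."""
    return incident * (1.0 - MATERIALS[material]) ** bounces

# After three bounces, carpet has soaked up most of the energy,
# while metal has reflected nearly all of it back into the room.
carpet_left = energy_after_bounces(1.0, "carpet", 3)
metal_left = energy_after_bounces(1.0, "metal", 3)
```

The real propagation system traces many such reflection paths through the scene geometry per frame, but each bounce reduces to this kind of per-material attenuation.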

How the Audio SDK now ‘sees’ a scene

Valve’s competing Steam Audio has had geometry-based occlusion since late last year, but its reflections have to be prebaked. Facebook’s new update brings VR audio to a new level of realism by modelling reverb in real time. The simulation reportedly performs well on mobile VR, even with many sound sources, which will be important for the upcoming Oculus Quest.

UPDATE: article previously stated that Steam Audio had feature parity. Thanks to reddit user /u/Hethree for the correction.


The post Oculus Audio SDK Update Adds Geometry-Based Sound Propagation appeared first on UploadVR.

1MORE Unveil Head-Tracking Gaming Headphones at E3

Sound is an important and underappreciated aspect of the videogaming experience. Music and sound design can make a big difference to the atmosphere and feel of a videogame, particularly within virtual reality (VR). Recognising this, audio company 1MORE have announced a new line of gaming headphones designed for immersive experiences.

1MORE announced its Bluetooth in-ear Gaming Headphones (VR BT) and the Spearhead VRX Gaming Over-Ear headphones (VRX) featuring Waves Nx head-tracking technology at E3 2018.

1MORE Spearhead VRX Gaming Headphones Featuring Waves Nx® Head Tracking Technology (PRNewsfoto/1MORE)

The VRX model is aimed at hardcore gamers and fans of immersive experiences such as high-end VR while the VR BT is designed to bring the same quality of audio experience to mobile videogames.

The Waves Nx technology tracks the listener’s head movement and, combined with virtual room emulation, immerses them in a sound environment that makes them feel as though they are inside the videogame.

The company believes that movements, both intentional and unintentional, form a critical part of the audio experience, and so considers the way most headphones handle sound to be akin to a pair of glasses that only shows you a fixed image.

“The Spearhead VRX Gaming Over-Ear Headphones are the latest and greatest effort by 1MORE to continue to deliver superior sound to its consumers with innovation and value,” says 1MORE Co-Founder Frank Lin. “The VRX deliver the goods and represents our dedication to bringing not only excellent features but the ultimate sound to your gaming experience,” he added.


The VR BT headphones feature a 35-foot connectivity range and low latency along with real-time sound reproduction and a 10-minute fast charging time. The in-ear headphones also feature customisable LED lighting and Environmental Noise Cancelling, which is also available in the over-ear VRX.

The VRX will be available from August 2018 on Amazon and the 1MORE website, priced at $249.99 (USD). The VR BT is set for a Q4 2018 release and is planned to retail at $89.99. Further information can be found on the 1MORE website.

For future coverage of VR hardware and accessories, keep checking back with VRFocus.

Valve’s ‘Steam Audio’ Spatial Sound System Gets Support for AMD TrueAudio Next, FMOD

Steam Audio, Valve’s free made-for-VR spatial audio system, has added support for AMD’s GPU-based TrueAudio Next technology, as well as the popular FMOD audio software.

AMD TrueAudio Next

Today Valve announced the release of Steam Audio 2.0 beta 13, which adds support for AMD’s TrueAudio Next, a technology that allows developers to run certain audio-processing tasks on supported AMD GPUs, including convolution, a method of filtering audio to layer in spatial and environmental effects. As described in Valve’s announcement of the new feature:

Audio engines used for games and VR usually support convolution reverb: various combinations of audio data can be filtered using convolution, to add reverb, after the user has specified an [impulse response] in some way. This is usually real-time convolution, which means that audio is filtered as it is being played. [Impulse responses] used for reverb are often quite long: representing the IR for an airplane hangar with a 5-second reverb at a 48 kHz sampling rate requires (5 * 48,000) = 240,000 audio samples = around 935 KB. Real-time convolution is a very compute-intensive operation, and audio engines usually allow only 2-4 convolution reverb filters running simultaneously, to keep CPU usage within acceptable limits. Convolution reverb can model a wide range of acoustic phenomena, and leads to an increased sense of presence, which is especially important in VR. However, because it is a compute-intensive algorithm, it is often avoided in favor of less compute-intensive, less detailed reverb algorithms.

TrueAudio Next uses the GPU compute capabilities of supported AMD GPUs to accelerate convolution, allowing developers to increase the acoustic complexity and detail in their games and VR applications using convolution reverb.
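A quick back-of-the-envelope check of the arithmetic in Valve’s quoted example, assuming 32-bit float samples (which lands near the ~935 KB figure cited):

```python
SAMPLE_RATE_HZ = 48_000
REVERB_SECONDS = 5
BYTES_PER_SAMPLE = 4  # assuming 32-bit float audio samples

ir_samples = SAMPLE_RATE_HZ * REVERB_SECONDS    # 240,000 samples
ir_kib = ir_samples * BYTES_PER_SAMPLE / 1024   # roughly 937 KB
```

Convolving every output sample against an impulse response of that length for each active source is what makes real-time convolution reverb so compute-hungry, and why offloading it to the GPU is attractive.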

TrueAudio Next also supports something called Resource Reservation, which allows developers to dedicate a portion of the GPU’s processing power (up to 25%) to spatial audio, effectively offering parallel processing of image rendering and audio processing. This can be enabled and disabled on the fly, allowing full use of the GPU for image processing when needed.

Image courtesy Valve, AMD

Valve says that the OpenCL-based convolution processing running on the GPU via TrueAudio Next shows significant gains over the same convolution tasks running on the CPU, which means you’re not only offloading processing to the GPU, but also freeing up extra CPU processing for other tasks.

Image courtesy Valve

While Valve advises that “Steam Audio’s CPU-based convolution is highly efficient, and is sufficient for a wide variety of games and VR applications […],” they note that “[TrueAudio Next] allows us to give developers the option of reaching an even higher level of auditory complexity: increasing the number of sources, or the Ambisonics order for indirect sound, or the IR length, etc.”

The announcement goes into detail about the potential performance benefits and impacts of TrueAudio Next, and specifies that the technology is supported by the following GPUs on Windows 10: RX 470 Series, RX 480 Series, RX 570 Series, RX 580 Series, R9 Fury, R9 Fury X, and Pro Duo.

Valve, which doesn’t often wade into GPU-specific technologies, also addresses an important question: why?

There are two main reasons why we chose to support TrueAudio Next in Steam® Audio:

  • If we can provide developers with more flexibility in choosing how their audio processing workloads are balanced on a user’s PC, we want to do that. TrueAudio Next allows developers to choose how CPU and GPU resources work together to provide a compelling audio experience on the user’s PC.
  • If we can allow developers to offer an optional higher level of audio detail with their existing content on PCs that are powerful enough, we want to do that. With Steam® Audio, developers just specify higher settings to be used if a TAN-capable GPU is installed on the user’s PC. Developers do not have to re-author any content to work with TAN. Steam® Audio can be used to seamlessly select either CPU- or GPU-based convolution, depending on the user’s hardware configuration.

Steam® Audio’s support for TrueAudio Next does not in any way restrict Steam® Audio to specific hardware or platforms. We hope that our support for TrueAudio Next encourages hardware and platform vendors to provide more options for developers to balance their audio workloads against graphics, physics, and other workloads, which in turn will help them create better audio experiences for their users.

FMOD Studio

As of Steam Audio beta 12, released in January, the software now offers a plugin for the popular FMOD audio authoring tool. Valve says the plugin “lets you use the full range of spatial audio functionality available in Steam Audio—including HRTF, occlusion, physics-based sound propagation, and baking—to projects that use FMOD Studio.”

The post Valve’s ‘Steam Audio’ Spatial Sound System Gets Support for AMD TrueAudio Next, FMOD appeared first on Road to VR.

‘MuX’ Lets You Build Wild Musical Instruments from Scratch, Now in Early Access

Described by Danish developer Decochon as a “revolutionary music sandbox for VR”, MuX features virtual, low-level synth components that can be connected together and adjusted to generate unique sounds. The various tools allow the creation of complex electronic instruments, which can be easily shared with the community.

Now in open beta via Steam Early Access, MuX is an intriguing addition to the growing library of creative audio software for VR. Currently, only HTC Vive hardware is officially supported, but Rift users have reported success operating in SteamVR mode, albeit with incorrectly-shaped virtual controllers.

Presented with a number of virtual tools in a room-scale space, the user is able to construct all manner of sound generators, using a fundamental oscillator component combined with various modifiers. Most components feature adjustable dials for fine tuning, and serve as the building blocks for the creation of potentially enormous virtual instruments. These can be played manually with motion control, or triggered by buttons, switches, or metronomes. Alternatively, a marble spawning system can be used, allowing the construction of Rube Goldberg-type music machines that trigger sounds as a result of the physics simulation, as shown in the video below:

MuX’s inviting visual presentation, with clean, flat-shaded geometry and a muted colour palette, disguises its complexity, as the modular components currently allow low-level access to the fundamentals of sound synthesis. A set of somewhat outdated tutorials (created for alpha testers) can be found on Decochon’s YouTube channel, but this is an area that needs serious attention if MuX is to become accessible to a wide audience.

As explained on the Steam store page, the software is due to remain in Early Access for a year, as the experimental nature of the tools means that users are likely to do unpredictable things. “While developing and testing MuX, we found people using it in ways we hadn’t expected,” writes Decochon. “They also made music and sound that surprised us, things we couldn’t have made ourselves. As we continue to develop and expand MuX, we find Early Access an opportunity to become informed and inspired by what others might create. MuX is an instrument, ready to be played, explored, and enjoyed by others than just us.”

The post ‘MuX’ Lets You Build Wild Musical Instruments from Scratch, Now in Early Access appeared first on Road to VR.

Google Releases ‘Resonance Audio’, a New Multi-Platform Spatial Audio SDK

Google today released a new spatial audio software development kit called ‘Resonance Audio’, a cross-platform tool based on technology from their existing VR Audio SDK. Resonance Audio aims to make VR and AR development easier across mobile and desktop platforms.

Google’s spatial audio support for VR is well-established: the company introduced the technology to the Cardboard SDK in January 2016 and brought its audio rendering engine to the main Google VR SDK in May 2016, which saw several improvements in the Daydream 2.0 update earlier this year. Google’s existing VR SDK audio engine already supported multiple platforms, but with platform-specific documentation on how to implement the features. In February, a post on Google’s official blog recognised the “confusing and time-consuming” battle of working with various audio tools, and described the development of streamlined FMOD and Wwise plugins for multiple platforms on both Unity and Unreal Engine.

Image courtesy Google

The new Resonance Audio SDK consolidates these efforts, working ‘at scale’ across mobile and desktop platforms, which should simplify development workflows for spatial audio in any VR/AR game or experience. According to the press release provided to Road to VR, the new SDK supports “the most popular game engines, audio engines, and digital audio workstations” running on Android, iOS, Windows, MacOS, and Linux. Google are providing integrations for “Unity, Unreal Engine, FMOD, Wwise, and DAWs,” along with “native APIs for C/C++, Java, Objective-C, and the web.”

This broader cross-platform support means that developers can implement one sound design for their experience that should perform consistently on both mobile and desktop platforms. In order to achieve this on mobile, where CPU resources are often very limited for audio, Resonance Audio features scalable performance using “highly optimized digital signal processing algorithms based on higher order Ambisonics to spatialize hundreds of simultaneous 3D sound sources, without compromising audio quality.” A new feature in Unity for precomputing reverb effects for a given environment also ‘significantly reduces’ CPU usage during playback.
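For context on what Ambisonics-based spatialization means, here is a minimal first-order encoder in Python. Resonance Audio uses higher orders for sharper localization; this sketch only shows the principle, using the standard B-format channel definitions:

```python
import math

def encode_foa(sample, azimuth, elevation):
    """Encode one mono sample into first-order B-format (W, X, Y, Z).
    However many sources are encoded, the mix stays four channels,
    which is part of why Ambisonics scales well to many sources."""
    w = sample / math.sqrt(2)                            # omnidirectional
    x = sample * math.cos(azimuth) * math.cos(elevation)  # front-back
    y = sample * math.sin(azimuth) * math.cos(elevation)  # left-right
    z = sample * math.sin(elevation)                      # up-down
    return w, x, y, z

# A source straight ahead contributes only to the W and X channels.
w, x, y, z = encode_foa(1.0, azimuth=0.0, elevation=0.0)
```

The expensive HRTF filtering then happens once per Ambisonic channel rather than once per source, which is how hundreds of simultaneous sources stay affordable on mobile CPUs.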

Much like the existing VR Audio SDK, Resonance Audio is able to model complex sound environments, allowing control over the direction of acoustic wave propagation from individual sound sources. The width of each source can be specified, from a single point to a wall of sound. The SDK will also automatically render near-field effects for sound sources within arm’s reach of the user. Near-field audio rendering takes acoustic diffraction into account, as sound waves travel across the head. By using precise HRTFs, the accuracy of close sound source positioning can be increased. The team have also released an ‘Ambisonic recording tool’ to spatially capture sound design directly within Unity, which can be saved to a file for use elsewhere, such as game engines or YouTube videos.

Resonance Audio documentation is now available on the new developer site.

For PC VR users, Google just dropped Audio Factory on Steam, letting Rift and Vive owners get a taste of an experience that implements the new Resonance Audio SDK. Daydream users can try it out here too.

The post Google Releases ‘Resonance Audio’, a New Multi-Platform Spatial Audio SDK appeared first on Road to VR.

Oculus Details New Functionality in Audio SDK – Near-field 3D Audio and Volumetric Sound Sources

In a new series of articles on the official Oculus Developer Blog, the company is offering an overview of additions to the Oculus Audio SDK, specifically new techniques for near-field 3D audio and volumetric sound source rendering. These articles serve as a primer for Oculus Connect 4, taking place October 11th and 12th, which will feature presentations about their “breakthroughs in spatial audio technologies.”

Near-field audio rendering aims to further enhance the realism of spatial audio in VR, particularly for sound sources within arm’s reach of the user. This is achieved with more precise HRTFs that take acoustic diffraction into account, as the head will bend higher frequency sound waves, causing an ‘acoustic shadow’ that is simulated with different filtering effects for each ear, in response to the direction of the sound source.
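A crude way to picture the ‘acoustic shadow’ is a low-pass filter applied to the ear facing away from the source. The one-pole filter below is a toy Python sketch, not Oculus’ implementation, but it dulls high frequencies in roughly the way the head does:

```python
def head_shadow_lowpass(signal, alpha=0.5):
    """One-pole low-pass filter: a rough stand-in for how the head
    attenuates high frequencies at the contralateral (far) ear."""
    out, state = [], 0.0
    for s in signal:
        state = alpha * s + (1.0 - alpha) * state
        out.append(state)
    return out

# An impulse arrives smeared and attenuated at the shadowed ear.
shadowed = head_shadow_lowpass([1.0, 0.0, 0.0])
```

Applying different filtering per ear as a function of source direction is the essence of the effect; real near-field HRTFs are measured or simulated responses far richer than a single pole.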

A second article on volumetric sound sources discusses the problem of using single point sources for large objects or characters in spatial audio rendering—a single point source tends to sound unnatural, like it is coming only from the centre of the object. Oculus Research’s solution was to develop a “process to compute the projection based on the distance and radius”, using spherical harmonics to create a “physically correct and high performance way to represent large sound sources.”

Both articles go into quite a bit of detail about how Oculus is thinking about and attempting to solve these problems. We expect to hear more on this topic from Oculus at the Connect conference in October.

The post Oculus Details New Functionality in Audio SDK – Near-field 3D Audio and Volumetric Sound Sources appeared first on Road to VR.

Oculus to Talk “Breakthroughs in Spatial Audio Technologies” at Connect Conference

Oculus Connect 4, the company’s fourth annual developer conference, is set for October 11th and 12th in San Jose, California. There, Oculus will share with developers some of its latest research and developments, including what’s coming to the company’s VR Audio SDK.

See Also: NVIDIA Shows How Physically-based Audio Can Greatly Enhance VR Immersion

Spatial audio is hugely important for creating convincing virtual reality worlds. Traditional stereo audio often sounds like it emanates from within your head. In VR, most sounds need to have distinct sources that sound as if they’re coming from somewhere within the virtual world, just like they would in real life. But simulating realistic sounds in complex 3D environments isn’t as easy as it may seem, especially if you need to do so accurately and efficiently. Many companies have been working on the challenge of spatial audio in VR, with varying degrees of complexity and success.

At Connect 2017 in October, Oculus Audio Design Manager Tom Smurdon and Software Engineering Manager Pete Stirling will take to the stage in a session titled ‘2017 Breakthroughs in Spatial Audio Technologies’ to overview the latest spatial audio tech devised by the company.

Get up to speed on key terminology and concepts you need to know, then dive directly into the newest audio tech developed by Oculus. We’ll cover how new techniques and tools like Near Field HRTF and Volumetric Sound Sources help create dramatically increased immersion for people experiencing your game or app. Attendees will also get a first look at what’s coming in the Audio SDK roadmap.

The session description also promises to give attendees a first look at what’s coming to the Oculus Audio SDK, implying that whatever new spatial audio tech the company has cooked up will soon be rolled into the SDK.

The session is among more than 30 expected at the developer conference, 14 of which are now revealed on the Oculus Connect schedule.

The post Oculus to Talk “Breakthroughs in Spatial Audio Technologies” at Connect Conference appeared first on Road to VR.