Meta & BMW Are Integrating AR/VR Headsets into Cars, Release Timeline Uncertain

Meta CEO Mark Zuckerberg shared an update on the company’s research partnership with BMW, first announced in 2021, which focuses on integrating AR and VR into vehicles to make people more productive, social, and entertained while traveling.

The ultimate aim of the BMW/Meta partnership is to accurately anchor virtual objects relative to the car’s motion by hooking into the tracking systems of both the car and a Meta headset, which researchers say includes the Meta Quest Pro standalone mixed reality headset and the company’s Project Aria research glasses.

Without such a system in place, the headset’s rotational tracking would noticeably drift as the car makes turns and other adjustments, making it essentially unusable for anything but perfectly straight sections of road.
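
To make the idea concrete, here’s a minimal sketch, our own illustration rather than anything from Meta or BMW, of the core operation such a system needs: expressing the headset’s orientation in the car-cabin frame by cancelling out the vehicle’s own rotation.

    # Minimal sketch: cancel vehicle motion out of headset tracking.
    # Quaternions are in [x, y, z, w] order; the variable names are illustrative.
    from scipy.spatial.transform import Rotation as R

    def cabin_relative_orientation(headset_world_quat, car_world_quat):
        """Return the headset's orientation expressed in the car-cabin frame.

        Both inputs are world-frame orientations: the headset's comes from its
        own inside-out tracker, the car's from the vehicle's IMU/GNSS stack.
        Composing with the inverse of the car's rotation removes turns from
        what the headset would otherwise perceive as its own motion.
        """
        headset = R.from_quat(headset_world_quat)
        car = R.from_quat(car_world_quat)
        return car.inv() * headset

    # Example: the car has yawed 90° into a turn while the rider kept looking
    # straight ahead relative to the cabin; in the cabin frame, nothing moved.
    car_q = R.from_euler("z", 90, degrees=True).as_quat()
    head_q = R.from_euler("z", 90, degrees=True).as_quat()
    print(cabin_relative_orientation(head_q, car_q).as_euler("xyz", degrees=True))
    # -> approximately [0, 0, 0]

A production system would do this across full 6DOF poses and account for sensor timing and drift, but this cancellation step is the heart of keeping virtual content anchored through turns.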

Check out the video detailing the research below:

Meta says the project, though still a proof-of-concept prototype, has already overcome some key technical challenges, such as fusing the headset’s and the car’s sensors to understand their relative position. That said, the companies don’t think it’s ready for the public just yet.

“It is too early to tell exactly how or when this technology will make it into customers’ hands, but we envision a number of potential use cases for XR devices in vehicles—from assisting the driver in locating their car in a crowded parking lot to alerting them to hazards on the road and surfacing important information about the vehicle’s condition,” said Claus Dorrer, Head of BMW Group Technology Office in the US. “The implications of future AR glasses and VR devices—for passengers as well as drivers—are promising. The research partnership with Meta will allow us to discover what immersive, in-vehicle XR experiences could look like in the future and spearhead the seamless integration of such devices into cars.”

AR and VR integration in cars isn’t an entirely new area of research. It’s been the sole focus of Audi-backed startup Holoride, which recently partnered with HTC to deliver in-car VR entertainment via HTC Vive Flow. Still, Holoride has mostly grabbed headlines as a tradeshow mainstay; it hasn’t seen mass adoption yet, despite requiring only a $200 retrofit pack that lets Vive Flow owners play VR in cars.

In the end, it seems car companies are seeing the writing on the wall: riders may soon, though not quite yet, want to bring their own XR devices and actually use them in the car, just like you might a smartphone, albeit with more utility than any infotainment screen on offer.

Meta Reveals VR Headset Prototypes Designed to Make VR ‘Indistinguishable From Reality’

Meta says its ultimate goal with its VR hardware is to make a comfortable, compact headset with visual fidelity that’s ‘indistinguishable from reality’. Today the company revealed its latest VR headset prototypes, which it says represent steps toward that goal.

Meta has made it no secret that it’s pouring tens of billions of dollars into its XR efforts, much of which goes to long-term R&D through its Reality Labs Research division. Apparently in an effort to shine a bit of light on what that money is actually accomplishing, the company invited a group of press to sit down for a look at its latest accomplishments in VR hardware R&D.

Reaching the Bar

To start, Meta CEO Mark Zuckerberg spoke alongside Reality Labs Chief Scientist Michael Abrash to explain that the company’s ultimate goal is to build VR hardware that meets all the visual requirements to be accepted as “real” by your visual system.

VR headsets today are impressively immersive, but there’s still no question that what you’re looking at is, well… virtual.

Inside its Reality Labs Research division, Meta uses the term ‘visual Turing Test’ to represent the bar that needs to be met to convince your visual system that what’s inside the headset is actually real. The concept is borrowed from the original Turing Test, which denotes the point at which a human can no longer tell the difference between another human and an artificial intelligence.

To completely convince your visual system that what’s inside the headset is actually real, Meta says, a headset needs to pass that visual Turing Test.

Four Challenges

Zuckerberg and Abrash outlined what they see as four key visual challenges that VR headsets need to solve before the visual Turing Test can be passed: varifocal, distortion, retina resolution, and HDR.

Briefly, here’s what those mean:

  • Varifocal: the ability to focus on arbitrary depths of the virtual scene, with both essential focus functions of the eyes (vergence and accommodation)
  • Distortion: lenses inherently distort the light that passes through them, often creating artifacts like color separation and pupil swim that make the existence of the lens obvious.
  • Retina resolution: having enough resolution in the display to meet or exceed the resolving power of the human eye, such that there’s no evidence of underlying pixels
  • HDR: high dynamic range, which describes the range of darkness and brightness we experience in the real world (and which almost no display today can properly reproduce).

The Display Systems Research team at Reality Labs has built prototypes that function as proof-of-concepts for potential solutions to these challenges.

Varifocal

Image courtesy Meta

To address varifocal, the team developed a series of prototypes which it called ‘Half Dome’. In that series the company first explored a varifocal design which used a mechanically moving display to change the distance between the display and the lens, thus changing the focal depth of the image. Later the team moved to a solid-state electronic system which resulted in varifocal optics that were significantly more compact, reliable, and silent. We’ve covered the Half Dome prototypes in greater detail here if you want to know more.
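
The optics behind the mechanical approach come down to the textbook thin-lens relation (our summary, not a figure from Meta): with the display placed a distance d_o inside the focal length f of the lens, the virtual image appears at

    d_i = \frac{f \, d_o}{f - d_o}, \qquad d_o < f

As d_o approaches f the image recedes toward optical infinity, and pulling the display slightly closer to the lens brings the image nearer, so millimeter-scale display travel is enough to sweep the focal plane across a wide range of depths; the later solid-state Half Dome versions achieve the same effect without moving parts.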

Virtual Reality… For Lenses

As for distortion, Abrash explained that experimenting with lens designs and distortion-correction algorithms that are specific to those lens designs is a cumbersome process. Novel lenses can’t be made quickly, he said, and once they are made they still need to be carefully integrated into a headset.

To allow the Display Systems Research team to work more quickly on the issue, the team built a ‘distortion simulator’, which actually emulates a VR headset using a 3DTV and simulates lenses (and their corresponding distortion-correction algorithms) in software.

Image courtesy Meta

Doing so has allowed the team to iterate on the problem more quickly. The key challenge is to dynamically correct lens distortions as the eye moves, rather than only correcting for what’s seen when the eye looks through the very center of the lens.
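
As an illustration of the kind of correction being simulated, here’s a minimal sketch, not Meta’s code, of a simple radial pre-distortion whose coefficients are chosen per gaze direction; the coefficient values and the nearest-neighbor lookup are stand-ins for a real, densely calibrated model.

    import numpy as np

    def predistort(uv, k1, k2):
        """Radially pre-distort normalized image coords (centered at 0,0)."""
        r2 = np.sum(uv ** 2, axis=-1, keepdims=True)
        return uv * (1.0 + k1 * r2 + k2 * r2 ** 2)

    def gaze_dependent_correction(uv, gaze_angle_deg, coeff_table):
        """Pick distortion coefficients for the current gaze angle and apply them.

        A real pipeline would interpolate a dense, lens-specific calibration;
        here coeff_table is just {gaze_angle_deg: (k1, k2)} with made-up values.
        """
        nearest = min(coeff_table, key=lambda a: abs(a - gaze_angle_deg))
        k1, k2 = coeff_table[nearest]
        return predistort(uv, k1, k2)

    coeffs = {0: (0.22, 0.05), 10: (0.25, 0.06), 20: (0.30, 0.08)}  # illustrative
    uv = np.array([[0.5, 0.5], [0.1, -0.2]])
    print(gaze_dependent_correction(uv, 12.0, coeffs))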

Retina Resolution

Image courtesy Meta

On the retina resolution front, Meta revealed a previously unseen headset prototype called Butterscotch, which the company says achieves a retina resolution of 60 pixels per degree, enough for 20/20 visual acuity. To do so, the team used extremely pixel-dense displays and shrank the field-of-view to about half that of Quest 2 in order to concentrate the pixels over a smaller area. The company says it also developed a “hybrid lens” that can “fully resolve” the increased resolution, and it shared through-the-lens comparisons between the original Rift, Quest 2, and the Butterscotch prototype.

Image courtesy Meta

While there are already headsets on the market that offer retina resolution—like Varjo’s VR-3—only a small area in the middle of the view (27° × 27°) hits the 60 PPD mark; anything outside that area drops to 30 PPD or lower. Ostensibly Meta’s Butterscotch prototype delivers 60 PPD across the entirety of its field-of-view, though the company didn’t explain to what extent resolution falls off toward the edges of the lens.
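
For a sense of the arithmetic behind the pixels-per-degree figure (ballpark numbers of our own, not Meta’s):

    \text{PPD} \approx \frac{\text{horizontal pixels per eye}}{\text{horizontal FOV in degrees}}

Roughly 1,800 horizontal pixels spread across a ~90° field-of-view works out to about 20 PPD, which is the ballpark for Quest 2; concentrating an even denser panel over roughly half that field-of-view is how a prototype can reach the ~60 PPD retina threshold.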

Continue on Page 2: High Dynamic Range, Downsizing »

Meta Research Explores a New Solution to One of VR’s Biggest Display Challenges

New research from Kent State University and Meta Reality Labs has demonstrated large dynamic focus liquid crystal lenses which could be used to create varifocal VR headsets.

Vergence-Accommodation Conflict in a Nutshell

In the VR R&D space, one of the hot topics is finding a practical solution for the so-called vergence-accommodation conflict (VAC). All consumer VR headsets on the market to date render imagery using stereoscopy, which creates 3D visuals that support the vergence reflex of a pair of eyes (when they converge on an object to form a stereo image), but not the accommodation reflex of each individual eye (when the lens of the eye changes shape to focus light from different depths).

In the real world these two reflexes always work in tandem, but in VR they become disconnected: the eyes continue to converge where needed, but their accommodation remains fixed because all the light is coming from the same distance (the display). Researchers in the field say VAC can cause eye strain, make it difficult to focus on close imagery, and may even limit visual immersion.
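
In numbers (a textbook relation, not something from the paper), accommodation demand is the reciprocal of viewing distance, measured in diopters:

    D = \frac{1}{d_{\text{metres}}}

A fixed-focus headset places the display’s virtual image at a single distance, commonly somewhere around 1.3 to 2 meters (roughly 0.5 to 0.75 D), so the eye must hold that accommodation whether the eyes are converging on an object rendered at 0.3 m (about 3.3 D) or at infinity (0 D). That gap between where the eyes converge and where they focus is the conflict.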

Seeking a Solution

There have been plenty of experiments with technologies that could enable varifocal headsets that correctly support both vergence and accommodation, for instance holographic displays and multiple focal planes. But it seems none have cracked the code on a practical, cost-effective, and mass-producible solution to VAC.

Another potential solution to VAC is dynamic focus liquid crystal (LC) lenses which can change their focal length as their voltage is adjusted. According to a Kent State University graduate student project with funding and participation from Meta Reality Labs, such lenses have been demonstrated previously, but mostly in very small sizes because the switching time (how quickly focus can be changed) significantly slows down as size increases.

Image courtesy Bhowmick et al., SID Display Week

To reach the size of dynamic focus lens that you’d want if you were to build it into a contemporary VR headset—while keeping switching time low enough—the researchers have devised a large dynamic focus LC lens with a series of ‘phase resets’, which they compare to the rings used in a Fresnel lens. Instead of segmenting the lens in order to reduce its width (as with Fresnel), the phase reset segments are powered separately from one another so the liquid crystals within each segment can still switch quickly enough to be practical for use in a varifocal headset.

A Large, Experimental Lens

In new research presented at the SID Display Week 2022 conference, the researchers characterized a 5cm dynamic focus LC lens to measure its capabilities and identify strengths and weaknesses.

On the ‘strengths’ side, the researchers show the dynamic focus lens achieves high image quality toward the center of the lens while supporting a dynamic focus range from -0.80 D to +0.80 D and a sub-500ms switching time.

For reference, in a 90Hz headset a new frame is shown to the user every 11ms (90 times per second), while a 500ms switching time is the equivalent of 2Hz (twice per second). While that’s much slower than the framerate of the headset, it may be fast enough in practice, considering the rate at which the eye itself adjusts to a new focal distance. Further, the researchers say the switching time can be reduced by stacking multiple lenses.
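
Spelling out the arithmetic (the diopter conversion at the end is our own rough illustration):

    T_{\text{frame}} = \frac{1}{90\,\text{Hz}} \approx 11\,\text{ms},
    \qquad
    f_{\text{switch}} = \frac{1}{500\,\text{ms}} = 2\,\text{Hz}

And since optical power is the reciprocal of focal distance (D = 1/d), the lens’s 1.6 D of total swing is roughly the difference between focusing at infinity and at about 0.6 m, once the dynamic element is paired with a suitable fixed lens.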

Image courtesy Bhowmick et al., SID Display Week

On the ‘weaknesses’ side, the researchers find that the dynamic focus LC lens suffers from a reduction in image quality as the view approaches the edge of the lens due to the phase reset segments—similar in concept to the light scattering due to the ridges in a Fresnel lens. The presented work also explores a masking technique designed to reduce these artifacts.

Figures A–F are captures of images through the dynamic focus LC lens, increasingly off-axis from center, starting with 0° and going to 45° | Image courtesy Bhowmick et al., SID Display Week

Ultimately, the researchers conclude, the experimental dynamic focus LC lens offers “possibly acceptable [image quality] values […] within a gaze angle of about 30°,” which is fairly similar to the image quality falloff of many VR headsets with Fresnel optics today.

To actually build a varifocal headset from this technology, the researchers say the dynamic focus LC lens would be used in conjunction with a traditional lens to achieve the optical pipeline needed in a VR headset. Precise eye-tracking is also necessary so the system knows where the user is looking and thus how to adjust the focus of the lens correctly for that depth.
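
A minimal sketch of that control loop, our own illustration rather than the researchers’ code, with the fixed-lens offset and clamping range as assumptions:

    # Convert an eye-tracked gaze depth into a drive target for the dynamic
    # focus LC element. FIXED_OFFSET_D is an assumed contribution from the
    # traditional lens; the ±0.80 D range is the one reported for the prototype.
    FIXED_OFFSET_D = 1.0
    LC_RANGE_D = (-0.80, +0.80)

    def lc_power_for_gaze_depth(depth_m: float) -> float:
        """Optical power (in diopters) the LC element should contribute."""
        demand = 1.0 / max(depth_m, 0.1)     # accommodation demand: D = 1/d
        residual = demand - FIXED_OFFSET_D   # whatever the fixed lens doesn't cover
        lo, hi = LC_RANGE_D
        return min(max(residual, lo), hi)    # clamp to the lens's usable range

    # Looking at 1.0 m needs nothing extra from the LC element with this assumed
    # offset; looking at 0.5 m (2.0 D demand) saturates it at +0.80 D.
    print(lc_power_for_gaze_depth(1.0), lc_power_for_gaze_depth(0.5))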

The paper presents measurement methods and benchmarks of the lens’s performance which future researchers can use to test their own work against, or to identify improvements that could be made to the demonstrated design.

The full paper has not yet been published, but it was presented by its lead author, Amit Kumar Bhowmick, at SID Display Week 2022, and further credits Afsoon Jamali, Douglas Bryant, Sandro Pintz, and Philip J. Bos of Kent State University and Meta Reality Labs.

Continue on Page 2: What About Half Dome 3? »

Researchers Show Full-body VR Tracking with Controller-mounted Cameras

Researchers from Carnegie Mellon University have demonstrated a practical system for full-body tracking in VR using cameras mounted on the controllers to get a better view of the user’s body.

Although it’s possible to achieve full-body tracking in VR today, it requires extra hardware that needs to be strapped onto your body (for instance, Vive Trackers or IMU trackers). That makes full-body tracking a non-starter for all but hardcore VR enthusiasts who are willing to spend the money and time to strap on extra hardware.

Three Vive Trackers are used here to add torso and foot tracking | Image courtesy IKinema

Because standalone VR headsets already have cameras on them to track their position in the world and the user’s controllers, in theory it’s also possible to look at the user’s body and use a computer-vision approach to tracking it. Unfortunately the angle of the cameras from the headset is too extreme to get a reliable view of the user’s legs, which is what led Meta to recently conclude that full-body tracking just isn’t viable on a standalone headset (especially as they get smaller).

But researchers from Carnegie Mellon University are challenging that notion with a prototype standalone VR system that adds cameras to the controllers to get a much clearer view of the user’s body, making it possible to extract reliable tracking data for the legs and torso.

What’s especially interesting about this approach is that it seems to align with the direction next-gen VR controllers are already heading; both Meta’s Project Cambria and Magic Leap 2 are using controllers that ditch a headset-dependent tracking system in favor of calculating their position with their own inside-out tracking system.

Image courtesy Carnegie Mellon University

Using a standard Quest 2 headset as the basis for their prototype system, the researchers added two cameras to the controller which face the user. With the user’s hands in front of them, the cameras can get a much clearer view of the upper and lower body. This view is corrected so a computer-vision system can optimally extract the user’s pose and then combine that data with the known position of the head and hands to create a full-body tracking model.

Image courtesy Carnegie Mellon University

Of course, the user’s hands won’t always be in front of them. The researchers say some limited testing showed that VR users have their hands out in front of them around 68% of the time. When the hands aren’t in a good position to capture the body, the system falls back to an IK estimate of the body’s position, as in the sketch below. And though their prototype didn’t go this far, the researchers believe that with an additional camera angle on the controller, it should be possible to capture leg position even when the user’s arms and controllers are resting at their sides.
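
Here’s a minimal sketch of that fallback logic, our own illustration rather than the CMU system’s code, with trivial stand-ins for the two pose solvers just to make the control flow concrete:

    def solve_from_cameras(cv_joints, head, hands):
        # Real system: a vision model on the controller-camera images, fused
        # with the known head and hand positions. Here: pass-through merge.
        return {**cv_joints, "head": head, "left_hand": hands[0], "right_hand": hands[1]}

    def solve_with_ik(head, hands):
        # Real system: an inverse-kinematics estimate from head + hands alone.
        # Here: a crude placeholder that marks the lower body as an IK guess.
        return {"head": head, "left_hand": hands[0], "right_hand": hands[1],
                "hips": "ik_guess", "left_foot": "ik_guess", "right_foot": "ik_guess"}

    def estimate_body_pose(cv_joints, cv_confidence, head, hands, threshold=0.6):
        """Prefer the controller-camera estimate whenever the cameras see the body."""
        if cv_joints is not None and cv_confidence >= threshold:
            # The common path: the researchers' limited testing found the hands
            # in front of the user roughly 68% of the time.
            return solve_from_cameras(cv_joints, head, hands)
        return solve_with_ik(head, hands)  # hands at the sides: fall back to IK

    pose = estimate_body_pose({"hips": (0, 1.0, 0), "left_foot": (0.1, 0, 0)},
                              cv_confidence=0.9, head=(0, 1.7, 0),
                              hands=[(0.2, 1.2, 0.3), (-0.2, 1.2, 0.3)])
    print(pose["hips"])  # -> (0, 1.0, 0), taken from the camera-based estimate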

As for accuracy, the researchers, Karan Ahuja, Vivian Shen, Cathy Mengying Fang, Nathan Riopelle, Andy Kong, and Chris Harrison, say that millimeter tracking precision is probably out of the question for this sort of system, but centimeter precision is likely on the table, which may be good enough for many VR use-cases. For their prototype specifically, the system had a “mean 3D joint error of 6.98cm,” though the researchers say this should be “considered the floor of performance, not the ceiling,” given the limited time they spent optimizing the system.

With full-body tracking, legs finally become a viable part of the experience. That’s desirable not just to make your avatar look more realistic to other people, but also to incorporate your lower body into the experience, adding to immersion and giving players another input to use in gameplay.

Image courtesy Carnegie Mellon University

The researchers not only created a full-blown tracking model for the system, they also made some prototype experiences to show off how tracked legs can add to gameplay. They showed a hockey goalie experience, where players can block the puck with any part of their body; a ‘body shape matching’ experience, where players match the shape of an incoming wall to fit through it; and even a ‘Feet Saber’ game, where players cut blocks with their hands and feet.

– – — – –

So could we see full-body tracking from headsets like Magic Leap 2 and Project Cambria? It’s tough to say at this point; although the controllers appear to do their own inside-out tracking, the cameras on the controllers seem to point mostly away from the user.

But maybe some future headset—or just an upgraded controller—could make it happen.

Regardless of where those headsets land, this research shows that practical, low-friction full-body tracking on standalone VR headsets might not be that far out of reach. Combined with the ability to run highly realistic face-tracking, the standalone headsets of the future will radically increase the embodiment felt in VR.

NVIDIA Researchers Demonstrate Ultra-thin Holographic VR Glasses That Could Reach 120° Field-of-view

A team of researchers from NVIDIA Research and Stanford has published a new paper demonstrating a pair of thin holographic VR glasses. The displays can show true holographic content, addressing the vergence-accommodation issue. Though the research prototypes demonstrating the principles have a much smaller field-of-view, the researchers claim it would be straightforward to reach a 120° diagonal field-of-view.

Published ahead of this year’s upcoming SIGGRAPH 2022 conference, a team of researchers from NVIDIA Research and Stanford demonstrated a near-eye VR display that can be used to display flat images or holograms in a compact form-factor. The paper also explores the interconnected variables in the system that impact key display factors like field-of-view, eye-box, and eye-relief. Further, the researchers explore different algorithms for optimally rendering the image for the best visual quality.

Commercially available VR headsets haven’t improved in size much over the years largely because of an optical constraint. Most VR headsets use a single display and a simple lens. In order to focus the light from the display into your eye, the lens must be a certain distance from the display; any closer and the image will be out of focus.

Eliminating that gap between the lens and the display would unlock previously impossible form-factors for VR headsets; understandably there’s been a lot of R&D exploring how this can be done.

In NVIDIA-Stanford’s newly published paper, Holographic Glasses for Virtual Reality, the team shows that it built a holographic display using a spatial light modulator combined with a waveguide rather than a traditional lens.

The team built both a large benchtop model—to demonstrate core methods and experiment with different algorithms for rendering the image for optimal display quality—and a compact wearable model to demonstrate the form-factor. The images you see of the compact glasses-like form-factor don’t include the electronics that drive the display (the size of that part of the system is out of scope for the research).

You may recall a little while back that Meta Reality Labs published its own work on a compact glasses-size VR headset. Although that work involves holograms (to form the system’s lenses), it is not a ‘holographic display’, which means it doesn’t solve the vergence-accommodation issue that’s common in many VR displays.

On the other hand, the Nvidia-Stanford researchers write that their Holographic Glasses system is in fact a holographic display (thanks to the use of a spatial light modulator), which they tout as a unique advantage of their approach. However, the team also writes that it’s possible to display typical flat images on the display as well (which, like contemporary VR headsets, can converge for a stereoscopic view).

Image courtesy NVIDIA Research

Not only that, but the Holographic Glasses project touts a mere 2.5mm thickness for the entire display, significantly thinner than the 9mm thickness of the Reality Labs project (which was already impressively thin!).

As with any good paper though, the Nvidia-Stanford team is quick to point out the limitations of their work.

For one, their wearable system has a tiny 22.8° diagonal field-of-view and an equally tiny 2.3mm eye-box, both of which are far too small to be viable for a practical VR headset.

Image courtesy NVIDIA Research

However, the researchers write that the limited field-of-view is largely due to their experimental combination of novel components that aren’t optimized to work together. Drastically expanding the field-of-view, they explain, is largely a matter of choosing complementary components.

“[…] the [system’s field-of-view] was mainly limited by the size of the available [spatial light modulator] and the focal length of the GP lens, both of which could be improved with different components. For example, the focal length can be halved without significantly increasing the total thickness by stacking two identical GP lenses and a circular polarizer [Moon et al. 2020]. With a 2-inch SLM and a 15mm focal length GP lens, we could achieve a monocular FOV of up to 120°”

As for the 2.3mm eye-box (the volume in which the rendered image can be seen), it’s way too small for practical use. However, the researchers write that they experimented with a straightforward way to expand it.

With the addition of eye-tracking, they show, the eye-box could be dynamically expanded up to 8mm by changing the angle of the light that’s sent into the waveguide. Granted, 8mm is still a very tight eye-box, and might be too small for practical use due to variations in eye-relief distance and how the glasses rest on the head, from one user to the next.

But there are variables in the system that can be adjusted to change key display factors like the eye-box. Through their work, the researchers established the relationships between these variables, giving a clear look at what tradeoffs would need to be made to achieve different outcomes.

Image courtesy NVIDIA Research

As they show, eye-box size is directly related to the pixel pitch (distance between pixels) of the spatial light modulator, while field-of-view is related to the overall size of the spatial light modulator. Limitations on eye-relief and converging angle are also shown, relative to a sub-20mm eye-relief (which the researchers consider the upper limit of a true ‘glasses’ form-factor).
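
In rough textbook terms, and as our own summary of that trade space rather than equations quoted from the paper, the two relationships look like this:

    \text{eye-box width} \approx \frac{\lambda f}{p}
    \qquad\qquad
    \text{FOV} \approx 2\arctan\!\left(\frac{w_{\text{SLM}}}{2f}\right)

Here p is the SLM’s pixel pitch, w_SLM its width, f the focal length of the lens, and λ the wavelength of light. Finer pixel pitch buys eye-box, while a larger SLM or a shorter focal length buys field-of-view; plugging a 2-inch (about 51mm) SLM and a 15mm focal length into the second relation gives roughly 119°, consistent with the 120° figure the researchers cite.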

An analysis of this “design trade space,” as they call it, was a key part of the paper.

“With our design and experimental prototypes, we hope to stimulate new research and engineering directions toward ultra-thin all-day-wearable VR displays with form-factors comparable to conventional eyeglasses,” they write.

The paper is credited to researchers Jonghyun Kim, Manu Gopakumar, Suyeon Choi, Yifan Peng, Ward Lopes, and Gordon Wetzstein.

Prototype Meta Headset Includes Custom Silicon for Photorealistic Avatars on Standalone

Researchers at Meta Reality Labs have created a prototype VR headset with a custom-built accelerator chip specially designed to handle AI processing to make it possible to render the company’s photorealistic Codec Avatars on a standalone headset.

Since long before the company changed its name, Meta has been working on its Codec Avatars project, which aims to make nearly photorealistic avatars in VR a reality. Using a combination of on-device sensors—like eye-tracking and mouth-tracking—and AI processing, the system animates a detailed recreation of the user in a realistic way, in real-time.

Or at least that’s how it works when you’ve got high-end PC hardware.

Early versions of the company’s Codec Avatars research were backed by the power of an NVIDIA Titan X GPU, which monstrously dwarfs the power available in something like Meta’s latest Quest 2 headset.

But the company has moved on to figuring out how to make Codec Avatars possible on low-powered standalone headsets, as evidenced by a paper published alongside last month’s 2022 IEEE CICC conference. In the paper, Meta reveals it created a custom chip built with a 7nm process to function as an accelerator specifically for Codec Avatars.

Specially Made

Image courtesy Meta Reality Labs

According to the researchers, the chip is far from off-the-shelf. The group designed it with an essential part of the Codec Avatars processing pipeline in mind: analyzing the incoming eye-tracking images and generating the data needed for the Codec Avatars model. The chip’s footprint is a mere 1.6mm².

“The test-chip, fabricated in 7nm technology node, features a Neural Network (NN) accelerator consisting of a 1024 Multiply-Accumulate (MAC) array, 2MB on-chip SRAM, and a 32bit RISC-V CPU,” the researchers write.

In turn, they also rebuilt that part of the Codec Avatars AI model to take advantage of the chip’s specific architecture.

“By re-architecting the Convolutional [neural network] based eye gaze extraction model and tailoring it for the hardware, the entire model fits on the chip to mitigate system-level energy and latency cost of off-chip memory accesses,” the Reality Labs researchers write. “By efficiently accelerating the convolution operation at the circuit-level, the presented prototype [chip] achieves 30 frames per second performance with low-power consumption at low form factors.”
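
For a back-of-the-envelope sense of scale (our own arithmetic; the article doesn’t cite a clock rate, so the one below is purely an assumption):

    sram_bytes = 2 * 1024 * 1024          # 2MB of on-chip SRAM
    bytes_per_weight = 1                   # assuming 8-bit quantized weights
    print(f"~{sram_bytes / bytes_per_weight / 1e6:.1f}M 8-bit parameters fit on-chip")

    macs = 1024                            # size of the MAC array
    ops_per_mac = 2                        # one multiply plus one accumulate
    assumed_clock_hz = 500e6               # assumption for illustration only
    peak_gops = macs * ops_per_mac * assumed_clock_hz / 1e9
    print(f"peak throughput ≈ {peak_gops:.0f} GOPS at the assumed clock")

The point of fitting the whole model into that 2MB is exactly what the quote highlights: avoiding the energy and latency cost of going off-chip for weights.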

The prototype headset is based on Quest 2 | Image courtesy Meta Reality Labs

By accelerating an intensive part of the Codec Avatars workload, the chip not only speeds up the process, it also reduces the power and heat required. It’s able to do this more efficiently than a general-purpose CPU thanks to the custom design of the chip, which in turn informed the re-architected software design of the eye-tracking component of Codec Avatars.

But the headset’s general purpose CPU (in this case, Quest 2’s Snapdragon XR2 chip) doesn’t get to take the day off. While the custom chip handles part of the Codec Avatars encoding process, the XR2 manages the decoding process and rendering the actual visuals of the avatar.

Image courtesy Meta Reality Labs

The work must have been quite multidisciplinary, as the paper credits 12 researchers, all from Meta’s Reality Labs: H. Ekin Sumbul, Tony F. Wu, Yuecheng Li, Syed Shakib Sarwar, William Koven, Eli Murphy-Trotzky, Xingxing Cai, Elnaz Ansari, Daniel H. Morris, Huichu Liu, Doyun Kim, and Edith Beigne.

It’s impressive that Meta’s Codec Avatars can run on a standalone headset, even if a specialty chip is required. But one thing we don’t know is how well the visual rendering of the avatars is handled. The underlying scans of the users are highly detailed and may be too complex to render on Quest 2 in full. It’s not clear how much the ‘photorealistic’ part of the Codec Avatars is preserved in this instance, even if all the underlying pieces are there to drive the animations.

– – — – –

The research represents a practical application of the new compute architecture that Reality Labs’ Chief Scientist, Michael Abrash, recently described as a necessary next step for making the sci-fi vision of XR a reality. He says that moving away from highly centralized processing toward more distributed processing is critical for the power and performance demands of such headsets.

One can imagine a range of XR-specific functions that could benefit from chips specially designed to accelerate them. Spatial audio, for instance, is desirable in XR across the board for added immersion, but realistic sound simulation is computationally expensive (not to mention power hungry!). Positional-tracking and hand-tracking are a critical part of any XR experience—yet another place where designing the hardware and algorithms together could yield substantial benefits in speed and power.

Fascinated by the cutting edge of XR science? Check out our archives for more breakdowns of interesting research.

Study Suggests EEG Could be Used to Predict & Prevent VR Motion Sickness

A new study on VR motion sickness concludes that certain brain activity detectable by EEG strongly correlates with VR motion sickness. This finding suggests that it’s possible to quantitatively measure and potentially prevent VR motion sickness.

While virtual reality opens the door to incredible possibilities, what we can actually do with VR today is at least somewhat limited by comfort considerations. While developers have steadily invented new techniques to keep VR content comfortable, scientists continue to work to understand the nature of motion sickness itself.

A new study published by researchers at Germany’s University of Jena in the peer-reviewed journal, Frontiers in Human Neuroscience, intentionally induced VR motion sickness in participants while measuring brain activity.

Fourteen subjects were fitted with an EEG cap and donned a PSVR headset. In the headset, the participants were exposed to increasing levels of artificial movement to induce VR motion sickness over the course of 45 minutes. In addition to recording brain activity via EEG, the subjects also subjectively rated their motion sickness symptoms throughout the experiment.

The researchers found a common pattern of change in brain activity that closely corresponded with the subjects’ own perception of motion sickness.

Specifically, the researchers write, “relative to a baseline EEG (in VR) the power spectrum for [brain] frequencies below 10Hz is increased in all brain regions. The increase in frequency power was correlated positively to the level of motion sickness. Subjects with the highest [perception of motion sickness] had the highest power gain in the theta, delta, and alpha frequencies.”
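
One standard way to quantify that kind of band-power change, shown here with synthetic data as our own illustration rather than the authors’ pipeline, is to estimate the power spectral density with Welch’s method and integrate it over the delta, theta, and alpha bands:

    import numpy as np
    from scipy.signal import welch

    fs = 250                                    # sample rate in Hz, typical for EEG
    samples = np.random.randn(60 * fs)          # stand-in for one minute of one channel

    freqs, psd = welch(samples, fs=fs, nperseg=2 * fs)

    def band_power(lo, hi):
        mask = (freqs >= lo) & (freqs < hi)
        return np.trapz(psd[mask], freqs[mask])  # integrate the PSD over the band

    bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13)}
    powers = {name: band_power(lo, hi) for name, (lo, hi) in bands.items()}
    print(powers)
    # Comparing these values against a per-subject in-VR baseline is what lets a
    # rise in sub-10Hz power be tied to increasing motion sickness.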

Researchers Matthias Nürnberger, Carsten Klingner, Otto W. Witte, and Stefan Brodoehl offer the following conclusion:

We have demonstrated that VR-induced motion sickness is associated with distinct changes in brain function and connectivity. Here, we proposed the mismatch of visual information in the absence of adequate vestibular stimulus as a major cause according to the model of predictive coding. […] Differentiation which changes in brain activity is due to the sensory conflict or caused by motion sickness should be investigated in further studies. Given the increasing importance of VR, a profound understanding of the constraints imposed by [VR motion sickness] will be increasingly important. Measures to counteract the occurrence of MS or assist in detecting it at an early stage will undoubtedly improve the progress with this promising technology.

The findings offer further evidence that motion sickness can be objectively detected through non-invasive hardware like scalp EEG, which could be used to guide future research into VR motion sickness and VR comfort techniques.

For one, such EEG measurements could be used to objectively evaluate the effectiveness of VR comfort techniques.

Presently, developers of VR content employ a variety of well-known comfort techniques, like snap-turning and teleportation, to reduce the odds of VR motion sickness. But not all comfort techniques are necessarily equally effective, whether compared to one another or across individuals. Establishing a quantitative measurement of motion sickness via EEG could help improve VR comfort techniques, or even uncover new ones, by providing clearer feedback and making testing more objective.

Such measurements could also inform comfort ratings as presented to end-users, to help those sensitive to VR motion sickness find appropriate content.

Further, EEG detection of motion sickness could potentially be used on a real-time basis to predict and prevent motion sickness.

EEG brain sensing technology is becoming increasingly accessible and has already been integrated into commercial VR hardware. In the future, headsets equipped with EEG could allow developers to detect a user’s level of motion sickness in real-time, allowing for content adjustments or for VR comfort techniques to kick in automatically to keep users comfortable while they play or work in VR.


Thanks to Rony Abovitz for the tip!

Meta Offered a Glimpse into the XR R&D That’s Costing It Billions

During the Connect 2021 conference last week, Meta Reality Labs’ Chief Scientist, Michael Abrash, offered a high-level overview of some of the R&D that’s behind the company’s multi-billion dollar push into XR and the metaverse.

Michael Abrash leads the team at Meta Reality Labs Research which has been tasked with researching technologies that the company believes could be foundational to XR and the metaverse decades in the future. At Connect 2021, Abrash shared some of the group’s very latest work.

Full-body Codec Avatars

Meta’s Codec Avatar project aims to achieve a system capable of capturing and representing photorealistic avatars for use in XR. A major challenge beyond simply ‘scanning’ a person’s body is getting it to then move in realistic ways—not to mention making the whole system capable of running in real-time so that the avatar can be used in an interactive context.

The company has shown off its Codec Avatar work on various occasions, each time showing improvements. The project started with high quality heads alone, but it has since evolved to full-body avatars.

The video above is a demo representing the group’s latest work on full-body Codec Avatars, which researcher Yaser Sheikh explains now supports more complex eye movement, facial expressions, and hand and body gestures which involve self-contact. It isn’t stated outright, but the video also shows a viewer watching the presentation in virtual reality, implying that this is all happening in real-time.

With the possibility of such realistic avatars in the future, Abrash acknowledged that it’s important to think about security of one’s identity. To that end he says the company is “thinking about how we can secure your avatar, whether by tying it to an authenticated account, or by verifying identity in some other way.”

Photorealistic Hair and Skin Rendering

While Meta’s Codec Avatars are already looking pretty darn convincing, the research group believes the ultimate destination for the technology is to achieve photorealism.

Above Abrash showed off what he says is the research group’s latest work in photorealistic hair and skin rendering, and lighting thereof. It wasn’t claimed that this was happening in real-time (and we doubt it is), but it’s a look at the bar the team is aiming for down the road with the Codec Avatar tech.

Clothing Simulation

Along with a high quality representation of your body, Meta expects clothing will continue to be an important way that people express themselves in the metaverse. To that end, the company thinks that making clothes behave realistically will be an important part of that experience. Above, the company shows off its work in clothing simulation and hands-on interaction.

High-fidelity Real-time Virtual Spaces

While XR can easily whisk us away to other realities, teleporting friends virtually to your actual living space would be great too. Taken to the extreme, that means having a full-blown recreation of your actual home and everything in it, which can run in real-time.

Well… Meta did just that. They built a mock apartment complete with a perfect replica of all the objects in it. Doing so makes it possible for a user to move around the real space and interact with it like normal while keeping the virtual version in sync.

So if you happen to have virtual guests over, they could actually see you moving around your real world space and interacting with anything inside of it in an incredibly natural way. Similarly, when using AR glasses, having a map of the space with this level of fidelity could make AR experiences and interactions much more compelling.

Presently this seems to serve the purpose of building out a ‘best case’ scenario of a mapped real-world environment for the company to experiment with. If Meta finds that having this kind of perfectly synchronized real and virtual space becomes important to valuable use-cases with the technology, it may then explore ways to make it easy for users to capture their own spaces with similar precision.

Continued on Page 2 »

Stunning View Synthesis Algorithm Could Have Huge Implications for VR Capture

As far as live-action VR video is concerned, volumetric video is the gold standard for immersion. And for static scene capture, the same holds true for photogrammetry. But both methods have limitations that detract from realism, especially when it comes to ‘view-dependent’ effects like specular highlights and lensing through translucent objects. Research from Thailand’s Vidyasirimedhi Institute of Science and Technology shows a stunning view synthesis algorithm that significantly boosts realism by handling such lighting effects accurately.

Researchers from the Vidyasirimedhi Institute of Science and Technology in Rayong, Thailand published work earlier this year on a real-time view synthesis algorithm called NeX. Its goal is to use just a handful of input images from a scene to synthesize new frames that realistically portray the scene from arbitrary points between the real images.

Researchers Suttisak Wizadwongsa, Pakkapon Phongthawee, Jiraphon Yenphraphai, and Supasorn Suwajanakorn write that the work builds on a technique called multiplane image (MPI). Compared to prior methods, they say their approach better models view-dependent effects (like specular highlights) and creates sharper synthesized imagery.
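
For context on the MPI representation itself, here’s a minimal sketch of generic multiplane-image compositing (not the NeX renderer): the scene is stored as a stack of fronto-parallel RGBA planes at fixed depths, and a new view is produced by warping each plane toward the new camera and alpha-compositing the stack back to front. NeX’s twist is making each plane’s color view-dependent through a learned basis expansion, which is what captures effects like specular highlights.

    import numpy as np

    def composite_mpi(planes_rgba):
        """Alpha-composite a back-to-front ordered list of HxWx4 RGBA planes."""
        h, w, _ = planes_rgba[0].shape
        out = np.zeros((h, w, 3))
        for plane in planes_rgba:                      # back-to-front 'over' operator
            rgb, alpha = plane[..., :3], plane[..., 3:4]
            out = rgb * alpha + out * (1.0 - alpha)
        return out

    # Two toy 2x2 planes: an opaque far plane and a half-transparent near plane.
    far = np.concatenate([np.full((2, 2, 3), 0.2), np.ones((2, 2, 1))], axis=-1)
    near = np.concatenate([np.full((2, 2, 3), 0.9), np.full((2, 2, 1), 0.5)], axis=-1)
    print(composite_mpi([far, near]))                  # 0.9*0.5 + 0.2*0.5 = 0.55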

On top of those improvements, the team has highly optimized the system, allowing it to run easily at 60Hz—a claimed 1000x improvement over the previous state of the art. And I have to say, the results are stunning.

Though the system is not yet highly optimized for the use-case, the researchers have already tested it using a VR headset with stereo depth and full 6DOF movement.

The researchers conclude:

Our representation is effective in capturing and reproducing complex view-dependent effects and efficient to compute on standard graphics hardware, thus allowing real-time rendering. Extensive studies on public datasets and our more challenging dataset demonstrate state-of-art quality of our approach. We believe neural basis expansion can be applied to the general problem of light-field factorization and enable efficient rendering for other scene representations not limited to MPI. Our insight that some reflectance parameters and high-frequency texture can be optimized explicitly can also help recovering fine detail, a challenge faced by existing implicit neural representations.

You can find the full paper at the NeX project website, which includes demos you can try for yourself right in the browser. There are also WebVR-based demos that work with PC VR headsets if you’re using Firefox, but unfortunately they don’t work with Quest’s browser.

Notice the reflections in the wood and the complex highlights in the pitcher’s handle! View-dependent details like these are very difficult for existing volumetric and photogrammetric capture methods.

Volumetric video capture that I’ve seen in VR usually gets very confused by these sorts of view-dependent effects, often having trouble determining the appropriate stereo depth for specular highlights.

Photogrammetry, or ‘scene scanning’ approaches, typically ‘bake’ the scene’s lighting into textures, which often makes translucent objects look like cardboard (since the lighting highlights don’t move correctly as you view the object at different angles).

The NeX view synthesis research could significantly improve the realism of volumetric capture and playback in VR going forward.

Facebook Researchers Show ‘Reverse Passthrough’ VR Prototype for Eye-contact Outside the Headset

Researchers at Facebook Reality Labs today published new work showcasing a prototype headset with external displays that depict the user’s eyes to people outside the headset. The goal is to allow eye-contact between the headset wearer and others, making it less awkward to wear a headset while communicating with someone in the same room.

One of my favorite things to do when demoing an Oculus Quest to someone for the first time is to put on the headset, activate its ‘passthrough view’ (which lets me see the world outside of the headset), and then walk up and shake their hand to clearly reveal that I can see them. Because Quest’s cameras are at the four corners of the visor, it’s not easy to imagine that there would be any way for the user to see ‘through’ the headset, so from the outside the result seems a bit magical. Afterward I put the headset on the person and let them see what I could see from inside!

But this fun little demo reveals a problem too. Even though it’s easy for the person in the headset to see people outside of it, it isn’t clear to those people when the person in the headset is actually looking at them (rather than looking at an entirely different virtual world).

Eye-contact is clearly a huge factor in face-to-face communication; it helps us gauge if someone is paying attention to the conversation, how they’re feeling about it, and even if they have something to say, want to change the topic, or leave the conversation entirely. Trying to talk to someone whose eyes you can’t see is uncomfortable and awkward, specifically because it robs us of our ingrained ability to detect this kind of intent.

But as VR headsets become thinner and more comfortable—and it becomes easier to use passthrough to have a conversation with someone nearby than taking the headset off entirely—this will become a growing issue.

Researchers at Facebook Reality Labs have come up with a high-tech fix to the problem. Making use of light-field displays mounted on the outside of a VR headset, the so-called ‘reverse passthrough’ prototype system aims to show a representation of the user’s eyes that’s both depth- and direction-accurate.

Image courtesy Facebook Reality Labs

In a paper published this week for SIGGRAPH 2021, Facebook Reality Labs researchers Nathan Matsuda, Joel Hegland, and Douglas Lanman detailed the system. To external observers the headset appears to be very thick but transparent enough to reveal the wearer’s eyes; in reality, the apparent depth is an illusion created by a light-field display on the outside of the headset.

If it were instead a typical display, the user’s eyes would appear to float far away from their face, making for perhaps a more uncomfortable image than not being able to see them at all! Below, researcher Nathan Matsuda shows the system without any eyes (left), with eyes but no depth (middle), and with eyes and depth (right).

The light-field display (in this case one built around a microlens array) allows multiple observers to see the correct depth cues no matter what angle they’re standing at.
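
The geometry at work is the classic integral-imaging relation (a textbook approximation, not a figure from the paper): the sub-pixel an observer sees beneath a lenslet of focal length f shifts with their viewing angle θ roughly as

    x_{\text{offset}} \approx f \tan\theta

so a small strip of pixels rendered behind each lenslet serves up a slightly different view of the eyes in each direction, which is what lets every observer get depth cues consistent with where they’re standing.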

What the observers see isn’t a real image of the user’s eyes, however. Instead, eye-tracking data is applied to a 3D model of the user’s face, which means this technique is limited by how realistic the model is and how easy it is to acquire for each individual.

Of course, Facebook has been doing some really impressive work on that front too with their Codec Avatars project. The researchers mocked up an example of a Codec Avatar being used for the reverse passthrough function (above), which looks even better, but resolution is clearly still a limiting factor—something the researchers believe will be overcome in due time.

Facebook Reality Labs Chief Scientist Michael Abrash admits he didn’t think there was much merit to the idea of reverse passthrough until the researchers further proved out the concept.

“My first reaction was that it was kind of a goofy idea, a novelty at best,” Abrash said in a post about the work. “But I don’t tell researchers what to do, because you don’t get innovation without freedom to try new things, and that’s a good thing, because now it’s clearly a unique idea with genuine promise.”

– – — – –

It might seem like a whole lot of work and extra hardware to solve a problem that isn’t really a problem if you just decided to use an AR headset in the first place. After all, most AR headsets are built with transparent optics from the outset, and being able to see the eyes of the user is a major benefit when it comes to interfacing with other people while wearing the device.

But even then, AR headsets can suffer from ‘eye-glow’ which obstructs the view of the eye from the outside, sometimes severely, depending upon the optics and the angle of the viewer.

Image courtesy DigiLens

AR headsets also have other limitations that aren’t an issue on VR headsets, like a limited field-of-view and a lack of complete opacity control. Depending upon the use-case, a thin and light future VR headset with a very convincing reverse passthrough system could be preferable to an AR headset with transparent optics.
