Facebook Reality Labs Shows Method for Expanding Field of View of Holographic Displays

Researchers from Facebook’s R&D department, Facebook Reality Labs, and the University of California, Berkeley have published new research which demonstrates a method for expanding the field-of-view of holographic displays.

In the paper, titled High Resolution Étendue Expansion for Holographic Displays, researchers Grace Kuo, Laura Waller, Ren Ng, and Andrew Maimone explain that when it comes to holographic displays there’s an intrinsic inverse link between a display’s field-of-view and its eye-box (the eye-box is the area in which the image from a display can be seen). If you want a larger eye-box, you get a smaller field-of-view. And if you want a larger field of view, you get a smaller eye-box.

If the eye-box is too small, even the rotation of your eye could make the image invisible, because your pupil would leave the eye-box when looking in any direction but forward. A large eye-box is necessary not only to keep the image visible during eye movement, but also to compensate for subtle differences in headset fit from one session to the next.

The researchers explain that a traditional holographic display with a 120° horizontal field-of-view would have an eye-box of just 1.05mm—far too small for practical use in a headset. On the other hand, a holographic display with a 10mm eye-box would have a horizontal field-of-view of just 12.7°.

If you want to satisfy both a 120° field-of-view and a 10mm eye-box, the researchers say, you’d need a holographic display with a resolution of 32,500 × 32,500. That’s not only impractical because such a display doesn’t exist, but even if it did, rendering that many pixels for real-time applications would be impossible with today’s hardware.
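For a rough sense of where those numbers come from, here’s a back-of-envelope sketch in Python. It assumes a simplified étendue relation in which the display’s pixel count fixes the product of eye-box width and the sine of the half field-of-view; the wavelength and constants are illustrative assumptions, not values from the paper.

```python
import math

# Back-of-envelope sketch of the holographic etendue trade-off.
# Assumed relation (illustrative, not the paper's exact model):
#   eyebox_width * 2 * sin(fov / 2) ~= n_pixels * wavelength

WAVELENGTH_M = 520e-9  # assumed green wavelength

def eyebox_for_fov(n_pixels: int, fov_deg: float) -> float:
    """Approximate eye-box width (meters) for a given horizontal FOV."""
    return n_pixels * WAVELENGTH_M / (2 * math.sin(math.radians(fov_deg) / 2))

def pixels_needed(fov_deg: float, eyebox_m: float) -> float:
    """Approximate pixels per side needed to support both FOV and eye-box."""
    return 2 * eyebox_m * math.sin(math.radians(fov_deg) / 2) / WAVELENGTH_M

print(eyebox_for_fov(3840, 120))  # ~0.0012 m: a ~1 mm eye-box for a 4K display
print(pixels_needed(120, 10e-3))  # ~33,000 pixels per side, near the paper's 32,500
```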

So, the researchers propose a different solution: decoupling field-of-view from eye-box in a holographic display. The method proposes placing a scattering element in front of the display, which scatters the light to expand its cone of propagation (also known as étendue). Doing so allows the field-of-view and eye-box characteristics to be adjusted independently.

But there’s a problem, of course. If you put a scattering element in front of a display, how do you form a coherent image from the scattered light? The researchers have developed an algorithm which pre-compensates for the scattering element, such that the light actually forms a proper image after passing through it.
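To make the idea concrete, here’s a minimal Gerchberg-Saxton-style sketch of pre-compensation, assuming the scattering element is a known random phase mask and treating propagation as a single Fourier transform. The paper’s actual algorithm and propagation model are more sophisticated; this only illustrates the principle of solving for display phases that form an image after scattering.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
target = np.zeros((N, N))
target[96:160, 96:160] = 1.0                              # toy target image
scatter = np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N)))  # known phase mask

# Iteratively solve for display phases that produce the target *after*
# the light passes through the scattering mask.
slm_phase = rng.uniform(0, 2 * np.pi, (N, N))
for _ in range(50):
    field = np.exp(1j * slm_phase) * scatter          # display light hits the mask
    image = np.fft.fftshift(np.fft.fft2(field))       # far-field propagation
    image = target * np.exp(1j * np.angle(image))     # impose target amplitude
    back = np.fft.ifft2(np.fft.ifftshift(image))      # propagate back
    slm_phase = np.angle(back / scatter)              # strip the known mask

final = np.abs(np.fft.fftshift(np.fft.fft2(np.exp(1j * slm_phase) * scatter)))
print("correlation with target:",
      np.corrcoef(final.ravel(), target.ravel())[0, 1])
```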

At a high level, it’s very similar to the approach that existing headsets use to handle color separation (chromatic aberration) as light passes through the lenses—rendered frames pre-separate colors so that the lens ends up bending the colors back into the correct place.
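As a toy illustration of that pre-separation idea (not any headset’s actual shader code), the sketch below scales texture lookups per color channel; the scale factors are made-up values, whereas a real implementation derives them from the lens’s measured distortion profile.

```python
def pre_separate_uv(u: float, v: float) -> dict:
    """Per-channel texture lookups that pre-separate colors radially."""
    cu, cv = u - 0.5, v - 0.5                     # center the coordinates
    scales = {"r": 1.00, "g": 0.995, "b": 0.99}   # assumed per-channel scales
    return {ch: (0.5 + s * cu, 0.5 + s * cv) for ch, s in scales.items()}

# Toward the edge of the lens, blue is sampled closer to center than red,
# so the lens's stronger bending of blue lands all three in the same place.
print(pre_separate_uv(0.9, 0.5))
```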

Here the orange box represents the field of view of a normal holographic display while the full frame shows the expanded field of view | Image courtesy Facebook Reality Labs

The researchers used optical simulations to hone their algorithm and then built a benchtop prototype of their proposed pipeline to experimentally demonstrate the method for expanding the field of view of a holographic display.

Although the researchers believe their work “demonstrates progress toward more practical holographic displays,” they also say that there is “additional work to be done to achieve a full-color display with high resolution, complete focal depth cues, and a sunglasses-like form factor.”

Toward the end of the paper they identify miniaturization, compute time, and perceptual effects among the challenges that need to be addressed by further research.

The paper also hints at potential future projects for the team, which may attempt to combine this method with prior work from one of the paper’s researchers, Andrew Maimone.

“The prototype presented in this work is intended as a proof-of-concept; the final design is ideally a wearable display with a sunglasses-like form factor. Starting with the design presented by Maimone et al. [2017], which had promising form factor and FoV but very limited eyebox, we propose integrating our scattering mask into the holographic optical element that acts as an image combiner.”

Image courtesy Facebook Reality Labs

If you read our article last month on Facebook’s holographic folded optics, you may be wondering how these projects differ.

The holographic folded optics project makes use of a holographic lens to focus light, but not a holographic display to generate the image in the first place. That project also employs folded optics to significantly reduce the size of such a display.

On the other hand, the research outlined in this article deals with making actual holographic displays more practical by showing that a large field-of-view and large eye-box are not mutually exclusive in a holographic display.

Facebook’s Prototype Photoreal Avatars Now Have Realistic Eyes

Researchers at Facebook figured out how to add natural-looking eyes to their photorealistic avatar research.

Facebook, which owns the Oculus brand of VR products, first showed off this ‘Codec Avatars’ project back in March 2019. The avatars are generated using a specialized capture rig with 132 cameras. Once generated, they can be driven by a prototype VR headset with three cameras: one facing the left eye, one the right eye, and one the mouth. All of this is achieved with machine learning.

While the graphics and face tracking of these avatars are impressive, the eyes tended to feel uncanny, with odd distortions and gaze directions that didn’t make sense.

In a paper titled ‘The Eyes Have It: An Integrated Eye and Face Model for Photorealistic Facial Animation’, the researchers present a solution to this problem. Part of the previous pipeline involved a “style transfer” neural network. If you’ve used a smartphone AR filter that makes the world look like a painting, you already know what that is.

But while this transfer was previously done at the image stage, it’s now done on the resulting texture itself. The eye direction is explicitly taken from the eye tracking system, rather than being estimated by the algorithm.
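The architectural point is easier to see in code. Here’s a heavily simplified, hypothetical PyTorch sketch: the eye decoder takes the tracked gaze direction as an explicit input rather than inferring it, and emits an eye texture. The real model described in the paper is far larger and structured differently.

```python
import torch
import torch.nn as nn

class GazeConditionedEyeDecoder(nn.Module):
    """Decodes an eye texture from a face code plus an explicit gaze vector."""
    def __init__(self, latent_dim: int = 128, tex_size: int = 32):
        super().__init__()
        self.tex_size = tex_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, tex_size * tex_size * 3),
        )

    def forward(self, face_code: torch.Tensor, gaze_dir: torch.Tensor):
        # Gaze comes straight from the eye tracker, not estimated from images.
        x = torch.cat([face_code, gaze_dir], dim=-1)
        tex = self.net(x).view(-1, 3, self.tex_size, self.tex_size)
        return torch.sigmoid(tex)  # RGB eye texture in [0, 1]

decoder = GazeConditionedEyeDecoder()
face_code = torch.randn(1, 128)                # avatar's latent face state
gaze_dir = torch.tensor([[0.1, -0.05, 0.99]])  # gaze direction from the tracker
print(decoder(face_code, gaze_dir).shape)      # torch.Size([1, 3, 32, 32])
```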

The result, based on the images and videos Facebook provided, is significantly more natural-looking eyes. The researchers claim eye contact is critical to achieving social presence, and their system can now handle it, a feature you won’t get in a Zoom call.

Our goal is to build a system to enable virtual telepresence, using photorealistic avatars, at scale, with a level of fidelity sufficient to achieve eye-contact

Don’t get too excited just yet: this kind of technology won’t be on your head any time soon. When presenting Codec Avatars, Facebook warned the technology was still “years away” from consumer products.

When it can be realized, however, such a technology has tremendous potential. For most, telepresence today is still limited to grids of webcams on a 2D monitor. The ability to see photorealistic representations of others in true scale, fully tracked from real motion, with the ability to make eye contact, could fundamentally change the need for face-to-face interaction.

Facebook Says It Has Developed the ‘Thinnest VR display to date’ With Holographic Folded Optics

Facebook published new research today which the company says shows the “thinnest VR display demonstrated to date,” in a proof-of-concept headset based on folded holographic optics.

Facebook Reality Labs, the company’s AR/VR R&D division, today published new research demonstrating an approach which combines two key features: polarization-based optical ‘folding’ and holographic lenses. In the work, researchers Andrew Maimone and Junren Wang say they’ve used the technique to create a functional VR display and lens that together are just 9mm thick. The result is a proof-of-concept VR headset which could truly be called ‘VR glasses’.

The approach has other benefits beyond its incredibly compact size; the researchers say it can also support significantly wider color gamut than today’s VR displays, and that their display makes progress “toward scaling resolution to the limit of human vision.”

Let’s talk about how it all works.

Why Are Today’s Headsets So Big?

Photo by Road to VR

It’s natural to wonder why even the latest VR headsets are essentially just as bulky as the first generation of headsets that launched back in 2016. The answer is simple: optics. Unfortunately the solution is not so simple.

Every consumer VR headset on the market uses effectively the same optical pipeline: a macro display behind a simple lens. The lens is there to focus the light from the display into your eye. But for that to happen, the lens needs to be a few inches from the display; otherwise it doesn’t have enough focusing power to focus the light into your eye.

That necessary distance between the display and the lens is the reason why every headset out there looks like a box on your face. The approach is still used today because the lenses and the displays are known quantities; they’re cheap & simple, and although bulky, they achieve a wide field of view and high resolution.

Many solutions have been proposed for making VR headsets smaller, and just about all of them include the use of novel displays and lenses.

The new research from Facebook proposes the use of both folded optics and holographic optics.

Folded Optics

What are folded optics? It’s not quite what it sounds like, but once you understand it, you’d be hard pressed to come up with a better name.

While the simple lenses in today’s VR headsets must be a certain distance from the display in order to focus the light into your eye, the concept of folded optics proposes ‘folding’ that distance over on itself, such that the light still traverses the same distance necessary for focusing, but its path is folded into a more compact area.

You can think of it like a piece of paper with an arbitrary width. When you fold the paper in half, the paper itself is still just as wide as when you started, but its width occupies less space because you folded it over on itself.

But how the hell do you do that with light? Polarization is the key.

Image courtesy Proof of Concept Engineering

It turns out that beams of light have an ‘orientation’. Normally the orientation of light beams is random, but you can use a polarizer to only let light of a specific orientation pass through. You can think of a polarizer like the coin-slot on a vending machine: it will only accept coins in one orientation.

Using polarization, it’s possible to bounce light back and forth multiple times along an optical path before eventually letting it out and into the wearer’s eye. This approach (also known as ‘pancake optics’) allows the lens and the display to move much closer together, resulting in a more compact headset.
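For readers who want to see the ‘coin slot’ behavior in math, here’s a toy Jones-calculus sketch in Python. Real pancake optics add curved partial mirrors and reflective polarizers; this only shows a polarizer filtering orientation, plus the quarter-wave-plate double-pass trick that lets a folded path trap light for an extra bounce and then release it.

```python
import numpy as np

def polarizer(theta: float) -> np.ndarray:
    """Jones matrix of a linear polarizer at angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c * c, c * s], [c * s, s * s]])

def qwp45() -> np.ndarray:
    """Jones matrix of a quarter-wave plate with its fast axis at 45 deg."""
    a, b = (1 + 1j) / 2, (1 - 1j) / 2
    return np.exp(-1j * np.pi / 4) * np.array([[a, b], [b, a]])

horizontal = np.array([1, 0])  # horizontally 'oriented' light

# The coin slot: a matching polarizer passes the light, a crossed one blocks it.
print(np.round(np.abs(polarizer(0) @ horizontal), 3))          # [1. 0.]
print(np.round(np.abs(polarizer(np.pi / 2) @ horizontal), 3))  # [0. 0.]

# Two passes through the quarter-wave plate (out and back after a mirror
# bounce) rotate horizontal light to vertical -- the 'switch' that controls
# when light is held in the folded path and when it is released.
print(np.round(np.abs(qwp45() @ qwp45() @ horizontal), 3))     # [0. 1.]
```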

But to go even thinner—to shrink the size of the lenses themselves—Facebook researchers have turned to holographic optics.

Holographic Optics

Rather than using a series of typical lenses (like the kind found in a pair of glasses) in the folded optics, the researchers have formed the lenses into… holograms.

If that makes your head hurt, everything is fine. Holograms are nuts, but I’ll do my best to explain.

Unlike a photograph, which is a recording of the light in a plane of space at a given moment, a hologram is a recording of the light in a volume of space at a given moment.

When you look at a photograph, you can only see the information of the light contained in the plane that was captured. When you look at a hologram, you can look around the hologram, because the information of the light in the entire volume is captured (also known as a lightfield).

Now I’m going to blow your mind. What if, when you captured a hologram, the scene you captured had a lens in it? It turns out the lens you see in the hologram will behave just like the lens in the scene. Don’t believe me? Watch this video at 0:19 and look at the magnifying glass in the scene; watch as it magnifies the rest of the hologram, even though it is part of the hologram itself.

This is the fundamental idea behind Facebook’s holographic lens approach. The researchers effectively ‘captured’ a hologram of a real lens, condensing the optical properties of a real lens into a paper-thin holographic film.
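As a loose illustration, the phase profile such a film has to encode for an ideal thin lens can be written down directly. The wavelength and focal length below are assumed values, and the researchers’ film is recorded photographically rather than computed like this.

```python
import numpy as np

wavelength = 520e-9        # assumed green wavelength, meters
k = 2 * np.pi / wavelength
f = 0.05                   # assumed 50 mm focal length

# Phase an ideal thin lens imparts: phi(x, y) = -k * (x^2 + y^2) / (2 * f)
x = np.linspace(-5e-3, 5e-3, 512)           # 10 mm aperture
X, Y = np.meshgrid(x, x)
lens_phase = -k * (X**2 + Y**2) / (2 * f)   # what the film must encode
film = np.mod(lens_phase, 2 * np.pi)        # wrapped into a paper-thin layer

print(film.shape, float(film.min()), float(film.max()))
```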

So the lens Facebook is employing in this design is, quite literally, a hologram of a lens.

Facebook Research: 3D Body Reconstruction From Just One Camera

For the annual computer vision conference CVPR, Facebook is showing off an algorithm which can generate a fairly detailed 3D model of a clothed person from just one camera.

Facebook is the company behind the Oculus brand of virtual reality products. The company is considered a world leader in machine learning. Machine learning (ML) is at the core of the Oculus Quest and Rift S: both headsets have “inside-out” positional tracking, achieving sub-millimeter precision with no external base stations. On Quest, machine learning is even used to track the user’s hands without the need for controllers.

In a paper called PIFuHD, three Facebook staff and a University of Southern California researcher propose a machine learning system for generating a high-detail 3D representation of a person and their clothing from a single 1K image. No depth sensor or motion capture rig is required.

This paper is not the first work on generating 3D representations of a person from an image. Algorithms of this kind emerged in 2018 thanks to recent advances in computer vision.

In fact, the system Facebook is showing off is named PIFuHD after PIFu from last year, a project by researchers from various universities in California.

On today’s hardware, systems like PIFu can only handle relatively low resolution input images. This limits the accuracy and detail of the output model.

PIFuHD takes a new approach: it downsamples the input image and feeds it to PIFu for a low-detail ‘coarse’ base layer, then a separate network uses the full-resolution image to add fine surface details.
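Structurally, the pipeline looks something like the sketch below, with stand-in modules in place of the real networks. The actual PIFuHD branches are convolutional encoders feeding implicit-surface decoders; only the coarse-to-fine data flow here reflects the paper.

```python
import torch
import torch.nn.functional as F

def coarse_network(img_low: torch.Tensor) -> torch.Tensor:
    """Stand-in for the base PIFu branch: a coarse feature/occupancy grid."""
    return torch.rand(1, 64, 128, 128)

def fine_network(img_full: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
    """Stand-in for the HD branch: conditions full-res input on coarse output."""
    up = F.interpolate(coarse, size=img_full.shape[-2:],
                       mode="bilinear", align_corners=False)
    return up  # the real branch adds fine surface detail at this stage

img_full = torch.rand(1, 3, 1024, 1024)                # single 1K input photo
img_low = F.interpolate(img_full, size=(512, 512),
                        mode="bilinear", align_corners=False)  # downsampled copy
coarse = coarse_network(img_low)         # low-detail 'coarse' base layer
detail = fine_network(img_full, coarse)  # full-res pass adds fine detail
print(detail.shape)                      # torch.Size([1, 64, 1024, 1024])
```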

Facebook claims the result is state of the art. Looking at the provided comparisons to similar systems, that seems to be true.

Facebook first showed off its interest in digitally recreating humans back in March 2019, showing off ‘Codec Avatars’. This project focused specifically on the head and face, and notably the avatar generation required an expensive scan of the user’s head with 132 cameras.

In May 2019, during its annual F8 conference, the company showed off real time markerless body tracking with unprecedented fidelity, using a model that takes into account the human muscular and skeletal systems.

Avatar body generation is another step on the path to the stated end goal: allowing users to exist as their real physical selves in virtual environments, and to see friends as they really look too.

Don’t get too excited just yet: this kind of technology won’t be on your head next year. When presenting Codec Avatars, Facebook warned the technology was still “years away” from consumer products.

When it can be realized, however, such a technology has tremendous potential. For most, telepresence today is still limited to grids of webcams on a 2D monitor. The ability to see photorealistic representations of others in true scale, fully tracked from real motion, could fundamentally change the need for face-to-face interaction.

Researchers Say Head-mounted Haptics Can Combat Smooth Locomotion Discomfort in VR

Researchers from the National Taiwan University, National Chengchi University, and Texas A&M University say that haptic feedback delivered to the head right from a VR headset can significantly reduce discomfort related to smooth locomotion in VR.

Moving players artificially through large virtual environments isn’t a trivial task. While there are many different ways to move around in VR, smooth locomotion—the kind you’d find in most first-person non-VR games—is a popular method because it maps easily to existing game design paradigms. Unfortunately this method of virtual locomotion isn’t comfortable for everyone.

Image courtesy Yi-Hao Peng

In a paper published as part of CHI 2020—a conference focused on human-computer interaction—researchers describe their WalkingVibe system which uses simple head-mounted haptics to provide sensations that synchronize with the movement of the user in virtual reality. After conducting an initial study with 240 participants, the researchers say the system can significantly reduce discomfort associated with smooth locomotion and even improve immersion.

Using the Vive Pro Eye headset as the foundation for their work, the researchers tested two different types of head-mounted haptics: vibrating motors and actuated tappers (literally little arms that can gently tap the side of the user’s head). The haptics were synchronized to virtual footsteps to offer a stand-in stimulus for the sensations associated with real walking.
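Conceptually the synchronization is simple, as the toy sketch below shows. The drive_haptic function is a hypothetical stand-in for whatever driver actually pulses the motors or tappers, and the timing values are made up.

```python
import itertools
import time

def drive_haptic(side: str, duration_s: float = 0.05) -> None:
    """Hypothetical stand-in for pulsing a vibration motor or tapper."""
    print(f"pulse {side} actuator for {duration_s * 1000:.0f} ms")

def walk(step_interval_s: float = 0.5, steps: int = 6) -> None:
    """Alternate left/right pulses in time with virtual footsteps."""
    for side in itertools.islice(itertools.cycle(["left", "right"]), steps):
        drive_haptic(side)           # fire on each virtual footstep
        time.sleep(step_interval_s)  # wait for the next footstep

walk()
```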

Users were walked through three different virtual environments | Image courtesy Yi-Hao Peng

The researchers built a test VR application in Unity, linked to the haptics, in which users were walked through three different VR environments and asked to rate their level of comfort.

To check their work, the researchers also ran the same tests with visual and auditory stimulation (artificial head bobbing and footstep sounds) but without any haptics, to isolate the effect of the haptic feedback. They also ran tests with randomized haptic stimulation to determine whether the synchronization of the stimulation mattered to the outcome.

The results from the 240-participant study show a significant improvement in comfort and an improvement in realism from the haptics compared to the other methods tested.

[…] all 2-sided tactile designs significantly reduced VR sickness compared to the conditions with no haptic feedback. In addition, WalkingVibe with the 2-sided, footstep-synchronized vibrotactile cues significantly reduced discomfort compared to all other conditions and significantly improved realism compared to all tactile conditions, including tapping-based feedback.

Image courtesy Yi-Hao Peng

The researchers also discussed the limitations of their experiments. Notably, users in this study were physically seated; the same tests were not conducted with standing users, even though many VR games with artificial locomotion accommodate both seated and standing players. Furthermore, the researchers said that the artificial movement during the tests was not under the control of the test subjects; they were essentially taken along a guided path without active control over their movement.

The full paper is titled WalkingVibe: Reducing Virtual Reality Sickness and Improving Realism while Walking in VR using Unobtrusive Head-mounted Vibrotactile Feedback, and credits researchers Yi-Hao Peng, Carolyn Yu, Shi-Hong Liu, Chung-Wei Wang, Paul Taele, Neng-Hao Yu, and Mike Y. Chen.

Hand-tracking Text Input System From Facebook Researchers Throws Out the Keyboard (sort of)

A prototype from Facebook Reality Labs researchers demonstrates a novel method for text input with controllerless hand-tracking. The system treats the hands and fingers as a sort of predictive keyboard which uses pinching gestures to select groups of letters.

Text input is crucial to many productivity tasks and it’s something which is still a challenge inside of AR and VR headsets. Yes, you can sit in front of a keyboard, but with a VR headset on you won’t be able to see the keyboard itself. For some very good typists, this isn’t an issue, but for most people it makes typing especially challenging. Even for good typists (or for AR headsets where the keyboard is visible), the need to sit in front of a keyboard keeps you chained to a desk, drastically reducing the freedom that you’d otherwise have with a fully tracked headset.

Voice input is one option, but it’s problematic for several reasons. For one, it lacks discretion and privacy—anyone standing near you would hear not only your voice, but the entire contents of your input. Another issue is that dictation is a somewhat different mode of thought than typing, and not as well suited for many common writing tasks.

A virtual keyboard in ‘Facebook Spaces’

Virtual keyboards are another option—where you use your fingers to poke at floating keys—but they’re too slow for serious writing tasks and lack physical feedback.

Facebook Reality Labs researchers have created a hand-tracking text input prototype, designed for AR and VR headsets, which throws out the keyboard as we know it.

Instead of mapping each key to its own spot on a keyboard, groups of letters are mapped to each finger. Rather than selecting a specific letter, you pinch with the finger corresponding to whichever color-coded group contains the desired key. As you go, the system attempts to predict which word you want based on context, similar to a mobile swiping keyboard. The researchers call the system PinchType.
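The disambiguation works much like T9 texting did: many words collapse to the same pinch sequence, and a language model picks the likely one. Here’s a minimal sketch; the finger-to-letter mapping and tiny dictionary below are invented for illustration and are not the mapping from the paper.

```python
FINGER_GROUPS = {                 # invented mapping for illustration
    "index":  set("abcde"),
    "middle": set("fghij"),
    "ring":   set("klmnopq"),
    "pinky":  set("rstuvwxyz"),
}
DICTIONARY = ["no", "on", "ok", "hi", "hello"]  # toy word list

def letter_to_finger(ch: str) -> str:
    return next(f for f, group in FINGER_GROUPS.items() if ch in group)

def candidates(pinch_sequence: list) -> list:
    """Words whose letters match the pinch sequence finger-for-finger."""
    return [w for w in DICTIONARY
            if len(w) == len(pinch_sequence)
            and all(letter_to_finger(c) == f
                    for c, f in zip(w, pinch_sequence))]

seq = [letter_to_finger(c) for c in "no"]  # ['ring', 'ring']
print(candidates(seq))  # ['no', 'on', 'ok'] -- context must pick one
```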

PinchType overcomes many of the issues with typical virtual keyboards and voice input. It’s quiet, private, and looks to be much faster than hunt-and-peck on a floating virtual keyboard. It also provides feedback because you can feel when you touch your fingers together.

The researchers shared some initial findings from testing the system:

In a preliminary study with 14 participants, we investigated PinchType’s speed and accuracy on initial use, as well as its physical comfort relative to a mid-air keyboard. After entering 40 phrases, most people reported that PinchType was more comfortable than the mid-air keyboard. Most participants reached a mean speed of 12.54 WPM, or 20.07 WPM without the time spent correcting errors. This compares favorably to other thumb-to-finger virtual text entry methods.

But there are some downsides. The system relies on accurate hand-tracking and runs into one of its most challenging problems: as seen from a head-mounted camera, it’s very common for fingers to be occluded by the back of the hand. Below, you can see that, from the camera’s viewpoint, it’s ambiguous whether the user is tapping with their pinky or ring finger.

It’s very likely that the PinchType prototype was developed using high-end hand-tracking tech with external cameras (to remove sub-par accuracy from the equation). We’ll have to wait for the full details of the system to be published to know if the researchers believe these occluded cases present an issue for an inside-out hand-tracking system.

The PinchType prototype is the work of Facebook Reality Labs researchers Jacqui Fashimpaur, Kenrick Kin, and Matt Longest. The work was presented under the title Text Entry for Virtual and Augmented Reality Using Comfortable Thumb to Fingertip Pinches.

The work was published as part of CHI 2020, a conference focused on human-computer interaction.

Facebook’s Chief Researcher: ‘When The Next Generation of VR Shows Up, It Will Be Because We Did It’

Facebook’s chief VR researcher revealed his belief that the tech giant will be the company to deliver next generation VR. In an interview with The Information he also warned that AR glasses could be as much as 10 years away from wide appeal.

Michael Abrash leads Facebook Reality Labs, the division of Facebook researching and developing future VR & AR technologies for use in next generation Oculus products.

AR Glasses: Still 5-10 Years Away

Most of the interview focused on the prospect of consumer augmented reality glasses, which some expect to arrive in the next few years. Abrash warned that compelling AR glasses are approximately five years out, and said he doesn’t expect the technology to reach even the “Blackberry stage” until 2030.

This is due to the difficulty of fitting a new display technology, as well as a tracking system and processing, into a lightweight pair of glasses, all while somehow achieving all-day battery life. Such a new display technology will need to show near-opaque artificial imagery while still letting most real light through.

But “the hardest part”, according to Abrash, is developing an appropriate input system. Carrying a controller would be impractical and clunky for complex input, and it is unlikely that people will want to use voice commands in public or wave their hands around. From The Information article:

In the long run, what you really want is your interface to work the way that your brain works with your perceptions now. Rather than you having to say, “OK, I want to hear this person,” imagine that when you’re in a noisy environment, your glasses detect that it’s noisy, they infer who are the people you’re talking to, and they pick that signal out. You don’t even know that it’s happened any more than I think about the fact that my glasses gather light rays in a way that lets me see better.

Ultimately, however, when these technological barriers are solved, Abrash expects AR glasses to replace smartphones. But based on Abrash’s statements, this can’t happen until well into the 2030s or perhaps even 2040.

Abrash acknowledged that other companies have the same understanding of the potential of AR glasses: “Everybody sees that AR will replace the phone someday.”

VR Headsets: High Confidence

When asked about the competition in the VR space, however, Abrash took a different tone, expressing a belief that Facebook has invested significantly more than any other company, and that he can see “no other way” for true next generation VR to arrive than via his team’s research:

“No company has invested at anywhere near the level we have. When the next generation of VR shows up, it will be because we did it. I see no other way it’ll happen.

“I actually am remarkably impressed with [Facebook CEO Mark Zuckerberg] and Facebook’s strong commitment to and belief in VR. Everybody sees that AR will replace the phone someday. That seems like a given. But I think that VR will be as important as AR. AR can replace the phone, but VR can replace the personal computer.”

Publicly available data indicates that Facebook currently controls around 50% of the PC VR market and the company currently faces no serious competition in the standalone VR space. According to CEO Mark Zuckerberg, the company is selling Oculus Quests as fast as it can manufacture them.

Massive Investment In VR Research

This is, of course, a bold claim. But over the past few years, Facebook has revealed deep research into the technologies required to advance virtual reality to a true next generation.

This includes research into varifocal displays, compact wide field of view lenses, eye tracking, facial expression tracking, photorealistic avatars, haptic gloves, AI-based foveated rendering, room meshing & reconstruction, wireless video transmission, and much more.

What Competition?

The only other company with a public showing of VR research anywhere near this scale is Microsoft. The company’s Windows MR virtual reality platform failed to gain traction with consumers and there is no indication of a second revision of the tracking platform or controllers. The head of the company’s gaming brand, Xbox, recently described VR as “not where our focus is.”

In late 2016 Google took on Facebook in the VR space with the launch of its Daydream platform. The company even launched the first consumer 6DoF standalone headset with Lenovo back in 2018, and its researchers co-developed a very high resolution display for VR with LG. However, Daydream was discontinued and it has been widely reported that much of the original team now works elsewhere, with some key names being poached by Facebook.

Valve and HTC provide competition to Facebook on PC. Valve’s hardware appeals to a smaller audience due to its $1000 price and HTC’s new $700 headset is stuck in a middle ground of having neither the affordability of the Rift S nor the innovation of the Index. The Steam Hardware Survey indicates that HTC’s headset failed to gain any traction at all.

Last month, The Information reported Apple is working on a VR-AR hybrid headset which could serve as competition to Facebook’s Oculus Quest. The device is reportedly slated for release in 2022. It is unclear whether this would actually constitute a next generation, and it may be more of a stepping stone (and perhaps development platform) to AR glasses than VR for VR’s sake. CEO Tim Cook stated he has “never been a fan of VR” and called the technology “not profound.”

But When?

So if Abrash is right and Facebook is the company which delivers true next generation VR, when will this be? To directly quote Abrash himself just four months ago: “not any time soon”.

Back in 2016 Abrash laid out his predictions for a headset with 4K resolution per eye, varifocal optics, eye tracking, wireless operation, and a 140-degree field of view. He suggested this could arrive by 2021.

At Oculus Connect 5 in 2018, however, he revised his timeframe. He stated that he expected some specifications to be higher than his predictions, but that it would arrive a year later than predicted.

But at Oculus Connect 6 in September, he rolled back his timeline even further:

“The honest truth is, I don’t know when you’re going to be able to buy the magical headset I described last year. VR will continue to evolve nicely, but my full vision for next generation VR is going to take longer. How much longer? I don’t know. Let’s just say not any time soon. Turning breakthrough technology into products is just hard.”

So while Abrash seems confident Facebook will be the company to deliver true next generation VR, if he’s right, it may not happen for at least another three years.

Facebook Details Artificial Intelligence-Enabled Foveated Rendering Reconstruction

Facebook published a research paper for its machine learning-based reconstruction for foveated rendering that the company first teased at Oculus Connect 5 (in late 2018).

The paper is titled DeepFovea: Neural Reconstruction for Foveated Rendering and Video Compression using Learned Statistics of Natural Videos.

Foveated Rendering: The Key To Next Generation VR

The human eye is only high resolution in the very center. Notice as you look around your room that only what you’re directly looking at is in high detail. You aren’t able to read text that you aren’t pointing your eyes at directly. In fact, that “foveal area” is just 3 degrees wide.

Future VR headsets can take advantage of this by only rendering where you’re directly looking (the foveal area) in high resolution. Everything else (the peripheral area) can be rendered at a significantly lower resolution. This is called foveated rendering, and is what will allow for significantly higher resolution displays. This, in turn, may enable significantly wider field of view.
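A quick back-of-envelope calculation shows why this is so attractive. The field-of-view, pixels-per-degree, and region sizes below are illustrative assumptions, not Facebook’s numbers, but they land in the same ballpark as the savings claimed later in this article.

```python
def pixels(fov_deg: float, ppd: float) -> float:
    """Pixel count for a square region rendered at ppd pixels per degree."""
    return (fov_deg * ppd) ** 2

full = pixels(100, 40)  # whole 100-degree view at a retinal ~40 ppd
foveated = pixels(10, 40) + pixels(100, 10)  # sharp fovea + cheap periphery
print(full / foveated)  # ~14x fewer pixels to render
```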

It’s Not That Simple

Foveated rendering already exists in the Vive Pro Eye, a refresh of the Vive Pro with eye tracking. However, the foveal area is still relatively large and the peripheral area still relatively high resolution. The display itself is the same as on the regular Vive Pro (and in the Oculus Quest). On Vive Pro Eye, foveated rendering is used to allow for supersampling with no performance loss, rather than to enable significantly lower rendering cost for ultra high resolution displays.

Facebook seems to be looking to decrease the number of pixels that need to be rendered by an order of magnitude or more. This could allow even a future mobile-powered Oculus Quest headset to achieve a significant jump in resolution or graphical detail.

At Oculus Connect 6 back in September, John Carmack briefly revealed that these efforts were not going as well as expected, due to the lower resolution periphery being noticeable:

And it’s also kind of the issue that the foveated rendering that we’ve got… when it falls down the sparkling and shimmering going on in the rest of the periphery is more objectionable than we might have hoped it would be. So it becomes a trade off then.

DeepFovea: A Generative Adversarial Network

DeepFovea is a machine learning algorithm, a deep neural network, which “hallucinates” the missing peripheral detail in each frame in a way that is intended to be imperceptible as being lower resolution.

https://www.youtube.com/watch?v=d1U9mCVrdBM

Specifically, DeepFovea is a Generative Adversarial Network (GAN). GANs were invented in 2014 by a group of researchers led by Ian Goodfellow.

GANs are one of the most significant inventions of the 21st century so far, enabling some astonishing algorithms that almost defy belief. GANs power “AI upscaling”, DeepFakes, FaceApp, NVIDIA’s AI-generated realtime city, and Facebook’s own room reconstruction and VR codec avatars. In 2016, Facebook’s Chief AI Researcher, a machine learning veteran himself, described GANs as “the coolest idea in machine learning in the last twenty years”.

DeepFovea is designed and trained to essentially trick the human visual system. Facebook claims that DeepFovea can reduce the pixel count by as much as 14x while still keeping the reduction in peripheral rendering “imperceptible to the human eye”.

Too Computationally Expensive

Don’t expect this to arrive in a virtual reality headset any time soon. The paper mentions that DeepFovea itself currently requires 4x NVIDIA Tesla V100 GPUs to generate this detail.

As recently as September, Facebook’s top VR researcher specifically warned that next-generation VR will not arrive “any time soon”.

For this to ship in a product, Facebook might have to find a way to significantly reduce the computational cost, which isn’t unheard of in the machine learning world. The computational requirement for the algorithm powering Google Assistant’s current voice was reduced 1000-fold before shipping.

Another alternative is that Facebook could utilize, or develop, a neural network accelerator chip optimized for this kind of task. A report last month indicated that Facebook was developing a custom chip for AR glasses tracking; perhaps the same could be done to enable foveated rendering in a next-generation Oculus Quest or a future Oculus Rift.

Researchers Develop Method to Boost Contrast in VR Headsets by Lying to Your Eyes

A team of researchers from Cambridge, Berkeley, MIT, and others has developed a novel method for boosting perceived contrast in VR headsets. The method exploits human stereo vision by intentionally mismatching elements of the view seen by each eye; the brain resolves the conflict in a way that boosts perceived contrast, the researchers say.

In the latest round of VR headsets, most major headset makers have moved from OLED displays to LCD displays. The latter offers greater pixel density, a reduced screen door effect, and likely lower cost, with the biggest trade-off being in contrast ratio. While OLED displays offer a wide contrast range and especially deep blacks, LCD displays in today’s headsets deliver a more ‘washed-out’ look, especially in darker scenes.

Researchers from Cambridge, Durham, Inria, Université Côte d’azur, Berkeley, Rennes, and MIT have developed a novel method which could help boost perceived contrast in VR headsets. The system is called DiCE, which stands for ‘Dichoptic Contrast Enhancement’. In a paper published earlier this year in the ACM Transactions on Graphics journal, the researchers say the method has “negligible computational cost and can be directly used in real-time VR rendering.”

The researchers say that while tone mapping methods can boost perceived contrast in images, they are too slow and computationally expensive for practical use in VR rendering. Instead they propose a system which exploits the natural behavior of the human stereo vision system to fool it into perceiving greater contrast.

Generally speaking, the goal in VR headsets is to always render stereo-accurate views; if the image shown to each eye has unexpected differences, it creates ‘binocular rivalry’ (AKA stereo-conflict) which can be visually uncomfortable as it creates a mismatch which is difficult for the brain to properly fuse into a coherent image. The DiCE method aims to exploit mismatched stereo images for enhanced contrast while preventing binocular rivalry. A video summary explains:

A key component to the method is figuring out how to render the images to enhance contrast without causing significant binocular rivalry. The researchers say they devised an experiment to determine the factors which lead to binocular rivalry, and then designed the stereo-based contrast enhancement to avoid those factors.

The main challenge of our approach is striking the right balance between contrast enhancement and visual discomfort caused by binocular rivalry. To address this challenge, we conducted a psychophysical experiment to test how content, observer, and tone curve parameters can influence binocular rivalry stemming from the dichoptic presentation. We found that the ratio of tone curve slopes can predict binocular rivalry letting us easily control the shape of the dichoptic tone curves.
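Here’s a rough sketch of what a dichoptic tone-curve pair might look like, using simple gamma curves as stand-ins for the paper’s tone curves; the slope-ratio cap is an assumed parameter, not the threshold the authors measured.

```python
import numpy as np

def tone(x: np.ndarray, gamma: float) -> np.ndarray:
    """A simple gamma curve standing in for a tone curve."""
    return np.clip(x, 0, 1) ** gamma

def dichoptic_pair(img, gamma_lo=0.8, gamma_hi=1.25, max_slope_ratio=2.0):
    """Brighter curve to one eye, darker to the other; where the per-pixel
    slope ratio gets too extreme, fall back to a shared mean curve."""
    left, right = tone(img, gamma_lo), tone(img, gamma_hi)
    x = np.maximum(img, 1e-6)
    # slope of x**g is g * x**(g - 1); ratio of the two curves' slopes:
    ratio = (gamma_lo * x ** (gamma_lo - 1)) / (gamma_hi * x ** (gamma_hi - 1))
    mean = tone(img, (gamma_lo + gamma_hi) / 2)
    ok = ratio < max_slope_ratio
    return np.where(ok, left, mean), np.where(ok, right, mean)

img = np.linspace(0, 1, 5)
left_eye, right_eye = dichoptic_pair(img)
print(left_eye)
print(right_eye)
```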

After finding an approach which minimizes binocular rivalry, the researchers tested their findings, claiming “our results clearly show that our solution is more successful at enhancing contrast and at the same time much more efficient [than prior methods]. We also performed an evaluation in a VR setup where users indicate that our approach clearly improves contrast and depth compared to the baseline.”

The researchers believe the work is well suited for VR rendering, noting, “as tone mapping is usually a part of the rendering pipeline, our technique can be easily combined with existing VR/AR rendering at almost no [computational] cost.” The team even went so far as to publish a Unity Asset package for other researchers to play with.

The research team included Fangcheng Zhong, George Alex Koulieris, George Drettakis, Martin S. Banks, Mathieu Chambe, Fredo Durand, and Rafał K. Mantiuk.

Microsoft ‘DreamWalker’ Experiment Takes First Steps into Always-on World-scale VR

Microsoft unveiled an experiment this week that explores the future of always-on virtual reality. Building a system called DreamWalker, Microsoft researchers walk around in the physical world while still being fully immersed in VR, essentially taking the first steps into replacing your morning walk with something that’s not only reactive to your physical surroundings, but ultimately more interesting.

To do this, DreamWalker fuses a Windows VR headset’s inside-out tracking, two RGB depth sensors, and GPS locations. The system, the researchers maintain in their paper, can continuously and accurately position the user in the physical world, sense walkable paths and obstacles in real-time, and represent paths through a dynamically changing scene in VR to redirect the user towards the chosen destination.
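For a flavor of what that kind of fusion involves, here’s a toy complementary-filter sketch: smooth but drift-prone headset tracking supplies frame-to-frame motion, while absolute but coarse GPS slowly pulls the estimate back. DreamWalker’s actual positioning is considerably more sophisticated; the gain below is an assumed constant.

```python
def fuse(vio_delta, gps_pos, fused_pos, gain=0.1):
    """Dead-reckon with the headset's motion delta, then nudge toward GPS."""
    px = fused_pos[0] + vio_delta[0]
    py = fused_pos[1] + vio_delta[1]
    return (px + gain * (gps_pos[0] - px),
            py + gain * (gps_pos[1] - py))

pos = (0.0, 0.0)
for step in range(100):
    vio_delta = (0.5, 0.01)        # per-step motion with slight sideways drift
    gps = (0.5 * (step + 1), 0.0)  # absolute fix (noise omitted for brevity)
    pos = fuse(vio_delta, gps, pos)
print(pos)  # sideways drift stays bounded instead of accumulating
```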

Created by Jackie Yang, Eyal Ofek, Andy Wilson, and Christian Holz, the experiment shows it’s clearly still early days for always-on VR—the system requires a backpack-mounted computer and a load of other gear—however DreamWalker poses some interesting questions related to how such a system could (or rather should) be shaped around a dynamic world.

DreamWalker, created by Microsoft researchers Jackie Yang, Christian Holz, Eyal Ofek, and Andrew D. Wilson | Image courtesy Microsoft Research

In the paper, the researchers propose a few methods of keeping you safely on your path, including obstacle avoidance techniques like digital pedestrians moving you away from potential danger and controlling a user’s path with dynamic events such as vehicles being parked in front of you (video linked below). And while randomly spawning traffic cones or a wild gang of pedestrians herding you to your destination may seem like inelegant solutions for now, there’s no telling what a smarter, more integrated system may hold in the future.

For all its marketing bloat, Magic Leap proposes just such a highly integrated system in its hypothetical Magicverse, one that would necessarily require a fairly complete understanding of the physical world, including pre-mapped and digitized streets, buildings, everything, so you could potentially ‘reskin’ the world to a varying extent.

Microsoft’s method is decidedly less involved than Magic Leap’s moon-shot idea, which thus far has been presented more as a hypothesis than as an object of active experimentation. The researchers instead take the user’s planned walking area and match it up as best they can with a digital map, introducing redirected walking when needed to keep the user from veering off course.

A: the real-world path; B: the digital map with planned redirection | Image courtesy Microsoft Research

Of course, neither AR nor VR hardware is capable of doing all of this for now; however, it’s not out of the question for future devices. The off-the-shelf parts the researchers strapped together are more than likely to find their way into future headsets in some form, alongside greater rendering power, better computer vision, and on-board GPS for better location-based play.

Moreover, these early steps are predicated on the very real assumption that the better AR/VR gets, the more time we’ll spend interacting in digital environments, making something mundane like a walk through the park as exciting and novel as creators can make it.

Check out the four-minute video below to see the research group’s findings in action:
