Facebook Researchers Show The Most Compact VR Optics Yet

Facebook’s VR research division is presenting prototype VR optics, smaller than any we’ve seen yet, at the annual SIGGRAPH computer graphics conference.

The ideas behind the “holographic near-eye display” could one day enable VR headsets with a sunglasses-like form factor, but for now this is solely research, with significant limitations.

Why Are VR Headsets So Bulky?

The primary driver of the size and bulk of today’s VR headsets is the optical design. Magnifying a display over a wide field of view requires a large, thick lens, and focusing it at a viewable distance requires a long gap between the lens and the display. After adding the housing needed to contain this system, even the most minimal designs end up over 350 grams.


The standalone Oculus Quest, with a battery, mobile chip and lens separation adjustment, weighs 571 grams. Many people find it hurts their face after a few minutes.

Panasonic and Pico have shown off prototypes of compact headsets using “pancake lenses”, and Huawei has already launched this as a product in China. Without a tracking system or battery, these headsets end up around 150 grams.

Huawei’s VR Glass, sold in China, weighs 166 grams

However, these current pancake lens designs have a number of unsolved flaws. They block around 75 percent of the light, which can make the image look dim and washed out. They may also show faint, slightly misaligned ghost copies of the image, and this “ghosting” only gets worse as you try to improve the image with a brighter source.

Holographic Lenses

Facebook Reality Labs’ new approach is a thin film in which focusing is done by holographic optics instead of by the bulk of a lens. ‘Hologram’ in this context just means a physical “recording” of how light interacts with an object: in this case a lens rather than a scene.

Facebook claims the research may be able “to deliver a field of view comparable to today’s VR headsets using only a thin film for a thickness of less than 9 mm.” The total weight of the display module is claimed as just 18 grams. However, this does not include the actual laser source, and neither do any of the images Facebook provided. “For our green-only sunglasses-like prototype, we measured an overall maximum field of view of approximately 92° × 69°,” according to the research paper.

By using polarization-based optical folding, these ultra-lightweight lenses can be placed directly in front of the display source.

Because holographic elements disperse light, the only practical illumination sources are lasers, used at specific angles and wavelengths. The researchers were able to “inject” laser light into a 2.1″ 1600×1600 LCD, replacing the backlight.

The prototype is currently monochrome, only capable of displaying the color green. The researchers have a tabletop-sized proof of concept for multi-color, and believe bringing this to the sunglasses prototype is “viable” with further engineering.

The range of colors laser light can deliver (known as the color gamut) is significantly wider than that of LCD displays, and in fact slightly wider than even OLED, so this would represent a milestone achievement if it could be moved into a head-worn system.

Early Research, Lofty Goals

It’s important to understand that what’s being presented here is just early research for a new kind of display system. If it ever becomes a product, it will also need a tracking system. And unless it connects to your phone with a cable, it’d likely need a battery and mobile chipset too.

Facebook describes this research as being on the same miniaturization research “path” as Half Dome 2 and 3, which it presented at Oculus Connect 6 back in September.

Those headsets are much larger than what’s being shown here, but achieved a wider field of view while also featuring eye tracking and variable focus. FRL says future iterations of this sunglasses prototype could also be varifocal by moving the lenses over a range of just 1 millimeter. This could theoretically be achieved with tiny piezoelectric actuators.
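To get a feel for why such tiny travel could be enough, here is a back-of-the-envelope thin-lens calculation in Python. The focal length and gap values are hypothetical, chosen only to illustrate the principle: when the display sits just inside the focal length of a short-focal-length eyepiece, a 1 mm change in the gap swings the virtual image across a large range of focus.

```python
# Minimal sketch: thin-lens equation with the display just inside the focal
# length, so the lens forms a magnified virtual image (standard VR optics).
# All numbers are hypothetical, for illustration only.

def virtual_image_distance_mm(f_mm, gap_mm):
    """1/v = 1/f - 1/gap; negative v means a virtual image behind the display."""
    return 1.0 / (1.0 / f_mm - 1.0 / gap_mm)

f = 40.0                        # hypothetical eyepiece focal length, mm
for gap in (39.0, 38.0):        # just 1 mm of lens travel
    v = virtual_image_distance_mm(f, gap)
    print(f"gap {gap:.0f} mm -> virtual image at {abs(v) / 1000:.2f} m")

# gap 39 mm -> virtual image at 1.56 m
# gap 38 mm -> virtual image at 0.76 m
```

With these illustrative numbers, a single millimeter of travel moves the focal plane from about a meter and a half away to roughly arm’s length, which is why actuators with sub-millimeter strokes are plausible candidates.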

For virtual reality to reach Mark Zuckerberg’s lofty goal of 1 billion users, headsets need to get significantly more comfortable while also increasing realism. While designs like the Rift S “halo strap” can redistribute weight, this is more of a band-aid than a true fix for the underlying issue of bulk.

Like all early research, this idea may never pan out. Practical issues may emerge. Facebook is simultaneously exploring a number of novel compact display architectures. If it can make even one work, it could do to VR what LCD panels did to CRT monitors and televisions.

Facebook’s research paper concludes:

“Lightweight, high resolution, and sunglasses-like VR displays may be the key to enabling the next generation of demanding virtual reality applications that can be taken advantage of anywhere and for extended periods of time. We made progress towards this goal by proposing a new design space for virtual reality displays that combines polarization-based optical folding, holographic optics, and a host of supporting technologies to demonstrate full color display, sunglasses-like form factors, and high resolution across a series of hardware prototypes. Many practical challenges remain: we must achieve a full color display in a sunglasses-like form factor, obtain a larger viewing eye box, and work to suppress ghost images. In doing so, we hope to be one step closer to achieving ubiquitous and immersive computing platforms that increase productivity and bridge physical distance.”


Facebook Says It Has Developed the ‘Thinnest VR display to date’ With Holographic Folded Optics

Facebook published new research today which the company says shows the “thinnest VR display demonstrated to date,” in a proof-of-concept headset based on folded holographic optics.

Facebook Reality Labs, the company’s AR/VR R&D division, today published new research demonstrating an approach which combines two key features: polarization-based optical ‘folding’ and holographic lenses. In the work, researchers Andrew Maimone and Junren Wang say they’ve used the technique to create a functional VR display and lens that together are just 9mm thick. The result is a proof-of-concept VR headset which could truly be called ‘VR glasses’.

The approach has other benefits beyond its incredibly compact size; the researchers say it can also support a significantly wider color gamut than today’s VR displays, and that their display makes progress “toward scaling resolution to the limit of human vision.”

Let’s talk about how it all works.

Why Are Today’s Headsets So Big?


It’s natural to wonder why even the latest VR headsets are essentially just as bulky as the first generation of headsets that launched back in 2016. The answer is simple: optics. Unfortunately the solution is not so simple.

Every consumer VR headset on the market uses effectively the same optical pipeline: a macro display behind a simple lens. The lens is there to focus the light from the display into your eye. But in order for that to happen, the lens needs to be a few inches from the display; otherwise it doesn’t have enough focusing power to focus the light into your eye.

That necessary distance between the display and the lens is the reason why every headset out there looks like a box on your face. The approach is still used today because the lenses and the displays are known quantities; they’re cheap & simple, and although bulky, they achieve a wide field of view and high resolution.

Many solutions have been proposed for making VR headsets smaller, and just about all of them include the use of novel displays and lenses.

The new research from Facebook proposes the use of both folded optics and holographic optics.

Folded Optics

What are folded optics? It’s not quite what it sounds like, but once you understand it, you’d be hard pressed to come up with a better name.

While the simple lenses in today’s VR headsets must be a certain distance from the display in order to focus the light into your eye, the concept of folded optics proposes ‘folding’ that distance over on itself, such that the light still traverses the same distance necessary for focusing, but its path is folded into a more compact area.

You can think of it like a piece of paper with an arbitrary width. When you fold the paper in half, the paper itself is still just as wide as when you started, but its width occupies less space because you folded it over on itself.

But how the hell do you do that with light? Polarization is the key.


It turns out that beams of light have an ‘orientation’. Normally the orientation of light beams is random, but you can use a polarizer to only let light of a specific orientation pass through. You can think of a polarizer like the coin-slot on a vending machine: it will only accept coins in one orientation.

Using polarization, it’s possible to bounce light back and forth multiple times along an optical path before eventually letting it out and into the wearer’s eye. This approach (also known as ‘pancake optics’) allows the lens and the display to move much closer together, resulting in a more compact headset.
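For the optically curious, here’s a textbook Jones-calculus sketch in Python (my own illustration of the general principle, not code from Facebook’s research). The key trick in pancake designs is that a double pass through a quarter-wave plate rotates linear polarization by 90 degrees, so a reflective polarizer reflects the beam on its first encounter and transmits it on the second, folding the path:

```python
import numpy as np

def rot(t):
    """2D rotation matrix for the polarization basis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, s], [-s, c]])

def qwp(theta):
    """Jones matrix of a quarter-wave plate, fast axis at angle theta
    (global phase dropped)."""
    return rot(-theta) @ np.diag([1, 1j]) @ rot(theta)

H = np.array([1, 0], dtype=complex)  # horizontally polarized light

# Out through a QWP at 45 degrees, off a mirror, and back through the QWP:
# the double pass acts as a half-wave plate, rotating H into V.
out = qwp(np.pi / 4) @ qwp(np.pi / 4) @ H
print(np.round(out, 3))  # ~ [0, 1]: vertically polarized, so a reflective
                         #   polarizer that reflected H now lets it through
```

Real pancake optics add a half-mirror and pay a price in brightness at each bounce, which is also why current pancake designs tend to lose a lot of light.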

But to go even thinner—to shrink the size of the lenses themselves—Facebook researchers have turned to holographic optics.

Holographic Optics

Rather than using a series of typical lenses (like the kind found in a pair of glasses) in the folded optics, the researchers have formed the lenses into… holograms.

If that makes your head hurt, everything is fine. Holograms are nuts, but I’ll do my best to explain.

Unlike a photograph, which is a recording of the light in a plane of space at a given moment, a hologram is a recording of the light in a volume of space at a given moment.

When you look at a photograph, you can only see the information of the light contained in the plane that was captured. When you look at a hologram, you can look around the hologram, because the information of the light in the entire volume is captured (also known as a lightfield).


Now I’m going to blow your mind. What if, when you captured a hologram, the scene you captured had a lens in it? It turns out, the lens you see in the hologram will behave just like the lens in the scene. Don’t believe me? Watch this video at 0:19 and look at the magnifying glass in the scene, and watch as it magnifies the rest of the hologram, even though it is part of the hologram itself.

This is the fundamental idea behind Facebook’s holographic lens approach. The researchers effectively ‘captured’ a hologram of a real lens, condensing the optical properties of a real lens into a paper-thin holographic film.

So the optic Facebook is employing in this design is, quite literally, a hologram of a lens.
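As a rough illustration of how a flat film can focus light (standard Fresnel-lens math, not Facebook’s actual recipe), the recorded hologram of an ideal lens stores a parabolic phase profile. A few lines of Python show the profile such a film would need; the wavelength matches the green-only prototype described in the paper, while the focal length is a made-up value:

```python
import numpy as np

# phi(r) = -pi * r^2 / (lambda * f): the paraxial phase a thin lens imparts.
# A holographic film 'replays' this phase when illuminated at the recording
# wavelength, acting as a lens with focal length f. Illustrative values only.

wavelength = 532e-9   # green laser light, metres
f = 0.035             # hypothetical focal length, metres

r = np.linspace(0.0, 0.01, 5)                        # radial positions, metres
phase = (-np.pi * r**2 / (wavelength * f)) % (2 * np.pi)
print(np.round(phase, 2))   # the stored phase, wrapped to one wave
```

Because the phase is wrapped to a single wave, the film can stay paper-thin no matter how strong the lens is; the trade-off is strong wavelength dependence, which is why this approach pairs naturally with laser illumination.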



Watch: Facebook’s Latest Research On Photorealistic Avatars & Full Body Tracking

For the annual computer vision conference CVPR, Facebook Reality Labs released a short clip showing off research towards photorealistic avatars and full body tracking:

Facebook is the company behind the Oculus brand of virtual reality products, and is considered a world leader in machine learning. Machine learning (ML) is at the core of the Oculus Quest and Rift S: both headsets have “inside-out” positional tracking, achieving sub-mm precision with no external base stations. On Quest, machine learning is even used to track the user’s hands without the need for controllers.

Facebook first showed off its interest in digitally recreating humans back in March 2019 with ‘Codec Avatars’. This project focused specifically on the head and face, and notably the avatar generation required an expensive scan of the user’s head with 132 cameras.

In May 2019, during its annual F8 conference, the company showed off real time markerless body tracking with unprecedented fidelity, using a model that takes into account the human muscular and skeletal systems.

Also this week, the company is showing off an algorithm which can generate a fairly detailed 3D model of a clothed person from just one camera.

Don’t get too excited just yet: this kind of technology won’t be on your head next year. When presenting Codec Avatars, Facebook warned the technology was still “years away” from consumer products.

When it can be realized, however, such a technology has tremendous potential. For most, telepresence today is still limited to grids of webcams on a 2D monitor. The ability to see photorealistic representations of others in true scale, fully tracked from real motion, could fundamentally change the need for face-to-face interaction.


Elixir Is Facebook’s Free Oculus Quest Hand-Tracking Demo Game, Out Now

 

Elixir is a free hand-tracking demo game for Oculus Quest developed by Magnopus and Facebook Reality Labs. You can download it and play it right now!

We went over some early impressions of the hand tracking and Elixir itself back at Oculus Connect 6 last year, but now that Elixir is out for the public you can download it for free. I just played through the entire short demo experience in about 10 minutes; it’s basically a very simple puzzle game.

You’ll need a reasonably sized playspace to move around, roughly 6.5 by 6.5 feet, and your hands. That’s it. No controller required!

Things start out simply enough with you learning how to teleport using hand tracking by making a triangle with your fingers, Tien style, and then pinching both your index fingers to your thumbs. It’s neat, but isn’t ever used again after you learn how to do it. The actual experience is fully roomscale.

There’s a sorceress that wants to hire you as her new apprentice, but naturally, all you can manage to do is muck stuff up. Every time she tells you not to do something, you’re expected to do just that thing until everything in her dungeon is exploding and going haywire. It’s very cute, pretty funny, and full of lots of clever interactions that morph your hands into various things.

For a free app that shows a bit of what you can do with hand tracking, it’s certainly worth the download. And if you really like this brand of whimsical fun, consider giving Waltz of the Wizard a try, which just got full hand tracking support today too.

Download Elixir for Quest now and let us know what you think down in the comments below!


Hand-tracking Text Input System From Facebook Researchers Throws Out the Keyboard (sort of)

A prototype from Facebook Reality Labs researchers demonstrates a novel method for text input with controllerless hand-tracking. The system treats the hands and fingers as a sort of predictive keyboard which uses pinching gestures to select groups of letters.

Text input is crucial to many productivity tasks and it’s something which is still a challenge inside of AR and VR headsets. Yes, you can sit in front of a keyboard, but with a VR headset on you won’t be able to see the keyboard itself. For some very good typists, this isn’t an issue, but for most people it makes typing especially challenging. Even for good typists (or for AR headsets where the keyboard is visible), the need to sit in front of a keyboard keeps you chained to a desk, drastically reducing the freedom that you’d otherwise have with a fully tracked headset.

Voice input is one option, but problematic for several reasons. For one, it lacks discretion and privacy—anyone standing near you would not only have to hear you talk, but they would also hear the entire contents of your input. Another issue is that dictation is a somewhat different mode of thought than typing, and not as well suited for many common writing tasks.

A virtual keyboard in ‘Facebook Spaces’

Virtual keyboards are another option—where you use your fingers to poke at floating keys—but they’re too slow for serious writing tasks and lack physical feedback.

Facebook Reality Labs researchers have created a hand-tracking text input prototype, designed for AR and VR headsets, which throws out the keyboard as we know it.

Instead of touching keys on a keyboard, the system maps a group of letters to each finger. Instead of selecting a specific letter, you pinch with the finger corresponding to whichever color-coded group contains the desired letter. As you go, the system attempts to predict which word you want based on context, similar to a mobile swiping keyboard. The researchers call the system PinchType.

PinchType overcomes many of the issues with typical virtual keyboards and voice input. It’s quiet, private, and looks to be much faster than hunt-and-peck on a floating virtual keyboard. It also provides feedback because you can feel when you touch your fingers together.
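The disambiguation at the heart of this scheme is conceptually similar to T9 on old phone keypads: an ambiguous sequence of group selections is resolved against a dictionary. Here is a minimal Python sketch; the finger-to-letter mapping and the tiny dictionary are hypothetical stand-ins, not the actual groups from the paper:

```python
# Hypothetical finger-to-letter groups (the paper's real mapping may differ).
FINGER_GROUPS = {
    "index":  set("qwert"),
    "middle": set("yuiop"),
    "ring":   set("asdfg"),
    "pinky":  set("hjklzxcvbnm"),
}

DICTIONARY = ["hello", "world", "type", "pinch", "hands"]  # toy word list

def finger_for(letter):
    """Which finger's group contains this letter."""
    return next(f for f, group in FINGER_GROUPS.items() if letter in group)

def candidates(pinch_sequence):
    """All dictionary words consistent with a sequence of finger pinches."""
    return [w for w in DICTIONARY
            if len(w) == len(pinch_sequence)
            and all(finger_for(ch) == f for ch, f in zip(w, pinch_sequence))]

# Typing 'type' produces the pinch sequence index, middle, middle, index.
seq = [finger_for(ch) for ch in "type"]
print(candidates(seq))  # ['type'] -- unambiguous against this toy dictionary
```

A real system would rank candidates with a language model rather than requiring a unique match, which is presumably where the context-based prediction the researchers describe comes in.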

The researchers shared some initial findings from testing the system:

In a preliminary study with 14 participants, we investigated PinchType’s speed and accuracy on initial use, as well as its physical comfort relative to a mid-air keyboard. After entering 40 phrases, most people reported that PinchType was more comfortable than the mid-air keyboard. Most participants reached a mean speed of 12.54 WPM, or 20.07 WPM without the time spent correcting errors. This compares favorably to other thumb-to-finger virtual text entry methods.

But there are some downsides. The system relies on accurate hand-tracking, and it runs into one of hand-tracking’s most challenging cases: as seen from a head-mounted camera, it’s very common for fingers to be occluded by the back of the hand. Below, you can see that, as seen from the viewpoint, it’s ambiguous whether the user is using their pinky or ring finger for the tap.

It’s very likely that the PinchType prototype was developed using high-end hand-tracking tech with external cameras (to remove sub-par accuracy from the equation). We’ll have to wait for the full details of the system to be published to know if the researchers believe these occluded cases present an issue for an inside-out hand-tracking system.

The PinchType prototype is the work of Facebook Reality Labs researchers Jacqui Fashimpaur, Kenrick Kin, and Matt Longest. The work was presented under the title Text Entry for Virtual and Augmented Reality Using Comfortable Thumb to Fingertip Pinches.

The work was published as part of CHI 2020, a conference focused on human-computer interaction.


Facebook Researchers Found A Way To Essentially Give Oculus Quest More GPU Power

Facebook Researchers seem to have figured out a way to use machine learning to essentially give Oculus Quest developers 67% more GPU power to work with.

The Oculus Quest is a standalone headset, which means the computing hardware is inside the device itself. Because of the size and power constraints this introduces, as well as the desire to sell the device at a relatively affordable price, Quest uses a smartphone chip significantly less powerful than a gaming PC.

“Creating next-gen VR and AR experiences will require finding new, more efficient ways to render high-quality, low-latency graphics.”

Facebook AI Research

The new technique works by rendering at a lower resolution than usual; the center of the view is then upscaled using a machine learning “super resolution” algorithm. These algorithms have become popular in the last few years, with some websites even letting users upload any image from their PC or phone to be AI upscaled.

Given enough training data, super resolution algorithms can produce a significantly more detailed output than traditional upscaling. While just a few years ago “Zoom and Enhance” was a meme used to mock those who falsely believed computers could do this, machine learning has made this idea a reality. Of course, the algorithm is technically only “hallucinating” what it expects the missing detail might look like, but in many cases there is no practical difference.
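The excerpts here don’t spell out the network architecture, but a generic sub-pixel super-resolution model gives a feel for the idea. Below is a minimal PyTorch sketch (my own illustrative stand-in, not Facebook’s model) that maps a low-resolution crop to double the resolution:

```python
import torch
import torch.nn as nn

class TinySuperRes(nn.Module):
    """A minimal ESPCN-style 2x super-resolution network (illustrative only)."""
    def __init__(self, scale=2, channels=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            # Predict scale*scale sub-pixels per output channel...
            nn.Conv2d(32, channels * scale * scale, 3, padding=1),
        )
        # ...then rearrange those channels into a higher-resolution image.
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.body(x))

net = TinySuperRes()
center_crop = torch.rand(1, 3, 128, 128)  # hypothetical low-res eye-buffer crop
upscaled = net(center_crop)
print(upscaled.shape)                     # torch.Size([1, 3, 256, 256])
```

In the scheme the article describes, only the center of the view would pass through a network like this, while the periphery keeps cheap traditional scaling.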

One of the paper’s authors is Behnam Bastani, Facebook’s Head of Graphics in the Core AR/VR Technologies department. Between 2013 and 2017, Bastani worked for Google, developing “advanced display systems” and then leading development of Daydream’s rendering pipeline.

It’s interesting to note that the paper is not actually primarily about the super resolution algorithm, or about freeing up GPU resources by using it. The researchers’ direct goal was to figure out a “framework” for running machine learning algorithms in real time within the current rendering pipeline (with low latency), which they achieved. Super resolution upscaling is essentially just the first example of what this enables.

Because this is the focus of the paper, there isn’t much detail on the exact size of the upscaled region or its perceptibility, other than a mention of “temporally coherent and visually pleasing results in VR”.

The researchers claim that when rendering at 70% of the usual resolution in each direction (roughly half the pixels), the technique can save roughly 40% of GPU time, and developers can “use those resources to generate better content”.

For applications like a media viewer, the saved GPU power could be kept unused to increase battery life, since on Snapdragon chips (and most others) the DSP (used for machine learning tasks like this) is significantly more power efficient than the GPU.

A demo video was produced using Beat Saber, where the left image “was generated using a fast super-resolution network applied to 2x low resolution content” (the right image is regular full resolution rendering):

Apparently, using super resolution to save GPU power is just one potential application of this rendering pipeline framework:

“Besides super-resolution application, the framework can also be used to perform compression artifact removal for streaming content, frame prediction, feature analysis and feedback for guided foveated rendering. We believe enabling computational methods and machine learning in mobile graphics pipeline will open the door for a lot of opportunities towards the next generation of mobile graphics.”

Facebook AI Research

There is no indication from this paper that this technology is planned to be deployed in the consumer Oculus Quest, although it doesn’t give any reason why it couldn’t either. There could be technical barriers that aren’t stated here, or it may just be considered not worth the complexity until a next generation headset. We’ve reached out to Facebook to get answers on this. Regardless, it looks clear that machine learning may play a role in bringing standalone VR closer to PC VR over the next decade.


‘Defy Distance’ Put To The Test As Coronavirus Forces Facebook To Work At Home

A contractor at Facebook’s Stadium East office in Seattle was recently diagnosed with the COVID-19 disease (caused by the SARS-CoV-2 virus) and now all employees of the company working in Seattle are encouraged to work from home until the end of March.

Seattle is home to Facebook’s VR and AR long-term research division — Facebook Reality Labs — and is not the only major technology company implementing work from home policies in response to workers being diagnosed with COVID-19. Microsoft is headquartered in the area and this week implemented a similar policy. A statement from Microsoft explains “we are recommending all employees who are in a job that can be done from home should do so through March 25”. The response to the novel coronavirus is shifting on an almost daily basis as health officials provide updated guidance as the scale of the spread of the virus becomes clearer.

Most major Internet-enabled technology companies have an office hub in the Seattle area, including Valve, Amazon, and Google. But in Facebook’s case, a lengthy work-from-home recommendation that simultaneously encompasses its most forward-thinking teams puts particular focus on the company’s work attempting to develop the future of VR and AR.

Facebook’s All-Day Headset

In recent years, “Defy Distance” became the rallying cry for Facebook’s long-term investment in VR, and its research teams in Washington state are tasked with developing future head-worn devices that could make all-day work in VR feasible. Today’s VR headsets like Oculus Quest and Rift S are relatively bulky and feature optical designs that focus the eyes at a fixed distance. This combination can cause immediate discomfort in some people and becomes unwearable for others after 20 minutes, and wearing them over the length of an 8-hour workday is essentially out of the question. But at the Oculus Connect 6 developer conference in September, Facebook’s Seattle-based head of VR and AR research Michael Abrash made clear his teams are specifically tasked with developing hardware that might meet this standard.

It is a tall order, but hardware alone is only part of the work Facebook’s engineers are undertaking. A recent report from The Information suggested that Facebook’s overall head of VR and AR Andrew Bosworth is already holding meetings in VR using a prototype version of VR meeting software. Facebook also has teams in other locations working on software like “Codec Avatars” which would create hyper-realistic representations of individuals that could be embodied by their host and transmitted over the Internet.

Of course, physical offices work today because humans signal a wealth of communication cues to one another through intonation, facial expression, and intricate body movements, cues that are absent in current communication platforms. We host a weekly podcast and regularly conduct interviews in VR with a virtual studio built using the current generation of publicly available VR technologies. While certainly compelling, the software guesses our facial expressions and the gaze of our eyes, and we are constantly holding controllers in our hands, only able to signal one another with a couple of fingers.

While we don’t know what avatar system is being used with Facebook’s internal VR meeting software, nor what sensors or optical designs are currently being tested within Facebook’s walls, the coronavirus-driven work-from-home policies may well force insights that could accelerate or alter the company’s future development efforts in VR and AR.


Facebook Details Artificial Intelligence-Enabled Foveated Rendering Reconstruction

Facebook published a research paper for its machine learning-based reconstruction for foveated rendering that the company first teased at Oculus Connect 5 (in late 2018).

The paper is titled DeepFovea: Neural Reconstruction for Foveated Rendering and Video Compression using Learned Statistics of Natural Videos.

Foveated Rendering: The Key To Next Generation VR

The human eye only sees in high resolution at its very center. Notice as you look around your room that only what you’re directly looking at is in high detail. You aren’t able to read text that you aren’t pointing your eyes at directly. In fact, that “foveal area” is just 3 degrees wide.

Future VR headsets can take advantage of this by only rendering where you’re directly looking (the foveal area) in high resolution. Everything else (the peripheral area) can be rendered at a significantly lower resolution. This is called foveated rendering, and it is what will allow for significantly higher resolution displays. This, in turn, may enable a significantly wider field of view.
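A back-of-the-envelope calculation shows why the savings are so dramatic. The numbers below are illustrative (a hypothetical 100-degree field, a generous 10-degree full-resolution inset, and a quarter-resolution periphery), not taken from any shipping headset:

```python
fov_deg = 100.0         # hypothetical field of view per axis
inset_deg = 10.0        # full-resolution foveal inset (3-degree fovea + margin)
periphery_scale = 0.25  # hypothetical linear resolution scale in the periphery

full_area = fov_deg ** 2
inset_area = inset_deg ** 2

# Full-resolution pixels for the inset, plus a (0.25)^2 fraction for the rest.
shaded = inset_area + (full_area - inset_area) * periphery_scale ** 2
print(f"pixels shaded vs. naive full-res: {shaded / full_area:.1%}")  # ~7.2%
```

Under these toy assumptions, only about 7% of the pixels need full shading, an order-of-magnitude reduction in line with the ambitions described below.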

It’s Not That Simple

Foveated rendering already exists in the Vive Pro Eye, a refresh of the Vive Pro with eye tracking. However, the foveal area is still relatively large and the peripheral area still relatively high resolution. The display itself is the same as on the regular Vive Pro (and in the Oculus Quest). On Vive Pro Eye, foveated rendering is used to allow for supersampling with no performance loss, rather than to enable significantly lower rendering cost for ultra high resolution displays.

Facebook seems to be exploring how to decrease the number of pixels that need to be rendered by an order of magnitude or more. This could allow even a future mobile-powered Oculus Quest headset to have a significant jump in resolution or graphical detail.

At Oculus Connect 6 back in September, John Carmack briefly revealed that these efforts were not going as well as expected, due to the lower resolution periphery being noticeable:

And it’s also kind of the issue that the foveated rendering that we’ve got… when it falls down the sparkling and shimmering going on in the rest of the periphery is more objectionable than we might have hoped it would be. So it becomes a trade off then.

DeepFovea: A Generative Adversarial Network

DeepFovea is a machine learning algorithm, a deep neural network, which “hallucinates” the missing peripheral detail in each frame in a way that is intended to be imperceptible as being lower resolution.

https://www.youtube.com/watch?v=d1U9mCVrdBM

Specifically, DeepFovea is a Generative Adversarial Network (GAN). GANs were invented in 2014 by a group of researchers led by Ian Goodfellow.

GANs are one of the most significant inventions of the 21st century so far, enabling some astonishing algorithms that almost defy belief. GANs power “AI upscaling”, DeepFakes, FaceApp, NVIDIA’s AI-generated realtime city, and Facebook’s own room reconstruction and VR codec avatars. In 2016, Facebook’s Chief AI Researcher, a machine learning veteran himself, described GANs as “the coolest idea in machine learning in the last twenty years”.
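For readers unfamiliar with the mechanics, a GAN pits two networks against each other: a generator that produces output and a discriminator that tries to distinguish generated output from real data. Here is a minimal single training step in PyTorch (the textbook formulation on toy data, nothing like DeepFovea’s actual video-scale networks):

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))  # generator
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, 2) + 3.0       # stand-in for 'real' data samples
fake = G(torch.randn(32, 16))         # generated samples

# The discriminator learns to label real samples 1 and fakes 0...
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# ...while the generator learns to make the discriminator say 1 for its fakes.
g_loss = bce(D(fake), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Trained this way, the generator is never told what the “correct” output looks like pixel by pixel; it only learns to produce output the discriminator can’t tell apart from the real thing, which is what makes GANs so good at plausible “hallucination”.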

DeepFovea is designed and trained to essentially trick the human visual system. Facebook claims that DeepFovea can reduce the pixel count by as much as 14x while still keeping the reduction in peripheral rendering “imperceptible to the human eye”.

Too Computationally Expensive

Don’t expect this to arrive in a virtual reality headset any time soon. The paper mentions that DeepFovea itself currently requires 4x NVIDIA Tesla V100 GPUs to generate this detail.

As recently as September, Facebook’s top VR researcher specifically warned that next-generation VR will not arrive “any time soon”.


For this to ship in a product, Facebook might have to find a way to significantly reduce the computational cost, which isn’t unheard of in the machine learning world. The computational requirement for the algorithm powering Google Assistant’s current voice was reduced 1000-fold before shipping.

Another alternative is that Facebook could utilize, or develop, a neural network accelerator chip optimized for this kind of task. A report last month indicated that Facebook was developing a custom chip for AR glasses tracking; perhaps the same could be done to enable foveated rendering in a next-generation Oculus Quest, or maybe a future Oculus Rift.


OC6: Next Generation VR Headsets ‘Not Any Time Soon’

At Oculus Connect 6, Facebook’s chief VR researcher stated that the company won’t deliver a truly next generation VR headset “any time soon”.

Abrash has spoken about his predictions for when a next generation VR headset will arrive on multiple occasions. His first concrete vision of a next generation headset was delivered at Oculus Connect 3 in 2016. During his keynote, the researcher laid out his predictions for a headset with 4K resolution per eye, varifocal optics, eye tracking, wireless connectivity, and a 140-degree field of view. He stated that he expected this to arrive by 2021.

At Oculus Connect 5 last year, however, Abrash revised his timeframe. He stated that he expected some specifications to be higher than his predictions, but that the headset would arrive a year later than predicted. This is one of the reasons we warned not to expect a Rift 2 at Oculus Connect 6.


But at Oculus Connect 6 today, Abrash rolled back his expectations even further:

“The honest truth is, I don’t know when you’re going to be able to buy the magical headset I described last year. VR will continue to evolve nicely, but my full vision for next generation VR is going to take longer. How much longer? I don’t know. Let’s just say not any time soon. Turning breakthrough technology into products is just hard.”

Abrash did, however, share two new advanced VR headset prototypes, Half Dome 2 and Half Dome 3. These headsets use improved optics to significantly reduce size and weight, while still having a “20% wider” field of view than today’s Oculus Quest. But what seems clear is that this kind of technology won’t be arriving until 2023, or even later.


Oculus’ Varifocal Prototype Half Dome has Two New Variants

The first day of Oculus Connect 6 (OC6) looked like it was going to be dominated by software updates and announcements, as plenty of new hardware has been released in 2019. Thankfully, Facebook Reality Labs Chief Scientist Michael Abrash used his portion of the keynote to update everyone on Half Dome, the prototype headset first revealed during OC5, which featured mechanical varifocal displays. The team has been working on not one but two new designs, both tackling different problems.

Half Dome (left), Half Dome 2 (middle) and Half Dome 3 (right)

Abrash unveiled the Half Dome 2 and Half Dome 3 prototypes, both significantly sleeker and smaller than the original model. The 2018 version featured Fresnel lenses with a 140° field of view (FoV) and mechanically driven displays, guided by eye tracking, to ensure the image remains sharp. Half Dome 2 improves on this design in a number of ways, with the team focusing on ergonomics and comfort.

As is noticeable from the images, the Half Dome 2 prototype is substantially smaller and lighter, with a weight reduction of 200 grams over its predecessor. This has meant the FoV is narrower than Half Dome 1’s, but still wider than Oculus Quest’s.

For Half Dome 3, the team designed an electronic version of the varifocal mechanism, replacing the mechanical parts with a new type of liquid crystal lens made of two optical elements: polarization-dependent lenses (PDLs) and switchable half-wave plates.

“PDLs are special because their focal length changes based on their polarization state. By changing the voltage applied to the switchable plates, we can toggle between the two focal lengths. This could make for a great set of digital bifocals, but it doesn’t necessarily make for comfortable VR. By stacking a series of PDLs and switchable half-wave plates on top of each other, we’re able to achieve smooth varifocal that lets you comfortably and seamlessly adjust your focus in the headset,” explains the team on Oculus Blog.
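To see how a handful of binary elements can stand in for a smoothly moving lens, consider that each switchable plate flips the polarization seen by the next PDL, and in a common PDL design the two polarization states see equal and opposite optical power. Powers add in diopters, so N stages give up to 2^N distinct focal states. Here is a small Python sketch with hypothetical per-stage powers (Facebook hasn’t published its values):

```python
from itertools import product

# Hypothetical optical powers of three PDL stages, in diopters. Each stage
# contributes +p or -p depending on the state of the half-wave plate before it.
stage_powers = [0.25, 0.5, 1.0]

states = sorted({sum(sign * p for sign, p in zip(signs, stage_powers))
                 for signs in product((+1, -1), repeat=len(stage_powers))})
print(states)
# [-1.75, -1.25, -0.75, -0.25, 0.25, 0.75, 1.25, 1.75]
# Three binary stages -> 8 evenly spaced focal offsets, no moving parts.
```

With enough closely spaced states, stepping between them can be made to feel like the “smooth varifocal” the team describes.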

Electronic varifocal system of Half Dome 3 (left), original Half Dome prototype (right).

Half Dome 3 may still be in the prototype stage, but the new design certainly offers a tidy form factor going forward. As further details on Facebook Reality Labs’ Half Dome prototypes are revealed, VRFocus will let you know.