NVIDIA Researchers Demonstrate Ultra-thin Holographic VR Glasses That Could Reach 120° Field-of-view

A team of researchers from NVIDIA Research and Stanford has published a new paper demonstrating a pair of thin holographic VR glasses. The displays can show true holographic content, addressing the vergence-accommodation conflict. Though the research prototypes demonstrating the principles have a much smaller field-of-view, the researchers claim it would be straightforward to reach a 120° diagonal field-of-view.

In a paper published ahead of this year's SIGGRAPH 2022 conference, a team of researchers from NVIDIA Research and Stanford demonstrates a near-eye VR display that can show flat images or holograms in a compact form-factor. The paper also explores the interconnected variables in the system that impact key display factors like field-of-view, eye-box, and eye-relief. Further, the researchers explore different algorithms for rendering the image with the best visual quality.

Commercially available VR headsets haven't gotten much smaller over the years, largely because of an optical constraint. Most VR headsets use a single display and a simple lens. In order to focus the light from the display into your eye, the lens must be a certain distance from the display; any closer and the image will be out of focus.
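For a rough sense of the scale involved, here's a back-of-the-envelope thin-lens calculation. The focal length and virtual image distance are illustrative values, not figures from any particular headset:

```python
# Back-of-the-envelope thin-lens calculation: how far a display must sit
# from a simple magnifier lens to form a distant virtual image.
# Illustrative numbers only; not taken from any specific headset.

def display_distance_mm(focal_length_mm: float, virtual_image_mm: float) -> float:
    """Object distance for a virtual image `virtual_image_mm` away, through a
    thin lens of focal length `focal_length_mm` (magnifier configuration,
    where 1/d_o = 1/f + 1/d_i)."""
    return 1.0 / (1.0 / focal_length_mm + 1.0 / virtual_image_mm)

f = 40.0      # lens focal length, mm (assumed)
d_i = 1500.0  # desired virtual image distance, mm (assumed)
print(f"display sits {display_distance_mm(f, d_i):.1f} mm from the lens")
# -> ~39.0 mm: the display can never sit much closer than one focal length,
#    which is the depth a conventional lens-plus-display design must reserve.
```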

Eliminating that gap between the lens and the display would unlock previously impossible form-factors for VR headsets; understandably there’s been a lot of R&D exploring how this can be done.

In the newly published paper, Holographic Glasses for Virtual Reality, the NVIDIA-Stanford team shows how it built a holographic display using a spatial light modulator combined with a waveguide, rather than a traditional lens.

The team built both a large benchtop model—to demonstrate core methods and experiment with different algorithms for rendering the image for optimal display quality—and a compact wearable model to demonstrate the form-factor. The images you see of the compact glasses-like form-factor don't include the electronics to drive the display (as the size of that part of the system is out of scope for the research).

You may recall that a little while back Meta Reality Labs published its own work on a compact glasses-sized VR headset. Although that work involves holograms (used to form the system's lenses), it is not a 'holographic display', and so it doesn't solve the vergence-accommodation conflict that's common in many VR displays.

On the other hand, the NVIDIA-Stanford researchers write that their Holographic Glasses system is in fact a holographic display (thanks to the use of a spatial light modulator), which they tout as a unique advantage of their approach. However, the team also writes that the system can display typical flat images as well (which, like those of contemporary VR headsets, can be converged for a stereoscopic view).

Image courtesy NVIDIA Research

Not only that, but the Holographic Glasses project touts a mere 2.5mm thickness for the entire display, significantly thinner than the 9mm thickness of the Reality Labs project (which was already impressively thin!).

As with any good paper though, the NVIDIA-Stanford team is quick to point out the limitations of their work.

For one, their wearable system has a tiny 22.8° diagonal field-of-view and an equally tiny 2.3mm eye-box, both of which are far too small for a practical VR headset.

Image courtesy NVIDIA Research

However, the researchers write that the limited field-of-view is largely due to their experimental combination of novel components that aren’t optimized to work together. Drastically expanding the field-of-view, they explain, is largely a matter of choosing complementary components.

“[…] the [system’s field-of-view] was mainly limited by the size of the available [spatial light modulator] and the focal length of the GP lens, both of which could be improved with different components. For example, the focal length can be halved without significantly increasing the total thickness by stacking two identical GP lenses and a circular polarizer [Moon et al. 2020]. With a 2-inch SLM and a 15mm focal length GP lens, we could achieve a monocular FOV of up to 120°”
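Those figures hold up to a quick geometry check. Here's a minimal sketch which assumes the field-of-view is simply the angle the SLM subtends at one focal length from the lens (our simplification, not a formula from the paper):

```python
import math

# Sanity-check the quoted monocular FOV with simple geometry: an SLM of
# width S viewed through a lens of focal length f spans roughly
# 2 * atan((S/2) / f). Assumes the SLM sits about one focal length from
# the GP lens; this is our reading, not the paper's exact model.

slm_width_mm = 2 * 25.4  # 2-inch SLM, as quoted
focal_mm = 15.0          # 15mm GP lens focal length, as quoted

fov_deg = math.degrees(2 * math.atan((slm_width_mm / 2) / focal_mm))
print(f"monocular FOV ≈ {fov_deg:.0f}°")  # ≈ 119°, matching 'up to 120°'
```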

As for the 2.3mm eye-box (the volume in which the rendered image can be seen), it’s way too small for practical use. However, the researchers write that they experimented with a straightforward way to expand it.

With the addition of eye-tracking, they show, the eye-box could be dynamically expanded up to 8mm by changing the angle of the light that's sent into the waveguide. Granted, 8mm is still a very tight eye-box, and might be too small for practical use given user-to-user variations in eye-relief distance and how the glasses rest on the head.
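For a rough feel of what that steering entails, here's a quick calculation. It assumes the 15mm focal length from the earlier quote and treats the expansion as purely lateral shifting of the 2.3mm static eye-box (a simplification on our part):

```python
import math

# Rough feel for eye-tracked eye-box steering: shifting the angle of the
# light entering the waveguide by d_theta slides the eye-box sideways by
# roughly f * tan(d_theta). The focal length comes from the earlier quote;
# treating the expansion as pure lateral steering is our simplification.

focal_mm = 15.0
static_eyebox_mm, expanded_eyebox_mm = 2.3, 8.0

max_shift_mm = (expanded_eyebox_mm - static_eyebox_mm) / 2
steer_deg = math.degrees(math.atan(max_shift_mm / focal_mm))
print(f"needs ±{max_shift_mm:.2f}mm of shift, i.e. about ±{steer_deg:.0f}° of steering")
```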

But there are variables in the system that can be adjusted to change key display factors like the eye-box. Through their work, the researchers established the relationships between these variables, giving a clear look at which tradeoffs would need to be made to achieve different outcomes.

Image courtesy NVIDIA Research

As they show, eye-box size is directly related to the pixel pitch (distance between pixels) of the spatial light modulator, while field-of-view is related to the overall size of the spatial light modulator. Limitations on eye-relief and converging angle are also shown, relative to a sub-20mm eye-relief (which the researchers consider the upper limit of a true ‘glasses’ form-factor).
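The pixel pitch relationship follows from basic diffraction: finer pixels diffract light through wider angles, which the lens then fans out into a larger eye-box. Here's a sketch of that reasoning; the pitch and focal length below are illustrative values, not the paper's actual parameters:

```python
import math

# Why eye-box size tracks pixel pitch: an SLM with pixel pitch p can
# diffract light by at most sin(theta) = lambda / (2 * p). Focused through
# a lens of focal length f, that angular range becomes an eye-box of
# roughly 2 * f * tan(theta). Values below are illustrative only.

wavelength_um = 0.532  # green laser (assumed)
pitch_um = 3.7         # SLM pixel pitch (illustrative)
focal_mm = 15.0        # lens focal length (illustrative)

theta = math.asin(wavelength_um / (2 * pitch_um))
eyebox_mm = 2 * focal_mm * math.tan(theta)
print(f"eye-box ≈ {eyebox_mm:.1f}mm")  # finer pitch (smaller p) widens the eye-box
```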

An analysis of this “design trade space,” as they call it, was a key part of the paper.

“With our design and experimental prototypes, we hope to stimulate new research and engineering directions toward ultra-thin all-day-wearable VR displays with form-factors comparable to conventional eyeglasses,” they write.

The paper is credited to researchers Jonghyun Kim, Manu Gopakumar, Suyeon Choi, Yifan Peng, Ward Lopes, and Gordon Wetzstein.


Facebook Reality Labs Shows Method for Expanding Field of View of Holographic Displays

Researchers from Facebook Reality Labs (Facebook's R&D division) and the University of California, Berkeley have published new research which demonstrates a method for expanding the field-of-view of holographic displays.

In the paper, titled High Resolution Étendue Expansion for Holographic Displays, researchers Grace Kuo, Laura Waller, Ren Ng, and Andrew Maimone explain that when it comes to holographic displays there’s an intrinsic inverse link between a display’s field-of-view and its eye-box (the eye-box is the area in which the image from a display can be seen). If you want a larger eye-box, you get a smaller field-of-view. And if you want a larger field of view, you get a smaller eye-box.

If the eye-box is too small, even the rotation of your eye can make the image invisible, because your pupil leaves the eye-box when you look in any direction but forward. A large eye-box is necessary not only to keep the image visible during eye movement, but also to compensate for subtle differences in headset fit from one session to the next.

The researchers explain that a traditional holographic display with a 120° horizontal field-of-view would have an eye-box of just 1.05mm—far too small for practical use in a headset. On the other hand, a holographic display with a 10mm eye-box would have a horizontal field-of-view of just 12.7°.

If you want to satisfy both a 120° field-of-view and a 10mm eye-box, the researchers say, you’d need a holographic display with a resolution of 32,500 × 32,500. That’s not only impractical because such a display doesn’t exist, but even if it did, rendering that many pixels for real-time applications would be impossible with today’s hardware.
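That resolution figure can be sanity-checked against the display's étendue budget. Along one axis, eye-box width times twice the sine of the half field-of-view roughly equals pixel count times wavelength; solving for pixel count at the stated targets recovers the quoted number. The 532nm green wavelength is our assumption, so the paper's exact constants may differ slightly:

```python
import math

# The FOV/eye-box tradeoff is a fixed 'étendue' budget set by pixel count:
# along one axis, eyebox * 2 * sin(FOV / 2) ≈ N * lambda. Solving for N at
# the article's targets (120° FOV and a 10mm eye-box) recovers the quoted
# resolution. Wavelength is assumed (green, 532nm).

wavelength_m = 532e-9
fov_deg, eyebox_m = 120.0, 10e-3

pixels = eyebox_m * 2 * math.sin(math.radians(fov_deg / 2)) / wavelength_m
print(f"required pixels per axis ≈ {pixels:,.0f}")  # ≈ 32,600, close to 32,500
```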

So the researchers propose a different solution: decoupling the link between field-of-view and eye-box in a holographic display. Their method places a scattering element in front of the display which scatters the light to expand its cone of propagation (also known as étendue). Doing so allows the field-of-view and eye-box characteristics to be adjusted independently.

But there's a problem, of course. If you put a scattering element in front of a display, how do you form a coherent image from the scattered light? The researchers have developed an algorithm which pre-compensates for the scattering element, such that the light actually forms a proper image after being scattered.

At a high level, it’s very similar to the approach that existing headsets use to handle color separation (chromatic aberration) as light passes through the lenses—rendered frames pre-separate colors so that the lens ends up bending the colors back into the correct place.
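The paper's actual algorithm is more sophisticated, but the core idea can be sketched with a toy Gerchberg-Saxton-style loop: model the scatterer as a known random phase mask, propagate with an FFT, and iteratively solve for the SLM phase whose light forms the target image after being scrambled. This is our illustrative reconstruction, not the researchers' code:

```python
import numpy as np

# Toy pre-compensation sketch (not the paper's algorithm): the scattering
# element is modeled as a KNOWN random phase mask right after a phase-only
# SLM, and free-space propagation is approximated by a single FFT. A
# Gerchberg-Saxton-style loop solves for an SLM phase pattern whose light,
# AFTER being scrambled by the mask, still forms the target image.

rng = np.random.default_rng(0)
N = 256
target = np.zeros((N, N)); target[96:160, 96:160] = 1.0  # toy target image
mask = np.exp(1j * 2 * np.pi * rng.random((N, N)))       # known scatterer

slm_phase = 2 * np.pi * rng.random((N, N))               # initial guess
for _ in range(50):
    field = np.exp(1j * slm_phase) * mask                # SLM, then scatterer
    img = np.fft.fftshift(np.fft.fft2(field))            # propagate to the eye
    img = target * np.exp(1j * np.angle(img))            # impose target amplitude
    back = np.fft.ifft2(np.fft.ifftshift(img)) / mask    # propagate back, undo mask
    slm_phase = np.angle(back)                           # phase-only constraint

recon = np.abs(np.fft.fftshift(np.fft.fft2(np.exp(1j * slm_phase) * mask)))
print("correlation with target:", np.corrcoef(recon.ravel(), target.ravel())[0, 1])
```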

Here the orange box represents the field of view of a normal holographic display while the full frame shows the expanded field of view | Image courtesy Facebook Reality Labs

The researchers used optical simulations to hone their algorithm and then built a benchtop prototype of their proposed pipeline to experimentally demonstrate the method for expanding the field of view of a holographic display.

Although the researchers believe their work “demonstrates progress toward more practical holographic displays,” they also say that there is “additional work to be done to achieve a full-color display with high resolution, complete focal depth cues, and a sunglasses-like form factor.”

Toward the end of the paper they identify miniaturization, compute time, and perceptual effects among the challenges that need to be addressed by further research.

The paper also hints at a potential future project for the team: combining this method with prior work from one of the paper's researchers, Andrew Maimone.

“The prototype presented in this work is intended as a proof-of-concept; the final design is ideally a wearable display with a sunglasses-like form factor. Starting with the design presented by Maimone et al. [2017], which had promising form factor and FoV but very limited eyebox, we propose integrating our scattering mask into the holographic optical element that acts as an image combiner.”

Image courtesy Facebook Reality Labs

If you read our article last month on Facebook’s holographic folded optics, you may be wondering how these projects differ.

The holographic folded optics project makes use of a holographic lens to focus light, but not a holographic display to generate the image in the first place. That project also employs folded optics to significantly reduce the size of such a display.

On the other hand, the research outlined in this article deals with making actual holographic displays more practical by showing that a large field-of-view and large eye-box are not mutually exclusive in a holographic display.


Facebook’s Future VR Headsets Could Feature Holographic Optics


One of the big hindrances to widespread virtual reality (VR) adoption is that headsets are bulky devices which a lot of people simply don't want on their face. Companies like Facebook are spending enormous amounts trying to improve the form factor of headsets, and recently the tech giant unveiled a new research project which uses holographic optics to create a 'VR glasses' device.


Current VR technology uses small LCD or OLED displays alongside lenses to focus the light into your eyes. While this is a proven method, it requires the display and lenses to be a certain distance apart so the optics can focus the light properly. The knock-on effect is that a VR headset has to be deep enough to fit all of this inside.

Researchers Andrew Maimone and Junren Wang from Facebook Reality Labs (FRL) will present their new research at SIGGRAPH's virtual conference this August: a system which uses holographic optics to make a device far thinner and lighter than current models, aiming for that coveted sunglasses-like VR hardware.

Just a proof-of-concept research device at the moment, it uses polarization-based optical folding to mimic that conventional display-to-lens distance in a form factor that's less than 9mm deep. At the same time, the team claim that the field of view (FoV) is comparable to existing VR devices.


This is achieved by using flat films as optics and laser illumination. “Holographic optics compel the use of laser light sources, which are more difficult to integrate but provide a much richer set of colours than the LEDs common in nearly all of today’s VR headsets,” FRL notes in a blog post. Presently the research device outputs in monochrome (as seen in the above-left image) but the team do have a larger full-colour benchtop prototype working (right image). The goal now is to bring full colour to the smaller unit.

Obviously this is still an early research project, so there are plenty of other variables to solve, such as power and processing: would these be on-board, or housed in a separate device like the Nreal Light? Ideally it would be an all-in-one form factor, yet such products are still years away.

VRFocus is still waiting to see if anything comes from Michael Abrash's Half Dome prototypes, plus there's the smaller Oculus Quest Facebook is reportedly working on. For further updates on Facebook's VR research, keep reading VRFocus.

Unreal Engine Creators can Visualise Their 3D Creations Using Looking Glass Factory's Holographic Displays

To view 3D content without the need for glasses, virtual reality (VR) headsets or any other face-based contraption, you'll need a holographic display like The Looking Glass. Today, Looking Glass Factory has announced that its displays will now support those working in Unreal Engine (UE4), Epic Games' popular videogame development software.


In collaboration with Epic Games, Looking Glass Factory has released a UE4 plugin so that content creators can visualise their designs using these holographic displays. The plugin can be used across a range of industries beyond videogames, such as automotive, architecture, mapping/GIS and medical imaging.

The Unreal Engine plugin feature list is as follows:

  • Real-time 3D view of content in Unreal’s Game View
  • Holographic 3D visuals in the editor and in builds
  • Support for buttons on Looking Glass displays
  • One-build deployment for 8.9″, 15.6″, and 8K units
  • Adjustable camera for clipping planes and FoV
  • Support for default image effects from Unreal, or customizable effects
  • Windows only (Linux/Ubuntu coming soon)
  • Leap Motion Controller support

“Having access to a glasses-free holographic display is a massive breakthrough, and presents an exciting prospect for teams working in immersive computer graphics, visualization and content creation,” explained Kim Libreri, CTO, Epic Games in a statement. “The Looking Glass holographic display provides a stunning level of realism, and we look forward to seeing the innovations that emerge with the support of Unreal Engine generated content.”

“Every day since we launched the Looking Glass in 2018, more and more engineers and designers would reach out and ask when we would support Unreal Engine,” adds Shawn Frayne, CEO & co-founder of Looking Glass Factory. “That’s why we’re so excited to announce the UE4 plugin for the Looking Glass today. Now studios around the world can make holographic experiences that go beyond anything ever seen before.”

Looking Glass Factory's holographic displays start from $599 USD for the 8.9″ model, with 15.6″ and 8K displays also available. They allow multiple people to view content at once thanks to light field technology generating 45 distinct, simultaneous perspectives. For further updates on Looking Glass Factory, keep reading VRFocus.

‘HOLOSCOPE’ Headset Claims to Solve AR Display Hurdle with True Holography

Holo-this, holo-that. Holograms are so bamboozling that the term often gets used colloquially to mean 'fancy-looking 3D image', but holograms are actually a very specific and interesting method for capturing light field scenes, one with some real advantages over other methods of displaying 3D imagery. RealView claims to be using real holography to solve a major problem inherent to the AR and VR headsets of today: the vergence-accommodation conflict. Our favorite holo-skeptic, Oliver Kreylos, examines what we know about the company's approach so far.


Guest Article by Dr. Oliver Kreylos

Oliver is a researcher with the UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES). He has been developing virtual reality as a tool for scientific discovery since 1998, and is the creator of the open-source Vrui VR toolkit. He frequents reddit as /u/Doc_Ok, tweets as @okreylos, and blogs about VR-related topics at Doc-Ok.org.


RealView recently announced plans to turn their previous desktop holographic display tech into the HOLOSCOPE augmented reality headset. This new headset is similar to Magic Leap‘s AR efforts in two big ways: one, it aims to address the issue of vergence-accommodation conflict inherent in current VR headsets such as Oculus Rift or Vive, and AR headsets such as Microsoft’s HoloLens; and two, we know almost no details about it. Here they explain vergence-accommodation conflict:

Note that there is a mistake around the 1:00 minute mark: while it is true that the image will be blurry, it will only split if the headset is not configured correctly. Specifically, that will not happen with HoloLens when the viewer’s inter-pupillary distance is dialed in correctly.

Unlike pretty much everybody else using the holo- prefix or throwing the term “hologram” around, RealView vehemently claims their display is based on honest-to-goodness real interference-pattern based holograms, of the computer-generated variety. To get this out of the way: yes, that stuff actually exists. Here is a Nature article about the HoloVideo system created at MIT Media Lab.

The remaining questions are how exactly RealView creates these holograms, and how well a display based on holograms will work in practice. Unfortunately, due to the lack of known details, we can only speculate. And speculate I will. As a starting point, here is a demo video, allegedly shot through the display and without any special effects:

I say allegedly, but I do believe this to be true. The resolution is surprisingly high and quality is surprisingly good, but the degree of transparency in the virtual object (note the fingers shining through) is consistent with real holograms (which only add to the light from the real environment shining through the display’s visor).

There is one peculiar thing I noticed on RealView’s web site and videos: the phrase “multiple or dynamic focal planes.” This seems odd in the context of real holograms, which, being real three-dimensional images, don’t really have focal planes. Digging a little deeper, there is a possible explanation. According to the Wikipedia entry for computer-generated holography, one of the simpler algorithms to generate the required interference patterns, Fourier transform, is only able to create holograms of 2D images. Another method, point source holograms, can create holograms of arbitrary 3D objects, but has much higher computational complexity. Maybe RealView does not directly create 3D holograms, but instead projects slices of virtual 3D objects onto a set of image planes at different depths, creates interference patterns for the resulting 2D images using Fourier transform, and then composes the partial holograms into a multi-plane hologram. I want to reiterate that this is mere speculation.

This would literally create multiple focal planes, allow the creation of dynamic focal planes depending on application or interaction needs, and could potentially explain both the odd language and the high quality of the holograms in the above video. The primary downside of slice-based holograms would be motion parallax: in a desktop system, the illusion of a solid object would break down as the viewer moves laterally relative to the holographic screen. Fortunately, in head-mounted displays the screen is bolted to the viewer's head, solving the problem.
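To make the slice-based speculation concrete, here's a toy multi-plane hologram construction in numpy: each 2D slice is numerically back-propagated from its own depth to the hologram plane and the complex fields are summed, so refocusing the result at either depth brings the matching slice into focus. This is a textbook angular-spectrum construction, not RealView's (unpublished) method:

```python
import numpy as np

# Toy multi-plane hologram: back-propagate each 2D slice from its own
# depth to the hologram plane and sum the complex fields. Textbook
# angular-spectrum construction; NOT RealView's (unpublished) method.

N, pitch, wl = 512, 8e-6, 532e-9  # pixels, pixel pitch (m), wavelength (m)
fx = np.fft.fftfreq(N, d=pitch)
FX, FY = np.meshgrid(fx, fx)

def propagate(field, z):
    """Angular-spectrum propagation of `field` over a distance z (metres)."""
    H = np.exp(1j * 2 * np.pi * z * np.sqrt((1 / wl) ** 2 - FX**2 - FY**2 + 0j))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Two toy slices at different depths: a square at 10cm, a bar at 20cm.
near, far = np.zeros((N, N)), np.zeros((N, N))
near[200:312, 200:312] = 1.0
far[240:272, 100:412] = 1.0

hologram = propagate(near, -0.10) + propagate(far, -0.20)  # summed back-propagated fields

# Refocusing the hologram at each depth brings the matching slice into focus.
refocus_10cm = np.abs(propagate(hologram, 0.10))
refocus_20cm = np.abs(propagate(hologram, 0.20))
```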


So while RealView's underlying technology appears legit, it is unknown how close they are to a real product. The device used to shoot the above video is never shown or seen, and a picture from the web site's medical section shows a large apparatus that is decidedly not head-mounted. I believe all other product pictures on the web site to be concept renders, some of them appearing to be (poorly) 'shopped stock photos. There are no details on resolution, frame rate, brightness or other image specs, and any mention of head tracking is suspiciously absent. Even real holograms need head tracking to work if the holographic screen is moving through space by virtue of being attached to a person's head. Also, the web site provides no details on the special scanners that are required for real-time direct in-your-hand interaction.

Finally, there is no mention of field of view. As HoloLens demonstrates, field of view is important for AR, and difficult to achieve. Maybe this photo from RealView’s web site is a veiled indication of FoV:

I'm just kidding, don't be mad.

In conclusion, while we know next to nothing definitive about this potential product, computer-generated holography is a thing that really exists, and AR displays based on it could be contenders. Details remain to be seen, but any advancements to computer-generated holography would be highly welcome.
