‘Digital Lens’ Plugin for Eye-tracking Headsets Improves Visual Clarity & Reduces Pupil Swim

Imaging company Almalence has released a trial plugin for its Digital Lens technology which makes use of eye-tracking to purportedly increase the resolving power and clarity of XR headsets.

Almalence argues that the lenses on most XR headsets today aren’t being used to their fullest potential. By taking advantage of eye-tracking and smarter calibration, the company says its image pre-processing technology can actually increase the resolving power of a headset, including expanding the ‘sweet spot’ (the part of the lens with the highest visual fidelity).

The company has released a trial version of its technology through a plugin that works with Pico Neo 3 Pro Eye, HP Reverb G2 Omnicept, and HTC Vive Pro Eye. The plugin works with OpenXR-compatible content, and even allows users to switch back and forth between each headset’s built-in image processing and the Almalence Digital Lens processing.

Based on through-the-lens demonstrations by the company, the technology does objectively increase the resolving power of the headsets. The company focuses on doing more advanced pre-processing to account for artifacts introduced by the lens, like chromatic aberration and image distortion. In essence the software increases the sharpness of the image by making the light passing through the lens land more precisely where it’s supposed to.
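To make that concrete, here is a minimal sketch of how per-channel pre-distortion generally works. This is illustrative only, not Almalence’s actual pipeline; every name and coefficient is a hypothetical placeholder, and the pupil_uv parameter anticipates the eye-tracking discussed below.

import numpy as np

# Per-channel inverse-distortion coefficients: red, green, and blue get
# their own models because the lens refracts each wavelength slightly
# differently (the cause of chromatic aberration).
K_INV = {"r": (-0.22, 0.04), "g": (-0.20, 0.03), "b": (-0.18, 0.02)}

def predistort_uv(u, v, k1, k2, pupil_uv=(0.0, 0.0)):
    """Map an output texture coordinate to the source coordinate to
    sample, using a simple polynomial radial model; pupil_uv shifts the
    distortion center toward the tracked pupil position."""
    du, dv = u - 0.5 - pupil_uv[0], v - 0.5 - pupil_uv[1]
    r2 = du * du + dv * dv
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return 0.5 + du * scale, 0.5 + dv * scale

def resolve_pixel(image, u, v, pupil_uv=(0.0, 0.0)):
    """Sample each color channel at its own pre-distorted coordinate so
    the channels re-converge after passing through the physical lens."""
    h, w, _ = image.shape
    sample = []
    for ch, (k1, k2) in enumerate(K_INV.values()):
        su, sv = predistort_uv(u, v, k1, k2, pupil_uv)
        x = min(max(int(su * (w - 1)), 0), w - 1)
        y = min(max(int(sv * (h - 1)), 0), h - 1)
        sample.append(image[y, x, ch])
    return np.array(sample)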

Almalence has shared heat maps comparing the changes in visual quality with and without its image technology, along with a broader explanation of how it works.

Another big advantage over the status quo, Almalence says, is that the Digital Lens tech uses eye-tracking to perform these corrections in real-time, meaning that as you move your eyes around the scene (and off-axis from the center of the lens), the corrections are updated to account for the new angles. This can expand the ‘sweet spot’ of the lens and reduce ‘pupil swim’ by making adjustments to account for the position of the pupil relative to the center of the lens. This video demonstrates the pupil swim correction:

The plugin, which anyone can use until January 2024, aims to demonstrate the company’s claims. Ultimately it appears the company wants to license its technology to headset makers to improve image quality out of the box.

Hands-on: CREAL’s Light-field Display Brings a New Layer of Immersion to AR

More than four years after I first caught wind of their tech, CREAL’s light-field display continues to be one of the most interesting and promising solutions for bringing light-fields to immersive headsets. At AWE 2023 I got to check out the company’s latest tech and saw firsthand what light-fields mean for immersion in AR headsets.

More Than One Way to Focus

So first, a quick recap. A light-field is a fundamentally different way of showing light to your eyes compared to the typical displays used in most headsets today. The key difference is about how your eyes can focus on the virtual scene.

Your eyes have two focus methods. The one most people are familiar with is vergence (the mechanism behind stereoscopy), where both eyes point at the same object to bring overlapping views of that object into focus. This is also what makes things look ‘3D’ to us.

But each individual eye is also capable of focusing in a different way, by bending the lens of the eye to focus on objects at different distances—the same way that a camera with only one lens focuses. This is called accommodation.

Vergence-Accommodation Conflict

Most XR headsets today support vergence (stereoscopic focus), but not accommodation (single-eye focus). You may have heard this called the Vergence-Accommodation Conflict, known in the industry as ‘VAC’ because it’s a pervasive challenge for immersive displays.

The reason for the ‘conflict’ is that normally the vergence and accommodation of your eyes work in tandem to achieve optimal focus on the thing you want to look at. But in a headset that supports vergence but not accommodation, your visual system has to decouple these normally synchronized functions.
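To put rough numbers on it (mine, for illustration): focal demand is measured in diopters, the reciprocal of distance in meters, and many headsets fix their focal distance somewhere around 1.3 m.

\[
\underbrace{\tfrac{1}{0.3\,\mathrm{m}} \approx 3.3\,\mathrm{D}}_{\text{vergence cue: object held at 30 cm}}
\qquad \text{vs.} \qquad
\underbrace{\tfrac{1}{1.3\,\mathrm{m}} \approx 0.77\,\mathrm{D}}_{\text{fixed focus of the display optics}}
\]

That gap of roughly 2.5 D between where your eyes converge and where they must focus is the conflict.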

It might not be something you ‘feel’ but it’s the reason why in a headset it’s hard to focus on things very near to you—especially objects in your hands that you want to inspect up close.

The conflict between vergence and accommodation isn’t just potentially uncomfortable for your eyes; surprisingly, it can also rob the scene of immersion.

Creal’s Solution

And this is where we get back to Creal, a company that wants to solve the Vergence-Accommodation Conflict with a light-field display. Light-field displays structure light in the same way that we see it in the real world, allowing both of the focus functions of the eyes—vergence and accommodation—to work in tandem as they normally do.

At AWE 2023 this week, I got to check out the company’s latest light-field display tech, and came away with an added sense of immersion that I haven’t felt in any other AR headset to date.

I’ve seen Creal’s bench-top demos before, which show static floating imagery through the lens to a single eye, demonstrating that you can indeed focus (accommodate) at different depths. But you won’t really see the magic until you see a light-field with both eyes and head-tracking. That’s exactly what I got to do this week at AWE.

Photo by Road to VR

On an admittedly bulky proof-of-concept AR headset, I got to see the company’s light-field display in its natural habitat—floating immersively in front of me. What really impressed me was when I held my hand out and a little virtual turtle came floating over to the palm of my hand. Even though it was semi-transparent, and not exceptionally high resolution or accurately colored, it felt… weirdly real.

I’ve seen all kinds of immersive XR experiences over the years, and holding something in your hand sounds like a banal demo at this point. But there was just something about the way this little turtle looked—thanks to the fact that my eyes could focus on it in the same way they would in the real world—that made it feel more real than anything I’ve felt in other headsets. Like it was really there in my hand.

Photo by Road to VR

The trick is that, thanks to the light-field, when I focused my eyes on the turtle in my hand, both the turtle (virtual) and my hand (real) were each in proper focus—something that isn’t possible with conventional displays—making both my hand and the turtle feel more like they were inhabiting the same space right in front of me.

It’s frustratingly impossible to explain exactly how it appeared via text alone; this video from Creal shot through-the-lens gives some idea of what I saw, but can’t quite show how it adds immersion over other AR headsets:

It’s a subtle thing, and such added immersion probably only meaningfully impacts objects within arm’s reach or closer—but then again, that’s the distance at which things have the potential to feel most real to us, because they’re in our carefully watched personal space.

Digital Prescriptions

Beyond adding a new layer of visual immersion, light-field displays stand to solve another key problem: vision correction. Most XR headsets today don’t support any kind of prescription vision correction, which means that perhaps more than half of the population must either wear their corrective lenses while using these devices, buy some kind of clip-on lens, or just suffer through a blurry image.

But the nature of light-fields means you can apply a ‘digital prescription’ to the virtual content that exactly matches the user’s corrective prescription. And because it’s digital, this can be done on-the-fly, meaning the same headset could change its corrective setting from one user to the next. Doing so means the focus of the virtual image can match the real-world image for users with or without glasses.
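Here is a minimal sketch of the arithmetic involved, using standard optometry units (depths in meters, prescriptions in diopters); the function name and sign convention are my assumptions for illustration, not CREAL’s API.

def presented_focal_demand(depth_m: float, prescription_d: float) -> float:
    """Focal demand (in diopters) a light-field display could present
    so that a user with the given spherical prescription sees an object
    rendered at depth_m in sharp focus without their glasses."""
    demand_d = 1.0 / depth_m          # demand a normally-sighted eye would see
    return demand_d - prescription_d  # shift by the user's correction

# e.g. an object rendered at 2 m for a -2.0 D (myopic) user:
# 1/2 - (-2.0) = 2.5 D, i.e. the object is presented at 1/2.5 = 0.4 m
# of optical depth, where this user's unaided eye can actually focus.
print(presented_focal_demand(2.0, -2.0))  # 2.5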


Samsung to Acquire AR/VR Microdisplay Company eMagin for $218M

eMagin, the US-based developer and manufacturer of OLED microdisplays for AR/VR headsets, announced a merger agreement with Samsung Display, a subsidiary of the Korean tech giant.

The company announced in a press statement that Samsung will acquire all outstanding shares of eMagin common stock on a fully diluted basis for $2.08 per share in cash, totaling approximately $218 million.

Founded in 2001, eMagin has created head-mounted displays to showcase its OLED technology since the release of the Z800, which launched in mid-2005. Since then, the company has focused on creating VR headset prototypes to further showcase its high-density OLED microdisplays, while also providing displays for integration into aircraft helmets, heads-up display systems, AR/VR headsets, thermal scopes, night vision goggles, and future weapon systems.

“This agreement is a validation of our technical achievements to date including our proprietary direct patterning (dPd) technology, provides a significant premium for our shareholders, and represents a win for our customers and employees,” said Andrew G. Sculley, eMagin’s CEO. “By teaming with Samsung Display, we will be able to achieve the full potential of our next-generation microdisplay technology with a partner that can provide the resources and expertise we will need to scale production. Moreover, our customers will benefit from resulting improvements to our production capabilities in terms of yield, efficiency, and quality control.”

The merger will very likely allow Samsung to exclusively manufacture micro-OLED displays using eMagin’s direct patterning display (dPd) technology, which boasts higher efficiency and brightness because its displays use direct RGB emitters rather than the white OLED with an RGB color filter found in traditional microdisplays.

The transaction is expected to close in the second half of 2023, after which eMagin will continue to maintain its operations and facilities in Hopewell Junction, NY. The merger agreement has received unanimous approval from eMagin’s Board of Directors, and stockholders holding around 98% of eMagin’s total voting power have committed to voting in favor of the transaction.

Smart Contact Lens Company Mojo Vision Raises $22M, Pivots to Micro-LED Displays for XR & More

Mojo Vision, a company once noted for its work on smart contact lenses, has raised $22.4 million in a new Series A investment round, which it will use to pivot to developing and commercializing micro-LED display technology for consumer, enterprise, and government applications.

The funding round is led by existing investors NEA and Khosla Ventures, with participation from other investors including Dolby Family Ventures, Liberty Global Ventures, Fusion Fund, Drew Perkins, Open Field Capital, and Edge.

The new Series A comes months after the company was forced to put its smart contact lenses on hold, a move that also included a 75% reduction in the company’s workforce.

Prior to the pivot, the company had amassed $205 million in outside investment, with its most recent round, in January 2022, bringing in $45 million.

Its new focus is on high-performance micro-LED displays for AR/VR, automotive, light-field, large-format, and other demanding display applications. Mojo’s prototype smart contacts made use of its own in-house displays, which at the time included a monochrome display capable of over 14,000 pixels per inch (ppi).

Now the company is developing its own High Performance Quantum Dot (HPQD) technology to make a “very small, very bright, very efficient RGB pixel,” the company says in a press statement.

The company is boasting a number of advances in its proprietary technology, including dynamic displays with up to 28,000ppi, efficient blue micro-LED devices at sub-μm scale, high efficiency quantum dot ink for red and green, high brightness at 1M+ nits, and a display system that incorporates an optimized CMOS backplane, wafer-to-wafer bonding, and custom micro-lens optics.

Mojo Vision’s new CEO, Dr. Nikhil Balram, is said to bring semiconductor and display technology expertise to the company:

“The market opportunity in the display industry is big – over $100 billion. Sometimes in order to do something very big, you have to start very small. That is exactly what we are doing at Mojo,” said Balram. “We started by developing the world’s smallest, densest dynamic micro-LED display, and now we are applying that innovation to power the next generation of displays. Mojo is combining breakthrough technology, leading display and semiconductor expertise, and an advanced manufacturing process to commercialize micro-LEDs for the most demanding hardware applications.”

“This round of funding will enable us to deliver our breakthrough monolithic micro-LED technology to customers and help bring high-performance micro-LEDs to market,” concluded Balram.

Display Maker Demonstrates Flagship OLED VR Display & Pancake Optics, Its Best Yet

Display manufacturer Kopin recently demonstrated its latest VR display and pancake optic, which together promise higher resolution and lower cost for future VR headsets.

Most modern VR headsets take on the ‘box on your face’ form-factor because of a simple display architecture which necessitates a certain distance between the display and the lens. In the effort to make VR headsets more compact in the near-term, so-called ‘pancake optics’ are emerging as a leading candidate. These more complex optics reduce the distance required between the display and the lens.

Why Are Today’s Headsets So Big?

Photo by Road to VR

It’s natural to wonder why even the latest VR headsets are essentially just as bulky as the first generation launched back in 2016. The answer is simple: optics. Unfortunately the solution is not so simple.

Every consumer VR headset on the market uses effectively the same optical pipeline: a macro display behind a simple lens. The lens is there to focus the light from the display into your eye. But in order for that to happen the lens needs to be a few inches from the display, otherwise it doesn’t have enough focusing power to focus the light into your eye.
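As a rough worked example (my numbers, not from any particular headset): treat the lens as a thin lens of focal length f ≈ 40 mm and place the virtual image at a comfortable 1.3 m. The thin-lens equation then puts the display just inside the focal length:

\[
\frac{1}{d_o} \;=\; \frac{1}{f} - \frac{1}{d_i} \;=\; \frac{1}{40\,\mathrm{mm}} + \frac{1}{1300\,\mathrm{mm}}
\quad\Rightarrow\quad d_o \approx 38.8\,\mathrm{mm}
\]

(using the convention that the virtual image distance is negative, d_i = −1300 mm). That is nearly four centimeters of display-to-lens distance before you even add eye relief.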

That necessary distance between the display and the lens is the reason why every headset out there looks like a box on your face. The approach is still used today because the lenses and the displays are known quantities; they’re cheap & simple, and although bulky, they achieve a wide field-of-view and high resolution.

Many solutions have been proposed for making VR headsets smaller, and just about all of them include the use of novel displays and lenses.

Pancake Optics (AKA Folded Optics)

What are pancake optics? It’s not quite what it sounds like, but once you understand it, you’d be hard pressed to come up with a better name.

While the simple lenses in today’s VR headsets must be a certain distance from the display in order to focus the light into your eye, the concept of pancake optics proposes ‘folding’ that distance over on itself, such that the light still traverses the same distance necessary for focusing, but its path is folded into a more compact area.

You can think of it like a piece of paper with an arbitrary length. When you fold the paper in half, the paper itself is still just as long as when you started, but its length occupies less space because you folded it over on itself.
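As a rough worked example (my numbers, assuming the common design in which light crosses the lens-to-display gap three times before exiting):

\[
d_{\mathrm{gap}} \;\approx\; \frac{d_{\mathrm{optical}}}{3}
\qquad\Rightarrow\qquad
d_{\mathrm{optical}} = 45\,\mathrm{mm} \;\rightarrow\; d_{\mathrm{gap}} \approx 15\,\mathrm{mm}
\]

which is in the neighborhood of the display-to-lens distance Kopin quotes for its P95 optic below.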

But how the hell do you do that with light? Polarization is the key.

Image courtesy Proof of Concept Engineering

It turns out that beams of light have an ‘orientation’ which is referred to as polarization. Normally the orientation of light beams is random, but you can use a polarizer to only let light of a specific orientation pass through. You can think of a polarizer like the coin-slot on a vending machine: it will only accept coins in one orientation.

Using polarization, it’s possible to bounce light back and forth multiple times along an optical path before eventually letting it out and into the wearer’s eye. This approach, known as pancake or folded optics, allows the lens and the display to move much closer together, resulting in a more compact headset.
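As a sanity check on the cost of those bounces, here is a simplified intensity bookkeeping of the fold, assuming the common half-mirror pancake design (idealized, lossless coatings; the quarter-wave plates that flip polarization handedness between bounces are implied rather than modeled):

def pancake_throughput(bs_reflectance: float = 0.5) -> float:
    intensity = 1.0
    # Pass 1: display light transmits through the 50/50 beamsplitter.
    intensity *= (1.0 - bs_reflectance)
    # The reflective polarizer rejects it (wrong handedness) and sends
    # it back; pass 2: it reflects off the beamsplitter toward the eye.
    intensity *= bs_reflectance
    # After another handedness flip, the reflective polarizer now lets
    # the light through to the eye.
    return intensity

print(pancake_throughput())  # 0.25 -> at most ~25% of the light exits

This efficiency penalty comes up again later in the article.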

Kopin is an electronics manufacturer best known for its microdisplays. In recent years the company has been eyeing the emerging XR industry as a viable market for its wares. To that end, the company has been steadily at work creating VR displays and optics that it hopes headset makers will want to snatch up.

At AWE 2022 last month, the company demonstrated its latest work on that front with a new plastic pancake optic and flagship VR display.

Kopin’s P95 pancake optic has just a 17mm distance between the display and lens, along with a 95° field-of-view. Furthermore, it differentiates itself as being an all-plastic optic, which makes it cheaper, lighter, more durable, and more flexible than comparable glass optics. The company says its secret sauce is being able to make plastic pancake optics that are as optically performant as their glass counterparts.

Photo by Road to VR

At AWE, I got to peek through the Kopin P95 optic. Inside I saw a sharp image with seemingly quite good edge-to-edge clarity. It’s tough to formulate a firm assessment of how it compares to contemporary headsets, as my understanding is that the test pattern being shown had no geometric or color corrections applied, nor was it fully calibrated.

You’ll notice that the P95 is a non-Fresnel optic which should mean it won’t suffer from the kind of ‘god-rays’ and glare that almost every contemporary VR headset exhibits. Granted, without seeing dynamic content it’s tough to know whether or not the multi-element pancake optic introduces any of its own visual artifacts.

Even though the test pattern wasn’t calibrated, it does reveal the retina resolution of the underlying display—Kopin’s flagship ‘Lightning’ display for VR devices.

Photo by Road to VR

This little beauty is a 1.3″ OLED display with a 2,560 × 2,560 resolution running up to 120Hz. Kopin says the display has 10-bit color, making it viable for HDR.

Photo by Road to VR

Combined, the P95 pancake optic and the Lightning display appear to make a viable, retina-resolution, compact display architecture for VR headsets. But it isn’t necessarily a shoo-in.

For one, the 95° field-of-view is just barely meeting par. Ostensibly Kopin will need to grow its 1.3″ Lightning display larger if it wants to meet or exceed what’s offered in today’s VR headsets.

Further, the company wasn’t prepared to divulge any info on the brightness of the display or the efficiency of the pancake lens—both of which are key factors for use in VR headsets.

Because pancake lenses use polarized light and bounce that light around a few times, they always end up being less efficient—meaning more brightness on the input to get the same level of brightness output. That typically means more heat and more power consumption, adding to the tradeoffs that would be required if building a headset with this display architecture.
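Using the idealized 25% ceiling from the sketch earlier (real designs lose somewhat more), the arithmetic is stark:

\[
L_{\mathrm{display}} \;\ge\; \frac{L_{\mathrm{eye}}}{\eta} \;=\; \frac{100\ \mathrm{nits}}{0.25} \;=\; 400\ \mathrm{nits}
\]

so delivering even 100 nits to the eye demands a display driven at 400 nits or more.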

Kopin has been touting its displays and optics as a solution for VR headsets for several years at this point, but at least in the consumer & enterprise space it doesn’t appear to have found traction just yet. It’s not entirely clear what’s holding the company back from breaking into the VR space, but it likely comes down to the price or the performance of the offerings.

That said, Kopin has been steadily moving toward the form-factor, resolution, and field-of-view the VR industry has been hoping for, so perhaps the P95 optic and latest Lightning display will be the point at which the company starts turning heads in the VR space.

Meta Research Explores a New Solution to One of VR’s Biggest Display Challenges

New research from Kent State University and Meta Reality Labs has demonstrated large dynamic focus liquid crystal lenses which could be used to create varifocal VR headsets.

Vergence-Accommodation Conflict in a Nutshell

In the VR R&D space, one of the hot topics is finding a practical solution for the so-called vergence-accommodation conflict (VAC). All consumer VR headsets on the market to date render an image using stereoscopy, which creates 3D imagery that supports the vergence reflex of a pair of eyes (when they converge on objects to form a stereo image), but not the accommodation reflex of an individual eye (when the lens of the eye changes shape to focus light at different depths).

In the real world, these two reflexes always work in tandem, but in VR they become disconnected: the eyes continue to converge where needed, but their accommodation remains static because the light is all coming from the same distance (the display). Researchers in the field say VAC can cause eye strain, make it difficult to focus on close imagery, and may even limit visual immersion.

Seeking a Solution

There have been plenty of experiments with technologies that could be used in varifocal headsets that correctly support both vergence & accommodation, for instance holographic displays and multiple focal planes. But it seems none have cracked the code on a practical, cost-effective, mass-producible solution to VAC.

Another potential solution to VAC is dynamic focus liquid crystal (LC) lenses, which can change their focal length as their voltage is adjusted. According to a Kent State University graduate student project with funding and participation from Meta Reality Labs, such lenses have been demonstrated previously, but mostly at very small sizes, because switching time (how quickly focus can be changed) grows significantly as size increases.

Image courtesy Bhowmick et al., SID Display Week

To reach the size of dynamic focus lens that you’d want if you were to build it into a contemporary VR headset—while keeping switching time low enough—the researchers have devised a large dynamic focus LC lens with a series of ‘phase resets’, which they compare to the rings used in a Fresnel lens. Instead of segmenting the lens in order to reduce its width (as with Fresnel), the phase reset segments are powered separately from one another so the liquid crystals within each segment can still switch quickly enough to be practical for use in a varifocal headset.
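For intuition, here is the textbook idealization (not necessarily the paper’s exact electrode design): an ideal thin lens of focal length f imposes a parabolic phase profile on light of wavelength λ, and the ‘phase resets’ wrap that profile modulo 2π:

\[
\phi(r) \;=\; -\frac{\pi r^{2}}{\lambda f} \pmod{2\pi}
\]

Each wrap point marks a segment boundary, and because every segment spans only one wave of phase and is driven independently, its liquid crystal can switch about as fast as a small lens would.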

A Large, Experimental Lens

In new research presented at the SID Display Week 2022 conference, the researchers characterized a 5cm dynamic focus LC lens to measure its capabilities and identify strengths and weaknesses.

On the ‘strengths’ side, the researchers show the dynamic focus lens achieves high image quality toward the center of the lens while supporting a dynamic focus range from -0.80 D to +0.80 D and a sub-500ms switching speed.

For reference, in a 90Hz headset a new frame is shown to the user every 11ms (90 times per second), while a 500ms switching time is the equivalent of 2Hz (twice per second). While that’s much slower than the framerate of the headset, it may be fast enough considering the rate at which the eye itself can adjust to a new focal distance. Further, the researchers say the switching time can be reduced by stacking multiple lenses.

Image courtesy Bhowmick et al., SID Display Week

On the ‘weaknesses’ side, the researchers find that the dynamic focus LC lens suffers from a reduction in image quality as the view approaches the edge of the lens due to the phase reset segments—similar in concept to the light scattering due to the ridges in a Fresnel lens. The presented work also explores a masking technique designed to reduce these artifacts.

Figures A–F are captures of images through the dynamic focus LC lens, increasingly off-axis from center, starting with 0° and going to 45° | Image courtesy Bhowmick et al., SID Display Week

Ultimately, the researchers conclude, the experimental dynamic focus LC lens offers “possibly acceptable [image quality] values […] within a gaze angle of about 30°,” which is fairly similar to the image quality falloff of many VR headsets with Fresnel optics today.

To actually build a varifocal headset from this technology, the researchers say the dynamic focus LC lens would be used in conjunction with a traditional lens to achieve the optical pipeline needed in a VR headset. Precise eye-tracking is also necessary so the system knows where the user is looking and thus how to adjust the focus of the lens correctly for that depth.

The work in this paper presents measurement methods and benchmarks showing the performance of the lens which future researchers can use to test their own work against or identify improvements that could be made to the demonstrated design.

The full paper has not yet been published, but it was presented by its lead author, Amit Kumar Bhowmick, at SID Display Week 2022; it further credits Afsoon Jamali, Douglas Bryant, Sandro Pintz, and Philip J Bos, across Kent State University and Meta Reality Labs.



New Video Shows Off CREAL’s Latest Foveated Light-field VR Headset

CREAL, a company building light-field display technology for AR and VR headsets, has revealed a new through-the-lens video showing off the performance of its latest VR headset prototype. The new video clearly demonstrates the ability to focus at arbitrary distances, as well as the high resolution of the foveated region. The company also says the rendering tech that powers the headset is “approaching the equivalent of [contemporary] VR headsets.”

Earlier this year Creal offered the first glimpse of AR and VR headset prototypes that are based on the company’s light-field displays.

Unlike the displays used in VR and AR headsets today, light-field displays generate an image that accurately represents how we see light from the real world. Specifically, light-field displays support both vergence and accommodation, the two focus mechanisms of the human visual system. Most headsets on the market today only support vergence (stereo overlap) but not accommodation (individual eye focus), which means the imagery is technically stuck at a fixed focal depth. With a light-field display you can focus at any depth, just like in the real world.

While Creal doesn’t plan to build its own headsets, the company has created prototypes to showcase its technology with the hopes that other companies will opt to incorporate it into their headsets.

CREAL’s VR headset prototype | Image courtesy CREAL

We’ve seen demonstrations of Creal’s tech before, but a newly published video really highlights the light-field display’s continuous focus and the foveated arrangement.

Creal’s prototype VR headset uses a foveated architecture (two overlapping displays per eye): a ‘near retina resolution’ light-field display which covers the central 30° of the field of view, and a larger, lower-resolution display (1,600 × 1,440, non-light-field) which fills the peripheral field of view out to 100°.

In the through-the-lens video we can clearly see the focus shifting from one part of the scene to another. Creal says the change in focus is happening entirely in the camera that’s capturing the scene. While some approaches to varifocal displays use eye-tracking to continuously adjust the display’s focal depth based on where the user is looking, a light-field has the depth of the scene ‘baked in’, which means the camera (just like your eye) is able to focus at any arbitrary depth without any eye-tracking trickery.

In the video we can also see that the central part of the display (the light-field portion) is quite sharp compared to the rest. Creal says this portion of the display is “now approaching retinal resolution,” and also running at 240Hz.

And while you might expect that rendering the views needed to power the headset’s displays would be very costly (largely due to the need to generate the light-field), the company says its rendering tech is steadily improving and “approaching the equivalent of classical stereo rendering of other VR headsets,” though we’re awaiting more specifics.

While Creal’s current VR headset prototype is very bulky, the company expects it will be able to further shrink its light-field display tech into something more reasonable by the end of 2022. The company is also adapting the tech for AR and believes it can be miniaturized to fit into compact AR glasses.


CREAL Reveals Its First Light-field AR & VR Headset Prototypes

Switzerland-based CREAL, which is developing a light-field display, has revealed its first prototype AR and VR headsets. The milestone marks ongoing progress in shrinking the once-bulky tech into something which can be worn on the head.

Compared to the displays used in VR and AR headsets today, light-field displays generate an image that accurately represents how we see light from the real world. Specifically, light-field displays support both vergence and accommodation, the two focus mechanisms of the human visual system. Creal and others say the advantage of such displays is more realistic and more comfortable visuals for XR headsets. For more on light-fields, expand our explainer below.

Light-fields are significant to AR and VR because they’re a genuine representation of how light exists in the real world, and how we perceive it. Unfortunately they’re difficult to capture or generate, and arguably even harder to display.

Every AR and VR headset on the market today uses some tricks to try to make our eyes interpret what we’re seeing as if it’s actually there in front of us. Most headsets are using basic stereoscopy and that’s about it—the 3D effect gives a sense of depth to what’s otherwise a scene projected onto a flat plane at a fixed focal length.

Such headsets support vergence (the movement of both eyes to fuse two images into one image with depth), but not accommodation (the dynamic focus of each individual eye). That means that while your eyes are constantly changing their vergence, the accommodation is stuck in one place. Normally these two eye functions work unconsciously in sync, hence the so-called ‘vergence-accommodation conflict’ when they don’t.

On more advanced headsets, ‘varifocal’ approaches dynamically shift the focal length based on where you’re looking (with eye-tracking). Magic Leap, for instance, supports two focal planes and jumps between them as needed. Oculus’ Half Dome prototypes do something similar, with support for a larger number of focal planes. Even so, these varifocal approaches still have some inherent issues that arise because they aren’t actually displaying light-fields.

While Creal has previously demonstrated its impressive light-field display technology, we’ve only ever seen it in large benchtop demos. Now the company has revealed its latest progress in shrinking the tech to fit into a head-mounted form factor. While its AR and VR prototypes are still fairly large, the company says it expects its tech to fit into yet smaller form factors by 2022.

Image courtesy CREAL

Creal says these prototype headsets are ‘evaluation units’ which the company is sending to potential partners to demonstrate its light-field display. The company’s goal is not to build its own headsets, but to supply its light-field display technology to other headset makers.

CREAL AR Light-field Prototype

Image courtesy CREAL

The Creal AR headset prototype has a resolution of 1,000 × 1,000 across a 60° field of view, according to the company, which also claims ‘unlimited’ depth-resolution (meaning continuous focal planes), with the caveat that it isn’t truly unlimited but that the steps between each focal depth are “much smaller than an eye can resolve.”

The Creal AR headset prototype is tethered and uses an Intel RealSense sensor for 6DOF tracking and Ultraleap for hand-tracking. Below you can see a through-the-lens demo showing the ability to focus at different depths.

While the Creal AR headset prototype is approaching the size of something like HoloLens, the company claims it will be able to fit its light-field tech into a sleek glasses form-factor by late 2022. Doing so will require moving to a foveated version of its display which would see the central 30° of the field of view occupied by the light-field, while the peripheral view would be filled with non-light field imagery out to 60° total, the company says.

Image courtesy CREAL

Creal is also expecting to reduce power consumption from the current 2W down to 0.5W for the glasses-sized version, while boosting the eye-box to 8mm.

CREAL VR Light-field Prototype

Image courtesy CREAL

With its VR headset, Creal says it’s already employing the foveated light-field approach, with a 1,000 × 1,000 resolution light-field covering the central 30° of the field of view, and a 1,600 × 1,440 non-light-field view to fill out to 100° total. Because the light-field area is only 30° across, the resulting resolution is 40 PPD, which is approaching the retina resolution threshold (roughly 60 PPD). Below you can see a through-the-lens video showing the headset’s ability to focus at any depth in the scene.

The Creal VR headset prototype is using an Intel RealSense sensor for 6DOF tracking and includes eye-tracking from Pupil Labs, though the company notes that eye-tracking isn’t necessary for the light-field functionality.

Image courtesy CREAL

As with its AR headset prototype, the bulky VR headset prototype is a step toward a more compact version which the company expects to have ready by late 2022. By that point the company expects to integrate custom 6DOF and eye-tracking hardware (which would help further reduce the headset’s size).

Image courtesy CREAL

– – — – –

With the company planning to use a foveated combination of light-field and non-light-field displays going forward, we’ll be especially interested to see how closely the two views manage to blend together.

Creal’s announcement of head-mounted light-field prototypes follows the company’s latest investment round of $7.2 million announced late last year.


Facebook Researchers Explore Mechanical Display Shifting to Reduce Screen Door Effect

Researchers from Facebook Reality Labs and the University of Arizona published new work exploring the use of high-speed mechanical display shifting to reduce the so-called screen-door-effect (SDE) of immersive displays. SDE is caused by unlit spaces between pixels leading to the immersion-reducing appearance of a ‘screen door’ between the viewer and the virtual world. The researchers experiment with rapidly and minutely shifting the entire display to cause the display’s pixels to fill in the gaps.

SDE has been one of the leading visual artifacts in modern VR headsets since the introduction of the Rift DK1 development kit in 2013. While SDE can be defeated with brute force by employing extremely high density displays—in which the unlit spaces between pixels are too small to be seen by the naked eye—most consumer VR headsets today still exhibit SDE (with the near exception of Reverb G2), hurting immersion and visual clarity.

A real example of the screen door effect | Image courtesy Facebook Reality Labs Research

Beyond ultra high pixel density, other methods have been employed to reduce SDE. For instance, some headsets choose a smaller field of view which reduces the apparent visibility of SDE. Other headsets use a diffuser film on the display to help blend the light from the pixels into the unlit spaces between them.

Another proposal is to rapidly and minutely shift the display such that nearby pixels fill in the unlit gaps. While this might seem like it would create the appearance of a dizzying jiggling display, it’s been demonstrated with other display technologies that moving a point of light (i.e. a pixel) quickly enough can create the appearance of a stable image.

Researchers Jilian Nguyen, Clinton Smith, Ziv Magoz, and Jasmine Sears from the University of Arizona and Facebook Reality Labs Research explored and experimented with the idea in a paper titled Screen door effect reduction using mechanical shifting for virtual reality displays.

Rather than building a VR headset with mechanical display shifting right out of the gate, the paper’s goal was to demonstrate and quantify the efficacy of the method.

Display Actuation and Modes


The display actuation mechanism | Image courtesy Facebook Reality Labs Research

The researchers designed a static platform with two piezoelectric actuators which, together, shift the display in a circular motion at 120Hz—in effect, causing each pixel to trace a 10µm circle 120 times per second. The size of the circle was picked based on the distance between the display’s pixels in order to optimally fill in the unlit spaces between pixels. The researchers call this circular path ‘Non-redundancy’ mode.

They also smartly utilized a 480Hz display, which allowed them to experiment with a more complex pixel shifting path which they called ‘Redundancy’ mode. This approach aimed to not only fill in the gaps between the pixels with some additional overlap, but to split the displayed frame into four sub-frames which are each uniquely shifted and displayed to account for the pixel movement. This means that when a pixel shifts to a location where it would fill an SDE gap, it shows the color that a pixel located at that position would have shown in the first place.

The two pixel movement modes addressed in the paper | Image courtesy Facebook Reality Labs Research
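Here is a minimal sketch of the ‘Redundancy’ mode logic as I read it (my reconstruction, not the paper’s code), under the parameters above: the display traces its circle once per 120Hz cycle while the 480Hz panel shows four sub-frames per cycle, each pre-shifted to counter the mechanical offset at its instant.

import numpy as np
from scipy.ndimage import shift

SUBFRAMES = 4  # 480Hz panel / 120Hz mechanical path

def subframe_offsets_px(radius_px: float = 0.5):
    """(dx, dy) mechanical display offsets, in pixel units, at each of
    the four sub-frame instants; ~0.5 px lands pixels in the gaps."""
    angles = 2.0 * np.pi * np.arange(SUBFRAMES) / SUBFRAMES
    return np.stack([radius_px * np.cos(angles),
                     radius_px * np.sin(angles)], axis=1)

def render_subframes(frame: np.ndarray):
    """Resample the source image opposite to each mechanical shift so a
    pixel landing in an SDE gap shows the color that belongs at that
    spot (the difference between Redundancy and Non-redundancy mode)."""
    return [shift(frame, shift=(-dy, -dx), order=1, mode='nearest')
            for dx, dy in subframe_offsets_px()]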

While the paper is limited to exploring these two pixel paths, the researchers say that others could be employed based on display characteristics.

“Pixel shifting is not limited to a circular shape. Indeed, an elliptical path or even a figure-eight path could be used by controlling the amplitude of each axis’ movement. Paths can be traced in many ways to explore screen door reduction,” the researchers wrote. “For the micro OLED display, a circular path was well-suited to the square pixel and sub-pixel layouts. This path is used to balance the length of the path with the fill factor, minimizing the speed the actuators must operate at.”

The display actuation platform for experimentation | Image courtesy Facebook Reality Labs Research

With the platform built and capable of shifting the display rapidly in the desired paths, the next step was to objectively quantify the amount of SDE reduction, which proved to be difficult.

Quantitative Measurement of Mechanical SDE Reduction

The authors first sought to objectively measure where each subpixel began and ended, but found that the resolution of the camera they employed for the task was not fine enough to clearly delineate the start and end of each subpixel, let alone the spaces between them.

Another approach to quantify SDE reduction was to measure the contrast ratio of a section of the display and compare it to when the screen actuation was on vs. off. Lower contrast would imply less SDE due to moving pixels filling in the unlit spaces and creating a more solid image. While the authors maintained that this measurement isn’t necessarily a reflection of the SDE reduction as the naked eye would see it, they believe it’s a meaningful quantitative measurement.

Contrast ratio reduction in both modes at various magnification levels | Image courtesy Facebook Reality Labs Research
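The measurement itself is simple; here is a minimal sketch of the contrast proxy as I understand it (my reconstruction, not the paper’s published code):

import numpy as np

def michelson_contrast(patch: np.ndarray) -> float:
    """Michelson contrast over a magnified capture of a display patch."""
    lo, hi = float(patch.min()), float(patch.max())
    return (hi - lo) / (hi + lo + 1e-12)

# A lower value with actuation on implies the moving pixels are filling
# the unlit gaps, e.g.:
#   michelson_contrast(capture_on) < michelson_contrast(capture_off)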

Qualitative Assessments of Mechanical SDE Reduction

Beyond their efforts to quantitatively measure the SDE reduction, the researchers also wanted to look qualitatively at the change. The clearest demonstration of the benefits came from looking at a natural photo with complex scenery.

Image courtesy Facebook Reality Labs Research

Here, the ‘Non-redundancy’ mode clearly reduced the SDE while apparently retaining equal sharpness. Impressively, the ‘Redundancy’ mode not only reduced SDE, but even appears to noticeably sharpen the image (note the zoomed-in sections showing details in the rear of the car).

The image sharpening of the ‘Redundancy’ mode is an interesting additional benefit because it actually increases the resolving power of the display without increasing the number of pixels.

Based on their experimentation the researchers also suggest a user-study approach for future investigations which could be used to quantify any SDE reduction method, whether that be mechanical shifting, diffusers, or different sub-pixel layouts and optics.

The researchers conclude:

In using mechanical shifting of pixels for screen door reduction, the dead space of the display needs to be characterized to define the path shape and shift distance required of the mechanical shifting system. With appropriate application of mechanical motion, SDE can be qualitatively reduced. A promising method of screen door visibility quantification uses natural scenes and human subjects to determine the magnification at which SDE and screen door reduction artifacts become noticeable.

– – — – –

While the brute force approach of defeating SDE with ultra high pixel density displays will likely come to fruition, a mechanical approach to SDE reduction could allow headset makers to ‘get more for less’ by boosting the effective resolution of their display while reducing SDE. This could also have knock-on bonuses to display design, as display makers would be less constrained by the need to achieve exceptionally high fill factors.


CREAL Raises $7.2 Million to Bring its Light-field Display to AR Glasses

Switzerland-based CREAL is developing a light-field display which it hopes to bring to VR headsets and eventually AR glasses. In November the company raised CHF 6.5 million (~$7.2 million) in a Series A+ investment round to bring on new hires and continue miniaturizing the company’s light-field tech.

Creal says it closed its Series A+ investment round in mid-November, raising CHF 6.5 million (~$7.2 million) led by Swisscom Ventures with participation by existing investors Investiere, DAA Capital Partners, and Ariel Luedi. The new funding marks ~$15.5 million raised by the company thus far.

Over the last few years we’ve seen Creal make progress in shrinking its novel light-field display with the hopes of fitting it into AR glasses. Compared to the displays used in VR and AR headsets today, light-field displays generate an image that accurately represents how we see the real world. Specifically, light-field displays support both vergence and accommodation, the two focus mechanisms of the human visual system. Creal and others say the advantage of such displays is more realistic and more comfortable visuals for VR and AR headsets. For more on light-fields, see our explainer below.

Light-fields are significant to AR and VR because they’re a genuine representation of how light exists in the real world, and how we perceive it. Unfortunately they’re difficult to capture or generate, and arguably even harder to display.

Every AR and VR headset on the market today uses some tricks to try to make our eyes interpret what we’re seeing as if it’s actually there in front of us. Most headsets are using basic stereoscopy and that’s about it—the 3D effect gives a sense of depth to what’s otherwise a scene projected onto a flat plane at a fixed focal length.

Such headsets support vergence (the movement of both eyes to fuse two images into one image with depth), but not accommodation (the dynamic focus of each individual eye). That means that while your eyes are constantly changing their vergence, the accommodation is stuck in one place. Normally these two eye functions work unconsciously in sync, hence the so-called ‘vergence-accommodation conflict’ when they don’t.

On more advanced headsets, ‘varifocal’ approaches dynamically shift the focal length based on where you’re looking (with eye-tracking). Magic Leap, for instance, supports two focal planes and jumps between them as needed. Oculus’ Half Dome prototypes do something similar, with support for a larger number of focal planes. Even so, these varifocal approaches still have some inherent issues that arise because they aren’t actually displaying light-fields.

Having demonstrated the fundamentals of its light-field tech, Creal’s biggest challenge is miniaturizing it to fit comfortably into AR glasses while maintaining a wide enough field of view to remain useful. We saw progress on that front early this year at CES 2020, the last major conference before the pandemic cancelled the remainder for the year.

Through-the-lens: The accurate blur in the background is not generated, it is ‘real’, owed to the physics of light-fields. | Image courtesy CREAL

Creal co-founder Tomas Sluka tells Road to VR that this summer the company succeeded in bringing its prototype technology into a head-mounted form-factor with the creation of preliminary AR and VR headset dev kits.

Beyond ongoing development of the technology, a primary driver for the funding round was to pick up new hires who had entered the job market after Magic Leap’s precarious funding situation and the ousting of CEO Rony Abovitz earlier this year, Sluka said.

Image courtesy CREAL

CREAL doesn’t expect to bring its own headset to market, but is instead positioning itself to work with partners and eventually license its technology for use in their headsets. The company aims to build a “complete technology package for the next-generation Augmented Reality (AR) glasses,” which will likely take the form of a reference design for commercialization.
