Testing the Future of AR Optics with Avegant Light Fields

UploadVR paid a visit to the Avegant offices in California last week to try out its prototype light field display.

Avegant is best known in the VR/AR scene right now as the creators of the Glyph — a headset that puts users inside their own virtual movie theater. The Glyph arrived at an awkward stage for immersive tech and was quickly overshadowed by full VR headsets from Oculus, HTC and Sony.

Since releasing the Glyph, Avegant has been relatively quiet. Now, however, the company seems prepared to move into pioneering display technologies for augmented reality devices.

The term Avegant uses for its prototype displays is “light field,” which is a bit of a buzzword in the industry. As defined by Edward Tang, Avegant’s co-founder and CTO, a light field is “multiple planes of light coming into your eyes that can create what you would normally see in real life.”

Avegant is far from the first company to theorize about and attempt to execute this sort of optic, but it is angling to be the best at its creation and distribution. According to Tang, Avegant is working on ways to make light field displays not only functional, but affordable as well. Tang thinks that widespread commercial AR will be a total “non-starter” without light fields and hopes that, by making the technology affordable, Avegant can help fast-track AR’s path to commercial maturity.

During our demo at Avegant, the actual tracking of the headset (done with external motion cameras placed throughout the room) was not emphasized. It was clear that Avegant was prioritizing one thing over everything else: the display.

All the usual AR problems persisted in the Avegant demo, including a restricted field of view and sub-optimal positional tracking. However, the display itself was transformative. True to its word, Avegant has created a display capable of rendering multiple planes of focus with freakishly high resolution.

Photographers understand something called “depth of field,” and so do your eyes. Essentially, this is the recognition that not all objects in an image should be in full focus at any given time. Your eyes naturally focus on closer objects while blurring out others, and standard VR experiences can mimic this somewhat using software. Avegant’s solution, however, aims to create true depth of field for AR. Its light field displays allow your eye to shift focus between multiple virtual objects via a hardware solution, not a software illusion.
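
How does the “software illusion” work? Game engines typically fake depth of field by blurring each virtual object according to how far its depth sits from the viewer’s assumed focus distance, using the classic thin-lens circle-of-confusion formula. The snippet below is our own rough sketch with assumed eye-like numbers (17 mm focal length, 4 mm pupil), not anything Avegant has shared:

```python
def circle_of_confusion_m(obj_dist_m, focus_dist_m,
                          focal_len_m=0.017, aperture_m=0.004):
    """Diameter (m) of the blur spot for an object seen while focused elsewhere.

    The focal length and aperture roughly model a human eye; both values are
    illustrative assumptions, not measurements.
    """
    return (aperture_m * abs(obj_dist_m - focus_dist_m) / obj_dist_m
            * focal_len_m / (focus_dist_m - focal_len_m))

# Focused on a finger 0.2 m away, a wall 3 m behind it blurs noticeably;
# refocus on the wall and the finger blurs instead.
print(circle_of_confusion_m(obj_dist_m=3.2, focus_dist_m=0.2))
print(circle_of_confusion_m(obj_dist_m=0.2, focus_dist_m=3.2))
```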

I could feel my eyes working to refocus as I switched my attention between objects during my demo. That alone is a significant breakthrough for creating realism in augmented reality.

Tang describes what we saw as an “optics prototype only,” one that is only meant to show what Avegant’s new displays are capable of. The company declined to comment on what its final market strategy will be and whether or not it will continue making its own headsets or license these displays to other OEMs.

You can see our full impressions in the discussion below:

Tang made it clear the Avegant prototype is “in no way a commercial product” and that we may end up seeing Avegant light fields in a “variety of form factors.”

One of these form factors could be the separate “cosmetic prototype” of which we only got the briefest glimpse. This design suggested the displays could one day fit inside a much more ergonomic, and fashionable, headset.

Avegant Claims Newly Announced Display Tech is “a new method to create light fields”

Avegant, makers of the Glyph personal media HMD, are turning their attention to the AR space with what they say is a newly developed light field display for augmented reality, one that can display multiple objects at different focal planes simultaneously.

Most of today’s AR and VR headsets suffer from something called the vergence-accommodation conflict. In short, it’s an issue of biology meeting display technology, whereby a screen that’s just inches from our eyes sends all of its light into our eyes at the same angle (whereas normally the angle changes based on how far away an object is), causing the lens in our eye to focus (called accommodation) only on light coming from that one distance. This comes into conflict with vergence, which is the relative angle between our eyes when they rotate to fixate on the same object. In real life and in VR this angle is dynamic, and accommodation normally happens in our eyes automatically at the same time; in most AR and VR displays today, however, it can’t, because of the static angle of the incoming light.

For more detail, check out this primer:

Accommodation

Accommodation is the bending of the eye’s lens to focus light from objects at different depths. | Photo courtesy Pearson Scott Foresman

In the real world, to focus on a near object, the lens of your eye bends to focus the light from that object onto your retina, giving you a sharp view of the object. For an object that’s further away, the light is traveling at different angles into your eye and the lens again must bend to ensure the light is focused onto your retina. This is why, if you close one eye and focus on your finger a few inches from your face, the world behind your finger is blurry. Conversely, if you focus on the world behind your finger, your finger becomes blurry. This is called accommodation.
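
As a back-of-the-envelope illustration (our own, not from Avegant), the thin-lens equation 1/f = 1/d_object + 1/d_image captures this: the lens-to-retina distance is essentially fixed, so the eye has to change its own focal power as the object distance changes. The 17 mm retina distance below is an assumed textbook figure.

```python
# Thin-lens model of accommodation: 1/f = 1/d_object + 1/d_image.
# The lens-to-retina distance is fixed, so the eye must change its focal
# power (accommodate) whenever the object distance changes.
RETINA_DIST_M = 0.017  # assumed textbook value, for illustration only

def lens_power_diopters(object_dist_m):
    """Optical power (1/f, in diopters) needed to focus an object onto the retina."""
    return 1.0 / object_dist_m + 1.0 / RETINA_DIST_M

for d_m in (0.1, 0.25, 1.0, 6.0, 100.0):
    print(f"object at {d_m:6.2f} m -> lens power {lens_power_diopters(d_m):.1f} D")
```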

Vergence

Vergence is the rotation of each eye to overlap each individual view into one aligned image. | Photo courtesy Fred Hsu (CC BY-SA 3.0)

Then there’s vergence, which is when each of your eyes rotates inward to ‘converge’ the separate views from each eye into one overlapping image. For very distant objects, your eyes are nearly parallel, because the distance between them is so small in comparison to the distance of the object (meaning each eye sees a nearly identical portion of the object). For very near objects, your eyes must rotate sharply inward to converge the image. You can see this too with the finger trick from above; this time, using both eyes, hold your finger a few inches from your face and look at it. Notice that you see double images of objects far behind your finger. When you then look at those objects behind your finger, you see a double image of your finger instead.
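
The geometry here is just a triangle formed by your two pupils and the point you’re fixating, so the convergence angle falls off quickly with distance. Another small sketch of our own, assuming an average 63 mm interpupillary distance:

```python
import math

IPD_M = 0.063  # assumed average interpupillary distance (~63 mm)

def vergence_angle_deg(object_dist_m):
    """Angle between the two eyes' lines of sight when both fixate the same point."""
    return math.degrees(2.0 * math.atan((IPD_M / 2.0) / object_dist_m))

for d_m in (0.1, 0.25, 1.0, 6.0, 100.0):
    print(f"object at {d_m:6.2f} m -> eyes converge by {vergence_angle_deg(d_m):6.2f} deg")
```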

The Conflict

With precise enough instruments, you could use either vergence or accommodation to know exactly how far away an object is that a person is looking at. But the thing is, both accommodation and vergence happen in your eye together, automatically. And they don’t just happen at the same time; there’s a direct correlation between vergence and accommodation, such that for any given measurement of vergence, there’s a directly corresponding level of accommodation (and vice versa). Since you were a little baby, your brain and eyes have formed muscle memory to make these two things happen together, without thinking, any time you look at anything.

But when it comes to most of today’s AR and VR headsets, vergence and accommodation are out of sync due to inherent limitations of the optical design.

In a basic AR or VR headset, there’s a display (which is, let’s say, 3″ away from your eye) which shows the virtual scene and a lens which focuses the light from the display onto your eye (just like the lens in your eye would normally focus the light from the world onto your retina). But since the display is a static distance from your eye, the light coming from all objects shown on that display is coming from the same distance. So even if there’s a virtual mountain five miles away and a coffee cup on a table five inches away, the light from both objects enters the eye at the same angle (which means your accommodation—the bending of the lens in your eye—never changes).
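
To make that “static distance” point concrete, here’s a toy calculation (ours, with assumed optics rather than any particular headset’s specs): a simple magnifier-style eyepiece places the display’s virtual image at one fixed optical distance, regardless of what depth the rendered scene pretends to have.

```python
def virtual_image_dist_m(display_dist_m, lens_focal_len_m):
    """Distance of the virtual image formed when the display sits just inside the
    lens's focal length (magnitude form of the Gaussian lens formula)."""
    assert display_dist_m < lens_focal_len_m, "display must sit inside the focal length"
    return (display_dist_m * lens_focal_len_m) / (lens_focal_len_m - display_dist_m)

# Assumed example: a 40 mm focal-length lens with the panel 38 mm away puts the
# whole virtual scene about 0.76 m from the eye -- mountain and coffee cup alike.
print(virtual_image_dist_m(display_dist_m=0.038, lens_focal_len_m=0.040))
```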

That fixed focal distance comes into conflict with vergence in such headsets, which (because we can show a different image to each eye) is variable. Being able to adjust the image independently for each eye, such that our eyes need to converge on objects at different depths, is essentially what gives today’s AR and VR headsets stereoscopy. But the most realistic (and arguably most comfortable) display we could create would eliminate the vergence-accommodation issue and let the two work in sync, just like we’re used to in the real world.
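
Researchers usually quantify that conflict in diopters (1 divided by the distance in meters): vergence demand follows the rendered depth, while accommodation demand stays pinned to the headset’s single focal plane. A toy comparison of our own, assuming a 2 m focal plane:

```python
DISPLAY_FOCAL_PLANE_M = 2.0  # assumed fixed focal distance of a typical headset

def vac_mismatch_diopters(rendered_depth_m):
    """Gap between where vergence says to focus (the rendered depth) and where
    the fixed optics force accommodation (the display's single focal plane)."""
    vergence_demand = 1.0 / rendered_depth_m
    accommodation_demand = 1.0 / DISPLAY_FOCAL_PLANE_M
    return abs(vergence_demand - accommodation_demand)

for depth_m in (0.3, 0.5, 1.0, 2.0, 10.0):
    print(f"virtual object at {depth_m:4.1f} m -> mismatch {vac_mismatch_diopters(depth_m):.2f} D")
```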

Solving the vergence-accommodation conflict requires being able to change the angle of the incoming light (the same thing as changing the focus). That alone is not such a huge problem; after all, you could just move the display further from your eyes to change the angle. The big challenge is allowing not just a dynamic change in focus, but simultaneous focus: just as in the real world, you might be looking at a near and a far object at the same time, each with a different focus. Avegant claims its new light field display technology can do both dynamic focal plane adjustment and simultaneous focal plane display.

Avegant Light Field design mockup

We’ve seen proof-of-concept devices before which can show a limited number (three or so) of discrete focal planes simultaneously, but that means you only have a near, mid, and far focal plane to work with. In real life, objects can exist at an effectively infinite number of focal distances, which means that three is far from enough if we endeavor to make the ideal display.

Avegant CTO Edward Tang tells me that “all digital light fields have [discrete focal planes] as the analog light field gets transformed into a digital format,” but also says that their particular display is able to interpolate between them, offering a “continuous” dynamic focal plane as perceived by the viewer. The company also says that objects can be shown at varying focal planes simultaneously, which is essential for doing anything with the display that involves showing more than one object at a time.

Above: CGI representation of simultaneous display of varying focal planes. Note how the real hand and rover go out of focus together. This is an important part of making augmented objects feel like they really exist in the world.

Avegant hasn’t said how many simultaneous focal planes can be shown at once, or how many discrete planes there actually are.
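
Nor has the company said how the interpolation works. One published approach to approximating continuous focus with a handful of planes is linear depth blending: each virtual point’s light is split between the two nearest focal planes, weighted by how close (in diopters) the point sits to each. The sketch below illustrates that general idea with assumed plane distances; it is not a description of Avegant’s optics.

```python
# Assumed focal planes at 3 m, 1 m, and 0.4 m, expressed in diopters (1/m).
PLANES_DIOPTERS = sorted([1.0 / 3.0, 1.0, 2.5])

def depth_blend_weights(object_dist_m):
    """Split an object's brightness between the two focal planes that bracket it,
    weighted linearly in diopter space (a common multifocal-display technique)."""
    d = 1.0 / object_dist_m
    if d <= PLANES_DIOPTERS[0]:
        return {PLANES_DIOPTERS[0]: 1.0}
    if d >= PLANES_DIOPTERS[-1]:
        return {PLANES_DIOPTERS[-1]: 1.0}
    for lo, hi in zip(PLANES_DIOPTERS, PLANES_DIOPTERS[1:]):
        if lo <= d <= hi:
            t = (d - lo) / (hi - lo)
            return {lo: 1.0 - t, hi: t}

# An object at 0.6 m (~1.67 D) splits its light between the 1 D and 2.5 D planes.
print(depth_blend_weights(0.6))
```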

From a feature standpoint, this is similar to reports of the unique display that Magic Leap has developed but not yet shown publicly. Avegant’s announcement video for this new tech (heading this article) appears to invoke Magic Leap with solar system imagery that looks very similar to what Magic Leap has teased previously. A number of other companies are also working on displays which solve this issue.

SEE ALSO
'HOLOSCOPE' Headset Claims to Solve AR Display Hurdle with True Holography

Tang is being tight-lipped about just how the tech works, but tells me that “this is a new optic that we’ve developed that results in a new method to create light fields.”

So far the company is showing off a functioning prototype of their light field display (seen in the video), as well as a proof-of-concept headset that represents the form factor the company says could eventually be achieved.

We’ll be hoping to get our hands on the headset soon to see what impact the light field display makes, and to confirm other important details like field of view and resolution.


Lytro Shows First Light Field Footage Captured with Immerge VR Camera

Back toward the end of 2015, light field camera company Lytro announced a major turn toward the VR market with the introduction of ‘Immerge’, a light field camera made for capturing data which can be played back as VR video with positional tracking. Now the company is showing the first footage shot with the camera.

Lytro has made point-and-shoot consumer light field cameras since 2012. And while the company has had some success in the static photo market, the potential market for the application of light field capture has pulled the company into VR in a big way.

See Also: Lytro’s ‘Immerge’ 360 3D Light-field Pipeline is Poised to Redefine VR Video

Immerge, a 360-degree light field camera in the works at Lytro, captures incoming light from all directions. Because it records not only the color of the light but also its direction, the camera can capture data representing a stitch-free snippet of the real world, and (uniquely compared to other 360-degree cameras) that data allows for positional tracking of the user’s head: the ability to move your head through 3D space (parallax) and have the scene react accurately.
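
As a very rough mental model (our sketch, not Lytro’s format or algorithm), a light field capture stores rays rather than pixels: each sample records where on the rig it was captured and which direction the light arrived from, and a renderer answers “what would this eye position see?” by picking or blending the stored rays that best match the requested view. That property is what makes after-the-fact head movement, i.e. parallax, possible.

```python
from dataclasses import dataclass
import math

@dataclass
class LightFieldSample:
    """One captured ray: where it was recorded, its direction, and its color."""
    origin: tuple     # (x, y, z) capture position on the rig, in meters
    direction: tuple  # unit vector of the incoming light
    rgb: tuple        # (r, g, b)

def render_ray(samples, eye_pos, view_dir):
    """Toy novel-view lookup: return the stored ray that best matches the
    requested eye position and viewing direction. Real light field renderers
    blend many rays; this only shows why storing direction enables parallax."""
    def mismatch(s):
        positional = math.dist(s.origin, eye_pos)
        angular = 1.0 - sum(a * b for a, b in zip(s.direction, view_dir))
        return positional + angular
    return min(samples, key=mismatch).rgb

# Two rays captured 5 cm apart; moving the eye toward the second changes the result.
samples = [
    LightFieldSample((0.00, 0.0, 0.0), (0.0, 0.0, 1.0), (0.9, 0.2, 0.1)),
    LightFieldSample((0.05, 0.0, 0.0), (0.0, 0.0, 1.0), (0.1, 0.8, 0.2)),
]
print(render_ray(samples, eye_pos=(0.04, 0.0, 0.0), view_dir=(0.0, 0.0, 1.0)))
```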

This ability is one of the major advantages over standard film capture, and is seen as critical for immersion and comfort in VR experiences. Now, Lytro is showing off the first light field footage shot with its Immerge camera; the company says it’s the “first piece of 6DOF 360 live action VR content ever produced.”

Light field captures from Lytro’s camera also have a few other tricks, like the ability to change the IPD (the distance between the stereo viewpoints, to align with each user’s eyes) and to refocus as needed in post-production.

The company says that Immerge’s light field data captures scenes not only with parallax, but also with view-dependent lighting (reflections that move correctly based on your head position), and truly correct stereo which works no matter the orientation of your head. Traditional 360 degree camera systems have issues showing stereoscopic content when the viewer tilts their head in certain directions, while Immerge’s light field captures retain proper stereo no matter the orientation of the head, Lytro says.

See Also: 8i are Generating Light Fields from Standard Video Cameras for VR Video

According to Lytro’s VP of Engineering, Tim Milliron, Immerge can render at up to 8K-per-eye resolution, synthesizing the view from hundreds of constituent sub-cameras. Milliron says the company expects content creators to use Immerge’s light field captures like a high-quality master file, from which a high-end 6DOF-capable experience could be distributed in app form to desktop VR headsets, or more basic 360 video files could be rendered for upload and playback through traditional means.

Last year, Lytro raised a $50 million investment to pursue its VR interests. While the company initially expected to have Immerge ready in the first half of 2016, it’s only now, in Q3, that we’re seeing the first test footage shot with the device. Felix & Paul Studios, Within (formerly ‘Vrse’), and Wevr were initially said to be among the first companies outside of Lytro to get access to the camera to begin prototyping content. The company is also accepting applications for access to the prototype camera on the official Immerge website.
