AR Company Avegant Announces Successful Closing of Funding Round

Last year, Avegant announced it was working on new technology for augmented reality (AR) and mixed reality (MR) with its Light Field Display, with the development kits for the technology shipping in late 2017. Now the company is seeking to further develop its technology with a new investment funding round.

Avegant has announced the successful closing of $12M in Series AA funding from new investors Walden SKT Venture Fund and China Walden Venture Investments, as well as previous investors.

The money will be used to further develop its next-generation display technologies for the consumer market, building on the existing light field technology to create high-resolution, low-latency, high-brightness retinal displays.

Ed Tang, CEO of Avegant, said, “The consumer AR industry faces significant challenges developing displays that are high resolution, small form factor, large field-of-view, light field, and low power. The industry is excited about our unique solutions to these technical challenges, which will enable previously impossible AR experiences.”

Dr. Om Nalamasu, President of Applied Ventures and Chief Technology Officer of Applied Materials, commented: “Applied is excited to use its materials engineering technologies to enable new inflections like AR/VR, which require advanced displays, high-performance computing and lots of memory. We are working with Avegant to accelerate the development of their light field technology to create compelling AR applications.”

“Many companies are trying to solve multiple, very difficult technical problems to bring AR experiences to consumers,” said Andrew Kau, Managing Director of Walden International. “We chose to invest in Avegant because their solutions elegantly tackle these problems in creative ways that consider human factors without losing sight of manufacturability.”


It is not currently known when Avegant’s light field technology will see a commercial release, though some developers and researchers are already working with the technology.

For future coverage of Avegant and other new developments in VR and AR technology, keep checking back with VRFocus.

Avegant Raises $12M Financing to Further Develop AR Display Tech

Avegant, the AR display company known for its Glyph head-mounted display, announced that the company has successfully closed a $12M Series AA financing, which will allow them to further develop their light field technologies and “high resolution, low latency, and high brightness” retinal displays.

While well-known for its Glyph head-mounted display (now called ‘Avegant Video Headset’), the company has also demoed a prototype AR headset using its display technologies, which are said to target the consumer market.

Image courtesy Avegant

“The consumer AR industry faces significant challenges developing displays that are high resolution, small form factor, large field-of-view, light field, and low power. The industry is excited about our unique solutions to these technical challenges, which will enable previously impossible AR experiences,” said Avegant CEO Ed Tang in a statement.

Dr. Om Nalamasu, President of Applied Ventures and Chief Technology Officer of Applied Materials, said their company will be working with Avegant to accelerate the development of Avegant’s light field technology and “to create compelling AR applications.”

New investors include Walden SKT Venture Fund and China Walden Venture Investments III, L.P., with the company’s total funding amounting to $62.4M.

The post Avegant Raises $12M Financing to Further Develop AR Display Tech appeared first on Road to VR.

Testing Lightfield AR with Avegant (AWE Part 3)

Today I’d like to continue with my impressions and live demos from Augmented World Expo in Munich. Last time I wrote about the Meta glasses: the big field of view lets you forget the window, and the gesture concept can be a great step towards more natural human-machine interaction. I finished by talking about the problems of today’s augmented reality glasses and the “floating screen at a fixed distance” feeling caused by the vergence-accommodation conflict. Is there a solution yet? Time to look at lightfield displays from Avegant.

The problem of today’s AR and VR glasses

To get there, let me detour briefly. Almost all commercially available smartglasses today cast one or two screens in front of your eyes. These have a fixed perceived distance and require your eyes to focus there – as if you were holding up a piece of paper at arm’s length. This works fine for virtual objects at that position. But what happens when an object is far away? In the real world, your eyes would adjust focus and vergence: both eyes would turn slightly outwards, becoming more parallel, while automatically focusing at the same distance. This is the natural way. Monocular accommodation (focus), binocular vergence, plus the brain’s share of the work (analyzing image disparity and blur) all act in concert, automatically, for the real world…
But this breaks with today’s HMD screens. If the virtual object is far away, your eyes still need to focus on the nearby virtual screen distance at arm’s length, breaking your eyes’ automatism and resulting in the vergence-accommodation conflict (VAC). It can hurt, lead to headaches or simply be exhausting. Some people are more affected than others – much like at 3D movies.
But it’s not only about comfort – it simply does not look natural when objects spread over many distances are all crisply sharp. If you really want to integrate augmented objects seamlessly, you need even more than this realistic sense of distance: you need to defocus objects, just as your physical eyes do with real objects all the time! This is a key feature for an immersive, integrated sensation of augmented objects.
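The conflict above can be quantified: focus is commonly measured in diopters (the reciprocal of distance in meters), and the VAC is essentially the dioptric gap between where the eyes converge and where the fixed screen forces them to focus. A tiny sketch of that idea (the distances are illustrative, not taken from any particular headset):

```python
def vac_diopters(object_m, screen_focus_m):
    """Vergence-accommodation conflict, measured in diopters: the eyes
    converge on a virtual object at object_m, while the optics force
    accommodation to the fixed screen distance screen_focus_m."""
    return abs(1.0 / object_m - 1.0 / screen_focus_m)

# With a fixed focus at arm's length (0.7 m), an object rendered at
# 10 m produces over 1.3 D of conflict, while an object rendered
# right at 0.7 m produces none.
```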

The lightfield tech promise

Magic Leap, Avegant and others claim to have the solution: displays that let us focus and defocus naturally on virtual objects, working in harmony with real-world items.

We don’t really know yet what Magic Leap is doing, though there are signs and rumors. I’d like to mention Vance Vids’ videos here and Karl Guttag’s observations on ML tech – both worth watching and reading. But we do know that others, like Avegant, use lightfield technology and waveguide optics to render multiple perspectives of virtual objects and beam them directly into your eyes. It takes more optical and general hardware engineering to miniaturize all this, and it takes more computational power to calculate multiple perspectives, layers or slices of your 3D objects in real-time – each image with a slightly different perspective. All of them are cast to your eyes, and the optics do the magic trick of letting your eyes pick the right ones. But how many slices are needed, and won’t it kill the resolution of today’s devices? Is it possible to render all this in real-time at all?
Nobody tends to explain their magic tricks, and during my interview with Avegant they wouldn’t either. We do get an idea when we look at past research projects, though. Stanford’s Gordon Wetzstein talks about 25 images per eye per frame (a 5×5 grid). Nvidia showed an earlier lightfield prototype with 14×7 perspectives in total. How many do we need?

During an interview with UploadVR, Avegant’s CTO Edward Tang stressed the idea that you really need more spatial resolution in close proximity, since the human eye has little room for error close by (see graphic from Avegant). He mentions 20 layers nearby, while far out even one might be enough to work well. Hence, 21 layers could do the trick? Maybe a 5×5 grid is not such a bad idea after all, as Nvidia presented back then. But enough theory – what does the resulting image look like? Can Avegant’s display live up to the hype?
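Tang’s “many layers near, one far” intuition falls out naturally if focal planes are spaced uniformly in diopters (1/distance) rather than in meters: uniform diopter steps pack planes close to the eye and spread them out at distance. A small sketch of that spacing rule (the plane count and near/far limits are my illustrative assumptions, not Avegant’s specs):

```python
def focal_plane_distances(n_planes, near_m=0.25, far_m=10.0):
    """Place n_planes uniformly in diopter (1/m) space between a near
    and a far limit, returning their distances in meters (far to near).

    Uniform diopter spacing clusters planes close to the viewer, where
    the eye is most sensitive to focus error, and thins them out with
    distance."""
    near_d = 1.0 / near_m  # e.g. 4.0 diopters
    far_d = 1.0 / far_m    # e.g. 0.1 diopters
    step = (near_d - far_d) / (n_planes - 1)
    return [1.0 / (far_d + i * step) for i in range(n_planes)]

# Six planes between 0.25 m and 10 m: the gaps between consecutive
# planes shrink rapidly as they approach the viewer.
planes = focal_plane_distances(6)
```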

Live Demo with Avegant

The live demo was shown in a rather dark room. The wired headset felt a bit bulky and big, and tracking was OK (using HTC Vive’s Lighthouse system), but Avegant restated that they only combine third-party elements to show a demo – after all, their focus is solely on their display technology. All right, let’s take a look.
During the demo tour I experienced a female avatar, rendered in high quality, with detail down to individual hairs. Brightness and sharpness were really great. No screendoor effect could be seen, i.e. no pixel grid was visible. The field of view felt good, though objects were still cut off at the edges. But the interesting part was best observed in the planetary system demo, where multiple planets drifted in mid-air and I could move up close to the Earth’s moon and make out all the details and craters on its surface. Focusing on the moon made the planets in the back blur naturally. Changing focus to the planets in the back, the moon turned blurry, just as you would expect in the real world! This was truly amazing! I could not make out any jumps or layers. Well done! The switch of focus, the feeling of vergence and accommodation, felt natural to me – and I’ve played around with stereoscopic vision devices a lot in the past. I’ve held my finger in front of my head for cross-eyed viewing a million times. During this demo setup it felt fine!

My Conclusion and what’s next?

The demo was presented very well, and I was stunned by how impressively the optical illusion worked. Unfortunately, there was only a little time; a longer test could have revealed more. How does the setup work under normal lighting conditions or daylight today? How does the focus feel when rendering virtual objects next to the same real objects? Will it still feel natural? Is the focus/defocus accurate? Is eye fatigue really minimized or gone with this display tech? We will have to wait and see another time.

But the prototype demo showed a big step toward immersive augmented reality displays. A huge problem solved – or at least tackled in a version 1 to build on. If future designs can be slim with long battery life, this sure seems like a must-have for everyday glasses!

Once all these issues are solved, we can focus on the next problems (sigh), e.g. a complete world scan and a shared 3D representation for city-scale tracked AR experiences, and easier content sharing on the go with 5G or 6G. Google just pushed out their content platform Poly, … maybe there is more to come for user-generated world scans, too? Well, let’s keep that for another day.


OK, I hope you enjoyed my insights. I also tried to summarize the VAC, lightfields, etc. in a simple way. Any thoughts or feedback, or anything you would like to see covered in detail? Let me know! Cheers!

The post Testing Lightfield AR with Avegant (AWE Part 3) appeared first on augmented.org.

Avegant Ships Development Kits for Light Field Display Technology

Earlier this year, Avegant announced it had made a breakthrough in augmented reality (AR) and mixed reality (MR) with its Light Field Display technology. Now the company has announced that it has begun shipping its Light Field Display Development Kits.

The main goal of Avegant’s Light Field Display technology is to solve a fundamental problem in AR and MR – how to make virtual objects look real at various distances when superimposed on the real world. Avegant says its technology solves this problem by providing multiple focal planes, allowing digital objects to maintain their sense of presence no matter how near or far the observer is.

The Development Kit contains hardware, software and support services for businesses and creators who wish to use the technology. Avegant has strived to simplify workflows and add value to a number of professional areas with the technology, including medicine, design, engineering, communication and education.

“Since announcing our first Light Field displays earlier this year, Avegant has been flooded with requests from companies that can’t wait to get their hands on this new technology,” said Eric Trabold, Chief Business Officer at Avegant. “Demand took off like a rocket, so we’re launching a development kit designed specifically to meet that demand. By solving the display problem, Avegant enables companies to dramatically accelerate the development of their own mixed reality products. This technology is absolutely required for the kinds of mixed reality experiences we all want.”

Some partner companies have been involved in a pilot program that gave them advance access to the development kit so they could begin creating mixed reality applications. Other companies can obtain pricing and availability information by contacting Avegant via the company’s website.

VRFocus will bring you further news on Avegant and Light Field Display as it becomes available.

AWE Europe to Host Nvidia, Avegant, Vuzix and More

Augmented World Expo (AWE) is returning to Munich with its 2nd annual European conference and expo this October, and the initial line-up for the event has been revealed. AWE Europe 2017 will showcase the latest in AR+VR from Audi, Avegant, Bosch, DAQRI, Epson, Fraunhofer, Holo-light, Nvidia, ODG, RE’FLEKT, Volkswagen, Vuforia, Vuzix, Wayfair, Wikitude and more as part of a comprehensive lineup of conference keynotes and panels, exhibits, networking events and breakout sessions.

Augmented reality (AR) and virtual reality (VR) continue their steady growth across the globe, and AWE is building strength in Europe, with attendance expected to exceed 1,500. With over 100 speakers and exhibitors planned, additional highlights revealed thus far include startups like AGCO Corporation, cognitiveVR, Fringefy, gestigon, Heavy Projects, Holo-Light, IGNYTE, INS Insider Navigation Systems, Innovation.Rocks, Interglass, Joinpad, LC-Tec, Massless, Optinvent, Proceedix, RE’FLEKT, Super Ventures, THX, Ubimax, UpSkill, ViewAR, Vuframe and more.

AWE EU 2017 will take place 19th – 20th October 2017, Munich. More information regarding the speakers, exhibitors and general registration is available at the official AWE EU 2017 website. VRFocus will keep you updated with all the latest details on the global AWE events.

The Crazy VR Goggles in HBO’s ‘Silicon Valley’ Are Not a Prop but a Real Prototype

HBO’s show Silicon Valley entertains with tales of the pervasive startup culture of California’s San Francisco Bay Area. While virtual reality has graced the show previously, season four is bringing it closer to the fore, with the show’s main characters becoming intertwined with the eccentric Keenan Feldspar, a fictional Silicon Valley wunderkind developing a pair of virtual reality goggles. A wild looking prototype version of the fictional headset seen in the eighth episode of this season turns out to be not a prop, but an actual prototype of a product that’s on the market today.

Silicon Valley shines in a large part because of how accurately it portrays the trials and tribulations of startup life in the Bay Area. The minds behind Silicon Valley have done their homework to show believable startups and their fictional tech. In the last few years, virtual reality has been one of the hottest trends in the startup and venture capital space, and the writers of the show have taken clear notice.

While Rift and Gear VR headsets have popped up here and there on the show, the latest episodes have brought the topic of VR front and center. The character Keenan Feldspar is developing a virtual reality headset, and his carefree enthusiasm lures the Pied Piper crew to try out his latest demo.

The wild-looking headset we see used in the show looks as if a prop designer had some fun designing a kitbashed goggle-shaped VR headset with exposed circuit boards and 3D printed parts. But what you’re looking at is actually a real prototype of an HMD called the Glyph, developed by a company called Avegant.

Though it looks quite futuristic, the prototype is relatively old. I got to try it myself back in 2013 when Avegant was developing the then yet-to-be-named Glyph, a personal video viewer which doubles as a pair of headphones and is available today for $400.

And while the prototype Glyph is not technically a VR headset—as the limited field of view makes it suitable only for non-immersive media—Avegant has lately shifted their focus to the immersive HMD space, and is presently developing “a new method to create light fields” for future headsets.

Avegant makes a cameo at Hooli-con | Photo courtesy HBO

The prototype Glyph headset is not the last we see of Avegant on Silicon Valley. In the next episode when the Pied Piper crew attends Hooli-Con, careful viewers will spot an Avegant Glyph booth in the background of one of the shots. Oculus and 360fly booths can also be spotted.

Photo courtesy HBO

Silicon Valley’s interest in virtual reality doesn’t end there. The show’s VR wiz kid, Keenan Feldspar, bears a striking resemblance to Oculus founder Palmer Luckey (if you were on the fence, the Hawaiian shirts are the dead giveaway). Luckey himself recently changed his social media profile pictures to a photo of Feldspar.

And to make things even more meta, the show has also crossed into non-fiction. Thomas Middleditch himself, who plays Silicon Valley’s main character, was the host of the Proto Awards, an annual virtual reality award ceremony, where he pitched a zany idea for a VR game, as we reported from the 2014 event:

One of the more hilarious moments of the night was when Middleditch pitched the audience an idea for a VR game: a simulator of Middleditch’s bathroom where users would sit on the toilet and have to console his small but bossy dog. What seemed like a joke at first became all too real; for reference material he actually pulled up a video, taken while on his toilet, of his dog coming into the bathroom and growling until being pet.

– – — – –

How long virtual reality will remain a central theme of the show is unclear, but in the meantime we’re enjoying the cameos and homages to companies and products within the real VR industry.

The post The Crazy VR Goggles in HBO’s ‘Silicon Valley’ Are Not a Prop but a Real Prototype appeared first on Road to VR.

Avegant’s Light Field Technology Aims to Improve Mixed Reality

The start-up Avegant, which a few years ago released the Glyph – an entertainment center in the form of headphones – is now showing with a new prototype what the future of augmented reality could look like. Even back then, the company was enthusiastic about AR and VR technology and developed wearable computer systems accordingly.

Avegant’s new prototype delivers razor-sharp MR imagery

During development, Avegant ran into a fundamental problem of mixed reality displays: they all have a fixed focal point. You can project virtual objects onto walls and manipulate them with controllers, but the truly real experience is missing. Developer Edward Tang put it this way: “To perceive mixed reality as a truly real experience, I would have to be able to walk up to the projected objects and touch and feel them as if they were really right in front of me. So the focus has to be right.”

To achieve this, Avegant researched light field technology in order to render objects sharp or blurred as needed, depending on where the viewer’s focus lies. It thus depicts virtual objects just as we see the real world.

Accordingly, Avegant developed a new prototype based on light field technology. The difference from other existing devices lies in its compatibility with established manufacturing techniques and existing accessories. The company does not yet want to reveal the device’s exact technical specifications, but the imagery is said to be razor sharp – considerably sharper than current HD displays. In addition, the focus changes depending on which objects are being viewed.

The mixed reality prototype in action

Avegant developed several demos to showcase its technology. One shows the solar system with razor-sharp images of the planets: depending on which planet you look at, the others recede into the background and blur. In the second MR experience, the user is placed in an aquarium, surrounded by fish, turtles and water waves – an experience meant to feel as real as being immersed in an underwater world. The last demo shows interaction with a virtual woman, who is meant to appear lifelike through details such as clearly visible freckles and to convey emotions through changing facial expressions.

According to Tang, the technology behind the prototype is already very mature and could go into production as is. It would be compatible with existing hardware, and the headset would work with NVIDIA chipsets or Snapdragon processors. When this will happen has not yet been decided, but the small start-up could nonetheless contribute to real progress in mixed reality.

(Source: Engadget)

The post Lichtfeld Technologie von Avegant soll Mixed Reality verbessern first appeared on VR∙Nerds. VR·Nerds am Werk!

Avegant Secures a Further $13.7m After Light Field Technology Improvements

Back in March, Avegant announced a possible breakthrough in mixed reality (MR) with its Light Field Technology, a system that allows virtual objects to appear far more solid and real than the opaque holograms usually seen. On the back of this announcement, the company has raised $13.7 million USD in funding.

The funding round, which closed in March, was led by Chinese mobile internet company Hangzhou Lian Luo and featured new investment from Applied Materials, while existing investor Intel also took part, reports the Financial Times. This brings the total Avegant has raised since it was founded in 2012 to around $50 million.


Avegant is best known for its personal media head-mounted display (HMD), the Glyph, which is currently only available in the US, for $499. Designed as a pair of chunky headphones, the Avegant Glyph lets users watch movies, listen to music or play videogames on a compact device. It uses Avegant’s patented Retinal Imaging Technology, which features an array of two million mirrors to project images directly onto the eyes.

But it’s the company’s developments in MR and augmented reality (AR) that could hold the most promise. Even though the technology is progressing more slowly than virtual reality (VR), it is seen as a possible successor, with Microsoft’s HoloLens and Meta’s Meta 2 already on the market, albeit aimed at the enterprise end. There’s also Magic Leap, the super-secretive company that has secured massive amounts of funding but has yet to publicly demo anything – although some celebrities have seen it.

“Mixed reality will become more of a general device and VR will become more of a niche application,” said Ed Tang, Avegant’s founder and chief technology officer. “MR is less of an isolating experience and more inclusionary.”

“We are going to see light-field displays in the market much faster than people have anticipated,” Tang added, estimating in the region of the “next 12 to 18 months”.

As Avegant continues its development, VRFocus will bring you the latest updates.

Testing the Future of AR Optics with Avegant Light Fields


UploadVR paid a visit to the Avegant offices in California last week to try out its prototype light field display.

Avegant is best known in the VR/AR scene right now as the creators of the Glyph — a headset that puts users inside their own virtual movie theater. The Glyph arrived at an awkward stage for immersive tech and was quickly overshadowed by full VR headsets from Oculus, HTC and Sony.

Since releasing the Glyph, Avegant has been relatively quiet. Now, however, the company seems prepared to move into pioneering display technologies for augmented reality devices.

The term Avegant uses for its prototype displays is “light field,” which is a bit of a buzzword in the industry. As defined by Edward Tang, Avegant’s co-founder and CTO, a light field is “multiple planes of light coming into your eyes that can create what you would normally see in real life.”

Avegant is far from the first company to theorize and attempt to execute this sort of optic, but they are angling to be the best at its creation and distribution. According to Tang, Avegant is working on ways to make light field displays not only functional, but affordable as well. Tang thinks that widespread commercial AR will be a total “non-starter” without light fields and hopes that, by making the technology affordable, Avegant can help fast track AR’s path to commercial maturity.

During our demo at Avegant the actual tracking of the headset (done with external motion cameras throughout the room) was not emphasized. It was clear that Avegant was prioritizing one thing over every other: the display.

All the usual AR problems persisted in the Avegant demo, including a restricted field of view and sub-optimal positional tracking. The display itself, however, was transformative. True to its word, Avegant has created a display capable of rendering multiple planes of focus at freakishly high resolution.

Photographers understand something called “depth-of-field” and so do your eyes. Essentially this is the realization that not all objects in an image should be in full focus at any given time. Your eyes are naturally able to focus on closer objects while blurring out others and standard VR experiences can mimic this somewhat using software. Avegant’s solution, however, aims to create true depth-of-field for AR. Its light field displays allow your eye to switch focus on multiple virtual objects via a hardware solution, not a software illusion.
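To first order, the blur an out-of-focus object casts on the retina scales with the pupil diameter times the dioptric gap between the focus depth and the object depth, which is why a hardware light field can reproduce depth-of-field that software approximations only fake. A rough sketch of that relationship (the values and the simple linear model are illustrative, not from Avegant):

```python
def retinal_blur(focus_m, object_m, pupil_mm=4.0):
    """First-order defocus blur score: pupil diameter (mm) times the
    dioptric distance between where the eye is focused (focus_m) and
    where the object actually sits (object_m). Arbitrary units; larger
    means blurrier."""
    return pupil_mm * abs(1.0 / focus_m - 1.0 / object_m)

# Focused at 0.5 m, an object at 2 m blurs strongly, while with focus
# at 5 m an object at 10 m is barely defocused at all: most of the
# depth-of-field action happens near the viewer.
```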

I could feel my eyes working to refocus as I switched my attention between objects during my demo. That alone is a significant breakthrough for creating realism in augmented reality.

Tang describes what we saw as an “optics prototype only,” one that is only meant to show what Avegant’s new displays are capable of. The company declined to comment on what its final market strategy will be and whether or not it will continue making its own headsets or license these displays to other OEMs.


Tang made it clear the Avegant prototype is “in no way a commercial product” and that we may end up seeing Avegant light fields in a “variety of form factors.”

One of these form factors could be the separate “cosmetic prototype” of which we only got the briefest glimpse. This design suggested the displays could one day fit inside a much more ergonomic, and fashionable, headset.

Avegant Claims Newly Announced Display Tech is “a new method to create light fields”

Avegant, maker of the Glyph personal media HMD, is turning its attention to the AR space with what it says is a newly developed light field display for augmented reality, one that can display multiple objects at different focal planes simultaneously.

Most of today’s AR and VR headsets exhibit something called the vergence-accommodation conflict. In short, it’s an issue of biology and display technology: a screen that’s just inches from our eyes sends all light into our eyes at the same angle (whereas normally the angle changes based on how far away an object is), causing the lens in each eye to focus (called accommodation) on light from only that one distance. This comes into conflict with vergence, the relative angle between our two eyes as they rotate to fixate on the same object. In real life and in VR, this angle is dynamic, and accommodation normally happens in our eyes automatically at the same time – except that in most AR and VR displays today it can’t, because of the static angle of the incoming light.

For more detail, check out this primer:

Accommodation

Accommodation is the bending of the eye’s lens to focus light from objects at different depths. | Photo courtesy Pearson Scott Foresman

In the real world, to focus on a near object, the lens of your eye bends to focus the light from that object onto your retina, giving you a sharp view of the object. For an object that’s further away, the light is traveling at different angles into your eye and the lens again must bend to ensure the light is focused onto your retina. This is why, if you close one eye and focus on your finger a few inches from your face, the world behind your finger is blurry. Conversely, if you focus on the world behind your finger, your finger becomes blurry. This is called accommodation.

Vergence

Vergence is the rotation of each eye to overlap each individual view into one aligned image. | Photo courtesy Fred Hsu (CC BY-SA 3.0)

Then there’s vergence, which is when each of your eyes rotates inward to ‘converge’ the separate views from each eye into one overlapping image. For very distant objects, your eyes are nearly parallel, because the distance between them is so small in comparison to the distance of the object (meaning each eye sees a nearly identical portion of the object). For very near objects, your eyes must rotate sharply inward to converge the image. You can see this too with our little finger trick as above; this time, using both eyes, hold your finger a few inches from your face and look at it. Notice that you see double-images of objects far behind your finger. When you then look at those objects behind your finger, now you see a double finger image.
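The geometry described above is simple to compute: the total convergence angle for an object straight ahead follows from the interpupillary distance (IPD) and the object distance. A quick sketch (the 63 mm IPD is a commonly cited adult average, not a figure from this article):

```python
import math

def vergence_angle_deg(distance_m, ipd_m=0.063):
    """Total convergence angle (degrees) between the two eyes' lines of
    sight for an object straight ahead at distance_m, given the
    interpupillary distance ipd_m. Each eye rotates inward by half of
    this angle."""
    return math.degrees(2.0 * math.atan((ipd_m / 2.0) / distance_m))

# The angle is around 12 degrees at 0.3 m but well under half a degree
# at 10 m, which is why the eyes are nearly parallel for distant
# objects and sharply converged up close.
```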

The Conflict

With precise enough instruments, you could use either vergence or accommodation to know exactly how far away an object is that a person is looking at. But the thing is, both accommodation and vergence happen in your eye together, automatically. And they don’t just happen at the same time; there’s a direct correlation between vergence and accommodation, such that for any given measurement of vergence, there’s a directly corresponding level of accommodation (and vice versa). Since you were a little baby, your brain and eyes have formed muscle memory to make these two things happen together, without thinking, any time you look at anything.

But when it comes to most of today’s AR and VR headsets, vergence and accommodation are out of sync due to inherent limitations of the optical design.

In a basic AR or VR headset, there’s a display (which is, let’s say, 3″ away from your eye) which shows the virtual scene and a lens which focuses the light from the display onto your eye (just like the lens in your eye would normally focus the light from the world onto your retina). But since the display is a static distance from your eye, the light coming from all objects shown on that display is coming from the same distance. So even if there’s a virtual mountain five miles away and a coffee cup on a table five inches away, the light from both objects enters the eye at the same angle (which means your accommodation—the bending of the lens in your eye—never changes).

That comes into conflict with vergence in such headsets, which – because we can show a different image to each eye – is variable. Being able to adjust the image independently for each eye, such that our eyes need to converge on objects at different depths, is essentially what gives today’s AR and VR headsets stereoscopy. But the most realistic (and arguably most comfortable) display we could create would eliminate the vergence-accommodation issue and let the two work in sync, just as we’re used to in the real world.

Solving the vergence-accommodation conflict requires being able to change the angle of the incoming light (which is the same thing as changing the focus). That alone is not such a huge problem; after all, you could just move the display further from your eyes to change the angle. The big challenge is allowing not just a dynamic change in focus, but simultaneous focus – just as in the real world, where you might be looking at a near and a far object at the same time, each with a different focus. Avegant claims its new light field display technology can do both dynamic focal plane adjustment and simultaneous focal plane display.

Avegant Light Field design mockup

We’ve seen proof of concept devices before which can show a limited number (three, or so) of discrete focal planes simultaneously, but that means you only have a near, mid, and far focal plane to work with. In real life, objects can exist in an infinite number of focal planes, which means that three is far from enough if we endeavor to make the ideal display.

Avegant CTO Edward Tang tells me that “all digital light fields have [discrete focal planes] as the analog light field gets transformed into a digital format,” but also says that their particular display is able to interpolate between them, offering a “continuous” dynamic focal plane as perceived by the viewer. The company also says that objects can be shown at varying focal planes simultaneously, which is essential for doing anything with the display that involves showing more than one object at a time.

Above: CGI representation of simultaneous display of varying focal planes. Note how the real hand and rover go out of focus together. This is an important part of making augmented objects feel like they really exist in the world.

Avegant hasn’t said how many simultaneous focal planes can be shown at once, or how many discrete planes there actually are.
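One plausible way to read Tang’s interpolation claim is that an object falling between two discrete planes has its light split across the bracketing planes, with weights varying linearly in diopter space. Avegant has not disclosed its actual method, so the following is purely an illustrative sketch:

```python
def plane_blend_weights(object_m, plane_distances_m):
    """Split an object's light between the two focal planes that
    bracket its depth, weighting linearly in diopter (1/m) space --
    one simple way a fixed set of discrete planes could read as a
    continuous focal range. Returns [(plane_index, weight), ...]."""
    d_obj = 1.0 / object_m
    # Sort planes by diopter value, remembering their original indices.
    planes_d = sorted((1.0 / p, i) for i, p in enumerate(plane_distances_m))
    # Outside the covered range, clamp to the nearest plane.
    if d_obj <= planes_d[0][0]:
        return [(planes_d[0][1], 1.0)]
    if d_obj >= planes_d[-1][0]:
        return [(planes_d[-1][1], 1.0)]
    for (lo_d, lo_i), (hi_d, hi_i) in zip(planes_d, planes_d[1:]):
        if lo_d <= d_obj <= hi_d:
            w = (d_obj - lo_d) / (hi_d - lo_d)
            return [(lo_i, 1.0 - w), (hi_i, w)]

# Three planes at 10 m, 1 m and 0.25 m: an object at 0.5 m lands
# between the 1 m and 0.25 m planes and is blended between them.
weights = plane_blend_weights(0.5, [10.0, 1.0, 0.25])
```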

From a feature standpoint, this is similar to reports of the unique display that Magic Leap has developed but not yet shown publicly. Avegant’s announcement video of this new tech (heading this article) appears to invoke Magic Leap with solar system imagery that looks very similar to what Magic Leap has teased previously. A number of other companies are also working on displays which solve this issue.


Tang is being tight lipped on just how the tech works, but tells me that “this is a new optic that we’ve developed that results in a new method to create light fields.”

So far the company has shown off a functioning prototype of its light field display (seen in the video) as well as a proof-of-concept headset representing the form factor the company says could eventually be achieved.

We’re hoping to get our hands on the headset soon to see what difference the light field display makes, and to confirm other important details like field of view and resolution.

The post Avegant Claims Newly Announced Display Tech is “a new method to create light fields” appeared first on Road to VR.