Facebook Researchers Reveal Methods for Design & Fabrication of Compact Holographic Lenses

Researchers from Facebook Reality Labs have shared new methods for the design & fabrication of compact holographic lenses for use in XR headsets.

The lenses used in most of today’s XR devices are conventional refractive lenses, which can be fairly bulky, especially once they are optimized for certain optical characteristics. Fresnel (ridged) lenses are frequently used in XR headsets to improve optical performance without adding too much bulk.

In theory, holographic lenses are a promising approach for XR optics thanks to their ability to perform the same functions as a traditional lens (or even more advanced ones) in the space of a wafer-thin film. However, designing and fabricating holographic lenses with high optical performance is far more difficult today than it is with typical refractive optics.

In an effort to move us one step closer to the practical use of holographic lenses in XR devices, Facebook Reality Labs researchers Changwon Jang, Olivier Mercier, Kiseung Bang, Gang Li, Yang Zhao, and Douglas Lanman have detailed new methods for creating them. This could go a long way toward making it possible to build, at scale, the kind of compact XR glasses Facebook recently demonstrated.

In a paper published in the peer-reviewed journal ACM Transactions on Graphics (Vol. 39, No. 6, Article 184) in December, titled Design and Fabrication of Freeform Holographic Optical Elements, the researchers write, “we present a pipeline for the design and fabrication of freeform [Holographic Optical Elements (HOEs)] that can prescribe volume gratings with complex phase profiles and high selectivity. Our approach reduces image aberrations, optimizes the diffraction efficiency at a desired wavelength and angle, and compensates for the shrinkage of the material during HOE fabrication, all of which are highly beneficial for VR/AR applications. We also demonstrate the first full-color caustic HOE as an example of a complex, but smoothly-varying, volume grating.”

Specifically, the paper covers optimization methods for establishing a theoretical holographic lens design, and two approaches for actually manufacturing it.

One uses a pair of freeform refractive optics to expose the target hologram; the paper proposes a method for designing those refractive optics so that they accurately form the target hologram within the holographic film. The other uses a holographic printer to build the target hologram from tiled holographic patches; here the paper proposes a method for optimizing the printing process to recreate the target hologram as accurately as possible, which the authors say is an entirely different challenge from the first approach.

While the paper didn’t explore quite this far, the authors say that future research could attempt to apply these same methods to curved, rather than flat, surfaces.

“For some VR/AR applications, it could be beneficial to create HOEs with physically curved form-factors, for example, for HOEs laminated on curved windshields or glasses. We expect our fabrication framework to expand well to such cases, since neither the printer or the [refractive lens] approaches require the HOE to be flat, and the optimization method of Algorithm 1 could be adapted to intersect rays with a curved surface […],” the researchers write. “Optimizing the shape of the HOE as part of our method would provide us with more degrees of freedom and would broaden applications, but we leave this as future work.”

The post Facebook Researchers Reveal Methods for Design & Fabrication of Compact Holographic Lenses appeared first on Road to VR.

Facebook Researchers Show The Most Compact VR Optics Yet

Facebook’s VR research division is presenting prototype VR optics smaller than any we’ve seen yet for the annual SIGGRAPH computer graphics conference.

The ideas behind the “holographic near-eye display” could one day enable VR headsets with a sunglasses-like form factor, but for now this is solely research with limitations.

Why Are VR Headsets So Bulky?

The primary driver of the size and bulk of today’s VR headsets is the optical design. Magnifying a display over a wide field of view requires a large, thick lens, and focusing it at a viewable distance requires a long gap to the display. After adding the housing needed to contain this system, even the most minimal designs end up over 350 grams.


The standalone Oculus Quest, with a battery, mobile chip and lens separation adjustment, weighs 571 grams. Many people find it hurts their face after a few minutes.

Panasonic and Pico have shown off prototypes of compact headsets using “pancake lenses”, and Huawei has already launched this as a product in China. Without a tracking system or battery, these headsets end up around 150 grams.

Huawei’s VR Glass, sold in China, weighs 166 grams

However, these current pancake lens designs have a number of unsolved flaws. They block around 75 percent of light, which can make the image look dim and washed out. They may also show faint, slightly misaligned ghost copies of the image, and this “ghosting” only gets worse as you try to improve the image with a brighter source.
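The roughly 75 percent figure follows from the geometry of folded (“pancake”) optics: light must interact with a 50/50 mirror twice, losing half its energy each time. Here is a back-of-the-envelope sketch, assuming an ideal beamsplitter (real polarizers and coatings lose even more):

```python
# Back-of-the-envelope efficiency of a folded ("pancake") optic.
# Assumes an ideal 50/50 beamsplitter; real systems have further
# polarizer and coating losses, so this number is an upper bound.

def pancake_efficiency(splitter_transmission=0.5):
    # Pass 1: light from the display transmits through the 50/50 mirror.
    after_first_pass = splitter_transmission
    # Pass 2: after bouncing off the reflective polarizer, the light
    # must reflect off the same 50/50 mirror to fold the optical path.
    after_second_pass = after_first_pass * (1 - splitter_transmission)
    return after_second_pass

eff = pancake_efficiency()
print(f"best-case throughput: {eff:.0%}")  # 25%, i.e. ~75% of light lost
```

Driving the display brighter to compensate is exactly what aggravates the ghosting described above.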

Holographic Lenses

Facebook Reality Labs’ new approach is a thin film where focusing is done by holographic optics instead of by the bulk of the lens. ‘Hologram’ in this context just means a physical “recording” of how light interacts with an object, in this case a lens rather than a scene.

Facebook claims the research may be able “to deliver a field of view comparable to today’s VR headsets using only a thin film for a thickness of less than 9 mm.” The total weight of the display module is claimed as just 18 grams. However, this does not include the actual laser source, nor do any of the images Facebook provided. “For our green-only sunglasses-like prototype, we measured an overall maximum field of view of approximately 92° × 69°,” according to the research paper.

By using polarization-based optical folding, these ultra-lightweight lenses can be placed directly in front of the display source.

Because holographic elements disperse light, the only practical illumination source is lasers used at specific angles and wavelengths. The researchers were able to “inject” laser light into a 2.1″ 1600×1600 LCD, replacing the backlight.

The prototype is currently monochrome, only capable of displaying the color green. The researchers have a tabletop-sized proof of concept for multi-color, and believe bringing this to the sunglasses prototype is “viable” with further engineering.

The range of colors laser light can deliver (known as the color gamut) is significantly wider than that of LCD displays, and in fact slightly wider than even OLED, so this would represent a milestone achievement if it could be moved into a head-worn system.
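Gamut size can be compared as triangle areas in CIE 1931 xy chromaticity space. A rough sketch follows; the sRGB primaries are standard, while the laser primaries are approximate chromaticities for ~638/532/450 nm lasers, used here only as illustrative assumptions:

```python
# Rough comparison of color gamut areas in CIE 1931 xy space.
# sRGB primaries are standard (IEC 61966-2-1); the laser primaries
# below are approximate spectral-locus points, not measured values.

def triangle_area(p1, p2, p3):
    # Shoelace formula for the area of a gamut triangle.
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1*(y2 - y3) + x2*(y3 - y1) + x3*(y1 - y2)) / 2.0

srgb  = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]    # R, G, B
laser = [(0.715, 0.285), (0.17, 0.80), (0.157, 0.018)]  # approx.

ratio = triangle_area(*laser) / triangle_area(*srgb)
print(f"laser gamut covers ~{ratio:.1f}x the sRGB triangle's area")
```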

Early Research, Lofty Goals

It’s important to understand that what’s being presented here is just early research for a new kind of display system. If it ever becomes a product, it will also need a tracking system. And unless it connects to your phone with a cable, it’d likely need a battery and mobile chipset too.

Facebook describes this research as being on the same miniaturization research “path” as Half Dome 2 and 3, which it presented at Oculus Connect 6 back in October.

Those headsets are much larger than what’s being shown here, but achieved a wider field of view while also having eye tracking and variable focus. FRL says future iterations of this sunglasses prototype could also be varifocal by moving the lenses over a range of just 1 millimeter. This could theoretically be achieved with tiny piezoelectric actuators.
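The 1 millimeter figure is plausible under simple thin-lens math: near the focal length, tiny changes in the display-to-lens gap produce large changes in virtual image depth. Here is a sketch with hypothetical numbers (the focal length and gaps are illustrative, not from FRL’s paper):

```python
# Thin-lens sketch of varifocal: moving the lens a fraction of a
# millimetre relative to the display shifts the virtual image depth.
# f and the gaps below are hypothetical placeholder values.

def virtual_image_distance(f_mm, object_mm):
    # Thin-lens equation: 1/f = 1/do + 1/di  =>  di = 1/(1/f - 1/do).
    # With the display inside the focal length, di is negative,
    # i.e. a virtual image appears behind the display.
    return 1.0 / (1.0 / f_mm - 1.0 / object_mm)

f = 40.0                       # hypothetical focal length, mm
for gap in (39.0, 38.5, 38.0): # display-to-lens separation, mm
    di = virtual_image_distance(f, gap)
    print(f"gap {gap} mm -> virtual image at {abs(di)/1000:.2f} m")
```

With these numbers, a 1 mm sweep moves the virtual image from roughly 1.5 m to 0.8 m, which is why sub-millimeter actuation can be enough for varifocal.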

For virtual reality to reach Mark Zuckerberg’s lofty goal of 1 billion users, headsets need to get significantly more comfortable while increasing realism. While designs like the Rift S “halo strap” can redistribute weight, that is more of a bandage than a true fix for the underlying bulk.

Like all early research, this idea may never pan out. Practical issues may emerge. Facebook is simultaneously exploring a number of novel compact display architectures. If it can make even one work, it could do to VR what LCD panels did to CRT monitors and televisions.

Facebook’s research paper concludes:

“Lightweight, high resolution, and sunglasses-like VR displays may be the key to enabling the next generation of demanding virtual reality applications that can be taken advantage of anywhere and for extended periods of time. We made progress towards this goal by proposing a new design space for virtual reality displays that combines polarization-based optical folding, holographic optics, and a host of supporting technologies to demonstrate full color display, sunglasses-like form factors, and high resolution across a series of hardware prototypes. Many practical challenges remain: we must achieve a full color display in a sunglasses-like form factor, obtain a larger viewing eye box, and work to suppress ghost images. In doing so, we hope to be one step closer to achieving ubiquitous and immersive computing platforms that increase productivity and bridge physical distance.”

The post Facebook Researchers Show The Most Compact VR Optics Yet appeared first on UploadVR.

Spatial Makes Holographic-Style Virtual Meetings A Reality On Oculus Quest

I’m not often at a loss for words, but as I re-entered the real world after my second holographic media briefing this month, I realized that I was struggling to speak or type. Mentally, the sensation was awe — my sincere belief that I had just experienced the future of remote work and meetings. Yet physically, I was fighting off nausea, a reminder that though collaborative mixed reality experiences are now affordable and practical, people may not be ready for them to become the new work-from-home normal.

The breakthrough here is Spatial, a collaborative workspace app that just became available for the popular Oculus Quest VR headset. It’s not hyperbole to say that Spatial has unilaterally reignited my enthusiasm for the Quest, which has recently gathered dust on my desk, as the potent pairing enables me to quickly participate in 3D group meetings filled with multiple realistic participants. Instead of using cartoony avatars or floating video tiles, Spatial users appear as “holograms” with real faces, motion-sensed head and hand movements, and even lip motions keyed to their live voices.

At a time when workers are largely confined to home offices and prevented from attending physical gatherings, Spatial meetings feel like actual gatherings — and safe ones. Each of the briefings I’ve attended during the COVID-19 pandemic has been in a clean virtual meeting room, a welcome change from the crowded hotel ballrooms and convention halls typically used for major product announcements. In a Spatial gathering, there’s no need to worry about wearing a mask over your nose and mouth, but over time, you may notice the weight of the mixed reality headset.

Until recently, the Spatial holographic experience required a multi-thousand-dollar Microsoft or Magic Leap AR headset, but Spatial wisely widened its cross-platform support and temporarily dropped its pricing to bring more users to the table. For the duration of the pandemic, Spatial can be used for free by both enterprises and end users, giving businesses every incentive to test it out with the popular, fully standalone Quest — assuming they can find one (or a few) in stores. (Defying “VR is dead” pundits, the $399-$499 headsets keep selling out every time they briefly hit online store shelves.)

I can’t help but be impressed by the overall quality of the Spatial gatherings I’ve attended. While there have been tiny issues here and there, the totality of the experience is surprisingly, perhaps even amazingly, fluid. Step back for a moment and consider all the challenges of having five or ten people in different cities all interacting plausibly within a virtual space — collectively watching a live presentation, passing 3D objects back and forth, and taking turns talking — without constant hiccups. It’s somewhat remarkable that the biggest issues I’ve seen involved one participant dropping out due to a dead headset battery, and another experiencing a beta app crash. Early streaming video services couldn’t even do that much properly without frequent buffering, and Spatial makes 20 times as much complexity seem synchronous and effortless to its users.

On the other hand, I felt a little queasy as I took off the VR gear following an hour-long meeting, and I’m not exactly sure what did it. Was it the length of time I spent immersed? Or something about returning to the real world after focusing my eyes on the Quest’s 3D screens? As a fairly frequent VR user, I haven’t had these sensations for a long time, but I suspect that my eyes were trying to stay focused on some static visible pixels while my head moved during the presentation, and that eventually made me feel sick.


For Spatial and the companies that make mixed reality headsets, overcoming that sort of practical usability hurdle may seem like the final step in popularizing virtual work-from-home solutions. And initially, I might have agreed. It’s clear that virtual meetings that end with employees feeling nauseous aren’t the sort of “productivity” experience businesses are looking for. Moreover, Oculus and others are working on VR headsets with higher refresh rates and screen resolutions specifically to smooth the viewing experience for users, making it easier on their eyes and brains.

But as I think back to my latest meeting — where I had to stay focused on the presentation in front of me for an hour, without being able to take notes, check other apps, or attend to other real-world needs — I know that there’s another set of challenges yet to be tackled. Just like Apple’s iPad nailed the “right” tablet form factor but spent years struggling to get multitasking right, companies such as Spatial now have to formulate a cohesive modern XR work experience, one that’s more than just social gatherings, and speaks to the deeper, richer interactivity with objects and work tools that business users will expect to have in mixed reality spaces.

It goes without saying that delivering a comprehensive virtual working experience won’t be easy. After using Spatial, however, I’m optimistic that some great company or companies will make it happen in the not too distant future, and that holography and mixed reality will subsequently become as viable for working from home as desktop and laptop computing are today. I just hope I won’t need motion sickness medication to fully appreciate it.


The written content of this post by Jeremy Horwitz originally appeared on VentureBeat.

The post Spatial Makes Holographic-Style Virtual Meetings A Reality On Oculus Quest appeared first on UploadVR.

Unreal Engine Creators Can Visualise Their 3D Creations Using Looking Glass Factory’s Holographic Displays

To view 3D content without the need for glasses, virtual reality (VR) headsets or any other face-based contraption you’ll need a holographic display like The Looking Glass. Today, Looking Glass Factory has announced that its displays will now support those working in Unreal Engine (UE4), Epic Games’ popular videogame development software.


In collaboration with Epic Games, Looking Glass Factory has released a UE4 plugin so that content creators can visualise their designs using these holographic displays. The new feature can be used across a range of industries beyond videogames, such as automotive, architecture, mapping/GIS and medical imaging.

The Unreal Engine plugin feature list is as follows:

  • Real-time 3D view of content in Unreal’s Game View
  • Holographic 3D visuals in the editor and in builds
  • Support for buttons on Looking Glass displays
  • One-build deployment for 8.9″, 15.6″, and 8K units
  • Adjustable camera for clipping planes and FoV
  • Support for default image effects from Unreal, or customizable effects
  • Windows only (Linux/Ubuntu coming soon)
  • Leap Motion Controller support

“Having access to a glasses-free holographic display is a massive breakthrough, and presents an exciting prospect for teams working in immersive computer graphics, visualization and content creation,” explained Kim Libreri, CTO, Epic Games in a statement. “The Looking Glass holographic display provides a stunning level of realism, and we look forward to seeing the innovations that emerge with the support of Unreal Engine generated content.”

“Every day since we launched the Looking Glass in 2018, more and more engineers and designers would reach out and ask when we would support Unreal Engine,” adds Shawn Frayne, CEO & co-founder of Looking Glass Factory. “That’s why we’re so excited to announce the UE4 plugin for the Looking Glass today. Now studios around the world can make holographic experiences that go beyond anything ever seen before.”

Looking Glass Factory’s holographic displays start from $599 USD for the 8.9″ model with 15.6″ and 8K displays also available. They allow multiple people to view content thanks to the light field technology generating 45 distinct and simultaneous perspectives. For further updates on Looking Glass Factory, keep reading VRFocus.

Light Field Lab Raises $28 Million For Huge Holographic Displays

Light Field Lab has announced it has raised $28 million in funding for its technology to build large holographic displays out of small building blocks.

Bosch Venture Capital and Taiwania Capital led the round. The rest of the money came from Samsung, Verizon Ventures, Comcast, Liberty Global Ventures, NTT Docomo Ventures, Hella Ventures, Khosla Ventures, Alumni Ventures Group, R7 Partners, and Acme Capital. It follows a $7 million seed round in January 2018.

San Jose, California-based Light Field Lab will use the money to scale its display technology from prototype to product. The aim is to create holographic objects that appear to be three dimensional and float in space without head-mounted gear such as augmented reality or virtual reality goggles.

Jon Karafin, CEO of Light Field Lab, told me in an interview in November that he wants to bring real-world holographic experiences to life with up to hundreds of gigapixels of resolution, including modular video walls for live events and large-scale installations.

“The ultimate goal is to enable the things that we all think of in science fiction as the hologram,” Karafin said. “There’s a lot of things out there, but you know, they say that flying cars and holograms are the two things that science fiction hasn’t yet quite delivered. And we’re going to at least get that started.”

How Light Field Lab’s technology works


Light Field Lab said that the world we know is largely based on the perception of our senses, with sight being the primary input. Everything around us is a collection of light energy visible through our eyes and processed by the visual cortex of the brain. The “light field” defines how photons travel through space and interact with material surfaces. The things that we ultimately see as the world around us are bundles of light that focus at the back of our eyes. The trick is getting your eyes to focus on a particular point in space.

Light Field Lab’s technology re-creates what optical physics calls a “real image” for off-screen projected objects by generating a massive number of viewing angles that correctly change with the point of view and location just like in the real world. This is accomplished with a directly emissive, modular, and flat-panel display surface coupled with a complex series of waveguides that modulate the dense field of collimated light rays. With this implementation, a viewer sees around objects when moving in any direction such that motion parallax is maintained, reflections and refractions behave correctly, and the eyes freely focus on the items formed in mid-air. The result is that the brain says, “this is real,” without having any physical objects. In other words, Light Field Lab creates real holograms with no headgear.
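The “massive number of viewing angles” can be pictured as a dense fan of discrete perspectives spread across a viewing cone, with the display steering a different view to each direction. A toy sketch of that idea follows; the view count and cone angle are arbitrary placeholders, not Light Field Lab specifications:

```python
# Toy model: a light-field display presents one of N discrete
# perspectives depending on the horizontal angle it is viewed from.
# n_views and cone_deg are illustrative placeholders, not specs.

def view_index(angle_deg, n_views=45, cone_deg=50.0):
    # Map an angle inside the viewing cone to a view number 0..n-1.
    half = cone_deg / 2.0
    if not -half <= angle_deg <= half:
        return None                      # outside the viewing cone
    t = (angle_deg + half) / cone_deg    # 0.0 .. 1.0 across the cone
    return min(int(t * n_views), n_views - 1)

print(view_index(0.0))    # centre of the cone -> middle view (22)
print(view_index(-25.0))  # left edge of the cone -> view 0
```

As a viewer walks past the display, each eye continuously crosses view boundaries, which is what produces the motion parallax described above.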

The company plans to take smaller holographic image components and assemble them into very large images. Back in November, the company showed me a two-inch, see-through holographic image that it can produce as its basic core building block. There’s no head-tracking, no motion sickness, and no latency in the display. The image forms within a six-inch by four-inch volume, which serves as the core building block.

“Real-time physics-based rendering is a huge game for everything we’re doing, in addition to the fact that they already have plugins for the majority of every software package out there,” Karafin said. “We want to make sure that the way we roll our technologies out is starting with the really big thing, using large-scale, high-value entertainment experiences, showing something that is transformational that nobody has ever seen before.”

The building blocks will be assembled into large images, like a T-Rex at a museum that looks amazingly real to the children who are standing next to it. Over time, the tech could migrate to consumer applications.

“The type of experience that we will be providing and ultimately, is geared for the home for all consumer technologies,” Karafin said. “Obviously, we’re starting with big venues and big entertainment experiences where the consumer can go to those days to see it. But we are going to create and we will show you today that it is a fully interactive, social experience that you and your friends and your family we can all be in the same environment together and not be limited to some kind of isolated experience.”

The potential

Above: Concept art for Light Field Lab.

Image Credit: Light Field Lab

“Light Field Lab’s holographic display technologies enable entirely new business opportunities across consumer and enterprise markets,” said Ingo Ramesohl, managing director of co-lead investor Bosch Venture Capital, in a statement. “Light Field Lab has the leadership and technical expertise to bring this vision of the holographic future to life.”

Although Light Field Lab will initially target large-format location-based entertainment venues, a version of its holographic technologies will ultimately be developed for the consumer market. These investment partnerships pave the way toward widespread industry adoption and market expansion.

Strategic investors said they are excited to participate in the holographic ecosystem and continue to evaluate its potential.

“Light Field Lab’s holographic displays are the most exciting new technology we have seen in the entertainment space to date,” said Ankur Prakash, vice president at Liberty Global Ventures, in a statement. “We are thrilled to meaningfully participate in their Series A and are well-positioned to help them align with the industry’s top content creators and accelerate holographic media distribution on next-generation networks.”

In addition to holographic displays, Light Field Lab’s technology includes the hardware and software platform required for content distribution.

Above: Light Field Lab founders (from L-R): Ed Ibe, Brendan Bevensee and Jon Karafin.

Image Credit: Light Field Lab

“Verizon’s new 5G network features the higher bandwidth, low latency, and speed/throughput to deliver next generation content,” said Kristina Serafim, investment director at Verizon Ventures, in a statement. “Light Field Lab’s innovative solution will help build the 5G future for Verizon’s consumer, business, network, and media customers.”

The company was founded in 2017 by Karafin, Brendan Bevensee, and Ed Ibe, with the single mission to enable a holographic future by building upon the founders’ collective expertise of light field technology innovation. The company has 14 employees and a dozen contractors.

“The industry response has been extremely enthusiastic as evidenced by the strength of our investors,” said Karafin. “We look forward to working with our syndicate of manufacturing, content creation and distribution partners to uncover opportunities and alliances across a range of vertical markets as we take our technology to the next phase.”

“It’s a merger between the real and synthetic worlds,” he said. “That’s what we’re excited about bringing to market.”

Later on, in the second or third generation of the technology, the company wants to introduce the ability to touch, feel, and interact with the holograms, Karafin said.

“Everything that we are doing is leading up to the ability to create real mechanical things,” he said.

And yes, ultimately the company wants to create the Star Trek Holodeck.

This post by Dean Takahashi originally appeared on VentureBeat.

The post Light Field Lab Raises $28 Million For Huge Holographic Displays appeared first on UploadVR.

8i Raises $27 Million to Create Holo: Holograms for Your Phone


Holographic technology company 8i is announcing today that it has raised a $27 million Series B funding round for the development of new products, including its new smartphone hologram app: Holo.

Holo’s tagline encourages users to “mix your world with holograms.” According to 8i, Holo is a:

“…Consumer mobile app that gives people an easy way to create mixed reality content with holograms of their favorite celebrities, brands, and characters. Holo lets users add holograms to their real-world environments and take videos and photos they can share with friends across their social channels and messaging apps.”

Part of the technology that enables Holo’s dynamic AR imaging is Google’s Project Tango. Tango is a unique system of cameras, depth sensors and software that is being built into more and more smartphones. Holo’s beta is currently being tested by select users on the Tango-enabled Lenovo Phab 2 Pro. The purpose of the beta, according to 8i, is to see “how users interact with the new technology and a limited selection of sample 3D holograms.”

The company plans to release “a new version of the app with content partners and programming later this year on Tango-enabled smartphones, and other mobile devices.”

8i made headlines last month when it announced its new CEO Steve Raymond as well as a Seattle-based research facility that seems to be working on bringing holographic telepresence to mass communications. As one of the first 8i products ever to be released, Holo will continue 8i’s apparent mission: to create high quality holograms and show the world the many ways they can be used.

Holo itself seems to be mostly a Snapchat-esque entertainment product that focuses primarily on giggling with your friends as goofy holograms prance across your screen. However, there is serious technological innovation behind those giggles. Tango is one of the most important AR systems in existence today, and the widespread distribution of high-quality holograms has never truly been attempted on a consumer level. Holo may seem innocent, but it’s also a chance for 8i to introduce holograms to the world and whet our collective appetites for what’s coming next.

8i’s $27 million Series B was led by Time Warner Investments, with participation from Baidu Ventures (its very first VR/AR investment), Hearst Ventures, Verizon Ventures, One Ventures, Carsten Maschmeyer’s Seed & Speed Ventures, and existing investors.


‘HOLOSCOPE’ Headset Claims to Solve AR Display Hurdle with True Holography

Holo-this, holo-that. Holograms are so bamboozling that the term often gets used colloquially to mean ‘fancy-looking 3D image’, but holograms are actually a very specific and interesting method for capturing light field scenes which have some real advantages over other methods of displaying 3D imagery. RealView claims to be using real holography to solve a major problem inherent to AR and VR headsets of today, the vergence-accommodation conflict. Our favorite holo-skeptic, Oliver Kreylos, examines what we know about the company’s approach so far.


Guest Article by Dr. Oliver Kreylos

Oliver is a researcher with the UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES). He has been developing virtual reality as a tool for scientific discovery since 1998, and is the creator of the open-source Vrui VR toolkit. He frequents reddit as /u/Doc_Ok, tweets as @okreylos, and blogs about VR-related topics at Doc-Ok.org.


RealView recently announced plans to turn their previous desktop holographic display tech into the HOLOSCOPE augmented reality headset. This new headset is similar to Magic Leap‘s AR efforts in two big ways: one, it aims to address the issue of vergence-accommodation conflict inherent in current VR headsets such as Oculus Rift or Vive, and AR headsets such as Microsoft’s HoloLens; and two, we know almost no details about it. Here they explain vergence-accommodation conflict:

Note that there is a mistake around the 1:00 mark: while it is true that the image will be blurry, it will only split if the headset is not configured correctly. Specifically, that will not happen with HoloLens when the viewer’s inter-pupillary distance is dialed in correctly.

Unlike pretty much everybody else using the holo- prefix or throwing the term “hologram” around, RealView vehemently claims their display is based on honest-to-goodness real interference-pattern based holograms, of the computer-generated variety. To get this out of the way: yes, that stuff actually exists. Here is a Nature article about the HoloVideo system created at MIT Media Lab.

The remaining questions are how exactly RealView creates these holograms, and how well a display based on holograms will work in practice. Unfortunately, due to the lack of known details, we can only speculate. And speculate I will. As a starting point, here is a demo video, allegedly shot through the display and without any special effects:

I say allegedly, but I do believe this to be true. The resolution is surprisingly high and quality is surprisingly good, but the degree of transparency in the virtual object (note the fingers shining through) is consistent with real holograms (which only add to the light from the real environment shining through the display’s visor).

There is one peculiar thing I noticed on RealView’s web site and videos: the phrase “multiple or dynamic focal planes.” This seems odd in the context of real holograms, which, being real three-dimensional images, don’t really have focal planes. Digging a little deeper, there is a possible explanation. According to the Wikipedia entry for computer-generated holography, one of the simpler algorithms to generate the required interference patterns, Fourier transform, is only able to create holograms of 2D images. Another method, point source holograms, can create holograms of arbitrary 3D objects, but has much higher computational complexity. Maybe RealView does not directly create 3D holograms, but instead projects slices of virtual 3D objects onto a set of image planes at different depths, creates interference patterns for the resulting 2D images using Fourier transform, and then composes the partial holograms into a multi-plane hologram. I want to reiterate that this is mere speculation.
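The Fourier-transform method mentioned above can be sketched in a few lines: give the 2D target image a random diffuser phase, take its FFT to reach the hologram plane, and keep only the phase (a “kinoform”) that a phase modulator could display. This is a minimal illustration of the textbook technique, not RealView’s actual pipeline:

```python
# Minimal sketch of Fourier-transform computer-generated holography.
# Real systems refine the result iteratively (e.g. Gerchberg-Saxton);
# this single-shot version produces a speckly but recognizable image.
import numpy as np

rng = np.random.default_rng(0)

def fourier_hologram(target):
    # A random phase acts as a diffuser, spreading energy across
    # the hologram plane before the Fourier transform.
    field = target * np.exp(1j * 2 * np.pi * rng.random(target.shape))
    return np.angle(np.fft.fft2(field))      # phase-only "kinoform"

def reconstruct(phase_hologram):
    # A lens performs the inverse Fourier transform optically.
    return np.abs(np.fft.ifft2(np.exp(1j * phase_hologram)))

target = np.zeros((64, 64))
target[24:40, 24:40] = 1.0                   # simple square target
recon = reconstruct(fourier_hologram(target))
# Reconstructed energy should concentrate where the target is bright.
print(recon[24:40, 24:40].mean() > recon.mean())
```

Point-source holograms, by contrast, sum a spherical-wave contribution from every 3D point, which is why their computational cost is so much higher.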

This would literally create multiple focal planes, and allow the creation of dynamic focal planes depending on application or interaction needs, and could potentially explain the odd language and the high quality of holograms in above video. The primary downside of slice-based holograms would be motion parallax: in a desktop system, the illusion of a solid object would break down as the viewer moves laterally to the holographic screen. Fortunately, in head-mounted displays the screen is bolted to the viewer’s head, solving the problem.


So while RealView’s underlying technology appears legit, it is unknown how close they are to a real product. The device used to shoot above video is never shown or seen, and a picture from the web site’s medical section shows a large apparatus that is decidedly not head-mounted. I believe all other product pictures on the web site to be concept renders, some of them appearing to be (poorly) ‘shopped stock photos. There are no details on resolution, frame rate, brightness or other image specs, and any mention of head tracking is suspiciously absent. Even real holograms need head tracking to work if the holographic screen is moving in space by virtue of being attached to a person’s head. Also, the web site provides no details on the special scanners that are required for real-time direct in-your-hand interaction.

Finally, there is no mention of field of view. As HoloLens demonstrates, field of view is important for AR, and difficult to achieve. Maybe this photo from RealView’s web site is a veiled indication of FoV:

I’m just kidding, don’t be mad.

In conclusion, while we know next to nothing definitive about this potential product, computer-generated holography is a thing that really exists, and AR displays based on it could be contenders. Details remain to be seen, but any advancements to computer-generated holography would be highly welcome.

The post ‘HOLOSCOPE’ Headset Claims to Solve AR Display Hurdle with True Holography appeared first on Road to VR.