Meta announced and launched pre-orders for its Meta 2 AR headset dev kit more than a year ago, and expected to begin shipping the device at the end of 2016. While the company says a small number of headsets have indeed gone out the door, Meta acknowledged in May that manufacturing delays had prevented the bulk of pre-orderers from getting their headsets in a timely fashion. A recent update from the company suggests that shipping quantities will pick up in August.
Update (7/30/17): After offering only a vague “soon” timeframe for when shipping of the Meta 2 AR dev kit would pick up beyond the “limited quantities” mentioned back in May, Meta has updated the FAQ on the headset’s pre-order page to note that “Volume shipping will occur in mid Q3 2017,” which would fall in the month of August.
“The number one topic my customer success team has been receiving over the past couple months is ‘when will my headset ship?’,” says Meta’s Senior Director of Customer Success, Gary Garcia. Not surprising, I would say, as I’m not sure what other questions you might ask prior to actually having the hardware.
Garcia says that things started off well but the company eventually ran into issues as they tried to scale up their manufacturing and processing procedures. He says the company has doubled the size of their manufacturing floor as they have tackled the issues.
Aside from acknowledging the delays, the company doesn’t offer any specifics about how many headsets have actually shipped at this point, and offers only a vague update on when developers can expect their headsets.
“Soon headsets will be rolling off the manufacturing line in greater volume.”
The Meta 2 augmented reality dev kit began pre-orders in March of 2016 and has apparently seen quite a bit of interest. By the end of 2016, the company says they began shipping the first Meta 2 dev kits, and by the end of 2017 they expect to ship some 10,000 headsets.
“There’s more Meta 2 pre-orders than there are HoloLens headsets in the world,” Ryan Pamplin, VP of Sales & Partnerships at Meta, told me during a recent visit to check out the latest changes to the company’s AR headset. The Meta 2 uniquely features an impressive 90 degree field of view that’s a huge step up in immersion from the company’s prior Meta 1 device and other AR headsets in its class.
Actually, the Meta 2 was not originally going to be the headset to follow the Meta 1. Back in 2013 the company had announced and taken pre-orders for a $3,000 headset called the Meta Pro. But Pamplin says that the company heard resounding feedback from those working with the original Meta 1, saying that the 36 degree field of view was simply not enough. Meta Pro was only set to bump it up to 40 degrees; at some point the company realized that the Meta Pro was not the right move, and Pamplin told me that they never charged any pre-orders.
Then in March 2016 the company revealed the Meta 2, this time with a huge increase in field of view (and a more digestible $950 price tag), one that makes the headset the most visually immersive of AR devices in its category.
The big difference in field of view came from a total overhaul of the headset’s optics, which now use a dual large-combiner approach rather than the waveguide approach of the Meta 1 and (would-be) Meta Pro. Waveguide optics are great because they’re thin and enable very small form-factor designs, but they have their drawbacks.
“Waveguide optics cap out around 50 degrees field of view,” Pamplin said. And while the hope is that one day we’ll be able to have AR glasses with an immersive field of view while managing to be no bigger than a pair of sunglasses, Meta realized that developers needed a wide field of view AR headset to begin building applications on today, in parallel with continued optical R&D; hence the bulky but functional Meta 2.
And it seems developers are in agreement with the approach, with the company on track to ship some 10,000 Meta 2 dev kits in 2017, according to Pamplin, who says that all the initial pre-orders will ship in the next few months.
That’s a strong start when you consider not only the $950 price tag, but that these are ostensibly developers buying the device purely so they can build things for other people to use. This traction reminds me of the early excitement around the first Rift DK1 development kit, which would go on to sell around 56,000 units over roughly two years before being replaced by the Rift DK2, which would eventually double that figure.
Good developer traction at the outset is good not just for Meta, but for the entire AR space. VR in general would not be where it is today if Oculus hadn’t been able to generate early excitement and ship more than 175,000 DK1 & DK2 units to developers and early enthusiasts who were figuring out what works well in VR and what doesn’t. Having a hardware platform that’s good enough to stir the imaginations of developers seeds a foundation of conceptual and technical knowledge that boosts other AR headsets and platforms too.
Indeed, even though it may be a few years until we see widely available consumer-ready AR devices, the AR headset space is rapidly heating up. While Meta has raised a considerable $73 million in venture capital to fund their ongoing development, giants like Microsoft, Magic Leap, and perhaps even Apple loom.
Despite the competition, the benefit of having so much activity happening in the augmented reality space is not lost on Pamplin. “Bring it on,” he says, welcoming the developments and contributions that are sure to come from many different players.
Outside of a few demonstrations at tradeshows, Meta has only given glimpses of the content on its impressive AR headset dev kit in short clips. During a recent visit to the company’s Silicon Valley headquarters to catch the latest improvements to the headset, I got to see a selection of demo content which Meta has allowed Road to VR to capture and share in full for the first time.
Having initially taken pre-orders for the $3,000 Meta Pro glasses back in 2013, the company rebooted the headset and revealed it in early 2016 as the Meta 2, this time poised as a more affordable (at $949) development kit with a much wider and more immersive field of view at 90 degrees (compared to 36 degrees on the original). After taking pre-orders throughout 2016, Meta tells Road to VR that the Meta 2 dev kit is shipping in limited quantities, and the company expects to ramp up shipments in the months ahead.
Many people right now think that the VR and AR development timelines are right on top of each other—and it’s hard to blame them because superficially the two are conceptually similar—but the reality is that AR is, at best, where VR was in 2013 (the year the DK1 launched). That is to say, it will likely be three more years until we see the ‘Rifts’ and ‘Vives’ of the AR world shipping to consumers.
Without such recognition, you might look at Meta’s teaser video and call it overhyped. But with it, you can see that it might just be spot on, albeit on a delayed timescale compared to where many people think AR is today.
Like Rift DK1 in 2013, Meta 2 isn’t perfect in 2016, but it could play an equally important role in the development of consumer augmented reality.
At the time we weren’t able to share an in-depth look at what it was actually like to use the headset, save for a few short clips taken from promotional videos released by the company. Now, running on the shipping version of the development kit, Meta has exclusively allowed us to capture and share the full length of several of the headset’s demo experiences, blemishes and all.
A few notes to make sense of the videos below:
The footage is captured with the Meta 2’s front-facing camera, and then overlaid with the AR content, so this isn’t precisely what it looks like through the headset’s lens itself, but it’s pretty darn close.
Through the headset, the image actually looks a lot sharper than what you’re seeing here; all the text in the demos was quite readable. For comparison, the captures are done at 1280×720 (and not at a very good bit-rate), while the Meta 2 provides a 1280×1440 per-eye resolution.
The AR content is rendered to be correct for my perspective, not the camera’s. That means that when I reach out to touch things it sometimes looks like my hand is way off from the camera’s vantage point, but to me it actually looks like I’m touching the objects directly (which feels surprisingly cool when you reach out and it’s right where you think it is).
Occlusion is not rendered in these videos, though it is present in the view through the headset, so while it looks like I’m reaching behind some objects (also an issue of the above point), there’s some (rough) occlusion happening in my view which, combined with stereoscopy, maintains the illusion that the objects are actually behind my hands and out into the real world.
The tracking fidelity and latency seen in the videos is a fair representation of the current latency seen through the lens itself.
You will see some moments when I clip through the AR content, though the clipping doesn’t align with my own view through the headset due to the camera’s different perspective.
The picture-in-picture video from the outside view is hand-synchronized and isn’t representative of actual latency.
The field of view seen here is technically representative of what it looks like in the headset, except that the field of view of the human eye is much wider than what the camera captures; thus while the AR content can be seen here stretching across the entire width of the camera’s captured field of view, it doesn’t stretch across the entire human field of view. That said, it’s quite wide and much more compelling compared to a lot of other AR headsets out there.
There’s some occasional jumpiness in the framerate of the capture (especially during the brain scene), likely due to the capture process struggling to keep up; through the headset the scenes were perfectly smooth.
Meta 2 Evolving Brain Demo
Meta 2 Cubes, Earth, and Car Demo
Meta 2’s biggest strength is its wide field of view and its biggest weakness is tracking fidelity (both latency and jitter). I’ve often said that if you mashed together HoloLens’ tracking and Meta’s field of view, you’d have something truly compelling. Meta said they would ship with “excellent” inside-out tracking, but as of today they aren’t there yet.
That said, this is a development kit. The company tells me that there’s still optimization to be done along the render pipeline, including the use of timewarp—an important latency-reducing technique—which is not currently implemented.
It’s much like the Rift DK1 back in 2013: that headset didn’t have perfect head tracking or low enough latency, but it was still enough to stir the imaginations of developers and get the ball rolling on app development. Meta knows that their tracking isn’t consumer ready yet, but the company intends to get there (and they seem committed to doing it in-house). Meanwhile, the Meta 2 could serve as that kick-in-the-imagination that gets a core community of developers excited about the possibilities of AR and actually starting to build for the platform.
Holo-this, holo-that. Holograms are so bamboozling that the term often gets used colloquially to mean ‘fancy-looking 3D image’, but holograms are actually a very specific and interesting method for capturing light field scenes which have some real advantages over other methods of displaying 3D imagery. RealView claims to be using real holography to solve a major problem inherent to AR and VR headsets of today, the vergence-accommodation conflict. Our favorite holo-skeptic, Oliver Kreylos, examines what we know about the company’s approach so far.
Guest Article by Dr. Oliver Kreylos
Oliver is a researcher with the UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES). He has been developing virtual reality as a tool for scientific discovery since 1998, and is the creator of the open-source Vrui VR toolkit. He frequents reddit as /u/Doc_Ok, tweets as @okreylos, and blogs about VR-related topics at Doc-Ok.org.
RealView recently announced plans to turn their previous desktop holographic display tech into the HOLOSCOPE augmented reality headset. This new headset is similar to Magic Leap‘s AR efforts in two big ways: one, it aims to address the issue of vergence-accommodation conflict inherent in current VR headsets such as Oculus Rift or Vive, and AR headsets such as Microsoft’s HoloLens; and two, we know almost no details about it. Here they explain vergence-accommodation conflict:
Note that there is a mistake around the 1:00 minute mark: while it is true that the image will be blurry, it will only split if the headset is not configured correctly. Specifically, that will not happen with HoloLens when the viewer’s inter-pupillary distance is dialed in correctly.
The remaining questions are how exactly RealView creates these holograms, and how well a display based on holograms will work in practice. Unfortunately, due to the lack of known details, we can only speculate. And speculate I will. As a starting point, here is a demo video, allegedly shot through the display and without any special effects:
I say allegedly, but I do believe this to be true. The resolution is surprisingly high and quality is surprisingly good, but the degree of transparency in the virtual object (note the fingers shining through) is consistent with real holograms (which only add to the light from the real environment shining through the display’s visor).
There is one peculiar thing I noticed on RealView’s web site and videos: the phrase “multiple or dynamic focal planes.” This seems odd in the context of real holograms, which, being real three-dimensional images, don’t really have focal planes. Digging a little deeper, there is a possible explanation. According to the Wikipedia entry for computer-generated holography, one of the simpler algorithms to generate the required interference patterns, Fourier transform, is only able to create holograms of 2D images. Another method, point source holograms, can create holograms of arbitrary 3D objects, but has much higher computational complexity. Maybe RealView does not directly create 3D holograms, but instead projects slices of virtual 3D objects onto a set of image planes at different depths, creates interference patterns for the resulting 2D images using Fourier transform, and then composes the partial holograms into a multi-plane hologram. I want to reiterate that this is mere speculation.
This would literally create multiple focal planes, and allow the creation of dynamic focal planes depending on application or interaction needs, and could potentially explain both the odd language and the high quality of holograms in the above video. The primary downside of slice-based holograms would be motion parallax: in a desktop system, the illusion of a solid object would break down as the viewer moves laterally relative to the holographic screen. Fortunately, in head-mounted displays the screen is bolted to the viewer’s head, solving the problem.
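To make the speculated slice-based approach concrete, here is a minimal Python/NumPy sketch. It is purely illustrative and assumes nothing about RealView's actual implementation: the wavelength, pixel pitch, slice depths, and use of the Fresnel transfer-function method are all my own assumptions. Each 2D depth slice is propagated to a common hologram plane, the fields are summed, interfered with a plane reference wave, and the resulting intensity pattern is recorded.

```python
import numpy as np

WAVELENGTH = 532e-9  # green laser, in metres (illustrative choice)
PITCH = 8e-6         # hologram-plane sample spacing, in metres (assumed)

def propagate(field, z):
    """Fresnel-propagate a complex field by distance z (transfer-function method)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=PITCH)
    fy = np.fft.fftfreq(ny, d=PITCH)
    fxx, fyy = np.meshgrid(fx, fy)
    # Paraxial (Fresnel) transfer function in the spatial-frequency domain
    h = np.exp(-1j * np.pi * WAVELENGTH * z * (fxx**2 + fyy**2))
    return np.fft.ifft2(np.fft.fft2(field) * h)

def multiplane_hologram(slices, depths):
    """Propagate each 2D slice to the hologram plane, sum the resulting
    fields, interfere with a unit plane reference wave, record intensity."""
    total = sum(propagate(s.astype(complex), z) for s, z in zip(slices, depths))
    reference = 1.0  # on-axis plane wave: the simplest possible reference
    return np.abs(total + reference) ** 2

# Two toy depth slices standing in for a sliced 3D scene
near = np.zeros((64, 64)); near[20:30, 20:30] = 1.0
far = np.zeros((64, 64)); far[40:50, 40:50] = 1.0
holo = multiplane_hologram([near, far], depths=[0.05, 0.10])
```

Because each slice gets its own depth-dependent phase before the fields are combined, a single recorded pattern can reconstruct content at several focal distances, which is what the phrase “multiple or dynamic focal planes” would suggest.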
So while RealView’s underlying technology appears legit, it is unknown how close they are to a real product. The device used to shoot the above video is never shown or seen, and a picture from the web site’s medical section shows a large apparatus that is decidedly not head-mounted. I believe all other product pictures on the web site to be concept renders, some of them appearing to be (poorly) ‘shopped stock photos. There are no details on resolution, frame rate, brightness or other image specs, and any mention of head tracking is suspiciously absent. Even real holograms need head tracking to work if the holographic screen is moving in space by virtue of being attached to a person’s head. Also, the web site provides no details on the special scanners that are required for real-time direct in-your-hand interaction.
In conclusion, while we know next to nothing definitive about this potential product, computer-generated holography is a thing that really exists, and AR displays based on it could be contenders. Details remain to be seen, but any advancements to computer-generated holography would be highly welcome.
At CES 2017, Lumus is demonstrating its latest waveguide optics which achieve a 55 degree field of view from optics less than 2mm thick, potentially enabling a truly glasses-sized augmented reality headset.
Israel-based Lumus has been working on transparent optics since 2000. The company has developed a unique form of waveguide technology which allows images to be projected through and from incredibly thin glass. The tech has been sold in various forms for military and other non-consumer applications for years.
But, riding the wave of interest in consumer adoption of virtual and augmented reality, Lumus recently announced $45 million in Series C venture capital to propel the company’s technology into the consumer landscape.
“Lumus is determined to deliver on the promise of the consumer AR market by offering a range of optical displays to several key segments,” Lumus CEO Ben Weinberger says.
This week at CES 2017, Lumus was showing off what they’re calling the Maximus, a new optical engine from the company with an impressive 55 degree field of view. For those of us used to the world of 90+ degree VR headsets, 55 degrees may sound small, but it’s actually been difficult to achieve that level of field of view in a highly compact optical system. Meta has a class-leading 90 degree field of view, but requires sizeable optics. Lumus’ 55 degree field of view comes from a sliver of glass less than 2mm thick. Crucially, you can also get your eyes very close to the Maximus optics, potentially enabling truly glasses-sized augmented reality headsets.
Unlike some of the company’s other optical engines, which were shown integrated into development kit products, the Maximus was mounted in place and offered no chance to see any sort of tracking (though Lumus primarily deals in optical engines, not entire AR headsets).
Stepping up to the rig and looking inside, I saw an animated dragon flying through the air above the convention floor. The view was very sharp and, for an AR headset, felt like it had some real immersive potential. However, the contrast didn’t seem great, with bright white areas appearing blown out, and the image had a silvery holographic quality to it. This may indicate a lack of dynamic range, or that the display was not adjusting for ambient light in this demonstration. Brightness seems to be among the Maximus optical engine’s strong qualities: even without any dimming lenses to cut back on ambient light, the image was bright and clear. Ultimately I was very impressed by the capabilities of the Maximus optical engine. Assuming there are no major flaws in the display system, this waveguide technology could be a foundation for extremely compact AR glasses, similar in size to regular spectacles (something the AR industry has been attempting to achieve for some time now).
The image I saw in the Maximus was 1080p, quite sharp at the 55 degree field of view, though Dr. Eli Glikman said that the resolution is limited only by the microdisplay that feeds the image to the optics. With a higher resolution microdisplay (such as Kopin’s new 2k x 2k model perhaps), there’s great opportunity to scale image fidelity here.
Glikman said that the Lumus Maximus still has about a year of R&D left before it’s ready to be productized, but says that partner companies this year will introduce product prototypes based on the Maximus.
To prove that the company’s optical engines are capable of enabling glasses-sized AR headsets, Lumus was also showing a prototype headset they called ‘Sleek‘. It uses some of the company’s other optical engines and has a smaller field of view, but it’s made to show the impressively small form factor that these optics make possible.
How it Works
It’s actually a pretty awesome feat of physics to channel light down a slim piece of glass and then get it to pop out of that glass when and where you need it.
The Maximus optical engine, as seen at CES 2017, relied on the bulky electronics above the optics, which house a pair of microdisplays that serve as the light source for the optics. The image from each display is stretched and compressed to be emitted along the top of the lenses. From there it cascades down the optics and, from our understanding of Lumus’ proprietary technology, uses an array of prism-like structures in the glass to bounce certain sections of the injected light out toward the user’s eye. During that process, the image is reconstructed to match the one originating on the microdisplay (somewhat like the process of pre-warping visuals to cancel out the warping of a headset’s lenses).
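The physics that keeps the image confined to a 2mm slab in the first place is total internal reflection: light injected at a steep enough angle cannot escape the glass and bounces along it until an outcoupling structure ejects it. A minimal sketch of that trapping condition, assuming typical optical glass with a refractive index of about 1.5 (Lumus hasn't published the actual index of its waveguides):

```python
import math

def critical_angle_deg(n_core, n_clad=1.0):
    """Smallest angle of incidence (measured from the surface normal) at which
    light is totally internally reflected: theta_c = arcsin(n_clad / n_core)."""
    return math.degrees(math.asin(n_clad / n_core))

# For typical optical glass (n ~ 1.5) against air, rays hitting the surface
# at more than ~41.8 degrees from the normal stay trapped in the slab.
print(round(critical_angle_deg(1.5), 1))  # 41.8
```

That angular limit is also why waveguide designs struggle to widen their field of view: only rays within a restricted cone of angles can be carried down the slab and coupled back out, which is consistent with Pamplin’s remark elsewhere in this piece that waveguide optics cap out around 50 degrees.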
With Lumus’ advances in waveguide optics, coupled with other impressive microdisplay advances seen at CES this year, it seems that practical everyday solutions for lightweight augmented reality hardware are rapidly approaching. CES 2018 may prove to be a fascinating milestone for augmented reality.
Intel revealed at their CES 2017 press conference that Project Alloy, their standalone mixed reality headset and reference platform, would be productized “by” Q4 2017 in partnership with major OEMs.
Intel also emphasized that they would partner with anyone who wants to manufacture a Project Alloy headset, as they want it to be an open platform. Additionally, the company showed a new demo of the hardware, as well as a slew of possible uses that they’re helping to make possible for both their platform and others like the Rift and Vive.
At the press conference, Intel had 250 Oculus Rift stations which they used to show attendees the experiences they aimed to make possible in the future.
The examples they showed off ranged from Arizona Sunshine (2016), to live sports streaming in stereo 360 with Voke (which Intel acquired last year), to volumetric 360 recordings of a waterfall in Vietnam that users could move around in (with each frame consisting of 30 GB of data), as well as a real-time inspection of solar panels with 360 camera capture from a drone, all of course powered by Intel. The company announced that the Voke live VR video streaming platform, at least, would see a release on the Rift and Vive later this year, while it’s already out on Gear VR.
The press conference continued with Intel demonstrating a multiplayer gaming session with Project Alloy that incorporated motion controls as well as a scanned model of the real environment that would be used to mold the virtual environment.
They made sure to emphasize that they were playing without wires and seamlessly blending physical elements from the real world into the game: the walls faded away into open sky, and the couches could transform into cover, appearing as typical game props. They called it “merged reality”, a term the company has been trying to coin since debuting the headset in August.
Fresh off a $58 million capital raise, ODG today announced two new models of their R-series smartglasses, the R8 and R9, which add positional tracking, an improved field of view, and a closer-to-consumer price point.
ODG calls their new R8 and R9 “Consumer AR Smartglasses,” though with prices at $1,000 and $1,800 and fields of view of 40 degrees and 50 degrees respectively, I’m not sure everyone would agree with that characterization. Regardless, the glasses do represent a firm step toward augmented reality and the consumer market, as both models are said to be equipped with optical-based inside-out positional tracking and are significantly cheaper than their R7 predecessor, which was priced at $2,750. It seems the company plans to keep whittling prices down as they look toward consumer adoption.
Both smartglasses are based on Qualcomm’s powerful new Snapdragon 835 mobile processor, which is said to be well equipped for AR and VR use-cases. The glasses will tap into Qualcomm’s Snapdragon VR SDK to achieve positional tracking capabilities; in the past we’ve been impressed with the tracking of Qualcomm’s VR headset reference platform, and we hope to see the same positional tracking quality carry over to ODG’s new smartglasses. However, when we asked ODG how its positional tracking compared to that of HoloLens, we were told that the R8 and R9 weren’t built for the same level of tracking quality as HoloLens, so we’ll have to wait and see how well it stacks up.
Both devices have dual 1080p OLED displays, said to run at 80Hz, that use what the company characterized as a “folded optics approach” to achieving a transparent display. Both smartglasses run ODG’s Android-based ReticleOS, which can run regular Android apps in a legacy mode, but the company says they’re working with partners to show how apps can be expanded beyond the phone paradigm with augmented reality capabilities.
So what’s the difference between the R8 and R9? The more expensive R9 is a bit heavier (at around 6 ounces), has a wider 50 degree field of view, and leans more toward the enterprise and developer sectors. The key feature on the R9 is a special expansion port on top which ODG says taps directly into the headset’s hardware, offering huge potential for customization through aftermarket modules—like UV, night vision, or gesture input cameras—making the R9 the device of choice for niche use-cases. The headset also has a 13MP front-facing camera that’s capable of high-resolution or high-frame rate recording (up to 120FPS at lower resolutions). The R9 will be available first, with development kits launching in Q2 2017.
At just 4 ounces, and with a somewhat more sleek design, ODG says the less expensive R8 is positioned more toward the early consumer adopter. The 40 degree field of view is only slightly wider than the R7’s 37 degree field of view. One thing the R8 has that the R9 doesn’t is a 1080p stereo camera pair which can capture 3D video. Development kits of the R8 are planned to ship in Q3 2017.
Input on both devices revolves around buttons and controls on the glasses themselves (similar to Google Glass), as well as an option for phone-based control via an app, and support for Bluetooth accessories like keyboards and a Wiimote-like Bluetooth ring controller.
Lumus Ltd. recently confirmed a $30 million funding round led by Quanta and HTC. Lumus is a leading provider and developer of augmented reality technology.
Following from the $15 million funding led by Shanda Group and Crystal-Optech in June, Lumus Ltd. recently announced a $30 million Series C round of funding led by Quanta Computer, the world’s largest notebook computer ODM company, along with HTC and other strategic investors. “AR/VR is well aligned with our growth strategy and we’re pleased to invest in the Lumus optics solution for augmented reality”, says C.C. Leung, vice chairman and president of Taiwan-based Quanta. “This is pioneering technology, and we have great confidence in Lumus as an innovator and industry leader for transparent optical displays in the AR market.”
Founded in 2000, Lumus has established itself as a leading provider and developer of the core enabling technology for augmented reality. Their ‘Optical Engine’, which combines a patented Light-Guide Optical Element and a miniature projector, is already found in AR devices used in industries such as aviation, logistics, medical care, and the military. The technology is also being applied to consumer products in the development stage, with the current DK-50 development kit being provided to leading consumer electronics and smart eye-wear manufacturers, sporting a 40 degree field of view – larger than Microsoft’s HoloLens. Speaking to TechCrunch, Lumus CEO Ben Weinberger revealed that a prototype with a 50% larger field of view than the DK-50 will be shown at CES next month.
HTC’s interest is understandable, as their current involvement in VR will inevitably converge with AR in the near future. Alvin Wang Graylin, HTC China Regional President of Vive, recently predicted that the first ‘integrated selectable AR+VR product’ would arrive in 2018. “We are very committed to AR/VR,” says David Chang, COO of HTC. “Our current investment is aligned with HTC’s natural extension into augmented reality following our successful VIVE launch earlier this year.”
While the combined $45 million funding is eclipsed by the staggering $793.5 million Series C investment in Magic Leap earlier this year, the new financing is a tremendous boost to Lumus. According to the press release, they plan to ‘expand development, operations, and marketing of its display technology for the AR and smart eyewear industry’. The rapid rise of AR investment is expected to continue; in a recent IDC study, it was predicted that AR could become an everyday technology for more than a billion consumers within the next five years, with 30 percent of Global 2000 companies incorporating AR and VR into their marketing programs during 2017.
“This new funding will help Lumus continue to scale up our R&D and production in response to the growing demand from companies creating new augmented reality and mixed reality applications, including consumer electronics and smart eyeglasses,” says Ben Weinberger. “We also plan to ramp up our marketing efforts in order to realize and capture the tremendous potential of our unique technology to re-envision reality in the booming AR industry.”
Meta made sizeable waves in the AR space in early 2016 with the announcement, demonstration, and pre-order launch of their Meta 2 development kit. The company has published an unboxing video of the system, but with just over two weeks until 2017, it doesn’t appear the company will hit its 2016 delivery goal.
Meta opened pre-orders for the Meta 2 development kit back in Q1 of 2016 for $949, saying that orders would be filled in 2016. A new video this week from the company (seen at the top of this article) shows an unboxing of the system and makes a direct pitch to interested developers to lock in the $949 price before it goes up at the end of 2016.
The video doesn’t make any mention of a release date for the Meta 2 development kit, but the company’s website reads the same as it has since pre-orders opened, “The product is available for preorder now and will ship later this year. We will be in communication with preorder customers on timing.”
The unboxing video shows everything that comes with the kit, which includes a soft sleeve-case and a helpful resting stand for the headset to prevent its lenses from being scratched when not on your head. At the end we see a brief moment of a welcome sequence which shows a worrying amount of tracking jitter, even with minimal movement of the headset.
Back in March, we went hands-on with an earlier prototype of the Meta 2 at the company’s office and concluded that the promising headset could be an eye-opening AR platform for developers, with potential to “do for augmented reality what Rift DK1 did for virtual reality.” Since October, Road to VR has reached out to Meta on multiple occasions (to multiple points of contact) for an update on the release status of the Meta 2, but hasn’t received any response.
Meta told us previously that they expected to be able to supply “tens of thousands of units” in 2016, though the company hasn’t said how many pre-orders have been received.
With just over two weeks remaining in 2016, the chances of any substantial number of Meta 2 dev kits being delivered by the end of the year seem slim, but hopefully it won’t be too far into the New Year before devs start receiving headsets en masse.
San Francisco based Osterhout Design Group (aka ODG) announced today that they’ve raised a significant $58 million Series A investment to accelerate production and R&D of its smart glasses products.
ODG’s current flagship device is the enterprise-focused R-7 Smartglasses System, which sells for $2,750 a pop. The HMD, which has a small ~37 degree field of view compared to most VR headsets, is a self-contained computing device which runs a custom version of Android 4.4 (KitKat) and allows users to run a suite of proprietary apps, mostly as a heads-up-display type interface, though there is some capacity for headtracking and AR applications in the R-7. It may not be immersive yet, but ODG aspires to make their smart glasses product line into a major augmented reality platform.
The $58 million Series A investment will accelerate production and product R&D, expand patents, and pay for new hires, according to the company. Participating in the investment is Shenzhen O-film Tech Co., Vanfund Urban Investment & Development Co., along with 21st Century Fox and “several individual investors.”
ODG has been focused on custom enterprise solutions up to this point, but as their tech matures and reduces in price, the company has its eyes set on the consumer market as well; AR smartglasses will be the new mobile device everyone carries with them, the company believes.
Next month at CES 2017, ODG plans to reveal their latest product offering, though it isn’t clear yet if it will be a new consumer focused device or a continued iteration of the enterprise focused smart glasses the company has touted up to this point.
ODG’s impressive $58 million raise is sure to afford a major boost in capacity, but there’s no shortage of competition, even at this early stage. The company will be facing off against the likes of AR headset maker Meta, who has secured some $73 million in venture investments, along with Magic Leap, the somehow-still-in-stealth AR headset company that has raised a staggering $1.39 billion. All three companies stand in the looming shadow of HoloLens, backed by Microsoft’s whopping $472 billion market cap.