Qualcomm announced at Mobile World Congress (MWC) today that it’s partnering with seven global telecommunications companies in preparation for the next generation of AR glasses, which are set to work directly with the user’s smartphone.
Partners include CMCC, Deutsche Telekom, KDDI Corporation, NTT QONOQ, T-Mobile, Telefonica, and Vodafone, all of which are said to be working with Qualcomm on new XR devices, experiences, and developer initiatives, including Qualcomm’s Snapdragon Spaces XR developer platform.
Qualcomm announced Snapdragon Spaces in late 2021, a software toolkit focused on performance and low-power devices that lets developers create head-worn AR experiences from the ground up, or add head-worn AR to existing smartphone apps.
Qualcomm and Japan’s KDDI Corporation also announced a multi-year collaboration which the companies say will focus on expanding XR use cases and creating a developer program in Japan.
Meanwhile, Qualcomm says OEMs are designing “a new wave of devices for operators and beyond,” such as the newly unveiled Xiaomi Wireless AR Glass Discovery Edition, OPPO’s new Mixed Reality device, and the OnePlus 11 5G smartphone.
At least in Xiaomi’s case, its Wireless AR Glass headset streams data from compatible smartphones. Effectively offloading computation to the smartphone, the company’s 126g headset boasts wireless latency as low as 3ms between the smartphone and the glasses, and full-link latency as low as 50ms, which the company says is comparable to wired solutions.
The company now known as Meta has spent staggering amounts on creating an immersive successor to the traditional 2D internet. But what has it got to show for it, apart from 11,000 job losses?
What a difference a year makes. Last October, Facebook supremo Mark Zuckerberg could barely wait to show the world what he was up to. “Today, we’re going to talk about the metaverse,” he enthused in a slick video presentation. “I want to share what we imagine is possible.” Transitioning almost seamlessly from his real self into a computer-generated avatar, Zuckerberg guided us through his vision for the virtual-reality future: playing poker in space with your buddies; sharing cool stuff; having work meetings and birthday parties with people on the other side of the world; customising your avatar (the avatars had no legs, which was weird). Zuckerberg was so all-in on the metaverse, he even rechristened his company Meta.
This month, we saw a more subdued Zuckerberg on display: “I wanna say upfront that I take full responsibility for this decision,” he told employees morosely. “This was ultimately my call and it was one of the hardest calls that I’ve had to make in the 18 years of running the company.” Meta was laying off 11,000 people – 13% of its workforce. Poor third-quarter results had seen Meta’s share price drop by 25%, wiping $80bn off the company’s value. Reality Labs, Meta’s metaverse division, had lost $3.7bn in the past three months, with worse expected to come. It wasn’t all bad news, though: Zuckerberg announced last month that Meta avatars would at last be getting legs.
Expo in east London shows how important augmented and virtual reality will be, as attractions move with the times
In the fight for theme park visitors the battle lines have been drawn – monster trucks, virtual reality zombie warfare and “smellscaping”, just thankfully not all at the same time.
And while there was a sombre atmosphere around parts of London as tens of thousands lined up to pay their respects to the Queen, 10,000 more were gathered in a convention centre in east London experiencing the future of the theme park.
In a previous post, we shared with you a list of every Augmented Reality & Virtual Reality Movie from KBZ Film. KBZ Film maintains detailed lists of films and bills itself as “The Internet’s Largest Collection of Subgenre and Microgenre Film Lists”. After viewing their AR & VR Film List, we can attest to its completeness, as they have listed quite a few obscure AR & VR films – some of which we think every AR & VR fan should check out, including Sleep Dealer and Anon.
Venture capitalist Matthew Ball’s new book explores the three-dimensional virtual world that is set to supersede the net. What might this alternative digital reality have in store for users?
Venture capitalist Matthew Ball first wrote about the metaverse in 2018, and his essays have become essential reading for entrepreneurs and tech watchers attempting to understand, or profit from, the network Mark Zuckerberg and many others anticipate will supersede the internet. Ball is the former head of strategy at Amazon Studios, and his first book, The Metaverse: How It Will Revolutionize Everything, is published later in July.
What is the metaverse? It is a persistent network of 3D spaces. Almost everything online today – all applications, digital operating systems, webpages – works on common protocols and technology that connects them. The metaverse is a 3D elevation of the online world, which spans augmented reality – virtual simulations layered over the world around us – as well as much of consumer leisure and socialising.
I hadn’t had a chance to see Mojo Vision’s latest smart contact lens for myself until recently, and I’ll admit I expected the company was still years away from a working contact lens with more than just a simple notification light or a handful of static pixels. Looking through the company’s latest prototype, I was impressed to find it far more capable than I had expected.
When I walked into Mojo Vision’s demo suite at AWE 2022 last month I was handed a hard contact lens that I assumed was a mockup of the tech the company hoped to eventually shrink and fit into the lens. But no… the company said this was a functional prototype, and everything inside the lens was real, working hardware.
Image courtesy Mojo Vision
The company tells me this latest prototype includes the “world’s smallest” MicroLED display—at a minuscule 0.48mm across, with just 1.8 microns between pixels—an ARM processor, 5GHz radio, IMU (with accelerometer, gyro, and magnetometer), “medical-grade micro-batteries,” and a power management circuit with wireless recharging components.
And while the Mojo Vision smart contact lens is still much thicker than a typical contact lens, last week the company demonstrated that this prototype can work in an actual human eye, using Mojo Vision CEO Drew Perkins as the guinea pig.
Image courtesy Mojo Vision
And while this looks, well… fairly creepy when actually worn in the eye, the company tells me that, in addition to making it thinner, they’ll cover the electronics with cosmetic irises to make it look more natural in the future.
At AWE I wasn’t able to put the contact lens in my own eye (Covid be damned). Instead the company had the lens attached to a tethered stick which I held up to my eye to peer through.
Photo by Road to VR
When I did, I was surprised to see not just a handful of pixels but a full-blown graphical user interface with readable text and interface elements. It’s all monochrome green for now (taking advantage of the human eye’s ability to see green better than any other color), but the demo clearly shows that Mojo Vision’s ambitions are more than just a pipe dream.
Despite the physical display in the lens itself being opaque and directly in the middle of your eye, you can’t actually see it because it’s simply too small and too close. But you can see the image that it projects.
Photo by Road to VR
Compared to every HMD that exists today, Mojo Vision’s smart contact lens is particularly interesting because it moves with your eye. That means the display itself—despite having a very small 15° field-of-view—moves with your vision as you look around. And it’s always sharp no matter where you look because it’s always over your fovea (the center part of the retina that sees the most detail). In essence, it’s like having ‘built-in’ foveated rendering. A limited FoV remains a bottleneck to many use-cases, but having the display actually move with your eye alleviates the limitation at least somewhat.
But what about input? Mojo Vision has also been hard at work figuring out how users will interact with the device. As I wasn’t able to put the lens into my own eye, the company instead put me in a VR headset with eye-tracking to emulate what it would be like to use the smart contact lens itself. Inside the headset I saw roughly the same interface I had seen through the demo contact lens, but now I could interact with it using my eyes.
The current implementation doesn’t constrain the entire interface to the small field-of-view. Instead, your gaze acts as a sort of ‘spotlight’ which reveals a larger interface as you move your eyes around. You can interact with parts of the interface by hovering your gaze on a button to do things like show the current weather or recent text messages.
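To make that interaction model concrete, here’s a minimal sketch of dwell-based gaze selection. Mojo Vision hasn’t published an SDK, so every type, name, and threshold below is hypothetical, invented purely for illustration of the technique:

```swift
import CoreGraphics
import Foundation

// Hypothetical types; Mojo Vision's actual interface and SDK are not public.
struct GazeButton {
    let id: String
    let bounds: CGRect   // region of the virtual interface, in display coordinates
}

final class DwellSelector {
    private var dwellStart: [String: TimeInterval] = [:]
    private let dwellThreshold: TimeInterval = 0.8   // assumed dwell time before activation

    /// Returns the button activated by holding the gaze on it, if any.
    func update(gaze: CGPoint, buttons: [GazeButton], now: TimeInterval) -> GazeButton? {
        for button in buttons {
            if button.bounds.contains(gaze) {
                let start = dwellStart[button.id, default: now]
                dwellStart[button.id] = start
                if now - start >= dwellThreshold {
                    dwellStart[button.id] = nil  // reset so it fires once per dwell
                    return button
                }
            } else {
                dwellStart[button.id] = nil      // gaze left the button; restart the timer
            }
        }
        return nil
    }
}
```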
It’s an interesting and hands-free approach to an HMD interface, though in my experience the eyes themselves are not a great conscious input device because most of our eye-movements are subconsciously controlled. With enough practice it’s possible that manually controlling your gaze for input will become as simple and seamless as using your finger to control a touchscreen; ultimately another form of input might be better but that remains to be seen.
This interface and input approach is of course entirely dependent on high quality eye-tracking. Since I didn’t get to put the lens on for myself, I have no indication if Mojo Vision’s eye-tracking is up to the task, but the company claims its eye-tracking is an “order of magnitude more precise than today’s leading [XR] optical eye-tracking systems.”
In theory it should work as well as they claim—after all, what better way to measure the movement of your eyes than with something that’s physically attached to them? In practice, the device’s IMU is presumably just as susceptible to drift as any other, which could be problematic. There’s also the matter of separating the movement of the user’s head from the sensor data coming from an eye-mounted device.
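The geometry of that separation problem is at least straightforward, even if the sensor fusion isn’t. As a rough illustration (not Mojo Vision’s actual pipeline): if a head-worn reference IMU and the lens IMU each report an orientation in a shared world frame, the eye-in-head rotation is simply their relative quaternion:

```swift
import simd

// Illustrative only — assumes qHead (head-worn reference IMU) and qLens
// (eye-mounted IMU) are both expressed in the same world frame.
func eyeInHeadRotation(qHead: simd_quatf, qLens: simd_quatf) -> simd_quatf {
    // Relative rotation: undo the head's orientation, keep what the eye added.
    simd_normalize(qHead.inverse * qLens)
}
```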
Image courtesy Mojo Vision
If the company’s eye-tracking is as precise (and accurate) as they claim, it would be a major win because it could enable the device to function as a genuine AR contact lens capable of immersive experiences, rather than just a smart contact lens for basic informational display. Mojo Vision does claim it expects its contact lens to be able to do immersive AR eventually, including stereoscopic rendering with one contact in each eye. In any case, AR won’t be properly viable on the device until a larger field-of-view is achieved, but it’s an exciting possibility.
So what’s the road map for actually getting this thing to market? Mojo Vision says it fully expects FDA approval will be necessary before they can sell it to anyone, which means even once everything is functional from a tech and feature standpoint, they’ll need to run clinical trials. As for when that might all be complete, the company told me “not in a year, but certainly [sooner than] five years.”
Earlier this month during Apple’s annual developer conference, WWDC 2022, the company gave developers the first look at improvements coming to Apple’s ARKit 6 toolkit for building AR apps on iOS devices.
Though Apple has yet to reveal (or even confirm) the existence of an AR headset, the clearest indication the company is absolutely serious about AR is ARKit, the developer toolkit for building AR apps on iOS devices which Apple has been advancing since 2017.
At WWDC 2022 Apple revealed the latest version, ARKit 6, which is bringing improvements to core capabilities so developers can build better AR apps for iPhones and iPads (and eventually headsets… probably).
ARKit includes a MotionCapture function which tracks people in the video frame, giving developers a ‘skeleton’ which estimates the position of the person’s head and limbs. This allows developers to create apps which overlay augmented content onto a person or move content relative to them (it can also be used for occlusion, placing augmented content behind someone to more realistically embed it into the scene).
In ARKit 6, Apple engineer Christian Lipski says the function is getting a “whole suite of updates,” including improved tracking of 2D skeletons, which now estimate the location of the subject’s left and right ears (which will surely be useful for face filters, trying on glasses with AR, and similar functions involving the head).
Image courtesy Apple
As for 3D skeletons, which give a pose estimate with depth, Apple is promising better tracking with less jitter, more temporal consistency, and more robustness when the subject is occluded by the edge of the camera frame or by other objects (though some of these enhancements are only available on iPhone 12 and up).
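For a sense of what working with MotionCapture looks like in code, here’s a minimal sketch of reading both skeletons from an ARSession. I haven’t listed the identifiers of the new ear landmarks; enumerating the skeleton definition’s joint names at runtime, as below, will surface them:

```swift
import ARKit

// A sketch of reading MotionCapture data via an ARSessionDelegate.
class BodyTrackingDelegate: NSObject, ARSessionDelegate {
    func run(on session: ARSession) {
        let config = ARBodyTrackingConfiguration() // enables 2D + 3D body tracking
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // 2D skeleton: normalized image-space landmarks for each named joint,
        // including the new ear landmarks in ARKit 6.
        if let body = frame.detectedBody {
            let skeleton = body.skeleton
            for (name, landmark) in zip(skeleton.definition.jointNames,
                                        skeleton.jointLandmarks) {
                print("2D joint \(name): \(landmark)")
            }
        }
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        // 3D skeleton: model-space transforms with depth, via ARBodyAnchor.
        for case let bodyAnchor as ARBodyAnchor in anchors {
            if let headTransform = bodyAnchor.skeleton.modelTransform(for: .head) {
                print("Head (model space): \(headTransform)")
            }
        }
    }
}
```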
Camera Access Improvements
Image courtesy Apple
ARKit 6 gives developers much more control over the device’s camera while it’s being used with an AR app for tracking.
Developers can now access incoming frames in real time at up to 4K at 30FPS on the iPhone 11 and up, and the latest iPad Pro (M1). The prior mode, which uses a lower resolution but a higher framerate (60FPS), is still available. Lipski says developers should carefully consider which mode to use: the 4K mode might be better for apps focused on previewing or recording video (like a virtual production app), while the lower-resolution 60FPS mode might be better for apps that benefit from responsiveness, like games.
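In code, opting into the new mode is nearly a one-liner. A minimal sketch (in a real app the session would come from your AR view; on unsupported hardware the recommended format is nil and ARKit keeps its default mode):

```swift
import ARKit

let session = ARSession() // in a real app, your ARView/ARSCNView's session

let config = ARWorldTrackingConfiguration()
// Opt into 4K/30 where supported; nil on older hardware keeps the default mode.
if let fourK = ARWorldTrackingConfiguration.recommendedVideoFormatFor4KResolution {
    config.videoFormat = fourK
}
session.run(config)
```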
Alongside the higher video resolution, developers can now take full-resolution photos even while an AR app is actively using the camera. That means they can pull out a 12MP image (on an iPhone 13, anyway) to be saved or used elsewhere. This could be great for an AR app where capturing photos is part of the experience – for instance, Lipski says, an app where users are guided through taking photos of an object that’s later converted into a 3D model with photogrammetry.
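Here’s roughly what that capture call looks like (a sketch, assuming a running session whose video format supports high-resolution frame capture; error handling trimmed):

```swift
import ARKit
import CoreVideo

// Grab a full-sensor still without interrupting the live AR camera feed.
func snapStill(from session: ARSession) {
    session.captureHighResolutionFrame { frame, error in
        guard let frame = frame else { return }
        // Full-sensor image (e.g. 12MP on iPhone 13).
        let buffer = frame.capturedImage
        print("Captured \(CVPixelBufferGetWidth(buffer)) x \(CVPixelBufferGetHeight(buffer))")
        // ...save it as a photo or feed it to a photogrammetry pipeline.
    }
}
```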
Beyond resolution and stills, ARKit 6 exposes finer control over the camera while it’s being used by an AR app. Developers can adjust settings like white balance, brightness, and focus as needed, and can read EXIF data from every incoming frame.
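A sketch of both capabilities follows; the exposure bias call is one example of the AVCaptureDevice settings now reachable, and the EXIF read happens on a session delegate:

```swift
import ARKit
import AVFoundation

// Adjusting the underlying AVCaptureDevice while ARKit owns the camera.
func brightenCamera() {
    guard let device = ARWorldTrackingConfiguration
        .configurableCaptureDeviceForPrimaryCamera else { return }
    do {
        try device.lockForConfiguration()
        device.setExposureTargetBias(0.5, completionHandler: nil) // brighten slightly
        device.unlockForConfiguration()
    } catch {
        print("Camera configuration failed: \(error)")
    }
}

// Reading per-frame EXIF metadata from an ARSessionDelegate.
class ExifLogger: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        print(frame.exifData) // exposure time, ISO, white balance tags, etc.
    }
}
```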
More Location Anchor… Locations
Image courtesy Apple
ARKit includes LocationAnchors which can provide street-level tracking for AR in select cities (for instance, to do augmented reality turn-by-turn directions). Apple is expanding this functionality to more cities, now including Vancouver, Toronto, and Montreal in Canada; Fukuoka, Hiroshima, Osaka, Kyoto, Nagoya, Yokohama, and Tokyo in Japan; and Singapore.
Later this year the function will further expand to Auckland, New Zealand; Tel Aviv-Yafo, Israel; and Paris, France.
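Using a LocationAnchor looks like this (a sketch; the coordinate, Tokyo Tower, is just an example placed in one of the newly supported cities):

```swift
import ARKit
import CoreLocation

let session = ARSession() // in a real app, your AR view's session

ARGeoTrackingConfiguration.checkAvailability { available, _ in
    guard available else { return } // needs a supported city and device
    session.run(ARGeoTrackingConfiguration())
    // Example coordinate: Tokyo Tower, in one of the newly added cities.
    let tokyoTower = CLLocationCoordinate2D(latitude: 35.6586, longitude: 139.7454)
    session.add(anchor: ARGeoAnchor(coordinate: tokyoTower))
}
```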
Plane Anchors
Plane Anchors are a tool for tracking flat objects like tables, floors, and walls during an AR session. Prior to ARKit 6, the origin of a Plane Anchor would be updated as more of the plane was discovered (for instance, moving the device to reveal more of a table than the camera saw previously). This could make it difficult to keep augmented objects locked in place on a plane if the origin was rotated after first being placed. With ARKit 6, the origin’s rotation remains static no matter how the shape of the plane might change during the session.
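In practice, developers see those changes through anchor updates. A minimal sketch of observing them from a session delegate:

```swift
import ARKit

class PlaneWatcher: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let plane as ARPlaneAnchor in anchors {
            // As the plane grows, the center and extent change, but in ARKit 6
            // the anchor's rotation stays put, so content placed on the plane
            // no longer swivels mid-session.
            print("center: \(plane.center), " +
                  "extent: \(plane.planeExtent.width) x \(plane.planeExtent.height), " +
                  "y-rotation: \(plane.planeExtent.rotationOnYAxis)")
        }
    }
}
```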
– – — – –
ARKit 6 will launch with the iOS 16 update, which is available now in beta for developers and is expected to be released to the public this fall.
AR smart glasses with displays now widely available in UK but must be connected to a smartphone to work
The first widely available augmented reality glasses have hit the UK high street, putting TV shows, movies and games on a big virtual screen just in front of your eyes. But while the Nreal Air are the first of their type on the shelves, they are limited in what consumers can do with them.
Many firms have tried to make AR glasses the next generation of technology, not least Google with its ill-fated Glass back in 2013. Snapchat and Facebook have made attempts too, both sporting cameras for recording others, but so far there have been no consumer glasses with displays for the wearer to view. Until now.
One company at the forefront of augmented reality (AR) glasses is China-based Nreal, having released the Nreal Light followed by the Nreal Air. Outside of its traditional market in Asia, Nreal’s devices have only started to see global availability in the last year, and the company is stepping up its content efforts accordingly. It’s doing so in a couple of ways: one with Steam compatibility, the other via its first hackathon event.
Unlike AR smartglasses with features like 6DoF tracking, Nreal’s AR glasses let users connect their smartphones to watch movies or play videogames on giant virtual screens. Hence the company’s push toward native AR cloud gaming experiences with the release of “Steam on Nreal”. So yes, that does mean you can now stream Steam games from your PC onto a huge 130-inch HD virtual display.
Nreal does note that “Steam on Nreal” is a beta release that requires a bit of setup effort, though it doesn’t go into specifics. The software isn’t yet optimised for all Steam games, but gamers can enjoy titles like DiRT Rally and the Halo series. As an additional benefit, Nreal Light and Air users can already utilise Xbox Cloud Gaming via a browser inside Nebula, Nreal’s 3D system.
“We are excited to be the first to bring Steam into AR,” said Peng Jin, co-founder of Nreal in a statement. “The beta release is meant to give people a glimpse into what is possible. After all, AAA games should be played on a 200″ HD screen and they should be played free of location restrictions.”
As for the AR Jam, this will be Nreal’s first augmented reality hackathon, an online international contest with more than $100,000 USD in cash prizes to be won by creators. Kicking off on 27th June 2022, the AR Jam is looking for developers to compete in five categories: at-home fitness, art, games, video (highlighting Nebula’s multi-screen functionality), and Port (converting existing apps into AR). There will also be three bonus categories should participants wish to enter: Multiplayer/Social/Networks, NFT Galleries, and Students.
“We’ve always been focused on creating consumer-ready AR experiences with groundbreaking tech, to redefine the way we interact with information and content in our everyday lives. With the AR Jam and content fund, Nreal is demonstrating its commitment to supporting pioneering developers and their AR passion projects,” Jin added.
Category winners will receive $10k, whilst those in second and third place will receive smaller cash prizes. Honourable mentions will get their very own Nreal Light dev kit. The AR Jam will run until 27th July 2022.
As the co-creator of HoloLens and the chief of Microsoft’s mixed reality division, Alex Kipman has been the face of the company’s immersive efforts for several years. That’s now coming to an end, with reports stating that Kipman will leave Microsoft after allegations of verbal abuse and sexual harassment surfaced.
Insider reported the allegations back in May, and it was the same site this week that first reported on Kipman resigning his position. While Microsoft has yet to officially comment on the report, GeekWire obtained an email from Scott Guthrie, the head of Microsoft’s Cloud & AI Group, announcing a restructuring of the HoloLens group.
The hardware and software teams will be split between the Windows + Devices organisation and the Experiences + Devices division respectively. This doesn’t appear to have been an overnight decision, with Guthrie stating in the email: “Over the last several months, Alex Kipman and I have been talking about the team’s path going forward. We have mutually decided that this is the right time for him to leave the company to pursue other opportunities.” Kipman won’t be leaving right away; he’ll help the teams transition over the next couple of months before departing Microsoft.
What this will mean for HoloLens is unclear, as Kipman was by far the most ardent supporter of HoloLens (and mixed reality) within Microsoft. The news comes at a turbulent time for the device, as the US Army decides whether to continue with its HoloLens-based headset program for soldiers – called IVAS – with reports suggesting that the 10-year, $21.9 billion USD contract might be delayed or reduced in size.
Alex Kipman and John Hanke at Microsoft Ignite
A Brazilian engineer, Kipman joined Microsoft in 2001 and worked within the Windows and Xbox teams – he helped create the Xbox Kinect sensor – before heading up the mixed reality division. Insider’s report last month saw dozens of staff detail his alleged behaviour to the publication. These included one employee saying Kipman watched what was essentially VR porn in front of others whilst another spoke of an incident where he kept massaging a female employee’s shoulders even though she was trying to shrug him off.
It was this pattern of continual inappropriate behaviour and unwanted touching that created an atmosphere where managers reportedly told staff women shouldn’t be left alone with him.
At the beginning of the year, the Wall Street Journal reported that more than 70 staff from the HoloLens team left Microsoft in 2021, with 40 of those joining Meta.