Hugo Swart, former head of Qualcomm’s XR division, announced he’s joined Google, where he’ll lead the company’s XR Ecosystem Strategy and Technology efforts.
Swart shared the news in a LinkedIn update, noting the move happened a few months ago:
Happy to share that I joined Google couple months ago and am responsible for XR Ecosystem Strategy and Technology. Super excited to continue the XR journey and working with you all – great things ahead! Thank you Shahram Izadi for the opportunity! Looking forward to AWE this week!!
As General Manager and Vice President of XR at Qualcomm, Swart was a driving force behind the company’s Snapdragon XR series of chipsets, which currently power the majority of standalone headsets on the market, including all of Meta’s Quest headsets to date.
Following Swart’s departure from Qualcomm in February, Alex Katouzian, Group GM of the Mobile, XR, and Compute Business Unit, has been overseeing XR at Qualcomm.
Swart is joining Google at a pivotal moment in XR, as the company recently announced a strategic technology partnership with Magic Leap, which is seen as an effort to keep up with Meta, Apple, and others in a race to control the burgeoning AR headset market.
This follows a notable setback last year when Google reportedly shelved its Project Iris AR glasses following mass restructuring within the company, which included layoffs, reshuffles, and the departure of Clay Bavor, Google’s then-head of AR and VR.
Meanwhile, Google is developing a new Android-based platform for Samsung’s upcoming XR headset announced back in February 2023, which is set to be powered by Qualcomm silicon. Google is also rumored to be developing a “Micro XR” platform for XR glasses, which is said to use a prototyping platform internally known as “Betty.”
Hugo Swart, General Manager and Vice President of XR at Qualcomm, has announced his departure from the company at a pivotal moment in the industry.
Hugo Swart, who has been with Qualcomm for 20 years, this week announced that he’s leaving the company. Most recently, Swart has been the company’s GM & VP of XR for more than six years.
Swart was a driving force behind the company’s Snapdragon XR series of chipsets, which currently power the vast majority of major headsets on the market. Under his tenure, the division worked closely with Meta. Not only are Snapdragon chips in every single one of Meta’s standalone XR headsets, the two companies publicly announced a multi-year collaboration around XR chipsets back in 2022.
“After 20 years at Qualcomm, I am embarking on a new journey,” Swart announced. “It was an amazing two decades starting with driving EV-DO technology adoption in Latin America to running Qualcomm’s XR business. So many great memories, friendships, partnerships and accomplishments that I am grateful for. Looking forward to the next phase, but first I am taking some time off and will update you then.”
So far, no successor to the position has been named.
In the wake of this leadership change, Qualcomm faces the challenge of defending its market share against competitors like NVIDIA, AMD, and Intel, which are also eyeing the growing XR space; the launch of Vision Pro, powered by Apple’s own custom chips, only adds to that pressure.
A crucial indicator of Qualcomm’s continued momentum in the XR market may be the reception of Samsung’s upcoming XR headset. Samsung, Google, and Qualcomm are publicly partnered on that project, with the expectation that Qualcomm will make the chips, Samsung will make the hardware, and Google will make the software.
The XR industry is at a critical juncture, with Qualcomm’s partnership with Meta and the competitive pressure from Apple Vision Pro shaping the strategic landscape. The ongoing collaboration with Meta underscores the importance of strategic alliances in driving XR adoption and innovation, especially with competitive pressures mounting in a way the XR industry hasn’t previously seen. The effectiveness of Swart’s eventual successor will be key in shaping Qualcomm’s trajectory in the XR market.
Microsoft, Qualcomm and Magic Leap announced a partnership to “guide the evolution” of the Mixed Reality Toolkit (MRTK), a cross-platform AR/VR development framework which has now gone open-source.
MRTK was a Microsoft-driven project that provided a set of components and features used to accelerate cross-platform XR app development in the Unity game engine. The development team behind MRTK was disbanded earlier this year, when Microsoft cut both the MRTK and AltspaceVR teams in a wide-reaching round of layoffs.
Still, now that it’s an open-source project, Microsoft is joining industry peers Qualcomm and Magic Leap to form an independent organization on GitHub that aspires to transform the software into a “true multi-platform toolkit that enables greater third-party collaboration.”
“With Magic Leap joining Microsoft as equal stakeholders on the MRTK steering committee, we hope to enrich the current ecosystem and help our developer community create richer, more immersive experiences that span platforms,” Magic Leap says in a blog post. “Additionally, our support for MRTK3 will allow for simple porting of MRTK3 apps from other OpenXR devices to our platform.”
MRTK3 already supports a wide range of platforms, either fully or experimentally, including OpenXR devices like Microsoft HoloLens 2, Meta Quest, Windows Mixed Reality, SteamVR, Oculus Rift (on OpenXR), Lenovo ThinkReality A3, as well as traditional Windows desktop. The committee says more devices are “coming soon,” one of which will likely be the Magic Leap 2 AR headset.
Meanwhile, Microsoft announced MRTK3 is on track to reach general availability for developers in the second week of September 2023. To learn more, check out Microsoft’s MRTK3 hub, which includes support info, tutorials, and more.
Industry Direct is our program for sponsors who want to speak directly to the Road to VR newsletter audience. Industry Direct posts are written by sponsors with no involvement from the Road to VR editorial team. Links to these posts appear only in our newsletter and do not intermix with our on-site editorial feed. Industry Direct sponsors help make Road to VR possible.
It’s clear that we’re undergoing another massive shift in how we connect, play, work, and interact with the world around us. More specifically, a shift in the devices we use every day.
Mixed reality (MR) is taking the world by storm, and headworn devices are quickly winning the hearts (and habits) of consumers. Extended Reality (XR) devices are getting lighter, sleeker, and more performant, providing unprecedented immersive experiences. New players in the market mean a boost for the industry and more options for developers looking to create the next generation of AR, VR, and MR apps and experiences.
A New Wave of MR Devices and Experiences
At AWE 2023 (Augmented World Expo), Qualcomm announced that the Snapdragon Spaces XR Developer Platform now supports the new generation of MR devices, including Lenovo’s ThinkReality VRX, Oppo’s new MR Glass Developer Edition, all-in-one AR devices from DigiLens and TCL RayNeo, and other prominent headworn devices expected to be released later this year. Powered by Snapdragon chipsets purpose-built for XR, these devices are a game changer for developers looking to combine computer vision, AI, and 5G capabilities to build immersive and ultra-realistic experiences.
By expanding the perception technology stack from AR to MR, Snapdragon Spaces enables more developers to push the boundaries of reality, all thanks to the video passthrough capabilities combined with features that seamlessly understand environments and users.
With the wide variety of devices available and soon to be available in the market, developers can reap the benefits of working with a platform that is based on OpenXR. Snapdragon Spaces enables developers to easily deploy applications across multiple devices while being part of an open and rapidly growing ecosystem.
Strong Momentum for XR Developers
Developers are in the driver’s seat, leading, disrupting, and creating this new era of spatial computing.
Hugo Swart, VP and GM of XR, highlighted the incredible traction the Snapdragon Spaces ecosystem is getting: thousands of developers have joined the Snapdragon Spaces community, more than 80 members have joined the Snapdragon Spaces Pathfinder Program, the Metaverse Fund has made three new venture investments, and an inaugural group of 10 companies has joined the Niantic Lightship and Snapdragon Spaces developer initiative.
The platform has been a critical building block for developers across productivity, gaming and entertainment, health, education, training and other verticals to deliver innovative apps based on the world’s most popular development engines: Unity and Unreal.
Get Started with Snapdragon Spaces
The XR market is about to experience a huge influx of content, applications, new devices and increased adoption.
Snapdragon Spaces continues to expand and create an open ecosystem that enables developers to pioneer innovative experiences for the next generation of immersive technology. For developers who want to help build this new era of spatial computing, check out Snapdragon Spaces.
Eye-tracking—the ability to quickly and precisely measure the direction a user is looking while inside of a VR headset—is often talked about within the context of foveated rendering, and how it could reduce the performance requirements of XR headsets. And while foveated rendering is an exciting use-case for eye-tracking in AR and VR headsets, eye-tracking stands to bring much more to the table.
Updated – May 2nd, 2023
Eye-tracking has been talked about with regards to XR as a distant technology for many years, but the hardware is finally becoming increasingly available to developers and customers. PSVR 2 and Quest Pro are the most visible examples of headsets with built-in eye-tracking, along with the likes of Varjo Aero, Vive Pro Eye and more.
With this momentum, in just a few years we could see eye-tracking become a standard part of consumer XR headsets. When that happens, there’s a wide range of features the tech can enable to drastically improve the experience.
Foveated Rendering
Let’s start with the one that many people are already familiar with. Foveated rendering aims to reduce the computational power required for displaying demanding AR and VR scenes. The name comes from the ‘fovea’—a small pit at the center of the human retina which is densely packed with photoreceptors. It’s the fovea which gives us high resolution vision at the center of our field of view; meanwhile our peripheral vision is actually very poor at picking up detail and color, and is better tuned for spotting motion and contrast than seeing detail. You can think of it like a camera which has a large sensor with just a few megapixels, and another smaller sensor in the middle with lots of megapixels.
The region of your vision in which you can see in high detail is actually much smaller than most think—just a few degrees across the center of your view. The difference in resolving power between the fovea and the rest of the retina is so drastic, that without your fovea, you couldn’t make out the text on this page. You can see this easily for yourself: if you keep your eyes focused on this word and try to read just two sentences below, you’ll find it’s almost impossible to make out what the words say, even though you can see something resembling words. The reason that people overestimate the foveal region of their vision seems to be because the brain does a lot of unconscious interpretation and prediction to build a model of how we believe the world to be.
Foveated rendering aims to exploit this quirk of our vision by rendering the virtual scene in high resolution only in the region the fovea sees, and drastically cutting down the complexity of the scene in our peripheral vision where the detail can’t be resolved anyway. Doing so allows us to focus most of the processing power where it contributes most to detail, while saving processing resources elsewhere. That may not sound like a huge deal, but as the display resolution and field-of-view of XR headsets increase, the power needed to render complex scenes grows quickly.
Eye-tracking of course comes into play because we need to know where the center of the user’s gaze is at all times, quickly and with high precision, in order to pull off foveated rendering. While it’s difficult to pull this off without the user noticing, it’s possible and has been demonstrated quite effectively on recent headsets like Quest Pro and PSVR 2.
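To make the idea concrete, here’s a minimal sketch of how a renderer might map eye-tracking data to per-region resolution scales. The zone radii and scale factors below are illustrative assumptions, not values from any shipping headset:

```python
import math

# Illustrative foveation zones: (max eccentricity in degrees, resolution scale).
# Real systems tune these per headset and per application.
FOVEATION_ZONES = [
    (5.0, 1.00),            # foveal region: full resolution
    (15.0, 0.50),           # near periphery: half resolution per axis
    (float("inf"), 0.25),   # far periphery: quarter resolution per axis
]

def angle_between_deg(a, b):
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    mag = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / mag))))

def resolution_scale(gaze_dir, region_dir):
    """Pick a render-resolution scale for a screen region based on how far
    it is, angularly, from the user's current gaze direction."""
    eccentricity = angle_between_deg(gaze_dir, region_dir)
    for max_deg, scale in FOVEATION_ZONES:
        if eccentricity <= max_deg:
            return scale

# Example: a region 20 degrees off-gaze gets rendered at quarter resolution.
off_axis = (math.sin(math.radians(20)), 0.0, -math.cos(math.radians(20)))
print(resolution_scale((0.0, 0.0, -1.0), off_axis))  # 0.25
```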
Automatic User Detection & Adjustment
In addition to tracking where you’re looking, eye-tracking can also be used as a biometric identifier. That makes eye-tracking a great candidate for multiple user profiles across a single headset—when I put on the headset, the system can instantly identify me as a unique user and call up my customized environment, content library, game progress, and settings. When a friend puts on the headset, the system can load their preferences and saved data.
Eye-tracking can also be used to precisely measure IPD (the distance between one’s eyes). Knowing your IPD is important in XR because it’s required to move the lenses and displays into the optimal position for both comfort and visual quality. Unfortunately, many people understandably don’t know their IPD off the top of their head.
With eye-tracking, it would be easy to instantly measure each user’s IPD and then have the headset’s software assist the user in adjusting the headset’s IPD to match, or warn users that their IPD is outside the range supported by the headset.
In more advanced headsets, this process can be invisible and automatic—IPD can be measured invisibly, and the headset can have a motorized IPD adjustment that automatically moves the lenses into the correct position without the user needing to be aware of any of it, like on the Varjo Aero, for example.
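As a rough illustration, here’s a minimal sketch of how headset software might estimate IPD from eye-tracking output. It assumes a hypothetical tracker that reports each eye’s position as a 3D point in millimeters in the headset’s coordinate frame; the supported adjustment range is likewise a made-up example:

```python
import statistics

def estimate_ipd_mm(samples):
    """Estimate interpupillary distance (mm) from eye-tracker samples.

    Each sample is (left_eye_pos, right_eye_pos): 3D points in millimeters
    in the headset's coordinate frame (hypothetical tracker output).
    """
    distances = [
        sum((l - r) ** 2 for l, r in zip(left, right)) ** 0.5
        for left, right in samples
    ]
    # Median is robust to occasional bad frames (blinks, tracking loss).
    return statistics.median(distances)

# Example: simulated samples hovering around a ~63 mm IPD.
samples = [
    ((-31.6, 0.0, 0.0), (31.5, 0.1, 0.0)),
    ((-31.4, 0.0, 0.0), (31.6, 0.0, 0.0)),
    ((-31.5, 0.2, 0.0), (31.5, 0.1, 0.0)),
]
ipd = estimate_ipd_mm(samples)
if 58.0 <= ipd <= 72.0:   # hypothetical mechanical adjustment range
    print(f"Set lens separation to {ipd:.1f} mm")
else:
    print(f"Measured IPD {ipd:.1f} mm is outside this headset's supported range")
```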
Varifocal Displays
The optical systems used in today’s VR headsets work pretty well but they’re actually rather simple and don’t support an important function of human vision: dynamic focus. This is because the display in XR headsets is always the same distance from our eyes, even when the stereoscopic depth suggests otherwise. This leads to an issue called vergence-accommodation conflict. If you want to learn a bit more in depth, check out our primer below:
In the real world, to focus on a near object the lens of your eye bends to make the light from the object hit the right spot on your retina, giving you a sharp view of the object. For an object that’s further away, the light is traveling at different angles into your eye and the lens again must bend to ensure the light is focused onto your retina. This is why, if you close one eye and focus on your finger a few inches from your face, the world behind your finger is blurry. Conversely, if you focus on the world behind your finger, your finger becomes blurry. This is called accommodation.
Vergence
Then there’s vergence, which is when each of your eyes rotates inward to ‘converge’ the separate views from each eye into one overlapping image. For very distant objects, your eyes are nearly parallel, because the distance between them is so small in comparison to the distance of the object (meaning each eye sees a nearly identical portion of the object). For very near objects, your eyes must rotate inward to bring each eye’s perspective into alignment. You can see this too with our little finger trick as above: this time, using both eyes, hold your finger a few inches from your face and look at it. Notice that you see double-images of objects far behind your finger. When you then focus on those objects behind your finger, now you see a double finger image.
The Conflict
With precise enough instruments, you could use either vergence or accommodation to know how far away an object is that a person is looking at. But the thing is, both accommodation and vergence happen in your eye together, automatically. And they don’t just happen at the same time—there’s a direct correlation between vergence and accommodation, such that for any given measurement of vergence, there’s a directly corresponding level of accommodation (and vice versa). Since you were a little baby, your brain and eyes have formed muscle memory to make these two things happen together, without thinking, anytime you look at anything.
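As a rough numeric illustration of that correlation (assuming a 63 mm IPD and fixation straight ahead), both cues can be derived from the same fixation distance:

```python
import math

IPD_M = 0.063  # assumed interpupillary distance in meters

def vergence_angle_deg(distance_m):
    """Total convergence angle between the two eyes' lines of sight."""
    return math.degrees(2 * math.atan(IPD_M / (2 * distance_m)))

def accommodation_diopters(distance_m):
    """Accommodative demand: the eye focuses at 1/distance diopters."""
    return 1.0 / distance_m

for d in (0.25, 0.5, 2.0, 6.0):
    print(f"{d:>4} m: vergence {vergence_angle_deg(d):5.2f} deg, "
          f"accommodation {accommodation_diopters(d):4.2f} D")
```

The nearer the object, the more the eyes converge and the harder the lens must focus; in the real world the two cues always move together.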
But when it comes to most of today’s AR and VR headsets, vergence and accommodation are out of sync due to inherent limitations of the optical design.
In a basic AR or VR headset, there’s a display (which is, let’s say, 3″ away from your eye) which shows the virtual scene, and a lens which focuses the light from the display onto your eye (just like the lens in your eye would normally focus the light from the world onto your retina). But since the display is a static distance from your eye, and the lens’ shape is static, the light coming from all objects shown on that display is coming from the same distance. So even if there’s a virtual mountain five miles away and a coffee cup on a table five inches away, the light from both objects enters the eye at the same angle (which means your accommodation—the bending of the lens in your eye—never changes).
That comes in conflict with vergence in such headsets which—because we can show a different image to each eye—is variable. Being able to adjust the image independently for each eye, such that our eyes need to converge on objects at different depths, is essentially what gives today’s AR and VR headsets stereoscopy.
But the most realistic (and arguably, most comfortable) display we could create would eliminate the vergence-accommodation issue and let the two work in sync, just like we’re used to in the real world.
Varifocal displays—those which can dynamically alter their focal depth—are proposed as a solution to this problem. There are a number of approaches to varifocal displays, perhaps the simplest of which is an optical system where the display is physically moved back and forth from the lens in order to change focal depth on the fly.
Achieving such an actuated varifocal display requires eye-tracking because the system needs to know precisely where in the scene the user is looking. By tracing a path into the virtual scene from each of the user’s eyes, the system can find the point that those paths intersect, establishing the proper focal plane that the user is looking at. This information is then sent to the display to adjust accordingly, setting the focal depth to match the virtual distance from the user’s eye to the object.
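Here’s a minimal sketch of that gaze-ray step, assuming an eye tracker that reports each eye’s origin and gaze direction in a shared coordinate frame. Since the two rays rarely intersect exactly, a common approach is to take the midpoint of their closest points:

```python
import numpy as np

def focal_depth_from_gaze(origin_l, dir_l, origin_r, dir_r):
    """Estimate the fixation point where the two gaze rays (nearly) cross,
    and return (fixation_point, focal_distance_from_eye_midpoint)."""
    o1, d1 = np.asarray(origin_l, float), np.asarray(dir_l, float)
    o2, d2 = np.asarray(origin_r, float), np.asarray(dir_r, float)
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)

    # Solve for the parameters of the closest points on each ray.
    w = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b
    if abs(denom) < 1e-9:            # near-parallel gaze: looking far away
        return None, float("inf")
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    fixation = (o1 + t1 * d1 + o2 + t2 * d2) / 2
    return fixation, float(np.linalg.norm(fixation - (o1 + o2) / 2))

# Example: eyes 63 mm apart converging on a point 0.5 m straight ahead.
left, right = np.array([-0.0315, 0, 0]), np.array([0.0315, 0, 0])
target = np.array([0, 0, -0.5])
point, depth = focal_depth_from_gaze(left, target - left, right, target - right)
print(point, depth)  # approx. [0, 0, -0.5] and 0.5: drive the optics to ~0.5 m
```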
A well implemented varifocal display could not only eliminate the vergence-accommodation conflict, but also allow users to focus on virtual objects much nearer to them than in existing headsets.
And well before we’re putting varifocal displays into XR headsets, eye-tracking could be used for simulated depth-of-field, which could approximate the blurring of objects outside of the focal plane of the user’s eyes.
As of now, there’s no major headset on the market with varifocal capabilities, but there’s a growing body of research and development trying to figure out how to make the capability compact, reliable, and affordable.
Foveated Displays
While foveated rendering aims to better distribute rendering power between the part of our vision where we can see sharply and our low-detail peripheral vision, something similar can be achieved for the actual pixel count.
Rather than just changing the detail of the rendering on certain parts of the display vs. others, foveated displays are those which are physically moved (or in some cases “steered”) to stay in front of the user’s gaze no matter where they look.
Foveated displays open the door to achieving much higher resolution in AR and VR headsets without brute-forcing the problem by trying to cram pixels at higher resolution across our entire field-of-view. Doing so would not only be costly, but would also run into challenging power and size constraints as the number of pixels approaches retinal resolution. Instead, foveated displays would move a smaller, pixel-dense display to wherever the user is looking based on eye-tracking data. This approach could even lead to higher fields-of-view than could otherwise be achieved with a single flat display.
Varjo is one company working on a foveated display system. They use a typical display that covers a wide field of view (but isn’t very pixel dense), and then superimpose a microdisplay that’s much more pixel dense on top of it. The combination of the two means the user gets both a wide field of view for their peripheral vision, and a region of very high resolution for their foveal vision.
Granted, this foveated display is still static (the high resolution area stays in the middle of the display) rather than dynamic, but the company has considered a number of methods for moving the display to ensure the high resolution area is always at the center of your gaze.
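As a toy software analogy for what such a system does optically, the sketch below pastes a pixel-dense inset onto a lower-detail background image at the current gaze point. A real foveated display steers the inset optically and blends the boundary, but the basic idea is the same:

```python
import numpy as np

def composite_foveated(background, inset, gaze_px):
    """Overlay a pixel-dense inset (think: microdisplay) onto a wide,
    lower-detail background image, centered on the current gaze point.

    background: HxWx3 uint8 array covering the full field of view
    inset:      hxwx3 uint8 array, already warped to the same pixel scale
    gaze_px:    (x, y) gaze position in background pixel coordinates
    """
    out = background.copy()
    h, w = inset.shape[:2]
    x0 = int(np.clip(gaze_px[0] - w // 2, 0, background.shape[1] - w))
    y0 = int(np.clip(gaze_px[1] - h // 2, 0, background.shape[0] - h))
    out[y0:y0 + h, x0:x0 + w] = inset
    return out

# Example: a 2000x2000 peripheral frame with a 400x400 high-detail inset
# steered to wherever the eye tracker says the user is looking.
periphery = np.zeros((2000, 2000, 3), dtype=np.uint8)
fovea = np.full((400, 400, 3), 255, dtype=np.uint8)
frame = composite_foveated(periphery, fovea, gaze_px=(1300, 800))
```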
Virtual Desktop has collaborated with Qualcomm to integrate the company’s Snapdragon Game Super Resolution, a software enhancement squarely targeted at improving the quality of PC visuals wirelessly streamed to Quest 2.
Virtual Desktop is a great tool not only because it provides Quest users wireless access to their computers, but because its developer, Guy Godin, is constantly adding in new features to tempt users away from using built-in solutions, i.e. Air Link.
That’s a tall order since Air Link is free and actually pretty great, letting Quest users connect to their VR-ready PCs to play games like Half-Life: Alyx, but Virtual Desktop goes a few steps further. With its native PC application developed for high-quality wireless Quest streaming, you can do things like cycle through multiple physical monitors and even connect to up to four separate computers—a feature set you probably won’t see on the Air Link change log.
Now Godin has worked with Qualcomm to integrate the company’s Snapdragon Game Super Resolution for built-in upscaling, essentially creating higher resolution images from lower resolution inputs so it can be served up to Quest in higher fidelity. Check out the results below:
Because producing clearer visuals with fewer resources is the name of the game, Qualcomm says in a blog post that its techniques can also reduce wireless bandwidth, system pressure, memory use, and power requirements.
Godin says in a Reddit post that the new upscaling works with “Potato, Low, Medium quality (up to 120fps) and High (up to 90fps), and it upscales to Ultra resolution under the hood. It can work with SSW enabled as well and doesn’t introduce any additional latency.”
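Snapdragon Game Super Resolution itself is a proprietary single-pass GPU technique, so the sketch below is not Qualcomm’s algorithm; it’s only a generic illustration of what spatial upscaling means in practice (upsample a low-resolution frame, then sharpen it), which is the kind of transformation applied when the lower-quality presets are upscaled to Ultra resolution:

```python
import numpy as np

def upscale_and_sharpen(img, scale=2, amount=0.6):
    """Toy spatial upscaler: integer-factor upsample followed by an unsharp
    mask. Purely illustrative; not Snapdragon Game Super Resolution."""
    up = np.repeat(np.repeat(img.astype(np.float32), scale, axis=0), scale, axis=1)
    # 3x3 box blur (edge-padded) used as the low-pass for unsharp masking.
    padded = np.pad(up, ((1, 1), (1, 1), (0, 0)), mode="edge")
    blur = sum(padded[i:i + up.shape[0], j:j + up.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    sharpened = up + amount * (up - blur)
    return np.clip(sharpened, 0, 255).astype(np.uint8)

# Example: take a 1200x1200 rendered eye buffer up to 2400x2400.
low_res = np.random.randint(0, 256, (1200, 1200, 3), dtype=np.uint8)
high_res = upscale_and_sharpen(low_res)
```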
You can get Virtual Desktop on Quest over at the Quest Store, priced at $20. It’s also available on Pico Neo 3 and Pico 4, which you can find in-headset over on the Pico Store.
Qualcomm announced at Mobile World Congress (MWC) today it’s partnering with seven global telecommunication companies in preparation for the next generation of AR glasses which are set to work directly with the user’s smartphone.
Partners include CMCC, Deutsche Telekom, KDDI Corporation, NTT QONOQ, T-Mobile, Telefonica, and Vodafone, which are said to currently be working with Qualcomm on new XR devices, experiences, and developer initiatives, including Qualcomm’s Snapdragon Spaces XR developer platform.
Qualcomm announced Snapdragon Spaces in late 2021, a software toolkit focused on performant, low-power devices that allows developers to create head-worn AR experiences from the ground up or add head-worn AR to existing smartphone apps.
Qualcomm and Japan’s KDDI Corporation also announced a multi-year collaboration which it says will focus on the expansion of XR use cases and creation of a developer program in Japan.
Meanwhile, Qualcomm says OEMs are designing “a new wave of devices for operators and beyond” such as the newly unveiled Xiaomi Wireless AR Glass Discovery Edition, OPPO’s new Mixed Reality device and OnePlus 11 5G smartphone.
At least in Xiaomi’s case, its Wireless AR Glass headset streams data from compatible smartphones, effectively offloading computation to the phone. The company’s 126g headset boasts a wireless latency as low as 3ms between the smartphone and the glasses, and a full link latency as low as 50ms, which is comparable to wired solutions.
Qualcomm has revealed its latest AR glasses reference design, which it offers up to other companies as a blueprint for building their own AR devices. The reference design, which gives us a strong hint at the specs and capabilities of upcoming products, continues to lean on a smartphone to do the heavy compute, but this time is based on a wireless design.
Qualcomm’s prior AR glasses reference design was based on the Snapdragon XR1 chip and called for a wired connection between a smartphone and the glasses, allowing the system to split rendering tasks between the two devices.
Now the company’s latest design, based on Snapdragon XR2, takes the wire out of the equation. But instead of going fully standalone, the new reference design continues to rely on the smartphone to handle most of the heavy rendering, but now does so over a wireless connection between the devices.
In addition to Snapdragon XR2, the AR glasses include Qualcomm’s FastConnect 6900 chip which equips it with Wi-Fi 6E and Bluetooth 5.3. The company says the chip is designed for “ultra-low latency,” and manages less than 3ms of latency between the headset and the smartphone. The company has also announced XR-specific software for controlling its FastConnect 6900, allowing device makers to tune the wireless traffic between the devices to prioritize the most time-critical data in order to reduce instances of lag or jitter due to wireless interference.
Though a connected smartphone seems like the most obvious use-case, Qualcomm also says the glasses could just as well be paired to a Windows PC or “processing puck.”
Beyond the extra wireless tech, the company says the latest design is 40% thinner than its previous reference design. The latest version has a 1,920 × 1,080 (2MP) per-eye resolution at 90Hz. The microdisplays include a ‘no-motion-blur’ feature—which sounds like a low persistence mode designed to prevent blurring of the image during head movement. A pair of monochrome cameras are used for 6DOF tracking and an RGB camera for video or photo capture. The company didn’t mention the device’s field-of-view, so it’s unlikely to be any larger than the prior reference design at 45° diagonal.
Like its many prior reference designs, Qualcomm isn’t actually going to make and sell the AR glasses. Instead, it offers up the design and underlying technology for other companies to use as a blueprint to build their own devices (hopefully using Qualcomm’s chips!). Companies that build on Qualcomm’s blueprint usually introduce their own industrial design and custom software offering; some even customize the hardware itself, like using different displays or optics.
That makes this AR glasses reference design a pretty good snapshot of the current state of AR glasses that can be mass produced, and a glimpse of what some companies will be offering in the near future.
Qualcomm says its latest AR glasses reference design is “available for select partners,” as of today, and plans to make it more widely available “in the coming months.”
Qualcomm Technologies has become one of the biggest proponents of XR smart glasses thanks to the constant revision of its reference designs for OEMs. Those have all been tethered devices, but today Qualcomm revealed its first step toward a wireless future for these immersive glasses with its new Wireless AR Smart Viewer Reference Design.
Most smart glasses (or smart viewers, as Qualcomm likes to call them) cable to an external device, be it a phone or a small processing unit, to supply power and handle the heavy software lifting. This means they can stay fairly lightweight whilst maintaining a slim form factor. Somehow, using its technical wizardry, Qualcomm’s wireless reference design is not only slimmer in places but also looks better than its tethered kin.
Sporting a 650mAh battery – in a press briefing, Qualcomm’s GM of XR Hugo Swart wouldn’t say how long that could last – the Wireless AR Smart Viewer uses Wi-Fi 6E in conjunction with the Qualcomm FastConnect 6900 system to pair with a device, providing “virtually lag-free AR experiences,” the company claims. Utilising Qualcomm’s latest Snapdragon XR2 platform to power the glasses, it has a dual micro-OLED binocular display with a resolution of 1920×1080 per eye at a 90Hz refresh rate.
FastConnect is paired with a split processing system: the smart glasses send across all of the 6DoF, hand, and eye-tracking data, the smartphone processes it in conjunction with the XR app, and the result is sent back to the glasses. All of this is achieved with less than 3ms of latency.
As for what you could be watching on these wireless smart viewers, look no further than Snapdragon Spaces, Qualcomm’s developer platform for AR experiences. Qualcomm has created a $100 million fund to encourage content creators, as well as partnering with Square Enix and T-Mobile to help build immersive titles.
Like all of Qualcomm’s reference designs, the Wireless AR Smart Viewer has been created for other companies to build upon, so Qualcomm won’t release this as its own product. Much in the same way, Lenovo made the ThinkReality A3 smart glasses from the XR1 AR Smart Viewer Reference Design.
So there’s no telling quite yet when these wireless XR glasses will come to market. As these details come to light, gmw3 will keep you updated.