Apple Joins Pixar, NVIDIA, & More to “accelerate next generation of AR experiences” with 3D File Protocol

Today, big tech companies including Apple, Pixar, Adobe, Autodesk, and NVIDIA announced the formation of the Alliance for OpenUSD (AOUSD), which is dedicated to promoting the standardization and development of a 3D file protocol that Apple says will “help accelerate the next generation of AR experiences.”

NVIDIA has been an early supporter of Pixar’s Universal Scene Description (USD), saying last year that it believes the format has the potential to become the “HTML of the metaverse.”

Much like HTML forms a sort of description of a webpage—being hostable anywhere on the Internet and retrievable/renderable locally by a web browser—USD can be used to describe complex virtual scenes, allowing it to be similarly retrieved and rendered on a local machine.

Here’s how the alliance describes its new OpenUSD initiative:

Created by Pixar Animation Studios, OpenUSD is a high-performance 3D scene description technology that offers robust interoperability across tools, data, and workflows. Already known for its ability to collaboratively capture artistic expression and streamline cinematic content production, OpenUSD’s power and flexibility make it an ideal content platform to embrace the needs of new industries and applications.

“Universal Scene Description was invented at Pixar and is the technological foundation of our state-of-the-art animation pipeline,” said Steve May, Chief Technology Officer at Pixar and Chairperson of AOUSD. “OpenUSD is based on years of research and application in Pixar filmmaking. We open-sourced the project in 2016, and the influence of OpenUSD now expands beyond film, visual effects, and animation and into other industries that increasingly rely on 3D data for media interchange. With the announcement of AOUSD, we signal the exciting next step: the continued evolution of OpenUSD as a technology and its position as an international standard.”

Housed by the Linux Foundation affiliate Joint Development Foundation (JDF), the alliance hopes to attract a diverse range of companies and organizations to actively participate in shaping the future of OpenUSD. For now it counts Apple, Pixar, Adobe, Autodesk, and NVIDIA as founding members, with general members including Epic Games, Unity, Foundry, Ikea, SideFX, and Cesium.

“OpenUSD will help accelerate the next generation of AR experiences, from artistic creation to content delivery, and produce an ever-widening array of spatial computing applications,” said Mike Rockwell, Apple’s VP of the Vision Products Group. “Apple has been an active contributor to the development of USD, and it is an essential technology for the groundbreaking visionOS platform, as well as the new Reality Composer Pro developer tool. We look forward to fostering its growth into a broadly adopted standard.”

Khronos Group, the consortium behind the OpenXR standard, launched a similar USD initiative in the past via its own Metaverse Standards Forum. It’s unclear how much overlap these initiatives will have, as that project was supported by AOUSD founders Adobe, Autodesk, and NVIDIA in addition to a wide swath of industry movers, such as Meta, Microsoft, Sony, Qualcomm, and AMD. Notably missing from the Metaverse Standards Forum was support from Apple and Pixar themselves.

We’re hoping to learn more at a long-form presentation of AOUSD during the Autodesk Vision Series on August 8th. There’s a host of events leading up to SIGGRAPH 2023 though, which runs from August 6th – 10th, so we may learn more at any one of the companies’ own presentations on USD.

Meta Reveals New Prototype VR Headsets Focused on Retinal Resolution and Light Field Passthrough

Meta unveiled two new VR headset prototypes that showcase more progress in the fight to solve some persistent technical challenges facing VR today. Presenting at SIGGRAPH 2023, Meta is demonstrating a headset with retinal resolution combined with varifocal optics, and another headset with advanced light field passthrough capabilities.

Butterscotch Varifocal Prototype

In a developer blog post, Meta showed off a varifocal research prototype demonstrating a VR display system that provides “visual clarity that can closely match the capabilities of the human eye,” says Meta Optical Scientist Yang Zhao. The so-called ‘Butterscotch Varifocal’ prototype provides retinal resolution of up to 56 pixels per degree (PPD), which is sufficient for 20/20 visual acuity, researchers say.

Since its displays are also varifocal, the headset supports a focal range from 0 to 4 diopters (i.e. optical infinity to 25 cm), matching what researchers say are “the dynamics of eye accommodation with at least 10 diopter/s peak velocity and 100 diopter/s² acceleration.” The pulsing motors seen below control the displays’ focal distance in an effort to match the human eye.

Varifocal headsets represent a solution to the vergence-accommodation conflict (VAC) which has plagued standard VR headsets, the most advanced consumer headsets included. Varifocal headsets not only include the same standard support for the vergence reflex (when eyes converge on objects to form a stereo image), but also the accommodation reflex (when the lens of the eye changes shape to focus light at different depths). Without support for accommodation, VR displays can cause eye strain, make it difficult to focus on close imagery, and may even limit visual immersion.

Check out the through-the-lens video below to see how Butterscotch’s varifocal bit works:

Using LCD panels readily available on the market, Butterscotch manages its 20/20 retinal display by reducing the field of view (FOV) to 50 degrees, smaller than Quest 2’s ~89 degree FOV.

Although Butterscotch’s varifocal abilities are similar to the company’s prior Half Dome prototypes, the company says Butterscotch is “solely focused on showcasing the experience of retinal resolution in VR—but not necessarily with hardware technologies that are ultimately appropriate for the consumer.”

“In contrast, our work on Half Dome 1 through 3 focused on miniaturizing varifocal in a fully practical manner, albeit with lower-resolution optics and displays more similar to today’s consumer headsets,” explains Display Systems Research Director Douglas Lanman. “Our work on Half Dome prototypes continues, but we’re pausing to exhibit Butterscotch Varifocal to show why we remain so committed to varifocal and delivering better visual acuity and comfort in VR headsets. We want our community to experience varifocal for themselves and join in pushing this technology forward.”

Flamera Lightfield Passthrough Prototype

Another important side of making XR more immersive is undoubtedly the headset’s passthrough capabilities, like you might see on Quest Pro or the upcoming Apple Vision Pro. The decidedly bug-eyed design of Meta’s Flamera research prototype seeks a better way to create more realistic passthrough by using light fields.

Research Scientist Grace Kuo wearing the Flamera research prototype | Image courtesy Meta

In standard headsets, cameras are typically placed a few inches from where your eyes actually sit, capturing a different view than what you’d see if you weren’t wearing a headset. While there’s a lot of distortion and placement correction going on in standard headsets of today, you’ll probably still notice a ton of visual artifacts as the software tries to correctly resolve and render different depths of field.

“To address this challenge, we brainstormed optical architectures that could directly capture the same rays of light that you’d see with your bare eyes,” says Meta Research Scientist Grace Kuo. “By starting our headset design from scratch instead of modifying an existing design, we ended up with a camera that looks quite unique but can enable better passthrough image quality and lower latency.”

Check out the quick explainer below to see how Flamera’s ingenious capture methods work:

Now, here’s a comparison between an unobstructed view and Flamera’s light field capture, showing off some pretty compelling results:

As these are research prototypes, there’s no indication of when we can expect the technologies to come to consumer headsets. Still, it’s clear that Meta is adamant about showing off just how far ahead it is in tackling some of the persistent issues facing headsets today—something you probably won’t see from the famously secretive black box that is Apple.

You can read more about Butterscotch and Flamera in their respective research papers, which are being presented at SIGGRAPH 2023, taking place August 6th – 10th in Los Angeles. Click here for the Butterscotch Varifocal abstract and Flamera full paper.

Vision Pro Dev Kit Applications Will Open in July

Apple says it will give developers the opportunity to apply for Vision Pro dev kits starting sometime in July.

In addition to releasing a first round of developer tools last week, including a software ‘Simulator’ of Vision Pro, Apple also wants to give developers a chance to get their hands on the headset itself.

The company indicates that applications for a Vision Pro development kit will open starting in July, and developers will be able to find details here when the time comes.

There’s no telling how many of the development kits the company plans to send out, or exactly when they will start shipping, but given Apple’s culture of extreme secrecy you can bet selected developers will be locked down with strict NDAs regarding their use of the device.

The Vision Pro developer kit isn’t the only way developers will be able to test their apps on a real headset.

Developers will also be able to apply to attend ‘Vision Pro developer labs’:

Apply for the opportunity to attend an Apple Vision Pro developer lab, where you can experience your visionOS, iPadOS, and iOS apps running on Apple Vision Pro. With direct support from Apple, you’ll be able to test and optimize your apps and games, so they’ll be ready when Apple Vision Pro is available to customers. Labs will be available in six locations worldwide: Cupertino, London, Munich, Shanghai, Singapore, and Tokyo.

Our understanding is that applications for the developer labs will also open in July.

Additionally, developers will be able to request that their app be reviewed by Apple itself on visionOS, though this is restricted to existing iPhone and iPad apps rather than newly created visionOS apps:

If you currently have an iPad or iPhone app on the App Store, we can help you test it on Apple Vision Pro. Request a compatibility evaluation from App Review to get a report on your app or game’s appearance and how it behaves in visionOS.

Vision Pro isn’t planned to ship until early 2024, but Apple wants to have third-party apps ready and waiting for when that time comes.

Apple Releases Vision Pro Development Tools and Headset Emulator

Apple has released new and updated tools for developers to begin building XR apps on Apple Vision Pro.

Apple Vision Pro isn’t due out until early 2024, but the company wants developers to get a jump-start on building apps for the new headset.

To that end, the company announced today that it has released the visionOS SDK, an updated Xcode, Simulator, and Reality Composer Pro, which developers can access at the visionOS developer website.

While some of the tools will be familiar to Apple developers, tools like Simulator and Reality Composer Pro are newly released for the headset.

Simulator is the Apple Vision Pro emulator, which aims to give developers a way to test their apps before getting their hands on the headset. The tool effectively acts as a software version of Apple Vision Pro, allowing developers to see how their apps will render and behave on the headset.

Reality Composer Pro is aimed at making it easy for developers to build interactive scenes with 3D models, sounds, and textures. From what we understand, it’s sort of like an easier (albeit less capable) alternative to Unity. However, developers who already know or aren’t afraid to learn a full-blown game engine can also use Unity to build visionOS apps.

Image courtesy Apple

In addition to the release of the visionOS SDK today, Apple says it’s still on track to open a handful of ‘Developer Labs’ around the world where developers can get their hands on the headset and test their apps. The company also says developers will be able to apply to receive Apple Vision Pro development kits next month.

A Concise Beginner’s Guide to Apple Vision Pro Design & Development

Apple Vision Pro has brought new ideas to the table about how XR apps should be designed, controlled, and built. In this Guest Article, Sterling Crispin offers up a concise guide for what first-time XR developers should keep in mind as they approach app development for Apple Vision Pro.

Guest Article by Sterling Crispin

Sterling Crispin is an artist and software engineer with a decade of experience in the spatial computing industry. His work has spanned product design and the R&D of new technologies at companies like Apple, Snap Inc, and various other tech startups working on face computers.

Editor’s Note: The author would like to remind readers that he is not an Apple representative; this info is personal opinion and does not contain non-public information. Additionally, more info on Vision Pro development can be found in Apple’s WWDC23 videos (select Filter → visionOS).

Ahead is my advice for designing and developing products for Vision Pro. This article includes a basic overview of the platform, tools, porting apps, general product design, prototyping, perceptual design, business advice, and more.

Overview

Apps on visionOS are organized into ‘scenes’, which come in three forms: Windows, Volumes, and Spaces.

Windows are a spatial version of what you’d see on a normal computer. They’re bounded rectangles of content that users surround themselves with. These may be windows from different apps or multiple windows from one app.

Volumes hold things like 3D objects or small interactive scenes, such as a 3D map or a small game that floats in front of you rather than being fully immersive.

Spaces are fully immersive experiences where only one app is visible. A Space could be full of many Windows and Volumes from your app, or, as in VR games, the system can fall away entirely so that fully immersive content surrounds you. You can think of visionOS itself as a Shared Space where apps coexist together and you have less control, whereas Full Spaces give you the most control and immersiveness but don’t coexist with other apps. Spaces have three immersion styles (mixed, progressive, and full), which define how much or how little of the real world you want the user to see.
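
To make that concrete, here’s a minimal sketch of a visionOS app declaring all three scene types in SwiftUI. This isn’t Apple sample code; the identifiers and placeholder views are my own, and a real app would put RealityKit content inside the Volume and the Space.

```swift
import SwiftUI

@main
struct ExampleSpatialApp: App {
    var body: some Scene {
        // Window: a bounded 2D panel that lives in the Shared Space.
        WindowGroup(id: "main") {
            Text("A flat, iPad-style window")
        }

        // Volume: a bounded 3D region for models or small interactive scenes.
        WindowGroup(id: "volume") {
            Text("3D content would go here, typically via RealityView")
        }
        .windowStyle(.volumetric)

        // Space: a fully immersive scene the user explicitly enters; the
        // immersion style controls how much of the real world stays visible.
        ImmersiveSpace(id: "immersive") {
            Text("Fully immersive content")
        }
        .immersionStyle(selection: .constant(.mixed), in: .mixed, .progressive, .full)
    }
}
```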

User Input

Users can look at the UI and pinch, as the Apple Vision Pro demo videos show, but they can also reach out and tap on windows directly, sort of like it’s actually a floating iPad, or use a Bluetooth trackpad or video game controller. They can also look at a search bar and speak into it. There’s also a Dwell Control for eyes-only input, but that’s really an accessibility feature. For a simple dev approach, your app can just use events like a TapGesture; in that case, you won’t need to worry about where those events originate from.
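
Here’s a rough sketch of what that looks like: a RealityKit sphere in a SwiftUI view that reacts to a tap the same way whether the tap came from look-and-pinch, a direct touch, or a trackpad. The entity setup and gesture wiring are illustrative, not a canonical Apple example.

```swift
import SwiftUI
import RealityKit

struct TapDemoView: View {
    var body: some View {
        RealityView { content in
            // A simple sphere to tap on.
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .blue, isMetallic: false)]
            )
            // Input and collision components make the entity hit-testable.
            sphere.components.set(InputTargetComponent())
            sphere.generateCollisionShapes(recursive: false)
            content.add(sphere)
        }
        .gesture(
            TapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Nudge the tapped entity upward as simple visual feedback,
                    // without caring how the tap was produced.
                    value.entity.position.y += 0.05
                }
        )
    }
}
```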

Spatial Audio

Vision Pro has an advanced spatial audio system that makes sounds seem like they’re really in the room by accounting for the size and materials of your room. Using subtle sounds for UI interaction and taking advantage of sound design for immersive experiences is going to be really important. Make sure to take this topic seriously.
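
As a small, hedged example of what that can look like in RealityKit: the sketch below loads a bundled sound file (the file name and the entity are assumptions on my part) and plays it spatially from an entity’s position, so a UI click appears to come from the thing the user just touched.

```swift
import RealityKit

// Plays a short UI sound from the position of the given entity.
// Assumes a file named "ui_tap.wav" is bundled with the app.
func playTapSound(on entity: Entity) {
    do {
        // Load the audio file as an audio resource (spatial by default).
        let resource = try AudioFileResource.load(named: "ui_tap.wav")

        // Give the entity spatial audio properties; gain is in decibels.
        entity.components.set(SpatialAudioComponent(gain: -10))

        // Playback is localized to the entity's position in the room; the
        // returned controller can be kept to stop or fade the sound later.
        _ = entity.playAudio(resource)
    } catch {
        print("Failed to load audio resource: \(error)")
    }
}
```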

Development

If you want to build something that works across Vision Pro, iPad, and iOS, you’ll be operating within the Apple dev ecosystem, using tools like Xcode and SwiftUI. However, if your goal is to create a fully immersive VR experience for Vision Pro that also works on other headsets like Meta’s Quest or PlayStation VR, you have to use Unity.

Apple Tools

For Apple’s ecosystem, you’ll use SwiftUI to create the UI the user sees and the overall content of your app. RealityKit is the 3D rendering engine that handles materials, 3D objects, and light simulations. You’ll use ARKit for advanced scene understanding, like if you want someone to throw virtual darts and have them collide with their real wall, or do advanced things with hand tracking. But those rich AR features are only available in Full Spaces. There’s also Reality Composer Pro, a 3D content editor that lets you drag things around a 3D scene and make media-rich Spaces or Volumes. It’s like diet-Unity that’s built specifically for this development stack.
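
For a taste of the ARKit side, here’s a hedged sketch of visionOS hand tracking, which only runs in a Full Space. It streams hand anchor updates and reads one joint; authorization prompts and most error handling are omitted, and the function name is just illustrative.

```swift
import ARKit

// Streams hand-tracking data while the app is in a Full Space.
// Requires the hand-tracking usage description and user permission.
func trackHands() async {
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()

    do {
        try await session.run([handTracking])
    } catch {
        print("Failed to start ARKit session: \(error)")
        return
    }

    // Receive hand anchor updates as the user's hands move.
    for await update in handTracking.anchorUpdates {
        let anchor = update.anchor
        guard anchor.chirality == .right,
              let skeleton = anchor.handSkeleton else { continue }

        // For example, read the right index fingertip's transform, which
        // could drive a custom gesture or a virtual dart throw.
        let tip = skeleton.joint(.indexFingerTip)
        print("Right index tip: \(tip.anchorFromJointTransform)")
    }
}
```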

One cool thing about Reality Composer Pro is that it already comes full of assets, materials, and animations. That helps developers who aren’t artists build something quickly, and it should help create a more unified look and feel for everything built with the tool. There are pros and cons to that product decision, but overall it should be helpful.

Existing iOS Apps

If you’re bringing an iPad or iOS app over, it will probably work unmodified as a Window in the Shared Space. If your app supports both iPad and iPhone, the headset will use the iPad version.

To customize your existing iOS app to take better advantage of the headset, you can use the Ornament API to make little floating islands of UI in front of, or beside, your app to make it feel more spatial. Ironically, if your app uses a lot of ARKit features, you’ll likely need to ‘reimagine’ it significantly to work on Vision Pro, as ARKit has been upgraded a lot for the headset.
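
Here’s a small sketch of the Ornament API in SwiftUI; the anchor position and the buttons are placeholders, but it shows the general shape of attaching a floating control strip to a window.

```swift
import SwiftUI

struct OrnamentDemoView: View {
    var body: some View {
        Text("Main window content")
            .padding(80)
            // Attach a small floating island of UI below the window.
            .ornament(attachmentAnchor: .scene(.bottom)) {
                HStack {
                    Button("Previous") { }
                    Button("Next") { }
                }
                .padding()
                .glassBackgroundEffect()
            }
    }
}
```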

If you’re excited about building something new for Vision Pro, my personal opinion is that you should prioritize how your app will provide value across iPad and iOS too. Otherwise you’re losing out on hundreds of millions of users.

Unity

You can build for Vision Pro with the Unity game engine, which is a massive topic. Again, you need to use Unity if you’re building for Vision Pro as well as other headsets like Meta’s Quest or PSVR 2.

Unity supports building Bounded Volumes for the Shared Space, which exist alongside native Vision Pro content, and Unbounded Volumes for immersive content that may leverage advanced AR features. Finally, you can also build more VR-like apps, which give you more control over rendering but seem to lack support for ARKit scene understanding like plane detection. The Volume approach gives RealityKit more control over rendering, so you have to use Unity’s PolySpatial tool to convert materials, shaders, and other features.

Unity support for Vision Pro includes tons of interactions you’d expect to see in VR, like teleporting to a new location or picking up and throwing virtual objects.

Product Design

You could just make an iPad-like app that shows up as a floating window, use the default interactions, and call it a day. But like I said above, content can exist in a wide spectrum of immersion, locations, and use a wide range of inputs. So the combinatorial range of possibilities can be overwhelming.

If you haven’t spent 100 hours in VR, get a Quest 2 or 3 as soon as possible and try everything. It doesn’t matter if you’re a designer, a product manager, or a CEO; you need to get a Quest and spend 100 hours in VR to begin to understand the language of spatial apps.

I highly recommend checking out Hand Physics Lab as a starting point and overview for understanding direct interactions. There are a lot of subtle things it does that imbue virtual objects with a sense of physicality. And the YouTube VR app released in 2019 looks and feels pretty similar to a basic visionOS app; it’s worth checking out.

Keep a diary of what works and what doesn’t.

Ask yourself: ‘What app designs are comfortable, or cause fatigue?’, ‘What apps have the fastest time-to-fun or value?’, ‘What’s confusing and what’s intuitive?’, ‘What experiences would you even bother doing more than once?’ Be brutally honest. Learn from what’s been tried as much as possible.

General Design Advice

I strongly recommend the IDEO-style design thinking process; it works for spatial computing too. You should absolutely try it out if you’re unfamiliar. There’s the Design Kit with resources, and this video which, while dated, is a great example of the process.

The road to spatial computing is a graveyard of utopian ideas that failed. People tend to spend a very long time building grand solutions for the imaginary problems of imaginary users. It sounds obvious, but instead you should try to build something as fast as possible that fills a real human need, and then iteratively improve from there.

Continue on Page 2: Spatial Formats and Interaction »

Apple to Open Locations for Devs to Test Vision Pro This Summer, SDK This Month

Ahead of the Apple Vision Pro’s release in ‘early 2024’, the company says it will open several centers in a handful of locations around the world, giving some developers a chance to test the headset before it’s released to the public.

It’s clear that developers will need time to start building Apple Vision Pro apps ahead of its launch, and it’s also clear that Apple doesn’t have heaps of headsets on hand for developers to start working with right away. In an effort to give developers the earliest possible chance to test their immersive apps, the company says it plans to open ‘Apple Vision Pro Developer Labs’ in a handful of locations around the world.

Starting this summer, the Apple Vision Pro Developer Labs will open in London, Munich, Shanghai, Singapore, Tokyo, and Cupertino.

Apple also says developers will be able to submit a request to have their apps tested on Vision Pro, with testing and feedback being done remotely by Apple.

Image courtesy Apple

Of course, developers still need new tools to build for the headset in the first place. Apple says devs can expect a visionOS SDK and updated versions of Reality Composer and Xcode by the end of June to support development for the headset. That will be accompanied by new Human Interface Guidelines to help developers follow best practices for spatial apps on Vision Pro.

Additionally, Apple says it will make available a Vision Pro Simulator, an emulator that allows developers to see how their apps would look through the headset.

Developers can find more info when it’s ready at Apple’s developer website. Closer to launch Apple says Vision Pro will be available for the public to test in stores.

Xiaomi Unveils Wireless AR Glasses Prototype, Powered by Same Chipset as Meta Quest Pro

Chinese tech giant Xiaomi today showed off a prototype AR headset at Mobile World Congress (MWC) that wirelessly connects to the user’s smartphone, making for what the company calls its “first wireless AR glasses to utilize distributed computing.”

Called Xiaomi Wireless AR Glass Discovery Edition, the device is built upon the same Qualcomm Snapdragon XR2 Gen 1 chipset as Meta’s recently released Quest Pro VR standalone.

While specs are still thin on the ground, the company did offer some info on headline features. For now, Xiaomi is couching it as a “concept technology achievement,” so it may be a while until we see a full spec sheet.

Packing two microOLED displays, the company is boasting “retina-level” resolution, saying its AR glasses pack in 58 pixels per degree (PPD). For reference, Meta Quest Pro has a PPD of 22, while enterprise headset Varjo XR-3 cites a PPD of 70.

The company hasn’t announced the headset’s field of view (FOV), however it says its free-form light-guiding prism design “minimizes light loss and produces clear and bright images with a to-eye brightness of up to 1200 nit.”

Electrochromic lenses are also said to adapt the final image to different lighting conditions, even including a full ‘blackout mode’ that ostensibly allows it to work as a VR headset as well.

Image courtesy Xiaomi

As for input, Xiaomi Wireless AR Glass includes onboard hand-tracking in addition to smartphone-based touch controls. Xiaomi says its optical hand-tracking is designed to let users do things like select and open apps, swipe through pages, and exit apps.

As a prototype, there’s no pricing or availability on the table, however Xiaomi says the lightweight glasses (at 126g) will be available in a titanium-colored design with support for three sizes of nosepieces. An attachable glasses clip will also be available for near-sighted users.

In an exclusive hands-on, XDA Developers said the device felt near production-ready, however one of the issues noted during a seemingly bump-free demo was battery life; the headset had to be charged in the middle of the 30-minute demo. Xiaomi is apparently incorporating a self-developed silicon-oxygen anode battery that is supposedly smaller than a typical lithium-ion battery. While there’s an onboard Snapdragon XR2 Gen 1 chipset, XDA Developers also notes the glasses don’t offer any storage, making a compatible smartphone a requirement for playing AR content.

This isn’t the company’s first stab at XR tech; last summer Xiaomi showed off a pair of consumer smartglasses, called Mijia Glasses Camera, that featured a single heads-up display. Xiaomi’s Wireless AR Glass is however much closer in function to the concept it teased in late 2021, albeit with chunkier free-form light-guiding prisms than the more advanced-looking waveguides teased two years ago.

Xiaomi is working closely with chipmaker Qualcomm to ensure compatibility with Snapdragon Spaces-ready smartphones, which include the Xiaomi 13 and OnePlus 11 5G. Other contributions could come from Lenovo and Motorola, which have also announced their intention to support Snapdragon Spaces.

Qualcomm announced Snapdragon Spaces in late 2021, a software toolkit focused on performance and low-power devices that allows developers to create head-worn AR experiences from the ground up, or add head-worn AR to existing smartphone apps.

Magic Leap Commits to OpenXR & WebXR Support Later This Year on ML2

In an ongoing shift away from a somewhat proprietary development environment on its first headset, Magic Leap has committed to bringing OpenXR support to its Magic Leap 2 headset later this year.

Although Magic Leap 2 is clearly the successor to Magic Leap 1, the goals of the two headsets are quite different. With the first headset, the company attempted to court developers who would build entertainment and consumer-centric apps, and had its own ideas about how its ‘Lumin OS’ should handle apps and how they should be built.

After significant financial turmoil and then revival, the company emerged with a new CEO and very different priorities for Magic Leap 2. Not only would the headset be clearly and unequivocally positioned for enterprise use-cases, but the company also wants to make it much easier to build apps for the headset.

To that end, Magic Leap’s VP of Product Marketing & Developer Programs, Lisa Watts, got on stage at this week’s AWE 2022 to “announce and reaffirm to all of you and to the entire industry [Magic Leap’s] support for open standards, and making our platform very easy to develop for.”

In the session, which was co-hosted by Chair of the OpenXR Working Group, Brent Insko, Watts reiterated that Magic Leap 2 is built atop an “Android Open Source Project-based OS interface standard,” and showed a range of open and accessible tools that developers can currently use to build for the headset.

Toward the end of the year, Watts shared, the company expects Magic Leap 2 to also include support for OpenXR, Vulkan, and WebXR.

Image courtesy Magic Leap

OpenXR is a royalty-free standard that aims to standardize the development of VR and AR applications, making hardware and software more interoperable. The standard has been in development since 2017 and is backed by virtually every major hardware, platform, and engine company in the VR industry, and a growing number of AR players.

In theory, an AR app built to be OpenXR compliant should work on any OpenXR compliant headset—whether that be HoloLens 2 or Magic Leap 2—without any changes to the application.

OpenXR has picked up considerable steam in the VR space and is starting to see similar adoption momentum in the AR space, especially with one of the sector’s most visible companies, Magic Leap, on board.

Niantic is Bringing Its Large-scale AR Positioning System to WebAR Too

This week Niantic announced Lightship VPS, a system designed to accurately localize AR devices at large scale, enabling location-based AR content that can also be persistent and multi-user. While the first implementation of the system will need to be baked into individual apps, the company says it’s bringing the tech to WebAR too.

With the launch of Lightship VPS (visual positioning system), Niantic is staking its claim in the AR space by offering up an underlying map on which developers can build AR apps which are tied to real-world locations. Being able to localize AR apps to real-world locations means those apps can have persistent virtual content that always appears in the same location in the world, even for different users at the same time.

The system is built into Niantic’s Lightship ARDK, which is a set of tools (including VPS) that developers can use to build AR apps. For the time being, VPS can be added to apps that users will download onto their phone, but Niantic says it also plans to make a version of VPS that will work from a smartphone’s web browser. While it’s not ready just yet, the company showed some live demos of the browser-based VPS in action this week.

WebAR is a collection of technologies that allow AR experiences to run directly from a smartphone’s web browser. Building AR into the web means developers can deploy AR experiences that are easy to share and don’t have the friction of going to an app store to download a dedicated app (you can check out an example of a WebAR experience here).

Image courtesy Niantic

Thanks to Niantic’s recent acquisition of WebAR specialist 8th Wall, the company is now poised to make VPS compatible with 8th Wall’s WebAR tools, bringing the same large-scale AR positioning capabilities to web developers. Though it showed off the first demos this week, the company hasn’t said when the WebAR version of VPS will become available.

Niantic Launches City-scale Visual Positioning System for Location-based, Multi-user AR

Niantic today launched its Lightship Visual Positioning System at its first developer summit. The system aims to form an underlying 3D map of the world so that AR devices can share the same frame of reference even on massive scales.

Update (May 24th, 2022): Today during the company’s Lightship Summit event, Niantic launched its Lightship VPS system which is designed to allow developers to localize the position of AR devices with centimeter precision (in enabled areas). With a shared understanding of where devices are, Niantic says its platform will enable developers to build location-based, persistent, and multi-user AR experiences using the underlying VPS map which will hopefully one day span the globe.

Image courtesy Niantic

Today the company is enabling its VPS system in select cities—San Francisco, London, Tokyo, Los Angeles, New York City, and Seattle—comprising some 30,000 VPS localization points which represent 10 × 10 meter playspaces in which developers can build AR experiences. Niantic expects VPS will be active in over 100 cities by the end of the year.

In partnership with developer Liquid City, Niantic launched Reality Browser on iOS and Android devices to function as a demo of the VPS system for attendees of the conference.

Niantic says its VPS map will expand over time thanks to scan data pulled in from players of the company’s existing games, like Ingress and Pokémon GO, as well as data from dedicated ‘surveyors’.

The original article, which overviews the usefulness of a system like VPS for AR, continues below.

Original Article (May 5th, 2022): Many AR devices today are capable of localizing themselves within an arbitrary environment. An AR headset, for instance, looks at the room around you and uses that information to understand how the headset itself is moving through the space.

But if you want to enable multiple devices to interact in a shared space, both need to be able to localize themselves not just to the environment, but with regards to one another. Essentially, you need both devices to share the same map so that both users see the same thing happening in the same place in the real world.

That’s the goal of Niantic’s Lightship Visual Positioning system, which the company says will allow AR devices to tap into a shared digital map to establish their real-world position with “centimeter precision.” While GPS would be far too inaccurate for the job, such a system would allow devices to understand if they’re in a shared space together, allowing for content to be synchronized between the two for multiplayer and persistent content.

Niantic has been talking about its Visual Positioning System for some time now, but the company says it will first launch the feature as part of its Lightship AR development platform at the end of the month. That will coincide with the company’s first developer event, Lightship Summit, which is happening May 24th & 25th in San Francisco.

Although Niantic has been pitching its Visual Positioning System as a ‘world-scale’ solution for syncing AR content between devices, out of the gate it will be much more limited. For now the system is only approaching ‘city-scale’, as the company says it will initially function only “at certain Niantic Wayspots in select cities.” The company plans to expand coverage of the Visual Positioning System by crowdsourcing mapping data from the devices that use it, though it’s not clear how quickly that data can be transformed in the way needed to expand the map.

In theory, the system could enable persistent AR content at large scales, which could allow anyone in the area to see the same things (if they’re using the same app), like the concept above that the company has shown previously.

It will surely be some time yet before the Lightship Visual Positioning System achieves anything close to being truly ‘world-scale’. However, the company has one major potential advantage: it could tap into data from its existing games—like Pokémon Go and the upcoming Peridot—to move at a greater mapping pace than almost any other company could. As far as we know, that’s not happening at the moment, but it could be in the works.
