Leaked Xbox Documents Show XR Interest But No Immediate Plans

Leaked documents relating to Microsoft’s business strategy for Xbox show the company eyeing XR technology but continuing to keep it at arm’s length.

While Microsoft has previously taken considerable steps into XR with both HoloLens and the Windows Mixed Reality platform on PC, the company’s flagship gaming division, Xbox, has notably not joined the fray.

Over the years Xbox leadership has repeatedly pushed back on XR interest, saying the tech doesn’t yet have a large enough audience to warrant investment. And while it doesn’t look like we should expect anything relating to XR from Xbox in the near future, the company is at least continuing to eye the tech as a potential opportunity.

Road to VR reviewed the entirety of a trove of documents that leaked this week in connection with an ongoing Federal Trade Commission v. Microsoft court case. The documents, which reveal a significant portion of Microsoft’s long-term plans for the Xbox brand, show the company is still skeptical of XR but not discounting it in the long run.

In a mid-2022 ‘Gaming Strategy Review’ document, Xbox pointed to “AR / VR” as one of a handful of “opportunities” the company was mulling as part of its “early thoughts on [the] next generation of gaming.” In the same section the company pointed to tech like cloud gaming and ML & AI as potential areas of strategic focus.

In another section of the same document the company highlighted Windows Mixed Reality, OpenXR, WebVR, and HoloLens among many platforms and services that Xbox can leverage to build its “next gen platform for immersive apps and games.” Given the context of the document, however, it doesn’t seem that Xbox is specifically referring to XR when using the word “immersive.”

While Xbox has mentioned XR as a future opportunity, the company's tone remains deeply skeptical that the tech has achieved a meaningful addressable audience.

In another section of the same document which overviewed Xbox’s competitors, the company pointed to Meta’s billions of dollars of investments into XR, but concluded by saying, “we view virtual reality as a niche gaming experience at this time.”

Another document from mid-2022, which overviewed the company's long-term plans for Xbox all the way through 2030, noted that Microsoft wanted to expand its hardware portfolio to include new device categories, but nothing on that long-term roadmap pointed to any XR hardware.

While the leaked documents did focus on long timelines, business is always dynamic and priorities can shift quickly, so it’s important to remember that the documents are just a snapshot of Xbox’s view in mid-2022. With the more recent introduction of devices like Apple Vision Pro, it’s likely that Xbox is looking even more closely at how important XR may be to its future portfolio.

Apple Joins Pixar, NVIDIA, & More to “accelerate next generation of AR experiences” with 3D File Protocol

Today, big tech companies including Apple, Pixar, Adobe, Autodesk, and NVIDIA announced the formation of the Alliance for OpenUSD (AOUSD), which is dedicated to promoting the standardization and development of a 3D file protocol that Apple says will "help accelerate the next generation of AR experiences."

NVIDIA has been an early supporter of Pixar’s Universal Scene Description (USD), stating last year it thinks Pixar’s solution has the potential to become the “HTML of the metaverse.”

Much like HTML forms a sort of description of a webpage—being hostable anywhere on the Internet and retrievable/renderable locally by a web browser—USD can be used to describe complex virtual scenes, allowing it to be similarly retrieved and rendered on a local machine.
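
To make the analogy concrete, here's the canonical 'hello world' of USD, using the open-source Python bindings (the pxr module) that ship with OpenUSD; the file and prim names are our own:

```python
# Minimal USD scene built with OpenUSD's open-source pxr Python bindings.
from pxr import Usd, UsdGeom

# A "stage" is USD's container for a composed scene.
stage = Usd.Stage.CreateNew("hello.usda")

# Define a transform prim with a sphere beneath it.
UsdGeom.Xform.Define(stage, "/hello")
sphere = UsdGeom.Sphere.Define(stage, "/hello/world")

# Attributes are typed and can carry animated time samples.
sphere.GetRadiusAttr().Set(2.0)

# Serialize the scene to human-readable .usda text on disk.
stage.GetRootLayer().Save()
```

The resulting .usda file is plain text that any USD-aware tool can open and render, which is precisely the kind of interoperability the alliance is setting out to standardize.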

Here's how the alliance describes its new OpenUSD initiative:

Created by Pixar Animation Studios, OpenUSD is a high-performance 3D scene description technology that offers robust interoperability across tools, data, and workflows. Already known for its ability to collaboratively capture artistic expression and streamline cinematic content production, OpenUSD’s power and flexibility make it an ideal content platform to embrace the needs of new industries and applications.

“Universal Scene Description was invented at Pixar and is the technological foundation of our state-of-the-art animation pipeline,” said Steve May, Chief Technology Officer at Pixar and Chairperson of AOUSD. “OpenUSD is based on years of research and application in Pixar filmmaking. We open-sourced the project in 2016, and the influence of OpenUSD now expands beyond film, visual effects, and animation and into other industries that increasingly rely on 3D data for media interchange. With the announcement of AOUSD, we signal the exciting next step: the continued evolution of OpenUSD as a technology and its position as an international standard.”

Housed by the Linux Foundation affiliate Joint Development Foundation (JDF), the alliance is hoping to attract a diverse range of companies and organizations to actively participate in shaping the future of OpenUSD. For now it counts Apple, Pixar, Adobe, Autodesk, and NVIDIA as founding members, with general members including Epic Games, Unity, Foundry, Ikea, SideFX, and Cesium.

“OpenUSD will help accelerate the next generation of AR experiences, from artistic creation to content delivery, and produce an ever-widening array of spatial computing applications,” said Mike Rockwell, Apple’s VP of the Vision Products Group. “Apple has been an active contributor to the development of USD, and it is an essential technology for the groundbreaking visionOS platform, as well as the new Reality Composer Pro developer tool. We look forward to fostering its growth into a broadly adopted standard.”

Khronos Group, the consortium behind the OpenXR standard, previously launched a similar USD initiative via its own Metaverse Standards Forum. It's unclear how much overlap these initiatives will have, as that project was supported by AOUSD founders Adobe, Autodesk, and NVIDIA in addition to a wide swath of industry movers, such as Meta, Microsoft, Sony, Qualcomm, and AMD. Notably missing from the Metaverse Standards Forum was support from Apple and Pixar themselves.

We're hoping to learn more at a long-form presentation of AOUSD during the Autodesk Vision Series on August 8th. There are a host of events leading up to SIGGRAPH 2023 though, which runs from August 6th to 10th, so we may learn more at any one of the companies' own presentations on USD.

Magic Leap Commits to OpenXR & WebXR Support Later This Year on ML2

In an ongoing shift away from a somewhat proprietary development environment on its first headset, Magic Leap has committed to bringing OpenXR support to its Magic Leap 2 headset later this year.

Although Magic Leap 2 is clearly the successor to Magic Leap 1, the goals of the two headsets are quite different. With the first headset the company attempted to court developers who would build entertainment and consumer-centric apps, and had its own ideas about how its 'Lumin OS' should handle apps and how they should be built.

After significant financial turmoil and then revival, the company emerged with a new CEO and very different priorities for Magic Leap 2. Not only is the headset clearly and unequivocally positioned for enterprise use-cases, the company also wants to make it much easier to build apps for the headset.

To that end Magic Leap's VP of Product Marketing & Developer Programs, Lisa Watts, got on stage at this week's AWE 2022 to "announce and reaffirm to all of you and to the entire industry [Magic Leap's] support for open standards, and making our platform very easy to develop for."

In the session, which was co-hosted by Chair of the OpenXR Working Group, Brent Insko, Watts reiterated that Magic Leap 2 is built atop an “Android Open Source Project-based OS interface standard,” and showed a range of open and accessible tools that developers can currently use to build for the headset.

Toward the end of the year, Watts shared, the company expects Magic Leap 2 to also include support for OpenXR, Vulkan, and WebXR.

Image courtesy Magic Leap

OpenXR is a royalty-free standard that aims to standardize the development of VR and AR applications, making hardware and software more interoperable. The standard has been in development since 2017 and is backed by virtually every major hardware, platform, and engine company in the VR industry, as well as a growing number of AR players.

In theory, an AR app built to be OpenXR compliant should work on any OpenXR compliant headset—whether that be HoloLens 2 or Magic Leap 2—without any changes to the application.
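
OpenXR itself is a C API, but the portability promise is easy to sketch conceptually: the application codes against one standard interface, and each vendor ships a conforming runtime behind it. The classes below are invented purely for illustration and are not the actual OpenXR API:

```python
# Conceptual toy only -- invented names, NOT the real OpenXR API.
from abc import ABC, abstractmethod

class XrRuntime(ABC):
    """What a conforming runtime must provide, per the standard."""
    @abstractmethod
    def begin_session(self) -> None: ...
    @abstractmethod
    def head_pose(self) -> tuple[float, float, float]: ...

class HoloLens2Runtime(XrRuntime):
    def begin_session(self) -> None:
        print("HoloLens 2 runtime: session started")
    def head_pose(self) -> tuple[float, float, float]:
        return (0.0, 1.6, 0.0)  # stubbed tracking data

class MagicLeap2Runtime(XrRuntime):
    def begin_session(self) -> None:
        print("Magic Leap 2 runtime: session started")
    def head_pose(self) -> tuple[float, float, float]:
        return (0.0, 1.6, 0.0)  # stubbed tracking data

def ar_app(runtime: XrRuntime) -> None:
    """The app only ever sees the standard interface."""
    runtime.begin_session()
    print("head at", runtime.head_pose())

# The same unmodified app runs against either vendor's runtime.
ar_app(HoloLens2Runtime())
ar_app(MagicLeap2Runtime())
```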

OpenXR has picked up considerable steam in the VR space and is starting to see similar adoption momentum in the AR space, especially with one of the sector’s most visible companies, Magic Leap, on board.


Niantic Launches Visual Positioning System For ‘Global Scale’ AR Experiences

Niantic‘s new Lightship Visual Positioning System (VPS) will facilitate interactions with ‘global scale’ persistent and synced AR content on mobile devices.

Niantic launched Lightship VPS during its developer conference this week; you can see footage of phone-based AR apps using the new features in the video embedded above, starting at the 50:20 mark. The system is essentially a new type of map that developers can use for AR experiences, with the aim of providing location-based persistent content that's synced up for all users.

Niantic is building the map from scanned visual data, which it says will offer "centimeter-level" accuracy when pinpointing the location and orientation of a user (or multiple users, in relation to each other) at a given location. The technology is similar to large-scale visual positioning systems in active development at Google and Snap.

While the promise of the system is to work globally, it's not quite there just yet — as of launch yesterday, VPS is available at around 30,000 public locations for developers to hook into. These locations are mainly spread across six key cities — San Francisco, London, Tokyo, Los Angeles, New York City, and Seattle — and include "parks, paths, landmarks, local businesses and more."

To expand the map, Niantic developed the Wayfarer app, now available in public beta, which allows developers to scan in new locations using their phones. Niantic has also launched a surveyor program in the aforementioned six launch cities to expedite the process.

“With only a single image frame from the end user’s camera, Lightship VPS swiftly and accurately determines a user’s precise, six-dimensional location,” according to a Niantic blog post.
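
"Six-dimensional" here refers to the six degrees of freedom of a rigid pose: three for position and three for orientation. As a rough illustration (our own sketch, not Niantic's Lightship API), a VPS result boils down to something like this:

```python
# Illustrative only -- not Niantic's Lightship API.
from dataclasses import dataclass

@dataclass
class VpsPose:
    """A 6DoF pose: where the user is and which way they face."""
    x: float      # meters east of the location's anchor point
    y: float      # meters up
    z: float      # meters north
    roll: float   # rotation about the forward axis, degrees
    pitch: float  # rotation about the sideways axis, degrees
    yaw: float    # rotation about the vertical axis, degrees

# "Centimeter-level" accuracy means position errors around 0.01 m,
# versus several meters for GPS -- the difference between AR content
# drifting down the street and staying pinned to a doorway.
pose = VpsPose(x=1.52, y=1.60, z=-3.07, roll=0.0, pitch=-2.5, yaw=41.0)
print(pose)
```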

Scaling VPS to a global level is a lofty goal for Niantic, but it could meaningfully improve mobile AR experiences, unlocking far more interesting content by accurately pinning it to real-world locations.

You can read more about Lightship VPS over on the Niantic blog.

Catch Road to VR Co-founder Ben Lang on the Between Realities Podcast

Road to VR co-founder Ben Lang recently joined the crew of the Between Realities podcast.

Bringing more than a decade of experience in the XR industry as co-founder of Road to VR, Ben Lang joined hosts Alex VR and Skeeva on Season 5 Episode 15 of the Between Realities podcast. The trio spoke about the impetus for founding the publication, Meta’s first retail store, the state of competition in the XR industry, privacy concerns for the metaverse, and even some musing on simulation theory. You can check out the full episode below or in the Between Realities episode feed on your favorite podcast platform.

In the podcast Lang speaks of a recent article about scientists who believe it's possible to experimentally test simulation theory, which you can find here.


Reality Labs Chief Scientist Outlines a New Compute Architecture for True AR Glasses

Speaking at the IEDM conference late last year, Meta Reality Labs’ Chief Scientist Michael Abrash laid out the company’s analysis of how contemporary compute architectures will need to evolve to make possible the AR glasses of our sci-fi conceptualizations.

While there are some AR 'glasses' on the market today, none of them are truly the size of a normal pair of glasses (even a bulky pair). The best AR headsets available today—the likes of HoloLens 2 and Magic Leap 2—are still closer to goggles than glasses and are too heavy to be worn all day (not to mention the looks you'd get from the crowd).

If we're going to build AR glasses that are truly glasses-sized, with all-day battery life and the features needed for compelling AR experiences, it's going to require a "range of radical improvements—and in some cases paradigm shifts—in both hardware […] and software," says Michael Abrash, Chief Scientist at Reality Labs, Meta's XR organization.

That is to say: Meta doesn’t believe that its current technology—or anyone’s for that matter—is capable of delivering those sci-fi glasses that every AR concept video envisions.

But, the company thinks it knows where things need to head in order for that to happen.

Abrash, speaking at the IEDM 2021 conference late last year, laid out the case for a new compute architecture that could meet the needs of truly glasses-sized AR devices.

Follow the Power

The core reason to rethink how computing should be handled on these devices comes from a need to drastically reduce power consumption to meet battery life and heat requirements.

“How can we improve the power efficiency [of mobile computing devices] radically by a factor of 100 or even 1,000?” he asks. “That will require a deep system-level rethinking of the full stack, with end-to-end co-design of hardware and software. And the place to start that rethinking is by looking at where power is going today.”

To that end, Abrash laid out a graph comparing the power consumption of low-level computing operations.

Image courtesy Meta

As the chart highlights, the most energy intensive computing operations are in data transfer. And that doesn’t mean just wireless data transfer, but even transferring data from one chip inside the device to another. What’s more, the chart uses a logarithmic scale; according to the chart, transferring data to RAM uses 12,000 times the power of the base unit (which in this case is adding two numbers together).

Bringing it all together, the circular graphs on the right show that techniques essential to AR—SLAM and hand-tracking—use most of their power simply moving data to and from RAM.

“Clearly, for low power applications [such as in lightweight AR glasses], it is critical to reduce the amount of data transfer as much as possible,” says Abrash.
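
To put the scale of that gap in concrete terms, here's a back-of-envelope sketch; the absolute energy figures are illustrative assumptions, not Meta's numbers, and only the roughly 12,000× ratio between an add and a RAM transfer comes from the talk:

```python
# Back-of-envelope energy budget for a toy vision workload.
# Absolute numbers are illustrative assumptions; only the ~12,000x
# add-vs-RAM ratio is taken from the talk.
ADD_ENERGY_PJ = 1.0                       # assume ~1 pJ per add
RAM_ENERGY_PJ = 12_000 * ADD_ENERGY_PJ    # per byte moved to RAM

pixels = 640 * 480          # one camera frame
ops_per_pixel = 50          # assumed arithmetic per pixel
bytes_moved = pixels * 4    # assume the frame crosses to RAM once

compute_uj = pixels * ops_per_pixel * ADD_ENERGY_PJ / 1e6
transfer_uj = bytes_moved * RAM_ENERGY_PJ / 1e6

print(f"compute:  {compute_uj:8.1f} microjoules")
print(f"transfer: {transfer_uj:8.1f} microjoules")
print(f"transfer / compute = {transfer_uj / compute_uj:.0f}x")
```

Even in this toy model, simply moving the frame to RAM costs hundreds of times more energy than doing substantial math on every pixel of it.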

To make that happen, he says a new compute architecture will be required which—rather than shuffling large quantities of data between centralized computing hubs—more broadly distributes the computing operations across the system in order to minimize wasteful data transfer.

Compute Where You Least Expect It

A starting point for a distributed computing architecture, Abrash says, could begin with the many cameras that AR glasses need for sensing the world around the user. This would involve doing some preliminary computation on the camera sensor itself before sending only the most vital data across power hungry data transfer lanes.

Image courtesy Meta

To make that possible Abrash says it’ll take co-designed hardware and software, such that the hardware is designed with a specific algorithm in mind that is essentially hardwired into the camera sensor itself—allowing some operations to be taken care of before any data even leaves the sensor.

Image courtesy Meta
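
A toy model shows the payoff: if the sensor can reduce a frame to a handful of keypoints before anything leaves the chip, the data crossing the power-hungry link shrinks by orders of magnitude. All sizes below are our own illustrative assumptions:

```python
# Toy comparison of data crossing the sensor's output link, with and
# without on-sensor processing. All sizes are illustrative assumptions.
FRAME_W, FRAME_H, BYTES_PER_PIXEL = 640, 480, 1   # grayscale frame
FEATURES_PER_FRAME = 200                          # e.g. corner keypoints
BYTES_PER_FEATURE = 8                             # x, y, descriptor bits

raw_bytes = FRAME_W * FRAME_H * BYTES_PER_PIXEL
feature_bytes = FEATURES_PER_FRAME * BYTES_PER_FEATURE

print(f"ship raw frames:    {raw_bytes:>8,} bytes per frame")
print(f"ship features only: {feature_bytes:>8,} bytes per frame")
print(f"reduction:          {raw_bytes / feature_bytes:,.0f}x less data")
```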

"The combination of requirements for lowest power, best performance, and smallest possible form-factor make XR sensors the new frontier in the image sensor industry," Abrash says.



Epic Games Offers 3D Scanning On Smartphones Via App In Limited Beta

Epic Games unveiled its new 3D scanning app for smartphones called RealityScan.

The app uses smartphone cameras and photos to create high-fidelity 3D photogrammetric models of real-world objects for use on digital platforms. You can take a closer look at how it works in Epic’s new promotional video, embedded below.

In the video, the user takes a number of photos of an object — in this instance, an armchair — and the app then creates a 3D model that can be used in digital experiences and scaled and positioned as required.

Epic says that the app “walks users through the scanning experience with interactive feedback, AR guidance, and data quality-checks” and can then create a model “almost instantly.” The resulting models can be uploaded to Sketchfab (which Epic acquired mid-last year) and used across many platforms, including VR and AR.
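
Epic hasn't detailed RealityScan's internals, but the classic first step of photogrammetry (finding the same physical points across overlapping photos so that camera poses and 3D structure can be triangulated) can be sketched with OpenCV; the file names here are placeholders:

```python
# A sketch of photogrammetry's first step: matching features between
# two overlapping photos. RealityScan's actual pipeline is proprietary;
# the file names are placeholders.
import cv2

img1 = cv2.imread("chair_view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("chair_view2.jpg", cv2.IMREAD_GRAYSCALE)

# Detect scale-invariant keypoints and compute descriptors.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors, keeping only unambiguous matches (Lowe's ratio test).
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

print(f"{len(good)} reliable point correspondences")
# Repeated across dozens of photos, such correspondences let a solver
# recover every camera pose and triangulate a dense 3D model.
```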

The app was developed by Epic in collaboration with CapturingReality (acquired by Epic last year) and Quixel. It is now in limited beta on iOS — the first 10,000 users will be granted access on a first-come, first-served basis, with wider access rolling out later in the spring.

This isn't the first app to offer a form of 3D scanning on smartphones, but it is perhaps the most high-profile crack at the concept yet. 3D object capture will likely play a big role in VR and AR's future. Headsets like the LiDAR-equipped Varjo XR-3 allow users to scan their environment and present it to others in real time, while games like Puzzling Places showcase the creative potential of photogrammetric data, offering puzzles composed of real-world objects and places scanned into the game as 3D models.

You can join the limited beta for RealityScan on iOS now, while spots last, via TestFlight. Android support will arrive later this year. You can read more about RealityScan here.

GDC Day 4: ARVORE, Hyper Dash, Emerge Wave 1 Haptics & More

The fourth and final day of GDC 2022 has come and gone. Don’t be too sad though — we’ve got lots of interesting interviews with VR developers straight from the show floor to cheer you up.

It was a great week at GDC, with lots of interesting news over the course of the show's four days. Alex and Skeeva from Between Realities were checking it all out for us as UploadVR correspondents, pulling some fantastic developers aside for interviews each day.

On day one, they spoke to Walkabout Mini Golf developers Mighty Coconut, Zenith developer Ramen VR and more.

Day two saw them speak to Polyarc about Moss: Book 2, along with Fast Travel Games on Cities VR and Virtuoso. Day three brought some hardware into the mix, including demos and talks with the developers of the upcoming Lynx R1 mixed reality headset. They also caught up with Tilt Five and Owlchemy Labs, developer of Cosmonious High (releasing later this week).

For the fourth and final day, Alex and Skeeva first checked in with ARVORE, developer of last year's Yuki and the Pixel Ripped series. When asked about any new Pixel Ripped content or releases in the near future, Rodrigo Terra from ARVORE was tight-lipped, but did mention an upcoming collaboration with Holoride (which makes VR experiences designed to take place inside moving cars) that might satisfy fans of the series.

Rodrigo also said that the studio is working on a few new projects, which could release this year or next, so keep an eye out.

Alex and Skeeva also spoke to the developers of Hyper Dash, who revealed that a new free game mode called 'Ball' will release for the title on April 1. Triangle Factory CEO and co-founder Timothy Vanherbergen insisted it wasn't a joke, despite the release date, and described the mode as "Rocket League but with guns."

Last but not least, there were some interesting discussions with the developers of the Emerge Wave 1 haptic device, which uses sound and vibrations to provide a new kind of haptic feedback, and the developer of Finger Guns, a first-person shooter using hand-tracking technology coming to Quest this year.

What was your favorite news or reveal from this year’s GDC? Let us know in the comments below.

GDC Day 3: Cosmonious High, Lynx Mixed Reality Headset & More

Another day, another round of GDC 2022 coverage. Today is day three and the Between Realities crew hit the show floor again to bring you more interviews with VR/AR developers.

If you missed the previous two days, they were pretty jam-packed already. Day one saw Alex and Skeeva talk to the developers of Walkabout Mini Golf, Zenith, and more, and day two brought us interviews with Polyarc (Moss: Book 2), Fast Travel Games (Cities VR and Virtuoso), and others.

Alex and Skeeva kept up the incredible pace today, speaking first to Owlchemy Labs (Job Simulator, Vacation Simulator) about their new game Cosmonious High, which releases next week.

They also caught up with the teams behind Patchworld: Sound of the Metaverse, Altair Breaker and Snapdragon Spaces.

Last, but definitely not least, Alex and Skeeva gave the upcoming Lynx R1 mixed reality headset a try and spoke to Stan Larroque from Lynx about the hardware.

When asked how far along everything was, Larroque said that things were "pretty mature" on the software side and that they were "in the process of manufacturing" the hardware at the moment. The headsets were meant to ship next month, in April, but Lynx has been affected by the ongoing global supply chain issues, which will mean a short delay.

“We were supposed to deliver in April but we’re going to face some issues with the supply chain,” said Larroque. “I think you can expect the first headsets to come between June and July. It’s a matter of weeks, we have some weeks of delays here.”

Keep an eye out for our GDC wrap-up show tomorrow, where Skeeva and Alex from Between Realities will join Ian live in the UploadVR virtual studio to discuss their hands-on experiences over the last few days.

You can catch that live on our YouTube channel tomorrow at 4pm Pacific.

Snap Acquires Brain-Computer Interface Startup NextMind

Snap announced it’s acquired neurotech startup NextMind, a Paris-based company known for creating a $400 pint-sized brain-computer interface (BCI).

In a blog post, Snap says NextMind will help drive “long-term augmented reality research efforts within Snap Lab,” the company’s hardware team that’s currently building AR devices.

“Snap Lab’s programs explore possibilities for the future of the Snap Camera, including Spectacles. Spectacles are an evolving, iterative research and development project, and the latest generation is designed to support developers as they explore the technical bounds of augmented reality.”

Snap hasn’t detailed the terms or price of the NextMind acquisition, saying only that the team will continue to operate out of Paris, France. According to The Verge, NextMind will also be discontinuing production of its BCI.

Photo captured by Road to VR

Despite increasingly accurate and reliable hand- and eye-tracking hardware, input for AR headsets still isn't really a solved problem. It's not certain whether NextMind's tech, which is based on electroencephalography (EEG), was the complete solution either.

NextMind's BCI is non-invasive and slim enough to integrate into the strap of an XR headset, something that companies like Valve have been interested in for years.

Granted, there's a scalp, connective tissue, and a skull to read through, which limits the kit's imaging resolution. Even so, the tech allowed NextMind to offer some basic inputs like simple UI interaction—still very far off from the sort of 'read/write' capabilities that Elon Musk's Neuralink is aiming for with its invasive brain implant.
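
NextMind's decoder is proprietary, but visual-cortex BCIs of this kind typically key off steady-state visual evoked potentials: when the user focuses on a UI element flickering at a known rate, that frequency becomes measurable in the EEG. Here's a toy illustration of the principle on synthetic data, an assumption about the general approach rather than NextMind's actual algorithm:

```python
# Toy SSVEP-style decoding on synthetic data. NextMind's real decoder
# is proprietary; this only illustrates the general principle.
import numpy as np
from scipy.signal import welch

FS = 250                         # sample rate, Hz
CANDIDATES = [7.0, 11.0, 13.0]   # flicker rates of three UI targets

# Synthesize 4 seconds of "EEG": noise plus a weak 11 Hz response,
# as if the user were staring at the 11 Hz target.
t = np.arange(4 * FS) / FS
rng = np.random.default_rng(0)
eeg = 0.3 * np.sin(2 * np.pi * 11.0 * t) + rng.normal(0.0, 1.0, t.size)

# Estimate the power spectrum and pick the strongest candidate.
freqs, power = welch(eeg, fs=FS, nperseg=2 * FS)
scores = {f: power[np.argmin(np.abs(freqs - f))] for f in CANDIDATES}
print(f"user is focusing on the {max(scores, key=scores.get)} Hz target")
```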

Snap has been collecting more companies to help build out its next pair of AR glasses. In addition to NextMind, Snap acquired AR waveguide startup WaveOptics for over $500 million last May, and LCOS maker Compound Photonics in January.

Snap is getting close too. Its most recent Spectacles (fourth gen) include displays for real-time AR in addition to integrated voice recognition, optical hand tracking, and a side-mounted touchpad for UI selection.
