Magic Leap Commits to OpenXR & WebXR Support Later This Year on ML2

In an ongoing shift away from a somewhat proprietary development environment on its first headset, Magic Leap has committed to bringing OpenXR support to its Magic Leap 2 headset later this year.

Although Magic Leap 2 is clearly the successor to Magic Leap 1, the goals of the two headsets are quite different. With the first headset the company attempted to court developers who would build entertainment and consumer-centric apps, and it had its own ideas about how its ‘Lumin OS’ should handle apps and how they should be built.

After significant financial turmoil and then revival, the company emerged with a new CEO and very different priorities for Magic Leap 2. Not only would the headset be clearly and unequivocally positioned for enterprise use-cases, the company also wants to make it much easier to build apps for the headset.

To that end Magic Leap’s VP of Product Marketing & Developer Programs, Lisa Watts, got on stage at this week’s AWE 2022 to “announce and reaffirm to all of you and to the entire industry [Magic Leap’s] support for open standards, and making our platform very easy to develop for.”

In the session, which was co-hosted by Chair of the OpenXR Working Group, Brent Insko, Watts reiterated that Magic Leap 2 is built atop an “Android Open Source Project-based OS interface standard,” and showed a range of open and accessible tools that developers can currently use to build for the headset.

Toward the end of the year, Watts shared, the company expects Magic Leap 2 to also include support for OpenXR, Vulkan, and WebXR.

Image courtesy Magic Leap

OpenXR is a royalty-free standard that aims to standardize the development of VR and AR applications, making hardware and software more interoperable. The standard has been in development since 2017 and is backed by virtually every major hardware, platform, and engine company in the VR industry, and a growing number of AR players.

In theory, an AR app built to be OpenXR compliant should work on any OpenXR compliant headset—whether that be HoloLens 2 or Magic Leap 2—without any changes to the application.
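To make that portability concrete, here is a minimal sketch using the core OpenXR C API. It is a sketch only: the application name is arbitrary, and no vendor-specific extensions are used, so the same bootstrap should work against any conformant runtime, Magic Leap 2 included.

```c
/* Minimal OpenXR bootstrap: nothing here names a vendor, so the same
   code runs on any conformant runtime (link against the OpenXR loader). */
#include <stdio.h>
#include <string.h>
#include <openxr/openxr.h>

int main(void) {
    XrInstanceCreateInfo create_info = { XR_TYPE_INSTANCE_CREATE_INFO };
    strncpy(create_info.applicationInfo.applicationName, "PortableARDemo",
            XR_MAX_APPLICATION_NAME_SIZE);
    create_info.applicationInfo.apiVersion = XR_CURRENT_API_VERSION;

    XrInstance instance = XR_NULL_HANDLE;
    if (XR_FAILED(xrCreateInstance(&create_info, &instance))) {
        fprintf(stderr, "no OpenXR runtime found\n");
        return 1;
    }

    /* Ask whichever runtime is installed for a head-mounted system. */
    XrSystemGetInfo system_info = { XR_TYPE_SYSTEM_GET_INFO };
    system_info.formFactor = XR_FORM_FACTOR_HEAD_MOUNTED_DISPLAY;
    XrSystemId system_id = XR_NULL_SYSTEM_ID;
    if (XR_SUCCEEDED(xrGetSystem(instance, &system_info, &system_id))) {
        XrSystemProperties props = { XR_TYPE_SYSTEM_PROPERTIES };
        xrGetSystemProperties(instance, system_id, &props);
        printf("running on: %s\n", props.systemName);
    }

    xrDestroyInstance(instance);
    return 0;
}
```

A real application would go on to create a session, choose a graphics binding, and run a frame loop, but none of those steps change per vendor either.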

OpenXR has picked up considerable steam in the VR space and is starting to see similar adoption momentum in the AR space, especially with one of the sector’s most visible companies, Magic Leap, on board.

Niantic Launches Visual Positioning System For ‘Global Scale’ AR Experiences

Niantic‘s new Lightship Visual Positioning System (VPS) will facilitate interactions with ‘global scale’ persistent and synced AR content on mobile devices.

Niantic launched Lightship VPS during its developer conference this week, and you can see some footage of phone-based AR apps using its new features in the video embedded above, starting from the 50:20 mark. The system is essentially a new type of map that developers can use for AR experiences, with the aim of providing location-based persistent content that’s synced up for all users.

Niantic is building the map from scanned visual data, which the company says will offer “centimeter-level” accuracy when pinpointing the location and orientation of users (or multiple users, in relation to each other) at a given location. The technology is similar to the large-scale visual positioning systems in active development at Google and Snap.

While the promise of the system is to work globally, it’s not quite there just yet — as of launch yesterday, Niantic’s VPS covers around 30,000 public locations that developers can hook into. These locations are mainly spread across six key cities — San Francisco, London, Tokyo, Los Angeles, New York City and Seattle — and include “parks, paths, landmarks, local businesses and more.”

To expand the map, Niantic developed the Wayfarer app, now in public beta, which allows developers to scan in new locations using their phones. Niantic has also launched a surveyor program in the aforementioned six launch cities to expedite the process.

“With only a single image frame from the end user’s camera, Lightship VPS swiftly and accurately determines a user’s precise, six-dimensional location,” according to a Niantic blog post.
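That “six-dimensional” figure is simply a 6DoF (six-degrees-of-freedom) pose: three numbers for position and three for orientation. Niantic’s real interface is its Unity-based Lightship ARDK, so the C sketch below is hypothetical and only illustrates the shape of a VPS query and its result:

```c
/* Hypothetical sketch only -- Niantic's actual ARDK interface is
   Unity/C# and differs. This shows the shape of a VPS query: one
   camera frame in, a six-degree-of-freedom pose out. */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    double x, y, z;           /* position relative to the scanned location */
    double pitch, yaw, roll;  /* orientation: the other three "dimensions" */
} Vps6DofPose;

/* Placeholder: a real implementation would match the frame's visual
   features against the prebuilt map of the location. */
static int vps_localize(const uint8_t *frame, size_t frame_len,
                        Vps6DofPose *out) {
    (void)frame; (void)frame_len;
    *out = (Vps6DofPose){ 1.25, 0.0, -3.40, 0.0, 90.0, 0.0 };
    return 0; /* 0 = localized successfully */
}

int main(void) {
    static uint8_t frame[640 * 480]; /* stand-in for a camera image */
    Vps6DofPose pose;
    if (vps_localize(frame, sizeof frame, &pose) == 0)
        printf("position (%.2f, %.2f, %.2f) m, yaw %.0f deg\n",
               pose.x, pose.y, pose.z, pose.yaw);
    return 0;
}
```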

Scaling VPS to a global level is a lofty goal for Niantic, but it could meaningfully improve mobile AR, with accurate maps that pin content to real-world locations unlocking far more interesting experiences.

You can read more about Lightship VPS over on the Niantic blog.

Catch Road to VR Co-founder Ben Lang on the Between Realities Podcast

Road to VR co-founder Ben Lang recently joined the crew of the Between Realities podcast.

Bringing more than a decade of experience in the XR industry as co-founder of Road to VR, Ben Lang joined hosts Alex VR and Skeeva on Season 5 Episode 15 of the Between Realities podcast. The trio spoke about the impetus for founding the publication, Meta’s first retail store, the state of competition in the XR industry, and privacy concerns for the metaverse, and even mused a bit on simulation theory. You can check out the full episode below or in the Between Realities episode feed on your favorite podcast platform.

In the podcast Lang mentions a recent article about scientists who believe it’s possible to experimentally test simulation theory, which you can find here.

Reality Labs Chief Scientist Outlines a New Compute Architecture for True AR Glasses

Speaking at the IEDM conference late last year, Meta Reality Labs’ Chief Scientist Michael Abrash laid out the company’s analysis of how contemporary compute architectures will need to evolve to make possible the AR glasses of our sci-fi conceptualizations.

While there are some AR ‘glasses’ on the market today, none of them are truly the size of a normal pair of glasses (even a bulky pair). The best AR headsets available today—the likes of HoloLens 2 and Magic Leap 2—are still closer to goggles than glasses and are too heavy to be worn all day (not to mention the looks you’d get from the crowd).

If we’re going to build AR glasses that are truly glasses-sized, with all-day battery life and the features needed for compelling AR experiences, it’s going to require a “range of radical improvements—and in some cases paradigm shifts—in both hardware […] and software,” says Michael Abrash, Chief Scientist at Reality Labs, Meta’s XR organization.

That is to say: Meta doesn’t believe that its current technology—or anyone’s for that matter—is capable of delivering those sci-fi glasses that every AR concept video envisions.

But, the company thinks it knows where things need to head in order for that to happen.

Abrash, speaking at the IEDM 2021 conference late last year, laid out the case for a new compute architecture that could meet the needs of truly glasses-sized AR devices.

Follow the Power

The core reason to rethink how computing should be handled on these devices comes from a need to drastically reduce power consumption to meet battery life and heat requirements.

“How can we improve the power efficiency [of mobile computing devices] radically by a factor of 100 or even 1,000?” he asks. “That will require a deep system-level rethinking of the full stack, with end-to-end co-design of hardware and software. And the place to start that rethinking is by looking at where power is going today.”

To that end, Abrash laid out a graph comparing the power consumption of low-level computing operations.

Image courtesy Meta

As the chart highlights, the most energy-intensive computing operations are in data transfer. And that doesn’t mean just wireless data transfer, but even moving data from one chip inside the device to another. What’s more, the chart uses a logarithmic scale: transferring data to RAM uses 12,000 times the power of the base unit (which in this case is adding two numbers together).

Bringing it all together, the circular graphs on the right show that techniques essential to AR—SLAM and hand-tracking—use most of their power simply moving data to and from RAM.

“Clearly, for low power applications [such as in lightweight AR glasses], it is critical to reduce the amount of data transfer as much as possible,” says Abrash.

To make that happen, he says a new compute architecture will be required which—rather than shuffling large quantities of data between centralized computing hubs—more broadly distributes the computing operations across the system in order to minimize wasteful data transfer.
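To put rough numbers on that argument, here is a back-of-the-envelope sketch. The only grounded figure is the chart’s 12,000x cost of a RAM transfer relative to an add; the frame size, feature payload, and operation count are invented purely for illustration:

```c
/* Back-of-the-envelope energy model for the tradeoff Abrash describes.
   Units are normalized "add-equivalents": the chart puts one RAM
   transfer at ~12,000x the energy of a single add. Every other number
   here (frame size, feature count, op count) is made up. */
#include <stdio.h>

#define RAM_XFER_COST 12000.0 /* per word moved, from the chart */

/* energy of moving `words_moved` words to/from RAM plus `ops` adds */
static double energy(double words_moved, double ops) {
    return words_moved * RAM_XFER_COST + ops;
}

int main(void) {
    double frame_words   = 640.0 * 480.0; /* hypothetical VGA sensor frame */
    double feature_words = 2000.0;        /* hypothetical keypoint payload */
    double ops           = 5e6;           /* same compute either way */

    /* centralized: ship the whole frame to RAM, then process it */
    double centralized = energy(frame_words, ops);
    /* distributed: process on-sensor, ship only extracted features */
    double distributed = energy(feature_words, ops);

    printf("centralized: %.3g add-equivalents\n", centralized);
    printf("distributed: %.3g add-equivalents\n", distributed);
    printf("ratio: %.1fx\n", centralized / distributed);
    return 0;
}
```

Even with these generous assumptions, the centralized pipeline’s cost is dominated almost entirely by moving the frame, which is exactly Abrash’s point.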

Compute Where You Least Expect It

A starting point for a distributed computing architecture, Abrash says, could be the many cameras that AR glasses need for sensing the world around the user. This would involve doing some preliminary computation on the camera sensor itself before sending only the most vital data across power-hungry data transfer lanes.

Image courtesy Meta

To make that possible, Abrash says it will take co-designed hardware and software, such that the hardware is built with a specific algorithm in mind—one essentially hardwired into the camera sensor itself—allowing some operations to be completed before any data even leaves the sensor.

Image courtesy Meta

“The combination of requirements for lowest power, best performance, and smallest possible form-factor makes XR sensors the new frontier in the image sensor industry,” Abrash says.

Epic Games Offers 3D Scanning On Smartphones Via App In Limited Beta

Epic Games unveiled its new 3D scanning app for smartphones called RealityScan.

The app uses smartphone cameras and photos to create high-fidelity 3D photogrammetric models of real-world objects for use on digital platforms. You can take a closer look at how it works in Epic’s new promotional video, embedded below.

In the video, the user takes a number of photos of an object — in this instance, an armchair — which the app then uses to create a 3D model that can be scaled and positioned as required in digital experiences.

Epic says that the app “walks users through the scanning experience with interactive feedback, AR guidance, and data quality-checks” and can then create a model “almost instantly.” The resulting models can be uploaded to Sketchfab (which Epic acquired mid-last year) and used across many platforms, including VR and AR.

The app was developed by Epic in collaboration with CapturingReality (acquired by Epic last year) and Quixel. It is now in limited beta on iOS — the first 10,000 users will be granted access on a first-come, first-served basis with wider access rolling out later in Spring.

This isn’t the first app to offer a form of 3D scanning on smartphones, but it is perhaps the most high-profile crack at the concept yet. 3D object capture will likely play a big role in VR and AR’s future. Headsets like the LiDAR-equipped Varjo XR-3 allow users to scan their environment and present it to others in real-time, while games like Puzzling Places showcase the creative potential of photogrammetric data, offering puzzles composed of real-world objects and places scanned into the game as 3D models.

You can join the limited beta for RealityScan on iOS now, while spots last, via TestFlight. Android support will arrive later this year. You can read more about RealityScan here.

GDC Day 4: ARVORE, Hyper Dash, Emerge Wave 1 Haptics & More

The fourth and final day of GDC 2022 has come and gone. Don’t be too sad though — we’ve got lots of interesting interviews with VR developers straight from the show floor to cheer you up.

It was a great week at GDC, with lots of interesting news over the course of the show’s four days. Alex and Skeeva from Between Realities were checking it all out for us as UploadVR Correspondents, pulling some fantastic developers aside for interviews each day.

On day one, they spoke to Walkabout Mini Golf developers Mighty Coconut, Zenith developer Ramen VR and more.

Day two saw them speak to Polyarc about Moss: Book 2, along with Fast Travel Games on Cities VR and Virtuoso. Day three brought some hardware into the mix, including demos and talks with the developers of the upcoming Lynx R1 mixed reality headset. They also caught up with Tilt Five and Owlchemy Labs, developers of Cosmonious High (releasing later this week).

For the fourth and final day, Alex and Skeeva first checked in with ARVORE, developer of last year’s Yuki and the Pixel Ripped series. When questioned about any new Pixel Ripped content or releases in the near future, Rodrigo Terra from ARVORE was tight-lipped, but did mention an upcoming collaboration with Holoride (which makes VR experiences designed to take place inside moving cars) that might satisfy fans of the series.

Rodrigo also said that the studio is working on a few new projects, which could release this year or next, so keep an eye out.

Alex and Skeeva also spoke to the developers of Hyper Dash, who revealed a new free game mode, called ‘Ball’, releasing for the title on April 1. Triangle Factory CEO and Co-Founder Timothy Vanherbergen insisted it wasn’t a joke, despite the release date, and described the mode as “Rocket League but with guns.”

Last but not least, there were some interesting discussions with the developers of the Emerge Wave 1 haptic device, which uses sound and vibrations to provide a new kind of haptic feedback, and the developer of Finger Guns, a first-person shooter using hand tracking technology, coming to Quest this year.

What was your favorite news or reveal from this year’s GDC? Let us know in the comments below.

GDC Day 3: Cosmonious High, Lynx Mixed Reality Headset & More

Another day, another round of GDC 2022 coverage. Today is day three and the Between Realities crew hit the show floor again to bring you more interviews with VR/AR developers.

If you missed the previous two days, it’s been pretty jam-packed already. Day one saw Alex and Skeeva talk to the developers of Walkabout Mini Golf, Zenith and more, and day two brought us interviews with Polyarc (Moss: Book 2), Fast Travel Games (Cities VR and Virtuoso) and others.

Alex and Skeeva kept up the incredible pace today, speaking first to Owlchemy Labs (Job Simulator, Vacation Simulator) about their new game Cosmonious High, which releases next week.

They also caught up with the teams behind Patchworld: Sound of the Metaverse, Altair Breaker and Snapdragon Spaces.

Last, but definitely not least, Alex and Skeeva gave the upcoming Lynx R1 mixed reality headset a try and spoke to Stan Larroque from Lynx about the hardware.

When asked how far along everything was, Larroque said that things were “pretty mature” on the software side and that they were “in the process of manufacturing” the hardware at the moment. The headsets were meant to ship in April, but Lynx has been affected by the ongoing global supply chain issues, which will mean a short delay.

“We were supposed to deliver in April but we’re going to face some issues with the supply chain,” said Larroque. “I think you can expect the first headsets to come between June and July. It’s a matter of weeks, we have some weeks of delays here.”

Keep an eye out for our GDC wrap-up show tomorrow, where Skeeva and Alex from Between Realities will join Ian live in the UploadVR virtual studio to discuss their hands-on experiences over the last few days.

You can catch that live on our YouTube channel tomorrow at 4pm Pacific.

Snap Acquires Brain-Computer Interface Startup NextMind

Snap announced it’s acquired neurotech startup NextMind, a Paris-based company known for creating a $400 pint-sized brain-computer interface (BCI).

In a blog post, Snap says NextMind will help drive “long-term augmented reality research efforts within Snap Lab,” the company’s hardware team that’s currently building AR devices.

“Snap Lab’s programs explore possibilities for the future of the Snap Camera, including Spectacles. Spectacles are an evolving, iterative research and development project, and the latest generation is designed to support developers as they explore the technical bounds of augmented reality.”

Snap hasn’t detailed the terms or price of the NextMind acquisition, saying only that the team will continue to operate out of Paris, France. According to The Verge, NextMind will also be discontinuing production of its BCI.

Photo captured by Road to VR

Despite increasingly accurate and reliable hand and eye-tracking hardware, input methods for AR headsets still aren’t really a solved problem. It’s not certain whether NextMind’s tech, which is based on electroencephalography (EEG), was the complete solution either.

NextMind’s BCI is non-invasive and slim enough to integrate into the strap of an XR headset, something that creators like Valve have been interested in for years.

Granted, there’s a scalp, connective tissue, and a skull to read through, which limits the kit’s imaging resolution; that constrained NextMind to basic inputs like simple UI interaction—very far off from the sort of ‘read/write’ capabilities that Elon Musk’s Neuralink is aiming for with its invasive brain implant.

Snap has been collecting more companies to help build out its next pair of AR glasses. In addition to NextMind, Snap acquired AR waveguide startup WaveOptics for over $500 million last May, and LCOS maker Compound Photonics in January.

Snap is getting close too. Its most recent Spectacles (fourth gen) include displays for real-time AR in addition to integrated voice recognition, optical hand tracking, and a side-mounted touchpad for UI selection.

Watch: New Look At Magic Leap 2 Headset & Controllers

A video shared by Magic Leap earlier this month gives us our most comprehensive look at the design of the company’s upcoming Magic Leap 2 AR headset yet.

It shows us almost every angle imaginable of the headset and its controllers.

As reported in late January, the Magic Leap 2 specs suggest it will be a best-in-class AR headset aimed at the enterprise market. Compared to Magic Leap 1, it’s lighter, twice as powerful, and features an eye box twice as large. This is just the tip of the iceberg — you can read more specifics here.

We had previously seen photos of Magic Leap 2, but this new video gives a full 360 degree overview. Plus, it gives a clearer look at the headset’s accompanying controllers. As reported earlier this month, the controllers feature cameras on the sides, used for onboard inside-out tracking.

We had seen some unofficial pictures of the controllers at the time, but this new video gives us our first official look. The two cameras are present on the sides, but you can also see what looks to be a trackpad on the top of the controller.

This style of inside-out tracking, using cameras on the controllers themselves, is being employed by other companies as well — leaked images from last September suggest that Meta will use a similar onboard camera design with its controllers for Project Cambria.

Magic Leap 2 will target enterprise markets on release, but specific pricing info and release window details have yet to be revealed.

Top 10 Features We’d Love For Apple’s Mixed Reality Headset

All reports and rumors point to a mixed reality headset on the horizon from Apple. But what Apple features do we want to see supported on this upcoming headset?

Credit to The Information for the mockup drawing of Apple’s headset, featured above in the cover image of this article. 

While initially thought to launch this year, it now seems that Apple’s unannounced mixed reality headset could be pushed to a 2023 launch. Nonetheless, last week we assessed how Apple’s key competitive advantage will be its long history of software and operating system development, matched with an extensive feature set and intuitive, integrated design.

This week, we’re going to run through our list, in no particular order, of existing Apple features that we’d love to see supported on the company’s mixed reality headset. Apple is all about parity and integration across its ecosystem of devices, so it’s fair to expect that it will leverage many existing features (and the familiar branding behind them) to bolster the user experience of its headsets.

Keep in mind — some of the features listed below are fairly safe bets, while others might be further down the pipeline or simply more speculative/hypothetical in nature. Here’s our list:

AirDrop

AirDrop is one of the best features across Apple’s ecosystem and it would make perfect sense on a headset.

People mostly use AirDrop to share photos between phones, but its functionality extends well beyond that – you can use it to send links to a secondary device, share contacts, send files between devices, and much more. Integrating AirDrop into Apple’s headset would allow users to quickly share content with each other and between their existing Apple devices and the headset. This would come in handy when trying to send your headset a link from your phone, for example, or when trying to quickly transfer a VR screenshot or video recording across from the headset to another device. 

iCloud

iCloud support seems like a no-brainer, if not a near-guaranteed inclusion, on an Apple headset. Like other Apple devices, the headset would seamlessly sync content between all of your devices and back itself up to the cloud in case it needs to be reset or you upgrade to a new headset in the future.

Likewise, this would allow system-level integration with iCloud Files, allowing you to access the same files from your headset, phone and computers at all times. It would also sync your VR screenshots, videos and app data across all devices, providing another easy way to access content you create in VR from another device at any time. 

Sidecar

Sidecar is one of Apple’s recent features allowing an iPad to operate as a mirrored or second display for a Mac computer. It works wirelessly and remarkably well, in my experience, providing users with an easy two-monitor setup while on the go.

We’d love to see Sidecar’s functionality extended with new features for the mixed reality headset. Instead of using another device as a second monitor for a computer, it would be awesome to see Sidecar add support for using an iPad, iPhone or other Apple device while in mixed reality. Perhaps something similar to Horizon Workrooms’ remote desktop, allowing iPads and iPhones to be tracked, represented and usable in mixed or virtual reality.

Pushing the idea even further, it would be cool to see Sidecar allow an iPad or iPhone to work as a customizable peripheral accessory for mixed reality — a physical device that you could pick up and interact with, tracked by the headset and displaying some kind of custom content while you use the headset.

FaceID

FaceID remains one of the fastest and most reliable methods of face recognition on the smartphone market. As VR avatars get closer to photo-realism, user authentication and authorization are going to be increasingly crucial. While we don’t know what sensors to expect in Apple’s first-generation headset, it would be great to one day see FaceID adapted for VR, using face tracking sensors to verify the owner of the headset. It would be equally useful as a way to recognize different users on one headset, allowing it to automatically switch profiles for each.

iMessage

Apple’s now-infamous blue bubble iMessage system is standard among Apple users. Much like how users can send Facebook Messenger messages on Quest 2, it would only make sense to see iMessage supported on Apple’s headset.

Facetime & Memoji Support

On existing Apple devices, Facetime supports audio and video calls. Being able to accept audio Facetime calls while using Apple’s headset would be great, but it would also be fantastic to see Facetime expanded with additional made-for-VR functionality. One possibility would be a new mode for VR calls, allowing headset users to talk and interact with each other in 3D virtual space with personal avatars. Apple’s Memoji system seems like a natural fit for VR avatars in these instances, akin to Meta’s recently updated avatar styles.

SharePlay

SharePlay is a newer feature, only recently launched as part of iOS 15. Tied together with Facetime, it lets users sync up video and audio content with each other, so they can watch/listen together at the same time. The obvious next step for SharePlay would be allowing headset users to join a SharePlay session together in VR cinemas or home environments, similar to Horizon Home.

AirPlay with VR Casting Support

One of Quest 2’s best features is the ability to cast your view from VR onto a computer, TV or other Chromecast-enabled device, so that others can follow along. It would be remiss of Apple not to include a similar feature at launch for its own headset, and AirPlay would be the obvious way to do it.

AirPlay works similarly to Google Cast, allowing you to share your screen or content with other AirPlay-enabled devices. Being able to seamlessly share your view in VR to a Mac computer, iPhone, iPad, Apple TV or other device would be fantastic.

2D iOS App Support

One of Meta’s big 2021 Connect announcements was expanded support for 2D apps, like Instagram and Dropbox, coming to Quest 2. However, the app selection is still quite small, though expanding. Apple has a slam-dunk opportunity to one-up Meta instantly here by adding support to run all, or at least most, existing iOS and iPad OS apps in 2D on its headset.

The headset is rumored to feature one of Apple’s proprietary processors, perhaps on par with the M1 Pro chip. From a technical perspective, this should make it possible for native 2D iOS/iPad OS apps to run on the headset.

This could even work similarly to how iOS app support worked on the iPad at launch. Some apps had iPad-specific designs and features at launch, but many didn’t. To this day, iOS apps that don’t have iPad-specific support can still be run on the system — instead of a native iPad app, you simply use the app as it’s designed for iOS, but scaled up and enlarged to fit as much of the iPad’s screen as possible. Developers can choose to add support for a native iPad version of their iOS apps, which will automatically run instead of the iOS version, once implemented.

A similar approach could be taken for 2D iOS and iPad OS apps on Apple’s headset — supported at launch, but mostly running the same iPhone and iPad versions you’re used to. Developers could then choose to add headset-native versions of the apps over time, which would take full advantage of the platform.

Apple Wallet/Apple Pay

Entering details like a card number while in VR is a huge hassle, and switching quickly between real life and VR to enter some text into your headset is never fun. If implemented, Apple Pay would remove the need to enter any card details in your headset and would automatically suggest cards that are already stored in your Apple Wallet. Having this connected functionality in VR would be a huge time saver, allowing new headset owners to purchase experiences in a hassle-free way just by linking their Apple account.


What features do you want to see on Apple’s upcoming headset? Let us know in the comments below.