Meta Reveals New Prototype VR Headsets Focused on Retinal Resolution and Light Field Passthrough

Meta unveiled two new VR headset prototypes that showcase further progress on some of the persistent technical challenges facing VR today. At SIGGRAPH 2023, the company is demonstrating one headset with retinal resolution combined with varifocal optics, and another with advanced light field passthrough capabilities.

Butterscotch Varifocal Prototype

In a developer blog post, Meta showed off a varifocal research prototype demonstrating a VR display system that provides “visual clarity that can closely match the capabilities of the human eye,” says Meta Optical Scientist Yang Zhao. The so-called ‘Butterscotch Varifocal’ prototype provides retinal resolution of up to 56 pixels per degree (PPD), which is sufficient for 20/20 visual acuity, researchers say.

Since its displays are also varifocal, the headset supports focal depths from 0 to 4 diopters (i.e. infinity to 25 cm), matching what researchers say are “the dynamics of eye accommodation with at least 10 diopter/s peak velocity and 100 diopter/s² acceleration.” The pulsing motors seen below drive the displays’ focal distance in an effort to match the human eye.

Varifocal headsets represent a solution to the vergence-accommodation conflict (VAC) which has plagued standard VR headsets, the most advanced consumer headsets included. Varifocal headsets not only include the same standard support for the vergence reflex (when eyes converge on objects to form a stereo image), but also the accommodation reflex (when the lens of the eye changes shape to focus light at different depths). Without support for accommodation, VR displays can cause eye strain, make it difficult to focus on close imagery, and may even limit visual immersion.

Check out the through-the-lens video below to see how Butterscotch’s varifocal bit works:

Using LCD panels readily available on the market, Butterscotch achieves its retinal resolution by reducing the field of view (FOV) to 50 degrees, notably smaller than Quest 2’s ~89-degree FOV.

Although Butterscotch’s varifocal abilities are similar to the company’s prior Half Dome prototypes, the company says Butterscotch is “solely focused on showcasing the experience of retinal resolution in VR—but not necessarily with hardware technologies that are ultimately appropriate for the consumer.”

“In contrast, our work on Half Dome 1 through 3 focused on miniaturizing varifocal in a fully practical manner, albeit with lower-resolution optics and displays more similar to today’s consumer headsets,” explains Display Systems Research Director Douglas Lanman. “Our work on Half Dome prototypes continues, but we’re pausing to exhibit Butterscotch Varifocal to show why we remain so committed to varifocal and delivering better visual acuity and comfort in VR headsets. We want our community to experience varifocal for themselves and join in pushing this technology forward.”

Flamera Lightfield Passthrough Prototype

Another important aspect of making XR more immersive is undoubtedly the headset’s passthrough capabilities, like you might see on Quest Pro or the upcoming Apple Vision Pro. The decidedly bug-eyed design of Meta’s Flamera research prototype explores a better way to create more realistic passthrough by using light fields.

Research Scientist Grace Kuo wearing the Flamera research prototype | Image courtesy Meta

In standard headsets, passthrough cameras are typically placed a few inches from where your eyes actually sit, capturing a different view than what you’d see if you weren’t wearing a headset. While today’s headsets apply a lot of distortion and placement correction to compensate, you’ll probably still notice plenty of visual artifacts as the software tries to correctly resolve and render imagery at different depths.

“To address this challenge, we brainstormed optical architectures that could directly capture the same rays of light that you’d see with your bare eyes,” says Meta Research Scientist Grace Kuo. “By starting our headset design from scratch instead of modifying an existing design, we ended up with a camera that looks quite unique but can enable better passthrough image quality and lower latency.”

Check out the quick explainer below to see how Flamera’s ingenious capture methods work:

Now, here’s a comparison between an unobstructed view and Flamera’s light field capture, showing off some pretty compelling results:

As research prototypes, there’s no indication when we can expect these technologies to come to consumer headsets. Still, it’s clear that Meta is adamant about showing off just how far ahead it is in tackling some of the persistent issues in headsets today—something you probably won’t see from the patently black box that is Apple.

You can read more about Butterscotch and Flamera in their respective research papers, which are being presented at SIGGRAPH 2023, taking place August 6th – 10th in Los Angeles. Click here for the Butterscotch Varifocal abstract and Flamera full paper.

Sony Details PSVR 2 Prototypes from Conception to Production

Sony has offered a peek into the prototyping stages that led to PSVR 2, showing off a number of test units for both the headset and controllers.

In an extensive interview on the PS blog, PSVR 2’s Product Manager Yasuo Takahashi reveals the development process behind Sony’s latest VR headset.

Takahashi says that detailed discussions on the company’s next-gen PSVR began in earnest after the launch of the original in 2016. From there, the team began prototyping various technologies for PSVR 2 in early 2017.

Below is a condensed version of the interview, including all provided photos. If you want to read the full article, click here.

Challenges of Design & Optimization

Maintaining a light and compact design while implementing new features was a challenge, Takahashi says, requiring the teams to work closely to produce detailed technical estimates and optimize the design.

Prototype for testing inside-out tracking cameras with evaluation board | Image courtesy Sony

While comfort was a significant focus during the development process, the initial prototype focused on evaluating functionality rather than weight.

All of that top bulk is dedicated to inside-out camera evaluation boards which would eventually be shrunk down to an SoC embedded within the headset.

Room-scale & Eye-tracking Tech

Various prototypes were created and tested before integration, including both inside-out and outside-in tracking methods. Of course, we know inside-out tracking was eventually the winner, but it’s interesting to note the company was at one point still considering an outside-in approach, similar to the original PSVR.

Eye-tracking tech was also explored as a new UI feature in addition to foveated rendering, which allows developers to push the boundaries of PS5’s VR rendering capabilities and serve up higher-fidelity visuals in games.

Testing and optimizing eye-tracking took time, as the team had to account for different eye colors and accommodate players wearing glasses.

Eye-tracking evaluation prototype 2 | Image courtesy Sony

Comfort & Design

The development team assessed comfort and wearability, evaluating numerous configurations based on the headset’s expected weight. The team put a lot of thought into the materials and shape to make the headset feel lightweight while maintaining strength.

A cool ‘skeleton’ prototype shows all the pieces of the puzzle together, including the headset’s halo strap, which, like the original PSVR’s, keeps the bulk of the weight off the user’s face. This one should definitely get a spot on the museum shelves (or maybe a fun mid-generation release?).

The ‘skeleton’ prototype | Image courtesy Sony

Headset haptics were also added as a new feature based on the idea of using the rumble motor from the DualShock 4 wireless controller.

PSVR 2 Sense Controllers

The PSVR 2 Sense controllers were developed in parallel with the headset, with discussions starting in 2016 and prototyping in 2017.

Features like haptic feedback, adaptive triggers, and finger-touch detection were early additions, although the team was still sussing out tracking. Notice the Move-style tracking sphere on the tip of an early prototype.

Prototype 1 | Image courtesy Sony

The final shape of the Sense controller was achieved through extensive prototyping and user testing to ensure a comfortable fit and optimized center of gravity.

Here you can see a number of IR tracking marker configurations the team tested before eventually settling on the production model’s current form.

While Sony is undoubtedly sitting on a lot more prototypes than this—they began prototyping when the original PSVR had been in the wild for less than a year—it’s an interesting look at how Takahashi’s team eventually settled on the current form and function of what will likely be PS5’s only VR headset for years to come.

If you’re interested to learn more, check out the full interview with Takahashi.

Meta Introduces ‘Super Resolution’ Feature for Improved Quest Visuals

Meta today introduced a new developer feature called Super Resolution that’s designed to improve the look of VR apps and games on Quest. The company says the new feature offers better quality upscaling at a similar cost to previous techniques.

Meta today announced the new Super Resolution feature for developers on the company’s XR developer blog. Available for apps built on the Quest V55 update and later, Super Resolution is a new upscaling method for applications that aren’t already rendering at the screen’s full display resolution (as many apps do in order to meet performance requirements).

“Super Resolution is a VR-optimized edge-aware scaling and sharpening algorithm built upon Snapdragon Game Super Resolution with Meta Quest-specific performance optimizations developed in collaboration with the Qualcomm Graphics Team,” the company says.

Meta further explains that, by default, apps are scaled up to the headset’s display resolution with bilinear scaling, which is fast but often introduces blurring in the process. Super Resolution is presented as an alternative that can produce better upscaling results with low performance costs.

“Super Resolution is a single-pass spatial upscaling and sharpening technique optimized to run on Meta Quest devices. It uses edge- and contrast-aware filtering to preserve and enhance details in the foveal region while minimizing halos and artifacts.”
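To make “edge- and contrast-aware” more concrete, here’s a rough conceptual sketch (written in Swift purely for illustration) of the general idea: sharpen each upscaled pixel in proportion to its local contrast, then clamp the result to the neighborhood range to suppress halos. This is not Meta’s or Qualcomm’s actual algorithm, just an approximation of the kind of filtering the description implies.

```swift
// Conceptual illustration only: NOT Meta's Super Resolution implementation.
// Operates on a grayscale image (values 0...1) that has already been
// bilinearly upscaled, and applies a contrast-aware sharpen.
func contrastAwareSharpen(_ image: [[Float]], strength: Float = 0.5) -> [[Float]] {
    guard image.count > 2, image[0].count > 2 else { return image }
    let h = image.count, w = image[0].count
    var out = image
    for y in 1..<(h - 1) {
        for x in 1..<(w - 1) {
            let c = image[y][x]
            // Gather 3x3 neighborhood statistics
            var lo = c, hi = c, sum: Float = 0
            for dy in -1...1 {
                for dx in -1...1 {
                    let v = image[y + dy][x + dx]
                    lo = min(lo, v); hi = max(hi, v); sum += v
                }
            }
            let mean = sum / 9
            let contrast = hi - lo                      // local detail estimate
            let sharpened = c + strength * (c - mean)   // unsharp-mask style boost
            // Apply more sharpening where there is detail, almost none in flat regions
            let blended = c + (sharpened - c) * min(contrast * 4, 1)
            // Clamp to the neighborhood range to avoid halos and ringing
            out[y][x] = max(lo, min(hi, blended))
        }
    }
    return out
}
```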

Upscaling using bilinear (left), Normal Sharpening (center), and Super Resolution (right). The new technique prevents blur without introducing as much aliasing. | Image courtesy Meta

Unlike the recent improvements to CPU and GPU power on Quest headsets, Super Resolution isn’t an automatic benefit to all applications; developers will need to opt in to the feature, and even then, Meta warns that benefits from the feature will need to be assessed on an app-by-app basis.

“The exact GPU cost of Super Resolution is content-dependent, as Super Resolution devotes more computation to regions of the image with fine detail. The cost of enabling Super Resolution over the default bilinear filtering is lower for content containing primarily regions of solid colors or smooth gradients when compared to content with highly detailed images or objects,” the company explains.

Developers can implement Super Resolution into Quest apps on V55+ immediately, and those using Quest Link (AKA Oculus Link) for PC VR content can also enable the sharpening feature by using the Oculus Debug Tool and setting the Link Sharpening option to Quality.

Vision Pro Dev Kit Applications Will Open in July

Apple says it will give developers the opportunity to apply for Vision Pro dev kits starting sometime in July.

In addition to releasing a first round of developer tools last week, including a software ‘Simulator’ of Vision Pro, Apple also wants to give developers a chance to get their hands on the headset itself.

The company indicates that applications for a Vision Pro development kit will open starting in July, and developers will be able to find details here when the time comes.

There’s no telling how many of the development kits the company plans to send out, or exactly when they will start shipping, but given Apple’s culture of extreme secrecy you can bet selected developers will be locked down with strict NDAs regarding their use of the device.

The Vision Pro developer kit isn’t the only way developers will be able to test their apps on a real headset.

Developers will also be able to apply to attend ‘Vision Pro developer labs’:

Apply for the opportunity to attend an Apple Vision Pro developer lab, where you can experience your visionOS, iPadOS, and iOS apps running on Apple Vision Pro. With direct support from Apple, you’ll be able to test and optimize your apps and games, so they’ll be ready when Apple Vision Pro is available to customers. Labs will be available in six locations worldwide: Cupertino, London, Munich, Shanghai, Singapore, and Tokyo.

Our understanding is that applications for the developer labs will also open in July.

Additionally, developers will be able to request that their app be reviewed by Apple itself on visionOS, though this is restricted to existing iPhone and iPad apps, rather than newly created apps for visionOS:

If you currently have an iPad or iPhone app on the App Store, we can help you test it on Apple Vision Pro. Request a compatibility evaluation from App Review to get a report on your app or game’s appearance and how it behaves in visionOS.

Vision Pro isn’t planned to ship until early 2024, but Apple wants to have third-party apps ready and waiting for when that time comes.

Apple Releases Vision Pro Development Tools and Headset Emulator

Apple has released new and updated tools for developers to begin building XR apps on Apple Vision Pro.

Apple Vision Pro isn’t due out until early 2024, but the company wants developers to get a jump-start on building apps for the new headset.

To that end, the company announced today it has released the visionOS SDK, updated Xcode, Simulator, and Reality Composer Pro, which developers can get access to at the visionOS developer website.

While some of the tools will be familiar to Apple developers, tools like Simulator and Reality Composer Pro are newly released for the headset.

Simulator is the Apple Vision Pro emulator, which aims to give developers a way to test their apps before getting their hands on the headset. The tool effectively acts as a software version of Apple Vision Pro, allowing developers to see how their apps will render and behave on the headset.

Reality Composer Pro is aimed at making it easy for developers to build interactive scenes with 3D models, sounds, and textures. From what we understand, it’s sort of like an easier (albeit less capable) alternative to Unity. However, developers who already know or aren’t afraid to learn a full-blown game engine can also use Unity to build visionOS apps.

Image courtesy Apple

In addition to the release of the visionOS SDK today, Apple says it’s still on track to open a handful of ‘Developer Labs’ around the world where developers can get their hands on the headset and test their apps. The company also says developers will be able to apply to receive Apple Vision Pro development kits next month.

A Concise Beginner’s Guide to Apple Vision Pro Design & Development

Apple Vision Pro has brought new ideas to the table about how XR apps should be designed, controlled, and built. In this Guest Article, Sterling Crispin offers up a concise guide for what first-time XR developers should keep in mind as they approach app development for Apple Vision Pro.

Guest Article by Sterling Crispin

Sterling Crispin is an artist and software engineer with a decade of experience in the spatial computing industry. His work has spanned product design and the R&D of new technologies at companies like Apple, Snap Inc, and various other tech startups working on face computers.

Editor’s Note: The author would like to remind readers that he is not an Apple representative; this info is personal opinion and does not contain non-public information. Additionally, more info on Vision Pro development can be found in Apple’s WWDC23 videos (select Filter → visionOS).

Ahead is my advice for designing and developing products for Vision Pro. This article includes a basic overview of the platform, tools, porting apps, general product design, prototyping, perceptual design, business advice, and more.

Overview

Apps on visionOS are organized into ‘scenes’, which are Windows, Volumes, and Spaces.

Windows are a spatial version of what you’d see on a normal computer. They’re bounded rectangles of content that users surround themselves with. These may be windows from different apps or multiple windows from one app.

Volumes are things like 3D objects or small interactive scenes: a 3D map, say, or a small game that floats in front of you rather than being fully immersive.

Spaces are fully immersive experiences where only one app is visible. A Space could be filled with many Windows and Volumes from your app, or it could be like a VR game, where the system falls away and fully immersive content surrounds you. You can think of visionOS itself as a Shared Space where apps coexist together and you have less control, whereas Full Spaces give you the most control and immersiveness but don’t coexist with other apps. Spaces have immersion styles (mixed, progressive, and full), which define how much or how little of the real world the user sees.
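As a rough illustration, here is a minimal SwiftUI sketch of how these three scene types are declared in an app. The scene identifiers and placeholder views are invented for this example; treat it as a starting point rather than production code.

```swift
import SwiftUI

// Placeholder views invented for this example
struct PlainWindowView: View { var body: some View { Text("A plain Window") } }
struct GlobeView: View { var body: some View { Text("3D content goes here") } }
struct ImmersiveView: View { var body: some View { Text("Fully immersive content") } }

@main
struct ExampleApp: App {
    // Which immersion style the Space currently uses
    @State private var immersionStyle: ImmersionStyle = .mixed

    var body: some Scene {
        // Window: a bounded 2D pane that lives in the Shared Space
        WindowGroup(id: "MainWindow") {
            PlainWindowView()
        }

        // Volume: a bounded 3D region, e.g. a tabletop map or model
        WindowGroup(id: "Globe") {
            GlobeView()
        }
        .windowStyle(.volumetric)

        // Space: a fully immersive scene; only this app is visible while it's open
        ImmersiveSpace(id: "Immersive") {
            ImmersiveView()
        }
        .immersionStyle(selection: $immersionStyle, in: .mixed, .progressive, .full)
    }
}
```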

User Input

Users can look at the UI and pinch, like the Apple Vision Pro demo videos show. But you can also reach out and tap on windows directly, sort of like it’s actually a floating iPad, or use a Bluetooth trackpad or video game controller. You can also look at a search bar and speak. There’s also a Dwell Control for eyes-only input, but that’s really an accessibility feature. For a simple dev approach, your app can just use events like a TapGesture; in that case, you won’t need to worry about where these events originate from.
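For example, a minimal SwiftUI view that counts taps behaves the same whether the ‘tap’ came from a look-and-pinch, a direct touch, or a trackpad click (the view and variable names here are just for illustration):

```swift
import SwiftUI

struct TapCounterView: View {
    @State private var taps = 0

    var body: some View {
        Text("Tapped \(taps) times")
            .padding()
            // Fires for indirect look-and-pinch, direct touch, trackpad, etc.
            .onTapGesture {
                taps += 1
            }
    }
}
```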

Spatial Audio

Vision Pro has an advanced spatial audio system that makes sounds seem like they’re really in the room by accounting for the size and materials of your room. Using subtle sounds for UI interaction and taking advantage of sound design for immersive experiences is going to be really important. Make sure to take this topic seriously.

Development

If you want to build something that works between Vision Pro, iPad, and iOS, you’ll be operating within the Apple dev ecosystem, using tools like Xcode and SwiftUI. However, if your goal is to create a fully immersive VR experience for Vision Pro that also works on other headsets like Meta’s Quest or PlayStation VR, you have to use Unity.

Apple Tools

For Apple’s ecosystem, you’ll use SwiftUI to create the UI the user sees and the overall content of your app. RealityKit is the 3D rendering engine that handles materials, 3D objects, and light simulations. You’ll use ARKit for advanced scene understanding, like if you want someone to throw virtual darts and have them collide with their real wall, or do advanced things with hand tracking. But those rich AR features are only available in Full Spaces. There’s also Reality Composer Pro, a 3D content editor that lets you drag things around a 3D scene and make media-rich Spaces or Volumes. It’s like diet-Unity that’s built specifically for this development stack.
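Here is a small sketch of how SwiftUI and RealityKit fit together: a RealityView hosts RealityKit entities inside a SwiftUI view. The sphere is generated in code so the example doesn’t depend on a Reality Composer Pro asset; in practice you would often load an entity authored there instead.

```swift
import SwiftUI
import RealityKit

// A SwiftUI view that embeds RealityKit content
struct SphereView: View {
    var body: some View {
        RealityView { content in
            // Build a simple model entity in code: a 10 cm cyan sphere
            let sphere = ModelEntity(
                mesh: .generateSphere(radius: 0.1),
                materials: [SimpleMaterial(color: .cyan, isMetallic: false)]
            )
            content.add(sphere)
        }
    }
}
```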

One cool thing with Reality Composer Pro is that it’s already full of assets, materials, and animations. That helps developers who aren’t artists build something quickly, and it should help create a more unified look and feel for everything built with the tool. There are pros and cons to that product decision, but overall it should be helpful.

Existing iOS Apps

If you’re bringing an iPad or iOS app over, it will probably work unmodified as a Window in the Shared Space. If your app supports both iPad and iPhone, the headset will use the iPad version.

To customize your existing iOS app to take better advantage of the headset, you can use the Ornament API to make little floating islands of UI in front of, or beside, your app to make it feel more spatial. Ironically, if your app is using a lot of ARKit features, you’ll likely need to ‘reimagine’ it significantly to work on Vision Pro, as ARKit has been upgraded a lot for the headset.
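As a hedged sketch of what an ornament looks like in SwiftUI (the anchor position and the buttons are placeholders chosen for illustration):

```swift
import SwiftUI

struct OrnamentedView: View {
    var body: some View {
        Text("Main window content")
            .padding(40)
            // A small floating island of UI attached just below the window
            .ornament(attachmentAnchor: .scene(.bottom)) {
                HStack {
                    Button("Back") { /* placeholder action */ }
                    Button("Share") { /* placeholder action */ }
                }
                .padding()
                .glassBackgroundEffect()  // standard visionOS 'glass' look
            }
    }
}
```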

If you’re excited about building something new for Vision Pro, my personal opinion is that you should prioritize how your app will provide value across iPad and iOS too. Otherwise you’re losing out on hundreds of millions of users.

Unity

You can build to Vision Pro with the Unity game engine, which is a massive topic. Again, you need to use Unity if you’re building to Vision Pro as well as a Meta headset like the Quest or PSVR 2.

Unity supports building Bounded Volumes for the Shared Space, which exist alongside native Vision Pro content, and Unbounded Volumes for immersive content that may leverage advanced AR features. Finally, you can also build more VR-like apps, which give you more control over rendering but seem to lack support for ARKit scene understanding like plane detection. The Volume approach gives RealityKit more control over rendering, so you have to use Unity’s PolySpatial tool to convert materials, shaders, and other features.

Unity support for Vision Pro includes tons of interactions you’d expect to see in VR, like teleporting to a new location or picking up and throwing virtual objects.

Product Design

You could just make an iPad-like app that shows up as a floating window, use the default interactions, and call it a day. But like I said above, content can exist across a wide spectrum of immersion and locations, and can use a wide range of inputs. So the combinatorial range of possibilities can be overwhelming.

If you haven’t spent 100 hours in VR, get a Quest 2 or 3 as soon as possible and try everything. It doesn’t matter if you’re a designer, or product manager, or a CEO, you need to get a Quest and spend 100 hours in VR to begin to understand the language of spatial apps.

I highly recommend checking out Hand Physics Lab as a starting point and overview for understanding direct interactions. There are a lot of subtle things they do which imbue virtual objects with a sense of physicality. And the YouTube VR app that was released in 2019 looks and feels pretty similar to a basic visionOS app; it’s worth checking out.

Keep a diary of what works and what doesn’t.

Ask yourself: ‘What app designs are comfortable, or cause fatigue?’, ‘What apps have the fastest time-to-fun or value?’, ‘What’s confusing and what’s intuitive?’, ‘What experiences would you even bother doing more than once?’ Be brutally honest. Learn from what’s been tried as much as possible.

General Design Advice

I strongly recommend the IDEO-style design thinking process; it works for spatial computing too. You should absolutely try it out if you’re unfamiliar. There’s the Design Kit with resources, and this video which, while dated, is a great example of the process.

The road to spatial computing is a graveyard of utopian ideas that failed. People tend to spend a very long time building grand solutions for the imaginary problems of imaginary users. It sounds obvious, but instead you should try to build something as fast as possible that fills a real human need, and then iteratively improve from there.


Apple Vision Pro Will Have an ‘Avatar Webcam’, Automatically Integrating with Popular Video Chat Apps

In addition to offering immersive experiences, Apple says that Vision Pro will be able to run most iPad and iOS apps out of the box with no changes. For video chat apps like Zoom, Messenger, Discord, and others, the company says that an ‘avatar webcam’ will be supplied to apps, making them automatically able to handle video calls between the headset and other devices.

Apple says that on day one, all suitable iOS and iPadOS apps will be available on the headset’s App Store. According to the company, “most apps don’t need any changes at all,” and the majority should run on the headset right out of the box. Developers will be able to opt out of having their apps on the headset if they’d like.

For video conferencing apps like Zoom, Messenger, Discord, and Google Meet, which expect access to the front camera of an iPhone or iPad, Apple has done something clever for Vision Pro.

Instead of a live camera view, Vision Pro provides a view of the headset’s computer-generated avatar of the user (which Apple calls a ‘Persona’). That means that video chat apps that are built according to Apple’s existing guidelines should work on Vision Pro without any changes to how the app handles camera input.
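In practice, that means ordinary front-camera capture code along these lines, written the way Apple’s guidelines already suggest for iPhone and iPad video chat, should simply receive Persona frames on Vision Pro instead of a real camera feed. This is a generic sketch, not code verified on the headset:

```swift
import AVFoundation

// Standard front-camera capture setup, as a video chat app would already do on iOS
func makeFrontCameraSession() -> AVCaptureSession? {
    let session = AVCaptureSession()
    guard
        let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                             for: .video,
                                             position: .front),
        let input = try? AVCaptureDeviceInput(device: device),
        session.canAddInput(input)
    else { return nil }
    session.addInput(input)

    // Frames arrive via the output's sample buffer delegate; per the article,
    // on Vision Pro these frames would show the user's Persona
    let output = AVCaptureVideoDataOutput()
    if session.canAddOutput(output) { session.addOutput(output) }
    return session
}
```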

How Apple Vision Pro ‘Persona’ avatars are represented | Image courtesy Apple

Personas use the headset’s front cameras to scan the user’s face to create a model, then the model is animated according to head, eye, and hand inputs tracked by the headset.

Image courtesy Apple

Apple confirmed as much in a WWDC developer session called Enhance your iPad and iPhone apps for the Shared Space. The company also confirmed that apps asking for access to a rear-facing camera (i.e. a photography app) on Apple Vision Pro will get only black frames with a ‘no camera’ symbol. This alerts the user that there’s no rear-facing camera available, but also means that iOS and iPad apps will continue to run without errors, even when they expect to see a rear-facing camera.

There are potentially other reasons that video chat apps like Zoom, Messenger, or Discord might not work with Apple Vision Pro right out of the box, but at least as far as camera handling goes, it should be easy for developers to get video chats up and running using a view of the user’s Persona.

It’s even possible that ‘AR face filters’ in apps like Snapchat and Messenger will work correctly with the user’s Apple Vision Pro avatar, with the app being none the wiser that it’s actually looking at a computer-generated avatar rather than a real person.

Image courtesy Apple

In another WWDC session, the company explained more about how iOS and iPad apps behave on Apple Vision Pro without modification.

Developers can expect up to two inputs from the headset (the user can pinch each hand as its own input), meaning any apps expecting two-finger gestures (like pinch-zoom) should work just fine, but three fingers or more won’t be possible from the headset. As for apps that require location information, Apple says the headset can provide an approximate location via Wi-Fi, or a specific location shared via the user’s iPhone.

Unfortunately, existing ARKit apps won’t work out of the box on Apple Vision Pro. Developers will need to use a newly upgraded ARKit (and other tools) to make their apps ready for the headset. This is covered in the WWDC session Evolve your ARKit app for spatial experiences.

Quest's New Virtual Keyboard Neatly Integrates Into Apps

Quest's new virtual keyboard neatly integrates into apps instead of just being a crude overlay.

If you develop an app for smartphones, you don't have to also build a touchscreen keyboard. The operating system handles that for you. In VR and AR this isn't quite so simple, since a virtual keyboard is an object in three dimensional space.

Building a virtual keyboard from scratch, especially one that handles accent marks for different languages, is a significant investment of time and effort, not to mention it means the user has an inconsistent text entry experience between different apps.

On Quest, developers have been able to bring up the virtual keyboard used by Meta's system software since mid-2020. But this appears as a crude overlay in a fixed position, rendering above the app no matter where other virtual objects are and replacing their in-app hands with the translucent system "ghost hands" until they've finished typing. It feels completely out of place.

Meta's new Virtual Keyboard for Unity solves these problems. Instead of just being an API call to an overlay, it's an actual prefab developers position in their apps. Virtual Keyboard works with either hands or controllers, and developers can choose between a close-up Direct Touch mode or laser pointers at a distance.

The operating system handles populating the surface of Virtual Keyboard with keys for the user's locale, meaning the keyboard will get future features and improvements over time even if the developer never updates the app.

Virtual Keyboard was introduced as an experimental feature in April, meaning it couldn't be shipped to the store or App Lab, but with the v54 SDK released this week it has graduated to being a production feature.

Apple’s Computer Vision Tool for Developers Now Tracks Dogs & Cats

Would reality really be complete without our beloved four-legged friends? Certainly not. Luckily the latest update to Apple’s ‘Vision’ framework—which gives developers a bunch of useful computer vision tools for iOS and iPad apps—includes the ability to identify and track the skeletal position of dogs and cats.

At Apple’s annual WWDC the company posted a session introducing developers to the new animal tracking capabilities in the Vision developer tool, and explained that the system works on photos and on videos in real time.

The system, which is also capable of tracking the skeletal position of people, gives developers six tracked ‘joint groups’ to work with, which collectively describe the position of the animal’s body.

Image courtesy Apple

Tracked joint groups include:

  • Head: Ears, Eyes, Nose
  • Front Legs: Right leg, Left leg
  • Hind Legs: Right rear leg, Left rear leg
  • Tail: Tail start, Tail middle, Tail end
  • Trunk (neck)
  • All (contains all tracked points representing a complete skeletal pose)

Yes, you read that right, the system has ‘tail tracking’ and ‘ear tracking’ so your dog’s tail wags and floppy ears won’t be missed.

The system supports up to two animals in the scene at one time and, in addition to tracking their position, can also distinguish a cat from a dog… just in case you have trouble with that.
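For developers who want to try it, a minimal sketch using the new request might look like the following (joint group names follow the list above; error handling is kept minimal):

```swift
import CoreGraphics
import Vision

// Detect an animal body pose in a still image and print the head joints
func detectAnimalPose(in cgImage: CGImage) throws {
    let request = VNDetectAnimalBodyPoseRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    guard let observation = request.results?.first else { return }

    // Each joint group (.head, .forelegs, .hindlegs, .tail, .trunk, .all)
    // returns named points with confidences, in normalized image coordinates
    let headJoints = try observation.recognizedPoints(.head)
    for (joint, point) in headJoints where point.confidence > 0.3 {
        print(joint, point.location, point.confidence)
    }
}
```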

Image courtesy Apple

Despite the similarity in name to the Vision Pro headset, it isn’t yet clear if Apple will expose the ‘Vision’ computer vision framework to developers of the headset, but it may well be the same foundation that allows the device to identify people in the room around you and fade them into the virtual view so you can talk to them.

That may have also been a reason for building out this animal tracking system in the first place—so you don’t trip over fido when you’re dancing around the room in your new Vision Pro headset—though we haven’t been able to confirm that system will work with pets just yet.

Apple Vision Pro Supports Unity Apps & Games

Apple confirmed Vision Pro supports porting Unity apps and games.

Acknowledging the existing Unity VR development community, Apple said "we know there is a community of developers who have been building incredible 3D apps for years" and announced a "deep partnership" with Unity in order to "bring those apps to Vision Pro".

This partnership involved "layering" Unity's real-time engine on top of RealityKit, Apple's own high-level framework (arguably engine) for building AR apps.

This approach means Unity apps can run alongside other visionOS apps in your environment, a concept Apple calls the "shared space."

Unity apps will get access to visionOS features including the use of real-world passthrough as a background, foveated rendering, and the native system hand gestures.

Apple also has its own Mac-based suite of tools for native spatial app development. You use the Xcode IDE, SwiftUI for user interfaces, and its ARKit and RealityKit frameworks for handling tracking, rendering, physics, animations, spatial audio, and more. Apple even announced Reality Composer Pro, which is essentially its own engine editor.

Vision Pro will have a "brand new" App Store for immersive apps as well as iPhone and iPad apps compatible with the headset.