Unpacking the VR Design Details of ‘Half-Life: Alyx’ – Inside XR Design

In Inside XR Design we examine specific examples of great VR design. Today we’re looking at the details of Half-Life: Alyx and how they add an immersive layer to the game rarely found elsewhere.

Editor’s Note: Now that we’ve rebooted our Inside XR Design series, we’re re-publishing them for those who missed our older entries.

You can find the complete video below, or continue reading for an adapted text version.


Now listen, I know you’ve almost certainly heard of Half-Life: Alyx (2020); it’s one of the best VR games made to date. And there are tons of reasons why it’s so well regarded. It’s got great graphics, fun puzzles, memorable set-pieces, an interesting story… and on and on. We all know this already.

But the scope of Alyx allows the game to go above and beyond what we usually see in VR with some awesome immersive details that really make it shine. Today I want to examine a bunch of those little details—and even if you’re an absolute master of the game, I hope you’ll find at least one thing you didn’t already know about.

Inertia Physics

First is the really smart way that Alyx handles inertia physics. Lots of VR games use inertia to give players the feeling that objects have different weights, which makes moving a small, light object feel totally different from a large, heavy one. But it usually comes with a sacrifice: larger objects become much more challenging to throw, because the player has to account for the inertia sway as they throw the object.

Alyx makes a tiny little tweak to this formula by ignoring the inertia sway only in its throwing calculation. That means if you’re trying to accurately throw a large object, you can just swing your arm and release in a way that feels natural and you’ll get an accurate throw even if you didn’t consider the object’s inertia.

This gives the game the best of both worlds—an inertia system to convey weight but without sacrificing the usability of throwing.
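To make the idea concrete, here’s a minimal sketch of how a system like this might work: the object lags behind the hand while held (conveying weight), but the throw calculation reads the hand’s velocity directly and ignores the sway. All names and constants here are illustrative guesses, not Valve’s actual implementation.

```python
def sway_position(hand_pos, prev_obj_pos, weight, dt):
    """While held, a heavy object lags behind the hand: each frame it moves
    only a fraction of the way toward the hand, so heavier means more lag."""
    follow = min(1.0, dt * (10.0 / weight))  # heavier -> smaller follow factor
    return [p + (h - p) * follow for h, p in zip(hand_pos, prev_obj_pos)]

def throw_velocity(hand_velocity, weight):
    """At release, ignore the sway entirely: use the hand's own velocity so a
    natural swing produces an accurate throw regardless of object weight."""
    return list(hand_velocity)
```

Because `throw_velocity` never looks at the swayed object position, a heavy crate and a tin can thrown with the same arm motion fly the same way, which is exactly the “best of both worlds” trade-off described above.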

I love this kind of attention to detail because it makes the experience better without players realizing anything is happening.

Sound Design


When it comes to sound design, Alyx is really up there not just in terms of quality, but in detail too. One of my absolute favorite details in this game is that almost every object has a completely unique sound when being shaken. And this reads especially well because it’s spatial audio, so you’ll hear it most from the ear that’s closest to the shaken object:

This is something that no flatscreen game needs because only in VR do players have the ability to pick up practically anything in the game.

I can just imagine the sound design team looking at the game’s extensive list of props and realizing they need to come up with what a VHS tape or a… TV sounds like when shaken.

That’s a ton of work for this little detail that most people won’t notice, but it really helps keep players immersed when they pick up, say, a box of matches and hear the exact sound they would expect to hear if they shook it in real life.

Gravity Gloves In-depth

Ok so everyone knows the Gravity Gloves in Alyx are a diegetic way to give players a force pull capability so it’s easier to grab objects at a distance. And practically everyone I’ve talked to agrees they work exceptionally well. They’re not only helpful, but fun and satisfying to use.

But what exactly makes the gravity gloves perhaps the single best force-pull implementation seen in VR to date? Let’s break it down.

In most VR games, force-pull mechanics have two stages:

  1. The first, which we’ll call ‘selection’, is pointing at an object and seeing it highlighted.
  2. The second, which we’ll call ‘confirmation’, is pressing the grab button which pulls the object to your hand.

Half-Life: Alyx adds a third stage to this formula which is the key to why it works so well:

  1. First is ‘selection’, where the object glows so you know what is being targeted.
  2. The second—let’s call it ‘lock-on’—involves pulling the trigger to confirm your selection. Once you do, the selection is locked on; even if you move your hand now, the selection won’t change to any other object.
  3. The final stage, ‘confirmation’, requires not a button press but a pulling gesture to finally initiate the force pull.

Adding that extra lock-on stage to the process significantly improves reliability because it ensures that both the player and the game are on the same page before the object is pulled.
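The three stages can be sketched as a tiny state machine. This is an illustrative reconstruction based on the description above (the stage names follow the article, not Valve’s code), but it shows why lock-on adds reliability: once the trigger is pulled, hand movement can no longer change the target.

```python
class ForcePull:
    """Minimal sketch of a three-stage force pull: select -> lock-on -> confirm."""

    def __init__(self):
        self.state = "idle"
        self.target = None

    def point_at(self, obj):
        # Stage 1: selection. The highlighted object follows the hand,
        # but only while nothing is locked on yet.
        if self.state in ("idle", "selecting"):
            self.state, self.target = "selecting", obj

    def pull_trigger(self):
        # Stage 2: lock-on. The selection is frozen; moving the hand
        # can no longer re-target another object.
        if self.state == "selecting":
            self.state = "locked"

    def flick_wrist(self):
        # Stage 3: confirmation. A pulling gesture (not a button press)
        # launches the locked object toward the hand.
        if self.state == "locked":
            obj, self.target, self.state = self.target, None, "idle"
            return obj
        return None
```

In a two-stage system, pointing at a new object between “grab button down” and the pull could silently change the target; here that mistake is structurally impossible.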

And it should be noted that each of these stages has distinct sounds which make it even clearer to the player what’s being selected so they know that everything is going according to their intentions.

The use of a pulling gesture makes the whole thing more immersive by making it feel like the game world is responding to your physical actions, rather than the press of a button.

There’s also a little bit of magic to the exact speed and trajectory the objects follow, like how the trajectory can shift in real-time to reach the player’s hand. Those parameters are carefully tuned to feel satisfying without feeling like the object just automatically attaches to your hand every time.
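A simple way to get that real-time course correction is steering: each frame, blend the object’s velocity toward the hand’s current position, so if the hand moves mid-flight the trajectory bends to follow it. The sketch below is a guess at the general technique, not Valve’s tuning; the `steer`, `speed`, and snap-radius values are invented for illustration.

```python
def step_pull(obj_pos, obj_vel, hand_pos, dt, steer=8.0, speed=4.0):
    """Advance a pulled object one frame, re-aiming at the hand as it moves."""
    to_hand = [h - o for h, o in zip(hand_pos, obj_pos)]
    dist = sum(d * d for d in to_hand) ** 0.5
    if dist < 0.1:                        # close enough: settle into the hand
        return list(hand_pos), [0.0, 0.0, 0.0]
    desired = [d / dist * speed for d in to_hand]       # aim at the hand *now*
    blend = min(1.0, steer * dt)                        # steering strength
    obj_vel = [v + (dv - v) * blend for v, dv in zip(obj_vel, desired)]
    obj_pos = [o + v * dt for o, v in zip(obj_pos, obj_vel)]
    return obj_pos, obj_vel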

This strikes me as something that an animator may even have weighed in on to say, “how do we get that to feel just right?”

Working Wearables

It’s natural for players in VR to try to put a hat on their head when they find one, but did you know that wearing a hat protects you from barnacles? And yes, that’s the official name for those horrible creatures that stick to the ceiling.

But it’s not just hats you can wear. The game is surprisingly good about letting players wear anything that’s even vaguely hat-shaped. Like cones or even pots.

I figure this is something that Valve added after watching more than a few playtesters attempt to wear those objects on their head during development.

Speaking of wearing props, you can also wear gas masks. And the game takes this one step further… the gas masks actually work. One part of the game requires you to hold your hand up to cover your mouth to avoid breathing spores which make you cough and give away your position.


If you wear a gas mask you are equally protected, but you also get the use of both hands which gives the gas mask an advantage over covering your mouth with your hand.

The game never explicitly tells you that the gas mask will also protect you from the spores, it just lets players figure it out on their own—sort of like a functional easter egg.

Spectator View

Next up is a feature that’s easy to forget about unless you’ve spent a lot of time watching other people play Half-Life: Alyx… the game has an optional spectator interface which shows up only on the computer monitor. The interface gives viewers the exact same information that the actual player has while in the game: like, which weapons they have unlocked or equipped and how much health and resin they have. The interface even shows what items are stowed in the player’s ‘hand-pockets’.

And Valve went further than just adding an interface for spectators, they also added built-in camera smoothing, zoom levels, and even a selector to pick which eye the camera will look through.

The last one might seem like a minor detail, but because people are either left or right-eye dominant, being able to choose your dominant eye means the spectator will correctly see what you’re aiming at when you’re aiming down the scope of a gun.

Multi-modal Menu

While we’re looking at the menus here, it’s also worth noting that the game menu is primarily designed for laser pointer interaction, but it also works like a touchscreen.

While this seems maybe trivial today, let’s remember that Alyx was released almost four years ago(!). The foresight to offer both modalities means that no matter if the player’s first instinct is to touch the menu or use the laser, both choices are equally correct.

Guiding Your Eye

All key items in Alyx have subtle lights on them to draw your attention. This is basic game design stuff, but I have to say that Alyx’s approach is much less immersion breaking than many VR games where key objects are highlighted in a glaringly obvious yellow mesh.

For the pistol magazine, the game makes it clear even at a distance how many bullets are in the magazine… in fact, it does this in two different ways.

First, every bullet has a small light on it which lets you see from the side of the magazine roughly how full it is.

And then on the bottom of the magazine there’s a radial indicator that depletes as the ammo runs down.

Because this is all done with light, if the magazine is half full, it will be half as bright—making it easy for players to tell just how ‘valuable’ the magazine is with just a glance, even at a distance. Completely empty magazines emit no light so you don’t mistake them for something useful. Many players learn this affordance quickly, even without thinking much about it.
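As a back-of-the-napkin sketch, the affordance reduces to one rule: glow intensity is the fill fraction, and zero rounds means zero light. The function below is a hypothetical illustration of that rule, not game code.

```python
def magazine_glow(rounds_left, capacity=10):
    """Light intensity (0.0 to 1.0) scales with how full the magazine is.
    An empty magazine emits no light, so it never reads as 'valuable'."""
    if rounds_left <= 0:
        return 0.0
    return min(rounds_left, capacity) / capacity
```

A half-full magazine glows at half brightness, which is why players can judge its value at a glance without ever reading a number.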

The takeaway here is that a game’s most commonly used items—the things players will interact with the most—should be the things that are most thoughtfully designed. Players will collect and reload literally hundreds of magazines throughout the game, so spending time to add these subtle details meaningfully improves the entire experience.

Continue on Page 2 »

The post Unpacking the VR Design Details of ‘Half-Life: Alyx’ – Inside XR Design appeared first on Road to VR.

The Secret to ‘Beat Saber’s’ Fun Isn’t What You Think – Inside XR Design

Our series Inside XR Design highlights and unpacks examples of great XR design. Today we’re looking at Beat Saber (2019) and why its most essential design element can be used to make great VR games that have nothing to do with music or rhythm.

You can find the complete video below, or continue reading for an adapted text version.

More Than Music

Welcome back to another episode of Inside XR Design. Now listen, I’m going to say something that doesn’t seem to make any sense at all. But by the end of this article, I guarantee you’ll understand exactly what I’m talking about.

Beat Saber… is not a rhythm game.

Now just wait a second before you call me insane.

Beat Saber has music, and it has rhythm, yes. But the defining characteristic of a rhythm game is not just music, but also a scoring system that’s based on timing. The better your timing, the higher your score.

Now here’s the part most people don’t actually realize. Beat Saber doesn’t have any timing component to its scoring system.

That’s right. You could reach forward and chop a block right as it comes into range. Or you could hit it at the last second before it goes completely behind you, and in both cases you could earn the same number of points.

So if Beat Saber scoring isn’t about timing, then how does it work? The scoring system is actually based on motion. In fact, it’s actually designed to make you move in specific ways if you want the highest score.

The key scoring factors are how broad your swing is and how even your cut is through the center of the block. So Beat Saber throws these cubes at you and challenges you to swing broadly and precisely.
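To see how scoring can reward motion with no timing term at all, here’s a sketch. The 70 + 30 + 15 breakdown (wind-up angle, follow-through angle, cut centeredness) mirrors Beat Saber’s commonly cited scoring, but treat the exact numbers as an approximation; the point is that time of impact never appears in the formula.

```python
def score_cut(pre_swing_deg, follow_through_deg, center_offset):
    """Score a block cut from motion alone.
    center_offset: 0.0 = cut exactly through the block's center,
                   1.0 = cut at the block's edge."""
    pre = 70 * min(pre_swing_deg, 100) / 100       # reward a broad wind-up
    post = 30 * min(follow_through_deg, 60) / 60   # reward a broad follow-through
    acc = 15 * (1 - min(max(center_offset, 0.0), 1.0))  # reward a centered cut
    return round(pre + post + acc)
```

Notice there is no argument for *when* the block was hit: an early chop and a last-second save with identical swings earn identical points, which is exactly the claim above.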

And while Beat Saber has music that certainly helps you know when to move, more than a rhythm game… it’s a motion game.

Specifically, Beat Saber is built around a VR design concept that I like to call ‘Instructed Motion’, which is when a game asks you to move your body in specific ways.

And I’m going to make the case that Instructed Motion is a design concept that can be completely separated from games with music. That is to say: the thing that makes Beat Saber so fun can be used to design great VR games that have nothing to do with music or rhythm.

Instructed Motion

Ok so to understand how you can use Instructed Motion in a game that’s not music-based let’s take a look at Until You Fall (2020) from developer Schell Games. This is not remotely a rhythm game—although it has an awesome soundtrack—but it uses the same Instructed Motion concept that makes Beat Saber so much fun.

While many VR combat games use physics-based systems that allow players to approach combat with arbitrary motions, Until You Fall is built from the ground up with a notion of how it wants players to move.

And before you say that physics-based VR combat is objectively the better choice in all cases, I want you to consider what Beat Saber would be like if players could cut blocks in any direction they wanted at all times.

Sure, you would still be cutting blocks to music, and yet, it would be significantly harder to find the fun and flow that makes the game feel so great. Beat Saber uses intentional patterns that cause players to move in ways that are fluid and enjoyable. Without the arrows, player movements would be chaotic and they’d be flailing randomly.

So just like Beat Saber benefits by guiding a player to make motions that are particularly satisfying, combat in VR can benefit too. In the case of Until You Fall, the game uses Instructed Motion not only to make players move a certain way, but also to make them feel a certain way.

When it comes to blocking, players feel vulnerable because they are forced into a defensive position. Unlike a physics-based combat game where you can always decide when to hit back, enemies in Until You Fall have specific attack phases, and you must block while they happen, otherwise you risk taking a hit and losing one of just three hit points.

Thanks to this approach, the game can adjust the intensity the player feels by varying the number, position, and speed of blocks that must be made. Weak enemies might hit slowly and without much variation in their attacks. While strong enemies will send a flurry of attacks that make the player really feel like they’re under pressure.

This gives the developer very precise control over the intensity, challenge, and feeling of each encounter. And it’s that control that makes Instructed Motion such a useful tool.

Dodging is similar to blocking, but instead of raising your weapon to the indicated position, you need to move your whole body out of the way. And this feels completely different from just blocking.

While some VR combat games would let the player ‘dodge’ just by moving their thumbstick to slide out of the way, Until You Fall uses Instructed Motion to make the act of dodging much more physically engaging.

And when it comes to attacking, players can squeeze in hits wherever they can until an enemy’s shield is broken, which then opens an opportunity to deal a bunch of damage.

And while another VR game might have just left this opening for players to hit the enemy as many times as they can, Until You Fall uses Instructed Motion to ask players to swing in specific ways.

Swinging in wide arcs and along particular angles deals the most damage and makes you move in a way that feels really powerful and confident. It’s like the opposite feeling of when you’re under attack. It really feels great when you land all the combo hits.

Continue on Page 2: Motion = Emotion


Vision Pro is Hands-down the Best Movie Experience You Can Have on a Plane

After an eight hour flight, I can say that Apple has largely nailed the Vision Pro use-case of movie watching on a plane. But a few key improvements stand to make it more widely appealing.

Nobody looks forward to an eight hour flight. Whether it’s sleeping or reading or working, people want a way to pass the time and distract themselves from the noisy cabin, turbulence, and the general feeling of being packed into a metal tube like sardines.

The seat-back screen—with its selection of movies and TV shows—offers minor refuge from this chaotic environment.

I’m someone who really appreciates ‘cinematic spectacle’—you know, the movies that have the direction and action that really deserve a big screen and great audio.

While the movie selection on a plane is usually not half bad, over the years I have regularly avoided watching some movies I actually wanted to watch, because I felt they deserved much more than the experience I’d get from a small, low quality seat-back screen.

If only I could somehow bring my own movie theater on the plane.

Well, it turns out that’s a thing now.

Vision Pro on a Plane

Using a Vision Pro combined with AirPods Pro 2 on an international flight was a phenomenal viewing experience that can reasonably be described as bringing your own movie theater onto the plane.

While there’s still some obvious ways to improve the experience of using the headset on a plane, I was blown away at how it managed to make me practically unaware of the plane I was on.

This whole thing really only works well because Apple has done a few things to make sure the use-case is not just theoretical, but actually considered from end-to-end.

For one, Vision Pro has a special tracking mode called Travel Mode (not to be confused with Airplane Mode) which allows the headset to keep the floating screen locked in place in front of you even though the airplane is moving. Without it, the headset would detect the motions of the plane and cause the screen to go flying off behind you at worst, or slowly drift out of place at best.

Travel Mode managed to keep the screen perfectly locked in place in front of me, with no drift throughout the entire duration of the movie. I put the screen out in front of me and made it 20 feet large.

A screen that size would otherwise have created a stereoscopic disparity by going ‘through’ the seats in front of me, but turning Vision Pro’s digital crown to add an immersive backdrop behind the screen (which fades out to the passthrough view at the edges) worked perfectly to prevent that. It ended up looking like a soft portal to another dimension was open right in front of me… with a huge TV just on the other side.

Then there’s simply the quality of the display. When it comes to movie viewing, it’s not just resolution that matters. The headset’s HDR capability combined with micro OLED (which offers true blacks) really makes videos shine.

But none of this would matter if it wasn’t easy to find and transfer high quality video content onto the headset.

Luckily it was as simple as opening the Apple TV app before my flight where I downloaded Mad Max: Fury Road (2015)—at 4K resolution with surround sound, HDR, and in 3D—for offline viewing on my headset.

Lost in the Best Way

What’s crazy is that despite being stuck in a plane in an economy seat, this was the best way I’ve ever watched Mad Max on any screen. The quality was great. The 3D is better than what you get in a movie theater, and so is the contrast. Using AirPods Pro 2 also gave me a really impressive audio experience, and I couldn’t believe how well the noise cancellation isolated me from the noise of the plane.

With a high quality video on a huge screen, great sound with noise cancellation, and a movie with constant action, I was lost in an audio-visual reality that practically made me forget I was on a plane. In fact I have to admit that I was so lost in the film that I forgot to capture any screenshots for this article!

But I wasn’t completely unaware… on purpose. I didn’t dial the immersive environment up to 100% (which would have completely surrounded me and made it look like I wasn’t in the plane at all), which meant I could still look off to the side and see what was happening in the cabin so I didn’t need to worry that I’d miss a drink when the flight attendants came by.

Not for Everyone (yet)

The movie watching experience I had with Vision Pro on the plane was vastly better than what I’ve ever had from a seatback screen or a laptop.

But it’s not a perfect experience and there’s still some things that need to be improved before everyone would want to watch movies on the plane this way.

First are the obvious things. Vision Pro is big, and even bigger when it’s in a travel case. At this price, it’s not the kind of headset you’re just going to squeeze into a backpack without any protection. The headset in its travel case took up like 80% of the space in the backpack I carried onto the plane.

When I was ready to pull the headset out, it was fairly clunky to pull the case out of my backpack, unzip and fold it open in my lap, then pull out the headset and battery before getting the headset plugged in and putting the case back under my seat. In the cramped space of an economy seat, it’s a bit of a juggling act.

The only real fix for this is a smaller and more affordable headset. And even better if they can eventually ditch the battery pack. But in the interim, I could easily see an airline offering Vision Pro headsets built into a compartment in first class seats. Not only could these be permanently powered through a tether, but passengers wouldn’t need to carry a bulky case with them onto the plane to get a great movie watching experience.

Although hand-tracking worked incredibly well considering how dark the cabin was, Vision Pro would occasionally give me a ‘Tracking Lost’ message when I shuffled around a little too much—likely a limitation of Travel Mode. Luckily Apple thoughtfully pauses the movie when this happens, and in three or four seconds the tracking would come back and the movie would start playing again.

This happened a handful of times as I watched the movie. Because I understand the tech and the challenge of tracking the headset in this worst-case environment, it didn’t bother me that much. But for a normal person this would probably feel like quite a disruption to the movie experience if it happened multiple times.

Visual and audio isolation is the point if you’re using a headset on a plane, but this can make it hard for someone to get your attention. Passthrough is of course helpful here, but the headset’s field-of-view is tighter than your natural field-of-view, making it harder to notice someone out of the corner of your eye (like a fellow passenger who wants to politely interrupt you so they can get out of the seats and to the bathroom).

And of course there’s battery life. After watching the full two hours of Mad Max: Fury Road, I was left with 35% battery on Vision Pro. Although that means I had another hour to squeeze in a show or two, only being able to watch one full length movie on an eight hour flight is an obvious and unfortunate limitation.

And yes I could have brought a big external battery and plugged it into Vision Pro’s battery to extend the runtime, but now we’re talking about adding more bulk, wires, and juggling to the equation.

Personally I was willing to put up with these various hassles to watch a movie with excellent audio and visual quality on a plane. And I’ll do it again.

But I recognize that not everyone cares that much about what a movie looks and sounds like. For those people, Vision Pro is just not convenient enough for the value it would bring them. But once it gets smaller (and loses the battery pack), this use-case will become appealing for a much larger group of people.


‘Silent Slayer’ Preview – Dr. Van Helsing’s Deadly Game of Operation

I went hands-on with Silent Slayer: Vault of the Vampire, an upcoming horror-puzzle for Quest from Schell Games that tasks you with defusing various arcane traps protecting a coven of sleeping vampires. Much like the studio’s pioneering VR puzzle franchise I Expect You to Die, any false move means certain death, but you’ll need to think twice before fumbling your trusty vampire-busting tools since there’s always a jump scare waiting for you on the other side of inevitable failure.

In my preview of Silent Slayer, I got a chance to play through the first three levels of the game, which are basically tutorials that introduce the world, your growing assortment of tools, and three of the coven’s vampire foes. In total there are apparently nine levels, although I haven’t set foot outside of the third, so I can only give you an accurate impression of what the first 30-ish minutes of the game has to offer.

Like I Expect You to Die, the studio’s upcoming horror-puzzle is played equally well standing up or sitting down, requiring little to no room-scale movement on your quest to play what is essentially a spooky version of the kids’ board game Operation, which similarly tasks you with precisely manipulating little doohickeys with the utmost care to not trip the metaphorical buzzer—or in Silent Slayer’s case, a screaming vampire.

Before the fun begins though, you’re first tasked with reassembling a sort of totem inscribed with the crest of your next enemy, called a ‘Bind Stone’.


The broken stones give a few clues on how they’re put back together, although you may be scratching your head a bit as you follow broken contours and match edges to reveal different geometric forms to unlock each sequential level. The stone could be a pyramid, a prism, or anything, making for an interesting little roadblock of a puzzle that forces you to pay close attention to detail—an important skill you’ll learn once you’re face-to-face with the bloodsucker du jour.

And back at your home base, you’re also given a talking book which not only narrates the game’s story, but provides detail on every vampire, and every tool given to you for each mission. More on that later.

The real meat of the game though comes when you’re transported to your target, and put in front of the ghoul’s closed coffin which features a few initial mechanisms to undo before you can get to the stabby bit. You’ll need to gingerly pull out locking crossbars, slowly manipulate keys, and pull out nails with a provided mini-crowbar—the latter of which requires you to pry up nails just enough so you can grip them with your free hand. Go a little too far, and the nail will fall, alerting the vampire inside and raising his awareness bar.

Once you’ve opened the top bit of the coffin carefully, keeping quiet and being very precise is the name of the game. Of course, your bookish pal is there to lend a hand, but also adds some color commentary on how you need to hurry up, and what to watch out for.


Using the game’s various physics-based tools brings a lot of solidity and gravity to every move. You’ll use things like clippers to sever tripwires, a heart-detection tool to mark where the vampire’s heart lies, and your trusty stake to pierce the next protective shell. Even that last bit can be a challenge though, as shown by my less-than-precise stab seen above.

If you can make it that far, you’ll be left with two more tasks—at least as far as I know from playing three levels. Trace the vampire’s crest in the air to deactivate the final, unseen trap, and stab the sucker right through the heart. Job done.

From a technical standpoint, Silent Slayer is a visually engrossing and well-refined game that totally fits in with the high production value you see in I Expect You to Die. I still have a lot to learn about the game though, as some previously released images reveal a significant ramp in difficulty with promises of a much higher density of traps and corresponding tools than I experienced in my hands-on. Those look like a lot of keys, which means a lot of very pensive inserting and turning. That image below also shows a long pry bar, which I imagine will mean I have to be super careful with some far away nails.

Image courtesy Schell Games

That said, jump scares weren’t extremely terrifying, since you always know they’re coming after a major screw up. That’s just a piece of the overall puzzle though, which thus far has been a fun experience in learning how each trap works, and finding out just how reactive the world really is. Seriously, if you put down a pair of clippers on your workbench too indelicately, you’ll make a noise and alert the undead within.

I’m also looking forward to learning more about the overarching story, which I hope matures throughout the game’s nine levels. I can’t say I was paying too much attention to the backstory during my playthrough of the first three levels, as I was busy learning how to work the game’s various tools, which are doled out as you move to tougher vampires.

In all, Silent Slayer appears to be everything it says on the tin, although I’m really hoping it tosses some gratifying twists my way, as looking plainly at the map presented to you in the book makes it feel just a little too linear of an experience so far. You can read more about my impressions in the full review though, which ought to be out sometime this summer when the game launches on Quest 2/3/Pro. In the meantime, you can wishlist the game on Quest here, currently priced at a 10% discount off its regular $20 price tag.


This Clever Immersion Hack Makes VR Feel More Real – Inside XR Design

In Inside XR Design we examine specific examples of great VR design. Today we’re looking at the clever design of Red Matter 2’s ‘grabber tools’ and the many ways that they contribute to immersion.

Editor’s Note: Now that we’ve rebooted our Inside XR Design series, we’re re-publishing them for those who missed our older entries.

You can find the complete video below, or continue reading for an adapted text version.


Today we’re going to talk about Red Matter 2 (2022), an adventure puzzle game set in a retro-future sci-fi world. The game is full of great VR design, but those paying close attention will know that some of its innovations were actually pioneered all the way back in 2018 with the release of the original Red Matter. But hey, that’s why we’re making this video series—there’s incredible VR design out there that everyone can learn from.

We’re going to look at Red Matter 2’s ingenious grabber tools, and the surprising number of ways they contribute to immersion.

What You See is What You Get

At first glance, the grabber tools in Red Matter 2 might just look like sci-fi set-dressing, but they are so much more than that.

At a basic level, the grabber tools take on the shape of the user’s controller. If you’re playing on Quest, Index, or PSVR 2, you’ll see a custom grabber tool that matches the shape of your specific controller.

First and foremost, this means that players’ in-game hand pose matches their actual hand pose, and that the feeling of holding something in-game is backed by actually holding something in their hand. The shape you see in-game even matches the center of gravity as you feel it in your real hand.

Compare that to most VR games which show an open hand pose and nothing in your hand by default… that creates a disconnect between what you see in VR and what you actually feel in your hand.

And of course because you’re holding a tool that looks just like your controller, you can look down to see all the buttons and what they do.

I don’t know about you, but I’ve been using VR for years now, and I still couldn’t reliably tell you off the top of my head which button is the Y button on a VR controller. Is it on the left or right controller? Top or bottom button? Take your own guess in the comments and then let us know if you got it right!

Being able to look down and reference the buttons—and which ones your finger is touching at any given moment—means players can always get an instant reminder of the controls without breaking immersion by opening a game menu or peeking out of their headset to see which button is where.

This is what’s called a diegetic interface—that’s an interface that’s contextualized within the game world, instead of some kind of floating text box that isn’t actually supposed to exist as part of the game’s narrative.

In fact, you’ll notice that there’s absolutely no on-screen interface in the footage you see from Red Matter 2. And that’s not because I had access to some special debug mode for filming. It’s by design.

When I spoke with Red Matter 2 Game Director Norman Schaar, he told me, “I personally detest UI—quite passionately, in fact! In my mind, the best UI is no UI at all.”

Schaar also told me that a goal of Red Matter 2’s design is to keep the player immersed at all times.

So it’s not surprising that we also see the grabber tools used as a literal interface within the game, allowing you to physically connect to terminals to gather information. To the player this feels like a believable way that someone would interact with the game’s world—but under the surface we’re actually looking at a clever and immersive replacement for the ‘press X to interact’ mechanics that are common in flat games.

The game’s grabber tools do even more for immersion than just replicating the feel of a controller in your hand or acting as a diegetic interface in the game. Crucially, they also replicate the limited interaction fidelity that players actually have in VR.

Coarse Hand Input

So let me break this down. In most VR games when you look at your hands you see… a human hand. That hand of course is supposed to represent your hand. But, there’s a big disconnect between what your real hands are capable of and what the virtual hands can do. Your real hands each have five fingers and can dexterously manipulate objects in ways that even today’s most advanced robots have trouble replicating.

So while your real hand has five fingers to grab and manipulate objects, your virtual hand essentially only has one point of input—a single point with which to grab objects.

If you think about it, the grabber tool in Red Matter 2 exactly represents this single point of input to the player. Diegetically, it’s obvious upon looking at the tool that you can’t manipulate the fingers, so your only option is to ‘grab’ at a single point.

That’s a long way of saying that the grabber tools in Red Matter 2 reflect the coarse hand input that’s actually available to us in VR, instead of showing us a virtual hand with lots of fingers that we can’t actually use.

So, in Red Matter 2, the grabber tools contextualize the inability to use our fingers. The result is that instead of feeling silly that you have to rotate and manipulate objects in somewhat strange ways, you actually feel like you’re learning how to deftly operate these futuristic tools.

Immersion Insulation Gap

And believe it or not, there’s still more to say about why Red Matter 2’s grabber tools are so freaking smart.

Physics interactions are a huge part of the game, and the grabber tools again work to maintain immersion when handling objects. Like many VR games, Red Matter 2 uses an inertia-like system to imply the weight of an object in your hand. Small objects move quickly and easily, while large objects are sluggish and their inertia fights against your movement.

Rather than leaving us to imagine the force our hands would feel when moving these virtual objects, the grabber tools create a sort of immersion insulation gap by placing a mechanical pivot point between the tool and the object.

This visually ‘explains’ why we can’t feel the forces of the object against our fingers, especially when the object is very heavy. The disconnect between the object and our hand—with the grabber tool as the insulator in the middle—alleviates some of the expectation of the forces that we’d normally feel in real life, thereby preserving immersion just a little bit more.
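The article doesn’t show how such an inertia system is built, but a common approach is a spring-damper filter that pulls the grabbed object toward the hand, with acceleration scaled down by mass so heavier objects visibly lag. A minimal 1D sketch, with all names and constants as illustrative assumptions rather than Red Matter 2’s actual code:

```python
# Inertia-style 'weight' for grabbed objects: a spring-damper pulls
# the object toward the hand, and a heavier mass gets less acceleration
# from the same force, so it lags (sways) behind the hand.
# Constants are illustrative, not taken from any shipping game.

def follow_hand(obj_pos, obj_vel, hand_pos, mass, dt,
                stiffness=60.0, damping=12.0):
    """One integration step of the object chasing the hand (1D)."""
    force = stiffness * (hand_pos - obj_pos) - damping * obj_vel
    obj_vel += (force / mass) * dt   # F = m*a, so mass slows the response
    obj_pos += obj_vel * dt
    return obj_pos, obj_vel
```

Over the same fraction of a second, a light object nearly reaches the hand while a heavy one is still catching up; that lag is what the player reads as weight, and the grabber’s mechanical pivot gives the lag a plausible visual excuse.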

Unassuming Inventory

And if it wasn’t clear already, the grabber tools are actually… your inventory. Not only do they store all of your tools—like the flashlight, hacking tool, and your gun—you can even use them to temporarily stow objects. Handling inventory this way means that players can never accidentally drop or lose their tools, which is an issue we see in lots of other VR games, even those which use ‘holsters’ to hold things.

Inhuman Hands

And last but not least…the grabber tools can actually do some interesting things that our hands can’t. For example, the rotating grabber actually makes the motion of turning wheels like this one easier than doing it with two normal hands.

It’s no coincidence that the design of the grabber tools in Red Matter 2 is so smartly thought through… after all, the game is all about interacting with the virtual world around you… so it makes sense that the main way in which players interact with the world would be carefully considered.

To take full advantage of the grabbers, the developers built a wide variety of detailed objects for the game which are consistently interactive. You can pick up pretty much anything that looks like you should be able to.

And here’s a great little detail that I love to see: in cases where things aren’t interactive, all you have to do is not imply that they are! Here in Red Matter 2 the developers simply removed handles from this cabinet… a clear but non-intrusive way to tell players it can’t be opened.

Somewhat uniquely to VR, just seeing cool stuff up close like it’s right in front of you can be a rewarding experience all on its own. To that end, Red Matter 2 makes a conscious effort to sprinkle in a handful of visually interesting objects, whether it’s this resin eyeball, papers with reactive physics, or this incredible scene where you watch your weapon form from hundreds of little balls right in your hand.

– – — – –

Red Matter 2’s grabber tool design is so beneficial to the game’s overall immersion that, frankly, I’m surprised we haven’t seen this sort of thing become more common in VR games.

If you want to check all of this out for yourself, you can find Red Matter 2 on Quest, PSVR 2, and PC VR. Enjoyed this breakdown? Check out the rest of our Inside XR Design series and our Insights & Artwork series.

And if you’re still reading, how about dropping a comment to let us know which game or app we should cover next?

The post This Clever Immersion Hack Makes VR Feel More Real – Inside XR Design appeared first on Road to VR.

Hands-on: Apple Upgrades Personas for True Face-to-face Chats on Vision Pro

Apple today released ‘Spatial Personas’ in public beta on Vision Pro. The newly upgraded avatar system can now bring people right into your room. We got an early look.

Much has been said about Apple’s Persona avatar system for Vision Pro. Whether you find them uncanny or passable, one thing is certain: it’s the most photorealistic real-time avatar system built into any headset available today. And now Personas is getting upgraded with ‘Spatial Personas’.

But weren’t Personas already ‘spatial’? Let me explain.

Sorta Spatial

At launch, the Persona system allowed users to scan their faces into the headset to create a digital identity that looks and moves like the user, thanks to the bevy of sensors in Vision Pro. When doing a FaceTime call with other Vision Pro users, each Persona’s head, shoulders, and hands would be shown inside a floating box.

Image courtesy Apple

While this could feel like face-to-face talking at times, the fact that they were contained within a frame (which you can move or resize like any other window) made it feel like they weren’t actually standing right next to you. And that’s not just because of the frame, but also because you weren’t actually sharing the same space as them—it’s not like they could walk right up to you for a high-five, because they’d be stuck in the window on your screen.


Now with Spatial Personas (released in beta today on the latest version of visionOS), each person’s avatar is rendered in a shared space without the frame. When I say ‘shared space’, I mean that if someone takes a step toward me in their room, I actually see them come one step closer to me.

Previously the frame made it feel sort of like you were doing a 3D video chat. Now with the shared space and no frame, it really feels like you’re standing right next to each other. It’s the ‘hang out on the same couch’ or ‘gather around the same table’ experience that wasn’t actually possible on Vision Pro at launch.

And it’s really quite compelling. I got a sneak peek at the new system in a Vision Pro FaceTime call with four people (though up to five are supported total), all using Spatial Personas. You’ll still only see their head, shoulders, and hands but now it really feels like a huddle instead of a 3D video chat. It feels much more personal.

Spatial Personas Are Opt-in

To be clear, the ‘video chat’ version of Personas (with the frame) still exists. In fact, it’s the default way that avatars are shown when a FaceTime call is started. Switching to a Spatial Persona requires hitting a button on the FaceTime menu.

And while this might seem like a strange choice, I actually think there’s something to it.

On the one hand, the default ‘FaceTime in Vision Pro’ experience feels like a video chat. In everyday business we’re all pretty used to seeing someone else on the other side of a webcam by now. And even though this is more personal than an audio-only call, it’s still a step away from actually meeting with someone in person.

Spatial Personas is more like actually meeting up in person, since you can genuinely feel the interpersonal space between you and the other people in this shared space. If they walk up and get a little too close, you’ll truly feel it, the same way you would if someone stood too close to you in real life.

So it’s nice to have both of these options. I can ‘video chat’ with someone with the regular mode, or I can essentially invite them into my space if the situation calls for a more personal meeting.

And Spatial Personas aren’t just for chatting. Just like regular Personas, you can use SharePlay while on FaceTime to watch movies and play games together (provided you both have a supported app installed).

Take Freeform for instance, Apple’s collaborative digital whiteboard app. If you launch Freeform while on a FaceTime call with Spatial Personas, everyone else will be asked to join the app, which will then load everyone in front of the whiteboard.

Everything is synchronized too. Anyone else in the call can see what you’ve put on the whiteboard and watch in real time as you add new photos or draw annotations. And just as easily, anyone can physically walk up to the board and interact with it themselves.

When it comes to shared movie viewing on Apple TV on Vision Pro, Spatial Personas unlock the feeling of sitting on the same couch together, which wasn’t quite possible with the headset at launch. Now when you watch a movie with your friends you’ll be sitting shoulder to shoulder with them, which feels very different than having a window with their face in it floating near the video you’re watching.

It’s possible to stream many flat apps to anyone in the FaceTime call while using Spatial Personas, but for 3D or interactive content developers will need to specially implement the feature.

That’s somewhat problematic though because it’s difficult to know exactly which apps support Spatial Personas or even SharePlay for that matter. As of now, you have to scroll all the way to the bottom of an app’s page to see if it supports SharePlay (unless the developer mentions it in the app’s description). And even then this doesn’t necessarily mean it supports Spatial Personas.

The Little Details

Apple also thought through some smaller details for Spatial Personas, perhaps the most interesting of which is ‘locomotion’.

Room-scale locomotion is essentially the default. If you want to move closer to a person or app… you just physically walk over to it. But what happens if it’s outside the bounds of your physical space? Well, instead of directly moving yourself virtually, you can actually move the whole shared space closer to or further from you.

You can do this any time, in any app, and everyone else will see your new position reflected within their space, keeping everything synchronized.
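Apple hasn’t published how this works internally, but a scheme like this is usually achieved by giving each participant a private mapping between shared coordinates and their own room, while all networked poses stay in shared coordinates. A minimal 1D sketch with hypothetical names (not Apple’s actual API):

```python
# Sketch of 'move the shared space' locomotion: each user keeps a
# private offset between shared coordinates and their own room, and
# only shared-coordinate poses travel over the network.
# Names and the 1D simplification are illustrative assumptions.

class SharedSpaceUser:
    def __init__(self, room_offset=0.0):
        # Where the shared-space origin sits in this user's room.
        self.room_offset = room_offset

    def to_shared(self, room_pos):
        """Convert one of my room positions into shared coordinates."""
        return room_pos - self.room_offset

    def to_room(self, shared_pos):
        """Convert a networked shared-coordinate pose into my room."""
        return shared_pos + self.room_offset

    def drag_space(self, delta):
        # Dragging the whole space only edits my local mapping; peers
        # simply see my shared-coordinate pose change as a result.
        self.room_offset += delta
```

Because every pose on the wire is in shared coordinates, dragging the space never desynchronizes anyone: the distance between two users is the same number no matter whose room it’s measured in.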

Apple also made it so that when two Spatial Personas get too close together, they temporarily revert to looking like a floating contact photo. I think this is probably because Apple wants to avoid possible harassment or trolling (i.e. you want to annoy someone, so you phase your virtual hand right through their virtual face, which is uncomfortable both visually and from an interpersonal space standpoint).
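A proximity fallback like this is straightforward to implement: measure the distance between avatars each frame and swap render modes below a personal-space threshold. The threshold value and names here are assumptions for illustration, not Apple’s actual implementation:

```python
# Sketch of a close-proximity fallback: inside a personal-space
# radius, show a flat contact photo instead of the full avatar.
# The 0.5 m threshold is an assumed value, not Apple's.

PERSONAL_SPACE_M = 0.5

def persona_render_mode(my_pos, other_pos):
    """Pick a render mode from the 3D distance between two users."""
    dist = sum((a - b) ** 2 for a, b in zip(my_pos, other_pos)) ** 0.5
    return "contact_photo" if dist < PERSONAL_SPACE_M else "spatial_persona"
```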

The headset’s excellent spatial audio is of course included by default, so everyone sounds like they’re coming from wherever they’re standing in the room, and their voices actually sound like they’re in your room (based on the headset’s estimate of what the acoustics should sound like). And if you move to a fully immersive space like an ‘environment’, the spatial audio transitions to that new acoustic environment—so for instance you can hear people faintly echoing in the Joshua Tree environment because of all the rock surfaces nearby. Hearing the acoustics fade from being inside your own room to being ‘outside’ in an environment is a subtle bit of magic.

Image courtesy Apple

And last but not least, it’s possible to have a mixed group of FaceTime participants. For instance, you could have people using an iPhone, an Android tablet (yes, you can FaceTime with people on non-Apple devices), a normal Persona, and a Spatial Persona all at once. SharePlay in that case will also work between those formats (except non-Apple devices) as long as the app supports it. For apps that are Vision Pro native, the iPhone user would get a notification that their device isn’t supported.

– – — – –

Spatial Personas is a big upgrade to Apple’s avatar system, but the company maintains that the whole Persona system is still in ‘beta’. Presumably that means there are more improvements yet to come.


‘Max Mustard’ Review – An ‘Astro Bot’ Style VR Platformer That Cuts the Mustard

Max Mustard may be a bit of a curveball when it comes to names, but this traditional 3D platformer reimagined for VR delivers in nearly every other way, serving up some very Astro Bot Rescue Mission (2018) and Lucky’s Tale (2016) vibes in the process.

Max Mustard Details:

Available On: Quest 2/3/Pro (coming later to Steam & PSVR 2)
Reviewed On: Quest 3
Release Date: March 21st, 2024
Price: $30
Developer: Toast Interactive


Max Mustard isn’t reinventing the wheel here: it’s a solid, extremely well-built 3D platformer that, for all its positives, is a pretty standard experience overall if you’ve played any 3D platformer in the past 30 years, flatscreen or otherwise.

That’s probably the most negative thing I’ll say about this plucky little adventure, which tasks you with guiding the eponymous rocket-boot-clad companion through a world of fairly easy enemies, less easy environmental obstacles, and four boss encounters that follow the strict ‘hurt it three times and it dies’ orthodoxy.

Image courtesy Toast Interactive

While the story is fairly forgettable—delivered almost entirely through letters that pop up at the end of levels—the action rarely disappoints, as you’re served up straight shots through 40 bespoke levels, many of which harken back to the Super Mario titles from the late ’80s and early ’90s.

That said, there isn’t a ton of enemy variety, as all baddies regardless of movement or attack style only take a single bonk on the head to kill, making enemies less interesting than the admittedly very cool environmental gadgets that you start encountering around the second (of four) worlds. Those fun and inventive moving platforms and increasingly difficult environmental traps are the real stars of the show here, it seems.

View post on imgur.com

And if you haven’t noticed from the clip above, Max Mustard is unabashedly a love letter to those platformers past and present, like Crash Bandicoot and Super Mario World, and the more recent Super Mario 3D Land, but also the headlining VR platformers of today too, including the illustrious Astro Bot Rescue Mission on PSVR and Lucky’s Tale on PC VR, PSVR and Quest. With the level of fit and finish, and first-person interaction (more on that below), you might even think of Max Mustard as the Astro Bot of the Quest platform.

And like those platformers from years past, Max Mustard also offers up the familiar overworld map that takes you linearly to the final boss battle, which (no spoilers!) satisfyingly puts together all of the skills you learned throughout the game.

Overworld map | Image captured by Road to VR

Along the way you’ll find minigames and the occasional shop too where you can spend coins on abilities, such as extra hearts, coin bonuses, and new combat moves. You’ll want (but probably not really need) those new moves too, as levels start to ramp in difficulty around world three, which introduces some challenging environmental obstacles, like boxes that disappear and reappear to the beat of the game’s soundtrack, torrents of cannonballs, one-use jump pads, and more. Having an extra heart, a better attack move, or rocket boots that do damage to enemies is all a neat bonus to help out.

You wouldn’t be far off in calling Max Mustard the “spiritual successor” to Sony’s Astro Bot, because like Astro Bot every so often you’re given first-person gadgets, like a dart gun and a fan gun, which you use in certain levels, the dart gun making the biggest impact throughout the game. Here I am blasting at incoming rockets from the game’s tutorial boss:

View post on imgur.com

Still, I wish the first-person gadgets were a little better integrated into regular levels, and had more variation overall considering how cool they can be. You do, however, get the chance to hone your shooting skills in minigame challenges, where you can earn coins to spend in the shop and collect extra ‘mudpups’. These are normally littered throughout regular levels, acting as a sort of secondary currency used to unlock levels as you move forward.

As for enemies, regular baddies don’t really put up much of a challenge. The game’s four main boss battles are significantly more interesting, though each stays very loyal to the well-worn platforming tropes you’re probably used to. That said, it’s hard not to smile at just how well Max Mustard nails the whole aesthetic and feel of basically everything.

Max Mustard took me around five hours to complete, although I took it pretty slow because I wanted to collect all three mudpups found in each level. You don’t need to be a completionist to get through the game with ease though, in which case it could take you three to four hours overall.


Max Mustard is stupid cute, and offers lots of level variation in both functional design and overall feel. Here’s me using the fan gun to suck up enemies and errant coins after having splashed down into the water—the sort of totally unexpected one-off level transition you’ll experience throughout.

View post on imgur.com

That said, first-person interactions are comparatively rare in Max Mustard, so you’ll be bopping around as Max most of the time instead of dealing with enemies like you see in the clip above. That puts an increased importance on the visual and functional aspects of levels, which are thankfully so rock solid that it’s easy to snap into your new ‘floating head’ POV and enjoy the game’s bright and colorful art style.

Again, I wish there were more first-person gadgets, although you have to give it to Max Mustard for including them at all, as the game seems to prioritize fast and fluid movement through levels instead of the heavier Astro Bot-y mix of first and third-person gameplay.


The game’s camera necessarily follows Max around, but does so in a way that’s gentle and comfortable. The studio’s decision to make snap turning a purchasable upgrade at the shop feels a bit weird, however, as turning is pretty necessary for repositioning yourself in levels to grab coins or mudpups you may have missed. Granted, the feature is unlocked with in-game coins, but it should be a standard movement option out of the box.

There are a few moments of forced motion in one-off events, although nothing that should set off alarm bells for motion sickness-prone users, making Max Mustard pretty much perfect for anyone, including VR first-timers.

‘Max Mustard’ Comfort Settings – March 21st, 2024

Artificial turning
Snap-turn ✔
Quick-turn ✖
Smooth-turn ✖
Artificial movement
Teleport-move ✖
Dash-move ✖
Smooth-move ✔
Blinders ✖
Head-based ✖
Controller-based ✔
Swappable movement hand ✖
Standing mode ✔
Seated mode ✔
Artificial crouch ✖
Real crouch ✖
Subtitles ✖
Interface language
Languages English, French, German, Spanish, Japanese, Korean
Dialogue audio
Languages English
Adjustable difficulty ✖
Two hands required ✔
Real crouch required ✖
Hearing required ✖
Adjustable player height ✔


Why “Embodiment” is More Important Than “Immersion” – Inside XR Design

Our series Inside XR Design examines specific examples of great XR design. Today we’re looking at the game Synapse and exploring the concept of embodiment and what makes it important to VR games.

You can find the complete video below, or continue reading for an adapted text version.

Defining Embodiment

Welcome back to another episode of Inside XR Design. Today I’m going to talk about Synapse (2023), a PSVR 2 exclusive game from developer nDreams. Specifically, we’re going to look at the game through the lens of a concept called embodiment.

So what the hell is embodiment, and why am I boring you with it rather than just talking about all the cool shooting, and explosions, and smart design in the game? Well, it’s going to help us understand why certain design decisions in Synapse are so effective. So stick with me here for just a minute.

Embodiment is a term I use to describe the feeling of being physically present within a VR experience. Like you’re actually standing there in the world that’s around you.

And now your reasonable response is, “but don’t we already use the word immersion for that?”

Well colloquially people certainly do, but I want to make an important distinction between ‘immersion’ and ‘embodiment’.

‘Immersion’, for the purposes of our discussion, is when something has your complete attention. We all agree that a movie can be immersive, right? When the story or action is so engrossing it’s almost like nothing outside of the theater even exists at that moment. But has even the most immersive movie you’ve ever seen made you think you were physically inside the movie? Certainly not.

And that’s where ’embodiment’ comes in. For the sake of specificity, I’m defining immersion as being about attention. On the other hand, embodiment is about your sense of physical presence and how it relates to the world around you.

So I think it’s important to recognize that all VR games get immersion for free. By literally taking over your vision and hearing, for the most part they automatically have your full attention. You are immersed the second you put on a headset.

But some VR games manage to push us one step further. They don’t just have our attention, they make us feel like our whole body has been transported into the virtual world. Like you’d actually feel things in the game if you reached out and touched them.

Ok, so immersion is attention and embodiment is the feeling of actually being there.

And to be clear, embodiment isn’t a binary thing. It’s a spectrum. Some VR games are slightly embodying, while others are very embodying. But what makes the difference?

That’s exactly what we’re going to talk about with Synapse.

Cover You Can Feel

At first glance, Synapse might look like a pretty common VR shooter, but there are several really intentional design decisions that drive a strong sense of embodiment. The first thing I want to talk about is the cover system.

Every VR shooter has cover. You can walk behind a wall and it will block shots for you. But beyond that, the wall doesn’t really physically relate to your actual body because you never actively engage with it. It’s just a stationary object.

But Synapse makes walls and other cover interactive by letting you grab it with your hand and pull your body in and out of cover. This feels really natural and works great for the gameplay.

And because you’re physically moving yourself in relation to the wall—instead of just strafing back and forth with a thumbstick—the wall starts to feel more real. Specifically, it feels more real because when you grab the wall and use it as an anchor from which to move, it’s subconsciously becoming part of your proprioceptive model.

Understanding Proprioception

Let’s take a second here to explain proprioception because it’s a term that comes up a lot when we’re talking about tricking our bodies into thinking we’re somewhere else.

The clearest example I’ve ever seen of proprioception in action is this clip. And listen, I never thought I’d be showing you a cat clip in this series, but here we are. Watch closely as the cat approaches the table… without really thinking about it, it effortlessly moves its ear out of the way just at the right time.

This is proprioception at work. It’s your body’s model of where it is in relation to the things around you. In order for the cat to know exactly when and where to move its ear to avoid the table without even looking at it, it has to have some innate sense of the space its ear occupies and how that relates to the space the table occupies.

In the case of the cover system in Synapse, you intuitively understand that ‘when I grab this wall and move my hand to the right, my body will move to the left’.

So rather than just being a ‘thing that you see’, walls become something more than that. They become relevant to you in a more meaningful way, because you can directly engage with them to influence the position of your body. In doing so, your mind starts to pay more attention to where the walls are in relation to your body. They start to feel more real. And by extension, your own body starts to feel more present in the simulation… you feel more ‘embodied’.
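Mechanically, this kind of cover grab can be sketched as anchor-based locomotion: while the grip is held, the hand’s displacement from the grab point is applied to the body in the opposite direction. A 1D sketch with illustrative names (not nDreams’ actual code):

```python
# Sketch of hand-anchored cover movement: the grabbed wall acts as a
# fixed anchor, so moving the real hand one way drags the virtual
# body the other way by the same amount. Names are illustrative.

def anchored_body_pos(body_at_grab, hand_at_grab, hand_now):
    """Body position while gripping a wall.

    Moving the hand right of the grab point pulls the body left,
    exactly as if pulling yourself along a truly fixed object.
    """
    return body_at_grab - (hand_now - hand_at_grab)
```

The key property is that the wall never moves: the same hand motion always produces the same body motion, which is what lets the brain fold the wall into its proprioceptive model.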

Mags Out

And walls in Synapse can actually be used for more than cover. You can also use them to push magazines into your weapon.

Backing away from embodiment for just a second—this is such a cool design detail. In Inside XR Design #4 I spent a long time talking about the realistic weapon model in Half-Life: Alyx (2020). But Synapse is a run-and-gun game so the developers took a totally different approach and landed on a reloading system that’s fast paced but still engaging.

Instead of making players mess with inventory and chambering, the magazines in this game just pop out and float there. To reload, just slide them back into the weapon. It might seem silly, but it works in the game’s sci-fi context and reduces reloading complexity while maintaining much of the fun and game flow that comes with it.

And now we can see how this pairs so beautifully with the game’s cover system.

The game’s cover system takes one of your hands to use. So how can you reload? Pushing your magazine against the wall to reload your gun is the perfect solution to allow players to use both systems at the same time.

But guess what? This isn’t just a really clever design, it’s yet another way that you can engage with the wall—as if it’s actually there in front of you. You need to know whether your arm is close enough to the wall if you’re going to use it to reload. So again, your brain starts to incorporate walls and their proximity into your proprioceptive model. You start to truly sense the space between your body and the wall.
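The reload check itself can be as simple as a point-to-plane distance test: if the held magazine is pressed within a small tolerance of any wall, the reload completes, leaving the other hand free to keep its grip on cover. A sketch under assumed names and thresholds, not Synapse’s actual code:

```python
# Sketch of a wall-reload check: the magazine must be pressed within
# a few centimeters of a wall plane for the reload to trigger.
# The 3 cm tolerance and all names are illustrative assumptions.

RELOAD_CONTACT_M = 0.03

def try_wall_reload(mag_pos, wall_planes):
    """wall_planes: list of (normal, offset) for planes n.x = offset.

    Returns True if the magazine touches any wall closely enough.
    """
    for normal, offset in wall_planes:
        dist = abs(sum(n * p for n, p in zip(normal, mag_pos)) - offset)
        if dist < RELOAD_CONTACT_M:
            return True
    return False
```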

So both of these things—being able to use walls to pull yourself in and out of cover, and being able to use walls to push a magazine into your gun—make walls feel more real because you interact with them up close and in a meaningful way.

And here’s the thing. When the world around you starts to feel more real, you start to feel more convinced that you’re actually standing inside of it. That’s embodiment. And let’s remember: virtual worlds are always ‘immersive’ because they necessarily have our full attention. But embodiment goes beyond what we see—it’s about what we feel.

And when it comes to reaching out and touching the world… Synapse takes things to a whole new level with its incredible telekinesis system.

Continue on Page 2: Extend Your Reach »


Vision Pro Preview: Early Thoughts on My Time Inside Apple’s First Headset

Over the last three weeks I’ve had the chance to spend significant time in Apple Vision Pro. While my full review is still marinating, I want to share some early thoughts I’ve had while using the headset and what it means overall for the XR industry.

It has to be said that anyone who doesn’t recognize how consequential the launch of Vision Pro is fundamentally doesn’t understand the XR industry. This is the biggest thing to happen since Facebook bought Oculus in 2014.

It’s not just Vision Pro itself, it’s Apple. Everything about the device suggests the company does not consider the headset a tech demo. It’s in this for the long haul, likely with a forward-looking roadmap of at least 10 years. And even then, it’s not that Apple is magical and going to make the best stuff in the industry all the time; it’s that the things it does well are going to set the bar for others to compete against, resulting in accelerated progress for everyone. That’s something Meta has needed for a long time.

The moment feels remarkably similar to when the iPhone launched 17 years ago. At the time a handful of tech companies were making smartphones that mostly catered to enterprise users, a relatively small subset of users compared to all consumers. Then the iPhone came along, designed as a smartphone that anyone could use, and most importantly, a smartphone that people would want to use.

And while the iPhone did less than a BlackBerry at launch, it steadily caught up to the same technical capabilities. Meanwhile, BlackBerry never caught up to the same ease-of-use. Less than 10 years after the launch of the first iPhone, BlackBerry had been all but put out of the smartphone business.

Compared to Quest, Vision Pro’s ease-of-use is so very much like the original iPhone vs. the BlackBerry that it’s not even funny. Quest’s interface has always felt like a sewn-together patchwork of different ideas, offering little in the way of clarity, intuitiveness, and cohesion.

What Apple has built on Vision Pro from a software standpoint is phenomenally mature out of the box, and it works exactly like you’d expect, right down to making a text selection, sharing a photo, or watching a video. Your decade (or more) of smartphone, tablet, and PC muscle memory works in the headset, and that’s significant. Apple is not fooling around, they knew Vision Pro would be the first headset of many, so they have built a first-class software foundation for the years to come.

Luckily for Meta, it has a much more diversified business than BlackBerry did. So it’s unlikely that they’ll get pushed out of the XR space—Zuckerberg isn’t going to let that happen after all this time. But Meta’s piles of cash no longer guarantee dominance of the space. Apple’s presence will force the one most important thing that Meta has failed to do in its XR efforts: focus.

Even if there wasn’t a single native Vision Pro app on the headset at launch, it’s impressive how well most iPad and iPhone apps work on the headset right out of the box. Technically speaking, the apps don’t even realize they’re running on a headset.

As far as the iPad and iPhone apps know, you’re using a finger to control them. But in reality you’re using the headset’s quite seamless look+pinch system. Scrolling is fluid and responsive. Drag and drop works exactly like you’d expect. Pinch zoom? Easy. In a strange way it’s surprising just how normal it feels to use an iPad app on Vision Pro.

There’s more than 1 million iPad and iPhone apps which can run on Vision Pro out of the box. That means the vast majority of the apps you use every day can be used in the headset, even if the developer hasn’t created a native VisionOS app. As a testament to Apple really thinking through the technical underpinning and leveraging its existing ecosystem, apps which expect a selfie cam are instead shown a view of your Persona (your digital avatar). So apps with video calls or face-filters ‘just work’ without realizing they aren’t even looking at a real video feed.

And it’s really impressive how you can seamlessly run emulated iPad apps, flat Vision Pro apps (called Windows), and 3D Vision Pro apps (called Volumes), all in the same space, right next to each other. In fact… it’s so easy to multitask with apps on the headset that one of the first bottlenecks I’m noticing is a lack of advanced window management. It’s a good problem for the headset to have; there’s so many apps that people actually want to use—and they can run so easily side-by-side—that the software isn’t yet up to the task of organizing it all in a straightforward way.

For now apps pretty much just stay where you put them. But sometimes they get in the way of each other, or open in front of one another. I expect Apple will tackle this issue quite soon with some kind of window manager reminiscent of window management on macOS or iPadOS.

Being able to run the apps you already know and love isn’t the only benefit that Apple is extracting from its ecosystem. There are small but meaningful benefits all over the place. For instance, being able to install the same password manager that I use on my phone and computer is a game-changer. All of my credentials are secured with OpticID, and can be auto-filled on command in any app. That makes it a breeze to sign into the tools and services I use every day.

And then there’s things like Apple Pay which already knows my credit card info and shipping address. On supported apps and websites, buying something is as quick as double-clicking the digital crown to confirm the purchase. Compare that to typing your info into each individual app through a slow virtual keyboard.

And then there’s AirDrop, FaceTime, etc. It really adds up and starts to make it feel like you can do everything you want to do inside the headset without needing to take it off and go to your phone or computer.

It’s clear that Apple has spent a long time obsessing over the details of the Vision Pro user experience. Just one example: after I set up the headset it had already integrated the custom HRTF for personalized spatial audio that I had scanned for my AirPods on my iPhone a year or two ago. So without any additional step I’m getting more convincing spatial audio any time I’m using the headset.

So there’s a lot to like about the headset out of the box. It’s capable of so much of what you already do every day—and then it mixes in interesting new capabilities. But as much as you might want the headset to be your everything-device, there’s no doubt that its size and weight are bottlenecks to that urge. Vision Pro’s comfort is in the same ballpark as similar headsets in its class (though it might be a little more difficult to find the most comfortable way to wear it).

One of the most interesting things to me about Apple Vision Pro is that it shows that price isn’t what’s holding headsets back from being smaller and more comfortable (at least not up to $3,500). Apple didn’t have any kind of novel and more expensive tech to make AVP smaller than the $500 Quest 3, for instance.

There’s a path forward, but it’s going to take time. This ‘holocake’ lens prototype from Meta, which uses holographic optics, is probably the next step on the form-factor journey for MR headsets. But R&D and manufacturing breakthroughs are still needed to make it happen.

In the end, headsets aiming for all-day productivity will need to pass the “coffee test”: you can drink a full mug of coffee without bumping it into the headset. I’m not even joking about this—even if a headset is otherwise perfect, wearing something that prevents you from doing basic human things like drinking is a tough trade-off that most won’t make.

– – — – –

So there’s a smattering of thoughts I’ve had about the headset—and what it’s existence means more broadly—so far. You can expect our full Vision Pro review soon, which will include a lot more technical detail and analysis. If you’ve got questions for that full review fire away in the comments below!

The post Vision Pro Preview: Early Thoughts on My Time Inside Apple’s First Headset appeared first on Road to VR.

Quest 3 Review – A Great Headset Waiting to Reach Its Potential

Following Quest 2 almost three years to the day, Quest 3 is finally here. Meta continues its trend of building some of the best VR hardware out there, but it will be some time yet before the headset’s potential is fully revealed. Read on for our full Quest 3 review.

I wanted to start this review saying that Quest 3 feels like a real next-gen headset. And while that’s certainly true when it comes to hardware, it’ll be a little while yet before the software reaches a point that it becomes obvious to everyone. Although it might not feel like it right out of the gate, even with the added price (starting at $500 vs. Quest 2 at $300), I’m certain the benefits will feel worth it in the end.

Quest 3’s hardware is impressive, and a much larger improvement than we saw from Quest 1 to Quest 2. For the most part, you’re getting a better and cheaper Quest Pro, minus eye-tracking and face-tracking. And to put it clearly, even if Quest Pro and Quest 3 were the same price, I’d pick Quest 3.

Photo by Road to VR

Before we dive in, here’s a look at Quest 3’s specs for reference:

Resolution 2,064 × 2,208 (4.5MP) per-eye, LCD (2x)
Refresh Rate 90Hz, 120Hz (experimental)
Optics Pancake non-Fresnel
Field-of-view (claimed) 110ºH × 96ºV
Optical Adjustments Continuous IPD, stepped eye-relief (built in)
IPD Adjustment Range 53–75mm
Processor Snapdragon XR2 Gen 2
Storage 128GB, 512GB
Connectors USB-C, contact pads for optional dock charging
Weight 515g
Battery Life 1.5–3 hours
Headset Tracking Inside-out (no external beacons)
Controller Tracking Headset-tracked (headset line-of-sight needed)
Expression Tracking None
Eye Tracking None
On-board Cameras 6x external (18ppd RGB sensors 2x)
Input Touch Plus (AA battery 1x), hand-tracking, voice
Audio In-headstrap speakers, 3.5mm aux output
Microphone Yes
Pass-through View Yes (color)
Price $500 (128GB), $650 (512GB)


Even if the software isn’t fully tapping the headset’s potential yet, Meta has packed a lot of value into the Quest 3 hardware.


Photo by Road to VR

First, and perhaps most importantly, the lenses on Quest 3 are a generational improvement over Quest 2 and other headsets of the Fresnel-era. They aren’t just more compact and sharper, they also offer a noticeably wider field-of-view and have an unmatched sweet spot that extends nearly across the entire lens. That means even when you aren’t looking directly through the center of the lens, the world is still sharp. While Quest 3’s field-of-view is also objectively larger than Quest 2, the expanded sweet spot helps amplify that improvement because you can look around the scene more naturally with your eyes and less with your head.

Glare is another place that headsets often struggle, and there we also see a huge improvement with the Quest 3 lenses. Gone are the painfully obvious god-rays that you could even see in the headset’s main menu. Now only subtle glare is visible even in scenes with extreme contrast.

Resolution and Clarity

Quest 3 doesn’t have massively higher than Quest 2, but the combination of about 30% more pixels—3.5MP per-eye (1,832 × 1,920) vs. 4.5MP per-eye (2,064 × 2,208)—a much larger sweet spot, and a huge reduction in glare makes for a headset with significantly improved clarity. Other display vitals like persistence blur, chromatic aberration, pupil swim, mura, and ghosting are all top-of-class as well. And despite the increased sharpness of the lenses, there’s still functionally no screen-door effect.

Here’s a look at the resolving power of Quest 3 compared to some other headsets:

Headset Snellen Acuity Test
Quest 3 20/40
Quest Pro 20/40
Quest 2 20/50
Bigscreen Beyond 20/30
Valve Index 20/50

While Quest 3 and Quest Pro score the same here in terms of resolving power, the Snellen test lacks precision; I can say for sure that Quest 3 looks a bit sharper than Quest Pro, just not enough to reach the next Snellen tier.
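For readers curious what these Snellen scores mean in angular terms: by the standard definition, 20/20 vision corresponds to resolving letter detail about 1 arcminute wide, and a 20/N score scales linearly from there. This conversion is my addition, not something from the review:

```python
# Convert a Snellen score (20/denominator) to the angular size of the
# finest resolvable detail, in arcminutes. 20/20 is defined as 1 arcmin.
def snellen_to_arcmin(denominator, numerator=20):
    return denominator / numerator

for hmd, denom in [("Quest 3", 40), ("Quest 2", 50), ("Bigscreen Beyond", 30)]:
    print(f"{hmd}: 20/{denom} ≈ {snellen_to_arcmin(denom):.1f} arcmin detail")
```

So the jump from Quest 2 (20/50) to Quest 3 (20/40) means the finest legible detail shrinks from 2.5 to 2 arcminutes.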

While the optics of Quest 3 are also more compact than most, the form-factor isn’t radically different from Quest 2. The slightly more central center-of-gravity makes the headset feel a little less noticeable during fast head rotations, but on the whole the visual improvements are much more significant than the ergonomic ones.


Photo by Road to VR

Ergonomics feels like one of just a few places where Quest 3 doesn’t see many meaningful improvements. Even though it’s a little more compact, it weighs about the same as Quest 2, and its included soft strap is just as awful. So my recommendation remains: get an aftermarket strap for Quest 3 on day one (and one with a battery if you know you’ll use the headset often). Meta’s official Elite Strap and Elite Strap with Battery are an easy choice, but you can find third-party options of equal comfort that are more affordable. FYI: the Elite Straps are not forward or backward compatible between Quest 2 and 3.

While the form-factor of the headset hasn’t really improved, its ability to adapt to each user certainly has. Quest 3 is the most adaptable Meta headset to date, offering both continuous IPD (distance between the eyes) and notched eye-relief (distance from eye to lens) adjustments. This means that more people can dial in a good fit for the headset, giving them the best visual comfort and quality.

I was about to write “to my surprise…”—but this doesn’t actually surprise me at this point, given Meta’s MO—the Quest 3 setup either didn’t walk me through these adjustments at all, or did so so nonchalantly that I didn’t even notice. Most new users will not only not know what IPD or eye-relief does for them, but will also struggle to pick their own best setting. There should definitely be clear guidance and helpful calibration.

The dial on the bottom of Quest 3 makes it easy to adjust the IPD, but the eye-relief mechanism is rather clunky. You have to press both buttons on the inside of the facepad at the same time while pulling it out or pushing it forward. It works, but I found it incredibly iffy.


In any case, I’m happy to report that eye-relief on Quest 3 is more than just a buffer for glasses. Moving to the closest setting gave me a notably wider field-of-view than Quest 2. Here’s a look at the Quest 3 FoV:

Personal Measurements – 64mm IPD
(no glasses, measured with TestHMD 1.2)

Eye-relief setting | Absolute min (facepad removed) | Min designed | Comfortable | Max
HFOV | 106° | 104° | 100° | 86°
VFOV | 93° | 93° | 89° | 79°

And here’s how it stacks up to some other headsets:

Personal Measurements – 64mm IPD
(minimum-designed eye-relief, no glasses, measured with TestHMD 1.2)

Headset | Quest 3 | Quest Pro | Quest 2 | Bigscreen Beyond | Valve Index
HFOV | 104° | 94° | 90° | 98° | 106°
VFOV | 93° | 87° | 92° | 90° | 106°


Another meaningful upgrade on Quest 3 is the built-in audio. While on Quest 2 I always felt like I needed to have the headset at full volume (and even then the audio quality felt like a compromise), Quest 3 gets both a volume and quality boost. Now I don’t feel like every app needs to be at 100% volume. And while I’d still love better quality and spatialization from the built-in audio, Quest 3’s audio finally feels sufficient rather than an unfortunate compromise.


Photo by Road to VR

Quest 3’s new Touch Plus controllers so far feel like they work just as well as Quest 2 controllers, but with better haptics and an improved form-factor thanks to the removal of the ring. Quest 3 is also much faster to switch between hand-tracking and controller input when you set the controllers down or pick them up.


The last major change is the new Snapdragon XR2 Gen 2 chip that powers Quest 3. While ‘XR2 Gen 1’ vs. ‘XR2 Gen 2’ might not sound like a big change, the difference is significant. The new chip has 2.6x the graphical horsepower of the prior version, according to Meta. That’s a leap-and-a-half compared to the kind of chip-to-chip updates usually seen in smartphones. The CPU boost is more in line with what we’d typically expect; Meta says it’s 33% more powerful than Quest 2 at launch, alongside 30% more RAM.

Quest 3 is still essentially a smartphone in a headset in terms of computing power, so don’t expect it to match the best of what you see on PSVR 2 or PC VR, but there’s a ton of extra headroom for developers to work with.
