‘Bounce Arcade’ is Like VR Pinball for Your Fists—And Exactly the Kind of Creativity VR Needs to Thrive

Bounce Arcade, recently announced and launching this fall, looks like a unique fusion of pinball and VR, in a way that’s truly native to the medium. It’s an example of VR-native gameplay mechanics, a body of design that’s still in its infancy.

All video games trace their lineage to arcade games.

I’m not necessarily talking about games in a big cabinet, but any game which has little in the way of narrative, characters, and progression. They’re primarily built around mechanics that are just plain-old fun.

With Pong (1972) we figured out how one axis of input could work. With Pac-Man (1980) we added two axes and enemies to chase the player. Super Mario Bros (1985) figured out how we would fit a larger and more complex world onto a small screen. And Star Fox (1993) on SNES laid the groundwork for navigating 3D worlds from a third-person perspective.

It’s the mechanics that drove these games—the ones that are so fun they don’t even need narrative, characters, or progression to feel complete.

It took 21 years to get from Pong to Star Fox. And since then, games have only grown in complexity, but largely by building on core mechanics that were invented long before.

Platformers that are not conceptually distant from Super Mario Bros are still huge. So are games using third-person views and on-screen reticles for navigation and aiming.

I could go on and on with these examples, but the point is, they take time to figure out. And it’s not until you figure them out that you can create compelling games with all the other stuff on top, like narrative, characters, and progression.

It took decades of work to find these core mechanics and eventually turn them into the huge games we know and love today. But, all that work was done specifically for flatscreen games made for controllers or keyboards and mice.

When a new medium like VR comes along—with a whole new kind of input like 6DOF motion controllers—we can borrow from the flatscreen realm, but ultimately the medium needs to invent the mechanics that feel truly native to it.

Many VR games borrow too much from the world of flatscreen gaming. They don’t sufficiently answer the question ‘why play this in VR instead of on a flat screen?’. And these games tend not to find much commercial success.

Then something like Beat Saber comes along. Rhythm games have been around for a long time, but Beat Saber took the overall concept of a rhythm game and paired it with a core mechanic that is truly VR-native. The way you use your body to slice cubes in Beat Saber can’t be replicated in any medium other than VR.

Beat Saber found a core mechanic that feels great in VR. And one day that mechanic will be the foundation for a game that’s not just the arcadey expression of the mechanic, but a large and complex game instead.

Other core VR mechanics have been discovered too. Like Gorilla Tag’s unique movement—which can’t be replicated outside of VR—that turned the simple game of tag into one of Quest’s most successful titles and spawned a whole new genre of games based on this VR-native mechanic.

But there’s still so much to invent and discover when it comes to VR-native mechanics.

All of this is to say, I love seeing new and creative gameplay ideas that feel truly at home in VR. And what I’ve seen of the recently announced Bounce Arcade immediately struck me as one of those ideas.

We’ve already seen plenty of pinball machine simulations in VR. The kind where you’re literally standing in front of a pinball machine and pressing buttons to flip the flippers.

But Bounce Arcade is taking the overarching concept of pinball and truly and creatively reimagining it for VR. Your fists are the paddles and the world around you is the playing field. It’s a fresh look at what pinball even means when you’ve got the power to alter the player’s entire reality and allow them to bring their hands into the game world.

Bounce Arcade is coming to Quest sometime this fall. So far pricing and other platforms are unconfirmed.

But this isn’t to say that all VR games are destined to be arcade games. To the contrary—what I’m saying is the medium still needs to spend time experimenting and innovating on core VR-native mechanics.

Only once a sufficient number of them are discovered and refined will we start to see a real mass of larger and more complex games that feel properly at home in VR. It’s actually pretty easy with a little imagination to see how you could extend Bounce Arcade’s underlying mechanic into a much more complex and less arcade-centric game.

And honestly, I think there are many more of these VR-native mechanics already out there that simply haven’t gotten enough attention. That’s a huge reason why I’m working on the Inside XR Design video series to highlight these kinds of learnings. If you’ve read this far, I have to imagine you’re interested enough in this topic that you’d probably enjoy checking out the episodes published so far.


Hands-on: Logitech’s MX Ink for Quest Could Legitimize VR Styluses as a Whole

Over the last decade I’ve reported on and tested many different VR styluses, but none of them have actually caught on. But the new MX Ink stylus for Quest stands a real chance at legitimizing the VR stylus as a whole, thanks to its thoughtful design, strong lineup of launch apps, and tight integration with Quest’s software.

This week Logitech announced MX Ink, an officially endorsed ‘Made for Meta’ stylus supporting Quest 2 and Quest 3 (see the full announcement details here). It’s the first time Meta has allowed any other company to harness its tracking technology in a third-party product. That alone makes MX Ink unique, but there’s more that makes this the device that could legitimize VR styluses as a whole.

The first styluses are thought to have been invented five millennia ago. And there’s a reason they’ve stuck with humanity ever since: a stylus amplifies the precision with which we can point. While that seems rather simple, it makes information tasks like writing, drawing, calculating, and designing significantly more practical and useful than using our fingers alone.

So it’s not surprising that we’ve seen many attempts to bring a VR stylus to life.

Just to name a few: in 2017 an enterprising developer hacked together a chunky prototype using a Vive Tracker and a pressure-sensitive stylus tip; in 2018 a company called Massless designed its own prototype VR stylus that it hoped to bring to market; even Wacom has been toying with the idea. Hell, Logitech already made a VR stylus back in 2019… but at $750, it’s no wonder it never made it to general availability.

So what could be different about Logitech’s new MX Ink? Well for one, the price is significantly more palatable than what’s come before. The $130 price point is a pretty easy sell for professionals for whom the added precision of a stylus could actually improve their workflow.

Logitech is also smartly launching some ‘nice to have’ extras for those who are really serious about making the MX Ink part of their workflow.

There’s the Inkwell dock which, for only another $40, gives you an easy place to store and charge the stylus so it’s ready for your next use. And there’s the MX Mat, for $50, which Logitech pitches as an ideal paper-like surface to draw on with the stylus.


But more important than price or accessories is the first-party integration with Meta and the strong lineup of supported software out of the gate.

Logitech worked directly with Meta, not only to adopt Quest’s tracking technology, but also to build the stylus’ software experience right into Horizon OS. Pairing the MX Ink is just like pairing one of the headset’s own controllers, without any extra hardware or software needed. Even the stylus’ settings—which let you control things like hand selection, button bindings, and pressure curves—are baked right into the system’s own Settings menu.

It’s even got a proper ‘Meta’ button on the end (where the eraser would be), making it easy to pull up the headset’s menu.

And then there’s the strong lineup of software that will work right out of the gate. Logitech has locked in a solid swath of VR design apps for MX Ink support:

  • Adobe Substance Modeler
  • Gravity Sketch
  • PaintingVR
  • Arkio
  • Engage
  • OpenBrush
  • GestureVR
  • ShapesXR
  • Elucis by RealizeMedical

If Logitech plays its cards right, MX Ink could be the first VR stylus that really sticks the landing. So needless to say, I was intrigued to try it.

Hands-on With Logitech MX Ink for Quest


Last week I swung by Logitech’s San Jose, CA office to check out an early version of the stylus for myself. Compared to the company’s last VR stylus, the MX Ink is significantly more compact. Even so, I was impressed with the tracking.


Even with my hand covering a significant area of the stylus, there were seemingly enough IR LEDs hidden under the stylus’ shell to provide continuous tracking no matter how I held or twisted it. The company said it even put IR LEDs toward the tip of the MX Ink so it could be held like a wand or a palette knife.

Logitech says the stylus is ‘as accurate as the Quest controllers’—but that doesn’t mean it can’t be more precise. Using a stylus as a pointing device means you can use your dexterous fingers to manipulate the input position in a very fine way; far more so than twisting your wrist alone (which is what primarily drives fine controller motion).

That was obvious while I was using the MX Ink to draw and sketch directly onto a real table in front of me. The pressure-sensitive tip also made it feel natural to vary line width as needed.


I also tried using the MX Ink stylus against a whiteboard while using Quest 3’s mixed reality view. The tight latency and accuracy of the stylus really made it feel like I was leaving marks on the whiteboard. It was a whole layer of immersion that I wasn’t expecting to feel while trying the stylus.

This sense of actually leaving real marks on the whiteboard only made the next part even more mind-bending… I could lift the stylus from the surface while holding the button on the barrel and extend my drawing into the third dimension. Watching my strokes literally leap off the page like this was just plain fun.

While pressing the MX Ink against a real surface, the tip communicates the amount of pressure to the headset and thus changes the thickness of the line you draw. But when you’re using the stylus to draw in 3D, suddenly there’s no way for the system to know how much pressure you’re using, right? Actually, no; Logitech smartly made the button on the barrel of the stylus pressure-sensitive itself, so you can squeeze softer or harder to define the width of brush strokes, even when you’re drawing in the air.
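To illustrate the concept, here’s a minimal sketch of how a pressure curve might map a raw 0..1 pressure reading (from the tip or the barrel button) to stroke width. The names and numbers are mine for illustration, not Logitech’s actual API:

```python
# Hypothetical sketch of a configurable pressure curve; illustrative only.

def stroke_width(raw_pressure: float, gamma: float = 1.0,
                 min_width: float = 0.5, max_width: float = 8.0) -> float:
    """Map a 0..1 pressure reading to a brush width (mm).

    gamma < 1 widens strokes quickly under a light touch;
    gamma > 1 demands a firmer squeeze for the same width.
    """
    p = min(max(raw_pressure, 0.0), 1.0)  # clamp sensor noise
    return min_width + (p ** gamma) * (max_width - min_width)
```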

The MX Ink even includes a haptic engine for feedback. So even if you’re using it against a virtual surface, the stylus can let you know when you’re touching the canvas.

– – — – –

I’m impressed with the level of thoughtfulness in the design of MX Ink. It’s clear the company has carried over some important lessons learned from its previous experiments with VR styluses.

MX Ink has a reasonable price point, direct integration with the most popular headsets on the market, and a strong lineup of supporting apps. Logitech is giving the VR stylus—as a category—its best chance yet at really catching on.

The essential pieces are in place. The thing that will make or break this product is now likely down to how well integrated it is into the workflow of key applications. My understanding is that developers have a huge range of control over exactly how their applications will handle MX Ink. Half-hearted implementations could kill what otherwise looks like a strong product.

With MX Ink not due to launch until September, there’s still time for applications to tighten up their implementations, so we’ll have to wait to see how it all comes together.


Unpacking the VR Design Details of ‘Half-Life: Alyx’ – Inside XR Design

In Inside XR Design we examine specific examples of great VR design. Today we’re looking at the details of Half-Life: Alyx and how they add an immersive layer to the game rarely found elsewhere.

Editor’s Note: Now that we’ve rebooted our Inside XR Design series, we’re re-publishing older entries for those who missed them.

You can find the complete video below, or continue reading for an adapted text version.

Intro

Now listen, I know you’ve almost certainly heard of Half-Life: Alyx (2020); it’s one of the best VR games made to date. And there’s tons of reasons why it’s so well regarded. It’s got great graphics, fun puzzles, memorable set-pieces, an interesting story… and on and on. We all know this already.

But the scope of Alyx allows the game to go above and beyond what we usually see in VR with some awesome immersive details that really make it shine. Today I want to examine a bunch of those little details—and even if you’re an absolute master of the game, I hope you’ll find at least one thing you didn’t already know about.

Inertia Physics

First is the really smart way that Alyx handles inertia physics. Lots of VR games use inertia to give players the feeling that objects have different weights. This makes moving a small, light object feel totally different than a large, heavy one, but it usually comes with a sacrifice: larger objects become much more challenging to throw, because the player has to account for the inertia sway as they throw the object.

Alyx makes a tiny little tweak to this formula by ignoring the inertia sway only in its throwing calculation. That means if you’re trying to accurately throw a large object, you can just swing your arm and release in a way that feels natural and you’ll get an accurate throw even if you didn’t consider the object’s inertia.

This gives the game the best of both worlds—an inertia system to convey weight but without sacrificing the usability of throwing.
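Here’s a minimal sketch of the idea, purely illustrative rather than Valve’s actual code: the rendered object lags behind the hand to convey weight, but the throw is computed from the hand’s own velocity, so the sway never penalizes the release.

```python
# Illustrative only (not Valve's actual code). The rendered object lags
# behind the hand to convey weight, but throws use the hand's velocity.

def sway_position(obj_pos, hand_pos, mass, dt, stiffness=20.0):
    """Heavier objects follow the hand more slowly, producing visible sway."""
    t = min(1.0, (stiffness / mass) * dt)
    return [o + (h - o) * t for o, h in zip(obj_pos, hand_pos)]

def release_velocity(hand_velocity):
    """On release, ignore the object's lagging sway entirely and launch
    with the hand's velocity, so the throw matches the player's intent."""
    return list(hand_velocity)
```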

I love this kind of attention to detail because it makes the experience better without players realizing anything is happening.

Sound Design


When it comes to sound design, Alyx is really up there not just in terms of quality, but in detail too. One of my absolute favorite details in this game is that almost every object has a completely unique sound when being shaken. And this reads especially well because it’s spatial audio, so you’ll hear it most from the ear that’s closest to the shaken object.

This is something that no flatscreen game needs because only in VR do players have the ability to pick up practically anything in the game.

I can just imagine the sound design team looking at the game’s extensive list of props and realizing they need to come up with what a VHS tape or a… TV sounds like when shaken.

That’s a ton of work for this little detail that most people won’t notice, but it really helps keep players immersed when they pick up, say, a box of matches and hear the exact sound they would expect to hear if they shook it in real life.

Gravity Gloves In-depth

Ok so everyone knows the Gravity Gloves in Alyx are a diegetic way to give players a force pull capability so it’s easier to grab objects at a distance. And practically everyone I’ve talked to agrees they work exceptionally well. They’re not only helpful, but fun and satisfying to use.

But what exactly makes the gravity gloves perhaps the single best force-pull implementation seen in VR to date? Let’s break it down.

In most VR games, force-pull mechanics have two stages:

  1. The first, which we’ll call ‘selection’, is pointing at an object and seeing it highlighted.
  2. The second, which we’ll call ‘confirmation’, is pressing the grab button which pulls the object to your hand.

Half-Life: Alyx adds a third stage to this formula which is the key to why it works so well:

  1. First is ‘selection’, where the object glows so you know what is being targeted.
  2. The second—let’s call it ‘lock-on’—involves pulling the trigger to confirm your selection. Once you do, the selection is locked on; even if you move your hand now, the selection won’t change to any other object.
  3. The final stage, ‘confirmation’, requires not a button press but a pulling gesture to finally initiate the force pull.

Adding that extra lock-on stage to the process significantly improves reliability because it ensures that both the player and the game are on the same page before the object is pulled.
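As a rough illustration, the three-stage flow can be expressed as a tiny state machine. The names and structure here are mine, not Valve’s:

```python
# Illustrative sketch of the three-stage force-pull as a small state machine.

def launch_toward_hand(obj):
    """Stand-in for the tuned pull trajectory that curves toward the hand."""
    print(f"pulling {obj} toward hand")

SELECTING, LOCKED = "selecting", "locked"

class ForcePull:
    def __init__(self):
        self.state = SELECTING
        self.target = None

    def update(self, pointed_object, trigger_held, pull_gesture):
        if self.state == SELECTING:
            self.target = pointed_object         # stage 1: highlight follows the ray
            if trigger_held and self.target:
                self.state = LOCKED              # stage 2: selection can no longer change
        elif self.state == LOCKED:
            if not trigger_held:
                self.state = SELECTING           # let go without pulling: back to stage 1
            elif pull_gesture:
                launch_toward_hand(self.target)  # stage 3: a gesture, not a button, fires the pull
                self.state = SELECTING
```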

And it should be noted that each of these stages has distinct sounds which make it even clearer to the player what’s being selected so they know that everything is going according to their intentions.

The use of a pulling gesture makes the whole thing more immersive by making it feel like the game world is responding to your physical actions, rather than the press of a button.

There’s also a little bit of magic to the exact speed and trajectory the objects follow, like how the trajectory can shift in real-time to reach the player’s hand. Those parameters are carefully tuned to feel satisfying without feeling like the object just automatically attaches to your hand every time.

This strikes me as something that an animator may even have weighed in on to say, “how do we get that to feel just right?”

Working Wearables

It’s natural for players in VR to try to put a hat on their head when they find one, but did you know that wearing a hat protects you from barnacles? And yes, that’s the official name for those horrible creatures that stick to the ceiling.

But it’s not just hats you can wear. The game is surprisingly good about letting players wear anything that’s even vaguely hat-shaped. Like cones or even pots.

I figure this is something that Valve added after watching more than a few playtesters attempt to wear those objects on their head during development.

Speaking of wearing props, you can also wear gas masks. And the game takes this one step further… the gas masks actually work. One part of the game requires you to hold your hand up to cover your mouth to avoid breathing spores which make you cough and give away your position.

If you wear a gas mask you are equally protected, but you also get the use of both hands which gives the gas mask an advantage over covering your mouth with your hand.

The game never explicitly tells you that the gas mask will also protect you from the spores, it just lets players figure it out on their own—sort of like a functional easter egg.

Spectator View

Next up is a feature that’s easy to forget about unless you’ve spent a lot of time watching other people play Half-Life: Alyx… the game has an optional spectator interface which shows up only on the computer monitor. The interface gives viewers the exact same information that the actual player has while in the game: like, which weapons they have unlocked or equipped and how much health and resin they have. The interface even shows what items are stowed in the player’s ‘hand-pockets’.

And Valve went further than just adding an interface for spectators: they also added built-in camera smoothing, zoom levels, and even a selector to pick which eye the camera will look through.

The last one might seem like a minor detail, but because people are either left or right-eye dominant, being able to choose your dominant eye means the spectator will correctly see what you’re aiming at when you’re aiming down the scope of a gun.

Multi-modal Menu

While we’re looking at the menus here, it’s also worth noting that the game menu is primarily designed for laser pointer interaction, but it also works like a touchscreen.

While this may seem trivial today, let’s remember that Alyx was released almost four years ago(!). The foresight to offer both modalities means that whether the player’s first instinct is to touch the menu or use the laser, both choices are equally correct.

Guiding Your Eye

All key items in Alyx have subtle lights on them to draw your attention. This is basic game design stuff, but I have to say that Alyx’s approach is much less immersion breaking than many VR games where key objects are highlighted in a glaringly obvious yellow mesh.

For the pistol magazine, the game makes it clear even at a distance how many bullets are in the magazine… in fact, it does this in two different ways.

First, every bullet has a small light on it which lets you see from the side of the magazine roughly how full it is.

And then on the bottom of the magazine there’s a radial indicator that depletes as the ammo runs down.

Because this is all done with light, if the magazine is half full, it will be half as bright—making it easy for players to tell just how ‘valuable’ the magazine is with just a glance, even at a distance. Completely empty magazines emit no light so you don’t mistake them for something useful. Many players learn this affordance quickly, even without thinking much about it.
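Here’s a hypothetical sketch of how simple that affordance could be; the function and values are illustrative, not Valve’s actual implementation:

```python
# Hypothetical sketch: emissive brightness scales with the magazine's fill
# fraction, so 'how valuable is this?' reads at a glance, even at a distance.

def magazine_glow(rounds_left: int, capacity: int, max_brightness: float = 1.0) -> float:
    """Half full means half brightness; completely empty means no light at all."""
    if capacity <= 0 or rounds_left <= 0:
        return 0.0  # empty mags emit nothing, so they read as junk
    return max_brightness * (rounds_left / capacity)
```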

The takeaway here is that a game’s most commonly used items—the things players will interact with the most—should be the things that are most thoughtfully designed. Players will collect and reload literally hundreds of magazines throughout the game, so spending time to add these subtle details meaningfully improves the entire experience.



The Secret to ‘Beat Saber’s’ Fun Isn’t What You Think – Inside XR Design

Our series Inside XR Design highlights and unpacks examples of great XR design. Today we’re looking at Beat Saber (2019) and why its most essential design element can be used to make great VR games that have nothing to do with music or rhythm.

You can find the complete video below, or continue reading for an adapted text version.

More Than Music

Welcome back to another episode of Inside XR Design. Now listen, I’m going to say something that doesn’t seem to make any sense at all. But by the end of this article, I guarantee you’ll understand exactly what I’m talking about.

Beat Saber… is not a rhythm game.

Now just wait a second before you call me insane.

Beat Saber has music, and it has rhythm, yes. But the defining characteristic of a rhythm game is not just music, but also a scoring system that’s based on timing. The better your timing, the higher your score.

Now here’s the part most people don’t actually realize. Beat Saber doesn’t have any timing component to its scoring system.

That’s right. You could reach forward and chop a block right as it comes into range. Or you could hit it at the last second before it goes completely behind you, and in both cases you could earn the same number of points.

So if Beat Saber scoring isn’t about timing, then how does it work? The scoring system is actually based on motion. In fact, it’s designed to make you move in specific ways if you want the highest score.

The key scoring factors are how broad your swing is and how even your cut is through the center of the block. So Beat Saber throws these cubes at you and challenges you to swing broadly and precisely.
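As a rough sketch, a motion-based score in the spirit of Beat Saber’s widely documented breakdown (roughly 70 points for the wind-up, 30 for the follow-through, and 15 for cut accuracy) might look like this. The exact numbers are illustrative, and notice there’s no timing term anywhere:

```python
# Rough, illustrative scoring function; note the absence of any timing term.

def block_score(pre_swing_deg: float, follow_deg: float, center_offset: float) -> int:
    pre = 70 * min(pre_swing_deg / 100.0, 1.0)  # reward a broad wind-up (caps at 100 degrees)
    post = 30 * min(follow_deg / 60.0, 1.0)     # reward following through (caps at 60 degrees)
    acc = 15 * max(1.0 - center_offset, 0.0)    # reward slicing the center (0.0 = dead center)
    return round(pre + post + acc)
```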

And while Beat Saber has music that certainly helps you know when to move, more than a rhythm game… it’s a motion game.

Specifically, Beat Saber is built around a VR design concept that I like to call ‘Instructed Motion’, which is when a game asks you to move your body in specific ways.

And I’m going to make the case that Instructed Motion is a design concept that can be completely separated from games with music. That is to say: the thing that makes Beat Saber so fun can be used to design great VR games that have nothing to do with music or rhythm.

Instructed Motion

Ok so to understand how you can use Instructed Motion in a game that’s not music-based, let’s take a look at Until You Fall (2020) from developer Schell Games. This is not remotely a rhythm game—although it has an awesome soundtrack—but it uses the same Instructed Motion concept that makes Beat Saber so much fun.

While many VR combat games use physics-based systems that allow players to approach combat with arbitrary motions, Until You Fall is built from the ground up with a notion of how it wants players to move.

And before you say that physics-based VR combat is objectively the better choice in all cases, I want you to consider what Beat Saber would be like if players could cut blocks in any direction they wanted at all times.

Sure, you would still be cutting blocks to music, and yet, it would be significantly harder to find the fun and flow that makes the game feel so great. Beat Saber uses intentional patterns that cause players to move in ways that are fluid and enjoyable. Without the arrows, player movements would be chaotic and they’d be flailing randomly.

So just like Beat Saber benefits by guiding a player to make motions that are particularly satisfying, combat in VR can benefit too. In the case of Until You Fall, the game uses Instructed Motion not only to make players move a certain way, but also to make them feel a certain way.

When it comes to blocking, players feel vulnerable because they are forced into a defensive position. Unlike a physics-based combat game where you can always decide when to hit back, enemies in Until You Fall have specific attack phases, and you must block while they happen; otherwise you risk taking a hit and losing one of just three hit points.

Thanks to this approach, the game can adjust the intensity the player feels by varying the number, position, and speed of blocks that must be made. Weak enemies might hit slowly and without much variation in their attacks. While strong enemies will send a flurry of attacks that make the player really feel like they’re under pressure.

This gives the developer very precise control over the intensity, challenge, and feeling of each encounter. And it’s that control that makes Instructed Motion such a useful tool.
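To make that concrete, here’s a hypothetical sketch of how an encounter might be authored under Instructed Motion; the attack pattern is just a scripted list of blocks the player must make, so intensity is tuned directly rather than left to physics:

```python
# Hypothetical encounter data for an Instructed Motion game; names are mine,
# not Schell Games'. Intensity is authored, not emergent.

from dataclasses import dataclass

@dataclass
class Attack:
    block_angle_deg: float  # where the player must hold their weapon to block
    windup_s: float         # how much reaction time the player gets

WEAK_ENEMY = [Attack(90, 1.2), Attack(90, 1.2)]                     # slow, repetitive
STRONG_ENEMY = [Attack(45, 0.5), Attack(135, 0.4), Attack(0, 0.3)]  # fast, varied flurry
```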

Dodging is similar to blocking, but instead of raising your weapon to the indicated position, you need to move your whole body out of the way. And this feels completely different from just blocking.

While some VR combat games would let the player ‘dodge’ just by moving their thumbstick to slide out of the way, Until You Fall uses Instructed Motion to make the act of dodging much more physically engaging.

And when it comes to attacking, players can squeeze in hits wherever they can until an enemy’s shield is broken, which then opens an opportunity to deal a bunch of damage.

And while another VR game might have just left this opening for players to hit the enemy as many times as they can, Until You Fall uses Instructed Motion to ask players to swing in specific ways.

Swinging in wide arcs and along particular angles deals the most damage and makes you move in a way that feels really powerful and confident. It’s like the opposite feeling of when you’re under attack. It really feels great when you land all the combo hits.



Vision Pro is Hands-down the Best Movie Experience You Can Have on a Plane

After an eight-hour flight, I can say that Apple has largely nailed the Vision Pro use-case of movie watching on a plane. But a few key improvements stand to make it more widely appealing.

Nobody looks forward to an eight-hour flight. Whether it’s sleeping or reading or working, people want a way to pass the time and distract themselves from the noisy cabin, turbulence, and the general feeling of being packed into a metal tube like sardines.

The seat-back screen—with its selection of movies and TV shows—offers minor refuge from this chaotic environment.

I’m someone who really appreciates ‘cinematic spectacle’—you know, the movies that have the direction and action that really deserve a big screen and great audio.

While the movie selection on a plane is usually not half bad, over the years I have regularly avoided watching some movies I actually wanted to watch, because I felt they deserved much more than the experience I’d get from a small, low quality seat-back screen.

If only I could somehow bring my own movie theater on the plane.

Well, it turns out that’s a thing now.

Vision Pro on a Plane

Using a Vision Pro combined with AirPods Pro 2 on an international flight was a phenomenal viewing experience, one that can reasonably be described as bringing your own movie theater onto the plane.

While there’s still some obvious ways to improve the experience of using the headset on a plane, I was blown away at how it managed to make me practically unaware of the plane I was on.

This whole thing really only works well because Apple has done a few things to make sure the use-case is not just theoretical, but actually considered from end-to-end.

For one, Vision Pro has a special tracking mode called Travel Mode (not to be confused with Airplane Mode) which allows the headset to keep the floating screen locked in place in front of you even though the airplane is moving. Without it, the headset would detect the motions of the plane and cause the screen to go flying off behind you at worst, or slowly drift out of place at best.
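Apple hasn’t published how Travel Mode works under the hood, but one plausible approach is to discount inertial (IMU) motion whenever it disagrees with what the cameras see, since the cabin’s visual features stay put even as the plane accelerates. A purely speculative sketch:

```python
# Speculative sketch (Apple hasn't detailed Travel Mode's internals): as the
# IMU and camera estimates disagree, trust the cabin's visual features more
# and the inertial translation less.

def fused_position(visual_pos, imu_pos, disagreement, threshold=0.05):
    """Blend toward the camera-based pose as IMU/visual disagreement grows."""
    w = min(disagreement / threshold, 1.0)  # 0 = agreement, 1 = plane is clearly moving
    return [v * w + i * (1.0 - w) for v, i in zip(visual_pos, imu_pos)]
```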

Travel Mode managed to keep the screen perfectly locked in place in front of me, with no drift throughout the entire duration of the movie. I put the screen out in front of me and made it 20 feet large.

This would have otherwise created a stereoscopic disparity by going ‘through’ the seats in front of me, but turning Vision Pro’s digital crown to add an immersive backdrop behind the screen (which fades out to the passthrough view at the edges) worked perfectly to prevent that. It ended up looking like a soft portal to another dimension was open right in front of me… with a huge TV just on the other side.

Then there’s simply the quality of the display. When it comes to movie viewing, it’s not just resolution that matters. The headset’s HDR capability combined with micro OLED (which offers true blacks) really makes videos shine.

But none of this would matter if it wasn’t easy to find and transfer high quality video content onto the headset.

Luckily it was as simple as opening the Apple TV app before my flight where I downloaded Mad Max: Fury Road (2015)—at 4K resolution with surround sound, HDR, and in 3D—for offline viewing on my headset.

Lost in the Best Way

What’s crazy is that despite being stuck in a plane in an economy seat, this was the best way I’ve ever watched Mad Max on any screen. The quality was great. The 3D is better than what you get in a movie theater, and so is the contrast. Using AirPods Pro 2 also gave me a really impressive audio experience, and I couldn’t believe how well the noise cancellation isolated me from the noise of the plane.

With a high quality video on a huge screen, great sound with noise cancellation, and a movie with constant action, I was lost in an audio-visual reality that practically made me forget I was on a plane. In fact I have to admit that I was so lost in the film that I forgot to capture any screenshots for this article!

But I wasn’t completely unaware… on purpose. I didn’t dial the immersive environment up to 100% (which would have completely surrounded me and made it look like I wasn’t in the plane at all), which meant I could still look off to the side and see what was happening in the cabin so I didn’t need to worry that I’d miss a drink when the flight attendants came by.

Not for Everyone (yet)

The movie watching experience I had with Vision Pro on the plane was vastly better than what I’ve ever had from a seatback screen or a laptop.

But it’s not a perfect experience, and there are still some things that need to be improved before everyone will want to watch movies on a plane this way.

First are the obvious things. Vision Pro is big, and even bigger when it’s in a travel case. At this price, it’s not the kind of headset you’re just going to squeeze into a backpack without any protection. The headset in its travel case took up like 80% of the space in the backpack I carried onto the plane.

When I was ready to pull the headset out, it was fairly clunky to pull the case out of my backpack, unzip and fold it open in my lap, then pull out the headset and battery before getting the headset plugged in and putting the case back under my seat. In the cramped space of an economy seat, it’s a bit of a juggling act.

The only real fix for this is a smaller and more affordable headset. And even better if they can eventually ditch the battery pack. But in the interim, I could easily see an airline offering Vision Pro headsets built into a compartment in first class seats. Not only could these be permanently powered through a tether, but passengers wouldn’t need to carry a bulky case with them onto the plane to get a great movie watching experience.

Although hand-tracking worked incredibly well considering how dark the cabin was, Vision Pro would occasionally give me a ‘Tracking Lost’ message when I shuffled around a little too much—likely a limitation of Travel Mode. Luckily Apple thoughtfully pauses the movie when this happens, and in three or four seconds the tracking would come back and the movie would start playing again.

This happened a handful of times as I watched the movie. Because I understand the tech and the challenge of tracking the headset in this worst-case environment, it didn’t bother me that much. But for a normal person this would probably feel like quite a disruption to the movie experience if it happened multiple times.

Visual and audio isolation is the point if you’re using a headset on a plane, but this can make it hard for someone to get your attention. Passthrough is of course helpful here, but the field-of-view is tighter than your natural field-of-view, making it harder to see things out of the corner of your eye. This makes it more difficult for someone to get your attention (like a fellow passenger who wants to politely interrupt you so they can get out of the seats and to the bathroom).

And of course there’s battery life. After watching the full two hours of Mad Max: Fury Road, I was left with 35% battery on Vision Pro. Although that means I had another hour to squeeze in a show or two, only being able to watch one full length movie on an eight hour flight is an obvious and unfortunate limitation.

And yes I could have brought a big external battery and plugged it into Vision Pro’s battery to extend the runtime, but now we’re talking about adding more bulk, wires, and juggling to the equation.

Personally I was willing to put up with these various hassles to watch a movie with excellent audio and visual quality on a plane. And I’ll do it again.

But I recognize that not everyone cares that much about what a movie looks and sounds like. For those people, Vision Pro is just not convenient enough for the value it would bring them. But once it gets smaller (and loses the battery pack), this use-case will become appealing for a much larger group of people.


‘Silent Slayer’ Preview – Dr. Van Helsing’s Deadly Game of Operation

I went hands-on with Silent Slayer: Vault of the Vampire, an upcoming horror-puzzle for Quest from Schell Games that tasks you with defusing various arcane traps protecting a coven of sleeping vampires. Much like the studio’s pioneering VR puzzle franchise I Expect You to Die, any false move means certain death, but you’ll need to think twice before fumbling your trusty vampire-busting tools since there’s always a jump scare waiting for you on the other side of inevitable failure.

In my preview of Silent Slayer, I got a chance to play through the first three levels of the game, which are basically tutorials that introduce the world, your growing assortment of tools, and three of the coven’s vampire foes. In total there are apparently nine levels, although I haven’t set foot outside of the third, so I can only give you an accurate impression of what the first 30-ish minutes of the game has to offer.

Like I Expect You to Die, the studio’s upcoming horror-puzzle is played equally well standing up or sitting down, requiring little to no room-scale movement on your quest to play what is essentially a spooky version of the kids’ board game Operation, which similarly tasks you with precisely manipulating little doohickeys with the utmost care to not trip the metaphorical buzzer—or in Silent Slayer’s case, a screaming vampire.

Before the fun begins though, you’re first tasked with reassembling a sort of totem inscribed with the crest of your next enemy, called a ‘Bind Stone’.


The broken stones give a few clues on how they’re put back together, although you may be scratching your head a bit as you follow broken contours and match edges to reveal different geometric forms to unlock each sequential level. The stone could be a pyramid, a prism, or anything, making for an interesting little roadblock of a puzzle that forces you to pay close attention to detail—an important skill you’ll learn once you’re face-first with the blood sucker du jour.

And back at your home base, you’re also given a talking book which not only narrates the game’s story, but provides detail on every vampire, and every tool given to you for each mission. More on that later.

The real meat of the game though comes when you’re transported to your target and put in front of the ghoul’s closed coffin, which features a few initial mechanisms to undo before you can get to the stabby bit. You’ll need to gingerly pull out locking crossbars, slowly manipulate keys, and pull out nails with a provided mini-crowbar—the latter of which requires you to pry up nails just enough so you can grip them with your free hand. Go a little too far, and the nail will fall, alerting the vampire inside and raising his awareness bar.

Once you’ve opened the top bit of the coffin carefully, keeping quiet and being very precise is the name of the game. Of course, your bookish pal is there to lend a hand, but also adds some color commentary on how you need to hurry up, and what to watch out for.


Using the game’s various physics-based tools brings a lot of solidity and gravity to every move. You’ll use things like clippers to sever tripwires, a heart-detection tool to mark where the vampire’s heart lies, and your trusty stake to pierce the next protective shell. Even that last bit can be a challenge though, as shown by my less-than-precise stab seen above.

If you can make it that far, you’ll be left with two more tasks—at least as far as I know from playing three levels. Trace the vampire’s crest in the air to deactivate the final, unseen trap, and stab the sucker right through the heart. Job done.

From a technical standpoint, Silent Slayer is a visually engrossing and well-refined game that totally fits in with the high production value you see in I Expect You to Die. I still have a lot to learn about the game though, as some previously released images reveal a significant ramp in difficulty with promises of a much higher density of traps and corresponding tools than I experienced in my hands-on. Those look like a lot of keys, which means a lot of very pensive inserting and turning. That image below also shows a long pry bar, which I imagine will mean I have to be super careful with some far away nails.

Image courtesy Schell Games

That said, jump scares weren’t extremely terrifying, since you always know they’re coming after a major screw up. That’s just a piece of the overall puzzle though, which thus far has been a fun experience in learning how each trap works, and finding out just how reactive the world really is. Seriously, if you put down a pair of clippers on your workbench too indelicately, you’ll make a noise and alert the undead within.

I’m also looking forward to learning more about the overarching story, which I hope matures throughout the game’s nine levels. I can’t say I was paying too much attention to the backstory during my playthrough of the first three levels, as I was busy learning how to work the game’s various tools, which are doled out as you move to tougher vampires.

In all, Silent Slayer appears to be everything it says on the tin, although I’m really hoping it tosses some gratifying twists my way, as looking at the map presented to you in the book makes it feel like just a little too linear an experience so far. You can read more about my impressions in the full review, which ought to be out sometime this summer when the game launches on Quest 2/3/Pro. In the meantime, you can wishlist the game on Quest here, currently priced at a 10% discount off its regular $20 price tag.


This Clever Immersion Hack Makes VR Feel More Real – Inside XR Design

In Inside XR Design we examine specific examples of great VR design. Today we’re looking at the clever design of Red Matter 2’s ‘grabber tools’ and the many ways that they contribute to immersion.

Editor’s Note: Now that we’ve rebooted our Inside XR Design series, we’re re-publishing older entries for those who missed them.

You can find the complete video below, or continue reading for an adapted text version.

Intro

Today we’re going to talk about Red Matter 2 (2022), an adventure puzzle game set in a retro-future sci-fi world. The game is full of great VR design, but those paying close attention will know that some of its innovations were actually pioneered all the way back in 2018 with the release of the original Red Matter. But hey, that’s why we’re making this video series—there’s incredible VR design out there that everyone can learn from.

We’re going to look at Red Matter 2’s ingenious grabber tools, and the surprising number of ways they contribute to immersion.

What You See is What You Get

At first glance, the grabber tools in Red Matter 2 might just look like sci-fi set-dressing, but they are so much more than that.

At a basic level, the grabber tools take on the shape of the user’s controller. If you’re playing on Quest, Index, or PSVR 2, you’ll see a custom grabber tool that matches the shape of your specific controller.

First and foremost, this means that the player’s in-game hand pose matches their actual hand pose, as does the feeling of holding something in their hand. The shape you see in-game even matches the center of gravity as you feel it in your real hand.

Compare that to most VR games which show an open hand pose and nothing in your hand by default… that creates a disconnect between what you see in VR and what you actually feel in your hand.

And of course because you’re holding a tool that looks just like your controller, you can look down to see all the buttons and what they do.

I don’t know about you, but I’ve been using VR for years now, and I still couldn’t reliably tell you off the top of my head which button is the Y button on a VR controller. Is it on the left or right controller? Top or bottom button? Take your own guess in the comments and then let us know if you got it right!

Being able to look down and reference the buttons—and which ones your finger is touching at any given moment—means players can always get an instant reminder of the controls without breaking immersion by opening a game menu or peeking out of their headset to see which button is where.

This is what’s called a diegetic interface—that’s an interface that’s contextualized within the game world, instead of some kind of floating text box that isn’t actually supposed to exist as part of the game’s narrative.

In fact, you’ll notice that there’s absolutely no on-screen interface in the footage you see from Red Matter 2. And that’s not because I had access to some special debug mode for filming. It’s by design.

When I spoke with Red Matter 2 Game Director Norman Schaar, he told me, “I personally detest UI—quite passionately, in fact! In my mind, the best UI is no UI at all.”

Schaar also told me that a goal of Red Matter 2’s design is to keep the player immersed at all times.

So it’s not surprising that we also see the grabber tools used as a literal interface within the game, allowing you to physically connect to terminals to gather information. To the player this feels like a believable way that someone would interact with the game’s world—under the surface we’re actually just looking at a clever and immersive way of replacing the ‘press X to interact’ mechanics that are common in flat games.

The game’s grabber tools do even more for immersion than just replicating the feel of a controller in your hand or acting as a diegetic interface in the game. Crucially, they also replicate the limited interaction fidelity that players actually have in VR.

Coarse Hand Input

So let me break this down. In most VR games when you look at your hands you see… a human hand. That hand of course is supposed to represent your hand. But, there’s a big disconnect between what your real hands are capable of and what the virtual hands can do. Your real hands each have five fingers and can dexterously manipulate objects in ways that even today’s most advanced robots have trouble replicating.

So while your real hand has five fingers to grab and manipulate objects, your virtual hand essentially only has one point of input—a single point with which to grab objects.

If you think about it, the grabber tool in Red Matter 2 exactly represents this single point of input to the player. Diegetically, it’s obvious upon looking at the tool that you can’t manipulate the fingers, so your only option is to ‘grab’ at a single point.

That’s a long way of saying that the grabber tools in Red Matter 2 reflect the coarse hand input that’s actually available to us in VR, instead of showing us a virtual hand with lots of fingers that we can’t actually use.

So, in Red Matter 2, the grabber tools contextualize the inability to use our fingers. The result is that instead of feeling silly about having to rotate and manipulate objects in somewhat strange ways, you actually feel like you’re learning how to deftly operate these futuristic tools.

Immersion Insulation Gap

And believe it or not, there’s still more to say about why Red Matter 2’s grabber tools are so freaking smart.

Physics interactions are a huge part of the game, and the grabber tools again work to maintain immersion when handling objects. Like many VR games, Red Matter 2 uses an inertia-like system to imply the weight of an object in your hand. Small objects move quickly and easily, while large objects are sluggish and their inertia fights against your movement.

Rather than imagining the force our hands would feel when moving these virtual objects, the grabber tools create a sort of immersion insulation gap by providing a mechanical pivot point between the tool and the object.

This visually ‘explains’ why we can’t feel the forces of the object against our fingers, especially when the object is very heavy. The disconnect between the object and our hand—with the grabber tool as the insulator in the middle—alleviates some of the expectation of the forces that we’d normally feel in real life, thereby preserving immersion just a little bit more.
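For the curious, here’s an illustrative take on that kind of inertia system (not the game’s actual code): held objects chase the grab point on a mass-dependent spring, so heavy things visibly lag and fight your motion while the grabber tool visually excuses the missing force feedback.

```python
# Illustrative only: one integration step of a damped spring pulling a held
# object toward the grab point; heavier objects respond more sluggishly.

def follow_step(obj_pos, obj_vel, grab_pos, mass, dt, k=60.0, damping=8.0):
    """Advance the held object's position and velocity by one timestep."""
    accel = [(g - o) * (k / mass) - v * damping
             for o, v, g in zip(obj_pos, obj_vel, grab_pos)]
    vel = [v + a * dt for v, a in zip(obj_vel, accel)]
    pos = [o + v * dt for o, v in zip(obj_pos, vel)]
    return pos, vel
```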

Unassuming Inventory

And if it wasn’t clear already, the grabber tools are actually… your inventory. Not only do they store all of your tools—like the flashlight, hacking tool, and your gun—you can even use them to temporarily stow objects. Handling inventory this way means that players can never accidentally drop or lose their tools, which is an issue we see in lots of other VR games, even those which use ‘holsters’ to hold things.

Inhuman Hands

And last but not least…the grabber tools can actually do some interesting things that our hands can’t. For example, the rotating grabber actually makes the motion of turning wheels like this one easier than doing it with two normal hands.

It’s no coincidence that the design of the grabber tools in Red Matter 2 is so smartly thought through… after all, the game is all about interacting with the virtual world around you… so it makes sense that the main way in which players interact with the world would be carefully considered.

To take full advantage of the grabbers, the developers built a wide variety of detailed objects for the game which are consistently interactive. You can pick up pretty much anything that looks like you should be able to.

And here’s a great little detail that I love to see: in cases where things aren’t interactive, all you have to do is not imply that they are! Here in Red Matter 2 the developers simply removed handles from this cabinet… a clear but non-intrusive way to tell players it can’t be opened.

Somewhat uniquely to VR, just seeing cool stuff up close like it’s right in front of you can be a rewarding experience all on its own. To that end, Red Matter 2 makes a conscious effort to sprinkle in a handful of visually interesting objects, whether it’s this resin eyeball, papers with reactive physics, or this incredible scene where you watch your weapon form from hundreds of little balls right in your hand.

– – — – –

Red Matter 2’s grabber tool design is so beneficial to the game’s overall immersion that, frankly, I’m surprised we haven’t seen this sort of thing become more common in VR games.

If you want to check all of this out for yourself, you can find Red Matter 2 on Quest, PSVR 2, and PC VR. Enjoyed this breakdown? Check out the rest of our Inside XR Design series and our Insights & Artwork series.

And if you’re still reading, how about dropping a comment to let us know which game or app we should cover next?


Hands-on: Apple Upgrades Personas for True Face-to-face Chats on Vision Pro

Apple today released ‘Spatial Personas’ in public beta on Vision Pro. The newly upgraded avatar system can now bring people right into your room. We got an early look.

Much has been said about Apple’s Persona avatar system for Vision Pro. Whether you find them uncanny or passable, one thing is certain: it’s the most photorealistic real-time avatar system built into any headset available today. And now Personas is getting upgraded with ‘Spatial Personas’.

But weren’t Personas already ‘spatial’? Let me explain.

Sorta Spatial

At launch, the Persona system allowed users to scan their faces into the headset to create a digital identity that looks and moves like the user, thanks to the bevy of sensors in Vision Pro. When doing a FaceTime call with one or more other Vision Pro users, each Persona’s head, shoulders, and hands would be shown inside a floating box.


While this could feel like face-to-face talking at times, the fact that they were contained within a frame (which you can move or resize like any other window) made it feel like they weren’t actually standing right next to you. And that’s not just because of the frame, but also because you weren’t actually sharing the same space with them—it’s not like they could walk right up to you for a high-five, because they’d be stuck in the window on your screen.

Face-to-face

Now with Spatial Personas (released in beta today on the latest version of visionOS), each person’s avatar is rendered in a shared space without the frame. When I say ‘shared space’, I mean that if someone takes a step toward me in their room, I actually see them come one step closer to me.

Previously the frame made it feel sort of like you were doing a 3D video chat. Now with the shared space and no frame, it really feels like you’re standing right next to each other. It’s the ‘hang out on the same couch’ or ‘gather around the same table’ experience that wasn’t actually possible on Vision Pro at launch.


And it’s really quite compelling. I got a sneak peek at the new system in a Vision Pro FaceTime call with four people (though up to five are supported total), all using Spatial Personas. You’ll still only see their head, shoulders, and hands but now it really feels like a huddle instead of a 3D video chat. It feels much more personal.

Spatial Personas Are Opt-in

To be clear, the ‘video chat’ version of Personas (with the frame) still exists. In fact, it’s the default way that avatars are shown when a FaceTime call is started. Switching to a Spatial Persona requires hitting a button on the FaceTime menu.

And while this might seem like a strange choice, I actually think there’s something to it.

On the one hand, the default ‘FaceTime in Vision Pro’ experience feels like a video chat. In everyday business we’re all pretty used to seeing someone else on the other side of a webcam by now. And even though this is more personal than an audio-only call, it’s still a step away from actually meeting with someone in person.

Spatial Personas is more like actually meeting up in person, since you can actually feel the interpersonal space between you and the other people in this shared space. If they walk up and get a little too close, you’ll truly feel it, the same way you would if someone stood too close to you in real life.

So it’s nice to have both of these options. I can ‘video chat’ with someone with the regular mode, or I can essentially invite them into my space if the situation calls for a more personal meeting.

And Spatial Personas aren’t just for chatting. Just like regular Personas, you can use SharePlay while on FaceTime to watch movies and play games together (provided you both have a supported app installed).

Take Freeform for instance, Apple’s collaborative digital whiteboard app. If you launch Freeform while on a FaceTime call with Spatial Personas, everyone else will be asked to join the app, which will then load everyone in front of the whiteboard.

Everything is synchronized too. Anyone else in the call can see what you’ve put on the whiteboard and watch in real time as you add new photos or draw annotations. And just as easily, anyone can physically walk up to the board and interact with it themselves.

When it comes to shared movie viewing on Apple TV on Vision Pro, Spatial Personas unlock the feeling of sitting on the same couch together, which wasn’t quite possible with the headset at launch. Now when you watch a movie with your friends you’ll be sitting shoulder to shoulder with them, which feels very different than having a window with their face in it floating near the video you’re watching.

It’s possible to stream many flat apps to anyone in the FaceTime call while using Spatial Personas, but for 3D or interactive content developers will need to specially implement the feature.

That’s somewhat problematic though because it’s difficult to know exactly which apps support Spatial Personas or even SharePlay for that matter. As of now, you have to scroll all the way to the bottom of an app’s page to see if it supports SharePlay (unless the developer mentions it in the app’s description). And even then this doesn’t necessarily mean it supports Spatial Personas.

The Little Details

Apple also thought through some smaller details for Spatial Personas, perhaps the most interesting of which is ‘locomotion’.

Room-scale locomotion is essentially the default. If you want to move closer to a person or app… you just physically walk over to it. But what happens if it’s outside the bounds of your physical space? Well, instead of directly moving yourself virtually, you can actually move the whole shared space closer or further from you.

You can do this any time, in any app, and everyone else will see your new position reflected within their space, keeping everything synchronized.

Apple also made it so that when two Spatial Personas get too close together, they will temporarily revert to just looking like a floating contact photo. I think this is probably because they want to avoid possible harassment or trolling (i.e. you want to annoy someone so you phase your virtual hand right through their virtual face, which is uncomfortable both visually and from an interpersonal space standpoint).

The headset’s excellent spatial audio is of course included by default, so everyone sounds like they’re coming from wherever they’re standing in the room, and their voices actually sound like they’re in your room (based on the headset’s estimate of what the acoustics should sound like). And if you move to a fully immersive space like an ‘environment’, the spatial audio transitions to that new acoustic environment—so for instance you can hear people faintly echoing in the Joshua Tree environment because of all the rock surfaces nearby. Hearing the acoustics fade from being inside your own room to being ‘outside’ in an environment is a subtle bit of magic.

Image courtesy Apple

And last but not least, it’s possible to have a mixed group of FaceTime participants. For instance, you could have someone on an iPhone, someone on an Android tablet (yes, you can FaceTime with people on non-Apple devices), one person appearing as a normal Persona, and another as a Spatial Persona, all at once. SharePlay will also work between those formats (except on non-Apple devices) as long as the app supports it. For apps that are Vision Pro native, the iPhone user would get a notification that their device isn’t supported.

– – — – –

Spatial Personas is a big upgrade to Apple’s avatar system, but the company maintains the whole Persona system is still in ‘beta’. Presumably that means there are more improvements yet to come.


‘Max Mustard’ Review – An ‘Astro Bot’ Style VR Platformer That Cuts the Mustard

Max Mustard may be a bit of a curveball when it comes to names, but this traditional 3D platformer reimagined for VR delivers in nearly every other way, serving up some very Astro Bot Rescue Mission (2018) and Lucky’s Tale (2016) vibes in the process.

Max Mustard Details:

Available On: Quest 2/3/Pro (coming later to Steam & PSVR 2)
Reviewed On: Quest 3
Release Date: March 21st, 2024
Price: $30
Developer: Toast Interactive

Gameplay

Max Mustard isn’t reinventing the wheel here: it’s a solid, extremely well-built 3D platformer that, for all its positives, is a pretty standard experience overall if you’ve played any 3D platformer in the past 30 years, flatscreen or otherwise.

That’s probably the most negative thing I’ll say about this plucky little adventure, which tasks you with guiding the eponymous rocket-boot-clad companion through a world of fairly easy enemies, trickier environmental hazards, and four boss encounters that follow the well-worn ‘hurt it three times and it dies’ orthodoxy.

Image courtesy Toast Interactive

While the story is fairly forgettable—delivered almost entirely through letters that pop up at the end of levels—the action rarely disappoints, as you’re served up straight shots through 40 bespoke levels, many of which harken back to the Super Mario titles from the late ’80s and early ’90s.

That said, there isn’t a ton of enemy variety: regardless of movement or attack style, all baddies take just a single bonk on the head to kill, making enemies less interesting than the admittedly very cool environmental gadgets you start encountering around the second (of four) worlds. Those fun and inventive moving platforms and increasingly difficult environmental traps are the real stars of the show.

Gameplay clip

And if you haven’t noticed from the clip above, Max Mustard is unabashedly a love letter to platformers past and present: Crash Bandicoot, Super Mario World, and the more recent Super Mario 3D Land, but also the headlining VR platformers of today, including the illustrious Astro Bot Rescue Mission on PSVR and Lucky’s Tale on PC VR, PSVR, and Quest. With its level of fit and finish and its first-person interaction (more on that below), you might even think of Max Mustard as the Astro Bot of the Quest platform.

And like those platformers from years past, Max Mustard also offers up the familiar overworld map that takes you linearly to the final boss battle, which (no spoilers!) satisfyingly puts together all of the skills you learned throughout the game.

Overworld map | Image captured by Road to VR

Along the way you’ll find minigames and the occasional shop where you can spend coins on upgrades, such as extra hearts, coin bonuses, and new combat moves. You’ll want (but probably not really need) those new moves too, as levels start to ramp up in difficulty around world three, which introduces some challenging environmental obstacles: boxes that disappear and reappear to the beat of the game’s soundtrack, torrents of cannonballs, one-use jump pads, and more. An extra heart, a better attack move, or rocket boots that damage enemies are all neat bonuses that help out.

You wouldn’t be far off in calling Max Mustard a ‘spiritual successor’ to Sony’s Astro Bot, because like Astro Bot, every so often you’re given first-person gadgets to use in certain levels, like a dart gun and a fan gun, with the dart gun making the biggest impact throughout the game. Here I am blasting at incoming rockets from the game’s tutorial boss:

Clip captured by Road to VR

Still, I wish the first-person gadgets were better integrated into regular levels and had more variation overall, considering how cool they can be. You do, however, get the chance to hone your shooting skills in minigame challenges, where you can earn coins to spend in the shop, as well as extra ‘mudpups’, which are littered throughout regular levels and act as a sort of secondary currency used to unlock levels as you move forward.

As for enemies, regular baddies don’t put up much of a challenge, but the game’s four main boss battles are significantly more interesting, even if each stays very loyal to well-worn platforming tropes. That said, it’s hard not to smile at just how well Max Mustard nails the aesthetic and feel of basically everything.

Max Mustard took me around five hours to complete, although I took it pretty slow because I wanted to collect all three mudpups found in each level. You don’t need to be a completionist to get through the game with ease though, in which case it could take you three to four hours overall.

Immersion

Max Mustard is stupid cute, and offers lots of level variation in both functional design and overall feel. Here’s me using the fan gun to suck up enemies and errant coins after splashing down into the water—the sort of totally unexpected one-off level transition you’ll experience throughout.

Clip captured by Road to VR

That said, first-person interactions are comparatively rare in Max Mustard, so you’ll be bopping around as Max most of the time instead of dealing with enemies like you see in the clip above. That puts increased importance on the visual and functional aspects of levels, which are thankfully so rock solid that it’s easy to snap into your new ‘floating head’ POV and enjoy the game’s bright and colorful art style.

Again, I wish there were more first-person gadgets, although you have to give it to Max Mustard for including them at all, as the game seems to prioritize fast and fluid movement through levels instead of the heavier Astro Bot-y mix of first and third-person gameplay.

Comfort

The game’s camera necessarily follows Max around, but does so in a way that’s gentle and comfortable. The studio’s decision to make snap turning a purchasable upgrade at the shop feels a bit weird, however, as it’s pretty necessary for repositioning yourself when you turn around in levels to grab coins or mudpups you may have missed. Granted, the feature is unlocked with in-game coins, but it really should be a standard movement option out of the box.

There are a few moments of forced motion in one-off events, though nothing that should set off alarm bells for motion sickness-prone users, making Max Mustard pretty much perfect for anyone, including VR first-timers.

‘Max Mustard’ Comfort Settings – March 21st, 2024

Turning
Artificial turning
Snap-turn ✔
Quick-turn ✖
Smooth-turn ✖
Movement
Artificial movement
Teleport-move ✖
Dash-move ✖
Smooth-move ✔
Blinders ✖
Head-based ✖
Controller-based ✔
Swappable movement hand ✖
Posture
Standing mode ✔
Seated mode ✔
Artificial crouch ✖
Real crouch ✖
Accessibility
Subtitles ✖
Interface languages English, French, German, Spanish, Japanese, Korean
Dialogue audio languages English
Adjustable difficulty ✖
Two hands required ✔
Real crouch required ✖
Hearing required ✖
Adjustable player height ✔


Why ‘Embodiment’ is More Important Than ‘Immersion’ – Inside XR Design

Our series Inside XR Design examines specific examples of great XR design. Today we’re looking at the game Synapse and exploring the concept of embodiment and what makes it important to VR games.

Editor’s Note: Now that we’ve rebooted our Inside XR Design series, we’re re-publishing past entries for those who missed them.

You can find the complete video below, or continue reading for an adapted text version.

Defining Embodiment

Welcome back to another episode of Inside XR Design. Today I’m going to talk about Synapse (2023), a PSVR 2 exclusive from developer nDreams. But specifically, we’re going to look at the game through the lens of a concept called embodiment.

So what the hell is embodiment, and why am I boring you by talking about it rather than all the cool shooting, explosions, and smart design in the game? Well, it’s going to help us understand why certain design decisions in Synapse are so effective. So stick with me here for just a minute.

Embodiment is a term I use to describe the feeling of being physically present within a VR experience. Like you’re actually standing there in the world that’s around you.

And now your reasonable response is, “but don’t we already use the word immersion for that?”

Well colloquially people certainly do, but I want to make an important distinction between ‘immersion’ and ‘embodiment’.

‘Immersion’, for the purposes of our discussion, is when something has your complete attention. We all agree that a movie can be immersive, right? When the story or action is so engrossing it’s almost like nothing outside of the theater even exists at that moment. But has even the most immersive movie you’ve ever seen made you think you were physically inside the movie? Certainly not.

And that’s where ’embodiment’ comes in. For the sake of specificity, I’m defining immersion as being about attention. On the other hand, embodiment is about your sense of physical presence and how it relates to the world around you.

So I think it’s important to recognize that all VR games get immersion for free. By literally taking over your vision and hearing, for the most part they automatically have your full attention. You are immersed the second you put on a headset.

But some VR games manage to push us one step further. They don’t just have our attention, they make us feel like our whole body has been transported into the virtual world. Like you’d actually feel things in the game if you reached out and touched them.

Ok, so immersion is attention and embodiment is the feeling of actually being there.

And to be clear, embodiment isn’t a binary thing. It’s a spectrum. Some VR games are slightly embodying, while others are very embodying. But what makes the difference?

That’s exactly what we’re going to talk about with Synapse.

Cover You Can Feel

At first glance, Synapse might look like a pretty common VR shooter, but there are several really intentional design decisions that drive a strong sense of embodiment. The first thing I want to talk about is the cover system.

Every VR shooter has cover. You can walk behind a wall and it will block shots for you. But beyond that, the wall doesn’t really physically relate to your actual body because you never actively engage with it. It’s just a stationary object.

But Synapse makes walls and other cover interactive by letting you grab them with your hand and pull your body in and out of cover. This feels really natural and works great for the gameplay.

And because you’re physically moving yourself in relation to the wall—instead of just strafing back and forth with a thumbstick—the wall starts to feel more real. Specifically, it feels more real because when you grab the wall and use it as an anchor from which to move, it’s subconsciously becoming part of your proprioceptive model.

Understanding Proprioception

Let’s take a second here to explain proprioception because it’s a term that comes up a lot when we’re talking about tricking our bodies into thinking we’re somewhere else.

The clearest example I’ve ever seen of proprioception in action is this clip. And listen, I never thought I’d be showing you a cat clip in this series, but here we are. Watch closely as the cat approaches the table… without really thinking about it, it effortlessly moves its ear out of the way just at the right time.

This is proprioception at work. It’s your body’s model of where it is in relation to the things around you. In order for the cat to know exactly when and where to move its ear to avoid the table without even looking at it, it has to have some innate sense of the space its ear occupies and how that relates to the space the table occupies.

In the case of the cover system in Synapse, you intuitively understand that ‘when I grab this wall and move my hand to the right, my body will move to the left’.

So rather than just being a ‘thing that you see’, walls become something more than that. They become relevant to you in a more meaningful way, because you can directly engage with them to influence the position of your body. In doing so, your mind starts to pay more attention to where the walls are in relation to your body. They start to feel more real. And by extension, your own body starts to feel more present in the simulation… you feel more ‘embodied’.
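
For the design-curious, a grab-anchored cover system like this boils down to moving the player rig opposite the hand while the grip is held. Here is a minimal sketch in Swift, with entirely hypothetical names, since nDreams hasn’t published its implementation:

```swift
import simd

// Minimal sketch of grab-anchored movement: while the player grips a point
// on a wall, the rig moves opposite the hand so the grabbed world point
// stays pinned under the hand. Names and structure are hypothetical.
struct GrabAnchor {
    let handAtGrab: SIMD3<Float>       // hand position (rig-local) when the grab began
    let rigOriginAtGrab: SIMD3<Float>  // world-space rig origin when the grab began

    // New world-space rig origin for the current rig-local hand position.
    // Moving the hand right shifts the whole body left, as described above.
    func rigOrigin(for currentHand: SIMD3<Float>) -> SIMD3<Float> {
        let handDelta = currentHand - handAtGrab
        return rigOriginAtGrab - handDelta
    }
}

// Example per-frame update while the grip is held:
// rig.origin = anchor.rigOrigin(for: tracking.handPositionRigLocal)
```

The key point for embodiment is that this ties your real hand motion directly to your virtual body’s position, which is exactly the relationship your proprioceptive system latches onto.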

Mags Out

And walls in Synapse can actually be used for more than cover. You can also use them to push magazines into your weapon.

Backing away from embodiment for just a second—this is such a cool design detail. In Inside XR Design #4 I spent a long time talking about the realistic weapon model in Half-Life: Alyx (2020). But Synapse is a run-and-gun game, so the developers took a totally different approach and landed on a reloading system that’s fast-paced but still engaging.

Instead of making players mess with inventory and chambering, the magazines in this game just pop out and float there. To reload, just slide them back into the weapon. It might seem silly, but it works in the game’s sci-fi context and reduces reloading complexity while maintaining much of the fun and game flow that comes with it.

And now we can see how this pairs so beautifully with the game’s cover system.

The game’s cover system takes one of your hands to use. So how can you reload? Pushing your magazine against the wall is the perfect solution, allowing players to use both systems at the same time.

But guess what? This isn’t just clever design, it’s yet another way you can engage with the wall—as if it’s actually there in front of you. To use the wall to reload, you need to know whether your arm is close enough to it. So again, your brain starts to incorporate walls and their proximity into your proprioceptive model. You start to truly sense the space between your body and the wall.

So both of these things—being able to use walls to pull yourself in and out of cover, and being able to use walls to push a magazine into your gun—make walls feel more real because you interact with them up close and in a meaningful way.

And here’s the thing. When the world around you starts to feel more real, you start to feel more convinced that you’re actually standing inside of it. That’s embodiment. And let’s remember: virtual worlds are always ‘immersive’ because they necessarily have our full attention. But embodiment goes beyond what we see—it’s about what we feel.

And when it comes to reaching out and touching the world… Synapse takes things to a whole new level with its incredible telekinesis system.

Continue on Page 2: Extend Your Reach »
