This Clever Immersion Hack Makes VR Feel More Real – Inside XR Design

In Inside XR Design we examine specific examples of great VR design. Today we’re looking at the clever design of Red Matter 2’s ‘grabber tools’ and the many ways that they contribute to immersion.

Editor’s Note: Now that we’ve rebooted our Inside XR Design series, we’re re-publishing older entries for those who missed them the first time around.

You can find the complete video below, or continue reading for an adapted text version.

Intro

Today we’re going to talk about Red Matter 2 (2022), an adventure puzzle game set in a retro-future sci-fi world. The game is full of great VR design, but those paying close attention will know that some of its innovations were actually pioneered all the way back in 2018 with the release of the original Red Matter. But hey, that’s why we’re making this video series—there’s incredible VR design out there that everyone can learn from.

We’re going to look at Red Matter 2’s ingenious grabber tools, and the surprising number of ways they contribute to immersion.

What You See is What You Get

At first glance, the grabber tools in Red Matter 2 might just look like sci-fi set-dressing, but they are so much more than that.

At a basic level, the grabber tools take on the shape of the user’s controller. If you’re playing on Quest, Index, or PSVR 2, you’ll see a custom grabber tool that matches the shape of your specific controller.

First and foremost, this means the player’s in-game hand pose matches their actual hand pose, and what they see in their virtual hand matches the feeling of actually holding something. The shape you see in-game even matches the center of gravity you feel in your real hand.

Compare that to most VR games which show an open hand pose and nothing in your hand by default… that creates a disconnect between what you see in VR and what you actually feel in your hand.

And of course because you’re holding a tool that looks just like your controller, you can look down to see all the buttons and what they do.

I don’t know about you, but I’ve been using VR for years now, and I still couldn’t reliably tell you off the top of my head which button is the Y button on a VR controller. Is it on the left or right controller? Top or bottom button? Take your own guess in the comments and then let us know if you got it right!

Being able to look down and reference the buttons—and which ones your finger is touching at any given moment—means players can always get an instant reminder of the controls without breaking immersion by opening a game menu or peeking out of their headset to see which button is where.

This is what’s called a diegetic interface—that’s an interface that’s contextualized within the game world, instead of some kind of floating text box that isn’t actually supposed to exist as part of the game’s narrative.

In fact, you’ll notice that there’s absolutely no on-screen interface in the footage you see from Red Matter 2. And that’s not because I had access to some special debug mode for filming. It’s by design.

When I spoke with Red Matter 2 Game Director Norman Schaar, he told me, “I personally detest UI—quite passionately, in fact! In my mind, the best UI is no UI at all.”

Schaar also told me that a goal of Red Matter 2’s design is to keep the player immersed at all times.

So it’s not surprising that we also see the grabber tools used as a literal interface within the game, allowing you to physically connect to terminals to gather information. To the player this feels like a believable way that someone would interact with the game’s world—but under the surface it’s really just a clever and immersive replacement for the ‘press X to interact’ mechanics that are common in flat games.

The game’s grabber tools do even more for immersion than just replicating the feel of a controller in your hand or acting as a diegetic interface in the game. Crucially, they also replicate the limited interaction fidelity that players actually have in VR.

Coarse Hand Input

So let me break this down. In most VR games when you look at your hands you see… a human hand. That hand, of course, is supposed to represent your hand. But there’s a big disconnect between what your real hands are capable of and what the virtual hands can do. Your real hands each have five fingers and can dexterously manipulate objects in ways that even today’s most advanced robots have trouble replicating.

So while your real hand has five fingers with which to grab and manipulate objects, your virtual hand essentially has just a single point of input—one point with which to grab objects.

If you think about it, the grabber tool in Red Matter 2 exactly represents this single point of input to the player. Diegetically, it’s obvious upon looking at the tool that you can’t manipulate individual fingers, so your only option is to ‘grab’ at a single point.

That’s a long way of saying that the grabber tools in Red Matter 2 reflect the coarse hand input that’s actually available to us in VR, instead of showing us a virtual hand with lots of fingers that we can’t actually use.

So, in Red Matter 2, the grabber tools contextualize our inability to use our fingers. The result is that instead of feeling silly about having to rotate and manipulate objects in somewhat strange ways, you feel like you’re learning how to deftly operate these futuristic tools.

Immersion Insulation Gap

And believe it or not, there’s still more to say about why Red Matter 2’s grabber tools are so freaking smart.

Physics interactions are a huge part of the game, and the grabber tools again work to maintain immersion when handling objects. Like many VR games, Red Matter 2 uses an inertia-like system to imply the weight of an object in your hand. Small objects move quickly and easily, while large objects are sluggish and their inertia fights against your movement.
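To get a feel for how this kind of weight model can work in general, here’s a minimal sketch (a generic TypeScript illustration, not the game’s actual implementation): the held object chases the hand every frame, and the heavier the object, the more slowly it chases, so its lag implies mass without any haptic feedback.

```typescript
// A generic inertia-style "weight" model for held objects (illustrative only).
interface Vec3 { x: number; y: number; z: number; }

const lerp = (a: Vec3, b: Vec3, t: number): Vec3 => ({
  x: a.x + (b.x - a.x) * t,
  y: a.y + (b.y - a.y) * t,
  z: a.z + (b.z - a.z) * t,
});

// Called every frame: move the held object toward the hand, scaling the
// follow speed by mass so heavy objects feel sluggish in the hand.
function updateHeldObject(objectPos: Vec3, handPos: Vec3, massKg: number, dt: number): Vec3 {
  const followSpeed = 20 / Math.max(massKg, 0.1); // 20 is a made-up tuning constant
  return lerp(objectPos, handPos, Math.min(1, followSpeed * dt));
}
```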

Rather than leaving us to imagine the force our hands would feel when moving these virtual objects, the grabber tools create a sort of immersion insulation gap by providing a mechanical pivot point between the tool and the object.

This visually ‘explains’ why we can’t feel the forces of the object against our fingers, especially when the object is very heavy. The disconnect between the object and our hand—with the grabber tool as the insulator in the middle—alleviates some of the expectation of the forces that we’d normally feel in real life, thereby preserving immersion just a little bit more.

Unassuming Inventory

And if it wasn’t clear already, the grabber tools are actually… your inventory. Not only do they store all of your tools—like the flashlight, hacking tool, and your gun—you can even use them to temporarily stow objects. Handling inventory this way means that players can never accidentally drop or lose their tools, which is an issue we see in lots of other VR games, even those which use ‘holsters’ to hold things.

Inhuman Hands

And last but not least…the grabber tools can actually do some interesting things that our hands can’t. For example, the rotating grabber actually makes the motion of turning wheels like this one easier than doing it with two normal hands.

It’s no coincidence that the design of the grabber tools in Red Matter 2 is so smartly thought through… after all, the game is all about interacting with the virtual world around you… so it makes sense that the main way in which players interact with the world would be carefully considered.

To take full advantage of the grabbers, the developers built a wide variety of detailed objects for the game which are consistently interactive. You can pick up pretty much anything that looks like you should be able to.

And here’s a great little detail that I love to see: in cases where things aren’t interactive, all you have to do is not imply that they are! Here in Red Matter 2 the developers simply removed handles from this cabinet… a clear but non-intrusive way to tell players it can’t be opened.

Somewhat uniquely to VR, just seeing cool stuff up close like it’s right in front of you can be a rewarding experience all on its own. To that end, Red Matter 2 makes a conscious effort to sprinkle in a handful of visually interesting objects, whether it’s this resin eyeball, papers with reactive physics, or this incredible scene where you watch your weapon form from hundreds of little balls right in your hand.

– – — – –

Red Matter 2’s grabber tool design is so beneficial to the game’s overall immersion that, frankly, I’m surprised we haven’t seen this sort of thing become more common in VR games.

If you want to check all of this out for yourself, you can find Red Matter 2 on Quest, PSVR 2, and PC VR. Enjoyed this breakdown? Check out the rest of our Inside XR Design series and our Insights & Artwork series.

And if you’re still reading, how about dropping a comment to let us know which game or app we should cover next?

The post This Clever Immersion Hack Makes VR Feel More Real – Inside XR Design appeared first on Road to VR.

Hands-on: Apple Upgrades Personas for True Face-to-face Chats on Vision Pro

Apple today released ‘Spatial Personas’ in public beta on Vision Pro. The newly upgraded avatar system can now bring people right into your room. We got an early look.

Much has been said about Apple’s Persona avatar system for Vision Pro. Whether you find them uncanny or passable, one thing is certain: it’s the most photorealistic real-time avatar system built into any headset available today. And now Personas is getting upgraded with ‘Spatial Personas’.

But weren’t Personas already ‘spatial’? Let me explain.

Sorta Spatial

At launch the Persona system allowed users to scan their faces into the headset to create a digital identity that looks and moves like the user, thanks to the bevy of sensors in Vision Pro. When doing a FaceTime call with other Vision Pro users, each person’s Persona—head, shoulders, and hands—would be shown inside a floating box.

Image courtesy Apple

While this could feel like face-to-face talking at times, the fact that they were contained within a frame (which you can move or resize like any other window) made it feel like they weren’t actually standing right next to you. And that’s not just because of the frame, but also because you weren’t actually sharing the same space as them—it’s not like they could walk right up to you for a high-five, because they’d be stuck in the window on your screen.

Face-to-face

Now with Spatial Personas (released in beta today on the latest version of VisionOS), each person’s avatar is rendered in a shared space without the frame. When I say ‘shared space’, I mean that if someone takes a step toward me in their room, I actually see them come one step closer to me.

Previously the frame made it feel sort of like you were doing a 3D video chat. Now with the shared space and no frame, it really feels like you’re standing right next to each other. It’s the ‘hang out on the same couch’ or ‘gather around the same table’ experience that wasn’t actually possible on Vision Pro at launch.


And it’s really quite compelling. I got a sneak peek at the new system in a Vision Pro FaceTime call with four people (though up to five are supported total), all using Spatial Personas. You’ll still only see their head, shoulders, and hands but now it really feels like a huddle instead of a 3D video chat. It feels much more personal.

Spatial Personas Are Opt-in

To be clear, the ‘video chat’ version of Personas (with the frame) still exists. In fact, it’s the default way that avatars are shown when a FaceTime call is started. Switching to a Spatial Persona requires hitting a button on the FaceTime menu.

And while this might seem like a strange choice, I actually think there’s something to it.

On the one hand, the default ‘FaceTime in Vision Pro’ experience feels like a video chat. In everyday business we’re all pretty used to seeing someone else on the other side of a webcam by now. And even though this is more personal than an audio-only call, it’s still a step away from actually meeting with someone in person.

Spatial Personas is more like actually meeting up in person, since you can genuinely feel the interpersonal space between you and the other people in this shared space. If they walk up and get a little too close, you’ll truly feel it, in the same way as if someone stood too close to you in real life.

So it’s nice to have both of these options. I can ‘video chat’ with someone with the regular mode, or I can essentially invite them into my space if the situation calls for a more personal meeting.

And Spatial Personas aren’t just for chatting. Just like regular Personas, you can use SharePlay while on FaceTime to watch movies and play games together (provided you both have a supported app installed).

Take Freeform for instance, Apple’s collaborative digital whiteboard app. If you launch Freeform while on a FaceTime call with Spatial Personas, everyone else will be asked to join the app, which will then load everyone in front of the whiteboard.

Everything is synchronized too. Anyone else in the call can see what you’ve put on the whiteboard and watch in real time as you add new photos or draw annotations. And just as easily, anyone can physically walk up to the board and interact with it themselves.

When it comes to shared movie viewing on Apple TV on Vision Pro, Spatial Personas unlock the feeling of sitting on the same couch together, which wasn’t quite possible with the headset at launch. Now when you watch a movie with your friends you’ll be sitting shoulder to shoulder with them, which feels very different than having a window with their face in it floating near the video you’re watching.

It’s possible to stream many flat apps to anyone in the FaceTime call while using Spatial Personas, but for 3D or interactive content developers will need to specially implement the feature.

That’s somewhat problematic though because it’s difficult to know exactly which apps support Spatial Personas or even SharePlay for that matter. As of now, you have to scroll all the way to the bottom of an app’s page to see if it supports SharePlay (unless the developer mentions it in the app’s description). And even then this doesn’t necessarily mean it supports Spatial Personas.

The Little Details

Apple also thought through some smaller details for Spatial Personas, perhaps the most interesting of which is ‘locomotion’.

Room-scale locomotion is essentially the default. If you want to move closer to a person or app… you just physically walk over to it. But what happens if it’s outside the bounds of your physical space? Well, instead of directly moving yourself virtually, you can actually move the whole shared space closer to or further from you.

You can do this any time, in any app, and everyone else will see your new position reflected within their space, keeping everything synchronized.
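To illustrate why everything stays in sync (a generic sketch, not Apple’s actual implementation): dragging the shared space toward yourself by some offset is equivalent, from everyone else’s point of view, to moving your avatar by the opposite offset within that space.

```typescript
// Generic illustration of repositioning a shared space locally.
interface Vec3 { x: number; y: number; z: number; }

// Offset of the shared space's origin, expressed in my local room coordinates.
let sharedSpaceOffset: Vec3 = { x: 0, y: 0, z: 0 };

// I drag the shared space around my room…
function dragSharedSpace(delta: Vec3) {
  sharedSpaceOffset = {
    x: sharedSpaceOffset.x + delta.x,
    y: sharedSpaceOffset.y + delta.y,
    z: sharedSpaceOffset.z + delta.z,
  };
}

// …and the pose I report to other participants (in shared-space coordinates)
// shifts by the opposite amount, so they see me move while their view of the
// shared space stays put.
function myPoseInSharedSpace(myRoomPos: Vec3): Vec3 {
  return {
    x: myRoomPos.x - sharedSpaceOffset.x,
    y: myRoomPos.y - sharedSpaceOffset.y,
    z: myRoomPos.z - sharedSpaceOffset.z,
  };
}
```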

Apple also made it so that when two Spatial Personas get too close together, they temporarily revert to looking like a floating contact photo. I think this is probably because they want to avoid possible harassment or trolling (i.e. you want to annoy someone, so you phase your virtual hand right through their virtual face, which is uncomfortable both visually and from an interpersonal space standpoint).

The headset’s excellent spatial audio is of course included by default, so everyone sounds like they’re coming from wherever they’re standing in the room, and their voices actually sound like they’re in your room (based on the headset’s estimate of what the acoustics should sound like). And if you move to a fully immersive space like an ‘environment’, the spatial audio transitions to that new acoustic environment—so for instance you can hear people faintly echoing in the Joshua Tree environment because of all the rock surfaces nearby. Hearing the acoustics fade from being inside your own room to being ‘outside’ in an environment is a subtle bit of magic.

Image courtesy Apple

And last but not least, it’s possible to have a mixed group of FaceTime participants. For instance you could have people using an iPhone, an Android tablet (yes, you can FaceTime with people on non-Apple devices), a normal Persona, and a Spatial Persona all at once. SharePlay in that case will also work between those formats (except non-Apple devices) as long as the app supports it. In cases with apps that are Vision Pro native, the iPhone user would get a notification that their device isn’t supported.

– – — – –

Spatial Personas is a big upgrade to Apple’s avatar system, but the company maintains the whole Persona system is still in ‘beta’. Presumably that means there are more improvements yet to come.

The post Hands-on: Apple Upgrades Personas for True Face-to-face Chats on Vision Pro appeared first on Road to VR.

Vision Pro and Quest 3 Hand-tracking Latency Compared

Vision Pro is built entirely around hand-tracking while Quest 3 uses controllers first and foremost, but also supports hand-tracking as an alternate option for some content. But which has better hand-tracking? You might be surprised at the answer.

Vision Pro Hand-tracking Latency

With no support for motion controllers, Vision Pro’s only motion-based input is hand-tracking. The core input system combines hands with eyes to control the entire interface.

Prior to the launch of the headset we spotted some footage that allowed us to gauge the hand-tracking latency at somewhere between 100 and 200ms, but that’s a pretty big window. Now we’ve run our own test and more precisely measured Vision Pro’s hand-tracking latency at about 128ms on visionOS beta v1.1.1.

Here’s how we measured it. Using a screen capture from the headset which views both the passthrough hand and the virtual hand, we can see how many frames it takes between when the passthrough hand moves and when the virtual hand moves. We used Apple’s Persona system for hand rendering to eliminate any additional latency which could be introduced by Unity.

After sampling a handful of tests (pun intended), we found this to be about 3.5 frames. At the capture rate of 30 FPS, that’s 116.7ms. Then we add Vision Pro’s known passthrough latency of about 11ms, for a final result of 127.7ms of photon-to-hand-tracking latency.

We also tested how long between a passthrough tap and a virtual input (to see if full skeletal hand-tracking is slower than simple tap detection), but we didn’t find any significant difference in latency. We also tested in different lighting conditions and found no significant difference.

Quest 3 Hand-tracking Latency

How does that compare to Quest 3, a headset which isn’t solely controlled by the hands? Using a similar test, we found Quest 3’s hand-tracking latency to be around 70ms on Quest OS v63. That’s a substantial improvement over Vision Pro, but actual usage of the headset would make one think Quest 3 has even lower hand-tracking latency. But it turns out some of the perceived latency is masked.

Here’s how we found out. Using a 240Hz through-the-lens capture, we did the same kind of motion test as we did with Vision Pro to find out how long it takes between the motion of the passthrough hand and the virtual hand. That came out to 31.3ms. Combined with Quest 3’s known passthrough latency of about 39ms, that makes Quest 3’s photon-to-hand-tracking latency about 70.3ms.
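For those who want the arithmetic laid out, here’s the calculation from both tests as a small TypeScript sketch (the helper function is just for illustration; the numbers are the measurements described above):

```typescript
// Convert a delay measured in capture frames to milliseconds.
const framesToMs = (frames: number, captureFps: number) => (frames / captureFps) * 1000;

// Vision Pro: ~3.5 frames of delay in a 30 FPS capture, plus ~11ms passthrough latency.
const vpVisibleDelayMs = framesToMs(3.5, 30);  // ≈ 116.7ms (passthrough hand → virtual hand)
const vpTotalMs = vpVisibleDelayMs + 11;       // ≈ 127.7ms photon-to-hand-tracking

// Quest 3: ~31.3ms of visible delay (from a 240Hz capture), plus ~39ms passthrough latency.
const q3VisibleDelayMs = 31.3;
const q3TotalMs = q3VisibleDelayMs + 39;       // ≈ 70.3ms photon-to-hand-tracking

console.log({ vpVisibleDelayMs, vpTotalMs, q3VisibleDelayMs, q3TotalMs });
```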

When using Quest 3, hand-tracking feels even snappier than that result suggests, so what gives?

Because Quest 3’s passthrough latency is about three-and-a-half times that of Vision Pro (39ms vs. 11ms), more of the total hand-tracking latency is hidden behind the delayed passthrough view. That means the time between seeing your hand move and your virtual hand move appears to be just 31.3ms (compared to 116.7ms on Vision Pro).

– – — – –

An important point here: latency and accuracy of hand-tracking are two different things. In many cases, they may even have an inverse relationship. If you optimize your hand-tracking algorithm for speed, you may give up some accuracy. And if you optimize it for accuracy, you may give up some speed. As of now we don’t have a good measure of hand-tracking accuracy for either headset, outside of a gut feeling.

The post Vision Pro and Quest 3 Hand-tracking Latency Compared appeared first on Road to VR.

‘Laser Dance’ Hands-on: Forget ‘the floor is lava’—You’ve Got Lasers Now

After recently playing a prototype version of Laser Dance, I’ve been trying to wrap my head around what makes it so engaging. And it finally struck me. The game is a manifestation of the kind of simple rules that could turn any ordinary day in my childhood into an exciting adventure.

Immediately I recalled the classic childhood pastime of ‘the floor is lava’, the game where everyone has to pretend that the floor is entirely covered in lava, and touching it means you’ll die (not morbid at all, I know). So you have to climb on your furniture or hop from spot to spot to avoid an untimely doom.

That one simple rule—don’t touch the floor—provided me and my childhood friends hours of fun with nothing more than the room that was around us.

Laser Dance employs a similarly simple rule—don’t touch the laser—and turns it into an entire game that happens right inside your room. It’s actually very much like ‘the floor is lava’, but now the puzzle is three dimensional. And this time it’s not just in your imagination.

The game works by having you place two virtual buttons on opposite sides of your room. Your goal is to move from one button to the other without touching any lasers. But it isn’t just one set of lasers. The demo I played on Quest 3 showed many different ways the lasers could be arranged and sometimes they even moved, forcing me to time my movements just right.

The game is so simple to understand and play that it’s easy to miss the serious technical and design challenges lurking underneath. It feels somehow like each level has been custom-made for your room; that’s because Laser Dance is actually analyzing the shape of your room and customizing each level to fit. Developer Thomas Van Bouwel wrote about how that system works in a Guest Article published last year.

Footage of early Laser Dance gameplay | courtesy Thomas Van Bouwel

And it seems to be doing a great job. When I played the demo I didn’t run into any moments where it felt like I couldn’t find my way to the goal without bumping into any lasers—even if that meant sometimes crawling along the floor to get there! And this was not in the kind of hyper-clean modern rooms you see in Quest 3 ads… this was in my messy workshop space with chairs and things scattered randomly about the floor.

And therein lies much of the magic of Laser Dance. The technical stuff happens behind the scenes, leaving the player focused on one thing, and one thing alone: don’t touch the lasers.

There’s something really compelling about a game like this that so clearly taps into your sense of proprioception (the unconscious awareness of where your limbs are) and thus drives a strong sense of embodiment.

The game uses the latest inside-out body tracking tech from Meta to convincingly track not just your head, but also your hands and arms. So if you hit a laser with your shoulder—that’s on you. And even though the game doesn’t actually track your legs, I found myself diligently stepping over and around the lasers just the same.
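For a sense of how simple the underlying check can be, here’s a generic sketch of testing a tracked joint against a laser treated as a line segment (an illustration of the general technique, not the game’s actual code; the 5cm touch radius is a made-up tuning value):

```typescript
// Shortest distance from a tracked joint to a laser beam (a line segment).
interface Vec3 { x: number; y: number; z: number; }

const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;

function distanceToLaser(joint: Vec3, laserStart: Vec3, laserEnd: Vec3): number {
  const beam = sub(laserEnd, laserStart);
  // Project the joint onto the beam, clamped to the segment's endpoints.
  const t = Math.max(0, Math.min(1, dot(sub(joint, laserStart), beam) / dot(beam, beam)));
  const closest = {
    x: laserStart.x + beam.x * t,
    y: laserStart.y + beam.y * t,
    z: laserStart.z + beam.z * t,
  };
  const d = sub(joint, closest);
  return Math.sqrt(dot(d, d));
}

// A joint 'touches' a laser if it comes within some small radius of the beam.
const touchesLaser = (joint: Vec3, a: Vec3, b: Vec3, radiusM = 0.05) =>
  distanceToLaser(joint, a, b) < radiusM;
```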

And Laser Dance feels like it’s shaping up to be more than just a tech demo. The variety and difficulty of the levels, even at this early stage, makes it clear that it’s designed to be fun and challenging—not just a way to show off Quest 3’s latest and greatest tech.

Laser Dance is one of the most engaging mixed reality experiences I’ve played yet. It’s also one of the first Quest 3 MR experiences that I’m actually excited to show other people because it’s quick and easy to learn, especially because it doesn’t rely on controllers.

Laser Dance is planned for launch later this year.

The post ‘Laser Dance’ Hands-on: Forget ‘the floor is lava’—You’ve Got Lasers Now appeared first on Road to VR.

Meta Announces “multi-million dollar” XR Development Fund to Boost Newly Formed Studios

Meta this week announced Oculus Publishing Ignition, a “multi-million dollar” fund aimed at newly formed XR studios. The company is offering money for VR and MR prototypes, with the potential for follow-on funding to turn those prototypes into full titles.

Seemingly wanting to tap into the unfortunate number of recently laid-off game developers, Meta’s new Oculus Publishing Ignition fund is specifically looking for XR studios that were formed in or after April 2023. The company says it plans to fund up to 20 teams before the end of 2024.

Those teams will receive three Quest 3 headsets and cash to build their ideas into prototypes over the course of six months. At the end of that sprint, there’s also the potential for more funding to turn those prototypes into a “fully-scoped game.” Meta is promising that participating studios will retain “full IP, code, assets, design and distribution rights.”

The fund has an eye toward specific types of content:

For Ignition, we are focused on game concepts that target the midcore player—more mainstream than niche and broad in scope with compelling Quest 3 interaction mechanics. Most importantly, we want teams to build the game you’re most passionate about and experienced in. We’re looking for mixed reality concepts just as much as conventional VR, and have enhanced interest in simulation games, active sports, and accessible social titles.

Meta says it’s accepting open applications for Oculus Publishing Ignition through September 1st, 2024 (or sooner if the fund is exhausted).


While Meta regularly cuts deals to help support the development of Quest content, the process for actually getting one of those deals has been opaque. Oculus Publishing Ignition represents a much more transparent pathway for developers to pitch their ideas to Meta for consideration of funding. However the fund is gated to only newly formed studios.

The post Meta Announces “multi-million dollar” XR Development Fund to Boost Newly Formed Studios appeared first on Road to VR.

Apple is Adding Support for Vision Pro’s Input System to WebXR

Apple is adding support for Vision Pro’s unique input system to WebXR, the web standard which allows XR experiences to run right from a web browser.

One of the most unique things about Apple Vision Pro is its input system, which eschews motion controllers in favor of a ‘look and pinch’ system that combines eye-tracking with a pinch gesture. On the whole it’s a really useful way to navigate the headset, but because it works so differently than motion controllers, it doesn’t play too well with WebXR.

But Apple is working to fix that. This week the company announced the latest version of VisionOS (1.1) includes a new input mode for Safari’s WebXR capabilities called ‘transient-pointer’. This new mode provides inputs from the headset in a standardized way which developers can use to understand what users are selecting inside of a WebXR session running on Vision Pro.

Up to this point, WebXR apps typically expect a headset to report a continuously updated position for each controller. But Apple says it built Vision Pro’s input system to reveal as little information about the user as possible, so it doesn’t report the pose or position of the user’s hands by default. Instead, it only reveals such information at the moment of the user’s pinch (though it’s possible for a WebXR app to ask for full hand-tracking info).

 

With the new transient-pointer option, when a user pinches the WebXR app will be able to see a ray representing the direction of the user’s gaze and the coordinate position of their pinch. Like in VisionOS itself, the app thus looks at the pinch to decide ‘when’ a user is making an input, and looks at the ray to decide ‘where’ they’re making the input.

For the duration of the pinch, the position of the pinch itself is continuously updated, allowing for interactions like dragging, pushing, and pulling objects. But when the pinch is released, the app no longer has access to the direction the user is looking or where their hand is located.
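Here’s a minimal sketch of what consuming these inputs might look like inside a WebXR session, based on the behavior described above (TypeScript, assuming WebXR type definitions such as @types/webxr; beginInteraction and endInteraction are hypothetical app hooks, and the exact event flow may differ in practice):

```typescript
// Hypothetical app-side hooks — stand-ins for whatever your app does on input.
function beginInteraction(transform: XRRigidTransform) { /* e.g. start a drag at this pose */ }
function endInteraction() { /* e.g. finish the drag */ }

function watchTransientPointer(session: XRSession, refSpace: XRReferenceSpace) {
  session.addEventListener('selectstart', (event: XRInputSourceEvent) => {
    if ((event.inputSource.targetRayMode as string) !== 'transient-pointer') return;
    // The target ray reflects where the user was looking at the moment of the pinch.
    const rayPose = event.frame.getPose(event.inputSource.targetRaySpace, refSpace);
    if (rayPose) beginInteraction(rayPose.transform);
  });

  session.addEventListener('selectend', (event: XRInputSourceEvent) => {
    if ((event.inputSource.targetRayMode as string) !== 'transient-pointer') return;
    // Once the pinch is released, the input source (and its pose data) goes away.
    endInteraction();
  });
}
```

While the pinch is held, the same input source remains available to the session, so a drag can be sampled frame by frame from the render loop.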

With these new capabilities, WebXR apps will be able to adapt their interactions to work correctly with Vision Pro.

However, WebXR on Vision Pro is still experimental. Developers must manually enable WebXR capabilities by accessing advanced settings of Safari in the headset. Developers can also experiment with WebXR and the transient-pointer mode using the VisionOS simulator.

The transient-pointer mode for Vision Pro is being baked into the WebXR standard, and has been added to the most recent draft version of the specification. That means that devices which adopt the same input mode will be able to tap into the same WebXR capabilities.

The post Apple is Adding Support for Vision Pro’s Input System to WebXR appeared first on Road to VR.

Meta Extends Quest 3 ‘Asgard’s Wrath 2’ Bundle Offer Through June

Since the launch of Quest 3, Meta has included a free copy of its latest and greatest first-party game, Asgard’s Wrath 2. Originally with a redemption deadline in January, the company has extended the offer several times, now pushing it out to the end of June.

Asgard’s Wrath 2 is perhaps the most ambitious Quest game yet launched, and for anyone who has bought a new Quest 3, the game has been bundled for free (normally $60).

The original bundle offer was set to expire in January, but Meta extended it several times. Today the company announced it is extending the free Asgard’s Wrath 2 offer again until the end of June, 2024.

The game is included for free with both the Quest 3 (128GB) and Quest 3 (512GB). But it’s important to remember that customers must redeem the game (which means going to the game page and ‘claiming’ the game) before the deadline, which is now June 30th, 2024.

Asgard’s Wrath 2 is also available on Quest 2, but not for free.

Asgard’s Wrath 2, developed by Meta’s in-house VR studio Sanzaru Games, is in many ways the most ambitious game to ever launch on the Quest platform. It has been critically acclaimed for scale and production quality that exceeds most of what else is available on the headset. We liked it enough to give it our 2023 Quest Game of the Year award.

Get the Most Out of Quest

The Best Quest 3 Accessories: Quest 3 is a great headset but there are a few areas where accessories can really improve the experience, especially the headstrap!

The Very Best Quest Games: The Quest library can be daunting; here's our quick guide to the best games.

Essential Quest Tips, Tricks, and Settings: If you're just diving into VR as a new Quest owner, you should absolutely check out our Quest Tips & Tricks Guide for a heap of useful tricks and settings everyone should know about.

Fitness and Fun on Quest: For fitness in VR that's as fun as it is physical, check out our suggestion for a VR Workout Routine.

Relaxing in VR: Are you less of a competitive gamer and more interested in how you can use VR to chill out? We have a great list of VR Games for Relaxation and Meditation.

Flex Your Creativity in VR: And last but not least, if you're a creative type looking to express yourself in VR, our list of Tools for Painting, Modeling, Designing & Animating in VR offers a huge range of artful activities, with something for everyone from fiddlers to professionals.

The post Meta Extends Quest 3 ‘Asgard’s Wrath 2’ Bundle Offer Through June appeared first on Road to VR.

HTC is Giving Devs a Big Revenue Share Boost on Its VR Platform

HTC is sweetening the pot for VR developers selling content on its VIVEPORT VR storefront, on both PC VR and its standalone Vive XR Elite headset.

HTC announced today that it will be increasing the revenue share of sales made on its Viveport VR platform to 90%. That means the developer keeps 90% of the revenue from apps bought on the platform while the platform keeps only 10%.

Other major XR app stores—like Meta’s Quest store and Valve’s Steam store—generally give developers a 70% revenue share, while keeping 30% for the platform.

HTC says the new revenue split will apply starting on April 1st to new apps sold on both the PC VR and Vive XR Elite versions of Viveport. Existing apps already on those stores will get the improved share for sales going back to March 1st. The company hasn’t announced how long it will honor the new share. We’ve reached out for more info.

HTC says it’s making the change for the benefit of developers and the critical role they play in the XR industry.

“Developers are the heartbeat of the XR ecosystem—when they thrive, the whole industry thrives,” said Joseph Lin, General Manager of Viveport. “That’s why we’re introducing a generous 90% revenue share on purchases of apps and games on the Viveport store for developers to accelerate their growth. By putting more resources directly into the hands of the creators, we’re ensuring Viveport is at the forefront of driving growth for the XR community.”

This isn’t the first time HTC has sweetened the deal for developers using Viveport. The company has temporarily boosted developer revenue at several points over the years, including giving developers 100% of revenue at the tail-end of 2020.

While Meta’s Quest app store takes a fairly common 30% share of revenue for app sales, the company has been criticized for taking the same amount from apps sold on its App Lab store, which hosts ‘unlisted’ apps which can’t be found by browsing the main Quest store. The company has similarly been criticized for the revenue share structure of its Horizon Worlds social VR app, which keeps nearly 50% of digital goods revenue sold through the app.

The post HTC is Giving Devs a Big Revenue Share Boost on Its VR Platform appeared first on Road to VR.

Spring Sales Bring Deep Discounts on Top Quest & PC VR Games

Spring has nearly sprung, and with it come great sales on games for Quest and PC VR. Don’t miss this chance to pick up a title you’ve been waiting for!

Quest Spring Sale

Meta is offering solid discounts on Quest game bundles, and separately offering a 30% discount on select individual games through March 24th.

You can use the code SPRING30 at check-out for a 30% discount on any individual game on this page.

The bundles have their own discounts and can’t be combined with the code. Here are the bundles:

Vader Immortal: A Star Wars VR Series Bundle – $20 (34% discount)
  • Vader Immortal: Episode I
  • Vader Immortal: Episode II
  • Vader Immortal: Episode III
Friendly Fire Bundle – $35 (36% discount)
  • Breachers
  • Zero Caliber: Reloaded
Rock, Rave & Rhythm Bundle – $45 (39% discount)
  • Smash Drums
  • Synth Riders
The Walking Dead Bundle – $52 (34% discount)
  • The Walking Dead: Saints & Sinners
  • The Walking Dead: Saints & Sinners – Chapter 2: Retribution
Aviation Action Bundle – $24 (39% discount)
  • Warplanes: Battles over Pacific
  • Ultrawings 2
Move to Survive Bundle – $33 (39% discount)
  • SUPERHOT VR
  • Pistol Whip
Whole in One Bundle – $45 (22% discount)
  • Walkabout Mini Golf
  • All 17 DLC courses

Looking to make your Quest 3 gaming experience even better? Don’t miss our top picks for the most essential Quest 3 accessories.

Steam Spring Sale

If you’re a PC VR player, Steam has the hook up. You can find discounts on more than 1,000 PC VR games for the Steam Spring Sale right here. This sale lasts through March 21st at 10AM PT.

Among the massive number of discounts here are a few highlights, including a 66% discount on Half-Life: Alyx (the most it’s ever been discounted).

Half-Life: Alyx – $20 (66% discount)
BONELAB – $32 (20% discount)
VTOL VR – $21 (30% discount)
Walkabout Mini Golf – $10 (30% discount)
UNDERDOGS – $24 (35% discount)
Ragnarock – $10 (60% discount)
The Light Brigade – $19 (25% discount)
Until You Fall – $14 (44% discount)
The Room VR: A Dark Matter – $15 (50% discount)
The Last Clockwinder – $15 (40% discount)
Vermillion VR Painting – $16 (20% discount)

The post Spring Sales Bring Deep Discounts on Top Quest & PC VR Games appeared first on Road to VR.

Why “Embodiment” is More Important Than “Immersion” – Inside XR Design

Our series Inside XR Design examines specific examples of great XR design. Today we’re looking at the game Synapse and exploring the concept of embodiment and what makes it important to VR games.

You can find the complete video below, or continue reading for an adapted text version.

Defining Embodiment

Welcome back to another episode of Inside XR Design. Today I’m going to talk about Synapse (2023), a PSVR 2 exclusive game from developer nDreams. But specifically we’re going to look at the game through the lens of a concept called embodiment.

So what the hell is embodiment, and why am I boring you with it rather than just talking about all the cool shooting, explosions, and smart design in the game? Well, it’s going to help us understand why certain design decisions in Synapse are so effective. So stick with me here for just a minute.

Embodiment is a term I use to describe the feeling of being physically present within a VR experience. Like you’re actually standing there in the world that’s around you.

And now your reasonable response is, “but don’t we already use the word immersion for that?”

Well colloquially people certainly do, but I want to make an important distinction between ‘immersion’ and ‘embodiment’.

‘Immersion’, for the purposes of our discussion, is when something has your complete attention. We all agree that a movie can be immersive, right? When the story or action is so engrossing it’s almost like nothing outside of the theater even exists at that moment. But has even the most immersive movie you’ve ever seen made you think you were physically inside the movie? Certainly not.

And that’s where ’embodiment’ comes in. For the sake of specificity, I’m defining immersion as being about attention. On the other hand, embodiment is about your sense of physical presence and how it relates to the world around you.

So I think it’s important to recognize that all VR games get immersion for free. By literally taking over your vision and hearing, for the most part they automatically have your full attention. You are immersed the second you put on a headset.

But some VR games manage to push us one step further. They don’t just have our attention, they make us feel like our whole body has been transported into the virtual world. Like you’d actually feel things in the game if you reached out and touched them.

Ok, so immersion is attention and embodiment is the feeling of actually being there.

And to be clear, embodiment isn’t a binary thing. It’s a spectrum. Some VR games are slightly embodying, while others are very embodying. But what makes the difference?

That’s exactly what we’re going to talk about with Synapse.

Cover You Can Feel

At first glance, Synapse might look like a pretty common VR shooter, but there are several really intentional design decisions that drive a strong sense of embodiment. The first thing I want to talk about is the cover system.

Every VR shooter has cover. You can walk behind a wall and it will block shots for you. But beyond that, the wall doesn’t really physically relate to your actual body because you never actively engage with it. It’s just a stationary object.

But Synapse makes walls and other cover interactive by letting you grab them with your hand and pull your body in and out of cover. This feels really natural and works great for the gameplay.

And because you’re physically moving yourself in relation to the wall—instead of just strafing back and forth with a thumbstick—the wall starts to feel more real. Specifically, that’s because when you grab the wall and use it as an anchor from which to move, it subconsciously becomes part of your proprioceptive model.
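As a rough illustration of the general technique (not nDreams’ actual code), grab-anchored movement can be as simple as applying the hand’s frame-to-frame motion to the player rig in reverse while the grip is held:

```typescript
// Generic grab-anchored movement: the grabbed wall stays fixed in the world,
// so the hand's motion in tracking space moves the player rig the opposite way.
interface Vec3 { x: number; y: number; z: number; }

let grabbing = false;
let lastHandPos: Vec3 | null = null;

function onGripChanged(isGripping: boolean, handPosTrackingSpace: Vec3) {
  grabbing = isGripping;
  lastHandPos = isGripping ? handPosTrackingSpace : null;
}

// Called every frame with the hand position in the headset's tracking space.
function updateCoverMovement(handPosTrackingSpace: Vec3, rigPos: Vec3): Vec3 {
  if (!grabbing || !lastHandPos) return rigPos;
  const next = {
    x: rigPos.x - (handPosTrackingSpace.x - lastHandPos.x),
    y: rigPos.y, // keep the rig on the floor
    z: rigPos.z - (handPosTrackingSpace.z - lastHandPos.z),
  };
  lastHandPos = handPosTrackingSpace;
  return next;
}
```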

Understanding Proprioception

Let’s take a second here to explain proprioception because it’s a term that comes up a lot when we’re talking about tricking our bodies into thinking we’re somewhere else.

The clearest example I’ve ever seen of proprioception in action is this clip. And listen, I never thought I’d be showing you a cat clip in this series, but here we are. Watch closely as the cat approaches the table… without really thinking about it, it effortlessly moves its ear out of the way just at the right time.

This is proprioception at work. It’s your body’s model of where it is in relation to the things around you. In order for the cat to know exactly when and where to move its ear to avoid the table without even looking at it, it has to have some innate sense of the space its ear occupies and how that relates to the space the table occupies.

In the case of the cover system in Synapse, you intuitively understand that ‘when I grab this wall and move my hand to the right, my body will move to the left’.

So rather than just being a ‘thing that you see’, walls become something more than that. They become relevant to you in a more meaningful way, because you can directly engage with them to influence the position of your body. In doing so, your mind starts to pay more attention to where the walls are in relation to your body. They start to feel more real. And by extension, your own body starts to feel more present in the simulation… you feel more ‘embodied’.

Mags Out

And walls in Synapse can actually be used for more than cover. You can also use them to push magazines into your weapon.

Backing away from embodiment for just a second—this is such a cool design detail. In Inside XR Design #4 I spent a long time talking about the realistic weapon model in Half-Life: Alyx (2020). But Synapse is a run-and-gun game, so the developers took a totally different approach and landed on a reloading system that’s fast-paced but still engaging.

Instead of making players mess with inventory and chambering, the magazines in this game just pop out and float there. To reload, just slide them back into the weapon. It might seem silly, but it works in the game’s sci-fi context and reduces reloading complexity while maintaining much of the fun and game flow that comes with it.

And now we can see how this pairs so beautifully with the game’s cover system.

The game’s cover system occupies one of your hands. So how can you reload? Pushing your magazine against the wall to reload your gun is the perfect solution, allowing players to use both systems at the same time.

But guess what? This isn’t just a really clever design, it’s yet another way that you can engage with the wall—as if it’s actually there in front of you. You need to know if your arm is close enough to the wall if you’re going to use it to reload. So again, your brain starts to incorporate walls and their proximity into your proprioceptive model. You start to truly sense the space between your body and the wall.

So both of these things—being able to use walls to pull yourself in and out of cover, and being able to use walls to push a magazine into your gun—make walls feel more real because you interact with them up close and in a meaningful way.

And here’s the thing. When the world around you starts to feel more real, you start to feel more convinced that you’re actually standing inside of it. That’s embodiment. And let’s remember: virtual worlds are always ‘immersive’ because they necessarily have our full attention. But embodiment goes beyond what we see—it’s about what we feel.

And when it comes to reaching out and touching the world… Synapse takes things to a whole new level with its incredible telekinesis system.

Continue on Page 2: Extend Your Reach »

The post Why “Embodiment” is More Important Than “Immersion” – Inside XR Design appeared first on Road to VR.