Hands-on: Virtuix Omni One Comes Full Circle with an All-in-one VR Treadmill System

As far as VR treadmills go, Virtuix is the OG. While the company had set out to make a consumer VR treadmill a decade ago, market realities pushed the company into the out-of-home VR attraction space. But after all these years the company remains dead-set on selling a VR treadmill to consumers, and this time around it’s taking an all-in-one approach with the new Virtuix Omni One. I visited the company’s Austin, Texas headquarters to try it for myself.

The Virtuix Omni Backstory

Image courtesy Virtuix

The original Virtuix Omni treadmill started life way back in 2013 as a wooden prototype built by a small group led by CEO Jan Goetgeluk. Thus the core idea was conceived a full three years before the first wave of consumer VR headsets appeared on the market in 2016.

The idea itself is simple. What if you had a treadmill on which you could run in any direction? With such a treadmill and a VR headset on your head, you could move your body and feel like you were really moving through the virtual world.

The execution of this idea, however, has been anything but simple.

Treadmills tend to be large, heavy, and expensive devices. And the Virtuix Omni was no exception. Although the company set out initially to build a device for consumers, the reality of the cost and complexity of such a device made it a challenging sell beyond early adopters. The ahead-of-its-time treadmill also suffered another key issue for the consumer VR space: the ‘ring’ support design prevented players from having a full range of motion, which made the treadmill a non-starter for many consumer VR games that expected players to be able to crouch, reach down to the ground, or move their arms around at their waist (where many games commonly place holsters for key items).

These challenges forced the company to pivot toward the out-of-home VR attraction space. Thus, the Omni Arena—a huge VR attraction that includes a pod of four of the company’s VR treadmills for multiplayer gameplay with custom content—was born. The system would go on to be installed in 73 entertainment spaces across the US and has become Virtuix’s bread-and-butter business.

Image courtesy Virtuix

Virtuix realized early on that VR was, at this stage, a fairly clunky proposition. Only early enthusiasts and computer experts had the skills and patience to set up and troubleshoot even consumer VR systems, let alone one that cobbled together complex hardware like a headset and VR treadmill. Expecting arcade attendants to figure out how to keep a system of four Virtuix Omni treadmills, VR headsets, and an array of networked computers running just wasn’t realistically going to work at scale.

That led the company to build Omni Arena like a giant all-in-one VR arcade. The company has impressively customized literally every step of the customer’s journey through the experience. From the moment they step into the enclosure they’re guided by video screen prompts about what they’re going to experience, how to slip on their special shoes, and how to get into the Virtuix Omni treadmill once it’s their turn.

Photo by Road to VR

The same, if not more, care has been paid to the operator’s experience. Omni Arena has everything it needs to be a self-sustaining VR attraction. It doesn’t just come with the four treadmills, but also four headsets, controllers (with charging pods), SteamVR tracking base stations, and all the hardware to run the networked VR experiences and the pod’s software itself, which not only manages all of the connected devices but even captures footage of players (both in and out of the game) and emails it to them as a memento of their experience. It also turns routine troubleshooting steps, like restarting headsets, computers, or SteamVR, into a simple touchscreen button press through a custom interface for the operator. Omni Arena is truly an all-in-one product.

Omni Arena’s custom software makes it easy to manage all the computers and hardware that power the experience. | Photo by Road to VR

For a small company, Virtuix’s ability to focus on the holistic experience of its product is both rare and impressive.

Coming Full Circle

With the many lessons learned about creating an all-in-one experience for the out-of-home VR attraction space, the company is turning its attention back to the consumer realm with a brand new product—Virtuix Omni One.

Image courtesy Virtuix

With Omni One, Virtuix isn’t selling a VR treadmill. It’s selling an all-in-one system that includes the newly designed VR treadmill, a VR headset, and access to a library of custom-made content. It’s an ambitious approach, but one that reflects Virtuix’s ability to identify and address key problems with the overall experience it wants to deliver to customers.

The original ring design of the Omni meant players couldn’t crouch or have full movement of their arms around their waist. | Photo by Road to VR

One of the key problems the company identified was the way that the original Omni design made compatibility with modern VR content a challenge. The support ring around the player meant their movement was restricted, limiting their ability to crouch, lean, and move their arms with complete freedom.

That ‘simple’ problem necessitated a complete redesign of the treadmill. The Omni One now uses an arm support design that always stays behind the user. This gives you the ability to have a full range of motion while also running in any direction. The arm doesn’t actively hold you upright, but it provides the force that prevents you from running straight off the edge of the treadmill.

Another problem the company identified in its goal of delivering a consumer VR treadmill is the complexity of existing PC VR systems and getting players into the right content.

Even if an Omni One customer were already an expert in PC VR and willing to put up with technical annoyances, having a tether to the computer means worrying about the user wrapping themselves up in the cable (or asking them to rig up a ceiling-mounted cable management system).

Though the Omni One can still technically be used with a PC VR setup, this challenge pushed Virtuix to pair its treadmill with a standalone VR headset out of the box (Pico Neo 3, specifically). And it’s not just any headset, but one equipped with a custom-made Omni storefront serving up content that’s specifically made or adapted for the VR treadmill. The company even built its own ‘first steps’ experience, a surprisingly well-made onboarding that introduces users to the magic of VR and teaches them how to move and feel comfortable with their controllers and treadmill.

And although sticker-shock has always been a challenge for Virtuix, the Omni One is actually not an unreasonable price… if you think of it as what it truly is: a treadmill that will give you a workout.

Typical exercise treadmills range in price from $500 to $2,000 or more. Omni One will be priced at $2,600, including the $700 Pico Neo 3 headset (which the company stresses can also be used as a standard Pico headset, including PC VR streaming). That leaves the treadmill itself at $1,900, the cost of a high-end exercise treadmill. The company is also promising an option to finance the Omni One for $65 per month.

And for those that really believe in Virtuix and its vision, through the company’s crowd-investment campaign it is offering a 20% discount on Omni One (or more, depending upon the amount invested). The campaign has raised $4.4 million to date.

Continue on Page 2: Omni One Hands-on »

The 20 Best Rated & Most Popular Quest Games & Apps – July 2023

While Meta doesn’t offer much publicly in the way of understanding how well individual games & apps are performing across its Quest 2 storefront, it’s possible to glean some insight by looking at apps relative to each other. Here’s a snapshot of the 20 best rated Oculus Quest games and apps as of July 2023.

Some quick qualifications before we get to the data:

  • Paid and free apps are separated
  • Only apps with more than 100 reviews are represented
  • App Lab apps are not represented (see our latest Quest App Lab report)
  • Rounded ratings may appear to show ‘ties’ in ratings for some applications, but the ranked order remains correct

Best Rated Oculus Quest 2 Games & Apps – Paid

The rating of each application is an aggregate of user reviews and a useful way to understand the general reception of each title by customers.

Rank Name Rating (# of ratings) Rank Change Price
#1 The Room VR: A Dark Matter 4.89 (12,719) ↑ 1 $30
#2 Moss: Book II 4.88 (631) ↓ 1 $30
#3 Puzzling Places 4.86 (1,839) $15
#4 Walkabout Mini Golf 4.86 (10,442) $15
#5 I Expect You To Die 2 4.85 (2,861) $25
#6 Vermillion – VR Painting 4.82 (693) New $20
#7 Budget Cuts Ultimate 4.82 (141) New $30
#8 Swarm 4.81 (2,393) ↓ 2 $25
#9 I Expect You To Die 4.81 (5,375) $25
#10 ARK and ADE 4.81 (146) ↑ 3 $10
#11 COMPOUND 4.8 (516) ↓ 4 $20
#12 Moss 4.8 (6,591) ↓ 2 $20
#13 Red Matter 2 4.8 (1,234) ↑ 2 $30
#14 GOLF+ 4.8 (20,642) ↑ 4 $30
#15 Cubism 4.79 (809) ↓ 1 $10
#16 Ancient Dungeon 4.79 (990) $20
#17 Ragnarock 4.79 (1,308) ↓ 5 $25
#18 Pistol Whip 4.78 (9,658) ↑ 2 $30
#19 YUKI 4.78 (217) ↑ 3 $20
#20 Into the Radius 4.78 (4,553) ↓ 1 $30

Rank change & stats compared to May 2023

Dropouts:
PatchWorld – Make Music Worlds, DYSCHRONIA: Chronos Alternate, Eye of the Temple

  • Among the 20 best rated Quest apps
    • Average rating (mean): 4.8 out of 5 (±0)
    • Average price (mean): $23 (±$0)
    • Most common price (mode): $30 (±$0)
  • Among all paid Quest apps
    • Average rating (mean): 4.2 out of 5 (±0)
    • Average price (mean): $20 (±$0)
    • Most common price (mode): $20 (±$0)

Continue on Page 2: Most Popular Paid Oculus Quest Apps »

‘Horizon Call of the Mountain’ Behind-the-scenes – Insights & Artwork from Guerrilla & Firesprite

It’s a rare treat when we get a VR game with the scope and scale of Horizon Call of the Mountain, let alone to see a much-loved IP reimagined specifically for the medium. Made exclusively for PSVR 2, the game was built collaboratively between studios Guerrilla Games and Firesprite, both part of PlayStation Studios. We sat down to speak with Alex Barnes, Game Director at Firesprite, to learn more about how Horizon Call of the Mountain came to be and how it turned out to be one of our best-rated VR games in recent memory.

Editor’s Note: The exclusive artwork peppered throughout this article is best viewed on a desktop browser with a large screen or in landscape orientation on your phone. All images courtesy Guerrilla Games & Firesprite.

Gameplay clips may not appear with cookies disabled; click ‘View clip’ to see them in a separate window.

Moving a Mountain

Horizon Call of the Mountain is, of course, a Horizon game. With that comes the expectation that it will look, feel, and sound like the other two titles in Guerrilla’s lauded franchise. That meant the two studios had to work in close collaboration to deliver on the vision.

“Call of the Mountain was an incredibly collaborative project, with both Firesprite and Guerrilla working really closely to develop the game,” Barnes explains. “The bulk of the content creation and gameplay teams were over with Firesprite, with Guerrilla holding the original vision for the game and helping direct elements, such as the narrative and art, to create a game that was genuinely grounded in the world of Horizon. We had folks from both teams hands-on at different times and were in constant communication with each other throughout development.”

Even though the game would need to be built as a VR-native title, the studios wanted to ensure that it represented the elements of a Horizon game without being too attached to every Horizon gameplay trope, regardless of whether or not it fit within VR.

“The core of the gameplay was pretty set from the initial idea for the game. We wanted climbing, crafting, exploration, interaction and combat to be the mainstay of everything that we built. That meant freedom of movement and ‘real-feel’ physical interactions like climbing and bow combat were so crucial that we got feeling great for all types of players,” Barnes says. “Early on, we did look into doing some more wide-ranging gameplay elements to descend from the mountaintops, but ultimately these elements really ended up distracting from the overall gameplay experience, so they didn’t make their way into the released game.”

The bow is central to the game’s combat, so the teams gave it tons of interesting detail. | View clip

Come One, Come All

Another important goal was building a game that anyone could play—whether experienced with VR or not—and to leave a real impression.

“We knew this could be players’ first experience with PSVR 2 and, in some cases, even with VR. That meant building gameplay systems that people could just pick up, play and quickly understand so that we could fully immerse the player in the world,” Barnes says. “We are also big lovers of VR ourselves, and so it became a goal of everyone to blow new players away to show them how amazing a truly VR experience is, especially on this incredible new hardware.”

Building for experienced and new VR players alike also meant rethinking the options for how people would move in the game. This was also driven by the developers themselves, some of whom couldn’t tolerate much traditional stick movement in VR. This pushed the studio to come up with an ‘arm-swinger’ locomotion scheme, which I personally felt was both more comfortable and more immersive than pure stick movement.

“Comfort in VR is an incredibly personal thing, and locomotion is such a big part of that. For some of the team, the stick-based movement was difficult to get comfortable with. So the motion mimetic system of moving the player’s arms was conceptualised as a way to help add a layer of comfort that allowed people who were less familiar with VR to play for longer and stay comfortable whilst they did,” says Barnes.
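For the technically curious, a motion-mimetic scheme like this can be approximated simply: sample the vertical speed of each controller and map the swing intensity to forward movement. This is a hypothetical sketch of the general technique, not Firesprite’s actual implementation; the `gain` and `max_speed` values are invented for illustration.

```python
def arm_swing_speed(left_vel_y, right_vel_y, gain=1.5, max_speed=4.0):
    """Map the average vertical swing speed of the two controllers (m/s)
    to a forward movement speed, clamped to a comfortable maximum."""
    swing = (abs(left_vel_y) + abs(right_vel_y)) / 2.0
    return min(gain * swing, max_speed)

# Idle hands produce no movement; vigorous swinging hits the speed cap.
print(arm_swing_speed(0.0, 0.0))    # stationary
print(arm_swing_speed(1.0, 1.0))    # walking pace
print(arm_swing_speed(10.0, 10.0))  # clamped sprint
```

Because the player’s own physical effort drives the motion, the vestibular mismatch that makes stick movement uncomfortable for some players is reduced.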

The player’s gloves also act as a diegetic health bar thanks to the green leaf-like segments

Continue on Page 2: For Fun’s Sake »

Hands-on: Apple Vision Pro isn’t for Gaming, But it Does Everything Else Better

While Apple’s new Vision Pro headset isn’t going to satisfy the existing base of consumer VR users, it’s mastering the rest of the basics better than anyone else.

Probably 90% of what consumers are using VR headsets for today is entertainment, and of that entertainment, most of it is gaming. And if you’re among those people using such headsets today, you’ll reasonably be disappointed that Apple Vision Pro lacks controllers and isn’t going to be playing top VR games anytime soon. But for everyone else, it’s a back-to-basics approach that’s laying a sturdy foundation to build upon in the future.

Today at Apple’s headquarters I got to check out Vision Pro for myself. Unfortunately the company didn’t permit any photos or footage during the demo, but the clips below are a fair representation of what I saw.

Photo by Road to VR

Apple Vision Pro (AVP, let’s call it) is doing what only Apple can: carving out a subset of what other devices do, and making sure that subset of things is done really well. And given the current state of UX on most other headsets, this is a reckoning that was a long time coming.

Look & Tap

It starts with the input. Apple is leaning heavily into using your eyes as a cursor, and a pinch gesture as a click. The headset has cameras on the bottom that face downward so that even subtle pinches from your hand in your lap are visible and detected. But you don’t see a floating cursor where your eyes are, nor do you see a laser pointer shooting out of your hand. You just look at the thing you want to press, then do a quick pinch.

On paper you might think this sounds shoddy. But remember, this is Apple. They’ve tested and refined this system six ways from Sunday, and it works so well that after a minute or two you hardly think about how you’re interacting with the headset, you just are.

The pinch input is responsive and reliable. It felt so natural that the two or three times the headset missed my pinch during a 30-minute demo, it felt really weird because my brain was already convinced of its reliability.

This look-and-pinch system is so simple for the headset’s basic input that I won’t be surprised if we see other companies adopt it as soon as possible.

Reality First

So there’s the simple input and then there’s a passthrough-by-default view. This is an MR headset after all, meaning it can easily do augmented reality—where most of your view is of the real world, with some virtual content; or virtual reality—where all of your view is virtual content.

When you put AVP on your head, you instantly see the outside world first. In fact, the way that Apple defers to the passthrough view shows that it wants to treat fully immersive experiences as the exception rather than the rule. Generally you won’t pop into a fully immersive scene unless you actively make the decision to do so.

The passthrough view is certainly best-in-class, but we’re still probably two generations away from it truly feeling like there’s nothing separating your eyes from the real world. That said, I was able to read all the text on my phone with no issue, which has been the ‘bar’ for passthrough quality that I’ve been waiting to see cleared.

Beautiful Virtual Displays

The imperfect passthrough resolution somewhat belies the exceptional display resolution, which exhibits not even a hint of screen-door effect. It may not be ‘retina resolution’ (generally agreed to be around 60 pixels per degree), but it’s good enough that I won’t know how far off it is from retina resolution until I sit down with an objective test target.

That’s a long way of saying that the headset’s display has excellent resolution with great clarity across the lens. Top of the class.

This clarity is helped by the fact that Apple has done its Apple-y thing and ensured that panels, text, and images consistently render with superb quality. The entire interface feels iOS-polished, with animations and easy-to-use buttons and controls. The interface was so simple to use that the demo chaperones had a hard time keeping me on task as I wanted to flick through menus and move floating apps around the room.

But here’s the thing: probably 75% of what Apple showed me was essentially just floating screens. Whether it was videos or a floating iMessage app or the web browser, it’s clear that Apple wants Vision Pro to be, first and foremost, great at displaying flat content to the user.

The other 25% of what I saw, while very impressive all around, felt like just the start of a journey for Apple to build out a broader library of immersive experiences.

Record & Rewatch Memories

AVP might not be a VR gaming headset, but it does at least one thing that no other headset does: capture volumetric memories using its on-board cameras. Using the button on the top of the headset you can capture volumetric photos and videos with just a press.

Apple showed me a demo of a volumetric video capture of a group of kids blowing out candles on a birthday cake. It was like they were right in front of me. I’d never even seen these kids before but I could immediately feel their giddy emotions as they giggled and bounced around… as if I was sitting right there while it was happening. Not to mention that the quality was good enough, at least in this best-case-scenario demo capture, that my first thought had nothing to do with the framerate or quality or dynamic range, but purely of the emotion of the people in front of me.

That instant connection—to people I don’t even know—was a clear indicator that there’s something special to this. I can already imagine watching a volumetric video of a cherished memory, or of a loved one that has passed, and I know it would be a powerful experience.

Doing it Right

And here’s the thing: I’ve seen plenty of volumetric video demos before. This isn’t a new idea, not even close. The thing that’s novel here is that everyday users could potentially shoot these videos on their own, and readily watch, share, and store them for later. On other headsets you’d need a special camera for capturing, special software for editing, a player app, and a sharing app to make the same thing happen.

This is the ‘ecosystem’ part of XR that’s missing from most other headsets. It’s not about what’s possible—it’s about what’s easy. And Apple is focused on making using this headset easy.

Continue on Page 2: Immersion Isn’t Off the Table »

Hands-on: CREAL’s Light-field Display Brings a New Layer of Immersion to AR

More than four years after I first caught wind of their tech, CREAL’s light-field display continues to be one of the most interesting and promising solutions for bringing light-fields to immersive headsets. At AWE 2023 I got to check out the company’s latest tech and saw first hand what light-fields mean for immersion in AR headsets.

More Than One Way to Focus

So first, a quick recap. A light-field is a fundamentally different way of showing light to your eyes compared to the typical displays used in most headsets today. The key difference is about how your eyes can focus on the virtual scene.

Your eyes have two focus methods. The one most people are familiar with is vergence (also called stereoscopy), where both eyes point at the same object to bring overlapping views of that object into focus. This is also what makes things look ‘3D’ to us.

But each individual eye is also capable of focusing in a different way by bending the lens of the eye to focus on objects at different distances—the same way that a camera with only one lens focuses. This is called accommodation.

Vergence-Accommodation Conflict

Most XR headsets today support vergence (stereoscopic focus), but not accommodation (single-eye focus). You may have heard this called the Vergence-Accommodation Conflict, known to the industry as ‘VAC’ because it’s a pervasive challenge for immersive displays.

The reason for the ‘conflict’ is that normally the vergence and accommodation of your eyes work in tandem to achieve optimal focus on the thing you want to look at. But in a headset that supports vergence but not accommodation, your eyes need to break these typically synchronous functions into independent ones.

It might not be something you ‘feel’ but it’s the reason why in a headset it’s hard to focus on things very near to you—especially objects in your hands that you want to inspect up close.
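To put rough numbers on the geometry (an illustrative calculation, not from the article): the vergence angle between the eyes’ lines of sight depends on the distance to the object being viewed, assuming a typical interpupillary distance of about 63 mm. The angle swings dramatically for near objects while barely changing at a distance, which is why the conflict is most noticeable for things held close.

```python
import math

def vergence_angle_deg(distance_m, ipd_m=0.063):
    """Angle (degrees) between the two eyes' lines of sight when both
    converge on a point at the given distance straight ahead."""
    return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

# An object in your hand (~0.3 m) demands ~12° of convergence, while one
# across the room (~2 m) needs under 2°; meanwhile a conventional headset's
# optics hold accommodation fixed at one focal distance regardless.
print(round(vergence_angle_deg(0.3), 1))
print(round(vergence_angle_deg(2.0), 1))
```

A light-field display lets accommodation track these distances too, so both focus mechanisms stay in step as they do in the real world.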

The conflict between vergence and accommodation is not just uncomfortable for your eyes; in a surprising way, it can also rob the scene of immersion.

Creal’s Solution

And this is where we get back to Creal, a company that wants to solve the Vergence-Accommodation Conflict with a light-field display. Light-field displays structure light in the same way that we see it in the real world, allowing both of the focus functions of the eyes—vergence and accommodation—to work in tandem as they normally do.

At AWE 2023 this week, I got to check out the company’s latest light-field display tech, and came away with an added sense of immersion that I haven’t felt in any other AR headset to date.

I’ve seen Creal’s static bench-top demos before, which show static floating imagery through the lens to a single eye, demonstrating that you can indeed focus (accommodate) at different depths. But you won’t really see the magic until you see a light-field with both eyes and head-tracking. Which is exactly what I got to do this week at AWE.

Photo by Road to VR

On an admittedly bulky proof-of-concept AR headset, I got to see the company’s light-field display in its natural habitat—floating immersively in front of me. What really impressed me was when I held my hand out and a little virtual turtle came floating over to the palm of my hand. Even though it was semi-transparent, and not exceptionally high resolution or accurately colored, it felt… weirdly real.

I’ve seen all kinds of immersive XR experiences over the years, and holding something in your hand sounds like a banal demo at this point. But there was just something about the way this little turtle looked—thanks to the fact that my eyes could focus on it in the same way they would in the real world—that made it feel more real than I’ve ever really felt in other headsets. Like it was really there in my hand.

Photo by Road to VR

The trick is that, thanks to the light-field, when I focused my eyes on the turtle in my hand, both the turtle (virtual) and my hand (real) were each in proper focus—something that isn’t possible with conventional displays—making both my hand and the turtle feel more like they were inhabiting the same space right in front of me.

It’s frustratingly impossible to explain exactly how it appeared via text alone; this video from Creal shot through-the-lens gives some idea of what I saw, but can’t quite show how it adds immersion over other AR headsets:

It’s a subtle thing, and the added immersion probably only meaningfully impacts objects within arm’s reach or closer—but then again, that distance is where things have the potential to feel most real to us, because they’re in our carefully watched personal space.

Digital Prescriptions

Beyond just adding a new layer of visual immersion, light-field displays stand to solve another key problem: vision correction. Most XR headsets today don’t support any kind of prescription vision correction, which for perhaps more than half of the population means they either need to wear their corrective lenses while using these devices, buy some kind of clip-on lens, or just suffer through a blurry image.

But the nature of light-fields means you can apply a ‘digital prescription’ to the virtual content that exactly matches the user’s corrective prescription. And because it’s digital, this can be done on-the-fly, meaning the same headset could have its digital corrective vision setting change from one user to the next. Doing so means the focus of the virtual image can match the real-world image for those with and without glasses.

Continue on Page 2: A More Acceptable Form-factor »

5 VR Games We’re Most Excited for From Quest Gaming Showcase

Meta dumped an avalanche of VR news today in its hour-long Quest Gaming Showcase livestream, revealing trailers and info on more than a dozen new games coming to Quest 2, Quest Pro, and probably also the newly unveiled Quest 3 headset.

Here’s what we’re most excited about:

Asgard’s Wrath 2

Image courtesy Sanzaru Games

That’s right, the sequel to hit Rift title Asgard’s Wrath (2019) is coming to Quest this winter, bringing with it a ton of new places to explore and a cast of new companions and puzzles. Meta’s Sanzaru Games says we should expect physics-based melee and a more intuitive combat system altogether, not to mention a brand-new realm to explore which will bring us to a reimagined Ancient Egypt. Catch the full announce here. Also, check out the trailer on YouTube (age-restricted).

I Expect You To Die 3: Cog in the Machine

We’ve known about the upcoming sequel to the hit spy-flavored puzzle game I Expect You To Die, but it seems every new trailer that pops up is just another opportunity to salivate at the Bond-style escape room’s ingenious puzzles and patently dastardly villains. It’s coming to the Quest platform and PSVR 2 “soon,” developers Schell Games says. Catch the full announce here.

Attack on Titan VR: Unbreakable

The trailer for Attack on Titan VR: Unbreakable is admittedly not gameplay, although it’s easy to see where it’s headed, as Japanese studio UNIVRS seems to be heavily suggesting gameplay elements here. It’s bringing both single player and co-op modes to the Titan-killing, swing-tastic game, letting you play in Japanese or English, subs included. It’s coming to the Quest platform in winter 2023, which is a little later than the promised summer 2023 launch window previously announced, but better late than never. Catch the full announce here.

Dungeons of Eternity

Although it’s hard to get super excited about a roguelike dungeon crawler—there are a few really great ones out there already—Dungeons of Eternity is coming to the Quest platform this year from a studio called Othergate, which was founded by a bunch of ex-Oculus Studios game designers. The 1-3 player co-op dungeon crawling RPG also incorporates physics-based combat, which is pretty refreshing to see since it focuses on melee as well as archery and magic. Catch the full announce here.

Stranger Things VR

TV show game tie-ins are pretty hit and miss (mostly miss), but we can actually vouch for the studio developing this Stranger Things VR game, coming to Quest this fall from VR pioneers Tender Claws. You may know Tender Claws for its games Virtual Virtual Reality 1 and 2 and The Under Presents, three spectacular titles that really just get what makes VR great. Be the bad guy, Vecna. Do bad shit. Catch the full announce here.

– – — – –

Assassin’s Creed: Nexus VR

Image courtesy Ubisoft

Ok, just one more, though it sits outside our main lineup since we didn’t exactly get an eye-full of Ubisoft’s upcoming Assassin’s Creed game today like we hoped. The game is now confirmed to be officially called Assassin’s Creed: Nexus VR (the only real news about AC from the showcase).

The actual reveal is said to come during Ubisoft’s Forward livestream event taking place on June 12th though, so we’re closer than ever to learning whether Ubisoft is set to faithfully translate the franchise’s high-flying, time-tripping assassin into VR. Catch the full announce here.


There were a ton of games announced today. Which one are you looking forward to the most? Let us know in the comments below!

The 20 Best Rated & Most Popular Quest Games & Apps – May 2023

While Oculus doesn’t offer much publicly in the way of understanding how well individual games & apps are performing across its Quest 2 storefront, it’s possible to glean some insight by looking at apps relative to each other. Here’s a snapshot of the 20 best rated Oculus Quest games and apps as of May 2023.

Some quick qualifications before we get to the data:

  • Paid and free apps are separated
  • Only apps with more than 100 reviews are represented
  • App Lab apps are not represented (see our latest Quest App Lab report)
  • Rounded ratings may appear to show ‘ties’ in ratings for some applications, but the ranked order remains correct

Best Rated Oculus Quest 2 Games & Apps – Paid

The rating of each application is an aggregate of user reviews and a useful way to understand the general reception of each title by customers.

Rank Name Rating (# of ratings) Rank Change Price
#1 Moss: Book II 4.89 (594) $30
#2 The Room VR: A Dark Matter 4.89 (12,603) $30
#3 Puzzling Places 4.87 (1,770) $15
#4 Walkabout Mini Golf 4.86 (10,195) $15
#5 I Expect You To Die 2 4.85 (2,757) $25
#6 Swarm 4.82 (2,341) ↑ 3 $25
#7 COMPOUND 4.81 (473) $20
#8 PatchWorld – Make Music Worlds 4.81 (160) ↑ 3 $30
#9 I Expect You To Die 4.81 (5,269) ↑ 3 $25
#10 Moss 4.8 (6,534) ↑ 3 $20
#11 DYSCHRONIA: Chronos Alternate 4.8 (368) ↓ 1 $20
#12 Ragnarock 4.8 (1,277) ↑ 4 $25
#13 ARK and ADE 4.8 (139) ↑ 2 $10
#14 Cubism 4.79 (795) ↑ 3 $10
#15 Red Matter 2 4.79 (1,174) ↓ 1 $30
#16 Ancient Dungeon 4.79 (915) ↑ 2 $20
#17 Eye of the Temple 4.79 (144) New $20
#18 GOLF+ 4.79 (18,143) ↑ 4 $30
#19 Into the Radius 4.78 (4,134) $30
#20 Pistol Whip 4.78 (9,508) ↑ 1 $30

Rank change & stats compared to April 2023

Dropouts:
Breachers, Vermillion, The Last Clockwinder

  • Among the 20 best rated Quest apps
    • Average rating (mean): 4.8 out of 5 (±0)
    • Average price (mean): $23 (±$0)
    • Most common price (mode): $30 (±$0)
  • Among all paid Quest apps
    • Average rating (mean): 4.2 out of 5 (±0)
    • Average price (mean): $20 (±$0)
    • Most common price (mode): $20 (±$0)

Continue on Page 2: Most Popular Paid Oculus Quest Apps »

Eye-tracking is a Game Changer for XR That Goes Far Beyond Foveated Rendering

Eye-tracking—the ability to quickly and precisely measure the direction a user is looking while inside of a VR headset—is often talked about within the context of foveated rendering, and how it could reduce the performance requirements of XR headsets. And while foveated rendering is an exciting use-case for eye-tracking in AR and VR headsets, eye-tracking stands to bring much more to the table.

Updated – May 2nd, 2023

Eye-tracking has been talked about with regards to XR as a distant technology for many years, but the hardware is finally becoming increasingly available to developers and customers. PSVR 2 and Quest Pro are the most visible examples of headsets with built-in eye-tracking, along with the likes of Varjo Aero, Vive Pro Eye and more.

With this momentum, in just a few years we could see eye-tracking become a standard part of consumer XR headsets. When that happens, there’s a wide range of features the tech can enable to drastically improve the experience.

Foveated Rendering

Let’s first start with the one that many people are already familiar with. Foveated rendering aims to reduce the computational power required for displaying demanding AR and VR scenes. The name comes from the ‘fovea’—a small pit at the center of the human retina which is densely packed with photoreceptors. It’s the fovea which gives us high resolution vision at the center of our field of view; meanwhile our peripheral vision is actually very poor at picking up detail and color, and is better tuned for spotting motion and contrast than seeing detail. You can think of it like a camera which has a large sensor with just a few megapixels, and another smaller sensor in the middle with lots of megapixels.

The region of your vision in which you can see in high detail is actually much smaller than most think—just a few degrees across the center of your view. The difference in resolving power between the fovea and the rest of the retina is so drastic, that without your fovea, you couldn’t make out the text on this page. You can see this easily for yourself: if you keep your eyes focused on this word and try to read just two sentences below, you’ll find it’s almost impossible to make out what the words say, even though you can see something resembling words. The reason that people overestimate the foveal region of their vision seems to be because the brain does a lot of unconscious interpretation and prediction to build a model of how we believe the world to be.

SEE ALSO
Abrash Spent Most of His F8 Keynote Convincing the Audience That 'Reality' is Constructed in the Brain

Foveated rendering aims to exploit this quirk of our vision by rendering the virtual scene in high resolution only in the region that the fovea sees, then drastically cutting down the complexity of the scene in our peripheral vision where the detail can’t be resolved anyway. Doing so allows us to focus most of the processing power where it contributes most to detail, while saving processing resources elsewhere. That may not sound like a huge deal, but as the display resolution and field-of-view of XR headsets increase, the power needed to render complex scenes grows quickly.

Eye-tracking of course comes into play because we need to know where the center of the user’s gaze is at all times, quickly and with high precision, in order to pull off foveated rendering. While it’s difficult to do this without the user noticing, it’s possible and has been demonstrated quite effectively on recent headsets like Quest Pro and PSVR 2.
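To make the idea concrete, here’s a toy sketch of how a renderer might pick a shading rate for a pixel based on the angle between that pixel’s view direction and the tracked gaze direction. The three-tier scheme and the degree thresholds are illustrative assumptions, not values from any shipping headset:

```python
import math

def shading_rate(pixel_dir, gaze_dir, inner_deg=5.0, mid_deg=15.0):
    """Pick a coarse shading rate from the angle between a pixel's view
    direction and the tracked gaze direction (both unit 3-vectors)."""
    dot = sum(a * b for a, b in zip(pixel_dir, gaze_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    if angle <= inner_deg:
        return 1   # foveal region: shade every pixel
    if angle <= mid_deg:
        return 2   # near periphery: shade every 2x2 pixel block
    return 4       # far periphery: shade every 4x4 pixel block

# Looking straight at a pixel dead-center in the gaze direction:
print(shading_rate((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # 1
```

In practice this kind of tiered scheme maps onto GPU variable rate shading, where whole screen tiles are shaded at reduced rates rather than individual pixels.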

Automatic User Detection & Adjustment


In addition to detecting movement, eye-tracking can also be used as a biometric identifier. That makes eye-tracking a great candidate for multiple user profiles across a single headset—when I put on the headset, the system can instantly identify me as a unique user and call up my customized environment, content library, game progress, and settings. When a friend puts on the headset, the system can load their preferences and saved data.

Eye-tracking can also be used to precisely measure IPD (the distance between one’s eyes). Knowing your IPD is important in XR because it’s required to move the lenses and displays into the optimal position for both comfort and visual quality. Unfortunately, many people understandably don’t know their IPD off the top of their head.

With eye-tracking, it would be easy to instantly measure each user’s IPD and then have the headset’s software assist the user in adjusting the headset’s IPD to match, or warn users that their IPD is outside the range supported by the headset.

In more advanced headsets, this process can be invisible and automatic—IPD can be measured invisibly, and the headset can have a motorized IPD adjustment that automatically moves the lenses into the correct position without the user needing to be aware of any of it, like on the Varjo Aero, for example.
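The measurement itself is simple once the tracker reports pupil positions. A minimal sketch, assuming hypothetical tracker output of pupil-center positions in headset-space millimeters (the function names and the adjustment range are illustrative, not any vendor’s API):

```python
import math

def measure_ipd_mm(left_pupil, right_pupil):
    """Estimate IPD as the straight-line distance between tracked
    pupil centers, given in headset-space millimeters."""
    return math.dist(left_pupil, right_pupil)

def ipd_in_range(ipd_mm, lens_min=58.0, lens_max=72.0):
    """Check the measured IPD against the headset's mechanical
    lens-adjustment range (the range used here is illustrative)."""
    return lens_min <= ipd_mm <= lens_max

ipd = measure_ipd_mm((-31.5, 0.0, 0.0), (31.5, 0.0, 0.0))
print(ipd, ipd_in_range(ipd))  # 63.0 True
```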

Varifocal Displays

A prototype varifocal headset | Image courtesy NVIDIA

The optical systems used in today’s VR headsets work pretty well but they’re actually rather simple and don’t support an important function of human vision: dynamic focus. This is because the display in XR headsets is always the same distance from our eyes, even when the stereoscopic depth suggests otherwise. This leads to an issue called vergence-accommodation conflict. If you want to learn a bit more in depth, check out our primer below:

Accommodation

Accommodation is the bending of the eye’s lens to focus light from objects at different distances. | Photo courtesy Pearson Scott Foresman

In the real world, to focus on a near object the lens of your eye bends to make the light from the object hit the right spot on your retina, giving you a sharp view of the object. For an object that’s further away, the light is traveling at different angles into your eye and the lens again must bend to ensure the light is focused onto your retina. This is why, if you close one eye and focus on your finger a few inches from your face, the world behind your finger is blurry. Conversely, if you focus on the world behind your finger, your finger becomes blurry. This is called accommodation.

Vergence

Vergence is the inward rotation of each eye to overlap each eye’s view into one aligned image. | Photo courtesy Fred Hsu (CC BY-SA 3.0)

Then there’s vergence, which is when each of your eyes rotates inward to ‘converge’ the separate views from each eye into one overlapping image. For very distant objects, your eyes are nearly parallel, because the distance between them is so small in comparison to the distance of the object (meaning each eye sees a nearly identical portion of the object). For very near objects, your eyes must rotate inward to bring each eye’s perspective into alignment. You can see this too with our little finger trick as above: this time, using both eyes, hold your finger a few inches from your face and look at it. Notice that you see double-images of objects far behind your finger. When you then focus on those objects behind your finger, now you see a double finger image.

The Conflict

With precise enough instruments, you could use either vergence or accommodation to know how far away an object is that a person is looking at. But the thing is, both accommodation and vergence happen in your eye together, automatically. And they don’t just happen at the same time—there’s a direct correlation between vergence and accommodation, such that for any given measurement of vergence, there’s a directly corresponding level of accommodation (and vice versa). Since you were a little baby, your brain and eyes have formed muscle memory to make these two things happen together, without thinking, anytime you look at anything.

But when it comes to most of today’s AR and VR headsets, vergence and accommodation are out of sync due to inherent limitations of the optical design.

In a basic AR or VR headset, there’s a display (which is, let’s say, 3″ away from your eye) which shows the virtual scene, and a lens which focuses the light from the display onto your eye (just like the lens in your eye would normally focus the light from the world onto your retina). But since the display is a static distance from your eye, and the lens’ shape is static, the light coming from all objects shown on that display is coming from the same distance. So even if there’s a virtual mountain five miles away and a coffee cup on a table five inches away, the light from both objects enters the eye at the same angle (which means your accommodation—the bending of the lens in your eye—never changes).

That comes in conflict with vergence in such headsets which—because we can show a different image to each eye—is variable. Being able to adjust the image independently for each eye, such that our eyes need to converge on objects at different depths, is essentially what gives today’s AR and VR headsets stereoscopy.

But the most realistic (and arguably, most comfortable) display we could create would eliminate the vergence-accommodation issue and let the two work in sync, just like we’re used to in the real world.

Varifocal displays—those which can dynamically alter their focal depth—are proposed as a solution to this problem. There are a number of approaches to varifocal displays; perhaps the simplest is an optical system where the display is physically moved back and forth relative to the lens in order to change focal depth on the fly.

Achieving such an actuated varifocal display requires eye-tracking because the system needs to know precisely where in the scene the user is looking. By tracing a path into the virtual scene from each of the user’s eyes, the system can find the point that those paths intersect, establishing the proper focal plane that the user is looking at. This information is then sent to the display to adjust accordingly, setting the focal depth to match the virtual distance from the user’s eye to the object.
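In practice the two gaze rays rarely intersect exactly, so the lookup can be treated as a closest-point-of-approach calculation between the rays. A simplified sketch of that idea (real systems would also need to filter noisy gaze samples):

```python
def focal_depth(eye_l, dir_l, eye_r, dir_r):
    """Return the 3D point midway between the two gaze rays at their
    closest approach, or None if the rays are (near-)parallel, i.e.
    the user is looking at infinity. Inputs are hypothetical tracker
    output: per-eye origin and gaze direction vectors."""
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    w0 = sub(eye_l, eye_r)
    a, b, c = dot(dir_l, dir_l), dot(dir_l, dir_r), dot(dir_r, dir_r)
    d, e = dot(dir_l, w0), dot(dir_r, w0)
    denom = a * c - b * b              # zero when the rays are parallel
    if abs(denom) < 1e-9:
        return None
    t = (b * e - c * d) / denom        # parameter along the left ray
    s = (a * e - b * d) / denom        # parameter along the right ray
    p_l = tuple(o + t * v for o, v in zip(eye_l, dir_l))
    p_r = tuple(o + s * v for o, v in zip(eye_r, dir_r))
    return tuple((x + y) / 2 for x, y in zip(p_l, p_r))

# Eyes 64mm apart, both converging on a point 1m straight ahead:
print(focal_depth((-0.032, 0, 0), (0.032, 0, 1.0),
                  (0.032, 0, 0), (-0.032, 0, 1.0)))  # ≈ (0.0, 0.0, 1.0)
```

The distance from the eyes to the returned point is what an actuated display would use to set its focal depth.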

SEE ALSO
Oculus on Half Dome Prototype: 'don't expect to see everything in a product anytime soon'

A well implemented varifocal display could not only eliminate the vergence-accommodation conflict, but also allow users to focus on virtual objects much nearer to them than in existing headsets.

And well before we’re putting varifocal displays into XR headsets, eye-tracking could be used for simulated depth-of-field, which could approximate the blurring of objects outside of the focal plane of the user’s eyes.

As of now, there’s no major headset on the market with varifocal capabilities, but there’s a growing body of research and development trying to figure out how to make the capability compact, reliable, and affordable.

Foveated Displays

While foveated rendering aims to better distribute rendering power between the part of our vision where we can see sharply and our low-detail peripheral vision, something similar can be achieved for the actual pixel count.

Rather than just changing the detail of the rendering on certain parts of the display vs. others, foveated displays are those which are physically moved (or in some cases “steered”) to stay in front of the user’s gaze no matter where they look.

Foveated displays open the door to achieving much higher resolution in AR and VR headsets without brute-forcing the problem by trying to cram pixels at higher resolution across our entire field-of-view. Doing so would not only be costly, but would also run into challenging power and size constraints as the number of pixels approaches retina resolution. Instead, foveated displays would move a smaller, pixel-dense display to wherever the user is looking based on eye-tracking data. This approach could even lead to a wider field-of-view than could otherwise be achieved with a single flat display.

A rough approximation of how a pixel-dense foveated display looks against a larger, much less pixel-dense display in Varjo’s prototype headset. | Photo by Road to VR, based on images courtesy Varjo

Varjo is one company working on a foveated display system. They use a typical display that covers a wide field of view (but isn’t very pixel dense), and then superimpose a microdisplay that’s much more pixel dense on top of it. The combination of the two means the user gets both a wide field of view for their peripheral vision, and a region of very high resolution for their foveal vision.

Granted, this foveated display is still static (the high resolution area stays in the middle of the display) rather than dynamic, but the company has considered a number of methods for moving the display to ensure the high resolution area is always at the center of your gaze.

Continued on Page 2: Better Social Avatars »

The Hidden Design Behind the Ingenious Room-Scale Gameplay in ‘Eye of the Temple’

Eye of the Temple is one of the rare VR games that focuses not just on pure room-scale movement, but on dynamic room-scale movement. The result is a uniquely immersive experience that required some clever design behind the scenes to make it all work. This guest article by developer Rune Skovbo Johansen explains the approach.

Guest Article by Rune Skovbo Johansen

Rune Skovbo Johansen is a Danish independent game developer based in Turku, Finland. His work spans games and other interactive experiences, focused on tech, wonder, and exploration. After positive reception of the 2016 VR game jam game Chrysalis Pyramid, he started working on a more ambitious spiritual successor, Eye of the Temple, and at the end of 2020 he quit his day job to pursue indie game development full-time.

In Eye of the Temple, you move through a vast environment, not by teleportation or artificial locomotion, but by using your own feet. It makes unique use of room-scale VR to deliver an experience of navigating an expansive space.

In Eye of the Temple you move around large environments using your own feet

But how does it work behind the scenes? To mark the upcoming release of Eye of the Temple on Quest 2, I wanted to take the time to explain aspects of the game’s design that I’ve never fully gone into detail on before. In this article we’ll go over a variety of the tricks the game uses to make it all work. Let’s start with the basics of keeping the player in the play area.

Keeping the Player in the Play Area

Say you need to go from one tall pillar in the game to another via a moving platform. You step forward onto the platform, the platform moves, and then you step forward onto the next pillar. But now you’re outside your physical play area.

Moving platforms are positioned in a way to keep players inside the play area

If we instead position the moving platform to the side, it goes like this: You sidestep onto the platform, it moves, and you sidestep onto the next pillar. Since you took a step right, and then left, you’re back where you started in the center of the play area. So the game’s tricks are all about how the platforms are positioned relative to each other.

Now, to get a better sense for it, let’s look at some mixed reality footage (courtesy of Naysy) where a grid representing the play area is overlaid on top.

Mixed reality footage with a grid overlaid on top which represents the play area

Keeping an Overview in the Level Design

Now that we’ve seen how the trick works, let’s take a look at how I keep track of it all when doing the level design for the game. First things first – I made this pattern, which represents the player’s entire play area – or the part of it the game takes advantage of anyway:

A pattern representing the physical play area

As you can see, there’s a thick white border along the edge, and a thick circle in the center.

Every platform in the game has a designated spot in the play area and a pattern overlay that shows what that spot is. For platforms that are a single tile large, it’s generally one of nine positions. The overlay makes it easy to see if a given platform is positioned in the center of the play area, or at an edge or corner.

The play area pattern overlaid on each platform and its end positions make it easy to see if they are lined up correctly in the level design

Additional overlays show a ghostly version of the pattern at both the start and end positions of a moving platform. This is the real trick of keeping track of how the platforms connect together, because these ghostly overlays at the end positions make it trivial to see if the platforms are lined up correctly in the level design when they touch each other. If the adjacent ghostly patterns are continuous like puzzle pieces that fit together, then the platforms work correctly together.

It still took a lot of ingenuity to work out how to position all the platforms so they both fit correctly together and also take the player where they need to go in the virtual world, but now you know how I kept the complexity of it manageable.
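The alignment rule itself can be sketched in a few lines. Treating the play area as a 3×3 grid of cells, a step between adjacent single-tile platforms keeps the player inside the play area only when the destination platform’s designated cell sits in the same direction as the physical step. This is a rough illustration of the rule described above, not the game’s actual code:

```python
# Play-area cells for single-tile platforms: (col, row) in a 3x3 grid,
# with (1, 1) being the center of the play area.
def step_is_valid(cell_a, cell_b, world_step):
    """Return True if stepping from platform A to adjacent platform B
    (world_step is the step direction, e.g. (1, 0) for one tile to the
    right) keeps the player inside the play area: B's designated cell
    must be A's cell shifted the same way."""
    dx, dy = world_step
    target = (cell_a[0] + dx, cell_a[1] + dy)
    in_bounds = 0 <= target[0] <= 2 and 0 <= target[1] <= 2
    return in_bounds and cell_b == target

# Center platform, stepping right onto a platform assigned the right-edge cell:
print(step_is_valid((1, 1), (2, 1), (1, 0)))  # True
# The same step onto a platform assigned the center cell would not line up:
print(step_is_valid((1, 1), (1, 1), (1, 0)))  # False
```

The ghostly end-position overlays in the editor are essentially a visual version of this check: adjacent patterns that fit like puzzle pieces are exactly the pairs for which the step is valid.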

Getting the Player’s Cooperation

The whole premise of getting around the world via these moving platforms is based on an understanding that the player should step from one platform to another when they’re lined up, and not at other times. The most basic way the game establishes this is by just telling it outright to the player in safety instructions displayed prior to starting the game.

One of the safety instructions shown before the game begins

These instructions are shown for two reasons:

One is safety. You should avoid jumping over gaps, otherwise you would risk jumping right out of your play area and into a wall, for example.

The other is that the game’s system of traversal only works correctly when stepping from one platform to another while they line up. This is not as critical – I’ll get back later to what happens if you step onto a misaligned platform – but it still provides the best play experience.

Apart from the explicit instructions, the game also employs more subtle tricks to help ensure the player only steps over when blocks are correctly aligned. Consider the following example of a larger 2 x 2 tile static platform the player can step onto. A moving platform arrives from the side in a way that would allow the player to step off well before the platform has stopped moving, but that would break the game’s traversal logic.

In this room, ‘foot fences’ are used to discourage the player from stepping from one platform to another when they are not correctly aligned

To avoid this, “foot fences” were placed to discourage the player from stepping over onto the static platform (or away from it) at incorrect positions. The fences are purely visual and don’t technically prevent anything. The player can still step over them if they try, or right through them for that matter. However, psychologically it feels like less effort to not step over or through a fence and instead step onto the static platform where there’s a gap in the fence. In this way, a purely non-technical solution is used as part of the game’s arsenal of tricks.

Continued on Page 2: Correcting for Unaligned Platforms »
