Hands-on: Quest 3 is an Impressive Leap That’s Still Held Back by Software Struggles

Quest 3 is an impressive leap in hardware, especially in the visual department, but it continues Meta’s tradition of building great hardware that feels held back by its software.

After months of teasing and leaks, Quest 3 is finally, officially, fully announced. Pre-orders start today at $500 and the headset ships on October 10th. While you can get the full specs and details right here, the overall summary is that the headset is an improvement over Quest 2 nearly across the board:

  • Better lenses
  • Better resolution
  • Better processor
  • Better audio
  • Better passthrough
  • Better controllers
  • Better form-factor

The improvements really add up. The biggest improvement is in the visuals, where Meta finally paired the impressive pancake optics from Quest Pro with a higher resolution display, resulting in a significantly sharper image than Quest 2, with industry-leading clarity when it comes to sweet spot, glare, and distortion.

Quest 3 has two LCD displays, giving it 4.6MP (2,064 × 2,208) resolution per-eye, compared to Quest 2’s 3.5MP (1,832 × 1,920) per-eye. And even though that isn’t a massive leap in resolution, the upgraded lenses are so much sharper that the perceived difference is far greater than the pixel count alone would suggest.
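For a rough sense of scale, here’s the pixel arithmetic behind those figures (nothing beyond the numbers quoted above):

```python
# Per-eye pixel counts, using the resolutions quoted above.
quest2_pixels = 1832 * 1920   # ≈ 3.5 million pixels per-eye
quest3_pixels = 2064 * 2208   # ≈ 4.6 million pixels per-eye

increase_pct = (quest3_pixels / quest2_pixels - 1) * 100
print(f"Quest 2: {quest2_pixels / 1e6:.1f}MP | Quest 3: {quest3_pixels / 1e6:.1f}MP | +{increase_pct:.0f}%")
# Quest 2: 3.5MP | Quest 3: 4.6MP | +30%
```

A roughly 30% bump in raw pixels is meaningful but modest, which is why the lens upgrade carries so much of the perceived improvement.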

 

Photo by Road to VR

Quest 3 also has an improved IPD (distance between your eyes) function and range. A dial on the headset gives it a continuous adjustment between 58–70mm. Given the eyebox of the optics, Meta officially says the headset is suitable for any IPD between 53–75mm. And because each eye has its own display, adjusting the IPD to the far edges doesn’t sacrifice any field-of-view.

Beyond the IPD upgrade, Quest 3 is the first Quest headset with an eye-relief adjustment, which allows you to move the lenses closer to or further from your face. As a notched adjustment that can move between three different positions, it’s a little funky to adjust, but it’s a welcome addition. Ostensibly this will make the headset more adjustable for glasses users, but as someone who tends to benefit from lower eye-relief, I hope the nearest position goes close enough.

Between the upgraded IPD adjustment and eye-relief, Quest 3 is the most adjustable Quest headset so far, which means more people can dial into the optimal optical position.

Quest 3 has a slightly modified rear strap, but it’s still a soft strap in the end. A deluxe strap and deluxe strap with battery will be available (Quest 2 deluxe straps are unfortunately not forward-compatible) | Image courtesy Meta

Holistically speaking, Quest 3 has the best display system of any headset on the market to date.

The only major things that haven’t improved over Quest 2 are the default headstrap, battery life, and weight, which are all about the same. The biggest benefit of the new optics is their performance, but their more compact form also means the weight of the headset sits a little closer to your face which makes it feel a little lighter and less bulky.

Powered Up

Photo by Road to VR

When Quest 3 is firing on all cylinders—including software that’s well-optimized for its performance envelope—you’ll wonder how you ever got by with the visuals afforded by Quest 2.

Take Red Matter 2, for instance, which was already one of the best-looking games on Quest 2. Developer Vertical Robot put together a demo app, which lets you instantly switch back and forth between the game’s Quest 2 visuals and newly enhanced Quest 3 visuals, and the difference is staggering. This video gives an idea but doesn’t quite show the full impact of the visual improvements that you feel in the headset itself:

Not only are textures significantly sharper, the extra processing power also allowed the developers to add high-resolution real-time shadows which make a big difference to how grounded the virtual world feels around you.

However, the exceptionally well-optimized Red Matter 2 is a rare exception compared to most apps available on the platform. Walking Dead: Saints & Sinners, for instance, looks better on Quest 3… but still pretty rough, with blotchy textures and shimmering aliased shadows.

And this was an example that Meta specifically showed to highlight Quest 3’s improved processing power. And yes, the Walking Dead example shows that the developers used some of the extra power to put more enemies on screen. But the question here is: what good is better optical performance if the textures aren’t there to match it in the first place?

So while Quest 3 offers the potential for significantly improved visuals, the reality is that many apps on the platform won’t benefit as much from it as they could, especially in the near-term as developers continue to prioritize optimizing their games for Quest 2 because it will have the larger customer base for quite some time. Optimization (or lack thereof) is a systemic issue that is more complicated to address than just ‘throw more processing power at it’.

Quest 3 is the first headset to debut with Qualcomm’s Snapdragon XR2 Gen 2 chip, which claims up to 2.5 times the graphical performance of XR2 Gen 1, and up to 50% better efficiency for identical workloads | Photo by Road to VR

But as we all know, graphics aren’t everything. Some of the most fun games on the Quest platform aren’t the best looking out there.

But when I say that software is holding back the headset, more than half of that sentiment is driven not by the visuals of apps and games, but by the headset’s overall UI/UX.

This applies to all Quest headsets, of course, but the platform’s obtuse and often buggy interface hasn’t seen the same kind of consistent improvements that the hardware itself has seen from Quest 1 to Quest 3—which is a shame. The friction between a player’s idea of wanting to do something in the headset and how seamless (or not) it is to put on the headset and do that thing is deeply connected to how often and how long they’ll actually enjoy using the headset.

Meta has given no indication that it even acknowledges the deficiencies of the Quest UI/UX, and with the release of Quest 3 it doesn’t seem like the interface specifically will see any meaningful changes. In terms of UX at least, there are two general improvements:

Passthrough

Photo by Road to VR

Quest 3’s passthrough view is a big leap over the low-res black-and-white passthrough of Quest 2. Now with full color and higher resolution, passthrough on Quest 3 feels more like something you can use all the time (granted, I haven’t had enough time with the headset to tell if the passthrough latency is low enough to prevent motion discomfort over long periods, which was a problem for me on Quest Pro).

And while it isn’t clear to me if the software will enable passthrough by default (as it should), being able to easily see a reasonably high quality view outside of the headset is a notable UX improvement.

Not only does it make users feel less disconnected from their environment when putting on the headset (until they’re actually ready to be immersed in the content of their choice), it also makes it easier to glance at the real world without removing the headset entirely. That’s useful for talking to someone else in the room or looking to make sure a pet (or child) hasn’t walked into your playspace.

I was surprised to see that with the newly added depth sensor there’s still warping around your hands, but overall the passthrough image is much sharper and has better dynamic range. Unlike Quest Pro, I was able to at least roughly read the time and some notifications on my phone—an important part of not feeling completely disconnected from the world outside the headset.

This also opens the door to improving the flow of putting on the headset in the first place; if passthrough is enabled by default, Meta should encourage users to put on the headset first, then find their controllers (instead of awkwardly trying to fit the headset with controllers already in their hands). And when the session is over, hopefully the headset turns on passthrough and instructs people to put down their controllers first, then remove the headset. These are the kinds of UX details the company tends to miss entirely… but we’ll see.

Room Scanning

The other real UX improvement coming with Quest 3 could be automatic room scanning, which automatically creates a playspace boundary for users instead of making them create their own. I say “could be” because I didn’t have enough time in my hands-on with this feature to tell how quickly and reliably it works. More testing to come.

Similar to implementations we’ve seen on other headsets, the room scanning feature encourages users to look around their space, giving the headset time to build a map of the geometry in the room. With enough of the space scanned, a playspace boundary will be created. The same system can also be used to establish the position of walls, floors, and other geometry that can be used in mixed reality applications.
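Meta hasn’t detailed how the scanning pipeline turns that map into a boundary, but conceptually it amounts to wrapping an outline around the open floor area the headset has observed. Purely as an illustration (this is not Meta’s algorithm), here’s a minimal sketch that wraps scanned floor points in a convex hull to produce a playspace outline:

```python
# Illustration only: derive a simple playspace outline from scanned floor
# points by computing their convex hull (Andrew's monotone chain). A real
# room-scanning system does far more: plane detection, wall fitting,
# obstacle carving, and so on.

def cross(o, a, b):
    """2D cross product of vectors OA and OB; positive means a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def playspace_boundary(floor_points):
    """Return the convex hull of (x, z) floor points as a counter-clockwise outline."""
    pts = sorted(set(floor_points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build the lower half of the hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build the upper half of the hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# A handful of hypothetical scanned floor points (meters, headset-relative):
scan = [(0.0, 0.0), (2.1, 0.2), (2.0, 1.8), (0.1, 2.0), (1.0, 1.0)]
print(playspace_boundary(scan))
# [(0.0, 0.0), (2.1, 0.2), (2.0, 1.8), (0.1, 2.0)]  (interior point dropped)
```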

Paid Parking

Also worth mentioning is the optional (and fairly expensive) official Quest 3 dock. Keeping the headset and controllers powered, updated, and ready to go is a big challenge when it comes to VR friction. Having a dedicated place to put your headset and controllers that also charges them is definitely a boon to the UX.

Photo by Road to VR

This feels like something that should really be part of the package, but you’ll have to pay an extra $130 for the privilege. Hopefully we’ll see more affordable Quest 3 docks from third-parties in the near future.

Continue on Page 2: Marketing Reality »

Hands-on: Virtuix Omni One Comes Full Circle with an All-in-one VR Treadmill System

As far as VR treadmills go, Virtuix is the OG. While the company had set out to make a consumer VR treadmill a decade ago, market realities pushed the company into the out-of-home VR attraction space. But after all these years the company remains dead-set on selling a VR treadmill to consumers, and this time around it’s taking an all-in-one approach with the new Virtuix Omni One. I visited the company’s Austin, Texas headquarters to try it for myself.

The Virtuix Omni Backstory

Image courtesy Virtuix

The original Virtuix Omni treadmill started life way back in 2013 as a wooden prototype built by a small group led by CEO Jan Goetgeluk. Thus the core idea was conceived a full three years before the first wave of consumer VR headsets appeared on the market in 2016.

The idea itself is simple. What if you had a treadmill on which you could run in any direction? With such a treadmill and a VR headset on your head, you could move your body and feel like you were really moving through the virtual world.

The execution of this idea, however, has been anything but simple.

Treadmills tend to be large, heavy, and expensive devices. And the Virtuix Omni was no exception. Although the company set out initially to build a device for consumers, the reality of the cost and complexity of such a device made it a challenging sell beyond early adopters. The ahead-of-its-time treadmill also suffered another key issue for the consumer VR space: the ring-support design prevented players from having a full range of motion, which made the treadmill a non-starter for many consumer VR games that expected players to be able to crouch, reach down to the ground, or move their arms around at their waist (where many games commonly place holsters for key items).

These challenges forced the company to pivot toward the out-of-home VR attraction space. Thus, the Omni Arena—a huge VR attraction that includes a pod of four of the company’s VR treadmills for multiplayer gameplay with custom content—was born. The system would go on to be installed in 73 entertainment spaces across the US and has become Virtuix’s bread-and-butter business.

Image courtesy Virtuix

Virtuix realized early on that VR was, at this stage, a fairly clunky proposition. Only early enthusiasts and computer experts had the skills and patience to set up and troubleshoot even consumer VR systems, let alone one that cobbled together complex hardware like a headset and VR treadmill. Expecting arcade attendants to figure out how to keep a system of four Virtuix Omni treadmills, VR headsets, and an array of networked computers powering it all, just wasn’t realistically going to work at scale.

That led the company to build Omni Arena like a giant all-in-one VR arcade. The company has impressively customized literally every step of the customer’s journey through the experience. From the moment they step into the enclosure they’re guided by video screen prompts about what they’re going to experience, how to slip on their special shoes, and how to get into the Virtuix Omni treadmill once it’s their turn.

Photo by Road to VR

The same, if not more, care has been paid to the operator’s experience. Omni Arena has everything it needs to be a self-sustaining VR attraction. It doesn’t just come with the four treadmills, but also four headsets, controllers (with charging pods), SteamVR Tracking base stations, and all the hardware to run the networked VR experiences. The pod’s software not only manages all of the connected devices, but even captures footage of players (both in and outside of the game) and emails it to them as a memento of their experience. It also turns routine troubleshooting steps, like restarting headsets, computers, or SteamVR, into a simple touchscreen button press through a custom interface for the operator. Omni Arena is truly an all-in-one product.

Omni Arena’s custom software makes it easy to manage all the computers and hardware that power the experience. | Photo by Road to VR

For a small company, Virtuix’s ability to focus on the holistic experience of its product is both rare and impressive.

Coming Full Circle

With the many lessons learned about creating an all-in-one experience for the out-of-home VR attraction space, the company is turning its attention back to the consumer realm with a brand new product—Virtuix Omni One.

Image courtesy Virtuix

With Omni One, Virtuix isn’t selling a VR treadmill. It’s selling an all-in-one system that includes the newly designed VR treadmill, a VR headset, and access to a library of custom-made content. It’s an ambitious approach, but one that reflects Virtuix’s ability to identify and address key problems with the overall experience it wants to deliver to customers.

The original ring design of the Omni meant players couldn’t crouch or have full movement of their arms around their waist. | Photo by Road to VR

One of those key problems the company identified was the way the original Omni design made compatibility with modern VR content a challenge. The support ring around the player meant their movement was restricted, limiting their ability to crouch, lean, and move their arms with complete freedom.

That ‘simple’ problem necessitated a complete redesign of the treadmill. The Omni One now uses an arm support design that always stays behind the user. This gives you the ability to have a full range of motion while also running in any direction. The arm doesn’t actively hold you upright, but it provides the force that prevents you from running straight off the edge of the treadmill.

Another problem the company identified in its goal of delivering a consumer VR treadmill is the complexity of existing PC VR systems and getting players into the right content.

Even if an Omni One customer were already an expert in PC VR and willing to put up with technical annoyances, a tether to the computer means worrying about the user wrapping themselves up in the cable (or asking them to rig up a ceiling-mounted cable management system).

Though the Omni One can still technically be used with a PC VR setup, this challenge pushed Virtuix to pair its treadmill with a standalone VR headset out of the box (Pico Neo 3, specifically). And it’s not just any headset, but one equipped with a custom-made Omni storefront serving up content that’s specifically made or adapted for the VR treadmill. The company even built its own ‘first steps’ experience, a surprisingly well-made onboarding that introduces users to the magic of VR and teaches them how to move and feel comfortable with their controllers and treadmill.

And although sticker-shock has always been a challenge for Virtuix, the Omni One actually isn’t unreasonably priced… if you think of it as what it truly is: a treadmill that will give you a workout.

Typical exercise treadmills range in price from $500 to $2,000 or more. Omni One will be priced at $2,600, including the $700 Pico Neo 3 headset (which the company stresses can also be used as a standard Pico headset, including for PC VR streaming). That leaves the treadmill itself at $1,900, the cost of a high-end treadmill. The company is also promising an option to finance the Omni One for $65 per month.

And for those that really believe in Virtuix and its vision, through the company’s crowd-investment campaign it is offering a 20% discount on Omni One (or more, depending upon the amount invested). The campaign has raised $4.4 million to date.

Continue on Page 2: Omni One Hands-on »

Hands-on: Apple Vision Pro isn’t for Gaming, But it Does Everything Else Better

While Apple’s new Vision Pro headset isn’t going to satisfy the existing base of consumer VR users, it’s mastering the rest of the basics better than anyone else.

Probably 90% of what consumers are using VR headsets for today is entertainment, and of that entertainment, most of it is gaming. And if you’re among those people using such headsets today, you’ll reasonably be disappointed that Apple Vision Pro lacks controllers and isn’t going to be playing top VR games anytime soon. But for everyone else, it’s a back-to-basics approach that’s laying a sturdy foundation to build upon in the future.

Today at Apple’s headquarters I got to check out Vision Pro for myself. Unfortunately the company didn’t permit any photos or footage during the demo, but the clips below are a fair representation of what I saw.

Photo by Road to VR

Apple Vision Pro (AVP, let’s call it) is doing what only Apple can: carving out a subset of what other devices do, and making sure that subset of things is done really well. And given the current state of UX on most other headsets, this is a reckoning that was a long time coming.

Look & Tap

It starts with the input. Apple is leaning heavily into using your eyes as a cursor, and a pinch gesture as a click. The headset has cameras on the bottom that face downward so that even subtle pinches from your hand in your lap are visible and detected. But you don’t see a floating cursor where your eyes are, nor do you see a laser pointer shooting out of your hand. You just look at the thing you want to press, then do a quick pinch.

On paper you might think this sounds shoddy. But remember, this is Apple. They’ve tested and refined this system six ways from Sunday, and it works so well that after a minute or two you hardly think about how you’re interacting with the headset, you just are.

The pinch input is responsive and reliable. It felt so natural that on the two or three occasions the headset missed my pinch during a 30-minute demo, it felt genuinely strange, because my brain was already convinced of its reliability.

This look-and-pinch system is so simple for the headset’s basic input that I won’t be surprised if we see other companies adopt it as soon as possible.
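Apple hasn’t published how visionOS fuses the two signals internally, but the interaction model itself is easy to reason about: whatever the eyes are fixated on at the moment a pinch begins becomes the click target. Here’s a minimal, hypothetical sketch of that logic; the frame data and names are stand-ins, not real visionOS APIs:

```python
# Hypothetical look-and-pinch selection loop; none of this is Apple's API.
# Each frame carries the element the eyes are fixated on (from eye tracking)
# and whether a thumb-index pinch is currently held (from hand tracking).

frames = [
    ("photos_app", False),
    ("photos_app", True),    # pinch begins while looking at the Photos icon
    ("photos_app", True),    # pinch still held: no repeated click
    ("browser_tab", False),
    ("browser_tab", True),   # a separate pinch on a new gaze target
]

was_pinching = False
for gaze_target, is_pinching in frames:
    # The 'click' fires on pinch onset, aimed at whatever the eyes are on.
    if is_pinching and not was_pinching:
        print(f"select: {gaze_target}")
    was_pinching = is_pinching
# select: photos_app
# select: browser_tab
```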

Reality First

So there’s the simple input and then there’s a passthrough-by-default view. This is an MR headset after all, meaning it can easily do augmented reality—where most of your view is of the real world, with some virtual content; or virtual reality—where all of your view is virtual content.

When you put AVP on your head, you instantly see the outside world. In fact, the way that Apple defers to the passthrough view shows that it wants to treat fully immersive experiences as the exception rather than the rule. Generally you won’t pop into a fully immersive scene unless you actively make the decision to do so.

The passthrough view is certainly best-in-class, but we’re still probably two generations away from it truly feeling like there’s nothing separating your eyes from the real world. Granted, I was able to read all the text on my phone with no issue, which has been the ‘bar’ for passthrough quality that I’ve been waiting to see exceeded.

Beautiful Virtual Displays

The imperfect passthrough resolution somewhat belies the exceptional display resolution, which exhibits not even a hint of screen-door effect. It may not be ‘retina resolution’ (generally agreed to be around 60 pixels per-degree), but it’s good enough that I won’t know how far off it is from retina resolution until I sit down with an objective test target to find out.

That’s a long way of saying that the headset’s display has excellent resolution with great clarity across the lens. Top of the class.

This clarity is helped by the fact that Apple has done its Apple-y thing and ensured that panels, text, and images consistently render with superb quality. The entire interface feels iOS-polished with animations and easy to use buttons and controls. The interface was so simple to use that the demo chaperones had a hard time keeping me on task as I wanted to flick through menus and move floating apps around the room.

But here’s the thing: probably 75% of what Apple showed me was essentially just floating screens. Whether it was videos, a floating iMessage app, or the web browser, it’s clear that Apple wants Vision Pro first and foremost to be great at displaying flat content to the user.

The other 25% of what I saw, while very impressive all around, felt like just the start of a journey for Apple to build out a broader library of immersive experiences.

Record & Rewatch Memories

AVP might not be a VR gaming headset, but it does at least one thing that no other headset does: capture volumetric memories using its on-board cameras. Using the button on the top of the headset you can capture volumetric photos and videos with just a press.

Apple showed me a demo of a volumetric video capture of a group of kids blowing out candles on a birthday cake. It was like they were right in front of me. I’d never even seen these kids before but I could immediately feel their giddy emotions as they giggled and bounced around… as if I was sitting right there while it was happening. Not to mention that the capture was good enough, at least in this best-case-scenario demo, that my first thought had nothing to do with the framerate or quality or dynamic range, but purely with the emotion of the people in front of me.

That instant connection—to people I don’t even know—was a clear indicator that there’s something special to this. I can already imagine watching a volumetric video of a cherished memory, or of a loved one that has passed, and I know it would be a powerful experience.

Doing it Right

And here’s the thing; I’ve seen plenty of volumetric video demos before. This isn’t a new idea, not even close. The thing that’s novel here is that everyday users could potentially shoot these videos on their own, and readily watch, share, and store them for later. On other headsets you’d need a special camera for capturing, special software for editing, a player app, and a sharing app to make the same thing happen.

This is the ‘ecosystem’ part of XR that’s missing from most other headsets. It’s not about what’s possible—it’s about what’s easy. And Apple is focused on making using this headset easy.

Continue on Page 2: Immersion Isn’t Off the Table »

Hands-on: CREAL’s Light-field Display Brings a New Layer of Immersion to AR

More than four years after I first caught wind of their tech, CREAL’s light-field display continues to be one of the most interesting and promising solutions for bringing light-fields to immersive headsets. At AWE 2023 I got to check out the company’s latest tech and saw first hand what light-fields mean for immersion in AR headsets.

More Than One Way to Focus

So first, a quick recap. A light-field is a fundamentally different way of showing light to your eyes compared to the typical displays used in most headsets today. The key difference is about how your eyes can focus on the virtual scene.

Your eyes have two focus methods. The one most people are familiar with is vergence (the basis of stereoscopy), where both eyes point at the same object to bring overlapping views of that object into focus. This is also what makes things look ‘3D’ to us.

But each individual eye is also capable of focusing in a different way, by bending the lens of the eye to focus on objects at different distances—the same way that a camera with only one lens focuses. This is called accommodation.

Vergence-Accommodation Conflict

Most XR headsets today support vergence (stereoscopic focus), but not accommodation (single-eye focus). You may have heard this called the Vergence-Accommodation Conflict, also known to the industry as ‘VAC’ because it’s a pervasive challenge for immersive displays.

The reason for the ‘conflict’ is that normally the vergence and accommodation of your eyes work in tandem to achieve optimal focus on the thing you want to look at. But in a headset that supports vergence, but not accommodation, your eyes need to break these typically synchronous functions into independent functions.

It might not be something you ‘feel’ but it’s the reason why in a headset it’s hard to focus on things very near to you—especially objects in your hands that you want to inspect up close.

The conflict between vergence and accommodation can not only be uncomfortable for your eyes, but can also, in a surprising way, rob the scene of immersion.
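To put rough numbers on the conflict: accommodation demand is the reciprocal of distance (in diopters), while vergence angle depends on distance and the spacing of your eyes. In a conventional headset the fixed focal plane pins accommodation no matter where the content sits. A small illustration, assuming a typical ~64mm IPD and a ~1.5m focal plane (illustrative values, not specific to any product):

```python
import math

IPD_M = 0.064          # assumed inter-pupillary distance (meters)
FOCAL_PLANE_M = 1.5    # assumed fixed focal distance of a conventional headset

def vergence_deg(distance_m):
    """Angle between the two eyes' lines of sight for an object at this distance."""
    return math.degrees(2 * math.atan((IPD_M / 2) / distance_m))

def accommodation_d(distance_m):
    """Focus demand on each eye's lens, in diopters (1 / meters)."""
    return 1 / distance_m

for d in (0.3, 0.75, 2.0):   # held close, at arm's length, across the room
    print(f"object at {d:>4}m: vergence {vergence_deg(d):4.1f} deg, "
          f"eye wants {accommodation_d(d):.2f}D, "
          f"display forces {accommodation_d(FOCAL_PLANE_M):.2f}D")
```

The mismatch is largest for near objects (nearly 2.7 diopters for something held 30cm away in this example), which lines up with how hard it is to inspect things up close in most headsets.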

Creal’s Solution

And this is where we get back to Creal, a company that wants to solve the Vergence-Accommodation Conflict with a light-field display. Light-field displays structure light in the same way that we see it in the real world, allowing both of the focus functions of the eyes—vergence and accommodation—to work in tandem as they normally do.

At AWE 2023 this week, I got to check out the company’s latest light-field display tech, and came away with an added sense of immersion that I haven’t felt in any other AR headset to date.

I’ve seen Creal’s static bench-top demos before, which show static floating imagery through the lens to a single eye, demonstrating that you can indeed focus (accommodate) at different depths. But you won’t really see the magic until you see a light-field with both eyes and head-tracking. Which is exactly what I got to do this week at AWE.

Photo by Road to VR

On an admittedly bulky proof-of-concept AR headset, I got to see the company’s light-field display in its natural habitat—floating immersively in front of me. What really impressed me was when I held my hand out and a little virtual turtle came floating over to the palm of my hand. Even though it was semi-transparent, and not exceptionally high resolution or accurately colored, it felt… weirdly real.

I’ve seen all kinds of immersive XR experiences over the years, and holding something in your hand sounds like a banal demo at this point. But there was just something about the way this little turtle looked—thanks to the fact that my eyes could focus on it in the same way they would in the real world—that made it feel more real than I’ve ever really felt in other headsets. Like it was really there in my hand.

Photo by Road to VR

The trick is that, thanks to the light-field, when I focused my eyes on the turtle in my hand, both the turtle (virtual) and my hand (real) were each in proper focus—something that isn’t possible with conventional displays—making both my hand and the turtle feel more like they were inhabiting the same space right in front of me.

It’s frustratingly impossible to explain exactly how it appeared via text alone; this video from Creal shot through-the-lens gives some idea of what I saw, but can’t quite show how it adds immersion over other AR headsets:

It’s a subtle thing, and such added immersion probably only meaningfully impacts objects within arm’s reach or closer—but then again, that distance is where things have the potential to feel most real to us, because they’re in our carefully watched personal space.

Digital Prescriptions

Beyond just adding a new layer of visual immersion, light-field displays stand to solve another key problem: vision correction. Most XR headsets today don’t support any kind of prescription vision correction, which for perhaps more than half of the population means they either need to wear their corrective lenses while using these devices, buy some kind of clip-on lens, or just suffer through a blurry image.

But the nature of light-fields means you can apply a ‘digital prescription’ to the virtual content that exactly matches the user’s corrective prescription. And because it’s digital, this can be done on-the-fly, meaning the same headset could have its digital corrective vision setting change from one user to the next. Doing so means the focus of the virtual image can match the real-world image for those with and without glasses.
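The ‘digital prescription’ idea falls out of the same math: a spherical prescription is just an offset in focal power (diopters), and a display that already controls the apparent focal distance of virtual content can, in principle, fold that offset in. A toy illustration follows; the model is a deliberate simplification and is not Creal’s implementation:

```python
# Toy model: shift the rendered focal distance by the user's spherical
# prescription. Diopters add linearly, which is why a per-user offset is
# cheap to apply digitally. Real light-field rendering is far more involved.

def corrected_focal_distance(content_distance_m, prescription_diopters):
    """Focal distance to render so content appears in focus for this user."""
    demand = 1 / content_distance_m              # focus demand of the content (D)
    corrected = demand - prescription_diopters   # offset by the prescription
    # Simplification: a non-positive result would need converging light,
    # which we just report as 'at infinity' here.
    return float("inf") if corrected <= 0 else 1 / corrected

# A -2.0D (mildly myopic) user viewing content placed 2m away:
print(f"{corrected_focal_distance(2.0, -2.0):.2f} m")   # ≈ 0.40 m, pulled nearer
```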

Continue on Page 2: A More Acceptable Form-factor »

Hands-on: Bigscreen Beyond – A Little Headset That Could be a Big Deal

It’s exceedingly rare to see a VR software startup transition to making hardware, let alone decent hardware. But that’s exactly what Bigscreen—creators of the long-running social VR theater app of the same name—has done with its upcoming Beyond headset.

Bigscreen has clearly targeted PC VR enthusiasts who are willing to pay for the best hardware they can get their hands on. And with major players like Meta and HTC focusing heavily on standalone headsets, Bigscreen Beyond could prove to be the best option they’ll find any time soon.

Photo by Road to VR

The company has set out to make a headset that’s not just better than what’s out there, but one that’s much smaller too. And while it remains to be seen if the headset will hit all the right notes, my initial hands-on shows plainly the company knows what it’s doing when it comes to building a VR headset.

Bigscreen Beyond Specs

Resolution: 2,560 × 2,560 (6.5MP) per-eye, microOLED (2x, RGB stripe)
Pixels Per-degree (claimed): 28
Refresh Rate: 75Hz, 90Hz
Lenses: Tri-element pancake
Field-of-view (claimed): 93°H × 90°V
Optical Adjustments: IPD (fixed, customized per customer), eye-relief (fixed, customized per facepad)
IPD Adjustment Range: 58–72mm (fixed, single IPD value per device)
Connectors: DisplayPort 1.4, USB 3.0 (2x)
Accessory Ports: USB-C (1x)
Cable Length: 5m
Tracking: SteamVR Tracking 1.0 or 2.0 (external beacons)
On-board Cameras: None
Input: SteamVR Tracking controllers
On-board Audio: None
Optional Audio: Audio Strap accessory, USB-C audio output
Microphone: Yes (2x)
Pass-through View: No
Weight: 170–185g
MSRP: $1,000
MSRP (with tracking & controllers): $1,580
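As a sanity check on the claimed 28 pixels per-degree, you can approximate average PPD by dividing per-eye resolution by the field-of-view. It’s a crude figure (real PPD varies across the lens, and pancake optics concentrate detail toward the center), but it lines up with the spec sheet:

```python
# Rough average pixels-per-degree from the spec sheet above.
h_pixels = 2560      # horizontal per-eye resolution
h_fov_deg = 93       # claimed horizontal field-of-view

print(f"~{h_pixels / h_fov_deg:.1f} pixels per-degree")   # ~27.5, close to the claimed 28
```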

Custom-made

Bigscreen is building something unique, quite literally—every Beyond headset comes with a custom-made facepad. And this isn’t a ‘choose one of three options’ situation; Bigscreen has a sleek app that walks buyers through the process of capturing a 3D scan of their face so the company can create a completely unique facepad that conforms to each specific customer.

And it really makes a difference. The first thing that Bigscreen CEO Darshan Shankar showed me during a demo of the Beyond headset was the difference between my personal facepad (which the company created for me prior to our meetup) and someone else’s facepad. The difference was instantly obvious; where mine fit against my face practically like two connected puzzle-pieces, the other facepad awkwardly disagreed with my face in various places. While I’ve recognized for a long time that different facial topology from person-to-person is a real consideration for VR headsets, this made me appreciate even more how significant the differences can be.

The facepad may look rough, but it’s actually made of a soft rubber material | Photo by Road to VR

Shankar says the custom-fit facepad is an essential part of making such a small headset. It ensures not only that the headset is as comfortable as it can be, but also the user’s eyes are exactly where they’re supposed to be with regard to the lenses. For a headset like Beyond, which uses high magnification pancake optics with a small sweet spot, this is especially important. And, as Shankar convincingly demonstrated by shining a flashlight all around the headset while I was wearing it, the custom-fit facepad means absolutely no external light can be seen from inside.

And the custom facepad isn’t the only way each headset is dialed in for each specific customer; instead of wasting weight and space with the mechanics for an IPD adjustment, the headset ships with one of 15 fixed IPD distances, ranging from 58–72mm. The company selects the IPD based on the same face scan that allows them to make the custom facepad. And given the size of the Beyond headset, there’s no way that glasses will fit inside; luckily the company will also sell magnetically attached prescription inserts for those who need them, up to −10 diopter.

Diving In

With my custom facepad easily snapped onto the headset with magnets, it was time to dive into VR.

The baseline version of the $1,000 Bigscreen Beyond headset has a simple soft strap, which I threw over the back of my head and tightened to taste. I felt I had to wear the strap very high on the back of my head for a good hold; Shankar says an optional top-strap will be available, which ought to allow me to wear the rear strap in a lower position.

Photo by Road to VR

As I put on the headset I found myself sitting in a dark Bigscreen theater environment, and the very first thing I noticed was the stellar darks and rich colors, thanks to the headset’s OLED displays. The second thing I noticed was that there was no sound! That’s because the baseline version of the headset doesn’t have on-board audio, so I still had to put on a pair of headphones after the headset was donned.

While the baseline headset lacks on-board audio, Bigscreen is offering a $100 ‘Audio Strap’, which is a rigid headstrap with built-in speakers. As someone who really values rigid straps and on-board audio, I’m glad to see this as an option—for me it would be the obvious choice. Unfortunately the company wasn’t ready to demo the Audio Strap.

Shankar toured me around a handful of VR environments that showed off the headset’s 2,560 × 2,560 (6.5MP) per-eye displays, which offered a level of clarity similar to that of Varjo’s $2,000 Aero headset, but with a notably smaller field-of-view (Bigscreen claims 90°H × 93°V).

On many current-gen headsets like Quest 2 you can’t quite see the individual lines of the screen-door effect, but it’s still clear that it’s there in aggregate. While the Beyond headset isn’t ‘retina resolution’ there’s essentially no evidence of any screen-door effect. Everything looks really sharp. This was best demonstrated when I ran around in Half-Life: Alyx and the game felt like it had instantly upgraded graphics compared to a headset like Valve’s Index.

There is, however, some persistence blurring and glare. Shankar openly demonstrated how the brightness of the display directly relates to the level of persistence. While there’s some noticeable persistence at the default brightness, when overdriving the display’s brightness the persistence becomes entirely unbearable. The reverse is true; turning the brightness down below the default cuts the persistence down noticeably. While it would be nice if the default brightness had less persistence, at least users will be able to trade brightness for lower persistence based on their specific preference.
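The brightness-persistence tradeoff Shankar demonstrated follows from how low-persistence displays generally work: each frame is lit for only a fraction of the refresh interval, and one common way to squeeze out more brightness is to keep the panel lit longer, which increases smear during head motion. A rough back-of-the-envelope with generic assumed numbers (not measurements of Beyond):

```python
# Perceived smear during a head turn is roughly:
#   (angular head velocity) x (time the pixels stay lit each frame)
# All numbers here are generic assumptions, not Bigscreen Beyond measurements.

refresh_hz = 90
frame_ms = 1000 / refresh_hz             # ~11.1 ms per frame
head_velocity_dps = 100                  # a moderate head turn, in degrees/second

for duty_cycle in (1.0, 0.5, 0.2):       # fraction of each frame the display is lit
    persistence_ms = frame_ms * duty_cycle
    smear_deg = head_velocity_dps * persistence_ms / 1000
    print(f"lit {persistence_ms:4.1f} ms per frame -> ~{smear_deg:.2f} deg of smear "
          f"(at roughly {duty_cycle:.0%} of the achievable brightness)")
```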

Continue on Page 2: Dialing In »

Hands-on: HTC’s New Standalone Vive Tracker Effortlessly Brings More of Your Body Into VR

With three versions of SteamVR trackers under its belt, HTC has been a leading enabler of full-body tracking in VR. Now the company’s latest tracker could make it even easier to bring your body into VR.

HTC’s new standalone Vive tracker (still unnamed) has a straightforward goal: work like the company’s existing trackers, but easier and on more platforms.

The ‘easier’ part comes thanks to inside-out tracking—using on-board cameras to allow the device to track its own position, rather than external beacons like those used by the company’s prior trackers.

Photo by Road to VR

To that end, things seem really promising so far. I got to demo the new Vive tracker at GDC 2023 this week and was impressed with how well everything went.

Photo by Road to VR

With two of the new Vive trackers strapped to my feet, I donned a Vive XR Elite headset and jumped into a soccer game. When I looked down at my feet, I saw a pair of virtual soccer shoes. And when I moved my feet in real-life, the soccer shoes moved at the same time. It took less than two seconds for my mind to say ‘hey those are my feet!’, and that’s a testament to both the accuracy and latency being very solid with the new tracker.

That’s not a big deal for older trackers that use SteamVR Tracking, which has long been considered the gold standard for VR tracking. But to replicate a similar level of performance in a completely self-contained device that’s small and robust enough to be worn on your feet… that’s a big deal for those who crave the added immersion that comes with bringing more of your body into VR.

Throughout the course of my demo, my feet were always where I expected to see them. I saw no strange spasms or freezing in place, no desync of coordinate planes between the tracker and the headset, and no drifting of the angle of my feet. That allowed me to easily forget that I was wearing anything special on my feet and simply focus on trying to kick soccer balls into the goal.

While the tracker worked well throughout, the demo had an odd caveat—I had feet but no legs! That makes it kind of weird to try to juggle a soccer ball when you expect to be able to use your shin as a backboard but watch as the ball rolls right over your virtual foot.

Ostensibly this is the very thing that trackers like this should be able to fix; by attaching two more trackers to my knees, I should be able to have a nearly complete representation of my leg movements in VR, making experiences like ‘soccer in VR’ possible when they simply wouldn’t work otherwise.

I’m not sure if the demo app simply wasn’t designed to handle additional tracking points on the knees, or if the trackers are currently limited to just two, but HTC has confirmed the final inside-out Vive tracker will support up to five trackers in addition to the tracked headset and controllers.

Trackers can, of course, be used to track more than just your body, though apps that support these kinds of tracked accessories are rare | Photo by Road to VR

So the inside-out factor is the ‘easier’ part, but what about the other goal of the tracker—to be available on more platforms than just SteamVR Tracking?

Well, the demo I was playing was actually running purely on the standalone Vive XR Elite. To connect the trackers, a small USB-C dongle needs to be connected to the headset to facilitate the proprietary wireless connection between the dongle and the trackers. HTC says the same dongle can plug into a PC and the trackers will work just fine through SteamVR.

The company also says it’s committed to making the trackers OpenXR compatible, which means (in theory) any headset could support them if they wanted.

– – — – –

I only got to use the trackers in one configuration (on my feet) and in one environment (a large office space), so there’s still the question of how robust they will be. For now though, I’m suitably impressed.

If these trackers really work as well as they seem from this first impression, it could open the door to a new wave of people experiencing the added immersion of full-body tracking in VR… but there’s still the lingering question of price, which historically never seems to be quite right for the consumer VR market when it comes to HTC. Until then, our fingers shall remain crossed.

PSVR 2 Unboxing – Close-up with the Final Version of Sony’s New VR Headset

Ahead of the launch of PSVR 2, we’ve got a close-up look at the finished version of the headset and what you can expect to find when you crack open the box.

It’s just two weeks until Sony’s newest VR headset hits the streets, and while we’re not yet allowed to go into detail, today we’ve got a close-up look at the production hardware and Sony’s official controller dock. Stay tuned for our full PSVR 2 review.

Photo by Road to VR

The very first thing to notice about PSVR 2 compared to the original is the simplicity of setup… this is everything you’ll see in the box.

PSVR 2 | Photo by Road to VR

Compare that to the original PSVR which had a breakout box requiring extra cables and its own power adapter—not to mention the PS Eye camera that was required for the headset (and the photo below doesn’t even include the Move controllers).

PSVR 1 | Photo by Road to VR

Compared to the original, PSVR 2’s single-cable operation and inside-out tracking make it so much easier to use.

Getting closer to the headset itself, we get a good look at its range of adjustments. On the top there’s an IPD dial for dialing in the distance between the lenses. Also on top is a button to adjust eye-relief (the range of which is pretty impressive). And on the back is the crank to tighten the headstrap, with the center of the crank acting as a button which releases the springy tension.

As we learned in our early preview of PSVR 2, the headset has an assisted calibration step which helps the user dial in their individual headset orientation and IPD settings, thanks to the built-in eye-tracking.

On the bottom of the headset is the power button and a button to activate PSVR 2’s passthrough view. Alongside those is the built-in microphone.

Photo by Road to VR

While PSVR 2 doesn’t have directly integrated audio, it comes with a pair of custom earbuds which attach to the underside of the rear headstrap and stow in little holes at the sides of the headset. You can use your own 3.5mm headphones instead if you’d like.

And then there’s the PSVR 2 ‘Sense’ controllers, which have a particularly interesting shape to them. Inside the circular strut are hidden infrared LEDs which can be seen by the headset to track the controllers.

Compared to something like Quest 2, the unique shape and placement of the ring does a good job of reducing the likelihood that you’ll bump the controllers into each other during hand-to-hand interactions. However, the design has a somewhat off-kilter balance to it.

The wrist-straps are mounted on the inside of the tracking ring and can be removed if desired.

The PSVR 2 controllers are rechargeable via USB-C, but Sony is also selling a purpose-built PSVR 2 controller charging dock to make it easy to charge your controllers without fiddling with cables. While its existence is appreciated, and it generally gets the job done, it’s a bit funky to sit the controllers in just the right spot to initiate the charge. Still, I’d rather this than plugging in two cables every time I’m done playing.

We’re looking forward to sharing our full PSVR 2 review in the near future—if you’ve got questions for us, drop them in the comments below!

Display Maker Demonstrates Flagship OLED VR Display & Pancake Optics, Its Best Yet

Display manufacturer Kopin recently demonstrated its latest VR display and pancake optic which promises higher resolution and more affordability for future VR headsets.

Most modern VR headsets take on the ‘box on your face’ form-factor because of a simple display architecture which necessitates a certain distance between the display and the lens. In the effort to make VR headsets more compact in the near-term, so-called ‘pancake optics’ are emerging as a leading candidate. These more complex optics reduce the distance required between the display and the lens.

Why Are Today’s Headsets So Big?

Photo by Road to VR

It’s natural to wonder why even the latest VR headsets are essentially just as bulky as the first generation launched back in 2016. The answer is simple: optics. Unfortunately the solution is not so simple.

Every consumer VR headset on the market uses effectively the same optical pipeline: a macro display behind a simple lens. The lens is there to focus the light from the display into your eye. But in order for that to happen the lens needs to be a few inches from the display, otherwise it doesn’t have enough focusing power to focus the light into your eye.

That necessary distance between the display and the lens is the reason why every headset out there looks like a box on your face. The approach is still used today because the lenses and the displays are known quantities; they’re cheap & simple, and although bulky, they achieve a wide field-of-view and high resolution.

Many solutions have been proposed for making VR headsets smaller, and just about all of them include the use of novel displays and lenses.

Pancake Optics (AKA Folded Optics)

What are pancake optics? It’s not quite what it sounds like, but once you understand it, you’d be hard pressed to come up with a better name.

While the simple lenses in today’s VR headsets must be a certain distance from the display in order to focus the light into your eye, the concept of pancake optics proposes ‘folding’ that distance over on itself, such that the light still traverses the same distance necessary for focusing, but its path is folded into a more compact area.

You can think of it like a piece of paper with an arbitrary length. When you fold the paper in half, the paper itself is still just as long as when you started, but its length occupies less space because you folded it over on itself.

But how the hell do you do that with light? Polarization is the key.

Image courtesy Proof of Concept Engineering

It turns out that beams of light have an ‘orientation’, which is referred to as polarization. Normally the orientation of light beams is random, but you can use a polarizer to only let light of a specific orientation pass through. You can think of a polarizer like the coin-slot on a vending machine: it will only accept coins in one orientation.

Using polarization, it’s possible to bounce light back and forth multiple times along an optical path before eventually letting it out and into the wearer’s eye. This approach, known as pancake or folded optics, allows the lens and the display to move much closer together, resulting in a more compact headset.
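To make the ‘folding’ concrete, here’s a toy calculation of how much front-to-back depth folding the path saves. The numbers are purely illustrative and not taken from any specific pancake design:

```python
# Toy illustration: the light still travels the full focal path, but the
# physical gap between display and lens only needs to be roughly the path
# length divided by the number of passes through the cavity.

required_path_mm = 45     # assumed optical path the lens needs in order to focus
passes = 3                # light typically crosses the folded cavity three times

unfolded_depth_mm = required_path_mm             # simple lens: one straight pass
folded_depth_mm = required_path_mm / passes      # pancake: same path, folded up

print(f"unfolded: {unfolded_depth_mm} mm of depth, folded: {folded_depth_mm:.0f} mm")
# unfolded: 45 mm of depth, folded: 15 mm
```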

Kopin is an electronics manufacturer best known for its microdisplays. In recent years the company has been eyeing the emerging XR industry as a viable market for its wares. To that end, the company has been steadily at work creating VR displays and optics that it hopes headset makers will want to snatch up.

At AWE 2022 last month, the company demonstrated its latest work on that front with a new plastic pancake optic and flagship VR display.

Kopin’s P95 pancake optic has just a 17mm distance between the display and lens, along with a 95° field-of-view. Furthermore, it differentiates itself as being an all-plastic optic, which makes it cheaper, lighter, more durable, and more flexible than comparable glass optics. The company says its secret sauce is being able to make plastic pancake optics that are as optically performant as their glass counterparts.

Photo by Road to VR

At AWE, I got to peek through the Kopin P95 optic. Inside I saw a sharp image with seemingly quite good edge-to-edge clarity. It’s tough to formulate a firm assessment of how it compares to contemporary headsets, as my understanding is that the test pattern being shown had no geometric or color corrections, nor was it calibrated for the numbers shown.

You’ll notice that the P95 is a non-Fresnel optic which should mean it won’t suffer from the kind of ‘god-rays’ and glare that almost every contemporary VR headset exhibits. Granted, without seeing dynamic content it’s tough to know whether or not the multi-element pancake optic introduces any of its own visual artifacts.

Even though the test pattern wasn’t calibrated, it does reveal the retina resolution of the underlying display—Kopin’s flagship ‘Lightning’ display for VR devices.

Photo by Road to VR

This little beauty is a 1.3″ OLED display with a 2,560 × 2,560 resolution running up to 120Hz. Kopin says the display has 10-bit color, making it viable for HDR.

Photo by Road to VR

Combined, the P95 pancake optic and the Lightning display appear to make a viable, retina-resolution, compact display architecture for VR headsets. But it isn’t necessarily a shoo-in.

For one, the 95° field-of-view just barely meets par. Ostensibly Kopin will need to grow its 1.3″ Lightning display larger if it wants to meet or exceed what’s offered in today’s VR headsets.

Further, the company wasn’t prepared to divulge any info on the brightness of the display or the efficiency of the pancake lens—both of which are key factors for use in VR headsets.

Because pancake lenses use polarized light and bounce that light around a few times, they always end up being less efficient—meaning more brightness on the input to get the same level of brightness output. That typically means more heat and more power consumption, adding to the tradeoffs that would be required if building a headset with this display architecture.
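That efficiency hit has a simple idealized ceiling: with an already-polarized source, light must be transmitted through and then reflected off a roughly 50/50 mirror, and an unpolarized source loses another half at the first polarizer. Real designs differ in the details, but the structure of the loss is the same:

```python
# Idealized upper bound on pancake-optic light efficiency (actual designs vary).
halfmirror_transmit = 0.5      # first pass through the ~50/50 mirror
halfmirror_reflect = 0.5       # second encounter, reflected back toward the eye

polarized_source = halfmirror_transmit * halfmirror_reflect    # 25% ceiling
unpolarized_source = 0.5 * polarized_source                    # 12.5% ceiling

print(f"polarized display:   <= {polarized_source:.0%} of emitted light reaches the eye")
print(f"unpolarized display: <= {unpolarized_source:.1%}")
```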

Kopin has been touting its displays and optics as a solution for VR headsets for several years at this point, but at least in the consumer & enterprise space they don’t appear to have found any traction just yet. It’s not entirely clear what’s holding the company back from breaking into the VR space, but it likely comes down to the price or the performance of the offerings.

That said, Kopin has been steadily moving toward the form-factor, resolution, and field-of-view the VR industry has been hoping for, so perhaps the P95 optic and latest Lightning display will be the point at which the company starts turning heads in the VR space.

Hands-on: Magic Leap 2 Shows Clear Improvements, But HoloLens 2 Retains Some Advantages

Magic Leap 2 isn’t available just yet, but when it hits the market later this year it will be directly competing with Microsoft’s HoloLens 2. Though Magic Leap 2 beats out its rival in several meaningful places, its underlying design still leaves HoloLens 2 with some advantages.

Magic Leap as a company has had a wild ride since its founding way back in 2010, with billions of dollars raised, an ambitious initial product that fell short of the hype, and a near-death and rebirth with a new CEO.

The company’s latest product, Magic Leap 2, in many ways reflects the ‘new’ Magic Leap. It’s positioned clearly as an enterprise product, aims to support more open development, and it isn’t trying to hype itself as a revolution. Hell—Magic Leap is even (sensibly) calling it an “AR headset” this time around instead of trying to invent its own vocabulary for the sake of differentiation.

After trying the headset at AWE 2022 last week, I got the sense that, like the company itself, Magic Leap 2 feels like a more mature version of what came before—and it’s not just the sleeker look.

Magic Leap 2 Hands-on

Photo by Road to VR

The most immediately obvious improvement to Magic Leap 2 is in the field-of-view, which is increased from 50° to 70° diagonally. At 70°, Magic Leap 2 feels like it’s just starting to scratch that ‘immersive’ itch, as you have more room to see the augmented content around you which means less time spent ‘searching’ for it when it’s out of your field-of-view.
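For a sense of how much more room that gives augmented content, here’s a rough comparison of the angular area covered by a 50° versus a 70° diagonal field-of-view, treating each as a simple circular cone (a simplification, used only for scale):

```python
import math

def cone_solid_angle_sr(full_angle_deg):
    """Solid angle (steradians) of a circular cone with the given full angle.
    A crude stand-in for a rectangular AR field-of-view, used only for scale."""
    half = math.radians(full_angle_deg / 2)
    return 2 * math.pi * (1 - math.cos(half))

ml1 = cone_solid_angle_sr(50)
ml2 = cone_solid_angle_sr(70)
print(f"ML1 ~{ml1:.2f} sr, ML2 ~{ml2:.2f} sr -> ~{ml2 / ml1:.1f}x the angular area")
# ML1 ~0.59 sr, ML2 ~1.14 sr -> ~1.9x the angular area
```

Roughly doubling the area that can be filled with augmented content is a big part of why the jump from 50° to 70° feels larger than the raw numbers suggest.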

While I suspect many first-time Magic Leap 2 users will come away with a ‘wow the field-of-view is so good!’ reaction… it’s important to remember that the design of ML2 (like its predecessor), ‘cheats’ a bit when it comes to field-of-view. Like the original, the design blocks a significant amount of your real-world peripheral vision (intentionally, as far as I can tell), which makes the field-of-view appear larger than it actually is by comparison.

Photo by Road to VR

This isn’t necessarily a bad thing if only the augmented content is your main focus (I mean, VR headsets have done this pretty much since day one), but it’s a questionable design choice for a headset that’s designed to integrate your real-world and the augmented world. Thus real-world peripheral vision remains a unique advantage that HoloLens 2 holds over both ML1 and ML2… but more on that later.

Unlike some other AR headsets, Magic Leap 2 (like its predecessor) has a fairly soft edge around the field-of-view. Instead of a hard line separating the augmented world from the real-world, it seems to gently fade away, which makes it less jarring when things go off-screen.

Another bonus to immersion compared to other devices is the headset’s new dimming capability which can dynamically dim the lenses to reduce incoming ambient light in order to make the augmented content appear more solid. Unfortunately this was part of the headset that I didn’t have time to really put through its paces in my demo as the company was more focused on showing me specific content. Another thing I didn’t get to properly compare is resolution. Both are my top priority for next time.

Photo by Road to VR

Tracking remains as good as ever with ML2, and on par with HoloLens 2. Content feels perfectly locked to the environment as you move your head around. I did see some notable blurring, mostly during positional head movement. ML1 had a similar issue and it has likely carried over as part of the headset’s underlying display technology. In any case it seems mostly hidden during ‘standing in one spot’ use-cases, and impacts text legibility more than anything else.

And while the color-consistency issue across the image is more subtle (the ‘rainbow’ look), it’s still fairly obvious. It didn’t appear to be as bad as ML1 or HoloLens 2, but it’s still there which is unfortunate. It doesn’t really impact the potential use-cases of the headset, but it does bring a slight reduction to the immersiveness of the image.

While ML2 has been improved almost across the board, there’s one place where it actually takes a step back… and it was one of ML1’s most hyped features: the mystical “photonic lightfield chip” (AKA a display with two focal planes) is no longer. Though ML2 does have eye-tracking (likely improved thanks to doubling the number of cameras), it only supports a single focal plane (as is the case for pretty much all AR headsets available today).

Continue on Page 2: Enterprise Use-cases »

Hands-on: Lumus Prototype AR Glasses Are Smaller & Better Than Ever

Lumus’ latest waveguide, dubbed Maximus, is now even more compact thanks to 2D image expansion. With impressive image quality and a more compact optical engine, the company is poised to have a leading display solution for truly glasses-sized AR headsets.

2D expansion adds an additional light bounce to expand the image, allowing for a smaller optical engine

Lumus has been touting its Maximus waveguide since as far back as 2017, but since then its waveguide display has improved and shrunk considerably, thanks to so-called ‘2D expansion’ which allows the optical engine (the part of the waveguide display which actually creates the image) to be considerably smaller without sacrificing quality or field of view. The improvements have moved the company’s display solution closer than ever to actually looking and working like a pair of glasses.

For comparison, here’s a look at the first time we saw Maximus back in 2017. It had thin optics and a fairly wide field-of-view, but the optical engine was huge, requiring a large overhead structure.

Photo by Road to VR

The company’s latest Maximus waveguide has shrunk things down considerably with 2D image expansion. That means the light is reflected twice to magnify the image vertically and then horizontally before bouncing it into your eye. Doing so allows the optical engine (where the display and light source are housed) to be much smaller and mounted on the side of the glasses while retaining plenty of peripheral vision.

What you’re seeing here is a fully functional display prototype (ie: working images through the lens, but battery and compute are not on-board) that I got to check out at last week’s AWE 2022.

Here’s a look at how the optical engine has been shrunk when moving from 1D expansion to 2D expansion. It’s clear to see how much easier it would be to fit the left one into something you could really call glasses.

Lumus waveguide and optical engine with 2D expansion (left) and 1D expansion (right)

Actually looking through the prototype glasses you can see a reasonably wide 50° field-of-view, but more importantly an impressively uniform image, both in color and clarity. By comparison, similar devices like HoloLens 2 and Magic Leap tend to have color inconsistency which often shows up as a faint rainbow haze from one side of the view to the other. Our friend Karl Guttag captured a great through-the-lens comparison from a similar Lumus prototype:

Image courtesy Karl Guttag

Brightness in the Lumus Maximus glasses is also a major advantage, so much so that these glasses don’t need to dim the incoming light at all, compared to many other AR headsets and glasses that have sunglasses-levels of tinting in order to make the virtual image appear more solid against even ambient indoor light. Lumus says this Maximus prototype goes up to 3,000 nits which is usable in broad daylight.

The lack of heavy tinting also means other people can see your eyes just as easily as if you were wearing regular glasses, which is an important social consideration (wearing sunglasses indoors, or otherwise hiding your eyes, has a connotation of untrustworthiness).

The image through the glasses is also quite crisp; the waveguide is paired with a 1,440 × 1,440 microdisplay which resolves small text fairly well given that it’s packed into a 50° field-of-view. The company says the waveguide in no way limits the potential resolution—all that’s needed is a higher resolution microdisplay. In fact the company has previously shown off a similar version of this prototype with a 2,048 × 2,048 display, which was measured to achieve a retina resolution of 60 pixels per-degree.

Lumus’ waveguide offerings clearly have a lot of advantages compared to contemporaries, especially with overall image quality, brightness, and social acceptability. The big question at this point is… why aren’t we seeing them in consumer products yet?

The answer is multifaceted (if anyone from Lumus is reading this, yes, that’s an intentional pun). For one, what Lumus is showing here is a display prototype, which means the displays are functional, but the glasses themselves have none of the other stuff you need for a pair of standalone AR glasses (ie: battery, compute, and sensors). You can of course offload the compute and battery into a tethered ‘puck’ design, but this significantly reduces the consumer appeal. So those other components still require some miniaturization R&D to be done before everything can fit comfortably into this form-factor.

Another reason is manufacturing cost. Lumus insists that its waveguide solutions can be affordably manufactured at large scales—even for consumer-priced products—and has the backing of major electronics manufacturer Quanta Computer and glass manufacturing specialist Schott. But manufacturing at small scale may not be reasonably affordable when it comes to a device priced for the consumer market. That means waiting until a big player is ready to place a big bet on bringing an AR device to consumers.

For Lumus’ part, the company says it has been working closely with several so-called ‘tier-1’ technology companies (a category which would include Facebook, Apple, Google, and others) for years now. Lumus expects to see the first major consumer product incorporating its waveguide solution in 2024.
