Unreal Engine 5 shipped earlier this month after launching in Early Access last year. On modern PCs and next-gen consoles, its ‘Nanite’ geometry system brings a radically new approach to how games are made and rendered. In previous engines, artists import reduced-detail versions of the original assets they create. When you move far enough away from those assets, an even lower-detail version generated in advance (either manually or automatically) is displayed instead. This is called LOD (Level of Detail).
Nanite upends this approach. Artists import the full movie-quality assets and the geometric detail is scaled automatically in real time based on your distance from the model. Virtual objects look incredibly detailed up close, and don’t “pop in” or “pop out” as you move away from them.
However, Nanite and its associated Lumen lighting system don’t work in VR and aren’t even available on Android. VR developers have to use the legacy geometry and lighting systems, negating many of the advantages of the new engine.
If you’re migrating from Unreal Engine 4, Epic has an important guide on how to do this.
Meta notes the following features are not yet implemented:
passthrough or spatial anchors
late latching
Application SpaceWarp
mobile tonemap subpasses
UE5 ports of sample and showcase projects
As such, Meta still recommends sticking with Unreal Engine 4.27 for serious app development.
To access the Oculus UE5 branch you first need to register your GitHub ID with Epic – if you don’t do this you’ll get a 404 error when trying to access it.
HTC’s Vive Focus 3 standalone headset now has beta support for OpenXR content.
OpenXR is the open standard API for VR and AR development. It was developed by Khronos, the same non-profit industry consortium that manages OpenGL. The OpenXR working group includes all the major companies in the space, such as Meta, Sony, Valve, Microsoft, HTC, NVIDIA, and AMD – but notably not Apple. OpenXR 1.0 was officially released in 2019.
The promise of OpenXR is to let developers build apps that can run on any headset without having to specifically add support by integrating proprietary SDKs. Developers still need to compile separate builds for different operating systems, but all current standalone VR headsets use Android.
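To illustrate the idea, here’s a minimal sketch of the instance-creation call every OpenXR app starts with (assuming the standard OpenXR loader and headers are installed; the application name is just a placeholder). The same code targets Meta’s, HTC’s, Valve’s, or any other conformant runtime without pulling in vendor-specific SDKs.

```cpp
// Minimal sketch: creating an OpenXR instance is the same call on every
// conformant runtime. The vendor-specific parts live in the runtime, not here.
#include <openxr/openxr.h>
#include <cstring>
#include <cstdio>

int main() {
    XrInstanceCreateInfo createInfo{XR_TYPE_INSTANCE_CREATE_INFO};
    std::strcpy(createInfo.applicationInfo.applicationName, "CrossVendorApp");
    createInfo.applicationInfo.applicationVersion = 1;
    createInfo.applicationInfo.apiVersion = XR_CURRENT_API_VERSION;

    XrInstance instance = XR_NULL_HANDLE;
    if (XR_FAILED(xrCreateInstance(&createInfo, &instance))) {
        std::printf("No OpenXR runtime available\n");
        return 1;
    }
    // ...query a system, create a session and swapchains as usual...
    xrDestroyInstance(instance);
    return 0;
}
```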
Last year Meta deprecated its proprietary Oculus SDK in favor of OpenXR, so Vive Focus 3’s support for OpenXR should make it easier to port Quest apps. HTC still only markets the headset to businesses, though – the $1299 price includes a two-year business license, extended warranty, and priority support.
There are still barriers to releasing VR apps to other stores however. Platform level APIs like friend invites, parties, leaderboards, cloud saves, and avatars still differ. Porting involves a lot more work than the ideal of OpenXR may suggest.
At GDC last month, Meta shared the clearest look yet at the number of apps reaching various revenue milestones on the Quest platform. So far, 124 apps have earned $1 million in revenue or more, while 8 have exceeded $20 million.
Last month at GDC, Meta’s Director of Content Ecosystem, Chris Pruett, shared the clearest look yet at how many apps are seeing material success on the Quest store over the last year. Lining up the data with some previously shared figures gives us an interesting look at how the Quest store is progressing over time.
First, here are the numbers Pruett shared at GDC, which include data up to February 2022.
An important note here is that these revenue buckets are exclusive, meaning 35 of the apps have exceeded $1 million but not $2 million (and so on). In total, 124 apps have exceeded the $1 million mark.
Quest Apps Reaching Revenue Milestones Over Time
Lining up the data with the same figures shared by Meta previously gives us an idea of how things are trending over time. First is a naive look at the data points side-by-side:
But this doesn’t account for the time between data points, nor differences in seasonal sales volume. With some interpolation we can account for time and seasonal differences by looking at a yearly average of all apps exceeding $1 million in revenue.
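The interpolation itself is straightforward; here’s a hypothetical sketch of the approach (the data points below are placeholders, not Meta’s actual figures, apart from the 124 total from February 2022):

```cpp
// Hypothetical sketch: linearly interpolate cumulative milestone counts between
// known data points, then take trailing 12-month differences to get a yearly rate.
#include <cstdio>

struct DataPoint { float month; float cumulativeApps; }; // months since an arbitrary reference date

float InterpolateCount(const DataPoint* pts, int n, float month) {
    if (month <= pts[0].month) return pts[0].cumulativeApps;
    for (int i = 1; i < n; ++i) {
        if (month <= pts[i].month) {
            float t = (month - pts[i - 1].month) / (pts[i].month - pts[i - 1].month);
            return pts[i - 1].cumulativeApps +
                   t * (pts[i].cumulativeApps - pts[i - 1].cumulativeApps);
        }
    }
    return pts[n - 1].cumulativeApps;
}

int main() {
    // Placeholder data points (NOT real figures), ending at the reported 124 total.
    const DataPoint pts[] = {{8, 20}, {16, 60}, {24, 95}, {33, 124}};
    const int n = 4;
    for (float month = 20; month <= 33; month += 1) {
        float yearlyRate = InterpolateCount(pts, n, month) - InterpolateCount(pts, n, month - 12);
        std::printf("month %.0f: ~%.0f apps crossed $1M in the prior 12 months\n", month, yearlyRate);
    }
    return 0;
}
```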
Of our three charts so far, this is the most normalized way to look at the data. The chart above tells us how many apps are reaching the $1 million milestone each year, on average. The number is increasing overall, which is a good sign, though the rate of the increase is slowing.
One variable that could significantly impact how we understand this data is the rate at which apps are being allowed into the store (since Meta hand-picks which apps do and don’t go on the Quest store). The number of apps in the store has been growing at a surprisingly linear rate, so we can say it likely isn’t having much of an impact on the chart above.
Granted, many of Quest’s early apps have been previously successful VR games that were ported to the headset, which makes their success more assured than the growing number of brand new VR apps that have launched as the platform has aged. This could account for the slowing rate of apps reaching $1 million each year on Quest.
This week Epic Games released the latest version of its next-gen game engine, Unreal Engine 5. Though the new version brings improvements in many areas, its most notable features are Lumen (global illumination) and Nanite (micro-polygon geometry), which could be game-changers for VR immersion. Unfortunately the company says neither feature is ready for VR developers.
Available as of this week for all developers, Unreal Engine 5 promises to usher in a new era of game development which makes it easier for developers to create games with extremely high quality assets and realistic lighting. That’s thanks to the engine’s two new key features, Nanite and Lumen.
Nanite
Nanite is what Epic calls a “virtualized geometry” system which radically improves the geometric detail in game scenes.
A real-time scene rendered using Nanite | Image courtesy Epic Games
Previously developers would create high quality 3D models as a sort of ‘master’ reference, which would eventually have their geometry greatly simplified (leading to a reduction in detail and complexity) before being pulled into the game engine. The same model generally gets several versions with increasingly reduced detail which ‘pop’ between each other depending upon how far the game camera is from the model (known as ‘level of detail’ or ‘LOD’). This allows the game to show higher quality models up close while switching to the reduced quality models at a distance to save performance.
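As a rough illustration (hypothetical code, not Unreal’s implementation), traditional discrete LOD selection boils down to picking one pre-built mesh per distance band:

```cpp
// Hypothetical sketch of discrete LOD selection: the engine picks one of
// several pre-authored meshes based on how far the camera is from the model.
#include <vector>

struct LodMesh {
    float maxDistance;   // use this mesh while the camera is closer than this
    int   triangleCount; // stand-in for the actual simplified mesh data
};

// 'lods' must be ordered from most detailed (smallest maxDistance) to least detailed.
const LodMesh& SelectLod(const std::vector<LodMesh>& lods, float cameraDistance) {
    for (const LodMesh& lod : lods) {
        if (cameraDistance < lod.maxDistance) {
            return lod;      // first band that covers the camera distance
        }
    }
    return lods.back();      // beyond every threshold: coarsest mesh
}
```

The visible ‘pop’ happens the moment the camera distance crosses one of those thresholds and the engine swaps meshes; Nanite’s continuous approach, described next, removes both the thresholds and the hand-authored meshes.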
Nanite essentially functions like a continuous LOD system that draws detail from the original ‘master’ model, instead of relying on pre-built models with reduced detail. In each frame the system references the master model and pulls out the maximum level of detail needed for the given camera distance. Not only does this eliminate the need to create discrete LOD models, it also means that the range of detail for a model can be much greater, allowing players to see incredibly fine detail—right down to the original polygons of the ‘master’ model—if they get close enough.
Lumen
Meanwhile, Unreal Engine 5’s new lighting system, Lumen, greatly simplifies game lighting thanks to global illumination.
Real-time lighting rendered with Lumen and Nanite | Image courtesy Epic Games
Realistic lighting can be very computationally expensive; without Lumen, many games use a combination of lighting techniques to achieve the look they want while maintaining game performance. A given scene might use pre-calculated ‘baked’ lighting (which isn’t interactive with the rest of the scene) along with a small number of real-time light-sources that cast shadows on certain objects in the scene, and various ‘screen-space’ effects to emulate realistic lighting.
Lumen unifies lighting into a single approach called global illumination, which aims to make every light in the scene—even the Sun—into a real-time light that is interactive with other lights and the rest of the scene. This includes realistic bounced light which spreads color throughout the scene based on the color of the objects that the light hits. For instance, white sunlight shining into a white room with a red floor will cast some red light onto the walls as it bounces from the red floor. Such bounced lighting is an essential component of photo-real lighting.
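As a toy illustration of that bounced-light example (a single Lambertian-style bounce, not Lumen’s actual algorithm):

```cpp
// Toy one-bounce diffuse calculation: light reflected off a surface is the
// incoming light tinted by the surface's albedo (color) and attenuated by the
// cosine of the angle at which the light arrives.
struct Color { float r, g, b; };

Color Multiply(Color a, Color b) { return {a.r * b.r, a.g * b.g, a.b * b.b}; }
Color Scale(Color c, float s)    { return {c.r * s, c.g * s, c.b * s}; }

Color BouncedLight(Color incoming, Color albedo, float cosIncidence) {
    return Scale(Multiply(incoming, albedo), cosIncidence);
}

// Example: white sunlight {1,1,1} hitting a red floor {0.9,0.1,0.1} at ~45 degrees
// re-emits roughly {0.64, 0.07, 0.07}, a red tint cast onto nearby walls.
```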
Both Nanite and Lumen could massively improve immersion in VR thanks to their ability to hugely increase geometric detail in nearby objects (which is especially noticeable with the stereoscopic capability of VR headsets) and to create more realistic and interactive real-time lighting.
“No Timeframe” for Nanite or Lumen in VR
Unfortunately, Epic says that neither Nanite nor Lumen in UE5 is ready for VR yet.
“While we have no timeframe to share in terms of Lumen and Nanite support for VR experiences, we are exploring how to bring those UE5 features to additional platforms,” the company tells Road to VR.
But, Epic says, that doesn’t mean VR developers shouldn’t use UE5.
“VR developers can leverage most of Unreal Engine 5’s production-ready tools and features, such as the new UI, the new suite of modeling tools, creator tools such as Control Rig, MetaSounds, and World Partition for large open environments.”
What’s the Holdup?
Though both Nanite and Lumen are capable of creating incredible looking scenes, they aren’t ‘free’ from a performance standpoint.
“Although the advantages [of Nanite] can be game-changing, there are practical limits that still remain. For example, instance counts, triangles per mesh, material complexity, output resolution, and performance should be carefully measured for any combination of content and hardware,” the company warns developers. “Nanite will continue to expand its capabilities and improve performance in future releases of Unreal Engine.”
Lumen, meanwhile, is only designed to target 60 FPS for large outdoor scenes and 30 FPS for indoor scenes on the very latest console hardware. That’s a far cry from the 90 FPS minimum for most PC VR headsets. And with Quest 2 being significantly less powerful than the latest consoles, there’s just no way it’ll be able to handle those kinds of demands. That may mean the ultimate limitation in bringing these features to VR is simply performance (or the lack thereof).
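A rough back-of-the-envelope comparison of the frame budgets involved (assuming each eye is rendered as a separate view with no work shared between eyes, which overstates the gap somewhat):

\[
\frac{1000\,\text{ms}}{60\,\text{FPS}} \approx 16.7\,\text{ms per frame}
\qquad\text{vs.}\qquad
\frac{1000\,\text{ms}}{90\,\text{FPS} \times 2\ \text{views}} \approx 5.6\,\text{ms per view}
\]

In other words, a 90Hz stereo renderer may have only around a third of the per-view time that Lumen currently budgets for a single 60 FPS flat frame.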
The same scene is often less performant when rendered for VR than for a flat screen due to the need for stereoscopic rendering (and usually higher resolutions). Tricks like single-pass stereo and foveated rendering help reduce this overhead, but may not yet work in conjunction with the likes of Nanite and Lumen.
So it may be a matter of optimization and more powerful hardware before it’s practical to bring these features to VR experiences. From Epic’s perspective, Unreal Engine has just a small fraction of VR developers compared to the likes of the Unity engine, on which the vast majority of VR games today are built. Especially with the trajectory of Meta’s Quest 2 as the most popular target platform for developers (and its lack of power compared to consoles and PCs), it seems likely that optimizing Nanite and Lumen for VR is very low on Epic’s priority list.
Hopefully we’ll see these next-gen engine features in VR eventually, but it might not happen for some time yet.
Meta today announced it’s expanding the company’s VR developer programs to include training on how to build games and experiences inside Horizon Worlds, Meta’s growing social VR platform. Devs can also nab some cash—a strong incentive to help Meta build out its fledgling metaverse.
“Today, we’re inviting you to further unleash your imagination, expand your expertise, and bring your vision to life by joining one of our Builder Tracks for Horizon Worlds,” Meta says in a blog post.
This is set to include training through a Horizon Worlds boot camp, support from Meta experts and other VR devs currently building in Worlds, and the chance to compete for funding as well as cash prizes.
It’s doing this through Oculus Start and Oculus Launch Pad. Oculus Start is a program created for developers who have either launched a VR application or are close to releasing one, while Oculus Launch Pad was created to support promising VR content creators from diverse backgrounds. Both programs have been important in filling out the Store with a wide swath of VR content.
Through the new builder tracks, Meta says it’s offering up “over $500,000 in funding and cash prizes to program developers creating unique, innovative, and engaging worlds in Horizon Worlds.”
Meta’s new builder tracks expand upon the company’s $10 million creators fund, which was launched late last year in an effort to attract developers to Horizon Worlds.
In comparison to established social VR platforms such as VR Chat and Rec Room, Meta’s Horizon Worlds still has a ways to grow before it sees record-breaking concurrent user numbers and more consistent engagement.
It’s still only available on Meta headsets, and only to users in the US and Canada. And while that’s not set to change today, it’s clear the company is looking to fill out Horizon Worlds with more grabby content to make it less of a virtual chatroom and more of a virtual destination.
If you’re building a VR game or experience for Horizon Worlds—or just thinking of building one—head over to either Oculus Start or Oculus Launch Pad to learn more, and sign up for your chance to access funding and expert support.
Meta today showed off an AI-powered ‘concept’ for creating VR worlds with your voice, called Builder Bot.
Instead of picking objects from a user interface and placing them with controllers, Builder Bot lets you create by simply describing out loud what you want. In the demo, Mark Zuckerberg asks for a beach, then for a specific type of clouds, then props like trees and a picnic. His colleague is even able to ask for specific ambient background sounds, and Zuckerberg asks the stereo to play a music genre.
Zuckerberg describes Builder Bot as a “concept”, warning that “there are a lot of challenges we still need to solve”. The delay between giving a command and seeing it actualized in the demo seems impossibly short. Further, Meta didn’t make clear how much of the demo is actually real, nor whether the 3D models are dynamically generated or picked by the AI from a library.
Meta says Builder Bot is possible thanks to self-supervised learning (SSL), a relatively new way to train AI models that the company has helped pioneer in recent years. Most AI today uses supervised learning (SL), which requires vast amounts of data carefully labeled by humans. But that’s obviously not how humans or animals learn, and Meta’s researchers say relying on labeling is a bottleneck to developing more generalized AI. With SSL, AI can get a deeper understanding of a concept from far less data, none of which needs to be labeled. Existing projects like OpenAI’s DALL·E use SSL to generate images you describe with text, but this is the first time we’ve seen the idea applied to virtual world creation.
With Horizon Worlds (which this demo seems to be based on) Meta is already trying to lower the barriers to creating VR experiences. Like Rec Room, Worlds lets you build inside VR by using your controllers to place & manipulate shapes and using a visual scripting system to add dynamic functionality. But not everyone is comfortable using console-like controllers to navigate intricate menu systems. Projects like Builder Bot hint at a future where just like in Star Trek, anyone can create their own virtual worlds with just their voice.
Gleechi’s VirtualGrasp software development kit is now available through the company’s early access program, offering auto-generated and dynamic hand interactions and grasp positions for VR developers.
We first reported on Gleechi’s technology back in early 2020 when the company released footage of its VirtualGrasp SDK in action, which automates the programming of hand interactions in VR and allows for easy grasping and interaction with any 3D mesh object.
Two years on, the SDK is now available through Gleechi’s early access program, which you can apply for here. The SDK supports all major VR platforms, provided as a plug-in that can be integrated into existing applications, with support for Unity and Unreal, for both controller and hand tracking interactions.
Given the timing of release, you might ask what the difference is between Meta’s new interaction SDK and Gleechi’s VirtualGrasp SDK. The key difference is that Meta’s technology uses set positions for grasps and interactions – if you pick up an object, it can snap to pre-determined grasp positions that are manually assigned by the developer.
On the other hand (pun intended), the Gleechi SDK is designed as a dynamic system that can generate natural poses and interactions between hands and objects automatically, using the 3D mesh of the objects. This means there should be much less manual input and assignment needed from the developer, and allows for much more immersive interactions that can appear more natural than pre-set positions.
You can see an example of how the two SDKs differ in the graphic above, created with screenshots taken from a demonstration video provided by Gleechi. On the left, the interaction uses the Meta SDK – the mug therefore uses predetermined positions and grab poses that are manually assigned by the developer. In this case, it’s set so the user will always grab the mug by the handle. Multiple grab poses are possible with the Meta SDK, but each has to be manually set up by the developer.
In the middle and the right, you can see how Gleechi’s SDK allows the user to dynamically grab the mug from any angle or position. A natural grab pose is applied to the object depending on the position of the hand, without the developer having to set up the poses manually. It is done automatically by the SDK, using the 3D mesh of the object.
Gleechi also noted that its SDK supports manual grasp positions as well. Developers can use the dynamic grasp system to find a position they’d like to set as a static grasp and then lock it in. For example, a developer could use VirtualGrasp’s dynamic system to pick the mug up from the top, as pictured above, and then set that as the preferred position for the object. The mug will then always snap to that pose when picked up, as opposed to dynamically from any position. This allows you to set static hand grip poses for some objects, while still using the automatic dynamic poses for others.
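To make the distinction concrete, here’s a hypothetical sketch contrasting the two approaches (illustrative types and functions only, not the actual Meta Interaction SDK or VirtualGrasp APIs):

```cpp
// Illustrative contrast between preset grab poses and mesh-driven grasping.
#include <vector>
#include <limits>

struct Vec3 { float x, y, z; };

static float Dist2(Vec3 a, Vec3 b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Preset approach: the developer authors grab points per object (e.g. the mug
// handle) and the hand snaps to whichever authored point is nearest.
Vec3 SnapToAuthoredGrab(const std::vector<Vec3>& authoredGrabPoints, Vec3 hand) {
    Vec3 best = authoredGrabPoints.front();
    float bestDist = std::numeric_limits<float>::max();
    for (Vec3 p : authoredGrabPoints) {
        float d = Dist2(p, hand);
        if (d < bestDist) { bestDist = d; best = p; }
    }
    return best;
}

// Mesh-driven approach (what a dynamic system automates): derive a contact
// point from the object's own geometry wherever the hand reaches in, then fit
// the finger pose around the local surface. Here it is reduced to the
// contact-point step, using the mesh vertices directly instead of authored points.
Vec3 ContactPointFromMesh(const std::vector<Vec3>& meshVertices, Vec3 hand) {
    return SnapToAuthoredGrab(meshVertices, hand); // nearest surface vertex
}
```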
We were able to try a demo application using the Gleechi SDK on a Meta Quest headset and can confirm that the dynamic poses and interactions work as described and shown above. We were able to approach several objects from any angle or position and the SDK would apply an appropriate grasp position that felt and looked much more natural than most other interactions with pre-set poses and grasp positions.
Meta just released a hand Interaction SDK and Tracked Keyboard SDK for its Quest VR headsets.
Interaction SDK
There are already interaction frameworks available on the Unity Asset Store, but Meta is today releasing its own free alternative as an experimental feature. That experimental descriptor means developers can play around with it or use it in SideQuest apps, but can’t yet ship apps using it to the Store or App Lab.
Interaction SDK supports both hands and controllers. The goal is to let developers easily add high quality hand interactions to VR apps instead of needing to reinvent the wheel.
The SDK supports:
Direct grabbing or distance grabbing of virtual objects, including “constrained objects like levers”. Objects can be resized or passed from hand to hand.
Custom grab poses so hands can be made to conform to the shape of virtual objects, including tooling that “makes it easy for you to build poses which can often be a labor intensive effort”.
Gesture detection including custom gestures based on finger curl and flexion.
2D UI elements for near-field floating interfaces and virtual touchscreens.
Pinch scrolling and selection for far-field interfaces similar to the Quest home interface.
Meta says Interaction SDK is already being used in Chess Club VR and ForeVR Darts, and claims the SDK “is more flexible than a traditional interaction framework—you can use just the pieces you need, and integrate them into your existing architecture”.
Tracked Keyboard SDK
Quest is capable of tracking two keyboard models: the Logitech K830 and the Apple Magic Keyboard. If you pair either keyboard over Bluetooth, it will show up as a 3D model in the home environment for 2D apps like Oculus Browser.
Tracked Keyboard SDK allows developers to bring this functionality to their own Unity apps or custom engines. Virtual keyboards are slower to type with and result in more errors, so this could open up new productivity use cases by making text input in VR practical.
The SDK was made available early to vSpatial, and has been used by Meta’s own Horizon Workrooms for months now.
Our series Inside XR Design examines specific examples of great XR design. Today we’re looking at the clever design of Stormland’s weapons, locomotion, and open-world.
Editor’s Note: Now that we’ve rebooted our Inside XR Design series, we’re re-publishing older entries for those who missed them.
You can find the complete video below, or continue reading for an adapted text version.
By the time the studio began development on Stormland, it had already built three VR games. That experience shows through clearly in many of Stormland’s cleverly designed systems and interactions.
In this article we’re going to explore the game’s unique take on weapon reloading and inventory management, its use of multi-modal locomotion, and its novel open-world design. Let’s start with weapons.
Weapons
Like many VR games, one of the primary modes of interaction in Stormland is between the player and their weapons. For the most part, this works like you’d expect: you pull your gun out of a holster, you can hold it with one hand or two, and you pull the trigger to fire. But when your gun runs out of ammo, you do something different than in most VR games… you rip it in half.
Ripping guns apart gives you both ammo for that weapon type and crafting materials which are used to upgrade your weapons and abilities in the game. In that sense, this gun-ripping pulls double-duty as a way to replenish ammo and collect useful resources after a battle.
Most gun games in VR use magazines to replenish a weapon’s ammo, and while this can certainly work well and feel realistic, it’s also fairly complex and prone to error, especially when the player is under pressure.
Dropping a magazine to the ground in the middle of a firefight and needing to bend over to pick it up might feel reasonable in a slower-paced simulation game, but Stormland aims for a run-and-gun pace, and therefore opted for a reloading interaction that’s visceral, fun, and easy to perform, no matter which weapon the player is using.
This ‘ripping’ interaction, combined with some great visual and sound effects, is honestly fun no matter how many times you do it.
Interestingly, Stormland’s Lead Designer, Mike Daly, told me he wasn’t convinced when one of the game’s designers first pitched the idea for ripping guns apart. The designer worked with a programmer to prototype the idea and eventually sold Mike and the rest of the team on implementing it into the game. They liked it so much that they even decided to use the same interaction for non-gun items like health and energy canisters.
A streamlined approach to weapon reloading isn’t the only thing that Stormland does to make things easier for the player in order to maintain a run-and-gun pace; there’s also a very deliberate convenience added for weapon handling.
If dropping a magazine in the middle of a fight can hurt the pace of gameplay, dropping the gun itself can stop it outright. In Stormland, the designers chose not to punish players for accidentally dropping their gun; instead, the weapon simply floats in place for a few seconds, giving the player a chance to grab it again without bending down to pick it up from the floor.
And if they simply leave it there, the gun will kindly return to its holster. This is a great way to maintain realistic interactivity with the weapons while avoiding the problem of players losing weapons in the heat of combat or accidentally failing to holster them.
Allowing the weapons to float also has the added benefit of making inventory management easier. If your weapon holsters are already full but you need to shuffle your guns, the floating mechanic works almost like a helpful third-hand to hold onto items for you while you make adjustments.
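Conceptually, that forgiving drop behavior boils down to a short grace-period timer; here’s a hypothetical sketch (not Insomniac’s actual code, and the three-second window is an assumed value):

```cpp
// Hypothetical 'forgiving drop': a released weapon hovers in place for a grace
// period so the player can re-grab it, then returns to its holster.
struct Weapon {
    bool  held = false;
    bool  floating = false;
    float floatTimer = 0.0f;
};

const float kFloatGracePeriod = 3.0f; // seconds; assumed value, not the game's actual tuning

void ReturnToHolster(Weapon& w) {
    // Snap the weapon back into its holster slot (details omitted).
    w.floating = false;
}

void OnWeaponReleased(Weapon& w) {
    w.held = false;
    w.floating = true;               // hover where it was dropped
    w.floatTimer = kFloatGracePeriod;
}

void TickWeapon(Weapon& w, float deltaSeconds, bool playerGrabbedIt) {
    if (!w.floating) return;
    if (playerGrabbedIt) {           // caught during the grace period
        w.floating = false;
        w.held = true;
        return;
    }
    w.floatTimer -= deltaSeconds;
    if (w.floatTimer <= 0.0f) {      // grace period elapsed
        ReturnToHolster(w);
    }
}
```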
Multi-modal Locomotion
Locomotion design in VR is complex because of the need to keep players comfortable while still achieving gameplay goals. Being an open-world game, Stormland needed an approach to locomotion that would allow players to move large distances, both horizontally and vertically.
Instead of sticking with just one approach, the game mixes distinct modes of locomotion and encourages players to switch between them on the fly. Stormland uses thumbstick movement when you’re on firm ground, climbing when you need to scale tall structures, and gliding for large scale movement across the map.
Thumbstick movement works pretty much how you’d expect, but climbing and gliding have some smart design details worth talking about.
Climbing in Stormland works very similarly to what you may have seen in other VR games, with the exception that your hand doesn’t need to be directly touching a surface in order to climb. You can actually ‘grab’ the wall from several feet away. This makes it easier to climb quickly by requiring less precision between hand placement and grip timing. It also keeps the player’s face from being right up against the wall, which is more comfortable, and means they don’t need to strain their neck quite as much when looking up for their next hand-hold.
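In code terms, that generous grab check might look something like this hypothetical sketch (the one-meter reach is an assumed value, not Insomniac’s actual tuning):

```cpp
// Hypothetical generous climb-grab check: the hand can latch onto a climbable
// surface from a distance rather than requiring direct contact.
#include <cmath>

struct Vec3 { float x, y, z; };

float Distance(Vec3 a, Vec3 b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

const float kGrabReach = 1.0f; // meters; assumed value

bool CanGrabClimbSurface(Vec3 handPosition, Vec3 nearestClimbablePoint) {
    // A strict climbing system would demand near-zero distance (actual contact);
    // a larger reach makes fast climbing far more forgiving.
    return Distance(handPosition, nearestClimbablePoint) <= kGrabReach;
}
```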
And then there’s Stormland’s gliding locomotion which lets players quickly travel from one end of the map to another. This fast movement seems like it would be a recipe for dizziness, but that doesn’t seem to be the case—and I’ll talk more about why in a moment.
With these three modes of locomotion—thumbstick movement, climbing, and gliding—Stormland does an excellent job of making players feel like they’re free to fluidly move wherever they want and whenever they want, especially because of the way they work in tandem.
Since 2019 Epic Games (well known as the creators of Unreal Engine & Fortnite) has run the Epic MegaGrants program, a $100 million fund to financially support projects built with Unreal Engine. In 2021 the program awarded grants to 31 XR projects.
Epic recently recapped the complete list of MegaGrant recipients in 2021, comprising a whopping 390 individual projects, each of which received a grant from the program of up to $500,000.
By our count, 31 of those were built with XR in mind. The projects range widely from games to simulation to education and more. Here are a few that caught our eye, along with the complete list of XR recipients further below.
Epic says that MegaGrants awards are not investments or loans, and recipients can use the money to do “whatever will make their project successful,” with no oversight from the company. Similarly, recipients retain full rights to their IP and can choose to publish their projects however they want. If you’re working on something related to Unreal Engine, you can apply for consideration too!