Live Webinar: Get Insider Insights on Marketing XR Startups & Content This Wednesday

Join Road to VR’s Ben Lang and VentureBeat’s Dean Takahashi for a live discussion hosted by Spatial Collective about the challenges and opportunities of marketing XR startups, games, and apps.

Organized by XR veteran Don Stein, Spatial Collective is a network of AR & VR founders building the next wave of spatial computing startups. Its 130+ members, who have collectively raised $300M in venture capital and employ hundreds, come together weekly for incredible conversations with pioneers of the industry.

This Wednesday, July 17th at 11:30AM PT (your timezone here), Spatial Collective is hosting a live, remote conversation with Road to VR’s Ben Lang and VentureBeat’s Dean Takahashi, both veteran journalists in the XR space. RSVP for the event here.

The conversation will cover how the press thinks about covering stories in this industry and the best strategies for getting eyes on your XR startup, app, or game.

Founders working in the areas of XR, 3D, computer vision and venture capital are all encouraged to join this live conversation.

RSVP for the Event

Hosting the discussion is Spatial Collective organizer Don Stein. Stein has made XR scout investments for General Catalyst, worked at Meta deploying capital to Quest apps, and raised $5 million as CEO of his own VR startup.


Crafting Memorable VR Experiences – The Interaction Design of ‘Fujii’

Creating a VR experience that truly immerses the user is no easy feat. Pulling it off requires a careful blend of graphics, animation, audio, and haptics, all working in deliberate concert to suspend disbelief and engross the user. Fujii is a joyful interactive adventure and a masterclass in rich VR interactions. The President of Funktronic Labs, the studio behind the game, is here to tell us more about his design approach.

Guest Article by Eddie Lee

Eddie Lee is the President and co-founder of Funktronic Labs, an LA-based independent game studio that focuses on delivering high-quality experiences through games, XR, and other interactive media. His experience spans nearly 15 years in the fields of graphics, game design, and computer simulations.

Today, we are thrilled to pull back the curtain and give you an inside look into our thought process while developing Fujii, a title that has been a labor of love for us at Funktronic Labs. As the landscape of virtual reality continues its dynamic evolution, we saw a golden opportunity not just to adapt, but to breathe new life into Fujii. We’re eager to re-introduce our experience to a burgeoning new community of VR enthusiasts. Stick with us as we delve into the design process that originally brought this magical floral adventure to life.

A Brief Foray into Funktronic Labs

Founded a decade ago at the intersection of art, technology, and design, Funktronic Labs took the plunge into VR development back in 2015, a time when the industry was still in its infancy and precedents were scarce. This compelled us to adopt a ground-up, first-principles approach to game design and VR interactions—an ethos that has become the backbone of all our projects since then—from our pioneering VR venture, Cosmic Trip, to Fujii, and all the way to our latest release, The Light Brigade.

Fujii – A Harmonious Blend of Nature and Technology

Fujii first made its debut as an auteur, art-focused launch title for the release of Quest 1 in May 2019. This project holds a special place in our hearts as a resonant blend of artistic vision and interactive design, exploring the wonders of humanity’s connection with nature. Conceived as a soulful sojourn, Fujii interweaves the realms of nature exploration and whimsical gardening, creating an interactive meditative space for players to lose themselves in.

In an industry landscape where unconventional, art-focused projects often struggle to find support, we were extraordinarily fortunate to connect with Meta (at the time known as Oculus). Recognizing the artistic merit and unique potential in our vision, they granted us the exceptional opportunity and support to bring this artsy-fartsy, non-core experience to fruition.

Fujii’s Overall Design Philosophy

During Fujii’s development, we were acutely aware that a substantial portion of our audience would be stepping into the realm of VR for the first time via the Quest 1—the industry’s first major standalone 6DoF headset.

This keen insight significantly sculpted our design approach. We opted for intuitive, physics-driven interactions that mirror the tactile simplicity of the natural world, consciously avoiding complex VR interactions, elaborate interfaces or dense text.

By refraining from controls that demand steep learning curves, we zeroed in on cultivating immediate, natural interactions, thereby offering a warm invitation to VR newcomers of all ages and gameplay experience. Remarkably, this has led to an incredibly diverse player base, attracting everyone from young children to the elderly, many of whom have found Fujii to be an accessible and joyous experience. [Editor’s note: we quite liked the game too].

VR as a New Interaction Paradigm

It’s an oversimplification to regard VR as merely a ‘stereoscopic monitor strapped to your face.’ We see it as much more than just a visual spectacle; VR introduces a groundbreaking paradigm shift in user interaction. With its 6DoF capabilities, VR transcends conventional gaming by enabling intuitive physical actions like grabbing, touching, and gesturing.

This new paradigm unlocks a whole new layer of tactile engagement and immersion, connecting players directly with their virtual surroundings. This stands in contrast to the abstract, button-press or cursor interactions that characterize traditional, non-VR games. In essence, VR offers a far more integrated and visceral form of engagement, elevating the gaming experience to a whole new level.

Physics-based Inventory

In the realm of VR, the addition of physics and animations to objects isn’t just aesthetic; it serves as a vital conduit for player engagement and understanding. The enjoyment derived from physics-based interactions comes from the brain’s innate satisfaction in grasping the object’s physical properties—be it weight, drag, or inertia.

Absent these nuanced physics, interactions feel insubstantial and weightless, breaking the immersive spell. As a guiding principle, consider incorporating physics into every touchpoint, enriching the player’s tactile connection to the game world and making interactions incredibly rewarding.

To illustrate, let’s delve into the inventory system in Fujii. Far from being a mere menu or grid, our inventory system is organically woven into the fabric of the game’s universe. We’ve opted for a physically-driven inventory, where items like seeds find their homes in “natural slots” in the virtual environment, echoing real-world interactions.

This design choice is not only intuitive but negates the need for a separate tutorial. To further enhance this connection, we’ve enriched these interactions with animations and robust physics feedback, providing an additional layer of tangibility that helps players more fully connect with their virtual environment.
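To make that concrete, here’s a minimal Unity-style sketch of how a physically-driven “natural slot” could work. This is purely illustrative (the names and numbers are hypothetical, not our actual production code): rather than teleporting an item into place, the slot applies a gentle spring-like pull so a nearby seed settles in with believable weight.

```csharp
using UnityEngine;

// Hypothetical sketch of a physically-driven "natural slot". Instead of
// snapping an item instantly, it pulls nearby physics items toward itself
// so they settle into place under the physics engine.
public class NaturalSlot : MonoBehaviour
{
    public string acceptedTag = "Seed"; // which items this slot accepts
    public float captureRadius = 0.15f; // capture range in meters
    public float pullStrength = 20f;    // spring-like attraction

    void FixedUpdate()
    {
        // Apply a gentle acceleration toward the slot to each candidate item.
        foreach (var hit in Physics.OverlapSphere(transform.position, captureRadius))
        {
            if (!hit.CompareTag(acceptedTag) || hit.attachedRigidbody == null)
                continue;

            Rigidbody body = hit.attachedRigidbody;
            Vector3 toSlot = transform.position - body.position;
            body.AddForce(toSlot * pullStrength, ForceMode.Acceleration);
        }
    }
}
```

Because the item arrives under physics rather than a scripted snap, it keeps its weight, drag, and inertia all the way into the slot, which is exactly the tactile feedback described above.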

Plants and Touch

Another compelling instance of the importance of physics-based design in VR can be found in our intricate interaction model for plants within Fujii. Human interaction with plants is often tactile and visceral; we touch, we feel, we connect. Our aim was to preserve that authentic texture and intimacy in a virtual context. But we went a step further by infusing every plant with musical responsiveness, adding an ethereal layer of magic and wonder to your botanical encounters.

In Fujii, each interaction with plant life is designed to resonate on a meaningful level. Every plant, leaf, and stem adheres to its own tailored set of physics rules. Whether it’s the gentle sway of a leaf in response to your touch or the subtle recoil of a stem, our objective has been to make these virtual interactions indistinguishable from real-life ones.

Achieving this required painstaking attention to detail, coupled with robust physics simulations, ensuring that each touch aligns with natural expectations, thereby deepening your immersion in this magical realm.
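For a sense of what such a simulation can look like, here’s a minimal spring-damper sketch (hypothetical Unity code, not our production implementation): a touch nudges the stem’s angular velocity, and stiffness and damping ease it back to its rest pose so it sways and then settles.

```csharp
using UnityEngine;

// Hypothetical sketch of a per-stem spring-damper. A touch calls Nudge(),
// then the stem springs back toward its rest pose and settles.
public class SwayingStem : MonoBehaviour
{
    public float stiffness = 40f; // pull back toward the rest pose
    public float damping = 6f;    // bleed off velocity so the sway settles

    Quaternion restRotation;
    Vector3 angularVelocity; // degrees per second around local axes

    void Start() => restRotation = transform.localRotation;

    // Called by hand-collision code when the player brushes the stem.
    public void Nudge(Vector3 impulse) => angularVelocity += impulse;

    void Update()
    {
        // Spring toward rest, damp the velocity, then integrate the rotation.
        Vector3 offset = DeltaAngles(transform.localRotation, restRotation);
        angularVelocity += (-stiffness * offset - damping * angularVelocity) * Time.deltaTime;
        transform.localRotation *= Quaternion.Euler(angularVelocity * Time.deltaTime);
    }

    // Signed angle offset (in degrees) of rotation 'a' relative to 'b'.
    static Vector3 DeltaAngles(Quaternion a, Quaternion b)
    {
        (Quaternion.Inverse(b) * a).ToAngleAxis(out float angle, out Vector3 axis);
        if (angle > 180f) angle -= 360f;
        return Mathf.Approximately(angle, 0f) ? Vector3.zero : axis * angle;
    }
}
```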

Watering

Watering plants in Fujii isn’t just a game mechanic; it’s crafted to be a tactile and immersive VR experience that mimics the soothing and nurturing act of watering real plants. From the way the water cascades to how it nourishes the flora, every detail has been considered. Even the extension of your arms into playful, jiggly water hoses has been designed to offer a sense of whimsy while maintaining an air of naturalism. The water interacts realistically with both the plants and the landscape, underlining the game’s commitment to intuitive, lifelike design.

To infuse an additional layer of enchantment into this seemingly simple act, we’ve introduced a delightful touch: any water droplets that fall onto the ground trigger a temporary, flower-sprouting animation. This whimsical feature serves to amplify the ‘reality’ of the droplets, allowing them to interact with the world in a way that grounds them.
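A sketch of that droplet trick (again hypothetical code, with an assumed “Ground” tag and sprout prefab): each droplet listens for a collision with the ground and spawns a short-lived sprout effect aligned to the surface at the point of impact.

```csharp
using UnityEngine;

// Hypothetical sketch: a water droplet that spawns a temporary flower-sprout
// effect where it lands, then removes itself.
public class WaterDroplet : MonoBehaviour
{
    public GameObject sproutEffectPrefab; // the temporary sprout animation
    public float sproutLifetime = 2f;     // seconds before the sprout despawns

    void OnCollisionEnter(Collision collision)
    {
        // Only react to the ground, not to plants or the player.
        if (!collision.gameObject.CompareTag("Ground")) return;

        ContactPoint contact = collision.GetContact(0);
        var sprout = Instantiate(sproutEffectPrefab, contact.point,
                                 Quaternion.FromToRotation(Vector3.up, contact.normal));
        Destroy(sprout, sproutLifetime); // the sprout is temporary, as in the game
        Destroy(gameObject);             // the droplet is consumed on impact
    }
}
```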

The Symphony of Sound Design

In Fujii, sound design is far from peripheral; it’s an integral facet of the game’s immersive landscape. Sound doesn’t merely serve as an auditory backdrop; it plays a pivotal role in how humans subconsciously interpret the physical makeup of the objects they interact with.

When sound, physics, and visuals synergize, they allow the brain to construct a comprehensive mental model of the object’s material properties. Numerous studies have even demonstrated that superior sound design can elevate players’ perception of the graphics, making them appear more lifelike, despite no actual change in visual quality (see this and this).

Seizing this opportunity, we’ve added a unique aural dimension to Fujii. Instead of sticking strictly to realistic, organic sounds, we’ve imbued interactions with melody, notes, and keys, creating an atmosphere of musical exploration and wonder. It’s as if you’re navigating through a symphonic wonderland, amplifying the sense of enchantment and, ideally, offering players a synesthetic experience that enriches their immersion in this captivating virtual world.

Trust the Design Process

In the course of game development, we’ve learned that it’s often impractical, if not impossible, to map out every component of a game’s design during pre-production. Instead, we’ve increasingly embraced a mindset of ‘discovery’ rather than ‘invention’.

While we adhere to certain design principles, the elusive process of ‘finding the fun’ in a VR experience continues to be a mystifying yet exciting challenge, even with over a decade of experience under our belts. The magic often unfolds when the game seems to take on a life of its own, almost as if it wishes to manifest itself in a particular way.

To best facilitate this organic process, we’ve found that maintaining a high degree of flexibility and adopting an iterative mindset is crucial—especially in VR development, where ideas don’t always translate well into enjoyable VR interactions.

Take, for example, the design of our watering mechanic (from earlier): initial concepts like grabbable watering cans or throwable water orbs seemed engaging on paper but fell flat in practice. It wasn’t until we stumbled upon the random idea of water shooting magically from the player’s hands that everything seemed to click into place. Allowing room for such iterative spontaneity has often led us to unexpected yet delightful game mechanics.

– – — – –

In the development of Fujii, our aim was to establish a meaningful benchmark for what can be achieved through simple yet thoughtful interaction design in VR. As technology marches forward, we anticipate that the fidelity of these virtual experiences will continue to gain depth and realism. Yet, the essence of our objective remains constant: to forge not just visually impressive virtual landscapes, but also highly interactive and emotionally resonant experiences.

Members of Funktronic Labs

We hope this in-depth technical exploration has offered you valuable insights into the thought process that goes into shaping a VR experience like Fujii. As we continue on this journey, we invite you to explore and to keep your faith in the limitless possibilities that VR offers. Thank you for sharing this journey with us.


Fujii – A Magical Gardening Adventure is now available at the new low price of $10 on Meta Quest, SteamVR and PSVR 1.

Gibby Presents: Road Testing the Latest Location-based VR Experiences

Location-based VR has bounced back since the pandemic. So let’s get some arcade action! Sandbox VR, the fastest-growing company in the sector, has just opened its 40th location worldwide. Gibby’s Guide went out and about to road test the best the sector has to offer.

Gibby Presents

Gibby Zobel is an English-born journalist, filmmaker and radio broadcaster. Based in Brazil for over 20 years, he produces content for the BBC World Service, BBC News and China Global Television Network (CGTN). Currently on sabbatical in the UK, he writes and publishes Gibby’s Guide, a free independent VR digital magazine, launched in 2021.

As fans of Gibby’s work, we share a selection of the magazine’s feature articles, this one from the latest issue: Gibby’s Guide V23.

“I wanted an immersive experience with my friends, where they could reach out and touch each other and actually make a physical connection. I believed that the real magic of VR would begin when someone could totally lose themselves in the immersive experience. The game, the interface, the disbelief would all fall away and only Experience would be left.”

Steve Zhao, co-founder and CEO of Sandbox VR, outlined his vision of a ‘minimum viable matrix’. Then he built it.

WHAT IS LBVR?

Location-based virtual reality, or LBVR, refers to out-of-home venues where people can play unique VR games they can’t find on consumer headsets, usually as a team. Haptic vests and physical props like guns can add to the experience, as can extras like fans, heaters, water spray, and hydraulics. Games are purpose-built in-house or by studios like Ubisoft.

Sandbox VR began with the opening of its first arena in June 2017, on the 16th floor of a back-alley high rise in Hong Kong with leaky pipes, surrounded by private members’ clubs and other less salubrious neighbours.

Exactly six years later, a premium location in downtown Seattle has just become Sandbox VR’s 40th location worldwide—they are present across the US, Europe and Asia—and they are the fastest-growing company in the sector.

But it very nearly didn’t happen. Covid-19 threatened to strangle the fledgling LBVR industry at birth. The major player at the time, The Void, sank without trace. Some survived. A case in point is Zhao’s Sandbox VR. He relates the story on his Medium page.

“With a nationwide lockdown and all our retail locations mandated to close, our revenue plummeted by 100%. The year was traumatising for the team and myself: running a near-death startup during the worst crisis possible while undergoing an emotionally taxing bankruptcy process, with the team barely getting paid at all,” he says.

But through a drastic 80% staff cut, rent freezes, and financial contortions they pulled through.

Last month they launched their seventh LBVR title built in-house, Seekers of The Shard: Dragonfire, and have announced a deal with Netflix to bring Squid Game to VR later this year, following on from a deal with CBS to make Star Trek: Discovery.

While Sandbox VR is undoubtedly the shining beacon, selling upwards of 100,000 tickets a month, other LBVR companies are making headway.

Czech start-up Divr Labs is backed by billionaire Daniel Kretinsky—known for his investment in West Ham United Football Club—and has opened in a prime location in West London inside Westfield, Europe’s largest shopping centre, in addition to venues in Stockholm and Prague.

Clever design means that Divr Labs can accommodate 48 people an hour inside its 150 square metre space. At full capacity that would equate to an income north of $4M a year in just that one retail area.
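As a rough sanity check on that figure (my own back-of-the-envelope math, assuming a mid-range $50 ticket): 48 people an hour works out to $2,400 an hour, so fewer than five full-capacity hours a day already clears $4M a year ($2,400 × 5 hours × 365 days ≈ $4.4M).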

London’s first VR arcade, DNA VR, has expanded to three venues in the capital and one in Manchester, while another UK venture, Meetspace VR, has seven arcades across the country.

In Guangdong Province in China, Lionsgate Entertainment World, which opened in July 2019, is arguably the most technologically advanced theme park on the planet. It leverages popular film franchises like The Hunger Games and The Twilight Saga to create VR experiences including an indoor VR rollercoaster and a motorbike sim.

ILMxLAB (now ILM Immersive) similarly held a limited run of Star Wars Tales From The Galaxy’s Edge at Disney World Orlando in 2022.

Back in London, Layered Reality also borrows from popular culture, creating a two-hour spectacular with Jeff Wayne’s War Of The Worlds Immersive Experience.

Now in its fourth year, it takes place in a huge purpose-built set. It has been voted the number one immersive experience in the capital on TripAdvisor and has surpassed 175,000 customers.

But what are these experiences like? Do they justify the hype?

Sometimes LBVR can be a terrible disappointment; recent examples include efforts at high-profile arts centres like the Serpentine Galleries and Barbican Centre. A bad experience can be fatal to public interest, especially if it’s someone’s first time in a headset.

They also have to hold up against competing entertainment options. Traditional arcades have had a renaissance and retro places like NQ64, Arcade Club, and Pixel Bar are popular.

Then there’s the emerging trend of projection mapping with motion tracking.

Immersive Gamebox offers their non-VR version of Squid Game, Ghostbusters, and Angry Birds while Chaos Karts promises “an augmented reality experience without the need for headsets” on their illuminated race tracks.

An LBVR Road Trip

Gibby’s Guide—that’s me and a bunch of mates—set out to take the temperature of the industry, travelling to five different locations in the UK.

All of us had some level of experience playing with Quest 2 at home, but none of us had been to an LBVR attraction.

Clearly this sample is geographically specific, but some venues, like Sandbox VR, can also be found across the US and worldwide, and many of the details are common to others.

None of the LBVR venues we visited used Quest 2; various iterations of the HTC Vive (usually the Focus 3) or Pimax were the headsets of choice.

Prices varied between the equivalent of $40 and $75 per person, and sessions lasted between 25 minutes and two hours. Minimum age requirements ranged from 7 to 16 depending on the game.

Sandbox VR

Sandbox VR knows the value of first impressions. The location is prime real estate in central London and the façade of the modern Post Building is unmissable, decked in giant posters of VR gamers with the brand’s logo.

You are greeted by airport-style check-in terminals and a robot cocktail waiter to mix your drinks.

Attendants give you an iPad menu of weapons to choose from (you take the physical item into the arena), snap your photo, and lead you into a loading area. You put on a haptic vest and strap alien-looking Velcro trackers that look like atoms around your wrists and ankles for full-body real-time motion capture.

You carry a laptop in a backpack that sends movement coordinates to a server. It’s quite a bit of kit, not forgetting the headset itself, and you feel the weight.

I’m playing Deadwood Valley with Jonny. We often play multiplayer games on Quest 2 from separate houses, but this is our first co-location VR experience (i.e. occupying the same physical playspace).

The street is filled with zombies and vultures. It’s loud. We can’t hear each other over the sound of our gunfire, which starts from the get-go and only relents after we defeat the final boss.

Twenty-five minutes later, we’ve flown on a chopper, ridden on a truck, and saved each other from certain death several times (you have to physically touch your teammate’s shoulder to revive them).

At the end of the experience it’s time to party on a lit-up dancefloor while the system records a couple of social-media-ready videos that hot-swap between you in your VR gear in the room and the virtual world.

“Overall I’m a little underwhelmed,” says Jonny. “The game itself looked good, sounded good, but what you actually do is quite limited.”

“You are just shooting, you don’t really have time to communicate, the room was quite small. It reminded me of one of those old arcade games where you’d have the gun and the foot pedal to duck down and hide behind things but upscaled into a VRscape.”

“I liked the haptic suit and the feedback on the gun. When I had to touch you on the shoulder it felt disorientating.”

“I guess for people who have never done VR before or in a group it’s something fun to do, like going bowling.”

“I’m glad I’ve done it, I would recommend that people have a go. It’s a little overpriced but then I’m notoriously tight-fisted!”

Continue on Page 2: Divr Labs & DNA VR »

Cloudhead Games CEO: Apple Vision Pro is an AR Headset Wearing VR Clothes

Cloudhead Games is one of the most successful and senior VR studios in the industry. In this Guest Article, studio head Denny Unger shares his thoughts on Apple’s entrance into the space.

Guest Article by Denny Unger

Denny Unger is CEO and CCO at Cloudhead Games. Based in British Columbia and founded in 2012, Cloudhead pioneered an approach to VR that gave rise to broadly adopted movement standards, including Snap Turns and Teleportation. Working closely with Valve, Sony, and Meta, Cloudhead is best known for Pistol Whip and has shipped four popular VR titles (Pistol Whip, Valve’s Aperture Hand Labs, Call of the Starseed, and Heart of the Emberstone).

So let’s get the obvious out of the way first: Apple Vision Pro is Apple’s first-generation attempt at AR glasses, built as a mixed reality VR headset. AVP is a development platform that also serves an enthusiast demographic. Make no mistake, this no-compromise MR device appears to get many things right for AR, at a premium cost. Will Cloudhead Games be buying one to better understand Apple’s approach? Heck yes. AVP will give developers a powerful foundation and ecosystem on which to develop AR apps for a future ‘glasses form factor’ device in that mythical 5–10 year window. And to the victor, the spoils of a smartphone-replacing device.

No doubt (if rumors are true) there were many debates at Apple HQ about VR. Whether or not to open the device up to VR studios and successful titles. Whether or not to include controllers to support legacy VR titles. Whether to allow users to full-dive into virtual reality, freely move around, and be active in the medium. But in an effort to sharpen their messaging, and to command a dominating lead within the AR space, VR and its many benefits were expertly omitted on nearly every level. Do I understand the strategy to strike a different chord, as an XR business owner? Absolutely. Does it frustrate me as a VR-centric studio owner? You bet it does.

Image courtesy Apple

I question why the AVP didn’t maximize its potential by leveraging almost a decade of know-how from the VR community working within this space. Why not set a vision for a future device that would accommodate both AR and VR as complementary mediums? Apple could have embraced a dual launch strategy with a rich and proven catalog of best-selling VR games, perfectly tuned to onboard a completely new audience to XR. Apple could have built on VR’s recent success, growth, and competition within the current market. In their recent presentation, VR is essentially reduced to a gimmick, the thing you lightly touch the edges of, instead of a complementary and equally important medium. Unity engine support is promised, but with no plans for motion controller support, Apple has cut off any possibility of porting most of the existing or future VR catalog to its platform.

Hand-tracking is a logical affordance for AR-based spatial computing, and no doubt some experiences will work well with that design philosophy. However, it is important to point out that most VR games built over the last 10 years (and many more in production) are not compatible with, nor will they ever be “portable” to, hand-tracking-only design. Input and haptics are incredibly important to virtual reality, a major tenet in reinforcing immersion and tactile interaction with virtual objects. Buttons pushed, triggers pulled, vibrational feedback experienced, objects held, thrown, or touched, alternative movement schemes supported. There is a comfort in understanding the topological landscape of a controller and a physical touchpoint within the virtual environments themselves. When introducing users to a radically different medium like VR, convention and feedback matter. And over the last 50 years in gaming, input has evolved to encourage a suite of highly refined game design standards, creating a particular kind of muscle memory in the gaming population. Say what you will about which genres remain popular in this $450 billion industry, but it does strain belief to think we’ll all be playing with finger guns in the latest and greatest shooter.

I know what some are likely to say: “there will be new innovative standards, and we’ll look back on controllers as a crutch.” But I would push back and say that, hand-tracked or not, moving away from future haptic devices and innovation is a backwards step in XR design. Even smartphone games utilize basic haptics, because touch is foundational to the human experience.

In the aftermath of the AVP launch, some would argue that VR is not yet mainstream and that Apple did the right thing by ignoring it. I would argue that VR turned a significant mainstream corner when Quest 2 outsold Xbox, when Sony re-entered the market with PSVR2, and when Google teamed up with Samsung to work on what’s next, and on it goes. Over its 10-year rebirth, the last three years of VR have seen hockey-stick levels of growth. OEMs have increased investments, and significant indicators keep coming, with more titles earning revenues north of $20 million. Fully immersive VR is a legitimized medium not because I say it is, but because people like it and are willing to part with their hard-earned money to experience it.

Image courtesy Apple

I hope Apple is more inclusive of VR over time, but the Apple Vision Pro appears to be a VR headset pretending not to be a VR headset. Because of this strategy, it represents a unique opportunity for Apple’s competitors to double down on supporting virtual reality at a more affordable entry point. Sure, they can all wage the 5–10 year war for a smartphone replacement, but why in the world would one ignore an equally compelling revenue stream within a blended MR ecosystem? Maybe because it took too long to go mainstream? Sorry all, we had to learn a few things along the way, but I’m happy to say that after 10 years, the trail ahead has never been this clear.

The Hidden Design Behind the Ingenious Room-Scale Gameplay in ‘Eye of the Temple’

Eye of the Temple is one of the rare VR games that focuses not just on pure room-scale movement, but on dynamic room-scale movement. The result is a uniquely immersive experience that required some clever design behind the scenes to make it all work. This guest article by developer Rune Skovbo Johansen explains the approach.

Guest Article by Rune Skovbo Johansen

Rune Skovbo Johansen is a Danish independent game developer based in Turku, Finland. His work spans games and other interactive experiences, focused on tech, wonder, and exploration. After positive reception of the 2016 VR game jam game Chrysalis Pyramid, he started working on a more ambitious spiritual successor, Eye of the Temple, and at the end of 2020 he quit his day job to pursue indie game development full-time.

In Eye of the Temple, you move through a vast environment, not by teleportation or artificial locomotion, but by using your own feet. It makes unique use of room-scale VR to deliver an experience of navigating an expansive space.

In Eye of the Temple you move around large environments using your own feet

But how does it work behind the scenes? To mark the upcoming release of Eye of the Temple on Quest 2, I wanted to take the time to explain these aspects of the game’s design that I’ve never fully gone into detail on before. In this article we’ll go over a variety of the tricks the game uses to make it all work. Let’s start with the basics of keeping the player in the play area.

Keeping the Player in the Play Area

Say you need to go from one tall pillar in the game to another via a moving platform. You step forward onto the platform, the platform moves, and then you step forward onto the next pillar. But now you’re outside your physical play area.

Moving platforms are positioned in a way to keep players inside the play area

If we instead position the moving platform to the side, it goes like this: You sidestep onto the platform, it moves, and you sidestep onto the next pillar. Since you took a step right, and then left, you’re back where you started in the center of the play area. So the game’s tricks are all about how the platforms are positioned relative to each other.
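To make the bookkeeping concrete, here’s a toy version of that sidestep example (hypothetical code, purely for illustration; the real game is more involved). The key point is that riding a platform advances you through the virtual world while your physical position stays fixed, so a step in one direction can later be refunded by a step back.

```csharp
using UnityEngine;

// Toy bookkeeping for the sidestep example. Physical position is measured in
// play-area tiles; riding a platform moves you through the virtual world while
// your physical position stays unchanged.
public static class SidestepExample
{
    public static void Trace()
    {
        int physicalX = 0; // start at the center of the play area

        physicalX += 1;    // step right onto the moving platform
                           // ...the platform carries you through the virtual
                           // world; physicalX stays +1 for the whole ride...
        physicalX -= 1;    // step left onto the destination pillar

        Debug.Log($"Net physical displacement: {physicalX} tiles"); // prints 0
    }
}
```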

Now, to get a better sense for it, let’s look at some mixed reality footage (courtesy of Naysy) where a grid representing the play area is overlaid on top.

Mixed reality footage with a grid overlaid on top which represents the play area

Keeping an Overview in the Level Design

Now that we’ve seen how the trick works, let’s take a look at how I keep track of it all when doing the level design for the game. First things first – I made this pattern, which represents the player’s entire play area – or the part of it the game takes advantage of anyway:

A pattern representing the physical play area

As you can see, there’s a thick white border along the edge, and a thick circle in the center.

Every platform in the game has a designated spot in the play area and a pattern overlay that shows what that spot is. For platforms that are a single tile large, it’s generally one of nine positions. The overlay makes it easy to see if a given platform is positioned in the center of the play area, or at an edge or corner.

The play area pattern overlaid on each platform and its end positions make it easy to see if they are lined up correctly in the level design

Additional overlays show a ghostly version of the pattern at both the start and end positions of a moving platform. This is the real trick of keeping track of how the platforms connect together, because these ghostly overlays at the end positions make it trivial to see if the platforms are lined up correctly in the level design when they touch each other. If the adjacent ghostly patterns are continuous like puzzle pieces that fit together, then the platforms work correctly together.
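Expressed in code, the “puzzle pieces fit” condition boils down to a simple rule (a minimal sketch of my reading of it, not the game’s actual implementation): if standing on platform A at slot a implies worldPos(A) = rigOffset + a × tileSize, then stepping to an adjacent platform B at slot b keeps that mapping consistent only when the platforms’ world offset equals their slot offset.

```csharp
using UnityEngine;

// Minimal alignment check: two touching platforms line up only if their
// world-space offset equals the play-area offset between their assigned slots,
// i.e. worldPos(B) - worldPos(A) == (slotB - slotA) * tileSize.
public static class PlatformAlignment
{
    const float TileSize = 1.0f; // assumed size of one play-area tile, in meters

    public static bool AreAligned(Vector3 worldA, Vector2Int slotA,
                                  Vector3 worldB, Vector2Int slotB,
                                  float tolerance = 0.01f)
    {
        Vector2Int slotDelta = slotB - slotA; // e.g. center (0,0) to right edge (1,0)
        var expected = new Vector3(slotDelta.x * TileSize, 0f, slotDelta.y * TileSize);
        return Vector3.Distance(worldB - worldA, expected) < tolerance;
    }
}
```

When this check holds for every pair of touching platforms, the ghostly overlays read as continuous and the player’s physical position never drifts out of the play area.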

It still took a lot of ingenuity to work out how to position all the platforms so they both fit correctly together and also take the player where they need to go in the virtual world, but now you know how I kept the complexity of it manageable.

Getting the Player’s Cooperation

The whole premise of getting around the world via these moving platforms is based on an understanding that the player should step from one platform to another when they’re lined up, and not at other times. The most basic way the game establishes this is by telling the player outright in safety instructions displayed prior to starting the game.

One of the safety instructions shown before the game begins

These instructions are shown for two reasons:

One is safety. You should avoid jumping over gaps; otherwise you risk jumping right out of your play area and into a wall, for example.

The other is that the game’s system of traversal only works correctly when stepping from one platform to another when they line up. This is not as critical – I’ll get back later to what happens if you step onto a misaligned platform – but it still provides the best play experience.

Apart from the explicit instructions, the game also employs more subtle tricks to help ensure the player only steps over when blocks are correctly aligned. Consider the following example of a larger 2 x 2 tile static platform the player can step onto. A moving platform arrives from the side in a way that would allow the player to step off well before the platform has stopped moving, but that would break the game’s traversal logic.

In this room, ‘foot fences’ are used to discourage the player from stepping from one platform to another when they are not correctly aligned

To avoid this, “foot fences” were placed to discourage the player from stepping over onto the static platform (or away from it) at incorrect positions. The fences are purely visual and don’t technically prevent anything. The player can still step over them if they try, or right through them for that matter. However, psychologically it feels like less effort to not step over or through a fence and instead step onto the static platform where there’s a gap in the fence. In this way, a purely non-technical solution is used as part of the game’s arsenal of tricks.

Continued on Page 2: Correcting for Unaligned Platforms »

Cas & Chary Present: Which is the Better VR Cloud Gaming Service? Shadow vs. PlutoSphere

There are currently two cloud streaming services that work with VR and are available to consumers: Shadow and PlutoSphere. These services allow Quest users to play PC VR games without needing a gaming PC, which is a big investment if you’re only looking to play a few SteamVR titles like Half-Life: Alyx.

Cas & Chary Present

Cas and Chary VR is a YouTube channel hosted by Netherlands-based duo Casandra Vuong and Chary Keijzer, who have been documenting their VR journeys since 2016. They share a curated selection of their content with extra insights for the Road to VR audience.

I got curious, so I checked out both services to compare the two and see which one is worth subscribing to right now. This article is a summary of my video, where I share the key differences, pros, and cons per service. It also includes more recent developments on PlutoSphere billing.

Key Differences

Shadow focuses more on flatscreen PC gaming via the cloud; VR is a side project, and its Quest app is still in beta. PlutoSphere focuses on XR, with flatscreen gaming secondary; this is why PlutoSphere virtual PCs come preinstalled with SteamVR, while on Shadow you have to configure the VR software yourself. Pluto’s entire service is currently in early access.

Pricing & Billing

Shadow

  • Shadow has a subscription fee of US$30 per month. In return, you can use the service without limits (as long as you pay). You can cancel up to 48 hours before your subscription renews.
  • Included is persistent storage (256GB) and you can add more storage for an extra fee.

PlutoSphere

  • PlutoSphere has time-based billing, so you pay per hour and only when you use it.
  • You have to buy ‘PlutoTokens’ to get access. Currently, 600 PlutoTokens are priced at $2 and amount to one hour of usage. The more tokens you buy at once, the cheaper it gets.
  • PlutoSphere does not come with persistent storage included. To get persistent storage, you have to pay a monthly fee. You get two options:
    • $9.99 / month for 128GB storage
    • $39.99 / month for 128GB storage and 12,000 tokens/mo
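As a rough break-even sketch using the prices above (my own back-of-the-envelope math, ignoring storage add-ons): PlutoSphere works out to about $2 per hour, so Shadow’s $30 flat fee becomes the cheaper option once you stream more than $30 ÷ $2 = 15 hours in a month. Below that, Pluto’s pay-as-you-go billing wins.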

Devices Support & Availability

Shadow

  • Currently supports Windows, macOS, Ubuntu, Android, Android TV, and iOS/tvOS. As far as I can see, Shadow cannot be run from a web browser; you have to install an app on your device to run it.
  • Only available in eight countries: the US, UK, Belgium, France, Germany, Switzerland, Luxembourg, and my own little country, the Netherlands.
  • In the US, Shadow isn’t available in every state. You can check availability here by choosing your state from the dropdown menu.

PlutoSphere

  • Currently supports Android, iOS, HoloLens 2, and any device with a web browser. Native Windows support is said to be coming soon.
  • Pluto leverages Amazon Web Services, which has servers worldwide, so Pluto is available as long as there is a server near you. If you go to this online tool, you can estimate the network latency from your browser to AWS data centers. As long as there is a server that has less than 100ms ping to you, you can use Pluto. Keep in mind that CloudFront doesn’t count.
  • For VR, both services support Meta Quest 1 and 2. Other headsets might be coming in the future.

Internet Connection Requirements

Both services have requirements for your internet connection.

Shadow

  • At least 50 Mbps download speed
  • 5GHz WiFi network
  • Lower than 20ms ping

PlutoSphere

  • At least 50 Mbps download speed
  • 5GHz WiFi network (WiFi 6 recommended)
  • Lower than 100ms ping required but lower than 50ms is recommended

Cloud PC Specs

These are the specs that you’ll get on your virtual PC.

Shadow

  • CPU
    • Intel Xeon E5-2678 v3 @ 2.5GHz with 3.1GHz Turbo Boost (8 cores)
    • Alternatively: Intel Xeon E5-2667 v3 @ 3.2GHz with 3.6GHz Turbo Boost
  • GPU
    • P5000 with 16GB GDDR5X
    • Alternatively, in some regions: GTX1080 with 8GB GDDR5X
    • Alternatively, in some regions: RTX4000 with 8GB GDDR6
  • RAM
    • 12GB DDR4 at 2133MHz
  • Persistent Storage
    • 256GB SSD storage (optional extra: 2TB HDD)
  • Video Quality Options
    • Option to choose max video bitrate
    • Option to adapt max bitrate to network conditions (recommended)
  • Refresh rate for VR
    • 72Hz max
  • Download Speed
    • ~950 Mbps download, ~100 Mbps upload

PlutoSphere

  • CPU
    • Intel Xeon Platinum 8259CL @ 2.5GHz (8 cores)
  • GPU
    • NVIDIA Tesla T4
  • RAM
    • 32GB
  • Persistent Storage
    • 128GB SSD add-on (one-time fee, $97.50 for one year)
  • Video Quality Options
    • 4K 60fps, 1080p 60fps, 720p 60fps
  • Refresh rate for VR
    • 72Hz max
  • Download Speed
    • ~3,100 Mbps download, ~4,000 Mbps upload

Setup

Shadow doesn’t have as many initial steps. All you need to do is install the Shadow app via SideQuest; then you can launch Shadow from within VR and do everything from there.

On Pluto, there are more initial steps, which are detailed in my how-to video. After these steps, you need to access the dashboard first (on any device with an internet connection). That’s where you start the service, which can take between 10 and 15 minutes. Lastly, you also need to start SteamVR on your virtual PC before you can put on your Quest.

Performance

During my tests, I kept both services on default settings, and both were tested at 72Hz on Quest 2. I tried playing Beat Saber, Blade & Sorcery, Fracked, and Half-Life: Alyx. If you want to see a side-by-side gameplay comparison, you can watch my video.

The streaming quality is good enough on both services to be fully playable. However, there is a difference in visual quality: Pluto is noticeably sharper, while Shadow has more streaming artifacts.

Unfortunately, Pluto’s software is buggier, which causes compatibility issues with certain games, as you can see from this list. For example, I wasn’t able to play Half-Life: Alyx on Pluto, yet it did work on Shadow.

Input lag is comparable on both services. I wouldn’t recommend using streaming services for very competitive games where every movement counts, but slower-paced games do seem to work well.

In the end, I think Shadow has the better overall performance, with software that’s easy to use without many hiccups; the games I tried worked out of the box. However, PlutoSphere has the better stream quality, at least when a game is supported.

Which service is best for you will depend on how many hours you want to use your virtual PC and whether the games you want are supported. Overall, though, I think Shadow is the way to go for now. Keep in mind that PlutoSphere is in early access, so hopefully the bugs will be ironed out by the time it fully releases.

As you can tell by now, VR streaming can be tricky, but I think it’s incredible that it’s already possible, and having more options for people to get into higher-end VR games is never a bad thing, right?


Disclosure: Both platforms provided me with free access to their service.

Tech Secrets Behind ‘Cosmonious High’s’ Cast of Interactive Characters

Cosmonious High contains 18 characters across six species, all created by a team with zero dedicated animators. That means lots and lots of code to create realistic behaviors and Owlchemy-quality interactivity! The ‘character system’ in Cosmonious High is a group of around 150 scripts that together answer many design and animation problems related to characters. Whether it’s how they move around, look at things, interact with objects, or react to the player, it’s all highly modular and almost completely procedural.

This modularity enabled a team of content designers to create and animate every single line of dialogue in the game, and for the characters to feel alive and engaging even when they weren’t in the middle of a conversation. Here’s how it works.

Guest Article by Sean Flanagan & Emma Atkinson

Cosmonious High is a game from veteran VR studio Owlchemy Labs about attending an alien high school that’s definitely completely free of malfunctions! Sean Flanagan, one of Owlchemy’s Technical Artists, created Cosmonious High’s core character system amongst many other endeavors. Emma Atkinson is part of the Content Engineering team, collectively responsible for implementing every narrative sequence you see and hear throughout the game.

The Code Side

Almost all code in the character system is reusable and shared between all the species. The characters in Cosmonious High are a bit like modular puppets—built with many of the same parts underneath, but with unique art and content on top that individualizes them.

From the very top, the character system code can be broken down into modules and drivers.

Modules

Every character in Cosmonious High gets its behavior from its set of character modules. Each character module is responsible for a specific domain of problems, like moving or talking. In code, this means that each type of Character is defined by the modules we assign to it. Characters are not required to implement each module in the same way, or at all (e.g. the Intercom can’t wave.)
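In Unity terms you can picture the composition roughly like this (a hypothetical sketch with illustrative names, not Owlchemy’s actual code): each module is a component, a character type is defined by which module components it carries, and callers have to tolerate a module being absent.

```csharp
using UnityEngine;

// Illustrative sketch of module composition. A character "is" the set of
// module components attached to it; a missing module simply returns null.
public abstract class CharacterModule : MonoBehaviour { }

public class CharacterSpeech : CharacterModule { /* how the character talks */ }
public class CharacterPersonality : CharacterModule { /* reactions to the player */ }

public class Character : MonoBehaviour
{
    // Callers ask for a module and must handle its absence; e.g. the Intercom
    // carries no wave-capable module, so wave-related code quietly skips it.
    public T GetModule<T>() where T : CharacterModule => GetComponent<T>();
}
```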

Some of our most frequently used modules were:

CharacterLocomotion – Responsible for locomotion. It specifies the high-level locomotion behavior common to all characters. The actual movement comes from each implementation. All of the ‘grounded’ characters—the Bipid and Flan—use CharacterNavLocomotion, which moves them around on the scene Nav Mesh.

CharacterPersonality – Responsible for how characters react to the player. This module has one foot in content design—its main responsibility is housing the responses characters have when players wave at them, along with any conversation options. It also houses a few ‘auto’ responses common across the cast, like auto receive (catching anything you throw) and auto gaze (returning eye contact).

CharacterEmotion – Keeps track of the character’s current emotion. Other components can add and remove emotion requests from an internal stack.

CharacterVision – Keeps track of the character’s current vision target(s). Other components can add and remove vision requests from an internal stack.

CharacterSpeech – How characters talk. This module interfaces with Seret, our internal dialogue tool, directly to queue and play VO audio clips, including any associated captions. It exposes a few events for VO playback, interruption, completion, etc.

It’s important to note that animation is a separate concern. The Emotion module doesn’t make a character smile, and the Vision module doesn’t turn a character’s head—they just store the character’s current emotion and vision targets. Animation scripts reference these modules and are responsible for transforming their data into a visible performance.
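Here’s a minimal sketch of that request-stack idea (illustrative code, not the shipped implementation): systems push and remove emotion requests, the module only reports the current winner, and the animation scripts just mentioned read CurrentEmotion to produce the visible performance.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative request stack for a character's emotion. Gameplay systems add
// and remove requests; animation code reads CurrentEmotion each frame.
public enum Emotion { Neutral, Happy, Surprised, Annoyed }

public class CharacterEmotion : MonoBehaviour
{
    // Each request remembers who asked for it, so it can be removed later.
    readonly List<(object source, Emotion emotion)> requests = new();

    // The most recent request wins; with no requests, fall back to neutral.
    public Emotion CurrentEmotion =>
        requests.Count > 0 ? requests[requests.Count - 1].emotion : Emotion.Neutral;

    public void AddRequest(object source, Emotion emotion) =>
        requests.Add((source, emotion));

    public void RemoveRequest(object source) =>
        requests.RemoveAll(r => r.source == source);
}
```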

Drivers

The modules that a character uses collectively outline what that character can do, and can even implement that behavior if it is universal enough (such as Speech and Personality.) However, the majority of character behavior is not capturable at such a high level. The dirty work gets handed off to other scripts—collectively known as drivers—which form the real ‘meat’ of the character system.

Despite their more limited focus, drivers are still written to be as reusable as possible. Some of the most important drivers—like CharacterHead and CharacterLimb—invisibly represent some part of a character in a way that is separate from any specific character type. When you grab a character’s head with Telekinesis, have a character throw something, or tell a character to play a mocap clip, those two scripts are doing the actual work of moving and rotating every frame as needed.

Drivers can be loosely divided into logic drivers and animation drivers.

Logic drivers are like head and limb—they don’t do anything visible themselves, but they capture and perform some reusable part of character behavior and expose any important info. Animation drivers reference logic drivers and use their data to create character animation—moving bones, swapping meshes, solving IK, etc.

Animation drivers also tend to be more specific to each character type. For instance, everyone with eyes uses a few instances of CharacterEye (a logic driver), but a Bipid actually animates their eye shader with BipedAnimationEyes, a Flan with FlanAnimationEyes, etc. Splitting the job of ‘an eye’ into two parts like this allows for unique animation per species that is all backed by the same logic.
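As a rough sketch of that split (hypothetical code, with an assumed shader property name): the logic driver only holds state, while a species-specific animation driver reads it every frame and performs it.

```csharp
using UnityEngine;

// Logic driver: knows what the eye is doing, renders nothing itself.
public class CharacterEye : MonoBehaviour
{
    public Transform target;    // what the eye is looking at, if anything
    public float openness = 1f; // 0 = closed, 1 = fully open
}

// Animation driver for one species: turns the logic driver's data into a
// visible performance, here by aiming the eye and driving a shader property.
public class BipedAnimationEyes : MonoBehaviour
{
    public CharacterEye eye;
    public Renderer eyeRenderer;

    // "_Openness" is an assumed shader property, purely for illustration.
    static readonly int OpennessId = Shader.PropertyToID("_Openness");

    void LateUpdate()
    {
        if (eye.target != null)
            transform.rotation =
                Quaternion.LookRotation(eye.target.position - transform.position);

        eyeRenderer.material.SetFloat(OpennessId, eye.openness);
    }
}
```

A FlanAnimationEyes could read the same CharacterEye data but animate a completely different mesh and shader, which is the per-species split described above.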

Continue on Page 2: The Content Side »


Catch Road to VR Co-founder Ben Lang on the Between Realities Podcast

Road to VR co-founder Ben Lang recently joined the crew of the Between Realities podcast.

Bringing more than a decade of experience in the XR industry as co-founder of Road to VR, Ben Lang joined hosts Alex VR and Skeeva on Season 5 Episode 15 of the Between Realities podcast. The trio spoke about the impetus for founding the publication, Meta’s first retail store, the state of competition in the XR industry, privacy concerns for the metaverse, and even some musing on simulation theory. You can check out the full episode below or in the Between Realities episode feed on your favorite podcast platform.

In the podcast, Lang speaks about a recent article about scientists who believe it’s possible to experimentally test simulation theory, which you can find here.


The Metaverse Saved My Life, Now I’m Using it to Save Others

I’ve spent over 10,000 hours in ‘the metaverse’—or at least the proto-metaverse—virtual worlds inhabited by real people represented as avatars. The experiences and relationships I had there saved me from a dark place and set me on a mission to do the same for others.

Guest Article by Noah Robinson

Noah is founder and CEO of Very Real Help, and a clinical psychology doctoral candidate at Vanderbilt University. Very Real Help has built a clinical research platform called Help Club to explore how social virtual reality environments can be used to deliver transdiagnostic cognitive behavioral peer support. Very Real Help has received funding from the National Institutes of Health, National Science Foundation, and investors to use Help Club for the treatment of substance use disorders and other mental health conditions.

There are real pitfalls and dangers in the metaverse, such as the pedophilia and child grooming recently highlighted by the BBC, or the sexual assaults that have occurred on various platforms. But like any technology, the metaverse can be used for both good and bad. It all depends on how each application is built and how we choose to use it. With immersive virtual reality, where the entire environment can be controlled, the potential to help people is nearly limitless.

When I was 13 I escaped into a virtual social game called RuneScape. Right when I hit puberty, I realized I was gay. I was overwhelmed with feelings of shame and anxiety. Several years into the burden of being closeted, I felt hopeless about my future and considered ending my life. But one thing kept me going: as I made friends and leveled up alongside them in RuneScape, virtual stimuli created real hits of dopamine. These hits are an important treatment target for depression—in therapy, we teach patients to engage in rewarding behaviors to increase motivation and potentially overcome depression.

I spent most of my teenage years, nearly 10,000 hours, living in this virtual world. Inside, I could build a virtual identity in a fantasy world where sexual identity was not a factor. As I gained confidence in the virtual world, I eventually created my first clan, which steadily grew in size. Although it consisted of 400 ‘strangers’ on the internet, they were my closest friends. Eventually I felt enough social belonging and validation to come out of the closet to them. My friends accepted me, even when they knew my deepest, darkest secret. Going through this process virtually empowered me to come out of the closet in the real world and eventually to overcome my depression.

From that moment on, I knew I wanted to devote my life to building virtual experiences that were as compelling as a videogame but also as effective as therapy.

I know it’s possible because I’m already on that journey. I’ve built a mental health-focused virtual app called Help Club—available on Quest, PC, Mac, and iOS—that allows anyone to improve their well-being and mental health in a virtual setting. You can join as an avatar and attend live groups that are led by trained coaches. A fully realized metaverse has the potential to change millions of lives by making it easy to connect with this kind of virtual support group.

Avatars inside Help Club | Image courtesy Very Real Help

Help Club is just getting started—since launching our beta in October, we’ve had thousands of people come into our community—and we’re starting to see that a safe, supervised environment can quite literally change people’s lives. Help Club is designed from the ground up to support mental health. We’re training everyday, ordinary, empathic people to become coaches who can lead support groups and teach the scientifically validated tools of an approach we’ve developed called Cognitive Behavioral Immersion.

It’s the place I wish I’d had as a 13-year-old to guide me toward healing rather than entice me into a world of escape.

Building a mental health space that’s ready for the metaverse isn’t easy, and we’ve had to use technology to ensure a safe world for all. We screen folks and monitor interactions—although we’re not delivering therapy, we’ve adopted standard practices developed in therapy training clinics, such as recording interactions to monitor for quality and prevent trolls from causing psychological harm. Although we only support people who are 18 or over, we’ve also seen demand from minors who have found our platform and want mental health help.

We’re starting to see exciting results from our virtual mental health platform. It’s attracting people who need help: 53% of our users have (self-reported) clinical levels of depression, and 45% have clinical levels of anxiety. And we’re starting to observe decreases in symptoms of depression and anxiety among those who spend time in our application.

While VRChat is the platform the BBC highlighted recently in its story on child grooming, there are examples of safe spaces on the platform. For example, a beautiful transgender community blossomed in VRChat and created safe spaces for some people who were struggling with the same things I did as a teenager. One person even described having thought about transitioning for 10 years; it took trying on a female avatar in VRChat to finally begin the acceptance process and seek out a gender therapist.

We’ve also seen Alcoholics Anonymous meetings and chronic illness support groups come to fruition in Altspace. These groups find refuge in virtual spaces: safe, nonjudgmental places to connect with others. They feel safe because people have the comfort of anonymity while also feeling the immersive social support of avatars around them. But although these platforms can deliver help, they can also cause harm if there is no moderation or accountability. They also need to protect minors by keeping them in safe, moderated environments.

Help Club is also changing lives. We have Help Club members who started out with severe social anxiety or depression and have now completed our coach training and are leading meetings to help others.

One of our members publicly shared that she had not been able to leave her house in nearly three years. Help Club helped her to feel comfortable leaving the house again, and she reported her experience was “infinitely better than three years of therapy.” Now that she can leave home, she’s able to engage in rewarding real-world activities that help people to overcome depression.

Image courtesy Very Real Help

Another member reported that he was too depressed to go to work and had been lying in bed all day. For nearly two weeks he went to Help Club meetings every day and reported that he was able to go to work for the first time in a long time. He told us he had tears in his eyes after coming home from his first day of work, thinking about how Help Club had gotten him there.

This is just the beginning. More research is needed, including randomized controlled trials, to truly know if the metaverse can deliver on its promise of helping people overcome real-life problems. But even right now I know that there are thousands of other people out there like me, looking to escape into the metaverse to avoid, and maybe even heal, real-life pain.
