Microsoft Halts Development On Maquette Prototyping Tool

A new note at the top of a documentation page indicates that Microsoft has halted development on Maquette, its VR prototyping tool.

Here is the note in full, as it appears at the top of this Maquette documentation page:

Microsoft is not actively developing Microsoft Maquette at this time, and the access to the application in the store will be discontinued. Microsoft will apply the learnings from building the application and the feedback from the community to enable better tools for Mixed Reality content creation in the future. While there are no plans to open source any of Maquette’s source code, we will continue to offer the application as a download here. We want to thank the community for the journey and support.

The note has a similar feel to when Google announced the end of its VR art tool Tilt Brush — although in that case, the software was open sourced and made available to the community. We were first alerted to the update by SkarredGhost on Twitter.

Maquette launched in 2019, available for free and allowing users to create immersive VR concept designs and sketches from within a headset. It included “a set of tools for importing, creating, composing, and storyboarding content in 3D space, complete with a library of primitives and UI icons.” It even supported exporting content into Unity, where developers could take those VR concepts from Maquette and begin to carry them over into full development.

While development has halted, at the time of this writing you can still download Maquette directly from Microsoft here, or access it via the Oculus Store or Steam. And while Microsoft Maquette’s story might be finished, one of the developers who worked on the application built another VR title in his spare time that you can download right now: Unplugged, the incredible air guitar game.


Free Version of ‘Masterpiece Studio Pro’ VR Creation Suite Now Available for Non-commercial Use

Masterpiece Studio (formerly MasterpieceVR) today announced it’s releasing a free edition of its latest professional VR creation suite, Masterpiece Studio Pro. The free license targets individuals who want to use the suite for non-commercial purposes.

The free version is said to contain the entire set of features of Masterpiece Studio Pro, which is a subscription-based service aimed at freelancers, teams, and educators using its creation tools for work.

Like the original 2019-era Masterpiece Studio, Masterpiece Studio Pro lets users create 3D assets within VR, using motion controllers to draw, sculpt, texture, optimize, rig, skin, and animate things like characters and objects. The Pro version launched back in April 2021.

Image courtesy Masterpiece Studio

One of the biggest caveats with the original was that model export was a feature only paying users could access. That’s still the case with the free version of Pro, although the studio has now created a public library where creations can be published and viewed.

“We believe this Free version will help showcase your work, bring value to other creatives, and help build the creative community of the future,” the studio says on its Steam page.

The Ontario, Canada-based startup is pitching the free license as a way to support VR indie creators by not only letting them learn the ropes of their software for free, but also by establishing a way to share and remix those publicly shared creations. You can find it on PC VR headsets for free over at Steam and Viveport.


What Any VR Game Can Learn From the ‘Electronauts’ Interface – Inside XR Design

Our series Inside XR Design examines specific examples of great XR design. Today we’re looking at the interface of Electronauts to find out what makes it so excellently usable.

Editor’s Note: Now that we’ve rebooted our Inside XR Design series, we’re re-publishing older entries for those who missed them.

You can find the complete video below, or continue reading for an adapted text version.

Electronauts is a music making game by developer Survios that’s designed to make it easy to feel like a competent DJ, even if—like me—you don’t have much musical talent. It’s available on every major VR headset; check out our full review here.

And while it’s easy to think that the game’s interface has little relevance outside of music, nothing could be further from the truth. The Electronauts interface is smartly designed at the core, and for reasons that have nothing to do with music or rhythm games.

There are three pillars to this interface that make it great: ease-of-use, hierarchy, and flexibility.

Ease-of-use

It’s easy to see why the designers would give players drumsticks for a game with drum-like instruments, but what’s really smart is also making the drumsticks the tools for manipulating the interface. Humans are evolutionarily adept at manipulating tools—in fact, studies have shown that with enough practice, we subconsciously and proprioceptively consider tools to be an extension of ourselves.

In the case of Electronauts, the extra reach provided by the drumsticks allows the interface to be comfortably large, overcoming precision issues and making the entire interface easier to use with less chance of mistakes.

We can see this clearly in the way that the game’s buttons work. While the intuitive idea would be to have buttons that are pressed as they are touched, Electronauts does things differently for the sake of precision and reliability. Instead of simply touching a button to activate it, you actually insert your drumstick into the button and then pull the trigger.

This is a very smart solution to the issue of missing physical feedback in VR. Real life buttons are deeply designed around physical feedback, and this feedback helps you press them reliably. Because there’s nothing to push back on the drumstick in VR, it’s harder to confidently target and activate a physically simulated button.

Asking the user to intersect the button with their drumstick and then pull the trigger to confirm their selection greatly increases the precision of the game’s buttons compared to a physical button simulation.
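To make the pattern concrete, here’s a minimal sketch of that intersect-then-confirm logic in Python (hypothetical names and a spherical stand-in shape; Survios hasn’t published its implementation):

```python
import math

class Button:
    """A button activated by intersect-then-confirm rather than touch alone."""
    def __init__(self, center, radius):
        self.center = center     # (x, y, z) center of the activation volume
        self.radius = radius     # sphere stands in for the real button shape
        self.hovered = False

    def update(self, stick_tip, trigger_pulled):
        """Touching only hovers; activation requires an explicit trigger pull
        while the drumstick tip is inside the button volume."""
        self.hovered = math.dist(stick_tip, self.center) <= self.radius
        # The deliberate trigger pull substitutes for the physical feedback
        # a real button would give, so grazing touches never misfire.
        return self.hovered and trigger_pulled

# Example: poll once per frame with the tracked drumstick tip + trigger state.
play = Button(center=(0.0, 1.2, 0.4), radius=0.06)
print(play.update(stick_tip=(0.01, 1.22, 0.41), trigger_pulled=True))  # True
```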

Hierarchy

Hierarchy is an essential part of any interface. It’s the way in which you organize the functions of the interface so that it’s logical, easy to remember, and easy to access.

Electronauts has a very smart hierarchy where all of the game’s functions are contained within tools, and all of the tools are represented as cubes. To access the functions of any tool, you simply place a cube into a pedestal.

You can think of each cube as its own little mini-app, just like the way that smartphone apps are shown as icons on a screen, each containing specific functionality. This makes it really easy to remember where to access certain functions without the interface needing to overwhelm the user by displaying all of them at once.

With a limit of three cubes active at any one time, Electronauts does a good job of having a clearly organized hierarchy that’s not too deep. A hierarchy that’s too deep—like having folders inside of folders inside of folders—can mean too much time spent digging to reach the function you’re looking for, even if it means everything is clearly organized.
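A rough sketch of that cube-as-mini-app structure (again hypothetical, just to illustrate how shallow the hierarchy stays):

```python
class ToolCube:
    """One self-contained 'mini-app': a named bundle of related functions."""
    def __init__(self, name, functions):
        self.name = name
        self.functions = functions   # e.g. {"kick": ..., "snare": ...}

class PedestalRack:
    """Caps how many cubes are active at once -- three in Electronauts."""
    MAX_ACTIVE = 3

    def __init__(self):
        self.active = []

    def place(self, cube):
        if len(self.active) == self.MAX_ACTIVE:
            self.active.pop(0)       # oldest cube returns to the shelf
        self.active.append(cube)

# The whole hierarchy is just two levels deep: rack -> cube -> function.
rack = PedestalRack()
rack.place(ToolCube("drums", {"kick": lambda: None, "snare": lambda: None}))
```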

Continued on Page 2: Flexibility »


Grabbing The Bull By The Horns: The Importance Of VR Hands And Presence

We spoke with several VR game designers and developers about the importance (and difficulty) of creating believable and immersive hands in VR.

The Importance of Believable VR Hands

VR hands are a strange thing to ‘get to grips with’, so to speak. Not only do they allow you to interact with the virtual world in front of you, they allow you to become part of it. As you shape yourself to fit into that world, it does the same to fit around you. In real life you have limited strength and ability, but VR gives you that rare opportunity to be better than real life; it lets you become an action hero.

“It was pretty clear right off the bat that it was important,” Kerry Davis, a Valve programmer who worked on Half-Life: Alyx, said in an interview. “Even if you didn’t have a full-body avatar yet, to at least have hands so that you could sort of connect with the world and actually participate.” We all have an innate desire to control the world we inhabit. As a VR user, you don’t simply want to sit back and let the story happen—you want organic control over the world. “People really wanted to have hands.”

Half-Life: Alyx’s design encapsulates this. There’s a visceral sense of control in covering your mouth when you come across Jeff, or even in that satisfying click you get from loading ammo. Tristan Reidford, a Valve artist who also worked on Alyx, learned this previously when working on the Aperture Robot Repair demo that shipped as part of The Lab.

“We didn’t have hands in that… we simulated the controller… that satisfied people’s desires to have representation,” Reidford said.

Oculus Touch - Hand Presence Technology screenshot

The most fascinating thing about VR hands is the spectacle itself: the representation of you. When using a controller, you can disconnect and understand that certain buttons equal certain actions. Movement in VR, by contrast, is just close enough to true immersion that it becomes distracting when the illusion breaks.

The uncanny valley in this case represents the physical and mental border between what your eyes see in the headset lens and what you feel around you. As VR gets closer to real life, it also gets further away. There’s something distinctly recognizable, yet alien-feeling, about this imitation of real life. VR is at its best when it’s immersive and compelling but not trying to lie to you. It offers you a fantasy and, in the case of Half-Life: Alyx, that alien feeling comes from somewhere else: Xen.

“Now you don’t need as much of an abstraction so you can almost get an exact representation of reality,” Davis said. “It turns out it’s almost harder to do in VR.” 

The complicated nature of emulating some sense of real-life movement means you often have to trivialize or exaggerate that movement. “In VR, your interactions are so close that your brain wants to fall back to actions you’ve known since you were a toddler,” Davis said. “We still have to put this interface layer in there and say we’re defining what the constraints of this virtual world are.” 

You are constrained in two senses when in VR. There are the constraints of your movement—such as how far you can move in your actual room and how much you can lunge forward before colliding head-first with your dresser—and the constraints of the tech itself. There is a wonderful creativity that springs from necessity when creating games atop hard limitations.

Oftentimes, a world has to be made less organic and genuine to feel real, as paradoxical as that sounds. The swinging of a sword feels natural but you don’t have to undergo a year of training before you can use it in VR. It throws a little bit of ‘real life’ out the window to provide a more fun and, ironically, more immersive experience. 


Limitless Limitations

Over at Streamline Media, a small group is working on its first real VR title. “Due to resolution issues they (the team) often went back to using just larger gestures and bigger levers,” Stefan Baier, COO at Streamline Media, said. “Smaller hand gestures didn’t reliably scale and made it not work well with the dev work we were doing for PS4 where you don’t have that input.”

PSVR is often the lowest common denominator, technologically speaking. This means that while hand gestures are held back in some sense, the choice of which actions to include is made purely for the player experience. When you can’t show off the tech or provide hyper-realistic details, you only provide what’s necessary. A streamlined control system allows for more natural movement, even if it’s constrained.

“We are always limited by the hardware… especially with these fairly new technologies… tracking will massively improve,” Christof Lutteroth, a senior lecturer in the department of computer science at Bath University, said. “There’s nothing fundamental that will prevent us from simply tracking our hands. It’s just a computational problem that’s harder to solve.”

Ultimately, we are always held back by the tech but this doesn’t mean we should stop and accept it as it is.

Facebook Quest 2 hand tracking

“From what I’ve seen… it’s definitely a big step forward. However, it’s still very rudimentary,” Lutteroth said when talking about the Oculus Quest’s finger tracking. “When you come to hand tracking, there’s quite a lot of error involved with machine learning… That’s very often data-driven.” Due to the varied nature of human fingers (and that’s before mentioning those with disabilities), hand tracking technology is dominated by research into what an average hand is. Unfortunately, with the way that research works, it often gets caught up in implicit biases.

One highly publicized case of this kind of bias is Microsoft’s AI chatbot, Tay. It was designed to pick up on the speech of people it interacted with and emulate it so its own responses sounded authentic. Within a day, it was virulently bigoted. This is an interesting microcosm of how these biases set in. If the people you test with are racist, your chatbot will be. If the people you test with all have ten fingers, your hand designs will assume the same.

This is why Davis and Reidford were so outspoken in our interview about the power of playtesting. Reidford spoke of finger tracking and the unique hurdles in making movement feel both organic and slick.

“There’s this sliding scale where, on one side, it’s fully finger tracking and then, on the other side, 100% animation,” Reidford said. “We had to find the balance there… as soon as you put the player under any kind of duress… they just want to slam a new magazine in.”

Surpassing Limitations

There’s creativity to just existing in VR. Davis recalled what testers told him: “Yeah, I’m playing this. I don’t feel powerful… I feel like my normal everyday self… that’s not why I go into games. I go into games to feel powerful and skilled.” Even though it emulates your actions, VR is so captivating due to its ability to emulate the wonderful and powerful. That uncanny valley is less prevalent when the situation is one you choose to put yourself into. When you’re aware of your world and place yourself in there, you can forget all about the gear you’re wearing.

A fantastic example of this is Half-Life: Alyx’s gravity gloves. “In Half-Life: Alyx it was about having the hands feel so natural you don’t really think about them anymore,” Reidford said. 

You hold your hands up, grab an item and pull it towards you, and hope it lands. As long as you’re close to where you should be, it will always land. The same design philosophy is at work with the doors in Alyx. They don’t function like a normal door. You put your hand up to the doorknob and your character just turns it automatically. 


Half-Life: Alyx Combine Elevator

“The player can still turn the handle themselves if they want to, but they don’t have to; all they have to do is reach out, make a fist, and the door is open,” Reidford said. “Over time, it looks so correct and it’s what they expect to happen, so the player actually believes that they’ve done it. They think that they reached out and turned the handle themselves when really they didn’t. The game did it for them.”

The genius of Half-Life: Alyx is that this notion of feeling like “the action hero” is deployed so effectively without making you overpowered. You can simultaneously get crushed by creatures, cower away from Jeff, and giggle at the falling physics of a head crab—yet you still get out there, load your weapon, and take down the bad guys. This is a testament to VR as a whole. There’s a spectacle to it that can only be accessed via your hands. From its early days and crude movements to the beginning signs of significant hand tracking, there’s this bustling sense of creativity that keeps pushing the industry forward.

When you stare into the dark lenses of a VR headset and find yourself staring back, you look around your environment, look down at your hands, then squeeze your fists closed and become something grander than the everyday. You become your own version of an action hero.

Case Study: The Design Behind ‘Cubism’s’ Hand-tracking

Hand tracking first became available on the Oculus Quest back in late 2019. Out of enthusiasm for this new input method, I published a demo of Cubism to SideQuest with experimental hand tracking support only a few days later. Needless to say, this initial demo had several flaws and didn’t really take the limitations of the technology into account, which is why I decided to initially omit hand tracking support from the full release of Cubism on the Oculus Store. It took more development, leaning on lessons learned from the work of fellow developers, to build something I was happy to release in the recent Cubism hand-tracking update. Here’s an inside look at the design process.

Guest Article by Thomas Van Bouwel

Thomas is a Belgian-Brazilian VR developer currently based in Brussels. Although his original background is in architecture, his current work in VR spans from indie games like Cubism to enterprise software for architects and engineers like Resolve.

This update builds on lessons learned from many other games and developers who have been exploring hand tracking over the last year (The Curious Tale of the Stolen Pets, Vacation Simulator, Luca Mefisto, Dennys Kuhnert, and several others).

In this article I’d like to share some things I’ve learned when tackling the challenges specific to Cubism’s hand interactions.

Optimizing for Precise Interactions

Cubism’s interactions revolve around placing small irregular puzzle pieces in a puzzle grid. This meant the main requirement for hand tracking input was precision, both in picking up and placing pieces on to the grid, as well as precisely picking out pieces from a completed puzzle. This informed most of the design decisions regarding hand input.

Ghost Hands

I decided early on to not make the hands physics-based, but instead let them pass through pieces until one is actively grabbed.

This avoided clumsily pushing the floating puzzle pieces away when you are trying to grab them mid-air, but more importantly, it made plucking pieces in the middle of a full puzzle easier since you can just stick your fingers in and grab a piece instead of needing to figure out how to physically pry them out.

Signaled by their transparency, hands are not physical, making it easier to pick out pieces from the middle of a puzzle.

Contact Grabbing

There are several approaches to detecting a user’s intent to grab and release objects, like focusing on finger pinches or total finger joint rotation while checking a general interaction zone in the palm of the hand.

For Cubism’s small, irregular puzzle pieces, however, the approach that seemed to handle the precision requirements best was a contact-based approach, where a piece is grabbed as soon as thumb and index intersect the same piece and are brought together over a small distance, without requiring a full pinch.

Similar to the approach in The Curious Tale of the Stolen Pets, the fingers are locked in place as soon as a grab starts, to help give the impression of a more stable looking grab. The piece is parented to the root of the hand (the wrist) while grabbed. Since this seems to be the most stable tracked joint, it helps produce a steadier grip, and guarantees the piece stays aligned with the locked fingers.

Piece is grabbed when thumb and index intersect it and are brought together slightly. Rotation of index and thumb are then locked in place to help give the impression of a stable grab.
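In pseudocode-ish Python, the contact grab might look something like this (a sketch with an assumed closing threshold, not Van Bouwel’s actual code):

```python
import math

CLOSE_DELTA = 0.008  # meters the fingers must close after contact (assumed value)

class ContactGrab:
    """Grab a piece when thumb and index intersect it and are then brought
    slightly together -- no full pinch required."""
    def __init__(self):
        self.candidate = None   # piece both fingertips currently intersect
        self.start_gap = None   # thumb-index distance when contact began

    def update(self, thumb_tip, index_tip, piece_at):
        """piece_at(point) -> the puzzle piece containing point, or None."""
        piece = piece_at(thumb_tip)
        if piece is None or piece is not piece_at(index_tip):
            self.candidate = None
            return None
        gap = math.dist(thumb_tip, index_tip)
        if piece is not self.candidate:
            self.candidate, self.start_gap = piece, gap
            return None
        if gap <= self.start_gap - CLOSE_DELTA:
            # Grab confirmed: the caller parents the piece to the wrist joint
            # and locks the rendered thumb/index pose for a stable-looking grip.
            return piece
        return None
```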

As soon as a piece is grabbed, the distance between thumb and index is saved, and a release margin is calculated based on that distance. Once thumb and index move apart beyond that margin, the piece is released.

Several safeguards try to prevent unintentional releases: we don’t check for release when tracking confidence is below a certain threshold, and after tracking confidence is regained, we wait several frames before checking for release again. Fingers are also required to be beyond the release margin for several frames before actually releasing.

Debug visualization: during a grab, the initial grab distance between fingertips is saved (outer red circle). The piece is released when the real position of the fingertips move beyond a certain margin (blue circle).

There is also a system in place similar to Vacation Simulator’s overgrab method. Due to the lack of haptic feedback when grabbing a piece, it’s not uncommon for fingers to drift closer to one another during a grab. If they close beyond a certain threshold, the release margins are adjusted to make releasing the piece easier.

Try it yourself: to see these debug visualizations in-game, go to ‘Settings > Hand Tracking > Debug visualizations’ and turn on ‘Interactions widgets’.

Debug visualization: If fingers drift to each other during a grab over a certain threshold (inner red circle), the release margins are re-adjusted to make releasing the piece feel less “sticky”.
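Pulling the release rules above together, a sketch of the per-frame release check might look like this (the threshold values are assumptions, not the game’s actual numbers):

```python
RELEASE_MARGIN = 0.015   # meters beyond the saved grab gap (assumed)
STABLE_FRAMES = 5        # consecutive frames required (assumed)
MIN_CONFIDENCE = 0.5     # tracking-confidence floor (assumed)

class ReleaseCheck:
    def __init__(self, grab_gap):
        self.release_gap = grab_gap + RELEASE_MARGIN
        self.tightest_gap = grab_gap   # tracks overgrab drift
        self.frames_apart = 0
        self.confident_frames = 0

    def update(self, gap, confidence):
        # Safeguard: skip low-confidence frames, then wait several frames
        # after confidence returns before trusting the data again.
        if confidence < MIN_CONFIDENCE:
            self.confident_frames = 0
            return False
        self.confident_frames += 1
        if self.confident_frames < STABLE_FRAMES:
            return False
        # Overgrab: if fingers drift tighter mid-grab, move the release
        # threshold down with them so letting go doesn't feel "sticky".
        if gap < self.tightest_gap:
            self.tightest_gap = gap
            self.release_gap = gap + RELEASE_MARGIN
        # Require fingers beyond the margin for several consecutive frames.
        self.frames_apart = self.frames_apart + 1 if gap > self.release_gap else 0
        return self.frames_apart >= STABLE_FRAMES
```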

One limit to this approach is that it makes supporting grabbing with fingers other than the index a bit harder. An earlier implementation also allowed grabbing between middle finger and thumb, but this often led to false positives when grabbing pieces out of a full puzzle grid, since it was hard to evaluate which finger the player was intending to grab a specific piece with.

This would not have been an issue if grabbing revolved around full finger pinches, since that results in a clearer binary input from which to determine user intent (at the cost of a less natural-feeling grab pose).

Midpoint Check

Besides checking which piece the index and thumb are intersecting, an additional check happens at the midpoint between index fingertip and thumb fingertip.

Whatever piece this midpoint hovers over will be prioritized for grabbing, which helps avoid false positives when a player tries to grab a piece in a full grid.

In the example below, if the player intends to grab the green piece by its right edge, they would unintentionally grab the yellow piece if we didn’t do this midpoint check.

Left: thumb, index & midpoint between fingertips are in yellow → grab yellow. Right: thumb & index are in yellow, midpoint is in green → grab green
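As a sketch, the midpoint rule folds into target selection like this (hypothetical helper names):

```python
def pick_grab_target(thumb_tip, index_tip, piece_at):
    """Prefer the piece under the fingertip midpoint; otherwise fall back to
    a piece that both fingertips intersect."""
    midpoint = tuple((t + i) / 2 for t, i in zip(thumb_tip, index_tip))
    piece = piece_at(midpoint)
    if piece is not None:
        return piece   # grabbing 'around' a piece targets that piece
    piece = piece_at(thumb_tip)
    return piece if piece is piece_at(index_tip) else None
```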

Grabbing the Puzzle

Grabbing the puzzle works similarly to grabbing puzzle pieces, except it is initiated by performing a full pinch within the grab zone around the puzzle.

The size of this zone is dynamically increased when switching from controllers to hands. This makes it a bit easier to grab, and helps reduce the likelihood of accidentally grabbing a piece in the grid instead of the grid itself.

The grab zone around the puzzle expands when switching from controllers to hands, making it easier to grab. Although it requires a full pinch, grabbing the puzzle otherwise works like grabbing puzzle pieces.

Dynamic Hand Smoothing

The hand tracking data provided by the Oculus Quest can still have a bit of jitter to it, even when tracking confidence is high. This can affect gameplay too, since jitter is much more noticeable when holding the puzzle grid or a long puzzle piece by the edge, making precise placement of pieces on the grid harder.

Smoothing the tracking data can go a long way to produce more stable looking grabs, but needs to be done in moderation since too much smoothing will result in a “laggy” feeling to the hands. To balance this, hand smoothing in Cubism is dynamically adjusted depending on whether your hand is holding something or not.

Try it yourself: to see the impact of hand smoothing, try turning it off under ‘Settings > Hand Tracking > Hand smoothing’.

Increasing the smoothing of hand positions while holding objects helps produce a more stable grip, making precise placement on the grid a bit easier.
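A minimal version of that dynamic smoothing could be a per-frame exponential filter whose strength depends on whether the hand is holding something (the two alpha values below are assumptions):

```python
ALPHA_FREE = 0.6       # light smoothing while the hand is empty (assumed)
ALPHA_HOLDING = 0.25   # heavier smoothing while holding a piece (assumed)

class HandSmoother:
    def __init__(self):
        self.pos = None

    def update(self, raw_pos, holding):
        """Exponential smoothing: lower alpha = steadier but laggier hands."""
        alpha = ALPHA_HOLDING if holding else ALPHA_FREE
        if self.pos is None:
            self.pos = raw_pos
        else:
            self.pos = tuple(p + alpha * (r - p)
                             for p, r in zip(self.pos, raw_pos))
        return self.pos
```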

Pressing Buttons

One thing I noticed with Cubism’s original hand tracking demo was that most people tried pressing the buttons even though that was not supported at the time. Therefore, one of my goals with this new version of hand tracking was to make the buttons actually pushable.

Buttons can be hovered over when a raycast from the index finger tip hits a collider at the back of the button. If the index finger then intersects with the collider, a press is registered. If the index intersects the collider without first hovering it, no press is registered. This helps prevent false positives when the finger moves from bottom to top.

There are a few more checks in place to prevent false positives: the raycast is disabled when the finger is not facing the button, or when the player is not looking at their finger when pressing.

Try it yourself: to see this debug visualization in-game, go to ‘Settings > Hand Tracking > Debug visualizations’ and turn on ‘Interactions widgets’.

Debug visualization: a raycast from the index tip checks whether the finger is hovering over a button. To help prevent false positives, interaction is disabled when the finger is not facing the button, or when the player is not looking at their finger.
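Here’s a simplified sketch of those press rules (the sphere collider and the 0.5 cosine cutoffs are stand-ins; the real game raycasts against a collider at the back of the button):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalized(v):
    length = math.sqrt(dot(v, v))
    return tuple(x / length for x in v)

class PushButton:
    def __init__(self, center, radius):
        self.center, self.radius = center, radius
        self.hovered = False

    def update(self, index_tip, finger_dir, gaze_dir):
        to_button = normalized(tuple(c - t for c, t in zip(self.center, index_tip)))
        # Disable interaction when the finger isn't facing the button or the
        # player isn't looking toward it -- both suppress false positives.
        if dot(finger_dir, to_button) < 0.5 or dot(gaze_dir, to_button) < 0.5:
            self.hovered = False
            return False
        inside = math.dist(index_tip, self.center) <= self.radius
        if inside and self.hovered:
            self.hovered = False   # one press per entry
            return True
        if not inside:
            self.hovered = True    # hover first; entering un-hovered never counts
        return False
```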

Guiding Interactions

One of the main challenges of building any interaction for hand tracking is that, in contrast to buttons on a controller which are either pushed or not pushed, there are many different ways people may try to approach an interaction with their hands while expecting the same outcome.

Playtesting with a diverse set of people can help you learn how people are approaching the interactions presented to them, and can help refine the interaction cues that guide them to the expected gestures. Playtesting can also help you learn some of the outliers you may want to catch by adding some interaction redundancy.

Interaction Cues

There are several cues while grabbing a piece. When a user first hovers over a piece, their index and thumb take on the color of that piece, both to indicate it can be grabbed, and to signal which fingers can grab it (inspired by previous work by Luca Mefisto, Barrett Fox, and Martin Schubert). The piece is also highlighted to indicate it can be grabbed.

Several cues also indicate when the grab is successful: the fingertips become solid, the highlights on the piece flash, and a short audio cue is played.

Various cues both on the hand and the puzzle piece guide and confirm the grab interaction.

Buttons have several cues to help indicate that they can be pushed. Much like with puzzle pieces, the index fingertip is highlighted in white once you hover over a button, indicating which finger can interact. As with controllers, buttons extend outward when hovered, but this time the extended button can actually be pressed: once the index touches it, it follows the finger until it is fully pressed down, at which point an audio cue confirms the click.

A subtle drop shadow on the button surface indicates the position and distance of the index relative to the button, and helps guide the press interaction.

Various cues guide interactions with buttons: buttons extend outward when hovered, the index fingertip is highlighted, a drop shadow shows where the tip will interact, and the button follows the finger when pushed.

Interaction Redundancy

Since some people may approach some interactions in unintended ways, it can be good to try and account for this where possible by adding some redundancy to the ways people can use their hands to interact. Interaction cues can still guide them to the intended interaction, but redundancy can help avoid them getting unnecessarily stuck.

When it comes to grabbing pieces, a few playtesters would at first try to grab pieces by making a fist instead of using their fingertips. By having the colliders cover the entire finger instead of just the fingertip, a decent number of these fist grabs will still be registered.

I should note this approach still needs some improvement, since it also introduces some issues producing unintended grabs in cases when there are a lot of pieces floating around the play area. A better approach in the future might be to also perform a check on the total finger rotation to account for fist grabs instead.

Though grabbing is designed around fingertips, colliders on index and thumb cover the entire finger to help catch different forms of grabbing.

With buttons, there were a few playtesters who would try pinching them instead of pushing them. In part this seemed to occur when they previously learned how to pinch buttons in the Oculus home screen, right before launching the game.

For this reason, buttons can also be clicked by pinching once they are hovered, and hopefully cues like the highlighted index and drop shadow will eventually guide them to pressing the buttons instead.

Pinching while hovering over buttons also registers as a click.

The first button players encounter when using hands also explicitly states “Push to Start”, to help transition people from pinching to pushing after coming from the Oculus Home menu.

Teaching Limitations

Although the quality of Quest’s hand tracking has improved over the last year, it still has its limitations — and a player’s awareness of these limitations can have a big impact on how good they perceive their experience to be.

Cubism implements a few ways of teaching players about the current limitations of hand tracking on Quest.

When the player first switches to hand tracking (either at launch or mid-game), a modal informs them of some best practices, like playing in a well-lit space and avoiding crossing hands.

When a user switches to hand tracking, a modal informs them about limitations and best-practices. The “Push to Start” instruction helps teach new users that buttons can be naturally pushed in this game.

It is important to acknowledge that most people are likely to immediately dismiss modals like this or quickly forget their guidelines, so signaling why things go wrong during the experience is also important.

In Cubism, hands turn red to signal that tracking has been lost. In some playtests, people would keep one hand on their lap and play with the other, and be puzzled why their lap hand appeared frozen. To help in cases like this, if tracking loss persists, a message is displayed on the hand to clearly state why it is frozen. If tracking is lost specifically because the player is crossing their hands, the message changes to inform them not to do that.

Left: hands turn red when tracking is first lost. Middle: when tracking loss persists, a message informs the player about what is going on. Right: if tracking is lost due to occluded hands this is also indicated
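A sketch of that escalating feedback (the timing value is assumed):

```python
import time

MESSAGE_DELAY = 1.5   # seconds of persistent loss before showing text (assumed)

class TrackingLossFeedback:
    """Escalates from a red tint to an explanatory message on the hand."""
    def __init__(self):
        self.lost_since = None

    def update(self, tracked, hands_crossed):
        """Returns (tint_red, message) for rendering the hand."""
        if tracked:
            self.lost_since = None
            return False, None
        now = time.monotonic()
        if self.lost_since is None:
            self.lost_since = now
        if now - self.lost_since < MESSAGE_DELAY:
            return True, None                    # brief loss: red tint only
        if hands_crossed:
            return True, "Tracking lost: try not to cross your hands"
        return True, "Hand not tracked: keep it in view, in a well-lit space"
```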

For more seasoned players, or players who prefer playing with one hand, this feature can be replaced in the settings by having hands fade out when they lose tracking instead, more closely resembling the behavior in the Oculus home menu.

The red hands and warning messages can be replaced in the settings by fading hands.

Future Work

Hand tracking on Quest still has its limitations, and though Cubism’s support for it is already in its second version, there is still plenty of room for improvement.

Regardless, I’m excited to start exploring and supporting these new input methods. In the short term, I think they can help make experiences like this more accessible and easier to share with new VR users.

Mixed reality footage captured on an iPhone with Fabio Dela Antonio’s app Reality Mixer gives an idea of what it may be like to play Cubism on an AR headset in the future.

In the long term, there seems to be a good chance that hand tracking will be the go-to input for future standalone AR devices, so hopefully this update can be a first small step towards an AR version of Cubism.


If you enjoyed this look at the hand-tracking design in Cubism, be sure to check out Thomas’ prior Guest Article which overviews the design of the broader game.


“Oh shit, the game is successful,” Says ‘Gorilla Tag’ Dev After 1.5M Player Milestone & Quest IAP

First-time indie VR developer Kerestell Smith was taken aback at the reception of his Early Access multiplayer VR game, Gorilla Tag. The unassuming lo-fi title, which plays like a game of multiplayer tag, has found a synergistic combination of interesting locomotion and social VR. Following the game’s release on Quest via App Lab, player counts have continued to grow, now reaching an astounding 1.5 million unique players. With in-app cosmetic purchases now available in the game on all platforms, Smith says the game is ready to grow beyond a one-person project.

Gorilla Tag has reached a new milestone of 1.5 million unique players across Steam and Quest, with a concurrent user record of 13,000, says developer Kerestell Smith.

While the free-to-play game had offered in-app purchases (IAP) on Steam, the Quest version was stuck with no path for monetization because App Lab developers couldn’t add DLC or IAP to their games.

Recently that changed, allowing Smith to add IAP to support the Quest version of the game. To that end, Smith says the game is now pulling in “big boy money”—enough to grow the game beyond a one-person team.

“I think [the game] was probably sustainable just [from Steam IAP revenue], but server costs [from the Quest version] were eating up most of the net profits there. [With IAP now on Quest it] pretty much means that I have a solid base [of revenue] to work from. I’m a pretty risk averse person, so I don’t really like throwing money and resources at something speculatively, but now it’s at a point where there’s enough income to justify getting help and stuff, not just to cover expenses for the game itself,” Smith tells Road to VR. “I’m a little out of my depth at this point (to be frank I’ve been pretty out of my depth since I released it. I really didn’t make the game with any expectation of financial success), but now things seem a lot less like ‘maybe IAP will make the game successful’ and more like ‘oh shit, the game is successful’ lol.”

With the game now monetizing across all versions, Smith plans to bring on additional help to expand and improve the game.

“Being the only person managing the game 24/7 isn’t super fun, since I’m always worried about potential server issues or needing to respond to new exploits or people being horrible and toxic in game and stuff, and dealing with people is one of my least favorite things,” said Smith. “I have a few volunteers helping out with the Discord who I am immensely grateful for (electronic, pink, blue and graic), but for the most part having a successful multiplayer game means a lot more time spent on community management and stuff. […] Overall I’m really thrilled with how things are going, and getting through the growing pains will be worth it. It’s still completely bizarre to have made something so many people are enjoying.”

The prior update, which covers an earlier milestone in the game’s rapid growth, continues below.

Four and a half months after the game’s release, Gorilla Tag’s momentum is still going strong.

After being released on App Lab back in March (making it easier for Quest users to jump into the game), the game has reached a new record high count of 5,500 concurrent players across Quest and Steam. Developer Kerestell Smith tells Road to VR that the game has seen 675,000 unique players to date, a staggering success for an indie VR game that’s had no formal marketing.

As a multiplayer game running dedicated servers, the impressive player traction is a blessing and a curse; the game is free, which means that every additional player makes the game more expensive to operate. Smith has since added an ‘Early Access Supporter Pack’ DLC on Steam for $10 to give players a way to support the game’s ongoing development and get exclusive in-game cosmetics.

Smith says this has allowed him to cover server costs up to this point; most of the growth is happening on Quest though, which presents a problem. Developers are currently unable to offer paid add-ons (like Gorilla Tag’s Early Access Supporter Pack) through App Lab on Quest, which hampers the game’s monetization options.

The original article, which overviews the game and its impressive organic traction, continues below.

Original Article (March 1st, 2021): Gorilla Tag is as straightforward as it sounds… players take on the role of (legless) gorillas which toss themselves around by smacking their hands on the ground in an effort to chase one another in a game of tag. Interestingly though, you’ll find nearly as many people simply fooling around and chatting with one another as those who are really there to play tag.

Currently available in Early Access on Steam and Quest via SideQuest (update: now also on App Lab), Gorilla Tag is a free game, but even so, that doesn’t account for its unexpected success—the title has quickly become the best rated free VR game on Steam with an ‘Overwhelmingly Positive’ 98% rating from 1,854 user reviews, beating out the likes of Google Earth VR, The Lab, and all others. It has seen a similarly positive reception on SideQuest where it currently holds a 4.9 out of 5 rating from 218 reviews.

Update (December 2nd, 2021): The latest of the above figures:

  • Steam: ‘Overwhelmingly Positive’ 96% from 10,556 reviews
  • Quest (SideQuest): 4.5 out of 5 from 1,620 reviews
  • Quest (App Lab): 4.7 out of 5 from 7,441 reviews

“[…] the response [to Gorilla Tag] has been completely insane,” developer Kerestell Smith wrote to his community after the game’s first week. “In my wildest dreams I was hoping for a slow burn, so I’d be able to work on stuff at a steady pace and maybe there’d be a room or two of people playing at any given time, but, uh, it turns out lots more people like to ‘be monke’ than I thought they would.”

According to Smith, the game’s sole developer, Gorilla Tag saw 42,000 unique players across all platforms in its first two weeks after release; as of today (three days later) it has reached 54,000 players.

Update (December 2nd, 2021): The latest of the above figures:

  • Unique players: 1.5 million
  • Max concurrent players: 13,000

Update (July 1st, 2021): The latest of the above figures:

  • Unique players: 675,000
  • Max concurrent players: 5,500

Even for an Early Access title, Gorilla Tag is a bare-bones game right now. Between the pixelated graphics and legless avatars, you’d be forgiven for passing it by. But if you peer inside you’ll find a winning combination of interesting locomotion and social VR magic. Road to VR spoke to Smith about the project and what comes next.

Image courtesy Another Axiom

Smith is a 31-year-old enterprise software developer. While Gorilla Tag is his first game development project, he was previously involved in Echo Arena’s competitive scene and says he was inspired by that game’s take on VR locomotion (which doesn’t rely on typical stick or teleportation movement).

“When you’re doing something like stick locomotion or teleportation, you’re more or less giving orders to a virtual entity. It doesn’t fully feel like you’re present. Like with stick locomotion it feels a lot more like you’re kind of sliding and ice skating around. It doesn’t feel like you’re moving through an environment,” Smith says. “When you have to walk with your hands [as in Gorilla Tag], every movement is dependent on how you’re actually moving [in the real world]. You’re using your arms like you would be using your feet, so it feels a lot more like you’re actually walking around.”
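The core of that hand-walking idea can be sketched in a few lines (this illustrates the general technique, not Smith’s actual code):

```python
GRAVITY = (0.0, -9.81, 0.0)

def locomotion_step(player_vel, hand_vel, hand_on_surface, dt):
    """While a hand is planted, it is static relative to the world, so the
    tracked hand motion transfers (negated) to the body -- moving your hand
    backward pushes you forward, like pushing off the ground."""
    if hand_on_surface:
        return tuple(-v for v in hand_vel)
    # Airborne: ballistic flight until a hand touches a surface again.
    return tuple(v + g * dt for v, g in zip(player_vel, GRAVITY))
```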

As for the gameplay, Smith notes that the simple game of tag is “a really primal thing,” which makes for an easy and compelling multiplayer experience.

For his game he also sought to remove parts of the character that weren’t directly controllable by the player.

“I tried to focus the design as much as possible on making it feel as grounded as possible,” he says. “You don’t have feet controls in VR, so I took out the legs. You don’t have ring and pinky fingers in most controllers, so I took those out. I didn’t put in any floating menus or UI, everything is grounded in the world.”

The ‘grounded’ metaphor even extends to how players find their way from one game lobby to the next. Instead of a floating multiplayer menu, players climb up a large tree and then descend down a tunnel into a ‘mine’ level. On their way down the tunnel they are seamlessly connected into a new multiplayer session happening in that level. To navigate back to the other level, just climb back up the tunnel and you’ll be connected to a session happening there.

Smith has clearly been taken aback at the game’s traction right out of the gate.

“I didn’t expect this reception at all, I figured I’d have time to work on more of the basics […], but im starting to feel pretty dumb for not having any way for people to give me money at all.”

That said, his short-term plan is to add some paid DLC to Gorilla Tag to give players a way to directly support the game’s ongoing development.

“I’m just working on fleshing out some of the more core features like making it easier for people to play with their friends in public rooms, making a queue for players who want to play a little harder, making more game modes and maps, and working on technical stuff like reducing the networking traffic and adding lag compensation and more server locations,” he said about the near-term goals for the game.

Smith also tells Road to VR that Gorilla Tag is awaiting App Lab approval which will make it much more easily accessible to Quest players (currently the game is only available on Quest via sideload).

 – – — – –

Whether Gorilla Tag will be able to translate its early traction into lasting success is anyone’s guess at this point, but there are clear lessons here in both VR design & distribution that are worth studying.


VR Design App ‘Gravity Sketch’ is Now Free for Individual Users

Gravity Sketch is a 3D modeling application that supports PC VR headsets and Oculus Quest. Until now the consumer app was priced at $25, but the studio has just made it completely free for individual users.

Launched into Steam Early Access in 2017, Gravity Sketch has been lauded for its full-featured VR creation tools and support for traditional peripherals such as Wacom tablets.

It’s more complicated than creation apps like Tilt Brush or Oculus Quill, but it also has an impressive range of design-oriented tools that are aimed at everything from product ideation to digital asset modeling. The app supports Oculus Quest, Oculus Rift, and SteamVR headsets.

“Providing a free product allows us to welcome a more diverse user group,” Gravity Sketch co-founder and CEO Oluwaseyi Sosanya says in a blog post. “Early Adopters have helped support the development and R&D. These users have been instrumental to what we have built to date; many of whom brought Gravity Sketch into their formal workflow which resulted in enterprise contracts that helped accelerate our revenues to the point at which we can support and grow the team and platform.”

The basic version for individual users, which is now free on all platforms, includes the ability to import images and video (.jpg, .png, .mp4), import/export .obj with textures and materials, and utilize infinite layers. It also gives you access to 1GB of cloud-saves via the company’s free Landing Pad service.

The paid business version includes enterprise-level support and security, cloud backup, and support for more 3D file types such as IGES, FBX, GLTF, Blender files (.blend), STL, and Collada (.dae). It also includes the ability to collaborate live with teams across the world in the same virtual studio.


VR Comfort Settings Checklist & Glossary for Developers and Players Alike

For those who have been playing or developing VR content for years, it might seem ‘obvious’ what kind of settings should be included for player comfort. Yet for new players and developers alike, the confusing sea of VR comfort terms is far from straightforward. This has led to situations where players buy a game only to find it doesn’t include a comfort setting that’s important to them. So here’s a checklist and glossary of ‘essential’ VR comfort settings that developers should clearly communicate to potential customers about their VR game or experience.

VR Comfort Settings Checklist

Let’s start with the VR comfort settings checklist, using two example games. While it is by no means comprehensive, it covers many of the basic comfort settings employed by VR games today. To be clear, this checklist is not a list of settings a game should include; it is merely the info that should be communicated so customers know which comfort settings are offered.

Turning

                             Half-Life: Alyx    Beat Saber
Artificial turning           ✔                  ✖
Smooth-turn                  ✔                  n/a
     Adjustable speed        ✔                  n/a
Snap-turn                    ✔                  n/a
     Adjustable increments   ✔                  n/a

Movement

                             Half-Life: Alyx    Beat Saber
Artificial movement          ✔                  ✖
Smooth-move                  ✔                  n/a
     Adjustable speed        ✔                  n/a
Teleport-move                ✔                  n/a
Blinders                     ✖                  n/a
     Adjustable strength     ✖                  n/a
Head-based                   ✔                  n/a
Controller-based             ✔                  n/a
Swappable movement hand      ✔                  n/a

Posture

                             Half-Life: Alyx    Beat Saber
Standing mode                ✔                  ✔
Seated mode                  ✔                  ✖
Artificial crouch            ✔                  ✖
Real crouch                  ✔                  ✔

Accessibility

                             Half-Life: Alyx                Beat Saber
Subtitles                    ✔                              ✖
     Languages               [languages would be listed]    n/a
Audio                        ✔                              ✔
     Languages               English                        n/a
Adjustable difficulty        ✔                              ✔
Two-hands required           ✖                              For some game modes (optional)
Real-crouch required         ✖                              For some levels (optional)
Hearing required             ✖                              ✖
Adjustable player height     ✖                              ✔

If players are equipped with this information ahead of time, it will help them make a more informed buying decision.

VR Comfort Settings Glossary

For new players, many of these terms might be confusing. Here’s a glossary of basic definitions of each VR comfort setting.

Turning

  • Artificial turning – whether or not the game allows the player to rotate their view separately from their real-world orientation within their playspace (also called virtual turning)
  • Smooth-turn – an artificial turning mode which smoothly rotates the camera view (also called continuous-turn)
  • Snap-turn – an artificial turning mode which rotates the camera view in steps or increments (also called blink-turn)

Movement

  • Artificial movement – whether or not the game allows the player to move through the virtual world separately from their real-world movement within their playspace (also called virtual movement)
  • Smooth-move – an artificial movement mode which smoothly moves the player between positions (also called continuous-move)
  • Teleport-move – an artificial movement mode which teleports the player between positions (also called blink-move)
  • Blinders – cropping of the headset’s field of view to reduce motion visible in the player’s periphery
  • Head-based – the game considers the player’s head direction as the ‘forward’ direction for artificial movement
  • Hand-based – the game considers the player’s hand/controller direction as the ‘forward’ direction for artificial movement (also called controller-based)
  • Swappable movement hand – allows the player to change the artificial movement controller input between the left and right hands

Posture

  • Standing mode – supports players playing in a real-world standing position
  • Seated mode – supports players playing in a real-world seated position
  • Artificial crouch – allows the player to crouch with a button input instead of crouching in the real world (also called virtual crouch)
  • Real crouch – allows the player to crouch in the real-world and have it correctly reflected as crouching in the game

Accessibility

  • Subtitles – whether the game has subtitles for dialogue & interface, and which languages are available
  • Audio – whether the game has audio dialogue, and which languages are available
  • Adjustable difficulty – allows the player to control the difficulty of a game’s mechanics
  • Two-hands required – whether two hands are required for core game completion or essential mechanics
  • Real-crouch required – a game which requires the player to physically crouch for core completion or essential mechanics (with no comparable artificial crouch option)
  • Hearing required – a game which requires the player to be able to hear for core completion or essential mechanics
  • Adjustable player height – whether the player can change their in-game height separately from their real world height (distinct from artificial crouching because the adjustment is persistent and may also work in tandem with artificial crouching)

As mentioned, this is not a comprehensive list. VR comfort is a complex topic especially because everyone’s experience is somewhat different, but this is hopefully a useful baseline to help streamline communication between developers and players alike.


Stanford & Samsung Develop Ultra-dense OLED Display Capable of 20,000 PPI

Researchers at Stanford and Samsung Electronics have developed a display capable of packing in more than 10,000 pixels per inch (ppi), something that could eventually find a home in the VR/AR headsets and contact lenses of the future.

Over the years, research and design firms like JDI and INT have been racing to pave the way for ever higher pixel densities for VR/AR displays, astounding convention-goers with prototypes boasting pixel densities in the low thousands. The main idea is to reduce the perception of the dreaded “Screen Door Effect”, which feels like viewing an image in VR through a fine grid.

Last week, however, researchers at Stanford University and Samsung’s South Korea-based R&D wing, the Samsung Advanced Institute of Technology (SAIT), said they’ve developed an organic light-emitting diode (OLED) display capable of delivering greater than 10,000 ppi.

In the paper (via Stanford News), the researchers outline an RGB OLED design that is “completely reenvisioned through the introduction of nanopatterned metasurface mirrors,” taking cues from previous research done to develop an ultra-thin solar panel.

Image courtesy Stanford University, Samsung Electronics

By integrating in the OLED a base layer of reflective metal with nanoscale corrugations, called an optical metasurface, the team was able to produce miniature proof-of-concept pixels with “a higher color purity and a twofold increase in luminescence efficiency,” making it ideal for head-worn displays.

Furthermore, the team estimates that their design could even be used to create displays upwards of 20,000 pixels per inch, although they note that there’s a trade-off in brightness when a single pixel goes below one micrometer in size.
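A quick back-of-envelope check shows why 20,000 ppi sits right at that threshold:

```python
MM_PER_INCH = 25.4

def pixel_pitch_um(ppi):
    """Center-to-center pixel spacing in micrometers for a given density."""
    return MM_PER_INCH / ppi * 1000.0

print(pixel_pitch_um(10_000))   # 2.54 µm
print(pixel_pitch_um(20_000))   # 1.27 µm -- nearing the ~1 µm brightness trade-off
```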

Stanford materials scientist and senior author of the paper Mark Brongersma says the next steps will include integrating the tech into a full-size display, which will fall on the shoulders of Samsung to realize.

It’s doubtful we’ll see any such ultra-high resolution displays in VR/AR headsets in the near term—even with the world’s leading display manufacturer on the job. Samsung is excellent at producing displays thanks to its wide reach (and economies of scale), but there’s still no immediate need to tool mass manufacturing lines for consumer products.

That said, the next generation of VR/AR devices will need a host of complementary technologies to make good use of such an ultra-high resolution display, including reliable eye-tracking for foveated rendering as well as greater compute power to render ever more complex and photorealistic scenes—things that are certainly coming, although they aren’t here yet.


‘Blaston’ is a Fantastically Creative VR ‘Shooter’ That’s All About Making You Move

Blaston, the latest VR title from Resolution Games, is a 1 vs. 1 multiplayer VR game that thinks completely outside the box and explores new and interesting mechanics that challenge the notion of what a ‘VR shooter’ can be.

I’ve spent nearly a decade now reporting on the VR industry and watching VR content grow from the earliest demo experiences for dev kit headsets to the shiniest big budget exclusive titles. And though there’s a growing number of games that truly feel ‘VR native’, the majority of VR content that we see today feels like it’s still trying to break free of the tropes of non-VR game design.

Quietly launched just last week on Quest (and coming soon to SteamVR), Blaston immediately stands out to me as a game that was designed on a properly blank canvas, by developers who dodged preconceptions about how a shooting game could work in VR.

Blaston could be called a First Person Shooter—after all, it’s a 1v1 shooting game—but it’s really nothing like any you’ve played before. After playing the game for a few hours, the most succinct way I can describe the game is ‘PvP bullet hell shooter’. Here’s the gist.

Two opposing players stand on raised platforms. At the start of the match weapons begin to spawn at the edges of the platforms. Each weapon fires projectiles that differ greatly in speed & size, and guns have vastly different ammo counts & rates of fire.

Each player has a health bar and takes damage as they get hit by projectiles. To avoid them, players need to duck, weave, or block—without falling off the platform—all while returning fire.

It looks chaotic, but the slow movement of the projectiles and the differences between the weapons leave the door open to gameplay that’s deeper than just shooting—not just in the way you dance around to dodge incoming fire, but also in how you pick which weapons to bring with you into the game and how you use them together in the heat of battle.

Captured by Road to VR

There are 22 different weapons in Blaston, each with their own bullet properties and spawn timers. Players choose six weapons for their loadout, which spawn around them during the match. With creative weapons like laser grenades, guns with curving projectiles, and even deployable shields for blocking incoming fire, there’s tons of room for creative strategizing in offense & defense.

Some weapons are semi-auto and shoot large, slow bullets, others are full auto and shoot faster smaller bullets | Captured by Road to VR

In a nutshell, Blaston is about making your opponent move how you want them to. Once their movements are predictable, then you know where to shoot to score damage. But your opponent might have a loadout that foils your underlying strategy, forcing you to adapt in real time and encouraging you to tweak and refine your own loadout for the next match.

In this way, Blaston is almost like a bullet hell game where—instead of a computer shooting a bunch of bullets everywhere—an intelligent agent (the other player) is the one making the ‘map’ for you in real time. It’s a genius arrangement.

Though one is a shooter and the other a melee game, aspects of Blaston’s design remind me a lot of the artfully designed Until You Fall. It’s no coincidence that both games take special care to control the pace of combat in a way that allows deeper gameplay to emerge, engaging both your micro-skill (aiming, dodging, and blocking) and your macro-skill (pre-game planning and overarching strategy). Nor is it coincidence that the core gameplay of both is built around engaging bodily movement instead of heavy use of buttons and sticks.

What isn’t part of Blaston is an important lesson too. There are no reloads. No stick locomotion. No ADS. No inventory. No giant map. No shields. Etc., etc.

None of those things are necessarily bad for VR, but the assumption that they should be in there (a holdover of non-VR game design) would have steered Blaston toward serving preconceptions instead of the reality of its gameplay.

– – — – –

When it comes to VR game design, counterintuitively, one of the biggest challenges for developers seems to be stepping outside of the box of non-VR game design, to undo assumptions about what should be, and to explore new ideas that don’t neatly fit into established non-VR genres like ‘shooter’. With Blaston, developer Resolution Games has clearly demonstrated that capacity and paved new ground for all of us to consider.
