Accessible XR Development After VRTK

As a developer moving from the web and app world into 3D and XR, I’ve had to constantly re-evaluate my platform and tool choices as the industry evolves at tweetstorm velocity. Today’s XR development pipeline is clogged by a glut of proprietary hardware and software APIs and SDKs from competing players like Oculus, HTC Vive, Microsoft, Google, Apple, Sony and SteamVR — to say nothing of emerging third-party peripherals like Logitech’s VR-tracked keyboard, the new AR-enabling Zed Mini dual-eye camera for the Rift or Vive, or any other industry-disrupting Kickstarters that might’ve sprung up since I started typing this paragraph.

Left to right: a bunch of cool stuff I want.

Each platform’s fine — even technologically stunning, one might argue — with respective strengths, weaknesses and use cases. But the distinctions force XR developers to ask hard questions: Where is the market going? How do I invest my skill-building time? What devices should my app support? What platform can I get a job working on? Developers must be business analysts as much as creative technologists to stay relevant. It’s easy to suffer choice paralysis with such a wide array of options, and easier still to bet on the wrong technology and lose.

Personally, I also face certain technical, logistic and financial realities as an independent XR developer in the Midwest (US), where the industry hasn’t proliferated as it has in major coastal cities. Thankfully, game engines like Unity and Unreal are rapidly democratizing this space. Both engines seek to bridge the gaps between the various XR SDKs, employing thousands of engineers to ensure their software plays nicely with just about any significant third-party API. For example, as I wrote about in August, the Oculus SDK integrates beautifully with Unity and comes equipped with many of the scripts and prefabs needed to quickly prototype, develop and deploy a custom Rift app.

I miss bossing around my hand-modeled #MadeWithBlocks BB-8. Check out my deep dive on this project, The Future of VR Creation Tools.

That’s fantastic, but it’s still non-standard. Porting the same Unity app to the HTC Vive or a Windows HMD is non-trivial — not impossible or even terribly difficult, but non-trivial. Maintaining your app across multiple SDKs over the long haul is similarly non-trivial. Non-trivial costs money and time, and we’re all short on both.

Imagine instead if XR practitioners had to worry less about betting on the right platform or device and could focus on creating unique and compelling experiences, content and UX. The first step down that path was VRTK — but sadly, one of the best tools to combat the VR SDK surplus will soon be hobbled by the loss of its founder.

VRTK: The Open Source Approach

This free, open source Unity toolkit aims to knit together a single workflow for a variety of VR APIs. It comes with the same stock prefabs and scripted mechanics you might find included in any single proprietary SDK, but makes each piece of functionality work identically whether deployed to Oculus, SteamVR (read: Vive and, with v3.3.0, Windows HMDs) or Daydream — covering every major VR HMD platform today.
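
Under the hood, that is essentially the adapter pattern: gameplay scripts talk to one toolkit-level interface, and a thin bridge per SDK hides the proprietary calls. Here is a purely hypothetical sketch of the idea in Unity C#; none of these type names are real VRTK (or OpenXR) classes, they are invented for illustration.

// Hypothetical illustration of the adapter pattern a cross-SDK toolkit relies on.
// These types are invented for this sketch; they are not VRTK or OpenXR APIs.
using UnityEngine;

public interface IControllerBridge
{
    bool IsTriggerPressed();
    Vector3 ControllerPosition();
}

// In practice you'd write one bridge per SDK (wrapping OVRInput, SteamVR_Controller and so on).
// A stub stands in here so the sketch compiles without any headset SDK installed.
public class StubControllerBridge : IControllerBridge
{
    public bool IsTriggerPressed() { return false; }
    public Vector3 ControllerPosition() { return Vector3.zero; }
}

// Gameplay code never names Oculus, SteamVR or Daydream directly.
public class TriggerProbe : MonoBehaviour
{
    private IControllerBridge controller = new StubControllerBridge();

    private void Update()
    {
        if (controller.IsTriggerPressed())
        {
            Debug.Log("Trigger pulled at " + controller.ControllerPosition());
        }
    }
}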

It’s a boon to anyone wanting to dip their toes in the waters of VR development. Think of it: Want to implement teleportation locomotion over a Unity NavMesh? Just drop the component onto your player prefab. Want to test out grab mechanics, or a quick bezier pointer? VRTK’s demo scenes have you covered, and they’ll work easily on a variety of devices. Since it’s open source, you’re also free to dive in and customize the code. Struggling to get a feature working in your own project? Check out its implementation across a variety of SDKs — not a bad way to grok new XR coding concepts.
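
To give a sense of how little SDK-specific code that involves, here’s a minimal sketch of wiring game logic to one of those grab mechanics, assuming VRTK 3.x is in the project (the event and field names below have shifted slightly between releases, so treat them as approximate):

// Minimal sketch: react to grab events on a VRTK interactable object.
// Assumes VRTK 3.x; the same script behaves the same on Oculus, SteamVR or Daydream.
using UnityEngine;
using VRTK;

[RequireComponent(typeof(VRTK_InteractableObject))]
public class GrabLogger : MonoBehaviour
{
    private VRTK_InteractableObject interactable;

    private void OnEnable()
    {
        interactable = GetComponent<VRTK_InteractableObject>();
        interactable.isGrabbable = true;

        // The same events fire no matter which controller SDK did the grabbing.
        interactable.InteractableObjectGrabbed += OnGrabbed;
        interactable.InteractableObjectUngrabbed += OnUngrabbed;
    }

    private void OnDisable()
    {
        interactable.InteractableObjectGrabbed -= OnGrabbed;
        interactable.InteractableObjectUngrabbed -= OnUngrabbed;
    }

    private void OnGrabbed(object sender, InteractableObjectEventArgs e)
    {
        Debug.Log("Grabbed by " + e.interactingObject.name);
    }

    private void OnUngrabbed(object sender, InteractableObjectEventArgs e)
    {
        Debug.Log("Released by " + e.interactingObject.name);
    }
}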

Sadly, VRTK’s creator is sunsetting the woefully underfunded project. The UK-based developer TheStoneFox — who until recently was actively seeking contributors, partnerships and support — announced that he will be stepping back from the project after version 3.3.0. Though VRTK boasts an active Slack community, a growing list of “made with” titles and a recent Kickstarter, TheStoneFox was unable to attract the support necessary to sustain it for the long term.

Now, as the opportunity to contribute to and utilize a premier open-source accelerator of the VR development pipeline fades, what, if anything, will replace it?

OpenXR: One API to Rule Them All

The VRTK approach — using Unity scripting to knit together similar mechanics across a spectrum of VR SDKs — is necessary in the current fragmented development landscape, but there are downsides. Someone in the community still has to monitor the various proprietary SDK updates, and your end-user VRTK app still has to be mindful of VRTK’s own changes over time. In this way, VRTK treated the symptoms of the VR SDK overload, but was not equipped to address the root cause. Enter OpenXR, The Khronos Group’s upcoming industry standard:

The standard, announced in December 2016, is being written now and is quickly gaining traction among industry players (with the notable exception of Magic Leap). Rather than forcing developers to grapple with variable proprietary SDKs and all the accompanying business consequences, companies will instead tailor their hardware and software to comply with OpenXR’s spec. Khronos, the non-profit responsible for shepherding the Vulkan, OpenGL, OpenGL ES and WebGL standards, is leading the charge. Cue the infographics!

On the left, the problem — on the right, the solution:

Images courtesy of https://www.khronos.org/openxr.

“Each VR device can only run the apps that have been ported to its SDK. The result is high development costs and confused customers — limiting market growth,” reads some fairly accurate marketing copy on their website. “The cross-platform VR standard eliminates industry fragmentation by enabling applications to be written once to run on any VR system, and to access VR devices integrated into those VR systems to be used by applications.”

A working group of industry heavyweights has agreed that the standard should be extensible to allow for future innovation and should support a range of experiences — anything from a 3-DoF controller all the way up to high-end, room-scale devices.

The only thing missing is a realistic timetable for when this standard will start shaping the development community’s day-to-day workflow. Until the market-movers get their act together, we’ll be left scrambling (and patching up VRTK projects, in many cases).

OpenXR supporters: everyone except Magic Leap.

The Cinema of Attractions: Slow Your Reel

But should we so quickly welcome industry standardization while the technology is still so new and full of possibilities? That’s the question raised in a recent episode of Kent Bye’s Voices of VR podcast featuring Rebecca Rouse. The two discussed the early days of cinema — when exploration and experimentation were the status quo — and Rouse drew striking parallels between that era and the current period in XR production and development.

Pure spectacle then and now. Left: a Cinema of Attractions-era still. Right: Chocolate VR.

“[Scholars of early film] came up with this term ‘cinema of attractions’ because they saw an incredible wealth of diversity and kind of range of exuberant experimentation in those early pieces, so they were very hard to sort of clump them together — there was such diversity — but this ‘attraction’ idea was a large enough umbrella, because all of those early pieces are in some way showing off the technology’s capabilities and generate this experience of wonder or amazement for the viewer. And the context in which they were shown is that of attractions, so they were shown at world’s fairs and as a part of vaudeville shows with other kinds of performances and displays.”

 — Rebecca Rouse, assistant professor of communication & media at Rensselaer Polytechnic Institute

Sounds eerily familiar, huh? The whole podcast is well worth a listen, but tl;dr: while there are obvious consumer and market advantages to XR standards, Rouse argues that perhaps we shouldn’t jump the gun here — not during this era of frenetic, often avant-garde XR experimentation across art, science, cinema and gaming. Looking around the industry, it’s hard to disagree.

EditorXR

One man-eating-the-camera-brilliant new application of XR technology is Unity Labs’ EditorXR. Created by Unity’s far-future R&D team (whose roles often find them working on projects and products five-to-ten years away from consumer adoption), EditorXR offers you an interface to create custom XR Unity scenes entirely within virtual reality.

Oh! And there’s flying, among other superpowers — soar through your scene like Superman or scale the whole thing down to a pinhole. They’ve literally ported the Unity inspector, hierarchy and project windows (among others) to an increasingly user-friendly VR UI pane on your wrist. With the latest update, you’re able to:

  • hook into Google’s Poly asset database web API in real time, inside VR (a rough sketch of that kind of lookup follows this list)
  • create multiplayer EditorXR sessions for editing Unity scenes with friends and collaborators
  • run EditorXR in the mainline Unity 2017.x editor
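
I can’t speak to how EditorXR wires this up internally, but at its simplest a Poly lookup boils down to a single REST call against Google’s public assets.list endpoint. A rough sketch in a Unity coroutine follows; the query keywords and the YOUR_API_KEY placeholder are mine, so swap in your own:

// Rough sketch: query Google's Poly REST API (assets.list) for matching assets.
// The endpoint and parameters come from the public Poly API docs; how EditorXR
// actually performs its lookups is not something this sketch claims to show.
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class PolySearch : MonoBehaviour
{
    private const string PolyEndpoint = "https://poly.googleapis.com/v1/assets";

    private IEnumerator Start()
    {
        string url = PolyEndpoint + "?keywords=spaceship&format=GLTF2&key=YOUR_API_KEY";

        using (UnityWebRequest request = UnityWebRequest.Get(url))
        {
            yield return request.SendWebRequest();

            if (request.isNetworkError || request.isHttpError)
            {
                Debug.LogError("Poly request failed: " + request.error);
            }
            else
            {
                // The JSON response lists matching assets, thumbnails and download URLs,
                // which you could then fetch and instantiate in your scene.
                Debug.Log(request.downloadHandler.text);
            }
        }
    }
}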

It’s still new and I’ve encountered bugs, but it’s a foregone conclusion that this tech will become a standard feature of Unity’s scene creation process as XR technology matures and proliferates. Even their alpha and beta efforts evoke the same sense of wonder and possibility that early Cinema of Attractions-era moviegoers must have felt.

For more insight on the design side, check out this deep dive on the future of XR UX design by Unity Labs’ Dylan Urquidi or the Twitter feed of Authoring Tools Group Lead Timoni West.

ML-Agents

Another experimental Unity project, ML-Agents, explores one of the most promising avenues for the future of XR development, design and UX: machine learning. Using so-called “reinforcement learning” techniques, which expressly don’t feed the AI model any sample data or rules to analyze, ML-Agents instead applies simple rewards and punishments (in the form of tiny float values) based on the outcomes of its agents’ [usually very narrowly defined] behaviors.

Stretched out over hundreds of thousands if not millions of trial-and-error training sessions, the computer experiments with its abilities and forms a model for how best to achieve the desired goal. In this way, your Agents become their own teachers — you just write the rubric.
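
What does “writing the rubric” look like in practice? Here is a minimal sketch in the spirit of the ML-Agents Agent API (method and namespace names have changed between releases, so treat the specifics as illustrative rather than canonical): the agent observes its offset from a target, nudges toward it, and is scored with nothing but tiny float rewards.

// Illustrative sketch of reward shaping with an ML-Agents-style Agent.
// Exact class and method names vary by ML-Agents release; no sample data is
// ever provided, only observations, actions and small float rewards.
using UnityEngine;
using MLAgents; // the namespace differs in some releases

public class ReachTargetAgent : Agent
{
    public Transform target;

    public override void CollectObservations()
    {
        // What the agent can "see": its offset from the target (three floats).
        AddVectorObs(target.position - transform.position);
    }

    public override void AgentAction(float[] vectorAction, string textAction)
    {
        // Actions arrive as raw floats; here they nudge the agent along x and z.
        Vector3 move = new Vector3(vectorAction[0], 0f, vectorAction[1]);
        transform.position += move * Time.deltaTime;

        // The rubric: a tiny penalty for dawdling, a big payoff for arriving.
        AddReward(-0.001f);
        if (Vector3.Distance(transform.position, target.position) < 0.5f)
        {
            AddReward(1.0f);
            Done();
        }
    }
}

The entire curriculum lives in those two AddReward calls; everything else the agent has to figure out for itself across those trial-and-error runs.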

The original GitHub commit contained some basic demo scenes, and the development community quickly took up the torch from there. Unity’s Alessia Nigretti followed up the original blog post with one describing how to integrate ML-Agents into a 2D game. On Twitter, @PunchesBears has been demonstrating similar concepts — and showing that, often enough, Agents respond to developers’ carefully calculated reward systems in ways they don’t anticipate. Similar to actual gamers, no?

In one of my favorite applications of ML-Agents, the developer Blake Schreurs brings into virtual reality a 6-DoF robo-arm Agent trained to seek a moving point in space — with slightly terrifying results once he assigns that moving target to his face.

Imagine someone applying this training model to actual robotics and fat-fingering the wrong key. Or don’t, whatever. 

He’s down for the count! I was immediately reminded of the audiences pouring out of theaters in 1895, afraid they’d be run down by the Lumière brothers’ Arrival of a Train at La Ciotat. We’re still in the salad days of both machine learning and XR development compared to where we hope to be 10 or even 50 years from now. In that time, some combination of traditional or procedural AI with these new machine learning approaches will doubtless lead to great developments in gaming and XR at large — or even in the very design process and daily workflow of computing itself.

Rift OS Core 2.0

With Rift’s new Core 2.0 OS, your entire Windows PC is accessible from your right-hand menu button. Being able to view and use your desktop apps, as well as pin windows inside other VR apps, introduces new possibilities for XR workflows (and even for traditional computing workflows) in VR.

While working on my next project entirely within VR, I can watch Danny Bittman’s great Unity rendering and lighting tutorial on YouTube in a pinned browser while messing with those same settings on my wrist in EditorXR. I can watch @_naam craft original assets in Google Blocks while I do the same, or I can gather assets from the Poly database and deploy them to my Unity scene in real-time VR, pulling up Visual Studio to code some game logic as I please.

That sounds pretty goddamn metaversal to me — and before long, we likely won’t even need code.

The XR Developer of the Future Is Not a Developer

If XR technology is to go mainstream, the development process must be as efficient and accessible as possible — and likely even open to non-developers through content creation and machine learning applications. There’s so much more to talk about and speculate over, spanning sciences and disciplines, that this piece hasn’t even touched on (next time I’ll examine WebVR and A-Frame as viable XR development pathways). More and more pieces of this accessible, standardized XR development pipeline will fall into place as the immersive computing revolution rolls on, though I’m thankful the XR industry isn’t ready to ditch its Cinema of Attractions ethos quite yet.