Unity Promises Initial OpenXR Platform Support By The End Of 2020

When the Khronos Group released OpenXR — a royalty-free standard designed to make cross-platform VR and AR app development easier — it was backed by many of the mixed reality industry’s biggest names, including Epic Games, Microsoft, Oculus, and Valve. Unity confirmed that its eponymous 3D engine will start supporting some OpenXR platforms by the end of 2020, with a focus on “providing the best developer experience on Unity supported platforms.”

Though Unity publicly backed OpenXR in March 2019 and has actively contributed to the standard, it hadn’t committed to a timeline for actually bringing OpenXR support to the Unity engine, which is used by untold numbers of 3D apps and games, as well as automotive, film, and engineering firms. Mixed reality applications such as Childish Gambino’s interactive music experience Pharos AR have been built using Unity, but until now, might not have easily reached every device a developer would hope to target. Leading PC VR headsets, HoloLens 2, and both Oculus platforms all support OpenXR, which is expected to be the backbone for most future AR and VR devices as well.

From a big-picture perspective, Unity’s promise of OpenXR support means that a significant number of developers will be able to bring previously created mixed reality content to whatever platforms prove popular, and create new AR/VR apps with Unity that work across virtually any XR device. That said, implementation won’t necessarily be exactly the same from platform to platform, and Unity is warning that due to the “unbounded combinations” of possible hardware and software, it can’t test or guarantee optimal performance for every platform out there.

Unity is currently working to support partners’ OpenXR runtimes and expects to offer initial previews of the Unity engine’s OpenXR support on “some” partner platforms before year’s end. The next stage will be early in 2021, when Unity will offer preliminary OpenXR 1.0 specification-compliant support for non-partner OpenXR runtimes and devices, with plans to improve them based on reported issues, sharing test results and spec changes with the Khronos Group.

This post by Jeremy Horwitz originally appeared on VentureBeat.

New 4K ‘Spatial Reality Display’ From Sony Has Glasses-Free 3D

The ELF-SR1 is a new ‘Spatial Reality Display’ from Sony that features a 4K screen and glasses-free volumetric 3D targeting professional users.

Volumetric 3D displays are neither easy to produce nor common, as holographic imagery generally requires a mix of stereoscopic screen technology and unique optics, sometimes backed by high-speed eye tracking. Today, the display experts at Sony are throwing their hat into the ring with a new option called the ELF-SR1 — also known as the Spatial Reality Display — which is initially being targeted at professional users in content creation businesses, but with an eye towards future use in consumer-facing applications.

Resembling a traditional computer monitor fixed on a 45-degree recline with a triangular frame, the Spatial Reality Display combines a 15.6-inch screen with a micro optical lens coating and an eye-tracking camera. While the display packs a conventional 4K resolution, the pixels are effectively split into twin 2K arrays for your left and right eyes, using live pupil tracking data and precision alignment of the micro-lenses atop pixels to deliver sharp, realistic 3D imagery. The results are digital 3D objects that appear to be floating right in front of the screen, and switch perspectives smoothly as your head and eyes move.
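As a rough illustration of the twin-array idea — a toy model, not Sony’s actual pipeline — interleaving two half-resolution eye views into one physical pixel row looks like this, with eye tracking deciding which columns the micro-lenses steer toward each pupil:

```python
# Toy sketch (not Sony's implementation): a lenticular display interleaves
# left- and right-eye pixel columns in one physical row, so a 4K panel
# effectively serves two 2K views.

def interleave_stereo_row(left_row, right_row):
    """Merge two half-resolution eye rows into one alternating physical row."""
    assert len(left_row) == len(right_row)
    physical = []
    for l_px, r_px in zip(left_row, right_row):
        physical.extend([l_px, r_px])
    return physical

print(interleave_stereo_row(["L0", "L1"], ["R0", "R1"]))
# → ['L0', 'R0', 'L1', 'R1']
```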

In other words, if you can imagine computer-generated holograms coming to life and being viewable from whatever angle you prefer relative to the display, that’s what Sony is promising here. Apart from the laptop-like screen size, the only catch is that the volumetric imagery is optimized for an audience of just one person at a time.

Similar technology appeared in consumer form within Nintendo’s 3DS, but it was obviously far lower in resolution and initially suffered from major headache-inducing issues due to the absence of eye tracking. Beyond using over 40 times as many pixels, Sony’s implementation independently tracks the viewer’s pupil positions on three axes — up-down, left-right, and forward-back — on a millisecond level, enabling the screen to dynamically adjust and render what the viewer needs to see in real time. A “powerful” Windows PC running either Unity or Unreal Engine is required to actually create the 3D content; Mac support is expected in the future.


For the time being, Sony is targeting content creators in the 3D computer graphics field, including filmmakers and animators (such as the Ghostbusters: Afterlife team at Sony Pictures), automotive product designers, architects, and VR/AR content creators. 3D models and environments can be previewed in volumetric and realistic ways, enabling creators to adjust lighting, test object positioning, and check camera blocking ahead of finalizing scenes.

Sony expects that film previsualization will be a major use of the technology in the future. Another suggested use of the Spatial Reality Display will be in car dealerships, enabling customers to examine realistic customized car models without needing to actually see the vehicles in person. Judged against 2D displays, ELF-SR1’s raw specs aren’t exactly mind-blowing — 500 nits of brightness, a contrast ratio of 1,400:1, and approximately 100% of Adobe RGB in color gamut, with an undisclosed refresh rate — but Sony is confident that users will be wowed when they see the 3D effects for themselves. The unit has 2.1-channel speakers built in and can be paired with optional accessories such as a Leap Motion gesture controller for input or a Sony-crafted stage-like box to contain content and block ambient light.

The ELF-SR1 Spatial Reality Display will sell for $5,000, within the same general price range as rival products from companies such as Looking Glass. It will start shipping in November 2020 and can be ordered directly from Sony’s website.

This article by Jeremy Horwitz originally appeared in VentureBeat.

Ultra-Wideband Wireless Technology Could Be Key To VR’s Future

Ultra-Wideband (UWB) wireless tech might be key to VR’s success at larger scale, particularly for remote work.

When you think of wireless technologies, the ones that come to mind first have taken years to become household names — Wi-Fi, Bluetooth, NFC, and 4G — while others have faded into the ether of technical jargon. It’s fair to be skeptical when a new technology arrives alongside claims that it’s going to be huge, so when Samsung proclaimed this morning that a long-nascent technology called Ultra-Wideband (UWB) is “the next big thing in wireless tech,” I might normally shrug it off as typical industry hype.

But despite prior commercialization challenges, there’s reason to believe that Ultra-Wideband technology will indeed be a big deal. Using radio waves, the wireless technology promises to enable any object with a UWB chip to be located within 4-12 inches (10 to 30 centimeters) of its actual location, compared with prior technologies measured in feet or yards. Moreover, UWB can be used to facilitate short-range data transfers, including file sharing and secure transactions. In essence, it promises to do at close distances what GPS did for long distances, unlocking a new age of location-aware business applications and opportunities.
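To make the ranging claim concrete, here is a minimal sketch — in Python, with made-up numbers, not any vendor’s API — of two-way ranging, a technique UWB chips commonly use: distance falls out of a pulse’s round-trip time minus the responder’s known turnaround delay.

```python
# Illustrative sketch of UWB two-way ranging: a tag timestamps an outgoing
# pulse, the anchor replies after a known turnaround delay, and distance
# is derived from the remaining time of flight.

SPEED_OF_LIGHT_M_PER_S = 299_792_458

def twr_distance_m(round_trip_s: float, reply_delay_s: float) -> float:
    """Estimate tag-to-anchor distance from one two-way ranging exchange."""
    time_of_flight_s = (round_trip_s - reply_delay_s) / 2
    return SPEED_OF_LIGHT_M_PER_S * time_of_flight_s

# Nanosecond-scale timestamp errors translate to tens of centimeters of
# ranging error -- the 4-12 inch figure cited above.
print(round(twr_distance_m(round_trip_s=500e-9, reply_delay_s=480e-9), 3))
# → 2.998
```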

It’s unclear at this point whether companies will use UWB to track down-to-the-foot location information for items in a warehouse, or instead rely upon it to enable virtual and remote work — imagine donning a VR headset and being able to accurately find your physical keyboard and trackpad despite not actually seeing them. Multiple business and personal applications of the technology are possible, and it looks like we’re about to start seeing some of the early ones finally emerge from labs into public view. Here’s what’s coming.

UWB technology and 5G confusion

Without getting overly into the weeds, Ultra-Wideband refers to the use of very large blocks of radio spectrum to transmit and receive data at a short range. Imagine a tiny radio tower that can simultaneously blast data onto every station on the dial at once, but limited in power so that it doesn’t interfere with radios outside of the current room, and using special frequencies that won’t disrupt traditional radio communications. The strength and directionality of the varied wide radio signals can be used to determine the relative location of the tiny tower, as well as to convey huge amounts of information quickly.
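The “locate the tiny tower” step above amounts to classic trilateration: given measured ranges to a few fixed anchors, solve for position. The following is an illustrative toy — anchor coordinates and ranges are invented, and production systems add noise filtering, a third dimension, and angle-of-arrival data:

```python
import math

# Toy 2D trilateration: recover a tag's position from ranges to three
# fixed anchors at (0,0), (d,0), and (cx,cy).

def trilaterate(d, cx, cy, r1, r2, r3):
    """Solve the intersection of three range circles in closed form."""
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + cx**2 + cy**2 - 2 * cx * x) / (2 * cy)
    return x, y

# A tag 1 m right and 2 m forward of the origin anchor:
r1, r2, r3 = math.sqrt(5), math.sqrt(13), math.sqrt(5)
x, y = trilaterate(4.0, 0.0, 4.0, r1, r2, r3)
print(round(x, 6), round(y, 6))  # → 1.0 2.0
```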

If you’re in the United States, you may have heard of UWB in another context: Verizon decided to use the same term to market its millimeter wave 5G mobile network, making the acronym “5G UWB” pop up on the screens of supporting devices. While there are some similarities in the underlying concepts, Verizon’s 5G UWB is unrelated to location services — it’s purely referencing the specific short-range cellular technology used for its highest-speed 5G connectivity.

U.S. rival AT&T instead uses the term “5G+” to differentiate millimeter wave connections, and globally, no other carrier is confusing customers in this way. As Ultra-Wideband location service technology becomes more common, Verizon’s use of the term for cellular purposes will hopefully disappear, or else remain perplexing for its customers.

Who’s on board and why

Samsung noted today that it has been working since 2018 to bring Ultra-Wideband technology into its products, and has already released two UWB-capable devices, the Galaxy Note20 Ultra in August and Galaxy Z Fold2 in September. For now, the company notes that UWB is enabling two features: Nearby Share, which lets users transmit files to friends and family in the same room, and SmartThings Find, which lets users of those two devices see the exact locations of UWB-equipped objects within “an augmented reality visual display.”

It’s worth underscoring that Samsung didn’t bring UWB to any phone in the Galaxy S20 series, or even to the entire Note20 family. These are the first Android phones with UWB, but to see the feature for yourself on that platform, you’ll need to spend $1,300 for the Galaxy Note20 Ultra or $2,000 for the Galaxy Z Fold2. That’s probably why the company didn’t make a huge deal out of UWB during their unveilings.

The timing of Samsung’s press release wasn’t coincidental. Tomorrow, Apple will hold an iPhone event where another “next big thing in wireless” — 5G cellular technology — will be the dominant story, but rumors suggest that UWB news could also be on the agenda. If Apple holds a proper coming-out party for the technology, Samsung likely wanted to be sure no one forgot that it’s supporting UWB, too.

Although you might not have taken note of it last year, Apple added a UWB chip named “U1” to every iPhone 11 model in September 2019, which means that there are already tens of millions of supporting devices in the marketplace, with prices starting at $700. The same chip was quietly added to the $400+ Apple Watch Series 6 and is expected to be inside all of this year’s iPhone 12 devices as well.

Last year, Apple said only that UWB would be used to enhance the precision and speed of AirDrop, its version of Nearby Share, but openly hinted at much greater things to come in the future. iOS 13 code revealed that the company was also working on standalone U1 location trackers called Apple Tags or AirTags, and the plan was apparently to add near-field location services for the trackers into the Find My app. That hasn’t happened yet, but taking all the details into account, it’s clear that Apple was working on the exact same features as Samsung, but under different names, and with much greater initial scale due to Apple’s larger collection of supported devices.

What to expect

Unlike Apple, which hasn’t yet shown its full UWB hand, Samsung offered a short list of future applications for the technology, including:

  • A Digital Key solution that lets your phone unlock doors as you approach them, including a building’s front door
  • Accurately navigating large spaces, such as locating a car in a parking garage or finding a place to eat at the airport
  • Making secure remote payments
  • Locating missing remote controls

Samsung has also said that it’s planning to bring UWB “to everyone, not just a select few” thanks to open collaboration with over 45 organizations spanning multiple industries, ranging from automobile manufacturers and universities to enterprise and consumer technology companies. The promise of interoperability is a clear shot at Apple, which has thus far promised only to use UWB to help devices understand their “precise location relative to other nearby U1‑equipped Apple devices.”

Apple is likely headed in similar directions, though. Rumors have suggested that the next Apple TV remote will include a U1 so that you can locate it in a couch, and the company has confirmed that it’s collaborating on an automotive industry-wide standard for digital car unlocking that uses the U1 rather than NFC.

Another tantalizing prospect is that UWB becomes an enabler for mixed reality, helping devices bridge the gap between the physical and digital worlds. Thanks to UWB, users may finally realize the promise of high-precision indoor mapping based on Lidar and/or Wi-Fi, such that you might soon be able to explore a “digital twin” of a real space, remotely previewing the live locations of real objects within an office, store, or home, then finding them quickly when you arrive. Combined with 3- or 6-DoF orientation trackers, UWB could enable headphones to deliver high-precision spatial audio, such that a user could simply move and turn around in a room to experience a concert, movie, or game differently from the left, center, right, front, and back of a space. Feats like this are possible without UWB, but many companies have been waiting for precise spatial location and directional data to make them better.
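The headphone scenario above can be sketched as combining a source’s position with the listener’s tracked heading to derive simple stereo pan gains. This is a hypothetical toy — real spatial audio uses HRTF filtering rather than plain constant-power panning — but it illustrates why precise position and orientation data matter:

```python
import math

# Toy sketch: derive constant-power stereo pan gains from a sound source's
# position relative to a tracked listener. Real spatial audio pipelines use
# HRTFs; this only shows the geometry that UWB + orientation tracking feed.

def pan_gains(listener_xy, listener_yaw_rad, source_xy):
    """Return (left_gain, right_gain); yaw 0 means facing +y."""
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    azimuth = math.atan2(dx, dy) - listener_yaw_rad  # 0 = straight ahead
    pan = max(-1.0, min(1.0, math.sin(azimuth)))     # -1 left .. +1 right
    theta = (pan + 1.0) * math.pi / 4.0
    return math.cos(theta), math.sin(theta)

# Source directly to the listener's right: nearly all signal in the right ear.
print([round(g, 3) for g in pan_gains((0, 0), 0.0, (1, 0))])          # [0.0, 1.0]
# Turning 90 degrees to face the source centers the stereo image.
print([round(g, 3) for g in pan_gains((0, 0), math.pi / 2, (1, 0))])  # [0.707, 0.707]
```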

One key question — and potentially major limitation on how widespread the technology becomes — is how much UWB accessories will cost. Though some people expected Apple to release Tags for under $30, the component costs alone could be $10, and some rumors have suggested that Apple was planning a $50 price point for each sensor. That’s markedly higher than the $20-$30 Tile charges for trackers, and probably high enough to slow demand for UWB tags, even if millions of tracking devices are out there. Samsung has a larger challenge, as UWB hardware isn’t yet found in any of its more affordable phones, and there hasn’t been any hint yet of Samsung tags. History suggests that where Apple goes, Samsung follows, and vice versa.

So regardless of the platform you prefer, there should be some extremely interesting applications of UWB technology. We’ll have to see whether it actually turns out to be “the next big thing in wireless tech,” but the power to make it so rests firmly in the hands of Apple, Samsung, and relatively few other companies with the wherewithal to release a comprehensive suite of hardware, software, and services to support UWB’s unique capabilities.

This article by Jeremy Horwitz originally appeared in VentureBeat.

XRHealth Debuts At-Home VR Therapy App For ADHD

Attention deficit hyperactivity disorder (ADHD) affects millions of people — over 6 million children in the U.S. alone — and can range from a lack of control over impulsive behavior to an inability to sit still and pay attention. While prescription medications can help with focus, individuals may be able to learn concentration skills through personalized instruction, and XRHealth has developed a VR therapy app that can be used at home for that purpose.

Based on research into brain plasticity — the mind’s ability to restructure itself to overcome challenges — XRHealth’s app presents a visual, auditory, and physical experience that mimics real life, enabling users with ADHD to improve both cognitive abilities and motor skills. As an alternative to game-based ADHD treatments, the VR solution helps users improve their attentiveness, reduce impulsivity, and develop strategic life skills such as planning and executing daily tasks. At the same time, the system measures sustained focus in the presence of distractions.

The COVID-19 pandemic has reduced clinicians’ ability to offer supervised outpatient treatments for ADHD, a condition that affects some business executives, employees, and children who are now studying at home under their parents’ tutelage. Consequently, home therapeutic solutions that can be used without visiting a doctor’s office have become increasingly appealing, and the U.S. Food and Drug Administration (FDA) is permitting XRHealth to trial the ADHD app in patients’ homes during the pandemic.

As of now, the FDA will allow XRHealth’s app to be used in homes as an adjunct to other treatments, including medication and/or in-person therapy. But there are caveats: The app isn’t intended to replace other treatments, and will require an XRHealth-approved clinician’s guidance. Adults and kids 12 and over will be able to use the app right away; kids aged 8 and up will be allowed to participate starting in November.

Critically, clinicians will be able to use the app to gather eye-tracking data, gaining “unbiased, objective, and quantifiable” information about what the wearer is looking at during each VR session, with the ability to adjust task difficulty to motivate users to continue. Doctors can also see how each user is performing generally on tasks and track session-to-session improvements over time.

Though ADHD is often thought of as a condition affecting children, with serious negative impacts on early reading, writing, math, and scholastic/social interactions, it can persist into adulthood, stifling higher education and work. Some adults with undiagnosed or untreated ADHD are unable to concentrate on tasks for more than brief periods of time before shifting to other subjects, sometimes abandoning tasks without completing them. ADHD medications tend to work for several-hour stretches, but they aren’t an ideal long-term solution.

VR has been gaining steam as a treatment for various medical challenges, ranging from mid-procedure surgical pain to social anxiety, loneliness, and isolation. It’s also being used to train doctors and coronavirus frontline workers, as clinicians have seen quantifiable improvements in everything from information retention to user relaxation when consuming VR content.

Users interested in the ADHD therapy app can apply for their insurance to cover XRHealth’s VR Telehealth Kit, which will include Pico’s Neo 2 headset, notably locked down in kiosk mode for use with the app. In the future, the app could expand to additional hardware, though that’s not a certainty at this point. XRHealth has offered Oculus Go and Quest-compatible solutions since its earlier days as VRHealth, as well as supporting HTC headsets, and was previously announced as a medical AR partner for Magic Leap.

This post by Jeremy Horwitz originally appeared on VentureBeat.

Editorial: Oculus Quest 2 Is Putting The Rift S Out Of Its Misery

It’s no coincidence that Facebook is killing the Oculus Rift S VR headset on the same day it’s announcing the Oculus Quest 2 at Facebook Connect. On paper, the Go, Quest, and Rift families were supposed to be different devices for different markets, but they’ve coalesced over a much shorter period of time than people expected. Just as that meant the end for Go earlier this year, the death of Rift was a long-premeditated killing, arguably more a matter of when than whether.

The VR market was different when Facebook segmented Oculus products into three families. Go was a cheap 3DoF media viewer with minimal gaming potential, designed to appeal to people who didn’t want to spend more than $200 on a standalone VR headset — Walmart was a big Go customer. Quest was a $400 6DoF standalone alternative that was better than Go in every way, and highly capable of gaming, but not equivalent to a VR-ready PC. Lastly, Rift S was there as a $400 headset solely for PC VR purposes, with marginally better performance than Quest when connected to a Windows machine, but no ability to be used on its own.

Over the past year, Facebook has worked aggressively to make Quest a viable replacement for the Go and Rift. To win over Rift users, Oculus Link turned Quest into a Rift alternative that worked nearly as well for tethered PC VR. As a nod to Go users, the Quest 2 drops from $400 down to $300, closer to Go’s original $200 price point, and reaching the “magic” price point that typically leads to hockey stick growth for compelling products.

Make no mistake: Quest 2 is compelling. Thanks to its Snapdragon XR2 chipset and massive display upgrades, Quest 2 will be a better standalone VR headset and a better PC VR headset than its already capable predecessor, at a lower price than any Rift. Its inside-out tracking and screen resolution should run circles around HTC’s competing Vive Cosmos, and that’s before taking into account Quest 2’s convenience, size, and weight. Assuming Facebook can get enough units into stores — and that people don’t object to the latest Oculus/Facebook account policies — there’s every reason to believe this new model will be a smash hit.

It’s hard to picture where Rift S would have fit into the Oculus lineup after Quest 2 showed up. Rift S wasn’t so much a step forward as a step sideways when it was announced, focusing on improved comfort and convenience rather than major visual or other spec improvements. In retrospect, Facebook set the stage for Rift to disappear at this point. The company made clear that while it was working on next-generation Rift-ready innovations, it didn’t have any immediate plans to commercialize them, and planned to test them in its own offices — perhaps with enterprise applications — ahead of any general release. For consumers, the message was not to expect Rift 2 anytime soon.

Facebook has declared that standalone VR, not PC-tethered VR, is the future of virtual reality technology. While killing Rift S and offering Quest 2 at an aggressive price suggests that Oculus is already betting everything on standalone VR, the reality is that the Quest family is capable of covering both the standalone and tethered bases — at a better price than Rift S, besides. That means there’s no need for PC VR fans to abandon their software libraries or give up on tethered experiences, though my best guess is that Facebook will spend the next two years making standalone experiences as appealing as possible.

Whether Quest 2 will be enough to fully displace HTC’s Vive, Valve’s Index, and other vendors’ headsets remains to be seen, but this was the right choice for Rift S, which didn’t have a viable future given its features, specs, and price point. At some point, there may be an Oculus headset with higher-end innovations that millions of people would actually pay for, but for now, focusing on improving VR’s appeal to the masses is exactly the right move.

This post by Jeremy Horwitz originally appeared in VentureBeat.

XRSI Releases 45-Page VR/AR Privacy Framework Due To Urgent Industry Need

Virtual and augmented reality technologies have continued to improve at a brisk pace, with Facebook’s Oculus Quest VR headset and Nreal’s Light AR glasses setting new standards for mobility and comfort. But as the hardware and software evolve, concern over their user privacy implications is also growing. The nonprofit XR Safety Initiative has released its own solution — the XRSI Privacy Framework — as a “baseline ruleset” to create accountability and trust for extended reality solutions while enhancing data privacy for users.

The XRSI Privacy Framework is urgently needed, the organization suggests, as “individuals and organizations are currently not fully aware of the irreversible and unintended consequences of XR on the digital and physical world.” From headsets to other wearables and related sensors, XR technologies are now capable of gathering untold quantities of user biometric data, potentially including everything from a person’s location and skin color to their eye and hand positions at any given split second. But comprehensive regulations are not in place to protect XR users. The National Institute of Standards and Technology has offered basic guidance, while regional laws such as GDPR, COPPA, and FERPA govern some forms of data in specific locations. But XRSI’s document ties them all together and goes further.

Developed and vetted by a group of academics, attorneys, XR industry executives, engineers, and writers, the Framework is a 45-page document with around 25 pages of regulatory and guideline meat that will be of more interest to lawyers and corporate privacy officers than end users. Broadly speaking, the Framework pushes companies such as Facebook to develop and use immersive technologies responsibly, rather than creating tools to harvest as much information from individuals as possible. It uses the aggregated threat of legal consequences to encourage companies to behave appropriately of their own accord, and is designed to get XR stakeholders to think before acting, rather than holding to the classic “move fast and break things” mantra.

From a user perspective, the XRSI aims to deliver transparent, easy-to-understand solutions that are inclusive while protecting individual privacy by design and default, including modern understandings of identity and respect for the user’s individual characteristics and preferences. It’s also timely: As schooling from home gains traction and XR potentially plays a larger role in remote education, the Framework canvasses existing laws protecting both children under 13 and older students against discrimination and inappropriate record keeping, helping XR companies understand their existing and future legal obligations in the scholastic arena.

The XRSI is working with liaison organizations — including Open AR Cloud, the University of Michigan, and the Georgia Institute of Technology — to further develop the Framework beyond its current “version 1.0” status and get it adopted and enforced. While the group credits individual experts from organizations like HERE and Niantic with helping to craft the document, it’s unclear at this stage whether XR platform developers such as Facebook, HTC, or Valve will support the initiative.

This post by Jeremy Horwitz originally appeared on VentureBeat.

Apple’s Investment In Lidar Could Be Big For AR

While many of Apple’s investments in innovative technologies pay off, some just don’t: Think back to the “tremendous amount” of money and engineering time it spent on force-sensitive screens, which are now in the process of disappearing from Apple Watches and iPhones, or its work on Siri, which still feels like it’s in beta nine years after it was first integrated into iOS. In some cases, Apple’s backing is enough to take a new technology into the mainstream; in others, Apple gets a feature into a lot of devices only for the innovation to go nowhere.

Lidar has the potential to be Apple’s next “here today, gone tomorrow” technology. The laser-based depth scanner was the marquee addition to the 2020 iPad Pro that debuted this March, and has been rumored for nearly two years as a 2020 iPhone feature. Recently leaked rear glass panes for the iPhone 12 Pro and Max suggest that lidar scanners will appear in both phones, though they’re unlikely to be in the non-Pro versions of the iPhone 12. Moreover, they may be the only major changes to the new iPhones’ rear camera arrays this year.

If you don’t fully understand lidar, you’re not alone. Think of it as an extra camera that rapidly captures a room’s depth data rather than creating traditional photos or videos. To users, visualizations of lidar look like black-and-white point clouds focused on the edges of objects, but when devices gather lidar data, they know relative depth locations for the individual points and can use that depth information to improve augmented reality, traditional photography, and various computer vision tasks. Unlike a flat photo, a depth scan offers a finely detailed differentiation of what’s close, mid range, and far away.

Six months after lidar arrived in the iPad Pro, the hardware’s potential hasn’t been matched by Apple software. Rather than releasing a new user-facing app to show off the feature or conspicuously augmenting the iPad’s popular Camera app with depth-sensing tricks, Apple pitched lidar to developers as a way to instantly improve their existing AR software — often without the need for extra coding. Room-scanning and depth features previously implemented in apps would just work faster and more accurately than before. As just one example, AR content composited on real-world camera video could automatically hide partially behind depth-sensed objects, a feature known as occlusion.
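Conceptually, occlusion is a per-pixel depth comparison between the rendered virtual content and the lidar-sensed scene. A hypothetical minimal sketch (ARKit performs this internally; the function and data here are invented for illustration):

```python
# Hypothetical sketch of depth-based occlusion: a virtual object's pixel is
# drawn only where it sits nearer to the camera than the lidar-sensed
# surface at that pixel. None means no virtual content covers that pixel.

def composite_visibility(virtual_depth_m, lidar_depth_m):
    """Per-pixel: True where virtual content should be visible."""
    return [
        v is not None and v < s
        for v, s in zip(virtual_depth_m, lidar_depth_m)
    ]

# A virtual chair at 2 m: visible against a 3 m wall, hidden behind a
# 1.5 m table, and absent where nothing virtual was rendered.
print(composite_visibility([2.0, 2.0, None], [3.0, 1.5, 3.0]))
# → [True, False, False]
```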

In short, adding lidar to the iPad Pro made a narrow category of apps a little better on a narrow slice of Apple devices. From a user’s perspective, the best Apple-provided examples of the technology’s potential were hidden in the Apple Store app, which can display 3D models of certain devices (Mac Pro, yes; iMac, no) in AR, and iPadOS’ obscure “Measure” app, which previously did a mediocre job of guesstimating real-world object lengths, but did a better job after adding lidar. It’s worth underscoring that those aren’t objectively good examples, and no one in their right mind — except an AR developer — would buy a device solely to gain such marginal AR performance improvements.

Whether lidar will make a bigger impact on iPhones remains to be seen. If it’s truly a Pro-exclusive feature this year, not only will fewer people have access to it, but developers will have less incentive to develop lidar-dependent features. Even if Apple sells tens of millions of iPhone 12 Pro devices, the lineup will almost certainly follow the pattern of the iPhone 11, which reportedly outsold its more expensive Pro brethren across the world. Consequently, lidar would be a comparatively niche feature, rather than a baseline expectation for all iPhone 12 series users.


Above: Portrait Mode lets you adjust background blur (bokeh) from f/1.4 to f/16 after taking a photo.

Image Credit: Jeremy Horwitz/VentureBeat

That said, if Apple uses the lidar hardware properly in the iPhones, it could become a bigger deal and differentiator going forward. Industry scuttlebutt suggests that Apple will use lidar to improve the Pro cameras’ autofocus features and depth-based processing effects, such as Portrait Mode, which artificially blurs photo backgrounds to create a DSLR-like “bokeh” effect. Since lidar’s invisible lasers work in pitch black rooms — and quickly — they could serve as a better low-light autofocus system than current techniques that rely on minute differences measured by an optical camera sensor. Faux bokeh and other visual effects could and likely will be applicable to video recordings, as well. Developers such as Niantic could also use the hardware to improve Pokémon Go for a subset of iPhones, and given the massive size of its user base, that could be a win for AR gamers.

Apple won’t be the first company to offer a rear depth sensor in a phone. Samsung introduced a similar technology in the Galaxy S10 series last year, adding it to subsequent Note 10 and S20 models, but a lack of killer apps and performance issues reportedly led the company to drop the feature from the Note 20 and next year’s S series. While Samsung is apparently redesigning its depth sensor to better rival the Sony-developed Lidar Scanner Apple uses in its devices, finding killer apps for the technology may remain challenging.

Though consumer and developer interest in depth sensing technologies may have (temporarily) plateaued, there’s been no shortage of demand for higher-resolution smartphone cameras. Virtually every Android phone maker leaped forward in sensor technology this year, such that even midrange phones now commonly include at least one camera with 4 to 10 times the resolution of Apple’s iPhone sensors. Lidar alone won’t help Apple bridge that resolution gap, but it may bolster the company’s prior claims that it does more with fewer pixels.

Ultimately, the problems with Apple-owned innovations such as 3D Touch, Force Touch, and Siri haven’t come down to whether the technologies are inherently good or bad, but whether they’ve been widely adopted by developers and users. As augmented reality hardware continues to advance — and demand fast, room-scale depth scanning for everything from object placement to gesture control tracking — there’s every reason to believe that lidar is going to be either a fundamental technology or a preferred solution. But Apple is going to need to make a better case for lidar in the iPhone than it has on the iPad, and soon, lest the technology wind up forgotten and abandoned rather than core to the next generation of mobile computing.

This post by Jeremy Horwitz originally appeared on VentureBeat.

Editorial: Spaces Is Only A Small Part Of Apple’s Enormous AR/VR Puzzle

“Apple buys smaller technology companies from time to time,” the company’s official acquisition confirmation statement says, “and we generally do not discuss our purpose or plans.” If you follow Apple, you’ve seen these words multiple times before — pretty much any time it buys a company that doesn’t have an existing customer base to reassure. That now happens once or twice a month, often remaining under the radar until someone stumbles across it or gets tipped off to an ambiguous “headed in a new direction” final post on a company’s website.

Yesterday, Apple offered that vague confirmation for a VR startup called Spaces. We covered the company’s launch in 2016, funding in 2017, opening of a location-based VR center in 2018, and pandemic pivot to VR Zoom meetings earlier this year. Along the way, Spaces’ most noteworthy offering was arguably a four-person, room-scale VR experience based on the film Terminator: Salvation, which debuted in Orange County, California, before rolling out with Sega in Shibuya, Japan. But Apple’s interest is likely something different.

A demonstration of Spaces’ latest tech shows a cartoony teacher offering whiteboard presentations with accompanying lip and body synchronization — a gentle evolution of existing VR avatar technology. You could easily imagine the 3D model replaced with one of Apple’s current Memoji avatars, enabling an iPad- or iPhone-toting teacher to offer a presentation to a virtual class over Zoom. That’s basically the VR video conferencing solution Spaces was offering prior to the acquisition, minus the Apple elements, and the platform-agnostic company promised compatibility with practically every major VR headset and video sharing app around.

I’m not going to tell you Apple’s VR and AR acquisitions have become too numerous to count, but the picture they paint is anything but narrowly focused. Every little company Apple buys feels like another tile in a massive mosaic, contributing its own color and texture to a picture that’s bigger than many people realize. If it seemed Apple was just making AR glasses two years ago, the company now appears to be developing both AR and VR hardware — including key components such as displays. Similarly, if you thought Apple’s AR ambitions were mostly about hardware, nope, it’s filing software patents and buying a lot of software companies. And services companies.

This won’t surprise anyone who knows that Apple’s core strength is its ability to integrate hardware, software, and services. But it does mean the company’s interest in mixed reality goes far beyond dropping a pair of glasses in the marketplace and seeing how they perform on their own. Apple is building the initial suite of AR/VR applications that will enable the hardware to succeed in its first or second generation, perhaps before there’s a robust “Reality App Store” with third-party apps. Like the classic iPhone apps Mail, Messages, and Safari, Spaces could be the key to Apple’s “Keynote VR” — or its development team may help with collaborative multi-person experiences in rooms, building on lessons learned from the Terminator offering.

Compare Apple’s approach to what we’ve seen with a couple of consumer VR and AR companies, Oculus and Nreal. Both announced hardware and largely let third-party developers loose to create cool games or useful apps that use their technology. Yet both companies (and other XR hardware makers) have realized that they, too, need to develop compelling apps to move their platforms forward. Some of the biggest current and upcoming Oculus titles have been either backed or developed by Facebook. Nreal is similarly collaborating with mobile operators to create game-changing AR apps. Neither waited until software and services were mature to launch hardware, decisions that (thankfully) gave early adopters tastes of our mixed reality future.

It’s been a comparatively long walk toward Apple’s product, with small public steps forward, odd leaks, contradictory reports, and the occasional bad decision. On their own, many of these moves don’t add obvious value to the Apple we know today. But collectively, they’re either going to come together for a massive iPhone-caliber launch or show up as an ever-growing collection of small developments, like the Apple Watch. The reported 2022 release might be getting closer every day, but if these acquisitions keep piling up, expectations for what’s about to arrive should be sky-high ahead of the reveal. Given Apple’s history with the iPhone and the Apple Watch, I don’t question whether the finished offering will have a huge impact, but rather how quickly the world will change as a result.

This post by Jeremy Horwitz originally appeared on VentureBeat.

Hour One Wants Synthetic AI Characters To Be Your Digital Avatars

If you ever wondered how we’ll populate the metaverse, look no further than Hour One, an Israeli startup that is making replicas of people with AI avatars. These avatars can be a near-perfect visual likeness of you and speak with words fed to them by marketers who want to sell you something. An avatar can speak on your behalf in a digital broadcast when you’re at home watching TV.

Such creations feel like a necessary prerequisite of the metaverse, the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One. And the trick: You’ll never know if you’re talking to a real person or one of Hour One’s synthetic people.

“There is definitely interest in the metaverse and we are doing experiments in the gaming space and with photorealism,” Hour One business strategy lead Natalie Monbiot said in an interview with VentureBeat. “The thing that has fired up the team is this vision of a world which is increasingly virtual and a belief that we will live increasingly virtually.”

She added, “We already have different versions of ourselves that appear in social media and different social channels. We represent ourselves already in this kind of digital realm. And we believe that our virtual selves will become even more independent. And we can put them to work for us. We can benefit from this as a human race. And you know, that old saying we can’t be in two places at once? Well, we believe that that will no longer be true.”

The race for virtual beings

Hour One is one more example of the fledgling market for virtual beings. Startups focused on virtual beings have raised more than $320 million to date, according to Edward Saatchi of Fable Studios, speaking at July’s Virtual Beings Summit.

But we’re a little ahead of ourselves. Metaverse plays are becoming increasingly common as we all realize that there has to be something better than Zoom calls to engage in a digital way. So the Tel Aviv, Israel-based company said it raised $5 million in seed funding this week from Galaxy Interactive via its Galaxy EOS VC Fund, as well as Remagine Ventures, Kindred Ventures, and Amaranthine. It will use that money to scale its AI-driven cloud platform and create thousands of new digital characters.

You’ve heard of stock photos. Hour One is talking about something similar: stock humans. They can be used to speak any kind of script in a marketing video or deliver a highly customized message to someone. The goal is to create characters that cross the “uncanny valley.”

“I think that we’ve crossed the uncanny valley because we have our likeness test, and our videos are actually live and in market and generating results for customers,” Monbiot said. “I think that’s something that’s really distinctive about us, even though we’re such a young company, we’ve had very positive commercial traction already.”

Above: Who’s real and who’s not?

Image Credit: Hour One

“We create synthetic characters based on real people,” Monbiot said. “We do so for commercials. We take real people and we have this really simple process for converting real people into synthetic characters that resemble them exactly. And once we have the synthetic characters, we can program them to generate all kinds of new content at enormous speed and scale.”

The race for the metaverse

The competition in this space will be tough. GamesBeat will hold its own conference, tentatively scheduled for January 26 to January 27, 2021, on topics including the metaverse, and we expect it to be full of interesting companies.

A Samsung spinoff, Neon, drew a lot of attention for its human AI avatars at CES 2020 in January, then promptly drew a lot of bad press when those avatars didn’t look as real as expected. Hour One also began coming out of stealth at the same time, with a plan to expand business-to-business human communication. The company showcased its “real or synthetic” likeness test at CES 2020, challenging people to distinguish between real and synthetic characters generated by its AI.

Hour One uses deep learning and generative adversarial networks (GANs) to create its video characters, and the company says it can do so in a highly scalable, cost-effective way. The results are supposed to look convincing, and the image at the top of this story does look realistic.

But the cost of missing the mark is high. Hour One will have to beat Neon in the race across the uncanny valley. And Genies is coming from another direction, with cartoon-based avatars that represent digital versions of celebrities.

Above: Hour One’s real Natalie Monbiot

Image Credit: Hour One

Hour One is working with companies in the ecommerce, education, automotive, communication, and enterprise sectors, with expanded industry applications expected throughout 2020. The company has about 100 avatars today.

The pitch is that a lower cost per character means companies will be able to engage more with their customers at every level, from digital receptionists to friendly salespeople.

“These customers can create thousands of videos simply by submitting text to these characters,” Monbiot said. “It appears as though real people are actually saying those words, but we’re using AI to make it happen. We’re improving communication. We’re obviously living in an ever-more virtual existence. And we’re enabling businesses of all kinds to engage in a more human way.”

And if your avatar is speaking on your behalf somewhere and generating value, you’ll get paid for it, Monbiot said, even if you’re not there. “We have a very bright view of the future. If your avatar speaks, you can get paid for that,” Monbiot said. “So we’re at the beginning of a new future. And for us, that’s a future in which everybody will have a synthetic character. We will have virtual versions of ourselves.”

Sam Englebardt, managing director of Galaxy Interactive (and a speaker on the subject of the metaverse at our GamesBeat Summit event), calls the approach an ethical one. “Hour One is a business-to-business provider of the best synthetic video tech I’ve seen to date,” Englebardt said in an email to GamesBeat.

Oren Aharon and Lior Hakim founded Hour One in 2019 with a mission of powering a digital workforce of synthetic characters based on real people. The company can use blockchain technology to verify a digital character’s identity and ownership. If a character is altered or used for “deepfakes,” Hour One will be able to mark it as altered and notify people of what happened. The team has eight people.
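The tamper-detection idea behind that kind of verification can be sketched in a few lines of Python. Everything below is a hypothetical illustration, not Hour One’s actual scheme: a keyed fingerprint of the original clip, recorded at creation time (for example, on a ledger), lets anyone later check whether the bytes have been altered.

```python
import hashlib
import hmac

# Illustrative stand-in for a creator's private signing key.
SECRET_KEY = b"creator-signing-key"

def register_clip(video_bytes: bytes) -> str:
    """Record a tamper-evident fingerprint of the original clip."""
    return hmac.new(SECRET_KEY, video_bytes, hashlib.sha256).hexdigest()

def verify_clip(video_bytes: bytes, recorded: str) -> bool:
    """True only if the clip matches the fingerprint registered at creation."""
    candidate = hmac.new(SECRET_KEY, video_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(candidate, recorded)

original = b"frame-data-of-the-original-render"
fingerprint = register_clip(original)
assert verify_clip(original, fingerprint)             # untouched clip passes
assert not verify_clip(original + b"x", fingerprint)  # any edit is flagged
```

A real deployment would sign with asymmetric keys and anchor the fingerprint on a public chain, but the core property is the same: any single-byte change to the video produces a mismatched digest.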

This post by Dean Takahashi originally appeared on VentureBeat. 

The post Hour One Wants Synthetic AI Characters To Be Your Digital Avatars appeared first on UploadVR.

Google Sheets And Excel VR Spreadsheets Are A Thing Now

If your dreams occasionally involve spreadsheets that extend endlessly in all directions, good news: Researchers have developed a virtual reality spreadsheet interface that could expand Google Sheets and Microsoft Excel files from flat screens into 3D spaces. Rather than being nightmare-inducing, it could actually make spreadsheet apps more usable than before.

While traditional spreadsheets have been limited by the boundaries of 2D windows and displays, the research team envisions VR opening up adjacent 2D workspaces for related content, then using 3D for everything from floating menus to cell selection and repositioning. In one example, a VR headset mirrors the view of one spreadsheet page displayed on a physical tablet, while two virtual sheets sit to the left and right, permitting drag-and-drop access to their cells, as an overview hovers above.

Alternatively, tablet-surrounding areas could display useful reference materials, expanded views of formulas, or the full collection of a spreadsheet’s pages displayed as floating previews. Another possibility is a single spreadsheet page that stretches far further than the 30-degree diagonal field of view occupied by a typical tablet on a desk, utilizing more of the ~110-degree fields of view supported by VR headsets.
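As a rough sanity check of those field-of-view numbers, the angle a tablet subtends follows from simple trigonometry. The viewing distance below is an assumption (actual desk distances vary by user); the Surface Pro 4’s 12.3-inch diagonal is real.

```python
import math

def apparent_angle_deg(size, distance):
    """Angle (degrees) subtended by an object of `size` viewed from `distance`."""
    return math.degrees(2 * math.atan((size / 2) / distance))

# Surface Pro 4 diagonal: 12.3 inches. ~21.7 inches (~55 cm) is an assumed,
# typical desk viewing distance, not a measured figure.
tablet_deg = apparent_angle_deg(12.3, 21.7)  # works out to roughly 30 degrees
headset_deg = 110  # approximate horizontal FOV of current VR headsets
print(f"tablet: {tablet_deg:.0f} deg, headset: {headset_deg} deg")
```

In other words, a headset offers several times the angular workspace of a desk-bound tablet, which is the headroom the researchers use for the surrounding sheets and previews.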

Fans of the film Minority Report and portrayals of similarly holographic future 3D interfaces will appreciate the team’s use of a floating pie menu — complete with a drop shadow on the spreadsheet — for selecting features and functions, as well as spherical rather than flat buttons and other visual elements that appear to leap off the flat pages. The 3D spreadsheet workspace could also be extended with floating desktop objects, such as a virtual trash can, to make disposing of unwanted content more intuitive.

Interestingly, the project’s research team includes members of the Mixed Reality Lab at Germany’s Coburg University, as well as a professor from the University of Cambridge and two principal researchers from Microsoft — mixed reality expert Eyal Ofek and UX engineer Michel Pahud. But their work isn’t limited to potential Microsoft applications: Dr. Jens Grubert, one of the paper’s authors, tells VentureBeat that the cross-organizational team includes “long time collaborators” and actually used Google Sheets rather than Excel for the backend.

In addition to an HTC Vive Pro VR headset and Microsoft Surface Pro 4 tablet, the researchers employed a spatially tracked stylus, enabling precision direct input for spreadsheet interactions while adding the freedom of in-air movement within a 3D space. Unsurprisingly, the VR spreadsheets can be used on existing commodity PC hardware, and the virtual UI was created with the Unity engine. Sheets pages are rendered within Chromium browser windows to match the resolution and size of the Surface Pro 4’s screen.

Full project details are available in the “Pen-based Interaction with Spreadsheets in Mobile Virtual Reality” research paper. If you’re interested in deeper dives, you can see a video of the project here, as well as a broader exploration of the team’s VR-tablet research here, ahead of their presentation at the IEEE’s International Symposium on Mixed and Augmented Reality, being held online from November 9 to 13.

This post by Jeremy Horwitz originally appeared on VentureBeat.

The post Google Sheets And Excel VR Spreadsheets Are A Thing Now appeared first on UploadVR.