AR Content is Coming to Google Maps, But It Won’t Matter Until There’s a Headset to See it Through

Google today announced it’s starting a pilot program that will soon allow select partners to create AR content and display it within Google Maps. While it seems like an important step for Google on the way to owning a piece of the ‘all-day AR glasses’ future, it’s unclear just where it’s all headed for the company in the near term, because compared to Meta and Apple, Google still seems unable to commit to a coherent XR strategy.

Starting in Singapore and Paris later this year, Google is essentially elevating content built in its Geospatial Creator platform to the world stage: select partners will soon be able to publish AR content tied to physical landmarks in Google Maps, viewable through both Lens and Street View.

The hope, it seems, is to get mobile users engaging with AR content by searching for a location in Google Maps and holding their phones up at landmarks, shops, and the like. Some of the examples seen in the video below include cultural and historical pieces, but also virtual billboards for private businesses, presenting something of a low-poly Blade Runner vibe.

It’s a pretty neat showcase for tourist boards to get behind, and a cool Easter egg for Google Maps users too, but it’s difficult to imagine it will ever be more than that, at least on mobile devices.

While we use our phones for everything, mobile AR applications are neither as immersive as the promo video suggests, nor yet additive enough to really engage with for any meaningful amount of time before the glass rectangle goes back in your pocket or bag. That’s why so many companies are pinning their hopes on functional AR glasses for all-day use; they would remove that frictional boundary and put the AR layer much closer to the forefront for both users and the advertisers trying to reach them.

And as you’d imagine, there was little in the way of XR at Google’s I/O developer conference this year—expected, unfortunately, after the company canned its Project Iris AR glasses last summer, a period that also saw the resignations of top leadership, including AR & VR chief Clay Bavor and head of XR operating systems Mark Lucovsky.

At the time, Lucovsky maintained in an X post his departure was heavily influenced by “changes in AR leadership and Google’s unstable commitment and vision.”

That’s not to say Google isn’t doing XR stuff, but it all still feels like the company’s usual brand of scattershot Darwinism. We heard about more incremental updates to ARCore, its developer platform for building AR experiences, which was initially released in 2017. We heard about how its light field video chatting tech, Project Starline, will soon become an actual product.

We also got a quick glimpse of a very Project Iris-style device in a video (seen below), which the company simply calls “a prototype glasses device.”

The demo was more about highlighting the company’s research into computer vision and AI assistants with Project Astra, though; there’s no word on what those glasses are beyond that description. Given what we saw, the device appears to be more like a pair of Google Glass-style smartglasses than AR glasses as such. Learn more about the difference here.

The short of it: smartglasses can do things like feed you AI assistant responses, play music, and show you static information, i.e. not spatial content like 3D models that blend naturally into the physical landscape. That would require significantly more compute, battery, and more powerful optics than those prototype glasses could hope to provide, which means no interactive maps or a more immersive version of Pokémon Go either.

Most of all, we’re still waiting to hear about the Samsung and Google partnership that might bring a Vision Pro competitor to market. Whatever form that device takes, it will be Google’s next big stab at launching an Android-based XR operating system following its now-defunct Daydream platform.


Google Trials ‘Starline’ Glasses-Free Light Field Display

Google’s research into light fields is bearing fruit with a glasses-free 3D display technology called “Project Starline” available at a few of its offices.

Google revealed the work as part of its annual developer conference this week. It is pitched as working like a “magic window” and relies on “custom-built hardware and highly specialized equipment” with advances in real-time compression, spatial audio, computer vision, and machine learning to provide a sense of being face to face with someone no matter the physical distance.

The image below posted by Google’s Vice President of AR and VR Clay Bavor offers a look at the substantial footprint for the system while it is used in one of Google’s offices.


Google also posted a video showcasing the technology used for some person-to-person interactions said to provide “a sense of volume and depth that can be experienced without the need for additional glasses or headsets.” The company says it is planning to trial deployments with enterprise partners later this year.

We tested some early-stage glasses-free light field display technology in 2018, and it was clear the approach needed years more development and enormous investment before its brightness and cost could be brought within reach of average consumers. In our 2018 demonstration from Light Field Lab, for instance, the 3D effect only worked if you kept your head in a very specific area relative to the display. Indeed, even as Google claims key breakthroughs in its efforts to prove its glasses-free 3D display technology is a direction “technology can and should go”, the company cautions that only “some of these technical advancements” are likely to make it into its communication products.

Still, we’d love to go eyes-on with Project Starline at some point for a better sense of its use cases and the investment Google will need to make to bring its advancements into wider use.

Google Launches Depth API for ARCore, Increasing Realism And Improving Occlusion

Google announced today that the Depth API is now available in ARCore 1.18 for Android and Unity. The Depth API is meant to improve occlusion and increase realism by enabling new interaction types.

The Depth API was first announced with a preview on the Google developers blog last year. The API lets a device estimate how near or far the objects in its camera view are. For AR, this significantly improves occlusion, which Google succinctly describes as “the ability for digital objects to accurately appear in front of or behind real world objects.”
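For developers, enabling all of this is a small configuration change plus a per-frame image read. The following Kotlin sketch is a minimal illustration rather than official sample code, and assumes a Session and Frame already set up as in Google’s ARCore samples; the 0x1FFF mask follows the Android DEPTH16 image format, whose low 13 bits encode distance in millimeters.

```kotlin
import java.nio.ByteOrder
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.Session

// Enable depth on the session, but only if this device supports it
// (the Depth API runs on a subset of ARCore-certified phones).
fun enableDepth(session: Session) {
    val config = session.config
    if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
        config.depthMode = Config.DepthMode.AUTOMATIC
    }
    session.configure(config)
}

// Read the estimated distance, in millimeters, at one pixel of the depth map.
fun depthMillimetersAt(frame: Frame, x: Int, y: Int): Int {
    frame.acquireDepthImage().use { depthImage ->
        val plane = depthImage.planes[0]
        val byteIndex = y * plane.rowStride + x * plane.pixelStride
        val buffer = plane.buffer.order(ByteOrder.LITTLE_ENDIAN)
        return buffer.getShort(byteIndex).toInt() and 0x1FFF // mm, per DEPTH16
    }
}
```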


The example embedded above shows the dancing hotdog filter on Snapchat being accurately occluded by a lounge as the camera moves down. According to Google, another case where the API is useful is Five Nights at Freddy’s AR: Special Delivery, where occlusion is vital to the experience: characters can accurately hide behind objects and then deliver a jump scare by moving out from behind the real-world object. Niantic has shown something similar with Pokémon Go in the past as well.

However, occlusion is not the only use for the Depth API — Google notes that developers have found many other uses as well, including implementing more realistic physics, better surface interactions, and environmental traversal. For example, the Google Creative Lab experiment ‘Lines of Play’ allows users to build AR domino arrangements that accurately collide with furniture and walls in the room when the dominoes are knocked over.

The Depth API will begin rolling out today. You can read more over on the Google developers blog.


Retro-inspired Adventure ‘Pixel Ripped 1995’ to Launch Spring 2020

Pixel Ripped 1995, the sequel to the nostalgia-soaked VR game Pixel Ripped 1989 (2018), is now slated to launch this spring.

Created by São Paulo-based studio ARVORE, Pixel Ripped 1995 jumps six years forward into the history of gaming, leaving behind the 8-bit handhelds of the late ’80s and dipping its toes into the era of 16-bit and 32-bit consoles—all of course following the same trippy game-within-a-game style that Pixel Ripped 1989 pioneered.

ARVORE says we should expect to find plenty of homages to ’90s games; according to the studio, Pixel Ripped 1995 focuses on the historical transition from 2D to 3D gaming, including action RPGs, brawlers, 2D and 3D platformers, space shooters, and racing games.

The game is said to include six levels, each of which should “feel like an entire new game,” according to the studio.

Pixel Ripped 1995 is slated to support Oculus Quest, Oculus Rift, PSVR, and SteamVR headsets when it launches this spring. If you’re planning to play on a SteamVR headset, you can add it to your Steam wishlist in the meantime.


Google’s ARCore Is Getting Full Occlusion For More Real AR

Google’s ARCore is getting a new feature which will enable full occlusion of virtual objects in real scenes.

ARCore is Google’s Android augmented reality runtime and SDK. It provides positional tracking, surface detection, and lighting estimation so developers can easily create AR apps for high end Android phones.
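As a rough sketch of what the SDK looks like in practice, here is the skeleton of an ARCore session in Kotlin; it’s illustrative rather than complete (camera permission checks, ARCore availability checks, and rendering are all omitted):

```kotlin
import android.app.Activity
import com.google.ar.core.Config
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Create a session and turn on the features mentioned above:
// lighting estimation and plane (surface) detection.
fun createArSession(activity: Activity): Session {
    val session = Session(activity) // assumes ARCore is installed and current
    val config = Config(session).apply {
        lightEstimationMode = Config.LightEstimationMode.AMBIENT_INTENSITY
        planeFindingMode = Config.PlaneFindingMode.HORIZONTAL_AND_VERTICAL
    }
    session.configure(config)
    return session
}

// Called once per rendered frame: advance tracking and read the device pose.
fun onDrawFrame(session: Session) {
    val frame = session.update() // updates ARCore's positional tracking
    val camera = frame.camera
    if (camera.trackingState == TrackingState.TRACKING) {
        val pose = camera.pose // 6DoF device pose in world space
        // ...position virtual content relative to this pose
    }
}
```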

https://www.youtube.com/watch?v=1q0-jdknbTs

But currently, virtual objects will always be shown in front, because ARCore has no good understanding of the depth of the real objects in the scene. Positional tracking works like it does on VR headsets: it tracks high-contrast features in the scene, not the entire scene.

The ARCore Depth API estimates the depth of everything the camera sees. This allows for occlusion: virtual objects will now appear behind real objects if the real object is closer to the camera.

Occlusion is arguably as important to AR as positional tracking is to VR. Without it, the AR view will often “break the illusion” through depth conflicts.
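Conceptually, the occlusion test is a per-pixel depth comparison: a fragment of a virtual object is drawn only if it’s nearer to the camera than the real surface the depth map reports at that pixel. In a shipping app this comparison runs in a shader, but a hedged Kotlin sketch of the core logic (hypothetical helpers, not ARCore API) looks like this:

```kotlin
// Hard test: the virtual fragment survives only if it sits in front of the
// real-world surface at the same screen position.
fun isVirtualFragmentVisible(realDepthMm: Int, virtualDepthMm: Int): Boolean =
    virtualDepthMm <= realDepthMm

// Soft variant: fade the fragment out across a small band around the boundary
// instead of hard-clipping it, which hides noise in the estimated depth map.
fun occlusionAlpha(realDepthMm: Int, virtualDepthMm: Int, featherMm: Int = 50): Float {
    val diff = realDepthMm - virtualDepthMm // positive = virtual is in front
    return (diff.toFloat() / featherMm).coerceIn(0f, 1f)
}
```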

Apple’s ARKit for iOS and iPadOS doesn’t have full depth occlusion yet, though on the most recent, most powerful devices it does now have occlusion for human bodies, including hands. ARCore and ARKit have stayed roughly equivalent until now, so it’s interesting to see them diverge onto different paths. This will make life harder for developers, but allows for innovation specific to each platform.

Real-time understanding of scene depth could also be hugely beneficial for virtual reality. Current VR headsets keep you aware of your real surroundings by having you draw out your playspace during setup; if you come close to this boundary, it is shown in the headset. This technology could make that boundary automatic and three-dimensional: walking too close to your couch could make the couch appear in the headset.


Facebook, which owns the Oculus brand, has shown off real time mobile depth mapping too, but hasn’t shipped it in a product yet.

ARCore’s Depth API isn’t publicly available and Google hasn’t given any release window, but developers can sign up for approval to try it out.


Google ARCore Depth API Now Available, Letting Devs Make AR More Realistic

ARCore, Google’s developer platform for building augmented reality experiences for mobile devices, just got an update that brings the company’s previously announced Depth API to Android and Unity developers. The Depth API not only lets mobile devices create depth maps using a single RGB camera, but also aims to make AR experiences more natural, as virtual imagery is placed more realistically in the world.

Update (June 25th, 2020): Google today announced it’s making its Depth API for ARCore available to developers. A few studios have already integrated the Depth API into their apps to create more convincing occlusion, such as Illumix’s Five Nights at Freddy’s AR: Special Delivery game, which lets enemies hide behind real-world objects for more startling jump scares.

ARCore 1.18 for Android and Unity, including AR Foundation, is rolling out to what Google calls “hundreds of millions of compatible Android devices,” although there’s no clear list of which devices are supported just yet.

Original Article (December 9th, 2019): Shahram Izadi, Director of Research and Engineering at Google, says in a blog post the new Depth API now enables occlusion for mobile AR applications, and also the chance of creating more realistic physics and surface interactions.

To demonstrate, Google created a number of demos to show off the full set of capabilities the new Depth API brings to ARCore. Keep an eye on the virtual objects as they’re accurately occluded by physical barriers.

“The ARCore Depth API allows developers to use our depth-from-motion algorithms to create a depth map using a single RGB camera,” Izadi says. “The depth map is created by taking multiple images from different angles and comparing them as you move your phone to estimate the distance to every pixel.”
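Google hasn’t published the algorithm’s internals, but the geometry the quote describes is classic multi-view triangulation. As a simplified, two-view illustration (not Google’s actual implementation), the depth of a point falls out of how far it shifts between two camera positions:

```kotlin
// Two-view triangulation, the simplified core of depth-from-motion: a point
// that shifts `disparityPx` pixels between two views whose cameras sit
// `baselineM` meters apart lies at depth z = f * B / d. Real depth-from-motion
// refines such estimates across many frames; these numbers are illustrative.
fun depthFromDisparity(focalLengthPx: Double, baselineM: Double, disparityPx: Double): Double {
    require(disparityPx > 0.0) { "The point must move between views to triangulate" }
    return focalLengthPx * baselineM / disparityPx
}

fun main() {
    // A camera with a 500 px focal length moved 10 cm sideways: a pixel that
    // shifted 25 px corresponds to a surface roughly 2 m away.
    println(depthFromDisparity(focalLengthPx = 500.0, baselineM = 0.1, disparityPx = 25.0)) // 2.0
}
```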

Full-fledged AR headsets typically use multiple depth sensors to create depth maps like this, which Google says was created on device with a single sensor. In the map Google shared, red indicates areas that are closer, while blue indicates areas that are farther away.


“One important application for depth is occlusion: the ability for digital objects to accurately appear in front of or behind real world objects,” Izadi explains. “Occlusion helps digital objects feel as if they are actually in your space by blending them with the scene. We will begin making occlusion available in Scene Viewer, the developer tool that powers AR in Search, to an initial set of over 200 million ARCore-enabled Android devices today.”

Additionally, Izadi says the Depth API doesn’t require specialized cameras and sensors, and that with the addition of time-of-flight (ToF) sensors to future mobile devices, ARCore’s depth mapping capabilities could eventually allow virtual objects to be occluded by moving, physical objects.

The new Depth API follows Google’s release of its ‘Environmental HDR’ tool at Google I/O in May, which brought more realistic lighting to AR objects and scenes, aiming to enhance immersion with more convincing reflections and shadows.

Update (12:10): In a previous version of this article, it was claimed that Google was releasing the Depth API today; however, the company is only now putting out a form for developers interested in using the tool. You can sign up here.


Google ARCore Update Brings Changes To ‘Visual Processing In The Cloud’

Google is updating its augmented reality Cloud Anchors system, which takes camera data from your phone, processes parts of it on Google’s servers, and produces a 3D map of the environment.

The technology allows for shared AR experiences where multiple camera-based gadgets can see the positions of one another. The change to the “Cloud Anchors API” is included in the latest version of Google’s augmented reality software ARCore, according to a Google blog post for developers published today.

“We’ve made some improvements to the Cloud Anchors API that make hosting and resolving anchors more efficient and robust. This is due to improved anchor creation and visual processing in the cloud. Now, when creating an anchor, more angles across larger areas in the scene can be captured for a more robust 3D feature map,” according to a post by Christina Tong, Product Manager, Augmented Reality at Google. “Once the map is created, the visual data used to create the map is deleted and only anchor IDs are shared with other devices to be resolved. Moreover, multiple anchors in the scene can now be resolved simultaneously, reducing the time needed to start a shared AR experience.”
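For context, the developer-facing shape of that workflow is a pair of Session calls. The Kotlin sketch below follows the Cloud Anchors API of this era (hostCloudAnchor / resolveCloudAnchor); how the anchor ID actually travels between users is left out, since that transport is up to each app:

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Config
import com.google.ar.core.Session

// Cloud Anchors must be enabled on the session before hosting or resolving.
fun enableCloudAnchors(session: Session) {
    val config = session.config
    config.cloudAnchorMode = Config.CloudAnchorMode.ENABLED
    session.configure(config)
}

// Host: upload visual data around a local anchor; once the returned anchor
// reaches SUCCESS, its cloudAnchorId can be shared with other users.
fun hostAnchor(session: Session, localAnchor: Anchor): Anchor =
    session.hostCloudAnchor(localAnchor)

// Resolve: recreate the anchor on another device from a shared ID.
fun resolveAnchor(session: Session, cloudAnchorId: String): Anchor =
    session.resolveCloudAnchor(cloudAnchorId)

// Poll each frame until the operation completes (a real app would also
// handle the ERROR_* states).
fun isReady(anchor: Anchor): Boolean =
    anchor.cloudAnchorState == Anchor.CloudAnchorState.SUCCESS
```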

I put a few pointed questions to Google representatives this morning for clarity on how exactly this functions. I asked for detail on what exactly “visual processing in the cloud” means and whether anything more than 3D pointcloud and location data is passed to Google servers. I also asked Google to specify how this API functioned differently in the past. Here’s the full response I received over email from a Google representative:

“When a Cloud Anchor is created, a user’s phone provides imagery from the rear-facing camera, along with data from the phone about movement through space. To recognize a Cloud Anchor, the phone provides imagery from the rear-facing camera,” according to Google. “Using the cloud (instead of the device) to do feature extraction allows us to reach a much higher bar of user experience across a wider variety of devices. By taking advantage of the computing power available in the cloud, we are able to extract feature points much more effectively. For example, we’re better able to recognize a Cloud Anchor even with environmental changes (lighting changes or objects moved around in the scene). All images are encrypted, automatically deleted, and are used only for powering shared or persistent AR experiences.”

For comparison, Apple is due to release its iOS 13 software soon and its iOS 12 documentation explains a method of producing a shared AR world map between local devices without sending data to a remote server.

Persistent Cloud Anchors

Google’s ARCore update also added “Augmented Faces” support for Apple devices, and the company says it is looking for developers to test “Persistent Cloud Anchors,” with a form to fill out expressing interest in “early access to ARCore’s newest updates.”

“We see this as enabling a ‘save button’ for AR, so that digital information overlaid on top of the real world can be experienced at anytime,” the Google blog post states. “Imagine working together on a redesign of your home throughout the year, leaving AR notes for your friends around an amusement park, or hiding AR objects at specific places around the world to be discovered by others.”


Google ARCore Update Brings More Robust Cloud Anchors for Improved Multiuser AR

ARCore, Google’s developer platform for building augmented reality experiences, is getting an update today that aims to make shared AR experiences quicker and more reliable. Google is also rolling out support for Augmented Faces, the company’s 3D face filter API, on iOS.

Introduced last year, Google’s Cloud Anchors API essentially lets developers create a shared, cross-platform AR experience for Android and iOS, and then host the so-called anchors through Google’s Cloud services. Users can then add virtual objects to a scene and share them with others, so everyone can view and interact with them simultaneously.

In today’s update, Google says it’s made improvements to the Cloud Anchors API that make hosting and resolving anchors more efficient and robust, something the company says is due to improved anchor creation and visual processing in the cloud.


Google AR team product manager Christina Tong says in a blog post that developers will now have access to more angles across larger areas in the scene, making for what she calls a “more robust 3D feature map.”

This, Tong explains, will allow for multiple anchors in the scene to be resolved simultaneously, which she says reduces the app’s startup time.

Tong says that once a map is created from your physical surroundings, the visual data used to create the map is deleted, leaving only anchor IDs to be shared with other devices.


In the future, Google is also looking to further develop Persistent Cloud Anchors, which would allow users to map and anchor content over both a larger area and an extended period of time, something Tong calls a “save button” for AR.

This prospective ‘AR save button’ would, according to Tong, be an important method of bridging the digital and physical worlds, as users may one day be able to leave anchors anywhere they need to, attaching things like notes, video links, and 3D objects.

Apps like Mark AR, a graffiti-art app developed by Sybo and iDreamSky, already use Persistent Cloud Anchors to link user-made creations to real-world locations.

If you’re a developer, check out Google’s guide to creating Cloud Anchor-enabled apps here.


Google Maps’ ‘Live View’ AR Feature Available in Beta, Makes Getting Lost Harder

Google may seem to be losing interest in its virtual reality (VR) ventures such as Daydream View, but on the augmented reality (AR) side the company is still pressing forward with gusto. Having released an AR feature for Google Maps earlier this year to Google Maps Local Guides and Google Pixel users, the company has today begun a wider rollout of Live View.


Currently still in beta, the feature is rolling out to compatible iOS and Android devices which support ARKit and ARCore respectively. While the launch happens today, you may not see the update on your device just yet, as it may take Google several days or weeks to reach your region.

The whole purpose of the AR option is to make navigation with the highly popular Google Maps even easier and more straightforward. All you need to do is tap on a location in the app, hit the Directions button, then select the Walking option. After that, you should find the option to ‘Live View’ those directions towards the bottom of the screen. With Live View enabled, you’ll see some gigantic, handy arrows appear in the middle of the street (or wherever you are) telling you the right direction to head.

Obviously, this sort of feature isn’t supposed to make you continually hold your phone up and look like a lost kitten. You can simply bring it up when required to let you know you’ve gone the wrong way, or are going the right way. It’s just one of a number of updates Google has added to the app, including being able to see all of your flight and hotel reservations in one place, or finding a nice restaurant and booking a reservation, all without leaving the app.


While AR might be seen as the little brother to VR, it’s often thought of as having the greatest potential in the long run. Apart from apps like Google Maps, much of the AR content consumers are coming across at the moment is videogames such as Harry Potter: Wizards Unite and Minecraft Earth. VRFocus will continue its coverage of AR, reporting back with the latest updates.

PuzzlAR: World Tour is the First ARCore Experience Live in China

Western companies generally struggle to break into the lucrative Chinese market for a number of reasons, which is why they’re inclined to find a home-grown partner to help facilitate the process. These first steps can be small and innocuous, such as Google’s ARCore arrival, helped by a videogame called PuzzlAR: World Tour.


A 3D jigsaw puzzle videogame, PuzzlAR: World Tour first arrived on iOS in 2017, followed by Android devices in 2018. It uses landmarks such as the Statue of Liberty or the Taj Mahal and creates digital puzzles out of them. As is commonplace with most augmented reality (AR) titles, you scan a flat surface to place the puzzle on, with all the pieces floating around you, needing to be grabbed and put in place.
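That scan-then-place flow is ARCore’s standard plane hit test. As a minimal Kotlin sketch (assuming a session that is already tracking, as in Google’s sample apps), placing content on a detected surface looks roughly like this:

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Frame
import com.google.ar.core.Plane

// On tap: cast a ray through the tapped pixel and anchor content to the
// first detected plane the ray hits within its detected extents.
fun placeOnSurface(frame: Frame, tapX: Float, tapY: Float): Anchor? {
    for (hit in frame.hitTest(tapX, tapY)) {
        val trackable = hit.trackable
        if (trackable is Plane && trackable.isPoseInPolygon(hit.hitPose)) {
            return hit.createAnchor() // content parented here stays fixed to the surface
        }
    }
    return null
}
```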

Developed by ONTOP Studios, its arrival in China is thanks to a collaboration with Chinese publisher NetEase (Nostos, Stay Silent), making the puzzle experience the very first ARCore compatible videogame available to the massive Chinese consumer market.

It’s not just mobile AR that ONTOP Studios has been interested in. PuzzlAR: World Tour also went live for the Magic Leap One headset last month, the first app supported by Magic Leap’s Independent Creator Program to do so. The program launched last year, seeing 31 companies chosen out of a pool of 6,500. Other successful applicants included Funktronic Labs (Starbear: Taxi, Cosmic Trip), Metanaut (Gadgeteer), Within, Felix & Paul Studios (Marshall from Detroit, Traveling While Black) and Resolution Games (Angry Birds VR: Isle of Pigs).

“When I first discovered Magic Leap, I immediately knew this was a company with the vision and means to create technology that can make the future come to life, and I knew entertainment will soon change forever,” said ONTOP Studios’ Creative Director Nuno Folhadela in a statement.

Mobile gaming is a massive market in China and one many AR developers like ONTOP Studios are keen to exploit, valued at $30.8 billion in 2018 and expected to rise to $41.5 billion in 2023, according to a report by Niko. Magic Leap doesn’t have quite the same mass-market appeal due to the cost of the device, but it’s encouraging more content on its platform, like the recently announced BBC Earth – Micro Kingdoms: Senses. As the AR market continues to develop, VRFocus will keep you updated.