Google ARCore Update Brings Changes To ‘Visual Processing In The Cloud’

Google is updating its augmented reality Cloud Anchors system, which takes camera data from your phone, processes parts of it on Google’s servers, and produces a 3D map of the environment.

The technology allows for shared AR experiences where multiple camera-based gadgets can see the positions of one another. The change to the “Cloud Anchors API” is included in the latest version of Google’s augmented reality software ARCore, according to a Google blog post for developers published today.

“We’ve made some improvements to the Cloud Anchors API that make hosting and resolving anchors more efficient and robust. This is due to improved anchor creation and visual processing in the cloud. Now, when creating an anchor, more angles across larger areas in the scene can be captured for a more robust 3D feature map,” according to a post by Christina Tong, Product Manager, Augmented Reality at Google. “Once the map is created, the visual data used to create the map is deleted and only anchor IDs are shared with other devices to be resolved. Moreover, multiple anchors in the scene can now be resolved simultaneously, reducing the time needed to start a shared AR experience.”

I put a few pointed questions to Google representatives this morning for clarity on how exactly this functions. I asked for detail on what exactly “visual processing in the cloud” means and whether anything more than 3D pointcloud and location data is passed to Google servers. I also asked Google to specify how this API functioned differently in the past. Here’s the full response I received over email from a Google representative:

“When a Cloud Anchor is created, a user’s phone provides imagery from the rear-facing camera, along with data from the phone about movement through space. To recognize a Cloud Anchor, the phone provides imagery from the rear-facing camera,” according to Google. “Using the cloud (instead of the device) to do feature extraction allows us to reach a much higher bar of user experience across a wider variety of devices. By taking advantage of the computing power available in the cloud, we are able to extract feature points much more effectively. For example, we’re better able to recognize a Cloud Anchor even with environmental changes (lighting changes or objects moved around in the scene). All images are encrypted, automatically deleted, and are used only for powering shared or persistent AR experiences.”
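For developers, the hosting and resolving flow Google describes maps onto the Cloud Anchors calls in the ARCore SDK for Android. The Java sketch below is a minimal illustration rather than a complete app: it assumes you already have a running ARCore `Session` and a locally created `Anchor`, and it leaves out rendering, permissions, and the networking layer an app would use to actually pass the cloud anchor ID between devices.

```java
import com.google.ar.core.Anchor;
import com.google.ar.core.Anchor.CloudAnchorState;
import com.google.ar.core.Config;
import com.google.ar.core.Session;

/** Minimal sketch of hosting and resolving a Cloud Anchor with ARCore. */
public class CloudAnchorSketch {
  private Anchor hostedAnchor;

  // Enable Cloud Anchors on an existing ARCore session.
  void enableCloudAnchors(Session session) {
    Config config = new Config(session);
    config.setCloudAnchorMode(Config.CloudAnchorMode.ENABLED);
    session.configure(config);
  }

  // Host a locally created anchor. ARCore uploads visual feature data to
  // Google's servers and returns a new anchor whose hosting state we poll.
  void host(Session session, Anchor localAnchor) {
    hostedAnchor = session.hostCloudAnchor(localAnchor);
  }

  // Call once per frame. When hosting succeeds, the cloud anchor ID is the
  // only piece of data the app needs to send to other devices (over the
  // app's own networking layer, which ARCore does not provide).
  String cloudAnchorIdIfReady() {
    if (hostedAnchor != null
        && hostedAnchor.getCloudAnchorState() == CloudAnchorState.SUCCESS) {
      return hostedAnchor.getCloudAnchorId();
    }
    return null; // Still hosting, or hosting failed.
  }

  // On a receiving device: resolve the shared ID back into an anchor whose
  // pose lines up with the original physical location.
  Anchor resolve(Session session, String cloudAnchorId) {
    return session.resolveCloudAnchor(cloudAnchorId);
  }
}
```

The anchor returned by the resolve call also starts out in an in-progress state and should be polled the same way until it reports success.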

For comparison, Apple is due to release its iOS 13 software soon and its iOS 12 documentation explains a method of producing a shared AR world map between local devices without sending data to a remote server.

Persistent Cloud Anchors

Google’s ARCore update also added “Augmented Faces” support for Apple devices, and the company says it is looking for developers to test “Persistent Cloud Anchors,” with a form to fill out expressing interest in “early access to ARCore’s newest updates.”

“We see this as enabling a ‘save button’ for AR, so that digital information overlaid on top of the real world can be experienced at anytime,” the Google blog post states. “Imagine working together on a redesign of your home throughout the year, leaving AR notes for your friends around an amusement park, or hiding AR objects at specific places around the world to be discovered by others.”


Want to Switch from Web Developer to VR Developer? Here’s How

Augmented reality (AR) and virtual reality (VR) technologies are gaining pace across a range of industries, and they need skilled developers to make the leap to the next level of adoption. That makes now a good time to learn how to create content for them.

From a developer’s perspective, the skills needed to get into AR and VR are largely the same, and the barrier to entry is low. Whether you are a novice who has only just started programming or a specialist with years of experience in the field, becoming a VR developer requires solid skills in 3D. Broadly speaking, it resembles 3D game development, because VR is all about creating immersive environments that can be interacted with in three dimensions.

In this article, we cover a brief guide on how to switch from web development to VR development.

From the start, VR development demands creative spatial thinking. Creating in 3D is grounded in math, so be prepared to think back to the X, Y and Z coordinates of your geometry classes.

Begin your VR development journey by taking a page from the 3D game development handbook. Why? Because VR content development is much like 3D game development: both involve spatially designing a large, immersive 3D world that users navigate at their own pace.

3D game development and VR development rest on a few common foundations. First, developers need to learn how to use a 3D game engine. They also need to be able to create or import 3D assets to populate the environment they are building, for instance with 3D modelling tools such as Blender or Maya, or online asset stores such as Google’s Poly. Finally, they need to learn to program within that engine so that objects in the scene can interact with one another.

Developers can also start by capturing 360° photos and videos to build up their spatial-thinking muscles, and by studying examples of well-known brands that use 3D space in engaging ways.


Start by Working on Your Technical Skills

To become a VR developer, you need a strong command of technical skills, ranging from a programming language to knowing how to use the VR tools themselves. Start by networking with other VR developers and asking about their experience in the field; their journeys, and the shortcuts they have picked up, will give you an idea of what to expect. You will also need to learn Unity, a cross-platform game engine that supports more than 25 platforms.

Along with this, you will need basic to advanced knowledge of the C# programming language. You can also work through a beginner’s guide, or refer to Google Daydream or Gear VR resources to experience the VR development journey, which draws on Android development experience.

Knowing the Right Hardware

Every VR developer needs a test unit to check how their applications behave and which bugs need fixing. Base the choice on your budget and on the kind of experience you want to develop. If you wish to target casual VR users, develop for Google Daydream; if you want to target premium VR users, there are plenty of options such as the HTC Vive or Oculus Rift. Once you have settled on a target platform and its associated headset, consider the computing requirements: to develop for the Rift or Vive, you will need to invest in a gaming PC with a solid graphics card.


Wrap Up

That brings us to the end of the article. We hope it has given you a clearer picture of how to move from web development to VR development. The world of VR development can be challenging, so commit to learning new skills by trying your hand at small projects, and keep working through tutorials to push yourself towards spatial thinking. Until then, keep learning!

Google Releases Real-time Mobile Hand Tracking to R&D Community

Google has released to researchers and developers its own mobile device-based hand tracking method using machine learning, something Google Research calls a “new approach to hand perception.”

First unveiled at CVPR 2019 back in June, Google’s on-device, real-time hand tracking method is now available for developers to explore—implemented in MediaPipe, an open source cross-platform framework for developers looking to build processing pipelines to handle perceptual data, like video and audio.

The approach is said to provide high-fidelity hand and finger tracking via machine learning, which can infer 21 3D ‘keypoints’ of a hand from just a single frame.

“Whereas current state-of-the-art approaches rely primarily on powerful desktop environments for inference, our method achieves real-time performance on a mobile phone, and even scales to multiple hands,” the researchers say in a blog post.

 

Google Research hopes its hand-tracking methods will spark in the community “creative use cases, stimulating new applications and new research avenues.”

The researchers explain that there are three primary systems at play in their hand tracking method: a palm detector model (called BlazePalm), a ‘hand landmark’ model that returns high-fidelity 3D hand keypoints, and a gesture recognizer that classifies the keypoint configuration into a discrete set of gestures.
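To make that last stage more concrete, here is a deliberately simplified Java sketch of what classifying a keypoint configuration into a discrete set of gestures can look like. It is not Google’s implementation: the `Keypoint` type and the finger-extension heuristic are illustrative assumptions, and the landmark indices follow MediaPipe’s published 21-point hand layout (0 is the wrist; 4, 8, 12, 16 and 20 are the fingertips; 3, 6, 10, 14 and 18 are the joints below them).

```java
// Illustrative only: a toy gesture classifier over the 21 hand keypoints
// described in the post. Not Google's code; it assumes the MediaPipe
// landmark ordering noted above.
public class GestureSketch {

  public static class Keypoint {
    public final float x, y, z;
    public Keypoint(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
  }

  private static float dist(Keypoint a, Keypoint b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
  }

  // A finger counts as extended when its tip is farther from the wrist
  // than its middle joint is; a crude stand-in for a per-finger
  // bent-or-straight test.
  private static boolean extended(Keypoint[] k, int tip, int midJoint) {
    return dist(k[tip], k[0]) > dist(k[midJoint], k[0]);
  }

  // Map the pattern of extended fingers to a discrete gesture label.
  public static String classify(Keypoint[] k) { // expects k.length == 21
    boolean thumb  = extended(k, 4, 3);
    boolean index  = extended(k, 8, 6);
    boolean middle = extended(k, 12, 10);
    boolean ring   = extended(k, 16, 14);
    boolean pinky  = extended(k, 20, 18);

    if (!index && !middle && !ring && !pinky) {
      return thumb ? "thumbs_up" : "fist";
    }
    if (thumb && index && middle && ring && pinky) {
      return "open_palm";
    }
    if (index && pinky && !middle && !ring) {
      return "rock";
    }
    return "unknown";
  }
}
```

A real classifier would use per-finger bend angles and temporal smoothing, but the shape of the problem is the same: reduce 21 3D points to a small set of features, then map those features to gesture labels.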


Here are a few salient bits, boiled down from the full blog post:

  • The BlazePalm palm detection technique achieves an average precision of 95.7%, the researchers claim.
  • The model learns a consistent internal hand pose representation and is robust even to partially visible hands and self-occlusions.
  • The existing pipeline supports counting gestures from multiple cultures, e.g. American, European, and Chinese, and various hand signs including “Thumb up”, closed fist, “OK”, “Rock”, and “Spiderman”.
  • Google is open sourcing its hand tracking and gesture recognition pipeline in the MediaPipe framework, accompanied by the relevant end-to-end usage scenario and source code.

Looking ahead, Google Research says it plans to continue its hand tracking work with more robust and stable tracking, and hopes to enlarge the set of gestures it can reliably detect. The team also hopes to support dynamic gestures, which could be a boon for machine learning-based sign language translation and fluid hand gesture controls.

Not only that, but having more reliable on-device hand tracking is a necessity for AR headsets moving forward; as long as headsets rely on outward-facing cameras to visualize the world, understanding that world will continue to be a problem for machine learning to address.


Indie Dev Experiment Brings Google Lens to VR, Showing Real-time Text Translation

Google Lens is great for when you want to quickly translate a menu written in a foreign language, or visually explore the world around you simply using your Android smartphone. In an effort to bring some of those machine learning functions into a VR environment, Twitter user ‘Phasedragon’ recently showed off a new workaround that lets him use Google Lens in VR.

As reported by 9to5Google, Phasedragon demoed Google Lens in VR by translating a few bits of Korean text from what appears to be a recreation of a Korean train station. Considering it’s using the full Google Lens suite of tools, however, we bet a lot more is possible.

To do this, Phasedragon says in a followup tweet that he “just hooked together a few apps,” and tried “a bunch to see which ones worked.”

Phasedragon, also known for tinkering with VRChat on his YouTube channel, says that he initially tried Microsoft Translate to step over some integration issues, but concluded that Microsoft’s version was “simply not as good as Google Translate.”

In his implementation, Phasedragon used Sparkocam to capture the desktop and export it as a virtual webcam. He then used the Android Studio emulator to run Google Lens, and OVR Toolkit to display it in VR.

While it’s admittedly just an impressive bit of software kitbashing, and nowhere near an official use case, the thought of being able to bring some of the AR functionality of Google Lens into VR is exciting, to say the least. Should Google ever invest time in making an official Lens overlay for VR, it could lead to new and exciting types of games as developers come up with novel ways of leveraging Google’s machine learning in their creations.


Google Develops Real-Time Finger Tracking Algorithm For Mobile Chips

Google released an open source algorithm which performs real-time 21-point finger tracking on mobile hardware.

The system is part of Google’s MediaPipe, a modular framework for machine learning based solutions such as face detection, object detection, and hair segmentation.

When people put on a VR headset, one of the first things they do is reach out with their hands. Tracked controllers offer a basic representation of our hands, and they’re well suited to gaming. But they don’t track the vast majority of finger motion, and the very act of holding them restricts that motion too.

In non-interactive VR experiences and social VR, controllers often feel more like a chore than a help. If we could enter these experiences by just putting on a headset and seeing our real hands, this reduction of friction would be a welcome improvement.

Unfortunately, Google’s blog post doesn’t mention the quality and latency of the current implementation. It also doesn’t mention VR, though Google is known to be researching virtual reality technologies. It does, however, mention that the company plans in the future to “extend this technology with more robust and stable tracking”.

Google doesn’t currently have plans to release an Oculus Quest competitor. In fact, the company’s commitment to VR at all has come under question this year, with no mention of VR at I/O 2019. But if this changes in the future, such technology could allow for a standalone headset with natural input interactions.

Facebook, the company behind the Oculus brand, is also known to be researching camera-based finger tracking. However, the company has not released any implementation to the public.


Google Maps’ ‘Live View’ AR Feature Available in Beta, Makes Getting Lost Harder

Google may seem to be losing interest in its virtual reality (VR) ventures such as Daydream View, but on the augmented reality (AR) front the company is still pressing forward with gusto. Having released an AR feature for Google Maps earlier this year to Google Maps Local Guides and Google Pixel users, the company has today begun a wider rollout of Live View.


Currently still in beta, the feature is rolling out to compatible iOS and Android devices which support ARKit and ARCore respectively. While the launch happens today, you may not be able to update just yet, as it may take Google several days or weeks to reach your region.

The whole purpose of the AR option is to make navigation with the highly popular Google Maps even easier and more straightforward. All you need to do is tap on a location in the app, hit the Directions button, then select the Walking option. After that, you should find the option to ‘Live View’ those directions towards the bottom of the screen. With Live View enabled, you’ll see giant, handy arrows appear in the middle of the street (or wherever you are), pointing you in the right direction to head.

Obviously, this sort of feature isn’t supposed to make you continually hold your phone up and look like a lost kitten. You can simply bring it up when required, to check whether you’ve gone the wrong way or are still heading the right way. It’s just one of a number of updates Google has added to the app, including being able to see all of your flight and hotel reservations in one place, or finding a nice restaurant and booking a reservation, all without leaving the app.


While AR might be seen as the little brother to VR, it’s often thought of as having the greatest potential in the long run. Apart from apps like Google Maps, much of the AR content consumers are coming across at the moment is videogames such as Harry Potter: Wizards Unite and Minecraft Earth. VRFocus will continue its coverage of AR, reporting back with the latest updates.