Google Acquires North’s Focals Smartglasses Business

Google confirmed it acquired North and its Focals smartglasses platform.

Reports had been circulating in recent days that an acquisition was in the works, and now Google has formally confirmed the move.

“Today we’re announcing that Google has acquired North, a pioneer in human computer interfaces and smart glasses. They’ve built a strong technology foundation, and we’re excited to have North join us in our broader efforts to build helpful devices and services,” Google’s Senior Vice President of Devices & Services, Rick Osterloh, wrote in a prepared statement. “We’re building towards a future where helpfulness is all around you, where all your devices just work together and technology fades into the background. We call this ambient computing.”

The team coming on board at Google will stay based in Kitchener-Waterloo, Canada, where North is located.

North was formerly known as Thalmic Labs, and the group previously made the Myo gesture-based input armband, which looks vaguely similar to work being done at CTRL-labs, a startup Facebook acquired last year. There’s a major gap in tracking robustness between what Facebook acquired in 2019 and what Myo offered at its 2016 launch, but there’s also a multi-year gap between the two technologies’ development. North eventually set the armband aside and made Focals smartglasses the focus of its work.

Focals offered simple notification features similar to a smartwatch, with a basic display system built into a slim pair of glasses. In a statement, the founders of North confirmed the company will not be shipping the 2.0 version of the glasses.

Google, Apple, Facebook, and others continue to build toward augmented reality platforms, but difficult problems need to be solved on a number of fronts before a compelling consumer AR platform can emerge in the coming years. Acquiring startups can also have cascading effects on the internal structure and hardware plans at major technology companies, and we’ll be curious to see how North impacts Google’s efforts in AR.


Google Takes a Step Closer to Making Volumetric VR Video Streaming a Thing

Google unveiled a method of capturing and streaming volumetric video, one that Google researchers say can be compressed down to a lightweight format capable of being rendered even on standalone VR/AR headsets.

Both monoscopic and stereoscopic 360 video are flawed insofar as they don’t allow the VR user to move their head freely within a 3D area; you can rotationally look up, down, left, and right (3DOF), but you can’t positionally lean back or forward, stand up or sit down, or move your head’s position to look around something (6DOF). Even seated, you’d be surprised at how often you shift in your chair or make micro-adjustments with your neck, movements that in a standard 360 video make you feel like you’re ‘pulling’ the world along with your head. Not exactly ideal.
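To make the distinction concrete, here’s a minimal sketch in Kotlin (the types are hypothetical, not from any SDK) of what each tracking model actually carries:

```kotlin
// Illustrative types only: a 3DOF pose carries orientation alone, so head
// translation has nowhere to go and the world drags along with your head.
data class Orientation(val yaw: Float, val pitch: Float, val roll: Float)
data class Position(val x: Float, val y: Float, val z: Float)

// 360 video can honor only this: rotation.
data class ThreeDofPose(val orientation: Orientation)

// 6DOF playback honors rotation plus translation, so leaning and peeking
// around objects actually changes the rendered viewpoint.
data class SixDofPose(val orientation: Orientation, val position: Position)
```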

Volumetric video is instead about capturing how light exists in the physical world, and displaying it so VR users can move their heads around naturally. That means you’ll be able to look around something in a video because that extra light (and geometry) data has been captured from multiple viewpoints. While Google didn’t invent the idea (we’ve seen something similar from NextVR before it was acquired by Apple), it’s certainly making strides to reduce overall cost and finally make volumetric video a thing.

In a paper published ahead of SIGGRAPH 2020, Google researchers accomplish this with a custom array of 46 time-synchronized action cams mounted on a 92cm-diameter dome. The rig gives the user an 80cm area of positional movement, and delivers 10 pixels per degree of angular resolution, a 220+ degree field of view, and 30fps video capture. Check out the results below.

[Embedded video: sample results from the capture rig]
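For a sense of scale, here’s a quick back-of-envelope sketch of what those figures imply; note that the per-camera sensor resolution (4K UHD) is an assumption, since this article doesn’t state it:

```kotlin
// Back-of-envelope numbers for the rig described above. The per-camera
// sensor resolution is an assumption (4K UHD), not stated in this article.
fun main() {
    val cameras = 46
    val sensorPixels = 3840L * 2160L // assumed 4K UHD sensors
    println("Raw capture: ~${cameras * sensorPixels / 1_000_000} MP per frame, 30 times per second")

    // 10 pixels per degree across a 220-degree field of view
    println("Rendered panorama width: ~${10 * 220} px")
}
```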

The researchers say the system can reconstruct objects as close as 20cm to the camera rig, thanks to a recently introduced interpolation algorithm built on DeepView, Google’s deep learning view synthesis system.

This is done by replacing DeepView’s underlying multi-plane image (MPI) scene representation with a collection of spherical shells, which the researchers say are better suited to representing panoramic light field content.
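For intuition, here’s a minimal sketch of how any such layered representation gets rendered: each shell contributes one RGBA sample along a view ray, and the samples are alpha-composited from the farthest shell to the nearest. This is an illustrative simplification, not the paper’s actual renderer:

```kotlin
// One RGBA sample along a view ray, taken from a single spherical shell.
data class ShellSample(val r: Float, val g: Float, val b: Float, val a: Float)

// Classic back-to-front "over" compositing of the per-shell samples for a
// ray; `samples` is ordered outermost (farthest) shell first.
fun compositeRay(samples: List<ShellSample>): ShellSample {
    var r = 0f; var g = 0f; var b = 0f; var a = 0f
    for (s in samples) {
        // Each nearer shell sits "over" whatever has accumulated behind it.
        r = s.r * s.a + r * (1f - s.a)
        g = s.g * s.a + g * (1f - s.a)
        b = s.b * s.a + b * (1f - s.a)
        a = s.a + a * (1f - s.a)
    }
    return ShellSample(r, g, b, a)
}
```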


“We further process this data to reduce the large number of shell layers to a small, fixed number of RGBA+depth layers without significant loss in visual quality. The resulting RGB, alpha, and depth channels in these layers are then compressed using conventional texture atlasing and video compression techniques. The final, compressed representation is lightweight and can be rendered on mobile VR/AR platforms or in a web browser,” Google researchers conclude.

In practice, what Google is introducing here is a more cost-effective solution, one that may eventually spur the company to create its own volumetric immersive video team, much like it did with its 2015-era Google Jump 360 rig project before that was shuttered last year. That’s provided, of course, that Google further supports the project by, say, adding support for volumetric video to YouTube and releasing an open source plan for the camera array itself. Whatever the case, volumetric video, or what Google refers to in the paper as Light Field video, is starting to look like a viable step forward for storytellers looking to drive the next chapter of immersive video.

If you’re looking for more examples of Google’s volumetric video, you can check them out here.


Google Figured Out How To Stream 6DoF Video Over The Internet

Researchers from Google have developed the first end-to-end 6DoF video system, one that can even stream over (high bandwidth) internet connections.

Current 360 videos can take you to exotic places and events, and you can look around, but you can’t actually move your head forward or backward positionally. This makes the entire world feel locked to your head, which really isn’t the same as being somewhere at all.

Google’s new system encapsulates the entire video stack: capture, reconstruction, compression, and rendering, delivering a milestone result.

The camera rig features 46 synchronized 4K cameras running at 30 frames per second, attached to a “low cost” acrylic dome. Since the acrylic is semi-transparent, the dome can even be used as a viewfinder.

Each camera has a retail price of $160, which totals $7,360 for the rig’s 46 cameras. That may sound high, but it’s considerably lower cost than bespoke alternatives; 6DoF video is a new technology just starting to become viable.

The result is a 220 degree “lightfield” with a width of 70cm; that’s how much you can move your head. The resolution is 10 pixels per degree, meaning it will probably look somewhat blurry on any modern headset, with the exception of the original HTC Vive. As with all technology, that will improve over time.

But what’s really impressive is the compression and rendering. A light field video can be streamed over a reliable 300 Mbit/sec internet connection. That’s still well beyond average internet speeds, but most major cities now offer this kind of bandwidth.
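For context, here’s the quick arithmetic behind those figures, as a sketch (the actual per-frame budget will vary with content):

```kotlin
// Quick arithmetic behind the figures above.
fun main() {
    // Rig cost: 46 cameras at $160 retail each.
    println("Camera cost for the rig: $${46 * 160}") // $7,360

    // Streaming budget: 300 Mbit/sec at 30 frames per second.
    val bitsPerSecond = 300_000_000L
    val bytesPerFrame = bitsPerSecond / 8 / 30
    println("~${bytesPerFrame / 1000} KB per compressed light field frame") // ~1,250 KB
}
```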

How Does It Work?

In 2019, Google’s AI researchers developed a machine learning algorithm called DeepView. Given four images of the same scene captured from slightly different perspectives, DeepView can generate a depth map and even synthesize new images from arbitrary perspectives.

This new 6DoF video system uses a modified version of DeepView. Instead of representing the scene with 2D planes, the algorithm uses a collection of spherical shells; a further step then reprocesses this output down to a much smaller number of shells.

Finally, these spherical layers are transformed into a much lighter “layered mesh” that samples from a texture atlas to further save on resources (a technique used in game engines, where textures for different models are stored tightly packed in the same file).
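As a rough illustration of the atlasing idea (a sketch of the general game-engine technique, not the paper’s exact scheme), each layer’s texture occupies a rectangle inside one big atlas, and the mesh samples it by remapping its local UVs into atlas space:

```kotlin
// A rectangle inside the shared atlas, in normalized [0, 1] atlas coordinates.
data class AtlasRegion(val u0: Float, val v0: Float, val width: Float, val height: Float)

// Remap a layer-local UV coordinate into atlas space, so many layers'
// textures can live in one packed image (and one video stream).
fun toAtlasUv(localU: Float, localV: Float, region: AtlasRegion): Pair<Float, Float> =
    Pair(region.u0 + localU * region.width, region.v0 + localV * region.height)
```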

You can read the research paper and try out some samples in your browser on Google’s public page for the project.

Light field video is still an emerging technology, so don’t expect YouTube to start supporting light field videos in the near future. But it does look clear that one of the holy grails of VR content, streamable 6DoF video, is now a solvable problem.

We’ll be keeping a close eye on this technology as it starts to transition from research to real world products.


Google Launches Depth API for ARCore, Increasing Realism And Improving Occlusion

Google announced today that the Depth API is now available for ARCore 1.18 on Android and Unity. The Depth API is meant to improve occlusion and increase realism thanks to new interaction types.

The Depth API was first announced in a preview on the Google developers blog last year. The API allows a device to determine the depth of objects in its camera view, according to how near or far away they are. For AR, this significantly improves occlusion, which Google succinctly describes as “the ability for digital objects to accurately appear in front of or behind real world objects.”
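For developers, here’s roughly what opting in looks like in a Kotlin ARCore app; a minimal sketch assuming an existing Session and a per-frame Frame, modeled on the patterns in Google’s developer documentation rather than a complete sample:

```kotlin
import android.media.Image
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.Session
import com.google.ar.core.exceptions.NotYetAvailableException
import java.nio.ByteOrder

// Enable automatic depth if this device supports it; not every
// ARCore-capable phone does.
fun enableDepth(session: Session) {
    val config = session.config
    if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
        config.depthMode = Config.DepthMode.AUTOMATIC
    }
    session.configure(config)
}

// Grab the latest depth image for a frame. Depth may not be available for
// the first frames while ARCore warms up; the caller must close the Image.
fun latestDepthImage(frame: Frame): Image? = try {
    frame.acquireDepthImage()
} catch (e: NotYetAvailableException) {
    null
}

// Distance in millimeters at pixel (x, y), following the 16-bit layout
// ARCore documents for its depth images.
fun depthMmAt(image: Image, x: Int, y: Int): Int {
    val plane = image.planes[0]
    val byteIndex = x * plane.pixelStride + y * plane.rowStride
    val buffer = plane.buffer.order(ByteOrder.nativeOrder())
    return buffer.getShort(byteIndex).toInt() and 0xFFFF
}
```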

[Image: Snapchat’s dancing hotdog filter being occluded by real-world furniture via ARCore]

The example embedded above shows the dancing hotdog filter on Snapchat being accurately occluded by a lounge as the camera moves down. According to Google, another case where the API would be useful is Five Nights at Freddy’s AR: Special Delivery, where occlusion is vital to the experience: characters can accurately hide behind real-world objects and then provide a jump scare by moving out from behind them. Niantic has shown something similar with Pokemon Go in the past as well.
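Under the hood, occlusion boils down to a per-pixel depth comparison: if the real-world surface is closer to the camera than the virtual object at a given pixel, the virtual pixel is hidden. A simplified sketch:

```kotlin
// Simplified per-pixel occlusion test: virtual content only shows where
// it is nearer to the camera than the real-world depth estimate.
fun isVirtualPixelVisible(virtualDepthMm: Int, realDepthMm: Int): Boolean =
    virtualDepthMm < realDepthMm

fun main() {
    // A virtual character 3.0m away, behind real furniture 1.8m away,
    // is occluded and shouldn't be drawn at that pixel.
    println(isVirtualPixelVisible(virtualDepthMm = 3000, realDepthMm = 1800)) // false
}
```

In practice, renderers soften this hard comparison near depth edges to avoid flickering silhouettes.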

However, occlusion is not the only use for the Depth API; Google notes that developers have found many others as well, including more realistic physics, better surface interactions, and environmental traversal. For example, the Google Creative Lab experiment ‘Lines of Play’ lets users build AR domino arrangements that accurately collide with furniture and walls in the room when the dominoes are knocked over.

The Depth API will begin rolling out today. You can read more over on the Google developers blog.


Vacation Simulator ‘Back To Job’ Update Coming This Fall

Owlchemy Labs is planning a free downloadable content update for Vacation Simulator later this year that expands the game by bringing forward some familiar experiences from Job Simulator.

Google-owned Owlchemy teased the update in its UploadVR Showcase segment.

Check out the trailer here:

Job Simulator and its sequel Vacation Simulator are among the more popular and widely known VR titles in existence. While both games are similar in tone and general mechanics, Vacation Simulator features a range of widely varying activities divided into zones across three vacation-themed environments: beach, mountain, and forest. The original Job Simulator featured four jobs (office worker, auto mechanic, chef, and convenience store clerk), each with a number of activities and challenges.

With the Back To Job downloadable content update, Owlchemy is teasing that the gig economy has made its way to the world of Vacation Simulator: “all bots have gone on vacation and no one is left to job, so it’s time for the human to enter the on-demand workforce to make the perfect vacation for bots. Time to job, again.”

“It’s kinda bringing two worlds together,” explains Owlchemy’s Devin Reimer, with the studio taking “some of the cool things from both” and mashing them together.

Back To Job will come to all platforms Vacation Simulator supports, meaning PSVR, Quest, and PC VR systems will all get the update. Owlchemy is still working on it, so it’s unclear how much added content it will bring or the exact launch timeline for each system. We’ll bring you updates about the new DLC as soon as we have them.

Check out every trailer, article, announcement, interview, and more from the UploadVR Showcase right here.


Google Can Now Present AR Models On Mobile For Select Search Results

Select Google search results will display a ‘View in 3D’ option on mobile devices, which can then be extended into AR and explored in 3D using your phone. The available content includes a range of animals and scientific models suited to educational purposes.

The feature actually launched last year, but at the time it only included models of animals. Now, Google has teamed up with Visible Body and BioDigital to expand the AR search result offerings to include scientific content such as models of human anatomical systems and cell structures.

The feature is integrated right into Google search in your phone’s web browser and supports Android phones running Android 7 and up, as well as iPhones from the iPhone 6S onward running iOS 11 and up. All you need to do is open your web browser of choice on Android, or Safari or Chrome on iOS, and search for one of the supported subjects. The results should display an option to look at a 3D model; simply tap the ‘View in 3D’ button and then tap ‘View in your space’.

[GIF: an AR model being placed in a room via Google Search]

Once the phone has identified the ground, the model is displayed in your space, allowing you to explore it in 3D. You can see an example in the GIF above, provided by Google.

With a number of anatomical systems and different cell structures available, the AR functionality could become a really valuable and interactive tool in science education. This is especially true right now, with many children being home-schooled due to the global pandemic while still having access to an AR-supported phone.

The human anatomical 3D models include the digestive, respiratory, and skeletal systems, and much more. There is also a large number of cell structures, such as the mitochondrion, cell membranes, and plant cells. Given that the tool was recently expanded to include scientific content, hopefully even more supported search results will be added in the near future.

For a full list of search terms and items that support AR models on mobile, see this Google Search help article.
