Google Open Sources Seurat, a ‘Surface Light-field’ Rendering Tool for 6DOF Mobile VR

Google announced Seurat at last year’s I/O developer conference, showing a brief glimpse into the new rendering technology designed to reduce the complexity of ultra high-quality CGI assets so they can run in real-time on mobile processors. Now, the company is open sourcing Seurat so developers can customize the tool and use it for their own mobile VR projects.

“Seurat works by taking advantage of the fact that VR scenes are typically viewed from within a limited viewing region, and leverages this to optimize the geometry and textures in your scene,” Google Software Engineer Manfred Ernst explains in a developer blogpost. “It takes RGBD images (color and depth) as input and generates a textured mesh, targeting a configurable number of triangles, texture size, and fill rate, to simplify scenes beyond what traditional methods can achieve.”
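The RGBD-capture input that Ernst describes can be sketched in miniature. The following is a hypothetical illustration, not Seurat's actual pipeline or file format: it evenly samples camera positions inside a cubic viewing region (the "limited viewing region" from the quote) and pairs each with placeholder color and depth image names, as a stand-in for the kind of capture manifest such a tool consumes.

```python
import itertools

def headbox_samples(center, half_extent, per_axis=3):
    """Evenly sample camera positions inside a cubic viewing region.

    Seurat-style tools take RGBD captures from many viewpoints inside
    the region the user's head may occupy; this is a toy stand-in.
    """
    cx, cy, cz = center
    h = half_extent
    # per_axis points along each axis, spanning [-h, +h] around the center
    offsets = [-h + 2 * h * i / (per_axis - 1) for i in range(per_axis)]
    return [(cx + dx, cy + dy, cz + dz)
            for dx, dy, dz in itertools.product(offsets, repeat=3)]

def capture_manifest(positions):
    """Pair each sample position with (placeholder) RGBD image names."""
    return [{"position": p,
             "color": f"color_{i:03d}.png",
             "depth": f"depth_{i:03d}.exr"}
            for i, p in enumerate(positions)]

# A seated-scale headbox: 1m cube centered at typical eye height
positions = headbox_samples(center=(0.0, 1.6, 0.0), half_extent=0.5)
manifest = capture_manifest(positions)
```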

Blade Runner: Revelations, which launched last week alongside the Lenovo Mirage Solo, Google’s first 6DOF Daydream headset, takes advantage of Seurat to pretty impressive effect. Developer studio Seismic Games used the rendering tech to bring a scene of 46.6 million triangles down to just 307,000, “improving performance by more than 100x with almost no loss in visual quality,” Google says.
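Note that the quoted triangle counts imply roughly a 150-fold cut in geometry; the “more than 100x” figure refers to overall performance rather than the raw triangle ratio:

```python
original_tris = 46_600_000   # Blade Runner: Revelations source scene
seurat_tris = 307_000        # after Seurat processing
reduction = original_tris / seurat_tris
print(f"{reduction:.0f}x fewer triangles")  # prints "152x fewer triangles"
```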

Here’s a quick clip of the finished scene:

To accomplish this, Seurat uses what the company calls ‘surface light-fields’, a process which involves taking original ultra-high quality assets, defining a viewing area for the player, then taking a sample of possible perspectives within that area to determine everything that possibly could be viewed from within it.
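The “determine everything that could possibly be viewed” step above amounts to a visibility query against the sampled viewpoints. As a minimal sketch (assuming a simple viewing-cone test per camera; Seurat’s actual visibility computation works on rendered RGBD images, not point tests), geometry seen by no sampled viewpoint can simply be discarded:

```python
import math

def visible_from(point, cam_pos, cam_dir, fov_deg=90.0):
    """True if `point` lies inside the camera's viewing cone.

    `cam_dir` is assumed to be a unit vector.
    """
    vx, vy, vz = (p - c for p, c in zip(point, cam_pos))
    dist = math.sqrt(vx * vx + vy * vy + vz * vz)
    if dist == 0:
        return True  # the camera is sitting on the point
    dot = (vx * cam_dir[0] + vy * cam_dir[1] + vz * cam_dir[2]) / dist
    return dot >= math.cos(math.radians(fov_deg / 2))

def cull(points, cameras, fov_deg=90.0):
    """Keep only points seen from at least one sampled viewpoint."""
    return [p for p in points
            if any(visible_from(p, pos, d, fov_deg) for pos, d in cameras)]

# One sample camera at the origin looking down -Z
cameras = [((0.0, 0.0, 0.0), (0.0, 0.0, -1.0))]
kept = cull([(0.0, 0.0, -5.0), (0.0, 0.0, 5.0)], cameras)
```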

This is particularly useful for developers looking to create 6DOF experiences on mobile hardware, since the user can view the scene from many perspectives. A major benefit, the company said last year, is the ability to add perspective-correct specular lighting, which adds a level of realism usually considered out of reach given a mobile processor’s modest compute budget.
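What makes specular lighting hard to bake into static textures is that it depends on the viewer’s position: the highlight moves as the head moves, which is exactly what a perspective-correct approach must preserve. A standard Blinn-Phong specular term (used here purely as an illustration of view dependence, not as Seurat’s shading model) makes this concrete:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def blinn_phong_specular(normal, light_dir, view_dir, shininess=32):
    """View-dependent specular term (inputs are unit vectors).

    Because `view_dir` appears in the half-vector, the result changes
    as the viewer moves -- a static baked texture cannot capture this.
    """
    h = normalize(tuple(l + v for l, v in zip(light_dir, view_dir)))
    return max(0.0, sum(n * c for n, c in zip(normal, h))) ** shininess

normal = (0.0, 0.0, 1.0)
light = (0.0, 0.0, 1.0)
head_on = blinn_phong_specular(normal, light, (0.0, 0.0, 1.0))
oblique = blinn_phong_specular(normal, light, normalize((1.0, 0.0, 1.0)))
```

Moving the viewpoint off-axis dims the highlight, so any scheme that fixes shading at bake time loses the effect.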

Google has now released Seurat on GitHub, including documentation and source code for prospective developers.

Below you can see the same view rendered with Seurat and without Seurat:

The post Google Open Sources Seurat, a ‘Surface Light-field’ Rendering Tool for 6DOF Mobile VR appeared first on Road to VR.

Google Plans to Close the Gap Between PC and Mobile Graphics with Seurat

While Google’s I/O conference has plenty of interesting news for consumers, the majority of it is geared towards developers, helping them build, create and maximise the potential of their current or future projects. In terms of virtual reality (VR), the company aims to increase the graphical fidelity of mobile experiences with a project called Seurat.

Google Seurat – named after the French painter – is a way of processing complex scenes that previously could only be handled by a desktop PC, making it possible for a mobile device to render them in real-time.


To achieve this, Andre Doronichev, director of product management at Daydream, explained: “As a developer you define a volume, one in which you wish the user to move around and view your scene. You also define parameters like the number of polygons and overdraw. And then you let the tool do its magic. It takes dozens of images from different parts of the defined volume, and then it automatically generates an entirely new 3D scene that looks identical to the original, but is dramatically simplified. And you can still have dynamic interactive elements in it.”
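The developer-facing knobs Doronichev mentions — a movement volume plus polygon and overdraw budgets — might be gathered into a parameter block along these lines. The names below are illustrative only, not Seurat’s actual configuration schema:

```python
# Hypothetical parameter block mirroring the description above;
# field names are illustrative, not Seurat's real schema.
seurat_config = {
    "headbox": {                 # volume the user can move around within
        "center": [0.0, 1.6, 0.0],
        "size": [1.0, 1.0, 1.0],
    },
    "triangle_budget": 300_000,  # target polygon count for the output mesh
    "max_overdraw": 2.0,         # fill-rate limit (avg. layers per pixel)
    "texture_size": 4096,        # atlas resolution for the baked textures
}
```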

Doronichev then went on to showcase a project Google made in collaboration with ILMxLAB, highlighting how Seurat could even be used for projects like a movie scene, which require even more processing power. In the video below, ILMxLAB’s executive creative director John Gaeta said: “[Seurat] potentially opens the door to cinematic realism in VR.”

Seurat already supports Unreal Engine, Unity and Maya, and Google is testing the tool with a select group of partners currently, prior to rolling it out later this year.

VRFocus will continue its coverage of Google I/O, reporting back with all the latest updates.

Google’s ‘Seurat’ Surface Light-field Tech Could Be a Graphical Breakthrough For Mobile VR

Google has revealed a new ‘surface light-field’ rendering technology that it’s calling ‘Seurat’ (after the famous Pointillism painter). The company says that the tech will not only bring CGI-quality visuals to mobile VR, but it will do so at a minuscule filesize—a hurdle that other light-field approaches have struggled to surmount.

Today at I/O 2017 Google introduced Seurat, a new rendering technology designed to take ultra high-quality CGI assets that couldn’t be run in real-time even on the highest-performance desktop hardware, and reformat them so that they retain their visual fidelity while running on mobile VR hardware. That wouldn’t be very impressive if we were just talking about 360 videos, but Google’s Seurat approach actually generates sharp, properly rendered geometry, which means it retains real volumetric data, allowing players to walk around a room-scale space rather than having their head stuck at one static point. This also means that developers can composite traditional real-time assets into the scene to create interactive gameplay within these high-fidelity environments.

So how does it work? Google says Seurat makes use of something called surface light-fields, a process which involves taking original ultra-high quality assets, defining a viewing area for the player, then taking a sample of possible perspectives within that area to determine everything that could possibly be viewed from within it. The high-quality assets are then reduced to a significantly smaller number of polygons—few enough that the scene can run on mobile VR hardware—while maintaining the look of high-quality assets, including perspective-correct specular lighting.
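To give the polygon-reduction step some shape, here is a deliberately naive vertex-clustering decimator: it snaps vertices to a coarse grid and discards triangles that collapse. This is a textbook toy, not Seurat’s actual simplification (which targets a triangle budget while preserving the sampled views), but it shows how geometry shrinks under clustering:

```python
def decimate(triangles, cell=1.0):
    """Toy vertex-clustering decimation.

    Snap each vertex to a grid of spacing `cell` and drop triangles
    whose vertices merge. Illustrates polygon reduction only; a real
    simplifier preserves appearance while hitting a triangle budget.
    """
    def snap(v):
        return tuple(round(c / cell) * cell for c in v)
    out = []
    for tri in triangles:
        a, b, c = (snap(v) for v in tri)
        if a != b and b != c and a != c:   # keep non-degenerate triangles
            out.append((a, b, c))
    return out

tris = [
    ((0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.1, 0.0)),  # tiny: collapses
    ((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 2.0, 0.0)),  # large: survives
]
simplified = decimate(tris, cell=1.0)
```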

While other light-field approaches that we’ve seen are fundamentally constrained by the huge volumes of data they take up (making them hard to deliver to users), Google says that individual room-scale view boxes made with Seurat can be as small as just a few megabytes, and that a complex app containing many view boxes, along with interactive real-time assets, would not be larger than a typical mobile app.

That’s a huge deal because it means developers can create mobile VR games that approximate the graphical quality that users might expect from a high-end desktop VR headset—which may be an important part of convincing people to drop nearly the same amount of money on a standalone Daydream headset.

Google seems to still be in the early phases of the Seurat rendering tech, and we’re still waiting for a deeper technical explanation; it’s possible that potential pitfalls are yet to be revealed, and there’s no word yet on when developers will be able to use the tech, or how much time and cost it takes to process such environments. If it all works as Google says, though, this could be a breakthrough for graphics on mobile VR devices.

The post Google’s ‘Seurat’ Surface Light-field Tech Could Be a Graphical Breakthrough For Mobile VR appeared first on Road to VR.