Google’s ‘Welcome to Light Fields’ VR App Reveals the Power of Volumetric Capture

Google has released a free app for PC VR headsets called Welcome to Light Fields. The company says the app serves as a showcase of “the emerging technology Google is using to power its next generation of VR content.”

When it comes to capturing the real world for VR, 360 photos and videos can only go so far, and have a number of limitations which make them less immersive than computer generated VR content that’s rendered in real-time—namely the inability to actually move within a captured scene. Since 360 photo and video content can only be viewed from the precise location of the camera, you’re effectively stuck in place, able only to rotate your head.

Volumetric capture techniques aim to capture not just one circular perspective, but a totality of the scene (or at least a portion of it), so that viewers can move their heads through 3D space within the capture and see the scene from varying perspectives. Light fields are one promising type of volumetric capture and could represent a flexible, high quality, foundational format for capturing, generating, and storing VR content.
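For context, this is the standard light field formalism from the graphics literature rather than anything spelled out in Google’s app: a light field records the radiance arriving at every position from every direction, and in free space that reduces to a four-dimensional function.

```latex
% Plenoptic function: radiance L arriving at position (x, y, z) from direction (theta, phi)
L = L(x, y, z, \theta, \phi)
% Radiance is constant along a ray in free space, so the field reduces to 4D,
% commonly parameterized by a ray's intersections (u, v) and (s, t) with two planes:
L = L(u, v, s, t)
```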

In light fields like this one you can see how reflections and lighting on shiny objects move correctly as you move your head, instead of being ‘baked’ into an object’s texture as happens with some non-light field approaches to volumetric capture | Image courtesy Google

Initially the scenes look similar to simple 360 stereoscopic photos, but the magic happens when you move your head through space—instead of the world being effectively ‘locked to your head’, you’ll see it move around you just like you’d expect if you were really standing there; it’s much more immersive than a static 360 capture.

Google’s custom light field camera takes about one minute for a complete spin to capture a light field scene | Image courtesy Google

It does seem like magic, but it’s not without limitations. The captures are generated in this case by a custom light field camera which spins an array of GoPro cameras in a circle to capture a spherical area about two feet wide—you can view the scene from anywhere inside that sphere, but if you stick your head outside of it, the world will go blank. That’s because the cameras effectively capture all of the light rays intersecting the sphere on all sides, and then algorithms recreate the view from any point within the sphere for your viewing pleasure. A larger viewing area can be achieved with a larger camera.
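As a rough illustration of that viewing constraint (a minimal sketch, not Google’s actual renderer; the two-foot figure and the fade-to-blank behavior come from the article, while the fade distance is an assumption):

```python
import numpy as np

# The article describes a captured viewing sphere "about two feet wide",
# i.e. roughly a 0.3 m radius (1 ft is about 0.3048 m).
CAPTURE_RADIUS_M = 0.3048

def view_visibility(head_pos, sphere_center):
    """Return 1.0 while the viewer's head is inside the captured sphere,
    fading toward 0.0 as it leaves (the article notes the scene goes blank)."""
    dist = np.linalg.norm(np.asarray(head_pos) - np.asarray(sphere_center))
    fade_band = 0.05  # assumed 5 cm fade region; not specified by Google
    if dist <= CAPTURE_RADIUS_M:
        return 1.0
    return max(0.0, 1.0 - (dist - CAPTURE_RADIUS_M) / fade_band)

# Example: a head 10 cm from the center is fully inside the valid volume.
print(view_visibility([0.10, 0.0, 0.0], [0.0, 0.0, 0.0]))  # -> 1.0
```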

When you move your head outside of the captured area, the scene fades away | Image courtesy Google

Welcome to Light Fields features some very impressive light field imagery. At its best you’re seeing sharp, immersive views with lots of depth and reflections which convincingly react as you move your head. But there are some less-than-stellar scenes (especially if you manually delve into the app’s gallery) that show some of the challenges of capturing quality light fields.

The inside of Space Shuttle Discovery looks astounding as a light field | Image courtesy Google

Some of the scenes are not nearly as sharp as the others, and you can sometimes spot artifacts at the edges of objects, mistaken depth information, and reflections and lighting that don’t seem to act quite right. And there’s still the big challenge of file sizes. The handful of static light field scenes in Welcome to Light Fields clock in at around 6GB total—fine for a demonstration app, but arguably too large for mass adoption.

Still, Welcome to Light Fields is a powerful example of light field technology and a look at why volumetric capture is probably the future of VR video.


Welcome to Light Fields: Google Releases Light Field Captures

Even though virtual reality technology lets you escape into other worlds, the real world is also full of exciting places waiting to be explored. 360-degree captures do bring the real world onto a virtual canvas, but 360-degree videos and photos have little in common with a true VR experience, since the user cannot move within the scene and is reduced to a passive spectator. It can be done better, though, as you can now experience for yourself with a free Google app for PC headsets.

Welcome to Light Fields

Light field technology makes it possible to create captures that can be explored within a certain radius using a VR headset. Several companies have already made grand announcements about the possibilities of light field captures, but apparently none of them is keen on putting examples online. Google is now pushing ahead and, with “Welcome to Light Fields”, has released a free application that puts walkable light field captures in your hands.

Google’s software currently contains three different locations: the Gable House, the Mosaic Tile House, and the Space Shuttle Discovery. Google wants to use these scenes to show what the next generation of immersive content will look like. The software offers a guided tour, which is also meant to bring you closer to the technology behind the captures, as well as a direct selection of the various scenes. All of the content consists of photos that allow small head movements.

Welcome to the Light Field

True movement in any one direction isn’t possible, since the image turns black after a few centimeters, but the small movements are enough to bring the scene to life. In addition, the light reflections change depending on your head position, which is a fantastic effect and creates an incomparable atmosphere. The app is still extremely limited in terms of content, but it is enough for a glimpse into the future. And as the saying goes, don’t look a gift horse in the mouth. So if you own an Oculus Rift, HTC Vive, or Windows Mixed Reality headset, you should install the application right away. The Steam page can be found here.


The Future of Virtual Lightfields with Otoy CEO Jules Urbach

Otoy is a rendering company that is pushing the limits of digital light fields and physically-based rendering. Now that Otoy’s Octane Renderer has shipped in Unity, they’re pivoting from focusing on licensing their rendering engine to selling cloud computing resources for rendering light fields and physically-correct photon paths. Otoy has also completed an ICO for their Render Token (RNDR), and will continue to build out a centralized cloud-computing infrastructure to bootstrap a more robust distributed rendering ecosystem driven by an Ethereum-based ERC20 cryptocurrency market.


I talked with CEO and co-founder Jules Urbach at the beginning of SIGGRAPH 2017, where we discussed relighting light fields, 8D light field & reflectance fields, modeling physics interactions in light fields, optimizing volumetric light field capture systems, converting 360 video into volumetric video for Facebook, and their move into creating distributed render farms.

In my previous conversations with Urbach, he shared his dreams of rendering the metaverse and beaming the matrix into your eyes. We complete this conversation by diving down the rabbit hole into some of the deeper philosophical motivations that are really driving and inspiring Urbach’s work.

This time Urbach shares his visions of VR’s potential to provide us with experiences that are decoupled from the normal expected levels of entropy and energy transfer for an equivalent meaningful experience. What’s below Planck’s constant? It’s a philosophical question, but Urbach suspects that there are insights from information theory, since Planck’s photons and Shannon’s bits have a common root in thermodynamics.


He wonders whether the Halting problem suggests that a simulated universe is not computable, as well as whether Gödel’s incompleteness theorems suggest that we’ll never be able to create a complete model of the Universe. Either way, Urbach is deeply committed to trying to create the technological infrastructure to render the metaverse, and to continue probing for insights into the nature of consciousness and the nature of reality.






Lytro is Positioning Its Light Field Tech as VR’s Master Capture Format

Lytro, a leading light-field company, is positioning its light-field capture and playback technology as the ideal format for immersive content. The company is building a toolset for capturing, rendering, and intermingling both synthetic and live-action light-field experiences which can then be delivered at the highest quality playback supported by each individual platform.

Speaking with Lytro CEO Jason Rosenthal at the company’s Silicon Valley office, I got the rundown of how the company aims to deploy its tech toolset to create what he calls the Lytro Reality Experience, immersive experiences stored as light-fields and then delivered for the specific quality and performance capabilities of each consumption end point—all the way from the highest-end VR headset at a VR arcade down to mobile 360 footage viewed through a smartphone.

Light-fields are pre-computed scenes which can recreate the view from any point within the captured or rendered volume. In short, that means that a light-field can be played back as scenes which exceed the graphical capabilities of real-time rendering while still retaining immersive 6DOF positional tracking and (to an extent) interactivity. Though not without its own challenges, light-fields aim to combine the best of real-time immersion with the visual quality of pre-rendered VR experiences.

Rosenthal made the point that revolutions in media require new content formats with new capabilities. He pointed to the PDF, OpenGL, http, and MPEG as examples of media formats which have drastically altered the way we make and consume information. Immersive media, Rosenthal says, requires a volumetric format.

To that end, Lytro has been building a complete pipeline for light-fields, including capture/rendering of light-field content, mastering, delivery, and playback. He says that the benefit of this approach is that creators can capture/render and master their content once, and then distribute to headsets and platforms of varying capabilities without having to recapture, recreate, or remaster the content for each platform, as presently needs to be done for most real-time content spanning desktop and mobile VR headsets.


There are three main pieces of Lytro’s toolset that make it all possible. First is the company’s light-field camera, Immerge, which enables high-quality live-action light-field capture; we recently detailed its latest advancements here. Then there’s the company’s Volume Tracer software, which renders synthetic light-fields from CG content. And finally there’s the company’s playback software, which aims to enable the highest-fidelity playback on each device.

Image courtesy Lytro

For example, a creator could build a high-fidelity CGI scene like One Morning (a Lytro Reality Experience which the company recently revealed) with their favorite industry-standard rendering and animation tools, and then output that experience as a Lytro Reality Experience which can be deployed across high-end headsets, low-end headsets, and even 360 video without needing to modify the source content for the specific capabilities of each device, and without giving up the graphical quality of raytraced, pre-rendered content.

Lytro is keeping its tools close to the chest for now; the company is working one on one with select customers to release more Lytro Reality Experiences, and encourages anyone interested to get in touch.

An example of an incredibly detailed Lytro Reality Experience. The company says that high fidelity light-field scenes like this will be able to seamlessly merge with real-time interactive content. | Image courtesy Lytro

I’ve seen a number of the company’s latest Lytro Reality Experiences (like Hallelujah) played back along the spectrum of devices, from high-end desktops suitable only for out-of-home VR arcades, all the way down to 360 playback on an iPad. The idea is to maximize the fidelity and experience to the greatest degree that each device can support. On the high-end desktop, as seen through a VR headset, that means maximum quality imagery generated on the fly from the light-field dataset with 6DOF tracking. For less capable computers or mobile headsets, the same scene would be represented as baked-down 3D geometry, while mobile devices would get a high quality 360 video rendered at up to 10K resolution—all using the same pre-rendered source assets.

– – — – –

The appeal of the light-field approach is certainly clear, especially for creators seeking to make narrative experiences that go beyond what can be rendered in real-time, even with top-of-the-line hardware.

Since light-fields are pre-rendered however, they can’t be interactive in the same way that traditional real-time rendering can be. Rosenthal acknowledges the limitation and says that Lytro is soon to debut integrations with leading game engines which will make it easy to mix and match light-field and real-time content in a single experience—a capability which opens up some very interesting possibilities for the future of VR content.


For all of the interesting potential of light-fields, one persistent hurdle has hampered their adoption: file sizes. Even small light-field scenes can constitute huge amounts of data, so much that it becomes challenging to deliver experiences to customers without resorting to massive static file downloads. Lytro is well aware of the challenge and has been aggressively working on solutions to reduce data sizes. While Rosenthal says the company is still working on reining in light-field file sizes, the company provided Road to VR with a fresh look at their current data envelopes for each consumption end-point:

  • 0.5TB/minute for 6DoF in-venue
  • 2.7GB/minute for in-home desktop
  • 2.5GB/minute for tablet/mobile devices
  • 9.8MB/minute for 360 omnistereo

The above is all based on the company’s Hallelujah experience, as optimized for each consumption end-point. Think these numbers are scary? They were much higher not that long ago. Lytro has also teased “interesting work” still in development which it claims will reduce the above figures by some 75%.
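To make those rates concrete, here is a rough sketch (plain arithmetic on the figures above; the five-minute runtime is a made-up example):

```python
# Back-of-the-envelope math on the per-minute data envelopes listed above
# (the Hallelujah figures) plus the teased ~75% reduction. This is just
# arithmetic on the article's numbers, not additional Lytro data.
envelopes_gb_per_minute = {
    "6DoF in-venue": 0.5 * 1024,   # 0.5 TB/min expressed in GB
    "in-home desktop": 2.7,
    "tablet/mobile": 2.5,
    "360 omnistereo": 9.8 / 1024,  # 9.8 MB/min expressed in GB
}

minutes = 5  # a hypothetical five-minute experience
for tier, gb_per_min in envelopes_gb_per_minute.items():
    total_gb = gb_per_min * minutes
    reduced_gb = total_gb * 0.25  # if the teased ~75% reduction materializes
    print(f"{tier}: {total_gb:,.2f} GB -> ~{reduced_gb:,.2f} GB")
```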

Despite Lytro’s vision and growing toolset, and an ongoing effort to battle file sizes into submission, there’s still no publicly available demo of their technology that can be seen through your own headset. Fortunately the company expects that the first public Lytro Reality Experience from one of their partners will launch to the public starting in Q1 2018.


Exclusive: Lytro Reveals Immerge 2.0 Light-field Camera with Improved Quality, Faster Captures

Lytro’s Immerge light-field camera is meant for professional high-end VR productions. It may be a beast of a rig, but it’s capable of capturing some of the best looking volumetric video that I’ve had my eyes on yet. The company has revealed a major update to the camera, the Immerge 2.0, which, through a few smart tweaks, makes for much more efficient production and higher quality output.

Light-field specialist Lytro, which picked up a $60 million Series D investment earlier this year, is making impressive strides in its light-field capture and playback technology. The company is approaching light-field from both live-action and synthetic ends; last month Lytro announced Volume Tracer, a software which generates light-fields from pre-rendered CG content, enabling ultra-high fidelity VR imagery that retains immersive 6DOF viewing.

Immerge 2.0

Immerge 2.0 | Image courtesy Lytro

On the live-action end, the company has been building a high-end light-field camera which they call Immerge. Designed for high-end productions, the camera is actually a huge array of individual lenses which all work in unison to capture light-fields of the real world.

At a recent visit to the company’s Silicon Valley office, Lytro exclusively revealed to Road to VR the latest iteration of the camera, which they’re calling Immerge 2.0. The form-factor is largely the same as before—an array of lenses all working together to capture the scene from many simultaneous viewpoints—but you’ll note an important difference if you look closely: the Immerge 2.0 has alternating rows of cameras pointed off-axis in opposite directions.

With the change to the camera angles, and tweaks to the underlying software, the lenses on Immerge 2.0 effectively act as one giant camera that has a wider field of view than any of the individual lenses, now 120 degrees (compared to 90 degrees on the Immerge 1.0).

Image courtesy Lytro

In practice, this can make a big difference to the camera’s bottom line: a wider field of view allows the camera to capture more of the scene at once, which means fewer rotations are needed to capture a complete 360 degree shot (now as few as three spins, instead of five), and it provides larger areas for actors to perform in. A new automatic calibration process further speeds things up. All of this means increased production efficiency, faster iteration time, and more creative flexibility: all the right notes to hit if the goal is to one day make live action light-field capture easy enough to achieve widespread traction in professional VR content production.

Ever Increasing Quality

Lytro has also been refining their software stack which allows them to pull increasingly higher quality imagery derived from the light-field data. I saw a remastered version of the Hallelujah experience which I had seen earlier this year, this time outputting 5K per-eye (up from 3.5K) and employing a new anti-aliasing-like technique. Looking at the old and new version side-by-side revealed a much cleaner outline around the main character, sharper textures, and distant details with greater stereoscopy (especially in thin objects like ropes and bars) that were previously muddled.

What’s more, Lytro says they’re ready to bump the quality up to 10K per-eye, but are waiting for headsets that can take advantage of such pixel density. One interesting aspect of all of this is that many of the quality-enhancing changes that Lytro has made to their software can be applied to light-field data captured prior to the changes, which suggests a certain amount of future-proofing available to the company’s light-field captures.

– – — – –

Lytro appears to be making steady progress on both live action and synthetic light-field capture & playback technologies, but one thing that’s continuously irked those following their story is that none of their light-field content has been available to the public—at least not in a proper volumetric video format. On that front, the company promises that’s soon to be remedied, and has teased that a new piece of content is in the works and slated for a public Q1 release across all classes of immersive headsets. With a bit of luck, it shouldn’t be too much longer until you can check out what the Immerge 2.0 camera can do through your own VR headset.


Lytro On Their Light-Field Technology And Taking Virtual Reality Where It Needs To Be

Last month VRFocus went to AR and VR on the Lot (previously known as just VR on the Lot) to uncover what Hollywood’s high-end blockbuster creatives are looking at, and caught up with Lytro, a company specialising in light-field cameras and technology.

The change of the event’s name may tell you how the introduction of augmented reality (AR) applications such as Apple’s ARKit and Google’s ARCore is shaping the future of content in the entertainment industry. VRFocus spoke to Orin Green, the VR/VFX Supervisor at Lytro Inc., about what they’re focusing on, the virtual reality (VR) experiences they were showcasing, and what they were doing at AR & VR on the Lot.

Green explains that Lytro’s focus is on live action and animated capture, enabling content creators to have the freedom to use both 3D models or real-life locations for future projects. He said that through Lytro’s ability to capture light-fields, it opens up a whole new world for individuals looking to use true 6 Degrees of Freedom (6 DoF) for their VR experiences, allowing the user to move around freely in a more natural way, for a more immersive experience.

The mechanical bird and laser-like head from ‘One Morning’

A lot of 360 VR experiences have problems creating realistic environments or experiences; the user is normally stuck at the position of the camera and therefore unable to move around the space. The freedom of movement allowed by 6 DoF virtual reality would enable a user to walk around a 3D environment that looks highly realistic. Examples of the environments Lytro has created with its technology can be seen in the video below.

Lytro showcased three VR experiences at AR and VR on the Lot. The first is Hallelujah, which is also available on Within’s website and app, with singer and composer Bobby Halvorson performing Leonard Cohen’s classic track. The musical experience premiered at the Tribeca Film Festival in the U.S. and internationally at the Cannes Film Festival in France. In the experience you see Halvorson start singing in front of you, then several versions of him as he builds a unique five-part a cappella arrangement around you. The piece escalates until you suddenly find yourself inside a church, where Halvorson is joined by a choir.

The second VR experience allowed users to walk around a fully rendered CGI environment in 6 DoF. “Right now when you get animated environments, they often get generated from game engines which have very limited rendering capabilities,” Green explains. “With our system, you don’t have to be limited by the game engine.”

Their third experience, One Morning, is the first light field animated short film in history, directed by Rodrigo Blaas. Blaas did animation work on Wall-E (2008), Up (2009) and Finding Nemo (2003), and worked for years at Pixar and DreamWorks Animation. In the short VR experience, a small mechanical bird with a red laser head comes around a blue car to inspect you, then suddenly a larger mechanical bird (presumably its mother or father) appears.

Lytro attended AR and VR on the Lot not only to showcase its technology and what it could mean for future VR content, but also to make new contacts and meet creators interested in partnering with Lytro on future content creation. Watch the video below to find out more.

Lytro Announces VR Light Field Rendering Software ‘Volume Tracer’

Lytro, the once consumer-facing light field camera company which has recently pivoted to create high-end production tools, has announced light field rendering software for VR that essentially aims to free developers from the current limitations of real-time rendering. The company calls it ‘Volume Tracer’.

Light field cameras are typically hailed as the next necessary technology in bridging the gap between real-time rendered experiences and 360 video—two VR content types that for now act as bookends on the spectrum of immersive to less-than-immersive virtual media. Professional-level light field cameras, like Lytro’s Immerge prototype, aren’t yet in common use though, but light fields don’t have to be generated with expensive, bulky physical cameras.

The company’s newest software-only solution, Volume Tracer, places multiple virtual cameras within a view volume of an existing 3D scene that might otherwise be expected to be rendered in real-time. Because developers who create real-time rendered VR experiences constantly fight to hit the necessary 90 fps required for comfortable play, and have to do so in a pretty tight envelope—both Oculus and HTC agree on a recommended GPU spec of NVIDIA GTX 1060 or AMD Radeon RX 480—the appeal of graphics power-saving light fields is pretty apparent.
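To put that 90 fps target in concrete terms (simple arithmetic, not a figure quoted by Lytro or the headset makers), the per-frame rendering budget is roughly

```latex
t_{\mathrm{frame}} = \frac{1}{90\,\mathrm{Hz}} \approx 11.1\ \mathrm{ms}
```

leaving little headroom per frame, which is exactly the constraint the pre-rendered light field approach is meant to relieve.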

Lytro breaks it down on their website, saying “each individual pixel in these 2D renders provide sample information for tracing the light rays in the scene, enabling a complete Light Field volume for high fidelity, immersive playback.”

According to Lytro, content created with Volume Tracer provides view-dependent illumination including specular highlights, reflections, refractions, etc., and is scalable to any volume of space, from seated to room-scale sizes. It also presents a compelling case for developers looking to eke out as much visual detail as possible by hooking into industry-standard 3D modeling and rendering tools like Maya, 3DS Max, Nuke, Houdini, V-Ray, Arnold, Maxwell, and Renderman.
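The following is a minimal sketch of the general idea described above: virtual cameras sampled on a grid inside a view volume, each producing a 2D render whose pixels sample light rays. The grid spacing, volume size, and the render_view stand-in are all assumptions for illustration; this is not Lytro’s actual Volume Tracer implementation.

```python
import itertools

def render_view(camera_pos):
    """Stand-in for an offline renderer call (hypothetical); a real pipeline
    would return a full 2D image rendered from camera_pos."""
    return [[0.0]]  # dummy 1x1 "image" so the sketch runs

def sample_view_volume(center, half_size=0.5, step=0.1):
    """Place virtual cameras on a regular grid inside a cubic view volume and
    render one 2D image per position; each rendered pixel is a sample of a
    light ray crossing the volume, from which novel in-volume views can later
    be reconstructed."""
    n = int(round(half_size / step))
    offsets = [i * step for i in range(-n, n + 1)]
    cx, cy, cz = center
    samples = {}
    for dx, dy, dz in itertools.product(offsets, repeat=3):
        pos = (cx + dx, cy + dy, cz + dz)
        samples[pos] = render_view(pos)
    return samples

# Example: a 1 m cube of viewpoints centered at standing eye height.
views = sample_view_volume((0.0, 1.6, 0.0))
print(len(views))  # 11 x 11 x 11 = 1331 rendered viewpoints
```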

Real-time playback with positional tracking is also possible on Oculus Rift and HTC Vive at up to a 90 fps refresh rate.

One Morning, an animated short directed by former Pixar animator Rodrigo Blaas that tells the story of a brief encounter with a robot bird, was built on the Nimble Collective, rendered in Maxwell, and then brought to life in VR with Lytro Volume Tracer.

“What Lytro is doing with its tech is bringing something you haven’t seen before; it hasn’t been possible. Once you put the headset on and experience it, you don’t want to go back. It’s a new reality,” said Blaas.

Volume Tracer doesn’t seem to be currently available for download, but keep an eye on Lytro’s site and sign up for their newsletter for more info.


Otoy Wants to Make Light-field Rendering Affordable with a Supercomputing Cluster You Get Paid to Be Part Of

Otoy has announced the Render Token, a blockchain-based currency that underpins a distributed GPU rendering network. The company hopes to allow idle GPUs on consumer PCs to be tapped for rendering work, earning money for the owner in exchange for their computer’s work. The goal, Otoy says, is to make massive GPU rendering power available at low cost for rendering light-fields and more.

Otoy is a maker of rendering tools and a proponent of light-fields as the next-generation format of capture and display for AR and VR. Light-fields can be thought of as volumetric representations of a scene, where every view possible has already been calculated, allowing for real-time playback of cinema-quality scenery, even in demanding applications like virtual reality. Sounds great, right?

One problem with the practical application of light-fields is that they’re expensive to render, both computationally and temporally. If you want to farm your render out to the cloud to get it done in a reasonable amount of time, you can expect to pay a hefty fee.

For a company that’s pushing light-field as the future of immersive content, that rendering cost is a major blocker to adoption. And so on a quest to make GPU rendering dramatically more affordable, Otoy is mashing up the ideas of distributed supercomputing clusters and the blockchain with the hopes of creating a decentralized cloud rendering network that runs rendering tasks on idle GPUs in exchange for payment in the form of a cryptocurrency.

Introducing Render Token

The result is what Otoy calls the Render Token (RNDR). It’s a cryptocurrency coin based on the Ethereum blockchain, and the company says it’s the payment that will be used to incentivize and compensate participants in the rendering network for the use of their GPU power.

Distributed Computing Isn’t Exactly New

The idea of a distributed supercomputing cluster isn’t new. You may have heard of Folding@home or SETI@home, two popular distributed computing initiatives which borrowed unused computational power from idle computers running a piece of client software. But that computational power was offered by users on a volunteer basis. Now that blockchain technology (the underlying structure of cryptocurrencies) has been proven out, there’s a trusted method to distribute payments among a network of computers performing work for paying customers.


Intrinsic Human Value

Typical cryptocurrencies work by incentivizing so-called ‘miners’ to run software on their computers to log and process cryptocurrency transactions for the whole network, and in exchange receive small bits of the cryptocurrency for their work. But all that processing power spent on number crunching is wasted, argues Otoy CEO Jules Urbach in his introduction of RNDR.

GPU hashing [AKA mining] incurs real world energy and cap-ex costs which return less and less value to the crypto-community as the blockchain grows. Over time, and on a global scale, this becomes enormously wasteful as GPU compute cycles are essentially thrown away hashing numbers with no intrinsic human value, while GPU rendering power on AWS remains scarce at $14.4/hour ( ~1000 OctaneBench).

Instead, Urbach says, the fundamental mining work that underpins cryptocurrencies could be used to produce valuable output in the form of rendered imagery.

The Render Token recalibrates the weighting of GPUs in the network, making it possible for each transaction on the blockchain to validate far greater value of equivalent GPU proof-of-render work that is valuable for real world jobs that are prohibitively expensive to fulfill quickly on local or centralized GPUs.
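For scale, the AWS figure in the quote above works out to (simple division, not an Otoy number):

```latex
\frac{\$14.4\ /\ \mathrm{hour}}{\sim 1000\ \mathrm{OctaneBench}} \approx \$0.0144\ \text{per OctaneBench-hour}
```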

ICO Incoming

If you’re at all familiar with cryptocurrencies, you’ll know where this is all heading… an ICO. Otoy plans to make an ‘Initial Coin Offering’, which is a sale of the first Render Tokens. It’s both a way for Otoy to raise capital for their initiative and to establish the initial value of each Render Token. The company will offer a limited number of tokens, and, according to the Render Token White Paper, hopes to sell $134 million worth to support the project, presumably cutting off the supply after that amount is raised. That wouldn’t be the largest ICO to date (that would be Filecoin at $250M+, according to The Cointelegraph), but it’s not far off. Here’s how Otoy says they’ll spend the funds:

40% – will go to future development of each expansion phase (I-IV) and will support the team dedicated to the operations and engineering of the Render Token platform.

25% – running, maintaining, and scaling the network – this will include developing and creating new and more efficient solutions for rendering through custom built GPU solutions, effectively lowering the price of rendering across the network and the world.

20% – will be allocated to marketing and expanding the applications and reach and use-cases of the network.

10% – for third party services and contractors providing guidance and efficiencies to the project.

5% – for unforeseen roadblocks and circumstances.

Buying (or Selling) Rendering Power

Render Tokens can then be spent by their owners to pay for rendering work on the network, or sold to others in exchange for different currencies. Their ultimate value will be determined over time by the market, with prospective purchasers hoping the value will increase following the ICO.


More Than Light-fields

Light-fields are particularly compelling for AR and VR, and Otoy hopes that the Render Token platform will make rendering them faster and more affordable, but light-field isn’t the only thing that the system can render; the company points to the following categories that could be disrupted if they achieve their vision of affordable, distributed rendering:

Media – From blockbuster films to home movies, RNDR brings affordable GPU compute to democratize advanced special effects and graphics. This will accelerate the arrival of holographic displays and avatars to change storytelling forever.

Gaming – Billions of consumers worldwide put unprecedented demands on 3D game engines. RNDR will provide the infrastructure and standards to uplevel gaming and finally bring cinematic rendering to interactive experiences.

Manufacturing – RNDR makes scientific-grade rendering available to any 3D object. Industry will be retooled as physics-accurate rendering transforms imaging from 3D visualization to intelligent 3D simulation.

Medical – Radiology is being overhauled by the introduction of high-level rendering. From surgeons to new medical students, RNDR will enable unprecedented levels of fidelity in medical imaging at a fraction of the speed and cost.

Virtual Reality – RNDR will bring economical light field media and streaming to allow any artist to create high quality VR experiences at 72K resolution and beyond —rendering an immersive Metaverse in stunning detail.

Augmented Reality – As the ARKit and ARCore revolutions take off, RNDR will make photorealistic objects and scenes on wearables and mobile devices a possibility by democratizing the authorship, registration and streaming of light fields and next gen media formats.

Mixed Reality – With the breakout successes of WeChat and SnapChat, the economy of virtual goods and services is only just beginning. RNDR will provide the key distribution system to monetize and track digital objects in the Metaverse.


Avegant’s Light Field Technology Aims to Improve Mixed Reality

The startup Avegant, which a few years ago released the Glyph, an entertainment center in the form of headphones, is now showing with a new prototype what the future of augmented reality could look like. Even back then the company was enthusiastic about AR and VR technology and developed wearable computing systems to match.

Avegant’s new prototype delivers razor-sharp MR imagery


During development, Avegant ran into a fundamental problem with mixed reality displays: they all have a fixed focal point. You can project virtual objects onto walls and manipulate them with the controllers, but it never quite feels like a truly real experience. Developer Edward Tang put it this way: “To perceive mixed reality as a truly real experience, I would have to be able to walk up to the projected objects and touch and feel them as if they were really right in front of me. So the focus has to be right.”

To achieve this, Avegant researched light field technology in order to render objects sharp or blurred as needed, depending on where the focus lies. The display thus presents the real world just as we see it.

Accordingly, Avegant developed a new prototype based on light field technology. The difference from other existing devices lies in its compatibility with established manufacturing techniques and existing accessories. The company does not yet want to reveal the device’s exact technical specifications, but the imagery is said to be razor-sharp, considerably sharper than current HD displays. In addition, the focus can shift depending on which objects are being viewed.

The mixed reality prototype in action

Avegant developed several demos to show off its technology. One of them shows the solar system with razor-sharp images of the planets. Depending on which planet you look at, the other planets recede into the background and become blurred. In the second MR experience, the user is placed in an aquarium, surrounded by fish, turtles, and rippling water. This experience is meant to feel as real as being immersed in an underwater world. The last demo shows an interaction with a virtual woman, who is meant to appear very lifelike thanks to details such as clearly visible freckles, and to convey emotions clearly through changing facial expressions.

According to Tang, the technology behind the prototype is already very mature and could go into production as is. It would be compatible with existing hardware, and the headset would work with NVIDIA chipsets or Snapdragon processors. When this will happen has not yet been decided, but the small startup could nonetheless contribute to advances in mixed reality in the future.

(Source: Engadget)


Lytro’s Latest VR Light-field Camera is Huge, and Hugely Improved

In the last few years, Lytro has made a major pivot away from consumer-facing digital camera products to high-end production cameras and tools, with a major part of the company’s focus on the ‘Immerge’ light-field camera for VR. In February, Lytro announced it had raised another $60 million to continue developing the tech. I recently stopped by the company’s offices to see the latest version of the camera and the major improvements in capture quality that come with it.

The first piece of content captured with an Immerge prototype was the ‘Moon’ experience which Lytro revealed back in August of 2016. This was a benchmark moment for the company, a test of what the Immerge camera could do:

Now, to quickly familiarize yourself with what makes a light-field camera special for VR, the important thing to understand is that light-field cameras shoot volumetric video. So while the basic cameras of a 360-degree video rig output flat frames of the scene, a light-field camera is essentially capturing enough data to recreate the scene as complete 3D geometry as seen from within a certain volume. The major advantage is the ability to play the scene back through a VR headset with truly accurate stereo and allow the viewer to have proper positional tracking inside the video, both of which result in a much more immersive experience, or what we recently called “the future of VR video.” There are also more advantages of light-field capture that will come later down the road when we start seeing headsets equipped with light-field displays… but that’s for another day.

Lytro’s older Immerge prototype, note that many of the optical elements have been removed | Photo by Road to VR

So, the Moon experience captured with Lytro’s early Immerge prototype did achieve all those great things that light-field is known for, but it wasn’t good enough just yet. It’s hard to tell unless you’re seeing it through a VR headset, but the Moon capture had two notable issues: 1) it had a very limited capture volume (meaning the space around which your head can freely move while keeping the image intact), and 2) the fidelity wasn’t there yet; static objects looked great, but moving actors and objects in the scene exhibited grainy outlines.

So Lytro took what they learned from the Moon shoot, went back to the drawing board, and created a totally new Immerge prototype, which solved those problems so effectively that the company now proudly says their camera is “production ready” (no joke: scroll to the bottom of this page on their website and you can submit a request to shoot with the camera).

Photo courtesy Lytro

The new, physically larger Immerge prototype brings a bigger capture volume, which means the viewer has more freedom of movement inside the capture. And the higher quality cameras provide more data, allowing for greater capture and playback fidelity. The latest Immerge camera is significantly larger than the prototype that captured the Moon experience, by about four times. It features a whopping 95-element planar light-field array with a 90-degree field of view. Those 95 elements are larger than on the precursor too, capturing higher quality data.

I got to see a brand new production captured with the latest Immerge camera, and while I can’t talk much about the content (or unfortunately show any of it), I can talk about the leap in quality.

Photo by Road to VR

The move from Moon to this new production is substantial. Not only does the apparent resolution feel higher (leading to sharper ‘textures’), but the depth information is more precise which has largely eliminated the grainy outlines around non-static scene elements. That improved depth data has something of a double-bonus on visual quality, because sharper captures enhance the stereoscopic effect by creating better edge contrast.

Do you recall early renders of a spherical Immerge camera? Purportedly due to feedback informed by early productions using a spherical approach, the company decided to switch to a flat (planar) capture design. With this approach, capturing a 360 degree view requires the camera to be rotated to individually shoot each side of an eventual pentagonal capture volume. This sounds harder than capturing the scene all at once in 360, but Lytro says it’s easier for the production process.
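A quick bit of arithmetic on that layout (derived from the pentagonal capture volume and the 90-degree field of view mentioned above; the overlap reading is my own interpretation, not something Lytro stated):

```latex
\frac{360^{\circ}}{5\ \text{sides}} = 72^{\circ}\ \text{of rotation per shot}, \qquad 90^{\circ}_{\mathrm{FOV}} - 72^{\circ} = 18^{\circ}\ \text{of overlap between adjacent captures}
```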

The size of the capture volume has been increased significantly over Moon, though it can still feel limiting at this size. While you’re well covered for any reasonable movements you’d do while your butt is planted in a chair, if you were to take a large step in any direction, you’ll still leave the capture volume (causing the scene to fade to black until you step back inside).

And, although this has little to do with the camera, the experience I saw featured incredibly well-mixed spatial audio which sold the depth and directionality of the light-field capture in which I was standing. I was left very impressed with what Immerge is now capable of capturing.

The new camera is impressive, but the magic is not just in the hardware, it’s also in the software. Lytro is developing custom tools to fuse all the captured information into a coherent form for dynamic playback, and to aid production and post-production staff along the way. The company doesn’t succeed just by making a great light-field camera; they’re responsible for creating a complete and practical pipeline that actually delivers value to those who want to shoot VR content. Light-field capture provides a great many benefits, but it needs to be easy to use at production scale, something that Lytro is focusing on just as heavily as the hardware itself.

All-in-all, seeing Lytro’s latest work with Immerge has further convinced me that today’s de facto 360-degree film capture is a stopgap. When it comes to cinematic VR film production, volumetric capture is the future, and Lytro is on the bleeding edge.
