Stunning View Synthesis Algorithm Could Have Huge Implications for VR Capture

As far as live-action VR video is concerned, volumetric video is the gold standard for immersion. And for static scene capture, the same holds true for photogrammetry. But both methods have limitations that detract from realism, especially when it comes to ‘view-dependent’ effects like specular highlights and lensing through translucent objects. Research from Thailand’s Vidyasirimedhi Institute of Science and Technology shows a stunning view synthesis algorithm that significantly boosts realism by handling such lighting effects accurately.

Researchers from the Vidyasirimedhi Institute of Science and Technology in Rayong, Thailand published work earlier this year on a real-time view synthesis algorithm called NeX. Its goal is to use just a handful of input images from a scene to synthesize new frames that realistically portray the scene from arbitrary points between the real images.

Researchers Suttisak Wizadwongsa, Pakkapon Phongthawee, Jiraphon Yenphraphai, and Supasorn Suwajanakorn write that the work builds on top of a technique called multiplane image (MPI). Compared to prior methods, they say their approach better models view-dependent effects (like specular highlights) and creates sharper synthesized imagery.
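For a sense of how an MPI-based, view-dependent representation like this gets turned into an image, here is a minimal compositing sketch; it assumes the planes have already been warped into the target view, and the array shapes and names are illustrative rather than taken from the NeX code. Each pixel's color is a view-independent base color plus learned coefficients weighted by basis functions of the viewing direction, composited plane by plane:

```python
# Minimal sketch of NeX-style view-dependent MPI compositing (illustrative only).
import numpy as np

def composite_nex_mpi(alphas, base_rgb, coeffs, basis_values):
    """
    alphas:       (D, H, W)       per-plane opacity
    base_rgb:     (D, H, W, 3)    view-independent base color k0
    coeffs:       (D, H, W, K, 3) per-pixel reflectance coefficients k1..kK
    basis_values: (K,)            basis functions H_k(v) evaluated for the view direction v
    returns:      (H, W, 3)       composited image for this view
    """
    # View-dependent color: k0 + sum_k k_k * H_k(v)
    rgb = base_rgb + np.einsum('dhwkc,k->dhwc', coeffs, basis_values)
    out = np.zeros(base_rgb.shape[1:])
    transmittance = np.ones(alphas.shape[1:])
    # Composite planes front to back (plane 0 nearest the camera).
    for d in range(alphas.shape[0]):
        out += transmittance[..., None] * alphas[d, ..., None] * rgb[d]
        transmittance *= (1.0 - alphas[d])
    return np.clip(out, 0.0, 1.0)
```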

On top of those improvements, the team has highly optimized the system, allowing it to run easily at 60Hz—a claimed 1000x improvement over the previous state of the art. And I have to say, the results are stunning.

Though the system isn't yet highly optimized for the use-case, the researchers have already tested it in a VR headset with stereo depth and full 6DOF movement.

The researchers conclude:

Our representation is effective in capturing and reproducing complex view-dependent effects and efficient to compute on standard graphics hardware, thus allowing real-time rendering. Extensive studies on public datasets and our more challenging dataset demonstrate state-of-art quality of our approach. We believe neural basis expansion can be applied to the general problem of light-field factorization and enable efficient rendering for other scene representations not limited to MPI. Our insight that some reflectance parameters and high-frequency texture can be optimized explicitly can also help recovering fine detail, a challenge faced by existing implicit neural representations.

You can find the full paper at the NeX project website, which includes demos you can try for yourself right in the browser. There are also WebVR-based demos that work with PC VR headsets if you’re using Firefox, though they unfortunately don’t work with Quest’s browser.

Notice the reflections in the wood and the complex highlights in the pitcher’s handle! View-dependent details like these are very difficult for existing volumetric and photogrammetric capture methods.

Volumetric video capture that I’ve seen in VR usually gets very confused about these sorts of view-dependent effects, often having trouble determining the appropriate stereo depth for specular highlights.

Photogrammetry, or ‘scene scanning’ approaches, typically ‘bake’ the scene’s lighting into textures, which often makes translucent objects look like cardboard (since the lighting highlights don’t move correctly as you view the object at different angles).

The NeX view synthesis research could significantly improve the realism of volumetric capture and playback in VR going forward.


Google’s Project Starline is a Light-field Display System for Immersive Video Calls

This week Google revealed Project Starline, a booth-sized experimental system for immersive video chatting, purportedly using a bevy of sensors, a light-field display, spatial audio, and novel compression to make the whole experience possible over the web.

This week during Google I/O, the company revealed an experimental immersive video chatting system it calls Project Starline. Functionally, it’s a large booth with a big screen which displays another person on the other end of the line at life-sized scale and volumetrically.

Image courtesy Google

The idea is to make the tech seamless enough that it really just looks like you’re seeing someone else sitting a few feet away from you. Though you might imagine the project was inspired by the pandemic, the company says the project has been “years in the making.”

Google isn’t talking much about the tech that makes it all work (the phrase “custom built hardware” has been thrown around), but we can infer what a system like this would require:

  • An immersive display, speakers, and microphone
  • Depth & RGB sensors capable of capturing roughly 180° of the subject
  • Algorithms to fuse the data from multiple sensors into a real-time 3D model of the subject (see the sketch below)
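That last item is the heavy lifting. As a purely illustrative sketch (the function and intrinsic values below are hypothetical, not anything Google has described), fusing sensor data starts with back-projecting each RGB-D frame into a 3D point cloud:

```python
# Hypothetical sketch: back-projecting one RGB-D frame into a 3D point cloud,
# the basic building block a Starline-like system would need before fusing
# multiple sensor views. Intrinsics (fx, fy, cx, cy) are illustrative values.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """depth: (H, W) array in meters; returns (H*W, 3) points in camera space."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx          # pinhole camera model
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

# Points from several calibrated sensors would then be transformed into a
# common coordinate frame (via each sensor's extrinsics) and fused/meshed.
```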

Google also says that novel data compression and streaming algorithms are an essential part of the system. The company claims that the raw data is “gigabits per second,” and that the compression cuts that down by a factor of 100. According to a preview of Project Starline by Wired, the networking is built atop WebRTC, a popular open-source project for adding real-time communication components to web applications.
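As a back-of-the-envelope check (the 4 Gbps raw figure below is an assumption for illustration; Google has only said "gigabits per second"), a 100x reduction would put the stream within reach of ordinary broadband:

```python
# Back-of-the-envelope only: what "gigabits per second" cut by ~100x implies.
raw_gbps = 4.0                            # assumed raw sensor bandwidth (illustrative)
compressed_mbps = raw_gbps * 1000 / 100   # factor-of-100 compression claimed by Google
print(compressed_mbps, "Mbps")            # ~40 Mbps, in the range of a good home connection
```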

As for the display, Google claims it has built a “breakthrough light-field display” for Project Starline. Indeed, from the footage provided, it’s a remarkably high resolution recreation; it isn’t perfect (you can see artifacts here and there), but it’s definitely impressive, especially for real-time.

Granted, it isn’t yet clear exactly how the display works, or whether it fits the genuine definition of a light-field display (which can support both vergence and accommodation), or if Google means something else, like a 3D display showing volumetric content based on eye-tracking input. Hopefully we’ll get more info eventually.

One hint about how the display works comes from the Wired preview of Project Starline, in which reporter Lauren Goode notes that, “[…] some of the surreality faded each time I shifted in my seat. Move to the side just a few inches and the illusion of volume disappears. Suddenly you’re looking at a 2D version of your video chat partner again […].” This suggests the display has a relatively small eye-box (meaning the view is only correct if your eyes are inside a specific area), which is likely a result of the particular display tech being employed. One guess is that the tech is similar to the Looking Glass displays, but Google has traded eye-box size in favor of resolution.

Image courtesy Google

From the info Google has put out so far, the company indicates Project Starline is early and far from productization. But the company plans to continue experimenting with the system and says it will pilot the tech in select large enterprises later this year.


CREAL Shows New Light-Field AR and VR Prototypes

CREAL has reached another milestone and is presenting, for the first time, prototypes of AR and VR headsets built around light-field displays.

CREAL Shows New Light-Field AR and VR Prototypes

What makes the technology special is that it lets the eye focus at different distances even while using a VR headset. Because the display technique comes much closer to real-world vision than previous technologies, it could also make for a more immersive and more comfortable VR experience.

The current prototypes, however, are not meant as a first step toward CREAL’s own AR or VR headset. The company wants to put the prototypes in the hands of interested partners to convince them of its light-field technology.

Creal’s AR Glasses

Creal’s AR glasses reportedly offer a resolution of 1,000 × 1,000 pixels and a 60-degree field of view. The glasses are also said to offer effectively infinite depth resolution, that is, an unlimited number of points to focus on. Of course there aren’t literally infinitely many focal planes, but reportedly more than the human eye can distinguish.

The AR glasses have to be tethered to a PC, use Intel RealSense for 6DOF tracking, and use Ultraleap for hand tracking.

Even if the prototype still looks somewhat rough and bulky today, a design very close to ordinary eyeglasses is said to be possible as early as the end of 2022.

Creal’s VR Headset

For the VR headset, Creal relies on 1,000 × 1,000-pixel light-field displays with a 30-degree field of view. These displays cover only the central portion of the image; in addition, conventional 1,600 × 1,440-pixel displays bring the headset up to a 100-degree field of view. We already know a similar approach from Varjo’s headsets, though those don’t use light-field technology.

The VR headset likewise uses Intel RealSense for 6DOF tracking, and it additionally has an eye-tracking module built in, which reportedly isn’t strictly required.

(Source: Road to VR, Creal)


Google Takes a Step Closer to Making Volumetric VR Video Streaming a Thing

Google unveiled a method of capturing and streaming volumetric video, something Google researchers say can be compressed down to a lightweight format capable of even being rendered on standalone VR/AR headsets.

Both monoscopic and stereoscopic 360 video are flawed insofar as they don’t allow the VR user to move their head completely within a 3D area; you can rotationally look up, down, left, right, and side to side (3DOF), but you can’t positionally lean back or forward, stand up or sit down, or move your head’s position to look around something (6DOF). Even seated, you’d be surprised at how often you move in your chair, or make micro-adjustments with your neck, something that when coupled with a standard 360 video makes you feel like you’re ‘pulling’ the world along with your head. Not exactly ideal.

Volumetric video is instead about capturing how light exists in the physical world, and displaying it so VR users can move their heads around naturally. That means you’ll be able to look around something in a video because that extra light (and geometry) data has been captured from multiple viewpoints. While Google didn’t invent the idea—we’ve seen something similar from NextVR before it was acquired by Apple—it’s certainly making strides to reduce overall cost and finally make volumetric video a thing.

In a paper published ahead of SIGGRAPH 2020, Google researchers accomplish this by creating a custom array of 46 time-synchronized action cams mounted on a 92cm-diameter dome. This gives the user an 80cm area of positional movement, while also delivering 10 pixels per degree of angular resolution, a 220+ degree field of view, and 30fps video capture. Check out the results below.

 

The researchers say the system can reconstruct objects as close as 20cm to the camera rig, which is thanks to a recently introduced interpolation algorithm in Google’s deep learning system DeepView.

This is done by replacing its underlying multi-plane image (MPI) scene representation with a collection of spherical shells which are better suited for representing panoramic light field content, researchers say.


“We further process this data to reduce the large number of shell layers to a small, fixed number of RGBA+depth layers without significant loss in visual quality. The resulting RGB, alpha, and depth channels in these layers are then compressed using conventional texture atlasing and video compression techniques. The final, compressed representation is lightweight and can be rendered on mobile VR/AR platforms or in a web browser,” Google researchers conclude.

In practice, what Google is introducing here is a more cost-effective solution that may eventually spur the company to create its own volumetric immersive video team, much as it did with its 2015-era Google Jump 360 rig project before it was shuttered last year. That’s of course provided Google further supports the project by, say, adding support for volumetric video to YouTube and releasing an open source plan for the camera array itself. Whatever the case, volumetric video, or what Google refers to in the paper as Light Field video, is starting to look like a viable step forward for storytellers looking to drive the next chapter of immersive video.

If you’re looking for more examples of Google’s volumetric video, you can check them out here.


Hands-on: CREAL is Shrinking Its Light-field Display for AR & VR Headsets

Switzerland-based CREAL is developing a light-field display which it believes will fit into VR headsets and eventually AR glasses. An earlier tech demo showed impressive fundamentals, and this week at CES 2020 the company revealed its progress toward shrinking its tech toward a practical size.

Co-founded by former CERN engineers, CREAL is building a display that’s unlike anything in AR or VR headsets on the market today. The company’s display tech is the closest thing I’ve seen to a genuine light-field.

Why Light-fields Are a Big Deal

Knowing what a light-field is and why it’s important to AR and VR is key to understanding why CREAL’s tech could be a big deal, so let me drop a quick primer here:

Light-fields are significant to AR and VR because they’re a genuine representation of how light exists in the real world, and how we perceive it. Unfortunately they’re difficult to capture or generate, and arguably even harder to display.

Every AR and VR headset on the market today uses some tricks to try to make our eyes interpret what we’re seeing as if it’s actually there in front of us. Most headsets are using basic stereoscopy and that’s about it—the 3D effect gives a sense of depth to what’s otherwise a scene projected onto a flat plane at a fixed focal length.

Such headsets support vergence (the movement of both eyes to fuse two images into one image with depth), but not accommodation (the dynamic focus of each individual eye). That means that while your eyes are constantly changing their vergence, the accommodation is stuck in one place. Normally these two eye functions work unconsciously in sync, hence the so-called ‘vergence-accommodation conflict’ when they don’t.
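To make the mismatch concrete, here is a small illustrative calculation; the 63mm IPD and fixed 2m focal plane are assumed typical values, not figures for any particular headset:

```python
# Illustrative numbers only: vergence demand (how much the eyes rotate) vs.
# accommodation demand (where the eyes focus, in diopters) for a headset
# whose optics place the focal plane at a fixed 2 m.
import math

IPD = 0.063          # interpupillary distance in meters (assumed typical value)
FIXED_FOCUS_M = 2.0  # assumed fixed focal distance of the headset optics

for target_m in (0.3, 0.5, 2.0, 10.0):
    vergence_deg = 2 * math.degrees(math.atan(IPD / (2 * target_m)))
    accommodation_d = 1.0 / FIXED_FOCUS_M   # eye focus stuck at the display's focal plane
    required_d = 1.0 / target_m             # what the real world would demand
    print(f"{target_m:>5.1f} m: vergence {vergence_deg:4.1f} deg, "
          f"eye focuses at {accommodation_d:.1f} D but scene demands {required_d:.1f} D")
```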

On more advanced headsets, ‘varifocal’ approaches dynamically shift the focal length based on where you’re looking (with eye-tracking). Magic Leap, for instance, supports two focal lengths and jumps between them as needed. Oculus’ Half Dome prototype does the same, and—from what we know so far—seems to support a wide range of continuous focal lengths. Even so, these varifocal approaches still have some inherent issues that arise because they aren’t actually displaying light-fields.

More simply put, almost all headsets on the market today are displaying imagery that’s an imperfect representation of how we see the real world. CREAL’s approach aims to get us several steps closer.

That’s why I was impressed when I saw their tech demo at CES 2019. It was a huge, hulking box, but it generated a light-field in which, with one eye (and without eye-tracking), I could focus on objects at arbitrary depths (which means that accommodation, the focusing of the lens of the eye, works just like when you’re looking at the real world).

Above is raw, through-the-lens footage of the CREAL light-field display in which you can see the camera focusing on different parts of the image. (CREAL credits the 3D asset to Daniel Bystedt).

Slimming Down for AR & VR

At CES 2020 this week, CREAL showed its latest progress toward shrinking the tech to fit into AR and VR headsets.

Photo by Road to VR

Though the latest prototype isn’t yet on a head-mount, the company has shrunk the display and projection module (the ‘optical engine’) enough that it could reasonably fit on a head-worn device. The current bottleneck keeping it on a static mount is the electronics required to drive the optical engine, which are housed in a large box.

Photo by Road to VR

Shrinking those driving electronics is the next step; on that front, the company told me it already has a significantly reduced board which in the future will give way to an ASIC (a tiny chip) which could fit into a glasses-sized AR headset.

CREAL’s ‘benchmark’ tech demo | Photo by Road to VR

Looking through the CES 2020 demo, I could see that the company had replicated its light-field technology in a much smaller package, though with a smaller eye-box, a narrower field of view, and lower resolution than its much larger demo.

CREAL told me it intends to expand the field of view on the compact optical engine by projecting additional non-light-field imagery around the periphery.

This is very similar to the concept behind Varjo’s ‘retina resolution’ headset, which puts a high resolution display in the center of the view while filling out the periphery with lower resolution imagery. Except, where Varjo needs additional displays, CREAL says it can project the lower fidelity peripheral views from the same optical engine as the light-field itself.

The company explained that the reason for doing it this way (rather than simply showing a larger light-field) is that it reduces the computational complexity of the scene by shrinking the portion of the image which is a genuine light-field. This is ‘foveated rendering’, light-field style.

CREAL hopes to cover the entire fovea—the small portion in the center of your eye’s view which can see in high detail and color—with the light-field. The ultimate goal, then, would be to use eye-tracking to keep the central light-field portion of the view exactly aligned with the eye as it moves. If done right, this could make it feel like the entire field of view is covered by a light-field.
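As a rough illustration of that idea (CREAL hasn’t described its actual pipeline; the function below is hypothetical), a foveated composite amounts to pasting a small high-fidelity region, here standing in for the light-field portion, into a lower-fidelity full view at the gaze point:

```python
# A rough sketch of the "foveated" idea described above: paste a small,
# high-fidelity foveal patch into a lower-fidelity full-view image at the
# current gaze position. Purely illustrative, not CREAL's pipeline.
import numpy as np

def composite_foveated(periphery, fovea_patch, gaze_xy):
    """periphery: (H, W, 3); fovea_patch: (h, w, 3); gaze_xy: (x, y) pixel coords."""
    out = periphery.copy()
    h, w, _ = fovea_patch.shape
    x0 = int(np.clip(gaze_xy[0] - w // 2, 0, periphery.shape[1] - w))
    y0 = int(np.clip(gaze_xy[1] - h // 2, 0, periphery.shape[0] - h))
    out[y0:y0 + h, x0:x0 + w] = fovea_patch   # in practice this would be blended
    return out
```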

That’s all theoretically possible, but execution will be anything but easy.

A growing question is what level of quality the display tech can ultimately achieve. While the light-field itself is impressive, the demos so far don’t show good color representation or particularly high resolution. CREAL has been somewhat hesitant to detail exactly how their light-field display works, which makes it difficult for me to tell what might be a fundamental limitation rather than a straightforward optimization.

VR Before AR

The immediate next step, the company tells me, is to move from the current static demo to a head-mounted prototype. Further in the future the goal is to shrink things toward a truly glasses-sized AR device.

A mockup of the form-factor CREAL believes it can achieve in the long-run (this anticipates off-board compute and power). | Photo by Road to VR

Before the tech hits AR glasses though, CREAL thinks that VR headsets will be the first stop for its light-field tech considering more generous space requirements and a number of other challenges facing AR glasses (power, compute, tracking, etc).

CREAL doesn’t expect to bring its own headset to market, but is instead positioning itself to work with partners and eventually license its technology for use in their headsets. Development kits are available today for select partners, the company says, though it will likely still be a few years yet before the tech will be ready for prime time.


Founded by CERN Engineers, CREAL3D’s Light-field Display is the Real Deal

Co-founded by former CERN engineers who contributed to the ATLAS project at the Large Hadron Collider, CREAL3D is a Switzerland-based startup that’s created an impressive light-field display that’s unlike anything in an AR or VR headset on the market today.

At CES last week we saw and wrote about lots of cool stuff. But hidden in the less obvious places we found some pretty compelling bleeding-edge projects that might not be in this year’s upcoming headsets, but surely paint a promising picture for the next next-gen of AR and VR.

One of those projects wasn’t in CES’s AR/VR section at all. It was hiding in an unexpected place—one and a half miles away, in an entirely different part of the conference—blending in as two nondescript boxes on a tiny table among a band of Swiss startups representing at CES as part of the ‘Swiss Pavilion’.

It was there that I met Tomas Sluka and Tomáš Kubeš, former CERN scientists and co-founders of CREAL3D. They motioned to one of the boxes, each of which had an eyepiece to peer into. I stepped up, looked inside, and after one quick test I was immediately impressed—not with what I saw, but how I saw it. But it’ll take me a minute to explain why.

Photo by Road to VR

CREAL3D is building a light-field display. Near as I can tell, it’s the closest thing to a real light-field that I’ve personally had a chance to see with my own eyes.

Light-fields are significant to AR and VR because they’re a genuine representation of how light exists in the real world, and how we perceive it. Unfortunately they’re difficult to capture or generate, and arguably even harder to display.

Every AR and VR headset on the market today uses some tricks to try to make our eyes interpret what we’re seeing as if it’s actually there in front of us. Most headsets are using basic stereoscopy and that’s about it—the 3D effect gives a sense of depth to what’s otherwise a scene projected onto a flat plane at a fixed focal length.

Such headsets support vergence (the movement of both eyes to fuse two images into one image with depth), but not accommodation (the dynamic focus of each individual eye). That means that while your eyes are constantly changing their vergence, the accommodation is stuck in one place. Normally these two eye functions work unconsciously in sync, hence the so-called ‘vergence-accommodation conflict’ when they don’t.

More simply put, almost all headsets on the market today are displaying imagery that’s an imperfect representation of how we see the real world.

On more advanced headsets, ‘varifocal’ approaches dynamically shift the focal length based on where you’re looking (with eye-tracking). Magic Leap, for instance, supports two focal lengths and jumps between them as needed. Oculus’ Half Dome prototype does the same, and—from what we know so far—seems to support a wide range of continuous focal lengths. Even so, these varifocal approaches still have some inherent issues that arise because they aren’t actually displaying light-fields.

So, back to the quick test I did when I looked through the CREAL3D lens: inside I saw a little frog on a branch very close to my eye, and behind it was a tree. After looking at the frog, I focused on the tree which came into sharp focus while the frog became blurry. Then I looked back at the frog and saw a beautiful, natural blur blossom over the tree.

Above is raw, through-the-lens footage of the CREAL3D light-field display in which you can see the camera focusing on different parts of the image.

Why is this impressive? Well, I knew they weren’t using eye-tracking, so I knew what I was seeing wasn’t a typical varifocal system. And I was looking through a single lens, so I knew what I was seeing wasn’t mere vergence. This was accommodation at work (the dynamic focus of each individual eye).

The only explanation for being able to properly accommodate between two objects with a single eye (and without eye-tracking) is that I was looking at a real light-field—or at least something very close to one.

That beautiful blur I saw was the area of the scene not in focus of my eye, which can only bring one plane into focus at a time. You can see the same thing right now: close one eye, hold a finger up a few inches from your eye and focus on it. Now focus on something far behind your finger and watch as your finger becomes blurry.

This happens because the light from your finger and the light from the more distant objects enter your eye at different angles. When I looked into CREAL3D’s display, I saw the same thing, for the same reason—except I was looking at a computer generated image.

A little experiment with the display really drove this point home. Holding my smartphone up to the lens, I could tap on the frog and my camera would bring it into focus. I could also tap the tree and the focus would switch to the tree while the frog became blurry. As far as my smartphone’s camera was concerned… these were ‘real’ objects at ‘real’ focal depths.

Through-the-lens: focusing on the tree. | Image courtesy CREAL3D

That’s the long way of saying (sorry, light-fields can be confusing) that light-fields are the ideal way to display virtual or augmented imagery—because they inherently support all of the ‘features’ of natural human vision. And it appears that CREAL3D’s display does much of the same.

But, these are huge boxes sitting on a desk. Could this tech even fit into a headset? And how does it work anyway? Founders Sluka and Kubeš weren’t willing to offer much detail on their approach, but I learned as much as I could about the capabilities (and limitations) of the system.

The ‘how’ part is the least clear at this point. Sluka would only tell me that they’re using a projector, modulating the light in some way, and that the image is not a hologram, nor are they using a microlens array. The company believes this to be a novel approach, and that their synthetic light-field is closer to an analog light-field than any other they’re aware of.


Sluka tells me that the system supports “hundreds of depth-planes from zero to infinity,” with a logarithmic distribution (higher density of planes closer to the eye, and lower density further). He said that it’s also possible to achieve a depth-plane ‘behind’ the eye, meaning that the system can correct for prescription eyewear.
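The company hasn’t said exactly how those planes are distributed, but one common scheme that yields the described “denser near the eye” behavior is spacing planes uniformly in diopters (1/distance). A purely illustrative sketch, with assumed parameters rather than CREAL’s:

```python
# Illustrative only: spacing focal planes uniformly in diopters (1/distance)
# rather than in meters naturally puts more planes near the eye and fewer far
# away. The counts and range here are assumptions, not CREAL's actual parameters.
import numpy as np

def depth_planes(num_planes=200, nearest_m=0.25):
    max_diopters = 1.0 / nearest_m                               # nearest plane, e.g. 4 D = 0.25 m
    diopters = np.linspace(max_diopters, 0.0, num_planes, endpoint=False)
    return 1.0 / diopters                                        # distances in meters, densest up close

planes = depth_planes()
print(planes[:3], "...", planes[-3:])  # many planes under 1 m, sparse toward infinity
```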

The pair also told me that they believe the tech can be readily shrunk to fit into AR and VR headsets, and that the bulky devices shown at CES were just a proof of concept. The company expects that they could have their light-field displays ready for VR headsets this year, and shrunk all the way down to glasses-sized AR headsets by 2021.

At CES CREAL3D showed a monocular and binocular (pictured) version of their light-field display. | Photo by Road to VR

As for limitations, the display currently only supports 200 levels per color (RGB), and increasing the field of view and the eyebox will be a challenge because of the need to expand the scope of the light-field, though the team expects they can achieve a 100 degree field of view for VR headsets and a 60–90 degree field of view for AR headsets. I suspect that generating synthetic light-fields in real-time at high framerates will also be a computational challenge, though Sluka didn’t go into detail about the rendering process.

Through-the-lens: focusing on the near pieces. The blur in the background scene is not generated; it is ‘real’, owing to the physics of light-fields. | Image courtesy CREAL3D

It’s exciting, but early for CREAL3D. The company is a young startup with 10 members so far, and there’s still much to prove in terms of feasibility, performance, and scalability of the company’s approach to light-field displays.

Sluka holds a Ph.D in Science Engineering from the Technical University of Liberec in the Czech Republic. He says he’s a multidisciplinary engineer, and he has the published works to prove it. The CREAL3D team counts a handful of other Ph.Ds among its ranks, including several from Intel’s shuttered Vaunt project.

Sluka told me that the company has raised around $1 million in the last year, and that it’s in the process of raising a $5 million round to further grow the company and its development.


Avegant Raises $12M Financing to Further Develop AR Display Tech

Avegant, the AR display company known for its Glyph head-mounted display, announced that the company has successfully closed a $12M Series AA financing, which will allow them to further develop their light field technologies and “high resolution, low latency, and high brightness” retinal displays.

While well-known for its Glyph head-mounted display (now called ‘Avegant Video Headset’), the company has also demoed a prototype AR headset using its display technologies, which are said to target the consumer market.

Image courtesy Avegant

“The consumer AR industry faces significant challenges developing displays that are high resolution, small form factor, large field-of-view, light field, and low power. The industry is excited about our unique solutions to these technical challenges, which will enable previously impossible AR experiences,” said Avegant CEO Ed Tang in a statement.

Dr. Om Nalamasu, President of Applied Ventures and Chief Technology Officer of Applied Materials, said their company will be working with Avegant to accelerate the development of Avegant’s light field technology and “to create compelling AR applications.”

New investors include Walden SKT Venture Fund and China Walden Venture Investments III, L.P., with the company’s total funding amounting to $62.4M.


Google Presents Its Light-Field Photography Work at SIGGRAPH 2018


At SIGGRAPH 2018, Google’s experts presented their research and work in the field of light-field photography and offered interesting insights into the technology.

SIGGRAPH 2018: Google Presents Its Own Developments in Light-Field Photography

Back in March 2018, Google released its VR experience Welcome to Light Fields, which offers a glimpse of light-field technology. The app grew out of the research work of the Google team that counts light-field expert Paul Debevec among its members.


The VR tour introduces users to the advantages of light-field technology in various locations, such as a spaceship. Within the immersive experience, the technique is explained and various light-field photos are presented. Thanks to the light-field capture method, users can immerse themselves in realistic scenes complete with dynamic light reflections and even move around within them, a big difference from current approaches.

Google presented its progress in light-field research at SIGGRAPH 2018. In the roughly one-hour presentation, Debevec and other speakers discuss various problems encountered in the work, present the technology itself along with prototypes, and explain why particular development paths were chosen.

The developers initially worked with a special camera rig that held 16 cameras in a vertical, arc-shaped mount. To capture a light-field photo, the mount rotates once around its own axis and can thus record from all perspectives. The process takes between 30 and 90 seconds.


When capturing people, a marker was attached to the camera, and the subjects were instructed not to move but to keep their eyes on the marker at all times. The result can be seen in the Welcome to Light Fields VR experience: a lifelike person who maintains eye contact with the user from every perspective.

Further facts and insights are contained in the presentation. After about half an hour, the topic switches to fractal VR software.

(Sources: Upload VR | Video: ACMSIGGRAPH YouTube)


Lytro Gives Up, Employees Reportedly Moving to Google

It wasn’t long ago that light-field specialist Lytro announced it wanted to develop the standard format for ‘walkable’ VR videos. Now the company has announced in a blog statement that it is ceasing work. No new products are to be expected, and its professional service will not be continued. The Lytro team is also parting ways; according to one report, the employees will find a new home at Google.

Lytro: No New Products, Team Splits Up

Just recently, TechCrunch reported that the VR camera maker and light-field specialist was to be acquired by Google. According to The Verge, at least some employees are moving over, which a blog post from Lytro seems to confirm. The team is splitting up, though the company doesn’t name a specific destination. However, according to The Verge’s information, Google has no interest in taking over Lytro’s technologies and instead wants to distribute the new employees across various departments.

Overall, Lytro’s story can be described as luckless. The company’s light-field cameras brought the technology into a price range affordable for non-professionals for the first time, but flopped on the market for various reasons. From 2015 onward, the company bet entirely on virtual reality. Light-field sensors determine depth information precisely by tracking light rays through space, somewhat like ray tracing. The technique is superior to dual-camera systems like those in smartphones, but suffers from low resolution and an enormously higher, and therefore expensive, hardware outlay. Most recently, Lytro was working on its own format to make videos for VR more interactive: instead of being viewable only from a fixed vantage point, they let users move around within them to a limited extent. A short while ago, Google released a free app for PC VR headsets that demonstrates this capability.

(Sources: The Verge and Lytro)


Google Interested in Acquiring Light-Field Specialist Lytro

Just recently, Google released Welcome to Light Fields, an extremely exciting free application on Steam that conjures (limitedly) walkable images onto VR headsets. For Google, however, light-field captures appear to be more than a tech demo. According to what TechCrunch has learned from various sources, the company is now interested in acquiring Lytro, a startup focused on exactly this technology.

Google Interested in Acquiring Light-Field Specialist Lytro

It’s currently unclear what stage the negotiations are at, but the deal doesn’t appear to have closed yet. One source reported that Google isn’t interested in the team, but rather in the patents and the technology, for which Google would reportedly pay between $25 million and $40 million. The sources seem credible given that several employees have reportedly already left the offices.

Lytro’s solutions are varied, and accordingly it isn’t clear which areas are especially interesting to the search-engine giant. With the Lytro Immerge 2.0, the light-field specialist has not only developed an exciting and promising light-field camera, but is also working on a light-field game engine, for example. Google Earth and Google Street View have already shown Google’s interest in digitizing the world, and with light-field captures, services like Street View could be lifted to a completely new level.

Lytro had tried to establish light-field cameras in the consumer market. Their standout feature: the technology also captures depth information, so users can, for example, set the focus point after the fact. However, prices were relatively high, the effective resolution low, and the use cases limited. The cameras are now almost only available second-hand, and after two products the company reoriented itself toward the professional VR market. Lytro is now trying to create a standard for walkable 360-degree videos; the main problem, however, is the huge amount of data such videos produce.

(Source: TechCrunch)
