Facebook Unveils Two New Volumetric Video ‘Surround360’ Cameras, Coming Later this Year

Facebook today announced two new additions to the Surround360 hardware initiative that are poised to make 360 video more immersive. Unveiled at the company’s yearly developer conference, F8, the so-called x24 and x6 cameras are said to capture 360 video with depth information, giving the captured video six degrees of freedom (6DoF). This means that in addition to rotating your view (pitch, yaw, and roll) as with today’s 360 video, you can now move your vantage point up/down, left/right, and forwards/backwards within the video.
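
To make the jargon concrete, here is a minimal sketch (the type names are our own invention, not anything from Facebook) of the difference between the 3DoF pose a conventional 360 video can respond to and the 6DoF pose these new cameras target:

```python
from dataclasses import dataclass

@dataclass
class Pose3DoF:
    """All a conventional 360 video can respond to: head orientation."""
    pitch: float  # radians, looking up/down
    yaw: float    # radians, looking left/right
    roll: float   # radians, tilting the head

@dataclass
class Pose6DoF(Pose3DoF):
    """What 6DoF volumetric video adds: translation of the vantage point."""
    x: float  # meters, left/right
    y: float  # meters, up/down
    z: float  # meters, forwards/backwards
```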

Even the best stereoscopic 360 videos can’t currently provide this sort of movement, so the prospect of small, robust cameras that can is pretty exciting—because let’s face it, when you’re used to engaging with the digital world thanks to the immersive, positional tracking capabilities of the Oculus Rift, HTC Vive, or PSVR, you really notice when it’s gone.

Surround360 was originally announced at last year’s F8 as an open source hardware platform and rendering pipeline for 3D 360 VR video that anyone could build or iterate on, but Facebook is taking its new reference designs in a different direction. While Facebook doesn’t plan on selling the 6DoF cameras directly, the company will license the x24 and x6 designs—named to indicate the number of on-board sensors—to a select number of commercial partners. Facebook says a product should emerge sometime later this year.

The rigs are smaller than the original Surround360, now dubbed the Surround360 ‘Open Edition’, and critically they are far smaller than other rigs capable of volumetric capture, such as HypeVR’s unwieldy high-end camera/LiDAR rig.

Specs are still thin on the ground, but the x24 appears to be around 10 inches in diameter (257mm at its widest, 252mm at its thinnest), and is said to capture full RGB and depth at every pixel in each of the 24 cameras. It is also said to oversample 4x at every point in full 360, providing “best in-class image quality and full-resolution 6DoF point clouds.”

The x6, though its dimensions haven’t been specified, looks to be about half that diameter at around 5 inches, and is said to oversample by 3x. No pricing info has been made public for either camera.

Facebook says depth information is captured for every frame of the video, and because the output is 3D, footage can be fed into existing visual effects (VFX) software tools to create a mashup of live-action capture and computer-generated imagery (CGI).
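
Facebook hasn’t published details of its pipeline, but the reason per-pixel depth makes footage VFX-friendly is easy to illustrate: an RGB-D frame can be back-projected into a 3D point cloud using a standard pinhole camera model. A minimal sketch, assuming the camera intrinsics are known from calibration:

```python
import numpy as np

def rgbd_to_point_cloud(rgb, depth, fx, fy, cx, cy):
    """Back-project an RGB-D frame into an (N, 6) array of XYZ + RGB points.

    rgb:   (H, W, 3) color image
    depth: (H, W) per-pixel depth in meters, as the x24/x6 are said to capture
    fx, fy, cx, cy: pinhole intrinsics (focal lengths and principal point)
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx  # invert the pinhole projection u = fx * x / z + cx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return np.hstack([points, rgb.reshape(-1, 3)])
```

Once every frame exists as geometry like this, compositing CGI into the capture becomes an ordinary 3D problem rather than a 2D rotoscoping one.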

Creating good-looking 6DoF 360 video is still an imperfect process though, so Facebook is also partnering with a number of post-production companies and VFX studios to help build out workflows and toolchains. Adobe, Otoy, Foundry, Mettle, DXO, Here Be Dragons, Framestore, Magnopus, and The Mill are all working with Facebook in some capacity.

“We’ve designed with Facebook an amazing cloud rendering and publishing solution to make x24’s interactive volumetric video within reach for all,” said Jules Urbach, founder and CEO of Otoy. “Our ORBX ecosystem opens up 28 different authoring and editing tools and interactive light field streaming across all major platforms and browsers. It’s a simple and powerful solution this game-changing camera deserves.”

Keep an eye on this article, as we’ll be updating information as it comes in.


Will Lytro’s Plenoptic Cameras Shape the Future of VR Capture?

Lytro has reinvented its product line. Where the company once focused on consumer digital cameras, it now builds, among other things, high-end cameras for virtual reality, with its main production effort centered on plenoptic cameras (also called light-field cameras), which make the sense of immersion in virtual reality even more intense. To improve the technology, the company has raised $60 million in funding.

What Makes Plenoptic Camera Footage So Special?

To get an idea of what footage from a plenoptic camera looks like, watch the following video:

Lytro first published the Moon experiment in August 2016. What separates plenoptic camera footage from that of conventional 360-degree cameras is an extra dimension: alongside the usual two image dimensions, the camera also records the direction of incoming light rays, and thereby gains additional information about scene depth. In theory this makes an infinite depth of field possible, and the focus can also be adjusted after the fact. The same scene is captured from multiple viewpoints, and the additional depth information makes volumetric video recordings possible. That means the captured scene is reproduced as 3D geometry at near-lifelike sharpness and can be delivered to a VR headset. In VR the scene is then accompanied by accurate sound, and the user can move freely within the video and relive the situation from different perspectives. The result is a far more immersive VR experience than current standards allow.
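
Lytro hasn’t disclosed Immerge’s internals, but the refocus-after-the-fact trick mentioned above works on any light field and is easy to sketch: shift each sub-aperture view in proportion to its offset from the array center, then average, so that only objects on the chosen focal plane line up across views. A minimal illustration (the array layout and shift scale are assumptions for demonstration):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(subaperture_images, offsets, disparity):
    """Synthetically refocus a light field by shift-and-add.

    subaperture_images: list of (H, W, 3) views from a camera/lenslet array
    offsets:   list of (du, dv) positions of each view relative to array center
    disparity: pixels of shift per unit of offset; varying it moves the focal plane
    """
    acc = np.zeros_like(subaperture_images[0], dtype=np.float64)
    for img, (du, dv) in zip(subaperture_images, offsets):
        # Points on the chosen focal plane align across views after this shift;
        # everything off that plane stays misaligned and averages into blur.
        acc += nd_shift(img.astype(np.float64), (dv * disparity, du * disparity, 0))
    return acc / len(subaperture_images)
```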

Drawbacks of the Moon Experiment and Progress Since

The Moon experiment was shot with an early prototype, which still had a few problems: its capture volume was limited, and its accuracy was not yet particularly high. Lytro has since improved on that prototype and unveiled a new camera. It is considerably larger and allows more freedom of movement within the footage, and the resolution, sharpness, and audio quality of the recordings have all improved markedly. The company is also working on software suited to editing the videos.

Lytro’s volumetric video light-field camera

All in all, it seems likely that this generation of cameras will shape the future of VR video and VR film. It therefore stands to reason that volumetric video could become the standard within virtual reality.

(Sources: Road to VR, Lytro Vimeo, Lytro)


Lytro’s Latest VR Light-field Camera is Huge, and Hugely Improved

In the last few years, Lytro has made a major pivot away from consumer-facing digital cameras to high-end production cameras and tools, with a major part of the company’s focus on the ‘Immerge’ light-field camera for VR. In February, Lytro announced it had raised another $60 million to continue developing the tech. I recently stopped by the company’s offices to see the latest version of the camera and the major improvements in capture quality that come with it.

The first piece of content captured with an Immerge prototype was the ‘Moon’ experience which Lytro revealed back in August of 2016. This was a benchmark moment for the company, a test of what the Immerge camera could do:

Now, to quickly familiarize yourself with what makes a light-field camera special for VR, the important thing to understand is that light-field cameras shoot volumetric video. So while the individual cameras of a 360-degree video rig output flat frames of the scene, a light-field camera captures enough data to recreate the scene as complete 3D geometry, as seen from anywhere within a certain volume. The major advantages are the ability to play the scene back through a VR headset with truly accurate stereo, and to give the viewer proper positional tracking inside the video; both result in a much more immersive experience, or what we recently called “the future of VR video.” There are further advantages to light-field capture that will come later down the road, when we start seeing headsets equipped with light-field displays… but that’s for another day.

Lytro’s older Immerge prototype, note that many of the optical elements have been removed | Photo by Road to VR

So, the Moon experience captured with Lytro’s early Immerge prototype did achieve all those great things that light-field is known for, but it wasn’t good enough just yet. It’s hard to tell unless you’re seeing it through a VR headset, but the Moon capture had two notable issues: 1) it had a very limited capture volume (meaning the space around which your head can freely move while keeping the image intact), and 2) the fidelity wasn’t there yet; static objects looked great, but moving actors and objects in the scene exhibited grainy outlines.

So Lytro took what it learned from the Moon shoot, went back to the drawing board, and created a totally new Immerge prototype, which solved those problems so effectively that the company now proudly says the camera is “production ready” (no joke: scroll to the bottom of this page on their website and you can submit a request to shoot with it).

Photo courtesy Lytro

The new, physically larger Immerge prototype brings a larger capture volume, which means the viewer has more freedom of movement inside the capture, and its higher quality cameras provide more data, allowing for greater capture and playback fidelity. The latest Immerge camera is significantly larger than the prototype that captured the Moon experience, by about four times. It features a whopping 95-element planar light-field array with a 90-degree field of view, and those 95 elements are larger than on the precursor too, capturing higher quality data.

I got to see a brand new production captured with the latest Immerge camera, and while I can’t talk much about the content (or unfortunately show any of it), I can talk about the leap in quality.

Photo by Road to VR

The move from Moon to this new production is substantial. Not only does the apparent resolution feel higher (leading to sharper ‘textures’), but the depth information is more precise which has largely eliminated the grainy outlines around non-static scene elements. That improved depth data has something of a double-bonus on visual quality, because sharper captures enhance the stereoscopic effect by creating better edge contrast.

Do you recall early renders of a spherical Immerge camera? Reportedly due to feedback from early productions using the spherical approach, the company decided to switch to a flat (planar) capture design. With this approach, capturing a 360-degree view requires the camera to be rotated to individually shoot each side of an eventual pentagonal capture volume. This sounds harder than capturing the scene all at once in 360, but Lytro says it’s easier for the production process.
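
As a back-of-the-envelope check (our arithmetic, not Lytro’s published numbers): five sides at 72 degrees each tile the full 360, and the array’s stated 90-degree field of view leaves 18 degrees of margin for overlap between adjacent captures, which is what makes stitching the sides together feasible.

```python
sides = 5                    # pentagonal capture volume
fov = 90                     # degrees, per Lytro's planar array
per_side = 360 / sides       # 72 degrees of panorama covered per capture
overlap = fov - per_side     # 18 degrees shared with each neighboring side
yaws = [i * per_side for i in range(sides)]
print(per_side, overlap, yaws)   # 72.0 18.0 [0.0, 72.0, 144.0, 216.0, 288.0]
```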

The size of the capture volume has been increased significantly over Moon, though it can still feel limiting. While you’re well covered for any reasonable movements you’d make while your butt is planted in a chair, if you were to take a large step in any direction you’d still leave the capture volume (causing the scene to fade to black until you step back inside).
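
That fade-to-black behavior is simple to reason about: playback compares the viewer’s head position against the bounds of the capture volume and dims the scene as the boundary is crossed. A hedged sketch (the spherical volume and fade distance are illustrative, not Lytro’s actual implementation):

```python
import numpy as np

def scene_opacity(head_pos, volume_center, volume_radius, fade_width=0.1):
    """Fade the scene out as the viewer's head leaves the capture volume.

    head_pos, volume_center: (3,) positions in meters
    volume_radius: radius of the (here, spherical) valid viewing volume
    fade_width:    meters over which to blend from fully visible to black
    """
    dist = np.linalg.norm(np.asarray(head_pos) - np.asarray(volume_center))
    # 1.0 inside the volume, 0.0 once fade_width past the boundary
    return float(np.clip(1.0 - (dist - volume_radius) / fade_width, 0.0, 1.0))
```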

And, although this has little to do with the camera, the experience I saw featured incredibly well-mixed spatial audio which sold the depth and directionality of the light-field capture in which I was standing. I was left very impressed with what Immerge is now capable of capturing.

The new camera is impressive, but the magic is not just in the hardware, it’s also in the software. Lytro is developing custom tools to fuse all the captured information into a coherent form for dynamic playback, and to aid production and post-production staff along the way. The company doesn’t succeed just by making a great light-field camera; it’s responsible for creating a complete and practical pipeline that actually delivers value to those who want to shoot VR content. Light-field capture provides a great many benefits, but it needs to be easy to use at production scale, something Lytro is focusing on just as heavily as the hardware itself.

All in all, seeing Lytro’s latest work with Immerge has further convinced me that today’s de facto 360-degree film capture is a stopgap. When it comes to cinematic VR film production, volumetric capture is the future, and Lytro is on the bleeding edge.


8i Lands $27M in Series B Funding and Reveals Tango-powered Mixed Reality App ‘Holo’

Volumetric video specialists 8i have announced that their latest series B funding round has netted them a further $27 million, while also unveiling Holo, a mixed reality video app powered by Google’s Tango technology.

We first reported on 8i back in 2015, when they unveiled their 360 volumetric video capture system, capable of capturing imagery and data from different viewpoints and stitching them back together in real time, allowing the video to be viewed from different angles.

Now, following the company’s 2015 series A funding round, 8i has announced it’s to receive a further $27M from a series of high-profile investors including Baidu, Verizon, and Time Warner.

Up to now, 8i’s focus has very much been on virtual reality, with an early version of their ‘3D Video’ player launching for the Oculus Rift even before the consumer version had reached market. However, the company’s latest direction embraces the recent wave of consumer devices that include Google’s ‘Tango’ depth sensing and capture technology. It’s called Holo, and it purports to “bring holograms to consumers” via pre-recorded volumetric video and augmented reality. 8i is making extensive use of the word “hologram” in the colloquial sense, though technically speaking their work does not involve holograms in the optical sense.

The Lenovo Phab 2 Pro

“As consumers are augmenting, mixing and creating new content on their smartphones on a massive scale, mobile presents an unparalleled opportunity for distribution of holograms,” said 8i CEO Steve Raymond. “We’re thrilled to have the strategic expertise and backing of leaders in media, technology, and communications as we bring audiences new ways to create and engage with content. With this global round, we look forward to partnering with our investors from the US, China, Europe, and Australia as we bring our technology to consumers worldwide.”

The app, which works on Google Tango-enabled phones—such as the recently released Lenovo Phab 2 Pro—is in beta right now, with a release set for some time this year. It allows users to capture video of the real world around them and drop pre-recorded volumetric video ‘avatars’ (captured by 8i) into the scene, which then pan and rotate in real time, matching the camera’s movement.
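
The core trick Holo depends on, keeping a virtual avatar pinned to a real-world spot as the phone moves, comes down to rendering the avatar through the device’s tracked pose. A rough sketch of the math (our own function names, not Tango’s actual API):

```python
import numpy as np

def view_matrix(device_rotation, device_position):
    """Invert a tracked camera-to-world pose into a world-to-camera view matrix."""
    view = np.eye(4)
    view[:3, :3] = device_rotation.T
    view[:3, 3] = -device_rotation.T @ device_position
    return view

def anchored_model_view(anchor_position, device_rotation, device_position):
    """Model-view matrix that keeps a volumetric avatar pinned at a world anchor.

    Only the view half changes as the phone moves, so the avatar appears
    locked in place while the user pans and walks around it.
    """
    model = np.eye(4)
    model[:3, 3] = anchor_position  # the avatar stays put in world space
    return view_matrix(device_rotation, device_position) @ model
```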

Actor Jon Hamm being captured at 8i’s volumetric video studio

But while Holo looks like enormous fun, what of 8i’s plans for virtual reality? 8i CEO Steve Raymond says, “Our investment into mobile AR in no way diminishes our excitement for the many use cases that are emerging for our holograms in high end VR. What we are seeing are different kinds of content creators embracing different forms of content for different consumption platforms.”

Holo is the first, low-cost entry point for content creation using 8i’s volumetric assets, but the company has already produced more ambitious projects with higher fidelity visuals, such as Buzz Aldrin’s Cycling Pathways to Mars (below), a “volumetric VR experience powered by 8i holographic technology and designed for HMDs that enable 6 degrees of freedom,” which is due to premiere at SXSW next month.


“With VR and AR, we’re seeing the very beginning of a new generation of immersive media,” said Scott Levine of Time Warner Investments. “8i makes holographic human content a reality in this new era with its breakthrough volumetric capture technology, while lowering the barrier for creators. We’re excited to back this world-class team as they continue to push the boundaries of data compression and depth acquisition, and bring holograms to the mainstream with Holo on smartphones.”

We’re not sure what to make of Holo itself, having not yet tried it, but with the VR industry still in an embryonic state compared with more established media platforms, 8i’s multi-pronged approach of introducing its immersive technologies to a mass market audience with something fun and accessible is probably a smart move.


Believe the Hype: HypeVR’s Volumetric Video Capture is a Glimpse at the Future of VR Video

After having teased the tech toward the end of last year, we’ve finally gone hands-on with HypeVR’s volumetric video capture, which lets you move around inside of VR videos.

Inherent Limitations of 360 Video

Today’s most immersive VR video productions are shot in 360 degree video and 3D. Properly executed 360 3D video content can look quite good in VR (just take a look at some of the work from Felix & Paul Studios). But—assuming we can one day achieve retina-quality resolution and geometrically perfect stereoscopy—there’s a hurdle that 360 3D video content simply can’t surmount: movement inside of the video experience.

With any 360 video today (3D or otherwise), your view is locked to a single vantage point. Unlike real-time rendered VR games, you can’t walk around inside the video—let alone just lean in your chair and expect the scene to move accordingly. Not only is that less immersive, it’s also less comfortable; we’re all constantly moving our heads slightly, even when sitting still, and when the virtual view doesn’t line up with those movements, the world feels less real and less comfortable.

Volumetric VR Video Capture

That’s one of a number of reasons that HypeVR is working on volumetric video capture technology. The idea is to capture not just a series of 360 pictures and string them together (like with traditional 360 cameras), but to capture the volumetric data of the scene for each frame so that when the world is played back, the information is available to enable the user to move inside the video.

At CES 2017, I saw both the original teaser video shot with HypeVR’s monster capture rig, and a brand new, even more vivid experience, created in conjunction with Intel.

With an Oculus Rift headset, I stepped into that new scene: a 30 second loop of a picturesque valley in lush Vietnam. I was standing on a rock on a tiny island in the middle of a lake. Just beyond the rock, the island was covered in wild grasses, and a few yards away from me were a grazing water buffalo and a farmer.

Surrounding me in the distance was rainforest foliage and an amazing array of waterfalls cascading down into the lake. Gentle waves rippled through the water and lapped the edge of my little island, pushing some of the wild grass at the water’s edge.

It was vivid and sharp—it felt more immersive than pretty much any 360 3D video I’ve ever seen through a headset, mostly because I was able to move around within the video, with proper parallax, in a roomscale area. It made me feel like I was actually standing there, in Vietnam, not just that my eyes alone had been transported. This is the experience we all want when we imagine VR video, and it’s where the medium needs to head in the future to become truly compelling.

Now, I’ve seen impressive photogrammetry VR experiences before, but photogrammetry requires someone to canvass a scene for hours, capturing it from every conceivable angle and then compiling all the photos together into a model. The results can be tremendous, but there’s no way to capture motion, because the entire scene can’t be photographed quickly enough to record moving objects.

HypeVR’s approach is different: the rig sits static in a scene and captures it 60 times per second, using a combination of high-quality video capture and depth-mapping LiDAR. Later, the texture data from the video is fused with the depth data to create 60 volumetric ‘frames’ of the scene per second. That means you’ll be able to see waves moving or cars driving, while still maintaining the volumetric data that gives users the ability to move within some portion of the capture.
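
HypeVR hasn’t detailed its fusion step, but the standard way to ‘texture’ LiDAR data with video is to project each 3D point into a calibrated camera’s image and sample the color found there. A minimal single-camera sketch, assuming the calibration is known:

```python
import numpy as np

def colorize_points(points, image, R, t, fx, fy, cx, cy):
    """Assign RGB colors to LiDAR points by projecting them into one camera.

    points: (N, 3) LiDAR points in world coordinates
    image:  (H, W, 3) video frame from a calibrated camera
    R, t:   world-to-camera rotation (3, 3) and translation (3,)
    """
    cam = points @ R.T + t                     # transform into camera coordinates
    in_front = cam[:, 2] > 0                   # ignore points behind the camera
    z = np.where(in_front, cam[:, 2], np.inf)  # inf keeps the math warning-free
    u = (fx * cam[:, 0] / z + cx).astype(int)
    v = (fy * cam[:, 1] / z + cy).astype(int)
    h, w = image.shape[:2]
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros_like(points)
    colors[valid] = image[v[valid], u[valid]]
    return colors, valid
```

A real rig would repeat this across all of its cameras and resolve occlusions, but the principle is the same.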

The ‘frames’ in the case of volumetric video capture are actually real-time rendered 3D models of the scene which are played back one after another. That not only allows the viewer to walk around within the space as they would a VR game environment, but is also the reason why HypeVR’s experiences look so sharp and immersive—every frame that’s rendered for the VR headset’s display is done so with optimal sampling of the available data and has geometrically correct 3D at every angle (not just a few 3D sweet spots, as with 360 3D video). This approach also means there are no issues with off-horizon capture (as we too frequently see with 360 camera footage).
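
In other words, playback is conceptually just swapping in a new 3D model every tick while the engine renders it from the viewer’s live head pose. A schematic loop (the callables here are hypothetical placeholders, not HypeVR’s software):

```python
import time

def play_volumetric_clip(frames, get_head_pose, draw, fps=60):
    """Step through per-frame 3D models, rendering each from the live head pose.

    frames:        sequence of pre-built textured meshes, one per captured frame
    get_head_pose: callable returning the viewer's current 6DoF pose
    draw:          callable(mesh, pose) that renders one view to the headset
    """
    frame_time = 1.0 / fps
    for mesh in frames:                    # 60 volumetric 'frames' per second
        start = time.monotonic()
        draw(mesh, get_head_pose())        # correct parallax from any viewpoint
        time.sleep(max(0.0, frame_time - (time.monotonic() - start)))
```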



Intel Shows Walkable Videos at CES 2017

During its press conference at CES 2017, Intel showed a new technology that lets you walk around inside 360-degree footage. Intel calls it Volumetric Video Technology and demonstrated the captures to the audience on an Oculus Rift.


Intel’s demo already looks very impressive on video. For every object shown, its position in space is captured, so you can move freely within the video and even lean over objects. If this technology becomes the standard, 360-degree videos stand to gain considerably in popularity. What bothers people about 360-degree videos today is that they don’t offer true VR, since you are only a constrained observer. If Intel can break that limitation, we will be able to feel like a part of the world.

For now, though, the technology demands enormous computing power, because the videos weigh in at 3 GB per frame. If a video were played back at 30 frames per second, that would mean processing 90 GB per second.

But Intel is not the only company that wants to revolutionize the 360-degree world with walkable videos. Lytro also aims to make videos walkable using light-field technology, though we have yet to see footage from the Lytro Immerge camera itself.


Crazy Camera Rig Captures Volumetric VR Video with 14 Cameras and LiDAR

HypeVR is working to bring live-action volumetric 360 video to VR. The company’s crazy camera rig, built to capture the necessary data, is a mashup of high-end cameras and laser scanning tech.

HypeVR recently shot a brief demonstration of the output of its rig, which the company says can capture ‘volumetric’ VR video that allows users to move around in a limited space within the video, similar to Lytro’s light-field capture which we saw the other day. Traditional 360 video capture solutions don’t allow any movement, effectively locking the user’s head to one point in 3D space, reducing comfort and immersion.

The HypeVR rig used to capture the footage appears almost impractically large: consisting of 14 high-end Red cameras and a Velodyne LiDAR scanner, it can, HypeVR says, “simultaneously capture all fourteen 6K [Red cameras] at up to 90fps and a 360 degree point cloud at 700,000 points per second.”
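
Those figures suggest the LiDAR’s contribution per frame is sparse compared to the video: at 700,000 points per second and the rig’s top frame rate, each volumetric frame gets fewer than 8,000 fresh depth points (our arithmetic below), which is why the dense visual detail has to come from the fourteen 6K cameras.

```python
points_per_second = 700_000   # Velodyne LiDAR rate, per HypeVR
frames_per_second = 90        # rig's top capture rate
print(points_per_second / frames_per_second)  # ~7778 depth points per frame
```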


As with similar capture approaches we’ve seen in the past, the video data is used to ‘texture’ the point cloud data, essentially creating a 3D model of the scene. With that 3D data piped into an engine and played back frame-by-frame, users can not only see live-action motion around them, but also move their head through 3D space within the scene, allowing for much more natural and comfortable viewing.

Fortunately, HypeVR says that this massive camera platform is not the only option for capturing volumetric VR video. The company’s purportedly patent-pending capture method is camera agnostic, and can be applied to smaller and more affordable rigs, which HypeVR says are in development.
