Stunning View Synthesis Algorithm Could Have Huge Implications for VR Capture

As far as live-action VR video is concerned, volumetric video is the gold standard for immersion. And for static scene capture, the same holds true for photogrammetry. But both methods have limitations that detract from realism, especially when it comes to ‘view-dependent’ effects like specular highlights and lensing through translucent objects. Research from Thailand’s Vidyasirimedhi Institute of Science and Technology shows a stunning view synthesis algorithm that significantly boosts realism by handling such lighting effects accurately.

Researchers from the Vidyasirimedhi Institute of Science and Technology in Rayong, Thailand, published work earlier this year on a real-time view synthesis algorithm called NeX. Its goal is to use just a handful of input images from a scene to synthesize new frames that realistically portray the scene from arbitrary points between the real images.

Researchers Suttisak Wizadwongsa, Pakkapon Phongthawee, Jiraphon Yenphraphai, and Supasorn Suwajanakorn write that the work builds on top of a technique called multiplane image (MPI). Compared to prior methods, they say their approach better models view-dependent effects (like specular highlights) and creates sharper synthesized imagery.
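The paper’s full details aren’t reproduced here, but the core idea is easy to sketch: an MPI is a stack of semi-transparent RGBA planes that are alpha-composited into a view, and NeX makes each pixel’s color view-dependent by expressing it as a base color plus a weighted sum of learned basis functions of the viewing direction. Below is a minimal NumPy sketch of that compositing step; the array shapes and function names are my own choices for illustration, not anything taken from the paper’s code.

```python
import numpy as np

def composite_view_dependent_mpi(alphas, base_rgb, coeffs, basis_values):
    """Back-to-front alpha compositing of a multiplane image (MPI) whose
    per-pixel color is modulated by a view-dependent basis expansion.

    alphas:       (D, H, W)        per-plane alpha, plane 0 is the farthest
    base_rgb:     (D, H, W, 3)     view-independent base color k0
    coeffs:       (D, H, W, N, 3)  reflectance coefficients k1..kN
    basis_values: (N,)             learned basis functions evaluated at the
                                   current viewing direction
    """
    out = np.zeros(base_rgb.shape[1:], dtype=np.float64)        # (H, W, 3)
    for d in range(alphas.shape[0]):                             # back to front
        # View-dependent color: k0 + sum_n k_n * H_n(viewing direction)
        rgb = base_rgb[d] + np.tensordot(basis_values, coeffs[d], axes=([0], [2]))
        a = alphas[d][..., None]
        out = rgb * a + out * (1.0 - a)                          # 'over' compositing
    return out
```

Re-evaluating the basis at a new viewing direction and re-compositing is what lets the same stored planes reproduce highlights that move with the viewer.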

On top of those improvements, the team has highly optimized the system, allowing it to run easily at 60Hz—a claimed 1000x improvement over the previous state of the art. And I have to say, the results are stunning.

Though the system is not yet highly optimized for this use case, the researchers have already tested it using a VR headset with stereo depth and full 6DOF movement.

The researchers conclude:

Our representation is effective in capturing and reproducing complex view-dependent effects and efficient to compute on standard graphics hardware, thus allowing real-time rendering. Extensive studies on public datasets and our more challenging dataset demonstrate state-of-art quality of our approach. We believe neural basis expansion can be applied to the general problem of light-field factorization and enable efficient rendering for other scene representations not limited to MPI. Our insight that some reflectance parameters and high-frequency texture can be optimized explicitly can also help recovering fine detail, a challenge faced by existing implicit neural representations.

You can find the full paper at the NeX project website, which includes demos you can try for yourself right in the browser. There are also WebVR-based demos that work with PC VR headsets if you’re using Firefox, but unfortunately they don’t work with Quest’s browser.

Notice the reflections in the wood and the complex highlights in the pitcher’s handle! View-dependent details like these are very difficult for existing volumetric and photogrammetric capture methods.

Volumetric video capture that I’ve seen in VR usually gets very confused about this sort of view-dependent effect, often having trouble determining the appropriate stereo depth for specular highlights.

Photogrammetry, or ‘scene scanning’ approaches, typically ‘bake’ the scene’s lighting into textures, which often makes translucent objects look like cardboard (since the lighting highlights don’t move correctly as you view the object at different angles).
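To make the ‘baked lighting’ problem concrete, here’s a toy comparison (not any production pipeline’s code): a baked texel returns the same color no matter where you stand, while even a textbook Blinn-Phong specular term shifts its highlight with the viewing direction, which is exactly the behavior that baked photogrammetry textures lose.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def baked_color(texel_rgb, view_dir):
    # Photogrammetry-style baked texture: lighting was captured into the texel,
    # so the result ignores the viewing direction entirely.
    return texel_rgb

def view_dependent_color(albedo, normal, light_dir, view_dir, shininess=32.0):
    # Textbook Blinn-Phong shading as a stand-in for view-dependent effects:
    # the specular highlight moves as the viewer moves.
    n, l, v = normalize(normal), normalize(light_dir), normalize(view_dir)
    h = normalize(l + v)                                # half vector
    diffuse = albedo * max(float(np.dot(n, l)), 0.0)
    specular = max(float(np.dot(n, h)), 0.0) ** shininess
    return diffuse + specular
```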

The NeX view synthesis research could significantly improve the realism of volumetric capture and playback in VR going forward.


Arcturus Raises $5 Million to Expand Volumetric Video Toolset & Streaming

Arcturus, a company building tools for editing and distributing volumetric video, today announced it has raised a $5 million seed investment.

Distinct from stereoscopic video, volumetric video is fully three-dimensional and can be viewed from all angles, which makes it potentially well suited for use in augmented and virtual reality. Volumetric video isn’t yet widespread, owing to challenges with capture, storage, editing, and distribution.

With its ‘Holosuite’—HoloEdit, HoloCompute, and HoloStream—Arcturus hopes to streamline the use of volumetric video, by making it easy to edit, manage, and stream.

The company today announced a $5 million seed investment led by BITKRAFT Ventures with participation from HBSE Ventures, NTT Docomo Ventures, Build Ventures, Marc Merril, and Craig Kallman.

Arcturus says the funds will be used to “scale the software development team, focus efforts on sales growth, and expand the product line with an emphasis on live-streaming features.”

“Arcturus’ mission is to create a future where digital human holograms are captured from reality, customized and even interact with the viewer in real time. This can take the form of digital customer service agents, human avatars, virtual 3D concerts and fashion runways, or giving access to the perspectives of professional athletes in broadcast sports,” says Arcturus CEO, Kamal Mistry. “With the backing of BITKRAFT Ventures, true leaders in games and XR investments, we are confident Arcturus will serve as a catalyst to enable widespread accessibility to volumetric video creation, enabling millions of users to create a new form of interactive content.”

Capturing live-action volumetric video remains a complex process, often requiring dedicated light-stages with tens if not hundreds of cameras surrounding the subject. The resulting datasets are also massive compared to traditional or even stereoscopic video.

Microsoft’s Mixed Reality capture stage | Image courtesy Microsoft

But that could well change in the future thanks to developments in both hardware and software.

Researchers in recent years have shown compelling results using machine learning approaches to reconstruct volumetric video from traditional video footage. Hardware built specifically for capturing volumetric data—like Microsoft’s Azure Kinect or Apple’s LiDAR-equipped phones & tablets—could streamline the capture process and expand the use-cases of volumetric video from dedicated capture stages to less complex productions.

Arcturus doesn’t deal in the actual capture of volumetric video, but it’s counting on growth in demand for volumetric video and wants to be ready with its suite of tools for creators to store, edit, and stream the content. But the tech is new enough that individual users won’t be working with it for some time to come—that much is clear from Arcturus’ Holosuite pricing, which runs a cool $7,500 per year, per user.


Google Takes a Step Closer to Making Volumetric VR Video Streaming a Thing

Google unveiled a method of capturing and streaming volumetric video, something Google researchers say can be compressed down to a lightweight format capable of being rendered even on standalone VR/AR headsets.

Both monoscopic and stereoscopic 360 video are flawed insofar as they don’t allow the VR user to move their head freely within a 3D area; you can rotationally look up, down, left, right, and tilt side to side (3DOF), but you can’t positionally lean back or forward, stand up or sit down, or move your head to look around something (6DOF). Even seated, you’d be surprised at how often you move in your chair or make micro-adjustments with your neck, something that, when coupled with a standard 360 video, makes you feel like you’re ‘pulling’ the world along with your head. Not exactly ideal.

Volumetric video is instead about capturing how light exists in the physical world, and displaying it so VR users can move their heads around naturally. That means you’ll be able to look around something in a video because that extra light (and geometry) data has been captured from multiple viewpoints. While Google didn’t invent the idea—we’ve seen something similar from NextVR before it was acquired by Apple—it’s certainly making strides to reduce overall cost and finally make volumetric video a thing.

In a paper published ahead of SIGGRAPH 2020, Google researchers accomplish this by creating a custom array of 46 time-synchronized action cams mounted on a 92cm diameter dome. This provides the user with an 80cm area of positional movement, while also delivering 10 pixels per degree of angular resolution, a 220+ degree FOV, and 30fps video capture. Check out the results below.

 

The researchers say the system can reconstruct objects as close as 20cm to the camera rig, which is thanks to a recently introduced interpolation algorithm in Google’s deep learning system DeepView.

This is done by replacing its underlying multi-plane image (MPI) scene representation with a collection of spherical shells, which are better suited for representing panoramic light field content, researchers say.


“We further process this data to reduce the large number of shell layers to a small, fixed number of RGBA+depth layers without significant loss in visual quality. The resulting RGB, alpha, and depth channels in these layers are then compressed using conventional texture atlasing and video compression techniques. The final, compressed representation is lightweight and can be rendered on mobile VR/AR platforms or in a web browser,” Google researchers conclude.
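The quoted pipeline boils down to two steps that are easy to picture: collapse the many shells into a handful of RGBA + depth layers, then tile those layers into atlas images that a stock video codec can compress. Here’s a rough sketch of the tiling half; the channel layout and tiling scheme are assumptions for illustration, not the paper’s actual atlasing method.

```python
import numpy as np

def pack_layer_atlases(layers_rgba, layers_depth, cols=4):
    """Tile a small, fixed number of RGBA + depth layers into two atlas images
    so they can be handed to an ordinary video encoder.

    layers_rgba:  (L, H, W, 4)
    layers_depth: (L, H, W)
    """
    L, H, W, _ = layers_rgba.shape
    rows = -(-L // cols)                                     # ceiling division
    color_atlas = np.zeros((rows * H, cols * W, 4), dtype=layers_rgba.dtype)
    depth_atlas = np.zeros((rows * H, cols * W), dtype=layers_depth.dtype)
    for i in range(L):
        r, c = divmod(i, cols)
        color_atlas[r * H:(r + 1) * H, c * W:(c + 1) * W] = layers_rgba[i]
        depth_atlas[r * H:(r + 1) * H, c * W:(c + 1) * W] = layers_depth[i]
    return color_atlas, depth_atlas
```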

In practice, what Google is introducing here is a more cost-effective solution that may eventually prompt the company to create its own volumetric immersive video team, much like it did with its 2015-era Google Jump 360 rig project before it was shuttered last year. That’s provided, of course, that Google further supports the project by, say, adding support for volumetric video to YouTube and releasing an open source plan for the camera array itself. Whatever the case, volumetric video, or what Google refers to in the paper as Light Field video, is starting to look like a viable step forward for storytellers looking to drive the next chapter of immersive video.

If you’re looking for more examples of Google’s volumetric video, you can check them out here.


Jaunt Acquires Personify’s Volumetric Capture Tech & Talent to Build Out XR Platform

Jaunt was once a cinematic VR company that produced high-quality 360 video and even a professional-grade 360 camera dubbed NEO. Taking a step in a decidedly more AR direction with its recently revealed volumetric video capture solution, Jaunt has announced it’s also acquired both Personify’s ‘Teleporter’ volumetric video streaming tech and the engineers behind it.

As a talent and IP-driven acquisition, the move is said in a press statement to directly support Jaunt’s volumetric R&D initiatives for its Jaunt XR Platform, a solution that lets businesses create and deliver their own branded volumetric video content like livestreamed avatars of real people, deliverable to both VR headsets and AR-capable devices like flagship Apple smartphones and tablets.

According to Venture Beat’s Dean Takahashi, who visited Jaunt’s San Mateo, California headquarters last month, the company has created a pipeline that uses six Intel RealSense depth cameras; the resultant images are then automatically stitched into a single 3D avatar and livestreamed to supported devices.

Jaunt CTO and Founder Arthur van Hoff says adding both Teleporter and the talent behind it allows them to “increase the speed and scope of our research and development as we move further into the extended reality arena with the Jaunt XR Platform at the core of our business.”

Continuing: “We’re honing in on fully immersive virtual, mixed, and augmented reality experiences, and are thrilled to advance those technologies with the help of our new Chicago-based team.”


The deal includes seven Personify engineers, who will join Jaunt’s R&D team, four pending patents developed around Personify’s Teleporter technology, and Personify’s office in Chicago. Jaunt hasn’t disclosed the acquisition price.

Jaunt’s evolution to a B2B-focused company coincided with the late-2017 announcement of their Jaunt XR platform. The company has been involved in the VR cinematic space since its founding in 2013.


The Future of Virtual Lightfields with Otoy CEO Jules Urbach

Otoy is a rendering company that is pushing the limits of digital light fields and physically-based rendering. Now that Otoy’s Octane Renderer has shipped in Unity, they’re pivoting from focusing on licensing their rendering engine to selling cloud computing resources for rendering light fields and physically-correct photon paths. Otoy has also completed an ICO for their Render Token (RNDR), and will continue to build out a centralized cloud-computing infrastructure to bootstrap a more robust distributed rendering ecosystem driven by an Ethereum-based ERC20 cryptocurrency market.

LISTEN TO THE VOICES OF VR PODCAST

I talked with CEO and co-founder Jules Urbach at the beginning of SIGGRAPH 2017, where we discussed relighting light fields, 8D lightfield & reflectance fields, modeling physics interactions in lightfields, optimizing volumetric lightfield capture systems, converting 360 video into volumetric videos for Facebook, and their movement into creating distributed render farms.

In my previous conversations with Urbach, he shared his dreams of rendering the metaverse and beaming the matrix into your eyes. We complete this conversation by diving down the rabbit hole into some of the deeper philosophical motivations that are really driving and inspiring Urbach’s work.

This time Urbach shares his visions of VR’s potential to provide us with experiences that are decoupled from the levels of entropy and energy transfer normally expected for an equivalent meaningful experience. What’s below Planck’s constant? It’s a philosophical question, but Urbach suspects that information theory holds some insights, since Planck’s photons and Shannon’s bits have a common root in thermodynamics.


He wonders whether the Halting problem suggests that a simulated universe is not computable, as well as whether Gödel’s Incompleteness Theorems suggest that we’ll never be able to create a complete model of the Universe. Either way, Urbach is deeply committed to trying to create the technological infrastructure needed to render the metaverse, and to continuing to probe for insights into the nature of consciousness and the nature of reality.

Here’s the launch video for the Octane Renderer in Unity:


Support Voices of VR

Music: Fatality & Summer Trip


NextVR’s Latest Tech is Bringing New Levels of Fidelity to VR Video

NextVR, a company specializing in live VR broadcasting of sports and entertainment content, has debuted its latest broadcast technology this week at CES 2018. Parallel with new, more powerful and higher resolution VR headsets, improvements to NextVR’s technology are bringing promising new levels of quality and volumetric capability to VR video content.

Attitudes toward 360 video content among high-end VR users are generally quite negative. Because 360 video is so easy to capture on inexpensive hardware, there are troves of low-effort content that’s captured and/or produced poorly. Contrasted against the high resolution, high framerate, and highly interactive VR games that many users of high-end headsets are used to, 360 video content is often dismissed out of hand, assumed to be the usual highly-compressed, off-horizon, non-stereoscopic mess that seems to crop up at every turn.

NextVR, on the other hand, is one of only a handful of companies pushing the limits of production and playback quality, now approaching a level of quality and features that could properly be called ‘VR video’ rather than plain old ‘360 video’. Now with higher resolution headsets on the market, the company’s latest pipeline improvements will give even the high-end headset crowd a glimpse of the true potential of VR video.

End to End Approach

Having originally formed in 2009 as Next3D, a company creating compression and live broadcasting technology for 3DTV content, the company pivoted into NextVR following the sharp decline of the 3DTV market, and—having raised more than $100 million in venture capital since—hasn’t looked back.

NextVR co-founder David Cole, holding one of the company’s camera modules. | Photo by Road to VR

NextVR is focused on the entire pipeline from capture to broadcast to viewing, and everywhere in between. Speaking with co-founder David Cole last month about the company’s latest developments, it’s clear that NextVR does much more than just film or host video content. Cole explained how the company constructs their own camera rigs, and builds their own compression, transmission, and playback technology, putting NextVR in a fundamentally different class than filmmakers shooting 360 video on GoPro rigs and throwing it on YouTube.

It’s that end to end approach which has allowed the company to push the boundaries of VR video quality, and its latest developments offer great hope for a medium marred by expectations set by lowest common denominator content.

Improved Quality

The first major improvement that the company is rolling out is a major jump in video quality, which can finally be realized by headsets with higher resolution (in the tethered category), and more powerful processors capable of breaking through previous decoding bottlenecks (in the mobile category).

Cole says that, in the best case scenario with 8 Mbps bandwidth, the company can now stream 20 pixels per degree, up from 8.5 pixels per degree previously. Keep in mind, that’s also in stereo and at 60 FPS. The company plans to roll out this higher-res playback to supported devices (we understand Windows VR headsets only, to start) early this year, but I got to see a preview running on Samsung’s Odyssey VR headset which offers a solid step up in pixel count over headsets like the Rift and Vive (1,440 × 1,600 vs 1,080 × 1,200).
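For a sense of what those angular-resolution figures mean in raw pixels, here’s a quick back-of-the-envelope conversion; the 180° × 180° per-eye coverage used below is an assumption for illustration, not a published NextVR spec.

```python
def per_eye_pixels(ppd, h_fov_deg=180, v_fov_deg=180):
    """Convert a pixels-per-degree figure into a rough per-eye pixel budget."""
    return int(ppd * h_fov_deg), int(ppd * v_fov_deg)

print(per_eye_pixels(8.5))   # previous quality: roughly 1530 x 1530 per eye
print(per_eye_pixels(20))    # new quality:      roughly 3600 x 3600 per eye
```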

Photo by Road to VR

Watching footage from the company’s library of sports content, including soccer, basketball, and even monster truck rallies, I was very impressed with the improved quality. Not only does the higher resolution make details stand out much more clearly, it also greatly enhances the stereo effect, as the more defined imagery creates sharper edges which makes stereo depth more apparent.

That enhanced stereo effect made things look better in general, but was especially notable on thinner details, like the net of a soccer goal which was clearly separated from the rest of the scene behind it. With insufficient resolution, thin details like the net sometimes seem to mesh with the world behind, rather than clearly existing at their own discrete depth.

The 60 FPS footage looks much smoother than the 30 FPS footage that’s often seen from 360 content, and it also allows for some decent slow motion; I watched in awe as a massive monster truck hit a ramp in front of me and did a complete back flip in slow motion. In another scene, a monster truck cut a sharp turn right near me and sent detailed clumps and clouds of dirt flying in my direction; it was a great example of the image quality, stereoscopy, and slow motion, as I really felt for a moment like there was something flying toward me.

Live Volumetric Video

In addition to improved quality, NextVR is also adapting their pipeline for volumetric capture and playback, allowing the viewer’s perspective to move positionally within the video (rather than just rotationally). Adding that extra dimension is huge for immersion, since it means the scene reacts to your movements in a way that appears much more natural. Even though VR video content generally assumes a static perspective, even the small movements that you make when ‘sitting still’ must be reflected in your view to maintain high immersion. So while NextVR’s volumetric solution isn’t going to allow you to walk around room-scale footage without breaking the scene, it still stands to make a big difference for seated content.

David Cole, NextVR’s co-founder, told me that the company’s capture and playback approach is well-suited for latency-free volumetric playback, which is crucial considering one of their key value propositions is the live broadcasting of VR content.

Cole explained that because the company uses stereo orthogonal projection, wherein the scene’s pixels are projected onto a 3D mesh and transmitted to the host device, the new frames needed for positional tracking are generated locally and displayed at the headset’s own refresh rate (meaning that, just like rotational tracking, even though the footage is 60 FPS, you’ll see 90Hz tracking on a 90Hz headset). Each transmitted frame essentially has the shape of the scene built in, so when you move your head to look behind an object to reveal something that you couldn’t see before, you don’t need to wait for the server to send a new frame with your headset’s updated position (which would introduce significant latency).
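To make the ‘geometry built into each frame’ idea concrete, here’s a toy playback loop; `stream`, `headset`, and `renderer` are hypothetical interfaces sketched for illustration, not NextVR’s actual API.

```python
import time

VIDEO_FPS = 60    # new textured meshes arrive from the stream at 60 FPS
                  # (the headset itself refreshes at, say, 90 Hz)

def playback_loop(stream, headset, renderer):
    """Decode geometry-carrying video at 60 FPS, but re-render the latest mesh
    from the current head pose on every display refresh, so positional (and
    rotational) reprojection never waits on the network."""
    current = None
    next_frame_time = time.monotonic()
    while headset.is_active():
        if time.monotonic() >= next_frame_time:
            decoded = stream.try_get_frame()          # mesh + projected pixels
            if decoded is not None:
                current = decoded
            next_frame_time += 1.0 / VIDEO_FPS
        if current is not None:
            pose = headset.get_head_pose()            # freshest tracking data
            renderer.draw_mesh(current.mesh, current.texture, pose)
        headset.wait_for_vsync()                      # paces the loop at the display rate
```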

NextVR’s camera rigs are composed of varying numbers of these camera modules, depending upon what type of footage is being shot. | Photo by Road to VR

I got a chance to see the company’s volumetric playback in action. Putting on the Samsung Odyssey headset once again, I found myself sitting on a pretty beach at sunset, surrounded by big boulders and rocks, with the waves lapping near my feet in front of me. As I moved my head, I could clearly see the scene moving accurately around me, and by moving I could make out sides of the rocks that I otherwise wouldn’t be able to see from a static perspective. As Cole described, it felt latency-free (beyond the headset’s inherent latency).

The volumetric beach scene was a good tech demonstration, but I didn’t see a wide enough variety of volumetric content to get a feel for how it would handle more challenging scenes, like those with closer and/or faster moving objects. Because of the tendency for near-field objects to cast significant ‘volumetric shadows’ (blank areas where the camera is blocked from capturing due to occlusion), it’s likely that volumetric capture will be limited to certain, suitable productions.

The company says that volumetric viewing will be rolled out starting this year, coming first to on-demand content, followed by live broadcasts.



Light Field Lab Announces $7 Million Investment to Develop Light Field Displays

Light Field Lab, a company founded by ex-Lytro veterans, announced the successful completion of a $7 million seed funding round that will help the company further develop a light field-based holographic display. The seed round was led by Khosla Ventures and Sherpa Capital, with participation by R7 Partners.

The display, the company says, will enable photo-real objects to appear as if they are floating in space without the aid of eyewear. More importantly, the technology, which the company calls a “full parallax holographic display,” is said to be capable of serving up light field objects to several people simultaneously, each seeing them from their own unique position relative to the display itself.

According to Venture Beat, current display prototypes measure six inches by four inches, but the modules intended for production will be scaled up to two feet by two feet. Light Field Lab CEO and founder Jon Karafin says that eventually many of these TV set-style displays could be stitched together in order to produce high-resolution holograms that could fill larger spaces, something on the order of 100-foot-wide screens.

“Projecting holograms is just the beginning,” said Karafin. “We are building the core modules to enable a real-world Holodeck. The strategic guidance offered by our investors is critical to enable these breakthrough technologies.”

Light Field Lab says that future releases of the technology will allow users to touch and interact with holographic objects, something that could be accomplished by volumetric haptics, a method for creating three-dimensional haptic shapes in mid-air using focused ultrasound.

According to a Variety report, Light Field Lab will have a public debut at the National Association of Broadcasters Show (NAB Show) in Las Vegas next week.


NextVR Plans 6DOF, Increased Quality, and AR Support for Live VR Video in 2018

Today, live event broadcasting specialists NextVR announced three technology advancements to their platform coming this year: six degrees of freedom-enabled content, higher resolution output, and augmented reality support. A sneak peek of the technologies is being shown to media at CES 2018.

Positional tracking is the dream for immersive video content, but it is a complex hardware and software challenge. If done correctly, the improvement over common 3DoF content is significant, both in terms of comfort and presence. NextVR claim that their 6DoF solution will make obstructed views “a thing of the past” and that users will be able to naturally shift their vantage point to look around a referee or spectator as they would in reality.

The company haven’t provided details about their process, but 6DoF support has been on their road map for some time, having spoken about the use of light field technology for this purpose in 2015. High quality volumetric video has been demonstrated with enormous camera rigs from companies such as HypeVR and Lytro, but NextVR’s solution is likely to be more compact for the practicalities of event capture and broadcast; on-demand 6DoF content in 2018 is expected to be followed by live 6DoF broadcasting.

“VR is the most demanding visual medium ever created and we’re just beginning to deliver on its potential to convincingly create experiences that mimic reality,” says David Cole, NextVR Co-Founder and CEO. “The ability to move naturally inside the experience and the increased ability to see detail add a critical level of presence and realism.”

Higher fidelity output is coming to NextVR early this year, as a result of platform optimisations. The company says it has “exploited and enhanced the detail capture capability of its proprietary VR cameras and encoder infrastructure,” which enables “much higher resolution and higher detailed playout on compatible VR headsets.”

In addition, NextVR plans to “broadly support” AR devices in mid-2018. Exactly how NextVR’s popular live event content will be presented in augmented reality is unclear, but the company says “this cohesive blend of real and transmitted reality allows for real life social engagement while still delivering an unmatched entertainment experience.”

Launched in 2009, NextVR has many years of experience in live broadcast, transitioning from stereoscopic 3D content delivery as Next3D to a VR-focused platform. In recent years, the company has concentrated its efforts on mobile VR platforms such as Gear VR and Daydream, and only recently introduced support for 6DoF-capable hardware in the form of Windows Mixed Reality and PlayStation VR apps in October 2017. While the Oculus Rift and HTC Vive surprisingly still lack support, the company has further plans to support new hardware this year, “including affordable and powerful all-in-one mobile headsets.”

We have feet on the ground at CES, so check back for all things virtual and augmented.


Exclusive: Lytro Reveals Immerge 2.0 Light-field Camera with Improved Quality, Faster Captures

Lytro’s Immerge light-field camera is meant for professional high-end VR productions. It may be a beast of a rig, but it’s capable of capturing some of the best looking volumetric video that I’ve had my eyes on yet. The company has revealed a major update to the camera, the Immerge 2.0, which, through a few smart tweaks, makes for much more efficient production and higher quality output.

Light-field specialist Lytro, which picked up a $60 million Series D investment earlier this year, is making impressive strides in its light-field capture and playback technology. The company is approaching light-field from both live-action and synthetic ends; last month Lytro announced Volume Tracer, a software which generates light-fields from pre-rendered CG content, enabling ultra-high fidelity VR imagery that retains immersive 6DOF viewing.

Immerge 2.0

Immerge 2.0 | Image courtesy Lytro

On the live-action end, the company has been building a high-end light-field camera which they call Immerge. Designed for high-end productions, the camera is actually a huge array of individual lenses which all work in unison to capture light-fields of the real world.

At a recent visit to the company’s Silicon Valley office, Lytro exclusively revealed to Road to VR the latest iteration of the camera, which they’re calling Immerge 2.0. The form-factor is largely the same as before—an array of lenses all working together to capture the scene from many simultaneous viewpoints—but you’ll note an important difference if you look closely: the Immerge 2.0 has alternating rows of cameras pointed off-axis in opposite directions.

With the change to the camera angles, and tweaks to the underlying software, the lenses on Immerge 2.0 effectively act as one giant camera that has a wider field of view than any of the individual lenses, now 120 degrees (compared to 90 degrees on the Immerge 1.0).

Image courtesy Lytro

In practice, this can make a big difference to the camera’s bottom line: a wider field of view allows the camera to capture more of the scene at once, which means it requires fewer rotations of the camera to capture a complete 360 degree shot (now with as few as three spins, instead of five), and provides larger areas for actors to perform. A new automatic calibration process further speeds things up. All of this means increased production efficiency, faster iteration time, and more creative flexibility—all the right notes to hit if the goal is to one day make live action light-field capture easy enough to achieve widespread traction in professional VR content production.

Ever Increasing Quality

Lytro has also been refining their software stack which allows them to pull increasingly higher quality imagery derived from the light-field data. I saw a remastered version of the Hallelujah experience which I had seen earlier this year, this time outputting 5K per-eye (up from 3.5K) and employing a new anti-aliasing-like technique. Looking at the old and new version side-by-side revealed a much cleaner outline around the main character, sharper textures, and distant details with greater stereoscopy (especially in thin objects like ropes and bars) that were previously muddled.

What’s more, Lytro says they’re ready to bump the quality up to 10K per-eye, but are waiting for headsets that can take advantage of such pixel density. One interesting aspect of all of this is that many of the quality-enhancing changes that Lytro has made to their software can be applied to light-field data captured prior to the changes, which suggests a certain amount of future-proofing available to the company’s light-field captures.

– – — – –

Lytro appears to be making steady progress on both live action and synthetic light-field capture & playback technologies, but one thing that’s continuously irked those following their story is that none of their light-field content has been available to the public—at least not in a proper volumetric video format. On that front, the company promises that’s soon to be remedied, and has teased that a new piece of content is in the works and slated for a public Q1 release across all classes of immersive headsets. With a bit of luck, it shouldn’t be too much longer until you can check out what the Immerge 2.0 camera can do through your own VR headset.


HypeVR Uses ARKit as a Portal Into Volumetric Video Content

HypeVR is developing volumetric video tech which we called “a glimpse at the future of VR video,” when we got to see it back in January. Now the company has adapted their system for Apple’s ARKit, showing how phone-based augmented reality can be used to view volumetric footage, and allow users to step through a portal into the footage.

Volumetric video contains spatial data that allows users to physically move around inside of it, similar to real-time rendered VR games. Volumetric video is far more immersive than the 360 video footage you can readily find today, because the perspective isn’t locked into one static location.

I was impressed with the level of immersion and quality when I got to see some of HypeVR’s volumetric video footage running through a VR headset, but with new AR tracking tools coming from Apple (ARKit), Google (ARCore), and others, there’s an opportunity to bring a huge audience part way into these experiences.

HypeVR is experimenting on that front, and recently integrated their tech with ARKit’s tracking, giving any capable iPhone the ability to view the company’s volumetric video footage on the phone’s display.

The video above shows a volumetric video scene from HypeVR being played back on an iPhone SE (which is of the ‘6S’ generation). The company is employing a ‘portal’ as a way of navigating from the real world into the volumetric space; it’s a neat trick and definitely a cool demo, if a bit of a novelty, though a similar mechanism might be an interesting way to transition between volumetric video scenes in the future, or to serve as a central hub for ‘browsing’ from one volumetric video to the next.

The AR tracking seen here is indeed cool to see in action, but what’s happening under the hood is equally interesting. Since rendering volumetric video can be challenging for a mobile device (not to mention, take up a ton of storage), CEO Tonaci Tran says the company has devised a streaming scheme; the phone is actually relaying its movements to a cloud server which then renders the appropriate video frame and sends it back to the phone, all fast enough for a hand-held AR experience.
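A minimal sketch of that split-rendering loop is below; the endpoint, payload shape, and helper callables are assumptions for illustration, not HypeVR’s actual protocol.

```python
import requests  # plain HTTP used here purely for illustration

RENDER_ENDPOINT = "https://example-render-server/frame"   # hypothetical URL

def remote_render_loop(get_camera_pose, show_frame):
    """Upload the phone's latest tracked pose, let the server render that view
    of the volumetric video, and display whatever compressed frame comes back."""
    while True:
        pose = get_camera_pose()           # e.g. a 4x4 camera transform as nested lists
        try:
            resp = requests.post(RENDER_ENDPOINT, json={"pose": pose}, timeout=0.2)
            resp.raise_for_status()
        except requests.RequestException:
            continue                        # drop this frame and keep tracking fresh
        show_frame(resp.content)            # server-rendered frame for the current pose
```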

One of the crazy camera rigs that HypeVR uses to capture volumetric video. | Image courtesy HypeVR

That means the output is being drawn from the same source data set that would playback on a high-end VR headset and require a beefy GPU. This not only lowers the computational bar enough that even a last-gen iPhone can play back the volumetric video, but it also means users don’t need to download a massive file.

Tran tells me that the company also plans to support volumetric video playback via Google’s ARCore. Between ARKit and ARCore, there’s expected to soon be hundreds of millions of devices out there that are capable of this sort of tracking, and HypeVR intends to launch an ARKit app in early 2018.

Tran says that the ultimate vision for HypeVR revolves around distribution and monetization of volumetric video. The company is in the process of raising a Series A investment and encourages inquiries to be sent through their website.
