Apple Reveals USDZ – A New AR File Format Made With Pixar, Adobe Bringing Support To Creative Cloud

iOS 12 has made its official debut at Apple’s WWDC 2018 event and has brought quite a lot of new toys with it. The team kicked off discussions by highlighting how the tech conglomerate is continuing its push into augmented reality (AR) technology. It was almost a year to the day since Apple revealed its AR developer platform ARKit, and as then, Apple’s Craig Federighi took to the stage to talk attendees through not just the rumoured ARKit 2, as reported earlier on VRFocus, but other developments too.

The first announcement was a brand-new file format specifically for working with AR, something developed with companies such as Pixar – “some of the greatest minds in 3D.”

“AR is transformational technology,” Federighi told the audience at San Jose’s McEnery Convention Center. “Bringing experiences into the real world enables all kinds of new experiences, changing the way we have fun and the way we work. In iOS 12 we wanted to make an easy way to experience AR across the system.”

The new file format is called USDZ (based on Universal Scene Description), which has a focus on sharing content and can be used or viewed in everything from internal file views, to the Safari web browser, to email, enabling you to place 3D models into the real world. “It’s something like AR Quick Look,” Federighi explained.

Apple confirmed that it would be working with leading companies in 3D tools and libraries to bring USDZ support to their services, with Allegorithmic (developer of Substance), PTC, Turbosquid, Adobe, Autodesk, Sketchfab and Quixel all namechecked on stage.

“At Adobe we believe that augmented reality is an incredibly important technology. And with ARKit, Apple is by far the most powerful platform for AR,” added Adobe‘s Executive Vice President and CTO Abhay Parasnis, who appeared on stage to explain more about the company’s work on USDZ, something he described as “a pretty big deal”. He confirmed that USDZ support would be coming to Adobe’s Creative Cloud set of applications and services.

“With Creative Cloud designers and developers will now be able to use familiar apps – apps that they know and love, like Photoshop or Dimension – to create amazing AR content, and bring it easily via USDZ.”

Parasnis also confirmed a new Adobe creative app for iOS, specifically for designing AR-related content, which will enable developers to bring anything from text to images and video on Creative Cloud directly into a WYSIWYG AR editing environment.

VRFocus will bring you more news on the AR developments from WWDC shortly.

Insta360 Announces New High-Res VR Playback and Video Transmission

As 360-degree video capture becomes increasingly popular, the range of tools, hardware and software becomes ever more extensive and sophisticated. Virtual reality (VR) and 360-degree camera company Insta360 have now announced new technologies aimed at helping to simplify the VR production workflow for users at all levels.

The tools it has announced include an integrated VR post-production extension for Adobe Premiere Pro, a long-range video transmitter and a new technology for playback of high-quality immersive content on smartphones and VR headsets.

When delving into capturing and editing 360-degree video, it has previously been necessary to learn an entirely new set of software and tools, slowing down production significantly. Insta360 have recognised this problem and produced an extension for Adobe Premiere Pro that lets users stitch 360-degree video captured by the Insta360 Pro camera directly within Premiere Pro.

To save time, during export only the footage that made the final cut is stitched together, reducing processing time and saving system resources.

When filming in 360 degrees, a director and crew usually need to take fairly exotic precautions to avoid appearing in shot, in some cases being forced to hide behind – or even inside – props. The Insta360 Farsight is a remote video transmitter that lets the Insta360 Pro stream high-res video over long distances.

Insta360 are also introducing a new system called CrystalView that breaks a 360-degree video into many segments and selectively renders in high resolution only the segments of the video the viewer is actually observing, while downgrading the other areas, allowing for much higher quality playback than would otherwise be possible.
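To illustrate the kind of viewport-adaptive rendering CrystalView describes, the sketch below picks which columns of a tiled 360-degree frame to decode at full resolution based on where the viewer is looking. The tile grid, field of view and quality tiers are assumptions for illustration only, not Insta360’s actual implementation.

```python
# Hypothetical illustration of viewport-adaptive tiling in the spirit of
# CrystalView; the tile layout, FOV and quality tiers are assumptions,
# not Insta360's actual algorithm.

TILE_COLS = 8        # split the panorama into 8 yaw columns
FOV_DEG = 100        # assumed horizontal field of view of the headset

def tile_center_yaw(col):
    """Yaw angle (degrees) of a tile column's centre on the 360-degree sphere."""
    return (col + 0.5) * (360.0 / TILE_COLS) - 180.0

def angular_distance(a, b):
    """Smallest difference between two yaw angles, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def select_tile_quality(viewer_yaw_deg):
    """Return a per-column quality level: 'high' inside the viewport, 'low' outside."""
    qualities = []
    for col in range(TILE_COLS):
        dist = angular_distance(tile_center_yaw(col), viewer_yaw_deg)
        qualities.append("high" if dist <= FOV_DEG / 2 else "low")
    return qualities

# Example: a viewer looking straight ahead (yaw 0 degrees) gets high-res tiles
# only around the centre of the frame; the rest are decoded at reduced resolution.
print(select_tile_quality(0.0))
```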

CrystalView will be introduced into the Insta360 Player, the free video playback app which is available on iOS, Android, Oculus Go and Samsung Gear VR. Insta360 are making the CrystalView SDK available for developers to use the technology in their own products and media players.

For future coverage of new and updated VR and 360-degree products and services, keep checking back with VRFocus.

Infinite Walking In VR Thanks To A Clever Brain Hack

Locomotion in virtual reality (VR) has been something of a challenge for developers. For the most immersive experience, VR should track the movement of the user, moving when they do, but this comes with several issues. These might now be solved thanks to the work of a group of researchers.

Computer scientists from Stony Brook University, Nvidia and Adobe have been working together to create a new framework that allows VR users to experience infinite walking in the virtual world, even though they are limited to a small physical space in the real world.

The framework utilises a natural function of the human eye in order to ‘hack’ the brain. The work revolves around something called the saccade, a rapid movement the human eye makes when looking at different points in our field of vision, such as when scanning a room or viewing a painting. These saccades occur without conscious direction and can happen several times a second.

When a saccade is happening, the brain ignores the input coming from the eye to avoid confusion, something called ‘saccadic suppression’. The process used by the research team takes advantage of this by using head and eye-tracking to detect when saccadic suppression is occurring and then redirecting the user’s walking path, making them walk in a circle without being consciously aware of it.
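As a rough illustration of how such a system might work, the sketch below flags saccades from an eye tracker’s gaze samples by thresholding angular gaze velocity, and accumulates a small hidden camera rotation only during those moments. The velocity threshold and per-saccade rotation are illustrative assumptions, not the researchers’ actual parameters.

```python
# Minimal sketch of saccade-gated redirection, assuming a gaze stream of
# (timestamp_s, yaw_deg, pitch_deg) samples from an eye tracker. The velocity
# threshold and rotation gain are illustrative assumptions, not the values
# used in the Stony Brook / NVIDIA / Adobe study.

SACCADE_VELOCITY_DEG_PER_S = 180.0   # gaze speeds above this are treated as saccades
ROTATION_PER_SACCADE_DEG = 0.5       # small virtual-camera yaw injected per saccade

def gaze_speed(prev, curr):
    """Angular speed of the gaze between two samples, in degrees per second."""
    dt = curr[0] - prev[0]
    if dt <= 0:
        return 0.0
    dyaw, dpitch = curr[1] - prev[1], curr[2] - prev[2]
    return (dyaw ** 2 + dpitch ** 2) ** 0.5 / dt

def redirect(gaze_samples, steer_sign=+1):
    """Accumulate hidden camera rotation applied only while saccades occur."""
    total_rotation = 0.0
    for prev, curr in zip(gaze_samples, gaze_samples[1:]):
        if gaze_speed(prev, curr) > SACCADE_VELOCITY_DEG_PER_S:
            total_rotation += steer_sign * ROTATION_PER_SACCADE_DEG
    return total_rotation

# Example: three samples where the gaze jumps 12 degrees in 20 ms (a saccade),
# then barely moves; only the jump earns 0.5 degrees of hidden rotation.
samples = [(0.00, 0.0, 0.0), (0.02, 12.0, 1.0), (0.04, 12.1, 1.0)]
print(redirect(samples))
```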

“In VR, we can display vast universes; however, the physical spaces in our homes and offices are much smaller,” says lead author of the work, Qi Sun, a PhD student at Stony Brook University and former research intern at Adobe Research and NVIDIA. “It’s the nature of the human eye to scan a scene by moving rapidly between points of fixation. We realized that if we rotate the virtual camera just slightly during saccades, we can redirect a user’s walking direction to simulate a larger walking space.”

The research paper produced is titled ‘Towards Virtual Reality Infinite Walking: Dynamic Saccade Redirection’ and the team will be presenting their work at SIGGRAPH 2018, which is due to take place from 12th-16th August in Vancouver, Canada.

For news on the latest developments in VR technology, keep checking back with VRFocus.

The VR Job Hub: Mid May Opportunities Around The Globe

With the royal wedding now come and gone, it seems like a good time to look for a new job opportunity within the immersive industry. As always, VRFocus is here to help with a selection of different positions available for your viewing pleasure below.

With positions open for software engineers, community managers, artists and even producers, there is plenty to look at this week in another entry of The VR Job Hub.

Every weekend VRFocus gathers together a number of open positions from across the virtual reality (VR), augmented reality (AR) and mixed reality (MR) industries, in locations around the globe, to help make finding the ideal job easier. Below is a selection of roles currently accepting applications across a number of disciplines, all within departments and companies that focus on VR, AR and MR.

Location | Company | Role | Link
Culver City, CA, USA | AfterNow | Unity3D Developer | Click Here to Apply
Bangalore, India | AfterNow | Unity3D Developer | Click Here to Apply
Raleigh, NC, US | Downpour Interactive | Community Manager | Click Here to Apply
Raleigh, NC, US | Downpour Interactive | Software Engineer | Click Here to Apply
Stockholm, Sweden | Fast Travel Games | 3D Artist | Click Here to Apply
Portland, OR, US | Happy Finish | 3D Artist | Click Here to Apply
Portland, OR, US | Happy Finish | Lead 3D Artist | Click Here to Apply
London, UK | Happy Finish | Junior Producer | Click Here to Apply
Bristol, UK | University of the West of England | Senior Lecturer in Virtual & Extended Realities | Click Here to Apply
San Mateo, CA, US | Jaunt | XR Producer | Click Here to Apply
Costa Rica or Florida | SweetRush Inc | UX/UI VR Designer | Click Here to Apply
San Jose, CA, US | Adobe | Senior Content Services Engineer, Augmented Reality | Click Here to Apply

Don’t forget, if there was nothing in this week’s feature that was a good fit for you, you can always look at the previous edition of The VR Job Hub.

As always, if you are an employer looking for someone to fill an immersive technology related role – regardless of the industry – don’t forget you can send us the lowdown on the position and we’ll be sure to feature it in the following week’s feature. Details should be sent to keva@vrfocus.com and pgraham@vrfocus.com.

Check back with VRFocus next Sunday at the usual time of 3PM (GMT) for another selection of jobs from around the industry.

Adobe Reveal New VR Application Project New View

During Adobe’s recent Summit event held in London, the company revealed a new virtual reality (VR) application it has in the works by the name of Project New View.


Project New View is a new VR-and-voice version of Adobe’s business and marketing analytics tool, making it an innovative social VR data experience. Taking all the data that users of the analytics tools would usually have to click through and allowing them to explore it visually within virtual space is an exciting new move. The use of voice actions also allows users to reduce the overall number of inputs required from a mouse and keyboard, and to make use of Adobe’s artificial intelligence (AI) to help them process all the data in front of them.

One way to see the benefit of this VR approach to marketing analytics data is the distribution of visitors to a website. Rather than a traditional map displaying the count of visitors, or even some graphics, users can view a 3D globe model and rotate it to see the data for worldwide users, all at the flick of a wrist. Though this all sounds a bit over the top, the application makes a great deal of sense when you take into account the social aspect of it. Users being able to come together within virtual space to review and discuss the data would be a great benefit to many, and allowing those without a VR headset to partake as well would further the application’s reach.
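As a purely hypothetical illustration of that globe view, the snippet below maps visitor counts keyed by latitude and longitude onto points of a unit sphere; the sample data and mapping are assumptions for illustration, not Adobe’s implementation.

```python
import math

# Hypothetical sketch of placing per-location visitor counts on a 3D globe,
# in the spirit of the Project New View demo described above. The data and the
# unit-sphere mapping are illustrative assumptions only.

def latlon_to_xyz(lat_deg, lon_deg, radius=1.0):
    """Convert latitude/longitude in degrees to a point on a sphere."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    x = radius * math.cos(lat) * math.cos(lon)
    y = radius * math.sin(lat)
    z = radius * math.cos(lat) * math.sin(lon)
    return x, y, z

visitors = {("London", 51.5, -0.1): 12000, ("San Jose", 37.3, -121.9): 8500}
for (name, lat, lon), count in visitors.items():
    pos = tuple(round(c, 3) for c in latlon_to_xyz(lat, lon))
    print(f"{name}: marker at {pos}, visitors={count}")
```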


As mentioned, Project New View can make use of voice commands to help make navigating the application easier. What is interesting, though, is the idea that these voice control features could be brought to other programs as well, allowing for opportunities to improve workflows and increase productivity.

Though only shown as a working demonstration, with details rather thin on how Project New View will develop in future, it is clear that Adobe is keen to create new experiences within virtual space, alongside AI and machine learning. VRFocus will be sure to bring you all the latest on Adobe’s developments moving forward, so stay tuned for more.

Researchers Exploit Natural Quirk of Human Vision for Hidden Redirected Walking in VR

Researchers from Stony Brook University, NVIDIA, and Adobe have devised a system which hides so-called ‘redirected walking’ techniques using saccades, natural eye movements which act like a momentary blind spot. Redirected walking changes the direction that a user is walking to create the illusion of moving through a larger virtual space than the physical space would allow.

Update (4/27/18): The researchers behind this work have reached out with the finished video presentation for the work, which has been included below.

Original Article (3/28/18): At NVIDIA’s GTC 2018 conference this week, researchers Anjul Patney and Qi Sun presented their saccade-driven redirected walking system for dynamic room-scale VR. Redirected walking uses novel techniques to steer users in VR away from real-world obstacles like walls, with the goal of creating the illusion of traversing a larger space than is actually available to the user.

There are a number of ways to implement redirected walking, but the strengths of this saccade-driven method are that it’s hidden from the user, widely applicable to VR content, and dynamic, allowing the system to direct users away from objects newly introduced into the environment, and even moving objects, the researchers say.

The basic principle behind their work is an exploitation of a natural quirk of human vision—saccadic suppression—to hide small rotations to the virtual scene. Saccades are quick eye movements which happen when we move our gaze from one part of a scene to another. Instead of moving in a slow continuous motion from one gaze point to the next, our eyes quickly dart about, when not tracking a moving object or focused on a singular point, a process which takes tens of milliseconds.

An eye undertaking regular saccades

Saccadic suppression occurs during these movements, essentially rendering us blind for a brief moment until the eye reaches its new point of fixation. With precise eye-tracking technology from SMI and an HTC Vive headset, the researchers are able to detect and exploit that temporary blindness to hide a slight rotation of the scene from the user. As the user walks forward and looks around the scene, it is slowly rotated, just a few degrees per saccade, such that the user reflexively alters their walking direction in response to the new visual cues.

This method allows the system to steer users away from real-world walls, even when it seems like they’re walking in a straight line in the virtual world, creating the illusion that the virtual space is significantly larger than the corresponding physical space.

A VR backpack allows a user at GTC 2018 to move through the saccadic redirected walking demo without a tether. | Photo by Road to VR

The researchers have devised a GPU accelerated real-time path planning system, which dynamically adjusts the hidden scene rotation to redirect the user’s walking. Because the path planning routine operates in real-time, Patney and Sun say that it can account for objects newly introduced into the real world environment (like a chair), and can even be used to steer users clear of moving obstacles, like pets or potentially even other VR users inhabiting the same space.
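A trivially simplified sketch of that steering decision is shown below: each frame, the planner compares the user’s predicted position after a slight left or right turn and applies the hidden rotation in whichever direction keeps them further from the nearest physical obstacle. The geometry helpers and probe angle are assumptions for illustration, not the paper’s GPU-accelerated solver.

```python
import math

# Illustrative sketch of a per-frame steering decision: pick the sign of the
# hidden rotation so the user's predicted path curves away from the nearest
# physical obstacle. Helpers and thresholds are assumptions, not the paper's
# real-time GPU path planner.

def predicted_position(pos, heading_deg, step=0.5):
    """Where the user will be after walking `step` metres along their heading."""
    h = math.radians(heading_deg)
    return (pos[0] + step * math.cos(h), pos[1] + step * math.sin(h))

def nearest_obstacle_distance(point, obstacles):
    return min(math.dist(point, obs) for obs in obstacles)

def choose_rotation_sign(pos, heading_deg, obstacles, probe_deg=10.0):
    """Compare turning slightly left vs right and steer toward the clearer side."""
    left = predicted_position(pos, heading_deg + probe_deg)
    right = predicted_position(pos, heading_deg - probe_deg)
    clearer_left = nearest_obstacle_distance(left, obstacles)
    clearer_right = nearest_obstacle_distance(right, obstacles)
    return +1 if clearer_left > clearer_right else -1

# Example: a user heading along +x with a chair just to their right gets
# steered left (+1) to keep clear of it.
print(choose_rotation_sign((0.0, 0.0), 0.0, obstacles=[(0.6, -0.3), (2.0, 2.0)]))
```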

The research is being shown off in a working demo this week at GTC 2018. An academic paper based on the work is expected to be published later this year.


Insta360 Extension to Make Editing 360 Video in Adobe Premiere Faster, Higher Quality

Today 360 camera maker Insta360 announced a new extension for Adobe Premiere Pro CC which will bring a complete 360 post-production workflow into the video editing program. For filmmakers shooting with the Insta360 Pro camera, the company promises faster editing and better quality thanks to the plugin.

Premiere Pro is one of the most popular pieces of professional video editing software, and while Adobe has actually been fairly proactive in bringing 360 video editing tools to the application, filmmakers shooting in 360 have had to rely on external software to do the initial ‘stitching’—taking the various camera views from a 360 camera and fusing them together into one cohesive piece of 360 footage.


Thanks to a new extension soon to be released from Insta360, the company says that the complete editing workflow, from initial import of raw camera views to final 360 output, can now be handled inside of Premiere Pro. The extension is said to work specifically with footage from the Insta360 Pro camera—the company’s $3,500 360 3D camera—which Adobe calls “industry-leading.”

The extension, which is free and will be released this quarter, says Insta360, will offer ‘no-stitch’ editing, which saves time by eliminating the need to stitch all of the raw footage (given that much of it will get trimmed out in the editing process anyway). Instead, the imported footage sees a simplified quick-stitching pass which creates a “proxy video,” a lower quality version of the raw footage that can be more quickly and easily manipulated in Premiere. After all the cutting, editing, and application of effects is complete, the raw footage gets a high quality stitch and is rendered to a final output precisely based on the edits made to the proxy video.
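Schematically, the workflow described above might look like the sketch below; the stitcher functions and data structures are hypothetical placeholders meant to show the order of operations, not Insta360’s actual extension API.

```python
# A schematic sketch of the 'no-stitch' proxy workflow described above, using
# hypothetical stitcher functions; it illustrates the order of operations,
# not Insta360's actual extension.

def quick_stitch(raw_views):
    """Fast, low-quality stitch used only to create an editable proxy."""
    return {"quality": "proxy", "frames": raw_views}

def high_quality_stitch(raw_views, frame_ranges):
    """Slow, high-quality stitch applied only to the frames that survived the cut."""
    kept = [raw_views[start:end] for start, end in frame_ranges]
    return {"quality": "final", "segments": kept}

def edit_session(raw_views, edit_decision_list):
    # 1. Import raw camera views and build a lightweight proxy for editing.
    proxy = quick_stitch(raw_views)
    # 2. The editor cuts and applies effects against the proxy in Premiere Pro;
    #    `edit_decision_list` records which frame ranges made the final cut.
    # 3. On export, only those ranges are stitched at full quality in one pass.
    final = high_quality_stitch(raw_views, edit_decision_list)
    return proxy, final

# Example: keep two shots from an 8-frame take.
proxy, final = edit_session(list(range(8)), [(0, 3), (5, 8)])
print(final)
```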

Image courtesy Insta360

Not only does this approach stand to save editors time in the post-production process, Insta360 says, but it also reduces the number of video processing passes. Whereas an external stitching tool would take one pass to turn the raw footage into stitched 360 footage, with a final output then rendered from that footage after editing, the Insta360 Premiere extension does it all in a single pass, which means higher quality output thanks to less re-processing of the footage.

It isn’t clear if the Insta360 Premiere Pro extension will work with any footage other than what’s shot with the Insta360 Pro; we’ve reached out to the company for clarification.

Update (4/23/18): A spokesperson for Insta360 said that the extension only works with footage from the Insta360 Pro because it uses a proprietary stitching algorithm that’s tailored to the camera.


Want More Space for Roomscale VR? NVIDIA Research can do This Virtually

Roomscale virtual reality (VR) technology is a wonderful thing. It allows you to explore a virtual world with your own feet, wandering round a room and interacting with objects as if you were really there. While the HTC Vive system, for example, can cover an area of 15ft x 15ft, not everyone has that amount of space to work with, meaning walls and other furniture can quickly be bumped into if you’re not careful. So during the GPU Technology Conference (GTC) 2018, NVIDIA Research demonstrated a new technique it’s been working on in collaboration with Adobe and Stony Brook University to make physical areas seem much bigger in VR.

Image: NVIDIA redirected walking path

Called Saccadic Redirected Walking, the technique utilises a quirk in your eyes where involuntary movements temporarily blind you a few times per second. These movements, known as saccades, are imperceptible because they last only tens of milliseconds.

So during those fractions of a second the technique rotates the scene ever so slightly. Without the user noticing, this guides them along a physical path that’s slightly different to the one they’re viewing in the virtual world. As shown in the image above, this means the physical path of the player can be small while in VR it seems far larger: imagine walking round a grand hall from just your living room. This also helps with avoiding objects like walls, or other players if systems are set up nearby.

NVIDIA is demoing the technique this week at the VR Village using Quadro GPUs, HTC Vive and SMI eye tracking, with guests able to walk around a huge virtual Alice in Wonderland-like chess board with pieces the size of people, all within a 15×15 foot booth.


The team will be taking the research to SIGGRAPH later this year to present a paper on the technique. As development progresses and further details are released, VRFocus will keep you updated.

Walkable Videos with Adobe Project Sidewinder

The Adobe MAX conference took place again this week. To mark the event, Adobe has already released a new tool called Project CloverVR, which was first shown at last year’s “Sneaks”. Sneaks are projects that are innovative but do not necessarily make it into a final product. At this year’s Sneaks, Project Sidewinder is another exciting virtual reality project, and we hope it will make the same journey as CloverVR.

Walkable Videos with Project Sidewinder

Adobe Sidewinder makes it possible to process a 360-degree video so that slight movement within the scene becomes possible. To do this, the software uses conventional 3D 360-degree footage. The depth information allows a look behind objects to be simulated. Of course, what is really behind an object cannot be shown, since it was never recorded; instead, the elements of the image are stretched so that at least the illusion arises that you can actually move around within the scene.
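As a rough sketch of the idea, the snippet below shifts pixels by an amount inversely proportional to their depth when the viewer’s head moves sideways, so nearby content appears to move more than distant content. The scale factor and per-pixel treatment are illustrative assumptions, not Adobe’s Project Sidewinder algorithm.

```python
# Rough sketch of depth-based reprojection as described above: pixels are
# shifted by an amount inversely proportional to their depth when the viewer's
# head moves sideways, stretching nearby content more than distant content.
# The scale factor is an illustrative assumption, not Adobe's implementation.

def parallax_shift(pixel_x, depth_m, head_offset_m, scale=100.0):
    """New horizontal pixel position after a small sideways head movement."""
    # Nearby pixels (small depth) move a lot; far pixels barely move.
    return pixel_x + scale * head_offset_m / max(depth_m, 0.1)

# A close object (0.5 m) versus a distant wall (10 m) for a 5 cm head shift:
print(parallax_shift(400, 0.5, 0.05))   # shifts noticeably
print(parallax_shift(400, 10.0, 0.05))  # barely moves
```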

Adobe’s application will therefore not make videos fully walkable. However, immersion when watching 360-degree videos could increase significantly, since the distortion will hardly be noticeable for slight movements. It remains to be seen whether Adobe decides to bring this technology to market. Adobe presents further Sneaks projects on its blog.

(Source: Upload VR)


Adobe Answers Facebook With ‘Sidewinder’ Volumetric Video Technique

Earlier this year Facebook debuted a technique for capturing volumetric content from a 360-degree camera — meaning you can move freely around inside the footage to see the action from different angles.  This week Adobe debuted a project that seems to produce a very similar effect.

The concept is called Project Sidewinder and it was presented on stage during Adobe’s MAX creator’s conference in Las Vegas. The project may or may not become a part of the company’s products. It was presented as one of 11 concepts this week showcasing Adobe’s future-facing efforts to help creators work more efficiently. Another VR-related concept Adobe showed, called SonicScape, looks like it could easily fit into the workflow of people using Adobe’s Premiere video editing software.

The techniques from both Adobe and Facebook look like they are extremely limited, with some stretched and distorted artifacts becoming more and more visible the further you move away from the camera’s actual position. Nonetheless, for small movements in VR the techniques can powerfully enhance a person’s sense of presence in captured content.

Check it out in the video from MAX as Silicon Valley’s Kumail Nanjiani tests out the feature.
