NVIDIA GTC: Text Input Is the Killer App in VR

At the GPU Technology Conference (GTC) in San Jose, Morgan McGuire outlined what he and his future employer NVIDIA are currently working on. The focus was on dramatically improved image quality without increased latency, as well as an alternative way of entering text in VR.

Text Input in VR: The Keyboard Has to Go

Morgan McGuire is currently still a professor of computer science at Williams College before he moves into research at graphics card maker NVIDIA. In his talk at GTC he focused in particular on text input: it is currently the killer app, VRFocus quotes McGuire as saying. VR provides access to other content, and headsets are setting the standard for how computers are used. Classic keyboards are therefore no longer a practical interface, and companies that want to work with VR have to think about the problem of text input. The professor did not, however, offer a concrete solution.

Graphics Quality: Six More Generations to Perfection

Looking further ahead on the graphics side, McGuire said graphics systems would need to process 100,000 megapixels per second to exhaust the perception of the human eye and render things as realistically as possible. Today's systems manage around 450 megapixels per second, and it will take another six hardware generations to reach that goal.
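
Taken at face value, those figures imply a total speed-up of roughly 222x. The short sketch below works out what that means per generation; the idea of a constant per-generation improvement factor is my assumption for illustration, not something McGuire stated.

```python
# Rough arithmetic behind the "six more generations" estimate.
# The throughput figures come from the talk; a constant per-generation
# speed-up is assumed here purely for illustration.

current_mpix_per_s = 450        # today's throughput cited in the talk
target_mpix_per_s = 100_000     # throughput needed to saturate human vision
generations = 6                 # McGuire's estimate

gap = target_mpix_per_s / current_mpix_per_s
per_generation_factor = gap ** (1 / generations)

print(f"Total speed-up needed: {gap:.0f}x")                        # ~222x
print(f"Required per-generation factor: {per_generation_factor:.2f}x")  # ~2.46x
```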

The professor then drew a connection between latency and graphics quality. Latency can already be reduced today, but only at the expense of image quality. VR content therefore needs new approaches, and NVIDIA is working on several solutions. One of them is path rendering, a combination of rasterization and GPU compute. Last year the Chinese GAPS research team presented such an approach and was able to speed up rendering by a factor of 2.5 to 30.


Nurulize Wins GTC’s VR Content Showcase

Earlier this week NVIDIA held its annual GPU Technology Conference (GTC) 2017 with CEO Jen-Hsun Huang giving a keynote address on the third and final day. As part of the event the company held the second VR Content Showcase competition, the winner of which has just been announced as Los Angeles-based software developer Nurulize.

The showcase featured 120 entrants with pitches by 10 finalists. To actually participate, contestants needed to have raised no more than $5 million USD in funding and come from industries other than videogames. Nurulize took home the grand prize of $15,000 in cash plus three NVIDIA Quadro P6000 GPUs. Phillip Lunn, CEO and co-founder at Nurulize, and Scott Metzger, Nurulize co-founder and chief creative officer, both said they plan to distribute the cash to employees.


Each company had to impress a panel of judges in just five minutes, then they were grilled by Jeff Herbst, vice president of business development at NVIDIA, and Mark Rein, Epic Games’ vice president and co-founder.

Nurulize won with its GPU-accelerated Atom View software, aimed at simplifying and speeding the creation of virtual reality (VR) experiences. “It’s great to be selected and recognized for the work we’ve been doing,” said Phillip Lunn, CEO and co-founder at Nurulize, on the Nvidia blog. “It’s a significant validation of the path we’re on and the technology we’re developing.”

The company’s competitors included Cavrnus Inc., Doghead Simulations, FundamentalVR, Funique VR, Kalloc Studios, Opaque Media Group / Opaque Space, Sheencity, Theia Interactive and von waldkirch.

During GTC 2017 Nvidia made several VR-related announcements, such as the launch of its VRWorks Audio software development kit (SDK) and VRWorks 360 Video SDK, while during his keynote address Huang revealed Project Holodeck, demonstrated alongside Swedish supercar manufacturer Koenigsegg.

VRFocus will continue its coverage of Nvidia, reporting back with all of its latest VR related news and updates.

NVIDIA Unveils Proof-of-Concept Multi-Headset PC

Currently, multi-user virtual reality (VR) in one location isn’t exactly streamlined for the most part. A setup requires multiple PCs depending on how many headsets need to be run, taking up space whilst adding a massive network of cables. So graphics card manufacturer NVIDIA has come up with an experimental solution: an all-in-one PC that can run four HTC Vives at the same time.

At NVIDIA’s GPU Technology Conference (GTC) today, the company showcased the proof-of-concept PC that uses four Quadro P6000 GPUs running four virtual machines on a PC server.


This kind of system would have many use cases, such as theme parks, arcades, or companies looking to train employees. Not only does the design minimise space, it also reduces power consumption and cooling requirements, NVIDIA states in a blog post.

“Initially, the reason for developing this system was to figure out a way to support multi-user VR. However, other interesting use cases began to emerge, including a mixed-reality spectator view, where some virtual machines drive head-mounted displays for participants, while others drive virtual cameras for observers,” wrote NVIDIA’s Victoria Rege.

“The possibilities are endless,” said Tom Kaye, a senior solutions architect at NVIDIA who helped develop the system, in the blog post. “With the addition of remote management and reliability features, such as multiple templates, clone on boot and remote rebuilds, we could see system builders working to create a robust, ready-to-deploy multi-user VR appliance.”

At the conference, CAVRNUS, a VR company that specialises in solutions for collaborative design, engineering, training and education, will showcase in-the-field training utilising the multi-user VR PC. “When NVIDIA shared this system with us, we knew it would be an ideal solution for our collaborative VR platform for our most demanding users,” said Anthony Duca, founder and CEO at CAVRNUS. “The feedback and reaction to the multi-user, virtualized system, particularly in the engineering and defense markets, has been tremendous.”

VRFocus will continue its coverage of GTC 2017, reporting back with the latest announcements.

NVIDIA Launches VRWorks Audio and 360 Video SDK at GTC

Today sees the start of NVIDIA’s GPU Technology Conference (GTC) at the San Jose Convention Center, California. Virtual reality (VR) will play a big part in the proceedings, with multiple sessions taking place as well as announcements. The first two are the VRWorks Audio software development kit (SDK) and VRWorks 360 Video SDK, both releasing publicly for the first time.

The VRWorks Audio SDK will feature real-time OptiX ray-tracing of audio in virtual environments and a plugin for VRWorks Audio in Unreal Engine 4. While normal VR audio provides an accurate 3D position of the audio source within a virtual environment, NVIDIA VRWorks Audio will help make that even more immersive by modeling sound propagation phenomena such as reflection, refraction and diffraction.

Its release consists of a set of C-APIs for integration into any engine or application, available now on GitHub, and GTC attendees can experience a live demonstration of VRWorks Audio technology at the VR Village.
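
To make the idea of ray-traced sound propagation more concrete, here is a toy geometric-acoustics sketch. It is emphatically not the VRWorks/NVAR API: it just uses the classic image-source method to compute a direct path and a single specular reflection off a wall, with inverse-distance attenuation and an assumed wall absorption factor.

```python
import math

# Toy geometric acoustics (NOT the VRWorks/NVAR API): a direct path plus one
# specular reflection off an infinite wall at x = 0, each attenuated by
# 1/distance spreading and a wall absorption factor (illustrative values).

def direct_and_first_reflection(source, listener, wall_absorption=0.3):
    # Direct path: simple inverse-distance amplitude falloff.
    d_direct = math.dist(source, listener)
    direct_gain = 1.0 / max(d_direct, 1e-6)

    # Mirror the source across the wall plane x = 0 to find the
    # specular reflection path (image-source method).
    mirrored = (-source[0], source[1], source[2])
    d_reflect = math.dist(mirrored, listener)
    reflect_gain = (1.0 - wall_absorption) / max(d_reflect, 1e-6)

    # Propagation delays at the speed of sound (~343 m/s).
    return {
        "direct_gain": direct_gain,
        "direct_delay_s": d_direct / 343.0,
        "reflection_gain": reflect_gain,
        "reflection_delay_s": d_reflect / 343.0,
    }

print(direct_and_first_reflection(source=(2.0, 0.0, 1.5), listener=(4.0, 3.0, 1.5)))
```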


At the conference NVIDIA will also be demoing, for the first time, the VRWorks 360 Video SDK’s real-time stereo stitching. Utilising two Quadro P6000 GPUs, the company will showcase how it’s able to stitch eight 4K cameras in stereo and in real time using Z CAM’s V1 PRO rig.

“The fact that NVIDIA manages to stitch 4K 360 stereoscopic video in real time, making livestreaming possible, changes the production pipeline and enables entirely new use cases in VR,” said Kinson Loo, CEO of Z CAM.

From today, the first public beta of the VRWorks 360 Video SDK is available to all developers from developer.nvidia.com/vr. Today’s launch is for the VRWorks 360 Video SDK in mono, while VRWorks 360 Video SDK for stereo will be released soon.

VRFocus will continue its coverage of GTC 2017, reporting back with all of the latest updates and announcements.

Here’s the 10 Companies Competing at GTC’s VR Content Showcase for $30,000 in Prizes

With over 120 applications, Nvidia has announced the 10 companies that will compete at the VR Content Showcase during GTC 2017 for $30,000 in cash and prizes.

Nvidia’s GTC 2017 conference next week has a heavy focus on broader business uses of AR and VR, including healthcare, education, architecture, and more.

The event’s VR Content Showcase is designed to highlight those non-entertainment use-cases by bringing together leading startups focusing their efforts on making AR and VR useful across industries. From 120 applications, Nvidia has announced the 10 companies that will participate in the VR Content Showcase, showing off their wares in front of a panel of judges who will decide who walks home with $30,000 in cash and prizes. Those companies are:

  • Cavrnus Inc.
  • Doghead Simulations
  • FundamentalVR
  • Funique VR
  • Kalloc Studios
  • Nurulize
  • Opaque Media Group / Opaque Space
  • Sheencity
  • Theia Interactive
  • von waldkirch

The VR Content Showcase takes place inside GTC 2017 on Tuesday, May 9, starting at 3:30 pm PT.


In addition to these companies, who will also be demoing their content at the event’s exhibit hall, a number of other Nvidia partners will share their VR work at the VR Partner Pavilion, including:

  • Artec 3D will be scanning people in 3D with their new, NVIDIA Tegra-based handheld Artec Leo scanner. And out on the concourse, they’ll be using GPUs for one-click body scanning in 12 seconds with automatic data post-processing.
  • Epic Games will showcase high-fidelity VR experiences and new photorealistic content developed with Unreal Engine.
  • ESI Group is demonstrating IC.IDO, an immersive virtual environment that allows engineers to visualize complex 3D data to validate design decisions, including assembly and service procedures, without the expense of physical prototypes.
  • NASA has conducted some of the earliest and most advanced R&D in VR. Come see its Hybrid Reality Lab, the space agency’s low-cost, scalable platform to provide an “out of this world” experience for astronauts training for difficult and dangerous missions.
  • NVIDIA Research will demonstrate perceptually based foveated VR.
  • OPTIS is showing how it applies NVIDIA VRWorks PhysX and Audio SDKs to help Bentley Motors designers perfect their vehicles and maximize safety by immersing them in accurate and realistic VR environments.

In addition to demos on the exhibit floor, the event has more than 60 talks from VR experts across industries; we outlined a few we’re looking forward to here.


Road to VR is a proud media sponsor of GTC 2017


McLaren, NASA and Ubisoft Heading to GTC 2017 to Showcase VR Development

NVIDIA’s annual GPU Technology Conference (GTC) takes place next month in California, and the event is set to be filled with the latest discussions on virtual reality (VR) and augmented reality (AR) technology. There are going to be 48 sessions focused on immersive tech, featuring the likes of McLaren, NASA and Ubisoft.

McLaren Automotive, for example, has built a mobile AR app letting fans explore every aspect of how it designs its supercars, and McLaren veteran Mark Roberts will join Epic Games’ Partner Technology Manager Doug Wolff and several other industry luminaries for a panel called “Beyond Games: How Unreal Engine is Putting the Reality into Virtual Reality.”


NASA, on the other hand, has a long history of VR and AR experimentation. The organisation will be holding a session titled ‘NASA’s Hybrid Reality Lab: One Giant Leap for Full Dive’, hosted by Matthew Noyes, Aerospace Technologist and Hybrid Reality Lab Software Lead, discussing its use of commercial game engines and consumer-grade VR technology at its Johnson Space Center to improve the engineering workflow.

Ubisoft’s vice president of Digital Publishing, Chris Early, will talk about key learnings, insights on player behavior, game-play design tips and more, gathered from Ubisoft’s VR titles Eagle Flight, Werewolves Within and the soon-to-be-released Star Trek: Bridge Crew.

Other notable sessions include:

Insights from the First Year of VR – Jason Holtman – Head of Publishing, Oculus

Vulkan VR Rendering – Ingo Esser – Senior Developer Technology Engineer, NVIDIA

Immersive VR with NVIDIA VR Funhouse – Dane Johnston – Lead Producer, VR Funhouse, NVIDIA

Passengers: Awakening VR, When Film Meets VR – Francesco Giordana – Researcher, MPC and Damien Fagnou – CTO, MPC

GTC 2017 runs from 8th – 11th May 2017 in San Jose, California. If you wish to attend, prices start from $275 USD for an exhibits-only ticket. That cost then jumps to $660 for a one-day conference pass or $1,500 for multiple days.

VRFocus will continue its coverage of GTC, reporting back with the latest updates.

NVIDIA to Present Latest Foveated Rendering Research at GTC 2017 in May

Held from May 8-11th in Silicon Valley at the San Jose Convention Center, NVIDIA’s GTC 2017 session schedule is chock full of deep tech talks that we’re looking forward to. Among them, Senior NVIDIA Research Scientist Anjul Patney will overview the company’s latest learnings from their study of the ‘perceptually-based’ approach to foveated rendering.

Simply put, foveated rendering in VR aims to render the highest quality imagery only at the center of your vision where your eye can detect sharp detail, while rendering low quality imagery in the periphery of your vision where your eye is not tuned to pick up high resolution details. Combined with eye-tracking, it’s widely believed that foveated rendering is an important pathway to unlocking retinal-resolution VR rendering in the near future (imagery which is so sharp that any additional detail would be indiscernible).
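
A minimal sketch of the idea follows: shading effort is assigned per pixel based on its angular distance, or eccentricity, from the tracked gaze point. The thresholds and rates used here are illustrative placeholders, not NVIDIA's published parameters.

```python
import numpy as np

# Minimal sketch of foveated shading-rate selection (illustrative thresholds,
# not NVIDIA's published parameters): pixels near the gaze point get full
# shading rate, the periphery gets progressively coarser shading.

def shading_rate_map(width, height, gaze_px, fov_deg=110.0):
    ys, xs = np.mgrid[0:height, 0:width]
    # Approximate angular eccentricity by scaling pixel distance into degrees.
    deg_per_px = fov_deg / width
    ecc_deg = np.hypot(xs - gaze_px[0], ys - gaze_px[1]) * deg_per_px

    rate = np.ones((height, width))   # 1.0 = full shading rate
    rate[ecc_deg > 10] = 0.5          # half rate in the near periphery
    rate[ecc_deg > 30] = 0.25         # quarter rate in the far periphery
    return rate

rates = shading_rate_map(2160, 1200, gaze_px=(1080, 600))
print({r: int((rates == r).sum()) for r in (1.0, 0.5, 0.25)})
```

The perceptual research described below is essentially about replacing the crude "lower the resolution past a threshold" step with filtering that preserves contrast, so the coarsened periphery is far harder to notice.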


But foveated rendering is still in its infancy, and early attempts at using simple blur masks over the peripheral view have been shown to be too visible and distracting; a poor approximation of the limits of our peripheral vision.

Last year, NVIDIA researchers demonstrated a compelling new approach to foveated rendering (they call it ‘perceptually based’) which aims to let the end experience of human perception drive the outcome of foveated rendering techniques, rather than the other way around. The new work, which involved a ‘contrast-preserving’ rendering approach, showed a major improvement in making foveated rendering difficult to notice, and was faster than other common techniques to boot.

At GTC 2017, one of the researchers leading NVIDIA’s investigations into perceptually based foveated rendering, Anjul Patney, will take to the stage to outline the latest developments. The session description reads:

Foveated rendering is a class of algorithms which increase the performance of virtual reality applications by reducing image quality in the periphery of a user’s vision. In my talk, I will present results from our recent and ongoing work in understanding the perceptual nature of human peripheral vision, and its uses in improving the quality and performance of foveated rendering for virtual reality applications. I will also talk about open challenges in this area.

Patney’s talk is just one of several deep technical talks that we’re looking forward to at GTC 2017.


Here are a number of others that have caught our eye so far from NASA, Oculus, Pixvana, OTOY, NVIDIA and Stanford’s Computational Imaging Lab.

NASA’s Hybrid Reality Lab: One Giant Leap for Full Dive – Matthew Noyes, NASA

This session demonstrates how NASA is using consumer VR headsets, game engine technology and NVIDIA’s GPUs to create highly immersive future astronaut training systems augmented with extremely realistic haptic feedback, sound, and additional sensory information, and how these can be used to improve the engineering workflow. Examples explored include a simulation of the ISS, where users can interact with virtual objects, handrails, and tracked physical objects while inside VR, integration of consumer VR headsets with the Active Response Gravity Offload System, and a space habitat architectural evaluation tool. Attendees will learn about how the best elements of real and virtual worlds can be combined into a hybrid reality environment with tangible engineering and scientific applications.

Light Field Rendering and Streaming for VR and AR – Jules Urbach, OTOY

Jules Urbach, Founder & CEO of OTOY will discuss OTOY’s cutting edge light field rendering toolset and platform. OTOY’s light field rendering technology allows for immersive experiences on mobile HMDs and next gen displays, ideal for VR and AR. OTOY is actively developing a groundbreaking light field rendering pipeline, including the world’s first portable 360 LightStage capture system and a cloud-based graphics platform for creating and streaming light field media for virtual reality and emerging holographic displays.

The Virtual Frontier: Computer Graphics Challenges in Virtual Reality – Morgan McGuire, NVIDIA

Video game 3D graphics are approaching cinema quality thanks to the mature platforms of massively parallel GPUs and the APIs that drive them. Consumer head-mounted virtual reality is a new domain that poses exciting new opportunities and challenges in a wide-open research area. We’ll present the leading edge of computer graphics research for VR across the field, highlighting emerging methods for reducing latency, increasing frame rate and field of view, and matching rendering to both display optics and the human visual system while maximizing image quality.

Insights From the First Year of VR – Jason Holtman, Oculus

There are a myriad of choices to make when jumping into VR development. We’ll explore how to navigate those decisions, and what the lessons from this first generation of VR content means for future titles.

Streaming 10K Video Using GPUs and the Open Projection Format – Sean Safreed, Pixvana

Pixvana has developed a cloud-based system for processing VR video that can stream up to 12K video at HD bit rates. The process is called field-of-view adaptive streaming (FOVAS). FOVAS converts equirectangular spherical format VR video into tiles on AWS in a scalable GPU cluster. Pixvana’s scalable cluster in the cloud delivers over an 80x improvement in tiling and encoding times. The output is compatible with standard streaming architectures and the projection is documented in the Open Projection Format. We’ll cover the cloud-architecture, GPU processing, Open Projection Format, and current customers using the system at scale.
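
A rough sketch of the core idea, not Pixvana's actual pipeline: the equirectangular frame is divided into a grid of tiles, and only the tiles overlapping the viewer's current field of view need to be delivered at full quality while the rest can stay low resolution. The grid size, field-of-view values and function name below are assumptions for illustration.

```python
# Sketch of field-of-view adaptive streaming tile selection (illustrative,
# not Pixvana's actual implementation): split an equirectangular frame into
# a grid of tiles and pick the ones overlapping the viewer's field of view.

def visible_tiles(yaw_deg, pitch_deg, h_fov=90.0, v_fov=90.0, cols=8, rows=4):
    tile_w = 360.0 / cols   # tile width in degrees of longitude
    tile_h = 180.0 / rows   # tile height in degrees of latitude
    selected = []
    for row in range(rows):
        for col in range(cols):
            # Tile centre in spherical coordinates.
            lon = -180.0 + (col + 0.5) * tile_w
            lat = 90.0 - (row + 0.5) * tile_h
            # Shortest angular difference in longitude, plain difference in latitude.
            d_lon = (lon - yaw_deg + 180.0) % 360.0 - 180.0
            d_lat = lat - pitch_deg
            if abs(d_lon) <= (h_fov + tile_w) / 2 and abs(d_lat) <= (v_fov + tile_h) / 2:
                selected.append((row, col))
    return selected

# Viewer looking slightly left of centre and a bit upward.
print(visible_tiles(yaw_deg=-30.0, pitch_deg=10.0))
```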

Computational Focus-Tunable Near-Eye Displays – Nitish Padmanaban, Stanford Computational Imaging Lab

We’ll explore unprecedented display modes afforded by computational focus-tunable near-eye displays with the goal of increasing visual comfort and providing more realistic and effective visual experiences in virtual and augmented reality. Applications of VR/AR systems range from communication, entertainment, education, collaborative work, simulation, and training to telesurgery, phobia treatment, and basic vision research. In every immersive experience, the primary interface between the user and the digital world is the near-eye display. Many characteristics of near-eye displays that define the quality of an experience, such as resolution, refresh rate, contrast, and field of view, have been significantly improved over the last years. However, a pervasive source of visual discomfort prevails: the vergence-accommodation conflict (VAC). Further, natural focus cues are not supported by any existing near-eye display.
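
The vergence-accommodation conflict mentioned above can be quantified as a dioptric mismatch between the distance the eyes converge to and the fixed focal distance of the display optics; the numbers in the worked example are illustrative, not taken from the talk.

```latex
% Vergence-accommodation conflict expressed as a dioptric mismatch
% (illustrative numbers, not from the talk).
\[
  \Delta D \;=\; \left|\frac{1}{d_{\text{vergence}}} - \frac{1}{d_{\text{focal}}}\right|
\]
% Example: an object rendered 0.5 m away in a headset whose optics focus at 2 m:
\[
  \Delta D \;=\; \left|\frac{1}{0.5\,\text{m}} - \frac{1}{2\,\text{m}}\right| \;=\; 1.5\ \text{D}
\]
```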


Road to VR is a proud media sponsor of GTC 2017


GTC 2017: NVIDIA Is Looking For VR/AR Apps, Offering $30,000 And Other Prizes


Graphics powerhouse NVIDIA is a crucial part of the VR industry. Higher-end virtual experiences are pretty demanding and need powerful graphics cards to push them. As VR continues to grow, so does the technology feeding it, with NVIDIA’s new GeForce GTX 1080 Ti as the latest example. NVIDIA is also a part of OpenXR, a working group creating an API standard for the VR industry. At this year’s GPU Technology Conference (GTC 2017), the company will also open the floor to up to 10 companies or teams to present their non-gaming ideas and potentially win a cash prize.

GTC is a conference that provides a window into the vital advancements of the computing industry, from self-driving cars to artificial intelligence. VR and AR are a bit newer to the scene, but having your ideas showcased here would be a big opportunity for exposure and partnerships. On top of that, those showcasing could win $30,000 and additional prizes.

NVIDIA is welcoming teams that have accrued no more than $5 million in total capital and that currently incorporate, or plan to incorporate, GPU tech like GameWorks, DesignWorks, or VRWorks. They’ll have a 12×12 booth space on the exhibition floor, which is where the judging panel will come to try out whatever demos the teams have on hand. Bring your AR/VR applications related to tech, education, medicine, art and other markets, but note that they’re not looking for gaming content.

All accepted companies will be offered:

  • Free demonstration space at the 2017 GPU Technology Conference (May 8-11, 2017)
  • Opportunity to speak at the VR Showcase
  • Marketing and PR exposure
  • Opportunity to participate in “Share Your Science” video series

The application for the showcase can be found here and the deadline for submissions is March 15. The showcase will be taking place at GTC from May 8th – 11th. NVIDIA asks that teams have at least two staff members on hand so that there’s always at least one person manning the exhibition booth.

This is sponsored content which has been produced by UploadVR and brought to you by NVIDIA. NVIDIA did not have any input into the creation of this content.


Participants Can Win $30,000 USD in a VR Competition at NVIDIA GTC 2017

GTC 2017 (GPU Technology Conference), sponsored by NVIDIA, takes place from May 8 to 11. At the event in San Jose, California, the sponsor is holding a competition in which the winners will receive prizes worth $30,000 USD.

A Showcase of VR Products

The competition takes the form of a showcase of VR-related content. Companies can submit their products and present them later at the event. The focus, however, is on everything except gaming. The VR products must also use Nvidia GPU technology such as GameWorks, DesignWorks or VRWorks and work together with a VR headset connected to a computer or workstation. In addition to VR products, AR products can also be submitted. The application deadline is March 15.

From the submissions, the organiser will select ten companies or teams to take part in the competition. They will have the opportunity to present their products at a booth provided for them during GTC. They are also required to give a five-minute presentation before a jury, which will determine the winners; afterwards the audience may ask questions for another three minutes. The jury can also try out the participants' demos in a pavilion in the NVIDIA VR startup village, where all participants will be based during the competition.

In addition to cash and non-cash prizes worth $30,000 USD, the winners will receive PR and marketing support, and the companies are also to be given backing to help them continue development. With this process, Nvidia is specifically looking for start-ups that have raised no more than $5 million USD to date. The search also targets developers who specialise in VR content without a gaming focus, including areas such as science, engineering, education, art and medicine.

The event is also intended to help smaller companies establish their products on the market and develop them further. We are curious to see which VR and AR highlights await us.

(Source: Nvidia)
