Researchers Induce Artificial Movement Sensation in VR Using Four-Pole Galvanic Vestibular Stimulation

Shown as part of the Emerging Technologies installations at SIGGRAPH 2017 this week, the GVS RIDE experience demonstrates the effects of four-pole galvanic vestibular stimulation combined with a VR headset. The technology is pitched as a potential alternative to a motion platform, with its ability to “induce tri-directional acceleration” and “enhance virtual acceleration”.

GVS RIDE is the result of years of studies into galvanic vestibular stimulation (GVS) by researchers at Osaka University in Japan. The demonstration, as described on the Emerging Technologies page of the SIGGRAPH 2017 website, is presented in two parts: first the user watches a conventional VR video without GVS, then watches it again with the GVS circuit enabled, which is said to induce a “higher sensation of presence”.

GVS technology has cropped up regularly in VR discussions over the years, and is generally approached with a healthy dose of trepidation and skepticism. Passing electric current through the head is, in itself, a rather alarming concept, which is then compounded by its ability to manipulate our precious bodily sensors. When controlled precisely however, it has the potential to enhance motion sensations, and assist in resolving certain nausea-inducing VR effects. The basic concept is surprisingly simple – electrodes placed behind the ear (on the ‘mastoids’) pass current through the vestibular system (parts of the inner ear), affecting balance. By controlling the current paths, it is possible to induce different balance and acceleration sensations.

Image courtesy Dr. Aoyama et al, Nature

A two-pole GVS setup, with an electrode behind each ear, is able to induce lateral movement or ‘roll’, while a three-pole GVS, which adds an electrode to the forehead, can induce anteroposterior movement or ‘pitch’. The four-pole GVS system developed by the Osaka University team led by Dr. Kazuma Aoyama places two electrodes on the mastoids and another two on the temples. This is able to induce directional virtual head motion around three perpendicular axes. In other words, they’ve managed to evoke roll, pitch, and yaw sensations.

Dr. Aoyama’s four-pole GVS work was detailed in a report published in the peer-reviewed journal Nature in 2014. I asked him what had changed since that initial publication, and he explained that they now have six-pole GVS, which can induce sensations in four directions: “lateral, front-back, yaw-rotation, and up-down”. This more advanced system adds two electrodes on the neck (“5 or 6cm below the mastoids”) to enhance the vertical acceleration sensation, but for GVS RIDE as shown at SIGGRAPH 2017 the team is using just the four-pole system to manipulate three directions.

Dr. Aoyama avoids describing the ‘lateral’ and ‘front-back’ directions as ‘roll’ and ‘pitch’, as it is difficult for a human to differentiate between an actual rolling head rotation and a linear lateral movement through vestibular stimulation alone. However, this is apparently advantageous, as the interpretation of both movements can be “easily controlled by visual flow”. As such, Dr. Aoyama believes that GVS can suitably align with both rotational and positional tracking in VR.
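
To make the four-pole concept more concrete, here is a minimal, purely illustrative Python sketch of how a desired sensation vector might be translated into per-electrode currents. The mixing matrix, the current limit, and every name in it are hypothetical placeholders for illustration only; they are not the current patterns or amplitudes used by Dr. Aoyama's team.

```python
# Illustrative sketch only: map a desired vestibular sensation vector to
# currents on a hypothetical four-pole electrode set (left/right mastoid,
# left/right temple). The mixing weights and safety limit are placeholder
# assumptions, not values from the Osaka University research.
import numpy as np

MAX_CURRENT_MA = 1.5  # assumed per-electrode limit (placeholder)

# Rows: electrodes [L mastoid, R mastoid, L temple, R temple]
# Columns: sensation axes [lateral, front-back, yaw]
MIX = np.array([
    [+1.0, +0.5, +1.0],
    [-1.0, +0.5, -1.0],
    [+1.0, -0.5, -1.0],
    [-1.0, -0.5, +1.0],
])  # placeholder weights

def gvs_currents(lateral: float, front_back: float, yaw: float) -> np.ndarray:
    """Return per-electrode currents (mA) for a desired sensation vector.

    Inputs are normalised to [-1, 1]; outputs are clamped to the assumed limit.
    """
    sensation = np.array([lateral, front_back, yaw])
    currents = MIX @ sensation * (MAX_CURRENT_MA / MIX.shape[1])
    return np.clip(currents, -MAX_CURRENT_MA, MAX_CURRENT_MA)

# Example: a mild lateral push combined with a yaw sensation.
print(gvs_currents(lateral=0.3, front_back=0.0, yaw=-0.5))
```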

It’s unclear when or how GVS might be incorporated into a consumer device (although there have been promising GVS experiments with headphones), but the Osaka team believes their lightweight solution can be “easily adapted to conventional VR systems.” The biggest concern is surely the consumer acceptance of such ‘intrusive’ technology and the variability in its effectiveness across a wide range of people. (For example, there are many reports of GVS tests causing discomfort as a result of variable skin sensitivity.)


Nvidia Debuts Isaac-trained Robot in VR at SIGGRAPH 2017

SIGGRAPH 2017 is well underway, running for the entirety of this week. The first two days have already seen a few virtual reality (VR) related announcements, and now VRFocus has another. While NVIDIA is mainly known for its graphics cards, the tech company has its fingers in a lot of different pies, especially when it comes to immersive technology, and for SIGGRAPH the company is demonstrating an Isaac-trained robot in VR and in real life.

Attendees at the event will be able to get the first hands-on public demo of Isaac, a robot trained with NVIDIA’s Isaac Lab robot simulator, to show how VR can help teach robots the nuanced task of interacting with people. Robots may be used for all manner of everyday tasks in the future, so NVIDIA is teaching the AI-enabled robot things like pouring a cup of coffee, playing dominoes, and caring for the elderly.


As Zvi Greenstein, General Manager at NVIDIA’s GeForce team, explains in a blog post: “You’ll be able to go head-to-head with Isaac in the physical world on the show floor. And you’ll be able to strap on a VR headset, and enter a simulation via Project Holodeck.

“Deep learning and computer vision have been combined to teach a robot to sense and respond to human presence, to identify the state of the play of the game, to understand the legal moves of the game, and to determine which tile to select and how to place it.

“By developing and training robots in a simulated world and then working with those robots in a virtual reality environment like Project Holodeck, researchers can deploy them to the real world in a way that is safer, faster, and more cost-effective.”

NVIDIA Isaac is just one of the ways the company is utilising VR as a means for enterprise solutions. Project Holodeck, unveiled during NVIDIA’s GTC conference in May, is a physically accurate VR environment, showcased in demonstrations such as one featuring Swedish supercar maker Koenigsegg.

VRFocus will continue its coverage of NVIDIA, reporting back with its latest advancements in VR.

Researchers Showcase Impressive New Bar for Real-time Digital Human Rendering in VR

A broad team of graphics researchers, universities, and technology companies are showcasing the latest research into digital human representation in VR at SIGGRAPH 2017. Advanced capture, rigging, and rendering techniques have resulted in an impressive new bar for the art of recreating the human likeness inside of a computer in real-time.

MEETMIKE is the name of the VR experience being shown this week at the SIGGRAPH 2017 conference, which features a wholly digital version of VFX reporter Mike Seymour being ‘driven’ and rendered in real-time by the real-life Seymour. Inside the experience, Seymour plays host, interviewing industry veterans and researchers inside of VR during the conference. Several additional participants wearing VR headsets can watch the interview from inside the virtual studio.

The result is a rather stunning representation of Seymour—rendered at 90 FPS in VR using Epic’s Unreal Engine—standing up to extreme scrutiny, with shots showing detailed eyebrows and eyelashes, intricate specular highlights on the pores of the skin, and a detailed facial model.

To achieve this, Seymour wears a Technoprops stereo camera rig which watches his face as it moves. The images of the face are tracked and solved with technology from Cubic Motion, and that data is relayed to a facial rig created by 3Lateral and based on a scan of Seymour created as part of the Wikihuman project at USC-ICT. Seymour’s fxguide further details the project:

  • MEETMIKE renders about 440,000 triangles in real time, in VR stereo, roughly every 9 milliseconds; around 75% of those triangles are used for the hair (see the quick arithmetic check after this list).
  • Mike’s face rig uses about 80 joints, mostly for the movement of the hair and facial hair.
  • The face mesh itself uses only about 10 joints, for the jaw, eyes, and tongue, to give their movement more of an arc.
  • These are in combination with around 750 blendshapes in the final version of the head mesh.
  • The system uses complex traditional software design and three deep learning AI engines.
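
For a rough sense of what those figures imply, here is a quick back-of-the-envelope check in Python. It only restates the numbers quoted above and assumes the full 440,000-triangle scene is drawn once per eye, every frame:

```python
# Illustrative arithmetic only, based on the figures quoted by fxguide.
TARGET_FPS = 90              # VR refresh rate mentioned in the article
TRIANGLES_PER_EYE = 440_000  # assumed to be drawn once per eye, per frame
RENDER_TIME_MS = 9           # quoted stereo render time

frame_budget_ms = 1000 / TARGET_FPS                         # ~11.1 ms at 90 Hz
triangles_per_second = TRIANGLES_PER_EYE * 2 * TARGET_FPS   # both eyes, every frame

print(f"Frame budget at {TARGET_FPS} FPS: {frame_budget_ms:.1f} ms")
print(f"Quoted render time: {RENDER_TIME_MS} ms (inside the budget)")
print(f"Approximate throughput: {triangles_per_second / 1e6:.0f} M triangles/s")
```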

A paper published by Seymour and Epic Games researchers Chris Evans and Kim Libreri titled Meet Mike: Epic Avatars offers more background on the project.

From our reading of the project, it sounds like the rendering of the digital Seymour is handled by a single PC with a GTX 1080 Ti GPU and 32GB of RAM, while additional computers accompany the setup so that the host’s guest and several audience members can view the scene in VR. We’ve reached out to confirm the exact hardware and rendering setup.


HP’s New Commercial VR Backpack PC Smartly Docks to Double as VR Desktop

Scheduled for release in September starting at $3,300, the HP Z VR Backpack G1 Workstation is aimed at the commercial market, with high-performance components including an Intel Core i7 CPU and Nvidia Quadro GPU. It features a smart docking design for use as a VR backpack computer as well as a VR desktop, and includes two hot-swappable batteries for continuous tether-free VR use.

Unlike their gamer-centric Omen X VR Backpack (which was introduced last year as a concept and is now about to launch as the ‘Omen X Compact Desktop‘), the new Z VR Backpack adds to HP’s family of VR-Ready workstation-class PCs, making it suitable for business applications such as product development and employee training. Announced at SIGGRAPH 2017 this week, the system is described as “the most powerful wearable VR PC ever created”, and “the world’s first professional wearable VR PC”.

The professional-grade specifications centre around the vPro-enabled Intel Core i7-7820HQ processor, and Nvidia Quadro P5200 GPU with 16GB VRAM. The system can be configured with up to 32GB RAM and a 1TB M.2 SSD.

Despite the seemingly imminent wireless VR revolution, HP believes there is still a market for this form of wearable solution, perhaps for applications where it is unsuitable or difficult to mount a wireless transmitter, or where many VR users are operating in close proximity.

The aggressive cut between dock and backpack in the footage perhaps highlights the rugged design of the system, which has been built for “extreme durability with 120,000 hours of HP testing” and is designed to pass MIL-STD 810G tests (a US Military Standard).

Image courtesy HP

Much like the recently-announced Omen X Compact Desktop, part of the Z VR Backpack’s appeal is the dual-purpose design, meaning that it can quickly transition between desktop dock—which is capable of dual 4K monitor output—and backpack, for untethered, high-end VR. That makes it a much more practical purchase than an expensive VR backpack PC which would otherwise require lots of plugging and unplugging each time you want to switch between backpack mode and use as a regular computer—not to mention that mounting it upright saves plenty of desk space.

A full overview of the system and its specifications is available on HP’s website.

“Virtual reality is changing the way people learn, communicate and create. Making the most of this technology requires a collaborative relationship between customers and partners”, says Xavier Garcia, vice president and general manager, Z Workstations, HP Inc. “As a leader in technology, HP is uniting powerful commercial VR solutions, including new products like the HP Z VR Backpack, with customer needs to empower VR experiences our customers can use today to reinvent the future.”


New York Times Sponsors VR Film Screening at SIGGRAPH

Well-known newspaper the New York Times will be sponsoring a special screening of the virtual reality (VR) short film Under A Cracked Sky, along with a presentation on the newspaper’s integration of VR, as part of SIGGRAPH 2017, a conference and exhibition on Computer Graphics and Interactive Techniques.

Graham Roberts, Director of Immersive Platforms and Storytelling for the New York Times, will present a 40-minute talk on the new medium of VR and how the New York Times is integrating the technology into its news reporting. The presentation will also include a showing of the VR short film The Antarctica Series: Under A Cracked Sky.

“We are beyond thrilled to be welcoming Graham Roberts, the man who helped spearhead and establish the Times’ NYT VR program, to our conference this year,” said Jerome Solomon, conference chair for SIGGRAPH 2017. “Having a project with the pedigree and prestige of ‘Under A Cracked Sky’ truly establishes the fact that we continue to present our attendees with world-class material and technologies. I am certain people will enjoy Graham’s presentation and short film — after all, it doesn’t get more impressive than The New York Times!”

Graham Roberts notes, “VR is an incredible new tool that gives people a different perspective — it offers the feeling of full immersion within our news stories and reports. I see VR as an important change in the way in which people digest news and interact with the media: It’s a new paradigm through which people can experience brand-new perceptions of the world … it transports the viewer somewhere he or she has never been before.”

SIGGRAPH 2017 will be held from 30th July to 3rd August in Los Angeles. Further information and tickets can be found at the official SIGGRAPH website.

VRFocus will bring you further information on VR-related events as it becomes available.

SIGGRAPH 2017’s VR Village Hosts Diverse Range of VR and AR Tech

SIGGRAPH 2017 is due to be held in Los Angeles this summer, and its VR Village is planning a great range of experiences for attendees. VR Village is one of the newer programs within the SIGGRAPH conference, and is hoping to wow audiences with unique applications of both virtual reality (VR) and augmented reality (AR).

Promising to showcase unique applications for both VR and AR in fields such as health, education, entertainment, design and gaming, VR Village exhibitors are hoping to impress new business partners as well as the public.

Diversity is the focus of 2017’s VR Village. Previous events looked at art and simulations, while 2017 looks towards the diversity of both creators from around the world and their projects, as Denise Quesnel, 2017 VR Village Chair, explains: “We made a conscious effort for diversity — we tried to normalize our content to be as diverse as possible. We believe that diversity in content, and diversity of contributors, helps facilitate perspectives and opportunities that are of great benefit to attendees.

“The experiences that will be seen this summer are not only outstanding examples of VR and AR, but can only be experienced in SIGGRAPH’s unique VR Village space.”

Interesting exhibits include Neurable: Brain-Computer Interfaces for Virtual and Augmented Reality, a promising leap into mind-controlled virtual reality; Out of Exile, a room-scale VR experience telling a story of LGBTQ discrimination; and HOLO-DOODLE, a “VR hangout” experience featuring naughty robots, making its world premiere at SIGGRAPH 2017.

SIGGRAPH 2017 is taking place from July 30th to August 3rd 2017 in Los Angeles, where the event will showcase the latest in computer graphics technology and interactive experiences.

For more on the future of VR and SIGGRAPH 2017, keep up to date with VRFocus.

Oculus Working On Improved VR Image Display

As impressive as current virtual reality (VR) technology is, it is wise to remember that it is still in relatively early days of development. There are still considerable improvements that can be made and hurdles to overcome. On that front, Oculus are working on a new method of display to improve the viewing experience.

Currently, VR has difficulty mimicking how human eyes focus. When we look at an object, we can focus on it no matter how close or far away it is, but everything else then becomes unfocused, an effect known in photography as ‘bokeh’. Current VR displays are not able to reproduce this effect correctly, resulting in the wrong areas appearing blurry.
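
For a sense of scale, the short Python sketch below uses the standard thin-lens approximation (angular blur is roughly pupil diameter multiplied by defocus in dioptres). It is not Oculus code, just an illustration of why a headset whose optics sit at a single fixed focal distance (assumed here to be about 2 metres) ends up putting blur in the wrong places:

```python
# Illustrative optics only: approximate angular size of the retinal blur for
# an object at one distance while the eye is focused at another.
import math

def blur_angle_arcmin(focus_m: float, object_m: float, pupil_mm: float = 4.0) -> float:
    """Approximate angular blur diameter, in arcminutes, for a defocused point."""
    defocus_dioptres = abs(1.0 / focus_m - 1.0 / object_m)
    blur_rad = (pupil_mm / 1000.0) * defocus_dioptres  # small-angle approximation
    return math.degrees(blur_rad) * 60.0

# Eye focused on a virtual object 0.5 m away; background 5 m away should blur.
print(blur_angle_arcmin(focus_m=0.5, object_m=5.0))
# A fixed-focus headset (optics at an assumed ~2 m) forces the eye to focus
# there, so a nearby virtual object ends up blurred instead.
print(blur_angle_arcmin(focus_m=2.0, object_m=0.5))
```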

In order to improve the ‘depth of field’ experience, Oculus are working on a system referred to as a ‘focal surface display’, which is intended to more accurately model how human eyes work. Oculus describe the system thusly: “Focal surface displays mimic the way our eyes naturally focus on objects of varying depths. Rather than trying to add more and more focus areas to get the same degree of depth, this new approach changes the way light enters the display using spatial light modulators (SLMs) to bend the headset’s focus around 3D objects — increasing depth and maximizing the amount of space represented simultaneously.”
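
As a toy illustration of that idea, and explicitly not Oculus’s algorithm, the sketch below maps a per-pixel depth buffer to a small number of focal distances, so that near and far parts of the image would sit at different optical depths. The function, the clamping range, and the quantisation into discrete levels are all assumptions made for illustration; a real focal surface display drives an SLM with a smooth phase surface rather than discrete zones.

```python
# Toy illustration only: derive a crude "focal surface" from a depth buffer by
# quantising per-pixel depth (in dioptres) into a handful of focal levels.
import numpy as np

def focal_surface(depth_m: np.ndarray, levels: int = 4) -> np.ndarray:
    """Map per-pixel scene depth (metres) to one of `levels` focal distances (dioptres)."""
    dioptres = 1.0 / np.clip(depth_m, 0.1, 100.0)  # convert depth to dioptres
    lo, hi = dioptres.min(), dioptres.max()
    step = (hi - lo) / max(levels - 1, 1)
    if step == 0:
        return dioptres  # flat scene: a single focal distance suffices
    return np.round((dioptres - lo) / step) * step + lo

# A tiny 2x3 "depth buffer": a close object on the left, distant scenery on the right.
depth = np.array([[0.4, 2.0, 10.0],
                  [0.5, 2.5, 12.0]])
print(focal_surface(depth))
```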

Oculus are planning to demonstrate the technology at the upcoming SIGGRAPH conference, which is dedicated to computer graphics and interactive media. The technology is not expected to appear in any Oculus Rift systems any time soon, but is instead being used to research improved visual systems for VR users, especially those that will work better for people who wear glasses or have other visual problems.

VRFocus will bring you further news on developments in VR technology as it becomes available.

Registrations for SIGGRAPH 2017 now open

January saw the opening of virtual reality (VR) and augmented reality (AR) submissions for the 44th annual SIGGRAPH conference, being held later this year. While submissions remain open until 21st March, registrations are now available for those interested in attending the event.

This year’s conference chair is Jerome Solomon, Dean of the College at Cogswell College, who was appointed back in 2015. He plans on making SIGGRAPH 2017 “bigger and better than ever.”

“We are very excited about returning to Los Angeles for this year’s conference,” said Solomon in a statement. “We have plans for over a dozen awesome experiences that we’ve never done before. For example, we will be holding our grand opening day reception at the California Science Center, which will offer attendees the chance to get up close to the actual Space Shuttle Endeavour. We will also be debuting a brand-new VR Theater—a venue in which attendees can experience VR storytelling from different filmmakers using state-of-the-art technology and VR headsets.”


On this year’s conference, he continued: “Not only is Los Angeles one of the world’s major hubs for the entertainment industry and all the major movie and TV studios, but it’s also dramatically evolved in recent years and has finally ‘arrived’ as a player within the digital space. L.A. has numerous top digital companies located in Silicon Beach, and the city has also become a key player and the home of countless new VR content development and production companies.”

“Interactive technologies, real-time graphics, virtual reality, and augmented reality are all key growth areas today,” adds Solomon. “In fact, the computer graphics industry is going through a bit of a renaissance right now—CG plays a critical role in more everyday digital tools and technologies than ever before. It’s bleeding over everywhere, and into so many industries.”

There are a number of ways visitors can register for SIGGRAPH 2017, depending on how much they want to see and do. Prices start from $50 USD for an Exhibits Only pass, rising all the way up to $1,445 for a Full Conference Platinum pass for non-members, or $1,245 for ACM or ACM SIGGRAPH members. These are early bird prices, valid until 9th June, after which the cost goes up.

For the latest news from SIGGRAPH 2017, keep reading VRFocus.

VR and AR Submissions Now Open for SIGGRAPH 2017

SIGGRAPH 2017 will mark the 44th International Conference and Exhibition on Computer Graphics and Interactive Techniques. Today, organiser ACM (Association for Computing Machinery) has opened virtual reality (VR) and augmented reality (AR) submissions, seeking a diverse range of projects for the event.

ACM wants VR and AR submissions for possible inclusion within the following programs: VR Village, The Computer Animation Festival’s VR Theater, Emerging Technologies, Games Focus Area and Real-Time Live!

“This year, we are seeking VR and AR submissions across programs that are both content-driven and highly interactive. Previous years’ contributions included art installations, real-world applications and simulations. Projects this year that focus on technology are best suited for Emerging Technology, while projects that are content centered around storytelling and film are encouraged for submission to our VR Theater,” said Denise Quesnel, 2017 VR Village Chair, and a Research Assistant at Simon Fraser University.


The VR Village offers visitors the ability to explore the fascinating potential of brand-new VR and AR formats for shared experiences, engaging audiences, and powering real-world applications in health, education, entertainment, design, and gaming. The deadline to submit a project for VR Village consideration is 21st March 2017.

“Submissions for 2017 VR Village content should be hands-on and focused on the experience rather than the technology, allowing attendees to explore the capabilities and functionalities of each project in context. Projects that include performative elements and social experiences are encouraged, as are multi-user experiences that are collaborative,” notes Quesnel.

The Computer Animation Festival’s VR Theater gives attendees the ability to experience the rapidly evolving VR medium. “We are looking for submissions for our VR Theater that will push the boundaries of storytelling. During the past two years, VR has reemerged as an important medium, and the projects we hope to show will present the next era of storytelling — presentations that depict powerful new stories to see, hear, and experience,” commented Pol Jeremias, Computer Animation Festival Director for 2017 and a Graphics Engineer at Pixar Animation Studios. The deadline to submit a project is 21st March 2017.

The Emerging Technologies program will present novel, innovative, and surprising systems that advance human-to-human, human-to-computer, and computer-to-computer interaction. The deadline to submit a project is 14th February 2017.

Chris Evans, Games Focus Area Chair for SIGGRAPH 2017 and a Senior Character Technical Director with Epic Games, said, “This year, we are trying to do something quite different by seeking submissions of content centered around animated short films with beautiful assets and interesting characters for the new VR Jam competition. Those accepted will have the opportunity to work on these animated short films and transform them into virtual reality experiences at the conference itself.” Submissions for the Games VR Jam will open in late February.

Finally, Real-Time Live! will showcase the latest trends and techniques for pushing the boundaries of interactive visuals. The deadline to submit a project to Real-Time Live! is 4th April 2017.

For any further updates on SIGGRAPH 2017, keep reading VRFocus.