Oculus Plans to Let You Bring Your ‘Medium’ Sculptures Into Home

According to a recent entry on the official Oculus blog, the team has plans to allow sculptures from Oculus Medium (2016) to be placed into Home spaces. Customisation is already the central feature of the new Home, which makes up a key component of Rift Core 2.0, the underlying platform for Rift and Touch currently in beta.

“We’ll be adding tons of new content throughout the year, including new items and decorations built by the community,” writes Nate Mitchell, Head of Rift at Oculus. “We’ll also make it easy to bring your own content, like Medium sculptures, into Home in 2018.”

This brief mention of Medium in the first ‘Dev Diary’ for Rift Core 2.0 is about all we have to go on; it’s a logical step to allow user-created content in personal Home spaces. The team will continue to add decorative and interactive objects into the beta, and some games already provide physical trophies for displaying your achievements, in much the same way as SteamVR Home. Bringing Medium support could open the floodgates to much deeper personalisation, particularly if the sculpts can be scaled to any size, but that remains to be seen.

The post Oculus Plans to Let You Bring Your ‘Medium’ Sculptures Into Home appeared first on Road to VR.

Google is Developing a VR Display With 10x More Pixels Than Today’s Headsets

Earlier this year, Clay Bavor, VP of VR/AR at Google, revealed a “secret project” to develop a VR-optimised OLED panel capable of 20 megapixels per eye. The project was mentioned during SID Display Week 2017 but has gone largely under the radar as little information has surfaced since.

Following a general overview of the limits of current VR technology, and an announcement that Google is working with Sharp on developing LCDs capable of VR performance normally associated with OLED, Bavor revealed an R&D project that hopes to take VR displays to the next level. A video of the session comes from ARMdevices.net’s Nicolas “Charbax” Charbonnier.

“We’ve partnered deeply with one of the leading OLED manufacturers in the world to create a VR-capable OLED display with 10x more pixels than any commercially available VR display today,” Bavor said. At 20 megapixels per eye, this is beyond Michael Abrash’s prediction of 4Kx4K per eye displays by the year 2021.

“I’ve seen these in the lab, and it’s spectacular. It’s not even what we’re going to need in the ‘final display’,” he said, referring to the sort of pixel density needed to match the limits of human vision, “but it’s a very large step in the right direction.”

Bavor went on to explain the performance challenges of 20 MP per eye at 90-120 fps, which works out at unreasonably high data rates of 50-100 Gb/sec. He briefly described how foveated rendering combined with eye tracking and other optical advancements will allow for more efficient use of such super high resolution VR displays.
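To get a rough sense of where figures like these come from, here is a back-of-envelope sketch of raw (uncompressed) video bandwidth, assuming 24-bit RGB and counting both eyes. The exact numbers depend on bit depth, subpixel layout, and any compression in the link, so treat this as an illustration rather than Google's own arithmetic:

```python
def raw_video_rate_gbps(megapixels_per_eye, fps, bits_per_pixel=24, eyes=2):
    """Uncompressed video bandwidth in gigabits per second."""
    pixels_per_second = megapixels_per_eye * 1e6 * eyes * fps
    return pixels_per_second * bits_per_pixel / 1e9

# 20 MP per eye at the 90-120 fps range Bavor mentions:
print(raw_video_rate_gbps(20, 90))   # 86.4 Gb/s
print(raw_video_rate_gbps(20, 120))  # 115.2 Gb/s
```

Per eye, the same maths gives roughly 43-58 Gb/sec, so a naive pixel count lands in the same ballpark as the 50-100 Gb/sec range quoted above.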

New Report from ILMxLab & Disability Visibility Project Shares Insights on VR Accessibility Design

The Disability Visibility Project recently published a report that highlights the accessibility issues relating to VR users with disabilities. This follows a survey created by Disability Visibility Project founder Alice Wong in partnership with Lucasfilm’s ILMxLab, which covered user experiences, accessibility issues, and ideas about VR for people with disabilities.

In total, 79 VR users with disabilities from around the world participated in the survey, offering insights into their experiences with VR technology. Of the 98 different types of disabilities described, the most common were deafness, arthritis, scoliosis, cerebral palsy, autism, asthma and PTSD. ILMxLAB helped to promote the survey, and a section covering users’ experiences with its VR game Trials on Tatooine (2016) was optional.

The report offers six key takeaways from the survey: accessibility should be integrated into VR software from the start; VR software and hardware need maximum flexibility and customisation; developers should interrogate cultural norms; representation should be diversified; developers should be sensitive to the diverse and varied communities and disability types; and VR development teams should hire disabled people. A large portion of the report sheds light on the specific challenges that disabled users face when using VR hardware and software.

Barriers to Use

6-DOF headsets (which 79% of participants use) are generally more problematic, as developers take advantage of positional tracking, often with the expectation that the user can move freely in a wide space. 3-DOF headsets (used by 63% of participants) have their own problems too, such as fixed menus that use small text. In terms of VR activities, the most common difficulties are balancing while standing, crouching, standing, physical locomotion, and raising/extending/moving arms. Other difficulties that received several mentions include holding/gripping objects, sensitivity to light, seeing, moving fingers, thinking, remembering, or concentrating, and sensitivity to flashing lights or visual patterns.

The report offers many choice quotes from participants, who describe their difficulties in more detail. For example, in relation to balancing while standing, one user writes “I’m unable to stand to use VR. I need to be seated with the backrest at just the right incline, and with the right padding / firmness. I’ve only ever found 2 seats that don’t increase my pain, and currently the one that works best is my wheelchair, which obviously doesn’t swivel like an office chair, so as well as being unable to stand, I’m unable to physically rotate, which is fairly frustrating and impacts the majority of the VR experiences I try.”

“The Vive is hard to use because I have to hold the controllers and push my wheelchair around at the same time,” says one user in relation to VR locomotion challenges. “Hard to turn. Easy to bump into walls even with chaperone because my radius is wider.”

One user first noted some positives of VR imaging: “VR allows me to see far clearer than I do with my natural eyesight, giving me far more detail in both objects in the distance as well as holding objects up close. I also experience depth perception in VR, where normally I have diplopia (Double Vision).” But they go on to describe issues with light sensitivity. “One area I do have difficulty with is if the screen suddenly goes very bright, I can be dazzled and lose focus, another being small text, or text that is tracked in the centre of my view.”

Some participants appear to be unable to use VR at all, fearful of visual triggers. “I’m afraid to try and risk a migraine. I already have to avoid various media with strobe lights, flashing effects and too much blurring.”

“I am sensitive to loud noises and flashing lights/images,” says another participant. “I am not interested in VR because I won’t be able to predict or control these features. I also cannot use shared headsets/gear because of the chance I could be contaminated by gluten or allergy triggers.”

For others, the challenge is in hardware setup, requiring assistance to wear a headset. “My cerebral palsy makes it impossible for me to take my device out of the case… therefore I can’t participate in VR without assistance. I would like to be able to set it up by myself, because I often have episodes of anxiety, depression, and pain while I am alone. I use VR to treat those things.”

Developer Recommendations

Certain issues highlighted in the report are also relevant to the wider VR user base, such as potential motion sickness and cable management. Some existing problems will naturally improve as the technology advances; for instance, difficulty reading text due to low display resolution will diminish, and communicating via gestures will improve dramatically as motion controllers and hand tracking evolve. Others require more consideration from developers to make their experiences more accessible and customisable.

One user asks for alternative button mapping to be a default feature across the board, as it would be “so much easier for the disabled gamer to choose the option of which button is suitable for them to play that particular game”, and another requests the option for alternatives to motion controls: “I’ve seen traditional games with VR components lock out traditional control methods when a VR headset is being used. This isn’t right! I should always be able to use a gamepad coupled with a VR headset to play games, especially games that’d normally support a VR headset.”

There are requests to pay more attention to text and captioning, in terms of their appearance and flexibility. “Tactile objects like clipboards, whiteboards, and posters that can be moved are great because we’re able to find the right place to stand and look at it. If a text box floats right in front of us wherever we look, forcing us to cross our eyes, and can’t move closer, we won’t be able to read it.”

Several statements discuss accommodating the seated user. “Remember that we exist,” writes one participant. “We share this space with able bodied people and as it stands it’s very difficult for us to use this new experience without a lot of pain or even at all.”

“Make height adjustments available and movements such as bending and crouching optional,” suggests another. “Right now I have to take off my headset and put it to the ground to bend down and watch a screen to see what I’m doing.”

Another participant offers a different view: “Games are very hostile to people with any kind of motor disability. However I am not asking for games to be tailored for me or people like me. Get the market as wide as you can as fast as you can and someone will make products I can use without damaging the potential of VR. Having said that, I would not be opposed to a disabled setting that allowed people to play from a more limited field of view and sitting.”

Many users with restricted movement request more options in terms of locomotion, button inputs and motion controls; visual and hearing impaired users request more specific options for graphics and audio; software flexibility is key. With so many possible considerations, an important suggestion is to involve people with disabilities from the start of the process, or at the very least as part of the testing phase. “Ultimately, get people with disabilities to help create and test your experiences before you ship them!” writes one user. “Surveys are helpful to the cause, but until you get people with disabilities creating and testing VR experiences, there’s only so much data can do to help.”

Potential for Enrichment

While the participants lament the many hardware and software challenges relating to disabilities, the report also highlights the positivity and excitement about VR’s potential to enrich lives and enable otherwise impossible experiences.

“I’m excited by the illusion of traveling free of the confinement of my body. This would hold true of anyone, with and without a disability. But VR opens up the possibility of being able to walk in the woods and feel surrounded by trees and the sounds of the forest.”

Another user sees VR as a great equaliser, writing “I can do things in VR, like drive a car that I can no longer safely do in real life. And, like healthy people, I can do and see things in VR that I could never do in real life.”

“I think VR for people with disabilities has tremendous potential in almost all areas of life. I think it has incredible educational and recreational potential and I also think it has the ability to provide life simulation activities that could encourage more personal development and growth in a fun way. I think VR experiences of all kinds should be available to people with disabilities and that there are specific things like driving simulators and life simulation experiences that could be of real benefit to the disability community.”

Free UE4 Template Makes Creating More Realistic VR Hands a Snap

Czech developer iNFINITE Production has released UVRF – a free, cross-platform template for hand presence in VR. The open-source demo offers a framework for use in any Unreal Engine project, as well as a ‘Playground’ scene containing an underground bunker and shooting range to showcase hand interactivity.

Detailed in a post on the Unreal Engine VR developer forum, UVRF’s framework aims to be a useful starting point for implementing hand presence in an Unreal-based VR experience, offering 17 grab animations to cover most objects, per-platform input mapping and logic, basic haptics, teleport locomotion using NavMesh (with rotation support on Rift), touch UI elements, and several other useful features. The framework is released under the CC0 license, meaning it can be used by anyone without restriction.

In a message to Road to VR, Jan Horský at iNFINITE Production explained how this template could be particularly useful to new developers. “While Unreal does very good job at making development accessible, building hands that properly animate, are properly positioned, with grabs and throws that feel natural and so on, is still not a trivial task,” he writes. “While it’s not a problem for experienced dev teams, it is a problem for newcomers. And they’re the ones that are likely to have ideas that will surprise us all. This little demo is an attempt to make VR development easier for them.”

The included ‘Playground’ demo shown in the video features a functional shooting range in an underground bunker, littered with magazines to show the multi-object interaction of reloading a gun, along with many other features to highlight the hand animations.

Originally developed as an internal prototyping tool at iNFINITE Production, the template is now being shared freely with the world. “I expected such a project would come from companies that are more interested in VR growth like Oculus, Valve, or HTC,” says Horský. “It’s nearly a year since Touch was released and there is still no such thing publicly available, so we decided to take it into our own hands.”

You can download the template here.

Futuremark to Launch New DirectX 12 Scene for VRMark Benchmark

Benchmark specialists Futuremark are launching a new ‘room’ on November 22nd as a free update for VRMark Advanced & Professional Editions. Adding to the existing ‘Orange Room’ and ‘Blue Room’ benchmarks, ‘Cyan Room’ uses a pure DirectX 12 engine optimised for VR.

Finnish software development company Futuremark launched VRMark in 2016, a dedicated virtual reality benchmarking application to complement their popular performance testing software, 3DMark. VRMark features two tests: the ‘Orange Room’, a specific ‘VR readiness’ test designed around the minimum PC hardware requirements suggested for the HTC Vive and Oculus Rift; and the ‘Blue Room’, a much more demanding test designed to run at 5K resolution, stretching the legs of the latest graphics cards.

‘Cyan Room’ is the latest VRMark test, using a pure DirectX 12 engine built in-house. According to the VRMark website, this test was designed to show “how using an API with less overhead can help developers deliver impressive VR experiences even on modest PC systems.”

Image courtesy Futuremark

As with the other test rooms, the new scene can run on a monitor or VR headset, and follows a fixed path for consistency. Resolution and other visual settings can be adjusted, and results include frame-by-frame performance charts. There is also an Experience mode where the user can explore the sequence at their own pace, to judge the quality for themselves. As explained in a previous article, because VR rendering techniques compensate for dropped frames or low performance so effectively, and the experience varies significantly between individuals, Futuremark believe that VR benchmarking should offer a combination of objective and subjective tests.

Image courtesy Futuremark

The benchmark results will tell you objectively whether your PC was able to meet the target frame rate, along with a comparison against other systems. But sometimes the numbers don’t tell the whole story: a subjective look around in your headset can give a different impression, depending on how sensitive you are to the various VR rendering tricks, like reprojection, used to cover up moments of spotty performance.

The Cyan Room scene will be a free update for VRMark Advanced Edition and VRMark Professional Edition. VRMark Basic Edition is a free download, containing just the Orange Room benchmark.

GridRaster Raises $2 Million for AR/VR Cloud Rendering Solution

Palo Alto-based VR/AR startup GridRaster recently announced a nearly $2 million seed funding round to strengthen and develop their mobile VR/AR rendering infrastructure. GridRaster claims to deliver high-fidelity graphics at 10 times the performance compared to standalone mobile platforms, a feat accomplished by a method of cloud computing optimisation called ‘edge computing’.

Offloading the rendering of real-time graphics from power-limited mobile devices to the cloud can provide major performance advantages, but at the expense of latency – an essential factor to minimise in VR/AR applications. Edge computing is an evolving paradigm in cloud computing optimisation, where the typical latency problems associated with the cloud can be mitigated. According to the press release provided to Road to VR, GridRaster leverages this technology “to re-define the network and compute stack at multiple layers – device, network and edge cloud.”

VR/AR devices using mobile chipsets are expected to continue as the largest sector over the next few years, and while CPU and GPU performance will improve significantly, power and heat will remain limiting factors. In his keynote at Oculus Connect 4 in October, John Carmack lamented the end of Moore’s Law, warning that PC performance “will never get to a mobile platform” and that developers should be prepared to “embrace the grind” of eternal mobile optimisation. GridRaster’s edge cloud infrastructure may offer a potential alternative – or at least additional tool – for extracting greater performance from mobile platforms.

Moore’s Law showing exponential increase in microprocessor transistors, image courtesy Wgsimon

As explained in the press release, GridRaster “provides the underlying infrastructure to distribute and manage loads across servers, dynamically optimizing network bandwidth and intelligently reducing latency to enable compelling immersive experiences,” claiming to offer “high fidelity graphics at ultra-low latency” with a 10 times increase in performance over the mobile platform alone, and easy integration into popular engines like Unity.

“GridRaster’s software technology focus helps further advance augmented reality and virtual reality experiences by off-loading processing to remote servers and clouds to support real-time collaboration of complex 3D models with significantly reduced power requirements on mobile devices,” said John Haddick, CTO at Osterhout Design Group, one of the select group of GridRaster customers named in the press release. “It is an exciting technology application for ODG smartglasses as we work with enterprise customers who want to create and collaborate in mixed reality or build immersive 3D interactive experiences. We are impressed by what their software can accomplish in a wide range of mobile environments.”

The near $2 million funding round consists of investments from several venture capital firms including Lumia Capital, Pipeline Capital, Exfinity Ventures, NexStar Partners, Unshackled Ventures, and Explorer Group. Istvan Jonyer, Principal at NexStar Partners, made perhaps the boldest claim about the transformative potential of the technology:

“We see the future of VR being powered by the mobile phone, which everyone has in their pockets,” says Jonyer. “GridRaster’s technology will turn these handsets into high-end VR HMDs at an attractive price point to enable high-end enterprise and consumer VR/AR experiences at scale.”

“We have just begun to see capabilities this technology can bring to the VR/AR space,” said Rishi Ranjan, founder and CEO of GridRaster. “We have proven out our core technology working with great partners. Now, with new capital, we will work toward strengthening our development team and maturing the product for specific enterprise and customer use cases as we continue to establish GridRaster as a standard and platform of choice for cloud-powered high-end VR/AR.”

HTC Announces New First-Party Titles, ‘Front Defense: Heroes’ & ‘Super Puzzle Galaxy’

HTC’s internal development and publishing arm Vive Studios has announced two new games coming to Vive in early December. Physics puzzler Super Puzzle Galaxy from 2 Bears Studio and multiplayer WWII shooter Front Defense: Heroes from Fantahorn Studio will both launch with promotional pricing.

As detailed in a recent entry on the official Vive blog, both games are second entries from Vive Studios development partners 2 Bears Studio and Fantahorn Studio, creators of Arcade Saga (2016) and Front Defense (2017) respectively. Although Arcade Saga was updated with Oculus Touch support, both games were originally designed to take advantage of the Vive’s room-scale VR capabilities, and the new games Super Puzzle Galaxy and Front Defense: Heroes have also been described as ‘room-scale’ experiences.

Image courtesy 2 Bears Studio

Super Puzzle Galaxy will be available in early December on Steam with 75% off the standard $9.99 price at launch “for a limited time”. This physics-based puzzler involves terrain and object manipulation and will contain 48 levels and an in-game editor to create more.

“Super Puzzle Galaxy was born out of a passion for creating compelling and engaging VR content for the whole family that challenges the player’s problem solving and creative abilities,” said David Sapienza, Executive Producer of 2 Bears Studio. “Adding user-generated content was a core tenet of creating something that empowers the community, and Super Puzzle Galaxy delivers a unique room-scale VR experience. We’re excited to see the levels and Rube Goldberg contraptions the community is able to come up with.”

Image courtesy Fantahorn

Front Defense: Heroes aims to build on the strengths of the first Front Defense title, but with a focus on multiplayer – a feature missing from the original. Launching on Steam and Viveport in early December at a promotional price of $4.99, the new game will feature multiple maps, offering 5v5 multiplayer across capture the flag, deathmatch, and defense mission modes. As described in our Front Defense review, its room-scale game design was the highlight, as you could only physically move around a predetermined space (no artificial locomotion was possible), taking cover behind your virtual defenses. Front Defense: Heroes appears to be expanding on this concept, introducing a new locomotion system called ‘V-Move’, described as “unique” and “innovative” on the Vive blog.

“With Front Defense: Heroes we’ve built upon our experience with Front Defense to engage the community and offer new challenges with every match,” said River Ho, producer at Fantahorn. “As a dynamic multiplayer shooter, Front Defense: Heroes lends itself perfectly to the competitive ethos of VR e-sports, an important feature as VR gaming matures.”

Hands-On: ‘Star Shelter’ – Survival in Zero Gravity

Stranded in space with just an AI voice and your broken ship for company, you must scavenge for materials to survive, searching random debris floating nearby using motion-based, zero-gravity locomotion. Needing some polish but highly playable in its current state, Star Shelter is available now in Early Access on Steam with support for HTC Vive and Oculus Rift.

This compelling space survival game is described as a ‘rogue-lite’ due to its procedurally generated world, and the significant resource management required to succeed—but presented with real-time, action-orientated gameplay. Perhaps it also refers to the not-so-devastating ‘permadeath’ of your character; dying in the game makes you restart as a different person (a new name appears in your pod), with your personal inventory wiped, but any progress you’ve made in completing objectives—along with your ship inventory and upgrades—are retained. Apparently this ties in with the backstory for the game, but you won’t find much in the way of lore at the moment, something that might be added at a later stage of development.

Lone Echo Locomotion

As mentioned in their first video blog, the developers at Overflow Games happily admit that Ready at Dawn’s zero-gravity VR adventure Lone Echo (2017) was their reference for the movement system in Star Shelter. Indeed it is very similar, although the ‘grip’ function is on the main triggers by default, and the wrist-mounted propulsion is activated by the grip buttons (or grip triggers on Touch), which might feel odd for Lone Echo/Echo Arena players.

You can swap these functions to bring the control scheme closer to Lone Echo, but currently this is somewhat flawed, as the main triggers are always used to activate certain commands, meaning that the inverted setting activates unwanted propulsion on any trigger command. The good news is that manoeuvring in zero-gravity by grabbing objects with the motion controllers continues to be a great locomotion system that’s immersive and comfortable for most players without the need for teleporting.

Image courtesy Overflow Games

Unfortunately, this means Star Shelter can’t escape a direct comparison to one of the highest-quality VR productions ever created, and it falls short, feeling less intuitive and more prone to glitches, though that is to be expected from an indie project measured against Lone Echo’s AAA sheen.

The game can be forgiven for lacking the complex hand animation system of Lone Echo, but it does seem odd that your character has practically no body, with no arms, legs, or even torso aside from a small chest piece. Along with this limited body presence, the movement doesn’t feel quite as natural in terms of maintaining and adjusting your momentum. Most notably, you will often come to instantaneous stops when bumping into walls, rather than bouncing in a more realistic manner. That said, there is some enjoyable physics at play here, particularly the recoil of your weapon, which propels you backwards if you’re not holding onto anything.

Unforgiving Survival Gameplay

Image courtesy Overflow Games

As a survival game, it is perhaps fitting that the optional tutorial doesn’t really give the player a thorough training; instead there is a strong suggestion that you’re expected to fend for yourself. Being thrown into the deep end might be an ideal opening sequence for fans of the genre, but it could discourage some players expecting a shallower learning curve. There is certainly no downtime in the opening moments of the game, as you find yourself immediately floating in space surrounded by debris, with four main stats of oxygen, nutrition, energy, and health to maintain, and a broken ship to fix if you want to sustain yourself in the long run.

Oxygen is the biggest issue at the beginning, as you’re constantly consuming the limited supply, and your wrist thrusters also use it—at an alarming rate. Small oxygen canisters floating among the debris can sometimes be difficult to find due to the randomised nature of the environment, and instead of being a safe haven, your ship almost feels like a hindrance. It will replenish your suit, but it also has a limited oxygen supply (at least before you can upgrade it), and is constantly under threat from random comet strikes.

Image courtesy Overflow Games

It took me a while to acclimatise to the management systems, partly because the wrist-mounted UI feels a little confusing, and partly because the relationship between your ship and suit (and the way they share and replenish resources) is not well explained. This meant inefficient use of oxygen just moving around the ship itself, and several deaths from suffocation. I don’t believe the game is necessarily too difficult—the balance is probably right—but if the systems were explained better from the start, it would avoid the frustrating trial and error learning process for new players.

The game is listed on Steam as suitable for seated, standing, and room scale VR setups, but you’re at a serious disadvantage without a reasonable amount of space to move. Whether you’re collecting an item, scanning some debris, or simply reaching for another surface to grab, a limited standing or seated play space means you’ll be using your thrusters to make small movements more regularly, wasting that precious oxygen. There seems to be some kind of forcefield in the middle of your ship, presumably to reduce your momentum when you’re close to the floating UI. I found this to be infuriating, as it felt like it was preventing me from leaving my own ship (something you have to do countless times) without wasting more oxygen.

Unfinished, But a Promising Start

Image courtesy Overflow Games

The developers appear to be rapidly improving the game and squashing bugs with regular updates, but I found a number of small issues during my testing. Front-facing seated and standing VR users who require the rotation function might find the Vive touchpad inputs too sensitive; I was often accidentally rotating my view when trying to activate the scanner. (Perhaps that’s not a significant problem, as Vive users are far less likely to be using a front-facing setup in the first place, and the rotation works better on Touch with a positive flick of the analog stick.)

Speaking of the scanner, the effect features a glowing texture that seems to animate across the geometry based on the velocity of the object. This looks odd at the best of times, but I found that if I happened to be very close to the object I was scanning, particularly if it featured a large surface, the moving texture would cause vection, making me feel like I was moving instead, which I suspect could be a nausea trigger for some players.

While the Oculus Touch ergonomics feel more suited to this kind of game, the grip function did not appear to work on my left controller, unlike when using the Vive. Some of the option toggles for controls and graphics settings seem to work the opposite way round on first selection—and the shadow and ambient occlusion options seemed to really hurt performance.

Image courtesy Overflow Games

The visual style is restrained, with little in the way of detailed geometry or textures, so it is surprising that the game struggled on a GTX 1080 with those effects enabled. I also experienced a few clipping problems—mostly inconsequential—but I was able to clip my head through the wall of my ship, giving me an oxygen warning as if I was outside without a closed helmet, and occasionally important objects would become stuck in other surfaces.

Work-in-progress problems aside, Star Shelter is an interesting take on the survival genre; Lone Echo movement with survival sim gameplay is a compelling combination. The result is an odd mix of relaxation and high stress, as the serene quiet and effortless movement in space combines with the constant danger of energy depletion and threats from comets and random drone attacks. The sense of exploration once you hack into nearby ships is also quite powerful, although this is likely dependent on the number of assets available to be randomised in order to keep that feeling fresh.

‘Haptic Shape Illusion’ Allows VR Controllers to Simulate Feel of Physically Larger Objects

In a study led by Eisuke Fujinawa at the University of Tokyo, a team of students created a procedure for designing compact VR controllers that feel physically larger. Exploring the concept of ‘haptic shape illusion’, the controllers have data-driven, precise mass properties, aiming to simulate the same feeling in the hand as the larger objects on which they are based.

Simulating the feel of real objects is a fundamental haptics challenge in VR. Today’s general-purpose motion controllers for VR work best when the virtual object is reasonably similar in size and weight; very large or heavy virtual objects immediately seem unrealistic when picked up.

One solution is to use specific controllers for a given application—for instance attaching a tracker to a real baseball bat; in a hands-on with one such solution, Road to VR’s Ben Lang described the significance of gripping a real bat and how that influenced his swing compared to a lightweight controller. But swinging a controller the size and weight of a baseball bat around your living room probably isn’t the best idea.

As shown in the video below, researchers from the University of Tokyo attempted to create much smaller objects that retain the same perceived size. The team designed an automated system which takes the original weight and size of an object and then creates a more compact but similar feeling output through precise mass arrangement.

The paper refers to several ecological psychology studies into how humans perceive the size of an object through touch alone, supporting the idea that perceived length and width are strongly related to the moment of inertia about the hand position.

The team concentrated its efforts on this haptic shape perception, collecting data from participants wielding different sample controllers in VR to determine their perceived sizes, having never seen the controllers in reality. This data allowed the creation of a ‘shape perception model’, which optimises the design of a large object within smaller size constraints, outputting CAD data for fabrication.

The object is deformed to fit the size constraints, holes are cut out, and weights are placed at specific points to maintain the original moment of inertia.

Image courtesy Fujinawa et al.
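The physical quantity being matched here is the moment of inertia about the grip, I = Σ mᵢrᵢ². As a minimal 1-D sketch (illustrative only; the masses, positions, and function below are hypothetical, not the paper's optimisation pipeline), moving a fixed amount of mass away from the hand raises I quadratically, which is the lever these designs pull to make a compact controller feel longer:

```python
def moment_of_inertia(point_masses, pivot=0.0):
    """I = sum of m * (r - pivot)^2 for 1-D point masses (kg, metres)."""
    return sum(m * (r - pivot) ** 2 for m, r in point_masses)

# Two designs with the same total mass (0.5 kg), gripped at r = 0:
compact  = moment_of_inertia([(0.3, 0.05), (0.2, 0.05)])  # all mass near the hand
weighted = moment_of_inertia([(0.3, 0.05), (0.2, 0.20)])  # 0.2 kg shifted to the tip

# weighted is several times larger than compact, so the weighted design
# should be perceived as a longer object despite weighing the same.
```

This is the intuition behind placing discrete weights at computed positions: the fabricated controller can recover a larger object's moment of inertia within a much smaller footprint.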

The team had VR developers in mind, as this approach could offer a potential benefit in demonstrating a product with a more realistic controller. The CAD data output means that smaller, safer prototype controllers that give the impression of wielding larger objects can be created quickly with a laser cutter or 3D printer.

Further information and the full paper are available on Fujinawa’s website. The research is being presented at this week’s VRST 2017, the 23rd ACM Symposium on Virtual Reality Software and Technology held in Gothenburg, Sweden.
