AxonVR was awarded US Patent No. 9,652,037 for a “whole-body human-computer interface” on May 16th, which includes an external exoskeleton as well as a generalized haptic display built on microfluidic technology. I had a chance to demo AxonVR’s HaptX™ haptic display, which uses a “fluidic distribution laminate” with channels and actuators to form a fluidic integrated circuit of sorts that can simulate the variable stiffness and friction of materials.
At GDC, I stuck my hand into a 3-foot cube device with my palm facing upward. I could drop virtual objects into my hand, and an array of tactile pixels simulated the size, shape, weight, texture, and temperature of those objects. The virtual spider in my hand was the most convincing demo, as the visual feedback helped convince my brain that I was holding the virtual object. Most of the sensations were focused on the palm of the hand, and the fidelity was not high enough to provide convincing feedback to my fingertips. The temperature demos were also impressive, but they were a large contributor to the bulkiness and size of the demo. The company is in the process of miniaturizing the system and integrating it with an exoskeleton for more force feedback, though the temperature features are unlikely to make it into the mobile implementations of the technology.
I had a chance to talk with AxonVR CEO Jake Rubin about the process of creating a generalized haptic device, their plans for an exoskeleton for force feedback, and how they’re creating tactile pixels to simulate the cutaneous sensation of different shapes and texture properties. Rubin said that the Experiential Age only has one end point, and that’s full immersion. In order to create something like the Holodeck, Rubin thinks that a generalized haptic device will unlock an infinite array of applications and experiences, analogous to what general computing devices have enabled. AxonVR is not a system that’s going to be ready for consumer home applications any time soon, but their microfluidic approach to haptics is a foundational technology that is going to be proven out in simulation training, engineering design, and digital out-of-home entertainment applications.
New VR gloves designed by engineers at UC San Diego employ “soft robotics” to deliver tactile feedback to the wearer as they touch and interact with virtual objects.
The system is designed to mimic the movement and sensation of muscle with a component called a McKibben muscle. The glove is built around a layer of latex chambers, surrounded on the surface by braided muscles. The entire glove — including the muscles — is connected to a circuit board, and as you interact with virtual objects, the gloves inflate and deflate to replicate pressure. It’s a finely tuned process designed to give you the sensation that you’re actually lifting and touching objects, just as you would in the real world.
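The inflate/deflate loop described above can be sketched roughly as follows. This is a generic illustration of pneumatic haptic control, not UC San Diego’s actual implementation; the function names, pressure model, and numbers are all invented assumptions.

```python
# Illustrative sketch of an inflate/deflate haptic feedback loop.
# The linear force-to-pressure model and all constants are assumptions,
# not taken from the UC San Diego glove.

def target_pressure(contact_force_n: float, max_force_n: float = 10.0,
                    max_pressure_kpa: float = 50.0) -> float:
    """Map a virtual contact force to a chamber pressure, clamped to a safe range."""
    force = min(max(contact_force_n, 0.0), max_force_n)
    return (force / max_force_n) * max_pressure_kpa

def update_chamber(current_kpa: float, target_kpa: float, gain: float = 0.5) -> float:
    """One step of a simple proportional controller driving the pump/valve."""
    return current_kpa + gain * (target_kpa - current_kpa)

# Example: a fingertip touches a virtual object with 5 N of force;
# the chamber pressure converges toward the 25 kPa target.
pressure = 0.0
for _ in range(20):
    pressure = update_chamber(pressure, target_pressure(5.0))
print(round(pressure, 2))  # 25.0
```

A real controller would close the loop on a pressure sensor rather than an internal estimate, but the structure — map contact force to a target, then chase the target — is the same.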
In theory, the gloves could be paired with other technologies, like a Leap Motion sensor, to simulate a wide range of activities.
“This is a first prototype but it is surprisingly effective,” Professor Michael Tolley says on the UC San Diego website. “Our final goal is to create a device that provides a richer experience in VR…but you could imagine it being used for surgery and video games, among other applications.”
This type of technology has been used in similar ways before — though not exactly in the muscle structure described above. The Kor-FX and Hardlight Suit, for example, are VR-ready vests that allow you to “feel” impacts and pressure on your chest through haptic feedback.
Jointly, these technologies may someday be used to immerse you entirely in a virtual world, whether for entertainment, gaming or more practical purposes like situational training.
Of course, they will remain separate pieces of gear for now, at least until engineers or developers figure out a way to create one, seamless outfit or suit. That would require overcoming obstacles like interconnectivity problems that occur with other kinds of electronics. A full-body suit would need to be able to differentiate between pressure, impact, or muscle simulations on different areas of your body, which would also need to be fine-tuned from a software perspective. Video games, for instance, would have to include information such as what part of the player’s body a bullet hit.
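The routing problem described above — a game reporting where a bullet hit and the suit deciding which actuators to fire — could look roughly like this. The region names, actuator layout, and API are entirely hypothetical, meant only to make the software-side challenge concrete.

```python
# Hypothetical sketch of routing a game haptic event to suit actuators.
# Region names and the channel layout are invented for illustration.
from dataclasses import dataclass

@dataclass
class HapticEvent:
    region: str       # e.g. "chest_left", "forearm_right"
    kind: str         # "impact", "pressure", or "muscle"
    intensity: float  # 0.0 - 1.0

# Map body regions to actuator channel IDs (made-up layout).
ACTUATOR_MAP = {
    "chest_left": [0, 1],
    "chest_right": [2, 3],
    "forearm_right": [8],
}

def route(event: HapticEvent) -> list[tuple[int, float]]:
    """Return (channel, drive_level) commands for one event."""
    channels = ACTUATOR_MAP.get(event.region, [])
    # Impacts are driven at full strength; sustained pressure is scaled down.
    level = event.intensity if event.kind == "impact" else event.intensity * 0.6
    return [(ch, level) for ch in channels]

print(route(HapticEvent("chest_left", "impact", 0.9)))  # [(0, 0.9), (1, 0.9)]
```

The point of the sketch is that the game only needs to emit semantic events; the suit-specific mapping to hardware channels lives in one table, which is what would let vests, gloves, and a future full-body suit share the same game-side interface.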
The gloves are not a commercially viable product just yet, and they probably won’t be for some time. Michael Tolley, a mechanical engineering professor working with the team that came up with the VR gloves, said in a prepared statement that they are nothing more than a “first prototype,” though he added they’re “surprisingly effective.”
The team was able to 3D-print a soft glove exoskeleton mold — or case mold, if you will — to serve as proof that mass production is feasible. In other words, they are actively laying the groundwork for a commercial release of such a device.
Kayla Matthews writes about artificial intelligence, virtual reality and other tech for websites like VentureBeat and MakeUseOf. To read more posts from Kayla, read and subscribe to her tech blog, Productivity Bytes.
Playing a racing game in VR is an immensely satisfying gaming experience. The rush of cars blazing past you, the thrill of drifting around corners, and the detailed interiors of some of the world’s most exotic vehicles are all enough to take your breath away when viewed from inside a VR headset. When you look down in a racing game and see your character’s hands grasping a steering wheel, though, it can take you out of the experience if you’re just holding a regular old gamepad in your real hands. That’s why you need a proper racing wheel to really take the immersion to the next level.
The Thrustmaster T300RS GT Edition Racing Wheel is designed to offer a top-of-the-line racing experience for PS4 and PC gaming (as well as PS3) and it works right out of the box. As soon as I opened the box and set everything up, I just plugged it into my PS4 and the wheel automatically started spinning to calibrate itself. No other setup required.
When I reviewed the Thrustmaster T16000M FCS Flight Pack I noted that I wasn’t exactly a rabid connoisseur of realistic flight simulators, but I could more than appreciate the audience and their tastes. I’ve got a high-level of respect for gamers that take the time to master flight control systems and perfect flying a craft in a realistic way. The same can be said for excelling at a high level in racing games with wheels like the T300RS.
Setting up this wheel was a little confusing at first, but that’s only because it came in a few more pieces than I expected. When you open the box the wheel itself is actually disconnected from the base and you just have to slide it into a socket and spin it around a bit to lock it in place. On the bottom you can attach a table mount that tightens to the edge of a table (or TV tray or something similar) and once that’s done it feels great to use. The pedals just plug into the back of the base, the base plugs into the wall and the PS4 via USB, and that’s it.
Ergonomically it’s fantastic: the material on the wheel is nice to grip and it spins in my hands very easily. There is nice resistance while turning, and the wheel spins back to a centered position when you let go, which is great for straightening out after turns or a collision. The force feedback is noticeable when it needs to be and just subtle enough to add that extra ounce of immersion that’s most sorely missing when using just a gamepad.
I spent most of my time testing the T300RS GT Edition with my PS4 Pro and PlayStation VR (PSVR) headset in DriveClub VR and DiRT Rally. The setup also works out of the box for PC and does a wonderful job of increasing the level of immersion in titles such as Project CARS and iRacing.
Making the transition from gamepad playing to using a racing wheel took a few races. Instead of alternating the brake and acceleration with trigger presses I was shifting my foot between two different pedals. The setup even comes with a clutch already, which is great for fans of driving manual transmission vehicles. I don’t possess that skillset, but everything else about the T300RS GT Edition was wonderful for me to use.
Final Recommendation: Absolutely
There are more expensive wheels on the market and there are more affordable wheels on the market. The Thrustmaster T300RS GT Edition is definitely on the higher end of the spectrum, but you get what you pay for. It’s wonderful to use, easy to set up, and feels amazing in your hands. I can’t imagine playing a game like DriveClub VR, Project CARS, DiRT Rally, or any other racing title without a wheel like this now.
The Thrustmaster T300RS GT Edition Racing Wheel is now available for purchase from Amazon for $390 as well as other retailers online and in stores.
Regular VRFocus readers will be aware of our interest not just in the use of virtual reality (VR) as a means to entertain, but also as a tool to educate and improve the human condition. To that end we have, as part of our features section, “Your Virtual Health”, which covers an array of topics relating to healthcare and the medical technology (medtech) industry as a whole. We’ve had discussions on how VR is being used to benefit mental health, how and why it affects the brain in the series VR & The Mind, and I’ve even discussed my own thoughts on an unspoken issue of VR technology, namely how it is just not suitable for those suffering from more general sickness.
Our most regular series dealing with VR’s healthcare possibilities, however, is The VR Doctor, written by Dr. Raphael Olaiya. An NHS doctor and Director of Medigage Ltd, Dr. Olaiya works with the UK’s National Health Service (NHS) on immersive VR training programmes for doctors and nurses.
Back in April, Dr. Olaiya and Economics PhD fellow Nandor F Kiraly discussed how VR may influence healthcare, and we’re able to bring you that discussion today, as well as a portion of Kiraly’s Creative Economies video essay in which the interview takes place.
You can read Dr. Olaiya’s opinions and see the video clip below.
How do you think Virtual Reality will influence clinical skills simulation training for healthcare?
VR runs along a continuum, and further along that continuum lie haptics, motion sensing and even smell; all of these are senses that bring us to a deeper level of immersion and realism. Lots of these technologies are currently available to connect into the virtual reality experience; they’re just not connecting to the right markets. The tipping point hasn’t been reached, so to speak!
Right now we are talking about healthcare, so, coming back to the main question that you asked: number one, I think VR in healthcare will be very useful. The main disadvantage of manikin-based simulation is that it’s not really as customizable as it needs to be; its variables, which are very important for making the trainee adaptive, are often very fixed. So if we want to structure VR clinical skills simulation training to be as effective as possible, then each healthcare scenario the trainee is put in must be at least slightly different, otherwise it will seem artificial, like déjà vu. Another big factor is touch.
A program we have at the minute is basic life support, and as soon as people put on the headset and are in a hospital having to perform basic life support, the first thing they do is put their hands in front of them to see if their hands are actually involved in the program. In the first version we developed at Medigage there is no hand motion sensing; however, it was still an effective learning tool when used alone or when supplementing physical manikin-based training, particularly through: 1) Increasing the accuracy and detail of what the trainee retained about how to do the procedure itself. 2) Simulating the environmental emotional pressure, or stress, of having to carry out a medical procedure alone when new to the skill.
Haptics have different levels of realism, which make the VR experience more convincing. In standard non-VR manikin-based simulation training you can touch a manikin, but it doesn’t feel at all real. A tool that most medical students in the western world have used for training is the advanced life support high-fidelity manikin; several companies manufacture these, and they cost between £40,000 and £100,000, which increases the realism and customisation of the simulation.
As haptics in VR become more sophisticated, adaptive and realistic, we strongly believe they will bridge the gap needed to convert even the staunchest of VR sceptics, and in healthcare there are a lot of them.
Social aspects – working in a team is an aspect of healthcare which is crucially important and cannot be overlooked. A common misconception, of course, is that VR is an isolating, lonely and solitary experience. VR in fact allows us to be more connected than ever before: imagine collaborative surgery, where the operating room has a multidisciplinary team all working together simultaneously in a VR space from different countries or even continents.
These are all important factors that make the VR experience more effective as a learning tool.
Where is the technology right now?
It’s there; it just needs to be directed, and the right expertise is needed to develop it. It comes down to managing it and allowing it to be used, to make it as effective as possible.
Would you agree with the following statement: “With the dropping prices of electronics and technology, training practitioners via VR will be better than current methods – such as doing the same training on cadavers – from a cost-effectiveness standpoint?”
In the future, I firmly believe this 100%. It’s about realizing how soon that is, and when we should be investing more capital and finance into it to speed up the process, because before we invest more finance into it, we have to be on the right track. Currently there are lots of different talented people and development companies going off in different directions, but there are no standard guidelines on the best way to do it. Before we start pushing things to replace what is already there – cadavers, as you said – we need to come up with a gold standard. Medical education is a speciality in itself, hundreds of years old; some of the great scientists had their take on it, and we are still developing it right now, so adding VR into the mix cannot be taken lightly.
First, we need to find the best way, and secondly push it in the right direction. Every step of the way we will need to conduct research to confirm that we are on the right track, and only then can we start financing the transition towards a more VR-based education system.
To answer your question – in terms of actual objective finances – a cutting edge advanced life support high fidelity mannequin room is around £100,000 for the full set, which normally is equipped with a two-way mirror, so that the clinical skills tutor can observe what the team are doing, and then you also need to calculate for the clinical skills tutor; their training, the effectiveness of their teaching methods, and their salary. With this simple example, we can already see that the costs are mounting up… But how do we replace that with virtual reality? First of all, virtual reality comes down to customizability, and implementing a level of artificial intelligence that would allow the virtual reality system to know factors such as what level of training the participant is at, and adapting the course to their needs. Then there is the actual hardware itself; anything that can be done with mannequins now can be adapted for virtual reality.
Financially, using virtual reality will make the most elaborate simulations affordable by substituting virtual assets for the most expensive manikin-based simulation hardware and other costly elements. The most expensive part of developing VR sim training in healthcare will be the touch feedback/haptics; of course, manual dexterity and muscle memory development is a crucial part of simulation training, and integrating it is the factor that currently drives up the cost of VR sim training most steeply. Different medical skills place different amounts of emphasis on manual dexterity and muscle memory development for the trainee, so how accurate the touch feedback needs to be is the current decider of whether VR sim training will be more cost effective than high-fidelity manikin training.
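The cost argument above can be made concrete with a back-of-envelope amortisation model. Only the £100,000 manikin-room figure comes from the interview; every other number (lifetimes, throughput, running costs) is an invented assumption to show the shape of the comparison, not a real estimate.

```python
# Back-of-envelope per-trainee cost comparison. Only the £100,000 manikin-room
# figure comes from the text; all other numbers are invented assumptions.

def cost_per_trainee(capital, lifetime_years, trainees_per_year, annual_running):
    """Amortised capital plus annual running costs, divided across trainees."""
    yearly = capital / lifetime_years + annual_running
    return yearly / trainees_per_year

# High-fidelity manikin room: £100k of kit over an assumed 10-year life,
# with an assumed tutor-dominated £45k/year running cost, 200 trainees/year.
manikin = cost_per_trainee(capital=100_000, lifetime_years=10,
                           trainees_per_year=200, annual_running=45_000)

# Hypothetical VR setup: lower capital (headsets plus haptics and content),
# shorter hardware life, but higher throughput and lower staffing cost.
vr = cost_per_trainee(capital=30_000, lifetime_years=5,
                      trainees_per_year=400, annual_running=10_000)

print(round(manikin), round(vr))  # 275 40
```

The numbers are illustrative, but they show why the interview singles out haptics: if accurate touch feedback pushes the VR capital cost toward manikin-room levels, the per-trainee advantage largely evaporates.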
That is an interesting point you made, Raphael: as with the training of pilots, no amount of simulation is comparable to doing the real thing, as simulations do not factor in the human elements. With regard to medical studies, VR training would have to go hand in hand with hands-on training, as well as work-based training. Still, would you say that VR could prepare students for the real thing?
Yes; the individualism of the human experience and the nature of human beings mean that nothing can really replace dealing with a human being. But of course we are comparing virtual reality to the gold standard (manikin-based sim training) of clinical training without actually dealing with a patient, because when you are dealing with a patient you risk doing harm to said patient. For example, when you are practicing taking blood from a patient, the first time you do it the success rate is going to be much lower compared to the tenth time, but in those ten times you may have failed numerous times, and you may have harmed patients. That is just a simple example, but take chest drains: you can seriously injure people if you do that wrong. You can practice on manikins, but what if we can increase the efficiency of that, as there is still that cross-over period? We have a motto at Medigage: “We are bridging the gap between clinical skills, classroom based training and real life Grade A clinical performance on human beings”. It’s about making that gap as seamless as possible, so that when a person moves on from all these technologically advanced training techniques to a real human being, they are as prepared as possible and the chance of failure is minimized. The human factor is a constant element which we can’t replace, but it’s all about bridging the gap through training.
Would you say the field of medical VR can be considered a creative economy in its current state, given that there is no gold standard? Everyone is trying to tackle the issue of creating something that will ultimately benefit students, or something very specific to specialisms like surgery or anaesthesia. And do you believe there will be a turning point, once said gold standard is achieved, where the creative process will become more boxed in and standardised?
One of the exciting things is that there is huge potential for the creative community, and creatives themselves, to be involved in designing the different methods of teaching through virtual reality for medical education and other domains as well. There is going to be a big input from the creative economy, and we really need to capitalise on that and see all the different options. Eventually there will be design techniques, user journey profiles, and ways of developing programs that are more effective for most people, which will set the precedent; but at the same time, since VR is so dynamic, it will always leave a place for other design fundamentals and techniques, as opposed to structured medical education lectures, which have a lot less scope to be different.
In a lecture, you have the lecturer themselves, and an important variable is how dynamically they can connect with the audience of learners; then there is the lecture content itself, and the use of multimedia and applied interactivity. Go back 200 years, when the lecture was a blackboard: very one-dimensional, and the structure was very rigid. As a learner you really had to have the right learning style to benefit; the people whose personal style of learning suited it were in a great position to learn, but the people it didn’t suit most likely fell off the academic ladder. Coming back to the question you asked, there will be a huge opportunity for the creative industry to get involved, and in my opinion that should be pushed and encouraged; the medical education sector should allow it to happen, invest financial resources and really be open-minded. And whilst some of the most effective approaches will be more successful, become popular and take off, there will still be opportunities for other innovative approaches to be the optimal learning style for students who learn differently. It’s going to be like nothing before, because virtual reality is so dynamic.
I have an HTC Vive set up in my living room, and it was quite hilarious watching an actual surgeon (my father) play Surgeon Simulator. This made me think: couldn’t we have an approved first-aider training program like that game, but one which would teach lifesaving skills to the public? From what I understand, first aid training costs precious resources like time and money; couldn’t that be conveyed through a video game at a fraction of the cost?
What you’re talking about is turning medical simulation training into a game; essentially, gamification. It’s a wonderful technique that has been capitalized upon in the business and management sectors to take advantage of our inclination as humans to want to track progress through whatever we are doing, to have rewards and feedback, and to know how far we are from finishing what we are on. These are just some factors of gamification, and applying them to VR for medical education – for example first aid, as you mentioned – is a fantastic opportunity, and that’s been one of the main aspects we wanted to build into our programs at Medigage. Let’s talk about specifics and an example: the emergency first aid at work course is a three-day course, each day takes six hours, so 18 hours with an assessment at the end. There is extra studying at home involved, so let’s count 25 hours in all to be really competent as a first aider at work. That does not involve advanced life support. So the course itself can be very expensive, over a thousand pounds; it involves a qualified trainer, and the people who do the course have to stop what they are doing in their own professional lives. There is a lot of cost involved, and the people doing the course and their employers often do it reluctantly, like a chore, as it’s not really an enjoyable thing – often seen as just ticking a box. So if there was a way to make it more engaging, to gamify it, make it enjoyable, I think virtual reality has a massive opportunity there. And gamification is really the word we are looking for here; approaches to gamification are very creative, and there are ways to do it that haven’t even been explored yet, so gamification can never be forgotten when it comes to using virtual reality for medical education. And our first product at Medigage, at medigage.co.uk, is basic life support; it’s gamified.
What would you say are the risks in training medical professionals in virtual reality – if any?
100%, there are risks with everything. The first risk is what we have already talked about: going down the wrong track and spending lots of resources and finance developing something that is not as effective as it could be. With virtual reality, a lot of people who invest and develop see the financial incentive to commercialize VR, and of course that is a massive opportunity, but at the same time you have to be cautious, methodical and systematic; you can’t jump in with both feet on the first idea that comes to mind, because it could jeopardise the wider perspective on VR for medical education.
The second risk is that the technology is still developing. Compared to what we had 20 years ago it looks advanced, but if you understand VR and how advanced it can become, you’ll see that we have relatively primitive technology in terms of where we could go; primitive in the sense that it’s not adapted to our biology as humans. For example, there are our eyes, and how we process information: we are not sure whether it will be healthy for our brains, or what the long-term health risks are from multiple hours of looking at a screen which is literally centimetres away from your eye. Currently the research available on the negative health effects of VR – concentrating on the eyes – is quite positive, in the sense that your eyes become used to how far away the screen is, and only people with pre-existing eye conditions would be adversely affected; there is no real evidence to suggest harm currently, but it needs to be researched further.
The social aspect is another risk that needs to be mentioned: how is VR medical education going to change the social side of medicine and healthcare? Healthcare itself relies a lot on teamwork, and it helps when people like each other and are active team members. With VR we need to work as a team to preserve the social element, putting emphasis on working with each other for better patient care from the very beginning. There is no real technological limit stopping VR from becoming a social experience; there is scope for multiple VR users to be in the same environment and for it to be as social as sitting in the same room together. But the risk is that this is not focused on, so it needs to be given priority from the start: to make it a social experience as opposed to an isolating one, like the image of a gamer in their parents’ basement whose life is all about that video game. We don’t want that.
Let’s hypothetically say that VR medical training replaces the five years of university studies, and all your training comes from VR. Is there a risk of desensitization?
There is a risk, a bonus, and an opportunity. All doctors risk becoming desensitized to the original reason and motivation behind choosing their profession. Every day they risk being subjected to very sad, emotive situations, such as breaking bad news like a cancer diagnosis. After a while we see that people who have to deal with that a lot can develop a struggle to really express the emotions and empathy needed for patient care, whether towards the family or the patient themselves. This has been an issue since the start of medicine and patient care. If someone completes their training for five years through a VR course, what risk does that carry in terms of desensitizing them? It takes decades to desensitize a doctor to the level where they are no longer sympathetic or empathetic, so I don’t think that is a risk for basic medical training. The opportunity of using VR is that, because it’s still different from real life, anyone can see that difference and understand that it is for training purposes; it’s not the real thing, but it is bridging that gap. Trainees can take the advantages of the training while limiting the disadvantages, and by recognizing that it is not real, their empathy can be preserved. We don’t know for sure; this is just my experience talking, as a healthcare professional, a doctor and a virtual reality developer. You mentioned virtual reality courses replacing the primary medical degree, the five-year degree, and I don’t think VR could ever replace the degree. The way I look at it, there needs to be a balance between technology-assisted learning and real-life experience: all those elements you can’t simulate in a virtual environment. But fidelity and realism in virtual reality are a spectrum, and there are dynamic properties of this technology that we haven’t realised yet, because they seem impossible at this point in time.
And one such element could be a degree of realism that would allow for a complete replacement of the five-year degree via virtual reality.
In one of your publications you stated that one of the biggest hurdles when it comes to virtual reality and medicine is the lack of robust artificial intelligence. Could you explain a bit more?
Artificial intelligence is a spectrum; a broad, wide spectrum with a longitudinal quality. What I mean by that is that AI can be as simple as a calculator, if we look at its basic fundamentals. At its most sophisticated, it is something which understands an abstract situation and can come up with an answer, like a human, or even completely unlike a human. It will have to come up with an answer, though, which it can then use to learn more information, and it can learn by itself from its environment and allow itself to adapt, thus allowing unlimited potential. This is still at a fairly primitive stage of development, especially its adaptiveness. The most powerful artificially intelligent machine currently, from my understanding, is IBM Watson: a very powerful and incredible machine being used for really fantastic feats, particularly in the healthcare field, the business field, and in big data. In healthcare particularly, a specific project in the US to do with oncology (the study and treatment of cancer) is looking at big data patterns in gene coding, understanding what sorts of genes give rise to cancer, and trying to detect, deal with and treat it most effectively. Despite how amazing the AI is here, it is very specialised and not adaptive at all: its AI has a super narrow and specific range.
My opinion is that artificial intelligence used in VR simulation training will allow every clinical training environment to be different, and to adapt, respond and react to the trainee in the natural way best for learning effectiveness. It will allow the trainee to be thinking on their feet and avoid the disadvantages we currently face with manikins: if you are taking a blood sample from a manikin, you know exactly where the vein is, because you have done it a hundred times and you can see the puncture sites that you and your colleagues have made. It’s not customizable. VR with artificial intelligence could present a different patient each time, with a different voice, a different-sized arm, a different-coloured arm, allowing randomisation. And that’s just customisation of visuals; pushing further, one could have branching levels of customisation, where by selecting a set of options the artificial intelligence will customize and adapt your learning situation, making it as challenging as it needs to be in order to be as effective as possible for your development. That is a lot more complex.
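The basic layer of the randomisation Dr. Olaiya describes — a different arm, vein position, and voice every session — could be sketched like this. The parameter names and ranges are invented for illustration; a real system would draw them from clinical data, and the adaptive branching he describes would sit on top.

```python
# Illustrative sketch of randomising a venepuncture training scenario.
# Parameter ranges are invented; a real system would use clinical data.
import random

def generate_patient(seed=None):
    """Return one randomised virtual patient for a training session."""
    rng = random.Random(seed)
    return {
        "arm_circumference_cm": round(rng.uniform(22.0, 38.0), 1),
        "skin_tone": rng.choice(["I", "II", "III", "IV", "V", "VI"]),
        "vein_depth_mm": round(rng.uniform(2.0, 8.0), 1),
        "vein_visible": rng.random() < 0.6,
        "voice_profile": rng.choice(["A", "B", "C", "D"]),
    }

# Each session presents a different patient, unlike a fixed manikin
# whose vein is always in the same, well-worn place.
p1, p2 = generate_patient(seed=1), generate_patient(seed=2)
print(p1 != p2)  # True
```

Seeding also gives the opposite property when you want it: the same seed reproduces the same patient, which is useful for assessment scenarios that must be identical across trainees.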
For that level of complexity, you would need an enormous collection of data on patients. Wouldn’t that incur a level of risk regarding patient confidentiality, and making the AI a possible target for hacking?
In order to cover all the variables, a lot of memory is needed, and this should be fine because the rate at which available digital memory is expanding is skyrocketing; I don’t think there is an issue with memory. What underpins this all is big data: masses and masses of petabytes are needed to help artificial intelligence and virtual reality keep developing. At the core of it, what is needed is communication between what is actually happening in real life, such as real-life clinical statistics on what actually happens with patients, and the AI engine of the simulation; this would allow the AI to learn live as more and more data is collected.
Regarding the security aspect of patient confidentiality being breached, it is a fundamental concept within the medical education domain to use patient data – very confidential information about patients – for teaching purposes. As long as everyone understands that this data needs to be kept within the domain of medical education, that only the particular people who are learning to become better clinicians have access to it, and that patient identification information is completely anonymised, the security risk is minimal.
The VR Doctor will return again soon to VRFocus with another discussion. Interested in Healthcare? Why not check out some of the other articles in the series.
Virtual reality (VR) is all about immersion. To truly feel present in a virtual location, the user should have as few reminders of the ‘real world’ as possible. Standard controllers can break that sense of immersion; tracked motion controllers help, but for the best experience many firms are working on haptic control gloves. Then there is Go Touch VR, who thinks haptic motion control should be possible without the glove.
Go Touch VR are a French start-up working on an innovative solution for VR haptic controllers by stripping away the actual glove and leaving just the haptic feedback. The current form of the VR Touch is a small motor in a plastic case that attaches to the user’s finger, looking somewhat like the monitoring devices that get clipped to patients’ fingers in hospitals. The device is modular, and up to three of them can be worn by a single user.
Tracking works by using a Leap Motion attached to the front of an Oculus Rift headset, and the Go Touch also contains an Inertial Measurement Unit, or IMU, for better finger tracking. The demos currently available allow users to push buttons or play a xylophone or the drums, with the device offering a type of ‘push’ feedback closer to what you would feel when pressing down on an object than the rumble feedback commonly found on controllers like the PlayStation Dual Shock.
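Go Touch VR hasn't published how it fuses the two sensor streams, but combining fast, drift-prone IMU data with slower, drift-free optical tracking is commonly done with a complementary filter. The Python sketch below is purely illustrative, under that assumption; none of the names or values come from Go Touch.

```python
def complementary_filter(optical_angle, gyro_rate, prev_angle, dt, alpha=0.98):
    """Blend a fast-but-drifting gyro estimate with slower optical tracking.

    optical_angle: finger joint angle (radians) from the camera (e.g. Leap Motion)
    gyro_rate:     angular velocity (rad/s) from the IMU
    prev_angle:    previous fused estimate
    dt:            time step in seconds
    alpha:         weight given to the integrated gyro signal (0..1)
    """
    gyro_angle = prev_angle + gyro_rate * dt  # dead-reckoned IMU estimate
    return alpha * gyro_angle + (1.0 - alpha) * optical_angle

# Simulate a finger held still at 0.5 rad while the gyro reports zero motion;
# the fused estimate converges toward the optical reading over time.
angle = 0.0
for _ in range(200):
    angle = complementary_filter(0.5, 0.0, angle, dt=0.01)
print(angle)
```

The high `alpha` means the responsive gyro dominates over short timescales, while the optical measurement slowly corrects any accumulated drift.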
When asked why the Go Touch was designed the way it was, CEO Eric Vezzoli told us: “With a glove you need different sizes, you need to wash it, you have to wear it, you could have breathability issues in the summer. Also a glove you have to put it on; with this we are working to get to less than one second to get it on.”
The product is still an early prototype, but Vezzoli and his team are hoping to have 100 developer kits available by September.
VRFocus will bring you further information on the Go Touch when it becomes available.
Virtual reality is great at bringing two of our senses into the digital world: sight and sound. That VR tech is already well-established and improving rapidly. However, we still aren’t at a point where we can touch, smell, or taste things inside of a VR headset. Obviously taste is the most important of these to solve (can you imagine VR Pac Man?) but for now let’s settle on haptic feedback for touch.
The Cornell University Organic Robotics Lab has created a prototype that allows VR experiences to provide tactile feedback while you play. It’s called the Omnipulse and, well, you really need to see it to believe it.
What you’re seeing above is not, as I originally thought, the soul of that Vive controller trying to escape into a human host. It’s actually a series of pneumatic tubes that connect to several pockets embedded into the Omnipulse’s synthetic rubber skin.
The skin is connected to a tank of compressed air via a narrow hose. When you trigger an action inside VR, the compressed air is released strategically into the specific pockets needed to mimic the physical sensation of that action.
The Omnipulse can mimic everything from shooting a gun to grabbing a hammer. It was seen running this week at GTC 2017 where it was connected to NVIDIA’s Funhouse demo. Inside the virtual carnival users could feel the recoil of various weapons and the impact of reflex-testing boxing and whack-a-mole mini-games.
The Omnipulse is still a very early prototype and it honestly doesn’t seem very likely that we will one day be buying tanks of compressed air for our living rooms. However, the Cornell team’s work here is impressive in that it shows that believable haptic feedback can be achieved with relatively cheap components through a device that connects simply to your pre-existing VR controller.
Unlocking our sense of touch in VR will take time but breakthroughs like this will help us get there quicker. Now if only they had a prototype to let me taste the gun I was shooting. Then we’d really be in business.
As the rise in virtual reality (VR) has happened, so has the opportunity to develop new methods for increasing immersion. From wind machines and smell pods to moving chairs, many things have been tried, but one of the most popular has been stimulating the sense of touch with haptic feedback. A team at Cornell University are aiming to advance the science of haptic technology for a better, more immersive experience.
Current haptic technology is limited in that it mostly uses vibration to provide basic feedback, similar to the vibration units found in controllers like PlayStation’s Dual Shock. As such, it is unable to express a range of textures and sensations to the user. New technology out of Cornell University’s Organic Robotics Lab, called Omnipulse, aims to change that.
The current version of the Omnipulse is a flexible rubber sheet that uses a pneumatic system to provide tactile feedback. Because it is thin rubber, it can easily be added to several existing controllers, like the Oculus Touch controllers, or even to controller gloves and haptic suits.
Though the Omnipulse is still in very early prototype stages, demonstrations have shown that it is capable of replicating sensations such as hitting an object with a hammer, the recoil of a gun, punching things and shooting a harmless water gun. The use of pneumatics means that the Omnipulse can replicate solid objects and soft, squishy textures.
It isn’t currently known if the creators of Omnipulse are aiming to have the technology integrated into existing controllers, and therefore provide their technology to companies like Oculus, HTC and Sony, or if they are aiming to create their own Omnipulse-equipped series of peripherals.
VRFocus will bring you further news on Omnipulse and other emerging haptic technologies.
Omnipulse is a new haptic technology out of Cornell’s Organic Robotics Lab which uses an array of embedded pneumatic actuators to create haptic feedback that feels quite ‘organic’ compared to the more ‘mechanical’ feel of many other haptic technologies out there. With the ability to form the flexible Omnipulse skin into arbitrary shapes, the technology could be integrated into VR controllers, gloves, or potentially even haptic VR suits.
Showing off their technology at GTC 2017 this week, the Organic Robotics Lab has been collaborating with NVIDIA to create compelling haptic feedback with a version of the Omnipulse skin which was adapted to a Vive controller. Running inside of Nvidia’s VR Funhouse demo, the lab showed the haptic system being used to convey sensations of gun recoil, hitting a hammer against objects, punching objects, and shooting a squirt gun.
The prototype haptic skin is shaped to conform to the controller and simply slides over top of the existing structure. From there it’s attached to a tether which, at this stage, contains one pneumatic tube per pulsating pocket (currently 12), though the creators tell me there are a number of ways to simplify the tether. The tether runs to a compressor which pressurizes air for use in inflating the various actuators; compressed gas like CO2 could also be used for a system that wouldn’t need to rely on a powered compressor.
The sleeve itself feels like a piece of thick rubber, with a consistency similar to your own skin; combined with the roundness of the inflating pockets, the whole ordeal feels quite a bit more squishy and organic than many other haptics technologies we’ve used for VR. When you see it active on the controller when it isn’t in anyone’s hand, squirming and shaking the controller at times, it’s actually a little creepy how it seems… alive.
But that doesn’t mean it can only provide organic-feeling feedback. Actually the creators say it’s capable of applying a hearty 15 PSI against your hand (provided you keep a firm grip), which means it can push quite hard against your hand. I was surprised to find that the response time was fast enough to create a compelling feeling of the kick of a gun in my hand when I tried the demo.
When it comes to haptics, pneumatics are not new to the scene; we’ve seen them used in gun peripherals, haptic vests, and more in years past. What’s interesting about Omnipulse is the ability to integrate many pneumatic actuators within a tight space, and to control them all independently from one another. This means more ‘haptic resolution’, and the ability to create more advanced haptic effects.
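To give a sense of what that extra 'haptic resolution' enables, here is an illustrative Python sketch of driving a 12-pocket array (the tube count mentioned above) so that a pulse of pressure travels across the skin, an effect a single rumble motor cannot produce. The 12-pocket count and 15 PSI ceiling come from the article; the wave rendering itself is an invented example, not Cornell's control code.

```python
import math

NUM_POCKETS = 12   # one pneumatic tube per pocket in the prototype
MAX_PSI = 15.0     # peak pressure the skin can apply against the hand

def traveling_pulse(t, speed=8.0, width=2.0):
    """Return a per-pocket pressure frame (in PSI) at time t seconds.

    A Gaussian bump of pressure moves along the pocket array at `speed`
    pockets per second, wrapping around at the end of the array.
    """
    center = (t * speed) % NUM_POCKETS
    frame = []
    for i in range(NUM_POCKETS):
        # circular distance from the pulse centre to pocket i
        d = min(abs(i - center), NUM_POCKETS - abs(i - center))
        frame.append(MAX_PSI * math.exp(-((d / width) ** 2)))
    return frame

frame = traveling_pulse(0.0)
print([round(p, 1) for p in frame])
```

Because each pocket gets its own pressure value per frame, spatial effects like a sweep, a squeeze, or a localized impact fall out of the same loop just by changing the per-pocket function.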
The creators of Omnipulse tell me that this is a very early prototype, and what I saw and felt was just a preliminary set of haptic effects in one potential form factor. There’s still lots of exploration to do with regard to figuring out how to pulsate the actuator array in ways that create the most compelling haptic sensations that feel like a good analog for what the user is experiencing in VR. And further out, the company plans to experiment with different form factors, saying that the skins can be molded in arbitrary shapes, with the possibility to be made into gloves and even haptic suits.
Starting as a project within Bristol University, Ultrahaptics has come a long way. The British firm specialises in creating haptic feedback without the need for the user to wear any bulky suits, gloves or other add-ons by using ultrasound technology. Now the firm are poised to develop even further with a £17.9 million (GBP) cash injection.
Ultrahaptics has received funding from Dolby Family Ventures, Cornes and the IP Group, who are all confident in the ability of the firm to further develop the technology that allows users of virtual reality (VR) to ‘feel’ virtual objects. Touchless interfaces are already being looked into, with Ultrahaptics having worked on concepts for vehicles, industrial controls and medical devices.
Speaking of the latest funding round, Mark Reilly, Head of Technology at one of the investors, the IP Group, said: “We have supported Ultrahaptics from the very beginning and have been impressed with its substantial growth and ambition. I am particularly excited about the prospect of the company bringing its technology to virtual reality where it has the potential to be truly disruptive, and where the market pull has been significant.”
A representative of another investor, Makoto Seki, Executive Director, Cornes, was in agreement, saying: “We are excited to have our partnership further strengthened with Ultrahaptics. Since we agreed a distribution agreement with the company the market pull for this technology has been clear to us and it was an obvious choice for investment.”
Ultrahaptics CEO, Steve Cliffe added: “We are a global business and the range of investors now reflects this more than ever. The Dolby family fund managers bring expertise in key markets, Cornes reflect and support the growth of our distribution network in Asian markets, and we are of course thrilled to continue our existing relationship with both IP Group and Woodford Investment.”
VRFocus will bring you further news on investment in VR companies and projects as it becomes available.
Deepening the immersion of virtual reality is something many creators are working on, from finding a natural form of locomotion to delivering realistic physical feedback from these virtual worlds. The latter has been tackled in various forms, but a common factor among most of the options is a high price tag. The VRgluv, currently undergoing a Kickstarter campaign, is an attempt to bring affordable haptic feedback to HTC Vive and Oculus Rift.
Despite the team behind VRgluv touting it as an affordable haptic device, the glove doesn’t look like it will skimp on features. It is said to offer complete tracking for every individual finger, with realistic force feedback for each as well. On top of all that, they aim to make it lightweight, comfortable, and rechargeable, all while somehow being wireless as well.
For the “We Got Funded” pricing tier the project’s creators promise a pair of VRgluvs for $369 (limited to 400 pairs). The “Kickstarter” special offers a pair at $399, described as a “20% reduction off the retail price”. If this glove operates as they say with minimal lag, this price sounds like quite a bargain. It may take a little more time to bring the price into a fully consumer-friendly zone, but it isn’t far off from a price that would allow it to be paired with the Rift and Vive in a bundle without scaring off early adopters.
The VRgluv team says they are already getting developer kits out to creators. Experiences such as Drunken Bar Fight and Abode are listed as compatible content and, considering the campaign has surpassed its $100,000 goal with most of the month to go, that list should grow quickly. Stay tuned to UploadVR for more updates in the future.