Meta’s BuilderBot Concept Offers Voice-Driven Virtual World Building

This week Meta (formerly Facebook) held its “Meta AI: Inside the Lab” event which showcased how the company is developing artificial intelligence to tackle a range of use cases, one of which is building its metaverse vision. CEO Mark Zuckerberg revealed a concept called BuilderBot, designed to create worlds simply using your voice.

Meta - BuilderBot

In the demo video, which begins in a plain, gridded environment, Zuckerberg starts by saying: “let’s go to a park.” This creates a fairly plain park-like environment with a green base and a couple of trees. He then changes his mind, saying “actually, let’s go to the beach,” prompting BuilderBot to create a far more attractive desert island scene.

He then follows this up by populating the scene with clouds and another island, prompting the AI with “let’s add an island over there,” pointing into the distance. A colleague in the same virtual space then continues to add further adornments, from stationary objects like a picnic bench and a stereo to ambient sounds like seagulls and waves.

If you didn’t know, Zuckerberg loves hydrofoils, so he added one of those in there too. While the process looked fairly smooth, no further details were provided regarding BuilderBot’s abilities, just that the tool would help “fuel creativity in the metaverse.”
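Meta hasn’t explained how BuilderBot works under the hood, so the following is purely a sketch of the interaction pattern the demo shows: spoken commands that either swap the whole environment or spawn an object where the speaker points. The types and the trivial keyword matching below are invented stand-ins for real speech recognition and intent models.

```typescript
// Invented types and keyword matching; a real system would use speech-to-text
// and an intent model rather than startsWith().
type Vec3 = { x: number; y: number; z: number };
interface SceneObject { kind: string; position: Vec3 }

class VoiceSceneBuilder {
  objects: SceneObject[] = [];

  handleUtterance(utterance: string, pointedAt: Vec3): void {
    const text = utterance.toLowerCase().trim();
    if (text.startsWith("let's go to")) {
      // "let's go to the beach" replaces the entire environment
      this.objects = [{ kind: text.slice("let's go to".length).trim(), position: { x: 0, y: 0, z: 0 } }];
    } else if (text.startsWith("let's add")) {
      // "let's add an island over there" spawns an object where the speaker points
      const kind = text.slice("let's add".length).replace("over there", "").trim();
      this.objects.push({ kind, position: pointedAt });
    }
  }
}

const builder = new VoiceSceneBuilder();
builder.handleUtterance("Let's go to the beach", { x: 0, y: 0, z: 0 });
builder.handleUtterance("Let's add an island over there", { x: 40, y: 0, z: 120 });
console.log(builder.objects); // "the beach" at the origin, "an island" at the pointed spot
```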

Anything.World
Image credit: Anything.World

BuilderBot may come in useful for creators in Horizon Worlds, for example, who may benefit from the ability to quickly iterate using their voice. However, the aesthetics were very plain and simple, so the tool might not scale as well for developers looking to build more complex scenes.

It’s not the first time gmw3 has come across voice-command world-building. Anything World is a platform that not only allows you to build 3D experiences using machine learning but also lets you animate models. One up on BuilderBot.

Whether Meta releases BuilderBot to the public or it just stays as a concept remains to be seen. As further details are released, gmw3 will let you know.

Digital Surgery Designated As A Microsoft Mixed Reality Partner

Digital Surgery, a health technology company working to reshape the future of surgery, has announced its designation as an official Microsoft Mixed Reality Partner (MMRP). The partnership will give Digital Surgery the means to create mixed reality (MR) solutions to improve the delivery of surgical care as part of the MMRP. Studies show that nearly one in seven patients hospitalized for major surgical procedures are readmitted within 30 days of discharge, with financial implications for the health system and clinical implications for the patients. Digital Surgery is working to utilize this new partnership, together with its demonstrated capabilities in artificial intelligence (AI), to power a radical shift in surgical care.

Microsoft HoloLens

“The Microsoft recognition is truly an honor and sets the stage for our larger mission, which is to deliver safe surgical care for all. With our AI technology and database of digital surgical processes, we’ve trained a computer to understand surgical procedures and predict what happens next,” explains Dr. Jean Nehme, CEO of Digital Surgery, discussing the strategic importance of the partnership. “With HoloLens, we open the exciting opportunity to use the system’s integrated camera as the visual recognition system to deliver even more immersive experiences for the entire surgical team. This collaboration is a critical part of our strategy to partner with the world’s best technology firms, especially providers of breakthrough hardware, to support the delivery of safer surgery. I am excited to see what we can accomplish together.”

By being a part of the MMRP, Digital Surgery will be able to leverage its technology, knowledge and experience along with Microsoft’s HoloLens MR headset to bring its goal to reality. According to the Lancet Commission on Global Surgery, more than five billion people lack access to safe surgical care, with operative knowledge being one of the critical factors that has yet to scale globally. Digital Surgery aims to address this problem by leveraging innovative technologies and intelligent operating systems.

Digital Surgery

“We are delighted to have Digital Surgery accredited as a Mixed Reality Partner. Given the pace of technological change, it is key that customers can access partners who understand mixed reality’s potential and have the proven ability to deliver transformative solutions,” added Leila Martine, Product Director, Mixed Reality at Microsoft. “It’s great to see Microsoft HoloLens being added to Digital Surgery’s impressive content catalogue, and to help us jointly shape the future of surgery and improvements in patient outcomes.”

VRFocus will be sure to bring you all the latest on the work from Digital Surgery as it continues as part of the MMRP, so stay tuned for more.

Apprentice.io Secures Multi-Million Dollar Series A Funding

New Jersey augmented reality (AR) firm Apprentice.io is the latest immersive technology company to announce the results of a funding round, this time a Series A led by Pritzker Group Venture Capital, which also saw investment from Silverton Partners and Hemi Ventures, as well as a name common to these types of stories: The Venture Reality Fund.

The funding round saw the firm secure an additional $8 million (USD), bringing the total raised by the company so far to over $10 million.

Apprentice.io has developed Apprentice – an AR and artificial intelligence (AI) platform for batch records, tech transfer and R&D workflows as well as training procedures, of particular use to the pharmaceutical and biotech industries. Apprentice supports ARKit 2 on the iPhone XS as well as ARCore on Android devices, and already supports HoloLens, Magic Leap and other AR-focused smart glasses. Subsequent growth has seen the start-up triple in size.

“We like to say that we don’t just augment reality; we augment human ability,” said Angelo Stracquatanio, the co-founder and CEO of Apprentice.io in a statement on the investment. “AR and AI are changing the way workforces across all industries solve problems and share information, ushering in the next wave of human potential.”

A Vice President at Pritzker Group Venture Capital added of his firm’s financial involvement: “We are absolutely thrilled to be collaborating with the leader in AR and AI. Angelo and his team of experts have a clear vision for what will be the next generation of enterprise AR. It was among the many reasons we were eager to invest at this stage.”

VRFocus will have more news about investment in the AR industry very soon.

Life In 360°: Goodwood’s Change Of Pace

Hello again and welcome to another Life In 360° themed around motor racing. It wasn’t that long ago that we’d last taken to the tarmac, as we have done in Li360 on more than a few occasions. In fact, we looked once again at Formula 1’s selection of coverage from across the most recent races a few weeks back. 

Life In 360° / 360 Degree Video

My intention was not, therefore, to go back to the motorsport well, for a little while at least.

However…

At the end of last month we featured a story about the world-famous Goodwood Festival of Speed and, as part of the event, people were able to take part in an HTC Vive experience that showcased the view from Roborace’s Robocar – the first fully autonomous, fully driverless vehicle to participate in the event, navigating with a combination of ultrasonic, GPS and camera sensors as well as both LiDAR and standard radar.

Robocar captured footage of the run, which was then put into a simulator that (utilising an HTC Vive Pro) allowed attendees to feel as if they were driving the car on the exact run they’d seen done at the event.

Well, it turns out that Roborace also released the footage as a standard 360 degree video, so we’ve included it as today’s Li360 entry.

Roborace

“We are ecstatic that the team has been able to achieve this landmark run and we hope that it draws attention to the amazing advances that are being made in the automotive industry,” explained Rod Chong, Deputy CEO of Roborace, at the time. “Robocar is an ambassador for the future technologies we will see on our roads and we hope that inspirational stunts like this will change public perceptions of autonomous vehicles.”

“It is an enormous achievement for a race car to complete the very first run of the Hill using only artificial intelligence,” added the Duke of Richmond, Charles Gordon-Lennox, the Festival of Speed’s founder. “Roborace has worked incredibly hard in order to pull this off.”

You can see a behind the scenes video here and the 360 degree video below. We’ll have more Li360 next week, here on VRFocus.

Life In 360°: Music By Machine

To paraphrase a sentiment we often share here at VRFocus, and one I certainly refer back to a lot in VR vs articles, technology does not stay still – even if you want it to. What you expect of technology today it may well not deliver today, but it could just as well deliver it tomorrow – or have delivered it yesterday without you being aware of it.

It probably didn’t even leave one of those ‘Sorry We Missed You’ cards either. Rude.

Life In 360° / 360 Degree Video

Technology often goes through cycles of high and low progress. One technology that’s seemingly accelerating at the moment is the development, implementation and use of A.I. – artificial intelligence. I swear, my inbox has about as much A.I.-related news in it as virtual reality (VR), augmented reality (AR) or anything else at the moment. So I was intrigued when, hunting for something to fill today’s Li360 slot, I stumbled across a project from back in February this year called Life Support.

Life Support is a 360 degree music video that combines several technologies. VR, obviously, is included – not just because of the formatting, but because the visuals were in part created in Google Tilt Brush with some Unity thrown in. A.I. wrote the music, thanks to A.I. composer Amper and its API. And according to singer and video producer Taryn Southern, from whose YouTube channel the video comes, it also contains functional magnetic resonance imaging (fMRI) footage of her brain.

That’s quite the combo. Actually, Southern used A.I. to compose the entirety of her recent album, but as far as I can see this song was the only one to get the 360 treatment. For those interested, you can find out more about using A.I. to compose in this interview with The Verge from last year.

Other than that you can watch things unfold below.

ABI Research Provides New Report On Eight Technologies That Will Transform Manufacturing

ABI Research, a market-foresight advisory firm that provides strategic guidance on compelling transformative technologies, has released a report that outlines how technologies fit together in smart manufacturing and identifies vendor challenges and solutions within the sector. The eight areas the report covers are additive manufacturing, artificial intelligence (AI) and machine learning (ML), augmented reality (AR), blockchain, digital twins, edge intelligence, Industrial Internet of Things (IIoT) platforms, and robotics.

ABI Research logo

Within the manufacturing sector there has already been increased adoption of IIoT platforms and edge intelligence. Over the next ten years, it is expected that manufacturers will start piecing together the other new technologies, eventually leading to more dynamic factories less dependent on fixed assembly lines and immobile assets.

“Manufacturers want technologies they can implement now without disrupting their operations,” says Pierce Owen, Principal Analyst at ABI Research. “They will change the way their employees perform jobs with technology if it will make them more productive, but they have no desire to rip out their entire infrastructure to try something new. This means technologies that can leverage existing equipment and infrastructure, such as edge intelligence, have the most immediate opportunity.”

With a transition towards a lights-out factory already in motion, the major disruption will require an overhaul of workforce, IT architecture, physical facilities and equipment, and full integration of a number of new technologies including connectivity, additive manufacturing, drones, mobile collaborative robotics, IIoT platforms and AI, according to the report.

The report also notes that the above technologies have already started to converge, and that robotics provides a physical representation of this convergence: robots use AI and computer vision, and connect to IIoT platforms where their digital twins are located. This connectivity, along with AI, will increase in importance as more robots and technology join the assembly line and work alongside humans and each other.
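The report doesn’t prescribe an implementation, but the relationship it describes, a robot on the line mirrored by a digital twin on an IIoT platform, can be sketched in a few lines. Everything below (the RobotTwin fields, the in-memory map standing in for the platform) is invented for illustration.

```typescript
// Toy illustration of a digital twin kept in sync by telemetry.
// All names and fields are invented; a real IIoT platform would
// persist and version these records, not hold them in a Map.
interface RobotTwin {
  robotId: string;
  jointTemperaturesC: number[];   // streamed from the physical robot
  visionDefectCount: number;      // fed by the robot's computer-vision checks
  updatedAt: Date;
}

const platform = new Map<string, RobotTwin>(); // stand-in for the IIoT platform

function ingestTelemetry(reading: RobotTwin): void {
  platform.set(reading.robotId, reading);      // the twin mirrors the live asset
}

ingestTelemetry({
  robotId: "arm-07",
  jointTemperaturesC: [41.2, 39.8, 44.1],
  visionDefectCount: 0,
  updatedAt: new Date(),
});
console.log(platform.get("arm-07"));
```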

ABI Research’s full report, titled Smart Manufacturing Transformative Horizon, is available to read in full and is part of the company’s Smart Manufacturing research service. Back in May, ABI Research released a report predicting that AR will struggle in the brick-and-mortar retail environment.

For more on ABI Research in the future, keep reading VRFocus.

Depth-sensing, Algorithms and Retail Shopping Allow AiFi to Push the Boundaries of Interactivity

Founded by former Google and Apple engineers, AiFi is combining artificial intelligence (A.I.) with ARKit on Apple products such as iPhones and iPads. Speaking to VRFocus, co-founder and CEO Steve Gu explained how AiFi has enabled consumer products to understand detailed 3D shapes and activities, including individuals and their surroundings.

Wonderlens – AiFi

The first application Gu showcases is Wonderlens, an application for iOS devices that allows you to ‘segment yourself’ – distinguish yourself from the background – in real-time. Where conventional methods would use green screens to ‘transport’ the individual on screen somewhere else, with AiFi’s technology no laborious hours of keying out are needed to get a rough outline, and no green or blue screen is needed either. All you need is an iPhone or iPad, and the user is able to transport themselves to the top of a mountain or watch creatures swim by under the ocean, as shown in the image above. Wonderlens is available on the App Store now. Gu mentions that it will be coming to Android devices in the future as well.
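AiFi hasn’t published how Wonderlens does this, but the general technique, an ML person-segmentation mask taking the place of a green screen, can be illustrated with the open-source TensorFlow.js BodyPix model in a browser. This is a stand-in sketch rather than AiFi’s pipeline, and it assumes the canvas is sized to match the video.

```typescript
// Illustrative only: browser-based person segmentation standing in for
// AiFi's (unpublished) mobile implementation.
import "@tensorflow/tfjs";
import * as bodyPix from "@tensorflow-models/body-pix";

async function compositeFrame(
  video: HTMLVideoElement,
  canvas: HTMLCanvasElement
): Promise<void> {
  const net = await bodyPix.load();            // in practice, load once, not per frame
  const mask = await net.segmentPerson(video); // mask.data: one 0/1 value per pixel

  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);

  // Zero the alpha channel wherever the model says "not a person", so the
  // background drawn beneath (a mountaintop, the ocean) shows through.
  for (let i = 0; i < mask.data.length; i++) {
    if (mask.data[i] === 0) frame.data[i * 4 + 3] = 0;
  }
  ctx.putImageData(frame, 0, 0);
}
```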

The second application similarly uses Apple products. Holo Messenger allows users to take a video or image of themselves and send it as a hologram. Inspired by the holographic messaging system in Star Wars, Holo Messenger lets users record a video message on their phone, then applies augmented reality (AR) filters to make the image appear like the grainy, blue-tinted holograms used in the movies. You simply record an image or video of yourself, and the message is sent to other Holo Messenger users, who see the holographic version of your recording projected onto a flat surface by a Star Wars droid such as BB-8 or R2-D2.
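The grainy, blue-tinted look itself is classic post-processing. As a hedged sketch of the kind of filter involved (not AiFi’s actual one), here is a blue tint, grain and scanlines applied to a frame on a 2D canvas; the function name and constants are invented.

```typescript
// Invented canvas post-processing: grey out the frame, re-tint it blue,
// add grain, then draw scanlines. Call once per rendered video frame.
function applyHologramLook(ctx: CanvasRenderingContext2D, w: number, h: number): void {
  const clamp = (v: number) => Math.max(0, Math.min(255, v));
  const frame = ctx.getImageData(0, 0, w, h);
  const px = frame.data; // RGBA bytes, 4 per pixel
  for (let i = 0; i < px.length; i += 4) {
    const grey = 0.3 * px[i] + 0.59 * px[i + 1] + 0.11 * px[i + 2];
    const noise = (Math.random() - 0.5) * 30; // film-style grain
    px[i] = clamp(0.3 * grey + noise);        // suppress red
    px[i + 1] = clamp(0.7 * grey + noise);    // keep some green
    px[i + 2] = clamp(1.2 * grey + noise);    // boost blue for the Star Wars look
  }
  ctx.putImageData(frame, 0, 0);
  ctx.fillStyle = "rgba(0, 0, 0, 0.25)";
  for (let y = 0; y < h; y += 2) ctx.fillRect(0, y, w, 1); // fake interlacing
}
```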

“Our engineers and scientists have been developing this enabling technology to power future business applications,” said Steve Gu, CEO, AiFi. “Imagine that once even a single cell phone camera can understand the intricate motion, 3D shape, and various activities of individual people, the implication is simply tremendous and mind blowing. We could easily enable cameras to interpret your gesture, intention, body shape, motion, and activities practically in the blink of an eye.”

AiFi is not only using its technology for instant real-time AR creations on consumer-facing devices, but is also focusing on retail environments. Through AiFi’s system, shoppers will be able to come into a retail store and pick up items without having to use cash, a card, or scan barcodes. The company hopes to bring the technology to hospital settings, physical therapy and sports as well, and is looking for partners to further expand on its SDKs.

“Today we are bringing a blend of the physical world and digital world into everyone’s life with tools they can use,” Gu continues, “but as we go forward we are taking these solutions and working with great partners to solve large scale checkout free solutions for the retail space. We are very excited that we can use these technologies to improve user experience, to entertain, and to benefit humanity at large.”

To find out more, watch the video below.

Universal mCloud Completes Purchase of NGRAIN

Universal mCloud has completed its purchase of NGRAIN, a leader in artificial intelligence (AI) and 3D augmented reality (AR) software.

mCloud Universal Company Logo

The purchase will see NGRAIN’s AI and AR technology – which currently serves a number of aerospace and military applications that demand high precision and reliability – enable mCloud to offer the same grade of capabilities to maximise the performance of critical energy assets. This will see the revolutionary AI and 3D/AR technology that NGRAIN provides being worked into mCloud’s AssetCare Cloud Solution, expanding its product portfolio.

One company that currently benefits from NGRAIN’s technology is Lockheed Martin, which makes use of its Battle Damage Assessment and Repair capabilities for the maintenance, sustainment and readiness of its F-35 and F-22 stealth fighters. For mCloud, the damage assessment technology will see drones deployed to conduct high-resolution aerial surveys of wind turbines, combined with NGRAIN’s sophisticated computer vision capabilities, to semi-autonomously inspect turbine blades for damage, correlate blade condition with turbine output and provide guidance on all required repairs. The goal of all this is to allow for faster and more precise inspections of wind farms at a reduced cost, ensuring more effective workflows.

NGRAIN Company Logo

Russel McMeekin, President and CEO of mCloud, commented on the purchase by stating: “According to research studies, turbine blade damage can result in annual energy production losses of up to 25%. We will be the industry’s only provider of an AI-powered Digital Blade Inspection capability, setting new standards both in terms of how the industry will conduct inspections going forward and how asset owners will ultimately profit from the optimized performance of their assets. As we head into the second half of 2018 and lead up to 2019, NGRAIN’s AI technology, combined with our in-house wind expertise, will allow us to further optimize customers’ asset care budgets.”

Now that mCloud has completed its purchase of NGRAIN, the technology will be implemented by Q2 2018, with the product solution to be used across a large number of assets worldwide, and results expected within a short time. Dr. Barry Po, NGRAIN’s Senior Director, Product and Business Development, commented on the news by stating: “We have been working closely with the mCloud team since announcing this transaction earlier this year. We have made great progress in defining and getting ready for a very aggressive roll-out of our AI-powered damage assessment capabilities as part of the AssetCare Wind solution.”

For more stories like this in the future, keep reading VRFocus.

The VR Doctor: Gamification, Education & The Possible VR Future Of Healthcare

Regular VRFocus readers will be aware of our interest not just in the use of virtual reality (VR) as a means to entertain, but also as a tool to educate and help the human condition. To that end we have, as part of our features section, “Your Virtual Health”, which covers an array of topics relating to healthcare and the medical technology (medtech) industry as a whole. We’ve had discussions on how VR is being used to benefit mental health, how and why it affects the brain in the series VR & The Mind, and I’ve even discussed my own thoughts on an unspoken issue of VR technology, namely how it is just not suitable for those suffering from more general sickness.

Our most regular series dealing with VR’s healthcare possibilities, however, is The VR Doctor, written by Dr. Olaiya, an NHS doctor and Director of Medigage Ltd, who works with the UK’s National Health Service (NHS) on immersive VR training programmes for doctors and nurses.

Back in April, Dr. Olaiya and economics PhD fellow Nandor F. Kiraly discussed the possibilities of how VR may influence healthcare, and we’re able to bring you that discussion today, as well as a portion of Kiraly’s Creative Economies video essay in which the interview takes place.

You can read Dr. Olaiya’s opinions and see the video clip below.

How do you think Virtual Reality will influence clinical skills simulation training for healthcare?

VR runs along a continuum, and further along that continuum come haptics, motion sensing and even smell – all senses that bring us to a deeper level of immersion and realism. Currently, lots of these technologies are available to connect into the virtual reality experience – they’re just not connecting to the right markets; the tipping point hasn’t been reached, so to speak!

Right now we are talking about healthcare, so, coming back to the main question that you asked: number one, I think VR in healthcare will be very useful. The main disadvantage of manikin-based simulation is that it’s not really as customizable as it needs to be; its variables, which are very important to make the trainee adaptive, are often very fixed. So if we want to structure VR clinical skills simulation training to be as effective as possible, then each healthcare scenario the trainee is put in must be at least slightly different, otherwise it will seem artificial, like déjà vu. Another big factor is touch.

A program we have at the minute is basic life support, and as soon as people put on the headset and are in a hospital having to perform basic life support, the first thing they do is put their hands in front of them, to see if their hands are actually involved in the program. In the first version we developed at Medigage there is no hand motion sensing; however, it was still an effective learning tool when used alone or when supplementing physical manikin-based training, particularly through: 1) increasing the accuracy and detail of how much the trainee retained of how to do the procedure itself; and 2) simulating the environmental emotional pressure, or stress, of having to carry out a medical procedure alone when new to the skill.

Haptics have different levels of realism, which make the VR experience more realistic. In standard non-VR manikin simulation-based training you can touch a manikin, but it doesn’t feel at all real. A tool that most medical students in the western world have used for training is the advanced life support high-fidelity manikin; several companies manufacture these, at a cost of between £40,000 and £100,000, which increases the realism and customisation of the simulation.

Mativision VRinOR – Medical Training

As haptics in VR become more sophisticated, adaptive and realistic, we strongly believe it will bridge the gap needed to convert even the staunchest of VR sceptics – and in healthcare there are a lot of them.

Social aspects – working in a team is an aspect of healthcare which is crucially important and cannot be overlooked. A common misconception, of course, is that VR is an isolating, lonely and solitary experience. VR in fact allows us to be more connected than ever before: imagine collaborative surgery, where the operating room has a multidisciplinary team all working together simultaneously in a VR space from different countries or even continents.

These are all important factors that make the VR experience more effective as a learning tool.

Where is the technology right now?

It’s there, it just needs to be directed, and the right expertise is needed to develop it. It comes down to managing it and allowing it to be used – making it so that it is as effective as possible.

Would you agree with the following statement: “With the dropping prices of electronics and technologies, the training of practitioners via the use of VR will be better than current methods – such as doing the same training on cadavers – from a cost-effectiveness standpoint?”

In the future, I firmly, 100% believe this. It’s realising how soon that is, and when we should be investing more capital and finance into it to speed up the process – because before we invest more finance into it, we have to be on the right track. Currently there are lots of different talented people and development companies going off in different directions, but there are no standard set guidelines (of the best way to do it). Before we start pushing things to replace what is already there – cadavers, as you said – we need to come up with a gold standard. Medical education is a speciality in itself, one which is hundreds of years old; some of the great scientists had their take on it, and we are still developing it right now, so adding VR into the mix cannot be looked upon lightly.

First, we need to find the best way, and secondly push it in the right direction. Every step of the way we will need to conduct research to confirm that we are on the right track, and only then can we start financing the transition towards a more VR-based education system.

Washington Leadership Academy
To answer your question – in terms of actual objective finances – a cutting-edge advanced life support high-fidelity manikin room is around £100,000 for the full set, which normally is equipped with a two-way mirror so that the clinical skills tutor can observe what the team are doing; and then you also need to account for the clinical skills tutor: their training, the effectiveness of their teaching methods, and their salary. With this simple example, we can already see that the costs are mounting up… But how do we replace that with virtual reality? First of all, virtual reality comes down to customizability, and implementing a level of artificial intelligence that would allow the virtual reality system to know factors such as what level of training the participant is at, and to adapt the course to their needs. Then there is the actual hardware itself; anything that can be done with manikins now can be adapted for virtual reality.

Financially, using virtual reality will allow the most elaborate simulations to be affordable by substituting the most expensive manikin-based simulation hardware and other important elements with virtual assets instead. The most expensive part of VR development for VR sim training in healthcare will be the touch feedback/haptics; of course, manual dexterity and muscle memory development is a crucial part of simulation training, and integrating this is the current factor that steeply increases the cost of VR sim training. Different medical skills require different amounts of emphasis on manual dexterity and muscle memory development for the trainee, so how accurate the touch feedback needs to be is the current decider of whether VR sim training will be more cost-effective than high-fidelity manikin training.

You made an interesting point, Raphael: as with the training of pilots, no amount of simulation is comparable to doing the real thing, as simulations do not factor in the human element. In regards to medical studies, VR training would have to go hand in hand with hands-on training, as well as work-based training; though would you say that VR could prepare students for the real thing?

Yes – the individualism of the human experience and the nature of human beings mean that nothing can really replace dealing with a human being, but of course we are comparing virtual reality to the gold standard (manikin-based sim training) of clinical training without actually dealing with a patient – because when you are dealing with a patient, you risk doing harm to said patient. For example, when you are practicing taking blood from a patient, the first time you do it the success rate is going to be much lower compared to the tenth time you do it, but in those ten times you may have failed numerous times, and you may have harmed the patients. Of course that is just a simple example, but take chest drains: you can seriously injure a couple of people if you perform that wrong. You can practice on the manikins, but what if we can increase the efficiency of that, as there is still that cross-over period. We have a motto at Medigage: “We are bridging the gap between clinical skills, classroom-based training and real-life Grade A clinical performance on human beings.” It’s about making that gap as seamless as possible, so when a person moves on from all these technologically advanced training techniques onto a real human being, they should be as prepared as possible and the chance of failure is minimized. The human factor is a constant element which we can’t replace, but it’s all about bridging the gap through training.

Would you say the field of medical VR can be considered a creative economy in its current state, as there is no gold standard? Everyone is trying to tackle the issue of creating something that will ultimately benefit the students, or something very specific to specialisms like surgery or anaesthesia. And do you believe there will be a turning point, once said gold standard is achieved, where the creative process will be more boxed in and standardised?

One of the exciting things is that there is huge potential for the creative community and creatives themselves to be involved in designing the different methods of teaching through virtual reality, for medical education and other domains as well. There is going to be a big input from the creative economy and we really need to capitalise on that, and to see all the different options. Eventually there will be some design techniques, user journey profiles and ways of developing programs that are more effective for most people, which will set the precedent; but at the same time, since VR is so dynamic, it will always allow other design fundamentals and techniques to have a place – as opposed to structured medical education lectures, which have a lot less scope to be different.

In a lecture, you have the lecturer themselves – and an important variable is how dynamically they can connect with the audience of learners – and then there is the lecture content itself, and the use of multimedia and applied interactivity. Let’s go back 200 years, when the lecture was a blackboard, very one-dimensional, and the structure was very rigid. As a learner you really had to have the right learning style to benefit: the people whose personal style of learning suited it were in a great position to learn, but the people it didn’t suit most likely fell off the academic ladder. Coming back to the question you asked, there will be a huge opportunity for the creative industry to get involved, and in my opinion that should be pushed and encouraged; the medical education sector should allow that to happen, invest financial resources and really be open-minded. And whilst some of the most effective ways will be more successful, become popular and take off, there will still be opportunities for other innovative ways to be the optimal learning style for some students who learn differently. It’s going to be like nothing before, because virtual reality is so dynamic.

I have an HTC Vive set up in my living room, and it was quite hilarious watching an actual surgeon (my father) play Surgeon Simulator. This made me think: couldn’t we have an approved first-aider training program that is like that game, but which would teach lifesaving skills to the public? From what I understand, first aid training costs precious resources like time and money; couldn’t that be conveyed through a video game at a fraction of the cost?

What you’re talking about is turning medical simulation training into a game – essentially gamification. It’s a wonderful technique that has been capitalised upon by the business and management sectors to take advantage of our inclination as humans to want to track progress through whatever we are doing, to have rewards, to have feedback, and to know how far we are from finishing what we are on. These are just some factors of gamification, and applying that to VR for medical education – for example first aid, as you mentioned – is a fantastic opportunity, and that’s been one of the main aspects we wanted to involve in our programs at Medigage. Let’s talk about specifics and an example: the emergency first aid at work course is a three-day course, each day takes six hours, so 18 hours with an assessment at the end. There is extra studying at home involved, so let’s count 25 hours in all to be really competent as a first aider at work. That does not involve advanced life support. So that course itself can be very expensive, over a thousand pounds; it involves a qualified trainer, and the people who do the course have to stop what they are doing in their own professional lives. There is a lot of cost involved, and the people who are doing the course and their employers often do it reluctantly, like a chore, as it’s not really an enjoyable thing – often seen as just ticking a box. So if there was a way to make it more engaging, to gamify it, make it enjoyable, I think virtual reality has a massive opportunity there. And gamification is really the word we are looking for here; ways of gamifying are very creative and there are ways to do it that haven’t even been exposed yet. As such, gamification can never be forgotten when it comes to using virtual reality for medical education. And with Medigage, our first product at medigage.co.uk is basic life support – it’s gamified.
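As a minimal sketch of the gamification loop Dr. Olaiya describes (progress tracking, rewards, immediate feedback), consider the following. The step names and scoring threshold are invented for illustration and are not taken from Medigage’s product.

```typescript
// Invented example of a gamified course: progress tracking, a reward badge,
// and immediate feedback after each step. Not Medigage's actual design.
interface Step { name: string; done: boolean; score: number }

class GamifiedCourse {
  private steps: Step[];
  badges: string[] = [];

  constructor(names: string[]) {
    this.steps = names.map(name => ({ name, done: false, score: 0 }));
  }

  // Record a completed step, hand out a reward, and return feedback text.
  complete(name: string, score: number): string {
    const step = this.steps.find(s => s.name === name);
    if (!step) return `Unknown step: ${name}`;
    step.done = true;
    step.score = score;
    if (score >= 90) this.badges.push(`${name} mastery`);
    return `${name}: ${score}/100, course ${this.progress()}% complete`;
  }

  progress(): number {
    return Math.round(100 * this.steps.filter(s => s.done).length / this.steps.length);
  }
}

// Hypothetical basic life support steps:
const bls = new GamifiedCourse(["check response", "call for help", "chest compressions"]);
console.log(bls.complete("chest compressions", 92)); // "chest compressions: 92/100, course 33% complete"
```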

What would you say are the risks in training medical professionals in virtual reality – if any?

100% there are risks with everything, and the first risk is what we have already talked about: going down the wrong track and spending lots of resources and finance on developing something that is not as effective as it could be. With virtual reality, a lot of people who invest and develop see the financial incentive to commercialise VR, and of course that is a great, massive opportunity; but at the same time you have to be cautious, methodical and systematic, and you can’t jump with both feet into the first idea that comes to mind, because it could jeopardise the wider perspective on VR for medical education.

The second risk is that the technology is still developing. Relatively speaking, compared to what we had 20 years ago it looks advanced, but if you understand VR and how advanced it can be, you’ll see how primitive our technology is in terms of where we could go – primitive as in it’s not adapted to our biology as humans. For example, there are our eyes, and how we process information; whether it will be healthy for our brains, we are not sure. We don’t know what health risks there are in the long-term effects of multiple hours of looking at a screen which is literally centimetres away from your eye. Currently the research available on the negative health effects of VR – concentrating on the eyes – is quite positive, in the sense that your eyes become used to how far the screen is from your eyes, and only people with pre-existing eye conditions would be adversely affected; there is no real evidence to suggest harm currently, but it needs to be further researched.

The social aspect is another risk that needs to be mentioned: how is VR medical education going to change the social aspect of medicine and healthcare? Healthcare itself relies a lot on teamwork, and it helps when people like each other and are active team members. With VR we need to ensure the social element, putting emphasis on working with each other for better patient care from the very beginning. There is no real technological limit stopping VR from becoming a social experience; there is scope for multiple VR users to be in the same environment and for it to be as social as sitting in the same room together. But the risk is that this is not focused on, so it needs to be given priority from the start – to make it a social experience as opposed to an isolating one, like the image of a gamer in their parent’s basement whose life is all about that video game. We don’t want that.

Let’s hypothetically say that VR medical training replaces the 5 years of university studies, and all your training comes from VR – is there a risk of desensitization?

There is a risk, a bonus, and an opportunity. All doctors risk becoming desensitized to the original reason and motivation why they chose their profession. Every day they risk being subjected to very sad, emotive situations, such as breaking sad news like a cancer diagnosis. And after a while we see that people who have to deal with that a lot sometimes develop an emotional struggle to really express the emotions and empathy needed for patient care, whether to the patient’s family or the patient themselves. This has been an issue since the start of medicine and patient care. In virtual reality, if someone completes training – for 5 years, through a VR course – then what risk does that have in terms of desensitizing them? It takes decades to desensitize a doctor to the level where they are no longer sympathetic or empathic, so I don’t think that is a risk that applies to basic medical training. The opportunity of using VR is that, because it’s still different from real life, anyone can see that difference and understand that it is for training purposes – it’s not the real thing, but it is bridging that gap. They can take the advantages of the training and limit the disadvantages; by recognizing that it is not real, their empathy can be preserved. We don’t know – this is just my experience talking, as a healthcare professional, a doctor and a virtual reality developer. You mentioned virtual reality courses replacing the primary medical degree, the 5-year degree, and I don’t think VR could ever replace the degree. From the way I am looking at it, there needs to be a balance between technology-assisted learning and real-life experience – all those elements which you can’t simulate in a virtual environment. But the fidelity and realism in virtual reality is a spectrum, and there are dynamic properties of this technology that we haven’t realised yet, due to it seeming impossible at this point in time. One such element could be a degree of realism which would allow for a complete replacement of the 5-year degree via virtual reality.

In one of your publications you stated that one of the biggest hurdles when it comes to virtual reality and medicine is the lack of a robust artificial intelligence. Could you explain a bit more?

Artificial intelligence is a spectrum; it is a broad, wide spectrum with a longitudinal quality. What I mean by that is AI can be as simple as a calculator, if we look at its basic fundamentals. And at its most sophisticated level it’s something which understands a situation – which is abstract – and can come up with an answer, like a human, or even completely unlike a human. It will have to come up with an answer, though, which it can then use to learn more information; it can learn by itself from its environment and allow itself to adapt, thus allowing unlimited potential. This is still in a fairly primitive stage of development, especially its adaptiveness. The current most powerful artificially intelligent machine – from my understanding – is IBM Watson, a very powerful and incredible machine being used for really fantastic feats, particularly in the healthcare field, the business field and big data. With healthcare particularly, a specific project in the US is to do with oncology (the study and treatment of cancer), looking at big data patterns in gene coding, understanding what sort of genes give rise to cancer, and trying to detect that, deal with it and treat it most effectively. Despite how amazing the AI is here, it is very specialised and not adaptive at all: it is an AI with a super narrow and specific range.

My opinion is that artificial intelligence being used in VR simulation training will allow every clinical training environment to be different, and to adapt, respond and react to the trainee in a natural way that is best for learning effectiveness. It will allow the trainee to be thinking on their feet, without the disadvantages we currently face with manikins: if we are taking a blood sample from a manikin, you know exactly where the vein is, because you have done it a hundred times, and you can see the puncture sites that you and your colleagues have made. It’s not customizable. VR with artificial intelligence could present a different patient, with a different voice, a different sized arm, a different coloured arm, allowing randomisation. And that’s just customisation of visuals; pushing that further, one could have branching levels of customisation, whereby selecting a set of options the artificial intelligence will then customize and adapt to your learning situation, making it as challenging as needed in order to be as effective as possible for your development. That is a lot more complex.
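As a toy sketch of that randomisation, here is how every run of a venepuncture scenario might draw a different virtual patient, so the trainee can’t rely on having memorised one manikin. All attribute names and ranges are invented; the adaptive AI layer Dr. Olaiya describes would sit on top of draws like these.

```typescript
// Invented attributes for a randomised virtual patient; a real system would
// draw from clinically validated ranges.
interface VirtualPatient {
  skinTone: string;           // Fitzpatrick scale, I-VI
  armCircumferenceCm: number;
  veinDepthMm: number;        // deeper veins are harder to palpate and puncture
  voice: string;
}

function pick<T>(xs: T[]): T {
  return xs[Math.floor(Math.random() * xs.length)];
}

function randomPatient(): VirtualPatient {
  return {
    skinTone: pick(["I", "II", "III", "IV", "V", "VI"]),
    armCircumferenceCm: 22 + Math.random() * 18,
    veinDepthMm: 2 + Math.random() * 6,
    voice: pick(["calm", "anxious", "talkative", "drowsy"]),
  };
}

// An adaptive AI layer could bias these draws toward whatever the trainee
// finds hardest; plain randomisation is just the starting point.
console.log(randomPatient());
```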

For that level of complexity, you would need an enormous collection of data on patients. Wouldn’t that incur a level of risk regarding patient confidentiality, and make the AI a possible target for hacking?

In order to cover all variables, a lot of memory is needed, and this should be fine because the rate of expansion of available digital memory is skyrocketing; I don’t think there is an issue with memory. What underpins this all is big data: masses and masses of petabytes are needed to assist artificial intelligence and virtual reality to keep on developing. At the core of it, what is needed is communication between what’s actually happening in real life – such as real-life clinical statistics on what actually happens with patients – and the AI engine of the simulation; this would allow the AI to learn live as more and more data is collected.

Regarding the security aspect of patient confidentiality being breached: it is a fundamental concept within the medical education domain to take patient data – very confidential information about patients – and use it for teaching purposes. As long as everyone understands that this data needs to be kept within the domain of medical education, that only particular people who are learning to become better clinicians have access to it, and that patient identification information is completely anonymised, the security risk is minimal.

The VR Doctor will return again soon to VRFocus with another discussion. Interested in healthcare? Why not check out some of the other articles in the series?


Omnichannel Realities

In my last VRFocus article, from September, I stressed the importance of Virtual Reality (VR) applications focusing on usefulness and superseding reality. I then went on to highlight how content should be delivered via accessible (cheap and easy-to-use) hardware, such as VR headsets connected to media boxes (e.g., Netflix), to reach mass-market adoption.

Well, cases of such VR hardware are coming into play this year: Microsoft announced its VR OEM Windows “Mixed Reality” headset plans last year (previously called “Holographic”) and just provided more details at the Game Developers Conference in San Francisco, beginning with key partnerships with Dell, Acer and Lenovo, as well as launching its developer kits. These easy-to-set-up and more affordable devices have the potential to become a home accessory for the mass market. (I am not covering the gaming or B2B industries, nor their customer base or high-spec VR and Augmented Reality (AR) hardware, in this article, and therefore am not referring to those.)

The headsets don’t require external trackers, instead using their on-board sensors (alongside other technologies) to provide inside-out tracking with six degrees of freedom. Although they are still tethered – for the moment at least, as wireless technology has been changing a lot in the past few months, with cheaper solutions being offered by many different providers – their setup seems to be as simple as plug and play.


Microsoft announcing their VR headsets in 2016

Microsoft Acer Headset

Although full specifications are yet to be announced, at a price point of $300 one would hope they will be sold as bundles with new laptops and desktop computers. Indeed, as they are OEM devices, built and distributed by computer manufacturing partners such as HP, Dell, Lenovo and more, it would make sense for Dell (as an example) to sell a PC-with-VR-headset bundle this upcoming Christmas season. The manufacturers could also lower margins so much that, when someone is shopping for a computer, the additional cost to add a VR headset would be even lower.

Also, one can expect GPU/CPU requirements and parts costs to go down, especially for the screens and chipsets; accessibility will therefore dramatically increase in future versions, both in terms of cost and of lower-spec PC requirements.

Example of Dell online purchase bundle options – VR coming soon too?

As part of the Microsoft developer community, the Windows “Mixed Reality” (or “Holographic”) developer program also offers the promise of attracting an enormous pool of Microsoft developers to build new apps, as well as extensions and browser toolkits.

Perhaps the most important aspect here is the potential for the Windows “Mixed Reality” VR headsets to become a home accessory sitting next to one’s printer. Imagine you are browsing a website and there is a VR button to visualise the items in your basket at their real size, or to watch a preview of a potential holiday; you would just click, put the headset on, experience the products and services, then remove it – or continue to finish your purchase in VR mode!
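To make the idea concrete, here is what that VR button could look like to a web developer, written against today’s WebXR API (the successor to the WebVR drafts that were current when this piece was published); the product-viewing logic itself is left as a stub.

```typescript
// Hypothetical "View in VR" handler for an e-commerce page. The entry point
// uses the real WebXR API; everything product-specific is a stub.
async function onViewInVRClicked(): Promise<void> {
  const xr = (navigator as any).xr; // typed properly via @types/webxr in a real project
  if (!xr || !(await xr.isSessionSupported("immersive-vr"))) {
    console.log("No VR headset available, fall back to the flat page.");
    return;
  }
  const session = await xr.requestSession("immersive-vr");
  // ...render the basket items at real-world scale inside the session...
  session.addEventListener("end", () => {
    console.log("Headset removed, resume the normal checkout flow.");
  });
}
```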

The headset could become a tool which improves the customer journey, especially in terms of e-commerce – this is where there is truly mass-market adoption potential. I therefore don’t believe these VR headsets will be purchased by the mass market as gaming or entertainment devices (unlike headsets twinned with media boxes or gaming consoles – though the Windows “Mixed Reality” headsets will also be compatible with the Xbox console), but instead as a tool used sporadically to improve the internet browsing experience, or through some VR app experiences.

The browsing experience will also be seamless, with VR call-to-action buttons integrated within existing browsers – such as Internet Explorer. We’ve already seen Google integrating VR functionality into its Chrome browser and, therefore, it seems logical that Microsoft’s browser will also gain these VR integrations. Given that there is a whole VR/Augmented Reality (AR) Windows Mixed Reality integrated development platform, we will be sure to see more and more AR, Mixed Reality (MR) and VR features integrated within the Windows Operating System and its core applications, such as Explorer, Apps, Office, Skype, LinkedIn and more.

At this stage, VR becomes part of the e-commerce customer journey which, amazingly, extends into an AR/MR/VR/Artificial Intelligence (AI)/Internet of Things (IoT)/Wearables circle:

A customer uses a mobile or wearable Augmented/Mixed Reality device to gain more information in a shop about a product or location, or just special offers. To do that, AI computer vision and IoT provide more information about the product whilst also learning about the customer’s behaviour. While doing this, an updated 3D point cloud of the shop and the product is scanned. All this information can then be used in a Virtual Reality version of the shop by another customer who is shopping fully or partly in VR (i.e., browser mode).

Of course, more detailed scanning and updates will also be carried out by specific staff (and drones) in shops, with the VR versions customised and adapted using machine learning to deliver a personalised experience.

On the AR and MR side, which company is best positioned to provide point cloud data, and then a VR rendering and version of a location such as a business? The answer is a company that had AR products in testing long before the current wave of AR and VR buzz.

Google

It seems logical that Google will be (or already is?) a central provider of those AR point clouds through existing data, but also of AR wearables and mobile devices, such as the hybrid Daydream/Tango phones like the Asus ZenFone AR. It’s also logical that it will release a successor to the Glass product for the mass market, since it arguably has the most experience in that area (alongside companies like ODG, a very experienced AR glasses maker).

ASUS ZenFone AR with Google Daydream integration

Also, bear in mind that there is already a VR version of Google Earth on Steam for the HTC Vive, which shows that having Google Maps VR is not far-fetched at all, and that all AR scanning would update outdoor and indoor datasets. Google also has relationships with businesses that are mapped and on the internet through its SEO; this provides a great advantage for existing information and relationships to be integrated within the AR/MR information systems, as well as VR e-commerce experiences.


This illustrates how close and connected AR/MR and VR have become, as well as how intrinsic AI, IoT & wearables technologies are to the whole system.

From a hardware perspective, it also shows that Microsoft Mixed Reality VR OEM headsets are not the only potential mass market devices; it seems logical that future Google Daydream VR headsets and their wearable AR products will be fully integrated with Google Tango phones as a hybrid (beyond the current two modes in one phone).

Therefore, Google and Microsoft will have strong multi-platform AR/VR capabilities that harness their operating systems, technologies and ecosystems.

Most importantly, this means the omnichannel strategy for brands and marketers is more streamlined and effective if they ensure they harness those AR/MR/AI/IoT/wearables interactions and prepare accordingly.

Consequently, instead of calling this a ‘circle’ or a ‘system’, it seems to be more a strategic AR/MR/VR vision relying on a product/service’s ‘omni-channel presence’ or ‘omnichannel realities’.

To prepare for their presence on those various technologies, brands and agencies must prepare for seamless integrations of AR and VR features within their marketing and e-commerce channels. It starts, for example, with adopting 3D scanning technologies to make products available for visualisation, and with integrating those assets into narrated/interactive marketing experiences. However, these are not simple integrations, as they require different skillsets and product management systems.

Also, by making products available in 3D, their design is out in the open, which is no different from stocking a product physically in a shop for a customer to observe. However, the most conservative brands may be slower to accept this, although they will eventually be required to adapt.

These are exciting times to prepare the grounds for augmented customer journeys, in which the focus really comes back to usefulness and personalisation.

I don’t believe in simply providing more information to visitors/customers in augmented shops or on e-commerce websites with VR functionality; what is needed instead is a more seamless and customised information delivery system, providing much higher satisfaction and conversion rates.