Exclusive: Summoning & Superpowers – Designing VR Interactions at a Distance

Manipulating objects with bare hands lets us leverage a lifetime of physical experience, minimizing the learning curve for users. But there are times when virtual objects will be farther away than arm’s reach, beyond the user’s range of direct manipulation. As part of its interactive design sprints, Leap Motion, creators of the hand-tracking peripheral of the same name, prototyped three ways of effectively interacting with distant objects in VR.

Guest Article by Barrett Fox & Martin Schubert

Barrett is the Lead VR Interactive Engineer for Leap Motion. Through a mix of prototyping, tool and workflow building, and a user-driven feedback loop, Barrett has been pushing, prodding, lunging, and poking at the boundaries of computer interaction.

Martin is Lead Virtual Reality Designer and Evangelist for Leap Motion. He has created multiple experiences such as Weightless, Geometric, and Mirrors, and is currently exploring how to make the virtual feel more tangible.

Barrett and Martin are part of the elite Leap Motion team presenting substantive work in VR/AR UX in innovative and engaging ways.

Experiment #1: Animated Summoning

The first experiment looked at creating an efficient way to select a single static distant object, then summon it directly into the user’s hand. After inspecting or interacting with it, the object can be dismissed, sending it back to its original position. The use case here would be something like selecting and summoning an object from a shelf, then having it return automatically—useful for gaming, data visualization, and educational simulations.

This approach involves four distinct stages of interaction: selection, summoning, holding/interacting, and returning.

1. Selection

One of the pitfalls that many VR developers fall into is thinking of hands as analogous to controllers, and designing interactions that way. Selecting an object at a distance is a pointing task and well suited to raycasting. However, holding a finger or even a whole hand steady in midair to point accurately at distant objects is quite difficult, especially if a trigger action needs to be introduced.

To increase accuracy, we used a head/headset position as a reference transform, added an offset to approximate a shoulder position, and then projected a ray from the shoulder through the palm position and out toward a target (veteran developers will recognize this as the experimental approach first tried with the UI Input Module). This allows for a much more stable projective raycast.
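
As a concrete illustration of this stabilized raycast, here is a minimal sketch in Python with NumPy. The shoulder offset value and function names are assumptions for illustration, not Leap Motion’s actual implementation.

```python
import numpy as np

# Assumed local offset from the head to an approximate right shoulder (metres).
# The exact values are illustrative guesses.
SHOULDER_OFFSET_LOCAL = np.array([0.18, -0.13, 0.0])

def shoulder_raycast(head_pos, head_rotation, palm_pos):
    """Project a ray from an approximated shoulder through the palm.

    head_pos:      (3,) world-space headset position
    head_rotation: (3, 3) world-space headset rotation matrix
    palm_pos:      (3,) world-space palm position
    Returns (origin, direction) of the stabilized selection ray.
    """
    shoulder = np.asarray(head_pos) + np.asarray(head_rotation) @ SHOULDER_OFFSET_LOCAL
    direction = np.asarray(palm_pos) - shoulder
    direction = direction / np.linalg.norm(direction)
    return shoulder, direction

# Example: head at standing height looking straight ahead, palm out in front.
origin, direction = shoulder_raycast(
    head_pos=[0.0, 1.7, 0.0],
    head_rotation=np.eye(3),
    palm_pos=[0.3, 1.5, -0.4],
)
print(origin, direction)
```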

In addition to the stabilization, larger proxy colliders were added to the distant objects, resulting in larger targets that are easier to hit. The team added some logic to the larger proxy colliders so that if the targeting raycast hits a distant object’s proxy collider, the line renderer is bent to end at that object’s center point. The result is a kind of snapping of the line renderer between zones around each target object, which again makes them much easier to select accurately.
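
A sketch of that proxy-collider snap targeting follows, again as an illustration rather than the shipped logic; the proxy radius is an assumed value and the ray direction is expected to be normalized.

```python
import numpy as np

def pick_target(ray_origin, ray_dir, target_centers, proxy_radius=0.5):
    """Return the center of the object whose enlarged proxy sphere the
    stabilized ray passes through, preferring the one nearest the ray;
    returns None if nothing is hit. The line renderer can then be bent
    to end at the returned center point.

    target_centers: list of (3,) world-space object centers.
    ray_dir is assumed to be normalized.
    """
    best, best_dist = None, np.inf
    for center in target_centers:
        to_center = np.asarray(center) - np.asarray(ray_origin)
        along = np.dot(to_center, ray_dir)            # distance along the ray
        if along < 0:
            continue                                   # target is behind the user
        perp = np.linalg.norm(to_center - along * np.asarray(ray_dir))
        if perp < proxy_radius and perp < best_dist:
            best, best_dist = center, perp
    return best
```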

After deciding how selection would work, the next step was to determine when ‘selection mode’ should be active, since once the object was brought within reach, users would want to switch out of selection mode and go back to regular direct manipulation.

Since shooting a ray out of one’s hand to target something out of reach is quite an abstract interaction, the team thought about related physical metaphors or biases that could anchor this gesture. When a child wants something out of their immediate vicinity, their natural instinct is to reach out for it, extending their open hands with outstretched fingers.

Image courtesy Picture By Mom

This action was used as a basis for activating the selection mode: When the hand is outstretched beyond a certain distance from the head, and the fingers are extended, we begin raycasting for potential selection targets.
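
In rough Python terms, the activation check might look like the sketch below; the distance and extension thresholds are illustrative guesses, not the tuned values.

```python
import numpy as np

def selection_mode_active(head_pos, palm_pos, finger_extension,
                          reach_threshold=0.45, extension_threshold=0.8):
    """Enter 'selection mode' when the hand is pushed out beyond a set
    distance from the head and the fingers are mostly extended.

    finger_extension: per-finger extension amounts in [0, 1].
    """
    reach = np.linalg.norm(np.asarray(palm_pos) - np.asarray(head_pos))
    fingers_open = float(np.mean(finger_extension)) > extension_threshold
    return reach > reach_threshold and fingers_open
```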

To complete the selection interaction, a confirmation action was needed—something to mark that the hovered object is the one we want to select. Therefore, curling the fingers into a grab pose while hovering an object will select it. As the fingers curl, the hovered object and the highlight circle around it scale down slightly, mimicking a squeeze. Once fully curled, the object pops back to its original scale and the highlight circle changes color to signal a confirmed selection.
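
A minimal sketch of that squeeze-and-confirm feedback, with an assumed squeeze amount:

```python
def selection_feedback(curl, hovering, base_scale=1.0, squeeze_amount=0.15):
    """Drive the confirmation feedback from finger curl in [0, 1].

    Returns (display scale for the hovered object, selection confirmed).
    The hovered object shrinks slightly as the fingers curl, then pops
    back to full scale once the curl completes.
    """
    if not hovering:
        return base_scale, False
    if curl >= 1.0:                       # fully curled: confirm selection
        return base_scale, True
    return base_scale * (1.0 - squeeze_amount * curl), False
```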

2. Summoning

To summon the selected object into direct manipulation range, we referred to real world gestures. A common action to bring something closer begins with a flat palm facing upwards followed by curling the fingers quickly.

At the end of the selection action, the arm is extended, palm facing away toward the distant object, with fingers curled into a grasp pose. We defined heuristics for the summon action as first checking that the palm is (within a range) facing upward. Once that’s happened, we check the curl of the fingers, using how far they’re curled to drive the animation of the object along a path toward the hand. When the fingers are fully curled the object will have animated all the way into the hand and becomes grasped.
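
As a rough sketch of those heuristics (the palm-up threshold and linear easing are illustrative assumptions):

```python
import numpy as np

def summon_step(palm_normal, world_up, curl, anchor_pos, hand_pos):
    """One frame of the summoning heuristic: once the palm faces roughly
    upward, the degree of finger curl drives the object along a path from
    its anchor toward the hand.

    Returns (object position, grasped flag), or None while not summoning.
    """
    if np.dot(palm_normal, world_up) <= 0.6:          # palm not facing up yet
        return None
    t = float(np.clip(curl, 0.0, 1.0))                # curl drives progress
    pos = (1.0 - t) * np.asarray(anchor_pos) + t * np.asarray(hand_pos)
    return pos, t >= 1.0                              # fully curled => grasped
```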

During the testing phase we found that after selecting an object—with arm extended, palm facing toward the distant object, and fingers curled into a grasp pose—many users simply flicked their wrists and turned their closed hand toward themselves, as if yanking the object closer. Given our heuristics for summoning (palm facing up, then degree of finger curl driving animation), this action actually summoned the object all the way into the user’s hand immediately.

This single-motion action to select and summon was more efficient than the two discrete motions, though the latter offered more control. Since our heuristics were flexible enough to allow both approaches, we left them unchanged and let users choose how they wanted to interact.

3. Holding and Interacting

Once the object arrives in hand, all of the extra summoning-specific logic deactivates. It can be passed from hand to hand, placed in the world, and interacted with. As long as the object remains within arm’s reach of the user, it’s not selectable for summoning.

4. Returning

You’re done with this thing—now what? If the object is grabbed and held out at arm’s length (beyond a set radius from head position) a line renderer appears showing the path the object will take to return to its start position. If the object is released while this path is visible, the object automatically animates back to its anchor position.
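
A minimal sketch of that return logic, with an assumed radius:

```python
import numpy as np

def returning_state(object_pos, head_pos, is_held, just_released,
                    return_radius=0.6):
    """Decide when to preview and when to trigger the return animation.

    The return path is shown while the object is held out beyond
    `return_radius` from the head; releasing it there sends it back
    to its anchor position.
    """
    beyond_reach = np.linalg.norm(
        np.asarray(object_pos) - np.asarray(head_pos)) > return_radius
    show_return_path = is_held and beyond_reach
    start_return_animation = just_released and beyond_reach
    return show_return_path, start_return_animation
```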

Overall, this execution felt accurate and low effort. It easily enables the simplest version of summoning: selecting, summoning, and returning a single static object from an anchor position. However, it doesn’t feel very physical, relying heavily on gestures, with objects animating along predetermined paths between two defined positions.

For this reason it might be best used for summoning non-physical objects like UI, or in an application where the user is seated with limited physical mobility where accurate point-to-point summoning would be preferred.

Continued on Page 2: Telekinetic Powers »


Qualcomm’s Snapdragon 845 to Improve AR/VR Rendering & Reduce Power Consumption “by 30%”

Qualcomm introduced its new Snapdragon 845 mobile processor at the company’s tech summit in Hawaii this week, which Qualcomm says improves mobile AR/VR headset (branded as ‘XR’, or ‘extended reality’) performance by up to 30 percent in comparison to its predecessor, the Snapdragon 835.

According to a company press release, Qualcomm’s Snapdragon 845 system-on-chip (SoC) will house the equally new Adreno 630 mobile GPU that aims to make entertainment, education and social interaction “more immersive and intuitive.”

The company says their new camera processing architecture and Adreno 630 GPU will help Snapdragon 845 deliver “up to 30 percent power reduction for video capture, games and XR applications compared to the previous generation,” and up to 30% improved graphics/video rendering.

The Snapdragon 845 also boasts the ability to provide room-scale 6 degrees-of-freedom (6 DoF) tracking with simultaneous localization and mapping (SLAM), and also includes the possibility for both 6 DoF hand-tracking and 6 DoF controller support. The company says it will support VR/AR displays up to 2K per eye at 120Hz.

Image courtesy Qualcomm

The new SoC, which is no doubt destined to find its way into the next generation of flagship smartphones and dedicated standalone AR/VR headsets, will feature what the company calls “Adreno foveation,” which enables a number of processing power-saving methods including tile rendering, eye tracking, and multiview rendering.

As Qualcomm’s third-generation AI mobile platform, Snapdragon 845 is also said to provide a 3x improvement in overall AI performance over the prior-generation SoC, something that aims to make voice interaction easier. The benefits to both augmented and virtual reality are clear, as virtual text entry usually requires you to ‘hunt and peck’ with either motion controllers or gaze-based reticles, an input style that is oftentimes tedious and imprecise.

The company says the Snapdragon 845 also improves voice-driven AI with improved “always-on keyword detection and ultra-low-power voice processing,” making it so users can interact with their devices using their voice “all day.”

Check out the full specs of Qualcomm’s new Snapdragon 845 here.


1,350 Student VR Education Study Measures Attitudes, Interest in VR Learning & Building

Foundry10, a Seattle-based philanthropic educational research organization, wanted to explore what happens when you bring VR into the classroom. Following a pilot project started in 2015, Foundry10 has now put VR in the hands of 40 schools and community centers around the world, and measured students’ attitudes toward the technology.

The data from Foundry10’s initiative is available in two free reports, which begin to lay a foundation of data about how VR might be most useful in the classroom:

The majority of the schools in the study were in the United States and Canada. Headsets were mostly distributed through Foundry10 contacting teachers who had expressed interest in bringing VR to their classrooms. A total of 1,351 students from 6th to 12th grade took part, the majority of them in the 7th and 8th grades. As for gender, half the students were male, a third were female, and a sixth did not specify. Most of the students had not tried VR before trying it in school or in a community center.

Image courtesy Foundry10

To gather data, student surveys were administered both before and after the VR experiences. The surveyed students were grouped into two categories: VR consumers, those who viewed or engaged with VR content; and VR creators, those who consumed as well as created and/or modified existing content as part of their learning. VR content creation was offered in classes such as Advanced Computer Science, Game and App Development, and Fundamentals of Digital and Visual Arts. Some students were using VR to create artistic work, but they were not counted as VR content creators since they were not coding. Content creators also tended to be in higher grade levels (the majority in 12th grade), and the majority were male.

Image courtesy Foundry10

Of the students surveyed (regardless of class content), the majority were interested in both consumption and creation of VR content. There was a small shift toward consumption only when comparing the post-surveys to the pre-surveys, but a majority still wanted to do both. As for subject matter, students mostly wanted to experience content in concrete subjects such as history or science education.

Initially students were unsure of what could be seen or done in VR, but there were distinct shifts between the before and after surveys. There were positive shifts in categories such as trying new things and historic experiences, but negative shifts in emotions. Students felt that they could learn about places through VR. A teacher offered an anecdote about their students in a rural classroom experiencing a virtual subway ride. This was very impactful on that group of students because the majority of them had never seen anything like a subway except in images and video.

Image courtesy Foundry10

Additional questions were recorded in the survey, such as what causes breaks in immersion when it comes to VR content (the answer will most likely not surprise you). There was also data presented on discomfort experienced by students in VR, which was not overly common, but still something teachers and creators of educational VR content will need to consider in the future.

SEE ALSO
How VR, AR, & AI Can Change Education Forever – Part 1, Today's Problems

There was also evidence that, in schools where the VR program did not have the support of the school’s administration and IT services, the technology was draining on teachers, regardless of the teacher’s previous interest. Students also had comments on the hardware, in particular that the cables were troublesome, so a wireless or otherwise less cumbersome setup would be preferred.

Overall, students had confidence that VR education content developers were knowledgeable about the content they were creating. They also understood that the technology has a long way to go, but felt the simulations they experienced were realistic. Students also felt that VR was helpful to people and should be more accessible. At the time the reports were published, 30 schools were enrolled for the 2017-2018 program. For more in-depth analyses of these findings, visit foundry10; links to both the in-depth study and the summary are available there. This was a very intriguing study, and we look forward to seeing their results in the future.


‘Haptic Shape Illusion’ Allows VR Controllers to Simulate Feel of Physically Larger Objects

In a study led by Eisuke Fujinawa at the University of Tokyo, a team of students created a procedure for designing compact VR controllers that feel physically larger. Exploring the concept of ‘haptic shape illusion’, the controllers have data-driven, precise mass properties, aiming to simulate the same feeling in the hand as the larger objects on which they are based.

Simulating the feel of real objects is a fundamental haptics challenge in VR. Today’s general-purpose motion controllers for VR work best when the virtual object is reasonably similar in size and weight to the controller itself; very large or heavy virtual objects immediately seem unrealistic when picked up.

One solution is to use specific controllers for a given application—for instance attaching a tracker to a real baseball bat; in a hands-on with one such solution, Road to VR’s Ben Lang described the significance of gripping a real bat and how that influenced his swing compared to a lightweight controller. But swinging a controller the size and weight of a baseball bat around your living room probably isn’t the best idea.

As shown in the video below, researchers from the University of Tokyo attempted to create much smaller objects that retain the same perceived size. The team designed an automated system which takes the original weight and size of an object and then creates a more compact but similar feeling output through precise mass arrangement.

The paper refers to several ecological psychology studies into how humans perceive the size of an object through touch alone, supporting the idea that perceived length and width are strongly related to the moment of inertia about the hand position.

The team concentrated its efforts on this haptic shape perception, collecting data from participants wielding different sample controllers in VR to determine their perceived sizes, having never seen the controllers in reality. This data allowed the creation of a ‘shape perception model’, which optimises the design of a large object within smaller size constraints, outputting CAD data for fabrication.

The object is deformed to fit the size constraints, holes are cut out, and weights are placed at specific points to maintain the original moment of inertia.
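
To illustrate the principle behind that weight placement, here is a small Python sketch showing how a compact arrangement of point masses can match a larger object’s moment of inertia about the grip; the numbers are toy values, not taken from the paper.

```python
import numpy as np

def moment_of_inertia(masses, positions, grip_point, axis):
    """Moment of inertia of point masses about an axis through the grip.

    masses:     (N,) masses in kg
    positions:  (N, 3) positions in metres
    grip_point: (3,) hand position the axis passes through
    axis:       (3,) rotation axis direction
    """
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    r = np.asarray(positions, dtype=float) - np.asarray(grip_point, dtype=float)
    # Perpendicular distance of each point mass from the rotation axis.
    d_perp = np.linalg.norm(r - np.outer(r @ axis, axis), axis=1)
    return float(np.sum(np.asarray(masses) * d_perp ** 2))

# Toy example: a 1 kg mass 0.6 m from the grip (a long "bat") versus a compact
# controller packing 2.25 kg only 0.4 m out -- both give 0.36 kg*m^2, so they
# should feel similar in length when swung, even though one is much shorter.
grip, axis = [0.0, 0.0, 0.0], [0.0, 0.0, 1.0]
print(moment_of_inertia([1.0], [[0.6, 0.0, 0.0]], grip, axis))
print(moment_of_inertia([2.25], [[0.4, 0.0, 0.0]], grip, axis))
```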

Image courtesy Fujinawa et al.

The team had VR developers in mind, as this approach could offer a potential benefit in demonstrating a product with a more realistic controller. The CAD data output means that smaller, safer prototype controllers that give the impression of wielding larger objects can be created quickly with a laser cutter or 3D printer.

SEE ALSO
Exploring Methods for Conveying Object Weight in Virtual Reality

Further information and the full paper are available on Fujinawa’s website. The research is being presented at this week’s VRST 2017, the 23rd ACM Symposium on Virtual Reality Software and Technology held in Gothenburg, Sweden.


Oculus Research Devises High-accuracy Low-cost Stylus for Writing & Drawing in VR

Using a single camera and a 3D-printed dodecahedron decorated with binary square markers, the so-called ‘DodecaPen’ achieves submillimeter-accurate 6DoF tracking of a passive stylus. Led by National Taiwan University PhD student Po-Chen Wu during his internship at Oculus Research, the work presents a low-cost tracking and input solution with many potential applications in virtual and augmented reality.

As shown in the video below, the ‘passive stylus’ in this case is an actual ball-point pen, allowing for a quick visual demonstration of the impressive accuracy of the tracking system, with the real and digitised drawings being almost indistinguishable from each other. Although the project focused on stylus applications, the video also highlights how the dodecahedron could be attached to other objects for virtual tracking, such as a physical keyboard.

According to the paper published on the NTU’s website, the DodecaPen’s absolute accuracy of 0.4mm is comparable to an advanced OptiTrack motion capture setup using 10 cameras—a combined resolution of 17 megapixels. The DodecaPen system achieves the same accuracy with a single, off-the-shelf, 1.3MP camera. The research clearly shows that marker corner alignment alone is not enough for robust tracking; the team instead used a combination of techniques detailed in the paper, including Approximate Pose Estimation and Dense Pose Refinement. The 12-sided shape was chosen to retain constant tracking quality, so that “at least two planes are visible in most cases.”
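
For context, the basic building block of this kind of marker-based tracking is a perspective-n-point (PnP) pose estimate from the detected marker corners. The sketch below uses OpenCV’s generic solver purely as an illustration; it is not the DodecaPen pipeline, which fuses multiple faces and adds dense photometric refinement on top of this step.

```python
import numpy as np
import cv2

def estimate_marker_pose(corners_2d, marker_size, camera_matrix, dist_coeffs):
    """Approximate the pose of one square fiducial marker from its four
    detected corner pixels using OpenCV's PnP solver.

    corners_2d: (4, 2) pixel coordinates, ordered to match corners_3d below.
    marker_size: side length of the printed marker in metres.
    Returns (rvec, tvec), the marker pose in camera coordinates, or None.
    """
    half = marker_size / 2.0
    corners_3d = np.array([[-half,  half, 0.0],
                           [ half,  half, 0.0],
                           [ half, -half, 0.0],
                           [-half, -half, 0.0]], dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(corners_3d,
                                  np.asarray(corners_2d, dtype=np.float64),
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```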

The key advantage of the DodecaPen is its simple construction and minimal electronics, making it particularly suited to 2D and 3D drawing. However, the team recognises its limitations and drawbacks, being prone to occlusion due to the single camera, and relying on a reasonable amount of ambient light to maintain accuracy. The paper also notes that their computer vision algorithm is ‘slow’ compared to 300-800Hz motion capture systems, as well as Lumitrack, another low-cost tracking technology. The DodecaPen’s solution is limited by the fiducial marker recognition software and the motion blur generated by the camera, resulting in unwanted latency.

The conclusion states that the system could “easily be augmented with buttons for discrete input and an inertial measurement unit to reduce latency and increase throughput.” A more complex stylus could also offer a better simulation of real drawing, including pressure sensitivity and tip tilt, which would make it better suited to emulate a pencil or brush rather than a pen. The problems of occlusion and limited low-light performance could be improved with multiple cameras with higher quality image sensors and lenses, but each upgrade would add to the system’s cost and complexity.

SEE ALSO
Oculus Research Reveals "Groundbreaking" Focal Surface Display

A made-for-VR stylus like the DodecaPen could prove to be a versatile tool for traditional productivity tasks in VR, which are largely limited today by a missing solution for fast and easy text input.


Kite & Lightning Uses iPhone X in Experiment to Create ‘Cheap & Fast’ VR Facial Mocap

Packing a 7-megapixel front-facing depth camera, Apple’s iPhone X can do some pretty impressive things with its facial recognition capabilities. While unlocking your phone and embodying an AR poop emoji are fun features, the developers at Kite & Lightning just published a video of an interesting experiment that aims to use the iPhone X as a “cheap and fast” facial motion capture camera for VR game development.

Created by Kite & Lightning dev Cory Strassburger, the video uses one of the studio’s Bebylon Battle Royale characters (work in progress) to demonstrate just how robust a capture the iPhone X can provide. Flexing through several facial movements, replete with hammy New York(ish) accent, Strassburger shows off some silly sneers and a few cheeky smiles that really show the potential for capturing expressive facial movement.

While still a quick first test, Strassburger says that even though the iPhone X can drive a character’s blendshapes at 60fps while tracking 52 motion groups across the face, “there’s a bit more to be done before I hit the quality ceiling in regards to the captured data.”
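
For readers curious what ‘driving blendshapes’ means in practice, the sketch below shows the standard linear blendshape evaluation that per-frame capture coefficients (such as the 52 values the iPhone X reports) can feed. It is a generic illustration, not Kite & Lightning’s actual character pipeline.

```python
import numpy as np

def apply_blendshapes(neutral_vertices, blendshape_deltas, coefficients):
    """Linear blendshape evaluation for one frame of facial capture.

    neutral_vertices:  (V, 3) rest-pose mesh
    blendshape_deltas: (B, V, 3) per-shape vertex offsets from the rest pose
    coefficients:      (B,) capture weights in [0, 1], one per blendshape
    Returns the deformed (V, 3) mesh for this frame.
    """
    weights = np.clip(np.asarray(coefficients, dtype=float), 0.0, 1.0)
    return neutral_vertices + np.tensordot(weights, blendshape_deltas, axes=1)

# Toy example: two blendshapes on a three-vertex mesh.
neutral = np.zeros((3, 3))
deltas = np.random.default_rng(0).normal(size=(2, 3, 3)) * 0.01
print(apply_blendshapes(neutral, deltas, [0.8, 0.2]).shape)   # (3, 3)
```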

On the docket before the iPhone X’s TrueDepth camera can be leveraged as a VR game development workhorse, Strassburger says, are getting the eyes properly tracked, figuring out why blinking causes the whole head to jitter, re-sculpting some of the blendshapes from the Beby rig to better suit this setup, visually tuning the characters, and adding features to record the data.

Image courtesy Kite & Lightning

To top it off, Strassburger is thinking about creating a harness system to mount the iPhone into a mocap helmet so both face and body (with the help of a mocap suit) can be recorded simultaneously.

Bebylon Battle Royale, a comedic multiplayer arena brawler, is due out sometime in 2018 on Rift and Vive via Steam. We can’t wait to see what the devs have come up with, as the game already promises to be one of the silliest games in VR.


New Procedural Speech Animation From Disney Research Could Make for More Realistic VR Avatars

A new paper authored by researchers from Disney Research and several universities describes a new approach to procedural speech animation based on deep learning. The system samples audio recordings of human speech and uses them to automatically generate matching mouth animation. The method has applications ranging from increased efficiency in animation pipelines to making social VR interactions more convincing by animating avatars’ speech in real time.

Researchers from Disney Research, University of East Anglia, California Institute of Technology, and Carnegie Mellon University, have authored a paper titled A Deep Learning Approach for Generalized Speech Animation. The paper describes a system which has been trained with a ‘deep learning / neural network’ approach, using eight hours of reference footage (2,543 sentences) from a single speaker to teach the system the shape the mouth should make during various units of speech (called phonemes) and combinations thereof.

Below: The face on the right is the reference footage. The left face is overlaid with a mouth generated from the system based only on the audio input, after training with the video.

The trained system can then be used to analyze audio from any speaker and automatically generate the corresponding mouth shapes, which can then be applied to a face model for automated speech animation. The researchers say the system is speaker-independent and can “approximate other languages.”

We introduce a simple and effective deep learning approach to automatically generate natural looking speech animation that synchronizes to input speech. Our approach uses a sliding window predictor that learns arbitrary nonlinear mappings from phoneme label input sequences to mouth movements in a way that accurately captures natural motion and visual coarticulation effects. Our deep learning approach enjoys several attractive properties: it runs in real-time, requires minimal parameter tuning, generalizes well to novel input speech sequences, is easily edited to create stylized and emotional speech, and is compatible with existing animation retargeting approaches.
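
As a rough illustration of the sliding-window idea (not the paper’s actual architecture, window length, or feature sizes), the toy Python sketch below turns a phoneme label sequence into windowed features that a trained regressor could map to per-frame mouth parameters.

```python
import numpy as np

def sliding_windows(phoneme_ids, num_phonemes, window=11):
    """Build overlapping windows of one-hot phoneme labels, the input form
    a sliding-window predictor maps to mouth-shape parameters per frame.
    """
    half = window // 2
    padded = np.pad(np.asarray(phoneme_ids), (half, half), mode='edge')
    one_hot = np.eye(num_phonemes)[padded]                 # (T + window - 1, P)
    return np.stack([one_hot[t:t + window].ravel()         # flatten each window
                     for t in range(len(phoneme_ids))])    # (T, window * P)

# A trained regressor (here just a random linear map, for shape checking only)
# would turn each window into, say, 30 mouth-pose parameters per frame.
rng = np.random.default_rng(0)
phonemes = rng.integers(0, 40, size=100)        # 100 frames of phoneme labels
X = sliding_windows(phonemes, num_phonemes=40)  # (100, 440)
W = rng.normal(size=(X.shape[1], 30)) * 0.01
mouth_params = X @ W                            # (100, 30) animation curves
print(mouth_params.shape)
```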

Creating speech animation which matches an audio recording for a CGI character is typically done by hand by a skilled animator. And while this system falls short of the sort of high fidelity speech animation you’d expect from major CGI productions, it could certainly be used as an automated first-pass in such productions or used to add passable speech animation in places where it might otherwise be impractical, such as NPC dialogue in a large RPG, or for low budget projects that would benefit from speech animation but don’t have the means to hire an animator (instructional/training videos, academic projects, etc).

In the case of VR, the system could be used to make social VR avatars more realistic by animating the avatar’s mouth in real-time as the user speaks. True mouth tracking (optical or otherwise) would be the most accurate method for animating an avatar’s speech, but a procedural speech animation system like this one could be a practical stopgap if / until mouth tracking hardware becomes widespread.

SEE ALSO
Disney Research Shows How VR Can Be Used to Study Human Perception

Some social VR apps are already using various systems for animating mouths; Oculus also provides a lip sync plugin for Unity which aims to animate avatar mouths based on audio input. However, this new system based on deep learning appears to provide significantly higher detail and accuracy in speech animation than other approaches we’ve seen thus far.


Using Google Earth VR to Study Awe – Towards a Virtual Overview Effect

One of the unique affordances of virtual reality is its power to convey the vastness of scale, which can invoke feelings of awe. Denise Quesnel is a graduate student at Simon Fraser University’s iSpace Lab, and she has been studying the process of invoking awe using Google Earth VR. She was inspired by Frank White’s work on The Overview Effect, which documented the worldview transformations of many astronauts after they observed the vastness of the Earth from the perspective of space.

LISTEN TO THE VOICES OF VR PODCAST

Quesnel wants to better understand the Overview Effect phenomenon, and whether or not it’s possible to use immersive VR to induce it. Anecdotally, I think that it’s certainly possible as I reported my own experience of having a virtual overview effect in my interview with Google Earth VR engineers. Quesnel won the best 3DUI poster award at the IEEE VR conference for her study Awestruck: Natural Interaction with Virtual Reality on Eliciting Awe.

I had a chance to catch up with Quesnel at the IEEE VR conference in March where she shared her research into awe, how it can be quantified by verbal expressions, chills, or goosebumps, and how she sees awe as a catalyst for the transformative potential of virtual reality.

Here’s a short video summarizing Quesnel’s research into using Google Earth VR to study the induction of awe:

Here’s Quesnel’s poster on Awe summarizing results:


Support Voices of VR

Music: Fatality & Summer Trip


Using Experiential Design to Expand VR Presence Theory

Dustin Chertoff has pulled experiential design insights from the advertising world to come up with a more holistic theory of Presence in virtual reality. In 2008, he was in graduate school and was dissatisfied with the major theories of VR Presence. His gaming experience showed him how much of his feeling of immersion was related to the content of the game. He wrote an essay published in the journal Presence where he laid out what he saw as the two major limitations of VR Presence theory at that time.

LISTEN TO THE VOICES OF VR PODCAST

“First, many models tend to focus heavily on perceptual issues while focusing less attention on other facets of virtual experiences, such as cognition and emotion,” he says. Second, “these models fail to provide an interpretable, extensible framework with which to understand and apply the theoretical principles to practical applications.”

Image: Virtual Experience Test, a virtual environment evaluation questionnaire (2010)

Chertoff finished his Ph.D. thesis titled Exploring Additional Factors of Presence in 2009, and then published his Virtual Experience Test questionnaire in the 2010 proceedings of the IEEE Virtual Reality Conference. Presence researcher Richard Skarbez first alerted me to Chertoff’s work after I asked him if he’d seen any prior research into Presence looking at the different dimensions of my elemental theory of presence, which breaks down the subjective quality of an experience of VR into different combinations of Embodied Presence, Emotional Presence, Active Presence, and Social & Mental Presence. I was encouraged to see that Chertoff had independently come to an identical framework through his survey that was designed to holistically “measure virtual environment experiences based upon the five dimensions of experiential design: sensory, affective, active, relational, and cognitive.”

I had a chance to catch up with Chertoff in San Francisco during GDC this year, and we each concluded that our experiential design frameworks are functionally equivalent. We talked about FearlessVR, the company he co-founded, where they design VR exposure therapy experiences for different phobias, but we spent the bulk of our discussion exploring how he came to look to the field of experiential design to inform Presence theory. We also compare and contrast how each of our experiential design frameworks creates tradeoffs and amplifies different qualitative dimensions of an experience.

Chertoff’s Ph.D. thesis “Exploring Additional Factors of Presence” has a great survey of Presence research (see his summary graphic down below), as well as inspiration he’s drawn from flow theory and video game design frameworks like GameFlow. He summarizes Forlizzi and Battarbee’s definition of “experience” by saying it’s “something that can be articulated, named, and schematized within a person’s memory. Experiences of this type have beginnings and ends, but anticipation of, and reflection on, the experience may take place before or after the event.”

The idea of experiential design is that deeply immersive experiences form stronger memories, and that it’s easier for us to store new information when we’ve had related experiences. Chertoff says, “Experiential designs are successful when they encourage people to create meaningful emotional and social connections—personal narratives that involve episodic memories, and positive associations with the artifacts of that experience.”

Chertoff cites Joseph Pine & James Gilmore’s book The Experience Economy as his source of inspiration for the five dimensions of his experiential design framework that make for a memorable and immersive experience, which map nicely over to my elemental theory of Presence.

Image: Elemental Theory of Presence

Embodied Presence corresponds to Chertoff’s sensory dimension, which he says “includes all sensory input—visual, aural, haptic, and so forth—as well as perception of those stimuli.” I’d also include different virtual body representations, as well as the virtual environmental components which help transport you into another world and trick your perceptual system into believing that you’re in another world. Most Presence researchers focus on embodied Presence, as virtual reality uniquely stimulates the sensorimotor aspects of perception beyond what any other communications medium can do.

Emotional Presence corresponds to the affective dimension which “refers to a participant’s emotional state. For simulation, this dimension can be linked to the degree to which a person’s emotions in the simulated environment accurately mimic his or her emotional state in the same real-world situation.” For me, this includes the storytelling and narrative components as well as music, colors, symbols, and deeper myths that all engage the emotions. Emotional Presence can also come into the experience through social engagement with other people.

Mental Presence corresponds to the cognitive dimension, which “encompasses all mental engagement with an experience, such as anticipating outcomes and solving mysteries. For simulation, much of the cognitive dimension can be interpreted as task engagement, [which] is related to the intrinsic motivation, meaningfulness, and continuity (actions yielding expected responses) of an activity.” Game designers often talk about mental friction, and if there isn’t something in an experience to stimulate your mind then you’ll risk getting bored.

Social Presence corresponds to the relational dimension, which is “composed of the social aspects of an experience. For simulation, this can be operationalized as co-experience — creating and reinforcing meaning through collaborative experiences… Experiences that are created or reinforced socially are usually stronger than individual experiences and they further enable individuals to develop personal and memorable narratives.” I combine mental & social Presence into the air element, because they both deal with the abstractions of thought and communication. But it also emphasizes the fact that not every experience has to have a social dimension to it, and that solitary experiences can be just as immersive and engaging to your mind.

Finally, Active Presence corresponds to the active dimension, which Chertoff described in the interview as the degree to which you can express your agency and physical engagement by taking action within the experience. He also sees it as a form of subjective engagement, asking, “Does he or she incorporate the experience into his or her personal narrative; does he or she form meaningful associations via the experience?”

Chertoff assigns a few things to the active dimension that I would categorize elsewhere. For example, I think empathy is more of a function of emotional engagement, and that connection to the environment, avatars, and identity is more related to embodied Presence. I tend to think of active Presence primarily as an expression of agency and will that includes exploration, curiosity, creativity, physical or virtual locomotion, and any type of interactivity.

There are qualitative dimensions of an experience that are sometimes hard to clearly schematize into a single category, and I believe that all of these different dimensions are happening at the same time all the time. But I do see that there are tradeoffs between active Presence and emotional Presence that I explore in much more detail in this introductory essay about elemental theory of Presence.

Continued on Page 2 »


Mel Slater’s Theory of VR Presence vs an Elemental Theory of Presence

VR Presence researcher Mel Slater is fascinated by what makes the medium of virtual reality unique and different from other communications mediums. He says that VR activates our sensorimotor contingencies in a way that fools our brain into believing that we’re transported into another world, and that what is happening is real. He breaks this into two primary illusions: the Place Illusion answers the question “Am I there?” and the Plausibility Illusion answers the question “Is this happening?” These two illusions happen inside of your mind and are very difficult to study, but Slater has developed an experimental research protocol that draws inspiration from color theory research, which considers the combination of the objective spectral distribution as well as an individual’s subjective perception of color.

After seeing well over a thousand VR experiences, I started to cultivate my own ideas about an Elemental Theory of Presence that describes different qualities of Embodied Presence, Social & Mental Presence, Active Presence, and Emotional Presence. Slater says that these different qualities of experience are more related to the content of the experience, and that they’re not unique to virtual reality. You can be just as emotionally engaged with a movie or a book as you are with a VR experience, so for him, looking at how content contributes to his conceptualization of presence isn’t an interesting research question for figuring out what’s unique about VR.

My elemental theory of presence is more of an elemental framework for experiential design that isn’t unique to VR, but it has been useful in helping to understand the component parts of a VR experience. Slater is primarily interested in researching the objective and measurable dimensions of a VR experience that contribute to the place illusion and plausibility illusion, which he sees as the primary factors in the subjective feeling of presence. I personally don’t believe that you can disregard the role of content in how it helps cultivate a feeling of presence, but I acknowledge that it’s a difficult thing to study in a controlled academic research environment. There is no universal formula for what combination of content and experience ingredients will help you achieve a sense of presence, whether you are in VR or not. There are limits to predicting the degree to which a piece of content will resonate with someone, and the successful approaches are usually market-based solutions that use big data collections of behavior to drive the content recommendation algorithms at Amazon, Netflix, Facebook, and Google.

My interview with Slater explores the boundaries of his theory of presence as I try to understand it through the lens of my own elemental framework of experiential design. He cites NASA’s Stephen Ellis, who once said that any good theory of presence will provide a series of tradeoffs that allows you to make choices among features within the same “equivalence class.” Slater’s approach to presence focuses on the objective features of the VR system, while my elemental theory of presence focuses on the qualitative aspects of the specific content. Slater says that it’s a completely valid approach, but that it’s just completely different from what he’s interested in looking at. This conversation clarified for me the differences between objectively controllable VR hardware & software variables and the specific content of a VR experience. I think that both contribute different things to the subjective feeling of presence, and experiential designers will have to take into account both the objective features of the VR hardware and software as well as the specifics of the content in order to create the qualities of presence they’re striving for.

LISTEN TO THE VOICES OF VR PODCAST


Support Voices of VR

Music: Fatality & Summer Trip

Feature image courtesy Digital Catapult (Source)
