Verdict Analysis: Why the Jury Awarded ZeniMax $500 Million in Oculus Lawsuit

Following the news of a $500 million plaintiff award in the ZeniMax v. Oculus lawsuit, a detailed breakdown of the verdict reveals the jury’s specific findings and who is responsible for paying the damages.


Guest Article by Matt Hooper & Brian Sommer, IME Law

Matt is a Partner at IME Law, where he represents clients in the immersive media, entertainment and technology industries. He represents several of the leading VR content creation and software companies in the United States. He also serves as Co-Chair of the VRARA Entertainment Committee. You can follow Matt on Twitter @mhooplaw.

Brian is an interactive media and entertainment attorney at IME Law, where he focuses his practice on the intersection of traditional entertainment and immersive media. He also serves as Co-Chair of the VRARA Licensing Committee. You can follow Brian on Twitter @arvrlaw.


Breaking Down the Jury Verdict in ZeniMax v. Oculus

After only a few days of deliberating, the Oculus jury returned a verdict in favor of Plaintiffs ZeniMax and id Software totaling $500 million. ZeniMax was awarded money damages against Oculus, founder Palmer Luckey, and former CEO Brendan Iribe, while CTO John Carmack and parent company Facebook escaped monetary liability (although Oculus is a subsidiary of Facebook).

Before the jurors started deliberating, Judge Ed Kinkeade provided them with nearly 90 pages of jury instructions. The instructions read like a cross between a legal primer and a questionnaire, detailing the laws the jury must apply and including spaces for the jury to fill in its award decisions (each count had to be decided unanimously by the nine jurors). Since a jury is a cross-section of people with different levels of education and experience, the judge wrote the instructions in an easily digestible format, careful not to distort important legal nuances. The Oculus jury was composed of six women and three men from a wide array of backgrounds.

The following summarizes each count in the jury instructions and how the jury ruled:

Common Law Misappropriation of Trade Secrets

Defendants: Oculus, Facebook, Luckey, Iribe and Carmack
Jury Award (Defendants’ Liability to Plaintiffs): $0

The plaintiffs alleged that the defendants misappropriated their trade secrets. The court explained that a trade secret is defined as “a formula, pattern, device or compilation of information used in a business which gives its owner an opportunity to obtain an advantage over his competitors who do not know or use it.” Plaintiffs asserted that their trade secrets included the following technologies: (1) distortion correction technology; (2) chromatic aberration correction method; (3) gravity orientation and sensor drift correction technology; (4) head and neck modeling technology; (5) HMD view bypass technology; (6) predictive tracking technology; and (7) time warping methodology.

John Carmack had been an employee of id Software (owned by plaintiff ZeniMax). He took an early interest in the Rift while at id Software and left to join Oculus as CTO in 2013.

To prevail on their claim for misappropriation of trade secrets, the plaintiffs needed to prove that: (1) a trade secret existed; (2) the defendants acquired the trade secret through breach of a confidential relationship or by improper means; (3) the defendants made commercial use of the trade secret in their business without authorization; and (4) the plaintiffs suffered damages as a result.

The jury found that ZeniMax failed to prove by a preponderance of the evidence that any of the defendants misappropriated the trade secrets claimed by the plaintiffs. With respect to most civil claims, a plaintiff need only prove each element of a claim by a “preponderance of the evidence,” which means showing that “something is more likely so than not so.” This is a significantly lower burden than the “beyond a reasonable doubt” standard used in criminal cases.

Because the jury found that ZeniMax failed to prove that any of the defendants misappropriated its trade secrets, the jury did not award any damages to ZeniMax for this claim.

Copyright Infringement

Against Defendants: Oculus, Facebook, Luckey, Iribe and Carmack
Jury Award: $50,000,000 in actual damages against Oculus

All the defendants were alleged to have copied ZeniMax’s or id Software’s computer program code in violation of their copyrights. There is no copyright protection in a computer program for ideas, program logic, algorithms, systems, methods, concepts or layouts; only original “expressions” of work embodied in a computer program are eligible for copyright protection. For example, literal elements such as source code and non-literal elements such as program architecture, structure, sequence and organization, operation modules and computer-user interface may enjoy copyright protection. A computer program can be original even if it incorporates elements that are not original to the author. Accordingly, computer code copyright infringement cases require filtering and separating the uncopyrightable elements of a computer program from the protected parts, an expensive and complicated analysis usually involving expert witnesses.

Former Oculus CEO Brendan Iribe with Facebook CEO Mark Zuckerberg. Both appeared in court as part of the lawsuit. Zuckerberg and Facebook weren’t found liable on any of the claims, but Oculus (a subsidiary of Facebook) was.

The plaintiffs were awarded $50 million for copyright infringement against Oculus because the jury concluded the following: (1) the computer programs in question were copyrightable; (2) ZeniMax or id Software own the copyrights; and (3) Oculus copied the copyright-protected computer programs owned by ZeniMax or id Software.

Elements (1) and (2) were relatively easy for the jury to decide, because the plaintiffs had registered their computer programs with the Copyright Office. Proving the third element was the complicated, contested part of the trial.

To prove the third element and find Oculus liable, the jury had to answer yes to both of the following questions: (1) did Oculus copy the computer programs; and (2) if there was copying, was the copied material “substantially similar” to plaintiffs’ copyrighted computer programs?

The Oculus court used the Abstraction-Filtration-Comparison Test (“AFC Test”) to analyze whether the non-literal elements of Oculus’ computer programs were substantially similar to ZeniMax’s or id Software’s copyright-protected computer programs. Essentially, the AFC Test involved breaking down each computer program into its constituent parts, examining each of those parts, sifting out non-protectable code, and then comparing Oculus’ and plaintiffs’ programs to determine whether the copyright-protectable elements were similar enough to warrant a claim for infringement.

Plaintiffs used Dr. David Dobkin, Professor of Computer Science at Princeton, to shepherd jurors through the AFC Test. At the end of his testimony, Dr. Dobkin concluded he is “absolutely certain Oculus copied from ZeniMax code,” and the jury agreed. Prior to the jury verdict, Oculus contended in its January 30, 2017 Motion for Judgment as a Matter of Law that the AFC Test is “invalid and unconstitutional.” This issue may play a central role in expected appeals.



$4 Billion ZeniMax v. Oculus Verdict Could Come as Early as Today, Here’s What You Need to Know

This week the eyes of the virtual reality industry are on a federal court in Dallas, Texas, where ZeniMax (and its subsidiary id Software) and Facebook (and its subsidiary Oculus) have been engaged in a legal battle that could cost Facebook $4 billion. ZeniMax alleges that a former employee brought VR code that ZeniMax owned to Oculus after being hired there, and further that Facebook should have known that the code was ZeniMax property. With jury deliberations now starting, a verdict could come as soon as today. Here’s what you need to know about the case.


Guest Article by Brian Sommer, IME Law

Brian is an interactive media and entertainment attorney at IME Law, where he focuses his practice on the intersection of traditional entertainment and immersive media. He also serves as Co-Chair of the VRARA Licensing Committee. You can follow Brian on Twitter @arvrlaw, and @IME_Law.


For 13 days, attorneys in the Dallas federal court have been selling the jury very different stories. “One of the biggest technology heists ever” is how ZeniMax attorney Tony Sammi described Facebook’s acquisition of Oculus to jurors in opening statements. In Thursday’s closing arguments, Oculus attorney Beth Wilkinson told jurors ZeniMax and id Software are “jealous, they’re angry and they’re embarrassed” over the success of Oculus and the acquisition by Facebook.

At first blush, this lawsuit appears to be a complicated mess involving two plaintiffs, five defendants, nine causes of action, over 900 court filings (many sealed from the public) and a demand for more than $4 billion in damages. Without access to many of the critical motions filed in the case (due in part to the Court’s order sealing such filings), it is not possible to assess certain critical arguments made by each side in exacting detail. But from the arguments, publicly available filings and reports that have been made public, the essence of the lawsuit can be distilled down to this: it is a dispute about who owns the intellectual property (“IP”) that was vital in creating the Oculus Rift.

SEE ALSO
Experts Share 6 Legal Considerations to Know Before Jumping into the VR/AR Industry

Will the jury agree with ZeniMax that its proprietary computer code was a foundational component of Oculus’ success, or will the jury side with the defense’s argument that Oculus code was developed independently and based upon publicly known code and different solutions?

Starting today, jurors begin sorting through hundreds of facts and applying them to the issues contained in the jury instructions, weighing the credibility of witness testimony and evidence presented. Here are three key issues that could drive jury deliberations:

1. Did Palmer Luckey and Oculus Misappropriate IP That ZeniMax Disclosed Under a Nondisclosure Agreement?

Palmer Luckey, Founder of Oculus

Defendant John Carmack is heralded as one of the most recognized and accomplished video game programmers and virtual reality engineers in the industry today. He co-founded id Software (plaintiff), which was later acquired by ZeniMax (plaintiff). In April 2012, while employed as id Software’s Technical Director, Carmack discovered through an Internet forum that Palmer Luckey (defendant)—who would go on to found Oculus—had developed a prototype virtual reality headset called the “Rift.” Carmack contacted Luckey, and Luckey sent Carmack a very early Rift prototype. Carmack is alleged to have immediately started to evaluate, analyze and modify the Rift prototype using research, software code and tools owned by id Software.

By May 2012, Carmack and Luckey’s friendship had turned businesslike: Luckey, in his personal capacity, signed a nondisclosure agreement (“NDA”) with id Software’s parent company ZeniMax, according to information from the case.

Companies use NDAs to ensure ideas or trade secrets disclosed to another party remain confidential. NDAs usually prohibit the recipient of confidential information from using or disclosing any information that they receive under the NDA, except for agreed purposes. Since an NDA is a contract, all of the legal principles surrounding contract law (e.g., elements needed to form a contract, defenses, etc.) are used to analyze an alleged breach of an NDA.

In June 2012, Luckey formed Oculus on the heels of successful demonstrations of the Rift by Carmack (employed at the time by ZeniMax) and Luckey at the E3 Convention. ZeniMax alleges that through early 2013, and while bound by the NDA, Carmack and other id Software employees collaborated with Oculus and Luckey to debug and refine the Rift.

SEE ALSO
Oculus Founder Issues Statement After Developer Backlash to Polarizing Politics

ZeniMax alleges Luckey breached the NDA by taking ZeniMax-owned proprietary information, using it without permission and disclosing it to Facebook. Oculus and Luckey contend the NDA is unenforceable for a number of reasons, including that the NDA was signed by Luckey in his personal capacity before Oculus was founded, that a key material term was never defined, and other legally nuanced grounds. In response, plaintiffs assert that Oculus is bound by the NDA because Oculus is a mere continuation of Luckey’s prior work. The outcome may hinge on the jury’s many factual findings related to the NDA.



‘HOLOSCOPE’ Headset Claims to Solve AR Display Hurdle with True Holography

Holo-this, holo-that. Holograms are so bamboozling that the term often gets used colloquially to mean ‘fancy-looking 3D image’, but holograms are actually a very specific and interesting method for capturing light field scenes, one with some real advantages over other methods of displaying 3D imagery. RealView claims to be using real holography to solve a major problem inherent to the AR and VR headsets of today: the vergence-accommodation conflict. Our favorite holo-skeptic, Oliver Kreylos, examines what we know about the company’s approach so far.


Guest Article by Dr. Oliver Kreylos

Oliver is a researcher with the UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES). He has been developing virtual reality as a tool for scientific discovery since 1998, and is the creator of the open-source Vrui VR toolkit. He frequents reddit as /u/Doc_Ok, tweets as @okreylos, and blogs about VR-related topics at Doc-Ok.org.


RealView recently announced plans to turn their previous desktop holographic display tech into the HOLOSCOPE augmented reality headset. This new headset is similar to Magic Leap‘s AR efforts in two big ways: one, it aims to address the issue of vergence-accommodation conflict inherent in current VR headsets such as Oculus Rift or Vive, and AR headsets such as Microsoft’s HoloLens; and two, we know almost no details about it. Here they explain vergence-accommodation conflict:

Note that there is a mistake around the 1:00 mark: while it is true that the image will be blurry, it will only split if the headset is not configured correctly. Specifically, that will not happen with HoloLens when the viewer’s inter-pupillary distance is dialed in correctly.

Unlike pretty much everybody else using the holo- prefix or throwing the term “hologram” around, RealView vehemently claims their display is based on honest-to-goodness real interference-pattern based holograms, of the computer-generated variety. To get this out of the way: yes, that stuff actually exists. Here is a Nature article about the HoloVideo system created at MIT Media Lab.

The remaining questions are how exactly RealView creates these holograms, and how well a display based on holograms will work in practice. Unfortunately, due to the lack of known details, we can only speculate. And speculate I will. As a starting point, here is a demo video, allegedly shot through the display and without any special effects:

I say allegedly, but I do believe this to be true. The resolution is surprisingly high and quality is surprisingly good, but the degree of transparency in the virtual object (note the fingers shining through) is consistent with real holograms (which only add to the light from the real environment shining through the display’s visor).

There is one peculiar thing I noticed on RealView’s web site and videos: the phrase “multiple or dynamic focal planes.” This seems odd in the context of real holograms, which, being real three-dimensional images, don’t really have focal planes. Digging a little deeper, there is a possible explanation. According to the Wikipedia entry for computer-generated holography, one of the simpler algorithms to generate the required interference patterns, Fourier transform, is only able to create holograms of 2D images. Another method, point source holograms, can create holograms of arbitrary 3D objects, but has much higher computational complexity. Maybe RealView does not directly create 3D holograms, but instead projects slices of virtual 3D objects onto a set of image planes at different depths, creates interference patterns for the resulting 2D images using Fourier transform, and then composes the partial holograms into a multi-plane hologram. I want to reiterate that this is mere speculation.
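To make the Fourier-transform route concrete, here is a minimal numpy sketch of a single-slice amplitude hologram. To be clear, this is my own illustration of the textbook technique, not RealView’s pipeline; the test image, sampling and encoding choices are all assumptions, and a physical display would additionally need phase encoding and calibration in real physical units.

```python
import numpy as np

# Sketch of a Fourier-transform CGH for one 2D image slice (illustration only).
def fourier_hologram(target: np.ndarray, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    # A random phase spreads the object light across the whole hologram plane.
    field = target * np.exp(2j * np.pi * rng.random(target.shape))
    # The hologram plane holds the inverse Fourier transform of the slice.
    obj_wave = np.fft.ifft2(np.fft.ifftshift(field))
    # Interfere with a unit on-axis reference wave and record intensity,
    # as a classical amplitude hologram would.
    return np.abs(obj_wave + 1.0) ** 2

def reconstruct(hologram: np.ndarray) -> np.ndarray:
    # Re-illuminating the hologram and Fourier-transforming recovers the slice
    # (plus DC and conjugate-image terms, which this sketch ignores).
    return np.abs(np.fft.fftshift(np.fft.fft2(hologram)))

target = np.zeros((256, 256))
target[96:160, 96:160] = 1.0  # a bright square as the test slice
image = reconstruct(fourier_hologram(target))
```

A multi-plane display in the speculated style would repeat this once per depth slice and combine the resulting interference patterns, whereas the point-source method would instead superpose one spherical wavefront per 3D scene point, hence its much higher computational cost.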

This would literally create multiple focal planes, and allow the creation of dynamic focal planes depending on application or interaction needs, and could potentially explain both the odd language and the high quality of the holograms in the above video. The primary downside of slice-based holograms would be motion parallax: in a desktop system, the illusion of a solid object would break down as the viewer moves laterally relative to the holographic screen. Fortunately, in head-mounted displays the screen is bolted to the viewer’s head, solving the problem.

SEE ALSO
HoloLens Inside-out Tracking Is Game Changing for AR & VR, and No One Is Talking about It

So while RealView’s underlying technology appears legit, it is unknown how close they are to a real product. The device used to shoot the above video is never shown, and a picture from the web site’s medical section shows a large apparatus that is decidedly not head-mounted. I believe all other product pictures on the web site to be concept renders, some of them appearing to be (poorly) ‘shopped stock photos. There are no details on resolution, frame rate, brightness or other image specs, and any mention of head tracking is suspiciously absent. Even real holograms need head tracking to work if the holographic screen is moving through space by virtue of being attached to a person’s head. Also, the web site provides no details on the special scanners that would be required for real-time direct in-your-hand interaction.

Finally, there is no mention of field of view. As HoloLens demonstrates, field of view is important for AR, and difficult to achieve. Maybe this photo from RealView’s web site is a veiled indication of FoV:

I’m just kidding, don’t be mad.

In conclusion, while we know next to nothing definitive about this potential product, computer-generated holography is a thing that really exists, and AR displays based on it could be contenders. Details remain to be seen, but any advancements to computer-generated holography would be highly welcome.


Visualising UI Solutions for Our Mixed Reality Future

Augmented and Mixed Reality technologies are rapidly evolving, with consumer devices on the horizon. But how will people interact with their new digitally enhanced lives? Designer Ben Frankforter visualises several ideas he’s had to help bring about the arrival of what he calls the “iPhone of mixed reality”.


Guest Article by Ben Frankforter

Ben Frankforter is a designer passionate about connecting consumers and services via positive experiences. In the past 10 years, he’s designed and led small teams creating brands, furniture, interiors, and apps. He recently finished a position as Head of Product Design at BillGuard and is now researching user interfaces for mixed reality.


While virtual and mixed reality experiences are trending right now (we’ve seen a lot of cool examples in movies), I feel that there’s a lack of convergence on practical interaction patterns. We haven’t seen the iPhone of mixed reality yet, so I decided to explore the user experience and interface aesthetics of mixed reality and share my ideas with the community. My goal is to encourage other designers to think about and publish ideas on MR interfaces.

As technology becomes invisible at all such levels, from a perceptual and cognitive point of view, interaction becomes completely natural and spontaneous. It is a kind of magic.
– Alessandro Valli

During our lifetime, we acquired skills that empowered us to interact with our environment. As Bret Victor explains, by manipulating tools that answer our needs, we can amplify our capabilities. We perform thousands of these manipulations everyday, to a point that most of them feel natural. And one of the attributes of good interaction design is allowing Natural User Interfaces: those which are invisible to the user, and remain invisible as we learn them. Some examples of these interfaces are speech recognition, direct manipulation, and gestures.

Apps as Objects

I started by looking into an interaction that felt very natural: browsing records.

I found this interaction interesting because of the following:

  • Direct manipulation of the catalog
  • Perception of progress while browsing
  • Full visual of selected item
  • Minimal footprint of scrolled items

I was thinking of a way to apply these principles to an interaction for browsing and launching apps in a mixed reality environment.

Apps as Cards

In this case, the app cards are arranged in a stack and placed below the user’s point of view, at a comfortable reach distance. The perspective allows a full view of the apps in the stack. Just browse through the cards and pick up the app you want to launch.
Being virtual, the app cards could grow to various sizes, from a handheld virtual device up to a floating virtual display.

Manipulating virtual devices and displays
Going from app to device to display
Mockup of apps and virtual devices

Switching Between Apps

It’s an interesting way to open and close apps, but what about switching between them?
Inspired by Chris Harrison’s research, I explored a system that uses simple thumb gestures to navigate between apps and views. We can easily perform these operations, even with our eyes closed, thanks to two factors: proprioception (awareness of the position and weight of our body parts) and tactile feedback (contact and friction applied to the skin).

Thumb gestures occur against fingers

Thanks to the friction applied by the thumb sliding along the index finger, we perceive continuous tactile feedback.

Proprioception, combined with tactile and visual feedback, enables easy switching between views.


Tools and Controls

While the left hand controls basic navigation, the right hand is free to execute other operations using virtual tools. The results of these operations are shown on a virtual display in front of the user.

A bird’s-eye view of a photo browsing environment
Scroll through your photos

But a planar surface is not always available, and to be able to interact with any environment the user should be able to perform other types of gestures as well. Gestures in mid-air can help, such as framing the right photo.

Camera app

You can follow Ben Frankforter on Twitter and Facebook as he brainstorms solutions for the future of immersive technology user interfaces.


2016’s Record Breaking VR Venture Funding Has Been Driven by Mega Deals

After years of record-breaking venture funding in the virtual reality industry, we wondered if 2016 would continue on its torrid pace. The numbers are starting to roll in, as reported in the new Fall edition of the 2016 Virtual Reality Industry Report, researched and published by Greenlight Insights and Road to VR.

Mega deals in VR venture funding have been the story of 2016. A diverse set of companies made up the largest venture deals of the year. This year, MindMaze ($100M) and NextVR ($80M) have raised two of the largest single venture rounds we’ve ever seen. Moreover, the top ten deals make up a whopping $396M, with several massive rounds at the top end:

Excluded from the analysis of more than 150 venture deals in the 2016 Virtual Reality Industry Report are large deals in adjacent technology industries, such as Magic Leap’s $793.5M Series C (February 2016), which Greenlight Insights counts as an augmented reality deal, and Unity’s $181M Series C (July 2016), since Unity is primarily a game engine company. But even without counting these outliers, 2016 will end as a record-breaking year for VR investment.

“This year will shatter funding records. The number of deals and actual dollar value is up significantly year-over-year,” says Greenlight Insights’ Senior Vice President, Steve Marshall. “As for next year, we expect to see more breakout deals as the first wave of innovation matures.”

However, despite the vigorous funding environment throughout much of this year, Greenlight Insights still considers the virtual reality industry to be very much in its first wave of innovation, with much of the action still at the early stages and many potential success stories still waiting to be written.

The complete analysis of 2016’s VR venture funding, and everything else you need to know about VR this year, is available in the newly updated 92-page 2016 Virtual Reality Industry Report, which includes new market revenue and hardware shipment forecasts; for a limited time you can use the special code ROADTOVR to save $500.


Everything Wrong with Traditional Data Visualization and How VR is Poised to Fix It

Data visualization is one of a handful of topics that VR evangelists like to count off on their fingers as spaces that virtual reality could radically change. But how, exactly? And what’s wrong with data visualization today? This article digs into specific issues with traditional data visualization and the challenges of understanding abstract information, and how VR is ready to change everything.


Guest Article by Evan Warfel

Evan is a program manager at virtual reality data visualization company Kineviz. He previously worked as a data scientist for HID Global, and he graduated from U.C. Berkeley with a degree in Cognitive Science. When he’s not working for Kineviz and exploring VR, he writes about and researches the human decision-making process.


In 1983, Amos Tversky and Daniel Kahneman asked college students the following question:

Linda is thirty-one years old, single, outspoken, and very bright. She majored in philosophy. As a student she was deeply concerned with issues of discrimination and social justice, and she also participated in anti-nuclear demonstrations. How likely is it that:

  1. Linda is a teacher in an elementary school?
  2. Linda works in a bookstore and takes yoga classes?
  3. Linda is active in the feminist movement?
  4. Linda is a psychiatric social worker?
  5. Linda is a member of the League of Women Voters?
  6. Linda is a bank teller?
  7. Linda is an insurance salesperson?
  8. Linda is a bank teller and is active in the feminist movement?

They found that 86% of undergraduates rated #8 (Linda is a bank teller and is active in the feminist movement) as more likely than #6. But while it is easier to imagine Linda being both a feminist and a bank teller, ‘feminist bank tellers’ are only one kind of bank teller, so there are necessarily fewer feminist bank tellers than bank tellers in total.
Not only is this example well known, but most people also find it confusing. Notice how much easier it is to understand when it is visualized:


Which is more likely: that Linda is a Bank Teller, or a Feminist Bank Teller? Assume the circles are sized proportional to reality.

Virtual reality has the potential to make probabilistic reasoning easy, just like this diagram made the so-called “Linda Problem” much easier.
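In symbols, the diagram encodes nothing more than the conjunction rule of probability. The numbers below are illustrative only, not figures from the study:

```latex
P(\text{teller} \land \text{feminist})
  = P(\text{teller}) \cdot P(\text{feminist} \mid \text{teller})
  \le P(\text{teller}),
\qquad \text{e.g. } 0.05 \times 0.10 = 0.005 \le 0.05
```

However large the feminist circle is, its overlap with the bank-teller circle can never exceed the bank-teller circle itself.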

– – — – –

Talking about data and virtual reality is a bit of a chicken-and-egg problem — it’s difficult to build a suite of VR tools that people will use without knowing how said people will use VR data tools. That being said, virtual reality can help with a) probabilistic thinking (illustrated above), b) high-dimensional data visualization, c) high information density, and d) providing the context needed to fully understand what is going on.

High-Dimensional Data Visualization

“Graphs are essential to good statistical analysis.”  – F.J. Anscombe

Provided your dataset has two dimensions or fewer, the respective data is relatively easy to visualize with graphs or charts:


Anscombe’s famous quartet, taken from Wikipedia. Each data set has the same mean, correlation, variance, and best-fit line.

For each dataset above, the mean of all of the X coordinates is 9, the mean of all of the Y coordinates is 7.50, the variance of the X coordinates is 11, the correlation between the X and Y coordinates is 0.816, and the equation for the best-fit line in each case is y = 3 + 0.5x.

In other words, these four datasets are seemingly statistically identical, even though their true nature is betrayed by visualization. However, we had it easy—we were only working with two dimensions of data.
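If you want to verify the claim yourself, here is a quick numpy check using dataset I from Anscombe’s 1973 paper (the other three sets reproduce the same summary statistics to two or three decimal places):

```python
import numpy as np

# Dataset I of Anscombe's quartet (Anscombe, 1973).
x = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5], dtype=float)
y = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96,
              7.24, 4.26, 10.84, 4.82, 5.68])

print(x.mean())                              # 9.0
print(round(y.mean(), 2))                    # 7.5
print(round(x.var(ddof=1), 2))               # 11.0 (sample variance)
print(round(np.corrcoef(x, y)[0, 1], 3))     # 0.816
slope, intercept = np.polyfit(x, y, 1)
print(round(intercept, 2), round(slope, 2))  # 3.0 0.5, i.e. y = 3 + 0.5x
```

Plot the four sets, though, and they look nothing alike, which is Anscombe’s whole point.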

– – — – –

If you have three dimensions’ worth of data, you could conceivably use a three-dimensional plot. If you have high-dimensional data (i.e., plenty of columns in an Excel spreadsheet of your data), you are mostly out of luck. While it is easy enough to think in 2D, the trouble with having a lot of columns in your dataset (like 10,000, for instance, but really anything greater than 3) is that it is impossible to visualize more than three spatial dimensions.

SEE ALSO
CLOUDS is an Immersive VR Documentary Combining CG Data Visualization & Depth-mapped Interviews

However, there are other ways of representing dimensions. A triangle, for instance, could be used to represent three dimensions of data, if you mapped each dimension to the length of a side. You could, if you really wanted, utilize a red-blue spectrum and a light-dark spectrum to color in the middle of the triangles and blamo! You’ve got five continuous dimensions all in one visualization. Compare each triangle, and you might spot anomalies or heretofore hidden patterns and relationships. That’s the theory, anyway.

Herman Chernoff explored a variant of this idea in the 1970s — instead of lengths of triangle sides, he mapped dimensions of data to different characteristics of cartoon faces.

I’ll let you judge how well this worked by way of an L.A. Times infographic:


Eugene Turner — Life in Los Angeles (1977), L.A. Times. The four facial dimensions, the geographic distribution of each face and the community line information mean you are looking at six dimensions of data.

Your gut reaction will be to dismiss this method of data presentation, as it looks silly, vaguely racist, and hard to interpret. But I urge you to give it a second look — can you spot the buffering row of communities in between the poor and affluent parts of town?

One reason Chernoff faces don’t get wider use, I submit, is that they look too cartoonish (and seeing how science is very Serious Business, it wouldn’t be proper for plots to be cartoon faces…). While realistic Chernoff faces solve the cartoonishness problem, they highlight another issue: though they seem like they could be intuitive, we all have too much experience with faces and real emotions to evaluate arbitrarily constructed ones.

In the depictions below, parameters of Tim Cook’s face — like the slope of his eyebrows — have been mapped to various Apple financial data-points for the year in question.


From Christo Allegra. Each version of Tim Cook’s face represents Apple’s financial data for the year in question. The width of Tim Cook’s nose represents the amount of debt taken on by Apple; the closed-ness of Cook’s mouth represents the revenue of that year; the size of his eyes represents the earnings per share, and so on. For serious uses of Chernoff faces, check out Danny Dorling’s work.

Clearly, there are some issues with this approach too. One thing that stands out is that not every aspect of a face conveys emotional information on the same scale as, for instance, the smile. In other words, the perceptual difference between one face and another doesn’t match the actual differences between the data. This, I submit, is one of the properties that makes plots and graphs so useful. It’s why visualizing the Linda Problem makes it much more intuitive. It is also something that is missing from current approaches to high-dimensional data visualization.

– – — – –

Virtual reality can solve several of the aforementioned issues. Instead of faces, a Chernoff-like technique can be applied to control how neutral objects look, move, interact and are distributed. For example, all of the following properties of tables can be used to represent different data dimensions: height, area of table-top, color, leg-length, degree of table polish, as well as type and location of stains and burns. If you have 15-dimensional data, you could do worse than translate the dimensions to parameters that would control how tables might look.


Each measurement can be utilized to visualize another dimension of data. From mycarpentry.com.
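As a sketch of how such an encoding might be wired up (my own construction with made-up parameter ranges, not a Kineviz API), a five-dimensional record could be normalized and mapped onto table parameters like so:

```python
from dataclasses import dataclass

@dataclass
class TableGlyph:
    height: float      # metres              <- data dimension 0
    top_area: float    # square metres       <- data dimension 1
    hue: float         # 0..1 colour wheel   <- data dimension 2
    leg_length: float  # metres              <- data dimension 3
    polish: float      # 0 matte .. 1 mirror <- data dimension 4

def normalise(value: float, lo: float, hi: float) -> float:
    # Rescale each column to [0, 1] so no single column dominates the glyph.
    return (value - lo) / (hi - lo)

def to_glyph(record, ranges):
    # record: five raw values; ranges: per-column (min, max) from the dataset.
    u = [normalise(v, lo, hi) for v, (lo, hi) in zip(record, ranges)]
    return TableGlyph(
        height=0.5 + 1.0 * u[0],     # 0.5 m .. 1.5 m
        top_area=0.25 + 1.0 * u[1],  # 0.25 m^2 .. 1.25 m^2
        hue=u[2],
        leg_length=0.3 + 0.9 * u[3],
        polish=u[4],
    )
```

The hard part raised next, making equal data differences produce equal perceptual differences, would live in those range constants, tuned with the psychophysics discussed below.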

The advantage of VR is that it allows you to perceive the true, intuitive meaning of a table that is twice as tall as another, or the meaning of different coefficients of friction on the table top. Some testing could ensure that the differences in each dimension carry the same perceptual weight.

SEE ALSO
How VR Productivity Apps Could Make Us As Smart As Sherlock

Moreover, the methodology for how to go about this has been thoroughly explored in the realms of psychophysics and color perception — researchers have spent a vast amount of time measuring how people perceive both tiny and large differences in different kinds of sensations. In other words, VR and a little psychophysics could make understanding complex data as easy (or as stress-inducing) as walking through IKEA.



Battlezone PSVR Dev Diary #3: The Art of Battlezone

In our final developer diary from the team behind Rebellion’s PlayStation VR launch title Battlezone, Lead Artist Sun He reflects on how the team developed and executed the artistic ethos behind the game.

Guest Article by Sun He, Lead Artist on Battlezone – performing magic in 3D!

As an art team beginning work on our very first VR game, we knew we were undertaking a challenge with Battlezone, but our goals were always crystal clear. We wanted to fashion an art style that not only wowed in VR, but also retained a strong visual connection to the original 1980s arcade game. A simple goal to understand then, but very complex to deliver! Here’s how we approached it, and the lessons we learned along the way:

1. Rethink the workflow


At the earliest stages of production we conceptualized the game’s look the traditional way.

Metaphorically, we knew we weren’t about to paint on canvas, but we still tried sketching out ideas on paper. Sure enough, this traditional approach produced results that looked rather different in VR compared to how we imagined.

It quickly became clear just how important it was to consider the scale and spacing in VR as early as possible. In VR, artists are working in a truly digital world. In other words, that meant conceptualizing in 3D and indeed in VR right from the off.

The best analogy for this shift I can come up with is that whereas before VR we were artists making nice paintings of houses, we are now actually building the houses and designing their interiors! It’s quite a jump.

We’d take 3D concept models into VR and scale them, move them around in 3D space and test them again and again in as many different scenarios as possible, essentially trying to break them! Once an outline was set, we could then finally add detail and texturing.

Special effects were a particularly good example of this. In a traditional approach we would simply generate effects with 2D sprites, but in VR this led to effects that lacked an inherent sense of depth and volume. This, interestingly, was particularly noticeable with larger particles.

If you play Battlezone, you’ll notice a lot of the game has a polygonal feel, from the hexes of the campaign map and the in-game surfaces to the polyhedral pieces of data that spawn from defeated enemies. This is certainly part of the game’s retro-futuristic feel. But with our effects, using a polygonal design allowed us to create effects in 3D meshes. By this I mean instead of drawing a 3D sphere, for example, as a texture to put on 2D particle sprites to create what looks like a sphere in-game, you’re actually using a 3D sphere. And you can see that particularly in the explosions: Bright yellow and orange dynamic polygons that look like lots of smaller shapes combined together. It’s a striking look that really resonates with the retro style but is also very well suited for VR art design.

SEE ALSO
Battlezone PSVR Dev Diary #1: The Importance of Feedback in Uncharted Territory

2. Exaggerate the scale


Creating a game for VR, we of course wanted to create a world that people would naturally look around. One of the ways we tried to achieve this is something we’ve mentioned in a previous post: Using very tall, imposing structures in the vertical space that really hammer home the sense of scale. These work both as visual landmarks and orientation tools in VR, much like you’d use the tallest building as a point of reference in a busy metropolis.

In addition to this, we used a combination of “vanishing points” in scenery to make perspectives feel more exaggerated. A vanishing point is, essentially, the point in your perspective where two parallel lines appear to converge. Try imagining a picture of a road leading to the horizon. At some point you see the two sides of the road meet towards the horizon, essentially disappearing. That’s a simple example of a vanishing point, though it can comprise more than just lines, and it’s often used to simulate 3D in 2D art.
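For the mathematically inclined, the convergence falls straight out of perspective projection; this is the standard derivation, not anything Battlezone-specific. A point travelling along a line with starting point P and direction D projects (with focal length f, for lines not parallel to the image plane) to

```latex
\mathbf{X}(t) = \mathbf{P} + t\,\mathbf{D}, \qquad
x(t) = f\,\frac{P_x + t D_x}{P_z + t D_z}
\;\longrightarrow\; f\,\frac{D_x}{D_z} \quad \text{as } t \to \infty
```

so every line sharing direction D, no matter where it starts, converges on the same image point: the vanishing point.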

By using multiple vanishing points in the Battlezone scenery, we were able to make our 3D perspectives feel more exaggerated in scale; structures would feel taller and environments even bigger. For instance, during the opening launch sequence, the hangar feels incredibly spacious in VR because we’ve exaggerated the draw distance. And then as the tank lifts you out of the level, your eyes are drawn upwards towards that epic landmark in the sky.

SEE ALSO
Battlezone PSVR Dev Diary #2: Building Levels in VR That Welcome Players Into the World

3. Design a VR-friendly art style


I’d probably describe Battlezone’s look as a retro-futuristic style with very neon, chunky and blocky shapes. We wanted to inherit some of the classic elements from the original Battlezone, like the colour, the neon wireframe and the polygonal look, but at the same time give it the kind of makeover players would expect from a next-gen VR experience.

In early development, Battlezone had a litany of thin neon lines and very, very detailed environments. However, it started to look noisy, with elements a little indistinct in the mid-to-far distance. As artists, we found we needed to be a little more restrained in VR.

With that in mind, we began to sculpt the buildings and vehicles into big, blocky shapes at first, and then balance things like the level of detail and the thickness of lines. Once again, we were testing assets early and regularly in VR, so we could have a much clearer idea of how much additional detail we could add.

Battlezone’s chunky neon polygons became the basis around which we chose the colour palette. We tinted environmental themes around it – the volcanic theme has primary and secondary colours of brown and grey, which really contrasted against the neon orange and yellow of things like effects to make everything more pronounced in VR. And placing our player in the cockpit meant we could bring back the game’s classic green look into the displays and user interface right at the front of the view. The end result is something that harkens back to the original arcade game and yet feels undeniably modern, digital and virtual – retro-futuristic, classic but modern, familiar and yet in VR.


It’s been so exciting to be a part of this VR journey, and I’m looking forward to further exploring this new area of gaming and finding more solutions for future development. We are really lucky to be one of the few teams to create art content for a brand new platform in such uncharted territory. I really hope you’ll enjoy Battlezone and appreciate the work that has gone into its art style.


Our thanks to Sun He and the rest of the Rebellion team for putting together these developer diaries. Battlezone is out now on PlayStation VR.


STRIVR Introduces VR Training On and Off the Field

With industries outside of entertainment expected to account for the majority of virtual reality industry revenues, the question of utility seems ever-pressing. How can virtual reality transcend novelty into utility and become truly useful?

It’s a question that STRIVR Labs CEO Derek Belch has been asking himself since 2007 when the former Stanford Cardinal kicker took a class about virtual reality from Jeremy Bailenson, the head of Stanford’s Virtual Human Interaction Lab (VHIL).

Belch realized the potential of VR for sports training and spent two years working with Bailenson to develop and troubleshoot an effective way to film Stanford Football’s practices in 360-degree, immersive video allowing players to rewatch practice film using a VR headset. And what started as a leg up for the Stanford football team became his career.

“We’ve chosen to go the way of asking: What are the problems and how can this help?” says Belch. “I think the entertainment stuff is really cool and some of the gaming experiences are incredible, but how big is that market outside of a gaming community? We’ll find out. I think the utility applications where VR actually makes a lot of sense are the winners in the long term.”

Photo courtesy ESPN

And so far, his approach seems to be working. Fast forward to 2016, and STRIVR Labs counts the likes of Texas Tech, Arkansas, Stanford, the Dallas Cowboys, and the Arizona Cardinals, among others, as clients using STRIVR’s VR solutions to enhance training. STRIVR has also continued to work with NBA, MLB, and NHL franchises and their corporate partners, using virtual reality at various games.

STRIVR’s approach capitalizes on the oft-unacknowledged mental workout of football: memorizing plays and repeating them in perpetuity until they stick. Where repeating plays on the field becomes physically and mentally exhausting, as well as time consuming, STRIVR’s approach puts players in a headset for between two and fifteen minutes. In five minutes, a player can get as many as 40 additional reps of a seven-second play that they wouldn’t get on the field.

And while training football players might still technically fall under the umbrella of entertainment and not utility, STRIVR has extended its reach into training the employees of an anonymous international retailer. This move suggests greater potential for expansion into many industries that necessitate repetitive mental training and memorization.

“[Training employees is] not unlike what a football team faces, where there’s only so much time to give people reps,” says Belch. “You have to teach to have them prepared to play. It’s just a different type of play with an everyday job versus a game.”

Belch will further explore his company’s approach to the business of VR at his session “From NFL to Walmart: How to Use VR to Grow Your Human Capital” on November 2 at the 2016 Virtual Reality Strategy Conference.

Photo courtesy Fox

The high stakes of working with large corporations and football giants mean STRIVR is constantly searching for the best technology for the job. Belch considers his company “agnostic” as far as allegiances to specific platforms. STRIVR has used the HTC Vive, Samsung Gear VR, and the Oculus Rift CV1 and DK2 headsets.

“For us the cost of failure is really high,” says Belch. “If something fails, the teams think it’s our fault when it probably isn’t. We try to be smart about what hardware we deploy and where.”

But the challenge for STRIVR lies mainly in filming VR footage from the field and getting intimate shots of plays for VR reproduction.

“As far as actually doing this stuff on site, it’s not easy,” Belch says. “This is not like computers or cell phones right now, where you just beam it out to people and everybody owns it. You actually have to put it in front of people.”

But Belch says that is something STRIVR has succeeded in doing.

“Out of the 20,000 plays we filmed last year, those plays were watched 50,000 times,” he says. “These are real paying customers and real players actually using VR for utility.”


Road to VR is a proud media sponsor of the 2016 Greenlight Virtual Reality Strategy Conference.


Choosing the Right 360 VR Camera

With a virtual reality camera, you can capture the whole world around you in a 360-degree videosphere. VR filmmaking is seeing rapid innovation, meaning that there are more 360-degree cameras on the market now than ever before, geared toward everyone from intrigued consumers to high-end professionals. This overview is designed to give you solid starting points across a range of options.


Guest Article by Aaron Rhodes

Aaron is Pixvana’s in-house filmmaker and executive producer. A veteran of the post-production world, he has worked as a director, visual effects supervisor, senior colorist, editor, and more, at renowned facilities including Emotion Studios, Evil Eye Pictures, Spy Post, and The Orphanage; he currently serves as a board member of the Visual Effects Society. Aaron is a creative problem solver who has lent his talents to box office hits such as The Avengers, Iron Man, films from the Harry Potter and Pirates of the Caribbean series, and many others.


At Pixvana, we’re constantly testing new cameras, including custom rigs, to help advise on what the best VR system is for a variety of projects. There’s not one ‘best’ 360 VR camera; the best one is the one that suits your needs.


First, when planning to make a piece of VR film content, it’s important to ask yourself: does this experience really need to be immersive? How will the content be delivered and experienced? What are my budgetary and production constraints? All of these factors can help narrow down your camera choices from the get go. Integrated, off-the-shelf VR cameras make the post-production process much simpler, but custom camera rigs can also have some advantages for more discerning filmmakers. I’ll cover a variety of options here.

Entry Level


For entry-level choices, I like the Samsung Gear 360 or Ricoh Theta S, both of which let users easily experiment with VR for under $400. These dual-lens 360 cameras are consumer-friendly, with small, portable form factors and accompanying smartphone apps to quickly review footage. The Ricoh Theta S can also livestream, which is a nice perk.

SEE ALSO
Ricoh Announces Theta SC, a Colorful Mid-Range Addition to 360 Camera Lineup

These cameras offer lower resolution than the more high-end options, but if you’re a consumer, or even a professional just dabbling in VR for the first time, they are solid yet affordable options to help give you the lay of the land before making a larger investment. Even experienced professionals shouldn’t overlook these cameras; they’re great to have on hand for proof of concept, scouting, and pre-visualization.

Mid Range

Photo courtesy GoPro

Moving up to the mid-range, the GoPro Omni ($5,000) is a strong off-the-shelf option, offering six synchronized HERO4 Black cameras in a portable spherical rig, all capturing content at 8K resolution. In addition to the hardware, GoPro’s Kolor software suite gives users a straightforward way to import, stitch, view, and publish content.

The Omni has some drawbacks, notably no live preview or real-time stitching. But combined with a Ricoh Theta S or Samsung Gear 360, you can still get quick on-set previews. The Omni remains my first choice in this price range because it provides a one-stop shop for VR at high resolution. And at the size of a grapefruit, it gives filmmakers a lot more freedom and flexibility on set than they might have with a larger custom rig. If you need something that won’t break the bank but will still produce high-res content, to me this is the most straightforward option on the market.

High End


If you’re looking for a professional camera, the Nokia OZO is a great option. Specifically designed for professional VR production, the OZO is a single spherical camera with eight synchronized sensors, and it comes with a standalone computer running OZO software for live stitching and preview. It also offers ambisonic sound recording and partial stereo, and it can live stream in HD resolution (and capture footage at up to 6K resolution).

SEE ALSO
Exclusive: Nokia's $60,000 VR Camera Goes on a Drone Test Flight

The OZO costs about $45,000 to buy or $3,000 per day to rent—nothing to sneeze at—but it does provide significant perks and a well-designed end-to-end workflow. Professionals looking for a robust, self-contained production pipeline, or who need to live stream at high resolution, should give the OZO a try.

Custom Rigs


Lastly, custom camera rigs are another option for those filmmakers wanting a specific set of benefits not fulfilled by any off-the-shelf solutions. For a recent shoot, I opted to use a custom rig of five RED Weapon cameras in order to capture content at 10K resolution and 60 FPS. The higher your resolution, the better the content will look in a VR headset, something to keep in mind when deciding on a camera system. Using this custom rig also let me fully control the exposure, swap out lenses, and make other modifications to meet the goals of the production.

SEE ALSO
HypeVR Captures Ultra-High Def 360 Degree, Depth-Mapped Video Using a 14 x 'Red Dragons' and LiDAR

Though the rig delivered, it was large and cumbersome to move around on set, and more complicated to use than an ordinary VR camera. Even seasoned pros should make sure their shoot is very well planned and genuinely requires capabilities that no off-the-shelf camera provides before experimenting with custom rigs.


Once you’ve decided on a camera, you’re ready to start capturing 360-degree video! With so much innovation in the VR space, I anticipate that higher resolutions and streamlined workflows will soon become the standard.

Follow the Pixvana blog for more field tips as we continue to test available camera systems. HTC Vive and Oculus Rift users can also check out our SPIN Technology Preview on Steam to see firsthand how our Field of View Adaptive Streaming (FOVAS) technology delivers crystal clear content wherever you look.


Battlezone PSVR Dev Diary #2: Building Levels in VR That Welcome Players Into the World

In the second of our three-part developer diary series from Battlezone developer Rebellion, Game Designer Grant Stewart writes about making levels that don’t just use VR, but also welcome you into the game world.


Guest Article by Grant Stewart

Grant designs games at Rebellion. Nobody seems to have stopped him yet.


At its heart, Battlezone is an arcade game. It’s designed for frantic, quick bursts of tank-combat gameplay. Some developers have opted for shorter experiences or a more leisurely pace, but we want to get your pulse racing and your trigger finger itching. Every facet of Battlezone feeds into providing this feeling and accentuating it with VR. Enemies swoop and careen around you, explosions light up your view and the battlefield feels alive with action.

Creating the levels to house this action in VR has been a unique experience. We built Battlezone in Rebellion’s in-house engine, Asura. Our tech team crafted a selection of tools for the project that enabled us to prototype rapidly. Every level supports every mission, and each mission can play out in a number of different ways in each level.

Our first attempts at crafting environments for tank warfare were inspired by the original 80s and 90s Battlezone games. We cautiously attempted undulating terrain that stretched across vast areas, though some on the team weren’t convinced. The glowing vector mountains of the arcade cabinet became faceted rock formations, polygonal trees dispersed around them. The extraterrestrial landscapes of the Activision strategy games allowed us to zip across levels, flowing under and over rolling hillocks. You can see some of these environments in one of our earliest trailers (along with the old cockpit!)

These all seemed like strong ideas at first, but the longer we played in VR the more we saw the problems. We kept snagging on those polygonal trees and the undulations were beginning to make us feel uncomfortable. So, as ever with this project, we experimented and iterated.

We knew combat had to happen at a variety of ranges and heights; what’s the point of having full freedom of movement if it doesn’t translate into variable gameplay? So we flattened out the hills and tied the plateaus together with easily navigable ramps. We also swapped out the smaller objects for rocks, vents and snow drifts, each offering cover in a fight. We kept some of the speed and all of the freedom, but we still circumvented problems.

After extensive playtesting and iteration, we started to apply more dressing to the levels. The campaign in Battlezone sees you perpetually work your way towards The Corporation AI Core’s volcano lair, a nod to the original Battlezone. Across that journey you fight in five distinct settings: Frozen Wastes, Robotic Metropolis, Neon Cities, Industrial Complexes and Volcanic Ridges. Each theme has its own unique palette and style, and all of them are calibrated to complement the enemy designs.


In addition to the traditional aspects of environmental design, VR gave us something new: Scale. Being wowed not just by the world you see, but the world you are in.

So we ensured each level features a unique landmark, a structure towering above you. In VR they are frankly awe-inspiring – you gaze up and appreciate the size of your surroundings. We embrace this with a moment at the start of every mission. Before the action begins you watch as your cockpit comes online. The shutters around your tank gradually come up and the world slowly comes into focus. It’s an opportunity to marvel before plunging into the action.


As well as being stunning, these structures provide an anchor point for orienting yourself. Knowing where you are helps you get into the action that much faster. As you switch targets from close to long range, ground to air, and so on, you’ll always be able to find the horizon, that landmark and your bearings.

This was especially important to us because each level can play out missions in a variety of ways. Our procedurally generated campaign algorithmically connects missions and maps, so you never know exactly what you’re going to get in each level. We wanted to ensure that any combination offers a unique scenario. Even the placement of structures and enemies is chosen by chance! So with all that in mind, having an imposing landmark enhances the readability of this exotic world, which is something you really need in the heady swell of VR.

SEE ALSO
Battlezone PSVR Dev Diary #1: The Importance of Feedback in Uncharted Territory

Battlezone and VR provided us with a lot of unique challenges and opportunities. It’s been a real joy to carve out our gameplay niche on unfamiliar ground. Every aspect of development has been influenced by it. So the whole team and I are proud of what Battlezone has shaped up to be. We can’t wait for you to play it!


Our thanks to Grant for penning this diary entry. Battlezone is a PSVR launch title, available to buy alongside PlayStation VR on October 13.
