Another entrant appears in the rapidly burgeoning wireless VR segment as DisplayLink prepares to present its new WiGig 60GHz wireless VR technology at next week’s E3 convention and, according to a recent hands-on, it’s looking pretty impressive.
Given recent comments from the founder of Oculus that current generation virtual reality headsets won’t see a successor until 2018 at the earliest, it’s fallen to other technology leaders to push the state of VR hardware forward. The most enticing prospect for enhancing the PC VR experience is the wireless VR add-on, which lets enthusiasts cut the cord on their high-end VR headsets.
The market is already starting to look pretty busy, with Road to VR taking a look at several solutions both ready for retail and in the works. Now, veteran video protocol specialist DisplayLink is due to debut their own solution to the world at next week’s E3 gaming convention in LA.
HTC Vive with DisplayLink XR prototype receiver and transmitter [Image courtesy: Tom’s Guide]
DisplayLink XR is a system which utilises the WiGig (short for ‘Wireless Gigabit Alliance’) 60GHz wireless video standard and, according to DisplayLink, is capable of delivering dual 4K (3840×2160) video signals at a whopping 120Hz. Tom’s Guide recently got an exclusive sneak peek at a prototype iteration of the technology and, according to them, when coupled with an HTC Vive, which sports dual 1080×1200 OLED panels running at 90Hz, the new system delivers “razor-sharp”, low latency wireless image quality. Such was the proficiency of the DisplayLink XR demo, powered by the company’s latest DL-8000 chipset, that Tom’s Guide said “We couldn’t even tell the difference between corded and uncorded use.” Sounds impressive.
TP-Link 7200ad router, the world’s first WiGig router, unveiled at CES last week
WiGig (Intel’s chosen solution) is, as the name suggests, a wireless multi-gigabit networking standard which dramatically increases over-the-air bandwidth compared with standard WiFi, albeit over short distances (the same room). The name ‘WiGig’ is a shortening of the organisation (the Wireless Gigabit Alliance) which helped define the IEEE 802.11ad 60GHz standard. WiGig is aimed at very high bandwidth uses, such as the broadcast of multi-gigabit uncompressed video and audio streams. Although its uses are more limited (short range, doesn’t work well through walls), it is ultimately a very high speed general purpose network standard in the same way as other WiFi standards. Bottom line: if you buy an 802.11ad compatible router, it’ll not only be backwards compatible with your older devices, you’ll also be able to use that extra bandwidth for any sort of data transfer, not just video and audio. WiGig data rates max out at 7 gigabits per second per channel.
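To put that 7 Gbps figure in context, a rough back-of-the-envelope calculation (a sketch assuming 24 bits per pixel and ignoring link overhead such as blanking, audio and error correction) shows why a single WiGig channel can carry a current generation headset’s video feed uncompressed:

```python
# Rough estimate of the raw bandwidth needed to drive an HTC Vive
# uncompressed, versus a single 7 Gbps WiGig (802.11ad) channel.
# Assumes 24 bits per pixel (8-bit RGB); real links carry extra
# protocol overhead not modelled here.

def uncompressed_gbps(width, height, panels, refresh_hz, bits_per_pixel=24):
    """Raw video bandwidth in gigabits per second."""
    return width * height * panels * refresh_hz * bits_per_pixel / 1e9

vive_gbps = uncompressed_gbps(1080, 1200, panels=2, refresh_hz=90)
print(f"Vive uncompressed: {vive_gbps:.2f} Gbps")          # ≈ 5.60 Gbps
print(f"Fits in one 7 Gbps WiGig channel: {vive_gbps < 7}")  # True
```

By the same arithmetic, the dual 4K 120Hz signal DisplayLink is targeting would far exceed a single channel uncompressed, which is where the company’s proprietary compression comes in.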
The system, like TPCAST’s WirelessHD based offering, requires the user to strap a receiver to the top of their VR headset, with a transmitter and encoder (powered by a proprietary compression system) relaying the digital video signal from the PC. In the case of DisplayLink XR (still at the prototype stage), that head mounted box is currently formidable in size, with no details yet on how much it weighs. I’d hope and expect to see this form factor improve as the system edges closer to a final release. Speaking of which, although DisplayLink have not yet settled on a date for making the unit available to the public, they are tossing around a possible price of $249, close to the $220 TPCAST wireless VR system, which went up for sale last month.
Display specialist Kopin, in partnership with Chinese company GoerTek, has announced a new reference VR headset design which it claims is the smallest of its kind, integrating the firm’s ‘Lightning’ OLED microdisplay panels sporting a substantial 2K × 2K resolution per eye.
Higher resolution displays sit near the top of the ‘most wanted’ list of advances for today’s retail virtual reality headsets. Recently we reported on Samsung’s prototype OLED panels sporting a PPI (pixels per inch) figure of 858, nearly twice that of the current generation HTC Vive and Oculus Rift headsets. Now, microdisplay specialist Kopin has unveiled a new reference design headset with displays that top even that.
The adorably named ‘Elf VR’ headset is equipped with two of Kopin’s “Lightning” OLED microdisplay panels, each featuring a 2048 × 2048 resolution, providing “binocular 4K image resolution at a 120Hz refresh rate” – a figure which is somewhat misleading, as the 2048 horizontal pixels are ‘per eye’, so the displays cannot resolve the 3840 horizontal pixels required for a true ‘UHD’ image (even ignoring the shortfall in vertical resolution). In case you’re wondering, each diminutive display represents an impressive 2,940 pixels per inch – around five times the figure of the existing Samsung panels in the Vive and Rift.
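For a sense of scale, a quick calculation using the panel resolutions quoted above shows how much more pixel data the Elf VR design asks a GPU to drive compared with the current Vive/Rift panels:

```python
# Compare total pixel counts: Kopin 'Elf VR' reference design
# (2048x2048 per eye) vs current Vive/Rift panels (1080x1200 per eye).

def total_pixels(width, height, eyes=2):
    return width * height * eyes

elf_px = total_pixels(2048, 2048)    # 8,388,608 pixels
vive_px = total_pixels(1080, 1200)   # 2,592,000 pixels

print(f"Elf VR: {elf_px:,} px, Vive/Rift: {vive_px:,} px")
print(f"Roughly {elf_px / vive_px:.1f}x the pixels to render")  # ≈ 3.2x
```

Over three times the pixels per frame, at a 120Hz rather than 90Hz refresh target, is a substantial extra rendering load for host hardware.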
A Kopin Micro Display [Image courtesy Kopin]
Going by images included in our recent report on those prototype Samsung panels, this would substantially reduce screen door effect, the artifacts caused by the visible gaps between display elements. What’s more, Elf VR should represent not only a great visual experience for traditional VR content, but should also provide an impressive bump for 360 and standard movie watching too.
“It is now time for us to move beyond our conventional expectation of what virtual reality can be and strive for more,” explained Kopin founder and CEO John Fan as part of a recent press release. “Great progress has been made this year, although challenges remain. This reference design, created with our partner Goertek, is a significant achievement. It is much lighter and fully 40% smaller than standard solutions, so that it can be worn for long periods without discomfort. At the same time, our OLED microdisplay panel achieves such high resolution and frame rate that it delivers a VR experience that truly approaches reality for markets including gaming, pro applications or film.”
Of course, the other major statistic of interest for VR headsets is the expansiveness of the field of view (FOV), or how much of your peripheral vision is encompassed by the image. Smaller displays bring optical challenges in achieving immersive FOVs, and Kopin claims to be tackling this with a two-pronged approach: the reference design includes two “multi-lens” optical modules. The first, targeting the aforementioned media / movie watching category, offers a 70 degree FOV (it’s not stated whether this is horizontal, vertical or diagonal) and presents a sharper image with higher pixel density. The second offers a much greater 100 degree FOV, presumably at the sacrifice of some optical sharpness.
Of course, with smaller integrated panel hardware and these optical systems, the other benefit of Kopin’s approach could be weight. Kopin claims it has reduced the size of its optical module by 60%, leveraging a 50% weight reduction – although as no baseline figures were provided, we’re not sure what this comparison refers to.
As we’ve seen time and again since the start of the most recent VR renaissance, the medium continues to provide an impressive catalyst for technological innovation in multiple fields. And with both Samsung and Kopin already at a stage where they can produce next generation VR displays, it hopefully won’t be too long before we begin to see tangible upgrades over existing ‘first gen’ hardware. That may mean mid 2018 at the earliest, at least according to Oculus founder Palmer Luckey, speaking in a recent interview.
One of the first in a line of Windows Holographic powered immersive devices, Lenovo’s first mixed reality headset will be here in time for US “back to school”, according to Lenovo’s North America VP of Consumer Products Mike Abary.
Windows Holographic, a set of APIs integrated into Microsoft’s Windows 10 operating system, was first touted by the computing giant during the surprise reveal of its augmented reality headset, HoloLens. HoloLens represented Microsoft’s vision for the future of immersive computing platforms: with impressive computer vision driven inside-out tracking, a transparent augmented reality display and a completely untethered experience, its existence signified just how serious Microsoft was about positioning itself to embrace the next computing age.
But beyond HoloLens, Microsoft is leaving the mainstream Windows Holographic hardware push to partners – a strategy the company has of course adopted for decades. As we detailed earlier in the year, the initial lineup of Windows Holographic hardware was announced in December of last year, with Microsoft OEM stalwarts Asus, Acer, Dell, HP, and Lenovo all signing on to produce VR hardware for the platform. Chinese VR headset maker 3Glasses also joined the group, and will support the Windows mixed reality environment on its S1 VR headset in the first half of 2017.
Images courtesy Windows Central
Now it looks as if Lenovo’s mixed reality headset will be one of the first Windows Holographic devices to launch. Speaking to Twice, the company’s VP of Consumer Products stated that it would launch in time for “back to school”, which, by Twice’s interpretation, means it’ll be available before mid-August. It will reportedly cost less than the Oculus Rift, which received an aggressive price cut just recently; indeed the new Lenovo headset, which like HoloLens leverages onboard cameras (two in this case) to drive its inside-out tracking system, could come in as low as $300, according to an earlier report from The Verge. The headset also sports dual 1440×1440 OLED displays but, unlike HoloLens, is a tethered device requiring a Windows 10 PC to run, and it lacks an AR-style transparent visor.
Lenovo also recently showed its planned SteamVR powered VR headset, which sports a higher resolution than the HTC Vive as well as a neat flip-up visor design and a PSVR style solid headband. That will join the company’s new ‘Legion’ PC hardware, with the VR-capable Y720 laptop also due in 2017.
Microsoft just last week shipped its latest milestone Windows 10 ‘Creators Update’ which, amongst many other things, included Mixed Reality support for the OS. This opens the door for the release of those partner headsets, with Lenovo seemingly positioning itself as one of the first to market.
Today’s most immersive virtual reality systems, like the Oculus Rift and HTC Vive, rely on a bothersome tether to send power and high fidelity imagery to the headset at low latency. But everyone agrees a dangling cable is not only annoying, it’s an immersion detractor. The demand for a fix has spurred the creation of no fewer than seven solutions (and counting) hoping to make a wireless link between the high-end host PC and the headset.
Update (4/28/17, 1:31PM PT): When this article was originally published we hadn’t heard back from TPCAST. The company has since reached out to tell us more about their approach to wireless VR in their own words, and we’ve added that information to the list below.
Original Article (3/24/17): Eliminating the tether on high-end VR headsets is an obvious desire with no obvious solutions. The issue comes down to three major factors: bandwidth, latency, and price; needs unmet by prior wireless video technology, which is why the big three high-end VR headsets that hit in 2016—Oculus Rift, HTC Vive, and PSVR—all rely on a cable which runs from the headset to the host machine.
Unfortunately, the tether also keeps us connected to reality (the one we’re trying to escape with VR). Especially in room-scale VR, where you’re walking around and stepping on or over it, the cable keeps us from completely detaching from the physical space we’re in; somewhere in the back of your head, your brain is tracking the (virtually invisible) cable and deciding when you need to step over it, twist a different direction, or avoid hitting it with your arms. Ridding our headsets of the cable would mean deeper immersion and physical freedom within the virtual world.
Now at least seven solutions are hoping to rise to that challenge, using a variety of technologies to make high-end VR wireless. This article is designed to give a broad overview of some of those proposed solutions and their claims, and to understand how each technology is being positioned. We reached out to several companies working in this space to tell us, in their own words, about their approach to making VR wireless and what they feel is their unique advantage.
The difference between IMR’s wireless technology for HMDs and other companies’ is that IMR has been designing a standard built specifically for VR devices that will allow VR video transmission between any VR device. More specifically, other solutions popping up on the market appear to use H.264 or similar chip solutions. The problem with this, and what differentiates IMR, is that we don’t do frame-to-frame comparisons, which immediately add at least 11ms of latency (at 90fps), and our solution therefore doesn’t introduce the frame-to-frame motion artefacts that you see with this older technology. The IMR VR compression standard was designed for VR from the start, so the result now is something special. We achieve 90-95% compression, and the user is hard pressed, if not unable, to tell the difference between the output of our system and the original.
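The 11ms figure in the statement above is simply the duration of one frame at 90fps: a codec that references neighbouring frames (inter-frame prediction, as in typical H.264 configurations) must, by this argument, hold a frame for at least one full frame interval before encoding can complete. A quick check of the arithmetic:

```python
# One frame at 90 fps lasts 1000/90 milliseconds. A codec that waits
# for the next frame before finishing the current one therefore adds
# at least this much latency.

def frame_time_ms(fps):
    return 1000.0 / fps

print(f"{frame_time_ms(90):.1f} ms per frame at 90 fps")  # 11.1 ms
```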
IMR has developed an algorithm and hardware that enables wireless transmission and streaming of VR video over the leading wireless standards. The algorithm and hardware was developed to resolve more than one challenge i.e. wireless VR, but rather produce a standard which the entire VR industry can use; this extends far beyond what most people are thinking for just VR applications, including UAVs and robotics. The algorithms and overall technology IMR has developed for this VR standard can be used for transmitting VR data between any VR capable device—HMD, PC, Laptop, phone, camera, etc.
The new VR standard provides both the required video data compression and ultra-low latency for virtual reality and 3D remote presence applications now and into the future. The following is a description of its capabilities:
Rapid Data Transmission: The 95% compression rate allows IMR’s technology to compress and decompress with a record breaking introduced latency of less than 1ms. This translates to zero perceived latency for the player, preserving user comfort by eliminating the motion sickness that latency causes in VR play.
Image Quality: The quality of the decompressed image is indiscernible from the original, with no motion blur or introduced artefacts.
Eye Tracking: IMR’s algorithm utilises single pass dynamic compression schemes, including foveation and tuneable parameters, and offers support for eye tracking. Furthermore, the algorithm’s built-in flexibility facilitates further custom compression.
Versatile: IMR’s technology can leverage both the 802.11ac and 802.11ad wireless standards, as well as other wireless links with sufficient bandwidth. This enables current generation HMDs to be supported via the AC standard, and futureproofs the technology by enabling it to handle up to 2x 4K VR video transmission over the AD standard.
Multi-faceted Application: IMR’s compression standard facilitates peer-to-peer data transmission between devices, to and from PCs, HMDs, smartphones, 360 cameras and other VR enabled devices.
Our technology is designed to operate across all VR and telepresence robotics applications and each has their own requirements for the wireless. Our technology provides the necessary compression/decompression at ultra low latencies for ALL these applications, and we are working with and looking to partner with different wireless manufacturers and communication link suppliers to push this technology into each area.
KwikVR’s unique advantage over other wireless competitors is hard to tell, because we have not been able to test our competitors’ solutions. They are all claiming an impossible one or two millisecond latency overhead, so I would say our main advantage is to be honest. Also, our solution does not use 60GHz Wi-fi at the top of the head of the user, which might be better for health reasons. Using 5GHz Wi-fi is also less prone to obstruction issues when it comes to the Wi-fi signal. We believe that our latency overhead is close to optimal, but only the customers will be the judges.
I think you can classify Wireless VR into what type of radio it uses and what type of compression. Of course all systems have to deliver under a frame of round trip latency.
Various Radio Types:
WiFi 802.11ac 5GHz & 2.4GHz
WiFi 802.11ad 60GHz
5G LTE cellular for cloud VR (various frequencies)
Proprietary radio in unlicensed frequency (e.g. 5GHz)
Our solution uses WiFi 802.11ac and LTE. This has the benefit of not needing line of sight transmission. 60GHz transmission suffers from large attenuation when propagating through physical barriers, including humans. 802.11ac can travel a much longer distance than 60GHz and provide multi-room coverage. 802.11ac is also much cheaper and requires much smaller wireless antennas than 60GHz. Placement of the transmitter is not important with 802.11ac, unlike 60GHz. 802.11ac is also lower power, giving longer battery life for the HMD.
Various Compression Types
JPEG (Intra frame) with 3:1 compression
JPEG 2000 (Intra frame) with 6:1 compression
MPEG H.264 (Intra and Inter frame) 100:1 compression
MPEG H.265 (Intra and Inter frame) 200:1 compression
Proprietary Compression
Our solution uses MPEG H.265/HEVC compression, which provides 200:1 compression. For example, a 1080p60 source requires around 3,000 Mbps to transmit uncompressed; we compress this to 15 Mbps, a compression ratio of 200:1. This leaves headroom for error correction and higher resolutions and frame rates, as well as data rates that can be delivered from the cloud over 5G LTE and fibre networks. Standards based systems also allow off the shelf mobile chipsets to be built into mobile HMDs. We will adopt future H.265 profiles, which can provide even better compression using tools like multi-view and screen content coding.
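The vendor’s numbers check out roughly, assuming 24 bits per pixel (8-bit RGB with no chroma subsampling) for the uncompressed source:

```python
# Sanity check the 200:1 claim: 1080p60 uncompressed bandwidth,
# assuming 24 bits per pixel.

def uncompressed_mbps(width, height, fps, bits_per_pixel=24):
    return width * height * fps * bits_per_pixel / 1e6

source = uncompressed_mbps(1920, 1080, 60)
print(f"1080p60 raw: {source:.0f} Mbps")     # ≈ 2986 Mbps (~3,000)
print(f"At 200:1: {source / 200:.1f} Mbps")  # ≈ 14.9 Mbps (~15)
```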
While other vendors are focused on bringing wireless accessories to today’s HMDs, Nitero is the only company developing an integratable solution that will support the aggressive requirements of future VR HMDs.
The solution’s novel micro-second latency compression engine provides royalty-free, visually lossless encoding, adding end-to-end latency of one millisecond. At power below one Watt, it can be integrated into future headsets without the need for expensive heat sinks or vents. In fact, adding Nitero’s wireless solution will be significantly less expensive than cables, resulting in an overall cost reduction, which is critical for VR adoption going forward.
Interoperable with WiGig, Nitero has customized for the unique challenges in the VR/AR use cases with advanced beam-forming that supports NLOS at room-scale. Additionally, back-channel support for computer vision, eye-tracking, 3D-audio and other forthcoming technologies can be supported simultaneously with the VR display, without needing another chipset.
Some of the industry leaders that have supported Nitero via investment and collaboration include Valve Software, Super Ventures, and the Colopl VR Fund, along with others not publicly announced.
We use a combination of video compression and proprietary streaming protocol that allows us to stream high resolutions to multiple headsets. Our solution is designed primarily for Theme Parks and Arcades that want to put two or more people in the same tracked space.
Our thesis is that in the future you will always need some amount of compression, either when resolutions get higher (4K and above; we need 16K for retina resolution), or if you try to put the server outside the local network. Ideally, you could put a GPU farm in the cloud and have all the content available immediately, even eliminating the need for a PC at home! I think that in five years the only computer you would need at home would be a small mobile chip, probably built into the headset itself.
Of course, any sort of compression introduces latency. However, there’s been a lot of development in the past two years to get around that. We’ll be releasing a network aware technology similar to the Spacewarp technique used by Oculus. And companies like Microsoft have done a lot of research on reducing latency through predictive (also known as speculative) rendering. Project Irides, for example, is able to compensate for 120 ms of network latency in their demo. We’ve been talking to one of the lead researchers of Irides for a while, and we’ll release similar technology in 2017. So I would say that the future of wireless VR is very bright!
The advantage of the TPCAST Wireless Adaptor is near-zero latency and no image compression. We believe these two characteristics are the key standards for high-end wireless VR. Any noticeable image compression is unacceptable in VR, due to its high image resolution requirements.
The biggest difference between TPCAST and other companies is that our device is not a prototype or model, but a product, which is the world’s first commercial tetherless VR product.
In the last four months, over 1,000 people have personally experienced the TPCAST Wireless Adaptor for Vive at the Vive X demo day (Beijing), CES, MWC, GDC and the Guangzhou VRAR Summit. Almost all of them felt no difference from the tethered Vive, especially regarding the near-zero latency. These positive evaluations mean a lot to us.
Here are the specs of TPCAST Wireless Adaptor for Vive:
We’ve had a chance to test out a number of these technologies, but not yet in appropriately controlled conditions. Expect more coverage to come as these products get closer to market-ready.
Nvidia’s latest GPU is here, and it offers a big performance bump – but what exactly does that power deliver for the VR gaming enthusiast? We pit the new Nvidia GTX 1080 Ti against the GTX 1080 to see just how far each card can enhance VR image quality through supersampling.
The pace at which the GPU industry moves is frightening. Here we are, less than a year after Nvidia launched its brand new line of 10-series ‘Pascal’ architecture graphics cards with the GTX 1080, back with a new card which promises not only to outgun its predecessor by a significant margin, but on paper to match the performance of Nvidia’s flagship GPU, the ludicrously pricey and powerful Titan X.
The new GTX 1080 Ti is here and offers a step change in performance when compared with the last generation, Maxwell architecture GTX 980 Ti.
This is certainly impressive, and you can see why Nvidia are keen to emphasise the progress that’s been made since the 980 Ti’s launch in 2015. But the real story here is that this new card’s closest performance stable mate is the current generation $1,200+ ultra-enthusiast card, the Titan X. In fact, the GTX 1080 Ti is built around the same GP102 GPU used in Nvidia’s Titan X released last year. With 12 billion transistors, GP102 is “the most powerful GPU Nvidia has ever made for gaming.”
1080 Ti block diagram shows the card’s underlying architecture
The GeForce GTX 1080 Ti ships with 3,584 CUDA cores and 28 Streaming Multiprocessors (SMs), and runs at a base clock frequency of 1,480 MHz with a GPU Boost clock of 1,582 MHz. And as we’ll discover, there’s quite a bit of headroom in both memory and core clocks. The 1080 Ti sports 11GB of GDDR5X VRAM, just 1GB shy of the Titan X – a spec shaving you’re very unlikely to notice, even when gaming at 4K or supersampling at extreme levels. In other words, the 1080 Ti just made the Titan X effectively obsolete.
Bear all of that in mind, and consider that the new GTX 1080 Ti shipped last week for $699, the same price as its GTX 1080 predecessor went on sale for just 10 months ago. It’s also launching at this price a mere 8 months after the 10-series Titan X, owners of which may justifiably feel their wallet wincing at their short lived performance supremacy.
Testing Methodology & ‘FCAT VR’
The world of cutting edge GPUs may move quickly, but one of the reasons why virtual reality remains fascinating is that it’s moving even faster. Last year’s GTX 1080 review opened with an apology of sorts, stating that as VR itself was in its infancy, we had no tools to record metrics at the level of empirical detail which standard PC gaming enthusiasts take for granted. As of this week, we’re allowed to publish benchmarks based on the newly released FCAT VR tool from Nvidia, a new frame analysis tool which records VR runtime data in detail and lets us peek under the hood to see if and when VR rendering safety nets like Asynchronous Spacewarp and Asynchronous Timewarp/Reprojection are kicking in under load.
As the 1080 Ti is considered a high-end GPU for dedicated enthusiasts, we wanted to really get to grips with the benefits such extreme performance could provide VR gamers. Whilst current generation headset displays are limited in terms of overall pixel density (meaning a visible panel structure), one of the biggest immersion breakers is aliasing (‘jaggies’) caused by a low target render resolution. We’ve therefore concentrated our VR benchmarking efforts on testing the limits of the GTX 1080 and 1080 Ti and their ability to supersample the image to extreme levels. Supersampling is a compute intensive way to reduce aliasing (the appearance of obvious pixels or stepping in a digital image): the scene is first rendered at a much higher resolution, and that extra detail is then down-sampled to the display’s lower resolution, producing a much higher quality result. Outside of game-specific rendering options, supersampling is the easiest way to improve image quality and immersion.
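As an illustration of why supersampling is so GPU-hungry, consider the per-eye pixel counts involved (a sketch using the Vive’s 1080×1200 per-eye panels as the baseline; in practice the SteamVR default render target is already somewhat larger than the panel):

```python
# Pixel cost of supersampling: a linear supersampling factor s
# multiplies BOTH render target dimensions, so the fill-rate and
# shading cost grow with s squared.

def render_pixels(width, height, linear_factor):
    return round(width * linear_factor) * round(height * linear_factor)

base = render_pixels(1080, 1200, 1.0)   # 1,296,000 px per eye
ss2  = render_pixels(1080, 1200, 2.0)   # 5,184,000 px per eye

print(f"2.0x linear supersampling: {ss2 / base:.0f}x the pixels")  # 4x
```

That quadratic growth is why only the fastest cards can sustain extreme supersampling while holding a headset’s 90 fps target.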
As man cannot live on VR gaming alone, we’ve also assembled a selection of visually sumptuous and computationally taxing games, each benchmarked with tests designed to highlight the raw grunt each card possesses.
Overclocking
Although we’ve only had limited time with the 1080 Ti thus far, we did manage to ascertain what we think is a stable (and fairly generous) overclock on our supplied Founders Edition unit. Pushing the core clock +170MHz above base with an additional +400MHz bump for memory, we cautiously kept the fan speed fixed at 80%, with temperatures maxing out around the 80-85 degree mark. These numbers are provisional, but they provide a healthy boost to performance with no additional cooling or voltage applied – and they proved stable. We’ve included overclocked results in some of the benchmark breakdowns. Interestingly, for those of you squeamish about damaging such a pricey piece of hardware, you only need to lift the cap on the card’s power and thermal throttling limits to realise some significant gains.
Testing Rig
We partnered with AVA Direct to create the Exemplar 2 Ultimate, our high-end VR hardware reference point against which we perform our tests and reviews. Exemplar 2 is designed to push virtual reality experiences above and beyond what’s possible with systems built to lesser recommended VR specifications.
Test PC Specifications:
SuperNOVA 850 G2 Modular Cables, 80 PLUS® Gold
MAXIMUS VIII GENE LGA 1151 Intel Z170 HDMI SATA 6Gb/s USB 3.1 USB 3.0 mATX Intel Motherboard
Oculus has provided the ‘Touch Accessory Guidelines 1.0’ for download, which contains 3D CAD files of the Touch VR controller. This data can be used to help designers and manufacturers create new accessories that integrate with the Touch hardware.
Available for download on the Oculus developer website, the Touch Accessory Guidelines 1.0 include technical drawings and STEP files of the controller’s exterior surfacing and battery compartment. In addition, it includes data for the Rock Band VR connector, an adapter included with every Oculus Touch package, enabling the design of devices which could use the adapter to attach a Touch controller.
You can take a look at the CAD files here for the Rock Band adapter, the exterior surface, and the battery compartment. The battery compartment model is the most complex, as it includes many of the internal components and surfaces, which can be highlighted using the Model Browser tool.
The Rock Band adapter holds the Touch controller neatly to a Rock Band guitar, but it could be used to attach the controller to other accessories too.
The new guidelines add to the existing Rift Accessories Guidelines documentation, which includes sections for the headset, Audio Module and Facial Interface. While the Touch section doesn’t offer much in the way of controller-specific tips for accessory makers (perhaps ‘don’t obstruct the tracking ring’ was too obvious!), only detailing the electrical specifications, the general guidelines written for the Rift headset can still be applied to Touch accessories: avoid using LEDs in mounted accessories (to prevent tracking conflicts), remember that comfort is paramount, and keep in mind that the fit of accessories not only impacts physical comfort but can also affect how users experience content in VR.
Interestingly, Touch is not much larger than the Vive Tracker, as Tactical Haptics has shown. Perhaps the biggest issue with using Touch to track a dedicated VR peripheral is the lack of input/output options between the peripheral and the controller. Peripherals made for use with Touch would need to rely on the controller’s own buttons for input, or on a separate wireless connection to the host PC.
Peripheral manufacturer Bionik is building a Rift-like aftermarket integrated headphone solution for the PlayStation VR called ‘Mantis’, and it looks pretty neat too.
Regardless of which VR headset you personally prefer, there’s little argument that the Oculus Rift’s solution to the problem of tangled headphone cables is pretty elegant. By contrast, having to deal with separate ‘phones, and the inevitable cable tangles they involve, is a hassle with the HTC Vive and PlayStation VR.
For PSVR at least, owners of Sony’s VR headset can rejoice, as peripheral manufacturer Bionik has announced it’s bringing a solution to the device that looks very much inspired by the Oculus Rift’s. The Mantis comprises clip-on, on-ear headphones that slot onto the PSVR’s headband, then hook into the standard 3.5mm jack on the control unit as any others would. As with the Rift, once the headphones are in place, they can be flipped up and adjusted for comfort, or temporarily flipped out of the way.
The solution looks good, although Bionik does seem to be reaching a little to fill out its feature list, quoting “Creates an immersive experience that puts you directly in the game” – so, like any other headphones really. Marketing spiel aside, we suspect Bionik may find a market for the add-on, especially as PSVR’s recently announced install base is fast approaching 1 million units. The Mantis is marked as ‘coming soon’ and is currently priced at $49.99.
Alongside the $100 price cut of both Rift headset and Touch controllers, Oculus also permanently reduced the price of additional Sensors to $59. This is the first major price reduction for high-end PC VR hardware since launch.
Oculus have applied the first permanent price reduction for their PC VR hardware since the Rift launched in March 2016 – cutting $100 off both the headset and Touch motion controllers. This means a new Rift and Touch bundle is now $598, down from $798. The halving of the Touch price alone should tempt many Rift owners still on the fence about motion controllers to jump in, particularly with the added incentive of the recently-launched Robo Recall.
Lost in the excitement of this aggressive pricing strategy was the news that the Oculus Sensor has also dropped to $59, down from $79. This is good news for those who want to make use of a third sensor to create a roomscale VR play space. It also means that a viable roomscale setup is now available for $657, significantly undercutting the $799 HTC Vive, which has more hassle-free room-scale capabilities straight out of the box.
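The roomscale figure above is simply the discounted bundle price plus one extra sensor:

```python
# Cost of a three-sensor Oculus roomscale setup after the price cuts.
bundle = 598       # Rift + Touch bundle after the $100 cut
extra_sensor = 59  # Oculus Sensor after the cut from $79

total = bundle + extra_sensor
print(f"Roomscale total: ${total}")  # $657, vs the $799 HTC Vive
```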
That said, despite Oculus making good progress, some users still report issues when running 3 or more sensors with the company’s “experimental” room-scale mode. Commenting on an Oculus subreddit in February, Nate Mitchell said that recent updates had seen “improve[d] tracking quality in aggregate” and that many problems came down to “too many sensors (4 or more sensors can suffer from USB challenges) and overall sensor positioning (sensors too far apart from each other and/or not enough overlap in field of view).” Oculus recently released version 1.12 of their Rift software, with still more improvements in this and other areas – so far feedback seems positive.
At GDC 2017, developers Impulse Gear confirmed that their VR shooter Farpoint has a co-op mode, and will launch in a bundle with the PS VR Aim Controller on May 16th. An ‘unnerving space adventure set on a hostile alien world’, Farpoint is a free-movement FPS exclusive to PlayStation VR.
Built from the ground up for PlayStation VR by independent studio Impulse Gear, Farpoint is a free-movement FPS designed to be played with the PS VR Aim Controller. While the game can be played on a standard PS4 gamepad, the new peripheral was developed by Sony with input from Impulse Gear, and will be launching as a bundle on May 16th.
During a developer session at this week’s GDC about the Aim controller, which is expected to receive support for several future PS VR titles, Impulse Gear confirmed Farpoint will have a co-op mode.
The PSVR Aim Controller has a friendlier appearance than the gun-like Sharp Shooter PS3 accessory, and benefits from having PlayStation Move-style features such as a tracking sphere, trigger and thumbstick integrated into the unit, meaning no separate Move controllers are required. The integrated motion sensors mean that the new controller is more accurate than the Sharp Shooter, able to deliver 1:1 tracking of the in-game weapon.
Farpoint is most notable for its free movement (often referred to as ‘full locomotion’) – something that is typically avoided in VR FPS in favour of a teleport mechanic due to its tendency to cause nausea. The 1:1 weapon tracking, combined with careful attention to movement speed and animation, assisted by IKinema’s real-time inverse kinematics, means that it is able to deliver a comfortable experience, as described in our hands-on.
The latest version of NVIDIA’s FCAT VR analysis tool is here and it’s equipped with a wealth of impressive features designed to demystify virtual reality performance on the PC.
NVIDIA has announced a VR-specific version of its FCAT (Frame Capture Analysis Tool) at GDC this week, which aims to give enthusiasts and developers accessible virtual reality rendering metrics to help demystify VR performance.
Back in the old days of PC gaming, the hardware enthusiast’s world was a simple place ruled by the highest numbers. Benchmarks like 3DMark spat out scores for purchasers of the latest and greatest GPU to wear like a badge of honour. The highest frame rate was the primary measure of gaming performance back then, and most benchmark scores were derived from how quickly a graphics card could chuck out pixels from the framebuffer. However, as anyone who has been into PC gaming for any length of time will tell you, this rarely gives you a complete picture of how a game will actually feel when being played. It was and is perfectly possible to have a beast of a gaming rig that performs admirably in benchmarks, but delivers a substandard experience when actually playing games.
Over time however, phrases like ‘frame pacing’ and ‘micro stutter’ began creeping into the performance community’s conversations. Enthusiasts started to admit that the consistency of a rendered experience delivered by a set of hardware trumped everything else. The shift in thinking was accompanied (if not driven) by the appearance of new tools and benchmarks which dug a little deeper into the PC performance picture to shed light on how well hardware could deliver that good, consistent experience.
One of those tools was FCAT – short for Frame Capture Analysis Tool. Appearing on the scene in 2013, FCAT aimed to grab snapshots of what the user actually saw on their monitor, measuring frame latency and stuttering caused by dropped frames – outputting that final imagery to captured video with an accompanying stream of rendering metadata right alongside it.
Now, NVIDIA is unveiling what it says is the product of several more years of development aimed at capturing the underbelly of PC rendering performance. FCAT VR has been officially announced, bringing with it a suite of tools that increase its relevancy to a PC gaming landscape now faced with the latest rendering challenge: VR.
What is FCAT VR?
The FCAT VR Capture Tool GUI
At its heart, FCAT VR is a frametime analysis tool which hooks into the rendering pipeline grabbing performance metrics at a low level. FCAT gathers information on total frametime (time taken by an app to render a frame), dropped frames (where a frame is rendered too slowly) and performance data on how the VR headset’s native reprojection techniques are operating (see below for a short intro on reprojection).
The original FCAT package was a collection of binaries and scripts which provided the tools to capture data from a gaming session and convert that data into meaningful analysis. With FCAT VR, however, NVIDIA has aimed for accessibility, and the new package is fully wrapped in a GUI. FCAT VR comprises three components: the VR Capture tool, which hooks into the render pipeline and grabs performance metrics; the VR Analyser tool, which takes data from the Capture tool and parses it into human-readable graphs and metrics; and the VR Overlay, which gives a user inside VR a visual reference on application performance from within the headset.
When the FCAT VR Capture tool is fired up prior to launching a VR game or application, its hooks stand ready to grab performance information. Once FCAT VR is open, benchmarking is activated using a configured hotkey, at which point the tool sets to work dumping a stream of raw metrics to disk. Once the session is finished, you can then use supplied scripts (or write your own) to extract human-readable data and output charts, graphs or anything your stat-loving heart desires. As it’s scripted, it’s highly customisable for both capture and extraction.
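As a flavour of what such an extraction script might look like, here is a minimal sketch in Python. The column name `frametime_ms` and the CSV layout are assumptions for illustration – FCAT VR’s actual raw output format will differ – but the idea is the same: reduce a per-frame capture log to the summary numbers an enthusiast cares about, measured against the 90Hz frame budget.

```python
import csv
import statistics

def summarize_frametimes(path, refresh_hz=90.0):
    """Reduce a per-frame capture log (hypothetical CSV with a
    'frametime_ms' column) to simple summary statistics."""
    frametimes = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            frametimes.append(float(row["frametime_ms"]))

    budget_ms = 1000.0 / refresh_hz  # ~11.1 ms per frame at 90Hz
    return {
        "frames": len(frametimes),
        "mean_ms": statistics.mean(frametimes),
        # frames that blew the budget are candidates for drops/reprojection
        "over_budget": sum(1 for t in frametimes if t > budget_ms),
    }
```

A real analysis script would go further – plotting frametime over the session, for instance – but even this level of reduction makes a capture far easier to interpret than raw data.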
So What Does FCAT VR Bring to VR Benchmarking?
In short, a whole bunch – at least in theory. As you probably know, rendering for virtual reality is a challenging prospect, and the main vendors of today’s consumer headsets have had to adopt various special rendering techniques to allow the common or garden gaming PC to deliver the sort of low latency, high framerate (90 FPS) performance required. These systems are designed as backstops for when performance dips below the desired minimum, a deviation from the ‘perfect world’ scenario for rendering a VR application. The diagram below illustrates a simplified VR rendering pipeline (broadly analogous to all PC VR systems).
A Simplified VR Rendering Pipeline (Perfect World)
However, given the complexity of the average gaming PC, even the most powerful rigs are prone to performance dips. This may leave the VR application unable to meet the perfect world scenario above, in which 90 frames are delivered to the VR headset without fail every second. Performance dips result in dropped frames, which can in turn cause uncomfortable stuttering when in VR.
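The arithmetic behind a dropped frame is simple: at 90Hz the display refreshes every ~11.1ms, so a presentation interval much longer than that means one or more refreshes went by without a new frame. The sketch below shows one way a tool could estimate drops from a list of frame-presentation timestamps; this is an illustrative heuristic, not FCAT VR’s actual method.

```python
def count_dropped_frames(timestamps_ms, refresh_hz=90.0):
    """Estimate dropped frames from frame-presentation timestamps (ms).

    At 90Hz each frame should land ~11.1ms after the last; an interval
    close to two periods implies one missed refresh, three periods
    implies two, and so on."""
    period = 1000.0 / refresh_hz
    dropped = 0
    for prev, cur in zip(timestamps_ms, timestamps_ms[1:]):
        # round the interval to a whole number of refresh periods;
        # every period beyond the first is a missed refresh
        missed = round((cur - prev) / period) - 1
        if missed > 0:
            dropped += missed
    return dropped
```

On a healthy system every interval rounds to one period and the count stays at zero; a sudden 22ms gap registers as a single dropped frame.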
VR Application Dropped Frames
Chief among these techniques are the likes of Asynchronous Timewarp (and now Spacewarp) and Reprojection. These ensure that what the user sees in their VR headset, be that an Oculus Rift or an HTC Vive, matches the user’s movements as closely as possible. Data sampled at the last possible moment is used to morph frames to match the latest movement data from the headset, filling in the gaps left by inconsistent or under-performing systems or applications by ‘warping’ existing frames to produce synthetic ones. Even then, these techniques can only do so much. Below is an illustration of a ‘Warp Miss’, when neither the application nor the runtime could provide an up-to-date frame to the VR headset.
It’s a safety net, but one which has been incredibly important in reducing the nausea caused by the visual disconnect – the stutter and jerkiness – experienced when frames are dropped. Oculus in particular are now so confident in their arsenal of reprojection techniques that they lowered their minimum PC specifications upon the launch of their proprietary Asynchronous Spacewarp technique. None of these techniques is (or indeed is designed to be) a silver bullet for poor hardware performance; when all’s said and done, there’s no substitute for a solid frame rate which matches the VR headset’s display.
Either way, these techniques are implemented at a low level and are largely transparent to the application sitting at the head of the rendering chain. Therefore, metrics gathered from the driver which measure when performance is dipping and when these optimisations kick in are vital to understanding how well a system is running. This is where FCAT VR comes in. NVIDIA summarises the new tool’s capabilities as below (although there is a lot more under the hood than we can go into here):
Frame Time — Since FCAT VR provides detailed timing, it’s possible to measure the time it takes to render each frame. The lower the frame time, the more likely it is that the app will maintain a frame rate of 90 frames per second needed for a quality VR experience. Measurement of frame time also allows an understanding of the PC’s performance headroom above the 90 fps VSync cap employed by VR headsets.
Dropped Frames — Whenever the frame rendered by the VR game arrives too late for the headset to display, a frame drop occurs. It causes the game to stutter and increases the perceived latency which can result in discomfort.
Warp Misses — A warp miss occurs whenever the runtime fails to produce a new frame (or a re-projected frame) in the current refresh interval. The user experiences this miss as a significant stutter.
Synthesized Frames — Asynchronous Spacewarp (ASW) is a process that applies animation detection from previously rendered frames to synthesize a new, predicted frame. If FCAT VR detects a lot of ASW frames, we know a system is struggling to keep up with the demands of the game. A synthesized frame is better than a dropped frame, but isn’t as good as a rendered frame.
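The four categories above boil down to a simple decision per display refresh: did the app deliver a fresh frame, did the runtime step in with a synthesized one, or did neither arrive? The sketch below makes that logic concrete. The boolean flags are hypothetical stand-ins for whatever the capture tool actually records per refresh interval, not FCAT VR’s real data model.

```python
def classify_refresh(new_app_frame, synthesized_frame):
    """Map one display refresh interval to its FCAT VR-style category,
    based on hypothetical flags a capture tool might record."""
    if new_app_frame:
        return "rendered"      # the app delivered a fresh frame in time
    if synthesized_frame:
        return "synthesized"   # the runtime warped a predicted frame (e.g. ASW)
    return "warp miss"         # neither app nor runtime produced a frame

def health_report(refreshes):
    """Tally categories over a session; a high synthesized count means
    the system is leaning on reprojection to keep up."""
    counts = {"rendered": 0, "synthesized": 0, "warp miss": 0}
    for app, synth in refreshes:
        counts[classify_refresh(app, synth)] += 1
    return counts
```

Ranking the outcomes – rendered beats synthesized beats warp miss – mirrors NVIDIA’s own framing: a synthesized frame is better than a dropped one, but no replacement for the real thing.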
What Does This All Mean?
In short, and for the first time, enthusiasts will be able not only to gauge the high-level performance of their VR system, but crucially to dive down into metrics specific to each technology. We can now analyse how active and how effective each platform’s reprojection techniques are across different applications and hardware configurations – for example, how does Oculus’ proprietary Asynchronous Timewarp compare with OpenVR’s asynchronous reprojection? The tool can also give enthusiasts vital information to pinpoint where issues may lie, or give a developer key pointers on where their application could use some performance nips and tucks.
All that said, we’re still playing with the latest FCAT VR package to fully gauge the scope of information it provides, how successfully it’s presented, and indeed how useful it is. Nevertheless, there’s no doubt that FCAT‘s latest incarnation delivers the most comprehensive suite of tools to measure VR performance we’ve yet seen, and goes a long way towards finally demystifying what’s going on deeper in the rendering pipeline. We look forward to digging a little deeper with FCAT VR, and we’ll report back around the tool’s planned release in mid-March.