VRFocus is once again providing liveblog coverage of sessions (where we can) at this year’s GPU Technology Conference (GTC), hosted by NVIDIA in San Jose, California. At GTC we’re expecting a number of sessions touching on virtual reality (VR) and augmented reality (AR), as well as how mixed reality (MR) and related technologies might fit into the creative mix both now and in the future.
Next up, Thomas Burnett, CTO of FoVI3D, takes to the stage. “A light-field display projects a 3D aerial scene that is visible to the unaided eye without glasses or head tracking and allows for the perspective correct visualization of the scene within the display’s projection volume. The light-field display computes a synthetic radiance image from a 3D scene/model and projects the radiance image through a lens system to construct the 3D aerial scene. Binocular disparity, occlusion, specular highlights, gradient shading, and other expected depth cues are correct from the viewer’s perspective as in the natural real-world light-field. There are a few processes for generating the synthetic radiance image; the difference between the two most common rasterization approaches is the order in which they decompose the 4D light-field (two dimensions of position, two dimensions of direction) into 2D rendering passes. This talk will describe Double Frustum and Oblique Slice and Dice synthetic radiance image rendering algorithms and their effective use for wide-area light-field displays.”
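For readers unfamiliar with the data layout Burnett describes, here is a rough illustrative sketch. It is not FoVI3D’s implementation: the hogel counts, per-hogel field of view and the analytic “scene” are invented stand-ins. It only shows the general idea from the abstract, namely that a 4D light-field (two positional dimensions, two directional dimensions) can be broken into a series of 2D per-position rendering passes that tile together into one synthetic radiance image.

```python
# Illustrative sketch only -- not the session's algorithm. Hogel counts,
# field of view and the toy "scene" are assumed values for demonstration.
import numpy as np

HOGELS_X, HOGELS_Y = 8, 8   # spatial sampling of the display surface (assumed)
DIRS_U, DIRS_V = 16, 16     # directional sampling behind each lenslet (assumed)
FOV = np.deg2rad(90.0)      # per-hogel projection frustum (assumed)

def render_hogel_view(x, y):
    """One 2D pass: sample all directions from hogel position (x, y).

    The 'scene' here is a trivial analytic function of the ray direction,
    standing in for a real rasterized render of scene geometry.
    """
    u = np.linspace(-FOV / 2, FOV / 2, DIRS_U)
    v = np.linspace(-FOV / 2, FOV / 2, DIRS_V)
    uu, vv = np.meshgrid(u, v, indexing="ij")
    # Toy radiance: brightness varies with ray direction and hogel position.
    return 0.5 + 0.5 * np.cos(uu * (x + 1)) * np.sin(vv * (y + 1))

# The synthetic radiance image is the tiling of every hogel's 2D view.
radiance = np.zeros((HOGELS_X * DIRS_U, HOGELS_Y * DIRS_V))
for x in range(HOGELS_X):
    for y in range(HOGELS_Y):
        radiance[x * DIRS_U:(x + 1) * DIRS_U,
                 y * DIRS_V:(y + 1) * DIRS_V] = render_hogel_view(x, y)

print("radiance image shape:", radiance.shape)  # (128, 128)
```

In a full renderer of the kind the talk covers, each of those 2D passes would rasterize the actual scene geometry from the hogel’s viewpoint rather than evaluate an analytic function, and the two approaches named in the abstract differ in which pair of the four light-field dimensions each pass sweeps first.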
Your liveblogger for the event is Kevin Joyce.