
[Editor’s Note: Chris Chinnock, President of Insight Media and organizer of Display Summit 2017, recently highlighted what to expect from the event and its presenters. Read below for Chris’ latest piece.]

Advanced display technologies are having a significant impact on simulation training. They are shaping augmented reality, virtual training, and wearables, as well as other display form factors, and they matter because they bring training realism to life.

Light field displays, volumetric displays, and eventually holographic displays are all in development, with some systems in limited use today and commercial products anticipated soon. Such displays require an enormous number of image points (some call them voxels, or volume pixels) and, consequently, a great deal of data. As a result, methods to format, encode, and deliver these dense data streams are also in development. Two upcoming events will focus on commercial activities in these areas.
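To get a feel for the scale involved, here is a back-of-envelope sketch in Python. Every number in it is an illustrative assumption, not a specification of any system discussed in this article.

```python
# Back-of-envelope estimate of light field display data rates.
# All numbers are illustrative assumptions, not vendor figures.

views = 100                   # assumed number of viewing directions
width, height = 1920, 1080    # assumed per-view resolution
bits_per_pixel = 24           # 8-bit RGB
frame_rate = 60               # Hz

image_points_per_frame = views * width * height
raw_bits_per_second = image_points_per_frame * bits_per_pixel * frame_rate

print(f"Image points per frame: {image_points_per_frame:,}")
print(f"Raw data rate: {raw_bits_per_second / 1e9:.0f} Gbit/s")
# ~207 million image points per frame and ~299 Gbit/s uncompressed --
# which is why new formatting and encoding methods are in development.
```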

At the Display Summit, October 4-5, 2017, presenters will discuss their approaches to advanced displays and address challenging issues such as limited field of view, resolution, and image size. Industry leaders must also contend with the large computational power needed to generate the diffraction patterns that drive the spatial light modulators in holographic displays. Here are some of the companies that will be tackling these challenges at the show:

  • Rockwell Collins: The event will be hosted at the Rockwell Collins facility in Sterling, VA. Lee Hendrick of Rockwell Collins will speak on Integrated Digital Vision Systems (IDVS), and Carlo Tiana will give a featured presentation, “Augmented Reality: Content is King.”
  • SeeReal will discuss a holographic display that uses eye tracking so the system only has to render the part of the hologram that the user’s fovea sees.
  • FoVI3D will talk about their integral imaging approach to building a light field display. Driven by a 2D array of OLED microdisplays and microlenses, the system offers vertical and horizontal parallax, with the image appearing in a tabletop format.
  • Light Field Labs plans to work the whole food chain of light field data: processing, formatting, encoding, and display.
  • Holografika and Third Dimension Technologies will present their approaches to minimizing complexity by focusing on horizontal-only parallax solutions using a technique called holographic stereography. Here, an array of projectors illuminates a special holographically defined screen that compresses each projector image into a narrow (~1-degree) horizontal FOV and a wide vertical FOV. Each projector has a different perspective of the scene, so that, when combined, they provide a glasses-free 3D image with a large sweet spot. Holografika has constructed large commercial versions of this architecture, while Third Dimension Technologies has delivered a flight simulator prototype.
  • LightSpace will talk about their volumetric display system, which consists of two main components. The first is a high-speed, single-chip DLP engine used to rear-project video onto the second component: a stack of about 20 air-spaced screens. The arrangement of the screens can be visualized as similar to the slices in a loaf of bread.

Each screen is individually electrically addressable and can be driven to switch quickly between a transparent state and a scattering state. More specifically, each screen in the so-called Multi-planar Optical Element (MOE) is a liquid crystal scattering shutter. In previous versions of the LightSpace display, the MOEs were composed of polymer-dispersed liquid crystal (PDLC) material; the latest version is reported to use a new liquid crystal formulation.
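That slice count hints at why the DLP engine must be so fast: every slice has to be redrawn once per volume refresh. The sketch below works through the arithmetic; the ~20-slice figure comes from the description above, while the refresh rate is an assumed, illustrative value.

```python
# Time budget for a time-multiplexed volumetric display.
# The ~20-slice count comes from the article; the volume refresh
# rate is an illustrative assumption, not a LightSpace specification.

num_slices = 20          # air-spaced screens in the MOE stack
volume_refresh_hz = 50   # assumed flicker-free volume refresh rate

# Each refresh, the DLP must project every slice in sequence while
# only the matching screen is switched to its scattering state.
slice_images_per_second = num_slices * volume_refresh_hz
slice_window_ms = 1000 / slice_images_per_second

print(f"Slice images per second: {slice_images_per_second}")
print(f"Time window per slice:   {slice_window_ms:.1f} ms")
# 1,000 slice images/s with a ~1 ms window each -- far beyond ordinary
# 60 Hz projection, hence the high-speed single-chip DLP engine.
```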

At the Streaming Media for Field of Light Displays (SMFoLD) workshop, to be held on October 3, 2017, just before the Display Summit, presenters will focus on the distribution part of the light field ecosystem.

Here, there is much debate about how to deliver, in real time, the vast amounts of data a light field image can require. For synthetic content, one can deliver the data as meshes and textures, perhaps with additional metadata. But for video content, there could easily be 50 to 100 views to deliver to the display system. Do you focus on extensions to conventional video codecs that reduce the massive inter-view redundancy, or do you use those views to build a 3D model and encode it as meshes, textures, and metadata? Many are still trying to understand the trade-offs in processing power, cost, bandwidth, and efficiency.
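As a toy illustration of that trade-off, the sketch below compares the raw bandwidth of a multiview stream against a crude mesh-plus-texture payload. Every figure, including the codec ratio, mesh size, and texture budget, is an assumption chosen for discussion, not a measurement or a proposal from any presenter.

```python
# Toy comparison of the two delivery strategies debated at SMFoLD.
# All numbers are illustrative assumptions, not benchmark results.

def multiview_gbps(views=75, w=1920, h=1080, bpp=24, fps=60,
                   codec_ratio=100):
    """Raw multiview rate, plus a hypothetical inter-view codec
    exploiting redundancy at an assumed 100:1 ratio."""
    raw = views * w * h * bpp * fps
    return raw / 1e9, raw / codec_ratio / 1e9

def mesh_gbps(vertices=500_000, bytes_per_vertex=32,
              texture_bytes=8_000_000, fps=60):
    """Hypothetical dynamic mesh + texture payload, uncompressed."""
    frame_bytes = vertices * bytes_per_vertex + texture_bytes
    return frame_bytes * 8 * fps / 1e9

raw, coded = multiview_gbps()
print(f"Multiview raw:   {raw:.0f} Gbit/s")
print(f"Multiview coded: {coded:.1f} Gbit/s (assumed 100:1 codec)")
print(f"Mesh + texture:  {mesh_gbps():.1f} Gbit/s (before compression)")
# Which approach wins depends on scene complexity, encoder cost, and
# display-side processing power -- the trade-offs the panel will debate.
```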

At the SMFoLD workshop, several approaches to this formatting and encoding problem will be introduced, along with discussions of current activity in the relevant standards bodies. We anticipate a vigorous panel session debating the various approaches.

So how real are light field display applications? There is certainly much interest in the AR/VR community in providing a 3D image free of side effects such as the vergence-accommodation conflict, so understanding the delivery and display ecosystem for such next-generation products is critical. But there are also many 3D data sets in military, medical, intelligence, and commercial applications where existing stereo 3D systems could be upgraded to more robust light field or other advanced 3D displays. Some of these will be profiled at the workshop as well.

To learn more about the Streaming Media for Field of Light Displays workshop, or to register for the Display Summit, visit the respective event websites.
