Why 8K is Needed for Volumetric Capture
LightMatrix 3D is a Taiwanese company that uses volumetric capture to create “bullet time”-style video sequences, allowing an operator to zoom around a sports figure, performer, or action sequence and view the scene from multiple angles. A video on their website provides a good example. At IBC, the company showed how volumetric capture at a baseball stadium can be processed to create an 8K bullet-time video sequence, as described in the video below.
The use of volumetric video in major sports stadiums is not new. Intel was one of the pioneers, jump-starting its efforts by purchasing Israel-based Replay Technologies. Intel’s True View is an end-to-end platform that uses dozens of cameras around a stadium or arena to capture volumetric video. These video feeds are processed by Intel Xeon processor-based servers, which store, synthesize, analyze, and render terabytes of volumetric data to create 360-degree replays or freeze frames. The rendered videos can be formatted to play on mobile devices, TVs, PCs, or VR headsets.
For several years, Intel was busy installing these systems, reaching 19 NFL stadiums and several soccer stadiums in Europe. But in mid-2021, Intel decided to shut down its sports division and “remove volumetric video from Intel’s roadmap to focus on advancing innovative technologies that better support our core businesses and IDM 2.0 strategy.”
This has created a market opportunity for LightMatrix 3D. According to CEO Joe Chen, their baseball stadium volumetric capture system has been operating for two years, and they are in discussions to upgrade all of Taiwan’s baseball stadiums with the technology. He says a prominent university in the U.S. is also evaluating the company’s volumetric capture solution for football.
Chen says they developed their solution independently of Intel, including their own synthesis and processing algorithms as well as encoding technology. One big difference from the True View solution is that LightMatrix can create volumetric videos in real time rather than being limited to later replays. That is a significant achievement, as it demands both massive processing power and highly efficient processing.
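To get a sense of why real-time operation is so demanding, a rough back-of-envelope calculation of the raw camera data involved is sketched below. The camera count matches the stadium installation described later in this article, but the frame rate and bit depth are assumed values for illustration, not figures provided by LightMatrix.

```python
# Back-of-envelope aggregate throughput for a multi-camera volumetric rig.
# Assumed parameters (not from the article): 3840x2160 cameras, 60 fps,
# 10-bit 4:2:2 capture (~20 bits/pixel uncompressed).
cameras = 48
width, height = 3840, 2160
fps = 60
bits_per_pixel = 20

raw_gbps = cameras * width * height * fps * bits_per_pixel / 1e9
print(f"Uncompressed aggregate camera feed: ~{raw_gbps:.0f} Gbit/s")
# ~478 Gbit/s of raw pixels, every frame of which must be synchronized,
# synthesized, and rendered within one frame interval for real-time output.
```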
The LightMatrix solution also differs in that it does not create a game-engine-like 3D model of the captured scene, which would typically be represented as a mesh-and-texture model and then rendered to video. Chen says they skip the mesh-and-texture step entirely and instead synthesize the output frames directly in the video domain. We asked for more details on their algorithms and processing needs, but the company would not disclose its proprietary process. In addition, most volumetrically captured content is rendered at lower resolutions (typically up to 1080p) for playback on TVs and mobile devices; LightMatrix renders its output at 8K. Their simplified workflow is shown below.

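Since LightMatrix’s actual algorithms are proprietary, the sketch below is only a generic illustration of what image-domain view synthesis can mean in its simplest form: a virtual viewpoint is produced by blending frames from neighboring cameras rather than by rendering a mesh-and-texture model. The function and parameter names are invented for this example, and a real system would also have to compensate for parallax, lens calibration, and timing.

```python
import numpy as np

def synthesize_view(frame_a: np.ndarray, frame_b: np.ndarray, t: float) -> np.ndarray:
    """Naive image-domain view interpolation: cross-fade between two
    neighboring camera frames for a virtual viewpoint at fraction t
    (0 = camera A, 1 = camera B). A production system would warp each
    frame to account for parallax before blending."""
    blended = (1.0 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
    return blended.clip(0, 255).astype(np.uint8)

# Example: two synthetic 4K frames, virtual view halfway between the cameras.
cam_a = np.zeros((2160, 3840, 3), dtype=np.uint8)
cam_b = np.full((2160, 3840, 3), 255, dtype=np.uint8)
virtual = synthesize_view(cam_a, cam_b, t=0.5)
```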
For the stadium installation profiled at IBC, Chen says they set up 12 cameras at each base for a total of 48. They were not allowed to capture the pitcher. Currently, the cameras in the stadium are 4K resolution, but the volumetric video is rendered at 8K and encoded with the Advantech encoder for playback. Their customers had planned to upgrade the cameras to 8K, along with the OTT platform for encoding and transcoding, which they consider necessary to increase the fidelity of the volumetric video frames. However, Chen revealed these plans were put on hold when it became clear that the new iPhone 14 would not support 8K playback.
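For reference, the rig described above can be summarized as a simple configuration structure. This is only an illustrative sketch: the capture positions are assumed to be home plate plus the three bases (which matches the stated total of 48), and the field names are invented rather than taken from LightMatrix.

```python
from dataclasses import dataclass

@dataclass
class CaptureRig:
    cameras_per_position: int
    positions: tuple          # capture positions around the field (assumed)
    capture_resolution: tuple # per-camera sensor resolution
    render_resolution: tuple  # resolution of the synthesized output video

rig = CaptureRig(
    cameras_per_position=12,
    positions=("home plate", "first base", "second base", "third base"),
    capture_resolution=(3840, 2160),   # current 4K cameras
    render_resolution=(7680, 4320),    # volumetric video rendered at 8K
)

total_cameras = rig.cameras_per_position * len(rig.positions)  # 48
```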