December 7, 2023

Ready for 8K – The Future of Broadcast & Pro AV?

AMD organized a webinar on the topic of 8K and Rob Green from the company introduced the subject, his firm and the panel. Green is in the broadcast & ProAV division of AMD (and we interviewed him earlier this year on AMD’s FPGA developments). 

Having acquired Xilinx, AMD has been in the FPGA market for a while. FPGAs are very adaptable to a wide range of applications, and the company already has a lot of 4K products developed on the platform. Many of the elements used for 4K systems (e.g. codecs) can be re-used for 8K workloads. The adaptability allows a lot of innovation and customisation at the hardware level, Green said. There has been more adoption of larger platforms and SoCs for professional applications.

More Power Needed

More powerful architectures will be needed for 8K and key elements will be:

  • High-bandwidth memory (HBM)
  • High-speed networking on chip to move data around the chip quickly
  • Multi-rate Ethernet MACs (already reaching more than 400Gbps)

AMD’s GPUs support DisplayPort 2.1 for gamers and its Epyc and ThreadRipper CPUs are used in virtual production and other immersive applications (although the webinar was focused on FPGAs and SOCs rather than on CPUs and GPUs).

AMD supports the move to 8K and Green said that it appears to be the next wave in immersive content. TV makers are expected to move increasingly to 8K and it will be used in events, signage and virtual production. Although there is some scepticism, 8K is emerging, and the arguments against it are the same as those heard during the transitions from SD to HD and from HD to 4K.

RED Highlights 8K Advantages

The first guest speaker was Uday Mathur, CTO of Red, which he described as a ‘digital cinema camera maker’. He said that the firm has already developed its fourth-generation 8K solution – an 8K sensor with support for 120fps. Red has been making 8K products since 2015 and it has consistently used FPGAs in those products.

Mathur reinforced the key advantages of capture in 8K. These are:

  • Support for supersampling, with scaled 8K providing better 4K and even 2K content because of the reduction in aliasing, for smoother edges and ‘a finer grain structure’.
  • The ability to re-frame in post production for virtual lens movements that were not captured during the scene.
  • An 8K original often allows very effective image stabilization while still delivering high quality – it can be even better than a high-end tripod. Mathur said that even hand-held content can be stabilized to a high degree.
  • Much Hollywood content these days has visual effects (VFX) applied, and a higher resolution canvas allows these to be produced on a finer grid.
  • 8K provides future proofing, allowing content to be re-purposed for longer.

Increasing resolution brings some challenges, and Mathur, a former digital designer, is well aware of the technical obstacles in compression and storage. He highlighted the high I/O speed of the Xilinx technology to meet the needs of capture systems, whether the signal is then distributed via new networking architectures or via traditional multiple 12G SDI links. Higher memory bandwidth is also needed.

Higher Resolution and Big Displays Mean HFR

With higher resolution and larger display surfaces comes a need for higher frame rates, and that adds to the workload. Even if you are capturing at 24fps, Mathur said, you really need to read the sensor in 8ms or so (equivalent to 120Hz) if you want to ensure that you do not have visible rolling shutter artifacts. This is often under-appreciated, he said, and can mean a ‘burst speed’ of 4 Gpixels/second for capture. Red has always worked at 16 bits per pixel, so HDR and WCG have not made any real difference in the cameras.
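As a sanity check, the ‘burst speed’ figure follows from straightforward arithmetic (a sketch assuming a full 7680 x 4320 readout; the frame dimensions are our assumption, not figures Mathur quoted):

```python
# Back-of-envelope check of the ~4 Gpixel/s "burst speed" figure, assuming
# a full-resolution 8K (7680 x 4320) sensor readout at 120 fps.
width, height = 7680, 4320      # assumed 8K readout size
fps = 120                       # ~8 ms readout to suppress rolling shutter
bits_per_pixel = 16             # RED's stated 16-bit pipeline

pixels_per_frame = width * height           # 33,177,600 pixels
pixel_rate = pixels_per_frame * fps         # ~3.98e9 pixels/s, i.e. ~4 Gpixels/s
data_rate_gbps = pixel_rate * bits_per_pixel / 1e9

print(f"{pixel_rate / 1e9:.2f} Gpixels/s, {data_rate_gbps:.1f} Gbit/s raw")
```

At 16 bits per pixel that burst works out to roughly 64 Gbit/s of raw sensor data, which is why Mathur stressed I/O speed and memory bandwidth.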

There has been a growing demand for connected live content in very high resolution and for that you need a fast system with high speed ethernet and with the CPU close to the FPGA to allow fast operation and extremely low latency. Event organisers are starting to demand cinematic quality live content and being able to exploit TCP/IP (for proprietary headsets) and SMPTE 2110 I/O is very useful. Rates for 2110 have been going up to 100Gbps and 400Gbps, Green said, with talk of rates even higher. 

Megapixel Exploiting Networking

The next speaker was Jeremy Hochman, the founder of Megapixel VR, which is focused on R&D and on developing technology to distribute content anywhere from ingest to the display. The firm’s technology is used by many major LED display makers. The firm has a large ‘technology stack’; while some of it is licensed to others, other technologies are exploited in-house, and the stack can be widely distributed. The firm has a number of products in virtual production systems.

Green asked “what can you do with 8K in a video wall that can’t be done with 4K?”. Hochman said that the firm’s entire engineering team worked on a project with 27 4K streams at a previous company. This project (which certainly sounds a bit like ‘The Sphere’ – editor) was very complicated, with many possible failure points. Moving to 8K streams reduces the number of systems by a factor of four, reducing the potential failure points. Megapixel’s systems are designed to be bandwidth-based rather than raster-based. In other words, they are not intended just for standard display shapes, and pixels can be arranged as needed for the application.

At lower frame rates you can exploit the bandwidth-based approach to have more pixels in each frame.
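Hochman’s stream-count point and the bandwidth-based idea can both be illustrated with some simple arithmetic (the 27-stream figure comes from his anecdote; the rest is a hypothetical sketch):

```python
import math

# One 8K raster carries the same pixel budget as four 4K rasters, so the
# 27-stream project Hochman described would need roughly a quarter as many
# systems if carried as 8K streams.
PIXELS_4K = 3840 * 2160
PIXELS_8K = 7680 * 4320
assert PIXELS_8K == 4 * PIXELS_4K

streams_4k = 27                          # the earlier 27 x 4K project
streams_8k = math.ceil(streams_4k / 4)   # 7 systems instead of 27

def fits_budget(width, height, budget=PIXELS_8K):
    """Bandwidth-based check: any pixel arrangement within the budget is
    valid, not just a standard raster shape."""
    return width * height <= budget

print(streams_8k)
print(fits_budget(30720, 1080))          # a 4x-wider strip with the same budget
```

The `fits_budget` helper (our illustration, not a Megapixel API) captures the bandwidth-based idea: a 30720 x 1080 strip consumes exactly the same pixel budget as a standard 7680 x 4320 raster.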

More Tiling

AMD is seeing more tiling being used to create very large display surfaces. High frame rates allow new applications such as switching backgrounds on LED displays. To a viewer, a massive display needs a higher frame rate so that motion is not so jerky, and higher resolution is also needed to smooth out motion. In virtual production, HFR can be used for different applications (Megapixel can support up to 960fps on a tile).

Rather than having to pre-multiplex different content sources, tiles can ingest multiple 24fps or 60fps streams and the operator can decide what is shown on the tile. You can synchronize different cameras to only see particular frames. For example, you could refresh the background at 120Hz, with cameras at 30Hz, so that four different backgrounds are seen with the same foreground. The extra backgrounds can be used for autocues, different chromakeys and other data and applications. (To see more of this kind of application check the demo from IBC in 2022 here and Megapixel has an interesting white paper here – editor)
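The camera/background interleave Hochman described can be sketched as a simple phase calculation (a hypothetical model, assuming ideal genlock and that each camera's shutter captures exactly one display frame):

```python
# Time-multiplexed backgrounds: the wall refreshes at 120 Hz and cycles
# through four background slots; each 30 Hz camera is genlocked with a
# different phase offset, so it only ever "sees" its own background.
display_hz, camera_hz = 120, 30
slots = display_hz // camera_hz       # 4 interleaved backgrounds

def frames_seen(phase, total_frames=12):
    """Display frame indices captured by a camera with the given phase."""
    return [f for f in range(total_frames) if f % slots == phase]

for cam in range(slots):
    print(f"camera {cam} sees display frames {frames_seen(cam)}")
```

Camera 0 captures frames 0, 4, 8, …, camera 1 captures frames 1, 5, 9, and so on: four independent backgrounds share one wall with the same physical foreground.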

Moving live HFR 8K content for events is a challenge and the I/O is very demanding. Hochman said that the Helios ingest point can ingest 8K or 4 x 4K via fiber, SDI or DisplayPort, while output is basically fixed at 80Gbps. The tiles can ‘request’ particular feeds via the fiber link. The Megapixel PX1 hardware is in the display module and talks to the rack. Each tile almost acts like a matrix switcher, Hochman said, and there is lots of very high speed switching, with some tiles switching at up to 28KHz!

SMPTE 2110 is Important

Megapixel chose to work heavily with SMPTE 2110, and Green asked why, given that Megapixel is not really in the broadcast market the standard was optimized for. Hochman explained that his firm likes ethernet – there is a wide range of speeds available and IT has ‘hardened’ network hardware and infrastructure. That was why the firm adopted ethernet as the transport method to the tiles. Tiles have 10Gbps support (so up to 4 x 2.5Gbps) and can communicate bi-directionally, with diagnostics etc. going back to the controller. As the system is bandwidth- rather than raster-based, it can display a wide range of formats – e.g. 32 x 1K is possible rather than the 8K x 4K of a ‘regular’ 8K display.

The Ventana Slim Tile

Megapixel has developed the Ventana – a tiled LED system that is nowhere near as bulky as traditional systems intended for rental. The tile is very thin and light – it’s less than 15mm thick. The firm sees the technology going into the home and displays will be modular. They can even imagine the technology being used as ‘living architecture’ replacing wall furnishings. For this application, you may need even more than 8K resolution, Hochman said. Viewers will want and need HFR, HDR and ‘retina resolution’ on these large displays. There are plenty of apartments where ‘traditional’ very large TVs cannot be installed and Megapixel believes that in the future this kind of application could become mainstream.

Intopix Reduces Transport Demands

The final speaker in the webinar was Jean-Baptiste Lorent, Director of Sales & Marketing of Intopix, a firm that develops lightweight ‘mezzanine’ codecs. Lorent characterized the firm’s technology as ‘compression for those that don’t want to use compression’. This often means codecs with extremely low latency and that are either visually or mathematically lossless. The two main codecs are TicoRAW which is used for camera sensor output and JPEG XS which is used in broadcast applications. JPEG XS was developed alongside the SMPTE 2110 networking standard.

Intopix has been working on 8K since 2009 when it started some projects with NHK of Japan and it was involved in the tests of live 8K transmission during the Tokyo and Beijing Olympics. From 40Gbps ingest, there was an opportunity to move to a 2Gbps networking system in these demos which made transmission feasible. Lorent highlighted that in broadcast it’s not enough to support a single stream and you have to be able to switch between and handle multiple 8K streams.

Demand from New Applications

In ProAV, where Intopix is also involved, the demand for 8K is really coming from new and creative applications including digital signage and very large LED walls. Sometimes there is a desire to show multiple 4K streams, for example, on a single 8K or bigger display. A desire for new experiences is really driving this area of 8K demand.

“Why don’t you always compress?” Green asked. More and more, Lorent replied, everybody is compressing; even in machine vision, medical and other applications, mathematically or visually lossless codecs can be helpful. With AI taking off and being used to analyze visual content, compression should not affect the accuracy of detection. The threshold of what is acceptable depends on the application. In sports, latency is the issue, but for a movie experience, the visual quality has to be cinematic. Compression has to work along with image processing in cameras and other systems that may be debayering images or processing noise, for example. Lorent explained that the firm’s technology is used in medical systems, where video content may be visually lossless when compressed, but where medical data such as X-ray images are mathematically lossless.

JPEG XS Allows Cat 5 Use

With JPEG XS compression and using SMPTE 2110, you can transmit 8K 60P content over Cat 5 UTP cables, and there are already millions of those installed. In acquisition inside cameras, the challenge is often to match the content capture to the bandwidth of the storage medium. JPEG XS is now in its third edition, and with the TDC profile Intopix can compress an image with just three to ten video lines of latency, even though there are some elements of intra- and inter-frame compression.

Networks at just 2.5Gbps can be used for 4:4:4 content using the JPEG XS Edition 3 TDC profile or with 4:2:2 content using the I profile.
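For a rough sense of the arithmetic involved (an illustration with assumed source parameters, not Intopix figures), consider fitting uncompressed 8K60 into a 2.5Gbps link:

```python
# Rough arithmetic for squeezing 8K 60fps video through a 2.5 Gbit/s link,
# assuming a 10-bit 4:2:2 source (20 bits/pixel on average) - an assumed
# format for illustration, not a figure quoted in the webinar.
width, height, fps = 7680, 4320, 60
bits_per_pixel = 20                     # 10-bit 4:2:2 averages 20 bits/pixel

raw_gbps = width * height * fps * bits_per_pixel / 1e9
link_gbps = 2.5
ratio = raw_gbps / link_gbps
print(f"raw {raw_gbps:.1f} Gbit/s -> needs about {ratio:.0f}:1 compression")
```

The raw stream comes out near 40 Gbit/s, so a roughly 16:1 ratio is needed to fit the 2.5Gbps link – well within the lightweight-compression territory that mezzanine codecs like JPEG XS target.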

The number of gates used for the JPEG XS codec can be very small: 8K 60fps can be compressed with as little as 25K logic gates and very limited memory – fewer gates than 4K needs in JPEG 2000. The memory requirement is small enough for it all to be on chip, and not only does this avoid the need for external DDR memory and ports, it also means that JPEG XS can be used within the FPGA to reduce the memory needed elsewhere in the FPGA!

Power Consumption also Benefits

Power consumption is also a big benefit of the use of FPGAs, Lorent said, and green issues are also important throughout industry. FPGAs are more power efficient than CPUs and GPUs (which are also supported by Intopix) but sometimes you need the extra power.

Compression and codecs are getting more important as there is a desire to move to higher performance but without major upgrades in processing, networks and storage. Better codecs help to minimize the additional cost of better video, Lorent concluded.

When we went to press, the webinar was viewable online here.
