MainConcept’s Codec Tools Support Ingest as Well as Distribution
Codecs are a critical enabling part of any broadcast or video pipeline. They fall into the category of ‘enablers’ when analysts look at how markets develop; for any market or technology, there have to be both ‘drivers’ and ‘enablers’. Nobody creates content just because they want to use a particular codec. However, today’s high resolution, high frame rate, wide color gamut and high dynamic range sensors produce a huge amount of data. Without a codec to compress that data, much of the fantastic video we see every day on TV and in the cinema could not be captured, stored, edited and delivered. Codecs are an essential part of the ‘plumbing’ of the video ecosystem.
One of the key global providers of this codec ‘plumbing’ is MainConcept GmbH, which has been developing codecs for thirty years, from the days of MPEG-2 to today. The company was founded after work done to enable video editing on the Amiga computer in Aachen, Germany. (Germany was a hub for software development for the Amiga computer, originally developed by Commodore – editor). Over the years, the company has changed hands several times, including periods under DivX and then Rovi, and is now part of Endeavor, a global sports and entertainment company. It has continuously developed codecs, from MPEG-1 and -2 (1993) to the H.264/MPEG-4 AVC codec in 2004 and H.265/HEVC in 2013.
VVC is the Major Development Today
Work still continues on codecs from MPEG-2 onwards, but the main focus currently is H.266/VVC, which is still at a (relatively) early stage of encoder development. VVC is especially suited to high performance video, including 8K, so we spoke with Thomas Kramer, the company’s VP of Strategy & Business Development, based in Germany, and Geoff Gordon, its VP of Marketing, based in California, to check on recent developments.
Codec technology is important in both the ingest and delivery parts of video production, and as the firm works in both parts of the ecosystem, we covered both areas. The company believes its technology is currently used in 90% of professional video workflows, as it is licensed by companies including Adobe, Autodesk and Avid.
VVC Improves on HEVC
VVC, as we have reported, brings significant bitrate savings over HEVC at the same visual quality, but at the expense of more computation. However, as Kramer pointed out, there is always more and better processor power on the way from the semiconductor industry.
One key metric for encoder technology is the visual quality of the decoded signal; another is the speed of encoding. When content is processed and delivered offline, speed is not quite as important for the final rendering, but in live broadcast situations, time to compress is absolutely critical.
At NAB this year, MainConcept demonstrated live simultaneous encoding of HD, 4K and 8K content using the VVC codec. This level of processing took a lot of power: the work was done on a 192-core system in the AWS cloud. The cloud is a great way to get a scalable level of processing.
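As a rough illustration of what ‘simultaneous encoding’ involves, the sketch below fans one source out to three VVC encodes running in parallel. It assumes an ffmpeg build that includes the open-source libvvenc VVC encoder; the file names and bitrates are invented for illustration, and MainConcept’s own SDK exposes a different, proprietary API.

```python
# Minimal sketch: run three VVC encodes of one source in parallel.
# Assumes ffmpeg compiled with the open-source libvvenc encoder.
import subprocess

SOURCE = "input_8k.y4m"           # hypothetical mezzanine source file
LADDER = [                        # illustrative resolutions and bitrates
    ("1920x1080", "6M",  "out_hd.mp4"),
    ("3840x2160", "25M", "out_4k.mp4"),
    ("7680x4320", "80M", "out_8k.mp4"),
]

procs = [
    subprocess.Popen([
        "ffmpeg", "-y", "-i", SOURCE,
        "-vf", f"scale={res.replace('x', ':')}",   # downscale for each rung
        "-c:v", "libvvenc", "-b:v", rate,
        out,
    ])
    for res, rate, out in LADDER
]
for p in procs:                   # wait for all three renditions to finish
    p.wait()
```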
Kramer pointed out that to make it work well, you have to get all of the content into the cloud. If you have multiple streams of 8K video with higher frame rates and bit depths, and in 4:2:2 production quality, you have to move a lot of data. The need to upload all that content can mean that local processing is still the best option in some cases.
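Some back-of-the-envelope arithmetic shows the scale of the problem. Assuming a 60 fps feed (our assumption; no frame rate was specified), a single uncompressed 8K 4:2:2 10-bit stream works out to roughly 40 Gbit/s:

```python
# Uncompressed bandwidth for one 8K 4:2:2 10-bit stream at an assumed 60 fps.
# 4:2:2 subsampling averages 2 samples per pixel (1 luma + 1 chroma).
width, height, fps = 7680, 4320, 60
bits_per_pixel = 2 * 10                     # 2 samples/pixel x 10 bits
gbps = width * height * bits_per_pixel * fps / 1e9
print(f"{gbps:.1f} Gbit/s")                 # ~39.8 Gbit/s per camera feed
```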

VVC Supports Layering
One of the particularly interesting areas in VVC is the facility to combine multiple streams, both to create different levels of quality and to add new functionality. The idea of base layers with enhancement is not new, but VVC was built ‘from the ground up’ with the intent that the idea should be supported in the codec.
At IBC in 2022, NHK demonstrated how a FullHD layer could be enhanced to UltraHD and then to 8K. Crucially, the layers can be delivered to the final display device using different broadcast or distribution methods. A FullHD signal could be delivered by traditional broadcast (terrestrial, cable or satellite), while the enhancement layers could be provided using OTT streaming.
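A minimal sketch of the decoder-side logic this enables is below. The layer names and the Layer type are illustrative, not a real VVC API; in practice the scalability structure is signalled in the bitstream itself.

```python
# Conceptual sketch of layered delivery: the receiver reconstructs the
# highest quality for which all required layers have arrived.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    transport: str        # how this layer reaches the device
    received: bool

layers = [
    Layer("base: FullHD",         "terrestrial broadcast", True),
    Layer("enhancement: UltraHD", "OTT stream",            True),
    Layer("enhancement: 8K",      "OTT stream",            False),
]

# Each enhancement layer is only usable if every layer below it decoded.
usable = []
for layer in layers:
    if not layer.received:
        break
    usable.append(layer)

best = usable[-1].name if usable else "nothing decodable"
print("decoding at:", best)       # -> "enhancement: UltraHD"
```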

In Brazil, MainConcept is collaborating with others on broadcast trials for Globo to meet the SBTVD TV 2.5 and TV 3.0 standards, aimed at bringing UHD video and MPEG-H Audio to the millions of consumers who access linear content. DVB and ISDB are developing technical standards as well. ATSC 3.0 supports MIMO techniques with simultaneous vertically and horizontally polarized antennas, so one layer can be transmitted over each, with a fallback to a single layer where the complete specification is not implemented.
This layering concept is an elegant way to exploit the efficiency of traditional broadcast methods while avoiding the need to transmit the same data multiple times (which would be the case if you were transmitting full discrete channels of HD, 4K and 8K video).
Layers Can Enable New Applications
As NHK showed last year, the layers can also be used to provide different content, such as accessibility features, but MainConcept told us that it has demonstrated how this technology can also be used to deliver customized and personalized content directly to viewers using the OTT streams. The idea got a tremendous response when it was shown at NAB, and there will be more to see at IBC in September 2023. (Gordon also has a blog post showing what you can expect to see at the event.)
The ability to provide multiple streams at different levels of performance, quality and bitrate is useful for broadcasters, but it is absolutely essential to streaming services, which have to support a huge range of devices and bitrates. Here, one of the key skills of an encoder developer is understanding how to optimise the compression for the best possible visual quality for every device and viewer.
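The sketch below shows, in miniature, why that range matters: the encoder publishes a ‘ladder’ of renditions, and each player picks the best one its screen and connection can handle. The ladder values here are invented for illustration.

```python
# Illustrative ABR rendition selection: choose the highest-quality stream
# that fits both the device's screen and its measured bandwidth.
LADDER = [  # (width, height, bitrate_kbps) - invented example values
    (1280, 720,   3000),
    (1920, 1080,  6000),
    (3840, 2160, 16000),
]

def pick_rendition(display_width: int, throughput_kbps: int):
    """Highest rendition that fits both the screen and the bandwidth."""
    candidates = [r for r in LADDER
                  if r[0] <= display_width and r[2] <= throughput_kbps]
    return max(candidates, key=lambda r: r[2]) if candidates else LADDER[0]

print(pick_rendition(1920, 8000))   # -> (1920, 1080, 6000)
```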
Although machine learning and AI can help, they are not usually ‘super real-time fast’, Kramer said, and it takes a lot of work to build the databases for learning, so the encoder developer’s understanding of how to match the compression to the content to minimize compression artefacts is critical.
Codecs are Critical for Ingest
Kramer told us that the firm is seeing more and more recording and editing of content in 8K as cameras that support the format become more accessible and widely available. Typically, these cameras will need to transmit content in 4:2:2 format with 10-bit depth during production. There is lots of hardware around that can deal with 4:2:0 encoded content – as used for content distribution – but for 4:2:2 there is often a need to fall back to software processing, as the hardware may not be capable.
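That fallback decision reduces to a capability check, sketched below. The supported-format set is illustrative only; real decoder capabilities vary by chip and driver.

```python
# Hedged sketch of the hardware/software fallback described above.
HW_DECODER_FORMATS = {"yuv420p", "yuv420p10le"}   # typical consumer silicon

def choose_decode_path(pix_fmt: str) -> str:
    """Use the hardware decoder only for pixel formats it supports."""
    return "hardware" if pix_fmt in HW_DECODER_FORMATS else "software"

print(choose_decode_path("yuv420p10le"))  # distribution content -> hardware
print(choose_decode_path("yuv422p10le"))  # 4:2:2 production -> software
```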
Professional broadcast cameras tend to use HEVC as the codec, and that can be a challenge in editing and live production. Some codecs used in content capture (such as ProRes) are based on intra-frame techniques, where the data for each frame is processed independently. HEVC, though, is an inter-frame codec, where the result for the current frame usually depends on previously decoded frames.
HEVC is a ‘long GOP’ (Group of Pictures) codec, so developing the technology to make responsive editing fast and simple is a critical and challenging task. It’s one of the reasons that many of the main professional video editors (NLEs) use MainConcept technology, the firm told us.
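A toy model of why long-GOP editing is hard: to display an arbitrary frame, the editor has to decode forward from the previous random-access point. The GOP length and the frame arithmetic below are a simplification for illustration, not parsed from a real bitstream.

```python
# Cost of displaying one frame in a long-GOP stream: everything since the
# last keyframe must be decoded first.
GOP = 60                                   # illustrative GOP length (frames)

def frames_to_decode(target: int) -> int:
    """Number of frames decoded just to display frame `target`."""
    last_keyframe = (target // GOP) * GOP  # previous intra/IDR frame
    return target - last_keyframe + 1

print(frames_to_decode(0))     # on a keyframe: 1 frame
print(frames_to_decode(119))   # worst case: 60 frames for one displayed frame
```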

Kramer told us that he expects HEVC to be the main codec for this kind of application for some time to come, although there is also a lot of development in RAW formats.
The firm also works on other codecs, including LCEVC (where it is collaborating with Globo and V-Nova) and AV1 and AV2. Kramer told us that he expects AV1 to be used only for streaming, not for broadcast, but that AV2, which is under development, could compete with VVC in the future.