March 11, 2024

UpScaling with Samsung – the Inside Track

Several years ago, a friend of mine joined a company famous for its large-screen TVs, and he excitedly told me how, as an executive in the TV division, he could buy a big 4K TV at a very attractive staff price. Some time later, I asked him how he was enjoying it, and he replied that he really wasn’t enjoying it at all.

He was living in Germany, and in those days German national broadcasters transmitted only heavily compressed, low-quality video. When that video was scaled up, the high resolution and large size of the new TV made the images look really poor. Fortunately, a lot of work has since gone into improving upscaling, and the latest approaches exploit AI to avoid this kind of problem.

Samsung Briefs at CES

At CES in January 2024, Samsung briefed journalists on its latest upscaling technology, and we saw multiple reports saying how effective it was, so we followed up and spoke to Dr. Tien Bau, Senior Staff Research Engineer II at Samsung Research in the US, who gave the briefing at CES. He is a specialist in the techniques involved, with a research background first in IC design and then in computer vision.

Dr Bau explained that upscaling is really all about working in the ‘spatial domain’ rather than in the color domain. There are a number of areas that need to be considered, including compression artifacts, banding and blurring. In the past, each of these factors needed to be processed using separate techniques and algorithms. Some are easier to identify and process than others.

For example, blurring is fairly straightforward to identify and to fix, but compression artifacts can be very hard to distinguish from real detail, he said, and because they are non-linear they are harder to fix. There are so many steps and processes between the camera sensor and the TV that image quality can easily be lost. Even though a video may be designated FullHD or 4K, for example, and delivered in a FullHD or UltraHD/4K container, the actual image can have much lower real resolution.
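To make the contrast concrete, here is a minimal sketch of the traditional ‘one algorithm per artifact’ approach: a classical unsharp mask that addresses blur, and only blur. Deblocking or debanding would each need their own separate filter in such a pipeline. The function name and parameter values are our own illustrative choices, not anything Samsung described.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image: np.ndarray, sigma: float = 1.5, amount: float = 1.0) -> np.ndarray:
    """Classical sharpening: add back the high-frequency residual lost to blur."""
    img = image.astype(np.float64)
    blurred = gaussian_filter(img, sigma=sigma)   # low-pass estimate of the image
    sharpened = img + amount * (img - blurred)    # boost what the blur removed
    return np.clip(sharpened, 0, 255).astype(np.uint8)
```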

Details Can be Difficult

Newer codecs such as AVC (H.264) and HEVC (H.265) tend to apply high levels of smoothing. If there is an image of a head, for example, the outline may be quite clear, but the details of the face can be difficult to distinguish. Textures are easily lost when high levels of compression are used to reduce bitrates. We were slightly surprised to hear that a really good standard-definition image (at DVD quality, for example) can be easier to upscale with high quality than a more compressed higher-resolution image.

In the end, what is lost during compression is usually high-frequency information, and Dr Bau told us that much of the work of upscaling is done in the frequency domain. To upscale, you have to capture the image and analyze it in that domain to understand the level of detail present, what needs to be done, and what can be done.
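As a rough illustration of that kind of frequency-domain analysis, the sketch below estimates how much genuine high-frequency energy a frame actually contains using a 2D FFT. The function name and the cutoff value are our own illustrative choices.

```python
import numpy as np

def high_frequency_ratio(luma: np.ndarray, cutoff: float = 0.5) -> float:
    """Fraction of spectral energy above `cutoff` of the Nyquist frequency."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(luma.astype(np.float64)))) ** 2
    fy = np.fft.fftshift(np.fft.fftfreq(luma.shape[0]))  # normalized freqs, -0.5..0.5
    fx = np.fft.fftshift(np.fft.fftfreq(luma.shape[1]))
    # Mark everything above the cutoff in either axis as "high frequency".
    high = (np.abs(fy)[:, None] > cutoff * 0.5) | (np.abs(fx)[None, :] > cutoff * 0.5)
    return float(spectrum[high].sum() / spectrum.sum())
```

A nominally UltraHD frame that carries only HD-level detail will return a noticeably lower ratio than one with genuine 4K detail, which is exactly the kind of signal an analysis stage can act on.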

One of the key changes with AI, he said, is that all of the different factors can be processed in a single step rather than needing individual solutions.
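A minimal sketch of that idea, using a toy convolutional network in PyTorch (our choice of framework, purely for illustration): one learned model maps a degraded frame directly to a cleaned, upscaled frame, so deblurring, debanding, artifact removal and scaling are handled together rather than by chained algorithms. Samsung’s production networks are, of course, far larger and were not described at this level of detail.

```python
import torch
import torch.nn as nn

class JointRestorationUpscaler(nn.Module):
    """Toy model: artifact cleanup and 2x upscaling learned as one mapping."""
    def __init__(self, scale: int = 2, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),  # rearranges channels into a larger image
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

model = JointRestorationUpscaler(scale=2)
frame = torch.rand(1, 3, 540, 960)   # stand-in for a degraded input frame
print(model(frame).shape)            # torch.Size([1, 3, 1080, 1920])
```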

First Train your AI…

Of course, much of the work in developing good AI processes is in the training of the system. Training can range from fully ‘supervised’ to unsupervised. In a fully supervised AI system, images are fed in that have been fully labelled, for example by identifying facial features such as noses or eyes. Creating those labels initially is clearly very resource intensive.

At the other extreme, there is unsupervised learning and that usually needs many more images. Dr Bau told us that Samsung started working on the challenge of understanding the best learning models as long ago as 2017 and it has taken a long time and a lot of work to understand the ‘sweet spot’ of learning that can give good results without an overwhelming level of training.
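One widely used way to obtain supervision without manual labels, common in the super-resolution literature (the article does not say which recipe Samsung settled on), is to degrade pristine frames synthetically, so that each clean frame supplies its own training target:

```python
import io
import numpy as np
from PIL import Image

def make_training_pair(hr: Image.Image, scale: int = 2, quality: int = 30):
    """Return a (degraded input, clean target) pair from one pristine frame."""
    lr = hr.resize((hr.width // scale, hr.height // scale), Image.BICUBIC)
    buf = io.BytesIO()
    lr.save(buf, format="JPEG", quality=quality)  # inject compression artifacts
    buf.seek(0)
    degraded = Image.open(buf).convert("RGB")
    return np.asarray(degraded), np.asarray(hr)
```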

…Then Use a Lot of Silicon

There are also difficult decisions to be made about how much silicon is devoted to the neural networks that apply the processing. You need a lot of gates to get a good effect, so good upscaling carries a significant cost. Initially, Samsung used dedicated chips because of the number of gates required.

Samsung upscales the whole image using AI, rather than just using AI to decide how to process using conventional technology.

The image pipeline that Samsung uses exploits AI to fully process the image. That means the entire image has to be fed into the processor and the full pixel stream has to be output, and that much I/O requires larger and more expensive chips. Samsung’s approach differs from some other vendors, who advertise their systems as using ‘AI upscaling’ but use AI only to analyze the image (often a much-reduced sample of it) and then perform the actual processing with conventional upscaling. They do use AI, but that approach needs much less silicon and costs much less.

The Visible Difference of Real 8K

Dr Bau also told us that his group has done quite a lot of work on the thorny question of the visible difference between 4K and 8K. He pointed out that, very often, even if a camera sensor has 8K photosites, the delivered image may contain only 4K of real detail after processing and de-Bayering, and with the effect of any integrated optical low-pass filter. Much of the high-frequency detail is lost.

If you take an image at that level and ‘downscale’ it to 4K for a 4K set (when comparing 8K sets to 4K sets), the reality is that you lose no additional information in the downscaling. The 4K set will then look as good as the 8K set, because both are actually showing 4K content. Dr Bau said that you really need to be sure the video actually contains full 8K detail if you want to use it for a comparison.
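The point is easy to demonstrate. In the sketch below (our illustration; the file name is hypothetical), we fake a detail-free ‘8K’ frame by bicubically upscaling a genuine 4K frame, then downscale it back and compare. The result is nearly identical to the original, which is exactly why such content cannot show an 8K advantage.

```python
import numpy as np
from PIL import Image

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

frame_4k = Image.open("frame_4k.png").convert("RGB")   # hypothetical 4K source
fake_8k = frame_4k.resize((frame_4k.width * 2, frame_4k.height * 2), Image.BICUBIC)
back_to_4k = fake_8k.resize(frame_4k.size, Image.BICUBIC)
# Typically well above 40 dB: virtually nothing was lost in the round trip.
print(psnr(np.asarray(frame_4k), np.asarray(back_to_4k)))
```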

As you can imagine, this was an interesting perspective from our point of view and is a topic that we will return to.
