July 9, 2024

So Upscaling is Important, but What About Downsampling?

Over the last year or two, we have reported a lot on the upscaling technology being used to boost the resolution of video up to 8K, but in content creation there is often an advantage in downscaling from the higher resolution being captured. The improvement in image quality that you get by downsampling is a factor in the widespread use of 4K capture even when the content is only to be delivered in 2K/FullHD.

In the audio world, it is well understood that digital audio has to be sampled at a rate well above the highest frequency you want to capture. Digital camera sensors are effectively sampling devices for images, so some of the same concepts apply.

So, how and why does downsampling help the image?

There are different ways to view the whole subject of sampling. There are approaches based on the practicalities of the equipment used, but there are also mathematical and theoretical approaches.

The fundamental problem in any sampling process is aliasing, and the effort to eliminate it is known as anti-aliasing. Aliasing arises because the world is analogue and continuous while camera sensors are digital: each pixel in a sensor effectively takes a single sample of the view at a particular point in space. A pixel can record only one value for the area it covers and cannot account for variations between different points within that area.

Let’s look at an example. Imagine a scene that consists of bars whose colour or brightness varies like a wave. The original analogue input is sampled by the sensor (each of the vertical lines in the image below represents one pixel in the camera sensor). When the sampling rate is comfortably higher than the frequency of the pattern being sampled, the result is an accurate representation of the image. However, when the sampling rate is too low, major inaccuracies occur.
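To make this concrete, here is a minimal numerical sketch (our own illustration, assuming Python with NumPy; not code from any camera pipeline) that samples a wave-like bar pattern at one "pixel" per unit of distance and then measures what frequency the samples actually contain. A pattern with more than two samples per cycle is recovered correctly; a finer one shows up as a false, much coarser pattern.

```python
import numpy as np

def dominant_frequency(samples, spacing):
    """Return the strongest frequency (in cycles per unit) present in the samples."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=spacing)
    return freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC term

pixel_spacing = 1.0                       # one sample ("pixel") per unit of distance
positions = np.arange(0, 256, pixel_spacing)

for true_freq in (0.4, 0.9):              # cycles per unit of distance (invented values)
    pattern = np.sin(2 * np.pi * true_freq * positions)
    measured = dominant_frequency(pattern, pixel_spacing)
    print(f"pattern at {true_freq} cycles/unit is measured as {measured:.2f} cycles/unit")

# Roughly: 0.4 cycles/unit -> measured ~0.40 (more than two samples per cycle, captured correctly)
#          0.9 cycles/unit -> measured ~0.10 (too few samples per cycle, a false coarse pattern)
```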

This phenomenon is well known and understood in digital signal processing circles: it was codified almost 100 years ago by Nyquist, and the understanding was later developed by Shannon. The resulting theory is known in the West as the Nyquist-Shannon sampling theorem. Its fundamental tenet is that, to avoid aliasing, you need to sample at a rate that is at least twice the highest frequency you want to be able to recreate. That is why high quality audio is sampled at 44.1 kHz or more, allowing the capture and reproduction of frequencies up to 20 kHz. In imaging terms, higher frequency equates to finer detail.
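The same "folding" can be worked out directly. The little helper below is our own illustration (the tone frequencies are invented examples): it shows where a frequency above half the sampling rate ends up after sampling at 44.1 kHz.

```python
def alias_frequency(f_signal, f_sample):
    """Apparent frequency (Hz) of a tone at f_signal after sampling at f_sample."""
    folded = f_signal % f_sample                               # fold into one sampling period
    return folded if folded <= f_sample / 2 else f_sample - folded

print(alias_frequency(18_000, 44_100))   # 18000.0 -- below half the sample rate, captured correctly
print(alias_frequency(30_000, 44_100))   # 14100.0 -- above it, so it masquerades as a 14.1 kHz tone
```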

In video, aliasing tends to show up as interference or moiré patterns when there is fine detail in the image. This is one reason why those appearing on TV are advised to avoid stripes and fine patterns in their clothes. However, when you are filming the world as it is, it is impossible to control what patterns might appear.

In this moiré image, the bars at the bottom are spaced slightly wider apart than those at the top. Where the two sets overlap, sometimes they line up and sometimes they don’t, creating the solid blocks that form the moiré pattern.

One way to ensure that there are no aliasing artifacts is to turn the Nyquist theorem around and filter out any content that is of a higher frequency (i.e. with finer spatial detail) than half the sampling frequency, so that it cannot be misrepresented. This is simple to do in audio, but rather more complex in cameras and video. Where there is certain to be a lot of fine detail, one solution is to use an ‘optical low pass filter’ (OLPF).
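The sketch below (our own illustration, assuming Python with NumPy and SciPy; the Gaussian blur and its sigma are rough stand-ins for a proper anti-aliasing filter, not a real OLPF design) shows the principle applied to downsampling an image: keeping every fourth pixel of a fine stripe pattern preserves a strong false pattern, while low-pass filtering first leaves an almost uniform result.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def downsample_naive(image, factor):
    """Keep every Nth pixel with no filtering -- prone to aliasing and moire."""
    return image[::factor, ::factor]

def downsample_filtered(image, factor):
    """Low-pass filter first, so detail the lower resolution cannot carry is removed."""
    blurred = gaussian_filter(image, sigma=factor / 2.0)   # sigma is a rough rule of thumb
    return blurred[::factor, ::factor]

# Fine diagonal stripes -- the kind of detail that produces moire when undersampled.
y, x = np.mgrid[0:512, 0:512]
stripes = 0.5 + 0.5 * np.sin(2 * np.pi * (x + y) / 3.0)

aliased = downsample_naive(stripes, 4)
clean = downsample_filtered(stripes, 4)
print(f"false pattern contrast without filtering: {aliased.std():.3f}")
print(f"residual contrast after filtering first:  {clean.std():.4f}")
```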

Filtering the Image

An example of where this might be useful is in virtual production, where LED volumes are used to create backgrounds for video or cinema content. LEDs have very defined edges to their pixels, so it is easy to generate aliasing artifacts. If you always use a particular camera at a particular distance, it is not difficult to choose a pixel pitch for the LED display that provides enough pixels to avoid aliasing with the sensor. However, in a more general studio, where the camera may move nearer to or further from the LEDs, or where lenses with different magnifications may be used, a more general solution is needed.
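To give a feel for the problem, here is a back-of-the-envelope calculation of the kind a studio might make. It is our own simplification: it assumes plain thin-lens magnification (focal length divided by distance), ignores lens blur, any OLPF and Bayer sampling, and every number in it is invented for illustration.

```python
def sensor_samples_per_led_pixel(led_pitch_mm, distance_mm,
                                 focal_length_mm, sensor_pixel_pitch_mm):
    """Roughly how many sensor pixels fall across the image of one LED pixel."""
    magnification = focal_length_mm / distance_mm            # size on sensor / size in scene
    led_pixel_on_sensor_mm = led_pitch_mm * magnification
    return led_pixel_on_sensor_mm / sensor_pixel_pitch_mm

# Invented example: 2.5 mm LED pitch, 50 mm lens, ~5 micron sensor pixels.
for distance_m in (3, 10, 20):
    samples = sensor_samples_per_led_pixel(2.5, distance_m * 1000, 50, 0.005)
    verdict = "OK (>= 2 samples, Nyquist satisfied)" if samples >= 2 else "risk of aliasing/moire"
    print(f"{distance_m:>2} m: {samples:.1f} sensor pixels per LED pixel -> {verdict}")
```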

In the past, companies have supplied what are essentially optical blurring surfaces to go over the LEDs, making the pixels appear less separated and so avoiding the issue, but it is more efficient to put the OLPF into the camera. Camera makers such as Blackmagic Design sometimes offer camera models with this feature as an option for situations where aliasing might be a challenge. Of course, the design has to be optimised to avoid harming the detail that the operator wants to capture.

We plan to dig further into the issue of supersampling in a subsequent article, but the idea that it is best to sample at least twice as many pixels in each dimension as you want to display to avoid aliasing is one of the reasons why 4K UltraHD is widely used to capture content that is to be delivered in 2K FullHD, and why capturing in 8K makes sense even if the eventual delivery or display resolution is 4K. If you want to deliver the highest quality 8K, you might even choose to shoot at 16K or more.
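As a simple illustration of that idea (our own sketch, not anyone's production pipeline), averaging each 2x2 block of pixels turns a UHD frame into a FullHD frame; real pipelines use more sophisticated filters, but even this box average reduces noise and removes detail finer than the delivery resolution can carry.

```python
import numpy as np

def box_downsample_2x(frame):
    """Average each 2x2 block of pixels into one output pixel."""
    h, w = frame.shape[:2]
    h, w = h - h % 2, w - w % 2                      # trim odd edges if any
    blocks = frame[:h, :w].reshape(h // 2, 2, w // 2, 2, -1)
    return blocks.mean(axis=(1, 3))

# Example with a random "4K" single-channel frame (values are placeholders).
uhd = np.random.rand(2160, 3840, 1).astype(np.float32)
fhd = box_downsample_2x(uhd)
print(uhd.shape, "->", fhd.shape)                    # (2160, 3840, 1) -> (1080, 1920, 1)
```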

Many camera sensors use a ‘Bayer matrix’ to sample the different colours, which complicates this topic somewhat! We plan to come back to that aspect of the topic. If you are interested in an introduction to sampling, we can recommend this YouTube video: https://www.youtube.com/watch?v=lyf5jGIrwQE
