8K Helps with Image Stabilization
We recently reported on a webinar run by AMD about the benefits of 8K in content creation and processing. One of the speakers was Uday Mathur, CTO of digital cinema camera maker Red. He summarized the advantages of 8K in content creation as:
- Support for supersampling, with scaled 8K providing better 4K and even 2K content because of the reduction in aliasing, giving smoother edges and ‘a finer grain structure’.
- 8K often allows re-framing in post-production, creating virtual lens movements that were not captured during the shoot.
- Having an 8K original often allows very effective image stabilization while still delivering high quality – the result can be even better than a high-end tripod. Mathur said that even hand-held content can be stabilized to a high degree.
- Much Hollywood content these days has visual effects (VFX) applied, and a higher-resolution canvas allows these to be produced on a finer grid.
- 8K provides future proofing, allowing content to be re-purposed for longer.
Four of the five topics have been covered by the 8K Association on its website, but we haven’t previously reported on the third advantage – better image stabilization. We thought it was time to cover this topic in a bit more detail. Mathur was clearly talking about digital image stabilization in the post-production process, but for completeness, we’ll look at other techniques as well.
The main purpose of image stabilization is to remove blurring caused by movement of the camera during capture. Of course, some content benefits from the deliberate motion that comes with hand-holding the camera, but even tiny disturbances at the sensor can look very obvious when the image is blown up on a large TV or a cinema screen.
There are two basic ways to perform optical image stabilization and reduce the blurring at the sensor surface: you can either move the lens or move the sensor, using information from gyroscopes in the camera body to track the motion. The different kinds of motion that can be compensated include:
- rotation around the optical axis (roll)
- horizontal rotation (yaw)
- vertical rotation (pitch)
Different cameras and makers can have different subsets of these features, and can use different algorithms.
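As a rough illustration of how the gyro data drives the compensation – our own sketch, not any maker’s actual algorithm – the snippet below integrates angular-rate samples into an angle and converts it to the pixel shift the lens or sensor would need to counteract, assuming pitch-only motion and a simple pinhole model (the focal length expressed in pixels is assumed to be known from lens metadata):

```python
import math

def gyro_to_pixel_shift(rates_dps, dt, focal_px):
    """Integrate gyro angular-rate samples (degrees/second) sampled
    every `dt` seconds into a total angle, then convert that angle to
    the pixel shift to counteract, for a pinhole camera whose focal
    length is `focal_px` pixels."""
    angle_deg = sum(r * dt for r in rates_dps)  # simple rectangular integration
    return focal_px * math.tan(math.radians(angle_deg))

# A steady 2 deg/s pitch for 0.1 s on a lens of 4000 px focal length
# drifts the image by roughly 14 pixels - easily visible on a big screen.
shift = gyro_to_pixel_shift([2.0] * 10, 0.01, 4000)
```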
Putting the compensation in the lens means, for cameras that use a range of interchangeable lenses, building the technology into each lens, adding to the cost. The alternative is to move the sensor instead – in-body image stabilization (IBIS) – so that any lens can be supported. The sensor has to move further than if correction is done in the lens, and the lens may have to produce a slightly larger image circle to ensure it still covers the sensor. The camera also has to know the current focal length on zoom lenses.
For some extreme levels of movement or vibration, an external vibration control system can be used to move the whole camera and lens combination.
The next step was to offer both lens and sensor stabilization at the same time. That needs tight integration between the camera body and lens but can produce very useful levels of improvement in image sharpness.
Earlier stabilization techniques used gyroscopes and digital spirit levels to understand motion, but the boost in processor power in recent years also allows motion to be estimated from the image on the sensor itself.
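To show what image-based motion estimation can look like, here is a minimal phase-correlation sketch of the kind such systems can build on – an illustrative assumption on our part, not a description of any specific camera’s algorithm. It recovers the integer translation between two frames from the peak of their cross-power spectrum:

```python
import numpy as np

def phase_correlate(ref, moved):
    """Estimate the integer (dy, dx) translation of `moved` relative
    to `ref` via phase correlation of the two frames."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moved)
    R = F2 * np.conj(F1)
    R /= np.abs(R) + 1e-12            # keep only the phase difference
    corr = np.fft.ifft2(R).real       # delta-like peak at the shift
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # Peaks past the halfway point wrap round to negative shifts
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

In a real stabilizer this estimate would feed the crop-window placement frame by frame; production systems add sub-pixel refinement and rotation handling on top.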
Digital Image Stabilization
An alternative way to stabilize the image is to do it after capture, in post-production. In this case, the pixels are analyzed and a cropped portion of the image is re-centered around a stable image center. If you capture in UltraHD/4K for content that is to be output at 4K, the image has to be stabilized, cropped and then re-scaled back to the full image size, and inevitably that reduces the resolution of the final image. The crop can be 30% or more on cameras with high levels of stabilization – for example, cameras designed for vloggers. (We found a good example of the level of cropping with aggressive stabilization here).
An alternative approach to post-production stabilization has been adopted by Sony with its Catalyst Browse software, which can use gyroscope metadata recorded by recent Sony cameras to guide the stabilization. (There’s a list of supported cameras here)
Starting with an 8K image allows some pixels to be given up for stabilization while still leaving enough resolution for supersampling, which overcomes potential quality loss from aliasing*. Alternatively, an 8K camera can use a slightly wider lens or zoom setting to allow for stabilization without cropping important parts of the image.
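The arithmetic behind that headroom is simple. Assuming 8K UHD capture (7680 × 4320), 4K UHD delivery (3840 × 2160) and the aggressive 30% crop mentioned above:

```python
# Assumed figures: 8K UHD capture, 4K UHD delivery, 30% stabilization crop
cap_w, cap_h = 7680, 4320    # 8K UHD capture
out_w, out_h = 3840, 2160    # 4K UHD delivery
crop_pct = 30                # aggressive crop level quoted in the article

# Pixels left after the stabilization crop (integer math keeps it exact)
rem_w = cap_w * (100 - crop_pct) // 100   # 5376
rem_h = cap_h * (100 - crop_pct) // 100   # 3024

# Still more pixels than the 4K output needs: 1.4x oversampling per axis
oversample = rem_w / out_w
```

Even after losing 30% of each dimension, the stabilized frame still has to be scaled *down* to 4K, so the output keeps the supersampling benefit instead of paying a resolution penalty.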
One of the downsides of very aggressive levels of stabilization is that each frame can contain some blur from motion during its capture. When the footage is viewed with the camera motion intact, that blur looks entirely natural. However, if the footage is then stabilized to remove the camera motion, the blur can look unnatural.
Non-linear editors such as DaVinci Resolve, Adobe Premiere Pro and Final Cut Pro offer stabilization functions, and there are also third-party tools using AI, such as Topaz Labs Video AI.
* Digital sampling theory suggests that a digital camera, which is a sampling device, really needs twice as many samples as the final output to guarantee optimum signal quality. For a more detailed explanation of this see this article, and for a quick video showing supersampling (from RED Digital) see this YouTube video.
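A toy example makes the aliasing point concrete. On a one-pixel checkerboard – the finest detail a sensor can record – naive 2:1 decimation collapses the pattern into a flat, wrong value, while averaging each 2×2 block before dropping resolution (the essence of supersampling) produces the correct mid-grey:

```python
import numpy as np

# One-pixel checkerboard: the finest detail the "sensor" can record
i, j = np.indices((8, 8))
checker = ((i + j) % 2).astype(float)

# Naive 2:1 decimation: keep every other sample; the detail aliases
# away entirely, leaving a flat (and wrong) all-zero image
naive = checker[::2, ::2]

# Supersampled 2:1: average each 2x2 block before dropping resolution,
# giving the correct 0.5 grey everywhere
averaged = checker.reshape(4, 2, 4, 2).mean(axis=(1, 3))
```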