June 27, 2024

‘Native’ Projection Resolutions & Pixel Shifting – A Primer

We recently reported on a new 8K LCOS imager for projectors that has been developed by Sony’s semiconductor group and that was described at Display Week. We described it as a ‘Native 8K’ device, and realised that it has been a while since we last looked at the whole pixel-shifting issue, so here is a refresher.

What is ‘Native 8K’?

So what do we mean by ‘Native 8K’? Simply put, it means that the imager has enough pixels to fully display the 7680 x 4320 UltraHD resolution image without any processing or movement. As we reported in that article, putting that much resolution on a tiny panel is very challenging. LCOS projectors typically use three separate imagers, one each for red, green and blue, with an optical combiner, so there is no worry about sub-pixels, but 33 million is still a lot of pixels to put on a chip.

Up to now, as far as we are aware, all the 8K projectors on the market have been enabled by ‘pixel shifting’ technology. That is to say, the imager itself is of lower resolution (4K is typical), but by optically shifting the image multiple times within a single frame, the projector displays a number of slightly offset sub-images that together create the impression of higher resolution. The technique (also known as wobulation) has been around for a long time, since TI started promoting the idea for its DLP/DMD imagers.

TI’s DLP device, used in projectors for more than a quarter of a century now, uses an electro-mechanical system that is very sensitive to size and scale. Around 20 years ago, when its micromirrors were larger than they are now, TI started to use the wobulation idea (patented in 2000 by HP) to help achieve FullHD resolution. By using two 960 x 1080 images and shifting the image horizontally at twice the frame rate, a projector could create a FullHD image ((960 + 960) x 1080 = 1920 x 1080). That meant the firm could keep its chips smaller (and of a squarer shape). Eventually, TI developed devices with the full 1920 x 1080 resolution, and these days it makes them with 3840 x 2160 4K UltraHD resolution. At one point, it was promoting chips with 2560 x 1600 resolution that were shifted diagonally to create the effect of 4K (used, for example, by Barco). Each of the different resolution chips has been used with wobulation to create higher resolutions, and the 4K UltraHD chips have been used by Digital Projection in its 8K projector.
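The two-sub-frame arithmetic above can be sketched in a few lines of Python. This is purely illustrative: the function name and the even/odd column split are assumptions for the sake of the example, not TI's actual processing.

```python
# Illustrative sketch of two-field horizontal wobulation: a 1920-wide frame
# is split into two 960-wide sub-frames that the imager displays in
# sequence, offset by half a (displayed) pixel. (The even/odd split used
# here is an assumption for illustration, not TI's actual algorithm.)

def split_for_wobulation(frame):
    """frame: list of rows, each a list of 1920 pixel values.
    Returns two 960-wide sub-frames (even and odd columns)."""
    sub_a = [row[0::2] for row in frame]  # columns 0, 2, 4, ...
    sub_b = [row[1::2] for row in frame]  # columns 1, 3, 5, ...
    return sub_a, sub_b

# A toy "frame": two rows, 1920 pixels wide.
frame = [list(range(1920)) for _ in range(2)]
a, b = split_for_wobulation(frame)
assert len(a[0]) == len(b[0]) == 960
assert len(a[0]) + len(b[0]) == 1920  # (960 + 960) x 1080 = 1920 x 1080
```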

Different companies use two images with a diagonal shift, or four images with both vertical and horizontal shifting. In reality, especially with LCOS or DLP, the area around each pixel would be much smaller than the diagram shows.

LCOS Also Wobbles

LCOS-based projectors use optical systems that are broadly similar to those of three-chip DLP projectors, so they too have used wobulation, with JVC’s e-shift system being the most widely recognised way to create 8K using 4K devices. The JVC technology creates four 4K frames for each input frame of 8K, with both horizontal and vertical shifting. There are some nice illustrations of this here.
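The geometry of that four-frame decomposition can be modelled as sampling the 8K frame at four 2 x 2 phase offsets. This is a simplified sketch of the general technique, not JVC's actual e-shift processing, which also involves image filtering.

```python
# Simplified model of four-phase pixel shifting: a full-resolution frame is
# decomposed into four half-resolution sub-frames, one per 2x2 phase
# offset. (Illustrative geometry only; real e-shift processing also
# filters the image.)

def four_phase_subframes(frame):
    """frame: 2D list with an even number of rows and columns.
    Returns four sub-frames for phases (0,0), (0,1), (1,0), (1,1)."""
    phases = [(0, 0), (0, 1), (1, 0), (1, 1)]  # (row shift, column shift)
    return [[row[dx::2] for row in frame[dy::2]] for dy, dx in phases]

# Toy 4 x 4 frame: each sub-frame is 2 x 2, and together the four
# sub-frames cover every pixel of the input exactly once.
frame = [[r * 4 + c for c in range(4)] for r in range(4)]
subs = four_phase_subframes(frame)
assert len(subs) == 4
assert all(len(s) == 2 and len(s[0]) == 2 for s in subs)
covered = sorted(v for s in subs for row in s for v in row)
assert covered == list(range(16))
```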

The other competitor to DLP is the transmissive polysilicon LCD, these days made only by Epson. These panels have a different challenge from the reflective LCOS and DLP devices. In these LCDs, which are made on a quartz substrate rather than the glass of other LCDs, the transistor that controls each pixel sits within the pixel and blocks some of the light. Microlenses can be used to focus the light away from the transistor, but light transmission is still a challenge. That is in contrast to the reflective DLP and LCOS chips, where the controlling transistor structure sits beneath the reflective surface.

For 4K, Epson uses ‘4K Enhancement technology (which) shifts each pixel diagonally to double FullHD resolution’ to quote the literature for its 4K projectors. 

(For a more detailed look into this topic, check out the white paper from Insight Media (registration required). It was created in 2017 but is still relevant and well presented).

Does the Use of Wobulation Matter?

There are two schools of thought about whether it is important to have native 8K resolution imagers. From an engineering point of view, it should be simpler to use a native device, although a larger imager can present optical challenges. From a user point of view, does it matter?

One point of view is that ‘you have to be able to count the pixels’ and that they should be there all the time. It is true that creating each pixel for as little as a quarter of the frame time will usually lead to some loss of luminance, but as there are more pixels in the frame, the overall level of light on the screen should be similar.
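A back-of-envelope check of that luminance argument, under the idealized assumptions that a 4K pixel covers the area of four 8K pixels and that the shifting optics are lossless (real shifters do lose some light):

```python
# Idealized luminance bookkeeping (ignores shifter losses, pixel overlap,
# and fill factor; purely a sanity check of the argument in the text).
native_pixels = 7680 * 4320   # native 8K: every pixel lit for the whole frame
shifted_pixels = 3840 * 2160  # 4K imager used with four shifted sub-frames
duty_cycle = 1 / 4            # each shifted position is lit 1/4 of the frame
area_factor = 4               # one 4K pixel spans the area of four 8K pixels

# Light integrated over one frame, in arbitrary "8K pixel-frame" units:
native_light = native_pixels * 1.0
shifted_light = shifted_pixels * area_factor * 4 * duty_cycle  # 4 sub-frames

assert shifted_light == native_light  # same total light, in this idealized model
```

So, in this simplified model at least, the smaller per-position duty cycle is balanced by the larger pixels, which matches the intuition that overall screen light should be similar.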

However, there is a second point of view that tends to be supported by those who have extensively studied the human factors side. Their argument is that what you perceive matters more than what you measure. The human visual system has innate persistence – that is to say, an image from a very short exposure to light stays on the retina (or in the perceived view, at any rate) for significantly longer than the exposure itself. Without persistence, our cinema and TV systems simply wouldn’t work. The argument is that if the sub-frames are shown fast enough, the viewer will not be aware of the separate images and will see a stable, higher resolution image.

Supporters of the second point of view note that, especially with video content, there is often less high-frequency detail in the signal than might be implied by the container. In other words, a video container may be specified as 4K or 8K, but the actual image may be of much lower resolution. That can be because of losses in optics, de-Bayering or other processing, as well as the common practice of chroma sub-sampling.
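The chroma sub-sampling point is easy to quantify. With the widely used 4:2:0 scheme, the two colour-difference planes are stored at half resolution in each axis, so an ‘8K’ signal carries far fewer colour samples than the container size suggests:

```python
# Sample counts for a 4:2:0 "8K" frame: luma at full resolution, each
# chroma plane at half resolution both horizontally and vertically.
w, h = 7680, 4320
luma = w * h
chroma_per_plane = (w // 2) * (h // 2)

assert chroma_per_plane == luma // 4                  # 1/4 the samples per plane
assert luma + 2 * chroma_per_plane == luma * 3 // 2   # the familiar 1.5 samples/pixel
```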
