April 28, 2023

No one can see 4K (or 8K), right?

That argument is older than most of you reading this.

(Source: thetrilogytapes)

Summary: The introduction of HDTV in the US in 1998 saw slow adoption, gated by content availability and the need for a personal delivery system. When 4K arrived 15 years later, an infrastructure for content delivery was already in place, but debate turned to whether the human eye could even see the difference. Despite this, 4K and HDR (introduced in 2014) were widely accepted as improvements in display technology. Ironically, similar debates occurred in the computer industry 50 years ago about user demand for high-resolution displays, with experts arguing that the human visual system had hard limits. Technology has nonetheless progressed to today's displays of up to 8K, which challenge the human visual system; meanwhile, users with high-resolution multi-display setups report they are doing more because they can see more.

by Jon Peddie

In 1998, HDTV was introduced in the U.S. Its popularity grew slowly, gated by content availability and a personal delivery system (Blu-ray). Eventually the networks and cable operators added more content. When 4K was introduced 15 years later, a digital infrastructure for content delivery was already in place and most content was being mastered in 4K or 5K, so the bottleneck became network bandwidth, not content.

Adoption rate of HD vs. 4K (Source: CTA)

Putting aside the plumbing and content-creation issues, as important as they are, there was the perception debate about 4K: could a human really see it, whatever "it" was? Lengthy, mind-numbing proofs were offered (some by yours truly; sorry) that invoked the retina's subtended arc, focus depth, and, most importantly, the brain's ability to fill in the gaps. But things did look different in 4K, which arrived just as TV screens made a major jump in size while dropping in price. How could we resist? Some argued we were seeing the emperor's new clothes, that 4K looked good because we wanted it to, a bit like why the stock market rises. Whatever the reason or motivation, 4K does in fact look better. And to put the icing on the cake, a year later, in 2014, HDR was introduced, and then things REALLY looked better. In fact, many said HDR was more important and delivered more delight than 4K ever could or would.
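The acuity arithmetic behind that debate is easy to reproduce. A minimal sketch, assuming the common 20/20 benchmark of roughly one arcminute per resolvable feature (the screen size and viewing distance below are illustrative, not from the original debates):

```python
import math

def pixel_pitch_m(diagonal_in, h_pixels=3840, v_pixels=2160):
    """Physical pixel pitch (meters) of a 16:9 panel of the given diagonal."""
    diag_px = math.hypot(h_pixels, v_pixels)            # diagonal in pixels
    width_m = diagonal_in * 0.0254 * h_pixels / diag_px  # panel width
    return width_m / h_pixels

def pixel_arcmin(diagonal_in, distance_m):
    """Angle (arcminutes) one pixel subtends at the viewer's eye."""
    pitch = pixel_pitch_m(diagonal_in)
    return math.degrees(math.atan2(pitch, distance_m)) * 60

# A 65-inch 4K set viewed from 2.5 m: each pixel subtends ~0.5 arcminute,
# below the ~1 arcminute limit of 20/20 acuity, so single pixels vanish.
angle = pixel_arcmin(65, 2.5)

# Distance at which one pixel subtends exactly 1 arcminute (~1.3 m here):
# sit closer than this and 4K detail is, in principle, resolvable.
one_arcmin = math.radians(1 / 60)
threshold_m = pixel_pitch_m(65) / math.tan(one_arcmin)
```

By this arithmetic, a 65-inch 4K set only out-resolves the eye at viewing distances under about 1.3 m, which is roughly the skeptics' argument; the counterargument, as the article notes, is that raw acuity is not the whole story of how an image looks.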

The irony of this discussion about what we can or can't see is that it is almost as old as computers. Almost half a century ago, the technical computing and fledgling workstation market was arguing about what users demanded versus what the industry was delivering. The industry was slowly transitioning from calligraphic stroke-writer vector displays, which used 30-inch round tubes like a radar screen, to raster-scan displays. Some referred to them as "big oscilloscopes." They were incredibly accurate, and it was not uncommon for engineers to take physical measurements directly from the screen of such a display. Someone, forever lost to history but deserving of fame, worked out the equivalent resolution of those displays. As with most things, there wasn't unanimous agreement, but the general view was that they were 3000×3000 to 5000×5000 in equivalent pixels. Some argued they were effectively infinite, limited only by the grain of the phosphor and the resolution (microvolts) of the x-y deflection voltage.

When the first bit-mapped raster screens were introduced to the engineering community, they had the astounding resolution of 512×512. That limitation was due mostly to the cost of RAM and its bandwidth; remember, this was back in the mid-1970s.
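The RAM arithmetic makes that constraint concrete. A quick sketch, assuming a monochrome bitmap at 1 bit per pixel (the ratio, not the era's actual part prices, is the point):

```python
def framebuffer_bytes(width, height, bits_per_pixel=1):
    """Memory needed to hold one full bitmap frame."""
    return width * height * bits_per_pixel // 8

raster = framebuffer_bytes(512, 512)     # 32,768 bytes (32 KiB)
wanted = framebuffer_bytes(3000, 3000)   # 1,125,000 bytes (~1.07 MiB)

# The 3000x3000 display users asked for needs ~34x the RAM of 512x512,
# a non-starter when memory was priced per kilobyte in the mid-1970s.
ratio = wanted / raster
```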

The end users balked, saying, “the systems which have been developed and employed in various operating environments were not strictly specified by the end user. The systems were the product of the engineering judgement of the developer. Interaction with the ultimate user or the photo-interpreter was highly limited. . . if it looked or seemed good to the engineer or programmer, then the function in question was included.”

AutoCAD on a 1982 monitor (Source John Walker)

The end users wanted a minimum of 3000×3000. To counter the complaints, and to defend the investment being made in raster-scan systems, the suppliers brought in experts, who said, “the psychologist would agree that the human visual system will not use more than 500×500 displayed points due to resolution limits.” Sound familiar?

The researchers, trying to defend their low-res raster displays, said, “One solution which seems to offer promise was the use of a degraded image as a search field, with the high-resolution data stored in a rapidly accessible form and a mechanism of extracting any 512×512 designated by the user. In this approach, the user would search the low-resolution image, and when (or if) something interests him, then by the appropriate pointer and command, the system would fetch a high-resolution image for detail study.”

Now doesn’t that sound a bit like DLSS (an Nvidia technology for AI-assisted upscaling of images on GPUs – ed.)? Render small, ray trace if you must, and then scale up for final examination: the same idea, 50 years ago.
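The researchers' scheme, browse a degraded overview and pull full-resolution detail on demand, can be sketched in a few lines (a toy in-memory image; the function names are illustrative, not taken from any period system):

```python
def make_overview(image, factor):
    """Downsample by keeping every `factor`-th pixel: the degraded search field."""
    return [row[::factor] for row in image[::factor]]

def fetch_tile(image, x, y, size=512):
    """Return the full-resolution size x size tile with top-left corner (x, y):
    the 'appropriate pointer and command' fetching detail for study."""
    return [row[x:x + size] for row in image[y:y + size]]

# Toy 2048 x 2048 "high-resolution" image held as nested lists of gray values.
hi_res = [[(x + y) % 256 for x in range(2048)] for y in range(2048)]

overview = make_overview(hi_res, factor=4)   # 512 x 512 search field
tile = fetch_tile(hi_res, x=512, y=1024)     # 512 x 512 detail at full res
```

The overview fits the era's 512×512 framebuffer while the full image stays in "rapidly accessible" storage; only the tile the user points at is ever shown at full resolution.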

And as I write this on my 32-inch 8K UHD display, I wonder if technology is really progressing, or if we're mainly finding ways to build what we always wanted and knew we needed?

Looking at the chart, it took forty years to get back to a real high-resolution display. And now, with 8K, we have one that truly challenges the human vision system, though not the brain; that's a discussion for a future posting.

Display resolution over time

You’ve heard it before—the more you can see, the more you can do. Those who believe that have big, high-resolution multiple displays and are doing more. Ask them what they can and cannot see. (JP)
