The web didn't go from streaming 480p straight to 4k. There were a couple of intermediate jumps in pixel count that were enabled in large part by better compression. Notably, there was a time period where it was important to ensure your computer had hardware support for H.264 decode, because it was taxing on low-power CPUs to do at 1080p and you weren't going to get streamed 1080p content in any simpler, less efficient codec.
Correct. DCT maps N real numbers to N real numbers. It reorganizes the data to make it more amenable to compression, but DCT itself doesn't do any compression.
The real compression comes from quantization and entropy coding (Huffman coding, arithmetic coding, etc.).
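To make that concrete, here's a minimal pure-Python sketch of an 8-point orthonormal DCT-II and its inverse. The DCT itself is lossless (8 numbers in, 8 numbers out, perfectly invertible); the "compression" only happens when you quantize, here crudely simulated by zeroing all but the first few coefficients. The sample values are made up for illustration.

```python
import math

def dct2(x):
    # Orthonormal DCT-II: maps N samples to N coefficients (no compression yet)
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

def idct2(X):
    # Inverse (orthonormal DCT-III): reconstructs the original samples exactly
    N = len(X)
    out = []
    for n in range(N):
        s = X[0] * math.sqrt(1.0 / N)
        s += sum(math.sqrt(2.0 / N) * X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                 for k in range(1, N))
        out.append(s)
    return out

# A smooth 8-sample signal: energy compaction piles most of the energy
# into the low-frequency coefficients
x = [16, 18, 21, 25, 28, 30, 31, 31]
X = dct2(x)

# Crude "quantization": keep only the first 3 coefficients, zero the rest
X_q = X[:3] + [0.0] * (len(X) - 3)
x_rec = idct2(X_q)
err = max(abs(a - b) for a, b in zip(x, x_rec))  # reconstruction stays close
```

The point of the demo: the full round trip `idct2(dct2(x))` is exact, and even after throwing away 5 of 8 coefficients the reconstruction error is small for smooth data. That's the energy compaction property, and it's what makes quantization plus entropy coding so effective afterwards.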
> DCT compression, also known as block compression, compresses data in sets of discrete DCT blocks.[3] DCT blocks sizes including 8x8 pixels for the standard DCT, and varied integer DCT sizes between 4x4 and 32x32 pixels.[1][4] The DCT has a strong energy compaction property,[5][6] capable of achieving high quality at high data compression ratios.[7][8] However, blocky compression artifacts can appear when heavy DCT compression is applied.
The DCT was proposed in 1972, and DCT-based compression can reach ratios on the order of 100:1.
H.264 can reach on the order of 2000:1.
And standard resolution (480p) is ~1/30th the resolution of 4k.
---
I.e. standard resolution with DCT-era compression needs less bandwidth than 4k with H.264.
Even high-definition (720p) with DCT is only about twice the bandwidth of 4k H.264.
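A quick back-of-envelope check of those claims, using the thread's round compression numbers (which are rough, order-of-magnitude figures) and assuming the same frame rate and bit depth throughout:

```python
# Pixel counts (480p taken as 640x480 here)
PIXELS_4K = 3840 * 2160    # ~8.3 MP
PIXELS_480P = 640 * 480    # ~0.31 MP, roughly 1/27th of 4k
PIXELS_720P = 1280 * 720   # 1/9th of 4k

# Order-of-magnitude compression ratios from the comments above
DCT_RATIO = 100
H264_RATIO = 2000

# Relative bandwidth ~ pixels / compression ratio
bw_480p_dct = PIXELS_480P / DCT_RATIO   # 3072
bw_720p_dct = PIXELS_720P / DCT_RATIO   # 9216
bw_4k_h264 = PIXELS_4K / H264_RATIO     # 4147.2

print(bw_480p_dct < bw_4k_h264)         # 480p + DCT fits under 4k + H.264
print(bw_720p_dct / bw_4k_h264)         # ~2.2x
```

So with these numbers, 480p under 100:1 compression uses about 3/4 the bandwidth of 4k under 2000:1, and 720p comes in at roughly 2.2x.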
Modern compression has allowed us to add a bunch more pixels, but it was hardly a requirement for internet video.