The business logic here is >15 years old. HTTP compression was only in its early stages then, and you can guarantee that many client-side scripts and libraries would not have supported it. Zip was well known. Compress the file once, place it on the server, done.
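That "compress and place" approach can be sketched in a few lines of stdlib Python: compress the payload once at publish time and serve the resulting `.zip` as an ordinary static file, so nothing depends on the web server's HTTP-compression configuration. The file name and payload here are illustrative only.

```python
import io
import zipfile

# Illustrative payload: a CSV report that will be published as a download.
payload = b"id,value\n" + b"\n".join(b"%d,%d" % (i, i * i) for i in range(1000))

# Compress once, at publish time. Clients fetch the .zip directly, so no
# gzip/deflate content negotiation with the web server is ever involved.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("report.csv", payload)

archive = buf.getvalue()
print(f"raw: {len(payload)} bytes, zipped: {len(archive)} bytes")
```

The trade-off is exactly the one argued above: the client needs a zip tool (ubiquitous for decades) rather than a server and client that both negotiate HTTP compression correctly.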
If I'm not missing the point here, this was, and still is, about offering the simplest, most reliable solution over a long period. This is a near-perfect example of how to do exactly that. No changing formats, no moving requirements, no big swings in frameworks, APIs, or even standards. And most importantly, no breaking your customers' business workflows.
No edge-case dependencies on the web server's configuration, and no sudden "why did we just saturate our external connection?"
No emergency change requests from the outage team that then have to be coordinated with other areas and fitted into the infrastructure team's maintenance windows and their capacity to address them.
No rebalancing of workloads because Jane had to implement the change (or schedule the task and monitor it), Joe had to check and verify that the external availability tests passed, Annick had to sign off on the change as complete, and now nobody is available for another OT window this week.
> "The business logic here is >15 years old. Http compression was only in early stages then"
At least one of us is confused about history here; are you really saying that circa 2008, HTTP compression would have been considered immature or unstable?