This was before I was born, so I'm hardly an expert, but I've heard of feeding IBM mainframes chilled water. A quick check of Wikipedia turns up some mention of the idea: https://en.wikipedia.org/wiki/IBM_3090
Having to pre-chill water (via a refrigeration cycle) is radically less efficient than simply collecting heat and dispersing it: the refrigeration cycle burns considerable energy up front just to deliver the chilled water. Gathering the heat and rejecting it after it's produced, rather than paying to cool in advance, should be much more energy efficient.
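Back-of-the-envelope, with numbers I'm just assuming for illustration: a vapor-compression chiller with a COP around 4 spends roughly a quarter of the IT heat load in compressor power, while warm-water cooling that only needs pumps and dry-cooler fans might spend a few percent:

```python
# Illustrative comparison (all numbers assumed): chilled-water cooling
# pays a refrigeration penalty up front, while warm-water cooling only
# spends pump/fan power to move the heat outside.

IT_HEAT_KW = 1000.0  # heat produced by the hardware (assumed)

# Chilled water: a chiller with an assumed COP of ~4 burns 1 kW of
# compressor power for every 4 kW of heat it moves.
CHILLER_COP = 4.0
chiller_power_kw = IT_HEAT_KW / CHILLER_COP

# Warm-water cooling: no refrigeration cycle, just pumps and dry-cooler
# fans, assumed here at ~5% of the heat load.
PUMP_FAN_FRACTION = 0.05
warm_water_power_kw = IT_HEAT_KW * PUMP_FAN_FRACTION

print(f"chilled-water overhead: {chiller_power_kw:.0f} kW")    # ~250 kW
print(f"warm-water overhead:    {warm_water_power_kw:.0f} kW") # ~50 kW
```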
I don't know why it surprises me so much, but these rack-sized CDU heat exchangers were quite novel to me. Running a relatively small closed loop against one big loop that has to go outside is a real tradeoff, with a material- and space-intensive cost (a whole rack with 6x CDUs), but the fine-grained control does seem obviously sweet to have. I wish there were a little more justification for the use of heat exchangers!
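For intuition on what the CDU buys (and costs), here's a minimal steady-state sketch, assuming a counterflow liquid-to-liquid exchanger with equal flow on both sides and made-up numbers: the rack's closed loop only runs a small "approach" temperature above the facility supply water.

```python
# Minimal sketch (all numbers assumed) of the liquid-to-liquid heat
# exchanger at the heart of a CDU: the rack's small closed loop rejects
# its heat into the facility loop, paying a small temperature "approach"
# in exchange for keeping the rack plumbing clean and isolated.

CP_WATER = 4186.0  # J/(kg*K), specific heat of water

def secondary_supply_temp_c(heat_w, flow_kg_s, facility_supply_c, eff):
    """Steady-state secondary (rack) loop supply temperature.

    Counterflow exchanger, equal flow on both sides; derived from
    Q = eff * C * (T_sec_return - T_fac_supply) with
    T_sec_return = T_sec_supply + Q / C.
    """
    c = flow_kg_s * CP_WATER  # capacity rate of each stream, W/K
    approach = heat_w * (1.0 - eff) / (eff * c)
    return facility_supply_c + approach

# Example: 40 kW rack, 1.5 kg/s per loop, 25 C facility water, and an
# 85%-effective exchanger -> rack supply only ~1.1 C above facility.
print(secondary_supply_temp_c(40_000, 1.5, 25.0, 0.85))  # ~26.1 C
```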
The way water is distributed within the server is also pretty amazing, with each server having its own "bus bar" of water, and each chip having its own active electro-mechanical valve to control its specific water flow. The TPUv3-style design, where cooling happens serially and each chip in sequence gets hotter and hotter water, seems common-ish, whereas TPUv4 has a fully parallel and controllable design.
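To make the serial-vs-parallel difference concrete, a toy calculation (chip powers and flow rates are just assumptions): in series, each upstream chip pre-heats the water for the next one, while a parallel manifold gives every chip loop-inlet water at the cost of splitting the flow:

```python
# Toy sketch (assumed numbers) of why serial chip cooling is worse for
# the last chip in the chain: each upstream chip heats the water before
# it arrives, while a parallel manifold feeds every chip inlet water.

CP_WATER = 4186.0  # J/(kg*K)

def inlet_temps_series(n_chips, chip_power_w, flow_kg_s, t_in_c):
    """Coolant temperature arriving at each chip when plumbed in series."""
    rise_per_chip = chip_power_w / (flow_kg_s * CP_WATER)
    return [t_in_c + i * rise_per_chip for i in range(n_chips)]

def inlet_temps_parallel(n_chips, t_in_c):
    """In parallel, every chip's cold plate sees loop-inlet water."""
    return [t_in_c] * n_chips

# Example: 4 chips at 300 W each, 0.05 kg/s total loop flow, 30 C inlet.
print(inlet_temps_series(4, 300.0, 0.05, 30.0))
# -> [30.0, 31.4, 32.9, 34.3]  (last chip sees ~4.3 C hotter water)
print(inlet_temps_parallel(4, 30.0))
# -> [30.0, 30.0, 30.0, 30.0]  (at the cost of 1/4 flow per branch)
```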
Also the switch from lidded chips to bare chips, with a cold plate that comes down to just above the die to channel the water, is one of those very detailed, fine-grained optimizations that is just so sweet.
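Roughly what de-lidding buys, with a thermal-resistance stack whose values I'm making up for illustration: the lid and its second TIM layer drop out of the series path from die to water, so the same power needs a smaller temperature delta:

```python
# Rough thermal-stack sketch (all resistances assumed, in K/W) of what
# de-lidding buys: the lid and its second TIM layer drop out of the
# series path from die to water.

CHIP_POWER_W = 300.0

# Lidded: die -> TIM1 -> lid -> TIM2 -> cold plate -> water
lidded_stack = {"tim1": 0.05, "lid": 0.02, "tim2": 0.05, "cold_plate": 0.04}
# Bare die: die -> TIM1 -> cold plate -> water
bare_stack = {"tim1": 0.05, "cold_plate": 0.04}

for name, stack in [("lidded", lidded_stack), ("bare die", bare_stack)]:
    delta_t = CHIP_POWER_W * sum(stack.values())
    print(f"{name:8s}: die runs {delta_t:.0f} C above the water")
# lidded  : ~48 C above the water
# bare die: ~27 C above the water
```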
When our mainframe sprang a leak in its water-cooling jacket in 1978, it took down the main east/west node on IBM's internal network at the time. :-) But that was definitely a different chilling mechanism than the types Google uses.