> > CDUs exchange heat between coolant liquid and the facility-level water supply.
Oh interesting, I missed that when I went through on the first pass. (I think I space-barred past the image and managed to skip the entire paragraph in between the two images, so that's on me.)
I was running off an informal discussion I had with a hardware ops person several years ago, where he mentioned a push to unify cooling and eliminate thermal transfer points, since they were one of the major sources of inefficiency in modern cooling solutions. By missing that paragraph as I browsed through, I leaned too heavily on my assumptions without realizing it!
Also, not all chips can be liquid cooled, so there will always be an element of air cooling; the fans and so on stay around for the "everything else" cases, and I doubt anybody will really eliminate that. The comment you quoted was mostly directed at the idea that the Cray-1 had liquid cooling: it did, but it transferred the heat to air outside the server, which was an extremely common model for older mainframe setups. It was rare for the heat to be kept in liquid along the whole path.
The CDUs are essentially just passive water-to-water heat exchangers with some fancy electronics attached. You want to run a different chemical mix out to the chillers than you do on the internal loop; the CDU also helps regulate flow and pressure, and leak detection with auto cutoff is fairly essential.
Running directly on facility water would make day-to-day operations and maintenance a total pain.
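For a sense of what those "fancy electronics" do, here's a hypothetical sketch of the control loop: hold a flow setpoint on the secondary loop, respect a pressure ceiling, and cut the loop off if a leak is detected. Sensor names, setpoints, and the proportional trim are invented for illustration, not any vendor's actual firmware.

```python
# Hypothetical CDU control step: regulate secondary-loop flow/pressure,
# cut off the loop on a detected leak. All values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class LoopState:
    flow_lpm: float        # secondary (IT-side) loop flow, litres/min
    pressure_kpa: float    # secondary loop pressure
    leak_detected: bool    # spot/rope leak sensors under the manifolds

FLOW_SETPOINT_LPM = 120.0  # assumed flow setpoint
PRESSURE_MAX_KPA = 300.0   # assumed pressure ceiling

def control_step(state: LoopState, pump_speed_pct: float) -> tuple[float, bool]:
    """Return (new pump speed in %, shutoff valves closed?)."""
    if state.leak_detected:
        # Auto cutoff: stop the pump and isolate the loop.
        return 0.0, True
    # Crude proportional trim toward the flow setpoint...
    error = FLOW_SETPOINT_LPM - state.flow_lpm
    new_speed = max(0.0, min(100.0, pump_speed_pct + 0.1 * error))
    # ...backed off if the loop pressure climbs too high.
    if state.pressure_kpa > PRESSURE_MAX_KPA:
        new_speed = max(0.0, min(new_speed, pump_speed_pct - 5.0))
    return new_speed, False

print(control_step(LoopState(110.0, 250.0, False), 60.0))  # (61.0, False): trims pump up
print(control_step(LoopState(110.0, 250.0, True), 60.0))   # (0.0, True): leak -> pump off, valves closed
```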
One of the biggest problems with water cooling, especially on boards that weren't designed for it, is passive components: they don't usually have a heatsink and therefore don't offer a good surface for a water block, yet they end up in a thermal design which requires airflow - resistors and FETs are common culprits here. Commodity assemblies are also a big problem, with SFPs being a huge pain point in designs I've seen.
The problem is often exacerbated on PCBs designed for air cooling, where the clearance between water-cooled and air-cooled components isn't large enough to fit a water block. When the design allows, the usual solution is to segment these components into a separate air-cooled portion of the board, which is what Google look to have done on these TPU sleds (the last ~third of the assembly looks like it's actively air cooled by the usual array of rackmount fans).
I wonder if you could just put a conventional heatsink in there to cool the air inside the box?
You would have a liquid block on the CPU but you'd also have a heat sink on top that transfers heat from the air to the coolant block, working in reverse compared to normal air cooling heatsinks. The temperature difference would cause passive air circulation and the liquid cooling would now cool both the CPU and the air in the box, without fans.
Seems like something someone would have thought about and tested already though.
Not really practical; it wouldn't transfer much energy at all. Let's say your coolant comes in at 30 °C: if your air is at 40 °C and you've got no fans, you can do the maths, but it may as well be zero.
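Rough numbers on why that gap is so small without fans (the plate area and heat transfer coefficients below are assumed, order-of-magnitude values):

```python
# Newton's law of cooling: Q = h * A * dT.
# h (the heat transfer coefficient) is the term fans change by an order of magnitude.

def heat_removed_watts(h_w_per_m2k: float, area_m2: float, delta_t_k: float) -> float:
    return h_w_per_m2k * area_m2 * delta_t_k

area = 0.05      # ~0.05 m^2 of finned plate surface inside the chassis (assumed)
delta_t = 10.0   # 40 C air against a 30 C coolant-fed plate

# Natural convection (no fans): h is roughly 5-10 W/(m^2*K)
print(heat_removed_watts(7.5, area, delta_t))    # ~4 W
# Forced convection (server fans): h is roughly 50-100 W/(m^2*K)
print(heat_removed_watts(75.0, area, delta_t))   # ~38 W
```

A few watts of passive transfer is why it may as well be zero.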
I was imagining the coolant comes in at a lower temperature, like 20 °C, and maybe keeps the air from going above 40 °C.
It doesn't have to do that much, but maybe you're right. I'm sure they'd be doing this if it were practical; being able to omit thousands of fans would probably save a pretty penny on both hardware and electricity.
Indeed, and the problem is that once you've committed to fans alongside liquid cooling, you can reduce the complexity and plate size massively by just cold plating the big wins (CPU/GPU). I've actually seen setups where they only cold plate the GPU and leave the CPU and its entire motherboard on air cooling.
If you're blasting enough air around to cool a 600W GPU, you don't care if your GPU's power connector dissipates 10W under certain circumstances - the massive airflow will take care of it.
Take that airflow away and you have to be a good deal more careful with your connector selection, quality control and usability or you'll risk melted connectors.
Water-cooling connectors and cables isn't common, outside of things like 250kW EV chargers.
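Rough numbers on that 600 W vs 10 W point (standard air properties; the 15 °C air temperature rise is an assumption):

```python
# Airflow needed to carry heat away in air: Q = rho * V_dot * cp * dT
#   ->  V_dot = Q / (rho * cp * dT)
RHO_AIR = 1.2        # kg/m^3
CP_AIR = 1005.0      # J/(kg*K)
M3S_TO_CFM = 2118.88

def cfm_required(watts: float, delta_t_k: float) -> float:
    """Volumetric airflow (CFM) needed to absorb `watts` with a `delta_t_k` air temperature rise."""
    return watts / (RHO_AIR * CP_AIR * delta_t_k) * M3S_TO_CFM

print(cfm_required(600, 15))  # ~70 CFM to move the GPU's 600 W
print(cfm_required(10, 15))   # ~1.2 CFM covers the connector's 10 W almost incidentally
```

Once the fans sized for the GPU are spinning, the connector's 10 W rides along for free; strip the airflow out and that margin disappears.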