> Applications on Linux should use urandom to the exclusion of any other CSPRNG
Applications, yes. Appliances built on it - now that's more open to interpretation.
Back in 2003/2004 I was building a centrally managed security appliance system. At the time I made the hard choice that first-boot operations (such as generating long-term device keys) MUST use /dev/random. It made the initial installs take longer, but I refused to take the chance that an attacker could install and instrument a few hundred nodes and probe for weaknesses in the entropy sources.
Once the first-boot sequence was over, applications used /dev/urandom for everything. This included the ipsec daemons. Forcing everything to /dev/random during first boot made sure that on subsequent boots there would be (for all practical purposes) enough entropy available for urandom to work securely.
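That carry-over of entropy between boots is the same seed-file dance distro init scripts (and later systemd-random-seed) do. A minimal sketch, with the path, size, and function names being my own assumptions rather than anything from the appliance:

```python
import os

SEED_BYTES = 512  # assumed size; init scripts typically use the pool size

def save_seed(path):
    # At shutdown (or right after first boot): stash CSPRNG output
    # so the next boot starts with a non-empty pool.
    with open(path, "wb") as f:
        f.write(os.urandom(SEED_BYTES))

def load_seed(path):
    # At early boot: mix the saved seed back into the kernel pool.
    # Note: writing to /dev/urandom mixes the bytes in but does NOT
    # credit entropy; crediting requires the RNDADDENTROPY ioctl as root.
    try:
        with open(path, "rb") as src, open("/dev/urandom", "wb") as dst:
            dst.write(src.read())
        return True
    except OSError:
        return False
```

The important property is that the seed file must be written at least once after the pool has genuinely been seeded, which is exactly what forcing /dev/random on first boot guaranteed.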
The first-boot problems were amplified by the fact that we were running our nodes inside virtualization. (At the time: UML, and we built our own on top of it. Xen wasn't nearly ready enough back then.)
It's fascinating to see that the problems we had to deal with 10 years ago are now becoming an issue again. To this day I choose to use /dev/random if I need to generate key material shortly after boot (which might be the install itself), or for my own long-term use. Good thing personal GPG keys have a shelf-life of several years...
If you're building an appliance, why wouldn't you simply ensure urandom is seeded at first boot?
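On a modern kernel you don't even need a custom first-boot policy for this: getrandom(2) with no flags blocks only until the urandom pool has been initialized once, then never blocks again. A sketch of a first-boot-safe read (Python just for brevity; os.getrandom exists since Python 3.6 on Linux 3.17+):

```python
import os

def seeded_bytes(n):
    # getrandom(2) with flags=0 blocks until the kernel pool has been
    # initialized once, then reads the (non-blocking) urandom pool.
    if hasattr(os, "getrandom"):
        return os.getrandom(n)
    # Fallback for non-Linux / older Python: plain urandom, which gives
    # no guarantee that the pool was ever seeded.
    return os.urandom(n)
```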
I'm sympathetic to people's concerns about generating long-term keys. But my problem is, /dev/random isn't addressing the major risks there either. You should generate long-term keys on entirely separate hardware.
Lordy. Hyperbole much? There are cases where I am sure having access to a blocking source of entropy is interesting. Perhaps it has nothing to do with crypto. Maybe it's mathematical or scientific in nature. Who knows. But it's good to give developers an option and not cut off access to useful tools because you think they can't handle it.
Would you care to lay out a scenario in which a scientific application might care about the decision that the Linux kernel RNG makes about entropy estimation? Please make sure your answer takes into account how the Linux entropy estimator actually works.