The total size isn't what matters in this case but rather the total number of files/directories that need to be traversed (and their file sizes summed).
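A minimal sketch of the kind of walk involved (this toy version ignores symlinks and hard links and just propagates errors), to show why the cost scales with entry count rather than byte count:

```rust
use std::fs;
use std::io;
use std::path::Path;

// Sum file sizes under `dir` the way a du-style tool must: by visiting
// every entry. The work scales with the number of files/directories
// traversed, not with the number of bytes they contain.
fn total_size(dir: &Path) -> io::Result<u64> {
    let mut sum = 0;
    for entry in fs::read_dir(dir)? {
        let entry = entry?;
        let meta = entry.metadata()?;
        sum += if meta.is_dir() {
            total_size(&entry.path())?
        } else {
            meta.len()
        };
    }
    Ok(sum)
}
```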
> I've seen claims of providers putting IPv6 behind NAT, so don't think full IPv6 acceptance will solve this problem.
I get annoyed even when what's offered is a single /64 prefix (rather than something like a /56 or even /60), but putting IPv6 behind NAT is just ridiculous.
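To put numbers on why the prefix size matters, here's a quick back-of-the-envelope sketch, assuming one /64 per SLAAC-capable network:

```rust
fn main() {
    // Each /64 is one SLAAC-capable network, so the number of usable
    // subnets in a delegated prefix is 2^(64 - prefix_length).
    // A bare /64 leaves nothing to subdivide.
    for prefix in [56u32, 60, 64] {
        println!("/{prefix} -> {} /64 subnet(s)", 1u64 << (64 - prefix));
    }
}
```

This prints 256 subnets for a /56, 16 for a /60, and exactly 1 for a /64.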
If that's really the case, I wish they would just come out and say it and spare the rest of us the burden of trying to debate such a decision on its technical merits. (Of course, I am aware that they owe me nothing here.)
Assuming this theory is true, what other GPLv3-licensed "core" software in the distro could be next on their list?
Maybe the thought is that the larger install base will put more pressure on getting all the tests to pass? It isn't a great way to push out software, but it certainly provides motivation. I'm personally more interested in whether the ultimate decision will be to keep these as the default coreutils implementation in the next Ubuntu LTS release (26.04) or to switch back (and for what reason).
I can certainly understand it for something like sudo, or for other tools where the attack surface is larger and security-critical interactions are happening. But in this case it really seems like a questionable tradeoff: the benefits are abstract (theoretically, no more possibility of memory-safety bugs), while the costs are very concrete (incompatibility issues, and possibly other, new, non-memory-safety bugs introduced with new code).
EDIT: Just to be clear, I'm otherwise perfectly happy that these experiments are being done, and we should all be better off for it and learn something as a result. Obviously somebody has assessed that this tradeoff has at least a decent probability of being a net positive here in some timeframe, and if others are unhappy about it then I suppose they're welcome to install another implementation of coreutils, or use a different distro, or write their own, or whatever.
I'd prefer it if all software were written in languages that make it as easy as possible to avoid bugs, including memory-safety bugs, regardless of whether it seems to have a large attack surface or not.
I view `uutils` as a good opportunity to get rid of legacy baggage that might be used by just 0.03% of the community but still has to sit there, impeding certain feature additions and bug fixes.
For example, `sudo-rs` does not support most of what the normal `sudo` does... and it turned out that most people did not need most of `sudo` in the first place.
OpenBSD has a lot of new stuff throughout the codebase.
There's no need to add a bloated dependency (e.g. Rust) just because you want to re-implement `yes` in a "memory-safe language" when you probably have no reason to.
I'm not going to speculate about what might be ahead for Oracle's forecasting of data center demand, but as for the idea of efficiency gains leading to lower demand, don't you think something like the Jevons paradox might apply here?
The industry definitely seems to be going in this hybrid PQC-classical direction for the most part. At least until we know there's a real quantum computer somewhere that renders the likes of RSA, ECC, and DH no longer useful, it seems this conservative approach of using two different types of locks in parallel might be the safest bet for now.
However, what's notable is that the published CNSA 2.0 algorithms in this context are exclusively of the post-quantum variety, and even though there is no explicit disallowing of hybrid constructions, the NSA publicly deems them unnecessary (from their FAQ [0]):
> NSA has confidence in CNSA 2.0 algorithms and will not require NSS developers to use hybrid certified products for security purposes.
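For anyone unfamiliar with what a hybrid construction looks like mechanically, here's a minimal sketch of the "two locks" idea using the `sha2` crate. The inputs stand in for, say, an X25519 shared secret and an ML-KEM one; real protocols (e.g. hybrid TLS key exchange) use proper KDFs with transcript binding rather than a bare concatenate-and-hash:

```rust
use sha2::{Digest, Sha256};

// Feed both shared secrets through one derivation step, so an attacker
// must break BOTH the classical and the post-quantum exchange to
// recover the final session key.
fn combine_secrets(classical: &[u8], post_quantum: &[u8]) -> [u8; 32] {
    let mut h = Sha256::new();
    h.update(classical);
    h.update(post_quantum);
    h.finalize().into()
}
```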
> until there is actually a quantum computer that can break it
There isn't one yet (at least that the general public knows about), but that doesn't mean there's nothing we need to do about it right now. See "harvest now, decrypt later," for example: today's encrypted data would be affected if it were captured and saved to storage for the long term: https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later
There's also a lunar occultation of Mars (which is near opposition itself, making it relatively bright) happening in a few days, and then again in February, which should be visible from parts of the northern hemisphere: https://in-the-sky.org/news.php?id=20250114_16_100
By precisely timing them you can measure or check quantities like distance, diameter, and so on. In fact, if you time them precisely from different locations on Earth, you can determine the shape of the occulting body (e.g. an asteroid occulting a star). And on occasion you can get a 'grazing occultation', where, for example, a star passes behind mountains on the Moon's limb and blinks on and off; observe from multiple latitudes and it's possible to recover the profile of the range.
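To make the timing-to-size step concrete: the occulting body's shadow sweeps across the ground at a speed known from its ephemeris, so each timed event yields one chord of the silhouette, and chords from several latitudes trace the outline. A toy calculation with made-up numbers:

```rust
fn main() {
    // Hypothetical numbers for illustration only.
    let shadow_velocity_km_s = 15.0; // shadow's ground speed, from the ephemeris
    let duration_s = 4.2;            // timed disappearance-to-reappearance at one site
    // Each precisely timed event gives one chord across the silhouette:
    let chord_km = shadow_velocity_km_s * duration_s;
    println!("chord length: {chord_km} km"); // 63 km at this observer's latitude
}
```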
Occultations can also tell you about the atmosphere of the object in front. The rate at which the background object fades can reveal atmospheric density, composition, etc.; if it disappears suddenly, that indicates there may be no atmosphere.
> If you do consider paying for either Wubuntu or LinuxFX, it's worth keeping in mind that in the past, the developer's activation system and registration database have both been investigated and found to be horribly insecure. However, from the database, it looks like some 20,000 people did pay.
Even for someone who wanted to use it for anything serious without paying (or otherwise providing any personal information in the process), this is a huge turnoff.