My intuition would be that constant usage (not exceeding maximum rated capacity/thermals/etc.) should generally result in less wear compared to the more frequent thermal cycling that you might expect from intermittent use, but maybe there's something else going on here too. I suppose this would depend on what exactly the cause of the failure is.

Either way, these are obviously being intentionally sold to be used for non-gaming-type workloads, so it wouldn't be a good argument to state that they're just being (ab)used beyond what they were intended for... unless somehow they really are being pushed beyond design limits, but given the cost of these things I can't imagine anyone doing this willingly with a whole fleet of them.


Electromigration may be a factor.

Electromigration decays exponentially with inverse temperature. If it's genuinely a factor, you're running that GPU way too hot.
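To put rough numbers on that exponential dependence, here's a minimal sketch based on the Arrhenius term in Black's equation for electromigration lifetime; the activation energy and the two temperatures are illustrative assumptions, not measurements of any particular GPU.

    import math

    # Arrhenius term from Black's equation: MTTF is proportional to exp(Ea / (k*T)).
    # The activation energy and temperatures below are assumptions for illustration.
    K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K
    EA_EV = 0.9                # assumed electromigration activation energy

    def mttf_ratio(t_cool_c: float, t_hot_c: float) -> float:
        """How many times longer the expected electromigration lifetime is
        at t_cool_c than at t_hot_c, all else (current density, etc.) equal."""
        t_cool = t_cool_c + 273.15
        t_hot = t_hot_c + 273.15
        return math.exp(EA_EV / K_BOLTZMANN_EV * (1 / t_cool - 1 / t_hot))

    # e.g. running the silicon at 70 C instead of 95 C
    print(f"{mttf_ratio(70, 95):.1f}x longer expected lifetime")

The point is just how steep that curve is: backing the temperature down even modestly buys a multiple of expected lifetime, which is why running "way too hot" dominates everything else.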

But if everyone follows this advice, then everything just gets overwhelmed by "hustlers" (and their "shameless spam"), and collectively we're now all worse off because of it. It just turns into yet another tragedy of the commons situation.

I say this as someone who received a lot of great feedback and had some interesting interactions after posting about a project of mine using "Show HN" a few years ago. I didn't need to spam anything to get the attention, but I admit maybe I just got very lucky, or maybe there were just fewer posts to "compete" with at the time (this was before the recent write-everything-with-AI-and-launch-it-out-there craze).

Finally, I'm not making any moral judgments here, and if someone feels they need to do this to get the attention they want, then who am I to tell them otherwise. But we should be aware of what we're giving up when we collectively tend to behave this way, even if it's the inevitable outcome.


Yes, this is why it's called a race to the bottom. If everyone does what is best for themselves then everyone's result will be worse.

The total size isn't what matters in this case but rather the total number of files/directories that need to be traversed (and their file sizes summed).
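As a rough sketch of why (not how any particular tool actually implements it): a du-style walk has to visit and stat every entry, so the work scales with the number of entries rather than the bytes they hold.

    import os

    def du(path: str) -> tuple[int, int]:
        """Walk a tree, returning (total_bytes, entries_visited).
        Runtime is driven by entries_visited, not by total_bytes."""
        total_bytes = 0
        entries = 0
        for dirpath, dirnames, filenames in os.walk(path):
            entries += 1 + len(filenames)  # the directory itself plus its files
            for name in filenames:
                try:
                    total_bytes += os.lstat(os.path.join(dirpath, name)).st_size
                except OSError:
                    pass  # e.g. a file removed while we're walking
        return total_bytes, entries

A directory with one 100 GB file finishes almost instantly; a directory with a million tiny files does not.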


I responded here, it's essentially the same content: https://news.ycombinator.com/item?id=46150030


> I've seen claims of providers putting IPv6 behind NAT, so don't think full IPv6 acceptance will solve this problem.

I get annoyed even when what's offered is a single /64 prefix (rather than something like a /56 or even /60), but putting IPv6 behind NAT is just ridiculous.


What is a single /64 prefix not enough for?


Multiple local networks while still using SLAAC.


Separating out main, guest, work, internet-of-shit, security & VPN subnets
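To make the difference concrete (addresses below are the RFC 3849 documentation prefix, purely illustrative): a delegated /56 carves cleanly into 256 SLAAC-capable /64s, while a single /64 leaves nothing to split off.

    import ipaddress

    # Documentation prefix (RFC 3849), purely illustrative.
    delegated = ipaddress.ip_network("2001:db8:1234:ab00::/56")

    # A /56 yields 256 /64 subnets: one each for main, guest, work, IoT, ...
    lans = list(delegated.subnets(new_prefix=64))
    print(len(lans))         # 256
    print(lans[0], lans[1])  # 2001:db8:1234:ab00::/64 2001:db8:1234:ab01::/64

    # With only a /64 delegated there is nothing left to subdivide: anything
    # longer than /64 breaks SLAAC, which expects a 64-bit interface identifier.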


If that's really the case, I wish they would just come out and say it and spare the rest of us the burden of trying to debate such a decision on its technical merits. (Of course, I am aware that they owe me nothing here.)

Assuming this theory is true, then what other GPLv3-licensed "core" software in the distro could be next on their list?


Maybe the thought is that there will be more pressure now on getting all the tests to pass given the larger install base? It isn't a great way to push out software, but it's certainly a way to provide motivation. I'm personally more interested in whether the ultimate decision will be to leave these as the default coreutils implementation in the next Ubuntu LTS release version (26.04) or if they will switch back (and for what reason).


I can certainly understand it for something like sudo or for other tools where the attack surface is larger and certain security-critical interactions are happening, but in this case it really seems like a questionable tradeoff: the benefits are abstract (theoretically no more possibility of memory-safety bugs) while the costs are very concrete (incompatibility issues, and possibly other, new, non-memory-safety bugs being introduced with new code).

EDIT: Just to be clear, I'm otherwise perfectly happy that these experiments are being done, and we should all be better off for it and learn something as a result. Obviously somebody has assessed that this tradeoff has at least a decent probability of being a net positive here in some timeframe, and if others are unhappy about it then I suppose they're welcome to install another implementation of coreutils, or use a different distro, or write their own, or whatever.


I'd prefer it if all software was written in languages that made it as easy as possible to avoid bugs, including memory-safety bugs, regardless of whether it seems like it has a large attack surface or not.


I view `uutils` as a good opportunity to get rid of legacy baggage that might be used by just 0.03% of the community but still has to sit there, impeding certain feature additions and bug fixes.

For example, `sudo-rs` does not support most of what the normal `sudo` does... and it turned out that most people did not need most of `sudo` in the first place.

Less code leads to fewer bugs.


> "sudo"

Hence "doas".

OpenBSD has a lot of new stuff throughout the codebase.

No need to add a bloated dependency (e.g. Rust) just because you want to re-implement "yes" in a "memory-safe language" when you probably have no reason to.


I'm not going to speculate about what might be ahead in regards to Oracle's forecasting of data center demand, but regarding the idea of efficiency gains leading to lower demand, don't you think something like Jevons paradox might apply here?


The industry definitely seems to be going in this hybrid PQC-classical direction for the most part. At least until we know there's a real quantum computer somewhere that renders the likes of RSA, ECC, and DH no longer useful, it seems this conservative approach of using two different types of locks in parallel might be the safest bet for now.
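To illustrate the "two locks" idea, here's a minimal sketch of a hybrid key derivation, assuming the classical (e.g. X25519) and post-quantum (e.g. ML-KEM) shared secrets have already been established by their respective exchanges; real protocols use a standardized KDF and bind in the handshake transcript, so treat this only as the shape of the construction.

    import hashlib
    import hmac
    import os

    def hybrid_session_key(ss_classical: bytes, ss_pq: bytes, context: bytes) -> bytes:
        """Combine a classical and a post-quantum shared secret so that the
        result stays secret as long as EITHER input does (HKDF-style mix)."""
        # Extract: concatenate both secrets and compress them with a keyed hash.
        prk = hmac.new(b"hybrid-kdf-salt", ss_classical + ss_pq, hashlib.sha256).digest()
        # Expand: bind the derived key to the protocol context.
        return hmac.new(prk, context + b"\x01", hashlib.sha256).digest()

    # Stand-ins for secrets that would come from X25519 and ML-KEM respectively.
    ss_ecdh = os.urandom(32)
    ss_kem = os.urandom(32)
    key = hybrid_session_key(ss_ecdh, ss_kem, b"example-handshake-transcript")

An attacker then has to break both the elliptic-curve exchange and the lattice KEM to recover the key, which is exactly the hedge the hybrid approach buys.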

However, what's notable is that the published CNSA 2.0 algorithms in this context are exclusively of the post-quantum variety, and even though hybrid constructions are not explicitly disallowed, the NSA publicly deems them unnecessary (from their FAQ [0]):

> NSA has confidence in CNSA 2.0 algorithms and will not require NSS developers to use hybrid certified products for security purposes.

[0] https://www.nsa.gov/Press-Room/News-Highlights/Article/Artic...


They don't endorse hybrid constructions but they also don't ban them. From the same document:

> However, product availability and interoperability requirements may lead to adopting hybrid solutions.


> until there is actually a quantum computer that can break it

There isn't one yet (at least that the general public knows about), but that doesn't mean we don't need to do anything about it right now. See this problem, for example, which would potentially affect today's encrypted data if it were harvested and saved to storage for the long term: https://en.wikipedia.org/wiki/Harvest_now,_decrypt_later

