Write-only perhaps, but with perl you only have to write it once and it'll run forever, anywhere. No breaking on updates, no containers, no special version of Perl just for $application, just the system perl.
Because of this, in practice, the amount of system administration maintenance and care needed for perl programs is far, far less than for other languages like python, where you actually do have to go in and rewrite it all the time due to dep hell and rapid changes/improvements to the language. For corporate application use cases these rewrites are happening all the time anyway, so it doesn't matter. But for system administration it's a significant difference.
Agreed! My father (RIP) absolutely loved Perl and could do amazing things with it in seemingly impossibly-few characters. I got reasonably proficient w/ regex but never came close to his wizardry. Much respect for those in his rarified company.
Pure perl modules are not, unless they use syntactic features that first appear in the newer versions.
Modules with C extensions have to be recompiled against the libperl they run with, just as CPython extensions link to a particular libpython, and I'd guess Ruby is the same. But they will, with very few exceptions, recompile and run fine. XS is cryptic but its backwards compatibility story is good.
Perl's "decline" saved it from a fate worse than death: popularity and splitting into dozens of incompatible versions from added/removed features (like python). Instead Perl is just available everywhere in the same stable form. Scripts can always just use the system perl interpreter. And most of the time a script written in $currentyear can run just as well on a system perl interpreter from 2 decades ago (and vice versa). It is the perfect language for system administration and personal use. Even if it isn't for machine learning and those kinds of bleeding edge things that need constant major changes. There are trade-offs.
This kind of ubiquitous availability (from early popularity), combined with the huge drop-off in popularity due to raku/etc, led to a unique and very valuable situation unmatched by any other comparable language. Perl just works everywhere. No containers, no dep hell, no specific versions of the language needed. Perl is Perl and it does what it always has, reliably.
Perl's binary brings with it the ability to run every release of the language, from 5.8 onwards. You can mix and match Perl 5.30 code with 5.8 code with 5.20 code, whatever, just say "use v5.20.0;" at the start of each module or script.
By comparison, Python can barely go one version without both introducing new things and removing old things from the language, so anything written in Python is only safe for a fragile, narrow window of versions, and anything written for it needs to keep being updated just to stay where it is.
Python interpreter: if you can tell "print" is being used as a keyword rather than a function call, in order to scold the programmer for doing that, you can equally just perform the function call.
> By comparison, Python can barely go one version without both introducing new things and removing old things from the language
Overwhelmingly, what gets removed is from the standard library, and it's extremely old stuff. As recently as 3.11 you could use `distutils` (the predecessor to Setuptools). And in 3.12 you could still use `pipes` (a predecessor to `subprocess` that nobody ever talked about even when `subprocess` was new; `subprocess` was viewed as directly replacing DIY with `os.system` and the `os.exec` family). And `sunau`. And `telnetlib`.
Can you show me a real-world package that was held back because the code needed a feature or semantics from the *interpreter* of a 3.x Python version that was going EOL?
> Python interpreter: if you can tell "print" is being used as a keyword rather than a function call, in order to scold the programmer for doing that, you can equally just perform the function call.
No, that doesn't work because the statement form has radically different semantics. You'd need to keep the entire grammar for it (and decide what to do if someone tries to embed a "print statement" in a larger expression). Plus the function calls can usually be parsed as the statement form with entirely permissible parentheses, so you have to decide whether a file that uses the statement should switch everything over to the legacy parsing. Plus the function call affords syntax that doesn't work with the original statement form, so you have to decide whether to accept those as well, or else how to report the error. Plus in 2.7, surrounding parentheses are not redundant, and change the meaning:
    $ py2.7
    Python 2.7.18 (default, Feb 20 2025, 09:47:11)
    [GCC 13.3.0] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> print('foo', 'bar')
    ('foo', 'bar')
    >>> print 'foo', 'bar'
    foo bar
The incompatible bytes/string handling is also a fundamental shift. You would at least need a pragma.
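(2.7 did in fact ship per-file pragmas going in the other direction, so the precedent exists:

    # Python 2.7: opt in to py3 semantics file-by-file
    from __future__ import print_function, unicode_literals
    print("now a function,", "and string literals are unicode")

But those were opt-in, per file, which is rather the point.)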
> Can you show me a real-world package that was held back because the code needed a feature or semantics from the interpreter
That is not what I was getting at. What I was saying is that, if you write code for perl 5.20 and mark it "use 5.20.0;", then that's it, you're done, code never needs to change again. You can bring in newer perl interpreters, you can upgrade, it's almost certainly not going to break.
You can even write new code down the line in Perl 5.32 which wouldn't be possible in 5.20, and the 5.20 code wouldn't be valid in 5.32, but as they're both declaring which version of the language they're written in, they just seamlessly work together in the same interpreter.
Compared to Python's deliberate policy, which is that they won't guarantee your code will still run after two minor releases, and they have a habit of actively removing things, and there's only one version the interpreter implements and all code in the same interpreter has to be compatible with that version... it means a continual stream of having to update code just so it still runs. And you don't know what they're going to deprecate or remove until they do it, so it's not possible to write anything futureproof.
> in 2.7, surrounding parentheses are not redundant,
That is interesting, I wasn't aware of that. And indeed that would be a thorny problem, more so than keeping a print statement in the grammar.
Fun fact: the parentheses for all function calls are redundant in perl. It also flattens plain arrays and does not have some mad tuple-list distinction. These are all the same call to the foo subroutine (assuming foo has been declared beforehand): foo(1, 2, 3); foo 1, 2, 3; &foo(1, 2, 3); and, with @args = (1, 2, 3), foo(@args).
> Compared to Python's deliberate policy, which is that they won't guarantee your code will still run after two minor releases
They don't guarantee that the entire standard library will be available to you two minor releases hence. Your code will still run if you just vendor those pieces (and thanks to how `sys.path` works, and the fact that the standard library was never namespaced, shadowing the standard library is trivial). And they tell you up front what will be removed. It is not because of a runtime change that anything breaks here.
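For example, a minimal sketch of that shadowing (the layout and the `vendor/` name are my own invention):

    # vendor/pipes.py is a copy of the module removed from the stdlib in 3.13
    import os, sys
    sys.path.insert(0, os.path.join(os.path.dirname(__file__), "vendor"))
    import pipes  # resolves to vendor/pipes.py whether or not the stdlib still has it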
Python 3 has essentially prevented any risk of semantic changes or syntax errors in older but 3.x-compatible code. That's what the `__future__` system is about. The only future feature that has become mandatory is `generator_stop` since 3.7 (see https://peps.python.org/pep-0479/), which is very much a corner case anyway. In particular, the 3.7-onward annotations system will not become mandatory, because it's being replaced by the 3.14-onward system (https://peps.python.org/pep-0649/). And aside from that again the only issue I'm aware of (or at least can think of at the moment) is the async-keyword one.
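For the curious, this is the entire observable effect of the now-mandatory `generator_stop` (a sketch, runnable on 3.7+):

    def g():
        yield 1
        raise StopIteration  # pre-PEP-479 this silently ended the generator

    it = g()
    print(next(it))  # 1
    try:
        next(it)
    except RuntimeError:
        print("3.7+ turns the stray StopIteration into a RuntimeError")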
> And you don't know what they're going to deprecate or remove until they do it
This is simply untrue. Deprecation plans are discussed in public and now that they've been burned a few times, removal is scheduled up front (although it can happen that someone gives a compelling reason to undo the deprecation).
It's true that you can't make your own code, using the standard library (which is practically impossible to avoid), forwards-compatible to future standard libraries indefinitely. But that's just a matter of what other code you're pulling in, when you didn't write it in the first place. Vendoring is always an option. So are compatibility "forward-ports" like https://github.com/youknowone/python-deadlib. And in practice your users are expecting you to put out updates anyway.
And most of them are expecting to update their local Python installations eventually, because the core Python team won't support those forever, either. If you want to use old FOSS you'll have to accept that support resources are limited. (Not to mention all the other bitrot issues.)
Well isn't that nice. The boxes I care most about are 32 bit. The perl I use is 5.0 circa 2008. Maybe you, amiga386, or anyone else (thank you in advance) can tell me: what do I need to upgrade to perl 5.8? Is it only perl 5.8 and whatever the contemporaneous gcc is? Will the rest of my Suse 11.1 circa 2008 crunch? May I have two gcc's on the same box/distro version, and give the path to the later one when I need it? The reason I am still with Suse 11.1 is that later releases broke other, earlier things I care about, which I could not fix.
What incompatible versions of pythons do you mean? I'm entirely unaware of any forks, and the youngest version I have to supply at the moment is 3.9, which is over 5 years old and available on all supported platforms.
Try to run any random python program of moderate dep use on your python 3.9 system interpreter without using containers. Most likely you'll have to use a venv or the like and set up a special version of python just for that application. It's the standard now because system Python can't do it. In practice, pragmatically, there is no Python. Only pythons. And that's not even getting into the major breakages in point version upgrades or the whole python 2 to 3 language switch.
> Most likely you'll have to use a venv or the like and set up a special version of python just for that application.
Using venvs is trivial (and orders of magnitude more lightweight than a container). And virtually every popular package has a policy of supporting at least all currently supported Python versions with each new release.
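The whole ritual, for the record (package and script names made up):

    $ python3 -m venv .venv
    $ . .venv/bin/activate
    $ pip install requests
    $ python app.py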
You need to set up a venv because of how the language is designed, and how it has always worked since the beginning. Python doesn't accommodate multiple versions of a package in the same runtime environment, full stop. The syntax doesn't provide for version numbers on imports. Imports are cached by symbolic name and everyone is explicitly expected to rely on this for program correctness (i.e., your library can have global state and the client will get a singleton module object). People just didn't notice/care because the entire "ecosystem" concept didn't exist yet.
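A quick illustration of that singleton guarantee (the attribute name is made up):

    import sys
    import json
    import json as j2
    assert json is j2                    # one cached module object per name
    assert sys.modules["json"] is json   # the cache everyone relies on
    json.some_state = 42                 # module-level state...
    assert j2.some_state == 42           # ...is shared by every importer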
I have at least one local from-source build of every Python from 3.3-3.14 inclusive (plus 2.7); it's easy to do. But I have them explicitly for testing, not because using someone else's project forces me to. The ecosystem is just not like that unless perhaps you are specifically using some sort of PyTorch/CUDA/Tensorflow related stack.
> It's the standard now because system Python can't do it.
Your system Python absolutely can have packages installed into it. The restrictions are because your Linux distro wants to be able to manage the system environment. The system package manager shouldn't have to grok files that it didn't put there, and system tools shouldn't have to risk picking up a dependency you put there. Please read https://peps.python.org/pep-0668/, especially the motivation and rationale sections.
> major breakages in point version upgrades
I can think of exactly one (`async` becoming a keyword, breaking Tensorflow that was using it as a parameter name). And they responded to that by introducing the concept of soft keywords. Beyond that, it's just not a thing for your code to become syntactically invalid or to change in semantics because of a 3.x point version change. It's just the standard library that has changes or removals. You can trivially fix this by vendoring the old code.
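A sketch of how a soft keyword avoids that (3.10+; works because "match" is only a keyword in match-statement position):

    import re
    cmd = "quit"
    match cmd:                         # new syntax where the context is unambiguous...
        case "quit":
            print("bye")
    match = re.match(r"\d+", "123")    # ...while "match" still works as a plain name
    print(match.group())               # -> 123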
> And that's not even getting into the major breakages in point version upgrades or the whole python 2 to 3 language switch.
Python doesn't use semver and never claimed to, but it's probably worth treating "x.y" releases as major versions in their own right (so 2.7 -> 3.0 is a major version bump, and so is 3.10 -> 3.11). If you do that, the versioning makes a bit more sense.
How old are you now? Mid fifties here. And 'vibe coding' in what exactly? It's not of interest from a programming perspective, but from a 'what does the AI know best' perspective. I've followed a similar, but not identical, trajectory and now vibe in python/htmx/flask without needing to review the code in depth (NB internal apps, not public facing ones), with claude code max. Vibe coding in the last 6-8 weeks now also seems to make a decent fist of embedded coding - esp32/arduino - also with claude code.
35–44. Same thing, sometimes it makes planning errors, or misses context that should be obvious based on the files, but overall a huge booster. No need to review in depth, just set it against tests and let it iterate. So much potential, so exciting.
My feeling is the current suite of LLMs are "not smarter than US" they simply have far greater knowledge, unlimited focus, and unconstrained energy (modulo plan/credits/quotas of course!). I can't wait for the AIs that are actually smarter than us. Exciting to see what they'll do.
Caffeine is not chemically addictive. It can lead to dependency, but that is not addiction. Motivation and wanting are not altered, though unpleasant withdrawal effects can occur.
There is no real importance to the concept of “chemically addictive” and it has largely gone out of favor in psychology. Even physical behaviors like gambling and sex, which obviously cannot directly, chemically act on reward system pathways, can still be just as life-destroying, addictive, and challenging to quit as any drug. The DSM now classifies gambling disorder as an addiction.
Caffeine, unlike some drugs and alcohol, doesn't cause severe withdrawal symptoms. Because of that, experts don't label regular caffeine use as an addiction.
There’s so many layers to this. First, there’s history: Coca-Cola (originally made from the kola nut and cocaine) was told they couldn’t put cocaine in their “medicine” anymore, so they just sold it as a “soft-drink” without the cocaine.
Then there’s the beverage industry, who pointed out there’s caffeine in tea leaves and other plant material and that it’s not a threat: (1) United States v. Forty Barrels and Twenty Kegs of Coca-Cola. Ultimately that reduced the amount of caffeine in soft-drinks.
Round and round we go allowing companies to use chemicals to keep us buying their consumables.
While it is a contributing factor, physical dependence/withdrawal is no longer considered necessary or sufficient for addiction. The author there is using an outdated, pre-DSM-5 definition of addiction, which failed to recognize that there are two separate but related phenomena here. Things like gambling and sex addiction obviously cause no withdrawal symptoms from chemical dependence at all, but can be almost impossible to quit and serious enough to destroy someone’s life.
Severity of withdrawal symptoms from caffeine also varies substantially from person to person. It’s probably not directly killing anyone, but for some people it can be brutally unpleasant and disabling for at least several days.
That's called chemical dependence and it's the point I'm trying to make. Dependence is not addiction. Addiction means wanting is hijacked, not that stopping is aversive.
Addiction and dependence have real medical meanings, and in the context of this discussion we shouldn't mix them up. See this very short and to-the-point Lancet medical journal summary: https://www.thelancet.com/journals/lanpsy/article/PIIS2215-0...
>Addiction (synonymous with substance use disorder), as defined by the DSM-5, entails compulsive use, craving, and impaired control over drug taking in addition to physical dependence. The vast majority of patients taking medications such as opioids and benzodiazepines are doing so as prescribed by clinicians, with only 1·5% of people taking benzodiazepine being addicted, for example. Physical dependence is much more common than addiction. Importantly, withdrawal effects occur irrespective of whether these drugs are taken as prescribed or misused.
>Failure to distinguish between addiction and physical dependence can have real-life consequences. People who have difficulty stopping their medications because of withdrawal effects can be accused of addiction or misuse. Misdiagnosis of physical dependence as addiction can also lead to inappropriate management, including referral to 12-step addiction-based detoxification and rehabilitation centres, focusing on psychological aspects of harmful use rather than the physiology of withdrawal.
>It should be made clear that dependence is not the same as addiction. The problems with prescribed drug dependence are not restricted to the small minority who are misusing or addicted to these drugs, but to the wider population who are physically dependent on and might not be able easily to stop their medications because of withdrawal effects. Antidepressants (superkuh note: and caffeine) should be categorised with other drugs that cause withdrawal syndromes as dependence-forming medications, while noting that they do not cause addiction.
The explanation for the headaches is that coffee raises blood pressure short term, and the blood vessels in the brain prepare for the predicted caffeine ingestion, and if it doesn't come there will be a mismatch.
What triggers the blood vessel constriction on the brain? Will avoiding e.g. certain places at certain hours also avoid the preemptive blood vessel constriction and associated headache?
What’s the point of this distinction, what does it mean that it’s not chemically addictive? It causes withdrawals, dependence, it definitely acts on brain chemistry.
That Lancet article very well refutes the point you are trying to make. The term “chemical addiction” is not really used anymore because it really just refers to mechanisms of chemical dependence, which are neither necessary nor sufficient to cause addiction on their own.
There has been a major shift in how addiction is understood in modern research, but you have it backwards: your perspective of chemical addiction, or a direct chemical mechanism, being important is the old, discredited concept, not the new one, which sees addiction as a psychological process that requires no direct chemical mechanism at all.
The chemical dependence is quite a factor in the psychological process you refer to. It nudges and reinforces this psychological behaviour. You can broaden the definition to include addiction without chemical dependence, but that does not mean you can omit the chemical dependence factor from the equation.
This chemical dependence is often the number one reason people cannot physically stop their psychological process. Potential effects from quitting range from simply dying to, with a less strong chemical dependence, anxiety or feeling generally ill.
This chemical dependence is learned behavior in some cases, chemically induced in others.
I get what you’re saying. Dopamine withdrawal is real though and if you no longer get dopamine from an action or you physically prevent yourself from receiving that dopamine, it can be just as debilitating as cigarette withdrawal or kicking a (soft) drug habit.
> Dopamine withdrawal is real though and if you no longer get dopamine from an action or you physically prevent yourself from receiving that dopamine
Exactly, this is why the idea of addiction is more appropriately focused on the actual real world impacts rather than specific chemical mechanisms: the difficulty quitting and the negative impacts on your life. If it's strong enough to overpower your will and destroy your life, that is sufficient; it doesn't matter exactly how.
When it comes down to it, an amphetamine or other stimulant that directly increases synaptic dopamine, versus a behavior like gambling that exploits the brain's instincts and wiring in other ways to still cause an increase in synaptic dopamine, are not fundamentally, categorically different in a way that means one or the other shouldn't be taken seriously and considered a "real addiction." Either can completely destroy some people's lives, and for other people can be easily controlled and used in moderation.
Yes this is absolutely true, it is a factor in addiction- I initially mentioned this in my comment but deleted it because I felt I was making it too complicated.
>That Lancet article very well refutes the point you are trying to make.
No. That Lancet article very well refutes the point you are trying to make. I'm flabbergasted by your interpretation. Could you please try to support this interpretation with quotes? I can't even begin to understand how to converse with this point of view, since such a POV does not exist in the Lancet article. I've read it a handful of times and am now once again trying to understand you. But it's not there. I recommend you re-read the article.
I have already quoted the appropriate bits supporting my claims (and the article's very title) in the other comment in this thread, and you may refer to it.
> I'm flabbergasted by your interpretation. Could you please try to support this interpretation with quotes? I can't even begin to understand how to converse with this point of view, since such a POV does not exist in the Lancet article.
It's hard for me to know where to start, because I feel similarly confused about where you might be coming from, and I don't know your level of background in reading and interpreting biomedical papers. However, I can elaborate a bit on my thinking and mention that I am an academic biomedical researcher that reads, publishes, and peer reviews biomedical papers - but I am not a psychiatrist or medical doctor. This is not my field of expertise, I'm not trying to argue from authority, just mentioning where I'm coming from.
First, for context, this correspondence article is in The Lancet Psychiatry, so is targeted at psychiatrists, and is able to avoid a lot of background that they can safely assume the reader already has, like the diagnostic criteria for common conditions.
You are using the term "chemically addictive," which is not used in the article, and which is a term that simultaneously implies both "physical dependence" or "substance dependence" and "addiction," from back when the two were mistakenly assumed to be one and the same. This article is emphasizing the fact that they aren't the same thing, and both can exist independently of one another. Since that is really the only singular point in the article, and is really hammered home over and over, I cannot see how pulling out quotes would help. I think our disagreement comes from the surrounding context not mentioned, not the contents of the article itself.
The article describes that as of the DSM-5 they directly address the confusion between the two, and separate them into two entirely different things. While not explained in the article, it is important to realize that the DSM-5 now includes behavioral addictions together with drug addictions, and considers physical dependence and/or other types of direct chemical modulation of the reward system to be a contributing factor in many cases, but not essential, for addiction.
This distinction is extremely important, because it allows for addiction without substance dependence to be taken just as seriously, and properly treated and addressed clinically or by other means.
Previously, because of the history of this mistaken connection, psychiatrists and patients would wrongly dismiss (as you are with caffeine) the possibility of serious addiction without a direct chemical dependence mechanism. This left people whose lives were being destroyed by things like gambling and sex addictions to be dismissed as not serious, and not allow them to get real help. On the flip side, it also made doctors wrongly afraid to administer drugs that caused chemical dependence but not addiction, for fear that it would lead to addiction in patients.
However, I would argue that while addictive, the level of addiction potential from caffeine is pretty limited because of the fact that it has pretty severe adverse/toxic effects if you take too much, and the enjoyable aspect saturates out pretty quick. Taking a lot more than a normal amount, enough to damage your health, feels awful, so people aren't likely to become addicted to doing so. Counter-intuitively, the most addictive drugs have low acute toxicity and so you can take increasingly huge doses of them and it continues to feel good rather than just make you uncomfortable and sick like a high dose of caffeine.
>This distinction is extremely important, because it allows for addiction without substance dependence to be taken just as seriously, and properly treated and addressed clinically or by other means.
Here's where you seem confused. The article is not saying this. It is explicitly saying that medications which one builds up a tolerance to and experiences withdrawal symptoms from are not addictive.
>The DSM-5 referred to the confusion over this issue, stating that “’Dependence’ has been easily confused with the term ‘addiction’ when, in fact, the tolerance and withdrawal that previously defined dependence are actually very normal responses to prescribed medications that affect the central nervous system and do not necessarily indicate the presence of an addiction.” Public Health England makes the same distinction.
You are claiming the article's distinction between addiction and dependence is discussed in order to make a claim about substance abuse and addiction without dependence. This is not in the text at all. What the heck?
I have the decades of domain specific knowledge and time spent reading neuroscience journal articles to know that I don't have to read between the lines of the article here. It's not an opaque or jargon hidden meaning. It's quite plain: dependence is not addiction. Not, "addiction can happen without dependence" which is not addressed or relevant to the paper or this HN discussion about caffeine.
I still don’t understand the significance of the distinction, maybe because we are talking about caffeine in particular. Caffeine alters mood and causes euphoria. Is that not a chemical component of addiction?
What’s the significance of the chemical component of addiction anyway, when people can struggle with addiction to things which have clearly no chemical component like gambling?
I completely agree that the article only makes one very specific and narrow point, that dependence and addiction are separate things. The rest of what I wrote is, like you said, not in the article at all, but is context I am adding in attempt to explain my point.
It seems like we’re talking past each other somehow, perhaps one or the other of us misunderstood what the other is saying, but I don't see any value in continuing further.
A lot of substances cause withdrawals/dependence and act on brain chemistry, e.g., the vast array of psychiatric drugs. However, I have never seen the word addiction used in the context of antidepressants, mood stabilizers, etc.
Even non-psychiatric drugs like NSAIDs, insulin, hypertension medication, etc. can have a withdrawal effect.
I might be mistaken, but I am under the impression that addiction is psychological in nature. Take gambling addiction, for example, I am not certain if there is any physical withdrawal effect, but there is definitely a psychological compulsion.
Yes! Thank you! He is talking about AI generated summaries being inaccurate, which is plenty to get up in arms about.
A lot of folks here hate AI and YouTube and Google and stuff, but it would be more productive to hate them for what they are actually doing.
But most people here are just taking this headline at face value and getting pitchforks out. If you try to watch the makeup guy’s proof, it’s talking about Instagram (not YouTube), doesn’t have clean comparisons, is showing a video someone sent back to him, which probably means it’s a compression artifact, not a face filter that the corporate overlords are hiding from the creator. It is not exactly a smoking gun, especially for a technical crowd.
I, for one, find it extremely odd that any of these video posters believe they get to control whether or not I use, directly or indirectly, an AI to summarize the video for me.
They're under the encouraged belief that they are in control of what is shown on their youtube channel. They think they should control what text is shown under their videos on "their" channel. This illusion of control over presentation has been unconvincing for quite a while, but now Alphabet is just throwing around its weight because there are no other options except youtube for what youtube does: allowing money to flow to people who make videos without the video file host getting sued out of existence. Alphabet does this by maintaining a large standing army of lawyers and a huge money supply. Trivial technical issues like file hosting and network bandwidth have been repeatedly solved by others, but when those solutions become popular they're legally attacked and killed.
Most of my use for my blog posts is linking people to them on IRC and forums. I don't need or want search engine traffic. It's true there's no money or the churning waves of activity associated with that money in blogging anymore. And that's great. Social media siphoned off the profit chasers and all their running in place activity to stay on top of the eternal wave of now in recommendation engines.
I log on once in a while to a channel I used to use, and some of the same people are sorta still there. IRC is weird now, nostalgic but also... the things that made it truly fun aren't really a thing. Weird !fserves for warez, strange early chat bots, a/s/l... I do miss it. I think it has moved on except in little bubbles, and I cheer those on from afar.
It is no surprise to me that people still have to use HDDs for storage. SSDs stopped getting bigger a decade-plus ago.
SSD sizes are still only equal to the HDD sizes available and common in 2010 (a couple of TB). SSD size increases (availability + price decreases) in consumer form factors have entirely stopped. There is no more progress for SSDs because quad level cells are as far as the charge trap tech can be pushed, and most people no longer own computers. They have tablets or phones, or if they have a laptop it has 256GB of storage and everything is done in the cloud or with an octopus of (small) externals.
SSDs did not "stop getting bigger a decade plus ago." The largest SSD announced in 2015 was 16TB. You can get 128-256TB SSDs today.
You can buy 16-32TB consumer SSDs on NewEgg today. Or 8TB in M.2 form factor. In 2015, the largest M.2 SSDs were like 1TB. That's merely a decade. At a decade "plus," SSDs were tiny as recently as 15 years ago.
Perhaps my searching skills aren’t great but I don’t see any consumer ssds over 8TB. Can you share a link?
It was my understanding that ssds have plateaued due to wattage restriction across SATA and M.2 connections. I’ve only seen large SSDs in U.3 and E[13].[SL] form factors which I would not call consumer.
The mainstream drives are heavily focused on lowering the price. Back in the 2010s SSDs in the TB range were hundreds of dollars, today you can find them for $80 without breaking a sweat[1]. If you're willing to still spend $500 you can get 8TB drives[2].
I bought 4x the storage (1TB -> 4TB) for half the price after my SSD died after 5 years (thanks samsung). What do you mean they 'stopped getting bigger'?
Sure, there is some limitation in format, you can only shove so many chips on M.2, but you can get U.2 ones that are bigger than the biggest HDD (tho the price is pretty eye-watering)
By stopped getting bigger I mean people still think 4TB is big in 2025. Just like 2010, when 3-4TB was the max size for consumer storage devices. u.2/u.3 is not consumer yet, unfortunately. I have to use m.2 nvme to u.2 adapters, which are not great. And as you say, the low number of consumer cpu+mobo pcie lanes has been a restriction on the number-of-disks side until just recently. At least in 2025 we can have more than 2 nvme storage disks again without disabling a pcie slot.
I think this is more a symptom of data bloat decelerating than anything else. Consumers just don't have TBs of data. The biggest files most consumers have will be photos and videos that largely live on their phones anyway. Gaming is relatively niche and there just isn't that much demand for huge capacity there, either -- it's relatively easy to live with only ~8 100GB games installed at the same time. Local storage is just acting as a cache in front of Steam, and modern internet connections are fast enough that downloading 100GB isn't that slow (~14 minutes at gigabit speeds).
So when consumers don't have (much) more data on their PCs than they had in 2015, why would they buy any bigger devices than 2015? Instead, as sibling commenter has pointed out, prices have improved dramatically, and device performance has also improved quite a bit.
(But it's also true that the absolute maximum sized devices available are significantly larger than 2015, contradicting your initial claim.)
I read that SSDs don't actually guarantee to keep your data if powered off for an extended period of time, so I actually still do my backup on HDDs. Someone please correct me if this is wrong.
A disk that is powered off is not holding your data, regardless of whether it is an HDD or SSD, or whether it is in redundant RAID or not. Disks are fundamentally a disposable medium. If you don't have them powered on, you have no way to monitor for failures and replace a drive if something goes wrong - it will just disappear someday without you noticing.
Tape, M-DISC, microfilm, and etched quartz are the only modern media that are meant to be left in storage without needing to be babysat, in climate controlled warehousing at least.
Do you poweroff your backup HDDs for extended periods of time (months+)? That's a relatively infrequent backup interval. If not, the poweroff issue isn't relevant to you.
(More relevant might be that backups are a largely sequential workload and HDDs are still marginally cheaper per TB than QLC flash.)
>frontend barely works without JavaScript, ... In the past, it used to gracefully degrade without enforcing JavaScript, but now it doesn't.
And the github frontend developers are aware of these accessibility problems (via the forums and bug reports). They just don't care anymore. They just want to make the site appear to work at first glance which is why index pages are actual text in html but nothing else is.
I'd love to hear the inside story of GitHub's migration of their core product features to React.
It clearly represents a pretty seismic cultural change within the company. GitHub was my go-to example of a sophisticated application that loaded fast and didn't require JavaScript for well over a decade.
The new React stuff is sluggish even on a crazy fast computer.
My guess is that the "old guard" who made the original technical decisions all left, and since it's been almost impossible to hire a frontend engineer since ~2020 or so who wasn't a JavaScript/React-first developer, the weight of industry fashion became too much to resist.
But maybe I'm wrong and they made a technical decision to go all-in on heavy JavaScript features that was reasoned out by GitHub veterans and accompanied by rock solid technical justification.
GitHub have been very transparent about their internal technical decisions in the past. I'd love to see them write about this transition.
> But beyond accessibility and availability, there is also a growing expectation of GitHub being more app-like.
> The first case of this was when we rebuilt GitHub projects. Customers were asking for features well beyond our existing feature set. More broadly, we are seeing other companies in our space innovate with more app-like experiences.
> Which has led us to adopting React. While we don’t have plans to rewrite GitHub in React, we are building most new experiences in React, especially when they are app-like.
> We made this decision a couple of years ago, and since then we’ve added about 250 React routes that serve about half of the average pages used by a given user in a week.
It then goes on to talk about how mobile is the new baseline and GitHub needed to build interfaces that felt more like mobile apps.
(Personally I think JavaScript-heavy React code is a disaster on mobile since it's so slow to load on the median (Android) device. I guess GitHub's core audience are more likely to have powerful phones?)
For contrast, gitea/forgejo use as little JavaScript as possible, and have been busy removing frontend libraries over the past year or so. For example, jquery was removed in favor of native ES6+.
Let them choke on their "app-like experience", and if you can afford it, switch over to either one. I cannot recommend it enough after using it "in production" daily for more than five years.
I honestly believe that the people involved likely already wanted to move over to React/SPAs for one reason or another, and were mostly just searching for excuses to do so - hence these kinds of vague and seemingly disproportional reasons. Mobile over desktop? Whatever "app-like" means over performance?
Non-technical incentives steering technical decisions is more common than we'd perhaps like to admit.
What's nuts about that presentation is that the github frontend has gone from ~0.2 to >2 million lines of code in the last 5-6 years. 10x the code... to get slower?
That also means a much larger team and great possibilities for good perf reviews, so basically an excellent outcome in a corporate env. People follow incentives.
> My guess is that the "old guard" who made the original technical decisions all left, and since it's been almost impossible to hire a frontend engineer since ~2020 or so who wasn't a JavaScript/React-first developer, the weight of industry fashion became too much to resist.
I very much hope not, but fear you're right.
I'm (theoretically) an iPhone app developer, and I really dislike the Reactive idiom: I can see the theoretical benefits, but in practice, the magic glue has never worked right, and comes at a painful performance cost. React is to me what LLMs are to LLM-skeptics.
I'm retraining anyway due to LLMs and how well they eat UI, but I don't yet know where I'll end up next.
It has been, this year, 30 years since JS became a normal part of the web. You don't need it to render text and buttons, but we do different things with the internet now.
It's 1 step forward, 2 steps back with this "server side rendering" framing of the issue, at least in practice, observing Microsoft Github's behavior. They'll temporarily enable text on the web pages of the site in response to accessibility issues, then a few months later remove it on that type of page and even more on others. As that thread and others I've participated in show, this is a losing battle. Microsoft Github will be a javascript application only, in the end. Human people should consider moving their personal projects accordingly. For work, well, one often has to do very distasteful and unethical things for money. And github is where the money is.
To be fair, the developers might care, but upper management certainly doesn't, and they're the ones who decide if those developers make their rent this month.
While there are use cases for NAS, generally, if you have a desktop PC it's far better to put the hard drives in it rather than setting up a second computer you have to turn on and run too. Putting the storage in the computer where you'll use it means it'll be much faster, much cheaper, incomparably more reliable, with a more natural UI, and it'll use less electricity than having to run 2 computers.
Now if your NAS use case is streaming media files to multiple devices (TV set top boxes, etc), sure, NAS makes sense if the NAS you build is very low idle power. But if you just need the storage for actual computing it is a waste of time and money.
It's pretty simple: 2 computers have twice the parts and having twice the parts means there are more chances for something to die. But it goes beyond this too. Far less software stack complexity (the big one), no flaky network link, no complex formatting that cannot be recovered with common tools, etc.
Being rolling doesn't fix the lack of upstream support for AMD GPUs during their first half year (and past ~4 years). LTS distros are great because they work pretty well "forever" instead of great for brief, unknowable periods.