No, they made many wrong architecture decisions that made it a fringe project rather than a mainstream one. You can glimpse how things could have played out by looking at bun.js adoption.
Levelized Cost of Energy for solar is $30-60/MWh and $100-200/MWh for nuclear. In the case of Spain, it is cheaper to build more interconnectors with Morocco plus battery storage than to use nuclear. Spain already has some of the cheapest energy in Europe thanks to renewables.
In the case of Germany, nuclear makes sense, but it is not clear where you would buy fuel for it; it might still be a supply chain risk, since Russia and Kazakhstan are the main players there.
It's not that easy, and the 2025 blackout is good evidence of that. Renewables need a grid that's engineered for them, and that requires significant investment. Without it, closing power plants (of any kind) is, IMO, nonsensical.
Ironically, Spain has plenty of Uranium, but there is an environmental law that doesn't allow its mining.
> It's not that easy, and the 2025 blackout is good evidence of that. Renewables need a grid that's engineered for them, and that requires significant investment.
The outage in Spain had multiple complex causes.
While the grid had a rather routine instability/oscillation ongoing at the time of the incident, the actual point of no return was completely non-technical: prices crossed into the negatives, which caused generation to drop by hundreds of megawatts and load to increase likewise within a minute (!) because the price acted as a non-technical synchronized drop-off signal for the grid.
In grids where the price action is not forwarded directly to the generators and consumers, there would be no incentive to suddenly drop off decentralized generation. So, for example, in Germany a blackout would not happen like this.
Unfortunately, to have an informed opinion, you pretty much have to read all these pages, because the situation is just so complex. Otherwise, you just fall for agenda pushing from all sides.
That being said, I was apparently also operating on outdated or just plain wrong information.
While the report I listed mentions the sudden loss of decentralized generation as the starting point of the blackout, and also specifically mentions small-scale rooftop PV, it says that the cause of that sudden synchronized drop-off is actually unknown.
You can't get an "informed opinion" by reading crap like that report.
The Spanish systems have systematic design failures for stability and electricity market design. Working out the political failures that led to the design failures is much harder.
Only those working closely in that profession have any knowledge of the underlying causes.
Most everyone else (including this comment) operates at different levels of ignorance and cluelessness.
Edit: e.g., a crap quote from the report: "but no significant oscillations with amplitudes above 20 mHz". The rest of it is about that level, from what I could tell.
> Only those working closely in that profession have any knowledge of the underlying causes.
This report is literally from the ENTSO-E which is the main regulatory body for the grid in Europe.
> Crap quote from the report "but no significant oscillations with amplitudes above 20 mHz".
What is the "crap" about that?
An amplitude can still be measured in Hz, if you are looking at oscillating frequency deviations, if that is what you mean.
Very timely, the final report has been released today.
I hadn't read the document you referenced, and I admit I don't have the prior knowledge, nor the time, to fully understand all the implications of what it says. My opinion is therefore the result of reading and listening to a variety of experts and news sources, and it will have some biases, for sure.
Still, I have skimmed the final report to see if there was something that I could understand firsthand (and to support my original point, not gonna lie), and I found this:
_The increasing penetration of variable renewable and distributed generation, further market integration, broader electrification, and evolving environmental and geopolitical risks place the European electricity system under increasingly challenging operational conditions, requiring higher levels of resilience._
Do you really think that my original point (as uninformed as it might be), namely, that the levels of renewable energy currently present in the Spanish grid require significant investments, was wrong?
Yes, I think it's wrong, or at least way over-exaggerated.
You can run a grid to supply approximately 80% renewables (long-term average) without significant technical changes.
Only if you want to get the last 20% to renewables do you get technical challenges, e.g. related to synchronization and load-matching. But those are not unsolvable problems; e.g., instead of relying on the inertia of steam turbines you can "just" build special-purpose flywheels to do the same thing. It's just less elegant.
Source: Volker Quaschning "Understanding Renewable Energy Systems", too lazy right now to look up the exact page.
This is also consistent with the section they quoted.
Generally, the load matching in grids is done by the system itself.
If you add more wind and solar, which depend on the weather and location, you have to do more large-scale intervention, e.g. allow generation re-dispatch. But that doesn't immediately imply that this is a dangerous process.
I have not read the report yet, but in another thread someone gave a very plausible explanation of what happened.
The high levels of renewable energy happened to contribute to this incident, but not because of something inherent in renewable energy. All renewable energy sources are connected to the grid through inverters, and in Spain most of these inverters do not use an adequate control policy, i.e. they do not compensate the phase fluctuations of the grid, like the synchronous electromechanical generators do (i.e. they do not generate an appropriate amount of reactive power for compensation).
Technically it is easy to implement such control policies in all solid-state inverters, but it was not done in Spain because there were no incentives, i.e. there were no regulations specifying how the inverters connected to the grid should behave, other than disconnecting when the frequency went outside a permissible range.
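As a toy illustration of how simple such a policy is in software terms, here is a sketch of the kind of volt-var droop curve that grid codes elsewhere (e.g. IEEE 1547 in the US) mandate for inverters. All numbers are illustrative per-unit values, not anyone's actual grid code:

```javascript
// Toy volt-var droop: the inverter absorbs reactive power when grid
// voltage is above nominal and injects it when below, instead of just
// disconnecting. Parameters are illustrative, in per-unit.
function reactivePowerSetpoint(voltagePu, { deadband = 0.02, slope = 2.0, qMaxPu = 0.44 } = {}) {
  const error = voltagePu - 1.0;
  if (Math.abs(error) <= deadband) return 0; // near nominal: no compensation
  // High voltage -> negative Q (absorb); low voltage -> positive Q (inject).
  const q = -slope * (error - Math.sign(error) * deadband);
  return Math.max(-qMaxPu, Math.min(qMaxPu, q)); // clamp to inverter rating
}

console.log(reactivePowerSetpoint(1.0));  // nominal voltage: zero
console.log(reactivePowerSetpoint(1.05)); // negative: absorbing reactive power
console.log(reactivePowerSetpoint(0.95)); // positive: injecting reactive power
```

The point is that this is a few lines of control logic on hardware that already exists; whether it runs is purely a question of what the grid code requires.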
Yes, that is plausible indeed, but the problem is that there are many explanations which are plausible, but there doesn't seem to be a smoking gun.
Strange about that explanation for example is that the time correlation is backwards.
First the solar generation started to drop out, and only then did central generator stations trip. Also, the ongoing frequency oscillations had already stabilized. If it was related to frequency issues, the solar inverters would either have shut down 15 minutes earlier (while the frequency oscillations were at their peak) OR 1-2 minutes later (when power stations tripped and the frequency would have dipped).
But doesn't nuclear power present a complication when designing a power grid for renewable energy? It is basically very expensive baseload energy that needs permanent demand, when the entire proposition of a renewable-focused grid is that you manage uncertain production with dynamic demand (via batteries and price-sensitive usage).
For power plants, this is glacial. A power grid has to be balanced perfectly on a sub-second level. Also, you can only do this down to about 50% of rated capacity; below that you have to switch the plant off completely.
If you combine this with renewable generation, it all falls apart. A cloud passing over a large PV installation will drop generation much faster than nuclear plants will ever be able to follow (by increasing generation). So if you want to have a substantial share of renewable generation (which, remember, is the cheap stuff), you can't have more than a token nuclear capacity, because you need to invest the money you might want to spend on nuclear on battery and hydro storage.
The other aspect is the economics of nuclear itself. Nuclear power plants are the most capital-intensive generation capacity you can build. Even when driving them at the maximum of their rated capacity, they have a levelized cost of electricity several times that of PV and wind per kWh. Requiring routine load following for nuclear would basically guarantee that no one ever builds a nuclear reactor again.
There are reasons to build new nuclear, but it's not cheap/reliable power generation. You build it to have access to a nuclear industrial base, as well as the research and professional community to run a military nuclear program. Or you actually succeed in creating a Small Modular Reactor, which might be suitable for niche applications (e.g. powering isolated communities in extremely remote locations). Or you are simply fascinated by the technology and want to invest a ton of money on the off chance that it will produce some unforeseen technological breakthrough (though arguably you'd do better investing in nuclear fusion, from my limited understanding of the research).
But as far as I know this is a non-issue, since we've mostly been able to cover this where it crops up. Especially since the grid's demand doesn't tend to go 0-100 or the other way around that fast. Even with a significant amount of nuclear, there are multiple of those solar farms, wind farms, etc.
For small fluctuations, the turbine's governor response can provide frequency stabilization, and pressurised water reactors also provide moderate load following.
>The other aspect is the economics of nuclear itself. Nuclear power plants are the most capital-intensive generation capacity you can build. Even when driving them at the maximum of their rated capacity, they have a levelized cost of electricity several times that of PV and wind per kWh.
When I looked at actually honest comparisons, this simply isn't true across the board. It doesn't help that the West has built so few reactors recently and managed some exceptional fuckups, while also making a lightbulb in an unimportant side building's toilet cost a couple dozen grand, in a way that might as well be purposeful sabotage of nuclear. But much of the world (read: mostly China) does relatively fine on cost and timeframe. These comparisons also have a tendency to use absolutely unrealistic storage costs, or to assume continued low storage costs from methods that are already exhausted (hydro, over here).
Additionally, it's the cheapest solar that often pulls the average down, but the vast majority of solar in, say, Belgium is residential, which is a lot more costly and less efficient. The solar farms are all much farther south, so a lot of these American reports don't make much sense in most of Europe either.
> If you combine this with renewable generation, it all falls apart
Rubbish. Only true if the renewable generation is poorly integrated. Solar plus batteries can provide synthetic inertia if the incentives/regulations are correctly designed.
Australia has been adding oodles of solar, and they have been doing it surprisingly well.
> Solar plus batteries can provide synthetic inertia if the incentives/regulations are correctly designed.
Yes, but why build nuclear at all, if you are already building PV + batteries? Nuclear is much more expensive than that combination. And if you add nuclear capacity on a level that actually matters (i.e. 30%+ of peak load), you run into real integration problems.
As I've written elsewhere, a token nuclear program can make sense if you want to keep the industrial base, institutional knowledge, and expertise around, e.g. to guarantee independent access to nuclear weapons. But it is ludicrous to make nuclear a cornerstone of your energy policy. Not even China is expanding nuclear's share of its total energy generation. They keep it around as a strategic asset, but a subsidized one.
For countries like Denmark and Spain, I'd be pulling my hair out if my government started throwing money into the money pit that is nuclear power (and it inevitably is government money, because no nuclear power plant has ever been built without government subsidies and/or price guarantees).
> Nuclear can load follow, within limitations
Yes, but it makes zero economic sense to do so. Nuclear is multiple times more expensive per kWh than PV + batteries, even if you run it at max capacity continuously. If you require nuclear to load-follow on a regular basis, not a single reactor will ever be built again.
It's more cost efficient to keep them running all the time since most of the cost of nuclear is building the power plant, but power output can be adjusted if needed.
In another thread that comments the report it was said that most inverters used in Spain for the renewable energy sources do not implement a control policy to generate an adequate reactive power to compensate the phase fluctuations of the grid, like a synchronous electromechanical generator would do. The inverters only disconnected from the grid when the frequency went outside the permitted range.
Ensuring that the inverters produce compensating reactive power would have been easy to do, but it was not done simply because there were no regulations that required it. Obviously, as a consequence of the report, this is likely to change.
Yeah, this has been trivial forever. In the US it became a requirement for all new utility-scale non-synchronous generators a decade ago. And then a bunch of statewide rules for rooftop solar followed as well.
Levelised Cost of Energy is the highest, in the entire developed world, in the UK, which has enough wind and solar installed to entirely meet needs today.
It is NOT cheap; it is cheap for sellers, because they account on the basis of a MWh being equally useful all the time. It isn't. There are TWh-scale shortfalls in winter because, as a medieval peasant understood, a shortage of ambient energy is what winter is, and it's worth paying every penny you have to avoid its worst effects.
Business is no better. I've worked in the chemicals industry, and conferences in Europe have been like a wake for the last decade. I've overseen large orders go to China because, and I could not give a shit how much it cost, the European green alternative (for delivery within Europe) could not guarantee timeframes, due to reliance on renewables. The Chinese-shipped product could. That is your "cheap".
You can buy uranium from Russia, Kazakhstan, Mali, Canada, the US, Australia, or the sea if you really want to; all of those have large reserves, and multiple years' worth gets stored more or less by accident, since modern industrial processes actually struggle to make sense at the low volumes nuclear requires. Bringing that up as a problem is just not honest.
Can you please source and explain your claims? They don't match my understanding.
For example:
> Levelised Cost of Energy is the highest, in the entire developed world, in the UK, which has enough wind and solar installed to entirely meet needs today.
Do you mean cost per country (not levelized?)? Even then, UK energy is not the most expensive.
They can't (because LCoE is a per-project or per-technology measure, not a grid-wide measure, for a start).
UK energy is expensive because we have gas-linked wholesale pricing. That's nothing to do with the true cost of renewables. I'm going to go out on a limb and say they're being disingenuous.
(Gas-linked pricing was implemented for sensible reasons, but I don't see how it continues to be tenable today).
>Levelized Cost of Energy for solar is $30-60/MWh and $100-200/MWh for nuclear.
With the storage for shitty winter weeks? What's the source on that one?
Mind you, I love solar, since I'd like to go relatively off-grid one day, but I've heard too much bullshit around this.
>but it is not clear where you would buy fuel for it; it might still be a supply chain risk, since Russia and Kazakhstan are the main players there.
There are a lot of locations, from my understanding, and a lot more that don't produce anything simply because Russia, Kazakhstan, and the like make it not worthwhile.
It's a tiny share of the cost of production in the end.
Kazakhstan does uranium mining but doesn't do uranium enrichment. Other big uranium mining countries, which don't do uranium enrichment, are: Namibia, Canada, Australia, Uzbekistan.
"The following countries are known to operate enrichment facilities: Argentina, Brazil, China, France, Germany, India, Iran, Japan, the Netherlands, North Korea, Pakistan, Russia, the United Kingdom, and the United States."
> Levelized Cost of Energy for solar is $30-60/MWh and $100-200/MWh for nuclear. In the case of Spain, it is cheaper to build more interconnectors with Morocco plus battery storage than to use nuclear.
But keeping nuclear open is an entirely different thing than building out more nuclear. OP was talking about the former, you about the latter.
Yeah, but that reminds me of Nick Clegg in the UK in 2010 saying:
> By the most optimistic scenarios... there's no way they are going to have new nuclear come on stream until 2021, 2022. So it's just not even an answer
Well, now we are in 2026, and we still have the same problem.
The UK has had complete political unity on building new nuclear power since 2006. That tells you the timelines.
For Hinkley Point C with the latest estimate being the first reactor online (not commercially operational) in 2030 that gives a "planning to operation" time of 24 years.
For Sizewell C, EDF are refusing to take on any semblance of a fixed-price contract, and they are instead going with a guaranteed-profit, pay-as-you-go model, where ratepayers hand out enormous sums today to hopefully get something in return in the 2040s.
The Americans can give Germany all the fuel they’d ever need. If you go solar, you are trading one supply chain dependency for another. France’s strategy is, once again, completely and totally vindicated.
enableScripts: false is a great default, but in a pnpm workspace monorepo it needs some tuning: a few packages legitimately rely on postinstall (esbuild, sharp, etc. downloading platform binaries).
What worked for us was whitelisting just those in onlyBuiltDependencies. Everything else stays locked down.
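For reference, a minimal sketch of what that whitelist looks like (package names are examples; in pnpm 10 this lives in `pnpm-workspace.yaml`, in earlier versions under the `pnpm` key in `package.json`):

```yaml
# Build scripts are blocked by default; allow them only for the few
# packages that legitimately need a postinstall step.
onlyBuiltDependencies:
  - esbuild
  - sharp
```

Everything not on the list still has its lifecycle scripts skipped at install time.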
The age gate is a nice extra layer. I do wonder how well it holds up for fast-moving deps where you actually want the latest patch though.
In a conflict between equals, landmines are the only practical way to restrict the mobility of the enemy. That's why 20% of Ukraine is contaminated by mines. If you were an official and your choice was between losing, with more people dying, or placing more landmines that can be cleared over 20 years, what would you do?
As someone who has spent many days on performance work, I can tell you that bundle size has minimal impact on performance. In most cases the backend dominates any performance discussion.
Apps are slow because of missing caching, naive pagination, waterfalls of poorly written API calls, missing DB indexes, bad data models, database choice, or slow serverless environments. An extra 1 MB of bundle adds maybe 100 ms to the one-time initial load of the app, which can be mitigated trivially with code splitting.
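The code-splitting mitigation is just a dynamic `import()`; a runnable sketch (`node:zlib` stands in for a hypothetical heavy third-party library so the snippet executes as-is):

```javascript
// Defer a heavy dependency with a dynamic import(). Bundlers (webpack,
// esbuild, Rollup, ...) emit the target of import() as a separate chunk,
// so it is fetched on first use instead of inflating the initial bundle.
const moduleCache = new Map();

function lazyLoad(specifier) {
  // Cache the promise so repeated calls share a single in-flight load.
  if (!moduleCache.has(specifier)) moduleCache.set(specifier, import(specifier));
  return moduleCache.get(specifier);
}

async function handleFirstUse() {
  const heavy = await lazyLoad('node:zlib'); // loaded now, not at startup
  const again = await lazyLoad('node:zlib'); // same cached module, no refetch
  return heavy === again;
}
```

The user pays the load cost only on the interaction that actually needs the dependency, not on initial page load.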
Anthropic is building a moat around their models with Claude Code, the Agent SDK, containers, programmatic tool use, tool search, skills, and more. Once you fully integrate, you will not switch. Also, being capital-intensive is a form of moat.
I think we will end up with a market similar to cloud computing: a few big players with great margins forming a cartel.
>Anthropic is building a moat around their models with Claude Code, the Agent SDK, containers, programmatic tool use, tool search, skills, and more.
I think this is something the other big players could replicate rapidly, even simulating the exact UI, interactions, importing/exporting of existing items, etc. that people are used to with Claude products. I don't think this is that big of a moat in the long run. The other big players just seem to be carving up the landscape and seeing where they can fit in for now, but once resource-rich eyes focus on them, Anthropic's "moat" will disappear.
I thought that, too, but lately I've been using OpenCode with Claude Opus, rather than Claude Code, and have been loving it.
OpenCode has LSPs out of the box (coming to Claude Code, but not there yet), has a more extensive UI (e.g. sidebar showing pending todos), allows me to switch models mid-chat, has a desktop app (Electron-type wrapper, sure, but nevertheless, desktop; and it syncs with the TUI/web versions so you can use both at the same time), and so on.
So far I like it better, so for me that moat isn't that. The technical moat is still the superiority of the model, and others are bound to catch up there. Gemini 3 Preview is already doing better at some tasks (but frequently goes insane, sadly).
LibreOffice didn't replace MS Office, and Octave didn't replace Matlab. It seems to me that there is even less of a moat with these products than there is with Claude Code, yet neither was commoditized.
Google Workspace replaced Microsoft Office. It has around 70% market share. Microsoft Office is still dominant in much of the traditional enterprise, but the moat is shrinking.
I can use Claude in Jetbrains IntelliJ and in Zed, I can use it with OpenCode, and there are lots of other agent tools. Everyone can build these tools around an LLM, and they're already being commodified.
The moat right now is the quality of the model, not the client. Opus is just so much better than the competitors, at least for now.
Enterprise is what determines commoditization as that's where the lion's share of the revenue comes from. Maybe it will eventually happen to MS Office, maybe it won't, but until it happens it hasn't happened. Access to the full MS ecosystem, technical support and seamless integration, these things matter a lot to businesses, and I'm not even saying you're wrong, but I'm not convinced yet that something similar won't play out with coding agents.
It can, but the auth & communication with Anthropic's APIs is basically reverse-engineered from Claude Code. While it works, and it seems Anthropic is choosing to look the other way, it _may_ result in your account getting banned, as I'm pretty sure it's against their TOS.
I haven't experienced this myself, but RooCode does something similar to OpenCode's approach and the maintainer has reported some bans [1].
Google on the other hand, is being very strict about keeping you locked in to their tools, unless you use API keys, of course.
Also: Claude has already asked me several times in the last few days whether I want to install an LSP for various things. I've not seen any signs of LSP use yet.
A generic wrapper is not a moat, but the context is. Both the LLM provider and the wrapper provider depend on local context for task activities. The value flows to the context, the LLMs and wrappers are commodities. Who sets the prompts stands to benefit, not who serves AI services.
Except most of their product line is oriented towards software development which has historically been dominated by free software. I don't see developers moving away from this tendency and IMO Anthropic will find themselves in a similar position to JetBrains soon enough (profitable, but niche)... assuming things pan out as you describe.
If you are CEO of a large company, and you see a huge uptick of ecommerce during covid, you could:
- Expand aggressively and compete for top talent
- Wait, and possibly miss the opportunity if the trend continues
Most CEOs decided to expand. For example, Meta hired around 30k people from 2020 to 2022.
If you then see the end of ZIRP, a slowdown, and inflation, and you actually overhired, what should you do as CEO?
- Keep employees locked in roles that are not needed?
- Let go of people who are not needed and hire for what is needed.
Meta let go of most e-commerce and support positions and is now aggressively hiring AI specialists.
If you are hiring and you see someone who cannot hold a job for more than 6 months, it is a red flag. In a capitalist system, employees are just there to provide labor in exchange for wages. Nothing more, nothing less. The problem in the US is that both retirement and healthcare are often provided by the employer, creating this weird illusion of a long-term caring relationship.
Node.js made many decisions that had a massive impact on ESM adoption, from forcing extensions and dropping index.js to loaders and the complicated package.json "exports". In addition to Node.js steamrolling everyone, TC39 keeps making idiotic changes to the spec, like `deferred import` and `with` syntax changes.
Requiring file extensions and not supporting automatic "index" imports was a requirement from browsers, where you can't just scan a file system, and people would be rightfully upset if their browser's modules sent 4-10 HEAD requests to find the file they were looking for.
"exports" controls in package.json was something package/library authors had been asking for for a long time even under CJS regimes. ESM gets a lot of blame for the complexity of "exports", because ESM packages were required to use it but CJS was allowed to be optional and grandfathered, but most of the complexity in the format was entirely due to CJS complexity and Node trying to support all the "exports" options already in the wild in CJS packages. Because "barrel" modules (modules full of just `export thing from './thing.js'`) are so much easier to write in ESM I've yet to see an ESM-only project with a complicated "exports". ("exports" is allowed to be as simple as the old main field, just an "index.js", which can just be an easily written "barrel" module).
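To illustrate the simple end of that spectrum, here is a hypothetical ESM-only package where "exports" is barely more than the old main field:

```json
{
  "name": "my-lib",
  "type": "module",
  "exports": {
    ".": "./index.js",
    "./utils": "./src/utils.js"
  }
}
```

Here `./index.js` would just be a barrel module (`export * from './src/utils.js';` and so on), with none of the CJS/ESM dual-format conditions that make "exports" look scary in grandfathered packages.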
> TC39 keeps making idiotic changes to the spec, like `deferred import` and `with` syntax changes
I'm holding judgment on deferred imports until I figure out what use cases it solves, but `with` has been a great addition to `import`. I remember the bad old days of crazy string syntaxes embedded in module names in AMD loaders and Webpack (like the bang delimited nonsense of `json!embed!some-file.json` and `postcss!style-loader!css!sass!some-file.scss`) and how hard it was to debug them at times and how much they tied you to very specific file loaders (clogging your AMD config forever, or locking you to specific versions of Webpack for fear of an upgrade breaking your loader stack). Something like `import someJson from 'some-file.json' with { type: 'json', webpackEmbed: true }` is such a huge improvement over that alone. The fact that it is also a single syntax that looks mostly like normal JS objects for other very useful metadata attribute tools like bringing integrity checks to ESM imports without an importmap is also great.
For me, intermittent fasting after 6pm and small diet changes fixed my acid reflux. PPIs were not helping and were making things worse. I actually took betaine HCl supplements to fix digestive issues after the PPIs.
> May I know what is so "terrible" about those protocols and what "technical debt" are you talking about?
POP is pretty archaic and lacks support for multiple clients accessing the same account.
IMAP is complex, slow, and lacks modern email features like threading and contacts. Clients often implement it in an inconsistent manner, given there are plenty of extensions.
SMTP is not a single protocol but a collection of bolted-on protocols (DMARC, DKIM, SPF). It lacks delivery tracking, and it is very opaque when it comes to spam filtering.
All these protocols are built on top of raw TCP. Things we take for granted in HTTP, like encryption, compression, multiplexing, and debuggability, are not there by default and are harder to implement.
> That has nothing to do with actual email protocols. Generic email protocols are extremely reliable and resilient to any sorts of disruptions. I wish any of modern protocols exhibit similar simplicity and reliability.
People spent decades building infrastructure, but even then only hardcore graybeards will self-host email.
> POP is pretty archaic and lacks support for multiple clients accessing the same account.
There are no issues with POP and multiple clients whatsoever.
> IMAP is complex, slow, and lacks modern email features like threading and contacts.
What's "complex" about IMAP? It's an extremely simple and reliable protocol. Contacts are not part of IMAP and are handled by different protocols.
> SMTP is not a single protocol but a collection of bolted-on protocols (DMARC, DKIM, SPF).
I suggest you educate yourself on DKIM, DMARC, SPF, and SMTP before making those statements.
> It lacks delivery tracking, and it is very opaque when it comes to spam filtering.
It DOES have delivery tracking. Spam filtering is not a protocol feature, and it shouldn't be. Again, I suggest you educate yourself.
> All these protocols are built on top of raw TCP. Things we take for granted in HTTP, like encryption, compression, multiplexing, and debuggability, are not there by default and are harder to implement.
Let me tell you one of the best-kept secrets in the industry: EVERYTHING we have online is built on top of raw TCP. OK, and UDP as well. Every bloody fancy JS framework or mobile app you can think of is written on top of that raw TCP. Crazy world, huh?
> People spent decades building infrastructure, but even then only hardcore graybeards will self-host email
Just because you don't know how to build something doesn't mean it's left to "hardcore graybeards". You've got to admit you just don't know how, and either learn or surrender to companies who do, offering the same for a buck. It's pretty simple.
> There are no issues with POP and multiple clients whatsoever.
The POP3 standard only has "leave a copy on the server" but lacks a synchronisation mechanism.
> I suggest you educate yourself on DKIM, DMARC, SPF, and SMTP before making those statements.
You cannot use SMTP in the real world without these protocols. Your messages would automatically land in the spam folders of big providers. For example, if you want to send email to Gmail, you need SPF and DKIM [1]. Any half-decent implementation of SMTP needs to support all these protocols [2].
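Concretely, this means publishing DNS TXT records alongside your SMTP setup. A sketch of what that looks like (domain, selector, IP, and key are hypothetical, and the DKIM public key is truncated):

```
example.com.                IN TXT "v=spf1 ip4:203.0.113.10 include:_spf.mailhost.example ~all"
s1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3..."
_dmarc.example.com.         IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```

The SPF record lists who may send for the domain, the DKIM record publishes the signing key the server uses to sign outgoing mail, and the DMARC record tells receivers what to do when neither check passes.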
> It DOES have delivery tracking. Spam filtering is not a protocol feature, and it shouldn't be. Again, I suggest you educate yourself.
SMTP has an extension for DSNs (Delivery Status Notifications), but crucially it does not provide information on if/why an email was classified as spam. This is the reason why many website registration forms say "check your spam folder". SMTP deliverability is a hard problem, both at the protocol level and in the spam-filtering infrastructure [3].
> Just because you don't know how to build something doesn't mean it's left to "hardcore graybeards". You've got to admit you just don't know how, and either learn or surrender to companies who do, offering the same for a buck. It's pretty simple.
I spent a significant amount of time investigating the feasibility of building an email product and built some libraries for email protocols. It is not just my opinion; other HN users, including the OP, share it. Search HN for "self-hosting email" for other people's experiences.
> The POP3 standard only has "leave a copy on the server" but lacks a synchronisation mechanism.
Would you explain what you mean by "synchronisation mechanism"?
> You cannot use SMTP in the real world without these protocols.
You absolutely can. Gmail is not the only email provider out there.
> SMTP has an extension for DSNs (Delivery Status Notifications), but crucially it does not provide information on if/why an email was classified as spam. This is the reason why many website registration forms say "check your spam folder". SMTP deliverability is a hard problem, both at the protocol level and in the spam-filtering infrastructure [3].
I find it hard to even start commenting on this; I can only suggest again that you educate yourself. Spam-related matters were never part of the transport protocol, and never will be, for obvious reasons. Just because you would like them to be won't change anything.
> I spent a significant amount of time investigating the feasibility of building an email product and built some libraries for email protocols. It is not just my opinion; other HN users, including the OP, share it. Search HN for "self-hosting email" for other people's experiences.
Happy for you to actually explore building software, but we can't base discussions on some HN users' possibly erroneous opinions, can we?