Re the vendor lock-in point: this is really a harness issue. Sure, CC is restricted to Anthropic models, but it's not the only harness out there. So if one vendor has an outage or botches the quality of their models due to a compute shortage, you can switch to another vendor. LLMs are the easiest thing to switch. Of course, if hardware costs go up, so will prices across all AI vendors. The only way out for the employer would be to buy the hardware directly (or do a fixed-price deal with a cloud provider).
Re the understanding code point: you can still use LLMs to understand code. If you write the spec without knowing anything about the code, of course the architecture might suck. Maybe there is already a subsystem you can modify and extend instead of adding a completely new one for the feature you are building, etc.
I use LLMs in my daily workflows and they understand code remarkably well, and much more quickly than I could by reading it myself.
CC isn't even limited to Anthropic models. There's a post on the front page right now about using it with DeepSeek V4, since DeepSeek provides an Anthropic-compatible API and CC reads API URLs from env variables, so you can override them.
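For reference, the override looks roughly like this (the exact variable names and the DeepSeek endpoint URL are from memory, so treat them as assumptions to verify against the docs):

    export ANTHROPIC_BASE_URL="https://api.deepseek.com/anthropic"  # any Anthropic-compatible endpoint
    export ANTHROPIC_AUTH_TOKEN="sk-..."                            # the provider's key, not an Anthropic one
    claude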
I've built a configuration transpiler for Claude Code and Codex and found I can switch pretty quickly between the two, and even run both at once. At the moment Codex performs better; previously CC did. There is no vendor lock-in. That's an old canard in technology, and one that LLMs themselves make irrelevant: once you've got an implementation that uses X, converting it to Y is almost trivial with an LLM, because the implementation serves as the canonical spec.
It's buried in my dotfiles and not easily extracted. But the idea isn't a hard one to implement, except that the coding engineers are woefully unaware of themselves. Codex is easier because it's open source; with Claude you kind of have to futz with it for a while. Once you have the intermediate form working and outputting config for the two, I'm sure you can coerce it to any other agent that comes along with similar constructs (marketplaces, etc.); see the rough sketch below. There's some nuance for some MCPs, particularly those that download binaries like Rust MCPs, but I found that very complex and probably better avoided unless you really need it.
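To make the fan-out idea concrete, here's a rough sketch. The agents.json schema is invented for illustration, and the two output formats are only approximations of what Claude Code (.mcp.json) and Codex (config.toml) expect, so check against the real tools:

    # one canonical MCP server list, fanned out to per-agent configs
    jq '{mcpServers: .servers}' agents.json > .mcp.json
    jq -r '.servers | to_entries[] |
      "[mcp_servers.\(.key)]\ncommand = \"\(.value.command)\""' \
      agents.json > config.toml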
This is a general fear for me whenever I take a taxi or something similar: I always remind the driver about my luggage in the back when we arrive, and ask whether they can help me get it out.
It's unpleasant for me at normal speed settings, but in fast mode it works really well: the AI makes changes quickly enough for me to stay focused.
Of course, this requires being fortunate enough to have one of those AI-positive employers where you can spend lots of money on clankers.
I don't review every move it makes. Instead I have a workflow where I first ask it questions about the code, and it looks around and explores various design choices. Then I nudge it towards the design choice I think is best, etc. Asking those questions about the code also loads up the context appropriately, so the AI knows how to make the change well.
It's a me-in-the-loop workflow, but it prevents a lot of bugs, makes me aware of the design choices, and, thanks to fast mode, is more pleasant and much faster than doing the work manually.
So that article can in theory be used to conscript any man, citizen or not, living in Germany or not.
The Wehrpflichtgesetz, which is an ordinary statute and needs only a simple Bundestag majority to change, narrows this very broad constitutional power: its article 1 restricts conscription to men over 18 who hold German citizenship.
Article 3 narrows it even further, to men under 45 or under 60, depending on the severity of the situation.
But yes, in theory it can be changed to cover any non-German-citizen man, people aged 80, men who have lived in Germany for a while or have never set foot in Germany, or just random men who happen to change flights at FRA.
This one might last longer. The AI race is on, and the US is trying its best to make participating as expensive as possible for China. Every dollar China spends on GPUs bought at a markup is a dollar not spent on building navy ships.
If there is an escalation over Taiwan, it will cause the loss of most of the world's high-grade chip manufacturing capacity. TSMC is busy doing technology transfers into the US, but that will take time, those fabs won't have capacity for the whole world, and they still depend heavily on Taiwan-based engineers when something goes wrong.
Just like with COVID, nobody knows how long such a shortage would last.
It will be incredibly hard for China to conquer Taiwan. One hundred kilometers across the strait is a brutal geographic hurdle. If anything, the fabs will probably be severely damaged in the war. Plus, most senior execs and elite engineers would be moved to US offices in Arizona.
We are going to have that in a couple of months regardless. So it won't matter if Taiwan's manufacturing base gets disrupted; the hardware will have already effectively stopped.
Wow, I wasn't aware Samsung, Intel, and SMIC were unable to produce "modern technology." Not everything needs to be on a 3nm TSMC process, believe it or not.
TSMC makes a lot of stuff besides the EUV-scale parts that all the YouTube videos talk about.
Almost everything you own that runs on electricity has some parts from Taiwan in it. TSMC alone makes MEMS components, CMOS image sensors, NVRAM, and mixed-signal/RF/analog parts to name a few.
Also, people seem to assume that TSMC is an autonomous entity that receives sand at one loading dock and ships wafers out at another. That's not how fabs work. Their processes depend on a continuous supply of exotic materials and proprietary maintenance support from other countries, many of them US-aligned. There is no need to booby-trap any equipment at TSMC; it will grind to an unrecoverable halt soon after the first Chinese soldier fires a rifle or launches a missile.
Hopefully Xi understands that. But some say it's a personal beef/legacy thing with him, and that he doesn't even care about TSMC.
Russia wasn't able to take Ukraine even when it could drive its tanks right up to Kyiv. Modern warfare tech just favors the defender too much. China has ninety km of sea to cross before it even gets to Taiwan. Missiles and drones have already taken out much of the Russian naval fleet in the Black Sea; China would lose a lot in the same way if it ever attempted the crossing.
It's a loss leader, but this is normal. The same happened with Uber, Airbnb, Amazon, etc.: use VC money to buy market share, and once you have it, you can milk it.
The question is more about the moats these companies have, and it seems to me that while their models are amazing technology, they don't really have one. The open/Chinese models continuously catch up to the American ones.
And what possible moat could there be? It isn't hard to foresee that in just a couple of years, models outpacing today's frontier tech will run on consumer hardware, with open-source workflows anyone can pull down and run, and the providers won't see a penny.
Another scenario is that dense models get replaced entirely, in which case the likelihood of OpenAI and co. pioneering the successor is pretty slim. They would be left with infrastructure worth billions that cost them ten times that two years earlier, facing the reality the article touches on: liquidation.
If I look around in the FLOSS communities, I see a lot of skepticism towards LLMs. The main concerns are:
1. they were trained on FLOSS repositories without the authors' consent, including GPL and AGPL repos
2. the best models are proprietary
3. folks making low-effort contribution attempts using AI (PRs, security reports, etc).
I agree those are legitimate problems, but LLMs are the new reality; they are not going to go away. Lobbies far more powerful than the OSS ones, namely the big copyright holders in media, are losing their fights against the LLM companies.
But while companies can use LLMs to build replacements for GPL-licensed code (with that GPL code probably in the LLMs' training set), the reverse can also be done: one can use LLMs to break monopolies open and to build enormous amounts of open-source software.
> LLMs are the new reality, they are not going to go away
That's the conventional wisdom, but it isn't a given. A lot of financial wizardry is taking place to prop up the best of these things, and even their most ardent proponents are starting to recognize their futility once a certain complexity level is reached. The open-weight models are the stalking horse that gives this proposition the most legs, but it's not a given that Anthropic and OpenAI will exist as anything more than shells of their current selves in 5 years.
But LLMs themselves are literally not going away; I think that's the point. Once a model is trained and let out into the open for free download, it's there and can be used by anyone - and it's only going to get cheaper and easier.
Yeah, Kimi is good enough. If there were some kind of LLM fire and all the closed-source models suddenly burnt down and could never be remade, Kimi 2.5 would already be good enough forever.
"Good enough" is probably an understatement; it's amazing compared to last year's models.
> 3. folks making low-effort contribution attempts using AI (PRs, security reports, etc).
Meanwhile, people sleep on LLMs that could help them audit their code for security holes, or on security code-auditing tools in general. Script kiddies don't care whether you think AI is ready; they'll use AI models to scrape your website for security gaps. They'll use LLMs to figure out how to hack your employees and steal your data. We already saw hackers break into Mexican government servers, scraping basically every document on every Mexican citizen. Now is the time to start investing in security auditing, before you become the next news headline.
AI isn't the future, it's already here, and hackers will use it against you.
More like: you're still using horses to move your product while thieves and your competitors use trucks to outpace you. A truck can cut off your horse carriage, and then they can rob you easily and take all your cargo. Yes, you can still get your cargo from point A to point B, but you're going to be targeted by bad actors in vehicles.
The blog post this thread is about argues that even average users now have the ability to modify GPL'd code, thanks to LLMs. The bigger advantage, though, is that one can use them to break open software monopolies in the first place.
A lot of such monopolies are based on proprietary formats.
If LLM swarms can build a browser (not from scratch) and a C compiler (from scratch), they can also build an LLVM backend for a bespoke architecture that only has a proprietary C compiler. They can also build replacements for Adobe software, PDF editors, debug and fix Linux driver issues, etc.
The end game is a resource-based economy, as all sorts of labor become cheap.
Think of Saudi Arabia, Iran, Putin's Russia, or Norway: i.e. a risk of highly nepotistic dictatorships, with the potential that it ends up well despite the odds (Norway).
Before, if you made a product that improved everyone's lives, say you invented Google or Heinz ketchup, you could make a lot of money from it; you did a good deed and became rich at the same time. The masses would reward you for delivering the benefits of your invention by giving you a piece of their work output.
As their work becomes worth less and less, why focus on those humans, though? I'm asking rhetorically, of course.
An economy that thrives on innovation enriches the innovators, making them powerful. A brute in power causes the innovators to leave or, in the worst case, mass-executes them outright (think of what Stalin did in Russia). With AI, though, you can have a brute in power, since an oil rig or a datacenter can be protected by a bunch of machine guns.
An economy with AI everywhere will, after a short and very innovative period, just be about who controls which resource: water for a datacenter, production lines for robots, mining rights, operational control of robot fleets, etc.
The working 95% will probably experience a sharp decrease in purchasing power, making a lot of products unaffordable to them, so consumption-wise we'll see a further shift towards plutonomics. The owning top 10% will be affected by this major shift in consumption as well: e.g. a tower full of condos becomes worthless if the tenants can't pay rent because they got laid off.
The need for robots and AI will keep increasing, and eventually most economic activity will revolve around those robots. It's a bit like the paperclip optimizer: whether those robots protect gay luxury space communism from counterrevolutionaries or project the will of the Davos council of the Forbes 400, economically it will look quite similar.
There will still be human societies, and humans will still talk to other humans; I doubt we'll all be conversing exclusively with LLMs. There will still be social mobility, but it will revolve around nepotism, lying, and the various escalation steps of war.
We might end up in different scenarios depending on the country. Some countries like Germany might lose relevance, since most of their value lies in things AI will replace, and they have few natural resources, or those have already been depleted.
We might also see companies that automate everything end to end, from mining to producing and running weaponized robot fleets. Shareholders of those companies will do great too, if the leadership respects minority shareholder rights, that is (why would they, though, when they can outgun any law enforcement?).
Do I like this future? I don't think so. If AI continues on its successful trajectory, we will probably have solved cancer, communicable diseases, and aging within the next 30 years, but I'm not sure those cures will be accessible to all 8 billion humans.
You have a lot of control over LLM quality. There are different models available, and even different effort settings within the same model produce different outcomes.
Of course, they don't learn like humans, so you can't do the trick of hiring someone less senior but with great potential and then mentoring them. Instead, it's more of an up-front price you have to pay. The top models at the highest settings obviously form a ceiling, though.
You also have control over the workflow they follow and the standards you expect them to stick to, through multiple layers of context. Expecting a model to understand your workflow and standards without making the effort to write them down is like expecting a new hire to know them without any onboarding; see the example below. Allowing bad AI code into your production pipeline is a skill issue.
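As a minimal, hypothetical example of writing them down, using Claude Code's CLAUDE.md convention for project-level context (the rules, the make test target, and the src/ path are placeholders, not anything prescribed):

    cat > CLAUDE.md <<'EOF'
    ## Workflow
    - Explore the relevant subsystem and propose a design before writing code.
    - Run the test suite (make test) before declaring a change done.

    ## Standards
    - No new dependencies without asking first.
    - Follow the existing error-handling patterns in src/.
    EOF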
Imagine you opened a job posting and had all applicants complete SWE-bench.
Ignoring the useless/unqualified candidates and models, human applicants have a much wider range of talent for you to choose from than the top models + tooling.
The frontier models + tooling are, in the grand scheme of things, basically equivalent at any given moment.
Humans can be just as bad as the worst models, but models are nowhere near as good as the best humans.
AI etiquette is a great term. AI is useful in general, but some patterns of AI usage are annoying, especially when the other side spent 10 seconds on something and expects you to treat it seriously.
Currently it's a bit of a wild west, but eventually we'll need to figure out the right set of rules for how to use AI.
I'm hearing nightmare stories from my friends in retail and healthcare, where someone walks in holding a phone and asks them to talk through the chatbot on the phone. A friend had a person walk in last week, ask him to explain what he does to Grok, and then ask Grok what questions to ask him.
What shocks me is the complete lack of self-awareness of the person holding the phone. People have been incapable of independent thought for a while, but to hold up a flag and announce it to your surroundings is really something else.