netcan's comments | Hacker News

So the article isn't very good but the vibe coding debate is pretty interesting.

This is how I'm thinking about it: in a scenario with increased opportunity and risk... You've gotta know where you stand.

First question: how much is more software actually worth to you?

This is a question with a lot of self-deception around it. Software development is expensive. Companies have to-do lists, wishlists, and roadmaps. They have an A/B testing system and a productivity mindset.

But... if LinkedIn, Salesforce or whoever really did have known ways of producing software to make money... they would have done it already. The remaining opportunities follow a diminishing-marginal-value curve/cliff.

Imo, software development isn't necessarily a bottleneck. So... opportunity is limited and risk is the bigger deal.

The opportunity is with the upstart trying to bootstrap feature parity with Salesforce.

If you have no customers yet... you can unfetter the vibe and see if it works.

Imo companies need to revisit Google's early days. Let a thousand flowers bloom. 20% time. If you unleash capable people and give them tokens... that's a good way of searching for opportunities.

The thousand flowers died at Google because they had reached a point where opportunities were no longer everywhere. The best ideas had been discovered, and also... the markets big enough to move Google's dial are few. There aren't many $100bn markets.

There's no way to do vibe coding safely, at scale, currently.


> how much is more software actually worth to you.

A really misunderstood vibe coding task, especially in more corporate settings, is code removal and refactoring.

I think this is the fundamental misunderstanding about agentic development: people only see it as a tool to add code.


This smells like BS to me, and I have a bird’s eye view into several enterprises and startups.

LLMs are not being used for code removal or refactoring; they're used either to "hopefully unblock" the large project that has been behind deadline for 12 months, or just to speed up development (somewhat).


Sorry, the "I" should have been an "A" (which I have corrected).

You are right that they are not. And that is the issue, the misunderstanding.


>The thousand flowers died at Google because they had reached a point where opportunities are not everywhere.

It died because Google reached the enshittification penny pinching rent-seeking stage.


In a sense, everyone is a startup now... At least, every serious user of agents.

So... if you spend $3m to replace a $1m team... you are betting on that $3m cost coming down. It's a proof of concept. The first step is to find out if agents can do the job at all. At this point you are hoping future versions will get more efficient.

Trying to make something efficient before you know that it is even possible is hard.

Drop-in, profitable on day-1 isn't what the frontier looks like.


If we want to be like everyone else, then yes, it's true. However, that business may or may not survive when token costs go up (or, as is fashionable to say now, the "rug pull"). If you can be token efficient now, the path to profitability is much clearer.

There are already many things that can be done now to bring down token use: better planning, tests, language servers, MCP compression. Don't use claw, teams, swarms, Ralph loops, or scheduled tasks unless there is a clear use case.


If token cost goes up, then the efficiency gains come from using fewer tokens... which is likely possible.

The point is that efficiency comes after, not before.


Seems like what you're suggesting for token efficiency is to simply use fewer tokens?

Less, or being more productive with the same amount?

Almost everyone needs a worker-owned co-op to capture more of the value they create.

Useful comment. Thanks.

Myopia is inevitable, to some extent. It's very hard to project this stuff.

Socrates wrote about what was being lost as philosophy was becoming written rather than oral...and he was right.

We can't even understand what was lost. Many methods of learning and thinking became entirely lost. You could say they were redundant, and they were. But... writing largely replaced oral traditions. It didn't just augment them.

He was that old school coder who had the skills to do philosophy and be an intellectual without writing. Writing was an augmentation for him. But for the new cohort... it was a new paradigm and old paradigm skills became absent.

It is very hard to imagine skilled coders becoming skilled without necessity pressing that skill acquisition. The diligent student will acquire some basic "manual coding" skill... but mostly, skill development will happen wherever the hard work is.


I think if manual coding becomes "outdated" then there will just be no demand for junior engineers to manually code. People will probably still learn to code manually, just as there are folks who will still build their own furniture. There may just not be a business demand for it.

What that means 20-30 years from now, when the seniors of today retire and there are no juniors right now, is yet to be seen. People say that AI will probably have advanced far enough that it won't be a problem. But let's say AI somehow stagnates; then I would guess that AI-generated code that is too difficult to debug will be treated as legacy and there'll be demand for manual coding again.

Companies that aren't able to afford the rewrite or maintenance will probably go out of business.

It's an interesting time we live in for sure.


>> What that means 20-30 years from now when the seniors of today retire.

I fear that many won't retire and will instead completely leave the industry, which is already happening. It's anecdotal, but when I first started as a junior dev, I was working with many intermediate devs who had a few years on me.

I kept ties with a group of about two dozen devs. We all went through a lot of the same stuff. Last year I attended two local conferences. Out of the 24 or so, all seasoned senior devs now, only 3 of us remained in the industry. Granted, I'm in accessibility and another moved more into a UI/UX design role, but we were all that's left.

The majority of the discussion at lunch was about why they left, and it was pretty universal. They were seeing AI creeping into everything they did and just walked away. The list of what they disliked about it was long, and they really didn't see the huge upsides the industry was pushing. They had money; they had other opportunities they chose to pursue far and away from the tech industry.

It was pretty eye opening to say the least. We always imagined sitting around a table in our 60's recounting our experiences in tech and now we're not even into our 40's and the industry is losing amazing talent every year that IMHO cannot be replaced by an LLM prompt.

I don't have a good feeling about where this is headed.


Thanks for sharing. Maybe I'm just hanging out with a lot of young devs (we're in our 30s and senior leaning) but we're all cautiously optimistic about AI. That being said we also don't have FU money so we're kinda forced to deal with it.

Maybe CS is one of those industries that just ends up cannibalizing itself with its success.


Out of curiosity, where did those who couldn’t or didn’t want to retire go?

> Socrates wrote about what was being lost…

Dr. Steven Skultety & Dr. Gad Saad discussed this in a recent video / podcast.

This link is time stamped to the topic https://youtu.be/7mcQf9E3YRo?t=1058


Socrates never wrote anything. At least, not as far as we know.

It's the opening page of the book Technopoly.

And here I thought I was being unique. I guess Socrates must be popular.

I'd say that by purging stuff from the brain we are losing thinking itself. Thinking is manipulating ideas and concepts in your head, assembling and linking them. The fewer things there are, the more primitive the result. You cannot juggle without objects to juggle; connecting the dots results in trivial patterns when you have just a couple of dots.

It's true of all automation: we do get more comfort. We build systems so that we humans have as little struggle as possible, not realising that struggle is the only reason for existence. By eliminating it, we are erasing ourselves from this world.

Automation is also for reducing drudgery - the work that prevents us from meaningful struggle by taking up resources that can be better applied elsewhere. Not all struggle (or pain) is created equal.

I wouldn't count on reduced drudgery. The assembly line automated many movements needed for manufacturing. But which work involved more drudgery: craftsman-style car production, or standing on an assembly line at Ford?

With any new technology, subsequent drudgery depends on the technology, its concomitant economics, and the imagination of the people using it.


The craftsman didn't move to the assembly line.

This kind of argument flies in the face of the fact that plenty of inherited rich people seem to lead very happy lives. Of course, they do find things to struggle with, but it's much more pleasant to struggle to score 72 at the golf course or to outbid a rival for a piece of contemporary art than to struggle for basic needs.

I don’t share your idea of a happy life.

I can live a happy life without struggling for basic needs and without playing golf all day long. If you strip off every obligation from life, then you exist, not live.

Facing challenges and overcoming obstacles, plus friends and family, are what make me happy. When you're rich, most people only care about your money, not the person you are. And I think that's exactly what a happy life is about.


I guess to each their own. But in the little free time I have as a non-rich person, I like to face low-stakes challenges I choose myself; in my case those are currently mostly learning Chinese and learning to play a musical instrument. Those still provide obstacles, difficulties, the feeling of progress and moments of success/failure, but I can do them at my own pace and with no serious consequences if I fail.

I can imagine I could be perfectly happy with a life full of challenges of that kind, instead of being forced to work at given scheduled times which often imply I spend less time with my son than I would like, including days I don't feel like it, and including boring tasks (I love my job, but like almost every job, it also has its paperwork, pointless meetings, etc.), knowing I depend on that work to live.

In short, I think we all do need the challenge, the struggle, the successes and the failures, otherwise life would just be boring and pointless. But I don't think we (or at least I) need the obligation component and the high stakes.

What you mention about the rich attracting people focused on money rings true, but it would be moot if AI led us all to lead lives more similar to the rich, which was the point here. (Of course, there's also the issue of whether there is widespread or unequal access to AI, but that's another story...).


It's fairly easy to be submarine rich and fly completely below the radar. Just brush off questions about your work with vagueness. If you're not flashy, nobody will suspect you're rich.

I agree, but I doubt anyone on HN is struggling for basic needs. So the struggle is almost always fun, and I think that goes for most white collar jobs. It's a fun struggle: getting to the office, doing some chores. And that's something AI is slowly killing off.

But there are people making $150-200k a year using GPT for psychological help...

"struggle is the only reason for existence"

That is a bold and frankly unsupportable claim.


Humans don’t tend towards idle quiescence.

We seem to be insatiably inquisitive.

Curiosity doth struggle many cats.


To my mind, being inquisitive doesn't equate to loving, or needing, struggle. Also, struggle differs for many people. Running a half marathon was a struggle for me, but I can't compare it to a family who is struggling to pay bills.

If we take Maslow's hierarchy of needs, me running a half marathon is self-actualization, something I'm privileged to be able to do. A family struggling to put food on the table is still on the lower tier of the pyramid.


Yes, I tend to agree.

A lot of parsimony between your statement and Socrates' comments on the transition to writing.

Interestingly, he placed a lot of importance on memory... where you emphasize manipulation of concepts.


I’ve grown to appreciate this aspect of standard examination as I’ve gotten older. Everyone wants to say “oh, you can just look it up now”, but how can you come up with higher level thinking, when you don’t have the fundamentals in your mind?

To use math as an example, you can always look up formulas. But after more than 1 "layer" of looking up, that quickly becomes impossible. Like, when I had to learn to calculate derivatives and primitives, I could look those things up. But when I got to linear algebra, I couldn't progress until I deeply internalized derivatives and primitives, because looking up formula A only for it to contain unknown formula B just becomes a mess.

Agreed. We've been able to "look it up" for a while. To use math as an example, we've had calculators for a very long time. But when I was in school they didn't let us use calculators until precalc. Now I use calculators even for simple math because I already understand the fundamentals and just need expedience.

Just because one can "look it up" doesn't mean it's necessarily the best thing to do at the moment. But it also doesn't mean that folks who look it up are necessarily losing any higher level thinking, though I concede that many people certainly delude themselves into thinking they understand the fundamentals and thus can use AI as a tool for expedience when they're really using it as a tool for thought.


It just becomes more abstracted, but the thinking is still there. And who is to say we aren't going to keep reading books, delving into hobbies, or watching movies? All those concepts will then be mixed into our brains, and who knows what new things we will think of to extract and desire to build with AI.

I think we'll continue to read books and stuff. But many books/movies will probably have devolved into AI slop (not that this hasn't been a trend for the last few decades to a lot of film buffs).

But hobbies like woodworking or playing an instrument seem immune to slop... though people can be creative with what they can sloppify.


> I'd say that by purging stuff from the brain we are losing thinking itself

The idea that there will be less to think about seems a bit short-sighted. Humans are very good at moving to higher levels of abstraction, often with more complexity to deal with, not less.


I "purge" - or better yet choose not to retain - the data.

BUT, BUT! I keep the index.

My favourite quote from Donald Rumsfeld (a very bad human being, but this is still good):

> Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tends to be the difficult ones.

What I optimise for is to have as many "known unknowns" as possible. I know a concept, process or a tool exists, but don't understand it or know how to do it. But because I know it exists, I won't start inventing it again from scratch when I need it.

Like if one needs to do some esoteric task, they might start figuring it out from scratch. But because the index in my brain contains a link ("known unknown") to a tool/process that makes that specific thing a LOT easier, I can start looking into it more.

Or I might need to do something common like plumbing or some electrical work at home. Do I know how to do that? No. But I Know A Guy I can call, again externalising the knowledge. Either they come over and help me do it or talk me through the process of adjusting the thermostat in my shower faucet (you need to use WAY more force than I was comfortable with without an expert on the phone btw... there are no hidden screws, you just rip the bits off :D)


We will never fundamentally get rid of thinking; it's coupled to navigating the 3D reality we live in.

And we don't need words to think; cognitive problem solving and language processing are separate processes [1]

We will shift the problems we need to think about. Same as always; humanity isn't still solving how to build stone pyramids. Did we stop thinking? No, we just thought about a different to-do list.

[1] https://www.scientificamerican.com/article/you-dont-need-wor...


We also never run out of fuel. There will always be some energy left here and there to tap into.

Fuck thinking!

If I am free as “rational I,” then the rational in me, or reason, is free; and this freedom of reason, or freedom of the thought, was the ideal of the Christian world from of old. They wanted to make thinking – and, as aforesaid, faith is also thinking, as thinking is faith – free; the thinkers, the believers as well as the rational, were to be free; for the rest freedom was impossible. But the freedom of thinkers is the “freedom of the children of God,” and at the same time the most merciless – hierarchy or dominion of the thought; for I succumb to the thought. If thoughts are free, I am their slave; I have no power over them, and am dominated by them. But I want to have the thought, want to be full of thoughts, but at the same time I want to be thoughtless, and, instead of freedom of thought, I preserve for myself thoughtlessness. If the point is to have myself understood and to make communications, then assuredly I can make use only of human means, which are at my command because I am at the same time man. And really I have thoughts only as man; as I, I am at the same time thoughtless. He who cannot get rid of a thought is so far only man, is a thrall of language, this human institution, this treasury of human thoughts. Language or “the word” tyrannizes hardest over us, because it brings up against us a whole army of fixed ideas. Just observe yourself in the act of reflection, right now, and you will find how you make progress only by becoming thoughtless and speechless every moment. You are not thoughtless and speechless merely in (say) sleep, but even in the deepest reflection; yes, precisely then most so. And only by this thoughtlessness, this unrecognized “freedom of thought” or freedom from the thought, are you your own. Only from it do you arrive at putting language to use as your property.
If thinking is not my thinking, it is merely a spun-out thought; it is slave work, or the work of a “servant obeying at the word.” For not a thought, but I, am the beginning for my thinking, and therefore I am its goal too, even as its whole course is only a course of my self-enjoyment; for absolute or free thinking, on the other hand, thinking itself is the beginning, and it plagues itself with propounding this beginning as the extremest “abstraction” (such as being). This very abstraction, or this thought, is then spun out further

- The Ego and Its Own, Max Stirner


Yeah, but where the comparison with philosophy falls short is this: if we lost some ways of thinking, it was gradual and most didn't notice.

Software code, on the other hand, is extremely formal: either it works perfectly as intended, it works crappily and keeps breaking in various edge cases, or it just doesn't work (the last two are variants of the same dysfunction; technically it's a binary state). There is no scenario where broken code somehow ends up working and delivering, or maybe 1 in a trillion, sometimes.

Also, the change is so fast that the failure is immediately obvious to everybody; it's not a gradual change of thinking over a few decades/generations.

LLMs are getting impressive, but anybody claiming there is no massive long-term harm in getting to what we now call proper seniority is... I don't know: delusional, a junior who never walked that long and hard-won path, doing PR for LLMs at all costs, or some other similar type. Or they simply have some narrow use case working great for them long term which definitely can't be transferred to the whole industry, like a 1-man indie game dev.


I would argue it's virtually impossible going forward for a junior engineer to walk that harder path.

Because the easier path seemingly delivers what's expected of them. Sigh, they may even be required to take the faster path.

I've seen many juniors unable to walk that necessary path before LLMs were a thing.


Socrates was history's first Luddite. He opened Pandora's box. I wish he and Plato would be radically rejected as the garbage trash they are (basically just a defense of hierarchy and dialectics).

Quoting my boy Max Stirner, who also fking hated these guys:

“This war is opened by Socrates, and not until the dying day of the old world does it end in peace.“ - The Ego and its Own, Max Stirner


I think it's an interesting idea to explore.

But... It's the type of idea that is unpredictable as it comes into contact with reality. If it works, it probably works very differently from the initial idea of how it will work.


I 100% agree with this. I am certain that I cannot foresee how this would play out in reality.

Yeah, I 100% agree with the caution in this comment.

I see the merit in such a proposal. It's the linguistic equivalent to boiling the food you consume, instead of eating it raw with all the associated bad stuff.

The problem is, as you said, that this plan is unlikely to be as rosy as it's portrayed and probably has a lot of drawbacks in real life.

Interesting to think about and explore, though.


I wasn't even talking about drawbacks, though that applies too.

I mean... you would basically be taking a complex thing, then transforming and reconstructing it. What we want out of social media isn't a simple, legible function. The positives... you'd have to discover them.

If someone starts building with the initial idea above, my guess is that they'd end up with some sort of custom feed that draws inspiration and inputs from social media... but isn't social media. It's something else that you can scroll, read and whatnot.


That is exactly what I want. A boring but factual summary of useful nuggets from the mountain of shite that is ALL of social media. For example, on any given day, reddit/X/Bluesky/HN only has a couple of paragraphs worth of stuff that I care to know about. I want to train my brain to equate the internet with something boring that's only worth visiting when I need to look up information. I want this tech to reduce my (and hopefully others') use of the internet by 98%.

I want to go to news.ycombinator.com/reddit.com/etc on any given day and just see a couple of paragraphs and maybe a few reference links to follow if I so choose. Spend a few minutes reading that and close it.

All of that in the hope of diverting my limited time/energy on Earth to endeavours in real life with real people.


> In many domains, productivity is already sufficient. What’s being sold is workforce reduction.

This is a blindspot to many. People working on entrepreneurial projects need to build a lot. They start with nothing. They need (for example) features. There's a lot to do.

Most firms are not that. Visa, Salesforce, LinkedIn or whatnot. They have a product. They have features. They have been at it for a while. They also have resources. They are very often in a position of finding nails for a "write more software" hammer.

It's unintuitive because they all have big wishlists and to-do lists and A/B testing systems for pouring software into, but...

If there were known "make more software, make more money" opportunities available, they would have already done them.

Actual growth and new demand needs to come from arenas outside of this. E.g., companies that suck at software (either making or acquiring it) might be able to get the job done.

The problem, bringing this back to the article, is fungibility. A lot of this "human capital" stuff cannot be easily repackaged. It's a "living" thing. Talent and skills pipelines can be cut off, and vanish.

A danger in AI coding (and other fields) is that it leverages preexisting human capital and doesn't generate any for later.


> If there were known "make more software, make more money" opportunities available, they would have already done them.

Sometimes they're available, but not palatable, when the opportunity could threaten their existing investments or patterns. That might mean "self-cannibalism", or changing the ecology so that the main product niche is threatened.

Then those opportunities are ignored, or actively worked-against via lobbying, embrace-extend-extinguish, etc.


Ok... but this just generalizes into the "known things" type.

Whether the reason is strategic (like your example), internal politics, or insufficient knowledge... the point is that there is a local equilibrium, and most mature firms are at this equilibrium.

More resources via AI, at first order, go after the diminishing-returns part of the curve... which is a cliff, especially for highly resourced firms topping the S&P 500.

A lot of AI optimists' "mental models" of the economy do not account for this stuff at all.

"Save time/money" outcomes are not similar at all to "make more stuff" outcomes. Firing employees does free up labour... but reutilizing this labour is non-trivial, as this article demonstrates quite well.


> doesn't generate any for later.

"any" is quite an assumption.


I didn't mean this as an absolute statement. Relatively, and in the short term.

I agree that any sufficiently complex human operation - whether industrial or scientific or whatever - requires a culture and a living tradition that develops over time and communicates knowledge and understanding across generations. In fact, many problems in our culture can be attributed to a contempt for tradition that developed. (It is true that tradition can ossify. That can be a problem with attitudes toward tradition rather than with tradition itself, or a sign that something needs to be addressed. A good tradition is a dialogue spanning history.)

However, it is also true that technology develops and produces changes that in the short term cause pain, but in the long term produce a better outcome in some desirable sense. Coding is not an end in itself. Just as switchboard operators and human computers are obsolete, because the conditions that caused the need for them ceased to exist, it may be the case that a certain manual style of programming is also becoming obsolete.

You can imagine human computers decades ago thinking that computing technology is bad, because people will lose numerical facility. But this misunderstands the structure of the value of practical skills and the difference between knowledge of principles and practical skill. Sure, few if any people today can perform numerical computation as quickly and competently in their heads or on paper as human computers, but...

1. that's different from understanding the principles of computation which is closer to a theoretical grasp and has eternal or at least lasting value

2. the value of the practical numerical facility was rooted in the need for obtaining results as quickly as possible, and that particular set of techniques or skills is no longer practical

Perhaps manual coding is like that. I don't know why people are surprised. Generative programming has long been a desired end in CS. CS grads can still and should still learn the principles of their field and learn them well, but the profile of practical industrial techniques and needed skills is changing. As software eats more and more of the world, it is becoming increasingly impractical to keep manually fiddling with silly bits of plumbing. We obviously haven't been able to develop abstractions well enough to avoid it, and part of the reason is that appetite comes with eating. Once you make something easier, it makes it easier to achieve even greater things more easily... hence new plumbing and implementation complexity.

Let's be honest here. Much of programming is intellectually dull. It is plumbing. It's not algorithmically interesting. It's not interesting from a modeling perspective. It's not interesting conceptually. It's not interesting as a matter of system design. Most programming out in the wild is the same old crap being recapitulated a million times over. If all you want is to become skilled in doing the same thing over and over again, then I can understand why you might find LLMs threatening. Your market value as a maker of yet-another-flask-web-app has plummeted hard. People who enjoy that kind of programming are generally not very intellectually motivated people - at least not where programming is concerned - and likely prefer the tedious comforts of rehearsed ephemeral detail. LLMs can keep us from rabbit holing and focused on the domain.

In any case, I don't think LLMs are a threat to the field per se. I just think that the skill set is shifting and developing. I think we are still figuring out what it means to develop the right understanding and intuitions to develop software without the benefit of having done it manually. Time will tell. However, I also think being able to read code has become relatively more important than writing it. When you have to verify the quality of LLM-generated code and put your name behind it, you have to be able to understand it, and that's a somewhat neglected skill in my view. Programmers very often prefer to write code than to read it. LLMs might be just the thing to coerce an improvement in the latter sort of literacy. With this also comes a greater importance of formal specification. That's where I would expect the future of the field to shift.


So yes, but that doesn't negate the circular investment aspect, for most intents and purposes.

The risk from this structure mostly has to do with how it affects market cap. Companies using the value of their shares to fund demand for their services.

That's a risk.


I feel like the whole market at this point is just AI since big tech other than Apple are all massively invested into that. Everyone owns either the S&P or the total world ETF which are both heavily skewed towards big tech and this trade - so literally everybody is in it. It might go well for a few more quarters/years but once something breaks or gets exponentially cheaper this will take down the whole market with it.

It's just hard to tell the difference between "real" demand and "circular." That's the concern.

PG had an essay about this during the dotcom era, when he worked at Yahoo. IIRC, Yahoo's share price and other big successes in the space attracted investment into startups. Startups used that money to advertise on Yahoo. Yahoo bought some of these startups.

So... a lot of the revenue used to analyze companies for investment was actually a 2nd order side effect of these investments.

Here the risk is that we have AI investments servicing AI investments for other AI investments.

Google buys Nvidia chips to sell Anthropic compute. Anthropic sells coding assistance to AI companies (including Google and Nvidia). They buy Anthropic services with investor money that is flowing because of all this hype.

Imo the general risk factor is trying to get ahead of actual worldly use.

The AI optimists have a sense that AI produces things that are valuable (like software) at massive scale... that is, output.

But... even if true, it will take a lot of time, and a lot of software, for the economy to discover this, go through the path dependencies and actually produce value.

The most valuable known software has already been written. The stuff that you could do but haven't yet is the stuff that hasn't made the cut. Value isn't linear.


While value isn't linear, prejudgement of value for allocation of resources is very imperfect.

A lot of the stuff that doesn't make the cut is the stuff that does have value. When you're lowering the bar, remember it's a noisy bar - so a lot more good stuff is going to come through as well.


Yes.. I agree.

.. and that entropy can be where all the ultimate value is. That said... given the point at hand, it's important to start with diminishing marginal returns.

To give a simple example... Google and FB do not have "investable software opportunities" at hand. They've been searching everywhere for nails for their "build software" hammer. They are well resourced and risk tolerant.

The diminishing returns curve for "more software" is steep.

Good stuff coming through often starts with $100m markets becoming $1bn markets. That's not even noise at the scale they're thinking about. Long term, sure. The plausibility range is as wide as it has maybe ever been.

But... systemic value is hard to make.


Most places I've worked have roadmaps, i.e. investable priorities.

If you can burn through lower priority experiments quickly it's great!

They might be working on all of the super high level things they can think of, but there are always more A/B tests, more features, etc. that are just lower priority, and the chaos of scaling up the org to address them all is superlinear, whereas the return on going down the list is sublinear.

So you end up with an equilibrium. If the cost shifts, just like in econ 101, the output will change.


I'm starting to transition how we build software at our company due to the power of AI. No more: five code monkey contractors under a lead. Two top-notch devs are all that is needed now, unrestrained by sprints and mindless ceremonies. There is going to be a giant sucking sound in India.

I can't continue the current model. The dev that gets AI is done in five hours, the ones that don't are thrashing for the next two weeks. I have to unleash the good AI dev. I have the Product team handing us markdown files now with an overview of the project and all the details and stories built into them. I'm literally transforming how a billion dollar company works right now because of this. I have Codex, Claude and GitHub Copilot enterprise accounts on top of Office 365. Everyone is being trained right now, as most devs are still behind.


> No more: five code monkey contractors under a lead. Two top-notch devs are all that is needed now, unrestrained by sprints and mindless ceremonies.

This doesn't tell me anything. Two devs who cared and didn't have a bunch of pointless meetings could already, and regularly did, scoop the big tech teams.

There were always 2 ways to complete a ticket. One that does what the stakeholder wants, and one that does what the ticket says.

But devs that care about the product and what the stakeholders need are rare, and finding one of them was already a significant bottleneck on most projects.

AI might be an accelerator, but we've yet to see if it's optimizing the part that was actually the bottleneck.


Ok... but extrapolating from this to "whole market" paradigms is speculative.

The (imo) question isn't how you produce software, but what the value of that software is. Are you going to make more/better software such that customers pay more, or buy more? Are those customers getting value of this kind?

The answer may be yes. But... it's not an automatic yes.

Instead of programming think of accounting. Say you experience what you are experiencing, but as an accountant. 6 person team replaced by 2-3 hotshots.

So... Maybe you can sell more/better accounting for a higher price. But... potential is probably pretty limited. Over time, maybe business practices will adjust and find uses for this newly abundant capacity.

Maybe you lower prices. Maybe the two hotshots earn as much as the previous team.

If you are reducing team size, and that's the primary benefit... the fired employees need to find useful employment elsewhere in the economy for surplus value to be realized.

Mediating all this is the law of diminishing returns. At any given moment, new marginal resources have less productive value than the current allocation.


And the day you don't have that drug what do you do? If anything you are training people to become dependent on one or more subscription services.

Like the drug of electricity, the Internet, running water, grocery stores?

I don't think the likelihood of "electricity, the Internet, running water, grocery stores" being pulled out from underneath you (either by long term failure or prohibitive cost changes) is anywhere near as high as it is for subscription-based AI tools (at least not in the US).

That was a factor with electricity early on as it was first put to use. The flip side of the infamous "does it make the beer taste better?" adage/nonsense is that, per the story, back then you had breweries build their own power plants, because electricity was just that useful. It took a while for the market to start feeling comfortable with reliability of electricity supply and price point.

Solidworks is also a subscription service.

Except the dev that gets AI and is done in 5 hours will have a poorer mental model of the code. Whether that's important might or might not depend on whether that bites you in the ass at some point.

Don’t really agree with this.

That dev is productive with AI precisely _because_ they have a good mental model.

AI like other tools is a multiplier - it doesn’t make bad devs good, but it makes good devs significantly more productive.


Don't agree - the dev is productive because they have a good mental model of the problem space and can cajole the agent into producing code that agrees with the spec. The trend is for devs to become more like product managers (which is why you see some whip-smart product managers able to build products _without_ human devs)

I believe these tools change the value of different skill sets in very profound ways. Being good with the rules of a programming language and its syntax is no longer as valuable as it used to be.

Understanding the problem space is becoming more valuable. Strength in architecture of a solution is another skill that is becoming very valuable.

We are close to getting to a point where someone with an overall general (and perhaps not very detailed) understanding of arch and design, a good understanding of the problem space, and good taste in usability will be able to create awesome solutions.

I can't wait to see these solutions being created by one or two person teams.


But does it matter?

If you write a program in Python or JavaScript, you have a terrible mental model for how that code is actually executed in machine code. It's irrelevant though, you figure it out only when it's a problem.

Even if you don't have a great mental model, now you have AI to identify the problems and generate an explanation of the structure for you.


No, but you have a great mental model of the interface between your problem domain and the code, which is where you can effect change.

Outsourcing that to an AI SaaS might be ok I guess. Given past form there's going to be a rug-pull/bait-and-switch moment and dividends to start paying out.


The effect of JavaScript or Python code is well defined - they have an excellent model of what it will do.

The performance - how that is executed on the machine - is what you were referring to. "As if" is the key to optimization.


> It's irrelevant though, you figure it out only when it's a problem.

For the past decade people have been clawing their eyes out over how sluggish their computers have become due to everything becoming a bloated Electron app. It's extremely relevant. Meanwhile, here you are seemingly trying to suggest that not only should everything be a bloated, inefficient mess, it should also be buggy and inscrutable, even moreso than it already is. The entire experience of using a computer is about to descend into a heretofore unimaginable nightmare, but hey, at least Jensen Huang got his bag.


That is the doom side. However AI has found and fixed a lot of security issues. I have personally used AI to improve my code speed, AI can analyze complex algorithms and figure out how to make them much faster in ways I can do as a developer, but it's a lot of work that I typically wouldn't do. Even just writing various targeted benchmarks to see where the problems really are in my code is something I can do, but would be so tedious I often would not bother. I can tell AI to do it and it will write those.

Only time will tell which version of the future we end up with. It could be good or bad and we will have to see.


In terms of runtime performance of applications, AI is a net win. You can easily remove abstractions like Electron, React, various libraries. Just let the AI write more code. You can even do the unthinkable and write desktop native again.

> literally everybody

I personally make sure I really diversify, so that when I buy funds, I buy those with stocks of EU companies which pay dividends. AFAICT there are 0 European AI companies that pay dividends.


There are zero US pure-play AI companies which pay dividends, right?

You have to go pretty far down the list of holdings (under "Holding details") to find any big bets on AI:

https://www.vanguardinvestor.co.uk/investments/vanguard-ftse...


For tax reasons most companies are avoiding paying dividends. It still happens but it's not nearly as common and companies are trying to get away from it because for many investors it is better not to have dividends paid.

>Companies using the value of their shares to fund demand for their services.

That's not what's happening here though. Google isn't using the value of its shares to fund demand. Google is using its own cash flow to fund this demand from Anthropic.

The question is whether Anthropic has demand from end users for the capacity they are buying from Google (that's a yes I guess) and whether that demand is profitable for Anthropic (that's a question mark).


True.

Regardless, (a) its ability/desire to make such investments is still driven by stock-driven optimism and (b) these transactions' "signal" can have a similar, warping effect.

In this case the transaction creates demand for Google's services and also funds Anthropic's growth... which represents demand for Google's services.

"Loop" is an approximation of an analogy. The risk is that enough of such transactions create a dynamic that distorts feedbacks.


>(a) its ability/desire to make such investments is still driven by stock-driven optimism

I don't think it has much to do with the stock price at all. Current platform oligopolists fear the rise of new platforms. They want a foot in the door for strategic reasons.

What could happen is that frontier labs like Anthropic and OpenAI never become platforms and turn out to be providers of a largely commoditised, low margin service.

In that event, current valuations are too high. But Anthropic's valuation doesn't seem extreme to me. Their $30bn annual run rate is valued at $380bn.

Given this price and Anthropic's strategic value, Google's investment seems reasonable.


But OpenAI/Anthropic are not selling the compute as they're buying that from Google/Amazon/etc.

So they're selling the transformation, or the model. Or the ability to make a model. And their brand and their harness.

And it seems like the model is definitely not worth 380 billion. Models depreciate incredibly fast. There are lots of models and the other models aren't that far behind.

And it seems like the harness is not worth much, as there are already open-source alternatives that people claim are better.

And all these companies are paying lots of money for these AI training experts.

But I suspect that any regular Hacker News reader with 10 years' dev experience could become a training expert in months if allowed to play with a load of compute and a lot of data for a bit.

Just like any of us could have become a data scientist, this stuff is not particularly hard. Random horny dudes on the internet are putting out LoRAs and quantized models in days against the open-source image models.

So what's worth 380 billion exactly? The brand?

These valuations just look really off. Not by one order of magnitude, but more like by 4 orders of magnitude. Like 380 million might be a reasonable valuation, but not billion.

What I also don't get is that it's pretty obvious to me that the Europeans should all be spinning up their own, not necessarily massive, data centers and throwing a few billion at some guys in Cambridge or Stockholm or London or Berlin to make their own AI models.

Only the French have done it.

But instead the rest seem to be trying to court Anthropic or OpenAI to build data centers. Which is just stupid politics given what's happening in the world right now.


The technical task is not the business task... unless the task really is a commodity.

Coding Facebook isn't rocket surgery either. Neither is Visa, Salesforce or many other tech-centric companies. Replicating their business model is.

Those are locked in by network effects. Path dependencies and suchlike can play a role. But... the upshot is that Anthropic, OpenAI and whatnot have the models people are using for work.

A government-sponsored model isn't a bad thing to have, but I think it's unlikely (though possible) that it will also be the product people want to use or the business that succeeds.


>So what's worth 380 billion exactly? The brand?

Whatever it is that leads to a $30bn run rate, growing >200%. Right now it's having the better model and being able to show how to use it in specific verticals.

But I suspect in the long run only platforms have high margins (and they will need margins not just revenues to justify their valuation). Are they becoming platforms? Google seems to think (or fear) that they might.


Not directly related to the valuation question you asked, but for Google there's a lot of value in getting as much Anthropic workload to run on their hardware as possible. The value comes from getting the insights and learnings of running these workloads, especially when they run on custom Google hardware. That hardware will get better as a result and increase the likelihood that Google has world class AI hardware in the future.

I can't say with any confidence that the $40B is a reasonable amount to pay for that value, but it doesn't seem unreasonable over a multi year time horizon given the stakes.


Moonshot (Kimi) and DeepSeek trained their models on Chinese GPUs, with little capital, and are now raising at around $20bn valuations.

Their latest models are arguably comparable to frontier ones. It is obvious that the valuations of the US companies are totally surreal now.


Apparently it's not obvious, judging by the investment in them and their stock values.

Kimi and DeepSeek are in China and don't have access to the US capital market.

Because everybody is playing the same game?

>So what's worth 380 billion exactly? The brand?

>These valuations just look really off. Not by one order of magnitude, but more like by 4 orders of magnitude. Like 380 million might be a reasonable valuation, but not billion.

Or maybe the USD isn't worth that much now.


Can you share more on this market cap risk? I see legit stability / correlation risks but can’t work out the market cap risk mechanics.

The cash was just sitting on their balance sheet not increasing Google’s valuation, turning it into revenue is value creation.

The equity transfer is a bit murkier, Google I guess gets to mark this on their books according to Anthropic’s latest valuation, but isn’t this more of a volatility swap than conjuring market cap? Analysts are not going to apply $30b of future spend at current PE, they will additionally discount this by the P(Anthropic demand crashes). So it’s not like this just boosts their market cap for free.

Of course Google’s balance sheet now has higher vol equity instead of cash for their products.


The tech industry goes through investment phases to produce oligopolies it turns around and enshittifies, parasitizing income off what it has built. Venture capital, acquisitions, acquihires, circular investments - It’s been incestuous for years. The question is whether competition from China’s sophisticated tech sector, which already surpasses the US in many areas, will put a pin in these plans this time round.

I don't agree with the "full cynicism" POV, but I do agree that TechnoChina's existence is a potential paradigm shifter.

But generally speaking, AI is currently pretty competitive and robust. Straightforward business models, where users pay money and select the best deal, are central. Market power is relatively dispersed.

So... Idk. Nvidia doesn't have competition. But Intel didn't have much competition either, and they drove the Moore's law bus for a long time.

Hardware has been less prone to enshittification. Maybe it's because the demand curve for compute doesn't have natural limits. Drive down the price, and demand grows by enough that the total market grows.


There is a giant capital outlay required to produce a competitive model. Joe Schmo can’t jump into this market. Best he could do would be to ingratiate himself to an existing funding cartel. The moat surrounding a handful of market participants is billions of dollars wide.

There’s competition now among the American companies (who have a head start in this space) as always happens as the professional oligopolists try to manufacture their footholds in the new market.

Nor is it cynical to objectively appraise the interests and economics at play. People aren’t playing circular financing games out of the goodness of their hearts.


Nvidia clearly has competition, that's what this deal with Google is about (TPUs).

Economics is circular. The baker buys shoes from the cobbler, and the cobbler buys food from the baker.

Yes but the baker doesn't just give the cobbler money to buy bread and take a share in the shoe shop in return.

But there's nothing wrong with that. It's not a circle; it's an exchange. Like any transaction.

I like this abstraction. If the baker says “I could sell 10x more if only I had shoes that allowed me to bake faster” then the cobbler says, “split the growth with me and I’ll craft you all the shoes you want.”

The claim was circularity is evidence the business activity is fake.

Those are tangible items. Here, the baker is buying shoes from someone who says they're going to be a cobbler some day.

It’s no different with services. Making deals with potential cobblers seems like a fine market activity.

In human time scales, the species which thrive will tend to be the adaptive generalists. Evolution takes time.

And: on the 'r' side of the r/K reproductive strategy. Whales are literally the exemplar of K-selection, that is, a very small number of high-quality offspring.

<https://en.wikipedia.org/wiki/R/K_selection_theory>

Whale lifespans are long, populations and fecundity / brood sizes are small, sexual maturity relatively late, and childhood mortality relatively high. All of these make for slower rather than more rapid evolution.

Species such as krill (on which many whales feed) are far more likely to evolve rapidly in the face of increasing selection pressures. Whales might well find themselves boxed into an inescapable evolutionary corner.


Imo, there is a real question about the value of better here. Also, the ability and likelihood of the enterprise to actually leverage better.

This dynamic is not new: unsophisticated enterprise buyers making bad decisions in a bad way. We haven't had overwhelming market discipline come down on it, though.

Do these enterprises actually need "good?"


Because they offer an attractive job/package relative to other opportunities... same as any other job people take.
