
I think the author might need a reality check. In almost every company, the usage of AI is becoming more and more valuable. Even if progress in models freezes, the current state is already too valuable to abandon. I have talked to multiple people at multiple companies, and LLMs are becoming indispensable for so many things. ChatGPT may or may not conquer the world, but I don't see enterprise usage decreasing (and I am not talking about customer-facing LLM usage, which might be very stupid).


The author's tone is over the top, but I think this quote is true:

> Large Language Models and their associated businesses are a $50 billion industry masquerading as a trillion-dollar panacea for a tech industry that’s lost the plot.

There are very real use cases with high value, but it's not an economy-defining technology, and the total value is going to fall far short of projections. On the bottom line, the overall productivity gain from AI at most companies is almost a rounding error compared to other factors.


Part of this is because VCs are hoping for a repeat of the Web 2.0 boom, where marginal costs were zero and billions of people were buying smartphones. If you check YC’s RFS (just an example), they’re all software: https://www.ycombinator.com/rfs

Everyone asks “what if this is like the internet,” but what if it's actually like the smartphone, which took decades of small innovations to make work? If in 1980 you had predicted that in 30 years handheld computers would be a trillion-dollar industry, you'd have been right, but it still required billions in R&D.

There are a ton of non-software innovations out there; they just require more than a million-dollar seed to get working. For example: better batteries, better solar panels, fusion power, innovations in modular housing, etc.


Hopefully you are right, but I fear this is a very naive and premature judgement. At one point the internet was a $50B industry insisting it would be a $1T industry. It even had a bubble that burst and buried entire companies.

Yet, $1T was nevertheless a profound underestimation.


The same could be said about all the hyped trends that died: blockchain (ignore the cryptocurrency part), IoT (not sure what happened there), big data (the foundation of current AI, regardless of what anyone says), an app for everything (we indeed have more apps now, and everything is junk). These were also considered the new water/air/electricity/revolution/disruption.

In aggregate, we seem to have developed a collective amnesia, given how fast these trends move and how much is burned keeping the hype machine going and us on edge. We also need to stop calling LLMs different, the same way every kid claims Mark Zuckerberg was different or Bill Gates was different, so dropping out like them will make these kids owners of the next infinite riches.

After a long decade of hearing a fast-moving "this will truly revolutionize everything" speech every so often, we need to keep some skepticism. Additionally, the AI bubble is more devastating than the previous ones: earlier, money was spread across multiple hypes, from which some emerged as silent victors of current trends, but now everything is consolidated into one thing, all eggs in one basket. If the eggs break, a large population and industry will metaphorically starve and suffer.


Is it? There's certainly $1T of other businesses built with the internet, but the internet business, itself, was rapidly commoditized. The valuable things were the applications built on it, not the network. The argument here is that nobody's found the $1T applications built on AI foundation models yet, but OpenAI is valued as if they have, because their demo chatbot took off out of peoples' curiosity and people are extrapolating that accident exponentially into the future.


The internet bubble is probably a good analogy. It took almost 20 years and several rounds of failed businesses for the internet to have the impact that was originally promised. The big internet companies of the 90s are not where the money was ultimately made.

Similarly, the current LLM vendors and cloud providers are likely not where the money will ultimately be made. Some startup 10-15 years from now will likely combine a cheaply hosted or distributed LLM with several other technologies, creating a whole new category of use cases we haven't even thought of yet, and that will actually create the new value.

It's basically the Gartner hype cycle in action.


Almost all of the internet buildout happened between 1998 and 2008; it cost about $1T and was adding $1T to the economy annually by the end of that buildout.

This latest AI hype cycle is also about 10 years old, with about $1T invested, and yet it's still a super-massive financial black hole with no economy-wide trillion-dollar boost anywhere in sight.

The broadband, fiber, and cellular internet buildout changed the world significantly. This LLM buildout is doing no such thing and is unlikely to ever do so.


Approximately no one gave a flying fuck about “AI” at anything close to this scale and level of funding and hype before ChatGPT was released in Nov 2022. My non-tech friends and relatives couldn't have named a single AI product; now they all use ChatGPT, many of them daily and with paid accounts.

Let’s circle back in 2032 and see how much of this was “hype”.


> Approximately no one gave a flying fuck about “AI” at anything close to this scale and level of funding and hype before ChatGPT was released in Nov 2022.

I think that Google image search is a really good example of useful results from the overall AI boom.

I do remember talking to someone in 2016 about the possibility of an AI winter if the image stuff didn't work out, so clearly I'm not the right person to talk to about that.


> the internet was a $50B industry

How much is "the internet" an industry? It's an enabler and a commodity as much as electricity or road networks are. Are you counting everything using the internet as contributing a sizable share to the internet industry's value?


By the time we finished pouring a trillion dollars into the global broadband, fiber, and cellular network buildout between 1998 and 2008, the Internet was already adding a trillion dollars a year to the economy.

We've now got 10 years and about a trillion dollars invested in this latest AI bubble, and it's still a super-massive financial black hole.

Ten years and a trillion dollars can make great things happen. AI ain't that.


It’s not enough to be right eventually. Being too early is as good as wrong.


There is a difference between valuable and profitable. I think anyone who wants to say there isn’t a bubble needs to solve two problems:

1) Inference is too damn expensive.

2) The models/products aren’t reliable enough.

I also personally think talking to industry folks isn't a silver bullet. No one knows how to solve #2. No one. We can improve reliability either by waiting for better chips or by using bigger models, which has diminishing returns and makes #1 worse.

Maybe OpenAI’s next product should be selling dollars for 99 cents. They just need a billion dollars of SoftBank money, and they can do 100 billion in sales before they need to reraise. And if SoftBank agrees to buy at $1.01 the business can keep going even longer!


I think AI will be useful to industries/companies where #2 is unimportant: Where the quantity of product is far more important than its quality. Disturbingly, this describes the market for a lot of industries/companies.


It seems like every 6 months they come out with a new model that's as good as the previous generation but with a fraction of the inference cost. Inference cost for a given level of quality has been dropping fast. E.g. GPT-4.1 nano outperforms the old GPT-4 model but costs 300x less.

Right now, asking a single question through the API costs about a fiftieth of a cent on their cheap 4.1-nano model, up to about 2 cents on their o3 model. This is pretty affordable.

On the other end of the spectrum, if you're maxing out the context window on o3 or 4.1, it'll cost you around $0.50 to $2. Pricey, but on 4.1 that's like inputting several novels' worth of text (maybe around 2,000 pages).
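For a rough sense of that arithmetic, here is a back-of-the-envelope estimator in Python. The per-million-token prices are placeholder assumptions for illustration, not official rates, so check the live pricing page before trusting the numbers:

    # Back-of-the-envelope API cost estimator.
    # Prices are illustrative assumptions, NOT official rates.
    PRICES = {  # $ per 1M tokens: (input, output)
        "gpt-4.1-nano": (0.10, 0.40),
        "o3": (2.00, 8.00),
    }

    def request_cost(model, input_tokens, output_tokens):
        """Estimate the dollar cost of one API call."""
        in_price, out_price = PRICES[model]
        return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

    # A short question on the cheap model: a tiny fraction of a cent.
    print(request_cost("gpt-4.1-nano", 50, 300))   # 0.000125 dollars
    # Maxing out a ~1M-token context on a bigger model: a couple dollars.
    print(request_cost("o3", 1_000_000, 5_000))    # 2.04 dollars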


If you're looking at it from the perspective of business sustainability (i.e. what the article is about) "we keep lowering our prices" doesn't sound so great. The question is whether GPT-4.1 nano costs OpenAI 300x less than GPT-4 to run or not. If it costs exactly that much less, that still means demand needs to grow by more than 300x just to keep revenue constant. And if it does, then total inference cost correspondingly goes up again.
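To put toy numbers on that (all values assumed, purely for illustration):

    # If the price per query falls 300x along with cost, volume must
    # grow 300x just to keep revenue flat. Numbers are made up.
    old_price = 0.06                     # $ per query, assumed
    new_price = old_price / 300
    old_volume = 1_000_000               # queries per day, assumed

    old_revenue = old_volume * old_price
    required_volume = old_revenue / new_price
    print(required_volume / old_volume)  # 300.0 -> demand must grow 300x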


I think that's convoluted logic trying to show that a business lowering its costs is actually bad for the business. You could make this assumption about any business, and it's not a good one. If a car company discovered a way to lower its production costs and started selling its cars cheaper, would you argue that maybe this is actually bad for the car company, because maybe they lowered the price so much that it cancelled out all the benefit? You're just hoping they're really bad at pricing their cars.

The different GPT models are for different use cases. The existence of the nano model does not imply everyone who used the formerly cutting-edge GPT-4 will switch over to the cheaper nano model. Most of them will switch to the smarter models like 4o, 4.1, or o3. Nano allows the creation of new tools where the other models are either too pricey or too slow to respond to be viable.


It's funny that you bring up cars as a point of comparison, because constantly releasing new models while dropping prices is exactly what made a bunch of Chinese EV companies go bankrupt in recent years.

It's not that they were stupid and priced their cars badly, it's that they didn't have much choice. Developing a new car model has significant upfront costs that require a correspondingly large number of cars to be sold for the company as a whole to turn a profit, each individual sale being profitable is not enough. But in an environment with lots of competitors constantly releasing new models for cheaper, any single company had little choice but to also release a new model and lower their prices in order to sell any cars at all, eating a loss on the previous model. And eventually some companies couldn't take those losses any more and folded.

OpenAI is definitely losing money overall, presumably because training new large language models is expensive. But can they stop doing that to turn a profit? If OpenAI announced that the funding for their next frontier training run fell through and they're laying off research staff to focus on inference, and then competitors come out with better, cheaper models, how long will OpenAI be able to stay relevant? I guess they hope everyone else gives up first and they won't have to find out.


This is all true. Competition reduces a company's profits.

OpenAI reducing their costs isn't what hurts them. They're better off with lower costs. Their competitors getting lower cost models is what hurts them.

Absolutely some AI companies will be out-competed and fail. I don't know which one will end up on top.

But even if almost all the AI companies fail, it doesn't stop the industry as a whole from being very valuable. Whichever AI companies are left will just take over the market share of those that failed. And this eases the pressure on the surviving companies.


That would be great news for OpenAI if they could somehow prevent people from buying their own computers. Because as inference costs come down for OpenAI, their customers also get access to better and better commodity hardware (and also better models to run on it). And commodity models become more and more capable all the time, even if they’re not the best.


It counters the claim that "inference is too damn expensive". You can argue that cheap inference is actually bad for OpenAI because it makes it easier to run models on commodity hardware, but then you're arguing against your point that inference was too expensive for OpenAI to be sustainable. Which is it? Is inference too expensive or too cheap?

For now, OpenAI's models are closed source, so if you find their models offer the best value for your use case, you don't have the option of running it on your own hardware. If a competitor releases better products for cheaper, OpenAI will fail, just like any other company would.


It’s both. For an increasing number of tasks, it’s too cheap. People will be able to do simple things on their own hardware. Things OpenAI would love to charge high margins on. There’s probably an 80/20 rule on the horizon.

And for others, it’s too expensive. The frontier is constantly being pushed, so they can’t stop improving or they will fall behind. Google at least makes their own chips so they can control their costs somewhat.


So you suspect their cheap models have few customers because people prefer to run open source models on their own computers, and their high-end models have either very thin margins over the inference cost or few customers because the costs are too high.

Do you have any evidence for any of this?


Honestly, it's the training costs that will kill them. AFAIK, training costs have not come down anywhere near as much as inference costs.

And the models don't last long. So you have a rapidly depreciating capital asset that you need in order to provide your services, which is not really a recipe for a sustainable business (certainly not with the fat software margins tech companies are used to).


> The models/products aren’t reliable enough.

People aren't reliable enough.

Nature isn't reliable enough.

For most uses, all that is needed is a system to handle cases where it is not reliable enough.

Then suddenly it becomes reliable enough.
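As a minimal sketch of such a system (call_model and is_valid here are hypothetical stand-ins for your LLM call and your domain-specific check):

    def reliable_answer(prompt, call_model, is_valid, retries=3):
        """Validate-and-retry wrapper around an unreliable model."""
        for _ in range(retries):
            answer = call_model(prompt)
            if is_valid(answer):  # e.g. schema check, unit test, regex
                return answer
        return None               # retries exhausted: escalate to a human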


There are different kinds of reliable. There is, for instance, the reliability of failing in predictable ways. These things are unreliable in a way that the other things you listed are not.


Nature is reliable in that we have rules (physics) and practices (engineering) to base our process upon.

People aren't reliable, for a specific value of reliable.

We expect technology (machines, software, AI, whatever) to be deterministically reliable, with known failure modes, and significantly more reliable than humans at what it does, because that's what we rely upon and why we use machines to replace humans at what they do, in ways humans can't: harder, faster, stronger.


A computer can never be held accountable. Therefore, a computer must never make a management decision.


> In almost every company the usage of AI is becoming more and more valuable.

It's certainly becoming more common, and there are lots of people who want it to be valuable, and indeed believe it's valuable.

Personally, I find it about as valuable as a really, really good intellisense. Which is certainly valuable, but I feel like that's way off from the type/quality/quantity of value you're suggesting.


I also find the intellisense aspect of it good, though the price is still too high when my local IDE has been able to do a tenth of that for a long time.

Additionally, LLMs sort of bring back old-school Google mastery: finding the right result quickly instead of wading through junk and SEO spam. That translates to productivity, but we are only balanced out again, because that productivity was lost once SEO spam took off a decade back. I am indifferent about this gain too, as anything with mass adoption tends to devolve into garbage behavior eventually; slowly the gains will again be eaten up.


This is the comment I wanted to write before scrolling down. Information retrieval in general is really improving computer-related tasks, but I think "present" or "visible" is a much better term for it than "valuable".


Why is there a ChatGPT button on my gaming mouse? https://www.yankodesign.com/2025/04/27/razers-first-vertical...

How is this not indicative of a massive bubble?


Where is the additional value coming from if everyone uses the same tool to do the same things?

Your emails, presentations, etc. will all look the same, and, what's worse, so will the emails, presentations, etc. of scammers and phishers.


That's beside the point, though. How are they going to meet shareholders' expectations of future revenue? When AI becomes as expensive as a human, what happens then?


When AI becomes as expensive as a human: lay off humans to keep financing the AI, because it is easy to fire and rehire people but expensive to remove the AI and become independent from complex integrations (more precisely, expensive to leadership egos, since it means acknowledging failure and bad decision-making).

Edit: I am just sharing how our CTO responds to the massive push of AI into everything. Integrating a non-deterministic system has a massive cost, and once the thing is eventually made deterministic, the additional steps add expenses that finally make the entire solution too expensive relative to the benefit. This is not my opinion; I am just sharing how typical leadership hopes to tackle the expense issue.


It seems like the question is whether you believe in cost-based or value-based pricing. The cost of AI at the same level of capability is going down a lot year over year. [1]

If market prices go down with costs, then we see something like solar power where it’s everywhere but suppliers don’t make money, not even in China.

Or maybe customers spend a lot more on more-expensive models? Hard to say.

[1] https://simonwillison.net/2025/Feb/9/sam-altman/


At these companies it becomes more and more forced, but not more and more valuable.

Please make a distinction between what people say and what can be measured.


I noticed there is not a single number in your comment.



