Actually I have the opposite take. This is largely a play to procure compute capacity (and I suspect, distribution via Google Cloud), and I think Dario wildly underestimated the amount of demand they would see.

I always wondered why Anthropic was not out there feverishly scrambling to procure compute like the other big players. While Altman was being laughed at as a "podcasting bro asking for trillions in investment," Dario was on Dwarkesh expounding on how tricky it is to predict the demand for capacity. Now Dario has to give equity to a competitor to get compute. (OpenAI does this too, of course, but I suspect its terms are much better.)

At this point, it's pretty clear that compute is the only moat in this business. Even as an outsider, the extreme demand curves and compute crunch were painfully obvious, so this seems like a serious strategic error on Dario's part.


My personal theory, reflected in the points in TFA but not quite pinpointed there, is that it is due to smartphones and the overall media environment (not just social media). Specifically:

Smartphones enable unprecedented levels of reach as well as content personalized to you... as decided by The Algorithm. Media organizations and social media influencers discovered that ragebait gets clicks, which generates revenue. This also explains why news articles overall are very negative, as TFA points out. This is what influences The Algorithm.

This is all that is needed. Consider:

1. The psychological harms of social media are very well understood, as often shown in Meta's own leaked reports. But the discussion has focused on youths, because "think of the children" (which is actually justified here), and that overshadows the harm to the general population.

2. Elon and Twitter. 'Nuff said.

3. Beyond public channels, there is even more negativity in private message groups like WhatsApp and Telegram, which are invisible from the outside. I've seen a lot of large influence campaigns and disinformation flow through those channels without ever making the news. Which also means that fact-checking is not a thing there.

4. The countries where happiness is rising have two main (mostly mutually exclusive) traits:

a) They have low inflation (from TFA: Portugal, Italy, Spain). Maybe this is sufficient to overcome the effects of a negative media environment.

b) They are largely authoritarian states (from TFA: China, India, Vietnam) where the media environment is heavily controlled, so the constant media narrative is "Things have never been better!" (Though the cracks are showing in India: people will tolerate this only as long as things are good, and genuine dissatisfaction is breaking through the narrative barrier, since "fake it til you make it" does not work for national economies. I suspect cracks will show in China too if the gravy train comes to an end there.)

5. The lockdown from the pandemic was probably just the impetus that drove more people to their smartphones and got them hooked into this cycle of negativity.

So basically people have been inundated, via public and private channels, with constant waves of negativity and disinformation. Even the "positivity" is stuff like social media influencers portraying unrealistic, luxurious lifestyles ("a day in the life of a PM at a tech company"). This further breeds resentment in people even if their own lives are actually getting better.

In my tinfoil hat mode, I even suspect the global media environment is heavily manipulated to sow dissatisfaction and cause instability (hence the "vibecession") as a form of economic warfare. ("We will take America without firing a shot. We do not have to invade the U.S. We will destroy you from within." - Khrushchev, maybe)

But Occam's Razor says good old capitalism is a sufficient explanation.


No, the argument is that they want to sell more product to more people, not just more product (to the same people). Given that a lot of their income is from flat-rate subscriptions, they make more money from more people burning tokens than from the same people burning more tokens.

After all, "the first hit's free" model doesn't apply to repeat customers ;-)
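To put toy numbers on that flat-rate argument (a minimal sketch; the price, cost, and usage figures below are all hypothetical):

```python
# Toy unit economics of a flat-rate subscription. Every number is made up
# purely for illustration.
PRICE_PER_MONTH = 20.0     # flat subscription fee, $/user/month (assumed)
COST_PER_M_TOKENS = 2.0    # inference cost, $/million tokens (assumed)

def monthly_profit(users: int, m_tokens_per_user: float) -> float:
    """Profit = flat-rate revenue minus token-burn costs."""
    revenue = users * PRICE_PER_MONTH
    cost = users * m_tokens_per_user * COST_PER_M_TOKENS
    return revenue - cost

print(monthly_profit(1_000, 5.0))  # 1,000 users, moderate burn -> 10000.0
print(monthly_profit(1_000, 9.0))  # same users, heavier burn   ->  2000.0
print(monthly_profit(2_000, 5.0))  # twice the users            -> 20000.0
```

Under flat pricing, doubling the user base doubles profit, while heavier per-user burn only eats margin.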


I'm convinced a lot of the backlash against AI is driven by LinkedInfluenza posts like this one. I don't see such unhinged AI hype anywhere else... but then I'm not on Twitter.

> The transition Apple and Tim Cook announced today is entirely different. No one’s hand was forced.

I don't follow Apple very closely, but given this is coming right after the AI leadership shakeup and at a time when Apple's AI story is being debated, the thought did pop into my mind...

This reminds me of Ballmer leaving Microsoft. Strictly by the numbers, he was a very good steward of the company at the time, but for various reasons (in his case, at least partially related to optics) he was considered unsuitable to lead Microsoft into its cloud era, so he left, and a lot of house got cleaned in the process.

I honestly don't know what the best AI story is for Apple, but I appreciate that they are pushing the envelope on on-device inference, however under-utilized it may be at the moment. I think this is going to be essential to keeping AI widely accessible in the long term, because everyone else is incentivized to try to lock it up in their data centers.


They (and other AI players) have been using WAU over DAU for all their metrics, and many have questioned why. But if you look at other data sources on AI adoption, the reason is clear: even though 56% of Americans now "regularly" use GenAI on a weekly basis, a much smaller share, 10-14%, use it on a daily basis. Here's one source, but others show similar numbers: https://www.genaiadoptiontracker.com/

56% is much more impressive than 14%.

This may look bad until you consider that all of them are already desperately strapped for compute. I think the lower DAU is due to a combination of that and people still figuring out how to use AI.
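As a back-of-the-envelope check (the 56% and 10-14% figures come from the tracker above; the stickiness framing is my own):

```python
# Implied DAU/WAU "stickiness": the share of weekly users active on any
# given day. Input percentages are from the cited tracker.
weekly_pct = 56.0                # weekly GenAI users (%)
for daily_pct in (10.0, 14.0):   # daily GenAI users, low and high end (%)
    stickiness = daily_pct / weekly_pct
    print(f"{daily_pct:.0f}% daily / {weekly_pct:.0f}% weekly "
          f"-> {stickiness:.0%} stickiness")
# ~18-25%: most weekly users skip any given day, which is one reason
# WAU is the flattering headline metric.
```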


It still takes 3-5 years or more even for that incremental progress. It takes years just to catch up with the field! Do we expect PhD candidates to subsist on barely livable wages until they eventually publish a ground-breaking result? That kind of disincentive to even starting a PhD would not be conducive to progress at all.

Yes, most PhD theses are scientific and commercial dead-ends (even more reason not to gate the degree on ground-breaking results!) but they do serve to cull the problem space, and that's exactly why we need more of them. In fact we should even provide some incentives to publish negative results in academia.


> We're seeing exactly the same thing with AI, as there is massive investment creating a bubble without a payoff.

> ...

> And so far there's no evidence that all this investment has generated more profit for the users of AI.

If you look around a bit, you will find evidence for both. Recent data finds pretty high success in GenAI adoption even as "formal ROI measurement" -- i.e. not based on "vibes" -- becomes common: https://knowledge.wharton.upenn.edu/special-report/2025-ai-a... (tl;dr: about 75% report positive ROI.)

The trustworthiness, salience, and nuances of this report are worth discussing, but unfortunately reports like this get no airtime in the HN and media echo chambers.

Preliminary evidence, but given that this weird, entirely unprecedented technology is about 3+ years old and people are still figuring it out (something the report calls out), this is significant.


75% report positive ROI (and the VPs are much more "optimistic" than the middle managers who are closer to the work) - but how much ROI? 1%? The fact that they don't quote a figure at all is pretty telling. And that's the ROI of the people buying the AI services, which are often heavily subsidized. If it costs a billion dollars to give a mid-sized company a 1% ROI, that doesn't sound sustainable.

I would love to see another report that isn't a year old with actual ROI figures...


It’s not easy to quantify because you’re basically substituting or augmenting labor. How do you quantify an ROI on employees? You can look at profit of a project they’re hired to execute. But with AI, it’s mixed with the employees, so how do you distinguish the ROI of the two? With time, we might be able to make comparisons, but outside of very specific scenarios it’s difficult to quantify.
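A toy sketch of that attribution problem (the production function and every number here are invented for illustration):

```python
# Observed output is a joint function of labor and AI assistance, so the
# AI's marginal contribution is unidentified without a counterfactual.
# The functional form and all numbers below are made up.
def output(labor_hours: float, ai_assist: float) -> float:
    return 100 * labor_hours ** 0.7 * (1 + ai_assist) ** 0.3

with_ai = output(40, 1.0)
without_ai = output(40, 0.0)  # the counterfactual you almost never observe
print(f"Implied AI lift: {with_ai / without_ai - 1:+.0%}")  # ~ +23%
```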

Everyone I’ve seen try has had negative actual ROI.

All the middle managers are afraid to say anything though, so go go go.


Good questions! I have only skimmed through the report, but slide 45 onwards of the full report has some vague numbers: https://ai.wharton.upenn.edu/wp-content/uploads/2025/10/2025...

Can't say why they don't report exact numbers, but it may be because a) of confidentiality, b) ROI is very context-dependent, and c) there is a wide spectrum of ROI along different dimensions, with some 9% even reporting negative ROI. This may make it hard to cite a single number, but the majority report "moderate" to "significant" ROI, whatever that means to them.

I'll add that I've seen mentions of similar reports from other sources like McKinsey and co., e.g. this one that claims an actual revenue increase: https://www.mckinsey.com/featured-insights/week-in-charts/ge... -- I tend not to take these reports at face value, but I'm seeing multiple of them from various sources, and they tend to align.

As an aside, I just wanted to say, these are the kinds of discussions I was hoping to see here!


> The trustworthiness, salience, and nuances of this report are worth discussing, but unfortunately reports like this get no airtime in the HN and media echo chambers.

It honestly just isn't that interesting. (It's most notable for people misunderstanding and misrepresenting the chart on page 46 of the report as "ROI" rather than "ROI measurement.")

In terms of ROI figures, it's really just a survey with the question "Based on internal conversations with colleagues and senior leadership, what has been the return on investment (ROI) from your organization's Gen AI initiatives to date?".

This doesn't mean much. It's not even dubiously measured ROI data; it's not ROI data at all, just what the leadership thinks is true.

And that's a worrying thing to rely on, as it's well documented (and measured by the report's next question) that there's a significant discrepancy between how high-level leadership and low-level leadership/ICs rate AI "ROI".

One of the main explanations for that discrepancy is Goodhart's law. A large number of companies are simply demanding AI productivity as a "target" now, with accusations of "worker sabotage" being thrown around readily. That makes good economy-wide data on AI ROI very hard to get.


That's fair, it is survey-based, but it is apparently grounded in formal internal measurements. The full report (https://ai.wharton.upenn.edu/wp-content/uploads/2025/10/2025... -- slides 43 onwards) mentions that 75% of them have "integrated formal ROI measurement."

There is little discussion of what that means, however. But we really can't expect concrete numbers for what is going to be sensitive business data, and given that the report tracks it across multiple industries and functions ranging from IT to operations to legal to sales, it may be hard to put into sensible numbers, or to say how the measurements may be flawed or biased.


They're not looking for solutions, they're capitalizing on the AI backlash. It's just the new form of rageviews.

The only saving grace is that this is less cynical than typical rageviews, considering they have something of a point: they are going to be negatively impacted by the same technology that was trained on their content without compensation.


I suspect the cause-and-effect in creating the narrative is the reverse of what's in the narrative: Frank Herbert wanted the intricate dynamics of the Guild and Spice and Mentats, and exciting close-quarters combat, for a more intriguing narrative. But AI and robots would have made all of those obsolete, so he made them disappear with a handwave of "because Butlerian Jihad."

I always thought the Butlerian Jihad was the biggest plot hole in Dune, but I deeply appreciate the world and narrative it enabled.

