I think a better title would be "Big Tech is struggling to add real capabilities and improve productivity despite slapping LLMs into every one of their existing apps", but it just doesn't roll off the tongue.
For the bubble to burst, investors would need to stop believing in the potential of AI in the near-to-medium term. I don't think we're quite there yet and, if we are, the article doesn't really support that claim.
There's still room for other companies to innovate and create real value with LLMs. But innovation is usually better done by challengers who have nothing to lose than incumbents who are, as the article correctly points out, trying to signal "trust me, we can innovate" to their (non-VC) investors.
I mean, research didn't end after the last six or so AI bubbles collapsed, either; this pattern goes back to the 1960s. The bubble referenced is the financial/economic bubble which currently exists around AI. I think saying it's bursting is probably _slightly_ premature, but "people refuse to pay for the Big New Thing, we must force it upon them" _is_ usually a sign that the end is nigh.
There's a tremendous amount of useful things you can do with Transformers and diffusion models. It's kind of fascinating how companies are unable to turn those straightforward things into businesses and instead think they should run commercials insisting people will want an AI to write heartfelt thank-you letters or do creative tasks for them. Whoever is driving these initiatives is so lacking in aesthetic and cultural awareness it's insane.
Just think about it: a 1% error rate means 1 in 100 customers gets some wrong information. They go to the place trusting it, only to find out the AI lied to them. If you have 1,000 or 10,000 customers using the system, you now potentially have 10 or 100 one-star reviews... And this might be from just answering simple queries like a restaurant's menu or opening times.
No decently coded chatbot is going to respond with an incorrect restaurant menu or opening time. You'd call a function to return the menu from a database or the opening time from a database. At worst, the function fails, but it's not going to hallucinate dishes.
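For what it's worth, here's a minimal Python sketch of that pattern. All the names here (MENU_DB, handle_tool_call, etc.) are hypothetical; the point is just that the model picks a function to call, and the facts come from a lookup, not from generation:

```python
# Hypothetical sketch of the function-calling pattern described above.
# MENU_DB, OPENING_HOURS, and handle_tool_call are made-up names; any
# LLM SDK with tool/function calling plugs into the same shape.

MENU_DB = {"margherita": 9.50, "quattro formaggi": 12.00}
OPENING_HOURS = {"mon-fri": "11:00-22:00", "sat-sun": "12:00-23:00"}

def get_menu() -> dict:
    """Return the authoritative menu straight from the database."""
    return MENU_DB

def get_opening_hours() -> dict:
    """Return the authoritative opening hours from the database."""
    return OPENING_HOURS

TOOLS = {"get_menu": get_menu, "get_opening_hours": get_opening_hours}

def handle_tool_call(name: str, arguments: dict) -> dict:
    # The model only decides *which* tool to call; the returned data
    # is injected verbatim into its reply. At worst the lookup fails
    # loudly here -- it never hallucinates dishes or opening hours.
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](**arguments)
```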
Exactly this, even for internal use. Our corp approved a small project where a NN does the analysis of nightly test runs (our test suite runs very long). For now it classifies results into several existing broad categories. Product-type failures are usually the most important, and this should let us focus our efforts on them. But even a 1% false rate (in real life it's actually in the double digits) means that we, the QAs, need to verify all the results anyway. So no time is saved, and this NN software is, eh... useless.
There are other ideas for how to make it more useful, but my point is that a non-zero failure rate with unpredictable answers is not acceptable in many domains.
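To make the problem concrete, here's a hypothetical sketch of that triage loop (the categories, keywords, and threshold are all invented, and the real system presumably uses a NN rather than keyword matching). The punchline is the threshold: with a double-digit false rate, essentially everything falls back into the manual queue anyway:

```python
# Hypothetical sketch of the nightly-run triage described above.
# Category names, keywords, and the 0.99 threshold are made up for
# illustration only.

CATEGORIES = {
    "product_failure": ("assertion failed", "wrong result"),
    "infra_failure": ("connection refused", "timeout"),
    "test_bug": ("fixture missing", "setup error"),
}

def classify(log: str) -> tuple[str, float]:
    """Return a (category, confidence) guess for one failing test log."""
    for category, keywords in CATEGORIES.items():
        hits = sum(kw in log for kw in keywords)
        if hits:
            return category, hits / len(keywords)
    return "unknown", 0.0

def triage(logs: list[str], threshold: float = 0.99):
    # The commenter's point, in code: a result only skips human review
    # if confidence clears a very high bar. With an error rate in the
    # double digits, almost every log lands in the manual queue, so the
    # classifier saves no QA time.
    auto, manual = [], []
    for log in logs:
        category, confidence = classify(log)
        (auto if confidence >= threshold else manual).append((log, category))
    return auto, manual
```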
Yeah, a 1% error rate (IME in practice the error rate is _much_ higher than this if you care about detail, but whatever) just won't fly in most use cases. You're really talking about stuff which doesn't matter at all (people are rarely willing to pay very much for this) or where its output is guaranteed to be reviewed by a human expert (at which point, in many cases, well, why bother).
Curious: what useful things are you looking to work on right now? I'd love to learn more and help out.
In my opinion, this AI development stalemate is more layered. Big companies set such broad targets in the race to catch up with OpenAI that they lose focus on real use cases. So the loudest voices, the ones good at navigating internal politics, end up in a good spot to push their own ambitions over actual customer needs or technical practicality. They set goals that sound just a bit more exciting than their peers', which pulls resources their way. But the focus shifts to chasing KPIs rather than drilling into real problems. Even when they know going smaller is smarter, knowing and doing are two different things.
It’s still a great time for small AI startups. My favorite kind is a team that quickly learns a business’s needs and iterates toward the right interaction points to help. I think that just by staying focused on solving a lot of small, related problems very fast, you can create something that feels like a real solution.
If I were a small business, I would not be excited about having a magic robot 'hallucinate' things about my business. Like, this is the company that brought you glue on pizza.
I think the main issue is that the average consumer does not know what to do with a raw transformer model.
While the base technology is now there and is rapidly improving, a lot of the "glue" and "plumbing" is still missing. What is the best way to integrate these tools into our normal workflows / daily lives and so on?
It will take time...
Articles like the one above are not very useful as they completely miss the big picture.
>Plus, we know that Google knew I wouldn't like this because they didn't tell me they were going to do it. They didn't update their docs when they did it, meaning all my help documentation searching was pointless because they kept pointing me to when Gemini was a per-user subscription, not a UI nightmare that they decided to force everyone to use.
These are paying customers. This isn't a case of "you're the product." Yet Google chooses not to be good stewards of their customer base. The level of contempt big tech has for users really is something.
Good read - I use ChatGPT almost every day but a lot of my friends only tried it when it came out, weren't impressed and haven't been back since. I don't know any non-tech friends who pay for it.
> It just doesn't seem like those applications matter enough to normal people to actually pay anyone for them.
I thought this was basically the core point supporting the conclusion, but I don't think people really want to pay for anything, which is why everything is ad supported. I don't think you can say people don't want AI just because they don't want to pay 20/month or w/e for it.
The piece is about Google Workspace, a paid service. Gemini was initially an add-on for it. Gemini apparently had low uptake from this already-self-selecting group of customers who are indeed willing to pay for stuff.
But rather than going back to the drawing board to make it more useful/appealing, they increased everyone's base subscription and made Gemini "free"; you know, a feature that paying customers demonstrably didn't value enough to pay for.
I paid 20/month for a few months with ChatGPT but stopped because you could get basically the same for free. If there were no free options I might pay but when free versions are pretty much forced upon you there's not much compulsion.
> I don't think you can say people don't want AI just because they don't want to pay 20/month or w/e for it.
But do people want AI that's rigged to constantly recommend Shopping Like A Billionaire at Temu™, either? Because that's the alternative if people won't pay.
Right, but companies which have already invested a gazillion dollars into AI aren't going to entertain the idea that users simply don't want AI at all.
In the real world, judges I know are using it to do case summaries that used to take weeks, Goldman is using it to do 95% of IPO filing work, and I personally am using o1 pro to write a ton of code.
AI's biggest use cases are for doing actual work, not necessarily replacing regular interactions with your mobile or entertainment devices.
No, it's not, not right now at least. But I concede it might burst as the internet bubble did 20 years ago and then continue on. It won't have as much of an impact as the internet did, imho, but on that point I could be very wrong.
The free AI in my phone isn't good enough to even improve the dictation recognition rate; it's as bad as it was two years ago. Really useful AI costs money and costs privacy (you can't run it locally on the phone).
I don’t know if the bubble framing is particularly helpful.
Every new technology and medium (AI has features of both) goes through a period where people try to unsuccessfully apply it to old paradigms before discovering the new ones that make it shine. Motion picture cameras seemed like a goofy fad for decades before people finally understood the unique potentials of the medium and stopped trying to just film vaudeville stage shows.
TLDR: Google tries to push Gemini (and requires money to turn it off) and therefore nobody wants any kind of AI for any reason.
This, despite the fact that the other AI product (the one everyone talks about, i.e. the $200 ChatGPT tier) is too successful (meaning people use it too much and the price should be higher, or there should be more tiers).