Damn, you guys are toxic. So, no, they did not invent AGI yet. Still, I like what I'm seeing. Major progress on multiple fronts. Hallucination fix is exciting on its own. The React demos were mindblowing.
This reaction didn't emerge in a vacuum, and toxicity flows both ways. In the tech field we've been continually bombarded for 2+ years with claims that this tech is going to change the world and replace us, delivered with such a level of drama that becoming a cynic seems like the only way to stay sane.
So, if sama spends months saying this is going to be totally revolutionary, uploads a Death Star reference the night before, and then the tech they show off isn't as good as promised, laughter is the only logical conclusion.
When companies pitch this tech to investors as a way to terminate us and get rid of our jobs, then we, the people whose uptake of this tech their revenue goals depend on, are naturally skeptical of it and have a vested interest in seeing it fail to meet expectations.
Yeah, when it becomes cool to be anti-AI, or anti-anything on HN for that matter, the takes start becoming ridiculous. If you just think back a couple of years, or even a few months, to where we were then versus where we are now and you still can't see it, I guess you're just dead set on dying on that hill.
I'm extremely pro-AI; it's what I work on all day for a living now. But I don't see how you can deny there is some justification for people being so cynical.
This is not the happy path for gpt-5.
The table in the model card, where every model in the current drop-down somehow maps to one of the 6 variants of gpt-5, is not where most people thought we would be today.
The expectation was consolidation on a highly performant model, more multimodal improvements, etc.
This is not terrible, but I don't think anyone who's an "accelerationist" is looking at this as a win.
Update after some testing: This feels like gpt-4.1o and gpt-o4-pro got released and wrapped up under a single model identifier.
4 years ago people were amazed when you could get GPT-3 to write 4chan greentexts. Now people are unimpressed when GPT-5 codes a working language learning app from scratch in 2 minutes.
Oh, a working language learning app? Like one of the hundreds that have been shown on HN in the past 3 years? And only demonstrated as some generic single-word translation game?
How are they mindblowing? This was all possible on Claude 6 months ago.
> Major progress on multiple fronts
You mean marginal, fraction-of-a-percent progress on a couple of fronts? Because it sounds like we are not seeing the same presentation.
> Yet, I like what I'm seeing.
Most of us don't
> So -- they did not invent AGI yet.
I am all for constant improvements and iterations over time, but at this pace of marginal, tweak-like changes, they are never going to reach AGI. And yes, we are laughing because sama has been talking big on AGI for so long, and even with all the money and attention he hasn't been able to get even remotely close to it. Same for Zuck's comments on superintelligence. These are just salesmen, and we are laughing at them when their big words don't match their tiny results. What's wrong with that?
When the CEOs of these companies keep talking about how everyone is going to be jobless (and thus homeless) soon, what do you expect? It's merely schadenfreude in the face of hubris.
It's not about being toxic, it's about being honest. There is absolutely nothing wrong with OpenAI saying "we're focused on solid, incremental improvements between models with each one being better (slightly or more) than the last."
But up until now, especially from Sam Altman, we've heard countless veiled suggestions that GPT-5 would achieve AGI. A lot of the pro-AI people have been talking shit for the better part of the last year saying "just wait for GPT-5, bro, we're gonna have AGI."
The frustration isn't about the desire to achieve AGI; it's about the never-ending gaslighting that tries to convince people (really, investors) that there's more than meets the eye, that we're only ever one release away from AGI.
Instead: just be honest. If you're not there, you're not there. Investors who don't do any technical evals may be disappointed, but long-term, you'll have more than enough trust and goodwill from customers (big and small) if you don't BS them constantly.
LLMs are incredibly capable and useful, and OpenAI has made good improvements here. But they're incremental improvements at best - nothing revolutionary.
Meanwhile Sam Altman has been making the rounds fearmongering that AGI/ASI is right around the corner and that clearly is not the truth. It's fair to call them out on it.