
Maybe accidental, but I feel you’ve presented a straw man. We’re not discussing something that _may be_ better. It _is_ better. It’s not as big an improvement as previous iterations have been, but it’s still improvement. My claim is that reasonable people might still ship it.


You’re right, and... the real issue isn’t the quality of the model or the economics (even when people are willing to pay up). It’s the scarcity of GPU compute. This model in particular is sucking up a lot of inference capacity. They are resource constrained and have been wanting more GPUs, but there are only so many going around (demand is insane and keeps growing).


It _is_ better in the general case on most benchmarks. There are also very likely specific use cases for which it is worse, and very likely that OpenAI doesn't yet know what all of those are.



