
I’d much rather have it be slower, more expensive, but smarter


Depends what you want it for. I'm still holding out for a decent enough open model; Llama 3 is tantalisingly close, but inference speed and cost are serious bottlenecks for any corpus-based use case.


I think that might come with the next GPT version.

OpenAI seems to build in cycles: first they focus on capabilities, then they work on driving the price down (occasionally with some quality degradation).


Then the current offering should suffice, right?



