Hacker News

Nah, Apple made the right choice. Nobody except a niche market of hobbyists is interested in running tiny quantized models.



Small models keep getting smarter and local hardware keeps getting better.

At some point they will converge, and an inflection point for local LLMs will arrive. Local LLMs will never be as smart or fast as cloud LLMs, but they will be very useful for lower-value tasks.


About the same niche market as the people who bought the Apple I, and we know where that went.

The Apple I was a pretty poor predictor of what mainstream mass-market computing was going to end up looking like. I don't think anybody has yet come up with the Apple II of local LLMs, let alone the VisiCalc or Windows 95.


