
Given the popularity, activity, and pace of innovation on /r/LocalLLaMA, I do think models will keep improving. Likely not at the pace we see today, but those people love tinkering: the community is mostly enthusiasts with a budget for a fancy garage setup, plus independent researchers and smaller businesses doing research there.

These people won't sit still, and models will keep getting better as well as cheaper to run.



No one on LocalLlama is training their own models from scratch. They're working with foundation models like Meta's Llama and tweaking them in various ways: fine-tuning, quantizing, RAG, and so on. There's a limit to how much improvement can be made that way; the basic capabilities of the foundation model still constrain what's possible.
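A minimal sketch of that kind of tweak, assuming the Hugging Face transformers and peft libraries: attaching LoRA adapters to a frozen Llama checkpoint. The checkpoint name and hyperparameters here are illustrative assumptions, not a specific LocalLlama recipe.

    # Illustrative LoRA fine-tuning setup; the base checkpoint is an assumption.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "meta-llama/Llama-2-7b-hf"  # assumed foundation checkpoint
    tokenizer = AutoTokenizer.from_pretrained(base)
    model = AutoModelForCausalLM.from_pretrained(base)

    # LoRA trains small low-rank adapter matrices while the base weights
    # stay frozen, which is why the foundation model's capabilities still
    # bound what fine-tuning can achieve.
    config = LoraConfig(
        r=8,
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # typically well under 1% of weights

The adapters touch only a tiny fraction of the parameters, which is what makes this cheap enough for a garage setup, but it is also why the base model sets the ceiling.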



