
> Yes, the big companies are making the models, but enough of them are open weights that they can be fine tuned and run however you like.

And how long is that going to last? This is a well-known playbook at this point, and we'd be better off not falling for it yet again. Sooner or later they'll lock the ecosystem down, take all the free stuff away, and demand to extract the market value from the work they used to "graciously" provide for free to build an audience and market share.



How will they do this?

You can't take the free stuff away. It's on my hard drive.

They can stop releasing them, but local models aren't going anywhere.


They can't take the current open models away, but those will eventually (and, I imagine, rather quickly) become obsolete for many areas of knowledge work that require relatively up-to-date information.


What are the hardware and software requirements for a self-hosted LLM that is akin to Claude?


Llama 3.3 70B, after quantization, runs reasonably well on a 24GB GPU (7900 XTX or 4090) plus 64GB of regular RAM. Software: https://github.com/ggerganov/llama.cpp.
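
As a rough sketch of what running it locally looks like, here's how you might load a quantized GGUF through the llama-cpp-python bindings. The model filename, quant level, and layer split are assumptions, not a recommendation; tune n_gpu_layers so the offloaded layers fit in your 24GB of VRAM and the rest spills into system RAM.

    # Minimal sketch: load a quantized Llama 3.3 70B GGUF with llama-cpp-python
    # (pip install llama-cpp-python). File path and settings below are assumptions.
    from llama_cpp import Llama

    llm = Llama(
        model_path="llama-3.3-70b-instruct-q4_k_m.gguf",  # hypothetical local GGUF file
        n_gpu_layers=40,   # offload as many layers as fit in 24GB of VRAM
        n_ctx=8192,        # context window
    )

    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": "Summarize llama.cpp in one sentence."}]
    )
    print(out["choices"][0]["message"]["content"])

The same GGUF file also works with the plain llama.cpp CLI or its built-in server if you'd rather skip Python entirely.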



