For running the model once it’s been trained, all a datacenter does is give you lower latency. Once devices have enough memory to host the model locally, the need to pay datacenter bills is going to be questioned. I’d rather run OpenClaw on my device plugged into a local LLM than rely on OpenAI or Claude.
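The memory question is easy to estimate back-of-envelope: weight storage is roughly parameter count times bytes per parameter. A minimal sketch (the function name is hypothetical, and this ignores KV cache and runtime overhead):

```python
def model_memory_gb(params_billions: float, bits_per_param: int) -> float:
    """Rough RAM/VRAM needed just to hold the weights.

    Ignores KV cache, activations, and runtime overhead, which add more
    on top of this floor.
    """
    bytes_per_param = bits_per_param / 8
    return params_billions * 1e9 * bytes_per_param / 1e9

# A 70B-parameter model quantized to 4 bits needs ~35 GB for weights alone.
print(model_memory_gb(70, 4))   # 35.0
# A 7B model at full 16-bit precision needs ~14 GB.
print(model_memory_gb(7, 16))   # 14.0
```

By this estimate, a 4-bit 70B model already fits in the unified memory of some consumer machines today, which is why local inference keeps getting more plausible.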