Hacker News

If he was building a compute device specifically for LLM inference, it would have helped to check in advance what that entails, such as the GPU requirement. Putting a bunch of RPis in a cluster doesn't help with that one bit.

Maybe I'm missing something.


