
That's not bad actually, because the bottleneck in training is RAM, and we can have 128 GB...


If you’re training a model that needs 128 GB, GPU speed is going to be a “bottleneck” too.
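
For a rough sense of why compute rather than memory tends to dominate here, a hedged back-of-envelope sketch. It uses the common ~6·N·D FLOPs approximation for training a dense transformer with N parameters on D tokens; the parameter count, token count, and sustained throughput below are purely hypothetical numbers, not measurements of any particular chip.

    # Back-of-envelope training compute estimate (illustrative assumptions only).
    # Uses the common ~6 * N * D FLOPs approximation for training a dense
    # transformer with N parameters on D tokens.

    def training_time_days(n_params: float, n_tokens: float,
                           device_flops: float, utilization: float = 0.4) -> float:
        """Rough wall-clock estimate in days for one training run."""
        total_flops = 6 * n_params * n_tokens
        effective_flops_per_s = device_flops * utilization
        return total_flops / effective_flops_per_s / 86_400

    # Hypothetical numbers: a 7B-parameter model trained on 100B tokens, on a
    # single accelerator sustaining ~30 TFLOP/s at 40% utilization.
    print(f"{training_time_days(7e9, 100e9, 30e12):.0f} days")  # ~4051 days with these assumptions

Under those assumptions a single slow accelerator takes years regardless of how much RAM sits next to it, which is the point being made here.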


I believe that Apple Silicon is still too slow for training. The big RAM is really useful for inference of big models, though.
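
To put rough numbers on the training-vs-inference distinction, a minimal sketch assuming the commonly cited ~16 bytes/parameter for mixed-precision Adam training (fp16 weights and gradients, fp32 master weights, two fp32 optimizer moments, before activations) and ~2 bytes/parameter for fp16 inference; the model sizes are just examples.

    # Back-of-envelope memory comparison (illustrative assumptions only).
    # Mixed-precision Adam training is often estimated at ~16 bytes/param,
    # before activations; fp16 inference needs ~2 bytes/param plus KV cache.

    GiB = 1024 ** 3

    def train_mem_gib(n_params: float, bytes_per_param: int = 16) -> float:
        return n_params * bytes_per_param / GiB

    def infer_mem_gib(n_params: float, bytes_per_param: int = 2) -> float:
        return n_params * bytes_per_param / GiB

    for n in (7e9, 70e9):
        print(f"{n/1e9:.0f}B params: train ≈ {train_mem_gib(n):,.0f} GiB, "
              f"fp16 inference ≈ {infer_mem_gib(n):,.0f} GiB")
    # 7B params: train ≈ 104 GiB, fp16 inference ≈ 13 GiB
    # 70B params: train ≈ 1,043 GiB, fp16 inference ≈ 130 GiB

On those assumptions, a 128 GB machine comfortably serves a 7B model and can serve a 70B model with quantization, while full training of the same models needs several times more memory on top of far more compute.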



