
Nope. On cloud it's faster than conversational speed, so you can skim ahead to the right place or quickly regenerate when you see it's going down the wrong path.


I take that to mean it's basically so slow as to be useless, and that you would in fact need a MacBook to run this model locally.

This whole thread has become a bit frustrating, with comments about how other laptops are competitive with a MacBook. But no one can actually name a single laptop that can do all the things a MacBook can.


I just replied to someone who said it couldn't run on a Windows laptop, showing how it can. Yes, it's below reading rate, but both are below the skimming rate that's more useful for LLMs. I would rather have a 96GB MacBook than a Windows laptop if my only use case were local LLMs and I didn't care about all the hoops around the sealed system volume, signed-app troubles, etc. that are bringing MacBooks closer to iPads/Chromebooks, or about the many other things that are better with Nvidia GPUs.
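
As a rough sanity check on that rate comparison (the words-per-token and wpm figures here are ballpark assumptions of mine, not measurements):

    # Rough check: does a given decode speed clear reading/skimming rate?
    # Assumed constants: ~0.75 words per token, ~250 wpm reading,
    # ~700 wpm skimming -- all ballpark.
    WORDS_PER_TOKEN = 0.75
    READING_WPM = 250
    SKIMMING_WPM = 700

    def words_per_minute(tokens_per_sec):
        return tokens_per_sec * WORDS_PER_TOKEN * 60

    for tps in (5, 10, 30):
        wpm = words_per_minute(tps)
        print(f"{tps} tok/s ~ {wpm:.0f} wpm "
              f"(reads ok: {wpm >= READING_WPM}, skims ok: {wpm >= SKIMMING_WPM})")

On those assumptions, ~10 tok/s clears a reading rate but not a skimming rate, which is the distinction I'm making.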

For a desktop I'd still rather have a Threadripper or similar, which has more competitive CPU memory bandwidth, is upgradable, and can run multiple GPUs.
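
The bandwidth point as a back-of-envelope sketch: batch-1 decoding has to stream all active weights once per token, so tok/s is roughly bandwidth divided by model size. The bandwidth and model figures below are my rough assumptions, not benchmarks:

    # Bandwidth-bound estimate for single-stream decoding:
    # tok/s ~= memory bandwidth / bytes of active weights per token.
    def est_tokens_per_sec(bandwidth_gb_s, active_params_billions, bytes_per_param):
        model_gb = active_params_billions * bytes_per_param
        return bandwidth_gb_s / model_gb

    # Illustrative hardware figures (assumptions -- check spec sheets):
    for name, bw_gb_s in [("M-series Max MacBook (~400 GB/s)", 400),
                          ("Threadripper, 8-ch DDR5 (~330 GB/s)", 330),
                          ("Typical laptop, 2-ch DDR5 (~90 GB/s)", 90)]:
        # Example: a 70B dense model at ~4.5 bits/param (~0.56 bytes/param)
        print(f"{name}: ~{est_tokens_per_sec(bw_gb_s, 70, 0.56):.1f} tok/s")

On those assumed numbers a Threadripper lands in the same ballpark as the MacBook while a dual-channel laptop doesn't, which is why I keep coming back to memory bandwidth.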



