
You can run a text-to-image model on a consumer GPU, whereas you need a cluster of GPUs to run a model with GPT-3's capabilities. Also, DALL-E 2 is quite inefficient, so it was quickly surpassed by latent diffusion models.
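
Rough back-of-envelope numbers make the gap concrete. This is only a sketch: the parameter counts below are approximate publicly reported figures, and the estimate counts just the fp16 weights, ignoring activations, optimizer state, and any KV cache:

    # Back-of-envelope sketch: weights-only memory at fp16 (2 bytes per parameter).
    # Parameter counts are approximate public figures, not exact values.

    def fp16_weights_gb(params_billions: float) -> float:
        """Approximate memory needed just to hold the weights in fp16."""
        return params_billions * 1e9 * 2 / (1024 ** 3)

    models = {
        "Stable Diffusion v1 (UNet + text encoder + VAE)": 1.1,  # ~1.1B params
        "DALL-E 2 (prior + decoder + upsamplers)": 5.5,          # ~5.5B params (approx.)
        "GPT-3": 175.0,                                          # 175B params
    }

    for name, billions in models.items():
        print(f"{name}: ~{fp16_weights_gb(billions):.0f} GB for weights alone")

    # Rough output:
    #   Stable Diffusion ... : ~2 GB   -> fits on a consumer GPU
    #   DALL-E 2 ...         : ~10 GB  -> high-end consumer card territory
    #   GPT-3                : ~326 GB -> needs multiple data-center GPUs

Even before you account for inference overhead, GPT-3-class weights alone are two orders of magnitude larger than a latent diffusion model's.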

