
I never thought people would be upscaling models by increasing quantization precision. The rationale makes sense, but it's also a goofy outcome.


You should be able to upscale and fine-tune to recover performance, I suppose!
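A minimal sketch of what that "upscaling" amounts to in practice, assuming symmetric int8 quantization with per-channel scales (the function and tensor names here are hypothetical, not from any particular library): you dequantize the weights into a trainable float dtype, then fine-tune from there to recover quality.

```python
import torch

def upscale_quantized_weight(q_weight: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Dequantize an int8 weight matrix to bf16 so it can be fine-tuned.

    q_weight: int8 tensor of shape (out_features, in_features)
    scale:    per-output-channel scale, shape (out_features, 1)
    """
    # The "upscaled" weight is just scale * q cast to a float dtype.
    # It carries no new information by itself; fine-tuning afterwards
    # is what recovers the lost precision/performance.
    return (q_weight.to(torch.float32) * scale).to(torch.bfloat16)

# Toy usage: fake a quantized layer, upscale it, and make it trainable.
q = torch.randint(-128, 127, (4, 8), dtype=torch.int8)
s = torch.rand(4, 1) * 0.01
w = torch.nn.Parameter(upscale_quantized_weight(q, s))  # ready for fine-tuning
```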

Clearly we should train a diffusion model to denoise the weights of LLM transformer models. Yo dawg.



