
Interesting thought, if nothing else. Unless I misunderstand, it would be easy to run a study to see if this is true: use the API to send the same prompt with slight variations (so as to avoid caching), one that has a definite answer, then run it once per hour for a week and see whether the accuracy oscillates.
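A minimal sketch of the proposed study. `query_model` is a hypothetical stand-in for a real API client (swap in your provider's call); the base prompt and nonce scheme are illustrative assumptions.

```python
import random
import string

BASE_PROMPT = "What is 17 * 23? Answer with the number only."
EXPECTED = "391"

def perturbed_prompt(base: str) -> str:
    """Prepend a random nonce so each request misses any prompt cache."""
    nonce = "".join(random.choices(string.ascii_lowercase, k=8))
    return f"[run {nonce}] {base}"

def query_model(prompt: str) -> str:
    """Placeholder for the real API call; always answers correctly here."""
    return EXPECTED

def run_trial(n_prompts: int = 10) -> float:
    """One hourly trial: send n perturbed prompts, return the accuracy."""
    correct = sum(
        query_model(perturbed_prompt(BASE_PROMPT)).strip() == EXPECTED
        for _ in range(n_prompts)
    )
    return correct / n_prompts

# In the real study: schedule run_trial() once per hour for a week
# (168 data points) and test whether accuracy varies with time of day.
```

With a stubbed `query_model` every trial scores 1.0; against a live API the per-hour accuracies are what you would plot and test for oscillation.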


Yes, good idea - although it appears we would also have to account for the possibility of providers nerfing their models. I've read that others also suspect models are being quantized after a while to cut costs.




