
What I find more important is the ability to get reproducible results for testing.

I do not know about other LLM providers, but Cohere allows setting a seed value. With the same seed, the same prompt will always give you the same result (unless, of course, the model gets an update).

OTOH I would guess that they normally just generate a random seed server-side when processing a prompt, and how random the output really is then depends on their random number generator.
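The seed mechanism is easy to illustrate with a toy sampler (this is just a sketch of the idea, not Cohere's actual API -- the distribution and function names are made up):

```python
import random

def sample_tokens(probs, seed, n=5):
    """Toy 'decoder': draw n token ids from a fixed distribution.

    Seeding the RNG makes the whole sequence reproducible -- the same
    idea behind an API-level seed parameter. The distribution here is
    hypothetical, not a real model's output.
    """
    rng = random.Random(seed)
    token_ids = list(range(len(probs)))
    return [rng.choices(token_ids, weights=probs)[0] for _ in range(n)]

probs = [0.5, 0.3, 0.2]
a = sample_tokens(probs, seed=42)
b = sample_tokens(probs, seed=42)
assert a == b  # same seed, same "completion"
```

If the server instead draws a fresh seed per request, you get a different sequence each time, which is exactly the non-reproducibility described above.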



That's the expected behavior when you run an LLM locally with a fixed seed and the temperature set to zero.
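Why temperature zero matters: it turns sampling into a greedy argmax, so the RNG (and hence the seed) never comes into play. A minimal sketch of that decoding step, with made-up logits:

```python
import math
import random

def pick_token(logits, temperature, rng):
    """Pick one token id from raw logits.

    At temperature 0 this is a greedy argmax -- fully deterministic,
    the RNG state is irrelevant. At temperature > 0 it samples from a
    softmax, so results are reproducible only with a fixed seed.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    return rng.choices(range(len(logits)), weights=weights)[0]

logits = [1.0, 3.5, 2.0]  # hypothetical model output
# Greedy decoding: always token 1, whatever the RNG state.
assert pick_token(logits, 0, random.Random(0)) == 1
assert pick_token(logits, 0, random.Random(999)) == 1
```

(In practice local runs can still differ across hardware or library versions due to floating-point non-determinism, but seed plus temperature zero removes the sampling randomness itself.)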



