
Eh, OpenAI is too cheap to beat at their own game.

But there are a ton of use-cases where a 1B–7B-parameter fine-tuned model will be faster, cheaper, and easier to deploy than a prompted or fine-tuned GPT-3.5-sized model.

In fact, it might be a strong claim, but I'd argue that most current use-cases for (non-fine-tuned) GPT-3.5 fall into that bucket.
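To make the "cheaper" part concrete, here's a rough back-of-envelope comparison between paying per token and renting a GPU to serve a small model. Every number in it (API prices, batched throughput, GPU hourly rate) is an illustrative assumption, not a quote:

```python
# Back-of-envelope: per-token API pricing vs. self-hosting a ~7B model.
# All prices and throughput figures below are illustrative assumptions.

def api_cost(requests, in_tokens, out_tokens,
             in_price_per_1k=0.0015, out_price_per_1k=0.002):
    """Total cost of serving `requests` calls through a per-token API."""
    return requests * (in_tokens / 1000 * in_price_per_1k
                       + out_tokens / 1000 * out_price_per_1k)

def self_hosted_cost(requests, out_tokens=200,
                     tokens_per_sec=1000, gpu_hourly=1.10):
    """GPU-hour cost to generate the same traffic, assuming a batched
    aggregate throughput of `tokens_per_sec` output tokens/sec."""
    gpu_seconds = requests * out_tokens / tokens_per_sec
    return gpu_seconds / 3600 * gpu_hourly

reqs = 1_000_000  # hypothetical monthly volume
print(f"API:         ${api_cost(reqs, 500, 200):,.0f}")   # $1,150
print(f"Self-hosted: ${self_hosted_cost(reqs):,.0f}")     # $61
```

The gap depends entirely on utilization: the self-hosted number assumes you can keep the GPU busy with batched requests. At low, spiky volume the per-token API wins, which is the "too cheap to beat" point above.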

(Disclaimer: currently building https://openpipe.ai; making it trivial for product engineers to replace OpenAI prompts with their own fine-tuned models.)


