> We've all heard complaints about GPT-3.5 Turbo, particularly when compared to its successor, GPT-4, seemingly struggling to follow instructions. Guess what? In our experience, this is a non-issue with a properly fine-tuned GPT-3.5 Turbo model. In fact, GPT-4 can serve as the "prompt engineer" that assists in generating the training data.
This omits a very important detail: a fine-tuned gpt-3.5-turbo costs 8x as much as the base gpt-3.5-turbo, and the output is not 8x better, especially once you apply, you guessed it, prompt engineering (such as gpt-3.5-turbo's function calling/structured data support, which is prompt engineering at its core).
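To illustrate the function-calling point: you get structured output from the base model by passing a JSON-schema function definition, which the API renders into the model's context for you. A minimal sketch of the request shape (the `extract_order` function and its fields are hypothetical; the payload layout follows OpenAI's Chat Completions `functions` parameter):

```python
import json

# Hypothetical function schema; the shape matches the Chat Completions
# "functions" parameter (name, description, JSON-schema "parameters").
extract_order = {
    "name": "extract_order",
    "description": "Extract structured order details from a customer message.",
    "parameters": {
        "type": "object",
        "properties": {
            "item": {"type": "string", "description": "Product name"},
            "quantity": {"type": "integer", "description": "Number of units"},
        },
        "required": ["item", "quantity"],
    },
}

# The schema is injected into the model's context server-side -- i.e. it is
# prompt engineering under the hood: the model is steered toward emitting
# JSON matching the schema, no fine-tuning required.
request_payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Two lattes please"}],
    "functions": [extract_order],
    "function_call": {"name": "extract_order"},  # force this function
}

print(json.dumps(request_payload, indent=2))
```

Sending this payload to the chat completions endpoint would return a `function_call` whose `arguments` field is JSON conforming to the schema, which is often all people actually wanted fine-tuning for.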
It also glosses over the fact that fine-tuning a model well is genuinely hard to do.