The AI isn’t the product. The ChatGPT interface, for example, is the main product, layered on top of the core AI tech.
The issue is that trustworthiness isn’t solvable by applying standard product management techniques on a predictable schedule. It requires scientific research.
Wow, it really does have that sound. I'm trying to figure out exactly what about the phrasing feels that way, though at the same time it doesn't seem as uniformly broken up as most GPT answers are. Of course, it could also be an edited combination.
Yes, the size is different, but training a diffusion model and training a language model are really different things, similar to how RL models can be small but still take a long time to train as well.