Sometimes it feels like Google are so far ahead in AI, but all we get to see are mediocre LLMs from OpenAI. Like they're not sharing the really good stuff with everyone.
I'm inclined to believe OpenAI's claim that they have better models that are simply too expensive to serve to everyone.
I think Google have only trained what they feel they need, rather than a megamodel, but I can't justify this view other than as a general feeling. They obviously know enough to build excellent models, so I doubt they're behind in any meaningful sense.
This has been said by enough people in the know to be considered true by now, and not just at OpenAI; Anthropic and Meta have said the same. You train the best model you can, then use it to distill/curate/inform the next training run, aimed at something that makes sense to serve at scale. That's how you get from GPT-4 / o3 prices ($80/$60 per Mtok) down to GPT-5 prices ($10/Mtok) and GPT-5 mini ($2/Mtok).
Then you use a combination of your best models to amplify and enhance the training set for the next iteration, and repeat the process at generation n+1.
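For anyone unfamiliar with what "distill into the next run" means in practice, here's a minimal sketch of classic teacher→student logit distillation (in the style of Hinton et al., 2015). This is not any lab's actual pipeline; the model names, hyperparameters, and loss weighting are illustrative assumptions.

```python
# Hedged sketch: the big "teacher" produces soft targets that a smaller,
# cheaper "student" is trained to match, blended with ordinary hard labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend temperature-softened teacher targets with ground-truth labels."""
    # Soft-target term: KL divergence between softened distributions,
    # scaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: standard cross-entropy against the true tokens.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Usage (illustrative): run the expensive teacher once per batch without
# gradients, then train the cheap student against the blended loss.
#
# with torch.no_grad():
#     teacher_logits = teacher(batch)   # big, slow, never served at scale
# loss = distillation_loss(student(batch), teacher_logits, labels)
# loss.backward()
```

The economics follow from the sketch: the teacher only has to run during training, so its cost is amortized once, while the student is what actually gets served per-token at scale.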