Altman mentioned that GPT-4.5 is the model code-named "Orion", which was originally supposed to be their next big model, presumably GPT-5, but showed disappointing improvements on benchmarks. Apparently the AI companies are hitting diminishing returns with the paradigm of scaling up foundation-model pretraining. This was discussed a few months ago:
https://news.ycombinator.com/item?id=42125888