This is written by someone who's not an AI researcher, working with tiny models on toy datasets. It's at the level of a motivated undergraduate student in their first NLP course, but not much more.
If one can easily reach parity with a motivated undergrad by leveraging LLMs, I will still consider it impressive.
While the 5-minute model will never be useful in itself, it lays the groundwork for amateurs and small groups to get into developing small models. There's currently another HN headline hyping up a tiny model that scores impressively on the ARC-AGI benchmarks, so it's clearly not a dead end to explore what "household-affordable" models can do.
Though an approach that doesn't lean on the author's $200/month OpenAI subscription would've been more interesting to follow.