
This is written by someone who's not an AI researcher, working with tiny models on toy datasets. It's at the level of a motivated undergraduate student in their first NLP course, but not much more.


If one can easily reach parity with a motivated undergrad by leveraging LLMs, I still consider that impressive.

While the 5-minute model will never be useful in itself, it lays the groundwork for amateurs and small groups to get into developing small models. There's another HN headline at the moment hyping up a tiny model that scores impressively on the ARC-AGI benchmarks, so exploring "household-affordable" models is clearly not a dead end.

Though an approach that doesn't lean on the author's $200/month OAI subscription would've been more interesting to follow.


You can also reach research parity by downloading a GitHub repository. Is that impressive too?


Downloading a file is not equivalent to having high-level, abstracted control over running software.

And if it is, then I'm a farmer because I bought potatoes from the store.



