Hacker News

Interesting work. Not super familiar with neural architecture search, but how do they ensure they're not overfitting to the test set? It seems like they're evaluating each candidate model on the test set and using that score to direct future evolution. I get that human teams often do the same thing informally, but wouldn't the overfitting be magnified a lot by doing thousands of iterations of this?
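To make the concern concrete, here's a toy sketch (not the paper's actual setup; all names and numbers are made up). Each candidate's measured score is its true quality plus independent finite-sample noise from whatever fixed set you evaluate on. If the search selects the candidate with the best score on one fixed set, the winner is partly chosen for lucky noise on that set, so its reported score is optimistic compared to a fresh, untouched set:

```python
import random

def selection_gap(seed, n_candidates=5000):
    """Return (winner's score on the selection set) minus
    (winner's score on a fresh set) for one simulated search."""
    rng = random.Random(seed)
    # True quality of each candidate, plus independent noise from the
    # fixed selection set and from an untouched test set.
    quality = [rng.gauss(0, 1) for _ in range(n_candidates)]
    sel_noise = [rng.gauss(0, 1) for _ in range(n_candidates)]
    fresh_noise = [rng.gauss(0, 1) for _ in range(n_candidates)]

    sel_scores = [q + n for q, n in zip(quality, sel_noise)]
    best = max(range(n_candidates), key=sel_scores.__getitem__)
    # The winner was picked partly for lucky sel_noise, so this gap
    # is positive on average: the selection-set score overstates it.
    return sel_scores[best] - (quality[best] + fresh_noise[best])

if __name__ == "__main__":
    gaps = [selection_gap(s) for s in range(100)]
    print(sum(gaps) / len(gaps))  # average optimism of the winner
```

The gap grows with the number of candidates tried, which is why thousands of automated iterations against one set is a bigger worry than a handful of manual ones. The usual mitigation is the standard train/validation/test split: select on validation, touch the test set only once at the end.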

