
I think we need to separate theory from practice. In theory, it can edit the training loop and come up with novel techniques. That is interesting.

In practice, the vast majority of the changes the automated research system actually made would have been found much faster with Bayesian optimization (BO), given a sensible parameterization of the search space. You don't need an LLM to find a better batch size or learning rate.
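To make the comparison concrete, here's a minimal BO sketch over log learning rate. Everything here is hypothetical: `val_loss` is a toy stand-in for an expensive training run (with its minimum placed near lr = 1e-3), and the loop is a bare-bones GP-plus-expected-improvement implementation, not any particular library's tuner.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Toy objective: pretend this is validation loss after a full training run.
# In reality each call would take minutes to hours; BO's point is to need few calls.
def val_loss(log_lr):
    return (log_lr + 3.0) ** 2 + 0.1 * np.sin(5 * log_lr)

rng = np.random.default_rng(0)
bounds = (-5.0, -1.0)                           # search log10(lr) in [1e-5, 1e-1]
X = rng.uniform(*bounds, size=(3, 1))           # a few random warm-up evaluations
y = np.array([val_loss(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(15):
    gp.fit(X, y)
    cand = rng.uniform(*bounds, size=(256, 1))  # random candidate pool
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.min()
    # Expected improvement (for minimization): favors points that are
    # either predicted to be good (low mu) or highly uncertain (high sigma).
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, val_loss(x_next[0]))

best_log_lr = X[np.argmin(y), 0]
print(f"best log10(lr) ≈ {best_log_lr:.2f}")
```

Eighteen total evaluations of the objective, no language model involved; for a smooth, low-dimensional knob like learning rate, this kind of loop typically lands near the optimum in tens of trials.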




I’d always hoped something like this could take advantage of FPGAs directly

FPGAs won't rebuild fast enough for it to matter versus software simulation, I'd wager. Even FPGA-in-CPU has been a dream for decades, and there you have more time to spare for some workloads; it still never became commercially viable for general computing.

There was research a few years back that tried something like this with an FPGA, and they found that the evolved design actually exploited defects in the particular chip they were using (not the model, the single specific physical chip), relying on electrical interference for computation in a way that shouldn't have worked on paper. They could not reproduce the design on another FPGA of the same model from the same lot.


