No one can tell you much about that. Interpretability is still very poor.

You don't know what they learn beforehand (otherwise deep learning wouldn't be necessary), so you have to try to figure it out afterwards.

But learned parameters aren't beholden to any sort of "explainability rule": there's no guarantee anything is wired up in a way humans can comprehend. And even if it were, you're potentially looking at hundreds of billions of parameters.
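To make the scale concrete, here's a back-of-the-envelope count for a GPT-3-sized transformer (the 96-layer / 12288-width config is from the published GPT-3 paper; the "12·d²" per-layer approximation is a common rough estimate that ignores biases and layer norms):

```python
# Rough parameter count for a GPT-3-scale transformer.
n_layers = 96      # transformer blocks
d_model = 12288    # hidden width
vocab = 50257      # BPE vocabulary size

# Each block: ~4*d^2 for attention (Q, K, V, output projections)
# plus ~8*d^2 for the 4x-wide MLP, i.e. ~12*d^2 per layer.
per_layer = 12 * d_model ** 2
total = n_layers * per_layer + vocab * d_model  # plus token embeddings

print(f"~{total / 1e9:.0f}B parameters")  # prints "~175B parameters"
```

That's on the order of 10^11 individual weights, none of which is obligated to correspond to any human-legible concept.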


