
Maybe because "machine learning" isn't all that interesting on its own, while "proper AI" has been taken for granted in science fiction? A sentient AI is much more interesting than a "dumb" ML algorithm that can recognize cat pictures on the internet.

Stanislaw Lem wrote a few really good short stories about machine AIs on the brink of sentience (e.g. robots that should be dumb machines, but show signs of self-awareness and human traits).



Lem has also written about the trouble with black boxes in charge of policy (similar to the paperclip factory problem).

His advice always seemed to be for human civilization to engineer its future more carefully and stop playing fast and loose.


> trouble with black boxes in charge of policy

They are only black boxes if you don't take the time to understand them, and they are not that complicated.

Humans, on the other hand...


Some simple regressions, which are also ML, can be understood completely by human beings. You can even calculate their results manually. This doesn't mean their models are always a good fit, or that the world never changes, especially when impacted by the feedback loops of ML.
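For illustration, a minimal sketch of "calculating it manually" (plain Python, no libraries; the data points and variable names are made up): fitting a simple linear regression with the closed-form least-squares formulas, every step of which is ordinary arithmetic you could redo on paper.

    # Ordinary least squares for y = a + b*x, step by step.
    xs = [1.0, 2.0, 3.0, 4.0]
    ys = [2.1, 3.9, 6.2, 8.1]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope = covariance(x, y) / variance(x)
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    # Intercept follows from the means.
    a = mean_y - b * mean_x
    print("y =", a, "+", b, "* x")

Every intermediate number here can be traced back to the data by hand, which is exactly the kind of transparency that disappears once a model has millions of parameters.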

Nobody can claim to fully understand models with millions or billions of parameters. We know they are "overfit", but in certain scenarios they may work much better than hand-crafted rules. So we end up with "it depends", and then someone starts profiting, with real-world implications.


Depending on what you mean by "understand", it seems that some ML models are already beyond human understanding.



