Hacker News

Your reply gives me the impression that you are not up to date with reinforcement learning. If you were, you would know that the author really understands this domain and was not merely being tautological.

"Specialized at being human" - this is a deep intuition. We are reinforcement learning agents that are pre-programed with a certain number of reward responses. We learn from rewards to keep ourselves alive, to find food, company and make babies. It's all a self reinforcing loop, where intelligence has the role of keeping the body alive, and the body has the role of expressing that intelligence. We're really specialized in keeping human bodies alive and making more human bodies, in our present environment.

The author puts a hard limit on intelligence because intelligence is limited by the complexity of the problems it needs to solve (assuming the agent has sufficient ability). So the environment is the bottleneck. In that case, an AGI would be like an intelligent human, a little better than the rest, not millions of times better.



It has nothing to do with any particular model of learning, let alone of intelligence in general. From the point of view you have expressed here, it seems a little surprising that "specialized at being human" includes things like calculus and topology. How do you include them in a definition of the "being human" specialty without turning it into a vacuous category, one that says the specialization is everything humans have shown themselves capable of, nothing more and nothing less?

If it were valid, one could take the argument in your last paragraph to draw a line at any point in the evolution of intelligence and say "this is as good as it gets."


What are humans specialized in doing? Because it seems to me that humans are pretty good at chess, calculus, social manipulation, flying to the moon, building machines that take us to the bottom of the ocean, discovering fundamental physics, etc. A fish, no matter what environment and upbringing you give it, can't do any of those things. So it seems like there's some dimension in which the human brain is more generally intelligent than a fish's.


That dimension is still on a thin film around a little ball floating in one of a great number of possible universes within a great number of possible rule systems. Compared to that space, we are quite similar to fish, in terms of the purposes for which our machinery functions.

But the question is not, "is intelligence explosion possible?" The question is, "explode into what?"



