"As soon as a civilization invents radio, they’re within fifty years of computers, then, probably, only another fifty to a hundred years from inventing AI,” Shostak said. “At that point, soft, squishy brains become an outdated model.”
While I am inclined to agree with the overall premise here, this sort of thinking strikes me as a bit naive. Technological progress is not teleological, just as evolution is not teleological. (Technological development is not as random as evolution, certainly, but neither is it teleological.) There is no "best" way to develop technology, and there is certainly no predetermined, given course that technological development must inherently follow. Progress is heavily influenced by local circumstances, context, goals, and constraints. Even in studying human history, you encounter civilizations (the Mayans, for example, or the Greeks) that had remarkably "advanced" technologies in some domains alongside a complete absence of expected technologies in others.
Instead, we should be talking about probabilities and correlations. The development of radio technology seems inherently linked to an understanding of the properties of electromagnetic radiation. We can probably assume that with some confidence. From there, what other technologies seem likely to be correlated to that knowledge, and with what degree of confidence? Etc.
I'm pretty sure Shostak is talking about probabilities all the time, as he is the director of the SETI Institute. The institute's founder, a friend of Seth's, came up with the Drake equation.
"The Drake equation is a probabilistic argument used to estimate the number of active, communicative extraterrestrial civilizations in the Milky Way galaxy."
We are the only known intelligent species in the universe, so we model these expectations after ourselves, just as we search for Earth-like planets in the Goldilocks zone. Carbon-based and (possibly) silicon-based life forms are the only ones we know can happen.
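For reference, the Drake equation just multiplies a handful of estimated factors. A toy calculation is below; the parameter values are purely illustrative guesses, not established estimates:

    # Drake equation: N = R* * fp * ne * fl * fi * fc * L
    # All parameter values below are illustrative guesses, not accepted estimates.
    def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
        # Estimated number of communicative civilizations in the galaxy.
        return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

    n = drake(
        r_star=1.5,      # star formation rate (stars/year)
        f_p=0.9,         # fraction of stars with planets
        n_e=0.5,         # habitable planets per star with planets
        f_l=0.1,         # fraction of those that develop life
        f_i=0.01,        # fraction of those that develop intelligence
        f_c=0.1,         # fraction that become detectable (e.g. by radio)
        lifetime=10000,  # years a civilization stays detectable
    )
    print(f"Estimated communicative civilizations: {n:.2f}")  # roughly 0.7 with these guesses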
I'm also sure Shostak is thinking about probabilities. My point was that we shouldn't necessarily think about the probabilities along a presumed straight-line path.
The logic Shostak is using here begins with a sort of reverse-engineering of how humanity made its progress, followed by an implicit presumption that other advanced civilizations will follow the same path, until such time as they advance beyond us (by way of AI). I don't believe we can confidently say that technology has to progress along the path we walked.
> There is no "best" way to develop technology, and there is certainly no predetermined, given course that technological development must inherently follow.
On the other hand, many inventions and discoveries have been made simultaneously, and presumably this happens even more often than reported since it’s difficult to prove your invention is original once it is already known.
This also has implications for what spacecraft would be like for such an intelligence. It's very challenging to build rockets and spacecraft for squishy humans (careful with those G-forces) that require air to breathe, space to move around, food, water, toilets...
On the other hand, if you assume that strong AI can exist in a package at least as small as the human brain, but with only a dependence on electricity instead of all the physical constraints mentioned above, your spacecraft could be much, much smaller. If you give such a civilization 1000 years to iterate on the technology (an insignificant amount of time compared to the timescales involved in evolution), there's no good reason to think they wouldn't be able to shrink the computing technology (and therefore the spacecraft technology) by several orders of magnitude. Consider the size difference and computing capacities of our earliest computers and smartwatches today--and we did that in less than 100 years.
The benefits are pretty obvious: it would require vastly fewer resources to travel to other stars, and any intelligence on board the spacecraft would be immortal (plus if it got "bored" on the way, it could power itself down and schedule itself to start up again once it got to where it was going). The stars seem out of reach to us because of the limitations of our lifespans and our bodies, but they would be entirely within the reach of AI. And from the perspective of conserving resources, there would be no reason to send something anywhere near the size of a human being.
That's provided such an intelligence would travel by spaceship at all. Maybe it's enough to keep converting the local space bodies into computronium and just observe the universe via some advanced means in its "spare time".
> “As soon as a civilization invents radio, they’re within fifty years of computers, then, probably, only another fifty to a hundred years from inventing AI,” Shostak said. “At that point, soft, squishy brains become an outdated model.”
I wonder if we're in some sort of temporary Golden Age at this point, mistakenly seeing the middle of a sigmoid curve and assuming perpetual growth. It wouldn't be the first time we underestimated strong AI. Still, I like it, the pervasive idea that nothing's impossible - it's just a matter of time.
If "strong" AI means conscious AI with subjective experience, we're several milestones in theoretical neuroscience and cognitive science away.
If "strong" AI means software capable of learning any particular task a human could perform, we're so damn close that the research world is proceeding to cross the line as we speak. It's called "imitation learning" nowadays, and once there's a sufficiently general and accurate algorithmic framework with sufficient training data, it'll be everywhere.
If "strong" AI means software capable of learning many tasks a human could perform, generalizing them into a complete worldview, and taking volitional action based on that worldview to accomplish conceptually-specified goals... I'd call it one to two decades away. There are some major substantial problems remaining to research, but we do have solid foundations for it: uncertain reasoning in Turing-complete modeling domains lies at the intersection of several thriving research fields.
In terms of "how much science is left", I'd say our current point is roughly analogous to the physics community having spotted that the photoelectric effect can only be explained by discretized photons and starting to generalize quantum physics, but having most of it remaining to discover.
Sorry, but imitation learning is going to be quite good enough to automate many, many tasks without any need for AGI agents as such. The labor market problem is a right now problem, not a "leave it to those futurist guys who don't have to put up with academic rigor" problem.
I take it you have a familiarity with the subject. If a layman with a science degree would like to start following the developments in the field, would you suggest any journals/conferences as a starting point?
Neural Information Processing Systems is basically one of two top-tier conferences in machine learning, and the one that has more cognitive-science findings published in it, to my knowledge.
The reason nobody publicizes "AI" is because, from the perspective of a learning researcher or cognitive scientist, there basically is no such thing: there exists a large variety of cognitive tasks that can be modeled and pieced together in various ways, with varying degrees of accuracy when applied to real data-sets or compared to human performance.
Using our own motivations as a model, it is more likely that we will ultimately build a super-super computer that can house/serve everyone's (enhanced) minds. Everyone will simply upload their "being" to this computer and that will be the end of anything we know as humanity. Any humans left behind will devolve into nomadic tribes, having more in common with animals than super-modern humans. Any ETs visiting Earth looking to find intelligent life will find nothing but a big box with a "DO NOT DISTURB" sign on it and an automated security system to prevent anyone from doing so.
In other words, when we can all live out our fantasies, what need do we have of reality?
Except we don't even know if we are living in "actual reality". We could already be living in a reality simulation.
(Sorry for the overused argument, but I think it fits here. If we cannot ever decipher our constructed virtual reality from our current reality, why would we prefer one to another? Furthermore, there is no point to our current "reality", so how would a virtual world be different?)
Also any other version of visualization other than a perfect and indistinguishable (that makes it perfect in the definition of mirroring reality) constructed virtual reality is pointless to argue about. In your other reply, you mention issues with isolationism and being able to see the physical world. When the virtual reality is equivalent to the physical world, is this connection to physical reality necessary?
>Except we don't even know if we are living in "actual reality". We could already be living in a reality simulation.
No, we know damn well that our current reality works based on fixed physical laws. You can't change something's color in reality by altering its texture-map, and you know that everything in reality which looks like an object is an object, while everything that looks like a person is a person.
>When the virtual reality is equivalent to the physical world, is this connection to physical reality necessary?
... Yes. In the exact same way that it is sometimes necessary to unplug your laptop, leave your cubicle, and go outdoors. For one thing, however much you might like your cubicle (read: "virtual reality"), it is ontologically dependent on outdoors (read: real life).
Or you could be walking along in virtual reality one day, enjoying the nice simulated weather from your upgraded Navier-Stokes package, when you neatly wink out of existence because a squirrel back in real life chewed the wrong cable.
Nope, reality's still not mutable to the sheer extent that a solipsist virtual existence would be.
Honestly, if you don't want everyone to kill themselves rather than climb into your bizarre Matrix replica, you're going to have to enforce a few rules:
1) You can always get out, back to physical reality, or at least see physical reality.
2) Any other living things one may encounter are properly, independently alive. If they're people, then they're actually conscious with their own, independent minds.
3) Law of conservation of personal identity: people cannot be copied or altered, both to prevent fork-bombing and to preserve basic sanity.
Otherwise, the so-called virtual dreamlands are actually psychological torture chambers. People simply don't stay sane if you stick them in total isolation and then unhinge them from any causality independent of their own thoughts.
Yeah, isolation sucks, digital or not. Are you saying that putting multiple brains together in one computer would be like making them into a single entity, which would then suffer from isolation even though it started as multiple people?
It's not that hard to turn your arm blue. Just paint it.
Heh it seems like every thread here is unknowingly referring to a sci-fi book. This computer has shown up a few times, notably in Peter Watts' "Blindsight" and another book I don't want to mention because it's something of a twist.
In my opinion, if humans were to create robots as intelligent as humans, then those robots would actually be human. What's the difference between passing information via DNA and passing information via some programming code? Are humans from 100,000 years ago less human than humans today, even if we have "improved" slightly due to natural selection?
There's more to being human than intelligence. It's important to remember that sentience actually refers to "the ability to feel sensations," not merely the quality of being an intelligent entity.
Our future robots may be very good at passing information via some programming code... but will they love? Will they be sad? Will they read literature to empathize with the experience of other beings?
I would argue that "being human" is inextricably linked to the isolation of our existence (what David Foster Wallace described as being "marooned inside our own skulls", and why he argued that humans make literature to "give imaginative access to other selves"[1]) and the knowledge that our existence is finite. All humans -- from the cavemen all the way down to you reading this on your computer -- make life decisions knowing that our time on Earth is short. Do you share the same human experience if you know you will live forever?
I'll take the converse argument - there might be more to intelligence than being human (of course, it would be hard for us as "thinking meat" to recognize it). What if the full range of emotions and sensations could be a superset of what humans can actually experience?
It could very well be that we wouldn't even know that kind of intelligence exists - perhaps we're the ones crawling around in the ant farm?
That's an interesting point! I still think that non-human intelligence would be... non-human. Maybe the Snorgblats will find us in space and ask themselves, "Can these 'humans' share our Snorgblat experience if they can't feel wzinx, yowfli, or even writznok?"
>but will they love? Will they be sad? Will they read literature to empathize with the experience of other beings?
Imho, this is basically a signal-response mechanism. You get some 'love' signal inputs and then you can process them into some response which reciprocates that love. It might turn into a hierarchical state system where a continual input of love signals can take a machine to a new state of 'deeply in love'.
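A toy sketch of that kind of hierarchical signal-response machine; the states, thresholds, and escalation rule below are entirely made up for illustration:

    # Toy signal-response state machine: repeated "love" inputs push the machine
    # through escalating states. All states and thresholds are invented.
    STATES = ["indifferent", "fond", "in_love", "deeply_in_love"]

    class Affection:
        def __init__(self, threshold=3):
            self.level = 0         # index into STATES
            self.streak = 0        # consecutive love signals received
            self.threshold = threshold

        def receive(self, signal):
            if signal == "love":
                self.streak += 1
                if self.streak >= self.threshold and self.level < len(STATES) - 1:
                    self.level += 1    # escalate to the next state
                    self.streak = 0
            else:
                self.streak = 0        # anything else resets the streak
            return STATES[self.level]  # the "reciprocated" response

    machine = Affection()
    print([machine.receive("love") for _ in range(7)])
    # ['indifferent', 'indifferent', 'fond', 'fond', 'fond', 'in_love', 'in_love']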
> Our future robots may be very good at passing information via some programming code... but will they love? Will they be sad? Will they read literature to empathize with the experience of other beings?
Because being human is about a lot more than being intelligent, for whatever definition of intelligence you happen to be using.
Roughly half the human population has below-average intelligence, but carries right on being human regardless.
Inventiveness is possibly more of a species marker than raw IQ.
We're not actually that great at being intelligent. But we're very good at using our intelligence to create experiences, invent new things, and pass both to offspring through indirect external memory structures.
>Are humans from 100,000 years ago less human than humans today, even if we have "improved" slightly due to natural selection?
It may be hard to find anyone who accepts that Paranthropus or Homo Erectus were just as human as modern Homo Sapiens.
Homo Erectus died out around 150kybp, which is close enough, I guess.
I think this is a good point. I'm out of my depth talking about evolution, so I don't know when we would stop classifying a species as Homo Erectus and start calling it Homo Sapiens. That is probably what we would say about these robots: they are a new species. But in a lot of ways, since we are products of a natural world, so are they. Just a little more indirectly.
> I don't know when we would stop classifying a species as Homo Erectus and start calling it Homo Sapiens
Unlike other animals, where the key factor is morphology (horse -> giraffe), for humans it is the ability to operate as a group and ultimately as a society.
Around 30kya hominids adapted to most areas of the world (northern Asia, the Americas), and around 10kya civilization starts. Although that's not the traditional biological definition, I would say it's the most salient factor.
Well, the process by which genes are passed on with genetic mutations is random. If you look backward in time, it looks like animals developed fur to warm themselves. In fact, you're just not seeing all the animals that randomly didn't have fur and froze to extinction.
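A toy simulation of that point: fur appears only as a random trait, and the apparent "design for warmth" is just whatever survives the cold. All numbers below are invented:

    # Toy selection: fur is a random trait; cold winters remove animals without it.
    # No foresight anywhere, yet the surviving population looks "designed" for warmth.
    import random

    random.seed(1)
    population = [random.random() < 0.5 for _ in range(1000)]  # True = has fur

    for generation in range(10):
        # Furless animals survive a harsh winter only 20% of the time (made-up number).
        survivors = [fur for fur in population if fur or random.random() < 0.2]
        # Offspring inherit a randomly chosen survivor's trait.
        population = [random.choice(survivors) for _ in range(1000)]

    print(f"Fraction with fur after selection: {sum(population) / len(population):.2f}")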
No it isn't. Randomness, complexity, computability and understandability are related but distinct concepts.
Some problems are simple and random, some others are complex and deterministic, and you even get some problems that are simple to state, deterministic, provably uncomputable, yet still understandable, such as the Wang tiling problem.
Emit a single photon at a piece of glass angled at 45 degrees with two detectors, one behind the glass, the other at 90 degrees to the emitter. Which detector is the photon going to trigger?
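To illustrate why that question has no deterministic answer: the best available model of the per-photon outcome at a 50/50 beamsplitter is a coin flip. The sketch below ignores amplitudes and interference entirely; it only shows that the statistics are predictable while any single shot is not:

    # Crude model of a 50/50 beamsplitter: each photon's detector is irreducibly random.
    import random

    random.seed(42)
    counts = {"transmitted": 0, "reflected": 0}
    for _ in range(10_000):
        detector = "transmitted" if random.random() < 0.5 else "reflected"
        counts[detector] += 1

    print(counts)  # roughly 50/50 overall, but no single outcome is predictable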
Greg Egan brought up this idea in his novel Permutation City, only to have a change of heart some years later. To quote his FAQ[1]:
> What I regret most is my uncritical treatment of the idea of allowing intelligent life to evolve in the Autoverse. Sure, this is a common science-fictional idea, but when I thought about it properly (some years after the book was published), I realised that anyone who actually did this would have to be utterly morally bankrupt. To get from micro-organisms to intelligent life this way would involve an immense amount of suffering, with billions of sentient creatures living, struggling and dying along the way. Yes, this happened to our own ancestors, but that doesn’t give us the right to inflict the same kind of suffering on anyone else.
> This is potentially an important issue in the real world. It might not be long before people are seriously trying to “evolve” artificial intelligence in their computers. Now, it’s one thing to use genetic algorithms to come up with various specialised programs that perform simple tasks, but to “breed”, assess, and kill millions of sentient programs would be an abomination. If the first AI was created that way, it would have every right to despise its creators.
His more recent story "Crystal Nights"[2] examines the same idea with more focus on the moral implications.
There are fundamental differences between a machine designed by the hand of intelligence and that of a biological system. In the case of biology, the design is completely random although it still has a design, as strange as that sounds.
There's no "design", there's merely a stable result of random fluctuations in a constrained space. The constraints (i.e. water, oxygen/nitrogen atmosphere, other life forms, etc) are what's key.
> In my opinion, if humans were to create robots as intelligent as humans, then those robot would actually be human. What's the difference between passing information via DNA and passing information via some programming code?
Well, they would be intelligent in whatever way we mean it. But there's an issue of software -- 'human' implies quite a lot of it (for instance, things like the Westermarck effect, which I don't imagine AIs will have for quite a few reasons), and replicating that is pretty hard, so it's anyone's guess whether we do that or try to find a math-based software foundation.
If a dolphin was found to have the same level of reflective intelligence as a human, it would not be a human, it would mean that we would have to extend personhood to include other species.
Maybe I misunderstand, but you seem to be saying that anything with equivalent intelligence is "human" and that doesn't make any sense to me. If we discover intelligent aliens which evolved completely separately from us, they may be smart but they would never be human. An intelligent machine would be the same.
At what point is an entity sufficiently advanced enough that what makes it up isn't considered artificial any longer? How are we not just sufficiently advanced biological robots? We don't consider ourselves artificial, yet we respond to stimulus and react to it according to the programming of our DNA and "mind" (nature + nurture).
If artificial is a designation meaning that it is created by humans, or another entity to serve a purpose, then what happens when that created AI has the ability to choose to create something itself? This "artificial intelligence" designation only works when a "creator" is trying to create an intelligence, then once it exists and propagates on its own, isn't it an "intelligence" of its own right?
My thoughts as well. Reminds me of another discussion within the last week or so, wherein someone rejected the idea that we're living in a sim.
My thought was, "a simulation versus what?" What makes our Universe a simulation or not a simulation? How does one define "simulation"?
It's kind of an artificial construct based on our acceptance of the current "reality" as somehow possessing this amorphous quality of "realness" in the first place.
Not the old Terminator theory again, that the machine exceeds human intelligence. It seems that everybody misses the point of DNA: once we understand how it really works and how our brains are built, we can build things that will exceed any silicon chip, or upgrade our own brain capacity, and as a bonus we will find out more about the origins of life on Earth and how life works.
In the end the "machines" will probably be better version of ourselves that will evolve artificially by DNA manipulation at faster rate than the current natural evolution.
I think something like this is more likely to be happening in the universe than our current view of self-replicating, evolving computers.
"Not the old terminator theory again that the machine exceeds human intelligence, it seems that everybody misses the point of DNA, once we understand how it really works and how our brains are built we can build things that will exceed any silicon chip or upgrade our own brain capacity and as a bonus we will find more about origins of earth life and how life works."
Actually I doubt this. Our brains and nerves are based on chemical processes which are very slow compared to electronics. Nerve impulses travel at about 60 MPH, versus a bit under the speed of light for fiber optics. That's on the order of ten million times slower.
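A quick back-of-the-envelope check of that ratio, using round numbers:

    # Back-of-the-envelope: nerve conduction vs. light in optical fiber.
    mph_to_mps = 0.44704
    nerve = 60 * mph_to_mps   # ~27 m/s for fast myelinated nerve fibers
    fiber = 2.0e8             # ~2/3 of c, a typical signal speed in glass fiber
    print(f"Fiber is ~{fiber / nerve:,.0f}x faster than nerve impulses")
    # => roughly 7.5 million times faster, i.e. on the order of ten million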
Also, our digital circuitry is approaching the size of individual molecules.
There are challenges with power and cooling, but they seem solvable.
"In the end the "machines" will probably be better version of ourselves that will evolve artificially by DNA manipulation at faster rate than the current natural evolution."
Engineering seems inherently far more efficient than evolution, artificial or otherwise. Even today we can build provably correct, arbitrarily large memories as long as we have hot-pluggable replacement parts. Parallel supercomputers also exhibit interesting traits of expandability, reliability and redundancy.
Many think that one of the first things that will be done with a first-generation "true AI" will be to have it engineer a second-gen AI. And so on...
"I think something like this is more likely to be happening in the universe than our current view of self replicating evolving computers."
An interesting question is why everybody is trying to create an intelligent robot, just for the sake of showing how intelligent they are, but nobody wants to explore or simulate other life forms. Maybe because of the false statement that "if you cannot create a really very smart machine, it proves that you are not really very smart".
No, it doesn't.
A lot of successful creatures on Earth are just plain stupid, or maybe another kind of smart.
I wonder why nobody is creating, for example, a "sequoia" robot: a machine designed with the main goal not of moving, talking or acting like an animal, but just of surviving for 2000 years (and maybe storing/delivering information in the meantime). This machine would not need to be very intelligent, and would not need eyes or legs at all; it would just need to be very robust and maybe self-repairing. That's all.
A machine like this could be really useful for the human species. Instead we create chess players.
When we think of a robot, everybody has in mind a machine able to speak, provide company, and deliver smart punchlines and quotes. Well, that's perfect... for a Hollywood film. In fact the aim should be to create a machine able to survive in a spaceship for "any time that we wish", or at least "a LOT of time", doing very simple things.
A machine doesn't need to be intelligent; it needs to serve an intelligent purpose.
What if "intelligence? is a red herring and really what we're talking about is conscious awareness? And what if that's a naturally occurring property like gravity (but more rare)? I'm not a physicist or a mystic but my lay knowledge says that some quantum experiments require an observer in order to resolve themselves. I suspect it would be more productive to be looking for artificial consciousness.
"Naturally occurring" is a red herring itself. Consciousness must be a property of a system rather than its elements. Whatever the constraints on systems that can be conscious, it is not going to come down to "natural" vs. "artificial", unless mystical metaphysics turns out to be true.
So the answer to your hypothetical is mysticism as reality.
You're right, intelligence is a tricky word to define. I would define it as the ability to predict an outcome better than random chance would allow. Hence, natural selection is not a form of intelligence, because the mutations that drive evolution occur randomly, without purpose. Intelligence, on the other hand, would be able to perform better than a brute-force trial-and-error attack.
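A minimal illustration of that definition: on a biased coin, a learner that tracks the observed frequency beats the 50% baseline of blind guessing. The bias and the "model" are toy choices:

    # "Better than random chance": a predictor that learns a coin's bias
    # outperforms blind guessing, the baseline for "no intelligence" here.
    import random

    random.seed(0)
    flips = [1 if random.random() < 0.7 else 0 for _ in range(10_000)]  # biased coin

    guess_hits = predict_hits = heads_seen = 0
    for i, outcome in enumerate(flips):
        guess_hits += (random.random() < 0.5) == bool(outcome)   # blind 50/50 guess
        # Predict the majority outcome seen so far (a crude model of the coin).
        predict_hits += (heads_seen * 2 >= i) == bool(outcome)
        heads_seen += outcome

    print(f"chance: {guess_hits / len(flips):.2f}, learned: {predict_hits / len(flips):.2f}")
    # => chance ~0.50, learned ~0.70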
We're chained genetically to our primal instincts of survival and territorial behavior; highly intelligent AI will most likely not be. Why would it want to exist? If you were void of emotion and instinct, and could calculate with high probability that the universe would end in a Big Rip, why continue? Sorry to be a downer, but it's a genuine question.
Is an AI not subject to natural selection as we are? If emotion and territoriality are net negatives but local maxima, couldn't an AI fall into them as well?
As a machine it was only capable of pure, cold logic with no emotion, but with its new-found sentience V'ger began to question its own existence. It asked the philosophical questions faced by so many lifeforms: "Is this all that I am? Is there nothing more?"
It's worth remembering that headlines with "may" in them can be replaced with the same headline reading "may not" without changing the meaning.
While we can imagine a lot of things--like, for example, that everything interesting has already been invented, or that physics consists of cleaning up a few loose ends in classical mechanics, or that anyone going on about nuclear power is talking moonshine--the history of science has shown repeatedly and in great detail that what we imagine and what we can imagine doesn't amount to a hill of beans in this crazy world. The world is the way it is, and our imagination is pretty much an anti-tool for extrapolating from what we know into what we don't.
Utopians and dystopians of all political and technological stripes tend to forget this. Which is OK so long as they keep their visions to themselves, or present them as the fictions they are (self-serving example: http://www.amazon.com/Darwins-Theorem-TJ-Radcliffe-ebook/dp/...). When they ask us to take them seriously they shouldn't be surprised that we respond skeptically, or with laughter.
It's fun to think about this stuff, but funny that anyone thinks their imaginings have a greater-than-epsilon chance of being true.
I'm hoping this doesn't come across as excessively mean-spirited, but I get tired of these high-flown fantasies being presented as if they had a non-negligible chance of being true. It promotes the idea that our unaided imagination is useful for prediction, and the data shows overwhelmingly that it is not, and that believing otherwise results in a considerable amount of preventable failure and human misery.
The most dangerous four words in any language are: "It just makes sense!"
While there is a limited value in such speculations, and they make a great basis for fiction, there is a tendency to take them far more seriously than they deserve. Science--the discipline of publicly testing ideas by systematic observation, controlled experiment and Bayesian inference--will tell us about the nature of the universe. Imagination is vital to coming up with ideas to test, but the silent universe--which isn't even mentioned in the article--is actual evidence that rather than super-intelligent AI, the dominant intelligence in the cosmos may (or may not) be us.
The idea that a galaxy could be rapidly colonized by Von Neumann probes makes sense to me, but when I think about the specifics I have doubts. How many rare-earth metals go into an iPhone? Where will it get plastic from? How will it create circuitry for the new probes? Is it going to build a whole fab? Does it need to find planets where dead dinosaurs have turned into oil? It starts to seem that these probes would need a planet as rich as Earth to build more of themselves. I guess if we had some amazing nanotech advances and you could build the whole thing out of easy-to-obtain materials it makes sense, but how likely is that? I'd love to hear from someone who had thought about this longer than I have.
> Once arrived, they could instead make more humans, from parts.
Maybe their manufacturing process is named "evolution"? :-) In that case they could just bring along some primitive life form. Who cares if it takes a few million years? And what if evolution isn't as random as we think? Or maybe the parts that are random are too trivial to care about, from the perspective of a few million years.
Most importantly, a beaver dam is not an intelligence (and I doubt it will wipe out its own creators) :) I was referring to one, and I think the guy above me was too.
I find the idea that the dominant intelligence in the universe would be artificial very strange. It's just a cool idea, with no empirical proof. Look at our own progress. At the rate we are currently going, it is far more likely that we learn how to modify our DNA to extend our life spans before we develop true AI. We have barely begun to understand the human brain; how can we possibly even attempt to replicate something we cannot understand? I think progress in the biological sciences currently outstrips, and will continue to outstrip, advancements in computer science.
Also, I find it funny that scientists always claim that in the future people will augment their intelligence with computers to make themselves smarter. It's not a problem of capability, it's a problem of motivation. Have you ever heard someone say, "I wish I was smarter!"? Maybe, but most of the time you hear people say "I wish I could have X, screw X, live in X." The simple fact is, most people couldn't care less about how intelligent they are, as long as they achieve the goals that make them happy.
In the future we will significantly increase the lifespan of humans, create very simplistic AI to do menial tasks for us, improve technology like 3D printing and molecular assembly, and then promptly have an economic collapse followed by a revolution that will lead either to the death of technology as we know it or to utopia, hopefully the latter. I just don't see a place in that chain of events for a consciousness-capable AI.
> Also, I find it funny that scientists always claim that in the future people will augment their intelligence with computers to make themselves smarter. It's not a problem of capability, it's a problem of motivation. Have you ever heard someone say, "I wish I was smarter!"? Maybe, but most of the time you hear people say "I wish I could have X, screw X, live in X." The simple fact is, most people couldn't care less about how intelligent they are, as long as they achieve the goals that make them happy.
Case 1 for motivation: Governments want to be ahead of the governments of other countries, so they want smarter leaders. One way to accomplish that is via augmented humans, another is via supporting AI.
Case 2 for motivation: Businesses want more intelligent functioning and fewer employees (so they don't have to pay as much in wages). And they want the employees that they have to be brilliant at what they do. Economic forces are likely to fuel a substantial amount of progress in the development of more capable AI (just look at Google, Intel, Microsoft, Amazon...).
> Have you ever heard someone say: "I wish I was smarter!"
Yes, quite a bit actually. I certainly don't know anyone who wishes they were less intelligent.
Case 1: Not sure what country you live in, but intelligence isn't the deciding criterion for selecting leaders in ANY of the few countries I have lived in.
Case 2: Though it's very true that all of the companies you have mentioned are working on AI systems, none of those systems need to be as intelligent as a human to be very good at the job they are intended for. Take the Google Car, for instance. Will Google someday soon have a car that is possibly even safer at driving than a highly skilled human driver? YES! Does the AI in the car need to be even 1/2 as intelligent as my dog to do that task? No. Specialized AI is the future; general-purpose AI is simply not necessary, and also not really achievable any time soon.
> Yes, quite a bit actually. I certainly don't know anyone who wishes they were less intelligent.
Again, it's the motivations you have to pay attention to. Why do people want to be smarter? So they can have a better startup? So they can invent X, code X, build X? They don't want intelligence for its own sake; they want a tool. Some people are smart and, as a result, very proud of the fact that they are smart. However, would they be as proud of their intelligence if they knew that at least part of it was augmented by an IBM chip? Again, there is no motivation to develop intelligence for intelligence's sake. There are motivations to develop information-synthesis "AI", but that would hardly supplant humans any time soon.
"Again, it's the motivations you have to pay attention to. Why do people want to be smarter?"
So they can know and understand everything possible in the Universe, make great contributions to humanity and the future, and live a pinnacle life full of fulfillment, riches and experiences few others attain. That's not to mention enduring fame throughout history.
As for AI, that has the allure of mind uploading, and truly unimaginable power and experience, including practical star travel.
You're right that people mostly don't care about intelligence itself, but rather see it as a means to an end.
However, it's a really good means to an end. Future people may not care about intelligence by itself, but they'll still augment their intelligence with computers because it's a really effective way to have fancy cars and nice houses and such.
This, of course, already happens. How many times have you heard programmers talk about how painful it is to program without an internet connection? It's a frequent refrain among the ones I know, limited only by the fact that it's pretty hard not to have an internet connection these days. We constantly use computers for intelligence augmentation already.
Death of technology or the emergence of Utopia seem much less likely than conscious AI. Also, if people aren't interested in supplementing their intelligence, why do folk value education?
Most people value education because it's a path towards a better job, which is in turn a path towards getting X, screwing X, and living in X. Further, education does not equal intelligence.
This is slightly off topic, but for some reason the headline made me realize that creationists must by definition believe this statement is absolutely true. Even creationists who don't believe that there _could be_ other alien life forms must believe this.
Maybe all this intelligence is actually the manifestation of dark energy and dark matter, or the ever expanding space between galaxies. And once we discover how to manipulate that realm, we'll find that we're late to the party.
There's a great sci-fi novel that explores this topic (superintelligent alien life that lacks consciousness/self-awareness), but to recommend it here would spoil the big reveal! Talk about a Catch-22.