
From a more conceptual angle, Landgrebe and Smith's "Why Machines Will Never Rule the World" clarifies the limits of computation w/r/t complex dynamic systems.

Here is the core argument: "an artificial intelligence that could equal or exceed human intelligence—sometimes called artificial general intelligence (AGI)—is for mathematical reasons impossible. It offers two specific reasons for this claim:

1. Human intelligence is a capability of a complex dynamic system—the human brain and central nervous system.

2. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a computer."



A denser-than-air vehicle that could equal or exceed bird flight --sometimes called an airplane-- is for mathematical reasons impossible, for two specific reasons:

1. Bird flight is a capability of a complex dynamic system -- the bird's musculoskeletal system and its brain.

2. Systems of this sort cannot be modelled mathematically in a way that allows them to operate inside a machine.


What would prevent an embodied AI - i.e. some kind of deep learning system operating a robot full of sensors - from representing such a 'complex dynamic system'?

And if the answer is nothing - what would prevent such a dynamic system from being emulated? If the answer is real-time data, it can be fed into the 'world model' of the emulation in numerous ways.
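To make the emulation point concrete, here is a minimal, self-contained sketch (my own illustration, not from the book or anyone in this thread): it steps the chaotic Lorenz equations forward numerically as a stand-in for a 'complex dynamic system', and occasionally nudges the simulated state toward noisy 'sensor' readings of the real trajectory, a toy version of feeding real-time data into a world model. The Lorenz system, the Euler integrator, and the nudging gain are all illustrative choices, not anything the book itself discusses.

    # Sketch: numerically stepping a chaotic dynamic system (Lorenz equations)
    # inside a computer, with occasional noisy measurements of the "real"
    # system nudging the simulated state (a toy form of data assimilation).
    import numpy as np

    def lorenz_derivatives(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # Right-hand side of the Lorenz system, d(state)/dt.
        x, y, z = state
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def step(state, dt=0.01):
        # One explicit Euler step; crude, but enough for illustration.
        return state + dt * lorenz_derivatives(state)

    def assimilate(state, measurement, gain=0.1):
        # Nudge the simulated state toward an observed state.
        return state + gain * (measurement - state)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        true_state = np.array([1.0, 1.0, 1.0])   # the "real" system, observed only noisily
        model_state = np.array([1.5, 0.5, 1.2])  # the emulation, started slightly wrong

        for t in range(1000):
            true_state = step(true_state)
            model_state = step(model_state)
            if t % 50 == 0:  # occasional noisy "sensor" reading from the real system
                measurement = true_state + rng.normal(scale=0.5, size=3)
                model_state = assimilate(model_state, measurement)

        print("true :", true_state)
        print("model:", model_state)

This doesn't settle the philosophical question, of course; it only shows that "operating a complex dynamic system inside a computer", at least approximately and with live corrections, is routine numerical practice.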


This is what I believed intuitively, but the reality is more nuanced. I'd rephrase it to: "We can't create something better than humans". A tractor can outplow a human easily just like an LLM can outwork a human on menial, easily verifiable tasks.


If humans were too complicated to be mathematically modelled, then we wouldn't exist.


This kind of argument is nonsense. It boils down to: "This previously solved problem is unsolvable."

The previous solution is a biological brain, and the future solutions are mechanical, but that doesn't matter. Even if it did, such arguments involve little more than waving one's hands about and claiming that there's some poorly specified fundamental difference.

There isn't.


I would advise reading the book and grasping the argument before 'rebutting' it.


This is like saying you should read a 300 page flat-earther tome before contradicting flat earth.


It's a waste of time. These arguments always boil down to some "mysterious soul that only biological brains possess". It's theistic nonsense.

Even if current LLM architectures can't get to AGI, which I'm willing to believe, there's no coherent argument to be made that there is no possible path to AGI with digital computers.

It could be as simple as simulating the top-to-bottom biology of a human brain! That's possible, just wildly impractical, so any arguments based on abstractions like logic, mathematics, physics, etc... go right out the window. They're obviously invalid. Only practical engineering arguments can possibly be valid.

"It's impossible for heavier-than-air objects to fly!"

"Look up. What's that?"

"A bird!"

"You're not every bright, are you?"


Why would someone believe that to be true?


That core argument reads like a word salad, no offense.



