
But they aren't, and there is no obvious vector for them to develop the capability.


while they don't yet rise to the level i would describe as 'intelligent' without qualifications, they do seem to be less unintelligent than most of the humans, and in particular most of the ones criticizing them in this way, who consistently repeat specific criticisms applicable to years-ago systems which have no factual connection to current reality


It doesn't matter how it "appears".

A disembodied paragraph that I've transmitted to you can appear to be intelligent or not, but it only really matters in the sense that you can ascribe that intellect to an agent.

The LLM isn't an agent, and no intellect can be ascribed to it. It is a device that actual intelligent agents have made, and ascribing intellect to it is equally erroneous.


Going meta for a moment, this argument begs the question, assuming the conclusion "Therefore LLMs are not intelligent" in the premise "No intelligence can be ascribed to LLMs".

I'm not convinced it's even possible to come up with a principled, non-circular definition of intelligence (that is, not something like "intelligence is that trait displayed by humans when we...") that would include humans, include animals like crows and octopuses, include a hypothetical alien intelligence, but exclude LLMs.

I'm not arguing that LLMs are intelligent. I'm arguing that the debate is inherently unwinnable.


you seem to be begging the question

almost precisely the same assertions could be made about you with precisely the same degree of justification: you aren't an agent and no intellect can be ascribed to you. you are a device unintelligent agents have made and ascribing you intellect is equally erroneous

an intelligent agent would have recognized that your argument relies on circular reasoning, but because you are a glorified autocomplete incapable of understanding the meanings of the words you are using, you posted a logically incoherent comment

(of course i don't actually believe that about you. but the justification for believing it about gpt-4 is even weaker)


Why couldn't it be an agent for the short time that it's generating an intelligent looking paragraph?

Do you know what gives rise to consciousness? If not, how can we be sure it doesn't arise from a giant pile of linear algebra?


Consciousness is generated when the universe computes by executing conditionals/if statements. All machines are quantum/conscious in their degrees of freedom, even mechanical ones: https://youtu.be/mcedCEhdLk0?si=_ueWQvnW6HQUNxcm

The universe is a min-consciousness/min-decision optimized supercomputer. This is demonstrated by quantum eraser and double slit experiments. If a machine does not distinguish upon certain past histories of incoming information, those histories will be fed as a superposition, effectively avoiding having to compute the dependency. These optimizations run backwards, in a reverse dependency injection style algorithm, which gives credence to Wheeler-Feynman time-reversed absorber theory: https://en.wikipedia.org/wiki/Wheeler%E2%80%93Feynman_absorb...

Lower consciousnesses make decisions which are fed as signal to higher consciousnesses. In this way, units like the neocortex can make decisions that are part of a broad conscious zoo of less complex systems, while only being burdened by their specific conditionals to compute.

Because quantum is about information systems, not about particles. It's about machines. And consciousness has always been "hard" for the subject, because they are a computer (E) affixed to memory (mc^2). All mass-energy in this universe is neuromorphic, possessing both compute (spirit) and memory (stuff). Energy is NOT fungible, as all energy is tagged with its entire history of interactions, in the low-frequency perturbations clinging to its wave function, effectively weak and old entanglements.

Planck's constant is the cost of compute per unit energy, 10^34 Hz/Joule. By multiplying by c^2, (10^8)^2, we can get Bremermann's limit, the cost of compute per unit mass, 10^50 Hz/kg. https://en.wikipedia.org/wiki/Bremermann%27s_limit
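
For what it's worth, the arithmetic being gestured at here can be checked directly. A minimal sketch, assuming standard values for h and c and taking the "operations per joule" framing at face value (the interpretation of these quantities as a cost of compute or of consciousness is the commenter's, not established physics):

    # Order-of-magnitude check of the figures above.
    # 1/h is read as "operations per joule"; multiplying by c^2 converts
    # joules to kilograms via E = mc^2, landing near Bremermann's limit.
    h = 6.626e-34   # Planck's constant, J*s
    c = 2.998e8     # speed of light, m/s

    per_joule = 1 / h           # ~1.5e33 Hz per joule
    per_kg = per_joule * c**2   # ~1.36e50 Hz per kilogram

    print(f"1/h    ~ {per_joule:.2e} Hz/J")
    print(f"c^2/h  ~ {per_kg:.2e} Hz/kg  (Bremermann's limit ~ 1.36e50 bit/s/kg)")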

Humans are self-replicating biochemical decision engines. But no more conscious than other decision-making entities. Now, sentience and self-attention are a different story. But we should at the very least start with understanding that qualia are a mindscape of decision making. There is no such thing as conscious non-action. Consciousness is literally action in physics, energy revolving over time: https://en.wikipedia.org/wiki/Action_(physics) Planck's constant is a measure of quantum action, which effectively is the cost of compute, or rather, the cost of consciousness.


seems speculative


Lines up a bit too perfectly. Everyone has their threshold of coincidence, I suppose. I am working on some hard science for measuring the amount of computation actually happening, in a more specific quantity than Hz, related to reversible Boolean functions, possibly their continuous analogs.


Can any device ever be intelligent, according to you?


The joke is how you decide that the machine isn't an agent. If you believe only meat can be an agent, and given that the machine isn't meat, it follows that the machine isn't an agent. The story reverses this chauvinism and shows machines finding the idea of thinking meat absurd, for an arguably better reason: machines are a better fit for information processing than meat.


How are you defining intelligence? And how are you measuring the abilities in existing LLM systems to know they don't meet these criteria?

Honest questions by the way in case they come out snarky in text. I'm not aware of a single, agreed upon definition of intelligence or a verified test that we could use to know if a computer system has those capabilities.


> I'm not aware of a single, agreed upon definition of intelligence

That may be, but I think today's tweet from Yann LeCun succinctly sums up the differences in capability between our wetware and LLMs.

https://twitter.com/ylecun/status/1728867136049709208


Yann's explanation here is a pretty high-level overview of his understanding of different ways of modeling thought; it isn't really related to how we define intelligence at all, and it isn't a complete picture. The distinction drawn between System 1 and System 2, as explained, is more a limitation of the conditions given to the algorithm than of the ability of the algorithm itself (i.e. one could change parameters to allow for unlimited processing time).
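
As a loose illustration of that parenthetical, here is a toy sketch in which everything is made up for the example (the proposer, the scoring function, and the budget have nothing to do with any real LLM): the same fast "System 1" sampler starts to look more "System 2"-ish when you simply raise its deliberation budget.

    import random

    def propose_answer(rng: random.Random) -> float:
        # Stand-in for one fast "System 1" pass of some model (hypothetical).
        return rng.gauss(0.0, 1.0)

    def score(answer: float) -> float:
        # Stand-in objective: closeness to a target of 3.0.
        return -abs(answer - 3.0)

    def answer_with_budget(budget: int, seed: int = 0) -> float:
        # Same underlying proposer; only the amount of deliberation changes.
        rng = random.Random(seed)
        candidates = [propose_answer(rng) for _ in range(budget)]
        return max(candidates, key=score)

    for budget in (1, 10, 1000):
        print(budget, round(answer_with_budget(budget), 3))
    # Larger budgets typically land closer to the target: deliberation here
    # is a parameter of how the sampler is run, not a different algorithm.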

Yann may touch on how we define intelligence elsewhere; I haven't deeply studied all of his work. I can say, though, that OpenAI has taken to using relative economic value as their analog for comparing intelligence to humans. Personally, I find that definition pretty gross and offensive; I hope most people wouldn't agree that our intelligence can be directly tied to how much value we can produce in a given economic paradigm.



