
  > What if language is the intelligence? 
Almost certainly not. There does not seem to be a strong correlation between the two. We have a lot of different measures for intelligence when it comes to animals. We can place animals along a (multidimensional) spectrum, and humans appear unique in having language. It also appears that teaching animals language does not cause them to rapidly improve on these metrics, even after generations of language capability.

  > What if "guessing the next word" really was all that was there, to peak human intelligence, knowledge, and understanding of our world?
I believe this is falsifiable. As best I understand it, the claim is a bidirectional relationship: predict next word <--> understanding. Yet we know that neither direction holds. I'll state some trivial cases for brevity[0], but I have no doubt you can construct more complicated ones and even find real examples.

-> I can make accurate predictions about coin flips without any understanding of physics or of how the coin is being flipped. All I need is luck. Or take many mechanical objects, like a clock, which "predicts" the time.

Or a horse can appear to do math if I tell it how many times to stomp its foot. It makes accurate predictions yet certainly has no understanding.

Ehh, I'll give you a more real example. Here's a model that gives accurate turn-by-turn taxi directions, where the authors extract its world model and find that it is not only inaccurate but diverges significantly[1]. Vafa has a few papers on the topic; I suggest reading his work.

<- You can understand all the physics of a double pendulum and still fail to predict its motion arbitrarily far forward if you do not also know the initial conditions. This is true of any chaotic system.
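To make that concrete, here's a minimal sketch (the standard equal-mass double-pendulum equations of motion; the parameters and initial angles are my own toy choices): two identical simulations whose release angles differ by one nanoradian end up wildly apart within seconds, even though the "physics" in both is perfectly understood.

```python
import math

G, L, M = 9.81, 1.0, 1.0  # gravity, rod length, bob mass (equal rods/masses)

def deriv(s):
    """Equations of motion for an equal-mass double pendulum."""
    t1, t2, w1, w2 = s
    d = t1 - t2
    den = L * (2 * M + M - M * math.cos(2 * d))
    a1 = (-G * (2 * M + M) * math.sin(t1) - M * G * math.sin(t1 - 2 * t2)
          - 2 * math.sin(d) * M * (w2 * w2 * L + w1 * w1 * L * math.cos(d))) / den
    a2 = (2 * math.sin(d) * (w1 * w1 * L * (M + M) + G * (M + M) * math.cos(t1)
          + w2 * w2 * L * M * math.cos(d))) / den
    return (w1, w2, a1, a2)

def rk4_step(s, dt):
    """One classic fourth-order Runge-Kutta step."""
    k1 = deriv(s)
    k2 = deriv(tuple(x + dt / 2 * k for x, k in zip(s, k1)))
    k3 = deriv(tuple(x + dt / 2 * k for x, k in zip(s, k2)))
    k4 = deriv(tuple(x + dt * k for x, k in zip(s, k3)))
    return tuple(x + dt / 6 * (p + 2 * q + 2 * r + u)
                 for x, p, q, r, u in zip(s, k1, k2, k3, k4))

# two releases from rest at ~115 degrees, differing by one nanoradian
a = (2.0, 2.0, 0.0, 0.0)
b = (2.0 + 1e-9, 2.0, 0.0, 0.0)
dt = 0.001
for _ in range(10_000):  # 10 simulated seconds
    a, b = rk4_step(a, dt), rk4_step(b, dt)

sep = abs(a[0] - b[0])
print(f"angle separation after 10 s: {sep:.3e} rad")  # many orders above 1e-9
```

Same equations, same solver, same "understanding"; only the initial condition differs, and the prediction is gone.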

I said we've seen this in the history of science. {Geo,helio}centrism is a great example: scientists with no affiliation to the church still opposed Galileo because his model wasn't making accurate predictions for certain things. Yet the heliocentric model is clearly the better understanding, and more accurate as a whole. If you want to dive deeper into this topic, I'd highly recommend both the podcast "Opinionated History of Mathematics" and the book "Representing and Intervening" by Ian Hacking. Both are very approachable. FWIW, metaphysics talks about this quite a lot.

  > My first problem is that there are many people that claim that they have NO internal monologue
So again, I cannot stress enough that we should not represent this as a binary setting. The binary cases are the extremes (in both directions), meaning very few people experience them.

The problem here is one of language and semantics, not of effect. I completely believe that someone will say "I have no internal monologue" if >90% of their thinking happens without one. Just like a guy who's 5'11.75" will call himself 6'. Is he a liar? I wouldn't say so; he's >99% accurate (71.75/72 ≈ 99.65%). Would you say so for someone who's 5'11"? That's probably more contextually dependent.

So you distrust the data. That's fine, let's assume it's poisoned. We should anyway, since noise is an important part of any modeling[2]. It is standard practice to account for it.
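To make [2] concrete, here's a minimal sketch (toy numbers of my own, not from any survey) of how a small slice of random responders shifts an observed rate, and how you'd correct for it:

```python
def observed_rate(true_rate, noise_rate, noise_yes=0.5):
    """Observed "yes" fraction when a noise_rate slice answers at random."""
    return (1 - noise_rate) * true_rate + noise_rate * noise_yes

def corrected_rate(observed, noise_rate, noise_yes=0.5):
    """Invert the mixture to estimate the underlying true rate."""
    return (observed - noise_rate * noise_yes) / (1 - noise_rate)

# even if literally nobody lacked an internal monologue, 4% random
# responders alone would produce a 2% "yes" rate:
print(observed_rate(0.00, 0.04))  # 0.02
```

The point isn't the particular numbers; it's that a nonzero "yes" rate at the extreme is exactly what a noisy survey predicts, so the extreme answers alone tell you very little.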

So instead: do you distrust that there is a distribution in how much of an internal monologue individuals use? Or do you presume they all use it the same amount?

I'd find it hard to believe you distrust the spectrum. But if you trust the spectrum, then where is the threshold for your claim? 0%? That's really not a useful conversation, even if the distribution is heavy-tailed.

You are hyper-fixated on the edge case, but its result isn't actually consequential to your model. The distribution is! You'll have to consider your claims much more carefully once you consider a distribution: you then need to claim a threshold, in both directions. Or, if you claim we're all the same (I'd find that quite surprising, tbh, especially given the nature of linguistics), you need to explain that too, along with the narrow distribution such a claim implies.

All I can tell you is that my friend and I have had this conversation multiple times over many years and it seems very constant to me. I have no reason to believe they are lying and if they are they are doing so with an extreme level of consistency, which would be quite out of the norm.

[0] Arguing the relationship still requires addressing trivial relationships.

[1] https://arxiv.org/abs/2406.03689

[2] Even if there are no liars (or "lizardmen"[3]) we still have to account for miscommunication and misunderstandings.

[3] https://en.wiktionary.org/wiki/Lizardman%27s_Constant



> We have a lot of different measures for intelligence when it comes to animals.

But there is an abysmal difference between animal intelligence and human intelligence.

> predict next word <--> understanding

Yes, and I could say a stone understands the world because its state reflects the world: it gets hot, cold, wet, dry, radiated, whatever. Perhaps its internal state can even predict the world: if it's rolling downhill, it can "predict" that it will stop soon. But the stone is not conscious like a human, and neither is a clock, nor a horse that can count to ten. The stone is obviously a reductio ad absurdum; a horse can actually "guess" to some degree, but nothing like a human. It cannot ask a question, and it cannot answer a question to itself.

> I cannot stress enough that we should not represent this as a binary setting.

That was kind of my point, to eliminate the binary "no", leaving us with a spectrum.

My initial claim "these are just nutjobs" - my apologies for the phrasing - was addressing this: there are no people "without internal monologue AT ALL".

Since we seem to actually agree on this point, our difference is that I believe that the people with "little internal monologue" are simply not aware of it.

Let me phrase it this way: if language is the understanding, then the internal monologue is not some quirky side effect. To understand something at the human level, we need to describe it with language; the rest is primitive instincts and "feelings".

We can model the past and the future. We can model ourselves in 10 years. And what is one of the most important things we would model? What we would say or think then - thinking being "saying something silently in our head". Not really just feelings: "I would love my partner", sure, but why? "Because . . .".

When we are utilizing language - the internal monologue - to construct the model, we cannot be "aware of it" constantly. That is, the bandwidth is taken up by the tasks at hand; it would be detrimental if every other phrase were followed by "btw, did I notice that I just understood this via a string of words?". The more complex the actions or ideas we process, the less aware we are that we are using language for them. That is "being in the flow". We can reconstruct it when done, and here, if there is a lack of awareness of the internal monologue, it will be rationalized as something else.

  > Or, if you claim we're all the same (I'd find that quite surprising, tbh, especially given the nature of linguistics), you need to explain that too, along with the narrow distribution such a claim implies.

My explanation (without proof) is that it's just a matter of awareness.

> All I can tell you is that my friend and I have had this conversation multiple times over many years and it seems very constant to me. I have no reason to believe they are lying and if they are they are doing so with an extreme level of consistency, which would be quite out of the norm.

Can you think of some kind of test question (or string of questions) that could prove either side? I have been thinking about it, obviously, but I can't come up with any way to empirically test whether there is or isn't an internal monologue. Consistency could simply mean that their rationalization is consistent.

I'll leave you this article, which I found quite interesting: https://news.ycombinator.com/item?id=43685072 The person lost language and lost what we could consider human-level consciousness at the same time, and then recovered both at the same rate. Of course, there was brain damage involved, so it's not conclusive.

Also this book https://en.wikipedia.org/wiki/The_Origin_of_Consciousness_in... which, while partially debunked and pop-sci to begin with, has wildly interesting insights into the internal monologue and at least raises extremely interesting questions.



