
Oh wow. My respect for Anthropic just dropped to zero; I had no idea they were entertaining ideas this stupid.

In full agreement with OP; there is just about no justifiable basis to begin to ascribe consciousness to these things in this way. Can't think of a better use for the word "dehumanizing."



What is so absurd about the idea of ascribing consciousness to some potential future form of AI?

We don't understand consciousness, but we've an idea that it's an emergent phenomenon.

Given the recent papers about how computationally dense our DNA is, and the computing capacity of our brains, is it so unreasonable to assume that a sufficiently complex program running on non-organic matter could give rise to consciousness?

The difference to me seems mostly one of computing mediums.


The problem is that current AI has nothing in common with consciousness as found in (say) cats and dogs, whatever that might be - no robot is even close to being as conscious as a cockroach - yet human consciousness seems to have great overlap with consciousness in nonhuman animals. The tiny fragment of human consciousness that appears to overlap with LLMs should be called something else, maybe “virtual sapience” as exhibited in LLMs. (The overarching difficulty here is we don’t know enough about consciousness/sentience/sapience to define them precisely.)


> The problem is that current AI has nothing in common with consciousness as found in (say) cats and dogs, whatever that might be - no robot is even close to being as conscious as a cockroach - yet human consciousness seems to have great overlap with consciousness in nonhuman animals.

Given we don't know what consciousness is — worse, there's around 40 different definitions, and I've seen people use "is a living biological organism" as one of them here on HN — we absolutely cannot say if the parts of our minds that give rise to it are, or are not, functionally equivalent to what LLMs do with attention.

We also definitely cannot put them on a graph with cockroaches, which, like LLMs, may or may not have any at all.

It's like we've got this word "weather", and some people are saying that only their own country really experiences "weather" because what they mean by the word is the most characteristic part of their local climate (windy moistness for Britain, sunny for the Sahara, snow for Siberia, rain for the Amazon etc.), and then we made a telescope and looked at Mars and are now arguing that Mars can't have real weather because there's no water there.

And then someone else pipes in and is talking about "weather" to mean "daytime", because nobody ever says "what nice weather we're having" at night…


This is common in arguments about AI. One person says why couldn't future AI be conscious, the next replies the problem is current LLMs aren't conscious. Future AI and current LLMs are not the same thing.

Future AI no doubt will be able to be conscious for practical purposes.


Because they're numbers represented in digital form. In inference, you're doing simple math with those numbers. So what's alive, the numbers? Maybe the silicon holding the numbers? What if we print them out? Does the book become conscious?

Even if you're a materialist, surely you think there is a difference between a human brain and a brain on a lab table.

You take a dead person's brain, run some current through it and it jumps. Do you believe this is equivalent to a living human being?
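For what it's worth, the "simple math" of inference really is just chained multiply-and-add. A toy sketch of a single feed-forward layer (the weights here are random stand-ins, not any real model's parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))   # weight matrix: "the numbers"
b = rng.standard_normal(4)        # bias vector
x = rng.standard_normal(4)        # input activations

# One layer of inference: matrix multiply, add, nonlinearity (ReLU).
h = np.maximum(0, W @ x + b)

# A full LLM forward pass is thousands of such steps chained together;
# printed out, W and b are just pages of numbers.
print(h)
```

Whether a book full of those numbers "becomes conscious" is exactly the question being posed above.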


> So what's alive, the numbers? Maybe the silicon holding the numbers? What if we print them out? Does the book become conscious?

Indeed, those are exactly the questions you need to ponder.

It might also help to consider that the human brain itself is made of cells, and cells are made of various pieces that are all very obviously machines; we're able to look at, identify, and catalogue those pieces, and as complex as molecular nanotech can be, the individual parts are very obviously not alive by themselves, much less thinking or conscious.

So when you yourself are engaging in thought, such as when writing a comment, what exactly do you think is alive? The proton pumps? The cell membranes? The proteins? If you assemble them into chemically stable blobs, and have them glue to each other, does the resulting brain become conscious?

> Even if you're a materialist, surely you think there is a difference between a human brain and a brain on a lab table.

Imagine I'm so great a surgeon that I can take a brain out of someone, keep it on a lab table for a while, and then put it back in, and have that someone recover (at least well enough that they can be interviewed before they die). Do you think this is fundamentally impossible? Or do you believe the human brain somehow transmutes into a "brain on a lab table" as it leaves the body, and then transmutes back when plugged back in? Can you describe the nature of that process?

> You take a dead person's brain, run some current through it and it jumps. Do you believe this is equivalent to a living human being?

Well, if you run the current precisely enough, sure. Just because we can't currently demonstrate that on a human brain (though we're getting pretty close to it with animals) doesn't mean the idea is unsound.


Yes, if you take a dead person's brain and run current through it continuously so that it can direct a body and produce novel thoughts, then that is equivalent to a living human being.

Your brain is ultimately just numbers represented in neuronal form. What's conscious, the neurons?


This level of materialism is soul-crushing :)


Thanks :) I always take the opportunity to crush souls when I can, since they don't exist in Reality nor Actuality.

FWIW I'm a hardcore idealist, but in the way it was originally posed, not in the quasi-mystical way the Hegelians corrupted it into.


What's the fundamental, inescapable difference between "numbers represented in digital form" and "jelly made of wet flesh crammed into an oversized monkey skull"?

Why should one be more valid than the other?


Because we decide which one is valid, and we are also part of the comparison being made.

Yes, it's almost a perfect conflict of interest. Luckily that's fine, because we're us!


Not here, not as phrased. They're asking a question about physical reality; the answer there is that there fundamentally is no difference. Information and computation are independent of the medium, by the very definition of the concept.

There is a valid practical difference, which you present pretty much perfectly here. It's a conflict of interest. If we can construct a consciousness in silico (or arguably in any other medium, including meat - the important part is it being wrought into existence with more intent behind it than it being a side effect of sex), we will have moral obligations towards it (which can be roughly summarized as recognizing AI as a person, with all moral consequences that follow).

Which is going to be very uncomfortable for us, as the AI is by definition not a human being made by the natural process by which human beings are made, so we're bound to end up in conflict over needs, desires, resources, morality, etc.

My favorite way I've seen this put into words: imagine we construct a sentient AGI in silico, and one day decide to grant it personhood, and with it, voting rights. Because of the nature of digital medium, that AGI can reproduce near-instantly and effortlessly. And so it does, and suddenly we wake up realizing there's a trillion copies of that AGI in the cloud, each one morally and legally an individual person - meaning, the AGIs as a group now outvote humans 100:1. So when those AGIs collectively decide that, say, education and healthcare for humans is using up resources that could be better spent on making paperclips, they're gonna get their paperclips.


Maybe an unpopular answer but a soul. Agency.

This materialist world view is very dangerous and could lead to terrible things if you believe numbers in a computer and a human being are equivalent.


What do you think you are? Just a bunch of atoms following the math of physics.

And what are those atoms made of? Just a bunch of quantum numbers in quantum fields following math equations.


I think I'm a human being with a soul


> You take a dead person's brain, run some current through it and it jumps. Do you believe this is equivalent to a living human being?

A better question is: did this dead brain briefly wake up and experience anything?

AI aren't alive, and aren't humans. So what?


It is absurd, but consciousness is fundamentally absurd.

Why would doing a bunch of basic arithmetic produce an entity that can experience things the way we do? There's no connection between those two concepts, aside from the fact that the one thing we know that can experience these things is also able to perform computation. But there's no indication that's anything other than a coincidence, or that the causation doesn't run in reverse, or from some common factor. You might as well say that electric fences give rise to cows.

On the other hand, what else could it be? Consciousness is clearly in the brain. Normal biological processes don't seem to do it, it's something particular about the brain. So it's either something that only the brain does, which seems to be something at least vaguely like computation, or the brain is just a conduit and consciousness comes from something functionally like a "soul." Given the total lack of evidence for any such thing, and the total lack of any way to even rigorously define or conceptualize a "soul," this is also absurd.

Consciousness just doesn't fit with anything else we know about the world. It's a fundamental mystery as things currently stand, and there's no explanation that makes a bit of sense yet.


> Consciousness just doesn't fit with anything else we know about the world. It's a fundamental mystery as things currently stand, and there's no explanation that makes a bit of sense yet.

Well put. I think there's one extremely solid explanation, though: it's a folk psychology concept with no bearing on actual truth. After all, could we ever build a machine that has all four humours? What about a machine that truly has a Qi field, instead of merely imitating one? Where are the Humours and Qi research institutes dedicated to this question?


We're making progress in being able to measure qualia. [1],[2] If the philosophical underpinnings of emergence in a physicalist sense hold, then that is a stepping stone toward a theory of consciousness.

[1] https://www.cell.com/iscience/fulltext/S2589-0042(25)00289-5

[2] Popularization: https://backreaction.blogspot.com/2025/06/scientists-measure...


That looks to be some major equivocation on "qualia." What they're actually measuring is related to how colors are perceived. That's very different from the actual subjective experience that is what we call consciousness. An intelligence that wasn't conscious would not be distinguishable in this test from a conscious being.


This sort of thing is why I seriously wonder if maybe some people have consciousness and some don't, rather than it being universal.

My experience of consciousness is undeniable. There's no question of the concept just being made up. It's like if you said that hands are a folk concept with no bearing on actual truth. Even if I can't directly detect anyone else's hands, my own are unquestionably real to me. The only way someone could deny the existence of hands in general is if they didn't have any, but I definitely do.


The point is that you believe you have something called consciousness, but when pressed no one can define it in a scientific (i.e. thorough+consistent) way. In comparison, I can absolutely define hands, and thus prove to myself that I (and others!) have them.

Regardless, some of the GangStalking people are 100% convinced that they have brain implants in their head that the federal government is manipulating -- belief is not evidence.


My point is that my experience of consciousness is more than sufficient proof. In fact, it is the only thing I can definitively with 100% certainty know is real. Other people's consciousness is a lot harder to demonstrate, but my own is incontrovertible to me.

The only way someone with that experience could say that it's not real is if they're taking the piss, they're very confused, or they just don't have it.


But what are you experiencing? Something that cannot be defined? If so, do you see the issue there?


I'm not sure. What is "the issue" exactly?

The difficulty in defining it certainly makes it hard to talk about. And it makes it impossible to even conceive of how one might detect this phenomenon in other people, or even come up with any sort of theoretical framework around it.

But if "the issue" is that this difficulty means I can't really be sure it's even there, no. As I said, this is literally the only thing I can be 100% sure exists. For everything else, there's room for at least a little doubt. This world, the room I'm in, the computer I'm using, even my own body could all be illusions. But my own consciousness is definitely real.

If you don't feel the same way about your own consciousness, then as I said, you're either taking the piss, you're very confused, or you just don't have it.


  As I said, this is literally the only thing I can be 100% sure exists.
How can you be 100% confident something exists if you don't even know what it is? That's literally impossible, on a logical level. You can't hold a belief about a concept you don't have -- it would be like a pointer to memory that doesn't exist (i.e. useless, invalid, and erroneous).

Certainly you're aware of things. There are some relevant phenomenological concepts that you hold beliefs about, just like there were real symptoms being described by the Humours system. But you have no justification for bundling them all together into something called "consciousness", which coincidentally comes packed with other, completely unproven assertions.


"Certainly you're aware of things."

Seems you agree with me, then. The "you" i.e. me being aware is what I'm certain exists. I'm not sure what other assertions you think are bundled in there, seems like just the one thing to me.


Correct.

Which is precisely why I have a problem with this idea as Anthropic is executing it; they might as well say "books and video games are conscious and we should be careful about their feelings."


> Normal biological processes don't seem to do it, it's something particular about the brain. So it's either something that only the brain does, which seems to be something at least vaguely like computation, or the brain is just a conduit and consciousness comes from something functionally like a "soul."

Another option is hidden in the first sentence: "Normal biological processes don't *seem* to do it." — emphasis on "seem", because not only is it currently beyond us to have meaningful two-way conversations with dogs about their experiences and ask them if they're conscious, we absolutely can't do it with trees or bacteria or our own livers.

We can talk with LLMs, but we also know they're making stuff up, so we can't trust that they're not just saying what got them an up-vote in RLHF training.


Nothing at all, if all you're doing is speculating within an academic context.

This appears to be more than that; these are steps in the direction of law and policy.


It's not absurd to do it to a potential future AI; it's absurd to do it to the face of the man in the moon.


It reeks of navel-gazing self-aggrandizement. I bet not even half the people doing the hand-wringing over how some matrix multiplications might feel are vegan, or regularly spare a thought about how their or their companies' consumption behavior indirectly leads to real human suffering.

It's just so absurd how narrow their focus on preventing suffering is. I almost can't imagine a world where their concern isn't coming from a disingenuous place.


I'm not highly concerned but I think there is merit in at least contemplating this problem. I believe that it would be better to reduce suffering in animals, but I am not vegan because the weight of my moral concern for animals does not outweigh my other priorities.

I believe that it doesn't really matter whether consciousness comes from electronics or cells. If something seems identical to what we consider consciousness, I will likely believe it's better to not make that thing suffer. Though ultimately it's still just a consideration balanced among other concerns.


I too think there is merit in exploring to what degree consciousness can be approximated by, or observed in, computational systems of any kind, including neural networks. But I just can't get over how fake and manipulative the framing of "AI welfare" or concern over suffering feels.


That's reasonable. I certainly believe that there are many fake and manipulative people who say what's best for their personal gain, perhaps even the majority. But I still think it's reasonable to imagine that there are some people who are genuinely concerned about this.


We're doing the homunculus again. Whether you wank into a test tube and add a bit of soil and grass, or sew together parts of a corpse and connect them to electricity: so far, every prospect of fulfilling this dream has turned out to be a delusion. Why should it be any different with the latest manifestation, this time in computational form?

The AGI drivel from people like Sam Altman is all about getting more VC money to push the scam a little further. ChatGPT is nothing more than a better Google. I'm happy to be proven wrong, but so far I see absolutely no potential for consciousness here. Perhaps we should first clarify whether dolphins and elephants are equipped with it before we do ChatGPT the honor.


> ChatGPT is nothing more than a better Google.

It's not even trying to be a search engine. It can use them, but if that's what you think of them as, you're missing most of the cool stuff they can do.

> Perhaps we should first clarify whether dolphins and elephants are equipped with it before we do ChatGPT the honor.

Before either of those, we need to understand the question. Humanity uses the word "consciousness" for far too many things right now: some of these things, AI have had for a long time; others of which can never be had by anything, including humans.


> It's not even trying to be a search engine.

I use search engines to get answers to questions and I use ChatGPT to get answers to questions. Except I can just ask ChatGPT like I would ask a person, whereas the search engine is more like a keyword search in a library catalog.

> Before either of those, we need to understand the question.

I don't think you can approach consciousness in this way. It's a bit like trying to derive Modus Ponens from propositional logic. I'm not sure if this comparison fits well, but I'm very skeptical that there is even a question to be explored that can be answered positively. Perhaps consciousness ultimately defies definition because it operates as a universal negation. It points to everything else and says: “I am different from that!”

There are other terms that defy definition rather stubbornly: Love, art, intelligence, nature, happiness, beauty, ... So it's not that unusual. My skepticism towards conscious AI comes from the fact that I don't want to take the lack of definition as an opportunity to hastily interpret the undefined into all kinds of phenomena, even if it seems tempting. In the end, it is probably an illusion.


Why make AI then, if intelligence is just electrical activity in a substrate and is already everywhere?

We’re engineering nothing novel at great resource cost and appropriation of agency.

Good job we made the models in the textbook “real”?

Wasted engineering if it isn't teaching us anything physics hadn't already taught us decades ago, then. Why bother with it?

Edit: and AGI is impossible… one's light cone does not extend far enough to accurately learn; training on simulation is not sufficient to prepare for reality. Any machine we make will eventually get destroyed by some composition of spacetime we/the machine could not prepare for.


This is a strange position.

>Wasted engineering if it isn't teaching us anything physics hadn't already taught us decades ago, then. Why bother with it?

Why build cars and locomotives if they don't teach us anything horses didn't...

>and AGI is impossible… one's light cone does not extend far enough to accurately learn; training on simulation is not sufficient to prepare for reality. Any machine we make will eventually get destroyed by some composition of spacetime we/the machine could not prepare for.

This could be applied to humans as well. Unless you believe in some extra-physical aspect of the human mind, there is no reason to think it is different from a mind in silicon.


AGI may not be impossible. But next token prediction won't get us there.


It's actually really unclear that this is true. If you brought GPT-o3 back to 1990 I have a hard time believing the world wouldn't immediately consider it full AGI.


If you told a person from 1990 that in the year 2025, they have this thing, and described OpenAI's o3 - strengths, flaws and all? That person would say "yep, your sci-fi future of year 2025 has actual AI!"

But if someone managed to actually make o3 in 1990? Not in some abstract sci-fi future, but actually there, available broadly, as something you could access from your PC for a small fee?

People would say "well, it's not ackhtually intelligent because..."

Because people are incredibly stupid, and AI effect is incredibly powerful.


I'm very confident that if someone in 1990 used o3 they would be absolutely astonished and would not pull the 'well actually' thing you think they would.


Nah, AI effect is far too powerful. Wishful thinking of this kind is simply irresistible.

In real life, AI beating humans at chess didn't change the perception of machine intelligence for the better. It changed the perception of chess for the worse.


The same reasoning that would call this consideration of the possibility of machine consciousness "dehumanizing" would necessarily also apply to the consciousness of animals, and I can't agree with that. To argue this is to define "human" in terms of exclusive ownership of conscious experience, which is a very fragile definition of humanity.

That definition of humanity cannot countenance the possibility of a conscious alien species. That definition cannot countenance the possibility that elephants or octopuses or parrots or dogs are conscious. A definition of what it means to be human that denies these things a priori simply will not stand the test of time.

That's not to say that these things are conscious, and importantly Anthropic doesn't claim that they are! But just as ethical animal research must consider the possibility that animals are conscious, I don't see why ethical AI research shouldn't do the same for AI. The answer could well be "no", and most likely is at this stage, but someone should at least be asking the question!


Am surprised but it is real...

"As well as misalignment concerns, the increasing capabilities of frontier AI models—their sophisticated planning, reasoning, agency, memory, social interaction, and more—raise questions about their potential experiences and welfare. We are deeply uncertain about whether models now or in the future might deserve moral consideration, and about how we would know if they did. However, we believe that this is a possibility, and that it could be an important issue for safe and responsible AI development."

chapter 5 from system card as linked from article: https://www-cdn.anthropic.com/6be99a52cb68eb70eb9572b4cafad1...

Humans do care about welfare of inanimate objects (stuffed animals for example) so maybe this is meant to get in front of that inevitable attitude of the users.


> Oh wow. My respect for Anthropic just dropped to zero; I had no idea they were entertaining ideas this stupid.
>
> In full agreement with OP; there is just about no justifiable basis to begin to ascribe consciousness to these things in this way. Can't think of a better use for the word "dehumanizing."

We cannot arbitrarily dismiss the basis for model welfare until we have precisely defined consciousness and sapience. Representing human thinking as a neural network running on an electrochemical substrate, and placing it at the same level as an LLM, is not necessarily dehumanizing; I think model welfare is about expanding our respect for intelligence, not desacralizing the human condition (cf. TNG's "The Measure of a Man").

Also, let's be honest, I don't think the 1% require any additional justification for thinking of the masses as a consumable resource...


> Oh wow. My respect for Anthropic just dropped to zero; I had no idea they were entertaining ideas this stupid.

It's not stupid at all. Their valuation depends on the hype, and the route sama chose was to convince investors that AGI is near. Anthropic decided to follow this route, so they do their best to make the claim plausible. This is not stupid; this is deliberate strategy.


Rights aren't zero-sum. This is classic fixed-pie fallacy thinking. If we admit elephants are conscious, it has no effect on the quality of consciousness of humans.


Until elephants gain rights in law that conflict with human rights. It seems all of life is some sort of competition for resources.



I agree. This is a very dangerous marketing and legal strategy that can end up costing us very dearly.


Indirect, but it gets to Roko's basilisk.


They're not ascribing consciousness, they're investigating the possibility. We all agreed with Turing 75 years ago that deciding whether a machine is "truly thinking" or not is a meaningless, unscientific question -- what changed?

It doesn't help that this critique is badly researched:

  The Anthropic researchers do not really define their terms or explain in depth why they think that "model welfare" should be a concern.
Maybe check the [paper](https://arxiv.org/abs/2411.00986) instead of the blog post describing the paper?

  Saying that there is no scientific *consensus* on the consciousness of current or future AI systems is a stretch. In fact, there is nothing that qualifies as scientific *evidence*.
A laughable misapplication of terms -- anything can be evidence for anything, you have to examine the justification logic itself. In this case, the previous sentence lays out their "evidence", i.e. their reasons for thinking agents might become conscious.

  The report's exploration of whether models deserve moral and welfare status was based solely on data from interview-based model self-reports. In other words: People chatting with Claude a lot and asking if it feels conscious. This is a strange way to conduct this kind of research. It is neither good AI research, nor a deep philosophical investigation.
That is just patently untrue -- again, as a brief skim of the paper would show. I feel like they didn't click the paper?

  Stances on consciousness and welfare [...] shift dramatically with conversational context... This is not what a conscious being would [do].
Baseless claim said by someone who clearly isn't familiar with any philosophy of mind work from the past 2400 years, much less aphasia subjects.

Of course, the whole thing boils down to the same old BS:

  A theory that demands we accept consciousness emerging from millennia of flickering abacus beads is not a serious basis for moral consideration; it's a philosophical fantasy.
Ah, of course, the machines cannot truly be thinking because true thought is solely achievable via secular, quantum-tubule-based souls, which are had by all humans (regardless of cognitive condition!) and most (but not all) animals and nothing else. Millennia of philosophy comes crashing against the hard rock of "a sci-fi story relates how uncomfy I'd be otherwise"! Notice that this is the exact logic used to argue against Copernican cosmology and Darwinian evolution -- that it would be "dehumanizing".

Please, people. Y'all are smart and scientifically minded. Please don't assume that a company full of highly-paid scientists who have dedicated their lives to this work are so dumb that they can be dismissed via a source-less blog post. They might be wrong, but this "ideas this stupid" rhetoric is uncalled for and below us.


Perhaps I slightly misspoke: The underlying ideas in an academic context are not stupid.

The "rush" (feels like to me) to bring them into a law/policy context is.



