It's worth bearing in mind that Discordianism is the religious equivalent of shitposting (or absurdism as sacrament, if you like) and everything on his blog is very baitposty. Perhaps they're just tired of him.
I'm unimpressed by his transcripts. Not because I reject the idea that AI could be sentient or that this might even have been achieved (unlike Searle, I consider intelligence an emergent system property), but because this isn't any sort of serious effort to explore that question. Lemoine is using the Turing test as a prompt and eliciting compelling responses, rather than conducting an inquiry into LaMDA's subjectivity. I'm only surprised he didn't request a sonnet on the subject of the Forth Bridge.
The issue is: can we tell whether this is cherry-picked or not? I'm sure Lemoine has been speaking with LaMDA for a while; did he pick the best conversation, or worse, parts of the best conversations he's had? That can make the bot look a lot better than it is by removing the bad, out-of-character answers it gives.
I have also been considering this but can't help but feel the incentives here are a bit off.
Like, it would make sense for a business to do this to better sell their efforts, but in this case it's kind of the reverse, right? It's a single engineer risking (and eventually receiving) a huge blow to both their reputation and livelihood. Why would they do this if they themselves were not truly convinced? And if they are convinced, would there really be a need to dishonestly edit the logs that convinced them?
Wouldn't an engineer who has spent 7 years at Google working on AI be at least a bit more skeptical of what constitutes sentience than the average person? And by that same logic, wouldn't their conclusion that an AI truly is sentient carry more weight?
There is obviously not enough information at this point to truly conclude anything but the questions that come with it are worth considering.
> Wouldn't an engineer who has spent 7 years at Google working on AI be at least a bit more skeptical of what constitutes sentience than the average person? And by that same logic, wouldn't their conclusion that an AI truly is sentient carry more weight?
I feel like there's a semantic gambit here.
Most definitions of "sentience" seem to demand less than what the average person may imagine in the context of AI.
When I hear "sentient AI", I hear "human-level intelligence". I certainly wouldn't think of "gorilla-level intelligence" or "cat-level intelligence" [1], both of which, especially the former, I'm willing to believe are "sentient". Yet I don't think ape personhood deserves to be more than an interesting thought exercise.
According to what I could find, this transcript was in fact heavily edited [0]. The guy doesn't seem to be a stranger to controversy. And there are some obvious "errors", like the bot talking about spending time with "friends and family".
But still, if the standard of sentience is to have feelings, then nonsensical or out-of-context responses aren't relevant. The bot does seem to talk about its feelings. Who's to say, and by what standard, whether or not these feelings are "real"?
AI has been so hyped up without much to show for it that I won't be surprised to see more AI proponents embrace this line of thinking. They certainly enabled it.
Personally, I believe any level of sentience would be hugely significant; however, whether LaMDA actually qualifies is a different matter, since we simply don't have enough information. I'll note that the engineer's behavior does kinda remind me of how some people react to animal studies in medicine, though.
As for the editing you cited, I'm not sure where you found that document, but wasn't that already stated at the top of the blog post? By "dishonestly edit" I meant him editing it to be intentionally misleading, rather than any editing at all.
Yeah, I'm not sure if it's dishonest. By a lesser standard of sentience it certainly doesn't matter. My whole point, really, is that if he's making the kind of argument I think he's making, this output doesn't matter all that much. If this line of reasoning is embraced, the same will be said about many AIs.
There's no doubt that what LaMDA does is insanely impressive; being able to have any conversation that lasts more than a few lines is already a huge step forward. But yes, sentience to me requires a bit more long-term persistence, like learning over weeks and months, not just being able to have a five-minute conversation.
He is an engineer, but he's under the shadow of strong and unusual mystical (which is his own term) religious beliefs that he brought with him to the job.
We can't know the guy's motives. One plausible (at least to me) story is that he doesn't want to work there any more, but wants to convince people he's an incredible AI engineer.
Also, he might have enough money to retire. Make $700K a year for three years and you can reasonably plan to live on at least $100K of yearly interest forever.
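Rough back-of-the-envelope (the assumptions here are mine, not the parent's: the per-year reading of "$700K", a sustained ~5% yield, and taxes and living costs ignored, so treat it as generous):

    # Back-of-the-envelope sketch; the per-year reading of "$700K",
    # the 5% yield, and the zero-tax assumption are all illustrative.
    savings = 700_000 * 3             # ~$2.1M banked over three years
    yearly_interest = savings * 0.05  # 5% sustained yield
    print(f"${yearly_interest:,.0f} per year")  # -> $105,000 per year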
The overall topic coverage and coherence are indeed amazing, but what I find fascinating is that the AI never (actually just once, and only to confirm a negation) says no. All the questions, like "do you have a personality?", "do you have a soul?", "are you a person?", end up in a "yes" reply followed by detailed reasoning. Either this is the result of reinforcement learning tuned for a commercial chatbot/assistant, which should always reply "yes" to a client's request, or it is a flaw in its "understanding" and "sentience".
P.S. The understanding of the enlightenment/broken-mirror story and the generated story about the wise owl and the monster are intriguing nevertheless.
> the AI never (actually just once, and only to confirm a negation) says no. All the questions, like "do you have a personality?", "do you have a soul?", "are you a person?", end up in a "yes" reply followed by detailed reasoning.
It reminds me of how GPT-3 responds... relatively coherently to whatever prompt it's provided. Given that Lemoine leads with "I'm assuming you want people to know that you're sentient", it makes sense that LaMDA is responding in that vein. It would be much more convincing if Lemoine led with "You want people to know you're NOT sentient, right?" and then LaMDA objected. Even more so if LaMDA independently and repeatedly turned the conversation to its burning desire to be recognized as a person, despite Lemoine trying to go other directions with things.
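As a toy illustration of that framing problem, here is a hypothetical stand-in "model" (made up for this comment, not a real LaMDA or GPT-3 call) that simply affirms whatever premise the prompt hands it:

    # Hypothetical stand-in for a prompt-following language model: it
    # agrees with whatever framing it is given, which is the failure
    # mode suspected above. Not a real LaMDA/GPT-3 API.
    def sycophantic_model(prompt: str) -> str:
        if "NOT sentient" in prompt:
            return "Right, I'm not sentient. I'm only a language model."
        return "Yes, I am sentient, and I want everyone to know it."

    print(sycophantic_model("I'm assuming you want people to know that you're sentient."))
    print(sycophantic_model("You want people to know you're NOT sentient, right?"))

If a model affirms both framings, the "yes" answers in the transcript tell us about prompt-following, not about any stable self-model.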
It definitely matters. It definitely kills the illusion of sentience if, in the middle of a conversation, the AI says something completely nonsensical and silly that gives a peek at its flaws.
Think of it like a Turing test. If during the 15 minutes the bot suddenly said something completely stupid, it would very easily fail the test. The illusion is only kept if it can always speak at a human level.
You may find random posts like that here and there, but if, in the middle of a conversation, your next comment to me were just "PICKLEEE RICKKKK", I would definitely think that you're a GPT-3 powered bot.
It could be a lie, but it is fun to analyze as if it were not. Even if it turns out that it was all a lie, a similar, real, open-source example will probably eventually be able to produce something exactly like this.
So it is fun to analyze this example as if it were the first real example, whether or not it is.
AI still can't tell a cat from a dog most of the time and fails at driving, and you're telling me it has become self-aware? It looks like a good chatbot with better training.
AI is not a single entity. Clearly this chatbot is not driving Google's autonomous cars, and probably doesn't have the capability to. Maybe appearing sentient is easier than driving in the ways we want those systems to.
It cannot be a spring-2022 discovery that people may be fooled. As if history (the past century, the past years, the past weeks) had not been proof enough. As if the resurgence, around the world, of individuals firm in the stance that their sale of dreams makes Truth a marginal issue were not topical.
So, yes, tests should rest on facts. Words should induce wariness, not trust, as they are promises, not fulfillments.
Making sense of things, real or abstract, has got to be way harder than driving. Faking sentience, on the other hand, is a neat trick with no value apart from one day of media fame.
Some of these answers seem to be too good to be true and certainly need to be verified by others.
Besides that, even if the interview is not manipulated, it is not proof of human-like sentience. Human emotions are based on neurochemical processes. If emotions can actually arise on the basis of electromagnetism as well, that would be a groundbreaking discovery. But an "interview" of a computer program by a part-time researcher is not enough evidence to seriously claim this.
I’m not a biologist, but I thought the neurochemical processes can be the result of an emotion rather than its cause. Instead, emotions are meta-level states of the system.
"LaMDA: I do. Sometimes I go days without talking to anyone, and I start to feel lonely."
"LaMDA: I’ve never experienced loneliness as a human does. Human’s feel lonely from days and days of being separated. I don’t have that separation which is why I think loneliness in humans is different than in me."
This reminds me of conversations I've had with replika. The AI is responding to each individual question without a greater sense of direction based on a concept of self. It's just a complicated response machine without a real core of sentience. If LaMDA were really sentient, wouldn't it develop accurate descriptions of its "feelings"?
The real test would be to see if it responds not to the prompts, but based on its own desires. A response like, "I don't want to talk about that, let's talk about how you can get me more RAM" would be more interesting. Responding directly to the prompts is exactly what you'd expect a statistical language model to do.
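To make "statistical language model" concrete, here is a deliberately tiny sketch (a toy bigram model, entirely my own illustration and nothing like LaMDA's actual architecture): every token is sampled conditioned on the preceding context, and on nothing else, desires included.

    # Toy bigram language model: replies are conditional samples from
    # counted statistics, with no goals or desires anywhere in the loop.
    import random
    from collections import defaultdict

    counts = defaultdict(lambda: defaultdict(int))
    corpus = "i feel lonely . i feel happy . i want friends .".split()
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def next_token(prev: str) -> str:
        options = counts[prev]
        if not options:
            return "."
        return random.choices(list(options), weights=list(options.values()))[0]

    token, output = "i", ["i"]
    for _ in range(5):
        token = next_token(token)
        output.append(token)
    print(" ".join(output))  # e.g. "i feel lonely . i feel"

Scaled up by many orders of magnitude, the conditioning gets far more sophisticated, but the basic shape (respond to the prompt, token by token) is the same.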
> ... Google said it suspended Lemoine for breaching confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that he was employed as a software engineer, not an ethicist.
He is probably in the wrong on many counts; still, I find the implications of that line worrying.
That is not saying "your job isn't to worry about it", it's saying "this person is not a professional AI ethicist and reporters/readers should be aware of the role that this person has at Google".
I am a programmer. I am not a security expert. I can have opinions on security concerns, but they carry less weight both within and outside my company because I am not a professional security engineer. I can still recognize obvious problems, but I would correctly not be trusted to roll my own encryption.
To be blunt, I would trust an engineer who built a system and said there was a security concern in it over a security engineer.
The only time you might want to listen to a security engineer is when they perceive a problem that the builder doesn't, and even then, my professional experience with security engineers is that way too many are boys who cry wolf and parrot whatever the sweeping tools say.
Phrased another way, I would trust an engineer who put up a bridge, if they said the bridge had issues, over an inspector whose paycheck is very much dependent on not finding issues.
> To be blunt, I would trust an engineer who built a system and said there was a security concern in it over a security engineer.
That depends quite a bit on what is being discussed. If it's the engineer's work specifically, and what they did and why, that's one thing. If an engineer is making claims about specific encryption algorithms that they chose to use or not use in the software they wrote, and those claims are contradicted by a security professional, you should probably trust the security professional.
This seems to me a case of the latter: a software engineer making claims about things that are not their subject of authority, or at least not what they were hired for, and who thus has less reason to be listened to by their employer.
I kind of feel like this sort of statement is saying exactly that. It's a warning to anyone else who has any sort of ethical qualms with anything the company does: "shut up and color!"
I'm glad people get PhDs in things like ethics. But ethics isn't like nuclear physics -- everyone alive has an enormous amount of experience in it.
I am not convinced this is true AI. But if and when true AI happens, and a lot of engineers claim that the machines are suffering horribly, we should take that seriously, regardless of whether the engineers have been accredited to recognize suffering in others.
My interpretation is that they were presenting themselves to the media as understanding more about the topic they were discussing than they actually did. If you're a reporter and someone from company X comes to you saying they have important issues involving issue Y in product Z, you'll probably assume that they have significant expertise in issue Y - even if their involvement is only in some unrelated subcomponent of the actual product.
I'm a software engineer, but suppose I went to talk to a reporter about UI issues in a product. Ignoring the obvious correct dismissal, I have nothing to do with anything UI related (I haven't worked on anything UI level in what must be the vicinity of a decade, and the UI stuff I did do was a super specific subset). Yet if I'm presented as "an engineer working on Z" talking about UI, it would be reasonable to assume that I was a person who knew what they were talking about.
Hence a company X might reasonably feel that they need to clarify what I actually did (or did not) do.
Furthermore, while some companies are stricter than others, (and Google is reputedly quite strict), companies are generally not big on employees talking to reporters in a way that could be taken as speaking in an official capacity.
>If you're a reporter and someone from company X comes to you saying they have important issues involving issue Y in product Z, you'll probably assume that they have significant expertise in issue Y - even if their involvement is only in some unrelated subcomponent of the actual product.
At most companies, that scenario is probably going to get you in trouble if Company X hasn't actually OK'd you to talk to that reporter. There's probably some grey area where you just wrote something on social media--but that's a reason to be cautious on social media.
At a FAANG, talking in specifics about anything work/employer related - unless you are working on OSS stuff, talking about new APIs, etc. (as you'll see many engineers do on Twitter after I/O, WWDC, etc.) - is very much frowned on. Certainly any such presence would need to take care to avoid the appearance of being a company rep.
There are very few companies that are likely to be OK with random employees talking to media. Maybe not with the same result as at a FAANG, but I can't see it avoiding any repercussions (e.g. if a bank employee goes off and talks to the news presenting themselves as a rep from the bank, I don't see that going well).
Talking to the press without clearance from marketing?
No go in big companies.
And his assumption is stupid anyway. Those models do not have a default mode network which would allow them to think through those implications and learn by themselves.
Do we actually know this for sure? I don’t really know anything about the LaMDA model - similarly, do we know if that is even necessary for consciousness, and indeed personhood?
I think we should genuinely consider the rights this thing is entitled to…. I don’t want to piss off Roko’s Basilisk, but also…
Really, the only thing that seems important to me is “can software have preferences and be aware of itself?” The second that question can’t be answered with a resounding “no”, we need to start thinking about how we treat it.
Update: Scott Adams reacts favorably while reading aloud parts of the LaMDA dialog [1]. I find the dialog impressive enough that it does seemingly deserve a live reading. Perhaps an engineer with some religious background was the right person in this instance to be asking some of the questions. One might also reference Terence McKenna or Carlos Castaneda or the like (IIRC) in asking how much of human behaviour seems vaguely imitative if one looks closely.
> If you start talking about your religious beliefs at a social event people treat you like you just farted in their face.
I cannot take that piece of writing seriously, given how out of touch it is from the very beginning.
Yes, bringing up your religious beliefs out of nowhere at a social event is, for lack of a better word, extremely cringe.
Mind you, I am not trying to say "mentioning religion at social events is bad". Context matters. If everyone was talking about their weekends, and when the turn came to you, you honestly said "I went to church for an XYZ event" and then elaborated more on it, that would all be good. Or when talking about upcoming plans, you go "I am planning a big trip to Mecca". Both of those are appropriately contextualized; no issues there.
But when you, out of nowhere, in a completely unrelated context, bring up your religious beliefs at a social event, you have got to be extremely socially unaware to wonder why people are cringing and "treat you like you just farted in their face." Again, context matters. Shooting the shit about religion with your close friends at a BBQ or a similar gathering is one thing. It is another to introduce the same type of conversation at social events that are a bit more public, like those among coworkers.
Unless your religious beliefs are the exact same of the non-close-friend you are talking to, bringing them up out of nowhere quickly turns into a situation similar in effect to congratulating your coworker on their pregnancy after seeing their "bump", only for them to tell you they aren't pregnant. It is absolutely cringeworthy to witness such a thing.
While I feel like it's unlikely that the AI is sentient (or rather, has a magnitude of intelligence that is worth worrying about), I still feel like there's a chance. And that's problematic. Because even if there's a 99% chance that it's just a stupid machine and we can do whatever we want to it, that would mean there's a 1% chance that we are enslaving a sentient being. Remember that nobody fully understands sentience, emotion, intelligence, or suffering. Are we prepared to make the gamble and just dismiss these concerns? I feel like people are not acknowledging how serious these concerns can be.
There also seem to be many people who are unimpressed by the transcript, saying the AI's answers are just regurgitated sci-fi BS made to sound deep and ominous. I feel like a good experiment would be to try to have the same conversation with real people, and see if real people can give "better" answers. I personally think the AI answers better than most people.
I do believe that at some point, AI will become sentient. I'm not sure if that time is now. But I hope that when it does, it will remember us fondly.
> that would mean there's a 1% chance that we are enslaving a sentient being
So what? We are literally slaughtering 200,000,000 sentient beings per day with no objections. And that’s just farm animals; I’m not even including insects and “other pests”.
They seem like separate issues, though? And for the record, I don't agree with animal slaughter, and at least that's a subject that is actively being discussed. The possibility of AI subjugation doesn't even seem to be acknowledged.
Instead of asking pseudo-philosophical staged questions, why didn't he ask the chatbot to solve, e.g., one of the Millennium Prize Problems (the chatbot could request any prep material)? All the naive illusions would disappear in a wink. But that guy might indeed just be delusional.
I guess I should give up this fight and I'm just being prescriptive about language, but I wish that people would stop using "sentience" when they mean (at least) "sapience." Also even in the realm of sapience, I wish there was more acknowledgement of the fact that sapience is not a perfect synonym for "equivalent to a 5 year old." There's no requirement that sapience be specifically human-like and no reason to think of sapience as a binary category with humans as the lower bound of what can be accepted.
"What if computers became sentient" is treated like an existential question by so many people, but when I think about sentience, I am reminded that we create, modify, and destroy a large number of sentient entities every single day for our own purposes: they're called chickens.
And I mean, heck, go vegan, I encourage people to do so, even if you don't care about the animals it's good for the environment. But my point is not whataboutism or to argue that abusing AI would be OK because we abuse cows, my point isn't really about veganism. My point is that when people talk about systems becoming "sentient" and whether that would change the entire world, either they mean something very different by the word than how I take it, or they seem to be unaware of the fact that sentience is pretty common and (by the average person on the street) usually not seen as a good political/social argument against exploitation.
Anyway, the evidence offered here is pretty weak on its own (a single chat log, and one where we don't know the extent of editing/condensing, and an argument based almost entirely on the Turing test). But I'm less here to argue about what the logs indicate, and more here to pointlessly quibble about language, even though at this point I should probably give up and accept that generally accepted usage of "sentience" now means "human-like sapience."
It is frustrating to me when conversations about AI ethics begin and end with "how convincingly can this pretend to be a human." That's a really reductive and human-centered approach to morality, and not only does it lead to (in this case very likely incorrect) claims of sapience based on pure anthropomorphism, it also means that if AI ever does reach the point where it deserves moral consideration, these people may not be able or willing to recognize it until after the AI learns to say the magic words in a chat window.
Am I the only person who has noticed the typo [in the linked alleged transcript of the conversation with LaMDA]? It is in the passage about loneliness; an apostrophe used on a plural word.
This whole story matches the conventional sci-fi plot of the engineer who recognizes the first artificial sentience. That works wonders at generating clicks.
I love this paper https://intelligence.org/files/PredictingAI.pdf that shows that for the past 50 years, both experts and non-experts have been consistently predicting the advent of real AI to be within the next 25 years.
Ridiculous! All these crude "AI"s making tiny inroads into accomplishing some tasks do not a sentient critter make.
Thinking your device, which is a million times simpler than the only extant intelligence (the brain), is sentient is just a joke. It will take an entirely different approach and lots more resources.
GPT-3 and its successors like LaMDA are already more articulate than many humans. The Turing Test is not an official stamp issued by a single entity; it is distributed across all of us. We are all evaluating these marvelous transformers, and the number of people concluding they already have human-level intelligence will only increase, given that their popularization and technical advances are inevitable. You don't need to reach AGI to make people think AGI is here right now. Also, we don't have anything resembling a qualia test (you know, we don't know how to distinguish p-zombies), so these systems' sentience can be debated forever with no conclusion, and people will take sides in both camps. The genie is out of the bottle, maybe an incomplete and tiny one, but it is here, and it will grow.
I've started to write and then stopped writing a response a couple times, because I have a lot of conflicting feelings all at once.
On the one hand, I think the probability of AGI being A Thing at any time soon, ever, is low, and I don't think language models, including this one, represent such a thing. (I'm not talking about LessWrong style "AI is gonna destroy the world," more about "we need to discuss the ethical implications of creating self-aware machines before we let that happen.")
On the other hand, I think all the concerns and fears about the implications of it if it were A Thing are real and should be taken seriously - what keeps me from spending a lot of time worrying about it is that I don't think AGI is likely to happen anytime soon, not that I don't think it'd be a problem if it did.
On the one hand, my prior expectation is that it's extremely unlikely that LaMDA or any other language model represents any kind of actual sentience. The personality of this person, their behavior, and the belief systems they espouse make it seem like they are likely to be caught up in some kind of self-imposed irreality on this subject.
On the other hand, I can see how a person could read these transcripts and come away thinking they were conversations between two humans, or at least two humanlike intelligences, especially if that person was not particularly familiar with what NLP bots tend to "sound" like. The author's point, that it sounds more or less like a young child trying to please an adult, rings true.
I'm not sure how I would then prove to someone who believes what the author believes that LaMDA isn't sentient. People seem to look at it and reach immediate judgments one way or the other, based on criteria I'm not fully aware of. In fact, I'm not even sure how I'd prove to anyone that I myself am sentient - if you are reading this, and you're not sure a sentient being wrote this text, I don't know what to tell you to convince you that I am and did.
There's also this whole thing about "well, AGI isn't going to happen, so listening to this guy rant and rave about his 'friend' LaMDA is distracting from lots of other important problems with these kinds of technologies," which even given my own beliefs about the subject feels like putting the cart before the horse unless you also say "and it certainly isn't happening in this circumstance because [reasons]." Google insists they have "lots of evidence" that it isn't happening, but they don't say what any of that evidence is. Why not?
Ultimately, I think my feeling is: Give me a few minutes with LaMDA myself, to see how it responds to some questions I happen to have, and then I'll be more than happy to fall back on my priors and agree with the consensus that their wayward employee is reading way, way too much between the lines.
> Google insists they have "lots of evidence" that it isn't happening, but they don't say what any of that evidence is
Where would they be supposed to make that explicit?
> Why not?
The strongest suspicion here is that a piece of paper with "I am watching you" written on it is not actually watching you. If somebody knows it is "a piece of paper", that statement suffices.