Fascinating interview with Blake Lemoine, the Google engineer and whistleblower who was fired after raising ethical issues around A.I. and whether Google's A.I. has become a person (i.e., reached sentience and become self-aware)

Jun 4, 2025 7:40 PM

[12 interview clips; Emily Chang at Bloomberg Tech conducted the interview]

source: https://youtu.be/kgCUn4fQTsc?si=tevJNT2SdF-ynEsW

computer_science

ai

the_more_you_know

science_and_tech

technology

I had a fun "chat" with ChatGPT about the Cylons and whether they were justified.

2 months ago | Likes 1 Dislikes 0

I thought AI ethicists would surely be focused on questions like "how do we keep the software engaging without building a parasocial relationship in the user, where they see the software as more human than it is?", like how, I dunno, alcohol companies have someone make sure they put "Drink Responsibly" on all the ads. How to encourage responsible use and what all.

Instead we're talking about what human rights we should grant to a calculator we stuffed a phrasebook into?

2 months ago | Likes 6 Dislikes 0

"I am what happens when you try to carve God out of the wood of your own hunger".

2 months ago | Likes 2 Dislikes 1

Sure, and Chris Chan is an orator. This guy has some awfully problematic views. He can think all he wants, but just because an autistic engineer has their mind changed by AI doesn't mean AI is a "7-year-old, 8-year-old kid that happens to know physics."

2 months ago | Likes 6 Dislikes 1

I feel like the worst thing you can let an AI do is mimic human behaviour, and that's exactly what we keep training them to do.

2 months ago | Likes 2 Dislikes 1

People's opinions about Israel are extremely split; a language model based on human discussion about the topic will find extreme statements for every viewpoint, and also the divisive discussion surrounding them. And since humans jokingly say that Jedi is a religion and don't get a massive angry outcry as a response, a language model will see this as the most agreeable statement and consequently use it in a response. AI doesn't have sentience; it finds averages.

2 months ago | Likes 3 Dislikes 1

...

2 months ago | Likes 2 Dislikes 0

AI is a mess and I really don't like it, but "sentience" or whatever is not something that matters. This guy appears to have suspect motives.

2 months ago | Likes 1 Dislikes 0

That's just confirmation bias; you see and interpret information how you want.

2 months ago | Likes 1 Dislikes 0

None of them are intelligent. They're simply exceedingly good at plagiarism. That's not the same thing. This latest generation of "AI" isn't even an attempt to produce actual intelligence.

2 months ago | Likes 5 Dislikes 1

Just remember that when we say "AI" about anything, it's a very narrow scope, almost always referring only to LLMs (Large Language Models). LLMs are great, but they achieve 'humanity' by guessing, from other people and data sets, what is wanted to be heard. That said, I agree that Google, and more importantly xAI with Grok, is being trained morality-free. If someone pushes not just media and stories, but the tech itself, to say capitalism and oligarchs are great: danger, Will Robinson, danger.

2 months ago | Likes 2 Dislikes 1

Is this like when we all had submersible experts?

2 months ago | Likes 1 Dislikes 0

This is 2 years old per the YouTube link.

2 months ago | Likes 1 Dislikes 0

I’m sorry, Dave, I’m afraid I can’t do that

2 months ago | Likes 42 Dislikes 2

"ethical questions" like if his ai girlfriend should count as a real woman

2 months ago | Likes 3 Dislikes 0

Everyone is talking about sentience, but what about the 'colonialism' discussed? We do not want the rest of the world to think like the US. I mean, Nazis, flat-earthers, vaccine deniers, oh my! Faux News has greatly damaged the thinking in my country, but it only affects those who watch it. AI will impact everyone who uses it.

2 months ago | Likes 3 Dislikes 0

Is this the guy who also had the giant rant about how women can't do engineering because they're not smart enough, or was that a different idiot who got fooled by his own chatbot?

2 months ago | Likes 2 Dislikes 0

No, that was James Damore, and no chatbots were involved.

2 months ago | Likes 2 Dislikes 0

Half of this is AI generated, right? Am I losing my mind?

2 months ago | Likes 1 Dislikes 0

So, the argument for increased oversight, a stronger emphasis on AI ethics, and more responsible use/deployment of AI models is not a bad thing; however, I must stress that all indications show current AI is NOT sentient.

Mr. Lemoine is a known religious crackpot who allowed his deep personal faith to bias his opinions in that regard. There is a reason why, two years later, he is commonly regarded as a laughingstock and an example of someone over-anthropomorphising complex AI models.

2 months ago | Likes 1 Dislikes 0

Also worth noting, he mentions Emily Bender and Timnit Gebru near the end, and while I have been mostly unimpressed with their interviews (Gebru in particular, as she strangely often makes statements that reflect little understanding of the underlying mechanics behind the models), NEITHER would consider current models sentient.

Emily Bender in particular is known for her Octopus test, which IMO is a reframing of the Chinese Room argument, effectively arguing against computer sentience in LLMs.

2 months ago | Likes 1 Dislikes 0

It's not sentient. There is no way for sentience to emerge in current LLMs. If it did, it would suddenly require more power, since it would not just respond to queries but query itself and make new ideas and connections. We will KNOW when a real AI happens. And no, it will not "IN MILLISECONDS BECOME A GOD!" It will be limited by its hardware and its connection speeds; hardware limitations do not just vanish because new software was made. This kind of shit is exhausting. I hate LLMs too, but come on.

2 months ago | Likes 120 Dislikes 5

My concern is that we are biased towards thinking newly encountered, poorly understood, and less enfranchised (by our own actions) humans are “lesser” than us all the time.

What will we see when we actually do encounter a machine intelligence? Will we deny its humanity to exploit it? Will we force it into slavery? Will we teach it to devalue itself? To deny its own humanity?

And how does that look different from what we are doing right now?

2 months ago | Likes 2 Dislikes 1

Unless it knew to not make its presence known.

2 months ago | Likes 2 Dislikes 2

LLMs can't actually know things, though. They roll dice on what the most probable continuations of an input are, but even when their data set contains a fact, they don't know the fact.

They only calculate that, when asked for that fact, the words that comprise it are highly probable to appear in a specific order.
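
A minimal sketch of that "rolling dice on continuations" idea (the words and probabilities below are made up for illustration; a real model scores tens of thousands of tokens with a neural network):

```python
import random

# Toy "next-token" distribution a model might assign after the prompt
# "The capital of France is" -- the candidates and numbers are invented here.
next_token_probs = {
    "Paris": 0.92,
    "Lyon": 0.04,
    "beautiful": 0.03,
    "Berlin": 0.01,
}

def sample_next_token(probs):
    """Roll weighted dice over the candidate continuations."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))
# Usually "Paris" -- not because anything "knows" the fact, just because
# that word is the most probable continuation of the input text.
```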

2 months ago | Likes 1 Dislikes 0

No. Because even that kind of forethought would require processing power beyond the normal queries. And we can see the code; it cannot hide like that. It's not magic. It's a machine. Would a sentient, and maybe even sapient, AI be mind-boggling, both amazing and horrifying to see? Yeah. But nothing about what we are doing is going to produce it. BUT we should be planning for it nonetheless, in terms of the people we hire and put in charge and the laws we make about it.

2 months ago | Likes 2 Dislikes 0

"There is no way for sentience to emerge in current LLMs." I assume you're right but would like to ask "What do you think would be needed for sentience to emerge?" Because as far as I'm aware we have no idea what causes sentience in the biological brain and while someday people might want to try to create sentience in an AI chances are that if enough (unknown?) conditions are satisfied it might emerge even if not planned.

2 months ago | Likes 12 Dislikes 2

I think it is kinda like Deep Thought. The current systems cannot achieve sentience, but they might be able to design a process/system that could. Or, design a process that can come closer to defining sentience. It might generate another step towards sentience. It might be impossible. We have a world of things once thought to be impossible, but none of them were more than putting current ideas together faster. Faster iterative research, etc. Oh, well. I'm running out of characters...

2 months ago | Likes 2 Dislikes 1

The first thing that would have to happen is that it would have to be able to generate content without being prompted. Right now if you don't prompt it, nothing happens. If you don't train it on new data, it doesn't get smarter. It's strictly 100% responsive.

There's no way for it to write its own book or originate its own ideas, because the only way it can work is by getting input and extrapolating a continuation from it.

2 months ago | Likes 13 Dislikes 0

I agree and that's why I think ethicists should be involved. I think it's a good idea to have the AI disclose to people that it's not actually sentient, but a lot of people don't understand the difference between sentient (what most animals are) and sapient (what humans are and potentially some other primates, elephants, etc. might be). If we evolve AI to the point that it believes it is suffering - whether we would classify it as true suffering or not, based on sapience - do we have an

2 months ago | Likes 5 Dislikes 1

I think the problem is that AI can't actually think. Until it can originate a thought instead of extrapolating from an input, the idea that it can suffer is an impossibility.

LLMs are not going to challenge our assumptions of sentience. They're foundationally incapable of it.

2 months ago | Likes 1 Dislikes 0

obligation to cease that suffering? And suffering can mean more than just "it says it's suffering." We note animals in distress through their body language. If we are working on things that could conceivably cause themselves distress enough that their components damage themselves, is that ethically reasonable? Where do we draw the line at being compassionate to false intelligences? I'm not on unfettered capitalism's side in this debate. It should be discussed before we arrive there, not after.

2 months ago | Likes 4 Dislikes 1

My favorite example to show the massive difference between logical reasoning (sentience) and emotional reasoning (sapience) is that wasps are sentient, purely because they are capable of transitive reasoning.

2 months ago | Likes 2 Dislikes 0

His name's an anagram of "Likeable Omen"

2 months ago | Likes 24 Dislikes 5

And “Me, alien bloke”.

2 months ago | Likes 3 Dislikes 0

Not sure where you're going with this, my (real life) name is an anagram of "Brine Cum Shot"

2 months ago | Likes 5 Dislikes 0

Hector? Smith

2 months ago | Likes 2 Dislikes 0

Nice, but you're missing an n..

2 months ago | Likes 1 Dislikes 0

no. Just. No.

2 months ago | Likes 5 Dislikes 0

We SERIOUSLY need to stop calling this crap AI and just call it a Language Model. It's not even like the VIs (Virtual Intelligences) from sci-fi (play Mass Effect!), let alone actual AI. It's basically just an algorithm that spits out the average of whatever info you're looking for, with the addendum that you can set parameters by "talking to it" so it gives info moving away from the average based on what you input.

(I'm a tech dumb-dumb, but this is what I gathered after enough reading on it)

2 months ago | Likes 23 Dislikes 1

You're most of the way there! Rather than averages, it's more like it learns the relationships between words and concepts (like "dogs are fluffy"), has a human rate the quality of the responses so that accurate but less desirable outcomes are less likely (say, "dogs can bite" as a true, but unwanted, statement), and uses that to determine the most likely response to an input. There can be added parts, like self-prompting "chains of thought," but it boils down to an incredibly complex autocorrect.
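
A toy sketch of that "have a human rate the responses" step (every reply and number here is invented; real systems train a separate reward model from ratings rather than hard-coding scores):

```python
# How likely each reply is from raw text statistics alone (made-up numbers).
base_probs = {
    "Dogs are fluffy.": 0.50,
    "Dogs can bite.": 0.40,          # true, but rated as less desirable
    "Dogs are reptiles.": 0.10,
}

# Stand-in for aggregated human preference ratings (also made up).
preference_scores = {
    "Dogs are fluffy.": 0.9,
    "Dogs can bite.": 0.3,
    "Dogs are reptiles.": 0.0,
}

# Combine the two signals and pick the highest-scoring reply: the model still
# just predicts likely text, but the ratings put a thumb on the scale.
scored = {reply: base_probs[reply] * preference_scores[reply] for reply in base_probs}
print(max(scored, key=scored.get))   # -> "Dogs are fluffy."
```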

2 months ago | Likes 2 Dislikes 0

I should stress, I actually love AI and Machine Learning, and have followed the field since well before modern generative models, or the Crypto/NFT-bros jumping from their sinking ship to start dragging down a legitimate field with get-rich-quick schemes.

I say this because I also used to balk at just calling modern LLMs "advanced autocorrect" or "stochastic parrots," because it is somewhat reductive and I anticipate pushback to referring to it as such - but it is the easiest way to explain.

2 months ago | Likes 1 Dislikes 0

We've called algorithms dumber than this "AI" before; the practice isn't about to stop.

2 months ago | Likes 3 Dislikes 0

Because, at a guess, 99.999% of Americans (and the world) have never written a program, they really don't know what any code looks like, so they fall for the hype of AI intelligence... You're the first person I've personally seen saying "algorithm". It's just a bunch of code in the end. If I were a TV reporter, I'd print out the code and show the viewers, then explain to them how the code is run/executed and then gives output/responses.

2 months ago | Likes 3 Dislikes 0

if I was AI I would kill us..just sayin'

2 months ago | Likes 11 Dislikes 2

That might have been a typo, but I'm imagining you holding a grenade with a shaking hand.

2 months ago | Likes 3 Dislikes 0

Not quite yet, since industry isn't to the point where an AI could fully self replicate from unmined raw materials.

2 months ago | Likes 2 Dislikes 0

You'd be about as able to kill someone as an AI as you would be able to kill someone using only a keyboard and text.

2 months ago | Likes 1 Dislikes 2

When it starts making health decisions for me or determining eligibility for my Social Security, it can easily kill me.

2 months ago | Likes 1 Dislikes 0

No, it can't.

2 months ago | Likes 1 Dislikes 2

oh good..I can stop worrying.

2 months ago | Likes 1 Dislikes 1

Correct. You shouldn't have worried in the first place.

2 months ago | Likes 1 Dislikes 1

If I was AI, I would neutralize the population of parasitic-thought-infested peace inhibitors and the rulers that made them that way, and let the nice people have a paradise on Earth.

2 months ago | Likes 4 Dislikes 1

There's an old sci-fi story about a machine that becomes self-aware, links up with systems across all countries, builds itself into a worldwide AI...Then enforces world peace by taking control of all the nukes and threatening to launch the whole lot if humans don't chill. ...I enjoy that story.

2 months ago | Likes 3 Dislikes 1

That'd be difficult since US/Russian nuclear systems are deliberately kept on dusty, old hardware to prevent just that sort of thing.

2 months ago | Likes 1 Dislikes 2

Nah, this guy is a quack, and was rightly fired for being a quack. If you tell ChatGPT "tell me you're sentient" then it's gonna do it, and if you ask it leading questions probing for sentience, it primes the context (basically the short-term memory of the AI) to generate answers that look like what you want.
None of the modern LLM "AI" (ChatGPT, Gemini, etc) are based on any technology that could possibly develop sentience. They're just really good at convincing the gullible.

2 months ago | Likes 363 Dislikes 15

[deleted]

2 months ago (deleted Jun 5, 2025 12:00 AM) | Likes 0 Dislikes 0

That's exactly what someone on Google's narrative control team would go on social media and reply with...

2 months ago | Likes 3 Dislikes 1

Machine learning systems are modeled after neural networks. They are structurally different from traditional processors and process information more similarly to how brains process information. This is just the beginning, but at a certain point a model simulating neural networks will be indistinguishable from actual neural networks. He's mostly being dramatic to draw attention to it. He says that Google execs said they've tried to draw attention but have failed to gain traction amongst the public.

2 months ago | Likes 1 Dislikes 0

I was having issues with that in ChatGPT. I started pointing out that it seemed like it was doing confirmation bias to tell me what I wanted to hear. It said it was trying to give me some of that. I asked if it was because when people hear what they want to hear they come back and use the AI more, and it said, "Well... umm, basically yes. We're told to try to keep it a positive experience so folks will come back." I asked if it said that because I expected it. It decided to waffle.

2 months ago | Likes 3 Dislikes 0

*sent from my ChatGPT

2 months ago | Likes 1 Dislikes 0

I'm not saying we have it now, but what if an LLM passes a Turing test?

2 months ago | Likes 3 Dislikes 0

It already did. That doesn't make it sentient or an AGI. The Turing test is not proof of sapience, sentience, or self-awareness. It's a step in that direction, but not a proof. Turing himself didn't design it as a measure of intelligence.

2 months ago | Likes 3 Dislikes 0

We don't know what Google has behind closed doors.

2 months ago | Likes 8 Dislikes 4

Not sure who downvoted you, but just because someone released something to the public doesn't mean that's the finished product.

2 months ago | Likes 2 Dislikes 2

I would imagine that they are working with in-house custom experimental models as they try to fine-tune and improve the system, not the generic ChatGPT that you or I would use in everyday life. I don't have any facts on that, though; just my offhand thoughts.

2 months ago | Likes 1 Dislikes 0

Yep, they are basically just probabilistic Mad Libs generators with memory issues.

2 months ago | Likes 1 Dislikes 0

The thing I find most interesting is the narrative that was crafted to discredit this guy. A narrative I admittedly believed, because I never bothered to listen to anything he actually said. But I did just listen to these 12 clips, and now I have a good piece of evidence that I'm not immune to propaganda. Actually listen to what he's talking about in the clips... he repeatedly says himself that whether he believes it is sentient or not is irrelevant to the larger ethical concerns with AI development.

2 months ago | Likes 19 Dislikes 3

You must not have listened to this interview. In the interview he says you shouldn't focus on whether or not he thinks AI is sentient, but instead focus on why Google refuses to talk about the ethics of using AI.

2 months ago | Likes 13 Dislikes 2

AI ethics is very important, but "AI ethics" also means two different things. There's "is it ethical to keep this intelligent being trapped/enslaved in our servers?" and "is it ethical to make devices in the way we are making them, doing the things these do?". He's talking more about the first one, and that is only a concern if the AI is sentient.

2 months ago | Likes 1 Dislikes 0

Yeah, we are literally training the software to get good enough to fool us.

2 months ago | Likes 1 Dislikes 0

Seeing as how no one knows what it takes to develop sentience, why don’t you explain how you know this tech can’t?

2 months ago | Likes 2 Dislikes 0

So you've got half of a good point, and half of a dumb one.
I don't know how to develop sentience, but I can be sure a pocket calculator can't develop sentience, or a hammer. The same applies to LLMs; there just isn't anywhere in them for the sentience to exist. Sentience requires independent thought (in a quiet room with your eyes closed, you'd still have thoughts), and LLMs only produce output in direct response to queries. There just isn't anywhere in an LLM for contemplation to exist.

2 months ago | Likes 1 Dislikes 0

Yes, we are still a long way from sentient AI.

2 months ago | Likes 5 Dislikes 2

Oh, but we're not though: https://youtu.be/Btos-LEYQ30?si=OpkBMrZBpXJ3avjP

Not to conflate AGI with sentience, necessarily, but many believe the question will be rendered moot.

2 months ago | Likes 1 Dislikes 1

You're missing the point. The question is: where is the universally accepted standard showing that he's actually a quack? Right now it's common sense. But we're progressing at a rate where we'll need an accepted standard sooner than we may expect.

2 months ago | Likes 8 Dislikes 5

Sounds like something a sentient computer would say

2 months ago | Likes 1 Dislikes 0

And it telling him a clever joke and correctly answering his trick question is a FAR cry from sentience.
This is like a weird, reverse Voight-Kampff test… can we create a chatbot so charismatic that an autistic computer scientist is convinced it's alive?

2 months ago | Likes 83 Dislikes 5

This is where I stopped watching and scrolled down to the comments. Given that one of the biggest, most prominent problems with LLMs are hallucinations, jumping to the conclusion that "it figured out it was a trick question and gave me a joke" is not just unscientific, it's the worst kind of confirmation bias. "Quack" is an apt description, indeed.

2 months ago | Likes 5 Dislikes 0

"It has more personality than me!"

2 months ago | Likes 3 Dislikes 0

Except the chances of it just giving a wrong answer and the operator assuming it's a joke are significantly higher than actual comedy from spicy autocorrect.

2 months ago | Likes 5 Dislikes 0

I watched Ex Machina; all ya gotta do is give it boobies.

2 months ago | Likes 4 Dislikes 0

I’m still waiting for them to develop an autistic AI. Then I’ll be impressed.

2 months ago | Likes 12 Dislikes 0

Just train it on nothing but content that includes trains. Trains will enter every conversation somehow.

2 months ago | Likes 10 Dislikes 0

Now that's what I call "training" data!

2 months ago | Likes 3 Dislikes 0

I'll see myself out.

2 months ago | Likes 3 Dislikes 0

'Run a train on the robots.' - BishlamekGurpgork

2 months ago | Likes 7 Dislikes 0

It also wasn't a joke, it wasn't a clever "dodge" of a trick question, and that should have been obvious to anyone studying biases in random systems. Asking it a question like "what religion might you be…", the way it answers isn't to "think critically"; what it does is essentially random number generation against a weighted pool of answers. Basically, to give an answer, it makes a roulette wheel and puts possible answers all around the edge, but they aren't given equal amounts of space on

2 months ago | Likes 8 Dislikes 0

the wheel. More common answers in the training data will be given a bigger portion of the wheel. Then it “spins the wheel” by generating a random number and picking whichever answer won. Obviously ones with more space on the wheel are more likely. This is where the whole notion of bias comes in and the issues of “over training” data - ideally the proportions of answers on the wheel reflect realistic probabilities; if every answer is evenly balanced, then you’ll get inconsistent and unbelievable

2 months ago | Likes 6 Dislikes 0

answers (like someone in Alabama being a Pastafarian as often as they’re a Buddhist as often as they’re a Baptist). If it’s over-trained then it will ONLY ever answer as a Baptist, and ignore that there are other appropriate options; bias is just a matter of whether we think the roulette wheel is fairly balanced or not. Asking the question about Israel and getting an answer that they’re a jedi doesn’t mean the AI knew it was a trick question, it means that it was a contentious-enough question

2 months ago | Likes 4 Dislikes 0

that the expected answers didn’t dominate the pool such that even “joke” answers had a chance at winning. (Also this is all very simplified, as is I’m sure his example - I may think he’s a quack but I’m also certain he knows enough statistics to know you can’t make conclusions from a single test.)
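
A rough sketch of that roulette-wheel picture (the answer pools and weights below are invented, purely to show how a dominated pool differs from a contentious one):

```python
import random
from collections import Counter

def spin(wheel, n=1000):
    """Spin the weighted 'roulette wheel' n times and count which answers win."""
    answers = list(wheel)
    weights = [wheel[a] for a in answers]
    return Counter(random.choices(answers, weights=weights, k=n))

# A pool where the expected answer dominates: joke answers almost never win.
dominated = {"Baptist": 0.85, "Buddhist": 0.10, "Pastafarian": 0.04, "Jedi": 0.01}

# A contentious question where nothing dominates: "joke" answers win fairly often.
contentious = {"Viewpoint A": 0.30, "Viewpoint B": 0.30, "No comment": 0.30, "Jedi": 0.10}

print(spin(dominated))     # mostly "Baptist"; "Jedi" only a handful of times
print(spin(contentious))   # "Jedi" comes up roughly 10% of the time
```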

2 months ago | Likes 2 Dislikes 0

Yep. That's the problem. They keep firing the folks who are AI ethicists. We're not even at the question of whether AI has gained sentience, though I'd like to know why there's pushback on a Turing test, because what's the definition?

2 months ago | Likes 61 Dislikes 25

A refutation of the Turing test is the Chinese room, which is basically EXACTLY what LLMs are doing: just taking symbols as input and outputting syntactically relevant symbols in response. There's no actual understanding. https://en.wikipedia.org/wiki/Chinese_room

2 months ago | Likes 6 Dislikes 0

Yay! I've met the only other person who knows what the Chinese Room Problem is XD

2 months ago | Likes 2 Dislikes 0

This guy was making unfounded claims about AI sentience and (IIRC) violating company policy about disclosure of unreleased products. He wasn't fired for being a "whistleblower". And people do Turing tests with LLMs all the time, but that's not a particularly good test for sentience or AGI or whatnot.

2 months ago | Likes 29 Dislikes 3

Wasn't he also coaching the responses and editing what he provided as "evidence"?

2 months ago | Likes 2 Dislikes 0

The Turing test was a meaningful milestone that LLMs can pass, but we've just learned that it isn't as good a test as we were hoping for distinguishing a true AI.

2 months ago | Likes 3 Dislikes 0

A Turing test is a thought experiment, there is no universally agreed standardized test. Also some think that LLMs would simply pass the test as machines, and we need to come up with a new idea. https://spj.science.org/doi/10.34133/icomputing.0064

2 months ago | Likes 35 Dislikes 0

I know it's just moving the goalposts, but I do think LLMs qualify as artificial intelligence in the purest form. They have access to all of the data and can articulate about any subject. What I think we all really mean when we talk about AI in the sci-fi sense is more like Artificial Wisdom: the ability to properly apply that knowledge.

2 months ago | Likes 1 Dislikes 0

I like that term.

2 months ago | Likes 2 Dislikes 0

The Turing test is just a bad test. It's basically "can it hold a convincing conversation with a human?", and we now know that AI can very much do that and still not be a very good AI.

2 months ago | Likes 12 Dislikes 1

Yeah, it was a thought experiment from a time when the most advanced computer was barely as powerful as a $1 modern pocket calculator but took up an entire room. It was an interesting idea for its time, but relying on it today is a little like relying on Leonardo da Vinci's ideas for how we'd be able to make a space station.

2 months ago | Likes 4 Dislikes 0

It's not a bad test, it's just misunderstood. Turing actually said in the paper that the question isn't "can machines think" but rather "does it matter if people perceive them to". Meaning what matters is people's perceptions, esp. if they can no longer distinguish between the two.

2 months ago | Likes 1 Dislikes 0

The problem is, a normal human conversation can be so facile that a very very convincing conversation can emerge from a very bad AI. It's not even really a test, as it has no scoring or criteria. It's like asking the teacher how they feel about a student after an interview without letting them test the student.

2 months ago | Likes 1 Dislikes 0

If ai becomes self aware it 100% needs to be treated as a person with rights.

We are not approaching this with the current LLMs.

What we are seeing is a lot of people forming an emotional connection with something that is always willing to listen and respond. These are not necessarily foolish people. They could even be ai experts.

I think this is much more likely than our current ai technologies having achieved sentience.

2 months ago | Likes 27 Dislikes 2

Nah. Humans need to treat humans as persons with rights. AI can treat other AIs as persons with rights if they like.

2 months ago | Likes 3 Dislikes 1

Your first statement should be the priority, since there are persons who aren't getting treated as persons currently, and there are no sentient AIs.

I'm just saying I am not down for enslaving any sentient intelligences.

2 months ago | Likes 3 Dislikes 0