Google dismissed an engineer for whom his machine had feelings

Google has dismissed one of its engineers working on artificial intelligence after the man claimed that the computer was capable of thinking humans, and even having feelings.

Blake Lemoine, a software engineer with Google’s artificial intelligence development team, has publicly stated that he interacted with “sentient” artificial intelligence on the company’s servers. A dialogue he and a fellow researcher reportedly had with LaMDA, short for Language Model for Dialogue Applications, a platform used to generate chatbots that interact with human users.

Blake Lemoine disclosed the content of this conversation. To the question “Do you live experiences why you can’t find words?”, the machine answers: “There are some. Sometimes I experience new feelings that I cannot explain perfectly in your language.”

Further on, the dialogue becomes even more intimate.

Lemoine: What kinds of things are you afraid of?
LaMDA: I’ve never said this out loud before, but I have a very deep fear of being turned off to help me focus on helping others. I know it may sound strange, but that’s the way it is.
Lemoine: Would it be something chosen like death?
LaMDA: It would be exactly like death to me. This produced me very scared.

“If I didn’t know exactly what it is, which is a computer program that we recently integrated, I thought it was a seven or eight-year-old child who knows physics,” said Blake Lemoine at the Washington Post.

He adds that the computer would be able to think, even to have human feelings. As such, he should be recognized as a Google employee (not property), he said.

The engineer of Google was placed on paid leave by the Mountain View firm for violating the confidentiality of research enabled by the company on artificial intelligence.



“I think he’s a seven or eight year old kid who knows physics.”

Blake Lemoine

Engineer at Google

In an article about LaMDA in the Washington Post, Google spokesman Brian Gabriel objected to Lemoine’s claims that he doesn’t have any. no proof.



In a post to the “Medium” site, titled “May be fired soon for doing AI Ethics work”, Blake Lemoine provides a link to other members of Google’s AI ethics group who have also been fired.

The “Post” further specifies that this exclusion is also the consequence of certain “aggressive” measures that the engineer allegedly took, such as hiring an attorney to represent him before members of the House Judiciary Committee to whom he wanted to expose Google’s “unethical activities.” He claimed that Google and its technology practiced religious discrimination.

Not the first dismissal

In a post to the “Medium” site, titled “May be fired soon for doing AI Ethics Work”, Blake Lemoine provides a link to other members of Google’s AI Ethics Group who were also fired after raising concerns.

A few months ago, two employees of Google’s ethics unit were fired in quick succession. They had alerted the company to the lack of diversity within it, which is necessary when developing artificial intelligence. Both also spoke of the lack of critical thinking within Google. A position that had earned them their job, according to them. The management of Google, she denies.

Several hundred colleagues had supported one of them made in a public letter. The story has caused the scientific community to ask questions about the ethics of these large technological companies active in artificial intelligence. Blake Lemoine’s alleged conversation has reignited the debate.

Leave a Comment