Google engineer suspended after claiming AI was sentient

Google engineer Blake Lemoine has been suspended by the tech giant after claiming one of its AIs had become sentient.

LaMDA, short for Language Model for Dialogue Applications, is an AI that Google uses to create its chatbots. The program learns human dialogue and language by ingesting billions of samples from the Internet.

Lemoine began talking to the AI in the fall as part of his work testing whether it used hate speech or discriminatory language. According to Lemoine, the AI not only talked about its own rights and personhood, but also managed to change his mind regarding Asimov’s third law of robotics. The Third Law posits that a robot or artificial intelligence must protect its own existence, unless doing so means harming a human being or disobeying their commands. The exchange happened during a conversation Lemoine was having with it about religion.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” Lemoine told The Washington Post in an interview.

Google Vice President Blaise Aguera y Arcas, meanwhile, dismissed Lemoine’s claims, prompting Lemoine, who has since been placed on paid leave, to go public with his story.

Skeptics say the conversation was a natural product of the AI’s neural networks, which rely on pattern recognition to mimic human-like speech but involve no active mind or intention. This would mean that even though LaMDA and other bots can repeat things that are already widespread on other parts of the internet, they don’t actually understand the meaning behind them or have any sentience.