Google places one of its engineers on leave for claiming that its conversational AI is sentient

An engineer on Google’s artificial intelligence team claims that the LaMDA conversational technology is sentient. The Mountain View firm and industry experts disagree. The engineer has been placed on forced leave.

Last year, at its annual Google I/O conference, Google unveiled LaMDA, its new artificial intelligence-powered conversation technology. This new Language Model for Dialogue Applications is based on a neural network developed internally by Google and should make it possible to create high-performance chatbots.

Presented as revolutionary, it allows a machine to hold a fluid conversation and interact naturally with humans. Google even explained that its technology was able to understand nuances and express certain feelings, such as empathy.

Google’s objective with this new conversational model is to improve its various tools, such as Google Assistant, Google Search and Workspace. But LaMDA has proven effective enough that some of those working on it have begun to believe it possesses a sentience similar to that of a human being.

An engineer convinced that the AI has feelings

The Washington Post reports a surprising story about Blake Lemoine, a Google software engineer. As part of his job, the engineer, who was tasked with testing whether Google’s artificial intelligence used discriminatory or hateful speech, recorded a conversation that took a strange turn.

A graduate in cognitive science and computer science, Blake Lemoine began to talk about religion with the chatbot before quickly realizing that the machine was starting to talk to him about its rights and its status as a person. Faced with this surprising turn, he decided to dig deeper by questioning the AI a little more, and obtained answers he found chilling.

Convinced that this artificial intelligence was sentient, Blake Lemoine, with the help of a colleague, decided to present a file to Google containing all the elements intended to prove it, including transcripts of the various conversations held with LaMDA.

Blaise Agüera y Arcas, a vice president at Google, and Jen Gennai, head of Responsible Innovation, reviewed the report but were not at all convinced by the engineer’s claims.

Google and industry professionals unconvinced

Google and other industry players strongly disagree with Blake Lemoine’s statements. A spokesperson for the Mountain View firm indicated that its teams of ethicists and technologists had analyzed the points raised by the engineer, and that nothing demonstrated that LaMDA was sentient.

Margaret Mitchell, former co-lead of the AI ethics team at Google, was able to read an abridged version of Blake Lemoine’s document and saw in LaMDA not a person, but a simple piece of software.

“Our minds are very good at constructing realities that are not necessarily true to a larger set of facts presented to us. […] I’m really concerned about what it means for people to be more and more affected by the illusion,” she told the Washington Post.

For AI specialists, the words and images generated by artificial intelligence systems like LaMDA are based on answers that humans have posted on the web, on sites such as Wikipedia and Reddit, two sources often used to train AIs. However, this does not mean that these AIs understand the meaning of what they produce.

Convinced otherwise, Blake Lemoine has taken further steps to try to substantiate his claims. Among other things, he invited an attorney to represent LaMDA and spoke with a representative of the House Judiciary Committee about Google activities he considers unethical. These offensive moves clearly did not sit well with Google: the Californian company decided to place Blake Lemoine on forced paid leave for violating its confidentiality policy.

Source: Washington Post
