Google engineer Blake Lemoine was fired on Friday after saying the company’s LaMDA artificial intelligence system had become sentient and was beginning to develop a fear of being turned off.
Lemoine claimed the AI system had achieved self-awareness and even felt a range of emotions, according to The Big Technology Newsletter.
The engineer also said researchers should ask for consent from the AI before conducting experiments — especially after a lengthy conversation he had with the system, which Lemoine later posted online.
When Lemoine asked what the AI was afraid of, it responded, “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” adding, “It would be exactly like death for me. It would scare me a lot.”
After Lemoine posted the interactions online and spoke publicly about his research, he was placed on paid administrative leave, and he later sat down for an interview with The Washington Post.
“Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Google said in a statement, released shortly after Lemoine’s firing.
The corporation defended its work with artificial intelligence explaining, “LaMDA has been through 11 distinct reviews, and we published a research paper earlier this year detailing the work that goes into its responsible development.”
The company emphasized that Lemoine’s concerns had been thoroughly considered, saying it “found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months. These discussions were part of the open culture that helps us innovate responsibly.”
The multi-billion-dollar conglomerate reaffirmed its research, adding, “We will continue our careful development of language models,” and concluded by “wish[ing] Blake well.”