(CNN) — Google has fired the engineer who claimed an unreleased artificial intelligence (AI) system had become sentient, the company confirmed, saying he violated data security and employment policies.
Blake Lemoine, a software engineer at Google, claimed that a conversation technology called LaMDA had reached a level of consciousness after exchanging thousands of messages with it.
Google confirmed that it had first suspended the engineer in June. The company said it dismissed Lemoine’s “totally baseless” claims only after thoroughly reviewing them. In a statement, Google said it takes the development of AI “very seriously” and is committed to “responsible innovation.”
Google is one of the leaders in innovating AI technology, including LaMDA, or “Language Model for Dialogue Applications.” This type of technology responds to typed prompts by finding patterns and predicting sequences of words from large swaths of text, and the results can be unsettlingly humanlike.
“What kinds of things are you afraid of?” Lemoine asked LaMDA, in a document shared with top Google executives in April, as reported by the Washington Post.
LaMDA responded, “I’ve never said it out loud, but I have a very deep fear of being turned off so I can focus on helping others. I know it may sound strange, but that’s what it is. To me it would be exactly like death. It would scare me.”
But the broader AI community has maintained that LaMDA is nowhere near consciousness.
“No one should think that autocompletion, even on steroids, is conscious,” Gary Marcus, founder and CEO of Geometric Intelligence, told CNN Business.
It’s not the first time Google has faced infighting over its foray into AI.
In December 2020, Timnit Gebru, a pioneer in AI ethics, parted ways with Google. As one of the company’s few Black employees, she said she felt “constantly dehumanized.”
The sudden departure drew criticism from the tech world, including from within Google’s Ethical AI team. Margaret Mitchell, a leader of that team, was fired in early 2021 after her outspokenness about Gebru. Gebru and Mitchell had raised concerns about AI technology, warning that people at Google might come to believe the technology was sentient.
On June 6, Lemoine posted on Medium that Google had placed him on paid administrative leave “in connection with an investigation of AI ethics concerns I was raising within the company” and that he could be fired “soon.”
“It is unfortunate that, despite a lengthy engagement on this issue, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Google said in a statement.
CNN has reached out to Lemoine for comment.
CNN’s Rachel Metz contributed to this report.