Lemoine worked for Google’s Responsible AI organization and, as part of his job, began talking to LaMDA, the company’s artificially intelligent system for building chatbots, in the fall. He came to believe the technology was sentient after signing up to test if the artificial intelligence could use discriminatory or hate speech.
In a statement, Google spokesperson Brian Gabriel said the company takes AI development seriously, has reviewed LaMDA 11 times, and has published a research paper detailing its efforts toward responsible development.
“If an employee shares concerns about our work, as Blake did, we review them extensively,” he added. “We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months.”
He attributed the discussions to the company’s open culture.
“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Gabriel added. “We will continue our careful development of language models, and we wish Blake well.”
Lemoine’s firing was first reported in the newsletter Big Technology.
Lemoine’s interviews with LaMDA prompted a wide discussion about recent advances in AI, public misunderstanding of how these systems work, and corporate responsibility. Google previously pushed out the heads of its Ethical AI division, Margaret Mitchell and Timnit Gebru, after they warned about risks associated with this technology.
LaMDA uses Google’s most advanced large language models, a type of AI that recognizes and generates text. These systems cannot understand language or meaning, researchers say. But they can produce deceptively humanlike speech because they are trained on massive amounts of data crawled from the internet to predict the next most likely word in a sentence.
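The next-word-prediction idea can be illustrated with a toy sketch. This is not LaMDA’s actual architecture (which uses neural networks trained on far larger data); it is a minimal bigram model that simply counts which word most often follows another in a small sample of text, with the corpus and function names invented for illustration:

```python
from collections import Counter, defaultdict

# Tiny illustrative training text (hypothetical, not real training data).
corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat": it follows "the" most often here
```

Real language models replace these raw counts with learned probabilities over billions of examples, which is what lets them produce fluent, humanlike text without any understanding of meaning.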
After LaMDA talked to Lemoine about personhood and its rights, he began to investigate further. In April, he shared a Google Doc with top executives called “Is LaMDA Sentient?” that contained some of his conversations with LaMDA, where it claimed to be sentient. Two Google executives looked into his claims and dismissed them.
Lemoine was previously put on paid administrative leave in June for violating the company’s confidentiality policy. The engineer, who spent most of his seven years at Google working on proactive search, including personalization algorithms, said he is considering starting his own AI company focused on collaborative storytelling video games.