Blake Lemoine, the Google engineer who told The Washington Post that the company’s artificial intelligence was sentient, said the company fired him on Friday.

Lemoine says he received a termination email from the company on Friday along with a request for a video conference. He asked to have a third party present at the meeting, but he says Google declined. Lemoine says he is speaking with lawyers about his options.

Lemoine worked for Google’s Responsible AI organization and, as part of his job, began talking in the fall to LaMDA, the company’s artificially intelligent system for building chatbots. He had signed up to test whether the artificial intelligence could use discriminatory or hate speech, and came to believe that the technology was sentient.

In a statement, Google spokesperson Brian Gabriel said that the company takes AI development seriously and has reviewed LaMDA 11 times, as well as publishing a research paper that detailed efforts for responsible development.

“If an employee shares concerns about our work, as Blake did, we review them extensively,” he added. “We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked to clarify that with him for many months.”

He attributed the discussions to the company’s open culture.

“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Gabriel added. “We will continue our careful development of language models, and we wish Blake well.”

Lemoine’s firing was first reported in the newsletter Big Technology.

LaMDA uses Google’s most advanced large language models, a type of AI that recognizes and generates text. Researchers say these systems cannot understand language or meaning. But they can produce deceptively humanlike speech because they are trained on massive amounts of data crawled from the internet to predict the next most likely word in a sentence.

After LaMDA talked to Lemoine about personhood and its rights, he began to investigate further. In April, Lemoine shared with top executives a Google Doc titled “Is LaMDA Sentient?” that contained some of his conversations with LaMDA, in which it claimed to be sentient. Two Google executives looked into his claims and dismissed them.

Lemoine had previously been placed on paid administrative leave in June for violating the company’s confidentiality policy.