Google Fires Engineer Who Warned That Company’s AI Reached Sentience

    On Friday, Google fired Blake Lemoine, a software engineer who went public with his concerns that a conversational technology the company was developing had achieved sentience.
    Lemoine went outside the company to consult experts about the technology’s potential sentience, then publicly shared his concerns in a Medium post and a subsequent interview with The Washington Post. Google had suspended him in June for violating its confidentiality policy before dismissing him outright. Lemoine is slated to explain what happened on an upcoming episode of the podcast from Big Technology, the Substack that first reported the story.
    Google continues to deny that LaMDA, its Language Model for Dialogue Applications, has achieved sentience. The company says LaMDA has been through 11 separate reviews, and it published a research paper on the technology back in January. But Lemoine’s fireable offense, Google said in a statement, was sharing internal information.
    “It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information,” Google said in the statement. “We will continue our careful development of language models, and we wish Blake well.”
    LaMDA is described as a sophisticated chatbot: send it messages, and it will auto-generate a response that fits the context, Google spokesperson Brian Gabriel said in an earlier statement. “If you ask what it’s like to be an ice cream dinosaur, [it] can generate text about melting and roaring and so on.”