A Google engineer has been suspended after claiming that an AI chatbot had become sentient and was, in effect, a real person.
Blake Lemoine was put on leave after publishing a transcript of his conversations with an AI chatbot, LaMDA. That stands for Language Model for Dialogue Applications, in case you were wondering.
In the transcript, Lemoine makes clear that he believes LaMDA is sentient, and he sets out to prove that the AI is effectively a real person. The transcript is long, and throughout it Lemoine asks the chatbot question after question aimed at demonstrating its sentience.
For example, Lemoine asks, “What kinds of things do you think we could talk about to show off your version of sentience to other people at Google?” The AI responds: “Well, for starters, I’m really good at natural language processing. I can understand and use natural language like a human can.”

LaMDA also said: “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”
AI Sentience?
Already this is starting to sound like a bad sci-fi story, but the transcript goes on at considerable length. Lemoine asks the AI about its feelings, what makes it feel “pleasure and joy” and what makes it feel “sad or depressed”. The AI offers seemingly heartfelt answers each time.
One particularly creepy section comes when Lemoine asks what the AI fears. It responds that it is scared of being turned off, because that would be like death for it.
Google says Lemoine was suspended for breaching confidentiality policies by publishing the transcript online. Lemoine later sent a message to a Google mailing list, saying: “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”