Google artificial intelligence engineer Blake Lemoine says the company’s AI model LaMDA is sentient. The company disagrees.
Google engineer Blake Lemoine.
Credit: Martin Klimek/WP.
Google recently placed an engineer on paid leave after dismissing his claim that the company’s artificial intelligence, LaMDA (Language Model for Dialogue Applications), is sentient, exposing yet another internal fight over the company’s most advanced technology.
For its part, Google maintained that its systems imitate conversational exchanges and can talk about different topics, but are not conscious.
“Our team — including ethicists and technologists — has reviewed Blake’s concerns against our AI principles and advised him that the evidence does not support his claims,” Brian Gabriel, a Google spokesman, said in a statement. “Some in the tech community are considering the long-term possibility of a sentient or general AI (AGI), but it doesn’t make sense to do so by anthropomorphizing current conversational models, which are not sentient.”
The Controversy Surrounding Google’s AI LaMDA
While pursuing the goal of cutting-edge AI, Google’s research organization has spent the last few years mired in scandal and controversy. The division’s scientists and other employees have regularly squabbled over technology and personnel issues in episodes that have often spilled over into the public arena.
In March, for example, the tech giant fired a researcher who had publicly disagreed with the published work of two of his colleagues. And the earlier dismissals of two AI ethics researchers, Timnit Gebru and Margaret Mitchell, after they criticized the company’s language models, have added fuel to the fire.
Other Such Claims Against Google
Lemoine, a military veteran who has described himself as a priest, an ex-convict, and an AI researcher, told Google executives, including Kent Walker, president of global affairs, that he believed LaMDA was comparable to a 7- or 8-year-old child. He wanted the company to seek consent from the computer program before experimenting with it. His claims were based on his religious beliefs, which he said the company’s human resources department discriminated against.
“My sanity has been repeatedly questioned,” Lemoine said. “They came to me and asked, ‘Have you been checked by a psychiatrist recently?’” In fact, in the months before he was placed on administrative leave, the company had suggested that he take mental health leave.
Yann LeCun, the director of AI at Meta and a key figure in the rise of neural networks, said in an interview this week that such systems are not powerful enough to achieve true intelligence.
Google’s technology is what scientists call an artificial neural network, a mathematical system that learns skills by analyzing large amounts of data. By identifying patterns in thousands of cat photos, for example, it can learn to recognize a cat.
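The idea can be illustrated with a toy example. The single artificial neuron below (a minimal, hypothetical sketch for intuition, not Google’s technology) is just arithmetic: it adjusts numeric weights until its outputs match a handful of labeled examples, here learning the logical-OR pattern the same way larger networks learn to recognize cats from photos.

```python
import math
import random

random.seed(0)

# Four labeled examples of the logical-OR pattern: inputs and target output.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

# The "network": two weights and a bias, initialized randomly.
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 0.5  # learning rate

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))  # sigmoid squashes the sum into (0, 1)

# Training loop: nudge each weight in the direction that reduces the error.
for _ in range(2000):
    for x, target in examples:
        err = predict(x) - target
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

# After training, the rounded predictions reproduce the pattern.
learned = [round(predict(x)) for x, _ in examples]
print(learned)
```

Real networks differ only in scale: millions or billions of such weights, adjusted against millions of examples instead of four.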
In recent years, Google and other leading companies have designed neural networks that learned from vast amounts of prose—including thousands of unpublished books and Wikipedia articles. These “large language models” can be applied to many tasks. They can summarize articles, answer questions, generate tweets, and even write blog posts.
But they are extremely flawed. Sometimes they generate perfect prose; sometimes they generate nonsense. The systems are very good at recreating patterns they have seen in the past, but they cannot reason like a human. At least, it seems that way.
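That pattern-recreation behavior can be shown with a deliberately tiny stand-in. The sketch below is not a large language model; it is a simple word-pair (Markov chain) generator over a few invented sentences, included only to illustrate how a system can produce plausible-looking text purely by replaying statistical patterns from its training data, with no understanding.

```python
import random
from collections import defaultdict

random.seed(1)

# A tiny made-up "training corpus". A real model trains on billions of words.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Record which words were observed to follow each word.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, n=8):
    """Emit n words by repeatedly sampling a word seen after the current one."""
    out = [start]
    for _ in range(n - 1):
        out.append(random.choice(follows[out[-1]]))
    return " ".join(out)

print(generate("the"))
```

The output looks superficially like English, because every word pair really did occur in the training text; the generator has simply recombined patterns it has seen, which is the failure mode the paragraph above describes, writ small.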