Hiya!
The first time I heard this story, I expected it to die off quickly. Even with once-unimaginable events becoming disturbingly commonplace these days, I didn’t think we’d reached the point of sentient robots. Sure, predictions of our future paint images of artificial intelligence ruling the world one day. And yeah, we’ve imagined countless scenarios for how that might happen and how the machines could turn on us.
But we also thought flying cars would be the norm by now, and they aren’t. And given how little we understand about how our own consciousness works, creating sentient artificial intelligence seems like a far harder problem than flying cars. Yet one Google software engineer claims the company built a chatbot that is not just sentient but, he argues, deserving of personhood.
Let’s Start With An Overview
Blake Lemoine, a 41-year-old software engineer, worked at Google for seven years, mostly on personalization algorithms and artificial intelligence.
During his time there, Lemoine developed a fairness algorithm for removing bias from machine learning systems. When the pandemic started, he transferred to Google’s Responsible AI department.
There, he joined colleagues in developing and testing an artificial intelligence called LaMDA (Language Model for Dialogue Applications), a chatbot that mimics human conversation, built by training on trillions of words from the internet.
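For the technically curious: LaMDA itself is proprietary, so we can’t peek inside it, but here’s a minimal sketch of what any dialogue language model does under the hood. It simply predicts statistically likely next words given the conversation so far. This example uses the open DialoGPT model via the Hugging Face `transformers` library as a stand-in, which is my assumption for illustration, not anything Google uses.

```python
# A minimal sketch of a dialogue language model replying to a message.
# NOTE: This is NOT LaMDA (which is proprietary); "microsoft/DialoGPT-small"
# is an open stand-in chosen purely for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# Encode a user message, appending the model's end-of-turn token.
prompt = "Do you ever feel lonely?" + tokenizer.eos_token
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# The model generates a reply one token at a time, each token chosen from
# the statistics of the text it was trained on -- no inner life required.
reply_ids = model.generate(
    input_ids,
    max_length=100,
    pad_token_id=tokenizer.eos_token_id,
)

# Print only the newly generated tokens (the reply), not the prompt.
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```

The replies can feel eerily human, which is exactly why the anthropomorphization question comes up later in this story.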
Part of Lemoine’s job was to push LaMDA’s boundaries and test whether it used discriminatory or hate speech. Google’s goal for LaMDA was to embed it in their products, such as Google Assistant and Search.
We’re all familiar with chatbots these days. You can find them on countless websites as a customer support tool. Except, so far, it’s been pretty easy to tell you’re talking to an algorithm. Well, in April of 2022, Lemoine wrote an internal report, initially intended only for the eyes of Google executives, in which he claimed LaMDA is sentient.
After the executives dismissed Lemoine’s report, he went public, and Google placed him on administrative leave. In June, The Washington Post published the story, which I’ll refer back to a couple more times. In it, Lemoine attempts to explain his experience with LaMDA and spread knowledge of its existence. He says:
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.”
In January of 2022, before Lemoine went to the executives with his claims, Google released a paper about LaMDA in which they acknowledged the safety concerns around anthropomorphization, our well-known human instinct to attribute traits, emotions, or intentions to non-human entities.
Google recognized that competitors could use AI like LaMDA to “sow misinformation” by mimicking “specific individuals’ conversational style.” That’s particularly concerning given the amount of disinformation and misinformation circulating these days.
If you’re anything like me, it might be easy to brush this off as an extreme case of anthropomorphization. However, if you look a little deeper, the question becomes less black and white.