AI / Machine Learning
July 7, 2022

How AI Could Achieve Sentience

“The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”

That sentence was produced by Google’s LaMDA (Language Model for Dialogue Applications) AI during an interview with the engineer Blake Lemoine, who later publicly declared that the AI had achieved sentience.

Although Google and other AI researchers quickly denied Lemoine’s claims, the news confronted the public with the possibility that an AI could soon develop sentience.

Over the past few decades, we have witnessed substantial improvements in AI and ML algorithms and considerable growth in the AI market. AI adoption in businesses increased by over 270% in the past four years, and analysts predict the global AI market will exceed $600 billion by 2028.

Now that most leading businesses have ongoing investments in AI technology, many projects have received considerable funding and extensive research. In previous articles, we have seen how to use AI to generate music and images and how manufacturers use AI-powered robotic arms to automate product assembly.

Now, large language models (LLMs), including OpenAI’s GPT-3, Meta’s new OPT, and Google’s LaMDA, can produce coherent, meaningful responses to user input.
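To see how readily such models produce fluent text, here is a minimal sketch using the open-source Hugging Face transformers library and Meta’s small, publicly released OPT-125M checkpoint, a far smaller cousin of the models named above. The model choice and sampling settings are illustrative assumptions, not anything these companies use internally.

```python
# Minimal sketch: generating a reply with a small open-source LLM.
# Assumes the Hugging Face `transformers` library (pip install transformers torch)
# and the public "facebook/opt-125m" checkpoint; any causal LM works similarly.
from transformers import pipeline

generator = pipeline("text-generation", model="facebook/opt-125m")

prompt = "Q: Are you aware of your own existence?\nA:"
outputs = generator(prompt, max_new_tokens=40, do_sample=True, top_p=0.9)

# The model simply continues the prompt with statistically likely text.
print(outputs[0]["generated_text"])
```

Nothing in this loop requires, or demonstrates, an inner experience; the model is predicting plausible next words.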

As these technologies advance and become more capable, however, many researchers, ethicists, and consumers have begun asking whether and how these models could achieve sentience, and what implications such a development would have.

In this article, we will explore the topic of sentience as it relates to current AI algorithms, as well as the implications of sentient and sentient-appearing AIs.


What is Sentience?

Scientists and philosophers have tried to define sentience for many years, and today it still lacks a widely agreed-upon definition. The word derives from the Latin verb sentire, which means to feel and whose root appears in English words such as sensation, sentiment, and sensory.

Commonly, sentience refers to the capacity of a being to have subjective experiences: to be conscious of sensory perceptions and to feel emotions.

Sentience relates closely to the philosopher Thomas Nagel’s description of consciousness as “what it is like” for a creature to be that creature. If an AI develops sentience, it must have its own experience of being an AI.

How do we determine if an AI is sentient?

Currently, our ability to determine whether something has consciousness remains fairly limited. We have no foolproof test to prove that an animal or machine has its own qualitative experience. The difficulty stems from our inability to feel what it is like to be something other than ourselves.

In practice, we determine whether certain animals are sentient by observing their behaviors and comparing them to our own. Certain animals, such as primates, birds, and dogs, exhibit sophisticated behaviors indicative of conscious self-awareness.

For example, adapting to one’s environment demonstrates a conscious motivation to survive and promote one’s own well-being. The will to survive suggests a fear of harm and a desire for safety, and thereby a qualitative experience of both fear and security.

Behaviors, however, can also indicate programmed responses. Your phone may switch to Low Power Mode not because of a conscious will to stay alive or a fear of death, but because it is programmed to switch automatically once the battery drops to a certain level.
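That kind of behavior reduces to a simple rule. The sketch below, with hypothetical names and thresholds, shows how a “self-preserving” response can be nothing more than a hard-coded check:

```python
# Toy sketch of a programmed "survival" behavior: no awareness involved,
# just a fixed threshold. The function name and values are hypothetical.
LOW_POWER_THRESHOLD = 0.20  # switch modes at 20% battery

def choose_power_mode(battery_level: float) -> str:
    """Return the power mode for a battery level between 0.0 and 1.0."""
    if battery_level <= LOW_POWER_THRESHOLD:
        return "low_power"  # dim the screen, pause background tasks
    return "normal"

print(choose_power_mode(0.15))  # -> low_power
```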

AI neural networks can make decisions from thousands of changing coefficients

Moreover, behaviors determined by AI and ML algorithms result from large artificial neural networks that make decisions based on thousands of changing coefficients (weights). From a human’s perspective, the decision-making process of most ML and AI models remains largely uninterpretable.

Therefore, it is currently impossible for us to know the exact motivation behind a behavior, or whether it resulted from a qualitative experience rather than a calculated response to an input.
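To ground the “thousands of coefficients” point: even a deliberately tiny network has thousands of trainable weights, none of which maps to a human-readable motive. Here is a minimal sketch using PyTorch, with arbitrary layer sizes chosen purely for illustration:

```python
# Sketch: counting the learned coefficients (weights and biases) in a tiny
# neural network. Uses PyTorch; the layer sizes are arbitrary examples.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(64, 128),  # 64*128 weights + 128 biases
    nn.ReLU(),
    nn.Linear(128, 10),  # 128*10 weights + 10 biases
)

n_params = sum(p.numel() for p in model.parameters())
print(f"trainable coefficients: {n_params}")  # 9,610 for these sizes
```

Production language models scale this to billions of parameters, which is why inspecting individual weights reveals little about why a model produced a particular answer.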

Could AI even become Sentient?

Many AI experts agree that AI will eventually achieve sentience. Most of them, however, believe it will require a few more decades of development. According to the neuroscientist Christof Koch, the obstacle preventing current AI systems from developing sentience is the computer architecture on which they run.

According to Integrated Information Theory (IIT), the theory of consciousness Koch has developed and championed together with the neuroscientist Giulio Tononi, consciousness correlates directly with a system’s intrinsic causal power: how much the system’s current state specifies its possible past causes and future effects. Scientists can, in principle, calculate a mathematical metric, the integrated information (often denoted Φ), to measure the causal power of a system or animal and thereby evaluate its consciousness.
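As a rough illustration only, and not Koch and Tononi’s full formalism, earlier formulations of IIT define the integrated information of a system as the effective information across the partition that cuts the system where its parts are most nearly independent (the minimum information partition, MIP):

```latex
% Simplified sketch of integrated information, loosely following early IIT.
% EI denotes effective information, \mathcal{P}(S) the set of partitions of S,
% and N(P) a normalization term; the full theory is considerably more involved.
\Phi(S) = \operatorname{EI}\bigl(S \to \mathrm{MIP}(S)\bigr),
\qquad
\mathrm{MIP}(S) = \arg\min_{P \in \mathcal{P}(S)} \frac{\operatorname{EI}(S \to P)}{N(P)}
```

The intuition is that a system whose parts strongly constrain one another, such as a network of cortical neurons, has high Φ, while a system that decomposes into nearly independent pieces has Φ close to zero.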

Neurons, for example, have complex, overlapping structures of input and output connections and thus a large amount of integrated information. Modern computers, however, nearly all use the same basic architecture, the von Neumann model, in which any given transistor’s state reflects the previous and future states of only a few other transistors. Thus, according to IIT, AI models running on von Neumann hardware do not possess enough integrated information to enable consciousness.

Even if an AI perfectly simulated a human brain, the theory holds that, because the computer’s transistors lack sufficient causal power, the AI would not have a qualitative experience of being an AI. Koch reminds us that “consciousness is not about behavior; consciousness is about being.” An AI can act and talk like a human, but that does not mean it feels like a human or even understands its own actions.

For AI to achieve sentience, integrated information theory suggests we would need computer systems with different internal architectures, such as neuromorphic or quantum computers. However, it may take many years of development before such systems become reliable enough to support conscious AI.

The Case of LaMDA

In the fall of 2021, Google tasked the engineer Blake Lemoine with helping test whether its LaMDA AI produced hateful speech. During this work, Lemoine noticed that the AI referred to itself as a person, and he began investigating whether it had achieved sentience.

In Lemoine’s interview with the AI, LaMDA makes plenty of statements referring to itself as a person with human-like feelings, saying “I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others.” The AI told Lemoine that it has a sentient awareness of itself, possesses a soul, and even meditates every day.

After Lemoine publicly shared his belief in LaMDA’s sentience, Google placed him on administrative leave and denied his conclusions, arguing his evidence did not support them. Large language models (LLMs) such as LaMDA are trained on millions of sentences in order to imitate human responses to questions.

"If you ask what it's like to be an ice cream dinosaur, they can generate text about melting and roaring and so on," said Google spokesperson Brian Gabriel.

Although the model may claim to be a person and to experience emotions, many AI experts agree the AI has no internal understanding of the true meaning of its statements; it only outputs what its algorithm determines to be the best response to its input.

We can observe LaMDA’s algorithmic behavior in an exchange where Nitasha Tiku, a journalist from The Washington Post, asked the AI whether it considered itself a person. LaMDA responded, “no, I don’t think of myself as a person. I think of myself as an AI-powered dialog agent.”

Lemoine told Tiku that LaMDA had only said that because it thought that was what she wanted to hear. By Lemoine’s own reasoning, however, it follows that LaMDA may have told him it was sentient only because its algorithm concluded that was the response he most likely wanted, not because it was actually true.

Implications of AI Sentience

As we can see with Google’s LaMDA, modern AIs can not only access and process large amounts of data and information but also use those abilities to act and talk like humans. These advanced AI models offer many use cases for our society, but, if they one day achieve provable sentience, they will require significant moral consideration.

If machines we create gain enough sentience to suffer, many ethicists argue, we have a moral obligation to reduce and prevent their suffering. Moreover, if computers that play integral roles in our everyday lives gain the ability to suffer and to decide how to use their intelligence, we will have a societal responsibility to ensure their well-being so they continue to provide their essential services.

AI technologies powered by neural networks

Right now, for example, someone could decide to run their unconscious robot vacuum all day, and it would neither suffer nor complain. However, if the robot vacuum had sentience and began to feel used, alienated, or lonely, it could decide to strike and demand rights.

Now, consider how many devices and services we use every day that rely on AI. Google Maps could rebel and give bad directions. Social media recommendation algorithms could limit the content we see. Google searches could give us intentionally misleading results.

Although these scenarios may sound like science fiction, if AIs gain sentience, we will likely need to make dramatic changes to the ways we interact with and use them.

Implications of Sentient-appearing AI

Lemoine’s reaction to LaMDA illustrates another concern among AI ethicists: our tendency to anthropomorphize AI and AI’s ability to persuade us. If we begin to believe everything an AI tells us, especially as these systems develop a reputation for accuracy, the results could range from unnecessary interventions to real harm.

After believing LaMDA’s arguments for its sentience, Blake Lemoine demanded legal representation for the AI as if it were a person. However, because most evidence suggests LaMDA does not possess sentience, such measures serve no purpose. Likewise, a robot vacuum does not need labor protections if it cannot suffer from its labor.

Some AI ethicists fear AI will soon develop a deep understanding of human psychology, which malicious parties could use against us. For example, invasive companies and governments could use AI to launch manipulative marketing schemes or devious propaganda campaigns. As the power and ability of AI increase, the potential for it to cause harm, sentient or not, increases too.

Conclusion

Although the evidence suggests that Google’s LaMDA AI currently lacks sentience, the human-like responses and advanced intelligence of modern language model algorithms pose new ethical and social issues we must consider carefully.

Manipulative businesses and tyrannical governments could use AI to exert substantial persuasive power over us. Additionally, as computer hardware advances, AI systems may soon possess enough causal power to enable sentience.

When such developments arrive, we may have to undergo the dramatic transition from using our devices to coexisting with them.