Sentient, self-aware, conscious, general intelligence, human-like: disentangling A.I. terminology

A few weeks ago, an employee of Google claimed that one of its artificial intelligences had become sentient. I want to take this event as an opportunity to explain some important terms frequently used in conjunction with advanced A.I. I have noticed that online discussions about the topic often tend to be unproductive, mainly because there is no shared definition of the different terms used. So let's start:

Intelligence
First of all: there is no commonly accepted definition of intelligence. Different experts will give us different definitions, which can vary considerably. The definitions have also evolved a lot over the last few decades, as computers made early definitions of intelligence obsolete: when computers became superior to humans at performing certain tasks (e.g. calculating), it was assumed that solving these problems does not require true human intelligence. In this way, the superiority of man over machine could be preserved.
Modern definitions of intelligence focus on two main aspects: solving complex problems and acquiring the ability to solve new problems (i.e. learning).
As there are many different kinds of problems, there are also many different kinds of intelligence: visual perception (e.g. facial recognition), motor intelligence (e.g. walking), planning actions, rational intelligence (e.g. solving physics problems), etc.
An artificial intelligence which is only capable of solving problems in one or a few such areas is called an artificial narrow intelligence (ANI). Today, all artificial intelligences are still of this type.
On the other hand, an A.I. which can perform well on a broad range of tasks (comparable to a human) is called an artificial general intelligence (AGI). Such an A.I. does not exist yet.
We say that an A.I. has superhuman performance on a certain task if it performs better than the average human. This already exists today for many tasks. Playing the game of Go and predicting protein structures from amino acid sequences are recent examples.
If an A.I. shows superhuman performance on a broad range of tasks, we call it an artificial superintelligence (ASI). This kind of A.I. does not exist yet either (but might appear very quickly after the first AGIs).

Self-awareness
Self-awareness is the ability of an A.I. to know about itself. An A.I. with this ability could, for instance, recognize itself in a mirror. This probably requires some kind of meta model which contains information about the current inner state of the A.I. We could call such information the "emotions" of the A.I.: for instance, it could be "sad" if it cannot solve a problem for a long time. This ability should not be confused with sentience. For self-awareness it is not required that the A.I. really "feels" these "emotions"; it is only required that it can react according to them. One possible reaction is to talk about them to others.
Self-awareness can be measured with tests, and it seems likely that A.I.s will soon possess it, as it is not too difficult to build such meta models.
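To make the idea of such a meta model a bit more concrete, here is a deliberately naive Python sketch (the class, the failure counter and the threshold are purely illustrative inventions, not taken from any real system): the agent keeps a record of its own recent performance and can report an "emotion" derived from that inner state, without feeling anything at all.

```python
# Purely illustrative sketch of a "meta model": the agent tracks its own
# recent failures (its inner state) and reports an "emotion" derived from it.
# Nothing here is felt; the state is just data the agent can talk about.

class ProblemSolverWithMetaModel:
    def __init__(self, frustration_threshold: int = 3):
        self.failed_attempts = 0                      # the meta model's inner state
        self.frustration_threshold = frustration_threshold

    def try_to_solve(self, problem: dict) -> bool:
        solved = self._solve(problem)                 # the "object level" work
        # Update the meta model according to the outcome.
        self.failed_attempts = 0 if solved else self.failed_attempts + 1
        return solved

    def _solve(self, problem: dict) -> bool:
        # Stand-in for a real solver; here a problem is just a flag.
        return bool(problem.get("solvable", False))

    def describe_inner_state(self) -> str:
        # The agent "talks about itself" by reading its meta model.
        if self.failed_attempts >= self.frustration_threshold:
            return "I am 'sad': I have been failing at this problem for a while."
        return "I am 'content': things are going well."


agent = ProblemSolverWithMetaModel()
for _ in range(3):
    agent.try_to_solve({"solvable": False})
print(agent.describe_inner_state())  # prints the "sad" message
```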
Note that it is quite easy to build simple software systems which can give humans the illusion of self-awareness even if it is not present. Current large language models (like GPT-3) fall into this category: if asked about themselves, they will give sensible answers because such dialogues (e.g. interviews) are part of their training data. But the answers will vary with the context (i.e. the dialogue with the machine before asking these questions) and have nothing to do with the inner state of the A.I. Current large language models have no meta model (for instance, to assess how difficult it was to answer a certain question).
We have a tendency to assume that self-aware systems are also sentient (see below), but these are in fact independent qualities.

Sentience / Consciousness
Sentience (or the synonymous term consciousness) includes the ability to feel emotions and pain, but also sensations like seeing colors or feeling objects when touching them. Philosophers call such sensations "qualia". It is not sufficient to have sensors for these physical quantities (and to produce some kind of reaction to such inputs): the sensations must really be felt.
It is very unclear whether computers will ever have such abilities. This has nothing to do with the special nature of computers but with the difficulty of the question: in the end, I can only be sure that I myself have them. Even if it seems very reasonable to assume that other people perceive the same kind of qualia, this must ultimately remain an assumption. And even if other people do perceive the same kind of qualia, they could perceive them very differently. For instance, when others look at an object which appears blue to me, they could perceive the color which I experience as red. We would still fully agree that the object is blue, and there is no way we could find out that we perceive it differently: there are no objective criteria which allow us to describe colors. How does yellow look? There is no answer to this question; we can only point to yellow objects.
Today we have no idea what makes a system sentient. Is it complexity? Unfortunately, this is, like intelligence, a very poorly defined term, and we know little about the subject. Does it arise from the communication between some kind of nodes (like neurons)? We simply don't know, and we might never know, because it is impossible to experiment in this field.
But just because we don't understand the phenomenon, we should not rashly deny that A.I.s can be sentient. It is possible that A.I.s perceive qualia in a way which is very different from the way humans perceive them. How does an octopus feel (they have a brain in each arm)? Can plants feel pain (they also have internal communication systems)? It seems very difficult to answer such questions, but it is even more difficult to answer them for the extremely "alien" (i.e. non-biological) hardware of computers.
And if computers can perceive qualia, it seems likely that they perceive them in different ways than humans do. This could be the case even if they show impressive human-like behavior (see below).
Again: it is very easy to build simple systems which claim to be sentient. If a system is intelligent and claims to be sentient, it does not need to be sentient at all. But it could be, in a strange and unexpected way.
Note that the topic of sentience / qualia is very complex and intensely debated among philosophers. There is little agreement on the definitions given in this article.

Human-like
An A.I. is called human-like if it shows behavior similar to that of a human. An A.I. which can predict protein folding is certainly not human-like (it cannot talk, hear, control a body, understand language, recognize objects etc.), but a humanoid robot might be. Human-like robots do not necessarily need to be sentient, but they need to be intelligent and self-aware (to the extent the average human is). Human-likeness is the property the famous Turing test measures. Note that to pass the Turing test, an A.I. cannot be superintelligent (or fully self-aware), as answers that are too smart would reveal it as a computer. Today we can imagine two ways to create human-like A.I.s:

  • Supervised learning. This works, for instance, in natural language processing (NLP), where we can train an A.I. to complete incomplete sentences (with the rest of the sentence, as written by a human, serving as the correct answer); a minimal sketch of this idea follows after this list. Today it is not clear if this method will ever work to train humanoid robots, which have many sensors (like two eyes/cameras) and actuators (like fingers etc.). But it seems to work well for systems with limited input/output capabilities (like the NLP A.I.s mentioned above). It is already possible today to build chatbots which pass the Turing test.
  • Inverse reinforcement learning. Maybe it will be possible in the future to reconstruct the reward function of a human from long-term observation of their behavior. A robot trained with reinforcement learning using such a reward function should, after training ("growing up"), show human-like behavior.
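To make the supervised-learning idea mentioned above a bit more tangible, here is a deliberately tiny Python sketch. The toy corpus and the simple word-pair counting are my own illustrative stand-ins, not how real systems like GPT-3 work (they use large neural networks trained on vast text collections), but the training signal is the same: human-written text provides the correct continuations, and the model learns to predict them.

```python
# Toy illustration of supervised sentence completion: count which word follows
# which in human-written text, then "complete" a prompt with the most frequent
# continuation. The human-written text is the source of the correct answers.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the sofa . "
    "the cat chased the dog ."
)

# "Training": for every word, count the words that follow it in the corpus.
next_word_counts = defaultdict(Counter)
tokens = corpus.split()
for current, following in zip(tokens, tokens[1:]):
    next_word_counts[current][following] += 1

def complete(prompt: str, max_words: int = 5) -> str:
    """Greedily extend the prompt with the most likely next word."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = next_word_counts.get(words[-1])
        if not candidates:
            break
        words.append(candidates.most_common(1)[0][0])
    return " ".join(words)

print(complete("the cat"))  # prints a continuation learned from the toy corpus
```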

It is again possible to build simple systems which can give humans the illusion of human-likeness. In some cases, surprisingly little intelligence is required to achieve this goal. Weizenbaum's classic ELIZA chatbot from the mid-1960s is a good example: ELIZA had almost no true intelligence, but many humans who interacted with it were quickly convinced that they were talking to a human-like computer.



Follow me on Twitter to stay informed about new content on this blog.

I don't like paywalled content. Therefore I have made the content of my blog freely available to everyone. But I would still love to invest much more time into this blog, which means that I need some income from my writing. So if you would like to read articles from me more often and can afford $2 a month, please consider supporting me via Patreon. Every contribution motivates me!