Artificial Intelligence (AI) is pervasive today. Across all aspects of technology, AI is increasingly playing a part. But, arguably, most AI is essentially just sophisticated automation rather than true “intelligence”.

In this “explainer” article we look to the distant future of artificial intelligence: artificial consciousness. What is artificial consciousness and when can we expect it?

What is artificial consciousness?

Artificial consciousness (also known as machine consciousness, synthetic consciousness or AI consciousness) refers to a non-biological, human-created machine that is aware of its own existence. When – or if – it is created, it will profoundly affect our understanding of what it means to be “alive”.

But before discussing artificial consciousness, we need to examine what we have now: artificial intelligence.

What is artificial intelligence?

Artificial Intelligence (AI) is a broad term that has been used to refer to anything from basic software automation to machine learning. Artificial intelligence is a long way from artificial consciousness.

One of the ways artificial intelligence is built is through technologies such as machine learning and neural networks: technologies that enable computers to be “trained” on the information they are given, through the use of complex algorithms. As a result of this training, these artificial intelligence systems can then take actions of their own.
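To make the idea of “training” concrete, below is a minimal sketch of a single artificial neuron (a perceptron) learning the logical AND function from labelled examples. It is purely illustrative – a deliberately tiny stand-in for the far larger networks used in practice:

```python
# Training data: inputs and the label the machine should learn to predict.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    # "Fire" (output 1) if the weighted sum of inputs crosses the threshold.
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total > 0 else 0

# Training: repeatedly show the examples and nudge the weights after each mistake.
for _ in range(20):
    for x, label in examples:
        error = label - predict(x)
        weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
        bias += learning_rate * error

print([predict(x) for x, _ in examples])  # -> [0, 0, 0, 1]
```

The machine is never told the rule for AND; it infers workable weights from the data it is given – which is all that is meant by “training” here.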

Artificial intelligence has been evolving rapidly and has found numerous applications across a wide range of industries. From retail to manufacturing to banking and financial services, artificial intelligence algorithms are being deployed to target individuals’ needs and desires, enhance user experience, and promote efficiency and safety.

For example, in the healthcare services industry, UK Health Secretary Matt Hancock has acknowledged the “enormous power” that artificial intelligence has to improve the healthcare sector. This was in the context of the government’s announcement that it would spend £250 million on a new artificial intelligence laboratory for the NHS.

What is Artificial Narrow Intelligence (ANI)?

As AI has developed, so too has the scope of research. Artificial intelligence has been divided into two categories: Artificial Narrow Intelligence (ANI) and Artificial General Intelligence (AGI).

Artificial Narrow Intelligence is commonly referred to as ‘Weak AI’. It is the form of artificial intelligence that pervades today. Whilst these “intelligent” machines have made significant breakthroughs and, in some cases, are even out-performing humans in specific tasks, they are ‘weak’ because they operate intelligently only in narrow circumstances and under significant constraints.

Artificial narrow intelligence can learn to be very good at doing a particular defined task, but is completely unable to perform any other task. Its “intelligence” cannot be applied to anything else.

The other category of artificial intelligence is Artificial General Intelligence (AGI) – or ‘Strong AI’. This is the next stage in artificial intelligence and has yet to be achieved.

What is Artificial General Intelligence (AGI)?

Essentially, AGI machines can observe and learn, and then apply what they have learned in one area to another, unfamiliar area. Whilst this kind of intelligence is basic in humans and other biological life, appearing at early stages of development, we have yet to produce a machine that can mimic it.

Whilst the gap between human intelligence and artificial intelligence seems to be diminishing at a rapid rate, it takes more than performing specific tasks better than humans to qualify as Artificial General Intelligence.

How is artificial consciousness different?

Artificial consciousness (AC) is one step beyond Artificial General Intelligence and implies more than just intelligence – it implies sentience and self-awareness. Whilst our smart kitchen appliances are not likely to benefit from higher levels of consciousness, artificial consciousness could perhaps find use in voice assistants or humanoid robots that are designed with that interactivity in mind.

Unlike intelligence, which can (arguably) be quantified through IQ, consciousness is more difficult to assess. Testing whether artificial consciousness has been achieved will be a philosophical question rather than a technical one. There are many varying interpretations of what consciousness is, and this affects when artificial consciousness could possibly be developed.

Some argue that consciousness is binary: a machine either is conscious or it is not. Others, such as Ned Block, believe that consciousness is a ‘mongrel concept’ and that there are a number of different types of consciousness.

Along a similar line of thought, Marco Margnelli argues for ‘states of consciousness’. In explaining consciousness, Margnelli uses a computer analogy in which the brain represents our computer hardware and the states of consciousness represent its operating programs.

When could artificial consciousness become real?

There is still no specific or agreed timeline for when artificial consciousness could become a reality. It is unpredictable because we do not yet understand the technological leaps that need to be made in order to achieve it.

In a book titled ‘Architects of Intelligence’, writer and futurist Martin Ford interviews a number of prominent people working in AI, asking when there will be at least a 50% chance of artificial general intelligence – not even artificial consciousness – being built. Answers varied widely; some respondents were optimistic, whilst others, such as Rodney Brooks – co-founder of iRobot – were less so, suggesting 2200.

Hybrid artificial consciousness

Much sooner on the horizon – perhaps as early as 2045 – we may start to see hybrid artificial consciousness, essentially the melding of man and machine.

Futurist Ray Kurzweil predicts that, by this point, humans will be a hybrid of biological and non-biological intelligence that becomes increasingly dominated by the non-biological (artificial) component. Kurzweil suggests that “AI is not an intelligent invasion from Mars. These are brain extenders that we have created to expand our own mental reach. They are part of our civilization. They are part of who we are”.

Once a two-way “brain to technology” interface has been developed, augmented humans combining the best of biological human intelligence, inspiration and emotion, and artificial technological processing speed and storage capabilities could begin to make significant new technological developments. Past this point, the pace of change might grow exponentially.

The development of artificial consciousness could therefore be gradual, with the biological component of “intelligence” slowly diminishing in importance as increasingly sophisticated technology does more and more of the work of “thought”.

Can a machine be conscious? What is “consciousness”?

When discussing artificial consciousness, there does not seem to be a consensus on what “consciousness” actually means. There is still debate as to its definition, with several different explanations put forth by both neuroscientists and philosophers. The difficulty lies in the fact that, unlike intelligence, consciousness involves an element of subjective experience. David Chalmers termed this subjectivity the ‘hard problem of consciousness’: the difficulty of explaining how subjective states can arise from objective physical systems.

A subjective experience is, for example, what it is like to see the colour red or to feel pain. Our perception is subjective, involves sight, smell, touch and emotion, and is by and large shaped by biochemical influences. We see colours differently from a dog, which sees differently from an insect, yet we are all “alive”.

Types of consciousness

Whilst there are a number of different theories and papers written on consciousness, there is also a general understanding that consciousness involves both an objective and a subjective element.

Ned Block makes a distinction between two types of consciousness: phenomenal consciousness and access consciousness. The former relates to ‘experience’ – ‘what it is like to be in that state’ – whilst the latter refers to the ability of the mind to reason and guide behaviour.

This concept of “phenomenal consciousness” may be what most people mean in the discussion about artificial consciousness. Arguably, a form of artificial access consciousness may already have been reached: self-driving cars have the ability to understand their environment and make decisions that guide their behaviour.

Understanding consciousness is needed before artificial consciousness can be realised

One of the difficulties in creating artificial consciousness is that, in order to do so, we need a strong understanding of what consciousness is and how it arises in the brain – particularly from a scientific perspective. Whilst the debate has predominantly been conducted from a philosophical standpoint, neuroscience has recently taken an interest in consciousness and seems determined to come up with an explanation of how it arises in the physical sense.

We know that consciousness is affected by neurons and has at its core a biochemical origin. As an example, Margnelli cites the effect of LSD on the state of consciousness, due to the chemical’s direct impact on how neurons communicate. He claims the study of the effects of LSD on the brain ‘suddenly disclosed extraordinary connections between the neuronal hardware and the mental software, thus suggesting that consciousness is somehow a product of the activity of neurons and of chemical relations set up among them.’

Neuroscience suggests that whatever consciousness is, it must arise physically through the brain and nervous system. The idea is that if something has its origins in the physical, science can replicate it. Science has already replicated the human heart, so why should it not be able to replicate the brain in the form of artificial consciousness?

Identifying artificial consciousness

Philosopher Nick Bostrom has said that, given our problems with understanding “consciousness”, we must accept the possibility that true self-aware artificial consciousness could arise long before machines reach human-level intellect (or beyond).

Bostrom suggests that it is possible that humanity could, in the future, be unwittingly causing the suffering of an artificial consciousness.

What does it mean for a machine to act intelligently?

There is no single definition of intelligence, nor one test of how to measure machine intelligence. As with consciousness, there are several different interpretations. In fact, Legg and Hutter produced an entire report reviewing over seventy different definitions of intelligence – highlighting the numerous interpretations depending on the field and subject matter.

Nonetheless, when discussing AI and artificial consciousness, the “gold standard” is human beings and therefore natural intelligence. Much in the same way that human intelligence is measured by testing problem-solving, a machine is considered intelligent if it can solve problems. The more complex these problems are, the more intelligent the machine can be considered.

If a machine can solve specific problems better than a human, it can be considered more intelligent in that field, but that does not mean it is intelligent by all measures of intelligence. Perhaps the existing definition of intelligence is too narrow and needs to be reconsidered.

The Turing test

In 1950, Alan Turing proposed ‘The Imitation Game’, now known as the ‘Turing Test’, which tests a machine’s ability to exhibit intelligent behaviour. It is based on the notion that if a machine acts in a way that is indistinguishable from a person, then it can be considered intelligent. In the game, a human evaluator converses with two hidden participants, A and B – one a machine, one a human – and must decide which is which, based solely on their answers. If the evaluator cannot reliably distinguish the human from the machine, the machine is considered intelligent.
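The structure of the game itself is simple enough to sketch. Everything below is illustrative: the “machine” is a hypothetical canned-reply bot, and a real test would involve a capable conversational AI and many rounds of questioning:

```python
import random

def machine_reply(question):
    # Hypothetical stand-in respondent; a real test would use a conversational AI.
    return "That is an interesting question. What do you think?"

def human_reply(question):
    return input(f"(Human, please answer) {question}\n> ")

def imitation_game(questions):
    # Hide the two respondents behind the randomly assigned labels A and B.
    respondents = [machine_reply, human_reply]
    random.shuffle(respondents)
    labelled = dict(zip("AB", respondents))
    for question in questions:
        print(f"Question: {question}")
        for label, respond in labelled.items():
            print(f"  {label}: {respond(question)}")
    guess = input("Which respondent is the machine, A or B? ").strip().upper()
    actual = "A" if labelled["A"] is machine_reply else "B"
    print("Correct." if guess == actual else "Wrong - the machine passed this round.")

imitation_game(["What did you have for breakfast?"])
```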

Machine intelligence has come a long way since 1950, and scientists have already created machines that surpass human performance – for example at chess, data mining, theorem proving and other narrow tasks. As such, the Turing test has been criticised for having limited applicability in this new age of intelligence.

The Chinese Room argument

In a seminal article first published in 1980, philosopher John Searle put forth ‘one of the best-known arguments’ against strong AI – ‘The Chinese Room Argument’. In it, Searle imagines himself in a room, receiving Chinese characters slipped under the door. Searle knows no Chinese, but by following a program for manipulating symbols and numerals, much as a computer does, he sends the appropriate Chinese characters back under the door. Because he has successfully followed the program, the people on the other side of the door suppose there is a Chinese speaker in the room – when in fact there is not. Searle’s point – that artificial intelligence is not, and can never be, strong – rests on the fact that AI uses syntactic rules but has no understanding of meaning, or semantics.
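As a loose illustration of Searle’s point, consider a toy “room” that maps incoming strings of characters to outgoing ones by consulting a rulebook (the entries below are invented for illustration). Nothing in it represents meaning, yet from the outside it appears to answer:

```python
# Pure symbol manipulation: match the shape of the input, return whatever
# symbols the rulebook dictates. Nothing here "understands" Chinese.
RULEBOOK = {
    "你好吗?": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def chinese_room(slip):
    return RULEBOOK.get(slip, "请再说一遍。")  # Fallback: "Please say that again."

print(chinese_room("你好吗?"))  # Looks fluent; no semantics anywhere.
```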

Are human and machine intelligence the same?

Machine intelligence is inherently different from human intelligence, not only because of differences in hardware (one analogue, one digital), but also because machine intelligence tends to be highly specialised. In a paper on machine consciousness, Shevlin argues there is a difference between specialised and general intelligence, and that when it comes to machine consciousness, ‘it is general intelligence that matters.’ He cites three criteria upon which to assess general intelligence (and therefore the possibility of consciousness): ‘robustness’, ‘flexibility’ and ‘whole-system integration’.

A specialised intelligent machine – for example, a toaster that knows when your toast is done the way you like it – is intelligent when measured in that way. But it cannot tell you the weather outside, nor tie shoelaces, nor ever aspire to be anything other than a smart toaster.

Philosophy of artificial intelligence and artificial consciousness

There are some deep philosophical questions about what it means to be alive, and about the degree to which this is (or could be) reproducible in technology.

Can a machine have emotions?

Answering whether machines can have artificial emotions is much the same as answering whether they could have artificial consciousness: we do not know. We do not yet know how to artificially create emotion because we do not yet know exactly how it arises in the brain.

Robots have been created that can imitate emotion and can appear to have emotions. These “emotional robots” are simply doing as programmed, though, and are light years away from possessing any intelligence or artificial consciousness. In one study, for example, emotional robots were programmed to appear as if they had feelings by saying “please don’t switch me off” and “I’m afraid of the dark”. This in turn affected how people interacted with them.

There is, however, AI being developed to read emotions in humans. Humans and animals, as social creatures, rely on emotions to interact with and understand each other. Body language, tone and intonation all express our feelings and emotions, usually spontaneously and automatically. The ability to understand these signals is called emotional intelligence and has been described as ‘critical to the success of human interactions’. Robots are being developed to learn and interpret human emotion by interpreting cues in tone, voice and body language, such as facial expressions. However, there still are not any robots that can go further and “feel” emotions in the same way.
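A toy sketch of the underlying idea – mapping observed cues to an emotion label – might look like the following. Real systems use trained models over audio, video and text; this keyword table is purely illustrative:

```python
# Hypothetical cue-to-emotion table; real systems learn such mappings
# from large amounts of labelled audio, video and text.
CUE_TABLE = {
    "smile": "happiness",
    "frown": "sadness",
    "raised voice": "anger",
    "trembling": "fear",
}

def read_emotion(observed_cues):
    # Tally which emotion the observed cues point to most often.
    votes = {}
    for cue in observed_cues:
        emotion = CUE_TABLE.get(cue)
        if emotion:
            votes[emotion] = votes.get(emotion, 0) + 1
    return max(votes, key=votes.get) if votes else "unknown"

print(read_emotion(["smile", "raised voice", "smile"]))  # -> happiness
```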

Perhaps one of the reasons artificial consciousness has not yet reached the point where robots can feel emotions is that there do not seem to be many reasons why we would want them to. There is a strong argument that robots are more efficient without artificial emotion or artificial consciousness; in the workplace especially, it could be counter-productive. If robots could “feel” tired or emotional and so become distracted from their core duties, would there really be any advantage over human workers?

There are, however, settings such as patient care where artificial consciousness and emotions might be beneficial. In care homes today, for instance, some AI is being used to care for the elderly or the infirm, and the more human these machines appear, the more they can foster ‘relationships’ with those they care for – thus improving their function.

Can a machine be self-aware?

Among the numerous definitions of consciousness, many scientists have found it helpful to categorise different levels or stages of consciousness. Some argue there are three levels, with self-awareness emerging at the third level, “C2”, which is ‘commonly called introspection, or what psychologists call “meta-cognition”’. There has been no evidence of robots developing this level of consciousness or self-awareness, and we do not appear to be close.

Can a machine be original or creative?

AI has been shown to be “creative” in a number of different tasks, from song-writing to painting and even rapping. However, in these machine learning studies where machines produce content that might be deemed art, it is difficult to argue with any conviction that there is creativity involved. This is because all the AI is really doing is extrapolating from a data set and reproducing within a predefined amount of random error.
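A deliberately crude sketch of that criticism: the “composer” below produces a new melody only by picking from its training data and adding a bounded amount of random variation. The data is invented for illustration, and real generative models are vastly more sophisticated, but the shape of the objection is the same:

```python
import random

# Invented "training data": two short melodies.
training_melodies = [
    ["C", "E", "G", "E", "C"],
    ["D", "F", "A", "F", "D"],
]
NOTES = ["C", "D", "E", "F", "G", "A", "B"]

def generate_melody(noise=0.2):
    base = random.choice(training_melodies)  # extrapolate from the data set
    melody = []
    for note in base:
        if random.random() < noise:          # predefined amount of random error
            note = random.choice(NOTES)
        melody.append(note)
    return melody

print(generate_melody())  # e.g. ['C', 'E', 'B', 'E', 'C']
```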

Can a machine be benevolent or hostile?

Benevolence and malevolence are largely related to emotions. We know that, for the moment, AI cannot feel emotions and is for the most part highly predictable. Nonetheless, much has been written on the dangers of AI, and there are plenty of science fiction novels and TV series premised on the idea that artificially intelligent machines will one day turn on us.

James Barrat, author of ‘Our Final Invention: Artificial Intelligence and the End of the Human Era’, predicts that AI in the future will not necessarily be friendly or benevolent and may in fact turn on humans. Elon Musk has also said several times that once artificial intelligence surpasses human intelligence, we will not be able to control it. That may prove beneficial or detrimental; there is no way of predicting the outcome.

However, the ability to feel, or not, might not be the cause of hostility. Musk gives the example of humans destroying an ant nest when building a road – not because we are inherently evil or dislike ants, but because our goal is to build the road. He argues that a machine programmed to fix a problem might, for example, conclude that humans are the problem, and that one solution is to destroy them. We as humans would of course view this as hostile, but it is a kind of hostility that does not necessarily relate to emotions.

As such, the concept of hostility is highly subjective.

Creating artificial consciousness: could you give a robot a soul?

Consciousness has been difficult enough for scientists to define – let alone a ‘soul’, which inherently has a theological aspect to it. Vladimir Havlik, a philosopher at the Czech Academy of Sciences, suggests we define a soul as a “coherent identity” that stands the test of time. If we take consciousness itself to mean a soul, then the answer remains the same: we do not know.

Some, however, argue that true artificial consciousness will never be achieved. This is the position of the dualists in philosophy, who hold that the brain and the mind are distinct: one might be replicable, but not the other. The materialists, on the other hand, argue that neuroscience is capable of explaining everything, even if it does not yet have the answers.

Ethics of exploiting artificial consciousness

The problem with using the word ‘conscious’ to describe AGI is that it brings up certain connotations of humanity. If machines were ever to be deemed conscious, a number of ethical and perhaps legal considerations would come into play.

Robot slaves

As mentioned above, there is the possibility that artificial consciousness might exist long before we are able to identify it and, as such, humanity might unwittingly enslave a new form of life.

The idea of “robot rights” has been discussed often before. It stands to reason that, should we succeed in creating a conscious machine, there is a compelling argument for it to enjoy rights and legal protections. Humans do not deny those with a lower IQ the same protections and rights; they are enjoyed by all, simply by virtue of being human. If future machines are truly intelligent and possess an artificial consciousness, at what point would they acquire rights?

But if robots were designed, programmed and created to serve humans, the idea of robot rights seems counter-productive.

In a paper by Joanna Bryson, she argues that robots should be considered slaves in the sense of servants that we own: everything about them, from their appearance to their intelligence, is directly or indirectly designed by humans, and it would be wrong to consider their place on our planet as being alongside humanity. She also argues that exploiting human empathy for AI is potentially dangerous.

