Picture this. Two figures, seated in comfortable swivelling chairs, talking as they pore over magazines. “I’m here for a chrome job,” says one. “I’m here for a recalibration,” says the other.
Another scene. Two shining figures, one behind a podium, orating in a language perhaps far too evolved for the current human mind to comprehend, while others like it record the data, interspersed with their flesh-and-blood human brethren, all listening intently.
It may sound like a scene out of the mind of Asimov, or more recently the dystopian world that a certain snarky character named Bender once belonged to, but is it possible?
Science has explored, and continues to explore, the farthest reaches of the human mind. A recent study examined the Feeling of a Presence (FoP), a phenomenon reported by many people with neurological and psychological disorders.
The study was conducted on 12 individuals with various neurological conditions such as epilepsy, stroke, migraine and tumours, each of whom had previously experienced FoP for periods of seconds to minutes as a result of their conditions. The team, led by Professor Olaf Blanke, MD, PhD, of the Ecole Polytechnique Fédérale de Lausanne, Switzerland, was able to trace the patients’ FoP to damage in any of three brain regions by using, incidentally, a robot designed specifically for the purpose.
With every passing day and development in technology, we glean a deeper, more detailed understanding of the intricacies and mysteries of the human brain. We already know that sensory and nerve impulses travel through our central and peripheral nervous systems much like electrical currents, and experts in neurology, psychiatry and other medical and scientific research fields uncover more information every day, helped by the significant funding provided to study these questions.
Armed with this knowledge and a steady flow of new information, it could perhaps be possible to replicate these signals and this structure, and then have it function in previously non-sentient, engineered robots, with a power source sustaining them much as nutrition sustains humans.
The same experiment also used robot–human synchronicity as a crucial element: humans were connected to a master–slave robot, and as they made movements with their hands in front of their bodies, the robot reproduced the movements in sync. According to the paper, “The robotic system mimics the sensations of some patients with mental disorders or of healthy individuals under extreme circumstances.”
If robots can be designed to mimic psychological disorders and mental illness, or to replicate the mental state of a healthy human under duress, it seems they can be designed to experience two rather complex components of human life and existence, components that have not themselves been fully understood yet.
Legendary mathematician Alan Turing devised the Turing Test in his 1950 paper “Computing Machinery and Intelligence,” which opens with the words: “I propose to consider the question, ‘Can machines think?’”
In 1948, Turing wrote: “It is not difficult to devise a paper machine which will play a not very bad game of chess.”
Deep Blue, IBM’s chess-playing computer, beat Grandmaster Garry Kasparov, considered by many the greatest chess player of all time, in 1997. Watson, IBM’s later offering, was initially designed specifically to answer questions on the quiz show Jeopardy!, which it proceeded to win.
In February 2013, IBM announced that the Watson software system’s first commercial application would be for utilization-management decisions in lung cancer treatment at the Memorial Sloan Kettering Cancer Center.
It was this interchangeability, this ability for humans and machines to switch places and go unnoticed in doing so, that Turing’s Imitation Game implied.
However, the test examines only a machine’s ability to make rational decisions, and rationality is not the only component of sentience; emotion and consciousness are integral. Systems that can make such rational decisions and ‘act intelligently’ fall into what John Searle calls ‘Weak AI’, the category scientists most often consider.
Humans consider themselves more than just the workings of nerves, signals and interconnected neurobiological function. Perhaps, though, each of the physical processes in the human brain that produce the continuous, dynamic mental states we experience can be correlated entirely with the series of logical functions and algorithms a computer system uses to arrive at a conclusion or decision.
Finally, it is perhaps this consciousness, this ability to feel and create emotion, that mankind considers as setting it apart from its machines. As Turing himself put it, there are many arguments of the form “a machine will never do X”, where X can be many things, such as:
“Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humor, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.”
Computers are still mathematically unable to integrate information in the same way that humans do, according to recent studies.
It is a fact that our thoughts and consciousness are based on the neural activity of the brain. It is also a fact that we do not perceive our brain activity as it really is — patterns of neural firings. Instead, we perceive our sensations and thoughts as they appear to us. Neurobiology and related fields will, in the future, give us better insight into the advancement of artificial intelligence, which may, outside of a dystopian science fiction novel, perhaps even become able to propagate itself. As humankind understands more about the intricacies of its own workings and (pardon the pun) machinations, it will become easier to replicate those intricacies in the machines it engineers.
Meanwhile, if what humans know as ‘consciousness’ turns out to be simply a series of rational, logical decisions following an algorithm, much like a species straight out of the mind of Gene Roddenberry, and if our species is self-reflective enough to understand completely the inner intricacies of the mind, it is possible we can work together with these new robotic ‘species’ instead of considering them an immediate existential threat.
Until then, it seems that in this game of catch up, machines are still playing just that: an imitation game.
Writer and communications professional by day, musician by night, Anuradha Santhanam is a former social scientist at the LSE. Her writing focuses on human rights, socioeconomics, technology, innovation and space, world politics and culture. A programmer herself, Anuradha has spent the past year studying and researching, among other things, data and technological governance. An amateur astronomer, she is also passionate about motorsport.
More of her writing is available here and she can be found on Twitter at @anumccartney.
(Image credit: Andrea Vallejos)
If the basis of consciousness is material and all material processes are mechanical, then consciousness is Turing computable and can therefore be produced by a machine. No one has yet proved that any process of thought or feeling is not Turing computable.
In other words, if a human is a machine then it follows that a machine can be conscious.
The Turing Test (or Imitation Game) is a simple test for the presence of consciousness. It is despised by many in the AI community, partly because it is not objective, but mostly because they have so far come nowhere near passing it and do not like the adverse publicity that this can bring.