“Can machines think?” asked Alan Turing in 1950. His answer: yes – if a human can’t tell the difference between a computer and a person in conversation. The resulting Turing test, highly influential yet widely criticized, became one of the most important concepts in the philosophy of AI.
Seventy years later, AI applications are part of our daily lives and even beat humans in some of the world’s most challenging video games. Yet modern AI is still “weak”: its success is limited to narrow, domain-specific tasks. In other words, even if an AI beats Garry Kasparov at chess, it can’t beat him at any other human activity.
How to pass the Turing test?
Place a computer (A) and a human (B) on one side and a human evaluator (C) on the other side. If, after a series of questions, the evaluator (C) can’t tell which candidate is the human and which is the computer, the computer has passed the Turing test. More precisely, the computer passes the test if the evaluator (C) decides wrongly as often when the game is played with the computer (A) as when it is played with the human (B).
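Turing’s criterion above is statistical rather than all-or-nothing: the machine wins if the evaluator misidentifies it at least as often as they misidentify a real human. A minimal sketch of that decision rule (the trial counts below are hypothetical, purely for illustration):

```python
def passes_turing_test(errors_with_machine, trials_with_machine,
                       errors_with_human, trials_with_human):
    """Turing's criterion: the machine passes if the evaluator
    misidentifies it at least as often as they misidentify a human."""
    machine_error_rate = errors_with_machine / trials_with_machine
    human_error_rate = errors_with_human / trials_with_human
    return machine_error_rate >= human_error_rate

# Hypothetical numbers: the evaluator guesses wrongly 3 times out of
# 10 with the machine, and 3 times out of 10 with a human control.
print(passes_turing_test(3, 10, 3, 10))  # True
```

Note that under this reading a single fooled judge proves little; what matters is whether the machine is confused with a human as reliably as humans are confused with each other.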
To date, no AI has passed the Turing test, but some came pretty close.
In 1966, Joseph Weizenbaum (computer scientist and MIT professor) created ELIZA, a program that scanned typed comments for specific keywords and transformed the user’s own words into responses. Its script imitated a Rogerian psychotherapist giving “non-directional” replies. If ELIZA couldn’t find a keyword in the user’s text, it would fall back on a generic response or reuse a keyword from earlier in the conversation. That’s why ELIZA could fool some humans and was claimed to be one of the first programs to pass the Turing test. However, ELIZA was an easy target for anyone who intentionally asked questions likely to make a computer slip up.
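ELIZA’s keyword-and-transform trick can be illustrated with a toy sketch. The rules below are simplified inventions in the spirit of Weizenbaum’s DOCTOR script, not the original (which was far larger and used ranked keywords):

```python
import re

# Invented, simplified rules in the style of ELIZA's Rogerian script.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

RULES = [
    (r"\bi am (.*)", "How long have you been {0}?"),
    (r"\bi feel (.*)", "Why do you feel {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
]
FALLBACK = "Please go on."  # generic "non-directional" response

def reflect(phrase):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in phrase.split())

def respond(text):
    """Find the first matching keyword pattern and turn the user's
    own words into a question, as ELIZA's script did."""
    text = text.lower().rstrip(".!?")
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return FALLBACK

print(respond("I am sad about my job"))
# How long have you been sad about your job?
```

The sketch also shows why ELIZA was so easy to trip up: any input that matches no keyword (“What is two plus two?”) collapses to the canned fallback, and malformed echoes of the user’s phrasing quickly give the game away.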
In 1972, PARRY, a chatbot modeling the behavior of a paranoid schizophrenic, used an approach similar to ELIZA’s. During a Turing-style test, two groups of psychiatrists analyzed transcripts of conversations with both actual patients and computers running PARRY. The psychiatrists were fooled 48 percent of the time – impressive!
Fast forward to 2014 – Eugene Goostman, a computer program simulating a 13-year-old boy from Ukraine, made headlines after claiming to have passed the Turing test by convincing 33% of the human judges that it was human. However, there were only three judges, meaning that just one was fooled – not exactly a significant result. Another problem was that, by portraying the chatbot as a 13-year-old from Odessa, the organizers encouraged judges to let nonsensical sentences and obvious mistakes slip, attributing them to limited English skills and young age.
In 2018, Google’s Duplex voice AI called a hairdresser and successfully made an appointment in front of a live audience. The hairdresser did not realize she was speaking to an AI. Although considered a groundbreaking achievement in AI voice technology, Google Duplex is still far from passing the Turing test.
Duplex is a deep learning system representing the ‘Second Wave of AI’ – trained for hundreds of hours to perform a very narrow task. Real-time learning, deep understanding, and reasoning require true cognitive abilities that none of the Second Wave AI programs have. As soon as a human steered the conversation in a different direction, Google Duplex would fail.
Are we close to developing AI that would finally pass the Turing test?
Some suggest it might happen around 2030; others say not before 2040. Most AI scientists agree that we need to learn more about the human brain before we can replicate something we still don’t fully understand.
According to Demis Hassabis – neuroscientist, computer-game designer, and chess master – to truly advance AI, we need to understand how the human brain works at an algorithmic level.
“If we knew how conceptual knowledge was formed from perceptual inputs, it would crucially allow for the meaning of symbols in an artificial language system to be grounded in sensory reality,” Hassabis said.
Essentially, Alan Turing used natural human intelligence as the prototype for artificial intelligence. Historically, AI researchers largely ignored the brain as a source of algorithmic ideas because they lacked the means to analyze it properly. Today, we can look inside our biological “black box” to find answers and build intelligent, fair artificial systems. On this journey, we would also inevitably gain a deeper understanding of our own consciousness.