Our misguided perceptions of AI confuse the vital public debate about AI’s role in society, at once understating its real consequences and exaggerating its capabilities.
Artificial Intelligence is sexy. It’s been able to translate between languages, recommend new TV shows for us to watch, and beat humans at everything from Go to Jeopardy. At its core, much of AI’s sex appeal comes from our tendency to project ourselves onto AI, whether as Data from Star Trek or the Terminator. While this metaphor has sparked public interest, it muddles the larger public policy debate that we need to have about AI.
Hype in the Age of AI
Much of the hype stems from the headline-grabbing advances of the field’s latest technique: neural networks. The name says it all: it captures the popular imagination by dangling the tantalizing possibility that computer scientists are building a silicon equivalent of the human brain. While that may have been the initial inspiration, even leading researchers regularly caution against taking the metaphor too far. Aeronautical engineers can draw inspiration for airplane designs from birds, but the two kinds of flight are still very different. Similarly, while the genesis of neural networks may have been brain science, the usefulness of the cognitive metaphor has its limits.
So how should we think about AI? One way is technical: to disabuse ourselves of such delusions, remember that neural networks and modern AI are a very fancy version of linear regression. Yes, that boring thing you learned in statistics class. (Actually, logistic regression, but you likely fell asleep before that lecture.) All the latest machine learning algorithms are really nothing more than a big soup of equations and code, albeit a very well-tuned soup. There’s nothing sexy about that.
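To make that concrete, here is a minimal sketch in Python (the weights and inputs are made up purely for illustration) of what logistic regression, the humble ancestor of these models, actually computes: a weighted sum pushed through a squashing function.

```python
import numpy as np

def sigmoid(z):
    # Squash any number into the range (0, 1) so it can be read as a probability.
    return 1.0 / (1.0 + np.exp(-z))

def logistic_regression(x, weights, bias):
    # Multiply, add, squash: the entire "equation soup" for one prediction.
    return sigmoid(np.dot(x, weights) + bias)

x = np.array([0.5, -1.2, 3.0])        # one example with three made-up features
weights = np.array([0.8, 0.1, -0.4])  # illustrative "learned" coefficients
bias = 0.2
print(logistic_regression(x, weights, bias))  # a probability between 0 and 1
```

A neural network, roughly speaking, stacks many of these weighted sums and squashes on top of one another; the recipe stays the same, only the number of knobs grows.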
Competence without Comprehension
On a more conceptual level, the best metaphor I’ve found borrows Daniel Dennett’s “competence without comprehension.” Dennett used the expression to describe evolution, but it describes the algorithms of modern artificial intelligence just as well. If evolution is the process of randomly stumbling around an impossibly large gene space towards improved evolutionary fitness, AI algorithms are blindly walking through an exponentially large parameter space (the “genes” of our model, if you will) towards a better fit to the data. The principal difference is that AI’s “evolution” can be sped up, so that models can be trained fast enough for a data scientist to collect a paycheck. But the sheer dumbness of the process is astounding.
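To see just how blind that walk is, here is a toy sketch (Python again, with invented data and an arbitrary learning rate) of the kind of loop that does the walking: at every step it nudges the model’s numbers in whatever direction reduces the error, and nowhere does anything resembling understanding appear.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # 100 made-up examples, 3 features each
true_w = np.array([2.0, -1.0, 0.5])    # the hidden pattern in the fake data
y = (X @ true_w + rng.normal(scale=0.1, size=100) > 0).astype(float)

w = np.zeros(3)                        # start off knowing nothing
for step in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))  # current guesses
    grad = X.T @ (p - y) / len(y)       # which way reduces the error
    w -= 0.1 * grad                     # blindly step that way

print(w)  # ends up pointing along true_w: competence, no comprehension
```

Real systems have millions or billions of such numbers and fancier update rules, but the character of the process is the same: adjust, measure, repeat.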
We find it hard to accept that a mindless set of equations can produce such competence without an iota of human comprehension, at least not in the classical humanist sense. We constantly use the active voice to describe both evolution and AI, and that phrasing imparts agency, as if the process understood what it was doing. For example, we read that “Darwin’s finches have evolved into …” (Science Daily) as if the birds actually understand evolution and select the genes they individually pass to the next generation. Of course, the point of evolution is that they don’t; natural selection takes care of it for them. Similarly, when we read that “Artificial Intelligence has learned” (Science Magazine), we are seduced by the idea of machines that truly comprehend. But these equation soups don’t really understand anything; they are just highly tuned to perform specific tasks.
Limitations of AI
Sometimes we are biased to believe that if a computer can do some hard things better than we can, it must be able to do everything better than we can, stoking potentially unwarranted fears. For example, a recent Gallup poll found that 73% of respondents believe that AI will eliminate more jobs than it creates, even as the World Economic Forum projects that machines will create 58 million net new jobs by 2022. These misperceptions about AI do not just make us more susceptible to Terminator-style fear-mongering; they also leave us open to unrealistically Pollyannaish visions of AI. For example, Beauty.AI wanted to remove human “bias” from beauty pageants by leveraging the “impartial opinion” of algorithms. The computer-selected winners skewed white, demonstrating how good AI is at learning from our most ignominious biases. The sex appeal of AI blinds us to what AI can and cannot really do.
AI is not much more than an incredibly competent equation soup. This is not to downplay the very substantial impacts, both positive and negative, that the technology will have on society. Indeed, AI will be one of the most transformative technologies of the coming century. But that’s all the more reason for us to understand AI better and stop fetishizing it.