Kris Hammond

We recently had the opportunity to sit down with Kris Hammond, Chief Scientist at Narrative Science. Narrative Science focuses on automatically generating text from data, turning raw data into insightful accounts. Hammond has spent over 20 years working in and developing the AI labs at the University of Chicago and Northwestern University, making him uniquely placed to offer perspectives on the past, present, and future of AI. In the first part of our discussion, he covered the technologies that will shape the future of machine learning; in this installment, Hammond discusses the future of AI, and whether robots could actually wipe out humanity and steal our jobs.

When we talk about AI, almost anyone you speak with will say that genuine artificial intelligence, meaning a system as intelligent as, if not more intelligent than, a human being, is simply not feasible or possible. Unless we start talking about machines killing us, and then the response is, “Oh my god, we have to be terrified of this.”

I think the reality is that we have complete flexibility in terms of building the things we’re going to build. A true AI of the future is going to have a goal structure associated with it. Really, all you need to do is make sure that one of the higher-priority goals is: don’t kill everybody. I know Elon Musk is a very prominent figure and a very smart man, but when it comes to existential threats, I’m actually a little more worried about New York being underwater in 30 years. That worries me a lot more than the vague possibility of an AI that decides to hunt us down and kill us. In fact, from a Narrative Science point of view, we look at what we do and think: what’s Quill going to do? Explain someone to death? Because that’s what it does: explain things.

So I think when we get further down the line, and closer and closer to what looks like a genuine, complete AI system, that’s when it’s time to consider, “Okay, what are the constraints going to be?” But the notion that we should start regulating now, as Musk suggests? I think that’s absurd. There is no point in regulating something that, at this point, is a glint in people’s eyes. Now, I actually do believe that we will have complete AI. I believe that people are causal beings, that AI and computers live in the same causal environment, and that we will have machines that are as intelligent as we are, if not more so. Maybe in my lifetime.

But it’s not time to worry about killing sprees quite yet. Although my concern is that right now at least a third of the marriages in the United States were the result of online dating. Which means that there are algorithms out there that are actually determining the breeding habits of people in the United States. If I were an AI, I wouldn’t blow everyone up. I’d just insert myself into that process and make sure the system matched up people who were nice and calm, and make the entire species calm for the rest of time.

For a lot of people, historically, AI has meant ‘killer robots’. I understand that. But nowadays, there seems to be a huge focus on AI stepping in and taking over jobs, and on automation in general. And for most of us, there’s still a focus on the blue-collar side, but I think there’s a growing awareness of the white-collar side.

I think the reality is that AI is not going to take over jobs; it’s going to take over work. If you look at the work that Watson is taking on, that Narrative Science is taking on, it’s the work that’s not particularly interesting or enjoyable for people. Having Narrative Science step in to look at the data and do the reporting means that the people who were doing that reporting can step away from commodity work and actually start doing what a data scientist or an analyst should be doing. They can focus on more speculative, exploratory, discovery-oriented work against that data, to find new things instead of reporting on the things they have already found.

I think for AI in general, the goal is not to make the machine smarter so that it destroys us, but to make machines smarter and, as a result, put us in a position where we no longer have to deal with the machine as an unintelligent device that requires frequent input and supervision. We can deal with the machine as a partner whose job is to make us smarter. We get smarter because it gets smarter. Because who in the world wants to actually look at a spreadsheet, or figure out what’s going on in a visualization, or dig through massive amounts of textual data to get the answer to a question? No one wants to do that. As the machine takes more and more of that on, our lives become more human.

And so, AI moving forward is part of a process that actually more deeply humanizes us in our work, in our lives, in our thinking. I think there will be a moment when we finally embrace that, but I wish we could get there sooner: understand the excitement of having intelligent partners whose job is to help us, to move us forward, and to give us more of what it means to be human.

(Image credit: Saad Faruque)
