Why AI Isn’t Going to Kill You Or Steal Your Job

by Eileen McNulty
May 26, 2016
in Artificial Intelligence, Machine Learning

We recently had the opportunity to sit down with Kris Hammond, Chief Scientist at Narrative Science, a company focused on automatically generating text from data, turning raw data into insightful accounts. Hammond has spent over 20 years working in and developing the AI labs at the University of Chicago and Northwestern University, making him uniquely placed to offer perspective on the past, present and future of AI. In the first part of our discussion, we covered the technologies that will shape the future of machine learning; in this installment, Hammond discusses the future of AI, and whether or not robots could actually wipe out humanity and steal our jobs.


When we talk about AI, almost anyone you talk with will say they think that genuine artificial intelligence, building a system as intelligent as, if not more intelligent than, a human being, is simply not feasible or possible. That is, unless we start talking about machines killing us; then the response is, “Oh my god, we have to be terrified of this.”

I think the reality is that we have complete flexibility in terms of building the things that we’re going to build. A true AI is going to have a goal structure associated with it, and really, all you need to do is make sure that one of the higher-priority goals is “don’t kill everybody.” I know Elon Musk is a very prominent figure and a very smart man, but when it comes to existential threats, I’m actually a little more worried about New York being underwater in 30 years. That worries me a lot more than the vague possibility of an AI that decides to hunt us down and kill us. In fact, from a Narrative Science point of view, when we think about that, what is Quill going to do? Explain someone to death? Because that’s what it does: explain things.

So I think when we get a little further down the line, and we get closer and closer to what looks like a genuine, complete AI system, that’s when it’s time to consider, “Okay, what are the constraints going to be?” But the notion that we should start regulating now, as Musk suggests? I think that’s absurdist. There is no point in regulating something that is, at this point, just a glint in people’s eyes. Now, I actually do believe that we will have complete AI. I believe that people are causal beings, that AI and computers live in the same causal environment, and that we will have machines that are as intelligent as we are, if not more so. Maybe in my lifetime.

But it’s not time to worry about killing sprees quite yet. Though one concern of mine: at least a third of marriages in the United States right now are the result of online dating. Which means that there are algorithms out there that are actually determining the breeding habits of people in the United States. If I were an AI, I wouldn’t blow everyone up. I’d just insert myself into that process and make sure the system matched up people who were nice and calm, and make the entire species calm for the rest of time.


For a lot of people, historically, AI has meant ‘killer robots’. I understand that. But nowadays there seems to be a huge focus on AI stepping in and taking over jobs, and on automation in general. For most of us, the focus is still on the blue-collar side, but I think there’s a growing awareness of the white-collar side too.

I think the reality is that AI is not going to take over jobs; it’s going to take over work. If you look at the work that Watson is taking on, that Narrative Science is taking on, it’s the work that’s not particularly interesting or enjoyable for people. Having Narrative Science step in to look at the data and do the reporting means that the people who were doing that reporting can step away from commodity work and actually start doing what a data scientist or an analyst should be doing. They can focus on more speculative work, more discovery work, exploratory work against that data, to find new things instead of reporting on the things they have already found.

I think for AI in general, the goal is not to make the machine smarter and destroy us, but to make machines smarter and, as a result, put us in a position where we no longer have to deal with the machine as an unintelligent device that requires frequent input and supervision. We can deal with the machine as a partner whose job is to make us smarter. We get smarter because it gets smarter. Because who in the world wants to actually look at a spreadsheet, or figure out what’s going on in a visualization, or dig through massive textual data to get the answer to a question? No one wants to do that. As the machine takes more and more of that on, our lives become more human.

And so, AI moving forward is part of the process of more deeply humanizing us in our work, in our lives, in our thinking. I think there will be a moment when we finally embrace that, but I wish we could get to it now and understand the excitement of having intelligent partners whose job is to help us, to move us forward, and to give us more of what it means to be human.

(Image credit: Saad Faruque)

Tags: AI, algorithms, Automation, Narrative Science, Online Dating, Weekly Newsletter
