In today’s IT world, everything is about being fast, flexible and efficient. We work agile, build prototypes and use fast-scaling, adaptive cloud infrastructure. At the same time, advances in the fields of AI and machine learning seem to make a business world possible in which many tasks are optimized or even taken over by learning algorithms and intelligent software.
But whoever thinks of this world in terms of just picking the low-hanging fruit from the growing AI tree might be in for a big disappointment. Sure, rapid hardware advancements and cloud infrastructure enable fast computations, but they don’t solve one of the core challenges inherent in the way most algorithms learn: they rely on mathematical properties derived from input data.
Of course, this is not a secret at all. In many discussions and contributions focusing on the limits of AI and machine learning, this topic comes up quickly. As Peter Guy puts it in the South China Morning Post, “AI is only as smart as the given data input”, or as Jason Pontin writes in WIRED: “new situations baffle artificial intelligences, like cows brought up short at a cattle grid”. However, this limitation does not seem to be very prominent when businesses envision the great potential of AI to move them forward. Gartner estimates that in 2021, AI augmentation will generate $2.9 trillion in business value and recover 6.2 billion hours of worker productivity (see “Forecast: The Business Value of Artificial Intelligence, Worldwide, 2017-2025”).
How much time and effort are we willing to spend training machines?
To meet these high expectations of AI leveraging business processes and boosting productivity, we need to deal with the dependency on input data when learning. Otherwise, I’m seriously wondering who is going to teach all those machines. Or rather: who is going to provide them with all the adequate input data they need in order to drive a business forward? When you listen to Michael Chui and James Manyika in their McKinsey podcast about the real-world potential and limitations of artificial intelligence, you get an impression of how tedious and time-consuming it can be to teach a machine and generate adequate training data for only one specific machine learning task. And of how much this is often underestimated when thinking about “self-learning” machines.
This underestimation could grow into a serious issue, because in order to learn properly, algorithms have some requirements that seem to be becoming scarcer in today’s IT and business world: time, consistency and extensive variety. If we want algorithms to pick up patterns and machines to make smart decisions, we need to teach them over time. Or at least give them data, rules and an environment in which they can explore and build their own basis for learning in terms of success and failure. But the large effort it takes to create a proper environment for machine learning is still frequently overlooked. Possibly because the expectation of AI is somewhat different: it is expected to leverage business and to further elevate rapid, flexible and efficient processes, not to slow them down by causing an unforeseen workload.
Welcome to the team: the new AI intern
One way to deal with the dilemma might be to shift our view of AI systems and see them more as the learning systems that they are. A new AI tool in the business world is exactly that: a newbie. When you’ve built a functional prototype based on a test set of data, don’t expect it to act as an expert out of nowhere. Look at it more like your new intern. It still needs time – meaning more and more different data constellations coming its way – and consistent feedback about which decisions paid off and which didn’t. Which mistakes were the worst and most expensive, and should be avoided at all costs in the future. Which patterns are a reliable marker for success, and which are just pure noise and bear no information at all. All of this is best learned from experience over time. Only consistent feedback on a variety of different scenarios is going to give an algorithm a fighting chance at making steady and beneficial decisions. And maybe a good way to provide that feedback is to incorporate the new AI system into the business processes in such a way that it can improve over time while working hand in hand with its human expert teacher – as sketched below.
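To make the intern analogy a bit more tangible, here is a minimal sketch of what such a feedback loop could look like in code. It assumes a scikit-learn-style online classifier; the features, labels and the handle_case helper are purely hypothetical placeholders for whatever a real business process would provide:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# A linear model that supports incremental (online) learning.
model = SGDClassifier(loss="log_loss", random_state=42)

# Hypothetical initial training on a small prototype data set:
# 100 historical cases with 5 features each, outcomes labeled by experts.
X_initial = np.random.rand(100, 5)
y_initial = np.random.randint(0, 2, 100)
model.partial_fit(X_initial, y_initial, classes=[0, 1])

def handle_case(features, expert_label):
    """The model suggests a decision, the human expert confirms or
    corrects it, and that feedback becomes new training data."""
    suggestion = model.predict(features.reshape(1, -1))[0]
    # Log disagreements so the worst, most expensive mistakes stay visible.
    if suggestion != expert_label:
        print(f"model suggested {suggestion}, expert said {expert_label}")
    # Fold the expert's verdict back into the model: the "intern" learns.
    model.partial_fit(features.reshape(1, -1), [expert_label])
    return suggestion

# Simulated stream of new cases with consistent expert feedback over time.
for _ in range(10):
    handle_case(np.random.rand(5), np.random.randint(0, 2))
```

The point is not the specific model but the loop: every expert correction flows straight back into training, so the system accumulates exactly the kind of consistent feedback on varied scenarios described above.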
How well do we actually know what we are trying to teach?
But being the human expert teacher to an AI system also means we need to question what we do and why, and to become more consistent about our goals. Otherwise, we cannot enable a machine to reliably take over a task. Today’s AI algorithms are extremely good at specific things, but they perform miserably if you confuse them by trying to teach them too many mixed-up aspects. There is a quote from Google DeepMind research scientist Raia Hadsell stating, “There is no neural network in the world, and no method right now that can be trained to identify objects and images, play Space Invaders, and listen to music”, which is why James Vincent, in his article on the biggest problems facing today’s AI, calls them “idiot savants”. So we need to become very clear about what exactly we want a specific algorithm to improve. Otherwise, we might only create neurotic pieces of software that perform random operations in record time and act like very bad students.
Maybe AI learning is forcing us to improve our business decisions altogether
Looking at young AI systems as interns rather than experts for a while might be an unforeseen investment. Not an investment in financial terms, but in slowing things down. In taking the time to clearly define goals, break them down into data and gather consistent experience. And even though that somewhat goes against the fast-paced mentality, I believe it is an investment worth making, even a necessary one. First, because at the moment there is no other way to implement AI. The more we aim for quick wins and low-hanging fruit, trying to cut corners on the what and the why, the less the resulting AI machine is going to meet our expectations. Second, I believe the kind of self-reflection this demands of us contains a hidden value. Being forced to think about what we do, why we do it and how we try to achieve it has a value in itself. And maybe much more potential than blindly automating processes and boosting KPIs. Maybe this, in the end, already contributes significantly to better decisions and needs to be seen as just as important a part of the AI journey as the AI itself.
Annina Neumann will be speaking at Data Natives 2018 – the data-driven conference of the future, hosted in Dataconomy’s hometown of Berlin. Data Natives brings together a global community of data-driven pioneers to explore the technologies that are shaping our world – from big data to the blockchain, from AI to the Internet of Things.
On the 22nd & 23rd of November, 110 speakers and 1,600 attendees will come together to explore the tech of tomorrow. As well as two days of inspiring talks, Data Natives will also bring informative workshops, satellite events, art installations and food to our data-driven community, promising an immersive experience in the tech of tomorrow.