Machine learning is Big Data at its most extreme: machines processing vast, disparate data sets to find the patterns buried within and produce insights beyond human recognition.
In 2016, Google's worth was reported to be $336 billion, due in large part to the advanced learning algorithms the company employs. Google was among the first companies to recognize the importance of incorporating machine learning into its business processes, and the technology powerhouse hasn't stopped there.
As happens when boundless potential meets hard reality, enterprises now face a long, painful slog through the trenches of disillusionment and disappointment as they pursue the business transformation promised by machine learning. The machine learning hype cycle is in overdrive, inflating expectations of magically easy, automated solutions to complex business problems.
Back in January 2006, BusinessWeek published an article entitled "Math Will Rock Your World," declaring, "There has never been a better time to be a mathematician." Although that article is almost 15 years old, its case remains as valid as ever.
This article is part of a media partnership with PyData Berlin, a group helping support open-source data science libraries and tools. To learn more about this topic, please consider attending our fourth annual PyData Berlin conference, June 30–July 2, 2017, where Miroslav Batchkarov and other experts will be giving talks.
Deep learning is a subfield of machine learning comprising several approaches to the single most important goal of AI research: allowing computers to model our world well enough to exhibit something like what we humans call intelligence. On a basic conceptual level, these approaches share a common idea: learning layered representations of data.
While most businesses don't earn revenue by processing data, they do spend a large share of their hard-earned revenue manually processing and validating data, and ultimately performing manual tasks that don't scale. But at what point does this manual involvement become a cost burden on your business?
Matti Lyra and other experts will also be giving talks at the PyData Berlin conference mentioned above (June 30–July 2, 2017).
R is ubiquitous in the machine learning community. Its ecosystem of more than 8,000 packages makes it the Swiss Army knife of modeling applications. Similarly, Apache Spark has rapidly become the big data platform of choice for data scientists, thanks in part to its ability to perform calculations quickly via features like in-memory computation.
For people in the know, machine learning is old hat. Even so, it's set to become the data buzzword of the year, for a rather mundane reason: when things get complex, people expect technology to "automagically" solve the problem, whether that's automated financial product consultation or shopping in the supermarket.
The Estimators API in tf.contrib.learn is a very convenient way to get started with TensorFlow. What is really compelling about the Estimators API is how easy it makes building distributed TensorFlow models.