At Criteo we display online advertisements, and we sell clicks to our clients. So we have to predict, for each of our 2 billion daily banners, how likely it is to be clicked. That’s why we use machine learning, and we feed the algorithms of our well-oiled engine with big data.
But it has not always been like this. Actually, the shape of our prediction engine, and its underlying architecture, had to evolve as our business grew.
At first, when our dataset fit in a SQL table, there was no need to invoke the name of Hadoop. At that time, we implemented regression tree algorithms in C#. Learning ran as a single process on a single server. And we were happy.
An issue with regression trees is that their size can explode exponentially as you add dimensions. Before our algorithm reached those limits, we improved it. First we used Bayesian networks for a while. Then we implemented generalized linear models, as in the sketch below. This change significantly improved our prediction performance. And we were proud.
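To give an idea of what such a model looks like, here is a minimal sketch of a generalized linear model (a logistic regression over sparse features) trained by stochastic gradient descent, written in C# since that was our language at the time. The class, its parameters, and the learning rate are illustrative assumptions, not our production code.

```csharp
using System;
using System.Linq;

// Minimal sketch of a generalized linear model (logistic regression)
// for click prediction. The feature encoding and learning rate are
// illustrative, not Criteo's actual model.
class LogisticRegression
{
    private readonly double[] weights;
    private readonly double learningRate;

    public LogisticRegression(int featureCount, double learningRate = 0.1)
    {
        weights = new double[featureCount];
        this.learningRate = learningRate;
    }

    // Predicted click probability for a sparse feature vector,
    // given as the indices of the features that are "on".
    public double Predict(int[] activeFeatures)
    {
        double score = activeFeatures.Sum(i => weights[i]);
        return 1.0 / (1.0 + Math.Exp(-score));  // logistic link function
    }

    // One stochastic gradient descent step on a single example.
    public void Update(int[] activeFeatures, bool clicked)
    {
        double error = (clicked ? 1.0 : 0.0) - Predict(activeFeatures);
        foreach (int i in activeFeatures)
            weights[i] += learningRate * error;
    }
}

static class Demo
{
    static void Main()
    {
        var model = new LogisticRegression(featureCount: 1000);
        // One toy example: features 3 and 42 are on, and the banner was clicked.
        model.Update(new[] { 3, 42 }, clicked: true);
        Console.WriteLine(model.Predict(new[] { 3, 42 }));
    }
}
```

Unlike a tree, the model's size here grows linearly with the number of features, which is part of what made this family of models attractive.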
But then, as the needs of the business increased, we had to add another server. And another one. And… we were worried that our architecture would reach its limits in the near future.
Migrating our existing solution to our Hadoop cluster seemed the healthy way to go. It was far from trivial, but it was definitely where we wanted to be. To get there, a lot of questions had to be answered. How do we distribute a single-threaded gradient descent across several mappers and reducers? How do we run an existing C# codebase on a Linux Hadoop cluster? How do we keep the reliability of an architecture developed and tuned over the years when we apply such a big bang?
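To make the first question concrete: the gradient of the loss is a sum over training examples, so it can be computed shard by shard and then aggregated. Here is a minimal sketch of one distributed gradient descent iteration in the map/reduce style, where each "mapper" computes a partial gradient on its shard and a "reducer" sums the partials. The in-memory arrays stand in for HDFS shards, and the loss and learning rate are illustrative assumptions; the actual Hadoop job plumbing is omitted.

```csharp
using System;
using System.Linq;

// Sketch of one distributed gradient descent iteration: mappers compute
// partial gradients of the logistic loss on their data shards, a reducer
// sums them, and the driver applies the update. Toy data, not real jobs.
static class DistributedGradientDescent
{
    // Mapper: partial gradient over one shard of (features, label) pairs.
    static double[] PartialGradient(double[] weights,
                                    (double[] x, double y)[] shard)
    {
        var grad = new double[weights.Length];
        foreach (var (x, y) in shard)
        {
            double p = 1.0 / (1.0 + Math.Exp(-x.Zip(weights, (a, w) => a * w).Sum()));
            for (int j = 0; j < grad.Length; j++)
                grad[j] += (p - y) * x[j];
        }
        return grad;
    }

    // Reducer: element-wise sum of the mappers' partial gradients.
    static double[] SumGradients(double[][] partials) =>
        Enumerable.Range(0, partials[0].Length)
                  .Select(j => partials.Sum(g => g[j]))
                  .ToArray();

    static void Main()
    {
        // Two toy shards of (features, label) pairs.
        var shards = new[]
        {
            new[] { (new[] { 1.0, 0.5 }, 1.0), (new[] { 0.2, 1.0 }, 0.0) },
            new[] { (new[] { 0.9, 0.1 }, 1.0), (new[] { 0.1, 0.9 }, 0.0) },
        };
        var weights = new double[2];
        const double learningRate = 0.1;

        for (int iter = 0; iter < 100; iter++)
        {
            var partials = shards.Select(s => PartialGradient(weights, s)).ToArray();
            var gradient = SumGradients(partials);
            for (int j = 0; j < weights.Length; j++)
                weights[j] -= learningRate * gradient[j];
        }
        Console.WriteLine($"weights: {string.Join(", ", weights)}");
    }
}
```

The design point is that each iteration only moves partial gradients over the network, never the raw data, which is what makes the map/reduce decomposition worthwhile at our scale.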
Our Prediction and Scalability teams worked hand in hand to answer those questions. Our data scientists showed us how we could distribute our learnings. Our technology watch identified tools that fit our stack. Reliability was handled as always, thanks to our engineering culture.
This was one of our major projects last year. It took time and effort, but the outcome met our expectations: we increased the size of our training set while nearly doubling the number of trained algorithms. I won’t have time to say much more about it, though: there are still a lot of improvements and new technologies I want to test!
If you are curious about what we do and want to join us, have a look at our tech blog and drop us a line at r&[email protected]!
-By Guillaume Turri, Software Developer, R&D, Criteo
(Image credit: Diana Robinson, via Flickr)