
Criteo’s Prediction on Hadoop: How and Why it Came About

by Guillaume Turri
February 24, 2015

At Criteo we display online advertisements, and we sell clicks to our clients. So we have to predict, for each of our 2 billion daily banners, whether it is likely to be clicked or not. That's why we use machine learning, and we feed the algorithms of our well-oiled engine with big data.

But it has not always been like this. Actually, the shape of our prediction engine (and its underlying architecture) had to evolve as our business grew.

At first, when our dataset fit in a SQL table, there was no need to invoke the name of Hadoop. At that time, we implemented regression tree algorithms in C#. Training ran as a single process on a single server. And we were happy.

An issue with regression trees is that their size can explode exponentially as you add dimensions. Before our algorithm reached those limits, we improved it. First we used Bayesian networks for a while. Then we implemented generalized linear models. This change improved our performance a lot. And we were proud.
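
To make the model shift concrete, here is a minimal sketch of logistic regression, one common generalized linear model for click prediction, trained with plain mono-threaded gradient descent. The feature values, labels, and learning rate are made up for illustration; this is not Criteo's actual code.

```python
import numpy as np

# Hypothetical toy data: each row is a feature vector for one banner display,
# each label is 1 if the banner was clicked, 0 otherwise.
X = np.array([[1.0, 0.2, 0.0],
              [1.0, 0.9, 1.0],
              [1.0, 0.1, 1.0],
              [1.0, 0.7, 0.0]])
y = np.array([0.0, 1.0, 1.0, 0.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient(w, X, y):
    # Gradient of the logistic loss over the whole dataset.
    p = sigmoid(X @ w)
    return X.T @ (p - y) / len(y)

# Plain (mono-threaded) gradient descent.
w = np.zeros(X.shape[1])
learning_rate = 0.5
for _ in range(1000):
    w -= learning_rate * gradient(w, X, y)

# Predicted click probability for a new banner.
print(sigmoid(np.array([1.0, 0.5, 1.0]) @ w))
```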

But then, as the needs of the business increased, we had to add another server. And another one. And… we were worried that our architecture would reach its limits in the near future.

Migrating our existing solution to our Hadoop cluster seemed a healthy way to go. It was far from trivial, but it was definitely where we wanted to be. To achieve this, a lot of questions had to be answered. How do we distribute a single-threaded gradient descent across several mappers and reducers? How do we run an existing C# codebase on a Linux Hadoop cluster? How do we keep the reliability of an architecture developed and tuned over the years while making such a big-bang change?
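
The post doesn't spell out the final design, but a common way to distribute a batch gradient step over MapReduce is to have each mapper compute a partial gradient on its shard of the training data, a reducer sum those partials, and the driver apply the update before launching the next iteration. The sketch below is a hypothetical, Hadoop-free simulation of that pattern in plain Python, not Criteo's implementation.

```python
import numpy as np

def mapper(shard, w):
    """Emit the partial gradient and example count for one data shard."""
    X, y = shard
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted click probabilities
    return X.T @ (p - y), len(y)

def reducer(partials):
    """Sum the partial gradients from all mappers into one full gradient."""
    total_grad = sum(g for g, _ in partials)
    total_count = sum(n for _, n in partials)
    return total_grad / total_count

# Hypothetical shards, standing in for HDFS blocks on different nodes.
rng = np.random.default_rng(0)
shards = []
for _ in range(3):
    X = rng.normal(size=(100, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
    shards.append((X, y))

w = np.zeros(4)
for _ in range(200):                              # one MapReduce job per iteration
    partials = [mapper(s, w) for s in shards]     # map phase
    grad = reducer(partials)                      # reduce phase
    w -= 0.5 * grad                               # driver applies the update

print(w)
```

As for the C# question, Hadoop Streaming is one standard option for running non-Java code, since it accepts any executable that reads stdin and writes stdout as a mapper or reducer; whether that was the route taken here is not stated in the article.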

Our Prediction and Scalability teams worked hand in hand to answer those questions. Our data scientists showed us how we could distribute our learning. Our technology watch identified tools that would fit our stack. Reliability was handled as always, thanks to our engineering culture.

This was one of our major projects last year. It took time and effort, but the outcome met expectations: we have been able to increase the size of our training set while nearly doubling the number of trained algorithms. However, I won't have time to say much more about it: there are still a lot of improvements and new technologies I want to test!

If you are curious about what we do and want to join us, have a look at our tech blog and drop us a line at r&[email protected]!

By Guillaume Turri, Software Developer, R&D, Criteo


(Image credit: Diana Robinson, via Flickr)

