The Arrival of Scalable, Fault-Tolerant Big Data Ingestion

by Himanshu Bari
September 7, 2015
in Data Science, Technology & IT

The concept of data ingestion has existed for quite some time, yet it remains a challenge for many enterprises that need to move data in and out of various dependent systems. The challenge is even greater when ingesting data into and out of Hadoop: Hadoop ingestion demands processing power and has requirements that no single solution on the market currently meets.

Given the variety of data sources that need to pump data into Hadoop, customers often end up creating one-off data ingestion jobs. These jobs copy files over FTP and NFS mounts or rely on standalone tools like DistCp. Because they stitch together multiple tools, they run into problems with manageability, failure recovery, and the ability to scale to handle data skew.
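
As an illustration of how brittle these one-off jobs tend to be, here is a minimal sketch of a hand-rolled copy job in Java; the NFS mount and landing paths are hypothetical. Notice everything it lacks: no checkpointing (a restart recopies every file), no retry on failure, and no way to parallelize across large or skewed data sets.

```java
// Hypothetical one-off copy job: walk an NFS-mounted export and copy each
// file into a local landing directory. There is no record of what has already
// been copied, no retry logic, and no parallelism.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.List;
import java.util.stream.Stream;

public class OneOffCopyJob {
    public static void main(String[] args) throws IOException {
        Path source = Path.of("/mnt/nfs/exports");  // hypothetical NFS mount
        Path target = Path.of("/data/landing");     // hypothetical landing dir

        try (Stream<Path> walk = Files.walk(source)) {
            List<Path> files = walk.filter(Files::isRegularFile).toList();
            for (Path file : files) {
                // Recreate the source layout under the target directory.
                Path dest = target.resolve(source.relativize(file));
                Files.createDirectories(dest.getParent());
                Files.copy(file, dest, StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }
}
```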

So how do enterprises design an ingestion platform that not only addresses the scale needed today but also scales out to address the needs of tomorrow? Our solution is DataTorrent dtIngest, the industry’s first unified stream and batch data ingestion application for Hadoop.

At DataTorrent, we work with some of the world’s largest enterprises, including leaders in IoT and ad tech. These enterprises must ingest massive amounts of data with minimal latency from a variety of sources, and dtIngest enables them to establish a common pattern for ingestion across various domains. Take a look at the following diagram, for example.

[Diagram: a common ingestion pattern with stages Input, Filter, Enrich, Process, Segregate, and Output]

Each of the blocks signifies a specific stage in the ingestion process:

  • Input – Discover and fetch the data for ingestion. The data may come from file systems, messaging queues, web services, sensors, databases, or even the outputs of other ingestion apps.
  • Filter – Analyze the raw data and identify the interesting subset. The filter stage is typically used for quality control, for sampling the dataset, or for parsing the data.
  • Enrich – Fill in the missing pieces of the data. This stage often involves talking to external data sources to supply missing attributes. Sometimes it also transforms the data into a form more suitable for downstream processes.
  • Process – Perform some lightweight processing to further enrich the event or transform it from one form into another. Unlike the enrich stage, which typically relies on external systems, the process stage usually computes using only the existing attributes of the data.
  • Segregate – Before the data is handed to downstream systems, it often makes sense to bundle similar data sets together. This stage is not always necessary, but segregation, often in the form of compaction, makes sense most of the time.
  • Output – With Project Apex, outputs mirror inputs in terms of what they can do and are just as essential. Where the input stage fetches data, the output stage rests the data, either on durable storage systems or in other processing systems.

There are many different ways in which these stages can occur, and the order, or even the number of instances required, depends on the specific ingestion application. The sketch below illustrates one way the stages might compose.
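
To make the pattern concrete, here is a minimal, self-contained Java sketch that chains the six stages over an in-memory batch. It illustrates the pattern only and is not the DataTorrent or Apache Apex API; the Event and Stage types and the lookupRegion helper are hypothetical and exist only for this example.

```java
// Illustrative only: a toy version of the Input -> Filter -> Enrich ->
// Process -> Segregate -> Output pattern described above.
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class IngestionPipelineSketch {

    // An event is just a bag of string attributes for this sketch.
    record Event(Map<String, String> attributes) {}

    // Each stage maps a batch of events to a batch of events.
    interface Stage extends Function<List<Event>, List<Event>> {}

    public static void main(String[] args) {
        // Input: a real pipeline would discover data on file systems,
        // messaging queues, web services, sensors or databases.
        List<Event> input = List.of(
                new Event(Map.of("id", "1", "status", "ok")),
                new Event(Map.of("id", "2", "status", "corrupt")));

        // Filter: keep the interesting subset (quality control / sampling).
        Stage filter = batch -> batch.stream()
                .filter(e -> "ok".equals(e.attributes().get("status")))
                .collect(Collectors.toList());

        // Enrich: plug in missing attributes, typically via an external lookup.
        Stage enrich = batch -> batch.stream()
                .map(e -> {
                    Map<String, String> a = new HashMap<>(e.attributes());
                    a.putIfAbsent("region", lookupRegion(a.get("id")));
                    return new Event(a);
                })
                .collect(Collectors.toList());

        // Process: lightweight transformation using existing attributes only.
        Stage process = batch -> batch.stream()
                .map(e -> {
                    Map<String, String> a = new HashMap<>(e.attributes());
                    a.put("id", "rec-" + a.get("id"));
                    return new Event(a);
                })
                .collect(Collectors.toList());

        List<Event> processed = process.apply(enrich.apply(filter.apply(input)));

        // Segregate: bundle similar events together before handing them off.
        Map<String, List<Event>> byRegion = processed.stream()
                .collect(Collectors.groupingBy(e -> e.attributes().get("region")));

        // Output: rest the data on durable storage or another system
        // (printing stands in for that here).
        byRegion.forEach((region, events) -> System.out.println(region + " -> " + events));
    }

    // Hypothetical external lookup used by the enrich stage.
    private static String lookupRegion(String id) {
        return Integer.parseInt(id) % 2 == 0 ? "emea" : "amer";
    }
}
```

In a production pipeline each of these stages would typically run as a distributed, fault-tolerant operator with checkpointing rather than as an in-memory function call.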

The DataTorrent dtIngest app, for example, simplifies the collection, aggregation, and movement of large amounts of data to and from Hadoop for a more efficient data processing pipeline. The app was built for enterprise data stewards and aims to make their job of configuring and running Hadoop data ingestion and data distribution pipelines a point-and-click process.

Some sample use cases of dtIngest include the following (a programmatic sketch of the first one appears after the list):

  • Bulk or incremental data loading of large and small files into Hadoop
  • Distributing cleansed/normalized data from Hadoop
  • Ingesting change data from Kafka/JMS into Hadoop
  • Selectively replicating data from one Hadoop cluster to another
  • Ingesting streaming event data into Hadoop
  • Replaying log data stored in HDFS as Kafka/JMS streams
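
The first of these use cases, bulk loading files into Hadoop, can be sketched with the standard Hadoop FileSystem Java API as shown below. This only illustrates the underlying operation; dtIngest itself exposes it as a configurable, point-and-click pipeline. The NameNode address, staging directory, and target path are hypothetical placeholders.

```java
// Sketch of bulk-loading local files into HDFS with the Hadoop FileSystem API.
import java.io.File;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BulkHdfsLoad {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical NameNode address and target directory.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);
        Path target = new Path("/data/ingest/incoming");
        if (!fs.exists(target)) {
            fs.mkdirs(target);
        }

        // Copy every file in a local staging directory into HDFS.
        File staging = new File("/var/staging");
        File[] files = staging.listFiles(File::isFile);
        if (files != null) {
            for (File f : files) {
                fs.copyFromLocalFile(new Path(f.getAbsolutePath()), target);
            }
        }
        fs.close();
    }
}
```

Incremental loading would additionally have to track which files have already been copied, which is exactly the kind of bookkeeping a dedicated ingestion application takes care of.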

Future additions to dtIngest will include new sources and integration with data governance.

(image credit: Kevin Steinhardt)

Tags: Data ingestion, DataTorrent, dtIngest
