
Big Data isn’t the problem – data copies are

by Ash Ashutosh
May 24, 2016
in Big Data, Understanding Big Data

Big Data. It’s everyone’s favourite buzzword.

The Big Data trend has the potential to revolutionise the IT industry by offering businesses new insight into data they previously ignored. Many see it as the Holy Grail for businesses today: the route to understanding exactly what their customers want – and to responding appropriately.

In an age where Big Data is the mantra and terabytes quickly become petabytes, the surge in data quantities is causing the complexity and cost of data management to skyrocket. At the current rate, by the end of this year the world will be producing more digital information than it can store.

When the words ‘Big Data’ are used, there is much discussion about how to use, manage and store data as a strategic advantage. What is often forgotten is that most organisations do not need the specialised Big Data applications promoted under this hype. What is useful – and in many cases a necessary prerequisite for the efficient use and analysis of a company’s data – is the virtualisation of that data across the enterprise. The idea rests on the same concept as server and network virtualisation, which have already contributed significantly to business efficiency. By performing this essential step of data virtualisation, businesses are ideally equipped to handle the petabyte-scale data loads that Big Data can be expected to bring.


The challenge

The problem of overwhelming data quantity exists because of the proliferation of multiple physical data copies. IDC estimates that 60% of what is stored in data centres is actually copy data – multiple copies of the same thing, or outdated versions. The vast majority of stored data consists of extra copies of production data created every day by disparate data protection and management tools for backup, disaster recovery, development, testing and analytics.

IDC predicts that up to 120 copies of a single piece of production data can circulate within a company, and that the cost of managing this flood of data copies has reached $44 billion worldwide.
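To make the scale of that concrete, here is a minimal back-of-the-envelope sketch in Python. The production data size and the per-terabyte cost are illustrative assumptions chosen for the arithmetic; only the 120-copy figure comes from the IDC estimate above.

    # Rough estimate of copy-data overhead.
    # production_tb and cost_per_tb_year are illustrative assumptions;
    # the 120-copy upper bound is the IDC figure cited in the article.

    production_tb = 10          # one production data set, in TB (assumed)
    copies = 120                # physical copies in circulation (IDC upper bound)
    cost_per_tb_year = 500      # fully loaded storage cost, USD/TB/year (assumed)

    copy_storage_tb = production_tb * copies
    annual_copy_cost = copy_storage_tb * cost_per_tb_year

    print(f"{production_tb} TB of production data -> {copy_storage_tb} TB of copies")
    print(f"Annual cost of carrying those copies: ${annual_copy_cost:,}")

    # Output:
    # 10 TB of production data -> 1200 TB of copies
    # Annual cost of carrying those copies: $600,000

Even at a modest 10 TB of production data, the copies – not the data itself – come to dominate both storage and cost.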

Data bloating

While many IT experts are focused on how to deal with the mountains of data that are produced by this intentional and unintentional copying, far fewer are addressing the root cause of data bloating. In the same way that prevention is better than cure, reducing this weed-like data proliferation should be a priority for all businesses.

The volume of data grows daily, not because of new data, but because of the unchecked proliferation of data copies. Where does this flood of copies come from? Multiple copies of data are generated in separate silos for different purposes such as backup, disaster recovery, test, development, analytics, snapshots and migrations. The net result is that managing this issue now takes more of a company’s resources than managing the actual production data.

The master copy


Data virtualisation – freeing organisations’ data from legacy physical infrastructure, just as virtualisation did for servers a decade ago – is increasingly seen as the way forward. In practice, copy data virtualisation can reduce storage costs by as much as 80%, while making virtual copies of ‘production quality’ data available immediately to everyone in the business, wherever they need it.

That includes regulators, product designers, test and development teams, back-up administrators, finance departments, data-analytics teams, marketing and sales departments. In fact, any department or individual who might need to work with company data can access and use a full, virtualised data set. This is what true agility means for developers and innovators.

Moreover, network strain is eliminated, and IT staff – traditionally dedicated to managing the data – can be refocused on tasks that grow the business. Fewer data management licences are needed, since backup agents, separate de-duplication software and WAN (wide area network) optimisation tools are no longer required.

By eliminating physical copy data and working off a ‘golden master’, storage capacity is reduced – and along with it, all the attendant management and infrastructure overheads. The net result is a more streamlined organisation driving innovation and improved competitiveness for the business, faster.
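To illustrate what working off a ‘golden master’ means in practice, here is a minimal copy-on-write sketch in Python. It is a toy model of the concept, assuming block-level sharing, and not any vendor’s actual implementation: each virtual copy reads shared blocks from the master and consumes new storage only for the blocks it changes.

    # Toy copy-on-write model of a 'golden master' with virtual copies.
    # A simplified illustration of the concept only, not a product's design.

    class GoldenMaster:
        def __init__(self, blocks):
            self.blocks = blocks              # block_id -> data

    class VirtualCopy:
        def __init__(self, master):
            self.master = master
            self.overrides = {}               # only locally changed blocks

        def read(self, block_id):
            # Reads fall through to the shared master unless overridden.
            return self.overrides.get(block_id, self.master.blocks[block_id])

        def write(self, block_id, data):
            # Copy-on-write: a block costs physical storage only when changed.
            self.overrides[block_id] = data

    master = GoldenMaster({0: "orders", 1: "customers", 2: "invoices"})
    test_env = VirtualCopy(master)            # instant 'full copy' for a test team
    test_env.write(1, "masked customers")     # e.g. masking data for testing

    print(test_env.read(0))                   # orders (shared with the master)
    print(test_env.read(1))                   # masked customers (private block)
    print(len(test_env.overrides))            # 1 -> one block of new storage used

Each team gets what behaves like a full, immediate copy, while new physical storage is consumed only for the blocks that team actually changes.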

image credit: Andrew M Harlan
