2015: The Year that In-Memory Becomes a Mainstay Part of the Enterprise & Startup Database Workflow

by Yiftach Shoolman
January 13, 2015

Yiftach Shoolman is CTO & Co-Founder at Redis Labs, the largest commercial supporter of Redis, with more than 70,000 databases under management and 3,700+ paying customers. Follow him on Twitter.

Table of Contents

  • Prediction #1 – The Demand for In-Memory Databases Will Increase
  • Prediction #2 – In-Memory Databases Become More Than Just a Cache
  • Prediction #3 – Enterprise-Class Features Become a Must Have
  • Prediction #4 – In-Memory Will Be More than Just DRAM
  • Prediction #5 – Multiple Delivery Models Will Become More Pervasive
  • Prediction #6 – Open Source Technology Will Win

Prediction #1 – The Demand for In-Memory Databases Will Increase

Modern apps are expected to respond to any request in under 100ms. Assuming the Internet’s average latency is 50ms, that leaves only 50ms for processing the request inside the datacenter, including front-end appliance overheads (such as firewalls, application security and load-balancers) and business logic processing by the web, application and database tiers. In many cases a single user request requires multiple calls to the database to prepare a response. This practically mandates sub-millisecond processing time at the database tier, which can’t be achieved without in-memory technologies.
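
To make the arithmetic concrete, here is a minimal sketch in Python that splits the budget across database calls and times a single round trip against a local Redis instance. The redis-py client, the localhost server and the fan-out of 20 calls per request are illustrative assumptions, not figures from this post.

import time
import redis  # assumes the redis-py client and a Redis server on localhost:6379

TOTAL_BUDGET_MS = 100.0    # target end-to-end response time
INTERNET_RTT_MS = 50.0     # assumed average Internet latency
DB_CALLS_PER_REQUEST = 20  # hypothetical fan-out for a single page view

# Budget left inside the datacenter, split across the database calls.
# Front-end appliances and the app tier eat into this further, which is what
# pushes the realistic per-call target under a millisecond.
per_call_ms = (TOTAL_BUDGET_MS - INTERNET_RTT_MS) / DB_CALLS_PER_REQUEST
print(f"Per-database-call budget: {per_call_ms:.2f} ms")  # 2.50 ms in this example

# Time one actual GET round trip for comparison.
r = redis.StrictRedis(host="localhost", port=6379)
r.set("latency:probe", "x")
start = time.perf_counter()
r.get("latency:probe")
print(f"Measured GET round trip: {(time.perf_counter() - start) * 1000:.3f} ms")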

Prediction #2 – In-Memory Databases Become More Than Just a Cache

With more than 70,000 databases under management at Redis Labs, we’ve noticed that a growing number of users migrate their data to Redis and use it as their primary database rather than as a cache-only solution. When your in-memory database provides rich functionality (e.g. various data structures, a robust command set and support for embedded scripts, as in the Redis world) and an enterprise-class feature set (see my next prediction), why would you split your already complex application logic across multiple database technologies?
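
As a minimal sketch of what “more than just a cache” looks like in practice, the snippet below keeps a user record in a Redis hash, ranks users in a sorted set, and uses a small embedded Lua script to update a score and read it back in one atomic server-side step. It assumes the redis-py client and a local Redis server; the key names and fields are purely illustrative.

import redis  # assumes redis-py and a Redis server on localhost:6379

r = redis.StrictRedis(host="localhost", port=6379, decode_responses=True)

# A structured record stored directly in Redis as a hash (no separate system of record).
r.hset("user:42", "name", "Ada")
r.hset("user:42", "plan", "pro")

# Embedded Lua script: bump a user's score in a sorted set and return the new
# value, executed atomically on the server.
BUMP_SCORE = """
redis.call('ZINCRBY', KEYS[1], ARGV[1], ARGV[2])
return redis.call('ZSCORE', KEYS[1], ARGV[2])
"""
new_score = r.eval(BUMP_SCORE, 1, "leaderboard", 10, "user:42")
print("new score:", new_score)

# Rich queries on the same data: top three users by score.
print(r.zrevrange("leaderboard", 0, 2, withscores=True))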


Prediction #3 – Enterprise-Class Features Become a Must Have

Everyone agrees that database scalability is important, but when it comes to in-memory databases, seamless and instant scalability is truly critical. Unlike disk-based database technologies, an in-memory database can see massive and sudden dataset growth, often from a few gigabytes to terabytes in just minutes, along with bursts of hundreds of thousands of operations per second arriving within a few seconds.

The same is true for high availability: when you run an in-memory database like Redis that can support hundreds of thousands of operations per second, you must have an instant auto-failover mechanism that takes only a few seconds (single digits) to execute. Without such a mechanism you will lose a significant number of writes and leave your application in an inconsistent state. Furthermore, while it is understood that an in-memory database should be equipped with data-persistence mechanisms for durability, achieving that goal without significantly degrading database performance is a major challenge. A number of in-memory databases are expected to boast support for a fast data-persistence engine in 2015, but unless it is implemented correctly their performance may be severely limited.
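
As a hedged illustration of the persistence point, the sketch below enables Redis’s append-only file from a client and selects the once-per-second fsync policy, which bounds data loss to roughly one second of writes while keeping the per-command latency penalty small. It assumes redis-py and a server that permits CONFIG SET; the equivalent redis.conf lines are "appendonly yes" and "appendfsync everysec".

import redis  # assumes redis-py and a Redis server on localhost:6379

r = redis.StrictRedis(host="localhost", port=6379)

# Append-only file: every write is logged and replayed on restart.
r.config_set("appendonly", "yes")

# Fsync the AOF once per second: a middle ground between "always" (most durable,
# slowest) and "no" (fastest, but at the mercy of the OS flush).
r.config_set("appendfsync", "everysec")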

Lastly, in-memory databases should be deployed across data centers and geographical regions in order to increase your app’s availability and retain in-memory performance even in a multi-site deployment.

Prediction #4 – In-Memory Will Be More than Just DRAM

With state-of-the-art flash arrays, high-throughput SSDs and the new Storage Class Memory products built from flash-based NAND, achieving multi-million IOPS at sub-10-microsecond latency is no longer fiction. A well-designed in-memory database can use these new technologies to run at near-DRAM performance without the high deployment cost associated with RAM-only databases.

Prediction #5 – Multiple Delivery Models Will Become More Pervasive

As mentioned earlier, it is extremely important for an in-memory database to be deployed as close as possible to your application servers, to avoid network latencies and bandwidth costs. At the same time, the freedom to choose between different deployment models is equally important. A fully managed database-as-a-service lets you ‘deploy and forget’ your in-memory database and operate everything with minimal ops; on the other hand, it prevents you from deploying the database in any arbitrary location. That is why several in-memory database vendors are starting to provide a hybrid delivery model, in which you can use a database-as-a-service for the parts of your application that run in the cloud as well as downloadable database software for on-premise and private-cloud environments. Some of these solutions even come with a tool that allows you to synchronize the two deployment models in a secure manner.

Prediction #6 – Open Source Technology Will Win

The majority of newly emerging in-memory databases are based on open source projects. But how many of these are truly open source projects, rather than the development efforts of the sponsoring company’s employees?

It is nearly impossible for a database vendor that open sources its code, seemingly as an afterthought, to compete with a real open-source project that has been developed for years by a vibrant community. The value of a strong open source project with an abundance of clients, libraries, use cases and deployment options is unparalleled. Furthermore, when your in-memory database is based on a real open source project, developers with relevant knowledge and experience are easier to find.

(Image credit: Planilog)

Tags: In-memory, In-Memory Computing, open source, Redis Labs
