Yiftach Shoolman is CTO & Co-Founder at Redis Labs, the largest commercial supporter of Redis, with more than 70,000 databases under management and 3,700+ paying customers. Follow him on Twitter.


 

Prediction #1 – The Demand for In-Memory Databases Will Increase

Modern apps are expected to respond to any request in under 100ms. Assuming the Internet’s average latency is 50ms, that leaves only 50ms for processing the request inside the datacenter, including front-end appliance overheads (such as firewalls, application security and load balancers) and business logic processing by the web, application and database tiers. In many cases a single user request triggers multiple calls to the database to prepare a response. This practically mandates sub-millisecond processing times at the database tier, which cannot be achieved without in-memory technologies.
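The budget above reduces to simple arithmetic. The figures below (appliance overhead, application-tier time, number of database calls per request) are illustrative assumptions, not measurements, but they show how quickly the per-call budget shrinks to sub-millisecond territory:

```python
# Back-of-the-envelope latency budget for one user request.
# All figures are illustrative assumptions.

TARGET_MS = 100    # end-to-end response-time target
INTERNET_MS = 50   # average Internet round-trip latency
APPLIANCE_MS = 10  # firewalls, application security, load balancers
APP_LOGIC_MS = 25  # web and application tier processing
DB_CALLS = 20      # database round trips needed to build the response

# What remains for all database work combined:
db_budget_ms = TARGET_MS - INTERNET_MS - APPLIANCE_MS - APP_LOGIC_MS

# Budget per individual database call:
per_call_ms = db_budget_ms / DB_CALLS

print(f"budget for the database tier: {db_budget_ms} ms")
print(f"budget per database call:     {per_call_ms} ms")
```

With these assumptions, each database call must complete in 0.75ms, and any disk seek alone would blow the budget.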

Prediction #2 – In-Memory Databases Become More Than Just a Cache

With more than 70,000 databases under management at Redis Labs, we’ve noticed that a growing number of users migrate their data to Redis and use it as their primary database, rather than as a cache-only solution. When your in-memory database provides rich functionality (e.g. various data structures, a robust command set and support for embedded scripts, as in the Redis world) and an enterprise-class feature set (see my next prediction), why would you split your already complex application logic across multiple database technologies?
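A minimal illustration of that richness, assuming a local Redis instance (the key names are made up for the example): a hash serves as a record, a sorted set as a ranking index, and a small embedded Lua script performs an atomic update server-side.

```
HMSET user:1000 name "Ada" visits 1        # store a record as a hash
ZADD leaderboard 42 user:1000              # rank users in a sorted set
ZREVRANGE leaderboard 0 9 WITHSCORES       # top-10 query, no extra index needed
EVAL "return redis.call('HINCRBY', KEYS[1], 'visits', 1)" 1 user:1000
```

Keeping these patterns in one database avoids pairing a cache with separate stores for records, counters and rankings.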

Prediction #3 – Enterprise-Class Features Become a Must Have

Everyone agrees that database scalability is important, but when it comes to in-memory databases, seamless and instant scalability is truly critical. Unlike disk-based databases, an in-memory database can experience massive and sudden dataset growth, often from a few gigabytes to terabytes in just minutes, along with bursts of hundreds of thousands of operations per second arriving within seconds.

The same is true for high availability: when you run an in-memory database like Redis that can support hundreds of thousands of operations per second, you must have an instant auto-failover mechanism that takes only a few seconds (single digits) to execute. Without such a mechanism, you will lose a significant number of writes and leave your application in an inconsistent state. Furthermore, while it is understood that an in-memory database should be equipped with data persistence mechanisms for durability, achieving that goal without significantly degrading database performance is a major challenge. A number of in-memory databases will likely boast support for a fast data-persistence engine in 2015, but their performance may be severely limited unless it is implemented correctly.
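In Redis, for example, that durability-versus-performance trade-off is tuned with a few configuration directives; the fragment below is a common starting point, not a universal recommendation:

```
# redis.conf (persistence-related excerpt)
appendonly yes                # enable the append-only file (AOF)
appendfsync everysec          # fsync once per second: at most ~1s of writes at risk
save 900 1                    # also snapshot (RDB) if >=1 change in 15 minutes
no-appendfsync-on-rewrite no  # keep fsyncing even during AOF rewrites
```

`appendfsync everysec` is the usual compromise: `always` maximizes durability but throttles throughput, while `no` leaves flushing entirely to the operating system.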

Lastly, in-memory databases should be deployable across data centers and geographical regions in order to increase your app’s availability while retaining in-memory performance even in a multi-site deployment.

Prediction #4 – In-Memory Will Be More than Just DRAM

With state-of-the-art flash arrays, high-throughput SSDs and the new storage-class memory products built on flash-based NAND, achieving multi-million IOPS at sub-10-microsecond latency is no longer fiction. A well-designed in-memory database can utilize these new technologies to run at near-DRAM performance without the high deployment cost associated with RAM-only databases.

Prediction #5 – Multiple Delivery Models Will Become More Pervasive

As mentioned earlier, it is extremely important to deploy an in-memory database as close as possible to your application servers to avoid network latencies and bandwidth costs. At the same time, the freedom to choose between different deployment models is equally important. A fully managed database-as-a-service lets you ‘deploy and forget’ your in-memory database and operate everything with minimal ops; on the other hand, it prevents you from deploying the database in any arbitrary location. That is why several in-memory database vendors are starting to offer a hybrid delivery model, in which you use a database-as-a-service for the parts of your application that run in the cloud, and downloadable database software for on-premise and private cloud environments. Some of these solutions even come with a tool that lets you synchronize between the two deployment models in a secure manner.

Prediction #6 – Open Source Technology Will Win

Most emerging in-memory databases are based on open source projects. But how many of these are truly community-driven open source projects, rather than the development efforts of the sponsoring company’s employees?

It is nearly impossible for a database vendor that open sources its code, seemingly as an afterthought, to compete with a real open-source project that has been developed for years by a vibrant community. The value of a strong open source project with an abundance of clients, libraries, use cases and deployment options is unparalleled. Furthermore, when your in-memory database is based on a real open source project, developers with relevant knowledge and experience are easier to find.


(Image credit: Planilog)

 
