The all flash array (AFA) market is growing quickly, bringing with it high expectations for transforming data centers. But switching to all flash can be fraught with obstacles, including silos, over-promised and under-delivered data reduction, and high disaster recovery costs. When considering a move to all flash, it is important to weigh expectations against the realities that IT teams face. Done the right way, however, there is tremendous upside in choosing the right all flash solution, including ease of use, better performance, scalability and investment protection.
First, it’s important to face some harsh realities:
Silos Everywhere
As enterprises continue to consolidate and virtualize, moving away from traditional silos has enabled IT to become more agile, reduce operational costs, expedite deployments and better utilize assets. With the introduction of AFAs, however, IT is forced to adopt the very thing it previously avoided: silos. AFAs have so far been deployed more tactically than strategically, whether because of cost, architectural incompatibilities with the existing hybrid infrastructure, or the lack of a hybrid approach altogether. Management and day-to-day operations of these silos demand new training and certification as well as additional processes for installs, ongoing management and upgrades.
Scale-Up or Scale-Out, but Not Both
AFAs come in different packages, densities and configurations, but the architectural decisions made during the design and implementation phases may have significant scalability ramifications. Most modern AFAs in the market to this point have been designed to scale up or scale out, but not both. This is typically due to metadata-intensive designs that place a heavy compute and memory burden, along with flash capacity restrictions, on the system, making scaling out the better option. Alternatively, the architecture itself may not have been designed for scale-out, making scale-up the only option. With scale-up-only designs, however, the challenge becomes managing multiple systems through frequent hardware refreshes and possible forklift upgrades, which undermine investment protection.
Data Reduction: Over-Promised and Under-Delivered
In the words of a well-known tier-one service provider, “We have vendors coming in telling us data reduction stories anywhere from 5:1 to even 20:1. Overall, we see around 3:1 to 5:1, that’s reasonable.”
Where do these inflated data reduction figures come from? During proofs of concept (PoCs), customers may test one or two workloads and see very good data reduction results. Later, in production, they make general data reduction assumptions based on that PoC experience. As multiple workloads are added to the production system, reduction rates often drop significantly.
The aggregate data reduction ratio is not an arithmetic mean of the per-dataset ratios but a capacity-weighted harmonic mean, so a very high reduction ratio on one dataset contributes little to the overall ratio once other datasets are present.
The overall reduction ratio tends to be dominated by datasets that are large and do not reduce well. Could it be that global data reduction techniques, for all their simplicity, ultimately add resource strain while delivering somewhat less in overall space savings? Could there be an additional choice that squeezes out greater efficiency while easing some of the resource burden?
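As a rough, hypothetical illustration of this weighting effect (the dataset sizes and ratios below are assumed, not measured), the overall ratio is simply total logical capacity divided by total physical capacity, which works out to a capacity-weighted harmonic mean of the per-dataset ratios:

```python
# Hypothetical example: aggregate data reduction as a capacity-weighted harmonic mean.
# A small dataset with a spectacular ratio barely moves the overall number when a
# large, poorly reducing dataset sits next to it.

def aggregate_reduction(datasets):
    """datasets: list of (logical_tb, reduction_ratio) pairs."""
    logical = sum(tb for tb, _ in datasets)
    physical = sum(tb / ratio for tb, ratio in datasets)
    return logical / physical

# 10TB of VDI at 20:1 alongside 90TB of database data at 2:1 (assumed figures).
datasets = [(10, 20.0), (90, 2.0)]
print(round(aggregate_reduction(datasets), 2))  # ~2.2:1 overall, nowhere near 20:1
```

The arithmetic mean of 20:1 and 2:1 would suggest 11:1, but the blended system lands around 2.2:1 because the 90TB dataset dominates the physical footprint.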
Disaster Recovery: With Enough Money, You Too Can Have All Flash DR
Disaster recovery (DR) costs can already be high. Throw an AFA into the mix and, for most businesses, DR becomes a pipe dream. Due to the financial burden and inflexibility of most AFAs, customers are left to decide which applications need to be part of a DR plan. As a result, AFA silos are now seeping into recovery sites, sitting mostly idle, hosting business-critical applications. If you’re an AFA-only vendor, you have no choice but to loudly proclaim “disk is dead” and quickly bury your head in the sand.
Monitoring and Analytics: Driving in the Fog Without Headlights
Enterprise storage administrators are tasked with a wide variety of functions, from monitoring and reporting, to maintenance and support, to risk assessment, forecasting and planning. With the vast majority of today’s storage management systems, however, these tasks rely on cumbersome manual effort, often require multiple tools, are time consuming and lack end-to-end infrastructure visibility.
While traditional on-premises storage management systems generate large amounts of raw telemetry data, they are unable to correlate all this information into actionable, easy-to-consume, easy-to-understand formats. The result is storage management tasks that are inefficient, troubleshooting efforts that are largely ineffective, and capacity and performance planning that is inaccurate.
In a nutshell, the shortcomings above are currently the norm, primarily due to architectural deficiencies in first- and second-generation AFAs.
New Gen All Flash Arrays
With the introduction of the new generation of all flash arrays, modern architectures are finally beginning to include native, seamless integration and data mobility services. As a result, scale-out systems composed of different classes of data services are now possible. These classes of service can be optimized for performance, cost/performance and/or cost/capacity, with non-disruptive data mobility under a single pane of management.
Silos Belong on Farms, Not in Data Centers
In today’s IT world, whether you’re juggling balls or managing silos, fewer is always easier.
New gen AFAs can be deployed either in isolation or as part of an existing scale-out cluster. Together with hybrid flash arrays, all flash systems are managed as a single entity. In this type of scale-out system, workloads can be moved non-disruptively and in real time between all flash and hybrid flash depending on SLAs. New gen AFAs also offer rich, predictive analytics with event correlation learned from the installed base, along with forecasting, planning and granular monitoring of virtual environments. These analytics capabilities can even identify non-storage infrastructure barriers that keep applications from reaching their full performance potential and that restrict business agility and revenue.
Why Settle When You Can Have Both? Scale-Up and Scale-Out
One of the important areas AFAs need to address is the ability to scale up and scale out. Knowing you have a platform that can adapt to changing requirements and scale with the needs of the business is a significant insurance policy.
Modern AFAs can accommodate scale-up (adding performance), scale-deep (adding capacity) and scale-out (clustering multiple systems and treating them as one) using the latest cost-optimized 3D-NAND flash drives. Modern AFA architectures also focus on providing high per rack unit densities given the high costs associated with power, cooling and floor space in the data center.
Four years ago, NAND flash drive capacities ranged from 100GB to 256GB using single-level cell (SLC) technology. In those four years, the market has gone from SLC to eMLC to MLC and now to 3.84TB 3D-NAND TLC flash. In fact, within the next 12–18 months we will see 16TB SSDs starting to become available.
Given the pace at which NAND flash vendors innovate and densities increase, new gen AFAs should allow for the flexibility to intermix different drive densities within the system and/or within the scale-out cluster. This type of flexibility allows customers to transition to larger densities seamlessly without forklift upgrades and data migrations. Additionally, customers should demand the ability to non-disruptively:
- Create an AFA and scale it up and out (use case: capacity and performance)
- Merge two different clusters (use case: data center consolidation)
- Evacuate an AFA from a cluster (use case: HW refresh)
- Shrink a cluster (use case: HW refresh and array relocation)
- Move a workload from a hybrid flash array to all flash or vice versa (use case: changing SLAs)
Data Reduction: Intelligence Is a Good Thing
While prices are dropping, flash is still a more expensive medium than rotating disk, especially when compared against high-density NL-SAS or SATA HDDs. An AFA without inline data reduction capabilities is nothing but a niche product. If an AFA is to serve as a consolidation platform and bring costs down to an acceptable level, then data reduction is table stakes.
That said, data reduction implementations vary by vendor architecture. What can be said with certainty is that there is no free lunch: there are always tradeoffs to be made.
In addition to variable-block inline compression and zero-pattern removal, which already exist in many architectures, variable-block inline deduplication has gained wide acceptance, primarily because workloads such as VDI, clones and backups lend themselves to very high dedupe ratios. However, deduplication can have a significant performance impact if not properly implemented.
Modern AFAs identify duplicates using the SHA-256 cryptographic hash algorithm together with unique metadata indexing designs. These designs keep DRAM requirements small, so the systems are not significantly bound by DRAM-to-flash capacity ratios the way older generation AFAs are. In addition, these designs reduce controller costs, significantly increase scale-up capacity per controller and require fewer controllers for full capacity scale-out.
Additionally, they provide dedupe optimized for locality. In real-world datasets, when a block at location N is deduplicated, it is highly likely that surrounding blocks are duplicates as well. Think about VDI or clones for a moment: we are dealing with multiple gigabytes of duplicates at a time. These implementations recognize this spatial locality and, instead of comparing blocks one by one, compare larger chunks side by side.
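To make the mechanics concrete, here is a minimal, hypothetical sketch of fingerprint-based dedupe: each incoming chunk is hashed with SHA-256 and only previously unseen fingerprints are stored. It is an illustration only, not any vendor's implementation; it assumes fixed-size chunks and a plain in-memory dictionary as the index, whereas real arrays use variable-block chunking, far more compact metadata and the spatial-locality optimizations described above.

```python
# Minimal fingerprint-dedupe sketch (illustration only, not a vendor implementation).
import hashlib
import os

CHUNK = 4096            # assumed fixed chunk size for this example
fingerprint_index = {}  # SHA-256 digest -> id of the block already stored


def dedupe_write(data: bytes):
    """Return (unique_chunks, duplicate_chunks) for one incoming write."""
    unique = duplicate = 0
    for off in range(0, len(data), CHUNK):
        digest = hashlib.sha256(data[off:off + CHUNK]).digest()
        if digest in fingerprint_index:
            duplicate += 1  # reference the existing block instead of writing it again
        else:
            fingerprint_index[digest] = len(fingerprint_index)
            unique += 1
    return unique, duplicate


base_image = os.urandom(CHUNK * 64)   # a 256KB "golden image"
print(dedupe_write(base_image))       # (64, 0): first copy stores every chunk
print(dedupe_write(base_image))       # (0, 64): a clone of it dedupes completely
```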
Deduplication typically works at the system level and is turned on by default for everything. It can also be controlled granularly at the volume level for applications that tend not to dedupe well, such as certain databases. An additional option is application-aware dedupe, which provides the ability to define application-specific domains, dedupe within those domains and track savings at the application level.
You Can Have Your Disaster Recovery and Eat It Too
New gen AFAs provide native replication. That is, replication originates and is managed at the array using the array’s native UI.
Enterprises can replicate from their new all flash array to their hybrid flash array, or vice versa, at a third of the cost. This can be done securely, with encryption, with over-the-wire efficiencies and without requiring any licensing. You can replicate and encrypt everything, or you can replicate and encrypt only some of it.
Predictive Analytics: Prevent Problems, Assess the Present and Predict the Future
While flash performance is a game changer, the ability to collect, analyze and correlate telemetry, then identify and remove the infrastructure obstacles that slow down data velocity, is critical. New gen AFAs provide not just impressive speeds and data reduction; they also proactively enhance system availability. By providing, among other things, actionable correlational analytics learned from the install base, automatic case creation, root cause analysis, end-to-end performance and capacity metrics, and accessibility anytime, anywhere, from any device, new AFAs dramatically simplify day-to-day operations, allowing administrators to focus on providing business value by building and delivering new services to their organizations.
So What About Hybrid … and Hard Disk Drives?
Flash isn’t always the be-all and end-all. New gen all flash arrays now offer the cost efficiency to be a logical option for many environments, but hard disk drives still play an important role for less performance-sensitive and secondary workloads. While flash focuses largely on optimizing performance, a hybrid approach can help optimize for cost/performance.
While most modern AFAs are built to serve as consolidation platforms for large, diverse, multi-terabyte environments, the reality is that typical deployments today tend to be more targeted. At present, most AFA deployments focus primarily on customer-facing applications and business-critical, revenue-generating workloads – everything from databases such as Oracle, SQL Server or SAP, to virtual desktops, reservation systems, transactional systems, stock trading platforms and Web services.
For less critical workloads, a hybrid approach that incorporates disk alongside flash should not be overlooked. This includes use cases such as email, SharePoint, file services, batch processing, data warehousing and data archiving. Hard disk drives paired with flash are still a safe bet for optimizing capacity and cost. Furthermore, modern hybrid architectures continue to serve as excellent consolidation platforms for a large number of use cases and deliver 3–5x better cost/performance than older generation hybrid systems. To support the aforementioned applications, you can implement a hybrid array and dial the amount of flash up or down based on your specific needs.
The Bottom Line
New gen all flash arrays provide a differentiated solution that’s simple, easy to use and backed by predictive analytics that keep a dedicated set of monitoring eyes on the infrastructure. They integrate seamlessly with your other tiers of storage, providing non-disruptive application mobility across different classes of service. These architectures make DR affordable again, and staff only need to be trained on one platform.
You may have noticed I’ve not talked much about performance. That’s because great performance and sub-millisecond latency are a given with these platforms. Scaling, reliability, ease of use and predictive analytics at an affordable cost are now the new normal.