Of the three pillars of enterprise infrastructure – compute, storage and networking – storage remains the most complex. I know, processors are still gaining in strength and flexibility, and networking is, well, networking, but in terms of options, storage is the most diverse. Do you go all-cloud, all-local, or hybrid? Do you opt for all-Flash, hybrid disk or even tape solutions? And then there is the rising cadre of in-memory and server-side solutions that do away with independent storage infrastructure altogether.
One thing is certain: The enterprise will need access to vast amounts of untapped storage in the coming years if it is to have any chance of realizing the benefits of Big Data and the Internet of Things. This may fly in the face of recent market data showing both the price and capacity of storage deployments on the wane, but as IDC noted in its latest quarterly assessment, this has more to do with changing buying patterns than diminishing demand. Sales of large external arrays, which represent the largest market segment, dropped by 3.7 percent, while ODM sales to hyperscale enterprises tumbled nearly 40 percent – which sounds like a lot but is largely in keeping with what has so far been a highly volatile market.
On the upside, however, both Flash-based solutions and server-side deployments are on the rise. According to new data from 451 Research, Flash is now present in 90 percent of enterprises, with more than half having already deployed hybrid SANs and another 30 percent looking to make the move within two years. Perhaps even more significantly, 27 percent are running all-Flash arrays and an equal portion is planning to do the same within two years. The biggest barrier, of course, is cost, which is why many organizations are pairing their Flash systems with dedupe and compression to stretch capacity as much as five-fold.
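The capacity math behind that five-fold stretch is straightforward. A minimal sketch, assuming hypothetical (but typical) reduction ratios – the function name and figures below are illustrative, not from any vendor's datasheet:

```python
def effective_capacity(raw_tb, dedupe_ratio, compression_ratio):
    """Usable capacity after data reduction.

    The two ratios multiply: data deduplicated 2.5:1 and then
    compressed 2:1 yields a combined 5:1 reduction.
    """
    return raw_tb * dedupe_ratio * compression_ratio

# A 20 TB all-Flash array with 2.5:1 dedupe and 2:1 compression
# behaves like a 100 TB array -- the five-fold stretch.
print(effective_capacity(20, 2.5, 2.0))  # 100.0
```

In practice the achievable ratios depend heavily on the workload – virtual desktop images dedupe extremely well, pre-compressed media barely at all – which is why vendors quote ranges rather than guarantees.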
But since storage is at heart a commodity, many enterprises make the mistake of basing their deployment decisions on technology rather than operational criteria like cost and performance. As Merritt notes, the primary goal for most organizations should be to build storage infrastructure with a low TCO by taking into account not just upfront costs but lifecycle factors as well. Best practices include leveraging legacy infrastructure extensively and deploying new systems that stress flexibility, ease of use and, most importantly, scalability. Increased modularity is also a key attribute, as it improves the value of physical data center space.
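The lifecycle point is easy to quantify. A rough sketch of a TCO comparison, with assumed cost categories and made-up numbers purely for illustration:

```python
def storage_tco(purchase, annual_power, annual_maintenance, annual_admin, years):
    """Lifecycle TCO: upfront cost plus recurring operating costs
    over the expected service life."""
    return purchase + years * (annual_power + annual_maintenance + annual_admin)

# Hypothetical five-year comparison: a cheaper array with higher
# running costs vs. a pricier but more efficient one.
array_a = storage_tco(100_000, 8_000, 12_000, 15_000, 5)  # 275,000
array_b = storage_tco(140_000, 4_000, 8_000, 10_000, 5)   # 250,000
```

Even in this toy example, the system that costs 40 percent more upfront comes out cheaper over its lifetime – exactly the kind of result a technology-first purchasing decision misses.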
Storage tiering is also a valuable asset in modern mixed storage environments. Many organizations employ Flash and other forms of caching, but actual tiering involves a much higher degree of data analysis and dynamic movement across multiple storage media. And as the data ecosystem becomes more diverse and more complex, many organizations will have no choice but to automate their tiering processes in order to derive the highest value from available resources. Selecting the proper domain for the automated tiering stack is also a vital concern, as different vendors will house it on different layers, such as the storage system itself or the hypervisor.
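At its core, an automated tiering pass analyzes access patterns and places each chunk of data on the tier its heat warrants. A toy sketch of that policy – the tier names, thresholds and `Extent` abstraction are assumptions for illustration; real systems maintain continuous heat maps and migrate data asynchronously:

```python
from dataclasses import dataclass

@dataclass
class Extent:
    """A movable unit of data with a simple access-frequency metric."""
    name: str
    accesses_per_day: int

def assign_tier(extent, hot_threshold=100, warm_threshold=10):
    """Map an extent to a tier based on how often it is touched."""
    if extent.accesses_per_day >= hot_threshold:
        return "flash"    # hot data: all-Flash tier
    if extent.accesses_per_day >= warm_threshold:
        return "disk"     # warm data: capacity disk
    return "archive"      # cold data: object store or tape

workload = [Extent("db-index", 500), Extent("logs", 40), Extent("backups", 1)]
placement = {e.name: assign_tier(e) for e in workload}
# placement == {'db-index': 'flash', 'logs': 'disk', 'backups': 'archive'}
```

Where this logic lives – in the array controller, the hypervisor, or a separate software layer – is precisely the domain question raised above, since each placement sees a different view of the I/O stream.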
The modern storage environment, then, will be vastly different from the monolithic arrays of the past, and even the criteria for evaluating successful storage operations are shifting away from raw capacity to high degrees of flexibility and performance.
The underlying function is still the same – to keep data readily available – but the scale and scope of that challenge is changing dramatically as the enterprise transitions to the digital economy. Traditional storage architectures still have a role to play, but they are no longer the only game in town.