How To Resolve the Storage Pains of Legacy Software, Availability, and Budget

Part 2 of 3 in a series about the 10 most common data storage pains and various ways to overcome them.

In this 3-part series, we talk about the ten most common data storage pains in large-scale enterprise environments and explain various ways to overcome them. In part 1, we explained how data storage pains can be evaluated against the same universal pain scale that medical professionals use to determine patients’ treatment. Then, we discussed the first three of the top ten: storage capacity, performance, and scalability.

Dealing with legacy software, lack of availability, and storage budget pains

Now, we’re going to talk about storage pains 4 through 6: legacy software, lack of availability, and budget.

4. Legacy software pain–outdated systems impact users’ performance

Contrary to what they might tell you, large established storage vendors are not risk-free. Big legacy codebases are frequently full of untested code and lurking problems, and support is difficult to scale. Together, those two issues can lead to frustrating support engagements with those vendors.

Obviously, you should value customer support very highly – if you’ve got tight deadlines or large data sets, look at the vendor’s track record of helping other customers address problems.

Less obviously, you should investigate the engineering organization. Do they use modern software development techniques? How do they approach testing? How do they release software, and what is their release track record? This should matter more than it currently does to storage decision makers. Storage is a long-term relationship, and engineering is a key driver of that relationship’s future value.

  • Don’t be afraid to investigate the software development practices of your storage vendors. Talk to existing customers about how accurate the engineering roadmap has been and how good the release record is. Talk to executives about vision and direction.
  • Measure your predicted needs against the vendor’s roadmap. You’re going to be putting your crown jewels on the storage system you purchase, and that system is going to be around for a long time. Your chosen vendor(s) need to be moving in the same direction you are.
5. Storage availability pain–storage lacks resiliency and goes down occasionally, impacting productivity

When data isn’t available, work stops. Every work stoppage has a cost, and that cost multiplies when a large team of creative or technical people is blocked by unavailable storage.

With a monolithic system, availability can be dicey. Often you’ll need to buy two systems and then add a software layer that can move a workload between them in the event of a failure. Look carefully at the expense of adding redundancy to a monolithic system and make sure that insurance policy is worth it.

If your cost of downtime is very high, it might be worth buying two systems just for the sake of redundancy. But if your cost of downtime is flexible or low, the added system might not be worth it – so weigh that cost when planning your redundancy strategy.
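To make that trade-off concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (team size, loaded hourly cost, expected outage hours, the amortized cost of a second system) is a hypothetical placeholder, not vendor guidance – substitute your own numbers.

```python
# Back-of-the-envelope: expected annual downtime cost vs. the cost of a
# redundant second system. All figures below are hypothetical placeholders.

blocked_staff = 40             # people idled when storage goes down
loaded_hourly_cost = 120.0     # fully loaded cost per person-hour ($)
expected_outage_hours = 16.0   # expected unplanned downtime per year (hours)

# Expected cost of downtime per year
annual_downtime_cost = blocked_staff * loaded_hourly_cost * expected_outage_hours

# Amortized annual cost of a second system (hardware, support, power, space)
second_system_annual_cost = 150_000.0

print(f"Expected downtime cost/yr: ${annual_downtime_cost:,.0f}")
print(f"Redundant system cost/yr:  ${second_system_annual_cost:,.0f}")
if annual_downtime_cost > second_system_annual_cost:
    print("A second system likely pays for itself.")
else:
    print("Redundancy may not be worth it for this workload.")
```

With these particular numbers the second system doesn’t pay off ($76,800 vs. $150,000 per year), but double the team size or the outage hours and the conclusion flips – which is exactly why it’s worth running your own figures.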

Another option, a compromise of sorts, is to look for a solution with a higher-end service contract and stronger SLAs. In a scale-out system, a single node failure doesn’t take the whole system down, so you have some protection inherent in the architecture – but you could still have a cluster go down due to a network problem. In either case, you’re going to want a backup for recovery and business continuity – there’s always high value in buying two.

“The thing we saw in Qumulo versus other products is that it was straightforward, simple, and easy to use. It has great feature sets. It addressed our primary requirements which were basically performance and reliability. Those are huge for us in the medical field.” – Chris Painter, Senior Storage Engineer, Carilion Clinic

6. Storage budget pain–storage is always too expensive

As we all know, money isn’t infinite and storage isn’t free – even “free” software needs hardware and engineers to run it. Storage capacity has a cost, and that cost will always be viewed as too expensive. Too often, folks get hung up on dollars per unit of capacity and ignore other important measures like dollars per unit of throughput, dollars per IOPS, startup cost, or the burden on your data center (power, cooling, and rack space). Following are a few suggestions to help you manage and control the cost of storage, along with a small cost-comparison sketch after the list.

  • Use the right storage technology for your workflow. Using flash when you don’t need it is wasting money. Using disk storage when you need flash simply won’t work. Consider hybrid storage systems that combine flash and disk intelligently as they’ll help you along both performance and budget axes.
  • Along those same lines, use the cloud where appropriate. It is not a panacea. A steady-state workload is unlikely to result in cost savings if you move it to the cloud. However, a very bursty workload or a very occasional workload might benefit the bottom line if you can use a public cloud or a hybrid cloud system.
  • You will want to engage integrators or VARs – they spend a lot of their time talking to vendors and understanding the marketplace, and can add value when you are evaluating storage systems or complete workflow solutions. Don’t stand for VARs that don’t add value!
  • You’re going to enter a buying process based on your previous storage experience, but things change dramatically – what you knew about storage a year ago is likely not true today. Perform due diligence when planning a new storage buy or a facilities buildout.
  • Look at modern file systems’ storage efficiency: software that is optimized to store more in less rack space and consume less power helps lower costs while enabling you to run a greener data center.
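Here is the cost-comparison sketch mentioned above: a minimal Python example that normalizes two quotes along several axes instead of dollars per terabyte alone. Both quotes and all of their numbers are invented for illustration.

```python
# Compare storage quotes on more than dollars per terabyte.
# Both quotes below are invented for illustration -- plug in real bids.

from dataclasses import dataclass

@dataclass
class Quote:
    name: str
    price: float            # total acquisition cost ($)
    capacity_tb: float      # usable capacity (TB)
    throughput_gbps: float  # aggregate throughput (GB/s)
    iops: int               # sustained IOPS

    def report(self) -> None:
        print(f"{self.name}:")
        print(f"  $/TB usable: {self.price / self.capacity_tb:>10,.0f}")
        print(f"  $/(GB/s):    {self.price / self.throughput_gbps:>10,.0f}")
        print(f"  $/1K IOPS:   {self.price / (self.iops / 1_000):>10,.0f}")

# A cheap-per-TB archive array vs. a pricier but much faster hybrid system.
for quote in (Quote("Archive array", 400_000, 2_000, 4.0, 50_000),
              Quote("Hybrid system", 600_000, 1_500, 20.0, 400_000)):
    quote.report()
```

In this made-up comparison, the archive array wins on $/TB ($200 vs. $400) but loses badly on $/throughput and $/IOPS – the point of the exercise is that “cheapest” depends entirely on which axis your workload actually stresses.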

Next up: the storage pains of data blindness, data loss, locality and data migration

In the final article in this 3-part blog series, we’ll cover the final four pains of large-scale enterprise storage environments. Data blindness is that sinking feeling you get when you don’t know how your data is being used or what’s going on in your storage repositories – or, worst case, when data is lost. There’s also the drag on performance when data locality isn’t optimized for heavily used data. And data migration – to the cloud and back, for example – is a whole other pain altogether; but it doesn’t have to be.

Additional Resources

Qumulo’s modern file data management and storage software was purpose-built to support hybrid cloud strategies for high-performance workloads at massive scale.
