By David A. Chapa
Sticky like an old rusted bolt?
In IT we often hear how some solutions are more sticky than others. Back in the day, former Gartner analyst Dave Russell would say that backup solutions were sticky. In other words, they made it difficult to move from one solution to the next with ease. The main reasons for this stickiness were the backup data itself, which some older solutions wrote to disk or tape in a proprietary format, and compliance, or simply the retention requirements on that backup data. So, for customers who wanted to move to a new platform, it meant either keeping the original solution around until the retention period expired, or trying to migrate the data to the new platform while preserving the retention metadata. Sounds daunting, doesn’t it? Well, that was 20 years ago, and I was part of many projects that did just that. It was time consuming.
Today, the new sticky bit in the data center seems to be network attached storage (NAS). Not because it is in a proprietary format, but because of the sheer volume of data, the permissions that must be maintained, the links, and so on. There are a number of storage vendors who would love for you to remain quite stuck with them for years. Their belief is that if the solution they offer is “good enough,” a customer may not find it worthwhile to move away. I used to work for an ad agency many years ago whose motto was “Good Enough is Not Enough.” In other words, the goal is not just to meet expectations but to exceed them. While “good enough” may be just enough for some of these storage vendors to keep a customer, it definitely does not focus on delighting that customer.
But what if you wanted to move from one storage solution to another? Have you ever considered it, and have you ever done it? Back when I was an administrator many moons ago, we did that as well, moving from one storage platform to another. It was labor intensive and required a lot of verification on the new storage solution, such as confirming any hard or soft links we had created on the old solution. There really wasn’t a good way for us to view logs of successes and failures. For the most part, it was a very single-threaded process and, to say the least, a pain to manage. This is one of the reasons some storage solutions are sticky just like a rusty bolt. It makes it difficult to disengage so you can migrate from one on-premises solution to another, or even from on premises to the cloud.
So, just what kind of sticky NAS do you have?
Have you ever been walking on a hot summer day and stepped on a piece of chewing gum that stuck to your shoe? Then you know that annoying feeling. It isn’t horrible, but it is a bit of a mess to remove and a major pain. It’s like that newcomer storage solution you may have brought into your data center as a small footprint to try out, only to see it grow well beyond the intended design of the “science experiment.”
So, just what kind of sticky storage do you have? Is it annoying and messy like gum stuck on your shoe, or is it sticky like an old rusted bolt? Having worked on old classic cars, I have come across my fair share of rusted bolts that take time, effort, and plenty of arm work, and may even require a little third-party help to remove.
The same is true of some data center NAS solutions today, where there may be barriers to entry (or exit, as it were) such as the volume of data, application complexity (possibly having to refactor apps if you are going to the cloud), overall storage platform instability, and more. Any or all of these factors may force a customer to take the “easy road” by just adding more of the incumbent’s legacy storage, spreading the volume of data across multiple silos and avoiding a migration to a new platform, either on premises or in the cloud. But this strategy, as we know, is just kicking the can down the road.
What makes migration so unnerving for some?
As was mentioned earlier, there can be a lot of complexities that force the hand of some IT groups to keep the status quo and just add shelves to an already aged NAS solution. However, the digital strategy of many CIOs includes moving some, many, or in some cases all workloads to one of the major cloud providers such as Azure, AWS, or GCP. If it were simply a net-new application being deployed, the decision would be much easier, and setting up the new compute and storage in the cloud wouldn’t seem as overwhelming. Existing workloads, however, can be more of a challenge. One of those challenging factors comes down to file storage for on-premises workloads versus object storage for cloud workloads, and this is where the application refactoring discussion comes into play.
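To make the file-versus-object gap concrete, here is a minimal sketch (not Qumulo’s API, and using an in-memory dict as a stand-in for a real S3-style object store) of why a file-based application often needs refactoring: POSIX files support cheap in-place partial writes, while objects are immutable blobs that must be rewritten whole.

```python
# --- On-premises NAS: the app relies on POSIX file semantics ---
def update_record_posix(path: str, offset: int, data: bytes) -> None:
    """Patch a few bytes in place -- cheap on a file system."""
    with open(path, "r+b") as f:
        f.seek(offset)
        f.write(data)

# --- Object storage: the same update means GET, modify, PUT ---
# A dict stands in for the object store; a real app would call an
# S3-style GET/PUT API here, which is exactly the refactoring work.
object_store: dict[str, bytes] = {}

def update_record_object(key: str, offset: int, data: bytes) -> None:
    blob = bytearray(object_store[key])        # GET the whole object
    blob[offset:offset + len(data)] = data     # modify in memory
    object_store[key] = bytes(blob)            # PUT the whole object back

# demo: patch 5 bytes of an 11-byte "record"
object_store["record.dat"] = b"hello world"
update_record_object("record.dat", 6, b"cloud")
```

A file data platform that presents the same file semantics on premises and in the cloud lets the first function keep working unchanged, which is the point of deferring refactoring.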
What if you didn’t have to worry about refactoring your applications, and instead simply migrated your data from your old legacy NAS to a modern file data management platform in the cloud? For many organizations, this would be the preferred approach because you can immediately see the cost benefits of the cloud for your workloads while determining whether refactoring is really necessary for any of your applications. This places you in a position of strength and allows you to make better decisions for your organization overall.
That’s where Qumulo Core comes into play
Qumulo Core is a truly hardware-independent software solution, meaning you can run it on any supported hardware in your data center, colocation facility, or the cloud without impacting your experience between locations. We call that “software portability,” and it is one of the things that makes Qumulo very different from the old-school storage vendors, and even from some of the newcomers who have physically tied themselves to hardware.
For example, if you are using a storage solution such as Dell EMC PowerScale (Isilon), you would quickly discover that running it in the cloud essentially requires a lift and shift of hardware to either a colocation facility or into GCP to be hosted. That’s not a hardware-independent solution; that is clearly an “appliance” requiring the integration of both hardware and software to work as advertised. And, speaking of appliances, the other legacy company, NetApp, known for making the storage appliance a household name within IT, gives you a very limited experience between its on-premises solution and its cloud offering. For example, with Azure NetApp Files the maximum available storage per volume is 100TB, and only a subset of the functionality is available, which may force medium-to-heavy refactoring of applications when moving from the on-premises experience to Azure NetApp Files.
It is no wonder IT professionals find it a struggle to move into the cloud. Not only must application refactoring be top of mind, but if they want to stay on their old legacy storage platform, there aren’t many options that make the move fiscally practical.
Modern file data management and storage is key to your digital strategy
One of the things the pandemic has brought to light is that we can survive and run in the cloud and see the same or better outcomes for our customers. Organizations were forced to find a meaningful way to continue business operations with limited access to data centers and offices. This is why many CIOs are now looking at ways to deliver higher-quality services to their customers while driving down costs, by reducing their dependency on big data center footprints and the associated management, cooling, and power costs required to keep it all running optimally. We believe our modern file data management platform is an important component of any digital strategy, so we have created a migration service through our strategic alliances to give our customers the ability to get “unstuck” from their old legacy NAS solutions.
Qumulo will help customers who want to cleanly migrate their data off of their old legacy NAS solutions to our modern file data management platform through our strategic alliance partnerships, providing customers with a variety of options. Many leading enterprise organizations have successfully leveraged our migration service to move petabytes of unstructured data to the Qumulo File Data Platform, quickly and easily.
IT professionals, it’s time to get “unstuck” from old NAS
Confidence is key to a successful migration, and with Qumulo, confidence is what we instill. Remember what I said at the outset of this blog about the challenges I faced as an administrator conducting one of these migrations many moons ago?
- We didn’t have confidence in our processes
- We didn’t have proper logging to give us confidence
- We weren’t 100% confident all of our hard and soft links were transferred
- We didn’t know for certain that all of the file permissions and extended attributes would transfer over cleanly to the new NAS platform
Qumulo has worked hard to deliver a solution that offers you not only peace of mind, but also the insights you need to know the job is being done as expected. Again, this is about confidence, and knowing your migration will be successful.
So, now you can, with confidence, migrate petabytes of unstructured data from your old legacy NAS solutions to Qumulo’s modern file data platform, whether on premises or in the cloud. And if your digital strategy isn’t quite ready for refactoring all of your applications, Qumulo Core has you covered there as well. Your applications will run just as efficiently in the cloud as they did on premises, because when your unstructured data is stored on Qumulo Core, you can expect the same experience no matter where the storage is located.
Chapa, signing off
David A. Chapa is a technical evangelist and strategist with over 30 years of industry executive and analyst experience helping organizations with data protection, recovery, and storage, and with how to leverage cloud infrastructure, from hybrid to hybrid multi-cloud. David is the head of competitive intelligence at Qumulo.