5 Simple Questions to Ask When Evaluating NetApp

NetApp’s annual conference, INSIGHT 2020, kicks off this week, and it should be an interesting event with a number of cloud experts and industry executives participating. If you’re considering NetApp’s ONTAP data management software for your organization, here are five things to consider before purchasing.

5 data storage considerations when evaluating NetApp


1. You need to do a migration to get to NetApp’s distributed file system. Why not migrate to a modern distributed file system?

In ONTAP, NetApp added the functionality of a distributed file system on top of its legacy architecture, but did so with obvious and inherent limitations. It is actually a great validation of Qumulo’s product strategy that NetApp finally recognized that, in today’s world of extremely large unstructured data sets, a distributed file system is the only viable solution.

It is, however, unfortunate for NetApp’s installed base, because in order to get to the distributed file system (FlexGroups) you must do a complete data migration. And as we all know, migrations are never easy.

You also have to wonder whether all the other tools that worked with previous versions will work seamlessly with FlexGroups, or whether you will need to wait until they are rewritten.

2. Can you really use 100% of the usable storage capacity you buy?

In a recent blog post about Isilon (now known as Dell EMC PowerScale), we likened the buying experience to visiting a car dealership, making a purchase, and then discovering you can only use three out of the four tires on the vehicle.

Purchasing NetApp storage can often feel very similar. NetApp recommends that users consume only 80-85% of their usable storage to avoid performance degradation; exceed that recommended capacity and you can experience significant performance problems. This means not only did you pay for capacity you can’t use, but you also need to constantly monitor your storage platforms for performance issues.
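That 80-85% guidance can at least be automated rather than watched by hand. As a minimal sketch in Python (the threshold and any mount path you pass are illustrative assumptions for the example, not NetApp tooling or defaults):

```python
import shutil

# Illustrative sketch: flag any mounted filesystem that exceeds the
# ~80-85% usable-capacity ceiling discussed above. The 0.85 threshold
# is an assumption for the example, not a NetApp default.

def over_threshold(used_bytes: int, total_bytes: int,
                   threshold: float = 0.85) -> bool:
    """True when utilization exceeds the recommended headroom threshold."""
    return total_bytes > 0 and used_bytes / total_bytes > threshold

def check_mount(path: str, threshold: float = 0.85) -> bool:
    """Check a live mount point (e.g. an NFS export mounted locally)."""
    usage = shutil.disk_usage(path)
    return over_threshold(usage.used, usage.total, threshold)
```

Wiring a check like this into cron or your monitoring stack is cheap insurance, though it doesn’t change the underlying trade-off: you are still paying for headroom you can’t safely use.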

ONTAP also has a limit of 300TB per volume and 2 billion inodes. The practical limit, however, seems to be half a million inodes; once this is exceeded, performance severely degrades. Admins need to know how many files their volumes contain in order to determine when to increase the number of (public) inodes to prevent running out.
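File counts can be watched the same way as capacity. On a POSIX client, `os.statvfs` reports inode totals for a mounted volume; a hedged sketch (the 0.80 warning fraction is an illustrative assumption):

```python
import os

# Illustrative sketch: estimate inode (file-count) headroom on a mounted
# volume so the inode count can be raised before the volume runs out.
# The 0.80 warning fraction is an assumption for the example.

def inode_utilization(total_inodes: int, free_inodes: int) -> float:
    """Fraction of a volume's inodes already consumed."""
    if total_inodes == 0:
        return 0.0
    return (total_inodes - free_inodes) / total_inodes

def check_inodes(path: str, warn_at: float = 0.80) -> bool:
    """True when the volume mounted at `path` is running low on inodes."""
    st = os.statvfs(path)
    return inode_utilization(st.f_files, st.f_ffree) >= warn_at
```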

Data protection on ONTAP is handled on a per-HA-pair basis. You can’t use the entire cluster to reprotect data, and spare capacity is reserved per node/HA pair. This can significantly impact both your performance and your available space.

3. Is NetApp simple to purchase, manage and scale?

NetApp ONTAP is notoriously difficult to purchase, deploy and operate. ONTAP is a solution that is over 25 years old, and as a result has many different components which are often priced separately.

NetApp bases its ONTAP licensing on software bundles, plus a-la-carte licenses for some features. ONTAP licensing also varies across deployment methods. Consequently, ONTAP licensing can be complex. For instance, the description of ONTAP Software-Defined Storage licensing concepts is 11 pages long – just the description!

Managing ONTAP is also complex. For example, whether dealing with a node failure or adding an additional node, ONTAP does not do automatic rebalancing. Manually rebalancing systems is a very time-intensive and error-prone exercise.

Upgrading is also a challenge. The complexity and time required to do an upgrade increases with the size of the cluster. When dealing with multi-petabyte clusters, this can become very problematic. NetApp recommends at least 90 minutes to do an upgrade, and more if you are dealing with a very large cluster.

NetApp also presents significant challenges when it comes to understanding how much storage to add to nodes, and how to manually balance that storage in the cluster. You need to understand how many shelves should be configured per node, and whether to add a new HA pair or more shelves to an existing HA pair. When you finally do add capacity to a cluster, you run into the complex and challenging task of determining how that capacity should be rebalanced across existing workloads. Oh, and you must always add storage two nodes at a time.

Customer Perspective 

If you could start over, what would your organization do differently?

“Isolate more of our manufacturing critical file systems/datasets on their own storage clusters.”

-Senior Storage Engineer, manufacturing company
Source: Gartner Peer Insights, April 2020

4. Does NetApp offer a scalable solution in the cloud?

The public cloud has, and will continue to have, a significant impact on how IT strategies are designed. It is important to understand how legacy vendors, most of which are built on architectures that were designed decades before the cloud was even a concept, implement their cloud solutions.

NetApp was built 25 years ago for central IT storage as an appliance in the data center. As unstructured data began to grow at exponential rates, the scale limits of NetApp’s architecture were quickly reached and new scale-out and cloud native solutions were needed.

To build a cloud offering, NetApp developed a second, less scalable version of ONTAP, called Cloud Volumes ONTAP, that runs in the cloud. Cloud Volumes ONTAP can only scale to 368TB of data – which is small given that most customers are dealing with multi-petabyte datasets that are growing exponentially.

On-prem also has its challenges. NetApp depends on data reduction to make these on-prem clusters affordable, but for traditional enterprise workloads and genomic sequencing, file sizes are getting larger and data reduction often has little or no impact. Given the small file-size and capacity options in the cloud, organizations that burst to the cloud would likely need to closely manage their usage. Be sure you understand your workload if you are considering ONTAP.

5. Is your current customer experience delightful, or do you feel like just a number?

Everyone knows NetApp’s customer support can be extremely frustrating and time-consuming. You’re forced to open a ticket where you become just a number, and more often than not the support team just wants to close that ticket as quickly as possible. The focus is not on how well they solve your problem or answer your question, but on how many tickets they close out during the day. On top of this, you often have to endure numerous, time-consuming escalations because the first-level support team is unable to answer your question or solve your problem.

Customer Perspective 

What do you dislike most about the product or service?

“I think there have been some missteps in terms of some support options/quality which is expected as the company transforms. I would like to see more use of hands-off support on the customer side and more automated/intelligent recommendations from NetApp support. Other competitors have automated analysis systems for their customers and I would like to see NetApp be a leader in this space.”

-Senior Storage Engineer, manufacturing company
Source: Gartner Peer Insights, April 2020

NetApp vs. Qumulo

Wondering how Qumulo is different? Check out the detailed differences between NetApp ONTAP and Qumulo here. Alternatively, let us count the ways…

Qumulo is easy to procure, deploy and manage

Qumulo has an all-inclusive license. All-inclusive means you get everything – auditing, real-time analytics, and all new features as long as the subscription is up-to-date. These licenses are completely transferable between on-premises and cloud clusters. No surprises.

Qumulo, being a single software solution, is easy to deploy. Once the hardware is set up, Qumulo can be up and running in ten minutes. No RAID analysis is needed to configure volumes; everything is protected by erasure coding, and as a result automatic balancing happens within the cluster.

Our real-time analytics, with no tree-walks, enables simple management of the file data, regardless of size.

And last but not least, whether replacing a disk after a failure or adding a node to the cluster, the system immediately and automatically rebalances, so you are up and running almost instantly. Your additional storage is ready to use, and you can be off doing higher-value tasks instead of rebalancing.

With Qumulo, you can use 100% of your usable storage capacity with no performance impact

You shouldn’t have to choose between utilization and performance—but that’s the position NetApp has put its customers in. Qumulo lets you use what you pay for, 100% of your usable storage!

We don’t just aim for customer satisfaction, we aim for customer delight

At Qumulo, we don’t offer “customer support,” we have a team dedicated to customer success. This might seem like a small nuance, but helping our customers be successful is what we are all about, and what our entire company is focused on.

Qumulo provides unmatched customer satisfaction, with net promoter scores (NPS) in the mid-to-high 80s. With Qumulo, you can speak directly to a dedicated team via Slack, offering instant communication with experts who not only know our product inside and out, they know how you are using our file data platform and the value it provides your company.

You have an open line of communication to the experts until your problems are solved or your questions are answered. You will never be asked, “Can I close the ticket now?”

Qumulo recognized from day one that the cloud would be a key part of IT storage strategies

We designed and continue to develop the entire file system to work exactly the same on-premises as it does in the cloud. This means that you have the same functionality and user interface in a hybrid cloud or multi-cloud environment. Qumulo’s file data platform also scales to multiple petabytes and billions of files, in the cloud exactly like it does on-prem.

Don’t be shy, reach out!

Contact us if you’d like to set up a meeting or request a demo. And subscribe to our blog for more helpful best practices and resources!
