“Why is my storage slow?”
It’s the question that all storage admins dread. The answer is often “because something is pushing harder on the storage than normal.” But determining who or what that something is can be difficult or impossible.
Since every workload and situation is different, you need a full suite of tools to find and fix performance issues. These tools should show you, in real time:

- which clients are generating load,
- which paths in the file system they are hitting, and
- what type of load it is.
Let’s take a minute to look at that last one, the type of load. For example, a script writing many small files generates high file-write IOPS. A flurry of ls and find commands causes high metadata-read IOPS but almost no throughput. Writing large files generates high write throughput but relatively few IOPS.
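The IOPS-versus-throughput distinction comes down to simple arithmetic: the same number of operations per second can mean wildly different throughput depending on operation size. A minimal back-of-the-envelope sketch (not Qumulo code; the workload numbers are illustrative assumptions):

```python
def iops_and_throughput(ops_per_sec, bytes_per_op):
    """Return (IOPS, throughput in MB/s) for a steady workload."""
    return ops_per_sec, ops_per_sec * bytes_per_op / 1e6

# Many small writes: high IOPS, modest throughput.
small = iops_and_throughput(10_000, 4_096)      # 10,000 IOPS, ~41 MB/s

# A few large sequential writes: low IOPS, high throughput.
large = iops_and_throughput(100, 8 * 1024**2)   # 100 IOPS, ~839 MB/s

print(f"small files: {small[0]} IOPS, {small[1]:.0f} MB/s")
print(f"large files: {large[0]} IOPS, {large[1]:.0f} MB/s")
```

Ten thousand 4 KB writes per second is only about 41 MB/s, while a hundred 8 MB writes per second exceeds 800 MB/s — which is why a tool that shows only one of the two metrics can miss half the picture.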
To get an accurate picture of what’s happening, you need to see all of this in real time, filter on specific types of workload, and drill down to see that workload by client and path. With Qumulo Core 2.0.3, we’ve added throughput analytics to our existing suite of IOPS analytics. Specifically, we’ve added a new analytics view, Throughput Hot Spots, which shows you which parts of the file system are experiencing significant throughput loads, both reads and writes.
There’s also a new API that returns throughput rates for read and write with their associated paths and client IPs.
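Once you have per-entry rates with their client IPs and paths, finding the top talkers is a small sorting exercise. A hedged sketch of that client-side step — the entry field names (`type`, `rate`, `ip`, `path`) are assumptions for illustration, not Qumulo’s documented response schema:

```python
def top_throughput(entries, n=5):
    """Given activity samples from the analytics API, return the n
    highest-rate throughput entries (ignoring IOPS samples).

    Each entry is assumed to look like:
      {"type": "file-throughput-read", "rate": 1.2e6,
       "ip": "10.0.0.5", "path": "/projects/render/"}
    """
    # Keep only throughput samples, dropping IOPS samples.
    tp = [e for e in entries if "throughput" in e["type"]]
    # Highest rates first.
    return sorted(tp, key=lambda e: e["rate"], reverse=True)[:n]
```

In practice you would poll the cluster’s REST API on an interval, feed each batch of samples through a function like this, and aggregate by IP or path to see who is pushing hardest on the storage.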
With this rich set of analytics tools, you’ll finally be able to pinpoint where your storage performance is going, and you’ll no longer dread the question “why is the storage slow?”
We’ll continue to iterate on this using our agile software development model. In the coming weeks you’ll see incremental additions to our analytics pages with this throughput data. It’s another way Qumulo continues to extend our real-time analytics to help you instantly understand your data footprint and usage patterns.
We are always looking for new challenges in enterprise storage. Drop us a line and we will be in touch.