Why go All-Flash file storage when you can go All-NVMe file storage? You might be asking yourself, aren’t these the same thing? While they both use solid-state storage (SSDs), they are, indeed, not the same thing.
Let’s go back in time a bit.
Historically, the storage industry was (and still is) pretty excited about SSDs. SSDs made a significant, positive impact on the industry and its never-ending desire for speed. After all, faster storage, lower latency, and higher throughput are always a good thing, right? SSDs made everything faster. They were more durable and reliable than spinning disks. They used less power. They had a smaller footprint. They were, however, much more expensive.
Now all of this sounded great for critical workloads where you could justify the cost of SSDs to get the increased performance. That is, until you realize that the storage industry, by using technologies created 20-30 years ago for hard disk drives (HDDs), was seriously limiting the potential performance of an SSD. By limiting, I mean really, really strangling the performance of SSDs. The industry was using a new storage medium, but with legacy protocols.
New Storage Medium | Legacy Protocols
Most SSDs today have been retrofitted into systems that were designed for HDDs and continue to use the same legacy protocols. The Serial Advanced Technology Attachment (SATA) and Serial Attached SCSI (SAS) protocols were designed for HDDs, which required a single (serial) queue because there was a spinning disk and a moving head. SATA could only have a command queue of 32 commands and a maximum bandwidth of 600 MB/s; SAS increased that to 254 commands per queue and a maximum bandwidth of 1 GB/s. These legacy protocols created a significant bottleneck for SSDs. It's like having a bunch of Ferraris running full out on an eight-lane highway when suddenly the highway turns into a one-lane country road – things slow down pretty fast! All-flash (all-SSD) may be faster but, with legacy protocols, you are certainly not getting what you paid for.
Legacy protocols significantly limit the performance of SSDs.
A Modern Protocol
Non-Volatile Memory Express (NVMe) is a newer protocol that was designed to address the bottlenecks inherent in the legacy SATA and SAS protocols. NVMe is based on PCI Express (PCIe), a high-speed serial computer expansion bus standard that connects directly to the CPU.
Each NVMe SSD requires eight PCIe lanes to reach its full performance – up to 6 GB/s. Remember, the SATA interface topped out at 600 MB/s, and the SAS interface increased that to 1 GB/s. On top of this, NVMe can have 64,000 (yes, 64,000!) parallel queues, with each queue holding 64,000 commands. That allows for over 4 billion outstanding commands! SATA's and SAS's single queues can hold only 32 and 254 commands, respectively. Over 4 billion commands – that is a pretty significant increase.
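The arithmetic behind that 4-billion figure is easy to check. Here is a quick sketch using the per-protocol queue limits quoted above; the numbers come from this article, and the table layout is just for illustration:

```python
# Back-of-the-envelope comparison of the queue limits quoted above.
# SATA: 1 queue x 32 commands; SAS: 1 queue x 254 commands;
# NVMe: up to 64,000 queues x 64,000 commands each.
protocols = {
    "SATA": {"queues": 1, "commands_per_queue": 32},
    "SAS":  {"queues": 1, "commands_per_queue": 254},
    "NVMe": {"queues": 64_000, "commands_per_queue": 64_000},
}

for name, p in protocols.items():
    total = p["queues"] * p["commands_per_queue"]
    print(f"{name}: {total:,} outstanding commands")

# NVMe works out to 64,000 x 64,000 = 4,096,000,000 outstanding
# commands -- over 4 billion, versus 32 for SATA and 254 for SAS.
```

That eight-orders-of-magnitude gap in parallelism, not the raw bandwidth alone, is why NVMe can keep a flash device busy in a way SATA and SAS never could.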
NVMe with PCIe and 100GbE enables storage arrays that are screaming fast!
The thing to remember is that it is the same SSD behind the NVMe protocol and the SATA or SAS protocols. NVMe delivers the full performance potential of flash storage by efficiently supporting more processor cores, lanes per device, IO threads, and IO queues – giving you what you paid for.
So, why doesn’t everyone have All-NVMe?
The critical design difference is that the SATA/SAS implementation has a separate controller card while the NVMe implementation does not, because the controller lives in the NVMe SSD itself. This means that to retrofit an HDD platform for all-NVMe, significant operating system modifications are needed to deal with the fact that the controller is removed along with the drive. Things like turning LEDs on and off, allowing hot swapping, and having the system not crash when a drive is removed are some of the issues that need to be addressed.
It turns out these are not easy modifications and many legacy vendors are creating patches to solve these issues. Rewriting the operating system would just be too time-consuming and expensive. Using a system that was “patched” to enable new NVMe drives sounds like a risky way to manage your company’s file data, and could also be performance limiting.
How does Qumulo do it and do it so fast?
Qumulo designed a modern, distributed file system with the expectation that new protocols and drives would appear on the market at a steady cadence. Seems like a pretty obvious thing to plan for, right? Legacy vendors, by the very fact that they are, well, legacy, don't have the option to easily adopt modern underlying technology. Long development cycles, and often a re-architecting of the software, are required to take advantage of new storage platforms, protocols, and devices.
Qumulo chose Linux as our underlying operating system. By developing our file system as a user-space application that runs on Linux, we have incredible flexibility to run anywhere that Linux does. Many traditional NAS vendors are based on FreeBSD, which does not innovate or support newer hardware at anywhere near the rate that Linux does.
Qumulo runs in user-space, and we write directly to the media to get the maximum performance possible. And we support all-NVMe, which enables the screaming performance that SSDs were designed for.
Qumulo is an enterprise-proven, scale-out NAS file system that was designed for scale and supports all-NVMe. Real-time analytics are also an integral part of the Qumulo file system, eliminating data blindness. Qumulo provides real-time visibility into file and directory usage across the hybrid cloud storage environment. Its integrated analytics give insight into both bandwidth and capacity utilization.
Qumulo's advanced file-system technology means that 100 percent of provisioned capacity is available. This is unlike other NAS and file storage vendors that limit you to using only around 80 percent of your available storage before encountering performance or system management bottlenecks. With Qumulo, you not only get reduced latency with all-NVMe, you get to use 100 percent of this incredibly fast storage.