We believe everyone should be able to easily access the performance they need.
Anyone sitting in front of a computer or smartphone (like you!) wants it to perform faster, and they also want faster results from complex applications or services. There’s a universal desire for speed without incurring a lot of expense or making things more complicated.
The same is true for storage admins, system engineers, and IT leadership around the world.
All-flash used to be the fastest thing going in file storage technology. Today, all-NVMe is the latest technology, and it is becoming ubiquitous in the file data platforms that manage petabytes of unstructured data.
A quick step back in time
Historically, the storage industry was (and still is) pretty excited about SSDs, and for good reason: they made a significant, positive impact on the industry and its never-ending desire for speed. After all, faster storage, lower latency, and higher throughput are always good things, right? SSDs made everything faster. They were more durable and reliable than spinning disks. We loved them when they appeared in our laptops. They used less power and had a smaller footprint. They were, however, much more expensive.
If you were working with critical workloads, you might be able to justify the cost of SSDs for the increased performance. That is, until you realize that by relying on technologies created 20-30 years ago for hard disk drives (HDDs), the storage industry was seriously limiting the potential performance of SSDs.
By limiting, I mean really, really strangling the performance of SSDs.
New storage medium, legacy protocols
Most SSDs today have been retrofitted into systems that were designed for HDDs, and they continue to use the same legacy protocols.
Serial Advanced Technology Attachment (SATA) and Serial Attached SCSI (SAS) are protocols designed for HDDs, which required a single (serial) queue because there was one spinning disk and one moving head. SATA supports a command queue of only 32 commands and a maximum bandwidth of 600 MB/s; SAS increased that to 254 commands per queue and a maximum bandwidth of 1 GB/s.
These legacy protocols were creating a significant bottleneck for SSDs. It’s like having a bunch of Ferraris running full out on an eight-lane highway and suddenly the highway turns into a one-lane country road – things slow down pretty fast!
All-flash (all-SSD) may be faster but, with legacy protocols, you are certainly not getting what you paid for.
Legacy protocols significantly limit the performance of SSDs.
A modern protocol for the performance you need
Non-Volatile Memory Express (NVMe) is a new protocol that was designed to address the bottlenecks inherent in the legacy SATA and SAS protocols.
NVMe is built on PCI Express (PCIe), a high-speed serial computer expansion bus standard that connects directly to the CPU.
An NVMe SSD can use up to eight PCIe lanes to reach its full performance – up to 6 GB/s. Remember, the SATA interface tops out at 600 MB/s, and the SAS interface increases that to 1 GB/s. On top of this, NVMe can have 64,000 (yes, 64,000!) parallel queues, with each queue holding 64,000 commands. That allows for over 4 billion outstanding commands!
Recall that SATA's single queue holds only 32 commands, and SAS's holds 254. Over 4 billion commands is a pretty significant increase.
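For the arithmetic-minded, the figures above can be sanity-checked in a few lines. This is a quick sketch using the numbers quoted in this post; the exact spec maximums vary slightly by protocol revision (NVMe, for instance, technically allows 65,535 queues of 65,536 commands).

```python
# Back-of-the-envelope comparison of the protocol figures quoted above:
# (queues, commands per queue, max bandwidth in MB/s)
protocols = {
    "SATA": (1, 32, 600),
    "SAS":  (1, 254, 1_000),
    "NVMe": (64_000, 64_000, 6_000),
}

for name, (queues, cmds_per_queue, mbps) in protocols.items():
    total_cmds = queues * cmds_per_queue
    print(f"{name}: {total_cmds:,} outstanding commands, {mbps:,} MB/s")

# NVMe's command ceiling: 64,000 queues x 64,000 commands each
print(f"{64_000 * 64_000:,}")   # 4,096,000,000 -> "over 4 billion"
print(6_000 / 600)              # 10.0 -> 10x SATA's bandwidth
```

Even granting that real drives never hit these theoretical ceilings, the gap between a single 32-command queue and billions of outstanding commands explains why the protocol, not the flash itself, had become the bottleneck.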
NVMe with PCIe and 100GbE enables storage arrays that are screaming fast!
The thing to remember is that the same SSD sits behind the NVMe protocol as behind the SATA or SAS protocols. NVMe delivers the full performance potential of flash storage by efficiently supporting more processor cores, lanes per device, IO threads, and IO queues – giving you what you paid for.
So, why doesn’t everyone have all-NVMe?
The critical design difference is that a SATA/SAS implementation has a separate controller card, while an NVMe implementation does not – the controller lives inside the NVMe SSD itself. This means that retrofitting an HDD platform for all-NVMe requires significant operating system modifications to handle the fact that the controller is removed along with the drive.
Turning LEDs on and off, supporting hot-swapping, and keeping the system from crashing when a drive is removed are just some of the issues that need to be addressed.
It turns out these are not easy modifications, and many legacy vendors are creating patches to solve these issues; rewriting the operating system would simply be too time-consuming and expensive. Using a system that was "patched" to enable new NVMe drives sounds like a risky way to manage your company's file data – and could be performance-limiting, too.
How does Qumulo do it and do it so fast?
Qumulo designed a modern file data platform with the expectation that new protocols and drives would appear on the market at a steady cadence. Legacy vendors, by definition, don't have the option to easily adopt modern underlying technology: long development cycles, and often a re-architecture of their software, are required to take advantage of new storage platforms, protocols, and devices.
Qumulo chose Linux as our underlying operating system. By developing our file data platform as a user-space application that runs on Linux, we have incredible flexibility to run anywhere that Linux does. Many traditional NAS vendors are based on FreeBSD, which does not innovate or support newer hardware at anywhere near the rate that Linux does.
Qumulo runs in user space and writes directly to the media to get the maximum performance possible. Given our file data platform's unique architecture, Qumulo can deliver this new technology with radical simplicity. The challenge is: how do we also deliver it economically?
Qumulo’s advanced file-system technology means that 100 percent of provisioned capacity is available. This is unlike other NAS and file storage vendors that limit you to only using around 80 to 90 percent of your available storage before encountering performance or system management bottlenecks. With Qumulo, you not only get reduced latency with all-NVMe, you get to use 100 percent of this incredibly fast storage.
Watch this on-demand webinar “Why NVMe Now?” to learn more about the market drivers and use cases for NVMe file storage, and how NVMe can help to meet the unique performance and capacity demands of research computing environments, both in the data center and in the cloud.
You can also watch “NVMe or Cloud? Why choose when you can get the best of both worlds!?” to learn how Qumulo’s file data platform enables organizations to avoid data silos so users, applications, and cloud services can access all of their unstructured data on the infrastructure, in the location and in the format they need to leverage that data for innovation.