Until recently, the world’s economy was driven primarily by fossil fuels. Over the past 80 years, a global infrastructure has arisen to enable the extraction, transport, and refining of oil. Today, a new “oil” is taking the lead in driving our global economy: data. And just like oil, the infrastructure to extract, move, and transform this data is sprouting faster than ever.
What starts as raw data must be collected, processed, and transformed before it becomes valuable to a business. And, as with oil, much of that value can only be realized if the data can be moved: from IoT and edge devices like CT scanners and autonomous vehicles, to on-premises legacy workflows and data lakes, to AI engines and transcoding apps in the cloud.
Over the last two decades, new use cases that rely on data have continued to multiply, driving both a trend and a challenge for storage professionals: exponential data growth. At the local level, an organization that owned and managed 10TB of file data in 2002 might have 10PB now. That’s 1,000 times as much as they had 20 years ago!
With all that growth in unstructured data comes a series of associated challenges, most of which weren’t even on the radar as recently as 10 years ago:
- Beyond just being stored, today’s data also needs to be analyzed, monitored, and prioritized
- It needs to be seen by the right people, made accessible to the right workflows, and delivered to the right destinations – reliably and on time, especially at scale
- Today’s data needs to be retained, sometimes for years, and sometimes indefinitely, all while it grows to exabyte size and beyond
And that isn’t all that’s changed in the past 20 years. In the modern economy, organizations of all types increasingly depend on remote and overseas workforces that need low-latency data access. Even single-site orgs can have a global reach, which means data must be moved to where the workforce is. On the home front, data center constraints are pushing even legacy workloads into the cloud. And while the cloud promises unlimited performance and capacity, it’s on you to figure out how to get all that new and valuable data into and out of it before you can realize that promise.
It can’t get worse, right? Of course it can: most cloud storage options don’t include native support for file-based apps and workflows, and the ones that do are proprietary to a single cloud platform. Most of these platforms don’t scale to the capacity your apps need, and probably don’t integrate easily with your on-prem storage. And if you need to run any legacy apps in the cloud, get ready for a lot of tedious and expensive application rewrites.
More Data, More Problems
Just being able to scale storage capacity isn’t enough anymore. Data has changed, your environment has changed, your apps have changed, and your workforce has changed. For all the problems that come with massive amounts of data, adding more capacity solves only one of them.
Just as oil requires refining infrastructure, data now requires its own infrastructure: one built to collect, store, transport, transform, and archive it.
What if you could get everything you need, without the compromises? What if you could deploy a single storage solution that offers unlimited scalability and also lets you:
- Deploy it anywhere? In today’s environment, you might need full-scale data services on hardware in your data center, on edge-based VMs, and in any of the major public clouds: AWS, Azure, or GCP.
- Choose your hardware vendor and model for on-prem deployments? Not all workloads are created equal. You need the flexibility to use hardware from your preferred vendor, and you want to customize your system to meet your workloads’ specific performance and capacity needs.
- Move your data from anywhere to anywhere else? Modern data life-cycle management also requires data mobility: from edge to core, from core to cloud, and sometimes between file and object storage.
- Make your data available to any type of client and workload? For enterprises with mixed file and object workloads, the ability to share data across all workloads and clients – NFS, SMB, and S3 – can be critical to maximizing value.
- Use a single runbook to automate management of both on-prem and cloud-based instances? Every deployment, anywhere, should include the same features and the same API library so you don’t have to modify your scripts or give up your critical features.
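As a rough illustration of what a single runbook could look like, here is a minimal sketch that runs the same capacity check against an on-prem cluster and a cloud instance through one shared function. The host names, port, endpoint path, and response fields are illustrative placeholders, not Qumulo's documented API.

```python
# Minimal sketch: one script, two deployments, same (hypothetical) REST API.
# Endpoint path, port, and response fields are illustrative placeholders,
# not Qumulo's documented API.
import requests

CLUSTERS = {
    "on-prem": "https://storage.dc1.example.com:8000",
    "cloud": "https://storage.cloud.example.com:8000",
}

def capacity_report(base_url: str, token: str) -> dict:
    """Fetch a capacity summary from one cluster, using the same call everywhere."""
    resp = requests.get(
        f"{base_url}/v1/file-system",            # hypothetical endpoint
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # The same runbook step executes unchanged against every deployment.
    for name, url in CLUSTERS.items():
        print(name, capacity_report(url, token="EXAMPLE_TOKEN"))
```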
What if you could have the scalability to grow as big as you need; the flexibility to deploy on-prem or in any public cloud; the power to put any amount of data anywhere on any timeline; and the versatility to provide file and object services to any workload?
You’d have Qumulo.
Scale Anywhere – On Your Terms
Purpose-built for the needs of today’s data and applications, only Qumulo delivers the flexibility that the modern enterprise requires: any size, any workflow, any location. Only Qumulo delivers the same unstructured data services in the cloud and at the edge as on-premises: a high-performance, hardware- and platform-agnostic solution capable of storing and moving massive amounts of data simply and reliably, whether in your data center, on AWS, Azure, or GCP, or in your edge environment.
And now, with this month’s Scale Anywhere release, Qumulo is adding new features that empower you to capture the value of your data as it grows, moves, and transforms:
- While we’ve always offered you a choice of hardware vendors for your on-prem workflows, we’re giving you even more options to meet your needs. You can use all-flash nodes for performance-intensive workloads, or hybrid flash-and-disk nodes for data-intensive workloads. Now, we’re adding new NVMe-based hybrid nodes from HPE, Supermicro, and Arrow, all of which deliver the performance of flash at the price of disk.
- We’re also giving you the ability to scale your on-prem deployments to all-new levels, with support for 200+ nodes in a single Qumulo cluster. With per-node densities approaching 500TB in today’s node families, a single namespace can now scale to nearly 100PB.
- You’ll also gain new ways to leverage your data more effectively, with the addition of S3 support as a standard protocol on all clusters. We’ve always offered multi-protocol access to your data, and now S3 is part of that package, so you can seamlessly share the same data across your file- and object-based clients and applications.
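To make that sharing concrete, here is a minimal sketch of one such workflow: a file is written over a standard NFS or SMB mount, then read back through an S3-compatible endpoint with boto3. The mount path, endpoint URL, bucket name, and credentials are placeholders, and the sketch assumes the cluster exposes that directory as an S3 bucket.

```python
# Minimal sketch: the same data seen by a file client and an S3 client.
# Mount point, endpoint URL, bucket, and credentials are placeholders.
from pathlib import Path
import boto3

# 1) Write a file over an NFS/SMB mount of the cluster.
mount = Path("/mnt/storage/projects")          # hypothetical mount point
(mount / "report.csv").write_text("id,value\n1,42\n")

# 2) Read the same data back through an S3-compatible endpoint.
s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.example.com:9000",   # hypothetical endpoint
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
)
body = s3.get_object(Bucket="projects", Key="report.csv")["Body"].read()
print(body.decode())                           # -> "id,value\n1,42\n"
```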
For SMB clients, we’re adding two new features to our protocol stack, which will deliver improved performance and more effective data visibility in large-scale environments:
- SMB Change Notify will enable SMB clients to detect and display changes within the file system more efficiently. For example, new and changed files will show up automatically in Windows File Explorer and Windows Search, and clients can watch specific folders so that ingest-based workloads can easily identify data changes (a small client-side sketch follows this list).
- SMB Multichannel lets multi-NIC clients use all available network paths for SMB data – either for additional bandwidth or to add fault tolerance in the event of a failure on the primary path. This feature will enable 8K native video editing on AWS-based Qumulo instances, and deliver better performance for PACS workloads in the healthcare sector.
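As a rough sketch of how a client might take advantage of change notification, the example below uses Python’s watchdog library to watch a folder on a mapped SMB share. On Windows, watchdog relies on ReadDirectoryChangesW, the client-side call that SMB Change Notify serves over the network; the drive letter and folder name are placeholders.

```python
# Minimal sketch: watch a folder on a mapped SMB share for changes.
# Assumes the share is mapped to Q:\ on a Windows client; the path is a placeholder.
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

class IngestWatcher(FileSystemEventHandler):
    def on_created(self, event):
        # React to new files landing in the watched folder.
        print("new:", event.src_path)

    def on_modified(self, event):
        print("changed:", event.src_path)

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(IngestWatcher(), r"Q:\incoming", recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
```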
Qumulo’s 100% software-defined data platform gives you simplicity at scale, and more choice and flexibility than any other file system when it comes to how and where you manage your data.
Modern Problems Require Modern Solutions
Only with Qumulo can today’s organizations manage data growth, meet performance objectives, and deliver critical data anywhere in the world on schedule. And even in today’s complex operating environments, only Qumulo is simple enough that a single storage admin can easily manage petabytes of data across all of them.
With Qumulo, you can store, manage, and transform your data at virtually any scale, in any environment, without compromise or added complexity. Among all enterprise file data solutions, only Qumulo gives you whatever scale you need, on whatever hardware and cloud platforms you want – along with the data mobility to make it all possible – without sacrificing functionality, simplicity, or value.
Learn more about our Scale Anywhere platform.
James Walkenhorst is the Sr. Technical Marketing Engineer on Qumulo’s Product team. He has been working on and around NAS platforms and hands-on demo environments for 15 years, and spent the 10 years before that in IT operations as both an engineer and manager.