Qumulo File System for AWS
Qumulo for AWS is the fastest persistent file storage system in the cloud.
Ready for a test drive? You can try Qumulo for free! Find us in the AWS Marketplace.
Qumulo offers an AWS file storage system that can run in the cloud, on-premises, or any combination in between. Forget scale-out. This is scale-across.
Lightning fast is easy
Qumulo for AWS is the fastest file storage system in the cloud. Check this out:
- Multi-stream read (per node) = 500 MB/sec
- Multi-stream write (per node) = 350 MB/sec
- IOPS (per node) = 8,000
* Per-node figures, measured on a cluster built with m4.16xlarge EC2 instances
The boring, exciting enterprise facts
The tools and applications on which you’ve built your business are now available in the cloud, without the need to re-architect. Qumulo for AWS has a complete set of enterprise features, including support for NFS, SMB, FTP, and REST. Continuous replication moves data between your on-premises Qumulo cluster and Qumulo for AWS, and together with snapshots it gives you robust DR.
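Because the cluster speaks standard NFS and SMB, a Linux client can attach with ordinary mount commands. The hostname, export path, and share name below are placeholders for illustration, not values from this document; substitute your cluster's DNS name or floating IP:

```shell
# Mount the cluster's root export over NFS (hostname is a placeholder)
sudo mkdir -p /mnt/qumulo
sudo mount -t nfs -o nfsvers=3 qumulo-cluster.example.com:/ /mnt/qumulo

# Or attach an SMB share from a Linux client (share name is a placeholder)
sudo mount -t cifs //qumulo-cluster.example.com/Files /mnt/qumulo \
    -o username=admin
```

Existing applications then read and write through the mount point with no changes.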
Qumulo’s visibility into the file system is unmatched in the industry, and it makes storage management so much easier! You can instantly identify throughput and IOPS hotspots, the most active clients and paths, and get accurate reports of how much usable storage you have. If you have a rogue process or user who’s monopolizing system resources, apply a quota in real time to take back control.
Use the tools you’re already familiar with, like Terraform and CloudFormation, to automatically spin up and manage Qumulo for AWS.
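As a sketch of what that automation looks like, a scripted deployment with the AWS CLI could drive a CloudFormation template end to end. The stack name and template URL here are placeholders; use the template that comes with your Marketplace subscription:

```shell
# Launch a Qumulo cluster from a CloudFormation template
# (template URL is a placeholder, not a real Qumulo artifact)
aws cloudformation create-stack \
    --stack-name qumulo-cluster \
    --template-url https://example.com/qumulo-template.json \
    --capabilities CAPABILITY_IAM

# Block until the stack finishes creating, then read its outputs
# (e.g. the cluster's IP addresses)
aws cloudformation wait stack-create-complete --stack-name qumulo-cluster
aws cloudformation describe-stacks --stack-name qumulo-cluster \
    --query 'Stacks[0].Outputs'
```

The same pattern works in reverse: `aws cloudformation delete-stack` tears the cluster down when you’re done, which is how short-lived burst workloads keep costs low.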
Want technical details?
Download the Qumulo for AWS data sheet.
| Cluster | Four m4.16xlarge EC2 instances | Four m4.2xlarge EC2 instances |
|---|---|---|
| Read throughput | 1.8 GB/s | 270 MB/s |
| Write throughput | 1.3 GB/s | 190 MB/s |
“[We’re] actual proof that the solution works. There’s no possible way I could install 1,000 machines in our network here. I don’t have the power or cooling to support them. We were able to make the decision, and in less than 1 hour be rendering on 1,000 machines. After the jobs finished, we simply terminated the instances. When I think about how easy it was, it still doesn’t sound real.”