Data Services
Our data services make it simple and stress-free to manage massive amounts of file data.
Dynamically scale to store petabytes in a single namespace
Dynamically scale to serve petabytes of data, billions of files, millions of operations, and thousands of users, with cost-efficient storage that intelligently adapts to your growing needs.
- Linearly scale capacity and performance
- Automatically rebalance data when nodes or instances are added
- File system capacity limited only by the configuration of your chosen deployment option
- Support billions of large and small files in a single namespace
- Mixed node compatibility
Intelligent caching that meets the most rigorous workloads
Enable high-compute workloads and thousands of users actively working across zones and across the protocols used by Linux, Mac, and Windows clients.
- Intelligent cache that automatically places data on the appropriate media depending on usage patterns
- Flash-first design enables uniformly fast ingestion
- Single-cluster performance of over 1 million IOPS, 333 GB/s reads, and 200 GB/s writes
- Burst compute performance to cloud instances
Instant visibility with real-time analytics
Easily monitor petabytes of data usage and performance, including throughput and latency for every file and user, across on-premises and public cloud environments with real-time operational (ITOps) analytics.
- Capacity awareness – Understand how your data is growing and predict future capacity growth trends in real time, at billion-file and petabyte scale.
- Performance awareness – Visualize throughput, IOPS, and latency for different applications and users to deliver expected performance.
- Usage awareness – See and understand the workloads you serve in real-time so you can rapidly diagnose issues, distribute resources and show usage to different departments.
Automate workflows and build applications with advanced API
Build integrated solutions, automate, and manage data services with APIs for every feature in the Qumulo file data platform and an easy-to-use dashboard.
- Create, manage, and tear down data services and programmatically retrieve all information with REST calls.
- Invoke file system operations with the REST API to automate administrative tasks, saving you valuable time.
- Store API results externally in a database or send them to another application such as Splunk or Tableau.
- Explore the wide range of available commands to customize and automate.
Easily move data to the cloud
Shift data to the cloud without refactoring applications. You can copy data from Qumulo’s file data platform to Amazon S3 with an easy visual interface that lets you tier data to Glacier, collaborate on it globally, or push that data into cloud services such as SageMaker, Rekognition, or even a simple Lambda function.
- Transform data from file to object
- Cloud backup
- Cloud archive
- Leverage file data with cloud-native applications designed for object data
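Tiering copied objects to colder storage comes down to choosing an S3 storage class per upload. The sketch below is illustrative: the tier names and the `upload_args` helper are assumptions of this example, while the storage-class strings are AWS's documented values. The boto3 call shown in the comment is the standard SDK pattern for setting a storage class on upload.

```python
# Map a human-readable tier to an S3 storage class. The tier names are
# illustrative (this example's convention); the storage-class strings
# are AWS's documented values.
S3_STORAGE_CLASS = {
    "standard": "STANDARD",
    "infrequent": "STANDARD_IA",
    "archive": "GLACIER",
    "deep-archive": "DEEP_ARCHIVE",
}

def upload_args(tier: str) -> dict:
    """Build the ExtraArgs mapping for boto3's upload_file call."""
    try:
        return {"StorageClass": S3_STORAGE_CLASS[tier]}
    except KeyError:
        raise ValueError(f"unknown tier: {tier}") from None

# With boto3 installed and AWS credentials configured, the copy is:
#   import boto3
#   s3 = boto3.client("s3")
#   s3.upload_file("/mnt/qumulo/project/frame0001.exr",  # placeholder path
#                  "my-archive-bucket",                   # placeholder bucket
#                  "project/frame0001.exr",
#                  ExtraArgs=upload_args("archive"))
```

Once the object lands in S3, it is immediately usable by object-native services such as SageMaker or a Lambda trigger — no application refactoring on the file side.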
Easily protect your data to minimize disruptions
Simple data protection to meet company requirements for disaster recovery, backup, and user management without the added cost or complexity of third-party applications.
- Erasure coding
- Snapshot replication
- Continuous replication
- Failover for simple disaster recovery
- Cloud volume snapshot to S3
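Erasure coding, the first item above, protects data with far less overhead than full replication: data blocks are spread across nodes together with computed parity, and any lost block can be rebuilt from the survivors. A toy single-parity sketch (real systems use Reed–Solomon-style codes tolerating multiple failures) shows the idea:

```python
def parity(blocks: list[bytes]) -> bytes:
    """XOR equal-sized blocks into a single parity block."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

def reconstruct(survivors: list[bytes], parity_block: bytes) -> bytes:
    """Recover the one lost block from the survivors plus parity."""
    return parity(survivors + [parity_block])

data = [b"aaaa", b"bbbb", b"cccc"]  # three data blocks on three nodes
p = parity(data)                    # parity block on a fourth node
# Suppose the node holding data[1] fails: rebuild its block.
assert reconstruct([data[0], data[2]], p) == data[1]
```

Here four nodes protect three blocks of data (33% overhead), where triple replication of the same data would cost 200% overhead — the core economic argument for erasure coding.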
Automatically protect data from external threats
- AES 256-bit software encryption at rest
- SMBv3 in-flight (over-the-wire) encryption
- Configure, schedule and query programmatically and securely with RESTful API over HTTPS
- Secure FTP traffic on TCP networks leveraging Transport Layer Security (TLS)
- Track user activity within the file system with audit logging
- Designate different levels of user and group privileges leveraging Role-Based Access Control (RBAC)
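The RBAC model in the last item reduces to a mapping from roles to privilege sets, with each access check testing membership. A minimal conceptual sketch — the role and privilege names below are illustrative, not a product's actual role catalog:

```python
# Hypothetical role definitions: real RBAC systems let administrators
# define these; the names here are illustrative only.
ROLES: dict[str, set[str]] = {
    "observer": {"read_metrics"},
    "operator": {"read_metrics", "manage_snapshots"},
    "administrator": {"read_metrics", "manage_snapshots", "manage_users"},
}

def is_allowed(role: str, privilege: str) -> bool:
    """Return True if the role grants the privilege (unknown roles deny)."""
    return privilege in ROLES.get(role, set())

assert is_allowed("operator", "manage_snapshots")
assert not is_allowed("observer", "manage_users")
```

Because checks default to deny for unknown roles, adding a new role is an explicit grant rather than an accidental one — the least-privilege posture RBAC is meant to enforce.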
See Qumulo in action with a demo
See how radically simple it is to manage your file data at massive scale across hybrid cloud environments.