With the explosive growth of AWS, Azure and Google Cloud, it's clear that many organizations are finding success on their cloud journey, but this is only the beginning. The public cloud offers dynamic, scalable services in which hyper-scale infrastructure solves customer needs, but it requires using cloud-native APIs and architecting for application-level resiliency. New-stack applications built in the public cloud touch our lives every day (think Netflix or Uber); they are designed to get to market fast and to enable rapid iteration by DevOps teams.
But what happens to existing applications? Migrating them to the cloud can prove challenging: traditionally, applications or workflows require costly redevelopment to use cloud-native APIs. Existing applications represent the next wave of cloud adoption, and today there is an easier path that avoids significant redevelopment, using cloud-native services such as AI, IoT and blockchain to “wrap” new functionality around migrated apps, or around apps stranded on-premises.
Virtualization of compute resources in the public cloud was solved in the last decade, but the problems of data management remain. Today, industries such as media and entertainment, silicon chip design, and oil and gas exploration want to use the near-infinite scale of public cloud IaaS resources. The data they need to process, however, lives in petabyte-scale systems outside the public cloud, accessed through an entirely different set of APIs that their applications depend on. Cloud-native data management APIs provide simple storage services that are easy to consume as objects and buckets at almost any scale, but they impose data access models different from those of most existing applications, which can force a burdensome re-architecture.
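To make the gap concrete, here is a minimal sketch contrasting the two access models. The bucket, key, and mount-point names are hypothetical; the object side assumes the AWS SDK for Python (boto3), while the file side uses ordinary POSIX calls that work unchanged over an NFS or SMB mount.

```python
import boto3

# Object model: whole-object puts and gets against a bucket over HTTP.
# "render-assets" and the key below are hypothetical names.
s3 = boto3.client("s3")
s3.put_object(Bucket="render-assets", Key="scenes/shot42.exr", Body=b"...")
obj = s3.get_object(Bucket="render-assets", Key="scenes/shot42.exr")
frame = obj["Body"].read()  # objects are replaced whole; no in-place byte updates

# File model: byte-addressable reads and in-place writes through the OS.
# The mount point is hypothetical; the same code runs against local disk,
# NFS, or SMB with no application changes.
with open("/mnt/render-assets/scenes/shot42.exr", "r+b") as f:
    f.seek(1024)          # random access within the file
    header = f.read(512)  # partial read
    f.seek(1024)
    f.write(header)       # in-place update, which plain objects cannot do
```

An application written against the second model cannot simply be pointed at the first, which is exactly the re-architecture burden described above.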
File has been the de facto data management API for applications for more than thirty years, with NFS on Unix/Linux and SMB (CIFS) on Windows and macOS providing access to remote file resources. Workflows and services use file at their core for data access and movement. The file API offers rich metadata, security and compatibility but, most importantly, it offers transparent access from almost any platform. We know there is a better way: one in which customers can use the power of cloud provider object and block services while leveraging file at scale for all their data needs across hybrid environments.
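As a small illustration of that rich metadata and security model, the sketch below reads POSIX attributes through the standard library. The path is hypothetical, and the call is identical whether it points at local disk, NFS, or SMB, which is what makes file access transparent.

```python
import os
import stat

# Hypothetical path on a mounted NFS/SMB share; the same call works
# for local storage, so applications need no changes.
st = os.stat("/mnt/projects/chip-design/netlist.v")

print(oct(stat.S_IMODE(st.st_mode)))  # permission bits (security model)
print(st.st_uid, st.st_gid)           # ownership metadata
print(st.st_size, st.st_mtime)        # size and modification time
```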
Meeting the needs of file-API workloads is a multi-billion-dollar industry outside the public cloud.
Petabyte-scale distributed systems manage the data lifecycle with reliability and protection. Public cloud providers offer only basic support for file API-based services, lacking features many customers rely on in their existing systems. These services were initially designed so applications could burst into the cloud: a working set of data is copied into the cloud file service, the workload executes in the public cloud using the existing file API, and the results are pulled back out. The level of sophistication customers need to orchestrate data into the public cloud in this way remains a barrier. In media and entertainment, where compute resources are critical for rendering special effects, GCP and AWS have built data centers near studios in Hollywood to eliminate data orchestration, with low-latency private links connecting public cloud utility compute directly to the studios' existing data management systems. There is a need for persistent, petabyte-scale, file-based solutions in the public cloud that bridge the needs of existing customer applications and workflows while offering the agility and scalability of public cloud services.
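In its simplest form, the burst pattern described above reduces to a copy-in / compute / copy-out loop. The sketch below assumes both the on-premises share and the cloud file service are already mounted at the hypothetical paths shown, and that "render" stands in for the customer's existing application; real orchestration adds scheduling, integrity checks, and incremental sync, which is where the barrier lies.

```python
import shutil
import subprocess
from pathlib import Path

ON_PREM = Path("/mnt/onprem/shots")   # hypothetical on-prem NFS mount
CLOUD = Path("/mnt/cloudfs/shots")    # hypothetical cloud file service mount

# 1. Copy the working set into the cloud file service.
for src in ON_PREM.glob("shot42/*.exr"):
    shutil.copy2(src, CLOUD / src.name)  # copy2 preserves file metadata

# 2. Run the workload in the cloud against the same file API.
subprocess.run(
    ["render", "--input", str(CLOUD), "--output", str(CLOUD / "out")],
    check=True,
)

# 3. Pull the results back out of the public cloud.
(ON_PREM / "out").mkdir(exist_ok=True)
for result in (CLOUD / "out").glob("*.exr"):
    shutil.copy2(result, ON_PREM / "out" / result.name)
```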
Qumulo provides software for file and data services across public and private clouds at petabyte scale, with data services that meet the demands of industry giants in media and entertainment, silicon design, life sciences, manufacturing, and oil and gas exploration. This means customers can harness all of the public cloud IaaS and PaaS power through Qumulo's API and leverage their data no matter where it resides. Businesses no longer have to re-architect their apps when deciding to move them to the public cloud. If a user chooses to leave an app on-premises but move the data to the public cloud (or vice versa), Qumulo can connect the data seamlessly, along with any cloud provider services the data might be using, such as AI or IoT platforms. Qumulo also provides users with a robust data analytics engine and dashboard for full, real-time visibility into their file data. That data can be pulled into other ISV-provided data services, such as Splunk or Databricks, and Qumulo can integrate with services such as ServiceNow for easier inclusion in DevOps deployment workflows.
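For a sense of what pulling that analytics data into a downstream service could look like, here is a hypothetical sketch that polls a REST endpoint and forwards the result to a Splunk HTTP Event Collector. The cluster address, endpoint paths, JSON fields, and tokens are illustrative assumptions, not Qumulo's documented API surface; consult the actual REST reference before building on it.

```python
import requests

# Hypothetical cluster address and endpoints -- illustrative only,
# not Qumulo's documented API surface.
BASE = "https://qumulo.example.com:8000"

session = requests.Session()

# Authenticate and fetch an analytics snapshot (hypothetical paths/fields).
token = session.post(
    f"{BASE}/v1/session/login",
    json={"username": "admin", "password": "secret"},
).json()
session.headers["Authorization"] = f"Bearer {token['bearer_token']}"

metrics = session.get(f"{BASE}/v1/analytics/activity").json()

# Forward to a Splunk HTTP Event Collector; the URL and token
# below are likewise placeholders.
requests.post(
    "https://splunk.example.com:8088/services/collector/event",
    headers={"Authorization": "Splunk PLACEHOLDER-TOKEN"},
    json={"event": metrics},
)
```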
File data of all types and sizes can now be moved easily to the public cloud or to a mixed private and public cloud environment. With Qumulo, businesses are no longer tethered to hardware limitations: they can scale up or down instantly and build, deliver and innovate on the apps and services that matter most to their customers.