Serverless computing promises to usher in an era of auto-scaling, pay-as-you-go software design that aligns perfectly with SaaS business models. While significant hurdles remain for serverless compute platforms, we believe serverless application design patterns will become commonplace in the next few years.
Major innovations in computation are typically followed by advancements in storage technology. The shift to serverless software design patterns is no different, and storage needs to be ready for what’s next.
The promises of serverless computing
The term “serverless” computing is clearly an oxymoron; compute cannot happen without servers. However, hardware can be abstracted so far away from developers that they feel as if their code just gets deployed “auto-magically” to the cloud, without ever having to think about infrastructure again.
To do this, developers must break up their applications into a series of independent, stateless functions that are then executed based on a developer-defined trigger. Today, each cloud provider offers easy-to-integrate triggers for most of its cloud services. For example, in AWS, developers can configure a function to run when an object is uploaded to a specified bucket. Third-party applications can also trigger functions via the AWS SDK or API. This is true of the other public clouds as well.
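As a sketch of what a developer writes on the other end of such a trigger, here is a minimal AWS Lambda handler for an S3 upload event. The event shape follows AWS’s documented S3 notification format; the bucket and object names are illustrative, and the sample event is trimmed down to the fields the handler actually reads.

```python
import urllib.parse

def handler(event, context):
    """Hypothetical Lambda handler fired by an S3 ObjectCreated trigger."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    # S3 URL-encodes object keys in the event payload (spaces become '+')
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
    print(f"New object: s3://{bucket}/{key}")
    return {"bucket": bucket, "key": key}

# A trimmed-down sample event mimicking what S3 delivers to the function:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "uploads-bucket"},
                "object": {"key": "photos/cat+1.jpg"}}}
    ]
}
print(handler(sample_event, None))
```

The function never provisions anything; the platform invokes it with the event payload, which is what makes the trigger model feel “auto-magical.”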
Where serverless (currently) falls short
Ironically, the great things about serverless computing platforms, such as the ability to auto-scale pay-as-you-go stateless functions, are also at the core of why serverless computing has yet to take off. Fundamentally, most customers are still busy virtualizing or containerizing applications and are nowhere near rewriting them entirely to “functionalize” them.
Above: Enterprise adoption of cloud technology
Yet when application rewrites occur, or when new application development begins, developers often find it’s much harder to implement massively parallel systems composed of stateless functions than they initially expected. Just like the term “serverless,” “stateless” computing is also an oxymoron. With functions, state might not be stored in the function itself, but it does get stored somewhere. Usually that’s in shared storage such as S3.
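The pattern above can be sketched in a few lines: the function itself holds nothing between invocations, so every piece of state must round-trip through shared storage. To keep the sketch runnable, a dict-backed `store` stands in for an S3 bucket; in production this would wrap an S3 client, and the key name is illustrative.

```python
import json

def increment_counter(store, key="visits.json"):
    """Stateless function: all state lives in shared storage (the `store`
    stand-in here; S3 in a real deployment), never in the function."""
    try:
        state = json.loads(store[key])   # slow path in real life: an S3 GET
    except KeyError:
        state = {"count": 0}             # first invocation: no state yet
    state["count"] += 1
    store[key] = json.dumps(state)       # persist before returning: an S3 PUT
    return state["count"]

bucket = {}  # stand-in for a shared bucket
increment_counter(bucket)
print(increment_counter(bucket))  # 2 -- state survived outside the function
```

Every invocation pays for that GET and PUT, which is exactly where the cost problem described next comes from.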
The trouble is that shared storage is slow, so companies have to pay for the time it takes to execute the function as well as the time it takes to store and retrieve data from shared storage. This becomes most painful when a function is spun up for the first time and needs to load all the libraries the code references. Today, the major FaaS platforms must load each dependency library in its entirety instead of just the parts the function uses, so a cold start can mean downloading hundreds of megabytes of dependencies.
Developers often attempt to overcome the performance limitations of slow shared storage by implementing one of two anti-patterns. Either they create caches on the functions that need quick access to data, resulting in high fan-out systems with caches everywhere, or they fall into a “ship data to code” anti-pattern where large volumes of data get passed from function to function. Moving lots of data to computation is inefficient, and it makes functions highly dependent on one another, breaking the promise of a highly decoupled system.
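The first anti-pattern is easy to fall into because module-level state in a function survives across “warm” invocations of the same instance. A minimal sketch, with illustrative names, where `load_fn` stands in for a slow shared-storage read such as an S3 GET:

```python
_cache = {}  # module-level state persists across warm invocations of one instance

def fetch_dataset(name, load_fn):
    """Local-cache anti-pattern: every function instance that needs fast
    access to shared data keeps its own copy, so a high fan-out system
    ends up with duplicated caches everywhere."""
    if name not in _cache:
        _cache[name] = load_fn(name)  # slow path: one trip to shared storage
    return _cache[name]

loads = []
def slow_load(name):
    loads.append(name)  # record each simulated shared-storage read
    return f"contents of {name}"

fetch_dataset("model.bin", slow_load)
fetch_dataset("model.bin", slow_load)  # served from the local cache
print(len(loads))  # 1 -- shared storage was hit only once by this instance
```

The cache helps one instance, but at scale each of hundreds of concurrent instances holds its own copy, multiplying memory use and creating consistency problems the platform never resolves for you.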
Storage innovation is needed
While other issues with FaaS platforms exist, slow shared storage is a common limitation of serverless application design in practice. To enable the widespread adoption of serverless computing, next-generation storage with the following characteristics must be built:
- Instantly available – This means that provisioning storage happens in milliseconds.
- Elastically scalable – Each function shouldn’t have to provision its own storage. Instead, as requests increase and new functions are created, a central data repository should be able to grow to meet demand.
- Close to compute – Customers don’t want to pay for time spent querying or storing data, so storage should be as close to compute as possible.
- Economical for bursted workloads – Ideally, storage should be antifragile from a cost perspective: the more demand there is for a piece of data, the faster it should become to deliver it to users.
- Easy to access via applications – One of the reasons S3 took off was that it was designed for web applications and launched with amazing APIs and SDKs. Storage used in serverless computing needs to be easy for functions to access securely, via an easy-to-use developer SDK and API.
It will be interesting to see how storage will meet these needs. What’s unclear is whether existing platforms can evolve fast enough or whether entirely new storage products will need to be created to enable serverless computing. Either way, the storage industry is about to experience massive disruption yet again.
Grant leads the performance teams at Qumulo, working toward building the fastest file system in the world.