Since Storage Area Networks (SANs) were introduced in the mid-90s, they have been the storage solution of choice for IT admins looking for fast storage that could scale with their organization’s capacity requirements. SAN vendors around the world promise their customers the ability to scale capacity, but it turns out that scale comes with a catch.
A typical SAN environment might look like the one illustrated below:
With SANs, there are three separate networks an organization needs to worry about deploying and maintaining.
First, there’s the underlying Ethernet network connecting the clients to one another. Next, a Fibre Channel network connects a SAN’s block storage to client machines. Lastly, an entirely separate metadata network interfaces with each client via custom drivers (which must constantly be kept up to date) to serve content stored on that block storage.
Beyond management, scaling capacity turns out to be a huge problem with SANs.
Yes, administrators can add newly purchased block storage devices to their Fibre Channel network, but the newly added storage will reside in a separate namespace. By default, from a user’s perspective, that might mean half of their project lives on the D:\ drive, and the other half on the E:\ drive.
Here’s a diagram of SAN architecture before adding additional block storage devices:
And here it is after adding additional block storage devices:
SAN vendors cleverly allow their customers to add new block storage to their network by concatenating new block storage devices to existing ones. Technically, two separate stripe groups exist, but the SAN is able to abstract that away from end users. Unfortunately, performance on the concatenated storage device suffers greatly, so administrators usually resort to wiping the SAN and restriping all devices to restore fast access to data from a single namespace.
Here’s SAN architecture after concatenating a block storage device:
Although data “lives” under a single namespace, data residing on the newly added block storage device will be much slower to access than data living on the storage devices with the original stripe group.
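The performance gap above comes down to parallelism: a striped read fans out across every device in a stripe group, while data on a concatenated device can only be read as fast as that one device. Here’s a minimal back-of-the-envelope model in Python that illustrates the idea. The per-device bandwidth figure and the assumption of perfectly linear scaling are hypothetical, chosen purely for illustration; they are not measurements of any real SAN.

```python
# Illustrative model (not vendor code): why data on a newly concatenated
# device reads slower than data on the original stripe group.
# Assumption (hypothetical): each block device delivers DEVICE_MBPS of
# sequential-read bandwidth, and striped reads scale linearly across
# all devices in a group.

DEVICE_MBPS = 200  # hypothetical per-device bandwidth

def read_throughput_mbps(devices_in_stripe_group: int) -> int:
    """Aggregate sequential-read bandwidth for one stripe group."""
    return devices_in_stripe_group * DEVICE_MBPS

# Original stripe group: 4 devices striped together.
original = read_throughput_mbps(4)      # 4 devices in parallel

# Concatenated capacity forms its own 1-device stripe group,
# so reads from it hit a single device.
concatenated = read_throughput_mbps(1)  # 1 device, no parallelism

# Restriping across all 5 devices restores full parallelism --
# but only after wiping and rebuilding the SAN.
restriped = read_throughput_mbps(5)
```

Under this model, data on the concatenated device is read at a quarter of the original group’s speed, which is why administrators end up restriping despite the disruption it causes.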
SAN vs NAS storage: Which scales better?
The only way around SAN’s inherent scaling issues is to switch over to a NAS (Network Attached Storage) architecture like Qumulo.
Instead of having three separate networks to maintain, organizations simply connect a NAS product to their existing Ethernet network. With Qumulo’s node-based architecture, you can scale capacity all under one namespace without suffering any degradation in performance, so end users can quickly access the data they need when they need it. Simply add another node to your Qumulo cluster to increase both the capacity and performance of your storage.
Lastly, there’s an entirely unique way Qumulo helps your organization scale in ways that both SAN and traditional NAS cannot. Qumulo can run both in your data center and in the public cloud. So whether you need storage for burst computing, virtualized workflows, or archiving data to the public cloud, Qumulo will be able to meet your needs.
If you’re interested in learning more about how Qumulo can meet all of your scale needs, reach out. We’d love to talk with you!
Grant leads the performance teams at Qumulo in building the fastest file system in the world.