Qumulo, Inc. | SPECstorage Solution 2020_ai_image = 700 AI_Jobs |
---|---|
Azure Native Qumulo – Public Cloud Reference Performance | Overall Response Time = 0.82 msec |
Product and Test Information
Azure Native Qumulo – Public Cloud Reference | |
---|---|
Tested by | Qumulo, Inc. |
Hardware Available | May 2024 |
Software Available | May 2024 |
Date Tested | May 28th, 2024 |
License Number | 6738 |
Licensee Locations | Seattle, WA USA |
Solution Under Test Bill of Materials
Item No | Qty | Type | Vendor | Model/Name | Description |
---|---|---|---|---|---|
1 | 33 | Microsoft Azure Virtual Machine | Microsoft | Standard_E32d_v5 | The Standard_E32d_v5 Azure Virtual Machine runs on the 3rd Generation Intel® Xeon® Platinum 8370C (Ice Lake) processor in a hyper-threaded configuration. Each VM features 32 vCPU cores, 256 GiB of memory, 8 NICs, and supports up to 16000 Mbps of network egress bandwidth. Network ingress is not metered, as described at https://learn.microsoft.com/en-us/azure/virtual-network/virtual-machine-network-throughput (“Ingress isn’t metered or limited directly”). For testing, 32 VMs were used as SPEC load generators, and 1 VM was designated as the Prime Client. Each VM utilized a single NIC with Accelerated Networking enabled. |
Configuration Diagrams
Component Software
Item No | Component | Type | Name and Version | Description |
---|---|---|---|---|
1 | Azure Native Qumulo filesystem | File System | 7.1.0 | The Azure Native Qumulo (v2) File System is a cloud service offering provisioned directly from the Azure portal. This architecture is entirely different from previous releases of Qumulo Core Filesystems. Customers specify the initial capacity and a customer subnet delegated to Qumulo.Storage/FileSystems, as explained here: https://learn.microsoft.com/en-us/azure/virtual-network/subnet-delegation-overview. Qumulo uses vNet injection to add only IP addresses to the customer network. Customers can request performance increases on-demand through service requests. From a service consumption perspective, the IP addresses are the only architectural component customers need to be aware of, similar to how a virtual network service endpoint functions. |
2 | Ubuntu | Operating System | 22.04 | The Ubuntu 22.04 operating system is deployed on all Standard_E32d_v5 VMs. |
Hardware Configuration and Tuning – Virtual
Component Name | ||
---|---|---|
Parameter Name | Value | Description |
Accelerated Networking | Enabled | Microsoft Azure Accelerated Networking enables single root I/O virtualization (SR-IOV) |
Hardware Configuration and Tuning Notes
None
Software Configuration and Tuning – Virtual
Component Name | ||
---|---|---|
Parameter Name | Value | Description |
sysctl | 1 | custom sysctl settings used |
Software Configuration and Tuning Notes
/etc/sysctl settings:

net.core.wmem_max = 16777216
net.core.wmem_default = 16777216
net.core.rmem_max = 16777216
net.core.rmem_default = 16777216
net.ipv4.tcp_rmem = 1048576 8388608 16777216
net.ipv4.tcp_wmem = 1048576 8388608 16777216
net.core.optmem_max = 2048000
net.core.somaxconn = 65535
net.ipv4.tcp_mem = 4096 89600 4194304
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_sack = 1
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.route.flush = 1
net.ipv4.tcp_low_latency = 1
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_slow_start_after_idle = 0
net.core.netdev_max_backlog = 300000
vm.dirty_expire_centisecs = 100
vm.dirty_writeback_centisecs = 100
vm.dirty_ratio = 20
vm.dirty_background_ratio = 5
net.ipv4.tcp_sack = 0
net.ipv4.tcp_dsack = 0
net.ipv4.tcp_fack = 0
fs.file-max = 2097152
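Settings like these can be persisted so they survive a reboot. A minimal sketch, assuming a drop-in file under /etc/sysctl.d (the file name 99-specstorage.conf is a placeholder, not taken from the report; only two entries are shown):

```shell
# Persist the tuning values above in a sysctl drop-in and reload them.
# The remaining settings from the list above follow the same form.
sudo tee /etc/sysctl.d/99-specstorage.conf >/dev/null <<'EOF'
net.core.wmem_max = 16777216
net.core.rmem_max = 16777216
EOF
sudo sysctl --system   # re-reads every /etc/sysctl.d/*.conf drop-in
```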
Service SLA Notes
Azure Service SLA:
https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services
Storage and Filesystems
Item No | Description | Data Protection | Stable Storage | Qty |
---|---|---|---|---|
1 | Azure Native Qumulo persists data using Azure Blob Storage, a Microsoft-managed service providing cloud storage that is highly available, secure, durable, scalable, and redundant. Architecturally, read cache is serviced from an in-memory L1 cache and an NVMe L2 cache. The global read cache is elastically increased on demand via customer service requests. Write transactions leverage high-performance Azure Managed Disks, which act as a protected write-back cache for incoming writes, continuously flushing them to Azure Blob Storage. | Provided by Microsoft Azure Blob Storage | Azure Blob Storage | 64 |
Number of Filesystems | 1 |
---|---|
Total Capacity | 100 TB |
Filesystem Type | Azure Native Qumulo |
Filesystem Creation Notes
Azure Native Qumulo is deployed directly from the Azure Portal, the Azure CLI,
or an Azure REST endpoint. Customers specify the storage capacity and the
subnet in which the vNet-injected IP addresses are deployed. For this test, an
aggregate performance capability of 100 GB/s and 500,000 IOPS was requested by
submitting a service ticket in Azure.
Storage and Filesystem Notes
Azure Native Qumulo is a cloud storage service whose architecture differs from
traditional storage architectures, including a traditional Qumulo Core
cluster. Data is persisted to Azure Blob Storage, with only caching occurring
at the compute layer. This disaggregates file system performance from data
persistence at rest, allowing each layer to scale independently and
elastically. The Azure Native Qumulo filesystem is built on top of the Azure
Managed Blob (LRS) object tier. The architecture acts as an accelerator that
executes parallelized reads, prefetching data from the object tier and serving
it directly from the filesystem to the clients (over NFSv3 in this case). Read
cache is serviced from an in-memory L1 cache and an NVMe L2 cache. The global
read cache is elastically increased on demand. Write transactions leverage
high-performance Azure Managed Disks, which act as a protected write-back
cache for incoming writes, continuously flushing them to Azure Blob Storage.
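The read path described above (an in-memory L1 in front of an NVMe L2, with Azure Blob Storage behind both) can be sketched conceptually. This is an illustrative model only, not Qumulo's implementation; the class name, the LRU policy, the cache sizes, and the dict standing in for Blob Storage are all assumptions:

```python
from collections import OrderedDict

class TwoTierReadCache:
    """Conceptual sketch of a two-tier read cache: a small in-memory L1
    in front of a larger L2 (standing in for NVMe), backed by an object
    store (a plain dict here). Illustrative only."""

    def __init__(self, backing, l1_size=2, l2_size=4):
        self.backing = backing          # stands in for Azure Blob Storage
        self.l1 = OrderedDict()         # in-memory cache
        self.l2 = OrderedDict()         # NVMe-like cache
        self.l1_size, self.l2_size = l1_size, l2_size
        self.object_reads = 0           # fetches that reached the object tier

    def _put(self, cache, size, key, value):
        cache[key] = value
        cache.move_to_end(key)
        if len(cache) > size:
            cache.popitem(last=False)   # evict least recently used

    def read(self, key):
        if key in self.l1:              # L1 hit: serve from memory
            self.l1.move_to_end(key)
            return self.l1[key]
        if key in self.l2:              # L2 hit: promote into L1
            value = self.l2[key]
        else:                           # miss: fetch from the object tier
            value = self.backing[key]
            self.object_reads += 1
            self._put(self.l2, self.l2_size, key, value)
        self._put(self.l1, self.l1_size, key, value)
        return value
```

Repeated reads of a hot block are then served from cache without touching the object tier, which is the effect the accelerator layer described above provides.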
Transport Configuration – Virtual
Item No | Transport Type | Number of Ports Used | Notes |
---|---|---|---|
1 | NFS version 3 | 97 | The mount options used were only tcp and vers=3 (without nconnect). IP addresses are distributed to clients using a round-robin DNS record. Each load generator used two of the vNet-injected IP addresses, resulting in 2 NFS mounts per load generator, each to a distinct IP address. |
2 | Azure vNet injected vNIC | 64 | Azure Native Qumulo uses vNet injection to place these NICs onto a customer-owned subnet. They provide access to the filesystem. |
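The distribution described in the table can be sketched as follows. The 10.0.1.x addresses, mount points, and exact assignment scheme are hypothetical; only the shape (64 vNet-injected IPs, 32 load generators, 2 mounts each, with tcp and vers=3) comes from the report:

```python
# Sketch: spread 64 vNet-injected IPs (hypothetical 10.0.1.x addresses)
# across 32 load generators, two NFS mounts per client, round-robin.
ips = [f"10.0.1.{i}" for i in range(1, 65)]          # 64 filesystem IPs

# Each client takes the next two addresses in round-robin order.
assignments = {c: (ips[2 * c], ips[2 * c + 1]) for c in range(32)}

# Mount commands per client, using only the options from the report.
mount_cmds = {
    c: [f"mount -t nfs -o tcp,vers=3 {ip}:/ /mnt/qumulo{n}"
        for n, ip in enumerate(pair, start=1)]
    for c, pair in assignments.items()
}
```

Every address is mounted by exactly one client, so load is spread evenly across all 64 vNet-injected IPs.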
Transport Configuration Notes
None
Switches – Virtual
Item No | Switch Name | Switch Type | Total Port Count | Used Port Count | Notes |
---|---|---|---|---|---|
1 | Azure | 100Gbps Ethernet with Enhanced Networking | 97 | 97 | In Azure, the actual physical switch port counts are not visible to the customer. Since a value is required, it is set to the number of NICs used. |
Processing Elements – Virtual
Item No | Qty | Type | Location | Description | Processing Function |
---|---|---|---|---|---|
1 | 1 | vStorage | Azure Region EastAsia | Azure Native Qumulo Cloud Storage Service | Qumulo File System, Network communication, Storage Functions |
Processing Element Notes
None
Memory – Virtual
Description | Size in GiB | Number of Instances | Nonvolatile | Total GiB |
---|---|---|---|---|
Azure Standard_E32d_v5 VM memory | 256 | 33 | V | 8448 |
Grand Total Memory Gibibytes | 8448 |
Memory Notes
None
Stable Storage
Azure Blob Storage is a cloud storage solution provided by Microsoft Azure
designed to store large amounts of unstructured data. This data can include
text, binary data, documents, media files, backups, and logs. Blob Storage is
optimized for storing and retrieving large blobs of data efficiently, providing
scalable, durable, and highly available storage for various types of
applications and workloads. Azure Blob Storage uses HMAC to ensure the
integrity of data. When data is uploaded, a hash is computed and stored. When
data is retrieved, the hash is recomputed and compared to ensure data
integrity. The Azure Blob Storage class used is the Locally Redundant Storage
(LRS) which keeps multiple copies of data within a single region.
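The compute-and-compare pattern described above (hash on store, recompute and compare on retrieval) can be sketched in a few lines. This is an illustrative sketch of the pattern only, not the Azure Blob Storage protocol; the function names, the dict used as a stand-in store, and the choice of SHA-256 are assumptions:

```python
import hashlib
import hmac

def store(blob_store, name, data):
    """Store data alongside a content hash so integrity can be
    verified on retrieval. Illustrative sketch only."""
    digest = hashlib.sha256(data).hexdigest()
    blob_store[name] = (data, digest)

def retrieve(blob_store, name):
    """Recompute the hash and compare it to the stored one before
    returning the data; raise if they differ."""
    data, stored_digest = blob_store[name]
    recomputed = hashlib.sha256(data).hexdigest()
    # Constant-time comparison, as is conventional for digest checks.
    if not hmac.compare_digest(recomputed, stored_digest):
        raise IOError(f"integrity check failed for {name!r}")
    return data
```

Any corruption between store and retrieve changes the recomputed digest, so the mismatch is detected before the data is handed back.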
Solution Under Test Configuration Notes
The solution under test was a standard Azure Native Qumulo cluster deployed
natively in Azure. The cluster is elastic in both performance and capacity and
can be requested to handle hundreds of GB/s of throughput, millions of IOPS,
and hundreds of PB of capacity.
Other Solution Notes
An aggregate performance capability of 100 GB/s and 500,000 IOPS was requested
for the Azure Native Qumulo deployment; this can be changed with a service
ticket in Azure. The Azure Native Qumulo (v2) File System is a cloud service
offering provisioned directly from the Azure portal. This architecture is
completely different from prior releases of Qumulo Core Filesystems. Customers
specify the initial capacity and a customer subnet delegated to
Qumulo.Storage/FileSystems.
Dataflow
None
Other Notes
None
Other Report Notes
None
Generated on Tue Jun 4 01:14:12 2024 by SpecReport
Copyright © 2016-2024 Standard Performance Evaluation Corporation