Learn how Qumulo’s built-in data analytics provide detailed information on the efficiency and usage of a Splunk deployment.
Splunk is a leading data analytics platform. It gathers many types of log and machine-generated data, then indexes, analyzes, and visualizes very large data sets. Splunk provides both historical and real-time data analytics and has developed a large ecosystem that includes machine learning (ML) libraries and various software development kits (SDKs). Splunk, like Qumulo, is highly scalable, which makes Qumulo an ideal platform for running Splunk solutions.
Qumulo’s file system complements the Splunk data platform to optimize Splunk data storage efficiency. This article will help you understand the Splunk repository at the file level using Qumulo Core’s real-time data analytics and explain how the Qumulo file system can help you:
- Eliminate silos of storage using a single storage namespace for Splunk data
- Achieve transparent capacity and IO expansion with a linear scale-out storage architecture
- Customize your Splunk environment via programmable REST API
- Optimize your storage infrastructure for sequential index writes and random search reads, as well as for hot, warm, and cold data
Architecture of Splunk
The main components of any Splunk implementation are forwarders, indexers and search heads.
Forwarders are typically software agents that run on the devices Splunk monitors. They forward streams of logs from those systems to the Splunk indexers.
The indexers are the heart of the Splunk architecture. They parse and index log data in real time.
Splunk Search Heads
Search heads are separate servers to which users connect to query data, build reports and visualize data. In smaller environments indexers and search heads can run on the same servers.
This diagram shows the Splunk architecture. The indexers contain hot (H), warm (W) and cold (C) buckets, which we’ll discuss in the next section.
Data in Splunk is stored in buckets organized by the age of the data:
- Hot buckets store data when it is first indexed. A hot bucket can be written to until a predefined threshold is reached. Then the hot bucket is closed and rolls to warm.
- Warm buckets contain recently indexed data that is searchable but no longer being written. When the threshold for the number of warm buckets is reached, the oldest warm buckets roll to cold.
- Cold buckets hold the majority of the data in most cases. Cold buckets are read-only but are still in the index. Thus, cold buckets will appear in all search results, reports, and so on.
- Frozen Buckets are long-term archives that are no longer indexed. Frozen buckets are intended to store old data for archival purposes. They are not available for searching, analysis, or reporting.
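In Splunk, this lifecycle is driven per index by settings in indexes.conf. The sketch below shows the relevant settings for a hypothetical index, with cold buckets placed on an assumed Qumulo NFS mount at /mnt/qumulo; the paths and threshold values are illustrative, not recommendations:

```ini
# indexes.conf -- illustrative values for a hypothetical index
[qumulo_demo_index]
homePath   = $SPLUNK_DB/qumulo_demo_index/db                # hot + warm buckets on fast local storage
coldPath   = /mnt/qumulo/splunk/qumulo_demo_index/colddb    # cold buckets on a Qumulo NFS mount (assumed path)
thawedPath = $SPLUNK_DB/qumulo_demo_index/thaweddb

# Roll a hot bucket to warm when it reaches its size limit ("auto" = ~750 MB)
maxDataSize = auto

# Keep at most this many warm buckets before rolling the oldest to cold
maxWarmDBCount = 300

# Freeze (archive or delete) data older than this many seconds (~6 years)
frozenTimePeriodInSecs = 188697600
```

Raising frozenTimePeriodInSecs (and maxTotalDataSizeMB) keeps data in cold buckets longer, which is what makes an efficient cold tier so valuable.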
Qumulo Improves Efficiency of Highly Scalable Storage for Splunk Environments
Splunk can use direct-attached storage (DAS) for all bucket types. However, this type of configuration is relatively inefficient: DAS is complex to manage, and the complexity increases as capacity grows. Whether you are using JBODs or RAID arrays, there is significant administration overhead in either case. Also consider that traditional RAID arrays have extremely long rebuild times, which translates to increased risk of data loss. Typically, high-performance network-attached storage (NAS) is a better solution, which we will discuss further in this article.
To increase reliability, the Splunk replication factor (RF) and the search factor (SF) can be increased. The RF indicates the number of copies of raw data to persist, while the SF determines the number of copies of index data to keep. Both have a default value of two, but this value can be changed to meet specific goals. Even at the default setting of two, each index keeps a full second copy, which can mean a very large amount of data to store.
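To make the RF/SF multiplier concrete, here is a minimal sizing sketch. The compression ratios are assumptions based on commonly cited Splunk rules of thumb (compressed raw data around 15% of ingested bytes, index files around 35%); your actual ratios depend on your data.

```python
def splunk_storage_gb(daily_ingest_gb, retention_days,
                      replication_factor=2, search_factor=2,
                      rawdata_ratio=0.15, index_ratio=0.35):
    """Rough storage estimate for a clustered Splunk index.

    Assumptions (not authoritative):
    - rawdata_ratio: compressed raw data as a fraction of ingested bytes
    - index_ratio:   index (tsidx) files as a fraction of ingested bytes
    RF copies of raw data and SF copies of index files are kept.
    """
    ingested = daily_ingest_gb * retention_days
    raw = ingested * rawdata_ratio * replication_factor
    idx = ingested * index_ratio * search_factor
    return raw + idx

# 500 GB/day retained for 90 days at RF=SF=2 needs roughly 45,000 GB:
print(splunk_storage_gb(500, 90))
```

Doubling RF or SF scales the corresponding term linearly, which is why offloading reliability to the storage layer (as discussed below) can save so much capacity.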
In the following sections you’ll learn how Qumulo’s real-time data analytics provide detailed information on the efficiency of file data storage and usage of a Splunk deployment. Qumulo Aware comes standard in Qumulo Core, providing instant visibility into your Splunk data footprint with real-time analytics.
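These analytics are also exposed over Qumulo's REST API. The sketch below logs in and summarizes cluster capacity; the endpoint paths (/v1/session/login, /v1/file-system) follow Qumulo's published API to the best of my knowledge, but treat the cluster address, port, and field names as assumptions and verify them against your cluster's API documentation.

```python
import json
import urllib.request

API = "https://{cluster}:8000/v1"  # hypothetical cluster address and port

def login(cluster, username, password):
    """Obtain a bearer token (assumed endpoint: POST /v1/session/login)."""
    req = urllib.request.Request(
        API.format(cluster=cluster) + "/session/login",
        data=json.dumps({"username": username, "password": password}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["bearer_token"]

def capacity_summary(fs_info):
    """Reduce an assumed /v1/file-system response to used bytes and percent used."""
    total = int(fs_info["total_size_bytes"])
    free = int(fs_info["free_size_bytes"])
    used = total - free
    return {"used_bytes": used, "percent_used": round(100 * used / total, 1)}

# Offline example with a mocked response (no cluster needed):
print(capacity_summary({"total_size_bytes": "1000", "free_size_bytes": "250"}))
# {'used_bytes': 750, 'percent_used': 75.0}
```

The same pattern applies to per-directory analytics, which is how you can track the footprint of a Splunk cold-bucket tree specifically.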
Qumulo’s Efficient Hybrid Cloud Architecture
A Qumulo cluster starts with four nodes and can scale to many petabytes of capacity simply by adding more nodes at any time. Qumulo Core is optimized to extract maximum performance and efficiency from the underlying hardware, from HPE, Fujitsu, Supermicro, and others, taking full advantage of all-NVMe designs as well as hybrid designs with SSDs in front of HDDs. Qumulo Core intelligently prefetches and caches data to SSD, which means most reads are served from flash, delivering all-flash levels of performance even on hybrid systems.
Even though Splunk doesn’t yet support using NAS storage for hot and warm buckets, using Qumulo with Splunk is an excellent solution for cold buckets (where typically the majority of data is stored). When buckets are moved from the storage defined for warm buckets to the Qumulo cluster for cold buckets, all data lands first on SSDs. This makes the transfer very fast. Also, cold buckets are still indexed and searchable. Data that is on the SSDs will be served at much higher speeds compared to data on HDDs.
The Qumulo File Data Platform is an ideal foundation for a Splunk environment because it stores data more efficiently, is highly resilient, infinitely scalable, and runs on-premises or in any public cloud. Additionally, because data on Qumulo is protected at the block level rather than the file level, any re-protect operation is fast and reliable regardless of file size and is designed to have no adverse impact on performance while running.
Advantages of Using Qumulo with Splunk
Most Splunk implementations regularly capture, index, and serve petabytes of data, making it searchable by the entire enterprise. The volume of data processed by Splunk can create challenging demands on your organization’s storage infrastructure. Working together, Qumulo and Splunk have delivered a solution for providing scalable, efficient storage for Splunk data, as well as API integration directly with Splunk. In summary:
- The Qumulo file system can scale to handle billions of files and many petabytes of data all in a single namespace, yet remain easy to manage.
- The capacity of a Qumulo cluster can be scaled as needed by adding nodes. This can be done while the whole cluster is running without any interruption.
- By adding capacity with additional Qumulo nodes, processing power and throughput will increase linearly as well.
- Frozen buckets can be avoided because data can be stored efficiently and cost effectively on Qumulo in cold buckets. Data in cold buckets remains searchable. Storing more Splunk data allows you to run queries against data that has been stored for many years, rather than only on data from the last couple of months. This provides a more accurate view of trends, as well as making it easier to pinpoint anomalies.
- Instead of increasing Splunk’s RF to increase reliability, Qumulo Core protects data using erasure coding, which is very efficient in terms of disk space usage.
- Qumulo Protect includes data services that come standard in Qumulo Core and features snapshots and snapshot replication, providing a capable backup system.
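To put the erasure-coding point in numbers: with a hypothetical 4+2 scheme (four data blocks plus two parity blocks), raw-capacity overhead is 1.5x, versus 2x for keeping a full second copy via RF=2. The scheme parameters below are illustrative, not Qumulo's actual encoding.

```python
def erasure_coded_capacity(logical_tb, data_blocks, parity_blocks):
    """Raw capacity needed under a (data + parity) erasure-coding scheme."""
    return logical_tb * (data_blocks + parity_blocks) / data_blocks

def replicated_capacity(logical_tb, copies):
    """Raw capacity needed when keeping full copies (e.g. Splunk RF)."""
    return logical_tb * copies

# Storing 100 TB of logical Splunk data:
print(erasure_coded_capacity(100, 4, 2))  # 150.0 TB raw -> 1.5x overhead
print(replicated_capacity(100, 2))        # 200.0 TB raw -> 2.0x overhead
```

The erasure-coded layout also survives device failures without the long rebuild windows of traditional RAID, since re-protection operates on blocks rather than whole disks or files.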
While this article is about using Qumulo as an efficient storage repository for cold buckets in Splunk, it’s worth mentioning that many of our customers also feed Qumulo telemetry data into Splunk for auditing and threat hunting, using Splunk as their main Security Information and Event Management (SIEM) platform. Learn how in this article: Qumulo Core Audit Logging with Splunk.
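For a sense of what that pipeline looks like, here is a minimal sketch that parses one audit event before indexing it in Splunk. The field order and sample line are illustrative assumptions, not Qumulo's documented format; check the audit-logging documentation for the exact CSV payload your cluster emits.

```python
import csv
import io

# Assumed CSV payload fields (after the syslog header) -- illustrative only.
FIELDS = ["client_ip", "user", "protocol", "operation",
          "status", "file_id", "path", "secondary_path"]

def parse_audit_line(line):
    """Parse one CSV audit payload into a dict keyed by FIELDS."""
    values = next(csv.reader(io.StringIO(line)))
    return dict(zip(FIELDS, values))

# Hypothetical audit event: a user reading a file over SMB.
sample = '10.0.1.5,"AD\\alice",smb2,fs_read_data,ok,8004,"/projects/report.xlsx",""'
event = parse_audit_line(sample)
print(event["operation"], event["path"])
# fs_read_data /projects/report.xlsx
```

In practice Splunk's own CSV extraction can do this at index time; the point is that each audit event maps cleanly to fields you can search and alert on.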
Another example of how to protect your data against cyber threats on Qumulo with a cloud-based SIEM is described here: Threat Hunting with Qumulo Audit Logs and Azure Sentinel SIEM.
- Related story: How to Detect Ransomware
- Data sheet: An Efficient and Highly Scalable Data Platform for Splunk
- Solution page: Splunk Storage Designed for Massive Data
Dr. Stefan Radtke, Field CTO EMEA, has spent his career working in technology and is the principal evangelist of universal-scale storage at Qumulo. He started as employee #1 in EMEA in 2017 as Technical Director, where he built a fantastic multi-national technical team. He recently took on the role of Field CTO and is now focused on building a strong technical team for Cloud Q. He is a certified AWS Solutions Architect Professional and Azure Solutions Architect Expert.