Modern imaging techniques are an essential part of cutting-edge research. Across the life sciences, pharmaceutical research, materials science, medicine, the geophysical sciences and astronomy, modern imaging has an impact that often goes unheralded.
For example, nanoscience is the study of atoms, molecules, and objects whose size is on the nanometer scale. A nanometer is one billionth of a meter, which is on the scale of atomic diameters. For comparison, a human hair is about 100,000 nanometers thick.
Electron microscopes make nanoscience possible at a level of detail that simply was not available a decade ago and that is essential to understanding biological processes. Modern imaging spans many technologies: microscopy, magnetic resonance imaging, mass spectrometry, X-ray imaging, radar imaging, ultrasound imaging, photoacoustic imaging, thermal imaging, and imaging from telescopes and satellites. All of these techniques produce large amounts of file data and put tremendous demands on an organization's file storage system.
Electron microscopy, for example, creates high-resolution images that generate very large data sets. These data sets are used in different types of compute-intensive analyses, such as creating a 3D model from a data set of 2D images. A single 2D image is hundreds of megabytes, and a full 3D model, built from the collection of 2D images, can be 20TB of file storage or more.
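The sizes quoted above imply a striking slice count per reconstruction. A quick back-of-the-envelope calculation, using a hypothetical 300MB per 2D image (a round number standing in for "hundreds of megabytes"):

```python
# Back-of-the-envelope sizing for a 3D reconstruction, using the figures
# quoted above. 300MB per slice is an illustrative round number.
slice_size_mb = 300            # one 2D image: "hundreds of megabytes"
model_size_tb = 20             # full 3D model: "20TB of file storage or more"

slices = model_size_tb * 1_000_000 / slice_size_mb
print(f"~{slices:,.0f} 2D slices per 20TB model")  # → ~66,667 2D slices per 20TB model
```

Tens of thousands of large files per model is what makes these workloads so demanding on file storage.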
In one study, a group of researchers used electron microscopy and QF2 to create a map of all the neural connections in a rabbit retina. Thin slices of the retina were scanned and then digitally assembled into a high-resolution, 3D image. Researchers, who are always under pressure to publish their results, needed to process these images quickly for analysis. They also needed to rotate and section the final 3D model in real time. Both activities required that the storage system serve up data fast enough to constantly feed the compute pipeline.
Qumulo File Fabric (QF2) is a modern, highly scalable file storage system that runs in the data center and in the public cloud. QF2 handles large and small files with equal efficiency and can scale to billions of files. Its performance, cost, real-time visibility and simplicity make it ideal for institutions engaged in cutting-edge research.
Managing throughput and latency is critical for research imaging. Researchers want to view images in real time and are sensitive to latency. Viewing images, especially 3D images, can require many random reads from a large file. In addition, new images are being processed while researchers are accessing previously generated image data. All of these activities put heavy demands on the storage system.
QF2 provides twice the price performance of legacy storage systems. QF2's hybrid architecture with adaptive read caching provides the performance needed for random reads within large files. Read caching uses an access-based heat map to intelligently move data to the flash tier when needed. QF2's read caching fits well with the variable access patterns of 2D image planes and linked metadata records. Customers get high performance even with 3D image files that require random small-block reads to access the image the researcher wants to see.
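To make the idea of an access-based heat map concrete, here is a minimal, simplified sketch of the general technique: track per-block read counts and promote blocks to a faster tier once they cross a threshold. This is an illustration of the concept only, not QF2's actual caching algorithm, and the class and threshold are hypothetical.

```python
from collections import defaultdict

class HeatMapCache:
    """Toy illustration of heat-map tiering: blocks whose read count
    crosses a threshold are promoted to a (simulated) flash tier.
    A simplified sketch, not QF2's actual implementation."""

    def __init__(self, promote_threshold=3):
        self.heat = defaultdict(int)   # block id -> read count ("heat")
        self.flash = set()             # block ids currently on flash
        self.promote_threshold = promote_threshold

    def read(self, block_id):
        self.heat[block_id] += 1
        if self.heat[block_id] >= self.promote_threshold:
            self.flash.add(block_id)   # hot block: promote to flash
        return "flash" if block_id in self.flash else "disk"

cache = HeatMapCache()
# Simulate repeated random small-block reads into a large 3D image file.
tiers = [cache.read(block_id=42) for _ in range(3)]
print(tiers)  # → ['disk', 'disk', 'flash']
```

The point of the heat map is that promotion follows observed access patterns rather than file size or age, which suits workloads where a researcher repeatedly pans through one region of a huge image.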
QF2 costs less and has lower TCO than legacy storage appliances. Its advanced file-system technology means that 100% of provisioned capacity is available and not just the 70% or 80% that legacy storage systems recommend. QF2 is optimized for standard hardware with SSDs and HDDs, which cost less than proprietary hardware. In the cloud, QF2 intelligently trades off between low-latency block resources and higher-latency, lower-cost block options. QF2 delivers flash performance at hard disk prices. A single, simple subscription service covers everything, including all features, updates and support. There are no added charges or hidden fees.
Storage administrators need real-time insight into how the storage system is being used by different departments, and they need real-time control so they can manage bottlenecks and rogue processes. Legacy storage systems that rely on off-cluster appliances and slow treewalks cannot provide timely information.
QF2’s real-time visibility and control are extremely useful for managing research imaging workflows. With QF2, it’s easy to find the I/O hotspot and stop the process. Administrators can assign real-time quotas, so they're always in control of how resources are allocated among different projects. Rogue processes that can use up a storage system's resources and halt the system are easy to identify and stop.
The capacity explorer and capacity trends tools give up-to-the-minute information about how storage is being used now and how storage has been used over different periods of time. With these tools, administrators can give researchers realistic numbers when they include storage costs in their grant applications.
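As an illustration of how trend data feeds a grant budget, the sketch below projects capacity and cost over a grant period from an observed monthly growth rate. All of the numbers (current usage, growth rate, per-TB cost) are hypothetical placeholders, not QF2 figures.

```python
# Hedged example: projecting storage needs for a grant budget from a
# measured capacity-growth trend. All numbers are hypothetical.
current_tb = 120          # capacity in use today
monthly_growth_tb = 8     # average growth seen in capacity trends
grant_months = 36         # length of the grant
cost_per_tb_month = 10.0  # illustrative cost per TB per month, USD

projected_tb = current_tb + monthly_growth_tb * grant_months
total_cost = sum(
    (current_tb + monthly_growth_tb * m) * cost_per_tb_month
    for m in range(grant_months)
)
print(f"end-of-grant capacity: {projected_tb} TB, budget: ${total_cost:,.0f}")
```

A linear projection like this is only as good as the trend data behind it, which is why up-to-the-minute usage history matters.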
Research institutes usually have a small IT staff. The complexity of setting up legacy storage systems, and the lack of insight they offer, make them expensive to manage. Getting QF2 from unboxing to serving data is a matter of hours, not days. (QF2 for AWS has almost instantaneous setup.) Once the system is running, the student help desk or junior staff can manage it.
Here is an example of a research imaging workflow.
The images generated by the microscope are transferred to the QF2 cluster. Processing occurs on the compute farm and, at the same time, researchers can carry out their analyses on their own workstations. QF2 performance ensures concurrent access of the data by researchers and compute resources.
We are always looking for new challenges in enterprise storage. Drop us a line and we will be in touch.