
Best practices for assessing flash storage performance

In a recent survey of storage architects at Fortune 500 companies, 54% said that over the next 12 months they would be deploying all-flash arrays, not exclusively, but for select applications. And while flash arrays offer significant environmental and footprint benefits, performance is the reason everyone is evaluating solid state storage. In fact, 9 of the top 10 things you need to know about flash are performance-related. Load DynamiX specializes in performance testing, which gives us a unique vantage point, and we'd like to share some of what we've learned.

  1. Data deduplication. Dedupe reduces both data storage footprints and transmission loads (bandwidth requirements). But inline dedupe imposes additional computational cost and can therefore have a significant impact on application performance. Vendor algorithms vary greatly, and their differences significantly affect performance. Because the economic payoff of flash may rely heavily on reduced capacity requirements, and because vendors implement dedupe differently, the performance of a given all-flash array can differ widely depending on the data it is fed; the data-reduction sketch after this list shows one way to generate test data with a controlled dedupe ratio.
  2. Data compression. Compression carries many of the same benefits and potential performance costs as dedupe, but needs to be considered separately. As with dedupe, vendor support and algorithms vary greatly, so your test methodology and load generator must let you control how compressible the generated data is. To get a feel for how compressible your files are, zip them and compare the zipped and unzipped sizes (the sketch after this list shows a programmatic version of this check). Keep in mind that the rate of change in the flash space is remarkable: a vendor that didn't support compression three months ago may deliver bleeding-edge capabilities today.
  3. Metadata. A great deal of the internal management of flash-based arrays is meant to optimize the performance and reliability of the media. Array performance and scale are greatly affected by where metadata is stored and how it is used. This is a big reason to properly precondition most flash arrays (i.e., write to each flash cell) before testing, to avoid artificially fast read results; a minimal preconditioning sketch appears after this list.
  4. Workload profiles and scale. Hard disk arrays are capable of IOPS in the range of many thousands; common flash-based arrays can support hundreds of thousands. The workload profiles for which flash-based arrays are generally deployed are also very different from the classic workloads of the past: mixed virtualized workloads exhibit much more variability, including both extremely random and sequential data streams; a wide mix of block sizes and read/write ratios; compressible, dedupable, and non-compressible/dedupable blocks; and hot spots. A sketch of a mixed-workload generator appears after this list.
  5. Over provisioning. To improve the probability that a write operation arriving from the host has immediate access to a pre-erased block, most, but not all, flash products contain extra capacity. Over provisioning is common because it can help flash designers mitigate various performance challenges that result from garbage collection and wear leveling, among other flash management activities. It also increases the longevity of flash arrays.
  6. Hotspots. Most real-world workloads exhibit hotspots (i.e., temporal and spatial locality of access). Garbage collection, which proactively eliminates the need for whole-block erasures prior to every write operation, may exacerbate hotspots, and vendors' garbage collection methodologies differ. Test with realistic locality patterns; see the hotspot sketch after this list.
  7. Protocols. We've spent over five decades learning how HDDs perform under different protocols, but storage protocols often yield quite different performance levels with flash. Factors such as block sizes and error correction overhead can make a big difference in throughput and IOPS (throughput is roughly IOPS multiplied by block size, so the same array can look very different depending on which metric you report).
  8. Software services. Replication, snapshots, clones, and thin provisioning can be very useful for improving utilization, recovery options, failover, provisioning, and disaster recovery. However, their implementations can have a big performance impact, and their effects will often differ from what you see in HDD systems.
  9. QoS at scale. Quality of Service affects both infrastructure and application performance. Be sure to build and run your tests with QoS configured the way you plan to use it in production, and as your load increases, measure the array's ability to deliver the expected performance in mixed workload environments. Mean latency alone can hide violations, so track tail percentiles as in the sketch after this list.
  10. Effective cost of storage. Looking at just cost per gigabyte ($/GB) is not a good way to compare storage costs; most industry experts suggest also looking at $/IOPS. A good question to ask is: how much of the raw capacity is actually usable? Arrays vary widely in their conversion from raw storage to usable storage. For instance, due to the inherent speed of flash, you can effectively use deduplication and compression to fit substantially more data on a given amount of raw storage. Of course, you need to ensure that your data reduction assumptions are realistic: talk with your application vendors and storage vendors, whose storage efficiency estimates will give you an idea of what to expect from their particular platforms, and validate those estimates yourself. A worked cost comparison appears after this list.
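
To make the data reduction points in items 1 and 2 concrete, here is a minimal Python sketch, under assumed parameters (4 KiB blocks, a 4:1 duplication target), of two measurements: estimating compressibility with zlib (a programmatic version of the "zip it and compare" check) and building a test stream with a controlled dedupe ratio. The function names and ratios are illustrative, not part of any vendor's tooling.

```python
import os
import zlib

def compressibility(path, level=6):
    """Original size / deflate size: a rough, programmatic version
    of the 'zip it and compare' check for sample files."""
    with open(path, "rb") as f:
        data = f.read()
    return len(data) / len(zlib.compress(data, level))

def dedupable_stream(n_blocks, block_size=4096, dedupe_ratio=4):
    """Yield blocks in which each unique pattern repeats
    `dedupe_ratio` times, approximating a 4:1 dedupable data set.
    Dedupe engines match whole blocks at any distance, unlike
    deflate's 32 KiB window, so measure dedupe by unique-block
    count rather than with zlib."""
    uniques = max(1, n_blocks // dedupe_ratio)
    patterns = [os.urandom(block_size) for _ in range(uniques)]
    for i in range(n_blocks):
        yield patterns[i % uniques]

if __name__ == "__main__":
    blocks = list(dedupable_stream(1024))
    print(f"unique blocks: {len(set(blocks))} of {len(blocks)}")
```

Feeding streams like this at several ratios is one way to see how steeply a given array's performance and effective capacity vary with data type.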
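
Item 3's preconditioning advice amounts to writing every cell before you measure reads. Production test rigs typically do this with a dedicated tool such as fio; the sketch below is only a minimal illustration of the idea, and the target path is hypothetical.

```python
import os

def precondition(path, size_bytes, block_size=1 << 20):
    """Sequentially overwrite the full capacity of a test file or
    block device with incompressible data so no read test ever hits
    an unwritten (artificially fast) region."""
    with open(path, "r+b") as f:
        written = 0
        while written < size_bytes:
            chunk = min(block_size, size_bytes - written)
            f.write(os.urandom(chunk))
            written += chunk
        f.flush()
        os.fsync(f.fileno())

# Hypothetical usage; many practitioners write the full capacity at
# least twice so metadata handling and garbage collection reach a
# steady state before results are recorded:
# precondition("/dev/sdx", raw_capacity_bytes)
```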
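
Item 4's point about mixed virtualized workloads is easiest to see in a generator. The profile below is hypothetical: three streams with illustrative weights, block sizes, read fractions, and access patterns, not any standardized benchmark.

```python
import random

# Hypothetical mixed-workload profile; the weights, block sizes, read
# fractions, and access patterns are assumptions for illustration.
PROFILE = [
    # (name, weight, block_size, read_fraction, sequential)
    ("oltp",    0.5, 8 * 1024,   0.70, False),
    ("logging", 0.2, 64 * 1024,  0.05, True),
    ("backup",  0.3, 256 * 1024, 1.00, True),
]

def next_io(cursors, lba_max):
    """Pick a stream by weight, then emit one (op, offset, size) tuple."""
    name, _w, bs, read_frac, seq = random.choices(
        PROFILE, weights=[p[1] for p in PROFILE])[0]
    op = "read" if random.random() < read_frac else "write"
    if seq:
        offset = cursors.setdefault(name, 0)       # resume where the stream left off
        cursors[name] = (offset + bs) % lba_max
    else:
        offset = random.randrange(0, lba_max, bs)  # block-aligned random offset
    return op, offset, bs

cursors = {}
for _ in range(5):
    print(next_io(cursors, lba_max=1 << 30))
```

A load generator limited to a single fixed block size and read/write ratio will miss exactly the variability that distinguishes flash behavior under these workloads.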
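
For item 6, the simplest way to model a hotspot is a skewed address generator. The 90/10 split below is an assumption chosen for illustration; real traces should drive the numbers.

```python
import random

def hotspot_offset(lba_max, hot_fraction=0.1, hot_hit_rate=0.9):
    """Skewed address generator: `hot_hit_rate` of I/Os land in the
    first `hot_fraction` of the address space, a 90/10-style
    locality pattern."""
    hot_span = int(lba_max * hot_fraction)
    if random.random() < hot_hit_rate:
        return random.randrange(hot_span)           # hot region
    return random.randrange(hot_span, lba_max)      # cold remainder

# A generator without this locality exercises garbage collection very
# differently from production traffic.
```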
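
For item 9, mean latency can look healthy while a QoS policy is being violated for one tenant, so track tail percentiles per workload as load scales. The nearest-rank percentile function and the sample latencies below are illustrative only.

```python
import statistics

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples."""
    s = sorted(samples)
    rank = max(1, round(p / 100 * len(s)))
    return s[rank - 1]

# Hypothetical latencies (ms) for two tenants sharing a QoS policy:
tenant_a = [0.4, 0.5, 0.4, 0.6, 2.1, 0.5, 0.4, 9.8, 0.5, 0.4]
tenant_b = [0.3, 0.3, 0.4, 0.3, 0.3, 0.4, 0.3, 0.3, 0.4, 0.3]
for name, lat in (("A", tenant_a), ("B", tenant_b)):
    print(name, "mean:", round(statistics.mean(lat), 2),
          "p99:", percentile(lat, 99))
```

Tenant A's mean looks tolerable while its p99 reveals the outliers a QoS guarantee would care about.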
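
Finally, the cost comparison in item 10 is simple arithmetic once you have measured numbers. All figures below are placeholders; substitute real quotes, your measured data reduction ratio, and IOPS from your own tests.

```python
# All figures are placeholders; substitute real quotes and results
# measured on your own workloads.
raw_tb        = 20        # raw flash purchased
usable_tb     = 15        # after RAID, spares, and formatting
reduction     = 3.0       # measured dedupe + compression ratio
price_usd     = 150_000
measured_iops = 300_000

effective_tb = usable_tb * reduction
print(f"$/GB raw:       {price_usd / (raw_tb * 1000):.2f}")
print(f"$/GB effective: {price_usd / (effective_tb * 1000):.2f}")
print(f"$/IOPS:         {price_usd / measured_iops:.2f}")
```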

Storage architects considering all flash arrays for their workloads must explore the behavior of these products, and as far as possible, assess their performance in the context of their expected workloads. With a robust test and validation process in place, storage engineers and architects can select and configure flash storage solutions for their workloads with a clear idea of their impact on both performance and cost in production.

by
Jim Bahn
Senior Director, Marketing