By Jim Bahn, Senior Director, Product Marketing
Over the past few years, I’ve talked with many IT pros who try to match their storage infrastructure purchases to their application performance needs. Kudos to them for trying. Frankly, if you depend on the storage vendors’ spec sheets alone, the claims look so similar that I pity the buyer who is forced to rely on them. I was once one of those vendors, and when a customer asked me whether our array would support their SLA, the best I could say was “probably, and if it doesn’t, we’ll sell you more stuff.”
Unfortunately, many people download freeware/shareware tools, cobble together a lab for a procurement project, attempt to develop an accurate synthetic workload model, and cross their fingers. Unless that model accurately simulates your actual production workload, you’re still just guessing, and here’s why. Based on years of working with storage vendors and enterprise customers, we’ve learned that application workloads exhibit sub-second burstiness, and when the bursts are large, they can cause serious performance issues. If your freeware tools can’t simulate this behavior (and I don’t know of any that can), then your testing is deeply flawed.
Recently, our engineers developed tests to demonstrate this and wrote a very useful whitepaper that dramatically shows the effect of bursty I/O on performance. The differences are not small: they can be orders of magnitude, and they often can’t be mitigated by throwing Flash at the problem. Of course, our Load DynamiX Enterprise simulates sub-second bursts. Most of our customers were once users of freeware, but they’ve come to depend on the more realistic tests enabled by LDX-Enterprise when making major procurement decisions. To quote Kurt Vonnegut in Cat’s Cradle: “In this world, you get what you pay for.”