Sean Maxwell –
The effect of virtualization on IT has so complicated the challenge of delivering real-time infrastructure performance management that it's time we did something about it. And when you've gone blind to the performance and availability of your mission-critical workloads (and we all have), it's time to undergo 'Lasik for your Legacy' and get 'X-ray Vision for your Cloud'!
A large, UK-based financial services customer recently described to me the real-life challenge of virtualizing his data center. He talked about "the good old days, when they had a 10,000-square-foot legacy, 'non-cloud-ified' data center." When they hit a really nasty performance or availability problem, and the cadre of 73 device-based tools they owned couldn't solve it (and they rarely could), they could deploy a team to the data center to check physical connections, look for crimped cables and loosely seated connectors, and watch for blinking lights. You know the drill. They were able to use all of their senses (sight, touch, sound, and so on) to help solve the nastiest of the nasty problems.
Well, they just finished building their first "next-generation, virtualized data center." It's a thing of beauty, too: 100% virtualized and 100% cloud-enabled. But he bemoaned his CIO-inspired charter of guaranteeing the performance and availability of this new mission-critical infrastructure (new charter, same as the old charter), because he felt that many of his senses were now 'dead,' and his life had just become immeasurably more difficult.
He stated, "It feels like my new 'virtual' data center is now a million square feet. I've got data everywhere, and nowhere at all. There are no blinking lights, we're wearing blindfolds, and we have oven mitts on our hands as we try to do our jobs." And remember his 73 device-based tools? They're now worse than useless, since they were designed in a physical world to manage physical infrastructure. Picture that for a minute… it doesn't give you a warm and toasty feeling, does it?
He also said that because he lacked performance and utilization data about his legacy data center, they simply built the new estate as a "port-for-port, LUN-for-LUN" copy of the old one. Yes, the fabric was newer and the drives were faster. That's good news for their SAN and storage vendors. But for this hard-working IT staff, it was bad news: availability had dropped, and performance had worsened.
Like most customers, this shop had myriad latent infrastructure problems: slow-draining devices, VM storms, multipathing issues, failing optics, broken or bent cables, flooded ISLs, hot hosts and LUNs, and so on. These problems had been degrading the performance of their legacy applications for what seemed like years. And in spite of being over-provisioned, the legacy environment was still limping along. So after a very painful data center migration (planned for 3 months, now 12 months and counting) and storage consolidation (same LUNs, denser drives), this customer is no better off than before. And ironically, they now have less visibility into performance and availability because they're 100% virtualized and cloud-enabled. Same old tools, but now a complete abstraction of the I/O workload.
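To see why problems like hot LUNs hide inside device-level averages, here's a minimal sketch of the kind of analysis a system-wide view enables: comparing each LUN's observed I/O latency against the estate-wide baseline. The data, names, and outlier threshold are invented for illustration, not taken from any particular tool.

```python
# Hypothetical sketch: flag "hot" LUNs whose average I/O latency is an
# outlier versus the rest of the estate. Data and threshold are invented.
from statistics import mean, stdev

# Latency samples (ms) per LUN, as a monitoring tool might collect them.
latency_ms = {
    "LUN-01": [2.1, 2.3, 2.0, 2.4],
    "LUN-02": [2.2, 2.5, 2.1, 2.3],
    "LUN-03": [9.8, 11.2, 10.5, 12.0],  # the slow-draining suspect
}

# Average latency per LUN, then a simple estate-wide baseline and spread.
avgs = {lun: mean(samples) for lun, samples in latency_ms.items()}
baseline = mean(avgs.values())
spread = stdev(avgs.values())

# A LUN is "hot" if its average sits more than one spread above baseline.
hot = [lun for lun, avg in avgs.items() if avg > baseline + spread]
print(hot)
```

The point isn't the arithmetic; it's that no single device-based tool has the cross-estate view needed to compute that baseline in the first place.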
So here we are folks: the dark ages of IT are gone, and the enlightened new world is upon us. ‘Virtual, Cloud and Software-defined’; once Marketing buzzwords, they’re now Procurement line items on large IT purchase orders. But with all this great new technology, why isn’t performance better now than it used to be? Why isn’t availability any higher than it used to be? Doesn’t the new converged infrastructure come with a new set of tools that makes all this virtualization easier? The answer, unfortunately, is NO.
So the real question is: what should I do now?
It's time for IT to undergo "Lasik for Legacy". Just like our legacy IT infrastructure, we're all getting older too 😉 And there's no shame when we correct our vision, is there? Heck, I've had to put on my 1.25x readers in order to write this article. And as easily as you can put on a pair of glasses to read, there's now a way to improve your 'IT vision' too. You can restore real-time visibility into your legacy environments. You can then unlock your data and use it to remediate latent, lingering issues. You can use this data to improve the performance of your legacy apps. And then you can use it to determine how much of your legacy environment you're actually using. More important still, you can actually see how well it's being used.
For 20/20 vision into your legacy environments, I recommend the following:
VI Critical Infrastructure Audit (CIA): affectionately known as our Lasik for Legacy service by our happy customers, this is a Software-based Services engagement to measure the Health, Utilization and Performance of the virtualized Host & SAN environment. We perform a 1-2 week non-disruptive, agentless data collection. We analyze the results, and make recommendations to improve the performance and availability of your legacy environment BEFORE you migrate or consolidate your data. The cost of this service is returned many-fold by identifying areas to optimize your existing assets.
And if you’re on the road to heavy virtualization, deploying cloud or converged infrastructure, or considering an early software-defined strategy, you’re going to need a lot more than Lasik for Legacy. You’re going to need X-ray vision for the Cloud!
For net-new physical, virtual and cloud environments, I recommend the following:
VI Performance Management Service: often called our 'X-Ray Vision for the Cloud', this Services engagement combines our Host/SAN software probes with our TAPs and hardware performance probes. By tapping into the stack, we're able to capture and analyze the systemic, real-time performance of every I/O, whether you're on a physical, virtual or cloud infrastructure. This enables proactive monitoring to guarantee availability, aggressively manage the performance of the infrastructure, and build or buy only what you need to support your mission-critical workload.
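Why does capturing every I/O matter, rather than sampling averages? Because averages hide spikes. Here's a conceptual sketch (invented sample data, not output from any actual probe) of summarizing per-I/O completion times the way a wire-data analysis layer conceptually would, with percentiles exposing the outlier that the average smooths over:

```python
# Conceptual sketch only: summarize per-I/O completion times so latency
# spikes are visible. Sample data is invented for illustration.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(samples)
    k = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[k]

# Exchange completion times (ms) captured off a TAP during one interval.
# One I/O took 42 ms; the rest completed in a few milliseconds.
ect_ms = [1.8, 2.0, 2.1, 2.2, 2.3, 2.5, 2.6, 3.0, 8.5, 42.0]

summary = {
    "avg": sum(ect_ms) / len(ect_ms),   # the average looks tame
    "p50": percentile(ect_ms, 50),      # typical I/O
    "p99": percentile(ect_ms, 99),      # the spike the average hides
}
print(summary)
```

In this toy interval the average is 6.9 ms while the 99th percentile is 42 ms: exactly the kind of outlier that stalls a mission-critical application yet vanishes in a device-level average.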
The moral of the story is this: the server, fabric and storage companies' device-based tools are no longer sufficient to manage the performance and availability needs of today's business. You need a system-wide view of your infrastructure that is real-time, agentless, out-of-band, and provides an agnostic view of your performance and availability, whether it's physical, virtual or cloud. And there's only one company that can provide that.
Does it sound too good to be true? I can assure you, it’s not.
With 35 of the Fortune 100 as customers already, and 350+ enterprise customers around the world, we’re creating a world where applications and infrastructure simply perform better together. Let us show you how.