By Jim Bahn, Senior Director, Product Marketing
Earlier this year, Enterprise Strategy Group released a report that showed more than half of respondents had already pulled workloads back from the cloud, and 68 percent said their applications are still supported by on-premise storage. IDG and Datalink also found that nearly 40 percent of organizations with public cloud experience have made the reverse leap, migrating applications from the cloud back to the data center.
So, what’s spurring this movement away from the cloud, and what will the next generation of applications look like if not cloud-based?
The Move to Cloud
The initial move to public cloud was driven by perceived cost savings and business agility. Given the sheer complexity of today's data centers, it's no surprise that many organizations thought they could minimize complexity, costs and risk by outsourcing their business applications to public cloud providers.
As cloud providers touted the lower IT costs of moving to the cloud, as well as increased agility, “cloud first” became the mantra for mid- to large-sized companies. While some of these organizations struggled in the beginning to make the transition, ultimately, they were successful in migrating many of their less-critical apps to the cloud.
As several high-visibility outages hit leading cloud providers, businesses began to think twice about putting business-critical apps in the public cloud. Performance issues also arose: it became clear that most cloud providers could deliver on service level agreement (SLA) guarantees for availability, but could not offer SLAs based on user response times or performance.
Some businesses also noted inflated expectations and industry pressure as factors that led them into the cloud too early in the first place. Those who moved the bulk of their company’s IT assets to the cloud before they were ready soon found themselves overwhelmed and underprepared for the kind of support the public cloud demanded. This peer pressure, coupled with notable public cloud outages, left IT leaders feeling conflicted. They wanted to maintain control over their applications and ensure their security, but didn’t want to be labeled as laggards when it came to cloud adoption.
Taking A Step Back
Most companies with limited internal resources don't have the ability to build and run their own cloud-scale environments. So why are we seeing smaller, non-hyperscale organizations "unclouding" and bringing their applications back in-house?
After a year or two of running in the public cloud, IT leaders began conducting detailed cost and performance analyses, comparing their public cloud workloads with those they kept on-premise. As more organizations discovered that their cloud deployments were costing as much as, or even more than, their on-premise environments, the justification for public cloud deployments based purely on cost lost its appeal.
The cloud is no longer an "all or nothing" solution, and business leaders shouldn't make their deployment decisions as if it were. Instead, they are becoming more selective about which applications they deploy in the cloud and which they keep on-premise. I call this intelligent workload placement. Because of this, we're seeing a new generation of companies taking a hybrid approach to their application deployments, embracing the benefits of both public cloud and on-premise infrastructure. They deploy each application wherever it offers the optimal cost/performance tradeoff.
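The placement logic above can be sketched as a simple decision rule. This is a minimal, hypothetical illustration, not an actual product feature: the workload names, cost figures and the latency-SLA rule are all assumptions chosen to mirror the article's points (providers guarantee availability, not response times; cost alone no longer justifies the cloud).

```python
# Hypothetical sketch of an "intelligent workload placement" decision.
# All workload names, cost figures and thresholds are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    cloud_monthly_cost: float   # estimated cost to run in public cloud ($)
    onprem_monthly_cost: float  # amortized cost to run on-premise ($)
    needs_latency_sla: bool     # requires guaranteed response times?


def place(w: Workload) -> str:
    """Pick the deployment that offers the better cost/performance tradeoff."""
    # Providers typically guarantee availability but not response times,
    # so latency-sensitive apps stay on-premise regardless of cost.
    if w.needs_latency_sla:
        return "on-premise"
    # Otherwise, place the workload wherever it is cheaper to run.
    return "cloud" if w.cloud_monthly_cost < w.onprem_monthly_cost else "on-premise"


workloads = [
    Workload("dev-test", cloud_monthly_cost=800, onprem_monthly_cost=2000, needs_latency_sla=False),
    Workload("trading-app", cloud_monthly_cost=5000, onprem_monthly_cost=7000, needs_latency_sla=True),
    Workload("archive", cloud_monthly_cost=3000, onprem_monthly_cost=1200, needs_latency_sla=False),
]

for w in workloads:
    print(f"{w.name}: {place(w)}")
```

In a real environment the inputs would come from billing data and the monitoring platform discussed below, rather than hard-coded estimates.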
Combined with a comprehensive monitoring platform that measures the real-time performance, health and utilization of on-premise infrastructure, this hybrid approach to application deployments allows organizations to better control costs, assure performance and respond more rapidly to changing business conditions.
To learn more about the unclouding phenomenon, check out my article in SDxCentral.