Discover Performance: HP Software's community for IT leaders // April 2012
Four elements of the cloud-ready app
If cloud-readiness isn’t understood and instilled by application delivery teams, you might lose the cost savings the cloud was meant to provide.
In enterprise IT, conversations about the cloud often focus on operations and infrastructure. But even with the right infrastructure strategy, enterprises can’t simply drop existing applications into a cloud environment and expect cost savings. At a minimum, existing apps must be optimized; more often, they need a full re-architecting. And when they’re being built from scratch, they require careful planning. In fact, moving an application to the cloud without that preparation is a recipe for higher costs, not lower.
To ensure that the enterprise can reap the cost savings of the cloud, the apps team must understand how to build cloud-readiness into an application’s DNA. This means optimizing four key characteristics.
1. Performance

The challenge: An underperforming application robs you of the cost savings that most likely attracted you to the cloud in the first place. In the cloud model, where you pay by usage, performance flaws have a direct, measurable impact on costs because they cause you to consume more cloud resources than you would otherwise require. There is also a bigger incentive for fine-grained optimization: any problems or bottlenecks, such as inefficient database queries or memory leaks, will needlessly drive up your monthly bill.
How to address it: To avoid paying unnecessarily, your application must scale proportionately with rising demand. During delivery, focus on identifying and fixing any flaws or bottlenecks to weed out performance inefficiencies. Once the application is live, you can harvest real usage patterns from production, which often differ significantly from the usage assumptions made during development. This data can be used to refine your performance scripts to be more accurate and true-to-life, thus providing greater insight into how the application can be tuned to maximize the savings offered by the cloud.
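As an illustration of harvesting production usage patterns, the sketch below turns access-log lines into per-endpoint traffic weights that could seed a more true-to-life load-test mix. The log format and endpoint names here are hypothetical, not taken from any particular tool:

```python
from collections import Counter

def usage_mix(log_lines):
    """Derive per-endpoint traffic weights from production access logs.

    Assumes a hypothetical log format where the request path is the
    second whitespace-separated field, e.g. "GET /checkout 200".
    """
    counts = Counter(line.split()[1] for line in log_lines if line.strip())
    total = sum(counts.values())
    # These weights can drive a load-test scenario so simulated traffic
    # matches what production actually sees, not development-time guesses.
    return {path: count / total for path, count in counts.items()}

logs = [
    "GET /browse 200",
    "GET /browse 200",
    "GET /browse 200",
    "POST /checkout 200",
]
print(usage_mix(logs))  # {'/browse': 0.75, '/checkout': 0.25}
```

Feeding weights like these into your performance scripts is one concrete way to close the gap between assumed and observed usage.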
2. Elasticity

The challenge: The idea of using the cloud to scale up quickly is nothing new to ops or apps teams, but the ability to scale down often gets less attention. Yet scaling down (turning cloud resources off when they are not needed) is what fuels the economic benefits that cloud adopters are seeking.
Elasticity and performance are two sides of the efficiency coin. It’s how you keep down your electric bill at home: Good performance ensures that the application doesn’t turn on any more light bulbs than necessary; proper elasticity ensures the application turns off the lights when it leaves the room.
How to address it: Isolate application functions into discrete components that scale individually based on the functionality requested. Breaking your applications into as many scalable components as possible—essentially, “dials” that can be manipulated independently of one another—lets you maximize your application’s elasticity.
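The "dials" idea can be sketched in a few lines. In this toy example each component scales on its own backlog, independently of the others; the component names, capacities, and ceiling-based policy are illustrative assumptions, not a prescription:

```python
def desired_replicas(pending_requests, per_replica_capacity, min_replicas=1):
    """Scale one component up or down based only on its own backlog."""
    needed = -(-pending_requests // per_replica_capacity)  # ceiling division
    return max(min_replicas, needed)

# Each component reports its own backlog; no component forces the
# others to scale with it, which is what makes them independent dials.
backlogs = {"image-resizer": 950, "checkout": 40, "report-builder": 0}
capacity = {"image-resizer": 100, "checkout": 50, "report-builder": 25}

for component, pending in backlogs.items():
    print(component, desired_replicas(pending, capacity[component]))
```

Here the image resizer scales out to ten replicas while the idle report builder scales down to its minimum, so you pay for heavy usage only where it actually occurs.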
3. Resilience

The challenge: Moving to the cloud means the application’s infrastructure is now either shared or external to the enterprise. In either case, the environment is more opaque than when each app had a dedicated, internally hosted environment. Resilience means building “self-healing” capabilities into the app, because when the inevitable failures occur, the ops ambulance may be slow in coming.
How to address it: First, ensure that your app can recover gracefully from failures across the different layers. That means employing safeguards such as process threads that resume on reboot, message queues that can reload the state of the system, and data written to a database instead of held in memory. Second, build components in a loosely coupled way, so that if one component dies or slows down, the others can continue to function as though no failure had occurred.
Finally, QA must test for these types of failure scenarios. You might have a test case in which a dependent service shuts down in the middle of a transaction. A resilient application should cope with such a failure with no manual intervention and with minimal impact to the users.
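A minimal sketch of one such self-healing safeguard, assuming a hypothetical dependency that raises ConnectionError while it is down: the caller retries a few times, then degrades to a fallback instead of failing the whole transaction:

```python
import time

def fetch_with_fallback(call_service, fallback, retries=3, delay=0.0):
    """Try a dependent service a few times; degrade gracefully if it stays down.

    `call_service` and `fallback` are placeholders for whatever the real
    dependency and degraded path look like in your application.
    """
    for _ in range(retries):
        try:
            return call_service()
        except ConnectionError:
            time.sleep(delay)  # back off briefly before retrying
    # The dependency never recovered: serve a degraded result instead of
    # failing the transaction, with no manual intervention required.
    return fallback()

# Simulated outage for testing: the service fails twice, then recovers.
calls = {"n": 0}
def flaky_service():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("service unavailable")
    return "live data"

print(fetch_with_fallback(flaky_service, lambda: "cached data"))  # live data
```

The same simulated-outage trick gives QA a repeatable way to script the mid-transaction shutdown scenario described above.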
4. Security

The challenge: The cloud’s shared environment affords attackers new opportunities to seek access to your applications through weaker adjacent apps running on the same infrastructure. More dauntingly, in a public cloud, operations and security teams have less visibility than they would with an in-house app. Plus, many modern applications found in the cloud rely on third-party components and web services, which you can’t simply assume are thoroughly tested and secure.
How to address it: It’s no longer enough to rely on the traditional approach of perimeter defense by using firewalls and intrusion detection. Cloud apps must be secure from the code level upward. Perform static application security testing (SAST) regularly throughout your development iterations to uncover security flaws as the code is written. Once teams reach late-stage delivery and QA activities, perform dynamic application security testing (DAST) to reveal vulnerabilities in your running application. Security vigilance should of course continue as a key part of your production monitoring strategy after the application is live.
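To make "secure from the code level upward" concrete, here is the classic flaw a SAST tool is built to flag, shown against an in-memory SQLite table (the table and data are illustrative): user input concatenated into SQL, contrasted with a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_unsafe(name):
    # The flaw a static analyzer should flag: untrusted input spliced
    # into the SQL string. A crafted name can rewrite the query.
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'").fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input as a literal
    # value, so the injection payload matches nothing.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # [('alice',)] -- injection succeeded
print(find_user_safe(payload))    # [] -- payload treated as plain text
```

Catching patterns like the first function during development iterations is far cheaper than discovering them in a shared production environment.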
For more on optimizing your apps for the cloud, watch the brief slidecast, “Delivering Cloud-Ready Applications,” then take HP’s free Cloud Assessment for insights into where you are in your cloud journey—and where to go next.