Most data centers today run applications on dedicated machines, an approach often referred to as static provisioning. In addition, applications are typically provisioned to handle expected peak loads. Both practices lead to over-provisioning and low resource utilization.
John Foley wrote an article back in August, Private Clouds Take Shape, in which he describes how data centers are reshaping themselves by taking ideas from public cloud providers such as Amazon and Google. The idea is to make the data center more cost-effective by enabling on-demand, utility-based computing rather than dedicated machines.
The shift towards a utility data center is a game changer. It will change the way IT operates, the way applications run in the data center, and the culture of IT organizations.
The push to private clouds has strong momentum these days, as all the major players, from hardware vendors to virtualization vendors, realize that their future rests on how well they fit into this model. Microsoft's Azure announcement is one of the most significant announcements from a major vendor so far.
The need for private/public clouds
At the same time, it is clear that if we want to make IT operations more effective, it doesn't make sense to run all the applications currently hosted in a company's data center in the private cloud. Not all applications in the data center are mission-critical or production systems. Take staging or testing environments, for example. Such environments are supposed to mirror the production environment. This is reasonable when our production system runs on a single dedicated server, but what if it runs on 10 or even 100 servers? Does it make sense to have another 10 or 100 dedicated servers just for that purpose? Another good example is disaster recovery. Disaster recovery sites require us to double our resources, not to mention the cost of maintaining two separate data centers. These are classic scenarios in which running applications on a public cloud could lead to huge cost savings.
A recent InformationWeek survey (which Foley mentions in his piece) provides a more detailed view of the types of applications likely to move from private clouds to public clouds.
Making your application ready for the private/public cloud
The challenges
There are a few challenges to be aware of if we want to ready our applications for a hybrid private/public cloud:
1. How do we design applications to be cloud-agnostic? How do we test an application on a public cloud and then run that exact same application in production on a private cloud? For the application to be cloud-agnostic, we need to ensure that neither our application code nor its configuration changes in the transition, and that the application behaves the same in both environments (see the configuration sketch after this list).
2. How do we enable seamless fail-over to a public cloud? To support a disaster recovery scenario, the public and private clouds need to be connected in a way that enables seamless fail-over from the private cloud to the public cloud.
3. Future-proofing: In many cases we can't make a clear decision as to where our application should run at the time we write it. We would like to be able to change that decision even after the application has been completely developed.
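To make the first challenge concrete, here is a minimal sketch of one common way to keep code cloud-agnostic, assuming all environment-specific values live in an external properties file supplied at deploy time. The file path, system property name, and keys below are illustrative, not part of any particular framework.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Hypothetical example: the same application binary runs on the private
// and the public cloud; only the external properties file differs per
// environment, so neither code nor packaged configuration changes.
public class CloudConfig {
    private final Properties props = new Properties();

    public CloudConfig() throws IOException {
        // The deploy script on each cloud passes -Dapp.config=<path>;
        // the default path here is made up for the example.
        String path = System.getProperty("app.config", "/etc/myapp/cloud.properties");
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
    }

    public String databaseUrl() {
        return props.getProperty("db.url");
    }
}
```

Because nothing cloud-specific is compiled in, the exact artifact tested on the public cloud is the one promoted to production on the private cloud.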
The solution
1. Enterprise-ready Platform-as-a-Service (PaaS)
Many recent discussions on cloud computing have centered on the low-level infrastructure, such as virtualization, sometimes referred to as Infrastructure-as-a-Service (e.g., Amazon EC2). It is clear that to address the first and third challenges above, we need a new middleware stack that provides generic services for running applications in a virtualized environment: a Platform-as-a-Service.
You can read more about this layer in GigaSpaces as Alternative to Google AppEngine for the Enterprise. The role of the enterprise-ready PaaS will be similar in nature to that of today's application server, broadened to support the needs of private cloud environments, as I outlined in that earlier post.
2. CohesiveFT's VPN-Cubed
While enterprise-ready PaaS shields application code from the underlying cloud infrastructure, CohesiveFT's VPN-Cubed is responsible for connecting one or more cloud networks through a secure channel in a way that makes them all appear as one big cloud, even if they are not owned by the same provider.
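As an illustration of what that means for application code, here is a minimal sketch, assuming a flat overlay network is already in place: plain socket code addresses a peer by its virtual overlay IP, regardless of which provider actually hosts it. The address and port are invented for the example; this is not VPN-Cubed's API.

```java
import java.io.IOException;
import java.net.Socket;

// Illustrative only: once the clouds are joined into one virtual
// network, ordinary networking code needs no cloud-specific logic.
public class OverlayClient {
    public static void main(String[] args) throws IOException {
        // 10.1.0.5 is a virtual address on the overlay; the peer could
        // be on EC2 today and on a private cloud tomorrow without any
        // change to this code.
        try (Socket socket = new Socket("10.1.0.5", 8080)) {
            System.out.println("Connected via overlay: " + socket.getRemoteSocketAddress());
        }
    }
}
```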
See for yourself how it works live!
My colleague Dekel Tankel blogged about the joint solution by GigaSpaces and CohesiveFT aimed at addressing these challenges. The solution will be presented in a webinar next week, Making Cloud Portability a Practical Reality.
We will show how you can write and deploy standard applications on top of GigaSpaces' Cloud Framework and use CohesiveFT's VPN-Cubed for a seamless transition across clouds. Even more interesting, this solution lets you use multicast discovery across clouds.
By "standard application" I mean that you can deploy a standard JEE web application packaged as a WAR and deploy it in the multi-cloud environment. It doesn't need to be a "GigaSpaces application". In the webinar next week we will show live how we can deploy an application across both Amazon EC2 and Flexiscale, kill one of the machines and see how the application fails-over seamlessly between the clouds with zero downtime.
The webinar will take place next week: November 18, 2008, 1:00 PM EST.
You can register online here.