There is a lot of buzz around the term cloud computing, with announcements from Google and IBM, Amazon's Elastic Compute Cloud (EC2), and others. From the Wikipedia definition:
These "cloud applications" or "cloud apps" utilize massive data centers and powerful servers that host web applications and web services. They can be accessed by anyone with a suitable Internet connection and a standard web browser.
The discussion on cloud computing specifically, and utility computing in general, is really exciting and introduces many new ideas on how enterprise software can be made simple. There is one fundamental element, however, that has received very little attention in this discussion, and that is how we allow our existing and new applications to take full advantage of this type of virtualized environment. The answer is virtualization, i.e., we take the same concepts and models that we used in the server-centric world and virtualize them to work in this cloud environment. Let me explain:
In the current server-centric world we use middleware to provide common infrastructure services, such as application containers, data and messaging. To make that same model work in a cloud environment we need to virtualize all of those components. That is, we need to virtualize the container, the data and messaging. By doing so we abstract the application from the fact that it is running on a "cloud" and make the transition from a server-centric model to cloud computing relatively seamless. How do we achieve that?
Well, this is where Space-Based Architecture comes to the rescue: In today's middleware stack we think of middleware in terms of tiers: presentation, business logic, messaging and data. Each tier has its own API, backed by a specific server implementation. The tiers are designed as silos, meaning that each tier assumes nothing about and often shares nothing with the other tiers, even when it comes to scalability, high availability and so on. In Space-Based Architecture we break the silo approach and de-couple the APIs from the runtime platform that is used to serve them. The API serves as a client-side façade over a common runtime environment, and is mostly responsible for exposing specific semantics to the user. The underlying runtime system provides common services for scalability, high availability, message routing, deployment, etc. The Space is used to deliver that common runtime system and is, therefore, a critical piece of the puzzle.
The diagram above illustrates how we can virtualize data (tables) and messaging (queues) using the same underlying runtime implementation. We basically store both data and messaging tuples in the space "cloud" and we map these objects into either a queue or a table based on the API that is used to access them. To view them as a table we create a view based on their class name. At the same time, we can view them as queues by reading them in the order in which they were written.
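To make that dual mapping concrete, here is a minimal sketch (the class and method names are mine for illustration, not the GigaSpaces API): a single tuple store that a queue façade consumes in FIFO order and a table façade views by class name.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: one shared tuple store, two façades.
class TupleSpace {
    static class Tuple {
        final String className;   // used by the "table" view
        final Object payload;
        Tuple(String className, Object payload) {
            this.className = className;
            this.payload = payload;
        }
    }

    private final List<Tuple> tuples = new ArrayList<>();

    synchronized void write(Tuple t) {
        tuples.add(t);
    }

    // Queue façade: take tuples in the order in which they were written.
    synchronized Tuple take() {
        return tuples.isEmpty() ? null : tuples.remove(0);
    }

    // Table façade: a read-only view of all tuples of a given class name.
    synchronized List<Tuple> view(String className) {
        List<Tuple> result = new ArrayList<>();
        for (Tuple t : tuples) {
            if (t.className.equals(className)) result.add(t);
        }
        return result;
    }
}
```

The point is that neither façade owns the data; both are thin projections over the same underlying store.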
Beyond the implicit scalability and simplicity that we achieve from this virtualization, this model opens up a whole new set of possibilities for how we can think of middleware. Because there is no tight binding between the API and the implementation that serves it, we can create an interesting mesh of API interoperability: one part of the application can use a messaging API to send a message into a virtualized queue, while another can use a data-query API to search every item written to any queue and pick the one that best fits the query, regardless of the queue it was written to. I can continue down this path and easily list ten or more new use cases that become possible once the middleware is virtualized.
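That interoperability can be sketched the same way (again with hypothetical names): a messaging-style API writes into named queues over one store, while a query-style API selects the best-matching item across all of them.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch: messaging writes and data queries over one shared store.
class SharedSpace {
    static class Entry {
        final String queue;    // which virtualized queue it was sent to
        final int priority;
        final String body;
        Entry(String queue, int priority, String body) {
            this.queue = queue;
            this.priority = priority;
            this.body = body;
        }
    }

    private final List<Entry> entries = new ArrayList<>();

    // Messaging API: enqueue into a named queue.
    synchronized void send(String queue, int priority, String body) {
        entries.add(new Entry(queue, priority, body));
    }

    // Data-query API: take the best match across every queue, regardless of origin.
    synchronized Entry takeBest(Predicate<Entry> filter, Comparator<Entry> best) {
        Entry match = entries.stream().filter(filter).max(best).orElse(null);
        if (match != null) entries.remove(match);
        return match;
    }
}
```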
One thing that I recommend is to take a look at the work we're doing at GigaSpaces on application-services virtualization through the Remoting framework we developed in OpenSpaces. In a nutshell, this framework exposes multiple instances of POJO services running on the network through a single remoting interface, which maps each user invocation to the physical service instance that delivers it.
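The core mechanism can be illustrated with a JDK dynamic proxy (this is a simplified sketch under my own names, not the actual OpenSpaces Remoting API): a single service interface is exposed to the client, and each invocation is routed to one of several POJO service instances, round-robin.

```java
import java.lang.reflect.Proxy;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Example service interface seen by the client.
interface Echo {
    String echo(String s);
}

// Hypothetical sketch: one remoting interface, many physical POJO instances.
class RemotingProxy {
    @SuppressWarnings("unchecked")
    static <T> T create(Class<T> iface, List<? extends T> instances) {
        AtomicInteger next = new AtomicInteger();
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(),
                new Class<?>[]{iface},
                (proxy, method, args) -> {
                    // Map this logical invocation to the next physical instance.
                    T target = instances.get(next.getAndIncrement() % instances.size());
                    return method.invoke(target, args);
                });
    }
}
```

The client only ever sees the `Echo` interface; which instance actually served the call is an implementation detail of the runtime.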
Next, we need to achieve the same level of virtualization for the application container. I refer to this as an SLA-Driven Container. The SLA-driven container takes an application bundle and manages its deployment over a set of containers based on Service-Level Agreements. The SLAs define the clustering topology (e.g., partitioning, size of the application pool, scalability, fail-over policies, etc.). They are used to map the available physical compute resources to the application's needs, and also to provide self-healing capabilities to our application. For example, we can set an SLA to ensure that at any point in time we always have primary and backup instances for each node in our environment - and that each node's primary and backup run on separate physical machines. If a primary fails, the system dynamically promotes the backup to primary and launches a new backup on another machine.
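That failover SLA can be modeled in a few lines (a toy model with made-up types, not a real container): a partition must keep its primary and backup on different machines, and on primary failure the backup is promoted while a fresh backup is provisioned elsewhere.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch of the self-healing SLA described above.
class SlaDrivenContainer {
    static class Partition {
        String primaryHost;
        String backupHost;
    }

    private final List<String> machines;
    final Partition partition = new Partition();

    SlaDrivenContainer(List<String> availableMachines) {
        this.machines = new ArrayList<>(availableMachines);
        partition.primaryHost = machines.get(0);
        partition.backupHost = machines.get(1); // SLA: backup on a separate machine
    }

    // Self-healing step: promote the backup, then provision a new backup
    // on a machine other than the new primary's.
    void onPrimaryFailure() {
        machines.remove(partition.primaryHost);
        partition.primaryHost = partition.backupHost;
        for (String m : machines) {
            if (!m.equals(partition.primaryHost)) {
                partition.backupHost = m;
                return;
            }
        }
        partition.backupHost = null; // no spare machine left to satisfy the SLA
    }
}
```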
In summary, cloud computing and utility computing are going to drastically change the way we think about enterprise software and middleware. Middleware virtualization will play an important role in enabling new and existing applications in this new world. The next step is to simplify the way we design and deploy applications to run on the cloud. There is no reason, for example, why users should not be able to develop their applications on their local desktops and then simply ask the middleware to run them on the cloud. The middleware, in turn, will interact with the cloud, provision the right set of hardware resources, and run the application seamlessly, both locally and on an entire utility-style grid. At GigaSpaces, we have already achieved this with the integration of the GigaSpaces virtual middleware and Amazon EC2.