OpenStack is an open source cloud infrastructure platform that many consider a cost-effective alternative to VMware.
In reality, the transition from VMware to OpenStack is far from trivial, which leads most enterprises to take a hybrid cloud approach: they run their OpenStack infrastructure alongside their existing VMware infrastructure, with the aim of gradually transitioning workloads into the new OpenStack environment.
Recently, VMware announced their own VMware Integrated OpenStack, which is a VMware-supported OpenStack distribution providing tighter integration between existing VMware environments and OpenStack.
In this post I will discuss three of the options for putting OpenStack and VMware together and weigh what I believe are the pros and cons of each approach.
1. Using OpenStack with VMware Hypervisor plug-in
OpenStack Nova comes with a pluggable architecture for integrating various hypervisors; it supports KVM, VMware, and Hyper-V, among others.
The first option for integrating VMware into OpenStack is through the ESXDriver for Nova.
In this option the Nova scheduler can spawn VMware ESX VMs through an ESX-enabled node (as opposed to KVM, which runs directly on the Nova compute node).
In this approach, we can potentially reuse our VMware image assets and import them easily into our OpenStack environment.
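As a rough sketch, enabling the ESX driver comes down to pointing Nova's compute driver at the vmwareapi module in nova.conf. The exact option names and sections have changed between OpenStack releases, so treat the following as illustrative rather than definitive:

```ini
[DEFAULT]
# Tell Nova to manage an ESX host instead of a local KVM hypervisor
compute_driver = vmwareapi.VMwareESXDriver

[vmware]
# Connection details for the ESX host (placeholder values - adjust for your setup)
host_ip = 192.0.2.10
host_username = esx-admin
host_password = secret
```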
However, this approach comes with significant limitations:
1. Limited use of VMware features: the ESXDriver cannot use many of the vSphere platform's advanced capabilities, namely vMotion, High Availability (HA), and Distributed Resource Scheduler (DRS).
2. Limited portability: there are some VMware features, such as vMotion, that many enterprises rely on today and that would need to be turned off in the move to OpenStack. As some of the images were built with the assumption that these unique VMware features exist, it wouldn't be possible to transition them into an OpenStack environment and expect them to work.
In summary, while this option makes sense, the cost and technical limitations make it less popular. In fact, a recent OpenStack survey report indicates a fairly small percentage of users who actually use this feature.
2. Using VMware vSphere with OpenStack
In this approach we utilize the compute resources through vSphere. This allows us to take advantage of all the VMware features that come with vSphere and overcomes the limitations mentioned above.
In this case, the entire vCenter ESX cluster appears as one big hypervisor. The actual allocation of the ESX hosts is done through VMware vCenter and is not exposed to the Nova controller, as outlined in the diagram below:
Similarly, we will plug-in the VMware Storage and Network services to allow for even more complete integration across the compute, network and storage stack as outlined in the diagram below:
You can learn more about how this integration works in the OpenStack reference guide.
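As with the ESX driver, the switch to cluster-level integration is primarily a nova.conf change: Nova is pointed at the vCenter driver and given vCenter connection details. Again, option names vary by release, so the snippet below is a sketch to be checked against your OpenStack version:

```ini
[DEFAULT]
# Use the vCenter driver: Nova sees the whole vSphere cluster as one large hypervisor
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
# vCenter connection details (placeholder values - adjust for your environment)
host_ip = 192.0.2.20
host_username = administrator@vsphere.local
host_password = secret
cluster_name = production-cluster
```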
The main advantage of this approach is that it enables VMware users to benefit from both worlds: on one hand they get OpenStack as an open API, and on the other they can continue to utilize their existing VMware infrastructure.
The main disadvantage of this approach is that it creates a very different OpenStack implementation, one whose behavior diverges significantly from the original open source version of OpenStack - specifically in the way the compute nodes are managed.
The Elephant in the Room:
One of the main motivations to transition to OpenStack in the first place is to cut costs.
In both options we still rely on the VMware stack, and therefore the actual cost savings remain unclear.
3. Using a Common Management and Orchestration Layer as an Abstraction over Both VMware and OpenStack
In the third option we will not use any of the OpenStack VMware plug-ins; instead, we will use an orchestration layer as a higher-level abstraction between OpenStack and our VMware environment.
The orchestration layer provides a common management and deployment infrastructure. In this approach we do not try to force the VMware infrastructure to fit the OpenStack API; instead, we map the generic calls to the appropriate OpenStack or VMware implementation. In this way, the application is kept unaware of whether it is running on OpenStack or VMware. And since the calls to each infrastructure component are centralized into one driver per environment, each mapping is maintained once for all applications. Additionally, there is a default implementation for the built-in types, so in most cases the user only needs to deal with the implementation details of each element type for specific customizations.
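The driver-per-environment pattern described above can be sketched in a few lines. This is a minimal illustration with hypothetical class and function names, not any specific orchestrator's API: the application calls one generic entry point, and a single driver per environment translates it to the native API.

```python
class OpenStackDriver:
    """Translates generic calls to the native API - written once for all apps."""
    def create_server(self, name, flavor):
        # A real driver would call the Nova API here.
        return {"cloud": "openstack", "name": name, "flavor": flavor}

class VMwareDriver:
    def create_server(self, name, flavor):
        # A real driver would call the vSphere/vCloud API here.
        return {"cloud": "vmware", "name": name, "flavor": flavor}

# One driver per environment; mappings are centralized and maintained once.
DRIVERS = {"openstack": OpenStackDriver(), "vmware": VMwareDriver()}

def create_server(environment, name, flavor):
    """Generic entry point: the application never touches the native APIs."""
    return DRIVERS[environment].create_server(name, flavor)
```

Because the dispatch happens below the application, retargeting a workload means changing the `environment` key, not rewriting the application.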
The advantages of this approach:
- Full feature access: we can utilize all of the features and capabilities of each infrastructure, with no limits.
- Reduced dependency risk: with this abstraction we're less tied to any specific infrastructure, and we keep our options open to move to, or add, new environments as needed.
- Support for both vSphere and vCloud: since we are not limited to a specific API, we can integrate with both the vSphere and vCloud APIs.
The disadvantages:
- We are shifting the dependency to the management and orchestration layer.
- We may lose some management and orchestration capabilities that are specific to each environment.
- Additional customization effort: since we're not relying on a common API, we may need to customize the built-in types per environment to fit our specific needs. Having said that, this customization is done only once for all of our applications, and over time the default types are expected to cover most use cases, minimizing the need for customization.
Given that the cloud infrastructure world keeps on changing and evolving very rapidly (not just between OpenStack and VMware), any tight integration approach will have a higher chance of breaking compatibility or being limited to the least common denominator at some point. We also need to be aware that even though in the first two options we remain compatible with the OpenStack API, we still end up with different OpenStack implementations from a behavior perspective.
On top of that, we need to be prepared for new disruptions. This is actually taking place right now, for example, with Docker orchestration that continues to disrupt and challenge the way we handle our compute and network infrastructure.
With this in mind it would be too risky not to keep our options open.
Having an abstraction layer at a higher level of the stack gives us the benefit of being less vulnerable to changes at the lower level infrastructure. It also provides us the flexibility to adopt new infrastructure changes in the future.
Having said all this, the question “Are we minimizing or just shifting the risk?” still lingers.
This is where TOSCA comes in handy. TOSCA provides a standard way to describe our application blueprint, which significantly reduces our dependency on any specific implementation of the orchestration and management layer. This was one of the main reasons we chose TOSCA when we designed the third generation of our orchestration with Cloudify.
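To make this concrete, a TOSCA-style blueprint describes the application in terms of portable node types. The fragment below is an illustrative sketch in a Cloudify-DSL flavor; the node type names and property values are examples, not exact plugin identifiers:

```yaml
node_templates:
  app_server:
    # Retargeting the blueprint from OpenStack to vSphere means changing
    # the node type (backed by a different plugin), not the application logic:
    #   cloudify.openstack.nodes.Server -> OpenStack
    #   cloudify.vsphere.nodes.Server   -> vSphere
    type: cloudify.openstack.nodes.Server
    properties:
      image: ubuntu-14.04
      flavor: m1.medium
```

Because the blueprint itself is standardized, the same application description can be carried across orchestrators that support TOSCA, which is exactly the dependency reduction discussed above.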
Further reading: OpenStack Compute for vSphere Admins (highly recommended, and the source for many of the diagrams in this article).