Are all hypervisors created equal?
There is a lot of content available nowadays, especially since the Broadcom acquisition of VMware: much of it covers how to migrate off VMware, along with feature-by-feature comparisons.
One easily digestible piece comes from 2TekGuys. Below is a breakdown, from the video, of the features mentioned as available on other hypervisors in comparison with VMware vSphere. I am not going to go into features beyond those mentioned in the video. Here is the list of features mentioned:
Load Balancing: Moving virtual machines (VMs) between hosts via live migration due to resource contention.
Backup: Support for backup from popular backup vendors or from the hypervisor vendor themselves.
Storage: The ability to utilize external network storage/SAN, or only the hypervisor's own hyper-converged storage.
Live Migration: The ability to move VMs between hosts without any downtime.
Having specialized in VMware vSphere for a long time in my career, and having been in a technical role since I first picked up VMware, I am always amazed by the innovation put into VMware's solutions, especially vSphere. The innovations and improvements introduced were never just feature checkboxes; deep consideration went into them.
I felt there should be a fair comparison: knowing what you are using a feature for, and why you chose the VMware vSphere hypervisor instead of others. So I am going to share my two cents on the features above and go a little deeper into why VMware vSphere is one of the most trusted and innovative hypervisors today compared with other hypervisors. However, if you just need a feature to fill a checklist and are not bothered about why a particular feature excels in VMware vSphere, other hypervisors would fit the bill easily, as listed in the video.
Load Balancing
This was introduced in VMware Virtual Infrastructure (VI) 3.5 (before the name changed to vSphere). Duncan Epping wrote some articles on the vSphere 4.0 Distributed Resource Scheduler (DRS) here. Reading them, you will appreciate DRS more and be confident enough to set it to fully automated mode. I still remember that back when it was first introduced, most people were only confident enough to set it to partially automated mode, worrying that a game of musical chairs might happen. They did not understand that when a recommendation alerts you to place a VM elsewhere, it needs to be acted on at that point in time, not hours later when the contention is already over.
vSphere is a very mature product. DRS started with CPU and memory utilization as the criteria for moving VMs between hosts. In vSphere 6.5, network utilization was also taken into consideration. This really makes sense: imagine a bunch of web VMs with low CPU and memory load placed on one host, but all of them network intensive. What would the performance of these VMs and the host become? Check out this whitepaper to find out more. Also, DRS isn't just triggered when there is load contention; it comes with many mechanisms to ensure a move is worthwhile, weighing the risk-to-benefit ratio.
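To illustrate the risk-versus-benefit idea, here is a toy model (my own sketch, not VMware's actual DRS algorithm): a load balancer only recommends a move when the reduction in cluster imbalance outweighs an assumed migration cost.

```python
# Toy sketch of a DRS-style cost/benefit check (illustrative only, not
# VMware's actual algorithm): recommend a migration only when the expected
# imbalance reduction outweighs the cost of moving the VM.

def imbalance(host_loads):
    """Standard deviation of host CPU loads, used as a cluster imbalance metric."""
    mean = sum(host_loads) / len(host_loads)
    return (sum((l - mean) ** 2 for l in host_loads) / len(host_loads)) ** 0.5

def should_migrate(host_loads, src, dst, vm_load, migration_cost=2.0):
    """Simulate moving vm_load from host src to host dst and compare the
    imbalance improvement (benefit) against a fixed migration cost."""
    after = list(host_loads)
    after[src] -= vm_load
    after[dst] += vm_load
    benefit = imbalance(host_loads) - imbalance(after)
    return benefit > migration_cost

# Hosts at 90%, 20%, 25% CPU: moving a 30% VM off the hot host is worth it...
print(should_migrate([90, 20, 25], src=0, dst=1, vm_load=30))   # True
# ...but moving a tiny 2% VM is not worth the migration overhead.
print(should_migrate([90, 20, 25], src=0, dst=1, vm_load=2))    # False
```

The point of the toy model: a mature scheduler does not chase every imbalance; it asks whether the move pays for itself.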
For other hypervisors, the question is: how mature are they?
- Will it cause a musical-chairs effect? How accurate is its load-balancing mechanism, and can you actually benefit from it?
- Can you set a partially automated mode to observe the recommendations before committing to fully automated load balancing?
Load balancing needs to work hand in hand with live migration, so the rest of the questions fall under Live Migration.
Backup
If you have experience with VMware vSphere from the past, you will remember that in VMware Infrastructure 3.x it used to require a backup proxy VM, known as VMware Consolidated Backup (VCB), to be deployed to take on the load of the backup process.
In vSphere 4, this was no longer needed, and the vSphere Storage APIs for Data Protection (VADP) were introduced. These allow any backup vendor who wants to support vSphere to leverage the APIs and implement them in their solution. This way, the backup load is transferred to the backup server, or to the storage system if a storage function is called. This frees up ESXi, giving it the ability to achieve higher consolidation and do what it is supposed to do: host VMs. Here is one article that explains VADP.
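To picture why offloaded, incremental backups are cheap for the host, here is a toy model of changed-block tracking, the idea behind VADP incrementals (my own sketch, not the real API): only blocks whose content changed since the last backup need to be read and transferred.

```python
# Toy model of changed-block tracking (the concept behind VADP incremental
# backups, not the real API): hash fixed-size blocks of a disk image and
# transfer only the blocks that differ from the previous backup.

import hashlib

def block_hashes(disk, block_size=4):
    """Split a disk image into fixed-size blocks and hash each block."""
    return [hashlib.sha256(disk[i:i + block_size]).hexdigest()
            for i in range(0, len(disk), block_size)]

def changed_blocks(prev_hashes, disk, block_size=4):
    """Return the indices of blocks that differ from the previous backup."""
    current = block_hashes(disk, block_size)
    return [i for i, (a, b) in enumerate(zip(prev_hashes, current)) if a != b]

disk_v1 = b"AAAABBBBCCCCDDDD"
disk_v2 = b"AAAAXXXXCCCCDDDD"          # only the second 4-byte block changed
baseline = block_hashes(disk_v1)
print(changed_blocks(baseline, disk_v2))   # [1]
```

Instead of streaming the whole disk through the hypervisor, only block 1 moves, and in the real VADP case it moves through the backup server or storage system, not the host.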
VMware also extended VADP to support vSAN and other storage activities such as snapshots. Read more about it here.
With other hypervisors, would you need to set aside additional resources such as CPU and memory to handle backup activities? Here are some other questions to keep in mind:
- How many resources per host do you need to set aside for backup?
- Are those resources tied to the number of VMs that can be backed up?
- How many VMs per host can it back up at one time?
- How many backup solutions are supported for other hypervisors? Does the solution come only from the hypervisor vendor themselves, or a single backup vendor? Will there be a choice when a feature is missing, an issue arises, or cost becomes a concern?
Storage
vSphere has the most comprehensive set of supported storage options on the market: IP storage, SAN, and even hyper-converged solutions such as VMware vSAN or Nutanix HCI. But did you know VMware vSphere also comes with a collection of vSphere Storage APIs? More details here. These include two APIs: vSphere Storage APIs - Array Integration (VAAI) and vSphere Storage APIs - Storage Awareness (VASA). With the vSphere Storage APIs, storage activities such as moving VMs within the storage, or performing snapshots and clones, are all offloaded to the storage. vSphere is able to recognize the capabilities of the storage and leverage them via Storage Policy Based Management (SPBM), and it can also surface some of the storage information in the vSphere Client. For vSAN, the vSANsparse snapshot was introduced; refer to vSAN VADP.
This way, you do not have to set aside large amounts of resources such as CPU and memory to perform such activities, you avoid the performance impact when those resources are insufficient, and you avoid reserving resources that go unused.
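To picture how SPBM works, here is a minimal sketch (the capability names are made up for illustration, and this is not the real SPBM engine): a VM storage policy lists requirements, and only datastores whose advertised capabilities satisfy all of them are compatible.

```python
# Minimal sketch of SPBM-style policy matching (illustrative only; capability
# names such as "replication" and "tier" are hypothetical): a policy states
# requirements, and a datastore is compatible when its advertised
# capabilities satisfy every one of them.

def compatible_datastores(policy, datastores):
    """Return the names of datastores whose capabilities satisfy every
    requirement in the policy."""
    return [name for name, caps in datastores.items()
            if all(caps.get(k) == v for k, v in policy.items())]

datastores = {
    "gold-san":   {"replication": True,  "dedup": True, "tier": "ssd"},
    "silver-nas": {"replication": False, "dedup": True, "tier": "hybrid"},
}
policy = {"replication": True, "tier": "ssd"}
print(compatible_datastores(policy, datastores))   # ['gold-san']
```

The value is that placement decisions are driven by what the storage itself advertises (via VASA, in the real product) rather than by an administrator's tribal knowledge.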
- When using other hypervisors, how many resources per host do you need to set aside for such activities?
- Can they offload storage activities to the storage system?
- Do they support external storage? If yes, which storage systems are supported?
- If not, are you willing to forgo all your external storage and never use it again, giving up external storage capabilities such as near-line replication?
A fun fact: did you know SanDisk was one of the design partners for the vSphere APIs for I/O Filtering (VAIO) in vSphere 6? Check out how it works here. Here are some articles, 1, 2, showing what it can be used for. It is also open to technology partners to build on. Veeam was one of the early partners to leverage VAIO, for near-zero replication in their solution. Guess what: none of VMware's own solutions utilized it, not even vSphere Replication. I am not sure why; to this date I have no idea.
Live Migration
I simply love vSphere vMotion. It was first introduced in VI 3.5 and has matured over the years. It started as an idea in an engineer's mind and was invented at VMware; you can check out the details of vMotion here. To describe it simply: vMotion may sound like just moving a VM between hosts, but in depth, many considerations went into making it what it is today, from putting a VM into a micro-pause in memory to do the cutover, to leveraging the amount of bandwidth you supply so the move goes faster. If you have ever bought the VMware vSphere Technical Deep Dive books by two famous architects, you will appreciate the technology that went into it. Here are links to their blogs: Duncan Epping, Frank Denneman. Did you know that Storage vMotion allows you to move your VMs across different datastores, either within the same storage system or across different ones?
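To get a feel for why bandwidth matters so much, here is a toy model of iterative pre-copy migration, the general technique behind live migration (my own simplification, not vMotion's actual implementation): memory is copied in rounds while the running VM keeps dirtying pages, and only the final small remainder is copied during the brief stun.

```python
# Toy model of iterative pre-copy live migration (a simplification, not
# vMotion's actual implementation): each round re-copies the pages dirtied
# during the previous round; once the dirty set is small enough, the VM is
# briefly stunned and the remainder is copied for the cutover.

def precopy_rounds(total_pages, dirty_pages_per_sec, bandwidth_pages_per_sec,
                   stun_threshold=256):
    """Return (number of pre-copy rounds, pages copied during the stun)."""
    to_copy = total_pages
    rounds = 0
    while to_copy > stun_threshold and bandwidth_pages_per_sec > dirty_pages_per_sec:
        round_time = to_copy / bandwidth_pages_per_sec      # seconds this round takes
        to_copy = int(dirty_pages_per_sec * round_time)     # pages dirtied meanwhile
        rounds += 1
    return rounds, to_copy

# Same VM, same dirty rate: quadrupling bandwidth converges in fewer
# rounds and leaves a smaller dirty set for the final stun.
print(precopy_rounds(10_000, 500, 2_000))   # (3, 156)
print(precopy_rounds(10_000, 500, 8_000))   # (2, 39)
```

It also shows why a migration can fail to converge: if the VM dirties pages faster than the network can copy them, the loop never shrinks, which is exactly the "monster VM" scenario raised below.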
Find out what's new for vMotion in vSphere 8.
If you are also leveraging GPUs, the question becomes: can you live migrate a VM with a GPU, and does it support vendor technology such as NVIDIA NVSwitch? vSphere was the first to support NVSwitch, through its partnership with NVIDIA. Here are two more articles, 1, 2, on GPU vMotion support.
You can use other hypervisors' live migration features, but will they be successful with monster VMs?
- How long does it take to move a VM? Can it move VMs with GPUs?
- How many VMs can I move at a time if I were to do a maintenance for a host or a cluster?
- Can a VM be moved and have its storage location changed at the same time?
- When will it fail to move?
- When it fails, what happens? Is there any data loss?
- Do I need to back up first before a move?
- Can it move faster by providing more bandwidth or more NICs?
- Can a VM be relocated to other storage, or within the same storage location?
These are some very simple questions many do not consider. A move is not simply a move; many things can be involved.
High Availability (HA)
I added this capability because, while going through the VMware Certified Advanced Professional certification with someone, I realized something many have forgotten. In VMware vSphere HA, the management network has an option to set up multiple isolation addresses, so that a single gateway failure is not mistaken for host isolation. Without safeguards like this, in environments where the management network has intermittent problems, vSphere HA can be triggered unnecessarily on false positives. Datastore heartbeats were introduced for exactly that reason: when the host and its VMs are still alive and working, there is no need to trigger vSphere HA.
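As a simplified model of the logic described above (my own sketch, not VMware's actual implementation), the decision combines three signals: network heartbeats, reachability of the isolation addresses, and the datastore heartbeat.

```python
# Simplified sketch of vSphere HA style isolation handling (illustrative
# only): when network heartbeats stop, the isolation addresses and the
# datastore heartbeat together distinguish a merely network-isolated host
# (VMs still running, no failover needed) from a truly failed one.

def host_state(network_heartbeat, isolation_addresses_reachable, datastore_heartbeat):
    """Classify a host given three liveness signals."""
    if network_heartbeat:
        return "alive"
    if any(isolation_addresses_reachable):
        return "alive"          # management network only partially down
    if datastore_heartbeat:
        return "isolated"       # host and VMs are alive; do not fail over
    return "failed"             # restart the VMs on other hosts

print(host_state(False, [False, False], datastore_heartbeat=True))    # isolated
print(host_state(False, [False, False], datastore_heartbeat=False))   # failed
```

The point is that without the datastore heartbeat, the first case would be misclassified as a failure and healthy VMs would be restarted unnecessarily.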
Fun fact: did you know vSphere HA works without vCenter Server? vCenter Server is only used to configure and enable vSphere HA. Check out how vSphere HA works.
This is one important trait of a mature product. Other hypervisors may have it, may not, or may only have it on the roadmap. In such a case, would a checkbox be sufficient?
Hardware Requirements
Lastly, on hardware requirements: ESXi is not resource hungry, and its footprint is tiny. You can check out the requirements here. It needs at least 2 CPU cores, 8GB of memory, and 32GB of storage space. Its installation image is about 600MB and the kernel is about 260MB, making it one of the smallest hypervisors you can find on the market. A small footprint also means a small attack surface.
Fun fact: did you know ESXi was not always the only hypervisor from VMware? There was ESX, the same hypervisor but paired with a Linux 2.6 based service console, which required at least 80GB of storage to install. It has since reached end of availability and been replaced by ESXi, because the Linux-based service console gave ESX a bigger security footprint, exposed to Linux security vulnerabilities. ESXi was purpose-built to do one thing: virtualization. Do any other hypervisors resemble the old ESX?
Conclusion
In summary, a feature checkbox is easy to tick, but the reasoning behind it is often forgotten, and a swift decision made without proper assessment can mean suffering for a long time.
If you are seriously looking to migrate off VMware, make no mistake: talk to a VMware engineer. Why, you might ask? Won't they just try to prevent you from migrating away?
You are right! But because they have your interests at heart, they will be able to show you in great detail what you would lose. Since they know your environment better than other hypervisor vendors do, their input will be the most comprehensive. At the end of the day, with an honest assessment in hand, if you still choose to move on, at least you know you have made an informed choice and not a bad one!
Lastly, remember there are tons of tools on the market to migrate off VMware any day. How many are there for other platforms, or even for the cloud?