
Showing posts with the label vmotion

Are all Hypervisors made equal?

There is a lot of content available nowadays, especially with the Broadcom acquisition of VMware: many pieces cover how to migrate off VMware and compare features and functions. One easily digestible piece is from 2TekGuys . Below is a breakdown from the video of the features mentioned as available on other hypervisors in comparison with VMware vSphere. I am not going to go into features beyond those mentioned in the video. Here is the list of features mentioned: Load Balancing : moving virtual machines (VMs) between hosts using live migration to relieve contention. Backup : support for backup from popular backup vendors or from the hypervisor vendor themselves. Storage : ability to use external network storage/SAN, or only the hypervisor's own hyper-converged storage. Live Migration : ability to move VMs between hosts without any downtime. Having specialized in VMware vSphere for a long time in my career and having been in a technical role since picking up VMware, I am always amazed by...

What's New in vSphere 7.0 Overview

Not going in-depth into the new features, but here is an overview so that everyone gets a quick glimpse; here is the link to VMware Blogs . I will update this article (if needed) with links when they become available. vSphere 7.0 Overview . vCenter Server Simplified SSO Topology: customers upgrading a vCenter Server with an external PSC will get the consolidated topology through this upgrade. Embedded PSC will be the only topology moving forward, and the external PSC topology is deprecated. vCenter Server Profiles ( link ): works much like Host Profiles. You can now compare and export the settings in JSON format as a backup, or apply them to a new vCenter via the REST API. vCenter Multi-Homing ( link ): up to 4 vNICs, where vNIC 1 is reserved for vCHA. Maximum limits increased; refer to configmax.vmware.com . Content Library: there is a new view which you can enable, and to help in managing templates there is a Check In/Out function to control versioning and revert to a previous version. C...
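Since vCenter Server Profiles are driven by a REST API, here is a minimal sketch of listing and exporting the configuration as JSON. It assumes the /api/session and /api/appliance/infraprofile/configs endpoints as I understand them from vCenter 7.0; the hostname, credentials and the "empty spec exports everything" behaviour are assumptions, and certificate checks are skipped for brevity only.

```python
# Minimal sketch: export a vCenter Server Profile as JSON over the REST API.
# Hostname and credentials are placeholders; endpoint behaviour is as assumed above.
import requests

VCENTER = "vcenter.example.local"                  # hypothetical FQDN
AUTH = ("administrator@vsphere.local", "changeme") # placeholder credentials
BASE = f"https://{VCENTER}"

# Obtain an API session token.
token = requests.post(f"{BASE}/api/session", auth=AUTH, verify=False).json()
headers = {"vmware-api-session-id": token}

# List which infrastructure profiles are available for export.
profiles = requests.get(f"{BASE}/api/appliance/infraprofile/configs",
                        headers=headers, verify=False).json()
print("Available profiles:", profiles)

# Export the configuration as a JSON document that can be kept as a backup
# or later imported into another vCenter.
export = requests.post(f"{BASE}/api/appliance/infraprofile/configs",
                       params={"action": "export"},
                       json={},   # assumption: an empty spec asks for all profiles
                       headers=headers, verify=False)
with open("vcenter-profile.json", "w") as fh:
    fh.write(export.text)
```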

vMotion Between CPUs

With the release of vSphere 6.7 and the ability to set EVC at a per-VM level instead of a per-cluster level, some questions have come up. Before we start, here is an article on how to check what level of EVC to use here . One of the questions often asked is: does vMotion work across newer CPUs of the same generation without an EVC cluster? If you follow this KB , in the last paragraph: Once the virtual machine is power cycled: They are only able to move to other ESX/ESXi hosts that are at the same CPU generation or newer. What this statement means is that if you have a new server with a new CPU generation, technically you can perform a vMotion without having the VM in an EVC cluster. However, there are cases where vMotion will fail even when the CPU is of the same generation, due to an older VM hardware version which has a more stringent check. As stated here , this is due to the destination host having a newer CPU with ISA extensions not found on the source host. In the above case, vMotion will stil...
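To see where a VM stands before attempting such a vMotion, one option is to compare the VM's minimum required EVC mode with its cluster's current EVC mode. Below is a minimal pyVmomi sketch under that idea; the vCenter name and credentials are placeholders.

```python
# Minimal pyVmomi sketch: compare each VM's minimum required EVC mode
# (the CPU feature baseline it picked up at its last power-on) with the
# cluster's current EVC mode. Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab use only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
for vm in view.view:
    cluster = vm.runtime.host.parent if vm.runtime.host else None
    cluster_evc = getattr(cluster.summary, "currentEVCModeKey", None) if cluster else None
    # minRequiredEVCModeKey must be satisfied by the destination host or cluster.
    print(vm.name, "needs:", vm.runtime.minRequiredEVCModeKey,
          "cluster EVC:", cluster_evc)
view.Destroy()
Disconnect(si)
```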

What's So New in vSphere 6?

With the announcement and also from the datasheet , it seems quite a lot of functionality has been added. However, there are some critical items that are more appealing, where those who have been using vSphere since version 4 and earlier want to see improvement or resolution, and which are not made known to many. Storage There has been much discussion over storage UNMAP via thin provisioning, and many called it a "myth". This was also discussed heavily in our Facebook VMUG - ASEAN group. This was due to the many changes from VMFS3 through to VMFS5. Cody wrote a long history of the changes for those who have missed out here . A KB was also released and this created some discussion: VMFS3 with a different block size would benefit thin provisioning, so to speak, before vSphere 5.0 Update 1. Sadly, after that, UNMAP was no longer possible via the GUI or automatically, only via the command line or a script. I tried to ask internally as well, and luckily Cormac with his f...
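For the command-line route mentioned above, a minimal sketch of driving the manual reclaim from a script is shown below. It assumes ESXi 5.5 or later (where `esxcli storage vmfs unmap` replaced the older vmkfstools approach), SSH access enabled on the host, and placeholder host, credentials and datastore label.

```python
# Minimal sketch: run the manual VMFS UNMAP over SSH with paramiko.
# Host, credentials and datastore label are placeholders; assumes ESXi 5.5+.
import paramiko

HOST, USER, PASSWORD = "esx01.example.local", "root", "changeme"
DATASTORE = "datastore1"   # hypothetical VMFS volume label

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)

# Reclaim dead space on the VMFS volume in chunks of 200 blocks.
cmd = f"esxcli storage vmfs unmap --volume-label={DATASTORE} --reclaim-unit=200"
stdin, stdout, stderr = client.exec_command(cmd)
print(stdout.read().decode(), stderr.read().decode())
client.close()
```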

vSphere 5.1: vMotion with no Shared Storage

In vSphere 5.1, vMotion without shared storage was introduced. Frank Denneman has mentioned here that there is no official name for this feature, though many have given it names like Enhanced vMotion, etc. Some have tried to perform this but realized that, even after upgrading to the vSphere Client 5.1, the option still shows greyed out with a message to power off the VM. This is because, in vSphere 5.1, all new feature enhancements are only found in the Web Client; as such, the C# client does not have this option. So using the Web Client, I was able to perform this vMotion in my home lab, where I do not have any shared storage other than the local disk of each ESX server, or across two different clusters which each have shared storage within their respective cluster. Do note that you can only perform 2 concurrent vMotions without shared storage at one time; any additional will be queued. Also, the total of such vMotions counts towards the total of concurrent vMotions (max of 8) and Storage ...
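Outside the Web Client, the same shared-nothing migration can be driven through the API by issuing a relocate that specifies both a destination host and a destination datastore. Below is a minimal pyVmomi sketch; the VM, host, datastore and connection names are placeholders.

```python
# Minimal pyVmomi sketch: shared-nothing vMotion by relocating a powered-on VM
# to another host and that host's local datastore in one task. Names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab use only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

def find(vimtype, name):
    """Return the first managed object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

vm = find(vim.VirtualMachine, "test-vm")
dest_host = find(vim.HostSystem, "esx02.example.local")
dest_ds = find(vim.Datastore, "esx02-local")

# Supplying both host and datastore moves compute and storage together,
# which is what the vSphere 5.1 "no shared storage" vMotion does.
spec = vim.vm.RelocateSpec(host=dest_host, datastore=dest_ds,
                           pool=dest_host.parent.resourcePool)
WaitForTask(vm.RelocateVM_Task(spec))
Disconnect(si)
```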

vSphere 5: vMotion with Multiple nics

The below is a good comparison of Hyper-V Live Migration versus vMotion. With multiple NICs supported for vMotion in vSphere 5, it is no longer a constraint. Performance of vMotion comparison with Hyper-V Live Migration: Virtual Reality With the new vSphere 5, multiple NICs can be used by the VMkernel for vMotion: up to 16 NICs for 1GbE links and up to 4 for 10GbE links. Sadly I am unable to do a demo of this in my home lab, since where on earth can I get a 10GbE link? But anyway, I would like to point out certain considerations when planning for vMotion with such links. For 1GbE links, you can bundle up multiple port groups for vMotion. I was totally confused by this, and after watching this video I got a clearer picture; however, my next question arose. How many port groups for vMotion can I create? Well, the answer is simple: up to 16. I was still not really clear, so does it tally with the NICs used? Ok, here is the simple explanation if you w...
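To verify which VMkernel interfaces a host has actually selected for vMotion (and which are merely candidates), the host's virtual NIC manager can be queried. A minimal pyVmomi sketch with placeholder connection details:

```python
# Minimal pyVmomi sketch: for each host, list the VMkernel NICs selected for
# vMotion versus the candidate vmknics. Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab use only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    cfg = host.configManager.virtualNicManager.QueryNetConfig("vmotion")
    selected = list(cfg.selectedVnic or [])
    candidates = [v.device for v in (cfg.candidateVnic or [])]
    print(host.name, "vMotion vmknics:", selected, "candidates:", candidates)
view.Destroy()
Disconnect(si)
```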

vSphere 5: Cluster with mixed ESX/ESXi version

Many have asked me whether a cluster can be mixed with different versions of ESX/ESXi servers. The answer is yes, even down to ESX/ESXi 3.5; however, for version 3.5, you would need to cater for the legacy license server. Please refer to the documentation here . Here is a demo of a setup I did with ESX 4.1 and ESXi 5.0 in the same cluster, with HA and DRS enabled, managed using vCenter 5.0. Please note that upgrading VMware Tools to the latest version will still be supported on lower versions of ESX/ESXi servers, as it only updates the OS drivers. However, upgrading the virtual hardware will only allow the VM to run on the latest ESXi servers. As such, take that into consideration when migrating and upgrading your vSphere environment, and perform the virtual hardware upgrade last if possible, unless you have enough resources to cater for your HA. Update 19th Apr 2013: Refreshed the video as the first half had missing audio.
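When planning that kind of staged migration, it helps to first inventory host build versions against VM hardware versions and Tools status. Below is a minimal pyVmomi sketch of such an inventory; the connection details are placeholders.

```python
# Minimal pyVmomi sketch: list each host's ESX/ESXi build alongside the virtual
# hardware version and Tools status of the VMs on it, as input for deciding when
# to do the virtual hardware upgrade. Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab use only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    print(host.name, "-", host.config.product.fullName)
    for vm in host.vm:
        # config.version is the virtual hardware version (e.g. vmx-07, vmx-08);
        # toolsVersionStatus shows whether Tools needs an upgrade.
        print("  ", vm.name, vm.config.version, vm.guest.toolsVersionStatus)
view.Destroy()
Disconnect(si)
```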

vSphere 5: vMotion enhancement

I was reading through this article vMotion Architecture, Performance, and Best Practices in VMware vSphere 5 . I was not aware (perhaps it was only me) that ESXi 5 introduces virtual NUMA (vNUMA). What this means is that, in terms of performance, ESXi knows the most efficient way to access memory. This was not possible in ESXi 4.x. On reading further, this brought something to my attention, especially for environments that enable EVC with a mix of different hardware. ESXi 5 introduces virtual NUMA (vNUMA), which exposes the ESXi host’s NUMA topology to the guest operating systems. When using this feature, apply vMotion to move virtual machines between clusters that are composed of hosts with matching NUMA architectures. This is because the very first time a vNUMA-enabled virtual machine is powered on, its vNUMA topology is set, based in part on the NUMA topology of the underlying physical host on which it is running. After a virtual machine’s vNUMA topology is...
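One way to see what a VM has picked up is to dump its cores-per-socket layout and any numa.* overrides in its advanced settings. Below is a minimal pyVmomi sketch along those lines; the connection details are placeholders.

```python
# Minimal pyVmomi sketch: show each VM's vCPU/cores-per-socket layout and any
# numa.* advanced settings that override its vNUMA topology. Connection details
# are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab use only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
for vm in view.view:
    hw = vm.config.hardware
    numa_opts = {o.key: o.value for o in vm.config.extraConfig
                 if o.key.startswith("numa.")}
    print(vm.name, f"{hw.numCPU} vCPU / {hw.numCoresPerSocket} cores per socket",
          numa_opts)
view.Destroy()
Disconnect(si)
```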