
What's So New in vSphere 6?

With the announcement and also from the datasheet, it seems quite a lot of functionality has been added. However, there are some critical items that are more appealing, and those who have been using vSphere since version 4 and earlier have been waiting to see improvement or resolution on them, even though these are not widely known.

Storage

There was much discussion over storage UNMAP via thin provisioning, and many called it a "myth". This was also discussed heavily in our Facebook VMUG - ASEAN group. This was due to the many changes from VMFS3 through to VMFS5. Cody wrote a long history of those changes for anyone who has missed out here. A KB was also released, and this created some discussion: VMFS3 with a different block size would benefit thin provisioning, so to speak, before vSphere 5.0 Update 1. Sadly, after that, UNMAP was no longer possible via the GUI or automatically, only via the command line or a script. I tried to ask internally as well, and luckily Cormac with his f...
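Since automatic reclamation is gone, UNMAP has to be kicked off by hand per datastore. As a minimal sketch of what such a script could look like (the datastore names and the loop are my own illustration; the esxcli syntax shown is the ESXi 5.5-and-later form, while earlier 5.x releases used vmkfstools -y instead):

import subprocess

# Hypothetical list of VMFS datastores to reclaim; substitute your own.
DATASTORES = ["datastore1", "datastore2"]

for ds in DATASTORES:
    # On ESXi 5.5 and later, dead space is reclaimed per datastore with
    # "esxcli storage vmfs unmap"; -n sets how many blocks are unmapped
    # per iteration (200 is the default).
    subprocess.run(
        ["esxcli", "storage", "vmfs", "unmap", "-l", ds, "-n", "200"],
        check=True,
    )

Running this from cron during off-peak hours is one way to approximate the automatic behaviour we lost, since the reclaim pass generates extra I/O against the array.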

vNUMA Improvement in vSphere 6

NUMA is always a very interesting topic in virtualization design and operations. We need to understand it so we can size a VM effectively and efficiently, letting the application perform at its optimum.

To understand what NUMA is and how it works, a very good article to read is here. Mathias has explained it in very simple terms with good pictures, so I do not have to reinvent them. How I wish I had that article back then.

Starting from ESX 3.5, ESX servers became NUMA-aware, keeping a VM's memory local to its CPUs via the NUMA node concept. This helps address memory locality performance. In vSphere 4.1, the wide-VM concept was introduced for VMs allocated more vCPUs than the physical cores per CPU (larger than a NUMA node). Check out Frank's post. In vSphere 5.0, vNUMA was introduced to improve CPU scheduling performance by exposing the physical NUMA architecture to the VM. Understanding ho...
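To make the sizing point concrete, here is a minimal sketch (my own illustration, not VMware's scheduler code) of the wide-VM test: a VM is "wide" when its vCPU count exceeds the cores in one physical NUMA node, and it is then split into multiple NUMA clients, which is the topology vNUMA can expose to the guest:

import math

def numa_clients(vcpus: int, cores_per_node: int) -> int:
    """Number of NUMA clients a VM is split into.

    A VM whose vCPU count fits inside one physical NUMA node stays
    on a single node; a wider VM is split across
    ceil(vcpus / cores_per_node) nodes.
    """
    return math.ceil(vcpus / cores_per_node)

# Example: a 12-vCPU VM on a host with 8 cores per NUMA node is a
# wide VM and is split into 2 NUMA clients.
print(numa_clients(12, 8))  # -> 2

This is why sizing a VM at or below the NUMA node boundary (8 vCPUs in this example) keeps all of its memory accesses local.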

vSphere 5: vMotion Enhancement

I was reading through the article vMotion Architecture, Performance, and Best Practices in VMware vSphere 5. I was not aware (perhaps only myself) that ESXi 5 introduces virtual NUMA (vNUMA). What this means is that, in terms of performance, the guest operating system is able to know the most efficient way to access memory. This was not possible in ESXi 4.x.

On reading further, one point stood out, especially for environments that enable EVC across mixed hardware:

ESXi 5 introduces virtual NUMA (vNUMA), which exposes the ESXi host’s NUMA topology to the guest operating systems. When using this feature, apply vMotion to move virtual machines between clusters that are composed of hosts with matching NUMA architectures. This is because the very first time a vNUMA-enabled virtual machine is powered on, its vNUMA topology is set, based in part on the NUMA topology of the underlying physical host on which it is running. After a virtual machine’s vNUMA topology is...
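The practical consequence is worth spelling out: because the vNUMA topology is frozen at first power-on, vMotion to a host with a different NUMA layout leaves the guest scheduling against a stale view of the hardware. As a hedged sketch (the host-inventory shape here is invented for illustration; it is not a vSphere API), a pre-migration check could simply compare the two topologies:

from dataclasses import dataclass

@dataclass(frozen=True)
class NumaTopology:
    nodes: int            # physical NUMA nodes (typically sockets)
    cores_per_node: int   # cores in each node

def vmotion_safe(src: NumaTopology, dst: NumaTopology) -> bool:
    """A vNUMA-enabled VM keeps the topology captured at first
    power-on, so the safe case is source and destination hosts
    presenting the same physical NUMA layout."""
    return src == dst

older_host = NumaTopology(nodes=2, cores_per_node=8)
newer_host = NumaTopology(nodes=2, cores_per_node=12)
print(vmotion_safe(older_host, newer_host))  # -> False: topology mismatch

In an EVC cluster mixing hardware generations, this is exactly the mismatch EVC does not mask, since EVC normalizes CPU feature sets, not NUMA topology.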