vSphere 5: VAAI features round-up
While reading up on the VAAI thin provisioning alerts and space reclamation, and the differences between reclaiming at the VMFS volume level versus inside the guest, I came across this article by Hitachi.
It explains things in very simple terms, which I felt made it easy to understand. Those who are not as storage-trained, like myself, will appreciate it.
VAAI support is also extended to NAS in vSphere 5; in vSphere 4.1 it was available only for block storage (FC). You can refer to the VMware Blog.
Full Clone - allows the block copy work to be offloaded to the NAS device.
Native Snapshot Support - offloads snapshot work to the NAS array.
Extended Statistics - gives visibility into space usage on the NAS device.
Reserved Space - allows creation of thick-provisioned disks on NFS, where previously only thin was possible.
The three VAAI features in vSphere 4.1 were clearly explained by Brian here.
Full Copy – So you’re probably wondering how this feature is going to help. I can think of two VMware functions where this VAAI feature provides upwards of a 10x speed improvement. The first is deploying a VM from a template. Say, for example, you are going to deploy a 50 GB VM. When the VM is deployed, vSphere reads the entire 50 GB and then writes the 50 GB, for a total of 100 GB of I/O. With VAAI enabled and a storage array that supports it, this process creates very little I/O at the host: the vSphere host sends a command to the storage array that says "make a copy of this VM and name it this". The copy is done locally on the storage array, resulting in very little I/O between the host and the array. Once completed, the array notifies the host that the work is done.
The second VMware feature to benefit from this is Storage vMotion. I feel this is where it really pays off, because you are most likely moving a larger chunk of data. For example’s sake, let’s say we are going to move a 100 GB virtual machine from one disk to another. In the past this would have caused 200 GB of I/O on the host. With VAAI the burden on the host is almost nothing, as the work is done on the storage array.
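As a quick sanity check, the host-side I/O numbers above can be worked out in one line (a rough sketch only; real I/O will vary with block size and caching):

```shell
# Host I/O for a non-offloaded copy ≈ read + write = 2x the data moved.
clone_gb=50      # deploying a 50 GB VM from a template
svmotion_gb=100  # Storage vMotion of a 100 GB VM

echo "clone:    $(( clone_gb * 2 )) GB of host I/O"     # 100 GB
echo "svmotion: $(( svmotion_gb * 2 )) GB of host I/O"  # 200 GB
# With VAAI Full Copy the array does the copy, so host I/O is near zero.
```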
Hardware Assisted Locking – To allow multiple hosts in your cluster to talk to the same storage volume, VMware would lock the volume whenever one of the VMs needed to write to it. This locking prevents another host from writing to the same blocks. It was not a large issue if you were using smaller volumes with only a handful of virtual machines on them. With VAAI the locking has been offloaded to the storage array, and it is now possible to lock only the blocks being written to. This opens up the possibility of using larger volumes and increasing the number of VMs that can run on a single volume.
Block Zeroing – This feature saves vSphere from having to send redundant write commands to the array. The host can simply tell the storage array which blocks are zeros and move on; the storage device handles the work without needing repetitive write commands from the host.
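On an ESXi 5.x host you can check (and toggle) each of these three primitives through the advanced settings. A sketch using esxcli, assuming shell or SSH access to the host:

```shell
# 1 = enabled, 0 = disabled (all three default to enabled on ESXi 5.x)
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove   # Full Copy
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit   # Block Zeroing
esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking    # Hardware Assisted Locking

# Example: disable Full Copy for troubleshooting, then re-enable it
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 0
esxcli system settings advanced set -o /DataMover/HardwareAcceleratedMove -i 1
```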
To put it simply: when moving between datastores with different block sizes, the legacy fsdm datamover is used. Although it is the slowest datamover, it is the only one that goes all the way up through the application layer, and so it can reclaim space freed inside the guest.
For both fsdm and fs3dm, the reclaimed space is returned at the VMFS volume level. The advantage of fs3dm is that it is faster, since it does not go through the application layer, and it comes in software and hardware variants depending on whether your array supports VAAI.
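To see whether a given LUN can actually use the hardware fs3dm path, ESXi 5.x can report per-device VAAI status (a sketch; run on the host itself):

```shell
# Shows ATS / Clone / Zero / Delete support for each storage device
esxcli storage core device vaai status get

# Narrow it to a single device (the naa ID below is a placeholder)
esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx
```

If a primitive shows as unsupported here, operations on that device silently fall back to the software datamover.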