vSphere 5: VAAI features round up
The VMware Blogs actually have a four-part series on the storage features in vSphere 5.
I was reading up on the VAAI thin provisioning alert and reclamation, and the differences between the VMFS volume and guest portions, when I came across this article by Hitachi.
It explains things in very simple terms, which I felt made it easy to understand. Those who are not as storage-trained, like myself, will appreciate this portion.
Just to recap where we are in the lifecycle of VAAI, vSphere 4.1 introduced three primitives: XCOPY, ATS (Atomic Test & Set) & WRITESAME. vSphere 5.0 adds two new primitives: Thin Provisioning-STUN (TP-STUN) and UNMAP, bringing us to a total of 5 primitives.
Here is more detail about these new primitives and what they mean to the VMware & storage admins:
Thin Provisioning STUN – This is an error code to report "Out of Space" for a thin volume. If a storage array supports this SCSI command, then when a datastore reaches 100% capacity and any of the VMs require additional blocks of storage, the ESXi host will receive the alert and will pause those VMs while the other VMs continue to run. When using a Hitachi Dynamic Provisioning pool, however, administrators get thresholds and alerts within our array to prevent this from even happening, so in a real life situation, our customers would not get into this state. Consider this a second level of protection in a Hitachi storage array infrastructure.
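To get my head around it, here is a minimal Python sketch of the behaviour as I understand it. The class and function names are purely my own illustration, not anything out of vSphere or the Hitachi array:

```python
# Conceptual sketch of TP-STUN (not VMware's actual code): when a write needs
# a new block and the thin pool is exhausted, only the VM that triggered the
# allocation is paused; the other VMs keep running.

class OutOfSpace(Exception):
    """Raised by this hypothetical array model when the thin pool is full."""

class ThinDatastore:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.used_gb = 0

    def allocate(self, gb):
        if self.used_gb + gb > self.capacity_gb:
            raise OutOfSpace()          # array reports "Out of Space"
        self.used_gb += gb

def handle_write(vm, datastore, new_blocks_gb):
    """Pause only the VM whose write could not be satisfied."""
    try:
        datastore.allocate(new_blocks_gb)
    except OutOfSpace:
        vm["state"] = "paused"          # TP-STUN: stun this VM only
        return False
    return True

vms = [{"name": "vm1", "state": "running"}, {"name": "vm2", "state": "running"}]
ds = ThinDatastore(capacity_gb=100)
ds.used_gb = 99
handle_write(vms[0], ds, new_blocks_gb=5)   # vm1 needs more space than is left
print([vm["state"] for vm in vms])          # ['paused', 'running']
```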
UNMAP – When used with an array's thin provisioned volume and the WRITESAME command, this new primitive allows the ESXi host to reclaim space for a VMDK that had its file format converted (e.g. converted from "zeroedthick" to "eagerzeroedthick") or was deleted within the datastore. The biggest advantage is that now a VMware admin can see the actual disk space available within the datastore and also have far greater efficiency for disk capacity management.
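Again, a rough sketch of the idea, assuming a made-up ThinVolume model (the real UNMAP is a SCSI command, not a Python call): the host hands the array the block ranges it no longer needs, and the array returns those blocks to the shared pool.

```python
# Simplified model of UNMAP on a thin volume (my own illustration).

class ThinVolume:
    def __init__(self):
        self.allocated = set()          # block numbers backed by the pool

    def write(self, blocks):
        self.allocated.update(blocks)

    def unmap(self, blocks):
        """Host-issued UNMAP: release the backing for the given blocks."""
        reclaimed = self.allocated & set(blocks)
        self.allocated -= reclaimed
        return len(reclaimed)           # blocks handed back to the pool

vol = ThinVolume()
vol.write(range(0, 1000))               # VMDK consumed 1000 blocks
freed = vol.unmap(range(500, 1000))     # VMDK deleted / converted, so reclaim
print(freed, len(vol.allocated))        # 500 blocks back in the pool, 500 still used
```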
NAS support is also extended in vSphere 5; VAAI was previously only for block storage in vSphere 4.1. You can refer to the VMware Blog. I have put a rough sketch of how I picture these offloads right after the list below.
Full Clone - Allows the block copy work to be offloaded to the NAS device.
Native Snapshot Support - Offloads snapshot creation to the NAS array.
Extended Statistics - Gives visibility into space usage on the NAS device.
Reserved Space - Allows creation of thick disks, where previously only thin was possible.
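As mentioned, here is how I picture these four offloads, written as a plain interface. This is just my mental model, not the actual VAAI-NAS vendor plugin API:

```python
# Conceptual sketch of the four NAS offloads as an abstract interface.
# Method names and signatures are my own illustration.

from abc import ABC, abstractmethod

class NasVaaiOffloads(ABC):
    @abstractmethod
    def full_clone(self, src_vmdk: str, dst_vmdk: str) -> None:
        """Array copies the file itself instead of the host reading and writing it."""

    @abstractmethod
    def native_snapshot(self, vmdk: str, snap_name: str) -> str:
        """Array creates the snapshot instead of the host doing the work."""

    @abstractmethod
    def extended_stats(self, vmdk: str) -> dict:
        """Array reports actual space usage for the file."""

    @abstractmethod
    def reserve_space(self, vmdk: str, size_gb: int) -> None:
        """Array pre-allocates blocks so a thick disk can live on NFS."""
```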
The three vSphere 4.1 VAAI features were clearly explained by Brian here.
Full Copy – So you're probably wondering how this feature is going to help you. I can think of two VMware functions where this VAAI feature provides upwards of 10x speed improvements. The first is when you are deploying a VM from a template. Say, for example, that you are going to deploy a 50 GB VM. When the VM is deployed, vSphere is going to read the entire 50 GB and then write the 50 GB, for a total of 100 GB of I/O. With VAAI enabled and a storage array that supports it, this process creates very little I/O at the host. The vSphere host sends a command to the storage array that says, in effect, "make a copy of this VM and name it this". The copy is done locally on the storage array and results in very little I/O between the host and array. Once completed, the array sends a notice to the host to let it know the work was done.
The second VMware feature to benefit from this is Storage vMotion. I feel this is where it really pays off, because you are most likely moving a larger chunk of data with this command. For example's sake, let's say we are going to move a 100 GB virtual machine from one disk to another. In the past this would have caused 200 GB of I/O on the host. With VAAI the burden on the host is almost nothing, as the work is done on the storage array.
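Just to spell out the maths in those two examples (the figures are the ones from the text above; treating the offloaded host I/O as roughly zero is my simplification, since only the XCOPY commands cross the wire, not the data):

```python
# Back-of-the-envelope host I/O with and without the Full Copy offload.

def host_io_gb(vm_size_gb, offloaded):
    # Without VAAI the host reads the whole VM and writes it back out,
    # so it sees roughly 2x the VM size in I/O. With Full Copy the array
    # moves the data internally and host I/O is close to zero.
    return 0 if offloaded else 2 * vm_size_gb

print(host_io_gb(50, offloaded=False))   # template deploy: ~100 GB of host I/O
print(host_io_gb(50, offloaded=True))    # with XCOPY offload: ~0 GB at the host
print(host_io_gb(100, offloaded=False))  # Storage vMotion: ~200 GB of host I/O
```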
Hardware assisted locking – To allow multiple hosts in your cluster to talk to the same storage volume, VMware would lock the volume when one of the VMs needed to write to it. This locking prevents another host from trying to write to the same blocks. It was not a large issue if you were using smaller volumes with only a handful of virtual machines on them. Now with VAAI the locking has been offloaded to the storage array, and it's possible to lock only the blocks that are being written to. This opens up the possibility of using larger volumes and increasing the number of VMs that can run on a single volume.
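My mental model of this, sketched in Python: instead of a reservation on the whole volume, the host does an atomic compare-and-swap on just the block it needs. The function below is my own illustration of the ATS idea, not the actual SCSI command:

```python
# Conceptual ATS (Atomic Test & Set): lock a single block only if it still
# holds the value we expect, instead of locking the entire volume.

import threading

class Volume:
    def __init__(self, num_blocks):
        self.blocks = [0] * num_blocks          # 0 = unlocked, 1 = locked
        self._guard = threading.Lock()          # stands in for the array's atomicity

    def atomic_test_and_set(self, block, expected, new):
        with self._guard:
            if self.blocks[block] != expected:
                return False                    # another host got there first
            self.blocks[block] = new
            return True

vol = Volume(num_blocks=1024)
print(vol.atomic_test_and_set(42, expected=0, new=1))  # True: "host A" locks block 42
print(vol.atomic_test_and_set(42, expected=0, new=1))  # False: "host B" must retry
print(vol.atomic_test_and_set(43, expected=0, new=1))  # True: other blocks unaffected
```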
Block Zeroing – This feature saves vSphere from having to send redundant commands to the array for writes. The host can simply tell the storage array which blocks are zeros and move on. The storage device will handle the work without needing to receive repetitive write commands from the host.
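A small sketch of the difference, assuming a made-up command list rather than real SCSI traffic: without the offload the host sends a zero-filled write per block, with it the host sends a single "repeat this pattern across the range" request.

```python
# Illustration of Block Zeroing (WRITE SAME) versus per-block zero writes.

def zero_without_vaai(start_block, count, block_size=4096):
    zeros = b"\x00" * block_size
    commands = []
    for block in range(start_block, start_block + count):
        commands.append(("WRITE", block, zeros))      # zeros shipped for every block
    return commands

def zero_with_write_same(start_block, count, block_size=4096):
    # One descriptor; the array writes out the zero pattern itself.
    return [("WRITE_SAME", start_block, count, b"\x00" * block_size)]

print(len(zero_without_vaai(0, 25600)))    # 25600 commands for a 100 MB zero-fill
print(len(zero_with_write_same(0, 25600))) # 1 command
```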
I was also going through the difference between VMFS volumes and the guest. Apparently all of this applies only to VMFS volumes, so do not mix them up. I must admit I got it badly confused. I managed to chance upon a post by Duncan Epping, asked him, and he replied and cleared my doubts.
To put it simply, when moving between datastores with different block sizes, the fsdm datamover is used. Though it is the slowest datamover, it is the only one that goes up through the application layer and thus reclaims space from the guest.
For both fsdm and fs3dm, the reclaim is done on the VMFS volume only. The advantage of fs3dm is that it is faster, since it does not go through the application layer, and it comes in hardware and software types depending on whether your array supports it.
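Here is how I read Duncan's explanation, written out as a small decision function. fsdm and fs3dm are the real datamover names, but the selection logic below is my simplified take on it:

```python
# Simplified view of which datamover gets picked for a Storage vMotion.

def pick_datamover(src_block_size_kb, dst_block_size_kb, array_supports_vaai):
    if src_block_size_kb != dst_block_size_kb:
        # Different VMFS block sizes force fsdm, which goes all the way up
        # through the application layer (and so can reclaim from the guest
        # side) but is the slowest option.
        return "fsdm"
    if array_supports_vaai:
        return "fs3dm (hardware offload)"
    return "fs3dm (software)"

print(pick_datamover(1, 8, array_supports_vaai=True))    # fsdm
print(pick_datamover(8, 8, array_supports_vaai=True))    # fs3dm (hardware offload)
print(pick_datamover(8, 8, array_supports_vaai=False))   # fs3dm (software)
```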