vSphere 5: vMotion with Multiple NICs
With the new vSphere 5, the VMkernel can use multiple NICs for vMotion: up to 16 NICs for 1 GbE links and up to 4 for 10 GbE links.
Sadly I am unable to demo this in my home lab, since where on earth would I get a 10 GbE link? But anyway, I would like to point out certain considerations when planning vMotion with such links.
For 1 GbE links, you can bundle multiple port groups for vMotion. I was totally confused about this, and after watching this video I got a clearer picture; however, my next question arose.
How many vMotion port groups can I create? The answer is simple: up to 16. But I was still not clear whether that number has to tally with the number of NICs used.
Here is a simple explanation if you were as confused as I was. Say you bundle 8 NICs for vMotion: you can then create up to 8 port groups, each bound to a different NIC. You can of course use fewer, but the maximum is 8 port groups, since you only have 8 NICs.
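As a rough sketch, the per-NIC binding could be done from the ESXi shell something like this. Note that the vSwitch, uplink, port group names and IP addresses here are all made-up examples for illustration, not values from any real environment:

```shell
# Create two vMotion port groups on the same standard vSwitch (names are examples)
esxcli network vswitch standard portgroup add --portgroup-name=vMotion-01 --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup add --portgroup-name=vMotion-02 --vswitch-name=vSwitch0

# Pin each port group to a different active uplink, with the other as standby
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion-01 --active-uplinks=vmnic2 --standby-uplinks=vmnic3
esxcli network vswitch standard portgroup policy failover set --portgroup-name=vMotion-02 --active-uplinks=vmnic3 --standby-uplinks=vmnic2

# Add a VMkernel interface to each port group and give each its own address
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=vMotion-01
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.10.11 --netmask=255.255.255.0 --type=static
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=vMotion-02
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.10.12 --netmask=255.255.255.0 --type=static

# Enable vMotion on both VMkernel interfaces
vim-cmd hostsvc/vmotion/vnic_set vmk1
vim-cmd hostsvc/vmotion/vnic_set vmk2
```

The same setup can of course be done from the vSphere Client; the point is simply that each vMotion port group ends up active on a different physical NIC.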
That is one consideration: do you really need that many? Having more NICs means vMotion can be much faster, but do note that more activity on the NICs also means more CPU resource is required.
The second thing to note: what if I use 10 GbE? If you have the luxury of such links, that would be good. In fact, you do not have to create multiple port groups, since the bandwidth of a single link is usually good enough.
But hold on, what if I want it even faster? You can still bundle multiple 10 GbE uplinks, but keeping busy 10 GbE uplinks fed also means a lot of computing resource will be used. Can you take that? So I am not going to suggest doing that, but it all comes down to your design.
Lastly, why does vMotion become faster when you have more VMkernel port groups for it? In vSphere 5, with multiple-NIC support, you can have multiple port groups for vMotion, which also means you have multiple vMotion initiators when a vMotion is triggered. The advantage is that the vMotion traffic is now spread across these initiators, which improves vMotion performance.
One more thing to note: vMotion traffic is high while a migration is running, so it is advisable not to share the vMotion NIC with FT logging, which, unknown to many, also produces pretty high bandwidth. FT logging traffic today applies to just a single vCPU per VM; ever thought that in future, when SMP FT is possible, the traffic will be much higher? That makes the case for dedicated NICs even more visible. My suggestion is to have a separate NIC for FT rather than sharing it with something else.
So when planning your design, keep two things in mind:
- 1 GbE or 10 GbE ports for the VMkernel? Weigh the cost against the benefits.
- One port group or multiple port groups? Consider the computing requirements, especially when using 10 GbE. My suggestion: use more for 1 GbE, but perhaps stick with just one for 10 GbE.
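To put those two questions into rough numbers, here is a quick back-of-envelope sketch. The VM memory size, the 90% efficiency factor, and the assumption that traffic spreads near-linearly across the vMotion initiators are all my own illustrative assumptions, not official figures:

```python
def vmotion_transfer_seconds(vm_memory_gb, link_gbps, num_links, efficiency=0.9):
    """Rough time to copy a VM's memory once over the vMotion network,
    assuming traffic spreads near-linearly across the initiators."""
    total_bits = vm_memory_gb * 8 * 1024**3           # memory to move, in bits
    usable_bps = link_gbps * 1e9 * num_links * efficiency
    return total_bits / usable_bps

# A 16 GB VM: one 1 GbE link vs. four 1 GbE port groups vs. a single 10 GbE link
print(round(vmotion_transfer_seconds(16, 1, 1), 1))   # one 1 GbE link   -> ~152.7 s
print(round(vmotion_transfer_seconds(16, 1, 4), 1))   # four 1 GbE links -> ~38.2 s
print(round(vmotion_transfer_seconds(16, 10, 1), 1))  # one 10 GbE link  -> ~15.3 s
```

Even this crude model shows why multiple port groups matter on 1 GbE, while a single 10 GbE link already brings the copy time down to a comfortable level without the extra CPU cost of driving more initiators.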