Tuesday, July 3, 2018

Assumed Support from Third Party Solutions

While preparing presentation slides for a workshop, I happened to look up third-party virtual switch support on vSphere.

Here is the relevant KB from VMware. For those who are not aware, VMware has announced the end of support for third-party virtual switches on vSphere, and vSphere 6.5 Update 1 will be the last release to support these switches through the vSwitch APIs.

While reading through the pointers, I came across one that caught my attention:


What about Cisco AVS, which is part of the Cisco ACI solution? Are you also discontinuing support for AVS?
VMware has never supported Cisco AVS from its initial release.


This might come as a surprise, but there are customers who have implemented the above without knowing that VMware does not support it.

Using the above as a discussion point: there are many solutions currently on the market that claim, or are marketed, to support certain hardware or software. With further research, however, the support claim often turns out to be one-sided; it was never reciprocated.

Using the above example from Cisco (I hope Cisco doesn't hate me for this): suppose you hit an issue while running Cisco AVS, thinking that VMware supports it. When you raise a support case with VMware and need something to be changed, an API to be tweaked, or a driver to be created, you will get nothing out of it, because it was unsupported in the first place.

Imagine this running in your production environment: you have just ended up with an unsupported environment. Logging a case with Cisco may not solve your problem either, if it requires something from VMware to support Cisco AVS.

To bring this to attention: there are many solutions out there claiming to support particular hardware and software. So when selecting a solution, make sure the support goes both ways and is not just a one-sided claim. If you run software that requires certain drivers or firmware from a hardware vendor that does not support it, you are as good as hitting a dead end.

Tuesday, June 12, 2018

Software Support Service Level, Why it Auto Close?

Many times I have heard comments about software support, both externally from customers talking about other vendors and internally while working at a principal vendor.

The interesting part is that many do not know how these support services measure their quality or define their success criteria.

This article illustrates how a support ticket flows through the system and how it is closed, or temporarily closed, until the user responds.

Typically, when we raise a support request, there are three severity levels. I will not go into the details here, but you can check out my past post on that.

If the request is raised online, an engineer typically responds to it within the SLA for its severity. If it is raised over the phone, the user has to wait for the next available engineer to pick up the call.

Once the call with the user is completed, the engineer follows up in writing based on what was communicated over the phone. The ticket then typically moves to the next step: waiting for the user to perform a certain task and revert.

This can go back and forth several times, but whenever the ticket is waiting for the user to respond, a timer starts. After three days, the ticket is automatically closed or temporarily closed, and an email is typically triggered to inform the user.

I know this is frustrating: as a user you still want the ticket to stay open because you have not had time to respond, or you did not expect an unfinished issue to be closed.

This is the part that needs explanation. The support engineer is measured by the number of tickets closed on time, so the request system helps by auto-closing (or temporarily closing) tickets that have been awaiting a response for three days and emailing the customer. To keep a ticket open for more than three days, the user can either reply to the service request so the system resets the timer, or ask the engineer to flag it so the system does not auto-close it.
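
To make the mechanics concrete, here is a small illustrative Python sketch of the kind of logic such a ticketing system might run. The three-day window matches what is described above; the field names, statuses, and notification helper are all hypothetical, not any specific vendor's system.

```python
from datetime import datetime, timedelta

AUTO_CLOSE_AFTER = timedelta(days=3)   # the waiting-on-customer window described above

def sweep_tickets(tickets, now=None, notify=print):
    """Close tickets that have been awaiting a customer reply for too long."""
    now = now or datetime.utcnow()
    for ticket in tickets:
        if ticket["status"] != "awaiting_customer":
            continue                                # timer only runs while waiting on the user
        if ticket.get("hold"):
            continue                                # engineer flagged it to stay open
        if now - ticket["last_customer_update"] >= AUTO_CLOSE_AFTER:
            ticket["status"] = "closed_pending_confirmation"
            notify(f"Ticket {ticket['id']}: auto-closed, reply to reopen.")

# A customer reply simply resets the timer: it updates last_customer_update
# and flips the status back to "in_progress".
```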

Closing more tickets also makes the support service's throughput look better, since more tickets are closed and fewer are left pending. This is another success criterion.

So the next time you need more time, or need a service request ticket to remain open, either reply to the email within three days or inform the engineer you are speaking to not to close it.

However, do note that not all systems allow the engineer to prevent auto-closure. The safest option is to reply within three days.

Tuesday, May 22, 2018

vMotion Between CPUs

The release of vSphere 6.7, with the ability to set EVC at a per-VM level instead of only at the cluster level, raises some questions.

Before we start, here is an article on how to check which EVC level to use.

One question often asked: does vMotion work across newer CPUs of the same generation without an EVC cluster?

If you follow this KB, the last paragraph states:

Once the virtual machine is power cycled:
  • They are only able to move to other ESX/ESXi hosts that are at the same CPU generation or newer.

What this statement means is that if you have a new server with a newer CPU generation, you can technically perform a vMotion without having the VM in an EVC cluster.

However, there are cases where vMotion will fail even when the CPUs are of the same generation, because an older VM hardware version performs a more stringent check. As stated here, this happens when the destination host has a newer CPU with ISA extensions not found on the source host.

In that case, vMotion will still fail without the VM being in an EVC cluster, unless the VM is upgraded to a newer VM hardware version.

As a good practice, when upgrading your vSphere environment, upgrade VMware Tools and VM hardware wherever possible. More often than not, I have seen environments running old VMware Tools and VM hardware versions on a newer vSphere environment.

In either case, both upgrading the VM hardware and placing a cluster, or a VM (in vSphere 6.7), into an EVC mode require a power cycle (note the difference: not a guest restart).
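
If you want to see where your VMs stand before attempting such migrations, here is a minimal pyVmomi sketch that reports each VM's hardware version, Tools status, and the EVC baseline it currently requires. The connection details (VC_HOST, VC_USER, VC_PASS) and the output formatting are my own placeholders; treat it as an illustration, not a polished tool.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()          # lab use only; validate certs in production
si = SmartConnect(host="VC_HOST", user="VC_USER", pwd="VC_PASS", sslContext=ctx)
content = si.RetrieveContent()

# Walk the whole inventory for VirtualMachine objects.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
for vm in view.view:
    print("{:<30} hw={:<8} tools={:<14} minEVC={}".format(
        vm.name,
        vm.config.version if vm.config else "n/a",        # e.g. vmx-13 (6.5), vmx-14 (6.7)
        vm.guest.toolsVersionStatus2 if vm.guest else "n/a",
        vm.runtime.minRequiredEVCModeKey or "none"))      # CPU feature baseline the VM needs
view.Destroy()
Disconnect(si)
```

VMs reporting an old hardware version here are the ones most likely to hit the stricter CPU compatibility check described above.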

Saturday, May 5, 2018

VMUG Singapore by VMware and HPE

If you are in Singapore, do remember to register for VMUG Singapore event sponsored by VMware and HPE.

Look for the event details here.

This is not going to be the usual evening session; it starts at 2pm this coming Friday, 11th May. There will be several sessions on the latest releases from VMware and HPE, plus a vBeer networking session to interact with fellow professionals and a chance to find out more about what VMware and HPE are cooking.

We will also have our special guest Don Sullivan, author of Virtualizing Oracle Databases on vSphere.

So look no further: if you are in town, join us!

Tuesday, April 17, 2018

New in Software Defined Compute in vSphere 6.7

Today marks the release of the next iteration of vSphere. Most of the changes are improvements to existing features, including vSAN, which is embedded with ESXi.

First, the vCenter Server Appliance with an embedded PSC can now join a Single Sign-On domain with other vCenter Servers in linked mode. At release, there is no upgrade path from an older vCenter Server with an external PSC into this embedded topology; external PSC deployments themselves are still supported. There is also Hybrid Linked Mode, which links an on-premises vCenter Server 6.7 with a VMware Cloud on AWS vCenter Server 6.5. Lastly, as flagged in the previous release, this is the last release to support vCenter Server on Windows.

There is a backup tool that can now be scheduled, to help manage the vCenter recovery process.
For migration to the vCSA, the migration tool allows an asynchronous background process to reduce the amount of downtime.

The HTML5 Client (Clarity UI) now has feature parity of up to around 95%, up from version 6.5. You can now operate almost everything, including Content Library, Storage Policies, and the vDS topology diagram, to name a few. VM encryption also has more granular controls to allow further customization. TLS 1.2 is now used by default.

Update Manager now runs entirely in the Clarity UI.

For ESXi, the biggest change is a new feature, "Quick Boot". This removes the need to reboot the server through the hardware boot screen; only the hypervisor itself is restarted. This definitely saves a lot of time. Don't you hate waiting for every single hardware device test to complete before you even reach the hypervisor or OS? To enjoy this, you need to be on at least 6.5 and upgrade to 6.7.

In terms of security, TPM is used to establish a hardware root of trust, building on Secure Boot (introduced in vSphere 6.5), which validates the boot loader and VMkernel. For Windows 10 and Server 2016 guests, VBS and Credential Guard are also supported. vTPM is now supported for VMs as well; do note that this requires upgrading to the newer virtual hardware.

vSphere will also support NVIDIA GRID vGPU for general server VMs, and suspend and resume are now supported for vGPU-enabled VMs.
Instant Clone is another big feature, now exposed directly through the vSphere API.

One big enhancement is EVC. Instead of only at the cluster level, you can now enable it per VM. That makes life much easier if you use EVC.
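
As a rough illustration of how this can be automated, here is a hedged pyVmomi sketch that applies an EVC baseline to a single powered-off VM via the per-VM EVC API added in the 6.7 release. The baseline key, function name, and assumption of an existing ServiceInstance connection `si` and VM object `vm` are all mine; check the API reference for your exact build before relying on it.

```python
from pyVmomi import vim

def apply_per_vm_evc(si, vm, evc_key="intel-broadwell"):
    """Apply a per-VM EVC baseline (vSphere 6.7+); the VM must be powered off."""
    # Enumerate the EVC baselines the vCenter knows about and find the one requested.
    modes = si.capability.supportedEVCMode
    match = next((m for m in modes if m.key == evc_key), None)
    if match is None:
        raise ValueError("Unknown EVC mode key: %s" % evc_key)
    # Apply that baseline's CPU feature masks to this single VM.
    return vm.ApplyEvcModeVM_Task(mask=match.featureMask, completeMasks=True)
```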

Check out the details here.

Update, 19th Apr
Fault Tolerance now supports up to 8 vCPUs and 128GB of memory per VM. Check out https://configmax.vmware.com/home, the new site for configuration maximums.

vVols now support SCSI-3 persistent reservations, which means they can support WSFC. This also means you can leverage vSphere Replication to replicate a WSFC VM without using RDMs! Check it out.

What's New in vSAN 6.7

With the release announcement of vSphere 6.7 comes its in-kernel vSAN, upgraded to 6.7 along with it.

With the big move to the HTML5 client (Clarity UI), vSAN 6.7 supports Clarity, with much of its functionality and management done there. That is definitely better than using the vSphere Web Client.

Together with this release, a new assessment tool for HCI is introduced. It works not just with vSphere but also with Hyper-V and physical servers. The best part is that the tool is free.

The long-awaited support for WSFC is now possible via the iSCSI target. There are also bigger improvements in destaging, data placement, and failure handling.

Check out the post here.

Tuesday, April 3, 2018

VMware vCenter Server Virtual Machine Name Character Limit

Recently I was asked how many characters a VM name can support, and whether any special characters can be used.

Having worked with vSphere since version 3.x, it had never occurred to me that there was a limit in that space.

Having said that, there are cases where a customer would need this. For example, matching the VM name to the FQDN, which is especially common in a multi-domain or multi-tenant environment where VM names could otherwise be identical and only the domain or tenant is the differentiator.

So, doing a quick check, here are the KBs that state the limits:

  • As of vCenter Server 4.1, the number of characters supported is 80. KB
  • Display names of vSphere entities such as virtual machine, cluster, and datastore/folder/file names should not contain special characters like %, &, *, $, #, @, !, \, /, :, ?, ", <, >, |, ;, '. However, '-' and '.' are apparently supported. KB

Here are the test results:


To stay in line with this, I also checked Microsoft Active Directory DNS: 64 characters is the maximum allowed for a DNS name and 255 characters for an FQDN, as stated here.
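
Pulling the limits above together, here is a small, illustrative Python check. The 80-character cap and the blocked character set follow the KBs above, the 64-character label cap follows the article cited, and the function and variable names are my own.

```python
# Illustrative pre-flight check for proposed VM display names.
VM_NAME_MAX = 80        # per the vCenter Server KB above
DNS_LABEL_MAX = 64      # per the Microsoft article cited above
BLOCKED = set('%&*$#@!\\/:?"<>|;\'')   # special characters flagged in the KB

def check_vm_name(name: str) -> list:
    """Return a list of problems found with a proposed VM display name."""
    problems = []
    if len(name) > VM_NAME_MAX:
        problems.append(f"longer than {VM_NAME_MAX} characters")
    bad = sorted(BLOCKED.intersection(name))
    if bad:
        problems.append("contains special characters: " + " ".join(bad))
    # If the name mirrors the FQDN, each DNS label has its own cap.
    if any(len(label) > DNS_LABEL_MAX for label in name.split(".")):
        problems.append(f"a DNS label exceeds {DNS_LABEL_MAX} characters")
    return problems

print(check_vm_name("web01.tenant-a.example.com"))   # expected: []
```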
