Thursday, May 14, 2015

What's So New in vSphere 6?

From the announcement and the datasheet, it seems quite a lot of functionality has been added.  However, there are some critical items that are more appealing: improvements or resolutions that those who have been using the product since vSphere 4 and earlier have been waiting to see, and which are not widely known.


Storage
There have been many discussions about storage UNMAP with thin provisioning, and many called it a "myth".  This was also discussed heavily in our Facebook VMUG-ASEAN group, largely because of the many changes from VMFS3 through VMFS5.  Cody wrote a long history of the changes for those who have missed out here.

A KB was also released, and this created some discussion on how VMFS3 with different block sizes would benefit thin provisioning, so to speak, before vSphere 5.0 Update 1.  Sadly, after that release, UNMAP is no longer possible via the GUI or automatically; it can only be done via the command line or a script.
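
For those already on ESXi 5.5 or later, the reclamation can still be done by hand.  A minimal sketch from the ESXi Shell or SSH (the datastore name is hypothetical; --reclaim-unit is optional):

# Reclaim dead space on a VMFS5 datastore (ESXi 5.5 and later)
esxcli storage vmfs unmap --volume-label=Datastore01 --reclaim-unit=200
# On ESXi 5.0 U1 and 5.1, the equivalent was "vmkfstools -y <percentage>", run from the datastore's directory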

I asked internally as well, and luckily Cormac has listed answers to all the questions from his findings here.  Sadly, we still cannot support Linux guests due to the legacy SCSI version exposed to them.  At least we are on the right track now, with Windows supported.


Backup
VMware Data Protection (VDP) was first introduced in vSphere 5.1, replacing VMware Data Recovery.  VDP is a virtual appliance running EMC Avamar and initially came in a standard edition and an Advanced edition.  The Advanced edition (VDPA) had to be purchased and came with three agents (SQL Server, SharePoint, Exchange) and up to 8 TB of deduplicated data per appliance instead of the 2 TB on the standard edition.

With VDPA, customers were also able to purchase a per-OS-instance license to back up their physical servers, as shown here.

With vSphere 6, VDPA is now known simply as VDP and is provided free; it is no longer a purchase option.  So the next question that arises: can users use VDP in vSphere 6 to back up physical servers via the agent?  The answer is yes.  Is there a cost to this?  VDP is now free, so the simple answer is that this is free too!  How good is that!


Lockdown Mode
There are two different lockdown modes:
  • Normal lockdown mode
  • Strict lockdown mode
This is explained here.  A KB on this is also provided.

Exception Users are also introduced.  Only users with administrative privileges added to the Exception Users list will be able to access the DCUI in normal lockdown mode.  Another option is to add users to the DCUI.Access advanced option to grant them access to the DCUI.
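
As a sketch of that second option, assuming PowerCLI, a hypothetical host name, and a hypothetical account "svc-dcui", the DCUI.Access list can be edited remotely:

# Hedged PowerCLI sketch: DCUI.Access is a comma-separated user list (root is present by default)
Get-VMHost esx01.lab.local | Get-AdvancedSetting -Name DCUI.Access | Set-AdvancedSetting -Value "root,svc-dcui" -Confirm:$false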

In strict lockdown mode, the DCUI is disabled.  Only when SSH or the ESXi Shell is enabled will users with administrative privileges in the Exception Users list be able to access the ESXi server.  If none of these services are enabled and the host loses its connection to vCenter Server, a reinstall is required.


Network
NIOC versions 2 and 3 coexist in vSphere 6.0, and the differences are covered here.  A performance improvement white paper has also been produced.


vSphere Replication
Many might not be aware of the changes that have been made to vSphere Replication (vR).  There are actually enhancements that were not publicly made known.  One of the major enhancements is compression, which reduces the amount of data to be replicated across and effectively saves you bandwidth.  Also mentioned here are the introduction of a dedicated network for NFC traffic instead of sharing the Management Network as in the past, the inclusion of Linux OS quiescing, and the removal of the need for a full sync whenever a Storage vMotion is triggered.  A white paper just on vR is also provided here.


vNUMA
I have previously written an article on the new vNUMA improvement here.  With this improvement, the chance of local memory access across NUMA nodes is increased.

I will add more information on things that are not widely known as I get hold of it.  Hope this shows you the beauty of this release.

Monday, May 11, 2015

vNUMA Improvement in vSphere 6

NUMA is always a very interesting topic in design and operations in the virtualization space.  We need to understand it so we can size a VM properly, effectively, and efficiently for applications to perform at their optimum.

To understand what NUMA is and how it works, a very good article to read is here.  Mathias has explained it in very simple terms with good pictures that I do not have to reinvent.  How I wish I had this article back then.

Starting from ESX 3.5, ESX became NUMA-aware, scheduling for memory locality via the NUMA node concept.  This helps address memory locality performance.

In vSphere 4.1, the wide-VM concept was introduced for VMs allocated more vCPUs than there are physical cores per CPU (larger than a NUMA node).  Check out Frank's post.

In vSphere 5.0, vNUMA was introduced to improve CPU scheduling performance by exposing the physical NUMA architecture to the VM.  Understanding how this works helps explain why, as a best practice, we try not to place ESXi servers of different hardware makes in the same cluster.  You can read more about it here.

All these NUMA improvements help address memory locality issues.  But how did memory allocation work when using memory hot-add, given that memory hot-add was not vNUMA-aware?

With the release of vSphere 6, there are also NUMA improvements in terms of memory.  One of them is that memory hot-add is now vNUMA-aware.  However, many are not aware of how memory was previously allocated.

Here I will illustrate with some diagrams to aid understanding.

First, consider what happened prior to vSphere 6 when memory is hot-added to a VM.

Let's start with a VM with 3 GB of virtual memory configured.

When an additional 3 GB of memory is hot-added to the VM, memory is allocated by placing it on the first NUMA node, then spilling over to the next node in sequence once the first is full.

In vSphere 6.0, hot-added memory is now more NUMA-friendly.

Memory allocation is now balanced evenly across all the NUMA nodes instead of all in one basket on the first NUMA node.  This increases the chance of a local memory access.
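
To benefit from this, memory hot-add has to be enabled while the VM is powered off.  A minimal PowerCLI sketch against the vSphere API (the VM name is hypothetical):

# Hedged sketch: enable memory hot-add on a powered-off VM
$vm = Get-VM -Name "app01"
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.MemoryHotAddEnabled = $true
$vm.ExtensionData.ReconfigVM($spec)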

We might wish this could be smarter, but of course we cannot predict which NUMA node memory will be accessed from when a process is running.

Hope this gives you a better picture when sizing VMs and enabling the hot-add function.

Sunday, May 3, 2015

Applications for Storage or Storage for Applications?

With many new startups spanning storage arrays, converged and hyper-converged infrastructure, and software-defined storage (SDS), users now have a lot of choices to make.

Recently I have encountered many questions on which they should choose and which is better.  There is no straight answer, as there are just too many choices, just like in a supermarket.  In the end, some may choose the one that advertises best and leaves the strongest impression in your mind.  In truth, you will not buy one and replace all the rest; rather, many end up with a hybrid environment for reasons we will go through later.

Given the many asks and questions, I would like to give some guidelines for deciding.  I will do my best to start with no bias towards any technology; this is my personal opinion and may differ from others'.

1.  Ease of management: A big phrase often misused by marketing, I would say.  Assess it and ask yourself whether you have a team to manage the different components, or only a lean team to manage it all.  How is ease of use defined?  Walk through the daily tasks you commonly perform on a traditional setup and compare them against the new technology you are evaluating.

2.  What applications are you running on it, and can the components support the required performance?  When performance comes into play, many look only at storage throughput and IOPS; we also need to look at daily operational tasks.  How fast can it spin up a workload in a server landscape, and in a VDI landscape (if you use one)?  Test everything; do not just watch a demonstration of one scenario.  Give everything a score and decide what you can and cannot do without.  No one product will fit every bill.  Pay for what you need now, not for extras and the future.

3.  How do you intend to protect this application?
It may meet your requirements for day-0 operations, but how will you protect the application?  If you are doing backups, can it support a backup API, whether Microsoft's or VMware's?  Weigh the cost between the two.  Would you need storage snapshots?  If so, would your workload need to be application/data consistent?  Can the storage in this new device do it?  If it can, is it built in, via a script, or via an agent?  How easy is it?

4.  How are you going to do disaster recovery?
The cheapest way might be to leverage a host-based replication technology that works with any device chosen.  But what if you need to perform storage replication?  Will your workload be application/data consistent?  Can the storage in this new device do it?  If it can, is it built in, via a script, or via an agent?  Which applications does it support if you are going to run them on this new equipment?

5.  Is maintenance easy: physical component upgrades, firmware upgrades, software upgrades?
This is important, as you will definitely do this eventually.  We cannot expect something that gives ease of day-0 operations yet creates lots of work for maintenance.

6. Does it come with prerequisites?
The fine print always exists.  Ask: other than the equipment you are choosing, does it come with additional requirements, or can it work with your existing infrastructure components, leveraging your existing investment?

7.  Proof of concept: Before you perform a pilot or proof of concept, decide whether you are placing real data or dummy data on it.  You need to determine whether this data can be removed easily from the equipment later, and who is responsible for doing that.  If it is you, know how you are supposed to do it before you start any activity.  You definitely do not want the purchase decision to be made because your data is on the equipment after the test, rather than because it meets your requirements.

8.  Can it offload storage activities, e.g. full copy and snapshot operations, to the storage, or will these consume your hosts' CPU cycles?  Understanding this helps identify the specification requirements for the nodes or servers you are using, so you do not discover contention later.
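
On a vSphere host this is easy to verify.  A small sketch from the ESXi Shell showing whether hardware acceleration (VAAI) is supported per device:

# List VAAI (hardware offload) support status for all attached devices
esxcli storage core device vaai status get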

9. Can the new device leverage your current investment, e.g. reuse an existing SAN, IP storage, etc.?  For converged and hyper-converged, can it use both its built-in storage and your existing storage?  For a new storage array, can it work with your existing equipment, e.g. server HBAs, network cards, etc.?

There may be more considerations beyond these, but these are some questions that ought to be thought through.  No single piece of equipment can fulfill everything, which means you may end up with a mixture for different workloads, some of which may still need your traditional setup.

Tuesday, April 14, 2015

Orphaned Replica in Horizon View

In my home lab, I do not have any redundancy since it is for testing purposes.  As such, I encountered a power failure that resulted in my Horizon Connection Server being absent.

Upon powering up again, one of the replicas ended up losing its link with the database.  In my vSphere Web Client, I saw a replica marked (orphaned).  I was able to delete the replica folder, but the entry still stayed in the vSphere Web Client inventory tree (the same applies to the vSphere Client).

By the way, I am running Horizon View 6.1, and the solution found in the KB still works.

I didn't manage to capture my own screenshot, but found the one below, which is similar; you will see the same in the Web Client.  Do note that the name of the replica will look like "replica-d0d123123c-f3j2-... (orphaned)".  The actual name does not contain "(orphaned)"; DO NOT include it when using the commands below.

source: http://www.vladan.fr

The KB states the command below.  This was the confusion I had, and I would like to address it so you can use it correctly.


sviconfig -operation=UnprotectEntity -DsnName=name_of_DSN -DbUsername=Composer_DSN_User_Name -DbPassword=Composer_DSN_Password -VcUrl=https://vCenter_Server_address/sdk -VcUsername=Domain\User_of_vCenter_Server_account_name -VcPassword=vCenter_Server_account_password -InventoryPath=/Datacenter_name/vm/VMwareViewComposerReplicaFolder -Recursive=true
  • In View Composer 2.5 and later, you can re-protect the VMwareViewComposerReplicaFolder using -operation=ProtectEntity

The main problem comes from the InventoryPath.  As my datacenter name contains a space, enclose the whole string in double quotes ("").

Leave the second part, /vm, as it is; do not change it.

Since the replica sits in the main inventory list and not in any folder you created, leave /VMwareViewComposerReplicaFolder as it is.  This is the default folder name vCenter recognizes.

Here is an example of how it should look (note the quotes around the inventory path, since my datacenter name contains a space):

sviconfig -operation=UnprotectEntity -DsnName=viewDSN -DbUsername=viewdsnuser -DbPassword=mypassword123 -VcUrl=https://prodvC/sdk -VcUsername=plain-virt\vCadmin -VcPassword=vCadminpasswd123 -InventoryPath="/My Datacenter/vm/VMwareViewComposerReplicaFolder" -Recursive=true
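
Once the orphaned entry is cleaned up, the KB note above says the folder can be re-protected in View Composer 2.5 and later.  A sketch reusing the same values, assuming ProtectEntity accepts the same arguments:

sviconfig -operation=ProtectEntity -DsnName=viewDSN -DbUsername=viewdsnuser -DbPassword=mypassword123 -VcUrl=https://prodvC/sdk -VcUsername=plain-virt\vCadmin -VcPassword=vCadminpasswd123 -InventoryPath="/My Datacenter/vm/VMwareViewComposerReplicaFolder" -Recursive=true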


Hope this helps you resolve such an incident.  This was the first time I encountered it, so I thought I would share it.

Friday, April 10, 2015

vSphere 6.0 Web Client Mark Disk As Flash or HDD

Just some sharing that I chanced upon.

This might be most useful for those building demo labs, when you run a nested environment or want to present a disk as flash when it is actually a magnetic disk (MD).

It is also applicable where your flash disk/SSD is detected as a normal MD and you need to mark it as a flash disk, or revert it when needed.

This is typically true in a Virtual SAN (VSAN) environment, where each disk group needs a minimum of one SSD.

If you refer to our KB, you have to go through a list of commands.  In vSphere 6.0, however, the Web Client can do it via the GUI.
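
For reference, the CLI route in the KB looks roughly like this (the device identifier is illustrative; find yours with esxcli storage core device list):

# Tag a device as SSD via a SATP claim rule, then reclaim it so the change takes effect
esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=naa.500253825002a5e4 --option=enable_ssd
esxcli storage core claiming reclaim -d naa.500253825002a5e4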

Below are screenshots taken from my home lab.  My ESXi server has a normal SATA disk and an SSD.  Upon selecting either of them, the icon changes to allow you to convert it.  Technically you can even build an all-flash VSAN without having to own an all-flash VSAN license; the only difference is that the read/write caching ratio remains as in a hybrid setup, unlike a real all-flash VSAN where 100% of writes go to the cache disk and 100% of reads come from the persistent disks.

Mark as flash disks.

Mark as HDD disks.

Wednesday, April 8, 2015

vSphere 6 Installation Experience

Previously I posted about my vSphere 5.5 installation and upgrade here.  Since I perform each upgrade only every year or two, it is easy to forget some important things.  So here is my experience.

Before you start, head to the vSphere Upgrade Center.  Here you will find all the resources needed.  There is also a simple install walkthrough to guide you through with screenshots!

1) Read the documents on the requirements!  During installation, vCenter requires at least 17 GB of space to store the MSIs, and as part of that, the new Platform Services Controller (PSC) takes up 8 GB installed in the C:\ProgramData path.  Go through the requirements on the product documentation page.


2) Your SQL DSN user for vCenter (vpxuser) needs additional rights during installation and upgrade, but not during normal operations.  The vCenter 6 installer points out the additional rights required, which is great!  So I just used SQL Management Studio and did what was needed instead of going through a list of steps (nope, I am not a SQL expert :) ).  You can of course remove these rights again after installation.
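
If you prefer the command line over Management Studio, here is a hedged sketch via sqlcmd, assuming the commonly required db_owner role on msdb; the installer screen lists the exact rights for your version, and the server and user names are placeholders:

:: Grant the vCenter DSN user db_owner on msdb for the upgrade, then remove it afterwards
sqlcmd -S SQLHOST -E -Q "USE msdb; ALTER ROLE db_owner ADD MEMBER [vpxuser];"
:: ...run the vCenter installation/upgrade...
sqlcmd -S SQLHOST -E -Q "USE msdb; ALTER ROLE db_owner DROP MEMBER [vpxuser];"
:: (On SQL 2008 R2 and older, use sp_addrolemember / sp_droprolemember instead)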



3) Yes, poor Update Manager still uses a 32-bit ODBC DSN.  I had not been using it for a while, so I decided to install it.  Head over to C:\Windows\SysWOW64\odbcad32.exe to create the 32-bit DSN.  Also, the SQL user for vCenter Update Manager needs db_owner rights on MSDB too.


4) Already have all your VMFS volumes on version 5 yet still receiving a warning that some of your datastores are deprecated?  This is a false positive, as stated in the KB; just restart your management agents.
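
Restarting the agents from the ESXi Shell or SSH is quick; these are the standard init scripts:

/etc/init.d/hostd restart
/etc/init.d/vpxa restart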

5) In case this is a new installation, do note that inter-VM Transparent Page Sharing is disabled by default; if you need it on, refer to this KB.
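
If you do want the old inter-VM sharing behaviour back, a hedged PowerCLI sketch (host name hypothetical; read the KB on the security trade-off first):

# Setting the salting value to 0 restores inter-VM TPS
Get-VMHost esx01.lab.local | Get-AdvancedSetting -Name Mem.ShareForceSalting | Set-AdvancedSetting -Value 0 -Confirm:$false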

Lastly, if you did a fresh install but reused the same database containing all your infrastructure data, note that stored passwords will not be recognized.  By passwords I mean, for example, those you created in the Customization Specification Manager for automated deployment, where you may have specified a domain account to help join the domain.

Wednesday, April 1, 2015

VMware VCP6-DCV Certification is now available!

The wait is over.  From the announcement of vSphere 6 in Feb 2015 to general availability in Mar 2015, today marks the day the certification becomes available.  This is one of the fastest certification releases over a major version that I have seen since VI3.5.

Read about it here.  You can also find out the requirements to be certified VCP6-DCV.  There is also an extension to re-certify your VCP and a discount on VCP6-DCV for existing VCPs, as announced here.

As in my previous article: newcomers will have to take the 5-day course; if you have attended a vSphere 5 training course but not taken an exam, you can proceed to the next step.  The next step is to pass the Foundations exam (75 questions in 90 minutes for USD 50) and then the VCP6-DCV exam (100 questions in 120 minutes for USD 50).  All exams require an application for authorization.

For existing VCP5-DCV holders, the course is recommended but not required.  You can go straight to the VCP6-DCV Delta exam (75 questions in 90 minutes for USD 50).  Alternatively, you can choose to take the full exam (though I do not see the need).

Do note that the current releases are all beta exams.  As such, the passing score is not stated.  They are also priced at USD 50, which you will not see once they are generally available.

Do it quickly and help evaluate the exam at this special price.  If you have experience, it should not be tough.

Good Luck!

ThinApp Assignment in Horizon View: Access Denied

I finally upgraded my home lab to vSphere 6.0 and Horizon 6.1.  I had to demonstrate to my customer how to assign ThinApps easily via the Horizon View Admin portal.  Guess what: luck was never on my side, and I hit lots of issues.

This one took lots of testing.  The error I encountered after assignment was HRESULT hr = 0x80070005. Access is denied.  A few Google searches brought me nowhere; the assignment installation kept failing.

Let me just walk through what I have done:
  • Packaged a few ThinApp packages, editing Package.ini (mainly the MSI settings), since I am doing the assignment from the Horizon View Admin portal.  Read more here.
  • Placed them in a file share with permissions for the VDI users (yes, make sure the VDI users can access those folders).
  • In the Horizon View Admin portal, scanned the repository for the ThinApps; make sure the .msi, .exe, and .dat (if you are streaming and the app is too big to build into an .exe) are in the same folder in the repository.
  • Assigned the ThinApps to my desktop pool.
  • Logged on to one desktop that is part of the pool, checked the ThinApp status in the Horizon View Admin portal, and saw the screenshot as shown.
I knew this was some permission problem but did not know where to start.  I checked the MSI settings and rebuilt the ThinApps multiple times.

I came across this article; point 2 sounded like something I had never tried before.  I decided to give it a try, in case the Local System account did not have rights to install on the virtual desktops, which would make sense here.

I created a service account for the View Connection Server service (make sure it has local admin rights on the View Connection Server, else the service will not start).
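
For reference, a sketch of changing the service logon from an elevated command prompt.  On my install the Connection Server service name was wsbroker, but verify yours first; the account and password are placeholders:

sc config wsbroker obj= "PLAIN-VIRT\svc-view" password= "YourPasswordHere"
net stop wsbroker
net start wsbroker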

This time round my ThinApp got installed successfully!  The Horizon View Connection Server service is the one that triggers the ThinApp push down to the assigned desktops.  In a default installation, the service runs as Local System.  This "account" has logon rights only on the local OS it runs on; as such, it has no rights to log on to a remote desktop.  By creating a domain account and using it to run the Horizon View Connection Server service, I now have an account that has rights to log on to the remote desktops.

Do note that if you have already created any application pool or desktop pool, all your entitlements will be lost; you will need to entitle them again.


Tuesday, March 24, 2015

Upgrade or Migrate VMware Certification from 5 to 6

VMware has announced a proper migration/upgrade path that many of us were still wondering about after VMware announced the version 6 certification roadmap a while ago.  None of these currently require a class.

VCP6
I read through it a few times to make out how the migration works.  Before we start with examples, let's understand the acronyms for the tracks.

VCP-DCV = Data Center Virtualization
VCP-DTM = Desktop & Mobility (formerly VCP-DT = Desktop)
VCP-CMA = Cloud Management & Automation (formerly VCP-Cloud = Cloud)
VCP-NV = Network Virtualization

For existing VCP5-DCV holders, you can upgrade without a class via the VCP6-DCV Delta exam (yet to be released), similar to the VCP5.5-DCV Delta exam.

New users trying to get certified will take the VCP6-DCV path, inclusive of a class, as usual.

Some version 6 exams, VCP6-DT and VCP6-Cloud, have already been released (check here).  These certifications will be automatically upgraded to VCP6-DTM and VCP6-CMA respectively, without taking another exam.

Do note that this does not renew any VCA you may own.  VCA is just an accreditation for individuals in a business function to prove they understand the VMware solution they acquired.

VCIX6
The main gist of it is here: upgrading to VCIX6.  Reminder: VCIX6 comprises two exams, Design and Administration, and this is similar for all three tracks: DCV, DTM, CMA.

If you hold a single VCAP5, either Design or Administration, you will need to take the VCIX6 exam for the one you do not have.  E.g., if you hold VCAP5-DCD, you will need to take the VCIX6 Administration exam, and vice versa.  This applies similarly to all tracks.

If you hold both VCAP5s for a single track (Design and Administration), you can choose to take either VCIX6 exam, Design or Administration.

For existing VCIX-NV, you will be upgraded automatically to VCIX6-NV.

Whether to take a VCAP Design or Administration exam first before doing the other exam under VCIX, or to head directly to VCIX by taking both the Design and Administration exams, is subjective.  The former gives you a VCAP certification upon passing, while the latter needs both exams before you are a VCIX.

One thing to note: each VCAP exam, Design or Administration, is 4 hours, while each VCIX exam is 2 hours; taking both adds up to 4 hours to achieve VCIX.

As an incentive, passing VCIX automatically renews your VCP6.


VCDX6
For VCDX5 holders, the same applies.  They are only required to take the VCIX6 Design exam portion for the tracks they are on.  This also renews your VCP6.

Friday, March 13, 2015

VMware 6 GA!

Following the announcement on 2nd Feb 2015 of the new vSphere 6, which is part of vSphere with Operations Management and vCloud Suite 6, today is the general availability (GA) of the binaries.

These include, but are not limited to, vSphere 6 (vCenter Server 6, ESXi 6), vRealize Orchestrator (vRO) 6, VMware Data Protection (VDP) 6, vSphere Replication (vR) 6, Virtual SAN (VSAN) 6, Site Recovery Manager (SRM) 6, vRealize Automation (vRA) 6.2.1, vRealize Business (vRB) 6.1.0, and, not new in today's release, vRealize Operations (vROps) 6.0.1.

Head over to the vCloud Suite download site to get those bits.

Also do check out the product documentation site on the changes from release notes.

In conjunction, Horizon 6.1 was also announced and made available.  In this release, Horizon 6 Enterprise now includes App Volumes.

Get the binaries here and the product documentation here.  Also refer to the vSphere Upgrade Center for more information on upgrading.

Need some help from the community?  Head over to the vSphere Product Community.

Monday, March 9, 2015

Virtual Machine Automatic Shutdown

I would like to share this incident, which I have encountered twice.  Once was in our ASEAN demo lab, and the other was reported by a user in our VMUG-ASEAN group on Facebook.

VMs started shutting down by themselves after 1 hour, and in vCenter you start to see memory state changes.
Source: VMUG-ASEAN group in Facebook

From the above, you can see the memory state changes.  The first thing most people thought was that there must be some issue or misconfiguration on the vSphere side.

On further checking, the hosts and VMs seemed healthy, yet this was affecting a majority of the VMs.  On further isolation, all of the affected VMs were running Microsoft Windows.

The first thing I wanted to do was run a non-Windows VM to confirm this was not an OS issue.  Instead of installing one, you can use a tiny VM from here; it is small and won't take up much resource.

The VM ran normally and nothing happened.  This confirmed it was not a vSphere issue but an OS-related problem, so a check on license activation was required.  And indeed, we confirmed the license had expired and Windows was not activated.
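
A quick way to check this on a suspect guest is Windows' built-in licensing script:

REM Show the current activation status and expiry
slmgr /xpr
REM Attempt activation once a valid key or KMS server is reachable
slmgr /ato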

We experienced this in our ASEAN demo lab, with our Active Directory server shutting down with the same problem, and this is all documented in the Microsoft knowledge base.  For Windows 2008 R2 and Windows 7, refer to this KB; for Windows 2012, refer to this KB.  From the field, the reported time is an automatic restart within 1 hour for Windows 7 and Windows 8, while for Windows 2012 it is 30 minutes.


Side Thoughts
If this happened on a physical server, we would likely not suspect a hardware failure, since it had been running normally; we would troubleshoot within the OS instead.  So why, in a virtualized environment, do we start investigating the virtualization layer when all was fine there?

We forget the main place to start.  With or without virtualization, we should always gather more information from one area and isolate it, rather than jumping straight to a layer that has done no harm but merely surfaces more information.

Virtualization does not change this very much, and I hope this gives everyone a clearer picture: sometimes things happen very near to us, but we jump to conclusions too far away.