Tag Archives: vSphere 5.1

VMware Converter: Permission to perform this operation was denied

While attempting to carry out a V2V using VMware Converter, I hit a permissions-related issue when selecting the source machine:

Permission to perform this operation was denied


I triple-checked that my permissions within vCenter were sufficient to carry out a V2V, even temporarily granting my own account the Administrator role directly at the location in question, but I still received the above error.

I stumbled across this VMware Community posting, which suggested that granting local Administrator rights to the account I was using on the Windows server VMware Converter was running from might help.

For various reasons my account did not have local admin access to that server. I added my account to the local Administrators group and, lo and behold, that did the trick and I could carry on with the V2V.

Note: Opening VMware Converter with Run as administrator, rather than applying the above solution, did not resolve the problem.
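For reference, adding the account to the local Administrators group can be done from an elevated prompt on the Converter server; DOMAIN\user below is a placeholder for your own account:

```powershell
# Add the account to the local Administrators group on the Converter server
# (run from an elevated prompt; DOMAIN\user is a placeholder)
net localgroup Administrators "DOMAIN\user" /add
```

A log off / log on (or at least restarting Converter) will be needed for the new group membership to take effect.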


vCenter 5.1 SSO: A General System Error Occurred: Authorize Exception

I experienced this issue a month or so back (and ended up logging a call with VMware to get confirmation of what had happened); it occurred again today, so I figured it was worth posting about.

If you receive the following error when attempting to log into vCenter 5.1 with an AD account:

A general system error occurred: Authorize Exception

SSO AuthorizeException

there are a number of potential causes; however, it is most likely related to SSO and one of the Identity Sources. Among the many reported on the Interwebs, the issues I checked out first (none of which were a problem for me) typically involve AD DCs being replaced, vCenter computer account problems or a re-created Identity Source.

In my case none of the above really applied and the Identity Source appeared to check out OK:


A restart of the vCenter SSO service brought things back to life.
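If you hit the same symptoms, the restart can be done from an elevated PowerShell prompt on the vCenter server. A sketch is below; the service display name is from my 5.1 install, so verify yours with Get-Service first:

```powershell
# Find the SSO service - on my vCenter 5.1 install the display name
# contained "Single Sign On", but check yours with Get-Service first
Get-Service | Where-Object { $_.DisplayName -like "*Single Sign On*" }

# Restart it, which refreshes the LDAP Connection Pool
Restart-Service -DisplayName "vCenter Single Sign On"
```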

Looking into it a bit deeper, and after filing an SR with VMware, support pointed me at this KB, having discovered many entries like the below in the ssoAdminServer.log file:

Error connecting to the identity source……..No ManagedConnections available within configured blocking timeout…….

The symptoms in the KB also tied in with some scenarios around vCenter aware backup failures which had occurred during this time.

As per the KB, a restart of the vCenter SSO service refreshes the LDAP Connection Pool. The engineer at the time informed me that it would be fixed in vSphere 5.5, and the KB now confirms this is the case; however, there is no indication of whether it will be back-ported to 5.1. I haven’t yet been foolhardy brave enough to attempt upgrading a production SSO deployment to vSphere 5.5 to confirm whether the fix resolves the issue we have seen.

Storage vMotion Fails with VM Hardware Version 4

Having recently enabled Storage DRS in a vSphere 5.1 environment we began to see a lot of the following errors in vCenter:

The device or operation specified at index ‘x’ is not supported for the current virtual machine version ‘vmx-04’. A minimum version of ‘vmx-06’ is required for this operation to succeed


The host(s) running the VM(s) in question logged the error matched in this VMware KB article:

[2009-07-10 14:13:41.632 F638BB90 info ‘vm:/vmfs/volumes/4a56e6c2-9319e3df-f1af-001e0bea4030/RVHOLS029/RVHOLS029.vmx’] Upgrade is required for virtual machine, version: 4

The VMs in question were all quite old, were still running VM Hardware Version 4 and required upgrading to a later version before Storage vMotion would move them.

This combination can sure generate a lot of failure alerts: Storage DRS turned on, a thin-provisioned datastore going over its specified capacity threshold, and many VMs attempting to move to claw some space back 😉

I’m sure you don’t need me to tell you that keeping VM Hardware versions, VMware Tools, OS patches etc. up to date in your environment is one of those necessary maintenance tasks that keep systems running smoothly.

You can identify which VMs are on a particular Hardware Version with a simple PowerCLI command:

Get-VM | Where-Object {$_.Version -eq 'v4'} | Sort-Object Name | Format-Table Name,Version -AutoSize
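Once identified, the upgrade itself can also be driven from PowerCLI. A rough sketch of how I'd approach it, assuming the Set-VM -Version parameter available in PowerCLI of this era; note the VM must be powered off, and you should take a backup or snapshot first:

```powershell
# Upgrade v4 VMs to hardware version 8 (VM must be powered off first;
# take a backup or snapshot before upgrading)
Get-VM | Where-Object {$_.Version -eq 'v4'} | ForEach-Object {
    Shutdown-VMGuest -VM $_ -Confirm:$false   # requires VMware Tools
    # ...wait for the guest to finish powering off before continuing...
    Set-VM -VM $_ -Version v8 -Confirm:$false
    Start-VM -VM $_
}
```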

Even better, if you are regularly running the vCheck report then you would have already picked up on this since one of the checks is for VM Hardware Versions prior to a value you have specified 😉

Adding a Host to vCenter Fails Because of Time Issues

I recently experienced an issue adding a vSphere 5.1 host to vCenter while using the Add-VMHost cmdlet in PowerCLI. I’m pretty sure the same problem would have occurred if I had been using the GUI, but this was part of some automated deployment work.

On a freshly baked ESXi 5.1 install, one of the first tasks is to get it into vCenter. However, this was failing with what initially appeared to be a license issue, despite there being plenty of available licenses:

“The Evaluation Mode license assigned to Host xxxx.xxxx.xxxx has expired. Recommend updating the license.”





It turns out the issue was related to the date and time being incorrect on the host (it had been powered off for some time); consequently the eval license had ‘expired’ even though it had only just been installed.


Despite the fact that my automated deployment configures NTP settings and starts the NTP service before adding the host to vCenter, the host had not yet corrected to the current date and time – possibly because it was so far out; in this case, more than a year.
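For completeness, the NTP part of that deployment looks something like the sketch below; ESXi01 and pool.ntp.org are placeholders for your own host name and time source:

```powershell
# Configure an NTP server and start the NTP service on the host
# (ESXi01 and pool.ntp.org are placeholders)
$VMHost = Get-VMHost ESXi01
Add-VMHostNtpServer -VMHost $VMHost -NtpServer "pool.ntp.org"
Get-VMHostService -VMHost $VMHost | Where-Object { $_.Key -eq "ntpd" } |
    Set-VMHostService -Policy On
Get-VMHostService -VMHost $VMHost | Where-Object { $_.Key -eq "ntpd" } |
    Start-VMHostService
```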

So to ensure the date and time are correct before adding the host to vCenter, I created the Set-VMHostToCurrentDateandTime function below. This uses the UpdateDateTime method to set the time on the ESXi host to the current time. I recommend you configure NTP settings and start the NTP service, then carry out the time set. You can use it in the following manner:

Get-VMHost ESXi01 | Set-VMHostToCurrentDateandTime


function Set-VMHostToCurrentDateandTime {
<#
.SYNOPSIS
Function to set the Date and Time of a VMHost to current.

.DESCRIPTION
Function to set the Date and Time of a VMHost to current.

.PARAMETER VMHost
VMHost to configure Date and Time settings for.

.EXAMPLE
PS> Set-VMHostToCurrentDateandTime -VMHost ESXi01

.EXAMPLE
PS> Get-VMHost ESXi01,ESXi02 | Set-VMHostToCurrentDateandTime
#>

    [CmdletBinding()]
    param (
        [Parameter(Mandatory = $true, ValueFromPipeline = $true)]
        [PSObject[]]$VMHost
    )

    process {

        foreach ($ESXiHost in $VMHost){

            try {

                if ($ESXiHost.GetType().Name -eq "string"){

                    try {
                        $ESXiHost = Get-VMHost $ESXiHost -ErrorAction Stop
                    }
                    catch [Exception]{
                        Write-Warning "VMHost $ESXiHost does not exist"
                        continue
                    }
                }

                elseif ($ESXiHost -isnot [VMware.VimAutomation.ViCore.Impl.V1.Inventory.VMHostImpl]){
                    Write-Warning "You did not pass a string or a VMHost object"
                    continue
                }

                # --- Set the Date and Time to the current Date and Time
                Write-Verbose "Setting the Date and Time to the current Date and Time for $ESXiHost"
                $Time = Get-Date
                $DateTimeSystem = Get-View $ESXiHost.ExtensionData.ConfigManager.DateTimeSystem
                $DateTimeSystem.UpdateDateTime((Get-Date($Time.ToUniversalTime()) -Format u))
                Write-Verbose "Successfully set the Date and Time to the current Date and Time for $ESXiHost"
            }
            catch [Exception]{
                throw "Unable to set current Date and Time"
            }
        }
    }
}
Thanks to this post for refreshing my memory on how to do this.

Cisco UCS C210 M2 ESXi 5.1 Stuck At ‘Initializing scheduler….’

After upgrading a Cisco UCS C210 M2 rack-mount server to ESXi 5.1, and then applying the ESXi patches from 25/07/2013, the host was stuck at ‘Initializing scheduler….’


I had checked that my firmware version was satisfactory for ESXi 5.1


but I found reports suggesting this (intermittent) issue has been around for a while with earlier versions of ESXi, different UCS models and firmware versions, and maybe HP models too.

Before trying the suggested workaround of disabling legacy USB support, I decided to get the box up to the latest firmware.

To do this on a rack-mount server which is not managed by UCS Manager, download the firmware from Cisco.com (requires registration) and boot from the ISO:



Existing firmware levels and the level to move to are displayed:


The progress of each firmware package is displayed as it is updated:


Once complete, restart the server


Since applying the update 1.4(3u) I have not seen the issue occur again, yet…….

“There was an error connecting to VMware vSphere Update Manager.” – vSphere 5.1 Web Client

Following a switchover from self-signed certificates for vCenter 5.1 to those signed by an internal CA, access to Update Manager via the vSphere Web Client (which seems fairly limited anyway in terms of Update Manager functionality) no longer worked. It was failing with “There was an error connecting to VMware vSphere Update Manager.”


The Web Client log file vsphere_client_virgo.log contained the following error:

ERROR [ERROR] http-bio-9443-exec-5 c.vmware.vum.client.adapters.updatemanager.aspects.LoggingAspect Exception caught in class ‘com.vmware.vum.client.adapters.updatemanager.UpdateManagerService’, line 160 com.vmware.vim.vmomi.client.exception.SslException: com.vmware.vim.vmomi.core.exception.CertificateValidationException: Server certificate chain not verified

There were no corresponding issues accessing Update Manager via the full vSphere client.

I filed an SR with VMware support, who confirmed a known issue with the Web Client and that “…The next major release will contain the fix”. In the meantime the advice (should you wish) is to “disable the plugin, you can do this via the Web Client by logging in as [email protected] and going to Administration–>Solutions–>Plug-In Management then right-clicking on the VMware vSphere Update Manager plug-in and selecting Disable.”


I’ll update the post when I’ve confirmed the issue is resolved in the (so far unconfirmed) version / update release.


Update 11/07/2013:

VMware have now posted a KB article for this issue.

Migrating Email Alarm Actions between vCenter 5.0 and 5.1

I needed to migrate some Email Alarm Actions between two vCenters; the target at version 5.1 being a replacement for an existing 5.0 vCenter. The first task was to identify which Alarm Definitions had been configured with an email alert. To do that I used the following PowerCLI command to export them to a CSV file:

Get-AlarmDefinition | Select Name,@{N="EmailAction";E={$_ | Get-AlarmAction | Where {$_.ActionType -eq "SendEmail"}}} | Export-Csv AlarmActions.csv -NoTypeInformation

I could then easily identify those which needed to be migrated across.


Before creating new Email Alarm Actions in the 5.1 vCenter, I wanted to check that the alarms which had been configured with an Email Action still existed in 5.1, since it’s reasonable to assume that there will be some changes in the Alarm Definitions between vCenter versions. After modifying my CSV to cut out those I didn’t need, I imported the data into PowerShell and ran a query to establish whether any no longer matched:

$data = Import-CSV AlarmActions.csv
Get-AlarmDefinition ($data | Select -ExpandProperty Name)

One didn’t exist anymore; one appeared similar, but significantly renamed; a couple of others appeared to have a minor name change.


The two minor name changes to me look unnecessary, inconsistent and frankly a bit sloppy. To the naked eye you might not even notice them, and you might even think what’s the big deal, but adding a full stop character at the end of an Alarm Definition when it was not there in the previous version and the majority of other definitions don’t have them is pretty poor.

vCenter 5.0 (no full stop)


vCenter 5.1 (full stop)


Anyway, I updated my data source to match the new names and created the new Email Actions on the Alarm Definitions that required them:

$data = Import-CSV AlarmActions.csv
Get-AlarmDefinition ($data | Select -ExpandProperty Name) | New-AlarmAction -Email -To "[email protected]"
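If different alarms need different recipients, the To address could be read from the CSV per alarm rather than hardcoded. A rough sketch, assuming the CSV has been extended by hand with a Recipient column (which the earlier export does not produce):

```powershell
# Create an email action per alarm, with the recipient taken from the CSV.
# Assumes a Recipient column has been added to the CSV by hand.
$data = Import-CSV AlarmActions.csv
foreach ($row in $data) {
    Get-AlarmDefinition $row.Name |
        New-AlarmAction -Email -To $row.Recipient
}
```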

Removing ESXi 5.0 host from vCenter – General system error: Invalid Fault

While I was removing some ESXi 5.0 hosts from a 5.1 vCenter I encountered the following issue with a couple of them:

General system error: Invalid Fault


In the vCenter vpxd log file this message was accompanied by

msg = “vim.fault.AdminNotDisabled”

It turned out to be a known fault where Lockdown Mode was out of sync between vCenter and the ESXi host. In vCenter it showed as enabled:


However, on the ESXi host it was showing as disabled. You can identify this by running the following command in the console:

vim-cmd -U dcui vimsvc/auth/lockdown_is_enabled

The KB article says to enable lockdown via the console:

vim-cmd -U dcui vimsvc/auth/lockdown_mode_enter



and, once successful, disable it via the GUI.

Once complete, you’ll then be able to remove the host from vCenter.
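As an alternative to the console commands, the same toggle can be driven from PowerCLI via the HostSystem managed object. A sketch, assuming the host is named ESXi01:

```powershell
# Check, enable, then disable Lockdown Mode via the vSphere API
# (ESXi01 is a placeholder host name)
$HostView = (Get-VMHost ESXi01).ExtensionData
$HostView.Config.AdminDisabled      # $true when Lockdown Mode is enabled
$HostView.EnterLockdownMode()
$HostView.ExitLockdownMode()
```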

vCenter Certificate Automation Tool – what is “the Original Database Password”?

Having installed vCenter 5.1 U1a, it was time to replace certificates. Using the vCenter Certificate Automation Tool to replace the self-signed SSL certificates with CA-signed certificates, you are taken through various menus to replace each certificate:


When the time comes to replace the vCenter Server certificate, you will first be in Menu 4 and then select Option 2, Update the vCenter Server SSL Certificate (starting to sound like a dreaded phone call to a utilities or government call centre yet? It’s almost as painful):


One of the questions prompted by selecting this option is “Enter the vCenter Server original database password”, preceded by some horrible warnings about what might happen if you get it wrong:


I’m not sure if the choice of the word ‘original’ here is deliberate or potentially misleading. Luckily for me this was a new install, not an upgrade, and since I was using Windows Authentication I took it to mean the password of the AD service account for vCenter. Using this was successful for me; what it means if you have historically changed the vCenter service account, moved from a SQL user, or currently use a SQL user is not immediately clear. I would take it to mean the current account (Windows or SQL) used by vCenter to connect to the database.

Would be interested to hear of others’ experiences.

vCenter Server 5.1 installation fails with “Wrong input – either a command line argument is wrong…”

While installing a fresh vCenter 5.1 recently I was presented with this really helpful error message at the point where you are registering a vCenter Server administrator user or group with vCenter Single Sign On:

Wrong input – either a command line argument is wrong, a file cannot be found or the spec file doesn’t contain the required information, or the clocks on the two systems are not synchronized. Check vm_ssoreg.log in system temporary folder for details.





There’s a VMware KB article which references this, but it wasn’t quite my problem. Additionally, it states that “This issue is resolved in VMware vCenter Server 5.1.0a”, whereas I was using vCenter 5.1 U1a and you would hope that the fix would be kept in for a later release 😉

A communities post here suggested that it may be related to the SSO install option I had chosen below for HA, and the fact that the group above was local, not AD-based:



Switching the local Administrators group above for an AD-based group then permitted me to continue the install.