Get-Task: ID Parameter is Case Sensitive

There aren’t many occasions when PowerShell trips you up over case sensitivity; most things simply aren’t case sensitive. I was working with the PowerCLI cmdlet Get-Task, and in particular its Id parameter, to do something like:

Get-Task -Id 'task-task-2035'

I had originally found the ID via:

Get-Task | Format-Table Name,ID -AutoSize

However, I received an error that no tasks with that ID were found:

Get-Task : 24/02/2015 20:51:57 Get-Task The identifier task-task-2035 resulted in no objects.
At line:1 char:1
+ Get-Task -Id task-task-2035
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (:) [Get-Task], VimException
+ FullyQualifiedErrorId : Client20_OutputTracker_ReportNotFoundLocators_LocatorNotProduced,VMware.VimAutomation.ViCore.Cmdlets.Commands.GetTask

It turned out that making the task ID match the exact case worked:

Get-Task -Id 'Task-task-2035'

Apparently the IDs are case sensitive by design :-)

One to watch out for anyway…..
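If you’d rather not worry about the exact case, one workaround (a sketch, using the example task ID from above) is to filter the full task list with PowerShell’s own comparison operators, which are case-insensitive by default:

```powershell
# PowerShell's -eq operator is case-insensitive by default,
# so this matches the task regardless of the ID's capitalisation
Get-Task | Where-Object { $_.Id -eq 'task-task-2035' }
```

Retrieving every task and filtering client-side is slower than a direct ID lookup, but it sidesteps the case-sensitivity of the Id parameter entirely.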

PowerCLI is now a Module!

We’ve been waiting a long time for this, but with the 6.0 release PowerCLI is now available as a module. Back in PowerShell version 2, Microsoft changed the way it and third parties should deliver PowerShell functionality by introducing modules; previously, in PowerShell version 1, additional functionality was delivered via snap-ins.

It’s not fully there yet, but some of the functionality is now available in a module. 6.0 will be a hybrid release, with the rest to follow later.

Notice how the Core functionality is in both lists since this is a hybrid release.

Get-Module *vmware* -ListAvailable


Get-PSSnapin *vmware* -Registered


I believe there was significant effort in making this leap, so many thanks to Mr Renouf and his team :-)
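As a sketch, loading the module portion now looks like this (VMware.VimAutomation.Core is the core module shipped with PowerCLI 6.0; the vCenter name below is a hypothetical example):

```powershell
# import the core PowerCLI module rather than adding the snap-in
Import-Module VMware.VimAutomation.Core

# then connect to vCenter as usual
Connect-VIServer -Server vcenter01.lab.local
```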

Issue with Nested ESXi and Multiple VMkernel Ports

While working with Nested ESXi in my lab, I had an issue where I could communicate with the IP address on vmk0, but after adding multiple additional VMkernel Ports could not communicate with any of the additional IP addresses. It’s a simple network for testing, everything on the same subnet and no VLANs involved.

I hadn’t done much reading on the subject before, other than knowing I needed to enable Promiscuous Mode on the Port Group on the physical ESXi servers. It seemed strange that I could communicate with one of the addresses, but not the rest. I tracked down the following posts, but both suggested that only Promiscuous Mode needed to be enabled.

I was running a Distributed Switch on the physical ESXi servers, so I tested moving one of the VMkernel ports to a Standard Switch with Promiscuous Mode enabled on the Port Group. It worked fine there, so I was naturally curious why.

This communities posting showed that Forged Transmits also needed to be enabled. The difference between the Standard and Distributed switches is that Forged Transmits is set to Accept by default on a Standard switch


and Rejected by default on a Distributed switch


hence my experience above.

For more information check out these two posts from William Lam and Chris Wahl who are about two years ahead of me on this ;-)
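Rather than clicking through the web client, the security policy on the distributed portgroup can also be changed with PowerCLI; a sketch, assuming a hypothetical portgroup named 'Nested-ESXi' carrying the nested ESXi traffic:

```powershell
# allow both Promiscuous Mode and Forged Transmits on the
# distributed portgroup used by the nested ESXi hosts
Get-VDPortgroup -Name 'Nested-ESXi' |
    Get-VDSecurityPolicy |
    Set-VDSecurityPolicy -AllowPromiscuous $true -ForgedTransmits $true
```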


London VMUG January 2015

The first London VMUG of 2015 is almost upon us and as usual it looks like a great line-up of activities. My employer Xtravirt is sponsoring the labs and has a tech preview of some software that you may be interested to check out. Plus one of my colleagues, Michael Poore, will be talking about a real world automation project.


Make sure you register and get along to the event.

Rooms: Capital A / Capital B / Central Room

1000 - 1015: Welcome
1015 - 1100: Frank Denneman, PernixData & James Leavers, Cloudhelix - FVP Software in a real-world environment
1100 - 1145: vFactor Lightning Talks - Philip Coakes, Alec Dunn, Dave Simpson, Gareth Edwards, Chris Porter
1145 - 1215: Break in Thames Suite
1215 - 1300: Capital A: Robbie Jerrom, VMware - What is Docker and Where Does VMware Fit? | Capital B: VMware GSS | Central Room: Xtravirt Lab - SONAR Tech Preview, an easy-to-use SaaS service providing on-demand vSphere automated analytics and reporting
1300 - 1400: Lunch
1400 - 1450: Capital A: Simplivity - Stuart Gilks, Making Sense of Converged Infrastructure | Capital B: Unitrends - Ian Jones, Datacentre Failover
1500 - 1550: Capital A: Phil Monk, VMware - Bringing SDDC to Life, A Real World Deployment with Michael Poore | Capital B: Andy Jenkins, VMware - Cloud Native @ VMware, Give your developers & ops teams everything they want without losing control | Central Room: Xtravirt Lab - SONAR Tech Preview
1550 - 1600: Break in Thames Suite
1600 - 1650: Capital A: VMware GSS | Capital B: Dave Hill, VMware - 5 Starting Points for Cloud Adoption | Central Room: Xtravirt Lab - SONAR Tech Preview
1700 - 1715: Close
1715: vBeers - Pavilion End, sponsored by 10ZIG

How To Make Use Of Functions in PowerShell

Over the last few weeks I’ve had a number of comments on posts essentially asking the same question: “How do I use the functions that you publish on your blog?”. So I thought it worth making a post to refer people to, rather than trying to respond in kind to each comment. There are a number of ways it can be done depending on your requirements and they are listed below.

First of all, let’s create a simple function to use for testing:

function Get-TimesResult {

    Param ([int]$a,[int]$b)

    $c = $a * $b

    Write-Output $c
}

1) Paste Into Existing PowerShell Session

If you are working interactively in the console then the function can be copy / pasted into that session and is then available for the duration of that session. I find this easier to do via the PowerShell ISE than the standard console.

Copy the function into the script pane:


Click the Green Run Script button or hit F5 and the code will appear in the console pane:


The function is now available for use and if using the ISE will appear interactively when you start typing the name:




2) PowerShell Profile

If the function is something that you wish to use regularly in your interactive PowerShell sessions then you can place the function in your PowerShell Profile and it will be available every time you open your PowerShell console.

If you are unsure what a PowerShell profile is or how to use one, there is some good info here. A quick way to create one is:

New-Item -Path $profile -ItemType File -Force

Once you have created a PowerShell profile, place the function in the profile and save and close. Now every time you open your PowerShell console the function will be available.
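For example, the test function could be appended to the profile straight from the console; a sketch, using the built-in $profile variable to locate the file:

```powershell
# append the function definition to the current user's profile
Add-Content -Path $profile -Value @'

function Get-TimesResult {
    Param ([int]$a,[int]$b)
    Write-Output ($a * $b)
}
'@
```

The next console you open will have Get-TimesResult available immediately.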


3) Directly In A Script

If you wish to use the function in a script, place the function in the script above the sections where you need to use it, typically towards the top. The plus side of doing it this way is that everything is contained in one file; a negative is that if you have a number of functions, the readability of the script is reduced, since there may be a long way to scroll before anything of significance starts to happen.


4) Called From Another Script

One method I have seen quite often in the wild (and I’m not a particular fan of it; point 5 is a much better approach) is to store all regularly used functions in a script file, then dot source that script file in any script where you need one or more of the functions.

Functions script file Tools.ps1:


Get-Results script file calling Tools.ps1:

Note the dot and a space before the reference to the Tools.ps1 file:

. C:\Users\jmedd\Documents\WindowsPowerShell\Scratch\Tools.ps1

Get-TimesResult -a 6 -b 8



5) Stored in a Module

Using a PowerShell module is a more advanced and significantly more structured and powerful method of achieving what was done in 4). If you haven’t used PowerShell modules before I wrote an introduction to PowerShell modules a while back which you can find here.

Essentially they are a method to package up your reusable functions and make them available in a manner similar to how other teams in Microsoft and third-parties produce suites of PowerShell cmdlets for consumption.

For this example I have created a Tools module to use, which essentially is the same content as the Tools.ps1 file, but stored in a *.psm1 file (Tools.psm1) in the Modules\Tools folder on my workstation.

Note: the name of the *.psm1 file should match that of the folder. It’s possible to create a more enhanced module than this by using a Module Manifest, but we don’t need that for the purposes of this post; it’s described further in the previously mentioned article.
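Creating the module skeleton can itself be scripted; a sketch, assuming the default per-user module location under your Documents folder:

```powershell
# create the Modules\Tools folder and a matching Tools.psm1
$ModulePath = Join-Path ([Environment]::GetFolderPath('MyDocuments')) `
    'WindowsPowerShell\Modules\Tools'
New-Item -Path $ModulePath -ItemType Directory -Force | Out-Null

# the file name must match the folder name for discovery to work
Set-Content -Path (Join-Path $ModulePath 'Tools.psm1') -Value @'
function Get-TimesResult {
    Param ([int]$a,[int]$b)
    Write-Output ($a * $b)
}
'@
```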


Now we can use the *-Module PowerShell cmdlets to work with our content.

To observe the module we can use Get-Module:

Get-Module Tools -ListAvailable


To use the functions contained in the module we can use Import-Module:

Import-Module Tools

Get-TimesResult -a 6 -b 8



Note: since PowerShell v3, automatic cmdlet discovery and module loading has been supported (you can find out more about it here). Consequently, you don’t actually need to use Import-Module to get access to the functions, as long as you place the module in the correct location. However, it is good practice to add the Import-Module line to your script, so that another user is aware of where you are getting the functionality from.
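The "correct location" here means any folder on the module search path; you can list those folders by inspecting the PSModulePath environment variable:

```powershell
# list the module search paths, one per line
$env:PSModulePath -split ';'
```

Placing the Modules\Tools folder under any of the listed paths makes the module eligible for autoloading.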

Presenting a Password Confirmation Form in vCO / vRO


Present a vCO / vRO form which contains two password entry fields using SecureStrings and a field which displays whether the two entered passwords match.

Using an if statement to test whether two SecureStrings are equal will fail even if the text entered is identical. As mentioned in this communities post, in a workflow it is possible to take the SecureStrings into a scriptable task and output them as Strings. However, in the presentation of the workflow this method is not possible.


Create an action which converts a SecureString to a String. Call that action from another action that is used to display whether the two entered passwords match. Here are the details of how I did it.

Create an action secureStringToString

var outputText = text;

return outputText;



Create an action testPasswords


var passwordTest1 = System.getModule("").secureStringToString(password1);
var passwordTest2 = System.getModule("").secureStringToString(password2);

if (passwordTest1 == passwordTest2){

    return "Matching Passwords";
}
else {

    return "Non-Matching Passwords";
}


Create a workflow with the following inputs:


Set the presentation for the first three inputs to be mandatory and the displayConfirmation input to use the testPasswords action:

username mandatory


displayConfirmation Data binding


displayConfirmation Data binding testPasswords action


Run the workflow and observe the text changes in displayConfirmation dependent on the passwords matching:




I’d be interested to hear if anyone has a better way to do this because I reckon there might be one :-)

Enabling NFS VAAI Support in Synology 5.1

Synology enabled VAAI support for NFS in version 5.1 of their DSM software. In order to take advantage of this technology from ESXi hosts we need to do two things:

  • Upgrade DSM to at least version 5.1-5004 (2014/11/06)
  • Install the Synology NFS Plug-in for VMware VAAI


DSM can be upgraded from within the Control Panel application. Head to the Update & Restore section, then check for and install updates. This will likely require a reboot, so ensure anything or anyone using the device is shut down or notified.




Prior to installing the NFS plugin my two NFS datastores don’t have the Hardware Acceleration support.



From the 5.1-5004 Release Notes:

Added NFS support for two primitives: Full File Clone and Reserve Space.
Please note that you should install the Synology NFS Plug-in for VMware VAAI and read the instructions in README.txt to make sure installation is successful.

Once the plugin has been downloaded it is possible to use either VMware Update Manager or esxcli to install the vib. For the purposes of my home lab without Update Manager I’m going to show you the esxcli way.

Upload the vib to a datastore all hosts can access, then the command to install the vib is:

esxcli software vib install -v /path/to/vib/esx-nfsplugin.vib

Once installed, the ESXi host will require a reboot.


After the reboot you can check all was successful by running:

esxcli software vib list | grep nfs


and examining the NFS datastores


Improving the PowerShell ISE Experience with ISESteroids 2

For a long time I’ve used the PowerShell ISE, built in to Windows, for my PowerShell scripting. Most people tend to settle on a particular favourite editor for their coding, usually after trialling a few different ones. For pretty much everything else I’ve settled on Sublime Text, but for PowerShell I use the ISE since I really like the integration with the PowerShell console.

The ISE was introduced in PowerShell 2.0 and, to be honest, was pretty basic back then. It’s improved significantly since then into version 4, but it still has some areas where there could be improvement, or features missing that you would like to see.

Earlier in the year I tried out ISESteroids 1.0 which started to plug a number of the gaps I found in the ISE. Recently I had chance to upgrade to ISESteroids 2 and it has improved even further.

For a quick preview of what is available check out the below video.

A few things in it I particularly like:

1) More distinct (yellow) highlighting of bracket matching or other sections


(single click on this double quoted string)


2) Block commenting (this was a real annoyance for me; there is a keyboard shortcut to do it in the standard ISE, but it’s still fiddly)



After pressing the single button below:




3) ScriptMap which allows you to easily navigate your way through long scripts



4) Manage Version History


Clicking on Compare opens WinMerge and a view of what has changed between versions


5)  Autoselection. Click repeatedly to select various code segments






6) Enhanced Debugging

Best explained in the following video

For a more in-depth look at some of the features, check out the below video with ISESteroids creator Dr Tobias Weltner and fellow PowerShell MVP Jeff Wouters.

Delegating Permissions to vCO Workflows and Publishing for Consumption

I needed to look into the possibilities around granting delegated access to vCO workflows and ways to consume them without necessarily using the standard vCO client. The general idea was to have one group of people authoring workflows and another group consume some of them.

vCO has the ability to delegate permissions to workflows and folders of workflows using users and groups; ideally you would have already setup vCO to use your AD for authentication so enabling this delegation via AD users and groups.

Each workflow or folder has a Permissions tab where users and groups can be added with different rights selectable; View, Inspect, Admin, Execute, Edit.



One thing to watch out for is that (at least) View permissions need to be set right at the top of the tree for a user to be able to authenticate with the vCO (or other) client. The top level has a similar Permissions tab, but no pencil-style edit button like all of the levels further down.

It turns out it is hidden away in a context-sensitive right-click menu on the top level: Edit access rights…

Now we’ve figured that out, let’s take a scenario where we want to give an AD group access to run workflows in one folder, but not in another. We’ll use the AD group vCO_Users.

In the screenshot above vCO_Users have been given View and Inspect rights at the top level, which will filter all the way down. That will get our vCO_User authenticated, but not able to do too much.

On the folder JM-Dev vCO_Users have been given View, Execute and Inspect rights. Consequently, in the vCO client they are able to view and run the workflow, but not edit it – note the presence of the green Run button, but the Edit button is greyed out.

On the folder JM-Dev2 vCO_Users have only the View and Inspect rights which have filtered down from the top. Consequently, they can see the workflow, but neither run it nor edit it.

So we’ve got the permissions sorted for this example, but how about that requirement to not use the vCO client?

vCO has a built in web client, known as web operator. It is enabled by navigating to the Administer section of the client and then publishing the default weboperator.


Now we navigate to:


and login with some credentials of a member of the vCO_Users group

Now we can see the tree of workflows similar to the vCO client view

As this user we are able to run the workflow in the JM-Dev folder where we have the Execute permission:

but not the workflow in the JM-Dev2 folder which doesn’t have the Execute permission:


I found the web interface to be pretty basic, but possibly it’s worth evaluating if it might meet your needs.


Automating Disk Zeroing on VM Deletion

A requirement for a project I had was to zero the VMDK of all VM disks at the time of VM removal.


One consideration was to SSH into the host where the VM was located and use vmkfstools like the below on each vmdk to zero the disk.

vmkfstools -w /vmfs/volumes/<…>.vmdk

Looking for alternatives I found that the PowerCLI cmdlet Set-HardDisk has a ZeroOut parameter. Note the text from the help (version 5.8 R1):

Specifies that you want to fill the hard disk with zeros. This parameter is supported only if you are directly connected to an ESX/ESXi host. The ZeroOut functionality is experimental.

The points to note are:

  • You will need to connect PowerCLI directly to the ESXi host that the VM is registered on. So you will most likely first of all need to connect to vCenter to find out where the VM lives.
  • The functionality is ‘experimental’. A quick scan back through releases showed this had been the same for some time. From my observations the functionality appeared to work fine. There have been many things in vSphere over the years which have been ‘experimental’, but have usually worked fine.
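The first point can be sketched like this (the vCenter and VM names are hypothetical examples; credentials for the host are prompted for):

```powershell
# find which host the VM is registered on via vCenter...
Connect-VIServer -Server vcenter01.lab.local
$VMHostName = (Get-VM -Name 'VM01').VMHost.Name

# ...then connect PowerCLI directly to that ESXi host
Connect-VIServer -Server $VMHostName -Credential (Get-Credential)
```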

So once you have identified where the VM is and connected to the ESXi host in question, it’s a case of simply looping through all of the disks and zeroing them (with a bit of logging thrown in) – note it will likely take a fair amount of time to zero each disk!

$VM = 'VM01'
$HardDisks = Get-HardDisk -VM $VM

foreach ($HardDisk in $HardDisks){

    $HardDisk | Set-HardDisk -ZeroOut -Confirm:$false | Out-Null

    $Text = "Zeroed disk $($HardDisk.Filename) for VM $VM"
    $Text | Out-File -FilePath C:\log\zerodisk.log -Append
}