Tag Archives: powershell

How To Make Use Of Functions in PowerShell

Over the last few weeks I’ve had a number of comments on posts essentially asking the same question: “How do I use the functions that you publish on your blog?”. So I thought it worth making a post to refer people to, rather than trying to respond in kind to each comment. There are a number of ways it can be done depending on your requirements and they are listed below.

First of all, let’s create a simple function to use for testing:

function Get-TimesResult {

    Param ([int]$a,[int]$b)

    $c = $a * $b

    Write-Output $c
}

1) Paste Into Existing PowerShell Session

If you are working interactively in the console then the function can be copy / pasted into that session and is then available for the duration of that session. I find this easier to do via the PowerShell ISE than the standard console.

Copy the function into the script pane:


Click the Green Run Script button or hit F5 and the code will appear in the console pane:


The function is now available for use and if using the ISE will appear interactively when you start typing the name:




2) PowerShell Profile

If the function is something that you wish to use regularly in your interactive PowerShell sessions then you can place the function in your PowerShell Profile and it will be available every time you open your PowerShell console.

If you are unsure what a PowerShell profile is or how to use one, there is some good info here. A quick way to create one is:

New-Item -Path $profile -ItemType File -Force

Once you have created a PowerShell profile, place the function in the profile and save and close. Now every time you open your PowerShell console the function will be available.
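As a sketch, you could append the sample function to your profile from the console like this ($PROFILE is PowerShell's built-in variable for the profile path):

```powershell
# Append the sample Get-TimesResult function to the current user's profile
$functionText = @'

function Get-TimesResult {
    Param ([int]$a,[int]$b)
    Write-Output ($a * $b)
}
'@

# Create the profile file first if it doesn't already exist
if (-not (Test-Path $PROFILE)) {
    New-Item -Path $PROFILE -ItemType File -Force | Out-Null
}
Add-Content -Path $PROFILE -Value $functionText
```

Open a new console afterwards and the function will be available straight away.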


3) Directly In A Script

If you wish to use the function in a script, place the function in the script above the sections where you need to use it. Typically this will be towards the top. The plus side of doing it this way is that everything is contained in one file; a negative is that if you have a number of functions, readability of the script is reduced, since there may be a long way to scroll before anything of significance starts to happen.
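For example, a hypothetical script using the sample function from earlier might look like this:

```powershell
# MyScript.ps1 (illustrative) - function defined at the top,
# before the point in the script where it is needed
function Get-TimesResult {
    Param ([int]$a,[int]$b)
    Write-Output ($a * $b)
}

# Main body of the script - the function is now available
Get-TimesResult -a 6 -b 8
```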


4) Called From Another Script

One method I have seen quite often in the wild (and I'm not a particular fan of it – point 5 is a much better approach) is to store all regularly used functions in a script file and dot source that functions script file in the script where you need to use one or more of the functions.

Functions script file Tools.ps1:
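Tools.ps1 would simply contain the shared functions – for instance the sample function from earlier:

```powershell
# Tools.ps1 - a collection of regularly used functions
function Get-TimesResult {
    Param ([int]$a,[int]$b)
    Write-Output ($a * $b)
}
```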


Get-Results script file calling Tools.ps1:

Note the dot and a space before the reference to the Tools.ps1 file

. C:\Users\jmedd\Documents\WindowsPowerShell\Scratch\Tools.ps1

Get-TimesResult -a 6 -b 8



5) Stored in a Module

Using a PowerShell module is a more advanced and significantly more structured and powerful method of achieving what was done in 4). If you haven’t used PowerShell modules before I wrote an introduction to PowerShell modules a while back which you can find here.

Essentially they are a method to package up your reusable functions and make them available in a manner similar to how other teams in Microsoft and third-parties produce suites of PowerShell cmdlets for consumption.

For this example I have created a Tools module to use, which essentially is the same content as the Tools.ps1 file, but stored in a *.psm1 file (Tools.psm1) in the Modules\Tools folder on my workstation.

Note: the name of the *.psm1 file should match that of the folder. It's possible to create a more enhanced module using a Module Manifest, but we don't need that for the purposes of this post. It's described further in the previously mentioned article.
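As a rough sketch, creating the module file might look like this (the path assumes the default per-user module location for Windows PowerShell):

```powershell
# Create the Modules\Tools folder and a matching Tools.psm1
$toolsFolder = Join-Path ([Environment]::GetFolderPath('MyDocuments')) 'WindowsPowerShell\Modules\Tools'
New-Item -Path $toolsFolder -ItemType Directory -Force | Out-Null

# The psm1 contains the same functions as Tools.ps1
@'
function Get-TimesResult {
    Param ([int]$a,[int]$b)
    Write-Output ($a * $b)
}
'@ | Set-Content -Path (Join-Path $toolsFolder 'Tools.psm1')
```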


Now we can use the *-Module PowerShell cmdlets to work with our content.

To observe the module we can use Get-Module:

Get-Module Tools -ListAvailable


To use the functions contained in the module we can use Import-Module:

Import-Module Tools

Get-TimesResult -a 6 -b 8



Note: since PowerShell v3, automatic cmdlet discovery and module loading have been supported. (You can find out more about it here.) Consequently, you don't actually need to use Import-Module to get access to the functions, as long as you place the module in the correct location. However, it is good practice to add the Import-Module line to your script, so that another user is aware of where you are getting the functionality from.
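The "correct location" means one of the folders listed in $env:PSModulePath, which you can inspect like so:

```powershell
# Folders PowerShell searches for modules; a Modules\<Name> folder under
# any of these is discovered automatically from v3 onwards
$env:PSModulePath -split [IO.Path]::PathSeparator
```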

Improving the PowerShell ISE Experience with ISESteroids 2

For a long time I’ve used the PowerShell ISE, built into Windows, for my PowerShell scripting. Most people tend to have a particular favourite editor for their coding, usually after trialling a few different ones. For pretty much everything else I’ve settled on Sublime Text, but for PowerShell I use the ISE since I really like the integration with the PowerShell console.

The ISE was introduced in PowerShell 2.0 and, to be honest, was pretty basic back then. It has improved significantly since then into version 4, but there are still areas that could be improved and features missing that you would like to see.

Earlier in the year I tried out ISESteroids 1.0 which started to plug a number of the gaps I found in the ISE. Recently I had chance to upgrade to ISESteroids 2 and it has improved even further.

For a quick preview of what is available check out the below video.

A few things in it I particularly like:

1) More distinct (yellow) highlighting of bracket matching or other sections


(single click on this double quoted string)


2) Block commenting (this was a real annoyance for me – there is a keyboard shortcut to do it in the standard ISE, but still fiddly)



After pressing the single button below:




3) ScriptMap which allows you to easily navigate your way through long scripts



4) Manage Version History


Clicking on Compare opens WinMerge and a view of what has changed between versions


5) Autoselection. Click repeatedly to select various code segments






6) Enhanced Debugging

Best explained in the following video

For a more in-depth look at some of the features, check out the below video with ISESteroids creator Dr Tobias Weltner and fellow PowerShell MVP Jeff Wouters.

Automating Disk Zeroing on VM Deletion

A requirement for a project I had was to zero the VMDK of all VM disks at the time of VM removal.


One consideration was to SSH into the host where the VM was located and use vmkfstools like the below on each vmdk to zero the disk.

vmkfstools -w /vmfs/volumes/<…>.vmdk

Looking for alternatives I found that the PowerCLI cmdlet Set-HardDisk has a ZeroOut parameter. Note the text from the help (version 5.8 R1):

Specifies that you want to fill the hard disk with zeros. This parameter is supported only if you are directly connected to an ESX/ESXi host. The ZeroOut functionality is experimental.

The points to note are:

  • You will need to connect PowerCLI directly to the ESXi host that the VM is registered on. So you will most likely first of all need to connect to vCenter to find out where the VM lives.
  • The functionality is ‘experimental’. A quick scan back through releases showed this had been the case for some time. From my observations it appeared to work fine; plenty of things in vSphere over the years have been labelled ‘experimental’ but have usually worked without issue.

So once you have identified where the VM is and connected to the ESXi host in question, it’s a case of simply looping through all of the disks and zeroing them (with a bit of logging thrown in) – note it will likely take a fair amount of time to zero each disk!

$VM = "VM01"
$HardDisks = Get-HardDisk -VM $VM

foreach ($HardDisk in $HardDisks){

    $HardDisk | Set-HardDisk -ZeroOut -Confirm:$false | Out-Null

    $Text = "Zeroed disk $($HardDisk.Filename) for VM $VM"
    $Text | Out-File -FilePath C:\log\zerodisk.log -Append
}

Calling PowerShell.exe -Command ScriptName and Parameters with Commas

Bit of an obscure one this, but I hit it recently and wasted some time on it so I thought it might be useful for someone, somewhere, someday.

If you need to call a PowerShell script via a command line style prompt (maybe in a scheduled task or an external system like vCenter Orchestrator) there are a number of different options.

I was troubleshooting a problem where an existing system was failing with a command along the lines of this:

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -Command C:\Scripts\TestComma.ps1 -input1 'banana,pear'

and would fail with the following error:

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe : C:\scripts\TestComma.ps1 : Cannot process argument transformation on parameter
At line:1 char:1
+ C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -Command C:\scripts\Te …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (C:\scripts\Test…n on parameter :String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError

'Input1'. Cannot convert value to type System.String.
At line:1 char:34
+ C:\scripts\TestComma.ps1 -input1 banana,pear
+ ~~~~~~~~~~~
+ CategoryInfo : InvalidData: (:) [TestComma.ps1], ParameterBindi
+ FullyQualifiedErrorId : ParameterArgumentTransformationError,TestComma.p


So it looked like it was having an issue with the string being supplied as the parameter ‘banana,pear’ even though there is normally no issue with this being a string. I eventually tracked it down to being a problem with the comma – with no comma there is no issue.

Note: This is only an issue when being called by powershell.exe. When used in a standard PowerShell console or script there is no issue with this text being a string:
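To illustrate, here's a stand-in for the TestComma.ps1 parameter handling (the real script isn't shown in the post):

```powershell
# Inside a normal PowerShell session a quoted string containing a comma
# binds cleanly to a [string] parameter; the failure above only occurs
# when powershell.exe -Command re-parses the command line
function Test-Comma {
    Param ([string]$Input1)
    "Received: $Input1"
}

Test-Comma -Input1 'banana,pear'
```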



There are a number of ways round this:

1) Run it from cmd.exe

Sacrilege I know, but the system I was working with was effectively calling it from cmd.exe, which didn't experience the issue.



2) Escape the comma

Escape the comma character like so

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -Command C:\scripts\TestComma.ps1 -input1 'banana`,pear'


3) Use the File parameter instead

The better solution in my opinion is to use the File parameter. I typically use this anyway rather than the Command parameter. It was introduced in PowerShell v2 and has been my preferred way of doing this kind of thing since then.

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -File C:\scripts\TestComma.ps1 -input1 'banana,pear'



Start-Transcript Now Available in the PowerShell ISE in PowerShell v5

*Warning. This article was written using the September 2014 PowerShell v5 Preview*


Prior to PowerShell v5 it was not possible to use Start-Transcript in the PowerShell ISE, it could only be used in the standard PowerShell console. You would receive the error:

Start-Transcript : This host does not support transcription.


(There were alternative workarounds to get round it.)

Now in PowerShell v5 it can be used natively:

Start-Transcript -Path C:\Test\Transcript.txt



Run your commands (in this example just one, Get-Service). Then Stop-Transcript


Now view the transcript file:

notepad C:\Test\Transcript.txt



Getting Zippy with PowerShell v5

*Warning. This article was written using the September 2014 PowerShell v5 Preview*



(OK, I was really looking for an excuse to use the below picture in a blog post)



One of the most popular and long-standing requests for PowerShell is native support for working with zip files. With PowerShell v5 we get two new cmdlets, Compress-Archive and Expand-Archive. Here's a couple of examples of how they work.


1) Create a Zip file

C:\Test contains a number of text files. We want to zip them up into one convenient file.



Compress-Archive -Path C:\Test\* -DestinationPath C:\Zip\Test.zip -CompressionLevel Optimal

and now we have the zip file:

Note: as of this release there are three compression levels (Optimal, Fastest and NoCompression), the default being Optimal.




2) Update a Zip file

Now we add an extra file to C:\Test and want to update the zip file with this new file



Compress-Archive -Path C:\Test\* -DestinationPath C:\Zip\Test.zip -Update

Here’s the new file, now contained in the zip file:



3) Expand a Zip file

Now we want to expand a zip file. Let’s use the one we just created and expand it to a different folder C:\Expand.

Expand-Archive -Path C:\Zip\Test.zip -DestinationPath C:\Expand

Here are the files:


All pretty straightforward, but it’s great to have this simple functionality finally native :-)
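As a self-contained sketch (the temp paths are illustrative, and this requires PowerShell v5 or later), the whole round trip looks like:

```powershell
# Create test files, zip them, then expand the archive to a second folder
$src = Join-Path ([IO.Path]::GetTempPath()) 'ZipDemoSrc'
$out = Join-Path ([IO.Path]::GetTempPath()) 'ZipDemoOut'
$zip = Join-Path ([IO.Path]::GetTempPath()) 'ZipDemo.zip'

New-Item -Path $src -ItemType Directory -Force | Out-Null
'Hello' | Set-Content -Path (Join-Path $src 'File1.txt')
'World' | Set-Content -Path (Join-Path $src 'File2.txt')

# Remove any leftovers from a previous run
Remove-Item $zip, $out -Recurse -Force -ErrorAction SilentlyContinue

Compress-Archive -Path (Join-Path $src '*') -DestinationPath $zip -CompressionLevel Optimal
Expand-Archive -Path $zip -DestinationPath $out

Get-ChildItem $out
```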


Setting Static Routes with PowerShell when connecting to a PPTP VPN

Sometimes as a consultant I have a need to connect to customer or client networks to carry out some of the work. This typically involves a myriad of different remote connection and VPN style systems. Some are better than others and while it’s possible to use different VMs to connect to them, that’s not always practical. Typically I only want traffic destined for the remote system(s) to go down the VPN, not all of my Internet traffic.

Many reasons for this, but one of the top ones is that it sends my Lync client used for internal communication into a frenzy of disconnecting / re-connecting to conversations if the VPN connection drops any time during the day. This leads to timed out messages and half the time wondering if the message got through, whether to send it again and generally a pretty frustrating experience.

One of the VPN connections I need to use is pretty basic and uses a PPTP connection created via the built-in wizard in Windows.


I hadn’t used one of these for a long time and thankfully a colleague pointed out to me the other day that by changing its configuration it was possible not to send all of your Internet traffic down it.

Clearing the below setting Use default gateway on remote network will stop all Internet destined traffic heading down that connection.



Then we simply need to set a static route for the subnet we want to connect to via the VPN and send it down that route. So it will be something like:

route add <remote network> mask <subnet mask> <VPN-assigned IP> metric 1

However, the IP I'm allocated from the VPN server (the <VPN-assigned IP> above) may change every time I connect to the VPN.


So I put together the below function which will grab the IP that has been allocated and use it in the route add command. Since I wanted to support downlevel OSs for people like me using Windows 7, I went with ipconfig to get this info rather than the newer networking cmdlets like Get-NetIPAddress. Consequently, I used this really handy tip on filtering ipconfig output.

Then all I need to do is run the following (note: make sure your PowerShell session has elevated privileges):

Set-VPNRoute -VPNNetwork 172.100.25 -RouteNetwork <network> -RouteMask <mask>

function Set-VPNRoute {
<#
.SYNOPSIS
Set a route for VPN traffic

.DESCRIPTION
Set a route for VPN traffic

.PARAMETER VPNNetwork
VPN Connected Network

.PARAMETER RouteNetwork
Target Route

.PARAMETER RouteMask
Target Mask

.EXAMPLE
PS> Set-VPNRoute -VPNNetwork 192.168.200 -RouteNetwork <network> -RouteMask <mask>
#>

    Param (
        [string]$VPNNetwork,
        [string]$RouteNetwork,
        [string]$RouteMask
    )

    try {
        # Find the line of ipconfig output containing the VPN-assigned IP
        $VPNIP = @(ipconfig) -like "*$VPNNetwork*"
        $VPNIP = $VPNIP[0].substring($VPNIP[0].length - 14, 14)

        # Route traffic for the target network via the VPN-assigned IP
        route add $RouteNetwork mask $RouteMask $VPNIP metric 1 | Out-Null
    }
    catch [Exception]{
        throw "Unable to set VPN Route"
    }
}
Automating vCAC Tenant Creation with vCO: Part 3 Install the vCAC plugin for vCO

In this series we will see how to automate the creation of a tenant in vCAC using vCO. There are multiple tasks to provision a tenant in vCAC, so even though it is an automation product itself, there’s no reason why you shouldn’t look at automating parts of it too. In part 3 we look at installing the vCAC plugin for vCO

1) Download the vCAC plugin: o11nplugin-vcac-6.0.1.vmoapp

2) Install the plugin. I'm installing this on a Windows-based vCO box. Ensure that the vCO Configuration service is started, since it is usually set to manual startup.

Navigate to the Configuration webpage, in my case https://localhost:8283/


and then Plugins


Enter credentials of a member of the vCO admins group. (If you haven’t set this up you might want to add an AD connection on the Authentication page)


and select the downloaded plugin, then Upload and install


Accept the License Agreement


Hopefully you get a nice green success


If so, you’ll get a note further down that you need to restart the vCO Server service


Get-Service VMwareOrchestrator | Restart-Service

After the restart, all is now OK


The built-in vCAC workflows are now available in the vCO client


3) Configure the plugin. Navigate to Configuration and run the Add a vCAC host workflow


Fill out the details of the default vCAC tenant


…and now we have a vCAC server to work with



Automating vCAC Tenant Creation with vCO: Part 1 AD SSL
Automating vCAC Tenant Creation with vCO: Part 2 AD Users, Groups and OUs
Automating vCAC Tenant Creation with vCO: Part 3 Install the vCAC plugin for vCO
Automating vCAC Tenant Creation with vCO: Part 4 Creating a Tenant
Automating vCAC Tenant Creation with vCO: Part 5 Creating an Identity Store
Automating vCAC Tenant Creation with vCO: Part 6 Adding Administrators
Automating vCAC Tenant Creation with vCO: Part 7 Creating a vCAC Catalog Item

I’ll be presenting some automation at the June 2014 South West VMUG


Those great guys down in the South West of England, @mpoore, @jeremybowman,  @virtualisedreal and @simoneady have kindly invited me down to their next VMUG to present about automation. So I will be talking about some of my experiences in automation projects from the last few years and particularly how to write your own code in a generic way so that it is portable across different projects and systems.

It looks like there is plenty of other good content lined up that day so I’d suggest you get down there too.





Using Git, Stash and Dropbox to Manage Your Own Code

Sometimes I’m asked how I manage my own (PowerShell) code, in terms of version control, backups, portability etc. In this presentation I demonstrated how my PowerShell code is typically broken down into functions and then placed into modules. This allows me to make very generic code for granular tasks, typically either to plug a gap missing from the out-of-the-box cmdlets or maybe stringing a few of them together. As a consultant this enables me to build up a toolkit of functions for particular scenarios gained over various different experiences and use them in a modular fashion where needed for each particular project. However, once these number in the hundreds how do you manage them effectively? I need them to:

  • be easily available depending on where I am working
  • be backed up
  • track changes via version control, useful even if you are not working in a team developing code together – mostly so I can remember how or why I changed something :-)

So I'm going to run you through the system I have found that works for me. It uses the following components:

  • A Dropbox account to sync the code between different machines, be available to download via a web browser and also store the code outside of my home lab. This means I can get access to my functions pretty much wherever I am and whether I am using my own or a customer machine.
  • A Linux VM in my homelab to run Git and Atlassian Stash for version control – $10 for a 10 user license (free to try out for 30 days)
  • Atlassian SourceTree. Free Git client for Mac or Windows

Stash

I discovered Stash via a previous customer and found it to be a very useful and easy-to-use add-on for managing Git repositories. It's possibly overkill for my current needs, but I have worked on shared code projects in the past and it could be useful to have an easy way to do this in the future.

If you are thinking "why not use GitHub?", there are reasons for this. While I share a lot of my code via this blog and possibly via GitHub in the future, there are some commercial and other reasons why I'm not able to share everything. GitHub has private repositories for this, but they start at $7 a month, so this approach with a home-hosted Stash does me fine for now.

Setting Up Stash

I have a 1 vCPU, 1GB Ubuntu VM, installed with the Minimal Server option. The first thing to do is install Git. The below command will install Git and any dependencies.

sudo apt-get install git-core

Check the versions of Git and Perl. The version of Git should be 1.7.6 or higher; the version of Perl should be 5.8.8 or higher. (Note that out of the box Red Hat / CentOS currently do not include a version of Git that supports Stash.)

git --version
perl --version

Check the version of Java and install if necessary. The version of Java should be 1.6.0 or higher. The below is the easiest way I found to do this.

java -version
sudo apt-get purge openjdk*
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java7-installer
java -version

Create an install folder, download the Stash installation files (obviously change the version number for the one you wish to download) and extract the download.

sudo mkdir /var/stash
sudo mkdir /var/stash/install
sudo mkdir /var/stash/home
cd /var/stash/install
sudo wget http://www.atlassian.com/software/stash/downloads/binary/atlassian-stash-2.11.4.tar.gz
sudo tar zxf atlassian-stash-2.11.4.tar.gz

Create a user account to run Stash under.

sudo /usr/sbin/useradd --create-home --home-dir /usr/local/stash --shell /bin/bash stash

Set STASH_HOME in setenv.sh to /var/stash/home.

sudo vi atlassian-stash-2.11.4/bin/setenv.sh

Change the ownership of the stash folder and everything below it to the stash account.

sudo chown -R stash /var/stash

Start Stash.


Navigate to the Setup Wizard at http://hostname:7990. Take the option for an internal database, enter your license key, then enter Administrator credentials and choose Go to Stash (I am not using the accompanying Jira product). This will take you to the login page where you can authenticate with the credentials you just created. You should now see the Stash Welcome Page.

Connecting the SourceTree Client to Stash

Download the SourceTree client for Mac or Windows and get it installed. Once installed, fire up the client. You may be prompted to install Git for your local client to issue Git commands; go with the embedded version if so. Say No to Mercurial (unless you wish to use that as well). Enter your information – if you have a GitHub account and wish to use repositories there, it can be useful to add the associated email account here. Accept the recommendation to use Putty. If you want to use an SSH key then enter that here; I'm not going to for this tutorial. Enter your credentials for the Stash website (and others if you wish to).

Once complete, the SourceTree client will open. They are pretty regular at providing updates for it, so you may be prompted to install further updates before first use.

Now we need to create a project and a repository. First of all, through the web interface to Stash, create a relevant project. (I liked the way it shortened this one to POW – reminds me of the old Batman series.) Now create a repository. Further down this page you will find instructions on how to upload existing code, which in my case is what I want to do. From the SourceTree client, open up a Git terminal and run those commands.

At this point, if you navigate back to your Stash website, you should see your files have been uploaded. You can view inside the code too via the web browser.

Now I need to configure the SourceTree client to be aware of this repository. Click the button for Clone / New and choose Add Working Copy. Enter the path to your working folder and click Add. It is now available in the SourceTree client.

So now when I either:

  • Edit code directly on my laptop
  • Or copy code edited elsewhere and pop it in the Dropbox folder

Following synchronisation with the Dropbox folder, the SourceTree client will show that files have changed and require committing. In the below example a couple of minor spelling mistakes have been corrected. SourceTree shows the file has been updated and also what has been changed since the previous version: the red line was removed, the green line was added.

Right-click the file and choose Commit. Enter an explanation for what was changed in this version; it also makes sense to select Push commits immediately to origin so that you create the commit and submit it in the same action.

Assuming there were no errors, SourceTree should display no files to commit, and navigating to the Log / History tab you will notice that a history of your changes starts to build up. Again, useful if you need to track back and see what changed and when.