Working with the vRealize Automation REST API via vRealize Orchestrator

As of vRealize Automation version 6.2.1 there are a few different approaches to automating elements of the product itself, as opposed to using it for the automation tasks it is designed to help you with. By this I mean configuring elements within vRA, some of which I have covered previously within this blog post series. That series focused on using the vRA plugin for vRealize Orchestrator. However, the plugin doesn’t cover everything that you might need to automate within the product. Things are also not helped by the fact that vRA itself is currently in a split-brain state, which makes some parts of it hard to automate.

The good news is that elements which belong to the vRA Appliance side of the split-brain and are not in the vRO plugin, may well be covered by the vRA REST API. This blogpost from Kris Thieler is a really useful guide to getting started with the vRA REST API.

Taking elements from that post, I have applied them for use in vRO, i.e. I want to be able to run workflows in vRO to use the vRA REST API.

Getting an Authentication Token

The Getting Started blogpost demonstrates that authenticating with the vRA REST API first requires generating an authentication token, which can then be used for all subsequent REST requests for up to 24 hours.

My previous experience with using REST within vRO had been straightforward cases of adding a REST Host via the Add a REST host configuration workflow and supplying a set of credentials at that point which would then be used for each request. This approach was obviously not going to work in this instance.

The following is the procedure I came up with to work with authentication tokens; more than happy for comments on this post for easier or better ways to do it :-)

First of all run the Add a REST host configuration workflow with the vRA appliance set as the target URL and set the authentication method to None.






The next step is to add a REST operation for the query that generates a token: a POST request to the URL /identity/api/tokens.
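Before wiring this into vRO, it helps to see the shape of the exchange. The following sketch uses illustrative values (not a live vRA response); the key point, per the Getting Started post, is that the token comes back in the response's id field:

```javascript
// Illustrative request body for POST /identity/api/tokens
// (sample values; your username/password/tenant will differ)
var requestBody = {
    username: "user@vsphere.local",
    password: "secret",
    tenant: "Tenant01"
};

// A successful response carries the token in its "id" field.
// The id below is a made-up placeholder, not a real vRA token.
var sampleResponse = '{"expires":"2015-05-16T13:51:55.456Z","id":"c2FtcGxlLXRva2Vu","tenant":"Tenant01"}';
var token = JSON.parse(sampleResponse).id;
```

Everything that follows is about producing that request body, sending it, and pulling the id back out of the response.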


This will create an operation which is viewable from the Inventory view:


Now we need to create our own Workflow to use based off of that REST operation. Run the Library workflow Generate a new workflow from a REST operation and select the REST operation just created:


I’ve named it Request-Token and am storing it in the Test vRO folder.


We need to modify this workflow to add an extra header required by the API. The Getting Started blogpost shows that we need an Accept header: Accept: application/json (in this previous post I demonstrate how to add headers). On the Scripting tab add the following code:

request.setHeader("Accept", "application/json");


Once that workflow is successfully completed, we can make use of it to generate a token. Create a new workflow Request-vRAToken which will take inputs of the info we need to generate a token (vRA username, password and Tenant name) and use the Request-Token workflow to send the request to generate it.


Set inputs for Request-vRAToken to be:

  • username – String
  • password – SecureString
  • tenant – String


Add a scriptable task to the schema, Create POST text, and set the inputs to be the parameters just created. This task will generate the text we need to send as part of the POST request.


Set an attribute output as:

  • postText – String


On the Scripting tab add the following code:

var postText = "{\"username\":\"" + username + "\",\"password\":\"" + password + "\",\"tenant\":\"" + tenant + "\"}";

System.log("PostText is: " + postText);

Note: once you are happy this is working, it would be worth removing the System.log line so that the password is not echoed into the logs.
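As an aside, since vRO's JavaScript engine provides the JSON object (the workflow already relies on JSON.parse for the response), the same body can be built with JSON.stringify rather than hand-escaped quotes, which is less error-prone. A minimal sketch with sample values (in the real workflow these come from the username, password and tenant inputs):

```javascript
// Same POST body built with JSON.stringify instead of manual escaping.
// Sample values only; the workflow supplies these from its input parameters.
var username = "tenantadmin01@vrademo.local";
var password = "P@ssword";
var tenant = "Tenant01";

var postText = JSON.stringify({ username: username, password: password, tenant: tenant });
// postText: {"username":"tenantadmin01@vrademo.local","password":"P@ssword","tenant":"Tenant01"}
```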


Close the scriptable task and add a Workflow element next in the schema, selecting the Request-Token workflow previously created.  Set the input as the postText attribute:


Set output attributes to match the standard REST output names:


Close the workflow settings and add a final scriptable task, Output Token. For inputs select contentAsString :


Create an output parameter token, which we will use to get the token out of the workflow:


On the Scripting tab add the following code to parse the JSON response from the vRA API and pick out the token:

var jsonResponse = JSON.parse(contentAsString);

var token = jsonResponse.id;

System.log("Token is: " + token);


Close the scriptable task and the schema will look like this:


Save and close the workflow. Then run it, supplying credentials and a tenant name:


All being well, we’ll get a successful run of the workflow and a generated token:

[2015-05-15 14:50:15.557] [I] PostText is: {"username":"[email protected]","password":"P@ssword","tenant":"Tenant01"}
[2015-05-15 14:50:15.609] [I] Request: DynamicWrapper (Instance) : [RESTRequest]-[class] — VALUE :
[2015-05-15 14:50:15.609] [I] Request URL: https://vraap01.vrademo.local/identity/api/tokens
[2015-05-15 14:50:16.030] [I] Response: DynamicWrapper (Instance) : [RESTResponse]-[class] — VALUE :
[2015-05-15 14:50:16.031] [I] Status code: 200
[2015-05-15 14:50:16.031] [I] Content as string: {"expires":"2015-05-16T13:51:55.456Z","id":"MTQzMTY5NzkxNTQ1NDowMGZiNWUyMmNlZjI2ZTI1MTAzYTp0ZW5hbnQ6VGVuYW50MDF1c2VybmFtZTp0ZW5hbnRhZG1pbjAxQHZyYWRlbW8ubG9jYWw6ODVmZDE4MGM2ZTkzZjBkOGRlMzk3MzhkNTQ0NWRlNTU2YjI0ZjFmZmI2OThlNmZjZjI2ZDExZThhNjI0MzY5YzBmMTUzY2Q4M2QwY2JhMjE0ZmRlYjYzNzJjZWEzNTY2YzAzNDFhZGJjOTdkMmI3ZGVmMTY0NjY1OGM2MjE4NmE=","tenant":"Tenant01"}
[2015-05-15 14:50:16.113] [I] Token is: MTQzMTY5NzkxNTQ1NDowMGZiNWUyMmNlZjI2ZTI1MTAzYTp0ZW5hbnQ6VGVuYW50MDF1c2VybmFtZTp0ZW5hbnRhZG1pbjAxQHZyYWRlbW8ubG9jYWw6ODVmZDE4MGM2ZTkzZjBkOGRlMzk3MzhkNTQ0NWRlNTU2YjI0ZjFmZmI2OThlNmZjZjI2ZDExZThhNjI0MzY5YzBmMTUzY2Q4M2QwY2JhMjE0ZmRlYjYzNzJjZWEzNTY2YzAzNDFhZGJjOTdkMmI3ZGVmMTY0NjY1OGM2MjE4NmE=

Using the Authentication Token in other API Requests

Now that we have a mechanism for generating a token, let’s look at an example using the token. The vRA API details a GET request for retrieving all custom groups and SSO groups that correspond to specified search criteria. For a simple example we can run a GET request against the URL /identity/api/tenants/{tenantId}/groups, using tenantId as a parameter.
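The operation URL is just a template with the tenant name substituted in; a quick sketch (Tenant01 is a sample value):

```javascript
// Building the groups request URL from the tenantId parameter.
// "Tenant01" is a sample tenant name.
var tenantId = "Tenant01";
var url = "/identity/api/tenants/" + encodeURIComponent(tenantId) + "/groups";
// url: /identity/api/tenants/Tenant01/groups
```

When the REST operation is created in vRO, the {tenantId} placeholder in the operation's URL template does this substitution for us.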

Firstly we need a REST operation for that URL. Run the Add a REST operation workflow to create an operation Get-Groups:


We now have an additional operation available:


We need a workflow for it, so run the Generate a new workflow from a REST operation workflow:


Give it a name Get-TenantGroups and again put it in the Test folder:


We need to modify this workflow to use the same Accept header added previously and also the authentication token. Add an extra input:

  • token – String



Add the token parameter as an input to the existing scriptable task:


Modify that scriptable task and set the contentType to application/json:

request.contentType = "application/json";

Then add the following code for the Accept and Authorization headers:

var authorizationToken = "Bearer " + token;

request.setHeader("Accept", "application/json");
request.setHeader("Authorization", authorizationToken);


Save and close the workflow changes. Now we can create a workflow Get-vRATenantGroups to put all of the component pieces in place:


Create inputs for username, password and tenant – for future use outside of this example, you might want to think about storing these as vRO Configuration Items instead.

  • username – String
  • password – SecureString
  • tenant – String


In the schema add the Request-vRAToken workflow. Set inputs to match the input parameters:


Set the token output to be an attribute token in this workflow:


Close the tab. Add the Get-TenantGroups workflow to the schema. Set the inputs to be the tenant parameter and the token attribute:


Set the outputs to be the standard REST attribute outputs:


Close the tab. Finally, add a scriptable task to parse the results of the JSON response. For this example we will just output the names of the groups. For the inputs select contentAsString:


On the Scripting tab add the following code:

var jsonResponse = JSON.parse(contentAsString);

var groups = jsonResponse.content;

for each (group in groups){
    var name = group.name;
    System.log("Name is: " + name);
}


Save and close the workflow. Then run it with suitable parameters:


A successful workflow run will see something similar output to the logs:

[2015-05-15 16:35:54.485] [I] Name is: ExternalIDPUsers
[2015-05-15 16:35:54.485] [I] Name is: ActAsUsers
[2015-05-15 16:35:54.485] [I] Name is: SolutionUsers
[2015-05-15 16:35:54.486] [I] Name is: TenantAdmins01
[2015-05-15 16:35:54.486] [I] Name is: Users
[2015-05-15 16:35:54.486] [I] Name is: Tenant01_Approvers
[2015-05-15 16:35:54.486] [I] Name is: Administrators
[2015-05-15 16:35:54.486] [I] Name is: TenantUsers01
[2015-05-15 16:35:54.486] [I] Name is: TestCustom01
[2015-05-15 16:35:54.486] [I] Name is: TestCustom03
[2015-05-15 16:35:54.486] [I] Name is: TestCustom02
[2015-05-15 16:35:54.486] [I] Name is: TenantInfraAdmins01


Using the vRO 2.0 Plugin for Active Directory to Work with Multiple Domains

When working with vRealize Orchestrator and Active Directory it has been possible for a long time to use the built-in Active Directory plugin for many tasks. One of the drawbacks with the various iterations of the 1.0.x version of the plugin, however, was the lack of support for multiple domains and multiple domain controllers. This was naturally quite restrictive in environments with more than a single domain, which is pretty common for many reasons, such as distributed management, mergers & takeovers and poor planning 😉

These issues are addressed in version 2.0 of the plugin, which also supports the latest release of vRO, 6.0.1.

Getting Started

Version 2.0 of the AD plugin did not ship as part of the vRO 6.0.1 release, so it needs to be downloaded and installed as an upgrade. In vRO 6.0.1 the version of the AD plugin is:




So, firstly download the 2.0 version of the AD plugin and copy the file to somewhere accessible from the vRO Configuration Website. From within the Configuration Website navigate to the Plug-ins page and the Install new plug-in section. Select the downloaded plugin file and choose Upload and install.


Accept the License Agreement


All being well you will be informed that the existing plugin was overwritten and the plugin will be installed at next server startup.


Restart the vRO service to complete the installation.


Once complete, the version of the plugin should show as 2.0:



Login to vRO with the Client and navigate to Library / Microsoft / Active Directory / Configuration. If you used previous versions of the plugin, you will notice some changes in this folder:

Version 1.0.x




Run the Add an Active Directory server workflow and configure it for a domain controller in the first domain.



Use a shared session and ideally a dedicated service account with permissions in that AD domain to do what it needs to do:


If everything supplied is correct, then you should receive a successful workflow run:


and then be able to browse through the domain on the Inventory tab:


To add a domain controller from a second domain, run the Add an Active Directory server workflow again. I’m using a DC from a child domain:


Again, with a successful workflow run you should see the green tick:


and on the Inventory tab it is now possible to browse multiple domains! (Woo hoo – you should be saying at this point, it’s quite a big deal if you’ve been waiting for this functionality :-) )


Use Case

Consider an example where you need to create an Organizational Unit in both AD domains. Prior to version 2 of the AD plugin you would have needed to either use multiple vRO servers or likely use some PowerShell scripting instead.

Create a top level workflow New-ADOUinMultipleDomains workflow:


On the Inputs tab create an input ouName:

On the Schema tab drag in the Create an organizational unit Library workflow:


On the In tab of the Create an organizational unit Library workflow ouName should be automatically populated with the Input parameter of the same name; if not, make it so:


For ouContainer create an Input Parameter of the workflow parentDomainContainer :




On the Out tab set newOU to be an attribute parentDomainOU:




Repeat the above process with an extra workflow item on the schema for the child domain using Input parameter childDomainContainer and attribute childDomainOU.




Update the Presentation for the Domain Container inputs to provide more friendly text when the workflow runs:


So now our top-level workflow looks like this for Inputs:



and the schema looks like this:


Save and close the workflow. Now run the workflow and populate the fields with a name for the new OU and locations in the parent and child domains to create the OUs in. Note that you are able to browse through both domains, similar to the Inventory view – yay :-) :





We are ready to roll, so hit Submit. All being well we will have a successful workflow run and OUs named Multiple created in both domains in the correct locations.




 Final thoughts

When talking with people about vRO I often caution them that just because there is a VMware supplied plugin or one from a third-party, it does not necessarily mean that it will do everything that you need it to do. The AD plugin was a case in point, so the 2.0 version is a welcome and long awaited improvement and reduces the need to fall back to using some form of scripting to achieve AD automation in vRO.

vRO: Missing Line Breaks in SOAP Request

While working in vRealize Orchestrator with an external SOAP based system I was having issues with line breaks being removed from text sent across as part of a reasonably large SOAP request containing multiple items.

Say we have the following text strings and want to pass them into the SOAP request with line breaks in-between each one:

text1 = 'This is text1';
text2 = 'This is text2';
text3 = 'This is text3';

textToSend = '\n' + text1 + '\n' + text2 + '\n' + text3;

Place that code into a scriptable task in a workflow, output textToSend to the vRO SystemLog and you will observe the text with line breaks in them, placing each one onto its own line:


However, when textToSend is sent through to the SOAP request, the line breaks have been removed and the text appears in the interface all on one line, displaying it like so:


Turns out in this instance the SOAP request would support HTML tags for the text, so using ‘<br />’ instead of ‘\n’ would give the line break.

text1 = 'This is text1';
text2 = 'This is text2';
text3 = 'This is text3';

textToSend = '<br />' + text1 + '<br />' + text2 + '<br />' + text3;

The SystemLog now looks like this:


However, we don’t really care what it looks like in there, the important thing is how it translates through in the SOAP request. It is now displayed as desired:


This also means that any HTML formatting tag could potentially be used if say the text needed to be made Bold or a different size.



vRO, an External SQL Database, and the case of the Missing Plugins

After setting up a fresh deployment of the vRO appliance and configuring it to use an external SQL database I noticed that many of the default plugins appeared to be missing in the Workflow library folder:

(there should be a lot more than listed here)


Logging into the vRO configuration page showed that the below list of plugins (and more going off the screen) appeared to exist and be installed correctly.



Having mostly worked with Windows-based vRO servers before and not seen this issue, I got a few clues from this blogpost and this communities post, which suggest it is a bug relating to configuring vRO to work with a different database.

The workaround is to navigate to the Troubleshooting section of the configuration page and select Reset current version


All being well you will receive the below green message:


I then restarted the vRO appliance, logged back in with the vRO client and lo and behold all of the plugins were then present.


I checked this against a default deployment of the vRO appliance with the embedded database and the issue is not present.

PowerCLITools Community Module: Now on GitHub

Over the last few years I have built up a number of functions to use alongside the out of the box functionality in PowerCLI. I’ve posted some of the content before on this blog, but have at last got round to publishing all of the content that I am able to share, in a module available on GitHub – I’ve named it the PowerCLITools Community Module in the hope that some others might want to contribute content to it or improve what I have already put together.


This took a fair amount of effort since it is not possible for me to share everything that I have as part of my locally stored version of this toolkit. Some of it was developed by others I was working on projects with (who are not necessarily so keen to share certain parts of their work) and some can’t be shared for commercial reasons. However, I found some time recently to split out everything that could be shared into a new module and also updated some of the code, typically to add some nice features from PowerShell v3 and later which weren’t available when a lot of the code was developed during PowerShell v2 days.

Since the content has been developed over a few years, consistency and standardisation of approach may not be 100% there. A quick look back over them showed some looking a bit dated. I have spent a bit of time tidying them up, but part of the reason for sharing them was to take feedback and some prompting on where they could be improved. If I left them until I thought they were just right, I’d probably never end up publishing them. So your feedback is the impetus I need to go and improve them :-)

A lot of the functions are there to fill in gaps in cmdlet coverage with PowerCLI, and there are a few which I made more for convenience, where I have bundled together a few existing cmdlets into one function. These don’t particularly add a lot of value, but maybe demonstrate how you can tighten up your scripts a bit.


Ensure that VMware PowerCLI is installed. Functions have been tested against v5.8 R1.


1) Download all files comprising the PowerCLITools module. Ensure the files are unblocked and unzip them.
2) Create a folder for the module in your module folder path, e.g. C:\Users\username\Documents\WindowsPowerShell\Modules\PowerCLITools
3) Place the module files in the above folder

So it should look something like this:


The below command will make all of the functions in the module available

Import-Module PowerCLITools

To see a list of available functions:

Get-Command -Module PowerCLITools


Nested Modules

You will note that each function is itself a nested module of the PowerCLITools module. In this blog post I describe why I make my modules like this.

VI Properties

If you take a look inside the PowerCLITools.Initialise.ps1 file you’ll notice a number of VI Properties. Some of these are required by some of the functions in the module and some are just there for my convenience, to make using my PowerCLI session simpler. You can add and remove VI Properties according to your own personal preference, but watch out that some are actually needed. You can find out more about VI Properties here.


I really hope people find these functions useful. I have a number of ideas on where some can be improved, but please provide your own feedback as it’ll be the nudge I need to actually go and make the changes :-)

Get-Task: ID Parameter is Case Sensitive

There aren’t many occasions when you trip up in PowerShell because of something being case sensitive; it generally doesn’t happen since most things aren’t. I was working with the PowerCLI cmdlet Get-Task, and in particular the ID parameter, to do something like:

Get-Task -Id 'task-task-2035'

I had originally found the ID via:

Get-Task | Format-Table Name,ID -AutoSize

However, I received the error that no tasks with that ID were found:

Get-Task : 24/02/2015 20:51:57 Get-Task The identifier task-task-2035 resulted in no objects.
At line:1 char:1
+ Get-Task -Id task-task-2035
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (:) [Get-Task], VimException
+ FullyQualifiedErrorId : Client20_OutputTracker_ReportNotFoundLocators_LocatorNotProduced,VMware.VimAutomation.ViCore.Cmdlets.Commands.GetTask

Turned out that making the task ID match the exact case worked:

Get-Task -Id 'Task-task-2035'

Apparently the IDs are case sensitive by design :-)

One to watch out for anyway…..

PowerCLI is now a Module!

We’ve been waiting for this a long time, but with the 6.0 release PowerCLI is now available as a module. Microsoft changed the way it and third parties should deliver PowerShell functionality back in PowerShell version 2 by offering modules. Previously, in PowerShell version 1, additional functionality was available via snap-ins.

It’s not fully there yet, but some of the functionality is now available in a module. 6.0 will be a hybrid release, with the rest to follow later.

Notice how the Core functionality is in both lists since this is a hybrid release.

Get-Module *vmware* -Listavailable


Get-PSSnapin *vmware* -Registered


I believe there was significant effort in making this leap, so many thanks to Mr Renouf and his team :-)

Issue with Nested ESXi and Multiple VMkernel Ports

While working with Nested ESXi in my lab, I had an issue where I could communicate with the IP address on vmk0, but after adding multiple additional VMkernel Ports could not communicate with any of the additional IP addresses. It’s a simple network for testing, everything on the same subnet and no VLANs involved.

I hadn’t done too much reading on the subject before, other than knowing I needed to implement Promiscuous Mode for the Port Group on the physical ESXi servers. It seemed strange that I could communicate with one of the addresses, but not the rest. I tracked down the following posts, but both suggested that only Promiscuous Mode need be enabled.

I was running a Distributed Switch on the physical ESXi servers, so I tested moving one of the VMkernel ports to a Standard Switch with Promiscuous Mode enabled on the Port Group. It worked fine there, so I was naturally curious why.

This communities posting showed that Forged Transmits also needed to be enabled. The difference between the Standard and Distributed switches is that Forged Transmits is Accepted by default on a Standard switch


and Rejected by default on a Distributed switch


hence my experience above.

For more information check out these two posts from William Lam and Chris Wahl who are about two years ahead of me on this 😉


London VMUG January 2015

The first London VMUG of 2015 is almost upon us and as usual it looks like a great line-up of activities. My employer Xtravirt is sponsoring the labs and has a tech preview of some software that you may be interested to check out. Plus, one of my colleagues, Michael Poore, will be talking about a real-world automation project.


Make sure you register and get along to the event.

Rooms: Capital A / Capital B / Central Room

1000 – 1015  Welcome
1015 – 1100  Frank Denneman, PernixData & James Leavers, Cloudhelix – FVP Software in a real-world environment
1100 – 1145  vFactor Lightning Talks – Philip Coakes, Alec Dunn, Dave Simpson, Gareth Edwards, Chris Porter
1145 – 1215  Break in Thames Suite
1215 – 1300  Capital A: Robbie Jerrom, VMware – What is Docker and Where Does VMware Fit? | Capital B: VMware GSS | Central Room: Xtravirt Lab – SONAR Tech Preview, an easy-to-use SaaS service providing on-demand vSphere automated analytics and reporting
1300 – 1400  Lunch
1400 – 1450  Capital A: Simplivity – Stuart Gilks, Making Sense of Converged Infrastructure | Capital B: Unitrends – Ian Jones, Datacentre Failover
1500 – 1550  Capital A: Phil Monk, VMware – Bringing SDDC to Life: A Real World Deployment, with Michael Poore | Capital B: Andy Jenkins, VMware – Cloud Native @ VMware: Give your developers & ops teams everything they want without losing control | Central Room: Xtravirt Lab – SONAR Tech Preview
1550 – 1600  Break in Thames Suite
1600 – 1650  Capital A: VMware GSS | Capital B: Dave Hill, VMware – 5 Starting Points for Cloud Adoption | Central Room: Xtravirt Lab – SONAR Tech Preview
1700 – 1715  Close
1715         vBeers – Pavilion End, sponsored by 10ZIG

How To Make Use Of Functions in PowerShell

Over the last few weeks I’ve had a number of comments on posts essentially asking the same question: “How do I use the functions that you publish on your blog?”. So I thought it worth making a post to refer people to, rather than trying to respond in kind to each comment. There are a number of ways it can be done depending on your requirements and they are listed below.

First of all, let’s create a simple function to use for testing:

function Get-TimesResult {

    Param ([int]$a,[int]$b)

    $c = $a * $b

    Write-Output $c
}

1) Paste Into Existing PowerShell Session

If you are working interactively in the console then the function can be copy / pasted into that session and is then available for the duration of that session. I find this easier to do via the PowerShell ISE than the standard console.

Copy the function into the script pane:


Click the Green Run Script button or hit F5 and the code will appear in the console pane:


The function is now available for use and if using the ISE will appear interactively when you start typing the name:




2) PowerShell Profile

If the function is something that you wish to use regularly in your interactive PowerShell sessions then you can place the function in your PowerShell Profile and it will be available every time you open your PowerShell console.

If you are unsure what a PowerShell profile is or how to use one, there is some good info here. A quick way to create one is:

New-Item -Path $profile -ItemType File -Force

Once you have created a PowerShell profile, place the function in the profile and save and close. Now every time you open your PowerShell console the function will be available.


3) Directly In A Script

If you wish to use the function in a script, place the function in the script above the sections where you need to use it. Typically this will be towards the top. The plus side of doing it this way is everything is contained in one file, a negative is that if you have a number of functions then readability of the script is reduced since there may be a long way to scroll down before anything of significance starts to happen.


4) Called From Another Script

One method I have seen quite often in the wild (and I’m not a particular fan of it; point 5 is a much better approach) is to store all regularly used functions in a script file and dot source the functions script file in the script where you need to use one or more of the functions.

Functions script file Tools.ps1:


Get-Results script file calling Tools.ps1:

Note the dot and a space before the reference to the Tools.ps1 file

. C:\Users\jmedd\Documents\WindowsPowerShell\Scratch\Tools.ps1

Get-TimesResult -a 6 -b 8



5) Stored in a Module

Using a PowerShell module is a more advanced and significantly more structured and powerful method of achieving what was done in 4). If you haven’t used PowerShell modules before I wrote an introduction to PowerShell modules a while back which you can find here.

Essentially they are a method to package up your reusable functions and make them available in a manner similar to how other teams in Microsoft and third-parties produce suites of PowerShell cmdlets for consumption.

For this example I have created a Tools module to use, which essentially is the same content as the Tools.ps1 file, but stored in a *.psm1 file (Tools.psm1) in the Modules\Tools folder on my workstation.

Note: the name of the *.psm1 file should match that of the folder. It’s possible to create a more enhanced module than this by using a Module Manifest, but we don’t need that for the purposes of this post. It’s described further in the previously mentioned article.
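For this example, the on-disk layout would look something like this (assuming the default per-user module path seen earlier):

```
C:\Users\jmedd\Documents\WindowsPowerShell\Modules\
└── Tools\
    └── Tools.psm1
```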


Now we can use the *-Module PowerShell cmdlets to work with our content.

To observe the module we can use Get-Module:

Get-Module Tools -ListAvailable


To use the functions contained in the module we can use Import-Module:

Import-Module Tools

Get-TimesResult -a 6 -b 8



Note: since PowerShell v3, automatic cmdlet discovery and module loading has been supported (you can find out more about it here). Consequently, you don’t actually need to use Import-Module to get access to the functions, as long as you place the module in the correct location. However, it is good practice to add the Import-Module line to your script, so that another user is aware of where you are getting the functionality from.