Category Archives: vmware

Importing a vRO Package via the API – tagImportMode

I like the new Swagger UI for the vRO API; it makes it really easy to use:


While using it to figure out some stuff around importing a package, I hit an issue with the tagImportMode parameter:


Depending on which option was selected, the documentation listed the following additions to the URL:




However, none of these choices seemed to work; each just resulted in a 400 (Bad Request). Some trial and error followed with different possible combinations, but eventually I found two of them documented in the Developing a Web Services Client for VMware vCenter Orchestrator guide for vCO 5.5.1 and so was able to guess the third.
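For reference, the query string itself is simple to assemble. Here's a minimal Python sketch, assuming the usual /vco/api/packages endpoint on port 8281, with a placeholder where one of the documented tagImportMode values would go:

```python
from urllib.parse import urlencode

# Minimal sketch: build the vRO package-import URL with the tagImportMode
# query parameter. "VALUE_FROM_GUIDE" is a placeholder -- substitute one of
# the values documented in the vCO 5.5.1 Web Services Client guide.
def build_import_url(vro_host, tag_import_mode):
    base = "https://{0}:8281/vco/api/packages".format(vro_host)
    return "{0}?{1}".format(base, urlencode({"tagImportMode": tag_import_mode}))

url = build_import_url("vro.fqdn", "VALUE_FROM_GUIDE")
```

The package file itself would then be POSTed to that URL as multipart form data.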




PowervRA 1.2.2 with Tested Support for vRA 6.2.4

One of the things we did for the 1.2.2 release of PowervRA was to test all of the functions against a vRA 6.2.4 deployment. Now that we have created Pester tests for all of the functions, it is quite straightforward for us to test against different vRA versions.

While we had initially targeted vRA 7+ because of the better API support, we know that currently the majority of installations out there are 6.x.x. So we are happy to confirm that 58 of the functions work fine with 6.2.4, which was a slightly higher figure than I was expecting.

All of the functions which will not work pre-v7 have been updated to include an API version check and will exit with a message to that effect if you try to use them with 6.x.x, e.g.

Get-vRAContentPackage is not supported with vRA API version 6.2
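As an aside, the version-gating logic is straightforward. Here's a rough Python sketch of the pattern (an illustration only, not PowervRA's actual implementation):

```python
# Sketch of the version-gating pattern described above (not PowervRA's
# actual code): compare the connected API version against the minimum the
# function supports and bail out early with a message if it is too old.
def check_api_version(function_name, connected_version, minimum_version="7.0"):
    def as_tuple(version):
        return tuple(int(part) for part in version.split("."))
    if as_tuple(connected_version) < as_tuple(minimum_version):
        return "{0} is not supported with vRA API version {1}".format(
            function_name, connected_version)
    return None  # supported; carry on with the API call

message = check_api_version("Get-vRAContentPackage", "6.2")
```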



Using Pester to Automate the Testing of PowervRA

Learning Pester has been on my list to get done this year and while working on PowervRA I finally had a real project that could make significant use of it. Being able to automate the testing of each PowerShell function means that we can quickly test the impact of any changes to a function. Also, it means that we can test the whole module full of functions against new (and potentially old) versions of vRA.

There is a very useful introduction to Pester on the Hey Scripting Guy site and that is what I used to get started with it.

So after we released the first version of PowervRA I set about creating a test for each function in the module – and here is where I paid for my first mistake, although to be fair I knew I was making it during the initial development of PowervRA. With 70+ functions in the module at that time, I needed to write a test for each of them. So after the initial interest of learning how Pester works wore off, I was left with the boring task of writing all of the tests.

What (I knew) we should have done was write a Pester test for each function during (or before) the development of that function; that way it would not have seemed like such a laborious task. So going forward that's what we are doing each time we create a new function.

So what does a test look like? Well here is one for Reservation Policies:

You should see that each set of tests is grouped in a Describe section. Each test starts with the It keyword; then typically we do something and check a property of the object returned afterwards. The Should keyword enables us to specify something to check the result against. As you can see, Pester has been designed so that the tests read quite naturally.

We then follow a pattern of New-xxx, Get-xxx, Set-xxx, Remove-xxx, which all being well leaves us with a clean environment after the tests.
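The Pester tests themselves are PowerShell, but the New / Get / Set / Remove lifecycle is language-neutral. Here's an illustrative Python sketch of the same pattern against a stand-in in-memory client (everything here is made up for illustration):

```python
# Illustration of the New/Get/Set/Remove test lifecycle described above,
# using a stand-in in-memory "client" rather than a real vRA instance.
class FakeClient:
    def __init__(self):
        self.items = {}
    def new(self, name, **props):
        self.items[name] = dict(props)
    def get(self, name):
        return self.items.get(name)
    def set(self, name, **props):
        self.items[name].update(props)
    def remove(self, name):
        del self.items[name]

client = FakeClient()
client.new("ReservationPolicy01", description="Test policy")          # New-xxx
assert client.get("ReservationPolicy01")["description"] == "Test policy"  # Get-xxx
client.set("ReservationPolicy01", description="Updated policy")       # Set-xxx
assert client.get("ReservationPolicy01")["description"] == "Updated policy"
client.remove("ReservationPolicy01")                                  # Remove-xxx
assert client.get("ReservationPolicy01") is None  # clean environment afterwards
```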

For these tests we want to check each function against a real-life instance of vRA, so we need some values to work with. I'm not sure if this is the best way to do it, but for the time being we've abstracted the data out of the test files and into a JSON file of variables. This means that if we want to run the same tests against a different instance of vRA, we just need to change some of the values in that file. (There is a way to carry out unit testing in Pester using mocking, which we may visit at some point.)
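As a rough illustration of the variables-file idea (file name and keys here are made up, not the ones PowervRA's tests actually use):

```python
import json
import os
import tempfile

# Sketch of the approach described above: test data lives in a JSON file
# of variables, so pointing the same tests at another vRA instance only
# means editing that file. File name and keys are hypothetical.
variables = {
    "Server": "vra01.fqdn",
    "Tenant": "Tenant01",
    "ReservationPolicyName": "ReservationPolicy01",
}
path = os.path.join(tempfile.mkdtemp(), "Variables.json")
with open(path, "w") as handle:
    json.dump(variables, handle)

# The tests then load the file and reference loaded["Server"], etc.
with open(path) as handle:
    loaded = json.load(handle)
```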

An example of how we can use them is as follows. We can fire the tests against a vRA 7.0 instance and get the following results:


By changing some of the variables in the JSON file, we can then fire the same tests against a vRA 7.0.1 instance:


and so we can tell with a good degree of confidence that nothing in PowervRA is broken between the two versions. As you can see, we can run 81 tests in 60 – 75 seconds, which is pretty cool 🙂

Craig and I have discussed that we are only really scratching the surface with the tests so far and we could probably take someone onto the project who is solely dedicated to the testing (If you are interested, let me know 🙂  ). For example, for the time being we are only checking one property per New-vRAxxxx thing which gets created, ideally we should really test every property. For now though, what we have got so far is a big step forward and I’m looking forward to learning more about Pester.

If you want to check out what we have done with the tests you can find them here.

Import a Package from a Folder in vRO 7.0.1

vRO 7.0.1 in Design mode contains a new toolbar button in Packages: Import package from folder:


Previously it was possible to export a vRO package to either a zip file or directly to a folder, but only import back from a zip file.

For example, take the following package, which contains 3 workflows and 1 action:

Exported to a folder test, we get the following:



Now in a clean vRO server, I currently do not have those workflows:


If I select the Import package from folder button, I can browse to the folder containing the previously exported package:


Then the process to import the package is the same as in the previous vRO versions when importing from a zip file:



and now I can use the workflows and action:




Not only is this very handy, it is potentially significant, because it may make it slightly easier to integrate vRO workflow development with source control systems.

vRO requestCatalogItem from vRA Action – Available Properties

The vRO Action requestCatalogItem in the com.vmware.library.vcaccafe.request folder can be used to programmatically request an item from a vRA Catalog.


One of the inputs is a properties object which permits you to dynamically change settings configured within the Catalog Item you are deploying from. So if, say, the Catalog Item maps to a Blueprint configured with 1 vCPU, you could change this at request time to 2 vCPUs – which might mean you need to maintain fewer Catalog Items.

It's possible that it exists already, but I haven't been able to find the entire list of available properties documented anywhere. Many of those starting VirtualMachine. or VMware. can be found in the vRA Custom Properties guide, so I'm not going to list them all here. However, not every available property in those ranges actually appears in that guide.

So I have decided to maintain a list here garnered from various sources, including a few common ones from the Custom Properties guide. If you know of any others not in the Custom Properties guide, then please leave a comment and I’ll add them:
provider-blueprintId: vRA Blueprint
provider-ProvisioningGroupId: Business Group

provider-VirtualMachine.NetworkX.NetworkProfileName: Network profile for NIC X
provider-VirtualMachine.NetworkX.Name: Network name (portgroup) for NIC X


provider-VirtualMachine.DiskX.StorageReservationPolicy: vRA Storage Reservation Policy
provider-VirtualMachine.DiskX.StoragePolicy.FriendlyName: vCenter Storage Policy
provider-VirtualMachine.DiskX.StoragePolicy.vCenterStoragePolicy: vCenter Storage Policy


provider-VMware.VirtualCenter.Folder: Virtual Center folder
provider-Cafe.Shim.VirtualMachine.Description: Description
provider-Cafe.Shim.VirtualMachine.AssignToUser: vRA Machine owner
provider-Cafe.Shim.VirtualMachine.NumberofInstances: Number of VMs to deploy
provider-__reservationPolicyID: vRA Reservation Policy
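In vRO the properties are passed as a Properties object; purely to illustrate the key / value shape, here is a Python sketch with made-up values (the CPU key is my assumption, based on the standard VirtualMachine.CPU.Count custom property, rather than taken from the list above):

```python
# Hypothetical example of the property key/value pairs passed to
# requestCatalogItem -- e.g. bumping the blueprint's CPU count at request
# time. Keys follow the provider- pattern above; all values are made up,
# and provider-VirtualMachine.CPU.Count is an assumed key.
request_properties = {
    "provider-VirtualMachine.CPU.Count": "2",
    "provider-VirtualMachine.Network0.Name": "VM Network",
    "provider-Cafe.Shim.VirtualMachine.Description": "Requested via vRO",
    "provider-Cafe.Shim.VirtualMachine.NumberofInstances": "1",
}
```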


Create a vRA Tenant and set Directory and Administrator Configuration with PowervRA

One of the reasons behind creating PowervRA was that, as a consultant, I often need to quickly spin up vRA Tenants and / or components within those Tenants to facilitate development or testing of other things I am automating. PowervRA contains three functions which, joined together, make a basic vRA Tenant available for use: New-vRATenant, New-vRATenantDirectory and Add-vRAPrincipalToTenantRole.

The following code example demonstrates how to use these in conjunction with each other to make a vRA Tenant (first make sure you have generated an API token with Connect-vRAServer, using an account that has permission to create a vRA Tenant):

Note that since New-vRATenantDirectory has a lot of parameters, I have taken advantage of the ability to instead provide the necessary JSON text directly to it.

The result is a fresh vRA Tenant with a Directory configured and admin accounts assigned to both the Tenant Admins and Infrastructure Admins roles:




Find the vRO Workflow ID for an Advanced Service Blueprint with PowervRA

A colleague asked me the other day about how it might be possible to find out which vRO workflow was mapped to an Advanced Service Blueprint (or XaaS Blueprint) in vRA. If you look in the vRA GUI after a Service Blueprint has been created you can’t see which vRO workflow is mapped.

During the creation of the Service Blueprint there is a Workflow tab to select the vRO Workflow:



However, once it has been created, there is no longer a Workflow tab, so you can’t see which vRO workflow is used:

By using PowervRA, though, we can find this information. The object returned by Get-vRAServiceBlueprint contains a WorkflowId property:



Update: 15/03/2017

Instead of the below long code section to search vRO for a workflow ID, you can use the Get-vROWorkflow function from the sister tool PowervRO.


We can now take that WorkflowId and find the corresponding workflow in vRO. Unless you have memorised all of the workflow IDs, you can issue a REST request to vRO to find out more. The following example uses PowerShell to query the vRO REST API for the WorkflowId above (note that we have to deal with self-signed certificates):
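For readers not using PowerShell, here is a rough Python equivalent of the same lookup. The /vco/api/workflows/{id} path and port 8281 are the usual vRO defaults (assumed here), the workflow ID is made up, and the unverified SSL context is what handles the self-signed certificate:

```python
import ssl
import urllib.request

# Sketch (not the PowerShell from the post): query the vRO REST API for a
# workflow by ID. Path and port are the usual vRO defaults; the ID below
# is a made-up example.
workflow_id = "e699683f-a1b2-c3d4-e5f6-123456789abc"
url = "https://vro.fqdn:8281/vco/api/workflows/{0}".format(workflow_id)

context = ssl._create_unverified_context()  # skip self-signed cert validation
request = urllib.request.Request(url, headers={"Accept": "application/json"})
# response = urllib.request.urlopen(request, context=context)  # needs a live vRO
```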

If we look at the data stored in the Workflows variable, we can see the name of the workflow in vRO (OK in this example it’s the same name as the Service Blueprint, but it might well not be in another example):


Also, if you look at the first result you will see the path in vRO to track down the workflow:

i.e., it can be found in the top-level folder named Test:



Automate vRealize Automation with PowerShell: Introducing PowervRA

While putting together the PowerCLI book 2nd Edition we initially included in the proposed Table of Contents a chapter on vRealize Automation. However, it was fairly apparent that at that time (early 2015) there wasn’t a lot which could be done to fill out the chapter with good content. Firstly, most of the relevant content would be included in the vRO chapter, i.e. use vRA to call a vRO workflow to run PowerShell scripts. Secondly, automating elements within vRA 6.x could be done in part via the REST API, but a) there was a roughly 50 / 50 split between what was in the REST API vs Windows IaaS and b) I didn’t really have the time to make both a PowerShell toolkit for vRA and write a book about PowerCLI.

So we shelved that chapter and I put the thought to the back of my mind that I would revisit the idea when vRA 7 came out, with the likelihood of greater coverage in the vRA REST API. At the start of 2016 this topic came up in a conversation with Xtravirt colleague Craig Gumbley, who, it turned out, had the same idea for making a PowerShell vRA toolkit. So we decided to combine our efforts to produce a PowerShell toolkit for vRA, for both our own use as consultants and also to share with the community; consequently the project PowervRA was born.

Initial Release

For the initial release we have 60 functions available covering a sizeable chunk of the vRA 7 REST API. Compatibility is currently as follows:

vRA: version 7.0 – some of the functions may work with version 6.2.x, but we haven’t tested them (yet). Also, they have not been tested with 7.0.1.

PowerShell: version 4 is required. We haven't tested yet with version 5, although we wouldn't expect significant issues.

You can get it from GitHub or the PowerShell Gallery.

We have provided an install script on Github if you are using PowerShell v4. If you have v5 you can get it from the PowerShell Gallery with:

Getting Started

Get yourself a copy of the module via one of the above methods, or simply download the zip, unblock the file, unzip it and then copy it to somewhere in your $env:PSModulePath.



Import the module first:

You can observe the functions contained in the module with the following command:

Before running any of the functions to do anything within vRA, you will first of all need to make a connection to the vRA appliance. If you are using self-signed certificates, ensure that you use the IgnoreCertRequirements parameter:

You'll receive a response which, most importantly, contains an authentication token. This response is stored automatically in a global variable: $vRAConnection. Values in this variable are reused by the other functions in the module, which basically means you don't need to get a new authentication token each time, nor specify it with each function; it's done for you.
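The pattern is simple enough to sketch. This is an illustration of the idea in Python, not PowervRA's actual implementation, and all values are placeholders:

```python
# Sketch of the connection-caching pattern described above: one object
# holds the server details and token, and every subsequent call reuses it
# instead of re-authenticating. Names and values are illustrative only.
class Connection:
    def __init__(self, server, token, api_version):
        self.server = server
        self.token = token
        self.api_version = api_version
    def auth_header(self):
        # Every later request just attaches this header.
        return {"Authorization": "Bearer {0}".format(self.token)}

connection = Connection("vra.fqdn", "MTQ1...placeholder-token...", "7.0")
headers = connection.auth_header()
```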

Each of the functions has built-in help; alternatively, you can visit this site.

Example Use Case: Create a vRA Tenant

Having made a connection to the appliance, it's now time to start using some of the functions. To create a Tenant in vRA, we need to have connected with an account that has permission to do so in the default tenant (typically [email protected]); then it is as simple as the following:



If you look through the rest of the functions, you may notice that a lot of them contain a JSON parameter. So if you know the JSON required for the REST API request or are working with a system that produces it as an output, you can do something like the following:

The Future

VMware may put out something official at some point (I have no inside info on that, it could be weeks, months, years away or not even planned right now). Until that happens Craig and I have various things planned including greater coverage of the API, dealing with any feedback from this release and looking at automating some of our own testing so that we can more easily figure out which vRA versions are supported.

In the meantime, fill your boots and if you want to help us, feel free to get involved via the Develop branch on GitHub.


Create Blueprints in vRA 7 via REST and via vRO

A significant pain point on a recent project of mine was automating the creation of blueprints in vRA 6.2 with vRO. There was very little information around on how this could be achieved, and even the method that we eventually came up with still required some manual effort and was not always reliable.

Enter vRA 7 and some hope that things may have gotten better.

First of all I looked through the vRA 7 Programming Guide and found some examples on exporting content from vRA 7. I'd heard in some of the conversations around the release of version 7 that blueprints could be manipulated as YAML files, so the first thing to do was to create a blueprint through the new Design interface and then get it exported out into a YAML file.

Create a Blueprint through the Design Interface

Here’s my Centos-Small blueprint created through the Blueprint Designer. A simple vCenter template clone connected to a single network and basic resource values (make sure to publish the blueprint):


Get an Authentication Token for vRA REST Queries

Using the Postman REST client, I first of all need to get an authentication token. I have previously detailed how to do this from vRO, however from Postman I need to set the following:

URL: https://vraappliance.fqdn/identity/api/tokens

Type: POST

Headers: Accept: application/json and Content-Type: application/json


"username":"[email protected]",
"password":"[email protected]",




Sending that request should give me a response with a token to use which is valid for up to 8 hours; it will look something like this:

"expires": "2016-01-19T18:37:42.000Z",
"tenant": "Tenant01"



Get a List of Blueprints

Now we can use that token to get a list of available blueprints that token has permission to view:

URL: https://vraappliance.fqdn/content-management-service/api/contents

Type: GET

Headers: Accept: application/json and Authorization: Bearer MTQ1MzE5OTg2MjYzMTo3MThjNGFiNDVmMjE4MjZiMzgxNjp0ZW5hbnQ6VGVuYW50MDF1c2VybmFtZTp0ZW5hbnRhZG1pbjAxQHZyYWRlbW8ubG9jYWxleHBpcmF0aW9uOjE0NTMyMjg2NjIwMDA6ZmJmZjU2ZmNjOTFkMDE3ODhkNjJmMzM3ZGMwMzM3NjRhMjQxNjJlMjhjMGU3YjU0YzNlZjUwYTlkYWFjNDAxYTBkODVlYzVkYWQ1YzY4ZDc0MTQ3NjBlM2Q3MDk1OGU5OTg1NjNiMTI4OWQwMGMzMzExMDAxNmEyOGY0M2MxYTk=



This will give us a JSON response, including some details of our Centos-Small Blueprint:

"links": [],
"content": [
"@type": "Content",
"id": "78f27bc8-dd51-4c10-97cf-fb5770f0836b",
"contentId": "cfa3b28a-6a59-4a85-8355-ff70c6fd3332",
"name": "IaaS VC VirtualMachine",
"description": null,
"contentTypeId": "xaas-resource-mapping",
"mimeType": null,
"tenantId": "_internal",
"subtenantId": null,
"dependencies": [],
"createdDate": "2016-01-05T11:05:21.673Z",
"lastUpdated": "2016-01-05T11:05:21.673Z",
"version": 0
"@type": "Content",
"id": "3ac8d4ed-1bc0-45d7-a54a-edb44a2fdeae",
"contentId": "9fd01109-c9ab-4ce7-9b9d-4d1c06bccdb9",
"name": "IaaS vCD VM",
"description": null,
"contentTypeId": "xaas-resource-mapping",
"mimeType": null,
"tenantId": "_internal",
"subtenantId": null,
"dependencies": [],
"createdDate": "2016-01-05T11:05:21.868Z",
"lastUpdated": "2016-01-05T11:05:21.868Z",
"version": 0
"@type": "Content",
"id": "2359c8c9-1ec1-4162-a9e0-aa2c5121815d",
"contentId": "CentosSmall-12345",
"name": "Centos - Small",
"description": "Centos - Small",
"contentTypeId": "composite-blueprint",
"mimeType": null,
"tenantId": "Tenant01",
"subtenantId": null,
"dependencies": [],
"createdDate": "2016-01-19T10:56:14.398Z",
"lastUpdated": "2016-01-19T10:56:14.398Z",
"version": 0
"metadata": {
"size": 20,
"totalElements": 3,
"totalPages": 1,
"number": 1,
"offset": 0

Create a Content Package Containing our Blueprint

Now we need to create a Content Package containing our Blueprint so that it can be exported. We will only add a single item here, but multiple items can be added to a Package. The id of the Blueprint retrieved above needs to be used in the JSON body as ‘contents’.

URL: https://vraappliance.fqdn/content-management-service/api/packages

Type: POST

Headers: Accept: application/json, Content-Type: application/json and Authorization: Bearer MTQ1MzE5OTg2MjYzMTo3MThjNGFiNDVmMjE4MjZiMzgxNjp0ZW5hbnQ6VGVuYW50MDF1c2VybmFtZTp0ZW5hbnRhZG1pbjAxQHZyYWRlbW8ubG9jYWxleHBpcmF0aW9uOjE0NTMyMjg2NjIwMDA6ZmJmZjU2ZmNjOTFkMDE3ODhkNjJmMzM3ZGMwMzM3NjRhMjQxNjJlMjhjMGU3YjU0YzNlZjUwYTlkYWFjNDAxYTBkODVlYzVkYWQ1YzY4ZDc0MTQ3NjBlM2Q3MDk1OGU5OTg1NjNiMTI4OWQwMGMzMzExMDAxNmEyOGY0M2MxYTk=


"name":"Test package",
"description":"Test package for export",
"contents":[ "2359c8c9-1ec1-4162-a9e0-aa2c5121815d" ]




All being well, we should receive a 201 Created response:



Listing Existing Content Packages

We can see what Content Packages are available with:

URL: https://vraappliance.fqdn/content-management-service/api/packages

Type: GET

Headers: Accept: application/json and Authorization: Bearer MTQ1MzE5OTg2MjYzMTo3MThjNGFiNDVmMjE4MjZiMzgxNjp0ZW5hbnQ6VGVuYW50MDF1c2VybmFtZTp0ZW5hbnRhZG1pbjAxQHZyYWRlbW8ubG9jYWxleHBpcmF0aW9uOjE0NTMyMjg2NjIwMDA6ZmJmZjU2ZmNjOTFkMDE3ODhkNjJmMzM3ZGMwMzM3NjRhMjQxNjJlMjhjMGU3YjU0YzNlZjUwYTlkYWFjNDAxYTBkODVlYzVkYWQ1YzY4ZDc0MTQ3NjBlM2Q3MDk1OGU5OTg1NjNiMTI4OWQwMGMzMzExMDAxNmEyOGY0M2MxYTk=



which should give us a response like the following, including our newly created ‘Test package’:

"links": [],
"content": [
"@type": "Package",
"id": "736562b2-c991-449a-999d-6d25f2ff05c5",
"name": "demo package",
"description": "this is the description",
"tenantId": "Tenant01",
"subtenantId": null,
"contents": [
"createdDate": "2016-01-14T15:28:26.323Z",
"lastUpdated": "2016-01-14T15:28:26.323Z",
"version": 0
"@type": "Package",
"id": "339aeb0d-a15a-4ee1-84b7-87389d05c428",
"name": "demo package 2",
"description": "demo package 2",
"tenantId": "Tenant01",
"subtenantId": null,
"contents": [
"createdDate": "2016-01-14T17:05:47.458Z",
"lastUpdated": "2016-01-14T17:05:47.458Z",
"version": 0
"@type": "Package",
"id": "04fb81ce-9da1-46f6-b98f-fd48e8ce0e2c",
"name": "Test package",
"description": "Test package for export",
"tenantId": "Tenant01",
"subtenantId": null,
"contents": [
"createdDate": "2016-01-19T11:28:30.507Z",
"lastUpdated": "2016-01-19T11:28:30.507Z",
"version": 0
"metadata": {
"size": 20,
"totalElements": 3,
"totalPages": 1,
"number": 1,
"offset": 0



Export the Content Package to a Zip File

Now we want to export the Content Package to a zip file so that we can have a look at the YAML file that details the blueprint:

URL: https://vraappliance.fqdn/content-management-service/api/packages/packageid

Type: GET

Headers: Accept: application/zip and Authorization: Bearer MTQ1MzE5OTg2MjYzMTo3MThjNGFiNDVmMjE4MjZiMzgxNjp0ZW5hbnQ6VGVuYW50MDF1c2VybmFtZTp0ZW5hbnRhZG1pbjAxQHZyYWRlbW8ubG9jYWxleHBpcmF0aW9uOjE0NTMyMjg2NjIwMDA6ZmJmZjU2ZmNjOTFkMDE3ODhkNjJmMzM3ZGMwMzM3NjRhMjQxNjJlMjhjMGU3YjU0YzNlZjUwYTlkYWFjNDAxYTBkODVlYzVkYWQ1YzY4ZDc0MTQ3NjBlM2Q3MDk1OGU5OTg1NjNiMTI4OWQwMGMzMzExMDAxNmEyOGY0M2MxYTk=


This time, in Postman, click on Send and download:


This will prompt us for where to save the package and what to call it; make sure you create a zip file…


Have a look inside the zip file and you will see the following structure: a metadata.yaml file and a composite-blueprint folder containing an individual YAML file for each blueprint that was part of the package:



The contents of each YAML file are listed here for reference:


name: "Test package"
productVersion: "7.0.0-SNAPSHOT"
- locator: "composite-blueprint/CentosSmall-12345.yaml"
name: "Centos - Small"
description: "Centos - Small"
exportTime: "2016-01-19T11:46:13.614Z"


id: CentosSmall-12345
name: Centos - Small
description: Centos Small - 1 vCPU - 1GB RAM
type: Infrastructure.Network.Network.Existing
fixed: Tenant01
fixed: Tenant01
type: Infrastructure.CatalogItem.Machine.Virtual.vSphere
fixed: 1
min: 1
fixed: FullClone
fixed: false
fixed: '1'
fixed: 1
min: 1
- capacity: 16
id: 1452533069864
initial_location: ''
is_clone: true
label: Hard disk 1
storage_reservation_policy: ''
userCreated: false
volumeId: 0
fixed: false
id: Tenant01
max_network_adapters: {}
max_volumes: {}
default: 1024
max: 1024
min: 1024
- address: ''
assignment_type: Static
id: 0
load_balancing: ''
network: ${_resource~Tenant01}
network_profile: Tenant01
id: CloneWorkflow
label: CloneWorkflow
security_groups: []
security_tags: []
id: 51784cf9-fc3a-4939-abf7-2e2965523036
label: template-centos06b
fixed: template-centos06b
default: 16
max: 40
min: 16
Tenant01: 0,0
vSphere_Machine_1: 1,0


Create Blueprints from Postman

We can now manipulate the YAML files to create Blueprints back in vRA. Say for instance we want to add Centos-Medium and Centos-Large Blueprints. All we need to do is create additional YAML files for those two sizes, update the metadata.yaml file and send the package back to vRA.

So the composite-blueprint folder now looks like this:


And the metadata.yaml file has been updated to contain the new files:

name: "Test package"
productVersion: "7.0.0-SNAPSHOT"
- locator: "composite-blueprint/CentosSmall-12345.yaml"
name: "Centos - Small"
description: "Centos - Small"
- locator: "composite-blueprint/CentosMedium-12345.yaml"
name: "Centos - Medium"
description: "Centos - Medium"
- locator: "composite-blueprint/CentosLarge-12345.yaml"
name: "Centos -Large"
description: "Centos - Large"
exportTime: "2016-01-19T11:46:13.614Z"

We then create a new zip file with the updated content:
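This zipping step can also be scripted. Here's a sketch using Python's zipfile module, with the YAML contents heavily abbreviated:

```python
import io
import zipfile

# Sketch: rebuild the content package zip with the updated metadata.yaml
# and one YAML file per blueprint (file contents abbreviated here).
metadata = 'name: "Test package"\nproductVersion: "7.0.0-SNAPSHOT"\n'
blueprints = {
    "CentosSmall-12345": "id: CentosSmall-12345\nname: Centos - Small\n",
    "CentosMedium-12345": "id: CentosMedium-12345\nname: Centos - Medium\n",
    "CentosLarge-12345": "id: CentosLarge-12345\nname: Centos -Large\n",
}

buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w", zipfile.ZIP_DEFLATED) as package:
    package.writestr("metadata.yaml", metadata)
    for blueprint_id, yaml_text in blueprints.items():
        package.writestr("composite-blueprint/{0}.yaml".format(blueprint_id),
                         yaml_text)
# buffer.getvalue() is the zip to POST back to the packages endpoint
```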


To create the Blueprints in vRA we send the following REST request via Postman:

URL: https://vraappliance.fqdn/content-management-service/api/packages

Type: POST

Headers: Accept: application/zip and Authorization: Bearer MTQ1MzE5OTg2MjYzMTo3MThjNGFiNDVmMjE4MjZiMzgxNjp0ZW5hbnQ6VGVuYW50MDF1c2VybmFtZTp0ZW5hbnRhZG1pbjAxQHZyYWRlbW8ubG9jYWxleHBpcmF0aW9uOjE0NTMyMjg2NjIwMDA6ZmJmZjU2ZmNjOTFkMDE3ODhkNjJmMzM3ZGMwMzM3NjRhMjQxNjJlMjhjMGU3YjU0YzNlZjUwYTlkYWFjNDAxYTBkODVlYzVkYWQ1YzY4ZDc0MTQ3NjBlM2Q3MDk1OGU5OTg1NjNiMTI4OWQwMGMzMzExMDAxNmEyOGY0M2MxYTk=

Body: set to form data and select the file



We should receive a 200 OK response with details of Blueprints created:

"operationType": "IMPORT",
"operationStatus": "WARNING",
"operationResults": [
"contentId": "CentosMedium-12345",
"contentName": "Centos - Medium",
"contentTypeId": "composite-blueprint",
"operationStatus": "SUCCESS",
"messages": null,
"operationErrors": null
"contentId": "CentosLarge-12345",
"contentName": "Centos -Large",
"contentTypeId": "composite-blueprint",
"operationStatus": "SUCCESS",
"messages": null,
"operationErrors": null
"contentId": "CentosSmall-12345",
"contentName": "Centos - Small",
"contentTypeId": "composite-blueprint",
"operationStatus": "WARNING",
"messages": [
"Found matching content, import will overwrite this content."
"operationErrors": null


Now log in to vRA and you will see the three Blueprints:


and if we look at the details of the medium template, it shows the additional vCPU and RAM resources that were specified in the YAML file:


Obviously, there are many other changes that could be made within the YAML file to make different templates.

Create Blueprints from vRO

While researching this topic I initially looked at a similar approach to the above with Postman when moving on to do the same for vRO. However, looking at the updated vRA 7 plugin for vRO showed a folder of workflows for working with Composite Blueprints:


Rather than try and re-invent the wheel I decided to see what these could do, specifically the Import a composite blueprint workflow. It has inputs of a vCACCAFE:VCACHost and a MimeAttachment, so it is pretty straightforward to use:


Thinking slightly further ahead, rather than just run this single workflow, I would more likely use this as part of a vRA Tenant Creation workflow, storing multiple YAML files within Resource Elements and possibly updating them on the fly before using them to create Blueprints. For this example though I have stored one for CentosXLarge in a Resource Element to illustrate the possibility. This is exactly the same type of Blueprint YAML file used in the Postman example above, just with different values for id, name, vCPU and memory to make it XLarge:


So we then make a top-level workflow with two elements; a scriptable task to retrieve the YAML file from a resource element and the built-in Import a composite blueprint workflow:

The scriptable task has an input of a Resource Element and outputs a MimeAttachment (in a more realistic example we would take an input of, say, a folder of Resource Elements and output an array of MimeAttachments). Then a simple one-liner converts between the two object types:

var yamlMimeAttachment = yamlResourceElement.getContentAsMimeAttachment()




Now we can pass this MimeAttachment into the Import a composite blueprint workflow and set the vCACHost to be the host for the required vRA Tenant. The workflow will output the BlueprintId:



So we are ready to run the top-level workflow. Specify a vCACHost and a YAML file in a Resource Element and run the workflow:


Hopefully a successful run:


and here’s the Centos – XLarge Blueprint:





PowerCLI Book 2nd Edition is Now Available!

…..well in Kindle format anyway 🙂 The paperback version will be available on 11th January in the US and 12th January in the UK and according to the publisher, apparently in book shops from the 19th January. Seriously, if you actually see one in a book shop then please send me a photo, since I’ve not seen that happen outside of a VMworld.



With my fellow authors, Luc Dekens, Glenn Sizemore, Brian Graf, Andrew Sullivan and Matt Boren, we spent the best part of 2015 putting this book together. The 1st edition was written in 2010 and published in 2011; since then the VMware virtualisation landscape has changed significantly from pretty much VMware vSphere infrastructure and P2V projects and maybe some desktop work with VMware View to a wide variety of Management, Desktop, Application, Infrastructure and Cloud products.

Given this expansion of products and the fact that we all buy tech books ourselves and want value for money from them, we didn't want to just ship an updated version covering what was in the 1st Edition, checked against the latest versions of vSphere, PowerCLI and PowerShell. So as well as updating every chapter to make sure it worked with vSphere 6, PowerCLI 6 and PowerShell 4, and adding or replacing content and code to keep it relevant (for instance the Distributed Switch is now covered out of the box, where previously we contributed our own functions), we had the foolish excellent idea to add a number of additional chapters covering things such as:

  • vCloud Director
  • vCloud Air
  • vRealize Orchestrator
  • Site Recovery Manager
  • PowerActions
  • and an introduction to DevOps topics

Additionally the original Storage and Network chapter was split into two and content on new technologies VSAN and NSX added appropriately. To be honest each of VSAN and NSX could probably have their own chapters…maybe next time 😉

This took the page count to 984, almost 200 pages more than the 1st edition, which on its own is almost the size of some tech books. Unfortunately, this partly led to the book taking longer than it should have to complete, and there were still areas that we would have liked to include, but we had to make some tough decisions not to or it would never have seen the light of day. As a wise man often says….


While there is some introductory content in some of the chapters, this is not a book that typically runs you through how to use each of the PowerCLI cmdlets in their relevant areas; rather, this is a book that goes beyond what ships out of the box and into areas where you will need to do some of the hard work yourself. We had some negative feedback that the 1st edition could have done better in introductory areas, and may well get the same again this time; there are other great PowerCLI books out there now which will do a better job for beginners. So imagine the front cover has Deepdive stamped across it and that will give you a better feel for what to expect.

Personally I feel I made a way better contribution to this book than the previous one for various reasons:

  • I was involved from the beginning, rather than parachuting in halfway through to try and get the book out of the door.
  • Consequently I was able to think about and plan for the content of the chapters I was responsible for, rather than just trying to make the deadline for each chapter.
  • I was used to the publisher’s Word template and what was required in it. There are so many rules about fonts, spacing and how the chapter needs to be laid out, which you seem to be just expected to know, that first time round it caused a lot of pain and wasted time.
  • A supportive employer, Xtravirt, who were interested in what I was doing, rather than one who previously was not.
  • A big thank you to our Technical Editor, Matt Boren, who left no stone unturned in making sure my code was accurate, and who also made some outstanding suggestions to help me improve, some of which I have taken forward for everything PowerShell-based that I write now. In fact he liked the experience so much, he even got roped into contributing some of the book content himself.
  • I actually had a reasonable idea about the technologies involved……;-)


Last time round I guessed how much time I spent on my contribution. Being curious, I decided to track it this time. It’s also something I get asked about now and again by people considering writing a tech book, so now I can refer them to some data 😉

Originally I was going to track the data by week, but since it went on so long it ended up being by month. I’m not sure if writing a book with code samples requires more or less effort than a tech book without them. On the one hand you almost have to produce double the content; explanatory written text and create code to make stuff happen. On the other hand at least the code uses up some pages when printed in the book 🙂


In total I spent 330.5 hours (that extra 0.5 was crucial) of my own time on the effort for this book between Jan and Nov 2015, and remember there were 5 other authors. So if you are thinking about getting involved in a project like this then consider carefully whether you are able to commit to the level of time you may have to give up for it. I’m not sure how representative this is for authors of other tech books, but I never talked to one who didn’t say something like “you have no idea how much effort it takes”. Similar to last time, I pretty much worked on it most evenings Mon – Thu and stayed away from it over the weekend so that it did not impact my family life too much. (Although my children are now 5 years older and consequently don’t go to bed so early – which meant starting later and finishing too late regularly) It did mean I didn’t see my friends much during weekday evenings, which is when I tend to catch up with them, for about 6 months, but something has to give. I think my blog and podcast content suffered a fair bit too.

You’ll see that most of the original effort was in the early months either producing content for new chapters, or updating content for existing ones. I was actually originally expecting to be pretty much done by April, but for various reasons I took on a bit more than originally planned and April – June was spent producing more content and some early feedback reviews. August – November was pretty much taken up with reviews.

Reviews are something which I think vary from publisher to publisher, but for Wiley it pretty much follows this:


Author Review

If you can recognise your original text after it has been through all that process, then well done 🙂

So at least three responses required per chapter, multiply that by the number of chapters you commit to and then try finding time to write new chapters and respond to reviews both on similar deadlines. Pro tip: get your new content out of the way early on so you can do the reviews in peace –  see Jan / Feb in the chart 😉

With all that effort though, it is a personally (if not financially) rewarding experience. I think both times I’ve said “Never again!” once too often, but I suspect it may happen one day. I’m confident that if it’s something you are interested in and you buy a copy, you will find it useful. I genuinely would happily buy a copy with my own money for the content I have seen the other guys produce for it.

Final comment: Writing in a foreign language also brings some difficulties. Having your own English ‘corrected’ to the US version requires a fair amount of patience. Not necessarily spelling differences, but phrases that are commonly understood this side of the Atlantic, but not the other. I’ll just leave this here……

Update 03/02/2016: Due to some issues mentioned in the comments around downloading the sample files, I have attached (most of) them here: