Using Pester to Automate the Testing of PowervRA

Learning Pester has been on my list to get done this year, and while working on PowervRA I finally had a real project that could make significant use of it. Being able to automate the testing of each PowerShell function means that we can quickly test the impact of any changes to a function. It also means that we can test the whole module full of functions against new (and potentially old) versions of vRA.

There is a very useful introduction to Pester on the Hey Scripting Guy site and that is what I used to get started with it.

So after we released the first version of PowervRA, I set about creating a test for each function in the module - and here is where I made my first mistake, although to be fair I knew I was making it during the initial development of PowervRA. With 70+ functions in the module at that time, I needed to write a test for each of them. So once the initial interest of learning how Pester works had worn off, I was left with the boring task of writing all of the tests.

What (I knew) we should have done was write a Pester test for each function during (or before) the development of that function; that way it would not have seemed like such a laborious task. So going forward, that's what we are doing each time we create a new function.

So what does a test look like? Well, here's one for Reservation Policies:
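Something along these lines (a simplified sketch rather than the exact test from the repo, written in Pester v3 syntax; it assumes an existing Connect-vRAServer session, and the -Name, -Description and -Id parameters used with the *-vRAReservationPolicy functions are illustrative):

```powershell
# Simplified sketch of a Reservation Policy test (not the actual test from the
# PowervRA repo). Assumes an existing vRA connection and that the
# *-vRAReservationPolicy functions accept -Name / -Description / -Id.

$ReservationPolicyName        = 'PesterTestPolicy'
$ReservationPolicyDescription = 'Created by Pester'

Describe -Name 'Reservation Policy Tests' -Fixture {

    It -Name 'Create named Reservation Policy' -Test {
        $Policy = New-vRAReservationPolicy -Name $ReservationPolicyName -Description $ReservationPolicyDescription
        $Policy.Name | Should Be $ReservationPolicyName
    }

    It -Name 'Return named Reservation Policy' -Test {
        $Policy = Get-vRAReservationPolicy -Name $ReservationPolicyName
        $Policy.Name | Should Be $ReservationPolicyName
    }

    It -Name 'Update named Reservation Policy' -Test {
        $Policy = Get-vRAReservationPolicy -Name $ReservationPolicyName
        $UpdatedPolicy = Set-vRAReservationPolicy -Id $Policy.Id -Description 'Updated by Pester'
        $UpdatedPolicy.Description | Should Be 'Updated by Pester'
    }

    It -Name 'Remove named Reservation Policy' -Test {
        $Policy = Get-vRAReservationPolicy -Name $ReservationPolicyName
        Remove-vRAReservationPolicy -Id $Policy.Id -Confirm:$false
        Get-vRAReservationPolicy -Name $ReservationPolicyName -ErrorAction SilentlyContinue | Should Be $null
    }
}
```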

You should see that each set of tests is grouped in a Describe block. Each test starts with the It keyword; then we typically do something and check a property of the object that comes back. The Should keyword lets us specify what to check the result against. As you can see, Pester is designed so that the tests read quite naturally.

We then follow a pattern of New-xxx, Get-xxx, Set-xxx, Remove-xxx, which, all being well, leaves us with a clean environment after the tests.

For these tests, we want to check each function against a real instance of vRA, so we need some environment-specific values. I'm not sure if this is the best way to do it, but for the time being we've abstracted the data out of the test files and into a JSON file of variables. This means that if we want to run the same tests against a different instance of vRA, we just need to change some of the values in that file. (There is also a way to carry out unit testing in Pester using mocking, which we may visit at some point.)
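As a rough sketch of the idea (the file layout, property names and Connect-vRAServer parameters below are placeholders rather than the repo's actual variables file), it looks something like this:

```powershell
# Placeholder variables file (Variables.json) for the target vRA instance.
# The property names and values here are illustrative only.
@'
{
    "Connection": {
        "Server": "vra01.corp.local",
        "Tenant": "vsphere.local",
        "Username": "pester-svc@vsphere.local"
    },
    "ReservationPolicy": {
        "Name": "PesterTestPolicy",
        "Description": "Created by Pester"
    }
}
'@ | Set-Content -Path .\Variables.json

# At the top of the test script, read the file once and connect to the instance it points at
$JSON = Get-Content -Path .\Variables.json -Raw | ConvertFrom-Json
$Credential = Get-Credential -UserName $JSON.Connection.Username -Message 'vRA password'
Connect-vRAServer -Server $JSON.Connection.Server -Tenant $JSON.Connection.Tenant -Credential $Credential -IgnoreCertRequirements
```

Pointing the tests at a different instance is then just a case of editing the values in Variables.json.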

An example of how we can use them is as follows. We can fire the tests against a vRA 7.0 instance and get the following results:
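Kicking off the run is simply a case of pointing Invoke-Pester at the folder containing the test scripts; something like the following (the path here is illustrative):

```powershell
# Run every *.Tests.ps1 file in the Tests folder; the summary at the end
# shows the pass/fail counts and the total time taken
Invoke-Pester .\Tests
```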

By changing some of the variables in the JSON file, we can then fire the same tests against a vRA 7.0.1 instance:

So we can tell with a good degree of confidence that nothing is broken in PowervRA between the two versions. As you can see, we can run 81 tests in 60-75 seconds, which is pretty cool :-)

Craig and I have discussed that we are only really scratching the surface with the tests so far, and we could probably bring someone onto the project who is solely dedicated to the testing (if you are interested, let me know :-) ). For example, for the time being we are only checking one property of each object created by a New-vRAxxxx function; ideally we should test every property. For now though, what we have so far is a big step forward and I'm looking forward to learning more about Pester.
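To illustrate the difference, here is a rough sketch (inside a Describe block; the $PolicyName / $PolicyDescription variables and the Id property are illustrative, not taken from the repo's tests):

```powershell
# Today: a single assertion on the object returned by the New-vRAxxxx function
It -Name 'Create named Reservation Policy' -Test {
    $Policy = New-vRAReservationPolicy -Name $PolicyName -Description $PolicyDescription
    $Policy.Name | Should Be $PolicyName
}

# Ideally: assert on every property the function is expected to set
It -Name 'Create named Reservation Policy with all expected properties' -Test {
    $Policy = New-vRAReservationPolicy -Name $PolicyName -Description $PolicyDescription
    $Policy.Name        | Should Be $PolicyName
    $Policy.Description | Should Be $PolicyDescription
    $Policy.Id          | Should Not BeNullOrEmpty
}
```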

If you want to check out what we have done with the tests you can find them here.