Resources for the Google Cloud Architect Certification

I recently passed the certification exam for Google Cloud Certified Professional Cloud Architect. A number of people asked me for the resources I used to get through the exam, so I thought I would share them here.

One word of context: the exam is only Pass or Fail; you are given no indication of how close you were to the required level. Consequently, I do not know whether the extra effort I put in over and above attending a training course was actually necessary. The consensus among the trainers I encountered, and others who have passed, appears to be that it is.

  1. Start with the exam page itself which details things such as cost, test centers etc.
  2. Review the exam guide. Make sure you are familiar with every topic within it. My experience was that the broader your knowledge of the topics, the better. The exam also covers more than GCP specifics: it tests your general experience of public cloud, architecture and automation.
  3. I was fortunate enough, via my employer, to attend three of the instructor-led courses mentioned on the exam page:

Google Cloud Platform Fundamentals: Core Infrastructure – 1 day

Architecting with Google Cloud Platform: Infrastructure – 3 days

Preparing for the Professional Cloud Architect Examination – 2 days

The final exam-preparation course was probably the most useful of the three. It gave a very thorough (but quick) run through all of the exam topics and a good gauge of where further study was required. It also emphasised the extra effort you would need to pass the exam (if you believe them 🙂 ). It is also worthwhile getting yourself into good shape before taking this class; I spent 3 days reviewing and replaying what I had learnt on the 3-day Architecting course before going to it, and was glad I had done so when the instructor whizzed through the topics.

4. I took the free online ‘Preparing for the Google Cloud Professional Cloud Architect Exam’ course from Coursera. I took the time to work through all of the hands-on practice labs, with follow-up reading of the online documentation for my weaker areas and those where I felt I needed a bit more detail.

5. I read most of the Site Reliability Engineering book, written by Google employees to describe how they run their systems at scale. This helped to cover some of the less GCP-specific questions on the exam and was a poor man’s replacement for not attending the Design and Process course.

6. Review the three case studies which are tested on the exam. Some of the questions require specific knowledge about each case study, so it is worth at least being familiar with them prior to the exam so that you do not need to spend exam time reading them. It also helps (and is covered in both of the preparing for the exam courses above) to think in advance of the exam about what GCP products and solutions you could use in each of the case studies to deal with the technical and business requirements.

7. Review the Google suggested solutions for different design scenarios. This helps you get familiar with which GCP products and solutions fit particular types of technical and business requirements in areas such as finance, media and gaming.

8. Take the practice exam to familiarise yourself with the types and styles of questions. It also helps (in a small way, since I believe there are only 20 questions) to gauge your strengths and weaknesses on certain topics.

9. I listened to the GCP Podcast on a few topic areas that I was not so familiar with, including on the drive to the exam centre – I had a 2-hour drive and it helped to focus the mind 🙂

Specifically, I listened to these episodes:

Cloud SQL with Amy Krishnamohan

Cloud Dataflow with Frances Perry

Cloud Functions with Bret McGowen

Cloud Spanner with Deepti Srivastava

Cloud Networking with Ines Envid

I would suggest you look through the back catalogue and see what is most relevant for you.

Hope that helps!

PS If you are quick, you may still be in line for one of these kinds of freebies before Google stops giving them away. They send you a link to a list of items to choose from a couple of days after you pass the exam:

PSDayUK 10th October 2018

On Wednesday 10th October 2018 a group of people involved with organising PowerShell User Group events around the UK will be hosting a 1 day PowerShell conference at CodeNode in London, PSDayUK. This follows on from the highly successful event run last year.


Updated 3rd August 2018: 

An agenda will be published soon, once the session submissions have been reviewed. If you are interested in presenting then please fill out this form and your submission will be included in the review process.

Blind bird ticket (i.e. no agenda) pricing is available now at a significant discount. 

The agenda is now available:

Early bird tickets are available at a discount on the full price.

All of the sessions from last year were recorded and published to the event’s YouTube channel, so you can get a good idea of the likely content by checking out those videos.

Hope to see you there.

List Installed Jenkins Plugins with PowerShell

While looking to automate the installation of Jenkins, I needed to get a list of installed plugins into a plugins.txt file to be used by the automated install process. It’s possible to view them in the GUI, but not to get an easy export:

It’s possible to query the API to get this information:
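A sketch of that query in PowerShell Core, using the Jenkins pluginManager JSON API; the server URL is a placeholder and the credential should be a Jenkins username plus API token:

```powershell
# Placeholder Jenkins URL and credentials – adjust for your environment
$Credential = Get-Credential   # Jenkins username and API token

$Result = Invoke-RestMethod -Uri 'http://jenkins.example.local:8080/pluginManager/api/json?depth=1' `
    -Credential $Credential -AllowUnencryptedAuthentication

# Emit shortName:version lines in the format expected by a plugins.txt file
$Result.plugins | ForEach-Object { "$($_.shortName):$($_.version)" } |
    Set-Content -Path plugins.txt
```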

Note: my test Jenkins server is not served over HTTPS, hence the need for the AllowUnencryptedAuthentication parameter.


Visual Studio Live Share – Initial Experiences

I saw a demo of Visual Studio Live Share last year and thought it looked pretty cool. If you are not aware of what it is, it allows you to collaborate on code projects with other developers without the need for some kind of screen sharing solution.

I work remotely in a development team and we often use screen sharing solutions to assist or get assistance from each other, demonstrate what we’ve been doing, or just bounce ideas off someone else, which can often help you get past problems when you are not physically sitting near your colleagues. Screen sharing solutions typically fall down in a few scenarios, though:

  • When working on different projects, but seeking help from someone with particular technical knowledge and they don’t have your code in front of them.
  • Avoiding the “Try this….. hmmm, maybe try that…… or how about this….” routine, where the person making the suggestions has to talk you through making the changes
  • The “Can I take control of your screen?” scenario, which never seems to work that well
  • Being able to both make code changes to the same code project at the same time

Earlier this year I was able to get into the Private Preview for  Visual Studio Live Share and give it a trial run. What it essentially lets you do is take your code project in Visual Studio or Visual Studio Code, share that project with a colleague just by giving them a web link and then collaborate on the same code in real time.

I suggested trying it out with a couple of colleagues instead of using our screen sharing solution and fairly quickly it has become the solution we default to using since it has provided a great experience already, even while in Private Preview.

Getting Started

Visual Studio Live Share is now in Public Preview, so anybody is able to test it out. I’m going to demonstrate with Visual Studio Code rather than Visual Studio, but both provide a similar experience.

Get the latest version of Visual Studio Code and install the VS Live Share Extension:

Then you will either be prompted to sign in, or you can click the sign-in button in the bottom-left corner of VS Code. You’ll be prompted to use either a Microsoft account or your GitHub account:

Once signed in, click the Share button:

You’ll shortly be prompted that a sharing link has been put on your clipboard:

Pass this to the person you want to share the session with. Their system needs the same prerequisites, i.e. Visual Studio or Visual Studio Code installed, the VS Live Share extension, and being signed in with one of the above accounts. Then they simply click the link sent through and you’ll get a notification of a collaboration session starting:

Note: You can even mix and match between the two editors; you don’t both have to have the same one.

The person joining then gets a Live Share workspace opened, with the entire code project viewable and editable, without having to set up anything on their system or clone the same repository:


The initial feature we have mostly used is being able to show each other different files within the project and make live changes together. By clicking on the pin icon you will automatically follow the other person as they move from file to file, and also within a file. You’ll see a coloured icon with the name of the person you are sharing with, indicating where they are in the file.

This has been fabulous and has really made a significant difference at times when needing to troubleshoot code with a fellow remote worker.

One thing we were missing, though, was a shared terminal. It was great to be able to edit code together, but then we had to fall back to screen sharing to see what happened when it was run. So I contacted the Program Manager Jon Chu to ask whether it could be a feature on the roadmap – to my surprise he informed me that it was already in development and, in fact, was already available as an experimental feature!

You can start sharing a terminal by clicking on the session state status bar item and choosing Share Terminal:


You get the option for a read-only or read/write shared terminal:

The shared terminal will then be available on both systems; in this case you’ll see it’s labelled as ‘ps shared’:


For us VS Live Share has been so good, it’s something we’ve already come to rely on and it’s still only in Public Preview. It gets updated regularly, so I’m confident it’s only going to get better. I encourage you to try it out with your co-workers and provide feedback.

Panini World Cup 2018 Sticker Collection Tracker

I made an Excel based Panini sticker collection tracker for Euro 2016, so thought I would update it for the World Cup 2018 in Russia.

On the Data sheet, use the colour shown in cell A38 to track which stickers you have got; then you can easily see which ones you still need.

On the Swaps sheet, list out your swaps:

Then on the Analysis sheet you will get some figures and graphs to track your collecting progress:


You can download the Excel workbook from here.

Here’s another tip for your collection: buy a box of stickers from somewhere like eBay, rather than 100 individual packs – it will probably save you around £25. Plus, we tend to find we get a lower percentage of swaps.


PowervRO – Now available on macOS and Linux via PowerShell Core!

Back Story

Back in January 2017, Craig and I made PowervRA available for macOS and Linux via PowerShell Core. It was always our intention to do the same thing for PowervRO and, although slightly later than we hoped, we’re finally able to do that. PowerShell Core has come a long way itself over the last year – currently in Release Candidate and soon to be GA – and I’m sure a lot of the hard work and community feedback that has gone into it has helped make the job of supporting PowerShell Core in PowervRO very straightforward.

In reality we had to make only a relatively small amount of changes to the code base, mostly around detecting which version of PowerShell is being used and consequently which method to use for making API calls to the vRO appliance when dealing with things like SSL certificates and protocols. There are a lot of great new things available in Invoke-RestMethod and Invoke-WebRequest in PowerShell Core which make API calls a lot simpler, so we take advantage of those.

Note: to take advantage of a lot of these new features we have raised the PowerShell version requirements for PowervRO 2.0.0 to Windows PowerShell 5.1 or PowerShell Core 6.0.0-rc.

Having invested a lot of time with the initial release of PowervRO in creating integration tests via Pester for each function, that really paid off for this release since we were very easily able to test everything against different versions of vRO with different versions of PowerShell across different operating systems. Again very little actually needed to be changed in the code for the functions themselves, which is a testament to the compatibility of PowerShell Core. Typically it was only things like cmdlet parameter changes, such as this one, which tripped us up.


You will need:

PowerShell Core Release Candidate or later. Instructions on getting it installed for different OS flavours can be found here.

PowervRO 2.0.0 or later. Get a copy of PowervRO onto the Linux or macOS machine you want to run it from. Use the following to download it from the PowerShell Gallery:
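The Gallery install is typically along these lines:

```powershell
# Install PowervRO for the current user from the PowerShell Gallery
Install-Module -Name PowervRO -Scope CurrentUser
```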

or manually copy the module yourself to one of the locations listed in $env:PSModulePath, for example:

In Action


Here’s PowervRO on my MacBook:

Connect to vRO:

Retrieve all Workflows, sort by CategoryName and display Name, CategoryName and Version:

Invoke a Workflow:
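Putting those three steps together, a session might look something like this; the server name, credentials and workflow ID are placeholders, and the cmdlet names follow PowervRO’s conventions:

```powershell
# Placeholder server and credentials; -IgnoreCertRequirements skips cert validation in a lab
$SecurePassword = Read-Host -Prompt 'vRO password' -AsSecureString
Connect-vROServer -Server vro01.example.local -Username vcoadmin -Password $SecurePassword -IgnoreCertRequirements

# Retrieve all workflows, sorted by category
Get-vROWorkflow | Sort-Object -Property CategoryName |
    Select-Object -Property Name, CategoryName, Version

# Invoke a workflow by its ID (placeholder GUID)
Invoke-vROWorkflow -Id 'c0ffee00-0000-0000-0000-000000000000'
```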


CentOS 7.3

Here’s PowervRO on CentOS 7.3:


Connect to vRO:

Retrieve all Workflows, sort by CategoryName and display Name, CategoryName and Version:

Create a new Category:
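A category creation sketch, assuming PowervRO’s New-vROCategory cmdlet; the name and category type are examples only:

```powershell
# Create a new workflow category (name and type are illustrative)
New-vROCategory -Name 'PowervRO-Test' -CategoryType WorkflowCategory
```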


Ubuntu 17.04

Here’s PowervRO on Ubuntu 17.04:

Connect to vRO:

Retrieve all Workflows, sort by CategoryName and display Name, CategoryName and Version:

Remove a Category:
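And removal, assuming PowervRO’s Remove-vROCategory cmdlet takes the category ID:

```powershell
# Category ID is a placeholder – find the real one first with Get-vROCategory
Remove-vROCategory -Id 40 -Confirm:$false
```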

Side Note

In PowervRO 2.0.0 we have also made some under-the-hood changes that are worth being aware of (check the changelog for more details):

  • Module restructure: we changed the functions from being individual nested modules in *.psm1 files to simply being *.ps1 files that are loaded as part of the module in a different way. The build process for a release now combines all of the functions from the individual *.ps1 files into a single *.psm1 module file.
  • The Password parameter of Connect-vROServer now requires a SecureString, not a String. Consequently, you will now need to supply a SecureString when using it, as in the example below:
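For example, something along these lines; the server details are placeholders, and Connect-vROServer is PowervRO’s connect cmdlet:

```powershell
# Prompt for the password as a SecureString rather than plain text
$SecurePassword = Read-Host -Prompt 'vRO password' -AsSecureString
Connect-vROServer -Server vro01.example.local -Username vcoadmin -Password $SecurePassword
```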

PowerShell Core does not have -Encoding Byte. Replaced with new parameter AsByteStream

Carrying out the following in Windows PowerShell worked, but didn’t always make a lot of sense, because Byte is not really an encoding type:
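The Windows PowerShell pattern was along these lines (file paths are illustrative):

```powershell
# Windows PowerShell 5.1: copy a file via raw bytes using -Encoding Byte
$Bytes = Get-Content -Path .\source.bin -Encoding Byte -Raw
Set-Content -Path .\copy.bin -Value $Bytes -Encoding Byte
```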

If you try to run the same command in PowerShell Core you will receive the following error:

Set-Content : Cannot bind parameter ‘Encoding’. Cannot convert the “Byte” value of type “System.String” to type “System.Text.Encoding”.

This is because Byte is no longer a valid selection for the Encoding parameter: it has been replaced by a new parameter, AsByteStream, which makes more sense for what you are typically trying to do.
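The PowerShell Core equivalent uses the new switch (same illustrative paths):

```powershell
# PowerShell Core 6+: -AsByteStream replaces -Encoding Byte
$Bytes = Get-Content -Path .\source.bin -AsByteStream -Raw
Set-Content -Path .\copy.bin -Value $Bytes -AsByteStream
```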

Thanks to Mark Kraus for pointing this change out to me.

Using a Specific Security Protocol in PowervRA

A few months ago we had an issue logged in PowervRA where it was not possible to make a connection to the vRA appliance after it had been locked down following the VMware hardening guide. Specifically this was because SSLv3/TLSv1 (weak ciphers) had been disabled.

By default, Windows PowerShell 5.1 has the following security protocols available: Ssl3 and Tls – hence the above failure.

It’s possible to work around this by adding in the required security protocol, in this case TLS 1.2:
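In Windows PowerShell that session-wide workaround looks like this:

```powershell
# Add TLS 1.2 to the security protocols for the current PowerShell session
[System.Net.ServicePointManager]::SecurityProtocol =
    [System.Net.ServicePointManager]::SecurityProtocol -bor
    [System.Net.SecurityProtocolType]::Tls12
```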

Note, however, that this change is for the PowerShell session itself, which may or may not be desired behaviour.

PowerShell Core adds a new parameter, SslProtocol, to both of the web cmdlets, Invoke-WebRequest and Invoke-RestMethod. Consequently, you can specify a security protocol per request, rather than per PowerShell session. For example, you could do something like this for TLS 1.2:
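Something like the following, where the URI is a placeholder for your vRA appliance endpoint:

```powershell
# PowerShell Core: pin TLS 1.2 for this single request only
Invoke-RestMethod -Uri 'https://vra.example.local/api/endpoint' -SslProtocol Tls12
```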

In PowervRA 3.0.0 we’ve updated Connect-vRAServer to support this functionality, also with an SslProtocol parameter:
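Usage is along these lines; the server, tenant and account details are placeholders:

```powershell
# Connect to vRA requesting TLS 1.2 (all names here are illustrative)
$SecurePassword = Read-Host -Prompt 'vRA password' -AsSecureString
Connect-vRAServer -Server vra.example.local -Tenant vsphere.local `
    -Username 'configurationadmin@vsphere.local' -Password $SecurePassword -SslProtocol Tls12
```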

If you’re on Windows PowerShell we’ll add to the available security protocols in the PowerShell session (and remove it afterwards when you use Disconnect-vRAServer). If you’re using PowerShell Core we’ll use the SslProtocol parameter of Invoke-RestMethod so that the requested protocol is used per request.

The $vRAConnection variable has been updated with the SslProtocol property to show you whether your connection is using the default protocol or a specified one:

Final note: this was a breaking change for us, since we require Windows PowerShell 5.1 or PowerShell Core 6 Release Candidate to implement the above functionality easily. So make sure you are on one of those versions before trying PowervRA 3.0.0.

Accessing Content from Variables of Type Any in the vRO Client

One of my colleagues showed me how to do this, so I thought it worth sharing since it has bugged me ever since I started using vRO.

If you have run a vRO workflow and are looking at the output, specifically the Variables tab:

you can then view the value each variable held at the time of workflow completion. If the value is a string or something else simple, you will get a nice view of it. However, if it is, say, a collection of properties, you will see something similar to the below, and typically you will not be able to scroll across to view them all.

What I have typically done until now is add a Scriptable Task as the next step in the workflow and log all of the properties out. However, my colleague demonstrated that it is possible to copy them and paste them into a text editor:


  1. Bring up the above view by clicking on the ‘i’, next to the magnifying glass
  2. Click once on the white section – in this example the word ‘Properties’
  3. Ctrl-A
  4. Ctrl-C

Even though there is no visual indication that everything was highlighted and made available to copy, as there would be in, say, a text editor, it has actually worked. The below is the copied output from the above:


OK, it is not that easy to read, but it is pretty handy if you just want to quickly grab it and search for something in the list of Properties.

Preparing for 70-534: Architecting Microsoft Azure Solutions

I recently passed the exam 70-534: Architecting Microsoft Azure Solutions, so I thought I would share a few preparation materials here. From reading the exam blueprint you will notice a certain amount of crossover with 70-533 (and, to a slightly lesser extent, 70-532), so a fair amount of the resources I used for those exams are also relevant; see my pages on those for more info.

In addition, for this exam I used the excellent 70-534 preparation course from Scott Duffy on Udemy. Not only does it have excellent content, but Scott appears to update it on a regular basis. Even during the 3–4 weeks I was using the course there were updates and new information coming through from Scott, which was really helpful. It’s also often available for an excellent price on Udemy; I managed to pick it up for £10.

Scott also has a set of practice questions available on the same site. Split into 3 tests, there are currently 150 questions. I managed to also pick these up for £10 and found them useful as part of the preparation.


Update 21/11/2017: Since I posted this blog I was made aware of the following about Udemy. I would suggest you read it, then make up your own mind about whether you still wish to take one of their courses.


There is a useful exam preparation session from Ignite 2017 which is well worth watching.

After completing the above I still had a week or so left to prepare, so I picked up some practice questions from MeasureUp. These were a bit more pricey at £70 for 30 days’ access and, while useful in terms of making me go and read documentation on subjects I was not so good at, they felt a little out of date.

One additional thing to be aware of is that the 70-534 exam is due to expire on 31st December 2017, to be replaced by 70-535. Depending on where you are in your study preparation, you have a decision to make on which exam to take. Scott Duffy has some useful info on the differences between the two exams which may help in making that decision – his initial look suggests there is a significant amount of new content added to the blueprint for 70-535 and only a few items removed.

Having now passed 70-532, 70-533 and 70-534, I’m done with these Azure certifications for some time. Having been through this process, my recommendation if you are following the same path would be to take all three as close together as you can, given the overlap in content. I wasn’t able to for various reasons, but if I had to do it again, I would make more of an effort to make that happen.