Virtualising Citrix on VMware

I was lucky enough to take on a project initially started and blogged about by my co-host on the Get-Scripting podcast, Alan Renouf.

In summary, his posts mainly covered the design decision of whether to go for VMs with one or two vCPUs, and how many Citrix users you could support per VM. Following on from his initial testing using Citrix EdgeSight, we ran a pilot with a few different scenarios, and it turned out that the best performance with the highest number of Citrix users per VM came from a VM with 2 x vCPUs; a conclusion which didn’t really match the initial testing. I guess you can’t beat real users doing real work, and the sometimes crazy things they get up to pushing the boundaries of performance.

A number of other decisions were also made at this time, most of which contributed to further significant cost savings on top of those we were already going to achieve simply by reducing the number of physical boxes hosting the Citrix environment.

Something else which came out of the pilot was a decision to store the VMs on local storage rather than the SAN. Whilst this obviously reduces the flexibility offered by a virtualisation solution with shared storage, which gives options like VMotion, DRS etc., the cost savings gained by using local storage were very significant. Not only did we avoid all of the charges associated with SAN storage (fibre cards, cabling, switch ports, SAN disk), we could also deploy the hosts with ESX Foundation licenses. From a redundancy and maintenance point of view, we designed it so that we could afford to lose more than one host for a period of time and still have enough capacity to provide a good service.
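As a rough illustration of that redundancy sum, here is a small sketch. The per-host figures (four VMs per host, around 40 users per VM) are the ones mentioned later in this post; the farm size and required user count are made-up example numbers, not our actual environment:

```python
# Rough capacity model for an N-host Citrix farm on local storage.
# vms_per_host and users_per_vm reflect figures quoted in the post;
# the example farm size and user count below are illustrative only.

def users_supported(hosts_up, vms_per_host=4, users_per_vm=40):
    """Total concurrent users the remaining hosts can serve."""
    return hosts_up * vms_per_host * users_per_vm

def max_host_failures(total_hosts, required_users,
                      vms_per_host=4, users_per_vm=40):
    """How many hosts can be lost while still serving required_users."""
    failures = 0
    while users_supported(total_hosts - failures - 1,
                          vms_per_host, users_per_vm) >= required_users:
        failures += 1
    return failures

# Example: a 6-host farm serving 600 users can lose 2 hosts and
# still have 4 hosts * 4 VMs * 40 users = 640 seats available.
print(max_host_failures(total_hosts=6, required_users=600))
```

With local storage there is no VMotion to evacuate a failing host, so the spare capacity has to live in the surviving hosts themselves; that is the trade-off being made here.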

We deployed four VMs per ESX host, i.e. eight cores available to 8 x vCPUs. (Note: I have recently read Duncan Epping’s post around how many cores you should specify when using CPU affinity. It makes for interesting reading; thankfully we are not currently seeing any of the issues that might arise from this.)
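For what it’s worth, the sizing sum is simple enough to sanity-check: four 2-vCPU VMs on an eight-core host gives a 1:1 vCPU-to-core mapping, i.e. no vCPU oversubscription. A trivial sketch (the function name is mine, the figures are from the paragraph above):

```python
# Sanity check on the vCPU-to-core ratio described above: four
# 2-vCPU VMs on an 8-core ESX host means 8 vCPUs on 8 cores,
# so a 1:1 ratio with no overcommit.

def vcpu_ratio(physical_cores, vms, vcpus_per_vm):
    """Return total vCPUs and the vCPU:pCore overcommit ratio."""
    total_vcpus = vms * vcpus_per_vm
    return total_vcpus, total_vcpus / physical_cores

total, ratio = vcpu_ratio(physical_cores=8, vms=4, vcpus_per_vm=2)
print(total, ratio)  # 8 vCPUs at a ratio of 1.0
```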

During the pilot and the early part of the rollout we found we could happily achieve around 45 users per VM, i.e. up to 180 per ESX host, with CPU levels on the host comfortably averaging below the 75% mark. As the rollout progressed and we retired the physical Citrix boxes, the load attracted by each VM was more typically around the 40-user mark, i.e. approx 160 users per physical host.

This was because we were able to replace three physical Citrix boxes with one ESX host containing four Citrix VMs: a 3:1 reduction in physical servers, but a one-third increase in the number of Citrix servers, which naturally means fewer users per Citrix server for the same total number of users. However, since we deployed 2 x vCPU machines, it also meant cost savings: half the Windows VMs required under the original plan to deploy 1 x vCPU VMs, which would have meant eight Citrix VMs per host.
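The consolidation arithmetic can be sketched out like so. The 3:1 replacement ratio and four-VMs-per-host figure are from the rollout described above; the twelve-box starting estate is an illustrative example, not our actual server count:

```python
# Consolidation arithmetic from the rollout: every 3 physical
# Citrix boxes became 1 ESX host running 4 Citrix VMs. The
# starting estate of 12 boxes below is an example figure only.

def consolidate(physical_boxes, boxes_per_host=3, vms_per_host=4):
    """Map old physical Citrix boxes to new hosts and Citrix VMs."""
    hosts = physical_boxes // boxes_per_host   # 3:1 physical reduction
    citrix_servers = hosts * vms_per_host      # 4 Citrix VMs per host
    return hosts, citrix_servers

hosts, servers = consolidate(physical_boxes=12)
print(hosts, servers)        # 4 hosts running 16 Citrix servers
print(servers / 12 - 1)      # one-third more Citrix servers than before
```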

One issue we did experience was vCPU peaks from rogue user processes, which would hog all the CPU for significant periods of time and give a bad experience to other users on that VM. This was believed to have happened previously in the physical Citrix deployment, but was more easily masked by the availability of physical cores. Most typically these processes would be Internet Explorer, quite often accessing Flash-based content. To mitigate this issue we used some application threading software on each VM to set maximum levels for CPU usage per user process. This performed very well, limiting these processes to a certain amount of vCPU and consequently not impacting other Citrix users’ performance. The decision to use 2 x vCPUs in a VM helped here too; the 1 x vCPU VMs in the pilot really suffered with this problem.
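The software we used is a commercial product, but the general principle behind this kind of per-process CPU capping is a duty cycle: within a short control period the limiter lets the process run for a slice proportional to its cap and holds it back for the remainder. A minimal sketch of that sum (my own illustration, not how any particular product is implemented):

```python
# A minimal sketch of the duty-cycle idea behind per-process CPU
# capping. To hold a runaway process (an IE/Flash session, say) at
# 25% of one vCPU, a limiter alternates running and suspending it
# within each short control period. Illustrative only.

def duty_cycle(cpu_cap, period_ms=100):
    """Split a control period into run and suspend slices for a cap."""
    run_ms = period_ms * cpu_cap
    return run_ms, period_ms - run_ms

run_ms, hold_ms = duty_cycle(cpu_cap=0.25)
print(run_ms, hold_ms)  # run 25 ms, suspend 75 ms per 100 ms period
```

The knock-on benefit of 2 x vCPUs is clear from this: even while a capped (or briefly uncapped) process is burning one vCPU, the second vCPU is still available to the other users on that VM.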