This blog post is really a short advertisement for someone else’s blog. When I was last at the Bristol (South-West) VMUG in the UK, my former co-author on the EUC book (Barry Coombs) asked me a very pertinent question. Being EUC focused, Barry was keen to see whitepapers and performance analysis that could be used to demonstrate the scalability claims made for EVO:RAIL. Of course, Barry is specifically focused in this case on Horizon View as an example, but the demand is one I would expect to see across the board for general server consolidation, virtual desktops and specific application types. Just to give you an idea of the publicly stated scalability numbers, this chart is a handy reminder:

[Chart: publicly stated EVO:RAIL scalability numbers]

At the time I pointed out that there is plenty of Virtual SAN performance data in the public domain. A really good example is the recent study benchmarking Microsoft Exchange mailboxes on Virtual SAN, along with posts about Microsoft SQL Server performance. I must say that both Wade Holmes and Rawlinson Rivera are doing some stellar work in this area.

http://blogs.vmware.com/vsphere/2014/09/vmware-virtual-san-performance-testing-part.html

http://blogs.vmware.com/vsphere/2014/09/vmware-virtual-san-performance-testing-part-ii.html

http://blogs.vmware.com/vsphere/2014/09/vmware-virtual-san-performance-microsoft-exchange-server.html

http://blogs.vmware.com/vsphere/2014/09/vmware-virtual-san-performance-microsoft-sql-server.html

Great though that is, Barry made what I think is an important point. EVO:RAIL represents quite a prescriptive deployment of Virtual SAN with respect to the hardware used and the quantities and proportions of HDD to SSD. From his perspective as an architect, he needs to be able to point to and justify the selection of any given platform. He needs to be able to judge how much performance a given deployment will deliver per appliance – and then demonstrate that the system will deliver that performance. It’s worth restating those HDD/SSD quantities and proportions, just in case you aren’t familiar with them.

Each server in the EVO:RAIL 2U enclosure (4 servers per enclosure) has 192GB RAM allocated to it – two 6-core pCPUs – and one 400GB SSD for the read/write cache in Virtual SAN, together with three SAS HDDs. For the WHOLE appliance that works out at 14.4TB of raw HDD capacity and 1.6TB of SSD. It’s important to remember that the Virtual SAN “Failures to Tolerate” policy is set to 1 by default – this means that for every VM created, a replica of its data is created elsewhere in the cluster. The result is that 14.4TB of raw storage becomes about 6.5TB usable. If you look at these numbers you will see that around 10% of the available storage is SSD-based, which largely reflects the best practices surrounding Virtual SAN implementations.
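The arithmetic above can be sketched as a quick back-of-the-envelope check. This is just my own illustration of the figures quoted in this post – the per-drive HDD size is inferred from the appliance total (14.4TB across 12 drives), not taken from a published spec sheet, and the slack factor is a rough assumption to land near the ~6.5TB usable figure:

```python
# Sanity-check of the EVO:RAIL storage figures.
# Assumption: 3 x 1.2TB SAS HDDs per node (inferred: 14.4TB / 12 drives).
NODES = 4
HDD_PER_NODE, HDD_TB = 3, 1.2
SSD_PER_NODE, SSD_TB = 1, 0.4

raw_hdd = NODES * HDD_PER_NODE * HDD_TB   # 14.4 TB raw HDD capacity
raw_ssd = NODES * SSD_PER_NODE * SSD_TB   # 1.6 TB flash cache

# Failures to Tolerate = 1 mirrors every object, so halve the raw capacity;
# the 0.9 factor is an assumed allowance for metadata/slack -> ~6.5 TB usable.
usable_tb = raw_hdd / 2 * 0.9

ssd_ratio = raw_ssd / raw_hdd             # roughly 10% flash-to-capacity

print(f"raw HDD: {raw_hdd:.1f} TB, usable: {usable_tb:.1f} TB, "
      f"SSD ratio: {ssd_ratio:.0%}")
```

Run as-is, this reproduces the numbers in the paragraph above: 14.4TB raw, roughly 6.5TB usable, and an SSD ratio of around 11%.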

So it’s with great pleasure that I can say the EUC team has been doing some independent validation of the EVO:RAIL platform specifically for Horizon View. The main takeaway – our initial claim of 250 virtual desktop VMs – holds true so long as the appliance isn’t housing other VMs at the same time. Basically, the EUC team tested a configuration where the appliance is dedicated to just running the virtual desktops, with the “virtual desktop infrastructure” components (the Connection Server/Security Server) running elsewhere. The other configuration they tested was a more “VDI-in-a-box” arrangement where both the virtual desktops AND the Horizon View server services were contained in the same appliance. As you might suspect, the number of supported virtual desktops comes down to about 200. Remember, however, that additional EVO:RAIL appliances can be added beyond this per-appliance figure to support up to 1,000 virtual desktop instances. As the chart above indicates, the assumption is that all the virtual desktops are identical and are configured with 2 vCPUs, 2GB RAM and a 30GB virtual disk.
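Those per-appliance numbers can be cross-checked with some simple arithmetic. The desktop counts and the per-desktop footprint come from this post; the RAM comparison is my own illustrative sanity check, not part of the EUC team’s methodology:

```python
# Per-appliance desktop densities quoted in this post.
DESKTOPS_DEDICATED = 250   # appliance runs only virtual desktops
DESKTOPS_IN_A_BOX = 200    # View Connection/Security servers co-resident
MAX_APPLIANCES = 4         # scale-out to the stated 1,000-desktop ceiling

scale_out_max = DESKTOPS_DEDICATED * MAX_APPLIANCES   # 1000 desktops

# Per-desktop footprint assumed in the chart: 2 vCPU, 2GB RAM, 30GB disk.
ram_needed_gb = DESKTOPS_DEDICATED * 2   # 500 GB of desktop RAM demand
ram_available_gb = 4 * 192               # 768 GB of RAM per appliance

print(f"max desktops: {scale_out_max}, "
      f"RAM: {ram_needed_gb}/{ram_available_gb} GB")
```

The RAM check simply shows the 250-desktop figure fits within the appliance’s 768GB (before accounting for hypervisor overhead or failover headroom, which is the subject of the next point).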

One question that occasionally comes up is the ratio of VMs per node. Sometimes people think the ratio is a bit small. But it’s important to remember that we need to factor resource management into any cluster – and work on the assumption of what resources would be available if a server is in maintenance mode for patch management, OR if you actually have a physically failed server. As ever, it’s best to err on the side of caution and build an infrastructure that accommodates at least N+1 availability – rather than being over-optimistic by assuming everything runs all the time without a problem…
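A minimal N+1 sizing sketch, using the four-node appliance as the example: plan your desktop density against the resources left when one node is down or in maintenance mode, not against the full cluster. The function name and the RAM-only focus are my own simplification for illustration:

```python
def usable_ram_gb(nodes: int, ram_per_node_gb: int,
                  reserved_nodes: int = 1) -> int:
    """RAM available for VMs once `reserved_nodes` hosts are held back
    for failure/maintenance headroom (N+1 by default)."""
    return (nodes - reserved_nodes) * ram_per_node_gb

# Optimistic: assume all four 192GB nodes are always up.
full_cluster = usable_ram_gb(4, 192, reserved_nodes=0)   # 768 GB

# N+1: size against three nodes, so one host can fail or be patched.
n_plus_1 = usable_ram_gb(4, 192)                         # 576 GB

print(f"full: {full_cluster} GB, N+1 budget: {n_plus_1} GB")
```

The same hold-one-back logic applies to CPU and Virtual SAN capacity, which is why the per-appliance desktop counts look conservative at first glance.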

For further information about the VMware EUC team’s work with EVO:RAIL, follow this link:

http://blogs.vmware.com/consulting/2014/10/euc-datacenter-design-series-evorail-vdi-scalability-reference.html