In my previous blog post I focused on the vSphere ‘metadata’ that makes up every vSphere configuration and, for that matter, every EVO:RAIL. Of course, what really matters is how we carve up and present the all-important physical resources. These can be segmented into compute, memory, storage and networking.

Compute:

The way compute resources are handled is pretty straightforward. EVO:RAIL creates a single VMware HA and DRS cluster without modifying any of the default settings. DRS is set to fully automated, with the “Migration Threshold” left at the center point. We do not enable VMware Distributed Power Management (DPM) because in a single EVO:RAIL appliance with four nodes this would create issues for VSAN and patch management – so all four nodes are on at all times. This remains true even if you created a fully populated 8-appliance system containing 32 ESXi hosts. To be fair, this is pretty much a configuration dictated by VSAN. You don’t normally make your storage go to sleep to save on power, after all…

[Screenshot: DRS settings for the EVO:RAIL cluster in the vSphere Web Client]
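
If you would rather verify this from the API than the Web Client, here is a minimal pyVmomi sketch (purely illustrative, with placeholder vCenter hostname and credentials) that reads the DRS and DPM settings from the cluster object:

```python
# Minimal sketch, assuming pyVmomi is installed; the vCenter hostname and
# credentials below are placeholders, not values EVO:RAIL actually uses.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab-only: skip certificate checks
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# Grab the first (and only) HA/DRS cluster in the inventory.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = view.view[0]

drs = cluster.configurationEx.drsConfig
dpm = cluster.configurationEx.dpmConfigInfo
print("DRS enabled:    ", drs.enabled)             # expect True
print("DRS automation: ", drs.defaultVmBehavior)   # expect 'fullyAutomated'
print("DRS threshold:  ", drs.vmotionRate)         # 3 is the default midpoint
print("DPM enabled:    ", dpm.enabled if dpm else False)  # expect False

Disconnect(si)
```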

Similarly, VMware HA does not deviate from any of the standard defaults. The main thing to mention here is that “datastore heartbeats” are pretty much irrelevant to EVO:RAIL, considering a single VSAN datastore is presented to the entire cluster.

[Screenshot: vSphere HA settings for the EVO:RAIL cluster]
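
Continuing the same illustrative sketch (reusing the cluster object from above), the HA settings live on the same configurationEx property, under what the vSphere API still calls “das”:

```python
# Continuation of the previous sketch: inspect the HA ("das") configuration.
das = cluster.configurationEx.dasConfig
print("HA enabled:          ", das.enabled)                  # expect True
print("Admission control:   ", das.admissionControlEnabled)
print("Heartbeat DS policy: ", das.hBDatastoreCandidatePolicy)
# With only one VSAN datastore presented to the cluster, datastore
# heartbeating has little to work with, whatever the policy says.
```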

Memory:

The EVO:RAIL appliance ships with four complete nodes, each with 192GB of memory. A fully populated EVO:RAIL environment with 8 appliances would present 32 individual ESXi hosts in a single VMware HA/DRS/VSAN cluster. That’s a massive 384 cores, 6TB of memory, and 128TB of RAW storage capacity. We let VMware DRS use its algorithms to decide on the placement of VMs at power-on, relative to the amount of CPU and memory available across the cluster, and we let DRS control whether a VM should be moved to improve its performance. No special memory reservations are made for the System VMs – vCenter, Log Insight or our Partner VMs.
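
The headline numbers are simple multiplication. This quick sketch shows the maths; the per-node core count and raw disk figures are assumptions implied by the totals above rather than anything EVO:RAIL reports directly:

```python
# Back-of-the-envelope maths for a fully populated 8-appliance EVO:RAIL.
# Per-node cores and raw disk are assumptions implied by the totals quoted.
appliances = 8
nodes_per_appliance = 4
cores_per_node = 12                # assumed: dual 6-core CPUs per node
memory_gb_per_node = 192
raw_tb_per_node = 0.4 + (3 * 1.2)  # 1 x 400GB SSD + 3 x 1.2TB HDD

hosts = appliances * nodes_per_appliance
print("ESXi hosts :", hosts)                              # 32
print("CPU cores  :", hosts * cores_per_node)             # 384
print("Memory (TB):", hosts * memory_gb_per_node / 1024)  # 6.0
print("Raw (TB)   :", hosts * raw_tb_per_node)            # 128.0
```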

Storage:

Every EVO:RAIL node ships with 1 x 400GB SSD and 3 x 1.2TB 10k SAS drives. When EVO:RAIL configures itself, it enrolls all of this storage into a single disk group per node. You can see these disk groups in the vSphere Web Client by navigating to the cluster and selecting >>Manage, >>Settings, >>Virtual SAN and >>Disk Management. Here you can see that each of the four EVO:RAIL nodes has a single disk group, with all disks (apart from the boot disk, of course) added into the group.

[Screenshot: Virtual SAN Disk Management view in the vSphere Web Client]
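
The same information is available programmatically. Here is another hedged pyVmomi sketch (reusing the connection and cluster object from the compute section) that walks each node’s disk group:

```python
# Sketch: list each node's single VSAN disk group and the disks claimed into it.
for host in cluster.host:
    for dg in host.config.vsanHostConfig.storageInfo.diskMapping:
        print(host.name)
        print("  SSD:", dg.ssd.canonicalName)        # the 400GB flash device
        for disk in dg.nonSsd:
            print("  HDD:", disk.canonicalName)      # the 3 x 1.2TB SAS drives
```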

As for the Storage Policies that control how VMs consume the VSAN datastore, a custom storage policy called “MARVIN-STORAGE-PROFILE” is generated during the configuration of the EVO:RAIL.

[Screenshot: the MARVIN-STORAGE-PROFILE storage policy]

With that said, this custom policy merely has the same settings as VSAN’s default; that is, a single rule sets “Number of Failures to Tolerate” equal to 1. The effect of this policy is that for every VM created, a copy of its data is kept on a different node elsewhere in the VSAN datastore. This means that should a node or disk become unavailable, there is a copy held elsewhere in the vSphere cluster that can be used. Think of it as being like a per-VM RAID1 policy.
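
The practical consequence is easy to miss: with “Number of Failures to Tolerate” set to 1, each object’s data is written twice (plus a tiny witness component), so a VM consumes up to roughly double its provisioned size from the raw VSAN capacity. A quick sketch, using a hypothetical 100GB disk:

```python
# Rough capacity maths for FTT=1, the only rule in MARVIN-STORAGE-PROFILE.
# Witness components are tiny, so they are ignored here.
ftt = 1
replicas = ftt + 1              # data is mirrored across (FTT + 1) nodes

vm_disk_gb = 100                # a hypothetical 100GB virtual disk
print(vm_disk_gb * replicas)    # ~200GB of raw VSAN capacity consumed
```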

It’s perhaps worth mentioning that there are slight differences between some QEPs’ EVO:RAIL appliances and others. These differences have NO impact on performance, but they are worth mentioning. There are two main types. In Type 1 the enclosure has 24 drive bays at the front. That’s 6 slots per node – and each node receives a boot drive, 1 x SSD and 3 x HDD, leaving one slot free. In a Type 2 system there is an internal SATADOM drive from which the EVO:RAIL boots – and at the front of the enclosure there are 16 drive bays. Each node uses four of those slots – for 1 x SSD and 3 x HDD. As you can tell, both Type 1 and Type 2 systems end up presenting the same amount of storage to VSAN, so at the end of the day it makes little difference. But it’s a subtle difference few have publicly picked up on. I think in the long run it’s likely all our partners will wind up using the 24-drive-bay system with an internal SATADOM device. That would free up all 6 drive bays for each node, and would allow for more spindles or more SSD.

Networking:

I’ve blogged previously, and at some length, about networking in these posts:

EVO:RAIL – Getting the pre-RTFM in place
EVO:RAIL – Under The Covers – Networking (Part1)
EVO:RAIL – Under The Covers – Networking (Part2)
EVO:RAIL – Under The Covers – Networking (Part3)

So I don’t want to repeat myself excessively here, except to say that EVO:RAIL 1.x uses a single Standard Switch (vSwitch0), and patches in both vmnic0 and vmnic2 for network redundancy. The vmnic1 interface is dedicated to VSAN, whereas all other traffic traverses vmnic0. Traffic shaping is enabled on the vSphere vMotion portgroup to make sure that vMotion events do not negatively impact management or virtual machine traffic.
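
For completeness, here is one more hedged pyVmomi sketch (again reusing the earlier connection) showing where the uplinks and the vMotion traffic-shaping policy can be read on any one of the nodes; the “vMotion” name match is an assumption about how the portgroup is labelled:

```python
# Sketch: inspect vSwitch0 uplinks and vMotion traffic shaping on one node.
host = cluster.host[0]
net = host.config.network

for vswitch in net.vswitch:
    # pnic holds uplink keys, e.g. '...PhysicalNic-vmnic0' and '...-vmnic2'
    print(vswitch.name, "uplinks:", vswitch.pnic)

for pg in net.portgroup:
    if "vMotion" in pg.spec.name:                 # assumed portgroup label
        shaping = pg.spec.policy.shapingPolicy
        if shaping:
            print(pg.spec.name, "shaping enabled:", shaping.enabled)
```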

Summary

Well, that wraps up this two-part series covering the different aspects of the vSphere environment once EVO:RAIL has done its special magic. Stay tuned for the next thrilling installment.