Acknowledgement: I’d like to thank William Lam for his work and his blog VirtuallyGhetto, without which this work and blogpost would not have been possible. In many ways my work is just a minor adjunct to his efforts: Thanks William! 🙂
Update: Since publishing this, William has released a new version of his template. Given the time and effort I put into validating my work, I’m going to keep mine available too. I’m still testing this on vSphere 6.7, which is what held me back from releasing it sooner…
Note: This template will not work with VMware Fusion, and has NOT been tested with VMware Workstation
IMPORTANT: This configuration has been tested in vSphere 6.7, but I had to wait for the public drop. Despite working with VMware technologies since 2004, I was not invited to the preview. Additionally, I cannot do any verification work on vSphere 7.0 because I haven’t been approved for the beta.
As you might gather from my recent post on using VMware “ovftool”, I’ve been looking at the process of importing and exporting VMs into an OVF/OVA format for a particular reason. One of my projects this year is to master VMware’s VSAN technology. You might remember I previously had a role in VMware’s Hyper-convergence Team, where I got more than plenty of exposure, but things have moved on considerably in the two years I’ve been away. My goal is to write in depth on the topic, beyond the shorter blog format. More about that at a later date!
I’ve got pretty good hardware in my home lab, but no SSD in my servers, and I’m not about to go dipping even further into my savings to change that. Not when I have hours’ worth of vExpert time on the Oracle Ravello Cloud, as well as access to SSD storage on my Synology NAS and local storage on the VMware ESXi host. It makes much more sense to go for a nested vSphere 6.5U1 or vSphere 6.7 configuration, where a mere edit to the .VMX file marks a .VMDK as an SSD rather than an HDD.
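To give a concrete (and hypothetical) example of that edit: if the capacity disk were attached as scsi0:2, the relevant .vmx entry is a single flag. The device number here is an assumption and will vary with your VM’s layout.

```
# Hypothetical .vmx entry - marks the VMDK attached at scsi0:2 as an SSD
# (virtualSSD = "1" reports SSD to the guest; "0" reports a spinning HDD)
scsi0:2.virtualSSD = "1"
```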
After a tiny amount of googling it became clear that vSphere 6.5U1 and vSphere 6.7 offer many enhancements for the nested vSphere homelab, particularly if it is running on top of a physical VMware ESXi host or vSphere HA/DRS Cluster. More specifically, fellow vExpert William Lam’s VirtuallyGhetto site rounds up these enhancements very neatly in a blogpost he wrote back in October, 2016:
So whether you’re running VMware ESXi or “Da Cloud” as the basis for your lab, it was time to take advantage of these improvements. I knew from memory that William had a range of different VSAN templates, which I thought I could use as the basis of a “new” Nested vSphere 6.x VSAN Cluster. Sadly, I’ve tried to contact William via Twitter and LinkedIn to discuss this, but I’ve not heard back from him. I think he must be busy with other projects or, like me, is having a sabbatical from work. If you’re interested, the “source” for my OVF template came from William’s blog here, which he posted back in Feb, 2015:
Note: I’ve only tested this on a physical VMware ESXi host. It won’t work on VMware Fusion, as Fusion 10 does not yet support the VMware Paravirtual SCSI Controller. I don’t have access to a physical Windows PC with the necessary CPU attributes to test this with VMware Workstation. I’d recommend that Fusion/Workstation customers use the slightly older templates provided by William, as I know mine definitely won’t work on Fusion; Workstation is a mystery.
OVF Virtual Hardware Specification:
Anyway, I’ve taken William’s original 6-node nested VSAN OVF template, which he built to demonstrate “Fault Domains”, made some changes, and spat the thing back out for others to use.
My personal goal whenever I have done nesting has been to have something that resembles the customer’s physical world as much as possible (I’m limited to two physical NICs in my physical homelab, which makes playing with Distributed Switches awkward…). As you might have gathered, I’m not someone who is naturally predisposed to compromise. I was even tempted to mirror the disk sub-system of the type of hyper-converged appliance that might have 24 disk slots divided over 4 nodes in a 2U chassis, like a VxRail. But I thought that was starting to get a bit silly!
Like me, you will want to deploy the OVA to the FASTEST storage you have. For me, my fastest storage happens to be my Synology NAS, which is backed by SSD drives.
Here’s an overview of my customisation so folks know what’s changed.
- I’ve increased the number of nodes from 6 to 8. This allows for proper use of maintenance mode as well as allowing for enough nodes to test “Fault Domains”. Remember, you can dial down the number of nodes to suit your physical resources.
- Upgraded to Virtual Hardware Level 13 (requires VMware ESXi 6.5 or VMware ESXi 6.7)
- Changed the Guest Operating System type from vSphere 5.x to vSphere 6.x
- The eight nodes allow maintenance mode to work correctly, permit fault domains, and enable the emulation of a stretched VSAN configuration, aka two 4-node VSAN clusters that look like they are in two locations. Remember, you only need one vCenter to do this…
- After some testing, I discovered the old RAM allocation of 6GB wasn’t sufficient for VSAN to configure itself in vSphere 6.5 and vSphere 6.7. So I had to increase the nested ESXi RAM allocation to 10GB, and this seemed to fix most issues. I did try 8GB of RAM, but with this configuration I found it was like having my teeth pulled. 10GB worked every time, smooth as a baby’s bottom. 🙂
- Removed the 2xE1000 NICs and added 4xVMXNET NICs. The additional NICs allow for easy vSwitch and DvSwitch play time! It means I can keep the “core” management networks on vSwitch0, and have virtual machine networking on a DvSwitch. Some might regard this as Old Skool and reminiscent of the 1Gb days when people thought 16 uplinks per host was a good idea. And they’d be right; of course, it is not an option in the physical world where the server might only have two 10Gb ports presented to it.
More Disks and Bigger Disks:
- Replaced the LSILOGIC controller with the VMware Paravirtual SCSI controller
- Increased the boot disk from 2GB to 8GB. This means you now get a local VMFS volume with logs stored on the disk. Despite the ability to PXE/SD-CARD/SATADOM boot VMware ESXi, it seems many SysAdmins still install to 2xHDD mirrored or to an FC-SAN LUN/volume. Initially with vSphere 6.5U1 this was set to 6GB, but with the release of vSphere 6.7 I had to up this value to 8GB.
- I’ve increased the size of the SSD and HDD disks, and also their number. The sizes reflect that, unless you have an All-Flash VSAN, you generally have more HDD-backed storage than SSD…
- There are an extra 2xSSD and 2xHDD per virtual/nested ESXi node. This allows for the setup of more than one “VSAN Disk Group”. However, as marking a disk as SSD or HDD in a nested environment is merely a flag, the template also supports All-Flash and single VSAN Disk Group configurations. That’s your call.
- All VMDKs are marked to be thinly provisioned to save on space…
- All SSD drives are 10GB and all HDD drives are 15GB to make it even easier to identify them, although VSAN itself does a good job of identifying disk type.
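Putting those disk bullets together, the per-node layout can be sketched as .vmx-style flags. The scsi0:x device numbering is an assumption based on the extraConfig switches used in the import commands later in this post.

```
# Assumed per-node disk layout (two disk groups):
# scsi0:0 - 8GB  boot disk (local VMFS + logs)
# scsi0:1 - 15GB HDD (disk group 1 - capacity)
# scsi0:2 - 10GB SSD (disk group 1 - cache)
# scsi0:3 - 15GB HDD (disk group 2 - capacity)
# scsi0:4 - 10GB SSD (disk group 2 - cache)
scsi0:1.virtualSSD = "0"
scsi0:2.virtualSSD = "1"
scsi0:3.virtualSSD = "0"
scsi0:4.virtualSSD = "1"
```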
Clearly this “Beefed Up” nested vSphere 6.x VSAN cluster is going to consume more resources than the one previously built by William Lam. But there are a couple of things to bear in mind.
- Firstly, if you only need 3 or 4 VSAN nodes, then simply do not power up the other nodes, or delete them
- Secondly, if you do not require the additional SSD/HDD disks and are not interested in multiple disk groups, delete those (before you enable VSAN!)
Download and Import:
The ONLY way to import the .OVA below is using the OVFTOOL. This is because it uses settings that a standard GUI import reports as unsupported. Specifically, the vCenter “Deploy OVF” file options have a problem with the SCSI Controller type called “VirtualSCSI”. I’m not sure WHY this happens. Either it’s broken, or it is only a partial implementation of the OVF import process and simply doesn’t recognise the VMware Paravirtual SCSI Controller.
To import the OVA, first download and install the OVFTOOL for your workstation type.
Then download the OVA file into a workstation environment that has access to your physical ESXi host or physical vSphere deployment:
For a VMware ESXi host managed by vCenter residing in a DRS Cluster:
Sample: To Deploy vApp to existing DRS/HA Physical Cluster:
"C:\Program Files\VMware\VMware OVF Tool\ovftool.exe" --acceptAllEulas --noSSLVerify=true --skipManifestCheck --allowExtraConfig ^
--extraConfig:scsi0:0.virtualSSD=0 --extraConfig:scsi0:1.virtualSSD=0 --extraConfig:scsi0:2.virtualSSD=1 ^
--extraConfig:scsi0:3.virtualSSD=0 --extraConfig:scsi0:4.virtualSSD=1 ^
-ds="esx01nyc_local" -dm="thin" --net:"VM Network"="VM Network" ^
"C:\Users\Michelle Laverick\Downloads\Nested-ESXi-8-Node-VSAN-6.5-1-5.ova" ^
"vi://email@example.com:VMware1firstname.lastname@example.org/New York/host/Cluster1"
Note: The virtualSSD settings are where all the excitement happens, and allow you to control the disk configuration of this single OVA. For instance:
This setting --extraConfig:scsi0:0.virtualSSD=0 --extraConfig:scsi0:1.virtualSSD=0 --extraConfig:scsi0:2.virtualSSD=1 --extraConfig:scsi0:3.virtualSSD=0 --extraConfig:scsi0:4.virtualSSD=1 would allow for two disk groups, creating a nested VSAN with one HDD and one SSD per disk group
This setting --extraConfig:scsi0:0.virtualSSD=1 --extraConfig:scsi0:1.virtualSSD=1 --extraConfig:scsi0:2.virtualSSD=1 --extraConfig:scsi0:3.virtualSSD=1 --extraConfig:scsi0:4.virtualSSD=1 would import the template as an All-Flash VSAN
This setting --extraConfig:scsi0:0.virtualSSD=0 --extraConfig:scsi0:1.virtualSSD=1 --extraConfig:scsi0:2.virtualSSD=0 --extraConfig:scsi0:3.virtualSSD=0 --extraConfig:scsi0:4.virtualSSD=0 would import the template for a single disk group with one SSD and 3xHDD.
Of course, the disk sizes would be a bit odd, with some HDDs being 10GB or 15GB, and in the case of All-Flash a mix of SSDs that were a combination of 10GB and 15GB. This isn’t the end of the world, and it still works. Remember, you can remove these disks if you’d rather not have 2xSSD for playing with VSAN “Disk Groups”. My example uses dm="thin" but you may get better performance by using “thick” as the disk format. Finally, once imported you could increase the size of the SSD and HDD for capacity purposes.
For a Stand-Alone ESXi host using local storage only as an example:
Sample: To Deploy vApp to Stand-Alone Physical ESXi Host:
for /l %x in (1, 1, 8) do "C:\Program Files\VMware\VMware OVF Tool\ovftool.exe" --acceptAllEulas --noSSLVerify=true --skipManifestCheck --allowExtraConfig --extraConfig:scsi0:0.virtualSSD=0 --extraConfig:scsi0:1.virtualSSD=1 --extraConfig:scsi0:2.virtualSSD=0 --extraConfig:scsi0:3.virtualSSD=1 --extraConfig:scsi0:4.virtualSSD=0 -ds="esx01nyc_local" -dm="thin" --net:"VM Network"="VM Network" "C:\Users\Michelle Laverick\Downloads\Nested-ESXi-8-Node-VSAN-6.5-1-5-ESXi-%x.ova" "vi://root:VMware1email@example.com"
NOTE: In this example a FOR /L loop is used to run the command 8 times (the loop starts at 1, increments by 1, and ends when it reaches 8: hence 1, 1, 8), each time processing an OVA file for each node. Stand-alone ESXi hosts have no understanding of the “vApp” construct, so this is a quick and dirty way to import a bundle of them using a consistent filename convention. I’m sure there MUST be a quicker, smarter and less dumb way to do this, and I will update this page if I find it…
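For anyone driving the import from a Mac or Linux jumpbox rather than Windows, a plain shell loop does the same job. This is a sketch only: the vi:// target and password are placeholders, and the leading echo previews each command instead of running it (remove the echo to perform the real imports).

```shell
#!/bin/sh
# Sketch: POSIX shell equivalent of the Windows FOR /L loop above.
# The vi:// host and credentials are placeholders - substitute your own.
deploy_all() {
  for i in 1 2 3 4 5 6 7 8; do
    # "echo" previews the command; delete it to actually import each node
    echo ovftool --acceptAllEulas --noSSLVerify=true --skipManifestCheck \
      --allowExtraConfig \
      --extraConfig:scsi0:0.virtualSSD=0 --extraConfig:scsi0:1.virtualSSD=1 \
      --extraConfig:scsi0:2.virtualSSD=0 --extraConfig:scsi0:3.virtualSSD=1 \
      --extraConfig:scsi0:4.virtualSSD=0 \
      -ds="esx01nyc_local" -dm="thin" \
      --net:"VM Network"="VM Network" \
      "Nested-ESXi-8-Node-VSAN-6.5-1-5-ESXi-${i}.ova" \
      "vi://root:PASSWORD@ESXI-HOST"
  done
}
deploy_all
```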
IMPORTANT: Don’t bother doing the import without the OVFTOOL, otherwise you will see this error. It shows the vSphere 6.5U1 and vSphere 6.7 Web Client’s inability to import a VM with the VMware Paravirtual SCSI Controller enabled, which it calls by the name “VirtualSCSI”.
- Importing the OVA and Controlling Where it Runs: Using the parameters of the ovftool import can give you more control. For instance, import to local storage if you want to “peg” each of the nodes to a specific host and protect yourself from accidentally filling up your home NAS. Plus, once pegged to a specific host, you can use the VM Start-Up and Shutdown options to always bring up the ESXi virtual nodes when you power up your physical ESXi host. Another option, if your physical layer is a vSphere Cluster, is to use your home NAS for performance, but import to portgroups that only exist on some hosts. For instance, “nested” only appears on esx01/esx02 but not on esx03 (which runs “infrastructure” services such as AD, vCenter, Jumpbox and so on). This emulates having a “management host” which is separate and distinct from the hosts running the nested ESXi nodes. They can never be “VMotioned” to your “management host” as it doesn’t have all the “nested” portgroups.
- Importing the OVA and Setting Disk Types: The template defaults to using “Thin Provisioning” for the VMDKs, which is sub-par for performance. If you know the size of the disk that will store the nested nodes, the flag --diskMode can override this default to switch to the mode “thick”. This pre-provisions the disks upfront, but you do need the capacity on the physical disk to do this.
- Importing the OVA and Setting Disk Sizes: The default disk set contains just-enough-disks to do multiple disk groups in VSAN. However, if “capacity” is your issue, I would delete disks 4/5 in the VM, and increase the size of disks 2/3, which are the SSD/HDD respectively. In my case I worked out the size of the volume/LUN/disk where the nested cluster is stored, divided by the number of nodes, and reserved 20% of that space for the SSD and the rest as HDD. Don’t forget each nested node will reserve a 10GB swapfile at power-on. Remember, if you come close to filling a disk, this is likely to trigger warnings and alarms in vCenter or the ESXi host. You can choose to ignore these if you are confident the storage of the nested VSAN is fully allocated.
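As a worked example of that arithmetic (using my own 458GB local VMFS volume; the figures are only illustrative), the split might be calculated like this:

```shell
#!/bin/sh
# Sketch: 20%-SSD / 80%-HDD sizing for the nested VSAN disks.
# DATASTORE_GB and NODES are example values - substitute your own.
DATASTORE_GB=458
NODES=8
PER_NODE=$((DATASTORE_GB / NODES))   # space available to each nested node
SSD_GB=$((PER_NODE * 20 / 100))      # ~20% reserved for the SSD (cache) disk
HDD_GB=$((PER_NODE - SSD_GB))        # the rest goes to the HDD (capacity) disk
echo "Per node: ${PER_NODE}GB -> SSD ${SSD_GB}GB, HDD ${HDD_GB}GB"
```

Remember to leave headroom for the 10GB per-node swapfile mentioned above.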
- Update your vCenter/ESXi hosts: Make sure you do a VMware Update of the vCenter and hosts, even if you have a relatively recent version of vCenter/ESXi. Bugs exist, and tools exist to fix bugs. Use them. I like to do my VCSA updates from the Management UI that listens on 5480, and then use Update Manager to remediate the hosts. The majority of this post was tested on vSphere 6.5U1, but it didn’t play nicely until the hosts were upgraded from VMware ESXi, 6.5.0, 5146846 to
- Later it was tested on the new release of vSphere 6.7. A number of changes were triggered by testing this nested VSAN setup on the new software, and these included:
- Even more memory, up from 10GB to 12GB!
- vSphere 6.7 requires you to set up VSAN using the new HTML5 client. More about this shift in a companion blogpost
- Virtual disk sizes increased. Without a larger disk, a spurious alarm called “VSAN max component size” was triggered. The cause comes from the fact that the disk size is small. Yes, a mini value triggers a “max” alarm. This is so wrong on so many levels it’s hard to explain.
- Confirm Networking: Make sure your physical switch supports Jumbo Frames, that the physical ESXi hosts are enabled for 9000 MTU, and that the vSwitch is set to Accept, Accept, Accept in the Security Settings
- Build Order: I would recommend creating an empty cluster, adding the hosts, patching the hosts, and then enabling VSAN afterwards
- Create TWO VSAN Disk Groups: Until ALL the disks are allocated to an existing disk group, or a second disk group is created, you will receive a warning from the VSAN “Configuration Assistant” that not all disks have been claimed
- Enable other Cluster Features: Once VSAN is functional, you can enable DRS first, followed by HA. I like to turn on each type of vSphere Cluster feature and confirm it works, rather than going all guns blazing and enabling everything at once.
- Finally, use the “Configuration Assist” to validate all is well. By default a warning will appear for “Hardware Compatibility”, as the VMware Paravirtual SCSI controller used in nested environments like this isn’t on the HCL. And never will be!
- If this “hardware compatibility” warning really offends you, it is possible to use the RVC console utility to disable tests that are false positives. For example, SSH into the vCenter that owns the VSAN cluster (in my case vcnj.corp.local) and run the RVC console; the following command will add this test of the HCL to the silent list. You do this by SSHing into the VCSA as “root” using an IP/FQDN, typing “bash”, running RVC, and authenticating with your firstname.lastname@example.org account…. Phew. I’ve never come across a more obtuse way of gaining access to a command-line tool!
vsan.health.silent_health_check_configure '1/New Jersey/computers/Cluster1' -a 'controlleronhcl'
Successfully add check “SCSI controller is VMware certified” to silent health check list for VSAN Cluster
With vSphere 6.7 I had a number of other bogus alarms that needed to be cleared using:
vsan.health.silent_health_check_configure '1/New Jersey/computers/Cluster1' -a 'smalldiskstest'
Note: vSphere 6.5 U1 passes with flying colours
Here we can see that the check “SCSI controller is VMware certified” has been skipped…
What will Michelle do next? I dunno. I like to keep folks guessing. Something fun. Something useful to myself and others. Enjoy.
I did some experimentation with the performance of my nested vSAN. It’s important to know that the performance reflects the non-native performance created by nesting – and is not a reflection of physical VSAN performance characteristics.
Firstly, here is how much a 4-node nested VSAN consumed on my hosts at rest (just running ESXi and not doing very much).
So despite my 4xvESXi needing 10GB each, they actually consumed 28GB, leaving 35GB free on my physical host (which has 64GB of RAM). They consumed about 45GB from a 458.25GB local VMFS drive. A clone of a Windows 2016 VM from my Synology NAS across the wire to this local “VSAN” storage was very slow. It took all night, and by the morning it still hadn’t completed the boot process.
I tried the same process with a “MicroLinux”, and despite it being tiny, the clone process was only a bit quicker, taking 5 mins for a 1GB thin virtual disk (350MB in use). Perhaps it would be quicker once the MicroLinux had initially been copied to the VSAN, taking the source template out of the loop. This did make a difference: the copy time came down to 2 mins for the MicroLinux. The verdict is that storing the nested VSAN on local storage in this way is sub-par for performance, but that’s because my local VMFS disk was just a SATA drive on a generic HP controller. Clearly, one would get better throughput with an SSD drive.
I decided to pivot my nested VSAN off local storage to my SSD-backed Synology NAS to see if performance improved. The Win2016 image took about 20-30 mins to clone a 40GB VM from the Synology’s iSCSI layer into the nested VSAN layer, and this was also quite reliable. So it looks as if the cloning process was the source of the bottleneck.
You will get much better performance using a “super-skinny” version of Linux, such as the MicroCore Linux available from my downloads page. This is the VM used in VMware Hands-on-Labs and contains a copy of the “Open Source” Tools. It’s not the skinniest of skinny Linuxes, but it is very functional. Navigate to Download, and scroll down to “Small Linux VMs for a Home Lab”.