May 14

Kickstart Scripted VMware ESXi Install to USB Media

I recently decided to switch back to using USB media for booting my VMware ESXi hosts. My main thought was that I want to use the local HDD for either some kind of VSA testing (such as StorMagic’s SvSAN), or else, when I have the budget, to buy in SSDs to make a physical VSAN cluster, rather than being dependent and reliant on a virtual nested VSAN setup. I went out and got some 32GB USB media to make sure they would be big enough to create the scratch partition.

I’m often wiping my VMware ESXi hosts to try out new builds – or to roll back to previous builds for testing purposes. For that I used the UDA together with some kickstart scripts to do the bulk of the customisation. Sometimes I just lay down a clean build with no customisation and use PowerShell scripting to build up the environment to the level I want it. I have that in a modular way that allows me to lay down some, but not all, of the configuration depending on my development needs.

Anyway, to script an install to USB media you can use this in the kickstart:

install --firstdisk=usb --novmfsondisk

The --firstdisk flag can actually take a series of arguments separated by commas. It is possible to specify the order of particular disk types (not just the first USB, HDD/SSD or SAN-based LUN) the installer searches for, and you can use this to set your own order for how disks are discovered. For instance --firstdisk=usb,remote,local would first try to install to USB, then to a LUN on a SAN, before lastly trying the local disk.

Also supported is specifying the model of the disk like so: --firstdisk=ST3120814A. I used this in the past with hyper-converged appliances that I was doing ground-zero resets on – and it just so happened the boot disk I was using had a unique model number. Of course this isn’t very helpful when a server contains disks that all have the same model number…

The other option is, instead of install --firstdisk, to use install --disk=mpx.vmhba1:C0:T0:L0. This allows you to indicate the disk by the adapter it’s configured for, and the controller/target used (note these values are often 0, and it’s just the L number value that changes). For many years we have referred to this as the vmhba syntax or “Runtime Name”. In the past it’s been hard to trust these values 100%, but I think they are more reliable nowadays, as the way the VMware ESXi host boots the vmkernel is VERY different now.

Probably the easiest way to find the MODEL name/number and vmhba syntax is using the esxcli commands like so:

esxcli storage core path list

Here’s a cut and paste of the USB output – 

usb.vmhba32-usb.0:0-t10.SanDisk00Ultra00000000000000000000004C530001270222101242 UID: usb.vmhba32-usb.0:0-t10.SanDisk00Ultra00000000000000000000004C530001270222101242 

Runtime Name: vmhba32:C0:T0:L0 

Device: t10.SanDisk00Ultra00000000000000000000004C530001270222101242 

Device Display Name: Local USB Direct-Access (t10.SanDisk00Ultra00000000000000000000004C530001270222101242) 

Adapter: vmhba32 Channel: 0 Target: 0 LUN: 0 Plugin: 

NMP State: active Transport: usb 

Adapter Identifier: usb.vmhba32 

Target Identifier: usb.0:0 

Adapter Transport Details: Unavailable or path is unclaimed 

Target Transport Details: Unavailable or path is unclaimed Maximum IO Size: 32768
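If the host is already up and managed, a rough PowerCLI equivalent can pull back the same model and Runtime Name details – this is just a sketch, and the host name here is only an example:

Get-VMHost esx01nyc.corp.local | Get-ScsiLun | Select-Object CanonicalName, RuntimeName, Vendor, Model, IsLocal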

 

I imagine a system with multiple USB media (!??!) would report C0:T0:L1 or else C0:T1:L1.

Oddly enough I found a second USB device added to my ESXi host doesn’t appear – not even when doing a manual install with an ISO attached via the HP iLO. I’m not sure why this is – but it may (or may not) be significant which USB slot the device gets inserted into, or there may be limits around the way the VMkernel enumerates USB devices – perhaps only enumerating the first USB memory stick found? This isn’t terribly important – but I will ask around my contacts and see what I can dig up. It’s a bit obscure, but I’m curious like that.

Category: vSphere | Comments Off on Kickstart Scripted VMware ESXi Install to USB Media
May 11

VSAN Maintenance Mode with PowerShell

I’m running VSAN in a nested configuration, and I generally shut down my homelab in the evening each day. I grew tired of the manual process of putting these nested nodes into maintenance mode before shutting them down, and then shutting down the host that they run on. I did a bit of googling for the PowerShell that does that – and saw quite complicated scripts. I’m pleased I went back to the documentation for PowerCLI, which is the primary source for the cmdlets:

https://code.vmware.com/doc/preview?id=6702#/doc/Set-VMHost.html

I discovered that the Set-VMHost cmdlet supports a -VsanDataMigrationMode switch with three different options:

  • Full
  • EnsureAccessibility
  • NoDataMigration
So putting the nested nodes into maintenance mode boils down to a short loop:

1..4 | Foreach {
 $Num = "{0:00}" -f $_
 ## Enter maintenance mode without evacuating any VSAN data
 Set-VMHost -VMHost esx"$Num"nj.corp.local -State "Maintenance" -VsanDataMigrationMode NoDataMigration
 }

Followed by a second loop to power them off:

1..4 | Foreach {
 $Num = "{0:00}" -f $_
 ## Shut down each nested node without prompting
 Stop-VMHost esx"$Num"nj.corp.local -Confirm:$False
 }
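And when the physical host comes back up the next morning, the reverse is a similarly small loop – a sketch, which assumes the nested nodes have been powered back on first:

1..4 | Foreach {
 $Num = "{0:00}" -f $_
 ## Take each nested node back out of maintenance mode
 Set-VMHost -VMHost esx"$Num"nj.corp.local -State "Connected" -Confirm:$False
 }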
Category: vSphere | Comments Off on VSAN Maintenance Mode with PowerShell
May 9

Making a little nest for your VMs

Nesting VMware ESXi has become easier and easier as the years roll by. In case you don’t know, “nesting” is the term used for running VMware ESXi inside a VM for development purposes. It’s the basis of VMware’s popular “Hands-on Labs”, as well as homelabs – and in recent years “nesting” has gone to the cloud with services like Ravello on Oracle Cloud Infrastructure.

In years gone by there were many settings needed at the physical and virtual layer – including hand-edits to configuration files – to make this work. With the advent of vSphere 6.5 U1 a lot of that disappeared. HOWEVER, there’s still plenty of work that needs to be done at the physical and virtual layer to allow networking to work properly, as well as to successfully pass some of the “health checks” around technologies like VMware VSAN, which has specific networking requirements before your web-client lights up with little green ticks of happiness.

Here’s a brief check list:

  1. Physical Switch needs:
    • MTU of 9000,
    • with VLAN Tagging enabled as necessary
  2. Physical ESXi host vSwitch needs
    • MTU of 9000
    • vSwitch security policy of Accept, Accept, Accept
    • Portgroups used by the virtual “nested ESXi” enabled for VLAN 4095 to allow VLAN tagging in the nested layer to pass-thru to the physical switch
  3. Virtual Nested ESXi vSwitch needs
    • MTU of 9000
    • vSwitch security policy of Accept, Accept, Accept
    • Portgroups for VMs and VMkernel enabled for VLAN Tagging as necessary

Standard Switches work well with nesting, and have the added benefit of not tying the host to a vCenter. This makes blowing away the nested layer after your lab period ends a breeze. It’s a slower and more awkward clean-up process if you’re using DvSwitches. To use the special MacLearn VIBs that improve network performance, DvSwitches are needed at the physical layer – so this isn’t an option if your main physical ESXi host is stand-alone and not managed by vCenter.

1. Physical Switch Needs:

In my case I have an HP ProCurve 1810G-24G switch – it’s not a bad unit, whisper quiet and simple to configure:

I have a simple VLAN configuration (mainly used to demonstrate/explain the VLAN tagging concept) but I also VLAN off my VMotion traffic.

2. Physical ESXi host vSwitch needs:

I set my MTU and security policy on the properties of vSwitch0, which means all the portgroups inherit those settings. It’s simple, quick and easy.

To do that from the VMware ESXi command line you could use ESXCLI – and these commands could be part of a kickstart install script:

## - Enabling Jumbo Frames on vSwitch0 to pass VSAN "Configuration Assistant" tests.
esxcli network vswitch standard set -m 9000 -v vSwitch0

## - Lower security on vSwitch0 to allow traffic to flow in a nested environment. 
esxcli network vswitch standard policy security set -v vSwitch0 -f=true -m=true -p=true

The VLAN configuration can be set when the portgroup is being created as is the case with VMotion being on VLAN10 like so:

esxcfg-vswitch -A "VMotion" vSwitch0
esxcfg-vswitch -v 10 -p "VMotion" vSwitch0
esxcfg-vmknic -a "VMotion" -i [VMOT_IP] -n 255.255.0.0 -p "VMotion" vSwitch0
esxcli network ip interface tag add -i vmk1 -t VMotion

Note: In this case [VMOT_IP] is a variable used as part of the UDA appliance.

For an existing portgroup, for example the “VM Network”, the VLAN value can be set to 4095, which passes the VLAN tagging thru from the nested layer to the physical layer:

esxcli network vswitch standard portgroup set -p "VM Network" --vlan-id 4095

Alternatively, if your physical ESXi hosts are managed by vCenter – you could do this with a PowerShell foreach loop like so:

1..3 | Foreach {
 $Num = "{0:00}" -f $_
 $vswitch0 = Get-VirtualSwitch -VMHost esx"$Num"nyc.corp.local -Name vSwitch0
 Set-VirtualSwitch -VirtualSwitch $vswitch0 -MTU 9000 -Confirm:$false
 }
1..3 | Foreach {
 $Num = "{0:00}" -f $_
 $vswitch0 = Get-VirtualSwitch -VMHost esx"$Num"nyc.corp.local -Name vSwitch0
 ## Set the Accept/Accept/Accept security policy on vSwitch0
 $vswitch0 | Get-SecurityPolicy | Set-SecurityPolicy -AllowPromiscuous $true -MacChanges $true -ForgedTransmits $true
 }

The VLAN configuration for an existing portgroup can be updated using the following:

1..3 | Foreach {
 $Num = "{0:00}" -f $_
 $vswitch0 = Get-VirtualSwitch -VMHost esx"$Num"nyc.corp.local -Name vSwitch0
 $VMNetworkPG = Get-VirtualPortGroup -VirtualSwitch $vSwitch0 -Name "VM Network"
 Set-VirtualPortGroup -VirtualPortGroup $VMNetworkPG -VLanId 4095
 }

3. Virtual Nested ESXi vSwitch needs:

The settings on vSwitch0 on the nested ESXi host mirror those of the physical host… so everything is aligned from vESX > pESX > pSwitch: everything is MTU 9000 with weakened security. The VLANs I have from 101-104 could be added at the physical and virtual level with the esxcli command like so:

esxcli network vswitch standard portgroup add --portgroup-name=VLAN101 --vswitch-name=vSwitch0

esxcli network vswitch standard portgroup set -p VLAN101 --vlan-id 101

Or if you want to do this via PowerShell, you could use the following method. This approach differs from my previous examples – it merely “gets” every ESXi host in a vCenter, and creates VLAN101, VLAN102, VLAN103 and VLAN104.

101..104 | Foreach {
 $Num = $_
 (Get-VMHost | Sort-Object Name) | Foreach {
  New-VirtualPortGroup -VirtualSwitch (Get-VirtualSwitch -Name vSwitch0 -VMHost $_) -Name VLAN$Num -VLanId $Num
  }
 }

A very common use of nested vSphere is to create a virtual/nested vSphere VSAN cluster – this is because not everyone can afford the servers and storage to build out VSAN at a physical level (3 hosts with 1xSSD, 1xHDD). For some time it’s been possible to virtualise ESXi, as well as mark specific VMDKs as either HDD or SSD. Inside the nested ESXi environment you will need a VMkernel portgroup enabled for VSAN. A simple thing to do would be to enable either the Management Network or the VMotion network for dual usage. Alternatively, you could set up a net-new VMkernel portgroup whose sole purpose is VSAN communications. That’s what I do (even if the traffic goes over the EXACT same NICs) as it makes it clear what the usage of each portgroup/vmkernel port is.

You can do this from the CLI using the following commands:

esxcfg-vswitch -A "VSAN" vSwitch0
esxcfg-vswitch -v 0 -p "VSAN" vSwitch0
esxcfg-vmknic -a "VSAN" -i [VSAN_IP] -n 255.255.255.0 -m 9000 -p "VSAN" vSwitch0
esxcli vsan network ipv4 add -i vmk2

Note: In my case [VSAN_IP] is a variable used in my kickstart sub-template as part of the UDA. The main thing is to know the numbering of the vmk ports – vmk0 is always the default management port – in my script the next vmk port created is for VMotion, and that therefore makes the VSAN port vmk2.

or with PowerShell:

1..3 | Foreach {
 $Num = "{0:00}" -f $_
 $vswitch0 = Get-VirtualSwitch -VMHost esx"$Num"nj.corp.local -Name vSwitch0
 ## Create a VSAN-enabled VMkernel port on each nested node
 New-VMHostNetworkAdapter -VMHost esx"$Num"nj.corp.local -VirtualSwitch $vswitch0 -PortGroup VSAN -IP 10.20.33.1"$Num" -SubnetMask 255.255.255.0 -VsanTrafficEnabled $true
 }
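Either way, a quick PowerCLI sanity check – a sketch using my host naming – shows which vmk port ended up tagged for VSAN traffic:

Get-VMHost esx01nj.corp.local | Get-VMHostNetworkAdapter -VMKernel | Select-Object Name, PortGroupName, IP, Mtu, VsanTrafficEnabled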

Category: vSphere | Comments Off on Making a little nest for your VMs
May 7

CURRENTLY: There is no supported upgrade path from vSphere 6.5 Update 2 to vSphere 6.7

https://blogs.vmware.com/vsphere/2018/05/vsphere-6-5-update-2-now-available.html

DISCLAIMER: Firstly, and most importantly, this issue is likely to affect a tiny proportion of customers. Despite what many think, as soon as a new version of vSphere hits the streets, for most large organisations it can take anything between 12-24 months to actually complete an upgrade from one flavour of vSphere to another. Therefore, by the time a customer has rolled out vSphere 6.5 U2, the chances are that there WILL be an upgrade path to vSphere 6.7, probably in the shape of a U1 or U2 release. The important thing to remember, if your intention is to upgrade to vSphere 6.7 in the short term for whatever reason, is to be aware of the title of this blogpost. You may as well skip U2 and head straight to vSphere 6.7 if that’s the case…

UPDATE (10th May, 2018): It looks like the VAMI interface on the VCSA is set to deploy vSphere 6.5 U2. It’s flagged as just a “bugfix” and its severity is set to “critical”. I’m researching what the situation is with Update Manager.

WHY DOES THIS HAPPEN: This has happened on more than one occasion in recent memory. I don’t personally think that precedent can be used to justify this. A much simpler reason exists. The reality is VMware now has a very complicated, richly interdependent and tightly-coupled series of software and services. Attempts have been made in the past to build a “train release” by which the core platform like vCenter/ESX is the “head” or “engine” of the train, and the related products that sit on top of it are coupled behind it on a release schedule that, if correctly managed, should arrive in the customer station at roughly the same time. Occasionally, a carriage gets de-coupled and is left in the sidings somewhere outside Colchester. For customers for whom that software package is critical, it can and will delay an upgrade process until the last part of the train gets into the station. This can be difficult where one SR calls for an upgrade to fix one problem, only to “break” a piece of software elsewhere.

This kind of sequencing of software releases is very, very difficult to do in a large multi-national software company where software is shrink-wrapped and installed on-prem. It’s actually a compelling argument for buying software like vSphere as-a-service, and heading off into the public cloud. You cease to have to worry about this kind of poop.

HAS THIS BEEN COMMUNICATED EFFECTIVELY? I’m sorry to say this, but no. Sadly, an almost Apple-like cloak of secrecy still envelopes VMware. There are good reasons for this in some cases, but in other situations like this it actively and effectively works against good communication. As an ordinary mortal I am supposed to read all the blogs, read all the release notes and listen to all the podcasts. I would say I’m pretty well connected to the VMware world – but I didn’t know this was happening. Heck, I didn’t know that vSphere 6.5 U2 was on its way – or that there wasn’t an upgrade path.

Apparently, to some that’s my fault, and my problem – and it isn’t the responsibility of VMware to shout from the rooftops as if the Four Horsemen of the Apocalypse were on their way. Before I rant on and lose the plot a little – I will point you back to this disclaimer. Look. I totally get it, and totally understand why and how this happens. And in many, many, many ways the vSphere 6.5 U2 release is a good, good, good thing, because it quickly deals with the issue associated with “WHY DOES THIS HAPPEN” – the train becoming derailed from the railway lines.

For those who are outside of vendorland much of this just seems bug-eyed weird. Although the “shit-sandwich” has become such a de facto standard in our industry, many seem quite happy to suck it up. Personally, I like to set my standards higher than that. Case in point: the download page for vSphere 6.5 U2 has one of those little yellow “read the KB” stickers – which incidentally disappears when you click “Go To Downloads”.

The downloads page for vSphere 6.7 (at the time of writing) doesn’t have even a tiny notification at all.

People will say “buyer beware” and that you should read the Release Notes. Honestly, who has time for that – Release Notes are like EULAs. Practically no-one reads them (I’m unable to quantify that statement, I know!).

Of course, if you fail to read all the blogs, the release notes and these stickers – then it’s “your own fault chum” according to some. Personally, I think vendors have to realise that in today’s busy world, with lots of competing channels searching for our eyeball time, something a little more noticeable is required.

With that. Read my disclaimer again…

Category: vSphere | Comments Off on CURRENTLY: There is no supported upgrade path from vSphere 6.5 Update 2 to vSphere 6.7
May 2

How to Import VMware Fusion VM into VMware vSphere

Executive Summary: There are a lot of methods. If at first you don’t succeed, give it another go. Second time lucky and all that.

I recently lost my Windows 10 jumpbox due to a power failure. It disappeared from my SAN array. I didn’t have a backup (which was stupid of me) and Windows wasn’t activated. In the meantime I realised I had an activated version of Windows 10 running in VMware Fusion. So I thought, heck, I might as well import it to vSphere, and avoid re-installing and that annoying watermark in the right-hand corner of my RDP desktop.

Ironically, after completing the move – because Windows 10 was sitting on a different virtualization hardware platform – it de-activated itself – and wouldn’t reactivate. So it was all a waste of time really – I could have just deployed a fresh copy of Windows 10.

Anyway, I’ve learnt my lesson the hard way – always back up your jumpbox and make sure any work in progress is held elsewhere such as Dropbox, Google Drive or OneDrive. That way if your jumpbox is toast you don’t lose that PowerShell script you have been perfecting all week.

METHOD1: Connect to Server from VMware Fusion

This initially failed when the target was vSphere 6.5 U1, but worked with vSphere 6.7. It’s hard to tell if the vSphere uplift was the decisive difference, or whether it was a case of just try, try and try again.

VMware Fusion has a “Connect to Server” option

Once connected and authenticated to the vCenter or ESXi host – you can drag and drop a VMware Fusion VM like so:

METHOD2: Export/Import OVA

Another method is to export the VM from Fusion into the OVF/OVA format. I prefer OVA as it’s pre-compressed and gives you a single file to deal with – whereas OVF gives you a text-based “descriptor” file and a whole bunch of VMDK files.

Select the VM in Fusion, and in the File menu, choose Export to OVF

Once exported you can log in to either VMware vCenter or VMware ESXi to import it there. Personally I’ve found the import/export process in vCenter/ESXi a bit 50:50… Again, this failed in vSphere 6.5 U1, but was successful in vSphere 6.7.

Select a host or cluster in vCenter, and choose Deploy OVF Template…

Watch out for the format of the disk – as it does not default to “Thin Provisioning”.

METHOD3: VMware Converter

I tried this a number of times, and it did not work – the VM would not boot. This is kind of odd, because I’ve had more success with this method for importing physicals into VMware Fusion than I have had with other methods of “importing” OSes into VMware Fusion…

Category: vSphere | Comments Off on How to Import VMware Fusion VM into VMware vSphere
May 1

What works best Clean Install or Dirty Upgrade?: VMware ESXi Upgrade on HP ML350e Gen8

One of the most common questions I was asked when I was an instructor was “what works best, clean install or an upgrade”. Without fail it was the kind of question everyone already had their own answer to. So I couldn’t add much to the debate, although I knew where I stood.

The upgrade process for vSphere, with its myriad of service dependencies, is getting longer and longer – and thornier and thornier. Even so, despite an in-place upgrade taking a long time – it is still the best route considering the impact of a clean installation. About the ONLY thing I’d be tempted to do a clean install of is VMware ESXi, but even that now comes with its own set of complications, not least if the customer uses VMware’s Distributed Switches.

So this blog is NOT an agonising blow-by-blow of an upgrade that would rival having your teeth extracted. It’s a high-level view of the upgrade (just of vCenter and ESXi and nowt else) and what that feels like.

VMware vCenter/PSC Upgrade on vCSA

This went peachy-smooth, and despite having the complicated “external” PSC and vCenter model to allow for “Enhanced Linked Mode”, the upgrade process was lengthy, but easy, and worked perfectly first time. It’s not so much an “upgrade” process, more of a duplication/mirror process – the upgrade wizard spawns a duplicate vCenter and PSC, and then sets about copying the data from one to the other. At the end their IPs are swapped over and the old vCenter/PSC is shut down. The only danger here is if you do reboots and bring up the old vCenter/PSC by mistake. I did that once, which was confusing. The nice thing about this process is that it’s non-intrusive to the old vCenter/PSC. So if something goes pear-shaped (as opposed to peachy-smooth) you have a failback position.

Verdict: Upgrade, fill your boots!

VMware ESXi Upgrade on HP ML350e Gen8

For me this was a less than good experience. I could only get one of my three hosts to upgrade – some issue with Update Manager in the end. By the end of it I realised that a kickstart scripted install from my UDA was the only way to go – so the dirty upgrade failed, and a clean install won the day. Perhaps my upgrade issues were caused by a much more intractable problem – my servers aren’t even on the HCL anymore!

I have 3x HP ML350e Towers – and they fell off the HCL around the time of vSphere 6.0 U3

Despite this – there was a release from HPE that was dubbed pre-Gen10 for vSphere 6.5 U1 – this was the build called “VMware-ESXi-6.5.0-Update1-6765664-HPE-650.U1.9.6.5.1-Nov2017.iso”. If you accidentally installed the build that came after it, called “VMware-ESXi-6.5.0-Update1-7388607-HPE-650.U1.10.2.0.23-Feb2018.iso”, in Feb of this year – you would be installing a code base designed for a Gen9/10 server – and that would make the fans go bizzy and cause alarms and alerts in vCenter…

Things don’t tend to go back on the HCL once they have left it – and I think I was lucky that there was a pre-Gen9 custom image to work with my Gen8 systems. I don’t expect to see this at all with vSphere 6.7.

Both the generic vSphere 6.7 image and the 6.7 image from HPE installed successfully to my HP ML350e Gen8s. But it’s not without complication. The fans go faster, so the lab makes more noise – and a fan alarm is triggered in vSphere 6.7.

Put simply, before the vSphere 6.7 install the fans were going much slower, and vCenter had no alarms.

Note: vSphere 6.5 U1 – Fans at 6%

 

Note: vSphere 6.7 – Fans at 25%

 

Note: vCenter lights up like a Christmas Tree!

Merely acknowledging the alarm or using “Reset to Green” only causes it to be temporarily dismissed before it comes back again like a bad smell or a bad penny. The only way to turn off this “false positive” is by disabling the alarm itself. Navigate to:

>> Global Inventory Lists >> vCenters >> Select vCenter FQDN >> Configure >> More >> Alarm Definitions

Search for the alarm called “Host hardware fan status”. Select it and click the Disable button.
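If you’d rather not click through all of that, the same thing can be done with a couple of PowerCLI cmdlets – a sketch, assuming the default alarm name hasn’t been changed:

Get-AlarmDefinition -Name "Host hardware fan status" | Set-AlarmDefinition -Enabled:$false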

 

Category: vSphere | Comments Off on What works best Clean Install or Dirty Upgrade?: VMware ESXi Upgrade on HP ML350e Gen8
April 30

Flash! Ahhhh! You Killed Everyone Of Us!

One of the more pleasing aspects of vSphere 6.7 is the extension of the HTML5 client to cover more functionality – 95% feature parity with ye olde Flash Web Client of teeth-extraction fame. Make no bones about it, the HTML5 client is the bees-knees – mainly because in comparison to its slow-coach predecessor it is blisteringly quick. Indeed, switching back to the previous version of the client (which replaced the previous version of the client, which worked perfectly fine) is an unpleasant reminder of how dreadful it was… However, there is something jivey about the way this simultaneous release of 3 different clients happened at one stage. This isn’t “Agile” development, it’s “Fragile” development at its worst – and it sends a confused, jagged and jarring message to customers. I would have preferred to have waited until 7.0 and a full release of a brand-new shiny client, rather than the HTML5 client being dribbled out in dribs and drabs.

I doubt very much if any customer purchases a product on the strength of the client front-end. I mean, if they did, with VMware they’d get a lot of bang for their buck when 3 of them were available. 😉 After all, most customers are either automating many of the tasks it offers with PowerShell, or using some other overlay that abstracts away the complexity of the full-fat Web Client.

With that said, there is a tier of customers for whom vSphere is just 10-20% of their daily admin tasks (if that) and for whom an intuitive and easy to use interface is a must. Because unlike hard-core VMware fanatics like myself, they only log into it if they have need to. There are some aspects of the new HTML5 client that irritate and some aspects that are broken.

Note: Is it me or is it bug-eyed weird that VSAN doesn’t exist as a service alongside DRS and HA, but instead is a category within which there are services?

I’ve yet to see either the Flash Web Client or the HTML5 Web Client successfully handle OVF/OVA exports and imports – I’ve had to resort to the use of OVFTOOL to get a process that is reliable and dependable.

Category: vSphere | Comments Off on Flash! Ahhhh! You Killed Everyone Of Us!
April 24

Reading The F****** Manual: Setting Up VMware PowerCLI 10.x

If you visit the vmware.com/download website you’d be forgiven for thinking that PowerCLI is now on version 6.5 Release 1.

However, you would be wrong, because a blogpost in Nov 2017 announced the release of 6.5.3, which is downloadable from this community page:

https://communities.vmware.com/community/vmtn/automationtools/powercli

But wait.

Hang on. Uh-oh.

Wrong again.

The latest and greatest version of PowerCLI is actually version 10, and it is downloadable using generic PowerShell commands from the PowerShell Gallery.

https://blogs.vmware.com/PowerCLI/2018/02/powercli-10.html 

https://www.powershellgallery.com/packages/VMware.PowerCLI/10.0.0.7895300

Confused?

Do keep up…

You’re supposed to just know this by reading some obscure blogpost which links you to the PowerShell Gallery with positively NO instructions on how to set it up. Man, someone could really do with writing TFM for this – to make it easy for new people to get hold of.

So… drum roll… here’s how it’s done.

1. Run PowerShell on your system – ensuring that you use Run as Administrator:

2. Type

Install-Module -Name VMware.PowerCLI

3. Choose [Y] to download and install the NuGet provider

4. Choose [A] to download all of the PowerCLI modules

5. Make a cup of tea whilst stuff downloads and unzips itself

6. Once this completes, you’re not done yet. Just because the modules have been downloaded and unzipped doesn’t mean they have been loaded. That’s something you’ll need to do yourself. If you simply type Connect-VIServer you will get an error message like so:

There are lots of modules that contain lots of cmdlets (is that “CMD-lets” or “Command-Lets”?)

VMware.VimAutomation.Sdk (≥ 10.0.0.7893910)
VMware.VimAutomation.Common (≥ 10.0.0.7893906)
VMware.VimAutomation.Core (≥ 10.0.0.7893909)
VMware.VimAutomation.Srm (≥ 10.0.0.7893900)
VMware.VimAutomation.License (≥ 10.0.0.7893904)
VMware.VimAutomation.Vds (≥ 10.0.0.7893903)
VMware.VimAutomation.Vmc (≥ 10.0.0.7893902)
VMware.VimAutomation.Nsxt (≥ 10.0.0.7893913)
VMware.VimAutomation.vROps (≥ 10.0.0.7893921)
VMware.VimAutomation.Cis.Core (≥ 10.0.0.7893915)
VMware.VimAutomation.HA (≥ 6.5.4.7567193)
VMware.VimAutomation.HorizonView (≥ 7.1.0.7547311)
VMware.VimAutomation.PCloud (≥ 10.0.0.7893924)
VMware.VimAutomation.Cloud (≥ 10.0.0.7893901)
VMware.DeployAutomation (≥ 6.5.2.7812840)
VMware.ImageBuilder (≥ 6.5.2.7812840)
VMware.VimAutomation.Storage (≥ 10.0.0.7894167)
VMware.VimAutomation.StorageUtility (≥ 1.2.0.0)
VMware.VumAutomation (≥ 6.5.1.7862888)

7. You cannot just run the command Import-Module VMware.VimAutomation.Core on a clean system UNTIL you set your Execution Policy like so:

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned

8. Followed by:

Import-Module VMware.VimAutomation.Core

9. Followed by opting out of the Customer Experience Program with:

Set-PowerCLIConfiguration -Scope User -ParticipateInCEIP $false

10. Before you make your first Connect-VIServer connection to your vCenter, you need to decide if you’re just going to accept the untrusted self-signed certificates that are generated during the install of VMware ESX and VMware vCenter – or whether you want to go thru the ball-ache of issuing your own certificates. In a homelab environment you’re probably just going to get rid of any warnings with:

Set-PowerCLIConfiguration -InvalidCertificateAction ignore -confirm:$false

There are two more steps left before you can use PowerCLI…

11. Firstly, wonder what became of PowerCLI 7, 8, and 9.

12. Wonder if this is actually progress… 🙂

13. But wait again. It IS progress and here’s why. Once you have gone through all this hoop jumping, updating the PowerCLI modules doesn’t mean downloading yet another package and installing it. It’s merely a case of running an update from within the PowerShell session like so:

Update-Module -Name VMware.PowerCLI
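To see what you’ve ended up with afterwards, a quick check with plain PowerShell does the trick:

Get-Module -Name VMware.* -ListAvailable | Select-Object Name, Version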

In my case this updated the modules from the 6.5 to the 6.7 release of vSphere:

ModuleType Version Name ExportedCommands
———- ——- —- —————-
Script 6.5.2.7… VMware.DeployAutomation {Add-DeployRule, Add-ProxyServer, Add-ScriptBundle, Copy-DeployRule…}
Script 6.5.2.7… VMware.ImageBuilder {Add-EsxSoftwareDepot, Add-EsxSoftwarePackage, Compare-EsxImageProfile, Export-EsxImageProfile…}
Manifest 10.0.0…. VMware.PowerCLI
Script 10.0.0…. VMware.VimAutomation.Cis.Core {Connect-CisServer, Disconnect-CisServer, Get-CisService}
Script 10.0.0…. VMware.VimAutomation.Cloud {Add-CIDatastore, Connect-CIServer, Disconnect-CIServer, Get-Catalog…}
Script 10.0.0…. VMware.VimAutomation.Common
Script 10.0.0…. VMware.VimAutomation.Core {Add-PassthroughDevice, Add-VirtualSwitchPhysicalNetworkAdapter, Add-VMHost, Add-VMHostNtpServer…}
Script 6.5.4.7… VMware.VimAutomation.HA Get-DrmInfo
Script 7.1.0.7… VMware.VimAutomation.HorizonView {Connect-HVServer, Disconnect-HVServer}
Script 10.0.0…. VMware.VimAutomation.License Get-LicenseDataManager
Script 10.0.0…. VMware.VimAutomation.Nsxt {Connect-NsxtServer, Disconnect-NsxtServer, Get-NsxtService}
Script 10.0.0…. VMware.VimAutomation.PCloud {Connect-PIServer, Disconnect-PIServer, Get-PIComputeInstance, Get-PIDatacenter}
Script 10.0.0…. VMware.VimAutomation.Sdk {Get-PSVersion, Get-InstallPath}
Script 10.0.0…. VMware.VimAutomation.Srm {Connect-SrmServer, Disconnect-SrmServer}
Script 10.0.0…. VMware.VimAutomation.Storage {Add-KeyManagementServer, Copy-VDisk, Export-SpbmStoragePolicy, Get-KeyManagementServer…}
Script 1.2.0.0 VMware.VimAutomation.StorageUtility Update-VmfsDatastore
Script 10.0.0…. VMware.VimAutomation.Vds {Add-VDSwitchPhysicalNetworkAdapter, Add-VDSwitchVMHost, Export-VDPortGroup, Export-VDSwitch…}
Script 10.0.0…. VMware.VimAutomation.Vmc {Connect-Vmc, Disconnect-Vmc, Get-VmcService, Connect-VmcServer…}
Script 10.0.0…. VMware.VimAutomation.vROps {Connect-OMServer, Disconnect-OMServer, Get-OMAlert, Get-OMAlertDefinition…}
Script 6.5.1.7… VMware.VumAutomation {Add-EntityBaseline, Copy-Patch, Get-Baseline, Get-Compliance…}

Script 6.7.0.8… VMware.DeployAutomation {Add-DeployRule, Add-ProxyServer, Add-ScriptBundle, Copy-DeployRule…}
Script 6.5.2.7… VMware.DeployAutomation {Add-DeployRule, Add-ProxyServer, Add-ScriptBundle, Copy-DeployRule…}
Script 6.7.0.8… VMware.ImageBuilder {Add-EsxSoftwareDepot, Add-EsxSoftwarePackage, Compare-EsxImageProfile, Export-EsxImageProfile…}
Script 6.5.2.7… VMware.ImageBuilder {Add-EsxSoftwareDepot, Add-EsxSoftwarePackage, Compare-EsxImageProfile, Export-EsxImageProfile…}
Manifest 10.1.0…. VMware.PowerCLI
Manifest 10.0.0…. VMware.PowerCLI
Script 6.7.0.8… VMware.Vim
Script 10.1.0…. VMware.VimAutomation.Cis.Core {Connect-CisServer, Disconnect-CisServer, Get-CisService}
Script 10.0.0…. VMware.VimAutomation.Cis.Core {Connect-CisServer, Disconnect-CisServer, Get-CisService}
Script 10.0.0…. VMware.VimAutomation.Cloud {Add-CIDatastore, Connect-CIServer, Disconnect-CIServer, Get-Catalog…}
Script 10.1.0…. VMware.VimAutomation.Common
Script 10.0.0…. VMware.VimAutomation.Common
Script 10.1.0…. VMware.VimAutomation.Core {Add-PassthroughDevice, Add-VirtualSwitchPhysicalNetworkAdapter, Add-VMHost, Add-VMHostNtpServer…}
Script 10.0.0…. VMware.VimAutomation.Core {Add-PassthroughDevice, Add-VirtualSwitchPhysicalNetworkAdapter, Add-VMHost, Add-VMHostNtpServer…}
Script 6.5.4.7… VMware.VimAutomation.HA Get-DrmInfo
Script 7.1.0.7… VMware.VimAutomation.HorizonView {Connect-HVServer, Disconnect-HVServer}
Script 10.0.0…. VMware.VimAutomation.License Get-LicenseDataManager
Script 10.1.0…. VMware.VimAutomation.Nsxt {Connect-NsxtServer, Disconnect-NsxtServer, Get-NsxtService}
Script 10.0.0…. VMware.VimAutomation.Nsxt {Connect-NsxtServer, Disconnect-NsxtServer, Get-NsxtService}
Script 10.0.0…. VMware.VimAutomation.PCloud {Connect-PIServer, Disconnect-PIServer, Get-PIComputeInstance, Get-PIDatacenter}
Script 10.1.0…. VMware.VimAutomation.Sdk
Script 10.0.0…. VMware.VimAutomation.Sdk {Get-PSVersion, Get-InstallPath}
Script 10.0.0…. VMware.VimAutomation.Srm {Connect-SrmServer, Disconnect-SrmServer}
Script 10.1.0…. VMware.VimAutomation.Storage {Add-KeyManagementServer, Copy-VDisk, Export-SpbmStoragePolicy, Get-KeyManagementServer…}
Script 10.0.0…. VMware.VimAutomation.Storage {Add-KeyManagementServer, Copy-VDisk, Export-SpbmStoragePolicy, Get-KeyManagementServer…}
Script 1.2.0.0 VMware.VimAutomation.StorageUtility Update-VmfsDatastore
Script 10.1.0…. VMware.VimAutomation.Vds {Add-VDSwitchPhysicalNetworkAdapter, Add-VDSwitchVMHost, Export-VDPortGroup, Export-VDSwitch…}
Script 10.0.0…. VMware.VimAutomation.Vds {Add-VDSwitchPhysicalNetworkAdapter, Add-VDSwitchVMHost, Export-VDPortGroup, Export-VDSwitch…}
Script 10.0.0…. VMware.VimAutomation.Vmc {Connect-Vmc, Disconnect-Vmc, Get-VmcService, Connect-VmcServer…}
Script 10.0.0…. VMware.VimAutomation.vROps {Connect-OMServer, Disconnect-OMServer, Get-OMAlert, Get-OMAlertDefinition…}
Script 6.5.1.7… VMware.VumAutomation {Add-EntityBaseline, Copy-Patch, Get-Baseline, Get-Compliance…}

If you wish to always load PowerCLI when you open PowerShell, you can create a shortcut to it like so:

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -noe -c "Import-Module VMware.PowerCLI"
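Alternatively – and this is just a sketch – you can have your PowerShell profile load the module every time you open a session:

if (!(Test-Path $PROFILE)) { New-Item -ItemType File -Path $PROFILE -Force }
Add-Content -Path $PROFILE -Value 'Import-Module VMware.PowerCLI'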

And finally, because PowerShell/PowerCLI is so chuffing great and functional, there’s even a GitHub edition of PowerShell for Linux and Mac (take that up the ass Perl and Bash!)

https://github.com/PowerShell/PowerShell/releases

 

Category: vSphere | Comments Off on Reading The F****** Manual: Setting Up VMware PowerCLI 10.x
April 12

New Nested vSphere6.5 and vSphere 6.7 VSAN Cluster OVF

Acknowledgement: I’d like to thank William Lam and his blog virtuallyGhetto – without whom this work and blogpost would not have been possible. In many ways my work is just a minor adjunct to his efforts: Thanks William! 🙂

Update: Since publishing this – William has released a new version of his template. Given the time and effort I put into validating my work – I’m going to keep mine available too. I’m still testing this on vSphere 6.7, which is what held me back from releasing it sooner…

Note: This template will not work with VMware Fusion, and has NOT been tested with VMware Workstation

IMPORTANT: This configuration has been tested in vSphere 6.7, but I had to wait for the public drop. Despite working with VMware technologies since 2004, I was not invited to the preview. Additionally, I cannot do any verification work on vSphere 7.0 because I haven’t been approved for the beta.

Backstory:

As you might gather from my recent post on using VMware “ovftool”, I’ve been looking at the process of importing and exporting VMs in an OVF/OVA format for a particular reason. One of my projects this year is to master VMware’s VSAN technology. You might remember I previously had a role in VMware’s Hyper-convergence Team, where I got more than plenty of exposure – but things have moved on considerably in the two years I’ve been away. And my goal is to write in depth on the topic – beyond the shorter-form blog format. More about that at a later date!

I’ve got pretty good hardware in my home lab – but no SSD in my servers – and I’m not about to go dipping even further into my savings to fix that. Not when I have hours’ worth of vExpert time on the Oracle Ravello Cloud – as well as access to SSD storage on my Synology NAS and local storage on the VMware ESXi host. It makes much more sense to go for a nested vSphere 6.5 U1 or vSphere 6.7 configuration where a mere edit to the .VMX file marks a .VMDK as an SSD rather than an HDD.
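For what it’s worth, that .VMX tweak can also be laid down with PowerCLI rather than hand-editing the file – a quick sketch, where the VM name “Nested-ESXi-01” is made up and scsi0:1 is the device being flagged:

## Flag the VMDK on scsi0:1 as an SSD (do this while the nested VM is powered off)
New-AdvancedSetting -Entity (Get-VM -Name "Nested-ESXi-01") -Name "scsi0:1.virtualSSD" -Value 1 -Confirm:$false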

After a tiny amount of googling it became clear that vSphere 6.5 U1 and vSphere 6.7 offer many enhancements for the nested vSphere homelab, particularly if it is running on top of a physical VMware ESXi host or vSphere HA/DRS cluster. More specifically, fellow vExpert William Lam’s virtuallyGhetto site rounds up these enhancements very neatly in a blogpost which he wrote back in October 2016:

https://www.virtuallyghetto.com/2016/10/nested-esxi-enhancements-in-vsphere-6-5.html

So whether you’re running VMware ESXi or in “Da Cloud” as the basis for your lab – it was time to take advantage of the improvements. I knew from memory that William had a range of different VSAN templates – which I thought I could use as the basis of a “new” nested vSphere 6.x VSAN cluster. Sadly, I’ve tried to contact William via Twitter and LinkedIn to discuss this – but I’ve not heard back from him. I think he must be busy with other projects, or, like me, is having a sabbatical from work. If you’re interested, the “source” for my OVF template came from William’s blog here, which he posted back in Feb 2015:

https://www.virtuallyghetto.com/2015/02/updated-vsan-6-0-nested-esxi-ovf-templates-for-64-nodes-all-flash-array-fault-domain-testing.html

Note: I’ve only tested this on a physical VMware ESXi host. It won’t work on VMware Fusion 10, which does not yet support the VMware Paravirtual SCSI Controller. I don’t have access to a physical Windows PC with the necessary CPU attributes to test this with VMware Workstation. I’d recommend to Fusion/Workstation customers that you use the slightly older templates provided by William, as I know mine definitely won’t work on Fusion; Workstation is a mystery.

OVF Virtual Hardware Specification:

Anyway, I’ve taken William’s original 6-node nested VSAN OVF template, which he built to demonstrate “Fault Domains”, made some changes – and spat the thing back out for others to use.

My personal goal whenever I have done nesting has been to have something that resembles the customer’s physical world as much as possible (I’m limited to two physical NICs in my physical homelab, which makes playing with Distributed Switches awkward…). As you might have gathered, I’m increasingly not someone who is naturally predisposed to compromises. I was even tempted to mirror the disk sub-system of the type of hyper-converged appliance that might have 24 disk slots divided over 4 nodes in a 2U chassis – like a VxRail. But I thought that was starting to get a bit silly!

Like me, you will want to deploy the OVA to the FASTEST storage you have. For me, my fastest storage happens to be my Synology NAS, which is backed by SSD drives.

Here’s an overview of my customisation so folks know what’s changed.

More Hosts:

  • I’ve increased the number of nodes from 6 to 8. This allows for proper use of maintenance mode as well as allowing for enough nodes to test “Fault Domains”. Remember you can dial down your number of nodes to suit your physical resources.

Bigger Hosts:

  • Upgraded to Virtual Hardware Level 13 (requires VMware ESXi 6.5 or VMware ESXi 6.7)
  • Changed the Guest Operating System type from vSphere 5.x to vSphere 6.x
  • Increased the number of nodes from 6 to 8 to allow for maintenance mode to work correctly, fault domains, and the emulation of a stretched VSAN configuration – aka two 4-node VSAN clusters that look like they are in two locations. Remember you only need one vCenter to do this…
  • After some testing – I discovered the old RAM allocation of 6GB wasn’t sufficient for VSAN to configure itself in vSphere 6.5 and vSphere 6.7. So I had to increase the nested ESXi RAM allocation to 10GB, and this seemed to fix most issues. I did try with 8GB of RAM, but with this configuration I found it was like having my teeth pulled. 10GB worked every time – smooth as a baby’s bottom. 🙂

Better Networking:

  • Removed the 2x E1000 NICs, and added 4x VMXNET – the additional NICs allow for easy vSwitch and DvSwitch play time! It means I can keep the “core” management networks on vSwitch0, and have virtual machine networking on a DvSwitch. Some might regard this as old skool and reminiscent of the 1Gb days when people thought 16 uplinks per host was a good idea. And they’d be right – and of course it’s not an option in the physical world where the server might only have two 10Gb ports presented to it.

More Disks and Bigger Disks:

  • Replaced the LSI Logic controller with the VMware Paravirtual SCSI controller
  • Increased the boot disk from 2GB to 8GB – this means you now get a local VMFS volume with logs stored on the disk. Despite the ability to PXE/SD-CARD/SATADOM boot VMware ESXi, it seems like many SysAdmins still install to 2xHDD mirrored or to an FC-SAN LUN/volume. Initially with vSphere 6.5 U1 this was set to 6GB, but with the release of vSphere 6.7 I had to up this value to 8GB.
  • I’ve increased the size of the SSD and HDD disks and also their number. The sizes reflect, conceptually, that unless you have all-flash VSAN you generally have more HDD-backed storage than SSD…
  • There are an extra 2xSSD and 2xHDD per virtual/nested ESXi node – this allows for the setup of more than one “VSAN Disk Group”. However, as the marking of a disk as SSD or HDD in a nested environment is arbitrary, the template also supports all-flash and single VSAN Disk Group configurations. That’s your call.
  • All VMDKs are marked to be thinly provisioned to save on space…
  • All SSD drives are 10GB and all HDD drives are 15GB to make it even easier to identify them – although VSAN itself does a good job of identifying disk type.

Clearly this “beefed up” nested vSphere 6.x VSAN cluster is going to consume more resources than the one previously built by William Lam. But there are a couple of things to bear in mind.

  • Firstly, if you only need 3 or 4 VSAN nodes – then simply do not power up the other nodes, or delete them
  • Secondly, if you do not require the additional SSD/HDD disks then simply delete those (before you enable VSAN!) – see the sketch below
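If it helps, here’s a rough PowerCLI sketch of that second point. It assumes the nested VMs follow a “Nested-ESXi-*” naming pattern and that the disks come back in the order boot, SSD, HDD, SSD, HDD – so double-check before running it:

Get-VM -Name "Nested-ESXi-*" | Foreach {
 ## Keep the boot disk plus the first SSD/HDD pair, and permanently delete the rest
 Get-HardDisk -VM $_ | Select-Object -Skip 3 | Remove-HardDisk -DeletePermanently -Confirm:$false
 }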

Download and Import:

The ONLY way to import the .OVA below is using the OVFTOOL. This is because it uses settings that a standard GUI import reports as unsupported. Specifically, the vCenter “Deploy OVF” file options have a problem with the SCSI controller type called “VirtualSCSI”. I’m not sure WHY this happens. Either it’s broken, or it’s only a partial implementation of the OVF import process and simply doesn’t recognise the VMware Paravirtual SCSI Controller.

To import the OVA, first download and install the OVFTOOL for your workstation type.

https://my.vmware.com/web/vmware/details?productId=614&downloadGroup=OVFTOOL420

Then download the OVA file into a workstation environment that has access to your physical ESXi host or physical vSphere deployment:

For a VMware ESXi host managed by vCenter residing in a DRS Cluster:

https://www.michellelaverick.com/downloads/Nested-ESXi-8-Node-VSAN-6.5-1-5.ova

Sample: To Deploy vApp to existing DRS/HA Physical Cluster:

"C:\Program Files\VMware\VMware OVF Tool\ovftool.exe" --acceptAllEulas --noSSLVerify=true --skipManifestCheck --allowExtraConfig --extraConfig:scsi0:0.virtualSSD=0 --extraConfig:scsi0:1.virtualSSD=0 --extraConfig:scsi0:2.virtualSSD=1 --extraConfig:scsi0:3.virtualSSD=0 --extraConfig:scsi0:4.virtualSSD=1 -ds="esx01nyc_local" -dm="thin" --net:"VM Network"="VM Network" "C:\Users\Michelle Laverick\Downloads\Nested-ESXi-8-Node-VSAN-6.5-1-5.ova" "vi://administrator@vsphere.local:VMware1!@vcnyc.corp.local/New York/host/Cluster1"

Note: The virtualSSD settings are where all the excitement happens, and allow you to control the disk configuration of this single OVA. For instance:

This setting --extraConfig:scsi0:0.virtualSSD=0 --extraConfig:scsi0:1.virtualSSD=0 --extraConfig:scsi0:2.virtualSSD=1 --extraConfig:scsi0:3.virtualSSD=0 --extraConfig:scsi0:4.virtualSSD=1 would allow for two disk groups – it creates a nested VSAN with one HDD and one SSD per disk group.

This setting --extraConfig:scsi0:0.virtualSSD=1 --extraConfig:scsi0:1.virtualSSD=1 --extraConfig:scsi0:2.virtualSSD=1 --extraConfig:scsi0:3.virtualSSD=1 --extraConfig:scsi0:4.virtualSSD=1 would import the template as an all-flash VSAN.

This setting --extraConfig:scsi0:0.virtualSSD=0 --extraConfig:scsi0:1.virtualSSD=1 --extraConfig:scsi0:2.virtualSSD=0 --extraConfig:scsi0:3.virtualSSD=0 --extraConfig:scsi0:4.virtualSSD=0 would import the template for a single disk group with one SSD and 3xHDD.

Of course the disk sizes would be a bit odd, with some HDDs being 10GB or 15GB – and in the case of all-flash, a mix of SSDs that were a combination of 10GB and 15GB. This isn’t the end of the world, and it still works. Remember you can remove these disks if you’d rather not have 2x SSD for playing with VSAN “Disk Groups”. My example uses -dm="thin" but you may get better performance by using “thick” as the disk format. Finally, once imported you could increase the size of the SSD and HDD disks for capacity purposes.
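And for that last point, a hedged one-liner – again assuming the “Nested-ESXi-*” naming pattern, and that the 15GB HDDs are the ones you want to grow:

Get-VM -Name "Nested-ESXi-*" | Get-HardDisk | Where-Object {$_.CapacityGB -eq 15} | Set-HardDisk -CapacityGB 100 -Confirm:$false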

For a Stand-Alone ESXi host using local storage only as an example:

https://www.michellelaverick.com/downloads/Nested-ESXi-8-Node-VSAN-6.5-1-5-ESXi-01-to-08.zip

Sample: To Deploy vApp to Stand-Alone Physical ESXi Host:

for /l %x in (1, 1, 8) do "C:\Program Files\VMware\VMware OVF Tool\ovftool.exe" --acceptAllEulas --noSSLVerify=true --skipManifestCheck --allowExtraConfig --extraConfig:scsi0:0.virtualSSD=0 --extraConfig:scsi0:1.virtualSSD=1 --extraConfig:scsi0:2.virtualSSD=0 --extraConfig:scsi0:3.virtualSSD=1 --extraConfig:scsi0:4.virtualSSD=0 -ds="esx01nyc_local" -dm="thin" --net:"VM Network"="VM Network" "C:\Users\Michelle Laverick\Downloads\Nested-ESXi-8-Node-VSAN-6.5-1-5-ESXi-%x.ova" "vi://root:VMware1!@esx01nyc.corp.local"

NOTE: In this example a FOR /L loop is used to run the command 8 times (the loop starts at number 1, increments by 1, and ends when it reaches 8 – hence 1, 1, 8), each time processing an OVA file for each node. Stand-alone ESXi hosts have no understanding of the “vApp” construct, so this is a quick and dirty way to import a bundle of them using a consistent filename convention. I’m sure there MUST be a quicker, smarter and less dumb way to do this – and I will update this page if I find it…
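For what it’s worth, one marginally smarter option is to drive the same ovftool command from PowerShell rather than CMD – a sketch along these lines, re-using the flags and paths from the sample above:

1..8 | Foreach {
 $ova = "C:\Users\Michelle Laverick\Downloads\Nested-ESXi-8-Node-VSAN-6.5-1-5-ESXi-$_.ova"
 & "C:\Program Files\VMware\VMware OVF Tool\ovftool.exe" --acceptAllEulas --noSSLVerify=true --skipManifestCheck --allowExtraConfig --extraConfig:scsi0:0.virtualSSD=0 --extraConfig:scsi0:1.virtualSSD=1 --extraConfig:scsi0:2.virtualSSD=0 --extraConfig:scsi0:3.virtualSSD=1 --extraConfig:scsi0:4.virtualSSD=0 -ds="esx01nyc_local" -dm="thin" --net:"VM Network"="VM Network" $ova "vi://root:VMware1!@esx01nyc.corp.local"
 }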

IMPORTANT: Don’t bother doing the import without the OVFTOOL, otherwise you will see this error – it shows the vSphere 6.5 U1 and vSphere 6.7 Web Clients’ inability to import a VM with the VMware Paravirtual SCSI Controller enabled, which they refer to by the name “VirtualSCSI”.

Tips:

  • Importing the OVA and controlling where it runs: Using the parameters of the ovftool import can give you more control. For instance – import to local storage if you want to “peg” each of the nodes to a specific host, and protect yourself from accidentally filling up your home NAS. Plus, once pegged to a specific host, you can use the VM Start-Up and Shutdown options to always bring up the virtual ESXi nodes when you power up your physical ESXi host. Another option, if your physical layer is a vSphere cluster, is to use your home NAS for performance – but import to portgroups that only exist on some hosts. For instance, “nested” only appears on esx01/esx02 but not on esx03 (which runs “infrastructure” services such as AD, vCenter, the jumpbox and so on). This emulates having a “management host” which is separate and distinct from the hosts running the nested ESXi nodes. They can never be VMotioned to your “management host” as it doesn’t have all the “nested” portgroups.
  • Importing the OVA and setting disk types: The template defaults to using “Thin Provisioning” for the VMDKs, which is sub-par for performance. If you know the size of the disk that will store the nested nodes, using the flag --diskMode can over-ride this default to switch to the mode “thick”. This pre-provisions the disks upfront, but you do need the capacity on the physical disk to do this.
  • Importing the OVA and setting disk sizes: The default disk set contains just enough disks to do multiple disk groups in VSAN. However, if “capacity” is your issue – I would delete disks 4/5 in the VM, and increase the size of disks 2/3, which are the SSD/HDD respectively. In my case I worked out the size of the volume/LUN/disk where the nested cluster is stored – divided by the number of nodes, and reserved 20% of that space for the SSD, and the rest as HDD. Don’t forget the nested node, when powered on, will reserve a swapfile at power-on of 10GB per node. Remember if you come close to filling a disk, this is likely to trigger warnings and alarms in vCenter or the ESXi host. You can choose to ignore these if you are confident the storage of the nested VSAN is fully allocated.
  • Update your vCenter/ESXi hosts: Make sure you do a VMware update of the vCenter and hosts – even if you have a relatively recent version of vCenter/ESXi. Bugs exist, and tools exist to fix bugs. Use them. I like to do my VCSA updates from the Management UI that listens on 5480 – and then use Update Manager to remediate the hosts. The majority of this post was tested on vSphere 6.5 U1, but it didn’t play nicely until the hosts were upgraded from VMware ESXi 6.5.0, build 5146846, to a newer build.
  • Later it was tested on the new release of vSphere 6.7. A number of changes were triggered by testing this nested VSAN setup on the new software, and these included:
    • Even more memory – up from 10GB to 12GB!
    • vSphere 6.7 requires you to set up VSAN using the new HTML5 client. More about this shift in a companion blogpost
    • Virtual disk sizes increased – without a larger disk a spurious alarm called “VSAN max component size” was triggered. The issue comes from the fact that the disk size is small. Yes, a small value triggers a “max” alarm. This is so wrong on so many levels it’s hard to explain.
  • Confirm Networking: Make sure your physical switch supports jumbo frames, that the physical ESXi hosts are enabled for 9000 MTU, and that the vSwitch is enabled for Accept, Accept, Accept on the security settings
  • Build Order: I would recommend creating an empty cluster, adding the hosts, patching the hosts – and then enabling VSAN afterwards
    • Create TWO VSAN Disk Groups: Until ALL the disks are allocated to an existing disk group or a second disk group is created – you will receive a warning from the VSAN “Configuration Assistant” that not all disks have been claimed
    • Enable other Cluster Features: Once VSAN is functional – you can enable DRS first, followed by HA. I like to turn on each type of vSphere cluster feature and confirm it works – rather than going all guns blazing and enabling everything at once.
  • Finally, use the “Configuration Assist” to validate all is well. By default a warning will appear for “Hardware Compatibility” as the VMware Paravirtual SCSI controller used in nested environments like this isn’t on the HCL. And it never will be!

  • If this “hardware compatibility” warning really offends you – it is possible, using the RVC console utility, to disable tests that are false positives. For example, SSH into the vCenter that owns the VSAN cluster (in my case vcnj.corp.local) and run the RVC console – this command will add this test of the HCL to the silent list. You do this by SSHing into the VCSA as “root” using an IP/FQDN, typing “bash”, running RVC and authenticating with your administrator@vsphere.local account…. Phew. I’ve never come across a more obtuse way of gaining access to a command-line tool!

vsan.health.silent_health_check_configure '1/New Jersey/computers/Cluster1' -a 'controlleronhcl'

Successfully add check “SCSI controller is VMware certified” to silent health check list for VSAN Cluster

With vSphere 6.7 I had a number of other bogus alarms that needed to be cleared using:

vsan.health.silent_health_check_configure '1/New Jersey/computers/Cluster1' -a 'smalldiskstest'

Note: vSphere 6.5 U1 passes with flying colours

Note: vSphere 6.7 passes with flying colours

Here we can see that the check “SCSI controller is VMware certified” has been skipped…

Conclusion:

What will Michelle do next? I dunno. I like to keep folks guessing. Something fun. Something useful to myself and others. Enjoy.

I did some experimentation with the performance of my nested vSAN. It’s important to know that the performance reflects the non-native performance created by nesting – and is not a reflection of physical VSAN performance characteristics.

Firstly, on my hosts this is how much a 4-node nested VSAN consumed at rest (just running ESXi and not doing very much).

So despite my 4x vESXi needing 10GB each – they actually consumed 28GB, leaving 35GB free on my physical host (which has 64GB of RAM). They consumed about 45GB from a 458.25GB local VMFS drive. A clone of a Windows 2016 VM from my Synology SAN across the wire to this local “VSAN” storage was very slow. It took all night and by the morning still hadn’t completed the boot process.

I tried the same process with a “MicroLinux” – and despite it being tiny, the clone process was only a bit quicker, taking 5 mins for a 1GB thin virtual disk (350MB in use). Perhaps it would be quicker once the MicroLinux was initially copied to the VSAN – taking the source template out of the loop. I did find this made a difference – the copy time came down to 2 mins for the MicroLinux. The verdict is that storing the nested VSAN on local storage in this way is subpar for performance – that’s because my local VMFS disk was just a SATA drive on a generic HP controller. Clearly, one would get better throughput with an SSD drive.

I decided to pivot my nested VSAN off local storage to my SSD-backed Synology NAS to see if performance improved. For instance, the Win2016 image took about 20-30 mins to clone a 40GB VM from the Synology’s iSCSI layer into the nested VSAN layer – and this was quite reliable. So it looks as if the cloning process was the source of the bottleneck.

You will get much better performance using a “super-skinny” version of Linux such as the MicroCore Linux available from my downloads page. This is the VM used in VMware Hands-on Labs and contains a copy of “open source” tools. It’s not the skinniest of skinny Linuxes but it is very functional. Navigate to Downloads, scroll down to “Small Linux VMs for a Home Lab”.

Category: vSphere | Comments Off on New Nested vSphere6.5 and vSphere 6.7 VSAN Cluster OVF
April 3

Using OVFTool

I’ve been using vSphere 6.5 for a few weeks and noticed a couple of oddities surrounding OVFs and OVAs. For those who may be unfamiliar, OVF/OVA is a packaging format that allows you to easily import and export VMs – and it’s a recognised format beyond VMware’s boundaries that has achieved almost universal industry acceptance. An OVA is just a single archive file which contains the OVF file itself (which is merely a text descriptor) together with the VMDKs that make up the VM.

Generally, I’ve found importing OVAs/OVFs is pretty easy and relatively reliable – even though you often tussle initially with web-browser security settings and the uploading/downloading of files. Sadly, I’ve found the export process can be a bit 50:50, with the resulting OVF sometimes unusable…

Firstly, we appear to have lost the ability to export to an OVA from the vSphere Web Client altogether – that’s a disappointment to me because I quite like the simplicity of the simple OVA bundle. It’s not clear to me if this is deliberate or an oversight by the vSphere Web Client development team, OR if it hints at some policy of moving away from OVA as a companion to OVF.

Secondly, the resulting OVF file that’s exported (though all the appropriate files were there) gave me an unpleasant error message 🙁
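A workaround I’ve leant on is to drive the export with ovftool itself – a hedged example, where the vCenter path, VM name and output file are all made up, so substitute your own:

"C:\Program Files\VMware\VMware OVF Tool\ovftool.exe" "vi://administrator@vsphere.local:VMware1!@vcnyc.corp.local/New York/vm/Win10-Jumpbox" "C:\Exports\Win10-Jumpbox.ova"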

Continue reading

Category: vSphere | Comments Off on Using OVFTool