One of the most common questions I was asked when I was an instructor was “What works best, a clean install or an upgrade?”. Without fail it was the kind of question everyone already had their own answer to, so I couldn’t add much to the debate, although I knew where I stood.

The upgrade process for vSphere, with its myriad of service dependencies, is getting longer and longer – and thornier and thornier. Even so, despite an in-place upgrade taking a long time, it is still the best route considering the impact of a clean installation. About the ONLY thing I’d be tempted to do a clean install of is VMware ESXi, but even that now comes with its own set of complications, not least if the customer uses VMware’s Distributed Switches.

So this blog is NOT an agonising blow-by-blow of the upgrade that would rival having your teeth extracted. It’s a high-level view of the upgrade (just vCenter and ESXi, and nowt else) and what that feels like.

VMware vCenter/PSC Upgrade on vCSA

This went peachy-smooth. Despite having the complicated “external” PSC and vCenter model to allow for “Enhanced Linked Mode”, the upgrade process was lengthy, but easy, and worked perfectly first time. It’s not so much an “upgrade” process, more of a duplication/mirror process – the upgrade wizard spawns a duplicate vCenter and PSC, and then sets about copying the data from one to the other. At the end their IPs are swapped over and the old vCenter/PSC is shut down. The only danger here is if you do reboots and bring up the old vCenter/PSC by mistake. I did that once, which was confusing. The nice thing about this process is that it’s non-intrusive to the old vCenter/PSC, so if something goes pear-shaped (as opposed to peachy-smooth) you have a failback position.
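If you’re paranoid (and after one wrong reboot, I was), it’s worth double-checking which appliance you’re actually talking to. Here’s a quick sanity check sketched with pyVmomi – the hostname and credentials are placeholders for your own environment:

```python
# Sanity check: confirm the version/build of the vCenter that answered.
# Hostname and credentials below are placeholders, not real values.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab only: skips certificate checks
si = SmartConnect(host="vcsa.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ctx)

about = si.content.about
print(about.fullName)              # e.g. "VMware vCenter Server 6.7.0 ..."
print(about.version, about.build)  # old and new appliances differ here

Disconnect(si)
```

Since the old appliance stays on 6.5 and the new one reports 6.7, the version alone tells you whether the wrong box came back up after a reboot.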

Verdict: Upgrade, fill your boots!

VMware ESXi Upgrade on HP ML350e Gen8

For me this was a less than good experience. I could get just one of my three hosts to upgrade – some issue with Update Manager, in the end. By the end of it I realised that a kickstart-scripted install from my UDA was the only way to go – so the dirty upgrade failed, and a clean install won the day. Perhaps my upgrade issues were caused by a much more intractable problem – my servers aren’t even on the HCL anymore!
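For anyone who hasn’t seen one, an ESXi kickstart file is refreshingly terse. Here’s a minimal sketch along the lines of what my UDA serves up – the IP addresses, password and hostname are made-up placeholders:

```
# Minimal ESXi kickstart sketch – all values below are placeholders
vmaccepteula
install --firstdisk --overwritevmfs
rootpw VMware1!
network --bootproto=static --ip=192.168.1.101 --netmask=255.255.255.0 --gateway=192.168.1.254 --nameserver=192.168.1.1 --hostname=esx01.lab.local --device=vmnic0
reboot
```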

I have 3x HP ML350e Towers, and they fell off the HCL around the time of vSphere 6.0 U3.

Despite this, there was a release from HPE dubbed “pre-Gen9” for vSphere 6.5 U1 – the build called “VMware-ESXi-6.5.0-Update1-6765664-HPE-650.U1.9.6.5.1-Nov2017.iso”. If you accidentally installed the build that came after it, called “VMware-ESXi-6.5.0-Update1-7388607-HPE-650.U1.10.2.0.23-Feb2018.iso”, in February of this year, you would be installing a code base designed for a Gen9/Gen10 server – and that would make the fans go berserk and cause alarms and alerts in vCenter…

Things don’t tend to go back on the HCL once they have left it – and I think I was lucky that there was a pre-Gen9 custom image to work with my Gen8 systems. I don’t expect to see a similar image at all for vSphere 6.7.
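A quick way to see which build actually landed on each host is to pull the product info through pyVmomi. A sketch, again with placeholder connection details:

```python
# List each host's ESXi product string and build number, so you can spot
# which image actually got installed. Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    product = host.summary.config.product  # AboutInfo for the ESXi host
    print(host.name, product.fullName, product.build)

Disconnect(si)
```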

Both the generic vSphere 6.7 build and the 6.7 image from HPE installed successfully to my HP ML350e Gen8s. But it’s not without complication: the fans run faster, so the lab makes more noise – and the hardware fan alarm is triggered in vSphere 6.7.

Put simply, before the vSphere 6.7 install the fans were running much slower, and vCenter had no alarms.

Note: vSphere 6.5 U1 – Fans at 6%

Note: vSphere 6.7 – Fans at 25%

Note: vCenter lights up like a Christmas Tree!
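Those percentages come from the host’s hardware health sensors, which vCenter surfaces through the API. If you want to pull the fan readings without clicking through the UI, here’s a sketch using pyVmomi (connection details are placeholders):

```python
# Print the fan sensor readings that vCenter sees for each host.
# Hostname and credentials are placeholders for your own environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)

view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    health = host.runtime.healthSystemRuntime
    if not health or not health.systemHealthInfo:
        continue
    for sensor in health.systemHealthInfo.numericSensorInfo:
        if sensor.sensorType == "fan":
            # currentReading is scaled by 10^unitModifier
            reading = sensor.currentReading * (10 ** sensor.unitModifier)
            print(host.name, sensor.name, reading, sensor.baseUnits)

Disconnect(si)
```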

Merely acknowledging the alarm or using “Reset to Green” only causes it to be temporarily dismissed before it comes back again like a bad smell (or a bad penny). The only way to turn off this “false positive” is to disable the alarm itself. Navigate to:

>> Global Inventory Lists >> vCenters >> Select vCenter FQDN >> Configure >> More >> Alarm Definitions

Search for the alarm called “Host hardware fan status”. Select it and click the Disable button.
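If you’d rather script it than click through the UI, the same alarm can be disabled via the API. A sketch with pyVmomi – the alarm name is as it appears in the UI, and the connection details are placeholders:

```python
# Disable the "Host hardware fan status" alarm definition programmatically.
# Connection details below are placeholders for your own environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)

alarm_mgr = si.content.alarmManager
# The built-in alarm definitions live on the root folder
for alarm in alarm_mgr.GetAlarm(entity=si.content.rootFolder):
    if alarm.info.name == "Host hardware fan status":
        spec = alarm.info          # AlarmInfo subclasses AlarmSpec
        spec.enabled = False       # same effect as the Disable button
        alarm.ReconfigureAlarm(spec=spec)
        print("Disabled:", alarm.info.name)

Disconnect(si)
```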