Phew!

My last blog here was in April of this year. And what a year it’s been. They say hindsight is 20/20, and who would have thought it would be the year 2020 that made that saying true. I take the view that if you are still working, healthy, and haven’t lost a loved one to COVID, then you are blessed. I’m pleased to say I’m one of that group – and my love & best wishes go out to anyone affected by the events of the last 12 months.

One thing I didn’t realize is how busy I would be in my first full year here at Droplet Computing – and it’s not just with doing 101s with new prospective customers, jumping on Zoom to assist with trials of the software, or cranking out interoperability guides and KBs. What makes my job so enjoyable is the hours I get to spend testing, playing, and creating skunkworks in my lab, which I try to turn into products or product features. It’s probably the most creative part of what I do – and often driven by customers bringing challenges to us that you would only encounter in the real world. It is deeply rewarding to watch your technology become battle-hardened out there in the world of the production environment – forged in the fire of real-world use. One thing I’ve learned in the last 18 months or so of being at “Droplet” is how flexible you need to be, able to turn your hand to any technology – which suits my character down to the ground. If variety is the spice of life, then working here is like a particularly hot vindaloo!

Anyway, I was inspired today to write a blog about my experiences of using an Intel NUC in my vSphere lab – a topic that would seem unrelated to running legacy or modern Windows apps in a containerized format – but stick with me and you’ll see where I’m going.

So in my time as a VMware Instructor and then an employee of the mighty VMW, I’ve largely kept away from the NUC-style form factor. The resources always seemed so limited (volume of memory/number of NICs), and also because back in my ESX 2.x.x days I got burned by the HCL. I kid you not, I once tried to install ESX 2.x.x on a Dell OptiPlex PC – and I’m proud to say that I’m one of a small number of people who ran ESX 2.x.x on Pentium III PowerEdge pizza boxes connected to SCSI (yes, SCSI!) JBODs! Home labs have come a long way since 2004. But I decided the time had come to take a punt on an Intel NUC, especially as I’d heard so many good and trusted people in the VMware Community speak highly of them (that’s you, William Lam, if you’re reading this 🙂 )

So this is what I added to my basket on Black Friday, with a plan to buy the gear – do some tests – do a demo to my management – and then claim it back through company expenses. In that “act now, and ask for forgiveness” kind of way [I’m happy to say my sneaky plot worked like a charm – thanks Barry…]

Parts-wise I played it safe, consulting Intel’s compatibility list for the NUC10i7FNH and only buying RAM and an NVMe drive that were on that list. I probably could have found cheaper RAM/storage from no-name brands coming out of the same factory somewhere in deepest, darkest China – but I figured that was pointless considering I’d only save a couple of bucks (or pounds) here or there. I took the decision to just fill one bank of the memory slots – in case this approach didn’t fly – but I’ve been so pleased with the experience, I will buy the other 32GB SODIMM. The only question is how to sneak that thru my expenses claim without the big boss man noticing (sorry, Barry…)

I won’t bore you with a 4hr unboxing video (WTF!) where I marvel at the quality packing materials (FFS!). Needless to say – four screws at the base and you’re in – slot in your RAM. Unscrew the mounting screw for the NVMe slot, which is spring-loaded, and once the NVMe drive is in place, screw it back down. Pop the lid back on and you’re good to go. I did the whole thing on the living room carpet whilst watching Bing Crosby in High Society. Not quite taking ESD precautions there, but hey – you only live once, right?

Gotchas – none really except:

  • I had to go into the BIOS/EFI (F2) and disable the on-board Thunderbolt port when installing ESXi 7
  • You get a weird TPM warning – apparently, there’s a BIOS update for that… you have to apply a firmware update, then disable Secure Boot – AND if that doesn’t work, clear the TPM keys by installing Windows before installing ESXi. I did the firmware update (press F7 and provide a USB stick with the .cap file on it) – but it still didn’t clear the warning. I decided to live with the warning and turned Secure Boot back on… Life’s too short to worry about a false-positive warning in a home lab…
  • vSphere 6.7 wouldn’t install – I got a weird EFI firmware message, apparently fixed by a firmware flash – it didn’t work for me, but I’m not bothered; who wants 6.7 when you can run 7 in your lab? (It would have been nice to have, though…)
  • Being old skool I burned a CD (how quaint) and plugged in a USB CD/DVD drive I have. I guess I could have put ESXi 7 on a USB thumb drive, but I couldn’t be bothered – I got ESXi 7 on the box, which is the main thing. The build I used was ESXi 7.0.1 (1685050804), which apparently has the NIC drivers built in, so no need to faff about with VIBs and Image Builder, blah, blah, blah.

So overall I really rate the NUC. It’s a great little ESXi home lab server for those of us operating so far up the stack that we really don’t care about low-level features – we just need a cheap & cheerful bundle of CPU/memory/disk to run the high-level stuff without burning precious hours trying to get some custom-built thing working, and tearing our hair out to get network drivers going.

I’ve half a mind to sell on my aging HP ML350e servers and use the money to replace them with NUCs. If I did that I could sell on the half-height rack I have in the garage, and bring the home lab back indoors, which would be quite nice considering it’s winter. That said, I do rather like having the HP iLOs for videos. My plan had been to just keep adding RAM to the HP ML350e’s and sweat that asset – which I still could do. It turned out that, in the end, it wasn’t memory I ran out of so much as my CPUs getting so old that they lacked the horsepower/feature set I needed for my testing. That’s a bit of a bummer, because you can always add more RAM, but I rather balk at the idea of ripping out CPUs. Server-class hardware is always pricier than PC-class – and I spent a pretty penny upgrading the HP ML350e from bare-bones (2nd CPU, extra DIMMs, and GPUs…). I guess that’s another trade-off with the NUCs: no chance of adding a GPU. So, having discussed this thru the medium of a blog, I think I will stick with the HP ML350e’s for now… add RAM as I run out, use them increasingly for management systems – and stick anything that needs performance on the NUC. The day the ML350e’s just won’t install whatever version of ESXi I’m running is the day I put them on eBay.


Now for the Droplet part. For some time I’ve been concerned that some of my performance experiences on virtual platforms weren’t very realistic. My old home lab was based on some Gen8 ML350e servers which are some 4 years old (maybe more…). Although they have 2x sockets, they only give me 8 cores of Xeon, and whilst they work fine for workhorse VMs (AD, Citrix, RDS, VMware View, and management nodes like SCCM and so on), it became clear I couldn’t use them to benchmark or get a feel for what our technology would do in the context of more modern hardware. So much so that I abandoned that altogether this year – and took to testing our technology on cloud-based environments provided by our partners, such as Amazon WorkSpaces/AppStream and Microsoft WVD (big shout and thank you to Toby & Co at Foundation-IT for standing up that environment for me!)

The downside (for me) of these cloud environments is that, as a rule, they don’t expose the Intel VT attribute to their instances (disclaimer: some Azure Linux instances do have Intel VT exposed), I assume mainly due to concerns about security and multi-tenancy. On physical Windows 10, Apple Mac, and Linux/Chromebook devices, Droplet Computing uses a whole range of different hardware accelerators based on the container type, the customer need, and how easy it is to get change management approvals (you know the score!)

So for me one question always was: if I expose hardware assistance to a vSphere VM running Windows 10, could I use a hardware accelerator to make our container “go faster”? I’d always failed to achieve this with my older hardware (BSODs and freezes a-go-go), and figured that with a new chipset I might get a different outcome. I also wanted to see what ducks needed to be in a row to make this work – what combination of CPU/vSphere/Windows 10 version would be necessary. I’m pleased to say I was successful in getting it to work on the Intel NUC. In fact, it flies – and I figure with a proper enterprise-class server, the result would most likely be even better.

The Setup

Firstly, I needed to enable “Expose hardware assisted virtualization to the guest OS” on the Windows 10 VM. You might find this is already in place if you’re in the habit of enabling Microsoft Virtualization-Based Security (VBS), as VBS requires access to Intel VT, along with other attributes such as Secure Boot and a TPM in the BIOS.
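If you’d rather script that than click through the vSphere Client, here’s a rough pyVmomi sketch of flipping the setting – the vCenter hostname, credentials, and the VM name “Win10-VDI” are just placeholders, and this is simply one way of doing it, not any official Droplet tooling:

```python
# Minimal sketch, assuming pyVmomi and a lab vCenter - placeholders throughout.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skips certificate checks
si = SmartConnect(host="vcsa.lab.local",
                  user="administrator@vsphere.local",
                  pwd="********",
                  sslContext=ctx)

# Find the VM by name with a simple container view search
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "Win10-VDI")
view.DestroyView()

# nestedHVEnabled is the API property behind the vSphere Client checkbox
# "Expose hardware assisted virtualization to the guest OS".
# The VM needs to be powered off for the reconfigure to take effect.
spec = vim.vm.ConfigSpec(nestedHVEnabled=True)
vm.ReconfigVM_Task(spec)

Disconnect(si)
```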


Secondly, I needed the right version of Windows 10. As you might know, there are two channels for Windows 10 – the Semi-Annual Channel, which is updated twice a year, and the Long-Term Servicing Channel (LTSC). Whilst the LTSC still languishes on version 1809 of Windows 10, there have been at least 4 major releases on the semi-annual side, and we are currently on 20H2.

https://docs.microsoft.com/en-us/windows/release-information/ 

I started my tests using 20H2, but I have successfully made this configuration work with 1909 and 2004.

Aside: BTW, there appears to be a bug in Windows 10 1809. I suspect the bug is “well known” and the fix will be “upgrade your version of Windows 10” rather than an individual hotfix.
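If you want to double-check exactly which release a guest is on before you start, something like this quick Python sketch (reading the version values out of the registry) does the job – note that DisplayVersion only exists from 20H2 onwards, hence the fall-back to ReleaseId:

```python
# Quick sketch: report the Windows 10 release (e.g. 1909, 2004, 20H2) from the registry.
import winreg

key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                     r"SOFTWARE\Microsoft\Windows NT\CurrentVersion")
try:
    version, _ = winreg.QueryValueEx(key, "DisplayVersion")   # e.g. "20H2" (20H2 onwards)
except FileNotFoundError:
    version, _ = winreg.QueryValueEx(key, "ReleaseId")        # e.g. "1909" or "2004"
build, _ = winreg.QueryValueEx(key, "CurrentBuildNumber")     # e.g. "19042"
winreg.CloseKey(key)

print(f"Windows 10 version {version} (build {build})")
```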

Thirdly, before installing our software I enabled a hardware accelerator inside the guest operating system.
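As a quick sanity check that the VT flag has actually made it through to the guest – this is just my rough check using the Win32 IsProcessorFeaturePresent call, not how our software detects it – you can ask Windows directly:

```python
# Rough sketch: ask Windows whether virtualization support is enabled/exposed to this guest.
import ctypes

PF_VIRT_FIRMWARE_ENABLED = 21  # Win32 constant: "Virtualization is enabled in the firmware"

if ctypes.windll.kernel32.IsProcessorFeaturePresent(PF_VIRT_FIRMWARE_ENABLED):
    print("Hardware virtualization is exposed to this guest")
else:
    print("No VT exposed - check the 'Expose hardware assisted virtualization' setting")
```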

And that’s it… our software automatically picks up the hardware enablement inside the VM, and our container is nice and nippy. This particularly benefits our 32-bit container, which we use a LOT with legacy applications. We do have a 64-bit container (for customers who have legacy apps which only shipped as 64-bit and for which there was no 32-bit version). And with our modern container (which needs more grunt to run) for modern applications, the hardware acceleration makes all the difference. So the breakthrough for us is being able to run all our container types in a Windows 10 VDI VM without concerns about performance.

As a side benefit, it should also mean our partners and I can use our vSphere labs more flexibly for tests and demos.

Sweet! 🙂

My next step is to QA this with our partners on a wide range of hardware before we consider giving the green light to this configuration and officially supporting it – and I also need to test RDSH guests running Windows Server 2016 and Windows Server 2019 for platforms like Citrix Virtual Apps, VMware Horizon View, and Microsoft RDS.