IMG_2977
Please Note: The guitar and amp are non-standard devices, and are not supported by HP in this configuration. If you want to buy the guitar & amp from HP.com you need HP838840902 and HP999200026 respectively…

As you might know, I recently quit my colocation and moved back to a homelab. I want to blog about why I picked the HP ML350e series, and my experiences. And NO, this isn’t a sponsored post. I bought the gear out of the money that was left in my business account. I’m lucky because I’m still VAT registered (VAT being roughly what we Brits have instead of “sales tax”) and I’m able to reclaim that. VAT in the UK is currently 20% (yes, I know! Personally, I consider it a tax on the poor), so that was the best part of £900 saved. If I wasn’t VAT registered I would have probably looked for something much cheaper. I think this is a legit business expense – I still run the old company, and return tax on the advertising – plus I was incurring massive personal expense running the colo. This way I get to reduce my company’s costs (from colo to homelab), which helps me carry on doing work like the “Back To Basics” series…

So firstly, like all towers, this is a big box. It’s slightly smaller than my old Lenovo TS200 that I bought about 3 years ago, but it’s still a beast. So the WAF (Wife Acceptance Factor) is quite low. Literally, my wife said that there were some “bloody big boxes” in the hallway waiting for me. Fortunately, I do have a spare bedroom and utility room where they can live. They are relatively quiet, and you could live with them in a man cave – but remember, if you have a couple of them plus a switch (with a fan), a PDU (with a fan) and a couple of NAS boxes (also with fans), that can all build up.

IMG_2948
Note: Some Bloody Big Boxes. WAF Factor = 1/5…

Incidentally, I have NOT fitted the feet to the server, and they do sit on the carpet floor. I did this to reduce the footprint, but I’m wondering if that’s entirely wise from a cooling perspective. You could put them on their side rather than stood vertically – but there is a label that says the motherboard side of the case can get hot…

IMG_2968
Note: I think this might be just an “elf & safety” warning. The side doesn’t get remotely warm…

One slightly irritating thing is that each ML350e comes with a keyboard and mouse. I’m looking to offload these to a local school or charity. The reason I’m irritated is that I would MUCH rather have “Advanced” ILO built into the product than a keyboard/mouse. I mean, what good is a mouse with VMware ESXi? I’m not singling out HP here, by the way. It seems like all the major server vendors feel that having a remote console on a physical server is somehow an “advanced” feature in need of a license – or, in the case of Lenovo, a little blue dongle that plugs into the motherboard (excuse me, what ****ing century are we in?). Personally, I think ALL the server vendors should bundle this nowadays. It’s like getting a printer with no cables and half-filled demo cartridges….

IMG_2951
Note: Don’t give me keyboards, gimme Advanced ILO! 😀

So why pick these over something much more compact – like the Intel NUC, ShuttlePC, Mac Mini or HP MicroServer – all of which have a very small footprint, low cost and so on? Initially, I had hoped the new HP MicroServer Gen8 would support >16GB RAM. But sadly, because they only have 2 DIMM slots and don’t support RDIMMs, they are limited to 16GB. There is a ShuttlePC that will take RDIMMs, but it only has two slots and only supports a max of 32GB.

As we ALL know, RAM is something you always run out of in the lab. So I was looking for a box that would be future-proofed, and allow for CPU/memory upgrades to prolong its shelf life. I also wanted to enjoy the SAME amount of RAM in the new homelab that I had enjoyed in the colo (5x16GB + 4x12GB = 128GB). I certainly didn’t want to go back to having LESS memory than I had previously. In the end there was a compromise on that front. My 3x ML350e have 2x16GB DIMMs each, giving me 96GB across my cluster. But these boxes will go as high as 192GB per server with two procs (12x16GB). So my plan is to add RAM along the way, most likely running them beyond the warranty. If they stop working with VMware ESX 9.x, I will just put the last supported version of VMware ESX on the physical hosts – and go for a Nested ESX (or vInception) model.

If you go down this route, I have a couple of tips, tricks and warnings:

Use the HP .ISO from vmware.com:
Customers installing ESXi 5.5 should download the HP .ISO from vmware.com. A generic copy of ESXi 5.5 resulted in a PSOD (purple screen of death). I have not tried U1 or U2, so mileage may vary. This does worry me a little, as I don’t want to have to wait for HP to release new .ISOs as new versions of VMware ESXi come out. I guess I will just have to “suck it and see”. This is particularly critical if I want to work with beta releases of VMware ESXi. Perhaps I’m over-reacting? I could always go for a nested ESXi configuration for any beta work, after all.

Screen Shot 2014-01-24 at 14.41.51
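Incidentally, if you want to double-check which image a host is actually running after the install, the ESXi shell will tell you. Here’s a quick sketch (the exact profile name will obviously depend on which HP build you downloaded):

    # Show the image profile the host was installed from –
    # an HP customised build reports an HP-branded profile name
    esxcli software profile get

    # List the HP-specific VIBs (drivers, agents and so on) that came with the HP .ISO
    esxcli software vib list | grep -i hp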

The RAID Controller is recognised:
With the HP custom .ISO the onboard RAID controller card is recognised; without it, it is not… It’s possible to RAID up the disk(s) using the controller card – and this does appear in the installer should you want to install to a physical disk. In my case I installed VMware ESXi to a memory stick (kindly provided by VMware Education at the London VMUG last week!). I wanted to leave the disks blank, to be used either by VSAN or for dual-booting the boxes to another hypervisor (such as Hyper-V, KVM or XenServer). I’ve done this in the past to toggle between VMware ESX and VMware ESXi, back in the 3/4 days.
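As a quick sanity check that both the controller and the memory stick are visible to ESXi, something like this from the ESXi shell does the job (the output will obviously vary with your hardware):

    # List the storage adapters ESXi has loaded a driver for –
    # with the HP .ISO the onboard Smart Array controller should appear here
    esxcli storage core adapter list

    # List the devices behind them – the memory stick and any logical drives
    esxcli storage core device list | grep -i "Display Name"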

But not supported (AFAIK) with VSAN – Workarounds exist:
VSAN – It looks like the RAID controller card does not support pass-thru mode (where the card presents RAW, unformatted storage without RAID levels). Indeed, it’s not on our BETA HCL for VSAN. With that said, home labbers could create one RAID0 volume on each disk, and that should work, but it would most likely be regarded as sub-optimal in terms of performance. It is still useful for learning purposes, though (I’ll know more when we GA, and I have time to play more). If you are doing this, it’s recommended to turn off the controller caching – see the sketch below. This was a tip that I got from Erik Bussink, who recently presented at the London VMUG about his use of Infiniband in his homelab.
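For what it’s worth, here’s a rough sketch of that RAID0-per-disk workaround using HP’s hpssacli tool (it ships in the HP customised image). The slot number and drive IDs below are placeholders for illustration – check what “ctrl all show config” reports on your box – and I haven’t validated any of this against the VSAN beta myself:

    # See what the controller can see (slot number and physical drive IDs)
    /opt/hp/hpssacli/bin/hpssacli ctrl all show config

    # Create one single-drive RAID0 logical drive per physical disk
    # (drive IDs 1I:1:1 and 1I:1:2 are made up – substitute your own)
    /opt/hp/hpssacli/bin/hpssacli ctrl slot=0 create type=ld drives=1I:1:1 raid=0
    /opt/hp/hpssacli/bin/hpssacli ctrl slot=0 create type=ld drives=1I:1:2 raid=0

    # Per Erik's tip, turn off the controller caching on those logical drives
    /opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld 1 modify arrayaccelerator=disable
    /opt/hp/hpssacli/bin/hpssacli ctrl slot=0 ld 2 modify arrayaccelerator=disable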

Don’t be silly – VMware ESX needs more than 2GB RAM!
Remember to remove the 2GB of RAM it ships with, and add your RDIMMs before installing. The ESXi host installer will give a warning message which is a bit cryptic if you don’t have enough memory. It confused me for a moment before the penny dropped. It came up with a purple screen (not of death) with the message “The system has found a problem on your machine and cannot continue. Cannot load multi-boot modules: Admission check failed for memory resource.” The problem quickly went away once I’d actually engaged my brain for a change.

IMG_2955
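Once the RDIMMs are in and ESXi is installed, it’s easy enough to confirm what the host actually sees from the ESXi shell – nothing clever, just a sanity check:

    # Reports the physical memory ESXi can see, plus the NUMA node count
    esxcli hardware memory get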

Use the right slots:
This situation probably reveals how little HW experience I actually get now, and how geriatric my old DL385 G1 must have been. The ML350e motherboard has memory slots in white/black/white/black/white/black order and so on. My 2x16GB DIMMs had to go into two white slots. I need to read the documentation, but this is some sort of pairing requirement – similar to how, when you have two procs, you need to populate the banks equally for both processors. (I need to do more reading about this – it’s probably obvious to folks who are hardware system builders and do it frequently – that’s not me! It’s all to do with channels to the processor, yadda yadda yadda…)

IMG_2972
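If you want to check which DIMM sockets the BIOS thinks are populated without opening the lid again, the SMBIOS tables are one way to do it. A rough sketch – the exact section names in the output may differ on your firmware:

    # Dump the SMBIOS tables and pull out the DIMM socket entries
    smbiosDump | grep -i -A 6 "Memory Device"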

The Noise Factor… Unrecognised PCI Devices…
This is an issue that I’m still working on. When I first powered on the ML350e I was surprised by the noise. I mean much louder than my older Lenovo TS200s. I don’t just mean when they first power on and the sound of thunder reigns – that’s normal – I mean all the time, together with this really weird revving noise… A quick google at first appeared to suggest that this was a known issue (that great euphemism of all IT) that would be fixed by a firmware update. It turns out the issue was with some network cards that I’d repurposed from the old Lenovos. These are recognised by VMware ESXi 5.5:

Screen Shot 2014-01-28 at 16.51.22

BUT not recognised fully by the BIOS…

Screen Shot 2014-01-28 at 15.33.15
Note: Yes, I did fork out for Advanced ILO licenses after all. 🙁

A quick look at VMware ESXi 5.5 Health status when cards are enabled/disabled tells the story – notice the different fan speeds…

 Screen Shot 2014-01-31 at 10.35.36

Screen Shot 2014-01-28 at 16.51.06

It turns out there’s some sort of thermostatic control that detects the non-supported cards. That causes the fans to get unhappy. For the moment I’ve disabled the cards to keep “the missus” happy. I’m not really sure how to resolve the problem, and clearly I want to avoid buying new cards altogether. There are a number of differing opinions (in the manner of Jeremy Clarkson introducing The Stig). Some say a firmware update will resolve the issue (I find that difficult given my lack of modern hardware experience – there are just so many methods it’s really confusing me – but heck, I’m easily confused). Some say I can find out the chipset used and flash the card with HP updates (although this could render the card DOA). Some say I could disable the thermostatic controls from the motherboard (which sounds a bit risky from a cooling/power perspective), and some say I should sell the cards on eBay and buy properly supported ones. Gee, and I just thought a PCI card was a PCI card… shows how little I know about server hardware!
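If, like me, you’re trying to work out exactly what chipset a mystery card uses before deciding whether to flash it or flog it, ESXi will at least hand you the PCI vendor/device IDs to check against the HCL. A quick sketch:

    # List every PCI device ESXi can see, with vendor/device IDs
    esxcli hardware pci list

    # Or just the NICs, along with the driver each one has loaded
    esxcli network nic list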