Note:

This post was written a week or so ago and is a little bit out of date. Due to complexities and problems around NIC teaming and Microsoft Failover Clustering, I had to abandon my original design. My original plan was to have two teamed NICs dedicated to management, IP storage, Live Migration and so on, and then two teamed NICs for the virtual machines. It's a configuration I've done plenty of times on a 4-NIC box with VMware ESX. Sadly, I faced other issues associated with iSCSI, Heartbeat networking and Failover Clustering that made this impossible. I've since had to abandon that networking layout, opting instead for a dedicated physical NIC each for management, IP storage, Heartbeat and virtual machines. That means I'm unable to offer redundancy to any part of my configuration. To be fair, I think this is more a reflection of the limitations of my hardware than a direct criticism of Microsoft. After all, VMware often recommends dedicated NICs (even with VLAN Tagging segmentation) for performance or security purposes, and then leaves the administrator to decide whether those recommendations apply. For instance, it's recommended to have a dedicated NIC for VMotion, but a lot of people put their VMotion/Management traffic on the same physical NIC.

But without over-labouring the point, with vSphere I could create a single "uber" Standard vSwitch or Distributed vSwitch that had ALL the vmnics patched to it. Then, on the properties of each portgroup (representing the function: management, VMotion, IP storage, heartbeat), the "Active/Standby" option could be used to control which NICs took the traffic. Alternatively, NIOC (Network IO Control) could be used to carve up two 10Gb interfaces into the slices of bandwidth needed for each function, such as 100Mbit/s for management. See this article of mine for an illustration: PowerShell – The Uber/Master vSwitch. This works because VMware's implementation of NIC teaming is flexible: you can set Active/Standby on the properties of the vSwitch or the portgroup. In contrast, Microsoft's implementation is bonded to the underlying hypervisor.

ubervswitch

ubervswitch-portgroups

Sorting out the Network:

The other week I managed to finally get to my colocation to install two Windows Server 2012 Hyper-V instances in my lab. I'd sorted out the network teams for the hosts, but I needed to ensure I could access my NetApp and EqualLogic arrays. I was a bit concerned that the traffic might go through my router, and after a couple of ping and tracert tests, I realised that this was the case.

 Screen Shot 2013-06-29 at 20.38.31

This is normally regarded as a suboptimal configuration: whatever the quality of the router, it will add an overhead, and most people in the know would recommend a direct attachment to the switch. I do actually have this in my lab, but it meant allocating an IP address to the "Management-Team0" on each of the Hyper-V servers. Assigning an IP address in Windows used to be a relatively small set of steps, but something weird happened to Windows around the Windows 7 era as Microsoft began to insert layers of UI between the administrator and the setting, almost as if to keep us from doing any harm. Personally, I loathe this "wrap the administrator up in cotton wool" approach to modern computing, as if the admin can't be trusted to get anything right. To set a valid IP address on my "Management-Team0" these are the steps I went through (for comparison, a PowerShell equivalent follows the list):

1. Right-click the NIC icon in the tray, and select “Open Network and Sharing Center”

2. Select Adapter Settings

3. Right-click the NIC Team, and select Properties

4. Click Advanced

5. Under IP Settings, Click Add

6. Type the IP/Subnet Mask

7. Click Add

8. Click OK to back out of the Advanced dialog box

9. Click OK to back out of the TCP/IP dialog box

10. Click Yes to any warning prompts about multiple gateways (even though I only have one gateway. WTF!?!)

11. Close the couple of windows you opened to get to the setting…

12. Rinse and repeat for each host.
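
For what it's worth, the entire twelve-step dance above collapses into a line or two of PowerShell. A minimal sketch, assuming the team interface really is called "Management-Team0" and using placeholder addresses from my lab:

# Add an extra IP address to the existing team interface (alias and address are examples)
New-NetIPAddress -InterfaceAlias "Management-Team0" -IPAddress 172.168.3.101 -PrefixLength 24

# Confirm the address and the updated local routing table
Get-NetIPAddress -InterfaceAlias "Management-Team0" | Format-Table IPAddress, PrefixLength
Get-NetRoute -InterfaceAlias "Management-Team0" | Format-Table DestinationPrefix, NextHop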

As hoped/expected, assigning a valid IP address for my NetApp/EqualLogic arrays (which are on the same network) updated the local routing table and bypassed the router.

Screen Shot 2013-06-29 at 20.55.51

Update: My "mistake" here was using my management team for both management traffic and iSCSI traffic. This is something you are allowed to do in vSphere (although it's not recommended). In Windows Hyper-V, Microsoft imposes requirements that can make working with a server that has a small number of NICs (just four in my case), while maintaining network redundancy, difficult. For instance, the "Live Migration" network must be dedicated, and I cannot run "cluster heartbeat" traffic across my iSCSI network. This configuration of giving my management team multiple IP addresses (192.168.3.x for management, 172.168.3.x/172.168.4.x for IP storage) worked, and I did see the Dell EqualLogic volumes. But the configuration gave me problems later on when I came to configure Windows Failover Clustering. I could make a case here for the flexibility of VMware ESX, and argue that Windows Hyper-V requires more NICs (and more switch ports as a consequence). Personally, I think that would be bogus and unfair to Microsoft, because in a production environment, with traffic separation for security, performance and meeting "best practises", I think you would most likely come up with the same NIC count on both VMware ESX and Windows Hyper-V. What I would say is that for homelabbers and SMBs, who frequently have to do a lot with a little, it could be a major headache. In the end I was forced to undo my network configuration completely, dedicating a physical NIC to each type of network traffic, and accept that I would have no redundancy or load-balancing (management-eth0, heartbeat-eth1, ipstorage-eth2, VMs-eth3).

This resolved my issue. But I can't help thinking the VMware approach is easier. Any host communication is backed by a "VMkernel" portgroup. To add an IP address for host communication (VMotion, IP storage, FT, management), you simply select the vSwitch, add a portgroup and assign an IP address. What's more, you don't have to open a session on each host; it can all be carried out from the management plane of vCenter:

 Screen Shot 2013-06-29 at 21.05.49
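
For comparison, here's roughly what that looks like scripted with PowerCLI from a management workstation. The host, vSwitch, portgroup and addresses below are hypothetical lab values, so treat this as a sketch rather than gospel:

# Connect to vCenter once; no per-host sessions are needed
Connect-VIServer vcenter.lab.local

# Create a VMkernel port on an existing vSwitch and assign it an IP in a single step
New-VMHostNetworkAdapter -VMHost (Get-VMHost "esx01.lab.local") -VirtualSwitch "vSwitch0" `
                         -PortGroup "IPStorage" -IP 172.168.3.10 -SubnetMask 255.255.255.0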

In the past, when I was an independent, I was sometimes a bit critical of the vCenter interface. But now that I'm comparing vSphere with the alternatives, I'm starting to think I rushed to judgement…

Enabling iSCSI in Windows Server 2012 R2 Hyper-V

Like everything in Windows, the iSCSI Initiator is a service that hasn't been started by default, and when you run the main configuration tool you'll be asked to start it. It's not immediately obvious where this iSCSI Initiator is located, so you have to take pot-luck typing a search string to locate it.

Screen Shot 2013-06-29 at 21.44.55
Note: Admittedly in vSphere you have to use the "Add Adapter" option to add in the iSCSI initiator. But at least it's all in the same SINGLE PANE OF GLASS… 😉
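
Alternatively, you can skip the search-box pot-luck altogether: the initiator is just the MSiSCSI service, and a couple of lines of PowerShell will start it and keep it started. A minimal sketch:

# Start the iSCSI Initiator service and set it to start automatically from now on
Set-Service -Name MSiSCSI -StartupType Automatic
Start-Service -Name MSiSCSI
Get-Service -Name MSiSCSI   # should now report "Running"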

Under the "Configuration" tab is where you will find the IQN settings, which matter if, like me, you use the IQN on the iSCSI array to control access to the volumes/LUNs.

Screen Shot 2013-06-29 at 21.50.19
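
If you would rather not dig through the dialog to find the IQN, it can also be read from PowerShell, which is handy when you're pasting it into the array's access control list. A quick sketch:

# Read the host's iSCSI initiator IQN (NodeAddress) to copy into the array's access list
Get-InitiatorPort | Select-Object NodeAddress, ConnectionType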

Once configured, you can hop back to the "Targets" tab to specify the IP address of the iSCSI target, which in my case is a Dell EqualLogic array.

Screen Shot 2013-06-29 at 21.54.48
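
The same discovery and login can be done from PowerShell too. A hedged sketch, with a placeholder group IP and IQN standing in for my real EqualLogic values:

# Point the initiator at the array's group/portal address
New-IscsiTargetPortal -TargetPortalAddress 172.168.3.50

# List the targets that were discovered, then log in to one and make the session persistent
Get-IscsiTarget
Connect-IscsiTarget -NodeAddress "iqn.2001-05.com.equallogic:example-vol01" -IsPersistent $true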

That more or less got my storage up, and I was able to see multiple disks in Computer Management. In my experience, despite iSCSI's support for CHAP, the vast majority of people don't bother deploying it; it just adds an extra layer of authentication that could go wrong, on a network that is normally private anyway.

There are a whole host of other options in the Microsoft iSCSI initiator. In general I don't think there is much to separate the iSCSI initiators of the two platforms. It's probably the case that the initiator in Hyper-V has a couple more options, but that's because it's a general-purpose iSCSI initiator (it's not specific to Hyper-V, but part of Windows generally). For example, I have set up CHAP on both VMware ESX and Microsoft Hyper-V just to see what it was like. Microsoft recommends CHAP, but VMware doesn't have any special recommendations either way.

I did some reading around the recommendations and requirements shortly after getting the thing spun up. The article I found is a little dated (2009) but it does state it applies to Windows Server 2012. I assume that means Hyper-V as well, although the article doesn't name-check it specifically. But heck, Hyper-V and Windows Server 2012 are the same thing, aren't they? The article does explain how Windows Hyper-V supports IPsec, because all Windows systems do, but looking at the best practises it's not listed as a feature that you should enable. In my experience the vast majority of iSCSI networks are on discrete private networks where encryption would be unnecessary. I took a general poll on Twitter (the most reliable source of mass opinion at my immediate disposal!) and most people shared my view.

Screen Shot 2013-07-16 at 10.10.52
Note: Of course, you can always rely on Tom Howarth to make a good joke!

A bit further down the document, under "Security Best Practises", it does list IPsec. But I think this can be largely discounted: it's probably a vanilla Windows OS recommendation, rather than something that would apply specifically to Windows Hyper-V. I think this is pretty common with "best practises"; they usually are pretty general, and I think customers need to be sceptical about them. Look closely at your own environment and then determine whether the best practise applies in your circumstances. In my experience both customers and vendors hide behind the "best practise" term as a universal way to CYA.

Screen Shot 2013-07-01 at 10.54.47
Note: Microsoft lists IPsec as a best practise, but I think most SysAdmins would regard this as excessive and unnecessary.

Multi-Path and iSCSI Communications

Note: Because of my network woes documented above, and in detail in the clustering section, I did in the end remove MPIO. I didn't have redundant paths, due to a lack of physical NICs, so MPIO itself became redundant. It was just one more check in the "Validate Configuration" wizard for a Windows Failover Cluster that I didn't need.

Setting up for multi-path and iSCSI communications is a thornier issue. The official Microsoft statement reads:

Use Microsoft MultiPath IO (MPIO) to manage multiple paths to iSCSI storage. Microsoft does not support teaming on network adapters that are used to connect to iSCSI-based storage devices. This is because the teaming software that is used in these types of configurations is not owned by Microsoft, and it is considered to be a non-Microsoft product. If you have an issue with network adapter teaming, contact your network adapter vendor for support. If you have contacted Microsoft Support, and they determine that teaming is the source of your problem, you might be required to remove the teaming from the configuration and/or contact the provider of the teaming software.

That seems to ignore the fact that Windows Server 2012 Hyper-V actually supports its own native NIC teaming technology. I must admit finding a definitive statement on what's supported with Windows Server 2012 Hyper-V is tricky. There are plenty of how-tos using Windows Server 2008 Hyper-V, and I think this is largely because there's so much similarity between the two flavours at this level that no one is inspired to update or write new documentation. One configuration that is supported requires two separate NICs, each configured with a unique IP address. Then you need to install the MPIO software from Microsoft, and you may need to consult your storage vendor for an additional "Device Specific Module" (DSM). The support statements around this issue are so opaque it even prompted Mark Morowczynski to write a blogpost just about this issue in an attempt to clarify what is supported – Is NIC Teaming in Windows Server 2012 supported for iSCSI, or not supported for iSCSI? That is the question. From reading Mark's post it seems clear that Microsoft NIC teaming IS supported for iSCSI communication, but it remains the case that if you use your vendor's NIC teaming it is not supported. Additionally, there is a user guide called "Microsoft Multipath I/O (MPIO) User's Guide for Windows Server 2012". It runs to 10,000 words and is 49 pages long.

You can’t help feeling that this could be clearer/simpler than it is.

I did find this blogpost by Keith Mayer called "Step-by-Step: Speaking iSCSI with Windows Server 2012 and Hyper-V" which walks you through the MPIO configuration using PowerShell. With a bit more searching with Google I also found this article by Kyle Heath, who works for CSCM IT Solutions, conveniently entitled "Windows 2012 – How to configure Multi Path iSCSI I/O". There were a couple of settings I hadn't realised needed changing. For example, with NICs dedicated to iSCSI communications it is recommended to do the following (a PowerShell sketch of these tweaks follows the list):

1. Turn off "Client for Microsoft Networks"
2. Disable the option to "Register this connection's addresses in DNS"
3. Turn off the “Allow this computer to turn off this device to save power” on the physical card
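
Here is the PowerShell sketch I promised for those three tweaks. The adapter names are made-up lab names, and note that Disable-NetAdapterPowerManagement covers the adapter's power-saving features rather than being an exact one-for-one match for the "Allow this computer to turn off this device" checkbox:

# Assuming the two iSCSI-dedicated NICs are named "iSCSI-1" and "iSCSI-2" (lab names)
$nics = "iSCSI-1", "iSCSI-2"
foreach ($nic in $nics) {
    # 1. Unbind "Client for Microsoft Networks" from the adapter
    Disable-NetAdapterBinding -Name $nic -ComponentID ms_msclient
    # 2. Stop the adapter registering its address in DNS
    Set-DnsClient -InterfaceAlias $nic -RegisterThisConnectionsAddress $false
    # 3. Turn off the adapter's power-saving features (approximates the power checkbox)
    Disable-NetAdapterPowerManagement -Name $nic
}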

I don't doubt Kyle Heath's expertise for a second, but it does seem to me there's an awful lot of "gunk" in the Microsoft network stack to turn off, again suggesting that an operating system designed for vanilla workloads isn't perhaps best suited to the role of virtualization host. I really rate Kyle Heath's article, but it does illustrate the many steps you have to go through to get Microsoft multi-path IO to work.

As with a lot of Microsoft software, MPIO isn't installed by default. You have to walk through the Add Roles & Features wizard to install the software.

Screen Shot 2013-07-01 at 13.20.40
Note: So long as the network is configured correctly, VMware ESX needs no special software installed, nor a DSM from a storage vendor, to get iSCSI load-balancing to work. Just sayin'
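
For the record, the role installation is a one-liner from PowerShell too; a quick sketch:

# Install the Multipath I/O feature (the equivalent of ticking it in Add Roles & Features)
Install-WindowsFeature -Name Multipath-IO
Get-WindowsFeature -Name Multipath-IO   # confirm it now shows as Installed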

You'll notice on the right that alongside MPIO there's a thing called the "Device Specific Module" (DSM), and you may or may not need additional software from the storage vendor. As I'm using a Dell EqualLogic array, I contacted some of my buddies over in Nashua, New Hampshire for their expert advice. This is how Keith Mayer puts it in his article:

Prior to attempting to implement MPIO between your hosts and storage array, be sure to check with your storage array vendor to confirm their compatibility with this DSM.  In some cases, your storage vendor may require an alternate DSM and/or a different MPIO configuration. Many storage arrays that are SPC-3 compliant will work with the Microsoft DSM, but we recommend confirming compatibility with your storage vendor before proceeding

Once the software is installed, I needed to bring up the "iSCSI Initiator" dialog box and add support for iSCSI communications, as well as going into the advanced settings.

Screen Shot 2013-07-01 at 13.37.19
Note: After selecting "Add support for iSCSI devices" you do need to click the "Add" button, not the OK button. And yes, this does require another reboot.
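
If you prefer to avoid the dialog box dance, the MPIO module's cmdlets do the same job. A sketch, with the round-robin policy being my choice rather than anything mandated:

# The equivalent of ticking "Add support for iSCSI devices" and clicking Add
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Optionally set round-robin as the default load-balance policy for newly claimed devices
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR

# The claim still needs a reboot to take effect
Restart-Computer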

Re-Configure iSCSI Initiator

Caption: One of my hosts was rebooted, but didn't come back up. I don't have an ILO/DRAC/BMC remote console interface to it, so I was a bit in the dark. In the end I just did a reboot from the Lenovo management webpage. Of course, every time Windows tells me to do a reboot, I'm really dreading it. Will the box come back up? At least when Windows is in a VM, I can see what is going on.

Once the server is back up again, you need to add a session for the 2nd physical NIC.
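
From PowerShell, that second session amounts to another Connect-IscsiTarget call that pins the login to the second NIC's address, with multipathing enabled. The IQN and addresses here are placeholders for my lab values:

# Log in to the same target again, this time via the second iSCSI NIC
Connect-IscsiTarget -NodeAddress "iqn.2001-05.com.equallogic:example-vol01" `
                    -IsMultipathEnabled $true `
                    -InitiatorPortalAddress 172.168.4.101 `
                    -TargetPortalAddress 172.168.3.50 `
                    -IsPersistent $true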

With all these tiers and layers of cavernous UI it gets quite confusing. After a while it gets a bit overwhelming, and this is just for one Windows Server 2012 Hyper-V host. One thing I've noticed in my short time using Windows Server 2012 is how PowerShell might become the de-facto method of carrying out configurations. The UI has become SO bloated it takes hours to find the settings you really want. As an old DOS 5.5 guy I find that truly ironic: the black-background, white-text world is back with a vengeance.

Once this configuration was completed, I used the command mpclaim -v > c:\report.txt to double-check that there are two paths per volume.
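
A couple of other quick checks I find useful for confirming the path count, assuming the in-box tooling:

# Summarise the MPIO disks and their load-balance policy
mpclaim -s -d

# List the iSCSI sessions and which initiator portal each one came in on
Get-IscsiSession | Select-Object TargetNodeAddress, InitiatorPortalAddress, IsConnected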

In contrast, multi-path IO in VMware ESX is much easier to set up. First you need to create two portgroups, each with a unique VMkernel port for communicating with the iSCSI target.

vmkernelportip 

With a vSwitch that is backed by multiple NICs, you set the vmnics (Standard vSwitches) or dvUplinks (Distributed vSwitches) to be either active or unused.

nicteamforiscsi

This has the effect of dedicating the NIC to iSCSI communications. The nice thing about this configuration is that it's clean and discrete, and has nothing to do with your management network, rather than being one NIC interface with multiple IP addresses backing the "Local Area Connection". Plus there's none of that silliness about disabling part of the network stack or disabling DDNS registrations that you get with Microsoft Hyper-V.
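
The same active/unused override can be scripted with PowerCLI. A sketch, with hypothetical portgroup and vmnic names:

# For each iSCSI portgroup, make one vmnic active and the other unused
Get-VirtualPortGroup -VMHost (Get-VMHost "esx01.lab.local") -Name "iSCSI-1" |
    Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive vmnic2 -MakeNicUnused vmnic3
Get-VirtualPortGroup -VMHost (Get-VMHost "esx01.lab.local") -Name "iSCSI-2" |
    Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive vmnic3 -MakeNicUnused vmnic2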

Once the networking is configured, we merely enable the iSCSI initiator and bind to it the VMkernel ports created for load balancing.

vmkportsforloadbalancing
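
If you want to script the port binding as well, my recollection is that it goes through the esxcli interface exposed by PowerCLI. The adapter and vmk names below are lab examples and the parameter order (adapter, force, nic) is from memory, so double-check before relying on it:

# Bind the two VMkernel ports to the software iSCSI adapter
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.lab.local")
$esxcli.iscsi.networkportal.add("vmhba33", $false, "vmk1")
$esxcli.iscsi.networkportal.add("vmhba33", $false, "vmk2")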

There's no software as such to "install" and no reboots when enabling MPIO for VMware vSphere, and all of this can be done from vCenter, without any need to visit the host itself directly. You could say vCenter is a SINGLE PANE OF GLASS.

Simplez….

simplez