September 23

Multi-Hypervisor Management and VM Conversion – Time to stop kicking the tyres?

IMPORTANT: In the process of writing this article, I discovered that VMware MHM 1.1 is incompatible with Windows Hyper-V 2012 R2. The issue seems to originate from the fact that the namespace through which the management objects of Hyper-V are accessed is no longer supported – VMware MHM manages Hyper-V through WMI objects from the namespace root\virtualization. Microsoft introduced root\virtualization\v2 with its release of Hyper-V 2012 and kept root\virtualization for backward compatibility. However, it seems that Microsoft also deprecated the root\virtualization namespace with R2. So I had to switch from my Windows Hyper-V 2012 R2 lab to a Windows Hyper-V 2012 environment midway through the process.
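To make the incompatibility concrete, here's a minimal sketch – the version keys and the table are my own shorthand for illustration, not anything from MHM's actual code – of why a tool hard-coded to the v1 namespace stops working against 2012 R2:

```python
# Which WMI virtualization namespaces each Hyper-V release exposes.
# (My own shorthand table; check real hosts before relying on it.)
HYPERV_WMI_NAMESPACES = {
    "2008R2": [r"root\virtualization"],
    "2012":   [r"root\virtualization\v2", r"root\virtualization"],  # v1 kept for compat
    "2012R2": [r"root\virtualization\v2"],                          # v1 removed
}

def can_manage(hyperv_version, tool_namespace=r"root\virtualization"):
    """True if a tool hard-coded to tool_namespace (as MHM 1.1 appears
    to be) can reach the Hyper-V management objects on this release."""
    return tool_namespace in HYPERV_WMI_NAMESPACES[hyperv_version]
```

So `can_manage("2012")` holds, but `can_manage("2012R2")` does not – which matches what I saw in the lab.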

As part of my wider journey into “the cloud”, I’ve become more and more interested in how we might define “hybrid”. Does hybrid merely mean a public and private cloud? Does it mean a cloud where those two domains are linked – as is the case with vCloud Connector’s “Stretched Deploy” feature – or does it mean a multi-hypervisor environment, and being able to move workloads from one virtualization platform to another? Whilst we clearly want mobility of workloads in an ANY:ANY way, does that necessarily mean running more than one virtualization platform within our private environments is a good idea? Let’s face it, VMware vSphere pretty much dominates virtualization, with Windows Hyper-V bringing up the rear – leaving just KVM and a bunch of also-rans. As technologists we might prize the ideal of platform interoperability, but I don’t think there are enough customers to make it of interest to the vendors. Those vendors are likely to put more effort into developing and improving their own management platforms and virtualization layers before they worry about the competition.

Having set up a Windows Hyper-V 2012/SCVMM 2012 environment, I decided to look at VMware managing Windows Hyper-V with VMware MHM, and also at converting Microsoft VMs into vSphere VMs – a process I’ve dubbed “M2V”. Then, in an effort to remain balanced, I’ve also looked at SCVMM’s attempts to manage vSphere, and the quality of the tools to convert a VMware vSphere VM to a Microsoft Hyper-V VM – something I’ve dubbed “V2M”.

What follows is a full and unfiltered account of my experiences – things get a bit knotty after this point, so looking back I thought a “summary” or “highlights” section might help clarify the situation.

Edited Highlights:

  • The Microsoft Firewall is your enemy. It can cause things to stop working everywhere Windows is used – SCVMM, Windows Hyper-V and the source VM – whether that source VM is running on VMware vSphere or Windows Hyper-V.
  • VMware Converter works with or without VMware Multi-Hypervisor Manager – and generally presents more options/features. If you want a simple/quick method of converting from Windows Hyper-V to VMware vSphere then MHM is your man; if you want the fancy options then VMware Converter is what you should focus on, IMHO.
  • VMware Converter can convert both physical and virtual machines – on the fly, without powering down the source Windows Hyper-V VM. Remember, VMware has been doing conversions like this ever since the P2V product, which I first started using in 2003.
  • The time to convert from M2V is considerably shorter than the time to convert from V2M (5-6 mins compared to 30-40 mins).
  • VMware Converter can do a “final sync” between the source (Windows Hyper-V) and the destination (VMware vSphere), which gives you a consistent copy of the VM.
  • If you’re serious about (or forced against your will into) a multi-hypervisor strategy – I think neither VMware MHM nor Microsoft SCVMM, where one vendor manages another, is a serious proposition. If you’re serious about this approach then you should be looking at a provisioning tool like VMware vCloud Automation Center.
  • Adding VMware into SCVMM requires adding vCenter, adding vSphere hosts AND authenticating to each and every host to get full functionality.
  • Neither SCVMM nor MVMC (Microsoft Virtual Machine Converter) allows you to convert a vSphere VM to a Windows Hyper-V VM without some sort of power down in the process – and as a consequence, a maintenance window.
  • Using SCVMM to convert V2M is fraught with authentication/communications problems; using “Convert Physical Machine” might work better and quicker (around 10 mins compared to 30-40 mins with MVMC/SCVMM V2M). I understand that SCVMM R2 removes “Convert Physical Machine” from SCVMM. This is in the release notes for SCVMM R2 – http://technet.microsoft.com/en-us/library/dn303329.aspx
  • After any successful conversion from vSphere to Hyper-V, I’ve yet to see any method that successfully allowed Microsoft Integration Services to be installed – that means no mouse control, and in some cases no network capabilities.


 

UPDATE:
Since the publication of this blogpost, Microsoft have indicated how their customers will be able to carry on doing P2V conversions. Their recommendation is to run an old instance of SCVMM 2012 containing a Windows 2012 Hyper-V host and carry out the conversions there. Then (I kid you not!) export the VM out of the old SCVMM and import it into the new SCVMM 2012 R2 environment. Joined-up management? I don’t think so.

http://blogs.technet.com/b/scvmm/archive/2013/10/03/how-to-perform-a-p2v-in-a-scvmm-2012-r2-environment.aspx

Although my post is about V2V and not P2V, I do take this as being symptomatic of Microsoft dropping the ball on the R2 release. It’s now 8 steps to convert a physical machine to a virtual one – requiring two management layers!


This is a lengthy post – so I decided to split it up into sections to ease navigation.

Continue reading

Category: Microsoft, vSphere | Comments Off on Multi-Hypervisor Management and VM Conversion – Time to stop kicking the tyres?
September 12

Travails in Hyper-V R2eality: Not So Logical Networking

Disclaimer:
In my lab I only have 4 NICs per physical box. Whilst this is quite good for a homelab, it’s perhaps not “production” ready. In my experience most virtualization servers that are based on 2U or 4U equipment generally have more NICs than this. I would say if you’re going to kick the tyres on Windows Hyper-V in your homelab, you would ideally have more NICs than you would consider with vSphere. Of course, if your virtualization host has only one NIC you will have your work cut out experimenting with NIC Teaming on either platform. 😉

Update:
I recently had the need to add additional hosts to my SCVMM environment. This was a new cluster in the same network site. So as an addendum to this post – I covered the many, many steps required to complete those tasks.

Occasionally, it feels like this series of blogposts is me having all these horrible experiences of Microsoft virtualization – so you don’t have to go through the pain. But I am thinking about the challenges of anyone in their homelab trying to get their heads round the product. Watch out for the number of NICs you have, and don’t expect the same level of ease of use you get with VMware vSphere…
Addendum: Adding new hosts to a pre-configured environment

In my early use of Windows Hyper-V R2 it quickly became apparent that few folks would manage the Microsoft hypervisor with just Hyper-V Manager and standard virtual switches. It’s more likely they would use the “Logical Switch” available in SCVMM. The “Logical Switch” is analogous to the VMware vSphere Distributed Switch only in that it allows for centralized management. However, as I’ve worked with the logical switch over time, I’ve begun to feel they are subtly different. That’s important – not good, or bad, just different.

For me the interesting distinction is that whereas vSphere Standard and Distributed Switches are two totally independent virtual switch constructs, the Windows Hyper-V logical switch merely adds additional management options to Microsoft virtual switches. If it helps, call the logical switch a virtual switch with bells on. The standard Windows Hyper-V virtual switches support functionality such as a “private” or “internal” virtual switch that restricts communication to just the VMs on the Windows Server, or between the VMs and the Windows Server. These types of virtual switch are only available from Hyper-V Manager. So if you want to use a combination of all three (internal, private and external) you will need to switch between Hyper-V Manager and SCVMM to do your SysAdmin work. It’s worth stating that these internal/private switches are of little use in production environments, and probably only of interest to those working in a test/dev environment.

I’ve never really understood the need for an “internal” switch that allows communication between the VMs and the physical server. I guess if you were running Windows Hyper-V on a one-NIC PC then perhaps this would be useful. In the world of vSphere you would just use the default “VM Network” which is pre-created on the standard vSwitch. Most VMware Admins delete this virtual machine network because it’s unneeded, and a possible security loophole if both the management and VM networks are given the same VLAN tagging ID.

Before I begin with the setup routine, it’s perhaps worth briefly retelling the shaggy-dog story that is my Windows Hyper-V network. My initial intention was to mirror the configuration I normally use with VMware ESXi. I have four NICs, and I would normally create two virtual switches – one to carry my infrastructure traffic (management, IP storage, HA heartbeat, vMotion), and another virtual switch dedicated to the virtual machines. Sadly, I was unable to carry out this configuration with Windows Hyper-V because the cluster “Validation Wizard” would not pass my network configuration. Instead I had to dedicate a single NIC to each type of network activity. At this point I was beginning to wonder if the real agenda behind Microsoft’s avowed endorsement of “converged networking” is a reflection of a much more brutal reality. In order to meet Microsoft’s pre-requisites you need a significant number of NICs – and doing that with conventional gigabit dual-port or quad-port cards is going to be an expensive operation. You know the port on the physical switch isn’t free (even if it doesn’t come out of your cost center) – I’ve always said take the price of the physical switch and divide by the number of ports – that’s how much each of those ports costs you. I think this is an important design consideration for those who find converged networking and blades beyond their budget, and are building out a virtualization layer with conventional 2U and 4U servers.
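The back-of-an-envelope maths is trivial, but worth writing down – the prices here are made up purely for illustration:

```python
def port_cost(switch_price, port_count):
    """The per-port cost of a physical switch: price divided by ports."""
    return switch_price / port_count

def host_network_cost(nics_per_host, hosts, switch_price, port_count):
    """What a dedicated-NIC-per-traffic-type design costs in switch ports."""
    return nics_per_host * hosts * port_cost(switch_price, port_count)
```

On a hypothetical 24-port switch costing 1,200 (pick your currency), each port costs 50 – so four dedicated NICs per host across four hosts consumes 800 of switch budget before you’ve bought a single server.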

Despite support for native NIC Teaming in Windows Hyper-V 2012, I was forced to undo that work and go for physical separation. Sadly, the Microsoft cluster “Validate Configuration” wizard didn’t seem to understand a NIC team – it saw it as a single interface, warning me I had no network redundancy. I guess I could have ignored the yellow and red warnings and dismissed them. But I’m a green-tick man by nature.

You can reach the wizard for creating a logical switch under “Fabric” in the left-hand corner of SCVMM. If you have set up SCVMM’s ability to manage VMware vSphere, you will see both pre-defined logical networks (in my case called “corp”) as well as any VMware Distributed vSwitches. As part of my research I’ve been investigating options for V2H and H2V (vSphere to Hyper-V and Hyper-V to VMware) conversions (more on that in a future blogpost).

The reference to the DvSwitch comes from me adding vCenter into my SCVMM environment. One thing I’m looking at in the future is how easy it is to move from Microsoft to VMware (M2V) and from VMware to Microsoft (V2M).

There are a number of stages, in separate wizards, that the SysAdmin has to go through to create a logical network. The creation of these networking constructs generates dependencies, as you would expect, and they do have to be done in a particular order. There’s nothing hard and fast here, but you can’t do step 3 or 7 until you have completed steps 2 or 6 – and you can’t do step 6 until you have done step 5. Step 10 can be carried out between each step depending on the state of your constitution. 🙂

  1. Create the Logical Network – where you can define “Network Sites”
  2. Create the Network Sites – these are where your VLAN definitions are held, and you can have many VLANs per site.
  3. Create an IP Address Pool – which is associated with a site – required if you want VMM to assign a static IP address when deploying templates
  4. Associate Physical NIC to Logical Network(s)
  5. Create a VM Network
  6. Create a Port Profile
  7. Create a Logical Switch
  8. Associate Physical NIC to Logical Switch
  9. Create a VM!
  10. Have a little lie down. Take a deep breath. Keep taking the medication. At times it will feel like you’re making a lot of different components that you then have to glue together to make networking function.
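Those dependencies can be sketched as a prerequisite table – this is my reading of the ordering, not anything official from Microsoft:

```python
# Prerequisites for each SCVMM networking step in the list above.
PREREQS = {
    1: [],           # Logical Network stands alone
    2: [1],          # Network Sites live inside a Logical Network
    3: [2],          # an IP Pool is associated with a Site
    4: [1],          # NIC -> Logical Network association
    5: [1],          # a VM Network is backed by a Logical Network
    6: [5],          # Port Profile
    7: [6],          # the Logical Switch consumes the Port Profile
    8: [7],          # NIC -> Logical Switch association
    9: [3, 4, 8],    # finally, a VM
}

def valid_order(order):
    """True if every step appears only after all of its prerequisites."""
    done = set()
    for step in order:
        if any(p not in done for p in PREREQS[step]):
            return False
        done.add(step)
    return True
```

The order in the list above passes; trying to create the IP pool before its network site does not.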

Continue reading

Category: Microsoft | Comments Off on Travails in Hyper-V R2eality: Not So Logical Networking
August 6

ManRAGEment: Installing System Center Virtual Machine Manager 2012 R2

with many thanks to my colleague, Randy Curry – and special thanks to Stu Fox (@stufox) for their valuable assistance and input…

Tip:

Save yourself heartache – download and install the following before doing anything with SCVMM.

  • WADK (Windows Assessment & Deployment Kit)
  • Microsoft SQL 2012 Native Drivers
  • Microsoft SQL Command-Line Tools from the Feature Pack
  • …and don’t forget the obligatory reboot required after meeting these pre-reqs!

 

The first thing to mention about the install of System Center Virtual Machine Manager (SCVMM) 2012 R2 is that someone at Microsoft doesn’t know the difference between “extract” and “install”. After downloading the bundle, you’ll be left with an .exe file. This is some kind of auto-executing extractor – it’s not the product installation. The trouble is that the wizard that accompanies the extraction process keeps on referring to installing. In fact that comes after the extraction process, with setup.exe.


The actual installer looks like this. Incidentally, notice a reboot might be required after the installation. Mmm, nothing new there then – I waited in expectation to see if I could add this reboot to my list.

[Screenshot: the SCVMM installer]

In my configuration I opted for an external Microsoft SQL database. I have one already running under vSphere which hosts my VMware databases for things like vCenter, VUM, vCAC and so on. As I was running a small lab, I could get away with the recommendations for managing up to 150 hosts, which is 4GB RAM and 40GB of local disk space for the C: drive. This sort of configuration is handy, because if I ever need to manage more than 150 hosts, all I need to do is increase the memory allocation. With VMware vSphere I could do that on the fly without shutting down the VM, because it supports hot-add of memory. Unlike some vendors’ virtualization software… [Just sayin’.] One funny thing about the specifications, if you run SCVMM on Hyper-V itself, is the settings around dynamic memory:

[Screenshot: the dynamic memory recommendation]

This recommendation to increase “start-up” memory is rather telling of a flaw in Windows Hyper-V memory management. Occasionally, this start-up memory can be set too low, and big installers such as SCVMM cannot get their memory allocated in a timely enough fashion to function properly. Now is not the place to investigate dynamic memory closely, but it’s something I want to look at in subsequent posts.

Initially, I decided my “VMM Library” would be hosted off a NetApp array using CIFS/SMB. That’s similar to what I do with VMware vSphere – I use NFS for both my templates and ISOs. It’s handy because it means you can have one repository for all your stuff that transcends the cluster “domain”, and means you don’t have to fanny about with FC masking or iSCSI IQN masking just to get to the ancillary datastores you need for your admin. However, I later discovered that this would be a problem. More about that later…

Continue reading

Category: Microsoft | Comments Off on ManRAGEment: Installing System Center Virtual Machine Manager 2012 R2
July 29

Reboot City: Enabling the Hyper-V Role in Windows Server 2012 R2 Preview

with many thanks to my colleague, Randy Curry – and special thanks to Stu Fox (@stufox) for their valuable assistance and input…
Before enabling the Hyper-V role in Windows Server 2012 I decided to set up two SMB/CIFS shares on my NetApp 2040s. Sadly, the firmware on these units isn’t particularly up to date – they are running OnTap 8.0.1 in 7-Mode. I’ve contacted my buddies at NetApp to look into getting the latest version of OnTap, which does support SMB 3.0. I guess I could set up another physical Windows instance and use SMB 3.0 on it. But I’m put off by the idea of using Windows as a storage platform; besides which, between my two NetApp 2040s I have nearly 4TB of storage, and it seems a shame not to use that. After all, I pay a colo fee to run these bad boys.
You can check the firmware on a NetApp system using System Manager and selecting the array in the view. Sadly, after discussions with my pals at NetApp, it turns out these controllers cannot be updated to firmware that supports SMB 3.0 – they said that would be like running Windows Server 2012 on a 486.

Undaunted by this, I set up two shares called HyperVMs and HyperLibrary. I wanted to have some kind of shared storage available from the get-go. You’ll see why in a second. These CIFS/SMB shares were mapped as V: and L: drives on the two Hyper-V instances I have in the lab (V for Virtual Machines, and L for Library. Doh!)
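The mapping itself is just `net use` from an elevated prompt on each host. A little sketch that builds the commands – the server name here is made up, so substitute your filer’s CIFS hostname:

```python
def net_use_commands(server, shares):
    """Build the `net use` lines to map each CIFS/SMB share to a drive letter."""
    return [
        rf"net use {drive}: \\{server}\{share} /persistent:yes"
        for drive, share in shares.items()
    ]

# My lab's two shares, mapped to V: and L: as described above.
cmds = net_use_commands("netapp01", {"V": "HyperVMs", "L": "HyperLibrary"})
```

The `/persistent:yes` flag asks Windows to re-map the drive at each logon, which matters if you want the shares back after all those reboots.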

As you probably know, Hyper-V is merely a “role” which you add to an existing Windows installation. At the end of the process you’re required to do a reboot [so there’s nothing new there].


One of the things I’ve been doing in my journey through Microsoft virtualization is clocking up my reboots. When I was a VMware Certified Instructor (VCI) in the previous decade, one of the things I was really proud of was being able to count on fewer than 4 fingers of one hand the reboots required with VMware ESX. It’s been a while since I’ve done that count, but I think it still holds true. So far I’ve clocked up 5 mandatory reboots, which include:

  • Renaming the server and joining to the domain
  • Installing MPIO
  • Enabling the Hyper-V Role
  • Installing the SCVMM Management Console
  • Installing the SCVMM Management Service

I’ve also done a couple of non-mandatory reboots just for my own peace of mind – such as making sure that on a reboot my storage was correctly mounted and my NIC teaming was working. The NIC team test I did before leaving the colo, because I wanted to have confidence I could remotely reboot these hosts and still have remote control. You might recall these boxes don’t have the expensive blue IBM widget that gives me full KVM-style access to the box.

One thing that irks me about the Hyper-V role is how Microsoft claims that Hyper-V is a hypervisor. But the more I use it and play around with it, the more it feels like just another service running in the OS partition. I guess you could argue that by making virtualization a role, you’re enabling your existing “Windows” skills. What I would really love to see is some very technical documentation that explains what precisely is going on at this stage. So far all I’ve found are very vague comments stating that the existing “OS” is “slipped under” the virtualization layer. Mmm, that doesn’t really sound very rigorous to me. Of course, I’m kind of used to this. In a previous life I was a Citrix Certified Instructor (CCI), and one of the pre-reqs for Citrix Metaframe/Presentation/XenApp/call-it-what-you-will-this-year has always been adding Microsoft Terminal Services (now rebadged as Remote Desktop Services – RDS). In the past, enabling Terminal Services replaced the normal “Single-WIN” ntoskrnl.exe with “Multi-WIN” capabilities (which allow multiple Windows sessions to be established). You might recall Citrix started off rewriting the NT 3.51 kernel in their earlier “WinFrame” days – until Microsoft said they couldn’t with NT4. So I imagine something like this happening. I’d really love to hear a more techy explanation. For now it feels like Windows is a house that’s been jacked up on stilts with a new foundation being added underneath.


Continue reading

Category: Microsoft | Comments Off on Reboot City: Enabling the Hyper-V Role in Windows Server 2012 R2 Preview
July 18

VMware Horizon View: A little Local Difficulty with Windows 7 Error Recovery

Last night I softly rebooted (which sounds like I was being very gentle and kind to Windows) my persistent Windows 7 virtual desktop (using Horizon View 5.2) to let Patch Tuesday do its work, and today I found I couldn’t connect to it. Why? In my View environment I run two of everything – two Security Servers, two Connection Servers and, yes, two Windows 7 virtual desktops, in case one stops working inexplicably. Opening the VMware Remote Console on the VM in question, I discovered it was stuck in a loop, attempting and failing an unwanted repair job. This is caused by the default option, when you get one of those ugly black screens in Windows, being “Launch Start-up Repair”.


I’ve had this happen to me a couple of times in my years of using Windows, especially the more recent Windows 7/8/2012 releases, which all include this functionality. 9 times out of 10 the repair is either unnecessary or doesn’t work. So I decided to consult the technical bible of the day – Twitter – for suggestions on how to stop this ever happening again. Heck, I might even consider using the fix in all my templates and parent VMs for linked clones.

Chris Neale of chrisneale.wordpress.com came up trumps first:

[Screenshot: tweet from Chris Neale]

Followed quickly by Marcus Toovey

[Screenshot: tweet from Marcus Toovey]

BCDedit is a utility for modifying what Microsoft calls “Boot Configuration Data” (BCD) files, which provide a store used to describe boot applications and boot application settings. The objects and elements in the store effectively replace Boot.ini. Ahhh, boot.ini. That takes me back to when I was a lowly NT4 instructor teaching RISC paths to hopeful MCSE candidates.

Whether the change should be applied to “current” or “default” is perhaps a bit moot – I imagine the default is the current one, unless you selected a different boot option at start-up. What interested me was the different settings being applied – and which was the right one. Chris quickly pointed me to a Microsoft MSDN article that summarises the differences:

[Screenshot: MSDN article summarising the BCD settings]

I was also interested to know what criteria trigger a Startup Repair job if one hasn’t been manually requested. I have a feeling there is some sort of integer in the Windows registry which accrues across dirty shutdowns, and once N dirty shutdowns have occurred this triggers the repair option – perhaps regardless of whether there is anything to repair as such.

As for the settings, the general verdict is to turn them all on – a belt-and-braces approach to try to stop this happening again.
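For the record, here are the two BCDedit settings that came out of the Twitter conversation, wrapped in a small sketch; whether you target {current} or {default} is the moot point discussed above:

```python
def disable_startup_repair(boot_entry="{current}"):
    """The BCDedit commands suggested to stop the automatic Startup
    Repair loop: disable the recovery sequence, and set the boot
    status policy to ignore all failures."""
    return [
        f"bcdedit /set {boot_entry} recoveryenabled No",
        f"bcdedit /set {boot_entry} bootstatuspolicy ignoreallfailures",
    ]
```

Run the returned lines from an elevated command prompt inside the template or parent VM.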

Category: Microsoft, View/EUC | Comments Off on VMware Horizon View: A little Local Difficulty with Windows7 Error Recovery
July 17

One Step Beyond: iSCSI on Windows Server 2012 Hyper-V R2 Preview

Note:

This post was written a week or so ago and is a little bit out of date. Due to complexities and problems around NIC teaming and Microsoft Failover Clustering, I had to abandon my original design. My original plan was to have two teamed NICs dedicated to management, IP storage, Live Migration and so on, and then two teamed NICs for the virtual machines. It’s a configuration I’ve done plenty of times on a 4-NIC box with VMware ESX. Sadly, I faced other issues associated with iSCSI, heartbeat networking and Failover Clustering that made this impossible. I’ve since had to abandon that networking layout – opting instead for a dedicated physical NIC each for management, IP storage, heartbeat and virtual machines. That means I’m unable to offer redundancy to any part of my configuration. To be fair, I think this is more a reflection of the lack of functionality in my hardware than a direct criticism of Microsoft. After all, VMware often recommends dedicated NICs (even with VLAN tagging segmentation) for performance or security purposes – it then leaves the administrator to decide whether those recommendations apply. For instance, it’s recommended to have a dedicated NIC for vMotion, but a lot of people put their vMotion/management traffic on the same physical NIC.

But without over-labouring the point: with vSphere I could create a single “uber” Standard vSwitch or Distributed vSwitch that had ALL the vmnics patched to it. Then, on the properties of each portgroup (representing the function – management, vMotion, IP storage, heartbeat), the “Active/Standby” option could be used to control which NICs took the traffic – or alternatively NIOC (Network I/O Control) could be used to carve up two 10Gb interfaces into pipes of the bandwidth needed for each function, such as 100Mbit/s for management. See this article of mine for an illustration: PowersHell – The Uber/Master vSwitch. This works because VMware’s implementation of NIC teaming is flexible: you can set Active/Standby on the properties of the vSwitch or the portgroup. In contrast, Microsoft’s teaming is bonded to the underlying hypervisor.
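To illustrate that flexibility, here’s a toy model of the “uber vSwitch” – one switch, all four vmnics, with the active/standby order decided per portgroup. The portgroup names and NIC assignments are from my lab layout and purely illustrative:

```python
# Per-portgroup active/standby failover order on a single "uber" vSwitch.
PORTGROUP_TEAMING = {
    "Management": {"active": ["vmnic0"], "standby": ["vmnic1"]},
    "vMotion":    {"active": ["vmnic1"], "standby": ["vmnic0"]},
    "IP-Storage": {"active": ["vmnic2"], "standby": ["vmnic3"]},
    "VMs":        {"active": ["vmnic3"], "standby": ["vmnic2"]},
}

def nic_in_use(portgroup, failed=()):
    """The NIC a portgroup's traffic uses: the first healthy NIC in
    active-then-standby order, mimicking vSphere failover behaviour."""
    policy = PORTGROUP_TEAMING[portgroup]
    for nic in policy["active"] + policy["standby"]:
        if nic not in failed:
            return nic
    return None  # no healthy uplink left
```

In steady state each function gets its own physical NIC; lose vmnic0 and management traffic fails over to vmnic1, all without dedicating hardware per function.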



Sorting out the Network:

The other week I managed to finally get to my colocation to install two Windows Server 2012 Hyper-V instances in my lab. I’d sorted out the network teams for the hosts, but I needed to ensure I could access my NetApp and Equallogic arrays. I was a bit concerned that the traffic might go through my router, and after a couple of ping and tracert tests, I realised that this was the case.


This is normally regarded as a suboptimal configuration: whatever the quality of the router, it will add an overhead – most people in the know would recommend a direct attachment to the switch. I do actually have this in my lab, but it meant allocating an IP address to the “Management-Team0” on each of the Hyper-V servers. Assigning an IP address in Windows used to be a relatively small set of steps, but something weird happened to Windows around the Windows 7 era, as Microsoft began to insert layers of UI between the administrator and the settings – almost in a perverse way, to keep us from doing any harm. Personally, I loathe this “wrap the administrator up in cotton wool” approach to modern computing – like the admin can’t be trusted to get anything right. To set a valid IP address on my “Management-Team0”, these are the steps I went through:
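For what it’s worth, the whole UI dance can be side-stepped from an elevated command prompt with netsh. A sketch – the interface name is my team’s, and the addresses are made up for illustration:

```python
def netsh_static_ip(interface, ip, mask, gateway):
    """Build the netsh one-liner that sets a static IP on an interface."""
    return (
        f'netsh interface ip set address name="{interface}" '
        f"static {ip} {mask} {gateway}"
    )

# Static IP for the management team on one of the Hyper-V hosts.
cmd = netsh_static_ip("Management-Team0", "192.168.3.101", "255.255.255.0", "192.168.3.1")
```

One line per host, and no layers of cotton wool between you and the setting.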

Continue reading

Category: Microsoft | Comments Off on One Step Beyond: iSCSI on Windows Server 2012 Hyper-V R2 Preview
July 11

Go Team! – Windows Server 2012 Hyper-V R2 Networking


Note: This is based on the R2 Preview – so stuff could change before Microsoft GAs the product. I guess they have 6 months to resolve any issues that arise. So this is not production-ready code, but a “preview”, or what we used to call a “beta” or “release candidate”. I was going to start my evaluation on the production version, but I thought it might be more interesting to be exposed to the new features promised by the end of the year.

The phrase “Go Team!” was one a manager of mine used to say when I was employed in the ’90s. Whenever she had dropped the ball and was trying to inspire us to pick up her mess and sort it out for her, she’d say “Go Team!”

As part of my Cloud Journey, I’ve been inspired to dip my toes into foreign waters and see what life is like on the other side of the fence. I must say I’ve been rather spoiled in the last ten years or so using VMware technologies. It’s only when you start using alternatives that you begin to see how much these competitors are mired in the past.

I recently bought a couple of books to help me learn more about Microsoft Server 2012 Hyper-V. The first, by Finn, Lownds, Luescher and Flynn, is called “Windows Server 2012 Hyper-V: Installation & Configuration Guide”. I think it’s a good read, and it’s thankfully quite honest from a technical standpoint about the advantages and disadvantages of the platform. The only disappointment is that its main focus is Hyper-V Manager, rather than System Center Virtual Machine Manager (SCVMM). That’s a shame, because in case you don’t know, Hyper-V Manager is the cheap and cheerful way to manage Hyper-V, and I would have thought most sizable, professional shops would prefer to use SCVMM. So I backfilled this book with a copy of “Microsoft System Center Virtual Machine Manager 2012 Cookbook” by Edvaldo Cardoso. My ideal would be both books sort of mashed up together.

Reading the Hyper-V book, I was taken aback by how different NIC teaming support is compared to VMware. That was reinforced by a recent trip to my colocation. I’ve repurposed the hosts in my “silver” cluster to run two instances of Windows 2012 Hyper-V R2, and two instances of RHEL 6.x to run KVM. The main reason is to learn more about the technologies in their own right, as well as seeing how management tools like vCloud Automation Center and Multi-Hypervisor Manager work with these foreign virtualization platforms. My goal was to get to the colo, install Windows Hyper-V and RHEL, and ensure I could communicate with them remotely, so I could do any further configuration from the comfort of my home office. Sadly, some of my Lenovos don’t have the expensive “media key” that allows complete remote management.

In case you don’t know, Windows Server 2012 finally added built-in NIC teaming capabilities. Previously Microsoft offloaded this to 3rd-party NIC vendors, with customers having to find exactly the right driver for their network card. Even after doing that, if there was a problem, your problem was with Intel, Broadcom or whoever made your network cards – it’s their software, not Microsoft’s, after all. So from a Microsoft Admin’s perspective, Windows Hyper-V having native NIC teaming is a step up. With the advent of Windows Server 2012 Hyper-V, you’d be forgiven for thinking all is right with the world. Not quite. I personally still don’t think it’s been done the right way…

Continue reading

Category: Microsoft | Comments Off on Go Team! – Windows Server 2012 Hyper-V R2 Networking