When I first got into virtualization with VMware, one of the most compelling advantages was the fact that the VM is just a process (albeit one running an OS inside it), and that it liberated you from the constraints of physicalization. That’s a word I invented to describe the situation where someone is foolish enough to run x86 OSes directly on a physical server. When Virtualization 1.0 came on the scene, it seemed a revelation that if I wanted more memory, disk, network or CPU – I merely had to power off the VM, click a spinner and power back on. Nowadays, even that seems rather quaint and charmingly old-fashioned. We have got so used to being able to add resources on the fly that you would assume every vendor in this so-called era of the “commoditized hypervisor” would have this functionality. I mean, especially Microsoft Hyper-V 2012 R2, right?

I’m going to do a quick comparison of vSphere 5.5 with Windows Hyper-V 2012 R2. I’ll be using the same guest OSes and the latest and greatest versions – Hardware Level 10 with the vSphere Web Client, and a Gen2 VM in Hyper-V. Sadly, what you will find is that the Microsoft Virtual Machine is more like the Physical Machine, because as often as not, to make changes you have to power it down. To me that makes Microsoft Virtualization more like a 1.0-era solution. It’s all somewhat bizarre coming from a company that championed “Plug & Pray” in the 90s.

This comparison uses a Gen2 VM on Windows Hyper-V 2012 R2 running Windows 2012 R2 inside it – remember, Microsoft doesn’t offer any supported automated way to do this – as my previous post outlined:

Hyper-V R2eality: On the long, long road to Damascus – Hyper-V Gen1 conversion to Gen2

Adding a PCI Device

Adding a device in SCVMM 2012 R2 is done from the “Hardware Configuration” page; clicking the + New icon allows you to add new devices. With a Gen2 VM you can currently add a virtual disk and a DVD/ISO. However, the other devices – SCSI Adapter, Network Adapter, Legacy Network Adapter and Fibre Channel Adapter – are all unavailable.

image01

Now, we have had the ability to hot-add disks to physical servers for donkey’s years, so that’s no great shakes, and it’s not like we’re going to build a DVD server on a VM (are we?). As for a Gen1 machine, it’s worse because you lose the ability to even add a DVD…

image02

In contrast, with a Windows 2012 R2 VM on VMware vSphere 5.5 you are not half as restricted when it comes to adding devices. From the edit settings options you can add many different device types – I decided to go through each one in turn to see what the effect was of adding it whilst the VM was switched on:

image03

I found I was able to add all the devices whilst the VM was switched on, except a Serial port, Parallel port, and an additional floppy device. I was unsuccessful with a USB Host Device and PCI device because my server either didn’t have those resources OR hardware pass-through enabled. Within a couple of seconds Windows 2012 R2 plug-and-played those devices. It left me wondering: if only Microsoft could make Windows Hyper-V R2 VMs as plug and play as the operating systems they contain, they would have a better system on their hands.

image04

In summary, the hotness of VMware devices shapes up like this:

Device – Hot or Not?
New Hard Disk – Hot
Existing Hard Disk – Hot
RDM Disk – Hot
Network Adapter – Hot
CD/DVD – Hot
Serial Port – NOT
Parallel Port – NOT
Host USB Device – It Depends!
USB Controller – It Depends!
SCSI Device – It Depends!
PCI Device – It Depends!
SCSI Controller – Hot
SATA Controller – Hot
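Incidentally, if you prefer scripting to the Web Client, the “Hot” rows can be driven from PowerCLI too. A minimal sketch, assuming a connected vCenter session (Connect-VIServer) and a powered-on VM – the VM name “WIN2012R2”, disk size and port group are purely illustrative:

# Hot-add devices to a powered-on VM (names and sizes are hypothetical)
$vm = Get-VM "WIN2012R2"
# Hot-add a 40GB thin-provisioned virtual disk
New-HardDisk -VM $vm -CapacityGB 40 -StorageFormat Thin
# Hot-add a VMXNET3 network adapter connected to a port group called "VM Network"
New-NetworkAdapter -VM $vm -NetworkName "VM Network" -Type Vmxnet3 -StartConnected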

Adding a CPU

Despite most modern hardware and operating systems supporting hot-add of CPU resources, Windows Hyper-V 2012 R2 with a Gen2 VM still stubbornly thinks it’s in the 1990s. Although the screen grab doesn’t show this so well, the spinner and options are greyed out.

image05 

As you might suspect, a Gen1 VM fares no better:

image06

It’s only once you power the VM off, and claim a maintenance window, that you’re able to increase the CPU assignment.

image07

In contrast, adding a vCPU in vSphere is merely a click of a spinner – however, caveats do exist around guest operating system support – after all, VMware has no control over which OS you install into the VM, or its functionality.

With VMware vSphere the story is better, but it comes with qualifications. Hot-add of CPU is possible, but you must enable it as a feature before you power on the machine. The CPU hot-add option has never been enabled as a default – this is because enabling hot-add also disables virtual NUMA for the VM – and it’s impossible for VMware to know your preferred configuration. So there’s a catch-22 there: if you fail to enable the option, and later discover you need additional CPU horsepower, you have to shut down the VM. I guess one way round this is to make sure the hot-add CPU option is enabled by default in all your templates.

image08

Once the option to hot-add CPU is enabled, the spinner becomes available.

image09
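For the spinner-averse, both steps can be scripted. A sketch in PowerCLI – CpuHotAddEnabled is a property of the vSphere API’s VirtualMachineConfigSpec, and the VM name is again just for illustration:

$vm = Get-VM "WIN2012R2"
# Step 1: enable CPU hot-add via the vSphere API – the VM must be powered OFF for this
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.CpuHotAddEnabled = $true
$vm.ExtensionData.ReconfigVM($spec)
# Step 2: later, with the VM powered on, hot-add a second vCPU
Set-VM -VM $vm -NumCpu 2 -Confirm:$false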

Remember, whilst it’s possible to do processor upgrades, it’s not possible to do processor downgrades. There have been instances in the past where a guest operating system sees the new CPUs, but the applications within aren’t able to utilize them until they are restarted. So your mileage may well vary. It could be that Microsoft’s decision not to allow hot-add of CPU is based on a fair assessment of their own operating system’s capabilities (or should that be limitations?), and the fact that applications often don’t conform to common standards and best practices. A good example of this is Microsoft SQL Server. You can hot-add a CPU to a Windows VM running Microsoft SQL Server, but that doesn’t mean SQL Server will automatically schedule threads on the new CPU. Much is dependent on your application’s configuration, so you may well need to go into the application’s configuration to utilize the CPU properly. Faced with these complexities, many folks in the community just take a maintenance window and add the CPU whilst the VM is powered off. After all, not all applications or processes lend themselves to being multi-threaded anyway – as a wise man once said, not every task can be multithreaded. The classic real-life example is that it doesn’t take 1 month for 9 women to make a child… So there are many factors at play here – how well the guest operating system handles multi-threading generally (remember, there are operating systems other than Windows!) and how well the application behaves.

For example, I did a simple test with CPUstres (an application I’ve not looked at since my Windows NT days!). It allows up to 4 threads to run simultaneously. Once the single-vCPU VM was being hammered (confirmed by consulting Performance Monitor), I then added a 2nd CPU (I made sure this was a single core in a different socket to guarantee fully-fledged SMP). I didn’t see any significant load put on CPU1, compared to CPU0. I had to stop and restart CPUstres before I could see a load on both vCPUs. So, to really get the benefit of hot-add CPU you must a.) know thy application well and b.) be provided with tools that allow correct processor management from that application.

Source for quote: http://www.tomshardware.co.uk/forum/239009-49-aren-multithreaded-programs-applications

Adding Memory

Adding memory is a curious conundrum in Windows Hyper-V. As we all know, Hyper-V has a chequered past concerning memory allocation. In 2008 all you could do was assign a flat amount of memory to a VM, and Microsoft spent FUD dollars explaining to customers what an evil, pernicious thing VMware’s “memory over-commitment” was. This is a typical ploy of most vendors when they face a “feature gap”: you pour scorn on it, and invest in FUD to demonstrate how no one uses it, or should use it, because it is so laden with risk. With the release of Windows Hyper-V 2012 R2, Microsoft trumpeted the arrival of “Dynamic Memory”. It’s a curious feature name – up there with the “Active” era – because even if you use it, it’s actually quite static and inactive.

Firstly, be careful: the Microsoft default is “Static” memory. This allocates a set amount of memory upfront, regardless of whether the OS/application requires it. This chunk of static RAM must be physically present or else the VM will not power on (that’s akin to an administrator setting the Limit and Reservation on a vSphere VM, thus guaranteeing physical RAM to the VM at all times – regardless of whether the VM needs it). In Hyper-V this static allocation cannot be changed until you power off the VM.

image10

So, whatever you do, in most cases I would NOT recommend static. It’s a colossal waste of memory, and from an administrative perspective a pain in the butt. With that said, be aware that there are some cases where “Dynamic Memory” is not recommended. A good example of this is Microsoft Exchange:

Some hypervisors have the ability to oversubscribe or dynamically adjust the amount of memory available to a specific guest machine based on the perceived usage of memory in the guest machine as compared to the needs of other guest machines managed by the same hypervisor. This technology makes sense for workloads in which memory is needed for brief periods of time and then can be surrendered for other uses. However, it doesn’t make sense for workloads that are designed to use memory on an on-going basis. Exchange, like many server applications with optimizations for performance that involve caching of data in memory, is susceptible to poor system performance and an unacceptable client experience if it doesn’t have full control over the memory allocated to the physical or virtual machine on which it’s running. As a result, using dynamic memory features for Exchange isn’t supported.

http://technet.microsoft.com/en-us/library/jj619301(v=exchg.150).aspx

I guess THAT is how it should be. Settings like this should be dependent on the application, NOT the limitations of the virtualization layer. But as ever with Microsoft, it feels like the tail wags the dog – the reason for these recommendations is (cough) a limitation in Hyper-V, rather than substantive best practice.

Anyway, enough said – let’s assume, like Voltaire, we live in the best of all possible worlds, and you’ve elected to use “Dynamic” memory [notice the subtle change of the speech marks there!]. Firstly, you will want to expend some cycles understanding the various settings for “Dynamic” memory. Let’s pretend we live in a perfect world and you more or less get them right first time – but they just need a little tweak. What then?

image11

As you can see, the start-up memory option is disabled, leaving us with the minimum and maximum options. The truth comes out when you try to change some of these settings – Microsoft fesses up to say that they can only be “loosened”. This message is triggered when you try to increase the MINIMUM value, and when you try to decrease the MAXIMUM memory value.

image12

In fairness to Microsoft, you probably wouldn’t often want to tighten these values (decrease the maximum, or increase the minimum) on a running VM anyway. But it does mean that what goes up cannot come down: if you increase the dynamic memory maximum to a value that’s too high, you’re not able to bring it down again without powering off the VM. Remember that the “start-up memory” is what the Hyper-V VM sees in the first instance. If you check the system resources, it’s this value that is actually assigned. If the start-up memory is 2GB, and the maximum memory is 4GB, what actually gets assigned is the 2GB. And woe betide anyone who sets the minimum value too low – you can find software installers cannot get enough memory to even start.

image13
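You can see the “loosen only” rule from the Hyper-V PowerShell module too. A sketch, with a hypothetical VM name and values – the first two changes succeed on a running VM, the last is refused:

# Loosening: raising the maximum or lowering the minimum works whilst the VM runs
Set-VMMemory -VMName "WIN2012R2" -MaximumBytes 8GB
Set-VMMemory -VMName "WIN2012R2" -MinimumBytes 256MB
# Tightening: lowering the maximum on a running VM throws an error
Set-VMMemory -VMName "WIN2012R2" -MaximumBytes 2GB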

In contrast, vSphere has a simple spinner used to increase the RAM allocation on the fly. As with the CPU example, the memory hot-plug feature is not enabled by default. Once again this is because of the greater question of NUMA and the way it pegs memory access to a specific CPU. Again, this option cannot be enabled whilst the VM is powered on; I powered the VM down before enabling the hot-plug feature.

image14

Once memory hot-plug has been enabled, the edit box opens up to allow you to increase the allocation.

image15
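As with the CPU, the whole workflow scripts neatly in PowerCLI. A sketch, assuming the same hypothetical VM – MemoryHotAddEnabled is the sister property to CpuHotAddEnabled in the vSphere API:

$vm = Get-VM "WIN2012R2"
# Enable memory hot-add whilst the VM is powered off
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.MemoryHotAddEnabled = $true
$vm.ExtensionData.ReconfigVM($spec)
# Later, grow the allocation on the running VM
Set-VM -VM $vm -MemoryGB 8 -Confirm:$false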

Finally, on the subject of memory – I think it’s fair to say that best practice is to guarantee resources for your most critical assets. Where vSphere excels is in its ability to over-commit and yet guarantee resources in times of contention, as well as a more hands-off approach to memory management. We install VMware Tools and get the full range of memory management techniques. In contrast, Microsoft has some seven or so steps to set up a VM’s memory, with no top-level Resource Pools or visibility, which makes it difficult to fully understand what a VM will actually get in times of contention. So in that regard I think Microsoft has to tell users not to use Dynamic Memory in order to guarantee those resources for a VM – in contrast, vSphere has other controls that are part of a comprehensive memory management solution.

Adding Disks & Increasing Virtual Disk Size

One of the main features of, and reasons for, upgrading from Gen1 to Gen2 is the all-new ability to increase the size of the virtual disk. This is a simple task to carry out – you simply increase the spinner, and then use “Disk Management” or diskpart to grow the partition. This is possible even with the Windows C: drive, where the most common reason to increase the partition is running out of C: disk space. This feature has been around in VMware since 2003 (or perhaps even earlier) – from the command-line console admins could increase the size of a VMDK with vmkfstools -X, or alternatively, later on in Virtual Infrastructure 3.x, you could carry out the “grow” of a virtual disk with a spinner. Both VMware and Microsoft require the administrator to go into Disk Management or diskpart to do the partition work. I’ve always felt there should be a tick box in the virtualization management layer to handle this part – but that’s just my point of view (I later found out that PowerCLI has precisely this option!). I’m pleased that Microsoft has finally dealt with this issue – it’s long overdue. It’s part of my joke that Microsoft is indeed catching VMware up – it just takes them about a decade to do so. 😉

image16
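For completeness, the Hyper-V side of this is scriptable too. A sketch with the Hyper-V module – the path is hypothetical, and on 2012 R2 the VHDX needs to be attached to a SCSI controller for the resize to happen online:

# Grow the virtual disk; the partition work still happens inside the guest afterwards
Resize-VHD -Path "D:\VMs\WIN2012R2\WIN2012R2.vhdx" -SizeBytes 80GB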

Joking apart, although this is a great feature for Gen2 VMs, there’s little comfort for Windows Hyper-V customers using Gen1 VMs that use an IDE controller – which is the default in Windows Hyper-V 2012 and all previous releases of Windows Hyper-V. In truth this is a bit icky. Gen1 VMs boot from IDE, but… subsequent drives are added to a SCSI controller that is added once the VM’s guest OS has been installed. So additional data drives can be hot-added to this SCSI controller.

image17

The IDE disk which hosts the boot volume remains stubbornly un-expandable, with the option to extend the disk greyed out. For me it feels like Microsoft have put their customers somewhere between a rock and a hard place. Running out of disk space on the C: drive of a Gen1 VM? Sorry, can’t help you – why not spend some time manually converting the VM to a Gen2 version while you can. As ever, it would be much better not to be in this situation in the first place, with nothing stored on the C: drive – it’s a shame that so many Microsoft products like SQL Server default to this location when you create a database…

Anyway, with vSphere you have always been able to increase the size of a disk, and hot-add disks. That’s because SCSI has always been supported for all the components of the VM. This goes right back to the VMware ESX 2.x days in the 2003/4 period, where command-line tools were used to increase disk size. The only limitation back then was on operating systems like Windows 2003 that had quite poor disk management tools. I found I would use Dell’s ExtPart utility to increase the C: drive size of that generation of VMs. Things have got much better with later releases of Windows as the guest operating system. The only caveat around this is that some older OSes on vSphere, notably Windows XP, default to an IDE drive. This is ostensibly to make these OS types easy to install, by not asking the administrator to look for SCSI drivers for these client OSes. Now, IDE drives used this way cannot be extended. It’s one of the reasons I downloaded the LSI Logic drivers for Windows XP and put them on a .flp floppy image. You do this once, and after that you clone your original source VM. An ounce of prevention is worth a pound of cure, as Ben Franklin said…

Putting this issue to one side – increasing the disk capacity for a VM is merely a click of a spinner. No special generation of VM is required to do this – it works for any virtual disk, including the boot disk. So there’s no unpleasant conversion from a Gen1 to a Gen2 VM as you would find with Windows Hyper-V 2012 R2.

image18

There is one small qualification about increasing the VMDK size, which concerns going from <2TB to >2TB. With Microsoft Hyper-V 2012 R2 this would require a power down and a cloning process. The same is true of vSphere, except there would be no cloning process. Remember it’s not just the virtual disk size that matters here, but also the cluster size used during the format. Just because the disk is 62TB in size doesn’t mean your file system has the right attributes to access the disk. To go beyond 2TB currently requires both the vSphere and Hyper-V VM to be powered down. If you want to learn more about these anomalies, then check out a blogpost I did around the time vSphere 5.5 was released.

VMworld 2013: What’s New in vSphere 5.5 – The Virtual Machine – 62TB VMDK

Of course, you could use PowerCLI and PowerShell to handle this process end to end. You could use PowerCLI to grow the virtual disk:

IMPORTANT: If this was just a second disk (Hard disk 2) and not the boot disk, the Set-HardDisk cmdlet supports the -ResizeGuestPartition parameter, which makes the “invoke-command” part of the script redundant. However, -ResizeGuestPartition doesn’t support touching the boot disk.

Get-VM "WIN2012R2" | Get-HardDisk | Where-Object {$_.Name -eq "Hard disk 1"} | Set-HardDisk -CapacityGB 80 -Confirm:$false

80 makes the virtual disk 80GB in size, but if this were “Hard disk 1” (or Disk 0 in Windows), some of the partition space would be used by the 350MB Windows RE partition. Of course, GB is just a rounded value – so 80GB is really 81920MB, minus the 350MB for the Windows RE partition, leaving 81570MB of addressable disk space.

And then we use Invoke-Command (this Microsoft PowerShell cmdlet has a lot of pre-requisites, so beware!). We use Update-Disk (PowerShell, not PowerCLI) to make sure that the Windows “Virtual Disk Service” in Disk Management has the most recent information, followed by the Resize-Partition (PowerShell, not PowerCLI) cmdlet.

Invoke-Command -ComputerName win2012R2.corp.com -ScriptBlock {Update-Disk -Number 0}

Invoke-Command -ComputerName win2012R2.corp.com -ScriptBlock {Resize-Partition -Size 81567MB -DriveLetter C}

Note: Notice how I’m specifying a few MB less than the calculated number – that’s because, despite the RAW unformatted numbers, you never get all your free space, due to the payload of the file system itself. So if I’d increased the disk to 100GB, that would be 100x1024MB, or 102400MB, minus 352MB, to make 102048MB for the C: drive to take up the remaining space. I dare say a clever PowerShell Monkey could actually calculate the size of the C: drive remotely, discover the free space – add the two together and subtract a couple of MB. [Alan Renouf? Are you reading this? 😉]
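As it happens, the Storage module can do that arithmetic for us. A sketch, using Get-PartitionSupportedSize to discover the maximum size the C: partition can grow to – with the same remoting caveats as above:

Invoke-Command -ComputerName win2012R2.corp.com -ScriptBlock {
    Update-Disk -Number 0
    # SizeMax already accounts for the file system payload – no manual subtraction needed
    $max = (Get-PartitionSupportedSize -DriveLetter C).SizeMax
    Resize-Partition -DriveLetter C -Size $max
}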

Other Dynamic Reconfigurations

I guess you could be forgiven for not being able to hot-add CPU, memory or PCI devices, or for having horrible limitations and caveats around them – after all, it’s within living memory that these processes usually involved a maintenance window and a reboot. The trouble is that even tasks that are exclusively virtual ones occasionally rub against the same restriction – you have to power off the VM to do them. Here are a couple of choice examples:

1. Rename a VM

image19

Update: From @stufox – apparently it is possible to rename a VM from PowerShell or via Hyper-V Manager. For some reason or other, SCVMM isn’t aware of this option. I imagine once you have renamed a VM from PowerShell or Hyper-V Manager you will need to “Refresh Virtual Machine” to have SCVMM update to show that…
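If @stufox is right, something like this sketch should do it – Rename-VM from the Hyper-V module, followed by Read-SCVirtualMachine (the SCVMM cmdlet behind “Refresh Virtual Machine”); the names are hypothetical:

# Rename the VM with the Hyper-V module...
Rename-VM -Name "WIN2012R2" -NewName "WIN2012R2-GEN2"
# ...then nudge SCVMM so it notices the change
Get-SCVirtualMachine -Name "WIN2012R2" | Read-SCVirtualMachine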

2. High Availability

This must be set when the VM is defined, else you have to power off the VM to make it highly available – this sounds like a case of Microsoft defining darkness as an industry standard. Incidentally, you cannot enable Hyper-V Replica(tion) without first enabling High Availability.

image20

image21

Conclusions:

I guess, to be fair, that taken individually this lack of hotness in the Gen2 Windows Hyper-V 2012 R2 VM might not be a deal breaker for some. For me personally, collectively they add up to a big pain in the rear, especially if you’re coming off the back of a virtualization product like VMware vSphere that does have these features. For me the whole point of virtualization is that it liberates us from the limitations of the physical world. What’s the point of software-defined virtual machines, when it feels more like hardware-defined physical machines…