VMware Monster VM

Executive Summary:

This became a bit of a monster post in itself. But here are the key points in a nutshell.

  • VMware vSphere 5.5 supports 62TB disks – it’s not 64TB because we reserve 2TB of disk space for snapshots etc.
  • To take a <2TB disk beyond the 2TB range you do currently have to power off the VM. However, you can make the virtual disks bigger without any special conversion process involving a lengthy file copy – unlike a certain vendor up in Seattle 🙂
  • Disks with MBR need “converting” to GPT to go beyond the 2TB boundary. Currently, Windows 2008/2012 has no native tool to do this in-place; however, 3rd-party tools do exist. I’ve used them, and they work!
  • Even once the VMDK in VMware, and the DISK in Windows, is 62TB, you may still face the issue of a small “allocation unit” or “cluster” size within the NTFS PARTITION. NTFS defaults to a 4K “allocation unit” size on smaller partitions, but to address a full 62TB partition you need an “allocation unit” size of 16K or higher. So your mileage will ultimately vary based on the limits/features of the file system/Guest Operating System
  • These challenges go some way to explaining why Windows 2012 Hyper-V has an add-disk, copy-from-old-disk-to-new-disk process which takes time based on the volume of data and the speed of the storage – and also requires the VM to power off…

Introduction:

VMware was the first to really introduce the “monster VM” – and the monster in the datacenter just keeps on growing. So much so that I think we’re going to have to invent a new label to replace the term monster. For me the big change is not necessarily the increases in CPU and memory capabilities – I think for many customers these went beyond production requirements some time ago – the big change comes with the end of the 2TB limit on virtual disks, with the VMDK size going up to 62TB.

A brand-new “VM Version 10” virtual machine supports these capabilities natively.

Screen Shot 2013-04-18 at 11.05.15

During the creation of the VM you can expand the New Hard Disk option, change the units to TB, and set the size you want. In my case I used 62TB, but set it to be thinly provisioned because the datastore I’m using is only 1.8TB in size.

Screen Shot 2013-04-18 at 11.10.34
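If you’d rather script the creation, the same can be done from PowerCLI. This is a minimal sketch, not the exact method I used – the VM name “monster01” is a hypothetical, and it assumes an existing Connect-VIServer session and a PowerCLI build with the -CapacityGB parameter:

# "monster01" is a hypothetical VM name - adjust to suit
$vm = Get-VM -Name "monster01"

# 62TB expressed in GB (62 x 1024 = 63488); thinly provisioned so it
# will happily sit on a much smaller datastore
New-HardDisk -VM $vm -CapacityGB 63488 -StorageFormat Thin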

Existing VMs with smaller virtual disks – or VMs that need additional storage – can be handled in a similar way. So here I have a VM with a 40GB boot disk – using the spinner I can make it larger, and also add a new disk for additional storage – all without powering off the VM. Unlike a certain virtualization competitor who requires you to power off the VM and go through a lengthy conversion process from one format to another. I mention no names.

Screen Shot 2013-04-18 at 13.38.22

Screen Shot 2013-04-18 at 13.39.13
Warning: Personally, I think even a 2TB DISK0 or C: Drive is unnecessary. But I was curious to know what would happen…
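The hot-extend can be scripted too. A rough PowerCLI sketch, assuming a hypothetical VM called “w2k8r2” whose first disk we want to grow from 40GB to 100GB (newer PowerCLI builds expose -CapacityGB; older ones use -CapacityKB):

# Grab the first virtual disk of the VM and grow it in place -
# no power-off needed while the disk stays under the 2TB boundary
$hd = Get-HardDisk -VM (Get-VM -Name "w2k8r2") | Select-Object -First 1
Set-HardDisk -HardDisk $hd -CapacityGB 100 -Confirm:$false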

Windows 2008 R2:

In the next screen grab I have a standard installation of Windows 2008 R2. Notice how Disk0/C: Drive is still a “Basic Disk” and the new drive, Disk1, has yet to be initialized.

A New Disk with NO Data…

Screen Shot 2013-04-18 at 13.41.58

All I need to do is right-click Disk1 to bring it online, then initialize the disk using the GPT (GUID Partition Table) format – and then I can create a simple volume on it.

Screen Shot 2013-04-18 at 13.47.04
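If you have lots of these to do, the same steps can be scripted with diskpart from an administrative PowerShell prompt. A sketch only – the disk number, label and drive letter are assumptions:

# Feed a script to diskpart: bring Disk1 online, initialise it as
# GPT and create a single NTFS volume across the whole disk
@"
select disk 1
online disk
attributes disk clear readonly
convert gpt
create partition primary
rem 16K allocation unit - needed to address the full 62TB later on
format fs=ntfs label=DATA unit=16k quick
assign letter=E
"@ | Set-Content -Encoding Ascii dp.txt

diskpart /s dp.txt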

The boot disk free space can be handled in a couple of different ways. It could be converted into a Microsoft “Dynamic Disk” (mmm, dynamic? where do they get off using words like that?) and the partition turned into a “simple volume”, which then allows the C: partition to be made larger. Alternatively, I’ve always been a big fan of Dell’s ExtPart utility, which I have used with success on older Windows operating systems such as Windows XP/Windows 2003. To be honest, with the advent of “thin” virtual disks from VMware the days of me “running out of space” more or less came to an end. In the past, because of a lack of disk space in the lab, I often resorted to using teeny-tiny thick disks to save on space. So I haven’t pulled out ExtPart for some time.
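For the record, ExtPart itself is a one-liner – from memory the syntax is roughly as below (the volume, then the amount to add in MB), but do check the readme that ships with it:

# Grow C: by 10240MB (10GB) - syntax from memory, treat as approximate
.\extpart.exe c: 10240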

An Existing Disk with Data…

WARNING: As ever when you’re dealing with partitions that contain existing data, a backup beforehand is strongly recommended. However reliable OS disk management tools are, things can still go wrong. The other recommendation is that you take the disk “offline” before trying to extend beyond the 2TB boundary. Apparently, the GPT data can get modified when you cross this boundary. I personally didn’t bother because I was foolhardy enough to be curious about what would happen.

In this scenario I’m assuming you have a basic disk using MBR as the partition table type, with existing data. In this case I have a D: [DATA] drive on Disk1 with some very important data – but I’m running out of free space on the 40GB D: partition.

Screen Shot 2013-04-19 at 10.19.03

So power off the VM, increase the .VMDK size to 62TB, and power the VM back on. This results in two areas of free space being created – the first chunk of free space goes up to the 2TB limit associated with MBR, and the remainder goes up to the 62TB value. As Disk1 was added with MBR, the maximum size the D: drive could be extended to would be 2TB, leaving the remainder of the disk inaccessible.

Screen Shot 2013-04-19 at 10.36.56

A right-click of the D: drive would allow us to extend the D: partition to 2TB.

Screen Shot 2013-04-19 at 10.38.50

The only way to allow the D: drive to see the remaining space up to 62TB would be to convert the disk from MBR to GPT. However, the official documentation from Microsoft describes this conversion as – first backing up all your data, destroying all the partitions (with diskpart – there’s a sketch of this route below the links), re-initializing the disk as GPT, creating a new partition, and restoring the data. Of course. One option is to create a brand-new >2TB GPT disk and just copy the data from old to new, and there are a couple of commercial disk management products you can buy that will allow you to do an in-place conversion, such as:

http://www.partition-magic-server.com/ (by AOMEI)

http://www.disk-partition.com/
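For completeness, the officially supported (and destructive!) route boils down to something like the following inside an interactive diskpart session – after the backup, obviously. Disk 1 and the D: letter are assumptions:

select disk 1
rem WARNING: "clean" wipes every partition on the selected disk
clean
convert gpt
create partition primary
format fs=ntfs unit=16k quick
assign letter=D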

I spoke to the guys from AOMEI who make the Partition Assistant software, and managed to secure a temporary evaluation license to see if I could change the partition type from MBR to GPT on the fly on the data drive, and then take the partition beyond the 2TB barrier. The software was a breeze to install (but make sure you get the right version – like many partition management tools, there’s a “desktop” and a “server” edition). A right-click on Disk 2 exposed the “Convert to GPT Disk” option, and afterwards clicking the “Apply” button commits these changes to disk.

Screen Shot 2013-04-24 at 15.22.15

I found I could extend the 2TB disk beyond this range by using AOMEI’s rather nicer-looking utility instead:

Screen Shot 2013-04-24 at 15.27.37

Note: I’d have to say that regardless of which management tool you use, this does take some time!

Meanwhile, the built-in Microsoft disk management tool couldn’t cope with the resize – that’s because the maximum size of the partition is also limited by the cluster size chosen when the D: drive was originally formatted. In fact, I had the same issue with AOMEI. Really, the NTFS partition must be formatted with an “allocation unit” size of 16K or above for the FULL space of the 62TB disk to be addressed. That’s true whether the disk has existing data or not.

Screen Shot 2013-04-24 at 19.52.32
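The arithmetic behind that: NTFS can address at most 2^32 – 1 clusters per volume, so the maximum volume size is simply cluster size x (2^32 – 1). A quick PowerShell loop makes the point – 4K clusters top out around 16TB, while 16K comfortably covers a 62TB disk:

# NTFS volumes hold at most 2^32 - 1 clusters, so the maximum volume
# size scales directly with the cluster ("allocation unit") size
$maxClusters = [math]::Pow(2, 32) - 1
foreach ($kb in 4, 8, 16, 32, 64) {
    "{0,2}K clusters -> max volume ~{1:N0} TB" -f $kb, (($kb * 1KB * $maxClusters) / 1TB)
}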

An Existing Boot Disk…

A rescan of Disk0 should show the free space added by the increase to 62TB. I was a bit surprised to see the free space split into two chunks, but thinking about it, it made some sense. The C: drive is formatted using MBR/LBA, which goes all the way back to using cylinders, heads and sectors to enumerate the disk size. As we know, MBR/CHS originally allowed only 2TB disks – the source of the original limitation. A little googling on what Microsoft supports turned up this KB article – Windows’ Support for Disks with Capacity Greater than 2TB
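That 2TB figure isn’t arbitrary, by the way: the MBR partition table stores sizes as 32-bit sector counts, and with 512-byte sectors that caps out at 2TB. In PowerShell terms:

# 32-bit LBA sector count x 512-byte sectors = the classic 2TB ceiling
$maxBytes = [math]::Pow(2, 32) * 512
"MBR maximum: {0} TB" -f ($maxBytes / 1TB)   # prints 2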

A right-click of the C: partition on the boot disk did expose the “Extend Volume” option – but not for the entirety of the disk.

Screen Shot 2013-04-18 at 14.34.54
By default the C: drive can only be extended to a maximum of 2TB with a basic disk…

Reading through the KB article from Microsoft it appears there are some requirements to boot from a >2TB disk:

  • A 64-bit OS – Windows Vista or later (32-bit is out of the question…)
  • UEFI Firmware (as opposed to BIOS)
  • Disk Initialized with GPT Support rather than MBR/LBA

From what I can see it is possible to take an existing boot Disk0 and “convert” it from MBR to GPT. The conversion process is: back up all your data, convert, and restore all your data. Nice!

So I wondered what would happen if I a.) just switched the VM from BIOS to UEFI, and b.) created a brand-new VM using UEFI – could I get a Windows VM boot disk to use GPT and give me a 62TB boot disk? I knew a.) would be very dangerous because the web client warned me so!

Screen Shot 2013-04-18 at 19.16.33

As you might suspect, this didn’t help. It switched the boot options and made the VM unbootable – and simply switching the VM back to BIOS did not bring it back from the dead…

Screen Shot 2013-04-18 at 19.21.28

Screen Shot 2013-04-18 at 19.21.18

Windows 2012:

A New Disk with Existing Data:

WARNING: The same caveats apply as with Windows 2008 R2 – take a backup beforehand, and consider taking the disk “offline” before extending beyond the 2TB boundary, since the GPT data can apparently get modified when you cross it.

In Windows 2012, the default for new disks when brought online and initialised is that they are GPT-based. This means MBR is effectively deprecated – except for boot disks, which default to MBR/Basic. So here Disk0 is the boot drive – notice the partition type is set to MBR. The new disk (Disk1), on the other hand, defaults to using GPT…

Screen Shot 2013-04-29 at 16.11.25

Personally, I don’t find the above UI very pleasant – so I find myself going to the “Computer Management” UI, which is more familiar from the Windows 2008 R2 era:

Screen Shot 2013-04-29 at 18.25.00
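Alternatively, skip both UIs – Windows 2012 ships a proper set of storage cmdlets, so the whole online/initialize/format dance can be scripted natively, with no diskpart scripts required. A minimal sketch, with the disk number, drive letter and label as assumptions:

# Bring the new disk online and make it writable
Set-Disk -Number 1 -IsOffline $false
Set-Disk -Number 1 -IsReadOnly $false

# GPT is the default partition style in 2012, but be explicit anyway
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -DriveLetter E

# 16K allocation unit so the volume can later grow to 62TB
Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 16KB -NewFileSystemLabel DATA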

This Disk1 can be made greater than 2TB in size using the same method we observed for Windows 2008. So I created a file on the E: drive of the Windows 2012 VM, and then increased the size of the .VMDK.

Screen Shot 2013-04-29 at 17.00.24

Once the disk has been increased in size you can set about making the partition larger – right click the Disk Management node and select “Rescan Disks”.

Screen Shot 2013-04-29 at 18.26.45

 

and then extend the partition as normal by right-clicking the partition (in my case the E: drive) and using the “Extend Volume” option:

Screen Shot 2013-04-29 at 18.29.37
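The same rescan-and-extend can be done with the 2012 storage cmdlets too – a sketch assuming the E: drive:

# Re-read the disk layout, then grow E: to everything available
Update-HostStorageCache
$max = (Get-PartitionSupportedSize -DriveLetter E).SizeMax
Resize-Partition -DriveLetter E -Size $max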

However, you may still face a restriction based on the NTFS format cluster size as we saw in Windows 2008 R2.

Screen Shot 2013-04-29 at 18.31.52

Therefore, to truly be able to take a <2TB disk and make it as large as 62TB, care must be taken to select the correct cluster or “allocation unit” size during the format process. The command fsutil fsinfo ntfsinfo C: will print the “Bytes Per Cluster”, which is the equivalent of the allocation unit.

Screen Shot 2013-04-29 at 18.48.06
Here the C: drive as formatted by the installation used 4096
Screen Shot 2013-04-29 at 18.49.25
Whereas the 62TB E: drive used 16384

So at format time you would want to use an allocation unit size of 16K or higher – to avoid a scenario where the disk size is greater than the NTFS partition’s maximum size.

Screen Shot 2013-04-29 at 18.50.59
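From the command line, the equivalent is to state the allocation unit explicitly at format time with the /A switch (or with Format-Volume’s -AllocationUnitSize parameter, as sketched earlier):

# Quick-format E: as NTFS with a 16K allocation unit
format E: /FS:NTFS /A:16K /Q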

Installing to a GPT 62TB Virtual Disk:

So my next option was b.) creating a brand-new VM with the boot options set to EFI, and trying to make the disk GPT-enabled before the install of Windows itself.

So I created a Windows 2012 VM, changing the boot settings to EFI, and booted from the DVD. This didn’t work. I wasn’t sure what the source of the problem was – whether it was a problem in the EFI settings, or an issue with the LSI Logic SAS controller that we use inside the guest operating system.

Screen Shot 2013-04-18 at 19.46.26