This section covers common virtual machine settings and options that every administrator should be aware of. It is by no means exhaustive – but it should be a good starting point for learning about alternative configurations. It covers topics such as:

  • Enabling Time Synchronisation and VMware Tools Updates
  • Managing Virtual Disks (Increasing size, adding new disks, adding RDMs)
  • Adding New Adapters and other devices
  • Hot Adding CPU and Memory

VMware Tools – Time Synchronisation and Updates

It’s common in most environments for VMs to receive time updates from the physical VMware ESXi host – which in turn is configured to an external NTP time source. Additionally, as administrators patch and maintain VMware ESXi – which can be done seamlessly without affecting VMs – it’s not unusual for VMware Tools to become out of date. Two settings in the properties of the VM enable time synchronisation and instruct the system to automatically update VMware Tools whenever the VM is powered on.

1. Select the VM in the inventory, and in the Summary tab select Edit Settings

2. Under the VM Options tab, expand >VMware Tools

3. Enable the options X Check and Upgrade VMware Tools before each power on and X Synchronize guest time with host

Screen Shot 2014-04-09 at 07.55.43.png
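Where this needs to be applied to many VMs, the same two checkboxes can be set programmatically. Below is a minimal sketch using pyvmomi (VMware's Python SDK); the vCenter address, credentials and VM name "web01" are placeholders, and production code would verify certificates and handle errors properly.

```python
# Minimal pyvmomi sketch of the same change made programmatically.
# Hostname, credentials and the VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()           # lab only - skips certificate checks
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="password", sslContext=context)

# Walk the inventory for the VM by name (hypothetical name "web01")
vm = None
for entity in si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.VirtualMachine], True).view:
    if entity.name == "web01":
        vm = entity
        break

# Build a reconfigure spec mirroring the two GUI checkboxes
tools = vim.vm.ToolsConfigInfo()
tools.syncTimeWithHost = True                         # "Synchronize guest time with host"
tools.toolsUpgradePolicy = "upgradeAtPowerCycle"      # "Check and upgrade Tools before each power on"

vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(tools=tools))
Disconnect(si)
```

The later sketches in this section re-use the same connection and "vm" lookup rather than repeating the boilerplate each time.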

Managing Virtual Disks

It’s not uncommon for an administrator to want to increase the size of a virtual disk and/or add additional disks to allow for the storage of application data. VMware vSphere has supported the hot-grow and hot-add of virtual disks for some years, and the process has become easier now that most guest operating systems have built-in tools to grow the file systems within the virtual disk. In the past, third-party partition tools such as Dell ExtPart would be needed to grow the OS partition if, for instance, it was running out of space. Limits do still exist – not least that in Windows 2012 R2 the operating system disks and partitions default to using MBR to enumerate the size of the disk, and to an NTFS allocation unit size that makes extending the OS partition beyond the 2TB limit difficult. In most cases this isn’t a practical limitation.

Increasing Virtual Disk Size

1. Edit Settings of the target VM

2. In the Virtual Hardware tab, increase the spinner for the virtual Hard Disk to the desired size

3. Click OK

Screen Shot 2014-04-09 at 08.07.58.png

Note: Increasing the size of a thinly provisioned disk is unlikely to create any immediate pressure on the amount of free space in the vSphere datastore – however, if the virtual disk is “thickly” provisioned then free space must be available to increase the virtual disk size.
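The grow operation can also be scripted. Below is a hedged pyvmomi sketch that assumes the connection and "vm" object from the earlier VMware Tools example; the 100GB target size is purely illustrative.

```python
# Hedged pyvmomi sketch of a hot-grow, re-using "vm" from the earlier example.
from pyVmomi import vim

new_size_kb = 100 * 1024 * 1024                       # 100 GB expressed in KB

# Find the first virtual disk attached to the VM
disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))

if new_size_kb < disk.capacityInKB:
    raise ValueError("Virtual disks can be grown but never shrunk")

disk.capacityInKB = new_size_kb
disk_spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
    device=disk)

vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[disk_spec]))
```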

4. In a guest operating system such as Windows 2012 R2, open Computer Management, right-click the Disk Management node and select Rescan Disks – to make the operating system aware of the new disk size

Screen Shot 2014-04-09 at 08.13.15.png

5. Next, right-click the partition within the virtual disk and select Extend Volume

Screen Shot 2014-04-09 at 08.13.43.png

6. After the wizard has completed, the partition within the newly grown virtual disk will take up the new free space

Screen Shot 2014-04-09 at 08.15.18.png
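As an alternative to the Disk Management GUI, steps 4 to 6 can be scripted inside the guest. The rough sketch below drives Windows' diskpart utility from Python; the volume letter "E" is a hypothetical example, and the script must be run within the guest with administrative rights.

```python
# Rough guest-side equivalent of steps 4-6, driving diskpart instead of the GUI.
# Run inside the guest as an administrator; the volume letter is a placeholder.
import subprocess
import tempfile

script = "\n".join([
    "rescan",            # step 4 - make Windows aware of the new disk size
    "select volume E",   # pick the volume sitting on the grown virtual disk
    "extend",            # steps 5/6 - grow the partition into the new free space
])

with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
    f.write(script)
    path = f.name

subprocess.run(["diskpart", "/s", path], check=True)
```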

Adding Additional Virtual Disks

Adding additional disks to an existing VM is merely a question of editing the settings and specifying the size and location. Most of the work is conducted within the scope of the guest operating system.

1. Edit Settings of the target VM

2. In the Virtual Hardware tab, at the bottom of the dialog box, click the New Device —-Select—- list

3. From the list select New Hard Disk, and then Click Add

Screen Shot 2014-04-09 at 08.38.30.png

4. Select the size of the disk and whether it should be thinly provisioned or not, and click OK

5. The new virtual Hard Disk should appear in the VM Hardware pane

Screen Shot 2014-04-09 at 08.41.27.png

Note: Adding a thinly provisioned disk is unlikely to create any immediate pressure on the amount of free space in the vSphere datastore – however, if the virtual disk is “thickly” provisioned then free space must be available to create the virtual disk at its full size.
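As with resizing, adding a disk can be scripted. The following is a hedged pyvmomi sketch, re-using the "vm" object from the earlier examples; the 40GB size, thin provisioning and the free SCSI(0:1) slot are illustrative assumptions.

```python
# Hedged pyvmomi sketch of adding a second, thinly provisioned 40 GB disk.
from pyVmomi import vim

controller = next(d for d in vm.config.hardware.device
                  if isinstance(d, vim.vm.device.VirtualSCSIController))

backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo(
    diskMode="persistent",
    thinProvisioned=True)                             # set False for thick provisioning

new_disk = vim.vm.device.VirtualDisk(
    backing=backing,
    controllerKey=controller.key,
    unitNumber=1,                                     # assumes SCSI(0:1) is free
    capacityInKB=40 * 1024 * 1024)                    # 40 GB

disk_spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=new_disk)

vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[disk_spec]))
```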

6. In a guest operating system such as Windows 2012 R2, open Computer Management. The new virtual Hard Disk should appear; right-click the disk and select Online.

Screen Shot 2014-04-09 at 08.42.38.png

7. Once online, the disk will need initializing with Initialize Disk. Care must be taken when initialising the disk to use GPT, to ensure the disk can later be increased in size beyond the 2TB boundary

Screen Shot 2014-04-09 at 08.44.22.png

8. After the disk is initialised it can be formatted; again, care must be taken to choose an appropriate Allocation Unit size for the disk. If the cluster size is left at a low value you may struggle to increase the partition to the maximum addressable size.

Screen Shot 2014-04-09 at 08.49.50.png

Once these guest operating system steps have been completed, the newly added virtual Hard Disk will be ready for use

Screen Shot 2014-04-09 at 08.50.50.png

Adding Raw Device Mappings

Raw Device Mappings, or RDMs, are a way of giving a VM “direct” access to a LUN/volume on an FC or iSCSI array. It is not the only way of achieving this configuration – for instance, the guest operating system residing in the VM may itself support an iSCSI initiator which, if appropriately configured (network route, IQN, target IP), can connect to the array. However, considering the VMware ESXi host(s) may already be configured for iSCSI, this could be regarded as a more complicated configuration. One of the myths of virtualization is that RDMs deliver better performance – this may be due to a misinterpretation of the term “Raw”, when what is actually implied is “native” access to the storage. With RDMs the SCSI calls still begin in the guest operating system and are translated through the VMware ESXi host – the performance difference is negligible, if not nil. More commonly RDMs are used:

  • To grant access to large volumes of existing data, where copying that data into a virtual disk is deemed unnecessary
  • In some guest operating system clustering scenarios
  • To allow for advanced controls over the SAN from the VM. Often referred to as a “control” LUN

Care should be taken when selecting RDMs outside of these use cases, as they make the VM less “portable”. For instance, decommissioning an array and using Storage vMotion to move VMs to a new storage array is more complicated if RDMs are in use. Additionally, there are some technologies from VMware and third parties that do not currently support the use of RDMs. For instance, VMware’s vCloud Director does not support VMs with RDMs within the Service Catalog. Finally, any LUN that is presented to the VM using RDMs should be masked/presented to ALL the VMware ESXi hosts within the cluster – failure to do so can stop features like vMotion and the Distributed Resource Scheduler (DRS) from working.

The process of adding an RDM creates a small “mapping” file with the .VMDK extension. This file is only a couple of kilobytes on disk, but reports the true size of the LUN/volume on the array. RDMs come in two modes – Physical and Virtual. Virtual mode allows for features normally only available to virtual disks, such as snapshots.

1. Edit Settings of the target VM

2. In the Virtual Hardware tab, at the bottom of the dialog box, click the New Device —-Select—- list

3. From the list select RDM Disk, and then Click Add

Screen Shot 2014-04-09 at 10.07.36.png

4. From the Select Target LUN pop-up dialog box, select the LUN that will be granted to the VM

Screen Shot 2014-04-09 at 10.15.52.png

5. Expand the >New Hard Disk, and select Virtual from the Compatibility Modes.

Screen Shot 2014-04-09 at 10.40.54.png

Caution: You should not include RDMs in VMs you intend to later convert into templates – as the RDM mapping would be included in that template.
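For scripted environments, the hedged pyvmomi sketch below shows how an RDM in virtual compatibility mode might be attached, re-using the "vm" object from the earlier examples. The LUN canonical name is a placeholder and would come from your own array presentation; the SCSI(0:2) slot is assumed to be free.

```python
# Hedged pyvmomi sketch of attaching an RDM in virtual compatibility mode.
from pyVmomi import vim

host = vm.runtime.host
lun = next(l for l in host.config.storageDevice.scsiLun
           if isinstance(l, vim.host.ScsiDisk)
           and l.canonicalName == "naa.600...")       # placeholder canonical name

controller = next(d for d in vm.config.hardware.device
                  if isinstance(d, vim.vm.device.VirtualSCSIController))

backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo(
    deviceName=lun.deviceName,
    lunUuid=lun.uuid,
    compatibilityMode="virtualMode",                  # or "physicalMode"
    diskMode="persistent")

rdm_disk = vim.vm.device.VirtualDisk(
    backing=backing,
    controllerKey=controller.key,
    unitNumber=2)                                     # assumes SCSI(0:2) is free

spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=rdm_disk)

vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[spec]))
```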

Removing Virtual Disks

Caution: To remove a virtual disk, start by taking the disk “Offline” from the guest operating system, thus ensuring no changes are taking place in the file system prior to removal. Removing the virtual disk offers the choice of either just disconnecting it from the VM, or disconnecting and deleting the virtual disk in one operation. Deleting a virtual disk is a non-reversible operation, and the only way to restore a deleted virtual disk is by using the backup vendor of your choice.

1. Edit Settings of the target VM

2. In the Virtual Hardware tab, locate the virtual Hard Disk you wish to remove, and click the grey remove X icon

Screen Shot 2014-04-09 at 10.51.02.png

3. After clicking the X icon, you will have the option to either remove the device, or remove the device and delete the virtual disk.

Screen Shot 2014-04-09 at 10.52.29.png
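A scripted equivalent, again a hedged pyvmomi sketch re-using the "vm" object from the earlier examples, makes the distinction between the two choices explicit: omitting the fileOperation merely detaches the disk, while setting it to destroy deletes the VMDK as well. The disk label used here is hypothetical.

```python
# Hedged pyvmomi sketch of removing a disk. Dropping the fileOperation line
# detaches the disk but leaves the VMDK on the datastore; "destroy" deletes
# the file as well - and, as noted above, that is not reversible.
from pyVmomi import vim

disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk)
            and d.deviceInfo.label == "Hard disk 2")  # hypothetical label

spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.remove,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.destroy,
    device=disk)

vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[spec]))
```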

Adding Network Interfaces and other hardware

VMs also support a number of other devices, including:

  • Network Adapters
  • Additional CD-ROM/Floppy
  • Serial Port
  • Parallel Port
  • Host USB Device
  • SCSI Controller
  • SATA Controller
  • PCI Device

Whether VMs actually require these devices depends very much on the circumstances and requirements of the guest operating system. For instance, if you are running clustering software within the guest operating system, it’s likely that additional network adapters will be needed for the primary network and the “heartbeat” network. In conventional, classical clustering the VM may well need an OS disk, a data disk and a quorum disk – and these secondary disks could require additional SCSI controllers, as well as a bus-sharing setting to indicate they are shared with other nodes in the cluster. A Virtual Storage Appliance (VSA) will typically require multiple disks and network cards, and will usually ship with these devices ready configured. Finally, the OS inside the VM may have specialist network functions such as being a firewall, VPN, router or intrusion detection system, and thus will require multiple network adapters to connect to the various networks it services.

1. Edit Settings of the target VM

2. In the Virtual Hardware tab, at the bottom of the dialog box, click the New Device —-Select—- list

3. From the list select Network, and then Click Add

Screen Shot 2014-04-09 at 11.57.36.png

4. Expand the >New Network and select the Portgroup you require, and the NIC Driver type

Screen Shot 2014-04-09 at 11.58.40.png

Modern operating systems such as Windows 2012 R2 should plug-and-play the network device, although you will need to set the IP address if you require a static IP.
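The hedged pyvmomi sketch below shows the same addition of a VMXNET3 adapter, re-using the "vm" object from the earlier examples. The portgroup name "Production" is a placeholder, and a VM attached to a distributed switch would need a different backing type.

```python
# Hedged pyvmomi sketch of adding a VMXNET3 adapter on a standard portgroup.
from pyVmomi import vim

network = next(n for n in vm.runtime.host.network
               if n.name == "Production")             # hypothetical portgroup name

nic = vim.vm.device.VirtualVmxnet3(
    backing=vim.vm.device.VirtualEthernetCard.NetworkBackingInfo(
        network=network,
        deviceName=network.name),
    connectable=vim.vm.device.VirtualDevice.ConnectInfo(
        startConnected=True,
        allowGuestControl=True))

spec = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    device=nic)

vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[spec]))
```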

Hot Adding Memory and CPU

Most modern operating systems support the hot-add of both CPU and memory. However, the older the operating system, the more likely it is that the right edition from a licensing perspective will be required – or you will find that the OS supports hot-add of memory or CPU, but not both together. Somewhat ironically, hot-add is not enabled by default on a VM, and enabling the feature requires that the VM be powered down. Therefore, if you’re serious about requiring the hot-add memory/CPU feature, it’s worth encoding this option in your templates. Remember that until very recently any increase in memory/CPU required a total shutdown of the OS and machine (physical or virtual), and it is something that many administrators plan a maintenance window around. It’s worth saying that it is difficult to predict whether an application will see the new memory/CPU resources – application developers often deviate from common standards, sometimes by accidentally following out-of-date guidelines – and this can mean the application or service misbehaves and requires a restart to see the new resources. It is a given that the application must be “multi-threaded” to be able to utilise more than one CPU core; despite the now endemic use of multi-core processors, this isn’t always the case. Finally, enabling hot-add is not without cost.

As Duncan Epping made clear in a blog post in January 2012:

Lets answer the impact/overhead portion first, yes there is an overhead. It is in the range of percents. You might ask yourself where this overhead is coming from and if that is vSphere overhead or… When CPU and Memory Hot-add is enabled the Guest OS, especially Windows, will accommodate for all possible memory and CPU changes. For CPU it will take the max amount of vCPUs into account, so with vSphere 5 that would be 32. For memory it will take 16 x power-on memory in to account, as that is the max you can provision. Does it have an impact? Again, a matter of percents. It could also lead to problems however when you don’t have sufficient memory provisioned as described in this KB by Microsoft: http://support.microsoft.com/kb/913568. Another impact, mentioned by Valentin (VMware), is the fact that on ESXi 5.0 vNUMA would not be used if you had the HotAdd feature enabled for that VM. What is our recommendation? Enable it only when you need it. Yes the impact might be small, but if you don’t need it why would you incur it?!

1. First shut down the guest operating system and VM with a right-click and Shut Down Guest OS

2. Edit Settings of the VM, and under the Virtual Hardware tab, expand >CPU

3. Enable the option X Enable CPU Hot Add

Screen Shot 2014-04-09 at 12.37.51.png

4. Next expand >Memory and enable X Memory Hot Plug

Screen Shot 2014-04-09 at 12.43.42.png
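Both options map to two simple properties on the VM's configuration, as the hedged pyvmomi sketch below shows, again re-using the "vm" object from the earlier examples. As with the GUI workflow, the VM must be powered off first.

```python
# Hedged pyvmomi sketch of enabling both hot-add options on a powered-off VM.
import time
from pyVmomi import vim

if vm.runtime.powerState != vim.VirtualMachine.PowerState.poweredOff:
    vm.ShutdownGuest()                                # graceful shutdown via VMware Tools
    while vm.runtime.powerState != vim.VirtualMachine.PowerState.poweredOff:
        time.sleep(5)                                 # wait for the guest to power down

spec = vim.vm.ConfigSpec(cpuHotAddEnabled=True,
                         memoryHotAddEnabled=True)
vm.ReconfigVM_Task(spec=spec)
```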

Once enabled and the VM is powered on, you should notice in the Edit Settings dialog box that CPU and Memory are no longer restricted and can be modified.

Screen Shot 2014-04-09 at 15.02.51.png

In the example below memory was increased from 4GB to 8GB, and the vCPU count was doubled from 2 vCPUs to 4 vCPUs

Screen Shot 2014-04-09 at 15.06.10.png
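Scripted, the hot-add itself is just another reconfigure. The hedged pyvmomi sketch below, re-using the "vm" object from the earlier examples, applies the same change as the example above while the VM is running; note that numCPUs and memoryMB are the new totals, not increments, and the call only succeeds if the hot-add options were enabled beforehand.

```python
# Hedged pyvmomi sketch of hot-adding resources to a running VM:
# doubling the vCPU count to 4 and the memory to 8 GB, as in the example above.
from pyVmomi import vim

spec = vim.vm.ConfigSpec(numCPUs=4,                   # new total vCPU count
                         memoryMB=8 * 1024)           # new total memory: 8 GB
vm.ReconfigVM_Task(spec=spec)
```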