Introduction

vSphere supports a wide variety of storage configurations and optimisations. Virtual Machines can be stored on local, fibre-channel, iSCSI and NFS storage. There is also support for a distributed storage model called vSAN, which uses a combination of HDD and SSD drives within the servers to create a datastore for virtual machines. Additionally, there are a number of technologies which allow for the caching of VMs and their files to SSD or flash cards to reduce the IO penalty. VMware’s offering is called Flash Read Cache (vFRC), but options also exist from the third-party market.

Storage is critical to vSphere: not only do you need somewhere to store the virtual machines, but shared storage is also a requirement for advanced features such as vMotion, High Availability, Fault Tolerance, Distributed Resource Scheduler and Distributed Power Management.

This chapter focuses on the core storage technologies of the fibre-channel, iSCSI and NFS protocols. Once the storage is presented to the ESX host, VMware uses the term “datastore” to describe it. In some respects, as you use VMware ESX you begin to care less about the underlying protocol backing the storage. Ever since multiple storage protocols have been supported there has been an endless debate about which is the “best” storage protocol to use. The reality is there are too many factors at play (cache, seek times, spindles, RAID levels, controller performance) to make such a comparison meaningful. It’s perhaps best said that all storage protocols have their own unique advantages and disadvantages, and the fact that multiple protocols are still supported is indicative of the fact that no one dominant protocol has laid the others to rest.

NFS is perhaps the easiest to configure – all you need is a valid IP address to mount an NFS share across the network. With that said, by definition NFS does not use VMware’s VMFS file system, which might be preferred by some customers – which means iSCSI and FC are the only protocols to offer this. Most enterprise-class storage systems now support all three (if not more!) protocols, so what’s critical is understanding the VM/application use case together with an appreciation of the protocols’ features and functionality. The decision to use one storage protocol or array over another is often a decision which is outside the remit of the virtualization admin, and is a reflection of the organization’s history of storage relationships through the years.

UPDATE:

This blog post has been updated with a “Discuss The Options” session with vExpert Tom Howarth – in the video Tom walks us through the challenges surrounding implementing fibre-channel, iSCSI and NFS storage with vSphere.

The second video demos managing iSCSI and NFS volumes – together with how to provision new storage, using a Synology DiskStation as an example – and also covers the steps to format and grow a VMFS volume. If you’re watching the YouTube version, be sure to set the video to full screen and use 720p HD for the best quality. If you prefer, the native video is also available on mikelaverick.com

Alternatively: Native Video

Configuring Fibre-Channel Storage

Fibre-Channel connectivity is usually provided by a pair of Host Bus Adapters (HBAs), typically from QLogic or Emulex. As with Ethernet networking, multiple HBAs are normally used to allow for redundancy and load-balancing. The HBAs are connected to the underlying “fabric” by Fibre-Channel switches such as Brocade Silkworm devices – in what is commonly referred to as a “fully redundant fabric”. This provides multiple paths to the LUN or Volume being accessed on the storage array. FC-based connections generally offer the lowest latencies to the underlying storage, but this does not necessarily mean they are faster – for instance, a 2Gbps FC system may be out-performed by a 10Gbps Ethernet backplane. Whilst there is a great focus on bandwidth in storage, in most virtualized environments these pipes to the storage are not remotely saturated, and bottlenecks, if they exist, are to be found within the storage array itself.

In a new environment of VMware ESX hosts, one requirement is to discover each host’s WWN or World Wide Name, which is imprinted on the HBA itself. The value is akin to a MAC address on an Ethernet network card. The WWN is used by the storage array to “mask” or “present” LUNs to the hosts. Typically, when a LUN or Volume is presented to ESX this is done to all the hosts in a cluster. VMware’s file system – VMFS – inherently supports a clustered mode where more than one host can see the same LUN/Volume. In fact this is a requirement for many of the advanced features such as vMotion and High Availability. Most storage systems allow for the registration of the HBA’s WWN backed by a friendly name – and the grouping of these – so the LUN/Volume assignment can be done to the group rather than by remembering the WWN values themselves.

Locating the HBA WWPN on a VMware ESX Host

VMware ESX shows both the WWNN and the WWPN, which stand for Node WWN and Port WWN respectively. The “device” is the node (Node WWN) and the connections on the device are referred to as ports (Port WWN). You can locate the WWPN by navigating to:

1. >Hosts and Clusters >Select your vCenter >Select the DataCenter >Select the ESX host

2. Click the Manage tab

3. Select the Storage column

4. Select Storage Adapters

5. Select the vmhba device that represents your FC HBA

Screen Shot 2013-11-27 at 13.44.57.png

Note: In this case vmhba0 is an onboard IDE controller; vmhba1 is the local RAID controller card in an HP server; vmhba2/3 are FC HBA cards of different generations; and vmhba33 is the virtual adapter used with the Software iSCSI Adapter.
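If you would rather collect the WWPNs from a script than click through the Web Client, the same information is exposed via the vSphere API. Below is a minimal sketch using the pyVmomi Python bindings – the vCenter address, the credentials and the wwn_to_str helper are placeholder assumptions you would substitute with your own.

# Sketch: list the WWNN/WWPN of every FC HBA, per host, via pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def wwn_to_str(wwn):
    # Render the 64-bit WWN integer as colon-separated hex, e.g. 21:00:00:1b:...
    s = format(wwn, "016x")
    return ":".join(s[i:i + 2] for i in range(0, len(s), 2))

# Placeholder vCenter address and credentials - substitute your own.
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="Password123",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        print(host.name)
        for hba in host.config.storageDevice.hostBusAdapter:
            # Only Fibre-Channel HBAs carry WWNN/WWPN values.
            if isinstance(hba, vim.host.FibreChannelHba):
                print("  %s  WWNN %s  WWPN %s" % (
                    hba.device,
                    wwn_to_str(hba.nodeWorldWideName),
                    wwn_to_str(hba.portWorldWideName)))
    view.Destroy()
finally:
    Disconnect(si)

The output can then be pasted straight into the storage array’s registration screens, which is handy when you have a large cluster of hosts to register.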

Registering the FC HBA with the Storage Array

The process for registering the HBA with the storage array varies massively from one vendor to another. Some arrays auto-discover the HBA at boot time and pre-populate their interfaces. Typically, you still need to know the WWPN in order to complete the process. In some cases you might merely copy the WWPN value from the Web Client into the storage management system. In this example a NetApp array has been enabled for FC support, and the WWPNs have been added in the NetApp OnCommand System Manager.

Here a group called “ESX_Hosts_NYC_Cluster01” was created, and the WWPNs added.

Screen Shot 2013-11-27 at 13.58.27.png

This group is then used to control access when creating a LUN.

Screen Shot 2013-11-27 at 14.04.25.png

Rescanning the FC HBA in vCenter

Once a LUN/Volume has been presented, all that is required on the VMware ESX host is a “rescan” of the storage. There are a number of ways to do this, from a number of different locations. A right-click of the Datacenter or Cluster object in vCenter should present the option to rescan every ESX host. It’s probably best to do this on the properties of a cluster, if you have one – since every host within a cluster will need to see the same storage. It could be regarded as “dangerous” to do a full datacenter-wide rescan if your environment contains many ESX hosts and clusters.

Screen Shot 2013-11-27 at 14.12.43.png

Screen Shot 2013-11-27 at 14.24.03.png
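If you prefer scripting to the right-click menu, a cluster-wide rescan can also be driven through pyVmomi. The sketch below simply loops over every host in a named cluster – the cluster name is a hypothetical example, and it assumes an existing connection “si” as in the earlier WWPN listing.

# Sketch: rescan every host in one cluster (the cluster name is a placeholder).
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "NYC_Cluster01")
view.Destroy()

for host in cluster.host:
    ss = host.configManager.storageSystem
    ss.RescanAllHba()   # look for new LUNs on every storage adapter
    ss.RescanVmfs()     # pick up any new or resized VMFS volumes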

Alternatively, refreshes and rescans of storage can be done on a per-host basis from the Storage Adapters location. Three different icons control the rescan process.

The first icon merely carries out a refresh of the overall storage system; the second icon rescans all storage adapters; and the third icon rescans just the selected storage adapter.

Screen Shot 2013-11-27 at 14.26.44.png
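For reference, the three icons correspond to three different calls on the host’s storage system object in the API. A short sketch follows, assuming an existing pyVmomi HostSystem object called “host”; the adapter name “vmhba2” is a placeholder.

# Sketch: per-host equivalents of the three icons.
ss = host.configManager.storageSystem
ss.RefreshStorageSystem()   # first icon: refresh the overall storage system
ss.RescanAllHba()           # second icon: rescan all storage adapters
ss.RescanHba("vmhba2")      # third icon: rescan just the selected adapter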

Once the rescan has completed, you should see the new LUN/Volume presented to the ESX host under the “Devices” tab associated with an HBA, and under the “Storage Devices” category.

Screen Shot 2013-11-27 at 15.25.36.png

Screen Shot 2013-11-27 at 15.26.49.png
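If you want to confirm the new device from a script rather than from the Devices tab, the host’s storage device list exposes every SCSI LUN it can see. A minimal sketch, again assuming an existing pyVmomi HostSystem object called “host”:

# Sketch: list the SCSI disk devices a host can see after the rescan.
from pyVmomi import vim

for lun in host.config.storageDevice.scsiLun:
    if isinstance(lun, vim.host.ScsiDisk):
        size_gb = (lun.capacity.block * lun.capacity.blockSize) / (1024.0 ** 3)
        print("%-40s %8.1f GB  %s" % (lun.canonicalName, size_gb, lun.displayName))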