Mounting NFS Volumes
NFS is a popular protocol on Linux and is commonly found on multi-protocol arrays. It’s very simple to set up on the ESX host. All that’s required is a valid IP address on the host, the IP address of the array, and the export or share path. Despite the existence of FC and iSCSI storage, many virtualization admins like using NFS because it is sometimes easier to present storage. Generally, each cluster is allocated its own chunk of LUNs/volumes, and these are not visible to other clusters in the site. This separation/segmentation or siloing reduces the chance of conflicts and the chance of one cluster affecting another – it can also make it harder to move VMs from one cluster to another, because they lack a common shared storage location. Additionally, NFS can be useful as ancillary shared storage for such items as templates, .ISO images or software.
NFS exports need to be set up with the “no root squash” property, which allows server-to-server connections with full access to the NFS file system. At no stage does the ESX host need to know the “root” account password on the storage array.
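On a generic Linux NFS server, for example, such an export might look like this in /etc/exports – the export path and subnet below are illustrative assumptions, not values taken from this post:

```
# /etc/exports – illustrative example only
# rw             = read/write access for the listed clients
# no_root_squash = do not remap root requests to the anonymous user;
#                  ESX requires this for full access to the export
# sync           = commit writes to stable storage before replying
/vol/templates  192.168.1.0/24(rw,no_root_squash,sync)
```

Storage arrays such as NetApp expose the same properties through their own management interfaces rather than a text file, but the effect is equivalent.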
This blog post has been updated with a “Discuss The Options” session with vExpert Tom Howarth – in the video, Tom walks us through the challenges surrounding implementing Fibre Channel, iSCSI and NFS storage with vSphere.
The second video demos managing iSCSI and NFS volumes – together with how to provision new storage, using a Synology DiskStation as an example – and it also covers the steps to format and grow a VMFS volume. If you’re watching the YouTube version, be sure to set the video to full screen and use 720p with HD for best quality. If you prefer, the native video is also on mikelaverick.com
Alternatively: Native Video
Creating an NFS volume on NetApp
NetApp is an extremely popular NFS vendor, and has arguably pioneered the mainstream adoption of the protocol in modern datacenters. NFS volumes and exports can be created through the OnCommand System Manager. When a NetApp volume is defined in System Manager, the storage administrator has the ability to indicate how it will be accessed.
Once the volume has been defined, permissions need to be adjusted to allow the VMware ESX hosts access. In the example below, every host with a 184.108.40.206 address has been granted read/write/root access:
Mounting an NFS volume to VMware ESX Hosts
1. >Hosts and Clusters >Select your vCenter >Right-Click the DataCenter or Cluster
2. Select New DataStore
3. Click Next to accept the location
4. Select NFS as the Type
5. Next, specify the details of the NFS mount by typing a friendly name for the datastore, the IP address of the NFS service and the path to the export – in this case /vol/templates.
Note: Different NFS providers use different syntax. For instance, NetApp defaults to /vol/&lt;exportname&gt;, whereas an Iomega NAS device would commonly use /nfs/&lt;exportname&gt;, and FreeNAS typically uses /mnt/&lt;exportname&gt;. NFS datastores can be mounted in a read-only mode. This can be helpful where other methods are used to populate the share with content. You may prefer that template and .ISO datastores be made read-only, to prevent virtualization administrators from accidentally using them to store production virtual machines.
6. Next, select which ESX hosts will mount the NFS datastore. Typically every host in the same VMware HA/DRS cluster would have access to the same storage. It is less common for datastores to be made available across clusters.
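The same mount can also be scripted per host from the ESXi command line with esxcli – a useful sketch when adding a datastore to many hosts. The array hostname, export path and datastore names below are illustrative assumptions, not values from this post:

```shell
# Mount an NFS export as a datastore on this ESXi host.
# "nfs-array.lab.local" and the paths are placeholder values.
esxcli storage nfs add \
  --host=nfs-array.lab.local \
  --share=/vol/templates \
  --volume-name=templates

# A read-only mount can suit template/.ISO stores:
# esxcli storage nfs add --host=nfs-array.lab.local \
#   --share=/vol/isos --volume-name=isos --readonly

# List the NFS mounts to confirm:
esxcli storage nfs list
```

Because esxcli acts on one host at a time, the command would need to be repeated on (or pushed to) each host in the cluster to match what the vSphere wizard does in a single pass.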
NFS volume Advanced Settings
By default, the number of NFS volumes that can be mounted is set to 8. If you wish to mount more than 8 NFS volumes this is possible, but an advanced setting called NFS.MaxVolumes needs to be modified for this to work. The maximum supported number of mount points is 256.
1. Select the ESX host, and click the Manage Tab and Settings column
2. Next select Advanced System Settings, and use the Filter option to show only NFS settings
3. Locate the parameter NFS.MaxVolumes, click the pencil icon to Edit the value
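Equivalently, NFS.MaxVolumes can be inspected and changed from the ESXi command line – the value of 64 below is just an illustrative example, not a recommendation from this post:

```shell
# Show the current NFS.MaxVolumes value on this ESXi host:
esxcli system settings advanced list -o /NFS/MaxVolumes

# Raise the maximum number of NFS mounts to 64
# (the supported maximum is 256):
esxcli system settings advanced set -o /NFS/MaxVolumes -i 64
```

As with the mount commands, this is a per-host setting, so it should be applied consistently to every host in the cluster.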