Today I completed updating the original vSphere 5.5 content on Standard Switches to make sure it chimed with the vSphere 6.0 U1 release. You can see the new chapter over here:

http://wiki.vmug.com/index.php/Configuring_Standard_Switches_in_vCenter_6

As you might suspect there really isn’t much to write home about in vSphere 6.0 U1 when it comes to Standard Switches – considering the functionality and configuration of this type of networking hasn’t really altered significantly from one generation of vSphere to the next. For the most part I saw no earthly point in retaking graphics or videos where nowt has changed.

However, there was just one area where I noticed what I felt was a change worthy of note – the list of “Available Services” that can be enabled is slightly different from vSphere 5.5 to vSphere 6.0. Let me show you where in the UI…

Before: vSphere 5.5

After: vSphere 6.0

As you can see there are now options for vSphere Replication Traffic/vSphere NFC Traffic as well as this thing called “Provisioning” Traffic. A quick click of the ? in the top corner of the box will take you to the online documentation – and a bit of further clicking will (eventually) tell you what this Provisioning Traffic is all about:

http://pubs.vmware.com/vsphere-60/index.jsp#com.vmware.vsphere.networking.doc/GUID-8244BA51-BD0F-424E-A00E-DDEC21CF280A.html

Supports the traffic for virtual machine cold migration, cloning, and snapshot creation. You can use the provisioning TCP/IP stack to handle NFC (network file copy) traffic during long-distance vMotion. NFC provides a file-type aware FTP service for vSphere; ESXi uses NFC for copying and moving data between datastores. VMkernel adapters configured with the provisioning TCP/IP stack handle the traffic from cloning the virtual disks of the migrated virtual machines in long-distance vMotion. By using the provisioning TCP/IP stack, you can isolate the traffic from the cloning operations on a separate gateway. After you configure a VMkernel adapter with the provisioning TCP/IP stack, all adapters on the default TCP/IP stack are disabled for the Provisioning traffic.
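If you’d rather script this than click around the Web Client, here’s a minimal pyVmomi sketch of how you might tag an existing VMkernel adapter for the new Provisioning traffic type. This is my own illustration, not anything from the official docs – the vCenter/host names, the vmk device and the credentials are all placeholders, so adjust to taste:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

# Lab-only: skip certificate verification
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="VMware1!",
                  sslContext=ctx)

# Find the ESXi host by DNS name (placeholder name)
content = si.RetrieveContent()
host = content.searchIndex.FindByDnsName(None, "esxi01.lab.local", vmSearch=False)

# The virtual NIC manager is where the VMkernel service tags live
vnic_mgr = host.configManager.virtualNicManager

# Tag vmk1 for Provisioning traffic (cold migration, cloning, snapshots)
vnic_mgr.SelectVnicForNicType("vSphereProvisioning", "vmk1")

# Show which adapters are now selected for that traffic type
cfg = vnic_mgr.QueryNetConfig("vSphereProvisioning")
print(cfg.selectedVnic)

Disconnect(si)
```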

I think it’s worth saying that a lot of the time this might not happen. If your provisioning tasks happen within the SAME array then ideally VAAI will use its awareness of SCSI primitives to offload the IOPS so the work happens inside the array (at blistering speed). However, there are some cases where this logically can’t happen – such as a move between two different storage arrays (you’re decommissioning one and emptying it of VMs), or you’re unfortunate enough to be using local storage and moving a VM from one ESXi host to another (if you’re doing this you should really be thinking about VSAN, my friend). Clearly, if the ethernet network must be used, this traffic can chew up the available bandwidth on your default management network – so dedicating a physical NIC and associating a portgroup with that type of traffic mitigates the impact. It’s akin to having a dedicated NIC for vMotion, because by default vMotion just gobbles up all the available network bandwidth to move the VM as fast as possible. Of course, there are other ways of limiting the impact of these bandwidth-heavy processes – with traffic shaping, for example.
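On that traffic shaping point, here’s another hedged pyVmomi sketch – this time capping outbound bandwidth on a Standard Switch portgroup (standard vSwitch shaping is egress-only). It reuses the si connection from the sketch above, and the portgroup name and bandwidth figures are made up purely for illustration:

```python
from pyVmomi import vim

# Reuse the "si" connection from the previous sketch to locate the host
host = si.RetrieveContent().searchIndex.FindByDnsName(
    None, "esxi01.lab.local", vmSearch=False)
net_sys = host.configManager.networkSystem

pg_name = "Provisioning-PG"   # made-up portgroup name

# Grab the existing portgroup spec so only the shaping policy changes
pg = next(p for p in net_sys.networkInfo.portgroup if p.spec.name == pg_name)
spec = pg.spec

shaping = vim.host.NetworkPolicy.TrafficShapingPolicy()
shaping.enabled = True
shaping.averageBandwidth = 500 * 1000 * 1000    # bits per second
shaping.peakBandwidth = 1000 * 1000 * 1000      # bits per second
shaping.burstSize = 100 * 1024 * 1024           # bytes

spec.policy.shapingPolicy = shaping
net_sys.UpdatePortGroup(pgName=pg_name, portgrp=spec)
```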

As for vSphere Replication Traffic/vSphere NFC Traffic – as ever, the phraseology in the vSphere product is rather letting the side down here. vSphere Replication Traffic is source replication traffic, and vSphere NFC Traffic is destination replication traffic. There’s probably a good reason for the ‘funny’ names used here – most likely because NFC isn’t used just for replication but for other background processes too. NFC comms have been used for a number of purposes over the years, not just replication – for instance it has been used in the past (and present?) for moving data around for backup purposes (to be honest, I don’t know if it still is…)
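To round things off, here’s a tiny sketch of my own (again pyVmomi, reusing the host object located in the first example) that reports which VMkernel adapters on a host are tagged for each of the two replication service types – the nicType strings are the API names that sit behind the UI labels discussed above:

```python
# Reuses the "host" object found in the first sketch
vnic_mgr = host.configManager.virtualNicManager

for nic_type, label in [("vSphereReplication", "source replication traffic"),
                        ("vSphereReplicationNFC", "destination (NFC) replication traffic")]:
    cfg = vnic_mgr.QueryNetConfig(nic_type)
    selected = cfg.selectedVnic if cfg else []
    print("{0:<40} {1}".format(label + ":", selected))
```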