For me this post has a funny title. Why? Well, because with vCD 5.1, as soon as you add a Provider Virtual Datacenter during the initial configuration, the default behaviour is to enable VXLAN-backed network pools automagically – it creates one for each Distributed vSwitch backing those vSphere clusters. That can lead to an initially funky experience if you're not aware of it. If you want a smooth configuration, you should really enable VXLAN support at the hosts first. This is something I wrote a blogpost about a few weeks ago, and I'm going to cover that ground again in this blogpost – but with a bit more detail. For me this is particularly important because currently a lot of the education materials we have around vCloud Director are still based on the 1.5 version, and VXLAN is very new to many people.

I guess the first question lots of people will ask is whether VXLAN "sunsets" the vCD-NI method (which uses MAC-in-MAC encapsulation to generate many network IDs within a single VLAN) – and also, with the recent acquisition of Nicira, how that will eventually integrate with the other goals of software-defined networking. Right now my view is "the more the merrier" – there will be many ways in which an SDN can be created, and it's really down to the customer to look at their needs and the requirements of each method, and select the best one for them. Nicira is very new, and I think it's too early to speculate about how it will relate to the overall stack. So rather than dealing with speculation I'd rather focus on what's on the truck currently, rather than what's coming on the next truck. I'll leave the speculation to the folks who work professionally in the media. 😉

So "VXLAN" stands for "Virtual Extensible LAN". Like vCD-NI it can create many network IDs within a single VLAN defined at the physical switch. Whereas vCD-NI uses MAC-in-MAC encapsulation, VXLAN uses MAC-in-UDP instead, generating a 24-bit identifier and a namespace that allows for around 16 million (2^24) networks. Gulp. The important thing here is to understand that it can do this within a VLAN and also between VLANs. As well as allowing you to create many networks without having to worry about the VLAN limits of the physical switch, it allows for communication between VLANs as well. That allows the VM to be more portable across compute/storage/network domains. The key here, I think, is that when vSphere environments are constructed the clusters often reflect silos of compute/storage/network; that's great for making sure a problem in one cluster doesn't affect another, but not so great if you want to move workloads (aka VMs) from one cluster to another. Where this really shows itself is in vCD, where one Provider Virtual Datacenter can be backed by many clusters (assuming they have the same capabilities) – ideally what we want to do is break free of the "constraints" of the cluster, so we can begin to see them as pools of clusters that our tenants can use. We already live in a highly abstracted, virtualized world – but what we want to do is turn the dial up a notch. Virtualization turned up to 11, if you like Spinal Tap references!

Now, of course the way to get one VLAN to speak to another – one network to speak to another – is to connect them together with some sort of device that has routing/NATting capabilities. That's something we can do right now with the vCNS Edge Gateway (or "vShield" as it used to be called, and still is called in some quarters – in fact, okay, the product is still called vShield! Let's just pretend it's called vCNS Edge Gateway for now!). But it can get overly complex. What would be neater is to do what some folks call a "network overlay" – where another layer of abstraction is placed on top of the existing VLAN configuration such that VMs speak to their VXLAN ID, unaware of the underlying VLAN configuration beneath it. As you might expect, VXLAN is not without its requirements, including:

  • A VLAN to back the VXLAN traffic (one per Distributed vSwitch)
  • Multicast support and a range of multicast IP addresses
  • The MTU correctly set to prevent frame fragmentation (see the quick PowerCLI check below)
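
If you want to sanity-check that last requirement before you start the preparation, a minimal PowerCLI sketch like this does the job – assuming a recent PowerCLI with the Distributed vSwitch cmdlets, and substituting your own names (vcnyc.corp.com is just a placeholder for my vCenter):

Connect-VIServer vcnyc.corp.com   # placeholder vCenter name – use your own

# Report the current MTU configured on the Distributed vSwitch
Get-VDSwitch -Name "Virtual Datacenter DvSwitch" | Select-Object Name, Mtu

# Raise it – but only if the physical switches already carry jumbo frames end-to-end
Get-VDSwitch -Name "Virtual Datacenter DvSwitch" | Set-VDSwitch -Mtu 9000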

The easiest way to prepare the vSphere hosts for VXLAN is to load up the vSphere Client. VXLAN is one of the few new features that is not visible in the web-client. That's because the management for this is actually handled by the vCNS Management Appliance (the artist formerly known as vShield Manager), and the tabs in the vSphere Networking view are really links to the vCNS management pages…

1. You can locate these by logging into the vCNS Manager webpage, expanding +Datacenters, selecting your datacenter (as defined in vSphere), selecting the "Network Virtualization" tab, and clicking the "Preparation" link.

Screen Shot 2012-11-12 at 11.49.39.png

2. Once you're here, you will see the Edit… button where you can start the preparation process. This process installs a VIB into each vSphere host to allow it to support VXLAN.
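
If you like to confirm what actually landed on a host, here's a rough PowerCLI sketch using Get-EsxCli – the exact VIB name varies between builds, so this just filters on anything with "vxlan" in the name (esx01nyc.corp.com is one of my hosts; use your own):

# Ask the host's esxcli interface which VIBs are installed...
$esxcli = Get-EsxCli -VMHost esx01nyc.corp.com

# ...and pick out anything that looks VXLAN-related
$esxcli.software.vib.list() | Where-Object { $_.Name -match "vxlan" } | Select-Object Name, Version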

3. The Edit… button opens a dialog box where you can select the cluster, Distributed vSwitch and VLAN that will contain the VXLAN. I guess in my case this would be more "realistic" if the Gold and Silver clusters had their own unique Distributed vSwitches, each with their own uniquely assigned VLANs. Increasingly I'm wondering if the way I've built my clusters might not be ideal for playing with these features. In this case, clusters that share the same Distributed vSwitch must be set to the same VLAN ID…

Screen Shot 2012-11-12 at 11.51.00.png

…If you don't, you will see an error message. In my case I have two clusters able to see the "Virtual Datacenter DvSwitch" – and this is the Distributed vSwitch used by tenants.

4. After clicking Next, you can select the "Teaming Policy" and MTU size. The MTU defaults to 1600 bytes, but if you have been following my posts for a while you'll know that I've been reconfiguring my network – and increased the MTU on my physical switches to support the maximum (9000 bytes). I set my configuration up for a simple failover model, but there is support for load-balancing options as well.

Screen Shot 2012-11-12 at 11.51.37.png

…and you will see that the system adds a vmkernel portgroup to the Distributed vSwitch, with the vxw-vmknicPg prefix, like so:

Screen Shot 2012-11-12 at 11.52.40.png

If you go back to the vCNS view and do a refresh, you will see that the hosts are "Not Ready". This happens because the VXLAN vmkernel portgroup defaults to a DHCP configuration – and if there isn't a DHCP server on the VLAN used, then these vmkernel ports will fall back to an "auto-IP" (169.254.x.x) address like so:

Not-Ready-Status.jpg

Note: Due to a cock-up on my part, I lost the screen grab for this part of the post. So I had to steal it from someone else – namely from Rawlinson Rivera's post, which focuses on the same/similar setup – http://www.punchingclouds.com/2012/09/09/vcloud-director-5-1-vxlan-configuration/
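
Incidentally, if you'd rather spot these from PowerCLI than click through each host, a quick (and hedged) sketch like this lists the VXLAN vmkernel ports by their vxw-vmknicPg prefix along with their addressing – anything still sitting on a 169.254.x.x address is a host that isn't ready:

# Walk every host and report its VXLAN vmkernel port(s), IP and DHCP status
foreach ($vmhost in Get-VMHost) {
    Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel |
        Where-Object { $_.PortGroupName -like "vxw-vmknicPg*" } |
        Select-Object @{N="Host";E={$vmhost.Name}}, Name, PortGroupName, IP, DhcpEnabled
}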

You can modify these IP settings via the web-client: under >Hosts & Clusters, select a host, go to >Manage, select the VMK number that corresponds to the vmkernel portgroup, and click the Edit button.

Screen Shot 2012-11-12 at 11.59.10.png

Of course, the other option here would be to supply a DHCP server on the network that you have assigned to the VXLAN – or alternatively we could use PowerCLI to change the IP settings of the existing vmkernel port like so:

# Grab the VXLAN vmkernel port (vmk8 in my case) on the host...
$nic = Get-VMHost esx01nyc.corp.com | Get-VMHostNetworkAdapter -VMKernel | Where-Object { $_.Name -eq "vmk8" }

# ...and give it a static IP address on the VXLAN transport network
Set-VMHostNetworkAdapter -VirtualNic $nic -IP 10.10.2.1 -SubnetMask 255.255.255.0 -Confirm:$false
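
And if you have a lot of hosts to get through, here's a variation on the same theme – a sketch only, which assumes each host has just one VXLAN vmkernel port, and that the "Gold" cluster name and 10.10.2.x subnet match your own preparation settings:

# Hand out sequential static IPs to the VXLAN vmkernel port on every host in the cluster
$i = 1
foreach ($vmhost in Get-Cluster "Gold" | Get-VMHost | Sort-Object Name) {
    $nic = Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel |
        Where-Object { $_.PortGroupName -like "vxw-vmknicPg*" }
    Set-VMHostNetworkAdapter -VirtualNic $nic -IP "10.10.2.$i" -SubnetMask 255.255.255.0 -Confirm:$false
    $i++
}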

By fixing up the IP settings of the VMkernel ports you can make all those red "Not Ready" Xs turn to green!

Screen Shot 2012-11-12 at 12.02.53.png

The next step varies – if you were NOT using vCD you would need to set the segment ID, the network scopes, and manually deploy a vCNS Edge Gateway. With vCD all we need to do is set the segment ID, and vCD will take care of the rest when you create a vApp that uses the VXLAN network pools.

The segment ID is not unlike setting the number of vCD-NI-backed networks you need – rather than a flat integer like "100", it's specified as a range such as 5000-5100, which would give you a hundred network IDs, or "virtual wires" as they are referred to. You also need to specify a range of multicast addresses that will be used to back these. Segment ID pools start their numbering at 5000, so 5000-5100 would give you a hundred networks within (in my case) VLAN 2013.

5. Click the Segment ID button, and click Edit…

Screen Shot 2012-11-12 at 11.45.24.png

Once the segment ID pool and multicast range have been configured, you can head back to vCloud Director and "repair" the network pool that was created by default.

Screen Shot 2012-11-08 at 14.10.01.png

Once this repair process has been completed you can see that both VXLAN network pools (for Gold & Silver) are correctly enabled:

Screen Shot 2012-11-12 at 11.09.26.png
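
If you prefer to double-check this from the command line, PowerCLI also ships with vCloud Director cmdlets – an optional, hedged check, assuming those cmdlets are installed and with vcd.corp.com standing in for your own vCD cell address:

# Connect to the vCD cell and list the network pools backing the Provider vDCs
Connect-CIServer vcd.corp.com
Get-NetworkPool | Select-Object Name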

As I said in my previous post on the topic, for a smooth vCD deployment I would put enabling VXLAN at the vSphere level on my to-do list alongside other requirements such as pre-provisioning VLANs on the physical switch (if VLAN-backed network pools are in use), increasing the MTU, and so on.