During my study for the VCP-Cloud exam I’ve been experimenting with creating different types of network pools, trying to understand their requirements and configuration. vCD 5.1 supports four types of pool: VLAN, vCD-NI, Portgroup and VXLAN. I’ve chosen to put VXLAN at the end of the list because it’s new to vCD; if you’re using the older 1.5 release you won’t see anything to do with VXLAN.

The next couple of posts are going to cover the configuration of these different types of network pool, and they reflect some of my thinking about when and where you would use one type of network resource over another. In truth I think the decision to use, say, VLAN over VXLAN is more likely to be driven by politics and change-management than anything else. The reality is that both vCD-NI and VXLAN have requirements that must be met in the physical world before you can free yourself from the physical world. Ironic, isn’t it? As I understand it, both require an MTU change on the physical switch and on the Distributed vSwitch, and in the case of VXLAN it also requires multicast support. In contrast, VLAN-backed network pools only require that the physical switch be pre-populated with VLAN IDs and that VLAN tagging has been enabled, something many of us regard as standard in modern virtual infrastructures. Laying the politics of change to one side, however, I want to pretend there are no politics and that the decision to use one method over another is made from a pure technology perspective, and attempt (rather feebly) to align those methods to the demands of my tenants: “I chose the Y-backed method because the customer had these X requirements, and the Y method was the best way, or the only way, to achieve their needs.”

First up, I had to take a trip to my colocation. Why? Well, it might surprise you that in my lab environment I’ve never used VLANs in anger. In a simple environment it’s difficult to justify the extra configuration. Fortunately, with me learning about vCD my hand has been forced, and I finally had to enable VLANs on my network. I timed the visit to the colo to coincide with the setup of a new vSphere host for the Gold cluster, a job that also saw me add a new dual-port NIC and fibre-channel card to the host. I didn’t really know how successful this trip would be, because I’d never had any management over the switch that was going to host the VLANs before. I mainly bought it for the 48 ports and the fact it was gigabit; I was running out of ports on my previous 24-port switch when I acquired it. Fortunately, gaining control over this NetGear GS748T was simple, using NetGear’s “Smart Discovery” application.

Screen Shot 2012-11-06 at 09.41.59.png

The software found the switch in question and I was soon in the management web pages. My main mistake at this point was not checking the firmware version of the NetGear GS748T, or realizing that updating it would need not just the model number but also the serial number, in order to identify the right firmware bundle to use for the upgrade. I’m afraid I was so tunnel-visioned on the VLAN challenges that I didn’t make a note of the serial number, and sadly the management web pages don’t display it. Why is this important? Well, I discovered the firmware I’m on does not support multicast (required for VXLAN, remember), and a firmware update might add that functionality.

The first task was to enable jumbo frames on the switch. I was pleased to find this supported, because it means I will be able to use the vCD-NI-backed method for network pools.

Screen Shot 2012-11-06 at 09.46.22.png
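Of course, enabling jumbo frames on the switch is only half the story: the Distributed vSwitch (and anything else in the path) needs a matching MTU too. A quick way to confirm the end-to-end path is a don’t-fragment ping at a near-9000-byte size. Below is a minimal Python sketch I could run from a Linux VM on the segment; the target address is a made-up placeholder, and note that from an ESXi host the equivalent test would be vmkping -d -s 8972.

```python
import subprocess

# A 9000-byte MTU leaves 8972 bytes of ICMP payload
# (9000 minus 20 bytes of IP header and 8 bytes of ICMP header)
PAYLOAD = 9000 - 28

# Hypothetical target on the jumbo-enabled segment; substitute your own
TARGET = "192.168.3.101"

# "-M do" sets the don't-fragment bit (Linux ping syntax), so the ping
# fails rather than silently fragmenting if any hop has a smaller MTU
result = subprocess.run(
    ["ping", "-M", "do", "-s", str(PAYLOAD), "-c", "3", TARGET],
    capture_output=True, text=True,
)
print("jumbo frames OK" if result.returncode == 0
      else "MTU problem somewhere in the path")
```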

Next I needed to look at the network configuration for the VLANs. This is pretty ugly on two fronts. First, the UI for creating VLANs is pretty horrible; although it’s very easy to use, it’s not really geared up for bulk management. It involves first setting the switch to use IEEE 802.1Q VLANs (the default is port-based VLAN, where each port is hard-coded to a specific VLAN identity, and you’d need as many NICs as you had VLANs), then typing in the name, and then clicking little boxes to put a “T” against each port to indicate it will be trunked. That gets tedious if you have a lot of VLANs to create, as I did for the sake of realism; who wants a pool of just two VLANs per network pool?

Screen Shot 2012-11-06 at 09.51.07.png

Note: Ports 45-48 are empty; they are intended for use with fibre-channel modules, which I don’t have. This UI is awful. I longed for a CLI to define the VLAN IDs, especially once I decided on the number of VLANs to allocate per Organization (the best I could manage was the checklist-generator sketch below). In fact, it was so bad it made me want to reduce the allocation, so I had fewer VLANs to define!!!
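Since the GS748T has no CLI, the best I could do was generate a checklist to work through in the web UI. A throwaway Python sketch along these lines (the port and VLAN ranges are just my lab’s numbers, and the naming scheme is my own invention) at least keeps the clicking honest:

```python
import csv
import sys

TRUNK_PORTS = range(17, 45)   # the ports patched to the Virtual Datacenter DvSwitch
VLAN_IDS = range(11, 51)      # the IDs from my allocation plan below

# Emit one row per VLAN: the ID, a name for my records, and which
# ports need a "T" (tagged/trunked) clicked against them in the UI
writer = csv.writer(sys.stdout)
writer.writerow(["vlan_id", "name", "tagged_ports"])
for vlan in VLAN_IDS:
    writer.writerow([vlan, f"vcd-vlan-{vlan:03d}",
                     " ".join(str(p) for p in TRUNK_PORTS)])
```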

You might notice not all the ports on the switch are being used. Here’s the second ugly front, and I’m wondering if it’s even worth explaining. Ports 01-09 carry vmnic0/1 of the vSphere hosts in the Gold Cluster, used by the “Infrastructure DvSwitch”. This is meant to be a totally separate network, discrete from the network used by tenants and backed by a totally different physical switch (actually a Dell PowerConnect, which I got from the EqualLogic guys when they lent me two arrays; thanks, guys!). The trouble is the cables of the Gold vSphere hosts don’t reach the Dell switch, so despite having enough spare ports for that separation, I can’t get to the switch. The easiest long-term solution will be pulling these patch cables and getting longer ones so they can reach the Dell PowerConnect (maybe I will do that when I go back to find out the serial number of the NetGear). So for now the NetGear is having to service two different functions: ports 01-16 are in the default VLAN01 and uplinked to the Dell switch (port 10), alongside the cable that goes to my Fibre-Channel (port 11). I left ports 13-14 as spares (in case I add a 6th host to the Gold Cluster); port 15 is spare and port 16 is a bad port. Ports 17-44 host vmnic2/3 of all the hosts (esx01nyc to esx08nyc), and they have been patched to the “Virtual Datacenter DvSwitch” to be used by vCloud Director. That means VLAN01 is one of my “external networks” that allows access to the outside world. It’s not an ideal configuration by any stretch of the imagination, but for now it will do. For peace of mind, after setting up the VLANs I set up two new VMs and a software-based router to confirm they could only communicate via the VLAN tagging, and I also enabled the new “health check” feature in vSphere 5.1 to confirm the configuration was good.

So now I had a network set up, I decided to think about how I would allocate these VLANs to network pools, and as a consequence to my Organizations and vApps. That was an interesting question. Precisely how many VLANs would my respective organizations actually need, and if they were growing, what spare capacity would they need to grow into? If they needed 10 VLANs each, should I give them 10-20, 21-30, 31-40, with the VLAN numbers rubbing up against each other, or should I hold some VLANs “in reserve”, so to speak: more like 10-20, 40-50, 60-70? That way, if a tenant needed more VLANs I would be able to just increase the size of the pool using contiguous VLAN numbers. The other issue was what would happen if a new tenant came along. My switch only supports 256 VLANs as a maximum, so I would have to hold some VLAN IDs in reserve to accommodate their needs; additionally, I would need VLAN IDs for any vCD-NI or VXLAN work I wanted to do in the future. So this is what I decided to do for each of my Organizations. You might recall the names of these organizations come from a fictitious company I’ve invented that acts as a holding company for three different subsidiaries. There are three Organizations, not including the CorpHQ Organization, which represents the holding company itself:

  • iStock Public Trading Platform (istoxs.com)
  • Quark AlgoTrading (quarkalgotrading.com)
  • Corp Offshore Investments Group (coig.com)

CorpHQ Organization

vCD-NI-backed Pool       VLAN 11

VLAN-backed Pool         VLANs 12-20

ISTOCKS Organization

vCD-NI-backed Pool       VLAN 21

VLAN-backed Pool         VLANs 22-30

QUARK Organization

vCD-NI-backed Pool       VLAN 31

VLAN-backed Pool         VLANs 32-40

COIG Organization

vCD-NI-backed Pool       VLAN 41

VLAN-backed Pool         VLANs 42-50

So each of the Organizations gets a bundle of VLANs: one for the vCD-NI network, and the remainder for their own VLAN networking.
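To make the arithmetic behind those tables reproducible (and easy to redo if I ever revise the allocation), here’s a small sketch that carves up the VLAN space per Organization: one ID for vCD-NI, the next nine for the VLAN-backed pool, with an optional gap held in reserve between tenants. The block size and the reserve gap are my own assumptions, not anything vCD mandates.

```python
def allocate(orgs, start=11, block=10, reserve_gap=0):
    """Give each org `block` contiguous VLAN IDs: the first for its vCD-NI
    pool, the rest for its VLAN-backed pool, leaving `reserve_gap` unused
    IDs between tenants for future growth."""
    plan, next_id = {}, start
    for org in orgs:
        plan[org] = {"vcd_ni": next_id,
                     "vlan_pool": (next_id + 1, next_id + block - 1)}
        next_id += block + reserve_gap
    return plan

for org, p in allocate(["CorpHQ", "ISTOCKS", "QUARK", "COIG"]).items():
    low, high = p["vlan_pool"]
    print(f"{org:8} vCD-NI: VLAN {p['vcd_ni']}  VLAN pool: VLANs {low}-{high}")
```

With the defaults this reproduces the tables above exactly; setting reserve_gap to a non-zero value gives the spaced-out style of allocation I pondered earlier.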

How did I come to these numbers? I plucked them out of the air. I mean, why would I assume that each of my tenants would need an equal number of VLANs? It’s a lab environment, remember, and a fictitious one at that, so I have no real “existing environment” to base these calculations on. One thing that interests me is the effort and time spent planning and configuring this, when I could have had just one jumbo network pool that contained all my possible VLANs… But I believe in trying to make things realistic (or complicated?), so I’m sticking with my plan. So there! 🙂

I’m going to assume that my organizations will mainly use their VLANs for the Production vDC, where they hold the VMs and vApps that are externally available to their end-users. VLAN-backed network pools are natively routable, and I’m going to assume that for political reasons this is easier to gain approval for than, say, vCD-NI or VXLAN-backed network pools. That’s going to influence my naming convention. My pool names are likely to include a combination of the Organization name (CorpHQ-, iStock-, Quark-, Coig-) together with the type of network resource being used (VLAN, vCD-NI, and so on). So something like “CorpHQ-VLAN Network Pool” is what I’m likely to decide upon.

1. Creating the network pools is a relatively easy affair. You can kick off the wizard either by selecting “4. Create another network pool” from the home page, or by navigating to the “Manage & Monitor” tab > Cloud Resources > Network Pools.

2. Click the + icon, and select the network resource you want to use as the backing type. In my case this would be “VLAN-backed”.

Screen Shot 2012-11-06 at 10.57.38.png

3. Next, type the range of VLANs you want to include in the pool (in my case 12-20 for CorpHQ) and click the Add button, then select the Distributed vSwitch these networks will be hosted on (in my case the Virtual Datacenter DvSwitch). Notice how this network pool is accessible by both my Gold & Silver Provider vDCs, because hosts in both vSphere clusters have access to this Distributed vSwitch.

Screen Shot 2012-11-06 at 11.00.55.png

4. Finally, give the network pool a friendly name and description:

Screen Shot 2012-11-06 at 15.44.32.png

At the end of the process you should see that a network pool has been created. You might notice that the VXLAN pools created by default have a red X next to their status. That’s because VXLAN has not yet been enabled on my vSphere hosts; that’s something I will hopefully be doing after I’ve upgraded the firmware on the NetGear.

Screen Shot 2012-11-06 at 11.12.19.png
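For the record, the clicking isn’t compulsory: vCloud Director 5.1 also exposes network pools through its REST API. I haven’t scripted my pools this way yet, so treat the following as a rough sketch rather than a tested recipe; the paths follow the 5.1 admin-extension API as I understand it, and the cell name and credentials are placeholders.

```python
import requests

VCD = "https://vcd.corp.com"   # hypothetical vCD cell
HDR = {"Accept": "application/*+xml;version=5.1"}

# Log in as a system administrator; vCD hands back a session token
# in the x-vcloud-authorization response header
s = requests.Session()
s.verify = False               # lab only: self-signed certificate
r = s.post(f"{VCD}/api/sessions", headers=HDR,
           auth=("administrator@System", "password"))
r.raise_for_status()
s.headers.update(HDR)
s.headers["x-vcloud-authorization"] = r.headers["x-vcloud-authorization"]

# List the existing network pools via the admin extension
pools = s.get(f"{VCD}/api/admin/extension/networkPoolReferences")
pools.raise_for_status()
print(pools.text)   # XML listing each pool's name, type and href
```

Creating a pool would be a POST of a VMWNetworkPool document back to the same area of the API, but since I haven’t tested that myself I’ll leave the XML to the API reference.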

At the moment I have some work ahead of me, because my allocation model for VLANs requires me to actually create them. That means I have 120 VLANs to create, and I’ve only got as far as 32 on my NetGear. I will be clicking with the mouse for some time. Perhaps there’s a moral to be had there about VLAN-backed network pools: they mean work for the network team (or for you, if you are the network team). That’s fine when you’re creating work for someone else; not so nice when that someone else is you. I could have not bothered with VLANs at all, just used vCD-NI/VXLAN to segment the network, and saved myself all this hassle.

The other thing I noticed is that, although the top-level UI in vCD shows the type as “VLAN”, it doesn’t have a view to show you which VLANs are backing the respective network pools. So I decided to revise my naming convention to include, in brackets, the VLAN ranges used for the network pools, like so:

Screen Shot 2012-11-06 at 16.01.56.png
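Since I’ll be typing that convention a lot, a trivial helper keeps it consistent. This is purely illustrative; the format is just my own convention, nothing vCD requires:

```python
def pool_name(org, backing, vlan_range=None):
    """Build a pool name like 'CorpHQ-VLAN Network Pool (12-20)'."""
    suffix = f" ({vlan_range[0]}-{vlan_range[1]})" if vlan_range else ""
    return f"{org}-{backing} Network Pool{suffix}"

print(pool_name("CorpHQ", "VLAN", (12, 20)))  # CorpHQ-VLAN Network Pool (12-20)
print(pool_name("Quark", "vCD-NI"))           # Quark-vCD-NI Network Pool
```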

And finally, the other important thing to note is that these changes in vCD are not reflected in vSphere straight away. That’s because the portgroups that point to these VLANs are created (and destroyed) on demand, as and when they are needed by Organization networks or vApp networks. So it’s one of those changes that isn’t immediately reflected in the vSphere layer.
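If you want to watch that on-demand behaviour from the vSphere side, you can poll the portgroups before and after deploying a vApp. Here’s a minimal pyVmomi sketch; the vCenter name and credentials are placeholders, and since vCD’s generated portgroup names vary I’m simply listing every distributed portgroup with its VLAN ID.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()      # lab only: skip certificate checks
si = SmartConnect(host="vcnyc.corp.com",    # hypothetical vCenter
                  user="administrator", pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Walk every distributed portgroup in the inventory and print its VLAN
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
for pg in view.view:
    vlan = pg.config.defaultPortConfig.vlan
    vlan_id = getattr(vlan, "vlanId", None)  # an int for a plain VLAN, ranges for a trunk
    print(f"{pg.name}: VLAN {vlan_id}")

view.Destroy()
Disconnect(si)
```

Run it before and after powering on a vApp with a new vApp network, and you should see a portgroup appear (and later disappear) without you ever touching vSphere.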