[Images: a Lauren Greenfield photograph of ladies on a lawn vs. a photo of a derelict brownfield site]

Note: As ever when I was writing a blogpost, I did a Google search for “Greenfield” and “Brownfield“. Interestingly, the second image for Greenfield came up with these ladies on the lawn. Apparently Lauren Greenfield is a famous US-based photographer and video artist. The first photo comes from her collection “Girl Culture“. Anyway, that’s my reason for selecting the photo and I’m sticking with it. The ladies are at least standing on a greenfield…

Introduction:

The terms “Greenfield” and “Brownfield” actually come from the world of planning, where a greenfield development is a new build on lush new ground unsullied by man’s interventions in nature. “Brownfield” describes the redevelopment of previously developed and usually abandoned land – normally associated with the rapid decline of heavy industries such as steel, coal, shipbuilding and chemicals. In IT we often use the terms to talk about whether an existing environment (brownfield) should be re-used for a new project, or whether net-new infrastructure (new servers and/or new switches and/or new storage) should be procured first. Now, I hope your existing datacenter looks nothing like my second picture – if it does then you should have installed VMware Site Recovery Manager!

When I was attending the vCloud Director Design Best Practices course a couple of weeks ago (hosted by the excellent Eric Sloof!), module 3 on “Cloud Architecture Models” talks about whether vCloud Director should be deployed on top of a “Greenfield” fresh install of vSphere, OR whether it can be deployed to an existing “Brownfield” install of vSphere. I was quite anxious about this because the course clearly states the recommendation is for a “Greenfield” deployment. Now that does not necessarily mean buying a new site, server rooms or datacenter – it could “just” mean a new rack of servers, network and storage – and then over time porting the existing VMs in the “legacy” vSphere environment into the shiny new cloud.

The thing that irks me a little about this recommendation is a problem I’ve had with the whole “greenfield” debate whenever a new technology arrives. For me it’s kind of up there with the question “What works best – a clean install or an upgrade?” Anyone who tried to upgrade an NT4/Exchange 5.5 box to Windows 2000/Exchange 2000 already knows the answer to that question, don’t they? It stands to reason that a greenfield deployment is going to be easier than trying to shoe-horn a new application into an existing environment for which it was never originally designed. But my real problem with the “greenfield” mentality is the impact on adoption rates.

I had a similar discussion with the folks at Xsigo before they were acquired by Oracle. They told me that where they stood a good chance of getting their technology adopted, and where they got the most traction, was in greenfield locations. But there’s a problem with that, isn’t there? If a new technology is limited/hamstrung by only being deployed in greenfield environments, you at a stroke hobble its adoption. Because let’s face it, you’ve just raised the bar/barrier to adoption – to now having to include the upfront CAPEX cost of acquiring new kit. It’s not like that happens every day – so the other thing we do when we play the “Greenfield” wildcard is limit adoption to the maintenance and warranty cycles that afflict most hardware procurement. Overplaying the “Greenfield” approach basically chokes off uptake of a technology.

Now, I’m not rubbishing the recommendation. Nor am I intending to rubbish the courseware from my good friends at VMware Education (remember, I’m a former VMware Certified Instructor). What I am doing is questioning this as a design best practice. And if I am honest, I can see why the courseware makes this statement if you look at the challenges of taking an existing vSphere platform designed for virtualization, not cloud, and preparing it to be utilised by vCloud Director. But what I would ask is: just because a particular configuration introduces “challenges”, is that a sufficient reason to either walk away or approach management with a request for a purchase order?

To be fair, the courseware does map out a possible roadmap for the migration/transition (there’s a rough sketch of the host shuffle after the list):

  • Create greenfield vSphere install
  • Migrate virtual appliances
  • Remove hosts from “legacy” vSphere environment
  • Redeploy hosts into shiny new infrastructure
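
If you fancy scripting that host shuffle, here’s a minimal pyvmomi sketch of the last two steps – evacuate a host from the legacy cluster and bolt it onto the new cloud cluster. The vCenter name, cluster names and credentials are all made-up placeholders, and the SSL thumbprint and licensing niceties are skipped, so treat it as a starting point rather than gospel:

```python
# Minimal sketch using pyvmomi: evacuate a host from the "legacy" cluster and
# re-add it to the new cloud cluster. All names and credentials are placeholders.
import ssl
import time

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim


def wait(task):
    """Block until a vCenter task finishes; raise if it errored."""
    while task.info.state in ("queued", "running"):
        time.sleep(2)
    if task.info.state != "success":
        raise RuntimeError(task.info.error)


def find_cluster(content, name):
    """Walk the inventory and return the cluster with the given name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    try:
        return next(c for c in view.view if c.name == name)
    finally:
        view.DestroyView()


ctx = ssl._create_unverified_context()        # homelab only - skips cert checks
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

legacy = find_cluster(content, "Legacy-Cluster")
cloud = find_cluster(content, "Cloud-Cluster")

host = legacy.host[0]                         # pick a host DRS has already emptied
hostname = host.name

wait(host.EnterMaintenanceMode_Task(timeout=0))   # VMs must already be migrated off
wait(host.Destroy_Task())                         # remove it from the legacy cluster

# Re-add the freed host to the new cluster (thumbprint/licensing handling omitted).
spec = vim.host.ConnectSpec(hostName=hostname, userName="root",
                            password="esxi-password", force=True)
wait(cloud.AddHost_Task(spec=spec, asConnected=True))

Disconnect(si)
```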

I have no issue with that, but being the kind of gung-ho psycho I am, I’m actually more interested in whether the existing environment could be kept as is. After all, you might have plenty of free compute resource in an existing vSphere environment – and we don’t say (for instance) that if you want to deploy VMware View or SRM you have to start again from scratch. So what makes vCloud Director so special? Well, the answer as ever is in the detail – which is also where you will find our friend the devil.

The Design Best Practices course does an equally good and honest job of stating what the challenges might be of taking an existing vSphere environment and trying to drop vCloud Director (or any cloud automation software/layer from another vendor, for that matter) on top of it. The courseware outlines four areas which could put a spanner in your spokes:

  • Non-Transferrable Metadata (The stuff you already have that won’t port magically into a cloud layer)
  • Resource Pools on existing HA/DRS Cluster
  • Network Layout
  • Storage Layout

Let’s take each one in turn and discuss.

Non-Transferrable Metadata:

There’s some stuff in vCenter which isn’t transferable into vCloud Director. This includes (but is not limited to, in my opinion):

  • Guest Customizations in vCenter
  • vShield Zone Configurations
  • VMSafe Configurations

I think these are relatively trivial. Guest customizations are easy to reproduce, and I think it’s unlikely that many vSphere customers had any vShield presence in their environment prior to looking at vCloud Director. Whether we like it or not, vCloud Director & vShield (now vCNS) are bundled together in the minds of a lot of people. True, you can have vCNS on its own, and there’s a truckload of advantages to that – but as we move towards a more suite-based view of the world it’s difficult to imagine that the two aren’t wedded to each other like husband and wife.
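
If you want a belt-and-braces copy of those Guest Customizations before you start moving things around, here’s a minimal pyvmomi sketch that dumps each spec out of vCenter as XML so it can be recreated by hand later. Hostname and credentials are placeholders:

```python
# Minimal pyvmomi sketch: export the vCenter Guest Customization specs to XML
# files so they can be recreated after the move. Credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()           # homelab only
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
csm = si.RetrieveContent().customizationSpecManager

for info in csm.info:                            # one entry per saved spec
    item = csm.GetCustomizationSpec(name=info.name)
    xml = csm.CustomizationSpecItemToXml(item=item)
    with open(f"{info.name}.xml", "w") as f:     # keep a copy outside vCenter
        f.write(xml)
    print(f"Exported customization spec: {info.name}")

Disconnect(si)
```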

What I think is missing here is the question of what happens to your existing VMs. Yeah, those things – remember them? They are really quite important, aren’t they? If you have ever installed vCloud Director you will see that the abstraction and separation is soooo complete that after the install you don’t see any of your existing VMs. How do you get your old junk into the new shiny cloud layer? There are a couple of methods.

  • Method 1: A bad approach in my book would be to power off your VMs in vSphere; export them to OVF; import them into the vCloud Director catalog; and then deploy.
  • Method 2: Log in as the system admin to vCloud Director – and use the Import from vSphere option. That’s not bad. But it’s not really very “tenant” friendly. I’m not in the habit of giving my tenants “sysadmin” rights just so they can get their VMs into their Organization. (A rough API sketch of this approach follows the list.)
  • Method 3: I personally think the best approach would be to use vCloud Connector to copy the vApps/VMs for you – vCC can also copy your precious templates from vSphere to vCloud Director – the only thing you lose are the Guest Customizations. But remember, whichever way you slice and dice it, the VM or vApp must be powered off to do the move (so that’s a maintenance window). It’s also a copy process – so you need temporary disk space for both the original VM/vApp and the new version in vCD.
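
For the curious, here’s a rough Python sketch of what Method 2 looks like driven through the vCloud API (I’m assuming API version 5.1 here – check your release). The hostname and credentials are placeholders, and you should verify the actual import payload against the vCloud API reference before trusting it:

```python
# Rough sketch of Method 2 via the vCloud API. Host, credentials and the exact
# import payload are placeholders/assumptions - check the vCloud API reference.
import requests

VCD = "https://vcd.lab.local"
ACCEPT = "application/*+xml;version=5.1"

# Log in as a system administrator ("user@System") and grab the session token.
resp = requests.post(f"{VCD}/api/sessions",
                     auth=("administrator@System", "password"),
                     headers={"Accept": ACCEPT}, verify=False)
resp.raise_for_status()
token = resp.headers["x-vcloud-authorization"]
headers = {"Accept": ACCEPT, "x-vcloud-authorization": token}

# List the attached vCenter ("vim server") references - the import action
# hangs off one of these.
refs = requests.get(f"{VCD}/api/admin/extension/vimServerReferences",
                    headers=headers, verify=False)
print(refs.text)

# The actual import is a POST of ImportVmAsVAppParams to the vim server's
# .../action/importVmAsVApp link, naming the source VM's vSphere MoRef and the
# target Org vDC - see the API docs for the exact XML body before relying on it.
```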

Existing Resource Pools:

It’s possible to use resource pools on a DRS cluster. Many people do – sadly for ALL the wrong reasons. Many naughty VM admins use them as if they are folders. They are not, and as such the practice is not only stupid, it’s also very dangerous. If you don’t believe me, read this article by Eric Sloof. If you do this, please stop. Really, manually created resource pools on a DRS cluster have NO role to play in vCloud Director – at best they cause confusion, at worst more problems. Avoid like the plague.
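
If you want to find out whether your brownfield clusters are already littered with these things, a minimal pyvmomi sketch like this will list them (the vCenter name and credentials are placeholders):

```python
# Minimal pyvmomi sketch: list any manually created resource pools under each
# DRS cluster so they can be cleaned up before vCloud Director arrives.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # homelab only
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    # cluster.resourcePool is the invisible root pool; anything underneath it
    # was created by hand (or by vCloud Director itself, once installed).
    for child in cluster.resourcePool.resourcePool:
        print(f"{cluster.name}: manual resource pool '{child.name}' "
              f"({len(child.vm)} VMs)")
view.DestroyView()
Disconnect(si)
```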

The ONLY time I would use manual resource pools is in a homelab where you don’t have the luxury of a dedicated “management cluster” separate and discrete from the DRS clusters that host the Organizations’ VMs. That’s my situation. I have an “Infrastructure” resource pool where I put all my “infrastructure VMs” – so my management layer is running on the same layer it manages. Not the smartest move in the book. But if you lack hardware resources in a homelab, what’s a boy to do?


Network Layout:

This was a biggy for me – I remember last year fessing up to Josh Atwell at VMworld about how, from a VLAN perspective, my network was a brownfield site – dirty, polluted and contaminated. Like a lot of home-labbers I had a flat network with no VLANs at all. I mean, what’s the point in a homelab, unless you get off on 802.1Q VLAN tagging? There are places where vCloud Director expects some sort of VLAN infrastructure – for external networks, for instance – and of course if you’re using VLAN-backed Network Pools then, as the name implies, it’s pretty much mandatory. So I implemented VLANs for the first time (greenfield!) and it was much less painful than I expected. I keep one of my Dell PowerConnect gigabit switches for management, and it’s downlinked to a cheap and cheerful 48-port NetGear gigabit switch which is VLAN’d for my vCloud Director tenants. The management layer can speak to the tenant layer (for ease of lab use) but there’s no comms from the tenant layer to the management layer – unless I allow that…

But less of me. What about production environments? What are the gotchas? Well, you have to be careful you don’t have overlapping VLAN ranges – so vCloud Director doesn’t go about making portgroups with VLAN tagging for VLANs that are already in use elsewhere. The same goes for IP ranges. You can have IP pools, but a bit like badly implemented DHCP servers, if you have overlapping ranges of IPs then there’s every possibility that a VM might get an IP address that’s in use elsewhere. My biggest problem in my lab is my rubbish IP ranges. Again this stems from it being a home lab. My core network is 192.168.3.x because that’s what my WiFi router at home used. What would make more sense would be to stick to the standard private IP ranges. Like this (a quick overlap check follows the lists):

Private Cloud:

  • External Network – 10.x.y.z
  • OrgNetwork – 172.16.x.x
  • vApp Network – 192.168.2.x

Public Cloud:

  • External Network – 81.82.83.1-81.82.83.254/24 (IP Sub-Allocated to each Organization – like 8 internet IPs each)
  • OrgNetwork – 172.16.x.x
  • vApp Network – 192.168.2.x
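
Since overlaps are the enemy here, a few lines of bog-standard Python will sanity-check the ranges for you. The networks below just mirror the example scheme above (with my home 192.168.3.x range thrown in) – adjust to taste:

```python
# Quick sanity check with the standard library: make sure the external, Org
# and vApp ranges don't overlap each other (or anything else already in use).
# The networks below mirror the example scheme above and are illustrative only.
from itertools import combinations
import ipaddress

networks = {
    "External":   ipaddress.ip_network("10.0.0.0/8"),
    "OrgNetwork": ipaddress.ip_network("172.16.0.0/12"),
    "vApp":       ipaddress.ip_network("192.168.2.0/24"),
    "Home LAN":   ipaddress.ip_network("192.168.3.0/24"),   # my WiFi router range
}

for (name_a, net_a), (name_b, net_b) in combinations(networks.items(), 2):
    if net_a.overlaps(net_b):
        print(f"WARNING: {name_a} ({net_a}) overlaps {name_b} ({net_b})")
    else:
        print(f"OK: {name_a} and {name_b} do not overlap")
```

(The 172.16.0.0/12 block is the real private range, by the way – 172.168.x.x is somebody else’s public address space.)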

There are other issues to be aware of as well, such as the vmnic configuration on the DvSwitches – are they being used for other purposes such as IP storage? Are those discrete and separate networks? Can you guarantee that tenants cannot see vmkernel traffic such as vMotion, Management and so on? In my view, if your VMs can see this traffic in vSphere, what you have is a vSphere design problem, not a vCloud Director one! But that’s another story!
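
If you want to see which VLAN IDs are already spoken for on your Distributed vSwitches before you carve out VLAN-backed network pools, a minimal pyvmomi sketch like this will dump them (credentials are placeholders, and I’m assuming VMware DvSwitches rather than third-party ones):

```python
# Minimal pyvmomi sketch: print the VLAN ID (or trunk ranges) of every
# distributed portgroup so you can spot clashes with planned vCD network pools.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # homelab only
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
for pg in view.view:
    vlan = getattr(pg.config.defaultPortConfig, "vlan", None)
    if vlan is None:
        continue                                  # non-VMware DVS or no VLAN policy
    if isinstance(vlan, vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec):
        print(f"{pg.name}: VLAN {vlan.vlanId}")
    elif isinstance(vlan, vim.dvs.VmwareDistributedVirtualSwitch.TrunkVlanSpec):
        ranges = [f"{r.start}-{r.end}" for r in vlan.vlanId]
        print(f"{pg.name}: trunked VLANs {', '.join(ranges)}")
view.DestroyView()
Disconnect(si)
```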

Storage Layout:

This is a big one. In my implementation I destroyed ALL my datastores from my previous vSphere setup – except my template, ISO and infrastructure datastores. Everything else got destroyed… Now, I’m not of the opinion that a successful meeting with management begins with the phrase “In order to implement this new technology we must destroy all our data”. So what I did isn’t an option in either a greenfield or a brownfield location.

It’s fair to say that you would want to avoid a situation where storage is being used at BOTH the vCloud level and the vSphere level. vCloud Director, when it’s used as a test/dev environment, can and will create and destroy lots of VMs in a short space of time. And you don’t want your “tenants” in the vSphere layer competing for disk IOs with the “tenants” in the vCloud layer. I guess there are a couple of ways to stop that. You could have server/storage dedicated to your vCloud, OR a judicious use of permissions on datastores could be used to “hide” the datastores from the vSphere users. That way they can’t touch the storage used by others. Of course that’s not the end of the story – if you have vSphere and vCloud users sharing the same cluster, then you could have all manner of performance problems caused by activity taking place in one area affecting performance elsewhere – and because of the abstraction it might be tricky to see the cause. Nightmare. All of this does seem to point quite heavily to separate environments, or to deciding that EVERYONE has to get into the vCloud Director world and do their deployments there – with no opportunity to sneak around vCloud Director and gain access to the vSphere layer on the QT.
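
Here’s a minimal pyvmomi sketch of that “hide it with permissions” idea – granting the built-in NoAccess role on a vCloud-owned datastore to the ordinary vSphere users’ group. The datastore name and AD group are made up for illustration, so treat this as a starting point rather than gospel:

```python
# Minimal pyvmomi sketch: grant the built-in "NoAccess" role on a vCloud-owned
# datastore to a group of vSphere-only users. Datastore and group are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()           # homelab only
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()
authz = content.authorizationManager

# Find the built-in "NoAccess" role.
no_access = next(r for r in authz.roleList if r.name == "NoAccess")

# Find the datastore that should be invisible to the vSphere-only crowd.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
ds = next(d for d in view.view if d.name == "vCloud-Gold-01")
view.DestroyView()

perm = vim.AuthorizationManager.Permission(
    principal="LAB\\vsphere-users",              # an example AD group of non-cloud admins
    group=True,
    roleId=no_access.roleId,
    propagate=True)
authz.SetEntityPermissions(entity=ds, permission=[perm])

Disconnect(si)
```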

Finally, from a storage perspective, the datastores used for catalogs should not be the same datastores used for running VMs – for the same reason: performance could be undermined by the activity of one task on another. That’s NOT something I did in my design. I use bronze storage to hold catalog items. In hindsight I wish I’d created a dedicated “Catalog” datastore per Organization…