officeblocks.jpeg

In previous posts I wrote about how I was reconfiguring my physical resources down at the physical and vSphere layer to more “align” them to the concept of the Provider Virtual DataCenter. In my mind I’ve had an analogy for the Provider vDC for a while – and it’s kind of related to the larger concept of the cloud, I guess. The idea comes from the word “tenant” and its use in the cloud terminology of “multi-tenancy”. So in my mind I kind of imagine the person who defines the Provider vDC as being like the “super” in a commercial office block or even an apartment building. My job is to organize the physical resources into different packages for my tenants. I make sure the electricity, heating, plumbing, Internet and so on all work. I guess it’s the way that someone like Regus offers a managed office facility for customers who don’t have the resources, or don’t want the up-front CAPEX cost, associated with building their own corporate office. In my imagined business park I’ve got a range of buildings that were built at different times with different qualities of workmanship. So the “Gold” office block is a significantly plusher and more salubrious affair than the old “Bronze” office built in the late 80s.

One of the things I really like about vCloud Director (and I was saying this before I joined VMware, I might add) is how it creates a proper layer of abstraction above the virtualization layer, in a way that a lot of cloud automation technologies do not. So in the vCloud Director world the tenants really have NO IDEA of the underlying plumbing and configuration down in vSphere. Why is that important? After all, the much-lauded “abstraction” isn’t necessarily an inherent good. For me there are a couple of benefits. The first is that, as the building supervisor, I can choose to reconfigure and add/remove capacity in my business park without my tenants having a clue. They don’t have to worry their pretty little heads about what’s going on down in the basement, whilst they are in their steel-and-glass box, admiring the view of the landscaped lake. Secondly, I see this “abstraction” as part of a much wider “virtualization”. For years our OSes were “superglued” to the physical server, our applications were “superglued” in turn to the OSes, and finally the data was more or less “superglued” to the applications that created it – inflexible and unmovable. Physical infrastructure was precarious – moving the pieces was like a dangerous game of Jenga. When I compare vCD to other cloud automation layers, those vendors just point to a cluster – essentially they feel just like fancy web portals with a bit of “self-service” provisioning. Anyway, not to over-labour the point with my analogy – but I hope I don’t end up looking like the “super” in Friends. 😉

Adding a Provider vDC in vCloud Director 5.1 comes after you have added a vCenter to the cell. It’s a natural part of the workflow to want to add a virtualized representation of the vSphere resources immediately afterwards. Clicking “Create Provider vDC” brings up the wizard.
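Incidentally, if you want to follow along from PowerCLI as well as the web UI, a minimal sketch like the one below should get you attached to the cell first – the snap-in name is the 5.1-era vCloud one as I understand it, and “vcd.corp.local” is just a stand-in for my cell’s address:

    # Load the vCloud PowerCLI snap-in and connect to the cell
    # ("vcd.corp.local" is a placeholder - swap in your own cell's FQDN;
    # you'll be prompted for system administrator credentials)
    Add-PSSnapin VMware.VimAutomation.Cloud
    Connect-CIServer -Server vcd.corp.local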

Screen Shot 2012-10-04 at 19.39.47.png

Screen Shot 2012-10-04 at 19.42.08.png

If you followed my previous posts on compute and storage, you’ll know I have a cluster whose ESX hosts have the most memory, with fibre-channel access to SAS-based storage that has been enabled for replication every 5 minutes. My other “Silver” cluster has hosts with less memory, and without the Qlogic cards to reach the FC-SAN. Filling in the form above gave me pause for thought. I’ve no problem calling this “Gold”, but is there any need to make the distinction between a physical datacenter and a virtual one? For us as the super it matters, but do my tenants care – are they even aware? Why not just call it the “Gold Datacenter” and have done with it?

Anyway, it’s a small matter – but it kind of makes me think of the days when I used to hand off VMs to application owners without even telling them it was a VM. The idea was to not even raise “virtualization” as an issue. “Here’s your server,” I would say to them. Why not take the same attitude to the vDC concept? It’s useful to us, but should they even care? I’m thinking the same on the subject of the Organization vDC (which is what a tenant mainly interacts with). Why not just call this an organization and say they have a “Production Datacenter” and a “Test/Dev Datacenter”? Anyway, I plan to keep with the official moniker for now. I guess one “anomaly” that struck me was the terminology around the whole “datacenter” thing. In vCenter we create a “Datacenter” object to hold “Hosts & Clusters”, but then at the same time in vCloud Director we create Provider vDCs that point to the cluster object, and Organization vDCs for the logical world. Occasionally, I’ve felt like calling the “datacenter” in vCenter something else. Dunno what – but something like “New York Physical Resources” or something like that. It’s really a pedantic issue on my part; being a former English graduate, I like my terms to be nice and neat.

The X next to “Enabled” in the “Add Provider vDC” wizard is pretty self-explanatory – it temporarily pauses the provisioning and powering on of new VMs. I’m not sure what the use case for this option is – but I imagine if you were making some major changes to the vCD or vSphere environment it might function a little like “maintenance mode” on an ESX host, temporarily excluding the Provider vDC from serving up resources whilst work is being done. Perhaps if there was a problem with the vSphere environment that backed the Provider vDC you’d want to take it “offline” whilst that was being resolved, and stop tenants demanding more resources when maybe they are not available.

The “highest supported version” controls the “Hardware Version” that can be used in the Provider vDC. It’s essentially a compatibility feature. I understand that over time we’re going to be moving away from “hardware levels/versions” towards a more general “compatibility” concept – something we’re seeing already in the vSphere 5.1 web client. But for now this serves the function. It’s a policy setting that allows for mixed versions of vSphere to back the Provider vDC. I recently completed my upgrade of all my hosts to ESX 5.1. That was a relatively painless affair with only 8 hosts. But out there in the thing they call the “real world” you’re more likely to be doing “rolling upgrades”, because if you have an estate worthy of being addressed by vCD, then it’s unlikely to be an environment that you can upgrade in a day or two like mine! The other thing I completed was an upgrade of VMware Tools and Virtual Hardware on all my VMs AND my templates…
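As a sanity check before settling on the “highest supported hardware version”, a quick bit of PowerCLI (against vCenter, not vCD) will show you what’s actually out there. This is just a sketch using my “Gold” cluster name – yours will differ:

    # List the ESX hosts in the Gold cluster with their version and build numbers
    Get-Cluster "Gold" | Get-VMHost | Select-Object Name, Version, Build

    # Group the VMs in the same cluster by their virtual hardware version (v7, v8, v9...)
    Get-Cluster "Gold" | Get-VM | Group-Object Version | Select-Object Name, Count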

The next step is to add a DRS (and ideally HA) enabled cluster to the Provider vDC.

Screen Shot 2012-10-04 at 19.55.58.png

Two things here might strike you as “funny” (in a strange, non-ha-ha way). Firstly, there’s the use of the term “Resource Pool”. You can see here a couple of “resource pools” – Gold and Silver are two DRS/HA clusters, each with an “infrastructure” resource pool residing off it. The screen grab below will give you the visuals:

Screen Shot 2012-10-04 at 19.59.58.png

So why classify a DRS/HA cluster as a “resource pool”? Well, it does make sense if you know your DRS. A DRS-enabled cluster is often referred to as the “root resource pool”. Despite the fact that a DRS cluster contains physical hosts, it’s as much a “logical” construct as a resource pool is. Back to basics. Despite all the talk of creating a “software mainframe”, the reality is a VM can only execute on one ESX host at any one time (although through the power of vMotion it can be moved from one host to another to another). So in many ways the DRS cluster’s resources are merely a logical representation of the physical resources that reside in the hosts. The analogy I used to use in my courses was an airline. The DRS cluster is like an airline proudly announcing it has the capacity to fly 10,000 passengers. That’s true, but in reality that’s achieved by flying 500 passengers on each plane. So to me a DRS cluster – the “parent” or “root” resource pool – is an aggregate of all the CPU/memory resources available.

I guess some people would say that the “Select Resource Pool” chooser dialog box could be made prettier by using icons to differentiate a cluster from a resource pool, rather than just the labels. Also, I’m wondering how sensible it was to create “infrastructure” resource pools for my ancillary VMs. I’d forgotten they would appear alongside my clusters… I’m wondering if I could have made life simpler, and the UI neater, if I’d just run these VMs directly on the cluster. However, I don’t have the physical resources to run a dedicated management cluster. If you read the official courseware, a clear distinction is made between the “Management Cluster” and the “Resource Cluster(s)” that back a Provider vDC. I think I’m doing the right thing by ring-fencing these VMs with resource pools that guarantee a reservation of memory/CPU and set a share level higher than any other VM in the environment. I also felt that distributing these ancillary VMs across two different clusters would stop any one cluster being “saturated” with back-office VMs, leaving no resources left for vApps!
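For the curious, you can see the “root resource pool” idea for yourself from PowerCLI. The sketch below lists the pools hanging off my Gold cluster (the hidden root pool shows up as “Resources”), and shows the sort of reservation/shares ring-fencing I applied – the “Infrastructure” pool name and the figures are just my lab’s:

    # The cluster's hidden root pool appears as "Resources"; my "Infrastructure" pool is a child of it
    Get-Cluster "Gold" | Get-ResourcePool |
        Select-Object Name, Parent, CpuReservationMhz, MemReservationMB, CpuSharesLevel, MemSharesLevel

    # Ring-fence the ancillary VMs with a reservation and high shares (figures are made up for the example)
    Get-ResourcePool "Infrastructure" -Location (Get-Cluster "Gold") |
        Set-ResourcePool -CpuReservationMhz 4000 -MemReservationMB 8192 -CpuSharesLevel High -MemSharesLevel High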

The other thing that’s a bit “funny” about the dialog is that there are no “external networks” to select. That’s a bit of a chicken-and-egg scenario. You can’t add what hasn’t been created, and creating “external networks” comes later in the workflow if you use the “Quick Start” approach. Once we’ve flown past the Quick Start once and established the external networks, there will be something to choose from here.

The next step in the creation of the Provider vDC is controlling what storage tiers are available. Starting with the 5.1 release, vCloud Director is now “Storage Profile” aware. You might recall I reclassified my storage and applied storage profiles to each of my datastores and datastore clusters (vCD 5.1 is now datastore cluster aware too!).

Screen Shot 2012-10-05 at 08.35.00.png

As you can see in the screen grab, I’ve added all the different tiers of storage available to the Gold Virtual Datacenter. The vast majority of my unclassified storage (marked with “* (Any)”) is local storage, and of course I’ve not included that in the pool. Sadly, given the limitations of my storage, this view isn’t particularly realistic. Ideally, when I select the Platinum Storage Profile it should show multiple datastores that meet that classification (tier1_platinumDR01, tier1_platinumDR02, tier1_platinumDR03 and so on). Right now each of my storage profiles contains just one datastore or datastore cluster.
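If you want to double-check what a given tier actually contains from the vSphere side, a quick sketch like this lists the datastore clusters and their member datastores – the “tier1_platinum*” pattern follows my lab’s naming, so adjust for yours:

    # Summarise the datastore clusters and their capacity
    Get-DatastoreCluster | Select-Object Name, CapacityGB, FreeSpaceGB

    # Drill into the members of the Platinum tier's datastore cluster
    Get-DatastoreCluster "tier1_platinum*" | Get-Datastore | Select-Object Name, CapacityGB, FreeSpaceGB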

As this is the first time vCD has been set up in my lab, the vCloud Director Agent needs to be installed on each of the ESX servers in the Gold cluster. To do this, vCD needs temporary “root” access to the host. I’m lucky in the sense that all of my hosts have the same root password. I’m wondering what folks do about this when company policy decrees that the password should be different on each and every node. I don’t think I would fancy having to do this on each ESX host by hand if I had lots of them. I’m assuming/hoping the new PowerCLI for vCloud could handle this, or that there is some sort of scripted/command-line method that has yet to cross my path.
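Until I find out whether vCD or its PowerCLI can cope with per-host root passwords natively, my rough thinking is something like the sketch below – a hypothetical hosts.csv with “Name” and “RootPassword” columns, looped through to confirm each stored password actually works before you hit “Prepare Hosts”:

    # hosts.csv is hypothetical: one row per ESX host, columns "Name" and "RootPassword"
    $esxHosts = Import-Csv .\hosts.csv
    foreach ($h in $esxHosts) {
        # Connect directly to the host with its own root credentials to prove they are valid
        Connect-VIServer -Server $h.Name -User root -Password $h.RootPassword | Out-Null
        Get-VMHost -Name $h.Name | Select-Object Name, ConnectionState
        Disconnect-VIServer -Server $h.Name -Confirm:$false
    }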

Screen Shot 2012-10-05 at 08.46.10.png

Before clicking Finish and letting the system prepare the environment, I turned on my screen recorder. This is something I’ve been doing for some time when writing books. Often a single action (the click of Finish) kicks off a number of different steps, and often these happen too quickly for me to capture screen grabs in the conventional way. However, using recording software means you get a rewind button and a pause button. If you watch the video you will see what my first experience was like, and what I learned about vCD by doing it. Anyway, there are two videos: one hosted on YouTube, and the other elsewhere, because YouTube does have a tendency to degrade video quality.

Screen Shot 2012-10-29 at 12.47.21.png

…So you can see that enabling the vCD Agent on every host in a cluster – when there are VMs present – isn’t the smartest move in the book. vCD attempts to put ALL hosts into maintenance mode to install the agent. That doesn’t really pan out in my lab environment, since it’s not possible to have running VMs on a DRS cluster where ALL the hosts are in maintenance mode. Ideally, I would like to have some more hosts and set up a management cluster to run all my “infrastructure” VMs, with the added benefit that the “infrastructure” resource pool wouldn’t appear in the vCD UI. One thing I discovered about the “Prepare Hosts” page is that it doesn’t allow the hosts to be added individually. The radio button “a different credential for each host” merely addresses the fact that the root password may be different from one ESX host to another. So in my case the only option was to allow the creation of the Provider vDC to go through, fail, and then do the host installation manually. So you know, I did eventually get every host set up with the agent by staging them manually (as I show in the video – I did two at a time to install the agent whilst keeping my infrastructure VMs up and running, and also keeping HA in operation). I think in the real world you might not see this happen – unless you were taking an existing vSphere environment that had clusters already populated with VMs, and enabling it for vCD for the first time.
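The vSphere half of that “two at a time” staging can be scripted easily enough – this is only a sketch of the pattern I followed (the agent install itself still happened from the vCD “Prepare Hosts” dialog, and DRS needs to be fully automated for the running VMs to be moved off for you):

    # Pick a pair of hosts from the Gold cluster and put them into maintenance mode
    # (with fully automated DRS, the running VMs are vMotioned onto the remaining hosts)
    $pair = Get-Cluster "Gold" | Get-VMHost | Select-Object -First 2
    $pair | Set-VMHost -State Maintenance | Out-Null

    # ...prepare these two hosts from the vCD UI, then bring them back and move on to the next pair
    $pair | Set-VMHost -State Connected | Out-Null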

Screen Shot 2012-10-29 at 12.47.40.png

I think this might be an important thing to consider if you’re doing a demo of vCD to a customer and you want the experience to look as smooth as possible. This might create an initially bad first impression, but if you think about it the problem I had makes logical sense. With that said, the new agent that gets installed with VXLAN knows how to handle the issue – and it would be nice if vCD did as well. Just sayin’.

There was one host that, after the agent had been enabled, gave a “System Alert”. This was caused by poor administration on my part. It turned out that one of my ESX hosts wasn’t connected to the Virtual DataCenter Distributed vSwitch.

Screen Shot 2012-10-05 at 11.04.01.png

Upon further investigation, it appeared that although I had added the host to the DvSwitch, I hadn’t assigned any vmnics to it.

Screen Shot 2012-10-05 at 11.08.57.png

That was pretty easy to correct – all I needed to do was hit the button to manage the physical adapters connected to the switch and add them in…

Screen Shot 2012-10-05 at 11.10.51.png
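For the record, the same fix can be sketched out in PowerCLI. The switch, host and vmnic names below are just my lab’s (and the idea that vmnic2/vmnic3 are the spare adapters is an assumption), so treat it as a pattern rather than a recipe:

    # Find the distributed switch and the host that raised the alert
    $vds = Get-VDSwitch -Name "Virtual DataCenter"
    $esx = Get-VMHost -Name "esx03.corp.local"

    # Grab the host's unused physical adapters and attach them to the switch as uplinks
    $nics = Get-VMHostNetworkAdapter -VMHost $esx -Physical |
        Where-Object { $_.Name -eq "vmnic2" -or $_.Name -eq "vmnic3" }
    Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $nics -Confirm:$false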

Adding additional Provider vDCs can be done from the “Create another Provider vDC” link on the Quick Start homepage, or by navigating to the “Manage and Monitor” tab, selecting Cloud Resources > Provider vDCs and clicking the + button. I went on to add a “Silver DataCenter”, albeit offering only Gold, Silver and Bronze classes of storage. Flicking between the “Manage” and “Monitor” views allows me to check the configuration and capacity of each Provider vDC.
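From PowerCLI (using the Connect-CIServer session from earlier) you can eyeball the same thing – just a quick sketch, and the properties exposed beyond the name may vary a little between builds:

    # List the Provider vDCs the cell now knows about
    Get-ProviderVdc | Select-Object Name

    # Dump everything PowerCLI knows about the new Silver Provider vDC
    Get-ProviderVdc -Name "Silver*" | Format-List *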

Screen Shot 2012-10-05 at 11.34.13.png

Screen Shot 2012-10-05 at 11.34.27.png

FINALLY, remember that in vCD 5.1, adding a Provider vDC into the system for the first time automagically creates VXLAN network pools (even if the clusters have not been prepared/enabled for VXLAN under Home > Network > DataCenter > Network Virtualization tab). For this reason the “Quick Start” automatically lights up the “Create another network pool” step.
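If you want to confirm the auto-created pool without clicking through the UI, the cloud cmdlets should (as I understand it) let you list the network pools from the same PowerCLI session – treat this as a hedged sketch:

    # List the network pools vCD knows about; the VXLAN pool created alongside the Provider vDC should appear here
    Get-NetworkPool | Select-Object Name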

Screen Shot 2012-10-06 at 06.34.21.png