[Diagram: the three layers, vCloud Director on top of vSphere on top of the physical hardware]

After completing the first portion of the “Essentials” vCloud Director training, one thing I’ve been keen to do is re-order my lab environment ready for learning more about the technology inside the vCloud Suite. One critical aspect of that is the hosts, and how to re-organize them in such a way that I can best learn and illustrate the features and functions. Previously, the structure of my lab kind of evolved over time, essentially being shaped by the SRM book and the EUC book. I wound up with two clusters in two different vCenters. One of the things I wanted to do was return my lab to a single-site configuration, with as many resources as possible residing under a single vCenter – so I could represent different tiers of compute.
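In PowerCLI terms the end state is nothing more exotic than one datacenter with two clusters hanging off the same vCenter. Here’s a rough sketch of how that could be scripted – the vCenter, datacenter and host names below are just placeholders for my own, not anything prescribed:

```
# Connect to the single vCenter that now owns everything
Connect-VIServer -Server vcnyc.corp.com

# One datacenter, two clusters representing the tiers of compute
$dc     = New-Datacenter -Name "NewYork" -Location (Get-Folder -NoRecursion)
$gold   = New-Cluster -Name "Gold"   -Location $dc -DrsEnabled -HAEnabled
$silver = New-Cluster -Name "Silver" -Location $dc -DrsEnabled -HAEnabled

# Drop the hosts into their respective tiers (placeholder names/credentials)
"esx01.corp.com","esx02.corp.com" | ForEach-Object {
    Add-VMHost -Name $_ -Location $gold -User root -Password VMware1! -Force
}
"esx03.corp.com","esx04.corp.com" | ForEach-Object {
    Add-VMHost -Name $_ -Location $silver -User root -Password VMware1! -Force
}
```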

In this case the decision making, I felt, was quite simple. I really have two classes of servers in my lab. The first generation are extremely old HP DL385 G1s (yes, I know these haven’t been on the HCL for some time, but they still work!). Although these are quite old servers – they have more memory than the newer generation of boxes I bought about 12-18 months ago – they are also connected to a NetApp 2040 via fibre-channel running at 2Gb. I decided to class these as my “gold” servers. It just goes to show that everything is relative to your resources. My “gold” servers are something most of you would have put in a skip years ago! If you want an idea of just how old these boxes are – the ESXi 5.0/5.1 installer even warns me that my antiquated AMD CPUs don’t support AMD-V, and that the QLogic cards are no longer supported! Yikes. Fortunately for me they still work – but for how long is anyone’s guess! 🙂


My second generation are more modern Lenovo TS200s. These are tower boxes (which I’ve actually laid on their side in the rack at the colocation). Originally these were bought for a “home lab”, but I relocated them to the colocation when I was able to free up some rack space and power. They are single pCPU with 4 cores and, though I’ve never bothered to check, they probably have the same or more CPU horsepower than the older HP DL385s. These Lenovos only have 12GB within them, and the memory banks are full. They are not FC connected and just have 1Gb ports. These I decided to class as “Silver” servers – despite the fact they have the right chipsets to support Intel VT and vLockstep, which are required for VMware Fault Tolerance.
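If you ever want to confirm what a host actually reports for Fault Tolerance support before deciding how to tier it, something like this quick sketch against the standard PowerCLI host capability properties will do it:

```
# Show each host's reported FT capability alongside its cores and memory
Get-VMHost | Select-Object Name,
    @{N="FTSupported"; E={$_.ExtensionData.Capability.FtSupported}},
    @{N="Cores";       E={$_.NumCpu}},
    @{N="MemoryGB";    E={[math]::Round($_.MemoryTotalGB)}}
```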

I’ve also taken down the “infrastructure” footprint to the bare minimum of VMs I need to stand this up. So out has gone the old Windows vCenter with its companion Microsoft SQL VM, and in comes the vCenter Server Appliance 5.1 as its replacement. I’ve got a little Windows 2003 Terminal Server (TS1) which services the remote access requirements, and two domain controllers (dc01nyc and dc02nyc). Sadly, I’ve had to keep the little software router on the network. That’s a throwback to when I had two “sites” on two different IP subnets, linked together via the software router. The idea was that I could turn off the router to simulate a network outage between the two sites easily. So some of my storage was on networkA and the other on networkB – and it wouldn’t replicate data without the router. I did consider re-IP-ing all the storage from the “old site” but didn’t feel too confident doing that. The other thing I was thinking was that I didn’t want to rip out too much of the previous structure in case I needed it again at a later stage (SRM 6.0?).

Anyway, I’ve put each of my “infrastructure” VMs in an “infrastructure” resource pool to ring-fence them for performance – so should the cluster(s) get overloaded, they will win out when it comes to CPU/memory contention. I did consider for a little while making a “management” cluster, but in my case, with such a lack of server resources in the lab, I needed to make as much of that hardware available to tenants in the cloud as possible.
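For what it’s worth, that ring-fencing amounts to nothing more than a resource pool with its CPU and memory shares set to High, and the infrastructure VMs moved into it. A quick PowerCLI sketch – the vCSA name “vcva01” is my placeholder, while TS1 and the domain controllers are as described above:

```
# Create a resource pool with High CPU/memory shares to ring-fence the infra VMs
$cluster = Get-Cluster -Name "Gold"
$pool = New-ResourcePool -Name "Infrastructure" -Location $cluster `
        -CpuSharesLevel High -MemSharesLevel High

# Move the core VMs in so they win out under CPU/memory contention
Get-VM -Name vcva01, ts1, dc01nyc, dc02nyc | Move-VM -Destination $pool
```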


Of course the reason for the change is to offer up the Gold/Silver clusters to the Provider vDC object within vCloud Director when I’m ready. Ideally I would have liked more clusters to show how a vCloud Director Provider vDC can now contain many clusters, but you have to live within your means sometimes. My plan is that someone needing “Test/Dev” resources would choose the Silver Provider vDC with some cost-effective iSCSI/SATA storage I have, whereas someone needing production/tier-1 resources would use the Gold Cluster with some replicated FC/SAS storage available to those hosts.
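Once those Provider vDCs are built, the vCloud Director cmdlets in PowerCLI give a quick way to sanity-check that both tiers are visible – the cell name vcd.corp.com is just a placeholder for my own, and I’m assuming the Gold/Silver names carry through from the clusters:

```
# Log in to the vCD cell as a system administrator and list the Provider vDCs
Connect-CIServer -Server vcd.corp.com -Org "System"
Get-ProviderVdc | Select-Object Name
```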