If you've been following my journal for a while, you'll know I've been getting into creating the Organization vDCs. So far I've created 4 Organizations in vCloud Director (1 for CorpHQ and 3 for the other businesses that make up Corp, Inc Holdings, my made-up company) – and precisely 1 Organization vDC: a "Test & Dev" vDC for the CorpHQ Organization. There are really three things I want to do around this part of the configuration:

  • Learn more about the allocation and reservation resource models
  • Create Organization Networks and their vCNS Edge Gateways using the standard UI (vCD 5.1 chains the whole process in a long wizard, and I’d like to know how to do the setup a bit more manually…)
  • Given I have 4 Organizations, each of which is going to have at least 2 Virtual Datacenters (Prod & Test/Dev), I want to see how easy it is to automate this stuff. In particular I'd like to be able to set up an Organization, Organization vDC and Organization networks using the new PowerCLI for vCloud Director (there's a rough sketch of what I mean just below this list). That's part of a larger effort of mine to automate the complete standing up and tearing down of vSphere & vCD – mainly so I can quickly adopt new versions during alphas/betas/RCs and so on. Like anything in life, once I've mastered a manual skill I pretty quickly move to automate the process – although I would say I still have a TRUCKLOAD to learn about vCD!
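
Since the bullet above mentions automation, here's the rough shape of what I'm after. Treat it as a sketch only: I'm assuming the New-Org, Get-ProviderVdc and New-OrgVdc cmdlets from the vCloud Director snap-in for PowerCLI, the server name is made up, and the New-OrgVdc parameters that drive the allocation-model settings vary between releases – so check Get-Help before trying anything like this.

```powershell
# Rough sketch only - cmdlet parameters (especially on New-OrgVdc) differ
# between PowerCLI releases, so check Get-Help New-Org/New-OrgVdc -Full first.
Connect-CIServer -Server 'vcd.corp.com'        # made-up vCD cell name

# One Organization per business unit of Corp, Inc Holdings...
$org = New-Org -Name 'CorpHQ' -FullName 'Corp HQ'

# ...and at least two Organization vDCs per Organization, carved out of a
# Provider vDC. The allocation/reservation settings (guarantees, vCPU speed,
# VM quota and so on) hang off additional New-OrgVdc parameters.
$pvdc = Get-ProviderVdc -Name 'Gold Provider'
New-OrgVdc -Name 'CorpHQ - Production Virtual Datacenter' -Org $org -ProviderVdc $pvdc
```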

So let's start with the first objective – learning more about the other resource models for vCD. My gut instinct is that both the reservation and allocation models are appropriate approaches to running production-level vApps in vCloud Director. In the world of production there are certain expectations placed around QoS/SLAs, which often dictate some sort of guarantee of performance rather than hoping that resources will be there when they are needed. You're more likely to ring-fence resources for those environments. For me this isn't really very new – we have always had a system of limits and reservations, either on the VM or (since DRS) on the "resource pool", rather than depending on the vSphere hosts' (very good) ability to just deliver the resources needed in an on-demand, as-you-need-it way. That's something that more rudimentary virtualization platforms were not capable of – with them you needed the resource (especially memory) available upfront even though you might not need it all the time. Of course the other vendors have regarded this approach as "risky" – that's quite common in our industry. When you don't have a feature, create a "Fog of FUD" and cast doubt on the wisdom of a technology – mainly because you don't have it and can't compete. Anyway, I digress…

So most of this was pretty straightforward – for my "Production" Virtual Datacenter I selected the "Gold Provider" (that's the HA/DRS cluster with the hosts that have the most memory and access to FC storage using 2Gb FC to SAS drives, remember). You can already see that my "Test/Dev" Virtual Datacenter is already consuming resources from the one or two vApps I've got up and running there.

Screen Shot 2012-12-06 at 11.20.24.png

And in this case I went for the "Allocation Model", which is the default option. Why is it the default? Well, because A comes before P in the alphabet, just as P comes before R. They're a funny thing, "defaults". Often students I spoke to when I was an instructor assumed that the default option is the default because it's the best practice, recommended and/or required. For example, when creating a VM in vCenter using the "custom" option you will find that the E1000 NIC is selected by default – rather than VMXNET2 or VMXNET3. That's because E comes before V in the alphabet. This must really infuriate people who expect to click NEXT wearing a blindfold… 🙂

Screen Shot 2012-12-06 at 11.21.44.png

The way I see it, the essential difference between the "allocation" and "reservation" models is the DEGREE to which you want to reserve stuff – and also WHO controls that reservation. With the allocation model the SysAdmin sets the reservation as a percentage, which lets them control how much is reserved and how much is over-committed. With the reservation model the resources are automatically reserved in their entirety, and it's the Organization – and the folks who manage the vApps in that Organization – who decide how much they over-commit. So someone sat in the "Reservation Model" could still use our "memory delivered on demand to the VM" approach, and allocate more memory to their VMs than there is available. If something goes pear-shaped it's the Organization's problem, not the SysAdmin's. Nice.
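
To put some numbers on that "degree of reservation" point, here's a back-of-an-envelope comparison in PowerShell – the 15GB allocation is purely illustrative:

```powershell
# Illustrative only: how much of a 15GB memory allocation actually gets
# reserved under each model.
$allocationGB = 15

# Allocation model: the SysAdmin picks the guarantee as a percentage.
$guaranteePct            = 0.5                     # the 50% default
$reservedAllocationModel = $allocationGB * $guaranteePct

# Reservation model: everything is reserved up front; any over-commit happens
# inside the Organization, not at the Provider vDC.
$reservedReservationModel = $allocationGB

"Allocation model reserves  : {0}GB of {1}GB" -f $reservedAllocationModel, $allocationGB
"Reservation model reserves : {0}GB of {1}GB" -f $reservedReservationModel, $allocationGB
```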

For me one of the big attractions of the Reservation Model over the Allocation Model is how MUCH simpler it looks to configure. There are a lot fewer choices and options to pick from. The downside, as I see it, is that because it allocates the resources up front (even if they are not needed) there's a danger I could allocate a lot more resources to OrgA than I should – and that those resources become "orphaned" and locked to that Org. Now, it's easy from a technical standpoint to reduce those resources, but politically that could be difficult.

Anyway, less of this flim-flam – let's take a look at those settings for the "allocation" model:

Screen Shot 2012-12-06 at 11.33.02.png

So here the allocation model is recommending a limit of 6.88GHz of CPU for the Organization vDC, and guaranteeing 50% of those resources (3.44GHz). vCD has rather nicely worked out that, out of the 34.42GHz worth of CPU I have available in my "Gold" cluster, that guarantee amounts to about 10%. I decided to look at the performance charts in vCenter to see how these numbers squared there. I think the all-new vSphere Web Client does a nice job of showing what the utilization of a cluster is – and I could see that, yes, once all my "infrastructure VMs" were up and running there was about 34.42GHz of resources available.
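
If you like to see the working, the sums behind those numbers are nothing more than this:

```powershell
# Sanity-checking the wizard's figures (6.88GHz allocation, 50% guarantee,
# 34.42GHz of CPU available in the "Gold" cluster).
$clusterGHz    = 34.42
$allocationGHz = 6.88
$guaranteePct  = 0.5

$guaranteedGHz = $allocationGHz * $guaranteePct                      # 3.44GHz
$pctOfCluster  = [math]::Round(($guaranteedGHz / $clusterGHz) * 100, 1)

"Guaranteed {0}GHz, which is about {1}% of the cluster" -f $guaranteedGHz, $pctOfCluster
```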

Screen Shot 2012-12-06 at 11.53.02.png

[Note: As is pretty typical in most home-lab-style environments the CPU is hardly being touched; it's memory, memory, memory that's being consumed. I actually powered down some appliances I've got set up which I'm currently not using, to claw back some memory. I have 5 servers with 16GB each in the Gold Cluster (80GB), but once the vSphere software is installed and I've got a couple of "infrastructure" VMs (domain controller, remote access, vCenter, vShield) loaded, that consumes about 20GB of memory, leaving about 60GB to allocate to the Organization vDCs. One thing I'm toying with is moving all these VMs to the "Silver" cluster and treating it as if it were a "management" cluster. So although my screen grabs will show "Silver" and "Gold", whenever anything gets deployed I will use the Gold cluster instead…]

As with the PAYG model, as you start to dial around with these settings the projected number of VMs you can expect to fire up adjusts accordingly. That's based on the assumption that a single VM with 1 vCPU will use 1GHz. If we dial down the 1GHz default allocation, the number of VMs goes up.

So if the vCPU speed is set at 1GHz, this is what my current settings would be expected to deliver:

Screen Shot 2012-12-06 at 12.20.31.png

If the vCPU speed is set to 0.5GHz, this is what that would mean in terms of the number of VMs I could expect to deliver:

Screen Shot 2012-12-06 at 12.22.57.png

Shock horror: halving the vCPU allocation results in a doubling of projected VMs. I must admit I've occasionally wished I had the same control over the memory allocations as well – so I could indicate that my default allocation of memory was 1GB, for example, or say that for each 1GHz vCPU, 1GB of memory is allocated. That's something you see a lot in application guidelines – you know, "2GB RAM and 2GHz of CPU", says the application vendor. I guess the trouble with that is how piss-poor most application vendors are at giving reasonable guidelines on the right amount of resource to allocate for their application – there's too much "it depends" involved in such guidelines: a large number of small users compared to a small number of large users, for example.
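
The arithmetic behind that projection is nothing cleverer than dividing the CPU allocation by the per-vCPU speed. The exact figures in my screenshots will differ, but this little snippet shows the halving/doubling effect:

```powershell
# Projected 1-vCPU VM count for a given CPU allocation, assuming (as the
# wizard's estimate does) that CPU is the only constraint.
$allocationGHz = 6.88          # illustrative CPU allocation for the Org vDC

foreach ($vcpuGHz in 1.0, 0.5) {
    $projectedVMs = [math]::Floor($allocationGHz / $vcpuGHz)
    "At {0}GHz per vCPU: roughly {1} x 1-vCPU VMs" -f $vcpuGHz, $projectedVMs
}
```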

So how to go about divvying up the resource between my respective tenants? I could take 34GHz and carve off 4GHz as "spare capacity" – to make my calculations easier. I expect to have 4 Production-based Virtual Datacenters, one for each of my four Organizations. So I could divide that 30GHz evenly between them and say they each only get a maximum of 7.5GHz. If they need more I could allocate more out of my "spare capacity", or I could buy more horsepower and add more hosts to have more GHz to allocate. Of course, that assumes that ALL of my tenants will consume their 7.5GHz uniformly and at the same rate.

The same goes for memory. I have 60GB to play with, which would divide equally as 15GB per tenant, per Organization…
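
Here's the whole carve-up in one place, for those who like their sums written down:

```powershell
# Carving up the "Gold" cluster between four Production Org vDCs.
$tenants = 4

# CPU: hold ~4GHz back as spare capacity and split the rest evenly.
$clusterGHz      = 34
$spareGHz        = 4
$cpuPerTenantGHz = ($clusterGHz - $spareGHz) / $tenants              # 7.5GHz each

# Memory: 5 hosts x 16GB, less ~20GB of "infrastructure" VMs.
$physicalGB     = 5 * 16
$infraGB        = 20
$memPerTenantGB = ($physicalGB - $infraGB) / $tenants                # 15GB each

"Each tenant gets {0}GHz of CPU and {1}GB of memory" -f $cpuPerTenantGHz, $memPerTenantGB
```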

What's interesting to me is that it's not so much these allocations that matter, but the guarantees that back them. The default for both CPU and memory is that the reservation is half the allocation. So if I allocate 15GB, the system will reserve 7.5GB. That's quite important. If a tenant created a VM with a 10GB allocation and reserved 8GB for it, that VM wouldn't power on. There wouldn't be enough memory "reserved" to the Organization vDC (7.5GB) to meet the reservation of the VM (8GB). In other words, you can't fly 8 passengers on a private jet that only has 7.5 seats. Quite what half a seat would look like is anyone's guess – but have you been on a Ryanair flight recently?
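
That private-jet rule is really just admission-control arithmetic. As a crude sketch (vCD and vSphere do the real checking, and reservations already held by other VMs would count too):

```powershell
# Crude admission-control check for a single VM, ignoring any reservations
# already held by other VMs in the Org vDC.
$orgVdcReservedGB = 7.5      # 50% of the 15GB allocation
$vmReservationGB  = 8        # reservation on the tenant's 10GB VM

if ($vmReservationGB -gt $orgVdcReservedGB) {
    "Power-on fails: the VM wants {0}GB guaranteed, but only {1}GB is reserved to the Org vDC" -f $vmReservationGB, $orgVdcReservedGB
} else {
    "Power-on succeeds"
}
```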

As you can see, the more you increase the percentage, the more you creep steadily towards what is in fact a "reservation model". So if I create an allocation of 15GB and make the reservation 100%, I'm doing no memory over-commitment at the Virtual Datacenter level at all, as all 15GB would be reserved out of the 60GB I have free… For me that's what makes this allocation model different – I'm presenting the tenant with 15GB of potential memory, but I'm actually only guaranteeing 7.5GB in the reservation.

[Beginning of old-gimmer part…]

Where does this 50% guarantee default come from for memory & CPU? Does it have any precedent in other technologies from VMware? Well, it does, but you would have to be extremely old and wrinkled (like me) to remember it. Back in the ESX 2.x days the default behavior was that any new VM created had an automatic memory reservation. So if you gave a VM 1GB of memory, 512MB would be "reserved" in memory, and the other 512MB would be "reserved" in swap. Back then the size of the swap file defaulted to the maximum amount of memory assigned to the physical box – so a 32GB host would have a 32GB swap file. All that changed in ESX 3.x, when this default reservation for every VM was reduced to zero – and the swapfile became a per-VM parameter, not a per-host one. I never really understood why the change was made, but I'm guessing some novice users who didn't bother to RTFM found they couldn't power on VMs after a certain point – because of admission control. They would be confused about why they couldn't power on a VM when there was plenty of memory free – the reason was that there wasn't enough memory or swap space left to meet the reservation.

[End of old-gimmer part…]

So much for the dim-and-distant past. What should the reservation be? 25%, 50%, 75%… After all, if I have already done my allocation before I start (15GB per vDC) and then I reserve half of it (7.5GB), I'm not really doing much in the way of over-commitment – as my allocations (15GB x 4) don't exceed the total maximum memory available in the cluster. Perhaps a model of allocating 30GB and reserving 50% of that (15GB) is more of an "over-commitment" model. In this case I have allocated more memory (30GB x 4 = 120GB) than I have physically available – twice as much as the 60GB I actually have. If I then state I'm only going to reserve 50%, the reservations would amount to 60GB (4 x 15GB). The assumption is that although my tenants will get 30GB and will PAY FOR 30GB, I don't actually have that much memory. For me this is the very essence of over-commitment. It means I guarantee a certain quality of experience (15GB of memory), and just monitor the actual usage on a day-by-day basis. Using admission control I can prevent a tenant over-saturating the resource in an unexpected way. Plus I can charge them for 30GB of RAM each even though I don't actually have 120GB, just 60GB. That's the same as Dropbox offering users 200GB of space. The assumption is that whilst users might order that up as a package, the reality is not every single subscriber is going to claim their allocation to the maximum simultaneously – I doubt very much that Dropbox has the space to guarantee ALL its users' allocations if everyone today started to fill up their Dropboxes! So in my example above, if ALL my tenants wanted the full 30GB of memory each simultaneously, I would run out of memory (120GB into 60GB doesn't go) and I would have a truckload of swap activity at the vSphere host level. Of course, so long as I monitor the usage and growth patterns I can watch the usage of memory increase – and take remedial action before anything bad happens.
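
As sums, the over-commitment version of the carve-up looks like this:

```powershell
# Promise each tenant more than I physically have, but only guarantee half.
$tenants        = 4
$physicalGB     = 60          # memory left for the Org vDCs
$allocPerTenant = 30          # what each tenant sees (and pays for)
$guaranteePct   = 0.5

$totalAllocated = $tenants * $allocPerTenant                         # 120GB promised
$totalReserved  = $totalAllocated * $guaranteePct                    # 60GB guaranteed

"Promised {0}GB, guaranteed {1}GB, physically have {2}GB" -f $totalAllocated, $totalReserved, $physicalGB
"Over-commit ratio: {0}:1 on what's promised versus what exists" -f ($totalAllocated / $physicalGB)
```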

So this is the model I went for in the end. Initially, I also opted to limit the maximum number of VMs to 50. I think what is likely to happen is that my Organizations will run out of memory way before they get anywhere near this number of VMs. But I feel uncomfortable with an "unlimited" approach to creating VMs, where any number of VMs could be created – I want to protect the system from a DDoS-style service outage where some nutter creates a gazillion VMs. To be honest, I even felt 50 VMs was too big a number, so in the end I dialed this down further to 30 VMs. On 5 x 16GB hosts I usually find that once I have about 6-8 instances of Windows 2008 R2 running per host I'm pretty much saturated when it comes to memory, and those hosts are getting dangerously close to hitting swap – and that's assuming the VMs are sat there idling and not really doing much, with nothing but the OS installed and booted, and with all the memory compression and TPS enabled. So I worked out 5 x 6 = 30. But as I said a moment ago, I think my Organizations are going to run out of memory way before they run out of this VM allocation or CPU.
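
For what it's worth, the 30-VM quota is nothing more scientific than a per-host gut feel multiplied out:

```powershell
# Where the 30-VM quota comes from.
$hostsInGoldCluster = 5
$win2008R2PerHost   = 6       # roughly where a 16GB host starts to feel memory pressure

$vmQuota = $hostsInGoldCluster * $win2008R2PerHost                   # 30
"Maximum number of VMs for the Org vDC: {0}" -f $vmQuota
```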

Screen Shot 2012-12-06 at 14.13.13.png

When it came to storage, I included all the levels of storage available (Platinum, Gold, Silver and Bronze). I also decided to disable the fast provisioning feature (which leverages linked clones). I consider the fast-provisioning option vital in a test/dev environment where you want to instantiate a large number of VMs in the shortest possible time, but I think the performance hit that could potentially happen when you have a lot of "child" VMs referencing a "parent" VM is something I would want to avoid in a production environment. I'm happy to keep thin provisioning on because I think the performance overhead is minimal, whilst the storage savings are massive. That's probably more applicable to block-based storage (VMFS on FC/iSCSI) than NFS, because on my NetApps thin provisioning is the default anyway on NFS datastores. I also decided to make the "Gold Storage" the default. Remember, this setting controls the location of the vCNS Edge Gateways for any Organization Networks or vApps that get created. I don't have much Platinum storage available, and it seems a waste to place the Edge Gateways on the most expensive storage I have – especially for something that shouldn't be that disk-intensive anyway. But by the same token I don't want them on the worst storage I have either! I left the allocations of storage at 20% because 4 x 20% = 80%, which leaves me 20% of "spare resources" to allocate should one of my tenants require increased space or be running out of space.
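
And the storage sums, for completeness:

```powershell
# Storage: give each of the four tenants 20% of each tier and keep 20% back.
$tenants      = 4
$perTenantPct = 20

$allocatedPct = $tenants * $perTenantPct                             # 80%
$sparePct     = 100 - $allocatedPct                                  # 20% held back
"Allocated {0}% across the tenants, leaving {1}% spare" -f $allocatedPct, $sparePct
```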

As for networking – in this case I elected to use my VLAN-backed pool but chose not to create an Edge Gateway or Organization Network at this point. You might recall this wizard can get very lengthy – and I want to learn the manual steps for creating a new Organization Network and so on. So all I had to do was name the Organization vDC and set a description. The only thing I did differently was that I did not enable it. After all, my work isn't done…

Screen Shot 2012-12-06 at 14.24.54.png

The effect of this can be seen in the vSphere Web Client – a resource pool is created called "CorpHQ – Production Virtual Datacenter". But the "System vDC", which is used to house and control the vCNS Edge Gateway (or VSE – vShield Edge) appliance, has not been generated – that's because I chose not to deploy a new network using the wizard. There's also a folder for the "CorpHQ – Production Virtual Datacenter" VMs; unlike the Test & Dev vDC it's empty, because no vApps or vApp Templates have been created yet.

Screen Shot 2012-12-06 at 14.34.27.png

Screen Shot 2012-12-06 at 14.31.24.png