Note: The first part of this blogpost is very much focused on the “resources” (cpu/memory/storage) side of the fence of creating an Organization vDC. In my second part I will be focusing on “networking” side of things…
If you have been following my series, you'll know I got as far as creating my Organization for Corp, Inc Holdings. That was a pretty repetitive exercise – so I did it once, and then just repeated it for each Organization I have.
My plan is to create at least two Organization Virtual Datacenters (vDCs) per subsidiary – one for Production and one for Test/Dev. That way I can allocate my different Provider vDCs (Gold/Silver) to each of them – together with using the different features that come with the Organization vDC. That means I can run through the process each time for different scenarios. Rather than listing ALL the options in a single post, I can keep it focused on the features and the scenarios where you might use them.
1. You can kick off the creation of an Organization vDC by selecting the link “6. Allocate resources to an organization“. This is the same as selecting the “Manage & Monitor” tab, expanding >Organization vDCs and hitting the + icon.
2. Next select the Organization that will have the new Virtual Datacenter assigned to it. In my case I select the CorpHQ Organization.
3. Next I select the Provider vDC. My intention is to create a “Test/Dev” based Organization Virtual Datacenter. For that reason I selected my lower tier of compute resources, which I labeled “Silver”. You might recall the Silver Provider vDC contains just 4 vSphere hosts (not 5 as is the case with Gold), and these only have 12GB of memory each. My view was that a Test/Dev Virtual Datacenter should have the lowest grade of hardware at my disposal.
4. Next I need to select an “allocation model”. There are three options: Pay-As-You-Go, Allocation Pool and Reservation Pool. For my money I see each of these models as becoming progressively more conservative in the allocation of resources. All of these models create “resource pools” on the Provider vDC, which points to HA/DRS-enabled cluster(s) in vSphere. With the PAYG model there are no limits or reservations imposed, but with the allocation/reservation pool there are [READ THAT PART AGAIN]. To me the PAYG model clearly suggests a more on-demand model, where the tenant only “pays” for what they consume, with little or no guarantees. It’s very much an over-commitment model that might appeal to Service Providers, or in Test/Dev environments.
5. Select an Allocation Model. As you can see there are many (many) settings even with the PAYG model. Unlike the other models, where the numbers indicate the amount of the resource you want to reserve, in the case of the PAYG model they indicate how much you’re prepared to over-commit.
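To make those knobs a little more concrete: under PAYG, each VM claims resources when it powers on, based on the per-vCPU speed and the two “guaranteed” percentages. A short sketch of how I read that – this is my interpretation of the wizard settings, not vCD’s documented formula, so treat the numbers as illustrative:

```python
def payg_vm_footprint(vcpus, mem_gb, vcpu_speed_ghz=1.0,
                      cpu_guaranteed=1.0, mem_guaranteed=1.0):
    """Resources a single VM claims when powered on under PAYG.

    vcpu_speed_ghz and the two *_guaranteed fractions mirror the wizard
    settings; the formula itself is my reading of the model, so treat it
    as a back-of-envelope sketch rather than gospel.
    """
    cpu_allocated_ghz = vcpus * vcpu_speed_ghz
    return {
        "cpu_allocated_ghz": cpu_allocated_ghz,
        "cpu_reserved_ghz": cpu_allocated_ghz * cpu_guaranteed,
        "mem_reserved_gb": mem_gb * mem_guaranteed,
    }

# A "large" VM in my lab: 4 vCPUs, 2GB RAM, with only 10% guaranteed
print(payg_vm_footprint(4, 2, cpu_guaranteed=0.1, mem_guaranteed=0.1))
```

The point the sketch makes is that lowering the guaranteed percentages shrinks only the *reserved* slice, not what the VM is allocated – which is exactly where the over-commitment comes from.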
A good way of seeing this is to watch the settings which appear at the bottom of the “Configure PAYG model” page. vCD arrives at these estimated numbers of VMs by processing your settings against the Provider vDC selected earlier in the wizard.
The Silver Provider vDC has 4 vSphere hosts making up the cluster – each with 12GB of memory. That makes 48GB of total memory available in the entire cluster. The current settings allow for 39 “large” VMs with 2GB RAM each – that’s 78GB of memory committed, or 30GB more memory than I actually have. You might notice that the “small” VMs come to 100. That’s mainly because the “Maximum number of VMs” is set to 100. Clearly, if we set a maximum, then whatever the over-commit, we can’t exceed that limit. I found that if I increased the maximum, the number of “small” VMs increased to about 157 – and again, if you take 157 x 512MB it comes to roughly 78GB. So there’s a tip there – you might not be able to “see” these “typical number of vApps” figures change if you keep the default maximum of 100 VMs.
Of course, the assumption is that even if I do create 39 large or 157 small VMs, they will never simultaneously demand their maximum configured memory. That, if you like, is the principle of memory over-commitment. Generally, Devs and Application Owners demand more memory than they really need – and this is a way of providing it without actually needing 78GB of physical memory. We do have control over this level of over-commitment. By decreasing the % values for “CPU resources guaranteed” and “Memory resources guaranteed” – that is, by decreasing the reservation – we can increase the level of over-commitment. So if I decreased the CPU/Memory resources guaranteed values to just 10%, I could significantly increase the number of VMs allowed within the Organization vDC. That allowed me to increase the density to 315 “small” VMs and 78 “large” VMs.
Note: Number of VMs increases as we decrease the CPU/Memory resource guaranteed to 10%
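The arithmetic behind those estimates is easy to sanity-check by hand. A quick sketch, using the VM sizes and cluster figures from my Silver Provider vDC above (this is just my back-of-envelope check, not vCD’s exact algorithm):

```python
# Back-of-envelope check of the over-commitment figures above.
# Cluster: 4 hosts x 12GB = 48GB of physical memory.
hosts = 4
mem_per_host_gb = 12
cluster_mem_gb = hosts * mem_per_host_gb          # 48GB in the cluster

# 39 "large" VMs at 2GB each...
large_vms = 39
large_mem_gb = 2
committed_gb = large_vms * large_mem_gb           # 78GB promised
overcommit_gb = committed_gb - cluster_mem_gb     # 30GB more than exists

# ...and 157 "small" VMs at 512MB each come to roughly the same figure
small_vms = 157
small_mem_gb = 0.5
small_committed_gb = small_vms * small_mem_gb     # ~78.5GB

print(cluster_mem_gb, committed_gb, overcommit_gb, small_committed_gb)
```

Which is why the small-VM and large-VM estimates track each other – both are derived from the same pool of over-committable memory.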
So how does vCD come to these sorts of values? Well, you’ll notice the vCPU rating for each of the types of VMs (small, medium, large) is set to a standard of 1.0GHz. This value comes from the default vCPU Speed of 1GHz. As the web-page notes, this allocation is on a per-vCPU basis – so a 2-vCPU VM would receive an allocation of 2GHz. So if I reduced the expected CPU utilization to 0.5GHz, the density of VMs increases even further.
Note: Number of VMs increases as we decrease the vCPU speed allocated per vCPU.
On the memory side of the fence… the default is that there is no limit on the allocation of memory. This would allow the Organization vDC to use all of the memory within the Provider vDC. The “Memory Quota” can be made less or more than the actual amount of physical memory in the Provider vDC. So if we do something silly, like engage the quota and set it to a low value – such as 4GB – we can see the number of VMs allowed would decrease substantially: to just 8 “small” VMs (8x512MB = 4GB) and 2 “large” VMs (2x2GB). Bear in mind you might get more if memory is not reserved to the guest operating system, and memory is delivered using the normal model – allocated as and when it’s needed.
Note: Number of VMs decreases as we lower the memory quota to just 4GB.
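That quota arithmetic is simple enough to verify: the estimate is just the quota divided by the per-VM memory. A deliberate simplification on my part – the wizard also factors in the CPU settings – so this is only the memory half of the sum:

```python
def vms_for_memory_quota(quota_gb, vm_mem_gb):
    """How many VMs of a given size fit under a memory quota.

    Deliberately simplified: the real vCD estimate also considers the
    CPU settings, so this only models the memory constraint.
    """
    return int(quota_gb // vm_mem_gb)

# A 4GB quota: 8 "small" (512MB) VMs, or 2 "large" (2GB) VMs
print(vms_for_memory_quota(4, 0.5))
print(vms_for_memory_quota(4, 2))
```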
If we ‘over-commit’ the memory to the Organization vDC – in this case allocating 100GB of memory – then the number of VMs increases substantially: to 157 “small” VMs and 39 “large” VMs. Do notice something there: these are actually the same numbers we had with the “unlimited” setting. That’s because the calculation of the number of VMs is based on two factors – CPU and memory.
Note: Increasing the memory quota beyond the actual amount of memory on the cluster has no effect. That’s because the PAYG model already allows for memory over-commitment.
The same principles apply to CPU, as we can limit the amount of CPU cycles allocated to the Organization vDC using the CPU Quota. By default there are no quotas. So if we assigned a 10GHz CPU quota with a vCPU Speed of 1GHz, that would allow for 10 “small” VMs (10GHz/1GHz) or 2 “large” VMs (10GHz/4GHz – that’s 1GHz x 4 vCPUs).
Note: With a vCPU speed of 1GHz, the number of VMs possible decreases when a CPU quota is introduced.
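The CPU side can be sketched the same way as the memory side, and putting the two together shows why raising the memory quota past the physical amount changed nothing earlier – the overall estimate is whichever constraint bites first (again, my simplified model of the wizard, not vCD’s exact algorithm):

```python
def vms_for_cpu_quota(quota_ghz, vcpus, vcpu_speed_ghz):
    """VMs that fit under a CPU quota, at the per-vCPU speed setting."""
    return int(quota_ghz // (vcpus * vcpu_speed_ghz))

# 10GHz quota, 1GHz per vCPU:
small = vms_for_cpu_quota(10, 1, 1.0)   # 1-vCPU "small" VMs
large = vms_for_cpu_quota(10, 4, 1.0)   # 4-vCPU "large" VMs

# The wizard's overall figure is effectively the more constraining of
# the CPU-based and memory-based counts.
def estimated_vms(cpu_constrained, mem_constrained):
    return min(cpu_constrained, mem_constrained)

print(small, large, estimated_vms(small, 157))
```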
6. Allocating Storage to the Organization vDC.
As you can see, the new wizard doesn’t expose datastores so much as storage profiles – which represent different classes of storage. It just so happens that in my case I have precisely one storage profile per datastore. Although there are three classes of storage overall, my Silver hosts are not connected to the “Platinum” storage because they lack FC HBAs – so the Organization vDC can only be presented with Bronze and Silver. Once the storage profiles have been added, you’re able to select the default instantiation profile that will be used with any new vApp.
Note: This “default instantiation profile” also controls which datastore(s) the vCNS Edge Gateway appliance is deployed to.
Additionally, you’re able to set a storage limit on each datastore. I’m not sure quite what logic is used to set the % value here. You can see one datastore has been allocated 20% of the Silver storage, and the other 21% of the Bronze storage. I get the feeling there is some sliding scale going on: the more storage you have, the greater the % allocated, perhaps. Not that that would make a great deal of sense here – given that Bronze has twice as much capacity as Silver. As with the CPU/memory assignment previously, I could set these to be unlimited if I wished, or divide the storage between my four tenants equally or unequally as my mood takes me.
I was thinking today about the general principles of resource management as they apply to vCD. I began to realize that although the technology is new, the folks using it are not. Generally in life, once a person or group of people have got used to having something (whatever that privilege might be), they don’t react well to having it taken away. “You don’t miss what you never had” is the general moral of the story. So I’m minded to dial these values down – or keep them as they are – until such time as the Organizational Admins find they are a problem. Nowhere is this more true than in a test/dev environment, where people tend to hold on to VMs and the configurations they contain for longer than they really should. Test/Dev environments should by their nature be disposable. As a former instructor: if you left some important data on a test lab one week, and came back expecting to find it still there the week after, you would be regarded as living in la-la-land. What I’m conscious of is that given the relatively small amounts of storage I have (relative to a large datacenter), these numbers look paltry. But by the same token, the resource is therefore scarce, and perhaps I shouldn’t be overly generous. For simplicity I rounded these numbers up to 500GB and 200GB. Then I made a mental note and allocated the same resources to my other “Test/Dev” vDCs.
Finally, on this page, I checked out the “Provisioning” settings.
Compared to the allocation of storage, these seem relatively straightforward to me. Both result in great efficiencies in terms of disk space used, and in the time it takes to create new VMs. There are a couple of caveats in my mind – there’s lots of talk on the blogs about the rights & wrongs of “thin virtual disks on thinly-provisioned volumes”. Some say it’s good; some say it’s bad. Personally, I’m an advocate of so-called “thin-on-thin”, mainly because I believe in the flexibility it provides, together with the disk savings. Of course, in some cases you might be faced with no choice – for example, virtual disks provisioned on NetApp NFS volumes are always thin by default. As I see it, the “downside” of thin-on-thin is that it becomes harder to see how much space is being utilized, and how much free space you actually have left. Some see it as a lie built on top of a lie – the volume is 10TB in size (no it isn’t, there’s only 2TB of storage on disk) and the virtual disk is 2TB in size (no it isn’t, there’s actually only 6GB used). I think there are some people who see this as “voodoo storage”, where you’re endlessly presenting virtual amounts of the resource to different systems. The way I see it, we do this quite calmly for CPU/memory in compute virtualization, so why not with storage? So long as you MONITOR your array properly, you will not find yourself like Wile E. Coyote walking over a cliff, gulping and looking directly at the camera. But to take a step back, I understand the fear. From the tenant’s perspective, they genuinely THINK they have X when they actually have Y – and we are rather pulling the wool over their eyes. But I feel the alternatives in a test/dev environment are less palatable – going out and buying the physical storage upfront, way before you might ever need it. So for me, “Thin Provision” is enabled.
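To put numbers on that “lie built on top of a lie”, here is a sketch using the figures from the paragraph above. It just shows why monitoring actual usage (not provisioned capacity) is the thing that matters with thin-on-thin:

```python
# Thin-on-thin: what each layer claims versus what physically exists,
# using the example figures from the text above.
physical_tb = 2.0            # actual disk in the array
volume_tb = 10.0             # thinly-provisioned volume presented to vSphere
vdisk_tb = 2.0               # thin virtual disk presented to the guest
vdisk_used_gb = 6.0          # what the guest has actually written

# The volume promises 5x the physical capacity...
volume_oversubscription = volume_tb / physical_tb

# ...while the virtual disk has barely been touched.
actual_used_fraction = (vdisk_used_gb / 1024) / vdisk_tb

print(f"{volume_oversubscription:.0f}x oversubscribed at the volume layer")
print(f"{actual_used_fraction:.1%} of the virtual disk actually used")
```

The array is fine so long as what the guests have *actually written* stays under the 2TB of real disk – which is precisely what proper monitoring is for.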
I might not feel the same way in my “Production” Organization vDCs, where I may feel more cautious about guaranteeing both a quality of service and a guarantee of capacity. I will have to make my tenants aware of the co$t of such conservatism.
I also kept the option for “Fast Provisioning”, released with vSphere 5. Fast Provisioning does for vCD what “Linked Clones” does for VMware View. It allows a “parent VM” to be the read-only source for many “delta” VMs, up to a maximum of 31 levels deep – how’s that for “Inception”. Once you hit 31, a new parent VM is cloned, and another tree is started. The cloning process should be blisteringly quick – though much depends on where the “source” vApp template resides. There may be an initial “full clone” to create the new parent VM before the magic can begin. Fast Provisioning has a sister feature, which is VAAI-capable NFS cloning. That’s where these deltas are generated by the storage array creating a series of pointers – a good example of this is NetApp FlexClone technology. I’m hoping to get an upgrade to my NetApps in the next couple of weeks to enable this option for both vCD and View.
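The 31-level chain behaviour is easy to model. A sketch of my own (not vCD code), assuming the worst case where every new linked clone adds a level to the current tree, so a fresh full clone of the parent is needed every 31 clones:

```python
import math

def full_clones_needed(linked_clones, max_depth=31):
    """Full parent-VM copies required for a number of linked clones.

    Purely illustrative: assumes the worst case where each clone deepens
    the current chain by one level, so a chain holds at most max_depth
    clones before vCD starts a new tree with a fresh full clone.
    """
    if linked_clones == 0:
        return 0
    return math.ceil(linked_clones / max_depth)

print(full_clones_needed(10))   # still inside the first tree
print(full_clones_needed(62))   # two trees' worth of clones
```

In practice the trees can be wider than they are deep, so the real number of full clones may well be lower – but it shows why the occasional “slow” provisioning operation turns up among all the fast ones.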
…As this blogpost was getting a bit lengthy I was forced to split it in two. It seems there is a limit on the number of images the VMTN blog platform allows, and I’ve reached it! The next part will follow tomorrow…