3-layers-vCD-vSphere-Physical-STORAGE.png

One of the tasks I have to do soon is upgrade my lab environment. The last real use of it was writing the “Building End-User Computing Solutions with VMware View” book with fellow vExpert, Barry Coombs. That was based on the vSphere 5.0 platform using View 5.1, Horizon 1.5, ThinApp and ThinApp Factory 4.7.2. The whole environment is due a shake-up with my vCloud Suite project coming sharply into focus. One of the things I’ve been thinking about is how to take the kit I have and tier it into various levels of quality – so I can align that to the Provider Virtual Datacenters that reside in vCloud Director. In case you don’t know, the Provider vDC is an object in vCloud Director that acts as a layer of abstraction above the underlying hardware – and it does this by utilizing familiar resources in the vSphere layer, namely the resource pool and the cluster. As you might know, I have this fictitious company called “Corp.com” which I have used increasingly in my books on SRM and View – and I want to use that as the basis for some of this work. The idea is to create as realistic an environment as possible for me to play with vCD.

So anyway, I want to share with you what my thinking currently is in my journal. First off, let me explain what resources I have available. I’ve really got two different tiers of servers – 5x HP DL385s with about 16GB of RAM each (Type1), and 4x Lenovo TS200s with 12GB each (Type2). It seems reasonable to say that, despite their age, the Type1 servers are probably the best I have, simply because of the quantity of memory. Currently the Type1 servers reside in a cluster managed by the “New York” vCenter, and the Type2 servers are managed by a “New Jersey” vCenter. That’s a structure that largely came about due to the SRM books. Anyway, my plan is to decommission the “New Jersey” vCenter – and have two clusters in one vCenter for simplicity. These clusters will become my “Gold” (Type1) and “Silver” (Type2) clusters for my Provider vDCs. Of course, you can add as many vCenters into vCD as you like, but I thought my own life would be made simpler by having one Distributed vSwitch for both clusters.
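Just to keep the plan straight in my own head, here’s the compute layout as a throwaway Python sketch – the labels and field names are my own, not anything vSphere or vCD mandates:

```python
# Hypothetical sketch of the planned compute layout - labels are my own.
CLUSTERS = {
    "Gold":   {"vcenter": "New York", "hosts": 5, "model": "HP DL385",     "ram_gb_per_host": 16},
    "Silver": {"vcenter": "New York", "hosts": 4, "model": "Lenovo TS200", "ram_gb_per_host": 12},
}

for name, c in CLUSTERS.items():
    total = c["hosts"] * c["ram_gb_per_host"]
    print(f"{name}: {c['hosts']}x {c['model']} = {total}GB RAM in total")
```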

The next thing I have to do is handle my storage. I’m thinking of blatting the storage arrays and starting from scratch – to give me maximum flexibility. Previously my storage layout was very much governed by my SRM work – so I would have a LUN/volume per application. That allowed for the failover of a single application or business unit. What I want to do is re-organize my storage around providing different tiers of storage with different capabilities. I currently have four storage arrays in my rack (jotted down again as a little data sketch after this list):

1x NetApp – SAS; FC, iSCSI and NFS supported; with SnapMirror replication

1x NetApp – SATA; FC, iSCSI and NFS supported; with SnapMirror replication

1x Dell EqualLogic – SAS; iSCSI only; with replication

1x Dell EqualLogic – SATA; iSCSI only; with replication
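And here’s that same inventory as the little data sketch I promised – a structure I can reuse later when scripting tier assignments. The field names are my own shorthand:

```python
# My own shorthand for the four arrays; protocols/replication as listed above.
ARRAYS = [
    {"vendor": "NetApp", "disks": "SAS",  "protocols": {"FC", "iSCSI", "NFS"}, "replication": "SnapMirror"},
    {"vendor": "NetApp", "disks": "SATA", "protocols": {"FC", "iSCSI", "NFS"}, "replication": "SnapMirror"},
    {"vendor": "Dell",   "disks": "SAS",  "protocols": {"iSCSI"},              "replication": "EqualLogic"},
    {"vendor": "Dell",   "disks": "SATA", "protocols": {"iSCSI"},              "replication": "EqualLogic"},
]

# For example: which arrays could back an NFS tier?
nfs_capable = [f"{a['vendor']}/{a['disks']}" for a in ARRAYS if "NFS" in a["protocols"]]
print(nfs_capable)  # ['NetApp/SAS', 'NetApp/SATA']
```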

The iSCSI and NFS storage resides on just 1Gb links, whereas the FC connections run at 2Gb. So I’m classing the iSCSI and NFS storage as sub-par in terms of performance – merely because of the network infrastructure that backs it, rather than the protocols themselves. What I want to do is try and reorganize the storage in a more abstract way than I do currently. That will mean I can map these different tiers of storage to each of the Provider vDCs I intend to create. This is what I’m thinking of at the moment:

Tier 1: FC, SAS with synchronous replication on the NetApp Arrays

Tier 2: FC, SAS with 1hr RPO replication on the NetApp Arrays

Tier 3: FC, SAS, No Replication

Tier 4: iSCSI, SAS with 1hr RPO replication on the Dell Arrays

Tier 5: iSCSI, SAS, No Replication

Tier 6: NFS, SAS, No Replication

Tier 7: NFS, SATA, No Replication

I still haven’t decided if this division of storage is a good idea. My initial reaction is that it might be too complicated. Perhaps a simpler structure would be more appropriate – say, just three classes, like so:

Tier 1: FC, SAS with synchronous replication

Tier 2: iSCSI, SAS with 1hr RPO

Tier 3: NFS, SATA, No Replication

This would correspond to a clearer distinction in the storage classes I’m providing – a gold, silver, bronze storage structure. That way a vCloud consumer could choose between a gold or silver compute layer, and map their compute requirements to the storage class they need. In this case the Silver Provider vDC with “Storage Class 3” would be ideal for a test/dev Organization vDC – and the Gold Provider vDC with “Storage Class 1” would be ideal for production VMs that offer business-critical services.
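To make the pairing concrete, here’s a minimal sketch of that three-class model and how a consumer’s choice might map onto it – the recommended pairings are purely my own illustration:

```python
# A minimal sketch of the three-class model; the recommended pairings
# of compute class, workload and storage class are my own illustration.
STORAGE_CLASSES = {
    1: "FC, SAS, synchronous replication",
    2: "iSCSI, SAS, 1hr RPO",
    3: "NFS, SATA, no replication",
}

RECOMMENDED = {
    ("Gold",   "production"): 1,  # business-critical services
    ("Gold",   "test/dev"):   2,
    ("Silver", "test/dev"):   3,  # ideal for a test/dev Org vDC
}

def suggest(compute: str, workload: str) -> str:
    cls = RECOMMENDED[(compute, workload)]
    return f"{compute} Provider vDC + Storage Class {cls} ({STORAGE_CLASSES[cls]})"

print(suggest("Silver", "test/dev"))
print(suggest("Gold", "production"))
```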

When I was thinking about this I got talking to Cormac Hogan for a VMwareWag we were recording together – and off the recording I talked to him about my lab. He made some interesting comments about what I should do. He suggested that each array I possess could be classified as a tier of storage:

Tier 1: NetApp/SAS (FC/NFS/iSCSI) – with replication

Tier 2: Dell/SAS (iSCSI) – with replication

Tier 3: NetApp/SATA (FC/NFS/iSCSI) – no replication

Tier 4: Dell/SATA – no replication

He made the valid point that not all arrays distribute internal IO across ALL the spindles WITHIN the array. It’s also the case that if you create a datastore cluster that spans two arrays, then VAAI (with copy-offload) is not used to move a VM from one array to another – the NFC (Network File Copy) protocol is used instead. So the best option for performance is for the VM to stay within an array, and not be moved from one array to another. Cormac also pointed out that vCloud Director 5.1 now supports storage profiles – I could use storage profiles to mark my different tiers of storage, and if a “tenant” wants to move from bronze to silver, all it would take is a click of the mouse to change the profile setting – and storage profiles would handle the rest. It also occurred to me that merely because of my bandwidth limitations (1Gb for IP storage, and 2Gb for FC) there would be a danger of NFS/iSCSI being classed as “sub-par” to FC. That’s something I really would like to avoid in my classifications. In the last 24 months I’ve become a big lover of NFS – and it has always irked me, this notion that somehow NFS is less good than, say, iSCSI/FC with VMFS. The truth is that it’s just DIFFERENT.
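To make Cormac’s per-array point concrete, here’s a rough sketch of the grouping rule I’ll follow: one datastore cluster per array, so a Storage DRS move never leaves the array and can stay on the VAAI copy-offload path instead of falling back to NFC. The datastore names are invented for illustration:

```python
from collections import defaultdict

# Invented datastore names; the 'array' tag is the bit that matters.
DATASTORES = [
    {"name": "nfs_sata_01",  "array": "NetApp-SATA"},
    {"name": "nfs_sata_02",  "array": "NetApp-SATA"},
    {"name": "nfs_sata_03",  "array": "NetApp-SATA"},
    {"name": "iscsi_sas_01", "array": "Dell-SAS"},
]

# One datastore cluster per array: intra-cluster moves stay intra-array,
# so the array's copy-offload can be used rather than host-based NFC copies.
clusters = defaultdict(list)
for ds in DATASTORES:
    clusters[ds["array"]].append(ds["name"])

for array, members in clusters.items():
    print(f"Datastore cluster '{array}-DSC': {members}")
```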

A couple of things became apparent to me as I started to repartition my storage. Only some of my ESX hosts are fibre-channel connected, and they just so happen to be the ones with the most memory – so those naturally become my “Platinum” hosts. iSCSI is available across all the hosts, and so is NFS. One of the nice things about vCloud Director 5.1 is that you can present multiple tiers of storage to a single Provider vDC. That should mean I can offer tiers 1/2/3/4 to the Gold Provider vDC and tiers 2/3/4 to the Silver Provider vDC.
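A quick sketch of that exposure – the tier numbers refer to the final configuration below, and the mapping is just my plan, not anything vCD imposes:

```python
# Which storage tiers each Provider vDC sees; tier numbers refer to the
# final configuration below, and the mapping is my plan, not a vCD default.
PROVIDER_VDC_TIERS = {
    "Gold":   {1, 2, 3, 4},  # FC-connected, higher-memory hosts
    "Silver": {2, 3, 4},     # no FC HBAs, so no Tier 1
}

def tiers_for(pvdc: str) -> set:
    return PROVIDER_VDC_TIERS[pvdc]

assert 1 not in tiers_for("Silver")  # Silver hosts can't reach the FC tier
print(sorted(tiers_for("Gold")))     # [1, 2, 3, 4]
```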

So in the end I went for this configuration:

Tier 1: NetApp/SAS/FC with replication (to Class3)

Tier 2: Dell/SAS/iSCSI with replication (to Class4)

Tier 3: NetApp/SATA/NFS as a Datastore Cluster of 3x 300GB NFS datastores, using the remaining free space…

Tier 4: Dell/SATA/iSCSI as a single 2TB datastore, using the remaining free space…

One of the interesting things that came out of configuring this is how much less Tier 1/2 capacity I actually have (especially once you leave room for growth/snapshots). One thing I would like to do at some point is price up the cost per GB of each type – factoring in that every 1GB written to Tier 1 is duplicated in Tier 3, so your 1GB actually costs you 2GB in reality. More generally, I feel that if you allowed “market forces” to prevail, a resource that’s in short supply should “cost” you more than a resource that is plentiful.
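I haven’t done the pricing exercise yet, but the shape of the sum is easy enough to sketch. The per-GB figures below are invented placeholders, not real quotes – the point is the replication multiplier:

```python
# Invented placeholder costs per GB (NOT real pricing) - the point is
# that a replicated tier consumes capacity at its replication target too.
RAW_COST_PER_GB = {1: 1.00, 2: 0.60, 3: 0.25, 4: 0.20}

# Tier 1 replicates into Tier 3, Tier 2 into Tier 4 (per the layout above).
REPLICA_TARGET = {1: 3, 2: 4}

def effective_cost_per_gb(tier: int) -> float:
    cost = RAW_COST_PER_GB[tier]
    target = REPLICA_TARGET.get(tier)
    if target is not None:
        cost += RAW_COST_PER_GB[target]  # every 1GB written also lands there
    return cost

for tier in sorted(RAW_COST_PER_GB):
    print(f"Tier {tier}: {effective_cost_per_gb(tier):.2f} per GB effective")
```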

Below are some screen grabs. Here we have a screen grab of the “Tier1_PlatinumDR” datastore, backed by an FC-connected NetApp LUN of 750GB.

Screen Shot 2012-09-25 at 22.02.52.png

As I looked at this setup, I began to realize I could offer vSphere Replication between my two “SATA” tiers of storage, between the NetApp and the Dell. Of course, these two arrays share nothing in common with each other when it comes to replication – but that’s not a problem with “virtual” replication, as I like to call it. That way Platinum would offer a 5min RPO, Gold would offer a 1hr RPO, and vSphere Replication could offer a 3hr RPO. I wanted to get vSphere Replication in there ready for the day when vCloud Director gets the enlightenments to offer per-VM protection right up into the tenants’ organization.
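A sketch of how those RPOs might drive tier selection – the idea being that, if “market forces” applied, a tenant would pick the cheapest tier that still meets their requirement. The tier names and the longer-RPO-means-cheaper assumption are my own:

```python
# RPOs per tier as described above; tier names are my own labels.
RPO_MINUTES = {"Platinum": 5, "Gold": 60, "vSphere Replication": 180}

def cheapest_tier_for_rpo(required_rpo_min: int) -> str:
    # Assumption: a longer RPO implies a cheaper tier, so pick the
    # largest RPO that still satisfies the tenant's requirement.
    candidates = {t: r for t, r in RPO_MINUTES.items() if r <= required_rpo_min}
    if not candidates:
        raise ValueError("No tier meets that RPO")
    return max(candidates, key=candidates.get)

print(cheapest_tier_for_rpo(240))  # 'vSphere Replication' - 3hr RPO is enough
print(cheapest_tier_for_rpo(30))   # 'Platinum' - only the 5min tier qualifies
```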

The layout above is something I have reflected in my Storage Profiles as well – giving each of my tiers a description and assigning those to the profiles. That means even if someone is creating VMs in vCenter rather than in vCloud Director, they still get the filtering and guidance on the different types of storage. One of the nice improvements in vCloud Director 5.1 is support for Storage Profiles and Storage DRS – so I wanted to get those set up and configured well before introducing vCloud Director into the environment.

Screen Shot 2012-09-26 at 09.47.20.png

Here I’ve selected the “Platinum Storage” profile when creating a VM, which then filters the available storage down to the datastores that correspond to this tier. It would be nice if I had more storage of this type in the lab, but I’m limited by rack space, so I can’t take on any more arrays. If I had another array that matched this type of storage, it could be classed in the same way, and the user would see more than one datastore tagged “Platinum”. But I guess the point has been made.

Screen Shot 2012-09-25 at 22.04.21.png

Note: This shows an NFS/SATA-backed datastore cluster – containing 3 NFS datastores of 300GB each.

Conclusion:

For me there’s a natural tension. At one level the “geek” side of my personality wants to create 300 different classes of storage to deal with EVERY eventuality. However, the “service” side of my personality argues that the SIMPLER the system, the easier it will be for my “tenants” to consume. It argues that if the storage is sliced-and-diced to the infinite degree, like some mobile/cell phone packages, it will just leave my customers overwhelmed with too much choice, and they won’t be able to make a decision. My guess is that the less complicated model should win out. I have a feeling it’s precisely this tension between my “geek” mode and my “service” mode that will be at the heart of my cloud journey: balancing the competing needs of flexibility against complexity. After all, cloud is meant to abstract and hide the underlying complexity of the physical/virtual infrastructure – if I end up making my cloud layer just as complicated, what will I have achieved except introducing more complexity to my environment…