Part 51: My vCloud Journey – Building vSphere vApp in vCloud Director (2 of 4)
… In my previous post I mainly focused on setting up the network to allow my vESX hosts to work with my pStorage – as well as making sure the relevant software (the ESX 5.1 CD and the VCVA) was uploaded to the catalog. The next thing I needed to do was build the vSphere vApp. So I clicked the “Build New vApp” icon and kicked off the wizard, using the “Add Virtual Machine” option to define my vESX hosts.
I defined the vESX accordingly:
- Name: esxnj01
- Hardware Version: 9
- Operating System Family: Linux
- Operating System Version: RHEL6 64-bit
- vCPU: 2
- Enable: Expose Hardware-Assisted CPU Virtualization to guest OS
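For reference, that last “Expose Hardware-Assisted CPU Virtualization” checkbox maps to a single setting in the VM’s .vmx file on vSphere 5.1 / hardware version 9. A minimal sketch of what gets set on your behalf (shown here purely for illustration – the wizard does this for you):

```
# In the vESX VM's .vmx file - passes VT-x/AMD-V through to the guest,
# which is what lets ESX run (and run VMs) inside a VM
vhv.enable = "TRUE"
```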
When you scroll down this dialog box, you’re able to set other options such as memory, virtual disk size, Disk Bus Type and the number of virtual NICs. I defined these accordingly:
- vCPU: 2
- Memory: 4GB
- Hard Disk Size: 10GB
- Bus Type: LSI Logic SAS (SCSI)
- Number of NICs: 4
4GB of memory is probably just enough to boot ESX and run some low-end 32-bit VMs like Windows XP/Windows 2003 – anything more ambitious as a nested VM (vINCEPTION Level2) is going to require more resources. You will want enough vESX hosts to be able to meaningfully use features like VMware HA/DRS, but not too many, because fundamentally these resources have to be found at the vINCEPTION0 layer – the physical world. A 10GB disk is more than enough space for the ESX host software. I initially started out with a smaller disk, but that triggered disk alarms from vCenter on the local VMFS volume, so I increased it to avoid unsightly alarms on first use. We need to use the LSI Logic SAS controller so that our vESX recognises the virtual disk attached to it. As for NICs, a minimum of 4 has been my entry-number ever since the ESX 2.x days. But remember, this is all virtual, so you can have as many as you like – fundamentally this network is being provided by the underlying physical hosts in vINCEPTION0.
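As an aside, if you were building an equivalent VM directly against vCenter rather than through the vCD wizard, the same spec could be sketched with the open-source govc CLI. This is a sketch only – it assumes govc is installed and the GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD environment variables already point at your vCenter, and the network names are placeholders matching my lab:

```shell
# Create the vESX shell VM: 2 vCPU, 4GB RAM, 10GB disk on an LSI Logic SAS
# controller, RHEL6 64-bit guest type; leave it powered off for now
govc vm.create -c=2 -m=4096 -g=rhel6_64Guest \
  -disk=10GB -disk.controller=lsilogic-sas \
  -net="Organization Network" -on=false esxnj01

# Add the remaining e1000 NICs (four in total) - the last one is
# patched to the storage network for "direct" access to pStorage
govc vm.network.add -vm esxnj01 -net="Organization Network" -net.adapter=e1000
govc vm.network.add -vm esxnj01 -net="Organization Network" -net.adapter=e1000
govc vm.network.add -vm esxnj01 -net="CorpHQ-StorageNetwork" -net.adapter=e1000
```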
The next part of the wizard concerns the naming of the vESX hosts and their storage location – your call. The part after that is a little more significant: the networking. Using the option to “Show network adapter type” is handy, as it shows that the default type used is e1000. That’s a must, because a vESX host currently has no VMware Tools package that would allow a virtual VMXNET device to work. Wouldn’t it be great if you could use one, for improved vESX network performance? The two things worth mentioning here are where NIC0/1/2/3 get patched and how we handle the IP. Notice in my screen grab how NIC3 is being patched into the CorpHQ-StorageNetwork for “direct” access to my pStorage layer. That will be significant later when I configure a VMkernel port – I will have to use vmnic3 to get the communication working… There’s no point in setting anything other than “DHCP” on any of these interfaces, because we are doing a manual installation of ESX (not from a template in the catalog) and currently the vESX host cannot utilize the guest customization provided by vCloud Director.
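Jumping ahead slightly: once the vESX host is installed, that storage-facing VMkernel port has to end up on vmnic3. A rough sketch of how that could be done from the vESX host’s shell with esxcli – the vSwitch/portgroup names and the IP address are my own placeholders, not anything vCD dictates:

```shell
# Build a dedicated vSwitch for storage traffic, uplinked to vmnic3
# (the NIC patched into CorpHQ-StorageNetwork)
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=Storage

# Create a VMkernel interface on that portgroup with a static IP
# on the pStorage network
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=Storage
esxcli network ip interface ipv4 set --interface-name=vmk1 --type=static \
  --ipv4=192.168.3.101 --netmask=255.255.255.0
```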
Note: With hindsight, I’m wondering if having NIC0/1/2 configured to the Organization Network is wise. It might have been nice to put the vSphere vApp behind a vApp Network, and then patch each NIC into its own network. One thing I haven’t dared to think about is how to do vCloud networking with a vSphere vApp. It might be that something like vCDNI is the best way of providing multiple networks – after all, I can’t define VLANs within a vApp, I can only define pools of VLANs.
The next thing to do (before powering on the vESX) is to attach (or “Insert”, as vCD likes to call it) the ESX 5.1 CD to the VM using the CD icon…
On power on, the vESX VM within the vApp should boot to the ESX 5.1 CD as normal…
When the install has completed, you can use the “Eject CD/DVD” option on the right-click menu of the vESX VM to stop it booting to the CD again… By the end of this process, this is what my vSphere vApp looked like:
So here you can see three ESX hosts, with vmnic3 mapped to a different network compared to vmnic0/1/2 – which is what allows for access to the underlying storage layer…