Last week I was in Palo Alto working with the team, and I had the chance to get some stick-time with a physical appliance. Believe it or not, much of my experience to date has been focused on our very own ‘hands-on lab’ (HOL) – improving and extending it to be ready for our Partner Exchange (PEX) in February. I think this is a great illustration of the value of these HOL environments – they let people play with the product without needing the physical hardware.

I’ve been adding all sorts of additional tasks to the new HOL, including things such as:

• Adding a second appliance to demonstrate auto-discovery
• Patch Management of the EVO:RAIL engine
• Simulating a failure of a node, and the replacement workflow

On top of that, I’ve been working on some internal training for both our staff and partners to get folks up to speed. I think the value proposition and purpose of EVO:RAIL are starting to be understood, but people are looking for more detail about how the product works – not because you need to know this stuff to use the product, but because if you’re a technical person it’s just nice to know how things fit together. Appliances tend to be referred to as “black boxes”, but personally I find this can lead to fear of the unknown. I’ve never been one to believe that ignorance is bliss, and I often quoted the poet Alexander Pope in my classes: “A little learning is a dangerous thing”. So this series of posts is about making sure there is lots of knowledge available.

[Image: the 2U/4-node sled]

So let’s get down to brass tacks. As you probably know by now, EVO:RAIL is a 2U, four-node system. In EVO:RAIL we number each of these nodes – node 01, 02, 03, and 04 – and you will see that in our UI as well. Each node has an identity that is a combination of the appliance ID and its node ID, such as MAR12345604-01, MAR12345604-02, MAR12345604-03, and MAR12345604-04. The first node is important because it’s where the EVO:RAIL Configuration and Management engine resides.
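
As a quick illustration of that naming scheme (using the sample identity above – real appliance IDs will obviously differ), here’s how you might split a node identity into its appliance and node parts. This is just a sketch for clarity, not anything EVO:RAIL itself ships:

# Illustrative only: splits a node identity of the form <applianceID>-<nodeID>
# into its two parts. The sample value is taken from this post.
def split_node_identity(identity):
    appliance_id, node = identity.rsplit("-", 1)
    return appliance_id, int(node)

print(split_node_identity("MAR12345604-01"))  # -> ('MAR12345604', 1)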

[Image: EVO:RAIL architecture diagram]

Here we can see that node01 is the one that initially holds the vCSA instance. Installed inside the vCSA are the EVO:RAIL Engine and the VMware Loudmouth daemon (incidentally, not shown in the diagram for simplicity is the vCenter Log Insight instance). The EVO:RAIL Engine is the service that provides the polished Configuration and Management UI that you see in all our videos and in the HOL. VMware Loudmouth, on the other hand, is an implementation of “Zero Network Configuration” (Zeroconf): it enables the auto-discovery of the nodes and is intimately involved in tasks such as discovering additional appliances to increase compute and storage capacity.
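
To give a feel for what Zeroconf-style discovery looks like in practice, here’s a minimal sketch using the open-source python-zeroconf library. To be clear, this is not Loudmouth itself, and the service type shown is a placeholder I’ve made up for illustration – the actual service names Loudmouth advertises aren’t covered here:

# Minimal Zeroconf browse example using the python-zeroconf library.
# "_evorail._tcp.local." is a made-up placeholder service type for
# illustration only; it is not the real Loudmouth service name.
from zeroconf import ServiceBrowser, ServiceListener, Zeroconf

class ApplianceListener(ServiceListener):
    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        print(f"Discovered {name}: {info}")

    def remove_service(self, zc, type_, name):
        print(f"Lost {name}")

    def update_service(self, zc, type_, name):
        pass

zc = Zeroconf()
browser = ServiceBrowser(zc, "_evorail._tcp.local.", ApplianceListener())
try:
    input("Browsing for appliances on the local network; press Enter to stop\n")
finally:
    zc.close()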

You’ll notice in the architecture diagram that the vCSA and Log Insight appliances are not stored on the boot device – they are stored on the VSAN datastore. So there’s a bit of a chicken-and-egg situation – a Joseph Heller-style Catch-22 – at play here if you think about it. The vCSA (and, as a consequence, the EVO:RAIL software) is held on the very storage it is about to configure. How do you power on the vCenter instance on a VSAN datastore that has yet to be created? After all, one of the tasks of the EVO:RAIL Configuration engine is to create a VSAN datastore from the appliance’s 12 hard disks (HDDs) and 4 solid-state drives (SSDs). Here’s the secret sauce.

It is actually possible to create VSAN disk groups and a VSAN datastore with just one node. It’s a process well documented by my esteemed colleague William Lam on his virtuallyghetto blog:

http://www.virtuallyghetto.com/2013/09/how-to-bootstrap-vcenter-server-onto.html

In William’s case he was concerned with home-labbers who want to use VSAN as their primary and only storage, and have no other storage to temporarily hold the vCenter instance – which needs to be powered on and available in order to create the first HA cluster and, as a consequence, the VSAN cluster as well. So when an EVO:RAIL appliance is powered on, all four nodes boot simultaneously and node01 uses the ESXi “Virtual Machine Start-up/Shutdown” feature to power on the vCSA for the first time.
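
If you’re curious what that single-node bootstrap looks like under the hood, here’s a rough sketch based on the steps William documents in the post above, wrapped in a little Python purely for readability. The esxcli syntax follows the ESXi 5.5-era article and may differ on later releases, and the device names are placeholders – treat this as an illustration of the technique, not the exact EVO:RAIL implementation:

# Illustrative wrapper around the single-node VSAN bootstrap steps described
# in William Lam's post. The esxcli flags below follow the ESXi 5.5-era
# article and may vary by release; the device names are placeholders.
import subprocess
import uuid

def esxcli(*args):
    cmd = ["esxcli"] + list(args)
    print("+", " ".join(cmd))          # echo the command for clarity
    subprocess.check_call(cmd)

# 1. Relax the default VSAN policy so objects can be force-provisioned
#    with only a single node contributing storage.
policy = '(("hostFailuresToTolerate" i1) ("forceProvisioning" i1))'
for policy_class in ("vdisk", "vmnamespace"):
    esxcli("vsan", "policy", "setdefault", "-c", policy_class, "-p", policy)

# 2. Form a one-node VSAN cluster (5.5-era syntax: join with a generated UUID).
esxcli("vsan", "cluster", "join", "-u", str(uuid.uuid4()))

# 3. Claim the local SSD and HDD(s) into a disk group (placeholder device IDs).
esxcli("vsan", "storage", "add", "-s", "naa.SSD_DEVICE_ID",
       "-d", "naa.HDD_DEVICE_ID")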

[Screenshot: vCSA and Log Insight stored on the MARVIN-Virtual-SAN-Datastore]

Note: This screen grab shows the vCSA and Log Insight being stored on the same MARVIN-Virtual-SAN-Datastore. Incidentally, Log Insight is only powered on if you enable it during the EVO:RAIL configuration.

[Screenshot: Virtual Machine Start-up/Shutdown boot order]

Note: This screen grab shows how the vCSA appliance is set to auto-start when node01 is powered on for the first time.

Finally, as an aside – the boot device can either be a conventional HDD, or it can be a 32GB SLC SATADOM with a reservation pool. This reservation pool holds back cells on the SLC SATADOM, so that if cells become depleted there are spares in reserve that can be used. Typically these local boot devices are formatted with VMFS and labeled with the naming convention appliance-nodeID-service-datastore1. The screen grab below shows this “service-datastore” together with the VSAN datastore, named MARVIN-Virtual-SAN-Datastore.

[Screenshot: the local service-datastore alongside the MARVIN-Virtual-SAN-Datastore]

One thing to mention about the service-datastore is that each node has a “reset” folder containing a backup file for that node (ConfigBundle.tgz), and that node01’s service-datastore1 also holds copies of the vCSA and Log Insight OVFs in an “images” folder, like so:

[Screenshot: the contents of node01’s service-datastore]
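
Just to illustrate what you’d expect to find, here’s a tiny sketch that looks for those files from the ESXi shell, assuming the standard /vmfs/volumes layout. The folder and file names come straight from the description above; everything else (the wildcard patterns in particular) is my own placeholder, not an official check:

# Illustrative check for the reset backup and images folder described above.
# Assumes the standard /vmfs/volumes layout on ESXi; patterns are placeholders.
import glob
import os

for ds in sorted(glob.glob("/vmfs/volumes/*service-datastore*")):
    print(ds)
    bundle = os.path.join(ds, "reset", "ConfigBundle.tgz")
    print("  reset backup present:", os.path.isfile(bundle))
    # Only node01's service-datastore is expected to carry the images folder
    for image in glob.glob(os.path.join(ds, "images", "*")):
        print("  image file:", os.path.basename(image))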

Conclusions:
That’s it for now for this first installment of “Under the Covers”. Check back with me in a couple of days’ time for the next post. The main takeaway here is the importance of node01, and how the vCSA and Log Insight end up being stored in the very cluster they help create and manage. That doesn’t make them a single point of failure, as they are protected by the same technologies as any normal VM. An end-user working purely in the EVO:RAIL UI will not see the vCSA and Log Insight virtual appliances. We’ve hidden them away to provide a cleaner UI experience, but if you open either the vSphere Web Client or the Desktop Client you will see them there like any other VM.

Note: To keep things less complicated, the vCSA and Log Insight virtual appliances are hidden to give a cleaner UI for customers who prefer the simplicity of creating VMs from the EVO:RAIL Management UI.