In my previous blog post I walked through what happens when EVO:RAIL is configured for the very first time. Now I want to look at the next important workflow. As you may know from reading this series of posts, EVO:RAIL has auto-discovery and auto-scale-out functionality. A daemon called “VMware Loudmouth”, which runs on each of the four nodes that make up an EVO:RAIL appliance as well as on the vCenter Server Appliance, is used to “advertise” additional EVO:RAIL appliances on the network. The idea is a simple one: to make adding further EVO:RAIL appliances, in order to increase capacity and resources, as easy as typing a password for ESXi and a password for vCenter.

When a new EVO:RAIL appliance is brought up on the same network as an existing EVO:RAIL deployment, the management UI should detect its presence using “VMware Loudmouth”. Once the administrator clicks to add the second appliance, this workflow appears.
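
Conceptually, the UI's job at this point is to group the per-node adverts it hears into appliances and surface only those not already part of the cluster. The sketch below models that filtering step; the record fields, appliance IDs, and function name are all illustrative, since Loudmouth's actual wire format isn't documented here.

```python
# Illustrative model of how discovered Loudmouth adverts might be
# grouped and filtered before the "add appliance" prompt is shown.
# Field names and appliance IDs are assumptions for this sketch.

def unconfigured_appliances(adverts, configured_ids):
    """Group adverts by appliance ID; return appliances (and their
    nodes) that are not yet part of the existing cluster."""
    appliances = {}
    for advert in adverts:
        appliances.setdefault(advert["appliance_id"], []).append(advert["node"])
    return {aid: sorted(nodes)
            for aid, nodes in appliances.items()
            if aid not in configured_ids}

adverts = [
    {"appliance_id": "MARVIN-02", "node": "node01"},
    {"appliance_id": "MARVIN-02", "node": "node02"},
    {"appliance_id": "MARVIN-02", "node": "node03"},
    {"appliance_id": "MARVIN-02", "node": "node04"},
    {"appliance_id": "MARVIN-01", "node": "node01"},  # already in the cluster
]

print(unconfigured_appliances(adverts, configured_ids={"MARVIN-01"}))
# {'MARVIN-02': ['node01', 'node02', 'node03', 'node04']}
```

Only when all four nodes of an unconfigured appliance are advertising would the UI sensibly offer the scale-out workflow.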

[Screenshots: newappliance, newappliance02 – the “Add EVO:RAIL Appliance” workflow]

So long as there are sufficient IP addresses left in the original IP pools defined when the first appliance was deployed, it is merely a matter of providing passwords. So what happens after doing that and clicking the “Add EVO:RAIL Appliance” button?
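
That pool check is worth making concrete: each network pool (management, vMotion, Virtual SAN) needs at least one free address per new node, i.e. four per appliance. The helper below is a minimal sketch of such a check; the pool ranges and the `needed=4` default are assumptions for illustration, not EVO:RAIL's actual validation code.

```python
# Sketch of a pool-sufficiency check: does an inclusive IP range,
# minus addresses already allocated, still hold four free addresses
# (one per node of the incoming appliance)?
import ipaddress

def pool_has_capacity(start, end, used, needed=4):
    """Return True if the inclusive range start..end has at least
    `needed` addresses that are not already allocated."""
    first = int(ipaddress.ip_address(start))
    last = int(ipaddress.ip_address(end))
    total = last - first + 1
    return total - len(used) >= needed

# Management pool of ten addresses, four consumed by appliance one:
print(pool_has_capacity("192.168.10.1", "192.168.10.10",
                        used={"192.168.10.1", "192.168.10.2",
                              "192.168.10.3", "192.168.10.4"}))  # True
```

If any pool comes up short, the administrator would have to extend it before the scale-out can proceed.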

In contrast to the 30 core steps that the initial EVO:RAIL configuration completes, adding an additional EVO:RAIL appliance to an existing cluster involves significantly fewer steps: just 17 in total. They are as follows:

  1. Check settings
  2. Delete System VMs from hosts
  3. Place ESXi hosts into maintenance mode
  4. Set up management network on hosts
  5. Configure NTP settings
  6. Configure Syslog settings
  7. Delete default port groups on ESXi hosts
  8. Set up NIC bonding on ESXi hosts
  9. Disable Virtual SAN on ESXi hosts
  10. Register ESXi hosts to vCenter
  11. Set up FQDN on ESXi hosts
  12. Set up Virtual SAN, vSphere vMotion and VM networks on ESXi hosts
  13. Set up DNS
  14. Restart Loudmouth on ESXi hosts
  15. Set up clustering for ESXi hosts
  16. Configure root password on ESXi hosts
  17. Exit maintenance mode on the ESXi hosts
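
The steps above run strictly in order, and a failure at any one of them halts the workflow so the administrator can correct the problem and retry. The sketch below models that control flow only; the abbreviated step names and the `execute` callback are illustrative stand-ins, not the EVO:RAIL engine's real interfaces.

```python
# Minimal model of a sequential scale-out workflow that stops at
# the first failing step. Step names are abbreviated placeholders.

STEPS = [
    "check_settings",
    "delete_system_vms",
    "enter_maintenance_mode",
    # ... remaining steps elided for brevity ...
    "exit_maintenance_mode",
]

def run_scale_out(steps, execute):
    """Run each step in order; stop and report on the first failure."""
    for i, step in enumerate(steps, start=1):
        if not execute(step):
            return f"failed at step {i}: {step}"
    return "scale-out complete"

print(run_scale_out(STEPS, execute=lambda step: True))
# scale-out complete
```

Running steps one at a time like this also makes progress easy to report back to the UI, which is exactly what the workflow screen shows during the roughly seven-minute run.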

One reason adding subsequent appliances takes less than 7 minutes, compared with roughly 15 minutes for the initial configuration, is that components such as vCenter and SSO do not need to be set up, because they are already present in the environment. So pretty much all the EVO:RAIL engine has to do is configure the ESXi hosts so that they are in a valid state to be added to the existing cluster.

Step 2 in the list above deserves special attention. Every EVO:RAIL that leaves a Qualified EVO:RAIL Partner (QEP) factory floor is built in the same way. It can be used to carry out a net-new deployment at a new location or network, or it can be used to auto-scale-out an existing environment. For a net-new deployment the customer connects to the EVO:RAIL Configuration UI (https://192.168.10.200:7443 by default). If, on the other hand, the customer wants to add capacity, they complete the “Add New EVO:RAIL Appliance” workflow. In this second scenario the built-in instances of the vCenter Server Appliance and vRealize Log Insight are no longer needed on node01, and so they are removed.
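
So every factory-fresh appliance faces the same branch at first boot: configure net-new, or join an existing cluster and shed its now-redundant system VMs. A tiny sketch of that decision, with the VM names standing in for the built-in vCenter Server Appliance and vRealize Log Insight instances on node01:

```python
# Illustrative branch every factory-built appliance faces; the VM
# name strings are placeholders for the built-in system VMs.

SYSTEM_VMS = ["vCenter Server Appliance", "vRealize Log Insight"]

def plan_first_boot(existing_cluster_found):
    """Return the workflow to run and the node01 system VMs to delete."""
    if existing_cluster_found:
        # Scale-out: the cluster already provides these services,
        # so the local copies are removed (Step 2 of the workflow).
        return ("add-appliance", SYSTEM_VMS)
    # Net-new: configure via the Configuration UI and keep them.
    return ("initial-configuration", [])

print(plan_first_boot(True))
# ('add-appliance', ['vCenter Server Appliance', 'vRealize Log Insight'])
```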

If you want to experience this process of adding a second EVO:RAIL appliance first hand, don’t forget that our hands-on lab now showcases it. Check out the HOL at:

HOL-SDC-1428 – Introduction to EVO:RAIL