Load Balancing View Security Servers
- 1 Originating Author
- 2 Video Content [TBA]
- 3 Introduction to Load Balancing View Security Servers
- 4 Configuring Microsoft NLB Clustering for Security Servers
- 5 Configuring F5 BIG-IP VE for Load balancing of Security Servers
- 5.1 Introduction
- 5.2 Configure your Virtual Switches
- 5.3 Download and Import the F5 BIG-IP Virtual Edition OVA File
- 5.4 First Power On & Initial Configuration
- 5.5 The Setup Utility
- 5.6 Increasing the Idle Timeout Before Automatic Logout
- 5.7 Configuring DNS and NTP Services
- 5.8 Importing the VMware View iApp
- 5.9 Copy the Security Servers Certificate Key File
- 5.10 Configuring the VMware View iApp
- 5.11 Configuring View to use BIG-IP Virtual Edition
- 5.12 Testing the Configuration
- 5.13 Configuring BIG-IP VE for High-Availability
- 5.14 Update your DRS Anti-Affinity Rules
- 5.15 Factory Reset of the F5 BIG-IP Virtual Edition
- 6 Enterprise Infrastructures and Availability
- 7 Conclusion
Video Content [TBA]
Introduction to Load Balancing View Security Servers
Version: Horizon View 5.1
There are many options for load balancing your VMware View environment. Ideally, whatever solution you use should offer load balancing (as you might expect) but also be able to detect when nodes in the cluster become unavailable. Load balancers vary in quality and some do not handle this second requirement very efficiently. Of course, some of your availability issues could also be addressed with a combination of VMware High Availability and Fault Tolerance if your services were running in vSphere VMs. In recent years many vendors have shaken off the “load balancing” tag in favour of terms like “Application Delivery Networking” or “Application Delivery Controllers”. The change in terminology is an attempt to recognize that we are now increasingly delivering applications across the LAN, WAN and the Internet and load balancing has evolved to be aware of those application specific features that make that process unique to the service the business is providing.
There are a number of virtual appliances on VMware.com's "Virtual Network Appliance" store, and if you are working in a home lab environment you might want to consider looking at these. For example, Orbit IT-Solutions offer a "VMware View Load Balancer" based on Novell SUSE, although from what we can see it has only been tested on View 4.
Alternatively, a cost-effective solution is to use Microsoft Network Load Balancing to create an NLB cluster of two or more Security Servers. Microsoft NLB is relatively easy to set up, and while it handles load balancing successfully, we've found it somewhat lacking at detecting when one of the nodes in the NLB cluster has gone down. It doesn't seem to have much awareness of the IP dependencies between the various components that it balances, and there will also be scalability issues with NLB in a large enterprise deployment. The more we have investigated these options, the more we think how much simpler life would be if we only had one Security Server and one Connection Server, protected by VMware Fault Tolerance. However, the one thing that rules this configuration out is patch management and upgrades: with only one Connection Server and one Security Server, it becomes impossible to take either role down to carry out maintenance. Additionally, VMware FT does not protect a VM from service failure within the guest operating system.
In an effort to be vendor neutral and to cover all bases we have chosen to document a range of options for load balancing. This includes load balancing from Microsoft in the shape of their Network Load Balancing (NLB) clustering technology. For those interested in free but high-quality load balancing appliances, you may want to look at the section on Vyatta. Finally, as an example of commercial load balancing appliances, we take a look at the implementation of F5 Networks' BIG-IP appliances. We are keen to stress that we're not endorsing any particular approach, because that is really down to the size and scope of your implementation, balanced against the financial limitations you must work within.
Configuring Microsoft NLB Clustering for Security Servers
What follows is very much a "Getting Started" guide to Microsoft NLB; it will not cover every single option or setting. Our intention here is merely to show you an example of a load balancing system, and how it affects the configuration of the Security Server.
Before you head off to set up the Microsoft NLB cluster, you can do a couple of checks at each Security Server. Firstly, can they ping each other by FQDN? Secondly, can you run an nslookup test on the public FQDN and receive a positive response?
If you cannot get name resolution working, you could use a hosts file on each Security Server. In our lab environments we often cheat and allow our Security Servers access to our DNS servers, which some people would regard as insecure. That said, a text file held on a local server in a DMZ could be regarded as less secure than accessing the DNS host. "You pays your money and you takes your choice". Technically, name resolution between the Security Servers is NOT required, but name resolution of the external/internet FQDN is.
In the screen grab above, we have edited the image so as not to reveal the resulting IP address. Successful responses to both of these checks will make the configuration of Microsoft NLB easier.
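The two pre-flight checks above are easy to script. Here is a minimal Python sketch; the host names (ss01.corp.com, view.corp.com) are the examples from our lab, so substitute your own:

```python
import socket

def resolve(fqdn):
    """Return the IPv4 address this host resolves for fqdn, or None on failure."""
    try:
        return socket.gethostbyname(fqdn)
    except socket.gaierror:
        return None

# Run this on each Security Server: the peer Security Server and the
# public FQDN (view.corp.com in our lab) should both resolve.
for name in ("ss01.corp.com", "view.corp.com"):
    ip = resolve(name)
    print(name, "->", ip if ip else "NOT RESOLVABLE")
```

A `None` result for the peer server is acceptable (a hosts file entry can fix it), but the public FQDN must resolve before you continue.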
Microsoft NLB comes in two flavours: a Unicast and a Multicast method. We would strongly urge you to use Multicast, for two main reasons. Firstly, a Unicast Microsoft NLB cluster is incompatible with vMotion, whereas Multicast is compatible. If you are forced to use a Unicast address you will have to modify the "Notify Switches" setting on the properties of the port group or vSwitch.
Secondly, a Unicast configuration is designed for when Windows has more than one network card. Since networking in ESX is configured so differently from the physical world, it's not necessary to add a second NIC to the Security Server for NIC fault tolerance when it's running as a VM. However, you will still need more than one NIC in the Security Server so that it can communicate both with the Connection Servers behind the internal firewall and with devices beyond the external firewall.
So for this configuration to work, you will need to install the “Reliable Multicast Protocol” for the Local Area Connection of each Security Server in the Microsoft NLB Cluster.
Creating the NLB Cluster
1. Log in to the Security Servers and open the Server Manager MMC
2. Select the +Features node and click the +Add Features link
3. In the dialog box select Network Load Balancing
4. Click Next and Install
5. In the Administrative Tools menu – select the Network Load Balancing Manager
6. On the first Security Server, choose Cluster, New in the menu
7. Type in the name or IP address that represents your DMZ network interface (in our case ss01 - 80.x.y.z) and click Connect. This should enumerate the local area connections on the Security Server. We renamed our interfaces to make them more meaningful in the dialog box:
Remember the only reason our Security Servers have two NICs is because this is a recommended configuration for Microsoft NLB. If you were configuring the Security Server with a third-party load-balancer then this would not be required.
8. Select the interface which is connected to the external firewall or DMZ, and click Next
9. In the Cluster Parameters dialog box, accept the defaults and click Next
Notice how this first host in the NLB cluster has a unique host identifier of 1; the second and third nodes added to the NLB cluster will be given unique host identifiers of 2, 3 and so on. After a short wait, the cluster will be created with the first node joined to it.
10. In the Cluster IP addresses dialog box type in the external IP address used to access the cluster. This IP address must resolve from an external DNS name such as view.corp.com.
11. Type in the FQDN that will be the public external URL for the Security Server cluster, and select Multicast as the type:
Be very careful with IGMP Multicast as it can generate a significant amount of broadcast traffic. For this reason, it can be a good idea to place the Security Servers in a VLAN of their own, so their traffic does not adversely affect other systems. Additionally, you might find your network does not support Multicast. We found this to be the case at our co-location facility, where we were forced to opt for Unicast simply to make the configuration work.
12. In the Port Rules delete the default rule and create a rule which is limited to listening for inbound 443 connections on TCP
Adding additional Security Servers
During this initial setup phase you can only add one server to the cluster. This server builds the cluster that then allows other servers to be added to it once it has completed its tasks. To join additional servers to the cluster, right-click the cluster name, in our case view.corp.com, and select Add Host to Cluster. In this wizard you can just click next.
After adding the second NLB host, there is a converging process before both nodes become active:
Test the Load-Balanced Configuration
Before you fire up a VMware View Client and try to log in to the VMware View NLB cluster, first confirm that the client can resolve the external/internet FQDN to the correct address – in our case view.corp.com to our 80.x.x.x/23 address. Our hosting provider allows ICMP packets (which surprised us), so we can even ping our cluster from the Internet.
In our case we had to manually set the appropriate gateway settings on the properties of the Local Area Connection of the Security Server for this to work. Additionally, we had to adjust the metrics for our gateways (because we have more than one) to set the external/internet-based gateway address to be the preferred one.
Within the View Admin web-pages we would need to update the configuration of the Security Servers and Connection Servers to reflect the fact that we now wish the View Clients to connect via the “Virtual IP” address of the Microsoft NLB Cluster.
This has to be manually configured on each of the Connection Servers that have been paired with a Security Server. To enable it, navigate to View Configuration, Servers, and in the Connection Servers pane select the Connection Server and click the Edit button. If you remember, this is how the dialog box looks by default:
Notice how the external URL is the internal name, not the external one – and that the PCoIP external URL is still an internal (192.x.y.z) not Internet address (in our case 80.x.y.z). Finally you can see the option to “Use PCoIP Secure Gateway for PCoIP connections to the desktop” has not been enabled. We need to change this configuration to make this all work together.
You can see in these dialog boxes that PCoIP listens on 4172, on both TCP and UDP. As such, both TCP/UDP 4172 and 443 will need to be open for connections to work across a firewall. Remember the Security Server merely works as a stateful NAT translator that brokers the client's access to the desktop. PCoIP packets go directly from the client to the virtual desktop, and for this reason 4172 needs to be open on both the internal and external firewalls. Additionally, you may need to review the IP address used by the Security Server as this may have changed since the initial installation.
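The TCP side of these firewall requirements can be spot-checked from a client machine. A small Python sketch, again using our lab's view.corp.com as a placeholder host:

```python
import socket

def tcp_port_open(host, port, timeout=3.0):
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 443 (HTTPS) and 4172 (PCoIP) both need to be reachable through the
# external firewall. Note UDP 4172 cannot be probed this way, as UDP
# has no connection handshake to test.
for port in (443, 4172):
    state = "open" if tcp_port_open("view.corp.com", port) else "blocked"
    print("TCP", port, state)
```

A "blocked" result for 4172 with a working 443 is a classic symptom: login succeeds but the PCoIP session never connects.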
Configuring F5 BIG-IP VE for Load balancing of Security Servers
Acknowledgement: Before we begin we would like to thank F5 for engaging so quickly and closely with the authors of this book. We would especially like to thank Paul Pindell of F5 who is a Solution Architect within F5’s “VMware Alliance”. We found him particularly helpful, and without his guidance and advice the section would have been impossible. Top marks go to F5 for providing both software and support resources to make this section possible. We would also like to thank Tim Myers for reviewing this chapter.
It is possible to get hold of a trial version of F5's BIG-IP LTM (Local Traffic Manager) Virtual Edition that functions for 45 days. Currently the trial version is on version 10, whereas this section was written using version 11 of F5's BIG-IP LTM. If you approach a representative of F5 directly, they will be happy to arrange an evaluation of version 11 of BIG-IP, which supports the new iApp template feature that you will see here. We would recommend arranging an evaluation of the version 11 product wherever possible. Other alternatives include Citrix NetScaler VPX Express Edition – it's free to use for up to 5Mbps, and it also provides load balancing to a Security Server.
F5 Networks, Inc. provides application delivery networking technology that optimizes the delivery of network-based applications, and the security, performance, and availability of servers, data storage devices, and other network resources. Our focus is on their "BIG-IP" product family (often referred to as an Application Delivery Controller), which includes both virtual and physical appliances. For simplicity, we opted to use the virtual edition of their appliance, as this seemed the quickest way to get a third-party load balancing solution into our lab environment. Our intention here is not to be seen as "endorsing" a particular vendor, but rather to use a commercial appliance as an illustration of how additional third-party services can be added to the core View services to enhance the overall virtual desktop infrastructure. Essentially, it's a nod to the fact that very few View deployments rely purely on VMware technologies, and that a blended solution of complementary systems is often needed in a large enterprise environment. It should give you an insight into how introducing additional components to the View environment changes the configuration of the Security Servers and Connection Servers.
There are three main deployment scenarios supported with F5's BIG-IP appliance:
1. Connection Servers Only
2. Security Servers and Connection Servers
3. BIG-IP APM (Access Policy Manager) with VMware View
Our emphasis will be on Scenario 2, mainly because this follows the flow of the book and allows a direct comparison with the preceding section where we documented load balancing with Microsoft NLB. However, if you were deploying F5 in a real production environment we would heartily recommend examining all three for their validity in your environment. For example, load balancing of the Connection Servers in a LAN environment (Scenario 1) is still a valuable exercise. Additionally, some government organizations will not tolerate a configuration where Security Servers are located in the DMZ, because the Security Server does not yet meet the FIPS standard. Organizations that must configure their VDI environment to meet the FIPS standard might opt to deploy technologies like BIG-IP, which does. In this case, the Connection and Security Servers (if you choose to use them) stay behind the internal firewall, and are never located in the DMZ. If you are interested in learning more about the possible deployment scenarios for BIG-IP with VMware View then we would recommend consulting F5's guide, handily entitled "Deploying the BIG-IP System v11 with VMware View 5.0", currently available on the F5 website.
What follows is a step-by-step guide to importing, licensing and configuring the BIG-IP Virtual Edition in a high-availability mode. This is where you have two BIG-IP appliances paired together in an Active/Standby configuration such that if one of the appliances fails, the other appliance immediately takes over.
The configuration outlined in this book requires a number of IP addresses to function. Below we've listed the IP addresses used in our documentation to give you a feel for what you will need before you begin.
|IP Address||Value|
|Internal Floating IP||192.168.3.143|
|External Floating IP||18.104.22.168|
|Virtual Server IP||22.214.171.124|
The floating IP addresses are shared between the active and standby appliances. In total, nine IP addresses are needed – two for management (192.168.5.x), three from the internal network/DMZ (192.168.3.x) and four Internet IP addresses (80.81.82.x). In some environments, NAT rules could be used here in lieu of assigning external addresses directly on the BIG-IP devices.
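A plan like this is worth sanity-checking before you start clicking through wizards. The sketch below validates that each planned address is unique and sits in its declared network. The internal addresses (192.168.3.141–143) are the ones used later in this chapter; the management and external values are hypothetical placeholders in the ranges quoted above, and the /24 prefix lengths are assumptions:

```python
import ipaddress

# Prefix lengths are assumed /24 here; substitute your real masks.
NETWORKS = {
    "management": ipaddress.ip_network("192.168.5.0/24"),
    "internal":   ipaddress.ip_network("192.168.3.0/24"),
    "external":   ipaddress.ip_network("80.81.82.0/24"),
}

# Management and external addresses below are illustrative placeholders.
PLAN = {
    "management self-IP (active)":  ("management", "192.168.5.141"),
    "management self-IP (standby)": ("management", "192.168.5.142"),
    "internal self-IP (active)":    ("internal",   "192.168.3.141"),
    "internal self-IP (standby)":   ("internal",   "192.168.3.142"),
    "internal floating IP":         ("internal",   "192.168.3.143"),
    "external self-IP (active)":    ("external",   "80.81.82.141"),
    "external self-IP (standby)":   ("external",   "80.81.82.142"),
    "external floating IP":         ("external",   "80.81.82.143"),
    "virtual server IP":            ("external",   "80.81.82.144"),
}

def validate(plan, networks):
    """Check every planned address is unique and inside its declared network."""
    seen = set()
    for label, (net, addr) in plan.items():
        ip = ipaddress.ip_address(addr)
        if ip in seen:
            raise ValueError(f"duplicate address: {addr}")
        if ip not in networks[net]:
            raise ValueError(f"{label}: {addr} is not in the {net} range")
        seen.add(ip)
    return len(seen)

print(validate(PLAN, NETWORKS), "addresses check out")
```

Nine distinct addresses, matching the count above; a typo that lands an address in the wrong range is caught immediately rather than during appliance setup.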
Configure your Virtual Switches
The F5 BIG-IP Virtual Edition comes pre-built with a number of virtual NIC interfaces; each needs to be mapped to an appropriate port group for communication to occur in a predictable fashion. By default, the virtual appliance's virtual NICs (PCI devices 0:0, 0:1, 0:2 and 0:3) are each associated with a particular network traffic type. These types include:
• Management Traffic
• Internal Network to the View Security Servers
• External Network of Virtual Server IP Traffic
• Active/Standby High-Availability Heartbeat Traffic
The critical issue here is that the management traffic interface should never be placed on what's referred to as the "External Network", which carries the Virtual Server IP traffic. Doing so will cause problems and is also insecure: the Virtual Server external network is, in our case, Internet facing and is used to service inbound requests from end-users, and the last thing you want is your management network to be accessible from the outside world. There is also a network needed to reach the View Security Servers themselves, and the fourth and final network is the one used by the BIG-IP appliances when they are in high-availability mode. This carries the heartbeat signal between the appliances that determines which BIG-IP appliance has control of the traffic management duties (i.e. which one is active and which one is standby), and avoids situations such as split-brain. In many respects, this high-availability heartbeat network is not unlike the network used in a VMware HA cluster to prevent a similar situation, where failover occurs due to a network communication failure rather than a genuine outage.
Note: The screen grab shows the four NICs that make up the virtual appliance – each virtual NIC is mapped to different port groups on the VMware Virtual Switch.
Clearly you will want to record your configuration so you know which interface is used for which type of traffic. By default "Network adapter 1" is known as the "Management Port" in the virtual appliance web-pages (aka eth0 from the command line) and will automatically be used for the management network. Once you are in the web-based management front-end of BIG-IP you will be asked to select which NIC device is used for each type of traffic. The BIG-IP appliance will route inbound traffic from the external network (1.2) to the Security Servers residing on the internal network (1.1), whereas management traffic (Management Port) and high-availability heartbeat traffic (1.3) remain discrete and separate.
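One way to record the mapping is as a small lookup table you can reprint for every appliance you deploy. This sketch simply captures the default adapter-to-interface assignments described above (the interface names are those shown in the BIG-IP web pages):

```python
# Default mapping of the appliance's virtual NICs to BIG-IP interface
# names and traffic types, as described in the text above.
NIC_MAP = {
    "Network adapter 1": ("Management Port / eth0", "Management traffic"),
    "Network adapter 2": ("1.1", "Internal network to the View Security Servers"),
    "Network adapter 3": ("1.2", "External network / Virtual Server IP traffic"),
    "Network adapter 4": ("1.3", "Active/Standby high-availability heartbeat"),
}

# Print a quick reference sheet.
for vnic, (interface, role) in NIC_MAP.items():
    print(f"{vnic:<18} -> {interface:<22} {role}")
```

Keeping the mapping as data also lets you diff it between appliances when troubleshooting a mis-cabled port group.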
As you can see in the screen grab above, the virtual appliance needs around 4GB of memory to function, and although the .OVA file is only 535.3MB, once deployed the appliance will need either 1.5GB of disk space if provisioned with thin virtual disks, or 100GB if provisioned with thick virtual disks.
Download and Import the F5 BIG-IP Virtual Edition OVA File
To download the virtual appliance you will need a user account and password for the F5 download page. Additionally, you will need to license the product for it to function correctly. You can follow these steps to download the files that make up the appliance:
1. Browse to http://downloads.f5.com
2. Log in, or create an account and then log in
3. Click on the "Find a Download" link
4. Click on the BIG-IP v11.x / Virtual Edition
5. Make sure the drop-down has 11.1.0 selected
6. Click on link for "Virtual-Edition"
7. Read and Accept the EULA
8. Download these three files:
Once downloaded, we recommend checking the MD5 sum of the .OVA file against the value found in the MD5 file, which is merely a text file containing the MD5 string. On Windows you can use the popular WinMD5sum application; on Apple Mac and Linux you can use the built-in md5/md5sum commands. The next step is to "import" the OVA file into your vSphere environment.
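If you prefer something scriptable, the same check is a few lines of Python. This is a generic sketch; the file names in the example are illustrative, not F5's actual download names:

```python
import hashlib

def md5_of(path, chunk_size=1 << 20):
    """Compute a file's MD5 hex digest, reading it in 1MB chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_md5(target_path, md5_path):
    """Compare target_path's digest against the first token of the .md5 text file."""
    with open(md5_path) as f:
        expected = f.read().split()[0].lower()
    return md5_of(target_path) == expected

# Example usage (file names are illustrative):
# verify_md5("BIGIP-VE.ova", "BIGIP-VE.ova.md5")
```

Reading in chunks matters here: the .OVA is hundreds of megabytes, and you don't want to hold it in memory just to hash it.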
9. In the vSphere Client, under the File menu select Deploy OVF Template
10. Browse to the location where you downloaded the .OVA file that contains the BIG-IP Virtual Edition
11. After acknowledging the OVF Template Details and the EULA, set a friendly name for the appliance and location in the vCenter folder inventory structure
Next set the Deployment Configuration options – in our case we selected 2-CPUs without an additional disk:
Note: Increasing the CPU count from 2 vCPUs to 4 vCPUs can improve the performance of the virtual appliance. The extra CPUs add compute capacity to the processes F5 performs on the "Linux host side", or those that run as plugins. F5 differentiates between functions performed in what is called the Traffic Management Microkernel (TMM) and those done outside of TMM. Some of the non-TMM processes that would benefit from additional CPU are those contained in other licensable modules. ASM (Application Security Manager) is F5's L4-L7 firewall module, commonly referred to as a WAF (Web Application Firewall); adding the additional CPUs would benefit ASM. WA (Web Accelerator) is another module which would benefit from additional CPU. In the Virtual Edition there is one vCPU dedicated to TMM activities and 1 or 3 vCPUs dedicated to non-TMM activities. However, if you are only using the appliance to load balance VMware View, and you don't intend to use it for other purposes, 2 vCPUs should be more than sufficient.
The "extra" disk used as part of F5's WAN Optimization Module allows the creation of a "branch cache" style infrastructure, where frequently requested files can be stored locally rather than brought repeatedly across the WAN. In the context of VMware View it doesn't have a role to play, but it could be useful in solving other issues in your environment.
12. Next select which cluster and/or resource pool where the virtual appliance should reside:
13. Next select a datastore location for the appliance – if you select a storage location that has insufficient space for a thick or eagerzeroedthick format of virtual disks you will receive a warning. Of course, it is a good idea to put the appliance on shared storage in a VMware HA/DRS cluster. If you do wind up setting up two BIG-IP appliances, we would suggest configuring anti-affinity rules in DRS to make sure the appliances don't end up residing on the same ESX host in the cluster
14. Finally as mentioned earlier select the port groups that “map” to the various networks needed by the BIG-IP appliance:
Once completed, the deploy wizard will begin importing the OVA file. You can repeat this process to create a second virtual appliance that we can configure later for a high-availability solution. Of course, you can instead clone this newly imported virtual appliance without repeating the OVF import wizard, but remember this must be done before the first power on, as each instance has its own unique settings.
First Power On & Initial Configuration
When you first power on the virtual appliance it will default to using a hard-coded IP for the management network of 192.168.1.245. You have a number of choices here: configure your client for an IP in this range, or open a vSphere console window on the appliance and use the "config" tool to change the IP to an address that fits into your management network. The appliance has a "root" account and the default password is the word "default". The web-based GUI setup of the virtual appliance will prompt you to reset the password on this account. Once you have logged on to the appliance, the "config" utility can be run to change the IP configuration; it uses standard navigation keys that allow you to modify the default address.
The IP address that you set here must not be in the same range as your internal network (in our case 192.168.3.x) or in the range of the external IPs for the Internet (in our case the 80.81.82.x range). If it is, the appliance won't work, full stop.
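This constraint is easy to check programmatically before you commit an address. A sketch using the standard library's ipaddress module; the /24 masks are assumed from our lab's ranges:

```python
import ipaddress

def management_ip_ok(mgmt_ip, reserved_ranges):
    """True only if the management address is outside every range used
    for internal (View) or external (Internet) traffic."""
    ip = ipaddress.ip_address(mgmt_ip)
    return not any(ip in ipaddress.ip_network(net) for net in reserved_ranges)

# Our lab's internal and external ranges (masks assumed to be /24):
RESERVED = ["192.168.3.0/24", "80.81.82.0/24"]

print(management_ip_ok("192.168.5.200", RESERVED))  # a safe choice
print(management_ip_ok("192.168.3.50", RESERVED))   # would break the appliance
```

The same helper is worth re-running whenever the internal or external subnets change, since an overlap introduced later is just as fatal.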
The Setup Utility
Once the appliance is configured correctly you should be able to open a web-browser to https://w.x.y.z, where w.x.y.z is the IP address configured for the management network. You will be challenged to login to the web-interface of the virtual appliance, in this case you use the user account called “admin” with a password of “admin”. Again, as with the root account used for the initial setup, the virtual appliance will prompt you to reset the password on this account. After login you will be greeted by the setup utility that walks you through the primary configuration. One of the first tasks is to license the appliance. You can follow your progress through the setup wizard as the yellow block indicates where you are in the process:
Setup Step 1: Licensing BIG-IP Virtual Edition
F5 has a licensing process that requires an "activation" step. Licenses are issued to you as plain-text strings, which are added to the virtual appliance and in turn activated via the F5 website. Manual and automatic activation are possible; in our case we will be using manual activation because our appliance is not connected to the Internet via our management network. Our licensing process comprises two license keys. The first license string is referred to as the "Base Registration Key" and is needed to enable core functionality; the additional license strings are used for "Add-ons". In our case we will use this to enable the "Access Policy Manager" (APM) component. In the simple load balancing method we will use we do not need the APM module, but the configuration template used to make the appliance work with VMware View does include this option.
You can cut-and-paste the "Base Registration Key" into the box provided, and then cut-and-paste any additional "Add-On" registration keys you require, clicking the "Add" button to include them in the "Add-On Registration Key List". Once this has been completed you can click Next. This then creates a unique signature value that F5 refers to as the "Dossier".
Copy all the text in the field called "Step 1: Dossier" to the clipboard, then use the link called "Click here to access the F5 Licensing Server" to activate the appliance. The F5 licensing server allows you to paste the dossier into an edit field; once you have accepted the EULA, it will issue you with a valid license.
As with the dossier, the contents of the license file can be copied to the clipboard and pasted into the license field labelled "Step 3: License".
After clicking next, the virtual appliance will read the license data and license the product for the core features and any add-ons.
Setup Step 2: Resource Provisioning
Resource provisioning controls what resources are allocated to the virtual appliance in the form of CPU, disk and memory, and also controls which modules or components are enabled. The interface shows which modules we are, or are not, licensed for. In our case we need to change the "Access Policy Manager" (APM) and "Application Visibility and Reporting" (AVR) modules to "Nominal".
F5 offers a lot of flexibility here, in that you could set up the appliance to be dedicated to a single task ("Dedicated" is one of the options) or restrict the functionality to limited capabilities ("Minimum" is another option in the pull-down list). "Nominal" here means normal functionality. Changes in this location can cause new daemons to be started and, depending on your changes, could trigger a reboot of the appliance. Once completed, the next stage is to configure the platform options.
Setup Step 3: Platform Options
This step must be completed before continuing to configure the other network interfaces, and you will be forced to log in again with the new credentials. You must complete the password fields – even if you enter the same passwords as the present ones, you cannot leave the fields blank. Although this sounds a little intimidating, configuring your platform options merely allows you to reconfigure the appliance with a valid hostname or FQDN, configure the time zone and SSH restrictions, and reset the passwords for the root and admin accounts used so far.
Setup Step 4: Network
The network setup involves a number of sub-steps that cover such features as redundancy, VLANs, Failover, Mirroring and High-Availability. Not all of these steps will be relevant at this point, as this is the first appliance we are setting up. When we come to add a second appliance for resiliency we will use more of these options.
This part of the configuration allows you to control how multiple appliances will sync their configuration and detect the availability of the other appliances. In this case we just accept the default settings; as you might suspect, using a serial cable with a virtual appliance isn't a popular configuration.
This section allows you to assign IP addresses to the various networks that the appliance is connected to. The first set of IP addresses concern the “internal network’ where our View Servers are located. This IP address needs to be associated with the NIC 1.1 that corresponds to “Network Adapter 2” in the BIG-IP Virtual Appliance.
Note: Be careful what IP address you use here. Although the subnet mask can be modified the “self-IP” is not modifiable in the management webpages. To modify the self-IP you would have to delete its reference and recreate it.
The appliance communicates with the internal View Security Servers using the first address of 192.168.3.141, while the shared floating IP address of 192.168.3.143 is used between multiple appliances in a high-availability active/standby configuration. Once you add a second appliance, the "Active" appliance is accessible from the shared floating IP address – if the "Active" appliance becomes unavailable, the standby appliance takes over the role of "Active" and becomes the "owner" of the floating IP address, ensuring no disconnects take place. When we add the second appliance it will need its own unique "self-IP", and we will type the same floating IP address (in our case 192.168.3.143) into its configuration. You will notice we didn't use 192.168.3.142 for the floating IP address, as this IP address will be the "self-IP" of the standby appliance.
There are a number of ways of dealing with the VLAN configuration here. It is possible to use "Guest/VM VLAN Tagging" and have the appliance VLAN (802.1q) tag packets; alternatively you can leave the VLAN Tag ID set to "auto" and allow the ESX host to tag the packets – assuming your ESX physical VMNICs are set as trunk ports and you are using the ESX host's VLAN tagging capabilities. In our simple configuration using ESX VLAN tagging will probably suffice – after all, we only have four networks: management, internal, external and HA.
However, if you have many networks this will not scale well. For every VLAN you would need to create a new port group on a VMware vSwitch, and then add an additional virtual NIC to the BIG-IP appliance. Wonderful though virtual machines are, it's not possible to keep on adding virtual NICs to them indefinitely. That's where "Guest/VM VLAN Tagging" becomes more useful, with the VLAN packets being tagged before they leave the appliance and hit the physical switch. With "Guest/VM VLAN Tagging" the Standard vSwitch port group is given a VLAN ID of 4095 to set it to use "trunk mode", and then in the appliance you set the VLAN Tag ID in the edit box and move the required PCI interface to the "Tagged" column. If you are using Distributed vSwitches you would set the port group to use "Trunking".
Note: The screen above shows a Standard vSwitch where it is possible to set the VLAN Trunk value by configuring the VLAN ID value to be 4095.
Note: The screen above shows a Distributed vSwitch where it is possible to set the VLAN Trunk value by selecting “VLAN Trunking” in the VLAN type drop-down box.
After clicking next here you will be asked to set the external IP addresses for the BIG-IP appliance. In this case the virtual NIC to select is the one that equates to the Network Adapter 3 or PCI Device 1.2 in the setup utility.
Note: Although the PCI NIC 1.1 appears in the “available” list you should not select it, as this corresponds to the “internal network” where the View Servers reside.
The IP addresses here are shown for illustrative purposes only. In the real world they would correspond to free IP addresses in your pool of Internet-ready IPs. The default gateway would correspond to the external IP address of your firewall. According to F5, many customers are beginning to replace their firewalls with F5 devices to simplify their configurations.
The appliance communicates with the Internet via the external network. It would use the first address of 126.96.36.199 and the shared floating IP address of 188.8.131.52; again, this is the shared IP address used between multiple appliances in a high-availability configuration. Once you add the second appliance, the “Active” appliance is accessible from the shared floating IP address – if the “Active” appliance becomes unavailable, the standby appliance takes over the “Active” role and becomes the “owner” of the floating IP address, ensuring no disconnect takes place. When we add the second appliance it will need its own “self-IP”, and we will type the same floating IP address (in our case 184.108.40.206) into its configuration. In our configuration, the standby appliance will use 220.127.116.11 as its own unique “self-IP” address.
After clicking next you will be left with the final IP to configure for the last virtual NIC device – the IP addresses to be used for the high-availability heartbeat network. This is normally a dedicated network used purely for sending and receiving signals between multiple BIG-IP devices and is used to ensure that failover from one BIG-IP appliance to another isn’t triggered accidentally by a network outage.
The “Config Sync” option controls on which VLAN synchronization of the configuration of the appliances occurs in an active/standby high-availability configuration – this ensures that configuration changes are synchronized to the other appliance. In our case, we just accepted the default “internal network” address.
It’s recommended to use the “internal network” for this configuration sync, as the other networks have their own discrete and separate purposes – you would not want configuration data to traverse either the external network or the HA network. Remember, using Guest/VM VLAN Tagging (as opposed to ESX VLAN Tagging) you could make the appliance aware of any number of VLANs, and therefore it’s possible to create a separate VLAN just for the config sync traffic. Notice how the management network (in our case 192.168.5.141) cannot be used for config sync traffic.
The failover settings allow the administrator to add additional IP addresses used to detect whether a failover must occur between the multiple appliances. This is enabled by default on the management network and the HA network configured a moment ago. It is safe to click Next here without making any changes.
Mirroring describes the process by which the active and standby units share connection and persistence data. This means should the active appliance fail, any session data is already being mirrored on the standby appliance.
Later in this chapter, we will be discussing how various outages of the View Infrastructure such as the failure of a Connection Server, Security Server and Load balancing service affects the end-user. As you will see if the active appliance fails, it is a matter of seconds before the end-user’s View session is resumed. What allows the PCoIP session to continue is this mirroring of session data between the two appliances.
Active/Standby Pair and Discover Peer
These last two options are not relevant in this first configuration. They allow the appliance to “discover” another appliance residing on the same network and set the two up as an Active/Standby pair using a discovery system. This option will become relevant when we add a second BIG-IP Appliance to the network. In this case we can simply click the “Finished” button to indicate we have completed the setup of the first BIG-IP appliance.
After clicking the “Finished” button the setup wizard completes and you are left logged in at the standard F5 BIG-IP Virtual Edition management interface. From here we will further customize the configuration to make it appropriate for use with VMware View.
Increasing the Idle Timeout Before Automatic Logout
By default the management pages “timeout” after 1200 seconds of inactivity. This can be a little frustrating when you are spending time looking at a configuration and pondering your next step – occasionally you can find yourself timed out at critical steps before your setup has been applied. F5 recommends increasing this timeout value during these early stages to avoid this situation. To increase the Idle Timeout Before Automatic Logout, click the System icon in the Main tab, and then Preferences.
Configuring DNS and NTP Services
As part of the configuration of BIG-IP VE for View, we need to ensure that the BIG-IP has the correct configuration for DNS and NTP. DNS is an essential part of View, and without correct DNS configuration or time settings, parts of the Active Directory infrastructure upon which View relies would fail.
To set the correct DNS settings:
1. On the Main tab, expand System, and then click Configuration.
2. On the Menu bar, from the Device menu, click DNS.
3. In the DNS Lookup Server List row, in the Address box, type the IP address of the DNS server.
4. Click the Add button.
5. Click the Update button
From the same menu location you should also be able to set your NTP configuration.
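If you prefer the command line, the same DNS and NTP settings can be applied with tmsh; the server addresses below are assumptions for illustration:

```shell
# Add a DNS lookup server and an NTP server to the system configuration
tmsh modify sys dns name-servers add { 192.168.3.10 }
tmsh modify sys ntp servers add { 192.168.3.10 }
```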
Importing the VMware View iApp
As part of the appliance there is a component called “iApps”; these represent standard templates that guide you through the process of configuring the appliance to work with third-party products like VMware View. Without them there would be a great deal of potentially complicated manual configuration. With them you are guided through the process via a wizard to configure the appliance with the right settings for your application – in our case the View Security Servers.
Currently, the iApp template for VMware View 5 needs to be downloaded from F5’s “Developer Central” portal. This is because at the time of writing, the build of the appliance was released before the GA of View 5. It’s expected that in future releases of the appliance this iApp will be included by default. In order to download the iApp you will need a user account for DevCentral, and this is a different username/password than you might have used to download the appliance itself. The URL for downloading the iApp is currently:
The iApp downloads as .ZIP file and will need extracting before it can be imported into the appliance. Inside the .ZIP file you should find a “template” file with a .TMPL extension.
To import the .TMPL file follow these steps:
1. On the Main tab, expand iApp, and then click Templates
2. Click the Import button on the right side of the screen.
3. Check the Overwrite Existing Templates box.
4. Click the Browse button, and then browse to the location you saved the iApp file.
5. Click the Upload button
Once the .TMPL file has been imported you should see that the VMware View 5 iApp template appears alphabetically towards the end of the list. The other reference to VMware View that you see here is the built-in iApp for VMware View 4.x. If you know you will not be supporting a legacy View 4.x environment you could remove it.
Copy the Security Servers Certificate Key File
Previously in Chapter 21 we covered the use of certificates with the Security Server. In order for BIG-IP to work correctly we need those certificate files installed to the BIG-IP. Without the certificate, users would be presented with a built-in SSL certificate that would not match the public name our users connect with (view.corp.com). We can import the .PFX file directly into BIG-IP and then later, once we have configured the VMware View iApp, select it as the preferred certificate to be used. That way when users connect to the BIG-IP it will appear as if they have connected directly to the Security Servers. To import the .PFX file follow these steps:
1. On the Main tab, expand System, and then click File Management
2. In the menu, select SSL Certificate List and Import
3. From the pull-down list select PKCS 12 (IIS)
4. Type in a friendly name to describe the certificate name such as view.corp.com
5. Browse for the .PFX file, and supply the password used during the certificate request export process and click the Import button
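Before importing, it is worth verifying the .PFX bundle and its password with OpenSSL, since a bad export password surfaces as a confusing import failure inside the BIG-IP UI. The sketch below is self-contained for illustration – it generates a throwaway key and certificate, exports them as a PKCS#12 bundle, then runs the verification step you would run against your real .PFX file (all filenames and the password are assumptions):

```shell
# Generate a throwaway key and self-signed certificate (illustration only)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=view.corp.com" -keyout demo.key -out demo.crt
# Export key and certificate as a password-protected PKCS#12 (.PFX) bundle
openssl pkcs12 -export -inkey demo.key -in demo.crt \
    -passout pass:ExportPassword -out demo.pfx
# The check worth running on your real .PFX: a wrong password fails here
# with "Mac verify error" rather than a vague error in the BIG-IP UI
openssl pkcs12 -info -in demo.pfx -noout -passin pass:ExportPassword
```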
Configuring the VMware View iApp
Now that the VMware View iApp has been successfully imported we are ready to configure it. The process begins by creating an “Application Service” within the iApp. Initially this involves inputting a friendly name for the new service and selecting the iApp template. The iApp can be configured multiple times under different “Application Service” names. This allows you complete control over the configuration. For example, you could have two configurations: one controlling load balancing for external clients coming in over the Internet to the Security Servers, and another handling load balancing for LAN-based clients, which primarily communicate with the Connection Servers.
To complete this process take the following steps:
1. On the Main tab, expand iApp, and then click Application Services
2. Click the Create button. This will cause the Template Selection page to open
3. In the Name box, type a name. In our example, we used VMware-View5
4. From the Template list, select f5.vmware_view_5.yyyy-mm-dd where yyyy-mm-dd is the date of the most recently imported iApp.
5. In the next page there are a number of options to enable and configure. Start with configuring the “Analytics” option by first enabling it using the default profile:
Note: Although we don’t discuss viewing the analytics data collected by the appliance there is plenty of useful information collected such as concurrent sessions, response and request throughput, server latency stats. These are available in graphs suitable for viewing by any manager.
6. Next under “General Questions”, select “Yes” to indicate that in this scenario View Security Servers are in use:
Note: There are many benefits that choosing to deploy APM would provide to the user, including single sign-on, client inspection checks, and UDP-based VPN. However, an in-depth discussion of this functionality is outside the scope of this chapter. The first two options require a bit of explanation, and could easily vary based on your environment and what you are trying to achieve with the F5 technology.
The first option, “Will PCoIP connections be routed through the BIG-IP system?”, concerns the direction of traffic. If you choose “Yes”, traffic including PCoIP packets flows from the client via the BIG-IP and on to the Security and Connection Servers. This option is popular when F5 has been deployed not only as a traffic management system, but as a replacement for your firewall. Selecting “No” indicates that a route exists on your network from the Internet for the client to make a connection via the Security Server and on to the Connection Server. With “No” the F5 appliance does not act as a router or gateway to external services. In most customer environments at the moment this is likely to be set to “No”. As F5’s support for the PCoIP protocol improves we might see more customers cut to the chase, dispense with the Security Server altogether, and allow F5 to control all traffic coming inbound from the Internet. In our lab environment we don’t have a route between the “client” network and the “View Infrastructure” on our external firewall, so we are going to use F5 to bridge the two networks. If you choose “No”, the second question of “Will PCoIP connections be proxied by the View Servers?” is removed because, with no PCoIP traffic passing through the F5 system directly, it becomes irrelevant.
The second option, “Will PCoIP connections be proxied by the View Servers?”, is sadly a rather convoluted way of asking – are you using Security Servers or not? Remember, an F5 appliance could be used for load balancing both LAN and WAN communications. You would select “No” if you intended to use F5 on your internal LAN to load balance connections across an array of Connection Servers.
7. Further down under “Web Traffic” we need to configure the iApp to use the certificate we imported earlier:
Note: The option “How should the BIG-IP system handle encrypted application traffic” was introduced when View 5.0/5.1 was released. View 5.1 subtly changed the way communications are handled. If you select this option you will see there are different choices depending on which generation of View you have. View 5.0/5.1 only allows SSL communications on port 443; since 5.0/5.1, View no longer supports the use of TCP port 80. In previous releases of View it was possible to turn off all SSL communications (which come with an overhead) and run everything on TCP port 80. The idea was that this could be run entirely internally on a trusted network where SSL security wouldn’t be required. The change probably comes from VMware deciding it was better to err on the side of caution and enforce a high level of security. There are likely to be changes in this in future releases of F5 BIG-IP, but this is the state of play currently.
8. Next, under “Virtual Server”, we enter the IP address that will be used to service inbound requests from external clients. F5 BIG-IP refers to this as a “Virtual Server” IP address as it gives the impression that users are connecting directly to the View Servers, when in fact F5 BIG-IP sits in front of the Security Servers load balancing the inbound client requests.
Note: You might notice that we have skipped the 18.104.22.168 address. This is because the second BIG-IP appliance will use it once it has been configured. Additionally, you can see there are optimizations that allow you to indicate whether the iApp will be servicing inbound connections from the WAN or LAN. Finally, in our case we have set “No” for the “routing” question, which indicates whether the View Security Servers are configured to use the BIG-IP as their default gateway for this subnet.
If you were using the appliance to load balance just the Connection Servers, the IP used would be in the range of the local subnet – in our case 192.168.x.x. Additionally, the option for the client’s primary connection would be set to “LAN” rather than “WAN”.
9. Finally, under “Server Pool, Load Balancing, and Service Monitor Questions” we need to add in the IP addresses of our Security Servers residing in the DMZ. The “Add” button can be used to include multiple Security Servers:
Note: BIG-IP supports many load balancing methods, however the recommended option is the default – which is to direct clients to the Security Server with the least connections.
If you were configuring the appliance for a LAN configuration – you would specify the IP addresses for your Connection Servers, rather than the Security Servers.
The “health check” monitor confirms that both the Security Server and its paired Connection Server are functioning correctly. Additionally, you can configure the frequency with which BIG-IP checks whether the View Servers are functioning. So if either CS01 or SS01 were down, the appliance would mark this “path” as “down” and the user would be redirected to the pairing between CS02 and SS02.
In View 5.0 this string would be “VMware View Portal”; as of View 5.1 this string must be “VMware.*View Portal”. For some reason VMware chose to change the way the web portal behaves in 5.1 compared to 5.0.
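The receive string lives on the HTTPS health monitor that the iApp creates. If you ever need to adjust it by hand after a View upgrade, a tmsh sketch would look like this – the monitor name here is a hypothetical placeholder, so check the name the iApp actually generated:

```shell
# View 5.0 expects the literal string "VMware View Portal";
# View 5.1 needs the wildcard form shown below
tmsh modify ltm monitor https my_view_monitor recv "VMware.*View Portal"
```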
We would recommend, before embarking on ANY VMware View upgrade, that you validate your load balancing processes against a test environment before rolling out the upgrade. VMware does have a tendency to “deprecate” and change functionality within a release, especially since they have moved to a yearly release cycle for their technologies.
10. Once this configuration is completed click the “Finished” button. The Application Service definition will be added to the list, and can be modified using the “Reconfigure” option:
Configuring View to use BIG-IP Virtual Edition
Before you fire up a VMware View Client and try to log in to the VMware View infrastructure, first confirm that the client can resolve the external/Internet FQDN to the correct address – in our case view.corp.com to the 22.214.171.124/24 address of the F5 BIG-IP “Virtual Server”.
Within the View Admin web-pages we would need to update the configuration of the Security Servers and Connection Servers to reflect the fact that we now wish the View Clients to connect via the “Virtual Server” IP address of the BIG-IP Appliance.
This has to be manually configured on each of the Connection Servers that have been paired with a Security Server. To enable this, navigate to the View Configuration, Servers and Connection Servers pane, then select the Connection Server and click the Edit button. If you remember, this is what the dialog box looks like by default:
Notice how the external URL is the internal name, not the external one – and that the PCoIP external URL is still an internal address (192.x.y.z), not an Internet address (in our case 80.x.y.z). Finally, you can see the option “Use PCoIP Secure Gateway for PCoIP connections to the desktop” has not been enabled. We need to change this configuration to make everything work together.
You can see in these dialog boxes that PCoIP listens on 4172, both TCP and UDP. As such, both TCP/UDP 4172 and 443 will need to be open for connections to work across a firewall. Remember, the Security Server merely works as a stateful NAT translator that brokers the client’s access to the desktop. PCoIP packets go directly from the client to the virtual desktop, and for this reason 4172 needs to be open on both the internal and external firewalls. Additionally, you may need to review the IP address used by the Security Server, as this may have changed since the initial installation.
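As a sketch of the firewall requirement above, expressed in Linux iptables syntax (interface and address details are omitted here; adapt this to your own firewall product):

```shell
# Allow the View Client's initial HTTPS connection to the Security Server
iptables -A FORWARD -p tcp --dport 443 -j ACCEPT
# Allow PCoIP, which uses both TCP and UDP on port 4172
iptables -A FORWARD -p tcp --dport 4172 -j ACCEPT
iptables -A FORWARD -p udp --dport 4172 -j ACCEPT
```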
Testing the Configuration
Aside from just firing up the client and seeing if a connection works, it is possible to use “tcpdump” in an SSH session on the appliance itself to capture packets. This can be used to verify that PCoIP traffic is passing through it. If connections to the desktop are successful and you do not see PCoIP traffic being passed, then the client has connected via some other method, potentially bypassing the BIG-IP Appliance. To capture traffic on all interfaces of the appliance you can use the command:
tcpdump -i 0.0 -s 0 -nnevv port 4172
When a PCoIP session is established, you should see that the capture indicates packets being sent and received using UDP from the BIG-IP Virtual Server IP address (in our case 126.96.36.199.4172) to a client (in our case 188.8.131.52.500).
Configuring BIG-IP VE for High-Availability
Now that we are satisfied the first BIG-IP Virtual Appliance is configured, we can consider setting up the second appliance for high availability. Just as it is unlikely you will have only one Connection Server and one Security Server, it is equally unlikely that you will have only one load balancing appliance. Generally, these devices, whether they are virtual or physical, have availability solutions where more than one appliance is paired together – in our case an Active/Standby configuration. This pairing process establishes a trust between the two appliances and creates a “device group” that allows the configuration data to be synchronized and enables the high-availability functionality. F5 uses the term “device group” to merely refer to more than one appliance being managed as a single entity.
The setup and configuration of an additional BIG-IP Appliance is much the same as that of the first. You assign a management IP address, activate its licenses, and set the platform options such as hostname and password. Where the configuration differs is in the network setup. When configuring the “internal network” the new appliance will need its own unique “self-IP”, but will share the same “floating” IP that was set earlier:
Note: So here the standby BIG-IP Appliance’s self-IP is 192.168.3.142, but the floating IP address is the same one we configured on the Active appliance of 192.168.3.143. The same situation occurs when configuring the external network for the standby BIG-IP Appliance. In this case, the unique “self-IP” is 184.108.40.206, and we type the shared “floating IP” address that was configured at the “active” appliance.
Note: So here the standby BIG-IP Appliance’s self-IP is 220.127.116.11, but the floating IP address is the same one we configured on the Active appliance of 18.104.22.168.
The final network is the one used to send high-availability heartbeat packets, ensuring we don’t get unnecessary failovers and protecting the appliances from split-brain situations. This is a relatively simple configuration, as we just need a free IP address in the same range as the “active” appliance.
Note: You might recall that earlier we gave the active BIG-IP Appliance the HA IP Address of 10.10.10.1
The other settings that concern Failover and Mirroring are dealt with in exactly the same way on the standby as they were on the active. Where the configuration is significantly different is in the Active/Standby pairing process. Whereas with the first appliance we clicked the “Finished” button, in this case we click the “Next” button, indicating we want to pair the active appliance with a standby appliance. Although F5 calls this “Discover Peer”, you do in fact type in the IP address and “admin” credentials of the active appliance; the standby appliance does not “sniff” the network looking for the active appliance.
After clicking next, the setup utility will progress on to the “Discover Peer” part of the wizard where clicking the “Next” button will trigger the process of contacting the active peer on the network.
After clicking the “Next” button type in the IP address and admin credentials for the active appliance.
Once the active appliance has been “discovered” you will be asked to:
• Confirm the identity of the appliance by its built-in certificate
• Accept its hostname configured earlier
• Accept or Type a unique “Sync Failover Group Name”
• Confirm the new device trust between the active/standby appliance
This essentially ends the setup wizard for the standby, and the two devices will now be paired together in an active/standby configuration. Although the appliances are paired together, they do not automatically sync their configuration information to each other. By default synchronization is a manual process, and this is a deliberate default from F5. The idea is that this allows you to make changes to one appliance, and once that configuration is regarded as valid, it is manually synchronized to the other appliance in the pair. It’s important to know which appliance is “active” and which is “standby”, else you could accidentally overwrite the wrong configuration. In our case it is easy to tell that our active appliance is the node called “bigipve01.corp.com”, as its status is marked as “Online (Active)”.
You can also tell which node is the active and standby by using SSH directly to their respective “self-IPs” as the console prints their status to the command-line prompt.
If you do make changes to the active appliance you will see the status in the management web-pages change from “In Sync” to “Changes Pending”
You can force synchronization manually by navigating to Device Management, Device Groups, selecting the Device Group created during the pairing process, then selecting the Config Sync tab and clicking the “Synchronize TO Group” button.
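The same synchronization can be triggered from the command line with tmsh; the device group name below is a placeholder for the one created during the pairing process:

```shell
# Push the local configuration out to the other member of the device group
tmsh run cm config-sync to-group my_device_group
```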
In our scenario, it is important to make sure that our currently active appliance (bigipve01.corp.com) syncs its configuration to the newly configured standby appliance (bigipve02.corp.com). In production environments, it’s recommended that you make configuration changes to the standby appliance first, and then sync the changes to the currently active system. This way you’re not actively changing the configuration of an appliance that might be in use until you are 100% sure you are happy with the new configuration. It is also possible to take an archive or “backup” of the active unit just prior to making changes; if the changes are not what you want, you can restore the archive, returning the appliance to its previous state.
If you wish to manually change the active/standby role from one node to another, you can connect directly to the active node’s IP address and use “Force to Standby”. To do this, navigate to Device Management, Devices, select the active node from the device list, then scroll to the bottom of the page and click the button marked “Force to Standby”.
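The equivalent from an SSH session on the active unit is a single tmsh command, which demotes the local appliance and lets its peer take over the active role:

```shell
# Force the local unit to standby; the peer becomes active
tmsh run sys failover standby
```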
Update your DRS Anti-Affinity Rules
We would recommend updating your DRS Anti-Affinity rules to ensure the two appliances never run on the same ESX host. There’s little point in having two appliances in an active/standby configuration residing on the same ESX host, which could fail. To configure this, just right-click the VMware Cluster where the appliances reside, and in the Properties and Rules section create a rule configured to “Separate Virtual Machines”.
Of course you could consider a similar rule for your Connection Servers and Security Server as well.
Factory Reset of the F5 BIG-IP Virtual Edition
During your evaluation of the F5 BIG-IP Virtual Edition you might wish to reset the appliance back to its initial clean state. It is possible to issue a “factory reset” to the appliance that will revert its configuration state. The tmsh command saves the currently-running configuration to a /var/local/scf/backup.scf file, and then restores the configuration to the factory default settings by loading the /defaults/defaults.scf file. The tmsh command retains certain configuration elements that are necessary for maintaining basic administrative functionality. When you restore the BIG-IP configuration to factory default settings, the system performs the following tasks:
• Removes all BIG-IP local traffic configuration objects
• Removes all BIG-IP network configuration objects
• Removes all non-system maintenance user accounts
• Retains the management IP address
• Retains system maintenance user accounts and passwords (root and admin)
• Retains the BIG-IP license file
• Retains files in the /shared partition
• Retains manually-modified bigdb database variables
To issue a factory reset:
1. SSH to the appliance using its self-IP
2. After logging in as root, enter
tmsh load sys config default
Enterprise Infrastructures and Availability
Over the last couple of chapters, we have gradually been putting in place the components that would be typical in a commercial business environment, such as Connection Server Replicas, multiple Security Servers and load balancing. We would like to spend a brief amount of time discussing what level of availability is reasonable to expect with this type of infrastructure, and outline what the user experience would be should you suffer an outage whilst user sessions are established. We firmly believe setting the correct expectations is critical to the perceived success of any project. It’s important that the business and end-users understand that while 99% of the time the infrastructure we provide will be so reliable they take it for granted, there will be times when it fails, and they must be psychologically prepared for what that experience is like. These failure scenarios can be seen as a series of simple “what if” situations. We believe you should test against these “what if” scenarios so you understand the implications of each type of failure.
What if a Security Server goes down?
If you do not have a load balanced Security Server environment then users would have to load up the client and point to a different Security Server. This is not ideal, and for this reason we have supplied two examples of load balancing the front-end Security Servers with a single load-balanced IP resolved to a public DNS name. Despite this configuration – remember load balancing Security Servers does not mean fault-tolerance as the Security Servers do not share connection state information between them, and if the Security Server that the user is currently connected to experiences an outage their session will be dropped. You can see which Security Server the end-user is currently connected to from the View Administration web page under the Monitoring and Remote Sessions.
As you might recall, each Security Server is uniquely “paired” with a Connection Server. In our case we paired SS01 to CS01, and SS02 to CS02 respectively. This means we should be able to test against an outage of both the Security Server and the Connection Server. To test the current scenario we could hard power off the SS02 Security Server. Initially, the user sees no pop-ups – the session just hangs as if there were a network delay. Within less than 30 seconds the user will see this error message. Notice the error message suggests that the Connection Server (not the Security Server) may have been restarted.
The user should be able to reload the client and try again. The load balancer in front of the Security Servers should have detected that SS02 is no longer responding to TCP traffic, and force the end-user session through the alternative path from SS01 to CS01, reconnecting them to the desktop. In our tests with both MS NLB and F5 Networks this was the case. The Connection Servers share session data that allows these reconnects to be successful. The time it takes to reconnect is governed by how long your load balancer is configured to wait before it decides that a component of the View Infrastructure has experienced an outage. For example, with the F5 BIG-IP Virtual Edition the iApp configuration sets this to a default of 30 seconds. However, it’s important to understand that a number of retries are made before the appliance marks one of the Security Servers as unavailable in the “Server Pool” area of the iApp configuration. The health check algorithm is configured to try three times, plus 1 second. This means it would take up to 91 seconds (3 × 30 seconds, plus 1 second) before the appliance would regard the View Servers as unavailable.
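The arithmetic behind that 91-second figure can be sketched directly, using the iApp defaults quoted above:

```shell
# Monitor time before a path is marked down = (3 retries x interval) + 1 second
interval=30
timeout=$((3 * interval + 1))
echo "Path marked down after ${timeout} seconds"   # prints: Path marked down after 91 seconds
```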
Finally, we think it would be nice if the View Client error message was more helpful to the user, or if the View Client defaulted to closing and reconnecting to the load-balanced Security Servers.
What if a Connection Server goes down?
In this case, the end user experiences no outage at all. The session has a stateful connection from the Security Server through to the virtual desktop. Remember the Connection Server serves just as a “broker” which authenticates the user and locates their desktop from the pool. However, there are some situations where an outage of the Connection Server can cause problems.
In our case, we have two Security Servers uniquely paired with two Connection Servers. It is not possible for a single Security Server to be paired with more than one Connection Server. Logically it is possible that if SS01 was down, and so was CS02 then there would be no access to the desktop. Essentially, there would be no path through the various View Infrastructure components to successfully broker a connection. The only solution to this problem is more Security Servers and Connection Servers to offer more availability.
Another issue is that a load-balancer could select the Security Server that was least busy, but whose Connection Server was down. This is a classic example of how load balancing is often confused with service availability.
In terms of the View Client, the user would find that their connection was unsuccessful, and they would receive this error message:
In our tests with F5 we did find that after a short while the client was redirected to the SS02 and CS02 pairing. This is because the health-check string of "VMware View Portal" verifies that both the Security Server and its paired Connection Server are available. It's likely that F5 will continue to update the iApp to increase its application intelligence. It's a great example of how we are increasingly moving away from basic load balancing towards systems that understand the underlying service dependencies.
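To illustrate the idea (this is our own sketch, not F5's implementation), a content-aware monitor does more than open a TCP socket: it fetches the portal page and looks for the expected string, which is only served when the Security Server can reach its paired Connection Server. The function names and URL here are our own placeholders:

```python
import urllib.request

def body_is_healthy(body: str, expected: str = "VMware View Portal") -> bool:
    """Content check: the marker string only appears when the Security
    Server can successfully reach its paired Connection Server, so a
    match implies the whole path is up, not merely that a port answered."""
    return expected in body

def view_pair_healthy(url: str, timeout: float = 5.0) -> bool:
    """Fetch the View portal page and apply the content check above."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return body_is_healthy(resp.read().decode("utf-8", errors="replace"))
    except OSError:
        return False  # refused or timed out: treat the pair as down
```

A plain TCP-port monitor would report SS01 as healthy even with CS01 down; a content check like this takes the whole pairing out of rotation.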
However, with a less sophisticated load-balancing system this could give the impression that the load-balancer was somehow detecting the outage of CS01. In reality the load-balancer may have no awareness of the loss of the Connection Server, and the fact that connections occasionally succeed is simply a side effect of the load-balancer attempting to distribute load. The message here is that not all load-balancers are the same, and the level of their "application awareness" varies significantly. Only rigorous testing can really expose the quality of their coding.
A common question from customers is whether the use of two layers of load balancing would be supported and whether it would improve the availability of the View infrastructure. The answer is no. This is because the pairing of the Security Server to the Connection Server is a discrete process, and it is not possible to pair a Security Server to a load-balanced virtual IP or virtual server.
[Diagram: View Clients → 1st IP load-balancer → Security Servers → 2nd IP load-balancer → Connection Servers]
Note: Although load balancing the Connection Servers is currently a viable configuration in a LAN-only environment, VMware does not support the configuration above, in which the Connection Server is paired with the Security Server through two layers of IP load-balancers. The pairing process happens directly between the Security Server and the Connection Server.
What if the IP Load-Balancer Fails?
The consequences of an external IP load-balancer failing largely depend on whether you have followed your vendor's solution for high availability, and how well it has been implemented by both you and the vendor. In our case we used the F5 BIG-IP Virtual Edition as an example of a third-party load-balancer. As you might recall, we configured the systems in an active/standby pair. Being a virtual appliance, it was very easy to test the availability features: we simply hard powered off the "active" appliance to allow failover to occur whilst a PCoIP session was open.
On the client we opened a file and typed in data without saving, and we also opened the clock application. The clock was compared to another VM on the same cluster; as there was plenty of free CPU on the ESX hosts, and time synchronization was in place, both clocks ticked at precisely the same time. When the active BIG-IP appliance was hard powered off, the clock on the client stopped ticking altogether. We were pleased to see that failover to the standby BIG-IP appliance took place in about 5 seconds, and the View Client kept the client window open and re-established the session.
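A crude way to time such a failover from the client side (our own sketch; the host and port are placeholders for your load-balanced virtual server) is to poll the TCP port and record how long it stays unreachable:

```python
import socket
import time

def measure_outage(host: str, port: int, interval: float = 0.5) -> float:
    """Poll a TCP port and return the length (in seconds) of the first
    observed outage. Assumes the service is up when polling starts."""
    def is_up() -> bool:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            return False

    while is_up():        # wait for the outage to begin
        time.sleep(interval)
    down_at = time.monotonic()
    while not is_up():    # wait for failover to bring the service back
        time.sleep(interval)
    return time.monotonic() - down_at
```

Run this against the virtual server address while you power off the active appliance; the returned figure approximates what a connecting user would experience.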
What if the Virtual Desktop goes down?
If virtual desktops shut down unexpectedly for whatever reason (for example system administrator stupidity) then any active data which has yet to be saved will be lost. The user may or may not retrieve their data from an application that has its own "recovery" features such as Microsoft Word. Much depends on the configuration of the virtual desktop pool. If the pool is of the "floating" variety there is no guarantee that the user will be returned to the original desktop; instead, the user will be assigned a desktop from the available pool. If the pool is of the dedicated variety the user will only be able to reconnect once that desktop has been powered back on.
In this case, it can take some time before the View Client reacts; in our tests, upwards of nearly two minutes. Once the session has been dropped the user will receive this error message:
As we have seen, there are a number of options for load balancing, ranging from Microsoft NLB, which might be suitable for a small-scale solution, to the commercial load-balancers. It's worth mentioning that many of the commercial load-balancers come with TCP optimization options that can actually assist in managing the network traffic that virtual desktops create; they go beyond pure load balancing into the realms of guaranteeing quality of service for the WAN applications you provide. VMware's View servers react pretty favourably to outages, assuming the problems are limited. In the four-node environment we currently have set up, there is of course a point of failure that could leave the user unable to connect at all. If you recall, the pairing process is 1-to-1 between the Connection Server and the Security Server: if SS01 was down and CS02 were unavailable, there would be no path through which the user could be authenticated and receive their desktop.
Our next chapter returns to the theme of security, covering the configuration of VMware's vShield appliance for Endpoint security. In case you don't know, this is an optional component that allows you to relocate your anti-virus load out of the guest operating system and let virtual appliances handle the process. It can significantly help reduce the administrative burden of maintaining AV software and, critically, reduce the CPU penalty caused by AV scans.