Getting Started with Dell EqualLogic Replication
- Getting Started with Dell EqualLogic Replication
- Creating an EqualLogic iSCSI Volume
- Granting ESXi Host Access to the EqualLogic iSCSI Volume
- Enabling Replication for EqualLogic
- Using EqualLogic Host Integration for VMware Edition (HIT-VE)
- Summary
Getting Started with Dell EqualLogic Replication
Version: vCenter SRM 5.0
In this chapter you will learn the basics of how to configure replication with Dell EqualLogic. The chapter is not intended to be the definitive last word on all the best practices or caveats associated with the procedures outlined. Instead, it’s intended to be a quick-start guide outlining the minimum requirements for testing and running a Recovery Plan with SRM; for your particular requirements, you should at all times consult further documentation from Dell and, if you have them, your storage teams. Additionally, I’ve chosen to cover configuration of the VMware iSCSI initiator to more closely align the tasks carried out in the storage layer with the ESX host itself.
Before I begin, I want to clearly describe my physical configuration. It’s not my intention to endorse a particular configuration, but rather to provide some background that will clarify the step-by-step processes I’ll be documenting. I have two EqualLogic systems in my rack, each in its own “group.” In the real world you would likely have many arrays in each group, and a group for each site where you have EqualLogic arrays. In my Protected Site I have an EqualLogic PS6000XV, which has sixteen 15K SAS drives; in my Recovery Site I have a PS4000E with sixteen SATA drives. Both are configured with the same management system: the EqualLogic Group Manager. Of course, these arrays are available in many different disk configurations to suit users’ storage and IOPS needs, including SATA, SAS, and SSD. Apart from the different types of disks in the arrays, the controllers at the back are slightly different from each other, with the PS6000 offering more network ports; the PS6010/6510 is also available with 10GbE interfaces. From a networking perspective, the EqualLogic automatically load-balances I/O across the available NICs in the controller, and there is no need to configure any special NIC teaming or NIC bonds as is the case with other systems. On the whole, I find the EqualLogic systems very easy to set up and configure. Figure 2.1 shows the PS4000 at the top and the PS6000 at the bottom. If the figure were in color, you would see that the controllers are color-coded purple and green to make each system easy to identify at the back of the rack, with each system offering a rich array of uplink ports and speeds.
Figure 2.1 The rear of these two Dell EqualLogic systems is color-coded to make them easy to distinguish from the rear of the racks.
In the screen grab of the Group Manager shown in Figure 2.2, you can see that I have two EqualLogic systems. Each member or array has been added into its own group, and of course it’s possible to have many members in each group. I configured these group names during a “discovery” process when the arrays were first racked up and powered on using a utility called the Remote Setup Wizard. The Remote Setup Wizard is part of the Host Integration Tools for Windows, and a CLI version is available as part of the Host Integration Tools for Linux. This setup wizard discovers the arrays on the network, and then allows you to set them up with a friendly name and IP address and add them to either a new group or an existing group.
Figure 2.2 A single Dell EqualLogic array (or member) in a single group called “New-York-Group”
In my configuration I have two members (new-york-eql01 and new-jersey-eql01) that have each been added to their own group (New-York-Group and New-Jersey-Group).
Creating an EqualLogic iSCSI Volume
Creating a new volume accessible to ESXi hosts is a very easy process in the EqualLogic Group Manager. The main step to remember is to enable “multiple access” so that more than one ESXi host can mount and access the volume in question. For some reason, in my work it’s the one blindingly obvious step I sometimes forget to perform, perhaps because I’m so overly focused on making sure I input my IQNs correctly. You can kick off the wizard to create a new volume, and see the volume information in detail in the Volumes pane in the Group Manager (see Figure 2.3).
Figure 2.3 Selecting the Volumes node shows existing volumes; selecting “Create volume” lets you carve new chunks of storage to the ESX host
To create a volume, follow these steps.
1. On the Step 1 – Volume Settings page of the wizard (see Figure 2.4), enter a friendly name for the volume and select a storage pool.
For my volume name I chose “virtualmachines,” which perhaps isn’t the best naming convention; you might prefer to label your volumes with a convention that indicates which ESX host cluster has access to the datastore. For the storage pool, in my case there is just one pool, called “default,” which contains all my drives, but it is possible to have many storage pools with different RAID levels. It is also possible to have multiple RAID levels within one pool, which allows the administrator to designate a preference for a volume to reside on the RAID type best suited to that application’s IOPS demands or resiliency requirements. A recent firmware update from Dell enhanced the EqualLogic array’s performance load-balancing algorithms to allow for the automatic placement and relocation of data in volumes within and between EqualLogic arrays. This storage load balancing (conceptually, a storage version of DRS) helps to avoid performance “hits” by redistributing very active data to less heavily used array resources.
2. On the Step 2 – Space page of the wizard (see Figure 2.5), set the size of the volume, and whether it will be fully allocated from the storage pool or thinly provisioned. Also, reserve an allocation of free space for any snapshots you choose to take.
The allocation here is conservatively set as a default of 100%. Think of this value as just a starting point which you can change at any time. If you create a 500GB LUN, 500GB would be reserved for snapshot data—the configuration assumes that every block might change. You might wish to lower this percentage value to something based on the number of changes you envisage, such as a range from 10% to 30%. It is possible to accept the default at this stage, and change it later once you have a better handle on your storage consumption over time.
Figure 2.4 Naming a volume and selecting a storage pool
Figure 2.5 EqualLogic has, for some time, supported thin provisioning that can help with volume sizing questions.
Remember, this allocation of snapshot space will not influence your day-to-day usage of VMware SRM. This reservation of snapshot space comes from an allocation that is separate and distinct from the one used by VMware SRM. So it’s entirely possible to set this to 0% on volumes where you don’t envisage creating snapshots of your own, as would be the case with archive or test and development volumes.
3. On the Step 3 – iSCSI Access page of the wizard (see Figure 2.6), set the access control for the volume.
This prevents unauthorized servers from accidentally connecting to a volume intended for another server. You can set this using CHAP, IP address, or iSCSI IQN. In the “Access type” section of the page we are enabling “simultaneous connections” on the volume. At this point you can only add one entry at a time, but later on you can add additional IQNs to represent your multiple ESXi hosts that will need to access the volume within the VMware HA/DRS cluster.
Figure 2.6 Setting the access control to prevent unauthorized servers from accidentally connecting to a volume intended for another server
Figure 2.7 As I only have two ESX hosts, it was easy to cut and paste to add the IQNs, and merely adjust the “alias” value after the colon.
4. Once the volume is created, use the volume’s Access tab and the Add button to add additional IQNs/IPs or CHAP usernames representing your ESXi hosts (see Figure 2.7).
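Because the snapshot reserve in step 2 is simple percentage math, it’s easy to sketch what the defaults cost you in pool space. A quick illustration, using the 500GB volume size mentioned earlier (the other percentages are arbitrary examples, not recommendations):

```shell
# Snapshot reserve consumed for a given volume size and reserve percentage.
# At the 100% default, a 500GB volume sets aside a further 500GB of pool
# space, on the assumption that every block might change.
volume_gb=500

for pct in 100 30 10 0; do
  reserve_gb=$(( volume_gb * pct / 100 ))
  echo "volume=${volume_gb}GB reserve=${pct}% -> ${reserve_gb}GB held for snapshots"
done
```

As the loop shows, dropping the reserve from 100% to the 10–30% range frees a substantial amount of pool capacity, which is why it’s worth revisiting the default once you understand your rate of change.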
Granting ESXi Host Access to the EqualLogic iSCSI Volume
Now that we have created the iSCSI target, it’s a good idea to enable the software iSCSI initiator on the ESXi hosts. If you have a dedicated iSCSI hardware adapter you can configure your IP settings and IQN directly on the card. One advantage of this is that if you wipe your ESXi host, your iSCSI settings remain on the card; however, such cards are quite pricey. Many VMware customers prefer to use the ESXi host’s iSCSI software initiator. The iSCSI stack in ESXi 5 has recently been overhauled, and it is now easier to set up and offers better performance. The following instructions explain how to set up the iSCSI stack to connect to the EqualLogic iSCSI volume we just created. Just as with the other storage vendors, it’s possible to install EqualLogic’s Multipathing Extension Module (MEM), which offers intelligent load balancing across your VMkernel ports for iSCSI.
Before you enable the software initiator/adapter in the ESXi host, you will need to create a VMkernel port group with the correct IP data to communicate with the EqualLogic iSCSI volume. Figure 2.8 shows my configuration for esx1 and esx2; notice that the vSwitch has two NICs for fault tolerance. In the figure, I’m using ESXi Standard vSwitches (SvSwitches), but there’s nothing to stop you from using Distributed vSwitches (DvSwitches) if you have access to them. Personally, I prefer to reserve the DvSwitch for virtual machine networking, and use the SvSwitch for any ESXi host-specific networking tasks. Remember, ESXi 5 introduced a new iSCSI port-binding policy feature that allows you to confirm that multipathing settings are correct within ESXi 5. In my case, I created a single SvSwitch with two VMkernel ports, each with its own unique IP configuration. On the properties of each port group (IP-Storage1 and IP-Storage2) I modified the NIC teaming policy such that IP-Storage1 had dedicated access to vmnic2 and IP-Storage2 had dedicated access to vmnic3. This configuration allows for true multipathing to the iSCSI volume for both load-balancing and redundancy purposes.
Before configuring the VMware software initiator/adapter, you might wish to confirm that you can communicate with the EqualLogic array by running a simple test using ping and vmkping against the IP address of the group IP address. Additionally, you might wish to confirm that there are no errors or warnings on the VMkernel port groups you intend to use in the iSCSI Port Binding setting of the Port Properties, as shown in Figure 2.9.
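From the ESXi shell, that connectivity check looks something like the following sketch; the group IP address and the VMkernel interface names (vmk1/vmk2) are placeholders for your own values:

```shell
# Confirm the ESXi host can reach the EqualLogic group IP before touching
# the iSCSI initiator. GROUP_IP and the vmk interfaces below are examples.
GROUP_IP=10.10.10.100

ping "$GROUP_IP"                # basic reachability from the host
vmkping -I vmk1 "$GROUP_IP"     # force the test out of each iSCSI VMkernel port
vmkping -I vmk2 "$GROUP_IP"
```

Forcing vmkping out of each VMkernel interface individually is worthwhile: a plain ping can succeed via the management network while one of the storage paths is actually broken.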
In ESXi 5 you should not need to manually open the iSCSI software TCP port on the ESXi firewall. The port number used by iSCSI, which is TCP port 3260, should be automatically opened. However, in previous releases of ESX this sometimes was not done, so I recommend that you confirm that the port is opened, just in case.
1. In vCenter, select the ESXi host and the Configuration tab.
2. In the Software tab, select the Security Profile link.
3. In the Firewall category, click the Properties link.
4. In the Firewall Properties dialog box, open the TCP port (3260) for the Software iSCSI Client (see Figure 2.10).
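If you prefer to confirm this from the ESXi shell rather than the vSphere client, something like the following should do it. The ruleset name iSCSI is how the software iSCSI client appears in the ESXi 5 firewall listing, but verify it against the output of the list command on your build:

```shell
# Check whether the software iSCSI client ruleset (TCP 3260) is enabled.
esxcli network firewall ruleset list | grep -i iscsi

# Enable it explicitly if it shows as false:
esxcli network firewall ruleset set --ruleset-id=iSCSI --enabled=true
```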
Figure 2.8 In this configuration for esx1 and esx2, the vSwitch has two NICs for fault tolerance.
Figure 2.9 iSCSI port binding is enabled due to the correct configuration of the ESX virtual switch
Figure 2.10 By default, ESXi 5 opens iSCSI port 3260 automatically.
5. Add the iSCSI software adapter. In previous releases of ESXi the adapter would be generated by default, even if it wasn’t needed. The new model for iSCSI on ESXi 5 allows for better control over its configuration. In the Hardware pane, click the Storage Adapters link and then click Add to create the iSCSI software adapter, as shown in Figure 2.11.
Figure 2.11 In ESXi 5 you must add the iSCSI software adapter. Previously, the “vmhba” alias was created when you enabled the feature.
6. Once the virtual device has been added, you should be able to select it and choose Properties.
7. In the iSCSI Initiator (vmhba34) Properties dialog box click the Configure button; this will allow you to set your own naming convention for the IQN rather than using the auto-generated one from VMware, as shown in Figure 2.12.
8. Bind the virtual device to one of the VMkernel ports on the ESXi host’s vSwitch configuration. In my case, I have a VMkernel port named “IP-Storage” which is used for this purpose, as shown in Figure 2.13.
Figure 2.12 Instead of using the auto-generated IQN from VMware, you can use a combination of IP addresses and CHAP.
Figure 2.13 The new iSCSI initiator allows you to add VMkernel ports that are compliant with load balancing to enable true multipathing.
9. Select the Dynamic Discovery tab, and click the Add button.
10. Enter the IP address of the iSCSI target that is the EqualLogic group IP address (see Figure 2.14); in my case, this is 184.108.40.206.
11. Click OK.
12. Click Close in the main dialog box; you will be asked if you want to rescan the software iSCSI virtual HBA (in my case, vmhba34). Click Yes. Remember, static discovery is only supported with hardware initiators.
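For hosts you manage at the command line, steps 5 through 12 can also be scripted with esxcli on ESXi 5. This is a sketch, not a definitive procedure; the adapter name (vmhba34) comes from the walkthrough above, while the IQN, VMkernel interfaces (vmk1/vmk2), and group IP are illustrative values that will differ in your environment:

```shell
# Step 5: enable the software iSCSI adapter.
esxcli iscsi software set --enabled=true

# Step 7: optionally replace the auto-generated IQN with your own convention.
esxcli iscsi adapter set --adapter=vmhba34 --name=iqn.2011-07.com.corp:esx1

# Step 8: bind the iSCSI VMkernel ports for true multipathing.
esxcli iscsi networkportal add --adapter=vmhba34 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba34 --nic=vmk2

# Steps 9-10: add the EqualLogic group IP as a dynamic discovery target.
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba34 --address=10.10.10.100:3260

# Step 12: rescan so that the newly presented volume appears.
esxcli storage core adapter rescan --adapter=vmhba34
```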
Occasionally, I’ve noticed that some changes to the software iSCSI initiator after this initial configuration may require a reboot of the ESXi host (see Figure 2.15). So, try to limit your changes where possible, and think through what settings you require up front to avoid this.
Now repeat this setup for the Recovery Site ESXi hosts, changing the IP addresses relative to the location. You might find it useful to create a “test” volume at the Recovery Site to confirm that the iSCSI configuration is valid there. In order for SRM to work with iSCSI you must at least configure the iSCSI initiator, and add an IP address for the storage arrays at the Recovery Site. The SRA within SRM will take care of presenting the storage as Recovery Plans are tested or run.
Figure 2.14 Entering the group manager IP address to discover the volumes held on the group members assigned to ESX hosts
Figure 2.15 Because of a configuration change in the ESX iSCSI stack a reboot is required.
Enabling Replication for EqualLogic
Now that we have the ESX hosts accessing a datastore at both locations we can move on to consider enabling replication between the EqualLogic systems. Replication in EqualLogic requires a pairing process where arrays are partnered together; it’s not unlike the pairing process you will see in the core SRM product. Once this partnering process has been completed, you can begin the process of adding a replica to a volume. The wizard will automate the process of creating a volume at the destination array configured to receive updates. Once replication is enabled, you can then attach a schedule to the replication object to control when and how frequently replication is allowed.
Configuring Replication Partners
The first stage in setting up replication with EqualLogic involves partnering up your groups. This may already have been carried out in your environment, but I’m going to assume that you have two EqualLogic groups that have never been paired together before. To configure the relationship in the Group Manager, select the Replication pane and click the “Configure partner” link to run the Configure Partner wizard. This wizard initially creates a one-way relationship between two arrays, so you will also need to carry out the pairing process in the opposite direction (see Figure 2.16). The pairing up of the group partners is a one-time-only event; as you add more EqualLogic systems or “members” to a group, they merely inherit the configuration from the Group Manager.
To configure a replication partner, follow these steps.
1. On the Step 1 – Replication Partner Information page of the wizard (see Figure 2.17), enter the name of the group in your DR location (in my case, this is New-Jersey-Group) as well as the group IP address used to manage the members contained within the group (in my case, this is 220.127.116.11). It’s important to note that these group names are case-sensitive.
Figure 2.16 In the Replication view you trigger the partnering process using the “Configure partner” link.
Figure 2.17 Under “Partner identification” enter the group name and group IP address used at the Recovery Site.
2. On the Step 2 – Contact Information page of the wizard, input your name, email address, and contact numbers as befits your environment. This is not a mandatory requirement and can be bypassed as necessary.
3. On the Step 3 – Authentication page (see Figure 2.18), set the password for the mutual replication between the partners.
4. On the Step 4 – Delegate Space page, it is possible to reserve disk space used on the destination group (in my case, New Jersey) to be used purely for receiving updates from the Protected Site group. Here, I reserved 500GB for this value.
This Delegate Reservation is intended to give the storage administrator control over how space is consumed by replication partners. The last thing we would want is for the storage administrator in New York to be able to consume disk space for replication purposes in New Jersey in an unchecked manner. You can think of such reservations at the storage layer as being similar to the reservation you might make on a VM for memory or CPU at the VMware layer. The reservation is a guarantee that a minimum amount of resources will be available for a given process. Set it too low and there may not be enough capacity; set it too high and you might waste resources that could have been better allocated elsewhere. Fortunately, if you do set these allocations incorrectly, you can always change them afterward. Remember, in a bidirectional configuration where New York and New Jersey both act as DR locations for each other, you would need a delegate reservation at both sites.
5. Click Finish; you should see that the relationship is listed under Replication Partners (see Figure 2.19). I repeated this configuration on my New-Jersey-Group.
Figure 2.18 Replication uses a shared password model to allow the arrays to authenticate to each other.
Figure 2.19 From the Protected Site in New York I can see the relationship created with New Jersey.
Configuring Replication of the iSCSI Volume
Once the partnership relationships have been established, the next stage is to enable replication for the volume(s) that require it. Again, this is a very simple procedure of selecting the volume in the Volume Activities pane and clicking the “Configure replication” link (see Figure 2.20).
Figure 2.20 The Volume Activities pane shows options such as configuring replication and assigning a schedule to a snapshot or replication process.
To configure replication of the iSCSI volume, follow these steps.
1. On the Step 1 – General Settings page of the wizard (see Figure 2.21), set the percentage values for your replica reserve.
These two reservations consume space from the Delegate Reservation created earlier when the partnership was first established. Notice how the focus here is the New-Jersey-Group, as it will be the destination of any block changes in the volume. The Local Reservation defaults conservatively to 100%; this means that if every block changed in a 100GB volume, there would be 100GB guaranteed or reserved for those updates. This “local reserve” is local to the group currently selected. The Remote Reservation is space allocated at the destination location and is set to 200%; again, this is a conservative setting which allows for a complete duplicate of the volume (the first 100%) together with an allocation of space should every block change in the volume (the second 100%).
2. On the Step 2 – Advanced Settings page (see Figure 2.22), enable the “Keep failback snapshot” option.
This is very useful in the context of planned failovers and failbacks with VMware SRM. It allows only the changes accrued while you were running from your Recovery Site to be replicated back to the Protected Site during the failback process. Without it, a complete transfer of all the data would have to be carried out during a failback process. That could be very time-consuming if you had only failed over to the DR location for a few hours or days. Of course, this all assumes that you have enough bandwidth to be able to replicate all your changes back to the Protected Site. If not, EqualLogic offers the Manual File Transfer Utility that allows you to transfer your replication data via removable media—in short, you might find that UPS or FedEx is faster than your network provider.
Figure 2.21 Note the storage reservations which control how much free space is assigned to “background” processes in Dell EqualLogic.
Figure 2.22 The “Keep failback snapshot” option accelerates failback; before enabling it, ensure that you have enough free space to hold the data.
3. On the Step 3 – Summary page of the wizard (see Figure 2.23), click Finish and you will be asked “Would you like to create a volume replica now?” Selecting Yes automates the process of creating a volume at the destination group to receive updates from the Protected Site group. Select Yes.
There are many places to monitor this initial one-off replication of the volume to the Recovery Site. You can view the status of the replication at the Protected Site array under the Replication pane, and view the status of the replication partners by examining the Outbound Replicas node. A similar interface exists on the Recovery Site array, listed as an Inbound Replicas node. Additionally, on the properties of the primary volume itself at the Protected Site array, there is a Replication tab that will show statistics (see Figure 2.24). Notice how no schedule is yet attached to this replica; that is configured in the next stage, which is where you indicate how frequently the replication will take place.
Figure 2.23 Select Yes to create a volume replica; later we will attach a schedule to enable the replication to occur at particular times.
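To make the wizard’s default reserves concrete, here is the arithmetic for a hypothetical 100GB volume using the default percentages described above:

```shell
# Replica reserve space consumed for a 100GB volume at the wizard defaults:
# local reserve 100% (space guaranteed for changed blocks) and remote
# reserve 200% (a full copy of the volume plus one volume's worth of changes).
volume_gb=100
local_pct=100
remote_pct=200

local_gb=$(( volume_gb * local_pct / 100 ))
remote_gb=$(( volume_gb * remote_pct / 100 ))
echo "local reserve:  ${local_gb}GB"
echo "remote reserve: ${remote_gb}GB"
```

In other words, replicating a 100GB volume at the defaults earmarks 300GB of capacity across the two groups, which is why it pays to revisit these percentages once you know your real rate of change.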
Configuring a Schedule for Replication
Once a replication object has been created, you must attach a schedule to the process to make sure it repeats at a frequency that befits your recovery point objectives (RPOs) and recovery time objectives (RTOs). This is configured on the properties of the volume, in the Activities pane under the Schedules option (see Figure 2.25).
Figure 2.24 The Remote Replicas view will keep a history of each cycle of replication that has occurred.
Figure 2.25 EqualLogic separates the creation of the replication job from configuring its frequency.
To configure a schedule for replication, follow these steps.
1. On the Step 1 – Schedule Type page of the wizard (see Figure 2.26) enter a friendly name for the schedule, and select the radio button marked “Replication schedule.” Under the “Schedule options” section there are many options. Choose “Daily schedule (run daily at a specified time),” which will allow you to select the frequency and time for when the replication can occur.
2. On the Step 2 – Daily Schedule page of the wizard (see Figure 2.27), configure the start and end dates and start and end times for when the schedule will be running, the frequency with which the schedule will be run, and the maximum number of replicas to keep. In this sample, replication occurs every day at five-minute intervals during office hours, keeping only the last 50 minutes’ worth of information changes. This schedule is only offered as an example of the options available. Configure your schedule relative to your RPO based on the constraints of bandwidth, latency, dropped packets, and amount of data change within a given period.
Figure 2.26 Although the radio button states “daily” it is possible to specify schedules that replicate by intervals of one minute.
Figure 2.27 Configuring the daily schedule and setting the maximum number of replicas to keep
Once you have configured a schedule, you can save it as a template and then select “Reuse existing schedule” instead of reentering start and end dates and times. In addition, you can modify and delete schedules under the Schedules tab when you select the volume in question. It’s also possible to have multiple schedules active on the same volume at the same time if you require such a configuration.
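The retention behind the example schedule is simply the replication interval multiplied by the number of replicas kept. A quick sketch using the figures from the example (every 5 minutes, keeping 10 replicas, roughly the last 50 minutes of changes):

```shell
# Retention window implied by a replication schedule: the oldest replica
# you can roll back to is roughly interval * max_replicas minutes old.
interval_min=5
max_replicas=10

retention_min=$(( interval_min * max_replicas ))
echo "worst-case RPO:   ${interval_min} minutes"
echo "retention window: ${retention_min} minutes"
```

Working the calculation in the other direction is often more useful: decide the RPO and retention window you need first, then derive the interval and replica count, and sanity-check them against your bandwidth and rate of change.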
This completes the configuration of the EqualLogic system for VMware SRM. If you monitor the various status windows for replication you should see the schedule building a list of previous replications relative to your schedule.
Using EqualLogic Host Integration for VMware Edition (HIT-VE)
Alongside many other storage vendors EqualLogic has created its own storage management plug-ins for vCenter. Indeed, you might prefer to use these on a daily basis for provisioning volumes since they are often very easy to use and they reduce the number of configuration steps required in the environment. In the context of SRM, they may well speed up the process of initially allocating storage to your ESXi hosts in the Protected Site. Once the volumes are provisioned and presented to the ESXi hosts, it will merely be a case of setting up the appropriate replication relationship. In addition to provisioning new storage, the EqualLogic HIT-VE has the ability to deploy new virtual desktop clones, and allows high-level access to management options previously only available from the Group Manager.
The HIT-VE is a virtual appliance that you can download, import, and power on within the vSphere environment. After the first power on of the virtual appliance, you will complete a series of steps to configure the appliance and “register” the HIT-VE with your vCenter. This process enables the extensions to the vSphere client. Once you open the vSphere client you should see that EqualLogic icons have been added to the Solutions and Applications section of the “home” location in vCenter (see Figure 2.28).
You can log in to the virtual appliance with a username of “root” and a password of “eql”; after you have logged in you should change the password to something more secure and less in the public domain. At this point, the “Welcome to EqualLogic Host Integration Tools for VMware” script runs, and a numbered menu guides you through the seven core stages for configuring the appliance for vCenter (see Figure 2.29). The virtual appliance ships with two virtual NIC interfaces: The eth0 interface is used to communicate with your management network where your vCenter and ESXi hosts reside, and the eth1 interface is used to communicate discretely with your EqualLogic systems. Steps 4 through 6 are where you inform the appliance of the hostnames, username, and password used to communicate with the vCenter, the Group Manager, and, optionally, one of your VMware View Connection Servers.

Most of this configuration is very straightforward; there is only one caveat. Authentication to the VMware View server is not allowed using the named “administrator” account, either for the local machine or for the Microsoft Active Directory domain. You must create an administrator account such as corp\view-admin and delegate it within the VMware View environment for the HIT-VE to use. Once steps 4 through 6 have been confirmed, you can register the plug-ins with vCenter in step 7. After you have completed steps 1 through 7, use option 10 to reboot the appliance so that these configuration changes take effect. In the context of VMware SRM, you should configure both the Protected and Recovery Sites with their own HIT-VE virtual appliance to ensure ongoing functionality should a DR event occur. At the moment you need one HIT-VE virtual appliance per vCenter installation; it’s not possible to have one virtual appliance service the needs of many vCenters, even in a linked mode configuration.
Figure 2.28 The HIT-VE adds multiple icons to the Solutions and Applications section in vCenter.
Figure 2.29 The script to configure the HIT-VE appliance, and the core stages for configuring the appliance for vCenter
Once this reboot has been completed, you should find, alongside the management icons, a right-click context-sensitive menu on the properties of clusters, datastores, and VM folders (see Figure 2.30).
Once this is correctly configured, it’s possible to manage components of the replication relationship, such as schedules of the EqualLogic Auto-Snapshot Manager, directly from vCenter (see Figure 2.31).
If you would like to learn more about the EqualLogic HIT-VE, I recently conducted a survey of storage vendor plug-ins and wrote an extended article about the HIT-VE plug-in on my blog:
www.rtfm-ed.co.uk/2011/03/13/using-dell-equallogic-hit-ve-plug-in/
Figure 2.30 Once enabled, the HIT-VE adds context-sensitive menus to various locations in vCenter.
Figure 2.31 You can manage schedules directly from vCenter.
Summary

In this chapter I briefly showed you how to set up Dell EqualLogic replication suitable for use with VMware SRM. We configured two EqualLogic systems and then configured them for replication. As I’m sure you have seen, it sometimes takes time to create this configuration. It’s perhaps salutary to remember that many of the steps you have seen only occur the first time you configure the system after an initial installation. Once your group pairing is in place, you can spend more time consuming the storage you need.
From this point onward, I recommend that you create virtual machines on the VMFS iSCSI LUN at the Protected Site EqualLogic array so that you have some test VMs to use with VMware SRM. SRM is designed to pick up only on LUNs/volumes that are accessible to the ESXi host and contain virtual machine files. In previous releases this was apparently a frequent error people encountered with SRM 4.0, but one that I have rarely seen, mainly because I have always ensured that my replicated volumes have virtual machines on them. I don’t see any point in replicating empty volumes! In SRM the Array Manager Configuration Wizard displays an error if you fail to populate the datastore with virtual machines. In my demonstrations I mainly used virtual disks, but I will cover RDMs later in this book because they are an extremely popular VMware feature.