Getting Started with EMC Celerra Replication


Originating Author

Michelle Laverick


Video Content [TBA]


Version: vCenter SRM 5.0

In this chapter you will learn the basics of configuring replication with EMC Celerra. This chapter is not intended to be the definitive last word on all the best practices or caveats associated with the procedures outlined. Instead, it’s intended to be a quick-start guide outlining the minimum requirements for testing and running a Recovery Plan with SRM; for your particular requirements, you should at all times consult further documentation from EMC and, if you have them, your storage teams. Additionally, I’ve chosen to cover configuration of the VMware iSCSI initiator to more closely align the tasks carried out at the storage layer with those carried out on the ESX host itself.

EMC provides both physical and virtual storage appliances, and is probably best known for the Fibre Channel market. However, like many storage vendors, EMC’s systems work with multiple storage protocols, and its Celerra system supports iSCSI and NFS connectivity. This approach, in which storage is increasingly addressable under whichever protocol you wish, is currently branded by EMC as “Unified Storage.” Like other vendors, EMC has publicly available virtual appliance versions of its iSCSI/NAS storage systems; specifically, its Celerra system is available as a virtual machine. If you want to learn more about the setup of the Celerra VSA, Cormac Hogan of VMware has written a getting-started guide. Additionally, the virtualgeek website run by Chad Sakac has a series of blog posts and videos on how to set up the Celerra VSA.

Before I begin, I want to clearly describe my physical configuration. It’s not my intention to endorse a particular configuration, but rather to provide some background that will clarify the step-by-step processes I’ll be documenting. I have two EMC systems in my rack: a Celerra NS-20 and a newer Unified Storage EMC NS-120. The NS-120 supports the newer VMware features such as the vStorage APIs for Array Integration (VAAI), and its Unisphere management system integrates closely with VMware: it can tell that a server is running ESX, and you can even inspect the contents of the virtual disks that make up a VM.

Both units are remarkably similar in appearance, as you can see in Figure 3.1. Incidentally, both systems are shown here without their accompanying disk shelves.

From the rear, the NS-120 cabling diagram (Figure 3.2) shows you how the system works.

The system is managed by a control station. This is purely a management node and is not involved in any disk I/O. The Celerra itself consists of two “blades” that contain the code enabling the iSCSI/NAS protocols. These blades are called Data Movers because they are responsible for moving data between the ESX hosts and the storage layer; there are two of them for redundancy. There can be up to eight, and the active/passive model can range from a 1:1 mapping to a 7:1 mapping. The Data Movers find their storage by being connected in turn to a CLARiiON CX4 or CX3. Bought together, the complete package allows all three protocols (Fibre Channel, iSCSI, and NAS) to be used.

EMC CLARiiON uses a concept called “RAID groups” to describe a collection of disks with certain RAID levels. In my case, RAID group 0 is a collection of drives used by the Celerra host using RAID5. Allocating physical storage to a Celerra system when it has Fibre Channel connectivity to the CLARiiON is not unlike giving any host (ESX, Windows, Linux) access to the storage in a RAID group. You can see the Celerra host registered in the Unisphere management system like any other host (see Figure 3.3); the difference is that the Celerra sees block storage that it can then share using iSCSI, CIFS, and NFS.

Celerra-replication- (01).jpg

Figure 3.1 Two generations of EMC equipment, both managed from Unisphere

Celerra-replication- (02).jpg

Figure 3.2 NS-120 dual blade copper ports cabling diagram

In the screen grab of the Unisphere management console in Figure 3.4 you can see I have two Celerra systems, which have been connected to an EMC CLARiiON CX3 and CX4, respectively. In this case, new-york-celerra represents the array at the Protected Site (New York) and new-jersey-celerra represents the array at the Recovery Site (New Jersey). I’m going to assume that a similar configuration is already in place in your environment. Figure 3.4 shows all four components within the scope of a single management window. You can add systems to Unisphere in the Domains section of the GUI, using the Add & Remove System link.

Celerra-replication- (03).jpg

Figure 3.3 The Celerra system could be regarded as just another host in the system as it is registered alongside the ESX hosts.

Celerra-replication- (04).jpg

Figure 3.4 From the All Systems pull-down list it is possible to select other arrays to be managed.

Creating an EMC Celerra iSCSI Target

Before you begin, it’s worth checking that the Celerra is properly licensed for the features and protocols you want to use. Log in to Unisphere and select the Celerra system from the pull-down list near the “home” area; then select the Celerra system again and click the Manage Licenses link. Figure 3.5 shows the resultant window.

Additionally, you may wish to confirm that the Data Mover has been activated to receive iSCSI communications from the host. You can do this by navigating to the Sharing icon, selecting iSCSI, and clicking the Manage Settings link. Figure 3.6 shows the result.

Once the Celerra is licensed and activated for iSCSI, we can create an iSCSI target. The iSCSI target is the listener (the server) that allows for inbound iSCSI requests from initiators (the client) to be received and processed. Many arrays come already configured with an iSCSI target, but with the Celerra you have complete control over defining its properties as you see fit. The Celerra supports many iSCSI targets, each with different aliases, giving a great deal of flexibility.

Celerra-replication- (05).jpg

Figure 3.5 Once a Celerra has been selected, you can review its settings from the System Information screen.

Celerra-replication- (06).jpg

Figure 3.6 For iSCSI requests to be successful, the Data Movers must be listening for those requests.

It’s important to know that the Celerra’s control station IP address is used solely for management. The I/O generated by ESX hosts reading and writing to a volume will be driven by the Data Mover’s interfaces. Similarly, the I/O generated by Celerra replication will be driven by the Data Mover’s interfaces. If you are unsure of what IP address the Data Mover’s interfaces have, you can see them by clicking the System icon in Unisphere and selecting the Network option in the menu.

As you can see in Figure 3.7, on the New York Celerra the Prod_Replication interface will be used to drive replication traffic, whereas the Prod_Access interface will be used by the ESX hosts to access the iSCSI LUN. Each of these logical IP interfaces is mapped to a distinct physical network interface, named cge0 and cge1 respectively. It wouldn’t be unusual to have these mapped to different VLANs for security, and to ensure that replication traffic does not interfere with day-to-day production traffic. It is worth pointing out that the devices (the physical network cards such as cge0 and cge1) and the logical interfaces (for example, Prod_Replication and Prod_Access) that you build on top of those devices do not require a one-to-one mapping; particularly on 10Gb devices you could have tens of logical interfaces. In my case, a simple configuration suits my purposes for demonstration.

The remaining steps in creating an EMC Celerra iSCSI target are as follows.

1. In Unisphere, under the Sharing icon, select the iSCSI option in the menu.

2. In the iSCSI pane, select the Target Wizard link, as shown in Figure 3.8.

Celerra systems generally come with at least three blades: the first is the control station, and the second and third are the Data Movers. Most Celerra systems ship with two Data Mover blades for fault tolerance and redundancy.

Celerra-replication- (07).jpg

Figure 3.7 The Interfaces tab allows you to view what IP addresses have been allocated.

Celerra-replication- (08).jpg

Figure 3.8 Under the Sharing button you can see the three main file sharing protocols supported by the Celerra, including iSCSI.

3. Click Next in the wizard to select the default Data Mover. In Figure 3.9 you can see that I am selecting the first Data Mover called “nyc_server_2.”

4. Enter a unique target alias, such as “new-york-celerra-iscsitarget1”; this is the friendly name by which the iSCSI target will be known in the management tools. By default, if you leave the Auto Generate Target Qualified Name (IQN) option enabled, the system will create an IQN for the target that looks like this:

iqn.1992-05.com.emc:apm001024024270000-1

Alternatively, you can unselect this option and set your own custom IQN, such as:

iqn.2009-11

Figure 3.10 shows the former configuration.
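The IQN convention itself is simple: the literal iqn., the year and month the naming authority registered its domain, the reversed domain name, and an optional colon-delimited unique suffix. As a rough sketch of that convention (the helper names here are my own, not anything the Celerra or ESX exposes):

```python
import re

def make_iqn(year_month: str, reversed_domain: str, unique: str = "") -> str:
    """Assemble an iSCSI Qualified Name: iqn.<yyyy-mm>.<reversed-domain>[:<unique>]."""
    iqn = f"iqn.{year_month}.{reversed_domain}"
    if unique:
        iqn += f":{unique}"
    return iqn

def looks_like_iqn(name: str) -> bool:
    """Loose sanity check against the iqn.<yyyy-mm>.<domain>[:<suffix>] pattern."""
    return re.match(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.\-]+(:.+)?$", name) is not None

# Rebuilding the auto-generated target name shown earlier in the chapter.
target = make_iqn("1992-05", "com.emc", "apm001024024270000-1")
```

Whether you auto-generate or hand-craft the name, keeping the suffix unique per target (or per host, for initiators) is what matters; the rest of the string is just the registered-domain convention.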

Celerra-replication- (09).jpg

Figure 3.9 Selecting the Data Mover that will host the iSCSI target

Celerra-replication- (10).jpg

Figure 3.10 The iSCSI target alias is a soft name presented to hosts connecting to it. The Celerra can support many unique inbound target names.

5. Click the Add button to include the network interfaces used by the Data Mover for iSCSI. You may choose to separate your CIFS, NFS, and iSCSI traffic onto different interfaces to take advantage of features such as jumbo frames.

These network interfaces have IP addresses, and will listen by default on the iSCSI TCP port of 3260 for inbound requests from initiators—in our case, this will be the software iSCSI initiator which is built into ESX. As shown in Figure 3.11, in the dialog box I selected the cge1 interface; if you remember this had the friendly name of “Prod_Access.”

6. Click Finish and then click Close.

This wizard essentially automates the process of creating an iSCSI target. We could have created the target manually and modified it at any time by clicking the Sharing icon, selecting iSCSI, and clicking the Targets tab (see Figure 3.12).

IMPORTANT NOTE: You can now repeat this step at the Recovery Site Celerra—in my case, this is New Jersey.

Celerra-replication- (11).jpg

Figure 3.11 The interface whose IP address hosts use to connect to the Celerra system

Celerra-replication- (12).jpg

Figure 3.12 Now that a new target has been created it’s possible to begin creating iSCSI LUNs.

Granting ESX Host Access to the EMC Celerra iSCSI Target

Now that we have created the iSCSI target it’s a good idea to enable the software iSCSI initiator on the ESX hosts. This means that when we create an iSCSI LUN the hosts will be “preregistered” on the Celerra system, so we will just need to select the ESX host and grant it access to the LUN based on the iSCSI IQN we assigned to it.

If you have a dedicated iSCSI hardware adapter you can configure your IP settings and IQN directly on the card. One advantage of this is that if you wipe your ESX host, your iSCSI settings remain on the card; however, they are quite pricey. Therefore, many VMware customers prefer to use the ESX host’s iSCSI software initiator. The iSCSI stack in ESX 5 has been recently overhauled, and it is now easier to set up and offers better performance. The following instructions explain how to set up the iSCSI stack in ESX 5 to speak to the Celerra iSCSI target we just created.

Before you enable the software initiator/adapter in the ESX host, you will need to create a VMkernel port group with the correct IP data to communicate with the Celerra iSCSI target. Figure 3.13 shows my configuration for esx1 and esx2; notice that the vSwitch has two NICs for fault tolerance. I’m using ESX Standard vSwitches (SvSwitches), but there’s nothing stopping you from using a Distributed vSwitch (DvSwitch) if you have access to it. Personally, I prefer to reserve the DvSwitch for virtual machine networking, and use the SvSwitch for any ESX host-specific networking tasks. Remember, ESX 4 introduced an iSCSI port binding feature that allows you to control multipathing settings, and ESX 5 added a port binding policy check which is used to validate your configuration. So, in my case, I created a single SvSwitch with two VMkernel ports, each with its own unique IP configuration. On the properties of each port group (IP-Storage1 and IP-Storage2) I modified the NIC teaming policy such that IP-Storage1 had dedicated access to vmnic2 and IP-Storage2 had dedicated access to vmnic3. This iSCSI port binding, first introduced in ESX 4, reflects VMware’s continued investment in iSCSI performance: it allows true multipathed access to iSCSI volumes, for both the performance and the redundancy one would normally expect from a Fibre Channel environment.

Celerra-replication- (13).jpg

Figure 3.13 In this configuration for esx1 and esx2, the vSwitch has two NICs for fault tolerance.

Before configuring the VMware software initiator/adapter you might wish to confirm that you can communicate with the Celerra by using ping and vmkping against the IP address of the Data Mover. Additionally, you might wish to confirm that there are no errors or warnings on the VMkernel port groups you intend to use in the iSCSI Port Binding setting of the Port Properties (see Figure 3.14).
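Beyond ping and vmkping, a quick TCP probe of the standard iSCSI port (3260) confirms that the target is actually listening, not merely reachable. This is a generic sketch you could run from any management machine with Python; the Data Mover address in the comment is hypothetical and should be replaced with your own Prod_Access IP:

```python
import socket

def tcp_port_open(host: str, port: int = 3260, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical Data Mover interface address; substitute your Prod_Access IP:
# tcp_port_open("192.168.3.89")
```

A True result only proves the listener is up; LUN masking and IQN access control are still checked separately when the initiator logs in.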

Celerra-replication- (14).jpg

Figure 3.14 iSCSI port binding is enabled due to the correct configuration of the ESX virtual switch.

In ESXi 5 you should not need to manually open the iSCSI software TCP port on the ESXi firewall. The port number used by iSCSI, which is TCP port 3260, should be automatically opened. However, in previous releases of ESX this sometimes was not done, so I recommend confirming that the port is opened, just in case.

1. In vCenter, select the ESXi host and then the Configuration Tab.

2. In the Software tab, select the Security Profile link.

3. In the Firewall category, click the Properties link.

4. In the Firewall Properties dialog box (see Figure 3.15) open the TCP port (3260) for the Software iSCSI Client.

5. Add the iSCSI software adapter. In previous releases of ESX this would be generated by default, even if it wasn’t needed. The new model for iSCSI on ESX 5 allows for better control over its configuration. In the Hardware pane, click the Storage Adapters link and then click Add to create the iSCSI software adapter (see Figure 3.16).

Celerra-replication- (15).jpg

Figure 3.15 By default, ESXi 5 opens iSCSI port 3260 automatically.

Celerra-replication- (16).jpg

Figure 3.16 In ESXi 5 you must add the iSCSI software adapter. Previously, the “vmhba” alias was created when you enabled the feature.

6. Once the virtual device has been added, you should be able to select it and choose Properties.

7. In the iSCSI Initiator (vmhba34) Properties dialog box, click the Configure button; this will allow you to set your own naming convention for the IQN rather than using the auto-generated one from VMware (see Figure 3.17). Generally I use cut and paste to set this value if I have a small number of hosts, modifying the alias after the colon to ensure its uniqueness.

8. Bind the virtual device to one of the VMkernel ports on the ESX host’s vSwitch configuration. I have a port group named “IP-Storage” which is used for this purpose (see Figure 3.18).

9. Select the Dynamic Discovery tab and click the Add button.

10. Enter the IP address of the iSCSI target that is serviced by the two NICs of the Data Mover (see Figure 3.19).

Static discovery is only supported with hardware initiators. Remember, here the IP address of the Celerra iSCSI target is that of the Prod_Access Data Mover interface dedicated to iSCSI traffic (if one has been set up), not the control station which is there purely for management.

Celerra-replication- (17).jpg

Figure 3.17 Inserting a meaningful IQN for an ESX host

Celerra-replication- (18).jpg

Figure 3.18 The new iSCSI initiator allows you to add VMkernel ports that are compliant with load balancing to enable true multipathing.

Celerra-replication- (19).jpg

Figure 3.19 Entering the iSCSI target IP address to discover the volumes assigned to the ESX hosts

Celerra-replication- (20).jpg

Figure 3.20 Because of a configuration change in the ESX iSCSI stack a reboot is required.

11. Click OK.

12. Click Close to close the main dialog box, and you will be asked if you want to rescan the software iSCSI virtual HBA (in my case, vmhba34). Click Yes.

Occasionally, I’ve noticed that some changes to the software iSCSI initiator after this initial configuration may require a reboot of the ESX host (see Figure 3.20). So, try to limit your changes where possible, and think through what settings you require up front to avoid this.

Repeat this setup for the Recovery Site ESX hosts, changing the IP addresses relative to the location. In order for SRM to work with iSCSI you must at least configure the iSCSI initiator, and add an IP address for the storage arrays at the Recovery Site. The SRA within SRM will take care of presenting the storage as Recovery Plans are tested or run.

Creating a New File System

As with other storage vendors, the Celerra system has its own file system within which LUNs can be created. Often, storage vendors have their own file system to allow for advanced features such as deduplication and thin provisioning. It is possible to create the configuration manually, but again, I prefer to use the wizards in the Celerra to guide me through the process.

1. Open Unisphere on the Protected Site Celerra (New York), and select the Storage button, and then the File Systems option.

2. Select the File System Wizard link in the File Systems pane.

3. Click Next in the wizard to select the default Data Mover (see Figure 3.21).

4. Select Storage Pool as the Volume Management Type.

5. Select a Storage Pool; I have just one, clar_r5_performance, with 232,162MB of space available (see Figure 3.22).

Celerra-replication- (21).jpg

Figure 3.21 Selecting the Data Mover that will be responsible for holding the new file system

Celerra-replication- (22).jpg

Figure 3.22 The Storage Pool is created within the main Unisphere application and is an allocation of block storage to the Celerra system.

6. Specify a friendly name for the file system you are creating. I used “newyorkcelerraFS1” which is 200GB in size (see Figure 3.23).

It would be possible for me to make a file system that is almost 1TB in size and then populate it with many iSCSI LUNs. Notice how the value is specified in megabytes. A common mistake I’ve made is to make tiny file systems and LUNs because I’ve forgotten the increment is in megabytes and not gigabytes!

7. The rest of the wizard allows for more advanced settings, and you can just accept their defaults. When you’re done, click Finish and then click Close.

This wizard essentially automates the process of creating a file system to hold iSCSI LUNs. You can instead create the file system manually, and then modify it at any time by navigating to the File Systems tab (see Figure 3.24).

Celerra-replication- (23).jpg

Figure 3.23 Creation of a file system from the raw block storage

Celerra-replication- (24).jpg

Figure 3.24 The file system can be modified at any time.

IMPORTANT NOTE: You can repeat the previous steps at the Recovery Site Celerra (New Jersey), adjusting your naming convention to reflect the location. However, there is one critical caveat: The file system at the Recovery Site needs to be somewhat larger, to account for the snapshots that are generated by the replication process. How much larger? EMC recommends that if you expect a low volume of changes, the file system at the Recovery Site should be 20% larger (in my case, 240GB). In the worst-case scenario where you are experiencing a high volume of changes, the reserved space can be as much as 150% of the source. If you don’t reserve space for the snapshots you may receive a “Version Set out of space” error message. Fortunately, if you get this wrong you can increase the size of the file system as required without disrupting your production environment. The screen grab in Figure 3.25 shows the file system on the Recovery Site Celerra (New Jersey) as 250GB; it could be made larger using the Extend button. Additionally, you can also specify that the temporary writable snaps (TWS) created at the Recovery Site are thin by default, which can relieve some of the space pressure.
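The sizing guidance in the note above reduces to simple arithmetic. A minimal sketch, assuming the 20%-to-150% reserve range from EMC's recommendation (the function and parameter names are my own, purely illustrative):

```python
def destination_fs_size_mb(source_mb: int, reserve_fraction: float = 0.20) -> int:
    """
    Size the Recovery Site file system with headroom for replication snapshots.
    The guidance ranges from roughly 20% extra (low change rate) up to 150%
    (worst case); pick the fraction that matches your expected change rate.
    """
    if not 0.0 <= reserve_fraction <= 1.5:
        raise ValueError("reserve fraction outside the 0-150% guidance range")
    return int(source_mb * (1 + reserve_fraction))

# A 200GB (204,800MB) source with the low-change 20% reserve.
print(destination_fs_size_mb(204800) // 1024)  # -> 240 (GB)
```

Since Unisphere takes the file system size in megabytes, working in MB here also guards against the MB-versus-GB mistake mentioned earlier in the chapter.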

Celerra-replication- (25).jpg

Figure 3.25 A rather small 250GB file system from which LUNs could be created. In the real world this file system is likely to be much larger.

Creating an iSCSI LUN

In this section I will create an iSCSI LUN. Then I will set up asynchronous replication between the New York Celerra and the New Jersey Celerra using the ReplicatorV2 technology for which I have a license. It is possible to create the configuration manually, but in this case I prefer to use the wizards in the Celerra to guide me through the process.

1. In the Celerra Manager on the Protected Site Celerra (New York) select the Sharing button, and then select the iSCSI option. In the iSCSI pane click the Create LUN link (see Figure 3.26) to start the New iSCSI Lun Wizard.

2. Click Next in the wizard to select the default Data Mover.

3. Click Next to accept the target we created earlier (see Figure 3.27).

Celerra-replication- (26).jpg

Figure 3.26 You can see which Data Mover will service the request for the iSCSI LUN.

Celerra-replication- (27).jpg

Figure 3.27 The iSCSI target was created earlier in this chapter, so you may have one all ready to use.

4. Select the file system within which the iSCSI LUN will reside.

In my case, this is the newyorkcelerraFS1 file system that was created in the Creating a New File System section. Notice that although I defined a file system of 200GB, not all of it is available, as some of the space is needed for the file system metadata itself (see Figure 3.28).

5. Set your LUN number; in my case, I chose 100 as the LUN ID with 100GB as its size.

The Create Multiple LUNs option allows you to create many LUNs in one go that all reside on the same file system, each being the same size. If I did that here they would be 100GB in size (see Figure 3.29).
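The bookkeeping behind the Create Multiple LUNs option can be sketched as follows: LUN IDs increment from the one you chose, every LUN is the same size, and the total must fit within the file system's usable space (the function and names here are illustrative, not Celerra API calls):

```python
def plan_luns(fs_usable_mb: int, lun_size_mb: int, count: int, first_id: int = 100):
    """Return (lun_id, size_mb) pairs if they fit in the file system, else raise."""
    if lun_size_mb * count > fs_usable_mb:
        raise ValueError("requested LUNs exceed usable file system space")
    return [(first_id + i, lun_size_mb) for i in range(count)]

# A single 100GB (102,400MB) LUN with ID 100 inside a ~200GB file system.
luns = plan_luns(204800, 102400, 1)
```

Remember that the usable space is slightly less than the file system size you specified, because of the metadata overhead noted above.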

6. In the LUN Masking part of the wizard, select the option to Enable Multiple Access, and use the Grant button to add the known initiators to the list (see Figure 3.30).

ESX hosts are already listed here because I enabled the iSCSI software initiator on the ESX hosts and carried out a rescan. If your ESX hosts are not listed, you may need to add them manually to the access control list using the Add New button.

Celerra-replication- (28).jpg

Figure 3.28 Celerra systems can support multiple file systems. This file system, invisible to the ESX host, facilitates management of the wider system.

Celerra-replication- (29).jpg

Figure 3.29 A 100GB LUN being created in the 250GB file system. The Make LUN Virtually Provisioned option creates a thinly provisioned LUN.

Celerra-replication- (30).jpg

Figure 3.30 Setting up the ESX hosts’ iSCSI initiator with the IP of the iSCSI target “registers” them with the Celerra.

7. The CHAP Access dialog box allows you to configure the authentication protocol. Remember, CHAP is optional and not required by the ESX iSCSI initiator. If you do enable it at the iSCSI target you will need to review your configuration at the ESX hosts.

8. Click Finish and then click Close.

This wizard essentially automates the process of creating a LUN and allocating ESX hosts by their IQN to the LUN. You can instead create the LUN manually, and then modify it at any time by navigating to Sharing and selecting iSCSI and then the LUNs tab (see Figure 3.31).

Additionally, if you select the +iSCSI node from the top level, select the Targets tab, right-click the target and select Properties, and then select the LUN Mask tab, you can see the ESX hosts that have been allocated to the LUN.

If you return to the ESX hosts that were allocated to the iSCSI target they should now have the iSCSI LUN available (see Figure 3.32).

Celerra-replication- (31).jpg

Figure 3.31 The 100GB LUN created

Celerra-replication- (32).jpg

Figure 3.32 After a rescan of the ESX hosts, the 100GB LUN appears in the list under the Devices view.

At this stage it would be a very good idea to format the iSCSI LUN with VMFS and populate it with some virtual machines. We can then proceed to replicating the LUN to the Celerra in the Recovery Site.

IMPORTANT NOTE: You can repeat these steps at the Recovery Site Celerra (New Jersey), adjusting your names and IP addresses to reflect the location. When you create the iSCSI LUN, remember to set the LUN to be “read-only,” the privilege required at the Recovery Site for LUNs earmarked as the destination for replication (see Figure 3.33). When you run a Recovery Plan in SRM initiating DR for real, the Celerra SRA will automatically promote the LUN and make it read-writable.

There is no need to format this volume with VMFS or populate it with VMs; the empty volume created at the Recovery Site will be in receipt of replication updates from the Protected Site Celerra (New York). Additionally, by marking it as a “read-only” LUN you prevent people from accidentally formatting it within the VMware environment at the Recovery Site, where they might mistake it for an area of freely usable disk space.

Celerra-replication- (33).jpg

Figure 3.33 The destination LUN required for replication, marked as “read-only”

Configuring Celerra Replication

By now you should have two Celerra systems up and running, each with an iSCSI target, a file system, and an iSCSI LUN. One iSCSI LUN is read-write on the Protected Site Celerra (New York), while the other is read-only on the Recovery Site Celerra (New Jersey). Again, we can use the Replication Wizard to configure replication. Generally, the replication setup is a three-phase process.

1. Create a trust between the Protected Site and Recovery Site Celerras in the form of a shared secret/password.

2. Create a Data Mover interconnect to allow the Celerras to replicate to each other.

3. Enable the replication between the Protected and Recovery Sites.

Before you begin with the wizard you should confirm that the two Celerras can see each other via the cge interfaces that you intend to use for replication. You can carry out a simple ping test from the Unisphere administration Web pages by selecting the System button and, in the Network pane, clicking Run Ping Test. From the corresponding Web page you can select the Data Mover and network interface, and enter the IP address of the replication port on the destination side. In my case, I pinged the replication interface of the New Jersey Celerra from the New York Data Mover; Figure 3.34 shows the successful result.

Once you get the successful ping you can proceed to the Replication Wizard.

1. Within Unisphere on the Protected Site Celerra (New York) open the Replicas node and select Replica.

2. Click the Replication Wizard link.

Celerra-replication- (34).jpg

Figure 3.34 From Unisphere it is possible to carry out simple ping tests to confirm that IP communication is available between two arrays.

3. Under Select a Replication Type, select iSCSI LUN (see Figure 3.35).

4. Under Specify Destination Celerra Network Server click the New Destination Celerra button.

5. In the Celerra Network Server Name box shown in Figure 3.36, enter a friendly name to represent the Recovery Site Celerra (New Jersey) and enter the IP address of the control station. Also enter a passphrase which will create a trust between the Protected and Recovery Sites such that they are then able to send data and block updates.

6. Enter the credentials used to authenticate to the Recovery Site Celerra (New Jersey). By default, this should be the “nasadmin” user account and its password.

Now that the Protected Site Celerra (New York) knows of the Recovery Site Celerra (New Jersey) the next pane in the wizard, Create Peer Celerra Network Server, informs the Recovery Site Celerra of the identity of the Protected Site Celerra (New York). This effectively creates a two-way trust between the two Celerras (see Figure 3.37).

Celerra-replication- (35).jpg

Figure 3.35 Here the type of replication used is iSCSI LUN, valid for the type of storage we are presenting to the ESX host.

Celerra-replication- (36).jpg

Figure 3.36 Creating a relationship between the two arrays, valid for creating Data Mover interconnects

Celerra-replication- (37).jpg

Figure 3.37 After specifying both peers in the relationship you will be able to pair them together.

7. Click Next, and you will receive a summary of how the systems will be paired together. Click the Submit button, and the Celerras will trust each other with the shared passphrase. You will then be brought back to the beginning of the wizard where you were asked to select a destination Celerra server.

8. Select the Recovery Site Celerra (New Jersey) as shown in Figure 3.38, and click Next.

9. Now that the Celerras are trusted we need to create a Data Mover interconnect that will allow the Data Mover in the Protected Site to send data and block updates to the Recovery Site Celerra (New Jersey). Click the New Interconnect button in the Data Mover Interconnect part of the wizard.

10. Enter a friendly name for the Data Mover interconnect (see Figure 3.39).

In my case, I called the source (the Protected Site Celerra) interconnect “new-york-celerra1-to-new-jersey-celerra1.” Notice how I enabled the advanced settings, so I could select the Prod_Replication interface used by cge0.

Celerra-replication- (38).jpg

Figure 3.38 Both peers that will be paired together

Celerra-replication- (39).jpg

Figure 3.39 Entering a friendly name for the Data Mover interconnect relationship that is being created

11. Under Destination Settings select the IP address used to receive updates from the array at the Protected Site. In my case, I selected the replication interface of the New Jersey Celerra, as shown in Figure 3.40.

12. The Interconnect Bandwidth Schedule (optional) pane allows you to control how much bandwidth to allocate to the replication cycle as well as when replication happens (see Figure 3.41). Set your schedule as you see fit and click Submit to create the Data Mover interconnect.

13. Now that the Data Mover interconnect has been created you can select it in the wizard (see Figure 3.42).

14. In the Select Replication Session’s Interface pane you can select which IP address (and therefore which network interface) will take the replication traffic. I selected the address that is dedicated to Prod_Replication on the cge0 interface, as shown in Figure 3.43.

Celerra-replication- (40).jpg

Figure 3.40 Selecting the interface to be used for replication traffic

Celerra-replication- (41).jpg

Figure 3.41 Controlling when and at what frequency replication will take place via the Interconnect Bandwidth Schedule

Celerra-replication- (42).jpg

Figure 3.42 Selecting which Data Mover interconnect will be used as the path moving data from one array to another

Celerra-replication- (43).jpg

Figure 3.43 The source and destination IP addresses used for the replication traffic

15. Set a friendly replication session name for the session. Additionally, select the iSCSI LUN that you wish to replicate. In my case, this is the 100GB LUN I created earlier. I called mine “new-york-celerra-LUN100” (see Figure 3.44).

16. In the Select Destination pane, select the destination LUN which receives replication updates from the source LUN (see Figure 3.45).

17. The Update Policy pane allows you to configure the tolerance for how long replication may be unavailable. This reflects your RPO. For example, if you select ten minutes, the Recovery Site would be no more than ten minutes behind your Protected Site (see Figure 3.46). It is worth noting that the algorithm that manages this is quite intelligent and will replicate only the changes needed to keep the DR side within ten minutes of the production side. Figure 3.46 is only an example of the options available. Configure your schedule relative to your RPO based on the constraints of bandwidth, latency, dropped packets, and the amount of data change within a given period.
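The decision the update policy makes can be stated simply: if waiting any longer (plus the time the transfer itself will take) would push the Recovery Site's lag past the RPO, an update must be sent now. The following is a minimal sketch of that reasoning, not Celerra's actual algorithm; the ten-minute RPO and the transfer-time estimate are assumptions taken from the example above.

```python
from datetime import datetime, timedelta

RPO = timedelta(minutes=10)  # assumed target from the example above

def update_due(last_replicated: datetime, now: datetime,
               transfer_estimate: timedelta) -> bool:
    """True if deferring the update would let the DR copy fall behind the RPO.

    transfer_estimate is how long the delta is expected to take to ship,
    derived from the change rate and the available bandwidth.
    """
    lag_when_finished = (now - last_replicated) + transfer_estimate
    return lag_when_finished >= RPO

now = datetime(2024, 1, 15, 12, 0)
# Last update 6 minutes ago, 2-minute transfer: DR lag would be 8 min, still inside RPO.
print(update_due(now - timedelta(minutes=6), now, timedelta(minutes=2)))  # False
# Last update 9 minutes ago, 2-minute transfer: DR lag would be 11 min, so ship now.
print(update_due(now - timedelta(minutes=9), now, timedelta(minutes=2)))  # True
```

This is also why change rate matters as much as the RPO value itself: a higher change rate inflates the transfer estimate, which forces updates to start earlier to stay inside the same window.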

Celerra-replication- (44).jpg

Figure 3.44 Each replication session has a unique session name and allows for session-by-session management controls.

Celerra-replication- (45).jpg

Figure 3.45 Once a source and destination have been configured it is very easy to select them in the wizard.

Celerra-replication- (46).jpg

Figure 3.46 Configuring the tolerance on what happens if replication is unavailable for a period of time

18. Finally, click Submit, and the replication process will begin.

This wizard essentially automates the manual process of pairing the Celerras together, creating the Data Mover interconnects and the replication session. You can view and modify these entries from the Replication node in the Unisphere administration Web pages. As you gain confidence in using the Unisphere management system you will be able to create these relationships manually. Personally, I like the wizards as they stop me from forgetting critical components.

The Celerra Network Servers tab (see Figure 3.47) is where the control station IP address references are held so that the Protected Site Celerra (New York) knows how to communicate with the Recovery Site Celerra (New Jersey). The DM Interconnects tab shows the network pipe between the two Celerras that is used to transfer replication updates (see Figure 3.48). You can right-click and “validate” these connections, and also modify their properties as you wish.
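At its simplest, validating an interconnect means confirming that the peer Data Mover interface answers on the replication service port. The probe below is an illustrative stand-in for the Validate button, assuming placeholder values for the peer address and port; the real Celerra validation also checks credentials and interface name mappings, which this sketch does not attempt.

```python
import socket

def peer_reachable(peer_ip: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the peer's replication port succeeds."""
    try:
        with socket.create_connection((peer_ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# PEER_IP and PORT are hypothetical placeholders for your environment.
# peer_reachable("192.168.4.77", 8888)
```

A check like this is worth running from both sides, since replication requires each control station to reach its peer, not just one direction.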

The Replications tab shows you the replication session created by the wizard; it has buttons that allow you to stop, start, reverse, switch over, and fail back the replication relationships (see Figure 3.49).

Celerra-replication- (47).jpg

Figure 3.47 The Celerra Network Servers tab, which is where the control station IP address references are held

Celerra-replication- (48).jpg

Figure 3.48 The Data Mover interconnect relationship created by the wizard. Click Validate to confirm the interconnect is functioning correctly.

Celerra-replication- (49).jpg

Figure 3.49 The Replications tab, showing the replication session created by the wizard


In this chapter I briefly showed you how to set up the EMC Celerra iSCSI Replicator, which is suitable for use with VMware SRM. We configured two Celerra systems and then configured them for replication. As I’m sure you have seen, it takes some time to create this configuration. It’s perhaps salutary to remember that many of the steps you have seen occur only the first time you configure the system after an initial installation. Once your targets are created, the file systems and LUNs are created, and replication relationships are in place, you can spend more of your time consuming the storage.

From this point onward, I recommend that you create virtual machines on the VMFS iSCSI LUN on the Protected Site Celerra so that you have some test VMs to use with VMware SRM. SRM is designed to pick up only on LUNs/volumes that are accessible to the ESX host and contain virtual machine files. In SRM 4.0, a blank volume simply wasn’t displayed in the Array Manager Configuration Wizard; the new release instead warns you when this is the case. This was apparently a common error people hit with SRM 4.0, but one that I rarely saw, mainly because I always ensured that my replicated volumes had virtual machines on them. I don’t see any point in replicating empty volumes! In my demonstrations I mainly used virtual disks, but I will be covering RDMs later in this book because they are an extremely popular VMware feature.