Getting Started with NetApp SnapMirror


Originating Author

Michelle Laverick

Michelle Laverick.jpg

Video Content [TBA]


Version: vCenter SRM 5.0

In this chapter you will learn the basics of configuring replication with NetApp storage. As with previous chapters, this chapter is not intended to be the definitive last word on all the best practices or caveats associated with the procedures outlined. Instead, it is intended to be a quick-start guide outlining the minimum requirements for testing and running a Recovery Plan with SRM. For your particular requirements, you should at all times consult further documentation from NetApp and, if you have them, your storage teams.

NetApp currently provides a virtual storage appliance (VSA), or software version of its arrays, only through OEM solutions with blade server vendors. This is unlike other storage vendors that make publicly available VSAs the community can use to test and learn. However, NetApp training course attendees can acquire the NetApp ONTAP Simulator, which runs inside a VMware virtual machine. As I have access to the real deal in my lab environment, and since the ONTAP Simulator is not publicly downloadable, I’ve chosen not to cover its setup. If you do have access to it, Cormac Hogan has written a quick guide to getting started with it on the Viops.com website:

http://viops.vmware.com/home/docs/DOC-1603

NetApp is probably best known for providing physical storage arrays that offer data deduplication and for advocating the use of NFS with VMware. Actually, NetApp’s physical storage appliances are unified arrays, which means they support multiple storage protocols, including Fibre Channel, Fibre Channel over Ethernet, and iSCSI SANs as well as NFS and SMB (a.k.a. CIFS) NAS connectivity.

At the time of this writing, my sources at NetApp assure me that a NetApp simulator should appear sometime after the release of Data ONTAP 8.1. NetApp has made a VSA available via an OEM partner, Fujitsu, which includes it with a new blade chassis. For more details, consult Vaughn Stewart’s blog post on the subject:

http://blogs.netapp.com/virtualstorageguy/2010/12/netapp-releases-our-first-virtual-storage-array.html

In 2011, NetApp very kindly updated my lab environment from two NetApp FAS2020 systems to two FAS2040 systems (see Figure 6.1). They are racked up in my colocation facility, and they look very much like two 2U servers with vertically mounted disks behind the bezel. From what I understand, once you know one NetApp filer, you know them all. As such, what we cover in this chapter should apply to all NetApp deployments, large and small. Maybe this is what NetApp means when it says it offers a “unified storage array”?

In the main, I manage the FAS2040 systems using the NetApp System Manager, shown in Figure 6.2. The System Manager application allows you to see all your NetApp systems from one window, and its management console is very friendly. It was recently updated to enable the configuration of SnapMirror replication between multiple arrays. This new version is Web-based (previous incarnations were built around the Microsoft Management Console format) and is intended to replace the older FilerView, a Web-based administration tool natively built into NetApp filers. You can add NetApp filers to the System Manager through the Discover Storage Systems Wizard, which scans your IP network ranges, or you can click the Add button to include your arrays by hostname or IP address.

Netapp-snapmirror- (01).jpg

Figure 6.1 A NetApp FAS2040 array

Netapp-snapmirror- (02).jpg

Figure 6.2 The NetApp System Manager looks set to replace the older FilerView management interface.

In Figure 6.2, which is a screen grab of the NetApp System Manager console, you can see that I have two NetApp FAS2040 systems (new-york-filer1.corp.com and new-jersey-filer1.corp.com). I will create a volume called “virtualmachines” on the New York filer and configure it to replicate the data to the New Jersey filer using NetApp SnapMirror. This is a very simple setup indeed, but it is enough to get us started with the SRM product. Later I will cover NetApp support for block-based storage using Fibre Channel and iSCSI. Of course, it’s up to you which storage protocol you use, so choose your flavor, and once you’re happy, head off to the Configuring NetApp SnapMirror section later in this chapter.

Provisioning NetApp NFS Storage for VMware ESXi

Every NetApp storage system has the ability to serve storage over multiple protocols, so you can attach storage to ESX/ESXi servers and clusters over NFS, Fibre Channel, FCoE, and iSCSI all from one NetApp box (actually, a LUN can have simultaneous access with FC, FCoE, and iSCSI—that’s wild!). To make provisioning a lot faster, NetApp has created a vCenter plug-in called the Virtual Storage Console (VSC) which, in addition to cloning virtual machines, lets you create, resize, and deduplicate datastores and storage volumes, including securing access and setting multipathing policies. Figure 6.3 shows the possible storage options within NetApp with virtualization in mind.

I should point out that NetApp’s official stance on this issue is that the VSC is the recommended means for provisioning datastores to vSphere hosts and clusters. I will show you both the manual provisioning process and the automated—and frankly, quite simple—plug-in process.

To begin I will show you how to provision storage the old-fashioned way. The process will be slightly different depending on whether you’re provisioning NFS or LUNs, so we’ll cover those in separate sections.

In addition to virtual disks, it is possible to provide guest-connected storage directly to a VM via a storage initiator inside the guest OS. This can be accomplished with a software-based initiator for iSCSI LUNs or NFS/SMB network shares over the VM network. Storage presented in this manner is unknown to the VMware Site Recovery Manager, and as such it will not be covered in this book. In addition to these limitations, guest-connected storage requires one to connect the VM network to the storage network. For many environments, such a requirement is considered a security risk, and therefore is not recommended.

Netapp-snapmirror- (39).jpg

Figure 6.3 The range of different protocols and file systems supported by NetApp

Source: Image by Vaughn Stewart; reprinted with permission from NetApp.

Creating a NetApp Volume for NFS

NetApp uses a concept called “aggregates” to describe a collection or pool of physical disk drives of similar size and speed. The aggregate provides data protection in the form of RAID-DP, which is configured automatically. In my case, aggr0 is a collection of drives used to store Data ONTAP, which is the operating system that runs on all NetApp storage systems. Aggr1 is the remainder of my storage, which I will use to present datastores to the ESXi hosts in the New York site. To create a datastore you begin by creating a volume, sometimes referred to as a FlexVol, after you log in to the NetApp System Manager GUI management tool.

1. Open the NetApp System Manager.

2. Double-click the filer and log in with the username “root” and your own password. In my case, the filer is new-york-filer1.corp.com.

3. Expand the Storage node and select the Volumes icon.

4. Click the Create button. This will open the Create Volume box.

5. Enter the name of the volume. I called mine “vol1_virtualmachines.”

6. Select the aggregate that will hold the volume. I selected aggr1.

7. Ensure that the storage type is NAS.

8. The next part of the dialog consists of several options. In my case, I wanted to create a volume that the ESXi host would see as 100GB with 0% reserved on top for temporary snapshot space. The Create Volume dialog box allows you to indicate whether you want to guarantee space for a volume or whether you would like it to be thinly provisioned. The Options tab within the dialog box also allows you to enable data deduplication for the volume, as shown in Figure 6.4. Make your selections as befits your requirements in your environment.

9. Click the Create button to create the volume.

The next step is to give our ESXi hosts rights to the volume. By default, when you create a volume in NetApp it auto-magically makes that volume available using NFS. However, the permissions required to make the volume accessible do need to be modified. As you might know, ESXi hosts must be granted access to the NFS export by their IP address, and they also need “root” access to the NFS export.

We can modify the client permissions to allow the IP addresses used by ESXi hosts in the New York Protected Site. To do this, select the Exports icon, held within the Volumes node. Select the volume to be modified—in my case, vol1_virtualmachines—and click the

Netapp-snapmirror- (03).jpg

Figure 6.4 Volume creation options available in the Create Volume dialog box

Add button; then enter the IP addresses that reflect your ESXi hosts’ VMkernel ports for IP storage. Remember, in the case of ESXi, for an NFS export to be successfully accessed it should be mounted with “root access,” so make sure you include these permissions on export, as shown in Figure 6.5.

TIP: Some may find it easier to export the FlexVols created to the IP subnet of the VMkernel ports. This method allows one entry that provides access to all nodes on the storage network. To accomplish this, enter a client address in the following format: 172.168.3.0/24.

This graphical process can become somewhat long-winded if you have lots of volumes and ESXi hosts to manage. You might prefer the command-line options to handle this at the NetApp filer itself. For example, you can create a new volume with the following command:

vol create vol1_virtualmachines aggr1 100g

Once created, the volume can be “exported” for NFS access and made available to specific IP VMkernel ports within ESXi with this command:

exportfs -p rw=172.168.3.101:172.168.3.102,root=172.168.3.101:172.168.3.102 /vol/vol1_virtualmachines

Netapp-snapmirror- (04).jpg

Figure 6.5 The main Edit Export Rule dialog box from within which the correct privileges and rights can be assigned

Or it can be made available to an entire subnet of ESXi hosts with this command:

exportfs -p rw=172.168.3.0/24,root=172.168.3.0/24 /vol/vol1_virtualmachines
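And if you ticked the deduplication option shown in Figure 6.4, the console equivalent is the sis command. A sketch:

sis on /vol/vol1_virtualmachines
sis start -s /vol/vol1_virtualmachines

The second command scans the data already in the volume; new writes are then deduplicated on the normal sis schedule.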

Finally, log in to the filer in the Recovery Site—in my case, this is new-jersey-filer1.corp.com—and repeat this volume creation process. This volume at the Recovery Site will be used to receive updates from the Protected Site NetApp filer (new-york-filer1.corp.com). The only difference in my configuration was that I decided to call the volume “vol1_replica_of_virtualmachines.” This volume at the Recovery Site must be the same size or larger than the volume previously created for SnapMirror to work. So watch out with the MB/GB pull-down lists, as it’s quite easy to create a volume that’s 100MB, which then cannot receive updates from a volume that’s 100GB. It sounds like a pretty idiotic mistake to make, and it’s easily corrected by resizing the volume, but you’d be surprised at how easily it occurs (I know because I have done it quite a few times!). The important thing to remember here is that we only need to set NFS export permissions on the volume in the Protected Site, as the Site Recovery Manager, with the NetApp SRA, will handle setting the export permissions for the SnapMirror destination volume, and will automatically mount the NFS exports whenever you test or run an SRM Recovery Plan.
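The command line works just as well on the Recovery Site filer. A sketch, assuming the New Jersey system also has an aggregate named aggr1:

vol create vol1_replica_of_virtualmachines aggr1 100g
vol size vol1_replica_of_virtualmachines

The second command reports the current size, and vol size vol1_replica_of_virtualmachines +10g would grow the volume if you ever fall foul of the “destination too small” mistake described above.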

Granting ESXi Host Access to NetApp NFS Volumes

The next stage is to mount the virtualmachines volume we created on the Protected Site NetApp filer (new-york-filer1). Before you mount an NFS export at the ESXi host you will need to create a VMkernel port group, if you have not already done so, configured with the correct IP addresses to communicate with the NetApp filer. Figure 6.6 shows my configuration for ESXi1 and ESXi2; notice that the vSwitch has two NICs for fault tolerance. Before proceeding with the next part of the configuration, you might wish to confirm that you can communicate with the NetApp filer by conducting a simple test using ping and vmkping.

1. In vCenter, select the ESXi host and click the Configuration tab.

2. In the Hardware pane, select the Storage link.

3. Click the Add Storage link in the far right-hand corner.

Netapp-snapmirror- (05).jpg

Figure 6.6 VMkernel port with an IP address valid for connecting to the NetApp array on a vSwitch backed by two vmnics for fault tolerance

4. Select Network File System in the dialog box.

5. Enter the IP address or name of the Protected Site filer.

6. Enter the name of the volume using “/vol/” as the prefix. In my case, this is “/vol/vol1_virtualmachines,” as shown in Figure 6.7.

Note that if you have in excess of eight NFS exports to mount you will need to increase the NFS.MaxVolumes parameter in the advanced settings. Mounting many NFS exports to many ESXi hosts using the GUI is quite tedious and laborious, so you may wish to use a PowerCLI script with a foreach loop instead; a command-line sketch for a single host follows this list.

7. Enter a friendly datastore name. I used “netapp-virtualmachines.”

Although the FQDN would have worked in the Add Storage Wizard, I prefer to use a raw IP address. I’ve found the mounting and browsing of NFS datastores to be much quicker if I use an IP address instead of an FQDN in the dialog box. Although, quite frankly, this might have more to do with the crazy DNS configuration in my lab environment, which can take on up to four different identities within one week! From this point onward, I’m sticking with “corp.com.” At the end of the day, you may prefer to use specific IP addresses, as the NetApp filer can listen for inbound connections on many interfaces. By using an IP you will be more certain which interface is being used.
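If you would rather script the host side of this, the same mount can be made from the ESXi 5 command line (and then looped over your hosts with PowerCLI or SSH). This is a sketch for one host, assuming the filer presents NFS on 172.168.3.89, the IP-storage interface used later in this chapter for iSCSI; the MaxVolumes value of 64 is simply a common choice when the default of eight is too low:

# confirm the VMkernel network can actually reach the filer
vmkping 172.168.3.89

# raise the NFS mount limit only if you need more than the default of eight
esxcli system settings advanced set -o /NFS/MaxVolumes -i 64

# mount the export under a friendly datastore name, then list the mounts
esxcli storage nfs add --host=172.168.3.89 --share=/vol/vol1_virtualmachines --volume-name=netapp-virtualmachines
esxcli storage nfs list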

The next stage is to configure these NFS volumes to be replicated with SnapMirror. If you know you won’t be using another storage protocol with ESXi, such as Fibre Channel and iSCSI, you can skip the next section which explains how to configure these storage types.

Netapp-snapmirror- (06).jpg

Figure 6.7 Naming the volume

Creating NetApp Volumes for Fibre Channel and iSCSI

Storage presented over Fibre Channel and iSCSI provides block-based access; this means the host will see what appears to be a disk drive and will allow you to use VMware’s own file system, VMFS. To achieve this, many storage vendors carve out areas from their physical disk drives into logical units (LUNs). NetApp takes a slightly different approach: LUNs are created inside flexible volumes, which allows features such as deduplication to work with LUNs. The iSCSI stack on a NetApp filer is not enabled by default, so I will begin with the steps for enabling it.

1. Open the NetApp System Manager and connect to the NetApp filer. Expand the Configuration node, and select the Protocols icon and then the iSCSI icon; then click the Start button to enable the iSCSI service, as shown in Figure 6.8.

If you wanted to use Fibre Channel connectivity, you would need to start the FCP service that is just below the iSCSI interface in the System Manager. Of course, with the FCP you would have to make sure the appropriate zoning was configured at the FC-switch to allow your ESXi hosts the ability to “see” the NetApp filer with this protocol.

Before we try to create a new LUN, it’s worth setting up the initiator groups that will allow the ESXi hosts access to the LUNs. NetApp allows you to create groups that, in turn, contain either the IQN of each host (for the iSCSI protocol) or the WWN (for the Fibre Channel protocol).

2. To create these groups select the Storage node, and then select LUNs and the Initiator Groups tab. Click the Create button to create the group and add the appropriate information. Give the group a friendly name, and indicate the system used—in my case, VMware—and the protocol required. Use the Initiators tab to input the IQN or WWN as required, as shown in Figure 6.9. Figure 6.10 shows the IQNs of two ESX hosts: esx1 and esx2.

Netapp-snapmirror- (07).jpg

Figure 6.8 The status of the iSCSI service on a NetApp array

Netapp-snapmirror- (08).jpg

Figure 6.9 The Create Initiator Group dialog box which can be configured to contain either IQN or WWN values

Netapp-snapmirror- (09).jpg

Figure 6.10 The IQNs of two ESX hosts: esx1 and esx2

The next step is to create a LUN to be presented to the ESXi hosts listed in the initiator group we just created. Remember, in NetApp the LUN resides in “volumes” to allow for advanced functionality. Fortunately, the Create LUN Wizard will create both at the same time.

3. Select the Storage node, and then select LUNs and the LUN Management tab; click the Create button to start the wizard. Give the LUN a friendly name, set the host type, and specify its size, as shown in Figure 6.11. Click Next.

4. Select which aggregate (array of physical disks) to use, and create a new volume for the LUN to reside within it, as shown in Figure 6.12.

5. Allocate the new LUN to the appropriate group. In my case, this is the NYC_ESX_ Hosts_iSCSI group, as shown in Figure 6.13. Click Next and then click Finish; the wizard creates the volume and the LUN and allocates the group to the volume.

Now that we’ve created a LUN and presented it to our ESXi server, we can create a datastore to use the LUN.

Netapp-snapmirror- (10).jpg

Figure 6.11 Creating lun10, at 100GB in size and using a type of “VMware”

Netapp-snapmirror- (11).jpg

Figure 6.12 Selecting an aggregate and creating a new volume for the LUN to reside within it

Netapp-snapmirror- (12).jpg

Figure 6.13 The LUN being bound to the initiator group created earlier
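The GUI steps above can also be carried out at the filer console. The following is a sketch using the names from the figures; the containing volume name (vol2_luns) and the two IQNs are illustrative, so replace them with your own volume name and the IQNs of your ESXi hosts:

iscsi start
igroup create -i -t vmware NYC_ESX_Hosts_iSCSI iqn.1998-01.com.vmware:esx1 iqn.1998-01.com.vmware:esx2
vol create vol2_luns aggr1 120g
lun create -s 100g -t vmware /vol/vol2_luns/lun10
lun map /vol/vol2_luns/lun10 NYC_ESX_Hosts_iSCSI

For Fibre Channel the pattern is the same: start the FCP service with fcp start, and create the initiator group with -f and the hosts’ WWPNs instead of -i and IQNs.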

Granting ESXi Host Access to the NetApp iSCSI Target

Now that we have created the iSCSI LUN it’s a good idea to enable the software iSCSI initiator on the ESXi hosts, and grant the hosts access based on the iSCSI IQN we assign to them. If you have a dedicated iSCSI hardware adapter you can configure your IP settings and IQN directly on the card. One advantage of this is that if you wipe your ESXi host, your iSCSI settings remain on the card; however, such adapters are quite pricey. Therefore, many VMware customers prefer to use the ESXi host’s iSCSI software initiator. The iSCSI stack in ESXi 5 was recently overhauled, and it is now easier to set up and offers better performance. The following instructions explain how to set it up to speak to the NetApp iSCSI target we just created.

Before you enable the software initiator/adapter in the ESXi host, you will need to create a VMkernel port group with the correct IP data to communicate to the NetApp iSCSI target. Figure 6.14 shows my configuration for ESXi1 and ESXi2; notice that the vSwitch has two NICs for fault tolerance. In Figure 6.14 I’m using ESXi Standard vSwitches (SvSwitches), but there’s nothing stopping you from using a Distributed vSwitch (DvSwitch) if you have access to it. Personally, I prefer to reserve the DvSwitch for virtual machine networking, and use the SvSwitch for any ESXi host-specific networking tasks. Remember, ESXi 5 introduced a new iSCSI port binding feature that allows you to control multipathing settings within ESXi 5. In my case, I created a single SvSwitch with two VMkernel ports each with their own unique IP configuration. On the properties of each port group (IP-Storage1 and IP-Storage2) I modified the NIC teaming policy such that IP-Storage1 has dedicated access to vmnic2 and IP-Storage2 has dedicated access to vmnic3, as shown in Figure 6.14.

Before proceeding with the configuration of the VMware software initiator/adapter, you might wish to confirm that you can communicate with the NetApp filer by using ping and vmkping against the IP address you want to use for iSCSI on the filer. Additionally, you might wish to confirm that there are no errors or warnings on the VMkernel port groups you intend to use in the iSCSI Port Binding dialog box, as shown in Figure 6.15.

Netapp-snapmirror- (13).jpg

Figure 6.14 The configuration for multipathing for iSCSI connections

Netapp-snapmirror- (14).jpg

Figure 6.15 The iSCSI Port Binding policy is enabled, meaning the host is in a fit state to support iSCSI multipathing.

In ESXi 5 you should not need to manually open the iSCSI software TCP port on the ESXi firewall. The port number used by iSCSI, which is TCP port 3260, should be automatically opened (see Figure 6.16). However, in previous releases of ESX this sometimes was not done, so I recommend confirming that the port is open, just in case.

1. In vCenter, select the ESXi host and then select the Configuration tab.

2. In the Software tab, select the Security Profile link.

3. On the Firewall category, click the Properties link.

4. In the dialog box that opens, open the TCP port (3260) for the iSCSI software client.

The next step is to add the iSCSI software adapter. In previous releases of ESXi this would be generated by default, even if it wasn’t required. The new model for iSCSI on ESXi 5 allows for better control of its configuration.

5. Click the Storage Adapters link in the Hardware pane, and click Add to create the iSCSI software adapter, as shown in Figure 6.17.

Netapp-snapmirror- (15).jpg

Figure 6.16 By default, ESXi 5 opens iSCSI port 3260 automatically.

Netapp-snapmirror- (16).jpg

Figure 6.17 In ESXi 5 you must now add a software iSCSI adapter. Previously the “vmhba” alias was created when you enabled the feature.

6. Once the virtual device has been added, you should be able to select it and choose Properties.

7. In the dialog box that opens, click the Configure button. This will allow you to set your own naming convention for the IQN rather than using the auto-generated one from VMware, as shown in Figure 6.18.

8. Bind the virtual device to one of the VMkernel ports on the ESXi host’s vSwitch configuration. In my case, I have a port group named “IP-Storage” that is used for this purpose, as shown in Figure 6.19.

9. Select the Dynamic Discovery tab, and click the Add button.

10. Enter the IP address of the iSCSI target (as shown in Figure 6.20) that is serviced by the interface of your NetApp filer—in my case, this is 172.168.3.89.

Netapp-snapmirror- (17).jpg

Figure 6.18 Using a combination of IP addresses and CHAP as the main authentication method to the array

Netapp-snapmirror- (18).jpg

Figure 6.19 VMkernel ports indicating that they are configured correctly for iSCSI multipathing

Netapp-snapmirror- (19).jpg

Figure 6.20 The Add Send Target Server dialog box, where you input the IP address of the interface on the array listening for inbound iSCSI connections

11. Click OK.

12. Click Close in the main dialog box. You will be asked if you want to rescan the software iSCSI virtual HBA (in my case, this is vmhba34). Click Yes.

Occasionally, I’ve noticed that some changes to the software iSCSI initiator after this initial configuration may require a reboot of the ESXi host, as shown in Figure 6.21. So try to limit your changes where possible, and think through what settings you require up front to avoid this.

If you were doing this for the Fibre Channel protocol, the first thing you would need to do is tell the ESXi hosts to rescan their HBAs to detect the new LUN. We can do this in vCenter by right-clicking the cluster. For QLogic HBAs, you might need to rescan a second time before the LUN is detected. You’ll see a new LUN listed under the HBA’s devices once the rescan has completed. So, once our ESXi server can see the LUN, we can create a new VMFS datastore to use it, as shown in Figure 6.22.

Netapp-snapmirror- (20).jpg

Figure 6.21 Changes to the configuration of the iSCSI initiator can require a reboot to take effect.

Netapp-snapmirror- (21).jpg

Figure 6.22 The NetApp iSCSI LUN is visible to the ESX host.
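Most of the steps above can also be scripted from the ESXi 5 command line, which is handy when you have many hosts to prepare. This is a sketch for a single host; vmhba34 is the adapter name used earlier, while vmk1 and vmk2 are assumed to be the VMkernel ports backing IP-Storage1 and IP-Storage2 (substitute your own values):

# enable the software iSCSI adapter and confirm its vmhba name
esxcli iscsi software set --enabled=true
esxcli iscsi adapter list

# bind the two VMkernel ports for multipathing
esxcli iscsi networkportal add --adapter=vmhba34 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba34 --nic=vmk2

# point dynamic discovery at the filer's iSCSI interface and rescan
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba34 --address=172.168.3.89
esxcli storage core adapter rescan --adapter=vmhba34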

Configuring NetApp SnapMirror

SnapMirror is the main data replication feature used with NetApp systems. It can perform synchronous, asynchronous, or semi-synchronous replication in either a Fibre Channel or IP network infrastructure. In this section we will configure SnapMirror to replicate asynchronously between the volumes for NFS and Fibre Channel, which we created earlier on the Protected and Recovery Site filers, but there are a couple of things we need to confirm first.

Confirm IP Visibility (Mandatory) and Name Resolution (Optional)

Before beginning with the setup of replication between the two NetApp filers it’s worth confirming that they can see each other through your IP network. I like to enable SSH support on my filers so that I can use PuTTY with them as I do with my ESXi hosts. This means I can carry out interactive commands without resorting to the BMC card. NetApp filers obviously support the ping command, and using it with both the IP address and the hostname of the Recovery Site NetApp filer (New Jersey) you can determine whether the filers can see each other, as well as whether name resolution is correctly configured, as shown in Figure 6.23.
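For example, from the New York filer’s console:

ping new-jersey-filer1.corp.com

If the hostname fails to respond but a ping of the Recovery Site filer’s raw IP address succeeds, the problem lies with name resolution rather than with routing.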

Netapp-snapmirror- (22).jpg

Figure 6.23 Use ping to verify connectivity between your NetApp arrays.

If you fail to receive positive responses in these tests, check out the usual suspects such as your router configuration and IP address. You can check your configuration for DNS under the Configuration node, and the Network and DNS icons, as shown in Figure 6.24.

Enable SnapMirror (Both the Protected and Recovery Filers)

On newly installed NetApp systems, the SnapMirror feature is likely to be disabled. For SnapMirror to function it needs to be licensed and enabled on both systems. You can confirm your licensing status by clicking the Configure link under the SnapMirror node in FilerView. You will need to repeat this task at the Recovery Site NetApp filer (New Jersey) if it, too, is a new system.

1. Log in to FilerView by opening a Web browser directly to NetApp’s management IP address.

2. Expand the SnapMirror option.

3. Click the Enable/Disable link.

4. Click the Enable SnapMirror button, as shown in Figure 6.25.
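If you prefer the console, you can list the installed licenses and switch SnapMirror on with a couple of commands. A sketch, to be run on both filers:

license
options snapmirror.enable on
options snapmirror.enable

The final command, given with no value, simply echoes the current setting back so that you can confirm the change took effect.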

Enable Remote Access (Both the Protected and Recovery Filers)

In order for us to configure NetApp SnapMirror we need to allow the filer from the Recovery Site (New Jersey) to access the Protected Site NetApp filer (New York). When we configure this we can use either an IP address or FQDN. Additionally, we can indicate whether the Recovery Site NetApp filer (New Jersey) is allowed remote access to all volumes or just selected ones. In the real world it is highly likely that you would allow remote access in both directions to allow for failover and failback; also, you would do this if your DR strategy had a bidirectional configuration where the New Jersey site was the DR location for New York, and vice versa.

Netapp-snapmirror- (23).jpg

Figure 6.24 Validate that your IP and DNS configurations are valid if you have connectivity problems.

Netapp-snapmirror- (24).jpg

Figure 6.25 Enabling SnapMirror using FilerView

1. Log in to the NetApp System Manager on the Protected Site NetApp filer. In my case, this is new-york-filer1.corp.com.

2. Select the Protection node and then click the Remote Access button, as shown in Figure 6.26.

3. In the Remote Access pop-up window, enter the IP address or name of the Recovery Site NetApp filer (New Jersey) and then click the Add button, as shown in Figure 6.27. You should now be able to browse and direct what volumes the filer in the Recovery Site is able to access.

Netapp-snapmirror- (25).jpg

Figure 6.26 Clicking the Remote Access button pairs the two NetApp arrays together.

Netapp-snapmirror- (26).jpg

Figure 6.27 Click the Add button in the Remote Access pop-up window to add the volumes that each array will be able to see.

It is possible to add the volume or Qtree by entering the string “All_volumes” in the Edit box, which grants the remote filer access to all the volumes on this filer. You can now repeat this task at the Recovery Site NetApp filer (New Jersey) to allow the Protected Site access to the Recovery Site’s volumes.
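The console equivalent of the Remote Access button is the snapmirror.access option. A sketch, run on the Protected Site filer (New York):

options snapmirror.access host=new-jersey-filer1.corp.com

Repeat it on the New Jersey filer, naming new-york-filer1.corp.com, if you want the bidirectional access described above.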

TIP: While it is possible to create Qtree–SnapMirror relationships, NetApp does not recommend their use as datastores with VMware. It seems Qtree SnapMirror is not dedupe-enabled in the way Volume SnapMirror is, and deduplication awareness can reduce bandwidth requirements considerably. Note that all SnapMirror replications can enable compression for additional bandwidth savings.

Configure SnapMirror on the Recovery Site NetApp Filer (New Jersey)

The next step is to log on to the NetApp filer on the Recovery Site (New Jersey), and enable the replication. We’ll need to restrict our destination volume in the Recovery Site so that only SnapMirror can make changes to it. Then we can create the SnapMirror relationship. The important thing to notice here is how the configuration to enable SnapMirror is focused on the Recovery Site filer. Initially, it might feel odd that the SnapMirror configuration is controlled at the Recovery Site NetApp filer (New Jersey), and that in the wizard you specify the destination location before the source location. But if you think about it, in a real DR event the destination location is where you would be managing the storage layer from the DR location.

To enable the replication, follow these steps.

1. In the NetApp System Manager, open a window on the Recovery Site filer. In my case, this is the New Jersey NetApp filer.

2. Expand the Storage node and select the Volumes icon; then locate the destination volume, right-click, and under the Status menu select the Restrict option, as shown in Figure 6.28. This restricted state is required while SnapMirror is being configured and the mirror is being initialized. Once initialization is complete, the volume will be marked as online.

3. Select the Protection node, and click the Create button to add a new mirror relationship, as shown in Figure 6.29.

Netapp-snapmirror- (27).jpg

Figure 6.28 The volume must be in a restricted mode to allow the initial configuration of SnapMirror to complete.

Netapp-snapmirror- (28).jpg

Figure 6.29 The Protection node on the source NetApp filer in the New Jersey location

4. In the Create Mirror Wizard select the radio button that reads “Select system <name of your NetApp filer> as destination system for the new mirror relationship to be created” if it is not already selected (see Figure 6.30).

5. In the System Name and Credentials page of the wizard select the source NetApp filer that will send updates to the Recovery Site, as shown in Figure 6.31.

6. Select the volume at the source location. In my case, this is the New York filer. Using the Browse button you can view the volumes on the filer and select the one you want. In my case, this is vol1_virtualmachines, as shown in Figure 6.32.

Netapp-snapmirror- (29).jpg

Figure 6.30 Selecting the system as the destination for the mirror relationship

Netapp-snapmirror- (30).jpg

Figure 6.31 The NetApp System Manager can securely save the root credentials to prevent endless reinput.

Netapp-snapmirror- (31).jpg

Figure 6.32 Once the volumes are authenticated, the administrator can browse them at the Protected Site and select the one to be mirrored.

7. In the Destination Volume or Qtree Details page of the Create Mirror Wizard, you can select or create the volume at the destination location, as shown in Figure 6.33. In my case, I selected vol1_replica_of_virtualmachines, which I created earlier.

Notice how the status of the volume is marked as restricted; remember, such volumes must be in this state when configuring SnapMirror for the first time.

8. Enable the first initialization for SnapMirror and set values for when and how often the replication should take place, as shown in Figure 6.34.

Clearly, this enables you to control how frequently replication will occur. For DR purposes, you will probably find the daily, weekly, and monthly presets inadequate for your RTO/RPO goals. You might find that using an advanced schedule allows for greater flexibility, replicating or mirroring specific volumes at differing frequencies, again based on your RPO/RTO goals. In fact, the interface is just using the “cron” format, which you may be familiar with from Linux. It is also possible to configure the relationship to replicate synchronously or semi-synchronously via the command line.

9. Choose whether to limit the bandwidth available to SnapMirror, or whether to allow unlimited bandwidth, as shown in Figure 6.35.

Netapp-snapmirror- (32).jpg

Figure 6.33 Selecting an existing volume as the destination for Snap-Mirror updates. It is possible to create a new volume at this time as well.

Netapp-snapmirror- (33).jpg

Figure 6.34 Additional options to replicate SnapMirror synchronously are available at the command line.

Netapp-snapmirror- (34).jpg

Figure 6.35 NetApp is one of a few vendors that allow you to control both schedule and bandwidth allocations to the SnapMirror process.

In my case bandwidth is not a limitation, but it’s important to remember that the bandwidth you allow interacts with your schedule and the rate of change within the volume during each cycle. If you’re not careful, you could give SnapMirror a small amount of bandwidth and a very frequent schedule; in an environment where a volume changes heavily, the replication cycle might never complete. Think of it this way: How would you feel if you were given an excessive amount of work, with no time and insufficient resources? You would have no hope of completing the job by the given deadline.
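Under the covers, a scheduled relationship like this lives in the /etc/snapmirror.conf file on the destination filer, and the whole setup can be done by hand from the console if you prefer. What follows is a sketch on the New Jersey filer using the volume names from earlier; the 15-minute schedule and the 4,096KB/s throttle are purely illustrative values. First restrict the destination volume, just as the wizard requires:

vol restrict vol1_replica_of_virtualmachines

The relationship itself is a single line in /etc/snapmirror.conf on the destination filer, listing the source, the destination, any throttle, and a cron-style minute/hour/day-of-month/day-of-week schedule:

new-york-filer1:vol1_virtualmachines new-jersey-filer1:vol1_replica_of_virtualmachines kbs=4096 0-55/15 * * *

Finally, seed the baseline copy and keep an eye on its progress:

snapmirror initialize -S new-york-filer1:vol1_virtualmachines vol1_replica_of_virtualmachines
snapmirror status

The synchronous and semi-synchronous modes mentioned earlier are selected in the same file, by replacing the schedule field with sync or semi-sync.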

Introducing the Virtual Storage Console (VSC)

Alongside many other storage vendors NetApp has created its own storage management plug-ins for vCenter. As stated at the beginning of this chapter, NetApp recommends using the VSC as the primary method for provisioning datastores. I can see why; the VSC is very easy to use and reduces the number of configuration steps required in the environment. I think you might prefer to use it on a daily basis for provisioning and managing datastores.

In the context of SRM, the VSC may well speed up the process of initially allocating storage to your ESXi hosts in the Protected Site. Once the datastores are provisioned, it will merely be a case of setting up the appropriate SnapMirror relationship. Who knows; perhaps these storage vendor plug-ins may be extended to allow the configuration of SRM’s requirements directly from within vSphere. In addition to provisioning new storage, NetApp VSC also has enhanced “storage views” and the ability to create virtual desktops using array-accelerated cloning technologies.

The NetApp VSC installs as a server-side plug-in, and can be installed along with vCenter or on a separate management server depending on your requirements. After installing the service, you will be required to open a Web page and “register” the VSC with your vCenter. This process enables the extensions to the vSphere client. Once you open the vSphere client, you should see that a NetApp icon is added to the Solutions and Applications section of the “home” location in vCenter, as shown in Figure 6.36.

This icon will allow you to carry out the post-configuration phase that involves making the plug-in aware of the NetApp filers that cover the scope of your vCenter environment. The VSC is currently a collection of plug-ins that were historically separate and have now been bundled together. As such, each component needs to be made aware of the NetApp filers in your environment. It’s likely that in the next release there will be a single shared location from which all the plug-ins retrieve that configuration information. The following core plug-ins cover common administration tasks:

• Virtual Storage Console, which enhances the storage views within vSphere

• Provisioning and Cloning, which allows for the creation of new datastores and the automated creation of virtual machines for use in virtual desktop pools with systems such as VMware View and Citrix XenDesktop

• Backup and Recovery, which allows for a schedule of snapshots that can be mounted directly by the administrator and used in the event of the VM being corrupted or deleted

Under the Provisioning and Cloning tab, as shown in Figure 6.37, you can use the Add Storage Controller Wizard to configure the NetApp filer for use by vSphere.

The next page of the Add Storage Controller Wizard includes all the resources available to the NetApp filer. This window allows the administrator to control what resources are available to anyone creating new datastores or virtual desktops. In this rather simple way, it is possible to ensure that VMware administrators can access only the correct IP interfaces, volumes, and aggregates, as shown in Figure 6.38. In my case, I made sure that my administrators did not have access to aggr0, which is the collection of disks used to hold the NetApp ONTAP system image, logs, and so forth, and I used the “Prevent further changes” option to stop other VMware admins from altering these settings.

Netapp-snapmirror- (35).jpg

Figure 6.36 Registering the NetApp VSC service causes a NetApp icon to be added to the Solutions and Applications section in vCenter.

Netapp-snapmirror- (36).jpg

Figure 6.37 Each component of the VSC requires a configuration of the storage controller.

Netapp-snapmirror- (37).jpg

Figure 6.38 Using the field picker, the administrator can control which interfaces, volumes, and aggregates are visible to the VMware admin.

The final part of the wizard allows you to configure advanced defaults, such as whether to allow the NetApp filer to reserve disk space for thinly provisioned volumes. Once this small configuration has been completed, the NetApp VSC will provide you with a sequence of buttons and menu options that let you provision new storage to your VMware clusters. This automates the creation of the volume and the mounting of it to each ESXi host, as shown in Figure 6.39.

Netapp-snapmirror- (38).jpg

Figure 6.39 VSC provisioning is best targeted at a cluster, which automates volume creation and mounting to the ESX hosts.

If you would like to learn more about the NetApp VSC, I recently conducted a survey of storage vendor plug-ins and wrote an extended article about the NetApp VSC on my blog:

www.rtfm-ed.co.uk/2011/03/02/using-netapp-vsc/

NOTE: In writing this chapter, I used the latest release of the VSC, Version 2.01. NetApp seems to update or enhance the VSC with a rather frequent cadence, so you may want to check now, and thereafter every six months or so, for updates/new releases.

Summary

In this chapter I briefly showed you how to set up NetApp SnapMirror, which is suitable for use with VMware SRM. We configured two NetApp FAS arrays and then configured them for replication or for use with SnapMirror. Lastly, we connected an ESXi host to that storage. Additionally, I showed how the new plug-ins from NetApp allow you to create volumes and mount them efficiently on the ESXi hosts.

From this point onward, I recommend that you create virtual machines on the NFS mount point so that you have some test VMs to use with VMware SRM. SRM is designed to only pick up on LUNs/volumes that are accessible to the ESXi host and contain virtual machine files. In previous releases of SRM, if you had a volume that was blank it wouldn’t be displayed in the SRM Array Manager Configuration Wizard. In the new release, it warns you if this is the case. This was apparently a popular error people had with SRM, but one that I rarely see, mainly because I always ensure that my replicated volumes have virtual machines contained on them. I don’t see any point in replicating empty volumes! In my demonstrations, I mainly used virtual disks. VMware’s RDM feature is now fully supported by SRM (it wasn’t when Version 1.0 was first released). I will be covering RDMs later in this book because it is still an extremely popular VMware feature.

Since ESXi 3.5 and vCenter 2.5, you have been able to relocate the virtual machine swap file (.vswp file) onto different datastores, rather than locating it in the default location. A good tip is to relocate the virtual machine swap file onto shared but not replicated storage. This will reduce the amount of replication bandwidth needed. It does not reduce the amount of disk space used at the Recovery Site, as this will automatically be generated on the storage at the Recovery Site. I would also add that additional storage and replication bandwidth savings can be found by enabling data deduplication on the SAN or NAS datastores and compression in the SnapMirror settings.