Getting Started with the HP StorageWorks P4000 Virtual SAN Appliance with Remote Copy

Originating Author

Michelle Laverick

Video Content [TBA]

Version: vCenter SRM 5.0

Hewlett-Packard (HP) provides both physical and virtual IP-based storage appliances for the storage market, and in 2008 it acquired LeftHand Networks, a popular iSCSI storage vendor. HP provides a virtual appliance called the StorageWorks P4000 virtual storage appliance (VSA) that is downloadable from the HP website for a 60-day evaluation period. In this respect, the P4000 VSA is ideal for any jobbing server guy to download and play with in conjunction with VMware’s SRM (the same applies to EMC’s Celerra VSA). If you follow this chapter to the letter you should end up with a structure that looks like the one shown in Figure 5.1 in the VSA’s management console, with the friendly names adjusted to suit your own conventions.

In the screen grab of the HP P4000 Centralized Management Console (CMC) shown in Figure 5.1 you can see that I have two VSAs (vsa1.corp.com and vsa2.corp.com), each in its own management group (NYC_Group and NJ_Group). As you can see, I have a volume called “virtualmachines” and it is replicating the data from vsa1 to vsa2 to the volume called “replica_of_virtualmachines.” It is a very simple setup indeed, but it is enough to get us started with the SRM product.

Storageworks-p4000- (01).jpg

Figure 5.1 Replication of the virtualmachines volume in the NYC_Group to the replica_of_virtualmachines volume in the NJ_Group

Some Frequently Asked Questions about the HP P4000 VSA

During my time using the HP P4000 VSA, I’ve been asked a number of questions about the use of these types of VSAs. Generally, the questions focus on scalability and performance issues. There’s a natural anxiety and assumption that a VSA will perform less well than a physical array. Of course, much depends on the scalability written into the software. Additionally, everyone now knows the VM itself presents few limitations in terms of scale and performance; those limitations evaporated some years ago. Nonetheless, that doesn’t mean every vendor-provided VSA can take advantage of the VM’s capabilities. The following is a list of questions that people in the VMware Community Forums and customers alike have asked me. The answers to these questions should help you with your decision making.

Q1. What are the recommended minimums for memory and CPU?

The minimum requirements are 1GB of RAM and one vCPU offering 2GHz or more of CPU time. Adding more vCPUs does not significantly improve performance, but the more storage you add the more memory you may require.

Q2. Should the VSA be stored on a local VMFS volume or a shared VMFS volume?

This depends entirely on the quality of the storage. If your local storage is faster and offers more redundancy than any remote storage you have, use local storage. In some environments, you might prefer to build your VSAs into a cluster, with volumes in that cluster configured with Network RAID 10 to protect the storage layer from unexpected outages. The HA capability is in the Network RAID functionality of the volumes. The best practice is not to use VMware HA, but to leverage the VSA functionality at the storage layer using Network RAID. If the VSA is placed on local storage, it effectively turns that local resource into a network resource available to other hosts and systems on the network. Additionally, it offers replication to local storage where it previously had none.

Q3. VSA is licensed by a MAC address. Should you use a static MAC address?

It is recommended that you use a static MAC address if you decide to purchase the VSA. If you are just evaluating the VSA, or simply using it to evaluate SRM, a static MAC address is not required, just recommended.

Q4. Can you use vCenter cloning to assist in creating multiple VSAs?

Yes, but the VSA must not be configured or in a management group. If you have procured a licensed version of the VSA, be aware that the VMware template deploy process generates a new MAC address for the new VM, and as such it will need licensing or relicensing after being deployed. If you do consider cloning the VSA, I recommend using static MAC addresses to more tightly control the configuration and licensing process.

Q5. Setting up two VSAs in a management group with all the appropriate settings takes some time. Can you use the clone feature in vCenter to reset lab environments?

Yes. Build up the two VSAs to the state you need, and then simply right-click the management group and choose the Shutdown Management Group option. You can then clone, delete, and clone again. You should be careful, as both the cloning and template processes change the MAC address. An alternative to this approach is to learn the HP command-line interface (CLI), which allows you to script this procedure. This is not covered in this book.

Q6. Can you capture the configuration of the VSAs and restore it?

No, not directly. You can capture the configuration for support purposes, but you cannot restore it yourself; working with HP support, you can restore a configuration that has become damaged for some reason. You can back up your configuration to a file and use this as a method of repairing a lost or damaged configuration: right-click the group in the management console and select View Management Group Configuration, which opens a dialog from which you can save the setup to a .bin file. If you do have a significant number of VSAs to configure, I recommend that you use HP’s CLI, called CLiQ, which allows you to script your configuration (see the sketch below). It also reduces the time required to roll out the appliance, and guarantees consistency of the build from one appliance to another.
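As a flavor of what CLiQ scripting looks like, here is a minimal sketch. The group IP, credentials, and volume and cluster names are placeholders, and you should verify the exact command verbs and parameter names against the CLiQ documentation for your release.

rem Hypothetical CLiQ sketch, run from a Windows management station.
rem All names, addresses, and credentials are placeholders; verify the
rem syntax against the CLiQ documentation before use.
cliq getGroupInfo login=172.168.3.99 userName=admin passWord=password
cliq createVolume volumeName=virtualmachines clusterName=NYC_Cluster size=100GB thinProvision=1 login=172.168.3.99 userName=admin passWord=password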

Question 6 concludes the frequently asked questions surrounding the VSA. Next, I will take you through the process of downloading, uploading, and configuring the appliance. You should find this to be a relatively easy process. I think the HP VSA has one of the most intuitive setup and configuration routines I’ve seen.

Downloading and Uploading the VSA

You can download a full trial version of the HP P4000 VSA, for use with either VMware ESX or Microsoft Hyper-V, from HP’s website. For obvious reasons, I’m using the ESX version in this book! To download the VSA, go to www.hp.com/go/tryvsa and follow the instructions.

After you unzip the downloaded file, use the “import” feature to upload the file to a datastore in your environment. Personally, I’m trying my best to use the OVF and OVA versions wherever possible. The main reason I am adopting the OVF version is to encourage its use among virtual appliance vendors; when it works, it is a joy to use.
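If you prefer the command line to the vSphere client wizard described next, VMware’s ovftool can deploy the same OVF. This is a minimal sketch; the VM name, datastore, and inventory locator are examples from my lab and will differ in yours.

# Hedged ovftool sketch; adjust the name, datastore, and vi:// locator to suit.
# ovftool prompts for the vCenter password at run time.
ovftool --name=hp-vsa1 --datastore=infrastructureNYC VSA.ovf vi://administrator@vcenter.corp.com/NYC/host/esx1.corp.com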

Importing the StorageWorks P4000 VSA

To import the StorageWorks P4000 VSA, follow these steps.

1. To begin, extract the .zip file.

2. Open the vSphere client to your vCenter, and select the ESX host or VMware cluster on which you want the HP VSA to run.

3. In the File menu of the vSphere client, select Deploy OVF Template.

Figure 5.2 shows the page that appears after the administrator selects the “Deploy from a file or URL” radio button and browses to the .ovf file in the extracted .zip file. This should be located in the drive you unzipped it to at the following directory path:

HP_P4000_VSA_Full_Evaluation_Software_for_VMware_ESX_requires_ESX_servers_AX696_10512\Virtual_SAN_Appliance_Trial\Virtual_SAN_Appliance\vsatrial\VSA.ovf

4. Accept the OVF Template details.

5. Accept the End User License Agreement (EULA).

6. Enter a friendly VM name for the appliance, and select a VM folder location to hold it. In my case, I called the appliance “hp-vsa1” and placed it in the Infrastructure folder, as shown in Figure 5.3.

Storageworks-p4000- (02).jpg

Figure 5.2 Browsing for the VSA’s .ovf file

Storageworks-p4000- (03).jpg

Figure 5.3 Establish a good naming convention for your virtual appliances that matches your organization’s naming convention.

7. Select an ESX host or cluster upon which the VSA will run.

8. If you have one, select a resource pool. Figure 5.4 shows the administrator selecting the Infrastructure resource pool.

Resource pools are not mandatory, and any VM can run just on the DRS cluster (or root resource pool). However, you may find resource pools useful for organizing VMs logically, and ensuring that key VMs are fairly allocated the resources they demand.

9. Select a datastore to hold the VSA’s VMX and VMDK files. Figure 5.5 shows the administrator selecting the infrastructureNYC datastore.

Note that the VSA must reside on a datastore accessible to the host. Remember, while using local storage is cost-effective, you will be unable to use features such as vMotion, DRS, and HA to manage and protect the VSA.

Storageworks-p4000- (04).jpg

Figure 5.4 Selecting the Infrastructure resource pool

Storageworks-p4000- (05).jpg

Figure 5.5 Selecting the infrastructureNYC datastore

10. Accept the default for the virtual disk format used.

This first virtual disk essentially contains the system disk of the HP VSA, so it should be safe to use the thin virtual disk format. Later we will add a second virtual disk or RDM to the appliance; this is the storage that will be presented to the ESX host. In that case, you need to be more circumspect about the format and location used from a performance perspective.

11. Select a vSwitch port group for the VSA.

Remember, this network must be accessible to the ESX hosts to allow the software iSCSI stack that exists in ESX to speak to the HP VSA and the iSCSI volumes it presents. Using utilities such as ping and vmkping should allow you to confirm the accessibility of the VSA once its IP configuration has been completed. Incidentally, the port groups (vlan11, vlan12, and vlan13) in Figure 5.6 are port groups on VMware’s Distributed vSwitch (DvSwitch). SRM does work with the older Standard vSwitches (SvSwitches), but I thought it would be interesting to use all the Enterprise Plus networking features with SRM in this book. The Virtual Storage Appliance port group is a virtual machine port group on the same virtual switch as my VMkernel ports that I created to access IP storage from an ESX host.
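For example, from the ESX host console or an SSH session (the IP address is my lab’s VSA):

# Confirm the VSA is reachable; vmkping exercises the VMkernel interface
# that the software iSCSI initiator will actually use.
ping 172.168.3.99
vmkping 172.168.3.99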

Modifying the VSA’s Settings and First-Power-On Configuration

Once the VSA has been imported the next step is to power it on for the first time. You must complete a “run once” or post-configuration phase before you can manage the VSA with its companion software.

Storageworks-p4000- (06).jpg

Figure 5.6 Selecting the network port group on a vSphere vSwitch

Adding a Virtual Disk for Storage

The next step is to add a virtual disk or RDM to the HP P4000 VSA. This disk will be a volume presented to your ESX hosts and used to store virtual machines protected by SRM. As such, you will want to make it as big as possible, since you will create VMs here. Additionally, it must be located on the SCSI 1:0 controller, as shown in Figure 5.7. Note that the HP VSA does support adding more disks at SCSI 1:1, and so on.

Later, when we create volumes in the HP P4000 VSA, you will see that it supports thin provisioning, presenting a volume of any size you like even though it does not have the actual disk space at hand. Even so, the larger this second disk is, the more space you will have for your virtual machines. The HP VSA can support five virtual disks on SCSI controller 1; with each virtual disk at a maximum of 2TB, one VSA can present up to 10TB of storage. Of course, you will have to review the memory settings of the HP VSA in order for it to manage this amount of storage.

Licensing the VSA

Before you power on the VSA for the first time, you might want to consider how the product is licensed, should you wish to use the VSA beyond the 60-day evaluation period. The VSA is licensed by the virtual MAC address that VMware generates for the VM at power on. While this auto-generated MAC address shouldn’t change, it can change in some cases, such as when you manually unregister a VM from one ESX host and register it on another. Additionally, if you fail to back up the VMX file you could lose this information forever. Lastly, if for whatever reason you clone the VSA with a vCenter clone/clone-to-template facility, a brand-new MAC address is generated at that point. You might prefer to set and record a static MAC address for your VSA in the range provided by VMware (whether you do this will depend on your circumstances and requirements). It is possible to set a static MAC address in the GUI, as shown in Figure 5.8, and there is no need to edit the virtual machine’s VMX files directly. Just remember to record your static MAC address alongside the VM name, hostname, and IP address.

Whatever you choose, static or dynamic, be sure to make a record of the MAC address so that your license key (if you have purchased one) will be valid if you need to completely rebuild the VSA from scratch. HP recommends a static MAC address.
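For reference, the GUI setting shown in Figure 5.8 corresponds to a pair of lines like the following in the VM’s VMX file. The address below is a placeholder; manually assigned static MAC addresses must fall within VMware’s 00:50:56:00:00:00 to 00:50:56:3F:FF:FF range.

ethernet0.addressType = "static"
ethernet0.address = "00:50:56:01:5A:99"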

Storageworks-p4000- (07).jpg

Figure 5.7 The VSA boots from SCSI 0:0, and must have its data held on a second SCSI controller (SCSI 1) to enhance its performance.

Primary Configuration of the VSA Host

In this section I will explain the steps involved in the primary configuration of the VSA host. Before we walk through those steps, though, you may want to consider your options for creating your second VSA. Although importing a VSA doesn’t take long, we currently have one in a clean and unconfigured state; to rapidly create a second VSA, you could run a vCenter “clone” operation to duplicate the current VSA VM configuration. You can do this even if the VM is located on local storage. HP does not support cloning the VSA once it is in a management group set up with the client console used to manage the system.

The primary configuration involves configuring the hostname and IP settings for the VSA from the VMware Remote Console window. You can navigate this utility through a combination of keystrokes such as the Tab key, space bar, and Enter/Return key. It is very simple to use; just stay away from the cursor keys for navigation, as they don’t work.

Storageworks-p4000- (08).jpg

Figure 5.8 The VSA is licensed through a MAC address, and as such the recommendation is to use a static MAC address.

1. Power on both VSA VMs.

2. Open a VMware Remote Console.

3. At the Login prompt, type “start” and press Enter. The VSA presents a blue background with white text. You can navigate around the VSA’s console using the Tab and Enter keys.

4. Press Enter at the Login prompt as shown in Figure 5.9.

5. In the menu that appears, select Network TCP/IP Settings and press Enter, as shown in Figure 5.10.

6. Cursor up to select <eth0> and press Enter, as shown in Figure 5.11. Note that, by default, the HP VSA has (and requires) only a single virtual NIC; the VSA should receive its network redundancy from a vSphere vSwitch that has multiple physical vmnics attached to it, forming a network team.

7. Change the hostname and set a static IP address.

When I repeated this process for my second VSA, I set the name to be vsa2.corp.com with an IP address of 172.168.4.99/24 and a default gateway of 172.168.4.1. (A default gateway is optional, but if your VSAs are in two different physical locations, it is likely that you will need to configure routing between the two appliances.)

Storageworks-p4000- (09).jpg

Figure 5.9 The login option. By default, the “root” account on the HP VSA is blank.

Storageworks-p4000- (10).jpg

Figure 5.10 The tiered menu system on the HP VSA’s console

Storageworks-p4000- (11).jpg

Figure 5.11 Select the eth0 interface to configure a static IP address.

Although all my equipment is in the same rack, I’ve tried to use different IP ranges with routers to give the impression that NYC and NJ represent two distinct sites with different network identities, as shown in Figure 5.12. You can navigate the interface here by using the Tab key on your keyboard.

8. Press Enter to confirm the warning about the restart of networking.

9. Use the Back options to return to the main login page. You might wish to update your DNS configuration to reflect these hostnames and IP addresses so that you can use an FQDN in various HP management tools.

Installing the Management Client

Advanced configuration is performed via the HP CMC, a simple application used to configure the VSA (a Linux version is also available). Your PC must have a valid, routable IP address to communicate with the two VSAs. You will find the HP CMC in the directory where you extracted the .zip file from the evaluation page:

HP_P4000_VSA_Full_Evaluation_Software_for_VMware_ESX_requires_ESX_servers_AX696_10512\Virtual_SAN_Appliance_Trial\Centralized_Management_Console

Storageworks-p4000- (12).jpg

Figure 5.12 Different IP ranges with routers to give the impression that NYC and NJ represent two distinct sites with different network identities

I will be using the Windows version of the CMC. Installation is very simple, and isn’t worth documenting here; a typical installation should be sufficient for the purposes of this book.

Configuring the VSA (Management Groups, Clusters, and Volumes)

Now that the appliance is operational and accessible, it’s possible to use the graphical management console to complete the higher-level configuration of the VSA. This includes adding the VSA into the console, and then configuring it so that it resides in a management group and cluster.

Adding the VSAs to the Management Console

Before you begin, you might as well test that your management PC can actually ping the VSAs. You’re not going to get very far in the next step if you can’t do this. I repeated this add process so that I have one management console showing two VSAs (vsa1/2) in two different locations.

1. Load the CMC, and the Find Systems Wizard will start.

2. If the VSA is not automatically discovered, you can click the Add button and enter the IP address or hostname of the VSAs, as shown in Figure 5.13.

3. Click OK, and repeat this for any other VSAs you wish to manage.

4. When you’re finished click Close.

Adding the VSAs to Management Groups

Each VSA will be in its own management group. During this process, you will be able to set friendly names for the groups and volumes. It clearly makes sense to use names that reflect the purpose of the unit in question, such as the following:

• NYC_Group and NJ_Group

• NYC_Cluster and NJ_Cluster

• Virtual_Machines Volume

• Replica_Of_Virtual_Machines Volume

Of course, it is entirely up to you what naming process you adopt. Just remember that these names cannot contain spaces. To add a VSA, follow these steps.

1. In the Getting Started node, click “2. Management Groups, Clusters and Volumes” and then click Next to go to the Welcome page.

2. Choose New Management Group.

3. For the management group name enter something meaningful, such as “NYC_Group”, and select the VSA you wish to add; in my case, this is vsa1.corp.com.

In a production setup, theoretically you could have five VSAs that replicate to one another asynchronously in the Protected Site, and another five VSAs in the Recovery Site that replicate to one another and with the Protected location in an asynchronous manner. Remember, spaces are not allowed in the management group name. You can use CamelCase or the underscore character (_) to improve readability, as shown in Figure 5.14.

Storageworks-p4000- (13).jpg

Figure 5.13 For this to work, the hostname must be registered in DNS. Multiple VSAs can be added into the CMC and managed centrally.

Storageworks-p4000- (14).jpg

Figure 5.14 Multiple VSAs can be added to a group. In this case, we will have one VSA for each location: NYC and NJ.

4. Set a username and password, as shown in Figure 5.15.

The username and password are stored in a separate database internal to the VSA. The database is in a proprietary binary format and is copied to all VSAs in the same management group. If you are the forgetful type, you might want to make some record of these values. They are in no way connected to the logins to your vCenter or Active Directory environment.

5. Select the “Manually set time” radio button. As the VSA is a virtual appliance, it should receive time updates from the ESX host, which is in turn configured for NTP. To enable this I edited the VMX file of my two VSAs and enabled the tools.syncTime = “TRUE” option (the exact VMX line appears after these steps).

Storageworks-p4000- (15).jpg

Figure 5.15 The HP VSA allows for its own set of user accounts and password for authentication to the group.

6. Configure the email settings as befits your environment.
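The VMX edit mentioned in step 5 is a single line, added (or set to TRUE, if already present) while the VM is powered off:

tools.syncTime = "TRUE"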

Creating a Cluster

The next stage of the wizard is to create a cluster. In our case, we will have one VSA in one management group within one cluster and a separate VSA in a different management group within a cluster. The cluster is intended for multiple VSAs within one management group; however, we cannot set up replication or snapshots between two VSAs in different sites without a cluster being created.

1. Choose Standard Cluster.

2. Enter a cluster name, such as NYC_Cluster.

3. Set a virtual IP.

This is mainly used by clusters when you have two VSAs within the same management group, and, strictly speaking, it isn’t required in our case, but it’s a best practice to set this now for possible future use. I used my next available IP of 172.168.3.98, as shown in Figure 5.16. When I set the virtual IP on my other VSA, I used an IP address of 172.168.4.98. (Virtual IPs are used in the “cluster” configuration of the HP VSA, a setup which is beyond the scope of this book. However, you must supply at least one virtual IP in order to complete the configuration.)

Storageworks-p4000- (16).jpg

Figure 5.16 Setting a virtual IP

Creating a Volume

The next step in the wizard is to create your first volume. You may skip this stage in the wizard, using the Skip Volume Creation option in the bottom right-hand corner of the page. Volume is another word for LUN. Whatever word you are familiar with, we are creating an unformatted block of storage that can be addressed by another system (in our case, ESX); once it is formatted, files can be created on it. Some storage vendors refer to this process as “creating a file system.” This can be a little confusing, as many people associate this with using EXT3, VMFS, or NTFS. A volume or file system is another layer of abstraction between the physical storage and its access by the server. It allows for advanced features such as thin provisioning or virtual storage.

A volume can be either full or thinly provisioned. With thinly provisioned volumes the disk space presented to a server or operating system can be greater than the actual physical storage available. So the volume can be 1TB in size even though you only have 512GB of actual disk space. You might know of this concept as virtual storage, whereby you procure disk space as you need it rather than up front. The downside is that you must really track and trace your actual storage utilization very carefully. You cannot save files in thin air; otherwise, you could wind up looking like Wile E. Coyote running off the edge of a cliff. If you need to, you can switch from full to thin provisioning and back again after you have created the volume.

1. Enter a volume name, such as “virtualmachines”.

2. Set the volume size.

3. For the provisioning type, check either Full or Thin.

As shown in Figure 5.17, I created a volume called “virtualmachines,” which is used to store VMs. The size of the “physical” disk is 100GB, but with thin provisioning I could present this storage as though it were a 1TB volume/LUN. (A replication-level option would be used if I were replicating within a management group. In the case of this configuration it is irrelevant because we are replicating between management groups.) The figure shows the creation of the “primary” volume at the Protected Site of NYC. At the Recovery Site, a configuration of “Scheduled Remote Copy” will configure the volume to be marked as “Secondary” and, as such, read-only.

Storageworks-p4000- (17).jpg

Figure 5.17 Creating the “primary” volume at the Protected Site of NYC

When I repeated this process for vsa2 I selected the Skip Volume Creation option, as when I set up replication between vsa1 and vsa2 the Replication Wizard will create for me a “remote volume” to accept the updates from vsa1. At the end of some quite lengthy status bars, the management group, cluster, and volume will have been created. Now we must repeat this process for vsa2, but using unique names and IP addresses:

• Management Group Name: NJ_Group

• Cluster Name: NJ_Cluster

• Volume Name: Select Skip Volume Creation

At the end of this process, you should have a view that looks similar to the one shown in Figure 5.18. The small exclamation mark alerts are caused by the HP VSA trying and failing to verify my “dummy” email configuration, and they can be safely ignored.

Licensing the HP VSA

Although the HP VSA is free to evaluate for 60 days without any license key at all, certain advanced features need a license applied to them before they are activated. You can find the Feature Registration tab when you select a VSA from the Storage Node category. License keys are plain text values that can be cut and pasted from the license fulfillment Web page, or obtained by submitting your HP order number via email or fax. This window of the interface has a Feature Registration button which, when clicked, opens a dialog box in which to edit the license key (see Figure 5.19). Remember, license keys are directly related to the MAC address of the VSA; events that change the MAC address will result in features not being available to the VSA.

Storageworks-p4000- (18).jpg

Figure 5.18 The primary volume of “virtualmachines” held in the NYC_Group. Replication has not been configured to the NJ_Group.

Storageworks-p4000- (19).jpg

Figure 5.19 The Edit License Key dialog box

Configuring the HP VSA for Replication

It is very easy to set up replication between two VSAs in two different management groups. With the HP VSA we use a scheduled remote snapshot. This allows for asynchronous replication between two VSAs at an interval of 30 minutes or more. A much smaller cycle of replication is supported between two VSAs in the same management group, but this does not work with SRM and was never intended for use across two sites. As with many iSCSI and NFS systems the HP VSA comes with the capability to control the bandwidth consumed by the cycle of replication to prevent this background process from interfering with day-to-day communications.
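To put rough, purely illustrative numbers on that: if each 30-minute cycle produces 2GB of changed blocks, the link between sites must sustain at least 2GB / 1,800 seconds ≈ 1.1MB/s, or roughly 9Mbit/s before protocol overhead. Any slower, and the next scheduled snapshot will come due before the previous copy has finished transferring.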

In the HP VSA the snapshot process begins with a local snapshot at the protected location; once completed, this snapshot is copied to the recovery location. After the first copy, the only data transferred is the changes, or deltas. This is a very common approach with asynchronous replication systems, and you are likely to see a similar approach from other storage vendors that occupy this space. A retention setting controls how long the snapshot data is kept at both the Protected and Recovery management groups. To configure the schedule, follow these steps.

1. In the CMC, expand the group node and cluster nodes to locate your volume in the list.

2. Right-click your volume and, from the context menu that appears, choose New Schedule to Remote Snapshot a Volume, as shown in Figure 5.20.

3. In the Recurrence section of the dialog box set the Recur Every option to be every 30 minutes.

4. Under Primary Snapshot Setup, set the retention option to a maximum of three snapshots.

It’s really up to you how long you keep your snapshots. In this configuration, with a 30-minute recurrence and a retention of three, the snapshots cover roughly the last 90 minutes; when the fourth snapshot is taken, the oldest one is purged. The more frequently you take snapshots and the longer you retain them, the more options exist for data recovery, but the more storage space you will require. In the test environment we are configuring, you probably won’t want to hang on to this data for too long, and you might find much less frequent intervals appropriate, as you need less space to retain the snapshots, as shown in Figure 5.21.

Storageworks-p4000- (20).jpg

Figure 5.20 The HP VSA presents several options, but only Scheduled Remote Copy is supported with VMware SRM.

5. Under Remote Snapshot Setup, make sure the NJ_Group is selected, and then click the New Remote Volume button.

This will start a separate wizard that will create a remote volume on the VSA in the NJ_Group. It will be the recipient of block updates from the other VSA in the NYC_Group.

6. Select the NJ_Group in the Management Groups, Clusters, and Volumes Wizard, and ensure that you select the radio button for Existing Cluster and the radio button to Add a Volume to the Existing Cluster, as shown in Figure 5.22.

Storageworks-p4000- (21).jpg

Figure 5.21 Notice how the option to click OK is not yet available. This is often because a “Start at” time has yet to be configured.

Storageworks-p4000- (22).jpg

Figure 5.22 Adding a new volume to the NJ_Cluster within the existing NJ_Group

7. In the Choose an Existing Cluster part of the wizard, select the cluster at the Recovery Site (in my case, this is the NJ_Cluster).

8. In the Create Volume dialog box, enter a friendly name for the volume, such as “replica_of_virtualmachines” as shown in Figure 5.23. Notice how the type of this volume is not “Primary” but “Remote”—remote volumes are read-only and can only receive updates via the scheduled remote copy process.

9. Click Finish and then click Close.

At the end of this process the New Schedule to Remote Snapshot a Volume dialog box will have been updated to reflect the creation of the remote volume. However, you will notice that despite setting all these parameters, the OK button has not been enabled. This is because we have yet to set a start date and time for the first snapshot.

Storageworks-p4000- (23).jpg

Figure 5.23 The naming of your volumes is entirely up to you, but ensure that they are meaningful and easy to recognize in the interface.

The frequency of the snapshot and the retention values are important. If you create too shallow a replication cycle, as I have done here, you could be midway through a test of your Recovery Plan only to find the snapshot you are currently working on has been purged from the system. In the end I adjusted my frequency to one hour: about midway through writing this book I ran out of storage, and that was with a system that wasn’t generating much in the way of new files or deleting old ones. So my schedule is not an indication of what you should set in the real world if you are using HP storage; it’s merely a way to get the replication working sufficiently so that you can get started with SRM.

10. In the dialog box next to the Start At time text, click the Edit button, and using the date and time interface, set when you would like the replication/snapshot process to begin.

11. Click OK. If you have not licensed the VSA, this feature will work but only for another 60 days. You may receive warnings about this if you are working with an evaluation version of the VSA.

Monitoring Your Replication/Snapshot

Of course, you will be wondering if your replication/snapshot is working. There are a couple of ways to tell. Expanding the volumes within each management group will expose the snapshots. You might see the actual replication in progress with animated icons as shown in Figure 5.24. After selecting the remote snapshot, you will see a Remote Snapshots tab on the right-hand side (see Figure 5.25). This will tell you how much data was transferred and how long the transfer took to complete.

As you can see, my replication cycle is not especially frequent, and because of the retention of the snapshots, you could regard what the HP VSA method of replication offers as a series of “undo” levels. To some degree this is true; if we have three snapshots (Pr1, Pr2, and Pr3), each separated by one hour, we have the ability to go back to the most recent snapshot or to either of the two created in the hours before it. However, most SRAs default to using the most recent snapshot created or to creating a snapshot on-the-fly, so if you wanted to utilize these “levels of undo” you would need to know your storage management tools well enough to replicate an old snapshot to the top of the stack. In other words, Pr1 would become Pr4.

Lastly, it’s worth saying that many organizations will want to use synchronous replication where bandwidth and technology allow. Synchronous replication offers the highest level of integrity because it constantly tries, in real time, to keep the disk state of the Protected Site and the Recovery Site together. Often with this form of replication, you are less restricted in the points in time to which you can roll back your data. You should know, however, that this functionality is not automated or exposed to the VMware SRM product and was never part of its design; it can only be achieved by manually managing the storage layer. A good example of a storage vendor that offers this level of granular control is EMC, whose RecoverPoint technology allows you to roll back the replication cycle with second-by-second granularity. Also remember that synchronous replication is frequently restricted in distance, such that it may be infeasible given your requirements for a DR location.

Storageworks-p4000- (24).jpg

Figure 5.24 The animated view of replication taking place

Storageworks-p4000- (25).jpg

Figure 5.25 The percentage status of the replication job together with the elapsed time since the replication began

Adding ESX Hosts and Allocating Volumes to Them

Clearly, there would be little security if you could just give your ESX hosts an IP address and “point” them at the storage. To allow your ESX hosts access to storage, the hosts must be allocated an IQN (iSCSI Qualified Name). The IQN is used within the authentication group to identify an ESX host. In case you have forgotten, the IQN is a convention rather than a hardcoded unique name (unlike the WWNs found on Fibre Channel devices) and takes the format iqn.<year-month>.<reversed-domain>:<alias>. As a domain name can only be registered to one owner at a time (although it can be transferred or sold to another organization), the convention imposes a level of uniqueness fit for its purpose. An example IQN would be:

iqn.2011-03.com.corp:esx1

In this simple setup my ESX hosts are in the NYC site, and they are imaginatively called esx1.corp.com and esx2.corp.com. My other two ESX hosts (yes, you guessed it, esx3 and esx4) are at the NJ site and do not need access to the replicated volume in the NJ_Group management group. When the administrator runs or tests a Recovery Plan within SRM the HP SRA will grant the ESX hosts access to the latest snapshot of replica_of_virtualmachines, so long as the ESX hosts in the Recovery Site have the iSCSI initiator enabled and the iSCSI Target IP has been configured. For the moment, esx3 and esx4 need no access to the VSAs at all. However, I recommend creating test volumes and ensuring that the ESX hosts in the Recovery Site can successfully connect to the HP VSA, just to be 100% sure that they are configured correctly.

Adding an ESX Host

To add an ESX host, follow these steps.

1. Expand the NYC_Group.

2. Right-click the Servers icon and select New Server.

3. Enter the FQDN of the ESX host as a friendly identifier, as shown in Figure 5.26, and in the edit box under “CHAP not required” enter the IQN of the ESX host. The VSA does support CHAP when used with SRM, but for simplicity I’ve chosen not to enable that support here.

Allocating Volumes to ESX Hosts

Now that we have the ESX hosts listed in the HP VSA we can consider giving them access to the virtualmachines volume I created earlier. There are two ways to carry out this task. You can right-click a host and use the Assign and Unassign Volumes and Snapshots menu option. This is useful if you have just one volume you specifically want a host to access.

Storageworks-p4000- (26).jpg

Figure 5.26 Adding a host to the management system using a friendly name and its iSCSI IQN

Alternatively, the same menu option can be found on the right-click of a volume—this is better for ESX hosts, because in VMware all ESX hosts need access to the same volumes formatted with VMFS for features such as vMotion, DRS, HA, and FT. We’ll use that approach here.

1. Right-click the volume; in my case, this is virtualmachines.

2. In the menu that opens, select the Assign and Unassign Volumes and Snapshots option.

3. In the Assign and Unassign Servers dialog box, enable the Assigned option for all ESX hosts in the datacenter/cluster that require access, as shown in Figure 5.27. Click OK.

When you click OK, you will receive a warning stating that this configuration is only intended for clustered systems or clustered file systems. VMFS is a clustering file system where more than one ESX host can access the volume at the same time without corruption occurring. So it is safe to continue.

Storageworks-p4000- (27).jpg

Figure 5.27 Once registered with the management console, assigning hosts to datastores is as simple as clicking with the mouse.

For now this completes the configuration of the VSA. All that we need to do is to configure the ESX host connection to the VSA.

Granting ESX Host Access to the HP VSA iSCSI Target

Now that we have created the iSCSI target, it’s a good idea to enable the software iSCSI initiator on the ESX hosts.

If you have a dedicated iSCSI hardware adapter you can configure your IP settings and IQN directly on the card. One advantage of this is that if you wipe your ESX host, your iSCSI settings remain on the card; however, they are quite pricey. Therefore, many VMware customers prefer to use the ESX host’s iSCSI software initiator. The iSCSI stack in ESX 5 has been recently overhauled, and it is now easier to set up and offers better performance. The following instructions explain how to set up the iSCSI stack to speak to the HP VSA we just created.

Before you enable the software initiator/adapter on the ESX host you will need to create a VMkernel port group with the correct IP data to communicate with the HP P4000 VSA. Figure 5.28 shows my configuration for esx1 and esx2; notice that the vSwitch has two NICs for fault tolerance. In Figure 5.28, I’m using ESX SvSwitches, but there’s nothing stopping you from using a DvSwitch if you have access to it. Personally, I prefer to reserve the DvSwitch for virtual machine networking, and use the SvSwitch for any ESX host-specific networking tasks. Remember, ESX 5 introduced a new iSCSI port binding feature that allows you to control multipathing settings for the software initiator. In my case, I created a single SvSwitch with two VMkernel ports, each with its own unique IP configuration, as shown in Figure 5.28. On the properties of each port group (IP-Storage1 and IP-Storage2) I modified the NIC teaming policy such that IP-Storage1 has dedicated access to vmnic2 and IP-Storage2 has dedicated access to vmnic3.

Storageworks-p4000- (28).jpg

Figure 5.28 Multiple VMkernel ports are only required if you’re using multipathing to the iSCSI target. Otherwise, one VMkernel port suffices.
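For those who prefer to script the host side, the following is a hedged ESXi 5 CLI sketch of the standard vSwitch build just described; the vSwitch, port group, uplink, and IP values mirror my lab and are examples only.

# Build the vSwitch and the first of the two iSCSI VMkernel ports
# (repeat for IP-Storage2/vmk2, pinning it to vmnic3 instead).
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=IP-Storage1
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=IP-Storage1
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=172.168.3.101 --netmask=255.255.255.0 --type=static
# Pin the port group to a single active uplink so iSCSI port binding is valid.
esxcli network vswitch standard portgroup policy failover set --portgroup-name=IP-Storage1 --active-uplinks=vmnic2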

Before proceeding with the configuration of the VMware software initiator/adapter, you might wish to confirm that you can communicate with the HP VSA by using ping and vmkping against the IP address of the VSA. Additionally, you might wish to confirm that there are no errors or warnings on the VMkernel port groups you intend to use in the iSCSI Port Binding dialog box, as shown in Figure 5.29.

In ESXi 5 you should not need to manually open the iSCSI software TCP port on the ESXi firewall. The port number used by iSCSI, which is TCP port 3260, should be automatically opened. However, in previous releases of ESX this sometimes was not done, so I recommend confirming that the port is opened, just in case.
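If you would rather confirm this from the command line than via the vSphere client steps that follow, a quick check on ESXi 5 looks like this (verify the exact ruleset name with the list command first):

# List the firewall rulesets, then ensure the iSCSI one is enabled.
esxcli network firewall ruleset list | grep -i iscsi
esxcli network firewall ruleset set --ruleset-id=iSCSI --enabled=true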

Storageworks-p4000- (29).jpg

Figure 5.29 With the correct configuration the iSCSI Port Binding policy should switch to an “enabled” status.

1. In vCenter, select the ESXi host and then the Configuration tab.

2. In the Software tab, select the Security Profile link.

3. On the Firewall category, click the Properties link.

4. In the dialog box that opens, open the TCP port (3260) for the Software iSCSI Client, as shown in Figure 5.30.

The next step is to add the iSCSI software adapter. In previous releases of ESX this would be generated by default, even if it wasn’t required. The new model for iSCSI on ESX 5 allows for better control of its configuration.

5. Click the Storage Adapters link in the Hardware pane, and click Add to create the iSCSI software adapter, as shown in Figure 5.31.

Storageworks-p4000- (30).jpg

Figure 5.30 Confirming that the Software iSCSI Client port of TCP 3260 is open

Storageworks-p4000- (31).jpg

Figure 5.31 Unlike in previous releases, ESXi 5 requires the iSCSI adapter to be added to the host.

6. Once the virtual device has been added, you should be able to select it and choose Properties.

7. In the dialog box that opens, click the Configure button. This will allow you to set your own naming convention for the IQN rather than using the auto-generated one from VMware, as shown in Figure 5.32.

8. The next step is to bind the virtual device to one of the VMkernel ports on the ESX host’s vSwitch configuration. In my case, I have a port group named “IP-Storage” which is used for this purpose, as shown in Figure 5.33.

9. Select the Dynamic Discovery tab and click the Add button.

10. Enter the IP address of the iSCSI target in the Add Send Target Server dialog box, as shown in Figure 5.34. In my case, that is serviced by the NIC of the HP VSA at 172.168.3.99.

11. Click OK.

12. Click Close in the main dialog box, and you will be asked if you want to rescan the software iSCSI virtual HBA (in my case, vmhba34). Click Yes.

Storageworks-p4000- (32).jpg

Figure 5.32 While it’s not mandatory to change the default IQN, most organizations do prefer to establish their own IQN convention.

Storageworks-p4000- (33).jpg

Figure 5.33 VMkernel ports 4 and 5 are compliant and acceptable for use in a multipathing configuration for iSCSI.

Storageworks-p4000- (34).jpg

Figure 5.34 Entering the iSCSI target’s IP address causes the iSCSI initiator to authenticate to the VSA and request a list of volumes to be returned.

Static discovery is only supported with hardware initiators. Occasionally, I’ve noticed that some changes to the software iSCSI initiator after this initial configuration may require a reboot of the ESX host, as shown in Figure 5.35. So try to limit your changes where possible, and think through what settings you require up front to avoid this.
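For completeness, here is a hedged CLI sketch of steps 5 through 12 on ESXi 5; the adapter name, IQN, VMkernel ports, and target IP are from my lab and will differ in your environment.

# Enable the software initiator, then identify the new adapter (e.g., vmhba34).
esxcli iscsi software set --enabled=true
esxcli iscsi adapter list
# Set a convention-based IQN, bind both VMkernel ports, add the send target,
# and rescan for devices.
esxcli iscsi adapter set --adapter=vmhba34 --name=iqn.2011-03.com.corp:esx1
esxcli iscsi networkportal add --adapter=vmhba34 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba34 --nic=vmk2
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba34 --address=172.168.3.99
esxcli storage core adapter rescan --adapter=vmhba34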

Monitoring Your iSCSI Connections

There are many places where you can confirm that you have a valid iSCSI connection. This is important because networks can and do fail. In the first instance, you should be able to see the volume/LUN when you select the virtual iSCSI HBA in the storage adapters in the vSphere client. The properties of the “virtual” HBA—in this case, vmhba32 or vmhba40 (different hosts return a different “virtual” HBA number)—and the Manage Paths dialog box will display a green diamond to indicate a valid connection, together with other meaningful information which will help you identify the volumes returned. However, more specifically you can see the status of your iSCSI connections from the VSA’s management console.

1. Expand the NYC_Group.

2. Select the NYC_Cluster.

3. Select the Volumes and Snapshots node.

4. Select the volume in the list and then click the iSCSI Sessions tab.

In my case, there are four sessions: two for each ESX host, as shown in Figure 5.36. This is because my ESX hosts have two VMkernel ports for IP storage communications, and this allows me to have a true multipathing configuration for the iSCSI storage.
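The same check can be run from the ESX side; with two bound VMkernel ports you should see two sessions per host. A hedged ESXi 5 sketch:

# List active iSCSI sessions and, if needed, their per-connection detail.
esxcli iscsi session list
esxcli iscsi session connection list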

The HP StorageWorks P4000 VSA: Creating a Test Volume at the Recovery Site

I want to confirm that my ESX hosts at the Recovery Site can communicate with my second VSA. To do this I’m going to create a blank LUN and give them access to it, so that I am satisfied they can all see this “test” volume.

Storageworks-p4000- (35).jpg

Figure 5.35 Significant changes to the software iSCSI initiator in ESX can require a reboot to take effect.

Storageworks-p4000- (36).jpg

Figure 5.36 Multipathing configurations will result in the appearance of multiple sessions to the iSCSI target system.

For Recovery Plans to work, the ESX hosts in the Recovery Site (New Jersey) need to be listed in the management system. However, they do not need to be manually assigned access to the replicated volumes; that’s something the HP SRA will do automatically whenever we carry out a test or a run of the Recovery Plan. Essentially, the following steps repeat the configuration carried out at the Protected Site (New York), but for a different VSA and different ESX hosts.

1. Open the HP Centralized Management Console.

2. Select the NJ_Group and log on.

3. Expand the Cluster node and the Volumes node.

4. Right-click +Volumes and choose New Volume.

5. In the New Volume dialog box, enter a volume name such as “TestVolume”.

6. Enter a volume size, making sure it is more than 2GB. Although we will not be formatting this LUN, ESX itself cannot format a volume that is less than 2GB in size.

Figure 5.37 shows the result of these steps. Note that hosts in the Recovery Site must be configured with the iSCSI target IP address of the VSA in the site. The SRA will handle the mounting of volumes and authentication requirements.

7. Click OK.

Next you would add your servers to the HP VSA at the Recovery Site, and then assign the volumes to them. These are precisely the same steps I’ve documented earlier, so I won’t repeat them here. At the end of this process, all the ESX hosts in the Recovery Site (New Jersey) should be able to see the TestVolume from the HP VSA.

Storageworks-p4000- (37).jpg

Figure 5.37 Creating a test volume at the Recovery Site

Shutting Down the VSA

It is recommended that you use the VSA management console to take a VSA offline. To do so, follow these steps.

1. Right-click the VSA in the Storage Nodes.

2. In the menu that opens, select Power Off or Reboot.

Alternatively, you can right-click the management group that contains the VSA and select Shutdown from there. This will perform an orderly shutdown of all the VSAs within a management group. Essentially, this places the VSA into a maintenance mode when it is powered on again, to prevent data corruption.

Summary

In this chapter I briefly showed you how to set up a 60-day evaluation copy of the virtual appliance that is suitable for use with VMware SRM. We set up two HP P4000 VSAs and then configured them for Scheduled Remote Copy. Lastly, we connected an ESX host to that storage. It’s worth saying that once you understand how the VSA works, the configuration is much the same for physical P4000 storage arrays.

From this point onward, I recommend that you format the volume/LUN with VMFS and create virtual machines. You might wish to do this so that you have some test VMs to use with VMware SRM. SRM is designed to only pick up on LUNs/volumes that are formatted with VMFS and contain virtual machine files. In previous releases, if you had a VMFS volume that was blank it wouldn’t be displayed in the SRM Array Manager Configuration Wizard. This was apparently a frequent error people had with SRM, but one that I rarely saw, mainly because I always ensured that my replicated VMFS volumes had virtual machines contained on them. I don’t see any point in replicating empty VMFS volumes! In SRM, the Array Manager Configuration Wizard now displays this error if you fail to populate the datastore with virtual machines. In my demonstrations, I mainly used virtual disks. VMware’s RDM feature is fully supported by SRM. I will be covering RDMs later in this book because it is an extremely popular VMware feature.

Since the release of ESX 3.5 and vCenter 2.5, you have been able to relocate the virtual machine swap file (.vswp) onto different datastores, rather than locating it in the default location. A good tip is to relocate the virtual machine’s swap file onto shared but not replicated storage. This will reduce the amount of replication bandwidth needed. It does not reduce the amount of disk space used at the Recovery Site, as this will be automatically generated on the storage at the Recovery Site.
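Per-VM, that override corresponds to a VMX line like the one below (the datastore path is a placeholder); in practice, though, you would normally set the swap file policy at the cluster and host level in the vSphere client rather than hand-editing VMX files.

sched.swap.dir = "/vmfs/volumes/local-swap-datastore/"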