Long distance links for Metro Mirror and Global Mirror partnerships

The links between clustered system pairs that perform remote mirroring must meet specific configuration, latency, and distance requirements.

Figure 1 shows an example of a configuration that uses dual redundant fabrics for Fibre Channel connections. Part of each fabric is located at the local system and part at the remote system. There is no direct connection between the two fabrics.

You can use Fibre Channel extenders or SAN routers to increase the distance between two systems. Fibre Channel extenders transmit Fibre Channel packets across long links without changing the contents of the packets. SAN routers provide virtual N_ports on two or more SANs to extend the scope of the SAN. The SAN router distributes the traffic from one virtual N_port to the other virtual N_port. The two Fibre Channel fabrics are independent of each other. Therefore, N_ports on each of the fabrics cannot directly log in to each other. See the following website for specific firmware levels and the latest supported hardware: www.ibm.com/support

If you use Fibre Channel extenders or SAN routers, you must meet the following requirements:

  • The maximum supported round-trip latency between sites depends on the type of partnership between systems, the version of software, and the system hardware that is used. Table 1 lists the maximum round-trip latency. This restriction applies to all variants of remote mirroring. Additional configuration requirements and guidelines apply to systems that perform remote mirroring over extended distances, where the round-trip time is greater than 80 ms.
  • Unless the extended-distance requirements that are described later in this topic are met, the round-trip latency between sites cannot exceed 80 ms for either Fibre Channel extenders or SAN routers. This 80 ms limit applies to all variants of remote mirroring, including Global Mirror with change volumes. IP partnerships are always limited to 80 ms because extended-distance support requires a Fibre Channel partnership.
  • In addition to the bandwidth that is needed for the replication workload, Metro Mirror and Global Mirror require 2.6 Mbps of bandwidth for intersystem heartbeat traffic.
  • If the link between two sites is configured with redundancy so that it can tolerate single failures, the link must be sized so that the bandwidth and latency requirements continue to be met during single-failure conditions (see the sizing illustration that follows this list).
  • The configuration must be tested to confirm that any failover mechanisms in the intersystem links interoperate satisfactorily with Storwize® V3700 systems.
  • All other configuration requirements must be met.
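
As a hypothetical sizing illustration (the 40 MBps workload figure is assumed, not taken from this documentation): suppose the peak write workload to be replicated between sites is 40 MBps, which is 320 Mbps. The intersystem link must then provide at least 320 Mbps for replication plus 2.6 Mbps for heartbeat traffic, roughly 323 Mbps in total. If the link consists of two redundant paths, each path must be sized to carry the full 323 Mbps on its own, because the bandwidth requirement must still be met while one path has failed.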

Configuration requirements for systems that perform remote mirroring over extended distances (greater than 80 ms round-trip latency between sites)

If you use remote mirroring between systems with round-trip latency of 80 ms to 250 ms, you must meet the following additional requirements:

  • All nodes that are used for replication must be of a supported model (see Table 1).
  • There must be a Fibre Channel partnership between systems, not an IP partnership.
  • All systems in the partnership must have a minimum software level of 7.4.0.
  • The RC buffer size setting must be 512 MB on each system in the partnership. Set it by running the chsystem -rcbuffersize 512 command on each system (see the example command sequence after this list).
    Note: Changing this setting is disruptive to Metro Mirror and Global Mirror operations. Use this command only before partnerships are created between systems or when all partnerships with the system are stopped.
  • Two Fibre Channel ports on each node that is used for replication must be dedicated to replication traffic by using SAN zoning and port masking.
  • SAN zoning should be applied to provide separate intersystem zones for each local-remote I/O group pair that is used for replication. Figure 2 illustrates this type of configuration.
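
The following command sequence is a minimal sketch of how the buffer size and port masking settings might be applied from the system CLI. The partner system name (remoteB) and the port mask value are assumptions for illustration; adjust the mask to match the ports that your zoning dedicates to replication, and check the CLI reference for your software level before you run the commands.

    # Stop the partnership first; changing the buffer size is
    # disruptive to Metro Mirror and Global Mirror operations.
    chpartnership -stop remoteB

    # Set the remote-copy buffer size to 512 MB (run on each
    # system in the partnership).
    chsystem -rcbuffersize 512

    # Restrict replication traffic to dedicated ports with a port
    # mask. The mask is read right to left, one bit per Fibre
    # Channel port; this example value enables ports 3 and 4 only.
    chsystem -partnerfcportmask 1100

    # Restart the partnership after both systems are updated.
    chpartnership -start remoteB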

In addition to the preceding requirements, the following guidelines help to optimize the performance of remote mirroring that uses Global Mirror:

  • Partnered systems should use the same number of nodes in each system for replication.
  • For maximum throughput, all nodes in each system should be used for replication, both in terms of balancing the preferred node assignment for volumes and for providing intersystem Fibre Channel connectivity.
  • On Storwize V3700 systems, provisioning dedicated node ports for local node-to-node traffic (by using port masking) isolates Global Mirror node-to-node traffic between the local nodes from other local SAN traffic. As a result, optimal response times can be achieved. This configuration of local node port masking is less of a requirement on Storwize family systems, where traffic between node canisters in an I/O group is serviced by the dedicated inter-canister link in the enclosure.
  • Where possible, use the minimum number of partnerships between systems. For example, assume that site A contains systems A1 and A2, and site B contains systems B1 and B2. In this scenario, creating separate partnerships between pairs of systems (such as A1-B1 and A2-B2) offers better Global Mirror replication performance between sites than a configuration with partnerships that are defined between all four systems. A CLI sketch of the paired configuration follows this list.
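
The following sketch shows how the paired partnerships in the example might be created. The system names (A1, B1, and so on) and the 1024 Mbps link bandwidth are assumptions for illustration; a Fibre Channel partnership must be created from both systems before it becomes fully configured.

    # On system A1: create a Fibre Channel partnership with B1 only.
    mkfcpartnership -linkbandwidthmbits 1024 -backgroundcopyrate 50 B1

    # On system B1: create the matching partnership back to A1.
    mkfcpartnership -linkbandwidthmbits 1024 -backgroundcopyrate 50 A1

    # Repeat on A2 and B2 for the second pair. A1-B1 and A2-B2 are
    # then independent partnerships; no partnership is defined
    # between A1 and B2 or between A2 and B1.

    # Verify that each partnership reports fully_configured.
    lspartnership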

Limitations on host-to-system distances

There is no limit on the Fibre Channel optical distance between Storwize V3700 nodes and host servers. You can attach a server to an edge switch in a core-edge configuration with the Storwize V3700 system at the core. Storwize V3700 systems support up to three ISL hops in the fabric. Therefore, the host server and the Storwize V3700 system can be separated by up to five Fibre Channel links. If you use longwave small form-factor pluggable (SFP) transceivers, four of the Fibre Channel links can be up to 10 km long.