Problem
This technote describes limitations and workarounds when using zSeries® or S/390® servers that have Open Systems Adapter (OSA) cards.
Cause
In general, when using the MAC forwarding method, servers in the Load Balancer configuration must all be on the same network segment, regardless of the platform. Active network devices such as routers, bridges, and firewalls interfere with Load Balancer. This is because Load Balancer functions as a specialized router, modifying only the link-layer headers on the way to its next and final hop. Any network topology in which the next hop is not the final hop is not valid for Load Balancer.
Note: Tunnels, such as channel-to-channel (CTC) or inter-user communication vehicle (IUCV), are often supported. However, Load Balancer must forward across the tunnel directly to the final destination; the tunnel cannot be a network-to-network tunnel.
There is a limitation for zSeries® and S/390 servers that share an OSA card, because this adapter operates differently from most network cards. The OSA card presents its own virtual link-layer implementation, which has nothing to do with Ethernet, to the Linux® and z/OS® hosts behind it. Effectively, each OSA card looks just like Ethernet to hosts on the Ethernet (but not to the OSA hosts), and those hosts respond to it as if it were Ethernet.
The OSA card also performs some functions that relate directly to the IP layer. Responding to ARP (Address Resolution Protocol) requests is one example. Another is that a shared OSA routes IP packets based on destination IP address, rather than on Ethernet (MAC) address as a layer 2 switch would. Effectively, the OSA card is a bridged network segment unto itself.
Load Balancer that runs on an S/390 Linux or zSeries Linux host can forward to hosts on the same OSA or to hosts on the Ethernet. All the hosts on the same shared OSA are effectively on the same segment.
Load Balancer can forward out of a shared OSA because of the nature of the OSA bridge. The bridge knows the OSA port that owns the cluster IP. The bridge knows the MAC address of hosts directly connected to the Ethernet segment. Therefore, Load Balancer can MAC-forward across one OSA bridge.
However, Load Balancer cannot forward into a shared OSA. This includes a Load Balancer on S/390 Linux when the backend server is on a different OSA card from the Load Balancer. The OSA for the backend server advertises the OSA MAC address for the server IP, but when a packet arrives with the Ethernet destination address of the server's OSA and the IP address of the cluster, the server's OSA card does not know which of its hosts, if any, should receive that packet. The same principles that permit OSA-to-Ethernet MAC forwarding out of one shared OSA do not hold when forwarding into a shared OSA.
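The failure mode can be modeled as a simple lookup: a shared OSA delivers inbound packets by destination IP address, so a MAC-forwarded packet that still carries the cluster IP matches no host behind that OSA. The table and addresses below are a toy illustration of this reasoning, not OSA internals.

```python
# Toy model of the failure mode: a shared OSA delivers inbound packets by
# destination IP, not by Ethernet address. Addresses are illustrative.

osa_hosts = {
    "10.0.0.5": "linux1",  # IPs registered by hosts behind this shared OSA
    "10.0.0.6": "zos1",
}

def osa_deliver(dst_ip):
    """Return the host that registered dst_ip, or None if no host did."""
    return osa_hosts.get(dst_ip)

# A MAC-forwarded packet still carries the cluster IP as its destination.
# No host behind this OSA registered the cluster IP, so the OSA cannot
# decide which host, if any, should receive the packet:
cluster_ip = "203.0.113.10"
target = osa_deliver(cluster_ip)  # no registered owner for the cluster IP
```

Forwarding out of a shared OSA avoids this problem because the destination is then an ordinary Ethernet host reached by MAC address, not a host hidden behind another OSA's IP-based delivery.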
Solution
In Load Balancer configurations that use zSeries or S/390
servers that have OSA cards, there are two approaches you can take to work
around the problem described in this technote.
- Using platform features
If the servers in the Load Balancer configuration are on the same zSeries
or S/390 platform type, you can define point-to-point (CTC or IUCV)
connections between Load Balancer and each server. Set up the endpoints
with private IP addresses. The point-to-point connection is used for Load
Balancer-to-server traffic only. Then add the servers with the IP address
of the server endpoint of the tunnel. With this configuration, the cluster
traffic comes through the Load Balancer OSA card and is forwarded across
the point-to-point connection, where the server responds through its own default route. The response leaves through the server's OSA card, which might or might not be the same card as the Load Balancer's.
- Using Load Balancer's GRE feature
Note: The GRE feature is not available in the dual-protocol
environment of Load Balancer for IPv6.
If the servers in the Load Balancer configuration are not on the same zSeries or S/390 platform type, or if it is not possible to define a point-to-point connection between Load Balancer and each server, it is recommended that you use Load Balancer's Generic Routing Encapsulation (GRE) feature. GRE is a protocol that permits Load Balancer to forward across routers.
When using GRE, the client->cluster IP packet is received by Load Balancer, encapsulated, and sent to the server. At the server, the original client->cluster IP packet is decapsulated, and the server responds directly to the client. The advantage of using GRE is that Load Balancer sees only the client-to-server traffic, not the server-to-client traffic. The disadvantage is that it lowers the maximum segment size (MSS) of the TCP connection because of encapsulation overhead.
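The MSS cost can be quantified with simple arithmetic. The worked example below assumes IPv4 and a standard 1500-byte Ethernet MTU (values not stated in this technote): the extra outer IP header and the GRE header, including its 4-byte key, come out of the TCP payload budget.

```python
# Worked example of the MSS reduction caused by GRE encapsulation.
# Assumes IPv4 and a 1500-byte Ethernet MTU (assumptions, not technote values).

MTU = 1500
IP_HDR = 20   # bytes: IPv4 header without options
TCP_HDR = 20  # bytes: TCP header without options
GRE_HDR = 8   # bytes: 4-byte base GRE header + 4-byte key field

# Without encapsulation, the payload budget is MTU minus IP and TCP headers:
mss_plain = MTU - IP_HDR - TCP_HDR

# With GRE, an outer IP header and the GRE header are added, so the inner
# packet (and therefore the TCP payload) must shrink accordingly:
mss_gre = MTU - IP_HDR - GRE_HDR - IP_HDR - TCP_HDR
```

Under these assumptions the MSS drops from 1460 to 1432 bytes, a 28-byte overhead per segment.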
To configure Load Balancer to forward with GRE encapsulation, use the following command to add the servers:
dscontrol server add cluster_addr:port:backend_server router backend_server
where specifying backend_server as the router is valid if Load Balancer and the backend server are on the same IP subnet. Otherwise, specify the valid next-hop IP address as the router.
To configure Linux systems to perform native GRE decapsulation, for each backend server, issue the following commands:
modprobe ip_gre
ip tunnel add grelb0 mode gre ikey 3735928559
ip link set grelb0 up
ip addr add cluster_addr dev grelb0
Note: Do not define the cluster address on the loopback of the
backend servers.
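The ikey value 3735928559 in the commands above is 0xDEADBEEF in hexadecimal, carried in the optional key field of the GRE header (RFC 2890). The sketch below only illustrates that header layout; the kernel's ip_gre module builds the real header itself.

```python
import struct

# Illustrative layout of a GRE header with the Key bit set (RFC 2890).
# The kernel builds this header; this sketch only shows where the ikey goes.

GRE_KEY_FLAG = 0x2000    # K bit in the 16-bit flags/version field
ETHERTYPE_IPV4 = 0x0800  # protocol type of the encapsulated packet
IKEY = 3735928559        # the ikey value from the commands above (0xDEADBEEF)

# 2 bytes flags/version, 2 bytes protocol, 4 bytes key = 8-byte header
header = struct.pack("!HHI", GRE_KEY_FLAG, ETHERTYPE_IPV4, IKEY)
```

Both tunnel endpoints must agree on the key, which is why the same ikey value appears on every backend server and in the Load Balancer configuration.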
When using z/OS backend servers, you must use z/OS-specific commands to configure the servers to perform GRE decapsulation.