[8.5.5.2 or later]

Migrate to Load Balancer IPv4 and IPv6

Learn how to migrate from Load Balancer for IPv4 to Load Balancer for IPv4 and IPv6, the differences between the two versions of the load balancer, and considerations during migration.

Load Balancer for IPv4 and IPv6 is a software solution for distributing incoming client requests across servers. It boosts the performance of servers by directing TCP/IP session requests to different servers within a group of servers; in this way, it balances the requests among all the servers. This load balancing is not apparent to users and other applications. Load Balancer is useful for applications such as email servers, World Wide Web servers, distributed parallel database queries, and other TCP/IP applications.

When used with web servers, Load Balancer can help maximize the potential of your site by providing a powerful, flexible, and scalable solution to peak-demand problems. If visitors to your site cannot get through at times of greatest demand, use Load Balancer to automatically find the optimal server to handle incoming requests, thus enhancing your customers' satisfaction and your profitability.

While the core function and purpose of the product are largely unchanged, the two versions differ in the features that are available, in how some functions are configured, and in the additional functionality that is offered. The following sections describe these differences.

Improvements to Load Balancer for IPv4 and IPv6

Improvements to Load Balancer for IPv4 and IPv6 are as follows:
Less dependent on the kernel
Load Balancer for IPv4 and IPv6 is less kernel-intrusive, except on the AIX® platform, where it is still kernel-dependent.
Automatic detection of cluster address
In Load Balancer for IPv4 and IPv6, you do not have to configure the cluster address; it is detected automatically. You also do not need to write go scripts when you configure a high availability environment, because the takeover processing is done automatically. The ability to write go scripts is still available in case adapter work or logging must be done during a takeover. The exception is Linux for System z Layer 3, where cluster address detection works as before: the cluster IP address must be configured and a go script is needed. In addition, you can configure connection replication to improve performance.
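For example, a go script that only logs a takeover might look like the following minimal sketch (the goActive name follows the Load Balancer go script convention; the log file location is illustrative):

    #!/bin/sh
    # goActive: run when this node takes over as the active Load Balancer.
    # Adapter aliasing is no longer required, so this sketch only records
    # the takeover for later review.
    echo "`date`: takeover complete; this node is now active" >> /var/log/lb_takeover.log
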
Encapsulated MAC forwarding
Load Balancer for IPv4 and IPv6 supports both the IPIP and GRE encapsulated forwarding methods, while Load Balancer for IPv4 supports GRE only. Data is encapsulated with either the IPIP or GRE method, and the return traffic does not pass through the Load Balancer. This method provides load balancing for servers on the WAN while still maintaining reasonable throughput.
Important: The command-line syntax differs between Load Balancer for IPv4 and Load Balancer for IPv4 and IPv6, most notably in the delimiter. Load Balancer for IPv4 uses : as the delimiter, while Load Balancer for IPv4 and IPv6 uses @ as the delimiter. This change supports IPv6 addresses, which contain the : character.
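For example, the command to add a server to port 80 of a cluster might look like the following (the addresses are illustrative):

    Load Balancer for IPv4:
    dscontrol server add 9.67.131.153:80:9.67.141.154

    Load Balancer for IPv4 and IPv6:
    dscontrol server add 9.67.131.153@80@9.67.141.154
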
CAUTION:
In addition to the differences in features, expect some degradation in the performance of Load Balancer for IPv4 and IPv6, especially on Windows platforms.

Features and functions that are supported by Load Balancer for IPv4 and IPv6

Table 1. Supported Load Balancer for IPv4 and IPv6 features
MAC forwarding
MAC forwarding is the default forwarding method of the Dispatcher. The function that is provided by Load Balancer for IPv4 and IPv6 is identical to the function of Load Balancer for IPv4. In addition, Load Balancer for IPv4 and IPv6 offers encapsulated MAC forwarding to support servers on remote subnets.
Encapsulation forwarding
The function that is provided by the Dispatcher in Load Balancer for IPv4 and IPv6 is consistent with Load Balancer for IPv4. In addition, Load Balancer for IPv4 and IPv6 supports IPIP encapsulation. However, in Load Balancer for IPv4 and IPv6, encapsulation is enabled at the server level, as opposed to the port level in Load Balancer for IPv4, to provide greater flexibility in choosing the forwarding method.
Network Address Translation forwarding
The core forwarding function in Load Balancer for IPv4 and IPv6 is consistent with Load Balancer for IPv4, except that NAT forwarding is enabled at the server level. However, unlike Load Balancer for IPv4, the Dispatcher in Load Balancer for IPv4 and IPv6 does not support NAPT (Network Address Port Translation) intrinsically as part of its implementation. For NAPT, use the NAPT feature that is available as part of the operating system on the managed server. The Dispatcher for Load Balancer for IPv4 and IPv6 is tested with NAPT enabled on managed servers that are deployed on Linux, Windows, AIX, HP-UX, and Solaris. When the managed server needs to receive on a port that is different from the port on which data is sent, use the port translation that is available in the operating system on which the server is deployed: for example, iptables on Linux, netsh on Windows platforms, and ipfilter on AIX, HP-UX, and Solaris.
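For example, a minimal sketch of OS-level port translation on a Linux managed server, assuming that Load Balancer forwards traffic to port 80 while the server application listens on port 8080:

    # Redirect inbound TCP traffic that arrives on port 80 to local port 8080
    iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
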
Content Based Routing
The Content Based Routing component is identical to the Content Based Routing component that is offered with Load Balancer for IPv4, except for the following changes:
  • The command line accepts @ as the cluster, port, server delimiter, instead of :.
  • There is no SNMP support.
  • Remote administration is not available.
  • The CBR commands are available at /opt/IBM/WebSphere/Edge/ULB/cbr/servers/bin/ on UNIX platforms and <install_directory>/cbr/servers/bin/ on Windows platforms. They are no longer present in the default /usr/bin/ directory.
  • The path to the CBR plug-in in the Caching Proxy configuration file must be updated with the new Load Balancer for IPv4 and IPv6 CBR path. For example, on Linux platforms, the path to the CBR plug-in is as follows:
    ServerInit /opt/IBM/WebSphere/Edge/ULB/cbr/servers/lib/liblbcbr.so:ndServerInit
    PostAuth /opt/IBM/WebSphere/Edge/ULB/cbr/servers/lib/liblbcbr.so:ndPostAuth
    PostExit /opt/IBM/WebSphere/Edge/ULB/cbr/servers/lib/liblbcbr.so:ndPostExit
    ServerTerm /opt/IBM/WebSphere/Edge/ULB/cbr/servers/lib/liblbcbr.so:ndServerTerm
Site Selector
The Site Selector component is identical to the Site Selector component that is offered with Load Balancer for IPv4, except for the following changes:
  • The command line accepts @ as the sitename, port, server delimiter, instead of :.
  • There is no SNMP support.
  • Remote administration is not available.
  • The SS commands are available at /opt/IBM/WebSphere/Edge/ULB/ss/servers/bin/ on UNIX platforms and <install_directory>/ss/servers/bin/ on Windows platforms. They are no longer present in the default /usr/bin/ directory.
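
For example, adding a server under a site name might look like the following (the site name and address are illustrative):

    sscontrol server add mysite.example.com@80@9.67.141.154
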
Collocation on Linux and AIX
The collocation feature in Load Balancer for IPv4 and IPv6 is consistent with Load Balancer for IPv4. Using collocated servers in production is not recommended because collocation can create a performance bottleneck and a single point of failure. In production, deploy Load Balancer on dedicated high-end servers.
Advisor
The following advisors are available in Load Balancer for IPv4 and IPv6. While most of the advisors that are supported in Load Balancer for IPv4 are available, the Caching Proxy advisor is not available in Load Balancer for IPv4 and IPv6. However, Load Balancer for IPv4 and IPv6 provides two additional advisors, was and wlm:
  • connect
  • Custom advisors
  • db2
  • dns
  • ftp
  • http
  • https
  • imap
  • ldap
  • ldapuri
  • nntp
  • ping
  • pop3
  • reach
  • sip
  • smtp
  • ssl
  • ssl2http
  • telnet
  • tls
  • was
  • wlm
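
For example, to start the HTTP advisor on port 80, a typical invocation looks like the following (the port is illustrative):

    dscontrol advisor start http 80
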
Metric Server
There is no difference in Metric Server, except for the additional support for IPv6 addressing.
High Availability
High availability in Load Balancer for IPv4 and IPv6 is similar to high availability in Load Balancer for IPv4, except for the following differences:
  • Mutual High Availability is not supported
  • The version must match for all components (major.minor.service), except for the interim fix level
  • Go scripts are not required
  • You can configure connection replication to improve performance
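
For example, a primary machine might be configured for high availability with commands similar to the following sketch (the addresses and port are illustrative, and the syntax shown is carried over from Load Balancer for IPv4; verify the exact arguments against the command reference):

    dscontrol highavailability heartbeat add 9.67.131.151 9.67.131.152
    dscontrol highavailability backup add primary auto 12345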

Features and functions that are not supported by Load Balancer for IPv4 and IPv6

Table 2. Features not supported by Load Balancer for IPv4 and IPv6
Collocation on Windows, HP-UX, and Solaris
A collocated server is not a recommended deployment architecture. Also, there are alternatives, such as OS-based load balancers like Network Load Balancing on Windows, which you can deploy based on customer requirements.
[Windows] Remember: Load Balancer for IPv4 did not support collocation on Windows platforms as of Version 8.0; therefore, there is no functional difference from Load Balancer for IPv4.
SNMP
The SNMP feature of Load Balancer for IPv4 is rarely used, and there were security concerns about this feature in the product. Basic SNMP agents that are currently available are much more robust. Hence, this feature is not supported in Load Balancer for IPv4 and IPv6. Instead, install a generic SNMP agent and use the user exits and the command line to obtain any statistics that you want to collect. The user exits run when Load Balancer states change, such as during a failover, or when server states change, such as when a server goes offline. Command-line data includes statistics such as the number of failed and successful packets that are forwarded to servers.
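
For example, forwarding statistics can be displayed from the command line with a typical invocation such as the following:

    dscontrol manager report
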
Denial of Service and wildcard port
These features are not heavily used in Load Balancer for IPv4. The denial of service function was based on a threshold for half-open connections and would then randomly drop connections. It offered only some benefit against SYN flood attacks and did not prevent clients from noticing that there was a problem. It is recommended to use security-based software to secure your site, instead of trying to provide security through the load balancer. The wildcard port exposed servers to security risks because any port would be forwarded, and sophisticated attackers might run port scans and other advanced attacks. Therefore, these features are not supported in Load Balancer for IPv4 and IPv6. There are no recommended alternatives for the wildcard port.
Passive FTP
This feature represented a nonstandard deployment of Load Balancer, and much more secure and advanced FTP products are now available.
kCBR
Kernel-based Content Routing was designed to make a server choice based on HTTP headers, such as cookies. The performance of content routing was limited, and it did not support encrypted connections. This function is available in other products, such as the On Demand Router (ODR). ODR can make server choices based on any field in the HTTP header. Alternatively, you can use the web server plug-in that is offered with WebSphere® Application Server, or use Caching Proxy with the CBR component.
Remote Administration
The remote access features of Load Balancer for IPv4 can make the system vulnerable to security attacks. Equivalent remote access capability is now available by default in the operating system, so this feature of Load Balancer is obsolete.
Wildcard cluster
The design and configuration of Load Balancer for IPv4 and IPv6 renders the wildcard feature redundant. With Load Balancer for IPv4, you had to configure the cluster address on the Ethernet card for the hardware to receive the packets. Because you no longer manually alias the addresses, this function provides no benefit.
Mutual High Availability
This feature was relevant in the first generation of Load Balancer for IPv4, when the cost of hardware was a concern. With the diminishing cost of hardware, this feature is no longer relevant. Moreover, this feature is prone to misconfiguration with serious consequences: in mutual high availability, each server must be configured to carry no more than 50% of the combined peak load; otherwise, during a failure, a single server can be loaded beyond its peak capacity. There are no recommended alternatives.