Migrate to Load Balancer IPv4 and IPv6
Learn how to migrate from Load Balancer IPv4 to Load Balancer IPv4 and IPv6, understand the differences between the two versions of the load balancer, and review considerations for the migration.
Load Balancer for IPv4 and IPv6 is a software solution for distributing incoming client requests across servers. It boosts the performance of servers by directing TCP/IP session requests to different servers within a group of servers; in this way, it balances the requests among all the servers. This load balancing is not apparent to users and other applications. Load Balancer is useful for applications such as email servers, World Wide Web servers, distributed parallel database queries, and other TCP/IP applications.
When used with web servers, Load Balancer can help maximize the potential of your site by providing a powerful, flexible, and scalable solution to peak-demand problems. If visitors to your site cannot get through at times of greatest demand, use Load Balancer to automatically find the optimal server to handle incoming requests, thus enhancing your customers’ satisfaction and your profitability.
The design of Load Balancer for IPv4 and IPv6 was informed by:
- Analysis of the scenarios in which customers use Load Balancer to provide high availability
- Other alternatives that are available
- Problems with some of the existing features
Improvements to Load Balancer IPv4 and IPv6
- Less dependent on the kernel
- Load Balancer for IPv4 and IPv6 is less kernel-intrusive, except on the AIX® platform, which remains kernel-dependent.
- Automatic detection of cluster address
- In Load Balancer for IPv4 and IPv6, you do not have to configure the cluster address; it is detected automatically. You also do not need to write go scripts when you configure a high availability environment, although the ability to write go scripts remains available in case adapter work or logging must be done during a takeover. In addition, you can configure connection replication to improve performance. The exception is Linux for System z® Layer 3, where cluster address detection works as before: the cluster IP must be configured and a go script is needed.
- Encapsulated MAC forwarding
- Load Balancer for IPv4 and IPv6 supports both IPIP and GRE encapsulated forwarding methods, while Load Balancer for IPv4 supports GRE only. Data is encapsulated with either the IPIP or GRE method, and the return traffic does not pass through Load Balancer. This method provides load balancing of servers across a WAN while still maintaining reasonable throughput.
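Because the return traffic bypasses Load Balancer, each back-end server must decapsulate the tunneled packets and answer for the cluster address itself. On a Linux back-end server, that setup might look roughly like the following sketch; every device name and address here is an illustrative placeholder, not a product default:

```shell
# Illustrative only: run as root on the managed (back-end) server.
# 10.0.0.2 stands in for this server's own address and 192.0.2.100
# for the cluster address -- both are placeholder values.
modprobe ipip                                   # load the IPIP tunnel module
ip tunnel add lbtun mode ipip local 10.0.0.2    # decapsulate incoming IPIP packets
ip link set lbtun up
ip addr add 192.0.2.100/32 dev lo               # answer for the cluster address locally
sysctl -w net.ipv4.conf.lbtun.rp_filter=0       # tolerate the asymmetric return path
```

A GRE deployment would use `mode gre` instead; the key point in either case is that replies travel directly from the server to the client.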
Features and functions that are supported by Load Balancer IPv4 and IPv6
Feature | Description |
---|---|
MAC forwarding | MAC forwarding is the default forwarding method of the Dispatcher. The function that is provided by Load Balancer for IPv4 and IPv6 is identical to the function of Load Balancer for IPv4. Load Balancer for IPv4 and IPv6 offers encapsulated MAC to enable servers on remote subnets. |
Encapsulation forwarding | The function that is provided by the Dispatcher in Load Balancer for IPv4 and IPv6 is consistent with Load Balancer for IPv4. In addition, Load Balancer for IPv4 and IPv6 supports IPIP encapsulation. However, in Load Balancer for IPv4 and IPv6, encapsulation is enabled at the server level, as opposed to the port level in Load Balancer for IPv4, to provide greater flexibility in choosing the forwarding method. |
Network Address Translation forwarding | The core forwarding function in Load Balancer for IPv4 and IPv6 is consistent with Load Balancer for IPv4, except that NAT forwarding is enabled at the server level. However, unlike Load Balancer for IPv4, the Dispatcher in Load Balancer for IPv4 and IPv6 does not support NAPT (Network Address Port Translation) intrinsically as part of its implementation. For NAPT, use the NAPT feature that is available as part of the operating system on the managed server. The Dispatcher for Load Balancer for IPv4 and IPv6 is tested with NAPT enabled on managed servers that are deployed on Linux, Windows, and AIX. When the managed server must receive on a port that is different from the port on which data is sent, use the port translation that is available in the operating system on which the server is deployed: for example, iptables on Linux, netsh on Windows, and ipfilter on AIX. |
Content Based Routing | The Content Based Routing component is identical to the Content Based Routing component that is offered with Load Balancer for IPv4, except for some changes. |
Site Selector | The Site Selector component is identical to the Site Selector component that is offered with Load Balancer for IPv4, except for some changes. |
Collocation | The collocation feature in Load Balancer for IPv4 and IPv6 is consistent with Load Balancer for IPv4. However, collocated servers are not recommended in production because they can become a performance bottleneck as well as a single point of failure. In production, deploy Load Balancer on dedicated high-end servers. |
Advisor | Most of the advisors that are supported in Load Balancer for IPv4 are also available in Load Balancer for IPv4 and IPv6; the caching proxy advisor is not available. In addition, Load Balancer for IPv4 and IPv6 provides two more advisors, was and wlm. |
Metric Server | There is no difference with metric server, except for the additional support for IPv6 addressing. |
High Availability | High availability in Load Balancer for IPv4 and IPv6 is similar to Load Balancer for IPv4, except for a few differences. |
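For the port translation described in the Network Address Translation forwarding row, the operating system on the managed server does the work. On a Linux managed server, an iptables rule of roughly this shape could redirect the load-balanced port to the port on which the service actually listens; the port numbers are illustrative assumptions, not product defaults:

```shell
# Illustrative only: run as root on a Linux managed server.
# Map traffic arriving on port 80 to a local service listening on 8080.
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
```

On Windows, an equivalent mapping would be expressed with netsh, and on AIX with ipfilter, as noted in the table.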
Features and functions that are not supported by Load Balancer IPv4 and IPv6
Feature | Explanation and alternative |
---|---|
Collocation | Collocated servers are not a recommended deployment architecture. Alternatives such as OS-based load balancers, for example Network Load Balancing on Windows, can be deployed based on customer requirements. |
SNMP | The SNMP feature of Load Balancer for IPv4 is rarely used, and there were security concerns with this feature in the product. Basic SNMP agents that are currently available are much more robust. Hence, this feature is not supported in Load Balancer for IPv4 and IPv6. Instead, install a generic SNMP agent and use the user exits and command line to obtain any statistics that you want to collect. The user exits run when Load Balancer states change, such as a failover, or when server states change, such as a server going offline. Command-line data includes statistics such as the number of failed and successful packets that are forwarded to servers. |
Denial of Service and wildcard port | These features are not heavily used in Load Balancer for IPv4. The denial of service function was based on a threshold for half-open connections and would then randomly drop connections. It offered only some benefit against SYN flood attacks and did not prevent clients from noticing that there was a problem. It is recommended to use security-based software to secure your site, instead of trying to provide security through the load balancer. The wildcard port exposed servers to security risks because any port would be forwarded, and sophisticated attackers might run port scans and other advanced attacks. Therefore, this feature is not supported in Load Balancer for IPv4 and IPv6. There are no recommended alternatives for wildcard port. |
Passive FTP | This feature represents a non-standard deployment of Load Balancer, and much more secure and advanced FTP products are now available. |
kCBR | Kernel-based Content Routing was designed to make a server choice that is based on HTTP headers, such as cookies. The performance of content routing was limited, and it did not support encrypted connections. This function is available in other products, such as On Demand Router (ODR). ODR can make server choices that are based on any field in the HTTP header. You can also use the web server plug-in that is offered with WebSphere® Application Server, or Caching Proxy with the CBR component. |
Remote Administration | The remote access features of Load Balancer for IPv4 can make the system vulnerable to security attacks. Equivalent remote administration capability is now available by default in the operating system; hence, this feature in Load Balancer is obsolete. |
Wildcard cluster | The design and configuration of Load Balancer for IPv4 and IPv6 renders the wildcard feature redundant. With Load Balancer for IPv4, you had to configure the cluster address on the Ethernet card for the hardware to receive the packets. Because you no longer manually alias the addresses, this function provides no benefit. |
HP-UX and Solaris | The dispatcher component is no longer supported on HP-UX or Solaris. |
Mutual High Availability | This feature was relevant in the first generation of Load Balancer for IPv4, when the cost of hardware was a concern. With the diminishing cost of hardware, this feature is no longer relevant. Moreover, this feature is prone to misconfiguration with serious consequences: in mutual high availability, each server must be configured to run at only 50% of its peak capacity; otherwise, during a failure, a single server can be loaded beyond its peak capacity. There are no recommended alternatives. |
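As the Denial of Service row advises, protection belongs in dedicated security tooling rather than in the load balancer. A minimal sketch of host-level SYN rate limiting with iptables follows; the rate and burst values are arbitrary illustrative numbers, not tuning guidance:

```shell
# Illustrative only: run as root. Accept at most 25 new TCP connections
# per second (with a burst of 50); drop SYN packets beyond that rate.
iptables -A INPUT -p tcp --syn -m limit --limit 25/second --limit-burst 50 -j ACCEPT
iptables -A INPUT -p tcp --syn -j DROP
```

Purpose-built security products, SYN cookies, or upstream filtering are more complete answers; this merely shows the general direction the table recommends.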