This chapter explains how to configure the load balancing parameters and how to set up Load Balancer for advanced functions.
Task | Description | Related information
---|---|---
Collocate Load Balancer on a machine that it is load balancing | Set up a collocated Load Balancer machine. | Using collocated servers |
Configure high availability or mutual high availability | Set up a second Dispatcher machine to provide a backup. | High availability |
Configure rules-based load balancing | Define conditions under which a subset of your servers are used. | Configure rules-based load balancing |
Use port affinity override to allow a server to override the port sticky feature | Allows a server to override the stickytime setting on its port. | Port affinity override
Use sticky (affinity) feature to configure a cluster's port to be sticky | Allows client requests to be directed to the same server. | How affinity feature for Load Balancer works |
Use cross port affinity to expand the sticky (affinity) feature across ports | Allows client requests received from different ports to be directed to the same server. | Cross port affinity |
Use affinity address mask to designate a common IP subnet address | Allows client requests received from the same subnet to be directed to the same server. | Affinity address mask (stickymask)
Use active cookie affinity to load balance servers for CBR | A rule option that allows a session to maintain affinity for a particular server. | Active cookie affinity |
Use passive cookie affinity to load balance servers for Dispatcher's content-based routing and the CBR component | A rule option that allows a session to maintain affinity for a particular server based on the cookie name/cookie value. | Passive cookie affinity |
Use URI affinity to load balance across Caching Proxy servers, with unique content cached on each individual server | A rule option that allows a session to maintain affinity for a particular server based on the URI. | URI affinity
Configure wide area Dispatcher support | Set up a remote Dispatcher to load balance across a wide area network. Or, load balance across a wide area network (without a remote Dispatcher) using a server platform that supports GRE. | Configure wide area Dispatcher support |
Use explicit linking | Avoid bypassing the Dispatcher in your links. | Using explicit linking |
Use a private network | Configure the Dispatcher to load balance servers on a private network. | Using a private network configuration |
Use wildcard cluster to combine common server configurations | Addresses that are not explicitly configured will use the wildcard cluster as a way to load balance traffic. | Use wildcard cluster to combine server configurations |
Use wildcard cluster to load balance firewalls | All traffic will be load balanced to firewalls. | Use wildcard cluster to load balance firewalls |
Use wildcard cluster with Caching Proxy for transparent proxy | Allows Dispatcher to be used to enable a transparent proxy. | Use wildcard cluster with Caching Proxy for transparent proxy |
Use wildcard port to direct unconfigured port traffic | Handles traffic that is not configured for any specific port. | Use wildcard port to direct unconfigured port traffic |
Use "Denial of Service Attack" detection to notify administrators (via an alert) of potential attacks | Dispatcher analyzes incoming requests for a conspicuous amount of half-open TCP connections on servers. | Denial of service attack detection |
Use binary logging to analyze server statistics | Allows server information to be stored in and retrieved from binary files. | Using binary logging to analyze server statistics |
Use a collocated client configuration | Allows Load Balancer to reside on the same machine as a client. | Using a collocated client
Load Balancer can reside on the same machine as a server for which it is load balancing requests. This is commonly referred to as collocating a server. Collocation applies to the Dispatcher and Site Selector components. Collocation is also supported for CBR, but only when using bind-specific Web servers and bind-specific Caching Proxy.
Linux: To configure both collocation and high availability at the same time when running the Dispatcher component with the mac forwarding method, see Linux loopback aliasing alternatives when using Load Balancer's mac forwarding.
Solaris: You cannot configure WAN advisors when the entry-point Dispatcher is collocated. See Using remote advisors with Dispatcher's wide area support.
Windows: Collocation is no longer available when you use Dispatcher's MAC forwarding method.
In earlier releases, it was necessary to specify the collocated server address to be the same as the nonforwarding address (NFA) in the configuration. That restriction has been lifted.
To configure a server to be collocated, the dscontrol server command provides an option called collocated which can be set to yes or no. The default is no. The address of the server must be a valid IP address of a network interface card on the machine. The collocated parameter should not be set for servers which are collocated using Dispatcher's nat or cbr forwarding method.
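For example, to mark an existing server definition as collocated (a sketch; cluster, port, and server are placeholders for your own values):
dscontrol server set cluster:port:server collocated yes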
You can configure a collocated server in one of the following ways:
For Dispatcher's nat or cbr forwarding, you must configure (alias) an unused adapter address on the NFA. The server should be configured to listen on this address. Configure the server using the following command syntax:
dscontrol server add cluster:port:new_alias address new_alias router router_ip
returnaddress return_address
Failure to configure the server this way can lead to system errors, no response from the server, or both.
When configuring a collocated server using Dispatcher's nat forwarding method, the router specified in the dscontrol server add command must be a real router address and not the server IP address.
Collocation when configuring Dispatcher's nat forwarding method is supported on the following operating systems if this step is performed on the Dispatcher machine:
arp -s hostname ether_addr pub
using the local MAC address for ether_addr. This enables the local application to send traffic to the return address in the kernel.
CBR supports collocation on AIX, HP-UX, Linux, and Solaris platforms with no additional configuration required. However, the Web servers and Caching Proxy that you use must be bind-specific.
Site Selector supports collocation on AIX, HP-UX, Linux, and Solaris platforms with no additional configurations required.
The high availability function (configurable using dscontrol highavailability command) is available for the Dispatcher component (but not for the CBR or Site Selector component).
To improve Dispatcher availability, the Dispatcher high availability function uses two mechanisms: heartbeat messages exchanged between a pair of Dispatcher machines, and an optional set of reach targets that both machines attempt to ping (see Failure detection capability using heartbeat and reach target).
Complete syntax for dscontrol highavailability is in dscontrol highavailability -- control high availability.
For a more complete discussion of many of the tasks below, see Setting up the Dispatcher machine.
dscontrol highavailability heartbeat add sourceaddress destinationaddress
Primary - highavailability heartbeat add 9.67.111.3 9.67.186.8
Backup - highavailability heartbeat add 9.67.186.8 9.67.111.3
At least one heartbeat pair must have the NFAs of the pair as the source and destination address.
If possible, at least one of the heartbeat pairs should be across a separate subnet than the regular cluster traffic. Keeping the heartbeat traffic distinct will help prevent false takeovers during very heavy network loads and also improve complete recovery times after a failover.
Set the number of seconds that the executor uses to timeout high availability heartbeats. For example:
dscontrol executor set hatimeout 3
The default is 2 seconds.
dscontrol highavailability reach add 9.67.125.18
Reach targets are recommended but not required. See Failure detection capability using heartbeat and reach target for more information.
dscontrol highavailability backup add primary [auto | manual] port
dscontrol highavailability backup add backup [auto | manual] port
dscontrol highavailability backup add both [auto | manual] port
dscontrol highavailability status
The machines should each have the correct role (backup, primary, or both), states, and substates. The primary should be active and synchronized; the backup should be in standby mode and synchronized within a short time. The strategies must be the same.
For mutual high availability, also define the cluster for which each machine is primarily responsible. Issue the same commands on both machines, for example:
dscontrol cluster set clusterA primaryhost NFAdispatcher1
dscontrol cluster set clusterB primaryhost NFAdispatcher2
Besides the basic criteria of failure detection (the loss of connectivity between active and standby Dispatchers, detected through the heartbeat messages), there is another failure detection mechanism named reachability criteria. When you configure the Dispatcher you can provide a list of hosts that each of the Dispatchers should be able to reach in order to work correctly. The two high availability partners continually communicate with each other through heartbeats, and they update one another on how many reach targets either one of them can ping. If the standby pings more reach targets than the active, a failover occurs.
Heartbeats are sent by the active Dispatcher and are expected to be received by the standby Dispatcher every half second. If the standby Dispatcher fails to receive a heartbeat within 2 seconds, a failover begins. All heartbeats must break for a takeover from the standby Dispatcher to occur; in other words, when two heartbeat pairs are configured, both heartbeats must break. To stabilize a high availability environment and to avoid spurious failovers, add more than one heartbeat pair.
For reach targets, you should choose at least one host for each subnet your Dispatcher machine uses. The hosts could be routers, IP servers or other types of hosts. Host reachability is obtained by the reach advisor, which pings the host. Failover takes place either if the heartbeat messages cannot go through, or if the reachability criteria are met better by the standby Dispatcher than by the primary Dispatcher. To make the decision based on all available information, the active Dispatcher regularly sends the standby Dispatcher its reachability capabilities. The standby Dispatcher then compares those capabilities with its own and decides whether to switch.
Two Dispatcher machines are configured: the primary machine, and a second machine called the backup. At startup, the primary machine sends all the connection data to the backup machine until that machine is synchronized. The primary machine becomes active, that is, it begins load balancing. The backup machine, meanwhile, monitors the status of the primary machine, and is said to be in standby state.
If the backup machine at any point detects that the primary machine has failed, it performs a takeover of the primary machine’s load balancing functions and becomes the active machine. After the primary machine has once again become operational, the machines respond according to how the recovery strategy has been configured by the user. There are two kinds of strategy:
The strategy parameter must be set the same for both machines.
The manual recovery strategy allows you to force the routing of packets to a particular machine, using the takeover command. Manual recovery is useful when maintenance is being performed on the other machine. The automatic recovery strategy is designed for normal unattended operation.
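For example, with the manual strategy you can force a takeover by issuing the following command on the standby machine (a sketch; see dscontrol highavailability -- control high availability for the full syntax and optional arguments):
dscontrol highavailability takeover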
For a mutual high availability configuration, failure is not handled on a per-cluster basis. If any problem occurs with one machine, even if it affects just one cluster, the other machine takes over for both clusters.
For Dispatcher to route packets, each cluster address must be aliased to a network interface device.
For information on aliasing the network interface card, see Step 5. Alias the network interface card.
Because the Dispatcher machines will change states when a failure is detected, the commands above must be issued automatically. Dispatcher will run user-created scripts to do that. Sample scripts can be found in the following directory:
These scripts must be moved to the following directory in order to run:
The scripts will run automatically only if dsserver is running.
The following sample scripts may be used:
On Windows systems: In your configuration setup, if you have Site Selector load balancing two Dispatcher machines that are operating in a high availability environment, you will need to add an alias on the Microsoft stack for the metric servers. This alias should be added to the goActive script. For example:
call netsh interface ip add address "Local Area Connection"
addr=9.37.51.28 mask=255.255.240.0
In the goStandby and goInOp, the alias will need to be removed. For example:
call netsh interface ip delete address "Local Area Connection"
addr=9.37.51.28
If there are multiple NICs on the machine, first check which interface you should use by issuing the following command at the command prompt: netsh interface ip show address. This command returns a list of currently configured interfaces and numbers the "Local Area Connection" entries (for example, "Local Area Connection 2") so you can determine which one to use.
On Linux for S/390®: Dispatcher issues a gratuitous ARP to move IP addresses from one Dispatcher to another. This mechanism is therefore tied to the underlying network type. When running Linux for S/390, Dispatcher can natively do high availability takeovers (complete with IP address moves) only on those interfaces which can issue a gratuitous ARP and configure the address on the local interface. This mechanism will not work properly on point-to-point interfaces such as IUCV and CTC and will not work properly in certain configurations of qeth/QDIO.
For those interfaces and configurations where Dispatcher's native IP takeover function will not work properly, the customer may place appropriate commands in the go scripts to manually move the addresses. This will ensure that those network topologies can also benefit from high availability.
You can use rules-based load balancing to fine-tune when and why packets are sent to which servers. Load Balancer reviews any rules you add from first priority to last priority, stopping at the first rule that it finds to be true, and then load balances the content between any servers associated with that rule. Load Balancer already balances the load based on destination and port, but using rules expands your ability to distribute connections.
In most cases when configuring rules, you should configure a default always true rule to catch any request that falls past the other, higher-priority rules. This default can be a "Sorry, the site is currently down, try again later" response when all other servers fail for the client request.
You should use rules-based load balancing with Dispatcher and Site Selector when you want to use a subset of your servers for some reason. You must always use rules for the CBR component.
You can choose from the following types of rules:
Make a plan of the logic that you want the rules to follow before you start adding rules to your configuration.
All rules have a name, type, priority, and may have a begin range and end range, along with a set of servers. In addition, the content type rule for the CBR component has a matching regular expression pattern associated with it. (For examples and scenarios on how to use the content rule and valid pattern syntax for the content rule, see Appendix B. Content rule (pattern) syntax.)
Rules are evaluated in priority order. In other words, a rule with a priority of 1 (lower number) is evaluated before a rule with a priority of 2 (higher number). The first rule that is satisfied will be used. When a rule has been satisfied, no further rules are evaluated.
For a rule to be satisfied, it must meet two conditions:
If a rule has no servers associated with it, the rule only needs to meet condition one to be satisfied. In this case, Dispatcher will drop the connection request, Site Selector will return the name server request with an error, and CBR will cause Caching Proxy to return an error page.
If no rules are satisfied, Dispatcher will select a server from the full set of servers available on the port, Site Selector will select a server from the full set of servers available on the site name, and CBR will cause Caching Proxy to return an error page.
This rule type is available in the Dispatcher, CBR, or Site Selector component.
You may want to use rules based on the client IP address if you want to screen the customers and allocate resources based on where they are coming from.
For example, you notice that your network is getting a lot of unpaid and therefore unwanted traffic from clients coming from a specific set of IP addresses. You create a rule using the dscontrol rule command, for example:
dscontrol rule add 9.67.131.153:80:ni type ip
beginrange 9.0.0.0 endrange 9.255.255.255
This "ni" rule screens out any connection from unwanted clients. You would then add to the rule the servers that you want accessible, or if you do not add any servers to the rule, requests coming from 9.x.x.x addresses are not served by any of your servers.
This rule type is only available in the Dispatcher component.
You may want to use rules based on the client port if your clients are using some kind of software that asks for a specific port from TCP/IP when making requests.
For example, you could create a rule that says that any request with a client port of 10002 will get to use a set of special fast servers because you know that any client request with that port is coming from an elite group of customers.
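A sketch of such a rule, assuming placeholder cluster, port, and rule names and that the begin and end ranges specify client port numbers:
dscontrol rule add cluster:port:elitecustomers type port beginrange 10002 endrange 10002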
This rule type is available in the Dispatcher, CBR, or Site Selector component.
You may want to use rules based on the time of day for capacity planning reasons. For example, if your Web site gets hit most during the same group of hours every day, you might want to dedicate five additional servers during the peak time period.
Another reason you might use a rule based on the time of day is when you want to take some of the servers down for maintenance every night at midnight, so you can set up a rule that excludes those servers during the necessary maintenance period.
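For example, a sketch of a rule covering a maintenance window from midnight to 1 AM (placeholder names; the begin and end ranges for a time rule are hours of the day):
dscontrol rule add cluster:port:maintenance type time beginrange 0 endrange 1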
This rule type is only available in the Dispatcher component.
You may want to use rules based on the content of the "type of service" (TOS) field in the IP header. For example, if a client request comes in with one TOS value that indicates normal service, it can be routed to one set of servers. If a different client request comes in with a different TOS value that indicates a higher priority of service, it can be routed to a different set of servers.
The TOS rule allows you to fully configure each bit in the TOS byte using the dscontrol rule command. For significant bits that you want matched in the TOS byte, use 0 or 1. Otherwise, the value x is used. The following is an example for adding a TOS rule:
dscontrol rule add 9.67.131.153:80:tsr type service tos 0xx1010x
This rule type is available in the Dispatcher and CBR components.
You may want to use rules based on connections per second if you need to share some of your servers with other applications. For example, you can set two rules:
Or you might be using Telnet and want to reserve two of your five servers for Telnet, except when the connections per second increases above a certain level. This way, Dispatcher would balance the load across all five servers at peak times.
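A sketch of such a connections rule, assuming placeholder names and that the begin and end ranges are in connections per second:
dscontrol rule add cluster:port:lowtraffic type connection beginrange 0 endrange 100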
Setting rule evaluate option "upserversonrule" in conjunction with the "connection" type rule: When using the connections type rule and setting the upserversonrule option, if some of the servers in the server set are down, then you can ensure that the remaining servers will not be overloaded. See Server evaluation option for rules for more information.
This rule type is available in the Dispatcher or CBR component.
You may want to use rules based on active connections total on a port if your servers get overloaded and start throwing packets away. Certain Web servers will continue to accept connections even though they do not have enough threads to respond to the request. As a result, the client requests time out and the customer coming to your Web site is not served. You can use rules based on active connections to balance capacity within a pool of servers.
For example, you know from experience that your servers will stop serving after they have accepted 250 connections. You can create a rule using the dscontrol rule command or the cbrcontrol rule command, for example:
dscontrol rule add 130.40.52.153:80:pool2 type active
beginrange 250 endrange 500
or
cbrcontrol rule add 130.40.52.153:80:pool2 type active
beginrange 250 endrange 500
You would then add to the rule your current servers plus some additional servers, which will otherwise be used for other processing.
Reserved bandwidth and shared bandwidth rules are only available in the Dispatcher component.
For bandwidth rules, Dispatcher calculates bandwidth as the rate at which data is delivered to clients by a specific set of servers. Dispatcher tracks capacity at the server, rule, port, cluster, and executor levels. For each of these levels, there is a byte counter field: kilobytes transferred per second. Dispatcher calculates these rates over a 60 second interval. You can view these rate values from the GUI or from the output of a command line report.
The reserved bandwidth rule allows you to control the number of kilobytes per second being delivered by a set of servers. By setting a threshold (allocating a specified bandwidth range) for each set of servers throughout the configuration, you can control and guarantee the amount of bandwidth being used by each cluster-port combination.
The following is an example for adding a reservedbandwidth rule:
dscontrol rule add 9.67.131.153:80:rbw type reservedbandwidth
beginrange 0 endrange 300
The begin range and end range are specified in kilobytes per second.
Prior to configuring the shared bandwidth rule, you must specify the maximum amount of bandwidth (kilobytes per second) that can be shared at the executor or cluster level using the dscontrol executor or dscontrol cluster command with the sharedbandwidth option. The sharedbandwidth value should not exceed the total bandwidth (total network capacity) available. Using the dscontrol command to set shared bandwidth only provides an upper limit for the rule.
The following are examples of the command syntax:
dscontrol executor set sharedbandwidth size
dscontrol cluster [add | set] 9.12.32.9 sharedbandwidth size
The size for sharedbandwidth is an integer value (kilobytes per second). The default is zero. If the value is zero, then bandwidth cannot be shared.
Sharing bandwidth at the cluster level allows a maximum specified bandwidth to be used by the cluster. As long as the bandwidth used by the cluster is below the specified amount, then this rule will evaluate as true. If the total bandwidth used is greater than the specified amount, then this rule will evaluate as false.
Sharing bandwidth at the executor level allows the entire Dispatcher configuration to share a maximum amount of bandwidth. As long as the bandwidth used at the executor level is below the specified amount, then this rule will evaluate as true. If the total bandwidth used is greater than that defined, then this rule will evaluate as false.
The following are examples of adding or setting a sharedbandwidth rule:
dscontrol rule add 9.20.30.4:80:shbw type sharedbandwidth sharelevel value
dscontrol rule set 9.20.34.11:80:shrule sharelevel value
The value for sharelevel is either executor or cluster. Sharelevel is a required parameter on the sharedbandwidth rule.
Dispatcher allows you to allocate a specified bandwidth to sets of servers within your configuration using the reserved bandwidth rule. By specifying a begin and end range, you can control the range of kilobytes delivered by a set of servers to the clients. When the rule no longer evaluates as true (the end range is exceeded), the next lower priority rule is evaluated. If the next lower priority rule is an "always true" rule, a server could be selected to respond to the client with a "site busy" response.
For example: Consider a group of three servers on port 2222. If the reserved bandwidth is set to 300, then the maximum kbytes per second is 300, over a period of 60 seconds. When this rate is exceeded, then the rule no longer evaluates as true. If this were the only rule, then one of the three servers would be selected by Dispatcher to handle the request. If there were a lower priority "always true" rule, then the request could be redirected to another server and answered with "site busy".
The shared bandwidth rule can provide additional server access to clients. Specifically, when used as a lower priority rule following a reserved bandwidth rule, a client can still access a server even though the reserved bandwidth has been exceeded.
For example: By using a shared bandwidth rule following a reserved bandwidth rule you can allow clients to gain access to the three servers in a controlled manner. As long as there is shared bandwidth available to be used, the rule will evaluate as true and access is granted. If there is no shared bandwidth available, then the rule is not true and the next rule is evaluated. If an "always true" rule follows, the request can be redirected as needed.
By using both reserved and shared bandwidth as described in the preceding example, greater flexibility and control can be exercised in granting (or denying) access to the servers. Servers on a specific port can be limited in bandwidth usage, while others can use additional bandwidth as long as it is available.
This rule type is only available in the Site Selector component.
For the metric all rule, you choose a system metric (cpuload, memload, or your own customized system metric script), and Site Selector compares the system metric value (returned by the Metric Server agent residing in each load-balanced server) with the begin and end range that you specify in the rule. The current system metric value for all the servers in the server set must be within the range for the rule to run.
The following is an example of adding a metric all rule to your configuration:
sscontrol rule add dnsload.com:allrule1 type metricall
metricname cpuload beginrange 0 endrange 100
This rule type is only available in the Site Selector component.
For the metric average rule, you choose a system metric (cpuload, memload, or your own customized system metric script), and Site Selector compares the system metric value (returned by the Metric Server agent residing in each load-balanced server) with the begin and end range that you specify in the rule. The average of the current system metric values for all the servers in the server set must be within the range for the rule to run.
The following is an example of adding a metric average rule to your configuration:
sscontrol rule add dnsload.com:avgrule1 type metricavg
metricname cpuload beginrange 0 endrange 100
This rule type is available in the Dispatcher, CBR, or Site Selector component.
A rule may be created that is "always true." Such a rule will always be selected, unless all the servers associated with it are down. For this reason, it should ordinarily be at a lower priority than other rules.
You can even have multiple "always true" rules, each with a set of servers associated with it. The first true rule with an available server is chosen. For example, assume you have six servers. You want two of them to handle your traffic under all circumstances, unless they are both down. If the first two servers are down, you want a second set of servers to handle the traffic. If all four of these servers are down, then you will use the final two servers to handle the traffic. You could set up three "always true" rules. Then the first set of servers will always be chosen as long as at least one is up. If they are both down, one from the second set is chosen, and so forth.
As another example, you may want an "always true" rule to ensure that if incoming clients do not match any of the rules you have set, they will not be served. You would create a rule using the dscontrol rule command like:
dscontrol rule add 130.40.52.153:80:jamais type true priority 100
Then you would not add any servers to the rule, causing the client packets to be dropped with no response.
You can define more than one "always true" rule, and thereafter adjust which one gets run by changing their priority levels.
This rule type is available in the CBR component or Dispatcher component (when using Dispatcher's cbr forwarding method).
You will want to use content type rules to send requests to sets of servers specifically set up to handle some subset of your site's traffic. For example, you may want to use one set of servers to handle all cgi-bin requests, another set to handle all streaming audio requests, and a third set to handle all other requests. You would add one rule with a pattern that matches the path to your cgi-bin directory, another that matches the file type of your streaming audio files, and a third always true rule to handle the rest of the traffic. You would then add the appropriate servers to each of the rules.
Important: For examples and scenarios on how to use the content rule and valid pattern syntax for the content rule, see Appendix B. Content rule (pattern) syntax.
With port affinity override, you can override the stickiness of a port for a specific server. For example, you are using a rule to limit the amount of connections to each application server, and you have an overflow server with an always true rule that says "please try again later" for that application. The port has a stickytime value of 25 minutes, so you do not want the client to be sticky to that server. With port affinity override, you can change the overflow server to override the affinity normally associated with that port. The next time the client requests the cluster, it is load balanced to the best available application server, not the overflow server.
See dscontrol server -- configure servers, for detailed information on command syntax for the port affinity override, using the server sticky option.
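For example, a sketch of marking an overflow server so that it does not honor the port's stickytime (cluster, port, and server are placeholders):
dscontrol server set cluster:port:overflowserver sticky no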
You can add rules using the dscontrol rule add command, by editing the sample configuration file, or with the graphical user interface (GUI). You can add one or more rules to every port you have defined.
It is a two-step process: add the rule, then define which servers are used when the rule is true. For example, suppose a system administrator wants to track how much use the proxy servers are getting from each division on site, and each division has been assigned its own range of IP addresses. Create the first set of rules based on client IP address to separate each division's load:
dscontrol rule add 130.40.52.153:80:div1 type ip b 9.1.0.0 e 9.1.255.255
dscontrol rule add 130.40.52.153:80:div2 type ip b 9.2.0.0 e 9.2.255.255
dscontrol rule add 130.40.52.153:80:div3 type ip b 9.3.0.0 e 9.3.255.255
Next, add a different server to each rule, then measure the load on each of the servers in order to bill each division properly for the services it uses. For example:
dscontrol rule useserver 130.40.52.153:80:div1 207.72.33.45
dscontrol rule useserver 130.40.52.153:80:div2 207.72.33.63
dscontrol rule useserver 130.40.52.153:80:div3 207.72.33.47
The server evaluation option is only available in the Dispatcher component.
On the dscontrol rule command there is a server evaluation option for rules. Use the evaluate option to choose to evaluate the rule’s condition across all the servers on the port or to evaluate the rule’s condition across just the servers within the rule. (In earlier versions of Load Balancer, you could only measure each rule’s condition across all servers on the port.)
The following are examples of adding or setting the evaluate option on a reserved bandwidth rule:
dscontrol rule add 9.22.21.3:80:rbweval type reservedbandwidth evaluate level
dscontrol rule set 9.22.21.3:80:rbweval evaluate level
The evaluate level can be set to either port, rule, or upserversonrule. The default is port.
The option to measure the rule’s condition across the servers within the rule allows you to configure two rules with the following characteristics:
The result is that when traffic exceeds the threshold of the servers within the first rule, traffic is sent to the "site busy" server within the second rule. When traffic falls below the threshold of the servers within the first rule, new traffic continues once again to the servers in the first rule.
Using the two rules described in the previous example, if you set the evaluate option to port for the first rule (evaluate rule’s condition across all the servers on the port), when traffic exceeds the threshold of that rule, traffic is sent to the "site busy" server associated to the second rule.
The first rule measures all server traffic (including the "site busy" server) on the port to determine whether the traffic exceeds the threshold. As congestion decreases for the servers associated to the first rule, an unintentional result may occur where traffic continues to the "site busy" server because traffic on the port still exceeds the threshold of the first rule.
For the Dispatcher and CBR components: You enable the affinity feature when you configure a cluster's port to be sticky. Configuring a cluster's port to be sticky allows subsequent client requests to be directed to the same server. This is done by setting stickytime at the executor, cluster, or port level to some number of seconds. The feature is disabled by setting stickytime to zero.
If you are enabling cross port affinity, stickytime values of the shared ports must be the same (nonzero) value. See Cross port affinity for more information.
For the Site Selector component: You enable the affinity feature when you configure a sitename to be sticky. Configuring a sitename to be sticky allows the client to use the same server for multiple name service requests. This is done by setting stickytime on the sitename to some number of seconds. The feature is disabled by setting stickytime to zero.
A sticky time value for a server is the interval between the closing of one connection and the opening of a new connection during which time a client is sent back to the same server used during the first connection. After the sticky time expires, the client can be sent to a server different from the first. The sticky time value for a server is configured using the dscontrol executor, port, or cluster commands.
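For example, a sketch of enabling a 60-second affinity at the port level (cluster and port are placeholders):
dscontrol port set cluster:port stickytime 60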
With the affinity feature disabled, whenever a new TCP connection is received from a client, Load Balancer picks the right server at that moment in time and forwards the packets to it. If a subsequent connection comes in from the same client, Load Balancer treats it as an unrelated new connection, and again picks the right server at that moment in time.
With the affinity feature enabled, if a subsequent request is received from the same client, the request is directed to the same server.
Over time, the client will finish sending transactions, and the affinity record will go away. Hence the meaning of the sticky "time." Each affinity record lives for the "stickytime" in seconds. When subsequent connections are received within the stickytime, the affinity record is still valid and the request will go to the same server. If a subsequent connection is not received within stickytime, the record is purged; a connection that is received after that time will have a new server selected for it.
The server down command (dscontrol server down) is used to bring a server offline. The server is not taken down until after the stickytime value expires.
Cross port affinity only applies to the Dispatcher component's MAC and NAT/NATP forwarding methods.
Cross port affinity is the sticky feature that has been expanded to cover multiple ports. For example, if a client request is first received on one port and the next request is received on another port, cross port affinity allows Dispatcher to send the client request to the same server. In order to use this feature, the ports must:
More than one port can link to the same crossport. When subsequent connections come in from the same client on the same port or a shared port, the same server will be accessed. The following is an example of configuring multiple ports with a cross port affinity to port 10:
dscontrol port set cluster:20 crossport 10
dscontrol port set cluster:30 crossport 10
dscontrol port set cluster:40 crossport 10
After cross port affinity has been established, you have the flexibility to modify the stickytime value for the port. However, it is recommended that you change the stickytime values for all shared ports to the same value, otherwise unexpected results may occur.
To remove the cross port affinity, set the crossport value back to its own port number. See dscontrol port -- configure ports, for detailed information on command syntax for the crossport option.
Affinity address mask only applies to the Dispatcher component.
Affinity address mask is a sticky feature enhancement to group clients based upon common subnet addresses. Specifying stickymask on the dscontrol port command allows you to mask the common high-order bits of the 32-bit IP address. If this feature is configured, when a client request first makes a connection to the port, all subsequent requests from clients with the same subnet address (represented by that part of the address which is being masked) will be directed to the same server.
For example, if you want all incoming client requests with the same network Class A address to be directed to the same server, you set the stickymask value to 8 (bits) for the port. To group client requests with the same network Class B address, set the stickymask value to 16 (bits). To group client requests with the same network Class C address, set the stickymask value to 24 (bits).
For best results, set the stickymask value when first starting the Load Balancer. If you change the stickymask value dynamically, results will be unpredictable.
Interaction with cross port affinity: If you are enabling cross port affinity, stickymask values of the shared ports must be the same. See Cross port affinity for more information.
To enable affinity address mask, issue a dscontrol port command similar to the following:
dscontrol port set cluster:port stickytime 10 stickymask 8
Possible stickymask values are 8, 16, 24 and 32. A value of 8 specifies the first 8 high-order bits of the IP address (network Class A address) will be masked. A value of 16 specifies the first 16 high-order bits of the IP address (network Class B address) will be masked. A value of 24 specifies the first 24 high-order bits of the IP address (network Class C address) will be masked. If you specify a value of 32, you are masking the entire IP address which effectively disables the affinity address mask feature. The default value of stickymask is 32.
See dscontrol port -- configure ports, for detailed information on command syntax for stickymask (affinity address mask feature).
Quiesce handling applies to the Dispatcher and CBR components.
To remove a server from the Load Balancer configuration for any reason (updates, upgrades, service, and so forth), you can use the dscontrol manager quiesce command. The quiesce subcommand allows existing connections to complete (without being severed) and forwards subsequent new connections from a client to the quiesced server only if the connection is designated as sticky and stickytime has not expired. The quiesce subcommand disallows all other new connections to the server.
Use the quiesce "now" option if you have stickytime set, and you want new connections sent to another server (instead of the quiesced server) before stickytime expires. The following is an example of using the now option to quiesce server 9.40.25.67:
dscontrol manager quiesce 9.40.25.67 now
The now option determines how sticky connections will be handled as follows:
This is the more graceful, less abrupt, way to quiesce servers. For instance, you can gracefully quiesce a server and then wait for the time where there is the least amount of traffic (perhaps early morning) to completely remove the server from the configuration.
You can specify the following types of affinity on the dscontrol rule command:
Active cookie affinity only applies to the CBR component.
Passive cookie applies to the CBR component and to Dispatcher component's cbr forwarding method.
URI affinity applies to the CBR component and to Dispatcher component's cbr forwarding method.
The default for the affinity option is "none." The stickytime option on the port command must be zero (not enabled) in order to set the affinity option on the rule command to active cookie, passive cookie, or URI. When affinity is set on the rule, you cannot enable stickytime on the port.
The active cookie affinity feature applies only to the CBR component.
It provides a way to make clients "sticky" to a particular server. This function is enabled by setting the stickytime of a rule to a positive number, and setting the affinity to "activecookie." This can be done when the rule is added, or using the rule set command. See dscontrol rule -- configure rules, for detailed information on command syntax.
When a rule has been enabled for active cookie affinity, new client requests are load-balanced using standard CBR algorithms, while succeeding requests from the same client are sent to the initially chosen server. The chosen server is stored as a cookie in the response to the client. As long as the client's future requests contain the cookie, and each request arrives within the stickytime interval, the client will maintain affinity with the initial server.
Active cookie affinity is used to ensure that a client continues to be load balanced to the same server for some period of time. This is accomplished by sending a cookie to be stored by the client's browser. The cookie contains the cluster:port:rule that was used to make the decision, the server that was load balanced to, and a timeout timestamp for when the affinity is no longer valid. The cookie is in the following format: IBMCBR=cluster:port:rule+server-time! The cluster:port:rule and server information are encoded so the CBR configuration is not revealed.
Whenever a rule fires that has active cookie affinity turned on, the cookie sent by the client is examined.
This new cookie is then inserted in the headers that go back to the client; if the client's browser is configured to accept cookies, it will return the cookie with subsequent requests.
Each affinity instance in the cookie is 65 bytes long and ends at the exclamation mark. As a result, a 4096-byte cookie can hold approximately 60 individual active cookie rules per domain. If the cookie fills up completely, all expired affinity instances are purged. If all instances are still valid, the oldest one is dropped and a new instance for the current rule is added.
The active cookie affinity option, for the rule command, can only be set to activecookie if port stickytime is zero (not enabled). When active cookie affinity is active on a rule then you cannot enable stickytime on the port.
To enable active cookie affinity for a particular rule, use the rule set command:
rule set cluster:port:rule stickytime 60
rule set cluster:port:rule affinity activecookie
Making a rule sticky would normally be used for CGI or servlets that store client state on the server. The state is identified by a cookie ID (these are server cookies). Client state is only on the selected server, so the client needs the cookie from that server to maintain that state between requests.
Active cookie affinity has a default expiration of the current server time, plus the stickytime interval, plus twenty-four hours. If your clients (those sending requests to your CBR machine) have inaccurate times on their system (for example, they are more than one day ahead of the server time), then those clients' systems will ignore the cookies from CBR because the system will assume that the cookies have already expired. To set a longer expiration time, modify the cbrserver script. In the script file, edit the javaw line, adding the following parameter after LB_SERVER_KEYS: -DCOOKIEEXPIREINTERVAL=X where X is the number of days to add to the expiration time.
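As a hedged illustration (the surrounding arguments on the javaw line in your cbrserver script will differ; only the added parameter matters), a script edited to add three days to the expiration time might contain:
javaw ... LB_SERVER_KEYS -DCOOKIEEXPIREINTERVAL=3 ...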
On AIX®, Solaris, and Linux systems, the cbrserver file is located in the /usr/bin directory.
On Windows systems, the cbrserver file is located in the \winnt\system32 directory.
Passive cookie affinity applies to the Dispatcher component's content-based routing (cbr) forwarding method and to the CBR component. See Dispatcher's content-based routing (cbr forwarding method) for information on how to configure Dispatcher's cbr forwarding method.
Passive cookie affinity provides a way to make clients sticky to a particular server. When you enable the affinity of a rule to "passivecookie", passive cookie affinity allows you to load-balance Web traffic with affinity to the same server, based on self-identifying cookies generated by the servers. You configure passive cookie affinity at the rule level.
When the rule fires, if passive cookie affinity is enabled, Load Balancer chooses the server based on the cookie name in the HTTP header of the client request, comparing it to the configured cookie value for each server.
The first time Load Balancer finds a server whose cookie value contains the client's cookie name, Load Balancer chooses that server for the request.
If the cookie name in the client request is not found or does not match any of the content within the servers' cookie values, the server is chosen using existing server selection or the weighted round-robin technique.
To configure passive cookie affinity:
The passive cookie affinity option, for the rule command, can only be set to passivecookie if port stickytime is zero (not enabled). When passive cookie affinity is active on a rule then you cannot enable stickytime on the port.
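A minimal sketch of the configuration (placeholder names; assuming the cookiename rule option and cookievalue server option described in dscontrol rule -- configure rules and dscontrol server -- configure servers):
dscontrol rule set cluster:port:rule affinity passivecookie
dscontrol rule set cluster:port:rule cookiename JSESSIONID
dscontrol server set cluster:port:server1 cookievalue server1value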
URI affinity applies to Dispatcher's cbr forwarding method and the CBR component. See Dispatcher's content-based routing (cbr forwarding method) for information on how to configure the cbr forwarding method.
URI affinity allows you to load-balance Web traffic to Caching Proxy servers which allow unique content to be cached on each individual server. As a result, you will effectively increase the capacity of your site's cache by eliminating redundant caching of content on multiple machines. Configure URI affinity at the rule level. After the rule fires, if URI affinity is enabled and the same set of servers are up and responding, then Load Balancer will forward new incoming client requests with the same URI to the same server.
Typically, Load Balancer can distribute requests to multiple servers that serve identical content. When using Load Balancer with a group of caching servers, frequently accessed content eventually becomes cached on all the servers. This supports a very high client load by replicating identical cached content on multiple machines. This is particularly useful for high volume Web sites.
However, if your Web site supports a moderate volume of client traffic to very diverse content, and you prefer to have a larger cache spread across multiple servers, your site would perform better if each caching server contained unique content and Load Balancer distributed the request only to the caching server with that content.
With URI affinity, Load Balancer allows you to distribute the cached content to individual servers, eliminating redundant caching of content on multiple machines. Performance for diverse-content server sites using Caching Proxy servers is improved with this enhancement. Identical requests are sent to the same server, so content is cached only on single servers, and the effective size of the cache grows larger with each new server machine added to the pool.
To configure URI affinity:
The URI affinity option, for the rule command, can only be set to URI if port stickytime is zero (not enabled). When URI affinity is active on a rule then you cannot enable stickytime on the port.
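A minimal sketch of enabling URI affinity on a rule (placeholder names; see dscontrol rule -- configure rules for the exact syntax):
dscontrol rule set cluster:port:rule affinity uri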
This feature is only available for the Dispatcher component.
If you are not using the Dispatcher's wide area support and not using Dispatcher's nat forwarding method, a Dispatcher configuration requires that the Dispatcher machine and its servers all be attached to the same LAN segment (see Figure 32). A client's request comes into the Dispatcher machine and is sent to the server. From the server, the response is sent directly back to the client.
The wide area Dispatcher feature adds support for offsite servers, known as remote servers (see Figure 33). If GRE is not supported at the remote site and if Dispatcher's nat forwarding method is not being used, then the remote site must consist of a remote Dispatcher machine (Dispatcher 2) and its locally attached servers (ServerG, ServerH, and ServerI). A client's packet will go from the Internet to the initial Dispatcher machine. From the initial Dispatcher machine, the packet will then go to a geographically remote Dispatcher machine and one of its locally attached servers.
All the Dispatcher machines (local and remote) must be on the same type of operating system and platform in order to run wide area configurations.
This allows one cluster address to support all worldwide client requests while distributing the load to servers around the world.
The Dispatcher machine initially receiving the packet can still have local servers attached to it, and it can distribute the load between its local servers and the remote servers.
To configure wide area support:
dscontrol server add cluster:port:server router address
For more information on the router keyword, see dscontrol server -- configure servers.
On entry-point Dispatchers:
An entry-point Dispatcher treats the second-level Dispatcher as a server: it monitors the second-level Dispatcher's health as it does any server's, and associates the results with that Dispatcher's real IP address.
On remote Dispatchers: Perform the following configuration steps for each remote cluster address. For a high-availability configuration at the remote Dispatcher location, you must perform these steps on both machines.
AIX systems
ifconfig en0 alias 10.10.10.99 netmask 255.255.255.255
dscontrol executor configure 204.67.172.72 en0 255.255.255.255
HP-UX, Linux, Solaris, and Windows systems
This example applies to the configuration illustrated in Figure 34.
Here is how to configure the Dispatcher machines to support cluster address xebec on port 80. LB1 is defined as the "entry-point" Load Balancer. An Ethernet connection is assumed. Note that LB1 has five servers defined: three local (ServerA, ServerB, ServerC) and two remote (LB2 and LB3). Remotes LB2 and LB3 each have three local servers defined.
At the console of the first Dispatcher (LB1), do the following:
dscontrol executor start
dscontrol executor set nfa LB1
dscontrol cluster add xebec
dscontrol port add xebec:80
dscontrol executor configure xebec
At the console of the second Dispatcher (LB2):
dscontrol executor start
dscontrol executor set nfa LB2
dscontrol cluster add xebec
dscontrol port add xebec:80
At the console of the third Dispatcher (LB3):
dscontrol executor start
dscontrol executor set nfa LB3
dscontrol cluster add xebec
dscontrol port add xebec:80
Generic Routing Encapsulation (GRE) is an Internet Protocol specified in RFC 1701 and RFC 1702. Using GRE, the Load Balancer can encapsulate client IP packets inside IP/GRE packets and forward them to server platforms such as OS/390® that support GRE. GRE support allows the Dispatcher component to load balance packets to multiple server addresses associated with one MAC address.
Load Balancer implements GRE as part of its WAN feature. This allows Load Balancer to provide wide area load balancing directly to any server systems that can unwrap the GRE packets. Load Balancer does not need to be installed at the remote site if the remote servers support the encapsulated GRE packets. Load Balancer encapsulates WAN packets with the GRE key field set to decimal value 3735928559.
For this example (Figure 35), to add remote ServerD, which supports GRE, define it within your Load Balancer configuration as if you are defining a WAN server in the cluster:port:server hierarchy:
dscontrol server add cluster:port:ServerD router Router1
Linux systems have the native ability to encapsulate GRE, which allows Load Balancer to load balance to Linux for S/390 server images, where many server images share a MAC address. This permits the entry-point Load Balancer to load balance directly to Linux WAN servers, without passing through a Load Balancer at the remote site. This also allows the entry-point Load Balancer's advisors to operate directly with each remote server.
On the entry point Load Balancer, configure as described for WAN.
To configure each Linux backend server, issue the following commands as root. (These commands may be added to the system's startup facility so that changes are preserved across reboots.)
# modprobe ip_gre
# ip tunnel add gre-nd mode gre ikey 3735928559
# ip link set gre-nd up
# ip addr add cluster address dev gre-nd
In general, the load-balancing functions of the Dispatcher work independently of the content of the sites on which the product is used. There is one area, however, where site content can be important, and where decisions made regarding content can have a significant impact upon the Dispatcher’s efficiency. This is in the area of link addressing.
If your pages specify links that point to individual servers for your site, you are in effect forcing a client to go to a specific machine, bypassing any load balancing that might otherwise be in effect. For this reason, always use the address of Dispatcher in any links contained in your pages. Note that the kind of addressing used may not always be apparent if your site uses automated programming that dynamically creates HTML. To maximize load balancing, be aware of any explicit addressing and avoid it where possible.
You can set up Dispatcher and the TCP server machines using a private network. This configuration can reduce the contention on the public or external network that can affect performance.
For AIX systems, this configuration can also take advantage of the fast speeds of the SP High Performance Switch if you are running Dispatcher and the TCP server machines on nodes in an SP Frame.
To create a private network, each machine must have at least two LAN cards, with one of the cards connected to the private network. You must also configure the second LAN card on a different subnet. The Dispatcher machine will then send the client requests to the TCP server machines through the private network.
Windows systems: Configure the nonforwarding address using the executor configure command.
The servers added using the dscontrol server add command must be added using the private network addresses; for example, referring to the Apple server example in Figure 36, the command should be coded as:
dscontrol server add cluster_address:80:10.0.0.1
not
dscontrol server add cluster_address:80:9.67.131.18
If you are using Site Selector to provide load information to Dispatcher, you must configure Site Selector to report loads on the private addresses.
Using a private network configuration only applies to the Dispatcher component.
Using wildcard cluster to combine server configurations only applies to the Dispatcher component.
The "wildcard" refers to the cluster's ability to match multiple IP addresses (that is, acts as a wildcard). Cluster address 0.0.0.0 is used to specify a wildcard cluster.
If you have many cluster addresses to load-balance, and the port/server configurations are identical for all your clusters, you can combine all the clusters into one wildcard cluster configuration.
You must still explicitly configure each cluster address on one of the network adapters of your Dispatcher workstation. However, you should not add any of the cluster addresses to the Dispatcher configuration using the dscontrol cluster add command.
Add only the wildcard cluster (address 0.0.0.0), and configure the ports and servers as required for load balancing. Any traffic to any of the adapter configured addresses is load balanced using the wildcard cluster configuration.
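For example, a minimal wildcard cluster configuration might look like the following; the server addresses 10.0.0.1 and 10.0.0.2 are assumptions for illustration:
dscontrol cluster add 0.0.0.0
dscontrol port add 0.0.0.0:80
dscontrol server add 0.0.0.0:80:10.0.0.1
dscontrol server add 0.0.0.0:80:10.0.0.2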
An advantage of this approach is that traffic to all the cluster addresses is taken into account when determining the best server to go to. If one cluster is getting a lot of traffic, and it has created many active connections on one of the servers, traffic to other cluster addresses is load balanced using this information.
You can combine the wildcard cluster with actual clusters if you have some cluster addresses with unique port/server configurations, and some with common configurations. The unique configurations must each be assigned to an actual cluster address. All common configurations can be assigned to the wildcard cluster.
Using wildcard cluster to load balance firewalls only applies to the Dispatcher component. Cluster address 0.0.0.0 is used to specify a wildcard cluster.
The wildcard cluster can be used to load balance traffic to addresses that are not explicitly configured on any network adapter of the Dispatcher workstation. For this to work, the Dispatcher must be able to see all the traffic it is to load balance. The Dispatcher workstation will not see traffic to addresses that have not been explicitly configured on one of its network adapters unless it is set up as the default route for some set of traffic.
After Dispatcher has been configured as a default route, any TCP or UDP traffic through the Dispatcher machine is load balanced using the wildcard cluster configuration.
One application of this is to load balance firewalls. Because firewalls can process packets for any destination address and any destination port, you need to be able to load balance traffic independent of the destination address and port.
Firewalls handle traffic in both directions: requests from non-secure clients to secure servers and the responses from those servers, as well as requests from clients on the secure side to servers on the non-secure side and their responses.
You must set up two Dispatcher machines, one to load balance non-secure traffic to the non-secure firewall addresses and one to load balance secure traffic to the secure firewall addresses. Because both of these Dispatchers must use the wildcard cluster and wildcard port with different sets of server addresses, the two Dispatchers must be on two separate workstations.
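A sketch of one of the two Dispatcher configurations, combining the wildcard cluster with the wildcard port; the firewall addresses 10.1.1.1 and 10.1.1.2 are assumptions for illustration:
dscontrol cluster add 0.0.0.0
dscontrol port add 0.0.0.0:0
dscontrol server add 0.0.0.0:0:10.1.1.1
dscontrol server add 0.0.0.0:0:10.1.1.2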
Using wildcard cluster with Caching Proxy for transparent proxy only applies to the Dispatcher component. Cluster address 0.0.0.0 is used to specify a wildcard cluster.
The wildcard cluster function also allows Dispatcher to be used to enable a transparent proxy function for a Caching Proxy server residing on the same machine as Dispatcher. This is an AIX feature only, because there must be communication from the Dispatcher component to the TCP component of the operating system.
To enable this feature, you must start Caching Proxy listening for client requests on port 80. You then configure a wildcard cluster (0.0.0.0). In the wildcard cluster, you configure port 80. In port 80, you configure the NFA of the Dispatcher machine as the only server. Now any client traffic to any address on port 80 is delivered to the Caching Proxy server running on the Dispatcher workstation. The client request will then be proxied as usual, and the response is sent back from Caching Proxy to the client. In this mode, the Dispatcher component is not performing any load balancing.
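A sketch of the corresponding Dispatcher commands, assuming a hypothetical NFA of 9.67.131.5 for the Dispatcher machine:
dscontrol cluster add 0.0.0.0
dscontrol port add 0.0.0.0:80
dscontrol server add 0.0.0.0:80:9.67.131.5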
The wildcard port can be used to handle traffic that is not for any explicitly configured port. One use of this is for load balancing firewalls. A second use is to ensure that traffic to an unconfigured port is handled appropriately. By defining a wildcard port with no servers, you will guarantee that any request to a port that has not been configured is discarded rather than delivered back to the operating system. Port number 0 (zero) is used to specify a wildcard port, for example:
dscontrol port add cluster:0
Note that when a cluster is configured to handle both passive FTP and the wildcard port, passive FTP by default uses the entire non-privileged TCP port range for data connections. This means that a client with an existing connection through a load-balancing cluster to an FTP control port will have subsequent control connections and high-port connections (ports greater than 1023) to the same cluster automatically routed by Load Balancer to the same server as the FTP control connection.
If the wildcard port and the FTP port on the same cluster do not have the same server set, then high-port applications (ports greater than 1023) may fail when a client has an existing FTP control connection. Therefore, configuring different server sets for the FTP and wildcard ports on the same cluster is not recommended. If this scenario is desired, the FTP daemon passive port range must be configured in the Load Balancer configuration.
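As an illustration of keeping the server sets aligned, the following sketch gives the FTP control port and the wildcard port on the same cluster an identical server set; the server address 10.0.0.1 is an assumption:
dscontrol port add cluster_address:21
dscontrol server add cluster_address:21:10.0.0.1
dscontrol port add cluster_address:0
dscontrol server add cluster_address:0:10.0.0.1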
This feature is only available for the Dispatcher component.
Dispatcher provides the ability to detect potential "denial of service" attacks and notify administrators by an alert. Dispatcher does this by analyzing incoming requests for a conspicuous amount of half-open TCP connections on servers, a common trait of simple denial of service attacks. In a denial of service attack, a site receives a large quantity of fabricated SYN packets from a large number of source IP addresses and source port numbers, but the site receives no subsequent packets for those TCP connections. This results in a large number of half-open TCP connections on the servers; over time the servers can become very slow and accept no new incoming connections.
Load Balancer provides user exits that trigger customizable scripts, which alert the administrator to a possible denial of service attack. Dispatcher provides sample script files in the following directory:
The following scripts are available:
In order to run the files, you must move them to the following directory and remove the ".sample" file extension:
To implement the DoS attack detection, set the maxhalfopen parameter on the dscontrol port command as follows:
dscontrol port set 127.40.56.1:80 maxhalfopen 1000
In the above example, Dispatcher compares the current total number of half-open connections (for all servers residing on cluster 127.40.56.1 on port 80) with the threshold value of 1000 (specified by the maxhalfopen parameter). If the current number of half-open connections exceeds the threshold, a call to an alert script (halfOpenAlert) is made. When the number of half-open connections drops below the threshold, a call to another alert script (halfOpenAlertDone) is made to indicate that the attack is over.
To determine how to set the maxhalfopen value, periodically (perhaps every 10 minutes) run a half-open connection report (dscontrol port halfopenaddressreport cluster:port) when your site is experiencing normal to heavy traffic. The half-open connection report returns the current "total half-open connections received." Set maxhalfopen to a value that is anywhere from 50 to 200% greater than the largest number of half-open connections that your site experiences.
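As a worked example (the numbers are illustrative only): if the largest total reported over several observation periods is 400 half-open connections, a threshold between 600 (50% greater) and 1200 (200% greater) would be reasonable:
dscontrol port halfopenaddressreport 127.40.56.1:80
dscontrol port set 127.40.56.1:80 maxhalfopen 800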
In addition to the statistical data reported, the halfopenaddressreport also generates entries in the log (..ibm/edge/lb/servers/logs/dispatcher/halfOpen.log) for all the client addresses (up to approximately 8000 address pairs) that have accessed servers in a way that resulted in half-open connections.
To provide additional protection from denial of service attacks for backend servers, you can configure wildcard clusters and ports. Specifically, under each configured cluster add a wildcard port with no servers. Also add a wildcard cluster with a wildcard port and no servers. This has the effect of discarding all packets that are not addressed to an explicitly configured cluster and port. For information on wildcard clusters and wildcard ports, see Use wildcard cluster to combine server configurations and Use wildcard port to direct unconfigured port traffic.
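A sketch of this hardening for a hypothetical configured cluster 9.67.131.100; because no dscontrol server add commands follow, traffic matching these entries is discarded:
dscontrol port add 9.67.131.100:0
dscontrol cluster add 0.0.0.0
dscontrol port add 0.0.0.0:0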
The binary logging feature allows server information to be stored in binary files. These files can then be processed to analyze the server information that has been gathered over time.
The following information is stored in the binary log for each server defined in the configuration.
Some of this information is retrieved from the executor as part of the manager cycle. Therefore the manager must be running in order for the information to be logged to the binary logs.
Use the dscontrol binlog command set to configure binary logging.
The start option starts logging server information to binary logs in the logs directory. One log is created at the start of every hour with the date and time as the name of the file.
The stop option stops logging server information to the binary logs. The log service is stopped by default.
The set interval option controls how often information is written to the logs. The manager sends server information to the log server every manager interval. The information is written to the logs only if the specified number of log interval seconds have elapsed since the last record was written to the log. By default, the log interval is set to 60 seconds. There is some interaction between the settings of the manager interval and the log interval: because the log server is provided with information no faster than once every manager interval, setting the log interval to less than the manager interval effectively sets it to the same as the manager interval. This logging technique allows you to capture server information at any granularity. You can capture all changes to server information that the manager sees when calculating server weights; however, this amount of information is probably not required to analyze server usage and trends. Logging server information every 60 seconds gives you snapshots of server information over time. Setting the log interval very low can generate huge amounts of data.
The set retention option controls how long log files are kept. Log files older than the specified number of retention hours are deleted by the log server. This occurs only while the log server is being called by the manager; if the manager is stopped, old log files are not deleted.
The status option returns the current settings of the log service. These settings are whether the service is started, what the interval is, and what the retention hours are.
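A sketch of a typical command sequence, consistent with the options described above (the interval and retention values are illustrative):
dscontrol binlog start
dscontrol binlog set interval 60
dscontrol binlog set retention 24
dscontrol binlog status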
A sample Java program and command file have been provided in the following directory:
This sample shows how to retrieve all the information from the log files and print it to the screen. It can be customized to perform any type of analysis you want on the data. An example using the supplied script and program for the Dispatcher would be:
dslogreport 2001/05/01 8:00 2001/05/01 17:00
to get a report of the Dispatcher component's server information from 8:00 AM to 5:00 PM on May 1, 2001. (For CBR, use cbrlogreport.)
Only Linux systems support configurations where the client is located on the same machine as Load Balancer.
Collocated client configurations might not function correctly on other platforms because Load Balancer uses different techniques to examine incoming packets on the various operating systems it supports. In most cases, on systems other than Linux, Load Balancer does not receive packets from the local machine; it receives only packets coming from the network. Because of this, requests made to the cluster address from the local machine are not received by Load Balancer and cannot be serviced.