Make sure you are using at least this level of IHS. Older levels have more performance concerns.
Release | Recommended level |
IHS 6.0 | any |
IHS 2.0.47.x | 2.0.47.1 or later |
IHS 2.0.42.x | 2.0.42.2-PQ85834 or later |
IHS 1.3.28.x | 1.3.28.1 or later |
IHS 1.3.26.x | 1.3.26.2 or later, along with e-fix PQ86671, which upgrades GSKit. |
GSKit levels used with IHS:
IHS 6.0.x | GSKit 7 |
IHS 2.0.47.x | GSKit 7 |
IHS 2.0.42.x | GSKit 5 |
IHS 1.3.28.x | GSKit 7 |
IHS 1.3.26.x | GSKit 5 |
The first tuning decision you'll need to make is determining how many simultaneous requests your IHS installation will need to support. Many other tuning decisions are dependent on this value.
For some IHS deployments, the amount of load on the web server is directly related to the typical business day, and may show a load pattern such as the following:
[Chart: simultaneous requests by time of day, climbing from about 1 at 7am to a peak of roughly 2000 between 10am and noon, then declining back toward 1 by 5pm.]
For other IHS customers, providing applications which are used in many time zones, load on the server varies much less during the day.
The maximum number of simultaneous connections must be based on the busiest part of the day. This maximum number of simultaneous connections is only loosely related to the number of users accessing the site. At any given moment, a single user can require anywhere from zero to four independent TCP connections.
The typical way to determine the maximum number of simultaneous connections is to monitor mod_status reports during the day until typical behavior is understood, or to use mod_mpmstats (IHS 2.0.42.2 and later).
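For the mod_mpmstats approach, a small script can extract the peak number of busy threads from the error log. This is a sketch; the mpmstats line format shown in the comment is representative and may differ slightly between IHS levels:

```shell
# peak_busy: print the highest "bsy" (busy thread) count found in the
# mod_mpmstats entries of an IHS error log.  Assumes lines resembling:
#   [date] [notice] mpmstats: rdy 48 bsy 12 rd 2 wr 9 ka 1 log 0 dns 0 cls 0
peak_busy() {
    awk '/mpmstats:/ {
            for (i = 1; i < NF; i++)
                if ($i == "bsy" && $(i+1) + 0 > max)
                    max = $(i+1) + 0
         }
         END { print max + 0 }' "$1"
}
# Usage: peak_busy logs/error_log
```

Run this periodically over a full business day before settling on a peak value.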
Monitoring with mod_status
Note that if the web server has not been configured to support enough simultaneous connections, one of the following messages will be logged to the web server error log and clients will experience delays accessing the server.
Windows:
[warn] Server ran out of threads to serve requests. Consider raising the ThreadsPerChild setting
Linux and Unix:
[error] server reached MaxClients setting, consider raising the MaxClients setting
Check the error log for a message like this to determine if the IHS configuration needs to be changed.
Once the maximum number of simultaneous connections has been determined, add 25% as a safety factor. The next section discusses how to use this number in the web server configuration file.
Note: The KeepAliveTimeout setting can affect the apparent number of simultaneous requests being processed by the server. Increasing KeepAliveTimeout effectively reduces the number of threads available to service new inbound requests, and therefore raises the maximum number of simultaneous connections which must be supported by the web server. Decreasing KeepAliveTimeout can add load to the server through unnecessary TCP connection setup overhead. A setting of 5 to 10 seconds is reasonable for serving requests over high-speed, low-latency networks.
The netstat command can be used to show the state of TCP connections between clients and IBM HTTP Server. For some of these connection states, a web server thread (or child process, with 1.3.x on Unix) is consumed. For other states, no web server thread is consumed. See the following table to determine if a TCP connection in a particular state requires a web server thread.
TCP state | meaning | is a web server thread utilized? |
LISTEN | no connection | no |
SYN_RCVD | not ready to be processed | no |
ESTABLISHED | ready for web server to accept and process requests, or already processing requests | yes, as soon as IHS realizes that connection is established; but if there aren't enough configured web server threads (e.g., MaxClients is too small), the connection may stall until a thread becomes ready |
FIN_WAIT1 | web server has closed the socket | no |
CLOSE_WAIT | client has closed the socket, web server hasn't yet noticed | yes |
LAST_ACK | client closed socket then web server closed socket | no |
FIN_WAIT2 | web server closed socket then client ACKed | no |
TIME_WAIT | waiting for 2*MSL timeout before allowing the address/port 4-tuple to be reused | no |
CLOSING | web server and client closed at the same time | no |
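The table above can be applied in bulk with a short pipeline that tallies connections to the web server port by TCP state. This is a sketch which assumes netstat -an output where the local address is the fourth field and the state is the last field; column layout varies somewhat by platform:

```shell
# conn_states: read "netstat -an" output on stdin and count connections
# to the given local port, grouped by TCP state.
conn_states() {
    awk -v port=":$1" '$4 ~ port "$" { states[$NF]++ }
                       END { for (s in states) print s, states[s] }'
}
# Usage: netstat -an | conn_states 80
```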
IBM HTTP Server on Windows has a Parent process and a single multi-threaded Child process.
Relevant config directives on Windows:
Recommended settings:
Directive | Value |
ThreadsPerChild | maximum number of simultaneous connections |
ThreadLimit | same as ThreadsPerChild (IHS 2.0 and above) |
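As a sketch, the Windows recommendations above might look like this in httpd.conf; the measured peak of 600 simultaneous connections (750 after the 25% safety factor) is a hypothetical example:

```
# hypothetical measured peak of 600 simultaneous connections; +25% = 750
# ThreadLimit must appear before ThreadsPerChild in the configuration file
ThreadLimit     750
ThreadsPerChild 750
```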
On UNIX and Linux platforms, a running instance of IBM HTTP Server consists of one single-threaded Parent process which starts and maintains one or more multi-threaded Child processes. HTTP requests are received and processed by threads running in the Child processes. Each simultaneous request (TCP connection) consumes a thread. Use the appropriate configuration directives to control how many threads the server starts to handle requests; on UNIX and Linux, you can also control how the threads are distributed among the Child processes.
Relevant config directives on UNIX platforms:
The MaxSpareThreads and MinSpareThreads directives affect how the server autonomically reacts to server load. You can use these directives to instruct the server to automatically increase the number of Child processes when server load increases (subject to limits imposed by ServerLimit and MaxClients) and to decrease the number of Child processes when server load is low. This feature can be useful for managing overall system memory utilization when your server is being used for tasks other than serving HTTP requests. IBM recommends disabling this autonomic behavior by setting MaxSpareThreads to the same value as MaxClients.
Setting MaxSpareThreads to a relatively small value has a performance penalty: Extra CPU to terminate and create child processes. During normal operation, the load on the server may vary widely (e.g., from 150 busy threads to 450 busy threads). If MaxSpareThreads is smaller than this variance (e.g., 450-150=300), then IHS will terminate and create child processes frequently, resulting in reduced performance.
Recommended settings:
Directive | Value |
ThreadsPerChild | leave at the default value |
MaxClients | maximum number of simultaneous connections, rounded up to an even multiple of ThreadsPerChild |
StartServers | 2 |
MinSpareThreads | same value as ThreadsPerChild |
MaxSpareThreads | same value as MaxClients |
ServerLimit | MaxClients divided by ThreadsPerChild |
ThreadLimit | ThreadsPerChild |
Note: ThreadLimit and ServerLimit need to appear before these other directives in the configuration file.
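The arithmetic above can be sketched in shell; the measured peak of 1800 and the ThreadsPerChild value of 25 (a common default) are example assumptions:

```shell
# Derive MaxClients and ServerLimit from a measured peak connection count.
peak=1800                                  # assumed measured peak simultaneous connections
tpc=25                                     # assumed ThreadsPerChild, left at its default
want=$(( peak + peak / 4 ))                # add the 25% safety factor
maxclients=$(( (want + tpc - 1) / tpc * tpc ))   # round up to a multiple of ThreadsPerChild
serverlimit=$(( maxclients / tpc ))
echo "MaxClients  $maxclients"             # prints: MaxClients  2250
echo "ServerLimit $serverlimit"            # prints: ServerLimit 90
```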
IHS 1.3 on Linux and Unix systems uses one single-threaded child process per concurrent connection.
Recommended settings:
Directive | Value |
MaxClients | maximum number of simultaneous connections |
MinSpareServers | 1 |
MaxSpareServers | same value as MaxClients |
StartServers | default value |
Refer to the previous section.
The default SSLCipherSpec ordering enables maximum strength SSL connections at a significant performance penalty. A much better performing and reasonably strong SSLCipherSpec configuration is given below.
With IHS 2.0 and above, sendfile usage is disabled in the current default configuration files. Disabling it avoids some occasional platform-specific problems, but it also increases CPU utilization on the platforms where IHS supports sendfile (Windows, AIX, Linux, and HP-UX).
If you enable sendfile usage on AIX, ensure that the nbc_limit setting displayed by the no command is not too high for your system. On many systems, the AIX system default is 768MB. We recommend setting this to a much more conservative value, such as 256MB. If the limit is too high, and the web server's use of sendfile results in a large amount of network buffer cache memory utilization, a wide range of other system functions may fail. In situations like that, the best diagnostic step is to check network buffer cache utilization by running netstat -c. If it is relatively high (hundreds of megabytes), disable sendfile usage and see if the problem occurs again. Alternately, nbc_limit can be lowered significantly while sendfile remains enabled.
With IHS 2.0.42 and above, the default bin/envvars file specifies the setting MALLOCMULTIHEAP=considersize,heaps:8. This enables a memory management scheme for the AIX heap library which is better for multithreaded applications, and configures it to try to minimize memory use and to use a moderate number of heaps. For configurations with extensive heap operations (SSL or certain third-party modules), CPU utilization can be lowered by changing this setting to MALLOCMULTIHEAP=true. This may increase memory usage slightly.
The Fast Response Cache Accelerator (FRCA, aka AFPA) is disabled in the current default configuration files because some common Windows extensions, such as Norton Antivirus, are not compatible with it. FRCA is a kernel resident micro HTTP server optimized for serving static, non-access protected files directly out of the file system. The use of FRCA can dramatically reduce CPU utilization in some configurations. FRCA cannot be used for serving content over HTTPS/SSL connections.
IBM HTTP Server supports some features and configuration directives that can have a severe impact on server performance. Use of these features should be avoided unless there are compelling reasons to enable them.
Performance penalty: Extra DNS lookups per request.
This is disabled by default in the sample IHS configuration files.
Performance penalty: Delays introduced in the request to contact RFC 1413 ident daemon possibly running on client machine
This is disabled by default in the sample IHS configuration files.
Performance penalty: Extra CPU and disk I/O to try to find the file type
This is disabled by default in the sample IHS configuration files.
Performance penalty: Extra CPU to compute MD5 hash of the response
This is disabled by default in the sample IHS configuration files.
Performance penalty: Extra CPU to terminate and create child processes
This is set to the optimal setting (0) in the sample IHS configuration files.
Performance penalty: Extra CPU and disk I/O to locate .htaccess files in directories where static files are served
.htaccess files are disabled in the sample IHS configuration files.
Detailed logging (SSLTrace, plug-in LogLevel=trace, GSKit trace, third-party module logging) is often enabled as part of problem diagnosis. When one or more of these traces is left enabled after the problem is resolved, CPU utilization is higher than normal.
Detailed logging is disabled in the sample IHS configuration files.
If the static files served by IHS are maintained by untrusted users, you may want to disable this option in the IHS configuration file, in order to prevent those untrusted users from creating symbolic links to private files that should not ordinarily be served by IHS. But disabling FollowSymLinks to prevent this problem will result in performance degradation since IHS then has to check every component of the pathname to determine if it is a symbolic link.
Following symbolic links is enabled in the sample IHS configuration files.
This directive is commonly modified as part of tuning the web server. There are advantages and disadvantages for different values of ThreadsPerChild:
This directive is commonly modified as part of tuning the web server to handle a greater client load (more concurrent TCP connections).
When MaxClients is increased, the value for MaxSpareThreads should be scaled up as well. Otherwise, extra CPU will be spent terminating and creating child processes when the load changes by a relatively small amount.
This directive controls whether some important information is saved in the scoreboard for use by mod_status and diagnostic modules. When this is set to On, web server CPU usage may increase by as much as one percent. However, it can make mod_status reports and some other diagnostic tools more useful.
The use of the MaxConnections parameter in the WebSphere plug-in configuration is most effective when IHS 2.0 and above is used and there is a single IHS child process. However, there are other tradeoffs:
When an SSL connection is established, the client (web browser) and the web server negotiate the cipher to use for the connection. The web server has an ordered list of ciphers, and the first cipher in that list which is supported by the client will be selected.
The selection of SSL cipher can dramatically affect performance of IBM HTTP Server. Stronger ciphers consume more CPU cycles than weaker ciphers. Of the strong ciphers, Triple-DES ciphers are the strongest but are far more expensive than the more commonly used 128 bit RC4 ciphers. Triple DES requires 3 passes (Encrypt/Decrypt/Encrypt) and was originally intended for compatibility with older DES devices. If the SAME key is used for each pass, you get DES and can communicate with older devices/programs, if you use DIFFERENT keys you get a very strong, but very expensive, cipher.
In its default cipher list, IBM HTTP Server attempts to use the strongest Triple-DES ciphers first, such that Triple-DES will be used with any client which supports it. If the high CPU cost of triple-DES must be avoided, use the SSLCipherSpec directive to rearrange the negotiation order.
IBM HTTP Server supports the following SSL ciphers:
SSL V2:
shortname | longname | Meaning | Strength |
27 | SSL_DES_192_EDE3_CBC_WITH_MD5 | Triple-DES (168 bit) | (stronger) |
21 | SSL_RC4_128_WITH_MD5 | RC4 (128 bit) | |
23 | SSL_RC2_CBC_128_CBC_WITH_MD5 | RC2 (128 bit) | |
26 | SSL_DES_64_CBC_WITH_MD5 | DES (56 bit) | |
22 | SSL_RC4_128_EXPORT40_WITH_MD5 | RC4 (40 bit) | |
24 | SSL_RC2_CBC_128_CBC_EXPORT40_WITH_MD5 | RC2 (40 bit) | (weaker) |

SSL V3 and TLS V1:
shortname | longname | Meaning | Strength |
3A | SSL_RSA_WITH_3DES_EDE_CBC_SHA | Triple-DES SHA (168 bit) | (stronger) |
35 | SSL_RSA_WITH_RC4_128_SHA | RC4 SHA (128 bit) | |
34 | SSL_RSA_WITH_RC4_128_MD5 | RC4 MD5 (128 bit) | |
35b | TLS_RSA_WITH_AES_256_CBC_SHA | AES SHA (256 bit) | |
2F | TLS_RSA_WITH_AES_128_CBC_SHA | AES SHA (128 bit) | |
39 | SSL_RSA_WITH_DES_CBC_SHA | DES SHA (56 bit) | |
62 | TLS_RSA_EXPORT1024_WITH_RC4_56_SHA | RC4 SHA (56 bit) | |
64 | TLS_RSA_EXPORT1024_WITH_DES_CBC_SHA | DES SHA (56 bit) | |
33 | SSL_RSA_EXPORT_WITH_RC4_40_MD5 | RC4 MD5 (40 bit) | |
36 | SSL_RSA_EXPORT_WITH_RC2_CBC_40_MD5 | RC2 MD5 (40 bit) | (weaker) |
32 | SSL_RSA_WITH_NULL_SHA | NULL (no encryption) | |
31 | SSL_RSA_WITH_NULL_MD5 | NULL (no encryption) | |
30 | SSL_NULL_WITH_NULL_NULL | NULL (no encryption) | |

FIPS approved NIST SSL V3 and TLS V1 (only available with SSLFIPSEnable):
shortname | longname | Meaning | Strength |
3A | SSL_RSA_WITH_3DES_EDE_CBC_SHA | Triple-DES SHA (168 bit) | (stronger) |
FF | SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA | Triple-DES SHA (168 bit) | |
35b | TLS_RSA_WITH_AES_256_CBC_SHA | AES SHA (256 bit) | |
2F | TLS_RSA_WITH_AES_128_CBC_SHA | AES SHA (128 bit) | |
39 | SSL_RSA_WITH_DES_CBC_SHA | DES SHA (56 bit) | |
FE | SSL_RSA_FIPS_WITH_DES_CBC_SHA | DES SHA (56 bit) | (weaker) |
The following configuration directs the server to prefer strong 128-bit RC4 ciphers first and provides a significant performance improvement over the default configuration. This configuration does not support the weaker 40-bit, 56-bit, or NULL/plaintext ciphers that security scanners may complain about.
<VirtualHost *:443>
  SSLEnable
  Keyfile keyfile.kdb
  ## FIPS approved SSLV3 and TLSV1 AES ciphers
  SSLCipherSpec 35b
  SSLCipherSpec 2F
  ## SSLV3 128 bit Ciphers, other than 2F (above)
  SSLCipherSpec 34
  SSLCipherSpec 35
  ## SSLV2 128 bit Ciphers
  SSLCipherSpec 21
  SSLCipherSpec 23
  ## Triple-DES 168 bit Ciphers
  ## These can still be used, but only if the client does
  ## not support any of the ciphers listed above.
  SSLCipherSpec 3A
  SSLCipherSpec FF
  SSLCipherSpec 27
</VirtualHost>
Here is an alternate version which omits SSLv2 ciphers from the set of supported ciphers, such that only SSLv3 and TLSv1 will be used:
<VirtualHost *:443>
  SSLEnable
  Keyfile keyfile.kdb
  ## FIPS approved SSLV3 and TLSV1 AES ciphers
  SSLCipherSpec 35b
  SSLCipherSpec 2F
  ## SSLV3 128 bit Ciphers, other than 2F (above)
  SSLCipherSpec 34
  SSLCipherSpec 35
  ## Triple-DES 168 bit Ciphers
  ## These can still be used, but only if the client does
  ## not support any of the ciphers listed above.
  SSLCipherSpec 3A
  SSLCipherSpec FF
</VirtualHost>
You can use the following LogFormat directive to view and log the SSL cipher negotiated for each connection:
LogFormat "%h %l %u %t \"%r\" %>s %b \"SSL=%{HTTPS}e\" \"%{HTTPS_CIPHER}e\" \"%{HTTPS_KEYSIZE}e\" \"%{HTTPS_SECRETKEYSIZE}e\"" ssl_common
CustomLog logs/ssl_cipher.log ssl_common
This LogFormat produces entries in ssl_cipher.log that look something like this:
127.0.0.1 - - [18/Feb/2005:10:02:05 -0500] "GET / HTTP/1.1" 200 1582 "SSL=ON" "SSL_RSA_WITH_RC4_128_MD5" "128" "128"
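To see which ciphers are actually being negotiated, the log can be summarized. A sketch, assuming the LogFormat above (the cipher name is the sixth double-quote-delimited field):

```shell
# cipher_summary: count negotiated SSL ciphers in a log written with the
# ssl_common LogFormat shown above, most frequent first.
cipher_summary() {
    awk -F'"' '{ print $6 }' "$1" | sort | uniq -c | sort -rn
}
# Usage: cipher_summary logs/ssl_cipher.log
```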
The SSL CPU utilization will be lower with lower values of ThreadsPerChild. We recommend using a maximum of 100 if your server handles a lot of SSL traffic, so that the client load is spread among multiple child processes. (Note: This optimization is not possible on Windows, which supports only a single child process.)
Set this to the value true when there is significant SSL work-load, as this will result in better performance for the heap operations used by SSL processing.
The preferred approach to improving SSL performance is to use software tuning to the greatest extent possible. Installation and maintenance of crypto cards is relatively complex and usually results in a relatively small reduction in IHS CPU usage. We have observed many situations where the improvement is less than 10%.
HTTP keep-alive has a much larger benefit for SSL than for non-SSL. If the goal is to limit the number of IHS worker threads utilized for keep-alive handling, performance will be much better if KeepAlive is enabled with a small timeout for SSL-enabled virtual hosts, than if keep-alive is disabled altogether.
Example:
<VirtualHost *:443>
  ... normal configuration ...
  # enable keepalive support, but with very small timeout
  # to minimize the use of IHS worker threads
  KeepAlive On
  KeepAliveTimeout 1
</VirtualHost>
Warning! We are not recommending "KeepAliveTimeout 1" in general. We are suggesting that this is much better than setting KeepAlive Off. Larger values for KeepAliveTimeout will result in slightly better SSL session utilization at the expense of tying up an IHS thread for a longer period of time in case the browser sends in another request before the timeout is over. There are diminishing returns for larger values, and the optimal values are dependent upon the interaction between your application and client browsers.
An SSL session is a logical connection between the client and web server for secure communications. During the establishment of the SSL session, public key cryptography is used to exchange a shared secret master key between the client and the server, and other characteristics of the communication, such as the cipher, are determined. Later data transfer over the session is encrypted and decrypted with symmetric key cryptography, using the shared key created during the SSL handshake.
The generation of the shared key is very CPU intensive. In order to avoid generating the shared key for every TCP connection, there is a capability to reuse the same SSL session for multiple connections. The client must request to reuse the same SSL session in the subsequent handshake, and the server must have the SSL session identifier cached. When these requirements are met, the handshake for the subsequent TCP connection requires far less server CPU (80% less in some tests). All web browsers in general use are able to reuse the same SSL session. Custom web clients sometimes do not have the necessary support, however.
The use of load balancers between web clients and web servers presents a special problem. IBM HTTP Server cannot share a session id cache across machines. Thus, the SSL session can be reused only if a subsequent TCP connection from the same client is sent by the load balancer to the same web server. If it goes to another web server, the session cannot be reused and the shared key must be regenerated, at great CPU expense.
Because of the importance of reusing the same SSL session, load balancer products generally provide the capability of establishing affinity between a particular web client and a particular web server, as long as the web client tries to reuse an existing SSL session. Without the affinity, subsequent connections from a client will often be handled by a different web server, which will require that a new shared key be generated because a new SSL session will be required.
Some load balancer products refer to this feature as SSL Sticky or Session Affinity. Other products may use their own terminology. It is important to activate the appropriate feature to avoid unnecessary CPU usage in the web server, by increasing the frequency that SSL sessions can be reused on subsequent TCP connections.
End users will generally not be aware that SSL session is not being reused unless the overhead of continually negotiating new sessions causes excessive delay in responses. Web server administrators will generally only become aware of this situation when they observe the CPU utilization approaching 100%. The point at which this becomes noticeable will depend on the performance of the web server hardware, and whether or not a cryptographic accelerator is being used.
When SSL is being used and excessive web server CPU utilization is noticed, it is important to first confirm that Session Affinity is enabled if a load balancer is being used.
First, get the number of new sessions and reused sessions.
LogLevel must be set to info or debug.
IBM HTTP Server 2.0.42 or 2.0.47 up through cumulative fix PK07831, and IBM HTTP Server 6 up through 6.0.2, write messages of this format for each handshake:
[Sat Jul 09 10:37:22 2005] [info] New Session ID: 0
[Sat Jul 09 10:37:22 2005] [info] New Session ID: 1
0 means that an existing SSL session was re-used. 1 means that a new SSL session was created.
Getting the number of each type of handshake:
$ grep "New Session ID: 0" logs/error_log | wc -l
1115
$ grep "New Session ID: 1" logs/error_log | wc -l
163
IBM HTTP Server 2.0.42 or 2.0.47 starting with cumulative fix PK13230, and IBM HTTP Server 6 starting with 6.0.2.1, write messages of this format for each handshake:
[Sat Oct 01 15:30:17 2005] [info] [client 9.49.202.236] Session ID: YT8AAPUJ4gWir+U4v2mZFaw5KDlYWFhYyOM+QwAAAAA= (new)
[Sat Oct 01 15:30:32 2005] [info] [client 9.49.202.236] Session ID: YT8AAPUJ4gWir+U4v2mZFaw5KDlYWFhYyOM+QwAAAAA= (reused)
To get the relative stats:
$ grep "Session ID:.*reused" logs/error_log | wc -l
1115
$ grep "Session ID:.*new" logs/error_log | wc -l
163
The percentage of expensive handshakes for this test run is 163 / (1115 + 163), or 12.8%. To confirm that the load balancer is not impeding the reuse of SSL sessions, perform a load test with and without the load balancer*, and compare the percentage of expensive handshakes in both tests.
*Alternately, use the load balancer for both tests, but for one load test have the load balancer send all connections to a particular web server, and for the other load test have it load balance between multiple web servers.
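Computing the percentage of expensive handshakes from the newer-format messages can be scripted. A sketch, assuming the "(new)"/"(reused)" message format shown above and at least one handshake recorded in the log:

```shell
# ssl_new_pct: report the percentage of full (expensive) SSL handshakes
# recorded in an error log containing "Session ID: ... (new|reused)" messages.
ssl_new_pct() {
    new=$(grep -c "Session ID:.*(new)" "$1")
    reused=$(grep -c "Session ID:.*(reused)" "$1")
    awk -v n="$new" -v r="$reused" \
        'BEGIN { printf "%.1f%% new (expensive) handshakes\n", 100 * n / (n + r) }'
}
# Usage: ssl_new_pct logs/error_log
```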
This section does not presume to address even a small fraction of the network tuning issues that can affect IBM HTTP Server performance. However, we occasionally see recurring issues that cause severe performance degradation, and this section focuses on those.
Low data transfer rates handling large POST requests.
This problem can be caused by a small TCP receive buffer size being used for IHS. This results in the client being limited in how much data it can send before the server machine has to acknowledge it, resulting in poor network utilization.
Some data transfer performance problems can be solved using the native operating system mechanism for increasing the default size of TCP receive buffers. IHS must be restarted after making the change.
Platform | Tuning parameter | Instructions |
AIX | tcp_recvspace | Run no -o tcp_recvspace to display the old value. Run no -o tcp_recvspace=new_value to set a larger value. |
Solaris | tcp_recv_hiwat | Run ndd /dev/tcp tcp_recv_hiwat to display the old value. Run ndd -set /dev/tcp tcp_recv_hiwat new_value to set a larger value. |
HP-UX | tcp_recv_hiwater_def | Run ndd /dev/tcp tcp_recv_hiwater_def to display the old value. Run ndd -set /dev/tcp tcp_recv_hiwater_def new_value to set a larger value. |
Linux | rmem_default | Run cat /proc/sys/net/core/rmem_default to display the old value. Run echo new_value > /proc/sys/net/core/rmem_default to set a larger value. |
The following levels of IBM HTTP Server contain a ReceiveBufferSize directive for setting this value in a platform-independent manner, and only for the web server:
Usage:
ReceiveBufferSize number-of-bytes
This directive must appear at global scope in the configuration file.
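As a hypothetical example (the value itself is an assumption; size it for your network's bandwidth and latency):

```
# request a 64 KB TCP receive buffer; must appear at global scope
ReceiveBufferSize 65536
```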
Low data transfer rates running on AIX 5 when handling large (multi-megabyte) POST requests from Windows machines. Network traces show large delays (~150 ms) between packet acknowledgements.
This performance problem can be corrected by setting an AIX network tuning option and applying AIX maintenance.
For all releases of AIX, set the tcp_nodelayack network option to 1 by using the following command:
no -o tcp_nodelayack=1
For AIX 5.1, apply the fix for APAR IY53226. For more information, see: IY53226
For AIX 5.2, apply the fix for APAR IY53254. For more information, see: IY53254
Unexpected network latency when the application is somewhat slow. Network traces show a normal HTTP 200 OK message for the first part of the response, then AIX waits ~150ms for a delayed ACK from the client.
This performance problem can be corrected by setting an AIX network tuning option.
Set the rfc2414 network option to 1 by using the following command:
no -o rfc2414=1
Instructions for tuning some operating system parameters are available in the WebSphere InfoCenter. Many of these parameters, such as TCP layer configuration or file descriptor configuration, apply to IBM HTTP Server as well.
This comparison is not applicable to IBM HTTP Server on Windows, where memory usage is much more similar between 1.3 and 2.0.
Many customers on Unix and Linux systems have encountered swap space or physical memory problems with IBM HTTP Server 1.3 due to the large number of child processes which may be required, and the memory overhead per child process.
The swap space usage with IBM HTTP Server 1.3 is much more significant than the physical memory usage, because there is significant virtual memory usage in every child process which does not contribute towards resident set size. So a quick fix for an IBM HTTP Server 1.3 configuration with many child processes may be to drastically increase the available swap space and see if that allows sufficient processes to be created without resulting in excess paging.
StartServers 500
MaxClients 500
MaxSpareServers 500
MinSpareServers 1
MaxRequestsPerChild 0
<IfModule worker.c>
  ServerLimit 2
  ThreadLimit 250
  StartServers 2
  MaxClients 500
  MinSpareThreads 1
  MaxSpareThreads 500
  ThreadsPerChild 250
  MaxRequestsPerChild 0
</IfModule>
(output of ps -A -o pid,ppid,vsz,rss,comm)
  PID  PPID   VSZ  RSS COMMAND
22729 22676 13448 4768 bin/httpd
22721 22676 13448 4768 bin/httpd
22734 22676 13448 4768 bin/httpd
22745 22676 13448 4768 bin/httpd
22737 22676 13448 4768 bin/httpd
22719 22676 13448 4768 bin/httpd
22740 22676 13448 4768 bin/httpd
22731 22676 13448 4768 bin/httpd
22728 22676 13448 4768 bin/httpd
22741 22676 13448 4768 bin/httpd
22720 22676 13448 4768 bin/httpd
22724 22676 13448 4768 bin/httpd
22746 22676 13448 4768 bin/httpd
22717 22676 13448 4768 bin/httpd
22730 22676 13448 4768 bin/httpd
22718 22676 13448 4768 bin/httpd
22722 22676 13448 4768 bin/httpd
22732 22676 13448 4768 bin/httpd
22743 22676 13448 4768 bin/httpd
22739 22676 13448 4768 bin/httpd
22733 22676 13448 4768 bin/httpd
22676     1 13448 8760 bin/httpd
22742 22676 13448 4768 bin/httpd
(and 478 more children)
Totals: Total virtual memory size is about 6.7 GB. Total resident set size is about 2.4GB.
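The totals quoted here can be reproduced with a short pipeline; this sketch assumes ps reports VSZ and RSS in kilobytes in the third and fourth columns, as in the listing above:

```shell
# mem_totals: sum the VSZ and RSS columns for all httpd processes.
mem_totals() {
    awk '/httpd/ { vsz += $3; rss += $4 }
         END { printf "VSZ %d KB, RSS %d KB\n", vsz, rss }'
}
# Usage: ps -A -o pid,ppid,vsz,rss,comm | mem_totals
```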
(output of ps -A -o pid,ppid,vsz,rss,comm)
PID PPID   VSZ   RSS COMMAND
394  390 44240 36696 /home/trawick/testihsbuild/ihsinstall/bin/httpd
390    1 15136  9528 /home/trawick/testihsbuild/ihsinstall/bin/httpd
393  390 44144 36536 /home/trawick/testihsbuild/ihsinstall/bin/httpd
392  390 14552  3328 /home/trawick/testihsbuild/ihsinstall/bin/httpd
Totals: Total virtual memory size is about 117 MB. Total resident set size is about 86MB. Note that IBM HTTP Server 2.0 and above has an extra child process when CGI requests are enabled.
This is the same except for the OS, which is AIX 5.3.
(output of ps auxw, then post-processing and picking the largest child)
USER      SZ   RSS
root   11800   800
nobody 12956  1976
(and 499 more children like this)
Totals: Total virtual memory size is about 6.5 GB. Total resident set size is about 1 GB.
(output of ps auxw, then post-processing)
USER      SZ   RSS
nobody 32876 32932
nobody 33348 33384
root     632   668
nobody   636   808
Totals: Total virtual memory size is about 68 MB. Total resident set size is about 68MB. Note that IBM HTTP Server 2.0 and above has an extra child process when CGI requests are enabled.
In support of IPv6 networking, these levels of IBM HTTP Server query the resolver library for both IPv4 and IPv6 addresses for a host. This can result in extra DNS lookups on AIX, even when the IPv4 address is defined in /etc/hosts. To work around this issue, IPv6 lookups can be disabled.
Edit /etc/netsvc.conf, which configures the resolver system-wide.
Add or modify the lookup rule for hosts so that it has this setting:
hosts=local4,bind4
That will disable IPv6 lookups. Now restart IBM HTTP Server and confirm that the delays with proxy requests have been resolved.