TCP/IP sockets in CLOSE_WAIT state on a Web server loaded with either the WebSphere® Application Server V4.0, V5.0 or V5.1 plug-in module
 Technote (troubleshooting)
 
Problem (Abstract)
You can configure a Web server plug-in to route requests to WebSphere Application Server V4.0, V5.0, and V5.1 releases. This technote explains why you can observe TCP/IP sockets in the CLOSE_WAIT state, and the steps required to determine whether the number of CLOSE_WAIT sockets is normal or excessive.
 
Cause
You want to determine whether the number of TCP/IP sockets in the CLOSE_WAIT state is excessive on a Web server loaded with the WebSphere Application Server V4.0, V5.0, or V5.1 plug-in module.
 
Resolving the problem
When running a load test on your WebSphere Application Server V4.0, V5.0, or V5.1 configuration, you might observe the following Transmission Control Protocol/Internet Protocol (TCP/IP) connections on your system:
Protocol   Local Address    Foreign Address   State
TCP        slovakia:1615    slovakia:9080     CLOSE_WAIT
TCP        slovakia:9080    slovakia:1615     FIN_WAIT_2

Port 1615 is an arbitrary port used by the plug-in for its TCP/IP connection to WebSphere Application Server, which listens on port 9080. The table above is an example from a configuration where the Web server was collocated with WebSphere Application Server, so the netstat -a command lists the connection twice, once from each end.

Background

It is not uncommon for some TCP/IP connections between the plug-in's arbitrary ports and the WebSphere Application Server transport ports to be in the CLOSE_WAIT state.

The WebSphere Application Server plug-in reuses connections to WebSphere Application Server using the Hypertext Transfer Protocol (HTTP)/1.1 Keep-Alive capabilities. As a result, the WebSphere Application Server Keep-Alive timeout can expire for an idle connection and cause it to go into the CLOSE_WAIT state. The plug-in does not complete the close until the connection is pulled off the internal queue and the plug-in determines that the connection is no longer active. At that point, the plug-in destroys that connection and creates a new one.

If you take a TCP/IP trace, such as an Ethereal trace, you can observe the following communication between the WebSphere Application Server and the plug-in:
  1. The Application Server sends the FIN to the plug-in when the Keep-Alive timeout expires.
  2. The Web server or plug-in sends the ACK to the Application Server.
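
As a minimal illustration of the trace above, the following self-contained Python sketch (unrelated to the plug-in code; the loopback address and one-second pause are arbitrary) reproduces the CLOSE_WAIT state: the listening socket plays the role of the Application Server transport whose Keep-Alive timeout has expired and that sends the FIN, and the client socket plays the role of the plug-in, whose TCP stack acknowledges the FIN but which has not yet closed its own end.

import socket
import time

# "Application Server" side: listen on an arbitrary free loopback port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

# "Plug-in" side: open a connection to it.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
conn, _ = srv.accept()

conn.close()     # step 1: the server side sends the FIN
time.sleep(1)    # step 2: the client's TCP stack sends the ACK automatically
# At this point "netstat -an" shows the client side of the connection in
# CLOSE_WAIT; it stays there until the client finally closes its end.
cli.close()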

As mentioned before, the behavior of the plug-in connection pool management is to hold a connection, if possible, and try to reuse it through the HTTP/1.1 Keep-Alive capabilities. The Web server or plug-in might later try to send another request to the Application Server across that connection, but once the Keep-Alive timeout expires, the Application Server sends a FIN on the connection.

If the plug-in detects that the connection is bad (CLOSE_WAIT) on the Web server side, it closes the connection through the operating system.

If the connection is still good (Established) on the Web server side, the data is sent through the existing connection to the Application Server in a PUSH/ACK.

If the connection is dead on the Application Server side, the connection is reset (RST), forcing the plug-in to create a new connection. This partial FIN/ACK, ACK behavior is performed by the plug-in for performance reasons.
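
The decision flow described in the preceding paragraphs can be summarized in a short sketch. Note that this is not the plug-in's actual source code: the class, function, and state names are purely illustrative stand-ins, and a real implementation detects connection state at the TCP level rather than through a string attribute.

class PooledConnection:
    # Illustrative stand-in for one connection in the plug-in connection pool.
    def __init__(self, state="ESTABLISHED"):
        self.state = state

    def send(self, request):
        if self.state == "RESET":        # dead on the Application Server side
            raise ConnectionResetError
        return "response"                # still good: data flows as PUSH/ACK

def route(pool, request):
    conn = pool.pop() if pool else None              # try to reuse a pooled connection
    if conn is not None and conn.state == "CLOSE_WAIT":
        conn = None                                  # bad on the Web server side: destroy it
    if conn is None:
        conn = PooledConnection()                    # create a new connection
    try:
        response = conn.send(request)
    except ConnectionResetError:                     # RST from the Application Server
        conn = PooledConnection()                    # forces a new connection
        response = conn.send(request)
    pool.append(conn)                                # keep the connection for the next request
    return response

# Example: the only pooled connection has already gone into CLOSE_WAIT.
pool = [PooledConnection(state="CLOSE_WAIT")]
print(route(pool, "GET /"))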

Connections held in a CLOSE_WAIT state do not have any noticeable impact on the system (for example, on performance), because they are closed as soon as the plug-in attempts to use them. These CLOSE_WAIT connections also do not consume any CPU cycles. You must, however, ensure that a sufficient number of file descriptors is configured on the operating system that hosts the Web server, plug-in, and Application Server.

Calculating the maximum number of TCP/IP sockets in the CLOSE_WAIT state
  1. From the Web server perspective

    You must know the maximum number of TCP/IP sockets that can be established between the plug-in and the WebSphere Application Server back-end from the Web server perspective. Here is the formula:

    Max TCP/IP sockets = Max number of Web server clients x Number of Transports in the plugin-cfg.xml

    You can obtain the Max number of Web server clients from your Web server configuration files. The same formula also applies to the maximum number of TCP/IP connections in the CLOSE_WAIT state.

    For example, you can obtain this information from the following:

    1. For all releases of IBM HTTP Server V1.3 or Apache V1.3:

      • On the Unix® platform, collect the MaxClients directive.

        Note: On Unix, the V1.3 releases are process-based Web servers.

      • On the Windows® platform, collect the ThreadsPerChild directive.

        Note: On Windows, releases of IBM HTTP Server V1.3 are thread-based Web servers.

    2. For all releases of IBM HTTP Server V2.0 or Apache V2.0:

      Note: The V2.0 releases are thread-based Web servers on all platforms.

      • On Unix or Linux platforms, collect a combination of the ServerLimit, ThreadLimit, and ThreadsPerChild directives.

      • On the Windows platform, collect the ThreadsPerChild directive.

    3. For SunOne, IIS, or Domino, review your Web server documentation for equivalent directives.
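
    Putting these directives together: for the V2.0 thread-based servers on Unix or Linux, the number of Web server clients is at most the number of child processes multiplied by the threads per child. The following Python sketch uses hypothetical directive values (substitute the values from your own httpd.conf and plugin-cfg.xml) to show how the terms of the formula are obtained.

    # Hypothetical directive values; replace them with your own configuration.
    # (For V1.3 on Unix, use MaxClients alone; for V1.3 and V2.0 on Windows,
    # use ThreadsPerChild alone.)
    server_limit = 16                 # ServerLimit (V2.0 on Unix or Linux)
    threads_per_child = 25            # ThreadsPerChild
    max_web_server_clients = server_limit * threads_per_child    # 400 concurrent clients

    number_of_transports = 2          # Transport elements for this Web server in plugin-cfg.xml
    max_close_wait_web_server_side = max_web_server_clients * number_of_transports
    print(max_close_wait_web_server_side)    # 400 x 2 = 800 in this hypothetical example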


  2. From the WebSphere Application Server perspective

    You must know the maximum number of TCP/IP sockets that can be established between the plug-in and a specific WebSphere Application Server (AppServer) back-end from the WebSphere perspective. Here is the formula:

    Max TCP/IP sockets of one AppServer = Max Web Container threads + Connection Backlog of all its Web Container transports

    You can get the maximum number of Web Container threads and the Connection Backlog setting for each Web Container Transport from your WebSphere Application Server configuration files. For more information, go to: http://www-306.ibm.com/software/webservers/appserv/infocenter.html

    The maximum number of TCP/IP sockets that can be established between the plug-in and all WebSphere Application Server back-ends from the WebSphere perspective is given by this formula:

    Max TCP/IP sockets of all AppServers = Max TCP/IP sockets of AppServer1 + Max TCP/IP sockets of AppServer2 + ... + Max TCP/IP sockets of AppServerN

    where AppServer1, AppServer2, ....., AppServerN are all Application Servers in the WebSphere Application Server domain, including all horizontal or vertical clones.

    Important:
    The maximum number of TCP/IP sockets in the CLOSE_WAIT state cannot exceed the maximum number of allowed TCP/IP sockets from the Web server or the WebSphere Application Server perspective, whichever is less.
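
    As a worked sketch of the two formulas and the "whichever is less" rule, the following Python fragment uses the same values as the sample configuration described in the notes at the end of this technote (one Application Server, 50 Web Container threads, a Connection Backlog of 511, ThreadsPerChild 50, and one Transport); adjust the values to match your own environment.

    max_web_server_clients = 50       # ThreadsPerChild from httpd.conf
    number_of_transports = 1          # Transport elements in plugin-cfg.xml
    web_server_side = max_web_server_clients * number_of_transports      # 50

    web_container_threads = 50        # thread-maximum-size for the AppServer
    connection_backlog = 511          # backlog-connections for its HTTP transport
    appserver_side = web_container_threads + connection_backlog          # 561
    # With more than one AppServer (or clones), sum the per-AppServer values.

    max_close_wait = min(web_server_side, appserver_side)
    print(max_close_wait)             # 50: the lesser of the two perspectives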


Ways that CLOSE_WAIT connections are terminated:

The plug-in's handling of CLOSE_WAIT sockets is not the only way that those connections can be terminated by the operating system. Below is a list of ways that both process-based and thread-based Web servers handle this:

  • Process-based Web server, such as all releases of IBM HTTP Server V1.3.26 or V1.3.28:
    1. The Web server process (HTTPD) that opened the connection dies through Web server management (for example, when the MaxSpareServers directive applies).

    2. Any new HTTP request that is handled by the same Web server (HTTPD) process and is routed to the same Application Server (Transport) port.

    3. The Web server stops completely.

  • Thread-based Web server, such as all releases of IBM HTTP Server V2.0.42 or V2.0.47:
    1. The Web server thread (HTTPD) that opened the connection closes through Web server management (for example, when the MaxSpareThreads directive applies).

    2. Any new HTTP request that is handled by the same Web server thread and is routed to the same Application Server (Transport) port.

    3. The Web server stops completely.

As is evident in the above examples, we recommend that you configure a thread-based Web server, such as IBM HTTP Server 2.0.47, with WebSphere Application Server instead of a process-based Web server, such as IBM HTTP Server 1.3.28. This is for the following reasons (regarding CLOSE_WAIT issues):

  • Thread-based Web servers produce one plug-in connection pool for each Web server child process. Since a single child process can have multiple threads to serve various client requests, each child process shares the plug-in connection pool across all of its threads. This keeps the number of plug-in connection pools to a minimum. This is contrary to the process-based Web server scenario where a connection pool is unique to each child process, and only one client is served through each child process.
  • Thread-based Web servers have a higher chance that a new HTTP request is handled by the same Web server process that originally established the TCP/IP socket now in the CLOSE_WAIT state. The plug-in tries to reuse the existing connection, which results in that connection being closed, and then attempts to find another connection within the connection pool. The same scenario can be repeated for other CLOSE_WAIT connections for the same Transport until the plug-in finds a reusable connection in a Keep-Alive state, or creates a new one if necessary. In the most optimistic scenario, all CLOSE_WAIT connections for the same Transport in this Web server process are closed.
  • Process-based Web servers remove a CLOSE_WAIT socket only if a new HTTP request is handled by the existing Web server process that originally established that TCP/IP socket. If the connection cannot be reused (it is in the CLOSE_WAIT state), the plug-in removes it from the plug-in connection pool and creates a new one. In this case, the plug-in does not go through the connection pools established by other Web server processes to close sockets in a CLOSE_WAIT state for the same Transport.


Verifying whether there is an excessive number of TCP/IP sockets in a CLOSE_WAIT state
  1. Through "netstat -an" running on the Web server, find out the total number of CLOSE_WAIT connections established with the back-end Application Servers.

  2. Calculate the maximum number of allowed CLOSE_WAIT connections for your WebSphere Application Server environment and verify that the number has not been exceeded.

  3. Through "netstat -an" running on the Web server, find the number of CLOSE_WAIT connections for each Application Server Transport port. In most cases it is sufficient to find out which Application Server Transport ports have a very high number of CLOSE_WAIT connections and count only those connections to obtain the total value. There is no need to focus on the Application Servers with a lower number of CLOSE_WAIT connections.

  4. Verify that the total value of CLOSE_WAIT connections for any Application Server port does not exceed the maximum number of clients allowed by the Web server (such as MaxClients in releases of IBM HTTP Server V1.3).

If the maximums calculated in steps 2 and 4 have not been exceeded, everything is normal and you do not have to worry about the CLOSE_WAIT sockets, because they are not excessive.
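
Steps 1 and 3 can be automated with a small script. The following Python sketch (not an IBM-supplied tool) runs "netstat -an" and counts CLOSE_WAIT connections per foreign (Application Server Transport) port; the exact netstat column layout differs between operating systems, so treat the parsing as a starting point only.

import subprocess
from collections import Counter

output = subprocess.run(["netstat", "-an"], capture_output=True, text=True).stdout

counts = Counter()
for line in output.splitlines():
    fields = line.split()
    if len(fields) >= 2 and fields[-1] == "CLOSE_WAIT":
        foreign = fields[-2]                              # e.g. slovakia:9080 or 9.1.2.3.9080
        port = foreign.rsplit(":", 1)[-1].rsplit(".", 1)[-1]
        counts[port] += 1

print("Total CLOSE_WAIT connections:", sum(counts.values()))    # step 1
for port, total in counts.most_common():                        # step 3, per Transport port
    print("port", port, "->", total)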


FIN_WAIT connection notes

The following is a brief explanation of the FIN_WAIT_2 connections that you can observe on the Application Server side.

In the FIN_WAIT_2 state, the Application Server has sent a FIN and the plug-in side has acknowledged it. Unless the Application Server has done a half-close, it is waiting for the application at the other end (the plug-in) to recognize the close and send a FIN to close its end of the connection. Only when the process at the other end performs its close does the Application Server's end move from the FIN_WAIT_2 state to the TIME_WAIT state. This means that the Application Server's end of the connection can remain in the FIN_WAIT_2 state forever. The other end is still in the CLOSE_WAIT state, and it can also remain in that state forever, until the application decides to close it.

Many Berkeley-derived implementations prevent this infinite wait in the FIN_WAIT_2 state as follows.

If the application that performs the active close does a complete close, not a half-close indicating that it expects to receive data, then a timer is set. If the connection is idle for 10 minutes plus 75 seconds, TCP moves the connection into the CLOSED state. Most TCP/IP implementations have this or a similar feature (although it violates the TCP specification), which is why you often do not see FIN_WAIT_2 sockets on the Application Server side that correspond to CLOSE_WAIT sockets on the other end.


Conclusion

This technote explains why you can observe TCP/IP sockets in the CLOSE_WAIT state and the steps required to determine whether the number of CLOSE_WAIT sockets is normal or excessive.

This technote also explains that this behavior is not a defect but rather a design feature, implemented mainly for performance reasons, that leaves TCP/IP connections open between the plug-in and the Application Server.

IBM HTTP Server 2.0 (a thread-based Web server) with the WebSphere Application Server plug-in is fully tested and supported with all current and future releases of V4.0, V5.0, and V5.1. IBM recommends using IBM HTTP Server 2.0, rather than IBM HTTP Server 1.3, whenever possible. Review these documents if you plan to migrate from IBM HTTP Server 1.3 or any other third-party Web server to IBM HTTP Server 2.0:


Notes regarding attached files

Attached are three files from a WebSphere Application Server V4.0 configuration with a local IBM HTTP Server V1.3.19 Web server. As is evident from the XML export of WebSphere Application Server, the WebSphere Application Server domain has only one Application Server (Default Server) defined. The maximum number of its Web Container threads is 50 and the Connection Backlog is set to 511.

<web-container>
......
<transport name="http">
<transport-host>*</transport-host>
<transport-port>9080</transport-port>
<http-transport>
<connection-timeout>5</connection-timeout>
<backlog-connections>511</backlog-connections>
<keep-alive-timeout>5</keep-alive-timeout>
<maximum-keep-alive>25</maximum-keep-alive>
<maximum-req-keep-alive>100</maximum-req-keep-alive>
<ssl-enabled>false</ssl-enabled>
</http-transport>
</transport>
<thread-maximum-size>50</thread-maximum-size>
<thread-minimum-size>25</thread-minimum-size>
<thread-inactivity-timeout>10</thread-inactivity-timeout>
<thread-is-growable>false</thread-is-growable>
......
</web-container>

The plugin-cfg.xml file reflects this configuration; it includes only one HTTP Transport to port 9080:

<Transport Hostname="slovakia" Port="9080" Protocol="http"/>

From httpd.conf, you can see that IBM HTTP Server will not handle more than 50 concurrent clients:

ThreadsPerChild 50

The maximum number of TCP/IP sockets in the CLOSE_WAIT state from the Web server perspective is 50 x 1 = 50 (ThreadsPerChild x Number of Transports).

The maximum number of TCP/IP sockets in the CLOSE_WAIT state from the WebSphere Application Server perspective is 50 + 511 = 561 (Max Web Container threads + Connection Backlog).

The maximum number of TCP/IP sockets in the CLOSE_WAIT state cannot exceed the maximum number of allowed TCP/IP sockets from the Web server or the WebSphere Application Server perspective, whichever is less. This means that the maximum number of TCP/IP sockets in the CLOSE_WAIT state for this configuration is 50.
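
The Web server side of this calculation can also be reproduced directly from the attached files. The following Python sketch (assuming the attached httpd.conf and plugin-cfg.xml are in the current directory) reads ThreadsPerChild from httpd.conf and counts the Transport elements in plugin-cfg.xml.

import re
import xml.etree.ElementTree as ET

threads_per_child = None
with open("httpd.conf") as conf:
    for line in conf:
        match = re.match(r"\s*ThreadsPerChild\s+(\d+)", line, re.IGNORECASE)
        if match:
            threads_per_child = int(match.group(1))      # 50 in the attached file

transports = len(ET.parse("plugin-cfg.xml").getroot().findall(".//Transport"))   # 1
print(threads_per_child * transports)                    # 50 x 1 = 50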
 
Attachments: plugin-cfg.xml, WebSphere_Export.xml, httpd.conf
 
Cross Reference information
Segment: Application Servers
Product: Runtimes for Java Technology
Component: Java SDK
 
 


Document Information


Product categories: Software > Application Servers > Distributed Application & Web Servers > WebSphere Application Server > Plug-in
Operating system(s): Windows
Software version: 5.1
Software edition:
Reference #: 1163659
IBM Group: Software Group
Modified date: Mar 18, 2004