We have a RHES3 server running Apache 2.0.52 with PHP 4.3.9 compiled against the Oracle 10g client library. The system functions properly, but the memory usage of each httpd child process slowly creeps up until all memory and swap on the box are exhausted. Restarting Apache releases all of the used memory and the system continues functioning normally.

We've noticed that netstat -tupan shows a *lot* of connections in TIME_WAIT, which made it seem like some sort of keep-alive behavior was holding connections open and keeping their resources from being released. Adding KeepAlive Off to httpd.conf didn't fix the problem, though.
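For reference, something along these lines is how we tally connections by state (standard netstat/awk; the NR > 2 just skips netstat's two header lines):

netstat -tan | awk 'NR > 2 { print $6 }' | sort | uniq -c | sort -rn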

What *does* work for us (kinda kludgy) is setting MaxRequestsPerChild in httpd.conf to 50 or so. Apache kills each child after it has served 50 requests, which frees whatever memory the child had accumulated. That keeps us up and running without resorting to a cron job that restarts Apache every 20 minutes.
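To watch the leak (and to confirm the recycling is actually working), we check per-child memory with something like this; ps column layouts vary a bit between systems, so treat it as a sketch:

# list each httpd child's resident set size in KB, largest first
ps -e -o pid,rss,comm | awk '$3 == "httpd"' | sort -k2,2 -rn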

While this works, it seems like we shouldn't need to be doing this. :-) I'm trying to figure out whether there are any known problems with RHES3, PHP, Apache, or the Oracle 10g libraries that could cause this behavior. If we can rule those out, then it's most likely a problem with the PHP scripts being called. Note that we have three RHES3 servers running this exact same configuration and all of them exhibit the problem; it's just that this server is by far the busiest, so the problem shows up much more frequently there.
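If it does turn out to be the scripts, the plan is to isolate them one at a time: on a test box, temporarily set MaxRequestsPerChild to 0 (unlimited) so children aren't recycled, then hammer a single page with ab (which ships with Apache) and compare child memory before and after. The URL below is just a placeholder for whichever script is under suspicion:

# drive 500 requests at one suspect script with a single client
/usr/local/apache2/bin/ab -n 500 -c 1 http://localhost/test.php

# compare httpd child RSS before and after the run
ps -e -o pid,rss,comm | awk '$3 == "httpd"'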

Here are the relevant configuration options:

Apache
./configure \
--prefix=/usr/local/apache2 \
--enable-ssl \
--enable-so \
--enable-rewrite=shared \
--enable-expires \
--enable-info \
--enable-deflate \
--with-ssl=/usr

PHP
'./configure' \
'--with-oci8=/u01/app/oracle/product/10.1.0/client_1' \
'--with-apxs2=/usr/local/apache2/bin/apxs' \
'--enable-track-vars' \
'--enable-calendar' \
'--enable-sigchild' \
"$@"

I'm not going to attach all of httpd.conf, but here are the prefork options we're using:

StartServers 15
MinSpareServers 5
MaxSpareServers 10
MaxClients 256
MaxRequestsPerChild 50
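One thing worth noting about these numbers: with MaxClients 256, even a modest per-child leak adds up fast, since in the worst case 256 children are each carrying their own bloat. We estimate the aggregate footprint with something like this (it counts shared pages once per child, so it overstates the true total):

# sum RSS across all httpd processes, in MB (shared pages counted per child)
ps -e -o rss,comm | awk '$2 == "httpd" { sum += $1 } END { printf "%.0f MB\n", sum / 1024 }'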

Thanks for any insight.