PHP/FastCGI HTTP 500 error after ~500 requests

PHP has an environment variable, PHP_FCGI_MAX_REQUESTS, that limits the number of requests a PHP FastCGI process will handle before it exits-- it defaults to 500. Apache mod_fcgid has an option called FcgidMaxRequestsPerProcess that defaults to unlimited. When the mod_fcgid limit is higher than the PHP limit, an HTTP 500 error is returned when PHP exits.
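
To make the mismatch concrete, this is roughly where the two limits live, with their defaults (a sketch only-- the file locations and how the environment variable gets exported vary by distro and setup):

# Apache side -- mod_fcgid configuration (default 0 = never recycle the process)
FcgidMaxRequestsPerProcess 0

# PHP side -- environment of the php-cgi process (default 500 = exit after 500 requests)
PHP_FCGI_MAX_REQUESTS=500
export PHP_FCGI_MAX_REQUESTS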

The environment variable is documented here:

https://svn.php.net/repository/php/php-src/trunk/sapi/cgi/README.FastCGI

The code that uses the environment variable is here:

https://svn.php.net/repository/php/php-src/trunk/sapi/cgi/cgi_main.c

Additional details on the environment variable can be found in Apache's mod_fcgid documentation, under the Special Considerations section:

http://httpd.apache.org/mod_fcgid/mod/mod_fcgid.html#examples

By default, PHP FastCGI processes exit after handling 500 requests, and they may exit after this module has already connected to the application and sent the next request. When that occurs, an error will be logged and 500 Internal Server Error will be returned to the client. This PHP behavior can be disabled by setting PHP_FCGI_MAX_REQUESTS to 0, but that can be a problem if the PHP application leaks resources. Alternatively, PHP_FCGI_MAX_REQUESTS can be set to a much higher value than the default to reduce the frequency of this problem. FcgidMaxRequestsPerProcess can be set to a value less than or equal to PHP_FCGI_MAX_REQUESTS to resolve the problem.

Anyway, the common fix seems to be to adjust PHP_FCGI_MAX_REQUESTS, adjust FcgidMaxRequestsPerProcess (which defaults to 0, i.e. unlimited), or both, as the Apache documentation recommends.
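
For example, on the Debian/Ubuntu layout mentioned later in this report, the Apache-side fix looks roughly like this (the path and restart command are assumptions for that layout-- adjust for your distro):

# append to /etc/apache2/mods-enabled/fcgid.conf
FcgidMaxRequestsPerProcess 500

# then restart Apache so the new limit takes effect
$ sudo service apache2 restart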

The exact timing of the error depends on the activity of the site, since mod_fcgid may distribute the requests over more than one PHP process. The problem can be reproduced fairly easily on a completely inactive website using ab-- a phpinfo() file is sufficient. I ran the following on a website (virtual host) with no traffic other than my testing. The PHP process was spawned only when this test was run.

$ ab -n 500 [domain]/info.php # Basic 500 requests to show the 500 error
...
Non-2xx responses:      1
$ ab -n 2000 [domain]/info.php # Try 2000 instead of just 500
...
Non-2xx responses:      4
$ ab -q -n 10000 -v 3 [domain]/info.php  | grep "HTTP/1.1" | sort | uniq -c # get the specific HTTP/1.1 status codes-- for 10000 requests
   9980 HTTP/1.1 200 OK
     20 HTTP/1.1 500 Internal Server Error

Error in the virtual host error_log that repeats for each failure:

[Thu Oct 20 15:49:18 2011] [warn] [client xxx.xxx.xxx.xxx] (104)Connection reset by peer: mod_fcgid: error reading data from FastCGI server
[Thu Oct 20 15:49:18 2011] [error] [client xxx.xxx.xxx.xxx] Premature end of script headers: info.php
Status: Active

Comments

Just to note the results after adding "FcgidMaxRequestsPerProcess 500" to /etc/apache2/mods-enabled/fcgid.conf:

$ ab -n 500 [domain]/info.php # Basic 500 requests
Complete requests:      500
Failed requests:        0
Write errors:           0
$ ab -n 2000 [domain]/info.php # Try 2000 instead of just 500
...
Complete requests:      2000
Failed requests:        0
Write errors:           0
$ ab -q -n 10000 -v 3 [domain]/info.php | grep "HTTP/1.1" | sort | uniq -c # get the specific HTTP/1.1 status codes-- for 10000 requests
 10000 HTTP/1.1 200 OK

Seems like one simple solution would be to set PHP_FCGI_MAX_REQUESTS to unlimited (or some huge number) in the PHP fcgi wrapper scripts Virtualmin creates in ~/fcgi-bin (and then restart Apache).

This would ensure that it is mod_fcgid that does the restart, if one ever happens.
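
For reference, here is a minimal sketch of what such a wrapper script could look like (the php-cgi path and the 99999 value are illustrative assumptions, not necessarily what Virtualmin actually generates):

#!/bin/bash
# effectively disable PHP's own request limit so only mod_fcgid decides when to recycle
PHP_FCGI_MAX_REQUESTS=99999
export PHP_FCGI_MAX_REQUESTS
# hand the request off to the PHP CGI binary
exec /usr/bin/php5-cgi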

Does that sound reasonable?

Because it seems easier to change FcgidMaxRequestsPerProcess than PHP_FCGI_MAX_REQUESTS, it sounds reasonable.

Edit: Would we be able to regenerate the wrapper script for existing servers automatically, or would that need to be done manually?

Ok, I'll set it to effectively unlimited in the next Virtualmin release.

You'd be able to re-generate the wrapper scripts with a command like:

virtualmin modify-web --domain example.com --mode fcgid

or for all domains:

virtualmin modify-web --all-domains --mode fcgid

Assuming all your domains are already using fcgid mode.

AlReece45 in comment #1 shows what to change on Ubuntu. On RHEL/CentOS-based systems we have the following two files:

/etc/httpd/conf.d/fcgid.conf
/etc/httpd/conf.modules.d/10-fcgid.conf

and it is not clear which file to edit to add the "FcgidMaxRequestsPerProcess" value. Could the *-min gurus point out how to address this issue on Fedora-based systems? Are there any Virtualmin/Webmin settings in the UI to make this change? Thanks.

In CentOS 7, there are indeed two configuration directories.

The "/etc/httpd/conf.modules.d/" directory is primarily used for loading modules.

And the "/etc/httpd/conf.d/" directory is primarily used for configuring them.

My suggestion then would be to make that change in "/etc/httpd/conf.d/fcgid.conf".

That said, either should work. But we'd recommend following the CentOS conventions; it makes things easier to remember later when you're trying to figure out where a change was made :-)
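
To illustrate the split (typical contents only-- your files may differ): the conf.modules.d file just loads the module, and the directive belongs in the conf.d file:

# /etc/httpd/conf.modules.d/10-fcgid.conf -- loads the module
LoadModule fcgid_module modules/mod_fcgid.so

# /etc/httpd/conf.d/fcgid.conf -- configures it; add the limit here
FcgidMaxRequestsPerProcess 500

# then restart Apache
$ systemctl restart httpd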

ok for FcgidMaxRequestsPerProcess 500

but there's something else to consider, e.g. here on CentOS 7 & Apache 2.4.6 with

mpm event & mod_fcgid-2.3.9-4.el7.x86_64

you can't do a graceful restart of Apache; you always have to stop and then start, otherwise you can get many errors from processes that are never killed

I don't know if this is a bug in 2.3.9 that may already be fixed...

I will try setting GracefulShutdownTimeout (which by default is unlimited...)

maybe it will fix it...
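
If it helps anyone trying the same thing, here is a sketch of where that directive would go (the 30 second value is just an arbitrary example; the default of 0 means wait indefinitely):

# e.g. in the main server config, /etc/httpd/conf/httpd.conf
GracefulShutdownTimeout 30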

bye