[MAJOR NGINX-PHP BUG] Suexec problems with multiple domains, affects PHP

Create a server example.com

Create a sub-server foo.example.com

Create a sub-server foo2.example.com

Create a sub-server foo3.example.com

You now have four servers, each running its own PHP process. Apart from this being a waste of resources (the sub-servers could all use example.com's FastCGI daemon, since they all run as the same unix user anyway), there's also a bug.

Session variables cannot be written to disk by any of the sub-server daemons. Only the top-level daemon can do it.

My solution to all of this resource bloat is to disable the bootup actions for all of the extra daemons and modify the Nginx .conf files to point each sub-domain at the main domain's FastCGI instance. This is the best solution unless the subdomains are high-load and would benefit from running their own FastCGI processes.
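That rewiring is straightforward in the Nginx config. A minimal sketch of the per-sub-server change (the fastcgi_pass address 127.0.0.1:9000 is a placeholder for wherever the parent domain's daemon actually listens, not a value Virtualmin generates):

```nginx
# In the sub-server's server { } block: send PHP requests to the
# parent domain's existing FastCGI daemon instead of a dedicated one.
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;  # placeholder: parent's FastCGI address
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
```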

So, recap:

  • Bug in suexec permissions for sub-server FastCGI processes prevents storing sessions to disk
  • Resource bloat from starting 5 new PHP processes per subdomain, even though they could use the parent domain's processes instead. (Maybe add a "Re-use parent's FastCGI process" checkbox on sub-server creation.)
Status: 
Closed (fixed)

Comments

What error do you get from PHP when it tries to save session variables?

One reason for having separate per-domain processes is that they can have different php.ini files.

I am aware of the different php.ini files.

I don't get any error at all, but the session variables aren't saved.

A simple test is this:

<?php
session_start();
var_dump($_SESSION['foo']); // NULL on first visit; 'bar' on later visits if sessions persist
$_SESSION['foo'] = 'bar';
var_dump($_SESSION['foo']); // always 'bar'
?>

For the main domain, reloading the page shows 'bar' for both dumps. For the subdomains the first dump stays NULL on every reload, because the session data is never written to disk.

I verified that permissions in the ./tmp folder are correct.

And it's not a user-permissions issue, because every process is running as the same user.

You might want to check the session.save_path variable in the sub-domains' php.ini files. The cause may be that it is pointing to a non-existent or unwritable directory.
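A quick way to rule that out from the shell. This is a sketch: `check_save_path` is a hypothetical helper name, and the ini path you pass it is whatever per-domain php.ini the sub-server uses:

```shell
# Extract session.save_path from a php.ini and check that the directory
# exists and is writable by the current user.
check_save_path() {
    ini="$1"
    # Take the last (i.e. effective) session.save_path assignment
    path=$(sed -n 's/^session\.save_path *= *//p' "$ini" | tail -1)
    if [ -d "$path" ] && [ -w "$path" ]; then
        echo "ok: $path is writable"
    else
        echo "problem: $path is missing or not writable"
    fi
}
```

Run it as the domain's unix user, since writability depends on who is asking.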

I can reproduce this. I have one Nginx domain and one subdomain, with two separate php-fastcgi processes.

The subdomain doesn't save sessions.

The subdomain's php.ini is in /home/[user]/domains/[subdomain]/etc/php5/php.ini

It says:

session.save_path = /home/[user]/domains/[subdomain]/tmp

The permissions of the tmp folder are:

drwxr-x--- 2 [user] [user] 4096 Feb 7 15:48 tmp

The fastcgi daemon runs on port 9001, and the nginx config points there too, so I dunno what's wrong...

When I try to write a session in the subdomain, nothing is placed in the tmp folder. When I try it in the main domain, a session is stored properly... How can this be? The whole path from the home folder on down is owned by the same user... bah...

I just re-tested this, but on my systems session saving worked fine in an Nginx sub-domain.

Nothing worse than a bug that only some people have :(

To debug this further, I'd need to log in to the system of a user who is affected by it.

Hey, I found the bug, kinda...

It had nothing to do with sub-server vs. main server.

It's something that happens when you create new servers, any new servers.

Those new servers cannot save sessions until you reboot the machine.

But reboot ("shutdown -r now") and they will work.

I also tried "service php-fcgi-example-com restart" and "service nginx restart", but that gave a "502 Bad Gateway", so I had to restart the whole machine to fix this; you can't just restart PHP or nginx...

Hope this helps.

That is even stranger - I can't reproduce this either.

Is your system perhaps running the nscd daemon for caching users and groups?

O_o

No... I am not running nscd...

And by the way, I tried reinstalling the server and still get the same result.

CentOS 6.0 64-bit from Cloudmin

Latest Virtualmin

Latest nginx (1.2.7) from nginx.org

My offer to log in to your system and take a look still stands.

Okay, I'll try to set up remote login info on a test machine.

I didn't have time to set up a test machine, because I have been way too busy setting up a bunch of new servers for people.

However, I just had a brilliant breakthrough idea, if I may say so myself.

I created a new domain, and then immediately created a single PHP file with a phpinfo() call.

Looking at the output, I saw that "Loaded Configuration File" says "/etc/php.ini" and that session.save_path is "/var/lib/php/session".

No wonder new sites cannot write to their session storage: they are using the DEFAULT php config file!

Looking at "Running Processes" for the domain, I see that PHP is running as:

"/usr/bin/perl /usr/bin/php-loop.pl /usr/bin/php-cgi -b 127.0.0.1:9010"

I then restarted the entire server.

PHP is still running with the same command line under Running Processes, BUT...

phpinfo() now shows "/home/[user]/etc/php5/php.ini" and a session save path of "/home/[user]/tmp".

So when the Nginx website module creates the php-cgi process, it seems to be launching it WITHOUT the argument to load the per-site config file.

Seems this has been the bug all along.
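One detail worth noting: php-cgi picks its php.ini either from a -c flag on the command line or from the PHPRC environment variable, and PHPRC never shows up in a process listing - which would explain why the command line can look identical before and after the reboot while the loaded config changes. A sketch of the two launch styles, using the path and port from this comment (not the exact fix that went into the plugin):

```shell
# Launch seen in "Running Processes" -- no -c flag, so php-cgi falls
# back to the default /etc/php.ini unless PHPRC is set in its environment:
#   /usr/bin/perl /usr/bin/php-loop.pl /usr/bin/php-cgi -b 127.0.0.1:9010
#
# Two ways to make it load the per-domain config:
#   /usr/bin/php-cgi -c /home/[user]/etc/php5/php.ini -b 127.0.0.1:9010
#   PHPRC=/home/[user]/etc/php5 /usr/bin/php-cgi -b 127.0.0.1:9010
```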

I would also like to VERY STRONGLY suggest that you change session.save_path in the per-site php.ini files from "/home/[user]/tmp" to "/home/[user]/etc/php-sessions", so that sessions are safe from the automatic tmp-folder wipeout that happens every few days, and so that they won't collide with other files in the temp folder.

Ok, I finally found the bug that causes this - it was hard to track down, as it only happens in the short period between when the initial PHP server processes are started, and when they get restarted. I will fix it in the next release of the Nginx plugin though.

Regarding the session.save_path change ... what process deletes files from user tmp directories? Virtualmin doesn't set up anything like that by default.

Oh wow, that's fantastic news. It's going to feel incredibly good not having to "shutdown -r now" every time I add a new domain. :) I had tried just restarting the php-fcgi daemon and nginx, and that never worked, always gave gateway errors, so the only thing I could do was reboot the entire machine every time.

Hmm, about the tmp path: I was sure I saw some cron job or something set to clear out the tmp folder every week, but now I cannot find it on any of my systems. If someone threatened to eat all my candy unless I gave them the name of the task that clears the tmp folder, I would have to solemnly shed tears as my candy was taken away, never to be seen again...

There may be a cron job that clears /tmp, but I've never seen one that clears ~/tmp.

Automatically closed -- issue fixed for 2 weeks with no activity.