Using Virtualmin with NFS

Hi,

I am thinking about switching from the single dedicated server I have now to a cluster of servers to create a high-availability environment in which the services that I run and the Web sites I host for clients are more stable, reliable, and available. To that end, I thought about moving the /home directory and all of its files and subdirectories onto an NFS setup that is hosted on my main dedicated server and shared with all of the other servers in the cluster.

My question is, how would I go about doing this? I have never used NFS before, and am not sure how to set it up so that Virtualmin user files and directories can be mounted and available from any of the servers in the cluster. Furthermore, each machine in the cluster, as well as the main dedicated server, will have Virtualmin installed; is there a way I can share other configurations and services, like e-mail, DNS, etc.? That way, for example, I could have multiple mail servers set up to take incoming mail, so that if one machine goes down mail can still be received and, if possible, delivered.

I already know about using LDAP and am trying to set that up now on the main dedicated server before branching everything off onto the cluster.

Any help with this would be greatly appreciated.

Thanks! -Logan

Status: 
Active

Comments

It should be possible to have /home shared by a central server, and exported to all your Virtualmin systems. You just need to make sure that the NFS server and clients have quotas set up properly, so they can be set and enforced over NFS.
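As a rough illustration (the hostnames, network range, and mount options here are placeholders, and the exact steps vary a little by distribution), the export and client mounts could look something like this:

    # On the central server, in /etc/exports (10.0.0.0/24 standing in for your internal network):
    /home   10.0.0.0/24(rw,sync,no_root_squash)

    # Re-export, and enable quotas on the filesystem that holds /home
    # (assuming it is mounted with the usrquota,grpquota options and quota files already exist):
    exportfs -ra
    quotaon -v /home

    # On each Virtualmin client, mount the share (normally via /etc/fstab so it persists):
    mount -t nfs central.example.com:/home /home

For quota reporting to work from the clients, the server also needs to be running the rpc.rquotad daemon, since quotas are actually stored and enforced on the NFS server side.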

Sharing other services is more complex due to the need to keep configurations in sync. We don't yet have a good solution here, sorry.

Jamie,

Thanks for your response. I am relatively familiar with Linux but very, very new to NFS. How do I set up /home on my central server so that I can share it with other servers, especially if /home already has things in it?

Also, something I don't understand: what would the real benefit of NFS be when sharing /home with other systems? I mean, sure, I'd now have multiple servers set up, each running Virtualmin, etc., but if they are all just connecting back to the central NFS server to update files in /home, what would the advantages/benefits be? Or would it be better to set up a distributed system?

My ultimate goal with this project is to have multiple servers set up and ready so that if one goes down, customer Web sites will not be affected and the entire network will not go down with it. Or, at the very least, essential services like e-mail and DNS would still be accessible on the other servers in the network; even if some client sites are stored on the downed server and others are stored on working servers, that would at least eliminate total downtime.

What are your thoughts?

Thanks!

Having redundant servers is actually really tricky - it is much more than just sharing /home via NFS. The reason it is tricky is that you have to somehow sync data that might change between all machines, be that email, home directory contents or data in MySQL.

Are you hosting just static websites, or are they dynamic in any way? (i.e. blogs, online stores, etc.)

I am also interested in LDAP and how it's implemented but only for email and ssh access, so that users can access their email or ssh from a single server. I am only focusing on nginx/wordpress hosting, and trying to follow: http://www.virtualmin.com/documentation/id,combining_virtualmin_and_ldap/.

I've been trying to follow every LDAP/NFS thread, so apologies for hijacking airshock's thread. But I'm curious how it all works together as well.

Jamie,

I am mainly hosting dynamic sites, mostly WordPress and Joomla, but also some other database-driven software. Right now I have one really powerful dedicated server, but want to branch out so that in theory there is no single point of failure. I've got a few cloud VPS servers running on Amazon AWS and DigitalOcean and was thinking of setting up some redundancy on those machines, BIND DNS and MySQL replication at the very least.
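For the DNS side, what I'm picturing on each cloud server is a standard BIND secondary zone, roughly like this (the zone name and address are just examples; the primary would also need matching allow-transfer/also-notify settings):

    // named.conf on a cloud server acting as a secondary, pulling from the primary at 203.0.113.10
    zone "example.com" {
        type slave;
        masters { 203.0.113.10; };
        file "slaves/example.com.db";
    };

I gather Webmin's BIND module can also add zones like this automatically on its cluster slave servers, which would save doing it by hand for every virtual server.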

My main thought for sharing /home is that the primary server, which is running on dedicated hardware (i.e., not a cloud VPS), has much more capacity than the cloud machines I have just set up, so I wanted it to be the primary place for users' files to be stored (it will, of course, be backed up off-site). I was then thinking this could be shared across the cloud servers, and the cloud servers could run instances of MySQL/Postfix/etc. but all access /home and other directories stored on the primary server.
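Concretely, each cloud server would then just mount /home from the primary, something like this fstab entry (names and options are placeholders, not a tested setup):

    # /etc/fstab on a cloud server, mounting /home from the primary over NFS:
    primary.example.com:/home   /home   nfs   rw,hard,noatime   0 0

with MySQL, Postfix, and so on running locally on each cloud machine, but reading and writing user data through that mount.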

Are the cloud servers located close to the primary server? You need pretty low network latency to share /home over NFS.

Jamie,

The cloud servers aren't on the same network as the main server, but both the main server and the cloud servers are in the US and sit on very fast networks, so latency shouldn't be a problem, in theory.

NFS is really only designed for use over the local LAN - it performs poorly between datacenters simply due to the round-trip time.

I am also trying to do this. My goal is to share the home directory and replicate the MySQL databases with a master-to-master setup; both tested OK. I use GlusterFS to replicate the home directory in real time. My problem is how to replicate the other settings, like plans, quotas, new accounts created by users, etc.

I wrote scripts to sync /etc/passwd, /etc/shadow, and /etc/group using rsync and cron jobs, so new user IDs created by users also work fine. I used the command-line backup and restore options, but when I restored the servers, the UIDs and GIDs were different on the two servers. That caused a big problem, because I share the home directory over GlusterFS, so I got permission-denied errors.

If Virtualmin could replicate this kind of configuration to the other server without changing anything except the main IP, we could accomplish these goals by using a dynamic DNS setup on another monitoring server. Please let me know if anybody has done this successfully.
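For reference, the account sync I mention is essentially just an rsync of the account databases pushed from the primary on a cron schedule (the hostname below is a placeholder, and blindly overwriting these files is only safe if all accounts are created on the primary):

    #!/bin/sh
    # Push the account databases to the secondary so UIDs/GIDs stay identical on both nodes.
    rsync -a /etc/passwd /etc/shadow /etc/group secondary.example.com:/etc/

    # crontab entry, e.g. every 5 minutes:
    # */5 * * * * /usr/local/sbin/sync-accounts.sh

The UID/GID mismatch seems to come from the restore itself: as far as I can tell, virtualmin restore-domain re-creates the Unix users on the target system, and unless the same IDs happen to be free there they get allocated differently, which is what breaks permissions on the GlusterFS-shared home directories.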

Regards,