This website is deprecated, and remains online only for historic access to old issues and docs for historic versions of Virtualmin. It has been unmaintained for several years, and should not be relied on for up-to-date information. Please visit www.virtualmin.com instead.
Hi,
Has anyone tried to install CentOS on a SATA RAID? On a new server I set up the RAID as bootable, then installed CentOS. On reboot, when reading the MBR it just hangs at the prompt "GRUB_".
I just finished an install on one last night. I'm using an Asus board with onboard RAID.
I'm a newbie setting up an onboard bootable RAID device with CentOS.
It's an Adaptec onboard controller with three drives. When I set it up as bootable (RAID 0 is the only option it gives) and then run the CentOS install CD, I get lost in what to do with GRUB and keep getting hung when GRUB starts. I'd feel more comfortable with the onboard controller, though some claim OS-level RAID is becoming faster and more reliable than onboard RAID.
I want to set up redundancy, but the way CentOS describes the setup you still put /boot all by itself on RAID 1, meaning that if that physical drive dies you're SOL. Or do I have to depend on a floppy boot disk? I want hot-swap redundancy as my first line of defense, as opposed to a remote backup.
Any ideas how to configure 3 drives safely?
Hey Dan,
I don't know much about the specific problems you're running into...but I have a suspicion you'd be better off going with the Linux software RAID, rather than using the RAID on your motherboard.
Unless the onboard is really hardware RAID (and unless it's a pretty heavy-duty server motherboard--in the $300+ price range, it's software RAID), I'd avoid it. You gain nothing in stability or speed (you probably lose something, actually, as those motherboard guys basically suck at software), and you lose the ability to move your disks into other systems and expect them to work (this can be a life saver if your motherboard dies and you happen to have another system handy that you can drop the disks into).
The Linux software RAID implementation is simply excellent, and far more flexible, fast, and reliable than the software RAID that is included on any motherboard I've ever seen. Since yours is limited to RAID 0, I'm quite confident that it's just software RAID implemented in the BIOS and OS drivers, and you don't want anything to do with it.
That said, if I'm guessing wrong, and it is a hardware Adaptec RAID controller with a real on-board CPU handling the RAID and on-board memory for buffering, then performance will be better, reliability will be great, and you should certainly work on getting it spinning. Again, I strongly doubt this is the case. Adaptec makes crappy software RAID, too, though it tends to be better than the software RAID from many other vendors. ;-)
--
Check out the forum guidelines!
I have my Slack box on a RAID 5, but that's a card, and come to think of it, it's not Adaptec either. You're right about the onboard; I've heard it described as having no more usefulness than a winmodem.
The question is, on CentOS, how do I set up the three drives as RAID 5 mounted as root and still be able to boot? The installer keeps kicking me out, claiming /boot needs to be RAID 1 only. How big a partition should /boot be, and does it need to be its own RAID device?
I know it's a CentOS question, but on their forums it often takes a week to get an answer back.
Thanks,
Dan
Hey Dan,
You'd put /boot on a RAID 1 mirror (/boot can't be on anything but a mirror or a non-RAID disk...grub isn't RAID-capable, so it has to either have a full copy, as in a mirror, or a non-RAID partition). /boot only needs to be ~128MB or so, and all the rest of your space can be devoted to a RAID 5 for /.
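For reference, that layout can be sketched with mdadm from a rescue shell. The device names below are assumptions (three disks, each with a small type-fd first partition and a large type-fd second partition); the CentOS installer's partitioning tool can build the same thing for you interactively.

```shell
# /boot: RAID 1 mirror across the three small partitions,
# so any single disk can boot the system
mdadm --create /dev/md0 --level=1 --raid-devices=3 \
    /dev/sda1 /dev/sdb1 /dev/sdc1

# /: RAID 5 across the three large partitions
mdadm --create /dev/md1 --level=5 --raid-devices=3 \
    /dev/sda2 /dev/sdb2 /dev/sdc2

# Filesystems (ext3 was the CentOS 4 default)
mkfs.ext3 /dev/md0   # becomes /boot
mkfs.ext3 /dev/md1   # becomes /
```

These commands obviously require real block devices and root, so treat them as a sketch of the layout rather than something to paste in blindly.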
I really like having a mirrored /boot, because then I <i>know</i> I'll never have a boot failure...if I do, I just change the boot order in BIOS and boot from another disk.
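One caveat, assuming GRUB legacy as shipped with CentOS 4: the installer typically writes the boot loader only to the first disk's MBR, so for the change-the-boot-order trick to work you have to install GRUB on the other mirror members yourself. Something like the following from the grub shell (disk and partition numbers are assumptions; adjust for your layout):

```shell
# Install GRUB on the MBR of the second mirror member (/dev/sdb),
# mapping it as (hd0) so it can boot standalone if disk 1 dies
grub --batch <<EOF
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
quit
EOF
```

Repeat for each remaining member of the /boot mirror.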
Oh, yeah, and the CentOS installation is fully capable of handling this kind of layout for you--it understands that a mirror is bootable.
It was a bit more work (manually partitioning) but it worked out pretty slick, you're right as always ;-)
I'm still confused with the multiple GRUBs, though. I ended up with four of them, only one of which would boot.
Well, at least we're finally heading into the 20th century here. ;-)
Dan
Hey Dan,
<i>I'm still confused with the multiple GRUBs, though. I ended up with four of them, only one of which would boot.</i>
It's just one grub, but multiple kernel definitions, I suspect...which is normal on a system that has been updated through a few kernel revisions. By default, on an SMP CentOS system, you'll have two kernel options (SMP and single-CPU builds) immediately after install. If you then update with up2date or yum, you'll get a couple of additional kernels, with the newest becoming the new default. Kernels make use of RPM's ability to install multiple versions of the same package--so every time there's a new kernel, you'll also get a new grub option when you boot.
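You can see this for yourself on a stock CentOS 4 box (both commands assume you're on such a system):

```shell
# Every installed kernel package gets its own boot menu entry;
# RPM keeps old versions installed alongside the new one
rpm -q kernel kernel-smp

# The matching GRUB menu entries live in grub.conf
grep '^title' /boot/grub/grub.conf
```

The number of `title` lines should match the number of installed kernel packages.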
However, all of those options should generally boot, unless something is going wrong with the initrd (CentOS and Fedora both had a bug about a year ago, where all kernel upgrades resulted in a non-bootable system, but I believe that has long since been fixed in both...so as long as you're using the latest release CD of CentOS your initrd should be sane for all kernels).
Anyway, I have no other guesses about why only one of your kernels is bootable...we'd need to see the boot errors.
Ya, that totally confused me. It would explain the problem when I was using the 4.2 release that I had from last year and VMPro yummed an update. But this is a recent download of 4.4 Final (1-25-2007)--though the update might have pulled down something more recent.
I guess one question is: what are the SMP and EL versions? I had one of each for the CentOS 4.4 server_cd GRUB and the other from the GRUB the CentOS 4.4 CD created.
The error that came up was failing while trying to access an ATI device for which there are no bootable devices outside of the CDROM.
Hey Dan,
<i>I guess one question is: what are the SMP and EL versions? I had one of each for the CentOS 4.4 server_cd GRUB and the other from the GRUB the CentOS 4.4 CD created.</i>
SMP is Symmetric Multi-Processing (for a system with more than one CPU, a single dual-core CPU, or an Intel CPU with HyperThreading). EL just stands for Enterprise Linux--it's the plain single-processor kernel build.
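If you want to check which build you're actually running, a couple of generic commands will tell you (nothing CentOS-specific here):

```shell
# Shows the running kernel version; on CentOS 4 SMP builds end in "smp"
uname -r

# Counts the logical CPUs the kernel sees (>1 suggests you want the SMP kernel)
grep -c ^processor /proc/cpuinfo
```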
<i>The error that came up was failing while trying to access an ATI device for which there are no bootable devices outside of the CDROM.</i>
No clue what to do with that. ;-)