
Why Not Motherboard RAID?


DanJeffrey


In your white box build and several places in the forums, people talk about the need for RAID cards instead of using the RAID that comes with the motherboard.

I assume there are good reasons not to use on-board RAID — otherwise 3ware might not be in business. What are those reasons?

Is expansion card RAID faster than on-board RAID?

Is on-board RAID less reliable?

Do you *ever* use the on-board RAID included with the mobo?

BTW -- Hak5ers: thanks for the great show.

-- Dan


The onboard controllers are often partially hardware, partially software RAID.

Configuring the RAID in the BIOS is really more like configuring the RAID controller's driver outside of the operating system. In that sense it's a software RAID that the driver hides from the operating system. Controllers like this cause interoperability problems (mainly drive support). With such systems it's entirely possible to access each disk individually; it just requires the correct driver in most cases.

With hardware RAID you might see a performance boost: the OS doesn't have to spend any additional CPU time thinking about RAID management, and disk access times stay consistent even when the server is doing lots of CPU-heavy work.

With software RAID you get increased flexibility, being able to add and remove disks without breaking the array, and so on.

With these half-software/half-hardware RAIDs you lose both advantages. They're usually not flexible in terms of expandability, and the controller's driver is probably doing a lot of the thinking about the RAID even though the OS doesn't know the RAID is there. That probably also means monitoring/management tools are thinner on the ground than if you went with an exclusively software RAID or exclusively hardware RAID.
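For contrast, here's a minimal sketch of how visible a pure software RAID is to the OS on Linux, where the md driver publishes array state straight into /proc/mdstat (this assumes a Linux box using md; a half-hardware controller typically hides this behind its vendor driver and tools):

```python
# Minimal sketch: read Linux software-RAID (md) status from /proc/mdstat.
# Assumes a Linux system with md arrays; fakeRAID usually can't be inspected this way.

def read_mdstat(path="/proc/mdstat"):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return None

status = read_mdstat()
if status is None:
    print("No /proc/mdstat -- md software RAID not in use (or not Linux).")
else:
    for line in status.splitlines():
        # Array lines look like: "md0 : active raid5 sdd1[2] sdc1[1] sdb1[0]"
        if line.startswith("md"):
            print(line)
        # The following status line shows health, e.g. "[3/2] [UU_]" for a degraded array.
        elif "blocks" in line and "[" in line:
            print("   ", line.strip())
```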


I wonder about PCIe bus considerations when using an add-on card. Wouldn't there be a difference in bandwidth between the chipset's interface and the PCIe interface of an add-on card? I would expect to see better performance with chipset RAID.


http://www.tomshardware.com/reviews/pci-ex...sis,1572-2.html

SATA 3Gb/s can transfer 384MB/s (theoretical maximum). Most hard disks don't even come close to 100MB/s, so let's use that as a generous bandwidth cap for each hard drive.

With a PCI Express x1 card you might see a performance drop running 4 drives compared to 2 or 3, and you'll probably see a performance loss with more than 4 drives.

A PCI Express x16 slot can support 10(.6 recurring) SATA ports running at full speed. Given that drives are comparatively slow, you could probably run 30 drives on a PCI Express x16 slot without any noticeable performance loss. The catch is that the controller card would probably need an Atom processor (or something like it) to manage all that data.
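As a rough sketch of that arithmetic (the ~250MB/s-per-lane figure for PCIe 1.x and the 100MB/s-per-drive cap are assumptions, ballpark only):

```python
# Back-of-the-envelope: how many ~100MB/s hard drives can a PCIe 1.x slot feed?
# Both figures below are assumed round numbers, not measurements.
PCIE1_LANE_MBPS = 250   # approx. usable bandwidth per PCIe 1.x lane, one direction
DRIVE_MBPS = 100        # generous sequential rate for a current hard disk

for lanes in (1, 4, 8, 16):
    slot_mbps = lanes * PCIE1_LANE_MBPS
    print(f"x{lanes:<2}: ~{slot_mbps} MB/s -> about {slot_mbps // DRIVE_MBPS} drives at full speed")
```

That puts an x1 slot at roughly 2 drives before it becomes the bottleneck and an x16 slot at around 40, which is in the same ballpark as the figures above.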


A couple of comments based on my understanding, which may or may not be correct: most motherboards don't have x16 slots available for add-on cards other than video. Some have x8 or x4 slots, but the majority only provide x1 slots. Current high-end solid state drives are seeing burst read rates of 250MB/s, and a RAID set with just two of those drives would be heavily bottlenecked by an x1 slot. The SATA 6Gb standard has been released and we'll probably see it implemented soon. As costs go down and speeds go up, SSDs will become more popular, and they'll be able to saturate 3Gb SATA and take full advantage of 6Gb SATA. In that case it would take at least an x8 slot to support even a few of those drives in a RAID set, unless, of course, there's a standard in the works to improve the speed of the PCIe bus. In any case, I would want to use at least an x8 add-on card, which requires a motherboard that can accept it. That limits choices quite a bit, a lot more than chipset RAID, which is pretty common these days.
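To put rough numbers on that sizing argument, here's a small sketch; the per-lane figures (~250MB/s for PCIe 1.x, ~500MB/s for PCIe 2.0) and the 500MB/s figure for an SSD saturating 6Gb SATA are assumptions:

```python
import math

# Assumed per-lane bandwidth (MB/s, one direction) -- ballpark, not spec-exact.
LANE_MBPS = {"PCIe 1.x": 250, "PCIe 2.0": 500}

def min_slot_width(num_drives, drive_mbps, lane_mbps):
    """Smallest common slot width (x1/x2/x4/x8/x16) that won't bottleneck the drives."""
    needed_lanes = math.ceil(num_drives * drive_mbps / lane_mbps)
    for width in (1, 2, 4, 8, 16):
        if width >= needed_lanes:
            return width
    return None  # even x16 would be a bottleneck

# Four SSDs: today's ~250MB/s burst rate vs. a hypothetical 6Gb-SATA-saturating 500MB/s drive.
for drive_mbps in (250, 500):
    for gen, lane_mbps in LANE_MBPS.items():
        width = min_slot_width(4, drive_mbps, lane_mbps)
        print(f"4 drives @ {drive_mbps} MB/s on {gen}: x{width}")
```

Under those assumptions, four of today's SSDs already want an x4 slot on PCIe 1.x, and four 6Gb-saturating drives want x8, which is why slot availability starts to matter.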


For me, motherboard RAID has had very few problems other than being a little unintuitive to set up,

and it is hard to notice any performance impact without doing extensive benchmarking.

But one problem I have noticed with PCIe cards that handle this is that some of the cheaper ones tend to fail, and that causes a lot of problems with the data on the drives.


There are two different benchmarks really.

"Does disk access suffer from heavy CPU usage?". I would suspect that a software and half software/hardware RAID would suffer (more) noticeably than compared to a hardware RAID as the OS doesn't have to processes the individual disk accesses.

The other test, "Which is faster, motherboard RAID or hardware RAID?", is almost superfluous if the first question is answered with "Yes, CPU usage does increase disk access times more on software RAID than on hardware RAID." That would mean you always want a hardware solution whenever possible. The exception is a device used exclusively for storage, since it doesn't use as much CPU as, say, a machine running ESXi.
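If you want to run that first test yourself, here's a crude sketch: it times a sequential read of a large file on the array with the CPU idle and again with every core loaded. The TEST_FILE path is hypothetical, and you'd want a file bigger than RAM (or to drop caches between runs) so the second read isn't just served from the page cache:

```python
import multiprocessing
import os
import time

TEST_FILE = "/mnt/array/testfile"   # hypothetical path to a large file on the RAID volume
CHUNK = 1024 * 1024                 # read in 1MB chunks
CPU_WORKERS = os.cpu_count() or 4

def burn_cpu(stop):
    # Keep one core busy with pointless arithmetic until told to stop.
    x = 0
    while not stop.is_set():
        x = (x * 31 + 7) % 1000003

def timed_read_mbps():
    start = time.time()
    total = 0
    with open(TEST_FILE, "rb") as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            total += len(chunk)
    return total / (1024 * 1024) / (time.time() - start)

if __name__ == "__main__":
    print("Idle CPU:   %.1f MB/s" % timed_read_mbps())

    stop = multiprocessing.Event()
    workers = [multiprocessing.Process(target=burn_cpu, args=(stop,))
               for _ in range(CPU_WORKERS)]
    for w in workers:
        w.start()
    try:
        print("Loaded CPU: %.1f MB/s" % timed_read_mbps())
    finally:
        stop.set()
        for w in workers:
            w.join()
```

A big drop between the two numbers on a given controller suggests the RAID work is competing with the rest of the system for CPU time.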

