
Which RAID setup & hard drives do you use, and why?


lazytown
03-29-2006, 09:06 AM
Which RAID setup & hard drives do you use, and why (or do you not use RAID at all)?

My large forum server currently uses a SCSI RAID 5 array with three 10,000 RPM drives. However, I am starting to regret that choice, as I understand RAID 5 write performance is not really any better than a standalone drive's. RAID 5 arrays are fast at reads and there is hardware failure protection, of course. I now wish I had gone with a RAID 0+1 or RAID 10 array, which improves both read and write performance and includes fault tolerance. Does anyone know the difference between the two (0+1 vs 10)?

I've decided to add a single large fourth SCSI drive outside the array for daily backups and possibly log files. Of course, I do regular external backups as well. Log file creation (over 2GB every few days) is one of the most intensive write operations and generates a significant constant load -- especially since RAID 5 is not as fast at writes. If I offload those less important files to a secondary drive, I'm hoping it will take a lot of the writes off the array. Plus, backups are the single biggest load generator on my server; they take several hours and slow down the forum more than anything else. I believe backing up from the array to another drive could reduce the load and greatly decrease the time it takes.

-vissa

alexi
03-29-2006, 11:28 AM
RAID 0+1 is a mirrored set of striped arrays: two RAID 0 stripes mirroring each other.
RAID 10 (or 1+0) is a striped set of mirrored disks: the disks are mirrored in pairs first, and those mirrors are then combined into a RAID 0 stripe set.
RAID 10 has more fault tolerance but is not as fast.
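
A quick way to see the fault-tolerance difference is to count which two-disk failures each layout survives. A minimal sketch in Python, assuming a hypothetical six-disk array laid out as described above:

```python
from itertools import combinations

DISKS = range(6)
STRIPE_SETS_01 = [{0, 1, 2}, {3, 4, 5}]    # RAID 0+1: two 3-disk stripes, mirrored
MIRROR_PAIRS_10 = [{0, 1}, {2, 3}, {4, 5}]  # RAID 10: three mirrored pairs, striped

def survives_01(dead):
    # RAID 0+1 keeps running while at least one whole stripe set is untouched
    return any(not (s & dead) for s in STRIPE_SETS_01)

def survives_10(dead):
    # RAID 10 keeps running while every mirrored pair still has a live disk
    return all(p - dead for p in MIRROR_PAIRS_10)

failures = [set(c) for c in combinations(DISKS, 2)]
print("RAID 0+1 survives", sum(map(survives_01, failures)), "of", len(failures))  # 6 of 15
print("RAID 10  survives", sum(map(survives_10, failures)), "of", len(failures))  # 12 of 15
```

With six disks, RAID 10 survives 12 of the 15 possible two-disk failures, RAID 0+1 only 6, which is the fault-tolerance edge being described.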

The Prohacker
03-29-2006, 01:12 PM
I inherited our current cluster of servers, which are all RAID 5. Slowly I am changing the database servers over to RAID 10 for the same reasons as you. I am also moving from U160 10k drives to newer U320 15k ones, and I am looking at Serial Attached SCSI instead of moving to U320. I/O wait has been a major issue with some of our DB servers, and the two servers we have moved from RAID 5 to RAID 10 have been doing much better. We also changed the I/O scheduler in our kernel to deadline.
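
For anyone wanting to try the scheduler change, a minimal sketch of the switch via sysfs (the device name is a placeholder; it needs root, assumes a Linux 2.6+ kernel with deadline available, and does not persist across reboots -- use the `elevator=` boot parameter for that):

```python
# Same effect as `echo deadline > /sys/block/sda/queue/scheduler`.
DEVICE = "sda"  # hypothetical device name; adjust for your system
PATH = f"/sys/block/{DEVICE}/queue/scheduler"

with open(PATH) as f:
    print("before:", f.read().strip())  # active scheduler shown in [brackets]

with open(PATH, "w") as f:
    f.write("deadline")                 # takes effect immediately

with open(PATH) as f:
    print("after:", f.read().strip())
```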

MrLister
03-29-2006, 03:29 PM
x2, RAID 10 is better. I've got mine set up as 4x150GB (U320 15k) in RAID 10. Works great.

silvrhand
06-21-2006, 03:15 AM
Ok..

Instead of doing a complex RAID 1 + striping of the RAID 1 set, add a 5-disk RAID 5 array of smaller 36GB 15k drives. The great thing about RAID 5 is that the more spindles you add to the array, the faster it'll get.

Going from 15k SCSI to 10k Serial ATA would not be prudent. Cheaper, yes, but not faster: SCSI drive firmware is tuned for server-type data patterns, while desktop disks are not.

RAID 10, while better than a 3-disk RAID 5 array, will not be better than a 5-disk array, let alone a 6-disk array. Smaller, faster (15k) drives will perform better than 10k Serial ATA. Make sure you have a good writeback cache on your controller, and don't forget to get the latest firmware/drivers for your array controller.

Also check that the stripe size is set correctly.
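
A minimal sketch of why the stripe (chunk) size matters, using RAID 0-style striping and hypothetical sizes: requests smaller than one chunk stay on a single spindle, while larger requests fan out across several, so the chunk size should be matched to your typical I/O size.

```python
# Map a logical byte offset to (disk, offset-on-disk) for a striped array.
def locate(logical_byte, n_disks, chunk_bytes):
    chunk_index = logical_byte // chunk_bytes
    disk = chunk_index % n_disks
    offset_on_disk = (chunk_index // n_disks) * chunk_bytes + logical_byte % chunk_bytes
    return disk, offset_on_disk

# Hypothetical 4-disk array with 64KB chunks: a 16KB database page stays on
# one spindle; a 256KB sequential read touches all four.
for lba in (0, 16 * 1024, 64 * 1024, 192 * 1024):
    disk, off = locate(lba, 4, 64 * 1024)
    print(f"byte {lba:>6} -> disk {disk}, offset {off}")
```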

vantage255
06-22-2006, 12:35 AM
I have had great luck with several 12-drive RAID 10 arrays: either two 6-drive RAID 0s mirroring each other, or four 3-drive RAID 0s mirrored in pairs, depending on the particular load and space needs.

I have had very poor performance from most RAID 5 arrays until you get past 6 drives. Five drives seems to just barely outperform a single drive, and 7 or 8 seems to be nice. But for the 12-drive cases I use, I get much better performance with RAID 10 than with 5.

silvrhand
06-22-2006, 02:53 AM
Something is wrong with your controller or testing methods if a 5-disk RAID 5 array is barely faster than a single drive. Are you doing synchronous writes on a poor FS or something?

alexi
06-22-2006, 05:37 AM
silvrhand, I would be curious to see where you got your data on RAID performance. Everything I have ever seen says RAID 5 does poorly on write performance no matter how many disks, while RAID 10 does better on both reads and writes.

Marco van Herwaarden
06-22-2006, 06:54 AM
RAID 5 will very likely perform worse than RAID 10 for write operations, and this even gets worse when adding more disks (the reason: for every write, the parity block of the affected stripe must be recalculated and written, and the more disks there are, the more data feeds into each parity calculation). Some of the performance hit can be reduced by proper configuration and a write-back cache.

RAID 10 will (depending on the hardware implementation of the RAID controller) be faster for writes simply because there is no parity to calculate.

What performs better greatly depends on your read/write ratio and how sequential the data being read is. For a board, writes are usually far fewer than reads, so write performance should not have such a big impact.
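
To make the parity cost concrete, a minimal sketch of the classic RAID 5 small-write sequence, with illustrative byte values:

```python
# Parity is the XOR of all data chunks in a stripe, so updating one chunk
# takes four disk I/Os -- read old data, read old parity, write new data,
# write new parity -- while a RAID 10 write is just two (one per mirror).

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

old_data   = bytes([0b1100])   # chunk being overwritten   (disk read #1)
old_parity = bytes([0b0110])   # current stripe parity     (disk read #2)
new_data   = bytes([0b1010])   # incoming write            (disk write #1)

# new_parity = old_parity XOR old_data XOR new_data        (disk write #2)
new_parity = xor(xor(old_parity, old_data), new_data)
print(f"new parity: {new_parity[0]:04b}")  # 0000
```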

vantage255
06-22-2006, 10:49 AM
I don't believe there is anything wrong with my testing methods. This experience has been gained over several years and many servers; it is definitely not a hardware problem.

The same issue goes for an EMC SAN or a StorageTek drive shelf: RAID 5 likes drives. The more you throw at it, the faster it will be, up to a point. As for 5-drive arrays, they are pretty much the break-even point, and this has been true for a long time. The way most companies' RAID 5 algorithms are written, you need 5 drives so that each write can go to 4 of them simultaneously; with fewer than that, you will always have at least one drive that has both data and parity being written to it at the same time, which basically halves write performance.
Some of this can be made up with cache on the controller, but that buffer gets eaten up fast in an intensive DB-type app.
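
As a rough sketch of that break-even arithmetic (the per-spindle IOPS figure and the 4x/2x small-write penalties below are textbook assumptions, not measurements from these boxes):

```python
# A random small write costs ~4 disk I/Os on RAID 5 (read data, read
# parity, write data, write parity) and ~2 on RAID 10 (one per mirror).
PER_DISK_IOPS = 150  # assumed figure for a 10k RPM spindle

def write_iops(n_disks, penalty):
    return n_disks * PER_DISK_IOPS / penalty

for n in (4, 6, 8, 12):
    print(f"{n:2d} disks: RAID 5 ~{write_iops(n, 4):4.0f}  "
          f"RAID 10 ~{write_iops(n, 2):4.0f} random write IOPS")
# RAID 5 needs roughly twice the spindles to match RAID 10 on random
# writes, which lines up with the break-even points described above.
```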

dbembibre
06-22-2006, 11:21 AM
I have two 73GB SCSI drives in RAID 1.

BoardTracker
06-22-2006, 02:00 PM
Depends what you will use it for, but generally RAID 10 is a good choice: safer and faster than RAID 5, although it costs a bit more.

We used to use RAID 5 on a fileserver with a fairly big array (nearly 5TB across 24 disks), but performance sucked, and when so many disks are involved things can and do go wrong. We switched to RAID 10 and it's all so much better: server loads are down, traffic is up and data is safer. Everyone is happy. ;)

silvrhand
06-22-2006, 02:38 PM
New cards on the market are removing this problem; see the Netcell SPU for an example.

"Revolution storage processing cards feature a revolutionary 100% hardware-based 64-bit RAID engine that offers a mainstream RAID solution with the simultaneous benefits of both RAID 0-class performance and RAID 5-class data protection."

http://tweakers.net/reviews/557/25

There is a great review there; look at the MySQL results. It's a mixed environment too, with lots of reads and writes, whereas I'm mostly at 25-30% writes for our forums.

This is a very thorough test, and the Areca ARC-1160 with 1GB cache shows a huge lead over most of the other cards in the test.

RAID 10 will beat RAID 5 in HEAVY write tests, but in our scenario RAID 5 and RAID 10 arrays come out very close to the same, at least for me in my <30% write workload. Your mileage MAY vary; I generalized a bit too much in my earlier posts.
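
A back-of-envelope sketch of how the read/write mix shifts the comparison (same kind of assumed per-spindle figure and write penalties as the penalty arithmetic earlier in the thread; an 8-disk array is a hypothetical):

```python
# Reads cost one disk I/O on either layout; small writes cost ~4 on
# RAID 5 and ~2 on RAID 10. Figures are illustrative only.
PER_DISK_IOPS, N_DISKS = 150, 8

def blended_iops(write_frac, write_penalty):
    cost_per_host_io = (1 - write_frac) + write_frac * write_penalty
    return N_DISKS * PER_DISK_IOPS / cost_per_host_io

for wf in (0.10, 0.30, 0.70):
    print(f"{wf:.0%} writes: RAID 5 ~{blended_iops(wf, 4):4.0f}  "
          f"RAID 10 ~{blended_iops(wf, 2):4.0f} IOPS")
# The RAID 5 deficit shrinks as the write fraction drops, and a good
# write-back cache hides much of what remains.
```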

FYI also:

The new ARC-1220/1230/1260 cards use the Intel IOP333 I/O processor, so XOR calculations should be even better, further improving database performance. The review above was of the 1160, which uses an older I/O processor.

vantage255
06-22-2006, 09:07 PM
With 8 or more drives, they are close. The big speed issue most people here will see isn't a processor-based one, though. Most people dealing with hosting on a non-enterprise level will have problems because their RAID arrays are only 3 or 4 drives, which forces two writes onto one drive for every write to the array. This hurts write performance badly.

With enough drives, RAID 5 is certainly a good choice. And a lot of buffer helps: some EMC SANs I work with have 32 or 64GB of cache, which makes for a huge performance boost.

silvrhand
06-22-2006, 09:37 PM
EMC/NetApps are great; wish I could afford to run MySQL on that, hehe.

BoardTracker
06-22-2006, 10:47 PM
I've had a NetApp before, and it's only great until it comes time to buy another shelf of disks and they slap you with the $25k bill.. :ermm:

These days I'd go for a SATABlade or some other Nexsan server.. the SATABeast is truly a beast.

vantage255
06-23-2006, 03:57 AM
I am buying up Sun D1000 and A1000 drive arrays at the moment. Cheap on eBay, and they are solid hardware. Good for hosting.

EMC is nice... but not at the price point you want for hosting.

DevilYellow
07-03-2006, 12:29 AM
My webserver runs dual 10k SATA drives (RAID 1) for backup purposes.

I am going to start building a dedicated DB server. The DB is over 1GB as it sits, and I don't see it going anywhere but up.

Right now my vague plan is dual dual-core Opterons, a nice Tyan mobo, 4GB RAM, and some sort of hard drive solution.

What would be best: U320, SAS, or SATA II?

And for a DB server, would RAID 10 be the best for performance and redundancy?