The Archive of Official vBulletin Modifications Site. It is not a VB3 engine, just a parsed copy!
Which RAID setup & hard drives do you use, and why?
Developer Last Online: Nov 2023
Which RAID setup & hard drives do you use, and why (or do you not use RAID at all)?
My large forum server currently uses a SCSI RAID 5 array with three 10,000 RPM drives. However, I am starting to regret that choice, as I understand RAID 5 write performance is not really any better than a standalone drive's. RAID 5 arrays are fast at reads and provide hardware failure protection, of course. I now wish I had gone with a RAID 0+1 or RAID 10 array, which improves both read and write performance and includes fault tolerance. Does anyone know the difference between the two (0+1 vs. 10)?

I've decided to add a single large fourth SCSI drive outside the array for daily backups and possibly log files. Of course, I do regular external backups as well. Log file creation (over 2 GB every few days) is one of the most intensive write operations and generates a significant constant load -- especially since RAID 5 is not as fast at writes. If I offload those less important files to a secondary drive, I'm hoping it will take a lot of the writes off the array. Backups are also the single biggest load generator on my server; they take several hours and slow down the forum more than anything else. I believe backing up from the array to another drive could reduce the load and significantly decrease the time it takes.

-vissa
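The write-performance gap described above can be sketched with the textbook IOPS formulas: a small RAID 5 write costs roughly four back-end I/Os (read old data, read old parity, write both back), while a mirrored write costs two. The per-disk IOPS figure and disk counts below are illustrative assumptions, not measurements from this server:

```python
# Back-of-the-envelope random-IOPS comparison for RAID 5 vs RAID 10,
# using the standard write-penalty factors (RAID 5: 4, RAID 10: 2).

def effective_iops(disks, disk_iops, read_frac, write_penalty):
    """Host-visible IOPS for a given read/write mix."""
    raw = disks * disk_iops
    # Each host write costs `write_penalty` back-end I/Os; reads cost 1.
    return raw / (read_frac + (1 - read_frac) * write_penalty)

disk_iops = 150  # rough figure for a 10k RPM drive (assumption)
for name, disks, penalty in [("RAID 5 (3 disks)", 3, 4),
                             ("RAID 10 (4 disks)", 4, 2)]:
    iops = effective_iops(disks, disk_iops, read_frac=0.5, write_penalty=penalty)
    print(f"{name}: ~{iops:.0f} IOPS at 50% writes")
```

At a 50/50 mix the three-disk RAID 5 array delivers well under half the host-visible IOPS of a four-disk RAID 10, which matches the regret expressed in the post.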
Comments

#2
RAID 0+1 is a mirrored set of striped arrays: two RAID 0s mirroring each other.
RAID 10 (or 1+0) is a striped set of mirrored disks: the drives are paired into mirrors, and those mirrored sets together form a RAID 0 stripe set. RAID 10 has more fault tolerance but is not as fast.
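The fault-tolerance difference can be checked by brute force on a hypothetical four-disk setup (the disk numbering and grouping below are assumptions for illustration): RAID 0+1 mirrors two whole stripe sets, so it only survives a two-disk failure that stays inside one set, while RAID 1+0 survives any two-disk failure that does not take out both halves of one mirror pair.

```python
from itertools import combinations

# Four disks. RAID 0+1: stripe sets {0,1} and {2,3}, mirrored.
# RAID 1+0: mirror pairs {0,1} and {2,3}, striped.

def survives_01(failed):
    # Need at least one striped set fully intact.
    return not failed & {0, 1} or not failed & {2, 3}

def survives_10(failed):
    # Need at least one live disk in each mirror pair.
    return len(failed & {0, 1}) < 2 and len(failed & {2, 3}) < 2

two_disk_failures = list(combinations(range(4), 2))
print(sum(survives_01(set(f)) for f in two_disk_failures))  # 2 of 6 survived
print(sum(survives_10(set(f)) for f in two_disk_failures))  # 4 of 6 survived
```

RAID 1+0 survives four of the six possible two-disk failures versus two for RAID 0+1, which backs up the "more fault tolerance" claim above.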
#3
I inherited our current cluster of servers, which are all RAID 5. I am slowly changing the database servers over to RAID 10 for the same reasons as you. I am also moving from U160 10k drives to newer U320 15k ones, and I am looking at Serial Attached SCSI instead of moving to U320. IOwait has been a major issue with some of our DB servers, and the two servers we have moved from RAID 5 to RAID 10 have been doing much better. We also changed the I/O scheduler in our kernel to deadline.
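On Linux of that era, the active I/O scheduler shows up bracketed in `/sys/block/<dev>/queue/scheduler`, and switching to deadline is typically done by echoing the name into that file. A small sketch of reading that sysfs format (the sample string is an assumption standing in for the real file contents):

```python
# The sysfs scheduler file lists all schedulers, with the active one
# in brackets, e.g. "noop anticipatory [deadline] cfq".

def parse_scheduler(line):
    """Return (active, available) from the sysfs scheduler format."""
    names = line.split()
    active = next(n.strip("[]") for n in names if n.startswith("["))
    return active, [n.strip("[]") for n in names]

active, available = parse_scheduler("noop anticipatory [deadline] cfq")
print(active)  # deadline
```

In practice you would read the real file and, as root, write the desired name back to it; the deadline scheduler bounds request latency, which is why it often helps IOwait-bound database servers.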
#4
#5
OK...

Instead of doing a complex RAID 1 + striping of the RAID 1 set, add a five-disk RAID 5 array of smaller 36 GB 15k drives. The great thing about RAID 5 is that the more spindles you add to the array, the faster it gets. Going from 15k SCSI to 10k Serial ATA would not be prudent: cheaper, yes, but not faster. SCSI drive firmware is tuned for server-type data patterns, while desktop disks are not. RAID 10, while better than a three-disk RAID 5 array, will not be better than a five-disk array, or even a six-disk array. Smaller, faster (15k) drives will perform better than 10k Serial ATA. Make sure you have a good writeback cache on your controller, and don't forget to get the latest firmware/drivers for your array controller. Also check that the stripe size is set correctly.
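Stripe (chunk) size matters because it decides how logical addresses map onto spindles, and therefore how many disks a given request touches. A toy mapping for a rotating-parity RAID 5 (parity placement varies by controller and implementation; this particular rotation is an assumption for illustration):

```python
def raid5_location(chunk, ndisks):
    """Map a logical chunk number to (stripe, disk) in a simple
    rotating-parity RAID 5 layout."""
    stripe, pos = divmod(chunk, ndisks - 1)   # ndisks-1 data chunks per stripe
    parity_disk = ndisks - 1 - (stripe % ndisks)  # parity rotates each stripe
    # Data chunks fill the non-parity slots in order.
    disk = pos if pos < parity_disk else pos + 1
    return stripe, disk

print(raid5_location(0, 5))  # (0, 0): first chunk, stripe 0, disk 0
print(raid5_location(7, 5))  # (1, 4): parity moved, so data skips a slot
```

With a larger chunk size, more of a small random write lands on a single data disk; with a smaller one, big sequential writes are likelier to fill whole stripes, so the right setting depends on the workload.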
#6
I have had great luck with several 12-drive RAID 10 arrays: either two 6-drive RAID 0s mirroring each other, or four 3-drive RAID 0s mirroring, depending on the particular load and space needs.

I have had very poor performance from most RAID 5 arrays until you get past six drives. Five drives seems to just outperform a single drive, and seven or eight seems to be nice. But for the 12-drive cases I use, I get much better performance with RAID 10 than with RAID 5.
#7
Something is wrong with your controller or testing methods if a five-disk RAID 5 array is barely faster than a single drive. Are you doing synchronous writes on a poor FS or something?
#8
silvrhand, I would be curious to see where you got your data on RAID performance. Everything I have ever seen says RAID 5 does poorly on write performance no matter how many disks, while RAID 10 does better on both reads and writes.
#9
RAID 5 will very likely perform worse than RAID 10 for write operations, and this gets worse when adding more disks (the reason: a parity checksum for the stripe across all disks must be calculated and written, and the more disks, the more sectors go into that calculation). Some of the performance downgrade can be reduced by proper configuration and a write-back cache.

RAID 10 will (depending on the hardware implementation of the RAID controller) be faster at writes simply because there is no checksum to calculate. What performs better greatly depends on your read/write ratio and how sequential the data being read is. For a board, writes are usually far fewer than reads, so write performance should not have such a big impact.
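The checksum being discussed is XOR parity, and it is also why small RAID 5 writes are expensive: updating a single chunk requires reading back the old data and old parity before the new parity can be computed and written. A minimal sketch of the read-modify-write update:

```python
# RAID 5 parity is the XOR of all data chunks in a stripe. A small write
# needs 4 I/Os: read old data, read old parity, write new data, write
# new parity -- without touching the other disks in the stripe.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

old_data   = b"\x0f\x0f"   # chunk being overwritten
other_data = b"\xf0\x01"   # chunk on another disk, not read
parity     = xor(old_data, other_data)  # parity covering the stripe

new_data   = b"\xaa\xbb"
# New parity from old data + old parity + new data only:
new_parity = xor(xor(old_data, parity), new_data)

assert new_parity == xor(new_data, other_data)  # stripe still consistent
```

The trick is that XORing the old data back out of the parity leaves exactly the contribution of the untouched disks, so only two reads and two writes are needed regardless of array width.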
#10
I don't believe there is anything wrong with my testing methods; this experience has been gained over several years and many servers. It is definitely not a hardware problem.

The same issue goes for an EMC SAN or a StorageTek drive shelf: RAID 5 likes drives, and the more you throw at it, the faster it gets, up to a point. As for five-drive arrays, they are pretty much the break-even point, and this has been true for a long time. The way most companies' RAID 5 algorithms are written, you need five drives so that each write can go to four of them simultaneously. With fewer than that, you will always have at least one drive that has data and parity being written to it at the same time, which basically halves write performance. Some of this can be made up with cache on the controller, but that buffer gets eaten up fast in an intensive DB-type app.
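The controller-cache point can be made concrete: if the write-back cache lets the controller coalesce incoming writes into full stripes, parity can be computed from the buffered data with no reads at all. A rough I/O count under that assumption (these are the textbook figures, not vendor-specific behavior):

```python
# I/O cost of a host write on an n-disk RAID 5, comparing a small
# read-modify-write against a full-stripe write whose parity is
# computed entirely from data sitting in the controller cache.

def raid5_write_ios(ndisks, full_stripe):
    if full_stripe:
        return ndisks  # write ndisks-1 data chunks + 1 parity, zero reads
    return 4           # read old data + old parity, write new data + parity

print(raid5_write_ios(5, full_stripe=True))   # 5 I/Os moves 4 chunks of data
print(raid5_write_ios(5, full_stripe=False))  # 4 I/Os moves 1 chunk of data
```

Full-stripe writes are dramatically more efficient per byte, which is exactly why a random-write database workload (lots of small scattered updates that cannot be coalesced) chews through the cache so quickly.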
vBulletin 3.8.12 by vBS