I don't believe there is anything wrong with my testing methods. This experience has been gained over several years and many servers; it is definitely not a hardware problem.
The same issue applies to an EMC SAN or a StorageTek drive shelf. RAID5 likes drives: the more you throw at it, the faster it will be, up to a point. Five-drive arrays are pretty much the break-even point, and this has been true for a long time. The way most companies' RAID5 algorithms are written, you need 5 drives so that each write can go to 4 of them simultaneously. With fewer than that, you will always have at least one drive that has both data and parity written to it at the same time, which basically halves write performance.
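To put rough numbers on that, here's a toy model in Python (my own sketch; the round-robin layout of 4 data chunks plus 1 parity chunk is an assumption based on the description above, not any vendor's documented algorithm):

def chunks_per_drive(n_drives, data_chunks=4):
    # Distribute one write unit (data_chunks data chunks plus 1 parity
    # chunk) round-robin across n_drives; return per-drive chunk counts.
    load = [0] * n_drives
    for chunk in range(data_chunks + 1):  # +1 for the parity chunk
        load[chunk % n_drives] += 1
    return load

for n in (3, 4, 5, 6):
    load = chunks_per_drive(n)
    # One chunk per drive can be written in parallel, so the busiest
    # drive sets the pace for the whole write.
    print(n, "drives:", load, "-> relative write speed", 1 / max(load))

At 5 drives every drive takes exactly one chunk and the write completes in one pass; at 3 or 4 drives some drive has to take two chunks, and the relative speed drops to 0.5, which is the halving I described.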
Some of this can be made up with cache on the controller, but that buffer gets eaten up fast in an intensive DB-type app.
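The burst math on that cache is not in your favor. A quick back-of-the-envelope sketch (the numbers here are made up purely for illustration):

cache_mb = 512        # controller write-back cache (hypothetical)
incoming_mb_s = 100   # sustained write rate from the DB (hypothetical)
drain_mb_s = 50       # what the RAID5 set can actually commit (hypothetical)
print("cache full after ~%.0f s" % (cache_mb / (incoming_mb_s - drain_mb_s)))

Once the DB's sustained write rate outruns what the array can commit, the cache only buys you seconds before you're back to raw spindle speed.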