When you colo, what happens if a piece of hardware fails (hard drive, RAM, or, god forbid, the motherboard)? Wouldn't you then have immense downtime unless you have redundancy? That then doubles the hardware and rack space you'll need.
Most datacenters have a hardware stock you can purchase if something dies. The most common failures are fans and hard drives. If your motherboard dies, you would need to have one drop shipped in and just pay for remote hands to install it.
With hard drives, most RAID controllers support running a hot spare. With a hot spare in a RAID 5 configuration you can survive two drive failures, as long as the rebuild onto the spare finishes before the second drive dies.
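To make the failure math concrete, here's a toy model of that behavior. This is only an illustration of the logic (real controllers handle all of this in firmware); the function name and the event representation are made up for this sketch:

```python
def raid5_with_spare_survives(failure_events, spares=1):
    """Toy model: does a RAID 5 array with hot spares survive?

    failure_events is a list of ints over time; each entry is how many
    drives die before the rebuild for that event can finish.
    """
    degraded = False
    for dead in failure_events:
        if dead == 0:
            continue
        if dead > 1 or degraded:
            return False            # parity can only cover one missing drive
        if spares:
            spares -= 1             # hot spare kicks in, rebuild restores redundancy
        else:
            degraded = True         # no spare left: the next failure loses data
    return True

# Two failures spaced far enough apart, with one hot spare: survives.
print(raid5_with_spare_survives([1, 1]))       # True
# Two drives dying before the rebuild completes: data loss.
print(raid5_with_spare_survives([2]))          # False
# Three spaced-out failures but only one spare: the third one kills it.
print(raid5_with_spare_survives([1, 1, 1]))    # False
```

The point of the model: the hot spare doesn't let RAID 5 tolerate two *simultaneous* failures, it just shrinks the window during which a second failure is fatal.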
I just got a dual dual-core Opteron 270 with 8GB RAM and 4x150GB Raptors in RAID 5 over at softlayer.com. Great guys. I definitely suggest them for anyone looking at an Opteron.
I still will never understand why people waste money on dedicated servers. We have purchased about ten servers in the past and have had them co-located with Defender Hosting (.com) for the past three years. Never had any problems.
A post of mine from BBA about the TCO for one of my clients
The user I was talking with required less than 200GB of bandwidth a month, and thinking about it now, I didn't take into account unmetered bandwidth (the throughput-based plans, but those are just as costly).
I must also say his arguments were deeply flawed because he didn't factor the original build cost of a server into his monthly bills. However, my post is still valid.
Quote:
Honestly? You can't.
However, let's assume that you do colo, and you need way more than 200GB of bandwidth (most of the sites I'm running go through 500-1500GB of bandwidth a month).
The cost of building servers to accommodate the traffic would run upwards of $1,500, not to mention my time configuring and setting up the initial install of the OS, plus any license fees if I were to use one distro over another.
On top of this, let's say some hardware fails, perhaps my entire SCSI array and drives; now I have to replace it. Cost goes up.
I'm glad colo works well for you, but it doesn't for everyone.
I just priced the servers we rent now (most of them). Each would cost me $3,454 from Dell; x 6 = $20,724 for the initial setup. Now, this doesn't even start to cover the bandwidth charges that I (well, my clients) will need to pay. I believe all of the servers go through a total of about 4TB a month, at least to be on the safe side. That's another $4,000 a month just for colocation.
So, we have $24,724 just to get through our first month plus setup.
In the first year we'd spend $68,724 on colocation fees.
Compare that to ~$1,800 a month in rental fees, which is ~$21,600 a year.
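The arithmetic above can be laid out explicitly. The figures are the ones from this post (Dell price, colo fee, rental fee); hardware-failure and replacement costs are deliberately left out, as noted below:

```python
# First-year cost comparison using the numbers from the post above.
# Hardware-failure costs (which fall on the colo side) are ignored here.
SERVER_PRICE = 3_454        # per Dell box, as quoted
NUM_SERVERS  = 6
COLO_MONTHLY = 4_000        # bandwidth + rack space for ~4TB/month
RENT_MONTHLY = 1_800        # current rented-server bill

build_cost   = SERVER_PRICE * NUM_SERVERS
colo_year1   = build_cost + 12 * COLO_MONTHLY
rental_year1 = 12 * RENT_MONTHLY

print(f"build cost:         ${build_cost:,}")    # $20,724
print(f"colo, first year:   ${colo_year1:,}")    # $68,724
print(f"rental, first year: ${rental_year1:,}")  # $21,600
```

Note that the build cost is a one-time expense, so the gap narrows in later years; the comparison above is first-year only.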
So, on top of this, if any of my hardware fails, I'm responsible for it, which means more money directly out of my clients' pockets; if the rented hardware fails, it's not our problem and it gets replaced quickly.
Sure we could sell them, but I generally don't get rid of servers.
Edit: We go through upwards of 5TB a month at the moment, across all of the servers.
I've found in most cases that server load/capacity is generally I/O-limited, in other words, hard drives, RAID, etc. Lots of RAM helps a lot by caching the data. So does going from a dual Xeon to 4/8 Opterons really make any difference if the limiting factor is your hard drive/RAID speed?
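One way to check whether a Linux box really is I/O-bound is to watch how the iowait counter in /proc/stat grows between two samples (this is essentially what tools like iostat report). The two sample lines below are invented for illustration; on a live server you'd read /proc/stat twice, a few seconds apart:

```python
def iowait_pct(sample_a, sample_b):
    """Percent of CPU time spent waiting on I/O between two 'cpu' lines
    from /proc/stat (fields after 'cpu': user nice system idle iowait ...)."""
    a = [int(x) for x in sample_a.split()[1:]]
    b = [int(x) for x in sample_b.split()[1:]]
    delta = [y - x for x, y in zip(a, b)]
    return 100.0 * delta[4] / sum(delta)   # delta[4] is the iowait field

# Hypothetical samples, a few seconds apart:
#              user  nice  system  idle   iowait  irq  softirq
before = "cpu  1000  0     500     8000   200     10   20"
after  = "cpu  1100  0     550     8200   700     12   22"
print(f"{iowait_pct(before, after):.1f}% iowait")   # 58.5% iowait
```

A high iowait percentage with low user/system time is the signature of the disk-bound case described above: faster CPUs won't help, but more RAM for caching (or faster drives) will.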
I'm a little late to the party, but I've got a dedicated DB box with dual Opterons and can't rave enough about the performance. After putting up with Xeons for a long time, I can hardly believe it: 2.5M posts, 600 users online now, and server load is .04 (as in, not even .1). Hosted at liquidweb.com (though I cannot wholeheartedly recommend them). I'll be colocating two of these puppies in Dallas soon.
We use iWeb. Great service. They don't offer Opterons, but they went out and bought us one to see if everything in this thread is true. If so, they'll add it to their packages. Can't rave enough.
Preferably under 1.0 for each processor; however, load can get much higher before things really get bad. Honestly, I've yet to see any reliable marker for when you'll start to have real problems. I've had machines with the same setup differ wildly: one machine will be fine at 10, and another will have things failing at 6-7, running the same programs to test stability.
If you're spiking up to 5 for short periods under heavy traffic, I wouldn't worry too much about it. Theoretically, you should be able to handle that much constantly, but some would question how healthy that is for parts of the machine, especially the hard drives, depending on what's causing the load. If you're sitting above 5 for more than a few minutes at a time, I'd seriously suggest figuring out specifically why that's happening and seeing what you can do to optimize the server.
And load isn't always the best guide: depending on what's causing it, you might see no performance degradation at 20, 30, or even higher.
While load is often a good way to check the overall status of the server at a glance, looking at actual CPU and memory usage is much better.
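The rules of thumb from the posts above can be sketched as a quick triage function. The thresholds here are the rough ones from this thread (under 1.0 per core is comfortable, short spikes to ~5 are tolerable, sustained higher load deserves investigation), not hard limits, and the function name is made up:

```python
import os

def judge_load(load_1min, cores):
    """Rough triage of a 1-minute load average, per the thread's rules of thumb."""
    per_core = load_1min / cores
    if per_core < 1.0:
        return "comfortable"
    if per_core <= 5.0:
        return "busy: fine as a spike, worth investigating if sustained"
    return "overloaded: find out what's eating the box"

# On a real Unix box you'd feed it live numbers:
#   judge_load(os.getloadavg()[0], os.cpu_count())
# Hypothetical readings on a dual-CPU machine:
print(judge_load(0.8, 2))    # comfortable
print(judge_load(6.0, 2))    # busy: ...
print(judge_load(14.0, 2))   # overloaded: ...
```

As the post notes, load alone can mislead (a box at load 20 that is purely I/O-wait behaves very differently from one at 20 of pure CPU), so treat this as a first glance, not a verdict.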