I have seen a lot of questions on this subject and I happened to have some graphs handy, so I thought I would put up a post that should help all "big" boards understand this a little better.
In a multi-server setup the web server needs to talk to two different places: the internet, so users can come and get their data, and the database server, to get the information they are requesting. This diagram shows that relationship:
The web server should have two separate NIC cards, one facing the internet and one facing the database server. Even if your traffic is not that high, trying to do this over one NIC is not a good idea, because database requests will have to wait behind the web requests.
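To make this concrete, here is a minimal sketch of how the split might look in a vBulletin 3.x config.php. The addresses are hypothetical (public traffic on the routable IP, database traffic on a private subnet shared with the DB box); check the keys against your own vB version:

[code]
<?php
// includes/config.php (vB 3.x style) -- addresses are made up for illustration.
// Pointing MasterServer at the DB server's private-NIC address keeps all
// query traffic on the back-end interface, off the public NIC.
$config['Database']['dbtype'] = 'mysql';
$config['Database']['dbname'] = 'forum';

$config['MasterServer']['servername']  = '192.168.0.2'; // DB box, private NIC
$config['MasterServer']['port']        = 3306;
$config['MasterServer']['username']    = 'forum';
$config['MasterServer']['password']    = '********';
$config['MasterServer']['usepconnect'] = 0;
?>
[/code]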
The database server NIC will handle far more traffic than the public NIC. Let's look at some graphs. This graph shows 24 hours on my web server; that would be about 300 users at the low point and 2,200 simultaneous users at peak.
The blue line represents the amount of data going out to the users, the green line the data coming in. Notice that there is far more going out, as the web server serves up the pages. The "95th percentile", a measure of how much bandwidth you use, is 4.97 Mbit (megabits per second), so out to the users a 10BASE-T connection would be more than enough.
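For anyone unfamiliar with the measure: take the bandwidth samples for the period (typically one every 5 minutes), sort them, and discard the top 5%; the highest remaining sample is your 95th percentile. A rough sketch of the calculation (the sample values are invented):

[code]
<?php
// 95th percentile of a set of 5-minute bandwidth samples, in Mbit/s.
// A full day of 5-minute samples would have 288 entries; the handful
// below are made-up values just to show the mechanics.
function percentile95(array $samples)
{
    sort($samples);                                   // ascending
    $index = (int) ceil(0.95 * count($samples)) - 1;  // drops the top 5%
    return $samples[$index];
}

$samples = array(1.2, 3.4, 4.97, 2.8, 6.1, 5.0, 0.9, 4.2);
echo percentile95($samples) . " Mbit/s\n";
?>
[/code]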
Here is the same graph between the web server and the database server:
In this case the blue line, way down at the bottom, represents the data from the web server to the database server. The green line is the database server returning data to the web server. Notice how much more data goes over this connection than actually goes out to the users. That is one of the reasons it is so important to put it on a separate NIC. Also note that the 95th percentile is 38.8 Mbit, so you could not get away with a 10BASE-T NIC here; you need a 100 Mbit card to avoid creating a bottleneck. It is not necessary to run a gigabit card, although you would still see some improvement from one, as it would let data get "off the wire" quicker at peak load.
Hope this helps!
What I love to see is the correlation between how much goes out to the user vs. how much the web server needs from the database.
Why is there a need for the DB server to ship 40 Mbit to the web server when the web server only serves up, at most, 4 Mbit? Clearly a lot of that data is discarded.
This is where stored routines on the DB server would come in very handy. Instead of requesting a record, manipulating it, requesting another record, manipulating it... and so on, 4,000 times, the work could be shifted to the DB server, as in the sketch below.
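For illustration only, here is the general shape of the idea: aggregate on the database box so only the finished summary crosses the wire. The table and column names are invented, not vBulletin's actual schema:

[code]
<?php
// Hypothetical sketch: one stored routine replaces thousands of
// fetch-a-row round trips. Schema and names are made up.
$db = new mysqli('192.168.0.2', 'forum', '********', 'forum');

$db->query('DROP PROCEDURE IF EXISTS forum_summary');
$db->query('
    CREATE PROCEDURE forum_summary(IN fid INT)
    BEGIN
        -- the per-row work happens here, on the DB server
        SELECT threadid, COUNT(*) AS replycount, MAX(dateline) AS lastpost
        FROM post
        WHERE forumid = fid
        GROUP BY threadid;
    END');

// A single round trip returns the aggregated rows, instead of the web
// server pulling every post row over the wire and counting them itself.
$result = $db->query('CALL forum_summary(7)');
while ($row = $result->fetch_assoc()) {
    printf("thread %d: %d replies\n", $row['threadid'], $row['replycount']);
}
?>
[/code]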
Has anyone experimented with rewriting some of the more DB-intensive routines as stored routines?
On every forum page request, the web server requests from the database server:
1. the datastore (896KB here), which includes the forum cache; this may be quite big if you have lots of forums (we do)
2. the style data (20+KB here)
3. the whole set of templates for that page, up to 50 templates for a single showthread.php! It's difficult to estimate their size; assuming every template is 1KB, that adds another 50KB to every request
4. other relevant data: session info, user info, forum/thread/post info, the threads/posts themselves, etc.
So for a single 50KB showthread page the web server slurps about 1MB from the database. Scary, huh? And no, stored procedures won't help here, as you need all of that information on the web server to properly format and output the pages.
The proper solution would be to cache the datastore, styles and templates in some kind of memory cache (such as memcached, eAccelerator or APC).
At the moment I cache the datastore with eAccelerator, which helps a lot, using the built-in config.php option (bearing in mind that this is not supported by vB, as it is buggy).
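For anyone who wants to try the same, the general cache-aside pattern looks something like this with eAccelerator's shared-memory API. The key name, TTL and the fetch_datastore_from_db() helper are hypothetical; this is the shape of the idea, not vBulletin's actual datastore class:

[code]
<?php
// Cache-aside sketch using eAccelerator's shared-memory functions.
// 'vb_datastore', the 300-second TTL and fetch_datastore_from_db()
// are placeholders for illustration.
function get_datastore($db)
{
    $cached = eaccelerator_get('vb_datastore');
    if ($cached !== null) {
        return unserialize($cached);           // hit: no trip to the DB server
    }

    $data = fetch_datastore_from_db($db);      // miss: one DB round trip
    eaccelerator_put('vb_datastore', serialize($data), 300);
    return $data;
}
?>
[/code]

The same pattern works with memcached or APC; only the get/put calls change.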