
  #1  
06-22-2010, 05:12 AM
DieselMinded is offline
 
Join Date: Mar 2007
Posts: 1,655
Thanks given: 0
Thanked 0 times in 0 posts
Got A New Server

Tell me if this server is any good ...

Fully managed private HSphere Virtual Machine:
- Two Intel Nehalem E5520 logical CPU cores
- 4 GB of memory
- 100 GB of RAID10 storage on our EqualLogic PS5500E iSCSI SAN (some of this space is used by the OS and virtual-memory paging, giving you roughly 80 GB of effective space; see the rough breakdown after this list).
- 1000 GB of monthly transfer
- 1 IP address
- Protected (High Availability).
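
For a rough sense of where the ~80 GB effective-space figure comes from, here is a minimal Python sketch. The OS footprint and page-file sizing below are my own assumptions, not numbers quoted by the host:

Code:
# Rough estimate of usable space on the 100 GB volume.
# The OS footprint and page-file multiplier are assumptions,
# not figures from the provider.
TOTAL_GB = 100
RAM_GB = 4

os_footprint_gb = 12        # assumed size of the OS install and updates
page_file_gb = 2 * RAM_GB   # common rule of thumb: 1.5-2x RAM

effective_gb = TOTAL_GB - os_footprint_gb - page_file_gb
print(f"Approximate usable space: {effective_gb} GB")   # ~80 GB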

The above setup will replace your current server.

The physical server (server node) hosting your VM would be a Dell PowerEdge R710, equipped with dual Intel Nehalem E5520 processors (16 logical CPU cores running at 2.26 GHz each) and 72 GB of memory, with complete redundancy on iSCSI paths, network paths, and power supplies.

Please allow me to explain what our Enterprise Virtual Machine (VM) is and describe the high-end, completely redundant platform this VM will be hosted on.

Over the last two years we have invested more than $175,000 in capital and man-hours to build our new platform, using virtualization in both the server and storage layers to create our cloud infrastructure. We use Citrix XenServer Platinum and an EqualLogic PS5500E iSCSI SAN to power this cloud infrastructure.

This cloud infrastructure helps us achieve higher overall uptime, better redundancy, and greater scalability. All of our servers are now provisioned on it rather than on a traditional physical-server setup. Under traditional deployment, each physical server is very susceptible to hardware failure and other issues resulting from a non-redundant setup. Even a high-end server with a RAID subsystem still has single points of failure: the RAID controller and the server motherboard itself. Such issues can result in long hours of downtime. Extended downtime is also needed if you want to allocate more resources (CPU or memory) to the server, and further complications arise if you need to enlarge the disk subsystem; in that case you would most likely have to provision a new server with a larger disk subsystem, install everything on it, and migrate all customers from the old server to the new one. And finally, what happens if the physical server itself crashes?

Taking advantage of complete server and storage virtualization, our cloud infrastructure avoids all of these issues in the traditional server setup by offering customers:
1. Higher Overall Uptime. Since the server is virtualized, it is no longer tied to any particular physical hardware. Hardware maintenance no longer implies extended server downtime, and server upgrades (memory, CPU, etc.) are no longer a cause of service interruption. Upon detecting a hardware failure, or prior to performing hardware upgrades, we can perform a non-service-impacting migration of all VMs from one server node to another. In fact, we recently upgraded our hardware nodes without noticeable downtime to customers.
2. Dynamic Resource Scaling. With server and storage virtualization, scaling server resources can be done dynamically, quickly, and easily. All we need to do is specify how much additional memory, how many CPU cores, and how much disk capacity is required and assign it to the VM. After a reboot, the VM will be up and running with the new resource allocation. The ability to add resources quickly also means we can address issues faster and more efficiently, offering our customers a higher overall quality of service.
3. High Availability. All servers are placed together in one resource pool. When the system detects that one of the servers in the pool is down (for example, due to hardware failure), it restarts that server's Virtual Machines on other available servers in the pool (a simplified sketch of this placement logic follows the list). This involves only a short downtime, a significant improvement over the traditional setup, where multiple hours of downtime are typical before service is restored.
4. Greener Hosting. Rather than leaving resources unused and wasted, on-demand resource allocation and thin storage provisioning allow us to allocate only the resources needed today while retaining the ability to add resources for tomorrow's needs. This increases resource-usage efficiency while reducing the number of physical servers and storage arrays needed, and hence reduces overall power consumption and carbon emissions. Using our new setup, we can achieve faster and better performance while reducing our power consumption by 65-75%.
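
To make the high-availability behaviour in point 3 concrete, below is a minimal Python sketch of the kind of placement decision a pool manager makes when a host fails. The host names, memory figures, and first-fit policy are assumptions for illustration only; XenServer's actual HA planner is more sophisticated:

Code:
# Sketch of restart-on-failure placement in a resource pool.
# Host names, memory sizes, and VM sizes are made-up assumptions;
# this is not the real XenServer HA algorithm.
hosts = {
    "node1": {"free_mem_gb": 24, "up": False},   # the failed host
    "node2": {"free_mem_gb": 40, "up": True},
    "node3": {"free_mem_gb": 16, "up": True},
}

# VMs that were running on the failed host, with their memory needs in GB.
orphaned_vms = [("vm-a", 4), ("vm-b", 8), ("vm-c", 4)]

for vm, mem_gb in orphaned_vms:
    # First fit: pick any surviving host with enough free memory.
    target = next(
        (name for name, h in hosts.items()
         if h["up"] and h["free_mem_gb"] >= mem_gb),
        None,
    )
    if target is None:
        print(f"{vm}: no capacity available, left stopped")
    else:
        hosts[target]["free_mem_gb"] -= mem_gb
        print(f"{vm}: restarting on {target}")

In the pool itself this restart happens automatically, which is what keeps the downtime short compared with rebuilding a failed physical server.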
  #2  
06-22-2010, 11:54 AM
Marco van Herwaarden is offline
 
Join Date: Jul 2004
Posts: 25,415
Thanks given: 0
Thanked 0 times in 0 posts

As per Forum & Server Management posting guidelines, please post this on vBulletin.com.

Quote:
Topics that do not fit this forum:

Hosting Discussions.
Please use vBulletin.com for discussion on hosting companies or server suggestions: vBulletin Hosting Options