From 2009 and earlier: Server Sizing notes.

Server Sizing (Interim)

I have been meaning to rewrite this for a long time, but cannot seem to get the explanation just right. For now, I will just provide a short list of recommended system configurations spanning a broad price range. The pricing is for Dell PowerEdge servers, but no explicit recommendation over other vendors, including HP and IBM, or even white box systems, is implied. Most people should start with B or C. The decision to employ the D-2 and E systems should be based on expert analysis (cough, cough).

System  Processor           GHz    Cores  L3    Memory          Price
A       1 x Xeon E3440      2.53   4      8M    12GB (2x4G)     $1,750
B       2 x Xeon E5620      2.40   4      12M   24GB (6x4G)     $4,644
C       2 x Xeon X5680      3.33   6      12M   48GB (6x8G)     $9,000
D-1     4 x Opteron 6174    2.2    12     12M   128GB (32x4G)   $14,100
D-2     4 x Xeon X7560      2.26   8      24M   128GB (16x8G)   $30,300
E       8 x Xeon X7560      2.26   8      24M   256GB (32x8G)   $100K?
Price includes 2 x 73GB 15K HDD, but not the main storage system.
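
For a quick side-by-side, the short Python sketch below works out total cores, memory per core, and price per core from the table. It only restates the list prices quoted above (the $100K figure for E is a guess in the table), so treat the output as a rough comparison rather than a quote.

# Rough comparison of the configurations in the table above.
# Prices are the quoted list prices and will drift; this is only a sketch.
systems = {
    # name: (sockets, cores per socket, memory GB, price USD)
    "A":   (1,  4,  12,   1750),
    "B":   (2,  4,  24,   4644),
    "C":   (2,  6,  48,   9000),
    "D-1": (4, 12, 128,  14100),
    "D-2": (4,  8, 128,  30300),
    "E":   (8,  8, 256, 100000),   # the $100K price for E is a guess
}

print(f"{'System':6} {'Cores':>5} {'GB/core':>8} {'$/core':>8}")
for name, (sockets, cores_per_socket, mem_gb, price) in systems.items():
    cores = sockets * cores_per_socket
    print(f"{name:6} {cores:>5} {mem_gb / cores:>8.1f} {price / cores:>8.0f}")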

Notes:

System A
The lowest configuration with a quad-core Nehalem processor comes in at $1,750. Some additional savings can be had by using SATA disks instead of SAS.

System B
The memory configuration for B is 6 x 4GB Dual Ranked RDIMMs, which should be sufficient for most intermediate workloads. Upgrading B to 6 x 8GB is an additional $1,191.

System C
The memory configuration for C is 6 x 8GB Dual Ranked RDIMMs. Each additional set of 6 x 8GB DIMMs costs $2,382, or about $400 per 8GB RDIMM. The 16GB DIMM is $1,100 each. The maximum memory configuration is 12 x 16GB = 192GB. A total of 144GB via 18 x 8GB DIMMs is $4,764 extra, while 192GB via 12 x 16GB is $10,818 extra.
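
For what it is worth, the per-DIMM arithmetic behind these figures appears to work out as below (a Python sketch using the quoted prices; the assumption that the 192GB option credits back the base 6 x 8GB set is my inference, and Dell pricing changes frequently).

# Arithmetic implied by the System C memory options quoted above.
set_6x8gb = 2382.0      # one additional set of 6 x 8GB RDIMMs
dimm_16gb = 1100.0      # one 16GB RDIMM

print("8GB RDIMM : $%.0f each, about $%.0f/GB" % (set_6x8gb / 6, set_6x8gb / 48))
print("16GB RDIMM: $%.0f each, about $%.0f/GB" % (dimm_16gb, dimm_16gb / 16))

# 144GB = 18 x 8GB, i.e. 12 DIMMs (2 sets of 6) beyond the 6 x 8GB in the base price.
print("144GB upgrade: $%.0f extra" % (2 * set_6x8gb))               # $4,764
# 192GB = 12 x 16GB, which appears to replace (and credit back) the base 6 x 8GB set.
print("192GB upgrade: $%.0f extra" % (12 * dimm_16gb - set_6x8gb))  # $10,818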

The dual 10GbE option is another $1,500. Dual power supplies and the H700 controller with 512MB NV cache are also included.

System D-1
The 2U Dell PowerEdge R815 does not support a 4-way configuration with the 2.3GHz Opteron 6176, but the HP ProLiant DL585G7 (4U or 5U?) does. Dell does not list a 16 x 8GB memory option(?)

System D-2
Each additional set of 16 x 8GB RDIMMs will add $6,122. A full set of 64 x 8GB = 512GB will be $18.4K extra. A set of 64 x 16GB = 1TB will be $64K extra. The power supply is 4 x 1100W.

System E
A full set of 128 x 8GB = 1TB will be $36.7K extra. A set of 128 x 16GB = 2TB will be $129K extra.
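
The D-2 and E memory figures follow the same per-set arithmetic. The sketch below reconstructs them from the $6,122 per 16 x 8GB set and the roughly $1,100 per 16GB DIMM quoted earlier; crediting back the 8GB DIMMs that the 16GB options replace is my inference.

# Reconstructing the D-2 and E memory upgrade figures (list prices, so approximate).
set_16x8gb = 6122.0     # one set of 16 x 8GB RDIMMs (about $48/GB)
dimm_16gb  = 1100.0     # one 16GB RDIMM (about $69/GB)

# D-2 ships with 16 x 8GB; E ships with 32 x 8GB.
print("D-2 512GB (64 x 8GB):   $%.1fK extra" % (3 * set_16x8gb / 1000))                     # ~$18.4K
print("D-2 1TB   (64 x 16GB):  $%.0fK extra" % ((64 * dimm_16gb - set_16x8gb) / 1000))      # ~$64K
print("E   1TB   (128 x 8GB):  $%.1fK extra" % (6 * set_16x8gb / 1000))                     # ~$36.7K
print("E   2TB   (128 x 16GB): $%.0fK extra" % ((128 * dimm_16gb - 2 * set_16x8gb) / 1000)) # ~$129K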

Main Storage

The above does not include the primary storage system, which varies widely in price. A typical SAN storage system will run $2,000-6,000 per disk, and it is always necessary to distribute the IO load over many disks (think 50-200+) no matter what the SAN sales rep says or promises. Is he/she a deep database expert? What technical papers do they have on this? Can you show me a TPC-E benchmark report featuring a SAN storage system with so few disks? Will he/she be there to fix your problems? Direct-attach storage is recommended for data warehouse systems, where clustering is not required. SSD might be a good option for relatively small databases (<1TB) that generate super-high random IO (>20K IOPS).
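
To see why 50-200+ disks is not an exaggeration, the sketch below estimates spindle counts and SAN cost for a few random IO targets. The roughly 180 random IOPS per 15K disk is my assumption (a common rule of thumb), not a figure from the text above or from any vendor spec sheet.

# Rough spindle-count arithmetic behind the "50-200+ disks" point above.
# Assumption (mine): a 15K HDD sustains roughly 180 random IOPS.
iops_per_15k_disk = 180
san_cost_per_disk = (2000, 6000)    # the per-disk SAN price range quoted above

for target_iops in (5_000, 10_000, 20_000):
    disks = -(-target_iops // iops_per_15k_disk)   # ceiling division
    lo = disks * san_cost_per_disk[0]
    hi = disks * san_cost_per_disk[1]
    print(f"{target_iops:>6} IOPS: ~{disks} disks, roughly ${lo/1000:.0f}K-${hi/1000:.0f}K on a SAN")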

The Dell PowerVault MD1220 with 24 x 146GB 15K 2.5in drives is priced at $14K, or just under $600 per drive. There used to be an option for 73GB 15K drives at $500 per drive. It might be a good idea to plan for one fully populated MD1220 per processor socket.
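
At the quoted roughly $14K per enclosure, the one-fully-populated-MD1220-per-socket guideline works out as follows for the systems listed above (a sketch; actual configurations and pricing will vary).

# Cost of the one-MD1220-per-processor-socket guideline, at ~$14K per 24-drive enclosure.
md1220_price, drives_per_enclosure = 14000, 24

for name, sockets in [("A", 1), ("B", 2), ("C", 2), ("D-1", 4), ("D-2", 4), ("E", 8)]:
    print(f"System {name:3}: {sockets} enclosure(s), {sockets * drives_per_enclosure} drives, "
          f"~${sockets * md1220_price / 1000:.0f}K")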

Think along the following lines for storage:

SAS

Server Sizing (Interim)

(This is the material I am still working on)

The traditional approach to server sizing is a requirement-driven exercise. From a model of the mix of calls that represents the average transaction, or user activity in general, the processor, memory, and IO necessary to support the required volume are determined. This could be an estimate, or it could be an actual load test.
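
As a concrete illustration of the requirement-driven calculation, the sketch below sums CPU cost over a hypothetical call mix and derives the core count needed at a target average utilization. The call names, per-call CPU costs, and volumes are made up for illustration; they are not measurements from any real system.

# A minimal sketch of the requirement-driven sizing calculation described above.
call_mix = {
    # call name: (calls per business transaction, CPU-ms per call on one core)
    "GetOrder":    (1, 2.0),
    "InsertLine":  (4, 1.5),
    "UpdateStock": (4, 1.0),
    "Commit":      (1, 0.5),
}
target_tps  = 500      # required business transactions per second
target_util = 0.30     # keep average CPU low for good response times

cpu_ms_per_txn = sum(n * ms for n, ms in call_mix.values())
cores_needed   = (target_tps * cpu_ms_per_txn / 1000.0) / target_util
print(f"{cpu_ms_per_txn:.1f} CPU-ms per transaction -> "
      f"{cores_needed:.1f} cores at {target_util:.0%} average utilization")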

Load testing frequently becomes an overly long exercise due to the desire to simulate live user behavior, instead of concentrating only on actions that affect server system performance characteristics. Furthermore, interpreting performance measurements requires sophistication beyond the scope of normal IT experience, and many efforts often result in misleading conclusions.

The high expense and poor results of technical server sizing may have contributed to the widespread practice of adopting standard systems. Over the last 15 years, the standard system for database servers was a 4-way system, and a 2-way server for web and application servers. In the earlier years, it was quickly realized that web and application servers frequently did not scale up beyond two processors (and might not even be stable on more than one processor), but could easily be architected to scale-out with inexpensive 2-way servers.

Database servers, including SQL Server, usually did scale reasonably well to 4-way systems. The 4-way was often a very good choice for several reasons. Until about four years ago, even if a 2-way system was sufficient in terms of CPU resources, it may not have had sufficient memory capacity or IO bandwidth.

On the high-end, large organizations and other information-intensive operations may have desired greater compute capability than the moderate-cost 4-way server (typically $30-50K heavily loaded), especially when the project budget is on the order of tens of millions of dollars (sometimes even one billion, no joke) and the value of the information processing was even higher. Most servers with more than four processor sockets were Non-Uniform Memory Architecture (NUMA) systems.

While it was possible to scale up database engines, including SQL Server 2000, it was also common to encounter erratic performance characteristics. Some SQL Server operations did scale beyond 4 processors, some did not, and others could have negative scaling, possibly causing system instability. It was unclear at the time whether this was due to the number of processors or to NUMA.

Until very recently, there was very little quality material discussing the full nature of SQL Server on NUMA systems. Most vendors arranged for a "feel good" paper showing scaling on a benchmark, without getting into the details of the special techniques employed. Some special techniques might contribute a few percent, which may not be critical to a production environment but was important in the world of vendor competition. Other techniques might be absolutely critical for specific application characteristics, and could make or break a production system. There was almost zero public technical information on which were minor and which were major. It seemed that people employing high-end NUMA systems were either oblivious to the fact that special handling was required, or elected to employ some techniques and not others without any technical basis. Even an educated guess as to which was more correct did not seem to happen.

So even when there was desire and budget to employ high-end servers, the standard 4-way server was often the best choice given the poor dissemination of technical material and skills to support NUMA systems. In fact, in large environments, it was often better to scale back on features in order to support the required transaction volume. This is why employing separate reporting database servers is a common practice.

SQL Server 2005, and later versions, service packs, and hotfixes, along with the corresponding operating system improvements, have greatly improved scaling on NUMA systems. There are now far fewer cases of erratic behavior than with SQL Server 2000 (and Windows Server 2000/2003). Since around 2005, dual-core and then multi-core processors also appeared, so a 4-way (four processor socket) system could have eight or more processor cores. It did seem that even SQL Server 2000 had fewer problems on 4-way eight-core systems than was the case on NUMA systems.

AMD entered the 4-way server system market with their Opteron processor sometime in 2004-5. Technically, all Opteron systems with more than one processor are NUMA, but the difference in memory access between local and remote nodes is sufficiently small that Opteron systems did not exhibit the negative NUMA characteristics. So while some people like to point out that the Opteron is a NUMA system, it does not require special handling. (This is a good thing.)

In summary, formal load testing seems to have dropped off the board at many organizations, and the practice of adopting standard systems has been sufficiently successful. In the last several years, many have realized that modern multi-core processors have become so powerful that CPU bottlenecks are becoming a thing of the past. Often, when a CPU bottleneck does occur in a transaction processing system, the cause can be traced to missing indexes or silly SQL (especially in code that implements multiple optional search parameters), and not a true CPU deficiency. The low system processor utilization is one reason why virtualization has become very popular.

While the virtual environment enables many desirable features, it is still not recommended for critical line-of-business systems except for very small operations. Mostly this is driven by the value (or cost of downtime) of the application, and to a lesser degree, the remaining deficiencies of virtual environments.

Based on the points discussed above, it is proposed that most situations can dispense with the requirement-driven approach. Only very large information processing operations need to be concerned with load testing and technical server sizing analysis. An alternative is to adopt a budget-driven approach. Assess both the value of the application and the cost of downtime, then determine the appropriate server system budget. Presuming that this system is already far more powerful than required, possibly step down one system, then determine how to best employ the compute resources available. Also bear in mind the nature of queuing, and the benefits of maintaining low average CPU on a transaction processing system, including the benefit of having many processors (cores).
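
On the queuing point, the sketch below uses a plain M/M/c (Erlang C) model with an assumed 10ms of CPU per request to show how mean response time grows with average utilization, and how a larger core count flattens that growth. It models only CPU queuing, not a real database engine.

# Why low average CPU and many cores matter: an M/M/c (Erlang C) response time sketch.
from math import factorial

def mm_c_response_time(util, servers, service_ms=10.0):
    """Mean response time (ms) for an M/M/c queue at the given per-server utilization."""
    mu = 1.0 / service_ms                 # service rate per core (requests per ms)
    lam = util * servers * mu             # arrival rate producing this utilization
    a = lam / mu                          # offered load in Erlangs
    sum_terms = sum(a**k / factorial(k) for k in range(servers))
    last_term = a**servers / (factorial(servers) * (1.0 - util))
    p_wait = last_term / (sum_terms + last_term)   # Erlang C: probability of queuing
    wait = p_wait / (servers * mu - lam)           # mean time spent waiting in queue
    return wait + service_ms                       # plus the service time itself

for cores in (4, 8, 16, 32):
    row = ", ".join(f"{u:.0%}: {mm_c_response_time(u, cores):5.1f}ms" for u in (0.3, 0.6, 0.9))
    print(f"{cores:2} cores -> {row}")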

Given that the common system strategy was successful in the past, this could also be employed, except that now the standard (default choice) database server should be a 2-way system. Default means the server is selected more or less without technical analysis. Even if it is later determined that a larger system is required, the 2-way does not cost very much, certainly less than the cost of the effort to determine the correct choice, and the 2-way purchased for the database can always be repurposed.

Today NUMA is not necessarily a hugely meaningful term, because for both the AMD and current Intel processor lines, all multi-processor systems are technically NUMA. For lack of a better term, I am calling 8-way and larger systems Big-Iron, partly out of nostalgia. The previous generation Opteron allowed glueless 8-way, but the most recent AMD Magny-Cours (Opteron 6100 series) only supports up to 4-way. The Intel Xeon 7500 series supports glueless 8-way, and several system vendors have glueless 8-way Xeon 7500 systems. The HP ProLiant DL980G7 has node controllers, so it is not a glueless architecture. I have speculated that HP went to the trouble of designing the node controller to enable a 16-way system, which presumably might proceed if the 8-way market develops.

Prior to late 2005, scaling up on NUMA systems was technically very challenging. Expect it to have been necessary to rewrite a number of SQL statements to hint around the problem areas, possibly even resetting the cluster keys. After SQL Server 2005, running on big-iron was less difficult, and more technical material started to become available. So even while today's 2-way and 4-way servers are sufficiently capable of handling most needs, I still encourage considering the value of the application, and assessing whether even more immense compute power could better realize its potential. If so, technical expertise is still a good idea in the employment of large systems.