Making the Case for a SQL Server Platform Refresh

04 December 2012
by Glenn Berry

With the release of Windows Server 2012, SQL Server 2012, and the new generation of Sandy Bridge Xeon processors, your organization is likely to get many tangible benefits from upgrading your current database infrastructure with a complete platform refresh.

As 2012 is nearly over and we start getting into the holiday season, I think it makes sense to take stock of your current database infrastructure and determine whether this is a good time to start making some strategic upgrades. A number of events have happened over the past year that may help you make a more compelling case for a complete platform refresh, where you get new hardware, a new operating system, and a new version of SQL Server, all at the same time.

New Hardware

Back in March of 2012, Intel released the Xeon E5-2600 series family of processors for the two-socket server market. This was followed in May of 2012 by the Xeon E5-4600 series family of processors for four-socket servers. This family of processors is also known by the code name “Sandy Bridge-EP”, and it is a Tock release in Intel’s Tick-Tock release strategy for processors.

Every two years, Intel releases a Tock release, followed a year later by a Tick release. Tock releases use a new microarchitecture, with the same size lithography as the previous Tick release, while Tick releases use the same microarchitecture as the previous Tock release, with a process shrink to smaller size lithography. Tock releases typically mean a pretty substantial increase in performance and new features, and Tick releases usually mean a smaller increase in performance and features.  

Using a smaller process technology typically allows the processor to use less energy and have slightly better performance than the previous Tock release, but the performance jump is not nearly as great as you get with a Tock release. Tick releases are usually pin-compatible with the previous Tock release, so that lets the hardware systems vendors start using the Tick release processor in their existing models much more quickly, usually with just a BIOS update. A Tock release requires a new server model from the hardware system vendor, which sometimes delays the widespread availability of the new processor.

Figure 1 shows the relationship between Tick and Tock releases. It shows how the Tick-Tock model works, with the Tock release (in blue) using the existing manufacturing process technology, while the Tick release (in orange) moves to a new, smaller manufacturing process technology. New Intel processors are first released for the desktop market, then for the mobile market, followed later by the single-socket server market, then the two-socket server market, with the four-socket (and above) server market coming last. The four-socket server market does not always get every release because of its lower sales volume and slower release cycle. This explains why there has not been a Sandy Bridge-EX release for the four-socket market.

Figure 1: Intel Tick-Tock Model

 

Year   Type   Process   Code Name      Model Families
2008   Tock   45nm      Nehalem        Xeon 5500, 7500
2010   Tick   32nm      Westmere       Xeon 5600, E7
2011   Tock   32nm      Sandy Bridge   Xeon E3, E5
2012   Tick   22nm      Ivy Bridge     Xeon E3 v2
2013   Tock   22nm      Haswell        TBD
2014   Tick   14nm      Rockwell       TBD
2015   Tock   14nm      Skylake        TBD
2016   Tick   10nm      Skymont        TBD

Table 1: Intel Tick-Tock Release History and Schedule

Table 1 shows the history and future release schedule for Intel processors according to this Tick-Tock release strategy. Being aware of this makes it a little easier for you to plan and schedule platform upgrades. It also helps you to understand how old your current hardware is and how far out-of-date it may be. If you have an Intel processor that is older than the Xeon 5500 or Xeon 7500 series (such as a Xeon 5400 or Xeon 7400 series), it is using the older symmetric multiprocessing (SMP) architecture instead of the current non-uniform memory access (NUMA) architecture, which has a serious negative effect on performance and scalability, especially when you have four or more processor sockets.
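If you are not sure how your existing server presents itself to SQL Server, one quick way to check the logical processor count and the NUMA layout is to query a couple of dynamic management views. This is only a minimal sketch; it assumes SQL Server 2008 or later and VIEW SERVER STATE permission.

    -- Logical processor count and hyper-threading ratio for the instance
    SELECT cpu_count, hyperthread_ratio
    FROM sys.dm_os_sys_info;

    -- One row per NUMA node visible to SQL Server
    -- (a single row suggests an older SMP-style server or a small machine)
    SELECT node_id, node_state_desc, memory_node_id, online_scheduler_count
    FROM sys.dm_os_nodes
    WHERE node_state_desc <> N'ONLINE DAC';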

The 32nm Sandy Bridge-EP platform is a very significant improvement over the previous Nehalem and Westmere releases. It offers better single-threaded performance, higher processor core counts, much higher memory bandwidth and capacity, and much higher I/O bandwidth and capacity. Sandy Bridge-EP has PCI-E 3.0 support, which offers double the bandwidth of the previous PCI-E 2.0 standard. Many two-socket Sandy Bridge-EP servers have six or seven PCI-E 3.0 slots, which allows you to have a large number of RAID controllers, host bus adapters, or PCI-E storage devices for your storage subsystem.

Sandy Bridge-EP also allows two-socket servers to support 24 DIMM slots. With 32GB DDR3 ECC DIMMs (or more affordable 16GB DDR3 ECC DIMMs), it is possible to have up to 768GB of RAM in a two-socket server. If that is not enough memory for your workload, you have the option of using a four-socket server with Xeon E5-4600 series processors, which will support up to 1.5TB of RAM on the Sandy Bridge-EP platform.

The increased capabilities of the Sandy Bridge-EP platform may let you move from an older four-socket database server to a new two-socket database server. This would let you spend less on hardware and much less on SQL Server 2012 license costs, while getting better performance and still having adequate capacity and scalability for your workload. Bigger servers (in terms of socket count) are not necessarily faster servers. If you have an even bigger workload, the four- and eight-socket Westmere-EX platform (Xeon E7-4800 and Xeon E7-8800 series) is still a good choice, with up to ten physical cores per processor and support for up to 4TB of RAM in a four-socket server with 32GB DDR3 ECC DIMMs. Even Windows Server 2012 only supports 4TB of RAM.
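To make that license saving concrete, here is a rough, illustrative calculation (the processor models are just examples, and current list prices should always be checked): an older four-socket server with four eight-core Xeon X7550 processors has 32 physical cores, which would require 32 SQL Server 2012 Enterprise Edition core licenses, while a new two-socket server with two eight-core Xeon E5-2690 processors has 16 physical cores and requires only 16 core licenses. Halving the number of required core licenses on a server that will almost certainly perform better is a large part of the financial case for a platform refresh.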

A brand new server will be under warranty; it will use much less power and generate less noise, and it will offer better performance and scalability than a server from two or more years ago. If you follow my advice, you can select the server model and exact components to minimize your SQL Server 2012 license costs to the point that your license cost savings more than offset the capital cost for the server itself.

Another factor to consider when thinking about a new database server is your storage subsystem. New server models have the option of using 2.5” internal drive bays instead of 3.5” internal drive bays, so it is possible to have up to 26 internal drive bays in a 2U rack-mounted server. This gives you a lot more flexibility in designing a storage subsystem that has both the space and the I/O capacity, for both random and sequential I/O, to support your workload without the expense of external storage.

You can use more affordable, server-class SSDs (such as the new Intel DC S3700 series), either by themselves or in combination with conventional magnetic 6Gbps SAS drives, to design a very high performance I/O subsystem at a relatively low cost. You can also use relatively affordable PCI-E flash-based storage devices in your PCI-E 3.0 expansion slots to get additional I/O performance. For example, an 800GB Intel 910 PCI-E card currently costs about $4,000 while giving you up to 2,000MB/sec of sequential I/O performance.
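Before you design the new storage subsystem, it is worth measuring the I/O latency that your current workload actually experiences, so you know where faster storage will help most. A minimal sketch of that check is below; it uses standard DMVs available in SQL Server 2005 and later, and the figures are cumulative since the last instance restart.

    -- Average read and write latency per database file since the last restart
    SELECT DB_NAME(vfs.database_id) AS [Database Name],
           mf.physical_name,
           CASE WHEN vfs.num_of_reads = 0 THEN 0
                ELSE vfs.io_stall_read_ms / vfs.num_of_reads END AS [Avg Read Latency (ms)],
           CASE WHEN vfs.num_of_writes = 0 THEN 0
                ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS [Avg Write Latency (ms)]
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    INNER JOIN sys.master_files AS mf
        ON vfs.database_id = mf.database_id
       AND vfs.file_id = mf.file_id
    ORDER BY [Avg Read Latency (ms)] DESC;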

With SQL Server 2012 Enterprise Edition, you can use SQL Server AlwaysOn Availability Groups as part of your high availability and disaster recovery (HA/DR) strategy, using multiple servers that do not require shared storage (such as a SAN). That means you can use direct attached storage (DAS), internal drives, or PCI-E storage for your storage subsystem. If you will be using SQL Server 2012 Standard Edition, you can still use synchronous database mirroring (even though it has been deprecated) with non-SAN storage as part of your HA/DR strategy.
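To give a flavor of the syntax, here is a minimal sketch of creating a two-replica availability group with T-SQL. It assumes the Windows failover cluster is already built, the AlwaysOn feature is enabled on both instances, the database mirroring endpoints exist, and the database has a current full backup; the server and database names are hypothetical.

    -- Run on the intended primary replica (SQLNODE1 and SQLNODE2 are hypothetical server names)
    CREATE AVAILABILITY GROUP SalesAG
    FOR DATABASE SalesDB
    REPLICA ON
        N'SQLNODE1' WITH (
            ENDPOINT_URL = N'TCP://SQLNODE1.yourdomain.local:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = AUTOMATIC),
        N'SQLNODE2' WITH (
            ENDPOINT_URL = N'TCP://SQLNODE2.yourdomain.local:5022',
            AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
            FAILOVER_MODE = AUTOMATIC);

    -- Then, on the secondary replica, join it to the group:
    -- ALTER AVAILABILITY GROUP SalesAG JOIN;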

New Operating System

Windows Server 2012 became generally available to customers in September of 2012. Once you get used to the new user interface, it has a number of useful improvements over Windows Server 2008 R2. The first is higher hardware license limits than previous versions of Windows Server: Windows Server 2012 lets you use up to 4TB of RAM and up to 640 logical processors, where Windows Server 2008 R2 was limited to 2TB of RAM and 256 logical processors. You can also use Windows Server 2012 Standard Edition for your SQL Server deployments, since (unlike Windows Server 2008 R2 Standard Edition) it does not have a 32GB RAM license limit, and it includes support for Windows Failover Clustering (which is required both for traditional SQL Server failover clustering and for AlwaysOn Availability Groups).

Windows Server 2012 will be in mainstream support for a longer period than Windows Server 2008 R2 (which ends mainstream support on January 15, 2015). Windows Server 2012 also supports a new feature called memory error recovery, as long as you have a processor that supports it (such as an Intel Xeon 7500 series or Xeon E7 series), ECC memory, and SQL Server 2012 Enterprise Edition. This feature allows SQL Server 2012 Enterprise Edition to repair clean pages in the buffer pool by reading the pages again from disk. These “soft” errors are caused by electrical or magnetic interference inside a server that causes single bits inside DRAM chips to flip to the opposite state, with the main cause being background radiation from cosmic rays.

Windows Server 2012 also gives you faster failover times for Windows failover clusters compared to previous versions, along with cluster-aware updating. Windows Server 2012 also has SMB 3.0 support, which gives you much better file-copy performance between Windows Server 2012 machines. This is very useful when initializing AlwaysOn availability group replicas, database mirrors, log shipping secondaries, and transactional replication subscribers.

New Version of SQL Server

SQL Server 2012 became generally available to customers in March of 2012, and SQL Server 2012 Service Pack 1 was released on November 7, 2012. This is important because it is still fairly common for some organizations to wait to deploy a new version of SQL Server until the first Service Pack is released. Depending on what type of workload you have and what SQL Server components you use, there are a number of valuable new features in SQL Server 2012 that make it a worthwhile upgrade over previous versions.
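If you are not sure exactly which version, service pack level, and edition you are running today, a quick way to check (using standard SERVERPROPERTY values that exist in all recent versions of SQL Server) is:

    -- Report the version, service pack level, and edition of the current instance
    SELECT SERVERPROPERTY('ProductVersion') AS [Product Version],
           SERVERPROPERTY('ProductLevel')   AS [Service Pack Level],
           SERVERPROPERTY('Edition')        AS [Edition];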

Here are some of the more valuable features for the Database Engine (a short example of a couple of them follows the list):

  • AlwaysOn Availability Groups
  • Columnstore indexes
  • Online indexing operations improvements
  • Extended Events improvements
  • T-SQL language improvements
  • SQLOS and memory management improvements
  • Resource Governor improvements
  • Server Core support
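
As a quick illustration of two items from this list, the sketch below creates a nonclustered columnstore index on a hypothetical fact table, and then uses the new OFFSET/FETCH paging syntax that was added to T-SQL in SQL Server 2012 (the table and column names are invented for the example):

    -- New in SQL Server 2012: a nonclustered columnstore index on a (hypothetical) fact table
    CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
    ON dbo.FactSales (SalesDate, ProductID, CustomerID, SalesAmount);

    -- New in SQL Server 2012: OFFSET/FETCH paging in the ORDER BY clause
    SELECT ProductID, SUM(SalesAmount) AS TotalSales
    FROM dbo.FactSales
    GROUP BY ProductID
    ORDER BY TotalSales DESC
    OFFSET 0 ROWS FETCH NEXT 20 ROWS ONLY;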

Depending on how you are using SQL Server you may come up with a completely different list of new features or improvements that will make you want to upgrade to SQL Server 2012. As you do this, you need to stress the tangible benefits to your organization from that feature rather than just describing the feature. For example, AlwaysOn Availability Groups can help give you a much better HA/DR solution than was possible with older versions of SQL Server.

Another argument for migrating to SQL Server 2012 is that it will be in mainstream support for a longer period of time. SQL Server 2008 and SQL Server 2008 R2 will fall out of mainstream support from Microsoft on January 14, 2014, which is really not that far away. Once those versions are out of mainstream support, there will be no more service packs or cumulative updates for either version.

There was a lot of public consternation when Microsoft initially announced the new core-based licensing model for SQL Server 2012 back in November of 2011. Core-based licensing forces you to buy SQL Server 2012 Enterprise Edition core licenses instead of the old socket-based processor licenses used by previous versions. In the worst case, this could be substantially more expensive than SQL Server 2008 R2 processor licenses: a 16-core AMD Opteron 6200/6300 series processor would be about four times more expensive to license with SQL Server 2012 than with SQL Server 2008 R2.

The reality turns out to be quite a bit better than that. Microsoft released a SQL Server 2012 Core Factor Table on April 1, 2012 that provides a 25% reduction in physical core counts for licensing purposes for most modern AMD processors that have six or more physical cores. This makes licensing for AMD-based servers more affordable, but you can avoid this issue completely by choosing an Intel-based server instead. According to numerous TPC-E OLTP benchmarks, SQL Server performs significantly better on newer Intel-based servers than on newer AMD-based servers. The latest Intel Xeon processors have a maximum of eight or ten physical cores, so their SQL Server licensing cost is also lower.
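As an illustrative calculation (the processor model is just an example): a two-socket server with two 16-core AMD Opteron 6276 processors has 32 physical cores, but with the Core Factor Table applied it needs 32 x 0.75 = 24 SQL Server 2012 core licenses rather than 32. A comparable two-socket Intel server tops out at 16 core licenses (two eight-core Xeon E5-2600 series processors), which is one reason the Intel route is usually cheaper to license as well as faster.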

If you are going to buy a new database server for your migration to SQL Server 2012 (as I highly recommend), both AMD and Intel offer “frequency-optimized” models of their current processor lines that let you select a processor with fewer physical cores but a higher base clock speed (for example, the four-core, 3.3GHz Intel Xeon E5-2643 instead of the eight-core, 2.9GHz Xeon E5-2690). This will give you reduced SQL Server 2012 licensing costs and better single-threaded OLTP performance, at the cost of some reduced scalability and overall processor capacity.

Conclusion

Performing a complete platform refresh offers many advantages over upgrading the hardware, the operating system, or the SQL Server version in isolation. New hardware will be under warranty, it will use less power, and it will have better performance and scalability than older, existing hardware. You can also specifically pick your hardware and processors to get the best performance and the lowest SQL Server 2012 license costs. Having a brand new server will allow you to install fresh copies of Windows Server 2012 and SQL Server 2012 and get them fully patched and configured in a convenient, non-stressful fashion. Once everything is installed and configured, you can take as much time as necessary to do a complete cycle of testing and validation with your databases and applications in the new environment before you go live in production. Having at least one new server is the best way to make all of this possible, with a much higher chance of success.


© Simple-Talk.com