Glenn Berry

Making the Case for a SQL Server Platform Refresh

04 December 2012

With the release of Windows Server 2012, SQL Server 2012, and the new generation of Sandy Bridge Xeon processors, your organization is likely to get many tangible benefits from upgrading your current database infrastructure with a complete platform refresh.

As 2012 is nearly over and we start getting into the Holiday season, I think it makes sense to take stock of your current database infrastructure to determine whether it is a good time to start making some strategic upgrades.  A number of events have happened over the past year that may help you make a more compelling case for a complete platform refresh, where you get new hardware, with a new operating system, and a new version of SQL Server, all at the same time.

New Hardware

Back in March of 2012, Intel released the Xeon E5-2600 series family of processors for the two socket server market. This was followed in May of 2012 by the Xeon E5-4600 series family of processors for four-socket servers. This family of processors is also known by the code name of “Sandy Bridge-EP”, and it is a Tock release in Intel’s Tick-Tock release strategy for processors.  

Every two years, Intel releases a Tock release, followed a year later by a Tick release. Tock releases use a new microarchitecture, with the same size lithography as the previous Tick release, while Tick releases use the same microarchitecture as the previous Tock release, with a process shrink to smaller size lithography. Tock releases typically mean a pretty substantial increase in performance and new features, and Tick releases usually mean a smaller increase in performance and features.  

Using a smaller process technology typically allows the processor to use less energy and have slightly better performance than the previous Tock release, but the performance jump is not nearly as great as you get with a Tock release. Tick releases are usually pin-compatible with the previous Tock release, which lets hardware system vendors start using the Tick release processor in their existing models much more quickly, usually with just a BIOS update. A Tock release requires a new server model from the hardware system vendor, which sometimes delays the widespread availability of the new processor.

Figure 1 shows how the Tick-Tock model works, with the Tock release (in blue) using the existing manufacturing process technology, while the Tick release (in orange) moves to a new, smaller manufacturing process technology. New Intel processors are released first for the desktop market, then for the mobile market, followed by the single-socket server market, the two-socket server market, and finally the four-socket (and above) server market. The four-socket server market does not always get every release because of its lower sales volume and slower release cycle, which explains why there has not been a Sandy Bridge-EX release for the four-socket market.

Figure 1: Intel Tick-Tock Model (the Tick-Tock model through the years)

 

Year | Type | Process | Code Name    | Model Families
-----|------|---------|--------------|----------------
2008 | Tock | 45nm    | Nehalem      | Xeon 5500, 7500
2010 | Tick | 32nm    | Westmere     | Xeon 5600, E7
2011 | Tock | 32nm    | Sandy Bridge | Xeon E3, E5
2012 | Tick | 22nm    | Ivy Bridge   | Xeon E3 v2
2013 | Tock | 22nm    | Haswell      |
2014 | Tick | 14nm    | Broadwell    |
2015 | Tock | 14nm    | Skylake      |
2016 | Tick | 10nm    | Skymont      |

Table 1: Intel Tick-Tock Release History and Schedule

Table 1 shows the history and future release schedule for Intel processors according to this Tick-Tock release strategy. Being aware of this makes it a little easier for you to plan and schedule platform upgrades. It also helps you to understand how old your current hardware is and how far out-of-date it may be. If you have an Intel processor that is older than the Xeon 5500 or Xeon 7500 series (such as a Xeon 5400 or Xeon 7400 series), it is using the older symmetric multiprocessing (SMP) architecture instead of the current non-uniform memory access (NUMA) architecture, which has a serious negative effect on performance and scalability, especially when you have four or more processor sockets.
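
One quick way to see what SQL Server reports about your current hardware is a short DMV query. Here is a minimal sketch (the physical_memory_kb column assumes SQL Server 2012; on SQL Server 2008/2008 R2 use physical_memory_in_bytes instead):

    -- Hardware overview from the database engine's point of view
    SELECT cpu_count AS logical_cpu_count,
           hyperthread_ratio,
           cpu_count / hyperthread_ratio AS physical_socket_count,
           physical_memory_kb / 1024 AS physical_memory_mb,
           sqlserver_start_time
    FROM sys.dm_os_sys_info;

Comparing the socket count and memory total against Table 1 gives you a rough idea of how old the server really is.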

The 32nm Sandy Bridge-EP platform is a very significant improvement over the previous Nehalem and Westmere releases. It offers better single-threaded performance, higher processor core counts, much higher memory bandwidth and capacity, and much higher I/O bandwidth and capacity. Sandy Bridge-EP supports PCI-E 3.0, which has roughly double the bandwidth of the previous PCI-E 2.0 standard. Many two-socket Sandy Bridge-EP servers have six or seven PCI-E 3.0 slots, which allows you to have a large number of RAID controllers, host bus adapters, or PCI-E storage devices for your storage subsystem.

Sandy Bridge-EP also allows two socket servers to support 24 DIMM slots, with 32GB DDR3 ECC DIMMs (or more affordable 16GB DDR3 ECC DIMMs) so it is possible to have up to 768GB of RAM in a two socket server. If that is not enough memory for your workload, you have the option of using a four-socket server with a Xeon E5-4600 series processor that will support 1.5TB of RAM on the Sandy Bridge-EP platform.
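
If you want to quantify whether your current server is memory-constrained before arguing for more RAM, sys.dm_os_sys_memory (available since SQL Server 2008) is a simple starting point:

    -- Total and available physical memory, plus the OS memory state signal
    SELECT total_physical_memory_kb / 1024 AS total_physical_memory_mb,
           available_physical_memory_kb / 1024 AS available_physical_memory_mb,
           system_memory_state_desc
    FROM sys.dm_os_sys_memory;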

The increased capabilities of the Sandy Bridge-EP platform may let you move from an older four-socket database server to a new two-socket database server. This would let you spend less on hardware and much less on SQL Server 2012 license costs while getting better performance and still having adequate capacity and scalability for your workload. Bigger servers (in terms of socket count) are not necessarily faster servers. If you have an even bigger workload, the four- and eight-socket Westmere-EX platform (Xeon E7-4800 and Xeon E7-8800 series) is still a good choice, with up to ten physical cores per processor and support for up to 4TB of RAM in a four-socket server with 32GB DDR3 ECC DIMMs. Even Windows Server 2012 only supports 4TB of RAM.

A brand new server will be under warranty; it will use much less power and generate less noise, and it will offer better performance and scalability than a server from two or more years ago. If you follow my advice, you can select the server model and exact components to minimize your SQL Server 2012 license costs to the point that your license cost savings more than offset the capital cost for the server itself.

Another factor to consider when thinking about a new database server is your storage subsystem. New server models have the option of using 2.5" internal drive bays instead of 3.5" internal drive bays, so it is possible to have up to 26 internal drive bays in a 2U rack-mounted server. This gives you a lot more flexibility in designing a storage subsystem that has both the space and the I/O capacity, random and sequential, to support your workload without the expense of external storage.

You can use more affordable, server-class SSDs (such as the new Intel DC S3700 series) either by themselves or in combination with conventional magnetic 6Gbps SAS drives to design a very high performance I/O subsystem at a relatively low cost. You can also use relatively affordable PCI-E flash-based storage devices in your PCI-E 3.0 expansion slots to get additional I/O performance. For example, an 800GB Intel 910 PCI-E card currently costs about $4000.00 while giving you up to 2000MB/sec of sequential I/O performance.
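
Whatever storage design you choose, it is worth capturing file-level latency numbers before and after the change so you can demonstrate the improvement. A minimal sketch using sys.dm_io_virtual_file_stats (the numbers are cumulative since the last instance restart):

    -- Average read and write latency per database file
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           vfs.io_stall_read_ms / NULLIF(vfs.num_of_reads, 0) AS avg_read_latency_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    INNER JOIN sys.master_files AS mf
        ON vfs.database_id = mf.database_id
       AND vfs.file_id = mf.file_id
    ORDER BY avg_read_latency_ms DESC;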

With SQL Server 2012 Enterprise Edition, you can use SQL Server AlwaysOn Availability Groups as part of your HA/DR strategy using multiple servers that don’t require shared storage (such as a SAN). That means you can use direct attached storage (DAS), internal drives, or PCI-E storage for your storage subsystem. If you will be using SQL Server 2012 Standard Edition, you can still use synchronous database mirroring (even though it has been deprecated) with non-SAN storage as part of your HA/DR strategy.
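
As a rough illustration of how little ceremony an availability group requires once Windows Failover Clustering is in place (all server, database, and endpoint names below are hypothetical, and the databases must already be in FULL recovery with mirroring endpoints created on both replicas):

    -- Run on the primary replica; names are hypothetical
    CREATE AVAILABILITY GROUP SalesAG
    FOR DATABASE SalesDB
    REPLICA ON
        N'SQLNODE1' WITH (ENDPOINT_URL = N'TCP://SQLNODE1.corp.local:5022',
                          AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                          FAILOVER_MODE = AUTOMATIC),
        N'SQLNODE2' WITH (ENDPOINT_URL = N'TCP://SQLNODE2.corp.local:5022',
                          AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                          FAILOVER_MODE = AUTOMATIC);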

New Operating System

Windows Server 2012 became generally available to customers in September of 2012. Once you get used to the new user interface, it has a number of useful improvements over Windows Server 2008 R2. The first is higher hardware license limits than previous versions of Windows Server: Windows Server 2012 lets you use up to 4TB of RAM and up to 640 logical processors, whereas Windows Server 2008 R2 was limited to 2TB of RAM and 256 logical processors. You can also use Windows Server 2012 Standard Edition for your SQL Server deployments since, unlike Windows Server 2008 R2 Standard Edition, it is not limited to 32GB of RAM and it supports Windows Failover Clustering (which is required both for traditional failover clustering and for AlwaysOn Availability Groups).
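
You can confirm which Windows release an instance is running without leaving SQL Server; sys.dm_os_windows_info requires SQL Server 2008 R2 SP1 or later, and Windows Server 2012 reports as release 6.2:

    -- Windows version as seen by the database engine
    SELECT windows_release,            -- 6.1 = Server 2008 R2, 6.2 = Server 2012
           windows_service_pack_level,
           windows_sku
    FROM sys.dm_os_windows_info;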

Windows Server 2012 will be in mainstream support for a longer period than Windows Server 2008 R2 (which ends mainstream support on January 15, 2015). Windows Server 2012 also supports a new feature called memory error recovery, as long as you have a processor that supports it (such as an Intel Xeon 7500 series or Xeon E7 series), ECC memory, and SQL Server 2012 Enterprise Edition. This feature allows SQL Server 2012 Enterprise Edition to repair "soft" memory errors in clean pages in the buffer pool by reading the pages again from disk. Soft errors are caused by electrical or magnetic interference inside a server that causes single bits inside DRAM chips to flip to the opposite state; the main cause is background radiation from cosmic rays.

Windows Server 2012 also gives you faster failover time for Windows failover clusters compared to previous versions, along with cluster-aware updating. Windows Server 2012 also has SMB 3.0 support, which gives you much better file-copy performance between Windows Server 2012 machines. This is very useful when initializing AlwaysOn Availability replicas, database mirrors, log shipping secondaries, and transactional replication subscribers.
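
For example, a network backup and restore used to initialize a secondary benefits directly from SMB 3.0 between Windows Server 2012 hosts (the database name and UNC path below are hypothetical):

    -- Back up on the primary to a WS2012 file share
    BACKUP DATABASE SalesDB
    TO DISK = N'\\FILESERVER\SQLBackups\SalesDB_full.bak'
    WITH COMPRESSION, CHECKSUM;

    -- Restore on the secondary, leaving it ready to join an availability
    -- group or mirroring session
    RESTORE DATABASE SalesDB
    FROM DISK = N'\\FILESERVER\SQLBackups\SalesDB_full.bak'
    WITH NORECOVERY;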

New Version of SQL Server

SQL Server 2012 became generally available to customers in March of 2012, and SQL Server 2012 Service Pack 1 was released on November 7, 2012. This is important because it is still fairly common for some organizations to wait to deploy a new version of SQL Server until the first Service Pack is released. Depending on what type of workload you have and what SQL Server components you use, there are a number of valuable new features in SQL Server 2012 that make it a worthwhile upgrade over previous versions.
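
A quick way to verify that an instance really is on Service Pack 1 (SQL Server 2012 SP1 reports ProductVersion 11.0.3000 and ProductLevel 'SP1'):

    SELECT SERVERPROPERTY('ProductVersion') AS product_version,
           SERVERPROPERTY('ProductLevel')  AS product_level,
           SERVERPROPERTY('Edition')       AS edition;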

Here are some of the more valuable features for the Database Engine:

  • AlwaysOn Availability Groups
  • Columnstore indexes (see the sketch after this list)
  • Online indexing operations improvements
  • Extended Events improvements
  • T-SQL language improvements
  • SQLOS and memory management improvements
  • Resource Governor improvements
  • Server Core support
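
To make one of these concrete, here is a hedged sketch of a nonclustered columnstore index on a hypothetical fact table. Keep in mind that in SQL Server 2012 a table becomes read-only while a columnstore index exists on it:

    -- Table and column names are hypothetical; drop or disable the index
    -- before loading data, since the table is read-only while it exists
    CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
    ON dbo.FactSales (OrderDateKey, ProductKey, SalesAmount);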

Depending on how you are using SQL Server you may come up with a completely different list of new features or improvements that will make you want to upgrade to SQL Server 2012. As you do this, you need to stress the tangible benefits to your organization from that feature rather than just describing the feature. For example, AlwaysOn Availability Groups can help give you a much better HA/DR solution than was possible with older versions of SQL Server.

Another argument for migrating to SQL Server 2012 is that it will be in mainstream support for a longer period of time. SQL Server 2008 and SQL Server 2008 R2 will fall out of mainstream support from Microsoft on January 14, 2014, which is really not that far away. Once those versions are out of mainstream support, there will be no more service packs or cumulative updates for either version.

There was a lot of public consternation when Microsoft initially announced the new core-based licensing model for SQL Server 2012 back in November of 2011. Core-based licensing forces you to buy SQL Server 2012 Enterprise Edition core licenses instead of the old socket-based processor licenses used by previous versions. Because each core license is priced at roughly one quarter of the old per-processor license, any socket with more than four physical cores costs more to license than it did before. In a worst-case scenario, this could be substantially more expensive than SQL Server 2008 R2 processor licenses: a 16-core AMD Opteron 6200/6300 series processor would cost about four times as much to license under SQL Server 2012 as under SQL Server 2008 R2.

The reality turns out to be quite a bit better than that. Microsoft released a SQL Server 2012 Core Factor Table on April 1, 2012 that provides a 25% reduction in physical core counts for licensing purposes for most modern AMD processors that have six or more physical cores. This makes licensing for AMD-based servers more affordable, but you can avoid this issue completely by choosing an Intel-based server instead. According to numerous TPC-E OLTP benchmarks, SQL Server performs significantly better on newer Intel-based servers than on newer AMD-based servers. The latest Intel Xeon processors have a maximum of eight or ten physical cores, so their SQL Server licensing cost is also lower.

If you are going to buy a new database server for your migration to SQL Server 2012 (as I highly recommend), both AMD and Intel offer “frequency-optimized” models of their current processor lines that allow you to specifically select a processor model that has fewer physical cores but a higher base clock speed. This will give you reduced SQL Server 2012 licensing costs and better single-threaded OLTP performance at the cost of some reduced scalability and overall processor capacity.

Conclusion

Performing a complete platform refresh offers many advantages over upgrading the hardware, operating system, or SQL Server version in isolation. New hardware will be under warranty, it will use less power, and it will have better performance and scalability than older, existing hardware. You can also specifically pick your hardware and processors to get the best performance and the lowest SQL Server 2012 license costs. Having a brand new server will allow you to install fresh copies of Windows Server 2012 and SQL Server 2012 and get them fully patched and configured in a convenient, non-stressful fashion. Once everything is installed and configured, you can take as much time as necessary to do a complete cycle of testing and validation with your databases and applications in the new environment before you go live in production. Having at least one new server is the best way to make this all possible, with a much higher chance of success.

Glenn Berry

Author profile:

Glenn Berry is a Principal Consultant with SQLskills. He has worked as a SQL Server professional for many years in a variety of roles, most recently as Database Architect for Avalara in Parker, CO.

Glenn has been a SQL Server MVP since 2007, and he has a whole collection of Microsoft certifications, including MCITP, MCDBA, MCSE, MCSD, MCAD, and MCTS, which proves that he likes to take tests. His expertise includes DMVs, high availability, hardware selection and configuration, and performance tuning. He is also an Adjunct Faculty member at University College - University of Denver, where he has been teaching since 2000. He has completed the Master Teacher Program at University College - University of Denver.

Glenn is heavily involved in the SQL Server community, and is a frequent speaker at user groups, SQL Saturdays, and the PASS Community Summit. He is the author of the book SQL Server Hardware, and he contributed chapters to the books SQL Server MVP Deep Dives and SQL Server MVP Deep Dives Volume 2.

Glenn's blog is at http://sqlserverperformance.wordpress.com/ and he can be reached by email at glenn@SQLskills.com and on Twitter at GlennAlanBerry.

Subject: Timely article
Posted by: mlmoore (view profile)
Posted on: Wednesday, December 05, 2012 at 6:43 AM
Message: Great article and very timely. We are planning exactly what you are suggesting (hardware, OS, SQL, and SAN upgrade [Violin]) for our largest and most critical SQL app. Nice to have an independent resource support our plan.

Subject: Happy to Help
Posted by: GLennAlanBerry (view profile)
Posted on: Wednesday, December 05, 2012 at 10:56 AM
Message: Glad you enjoyed the article. I hope it gives you more ammunition to get the complete upgrade done. Good luck!


Subject: Time to Get Normal
Posted by: Robert young (view profile)
Posted on: Thursday, December 06, 2012 at 1:14 PM
Message: With the growing availability of such power at such a relatively low price, perhaps "platform refresh" should also be explicit about shifting to SSD as primary storage, and thus to 5NF refactoring of the schema. I know, I know; coders love to refactor code at the drop of a hat, but they're always too scared to "break the database" by normalizing.

Subject: E5-2690 compared to Itanium
Posted by: Anonymous (not signed in)
Posted on: Monday, December 10, 2012 at 4:32 AM
Message: We've got some 2-socket, 2-core Itaniums set up as passive clusters (1 OLAP and 1 SQL cluster). Age is circa 2006 (IA64_Family_32_Model_0\_0, 1.6GHz), although they may be older than that, and they were not recent tech when we got stuck with them.

I am desperate to upgrade to 2012 and to the E5-2690 for OLAP. Would the chip also be best suited for SQL, or should I try and squeeze a 20-core solution out? (Licensing being the restriction on cost there.)


Subject: Intel Xeon E5-2690
Posted by: GLennAlanBerry (view profile)
Posted on: Monday, December 10, 2012 at 8:45 AM
Message: This processor gives the best single-threaded performance and scalability of any current server processor for SQL Server OLTP workloads.

For people who want to minimize their SQL Server 2012 core-based license costs, the E5-2643 is a good alternative, with only four physical cores, but a higher base clock speed.


Subject: 5NF?
Posted by: RGarrison (view profile)
Posted on: Monday, December 10, 2012 at 10:15 AM
Message: Robert Young,

How does the growing speed of processors and storage push us "thus to 5NF refactoring of the schema"?

I am a strong proponent of normalizing data (to the appropriate extent), but I've never thought of the processor or storage speed as having any effect on that decision.

Subject: Join Together
Posted by: Robert young (view profile)
Posted on: Monday, December 10, 2012 at 11:16 AM
Message: RGarrison:

Then you're blessed. I've always gotten the "we have to denormalize for speed" crap.

As to processor and storage speed. For a schema defined to organic normal form, there is a penalty from having to synthesize the rows (processor load, mostly) and a penalty for having to retrieve the component rows (storage speed, mostly). We gain both reduced storage footprint and DRI of organic normal form. But: They Ain't No Such Thing As A Free Lunch. That's the reason (mostly) why coders **still** "design" schemas that are just flatfiles dumped into RDBMS. That, and usurping data integrity into their venue, client code. They should be hung from the yard arm, of course.

Updates to the database will, in many/most cases, be done with SP, and thus processor load.

Normal form databases, in general, put more load on the database server, and less on the client. This was by Codd's design. In 1969, when Codd presented inside IBM (the paper went public in 1970), the 3270 hadn't yet been released. The paradigm at the time was client/server-in-a-box (and much how RS-232 apps on RDBMS/*nix were built): the datastore (IMS in IBM's case) was a patch of memory, and the "terminal" was another patch wired to the display (RS-232/SNA/etc.). The terminal, by the 3270 timeframe, was a bit programmable, but nothing fancy, not much more than isolated input field mask edits.

The "true" client/server era was propelled by local nets of engineering workstations, which were justified on the basis of compute-heavy/data-light structure; i.e. not much data on the wire, and not much concurrency at the datastore. Client/server was never intended to move data integrity out to intelligent clients.

Now, what with Cloud and such, we're in the position to re-create that client/server-in-a-box paradigm, just with a rather longer wire. The client end of the wire is just a pixelated, albeit rather dumb, display. With Godzilla rank servers, and fast-enough internet, we should.

 
