Grant Fritchey

SQL Server Performance Crib Sheet

31 May 2007

Updated March 2013


For things you need to know rather than the things you want to know


Introduction

"We're not building Amazon.com."

Have you heard this statement or others like it? This is usually delivered early in a development cycle when someone, probably not the person spewing these words of "wisdom," suggests that performance should be taken into account when designing and building the database, or laying out the new server configuration, or writing a trigger. Don't listen to them. Performance is as important to a small system as it is to a large one. A tiny database with 10 users is as important to those 10 users as Amazon.com is to the 10 million users it supports. It's true that worrying about the difference between a 20ms and a 5ms query on the small system may be a waste of time, but there is plenty of work to do before you get down to worrying about that sort of minutiae. And with the introduction of Azure SQL Database alongside traditional on-premises SQL Server instances, performance tuning has actually become a way to save money.

Overview

How much performance is enough? Where do you start tuning? When do you stop tuning? Each application being developed will answer these questions in a different way. The important thing is not to establish a single mechanism for answering them. Your goal is to establish best practices and guidelines that will lead to the answers in the right way for the application under consideration.

First, and most important, the SQL Server system itself needs to be configured correctly. It also needs to be running on a correctly configured Windows server. This is the foundation on which the databases will be built. These same requirements do not exist when dealing with an Azure SQL Database. After the server is configured, you need to design and build the database to perform optimally. That's assuming you're building a new database. If you're trying to tune an inherited database, then you'll want to know what a good database looks like. Appropriately designing the database is even more important in Azure. Once the server and database are out of the way, you need to be concerned with the code running against it. This means the views, triggers, functions and, on local copies of SQL Server, the CLR code. It doesn't stop there, because you also need to be sure that the development staff is accessing the database correctly, both in their general use of the database and its code, and in their own use of ADO or whatever other client they might be using to access the system.

In order to address all these concerns you need to understand how to measure and test performance. Once you've measured the performance and found it wanting, you'll need to know what to do about it. After addressing structural changes to the database, changes to the server, or refactoring of the T-SQL code, you'll need to have a mechanism in place to test the results in order to be sure your work is accurate.

After all this, you should have a correctly functioning system that performs and scales well.

Measuring Performance

While setting up the server and designing a database are the foundations on which performance is built, understanding how to measure performance allows you to verify the decisions you've made are the right ones and provides you with direction on future decisions. The two main areas that you'll measure are the performance of the server itself, including both Windows and SQL Server, and the performance of databases and their associated code within SQL Server. To well and truly understand what's happening with your system you'll combine these two sets of measures. There are some tools provided with SQL Server, and whatever flavor of Windows server you're running, that perform these measurements. Just remember that if you’re tuning an Azure SQL Database, you only get access to the internals of the database, not to the server itself. An entire industry has grown around monitoring and measuring performance of SQL Server and Windows.

Perfmon

Microsoft provides Performance Monitor Counters as a means for keeping an eye on the server. These are accessed through a product called Performance Monitor, commonly referred to as perfmon, from the name of the executable. The counters themselves are grouped into Performance Objects. These range from the mundane, such as Physical Disk, Server, Memory, Processor and Cache, to the more obscure, like Telephony and WMI Objects, all included as part of a standard server installation. After you install SQL Server, an additional list of counters specific to SQL Server becomes available, with objects to measure how the server is behaving such as SQL Statistics, Buffer Manager, Cache Manager, SQL Server Memory and more. It can be quite overwhelming when you consider that each object then lists its available counters. So, for example, the Buffer Manager object contains 13 counters, from "Background writer pages/sec" to "Target Pages." Defining a standard set of counters that capture the core information to monitor the system is a must.

Perfmon Counter Set

As an initial list, collecting all the counters for these objects can act as a good baseline for data to make performance tuning decisions:

  1. Memory
  2. Network Segment
  3. Physical Disk
  4. Processor
  5. Server
  6. System
  7. SQL Server: Access Methods
  8. SQL Server: Buffer Manager
  9. SQL Server: General Statistics
  10. SQL Server: Locks
  11. SQL Server: SQL Statistics

Using perfmon

Performance Monitor is a graphical interface that is split in two. When you first open the GUI it will be in display mode, probably showing a single metric, "% Processor Time," running on a one-minute loop. Perfmon can be used in one of two ways.

  1. You can add a series of counters and observe their behavior through the GUI in real time.
  2. You can use perfmon's ability to create Data Collector Sets that can be scheduled to start and stop collection with the Windows scheduler, recording counters for later review and manipulation.

The latter approach is preferred because it allows you to be more systematic. Data Collector Sets are collections of data written to a file. You can run them over a period of time, say during year-end processing if that's when you're expecting high load, or over the course of a two-hour test (more on that below in Performance Testing). Instead of simply watching the counters scroll by for one minute and then losing the data forever, you'll have a record that allows you to isolate when and where bottlenecks occurred. You'll also be able to look at this data over time. A key point to keep in mind is that perfmon has a data collection cycle interval. This means that if you're experiencing high loads, but short transactions, a 30-second sample rate may entirely miss the events you'd most like to record. With this in mind, when not performing a long-running system test, a short interval like 10 seconds would not be unreasonable. Remember that the shorter the interval, the more data will be collected; keep this in mind when planning for disk space. The collection can contain start and stop times or define a length of time for which to run. All of these settings are available through the command line, so you can start data collection using third-party tools or schedulers.
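As a sketch of that command-line approach, the Windows logman utility can create and control such a collector set. The set name, output path, sample interval and counter list below are illustrative choices, not requirements:

```shell
# Create a counter collector set sampling every 10 seconds to a binary log file
logman create counter SQLBaseline -si 10 -f bin -o "C:\PerfLogs\SQLBaseline" ^
  -c "\Memory\Pages/sec" ^
     "\PhysicalDisk(*)\Avg. Disk Queue Length" ^
     "\Processor(_Total)\% Processor Time" ^
     "\SQLServer:Buffer Manager\*"

# Start and stop collection on demand (or schedule with the -b and -e switches)
logman start SQLBaseline
logman stop SQLBaseline
```

The resulting .blg file can then be opened in the perfmon GUI for review.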

Supported file formats include binary, comma delimited and tab delimited. You can also store the data directly to SQL Server, but for most performance monitoring situations, storing the data to file rather than to SQL Server works well. You can run Perfmon on a machine other than that which is being monitored, which means that any I/O costs are not incurred by the monitored machine. If you were to use SQL Server, probably to set up some sort of enterprise level of monitoring, you would want to be careful to not use the machine you're monitoring for storing the data being collected as this will mask the normal performance of the machine behind the transactions necessary to support perfmon.

One other source of performance counter information in SQL Server is the dynamic management view sys.dm_os_performance_counters. These are only a sub-set of the counters available in perfmon specific to your SQL Server instance, but this subset is immediately available inside of queries for whatever monitoring solution you might be trying to put in place.
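For example, a quick spot check of the Buffer Manager counters from inside SQL Server might look like the following; the LIKE filter is just one convenient way to narrow the object name:

```sql
-- Buffer Manager counters as exposed inside SQL Server
SELECT object_name,
       counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
ORDER BY counter_name;
```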

Evaluating perfmon data

Having collected the data on your server during high production use, or as part of a load test, you need to know how to evaluate these counters. Each set of counters, and each individual counter, tells a different part of the story. A massive amount of drill-down is required to define all of them. Instead, I'll focus on a very few of the most important counters - ones that can directly indicate the health of your overall system, the Server itself and SQL Server.

Server health

Starting with the server itself, you need to worry about:

  1. Memory
  2. Disk I/O
  3. The Processors
Memory
The most basic memory check is "Pages/Sec". According to the MS documentation, this counter shows the:
"Rate at which pages are read from or written to disk to resolve hard page faults. This counter is a primary indicator of the kinds of faults that cause system-wide delays."
A high number here means there is a lot of activity in the memory of your system. What constitutes "a high number" depends on the amount of memory on the machine. My laptop shows spikes of 156 Pages/Sec under a light load, whereas one of my production servers can show a spike of over 1,000 under a normal load. Measuring your system over time will allow you to develop the knowledge of what constitutes an abnormal load.
Disk I/O
The core check for disk drives is the length of time that writes (including updates, inserts and deletes) or reads wait in the queue, and this is measured by the counter "Avg. Disk Queue Length." Again, a high number is bad. This counter can be set to average all the disks on a system or you can look at each individual disk. Averaging the disks may be an interesting exercise, but to understand what is happening on your system, you really need to set a counter for each individual disk. Further, if you are in a SAN, this number will only be meaningful if you know how many disks are configured for the drive on your system.
Processors
Finally you can look at the activity of the processors using the “System:Processor Queue Length” counter. This counter, like the disk one, can either be an average or a precise measure of each processor and once more, the higher the number, the worse the performance. If the values are consistently low, while the processors on your system may be used extensively, they’re not being used so much that they’re causing other threads to wait for access to processor.

SQL Server health

The above counters point to the general health of the server, but what about the SQL Server itself? There are a few counters here that can indicate how well the server is behaving, but keep in mind that these are only broad indicators. Detailed troubleshooting will require more detailed information to support accurate analysis.

Memory Grants Pending
SQL Server manages its own memory space within the memory provided by the Windows operating system. The Memory Grants Pending measure indicates when processes within SQL Server are waiting to get access to memory. Like most of these measures, you'll need to understand how your system behaves by collecting this measure over time and comparing values. In general, a consistently high number is indicative of memory pressure within SQL Server.
Full Scans/Sec
Next, as a general measure of health, it's good to look at the Full Scans/Sec counter in Access Methods. This is basically the number of table or index scans that the system is experiencing. A high number here shows either poorly written stored procedures or bad indexing. Either way, you need to get to work identifying the source of the problem.
Lock Requests/Sec
Under Locks, the Lock Requests/Sec counter can show the number of new locks and lock conversions that are taking place on the system. This counter can be general, showing Total Locks, or it can get extremely specific, counting row ID locks (RID), key, file and page locks. While higher numbers may be bad, depending on the circumstances, they may also be an indicator of just a lot of use on the system. If this number is spiking or growing over time, you will need to pursue more details to ascertain whether or not you have a problem.
Number of Deadlocks/Sec
Also under Locks, you may want to put the Number of Deadlocks/Sec counter on if you're experiencing deadlocks. Deadlocks, by their very nature, are indicative of performance problems that require, at the least, procedure tuning and review, and a check on the indexes of a machine. Other steps may be required.
User Connections
Not actually an indicator, but a very useful measure in combination with all the other counters, is the User Connections counter under General Statistics. It's not that this number will necessarily indicate a problem on your server, but it helps in conjunction with the other counters to identify where real problems exist. For example, say you have a larger than normal number of locks. Check the number of user connections. If it's higher than normal, then you're probably just experiencing a spike in usage, but if it's average or below average, then you may have a problem and it's time for more detailed investigation.
Batch Requests/Sec
The last general measure is Batch Requests/Sec under SQL Statistics. This measure is quite simply the number of requests coming in to the server. It is a very general number and may be indicative of nothing more than high use, but it's a good value to track over time because it can show you how your system use is scaling. It can also be used to indicate when you have peaks and valleys in the number of user requests on the system.

All of the counters outlined above are only the beginning of the metrics you can use to get a general measure of system performance. All the other available counters will enable you to drill down into specifics within SQL Server or the server itself. After determining what the OS and the Server are up to, you will need to look inside at what the queries are doing. This is where Extended Events come into play.

Extended Events

Extended events were introduced in SQL Server 2008, and until SQL Server 2012 they could only be managed through T-SQL commands. But because of the way extended events are implemented, they cause radically less load than the older method of setting up a server-side trace. Extended events are now the best method for gathering performance metrics, in addition to a number of other events, within SQL Server. This is the list of categories of events that can be captured:

  1. Access_methods
  2. AlwaysOn
  3. Broker
  4. Cdc_Logscan
  5. Change_Tracking
  6. CLR
  7. Cursor
  8. Database
  9. Deadlock_monitor
  10. Errors
  11. Exception
  12. Execution
  13. Filetable
  14. Fulltext
  15. Garbage_Collection
  16. Hadr
  17. Index
  18. Init
  19. IO
  20. Latch
  21. Lock
  22. Memory
  23. Oledb
  24. Process
  25. Replication
  26. Scheduling
  27. Security
  28. Server
  29. Session
  30. Storage_Management
  31. Synchronization
  32. Task
  33. Transactions
  34. Transmitters
  35. Transport
  36. UCS
  37. Warnings

In order to capture performance metrics, you'll usually focus on just a few specific events that capture T-SQL performance. Unlike the old mechanism of trace events, you don't have to define all the columns to be collected. You just have to capture the extended event, and the majority of the information you need will be captured with it. Here are the principal events you would want to capture:

Rpc_completed

This event fires after the completion of a remote procedure call, or, in other words, when a stored procedure finishes executing.

Sql_batch_completed

When a T-SQL batch, other than a stored procedure, completes, you’ll get a result from this event.

Each of these events will return information such as the amount of reads, writes, CPU, duration, start and stop times. They’ll also return the procedure call or SQL batch, so you can see exactly what was passed to SQL Server.

In addition to the standard event fields, you can add extra fields, called Actions or Global Fields. These can provide additional useful information for the queries you're collecting. However, they add overhead to the event capture, so you need to be very judicious in their use. Here are a few that might be worth adding to your data collection:

Database_id

In the event that you have the same procedure or query running on different databases, it would be worth knowing which database this particular call was run against.

Client_app_name

If the application name is supplied through the connection, you can see which applications are making which calls.

Query_hash

A hash is created of queries and stored in cache. You can use this value to identify common query patterns within your system. If you have ad hoc queries, it’s a great way to identify multiple tuning opportunities.

Nt_username

Provides the name of the login that is currently running the query. Depending on the security settings of your system, this can be useful for figuring out who ran a query.

Username

If you use SQL logins, this functions in the same way as nt_username.
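Putting the events and actions above together, a session definition might look like this sketch; the session name and file path are placeholders to adapt for your own system:

```sql
-- Capture completed batches and procedure calls, with a few global fields
CREATE EVENT SESSION QueryPerformance ON SERVER
ADD EVENT sqlserver.rpc_completed (
    ACTION (sqlserver.database_id,
            sqlserver.client_app_name,
            sqlserver.query_hash)),
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (sqlserver.database_id,
            sqlserver.client_app_name,
            sqlserver.query_hash))
ADD TARGET package0.event_file
    (SET filename = N'C:\PerformanceTuning\QueryPerf.xel')
WITH (STARTUP_STATE = OFF);

-- Start collecting
ALTER EVENT SESSION QueryPerformance ON SERVER STATE = START;
```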

Just remember, while Extended Events are a very inexpensive method for collecting performance data from your system, nothing is ever free. It is possible to add enough events, and enough actions to those events, to seriously impact your system. But judicious application of these events will enable you to gather extensive query metrics without negatively affecting the people accessing your data.

You can output the Extended Events to anything from the ring buffer (an in-memory target within SQL Server) to a file on disk. In general, output to a file works best for this type of data collection. Be sure you have enough space to accommodate the data collected. You can then either browse the data using the viewer built into SQL Server 2012 Management Studio, or use a system function to pull the data from the disk and put it into tables in order to aggregate and query against it:

SELECT *
FROM sys.fn_xe_file_target_read_file('E:\ExEventData\QueryPerf.xel', NULL, NULL, NULL);

The primary information is returned in an XML column, event_data. This means you’ll need to write some XQuery statements to pull out the pertinent information:

WITH xEvents
AS (SELECT fxftrf.object_name AS xEventName,
           CAST(fxftrf.event_data AS XML) AS xEventData
    FROM sys.fn_xe_file_target_read_file('c:\PerformanceTuning\Performance*.xel',
                                         NULL, NULL, NULL) AS fxftrf)
SELECT x.xEventName,
       x.xEventData.value('(/event/data[@name=''duration'']/value)[1]',
                          'bigint') AS Duration,
       x.xEventData.value('(/event/data[@name=''physical_reads'']/value)[1]',
                          'bigint') AS PhysicalReads,
       x.xEventData.value('(/event/data[@name=''logical_reads'']/value)[1]',
                          'bigint') AS LogicalReads,
       x.xEventData.value('(/event/data[@name=''cpu_time'']/value)[1]',
                          'bigint') AS CpuTime,
       CASE x.xEventName
           WHEN 'sql_batch_completed'
           THEN x.xEventData.value('(/event/data[@name=''batch_text'']/value)[1]',
                                   'varchar(max)')
           WHEN 'rpc_completed'
           THEN x.xEventData.value('(/event/data[@name=''statement'']/value)[1]',
                                   'varchar(max)')
       END AS SqlText
FROM xEvents AS x;

Dynamic Management Objects

Setting up Performance Monitor to observe the behavior of your system, along with Extended Events to track the behavior of your queries, is the best method for gathering useful performance metrics. However, you have other methods available that can help you with spot checks, and with checks not available through the other two. That method is querying the Dynamic Management Objects (DMOs). The DMOs were introduced in SQL Server 2005. They consist of views and functions that give you access to information within SQL Server. Earlier, the view sys.dm_os_performance_counters was mentioned. There are a number of others that can help you. Here are just a few:

Sys.dm_os_wait_stats

This view shows an aggregation of the waits occurring within your SQL Server instance since the last time the server was started or failed over. Using wait statistics is a great way to understand exactly where your system is running slow because it’s waiting on resources. Querying this view will not tell you which queries are waiting though. For that you’ll have to go to other DMOs.
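A sketch of such a query, returning the top waits by total wait time; the handful of benign system waits filtered out here is illustrative, not an exhaustive list:

```sql
-- Top waits since the last restart, ignoring a few benign system waits
SELECT TOP 10
       wait_type,
       wait_time_ms,
       signal_wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'BROKER_TO_FLUSH', 'REQUEST_FOR_DEADLOCK_SEARCH')
ORDER BY wait_time_ms DESC;
```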

Sys.dm_exec_requests

To see exactly what is currently running within your system you can query this view. It will show you which processes are running, how long, when they started, any previous waits, any current waits and if they are blocked by other processes. You can combine this with a number of other DMOs to retrieve information from the cache such as the T-SQL text and execution plans.
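For example, combining this view with sys.dm_exec_sql_text shows the text of everything currently executing; a minimal sketch:

```sql
-- Currently executing requests with their statement text
SELECT der.session_id,
       der.status,
       der.start_time,
       der.total_elapsed_time,
       der.wait_type,
       der.blocking_session_id,
       dest.text AS QueryText
FROM sys.dm_exec_requests AS der
     CROSS APPLY sys.dm_exec_sql_text(der.sql_handle) AS dest
WHERE der.session_id > 50; -- skip most system sessions
```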

Sys.dm_exec_query_stats

For the queries currently in cache in SQL Server’s memory, this view will show aggregate performance information. You can see how long queries have been running, how many times they’ve been called, when they were first compiled and a number of other interesting statistics.

Sys.dm_exec_sql_text

To see the query from sys.dm_exec_requests or sys.dm_exec_query_stats you can call this function. It will return the text of an entire batch, or, you can pass it a begin and end offset to get individual statements.

Sys.dm_exec_query_plan

To understand how SQL Server is retrieving the data you’re manipulating within your T-SQL statement, you need to look at the execution plan. You can get the plan_handle from sys.dm_exec_requests or sys.dm_exec_query_stats to get a look at the execution plan in the cache.

As an example of how you can put the DMOs together to retrieve interesting information, this query will get the statement, the total execution statistics and the query plan for the top 10 queries currently in cache. It orders them by total elapsed time so that you can see which queries have been taking up the most time on your system:

SELECT TOP 10
       SUBSTRING(dest.text, (deqs.statement_start_offset / 2) + 1,
                 ((CASE WHEN deqs.statement_end_offset = -1
                        THEN DATALENGTH(dest.text)
                        ELSE deqs.statement_end_offset
                   END) - deqs.statement_start_offset) / 2 + 1) AS StatementText,
       deqs.execution_count,
       deqs.total_elapsed_time,
       deqs.total_logical_reads,
       deqs.total_logical_writes,
       deqs.total_worker_time,
       deqp.query_plan
FROM sys.dm_exec_query_stats AS deqs
     CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
     CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp
ORDER BY deqs.total_elapsed_time DESC;

Third Party Tools

There is an entire industry built around monitoring servers and databases. Most of these tools use the same counters that you can access through Performance Monitor or Extended Events. A few of them find ways to monitor packets or logs or memory and provide a slightly different set of measures. However, most of them do things that you could do for yourself. What they offer is the ability to set up monitoring and alerts and take advantage of all the work that they've put into recording the data over time, setting up reports, building enterprise-level alerts, etc. If you have more than a handful of servers to monitor, doing the research and finding a third-party tool that works in a manner you find acceptable is worth the time and effort it will save you.

Tuning Performance

Performance tuning consists of setting or changing the server configuration, the database indexing or storage or the T-SQL code in order to achieve enhanced performance. After gathering information about what is happening on your machine, you can begin the process of identifying where you are experiencing slow performance. Once you've identified the source you then have the information you need to determine what changes you can make to improve performance.

Server Performance

The quickest possible solution here is to get more and faster CPUs running on more and faster memory against more advanced and faster disk storage. There, we're done. What's that, you've got a big honking machine and it isn't performing properly, or you've got a limited budget so you need to squeeze more from the machine that you have? OK. Listed below are some areas where you can change settings in order to make a difference to how your machine is configured. Most of these are suggestions as each and every set of circumstances is unique. You need to test these suggestions in your environment in order to ensure their applicability to your own set of circumstances.

Rather than let SQL Server manage memory allocation, and grow and shrink it as necessary, simply determine the maximum memory that you can allocate to the server and fix the memory allocation at that point. Variable memory is only helpful if you're running SQL Server in a mixed environment with other applications (this, by the way, will lead to poor performance as the server has to contend with other applications for precious memory, CPU, and disk resources).

You should reserve some amount of memory for the OS, depending on the amount of memory on the system. Smaller systems with 32GB of memory or less can probably make do with between 1GB and 2GB reserved for the OS.
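Assuming a server with 32GB of RAM where you reserve 2GB for the OS, fixing the allocation might look like this; the 30720MB figure is illustrative:

```sql
-- Cap SQL Server's memory at 30GB, leaving 2GB for the operating system
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 30720;
RECONFIGURE;
```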

For most systems the Max Degree of Parallelism (MAXDOP) default value of zero (0) which enables all the available processors should work well. But, you may need to adjust the allocation of CPUs in order to ensure some for the operating system and for other processes running on the server.

The default value for the Cost Threshold for Parallelism is 5. This is much too low for the majority of systems out there, especially OLTP systems. Set this value to 35 as a starting point and observe the use of parallel queries and CPU from there to determine if it needs to be adjusted up or down.
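This setting is also exposed through sp_configure; for example, raising the cost threshold to the suggested starting point of 35:

```sql
-- Raise the cost threshold for parallelism from the default of 5 to 35
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 35;
RECONFIGURE;
```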

One of the most consistent bottlenecks in SQL Server is the tempdb. This is used when applications create objects such as temporary tables, but it is also used when rebuilding indexes, sorting data, snapshot isolation and a number of other processes. The common recommendation is to create one file in tempdb for each CPU core in your system. This can be a good starting point, but may be overkill in small systems or inadequate in larger systems.  Before changing your tempdb file allocation, watch for PAGELATCH wait types in your tempdb. If you are seeing those, you may not have enough files for tempdb and will need to add them. Where possible, isolate the tempdb files to their own drives.
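Adding a data file to tempdb is a single statement; the logical name, path, and sizes below are placeholders to adjust for your own system:

```sql
-- Add a second data file to tempdb (adjust path and sizes for your system)
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\TempDB\tempdev2.ndf',
          SIZE = 4GB,
          FILEGROWTH = 512MB);
```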

Database Performance

Database performance almost always comes down to I/O. How much data can you get in/out of the system and how quickly? First, and most important, will be your actual database design. It can't be over-emphasized that a poorly constructed database, no matter how sophisticated the server, will perform badly.

Indexing

To start with, you need to plan an indexing strategy at the same time as you plan the database. First, and most important, is the clustered index. As a general rule, every table in a SQL Server database should get a clustered index. There are exceptions, but the exceptions should be exceptional. You can only put one clustered index on a given table, so the proper selection of exactly what column or columns to place it on is extremely important. By default, the clustered index gets created on the primary key. This may well be the appropriate place to put a clustered index, but you need to evaluate how the data is most likely to be accessed.

It makes sense to leave the cluster on the primary key if that primary key provides the most commonly used access path, the most common column used to either search the table or relate the table to another.

Changing the parameters slightly, if the access path is mostly through another column, say Company Name, this may make a more appropriate clustered index. Another situation is when the table is no longer at the top of the chain, but somewhere in the middle and the most likely avenue of selection for the data is going to come through the foreign key to a parent table. Then the foreign key column becomes a good candidate for the clustered index. A further wrinkle could be added by needing to get the related data in this child table, but based on a date. This would result in a clustered index composed of two columns: the foreign key and the date. As you can see, in a few sentences a number of options were laid out. You need to think this through as you design the system.
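To illustrate that last case, with hypothetical table and column names, a two-column clustered index on a child table might be declared like this:

```sql
-- Hypothetical child table clustered on its foreign key plus a date,
-- to support queries like "all order lines for this order in a date range"
CREATE CLUSTERED INDEX cix_OrderLine_OrderDate
ON dbo.OrderLine (OrderID, OrderDate);
```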

You may also identify, either during design, testing or monitoring the production system, that other indexes are needed. While multiple indexes on any given table may be needed, keep in mind that each index adds overhead to the system, because these values have to be maintained as the data gets modified, which includes inserts, updates and deletes. Further, since non-clustered indexes are stored by pointing to the clustered index (one of the many reasons you need a clustered index on the table), changes to the clustered index can result in a cascade through all the other indexes on a table. Because of all this, while indexes are good and necessary and absolutely should be created where needed, restraint must be exercised in their application and use.

Files and FileGroups

Other factors that can affect the performance of your system include the way the files and file groups are laid out. You should be creating multiple files for your databases to optimize performance. A baseline for this would be to create one file for the logs, another for the data (defined by the clustered index), and another for non-clustered indexes. Additional files may be necessary to separate out BLOB data or XML data or unusually active tables, each onto its own file, and where possible, onto its own disk. This has been found to be true even on SAN systems because distributing the load takes further advantage of the architecture of the SAN.
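As a sketch, with illustrative database, file and index names, separating non-clustered indexes onto their own filegroup looks like this:

```sql
-- Add a filegroup on a separate disk for non-clustered indexes
ALTER DATABASE Sales ADD FILEGROUP Indexes;
ALTER DATABASE Sales
ADD FILE (NAME = SalesIndexes,
          FILENAME = 'I:\SQLData\SalesIndexes.ndf',
          SIZE = 2GB)
TO FILEGROUP Indexes;

-- Place a non-clustered index on that filegroup
CREATE NONCLUSTERED INDEX ix_OrderLine_ProductID
ON dbo.OrderLine (ProductID)
ON Indexes;
```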

Normalization

The data stored and the data retrieved should be defined appropriately. This means normalizing the storage. If you simply want a flat file, don't put it into a relational database system; a NoSQL system will perform much better, and in a relational system you're paying for overhead you don't need and sacrificing benefits that you could achieve. Normalization actually acts as a performance enhancement, and the query optimizer can use enforced referential constraints to make your queries perform better. That said, some targeted denormalization, picking some fields to duplicate rather than maintaining a perfect third normal form system, can provide some performance benefits. But you must test to confirm that you're actually in that situation.

Data Types

Define the data types that you need, not the ones you think you might need someday. A phone number is a string, not a number. Define the field length that you actually need and enforce that length.
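A small sketch of the principle, with an illustrative table and column sizes. The point is that a phone number stores formatting and leading zeros and is never used in arithmetic, so it belongs in a sized character column:

```sql
CREATE TABLE dbo.Contact
(
    ContactID   INT IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    FirstName   VARCHAR(50) NOT NULL,
    LastName    VARCHAR(50) NOT NULL,
    PhoneNumber VARCHAR(20) NOT NULL, -- a string, not INT or BIGINT
    BirthDate   DATE        NULL      -- DATE, not DATETIME, if no time part is needed
);
```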

Other Issues

Simple things can make a difference too.

  • Turn auto-shrink off.
  • Make sure auto-update of statistics is turned on.
  • If a database is read only, set it to read only.
  • Use triggers very judiciously. They mostly operate in the background, making them difficult to monitor and troubleshoot.
  • Be very careful with the autogrowth settings on the database. 10% autogrowth will be fine when the database is 500MB; it makes a huge difference when the database is 50GB. For larger databases, change the setting to grow by a fixed amount rather than a percentage of total database size.
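The settings above can be applied with a few ALTER DATABASE statements. The database and file names here are placeholders; the growth increment should suit your own storage and workload:

```sql
ALTER DATABASE SalesDB SET AUTO_SHRINK OFF;
ALTER DATABASE SalesDB SET AUTO_UPDATE_STATISTICS ON;

-- Grow by a fixed amount, not a percentage:
ALTER DATABASE SalesDB
    MODIFY FILE (NAME = SalesDB_Data, FILEGROWTH = 512MB);

-- For a database that never changes, such as an archive copy:
-- ALTER DATABASE SalesArchive SET READ_ONLY;
```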

T-SQL Performance

After you've configured your server and built a functional database, you'll need to move data in and out of it. Most of this will happen through T-SQL queries. These queries should be defined within stored procedures and make use of views and user-defined functions to manipulate and access sets of data. The most important part of this concept is the set. Most people think in terms of individual pieces of data instead of sets of data. This conceptual shift, to manipulating data in sets instead of row by row, delivers the biggest performance benefit when working with T-SQL. Learning T-SQL syntax and set-based querying methodologies up front, so that procedures are well written from the start, is much easier than attempting to tune or fix hundreds or even thousands of poorly written queries after the fact.
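To make the row-by-row versus set-based contrast concrete, here is the same price change written both ways, against a hypothetical dbo.Products table. The single set-based statement lets the engine do the work in one operation instead of one round through the loop per row:

```sql
-- Row by row: a cursor applying a price increase one product at a time.
DECLARE @ProductID INT;
DECLARE ProductCursor CURSOR FOR
    SELECT ProductID FROM dbo.Products WHERE CategoryID = 3;
OPEN ProductCursor;
FETCH NEXT FROM ProductCursor INTO @ProductID;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Products
    SET Price = Price * 1.10
    WHERE ProductID = @ProductID;
    FETCH NEXT FROM ProductCursor INTO @ProductID;
END;
CLOSE ProductCursor;
DEALLOCATE ProductCursor;

-- Set-based: the same work in a single statement.
UPDATE dbo.Products
SET Price = Price * 1.10
WHERE CategoryID = 3;
```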

Writing T-SQL queries that perform well isn't always as easy as it sounds. The goal is to work with the T-SQL compiler and optimizer, processes internal to SQL Server itself, by providing them with queries they can tune to deliver your data optimally. That means starting with the basics and getting the simple things right from the start.

  • Make sure that your queries are written to manipulate only the data you need.
  • Ensure that simple things, like qualifying database objects by their owning user or schema, are a regular part of your coding practices.
  • Learn and use the latest ANSI style of syntax employed by SQL Server (ANSI 99 for 2005, ANSI 92 for 2000).
  • Avoid cursors as much as possible. While there are some good uses for them, they usually become a crutch used instead of learning how to manipulate the data in sets.
  • Remember that transactions and transaction processing within the procedure should be kept as small as practicable. For example, if you need to do an insert and read some data, separate the read from the insert.
  • Minimize the dependence in your code on constructs such as table variables or temporary tables. Again, these very useful tools frequently substitute a piece-meal approach for one that emphasizes sets of data.
  • When writing stored procedures, things as simple as making the data type of parameters match the data type of the columns being referenced can make a big difference.
  • Table variables, table parameters, and multi-statement table-valued functions do not have statistics, which makes it difficult for the optimizer to create good plans for queries that use them. Use these objects very judiciously.
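As an example of the parameter-matching point above: if a parameter's type doesn't match the column it is compared to, SQL Server may apply an implicit conversion to the column, which can prevent an index seek. The schema here is hypothetical; the key detail is that the parameter type mirrors the column definition:

```sql
-- Assume dbo.Customers.AccountNumber is VARCHAR(10) and indexed.
-- Declaring the parameter as NVARCHAR(10) instead would force an
-- implicit conversion and can turn a seek into a scan.
CREATE PROCEDURE dbo.GetCustomerByAccount
    @AccountNumber VARCHAR(10)  -- matches the column's data type exactly
AS
SET NOCOUNT ON;

SELECT CustomerID, CustomerName
FROM dbo.Customers
WHERE AccountNumber = @AccountNumber;
```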

Once you've identified a stored procedure or query as problematic, you'll want to tune it. This is where a query plan comes into play. SQL Server can display either a graphical plan (estimated and actual) or an XML plan, and there are advantages to each type. The graphical plan is a quick and easy way to peruse the actions a query has put the system through. Especially useful is the ability to display the estimated query plan, which can identify problems without having to actually execute the query. XML query plans present data that can be searched or parsed through code, allowing for some automation of tuning if you desire it. Reading and understanding execution plans is a complex process; I suggest reading my book on that topic.
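Both kinds of plan can be requested from any session, not just through the Management Studio toolbar. A sketch, using an illustrative query; note that SHOWPLAN_XML returns the estimated plan without executing the statement, while STATISTICS XML executes it and returns the actual plan:

```sql
SET SHOWPLAN_XML ON;    -- estimated plan only; the query does not run
GO
SELECT CustomerID, OrderDate
FROM dbo.Orders
WHERE OrderID = 42;
GO
SET SHOWPLAN_XML OFF;
GO

SET STATISTICS XML ON;  -- runs the query and returns the actual plan
GO
SELECT CustomerID, OrderDate
FROM dbo.Orders
WHERE OrderID = 42;
GO
SET STATISTICS XML OFF;
GO
```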

Client Access

All the good work done within SQL Server can be undone by a poorly written piece of client code. More often than not, the DBA can't write that code for the company. However, you can monitor, mostly through Profiler, what that code is doing to your system and how it is doing it. This is where you can make a difference.

If transactions are managed by code instead of by the database, observe these transactions to ensure that they are as short as possible and that any data access is isolated from other client side processing.

A classic error is to open a database transaction and then wait for user input. Be sure that the application is using the procedures you provided in the manner in which they were meant. You may have a large query that returns a lot of data and is called frequently; talking to the developers, you may find that they only need one or two columns from the result set. Providing them with a modified or new procedure can save a lot of processing time.

Be sure that the latest ADO.NET is in use. Be sure that the connections being made use the settings you expect, since things such as the connection timeout can be different from the database timeout, resulting in odd behavior within your application. Again, moving only the data you need, and only when you need it, should be a working maxim.

Testing Performance

Instead of "discovering" problems only when the application hits production, load the database up with a realistic volume of data and run tests against it. While it is time consuming, database testing really is that straightforward. First, you need a good understanding of the data: just how many rows of what type of data are you likely to see? Then you need a good understanding of your data model, so that you know that "x" parents have so many children on various tables of type "y". This isn't a technical issue; it's primarily a series of questions put to your business.

Once this is defined, you either have to define your transactions, or put trace events (not extended events, unfortunately) to work capturing a set of transactions from the application. Obviously the second approach is preferable since the data will be much more accurate.
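A captured trace file can be pulled into a table for analysis with the fn_trace_gettable function. The file path here is a placeholder, and the event class numbers shown are the standard IDs for RPC:Completed and SQL:BatchCompleted:

```sql
-- Load a saved trace file into a table so the captured workload
-- can be queried, aggregated, and used to drive testing.
SELECT TextData, Duration, Reads, Writes, CPU, StartTime
INTO dbo.TraceResults
FROM sys.fn_trace_gettable('C:\Traces\AppCapture.trc', DEFAULT)
WHERE EventClass IN (10, 12);  -- RPC:Completed, SQL:BatchCompleted
```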

You can then use the Distributed Replay tool from Microsoft, which is included as part of SQL Server 2012. It allows you to connect multiple machines to run the load against a server. You can control how the replay runs: executing transactions as fast as possible, letting them run in a real-time manner, or introducing your own customized wait times. You can then gather metrics, using the tools you already have, to understand how your system is performing.

If nothing else, open up each query and look at the query plan to identify areas that might cause trouble down the road. A table scan hitting 75 rows doesn't really affect performance, but when it's hitting 75,000 or 75 million rows, you'll probably notice. While running through all these tests, use Performance Monitor and Profiler to capture the data. This can then be used over time to track how well your performance tuning is working out.

Suggested Reading

All of this only scratches the surface of what may be required to accurately assess and rectify performance issues in SQL Server. For more information consult books such as:

  • "Inside SQL Server 2000" by Kalen Delaney
  • "Inside SQL Server 2005: The Storage Engine" by Kalen Delaney
  • "Inside SQL Server 2005: TSQL Querying" by Itzik Ben-Gan
  • "Inside SQL Server 2005: TSQL Programming" by Itzik Ben-Gan

There are also many fantastic web sites in addition to Simple-Talk.

Grant Fritchey

Author profile:

Grant Fritchey, SQL Server MVP, works for Red Gate Software as Product Evangelist. In his time as a DBA and developer, he has worked at three failed dot-coms, a major consulting company, a global bank and an international insurance and engineering company. Grant volunteers for the Professional Association of SQL Server Users (PASS). He is the author of the books SQL Server Execution Plans (Simple-Talk) and SQL Server 2008 Query Performance Tuning Distilled (Apress). He is one of the founding officers of the Southern New England SQL Server Users Group (SNESSUG) and its current president. He earned the nickname "The Scary DBA." He even has an official name plate, and displays it proudly.



Subject: That's cleared things up.
Posted by: Anonymous (not signed in)
Posted on: Friday, June 08, 2007 at 9:21 AM
Message: As you said:
"when you first open performance it will have three basic counters your current machine up and running. assuming that not logged onto the server in order to add counters need to type in the machine name. this will load the counters for that so panic if you see only the standard set of counters and not sql server specific counters. permon can be used one of two ways."

Permon man, permon.

Subject: Forgive us our typos,
Posted by: Granted (view profile)
Posted on: Friday, June 08, 2007 at 10:01 AM
Message: As we forgive those who typo against us...

I'll see if Tony can fix it.

Subject: typo fixed
Posted by: Tony Davis (view profile)
Posted on: Monday, June 11, 2007 at 6:03 AM
Message: Apologies for this lapse. The offending "permon" typo has now been fixed. I hope it didn't spoil your enjoyment of the article too much.

Cheers,

Tony (Ed.)

Subject: Excellent summary
Posted by: Jacob Hamacher (not signed in)
Posted on: Wednesday, June 13, 2007 at 2:39 AM
Message: Thank you for an excellent article!

This has been one of the most informative articles on performance tuning on SQL Server I have read. I appreciate high signal to noise ratio. Adding it to my bookmarks, and will return here many times.

/jacob

Subject: Good Tips!
Posted by: Dyego Fernandes (not signed in)
Posted on: Wednesday, June 13, 2007 at 7:50 AM
Message: Thanks for the tips. I have a doubt: When a run my SQL Server 2000 Standard in a Windows 2003 Standard i can use the AWE to improve best use off memory. Right?
Please, if you can, contact me: dyegofernandes@centauronet.com.br

Thanks.

Subject: AWE
Posted by: Granted (view profile)
Posted on: Wednesday, June 13, 2007 at 11:18 AM
Message: For details on the proper use of AWE, I'd consult the BOL & Microsoft. You can also take a look at a couple of articles over on SQL Server Performance.com
http://www.sql-server-performance.com/awe_memory.asp
and
http://www.sql-server-performance.com/hardware_tuning.asp
There are other resources out there.

Subject: AWE
Posted by: n00b (view profile)
Posted on: Thursday, June 14, 2007 at 9:22 AM
Message: Good reference for AWE:

http://www.sql-server-performance.com/awe_memory.asp

Subject: need for sql script
Posted by: Anonymous (not signed in)
Posted on: Thursday, June 14, 2007 at 2:00 PM
Message: i want the sql script that is used for processing sql 2000

Subject: question on profiler loading permon log?!
Posted by: randyvol (view profile)
Posted on: Friday, June 15, 2007 at 11:50 AM
Message: snippet from article, Evaluating profiler data...
"You now have the ability to load a performance monitor log file and a profiler trace file into the Profiler."
OK, just so happens I've cutover to a new production SQL Server 2005 system and took a baseline measure yesterday for 12 hours. Saved as a tabbed delimited file from perfmon (.tsv) file.

I see no way to load this file up in profiler and examine it. (Nor did I see an option to load in a 'standard' perfmon binary file).

Is this information incorrect or am I missing something?

Thanks...

Subject: 2005 Profiler
Posted by: Granted (view profile)
Posted on: Friday, June 15, 2007 at 2:14 PM
Message: I didn't post the details on how to do this in the article because it just didn't fit with the 20,000 foot focus.

First off, you have to have saved the trace out to a file or a database. Open this saved file/database in Profiler. Then click on the "File" menu choice and down near the bottom you'll see "Import Performance Data..." Click on that and you'll be able to browse to a perfmon file and import it.

Here's a link that I copied from BOL:
ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/udb9/html/e1b3072c-8daf-49a7-9895-c8cccd2adb95.htm

Subject: index fragmentation
Posted by: Steve Morris (not signed in)
Posted on: Sunday, July 01, 2007 at 11:30 AM
Message: For completeness you might like to a note about index fragmentation as in my experience the failure to do any index maintenance is one of the most common causes of poor performance

Subject: Re: Index Fragmentation
Posted by: Granted (view profile)
Posted on: Wednesday, July 11, 2007 at 1:58 PM
Message: You're absolutely right. I knew there were a few things I missed. That's one of them. Thanks for pointing it out. If I get the chance to update this, I'll definately add a section on that topic.

Subject: good one
Posted by: Anonymous (not signed in)
Posted on: Sunday, July 15, 2007 at 2:51 AM
Message: very useful for newbies, sure this would help how to troubleshoot performance issues..

thanks

Subject: profiler
Posted by: Sandeep (not signed in)
Posted on: Thursday, November 22, 2007 at 6:09 AM
Message: How to Capture Profiler data in a Table .Request to give the script if any to store the Profiler values into table??
Thanks in advance.

Subject: re: good one
Posted by: Granted (view profile)
Posted on: Monday, December 10, 2007 at 9:42 AM
Message: Glad you liked it. It was really hard to summarize and simplify this topic. I don't think it was entirely successful, but I tried.

Subject: re: profiler
Posted by: Granted (view profile)
Posted on: Monday, December 10, 2007 at 9:52 AM
Message: The thing to do is to write the trace out to a file then use the function fn_trace_gettable to load it.

You can use the gui to write directly to a table, but that's a serious load to put on a system. You're either placing load on the system that you're trying to trace by placing the load there or you're slowing down the trace process by sending everything over the network and causing it to wait on another database for writes. Much better to use the file system and then import it later.

Subject: Best practices summary
Posted by: Héctor García Salas (not signed in)
Posted on: Wednesday, January 09, 2008 at 6:52 PM
Message: Quite a good article for beginners... It summarizes most of the basic practices. I would've add Page Life Expectancy as a basic counter in 2005 and OS processor Privileged time. Also, I believe it's better to keep the default configuration of max degree of parallelism due to CXPACKET waits: reconfigure once you know what your're handling. As already pointed out, index defrag is a common issue of degraded performance.
All in all, excellent reading.

Subject: re: Best practices summary
Posted by: Granted (view profile)
Posted on: Thursday, February 07, 2008 at 10:10 AM
Message: Very good points. Thanks for bringing them up.

I haven't run into CXPACKET waits issues, but I sure have hit lots of procedures that are converting to parallelism at cost, not benefit. Some of these things are really very dependent on circumstance.

Subject: Disk IO and SAN Queue
Posted by: WJB (not signed in)
Posted on: Tuesday, May 14, 2013 at 3:00 PM
Message: Comments are a little old but Simple Talk reposted this article recently....

"Further, if you are in a SAN, this number will only be meaningful if you know how many disks are configured for the drive on your system."

Could you elaborate at the 20,000 ft level? :-)
Would a RAID 0, 5 or 10 also be of interest?

Subject: Disk IO & SAN
Posted by: Grant Fritchey (view profile)
Posted on: Thursday, May 16, 2013 at 5:14 AM
Message: It's really down to how many disks you have, So, for a RAID 0, there are two. So you can actually have a higher value without worry. On a SAN, until the SAN admin tells you, you don't know if the LUN is a single disk or 100. So the number is meaningless until you know that.
