Feodor Georgiev

The default trace in SQL Server - the power of performance and security auditing

14 March 2011

Since the introduction of SQL Server 2005, a simple, lightweight trace has been left running by default on every SQL Server instance. It provides some very valuable information for the DBA about the running server, but it isn't well documented. Feodor reveals many of the secrets of this facility and shows how to get reports from it.

SQL Server provides us with a variety of tools for auditing. All of them have their advantages and pitfalls. The default trace, introduced in SQL Server 2005, has the great advantage of being switched on by default, and is usually already there to use. It provides comprehensive information about changes in the system.

Firstly, let’s start by answering some basic questions:

What is the default trace? The default trace is enabled by default in SQL Server and is a lightweight trace which, by default, consists of five trace files (.trc) located in the SQL Server installation directory. The files are rolled over as time passes.

How do we know that the default trace is running? We can run the following script in order to find out if the default trace is running:

SELECT * FROM sys.configurations WHERE configuration_id = 1568 ;
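As a cross-check (an extra query, not part of the original set), the sys.traces catalog view also shows whether the default trace is running, along with its current file path and rollover settings:

```sql
-- Show the default trace's status, current file and rollover settings.
SELECT  id ,
        status ,     -- 1 = running
        path ,       -- current .trc file
        max_size ,   -- per-file size limit in MB
        max_files    -- number of rollover files kept
FROM    sys.traces
WHERE   is_default = 1 ;
```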

If it is not enabled, how do we enable it? We can run this script in order to enable the default trace:

sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'default trace enabled', 1;
GO
RECONFIGURE;
GO

What is logged in the Default Trace? If we open the Default trace file in Profiler and look at the trace definition we will see that events in 6 categories are captured: Database, Errors and Warnings, Full-Text, Objects, Security Audit and Server. Also, all available columns are selected for every sub-event.


Figure 1: How the default trace definition looks in Profiler

So, how can we benefit from each audited category? In the following sections I will explain briefly what each category means, as well as some of the sub-events, and will provide essential scripts for auditing the events in the Default Trace.

Database Events

Let’s start with the first event category: Database. As we can see, the sub-events are pretty much self-explanatory – the growth and shrinkage of data and log files, together with changes in mirroring status. It is important to monitor file growths and shrinkages; it would be a vast topic to explain why, but in a nutshell, it is because of possible performance issues. Every time a file is grown or shrunk, SQL Server will halt and wait for the disk system to make the file available again. And halt, in this case, means halt: no transactions are processed until the action is completed.

These are the database events that are monitored:

  • Data file auto grow
  • Data file auto shrink
  • Database mirroring status change
  • Log file auto grow
  • Log file auto shrink

Here is a script which will list the data file growths and shrinkages:

SELECT  TE.name AS [EventName] ,
        T.DatabaseName ,
        t.DatabaseID ,
        t.NTDomainName ,
        t.ApplicationName ,
        t.LoginName ,
        t.SPID ,
        t.Duration ,
        t.StartTime ,
        t.EndTime
FROM    sys.fn_trace_gettable(CONVERT(VARCHAR(150), ( SELECT TOP 1
                                                              f.[value]
                                                      FROM    sys.fn_trace_getinfo(NULL) f
                                                      WHERE   f.property = 2
                                                    )), DEFAULT) T
        JOIN sys.trace_events TE ON T.EventClass = TE.trace_event_id
WHERE   te.name = 'Data File Auto Grow'
        OR te.name = 'Data File Auto Shrink'
ORDER BY t.StartTime ;

The output of the script will not tell you why the database grew, but will show you how long it took to grow. (Be careful about the value of the Duration column, since it might be in milliseconds or in microseconds, depending on the SQL Server version).

Also, I would recommend extending this query to search for growth events which took longer than, say, a second (this is just a guideline).
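A minimal sketch of such an extension (assuming Duration is reported in microseconds – verify the unit on your version first, since it differs between SQL Server builds): add a filter on the Duration column.

```sql
-- Growth events that took longer than one second (1,000,000 microseconds).
-- Assumption: Duration is in microseconds on this build; check before relying on it.
SELECT  TE.name AS [EventName] ,
        t.DatabaseName ,
        t.Duration ,
        t.StartTime
FROM    sys.fn_trace_gettable(CONVERT(VARCHAR(150), ( SELECT TOP 1
                                                              f.[value]
                                                      FROM    sys.fn_trace_getinfo(NULL) f
                                                      WHERE   f.property = 2
                                                    )), DEFAULT) t
        JOIN sys.trace_events TE ON t.EventClass = TE.trace_event_id
WHERE   ( TE.name = 'Data File Auto Grow'
          OR TE.name = 'Log File Auto Grow' )
        AND t.Duration > 1000000
ORDER BY t.Duration DESC ;
```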

Here is another query which will return the log growths and log shrinking.

SELECT  TE.name AS [EventName] ,
        T.DatabaseName ,
        t.DatabaseID ,
        t.NTDomainName ,
        t.ApplicationName ,
        t.LoginName ,
        t.SPID ,
        t.Duration ,
        t.StartTime ,
        t.EndTime
FROM    sys.fn_trace_gettable(CONVERT(VARCHAR(150), ( SELECT TOP 1
                                                              f.[value]
                                                      FROM    sys.fn_trace_getinfo(NULL) f
                                                      WHERE   f.property = 2
                                                    )), DEFAULT) T
        JOIN sys.trace_events TE ON T.EventClass = TE.trace_event_id
WHERE   te.name = 'Log File Auto Grow'
        OR te.name = 'Log File Auto Shrink'
ORDER BY t.StartTime ;

Also keep in mind that the query will not tell you whether your junior DBA has been shrinking the data and log files. In the default trace we can find only the AUTO growth and shrink events, not the ones triggered by the ALTER DATABASE statement.

Errors and Warnings

Now let’s move on to the next section of the events: the Errors and Warnings. As we can see, there is an abundance of information here.

The Errorlog sub-event occurs when something is written to the SQL Server error log; Hash and Sort warnings generally happen when a sort or a hash match operation spills to disk (and since the disk subsystem is the slowest component, our queries become much slower). Missing column statistics events will occur only when the ‘Auto create statistics’ option is set to off; in this case SQL Server indicates that it might have chosen a bad execution plan. The Missing join predicate event occurs when two tables are joined without a join predicate and both tables have more than one row; this can result in long-running queries or unexpected results.

These categories of errors and warnings are:

  • Errorlog
  • Hash warning
  • Missing Column Statistics
  • Missing Join Predicate
  • Sort Warning

Here is a script which will outline the errors:

SELECT  TE.name AS [EventName] ,
        T.DatabaseName ,
        t.DatabaseID ,
        t.NTDomainName ,
        t.ApplicationName ,
        t.LoginName ,
        t.SPID ,
        t.StartTime ,
        t.TextData ,
        t.Severity ,
        t.Error
FROM    sys.fn_trace_gettable(CONVERT(VARCHAR(150), ( SELECT TOP 1
                                                              f.[value]
                                                      FROM    sys.fn_trace_getinfo(NULL) f
                                                      WHERE   f.property = 2
                                                    )), DEFAULT) T
        JOIN sys.trace_events TE ON T.EventClass = TE.trace_event_id
WHERE   te.name = 'ErrorLog'

Note that this script has neither EndTime nor Duration columns, for obvious reasons.

Here is another script which will outline the sort and hash warnings:

SELECT  TE.name AS [EventName] ,
        v.subclass_name ,
        T.DatabaseName ,
        t.DatabaseID ,
        t.NTDomainName ,
        t.ApplicationName ,
        t.LoginName ,
        t.SPID ,
        t.StartTime
FROM    sys.fn_trace_gettable(CONVERT(VARCHAR(150), ( SELECT TOP 1
                                                              f.[value]
                                                      FROM    sys.fn_trace_getinfo(NULL) f
                                                      WHERE   f.property = 2
                                                    )), DEFAULT) T
        JOIN sys.trace_events TE ON T.EventClass = TE.trace_event_id
        JOIN sys.trace_subclass_values v ON v.trace_event_id = TE.trace_event_id
                                            AND v.subclass_value = t.EventSubClass
WHERE   te.name = 'Hash Warning'
        OR te.name = 'Sort Warnings'

… and finally, one more script which outlines the missing statistics and join predicates.

SELECT  TE.name AS [EventName] ,
        T.DatabaseName ,
        t.DatabaseID ,
        t.NTDomainName ,
        t.ApplicationName ,
        t.LoginName ,
        t.SPID ,
        t.StartTime
FROM    sys.fn_trace_gettable(CONVERT(VARCHAR(150), ( SELECT TOP 1
                                                              f.[value]
                                                      FROM    sys.fn_trace_getinfo(NULL) f
                                                      WHERE   f.property = 2
                                                    )), DEFAULT) T
        JOIN sys.trace_events TE ON T.EventClass = TE.trace_event_id
WHERE   te.name = 'Missing Column Statistics'
        OR te.name = 'Missing Join Predicate'

The Full Text Events

The Full-Text event category shows information about Full-Text population events. If a population is aborted, you should look in the event log for a more detailed message; the FT Crawl Started sub-event indicates that the population request has been picked up by the workers; FT Crawl Stopped indicates either a successful completion or a stop caused by an error.

Full-Text events are...

  • FT Crawl Aborted
  • FT Crawl Started
  • FT Crawl Stopped

Here is a script which will return the Full text events:

SELECT  TE.name AS [EventName] ,
        DB_NAME(t.DatabaseID) AS DatabaseName ,
        t.DatabaseID ,
        t.NTDomainName ,
        t.ApplicationName ,
        t.LoginName ,
        t.SPID ,
        t.StartTime ,
        t.IsSystem
FROM    sys.fn_trace_gettable(CONVERT(VARCHAR(150), ( SELECT TOP 1
                                                              f.[value]
                                                      FROM    sys.fn_trace_getinfo(NULL) f
                                                      WHERE   f.property = 2
                                                    )), DEFAULT) T
        JOIN sys.trace_events TE ON T.EventClass = TE.trace_event_id
WHERE   te.name = 'FT:Crawl Started'
        OR te.name = 'FT:Crawl Aborted'
        OR te.name = 'FT:Crawl Stopped'

Notice that the records in the DatabaseName column are NULL for these events, so we have to get the database name via the DB_NAME() function.

Object events

Here is where the real detective work starts: changes to objects. In this category we have altered, created and deleted objects, and this includes anything from index rebuilds and statistics updates to database deletion.

Object events include:

  • Object Altered
  • Object Created
  • Object Deleted

Here is a script which will give you the most recently manipulated objects in your databases.

SELECT  TE.name ,
        v.subclass_name ,
        DB_NAME(t.DatabaseId) AS DBName ,
        T.NTDomainName ,
        t.NTUserName ,
        t.HostName ,
        t.ApplicationName ,
        t.LoginName ,
        t.Duration ,
        t.StartTime ,
        t.ObjectName ,
        CASE t.ObjectType
          WHEN 8259 THEN 'Check Constraint'
          WHEN 8260 THEN 'Default (constraint or standalone)'
          WHEN 8262 THEN 'Foreign-key Constraint'
          WHEN 8272 THEN 'Stored Procedure'
          WHEN 8274 THEN 'Rule'
          WHEN 8275 THEN 'System Table'
          WHEN 8276 THEN 'Trigger on Server'
          WHEN 8277 THEN '(User-defined) Table'
          WHEN 8278 THEN 'View'
          WHEN 8280 THEN 'Extended Stored Procedure'
          WHEN 16724 THEN 'CLR Trigger'
          WHEN 16964 THEN 'Database'
          WHEN 16975 THEN 'Object'
          WHEN 17222 THEN 'FullText Catalog'
          WHEN 17232 THEN 'CLR Stored Procedure'
          WHEN 17235 THEN 'Schema'
          WHEN 17475 THEN 'Credential'
          WHEN 17491 THEN 'DDL Event'
          WHEN 17741 THEN 'Management Event'
          WHEN 17747 THEN 'Security Event'
          WHEN 17749 THEN 'User Event'
          WHEN 17985 THEN 'CLR Aggregate Function'
          WHEN 17993 THEN 'Inline Table-valued SQL Function'
          WHEN 18000 THEN 'Partition Function'
          WHEN 18002 THEN 'Replication Filter Procedure'
          WHEN 18004 THEN 'Table-valued SQL Function'
          WHEN 18259 THEN 'Server Role'
          WHEN 18263 THEN 'Microsoft Windows Group'
          WHEN 19265 THEN 'Asymmetric Key'
          WHEN 19277 THEN 'Master Key'
          WHEN 19280 THEN 'Primary Key'
          WHEN 19283 THEN 'ObfusKey'
          WHEN 19521 THEN 'Asymmetric Key Login'
          WHEN 19523 THEN 'Certificate Login'
          WHEN 19538 THEN 'Role'
          WHEN 19539 THEN 'SQL Login'
          WHEN 19543 THEN 'Windows Login'
          WHEN 20034 THEN 'Remote Service Binding'
          WHEN 20036 THEN 'Event Notification on Database'
          WHEN 20037 THEN 'Event Notification'
          WHEN 20038 THEN 'Scalar SQL Function'
          WHEN 20047 THEN 'Event Notification on Object'
          WHEN 20051 THEN 'Synonym'
          WHEN 20549 THEN 'End Point'
          WHEN 20801 THEN 'Adhoc Queries which may be cached'
          WHEN 20816 THEN 'Prepared Queries which may be cached'
          WHEN 20819 THEN 'Service Broker Service Queue'
          WHEN 20821 THEN 'Unique Constraint'
          WHEN 21057 THEN 'Application Role'
          WHEN 21059 THEN 'Certificate'
          WHEN 21075 THEN 'Server'
          WHEN 21076 THEN 'Transact-SQL Trigger'
          WHEN 21313 THEN 'Assembly'
          WHEN 21318 THEN 'CLR Scalar Function'
          WHEN 21321 THEN 'Inline scalar SQL Function'
          WHEN 21328 THEN 'Partition Scheme'
          WHEN 21333 THEN 'User'
          WHEN 21571 THEN 'Service Broker Service Contract'
          WHEN 21572 THEN 'Trigger on Database'
          WHEN 21574 THEN 'CLR Table-valued Function'
          WHEN 21577 THEN 'Internal Table (For example, XML Node Table, Queue Table.)'
          WHEN 21581 THEN 'Service Broker Message Type'
          WHEN 21586 THEN 'Service Broker Route'
          WHEN 21587 THEN 'Statistics'
          WHEN 21825 THEN 'User'
          WHEN 21827 THEN 'User'
          WHEN 21831 THEN 'User'
          WHEN 21843 THEN 'User'
          WHEN 21847 THEN 'User'
          WHEN 22099 THEN 'Service Broker Service'
          WHEN 22601 THEN 'Index'
          WHEN 22604 THEN 'Certificate Login'
          WHEN 22611 THEN 'XMLSchema'
          WHEN 22868 THEN 'Type'
          ELSE 'Hmmm???'
        END AS ObjectType
FROM    [fn_trace_gettable](CONVERT(VARCHAR(150), ( SELECT TOP 1
                                                            value
                                                    FROM    [fn_trace_getinfo](NULL)
                                                    WHERE   [property] = 2
                                                  )), DEFAULT) T
        JOIN sys.trace_events TE ON T.EventClass = TE.trace_event_id
        JOIN sys.trace_subclass_values v ON v.trace_event_id = TE.trace_event_id
                                            AND v.subclass_value = t.EventSubClass
WHERE   TE.name IN ( 'Object:Created', 'Object:Deleted', 'Object:Altered' )
        -- filter out statistics objects created by SQL Server
        AND t.ObjectType NOT IN ( 21587 )
        -- filter out tempdb objects
        AND DatabaseID <> 2
        -- get only events in the past 24 hours
        AND StartTime > DATEADD(HH, -24, GETDATE())
ORDER BY t.StartTime DESC ;

Keep in mind that SQL Server by default has five trace files, 20 MB each, and there is no known supported method of changing this. If you have a busy system, the trace files may roll over far too fast (even within hours) and you may not be able to catch some of the changes. This article will not discuss workarounds for efficient SQL Server tracing in detail; I will do this in a separate article later on.
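To get a feel for how quickly your default trace rolls over (an extra check, not one of the original scripts), sys.traces exposes the start time and event count of the current file:

```sql
-- How long has the current default trace file been collecting, and how busy is it?
-- A short gap between start_time and last_event_time suggests fast rollover.
SELECT  start_time ,
        last_event_time ,
        event_count ,
        max_size AS max_size_mb
FROM    sys.traces
WHERE   is_default = 1 ;
```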

Security Audit Events

Another part of the default trace is the Security Audit. As you can see from the event list below, this is one of the richest parts of the default trace. In general, what this event group tells us is what significant security events are occurring in our system.

Security events include...

  • Audit Add DB user event
  • Audit Add login to server role event
  • Audit Add Member to DB role event
  • Audit Add Role event
  • Audit Add login event
  • Audit Backup/Restore event
  • Audit Change Database owner
  • Audit DBCC event
  • Audit Database Scope GDR event (Grant, Deny, Revoke)
  • Audit Login Change Property event
  • Audit Login Failed
  • Audit Login GDR event
  • Audit Schema Object GDR event
  • Audit Schema Object Take Ownership
  • Audit Server Starts and Stops

Let’s take it one step at a time and:

  • create a SQL Server login
  • assign read permissions to this user in one of our databases.
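To generate the events, the two steps above could look like this. This is a hypothetical sketch: the login name, user name, password and target database are made up for illustration, so substitute your own.

```sql
-- Hypothetical example objects; adjust names, password and database to taste.
USE [master];
CREATE LOGIN [TestAuditLogin] WITH PASSWORD = 'Str0ng!Passw0rd';
GO
USE [AdventureWorks];   -- any user database
CREATE USER [TestAuditUser] FOR LOGIN [TestAuditLogin];
-- grant read permissions by adding the user to the db_datareader role
EXEC sp_addrolemember 'db_datareader', 'TestAuditUser';
GO
```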

By running the following query we will be able to track what users have been created on our SQL Server instance:

SELECT  TE.name AS [EventName] ,
        v.subclass_name ,
        T.DatabaseName ,
        t.DatabaseID ,
        t.NTDomainName ,
        t.ApplicationName ,
        t.LoginName ,
        t.SPID ,
        t.StartTime ,
        t.RoleName ,
        t.TargetUserName ,
        t.TargetLoginName ,
        t.SessionLoginName
FROM    sys.fn_trace_gettable(CONVERT(VARCHAR(150), ( SELECT TOP 1
                                                              f.[value]
                                                      FROM    sys.fn_trace_getinfo(NULL) f
                                                      WHERE   f.property = 2
                                                    )), DEFAULT) T
        JOIN sys.trace_events TE ON T.EventClass = TE.trace_event_id
        JOIN sys.trace_subclass_values v ON v.trace_event_id = TE.trace_event_id
                                            AND v.subclass_value = t.EventSubClass
WHERE   te.name IN ( 'Audit Addlogin Event', 'Audit Add DB User Event',
                     'Audit Add Member to DB Role Event' )
        AND v.subclass_name IN ( 'add', 'Grant database access' )

Here is how the result of the query looks after we have created one login and given it read permission to one database:


As we can see, the first row announces the creation of the login in the master database, together with the creator (the SessionLoginName column) and the created login (the TargetLoginName column).

The next two rows are as follows: creating the database user and granting it database access, and last – adding the database user to a DB role.

Keep in mind that if you add the user to more than one role and if you give the login access to more than one database, then you will see several rows noting every event in your default trace.

Now let’s audit the dropped users and logins by running the following query:

SELECT  TE.name AS [EventName] ,
        v.subclass_name ,
        T.DatabaseName ,
        t.DatabaseID ,
        t.NTDomainName ,
        t.ApplicationName ,
        t.LoginName ,
        t.SPID ,
        t.StartTime ,
        t.RoleName ,
        t.TargetUserName ,
        t.TargetLoginName ,
        t.SessionLoginName
FROM    sys.fn_trace_gettable(CONVERT(VARCHAR(150), ( SELECT TOP 1
                                                              f.[value]
                                                      FROM    sys.fn_trace_getinfo(NULL) f
                                                      WHERE   f.property = 2
                                                    )), DEFAULT) T
        JOIN sys.trace_events TE ON T.EventClass = TE.trace_event_id
        JOIN sys.trace_subclass_values v ON v.trace_event_id = TE.trace_event_id
                                            AND v.subclass_value = t.EventSubClass
WHERE   te.name IN ( 'Audit Addlogin Event', 'Audit Add DB User Event',
                     'Audit Add Member to DB Role Event' )
        AND v.subclass_name IN ( 'Drop', 'Revoke database access' )

As you can see, the event name is the same for both creating and dropping logins, i.e. Audit Addlogin Event; the subclass column value is what defines the difference. In the case of the creation of a login the subclass is ‘Add’, and in the case of deletion it is ‘Drop’.

In fact, if we drop the database user and the SQL login we created earlier, this query will return two rows – one for each event together with the dropped user and login names and the login name of the user who deleted the user and the login.

The following query will give us all the failed logins contained in our default trace file:

SELECT  TE.name AS [EventName] ,
        v.subclass_name ,
        T.DatabaseName ,
        t.DatabaseID ,
        t.NTDomainName ,
        t.ApplicationName ,
        t.LoginName ,
        t.SPID ,
        t.StartTime ,
        t.SessionLoginName
FROM    sys.fn_trace_gettable(CONVERT(VARCHAR(150), ( SELECT TOP 1
                                                              f.[value]
                                                      FROM    sys.fn_trace_getinfo(NULL) f
                                                      WHERE   f.property = 2
                                                    )), DEFAULT) T
        JOIN sys.trace_events TE ON T.EventClass = TE.trace_event_id
        JOIN sys.trace_subclass_values v ON v.trace_event_id = TE.trace_event_id
                                            AND v.subclass_value = t.EventSubClass
WHERE   te.name IN ( 'Audit Login Failed' )

There are quite a few events in the Security Audit class, and for the sake of compactness I will turn your attention to only one more event, namely ‘Audit Server Starts and Stops’.

The following query will give you only the server start event:

SELECT  TE.name AS [EventName] ,
        v.subclass_name ,
        T.DatabaseName ,
        t.DatabaseID ,
        t.NTDomainName ,
        t.ApplicationName ,
        t.LoginName ,
        t.SPID ,
        t.StartTime ,
        t.SessionLoginName
FROM    sys.fn_trace_gettable(CONVERT(VARCHAR(150), ( SELECT TOP 1
                                                              f.[value]
                                                      FROM    sys.fn_trace_getinfo(NULL) f
                                                      WHERE   f.property = 2
                                                    )), DEFAULT) T
        JOIN sys.trace_events TE ON T.EventClass = TE.trace_event_id
        JOIN sys.trace_subclass_values v ON v.trace_event_id = TE.trace_event_id
                                            AND v.subclass_value = t.EventSubClass
WHERE   te.name IN ( 'Audit Server Starts and Stops' )

Yes, you read it correctly: the above query will return only the Server Start event, and never the Server Stop event. Here is the explanation: as I mentioned earlier, SQL Server’s default trace consists of five trace files in total, 20 MB each. These five trace files are rotated (‘refurbished’ or ‘recycled’, if you like) upon several conditions: when the instance starts, or when the file size reaches 20 MB.

Now, let’s think about this for a second: the queries I have listed so far in this article return results only from the current trace file, i.e. the most recent one. Further, since the default trace file is rolled over every time the instance starts, the event indicating the Server Stop will remain in the previous default trace file. Put simply, after the SQL Server service restarts, our current default trace file will have the Server Start event as its first row.

If you really wish to know when your SQL Server instance was stopped, you will need to include at least the contents of the previous file; in fact, we can include the contents of the other four default trace files in our result set. We can do this by changing the way we call sys.fn_trace_gettable so that it appends all default trace files. This function accepts two parameters: the file location and name, and the number of files. If we pass the location and name of the oldest default trace file as the first parameter, then sys.fn_trace_gettable will append the newer ones, as long as we specify an appropriate value for the second parameter (the number of files). If we specify the newest file as a parameter to the function (as is the case in all scripts in this article), then the older files will not be appended. As the filename contains the index of the file, and the index increments as each new file is created, it is easy to calculate the name of the oldest file.

To find the exact file location of the default trace files, you just need to execute the following query:

SELECT  REVERSE(SUBSTRING(REVERSE(path), CHARINDEX('\', REVERSE(path)), 256)) AS DefaultTraceLocation
FROM    sys.traces
WHERE   is_default = 1 ;
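Putting the two ideas together, here is a sketch (an addition of mine, not one of the original scripts – verify the behavior on your instance) that derives the default trace folder and reads all available rollover files by passing the base file name log.trc to sys.fn_trace_gettable with DEFAULT as the file count:

```sql
-- Read events from all available default trace files, not just the newest one.
-- Assumption: the rollover files share the log_N.trc naming scheme in one folder,
-- so passing '<folder>\log.trc' makes the function append all of them.
DECLARE @TraceFile NVARCHAR(260);

SELECT  @TraceFile = REVERSE(SUBSTRING(REVERSE(path),
                             CHARINDEX('\', REVERSE(path)), 260)) + 'log.trc'
FROM    sys.traces
WHERE   is_default = 1 ;

SELECT  TE.name AS [EventName] ,
        t.StartTime
FROM    sys.fn_trace_gettable(@TraceFile, DEFAULT) t
        JOIN sys.trace_events TE ON t.EventClass = TE.trace_event_id
WHERE   TE.name = 'Audit Server Starts and Stops'
ORDER BY t.StartTime ;
```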

Server Memory Change Events

And now, let’s move on to the last event class in our default trace: the Server class. It contains only one event – Server Memory Change.

The following query will tell us when the memory use has changed:

SELECT  TE.name AS [EventName] ,
        v.subclass_name ,
        t.IsSystem
FROM    sys.fn_trace_gettable(CONVERT(VARCHAR(150), ( SELECT TOP 1
                                                              f.[value]
                                                      FROM    sys.fn_trace_getinfo(NULL) f
                                                      WHERE   f.property = 2
                                                    )), DEFAULT) T
        JOIN sys.trace_events TE ON T.EventClass = TE.trace_event_id
        JOIN sys.trace_subclass_values v ON v.trace_event_id = TE.trace_event_id
                                            AND v.subclass_value = t.EventSubClass
WHERE   te.name IN ( 'Server Memory Change' )

The event subclass indicates if the memory has increased or decreased.

Conclusion

The default trace is a very powerful way to examine the health and the security of your SQL Server instance. There are several pitfalls to keep in mind – mainly related to file rollovers and size limitations, but with some programming the workarounds are not impossible. It is important to remember that the queries presented in this article will return the result from the single most recent default trace file. Depending on how busy the SQL Server instance is, the files may roll over way too fast for a DBA to catch all significant events; therefore, some automation is needed.

Further Reading

I have written a set of reports for SSMS which query the default trace.

Here is some documentation: Default Trace Audit documentation and FAQ

And here are some screenshots of some of them: Default Trace Audit documentation and FAQ: the screenshots

Feodor Georgiev

Author profile:

Feodor has been working with SQL Server since 2002, starting on the 2000 version and mixing it up as newer versions - 2005, 2008 and 2012 - were released. He specializes in database performance tuning, documentation and scalability management. He also works as a project leader and mentor on SQL Server and Business Intelligence projects on Microsoft-based solutions. His specialties include: Database Architecture, Microsoft SQL Server Data Platform, Data Model Design, Database Design, Integration Solutions, Business Intelligence, Reporting, Performance Optimization, Big Data. When he is not busy with his DBA work, keeping up with the latest SQL Server tricks or sharing tips on forums, he writes articles on SQLConcept.com.




Subject: Wonderful Article
Posted by: pinaldave (view profile)
Posted on: Tuesday, March 15, 2011 at 7:18 AM
Message: I have been a big fan of Feodor for a long time and this article kept my attention throughout.

It was easy and to the point. Totally fun to read, and I learned a lot as well.

Perfect and easy-to-read article by Feodor.

Subject: Great article
Posted by: Anonymous (not signed in)
Posted on: Tuesday, March 15, 2011 at 9:35 AM
Message: Thank you for sharing!

Subject: Useful and informative
Posted by: Phil Factor (view profile)
Posted on: Tuesday, March 15, 2011 at 10:17 AM
Message: There is something here in this article for everyone involved with SQL Server. I certainly learned useful things from it!

Subject: I have a dumb question
Posted by: BlueCollarCritic (not signed in)
Posted on: Tuesday, March 15, 2011 at 12:29 PM
Message: What’s the difference between getting the Default Trace by directly querying the system objects, as shown in your examples, and calling it like this (a method which I've seen used in many places):

“FROM ::fn_trace_gettable('C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\log.trc',0) T”

I tried both and filtered on the same criteria and they did not return the same number of rows, even though I made sure when using your method to provide the name of the oldest trc file so as to get all 5 trace files' worth of data.

The "::fn" method returned almost twice as many rows.

I ask only because it looks like the "::fn" approach has the advantage of being able to return all 5 files worth without having to first get the exact name of the oldest trc file but then again I may just have misread this somewhere.

Thanks.

BTW - Great piece.




Subject: RE: I have a dumb question
Posted by: sibir1us (view profile)
Posted on: Tuesday, March 15, 2011 at 2:57 PM
Message: @BlueCollarCritic
The difference between the scripts in the article and the script you are using is the following: the article scripts are using the sys.fn_trace_getinfo to get the location and name of the current trace file and then pass the value as a parameter to the fn_trace_gettable, and in your script you are manually typing the location and the name of the first trace file. As explained in the article, there is a different result, depending on which trace file you give as a parameter to fn_trace_gettable.
Let me explain in a bit more detail: when a trace file is rolled over, at the end of it (the last row of it) there is a pointer (or a mark) that the file has been rolled over, and hence when the file is read, SQL Server knows that there is a file following. The problem is, however, that the newly created rollover file does not have a pointer to the old file which was just closed, hence the fn_trace_gettable function does not pick up the previous file.
As I mentioned, the scripts in the article are showing only the current trace file results, since the trace location and name is picked up dynamically.

Feodor

Subject: RE: I have a dumb question
Posted by: sibir1us (view profile)
Posted on: Wednesday, March 16, 2011 at 8:04 AM
Message: @BlueCollarCritic
Actually, after some testing, it turns out that if we pass the default trace file path + log.trc to the fn_trace_gettable function, then we can get all trace files appended, however this will require some programming, since the folder location of the default trace file might be different for different SQL Server instances.


Subject: SQL Query Performance
Posted by: Jayganesh Sambath (not signed in)
Posted on: Monday, March 21, 2011 at 12:54 AM
Message: Hi Feodor,

Thanks for the article. It's very useful.

I have a query performance question. I am running a query from my ASP.NET program, and it runs for nearly 1 minute when it is executed for the first time. Subsequent executions are pretty fast. Please help me with this.

Thank you,
Jayganesh S

Subject: RE: SQL Query Performance
Posted by: sibir1us (view profile)
Posted on: Monday, March 21, 2011 at 2:45 AM
Message: Hello Jayganesh,
Since the default trace is very lightweight, it can be used only as an entry point (an overview) for performance and security matters.
In your case, however, you need to evaluate in detail. Here is what you should do:
1. make sure the logic of your query is encapsulated as a stored procedure (and maybe views and functions, if needed)
2. make sure all objects are fully qualified
3. run the query / procedure through the DTA (Database Tuning Advisor) and check if there are index, statistics and partitioning suggestions; if there are, apply them as appropriate.
4. monitor your memory - your data and plan caches and see if there are any abnormalities in the behavior of the query.
5. read the article I wrote on the SQL Server externals: http://tinyurl.com/6goeebf

If you have further questions, do not hesitate to send me an email (http://tinyurl.com/5ta3swy).

Feodor

Subject: Thank you!
Posted by: DJ (not signed in)
Posted on: Monday, March 21, 2011 at 7:01 AM
Message: Thanks for sharing the results of your hard and very detailed work. There is a lot of useful information in this article.

Thanks again!
dj

Subject: RE: Thank you!
Posted by: sibir1us (view profile)
Posted on: Monday, March 21, 2011 at 8:20 AM
Message: Thanks, DJ!
Thank you for reading!

Feodor

Subject: Great work; one recommended improvement
Posted by: Jared Faulkner (not signed in)
Posted on: Monday, March 21, 2011 at 9:06 AM
Message: It would be helpful if the scripts you provide used consistent case (capitalization) for identifiers for those of us working with binary or other collations.

Subject: RE: SQL Query Performance
Posted by: Jayganes Sambath (not signed in)
Posted on: Monday, March 21, 2011 at 9:36 AM
Message: Hi Feodor,

Thank you for your prompt response. Happy to see your reply.

Actually, this problem occurs only in the client's environment. We are not able to reproduce it in our development environment. Since I am going to have a WebEx tomorrow, I am quite anxious about what to check at that time.

However, I will try to learn about the things you have suggested, as I have not worked much on performance tuning in SQL Server 2005.

Thank you very much for your help!
Jayganesh S

P.S. Sorry for the duplicate posting of my question. I did it unknowingly.


Subject: RE: SQL Query Performance
Posted by: sibir1us (view profile)
Posted on: Monday, March 21, 2011 at 2:05 PM
Message: Hello Jayganes,
If I understand your question correctly, you have a query which executes slowly the first time it runs, while subsequent executions perform well. If this is the case, then you should look into your query compilation time (execution plan generation) and the data caching. Also, check how often the query is recompiling, and check whether you have memory pressure in your production environment.
There are plenty of reasons for a query to perform slowly; however, you should start with the memory and the caches.
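[Editor's note] As a minimal sketch of such a check, assuming access to the plan-cache DMVs available since SQL Server 2005, queries whose plans are regenerated most often can be listed like this (an illustration, not a complete diagnosis):

```sql
-- Queries with the most plan regenerations in the cache;
-- a high plan_generation_num suggests frequent recompilation
SELECT TOP (10)
    qs.plan_generation_num,                              -- times the plan was regenerated
    qs.execution_count,
    qs.total_worker_time / qs.execution_count AS avg_cpu_microseconds,
    SUBSTRING(st.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.plan_generation_num DESC;
```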

Feodor



Subject: RE: SQL Query Performance
Posted by: Jayganesh Sambath (not signed in)
Posted on: Wednesday, March 23, 2011 at 7:28 AM
Message: Thank you, Feodor. I will learn how to check memory- and cache-related things in SQL Server.

I will get back to you when I need help.

Thank you,
Jayganesh S

Subject: Linked Server deletion
Posted by: chandan_jha18 (view profile)
Posted on: Thursday, April 14, 2011 at 12:33 AM
Message: Hi,

this article is really useful and I have been using it in my day-to-day work. Yesterday someone deleted a linked server, but these traces are not able to give me that info under the objects created or deleted category.

Any thoughts on this?

Subject: Inconsistent casing on table aliases
Posted by: Martin Smith (view profile)
Posted on: Thursday, July 21, 2011 at 7:56 AM
Message: Please could authors test their scripts on CS collations.

Subject: Missing Column Statistics
Posted by: m60freeman (view profile)
Posted on: Friday, October 07, 2011 at 1:49 PM
Message: When I run your query to find Missing Column Statistics from the default trace, it returns a couple of dozen rows. However, when I check the database properties, both Auto Create Statistics and Auto Update Statistics are True. How is this possible?

Subject: Audit Login Failed
Posted by: Alexander Suprun (view profile)
Posted on: Wednesday, March 21, 2012 at 1:36 PM
Message: It's also very useful to include the TextData column in the Audit Login Failed query, because it will show you the exact reason why the logon attempt was unsuccessful.
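[Editor's note] A minimal sketch of such a query, assuming EventClass 20 corresponds to Audit Login Failed in the default trace (an illustration only):

```sql
-- Audit Login Failed events from the current default trace,
-- including TextData, which carries the failure reason
DECLARE @Path NVARCHAR(520);

SELECT @Path = [path]
FROM sys.traces
WHERE is_default = 1;

SELECT StartTime, LoginName, HostName, ApplicationName, TextData
FROM fn_trace_gettable(@Path, DEFAULT)
WHERE EventClass = 20      -- Audit Login Failed
ORDER BY StartTime DESC;
```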
