Tony Davis and Shawn McGehee

Managing the SQL Server Transaction Log: Dealing with Explosive Log Growth

14 March 2013

You've just become responsible for a database, only to find that the log file is growing out of control. Why is it happening and what do you do to correct it?

Consider a situation where, as a DBA, a new database falls under your care. Having implemented some monitoring, you find that the log has grown massively in size, is almost full, and that there isn't the capacity on its disk drive to accommodate an urgent database maintenance operation.

What next? The reaction to avoid is panic, with resort to one of the many quick but dangerous ways to empty the log file, that appear on various online forums. We need to find a way to allow the maintenance process to proceed, we need to find out why the log grew so large and fix the issue and then we need to reduce the size of the log, in the process removing any internal fragmentation. We need to do so in a way that preserves the integrity of the log chain, which we may need subsequently for a database restore operation.

Explosive Log Growth

For demonstration purposes, we'll create a simple MyMessages database, as shown in Listing 1. Notice that we specify a small initial log file (2 MB), which is set to grow in small increments (2 MB). We set the database to FULL recovery model and perform a full database backup (remember that the FULL recovery model becomes operational only after the first full database backup). Finally, we create a simple Messages table.

USE master;
IF DB_ID('MyMessages') IS NOT NULL
   DROP DATABASE MyMessages;
GO
CREATE DATABASE MyMessages ON PRIMARY
   (  NAME = N'MyMessages'
    , FILENAME = N'C:\SQLData\MyMessages.mdf'
    , SIZE = 199680KB
    , FILEGROWTH = 16384KB )
LOG ON
   (  NAME = N'MyMessages_log'
    , FILENAME = N'D:\SQLData\MyMessages.ldf'
    , SIZE = 2048KB
    , FILEGROWTH = 2048KB );
GO
ALTER DATABASE MyMessages SET RECOVERY FULL;
GO

USE master;
BACKUP DATABASE MyMessages
TO DISK ='D:\SQLBackups\MyMessages_full.bak';
GO

USE MyMessages;
CREATE TABLE dbo.Messages
    (
      MessageText nvarchar(200) NOT NULL ,
      MessageDate datetime2 NOT NULL
    );
GO

Listing 1: Creating the MyMessages database and Messages table

Next, we'll simulate some transactional activity on the Messages table, by inserting a row into the Messages table and then updating it in an explicit transaction, without committing or rolling back. Open a new tab in SSMS and execute Listing 2.

USE MyMessages;
GO
BEGIN TRANSACTION;
INSERT INTO dbo.Messages
   ( MessageText ,
     MessageDate
   )
VALUES ( 'A first message' ,
         '2013-02-26 17:54:47'
       );

UPDATE dbo.Messages
SET MessageText = 'A newer message';
-- note: deliberately no COMMIT or ROLLBACK; the transaction stays open

Listing 2: An uncommitted transaction on the Messages table

Back in the original tab, we'll proceed to insert 1.3 million rows into Messages.

USE MyMessages;
GO
INSERT INTO dbo.Messages
   ( MessageText, MessageDate )
   SELECT TOP 1000000
        REPLICATE('a', 200)
      , GETDATE()
   FROM msdb.sys.columns a
        CROSS JOIN msdb.sys.columns b;
GO

-- repeat the INSERT with TOP 300000 to add another 300K rows

Listing 3: Insert 1.3 million rows into Messages

Let's assume that this is the point at which the DBA, with no prior knowledge of what's gone before, receives news of the problem. From log size and space usage data, we find that the log file for the MyMessages database is 836 MB in size and close to full.

-- MyMessages Log Size: 836 MB, Log Space Used: 86%

Listing 4: Log size and space usage stats for MyMessages
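Listing 4 shows only the output; the article doesn't include the query used to gather it. As a sketch, one simple way to retrieve log size and space usage for every database is the documented DBCC SQLPERF(LOGSPACE) command:

```sql
-- Report log size (MB) and percentage of log space used, per database
DBCC SQLPERF(LOGSPACE);
-- Find the MyMessages row in the result set; in this scenario it would
-- show a log of roughly 836 MB with about 86% of the space in use
```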

Making Room in the Log

The first priority is to try to make some room in the log so that it doesn't fill up completely, and so that the maintenance process can proceed. We try a log backup but for reasons that require further investigation, SQL Server will not truncate the log.

BACKUP LOG MyMessages 
TO DISK = 'D:\SQLBackups\MyMessages_log1.trn' 

-- MyMessages Log Size: 836 MB, Log Space Used: 86%

Listing 5: Log backup does not truncate the log
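To find out what is blocking reuse of log space, we can query sys.databases (SQL Server 2005 and later); its log_reuse_wait_desc column names the current reason. A quick sketch:

```sql
-- What is preventing log truncation for this database?
SELECT name ,
       log_reuse_wait_desc
FROM   sys.databases
WHERE  name = 'MyMessages';
-- In this scenario we'd expect ACTIVE_TRANSACTION; other common values
-- include LOG_BACKUP, REPLICATION and DATABASE_MIRRORING
```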

In order to buy some time, and stave off further problems, we decide to create more space in the log by adding a 3 GB secondary log file, on a separate disk.

USE master;
GO
ALTER DATABASE MyMessages
ADD LOG FILE ( NAME = N'MyMessages_log2',
   FILENAME = N'D:\SQLData\MyMessages2.ldf' ,
   SIZE = 512000KB , FILEGROWTH = 512000KB );
GO

USE master;
GO
ALTER DATABASE MyMessages
MODIFY FILE
   ( NAME = N'MyMessages_log2', SIZE = 3146752KB );
GO

-- MyMessages Log Size: 3909 MB, Log Space Used: 14.5%

Listing 6: Adding a secondary 3 GB log file

The planned database maintenance operation can now proceed and we can turn our attention to finding out what caused the log to grow so large, and why SQL Server won't truncate it.

Why doesn't SQL Server truncate the log?

For a FULL or BULK_LOGGED recovery model database, only a log backup will result in truncation of the log (i.e. enable reuse of space in the log). When a log backup occurs, SQL Server can reuse the space in any portions of the log that are marked as "inactive". The start of the active log is marked by the oldest log record (referred to as the MinLSN record) that is still required for a successful database-wide rollback, or is still required by another activity or operation in the database. The MinLSN log record could be the one marking the start of the oldest open transaction, or the oldest log record still required for a log backup, or the oldest log record still required by another database process, such as database mirroring or transactional replication.

This explains why a long-running uncommitted transaction, or an application leaving "orphaned transactions" in the database, can keep large areas of the log "active" and so prevent log truncation. In this case, we've contrived to leave an uncommitted transaction in the database and so can simply commit it or roll it back. To do this, return to the tab containing Listing 2 and roll back the open transaction.
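In a real system, where we don't own the connection holding the transaction open, DBCC OPENTRAN reports the oldest active transaction in the database, including its session ID (SPID) and start time. A sketch of that investigation:

```sql
USE MyMessages;
GO
-- Report the oldest active transaction in the current database
DBCC OPENTRAN;
-- Typical output includes the SPID and transaction start time; with that
-- SPID we can inspect the session (e.g. via sys.dm_exec_sessions) and, as
-- a last resort, terminate it with KILL, which rolls the transaction back
```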

Next, we take another log backup; this time it truncates the log, creating plenty of reusable space in the primary log file.

BACKUP LOG MyMessages 
TO DISK = 'D:\SQLBackups\MyMessages_log2.trn' 

-- MyMessages Log Size: 3909 MB, Log Space Used: 1.4%

Listing 7: A second log backup, after clearing the open transaction, truncates the log

So far so good: we've allowed the maintenance operation to proceed, and we've fixed the problem that caused the explosive log growth. However, we still have two outstanding problems:

  • A database with multiple log files
  • A database with a primary log file that is bloated and likely highly fragmented

Remove any Secondary Log Files

We need to get rid of that secondary log file as soon as possible. SQL Server does not write log records in parallel to multiple log files, even when created on separate drives, so there is no performance advantage to having multiple log files. As soon as an "emergency" secondary log file is no longer required, we need to remove it, as all it will do is slow down any restore or recovery operations, since SQL Server has to zero-initialize the log during full and differential restore operations.

Instant file initialization and log files

The instant file initialization feature, enabled by assigning the 'Perform volume maintenance tasks' privilege to the SQL Server service account, applies only to data files, for reasons explained by Paul Randal.

To see the impact this can have, let's leave our secondary log file in place and perform a restore on the MyMessages database.

USE master;
GO
RESTORE DATABASE MyMessages
FROM DISK ='D:\SQLBackups\MyMessages_full.bak'
WITH NORECOVERY;

RESTORE LOG MyMessages
FROM DISK='D:\SQLBackups\MyMessages_log1.trn'
WITH NORECOVERY;

RESTORE LOG MyMessages
FROM DISK='D:\SQLBackups\MyMessages_log2.trn'
WITH RECOVERY;
/*Processed 89539 pages for database 'MyMessages', file 'MyMessages_log' on file 1. 
RESTORE LOG successfully processed 89539 pages in 24.409 seconds (28.658 MB/sec). 
Processed 0 pages for database 'MyMessages', file 'MyMessages' on file 1. 
Processed 3 pages for database 'MyMessages', file 'MyMessages_log' on file 1. 
Processed 0 pages for database 'MyMessages', file 'MyMessages_log2' on file 1. 
RESTORE LOG successfully processed 3 pages in 37.085 seconds (0.000 MB/sec).*/

Listing 8: Restoring MyMessages (with secondary log file)

The restore took over 50 seconds. If we repeat the exact same steps, but without adding the secondary log file, the comparative restore, in our tests, took about 6 seconds. This is a substantial impact, even for a relatively modestly sized secondary log. In cases where the log file is much larger, the effect on restore times can be dramatic.

In order to remove the secondary log file, we need to wait until it contains no part of the active log. Since our goal is to remove it, it's permissible to shrink this secondary log file (demonstrated shortly), and turn off auto-growth for this file. Shrinking the secondary log to zero will return it to its original size (500 MB) and so "encourage" the active log to move swiftly back into the primary log file. It's important to note that this will not move any log records in the secondary log over to the primary log (some people expect this behavior because if we specify the EMPTYFILE parameter, when shrinking a data file, SQL Server will move the data to another data file in the same filegroup).
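As a sketch, shrinking the secondary log file and turning off its auto-growth might look like this (the file and database names follow the earlier listings):

```sql
USE MyMessages;
GO
-- Shrink the secondary log file as far as the active log allows
DBCC SHRINKFILE (N'MyMessages_log2' , 0);
GO
USE master;
GO
-- Disable auto-growth for the secondary file, so it cannot grow again
ALTER DATABASE MyMessages
MODIFY FILE ( NAME = N'MyMessages_log2', FILEGROWTH = 0 );
GO
```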

Once a log backup confirms that the secondary log file contains no part of the active log, we can simply remove it.

USE MyMessages;
GO
ALTER DATABASE MyMessages REMOVE FILE MyMessages_log2;
GO

Listing 9: Removing the secondary log file

Shrink Primary Log and Remove Fragmentation

When we first created the MyMessages database, we sized the primary log file at just 2 MB and allowed SQL Server to auto-grow it in 2MB increments to over 800 MB in size. A transaction log that auto-grows frequently, in small increments, will have a very large number of small Virtual Log Files (VLFs). This phenomenon is log fragmentation. If a database process, such as the crash recovery process, has to read the log file, it starts by reading in all the VLFs. If there are many of them, this will be a longer operation, and may affect the overall time taken to recover the database. A similar argument applies to other operations that read the log, such as log backups.

Essentially, the initial size and relatively small growth increments we've chosen for this database are inappropriate for this sort of data load and lead to the creation of a large number of VLFs. We can confirm this by interrogating the VLF architecture using a command called DBCC LogInfo, as shown in Listing 10.

USE MyMessages 
DBCC Loginfo; 
-- returns 1672 rows

Listing 10: A fragmented log!

We're not going to discuss the output of DBCC LogInfo in any detail. All we're interested in at this stage is that DBCC LogInfo returns 1672 rows, meaning 1672 VLFs.

Interrogating VLFs using DBCC LogInfo

DBCC LogInfo is an undocumented and unsupported command. We'll use it in this article to peek at the VLFs, but we won't go into detail about the information it returns. Kalen Delaney has a good blog post that explains its use and output.

Note that when we created the 3 GB secondary log file, setting a reasonable initial size (500 MB) followed by a single manual growth, the file comprised only 24 new VLFs, a very reasonable number of virtual log files.

The way to reduce the primary log file to a reasonable size, and remove the fragmentation, is to shrink it, and then manually resize it. Note that we should never shrink the log as part of our standard maintenance operations, as it will simply need to grow again as we add more data, and modify our existing data, and these log growth events are expensive, since SQL Server has to zero-initialize the new log space.

However, shrinking the log is permissible in situations such as this, in the knowledge that we have investigated and resolved the cause of the excessive log growth, and will then correctly size the log such that shrinking the log should be a "one off" event.

The recommended approach is to use DBCC SHRINKFILE to reclaim the space and remove fragmentation. If we specify 0 (zero) as the target size, or don't specify a size, SQL Server will attempt to shrink the log back to its initial size. Alternatively, we can specify a target size to which to shrink the file (such as 1, if we wish SQL Server to shrink the log to its smallest possible size). In Listing 11, we use zero in order to shrink the log back to its initial size (2 MB).

USE MyMessages; 
DBCC SHRINKFILE (N'MyMessages_log' , target_size=0); 

Listing 11: Shrinking the primary log file (partial success)

In the output from this command, we see the current size of the file (91008 × 8-KB pages) and the minimum possible size after shrinking (256 × 8-KB pages). This is actually an indication that the shrink did not work fully. SQL Server shrank the log to the point where the last VLF in the file contained part of the active log, and then stopped. Check the messages tab for confirmation.

/*Cannot shrink log file 2 (MyMessages_log) because the logical log file located at the end of the file is in use. 
(1 row(s) affected) DBCC execution completed. If DBCC printed error messages, contact your system administrator.*/

Perform a log backup and try again.

BACKUP LOG MyMessages 
TO DISK = 'D:\SQLBackups\MyMessages_log3.trn' 

USE MyMessages; 
DBCC SHRINKFILE (N'MyMessages_log' , 0); 

Listing 12: Shrinking the primary log file after log backup

Having done this, we can now manually grow the log to the required size, in a similar way to that demonstrated in Listing 6.
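For example, assuming we've decided 2 GB is an appropriate working size for this log (the target size here is illustrative, not a recommendation), the manual growth might look like this. A single large, deliberate growth event also keeps the VLF count low:

```sql
USE master;
GO
-- Grow the primary log file back to a sensible, pre-planned size in one step
ALTER DATABASE MyMessages
MODIFY FILE ( NAME = N'MyMessages_log', SIZE = 2097152KB ); -- 2 GB
GO
```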


Summary

If a log file grows excessively, and you need to add a temporary secondary log file, remove it as soon as it is no longer required. If you leave the secondary log file in place, it may considerably slow down database restore and recovery operations.

It is a bad idea to undersize the transaction log, and then let SQL Server auto-grow it in an uncontrolled fashion, in small increments. This can lead to log fragmentation, which may slow down log backup and database recovery operations. The way to avoid issues relating to expensive log growth events, and log fragmentation, is simply to set the correct initial size for the log (and data) file, allowing for current requirements, and predicted growth over a set period.

Ideally, having done this, the log would never auto-grow, which isn't to say that we should disable the auto-growth facility. It must be there as a safety mechanism, and we should set a reasonable auto-growth increment in order to avoid growth events fragmenting the log. However, having sized the log appropriately, we are not relying on auto-growth being the mechanism that controls log growth.

If you’d like to learn more about how to build a "bullet-proof" backup and restore strategy for your databases, and optimize your transaction log management, check out the following resources:

Bulletproof your Database Backup and Recovery Strategy (free whitepaper)

SQL Server Backup and Restore, by Shawn McGehee (free eBook)

SQL Server Transaction Log Management, by Tony Davis and Gail Shaw (Paperback)

Tony Davis and Shawn McGehee

Author profile:

Tony Davis is an Editor with Red Gate Software, based in Cambridge (UK), specializing in databases, and especially SQL Server. He edits articles and writes editorials for both websites and newsletters, with a combined audience of over 1.5 million subscribers. You can sample his short-form written wisdom at either his blog or his author page.

Shawn McGehee is a DBA living in Orlando, FL. He has been working in IT since graduating high school in 1997, in positions ranging from web developer, to help desk operative, to his current position as a Senior DBA. He is heavily involved in the SQL Server community, speaking at and organizing local users groups in Florida since 2008, and is currently President of the OPASS group in Lake Mary, FL.


Subject: How do you roll back a transaction from months ago?
Posted by: Bev Kaufman (view profile)
Posted on: Friday, May 10, 2013 at 9:25 AM
Message: I got what you said about the old transaction that is sitting there in the minimum position, blocking the normal log truncation. In the sample you gave us, we're able to go back to the transaction and complete the rollback because we know how it was created, and we still have the connection opened.
But in the hypothesis you present at the beginning, you've just taken over the database. You have no idea what happened way back when. And of course the program that caused the problem has been shut down and started many many times in the interim, so the original connection is long gone.
So how do you locate and deal with the old incomplete transaction that is jamming up the works?
Another thing I noticed. By failing to complete the transaction, the record has remained as it was before the update was done, which means the users had likely noticed that something didn't get done and gone ahead and done it over again. (That of course assumes the operation should have been committed.) The goal then should not be to complete or roll back an old transaction so much as to remove its existence in the log file.

