Phil Factor

Database Deployment: The Bits - Database Version Drift

30 November 2012
Database Lifecycle Management Patterns & Practices Library Stage 4


Automated Deployment

When you are about to deploy a new version of a database by updating the current version, one of the essential pre-deployment checks is to make sure that the version in production is exactly what it should be. If changes have somehow slipped in, you'll need to understand them and deal with them before you can deploy.

Version-drift (or Version-creep) is the ironic name given to the habit of 'patching' the production database to make any type of change, without deploying it properly. The change could be as innocent as adding or changing an index to aid performance, but you can’t take that for granted. If you find changes in a production system that weren’t there when the version was deployed, they need to be investigated.

At the simplest level, when we detect version-drift, we are discovering that the version in production is not the version we think it is.

If we don't check for version-drift, then it is likely that beneficial changes can be lost, or undesirable changes can be retained unintentionally: The exact consequences depend on the methodology we're using for a deployment. If there has been drift, we need to find out why and take action to ensure that nothing gets lost, be it changes to indexes, configuration settings, or forensic evidence of fraud.

When deploying a database as a data-tier application, we might get a warning message about version-drift when we attempt the deployment. The safest way to check for version-drift is to compare the metadata before each deployment. Are the tables, routines, settings, static data, indexes, and so on, in production the same as in the archived version, usually represented by the scripts in source-control? I'll show you how to do this.
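At its core, this metadata check is a comparison of two sets of object definitions. The sketch below is in Python purely for illustration; the object names and both schema dictionaries are invented, and a real check would read the metadata from source control and from the live server rather than from hard-coded literals:

```python
# Sketch: compare the archived schema (from source control) with the live one.
# Each dictionary maps an object name to its definition script (both invented).
archived_schema = {
    "dbo.Customers": "CREATE TABLE dbo.Customers (...)",
    "dbo.GetCustomer": "CREATE PROCEDURE dbo.GetCustomer ...",
}
live_schema = {
    "dbo.Customers": "CREATE TABLE dbo.Customers (...)",
    "dbo.GetCustomer": "CREATE PROCEDURE dbo.GetCustomer ...",
    "dbo.IX_Hotfix": "CREATE INDEX IX_Hotfix ...",  # a 'patched-in' change
}

def drift(archived, live):
    """Classify differences as additions, removals, or modifications."""
    additions = sorted(set(live) - set(archived))
    removals = sorted(set(archived) - set(live))
    modifications = sorted(
        name for name in set(archived) & set(live)
        if archived[name] != live[name]
    )
    return additions, removals, modifications

additions, removals, modifications = drift(archived_schema, live_schema)
print(additions)  # any object that slipped in must be investigated
```

Anything in `additions`, `removals`, or `modifications` is drift, and every item needs a human to decide its fate.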

Why is drift a bad thing?

I've made plenty of changes to a live system in the past: They're called deployments; it is what they’re for, and there is a correct way of preparing for them and doing them. It should be easy to do the right thing. Version-drift shouldn't happen, but it does. If the deployment process wasn’t so irksome, then the temptation to circumvent it, and 'patch' in a change wouldn’t be so strong. If you are finding a lot of version-drift then there is something wrong with the deployment system, either its design or implementation. If drift has happened, then we must delay deployment whilst we put it right.

If we create the deployment script by running a synchronization tool and press on without correcting the drift, we will retain stuff that maybe shouldn't be there, whereas if we create it from source control, we will lose stuff that perhaps ought to be there. In either case, there is a chance that something will break. The only way to correct drift is to look at the changes, and 'sort out the sheep from the goats'; in other words, keep the changes that should have been deployed properly, and discard the alterations that were caused by humanity's imperfect state.

There is no sense in having an automated process to preserve unexpected changes since not all changes prove to be desirable and we need to inspect all of them before deciding how to deal with them. After all, while the unexpected change could be a genuine and effective change to improve performance, it could equally be part of a fraud, or an attempt to compromise the system's security. Even if unintentional, the mindless thrashings of a DBA in a muck-sweat can result in a change that leaves a security vulnerability, or a performance time-bomb, in the database. We must treat all unexpected changes as suspect until we prove them otherwise.

When there is an auditing system in place for a production system, it is unlikely that you’ll hit version drift, because you’ll know of any change. Many shops have an automated alarm that detects an unscheduled metadata change in a production system and then rings the big red bell on the wall, like the ones you see in the Bond films. (Okay, it sends an email alert.) In any event, it is part of a good database intrusion-detection system. We can and should, of course, trap and record any DDL changes as they happen and keep a log of them, as part of an auditing system. Depending on the method of doing this, it can come with a cost in extra resources, though in the case of Extended Events this isn't significant.
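The 'big red bell' can be approximated without special tooling: take a fingerprint of the scripted metadata on a schedule and alert when it changes. Below is a minimal Python sketch of the fingerprinting idea, with invented object scripts; note that this only detects that something changed, not who changed it or what:

```python
import hashlib

def schema_fingerprint(object_scripts):
    """Hash all object scripts, in a deterministic order, into one digest."""
    h = hashlib.sha256()
    for name in sorted(object_scripts):
        h.update(name.encode())
        h.update(object_scripts[name].encode())
    return h.hexdigest()

# Fingerprint recorded at deployment time vs. one taken later (both invented).
known_good = schema_fingerprint({"dbo.Orders": "CREATE TABLE dbo.Orders (...)"})
current = schema_fingerprint({"dbo.Orders": "CREATE TABLE dbo.Orders (...) -- patched"})

if current != known_good:
    print("Unscheduled metadata change detected!")  # ring the big red bell
```

A DDL trigger or Extended Events session gives you the who-and-when history that a fingerprint alone cannot.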

How does drift happen?

When circumventing the deployment process in order to make a change to a live database system, we make a judgment that the risks of not making the change are greater than the risks of doing so, and hit the button. It could be that the production system isn't important enough to matter. It could be that the proper deployment process is horribly broken, or that the person in charge of deployment is a power-crazed cabbage. One of the downsides of teamwork is that we learn to tolerate the failings of others, so we manage the change retrospectively. We can't really assume that we can simply blitz all such database changes. We just have to accept that legitimate version-drift happens. It is frustrating at times, but one can feel a twinge of sympathy with anyone who prefers to go home in good time rather than rolling up their sleeves, updating the same version in Source Control, installing it on the integration server, running the automated integration tests, and then doing the deployment.

Version-Drift and Database Deployment

We need to check for 'uncontrolled' version drift before any deployment. If the metadata check reveals that the scripts in Source Control reflect what is in production and they are at the same version, then we breathe a sigh of relief. If there has been version-drift, without the changes going into Source Control, then our day starts to get more complicated.

Some differences, such as replication metadata, are added to the production server automatically on deployment and should be ignored. We can also ignore bogus differences arising from machine-generated names in the SQL Server metadata for 'anonymous' keys.

Other modifications, however, we can't ignore. We need to review all the changes. If we're in luck, the developer will have documented the change in the live database, via extended properties or headers. If we need to keep the changes then we need to update source control and rerun the pre-deployment automated tests before attempting any deployment.

If we run a deployment script, and the current version is different to the one on which we've rehearsed the deployment, then a number of things can go wrong. The 'ad-hoc' changes could be lost, the migration script could fail and roll back, or it could crash and leave the database caught in an intermediate state.

Checking for Version-Drift

So, what's required?

  • The Catch-points to prevent a faulty deployment.
    Any database deployment process needs to stop with an error if it detects that the metadata of the database it is upgrading is different to what is in source control for that same version. Both Microsoft's DAC and Redgate's DLM Automation will do this automatically. In the case of DAC, a DAC-registered database has stored with it, on the same server, an XML version of the metadata for that version, along with its version number. The process compares this with what is actually installed. If it is not the same, then it blows the whistle on the deployment. DLM Automation can be set to check either directly against the version in Source Control, or against a 'snapshot' of the correct version.
  • The means of working out how to remedy the problem
    We need to be able to check for changes. If a change has happened, we want to know what has changed, why, and who did it. To work out the whole sequence of changes and unpick them, we need to see the whole history of changes. There are several ways we can do this. If we are quick enough in spotting a change, we can pull most of this information out from the Default Trace. We can set up a DDL trigger to record all changes and store them in a log. For large production shops, we can use Service Broker to centralize the recording of DDL changes to all our databases.
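Putting the two requirements together, the catch-point amounts to a gate that refuses to proceed when either the version number or the metadata fingerprint disagrees. Here is a hypothetical Python sketch of that logic; the version strings and fingerprints are invented, and in practice DAC or DLM Automation performs this check for you:

```python
class DriftError(Exception):
    """Raised when the live database is not the version we rehearsed against."""

def deployment_gate(expected, live):
    # 'expected' comes from source control / the DAC registration;
    # 'live' is read from the production server just before deployment.
    if live["version"] != expected["version"]:
        raise DriftError(f"version mismatch: expected {expected['version']}, found {live['version']}")
    if live["fingerprint"] != expected["fingerprint"]:
        raise DriftError("metadata drift: live schema differs from the archived version")
    return True

expected = {"version": "1.4.0", "fingerprint": "abc123"}
print(deployment_gate(expected, {"version": "1.4.0", "fingerprint": "abc123"}))
```

The important design choice is that the gate raises an error rather than warning and carrying on: a deployment against an unexpected version must never proceed silently.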

We need to inspect all 'uncontrolled' changes. This process needs a human. Changes need to be understood. This requires eye-balling. We need to look at the differences between what's on the live system and what Source Control thinks is in this version. If you haven't caught these changes via a system such as a DDL trigger, and you weren't in time to get them from the default trace, you will need to do this by means of a schema-comparison tool. You can compare databases and synchronize them in Visual Studio, of course, but it probably isn't a good idea to do this against a production server! You can use DAC, either by using SQLPackage, or by automating the DAC library in PowerShell, to generate a build script that, effectively, lists the changes between the production system and the DACPAC that was used to build it. However, a schema-comparison tool with a user-interface will give you a side-by-side rendering of the metadata in the production database, and the metadata as represented by the version in source control, or the build from it on the integration server. You can then scroll through all the changes rapidly and get a quick appreciation of any ‘drift’.

I generally don't even touch the production system, but look at a recent backup set from it. I also compare directly with what's in source control via a Scripts Folder. If  you haven't yet discovered SQL Compare, then you're in for a happy surprise, because SQL Compare does this fast and elegantly. You don't even need a database, but can compare a scripts folder directly with a recent backup set.

With a schema-comparison tool, the process is straightforward. You just kick off a new project if it is the first time you’re doing the comparison, or select an existing one. You then specify whether you wish to use a database, backup, snapshot or scripts folder for the  database you want to check, and fill in the appropriate details. You do the same thing for the database you want to compare it against and away you go. You don’t need the advanced synchronization features: only metadata comparison, and you can navigate through the source by the type of database object and get immediate information about what database objects have changed, and how many of each.

If you’re looking to automate the process of drift detection, another approach to consider is Redgate’s recently released DLM Dashboard, which is designed to be usable in the simplest possible way. It allows you to track every change to your database schemas. As soon as a change occurs, DLM Dashboard will alert you to the who, what, and how of the database schema changes, via email notifications and a clear dashboard overview, and provide you with a history of the changes.

Rolling your own solution

One of the cutest and easiest ways of detecting version drift is to use DACFx. This will tell you if a database has drifted, but it will not tell you who did it, or give you a history of the changes. It will tell you what objects have changed, but not what it is within the object that has changed. It is designed just to tell you that a new version can be deployed because the installed database is at the version that you expect it to be, and nobody has slipped in any changes in the meantime. To do this, the database has to be registered, which means that its metadata at the point of registration is stored in a table in MSDB, along with the version number.

Here is a simple test routine in PowerShell that will register a database if it is not yet registered, create a new stored procedure, and then check that DAC can detect the change:

Trap { #simple error trapping
       # Handle the error
       $err = $_.Exception
       write-error "Oops! $($err.Message)"
       while ($err.InnerException)
       {
              $err = $err.InnerException
              write-error $err.Message
       }
       # End the script.
       if ($ThisEvent) {UnRegister-event $ThisEvent.Name}
}

set-psdebug -strict
$ErrorActionPreference = "stop"

# load the SQLPS module if it isn't already loaded.
If (!(Test-Path SQLSERVER:)) { Import-Module "sqlps" -DisableNameChecking }
If (!(Test-Path SQLSERVER:)) { throw "Cannot load SQLPS" }

# load the DAC services assembly
add-type -path "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\Microsoft.SqlServer.Dac.dll"

$DatabaseName = 'MyDatabase'
$Servername = 'MyServerInstance'

#create a server instance
$srv = New-Object Microsoft.SQLServer.Management.SMO.Server $servername
#locate the right database
$Mydatabase = $srv.databases[$DatabaseName]
$SourcePackage = new-object Microsoft.SqlServer.Dac.DacServices $srv.connectionContext.ConnectionString
# register events, if you want 'em
$ThisEvent = register-objectevent -in $SourcePackage -eventname Message -source "msg" -action { out-host -in $Event.SourceArgs[1].Message.Message }

# is the database already DAC-registered? Only query msdb on versions that have the table
$MyResult = $Mydatabase.ExecuteWithResults(@"
Declare @VersionCode Varchar(50)
Select @VersionCode = convert(varchar(50),SERVERPROPERTY('productversion'))
if case when @VersionCode like '%.%.%'
        then Convert(numeric(18,2), left(@VersionCode,
               charindex('.',@VersionCode, charindex('.',@VersionCode)+1)-1))
        else 7.00 end --default to version 7 if odd version number
   >= 10.5 --only access the table if you know it is there
   --check that it is registered
   Select database_name, type_version
     from msdb.dbo.sysdac_instances
    where database_name like '$DatabaseName'
else --return something that won't crash the system
   Select database_name, type_version
     from (Select ' ','1,2,3,4') f(database_name, type_version) where 1=0
"@)

$ItHasBeenRegistered = $false;
if ($MyResult.Tables[0].rows.count -eq 1)
{
       # this has been registered
       $version = $MyResult.Tables[0].rows[0]['type_version']
       $ItHasBeenRegistered = $true;
}
else
{
       # this has not been registered
       $version = ''
}

if (-not $ItHasBeenRegistered)
{
       $SourcePackage.Register("$DatabaseName", 'We are testing database drift detection', "")
}

'creating stored procedure'
# create a simple procedure, so that there is a change for DAC to detect
$MyResult = $Mydatabase.ExecuteNonQuery(@"
CREATE PROCEDURE [dbo.TheCurrentDateAndTime]
AS
Select getdate() as TheCurrentDateAndTime
"@)

[xml]$DriftReport = $SourcePackage.GenerateDriftReport("$DatabaseName")
if ($driftReport.DriftReport.Additions.innerXML -eq $null `
    -and $driftReport.DriftReport.Removals.innerXML -eq $null `
    -and $driftReport.DriftReport.Modifications.innerXML -eq $null)
{ $Drifted = $false }
else
{
       $Drifted = $true
       $driftReport.DriftReport.Additions.Object | Select-object Name,Parent,Type,@{Name="Change"; Expression = {'Addition'}} | Format-table
       $driftReport.DriftReport.Removals.Object | Select-object Name,Parent,Type,@{Name="Change"; Expression = {'Removal'}} | Format-table
       $driftReport.DriftReport.Modifications.Object | Select-object Name,Parent,Type,@{Name="Change"; Expression = {'Modification'}} | Format-table
}

if ($drifted) { "Hey! $Databasename on $servername has drifted!" }

# tidy up behind us
$MyResult = $Mydatabase.ExecuteNonQuery(@"
drop PROCEDURE [dbo.TheCurrentDateAndTime]
"@)

UnRegister-event $ThisEvent.Name #unregister the event you set up

This will give the result:

Name                          Parent   Type             Change   

----                          ------   ----             ------

[dbo.TheCurrentDateAndTime]   [dbo]    SqlProcedure     Addition


Hey! MyDatabase on MyServerInstance has drifted!

This is a good start, and as it was a simple addition, it won't take much investigation. If it is a modification to a complex database routine or table, and you have a comparison tool such as SQL Compare, it is easy to dive in and see what modifications have been made. If you don't have access to a database-schema comparison tool, then you can still do it, but in a rather more long-winded way: you use a differencing tool on the two source files. A text comparison tool is part of every programmer's toolkit, even if only used when embedded into your source control system. There are a few snags that I’ve come across with this method: you need to script the live system and compare it with an integration database (one built from the source in source control for a particular version), the script-generation process is surprisingly slow, and the comparison is purely text-based rather than semantic.

Hah! The foreign keys are unchecked in the clone

Using a Programmers File-Comparison Tool

WinMerge is on SourceForge, published under the GNU GPL; it is free and open-source, and probably the most popular of the many tools that show the difference between two text files. There are plenty of others, such as ExamDiff and Beyond Compare. None of these will help us much with an ordinary build script, such as the ones generated from SSMS or embedded in DACPACs (Model.SQL), because we need the order of all the objects to be the same in both scripts. SSMS will sort the tables in dependency order, but you’ll soon find that this can result in any number of orders for identical databases. We have to resort to SMO to get the comparison script we want, which is merely a sequenced collection of all the individual scripts. Remember that, unlike a proper database comparison tool, a file comparison tool has no knowledge of semantics, only text differences: it doesn't understand SQL!
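The point about ordering is easy to demonstrate. In the Python sketch below (the build-script fragments are invented), sorting the object scripts before diffing means that identical databases compare clean, so the only lines the diff reports are real differences:

```python
import difflib

# Two invented build scripts: same objects in different order, plus one change.
staging = ["CREATE TABLE A (...)", "CREATE TABLE B (...)", "CREATE VIEW V AS ..."]
production = ["CREATE TABLE B (...)", "CREATE TABLE A (..., patched)", "CREATE VIEW V AS ..."]

def normalised(script_fragments):
    # Sort object scripts so identical databases always produce identical files.
    return sorted(script_fragments)

diff = list(difflib.unified_diff(normalised(staging), normalised(production), lineterm=""))
for line in diff:
    print(line)
```

Without the sort, table B's change of position would show up as a spurious difference; with it, only the patched definition of table A appears in the diff.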

To generate the scripts, we need to build the integration or staging server to the version in source control and compare this to an exact copy of what is in production, by restoring a backup. By happy coincidence, a test restore is something we need to do anyway, occasionally. We can then run a script that will do something like this. (I use PowerShell, but C#, F# or IronPython would be just as good.)

<# This script creates two comparison database scripts that you can then compare with a file comparison tool. This is used for checking that the production database matches the copy of the database in Integration or Staging. We store them in a directory, placed in subdirectories according to server and database. #>
$FileComparisonTool='C:\Program Files (x86)\WinMerge\WinmergeU.exe' #or whatever you choose
$directory='E:\MyScriptsDirectory' # the directory where you want to store the scripts
$filenames = @(); #we make a list of the filenames
<# and we have a list of the serverInstance/Database we want the script from. There is a third column which is blank if it is Windows authentication and has the default Login if it is SQL Server authentication #>
$Databases = @(
       ('MyServer','MyDatabase','PhilFactor'),
       ('MyProductionServer','MyDatabase',''))
<# This script will actually create as many comparison scripts as you want. You can then use them later on.#>
Trap {
  # Handle the error
  $err = $_.Exception
  write-host $err.Message
  while( $err.InnerException ) {
   $err = $err.InnerException
   write-host $err.Message
  }
  # End the script.
}

set-psdebug -strict
$ErrorActionPreference = "stop"

#Load SMO assemblies, checking that it was successful
$MS='Microsoft.SqlServer'
@('.SMO','.SmoExtended','.ConnectionInfo') |
     foreach-object {
        if ([System.Reflection.Assembly]::LoadWithPartialName("$MS$_") -eq $null)
           {throw "missing SMO component $MS$_"}
     }
# just to prevent the need for retyping
$My="$MS.Management.Smo"

$Databases | foreach-object{
        $ServerName = $_[0] #the name of the server or instance for this job
        $DatabaseName = $_[1] #the name of the database for this job
        $Login = $_[2] #blank if Windows authentication
        <# and we can now establish the filename based on the server and database and maybe create the directories #>
        $Scriptdirectory = "$directory\$($ServerName -replace '[\\\/\:\.]','-')\$($DatabaseName -replace '[\\\/\:\.]','-' )"
        # a little easier than doing it the PowerShell way
        if (![System.IO.Directory]::Exists($Scriptdirectory)) {$null=[System.IO.Directory]::CreateDirectory($Scriptdirectory)}
        $filename ="$($Scriptdirectory)\$($DatabaseName)_Compare.sql" #note different suffix to build
        $Filenames += $filename #remember the filename
        if ($Login -ne '')
                {
                $Credentials=get-credential -Credential $Login
                $mySrvConn = new-object Microsoft.SqlServer.Management.Common.ServerConnection
                $mySrvConn.ServerInstance = $ServerName
                $mySrvConn.LoginSecure = $false
                $mySrvConn.Login = $Credentials.UserName
                $mySrvConn.SecurePassword = $Credentials.Password
                $s = new-object ("$My.Server") $mySrvConn }
        else
                {$s = new-object ("$My.Server") $ServerName } #find the server instance
        if ($s.Version -eq $null ){Throw "Can't find the instance $ServerName"}
        $TheDB = $s.Databases[$DatabaseName]
        if ($TheDB.Name -ne $DatabaseName){Throw "Can't find the database '$DatabaseName' in $ServerName"};
        <# now we will use the canteen system of SMO to specify what we want from the script. It is best to have a list of the defaults to hand and just over-ride the defaults where necessary, but there is a chance that a later revision of SMO could change the defaults so beware! #>
        $scripter = new-object ("$My.Scripter") $s #script out the database creation
        $Scripter.Options.AllowSystemObjects = $false #who wants 'em?
        $Scripter.Options.ExtendedProperties = $true # yes we want these
        $Scripter.Options.DRIAll = $true # and all the constraints
        $Scripter.Options.Indexes = $true # Yup, these would be nice
        $Scripter.Options.Triggers = $true # This should be included when scripting a database
        $Scripter.Options.IncludeHeaders = $false; # they have today's date!
        $Scripter.Options.ToFileOnly = $true #no need of string output as well
        $Scripter.Options.IncludeIfNotExists = $false #not necessary
        $Scripter.Options.Filename = $filename;
        $scripter.options.AppendToFile = $false #because this is the first part that gets written
        $scripter.Script($TheDB) #do it
        $scripter.options.AppendToFile = $true #we add each object script
        foreach ($object in @("Tables","Functions", "Views","XmlSchemaCollections", "Assemblies", "StoredProcedures", "Schemas", "Triggers", "Roles", "MessageTypes", "Contracts", "Queues", "Routes", "Services", "RemoteServiceBindings")) #there are more in later versions, but not so important.
                {
                $ObjectCollection = $TheDB.($object) #get the collection
                if ($ObjectCollection.count -gt 0) { #if there is anything there
                        $ObjectCollection | #for each in the object collection
                                Sort-Object name | # just in case
                                        foreach-object { #prefetch makes no difference at all
                                                $scripter.Script($_) #and script each individual object
                                                }
                        }
                "done $object for $servername, $DatabaseName at $(get-date)"
                }
        }
if ($filenames.count -gt 1)
        {&$FileComparisonTool $filenames[0] $filenames[1]}

A Watched kettle never boils

Just don't hang about, watching the screen, because little furry mammals evolve in the time this script takes to run; AdventureWorks takes two minutes locally. No, there are definitely good reasons for having SQL Comparison tools. However, with advance planning, we can prepare all the scripts overnight.

Of course, for a great deal of the database, it is much quicker to get the script from the metadata, via SQL, but the build-script for tables and the related DRI aren't stored within the database metadata. We can create a SQL routine to do our table-build script but to get everything we need is going to be immensely complex. It is also possible to pull down the metadata to a local database but the devil is in the detail.

One way of speeding the process in C# is to increase the number of threads to an optimum of around six (thanks to Linchi Shea for discovering this); in PowerShell, we can run the scripts for each database simultaneously.
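The parallel variant is straightforward with a worker pool. Here is a Python sketch of the idea; `script_database` is an invented stand-in for the slow SMO scripting call, and the database names are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def script_database(name):
    """Stand-in for the slow, I/O-bound SMO scripting of one database."""
    return f"-- build script for {name}"

databases = ["MyDatabase", "MyProductionCopy"]

# Script each database on its own thread; around six workers is the reported
# sweet spot for the C# equivalent, though two databases only need two.
with ThreadPoolExecutor(max_workers=6) as pool:
    scripts = list(pool.map(script_database, databases))
```

Threads suit this job because scripting is dominated by waiting on the server rather than on local CPU.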

My solution would be to shrug and plan this in advance, running this on a scheduler.


Before any deployment, it is worth checking that the version of the database in production is the same as the version you think is there. If all is well, then we will know exactly what is in the production database since, for all practical purposes, it will be what's in source control for the same version. If we've suffered 'Version-Creep', and find there are differences, then we need to understand why these changes have happened and decide if they need to be made permanent by retrospectively legitimizing them in source control.

This article is part of our Database Lifecycle Management patterns & practices.

Return to the library to find more articles on DLM, or visit for more information about SQL Server products for database lifecycle management.


Author profile:

Phil Factor (real name withheld to protect the guilty), aka Database Mole, has 30 years of experience with database-intensive applications. Despite having once been shouted at by a furious Bill Gates at an exhibition in the early 1980s, he has remained resolutely anonymous throughout his career. See also :







Subject: Using checksum queries to identify schema changes
Posted by: Michael Baylon (not signed in)
Posted on: Thursday, December 6, 2012 at 6:33 AM
Message: Nice post.

Another relatively simple way to identify 'unexpected' changes to a production database is by carrying out checksum queries on INFORMATION_SCHEMA views for different types of sql objects.

In outline, run the checksum queries on your production schema (and then ideally store the returned values in a 'build version' table).

Then run the same queries before carrying out the next deployment.

If the production schema has not changed between deployments, the checksum values will be the same. If different, it indicates a change has taken place.

More detail on this process can be found on the following post -

Subject: SSDT is free
Posted by: Peter Schott (not signed in)
Posted on: Thursday, December 6, 2012 at 10:05 AM
Message: Assuming this is for SQL Server, SQL Server Data Tools is available for free and works backwards to SQL 2005. There's a built-in Schema Comparison tool that is serviceable. It's better to keep up with the schema changes, but it's at least something. Wish it could ignore non-essentials (square braces, extra parentheses) a little better, but it works.

Setting up DDL Audits is definitely worthwhile. Automated notices of schema changes make keeping up with that drift a bit easier. I blogged about doing that a while back, but it could save someone some time.

Subject: SSDT + Git works well
Posted by: Anonymous (not signed in)
Posted on: Monday, December 10, 2012 at 3:18 PM
Message: I've recently switched away from using SSMS + Redgate Source Control for this purpose to using SSDT + Git and I'm very happy with the results. Redgate has served me well but it's always been a bit slow and not well integrated.

Switching to the full project model and deployment methodology in SSDT really makes this work a lot easier, especially the ability to merge schema changes in either direction (so you can easily review and accept/reject those rogue DBA changes from production :)

Subject: Re SSDT + Git works well
Posted by: Phil Factor (view profile)
Posted on: Friday, March 20, 2015 at 12:34 PM
Message: I've amended the article to include the DACFx (SSDT) method of determining version drift in a registered database, now that it seems to work fine on the full SQL Server metadata. Thanks for the info, and sorry it took so long!
