23 August 2013

Test Your SQL Server Backups with PowerShell

The best way of checking SQL Server backups is to restore them and run DBCC CHECKDB on the restored database. To do this regularly means that you need to automate the task. Allen White shows how, with PowerShell.

You’re a good DBA, and you know that the first thing a DBA has to do is make sure that backups are done. Every one of us knows that backups are the single most important thing we’re responsible for, so we make sure they’re done. So my next question to you is ‘have you tested them?’

Sure, we all periodically have to restore a copy of a database to recover some data that got inadvertently deleted, and we haven’t had a problem with that, so we’re good, right? Well, not necessarily. Are we really good? Are we testing all of our backups? Are we running DBCC CHECKDB against those databases? Sometimes running CHECKDB can impact server performance in unfortunate ways, so maybe we’re not doing it as consistently as we should. Is that bad? (Yeah, it really is. You know it, but you’re busy, and you’ve got all these user requests to get done first, right?)

One of the things I love about PowerShell is the ability to automate tasks, so my solution to this problem uses PowerShell and the new Backup-SqlDatabase and Restore-SqlDatabase cmdlets I presented in my last article. I’ve created a script which you can download, use, modify, or just study. It uses two separate instances of SQL Server, which you specify on the command line when you invoke the script, and a directory which is accessible to both instances, and to which the service accounts for both instances have full rights. I’ve added an alternative version for Red Gate SQL Backup Pro as an attachment to this article.

The first thing we want to do is create a backup of all the user databases in the first instance.  I’ll assume that this instance is your production instance, and has the databases which your business relies upon. We don’t want to interfere with the log chain in these databases, so we’ll do a ‘copy only’ backup.  As it backs these databases up it writes the backups to the directory you’ve specified on the command line, and since both Enterprise and Standard editions of SQL Server 2008 R2 and newer support backup compression, we’ll include that option as well.

After the backups are complete we’ll connect to the destination instance and determine the default file and log directory locations, so we can intelligently place the data files as we restore them.  We’ll also use WMI (Windows Management Instrumentation) to determine the space available on the destination drives, and use that to make sure we have enough space on those drives before attempting to restore the databases.

Finally we’ll run DBCC CHECKDB against each of the user databases on the destination server, to make sure that there aren’t any corruption issues in the production databases that haven’t been caught yet.

Sound like a plan? OK, let’s get started.

The first thing we’ll do is to provide comment-based help for anyone wanting to use the script. To see the results of this, use the Get-Help cmdlet followed by the path and name of the script.
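Something along these lines; this is a minimal sketch, and the script name and the parameter names ($SourceInstance, $DestInstance, $BackupPath) are illustrative rather than necessarily those in the downloadable script. The same names are reused in the fragments that follow.

    <#
    .SYNOPSIS
        Backs up every user database on a source instance, restores each backup
        to a destination instance, and runs DBCC CHECKDB against the restores.
    .PARAMETER SourceInstance
        The production instance whose databases will be backed up.
    .PARAMETER DestInstance
        The instance where the backups will be restored and checked.
    .PARAMETER BackupPath
        A directory that the service accounts of both instances can write to.
    .EXAMPLE
        .\Test-SqlBackups.ps1 WS12SQL01 WS12SQL02 \\WinSrvrDC\BackupWork
    #>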

After the help we’ll specify the parameters for the script. We’ll precede the parameters with the CmdletBinding() attribute to allow advanced parameter features to be supported in our scripts. We’ll precede each parameter with a comment, which will be displayed to the user should they neglect to include any required parameters, and we’ll set each parameter to be mandatory.
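In sketch form:

    [CmdletBinding()]
    param(
        # The source (production) instance to back up
        [Parameter(Mandatory=$true)][string]$SourceInstance,
        # The destination instance where the backups are restored and checked
        [Parameter(Mandatory=$true)][string]$DestInstance,
        # A directory both instances' service accounts can read and write
        [Parameter(Mandatory=$true)][string]$BackupPath
    )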

As always, we should provide error handling, and I still tend to use the Trap statement for that purpose. (Note that I specify ‘continue’ rather than ‘break’, because I want to try to restore each database backed up, even if one restore might fail.)
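A minimal version of that trap:

    trap {
        # Report the error, then resume at the next statement so that one
        # failed backup or restore doesn't stop the remaining databases
        Write-Error $_
        continue
    }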

Because the new backup and restore cmdlets are part of the SQL Server 2012 SQLPS module, we need to test to see if it’s loaded, and load it if it hasn’t already been.  Also, because loading the module changes the current location the script’s running in, I added Push-Location to store the current location before loading the module, and Pop-Location after it, to return us to our original location.
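That test looks something like this:

    if (-not (Get-Module -Name SQLPS)) {
        Push-Location            # remember where we are...
        Import-Module SQLPS -DisableNameChecking
        Pop-Location             # ...since importing SQLPS moves us to SQLSERVER:\
    }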

Now we can connect to the source instance to perform our backups. We’ll create an SMO Server object, connect to the instance with that, and pull in the Databases collection. We’ll also create an empty collection to store the database name and backup file name for each backup we do. We create the backup file name by taking the database name and concatenating the current date and time to it. For each backup we create a new system object with two note properties, one for the database name and one for the backup file path and name, add that object to our collection, and then back up the database.
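Here’s a sketch of that backup phase, using the illustrative parameter names from above:

    $srcsvr = New-Object Microsoft.SqlServer.Management.Smo.Server $SourceInstance
    $backups = @()
    foreach ($db in ($srcsvr.Databases | Where-Object { -not $_.IsSystemObject })) {
        $dbname  = $db.Name
        $bakfile = Join-Path $BackupPath ("{0}_{1}.bak" -f $dbname, (Get-Date -Format 'yyyyMMddHHmmss'))
        # Record the database name and backup file for the restore phase
        $bak = New-Object System.Object
        $bak | Add-Member -MemberType NoteProperty -Name DatabaseName -Value $dbname
        $bak | Add-Member -MemberType NoteProperty -Name BackupFile -Value $bakfile
        $backups += $bak
        # Copy-only, so the log chain is untouched, and compressed
        Backup-SqlDatabase -ServerInstance $SourceInstance -Database $dbname `
            -BackupFile $bakfile -CopyOnly -CompressionOption On
    }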

If you want to use SQL Backup Pro, you’ll need to use Transact-SQL. Normally you’d use the Invoke-Sqlcmd cmdlet to send T-SQL to the server, but there’s a bug (that’s been fixed – see Connect: invoke-sqlcmd -querytimeout 0 still times out by Grant Fritchey) that may prevent large databases from backing up correctly.  Instead, ADO.NET makes perfect sense. (Notice that we open the connection to run the query, then close it after it’s done, following database development best practices.)

We use these commands instead of the Backup-SqlDatabase command.
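A sketch of those commands, assuming $dbname and $bakfile come from the backup loop above; check the exact sqlbackup switches (and the .sqb file extension SQL Backup Pro normally uses) against your version’s documentation:

    $sql = "EXEC master..sqlbackup '-SQL ""BACKUP DATABASE [$dbname] TO DISK = ''$bakfile'' WITH COPY_ONLY""'"
    $connstr = "Server=$SourceInstance;Database=master;Integrated Security=SSPI;"
    $conn = New-Object System.Data.SqlClient.SqlConnection($connstr)
    $cmd = New-Object System.Data.SqlClient.SqlCommand($sql, $conn)
    $cmd.CommandTimeout = 0      # no timeout, so large backups can finish
    $conn.Open()                 # open just long enough to run the command...
    [void]$cmd.ExecuteNonQuery()
    $conn.Close()                # ...then close it again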

The backups are done, so it’s time to connect to the destination instance using the SMO Server object, and determine the default file and log locations for the destination databases.
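A sketch of that, using SMO’s Settings and Information properties; where no default has been set explicitly, the master database’s own location is the effective default:

    $destsvr = New-Object Microsoft.SqlServer.Management.Smo.Server $DestInstance
    $datadir = $destsvr.Settings.DefaultFile
    $logdir  = $destsvr.Settings.DefaultLog
    # Empty strings mean no explicit default, so fall back to master's location
    if ($datadir.Length -eq 0) { $datadir = $destsvr.Information.MasterDBPath }
    if ($logdir.Length -eq 0)  { $logdir  = $destsvr.Information.MasterDBLogPath }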

Using the Win32_LogicalDisk class in WMI, we can capture the size and free space of each of the disk drives on our destination server. We’ll store just the disk drive (DeviceID) and free space in note properties in a system object and add them to a collection we’ll use during the restore process.
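Something like this, taking the destination machine name from the SMO Server object:

    $destmachine = $destsvr.Information.NetName
    $disks = @()
    # DriveType 3 = local fixed disk
    foreach ($drive in (Get-WmiObject Win32_LogicalDisk -ComputerName $destmachine -Filter 'DriveType = 3')) {
        $d = New-Object System.Object
        $d | Add-Member -MemberType NoteProperty -Name DeviceID -Value $drive.DeviceID
        $d | Add-Member -MemberType NoteProperty -Name FreeSpace -Value $drive.FreeSpace
        $disks += $d
    }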

Now, for each backup we’ll store the filename and database name in simple string variables and create a new backup device item object for the backup.  This backup device gives us a lot of information about the backup, which we’ll use when restoring on the destination instance.  We’ll also create an empty collection for the relocation objects we’ll need to be able to ‘MOVE’ the restored database data files to the new directories on the destination instance.

First, here’s how it’s done for a native backup.
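In sketch form; in the full script this fragment (and the ones that follow) runs once per entry in the $backups collection:

    $bakfile = $bak.BackupFile
    $dbname  = $bak.DatabaseName
    # A BackupDeviceItem lets SMO interrogate the backup file's contents
    $bdi = New-Object Microsoft.SqlServer.Management.Smo.BackupDeviceItem($bakfile, 'File')
    # One RelocateFile object per data/log file will be collected here
    $relocate = @()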

We have to create an SMO Restore object to be able to read the contents of the backup device, so we’ll do that, and add the backup device to the Restore Devices collection.  We can then use the Restore’s ReadFileList() method to see what files need to be ‘moved’.
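For example:

    $restore = New-Object Microsoft.SqlServer.Management.Smo.Restore
    $restore.Devices.Add($bdi)
    # Returns a DataTable with one row per file in the backup, including
    # LogicalName, PhysicalName, Type ('D' = data, 'L' = log) and Size
    $filelist = $restore.ReadFileList($destsvr)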

For each file we want to determine the filename and path, and its size on disk. We can use PowerShell’s Split-Path cmdlet with the -Leaf argument to get just the file name without the path (which is useful, since we’ll be placing that file into a new directory on a new drive anyway), and again later with the -Qualifier argument on the default file or log path to determine just the disk drive for our space calculation. We’ll create a RelocateFile object for each file, setting its LogicalFileName from the source file list and its PhysicalFileName from the default file or log directory combined with the physical file name we split out, and we’ll use the file size for the space check.
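A sketch of that loop, folding the space subtraction in while each file’s size is to hand:

    foreach ($file in $filelist.Rows) {
        # Keep just the file name; it's headed for a new directory anyway
        $physname = Split-Path $file.PhysicalName -Leaf
        if ($file.Type -eq 'L') { $dir = $logdir } else { $dir = $datadir }
        $relocate += New-Object Microsoft.SqlServer.Management.Smo.RelocateFile($file.LogicalName, (Join-Path $dir $physname))
        # -Qualifier yields just the drive, e.g. 'E:'
        $drive = Split-Path $dir -Qualifier
        # Deduct this file's size from the running free-space figure for its drive
        $disks | Where-Object { $_.DeviceID -eq $drive } |
            ForEach-Object { $_.FreeSpace -= $file.Size }
    }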

If you’re using SQL Backup Pro, you’d instead read the file list with Transact-SQL.
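A sketch, reusing the ADO.NET pattern from the backup phase; here the connection is to the destination instance, which also needs SQL Backup Pro installed, and again the sqlbackup syntax should be verified against your version:

    $connstr = "Server=$DestInstance;Database=master;Integrated Security=SSPI;"
    $conn = New-Object System.Data.SqlClient.SqlConnection($connstr)
    $sql = "EXEC master..sqlbackup '-SQL ""RESTORE FILELISTONLY FROM DISK = ''$bakfile''""'"
    $cmd = New-Object System.Data.SqlClient.SqlCommand($sql, $conn)
    $cmd.CommandTimeout = 0
    $da = New-Object System.Data.SqlClient.SqlDataAdapter($cmd)
    $filelist = New-Object System.Data.DataTable
    # Fill() opens and closes the connection itself when it isn't already open
    [void]$da.Fill($filelist)

The resulting DataTable carries the same LogicalName, PhysicalName, Type and Size columns as the native file list, so the relocation loop above should work unchanged.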

We want to make sure we’re not going to run out of space, so we loop through the disk drive collection we stored earlier, and subtract the amount of space for the file from the FreeSpace property in the collection for that drive.  Then, after setting a variable called $spaceok to $True, we loop through the drives looking for any where the space has fallen below zero. If we find one, we set the variable to $False.
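With the subtraction already folded into the file loop above, the check itself is simple:

    $spaceok = $True
    foreach ($d in $disks) {
        if ($d.FreeSpace -lt 0) {
            Write-Warning ("Not enough free space on {0} to restore {1}" -f $d.DeviceID, $dbname)
            $spaceok = $False
        }
    }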

After checking to see that our $spaceok variable is still set to $True we can restore the database. But if someone’s connected to the database we won’t be able to restore it, so we’ll use the SMO Server object method called KillAllProcesses and specify the name of the database we’re about to restore as the method argument to kick anyone out of that database.  Then we can restore the database.
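In sketch form:

    if ($spaceok) {
        # A restore needs exclusive access, so drop any existing connections
        $destsvr.KillAllProcesses($dbname)
        Restore-SqlDatabase -ServerInstance $DestInstance -Database $dbname `
            -BackupFile $bakfile -RelocateFile $relocate -ReplaceDatabase
    }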

The last thing we need to do is to run DBCC CHECKDB against all the databases on the destination server.  That’s done via an SMO Database method called CheckTables.  Any errors returned by the CHECKDB will be returned to the console, and will also be posted on the server’s error log for later reference.
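A sketch of that final loop:

    # Refresh the collection so the databases we've just restored are visible
    $destsvr.Databases.Refresh()
    foreach ($db in ($destsvr.Databases | Where-Object { -not $_.IsSystemObject })) {
        # 'None' asks for a pure check with no repair; any corruption messages
        # appear on the console and in the SQL Server error log
        $db.CheckTables('None')
    }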

The complete script can be downloaded here for Native Backup and here for Red Gate Backup. Both are also available from the links to the right of the article title.

I’ve used a set of virtual machines I’ve built with a domain controller (where I’ve also put the file share to store the backup files), a client machine where I’m running the PowerShell console, and two servers running SQL Server 2012 on Windows Server 2012 Core. I’ve set up a set of databases on WS12SQL01, the “production” server, and we’ll restore those databases and test them on WS12SQL02. The share where the databases will be stored is \\WinSrvrDC\BackupWork and the SQL Server service account on both WS12SQL01 and WS12SQL02 has full access to that share.

To test the help features, first I’ll use Get-Help with the name of the script.

[Screenshot: Get-Help output showing the script’s comment-based help]

Now, to run the script with the proper parameters.

[Screenshot: the script running with the source instance, destination instance and backup share parameters]

Using the Backup-SqlDatabase and Restore-SqlDatabase cmdlets allows you to see the progress of the backups and restores while the script’s running.

[Screenshot: backup and restore progress reported in the console]

As you can see, one of my production databases is corrupted: the results appear in the command window, and the same messages are posted to the error log. Most importantly, you know that the production database has a problem, and you can take steps to resolve it before it causes greater harm, should the application encounter that corrupted data.

By running this script on a regular basis you can test to make sure your backups are sound and usable, and that your production databases don’t have hidden corruption, without impacting your production environment.


Allen White

Comments

  • Edward J Pochinski

    Using sqlps for this, Nicely Done !!
I did this 3 years back with T-SQL and pulled the .bak files from a tape server to a restore server. Very nicely done with sqlps, and wow, much less code than I needed. I also had to build a pipe log that I FTP, and this gets loaded & tracked via a web site.
    EjP

  • timothyawiseman@gmail.com

    Nicely Done
    I agree, this is nicely done. I rather like the scripts. But why have it create additional backups?

When I test my backups, I normally make a copy of one of the production backups to test, rather than take a new backup. That both saves the time to make the new backup and ensures that the actual backups I am relying on are working properly.

  • Frimbyte

    QueryTimeOut
    As regards the Querytimeout bug, I got round that by using "-querytimeout ([int]::MaxValue)"
