Shawn McGehee

SQL Backup 6: Wind of Change

03 August 2009

One could be forgiven for thinking that there is little more you can do with any third-party backup software than just back up databases. It takes a DBA who has to use such software every day in a real commercial setting to appreciate the full implications of the improvements that have been made to SQL Backup for version 6. It is the hard-working DBAs who are best placed to judge whether it is all worthwhile. So Red Gate Software commissioned DBA Shawn McGehee, also a Friend of Red Gate, to give us his true thoughts on SQL Backup 6.

The decision whether to upgrade software is not always an easy one. In the past, most of us have been stung into caution by an upgrade that promised a cure but delivered a placebo instead. I won’t make you wait to find out: SQL Backup 6 does not fall into this category. Version 6 comes with a few great features that I, personally, am excited to see. There is also a friendly face from a previous version that is making a welcome comeback.

Turn it up

I think that most DBAs who use SQL Backup do so for the compression features the software delivers. In my environment we see, on average, an 82% compression rate when we use level 3 compression on our database backups. From a hardware-cost standpoint this is a huge saving. If you take an enterprise whose nightly backups would take 1 TB of space as native backups, you would be able to keep three days' worth in the same space, with plenty of room to spare.
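As a back-of-the-envelope sketch of that saving (the 82% rate and the 1 TB figure come from the example above; the script itself is just illustrative arithmetic, not part of SQL Backup):

```python
# Rough retention arithmetic for nightly backups at an 82% compression rate.
native_nightly_tb = 1.0   # nightly backup size with native (uncompressed) backups
compression_rate = 0.82   # average rate observed with level 3 compression

compressed_nightly_tb = native_nightly_tb * (1 - compression_rate)
days_retained = 3
space_used_tb = days_retained * compressed_nightly_tb

print(f"One compressed nightly backup: {compressed_nightly_tb:.2f} TB")
print(f"Three days retained:           {space_used_tb:.2f} TB of the original 1 TB")
```

Three nights at 0.18 TB each come to roughly 0.54 TB, hence the "plenty of room to spare".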

With SQL Backup 6, we are given one more level of compression to choose from: level 4. This utilizes the LZMA algorithm for an increase of about 5% in the compression rate of your backup files. That doesn’t sound like a huge change, but if we look at a few example files we can see just how much of a space saving it can lead to.

We can see that the original file from version 5.4 was 203 MB. What we can’t see, but trust me it is there, is that the compression rate on this file was 81.92%. This is almost identical to the level 3 backup taken with version 6.0, which came out to just under 204 MB with a compression rate of 82% on the button. The new level 4 backup came in at a slightly higher compression rate, just under 90%. While not a dramatically higher rate, the file size difference is much more impressive: that 8-point rise in compression yields a 44% decrease in file size. This is a huge saving on disk space, which at most shops is one of the most sought-after resources. With level 4 compressed backups, we can keep that 1 TB of native backups for 3 days and use only half the space. This is going to make your SAN administrator very happy.
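The jump from an 8-point rise in compression rate to a 44% drop in file size is easier to see as arithmetic (a sketch using the figures above):

```python
# Why an 8-point rise in compression rate (82% -> 90%) nearly halves the file:
# what matters is the fraction of data *left on disk*, not the rate itself.
remaining_at_82 = 1 - 0.82   # 18% of the original size survives level 3 compression
remaining_at_90 = 1 - 0.90   # only 10% survives at level 4

size_decrease = 1 - remaining_at_90 / remaining_at_82
print(f"File size decrease: {size_decrease:.0%}")
```

Going from 18% of the original size to 10% is a reduction of roughly 44%, even though the compression rate itself only rose by 8 points.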

There is, however, a slight drawback to this new level of backup compression: we gain compression at the expense of backup speed. The level 3 backup took 40 seconds to complete on a local drive, while the level 4 backup took 55 seconds. 15 seconds does not sound like a lot but, just as we did with size, we can extrapolate the time difference: a backup that normally takes one hour could take over an hour and twenty minutes.
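That extrapolation is just a ratio, using the timings from the test above:

```python
# Extrapolating the level 3 vs level 4 timing difference to a one-hour backup.
level3_seconds = 40
level4_seconds = 55

slowdown = level4_seconds / level3_seconds   # 1.375x slower
one_hour_backup_minutes = 60 * slowdown
print(f"A one-hour level 3 backup could take about "
      f"{one_hour_backup_minutes:.1f} minutes at level 4")
```

At a 1.375x slowdown, a 60-minute backup stretches to about 82.5 minutes, i.e. just over an hour and twenty minutes.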

This is a trade-off you will have to weigh before making the change. As a DBA you probably want your backup jobs to complete as fast as possible while taking up the smallest amount of space they can. It is a delicate balance that you have to tweak to get right, which is why SQL Backup comes with a compression analyzer to help you make that choice.

The other issue we all have faced is that of failing backups.  In general, the longer a backup takes to complete, the more likely we are to be faced with an issue that will cause it to fail, whether it is a hardware, software or network issue.  This brings us to another great addition to SQL Backup 6.

Network Resilience


There is nothing more frustrating for a DBA than waking up at 3:00 AM to restart a backup that failed because of a network hiccup. I use the term hiccup to lighten the mood, because at 3:00 AM the last thing I want to think about is network issues. SQL Backup 6 now has built-in flexibility to handle network outages during your backup jobs, whether taking the actual backups or performing a COPYTO during the backup. This new resilience is completely configurable, either through the GUI or the SQL Backup stored procedure. Let’s take a look at the GUI configuration to see what we have control over.

We can see the new options in the GUI's Optimization section, under the Network resilience heading. They are a simple set of options that let us set:

  • a delay to wait after a network outage is detected, before a new attempt is made;
  • the number of times to retry before signalling failure.

These simple options can give us a much more robust backup job.  Let’s give it a try and see if we can simulate a network failure using one of the techie’s favorite tools, the thumb drive.

I have used the Red Gate GUI to configure a new backup script targeting my thumb drive, drive E. I am going to set up the script to attempt the backup up to 10 times, with a retry delay of 10 seconds. We are also going to use compression level 4 and 2 threads to perform the backup. You can see all of this in the final script below.

EXECUTE master..sqlbackup '-SQL "BACKUP DATABASE [RedGateTesting]
      TO DISK = ''e:\<AUTO>.sqb''
      WITH DISKRETRYINTERVAL = 10, DISKRETRYCOUNT = 10,
      COMPRESSION = 4, THREADCOUNT = 2"'
This script also shows us the two new keywords that can be, and definitely should be, used in our backup jobs.  The first is DISKRETRYINTERVAL, which is simply the amount of time in seconds to wait between each retry.  The second is DISKRETRYCOUNT, which sets the number of times we are going to retry the current backup.  Just remember that those two numbers multiplied together are going to be the delay you will have to wait before a job will alert if there is a long outage.  Any job that is going to fail will have to retry and wait for all delays configured before finally alerting you to the failure of a backup.  The options set here will also need to be carefully balanced to give you the right amount of protection without causing huge delays in the alerting of a major issue.
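For example, with the settings used in the script above, the worst-case wait before a failure is finally reported works out as follows (a sketch; the variable names simply mirror the SQL Backup keywords):

```python
# Worst-case delay before a failed backup finally alerts:
# up to DISKRETRYCOUNT retries, each preceded by a DISKRETRYINTERVAL wait.
disk_retry_interval_s = 10   # seconds between attempts (DISKRETRYINTERVAL)
disk_retry_count = 10        # number of retries (DISKRETRYCOUNT)

worst_case_delay_s = disk_retry_interval_s * disk_retry_count
print(f"Up to {worst_case_delay_s} seconds of retrying before a hard failure is reported")
```

With these values, a permanent outage adds up to 100 seconds before the job gives up and alerts; longer intervals or counts push that ceiling out proportionally.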

Now, with the script ready for use, let’s kick off the backup. Before we do, however, we are going to remove the thumb drive so that the first and second attempts fail. After 20 seconds we will plug the drive back in and check the results using either SQBSTATUS or the SQL Backup GUI.

We can see that I took two backups of the same database just a few minutes apart. One I let run completely normally; for the other I kept the thumb drive unplugged for the first 20 seconds and then plugged it back in. You can see that, from a review standpoint, both backups are identical, with the one exception of the second taking 25 seconds longer to back up. The network resilience options caught the fact that the drive was unavailable and gave me ten-second retry windows until either the backup finished or it failed. This new feature alone is going to save many DBAs from having their beauty sleep rudely interrupted in the wee hours of the morning.

Huge Improvements to a Huge Upgrade

I remember when SQL Backup 5.0 was first released. The first time I opened the new GUI, I was struck by how much cleaner and easier it was to view my entire server list. Going from a tree view of servers that I had to interrogate one at a time to a timeline view that I could simply browse for red areas of issues was a huge relief. If I had to list my all-time favorite upgrades to my DBA-related tools, this one would rank very high on that list.

Although this new GUI did put a sparkle in my eye at first, the more I used it, the more I became frustrated with its all-knowing interface. I have quite a few servers to monitor, and the GUI would load so much history for each one that the more machines I monitored, the more resources it took, and soon my new favorite GUI was all but unusable. I was forced to make the tough decision to slim down what I loaded each morning to a group of core production machines plus any servers that had given me problems the night before. It was a little disappointing, but still much better than anything I had used before.

SQL Backup 6 relieves some of these growing pains with the updates it brings to the interface. We can now control how much history is imported for each server we load in the GUI, and the import itself has been streamlined under the hood to load faster, so we can get to the task at hand instead of waiting for the GUI to process months of information.

Now that we can get to our servers faster, we can also see some of the other improvements made to the new GUI. The file browser has been redesigned to be much more user-friendly: it lets you add remote servers much more easily and navigate the local machine's files with greater ease. Just from doing a few restores with the new version, I can tell you that it is a much nicer, easier-to-use interface, and just one small piece of what makes this new version worth the upgrade.


If you have ever moved a database from one server to another, you know the issues that can arise with orphaned users. They clutter up a database and become a pain when not nipped in the bud early on. They are also a huge security concern: if an orphaned user had elevated privileges on the source system, and a matching login is later added on a destination server where that user should not have the same level of control, you can find yourself in quite a bit of trouble. This new version of SQL Backup gives you a nice new report to let you know about these poor troubled users. The report is available in the log and will tell you which users don’t have a corresponding login, so you can take care of them early. Just make sure to add the new keyword, ORPHAN_CHECK, to have this test run on the database and produce the report.

And speaking of SQL Backup log files, you no longer have to go digging through directories to find the log file you need. Another great new feature of the GUI is the ability to view these log files directly from any of the servers on which you are running SQL Backup 6. Once you complete a restore, you can double-click the successful restore notification to see the usual summary information on the operation; nothing new there. In this window, however, you will see a new button called Show Log, which queries the server to pull the log file associated with the restore or backup (a feature only available on servers with SQL Backup 6 installed). You can see from the screen that we have a few of these poor orphaned users in the database that we will have to take care of.

This new version also gives you a bit more control in the GUI's log shipping wizard. You can now select whether to overwrite the destination database with a fresh restore of the source, which can save you a headache in cases where you don’t need an up-to-the-minute data refresh. But this is not the biggest change made to the already great log shipping capabilities in SQL Backup 6.

Shape up or ship out


Log shipping can be a DBA’s friend in the best of times, and a pain in the rump at 3:00 AM when a log restore fails because of a network outage. It has long been the DBA’s cheap, easy-to-set-up answer for any reporting needs that can tolerate the 15-minute (more or less) data delay that comes with the technology. With SQL Backup you don’t have to worry about those little network hiccups when a log file can’t be copied to the target server or can’t be restored from a remote location.

SQL Backup 6 not only uses the network resilience technology mentioned earlier to make sure that a quick network hiccup or system outage doesn’t break the log shipping; it adds even more protection. The new “self-healing” log shipping makes sure that any log that fails to reach its destination is placed in a queue and processed once it finally arrives. You will not be notified of a failure until a registry-configurable time has gone by. The default is 24 hours, but it can, and I believe should, be changed by the DBA to a shorter time frame.

In my opinion, 24 hours is a little too long to wait before notification is sent to the team that will be responding. Most of my log shipping scenarios refresh more frequently than that, and if 24 hours went by before I was notified and began looking into an issue, I would not be having a very good day. The SQL Backup team has, of course, allowed us to configure this to a more reasonable time to suit our individual needs. Regardless of the defaults, this new feature will definitely help to alleviate one more of the on-call DBA’s late-night wake-up calls.

The return of an old friend

Do you remember the friendly face I mentioned at the beginning of this article? Well, it is part of the interface changes that came along with SQL Backup 6, and I can’t tell you how happy I am to see its return. The first time I scripted a restore with version 5.0 and received a failure, I thought I would easily jump back to the configuration options and see what I had messed up. Perhaps I needed to clear some connections to the database before I restarted. Well, I was dumbfounded when I couldn’t find the button to take me back to try the restore again. I might have simply fat-fingered a parameter or chosen the wrong file to restore over, but I had to start from scratch whenever I made a mistake. I felt like I had said goodbye to an old friend.

You can see above that this friend has returned and I am extremely glad to see him.  With SQL Backup 6.0 the retry button has returned and I am almost tempted to mess up my restores just so I can make sure I don’t waste the opportunity to use it again.  If you misconfigure a parameter or just forget to kill any connections to the database, you can now use this button to go back and give it another try without having to start over from the beginning.

Speaking of forgetting to kill connections to the database, SQL Backup 6 has added this to its arsenal of improvements to make your DBA life that much easier. We are now given the option, during a restore, to automatically kill any database connections that exist at the time of restoration. This is a huge benefit to those of us who have automated restores to development or QA environments. No longer do we have to use our home-brewed connection-killing scripts to get the job done. We can sleep peacefully at night knowing our restores are not going to fail because of an errant developer’s connection.

The List Goes On

There are even more features than I could expound on in this article. Even as I write this, the SQL Backup team is working on more amazing features to bring to your SQL Server environment. Not to get too far ahead of myself, but I have heard rumor (ok, it is fact) that an even newer version of SQL Backup 6 than the one I have been using will give us the ability to do object-level restores. I don’t know about you, but that pumps me up even more about this version. I have always felt this was the biggest missing feature in previous releases. As users we asked, and Red Gate delivered.

Whether you are using an older version of SQL Backup, a different product, or no third-party backup tool at all, I would recommend you at least download the trial and see for yourself. I have tried all the major third-party backup tools, and with the version 6 release of SQL Backup I am more firmly entrenched in using this software than I have ever been.

You can try out SQL Backup 6 for 14 days by downloading it from here

Shawn McGehee

Author profile:

Shawn McGehee is a DBA living in Orlando, FL. He has been working in IT since graduating high school in 1997, in positions ranging from web developer, to help desk operative, to his current position as a Senior DBA. He is heavily involved in the SQL Server community, speaking at and organizing local users groups in Florida since 2008, and is currently President of the OPASS group in Lake Mary, FL.


Have Your Say

Subject: Panacea
Posted by: Robert Carnegie (not signed in)
Posted on: Monday, August 17, 2009 at 8:42 AM
Message: If someone is selling an authentic panacea then I think that's all that I need here. I'm tired of making do with pablum and placebo.

Subject: Re: Panacea
Posted by: Andrew Clarke (view profile)
Posted on: Monday, August 17, 2009 at 12:54 PM
Message: After some heated editorial debate on the precise meaning of the term 'panacea', we changed it to 'placebo'. 'Panacea' usually refers ironically to the 'universal Panacea' or 'nostrum remedium', typified by 'Snake oil', whose claims as a universal cure were invariably found to be bogus.

Subject: Additional thought on Network resilience
Posted by: Anonymous (not signed in)
Posted on: Wednesday, August 19, 2009 at 9:20 AM
Message: Just a different thought as to how best to utilize this. I would hope that the failure of something in the network would be monitored and reported by some other source; a failing db backup is not the best method to discover network issues. While I would still say that you would not want to set this to anything completely unreasonable, I might be tempted to allow as much as 15 - 20 mins of retries to give the network support team a chance to fix the issue. On a backup that takes an hour or more, restarting is much less attractive than finishing the last 5 - 10 mins after a 15 min pause. Unless there is a way to have it continue that I missed?

Subject: RE: Additional thought on Network resilience
Posted by: ShawnNWF (view profile)
Posted on: Wednesday, August 19, 2009 at 10:24 AM
Message: One would definitely hope that there was some other monitoring method in place for the network. This new feature is in no way a replacement for network monitoring tools, but merely a way for small network issues or hiccups to be handled in a cleaner and more elegant way.

Unfortunately, the backup would not be able to continue where it left off if it were to fail at the tail end of a backup. But, for me, this is a better situation than waking up in the middle of the night for a backup restart/cleanup.

Subject: RE: Additional thought on Network resilience
Posted by: Colin Millerchip (not signed in)
Posted on: Wednesday, August 19, 2009 at 10:40 AM
Message: One thing to bear in mind is that, whilst a backup is ongoing, there is an extra load on the SQL Server, including higher memory usage. So there's a trade-off here that people need to make: how much resilience do they want against network outages, vs. how much they would be willing to lengthen the backup process. We've set default retry parameters that would lengthen a backup by a fairly small amount of time, but as Shawn points out this is completely configurable.
