Phil Factor

Database Deployment: The Bits - Agent Jobs and Other Server Objects

23 May 2013

Database delivery patterns & practices

STAGE 3 CONTINUOUS INTEGRATION

Databases often need more than just the database objects to run. There may be certain server objects and components, and SQL Server Agent objects, that are required as well. For an automated deployment, these need to be identified and their build scripts placed in source control. They then need to be deployed via a pre- or post-deployment script. Phil spells out how and why.

You've stored the source of your database in source control. When it's time to deploy, you extract all the database objects from source control and build the system on your integration server, but it doesn't run. Why not? That depends, of course, but it is likely that you've missed out part of the system that is essential for running the database.

A database of any complexity is more than just a collection of tables, views and routines. Aside from database objects, we need to consider a number of server objects, such as endpoints, linked servers and server triggers. There will be SQL Server Agent objects, exposed by the JobServer class in Server Management Objects (SMO), such as Agent alerts, Agent jobs, schedules, and operators. You will probably need different security objects, such as those associated with your application. There are ETL processes, driven by PowerShell or SSIS, and other processes that defy generalization.

Sometimes, several databases share a SQL Server instance. At the other extreme, we might spread a single logical database across several linked instances, containing several physical databases. Co-hosted databases all share the same system databases, so the jobs and alerts for all databases on the instance are stored in the same msdb database. Likewise, SQL Server defines linked-server mappings at the instance level, so they are not stored with the database either. If these mappings are missing when you deploy a database, then your links won't work: instead, you'll get an error.

Yes, SQL Server has its own version of DLL Hell. When deploying a database, we need to consider all these extra resources, check all dependencies, and create all necessary server objects at the correct version. There is no automated process that will reliably do this for us retrospectively.

In this article, I'll describe the scripting details and considerations for all important server objects and then present a PowerShell script that will script out all of your existing server objects. After some initial tidying up, such as weeding out those objects irrelevant for the application, and ensuring Alert and Job names accurately identify the associated database, we can then modify this PowerShell script to capture only the relevant objects, and run it every time we perform server object updates. In this way, we ensure that Source Control always has the latest versions.

Having done all this, database deployments become much smoother. Armed with a deployment script for the database, we can deploy and integrate the required server objects from a pre- or post-deployment script, all from Source Control. Remember that the configuration of security objects will vary between development, test, integration and production, so we often need different post-deployment scripts for different destinations, or need to define categories of destination server, such as "test", "integration" and "production". The logic here can get quite complex: some deployments use IF/THEN logic to apply the right security, while others store a security script for each type of destination.
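
As a minimal sketch of the IF/THEN approach, assuming a convention in which the server name itself carries the category of destination (the category suffixes, schema and principal names here are all invented for illustration):

-- hypothetical convention: server names carry a suffix such as '-PROD' or '-TEST'
IF @@SERVERNAME LIKE N'%-PROD%'
   GRANT EXECUTE ON SCHEMA::Orders TO AppServiceAccount;           -- production: application account only
ELSE IF @@SERVERNAME LIKE N'%-TEST%' OR @@SERVERNAME LIKE N'%-INT%'
   GRANT EXECUTE, VIEW DEFINITION ON SCHEMA::Orders TO Developers; -- looser rights for test/integration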

Scripting Details

We need to store in Source Control the T-SQL scripts to create the latest versions of all of the required Agent and other server objects (jobs, alerts, linked server mappings, endpoints, triggers, and so on). SMO exposes these objects through the Server and JobServer classes, and we'll create a PowerShell script to iterate over the various collections of these classes, containing the jobs, alerts and other server objects, along with their associated operators, schedules and so on. It scripts out these objects into a directory named after the server.

As noted in the introduction, not every object will be relevant to the application, so after running the script for the first time, you'll have some tidying up to do, weeding out irrelevant objects. You'll need a reliable way to 'flag' which job, alert, endpoint, or whatever, is part of which database. There are a few ways to do this, one of which is to use the permissions on the resource. However, using the name is probably the easiest, since it's a property common to all the objects. In some cases, such as with job steps, things are easy, since SQL Server allows you to assign them explicitly to a database, but this isn't generally the case. Using the name is ugly, but it serves the purpose.
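
As a minimal sketch, assuming a hypothetical 'DatabaseName - task' naming convention, the application's Agent jobs can then be picked out of msdb by name alone:

-- 'AdventureWorks' and the 'DatabaseName - task' convention are just examples
SELECT name, description
FROM msdb.dbo.sysjobs
WHERE name LIKE N'AdventureWorks - %';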

Agent Jobs and Job Steps

DBAs use SQL Server Agent jobs for all manner of tasks, such as running background processes, maintenance tasks, backups, and ETL. I've never dealt with a database that has no Agent jobs.

Agent jobs are generally, but by no means always, T-SQL batches. They're easy to miss because, unlike job steps, SQL Server does not allow you to associate jobs with a specific database. SQL Server stores the jobs for all databases on the instance, hugger-mugger, in SQL Server Agent's msdb database, which is why it is important to back up msdb.

Even if no job step is explicitly assigned to a specific database, the job can certainly affect one. There could be Agent job steps in the form of PowerShell scripts, ActiveX scripting jobs, replication jobs, Analysis Services jobs, Integration Services jobs, executable programs or batch processes, as well as the most common T-SQL batch jobs.

We need a copy of any code or executable that is run from these job steps, but we don't want the replication jobs. There is no reliable, automated way of picking out only the job steps that are relevant to your database unless, of course, each step has been correctly assigned to its database. What's in that SSIS job, for example? Are you seriously going to examine that PowerShell script to find out which database it is accessing?
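
Where a T-SQL job step has been explicitly assigned to a database, msdb records the fact, so we can at least list those steps. The query below is a quick sanity check (the database name is an example), but note that it is blind to what a PowerShell or SSIS step actually touches:

SELECT j.name AS job_name, s.step_id, s.step_name, s.subsystem, s.database_name
FROM msdb.dbo.sysjobs AS j
  JOIN msdb.dbo.sysjobsteps AS s ON s.job_id = j.job_id
WHERE s.database_name = N'AdventureWorks';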

To make sense of this, we need to be careful, and keep everything neat and tidy. Continuous integration or deployment is not always a free lunch.

  • Don't access several logically separate databases from the one job unless it is an administrative server job, such as backups or maintenance. For application tasks, it is one job, one database (see the sketch after this list). Otherwise, what happens when the database versions get 'out-of-sync'?
  • Document, in the name and description of the job, which database(s) it accesses. Use the description to provide special detail, such as noting a PowerShell script on the server that is called from a PowerShell-based job step and which should also be saved.
  • Keep the source of the job in source control.
  • Ensure that the creation or update of the job is repeatable.
  • Ensure that the job ownership is consistent and logical.
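
Here is a minimal sketch of a job that obeys these rules, built with msdb's documented stored procedures; the job name, owner and the stored procedure it runs are hypothetical placeholders:

EXEC msdb.dbo.sp_add_job
     @job_name = N'AdventureWorks - Process completed orders',
     @description = N'Application job: accesses the AdventureWorks database only.',
     @owner_login_name = N'sa';                -- consistent, logical ownership
EXEC msdb.dbo.sp_add_jobstep
     @job_name = N'AdventureWorks - Process completed orders',
     @step_name = N'Run the order-processing batch',
     @subsystem = N'TSQL',
     @database_name = N'AdventureWorks',       -- assign the step explicitly to the database
     @command = N'EXECUTE Orders.ProcessCompletedOrders;';
EXEC msdb.dbo.sp_add_jobserver                 -- register the job on the local server
     @job_name = N'AdventureWorks - Process completed orders';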

Once we have this, the task is easier, but we're not done yet. Are all the SSIS tasks in Source Control too? Those batch files that are called from job steps need to be saved as well. Is that PowerShell script executing a file as a script block? If so, that file must be saved too. Are server-based executable files involved? Someone needs to ensure all of this is in Source Control, and then check that none of it has changed since the last deployment (I find that the description field of the job step helps).

In our post-deployment script (or pre-deployment script, if the database build scripts depend on it), we need to go through each job, delete the current version if it exists, and then execute the creation script for each job. The script attached to this article generates the T-SQL that does this initial deletion for you. If you use a script generated by SMO, then beware: in some earlier versions, the drop command specifies a static job_id, whereas the creation generates a dynamic job_id, so the script will run the first time but fail every subsequent time. If you come across one of these, alter it to use the object name instead.
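
The safe, repeatable pattern is to drop the job by name and then run the creation script held in source control; a sketch, using the hypothetical job name from earlier:

IF EXISTS (SELECT 1 FROM msdb.dbo.sysjobs
           WHERE name = N'AdventureWorks - Process completed orders')
   EXEC msdb.dbo.sp_delete_job
        @job_name = N'AdventureWorks - Process completed orders';
-- ...then execute the job's creation script from source control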

Agent Alerts

Agent alerts are generally more relevant to the server than the database. However, for the application, it is a good idea to set up special alerts for severe database errors. I like to handle message 825 (a read that only succeeded after retries, a warning of failing I/O), and define a separate alert for each of severities 19 to 25. Where I've worked, it is often the responsibility of the development DBA to make sure that all the alerts are set up ready for the production DBA.

I also like to alert on severity levels 10 to 16, which indicate faults in the database application code, usually programming errors and input errors. Usually, the end user hears about these before the developers, which is bad practice. There will also be various performance alerts, set up to notify the operator when a certain performance condition occurs.
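
As a sketch of both kinds of alert, assuming an operator named 'AdventureWorks developers' has already been created (see the next section); the alert and database names are placeholders:

EXEC msdb.dbo.sp_add_alert
     @name = N'AdventureWorks - severity 21 error',
     @message_id = 0, @severity = 21,
     @database_name = N'AdventureWorks',       -- restrict the alert to the one database
     @include_event_description_in = 1;        -- 1 = include the error text in the email
EXEC msdb.dbo.sp_add_alert
     @name = N'AdventureWorks - read-retry error 825',
     @message_id = 825, @severity = 0,
     @database_name = N'AdventureWorks';
EXEC msdb.dbo.sp_add_notification
     @alert_name = N'AdventureWorks - severity 21 error',
     @operator_name = N'AdventureWorks developers',
     @notification_method = 1;                 -- 1 = email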

Alert notifications will change according to the type of installation, in order to specify the right recipient. For integration or user-acceptance testing, the database developers need to know about these errors, whereas in production, it is the production DBA who needs to know.

We need to script all these alerts and, like jobs, we should use the name to signal which alerts belong to which database.

Operators

These are the recipients of the alerts, the addresses notified by email of corruption, errors in code, job failures, or major problems. The operator is likely to be different in development, test, integration and production.
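
A sketch of an operator definition; the name and address are placeholders, and the post-deployment script would substitute the right recipient for each type of server:

EXEC msdb.dbo.sp_add_operator
     @name = N'AdventureWorks developers',
     @enabled = 1,
     @email_address = N'dev-team@example.com'; -- swap for the production DBA's address in production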

SharedSchedules

We use shared schedules to kick off regular jobs. Imagine that our application has a background task that processes all the completed orders, so as not to slow down the purchasing process. The schedule that kicks off that job is part of our application, and shouldn't logically be in the same pot as a scheduled process to rebuild indexes.
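
A sketch of such a schedule, attached to the hypothetical order-processing job from earlier; the frequency values are SQL Server Agent's documented codes:

EXEC msdb.dbo.sp_add_schedule
     @schedule_name = N'AdventureWorks - every five minutes',
     @freq_type = 4,            -- 4 = daily
     @freq_interval = 1,        -- every day
     @freq_subday_type = 4,     -- 4 = the subday unit is minutes
     @freq_subday_interval = 5; -- every five minutes
EXEC msdb.dbo.sp_attach_schedule
     @job_name = N'AdventureWorks - Process completed orders',
     @schedule_name = N'AdventureWorks - every five minutes';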

ProxyAccounts

Proxy accounts, or proxies, are useful when some principals need to access external resources, but it would be unwise, for security reasons, to grant the necessary extra permissions to those principals directly. To run a PowerShell script, for example, we can map the Agent account, purely for that task, to another account that already has the required rights. This is generally going to be a database-level security device, so we need to install the appropriate proxies on the server to which the database is deployed. You may even need different proxies in development, test, integration and production.
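
A sketch of a proxy for PowerShell job steps; the credential name, Windows account and secret are all placeholders:

CREATE CREDENTIAL ETLFileAccess
   WITH IDENTITY = N'MYDOMAIN\ETLServiceUser', -- an account that already has the rights
        SECRET = N'placeholder-password';
EXEC msdb.dbo.sp_add_proxy
     @proxy_name = N'AdventureWorks - PowerShell proxy',
     @credential_name = N'ETLFileAccess',
     @enabled = 1;
EXEC msdb.dbo.sp_grant_proxy_to_subsystem
     @proxy_name = N'AdventureWorks - PowerShell proxy',
     @subsystem_name = N'PowerShell';          -- the proxy may run only PowerShell steps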

TargetServers

In order to allow multi-server administration, a target server allows us to run a job against a different server from the one on which it is defined. A master server distributes jobs to, and receives events from, a target server. We can use this feature to define a job once, on a master server, and have it executed against many servers. I've never used this within an application, or come across its use, but I can imagine that it has its uses.

Server Triggers

Server-scoped triggers appear in the SQL Server Management Studio Object Explorer, in the Triggers folder. We can use them to track server-wide DDL changes, such as the creation or dropping of a database, and login attempts to the current server. The trigger fires whenever the specified event occurs anywhere in the current server. Their main use is in server security and audit, but an application might also have a use for them.
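
A sketch of a server-scoped DDL trigger that records database creations and drops; the audit database and table are hypothetical:

CREATE TRIGGER TrackDatabaseDDL
ON ALL SERVER
FOR CREATE_DATABASE, DROP_DATABASE
AS
   -- record who created or dropped a database, and when
   INSERT INTO AuditDB.dbo.DDLEvents (EventTime, LoginName, EventXml)
   VALUES (SYSDATETIME(), ORIGINAL_LOGIN(), EVENTDATA());
GO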

LinkedServers

An application might use linked servers for ETL, distributed database layering or data transfer. I've also seen them used for archiving old data. We need to install these with the application, and it makes deployment neater if we obey the following rules (a creation sketch follows the list):

  • One linked server, one database
  • Keep the source of the linked server in source control
  • Ensure that the creation or update is repeatable.
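
A sketch of a repeatable linked-server creation; the server names are invented, and dropping any existing definition first is what makes the script repeatable:

IF EXISTS (SELECT 1 FROM sys.servers WHERE name = N'ArchiveServer')
   EXEC master.dbo.sp_dropserver @server = N'ArchiveServer', @droplogins = 'droplogins';
EXEC master.dbo.sp_addlinkedserver
     @server = N'ArchiveServer',
     @srvproduct = N'',
     @provider = N'SQLNCLI',
     @datasrc = N'archive-host\SQL2008';       -- the instance holding the archive database
EXEC master.dbo.sp_addlinkedsrvlogin
     @rmtsrvname = N'ArchiveServer',
     @useself = N'TRUE';                       -- map callers to their own credentials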

The PowerShell Script

Finally, here is the general-purpose PowerShell script that I use for initial creation or update of the T-SQL scripts for the various server objects, such as Triggers, Backup Devices, Linked Servers, Logins, User-Defined Messages, Alerts, Jobs, Operators, Shared Schedules, Proxy Accounts and Target Servers. Of course, you can extend it to any others you require.

It saves each object into a separate file, named from the type of object and the object's name. It will do as many servers as you like from a list, or use your registered servers.

I find it a good way of getting things started; I then prune the list of server objects according to what is relevant to the application. This version will script all objects of the types you specify in the list at the start of each of the two scripting pipelines, but once you have named your objects systematically, you can introduce a Where-Object clause into the pipeline (for example, Where-Object { $_.Name -like 'AdventureWorks - *' }) so that it takes notice of only those server objects that are relevant to your application.

<# This script generates T-SQL creation scripts for all your server objects. They are placed in a
subdirectory of the directory you specify, based on the name of the server. Each server object is
in its own file, named after the type of the object and the name of the object. #>

$directory = 'E:\MyScriptsDirectory' # the directory where you want to store the scripts

<# if you only have one server, then you need a comma before it (a PowerShell quirk!) e.g.
$servers = @(,('mkt-philf01','PhilFactor')) #>

$servers = @( # put the servers that you want scripts for in here
  ('mkt-philf01', 'PhilFactor'), # a way of allowing an instance with SQL Server authentication
  ('andrewctest.testnet.red-gate.com\sql2008', '')) # a server with Windows authentication

Trap {
  # Handle the error
  $err = $_.Exception
  write-error $err.Message
  while ($err.InnerException) {
    $err = $err.InnerException
    write-error $err.Message
  }
  # End the script.
  break
}

set-psdebug -strict
$ErrorActionPreference = "stop"

# Load the SMO assemblies, checking that each load was successful
$MS = 'Microsoft.SQLServer'
@('.SMO', '.SmoExtended', '.ConnectionInfo') |
  foreach-object {
    if ([System.Reflection.Assembly]::LoadWithPartialName("$MS$_") -eq $null)
      { Throw "missing SMO component $MS$_" }
  }

$My = "$MS.Management.Smo" # just to save lots of typing
$BadChars = '[\\\/\:\.]' # the characters you don't want in filenames

# create the scripting options just the once
$ScriptOptions = new-object ("$My.ScriptingOptions")
$ScriptOptions.ScriptBatchTerminator = $true # this only goes to the file
$ScriptOptions.IncludeHeaders = $true # of course
$ScriptOptions.ToFileOnly = $true # no need of string output as well
$ScriptOptions.IncludeIfNotExists = $true # safer!
$ScriptOptions.ScriptDrops = $false

# and now do it for every server in your list
$servers |
  foreach-object {
    $ServerName = $_[0] # the name of the server or instance for this job
    $Login = $_[1].Trim()
    if ($Login -ne '')
      { # SQL Server authentication: prompt for the password
        $Credentials = get-credential -Credential $Login
        $mySrvConn = new-object Microsoft.SqlServer.Management.Common.ServerConnection
        $mySrvConn.ServerInstance = $ServerName
        $mySrvConn.LoginSecure = $false
        $mySrvConn.Login = $Credentials.UserName
        $mySrvConn.SecurePassword = $Credentials.Password
        $s = new-object ("$My.Server") $mySrvConn }
    else
      { $s = new-object ("$My.Server") $ServerName } # find the server instance
    if ($s.Version -eq $null) { Throw "Can't find the instance $ServerName" }
    # we can now establish the directory name based on the server, and maybe create the directory
    $Scriptdirectory = "$directory\$($s.DomainInstanceName -replace $BadChars,'-')"
    if (![System.IO.Directory]::Exists($Scriptdirectory)) # a little easier than doing it the PowerShell way
      { $null = [System.IO.Directory]::CreateDirectory($Scriptdirectory) }
    $ScriptOptions.AppendToFile = $false # start each server's Agent file afresh
    $ScriptOptions.Filename = "$Scriptdirectory\Agent.sql"
    $s.JobServer.Script($ScriptOptions) # script out the agent creation (its properties, in fact!)
    # iterate over the server-level collections we want to script out...
    @('Triggers', 'BackupDevices', 'LinkedServers') | # you might want 'Logins', 'UserDefinedMessages'
      foreach { $s."$_" } | # we can get a description of each type from the URN and write each object out
      foreach {
        $ScriptOptions.Filename = "$Scriptdirectory\$($_.urn.Type)-$($_.Name -replace $BadChars,'-').sql"
        $ScriptOptions.ScriptDrops = $true; $ScriptOptions.AppendToFile = $false
        try { $_.Script($ScriptOptions) } catch { "$_" } # delete the server object if it is there
        $ScriptOptions.ScriptDrops = $false; $ScriptOptions.AppendToFile = $true
        try { $_.Script($ScriptOptions) } catch { "$_" } # create the server object if not there
      }
    # ...and over the Agent (JobServer) collections we want to script out
    @('Alerts', 'Jobs', 'Operators', 'SharedSchedules', 'ProxyAccounts', 'TargetServers') |
      foreach { $s.JobServer."$_" } | # all the Agent objects we want to script out
      foreach {
        $ScriptOptions.Filename = "$Scriptdirectory\$($_.urn.Type)-$($_.Name -replace $BadChars,'-').sql"
        $ScriptOptions.ScriptDrops = $true; $ScriptOptions.AppendToFile = $false
        try { $_.Script($ScriptOptions) } catch { "$_" } # delete the agent object if it is there
        $ScriptOptions.ScriptDrops = $false; $ScriptOptions.AppendToFile = $true
        try { $_.Script($ScriptOptions) } catch { "$_" } # create the agent object
      }
  }

Final thoughts

It is much better to pause for thought before implementing anything outside the usual database objects, just to make sure that whatever you develop is going to be easy to deploy. A little more work at this stage saves much more later on in the project. Take a simple example, such as a deployment to the various test servers from your development (integration) server, and use it to practice deployment from source control until it works smoothly. My advice would be to use a naming convention that makes it clear which server-based objects belong to the application. For example, I would keep application-based Agent jobs so that they access only one database, and make this clear in every job step. I like to keep in source control all the server-configuration scripts that are required for the database, along with any external executables.

Test your scripted post-deployment processes to make sure that they collect from source control all of the necessary Agent jobs and alerts, for example, and install them on the various servers. Make some changes to the server objects on the development server, and test that you can then update source control easily. Develop these server-based objects in a way that makes them easy to deploy, with consistent naming and ownership. Don't consider the development job finished until you have tested the deployment.

You may find that different types of server, such as development 'sandbox', development 'integration', test and production will all have special configuration requirements. This will be especially true of security, replication and ETL, in which case you'll need to handle all this within the post-deployment script.

When you've tested out your post-deployment scripts, you'll find that many routine jobs become a great deal easier and so your databases will get a lot more, and more varied, testing, and developers will be less likely to consider database development to be a black art.

This article is part of our database delivery patterns & practices series on Simple Talk.

Find more articles on version control, automated testing, continuous integration and deployment.

Phil Factor

Author profile:

Phil Factor (real name withheld to protect the guilty), aka Database Mole, has 30 years of experience with database-intensive applications. Despite having once been shouted at by a furious Bill Gates at an exhibition in the early 1980s, he has remained resolutely anonymous throughout his career.


Have Your Say


Subject: Sql Server Database Project
Posted by: Emmett E. Childress (not signed in)
Posted on: Tuesday, May 28, 2013 at 8:34 AM
Message: Great article. Phil is it possible to replicate this process with a Sql Server Database Project in Visual Studio 2012? Is there anything special that needs to be done to work with Sql in Azure?

Subject: Cool - but how would you add objects to script out?
Posted by: mbourgon (view profile)
Posted on: Wednesday, May 29, 2013 at 10:51 AM
Message: Since my PS-fu is not strong enough, how do you get a list of all the possible things you can script out? If I do the following I can get a listing of a bunch of options (including JobCategory), but I don't see where Logins or UserDefinedMessages comes from.

$MS='Microsoft.SQLServer' [...etc...]
$My="$MS.Management.Smo"
$s = new-object ("$My.Server") myservername
$s.jobserver

Subject: Re: adding objects to script out
Posted by: Phil Factor (view profile)
Posted on: Wednesday, May 29, 2013 at 3:38 PM
Message: All these things are properties that are actually collections on either the server object or the jobserver object. All you need to do is to look at the SMO documentation on MSDN, get the list of properties and look for the collections. Logins and UserDefinedMessages are collections on the server object. Just add them into the list and you should see them scripted out. This technique works for database objects too, but in that case you look for the collections of the database object, such as Tables.

I hope I've understood the question right!

 
