Database Deployment: The Bits – Agent Jobs and Other Server Objects

Databases often need more than just the database objects to run. There may be certain server objects and components, and SQL Agent objects, that are required as well. For an automated deployment, these need to be identified and their build scripts placed in source control. They then need to be deployed via a pre- or post-deployment script. Phil spells out how and why.


You’ve stored the source of your database in source control. When it’s time to deploy, you extract all the database objects from source control and build the system on your integration server, but it doesn’t run. Why not? That depends, of course, but it is likely that you’ve missed out part of the system that is essential for running the database.

A database of any complexity is more than just a collection of tables, views and routines. Aside from database objects, we need to consider a number of server objects, such as endpoints, linked servers and server triggers. There will be SQL Server Agent objects, exposed by the JobServer class in Server Management Objects (SMO), such as Agent jobs, Agent alerts, operators and shared schedules. You will probably need various security objects, such as those associated with your application. There are ETL processes, driven by PowerShell or SSIS, and other processes that defy generalization.

Sometimes, several databases share a SQL Server instance. At the other extreme, we might spread a single logical database across several linked instances, containing several physical databases. Co-hosted databases all share the same system databases, so the jobs and alerts for all databases on the instance are stored in the same msdb database. Likewise, SQL Server defines linked-server mappings at the server-instance level, so they are not stored with the database either. If these mappings are missing when you deploy a database then your links won’t work: instead, you’ll get an error.

Yes, SQL Server has its own version of DLL Hell. When deploying a database, we need to consider all these extra resources, check all dependencies, and create all necessary server objects at the correct version. There is no automated process that will reliably do this for us retrospectively.

In this article, I’ll describe the scripting details and considerations for all the important server objects, and then present a PowerShell script that will script out all of your existing server objects. After some initial tidying up, such as weeding out objects that are irrelevant to the application, and ensuring that alert and job names accurately identify the associated database, we can then modify this PowerShell script to capture only the relevant objects, and run it every time we update server objects. In this way, we ensure that Source Control always has the latest versions.

Having done all this, database deployments become much smoother. Armed with a deployment script for the database, we can then deploy and integrate the required server objects, all from Source Control, in either a pre- or post-deployment script. Remember that the configuration of security objects will vary between development, test, integration and production, so we often need different post-deployment scripts for different destinations, or need to create different categories of destination server, such as ‘test’, ‘integration’ and ‘production’. The logic here can get quite complex: some deployments use IF/THEN logic to apply the right security, while others store a security script for each separate destination type.

Scripting Details

We need to store in Source Control the T-SQL scripts to create the latest versions of all of the required Agent and other server objects (jobs, alerts, linked server mappings, endpoints, triggers, and so on). SMO exposes these objects through the Server and JobServer classes, and we’ll create a PowerShell script that iterates over the various collections of these classes, containing the jobs, alerts and other server objects, along with the associated operators, schedules and so on. It scripts out these objects into a directory named according to the server name.
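
To give the idea, here is a minimal sketch, assuming SMO is installed and a hypothetical server called MyServer; the collection and property names are genuine SMO members:

    # Load SMO and connect to the instance
    [void][Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo')
    $server = New-Object Microsoft.SqlServer.Management.Smo.Server 'MyServer'
    # The JobServer class exposes the Agent objects as collections
    $server.JobServer.Jobs      | Select-Object Name, OwnerLoginName
    $server.JobServer.Alerts    | Select-Object Name, Severity
    $server.JobServer.Operators | Select-Object Name, EmailAddress
    # Every such object can emit its own build script as a T-SQL batch
    $server.JobServer.Jobs | ForEach-Object { $_.Script() }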

As noted in the introduction, not every object will be relevant to the application, so after running the script for the first time, you’ll have some tidying up to do, weeding out irrelevant objects. You’ll need a reliable way to ‘flag’ which job, alert, endpoint, or whatever, is part of which database. There are a few ways to do this, one of which is to use the permissions on the resource. However, using the name is probably the easiest, since it’s a property common to all the objects. In some cases, such as with job steps, things are easy because SQL Server allows you to assign them explicitly to a database, but this isn’t generally the case. Using the name is ugly, but it serves the purpose.

Agent Jobs and Job Steps

DBAs use SQL Server Agent jobs for all manner of tasks, such as running background processes, maintenance tasks, backups, and ETL. I’ve never dealt with a database that has no Agent jobs.

Agent jobs are generally, but by no means always, T-SQL batches. They’re easy to miss because, unlike job steps, SQL Server does not allow you to associate jobs with a specific database. SQL Server stores the jobs for all databases on the instance, hugger-mugger, for the SQL Server Agent in msdb, which is why it is important to back up that database.

Even if we have no job steps explicitly assigned to a specific database, they can certainly affect it. There could be Agent job steps in the form of PowerShell scripts, ActiveX scripting jobs, replication jobs, Analysis Services jobs, Integration Services jobs, executable programs or batch processes, as well as the most common T-SQL batch jobs.
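
As a rough sketch of how to survey what is there, reusing the $server object from the earlier snippet, we can list every job step with its subsystem and the database, if any, to which it is explicitly assigned:

    # Enumerate all job steps on the instance, with subsystem and database
    foreach ($job in $server.JobServer.Jobs) {
        foreach ($step in $job.JobSteps) {
            [pscustomobject]@{
                Job       = $job.Name
                Step      = $step.Name
                SubSystem = $step.SubSystem    # TransactSql, PowerShell, Ssis, CmdExec...
                Database  = $step.DatabaseName # blank for most non-T-SQL subsystems
            }
        }
    }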

We need to have a copy of any code or executable that is run from these job steps, but we don’t want the replication jobs. There is no reliable, automated way of unpicking only the job steps that are relevant to your database unless, of course, each step has been correctly assigned to its database. What’s in that SSIS job, for example? Are you seriously going to examine that PowerShell script to find out which database it is accessing?

To make sense of this, we need to be careful, and keep everything neat and tidy. Continuous integration or deployment is not always a free lunch.

  • Don’t access several logically separate databases from one job, unless it is an administrative server job such as backup or maintenance. For application tasks it is one job, one database. Otherwise, what happens when the database versions get ‘out-of-sync’?
  • Document, in the name and description of the job, which database(s) it accesses. Use the description to provide special detail, such as a PowerShell script on the server that is called from a PowerShell-based job step and which must therefore be saved as well.
  • Keep the source of the job in source control.
  • Ensure that the creation or update of the job is repeatable.
  • Ensure that the job ownership is consistent and logical (a sketch of a job created along these lines follows this list).
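
Here is a minimal sketch of such a job created with SMO, reusing the $server object from earlier; the job name, database and command are hypothetical:

    # Create a job whose name and description declare its database
    $job = New-Object Microsoft.SqlServer.Management.Smo.Agent.Job($server.JobServer, 'MyDB: Process completed orders')
    $job.Description = 'Accesses MyDB only; source in source control under ServerObjects\Jobs'
    $job.OwnerLoginName = 'sa'            # consistent, logical ownership
    $job.Create()
    # A T-SQL step explicitly assigned to its database
    $step = New-Object Microsoft.SqlServer.Management.Smo.Agent.JobStep($job, 'Process orders')
    $step.SubSystem    = [Microsoft.SqlServer.Management.Smo.Agent.AgentSubSystem]::TransactSql
    $step.DatabaseName = 'MyDB'
    $step.Command      = 'EXECUTE dbo.ProcessCompletedOrders;'
    $step.Create()
    $job.ApplyToTargetServer('(local)')   # a job needs a target server before it can run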

Once we have this, the task is easier, but we’re not done yet. Are all the SSIS tasks in Source Control too? Those batch files that are called from job steps need to be saved as well. Is that PowerShell script executing a file as a script block? If so, that file must be saved too. Are server-based executable files involved? Someone needs to ensure that all of this is in Source Control, and then check that none of it has changed since the last deployment (I find that the description field of the job step helps here).

In our post-deployment script (or pre-deployment script, if the database build scripts depend on these objects), we need to go through each job, delete the current version if it exists, and then execute the creation script for each job. The script attached to this article generates the T-SQL that does this initial deletion for you. If you use a script generated by SMO, beware: in some earlier versions, the drop command specifies a static job id whereas the creation generates a dynamic one, so the script runs the first time but fails every subsequent time. If you come across one of these, alter it to drop the job by name instead.
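
As a minimal sketch of that pattern, with a hypothetical job name and script path, and assuming the sqlps module for Invoke-Sqlcmd, the drop is deliberately done by name rather than by job id:

    # Drop the job by name, if it exists, then re-create it from source control
    $jobName = 'MyDB: Process completed orders'
    $drop = "IF EXISTS (SELECT 1 FROM msdb.dbo.sysjobs WHERE name = N'$jobName')
                 EXECUTE msdb.dbo.sp_delete_job @job_name = N'$jobName';"
    Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'msdb' -Query $drop
    Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'msdb' -InputFile '.\Jobs\MyDB Process completed orders.sql'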

Agent Alerts

Agent alerts are generally more relevant to the server than the database. However, for the application, it is a good idea to set up special alerts for severe database errors. I like to handle message 825 (tried to read and failed), and define a separate alert for each severity 19, 20, 21, 22, 23, 24, and 25. Where I’ve worked, it is often the responsibility of the development DBA to make sure that all the alerts are set up ready for the production DBA.

I also like to alert on severity levels 10 – 16, indicating faults in the database application code, usually programming errors and input errors. Usually, the end user hears about these before the developers, which is bad practice. There will also be various performance alerts, set up in order to notify the operator when a certain performance condition occurs.

Alert notifications will change according to the type of installation, in order to reach the right recipient. For integration or user-acceptance testing, the database developers need to know about these alerts, whereas in production, the production DBA needs to know.

We need to script all these alerts and, like jobs, we should use the name to signal which alerts belong to which database.
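
A sketch of creating them with SMO, reusing the $server object from earlier, under the naming convention just described; the database and operator names are hypothetical, and the operator must already exist:

    # One alert for error 825, and one per severity from 19 to 25
    $dbName = 'MyDB'; $operator = 'MyDB-OnCall'
    $email = [Microsoft.SqlServer.Management.Smo.Agent.NotifyMethods]::NotifyEmail
    $alert = New-Object Microsoft.SqlServer.Management.Smo.Agent.Alert($server.JobServer, "${dbName}: read-retry error 825")
    $alert.MessageID = 825
    $alert.DatabaseName = $dbName
    $alert.Create()
    $alert.AddNotification($operator, $email)
    foreach ($severity in 19..25) {
        $alert = New-Object Microsoft.SqlServer.Management.Smo.Agent.Alert($server.JobServer, "${dbName}: severity $severity")
        $alert.Severity = $severity
        $alert.DatabaseName = $dbName
        $alert.Create()
        $alert.AddNotification($operator, $email)
    }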

Operators

These are the recipients of the alerts, the addresses notified by email of corruption, errors in code, job failures, or major problems. The operator is likely to be different in development, test, integration and production.
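
A sketch of creating one, with hypothetical names and addresses, and a $destinationType variable standing in for however your deployment identifies the target environment:

    # The recipient varies with the destination type
    $address = switch ($destinationType) {
        'production' { 'dba@example.com' }
        default      { 'dev-team@example.com' }
    }
    $operator = New-Object Microsoft.SqlServer.Management.Smo.Agent.Operator($server.JobServer, 'MyDB-OnCall')
    $operator.EmailAddress = $address
    $operator.Create()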

SharedSchedules

We use shared schedules to kick off regular jobs. Imagine that our application has a background task that processes all the completed orders without slowing down the purchasing process. The schedule that kicks off that job is part of our application, and shouldn’t logically be in the same pot as a scheduled process to rebuild indexes.
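
A sketch of creating such a schedule and attaching it to the background job, using the msdb stored procedures; the schedule and job names are hypothetical:

    # Create a shared schedule and attach it to the application's job
    $tsql = "EXECUTE msdb.dbo.sp_add_schedule
                 @schedule_name = N'MyDB: every five minutes',
                 @freq_type = 4, @freq_interval = 1,               -- daily...
                 @freq_subday_type = 4, @freq_subday_interval = 5; -- ...every 5 minutes
             EXECUTE msdb.dbo.sp_attach_schedule
                 @job_name = N'MyDB: Process completed orders',
                 @schedule_name = N'MyDB: every five minutes';"
    Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'msdb' -Query $tsql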

ProxyAccounts

Proxy accounts, or proxies, are useful when some principals need to access external resources, but when it is unwise, for security reasons, to grant the necessary extra permissions to those principals. To avoid having to do this in order to, for example, run a PowerShell script, we can map, purely for that task, the Agent account to another account that already has these rights. This is generally going to be a database-level security device, so we need to install those proxies appropriately on the server to which the database is deployed. You may even need different proxies in development, test, integration and production.
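
A sketch of the T-SQL involved, with hypothetical credential, proxy and login names (the secret is a placeholder):

    # Map a credential to a proxy for PowerShell job steps
    $tsql = "CREATE CREDENTIAL ETLWindowsAccount
                 WITH IDENTITY = N'DOMAIN\etl_service', SECRET = N'********';
             EXECUTE msdb.dbo.sp_add_proxy
                 @proxy_name = N'MyDB ETL proxy', @credential_name = N'ETLWindowsAccount';
             EXECUTE msdb.dbo.sp_grant_proxy_to_subsystem
                 @proxy_name = N'MyDB ETL proxy', @subsystem_name = N'PowerShell';
             EXECUTE msdb.dbo.sp_grant_login_to_proxy
                 @login_name = N'MyDBDeployer', @proxy_name = N'MyDB ETL proxy';"
    Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'msdb' -Query $tsql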

TargetServers

In order to allow multi-server administration, a target server lets us run a job against a different server from the one on which it is defined. A master server distributes jobs to, and receives events from, a target server. We can use this feature to define a job once, on a master server, and have it executed against many servers. I’ve never used this within an application, or come across its use, but I can imagine that it has its uses.

Server Triggers

Server-scoped triggers appear in the SQL Server Management Studio Object Explorer, in the Triggers folder. We can use them to track server-wide DDL changes, such as the creation or dropping of a database, and login attempts to the current server. If specified, the trigger fires whenever the relevant event occurs anywhere in the current server. Their main use is for server security and audit, but an application might also use them for security purposes.
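
For illustration, here is a sketch of a server-scoped trigger that records database drops; the audit table is hypothetical and would need to exist first:

    # A DDL trigger scoped to the whole server
    $tsql = "CREATE TRIGGER TrackDatabaseDrops
             ON ALL SERVER
             FOR DROP_DATABASE
             AS
                 INSERT INTO master.dbo.DDLAudit (EventTime, EventData)
                     VALUES (SYSDATETIME(), EVENTDATA());"
    Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'master' -Query $tsql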

LinkedServers

An application might use Linked Servers for ETL, distributed database layering or data transfer. I’ve also seen them used for archiving old data. We need to install these with the application, and it makes things neater for deployment if we obey the following rules (a creation sketch follows the list):

  • One linked server, one database.
  • Keep the source of the linked server in source control.
  • Ensure that the creation or update is repeatable.
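
A minimal sketch of a repeatable creation, with hypothetical server and data-source names:

    # Create the linked server and login mapping only if it is missing
    $tsql = "IF NOT EXISTS (SELECT 1 FROM sys.servers WHERE name = N'ArchiveServer')
             BEGIN
                 EXECUTE master.dbo.sp_addlinkedserver
                     @server = N'ArchiveServer', @srvproduct = N'',
                     @provider = N'SQLNCLI', @datasrc = N'ARCHIVE01\SQL2012';
                 EXECUTE master.dbo.sp_addlinkedsrvlogin
                     @rmtsrvname = N'ArchiveServer', @useself = N'TRUE';
             END"
    Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'master' -Query $tsql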

The PowerShell Script

Finally, here is the general-purpose PowerShell script that I use for initial creation or update of the T-SQL scripts for the various server objects, such as Triggers, Backup Devices, Linked Servers, Logins, User-Defined Messages, Alerts, Jobs, Operators, Shared Schedules, Proxy Accounts and Target Servers. Of course, you can extend it to any others you require.

It saves each object into a separate file, named from the type of object and the object’s name. It will do as many servers as you like from a list, or use your registered servers.

I find it a good way of getting things started. I prune the list of server objects according to what is relevant to the application. This version will script all objects of the type you specify in the list at the start of the two scripting pipelines, but when you have named your objects systematically, you can introduce into the pipeline a Where-Object clause to take notice of only those server objects that are relevant to your application.
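
Since the full script is attached to this article as a download, here is only a condensed sketch of its shape; the output path and the ‘MyDB:*’ name filter are assumptions:

    # Script each JobServer collection to one file per object,
    # in a directory named after the server
    [void][Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo')
    $serverName = 'MyServer'
    $server = New-Object Microsoft.SqlServer.Management.Smo.Server $serverName
    $directory = "C:\ServerObjects\$($serverName -replace '[\\\/:]', '-')"
    if (-not (Test-Path $directory)) { New-Item $directory -ItemType Directory | Out-Null }
    foreach ($collection in 'Jobs', 'Alerts', 'Operators', 'SharedSchedules', 'ProxyAccounts') {
        $server.JobServer.$collection |
            Where-Object { $_.Name -like 'MyDB:*' } |  # only this application's objects
            ForEach-Object {
                $file = "$directory\$collection-$($_.Name -replace '[\\\/:]', '-').sql"
                $_.Script() | Out-File $file
            }
    }
    # The same loop pattern serves the Server-level collections
    # (Triggers, LinkedServers, Logins, UserDefinedMessages, BackupDevices)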

Final thoughts

It is much better to pause for thought before implementing anything outside the usual database objects, just to make sure that whatever you develop is going to be easy to deploy. A little more work at this stage saves much more later on in the project. Take a simple example, such as a deployment to the various test servers from your development (integration) server, and use it to practice deployment from source control until it works smoothly. My advice would be to use a naming convention that makes it clear which server-based objects belong to the application. For example, I would keep application-based Agent jobs so that they access only one database, and make this clear in every job step. I like to keep in source control all the server-configuration scripts that are required for the database, along with any external executables.

Test out your post-deployment scripted processes to make sure that they collect from source control all of the necessary agent jobs and alerts, for example, and install them on the various servers. Make some changes to the server objects on the development server and test that you can then get source control updated easily. Develop these server-based objects in a way that makes it easy to deploy them, with consistent naming and ownership. Don’t consider the development job finished until you have tested the deployment.

You may find that different types of server, such as development ‘sandbox’, development ‘integration’, test and production will all have special configuration requirements. This will be especially true of security, replication and ETL, in which case you’ll need to handle all this within the post-deployment script.

When you’ve tested out your post-deployment scripts, you’ll find that many routine jobs become a great deal easier and so your databases will get a lot more, and more varied, testing, and developers will be less likely to consider database development to be a black art.

  • Emmett E. Childress

SQL Server Database Project
    Great article. Phil, is it possible to replicate this process with a SQL Server Database Project in Visual Studio 2012? Is there anything special that needs to be done to work with SQL in Azure?

  • mbourgon

    Cool – but how would you add objects to script out?
Since my PS-fu is not strong enough, how do you get a list of all the possible things you can script out? If I do the following I can get a listing of a bunch of options (including JobCategory), but I don’t see where Logins or UserDefinedMessages come from.

$MS='Microsoft.SQLServer' […etc…]
    $My="$MS.Management.Smo"
    $s = new-object ("$My.Server") myservername
    $s.jobserver

  • Phil Factor

    Re: adding objects to script out
All these things are properties that are actually collections on either the Server object or the JobServer object. All you need to do is to look at the SMO documentation on MSDN, get the list of properties and look for the collections. Logins and UserDefinedMessages are collections on the Server object. Just add them into the list and you should see them scripted out. This technique works for database objects too, but in this case you look for the collections of the Database object, such as Tables.

    I hope I’ve understood the question right!

  • Mo64

    MS Reference
    Hi

I notice that this article is over a year old, but I would like to draw attention to this Microsoft list of "metadata" to move when deploying databases on another server. It includes a couple of extra categories (Service Broker, Full-Text, etc.) that might also need scripting and transfer.

    https://technet.microsoft.com/en-us/library/ms187580%28v=sql.110%29.aspx