Tony Davis

Rapid SQL Server Database Deployment

21 February 2014

With the right tools, it is possible to rapidly deploy upgraded databases. By deploying early in development, it's easier to iron out any potential problems that the new deployment could cause in production long before it's time to actually release the software. Tony Davis explains.

In an Agile development environment, teams automate their build and test processes to the point where they routinely deploy to their test environments every day, often multiple times per day. Sometimes, however, this applies to the application only; the database is not subject to the same rigorous versioning and testing procedures. Even if the team put the database into source control along with the application, there are specific challenges to working with a database in source control that mean that the team often delay database deployment until as late as possible in the development process. This is problematic, as it means that any problems that are likely to arise when deploying to production do not show up till late in the process, when they are harder to fix, and deadlines are looming.

In a common approach to database source control, we start with a blank database and then store in source control an upgrade script describing each schema modification, one change per script file. It's a hard process to control, and error prone, requiring the team to string together a long chain of upgrade scripts, in the correct order, to reproduce the current schema. This article describes an alternative approach, whereby each member of a team of developers works on their own sandbox database, and then simply right-clicks on the database, in SSMS, to commit their schema and other database changes to a shared source control repository, for access by other team members. In this scheme, every developer uses the Red Gate SQL Source Control client, integrated with SSMS, to commit their changes and get the changes of others.

We'll start from an empty database (SalesReportsDB) and proceed through a series of development phases, comprising a range of typical development tasks, from building new tables, views, functions and stored procedures to adding users, changing permissions, and so on. Developers will share changes seamlessly between their dedicated development environments, via source control. At the end of each phase, the team use Red Gate Deployment Manager to deploy the new database quickly and in a repeatable fashion, through the testing and staging environments. We'll also explore the potential issues that can arise in this model if we treat the deployment of database security objects (users, roles and permissions) in the same manner as code and schema objects, and suggest some possible solutions.

The Development and Deployment Architecture

The inspiration for this piece is a series of articles by Alexander Karmanov, describing his 'hand-rolled' database development and deployment framework, comprising a set of T-SQL scripts glued together with environment-aware configuration files, and a "Change Log" to track all changes made to the database structure and its static data. What he achieves is a "one-click" process to deploy a database from scratch in any known state (build/version), or to upgrade an existing database from one known state to another. By using a tool to automate the deployment process, we ensure that each stage is identical, reproducible and can be rolled back. This allows us to test the deployment process itself during the build.

I was interested to see for myself how the functionality he described could be made available to IT departments that were unable or unwilling to support a custom solution built on configuration files, batch commands and a deployment utility (in the form of a Visual Basic script).

I assume the same basic development approach, making a series of incremental changes to a source-controlled database, and verifying each change (for example through unit testing – not covered in this article). Crucially, I wanted to replicate the ability to then deploy the tested changes quickly through our various server environments, such as Test, Integration and Staging.

Instead of a script-based framework, the development and deployment scheme presented in this article relies on several Red Gate tools, namely Deployment Manager (DM), SQL Source Control (SoC) and the Deployment Manager add-in for SSMS. For installation details and other documentation for each tool, please see the Red Gate Tool Reference section at the end of this article.

Figure 1: Incremental Database Development and Deployment

Let's review briefly the component parts of each of the three major sections of this figure: the deployment environments, the development environments and Deployment Manager.

Deployment Environments

In the bottom right of Figure 1, we see the database deployment environments. The key premise behind this development and deployment model is that we can iterate through development phases quickly, and at each phase deploy the new database rapidly not only to our testing environment, but also to more "production like" environments such as Staging.

In the example in this article, we'll deploy a database only (no associated application) to two deployment environments, TEST and STAGING.

Deployment Manager requires a login to exist on each instance in a deployment environment, which it uses to connect and deploy to that instance.

Development Environments

Our development environments, depicted in the bottom left portion of Figure 1, use the dedicated model, where each developer works in their own sandbox, with their own copy of the database under development, in this case SalesReportsDB.

Dedicated versus shared development model

Details of each model, and arguments for and against, are discussed in detail elsewhere and I won't rehash them here. Troy Hunt offers a "pro-dedicated" perspective in his article, The Unnecessary Evils of the Shared Development Database. It's a perspective with which I suspect many developers will sympathize. However, in smaller shops the higher degree of control, and lower costs, associated with the shared model can be attractive.

Each developer works in SSMS, equipped with Red Gate SQL Source Control (SoC). SoC integrates into SSMS to allow developers to work from within SSMS with database files stored in a source control repository, SVN in this example. Each developer links their copy of the database to the same project repository in Source Control and then regularly commits changes to the database objects on which they are working. Other developers in the team can then update their sandbox database with any changes submitted to the source control repository. In this way, developers can perform all routine source control operations directly from within their normal development environment (SSMS).

At least one developer also requires the Deployment Manager plug-in for SSMS. When a version of the database is ready for deployment, we use this tool to create a deployment package, and save it to a package feed (more on this later), from where Deployment Manager can access it, and deploy it to one or more target environments, as part of a release.

Deployment Manager

We use Deployment Manager (DM) for all deployments, as depicted in the top portion of Figure 1. In DM's model of the deployment process, we have the following major components:

  • Deployment project – hosting all the steps, packages and releases for a specific database, or application project.
  • Package Steps – each step is responsible for defining the deployment configuration for a specific package, such as a database package. It defines the name of the package, the location of the package feed, the target deployment machines and database instances for the package, and various deployment configuration variables, such as the name of the database on the deployment target.
  • Package – contains all the necessary files to perform a specific task in the overall deployment process, such as upgrading an existing database, deploying a new application version, and so on.
    • Package feed – DM sets up a default package feed to which we can save our packages for its use, but we can also create custom feeds.
  • Release – we group into a release all of the required versions of all the packages for a project deployment, and assign the release a version number. DM then deploys each package in the release to the appropriate target deployment environment.
  • Environments – we deploy a release progressively through various environments, such as TEST, STAGING and so on, each of which contains one or more target machines, or SQL Server instances.

Figure 1 depicts deployment of an application and database together. This deployment project contains two steps, which define the application package and database package, respectively. When we create a release, DM grabs the required versions of each of these packages, usually the latest versions, in this example v1.1 of the application package and v1.3 of the database package. We assign a release number, in this case v1.4, and DM deploys it to the specified targets.

In the examples in this article, we deploy a database only, and not any associated website or application, which simplifies our database deployment project in DM. It will consist of a single step, which identifies our database package (generated by the DM add-in for SSMS), and how we wish to deploy it. At each stage of our development cycle, we'll generate a new database package, using the DM add-in for SSMS, and publish it to the package feed. In DM, we then simply create a new release of our database deployment project, to deploy the latest package version.

Post-deployment, the DM web-interface dashboard provides a summary of the version of each release that exists in each environment, for each project, as depicted in Figure 2.

Figure 2: The Deployment Manager Dashboard

Deployment Considerations

The scheme set out in Figure 1 is one that will allow developers to perform all database actions, from initial database creation, to creating and modifying schema and code objects, to creating and modifying users, roles and permissions, and so on, by executing T-SQL code directly against their sandbox database, and then committing the changes to source control. When ready to deploy, we can create a database package encapsulating these changes and then deploy it to the target instance, and DM will attempt to synchronize all these objects.

Under the covers, DM uses the SQL Compare engine to generate a 'difference script' that will synchronize the source database, i.e. the database as described by the database package being deployed, with the target database, as it exists on the target SQL Server instance.

Therefore, as part of a deployment, DM will synchronize all database-scoped objects, including schema objects, code objects, database users, and so on, as well as any reference data that we have in source control. It is a model that allows for quick, iterative development cycles, and fast, frequent deployments through to "production-like" environments and even to production itself.

However, there are some serious considerations regarding how we handle database deployments, both in terms of establishing the initial database environment, and in terms of 'synchronizing' an upgraded database with the one that already exists in the target deployment environment. We'll discuss a few of them briefly here, but there are others.

Configuring the initial database environment

In the DM model described in this article, we can create in code all database objects, including the initial database, as well as tables, stored procedures, users, roles, permissions, and so on, and then add them to source control.

However, there is a solid argument that it is better to use configuration files to define the initial database environment, including the empty database, with the appropriate file settings and all other properties, as well as logins, users and roles, permissions and so on. We store these configuration files in source control. The big advantage to this approach, one that we'll explore in detail in the next article, is that the initial database, and all database and environment settings, are in source control before we write a single T-SQL script. Our deployment utility simply uses these source-controlled configuration files to create the basic database and environment, reproducibly, on every target SQL Server instance, in both development and deployment environments.

The original Karmanov series, the inspiration for this piece, adopted this approach. His development and deployment framework separates database and security configuration from the schema and object modification scripts (i.e. the code). In the Karmanov scheme, we never create a database, user, login, or role directly in a T-SQL script. Instead, we store this information at the "instance level" in XML-based configuration files, in source control.

In the next article, we'll describe how to adapt the DM-driven development and deployment model used in this article to incorporate configuration files and PowerShell scripting to set up the initial database environment.

However, in this article, we'll create all objects in script and then add them to source control, and for deployment we'll allow Deployment Manager to handle synchronization of all schema and code objects, as well as database users, roles and permissions. As such, there are a few specific considerations to bear in mind.

Synchronizing Database and Server-level Properties

It is wise for developers to ensure that all server-scoped and database-scoped properties in the development environment match those that will be used in production. Any differences can cause subtle, unexpected differences in behavior.

Server-level configuration settings are out of scope for DM deployments, though it's good practice to store these instance-level settings in source control, and ensure they match across all SQL Server instances – see, for example, Brad McGehee's article, How to Document and Configure SQL Server Instance Settings.
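For example, a simple catalog query can capture a snapshot of the instance-level settings for storage in source control; the following is a minimal sketch (run it on each instance and compare the output):

-- Snapshot the instance-level configuration settings
-- (save the output to source control and diff across instances)
SELECT name ,
       value_in_use
FROM   sys.configurations
ORDER BY name;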

Database-scoped properties, such as file and filegroup settings and the various database configuration options, are notionally in scope for DM database deployments. However, DM does not attempt to synchronize them.

If we create a database in script, in the development environment, SQL Server will apply all the default settings for the database properties and options, unless we specify otherwise. During deployment, DM will create a database on the target SQL Server instance if it does not currently exist. However, it will simply create it with the settings defined in the model database on the target instance. This is similar to the behavior of SQL Compare, except that SQL Compare will, by default, attempt to mimic the file and filegroup settings of the source database.

If the settings of model on the target instance are appropriate for all new databases on the instance, then we can allow DM to create new databases for us. However, if we need to specify custom properties for a database on the target server, then in lieu of a configuration file-driven approach, we will simply need to take manual control of the initial database creation in all environments, rather than let DM do it, which is the approach we take in this article.
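If you're unsure what file settings model will dictate on a given target instance, you can inspect them directly; a minimal sketch:

-- Inspect the file settings that model will apply to any new database
SELECT name ,
       type_desc ,
       physical_name ,
       size * 8 AS size_kb ,   -- size is stored in 8KB pages
       growth
FROM   master.sys.master_files
WHERE  database_id = DB_ID('model');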

Synchronizing Security Objects

DM does not synchronize server-scoped objects, so we will need to deal separately with any SQL Server logins that we need to exist on the target server.

By default DM will attempt to synchronize database security objects, such as users and roles, in exactly the same manner as any other database object. However, this approach to database configuration and security can cause issues, especially when deploying changes from one server environment to another, where there is every likelihood that users and roles will exist in the deployment environment that don't exist in the development environment.

As noted, in this article we'll allow DM to synchronize database security objects, but in a later section, Security: Logins, Users and Roles, we'll discuss possible issues with this, and suggest some alternatives (in lieu of the configuration file-driven approach described above).

Synchronizing Schema Objects

When schema changes are involved in a database deployment, the potential complexity grows significantly, not only because of the interdependency of objects within a database, but also because of the need to preserve the data stored in the existing version of the database.

After we put a database into production, the database acquires "state" in the form of the business data that's stored in it. If we wish to upgrade the database, we have to do more than simply update the data definition language (DDL) script files in source control and then use those to overwrite the old database structure. When we update the database schema, for example altering a column's data type, we must do so in a way that preserves the business data. This may require additional 'migration scripts', which must also be in source control, to enable existing data to fit in any changed schema. An example might be trying to deploy a change that adds a NOT NULL constraint to a column with existing data. In order to deploy such a change successfully, we'd need to write a migration script that loads a default value into any NULLs in the column, store that script in source control, and incorporate it in the deployment process. We won't cover use of migration scripts in this article, but you can find a worked example in the SQL Source Control documentation.
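To illustrate the NOT NULL example, a migration script might look something like the sketch below (the table and column names are hypothetical, not part of this article's example database):

-- Hypothetical migration script: backfill existing NULLs before
-- tightening the column, so the constraint change succeeds
UPDATE dbo.Orders
SET    Status = 'Unknown'
WHERE  Status IS NULL;
GO
ALTER TABLE dbo.Orders
  ALTER COLUMN Status VARCHAR(20) NOT NULL;
GO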

Synchronizing Other Server Objects

There is more to a database than the structures that store our data and the accompanying interface, in the form of views, stored procedures, and so on.

Aside from SQL Server logins, many other server objects will be vital to the running of the database in the deployment environment, such as SQL Server Agent jobs, job alerts, and operators, as well as ETL processes, driven by PowerShell or SSIS, and many more. We won't discuss this topic further in this article, but we need to consider how to script out, store in source control, and deploy all such objects. See, for example, Phil Factor's article on Database Deployment: The Bits.

Set up the Deployment Environments

In Deployment Manager, we need to set up our deployment environments (TEST and STAGING) and the machines, or in this case SQL Server instances, that constitute each environment. We will have two test SQL Server instances (Test1 and Test2) in the TEST environment and one SQL Server instance (Staging) in the STAGING environment.

However, in order to add a SQL Server instance to an environment, Deployment Manager needs to be able to connect to it. As such, on Test1, Test2 and Staging, we need to create a SQL Server login for Deployment Manager.

Create a SQL Server Login for Deployment Manager

For each target deployment SQL Server instance, I created a SQL Server login called DepMan1 and added it to the sysadmin role, as shown in Listing 1.

Why sysadmin permissions?

It is best not to assign sysadmin permissions unless absolutely necessary. However, in this case, the recommended permission for SQL Compare (which Deployment Manager uses under the covers) is dbo (see http://www.red-gate.com/SupportCenter/Content/SQL_Compare/knowledgebase/SC_Permissions_Required). SQL Server maps members of the sysadmin server role to the dbo database user, by default. The dbo user is a member of the db_owner database role and can perform any action on or in the database.

USE [master];
GO
CREATE LOGIN [DepMan1]
   WITH PASSWORD = N'<your pwd>',
        DEFAULT_DATABASE = [master],
        CHECK_EXPIRATION = ON,
        CHECK_POLICY = ON;
GO

/* SQL 2008 and earlier */
EXEC master..sp_addsrvrolemember
      @loginame = N'DepMan1',
      @rolename = N'sysadmin';
GO

/* SQL 2012 and later */
ALTER SERVER ROLE sysadmin ADD MEMBER [DepMan1];
GO

Listing 1: Create DepMan1 SQL Server login, as a member of the sysadmin server role
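It is worth double-checking the role membership on each target instance, since, as we'll see later, a DepMan1 login that is not a sysadmin causes some subtle deployment issues. A quick check:

-- Returns 1 if the DepMan1 login is a member of the sysadmin role
SELECT IS_SRVROLEMEMBER('sysadmin', 'DepMan1') AS is_sysadmin;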

Having created this login on each instance, we can open Deployment Manager and set up our deployment environments.

Windows authentication and DM

Initially, I assumed we could create a Windows login, associated with whatever Windows User or Group Deployment Manager used. You can make it work by creating a login for the machine, rather than for a Windows User or Group – see the documentation. Hopefully, this will change in the near future. In the meantime, the documentation strongly recommends letting DM use a SQL Server login.
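For reference, the machine-account workaround amounts to creating a Windows login for the DM server's computer account; a sketch, with a hypothetical domain and machine name (note the trailing $ that denotes a computer account):

-- Hypothetical: MYDOMAIN\DMSERVER is the machine running Deployment Manager
CREATE LOGIN [MYDOMAIN\DMSERVER$] FROM WINDOWS;
GO
ALTER SERVER ROLE sysadmin ADD MEMBER [MYDOMAIN\DMSERVER$]; -- SQL 2012 syntax
GO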

Create the Deployment Environments in Deployment Manager

In the Deployment Manager Web interface, we create each environment, in turn, adding the SQL Server instances that belong to each environment. Navigate to the Environments page, and click Create Environment. We need to provide a name, such as TEST or STAGING, and a brief description. DM will create the shell environment to which we can add target machines (for application deployments) or, in this case, SQL Server instances.

Figure 3: Adding a SQL Server instance to the STAGING deployment environment

In this case, click Add SQL Server, and on the next screen provide a suitable name (such as Staging1), server name (in the form server name\instance name), and the authentication details. As discussed, in this case, we'll use SQL Server authentication with the DepMan1 login. DM will validate that it can connect to the instance and add it to the environment. Once this is done, its status will be Online.

Figure 4 shows the environments and instances that I set up, as seen from Deployment Manager's Environments tab, with two instances in the TEST environment, and one in STAGING.

Figure 4: Configured instances in each deployment environment

Set up the Development Environments

In this example, we'll mimic two developers (Dev1 and Dev2) working together to develop a SalesReportsDB database. As noted earlier, each developer will work from SSMS, with the SoC add-in, on their own working copy of the SalesReportsDB database. They will keep their environments in synchronization by each linking their database to the same source control repository, and using SoC to commit their changes, and get the latest changes made by the other developer. One developer, at least (Dev1 in this example), will also require the Deployment Manager SSMS add-in.

Create a project folder in the Source Control Repository

I'm not going to describe here the steps necessary to install and configure a source control repository and working folder, as the documentation for your source control system will describe these steps in detail.

For this article, I use the Subversion (SVN) source control system, along with the TortoiseSVN client, a Windows shell extension that allows execution of source control commands directly from Windows Explorer, plus the VisualSVN management console for general administration.

I will explain how I set this up for the example, so you can follow the example if you wish.

Using VisualSVN, I created a TonyMain SVN repository with the "trunk, branches, tags" structure.

Figure 5: The SVN repository structure

Within the TonyMain repository I created a SalesReportsDB project folder, as a subfolder in trunk.

Figure 6: Create the SalesReportsDB repository

As you can see, it is currently empty. Grab the URL for your new project folder (select the project folder, right-click and select Copy URL to clipboard) as we'll need it soon, when we will switch to SSMS and use SoC to link the SalesReportsDB database to this folder in our SVN repository.

In Windows Explorer, I created a working folder (C:\TonySVNWorkingDirectory) and performed an SVN Checkout, linking the working folder to the root folder of the TonyMain repository.

Create SalesReportsDB on Dev1 and Dev2

As discussed earlier, it is arguably better to create the initial database on each instance from a configuration file that is already in source control. However, here, we simply create the database manually on every SQL Server instance.

Therefore, we need to create SalesReportsDB on Dev1 and Dev2. We'll start with an empty version of the SalesReportsDB database (as it existed upon initial creation in Step 1 of the Karmanov series).

Listing 2 shows the full script to create the initial database. Notice that the database has a secondary filegroup called ReportData, as well as the primary data file, and of course the log file. Since, by default, the model database for a SQL Server instance specifies only a primary data file and a log file, DM considers the file structure in Listing 2 to be "non-standard" (more on this later).

USE master;
GO
CREATE DATABASE [SalesReportsDB] ON PRIMARY
  ( NAME = N'PrimaryDataFile_01',
    FILENAME = N'D:\SQLDATA\SalesReportsDB_primary_PrimaryDataFile_01.mdf',
    SIZE = 5120KB, MAXSIZE = 1073741824KB, FILEGROWTH = 2048KB ),
  FILEGROUP [ReportData]
  ( NAME = N'ReportData01',
    FILENAME = N'D:\SQLDATA\SalesReportsDB_ReportData_ReportData01.ndf',
    SIZE = 5120KB, MAXSIZE = 104857600KB, FILEGROWTH = 10240KB )
  LOG ON
  ( NAME = N'LogFile_01',
    FILENAME = N'D:\SQLDATA\SalesReportsDB_LogFile_01.ldf',
    SIZE = 5120KB, MAXSIZE = 104857600KB, FILEGROWTH = 10240KB );
GO

Listing 2: Create the empty SalesReportsDB database

If the file layout (such as drive letters) is different in either of the instances in your development environment, simply adapt Listing 2 as necessary for that instance.

Link SalesReportsDB to Source Control

In SSMS, connect to a development instance (Dev1), right-click on the newly-created SalesReportsDB database and select Link Database to Source Control. We need to select a repository system (SVN, in this case), a database repository URL (see Figure 6), optionally a Migration scripts repository URL (leave this empty for now), and a development model (we are using the dedicated model, as discussed earlier).

Figure 7: Linking SalesReportsDB to Source Control

Click Link, and SoC will link to this repository. The indication of this in SSMS is that the database icon for SalesReportsDB turns green.

Having made the link, switch back to the VisualSVN server (or your equivalent). We should see a repository structure similar to that shown in Figure 8 (all the folders are currently empty).

Figure 8: The new SalesReportsDB repository structure

The final step is to connect to the second development instance (Dev2) and link the database to the same SalesReportsDB repository in SVN.

Test Deployment of empty SalesReportsDB database

In order to gain familiarity with the Deployment Manager tool, and its model of working, our first task will simply be to deploy the empty SalesReportsDB database, as it exists in Source Control, to our TEST and STAGING environments, the former containing two SQL Server instances and the latter one.

We will walk through the following steps:

  • Create a database package for the empty SalesReportsDB Database, using the SSMS Deployment Manager add-in
  • In Deployment Manager, create a project to deploy SalesReportsDB
  • Create a deployment step in the project to deploy our database package
  • Create a release, which contains everything necessary to deploy a database to an environment (in this case, just a single package)
  • Deploy the release to TEST and STAGING

Step 1: Create a Database Deployment Package

In this step, we use the Deployment Manager add-in for SSMS to package up the empty SalesReportsDB database.

In SSMS Object Explorer, connected to a development instance (in this case Dev1), right-click on SalesReportsDB and select Publish [SalesReportsDB] for Deployment. We need to provide the URL for Deployment Manager and the API key, both of which we can obtain from the Deployment Manager Web interface.

Figure 9: Creating a database package: connecting to Deployment Manager

The next screen asks us to select a Source for the database. As the screen text explains, we can create the database package based on the current live database, or on a known version of the database in source control.

Figure 10: Creating a database package from a revision in Source Control

It is safer to choose the version in source control: A live database is, by definition, a 'moving target'. It also gives us the option to include other files in the deployment package (more on this later). Therefore, we'll select SQL Source Control as the source for the database package.

The Choose revision… button allows us to deploy a specific version from Source Control, but here we want to deploy the latest version, which is the default behavior, so simply click Next to reach the Package Details screen.

DM will automatically name the package after the database. DM also automatically assigns a version number to the package, starting at 1.0.0 for the first ever package for that database. The next package we create, for a subsequent version of the database, will by default get the number 1.0.1, and so on. Since I've previously performed a few "test" deployments, my package version number is 1.0.8.

Figure 11: Database package version

The final screen asks us to select the static data tables, but we don't have any tables yet so we can just click Publish. We should get a "success" message informing us that we can now deploy the package using Deployment Manager.

By default, DM stores these packages on the DM server in the package directory, C:\ProgramData\Red Gate\DeploymentManager\Data\feed. Note that DM also caches them in various places.

Deleting packages

If you wish to delete an existing package in this directory, for maintenance reasons, or simply due to an error in preparing the package, it does not necessarily mean that you will be allowed to reuse the package version number of a deleted package – unless you also flush the caches. As of v2.3.18.7, there is no reliable way to ensure you flush all the caches (this ability will come in a future release). You could encounter a situation, as I did, where DM allows you to reuse a package version number for a subsequent package, but then retrieves the old version from cache. For the time being, it is safer not to try to reuse package numbers.

By default, DM picks up these packages, for deployment, from a package feed called the "Deployment Manager package feed", located at http://<DM Server address>:8080/nuget/. However, we can set up custom feeds through the DM web interface (Settings | Package feeds).

We are now ready to deploy the database package for the empty SalesReportsDB database to the TEST and STAGING environments. When we come to deploy the package, DM will store the package name and version number in the target database, as a table-valued function (older versions of the tool used extended properties).
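As we'll see later in the article, this function lives in a RedGateLocal schema and is named DeploymentManagerLastDeployment, so after a deployment we can query it directly; a sketch, assuming the function takes no parameters:

-- Check which package version DM last deployed to this database
SELECT *
FROM   RedGateLocal.DeploymentManagerLastDeployment();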

Step 2: Create a Deployment Manager Project

A Deployment Manager Project is essentially a vehicle for grouping together the set of deployment tasks that we need to perform in order to deploy a database, an application, or both together.

To create a project, simply go to the Projects page in DM web interface and click Create Project. All DM asks for is a project name and a brief description. A suitable name is simply the name of the application, or in this case the name of the database for which we wish to automate deployment, SalesReportsDB.

Figure 12: Create the SalesReportsDB deployment project

Define one project per database. Initially, I misunderstood the intent behind a "project" and set up several projects for the same database, for example, a project to create the database from scratch, a project to upgrade it, and so on. I learned quickly that tracking package and release numbers across multiple projects for the same database gets very confusing. Instead, in the DM model, we create one overarching project, and define within it the steps and packages required for each release.

Step 3: Add a Database Package Step

Next, in DM, we create the steps and packages that will comprise our deployment process. Each project consists of a number of project steps, each performing one required action in the overall deployment, as defined by an associated package step. In this case, we have a very simple one-step deployment, which deploys our newly created SalesReportsDB database package. As we complete each stage of our incremental development of the SalesReportsDB database, we'll generate a new database package each time, based on a specific version of our source-controlled database, and then use this package step to deploy it.

Open the SalesReportsDB project and click Add Database Package Step. We'll create a database package step in order to deploy database packages for our SalesReportsDB database. Figure 13 shows the top half of this screen, where we define a Step name for the step, the database package that we want to deploy via this step, and the deployment environments, and SQL Server instances, to which we may wish to deploy this package.

Figure 13: Add the SalesReportsDB database package as a deployment step

In this example, I included all SQL Server instances in each of the environments. Later, when we create a release and deploy it, we'll be able to choose to which of these environments (TEST or STAGING) we wish to deploy the release.

The bottom half of this screen allows us to set the values for some pre-defined DM variables.

Figure 14: Pre-defined Deployment Manager Variables

By default, DM will deploy a database named after the package, as indicated by the reference to the $RedGatePackageName variable. In other words, the target database will have the name SalesReportsDB. If we wanted to change the name, for example to SalesReportsDB_test, then we'd simply enter $RedGatePackageName_test for the Database name.

For the Deployment mode field (which refers to the RedGateDatabaseMode pre-defined variable), we can choose between Upgrade or Drop and Create New. As the name suggests, in Drop and Create New mode, DM will always drop any existing database of the same name and create a new one, using the database configuration properties dictated by the model database for the target SQL Server instance. We won't be using this mode at all in our deployments. Instead, we'll choose Upgrade mode, and update the existing database in the environment. In Upgrade mode, if the database doesn't exist on the target instance, DM will simply create it, again using the database configuration properties dictated by the model database for the target SQL Server instance.

We're going to wait until a later article for a full discussion of the other variables on this screen, at which point we'll also discuss in more detail the general topic of customizing deployment projects with variables, and the use of these variables in pre- and post-deployment scripts, and so on. Note, though, that the setting of Abort on Warning severity could be important. By default, DM will only abort a deployment (leaving the target environment untouched) if it encounters a High warning, such as when the deployment will involve dropping a table. By comparison, dropping a database user is a Low warning. We'll discuss the best setting to choose later in the article.

For now, we can click Add to add this step to our SalesReportsDB project.

Figure 15: The newly-created database package step

Step 4: Create a Release

For reasons just discussed, we will skip past the Variables screen and move on to Releases. A "release" simply pulls into a bundle all the packages and other scripts necessary to deploy a database to an environment. Click Create Release and you'll then see a screen similar to that shown in Figure 16.

Figure 16: Creating a release

When we click Create release, DM grabs the latest Version of the relevant database package from the feed, in this case 1.0.8.

We associate a release with a specific Version number, just as we do for a database package. For a single package deployment, it makes sense that the version number will be the same as the database package number and that's the number that DM uses by default. However, if we're deploying a database and application together then we can easily imagine a situation where, for example, a release combines version 1.3 of a database package with version 1.8 of an application package. Depending on our established conventions, we may then want to assign a distinct Version number to the release. Within each release we can see the specific package versions that comprise it (see later).

Add Release notes as desired and then click Create. You should reach a Summary screen. The top half of this screen shows the release number, release note and details of the release creation, and the bottom half provides details of the packages that comprise the release and the environments to which we can deploy the release (in this case, all of them).

Figure 17: Release Summary screen

Step 5: Deploy a Release

As discussed earlier, if we simply use DM to deploy our package to the test instances, where SalesReportsDB does not currently exist, DM will deploy it according to the settings of the model database for each of those instances.

Our source SalesReportsDB database has a custom file configuration with a primary data file, a secondary data file and a log file, in specified locations. If the model database for each target instance is set up to accommodate this database configuration (perhaps with different drive numbering) then this is fine. However, if model for the target instance simply had the SQL Server default settings then the database deployed would have no secondary data file and the primary and log files would be in the default locations (e.g. D:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\).

In this article, we take manual control of the initial database creation on all environments, so at this point we need to run Listing 2, adapted as appropriate for each of our target SQL Server instances (Test1, Test2 and Staging).

Of course, this means that our database deployment package 1.0.8 will deploy "nothing", since that same database will already exist, but it still allows us to test the overall deployment process and become familiar with the DM deployment model.

Click Deploy this release. From the Deploy to: dropdown, we can select any of our environments as the target for the release. In this case, we're going to deploy to the Test1 and Test2 SQL Server instances in our TEST environment.

Figure 18: Deploy the release to TEST

DM executes the various stages of the deployment process, hopefully culminating in a screen that looks as shown in Figure 19.

Figure 19: Deployment Summary

By expanding the Output section at the bottom of the screen, we can see a full 'report' of what was deployed in the release, including the database package version, the variable settings for the project and so on. Deployment Manager will also report here any warnings or errors it encountered while deploying the release.

Having created a release, we can deploy it multiple times, to different environments. Simply navigate to the Releases page for the SalesReportsDB project, hit Deploy for the relevant release, choose the deployment environment (STAGING), and deploy the release as described previously.

Figure 20: Re-deploy a release

We can only re-deploy the same release if we make no changes to the configuration of the underlying database package step. For example, if we update the value of a variable (see Figure 14), then we must create a new release, and DM will assign it a new release version number. This means it's possible that different release numbers will contain the same version of the database.

As we progress through deployments, the DM Dashboard provides a useful overview of which releases we've deployed to which environments. Figure 21 shows that we've deployed release 1.0.8 both to TEST and to STAGING.

Figure 21: Release summary on the DM Dashboard

Reviewing DM Deployment Output

Although Deployment Manager reports that the deployment to STAGING succeeded without a problem, it's worth checking the Output for the release. In my case, hidden in this output is a 'Low' warning that, as part of the deployment, Deployment Manager (or SQL Compare, under the covers) dropped two database users from the target database. We'll discuss this issue in more detail later, in the section Security: Logins, Users and Roles.

Figure 22: Low level warning on dropping database users

Rapidly Deploying Database Upgrades with Deployment Manager

Having deployed an empty SalesReportsDB database to our TEST and STAGING environments, we can now iterate through a series of development phases, immediately deploying our upgraded database at the end of each phase.

Our two developers will work on the SalesReportsDB to:

  1. Create a schema (Reports), and two tables (ReportHeaders and ReportLines), a database role (ReportReader) with read access to these tables, and a user (WebConnect) who is a member of this role
  2. Create a stored procedure (GetTheBigBossReport), revoke direct table access from ReportReader, and instead assign execute privileges on GetTheBigBossReport
  3. Add a new column (Comments) to ReportHeaders and update GetTheBigBossReport to include it in its output

After each of steps 1 and 2, we're going to generate a new database package to deploy to the TEST environment. After step 3, we'll deploy to both TEST and STAGING.

Development Phase 1: Two Tables, a Database User and a Database Role

Listing 3 shows the script to create a schema (reports), and two tables (ReportHeaders and ReportLines).

USE SalesReportsDB;
GO
CREATE SCHEMA Reports;
GO
CREATE TABLE Reports.ReportHeaders
   (
     ReportSID INT IDENTITY(1,1) PRIMARY KEY ,
     CreatedTS DATETIME NOT NULL DEFAULT ( GETDATE() ) ,
     ReportType VARCHAR(50) NOT NULL
   )
ON "ReportData";
GO

CREATE TABLE Reports.ReportLines
   (
     ReportSID INT NOT NULL ,
     AttributeTypeId INT NOT NULL ,
     AttributeAmount NUMERIC(10, 2)
   )
ON "ReportData";
GO
ALTER TABLE Reports.ReportLines
  ADD CONSTRAINT pk_Reports_ReportLines
      PRIMARY KEY ( ReportSID, AttributeTypeId );
ALTER TABLE Reports.ReportLines
  ADD CONSTRAINT fk_Reports_ReportLines_to_ReportHeaders
      FOREIGN KEY ( ReportSID )
      REFERENCES Reports.ReportHeaders ( ReportSID );
GO

Listing 3: Creating a schema and two tables

In SSMS, connect to one of the development instances (Dev1 or Dev2) and execute Listing 3. In a second or so, SSMS object explorer should display small blue globes next to each folder that contains affected objects, and next to each individual object within. This indicates that these changes should be committed to source control.

Figure 23: SoC indicates changed objects to commit to source control

Right-click on the SalesReportsDB icon and select Commit changes to source control.

Figure 24: Committing changes in SoC

In this example, we'll receive a medium-severity warning: "Objects need to be created using one or more non-standard filegroups. These filegroups may need to be created". This happens because we are deploying the tables to a named secondary filegroup (rather than to the primary filegroup). The changes are committed and the blue globes disappear.

Next, we need to create the database user and role. Before we do that, however, we'll have to create an associated SQL-authenticated login for our user. We will need to create this login manually on every environment, since it is a server-level object and therefore outside the scope of SQL Source Control or Deployment Manager. Note, however, that if the login does not exist, Deployment Manager will create it, with a password of p@ssw0rd.

USE [master];
GO
CREATE LOGIN [WebConnect] WITH PASSWORD = N'<your pwd>',
   DEFAULT_DATABASE = [master], DEFAULT_LANGUAGE = [us_english],
   CHECK_EXPIRATION = OFF, CHECK_POLICY = ON;
GO

Listing 4: Creating the WebConnect SQL login

Listing 5 creates the database user and role, and grants to the database role permission to run SELECT queries on objects in the Reports schema.

USE [SalesReportsDB];
GO
CREATE USER [WebConnect] FOR LOGIN [WebConnect];
GO
CREATE ROLE [ReportReader];
GO

EXEC sp_addrolemember N'ReportReader', N'WebConnect';
GO

GRANT SELECT ON SCHEMA::Reports TO [ReportReader];
GO

Listing 5: Create the WebConnect database user and ReportReader role, with SELECT permissions on the Reports schema

SoC does not recognize the new user and role as changes to commit, in the way it does a table, unless we disconnect from the instance and then reconnect (simply refreshing at the server level does not seem to work). Having done so, the blue globes appear next to the affected Users and Roles folders and objects.

Figure 25: Changes to commit to source control

We can commit these changes to Source Control, as described previously, and shown in Figure 26.

Figure 26: Commit the new user, role and permission change on Reports

It is interesting that the granting of SELECT permissions to ReportReader, for the Reports schema, appears as an edit of the securable (the Reports schema) rather than of the principal (ReportReader). Notionally, this will make it harder to adopt the Karmanov approach of separating out entirely our handling of users, roles and permissions, from the handling of code and schema changes.

These objects don't currently exist on our second development instance. However, our second developer can simply click on SalesReportsDB and hit Get latest.

Figure 27: Get the latest changes in Dev2

Hit Get latest again and SoC will synchronize the two instances. Note that you may see the blue globes appear again next to the two tables, though if you click Commit changes, SoC commits nothing.

On either instance, you should now be able to connect as the user WebConnect and issue SELECT statements against the two tables (although they contain no data as yet).
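If you'd rather not open a separate connection to test this, you can impersonate the user from an administrative session; a minimal sketch:

USE SalesReportsDB;
GO
EXECUTE AS USER = 'WebConnect';               -- impersonate the database user
SELECT TOP (1) * FROM Reports.ReportHeaders;  -- should succeed at this stage
REVERT;                                       -- restore the original context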

Let's deploy these changes to TEST using the SSMS Deployment Manager add-in. Once again, we'll create the database package based on the latest version of SalesReportsDB in Source Control. On the Static Data screen of the DM package publisher, you'll notice that it will automatically include any static data tables linked to Source Control, but to add them manually you'll need to select the live database as the source. Since neither of our tables contains static data, we can simply hit Publish.

Figure 28: Create a new database package (1.0.9)

Having created the database package, from SSMS, switch to the DM web interface, click into the SalesReportsDB project, create a new release, and deploy it to test, as described previously.

Figure 29: DM dashboard after deploying 1.0.9 to TEST

Connect to the test instances and verify that all the objects are present and that the WebConnect user can connect and access the tables.

Development Phase 2: Create a new Stored Procedure

The next stage of the development effort is to create a stored procedure that users can execute in order to return their report data, rather than execute queries directly against the base tables, as shown in Listing 6. This method of upgrading code objects is adapted from Alexander Karmanov's series of articles, previously referenced.

USE SalesReportsDB;
GO

IF OBJECT_ID('Reports.GetTheBigBossReport') IS NOT NULL
   SET NOEXEC ON

PRINT 'Reports.GetTheBigBossReport: creating initial routine stub'
GO
CREATE PROCEDURE Reports.GetTheBigBossReport
AS
    BEGIN
        PRINT 'Reports.GetTheBigBossReport: routine not implemented yet'
        RETURN -1
    END
GO
SET NOEXEC OFF
GO
PRINT 'Reports.GetTheBigBossReport: routine code is updated'
GO

ALTER PROCEDURE Reports.GetTheBigBossReport @ReportSid INT
AS
    BEGIN
      SET NOCOUNT ON;

      SELECT rl.AttributeTypeId ,
             rl.AttributeAmount
      FROM   Reports.ReportLines AS rl
      WHERE  rl.ReportSID = @ReportSid;
    END
GO

PRINT 'Reports.GetTheBigBossReport: procedure creation script finished'
GO

REVOKE SELECT ON SCHEMA::[Reports] TO [ReportReader];
GO
GRANT EXECUTE ON SCHEMA::[Reports] TO [ReportReader];
GO

Listing 6: Create the GetTheBigBossReport stored procedure

SoC should indicate immediately the need to commit the new stored procedure, but again, unfortunately, you may need to disconnect and reconnect in order for it to pick up the permission changes on our schema.

We can confirm that the change worked by disconnecting, logging back in as WebConnect, and issuing a query against one of the two tables. We should see an error message.

Msg 229, Level 14, State 5, Line 2
The SELECT permission was denied on the object 'ReportHeaders', 
database 'SalesReportsDB', schema 'Reports'.

However, we can access the tables via our new stored procedure.

EXEC Reports.GetTheBigBossReport @ReportSid = 0;

Listing 7: Confirm that WebConnect can execute GetTheBigBossReport

Log back in as your normal user, click on SalesReportsDB, and select Commit changes; we should see the new object and the schema edit to commit.

Figure 30: Commit changes to Source Control

Hit Commit to submit the changes to source control. Connect to the second development instance, select SalesReportsDB and grab the latest change from Source Control and likewise verify that WebConnect can no longer access the tables.

Once again, we can package up this database version and release it to TEST, as described previously.

Figure 31: DM dashboard after deploying 1.0.10 to TEST

Once again, it's worth connecting to each of the test instances, both as an administrative user and as WebConnect, and verifying that all objects and permissions are as expected.
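A quick way to perform that check is to query the permissions catalog for the ReportReader role; a minimal sketch:

USE SalesReportsDB;
GO
-- List the permissions currently granted to ReportReader
SELECT pe.state_desc ,
       pe.permission_name ,
       pe.class_desc ,
       CASE pe.class_desc
         WHEN 'SCHEMA' THEN SCHEMA_NAME(pe.major_id)
         ELSE OBJECT_NAME(pe.major_id)
       END AS securable
FROM   sys.database_permissions AS pe
       JOIN sys.database_principals AS pr
         ON pe.grantee_principal_id = pr.principal_id
WHERE  pr.name = N'ReportReader';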

Development Phase 3: Add a new column, alter stored procedure

In this final step, in response to feedback, we need to add a new Comments column to the Reports.ReportHeaders table, and therefore also adapt our GetTheBigBossReport stored procedure to return this new column.

Listing 8 adds the new column.

USE SalesReportsDB;
GO
ALTER TABLE Reports.ReportHeaders
ADD Comments VARCHAR(MAX) NULL;

Listing 8: Add a new Comments column to SalesReportsDB

The full script to alter the stored procedure is in the code download file; essentially, it is identical to the stored procedure script in Listing 6, but with the ALTER PROCEDURE statement as shown in Listing 9.

SELECT rl.AttributeTypeId ,
       rl.AttributeAmount ,
       rh.Comments
FROM   Reports.ReportLines AS rl
       JOIN Reports.ReportHeaders AS rh
         ON rl.ReportSID = rh.ReportSID
WHERE  rh.ReportSID = @ReportSid

Listing 9: Alter GetTheBigBossReport to return the Comments column

Click on SalesReportsDB and select Commit changes.

Figure 32: Commit changes to SoC

Hit Commit to submit the changes to source control. Connect to the second development instance, select SalesReportsDB and grab the latest change from Source Control.

After that, we can package up this database version and release it to TEST, as described previously and then verify that we now return the Comments column when executing GetTheBigBossReport.

Figure 33: DM dashboard after deploying 1.0.11 to TEST

This completes our development cycle, and assuming we've also completed all necessary testing, we're ready to deploy the final release to Staging. Of course, our Staging environment is still back at version 1.0.8, just an empty database, with no tables, WebConnect user, ReportReader role, or stored procedure. However, we can bring it up to speed simply by re-deploying release 1.0.11.

Figure 34: Deploy 1.0.11 to STAGING

For a final time, connect to Staging and confirm that all the required objects exist.
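Rather than eyeballing Object Explorer, a simple catalog query confirms the contents of the Reports schema; a sketch:

USE SalesReportsDB;
GO
-- The tables and stored procedure should all now exist on Staging
SELECT name ,
       type_desc
FROM   sys.objects
WHERE  schema_id = SCHEMA_ID('Reports')
ORDER BY type_desc, name;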

Security: Logins, Users and Roles

In our development and deployment scheme, we've handled database users and roles in the same manner as schema and code objects.

This is rather different from the Karmanov scheme, where security is handled as a separate but integral part of the database deployment process, using configuration files: logins and users are stored in one configuration file, and roles and permissions in another. For example, the LoginsAndUsers.config file lists each environment, the instances that exist in each environment, and the logins and users that must exist on these instances.

<Environment Name="Default">
  <SQLServerInstances>
    <SQLServerInstance Name="machinename\instancename" />
    <!-- …etc… -->
  </SQLServerInstances>
  <Logins>
    <Login Type="sql" Name="WebConnect" Password="p@$$w0rd">
      <Database Name="ExampleDB" User="WebConnect">
        <Role Name="ReportReader" />
      </Database>
    </Login>
  </Logins>
</Environment>

Listing 10: The LoginsAndUsers.config file from the Karmanov articles

As discussed earlier, there is much to recommend this approach, as it gives us a single source from which to deploy a 'complete', standardized database environment. It also provides a single reference file for all login and user details and the roles to which these users belong, and then another single source to view the permissions assigned to these roles.

Furthermore, there are several potential issues with deploying users from one instance to another, in the same manner as code and schema objects. For example, we may encounter the "orphaned user" problem, where we need to realign a deployed user with its associated login (ALTER USER WebConnect WITH LOGIN = WebConnect;).
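A common way to spot such orphans, before or after a deployment, is to look for SQL-authenticated users whose SID has no matching server login; a minimal sketch:

USE SalesReportsDB;
GO
-- Find SQL users whose SID no longer matches any server login
SELECT dp.name AS orphaned_user
FROM   sys.database_principals AS dp
       LEFT JOIN sys.server_principals AS sp
         ON dp.sid = sp.sid
WHERE  dp.type = 'S'   -- SQL users only
  AND  dp.name NOT IN ('dbo', 'guest', 'sys', 'INFORMATION_SCHEMA')
  AND  sp.sid IS NULL;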

There are also issues with how Deployment Manager, using SQL Compare under the covers, will attempt to synchronize database users. It will attempt to remove any users that exist on the target but not on the source, or vice versa. This could be completely inappropriate behavior. In the target environment, especially one set up to mimic production, there may be valid reasons why, for example, database users must exist in the target database that do not exist in the source database, in development.

Disappearing Users

We referred to this problem earlier in the article, mentioning that Deployment Manager had 'quietly' dropped two users from the STAGING version of our database.

Let's investigate this issue in a little more detail. Unbeknown to us at the time, the model database for the Staging SQL Server instance was configured to add two database users to every new database, in addition to the 'default' users, such as dbo, and so on.

When we first created the SalesReportsDB database on Staging, manually (running Listing 2), these users were duly created in the database.

Figure 35: Auto-created database users on the Staging instance

These users do not exist in the development environment, and therefore do not exist in our database deployment packages. When we deployed release 1.0.8 (the empty database package), Deployment Manager 'synchronized' the user objects and therefore dropped these two users from the Staging version of the database. Deployment Manager showed no outwards signs that this had happened, but in the Output for the deployment, we saw Low level warnings raised for dropping the users (Figure 22).

Under the covers: SQL Compare

If SQL Compare has to delete a user from the target database, in order to synchronize it with the source database, it raises a Low warning. If the user owns a schema, it will also issue a Low warning for deleting that schema. If the schema contains tables, it will issue a High warning about dropping those tables. In DM, by default, only High warnings will prevent a deployment from starting.

We'll discuss possible solutions to this problem shortly, but first, let's consider one other possible user synchronization issue that you may encounter.

'Partially' Failed Deployments

One interesting issue that can arise when handling users in this manner is a 'partially failed' deployment. By default, DM will only abort a deployment on a High warning (from SQL Compare), such as dropping a table; dropping users and schemas are both Low warnings.

Therefore, by default, Deployment Manager will proceed with the deployment in spite of 'low' warnings, but then may find that it can't, for whatever reason, delete a user or its associated schema. This can lead to the 'partially failed' deployment.

I encountered this problem when I created the DepMan1 login (Listing 1) on a target instance, but then neglected to make it a member of the sysadmin server role. I deployed the initial "empty database" package and all seemed well. I performed phase 1 of the database development, generated a new database package to capture the changes and then attempted to deploy it, only to be confronted with a "Deployment failed" error.

Figure 36: Deployment Manager reports a failed deployment

We can expand the error output for the failed deployment to find out what went wrong. The reported error is long, but the crux of the matter is that Deployment Manager is trying to remove a database principal that exists on the target but not in development, and cannot drop it because it owns a schema that it also cannot drop. The error output does not identify the database principal.

2014-01-07 13:13:18 +00:00 INFO    The database update succeeded
2014-01-07 13:13:18 +00:00 INFO    Dropping users
2014-01-07 13:13:19 +00:00 ERROR   Error while executing job: Dynamic deployment failed
2014-01-07 13:13:19 +00:00 ERROR   The database principal owns a schema in the database, and cannot be dropped.

Figure 37 shows the resulting DM dashboard.

Figure 37: A failed deployment but the new release version number

Note that this suggests that release version 1.0.9 now exists on STAGING but that there was a problem with the deployment. We can confirm that the deployment essentially "succeeded", despite the reported failure, by connecting to Staging, where we see that the new tables, user and role have been deployed.

Figure 38: A "partially successful" deployment

The problem lies with the DepMan1 user, indicated with the arrow in Figure 38. When DepMan1 is not a member of the sysadmin role, Deployment Manager creates it as a user in the target database and makes it the owner of the RedGateLocal schema, where Deployment Manager stores data about its various deployments to that instance. If DepMan1 is a member of the sysadmin role, it maps automatically to the dbo user in the target database, which becomes the owner of RedGateLocal, and so DM does not create a DepMan1 user.
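In other words, if the intention is for DepMan1 to map to dbo, so that DM has no need to create a separate DepMan1 user, the fix is simply to grant the login sysadmin membership on the target instance (assuming, of course, that your security policy allows it):

-- SQL Server 2012 and later
ALTER SERVER ROLE sysadmin ADD MEMBER DepMan1;
-- earlier versions
EXEC sp_addsrvrolemember @loginame = 'DepMan1', @rolename = 'sysadmin';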

If you own a copy of SQL Compare, it helps clarify what is going on if you open it and compare the source database to the target, as shown in Figure 39.

Figure 39: Using SQL Compare to compare the source and target databases

Here we see three objects that exist only on the target database and not in the source database. Deployment Manager created these objects as a result of the previous successful deployment (1.0.8):

  • DeploymentManagerLastDeployment – a table-valued function that DM creates and updates on the target database to keep track of the most recently deployed package
  • RedGateLocal – the schema in which DM stores this table-valued function
  • DepMan1 – a user created in the database, and the owner of RedGateLocal, in cases where the DepMan1 login is not a sysadmin

When Deployment Manager started the subsequent deployment (1.0.9), it generated Low warnings for dropping a user (DepMan1) and a schema (RedGateLocal), but proceeded with the deployment process. The deployment of the schema objects succeeded, but then DM found that it could not drop the user, because it owns a non-empty schema. At this point, DM reported a failure.
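For completeness, if we really did want the user gone, the manual fix would be to transfer ownership of the schema and then drop the user, along these lines. In this case, though, doing so would be counter-productive, since DM relies on the RedGateLocal schema:

-- transfer schema ownership so that the user can be dropped
ALTER AUTHORIZATION ON SCHEMA::RedGateLocal TO dbo;
DROP USER DepMan1;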

It may confuse some readers that Deployment Manager did not roll back the failed deployment completely, which is what one might expect from the documentation:

Failed database deployments are rolled back: The SQL script Deployment Manager uses to deploy a database runs inside a transaction. If the script fails, any changes are rolled back…

However, DM can only roll back transacted changes, and the user actions (CREATE, ALTER or DROP USER) are not part of that transaction, so it cannot roll them back. Therefore, it is possible for the transacted part of the operation (schema objects, code objects and so on) to succeed, but for the user synchronization to fail, and for the deployment overall to be reported as a failure. In effect, the only part of the deployment that failed in this case was the attempt to delete DepMan1, and deleting this user may well not have been desirable in the first place!

Avoiding User Synchronization issues

As discussed earlier, in many respects it makes sense to deal with all aspects of security (logins, users, roles, permissions) as a separate part of the automated deployment process. We will discuss such an approach, using PowerShell scripting and DM, in a later article, but for now there are several possible ways to avoid these sorts of user synchronization issues:

  • Alter the warning settings in Deployment Manager – attempts to drop a user or schema generate Low warnings, and by default DM will only abort a deployment completely (i.e. not even attempt it) in response to a High warning. If we adopt a more sensitive warning threshold, we will avoid "partially failed" deployments, but of course some trivial issues will then terminate the deployment
  • Don't put users into source control – we can set up filters so that SoC simply does not attempt to place users in source control. Of course, this means that we will need to create and maintain users manually on each instance
  • Instruct DM to ignore certain objects – we can place a filter into source control, to which DM will refer, and which we can use to filter out certain types of objects, or even objects with specific names

We'll explore the approach of using filters to instruct DM not to deploy specific objects.

Using SQL Source Control Filters with Deployment Manager

Right-click on SalesReportsDB in the SSMS object explorer, and select Other SQL Source Control tasks | Edit filter rules…. The Edit Filter Rules… tab in SQL Source Control allows us to define a set of filter rules, with which we can exclude certain types of objects from the comparison. For example, we can exclude database users and roles from a deployment, if we wish to deploy them by a separate process.

There is also a button for defining custom rules, for example to exclude objects that follow a particular naming pattern. In this example, I've created a filter rule to exclude any objects whose names begin with DepMan1.

Figure 40: Creating a filter in SQL Source Control

We can include both AND and OR conditions, so we could also add filters to prevent the earlier 'disappearing users' problem.

Click Save and close, and SoC will create the filter. Click Commit changes in SoC, and it will detect the new filter, which we can then commit to source control.

Figure 41: Committing the new filter to source control

SoC saves the new filter (Filter.scpf) in the root of our project directory in source control. If we open it with a text editor, we see that it has a standard XML format.

<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<NamedFilter version="1" type="SQLCompareFilter">
   <!-- This filter can be loaded using the SQL Compare UI or used with a SQL Compare command line session using the /filter switch-->
   <FilterName>Filter created using SQL Source Control</FilterName>
   <Filter version="1" type="DifferenceFilter">
      <FilterCaseSensitive>False</FilterCaseSensitive>
      <Filters version="1">
         <None version="1">
            <Include>False</Include>
            <Expression>(@NAME LIKE 'DepMan1%')</Expression>
         </None>
         <Assembly version="1">
            <Include>True</Include>
            <Expression>TRUE</Expression>
         </Assembly>
         <AsymmetricKey version="1">
            <Include>True</Include>
            <Expression>TRUE</Expression>
         </AsymmetricKey>
         <Certificate version="1">
            <Include>True</Include>
            <Expression>TRUE</Expression>
         </Certificate>
         <Contract version="1">
            <Include>True</Include>
            <Expression>TRUE</Expression>
         </Contract>
…<output snipped>…

Listing 10: XML format of Filter.scpf file

Here we can see the exclusion filter for objects with names like "DepMan1". Further, for any type of object we wished to filter out entirely, the <Expression> element would be empty, rather than TRUE.

Current Bug when trying to use a SQL Compare filter

Listing 10 refers to this filter as a "SQL Compare filter" and, notionally, the SoC-created filter should be identical to a SQL Compare filter. Therefore, it should be equally possible to create a filter in SQL Compare, save it (as Filter.scpf), and store it in the root of our project directory in source control. However, some difference in format between a SoC-created filter and a SQL Compare-created filter means that the next time we perform a Get Latest in SoC, and it includes the new filter, SoC ceases to function (it cannot register the database).

The next step is to generate a new database package, as described several times previously. The package will automatically include the new filter and, this time, the deployment to Staging should succeed, since DM no longer attempts to remove the DepMan1 user from the target.

Summary

This article described, in some detail, a development and deployment scheme based on the Red Gate toolset. This enables a very natural mode of database development, working on a dedicated database in SSMS, and allows for easy deployments between environments. It also allows rapid deployment of upgraded databases through our various deployment environments, including those configured in the manner of production, such as Staging. By deploying to such environments early in development, developers and administration staff can work together to iron out any potential problems that the new deployment could cause in production, long before it's time to actually deploy.

The only notes of caution concern, firstly, how to handle the initial creation of the database and its configuration. In this article, initial database creation was a manual process, but it may be better to drive it from a configuration file.

Secondly, we have the issue of how to handle deployment of security-related objects, namely logins, users, roles and permissions. If we deploy changes to these objects in the same way as we deploy changes to schema and code objects, it can lead to difficulties such as accidentally deleted users and partially failed deployments. An interesting challenge in separating the treatment of schema and code objects from that of security objects, using the model described in this article, is that, under the covers, SQL Compare treats a modification of permissions as an edit of the securable, rather than of the principal.

In the next article in this series, I'll explain how to automate initial database creation, and the deployment of security objects, as a separate but integral part of the deployment process, using PowerShell scripts. I'll also examine some more advanced aspects of database deployment, such as the use of migration scripts to customize deployments.


Tony Davis

Author profile:

Tony Davis is an Editor with Red Gate Software, based in Cambridge (UK), specializing in databases, and especially SQL Server. He edits articles and writes editorials for both the Simple-talk.com and SQLServerCentral.com websites and newsletters, with a combined audience of over 1.5 million subscribers. You can sample his short-form writing at either his Simple-Talk.com blog or his SQLServerCentral.com author page.

As the editor behind most of the SQL Server books published by Red Gate, he spends much of his time helping others express what they know about SQL Server. He is also the lead author of the book, SQL Server Transaction Log Management.

In his spare time, he enjoys running, football, contemporary fiction and real ale.


Have Your Say


Subject: Data-impacting changes
Posted by: Robert Gammon
Posted on: Wednesday, March 12, 2014 at 10:00 AM
Message: Ok, so far so good.

Now alter the Comments column to varchar(200) not null, replacing nulls with empty string and truncating any comments greater than 200 characters to the first 197 characters + "..."

Subject: +
Posted by: Alexander Karmanov
Posted on: Thursday, March 13, 2014 at 1:36 PM
Message: Thanks Tony,
Looking forward reading the next part.

 
