DevOps and Database Lifecycle Management

Database Lifecycle Management (DLM) aims to provide a roadmap of what is required, and when, to deliver and maintain effective databases that can respond quickly to business change. How does the DevOps movement, as it applies to databases, fit into this? William Brewer explains how DevOps provides the organizational change, between delivery and operations, that makes important parts of the process easier to introduce.

DevOps, Continuous Delivery & Database Lifecycle Management
Culture and Organization

As internet commerce took off spectacularly in the ‘noughties’, a new breed of application became more common, one that was not only the public face of the organization but was actually intrinsic to the organization.

The organization was, effectively, the application but unlike the classic applications of commerce, these internet applications had to evolve continuously in the face of competition and demand from users. They did not conform to the accepted lifecycle of previous applications, and could not afford any relationship with the business other than total involvement.

The traditional divisions and practices of established IT simply weren’t appropriate any more. A new way of team-working had to evolve to make it possible to develop applications rapidly and deliver new functionality with as little delay as possible. DevOps in this context wasn’t just a nice idea, it was essential.

Today, the majority of organizations aren’t in this predicament. Rather than a handful of applications under continuous development, surrounded by a plethora of Business Intelligence satellites, most applications represent a considerable operational task with clear-cut objectives and constraints. In many cases, the delivery activity is focused on managing the data-flow between bought-in applications, and on both reporting and BI. The creation of new business applications is far less crucial to the business.

In this sort of setting, DevOps isn’t essential, but does it represent good practice? The answer must be that the DevOps movement has developed techniques to solve the challenges of rapid delivery, and these techniques have turned out to be generally useful. DevOps practices have become, in many cases, mainstream. Techniques such as continuous deployment have helped a large proportion of the industry.

DevOps, originally known as ‘Agile Infrastructure’, started as a groundswell of frustration at the entrenched segregation and autonomy of the three application-oriented activities within IT: delivery, operations and governance. There was no established methodology that dictated such an ‘iron curtain’. On the contrary, methodologies such as Catalyst had been advocating a more cooperative cross-specialism approach for over 20 years.

None of the development methodologies such as SSADM advocated this silo mentality either. No ‘waterfall’ approach had actually insisted on a Chinese wall between operations and delivery. However, IT practices had become entrenched in a particular way of working. It was a cultural problem rather than a technical one, but the growing need for a more rapid and predictable delivery of change to ecommerce sites brought matters to a head.

DevOps is all about bridging the cultural and technical gap between different teams so that practices such as Continuous Integration (CI) and Continuous Delivery (CD), which demand a high level of cooperation between teams, become possible.

Done well, DevOps represents a transfer of effective practices and methods between operations and delivery teams, infecting Ops people with some of the best development practices such as source control, versioning and issue-tracking, and automated processes like testing. In turn, operations teams give insights to the delivery teams about concerns such as infrastructure monitoring, immutable architecture, real-time alerts, configuration and integration management, and the requirements underlying high availability, delivery and hosting.

In some cases, it may involve operational people working ‘embedded’ within delivery teams, or developers working on operational tasks, but it doesn’t require it. The focus is on the exchange of ideas and expertise. It does not tackle the sometimes conflicting concerns between governance and delivery or operations, nor does it encompass the very earliest stages of planning a database, or the mature system that is no longer being actively developed.

Both delivery and operations have aspects of responsibility that are of little outside interest. Ops teams have a responsibility to maintain many processes, applications and databases that aren’t being developed in-house, or aren’t being actively supported at all. Delivery teams have responsibility for implementation detail that is of no interest to either operations or governance. DevOps is relevant where the interests of delivery and operations meet.

The difference between DLM and Database DevOps

How does the DevOps movement, as it applies to databases, differ from Database Lifecycle Management (DLM)? The short answer is that database DevOps focuses on part of the database lifecycle: the intersection between Delivery and Operations. It is, to be frank, a very interesting part, but DevOps for the database doesn’t aim for a broad view of the whole life of a database. Database DevOps is more concerned with enabling good co-operation between Development and Operations to facilitate the rapid delivery of database changes.

DLM, on the other hand, aims to make sure that the development, use and replacement of a database system achieves a measure of quality and scientific method, rather than relying on individual heroics. It encourages teams to evolve their way of working from the chaotic to the managed. It sheds light on all the necessary activities of providing a technical service to users, even those that are of no interest or concern to the vast majority of developers.

Is DevOps enough?

DevOps is a key to the effective delivery of an application, and is particularly useful in moving from big intermittent releases to Continuous Delivery.

It makes it easier to introduce version control for all aspects of hosting applications in organizations. It is likely to make applications easier to support and maintain, by making developers aware of operational requirements earlier in development, and by ‘shifting left’ many of the test, security and compliance chores that were customarily postponed until deployment.

However, it really needs to go hand-in-hand with a new spirit of cooperation and culture-sharing with the governance aspect: otherwise, we are in danger of delivering the wrong functionality more quickly and efficiently.

DevOps and the database

There was an idea that gained a lot of traction when DevOps was maturing; the database, like the application, was ‘just code’, and should be treated as part of the application. A database, in this way of thinking, is just a slightly weird component of the application that can be coded, built, tested and deployed alongside the application code.

The truth is rather more complex. It is true that a build script can be generated for everything within a database, and that these build scripts can reconstitute the database. This is also the case for both the data and the metadata, the DML and the DDL. However, database developers and DBAs use a number of methods, including entity-relationship diagramming tools, wizards, and interactive table-design tools, to develop a database, and the code is often generated retrospectively. Each build must bring all the development and test databases in line with the current version, whilst preserving the existing data. Where changes affect base tables in ways that change the data, this requires migration scripts, or changes to the data-import process.
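The data-preservation requirement for such a migration can be sketched with a toy example. This uses Python’s sqlite3 module as a stand-in for a full RDBMS; the table, column names and values are purely illustrative.

```python
import sqlite3

# A minimal sketch of a data-preserving schema migration, using SQLite
# as a stand-in for a production RDBMS. All names here are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customer (name) VALUES ('Alice'), ('Bob')")

# The migration adds a column, then backfills it from a default so that
# no existing rows are lost and the new version's assumptions hold.
conn.execute("ALTER TABLE customer ADD COLUMN status TEXT")
conn.execute("UPDATE customer SET status = 'active' WHERE status IS NULL")

rows = conn.execute("SELECT name, status FROM customer ORDER BY id").fetchall()
print(rows)  # [('Alice', 'active'), ('Bob', 'active')]
```

A real migration against base tables holding live data would also need to run inside a transaction and deal with dependent objects, which is precisely why it must be treated as a versioned, team-reviewed artefact rather than an ad-hoc script.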

There are also server settings that either affect the way the database works, such as server configuration, or are effectively part of the database, such as agent jobs or alerts. There are database settings too, such as file locations, that can be enshrined in code but have to be changed for the server on which the database runs. In short, it is easier to work in isolation in application development than in database development: the latter is always a team-based activity.

Another problem for businesses grappling with the issue of evolving applications rapidly in the face of competition is that relational databases aren’t amenable to rapid change once they have become established. Relational database practice assumes that the data is well-understood before the database is designed. Changes to the table schema of a complex database can be tricky, not only in the design changes and corresponding changes to every dependent database object, but also in the data migration. Databases don’t easily fit the Agile paradigm, though Agile does acknowledge the need for a good understanding of the business domain before development starts.

Initially, NoSQL offered the tempting hope that, by being ‘schema-less’, one could get all the advantages of relational databases without the pain. Sadly, its advocates had forgotten that RDBMSs provide ACID-based data and transactional integrity, and that these qualities couldn’t be fudged by promising ‘eventual consistency’. NoSQL databases were fine for the type of data for which they were developed, but were dangerous for transactional operations. Some spectacular Bitcoin-exchange collapses convinced even the NoSQL die-hards that transactionality was required for any commercial activity.

Microservice architectures, developed from the ideas of service-oriented architectures, also offered the hope of avoiding the need for an enterprise-scale RDBMS to manage the transactional integrity of business processes; such an RDBMS was seen to make even fast-evolving applications dependent on an institutionally-oriented operational culture. Here again, the problems of managing distributed transactions reliably had been grossly underestimated. However, where transactionality isn’t an issue, this architecture can cut down on interdependencies and therefore make development easier.

The most promising solution to this problem, of how to accommodate fast-developing applications that need to deploy continuous changes to a changing market, is to understand the nature of the data involved. This makes it possible to use graph databases, document databases and other specialized database systems for unstructured or semi-structured data, relational databases for transaction processing, and OLAP cubes for business intelligence. In other words, a heterogeneous data platform is adopted, rather than a single platform.

Is there such a thing as a ‘DevOps’ tool?

Before DevOps, there was a temptation for some of the major players in the industry to seize on any change or initiative in the approach to software development as an opportunity to tie users into an all-encompassing solution to the entire application delivery process: the Godzilla IDE. This would include the obvious components such as build servers, a source control system, bug tracking, provisioning and so on. Rather than trying to integrate existing applications, behemoths were created that solved the problem of integration and communication by not having to communicate at all, since theirs was the only necessary application.

Ordinary people working in IT were even less inclined to buy that dream than senior management. Over the past two decades, software tools have become more specialised and component-oriented. Delivery and Ops people demanded software that worked together, not only on Windows or Linux, but across platforms. A delivery team and an operations team needed to be able to pick and choose from a whole range of specialist tools, and had to rely on them working together in a CLI-based ‘toolchain’ that could automate an entire process such as building, testing, packaging, releasing, configuring and monitoring. For network intercommunication, components can expose their services as Representational State Transfer (RESTful) services. The accent is on simplicity, and on ensuring that tools that support development processes can, where necessary, conform to the demands of the toolchain.

The groundswell of DevOps became concerned with solving problems with toolchains, often open-source, in which each tool, of specialised purpose, could take its input from the previous tool in the chain and pass it on to the next. It meant that tools needed to co-habit, rather than dominate.

Microsoft was late into this change in the way of managing processes but, with a fairly clear field and little need for backward-compatibility, was able to introduce a scripting technology that could not only provide the toolchain, but also allow objects, rather than documents, to be passed between tools. It was close to the old command-line batch scripts, but also took inspiration from Bash scripts. Windows DevOps tools tend to be PowerShell-aware, or even shipped as PowerShell cmdlets, but they can also be useful participants in a toolchain through a Command-Line Interface (CLI).
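The toolchain principle, each specialised stage consuming the previous stage’s output and passing a structured object to the next (the idea behind PowerShell’s object pipeline), can be sketched as a toy pipeline. The stage names and fields below are hypothetical, not real tools.

```python
from functools import reduce

# Toy toolchain: each stage consumes the previous stage's output and
# passes a structured object (not flat text) to the next stage, which
# is the principle behind PowerShell's object pipeline. All names here
# are made up for illustration.
def build(spec):
    return {**spec, "artifact": f"{spec['name']}-{spec['version']}.pkg"}

def run_tests(state):
    return {**state, "tests_passed": True}

def package(state):
    return {**state, "release": state["artifact"] + ".signed"}

def toolchain(stages, initial):
    # Chain the stages like a CLI pipe: output of one is input of the next.
    return reduce(lambda state, stage: stage(state), stages, initial)

result = toolchain([build, run_tests, package], {"name": "app", "version": "1.2"})
print(result["release"])  # app-1.2.pkg.signed
```

Because each stage only agrees on the shape of the object it passes on, any stage can be swapped for a specialist tool without disturbing the rest of the chain, which is the co-habitation the text describes.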

Can a relational database be Agile?

The short answer is yes, but it requires some compromises in the way that data is accessed from front-end applications. Agile relational database techniques are well-established but rarely practiced.

Rapid deployments need to be as risk-free as possible. By default, a new release exposes the latest functionality to all users immediately. However, if it is possible to preserve the functionality of the original interface intact and expose the new functionality only to a few users whilst allowing immediate rollback, then this risk can usually be reduced to the point that it doesn’t affect the business significantly.

Where the delivery team are working on the deployment alongside the Ops team, it becomes easier to manage an operation that requires a lot of knowledge of the system.

Databases have a particular problem with this approach: the need to be certain of preserving all changes to the data in a controlled delivery. Achieving it requires that the database is designed in a particular way, insulating the actual base tables from the application(s) behind a defined interface at the database level, in much the way you would between application modules. Base tables do not represent a satisfactory interface!
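The idea of insulating base tables behind a defined interface can be illustrated with a view: the application queries only the interface, so a base-table refactoring can proceed without breaking it. This sketch uses Python’s sqlite3 module as a stand-in for the production RDBMS; all names are illustrative.

```python
import sqlite3

# Sketch: the application queries a view (the interface), never the base
# table, so the base table can be restructured behind it.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer_base (id INTEGER PRIMARY KEY, full_name TEXT)")
conn.execute("INSERT INTO customer_base (full_name) VALUES ('Alice Smith')")
conn.execute("CREATE VIEW customer AS SELECT id, full_name AS name FROM customer_base")

names_before = conn.execute("SELECT name FROM customer").fetchall()

# Refactor the base table (split the name into two columns) while the
# view preserves the interface that the application depends on.
conn.executescript("""
  DROP VIEW customer;
  ALTER TABLE customer_base ADD COLUMN first_name TEXT;
  ALTER TABLE customer_base ADD COLUMN last_name TEXT;
  UPDATE customer_base SET
    first_name = substr(full_name, 1, instr(full_name, ' ') - 1),
    last_name  = substr(full_name, instr(full_name, ' ') + 1);
  CREATE VIEW customer AS
    SELECT id, first_name || ' ' || last_name AS name FROM customer_base;
""")

names_after = conn.execute("SELECT name FROM customer").fetchall()
print(names_before == names_after)  # True: the interface is unchanged
```

In SQL Server the same insulation is more often achieved with views, inline table-valued functions or synonyms, but the principle is identical: the contract the application sees survives the change underneath.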

There are techniques that lend themselves to this type of Agile development, such as feature toggles and feature routers. Some aspects of SQL Server are especially useful, such as table-valued functions and synonyms. It is perfectly possible to expose different versions of the database to different users whilst keeping the underlying data up-to-date, and to change the version of the database exposed to a particular user merely by changing the user’s Windows group.

This sort of release needs a lot of testing, and defensive development. It is best to use feature-switching techniques that can be controlled simply by making changes to user access, so schema-based security is ideal. The downside is that schema-based access requires considerable re-engineering of an existing database. Feature-switching via code changes is ill-advised. The worst mistakes come from switching features via global toggle variables, especially if they are recycled.
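One alternative the author describes in the comments below is a ‘control’ table, read by the data-access routine, so that switching a feature is an auditable data change rather than a code deployment. A minimal sketch, with sqlite3 standing in for the real server and illustrative names throughout:

```python
import sqlite3

# Feature-switching via a 'control' table rather than a global toggle:
# the data-access routine consults the table on each call, so the switch
# is a data change that can be audited and reversed instantly.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE feature_control (feature TEXT PRIMARY KEY, enabled INTEGER)")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL, discount REAL)")
conn.execute("INSERT INTO orders (total, discount) VALUES (100.0, 10.0)")
conn.execute("INSERT INTO feature_control VALUES ('show_discount', 0)")

def customer_orders(conn):
    # The interface routes to old or new behaviour via the control table.
    enabled = conn.execute(
        "SELECT enabled FROM feature_control WHERE feature = 'show_discount'"
    ).fetchone()[0]
    cols = "id, total, discount" if enabled else "id, total"
    return conn.execute(f"SELECT {cols} FROM orders").fetchall()

before = customer_orders(conn)   # feature off: [(1, 100.0)]
conn.execute("UPDATE feature_control SET enabled = 1 WHERE feature = 'show_discount'")
after = customer_orders(conn)    # feature on:  [(1, 100.0, 10.0)]
print(before, after)
```

In production the control table would sit behind access control and be read from within the stored procedure or table-valued function that the application calls, so the switch stays monitored and auditable.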


Conclusions

There has always been a gap between good practice and the conventional organization of the average IT department. In terms of the lifecycle of the application and database, organizational convention often dictates a strange wall of non-communication, sometimes even of non-cooperation, between the delivery activity and operations. This has never been sanctioned or condoned by any known methodology, and it gets in the way of delivering updates to an application or database.

To be sure, a delivery team is focused on their project, whereas an operational team has a wide-ranging responsibility for the whole range of current applications, databases and processes, and the way they interact. They are very different perspectives. This means that cooperation and the exchange of ideas and techniques is valuable but the two teams can only merge into a single team in the unusual case where an organization has just one application that requires operational support.

DevOps is an initiative that comes from frustration felt with an organizational system that has evolved in an adaptive way. It stems from individual professional people caught up in the resulting confusion. It makes it easier to introduce many of the techniques that underlie the best DLM practices, especially the delivery pipeline and database provisioning, and it focuses on part of the database lifecycle. It is not concerned with the entire sweep of DLM, and has little to say about governance or the operational support of databases that are no longer being actively developed in-house.

It has, however, been a very positive influence in making it possible to take part of the database lifecycle, the delivery pipeline, and encourage both good practices and new initiatives in reducing the time it takes to deliver change.

  • davidkallen

    I like the general thrust of your article – that CI,CD are useful and DevOps enables them. I also get the point that databases are different from pure application code and it is tricky to manage them. But I am confused by your remarks about feature changes.
    First you say

    “Feature-changes via code changes are ill-advised.”

    But that is the definition of an enhancement. We change features by updating and deploying changed code. Surely you don’t mean we should not make enhancements (changes) to code, do you?

    Second, you say

    ” The worst mistakes come from switching features via global toggle variables”

    Yes, I read the Phil Factor article linked to the next clause,” especially if they are recycled.” But re-using an allegedly “unused” global switch is a horrendous mistake. That is as stupid as reusing another variable instead of making a new one. But we don’t stop using variables because someone misuses variables.

    So help me understand your advice. Are you saying we should never use global feature switches for any feature toggle? Even if the feature is code based only? And that ALL feature toggles should be implemented by changing database permissions? Maybe your thought was clear and got muddled in the writing. But I don’t understand what you are recommending and in what context.

    What I took away from the last section is that schema variation and permission changes can be a useful addition to the tools we use for implementing feature toggles where database change is involved. However, if this is our scenario, I have also heard of a promising approach of making only additive changes with any forward database change. This makes rollbacks safe. Then, once it’s safely in production, and verified as working, meaning we will no longer roll back, we can do a subsequent change to delete the old stuff. This means carrying redundant code and database objects as a bridge. But sometimes bridges are needed, like sutures on a wound, that can be removed later, but serve a vital transition until new tissue is in place. There may still be edge cases where even that is problematic. But it’s another useful tool.

  • Andrew Clarke

    (Editor) the phrase “Feature-changes via code changes are ill-advised.” should have been “Feature-switching via code changes are ill-advised.” Mea culpa, and it has been corrected. It will be best if William answers the other points!

  • William Brewer

    David, very pleased you like the general thrust, and that you were interested in Feature-switching.
    The section on Feature-Switching really could occupy the space of several articles, and here I was only trying to explain the general principles. Sorry if I made it too condensed!
    The first point you make is easily dealt with. It seems to have been a copy-editing mistake. I meant that feature-switching via code isn’t a good idea in a database. I should have explained more about this. My point here is that if you need to change code, such as the contents of a function, or even to change a synonym to switch features, then that represents what a DBA would understand as a version change, with all the knock-on consequences. Sure, a DevOps culture would make that easier but I’ve spent a long time trying to persuade colleagues of the dangers of version-creep and I’m quite pleased that they are picky about this.
    Global variables don’t exist in an RDBMS, and I don’t like using ‘tricks’ to provide them such as extended properties for reasons of security, access control, audit and operational safety. I have to admit that I tend to use a ‘control’ table, accessed only by a function, to do feature switching. I suspect that there are plenty of other ways. Any approach can be made safe as long as access control is in place to cover the switching. The reason I use a special table is that the only change you make is in data, it can be audited, it’s OK by the DBAs, it is simple to monitor and the process isn’t wacky. It is easily done. The stored procedure or table-valued function that you call from the application checks the value in the ‘control’ table via the function and reacts accordingly. If your application is accessing the data via a view, you can again make the appropriate switch according to the value in the ‘control’ table if it is calling a TVF under the covers.
    In answer to your last question, I’d prefer to use the permission system and schema-based security to do it because it is the easiest to administer and allows you much finer granularity (you can actually move users across individually or in groups), but this requires the database to be designed that way. Realistically, we also need to have easier approaches, such as the control table, that require fewer changes. Like you, I’ve heard good things of the ‘roll-forward’ approach to managing deployments, but I’ve never done it that way. It sounds like what you describe would be a useful way of delivering features. Perhaps you should publish it!
    I’ll check over that paragraph to make sure it isn’t ambiguous.

  • daz

    Hi William, thanks for the article, some great insights. When I read this kind of article I’m always looking to spot validation for the techniques I’m using and maybe pick up some new ideas. Right now I’m a one man band DB dev shop, so technique is especially important to me but something like agile is a bit of a stretch – no stand-ups for me! Where you say “Some aspects of SQL Server are especially useful, such as table-valued functions and synonyms.” my immediate thought was why didn’t you mention Views? – and I note that you do say “some aspects”. Rather than table switching, Views are good for hiding small DB changes in the underlying model until everything else is ready, and certainly in the defensive programming realm. Is that fair?

    • William Brewer

      Thanks very much for the compliments. Yes, you are right that Views are important, and I should have mentioned that. However, Views, which are most commonly used for this purpose, aren’t the only way, and I was trying to emphasise other techniques that are even more useful. I was intrigued that you mentioned table-switching as a separate technique to using Synonyms. Is there another way of table-switching (or, more precisely, object-switching) other than using synonyms? I’d be fascinated to hear of other viable techniques. The reason I prefer TVFs to Views is that the application passes parameters rather than doing searches directly on views, and this gives you quite a bit of extra flexibility in making changes invisibly to the application. It also allows you to do more in the way of monitoring how the interface is used by the application(s). An inline TVF, as opposed to a multi-line TVF, is treated by the Query Optimiser in exactly the same way as a view, so I don’t really see the disadvantage of using TVFs over Views.