Matthew Skelton

Common database deployment blockers and Continuous Delivery headaches

06 August 2014
Database Lifecycle Management Patterns & Practices Library Stage 4


Release Management

Deployability is now a first class concern for databases, so why isn’t it as easy as it should be? Matthew Skelton explores seven of the most common challenges which will bring your database deployments to their knees, and the steps you can take to avoid them.

Deployability is now a first-class concern for databases, yet several technical choices, some conscious and some accidental, combine to block it. Can we improve database deployability and enable true Continuous Delivery for our software systems? Of course we can, but first we have to see the problems.

Until recently, the way in which software components (including databases) were deployed was not a primary consideration for most teams, but the rise of automated, programmable infrastructure, and powerful practices such as Continuous Delivery, has changed that. The ability to deploy rapidly, reliably, regularly, and repeatedly is now crucial for every aspect of our software.

We’re going to look at some of the most common reasons why databases tend to become less deployable over time, along with some suggested remedies: clear practices, and perhaps a little re-thinking of how databases are used and evolved. I’m not about to pretend that there aren’t team structures, responsibilities, and communication patterns that affect database deployability, but that’s a discussion for another day. For now, let’s focus on the most common technical blockers to database deployment and how to fix them.

Deployability and Continuous Delivery

The value of Continuous Delivery – regular, rapid, reliable and controlled delivery of working software systems into Production – is becoming increasingly apparent. Not only do your users see results and new features more quickly (thus making for a happier management team), but the shorter feedback cycle vastly increases your ability to learn about and improve the software system as a whole, reducing bug-fixing and 'firefighting'. Everyone wins!

However, in order to sustain this high rate of controlled change, the component parts must be easy and quick to deploy and verify, and the simplicity and speed of deployment (and post-deployment verification) are measures of their “deployability”. In short, it’s now essential to be able to deploy database schemas, objects, code, procedures, reference data, and a representative sample of dynamic data as easily as deploying an OS package or a web application.

Thankfully, the tools and techniques for achieving this are already largely known, although in some cases we may need to change how we use our databases in order to take full advantage of them.

What prevents deployability for databases?

Several 'forces' act against databases to make them increasingly undeployable over time: the complexity of database-internal logic; complexity of cross-application dependencies; sheer size (data volume); risk (or more accurately, perception of risk); and Production-specific tooling and configuration. None of these is necessarily a critical issue in itself, and most will be recognised by experienced DBAs. However, taken together, they will bring your deployment to its knees.

As a first step, we need to acknowledge that many databases – particularly those ingesting and supplying transactional data from live applications – need to be treated somewhat differently in Continuous Delivery from most other parts of the application and infrastructure estate, and this is partly why their deployability blockers might not be obvious at first.

Many of these problems aren’t at all unique to databases, but if you’re a DBA you’ll almost certainly recognise and have worked around them individually for separate reasons. However, even though these problems may seem disconnected, taken in combination, they are sure-fire ways of making your databases painful to deploy: the whole is greater than the sum of the parts.

By the same token, the 'remedies' described here may well be practices you’ve put into place before but, taken together, they’ll form the backbone of a painless database release process. Of course, none of these changes or steps are “free”, because refactoring is hard (as is explaining to the Business Intelligence team that they can’t have access to the production database anymore). If you want to optimise for database deployability – and you should! – then here are some things to tackle right away.

1. Flow of Production-originated data vs. everything else

Let’s start with a problem that actually IS fairly special to databases. Our applications, middleware, infrastructure, database schemas, reference data, and even cloud hosting 'fabric' can and should now all be defined as code, and flowed down a deployment pipeline from development towards Production, with significant automated (and possibly some manual) testing on the way. In effect, like salmon swimming upstream, Production-originated data needs to go 'against the flow' of changes and user requirements, which all flow 'downstream'. However, too often this pattern of 'Production first' is also used for changes to schemas, reference data, and database logic, leading to 'drift' between the Production database and versions used by development and test teams. As you can guess (and are probably aware from painful experience), that’s going to block your deployments.

Remedy: Minimize changes in Production

One way we can improve the deployability of databases is to separate the database changes that should flow downstream (with other code changes) from the data which cannot. Ideally, the only database changes that should be made 'in Production' are changes to live data, and reference data should change in development and flow down. The addition of new indexes and other performance tuning should be made in a pre-Live environment using recent live data and traffic replay techniques.

By reducing the 'specialness' of Production down to its live data, we can increase the amount of testing possible in upstream environments. This will increase confidence in database changes, encouraging a higher rate of change and, in turn, a reduced fear of database deployment (not to mention less surprise fire-fighting). It also improves the team's ability to track changes and to roll back changes that don't go according to plan.
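To make the 'downstream flow' concrete, here is a minimal sketch (in Python, with an invented `migrations` list and invented per-environment versions, not any particular migration tool) of schema changes being written once in development and then applied, in order, to each environment on the way towards Production:

```python
# Schema changes as ordered, versioned migrations. Each migration is
# authored in development and flows downstream; no environment ever
# receives a change 'Production-first'.

migrations = [
    (1, "CREATE TABLE customer (id INT PRIMARY KEY, name VARCHAR(100));"),
    (2, "ALTER TABLE customer ADD COLUMN email VARCHAR(255);"),
    (3, "CREATE INDEX ix_customer_email ON customer (email);"),
]

def pending(applied_version, migrations):
    """Return the migrations an environment still needs, in order."""
    return [(v, sql) for v, sql in migrations if v > applied_version]

# Dev has everything; Test lags by one migration; Production by two.
environments = {"dev": 3, "test": 2, "prod": 1}

for env, version in environments.items():
    todo = pending(version, migrations)
    print(f"{env}: {len(todo)} migration(s) still to apply")
```

Because every environment replays the same ordered list, the state of Production is always a (possibly older) point on the same history as development and test, rather than a divergent copy.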

2. Accidental size and complexity

I’ve seen several almost identical cases where successful organisations, operating with an increasing online presence since around 1999, have reached a point in 2014 where their core database effectively dictates the rate of change of their systems. These organisations all have (or had) a large, central database that started out small and gradually grew to become highly unwieldy, apparently risky to change, and difficult to deploy.

The database was seen as the 'single source of truth' and allowed to grow, not only in terms of the accretion of live records, indexes and views, but also in terms of the applications that depended upon it directly. Given that they’re all now at varying stages of adopting Continuous Delivery, this presents them with a serious challenge!

Early on, the presence of all data in the same place was great for rapid development, but over time the complexity of the data required specialist skills to maintain, and changes became ever more painful and risky as multiple teams vied to make changes to suit their own application's view of the business domain. This typically leads to buying highly expensive or complicated database technology just to manage the database, perhaps available only in Production/Live (I'll come back to that later), while other systems in the business suffer as budgets and resources are suddenly decimated. Sound familiar?

Of course, some of this is irreducible once your organisation or application reaches a certain scale, but often it stems from opaque legacy processes, organisational red-tape, or just inexperience. In any case, the additional complexity makes it difficult for individuals or teams to understand the database, and so – particularly when diagnosing a failed deployment – makes database deployment difficult.

Remedy: Reduce accidental complexity

In short, the smaller and less complex an individual database, the easier it becomes to deploy. As a general rule, I’d argue that it’s always preferable to reduce the internal complexity of any system, especially any unnecessary complexity, down to just what is truly irreducible. Complexity in systems generally scales geometrically with the number of interrelationships, not linearly – so keeping the number of interrelationships fairly low is a useful engineering maxim if we want to build human-comprehensible systems.
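As a rough illustration of that scaling, assuming every component may depend on every other, the number of potential pairwise interrelationships grows quadratically with component count:

```python
# n components have up to n*(n-1)/2 potential pairwise links, so adding
# components to a shared database makes comprehension disproportionately
# harder: doubling the components roughly quadruples the relationships.

def max_links(n):
    return n * (n - 1) // 2

for n in (5, 10, 20, 40):
    print(n, "components ->", max_links(n), "potential interrelationships")
```

Going from 5 dependent applications to 40 takes you from 10 potential interrelationships to 780, which is exactly why splitting the core database pays off so disproportionately.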

Moving non-core data and logic out of the main database and into separate databases helps to make the core database less complex, and more amenable to human understanding. This both reduces fear around database changes and reduces the number of possible failure states for database changes! Reducing the size of databases by taking out accidental data then has the double benefit of reducing the hardware requirements to run the databases (potentially even allowing a technology change), and allowing more rapid and frequent upstream testing.

But how do you decide where to split the data? I’m glad you asked…

3. Lack of data warehousing and archiving

Sheer data size can also work against deployability. Large backups take a long time to run, transmit, and restore, preventing effective use of the database in upstream testing. Moving from eight hours for a backup/transmit/restore cycle to even 30 minutes makes a huge difference in a Continuous Delivery context, particularly when you include a few retries in the process. In some of the post-Dotcom systems I mentioned above, around 70-80% of the data held in the core transactional database was historical data that was rarely requested or used, but was preventing the rapid export/restore of data that would have led to effective testing.

Remedy: Archive, distinguish, and split data

This is one of the issues which is, at least at first glance, more pertinent to databases than many we’ve discussed (and certainly more painful in many cases). Applications (and developers) need to be more 'patient' when working with data; we should not expect to have all data readily to hand, but instead appreciate that data should, in fact, be archived. Working in a more asynchronous, request-and-callback fashion has the benefit of allowing smaller 'Live' databases, making restoring and testing on real data easier and quicker. By archiving rarely-requested data, we can drastically reduce the size of our 'data space' to something that more accurately reflects user demand. Of course, applications then need to be updated to understand that 'live' data is held only for (say) 9 or 12 months (depending on business need), and so to treat 'live' and historical data as two discrete sets.

In practice this might mean performing an asynchronous, lazy load for historical data (archived to secondary or tertiary location and rehydrated for the user when requested), but a synchronous, greedy load for 'live' data. This live vs. historical approach also matches human expectations, where 'retrieving from an archive' is naturally understood to take longer, so your users shouldn’t be up in arms.
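The lazy-vs-greedy split might be sketched like this, with hypothetical in-memory `live_store` and `archive_store` dictionaries standing in for the real live database and the archive:

```python
# Live data is fetched synchronously (greedy); archived data is requested
# and delivered later via a callback (lazy). In a real system the archive
# request would enqueue a rehydration job; here delivery is simulated by
# invoking the callback immediately.

live_store = {"order-1001": {"status": "shipped"}}
archive_store = {"order-0042": {"status": "delivered", "year": 2009}}

def get_live(order_id):
    """Synchronous, greedy load: live data is expected immediately."""
    return live_store.get(order_id)

def request_archived(order_id, callback):
    """Asynchronous, lazy load: archived data arrives via callback."""
    record = archive_store.get(order_id)
    callback(order_id, record)

results = {}
request_archived("order-0042", lambda oid, rec: results.update({oid: rec}))
print(get_live("order-1001"), results)
```

The point is the shape of the two APIs: callers of `request_archived` are forced to design for a delay, which is what licenses you to keep the live store small.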

Of course, we still need to distinguish between data that correctly belongs in a large central database, and data that has been added out of laziness or mere convenience, and store them separately. Once we’ve managed to split data along sensible partitions, we can also begin to practice polyglot persistence and use databases to their strengths: a relational database where the data is relational, a graph database where the data is highly connected, a document database where the data is more document-like, and so on.

Fortunately, there are tried and tested patterns for building software systems that do not have single central data stores: Event Sourcing, CQRS, and message-queue based publish/subscribe asynchronous designs work well and are highly scalable. The 'microservices' approach is another way to improve deployability, and is deep enough to warrant its own article later on.

These approaches are not easy, but the requirement for rapid and reliable changes (for which Continuous Delivery is a good starting point) already makes the problem domain more tricky, so we should expect an increase in the required skills.

4. Poor naming of system components

On a related note, the way in which subsystems or components are named can lead to that system unnecessarily aggregating all kinds of functionality and data.

I once did some work for an organisation which wanted some help with moving older systems to Continuous Delivery, but all the systems had names like (let's say) SPRUCE, REDWOOD, or CHESTNUT. The names were entirely opaque, giving no clues to their purpose, which had resulted in each new piece of functionality simply being bundled into an arbitrary monolithic subsystem! The tendency to 'lump' data into a poorly-named (and so poorly-defined) database is depressingly common.

You can probably imagine the conversation:

"Where should this Customer data go?" "Uh, just put it in SPRUCE, I guess".

Whereas the answer should have been: "In the Customer service, of course!"

Now of course this isn’t unique to databases, but opaque names in any part of a system are bad for software. Well-named components tend to retain a good degree of coherence and focus, whereas their poorly-named cousins attract the unnecessary data and complexity we’ve just described.

Remedy: Name things transparently

This is a really simple fix, but we can dramatically improve deployability by using good names for databases and other software components, because opaque names tend to lead to arbitrary 'clumping' and coupling of features and data. Instead, use transparent names that describe the purpose of the subsystem or database, so that it is clear whether data should belong in that system.

If you want a little guidance in this area, I recommend "Uncle Bob" Martin's classic book Clean Code, where Tim Ottinger lays out some ground rules for effective naming of things in software systems, including: 'use intention-revealing names'; 'avoid mental mapping'; 'use solution domain names'; 'don't be cute'; and so on. Steve McConnell offers similar advice in his excellent book Code Complete.
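As a toy illustration of intention-revealing naming (the database names and helper here are invented, echoing the SPRUCE example above):

```python
# Opaque: where does customer data go? The name answers nothing, so
# anything and everything ends up inside.
SPRUCE_DB = "spruce"

# Transparent: the name states the system's purpose, so it is obvious
# both what belongs here and what does not.
CUSTOMER_SERVICE_DB = "customer_service"

def table_for(entity, database):
    """Build a fully qualified table name, e.g. 'customer_service.customer'."""
    return f"{database}.{entity.lower()}"

print(table_for("Customer", CUSTOMER_SERVICE_DB))
```

With the transparent name, the "where should this Customer data go?" conversation answers itself.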

5. Opportunistic Business Intelligence data

While we’re at it, having a single, large, data-rich database sometimes results in the database being co-opted for Business Intelligence (BI) purposes: simply provide a BI person with a read-only database login, and let them write SQL directly against the database, right? This ‘opportunistic BI’ pattern naturally leads to strong resistance to database change from the BI team; their SQL queries are often hand-crafted and brittle, and database changes tend to break them. This direct use of application & integration databases for BI actively pushes against more frequent changes and deployments by arguing for stability-at-all-costs (albeit in a particularly narrow domain). BI folk also tend to have the ear of the finance department, which usually overrules IT requests for change. The result: a tendency to avoid changing and deploying the database.

Remedy: Source Business Intelligence from a data warehouse

We can reduce the friction around database changes and deployments by explicitly and consciously servicing the BI team. We should not just get BI 'for free' by aggregating data in the same transactional database, but design a proper system that provides the necessary data from multiple data sources, aggregated into a data warehouse, and accessed via a specific datamart. If there is a need for BI, address that properly, not just as a happy result of the data being conveniently in the same place.
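A minimal sketch of the transform step in such a pipeline, assuming rows have already been extracted from read-only copies of the source systems (the table shape and figures are illustrative):

```python
# Aggregate transactional rows into a datamart-style summary, so BI
# queries hit the warehouse rather than hand-crafted SQL against the
# live transactional schema.

from collections import defaultdict

# Rows extracted from (copies of) the source systems, not from Production.
orders = [
    {"region": "EU", "total": 120.0},
    {"region": "EU", "total": 80.0},
    {"region": "US", "total": 200.0},
]

def load_sales_datamart(rows):
    """Transform: aggregate order totals by region for the BI datamart."""
    summary = defaultdict(float)
    for row in rows:
        summary[row["region"]] += row["total"]
    return dict(summary)

print(load_sales_datamart(orders))
```

Because the BI team queries this summary rather than the transactional schema, a schema change in the source system only requires updating the extract step, not every hand-written report.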

6. Optimisation for data administration

Another force working against database deployability (and related to the BI problem above) can be the desire of the DBA team to make the database as easy as possible to administer, especially if the database also has significant accidental complexity! For instance, the need to keep track of data for audit purposes is very real, and data protection and privacy regulation is increasing year by year. However, optimizing for the ease of audit at the database level by, for example, aggregating data in the same transactional database is an example of a local optimization that works against deployability. If you’re serious about making the deployability of your database a first-class concern, you might have to make some compromises.

Remedy: Value more highly the need for change

To avoid optimizing for data administration within a single database or database technology, we need to move to a more distributed view of authoritative/definitive data. By that I mean that we never lose the sense of a 'single source of truth' for data, but that the existence of read-only copies in multiple locations isn’t seen as a problem, and we might source different kinds of data from different locations and database technologies. These changes help to reduce the size and complexity of individual databases, and reduce the perceived risk of changes in each specific case, leading to easier database deployments. Yes, there is an increased overhead in management and administration, but with careful planning and good team communications, that increase can be minimised.
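One way to keep the 'single source of truth' honest across read-only copies is a periodic reconciliation check. Here is a toy sketch, using plain dictionaries in place of real data stores:

```python
# One authoritative store, several read-only replicas, and a
# reconciliation pass that reports which keys in each replica have
# drifted from the source of truth.

authoritative = {"cust-1": "Alice", "cust-2": "Bob"}
replicas = {
    "reporting": {"cust-1": "Alice", "cust-2": "Bob"},
    "search":    {"cust-1": "Alice"},  # lagging: cust-2 not yet copied
}

def reconcile(source, copies):
    """Return, per replica, the keys that differ from the source of truth."""
    drift = {}
    for name, copy in copies.items():
        drift[name] = sorted(k for k in source if copy.get(k) != source[k])
    return drift

print(reconcile(authoritative, replicas))
```

Run regularly, a check like this turns "copies might be stale" from a vague fear into a measured, bounded lag, which is what makes the distributed view acceptable to auditors and administrators.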

7. Production-only technology and configurations

Special Production-only database technologies, configurations (e.g. linked server mappings), and routines all work against database deployability. Even if the volumes of Production data cannot feasibly be re-created in upstream environments, the database technologies, configurations, and features should be – at the very least in a Pre-Production environment. I’d even go so far as to say that if the cost of licenses for upstream environments would be prohibitive, then you should seriously consider an alternative technology because the hidden costs of a 'unique' database set-up in Production are huge.

Some of this might stem from one of the sheer data volume problems we’ve already discussed, which could be fixed in a few different ways, but it might also be down to resourcing, organisational politics, or some other set of priorities that don’t rate ‘the ability to release software’ as highly as they should.

One organisation which I recently worked with moved to Oracle Exadata for its live transactional database. The improvement in query speeds was astonishing, and the built-in diagnostic tooling very useful. However, the Exadata machines were for Production and Pre-Production only (due to cost) and, because Exadata worked so differently from the versions of Oracle in upstream environments, it was difficult to predict the query behaviour before deployment to Pre-Prod, leading to uncertainty about performance. Furthermore, the 'developer' version of the database, Oracle XE, only allowed a single schema, significantly hampering useful automated testing in deployment pipelines. The presence of the high-speed Exadata machine also meant that requests from developers and testers to improve the database to enable better development and testing tended to be ignored because "Exadata will handle it fine in Production", leading to ever-slower integration tests and development speed!

If you can’t test or develop every element of your software – including the database – in an environment that simulates Production, then how can you possibly have confidence that your software will deploy, let alone work as expected? Production-only setups work against database deployability by ignoring upstream development and testing requirements, meaning that high rates of failed deployments and buggy releases are effectively guaranteed.

Remedy: Avoid Production-only tooling and config where possible

We should aim to keep the live transactional databases fairly small and simple, moving rarely-requested data to a warehouse, and allow the transactional database to run on less specialised technology. Using the same technology and configuration across environments reduces uncertainty and fear within teams, aids collaboration and diagnosis, and even helps to keep data size and complexity low, all of which help to make the databases as deployable as possible.

For large data warehouses, expensive database technology is often needed purely for performance reasons. However, for live transactional databases that need rapid and reliable changes and deployments, it is usually better overall to use technologies that can be present in all environments from the developers’ laptops, through Test and Pre-Production, all the way to the Production racks.
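A sketch of that idea expressed as configuration: every environment shares the same engine, version, and settings, and only the host (and, in a real system, credentials) varies. All the names and values here are illustrative:

```python
# Shared database settings, identical from a developer's laptop through
# Test and Pre-Production to the Production racks. Only connection
# details differ per environment.

COMMON = {
    "engine": "postgres",        # same engine everywhere, laptop to Prod
    "version": "13",             # same major version everywhere
    "collation": "en_GB.UTF-8",  # same collation, so sorting never drifts
}

PER_ENV = {
    "dev":  {"host": "localhost"},
    "test": {"host": "db.test.internal"},
    "prod": {"host": "db.prod.internal"},
}

def config_for(env):
    """Merge the shared settings with an environment's overrides."""
    return {**COMMON, **PER_ENV[env]}

print(config_for("dev"))
```

If a schema change works against `config_for("dev")`, the identical engine and settings mean there is far less room for it to behave differently against `config_for("prod")`.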

Wrapping up

None of the issues explored here would, by themselves, necessarily prevent rapid and reliable database deployments, but taken together these issues present significant blockers to database deployability. At the same time, none of these issues are necessarily new or unique to databases, but hopefully by considering them as a related set you can see how significant their combined impact can be. To address them individually perhaps doesn’t seem too challenging, but to tackle deployability requires close, effective collaboration between developers, DBAs, and operations teams to achieve the right balance between rapid deployment and access to data.

This article is part of our Database Lifecycle Management patterns & practices.

Return to the library to find more articles on DLM.

Matthew Skelton

Author profile:

Matthew Skelton has been building, deploying, and operating commercial software systems since 1998, and for several years he led a team that built and operated large database-driven websites for clients across many industry sectors. Co-founder and Principal Consultant at Skelton Thatcher Consulting Ltd, he specialises in helping organisations to adopt and sustain good practices for building and operating software systems: Continuous Delivery, DevOps, aspects of ITIL, and software operability. Matthew founded and leads the 700-member London Continuous Delivery meetup group, and instigated the first conference in Europe dedicated to Continuous Delivery, PIPELINE Conference. He also co-facilitates the popular Experience DevOps workshop series, is a Chartered Engineer (CEng), and can be found on Twitter as @matthewpskelton.



Subject: DB changes in production - sad truth
Posted by: Mark Warren (not signed in)
Posted on: Monday, August 11, 2014 at 3:35 AM
Message: Great article, Matt. A lot of good points.

I ask about this issue at many of the CD meetings I attend. Often the answer is "We give up, it's too hard"!

The issues re DB changes in production needing to swim upstream reminds me of a webinar I did a couple of months ago with DBMaestro who are trying to solve that problem.


Subject: Baby Steps
Posted by: Chuck Desmarais (not signed in)
Posted on: Monday, August 18, 2014 at 11:21 AM
Message: Excellent article.

I think 100% continuous deployment of large OLTP systems is going to be out of reach for most orgs. There is a small subset of DB changes (mostly data migrations) which are always going to be snowflakes. But I think when you look at what you actually deploy on a regular basis, you should see that the vast majority is effectively code (stored procs, functions, most views, new schema) and can be fairly trivially deployed along with the application code in your CI pipeline. Just getting that far is a huge win, and well worth the effort. Then your DBAs can focus on solving the hard data migration problems.

Subject: Re: DB changes in production - sad truth
Posted by: MatthewSkelton (view profile)
Posted on: Wednesday, August 20, 2014 at 4:57 AM
Message: Hi Mark, thanks for the webinar link. The "it's too hard!" comment from people about DB deployment is not surprising but with a sensible timeframe (6-12 or even 18 months) I suspect that most teams would see some tangible benefit. Changes to 10-year-old DBs do not happen overnight!

Subject: Re: Baby Steps
Posted by: MatthewSkelton (view profile)
Posted on: Wednesday, August 20, 2014 at 5:00 AM
Message: Hi Chuck, thanks for the comment. I agree with you that continuous *deployment* for OLTP DBs is probably not very valuable for most orgs. Even if some DB migrations remain as snowflakes, if we can reduce the size/volume of those changes substantially, then we will have simplified the problem space significantly, which is definitely worthwhile.

Subject: Unblocking deployments
Posted by: David Atkinson (view profile)
Posted on: Wednesday, August 20, 2014 at 11:10 AM
Message: Interesting discussion, especially on limiting the ‘specialness’ of the Production environment by making pre-Production as close a replica of Production as possible. Grant Fritchey’s recent webinar on building a database deployment pipeline also has some useful points for DBAs and developers on working together to deploy database changes.

Subject: Accidental size
Posted by: Rob Bygrave (view profile)
Posted on: Sunday, April 10, 2016 at 4:07 PM
Message: Hi,

Nice article, thanks. I'd like to dig deeper into the "Accidental size and complexity". You make some great points about archiving and data warehouse but I'm wondering if there are comments around:

... "Once we’ve managed to split data along sensible partitions, we can also begin to practice polyglot persistence".

How do people go about doing this? As I see it, if there are physical splits (different db) then we tend to lose the ability to enforce foreign key constraints declaratively in the DB (and so that means those FK's have to be enforced in application code, typically using eventual consistency to do so).

How are people going about logically or physically splitting a database design? Are people happy without declarative FK constraints being enforced? ... or perhaps only logically splitting the design?

Thanks, Rob.


Subject: Re: Accidental size
Posted by: MatthewSkelton (view profile)
Posted on: Monday, April 11, 2016 at 12:43 PM
Message: Hi Rob,

Your questions about splitting up data are very topical. As you hint at, there are several approaches. For some systems I have worked on recently, we've taken the approach of better DB design using VIEWs to provide an abstraction layer between applications and the underlying tables. In other cases, we have migrated a set of related tables to a separate database or data technology (e.g. document store) where FKs are not needed.

I think that it's important here to distinguish cases where FKs are essential (for instance, as part of transactional integrity) from cases where FKs are merely helping to enforce data consistency in the current monolithic RDBMS. In the former case, we're open to data loss if we stop using FKs, but in the latter case, we can perhaps make do without FKs and instead run some kind of regular reconciliation across multiple data stores.

Hope this helps.

