One of the frustrations of implementing a Continuous Integration (CI) system is understanding how the individual components of the workflow fit together to contribute to the process. A senior software engineer describes how his team designed, and now uses, a CI system based on Jenkins, Subversion and Redgate tools.
Practice Fusion is a web-based Electronic Health Record (EHR) company that hosts over 35 million patient records (more than Kaiser Permanente and the Veterans Administration combined). We are the fastest-growing EHR community in the US. Founded in 2005, we’re rapidly expanding and adding new users. Over 150,000 physicians and practice managers in 50 states currently use Practice Fusion’s EHR.
We have a small but growing DBA team, with three DBA engineers and a Data Architect. We also have a group of 10+ developers who create stored procedures, functions and triggers. I am a Senior Software Engineer and I work closely with the Build and Release team in all of our Continuous Integration (CI) efforts.
Continuous integration: Our current workflow
We currently use Jenkins (Hudson) as our CI build server. All database development is done using Redgate SQL Source Control and Subversion. SQL Source Control handles both the schema and the data changes.
Our workflow is as follows:
- Changes are committed to Subversion using SQL Source Control.
- Jenkins polls Subversion for changes and triggers a build.
- Jenkins creates a brand-new database, rebuilds everything from scratch, and deploys all the schema objects to it via SQL Compare.
- If there are any problems caused by missing dependent tables, columns, procedures or functions, the build fails and the developer who committed the offending change is emailed.
- We also use the Jenkins "Promotions" plug-in to deploy database builds to our Integration, QA and Staging environments. In doing this:
- Redgate SQL Compare is used to deploy the schema
- Redgate SQL Data Compare is used to synchronize the contents of lookup tables
- We have custom scripts to stop and start replication in environments where replication is in place. The script dynamically loops through all entities in the database and attempts to drop the subscription of any replicated object.
- When we are preparing a deployment to our Staging and Production environments, we do not blindly deploy the changes. Instead, we use Redgate SQL Compare to generate the alter script, which our DBA team then inspects manually to ensure we are making appropriate changes (e.g. making sure we are not needlessly creating a large index).
- We use the Jenkins versioning module to create a build number for every build. This build number gets baked into a user-defined function; if you want to know what version is installed, you can just call SELECT dbo.DATABASE_VERSION() and you will see the formatted version number.
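The version function described above can be sketched roughly as follows. This is an illustrative sketch only: the version string shown is a placeholder that the build server would substitute at deployment time, and the return type and format are assumptions, not our exact implementation.

```sql
-- Sketch of a build-stamped version function.
-- The literal below is a placeholder the CI server would inject.
CREATE FUNCTION dbo.DATABASE_VERSION()
RETURNS VARCHAR(32)
AS
BEGIN
    RETURN '2.1.0.347';  -- e.g. major.minor.patch.build from Jenkins
END
GO

-- Anyone can then check which version is deployed:
SELECT dbo.DATABASE_VERSION();
```

Because the function is just another schema object, it travels through source control and the deployment pipeline like everything else, so the version stamp can never drift from the build that produced it.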
The problem that we fixed
In the past, everything was managed by developers using hand-crafted SQL scripts. Developers were inconsistent in how they authored the scripts. Some scripts could be run multiple times (e.g. performing a CREATE if the object did not already exist, or an ALTER if it did); some only allowed for forward-moving updates. Some developers would put everything into a single script that was hard to maintain and diff between updates; others would break the scripts apart so that each object was defined in its own file, making deployment more difficult. Sometimes the files were checked into our version control system; sometimes they lived only on the developer's workstation.
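To make the inconsistency concrete, here is a sketch of the two styles side by side. The table and column names are illustrative, not from our actual schema:

```sql
-- Rerunnable ("guarded") style: safe to execute more than once,
-- because it checks the current state before making a change.
IF NOT EXISTS (SELECT 1 FROM sys.columns
               WHERE object_id = OBJECT_ID('dbo.Patient')
                 AND name = 'MiddleName')
BEGIN
    ALTER TABLE dbo.Patient ADD MiddleName NVARCHAR(64) NULL;
END

-- Forward-only style: assumes a known starting state and
-- fails with an error if run a second time.
-- ALTER TABLE dbo.Patient ADD MiddleName NVARCHar(64) NULL;
```

With a mix of both styles scattered across dozens of files, there was no safe way to re-run a release or to know which scripts had already been applied to a given environment.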
Our previous attempts at versioning our databases were all clunky and did not scale.
We had no way of determining what database version we had running in the various environments. Every release involved a lot of finger-crossing because we could never be 100% sure what we were testing was representative of what the production environment would look like when we deployed. Deploying database changes was a nightmare and relied on developers executing scripts that they usually stored locally on their hard drives.
Problems of a manual process
The team was getting too big. A release would contain dozens of files, written in an inconsistent manner, that had to be opened up and run from SSMS during a release. If there were changes to scripts, the QA team would have to holler out to the developer teams when they wanted the scripts deployed. Now it's just a button click in Jenkins to get the changes they desire.
Lost database updates
We also did not have confidence that what we were testing in our development environments would resemble what we were releasing to production. Smoke testing database updates often revealed missing objects that were deployed manually to our development database servers, but never captured in a deployment script. This was probably our biggest headache that forced our hand in finding a solution.
Search for a solution
We tried using Visual Studio Database Project. At one point, I got it working, but it was really fragile and it was a nightmare to manage. It did not have the flexibility that we needed. For example, we had no way to determine what changes were being made behind the scenes. The deployment process was done blindly and it seemed to break very easily and without explanation. Searching the internet was usually a fruitless effort since almost no one seems to be using Visual Studio Database Project.
Our company's engineering architect gave me the task of finding a way to version control our database. He was terrified because if we ever needed to roll back a release, we'd have no idea what state the database was previously in. I've known about Redgate SQL Compare since around 2006, and discovered SQL Source Control in May 2011. By June 2011, we were using Redgate to deploy our database changes to production, primarily using SQL Source Control, but also using SQL Compare and SQL Data Compare. We use SQL Source Control to make Subversion the source of record for all database entities. We also couple it tightly with our Continuous Integration server to validate builds and promote (deploy) changes to our many environments.
Introducing the new process
It’s always a challenge to introduce a new process to a development team. Some developers were more enthusiastic than others. A few of the developers who were used to the status quo had a harder time understanding the “why” behind the change. After an hour-long demonstration of how Redgate SQL Source Control works, the team was satisfied that checking in their changes would save time rather than add a tax on their workload.
We have seen a lot of benefits from using SQL Source Control.
- We now have a source of authority on the state of a database build -- We can be confident that what we are developing and testing is what is going to be deployed to production. That by itself is priceless.
- Developers don't need to waste time hand-crafting database alter scripts
- Deployment is now automated -- Developers don't need to be bugged by QA every time QA is ready to accept a new version of the application.
- We have identified holes in our process. In many cases, engineers were making changes directly to production, which is a big no-no. Redgate helps identify these unauthorized changes.
Database Source Control has given us the following benefits
- We now have a definitive build of our database in source control
- Database deployment is now automated -- We can deploy any database version to any development environment with a few mouse clicks, by both developers *and* QA engineers
- Developers have more time to develop, since they no longer need to manage deployment scripts and they no longer need to deploy them
- We were able to identify differences between instances of our databases in different environments. In some cases the changes were minor (e.g. rarely used indexes), in other cases the changes were quite large (e.g. missing columns and different stored procedure parameters).
We now have fewer issues when we deploy changes to production. We have the ability to roll back changes. We can now perform a simple SELECT statement to determine what version the database is running.