An interesting blog post about best practices in continuous integration (CI) caught my eye recently. It contains a lot of valuable information, especially the part describing continuous integration as a practice, not a tool.
In fact, I appreciated the article so much that it spurred me to write my own post on continuous integration best practices. In the following post, I will focus on an area where CI is severely lacking: the database.
Along with Agile, DevOps, continuous deployment, and others, CI is one of several methods that software application developers have really started to embrace in recent years.
Some of these methods, like Agile, are already well known and well regarded. But others, like CI, are still relatively unfamiliar to many development teams. Knowledge is key – the more developers know about these methods and their benefits, the easier they will be to implement.
The author of the blog post referred to above, one Manuel de la Peña, also offered some great recommendations about continuous integration best practices. “Don’t check in on a broken build”, he strongly advises in the post’s title. He also explains why the use of CI is so beneficial in the long run – the faster you know a build is broken, the faster you can find out why, which means you can also fix it more quickly, because it is still fresh in your mind (or another developer’s).
For database developers, however, the question of how to combine database changes with continuous delivery and CI remains open. The truth is, it all depends on how database changes are being managed. There are three primary methods:
1) Develop changes with scripts and execute them in the database
In this scenario, there is no real “management” of the database changes. Everything is done manually, and only the script is truly being managed.
The problem with this approach is that the database and the repository aren’t properly synchronized with each other. Furthermore, once part or all of a script is executed, it cannot be executed again, since the starting point has changed.
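To make the problem concrete, here is a minimal sketch in Python with SQLite (the table and script are purely illustrative): a one-off change script assumes a fixed starting point, so once it has run, running it again fails outright.

```python
import sqlite3

# A hypothetical one-off change script (names are illustrative). It assumes
# a fixed starting point: a database where "customers" does not yet exist.
MIGRATION = """
CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
ALTER TABLE customers ADD COLUMN email TEXT;
"""

conn = sqlite3.connect(":memory:")
conn.executescript(MIGRATION)          # first run: succeeds

try:
    conn.executescript(MIGRATION)      # second run: the starting point has changed
    rerun_ok = True
except sqlite3.OperationalError as exc:
    rerun_ok = False                   # CREATE TABLE fails: table already exists
    print(f"re-run failed: {exc}")
```

Nothing in this setup tracks which environments the script has already been applied to; that bookkeeping lives only in someone's head.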
2) Use source control for object creation, with a compare & sync tool for deployment

Some database development teams use source control for object creation, then employ a simple compare & sync tool to generate and execute the deployment script. If you adopt this approach, you may find yourself with limited visibility and encumbered by two systems that don’t integrate well.
For example, the compare & sync tool won’t be able to take advantage of the important information contained in the source control solution. This poses a problem: with no reference to the source of each change, developers won’t know which changes should be promoted, ignored, or merged.
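As an illustration (the schemas and function below are hypothetical, not any real tool's output), this is roughly all a bare compare & sync step has to work with: two snapshots and a name-level diff. It can list what differs, but it cannot say whether an object that exists in only one environment should be promoted, dropped, or merged.

```python
# Two bare schema snapshots, keyed by table name (illustrative data).
source_env = {"customers": "id, name, email", "orders": "id, total"}
target_env = {"customers": "id, name", "invoices": "id, amount"}

def naive_diff(source, target):
    """Compare two schema snapshots by name alone, with no change history."""
    return {
        # New in source? Or deliberately dropped from target? No way to tell.
        "only_in_source": sorted(set(source) - set(target)),
        # A hotfix to merge back? Or clutter to drop? Again, no way to tell.
        "only_in_target": sorted(set(target) - set(source)),
        # Definitions differ, but which side holds the intended change?
        "changed": sorted(t for t in set(source) & set(target)
                          if source[t] != target[t]),
    }

diff = naive_diff(source_env, target_env)
```

The diff is mechanically correct, yet every entry ends in a question only source control history could answer.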
3) Make changes directly in the database
Check-out and check-in, the same principles of source control that have existed in application development for decades, must be applied to the database in this scenario. Additionally, the deployment module (more than just compare & sync) responsible for generating scripts must be fully integrated with the database source control repository. This will make important information, such as when two database environments have been merged into one, readily available to developers.
Without database source control in place, teams working directly on the database are likely to accidentally overwrite each other’s code. Comprehensive database source control, on the other hand, lets each developer know who did what, when, where, and why.
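Here is a minimal sketch of the kind of audit trail this implies, using Python and SQLite (the schema and names are illustrative, not any particular tool's format): each check-in records the object, the author, and the reason, so a later deployment can be traced back to its source.

```python
import sqlite3

# Illustrative check-in log: who changed what, when, and why.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE change_log (
        object_name TEXT,
        change_type TEXT,
        author      TEXT,
        checked_in  TEXT DEFAULT CURRENT_TIMESTAMP,
        reason      TEXT
    )
""")

def check_in(conn, obj, change_type, author, reason):
    """Record a change at check-in time, so later diffs carry context."""
    conn.execute(
        "INSERT INTO change_log (object_name, change_type, author, reason) "
        "VALUES (?, ?, ?, ?)",
        (obj, change_type, author, reason),
    )

check_in(conn, "customers", "ALTER", "dana", "add email column for reporting")
history = conn.execute(
    "SELECT object_name, author, reason FROM change_log"
).fetchall()
```

With this record in place, a deployment module that reads the log (rather than a bare compare & sync) can tell which changes to promote and which to ignore.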
Each of the three approaches outlined above has its own advantages, but there’s no way around the fact that the first two create severe bottlenecks and constraints on the way to producing effective results.
The solution? Make the database part of the answer rather than just the question. With a little know-how and by following these continuous database integration best practices, you can achieve significantly better database-enforced change management by managing and making changes directly within the database.
Enjoyed this article? Check out our post on how to implement continuous delivery: our lessons learned from Yahoo.