Continuous delivery is a method that promotes the adoption of an automated deployment pipeline to quickly and reliably release software into production. This method comes from the agile school and is a natural partner to the DevOps movement.
The goal of continuous delivery pipelining is to establish an optimized end-to-end process, shorten the development-to-production cycle, lower the risk of release problems, and provide a quicker time to market.
In one of my previous blog posts, I listed Jez Humble’s 8 Principles of Continuous Delivery. In order to achieve the Holy Grail of an “automatic, high quality, repeatable, reliable, continuously improving process”, you must first break that process into simpler component practices.
Building the “pipeline” in this way will enable you to deal with the different stages of the process, one by one.
A deployment pipeline makes sure a change moves through a controlled flow. The system works as follows:
- A code check-in or configuration change triggers the flow
- The change is compiled
- The change goes through a first set of tests – usually unit tests and static code analysis
- Passing that set of tests triggers automatic application tests and regression tests
After successfully passing these tests, the change can be either ready for production use, or go through additional manual and user-acceptance tests before hitting production.
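The flow above can be sketched as a minimal orchestration script. The stage names and the `make` commands they invoke are illustrative assumptions, not a prescribed toolchain; a real pipeline would live in a CI server, but the control flow is the same:

```python
import subprocess

# Hypothetical pipeline stages, run in order on every check-in.
# Each entry maps a stage name to the shell command that runs it.
STAGES = [
    ("compile", "make build"),
    ("unit-tests", "make test-unit"),
    ("static-analysis", "make lint"),
    ("application-tests", "make test-app"),
    ("regression-tests", "make test-regression"),
]

def run_pipeline(stages):
    """Run each stage in order; stop the line on the first failure."""
    for name, command in stages:
        print(f"Running stage: {name}")
        result = subprocess.run(command, shell=True)
        if result.returncode != 0:
            # A failed stage stops the whole pipeline ("stop the line").
            print(f"Stage '{name}' failed; stopping the pipeline.")
            return False
    print("All stages passed; the change is a release candidate.")
    return True
```

The key property is that each stage triggers the next immediately, and any failure halts everything downstream, so a broken change never drifts toward production.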
Achieving an efficient deployment pipeline is done by following these best practices:
- Build your binaries only once: Compile the code only once, eliminating the risk of introducing differences due to environments, third-party libraries, different compilation contexts, and configuration differences.
- Deploy the same way to all environments: Use the same automated release mechanism for each environment, making sure the deployment process itself is not a source of potential issues. You deploy many times to lower environments (integration, QA, etc.) and fewer times to higher environments (pre-production and production), so you can’t afford a failed production deployment caused by the least-tested deployment process.
- Smoke-test your deployments: A non-exhaustive test that touches every major component – services, database, messaging bus, external services, and so on – without going into finer details, ascertaining only that the most crucial functions of the program work. It will give you the confidence that your application actually runs and passes basic diagnostics.
- Deploy into a copy of production: Create a production-like or pre-production environment, identical to production, to validate changes before pushing them to production. This will eliminate mismatches and last minute surprises. A copy of production should be as close to production as possible with regards to infrastructure, operating system, databases, patches, network topology, firewalls, and configuration.
- Instant propagation: The first stage should be triggered on every check-in, and each stage should trigger the next immediately upon successful completion. If you build code hourly, run acceptance tests nightly, and run load tests over the weekend, you will never achieve an efficient process or a reliable feedback loop.
- Stop the line: When a stage in the pipeline fails, you should automatically stop the process. Fix whatever broke, and start again from scratch before doing anything else.
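A smoke test suite like the one described above can be sketched as a small harness that runs a list of shallow, named checks and reports which ones failed. The check names below are illustrative assumptions; in a real deployment each lambda would hit an actual component (an HTTP health endpoint, a `SELECT 1` against the database, a broker ping):

```python
def run_smoke_tests(checks):
    """Run each named check; return (all_passed, failed_names).

    A smoke test is deliberately shallow: each check answers only
    "does this crucial function work at all?", not "is it correct
    in every detail?".
    """
    failures = []
    for name, check in checks:
        try:
            ok = check()
        except Exception:
            # A crashing check counts as a failure, not a test error.
            ok = False
        if not ok:
            failures.append(name)
    return (not failures, failures)

# Illustrative checks for a typical deployment (placeholders here).
SMOKE_CHECKS = [
    ("web service answers /health", lambda: True),    # e.g. GET returns 200
    ("database accepts a connection", lambda: True),  # e.g. SELECT 1 succeeds
    ("message bus is reachable", lambda: True),       # e.g. broker ping
]
```

Wiring this in as a pipeline stage right after deployment means a broken environment stops the line before any deeper, slower test suites run.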
The pipeline process helps establish a release mechanism that will reduce development costs, minimize risk of release failures, and allow you to practice your production releases many times before actually pushing the “release to production” button.
Continuous improvement of the automated pipeline process will ensure that fewer and fewer holes remain, guaranteeing quality and making sure that you always retain visibility of production readiness.
Making sure your database can participate in this efficient deployment pipeline is obviously critical. However, the database presents different challenges than application code, which makes implementing continuous delivery for the database a challenge in its own right.