“Build your code once, and deploy it many times.” This foundational principle of release automation is widely accepted in the DevOps community as the gold standard. It means that once the binaries are created, the same binaries should be deployable to any environment, regardless of its internal configuration.

Why do you need repeatability in your release process?

This rule has a very clear logic behind it: by deploying many times but compiling only once, you eliminate the risk of introducing differences that stem from varying environments, third-party libraries, different compilation contexts, and configurations.

The same automated release mechanism is used for each environment, ensuring that the deployment process itself is not a source of potential issues.

DevOps teams usually deploy frequently to lower environments (Integration, QA, etc.), less frequently to higher environments and—ideally—only once to Production.

But there is considerable risk when the PROD deployment is different from the other environments. After all, can you afford for PROD to be the only untested deployment?

The Database Challenge

The principles above are accepted as the gold standard, but they are only applied to application code. Databases have been left out of DevOps processes.

Currently, it is widely accepted that release automation and CI/CD processes grind to a complete halt at the database. But treating the database as out of scope for the CI/CD pipeline results in bottlenecks, slower release cycles, configuration drift, errors, and downtime.

It seems that DevOps engineers are mostly resigned to the fact that database deployments are still managed through manual workflows, take time, and often break the release pipeline. This resigned attitude is due, in part, to the inherent differences between the database and the application code.

Why is the database left in the cold while we automate our app releases to death?

Releasing application code is far simpler than releasing database changes: with application code, you can overwrite your changes with newer or older releases, you can patch configuration gaps, and you can reinstall your application from scratch.

Databases, however, are different. They have both configuration and data, and the data is persistent and accumulating. In most cases, you can’t overwrite QA with DEV data, and you shouldn’t overwrite PROD data with anything else.

As a result, the way to introduce changes is to alter the state of the database from its older structure and code to its desired new structure, in sync with your application code. This requires transition code that alters the database’s structure: the SQL script. The application code is just the application code; the database’s state is the cumulative result of running the delta scripts.
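
To make this concrete, here is a minimal sketch of such a delta script (the table and column names are hypothetical). The point is that the script transitions the existing schema to the new structure the application expects, without recreating the database or touching the data it has accumulated:

```sql
-- Hypothetical delta script: upgrade the schema from v1 to v2
-- while preserving the data already stored in the table.
ALTER TABLE customers
    ADD COLUMN loyalty_tier VARCHAR(20) NOT NULL DEFAULT 'standard';

-- Backfill the new column for existing rows using accumulated data.
UPDATE customers
   SET loyalty_tier = 'gold'
 WHERE lifetime_spend > 10000;

CREATE INDEX idx_customers_loyalty_tier
    ON customers (loyalty_tier);
```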

Configuration drift is the leading cause of deployment errors

In theory, upgrading your database with SQL scripts just works. In practice, however, there’s usually something missing: a database configuration tends to drift.

Someone performs a maintenance task, a performance optimization is implemented, or a different team’s work overlaps with your own. As a result, the script that worked in one environment might not work in the next, or worse, it ends up creating damage and downtime.

Configuration drift can present itself as a different schema configuration, different code, good code introduced by other teams, or plain production hotfixes that might be blown away by the SQL script you are introducing. Working with just the change scripts is therefore a risky business: all the stars must be aligned 100% of the time for it to work.
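
One practical consequence is that a change script should not assume anything about its target. A minimal sanity check might query the standard information_schema views before altering anything (the table and column below are hypothetical):

```sql
-- Before running a change script, verify the column it is about to
-- alter still looks the way the script assumes in this environment.
SELECT column_name, data_type, is_nullable
  FROM information_schema.columns
 WHERE table_name = 'customers'
   AND column_name = 'loyalty_tier';
-- If the result differs from what the script was written against
-- (or the row is missing), the environment has drifted and the
-- script should not run as-is.
```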

I will argue that we can (and should) treat database code exactly the same as application code when it comes to delivery automation. DevOps best practices, including the “build once, deploy many” rule, can be extended to the database.

Clearing the Confusion: State-Driven and Migration-Driven Deployments

We cannot keep treating the database as a discrete area outside of DevOps best practices. The more automated your release pipelines become, the more problematic the database bottleneck holding your releases back becomes.

It is time to bring DevOps processes to the database. To create the transition or upgrade code for the database, we can employ the concepts of state-driven or migration-driven deployments. Let’s review each of these methods and their pros and cons.

What is state-based database delivery?

State-driven database delivery generally relies on a compare-and-sync tool. The idea is simple: use the tool to auto-generate the scripts required to upgrade any existing database to the next environment.

The tool always pushes changes from lower environments upward: from DEV to QA, to PRE-PROD, and finally to PROD.

Model-based delivery is another variant of state-driven delivery. In this case, a model, defined with either UI diagrams (a designer) or an XML representation (a translator), describes the desired database structure, which is then compared and synced to the target environment.
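
As an illustration, a compare-and-sync tool pointed at a source and a target might auto-generate something like the following (the schema is hypothetical, and the exact output varies by tool):

```sql
-- Auto-generated sync script (illustrative): make the target
-- database match the state of the source.
ALTER TABLE orders
    ADD COLUMN shipped_at TIMESTAMP NULL;

CREATE TABLE order_audit (
    audit_id   BIGINT PRIMARY KEY,
    order_id   BIGINT NOT NULL,
    changed_at TIMESTAMP NOT NULL
);
```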

The pros of state-based delivery

The compare or model tools do the heavy lifting, generating a script that makes the target database look like your source database or your model.

The cons of state-based delivery

The biggest problem is that the change process is not repeatable: the script generated for one environment might differ from the one generated for the next, because the environments themselves might differ (i.e. they have drifted). The process therefore does not follow a ‘build once’ model.

Another issue is that the tool will push changes forward (from source to target) unless it is manually reviewed and explicitly directed to do otherwise. This presents a high risk in an automated process.

In this scenario, changes originating in the target environment will be overridden with older or conflicting code from the source (e.g. dropping an index that was added to PROD to deal with a performance issue).
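
Continuing the hypothetical example above, the generated sync script might happily include a statement like this, simply because the index exists only in the target (DROP INDEX syntax varies by engine):

```sql
-- idx_orders_customer_id was added directly on PROD as a
-- performance hotfix, so it does not exist in the source.
-- A naive sync script will therefore generate its removal.
DROP INDEX idx_orders_customer_id;
```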

A mistake like this is very easy to make and very costly to fix.

The script is also generic, designed for a broad audience. You must review it, adjust it, and fine-tune it to fit your specific case with every iteration and every environment (unless you happen to fall under the generic case).

The bottom line: this approach creates a situation where you ‘build many’, once per environment, and eliminates repeatability.

Migration-driven database delivery

In the migration-driven approach to database delivery, migration steps are created to transition a database from one version to the next, mostly implemented as plain SQL upgrade scripts that are executed against the database.
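
For example, migrations are typically kept as ordered, versioned scripts; the file naming below follows the common Flyway-style convention, and the schema is hypothetical:

```sql
-- V002__create_order_audit.sql
CREATE TABLE order_audit (
    audit_id   BIGINT PRIMARY KEY,
    order_id   BIGINT NOT NULL,
    changed_at TIMESTAMP NOT NULL
);

-- V003__index_order_audit.sql
CREATE INDEX idx_order_audit_order_id
    ON order_audit (order_id);
```

Each script runs exactly once per environment, in order, and the migration tool records which versions have already been applied.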

The pros of migration-based database delivery

Hallelujah – perfect repeatability! With migration-based delivery, the same code is executed in all environments, so we can finally implement the classic ‘build once’ approach we discussed earlier in database delivery. The script is honed to fit your coding style, specific needs, performance variations, and your application’s requirements.

The cons of migration-based database delivery

Well, here comes the biggest problem – in most cases, you’ll have to build the migration code manually.

Another issue is increased risk. In the face of drifted database environments, the SQL script might produce results that range from undesirable to catastrophic: anything from overriding someone else’s changes (if the script is not properly synced with other development efforts) to overriding hotfixes targeted for production.
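
One common mitigation is to write the migration code defensively, so a known and benign form of drift is tolerated explicitly rather than silently overwritten. A minimal sketch (IF NOT EXISTS is supported by PostgreSQL and several other engines, but not all):

```sql
-- If the index already exists (e.g. it was hotfixed onto PROD),
-- keep it instead of failing the deployment or recreating it.
CREATE INDEX IF NOT EXISTS idx_orders_customer_id
    ON orders (customer_id);
```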

The bottom line: Build once, deploy many – but with manual work and increased risk at the deployment phase.

State-based vs migration-based: why not both?

As I showed above, both state-based and migration-based approaches have drawbacks when it comes to automating database delivery. So why not combine the two approaches to get the best of both worlds?

That is exactly what DBmaestro does. By combining the power of both approaches, DBmaestro brings the “Build once, deploy many” principle to your database while increasing the frequency and quality of your releases:

  • Automate your database release pipelines and get full visibility and insights into them, at enterprise scale.
  • Easily define and run continuous delivery pipelines with high security and compliance with organizational policies.
  • Integrate seamlessly with all sources of database changes.
  • Dry-run the code and predict the success of database deployments before PROD.
  • Get alerts on configuration drift or non-policy actions.

DBmaestro Release Automation enables you to publish releases quickly, prevent accidental overrides of changes, and reduce application downtime caused by database-related errors, all while maintaining stability, scalability, and security.

Want to learn more about our approach to database delivery? Download the whitepaper below for a detailed overview.