Motivated by reports of benefits including quicker time-to-market, reduced costs, and higher-quality products, an ever-increasing number of organizations are implementing DevOps.
To implement DevOps properly, you’ll need trustworthy source control. There are two approaches: native code source control and database source control.
In this article, I outline both processes and explain the differences between them.
Every organization has its own process for compiling native code, whether manual or automated. Either way, the build always relies on one specific thing: source control.
Every build starts with an empty folder that is gradually filled with the relevant source code files from the file-based version control repository (SVN, Git, Perforce, Microsoft TFS, IBM RTC, etc.). The system then compiles the source code files, and, if the compilation succeeds, the process continues to the next step: deploying to an automated test environment.
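As a minimal sketch, the flow above might look like the following. The repository contents, file names, and "compile" step are purely illustrative stand-ins (a real pipeline would check out from SVN, Git, etc. and invoke a real compiler):

```python
import ast
import tempfile
from pathlib import Path

# Simulated file-based repository: relative path -> source contents.
# Stands in for a checkout from SVN, Git, Perforce, TFS, RTC, etc.
REPO = {
    "src/app.py": "print('hello')",
    "src/util.py": "def helper():\n    return 42\n",
}

def checkout(repo: dict, workdir: Path) -> None:
    """Fill an empty build folder with files from the version control repo."""
    for rel_path, contents in repo.items():
        target = workdir / rel_path
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(contents)

def compile_sources(workdir: Path) -> bool:
    """Stand-in for the compile phase: succeed only if every source file parses."""
    for src in workdir.rglob("*.py"):
        try:
            ast.parse(src.read_text())
        except SyntaxError:
            return False
    return True

workdir = Path(tempfile.mkdtemp())   # every build starts with an empty folder
checkout(REPO, workdir)
if compile_sources(workdir):
    print("build ok -> deploy to automated test environment")
```

The key property the article relies on is visible here: only what is in `REPO` ever reaches the build folder; anything not checked in simply does not exist as far as the build is concerned.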
Some organizations also save the output of the compilation phase (binary artifacts) in a binary source control repository (SVN, Git, Perforce, TFS, RTC, etc.), so the deployment system can retrieve the relevant artifacts from it.
Deployment to an automated test environment is handled differently in different organizations. Some do it manually, some rely on scripts, and still others use an application release automation tool, such as IBM UrbanCode Deploy or CA Release Automation. What all these methods have in common is the need to copy the binary artifacts to the relevant server, overriding the previous file versions.
Every change a developer makes must be checked in to the source control repository. If it doesn’t exist there, it won’t be included in the build process. What’s more, if a developer copies a locally generated artifact to a test environment, the next deployment will override this “out-of-process” change.
Occasionally, defects can only be reproduced in test environments; due to infrastructure limitations (storage, costs, complex architecture, etc.), the developer must then work in the test environment rather than the development environment. In such cases, the developer may need to copy a locally generated artifact directly to the test environment.
Once the developer checks in the code changes, the next deployment to the test environment will override the locally generated artifact with one from the binary source control repository. This built-in safety net prevents out-of-process, locally generated artifacts from entering the production environment.
Native code deployment requires copying the binaries (DLL, JAR, etc.) to the target environment. The previous version of each artifact is no longer valid (though it may be kept for a quick rollback). This is the safety net that prevents out-of-process artifacts from reaching the production servers.
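This "copy and override" behavior is the whole mechanism, and it can be sketched in a few lines. Everything here (the artifact name, the directories) is illustrative:

```python
import shutil
import tempfile
from pathlib import Path

def deploy(artifact: Path, server_dir: Path) -> Path:
    """Copy a binary artifact to the target server, overriding any previous version."""
    server_dir.mkdir(parents=True, exist_ok=True)
    target = server_dir / artifact.name
    shutil.copy2(artifact, target)   # silently replaces the old file, if any
    return target

staging = Path(tempfile.mkdtemp())
server = Path(tempfile.mkdtemp()) / "app"

# A locally generated (out-of-process) artifact lands on the server...
local_artifact = staging / "service.jar"
local_artifact.write_bytes(b"local-build")
deploy(local_artifact, server)

# ...and the next deployment from binary source control overrides it.
official_artifact = staging / "service.jar"
official_artifact.write_bytes(b"from-source-control")
deployed = deploy(official_artifact, server)
print(deployed.read_bytes())
```

The second `deploy` call is the safety net: whatever was on the server before, only the artifact that came from source control survives.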
Unlike native code deployment, which invalidates previous file versions, database code deployment uses SQL scripts (DDL, DCL, and DML commands generated in the build phase) to change the structure and data to match the desired version. The script changes version A to version Z via a sequence of DDL, DCL, and DML commands, each changing the version a little bit. If the current version of the database is not A, there are two possible outcomes:
- The script will ignore the current version and override the structure with whatever exists in the script.
- The script will fail. For example: trying to add a column that already exists, or that has the wrong data type.
While an error in the deployment process is not usually a desirable outcome, in this case, failing is better than having the script run successfully and then, without warning, revert changes made by a different team or branch in production (or emergency fixes made directly in the trunk/stash).
So if you’re still reading this, congratulations. You learned something new! But why does the difference between native code and database deployment matter? Because native code deployment is only effective if we can safely assume that the file-based version control repository is the single source of truth for the code. As it happens, that assumption is far less reliable than one might expect.
With a native code approach to source control, only changes that are in the version control repository are available to the build process; changes made locally and not checked in will not be promoted and may actually be lost, even if those changes make up the properly authorized, most up-to-date version.
Database-managed source control, on the other hand, will always catch version conflicts, and where it cannot automatically resolve them on the basis of a well-defined protocol, it will report them as errors, thus assuring review and resolution by capable human hands.
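One well-defined protocol for this is a three-way comparison: the change is safe to apply automatically only when just one side has diverged from the common base version. This sketch is illustrative of the idea, not of any particular tool's algorithm:

```python
def resolve(column: str, base: str, repo_def: str, live_def: str) -> str:
    """Three-way comparison of a column definition.

    base     -- the definition both sides started from
    repo_def -- the definition in the source control repository
    live_def -- the definition in the live database
    """
    if repo_def == live_def:
        return repo_def        # both sides agree: no conflict
    if live_def == base:
        return repo_def        # only the repo changed it: safe to apply
    if repo_def == base:
        return live_def        # only the live DB changed it: keep that change
    raise ValueError(f"conflict on {column}: repo={repo_def!r} live={live_def!r}")

# Only the repository changed the definition, so it is applied automatically.
assert resolve("email", "TEXT", "VARCHAR(255)", "TEXT") == "VARCHAR(255)"

# Both sides changed it differently: report an error for human review.
try:
    resolve("email", "TEXT", "VARCHAR(255)", "VARCHAR(100)")
    conflict = None
except ValueError as exc:
    conflict = str(exc)
print(conflict)
```

The raised error in the second case plays the same role as the failing SQL script earlier: a conflict the protocol cannot resolve is surfaced, never silently overridden.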
This distinction is hugely consequential, especially at the enterprise level, where a lost update can take days to notice and cost millions of dollars.
To fully embrace DevOps, therefore, you’ll need to be agile, continuous, and controlled across all environments. Frequent releases only amplify the importance of a stable database backbone for your software.
When it comes to intelligently and scalably managing source control, working from within the database should be the standard. That said, you might need to outfit your system with added functionality to make sure it can rapidly implement changes without error and without lengthy development cycles, while eliminating the possibility of overriding critical changes.