Want to make sure you are on the right track on your continuous delivery journey? The following best practices will help get you there and keep your DevOps team headed in the right direction.
Ensure Continuous Integration is in Place and Running
The first step to a solid continuous delivery pipeline is to have a build of your application ready to be deployed. That application build, as part of continuous integration, should also run automated tests, with enough code coverage to give you confidence it is ready to be deployed.
The same build of the application should be used through the entire pipeline. That's right: build the application binaries once, not in every environment. If a problem is discovered as the build travels the pipeline toward production, and it is deemed not worthy, abandon that build. Do not fix it in place; fix the problem in the code. Check the code in, and let continuous integration and continuous delivery do their thing and start moving a new build through the pipeline.
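The "build once, promote everywhere" idea can be sketched in a few lines. This is a hypothetical Python sketch, not any real pipeline tool: it records the artifact's checksum at build time and refuses to deploy to any environment unless the same bits that continuous integration produced are still there.

```python
import hashlib


def fingerprint(artifact: bytes) -> str:
    """Record the artifact's identity once, at build time."""
    return hashlib.sha256(artifact).hexdigest()


def promote(artifact: bytes, build_digest: str, environment: str) -> str:
    """Deploy the build to an environment only if it is byte-for-byte
    the artifact that continuous integration produced."""
    if hashlib.sha256(artifact).hexdigest() != build_digest:
        # The artifact changed after CI built it -- never patch it in place.
        raise ValueError(f"artifact was modified before reaching {environment}")
    return f"deployed {build_digest[:12]} to {environment}"


# Build once...
build = b"app-1.4.2 binaries"
digest = fingerprint(build)

# ...then promote the same build through every stop in the pipeline.
for env in ("system-test", "uat", "production"):
    print(promote(build, digest, env))
```

Real pipeline servers and package repositories do this bookkeeping for you, but the invariant is the same: one build, one fingerprint, every environment.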
At Least One Stop between Development and Production
The core concept behind continuous delivery is that the entire release package—from the application build to the scripts that build and configure the environment it runs in—is solid and ready for production. Production should simply be another environment to run the same automation through the same steps.
To get to the point where everyone is comfortable, and to make doing a release so routine that it is mind-numbingly boring, the pipeline needs at least one stop between the development area and production. You need at least one completely hands-off environment to validate that the release is production-ready and that all the scripts and mechanisms included in the release will build the environment and deploy the application as planned, every time.
Some projects will have five or more environments that are stops along the way with names like:
- System test
- UAT (User Acceptance Testing)
But remember: development, to at least one intermediate stop, then on to production is the minimum that makes everything run better.
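The shape of that flow can be sketched as code. In this hypothetical Python sketch (the environment names and deploy routine are illustrative, not any specific tool), production is just the last entry in the list, deployed by exactly the same routine as every other stop:

```python
# Production is not special-cased anywhere -- it is simply the final
# environment the same automation runs against.
PIPELINE = ["development", "system-test", "uat", "production"]


def deploy(release: str, environment: str) -> str:
    # In a real pipeline this would build the environment from scripts
    # and deploy the application; here it only records what happened.
    return f"{release} -> {environment}"


def run_pipeline(release: str) -> list[str]:
    # The same automation, the same steps, at every stop.
    return [deploy(release, env) for env in PIPELINE]


print(run_pipeline("release-42"))
```

If deploying to production requires code that deploying to UAT does not, that difference is exactly where releases break.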
Fail and Restart Your Pipeline
I mentioned it previously, but I just want to reiterate: The true value of a well-running continuous delivery pipeline is how easy it is to start again at the beginning.
I am sure anyone in development or operations remembers spending time tweaking settings in an environment by hand, adding a manual step or two to accommodate some little thing "because there isn't time to fix it right." Those tweaks compound as the number of environments increases. This whole idea goes away with a pipeline built on immutable infrastructure such as Docker containers.
If a release didn't work, find out why. Fix it back at the beginning, then let the automation take over and bring you back to the point where you stopped.
It may seem obvious, but I have seen it more times than I care to admit: some step remains manual, rather than automated, and needs to be done before, during, or after a release is deployed for everything to work. You may not even know the step is manual; the most common telltale sign is that the pipeline only works as expected when specific staff are involved. This is not mal-intent, it is just human nature. If something is routine, people often don't think about it, and it may never have occurred to them to automate that step in the right script.
Version Control for Everything
Use a version control system (like Git, Subversion, or Team Foundation Version Control) for every script, database change, and configuration file that is part of the release package. How else can you really know what has changed? Traditional IT operations teams may need some help getting used to a version control system instead of renaming directories with "old" or the date in the names, but they will catch on and then wonder how they ever lived without it.
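As a hypothetical example, a release package tracked in version control might look something like this (the layout and names are illustrative, not a standard):

```
release-package/
├── app/                  # application source, built once by CI
├── infrastructure/       # scripts that build and configure each environment
├── config/
│   ├── uat.properties
│   └── production.properties
└── db/
    └── migrations/       # versioned database changes
```

The point is that everything needed to recreate an environment and deploy the release lives in one versioned place, so "what changed?" always has an answer.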
Binaries can and should be stored in a package repository like Nexus, Artifactory, or Archiva; stored inside a source code version control system, they just cause trouble.
Include the Database in the Pipeline
If you include all changes to the database as part of releases, you will truly have an automated and repeatable application deployment. This can be done as part of the application itself or with a third-party tool like Flyway or Liquibase.
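To make the idea concrete, here is a hypothetical Python sketch of the pattern tools like Flyway and Liquibase implement: apply versioned database changes in order and record which ones have already run, so the same release deploys repeatably to every environment. (The schema-history table and migration list here are illustrative, not either tool's actual format.)

```python
import sqlite3

# Versioned database changes shipped as part of the release package.
MIGRATIONS = [
    ("V1", "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)"),
    ("V2", "ALTER TABLE customer ADD COLUMN email TEXT"),
]


def migrate(conn: sqlite3.Connection) -> list[str]:
    """Apply any migrations not yet recorded; safe to run on every deploy."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_history (version TEXT PRIMARY KEY)"
    )
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_history")}
    ran = []
    for version, sql in MIGRATIONS:
        if version not in applied:
            conn.execute(sql)
            conn.execute("INSERT INTO schema_history VALUES (?)", (version,))
            ran.append(version)
    conn.commit()
    return ran


db = sqlite3.connect(":memory:")
print(migrate(db))  # first deploy applies every pending change
print(migrate(db))  # rerunning is a no-op, so deployments stay repeatable
```

Because the migration history lives in the database itself, every environment in the pipeline converges on the same schema no matter how many releases behind it started.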
Monitor Your Continuous Delivery Pipeline
No matter how well tested your code is, a faulty algorithm, dependencies on services and system resources, or failure to account for unforeseen conditions can cause software to behave unpredictably. Within the continuous delivery (CD) pipeline, troubleshooting can be difficult, and in cases like debugging in a production environment it may not even be possible. By proactively monitoring telemetry throughout the pipeline, you're more likely to catch problems before they reach production.
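Here is a minimal sketch of the idea in Python (the stage names and structure are made up for illustration): wrap each pipeline stage so its duration and outcome are recorded as telemetry, whether the stage succeeds or fails.

```python
import time
from contextlib import contextmanager

# In a real pipeline these records would go to a monitoring system;
# here they accumulate in a list.
telemetry: list[dict] = []


@contextmanager
def monitored(stage: str):
    """Record duration and outcome for a pipeline stage."""
    start = time.perf_counter()
    outcome = "ok"
    try:
        yield
    except Exception:
        outcome = "failed"
        raise
    finally:
        telemetry.append({
            "stage": stage,
            "seconds": time.perf_counter() - start,
            "outcome": outcome,
        })


with monitored("deploy-to-uat"):
    time.sleep(0.01)  # stand-in for real deployment work

print(telemetry[0]["stage"], telemetry[0]["outcome"])
```

With every stage instrumented, a stage that suddenly slows down or starts failing shows up in the telemetry long before anyone tries to push that release to production.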
Get Your CD Pipeline Flowing
That’s CD optimization in a nutshell. There is a lot more that could be said, of course, but the pointers above are a great frame of reference if you’re looking to build a CD pipeline or improve the one you already have.