
Why You Should Use a CICD Pipeline

CICD is talked about quite a bit, and I want to add my voice to those who approve of, recommend, and use CICD pipelines. What I love most about constructing a pipeline is watching everything flow through after it has been set up. There’s something magical and fun about watching it trigger and perform all those actions that took so long to build and would take even longer to complete manually. If you are not already using a CICD pipeline, or are not familiar with them, I hope this post makes you want to incorporate one into your workflows.

To boil it down, a pipeline is just a set of scripts that runs on every code change. That code can be application code or infrastructure as code. The most popular way to trigger a pipeline is by a webhook set up with your remote repository, most likely GitHub. Since committing and pushing code is a normal part of a developer’s workflow, this fits right in without being a bother. After that trigger, the pipeline takes care of the rest, or at least it should.
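
To make that concrete, here is a minimal sketch of the idea, assuming a push-style webhook like GitHub’s and two hypothetical scripts (run_tests.sh and build_artifact.sh). A real pipeline tool layers queuing, logging, and secrets handling on top, but the core loop is just this: receive the event, run the scripts.

import json
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the webhook payload; a GitHub push event includes a "ref"
        # field such as "refs/heads/main" identifying the pushed branch.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        branch = payload.get("ref", "unknown")

        # Every push gets the same treatment: run the scripts in order,
        # stopping at the first failure.
        for script in ("./run_tests.sh", "./build_artifact.sh"):  # hypothetical scripts
            subprocess.run([script, branch], check=True)

        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), WebhookHandler).serve_forever()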

There are two parts to CICD: CI, or continuous integration, and CD, which can mean continuous delivery, continuous deployment, or both. Continuous integration more or less checks that new code is acceptable to be merged in from a branch (more on branching strategy later). Normally, static analysis, linting, and tests are run in this section of the pipeline, but every team has its own flow and acceptance criteria. Continuous delivery can happen for a change to any branch but is most important on the long-running branches (think main). In this section of the pipeline, code is packaged into some sort of artifact that is ready to be deployed. That artifact could be a Docker image, a zip file, a binary, or some other type of executable. With today’s platforms, I would bet that more often than not, the output of continuous delivery is either a Docker image or a zip file. Continuous deployment is the section of the pipeline where the artifact finally goes live: application code is ready for use, and infrastructure has been provisioned and configured.
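
As a rough sketch of how those sections might be separated, here is each stage reduced to the shell commands it could run. The specific tools and names (flake8, pytest, docker, kubectl, registry.example.com, myapp) are stand-ins; every team’s steps will differ.

import subprocess

def run(cmd):
    print(f"+ {cmd}")
    subprocess.run(cmd, shell=True, check=True)  # stop the pipeline on the first failure

def continuous_integration():
    # Static analysis, linting, and tests gate whether the branch can merge.
    run("flake8 src/")
    run("pytest tests/")

def continuous_delivery(version):
    # Package the change into a deployable artifact, here a Docker image.
    run(f"docker build -t registry.example.com/myapp:{version} .")
    run(f"docker push registry.example.com/myapp:{version}")

def continuous_deployment(version):
    # Push the artifact live; this could just as easily be a zip upload
    # or an infrastructure-as-code apply.
    run(f"kubectl set image deployment/myapp myapp=registry.example.com/myapp:{version}")

if __name__ == "__main__":
    version = "1.2.3"
    continuous_integration()
    continuous_delivery(version)
    continuous_deployment(version)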

Various pieces of software help this process along, like Jenkins, Spinnaker, CircleCI, Travis, CodePipeline, and GitHub Actions. All of those names aside, the idea behind it all is to get code ready for deployment in a consistent, easily reproducible manner. In the end, it doesn’t matter how the code gets there or which software helps it along; the goal is consistency, time savings, and deployment. Plenty of organizations out there have articles about fancy pipelines that do wacky and wild things, but while building out your pipeline, just remember the goal behind it all. Start from the foundation and build it up. Mature organizations had to do the same thing before they could showcase their intricate pipelines. Some of the extras that organizations like to throw in are alerting and feedback, automatic promotion through environments, blue/green or canary deployments, various levels and styles of testing, and the list goes on. All of these features have benefits, and it’s up to the engineer and organization to determine what fits their needs best.
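
As one example of those extras, automatic promotion through environments can be pictured as deploying to each environment in order and only moving on when a health check passes. The deploy script, environment names, and URLs below are hypothetical; alerting or rollback would hook in where the promotion stops.

import subprocess
import urllib.request

ENVIRONMENTS = ["dev", "staging", "prod"]

def deploy(env, version):
    # Hypothetical deploy script; this could be anything from a zip upload
    # to a full infrastructure-as-code run.
    subprocess.run(["./deploy.sh", env, version], check=True)

def healthy(env):
    # A very simple health gate: the environment must answer 200 on /health.
    try:
        with urllib.request.urlopen(f"https://{env}.example.com/health", timeout=10) as resp:
            return resp.status == 200
    except OSError:
        return False

def promote(version):
    for env in ENVIRONMENTS:
        deploy(env, version)
        if not healthy(env):
            print(f"{env} failed its health check; stopping promotion")
            break  # alerting, rollback, or a manual gate could hook in here
        print(f"{version} promoted to {env}")

if __name__ == "__main__":
    promote("1.2.3")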

As promised, I will touch on the branching strategy for a second. This is important because it can affect the day-to-day workflow of developers, and in my experience, it is best to sort this out ahead of time and build a pipeline based on the branching strategy. In my opinion, the GitHub Flow is best suited for today’s CICD practices. The idea is to have a single long-running branch (main), which serves as the source of truth and should always be in a deployable state. Feature branches should branch off of main and merge back in on approval. If at any point a bug is discovered in main, every developer drops their work until the bug is fixed. I have worked on a few projects that use this branching strategy, and it seems to work well. There are other strategies out there, so explore and find what works best for you and your team.
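
Tying the pipeline to that branching strategy might look something like the sketch below: continuous integration runs for every branch, while delivery and deployment only run for main. The branch name is read from an environment variable the CI system would set (GITHUB_REF_NAME on GitHub Actions, for example), and the build and deploy commands are placeholders.

import os
import subprocess

def run(cmd):
    subprocess.run(cmd, shell=True, check=True)

# The CI system tells us which branch triggered the run; GITHUB_REF_NAME is
# what GitHub Actions sets, and other platforms use their own variable.
branch = os.environ.get("GITHUB_REF_NAME", "unknown")

# CI: every branch has to pass the checks before it can merge into main.
run("pytest tests/")

if branch == "main":
    # CD: only the long-running branch produces and ships an artifact.
    run("docker build -t myapp:latest .")
    run("./deploy.sh production")  # hypothetical deploy script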

One last benefit that I would like to bring up is how well CICD and a microservices architecture go hand in hand. The whole idea behind microservices is to keep things small and agile. It’s easy to take pieces out and put others in. It’s also easy to iterate on. Practicing CICD functions similarly. Keeping merges into main small helps introduce fewer bugs at a time and keeps things moving more quickly. Having simpler services also means simpler deployments (hopefully) and therefore simpler pipelines. Everything is easier to create and destroy as need be, which also makes it easier to iterate on a pipeline while building it out alongside a microservice.

There are a great many benefits to using a CICD pipeline, and the software development lifecycle moves much more quickly with one. There are off-the-shelf solutions for many generic and commonly used architectures and services, or a team can build its own pipeline, because at the end of the day it’s just a set of scripts. One last word of warning: watch out for over-engineering a pipeline. I have seen pipelines that go overboard and bloat the process. Even so, I would encourage everyone to look into the practice and potentially implement it on their team.

Categories: aws | ops