GitOps
The term GitOps has become quite popular lately. There are plenty of good reads about it on the internet, but I bet that if you google “gitops”, 9 out of 10 results will be about application deployments in Kubernetes. This has led to the common misconception that GitOps is for Kubernetes only.
In fact, GitOps works great in any scenario where declarative configuration is used. I have successfully used it to provision cloud infrastructure with Terraform, and I am going to share some insights, tips, and tricks I’ve learned over the last year.
So what is GitOps?
In a nutshell, GitOps is an approach to handling Ops tasks with Git as the centerpiece. We all know that Git is an amazing collaboration tool. It helps us track changes in code and revert them in a matter of minutes when necessary. All modern Continuous Integration (CI) systems work in combination with Git: they observe the state of a Git repository and trigger jobs when that state changes.
The Git repo becomes, in a way, the single source of truth, because every time the repo state changes, a new CI job is triggered to reflect that change in the target environment. That means your provisioning procedure might run multiple times in a row against the same or almost the same infrastructure code, so it must be idempotent. This is the only constraint GitOps puts on the tools you can use. Terraform is a good example of such a tool, since it keeps track of the infrastructure state and (re)deploys only the delta.
Bells and whistles
Git itself is a small CLI tool and is rarely used on its own; typically you will use a Git-based code hosting service. The most popular are GitHub, GitLab, and Bitbucket. They all offer the distributed version control and source code management functionality of Git, but also bring additional features (bug tracking, pull requests, integrated CI, etc.).
All of them provide the capability (either built in or via webhooks) to launch scripts on events such as a push, the creation or update of a pull request, a new comment in the PR discussion, and so on. Moreover, they give you access to their API, allowing you to build more complex scenarios.
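For example, a CI job can use that API to post the output of terraform plan back to the pull request it is validating. Below is a minimal sketch in Python against the GitHub REST API; the repository slug, the environment variable names, and the plan file are placeholders I chose for illustration, not part of any particular CI system.

import os
import requests

# Hypothetical values, assumed to be provided by the CI job; adjust to your setup.
GITHUB_API = "https://api.github.com"
REPO = "my-org/infrastructure"            # "owner/repo" slug
PR_NUMBER = os.environ["PR_NUMBER"]       # the pull request being validated
TOKEN = os.environ["GITHUB_TOKEN"]        # token with permission to comment

def comment_on_pr(body: str) -> None:
    # Pull requests share the issue comments endpoint in the GitHub REST API.
    url = f"{GITHUB_API}/repos/{REPO}/issues/{PR_NUMBER}/comments"
    response = requests.post(
        url,
        headers={
            "Authorization": f"token {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        json={"body": body},
    )
    response.raise_for_status()

if __name__ == "__main__":
    # Assumes an earlier pipeline step saved the `terraform plan` output to plan.txt.
    with open("plan.txt") as plan:
        comment_on_pr("Terraform plan for this change:\n\n" + plan.read())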
Other common and incredibly useful features are merge checks and branch protection. They empower you to enforce how and when your code “moves” between branches of the repo. Here are some examples of what you can do with them:
- Deny direct pushes to a branch unless they come from merging a Pull Request
- Prevent merging a Pull Request unless it was approved by one or more people from a certain group (e.g. code owners)
- Prevent merging if the last CI build on the branch has failed
- Prevent merging if discussions on the PR have not been resolved
Use them to create “gates” between different environments.
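These rules are usually set up in the hosting service’s UI, but they can also be managed through the API, so the “gates” themselves live in code. Here is a minimal sketch using Python and GitHub’s branch protection endpoint; the repository, branch, and status check name are placeholders, and the payload should be checked against your provider’s API documentation.

import os
import requests

REPO = "my-org/infrastructure"   # hypothetical "owner/repo" slug
BRANCH = "staging"               # the branch acting as a gate to the staging environment

protection = {
    # Require the last CI build on the branch to be green before merging.
    "required_status_checks": {"strict": True, "contexts": ["ci/terraform-plan"]},
    # Require at least one approval, e.g. from code owners.
    "required_pull_request_reviews": {
        "required_approving_review_count": 1,
        "require_code_owner_reviews": True,
    },
    "enforce_admins": True,
    "restrictions": None,
}

response = requests.put(
    f"https://api.github.com/repos/{REPO}/branches/{BRANCH}/protection",
    headers={
        "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    },
    json=protection,
)
response.raise_for_status()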
The GitOps Workflow
I’ve digressed a bit into hosted Git solutions and their feature set to give a little perspective on what they have to offer. Now let’s see how you can use them to build a more or less useful GitOps workflow.
As you can see, the diagram is very generic and can be applied with any hosted Git offering, CI system, or cloud provider. The Pull Request event is used only once, to validate the code in the testing environment before pushing it to staging. After successful validation, neither the code nor the deployment process will change, so you can use the same job/script for every environment. To make this possible, you will have to store environment-specific variables, secrets, and credentials in some sort of secret management system and fetch them at runtime. Variables that are the same for all environments and are not sensitive can be stored in Git together with the infrastructure code.
I prefer HashiCorp Vault’s KV engine for storing secrets and credentials. The KV path can be composed as something like:
kv/TEAM_NAME/BRANCH_NAME/*
That way you can write a generic script that fetches all secrets under the prefix and writes them to a .tfvars file or sets them as environment variables. You can use it with any number of teams and environments, with BRANCH_NAME acting as a pointer to the environment.
To get an idea of how you can achieve that, check out my post Injecting secrets into CI/CD from Hashicorp Vault.
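As an illustration, here is a minimal sketch of such a script in Python, assuming the hvac client, a KV v2 engine mounted at kv/, and VAULT_ADDR, VAULT_TOKEN, TEAM_NAME, and BRANCH_NAME exported by the CI job; all names are placeholders.

import os
import hvac

# Assumed to be exported by the CI job; the path layout follows kv/TEAM_NAME/BRANCH_NAME/*
TEAM = os.environ["TEAM_NAME"]
BRANCH = os.environ["BRANCH_NAME"]
PREFIX = f"{TEAM}/{BRANCH}"

client = hvac.Client(url=os.environ["VAULT_ADDR"], token=os.environ["VAULT_TOKEN"])

# List every secret stored under the team/branch prefix (KV v2 engine mounted at "kv").
keys = client.secrets.kv.v2.list_secrets(path=PREFIX, mount_point="kv")["data"]["keys"]

# Write each key/value pair into a .tfvars file that Terraform will pick up.
with open("secrets.auto.tfvars", "w") as tfvars:
    for key in keys:
        if key.endswith("/"):
            continue  # nested folders are skipped in this simple sketch
        secret = client.secrets.kv.v2.read_secret_version(path=f"{PREFIX}/{key}", mount_point="kv")
        for name, value in secret["data"]["data"].items():
            tfvars.write(f'{name} = "{value}"\n')

Since Terraform loads any *.auto.tfvars file from the working directory on its own, the provisioning job stays identical for every team and environment.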
What else can you do?
Of course, you can also have CI jobs for the other PRs if there is a suitable action to launch. For example, you could generate and send a notification to a Slack channel with the URL of the PR and the results of the test/deployment to the previous environment. Generally speaking, appropriate notifications are a very important aspect of a GitOps workflow. Having a message sent to your Slack channel or HipChat room with a good summary helps a lot. Just try to keep the noise down.
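As a minimal sketch, here is what such a notification could look like with Python and a Slack incoming webhook; the webhook URL and the environment variable names are placeholders assumed to be provided by the CI job.

import os
import requests

# Hypothetical variables set by the CI job; the webhook URL comes from Slack's Incoming Webhooks integration.
webhook_url = os.environ["SLACK_WEBHOOK_URL"]
pr_url = os.environ["PR_URL"]
result = os.environ.get("DEPLOY_RESULT", "unknown")

message = (
    f"Terraform deployment to testing finished: {result}\n"
    f"Review and promote to staging: {pr_url}"
)

# Slack incoming webhooks accept a simple JSON payload with a "text" field.
requests.post(webhook_url, json={"text": message}).raise_for_status()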
Conclusion
I believe that GitOps is an extremely useful approach and will definitely be a nice addition to the DevOps engineer’s toolbox. At the same time, it’s not a silver bullet, and in some scenarios it might not make sense. So before going wild and transforming everything into GitOps workflows, try it in a proof of concept and see if it adds value. I hope this article brings you some inspiration and helps you build GitOps workflows in your organization.