Azure is my favorite cloud provider. We use Azure for most of our infra and services: our code lives in Azure DevOps, our Docker container images are hosted in Azure Container Registry (ACR), and our Kubernetes clusters run in Azure Kubernetes Service (AKS).
We configured our CI/CD pipelines in Azure DevOps. In our case, we have a monorepo containing several ASP.NET Core microservices, with a folder structure (inherited from eShopOnContainers) like the one below.
```
- build
  - azure-devops
    - common
      - ci-steps-template.yml
      - ci-vars-template.yml
    - project-one
      - ci-pipeline.yml
    - project-two
- deploy
  - azure-devops
    - common
      - cd-steps-template.yml
      - cd-steps-template-prod.yml
      - cd-vars-template.yml
    - project-one
      - cd-pipeline.yml
    - project-two
  - k8s
    - helm
      - project-one
      - project-two
- src
  - Services
    - Project-One
    - Project-Two
```
One of the articles that helped me with the initial CI/CD setup is linked below. It is somewhat outdated now, since it uses the `az acr helm` commands that were later deprecated, but it is still worth reading, so definitely check it out.
👉 Tutorial: Using Azure DevOps to setup a CI/CD pipeline and deploy to Kubernetes
CI Pipeline
The CI pipeline does the following:
- Builds a Docker image and pushes it to ACR
- Packages the Helm chart and pushes it to ACR
Prerequisites
- A Helm chart for your project. Here my chart directory is located at deploy/k8s/helm. To create a new chart for your project, refer to Helm Create.
- acr-connection-name: an ACR service connection in Azure DevOps. You can add it under Azure DevOps > Project > Project Settings > Service Connections.
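For reference, a freshly scaffolded chart carries a Chart.yaml roughly like the following. This is a hypothetical minimal example for project-one (field values are placeholders); note that the CI pipeline below overrides both version fields at package time:

```yaml
# Hypothetical minimal Chart.yaml for deploy/k8s/helm/project-one,
# roughly what `helm create project-one` scaffolds (apiVersion v2 = Helm 3).
apiVersion: v2
name: project-one
description: A Helm chart for the project-one microservice
type: application
version: 0.1.0       # overridden by --version in the CI pipeline
appVersion: "1.0.0"  # overridden by --app-version in the CI pipeline
```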
I stored the ACR credentials in an Azure DevOps variable group (acr-variable-group).

| Name | Value |
| --- | --- |
| registryName | Your ACR name |
| registryLogin | ACR login |
| registryPassword | ACR password |
Common
ci-vars-template.yml
```yaml
parameters:
  projectName: ""
  dockerRegistryServiceConnectionName: ""
  dockerfile: ""
  buildContext: ""

variables:
  helmVersion: 3.2.3
  HELM_EXPERIMENTAL_OCI: 1
  registryServerName: "$(registryName).azurecr.io"
  dockerRegistryServiceConnectionName: ${{ parameters.dockerRegistryServiceConnectionName }}
  dockerfile: ${{ parameters.dockerfile }}
  buildContext: ${{ parameters.buildContext }}
  projectName: ${{ parameters.projectName }}
  imageName: ${{ parameters.projectName }}
  imageTag: $(build.sourceBranchName)
  helmChartVersion: $(build.sourceBranchName)
  helmfrom: $(Build.SourcesDirectory)/deploy/k8s/helm
  helmto: $(Build.ArtifactStagingDirectory)/deploy/k8s/helm
```
A few things to note here:
- `HELM_EXPERIMENTAL_OCI` enables OCI support in the Helm 3 client. At the time of writing this support is experimental; in later Helm releases (3.8+) OCI support became stable and the experimental `helm chart` subcommands were replaced by `helm push` and `helm pull oci://...`.
- Using `build.sourceBranchName` as the image tag and chart version is handy if you follow Gitflow (as we do) or a similar Git branching convention: each release tag (e.g., refs/tags/project-one/2.2.6) then produces a Docker image and a Helm chart with the same version.
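To make the tag-to-version mapping concrete, here is a small standalone sketch (plain bash, with a hypothetical release ref) of how Azure DevOps derives Build.SourceBranchName from a Gitflow release tag:

```shell
# Hypothetical release tag ref, following the article's naming convention.
ref="refs/tags/project-one/2.2.6"

# Build.SourceBranchName is the last path segment of the ref; it becomes
# both the Docker image tag and the Helm chart version in the pipeline.
version="${ref##*/}"
echo "$version"   # 2.2.6
```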
ci-steps-template.yml
```yaml
steps:
  - task: Docker@2
    displayName: Build and push an image to container registry
    inputs:
      command: buildAndPush
      repository: $(imageName)
      dockerfile: $(dockerfile)
      containerRegistry: $(dockerRegistryServiceConnectionName)
      buildContext: $(buildContext)
      tags: |
        $(imageTag)

  - task: HelmInstaller@1
    displayName: "install helm"
    inputs:
      helmVersionToInstall: $(helmVersion)
  - bash: |
      echo $(registryPassword) | helm registry login $(registryName).azurecr.io --username $(registryLogin) --password-stdin
      cd deploy/k8s/helm/
      helm chart save $(helm package --app-version $(imageTag) --version $(helmChartVersion) ./$(projectName) | grep -o '/.*.tgz') $(registryName).azurecr.io/charts/$(projectName):$(imageTag)
      helm chart push $(registryName).azurecr.io/charts/$(projectName):$(helmChartVersion)
      echo $(jq -n --arg version "$(helmChartVersion)" '{helmChartVersion: $version}') > $(build.artifactStagingDirectory)/variables.json
    failOnStderr: true
    displayName: "helm package"
  - task: CopyFiles@2
    inputs:
      sourceFolder: $(helmfrom)
      targetFolder: $(helmto)
  - publish: $(build.artifactStagingDirectory)
    artifact: build-artifact
```
We moved the CI steps into a common template file, ci-steps-template.yml, so that we can reuse them in other pipelines as well. The steps:

- Build and push the Docker image.
- Install the Helm client.
- Run a script that:
  - authenticates to ACR,
  - packages the Helm chart and pushes it to ACR,
  - creates variables.json, which contains the newly created Helm chart version; we will use it during CD to fetch the right chart version.
- Copy some additional files into the artifact, which we can use to override Helm chart values.
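The variables.json handoff can be sketched standalone like this (plain bash with jq; the version value is a placeholder, and the real pipeline script writes the file into the build artifact directory instead of the working directory):

```shell
# Placeholder for the $(helmChartVersion) pipeline variable.
helmChartVersion="2.2.6"

# CI side: write the chart version into variables.json.
jq -n --arg version "$helmChartVersion" '{helmChartVersion: $version}' > variables.json

# CD side: read it back to know which chart version to pull.
jq -r .helmChartVersion variables.json   # 2.2.6
```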
ci-pipeline.yml
```yaml
trigger:
  branches:
    include:
      - refs/tags/project-one/*
  paths:
    include:
      - src/Services/ProjectOne/*

pr: none

pool:
  vmImage: "ubuntu-latest"

variables:
  - group: acr-variable-group
  - template: ../common/ci-vars-template.yml
    parameters:
      projectName: "project-one"
      dockerRegistryServiceConnectionName: "acr-connection-name"
      dockerfile: "src/Services/Project-One/Dockerfile"
      buildContext: "$(System.DefaultWorkingDirectory)"

steps:
  - template: ../common/ci-steps-template.yml
```
If everything went well, you will have two repositories in your ACR:
- project-one, which contains the Docker image
- charts/project-one, for the Helm chart
CD Pipeline
The CD pipeline installs the Helm chart on AKS. The CD stages require the following details:

| Name | Value |
| --- | --- |
| aks | AKS name |
| rg | AKS resource group |
| aksSpTenantId | Subscription tenant id |
| aksSpId | Service principal id |
| aksSpSecret | Service principal password |
I stored these credentials in another variable group, named aks-variable-group.
Helpful commands
Service principal credentials
Create a new service principal, aks-name-deploy, with:
```shell
az ad sp create-for-rbac -n aks-name-deploy --scopes aks-resource-id --role "Azure Kubernetes Service Cluster User Role" --query password -o tsv
```
where aks-resource-id is the output of:
```shell
az aks show -n $aks -g $rg --query id -o tsv
```
The create-for-rbac command above outputs the service principal password (aksSpSecret). To get the service principal id (aksSpId):
```shell
az ad sp show --id http://aks-name-deploy --query appId -o tsv
```
We also need to attach the ACR to AKS, so that AKS can pull our private Docker images from it.
Attach ACR with AKS
```shell
az aks update -g $rg -n $aks --attach-acr acr-resource-id
```
where acr-resource-id is the output of:
```shell
az acr show -n $registryName -g acr-resource-group-name --query id -o tsv
```
Get Azure Tenant Id
To get the tenant id (aksSpTenantId):
```shell
az account show --query tenantId -o tsv
```
Now let's explore the pipeline YAML files.
Common
cd-vars-template.yml
```yaml
parameters:
  projectName: ""

variables:
  helmVersion: 3.2.3
  HELM_EXPERIMENTAL_OCI: 1
  registryServerName: "$(registryName).azurecr.io"
  projectName: ${{ parameters.projectName }}
```
cd-steps-template.yml
```yaml
steps:
  - checkout: none
  - task: HelmInstaller@1
    displayName: "install helm"
    inputs:
      helmVersionToInstall: $(helmVersion)
  - download: ci-pipeline
    artifact: build-artifact
  - bash: |
      az login \
        --service-principal \
        -u $(aksSpId) \
        -p '$(aksSpSecret)' \
        --tenant $(aksSpTenantId)
      az aks get-credentials \
        -n $(aks) \
        -g $(rg)
      echo $(registryPassword) | helm registry login $(registryServerName) --username $(registryLogin) --password-stdin
      helmChartVersion=$(jq .helmChartVersion $(pipeline.workspace)/ci-pipeline/build-artifact/variables.json -r)
      helm chart pull $(registryServerName)/charts/$(projectName):$helmChartVersion
      helm chart export $(registryServerName)/charts/$(projectName):$helmChartVersion --destination $(pipeline.workspace)/install
      helm upgrade \
        --namespace $(k8sNamespace) \
        --create-namespace \
        --install \
        --wait \
        --version $helmChartVersion \
        --set image.repository=$(registryServerName)/$(projectName) \
        -f $(pipeline.workspace)/ci-pipeline/build-artifact/deploy/k8s/helm/app.yaml \
        -f $(pipeline.workspace)/ci-pipeline/build-artifact/deploy/k8s/helm/inf.yaml \
        $(projectName) \
        $(pipeline.workspace)/install/$(projectName)
    failOnStderr: true
    displayName: "deploy helm chart"
```
The common CD steps run a script that:
- authenticates to Azure using the service principal credentials,
- sets the specified AKS cluster as the context,
- logs in to ACR with the ACR credentials (the same ones used in the CI pipeline, defined in acr-variable-group),
- extracts the Helm chart version that needs to be installed,
- pulls the Helm chart and installs (or upgrades) it. Here we override the chart's image repository to point at our ACR repository, plus some additional common values (app.yaml and inf.yaml).
cd-pipeline.yml
```yaml
trigger: none
pr: none

# define variables: registryName, registryLogin and registryPassword in the Azure pipeline UI definition
variables:
  - group: acr-variable-group
  - template: ../common/cd-vars-template.yml
    parameters:
      projectName: "project-one"
  - name: k8sNamespace
    value: myteam

resources:
  pipelines:
    - pipeline: ci-pipeline
      source: "project-one-ci"
      trigger:
        enabled: true
        branches:
          include:
            - refs/tags/project-one/*

# define 5 variables: aks, rg, aksSpId, aksSpSecret and aksSpTenantId in the Azure pipeline UI definition
stages:
  - stage: test
    displayName: test
    jobs:
      - deployment: test
        variables:
          - group: aks-variable-group
        displayName: deploy helm chart into AKS
        pool:
          vmImage: ubuntu-latest
        environment: test-$(projectName)
        strategy:
          runOnce:
            deploy:
              steps:
                - template: ../common/cd-steps-template.yml
  - stage: production
    displayName: production
    jobs:
      - deployment: production
        variables:
          - group: aks-prod-variable-group
        displayName: deploy helm chart into AKS
        pool:
          vmImage: ubuntu-latest
        environment: production-$(projectName)
        strategy:
          runOnce:
            deploy:
              steps:
                - template: ../common/cd-steps-template-prod.yml
```
In the CD pipeline above, I defined two stages, one for TEST and one for PROD. The main difference between them is the variable group used: aks-variable-group holds the TEST cluster values and, you guessed it, aks-prod-variable-group holds the PROD cluster values. The difference between cd-steps-template.yml and cd-steps-template-prod.yml is that the prod file passes some additional chart value overrides specific to our PRODUCTION environment.
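As an illustration, a production values override file might look like the following. This is entirely hypothetical; every key and value here is an invented example of a production-only override, not the real file:

```yaml
# Hypothetical values-prod.yaml for project-one; all values are
# invented examples of production-only overrides.
replicaCount: 3
resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: "1"
    memory: 512Mi
```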
cd-steps-template-prod.yml
```yaml
steps:
  - checkout: none
  - task: HelmInstaller@1
    displayName: "install helm"
    inputs:
      helmVersionToInstall: $(helmVersion)
  - download: ci-pipeline
    artifact: build-artifact
  - bash: |
      az login \
        --service-principal \
        -u $(aksSpId) \
        -p '$(aksSpSecret)' \
        --tenant $(aksSpTenantId)
      az aks get-credentials \
        -n $(aks) \
        -g $(rg)
      echo $(registryPassword) | helm registry login $(registryServerName) --username $(registryLogin) --password-stdin
      helmChartVersion=$(jq .helmChartVersion $(pipeline.workspace)/ci-pipeline/build-artifact/variables.json -r)
      helm chart pull $(registryServerName)/charts/$(projectName):$helmChartVersion
      helm chart export $(registryServerName)/charts/$(projectName):$helmChartVersion --destination $(pipeline.workspace)/install
      helm upgrade \
        --namespace $(k8sNamespace) \
        --create-namespace \
        --install \
        --wait \
        --version $helmChartVersion \
        --set image.repository=$(registryServerName)/$(projectName) \
        -f $(pipeline.workspace)/ci-pipeline/build-artifact/deploy/k8s/helm/app.yaml \
        -f $(pipeline.workspace)/ci-pipeline/build-artifact/deploy/k8s/helm/inf.yaml \
        -f $(pipeline.workspace)/ci-pipeline/build-artifact/deploy/k8s/helm/inf-prod.yaml \
        -f $(pipeline.workspace)/ci-pipeline/build-artifact/deploy/k8s/helm/$(projectName)/values-prod.yaml \
        $(projectName) \
        $(pipeline.workspace)/install/$(projectName)
    failOnStderr: true
    displayName: "deploy helm chart"
```
A few more notes
- The CD pipeline is also YAML based (you are going to like it), so create it like a regular pipeline in Azure DevOps (not as a Release) and choose cd-pipeline.yml after selecting "Existing Azure Pipelines YAML file".
- Once you create the CD pipeline, check the Environments under Azure DevOps Pipelines. There will be two environments as per the above example, test-project-one and production-project-one. Inside each, you can configure approvals and more for the respective CD stages.
A sample reference source code is also pushed here.
If any part of this article is still a grey area for you, feel free to shoot a question in the comments below 👇 and I will try to shed some light on it.