Flagger and Drone — for an effective CI/CD platform

Sirishagopigiri
16 min read · Jul 18, 2021


Flagger — a simple tool used to roll out new versions of cloud-native applications.

Drone — an effective Continuous Integration platform.

These two projects have strong architectures and feature sets that help DevOps engineers in many ways, and they can be integrated to build an effective CI/CD platform that best serves the user's needs.

Here are some previous blogs that help in understanding the projects better and trying them out in a test environment: Flagger and Drone blogs. In those blogs, we discussed how to use each project in a standalone way.

In this blog, we will focus on how the two projects can be integrated to build an effective CI/CD platform.

Prerequisites

Below are the platform and system details used for this blog.

  • Ubuntu 20.04, 16 GB RAM and 4 CPUs
  • Docker CE 20.10
  • Kubernetes v1.21, single node — using the Calico CNI
  • GitLab Community Edition 13.12.3 — private GitLab running on the same host — Installation steps
  • Helm binary — refer here for installation

We will repeat the same steps used to install Flagger and Drone as discussed in the previous blogs. The steps are listed briefly here; for detailed steps, refer to those blogs.

Flagger Installation

The steps listed below install Istio and then Flagger on the Kubernetes cluster.

# Istio installation
$ curl -L https://istio.io/downloadIstio | sh -
$ cd istio-1.10.0
$ export PATH=$PWD/bin:$PATH
# To install Istio using istioctl on the kubernetes cluster
$ istioctl install --set profile=demo -y
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Egress gateways installed
✔ Installation complete
Thank you for installing Istio 1.10. Please take a few minutes to tell us about your install/upgrade experience! https://forms.gle/KjkrDnMPByq7akrYA
# Install prometheus
$ kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.10/samples/addons/prometheus.yaml
# Flagger installation
$ helm repo add flagger https://flagger.app
$ kubectl apply -f https://raw.githubusercontent.com/fluxcd/flagger/main/artifacts/flagger/crd.yaml
$ helm upgrade -i flagger flagger/flagger --namespace=istio-system --set crd.create=false --set meshProvider=istio --set metricsServer=http://prometheus:9090
# Enabling grafana
$ helm upgrade -i flagger-grafana flagger/grafana --namespace=istio-system --set url=http://prometheus.istio-system:9090 --set user=admin --set password=change-me
# Enable port-forwarding to access grafana on localhost:3000
$ kubectl -n istio-system port-forward svc/flagger-grafana 3000:80
# Check pods in istio-system namespace
$ kubectl -n istio-system get pods
NAME                                    READY   STATUS    RESTARTS   AGE
flagger-5c49576977-gdhc9                1/1     Running   0          52s
flagger-grafana-6594969455-lg9nx        1/1     Running   0          12s
istio-egressgateway-55d4df6c6b-k6wgm    1/1     Running   0          2m54s
istio-ingressgateway-69dc4765b4-9nxdg   1/1     Running   0          2m54s
istiod-798c47d594-fx2nl                 1/1     Running   0          3m54s
prometheus-8958b965-qq5zj               2/2     Running   0          2m1s

Drone Installation

With Flagger installed, we will now proceed with the Drone installation.

Drone Server Installation

Step 1: Create an OAuth application in GitLab

OAuth application creation
Application ID and Secret for the OAuth application

Step 2: Generate an OpenSSL hex key (this becomes the RPC secret)

$ openssl rand -hex 16

Step 3: Use the application ID and secret from Step 1 and the hex key from Step 2 to launch the Drone server

$ docker run \
--volume=/var/lib/drone:/data \
--env=DRONE_GITLAB_SERVER=http://192.168.1.102 \
--env=DRONE_GITLAB_CLIENT_ID=<application_id_from_step_1> \
--env=DRONE_GITLAB_CLIENT_SECRET=<secret_from_step_1> \
--env=DRONE_RPC_SECRET=<hex_key_from_step_2> \
--env=DRONE_SERVER_HOST=<Ip_and_port_for_the_server_to_run> \
--env=DRONE_SERVER_PROTO=http \
--env=DRONE_USER_CREATE=username:root,admin:true \
--env=DRONE_LOGS_TRACE=true \
--env=DRONE_LOGS_DEBUG=true \
--env=DRONE_LOGS_PRETTY=true \
--publish=<port_of_drone_server>:80 \
--publish=443:443 \
--restart=always \
--detach=true \
--name=drone \
drone/drone

Step 4: Sync GitLab and the Drone server by following the steps below:

Access the Drone server
Redirects to GitLab for authorization
After authorization, an account is created
Drone dashboard

Drone Runner Installation

The Drone runner runs as a Deployment in the Kubernetes default namespace. It integrates with the Drone server and executes pipelines on Kubernetes. Before deploying, update the environment variables in the YAML file. A sample YAML file is available here.

# deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: drone
  labels:
    app.kubernetes.io/name: drone
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: drone
  template:
    metadata:
      labels:
        app.kubernetes.io/name: drone
    spec:
      containers:
      - name: runner
        image: drone/drone-runner-kube:latest
        ports:
        - containerPort: 3000
        env:
        - name: DRONE_RPC_HOST
          value: <DRONE SERVER IP AND PORT>
        - name: DRONE_RPC_PROTO
          value: <http or https>
        - name: DRONE_RPC_SECRET
          value: <openssl secret from above step>
$ kubectl apply -f deploy.yaml
deployment.apps/drone created
$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
drone-5bdf497677-zs5cd   1/1     Running   0          27s

This completes the installation of Flagger and Drone. We will now create a sample project in GitLab and integrate it with Drone and Flagger to get an effective CI/CD pipeline for a microservice application.

Sample Project Integration with Drone and Flagger

Create a simple Go project in GitLab. The code used for the application is available here.

Sample Go project creation

Sync projects in Drone

Open the Drone UI and click Sync
Activate the repo

Next, create secrets in Drone: the Docker username and password needed to push the image to a private repository, and the Kubernetes access secrets that will be used to deploy the application on Kubernetes.
In the Drone dashboard, navigate to the project's Settings -> Secrets -> New Secret.

Creating secrets for the project.
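The pipeline shown later expects five secrets: docker_username, docker_password, k8s_server, k8s_cert and k8s_token. As a hedged sketch (the original steps do not show these commands), the three Kubernetes values could be gathered from the cluster's default service account like this on a v1.21-era cluster, where service account token secrets are still auto-created:

# Hedged example: gathering values for the k8s_server, k8s_cert and
# k8s_token secrets from the default service account (an assumption;
# any service account with the RBAC below would work)
$ kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
$ SA_SECRET=$(kubectl get sa default -o jsonpath='{.secrets[0].name}')
$ kubectl get secret "$SA_SECRET" -o jsonpath='{.data.ca\.crt}'            # base64-encoded cert
$ kubectl get secret "$SA_SECRET" -o jsonpath='{.data.token}' | base64 -d  # bearer token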

We also need to grant the "default" service account in Kubernetes the privileges to launch applications in the cluster. The YAML file used is available here; a hedged sketch is shown below.
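This is a minimal sketch of what role.yaml might contain, assuming cluster-wide rights bound to the default service account; the resource lists and verbs are illustrative assumptions, and the namespaced Role and RoleBinding seen in the apply output below are omitted for brevity. The linked file is authoritative.

# role.yaml: minimal sketch (resources/verbs are assumptions)
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: drone
rules:
- apiGroups: ["", "apps"]
  resources: ["deployments", "services", "pods", "namespaces"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: drone
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: drone
subjects:
- kind: ServiceAccount
  name: default
  namespace: default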

$ kubectl apply -f role.yaml
role.rbac.authorization.k8s.io/drone created
rolebinding.rbac.authorization.k8s.io/drone created
clusterrole.rbac.authorization.k8s.io/drone created
clusterrolebinding.rbac.authorization.k8s.io/drone created

Below is the Drone pipeline configuration file used for this test. The steps involve building the binary, running the test cases, building the Docker image and publishing it to a private repo, and finally deploying the application on the Kubernetes cluster. It lives in the repo under the file name .drone.yml; the same is available here.

# Drone pipeline
kind: pipeline
type: kubernetes
name: test-go
steps:
- name: test
  image: golang:alpine
  commands:
  - "apk add build-base"
  - "go mod download"
  - "go build -o app ."
  - "go test -v"
- name: publish
  image: plugins/docker
  settings:
    registry: quay.io
    repo: quay.io/sirishagopigiri/golang-app
    tags: [ "${DRONE_COMMIT_SHA:0:7}", "latest" ]
    username:
      from_secret: docker_username
    password:
      from_secret: docker_password
- name: deliver
  image: sinlead/drone-kubectl
  settings:
    kubernetes_server:
      from_secret: k8s_server
    kubernetes_cert:
      from_secret: k8s_cert
    kubernetes_token:
      from_secret: k8s_token
  commands:
  - sed -i "s|golang-app:v1|golang-app:${DRONE_COMMIT_SHA:0:7}|g" deployment.yaml
  - kubectl apply -f deployment.yaml

Before making a commit that triggers the pipeline and deploys the application, let us first create the namespace and add the Istio injection label, so that the Envoy sidecar proxy is created automatically for the application deployed by the Drone pipeline.

# Create and label namespace
$ kubectl create ns test
$ kubectl label namespace test istio-injection=enabled
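As a quick optional check (not part of the original flow), confirm the label is in place before the first deployment:

# Verify the injection label on the namespace
$ kubectl get namespace test --show-labels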

Now let us make the first commit to the repo to trigger the Drone pipeline and deploy the application. The code used in the commit is available here.

# Clone from github
$ git clone https://github.com/SirishaGopigiri/drone-flagger.git
# Clone local project
$ git clone http://192.168.1.102/root/golang-app.git
Cloning into 'golang-app'...
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 3 (delta 0), reused 0 (delta 0), pack-reused 0
Receiving objects: 100% (3/3), done.
$ cd golang-app
# Copy files from github sample project
$ cp -r ../drone-flagger/go-app/* .
$ cp -r ../drone-flagger/go-app/.drone.yml .
# Commit and push
$ git add .
$ git commit -m "Initial commit"
$ git push origin master
Enumerating objects: 12, done.
Counting objects: 100% (12/12), done.
Delta compression using up to 8 threads
Compressing objects: 100% (10/10), done.
Writing objects: 100% (10/10), 4.35 KiB | 4.35 MiB/s, done.
Total 10 (delta 0), reused 0 (delta 0)
To http://192.168.1.102/root/golang-app.git
1ed68f8..352462c master -> master

Check the Drone dashboard to see that the pipeline has been triggered.

Pipeline triggered from the Initial commit

Wait for the pipeline to complete so that the application is created in the "test" namespace. Once it completes, check the resources in the "test" namespace.

The output below shows that the application is deployed.

# Application status
$ kubectl get pods -n test
NAME                         READY   STATUS    RESTARTS   AGE
golangapp-849448dcc6-dwmlg   2/2     Running   0          3m57s
$ kubectl get svc -n test
NAME        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
golangapp   ClusterIP   10.103.159.167   <none>        5000/TCP   4m1s

We will now access the application and observe the responses. Since the application runs with Istio's sidecar, traffic from outside the mesh has to go through Istio's gateway; from inside the cluster we can reach the service directly.

# Check if the service is accessible
$ kubectl -n test run -i -t nginx --rm=true --image=nginx -- bash
If you don't see a command prompt, try pressing enter.
root@nginx:/# curl -X GET http://golangapp:5000/
{"message":"hello world!!"}

An alternative is to create VirtualService and Gateway resources and access the application via Istio's ingress gateway. The required files are available here; a hedged sketch follows.
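Below is a minimal sketch of what those two resources could look like, inferred from the resource names (golangapp-gateway, golangapp-vs) that appear in the outputs later; the exact files in the linked repo are authoritative.

# gateway.yaml: route HTTP traffic on Istio's default ingress gateway
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: golangapp-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
# virtualservice.yaml: forward gateway traffic to the golangapp service
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: golangapp-vs
spec:
  hosts:
  - "*"
  gateways:
  - golangapp-gateway
  http:
  - route:
    - destination:
        host: golangapp
        port:
          number: 5000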

$ export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}')
# Create VS and Gateway
$ kubectl -n test apply -f virtualservice.yaml
$ kubectl -n test apply -f gateway.yaml
# Access service
$ curl -X GET "http://127.0.0.1:$INGRESS_PORT/"
{"message":"hello world!!"}

For the initial deployment of the application, we can either deploy it manually or use the pipeline as done here. Now that the application is deployed, we will associate Flagger's canary object with this sample application.

Associating Flagger Canary to the application

After the application is deployed, create a canary object so that rolling updates and the application lifecycle are managed by Flagger. With this approach, if the application has any gaps in its API, the rolling update fails and the user can go back and rework the failing commit. Wait for the canary to be fully initialized. The canary YAML file is available here; the threshold parameters and other configuration can be observed in canary.yaml, and a hedged sketch is shown below.
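This sketch is illustrative only, assuming values similar to Flagger's Istio tutorial; the exact intervals, weights and thresholds live in the linked canary.yaml. The two built-in metrics correspond to the success criteria discussed later: no 5xx responses and request duration under 500 ms.

# canary.yaml: illustrative sketch, not the exact file from the repo
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: golangapp-canary
  namespace: test
spec:
  targetRef:                      # the deployment created by the pipeline
    apiVersion: apps/v1
    kind: Deployment
    name: golangapp
  service:
    port: 5000
    gateways:
    - golangapp-gateway
    hosts:
    - "*"
  analysis:
    interval: 1m                  # how often the metrics are checked
    threshold: 5                  # failed checks before rollback
    maxWeight: 50
    stepWeight: 10
    metrics:
    - name: request-success-rate  # percentage of non-5xx responses
      thresholdRange:
        min: 99
      interval: 1m
    - name: request-duration      # latency in milliseconds
      thresholdRange:
        max: 500
      interval: 1m
    webhooks:
    - name: load-test
      url: http://flagger-loadtester.test/
      metadata:
        cmd: "hey -z 1m -q 10 -c 2 http://golangapp-canary.test:5000/"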

Additionally, we create a load tester that generates traffic during the canary analysis; the deployment file is available here.
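For reference, here is a condensed sketch of the load tester manifest; the image tag and flags are assumptions, and the linked tester.yaml has the authoritative version. The service port 80 and container port 8080 match the endpoints seen in the outputs below.

# tester.yaml: condensed sketch of the Flagger load tester
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flagger-loadtester
spec:
  replicas: 1
  selector:
    matchLabels:
      app: flagger-loadtester
  template:
    metadata:
      labels:
        app: flagger-loadtester
    spec:
      containers:
      - name: loadtester
        image: ghcr.io/fluxcd/flagger-loadtester:0.18.0   # tag is an assumption
        ports:
        - name: http
          containerPort: 8080
        command:
        - ./loadtester
        - -port=8080
---
apiVersion: v1
kind: Service
metadata:
  name: flagger-loadtester
spec:
  selector:
    app: flagger-loadtester
  ports:
  - name: http
    port: 80
    targetPort: 8080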

# Before creating the canary, delete the virtual service created above, as it will be managed by Flagger from now on
$ kubectl -n test delete vs golangapp-vs
# Deploy load generator to generate load
$ kubectl -n test apply -f tester.yaml
# Now apply canary to the application
$ kubectl -n test apply -f canary.yaml
# Check the canary status and wait for it to be completely initialized
$ kubectl -n test get canary
NAME               STATUS        WEIGHT   LASTTRANSITIONTIME
golangapp-canary   Initialized   0        2021-07-16T12:23:34Z
# Check other resources
$ kubectl -n test get pods -o wide
NAME                                  READY   STATUS    RESTARTS   AGE    IP               NODE          NOMINATED NODE   READINESS GATES
flagger-loadtester-5b766b7ffc-vdhws   2/2     Running   0          11m    192.192.43.142   harrypotter   <none>           <none>
golangapp-primary-6d8b478d57-g5vfs    2/2     Running   0          9m2s   192.192.43.143   harrypotter   <none>           <none>
$ kubectl -n test get svc
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
flagger-loadtester   ClusterIP   10.109.117.250   <none>        80/TCP     11m
golangapp            ClusterIP   10.103.159.167   <none>        5000/TCP   14m
golangapp-canary     ClusterIP   10.111.142.233   <none>        5000/TCP   9m10s
golangapp-primary    ClusterIP   10.100.6.1       <none>        5000/TCP   9m9s
$ kubectl -n test get ep
NAME                 ENDPOINTS             AGE
flagger-loadtester   192.192.43.142:8080   11m
golangapp            192.192.43.143:5000   14m
golangapp-canary     <none>                9m14s
golangapp-primary    192.192.43.143:5000   9m13s
$ kubectl -n test get vs
NAME        GATEWAYS                HOSTS   AGE
golangapp   ["golangapp-gateway"]   ["*"]   8m37s
$ kubectl -n test get gateway
NAME                AGE
golangapp-gateway   12m
# Check if the service is accessible
$ kubectl -n test run -i -t nginx --rm=true --image=nginx -- bash
If you don't see a command prompt, try pressing enter.
root@nginx:/# curl -X GET http://golangapp:5000
{"message":"hello world!!"}
$ curl -X GET "http://127.0.0.1:$INGRESS_PORT/"
{"message":"hello world!!"}

We can now see that the deployment of the application is managed by Flagger.

New commits

We will now change the application by adding a new API and push it to GitLab, to see whether the canary deployment for the new commit is managed by Flagger. The sample code is available here.

# Below are the changes made
diff --git a/app.go b/app.go
index 819a61a..f066c20 100644
--- a/app.go
+++ b/app.go
@@ -9,6 +9,11 @@ func RunServer() *gin.Engine {
 			"message": "hello world!!",
 		})
 	})
+	r.GET("/newapi", func(c *gin.Context) {
+		c.JSON(200, gin.H{
+			"message": "New testing api added to the application!!",
+		})
+	})
 	return r
 }

diff --git a/app_test.go b/app_test.go
index e065544..ea0a5a2 100644
--- a/app_test.go
+++ b/app_test.go
@@ -28,3 +28,25 @@ func TestServeHTTP(t *testing.T) {
 		t.Errorf("Expected the message '%s' but got '%s'\n", expected, actual)
 	}
 }
+
+func TestServeHTTPNewAPI(t *testing.T) {
+	server := httptest.NewServer(RunServer())
+	defer server.Close()
+
+	resp, err := http.Get(server.URL + "/newapi")
+	if err != nil {
+		t.Fatal(err)
+	}
+	if resp.StatusCode != 200 {
+		t.Fatalf("Received non-200 response: %d\n", resp.StatusCode)
+	}
+	expected := `{"message":"New testing api added to the application!!"}`
+	actual, err := ioutil.ReadAll(resp.Body)
+	if err != nil {
+		t.Fatal(err)
+	}
+	if expected != string(actual) {
+		t.Errorf("Expected the message '%s' but got '%s'\n", expected, actual)
+	}
+}
$ git add .
$ git commit -m "Adding new API"
[master bb51c8b] Adding new API
2 files changed, 27 insertions(+)
$ git push origin master
Enumerating objects: 7, done.
Counting objects: 100% (7/7), done.
Delta compression using up to 8 threads
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 837 bytes | 837.00 KiB/s, done.
Total 4 (delta 2), reused 0 (delta 0)
To 192.168.1.102:root/golang-app.git
7507ca1..8cbc10f master -> master

The above commit triggers the pipeline in Drone. Wait for it to complete successfully.

Pipeline execution for new commit

Once the execution reaches the last step, where we deploy the application, Flagger observes that a new deployment is being rolled out and starts running the checks defined in the canary.yaml configuration.

Here Flagger starts the canary deployment for the new image, runs the load test using the load tester, and, based on the defined success parameters, marks the canary deployment as a success or a failure. The parameters monitored here are that requests should not return a 500 HTTP response code and that the response time should stay below 500 ms.
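While the analysis runs, its progress and the threshold checks can be followed through the canary's events or the Flagger controller logs:

# Watch the canary events emitted during the analysis
$ kubectl -n test describe canary golangapp-canary
# Or follow the Flagger controller logs
$ kubectl -n istio-system logs deployment/flagger -f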

# Check canary analysis status
$ kubectl -n test get canary
NAME               STATUS        WEIGHT   LASTTRANSITIONTIME
golangapp-canary   Progressing   0        2021-07-16T12:40:32Z
$ kubectl -n test get pods -o wide
NAME                                  READY   STATUS    RESTARTS   AGE   IP               NODE          NOMINATED NODE   READINESS GATES
flagger-loadtester-5b766b7ffc-vdhws   2/2     Running   0          21m   192.192.43.142   harrypotter   <none>           <none>
golangapp-6789869ddc-vj9pp            2/2     Running   0          79s   192.192.43.146   harrypotter   <none>           <none>
golangapp-primary-6d8b478d57-g5vfs    2/2     Running   0          19m   192.192.43.143   harrypotter   <none>           <none>
$ kubectl -n test get svc
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
flagger-loadtester   ClusterIP   10.109.117.250   <none>        80/TCP     21m
golangapp            ClusterIP   10.103.159.167   <none>        5000/TCP   24m
golangapp-canary     ClusterIP   10.111.142.233   <none>        5000/TCP   19m
golangapp-primary    ClusterIP   10.100.6.1       <none>        5000/TCP   19m
$ kubectl -n test get ep
NAME                 ENDPOINTS             AGE
flagger-loadtester   192.192.43.142:8080   21m
golangapp            192.192.43.143:5000   24m
golangapp-canary     192.192.43.146:5000   19m
golangapp-primary    192.192.43.143:5000   19m

As we have only added a new API, the defined thresholds are met and the rolling update succeeds. Flagger then creates a new primary pod running the latest image. Check the service access.

# Check service access
$ kubectl -n test run -i -t nginx --rm=true --image=nginx -- bash
If you don't see a command prompt, try pressing enter.
root@nginx:/# curl -X GET http://golangapp:5000
{"message":"hello world!!"}
root@nginx:/# curl -X GET http://golangapp:5000/newapi
404 page not found

Since the rolling update is still in progress, the new API is not yet accessible. Keep checking the canary status until it reports Succeeded.

# Get canary status
$ kubectl -n test get canary
NAME               STATUS      WEIGHT   LASTTRANSITIONTIME
golangapp-canary   Succeeded   0        2021-07-16T12:48:31Z
$ kubectl -n test get pods -o wide
NAME                                  READY   STATUS    RESTARTS   AGE     IP               NODE          NOMINATED NODE   READINESS GATES
flagger-loadtester-5b766b7ffc-vdhws   2/2     Running   0          32m     192.192.43.142   harrypotter   <none>           <none>
golangapp-primary-6ff5dbf455-mllw9    2/2     Running   0          6m15s   192.192.43.148   harrypotter   <none>           <none>
$ kubectl -n test get svc
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
flagger-loadtester   ClusterIP   10.109.117.250   <none>        80/TCP     32m
golangapp            ClusterIP   10.103.159.167   <none>        5000/TCP   35m
golangapp-canary     ClusterIP   10.111.142.233   <none>        5000/TCP   30m
golangapp-primary    ClusterIP   10.100.6.1       <none>        5000/TCP   30m
$ kubectl -n test get ep
NAME                 ENDPOINTS             AGE
flagger-loadtester   192.192.43.142:8080   32m
golangapp            192.192.43.148:5000   35m
golangapp-canary     <none>                30m
golangapp-primary    192.192.43.148:5000   30m
# Checking service access
$ kubectl -n test run -i -t nginx --rm=true --image=nginx -- bash
If you don't see a command prompt, try pressing enter.
root@nginx:/# curl -X GET http://golangapp:5000/
{"message":"hello world!!"}
root@nginx:/# curl -X GET http://golangapp:5000/newapi
{"message":"New testing api added to the application!!"}
$ curl -X GET "http://127.0.0.1:$INGRESS_PORT/"
{"message":"hello world!!"}
$ curl -X GET "http://127.0.0.1:$INGRESS_PORT/newapi"
{"message":"New testing api added to the application!!"}

This validates that the rolling update was successful.

Failed commit

In this new commit, we will make the API respond with a 500 HTTP error code, which will fail the canary analysis, and the rolling update will be marked as failed. The application will therefore be retained at the previous version.
Please note: the application API returns a 500 HTTP code purely for testing purposes, and the test case is updated to drop the 200-status check. In a real application there would be multiple APIs responding based on the request body, and there can be situations where the application unintentionally returns a 500 HTTP response code; we are simply mimicking such a situation here. The Go code is available here.

# Below are the changes to the code
$ git diff
diff --git a/app.go b/app.go
index f066c20..96eca9f 100644
--- a/app.go
+++ b/app.go
@@ -1,10 +1,14 @@
 func RunServer() *gin.Engine {
 	r := gin.Default()
 	r.GET("/", func(c *gin.Context) {
-		c.JSON(200, gin.H{
+		c.JSON(500, gin.H{
 			"message": "hello world!!",
 		})
diff --git a/app_test.go b/app_test.go
index ea0a5a2..46a471d 100644
--- a/app_test.go
+++ b/app_test.go
@@ -16,9 +16,6 @@ func TestServeHTTP(t *testing.T) {
 	if err != nil {
 		t.Fatal(err)
 	}
-	if resp.StatusCode != 200 {
-		t.Fatalf("Received non-200 response: %d\n", resp.StatusCode)
-	}
 	expected := `{"message":"hello world!!"}`
 	actual, err := ioutil.ReadAll(resp.Body)
$ git add .
$ git commit -m "Fail canary"
[master a1aecc6] Fail canary
1 file changed, 5 insertions(+), 1 deletion(-)
$ git push origin master
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 8 threads
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 360 bytes | 360.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0)
To 192.168.1.102:root/golang-app.git
8cbc10f..a1aecc6 master -> master

Wait for the pipeline to execute successfully in Drone.

Once the deployment is updated by the Drone pipeline, Flagger observes the change and starts the canary analysis. While the analysis is running, we will check service access using both the golangapp and golangapp-canary services. When accessing the service via the golangapp-canary service name, we can see that the response HTTP code is 500, because this service is mapped to the new version of the application.

Because of this 500 HTTP response, the canary analysis will fail and the rolling update will not happen.

# Check canary status
$ kubectl -n test get canary
NAME               STATUS        WEIGHT   LASTTRANSITIONTIME
golangapp-canary   Progressing   0        2021-07-16T13:06:34Z
$ kubectl -n test get pods -o wide
NAME                                  READY   STATUS    RESTARTS   AGE    IP               NODE          NOMINATED NODE   READINESS GATES
flagger-loadtester-5b766b7ffc-vdhws   2/2     Running   0          47m    192.192.43.142   harrypotter   <none>           <none>
golangapp-556bc946d-lmd89             2/2     Running   0          113s   192.192.43.152   harrypotter   <none>           <none>
golangapp-primary-6ff5dbf455-mllw9    2/2     Running   0          21m    192.192.43.148   harrypotter   <none>           <none>
$ kubectl -n test get svc
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
flagger-loadtester   ClusterIP   10.109.117.250   <none>        80/TCP     47m
golangapp            ClusterIP   10.103.159.167   <none>        5000/TCP   50m
golangapp-canary     ClusterIP   10.111.142.233   <none>        5000/TCP   45m
golangapp-primary    ClusterIP   10.100.6.1       <none>        5000/TCP   45m
$ kubectl -n test get ep
NAME                 ENDPOINTS             AGE
flagger-loadtester   192.192.43.142:8080   47m
golangapp            192.192.43.148:5000   50m
golangapp-canary     192.192.43.152:5000   45m
golangapp-primary    192.192.43.148:5000   45m
# Check service status
$ kubectl -n test run -i -t nginx --rm=true --image=nginx -- bash
If you don't see a command prompt, try pressing enter.
root@nginx:/# curl -v -X GET http://golangapp:5000/
* Trying 10.103.159.167...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x5587502f3fb0)
* Connected to golangapp (10.103.159.167) port 5000 (#0)
> GET / HTTP/1.1
> Host: golangapp:5000
> User-Agent: curl/7.64.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json; charset=utf-8
< date: Fri, 16 Jul 2021 14:47:01 GMT
< content-length: 27
< x-envoy-upstream-service-time: 10051
< server: envoy
<
* Connection #0 to host golangapp left intact
{"message":"hello world!!"}
# Check the same API via the canary service, which maps to the new version
root@nginx:/# curl -v -X GET http://golangapp-canary:5000/
* Trying 10.111.142.233...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x5610595d8fb0)
* Connected to golangapp-canary (10.111.142.233) port 5000 (#0)
> GET / HTTP/1.1
> Host: golangapp-canary:5000
> User-Agent: curl/7.64.0
> Accept: */*
>
< HTTP/1.1 500 Internal Server Error
< content-type: text/html; charset=utf-8
< content-length: 35
< server: envoy
< date: Fri, 16 Jul 2021 16:58:50 GMT
< x-envoy-upstream-service-time: 26
<
* Connection #0 to host golangapp-canary left intact
{"message":"hello world!!"}

Once the canary analysis completes, we can verify that it is marked as failed and that service access remains as before, i.e. the response code is still HTTP 200. This is because the previous version of the Go application is retained in the cluster when the rolling update fails.

# Check canary status
$ kubectl -n test get canary
NAME               STATUS   WEIGHT   LASTTRANSITIONTIME
golangapp-canary   Failed   0        2021-07-16T13:18:31Z
$ kubectl -n test get pods -o wide
NAME                                  READY   STATUS    RESTARTS   AGE   IP               NODE          NOMINATED NODE   READINESS GATES
flagger-loadtester-5b766b7ffc-vdhws   2/2     Running   0          57m   192.192.43.142   harrypotter   <none>           <none>
golangapp-primary-6ff5dbf455-mllw9    2/2     Running   0          31m   192.192.43.148   harrypotter   <none>           <none>
$ kubectl -n test run -i -t nginx --rm=true --image=nginx -- bash
If you don't see a command prompt, try pressing enter.
root@nginx:/# curl -v -X GET http://golangapp:5000
* Trying 10.103.159.167...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x5587502f3fb0)
* Connected to golangapp (10.103.159.167) port 5000 (#0)
> GET / HTTP/1.1
> Host: golangapp:5000
> User-Agent: curl/7.64.0
> Accept: */*
>
< HTTP/1.1 200 OK
< content-type: application/json; charset=utf-8
< date: Fri, 16 Jul 2021 14:47:01 GMT
< content-length: 27
< x-envoy-upstream-service-time: 10051
< server: envoy
<
* Connection #0 to host golangapp left intact
{"message":"hello world!!"}

This in turn confirms that the rolling update failed.

Conclusion

Integrating Flagger and Drone helps protect users against accidentally deploying an application version that carries potential risks. As we saw, the integration is quite simple and does not require adding any additional libraries or code to the application. It can also act as a first layer of integration testing. And instead of relying on the generic load tester traffic, a user can run a specific set of test cases while the canary analysis is happening, as sketched below.
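For instance, Flagger's analysis supports pre-rollout webhooks that can gate the rollout on application-specific tests. Here is a hedged sketch using the load tester's bash runner; the URL and command are illustrative assumptions:

# Sketch: run custom acceptance tests before traffic is shifted to the canary
analysis:
  webhooks:
  - name: acceptance-test
    type: pre-rollout
    url: http://flagger-loadtester.test/
    timeout: 30s
    metadata:
      type: bash
      cmd: "curl -sf http://golangapp-canary.test:5000/ | grep 'hello world'"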

References

  1. https://docs.flagger.app/
  2. https://istio.io/latest/
  3. https://docs.drone.io/
  4. https://www.magalix.com/blog/building-a-cd-pipeline-with-drone-ci-and-kubernetes
  5. https://istio.io/latest/docs/setup/getting-started/#download
  6. https://istio.io/latest/docs/ops/integrations/prometheus/
  7. https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/
  8. https://flask.palletsprojects.com/en/2.0.x/
  9. https://www.howtoforge.com/tutorial/how-to-install-and-configure-gitlab-on-ubuntu-16-04/
