Deploying Envoy Filter on Istio
Istio is a service mesh that gives users a rich set of capabilities for deploying applications on Kubernetes according to various requirements. It provides observability, telemetry, traffic management, and security features that can be integrated with applications.
Since Istio uses Envoy to manage data-plane traffic, it also exposes Envoy's custom filter support. Using this feature, users can develop their own filters, or reuse upstream filters from the community, and integrate them with their applications. The filters can be HTTP filters, TCP filters, or any custom filter that works with the packet data.
Envoy supports WASM-based filters, which take a wasm binary as input. The wasm binary is compiled from custom filter code written in AssemblyScript, JavaScript, or Go.
The proxy-wasm Go SDK from tetratelabs can be used as a library to write custom filters. Here we will be exploring one such filter written in Go using that SDK.
In this blog, we will see how WASM-based Envoy filters can be deployed alongside an application using Istio. We will be using the HTTP headers example provided in the tetratelabs proxy-wasm-go-sdk repository.
Prerequisites
To deploy and test an Envoy filter using Istio, we will use a Kubernetes cluster with the below configuration.
- Ubuntu 20.04 OS
- Kubernetes v1.21.5
- Docker 20.10.8
- Istio 1.11.0
- Golang 1.17.1
- Tinygo 0.20.0
It is assumed that the Kubernetes cluster is already set up. For testing purposes, I have installed a single-node Kubernetes cluster with the Calico CNI.
Note: For installing Go and TinyGo, please see the references section.
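For convenience, one way to install both on Ubuntu is shown below, using the official release artifacts for the versions listed above (adjust versions and architecture as needed):
# Install Go
$ wget https://go.dev/dl/go1.17.1.linux-amd64.tar.gz
$ sudo tar -C /usr/local -xzf go1.17.1.linux-amd64.tar.gz
$ export PATH=$PATH:/usr/local/go/bin

# Install TinyGo
$ wget https://github.com/tinygo-org/tinygo/releases/download/v0.20.0/tinygo_0.20.0_amd64.deb
$ sudo dpkg -i tinygo_0.20.0_amd64.deb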
Install Istio
Once we have the Kubernetes cluster set up, the next step is to install Istio. We will be using the istioctl CLI to install and configure it on the cluster.
# Download Istio
$ curl -L https://istio.io/downloadIstio | ISTIO_VERSION=1.11.0 TARGET_ARCH=x86_64 sh -

# Install and configure Istio
$ cd istio-1.11.0
$ bin/istioctl install --set profile=demo -y
✔ Istio core installed
✔ Istiod installed
✔ Ingress gateways installed
✔ Egress gateways installed
✔ Installation complete
Thank you for installing Istio 1.11. Please take a few minutes to tell us about your install/upgrade experience! https://forms.gle/kWULBRjUv7hHci7T6
Once the installation completes, check that the pods are in the Running state.
$ kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-system calico-kube-controllers-bdd5f97c5-rf8qw 1/1 Running 0 11m
calico-system calico-node-2mjf7 1/1 Running 1 11m
calico-system calico-typha-65c9d8856d-8z97j 1/1 Running 0 11m
istio-system istio-egressgateway-6f9d4548b-cvgdb 1/1 Running 0 116s
istio-system istio-ingressgateway-5dc645f586-98vm2 1/1 Running 0 116s
istio-system istiod-79b65d448f-dx898 1/1 Running 0 2m19s
kube-system coredns-558bd4d5db-qhw4v 1/1 Running 0 46m
kube-system coredns-558bd4d5db-zmp9t 1/1 Running 0 46m
kube-system etcd-harrypotter 1/1 Running 0 47m
kube-system kube-apiserver-harrypotter 1/1 Running 0 48m
kube-system kube-controller-manager-harrypotter 1/1 Running 1 48m
kube-system kube-proxy-87nz9 1/1 Running 0 46m
kube-system kube-scheduler-harrypotter 1/1 Running 0 47m
tigera-operator tigera-operator-76bbbcbc85-ckxt7 1/1 Running 0 12m
Envoy Filter
The sample Envoy filter that we will be configuring is an HTTP filter: it logs the HTTP request and response headers to the Envoy log. The filter is available in the tetratelabs repo and is being reused here.
We will compile the code to build a wasm binary. This binary is mounted as a volume in the Envoy sidecar of the sample application. Later, we will define an Istio EnvoyFilter resource that uses this binary for all HTTP processing.
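For reference, the core logic of the http_headers example looks roughly like the condensed sketch below (the exact API surface may differ slightly between SDK versions; see the repo for the authoritative source):

package main

import (
	"github.com/tetratelabs/proxy-wasm-go-sdk/proxywasm"
	"github.com/tetratelabs/proxy-wasm-go-sdk/proxywasm/types"
)

func main() {
	proxywasm.SetVMContext(&vmContext{})
}

type vmContext struct{ types.DefaultVMContext }

func (*vmContext) NewPluginContext(contextID uint32) types.PluginContext {
	return &pluginContext{}
}

type pluginContext struct{ types.DefaultPluginContext }

func (*pluginContext) NewHttpContext(contextID uint32) types.HttpContext {
	return &httpHeaders{contextID: contextID}
}

type httpHeaders struct {
	types.DefaultHttpContext
	contextID uint32
}

// OnHttpRequestHeaders injects a demo "test: best" header (which is why that
// header shows up in the logs later) and logs every request header at info level.
func (ctx *httpHeaders) OnHttpRequestHeaders(numHeaders int, endOfStream bool) types.Action {
	if err := proxywasm.AddHttpRequestHeader("test", "best"); err != nil {
		proxywasm.LogCriticalf("failed to add request header: %v", err)
	}
	hs, err := proxywasm.GetHttpRequestHeaders()
	if err != nil {
		proxywasm.LogCriticalf("failed to get request headers: %v", err)
	}
	for _, h := range hs {
		proxywasm.LogInfof("request header --> %s: %s", h[0], h[1])
	}
	return types.ActionContinue
}

// OnHttpResponseHeaders logs every response header at info level.
func (ctx *httpHeaders) OnHttpResponseHeaders(numHeaders int, endOfStream bool) types.Action {
	hs, err := proxywasm.GetHttpResponseHeaders()
	if err != nil {
		proxywasm.LogCriticalf("failed to get response headers: %v", err)
	}
	for _, h := range hs {
		proxywasm.LogInfof("response header <-- %s: %s", h[0], h[1])
	}
	return types.ActionContinue
}

// OnHttpStreamDone logs when the HTTP stream for this context finishes.
func (ctx *httpHeaders) OnHttpStreamDone() {
	proxywasm.LogInfof("%d finished", ctx.contextID)
}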
# Building WASM binary
$ git clone https://github.com/tetratelabs/proxy-wasm-go-sdk.git
$ cd proxy-wasm-go-sdk
$ cd examples/http_headers
$ tinygo build -o ./main.go.wasm -scheduler=none -target=wasi ./main.go
$ ls
README.md envoy.yaml main.go main.go.wasm main_test.go
Once we have the binary generated, we will create a ConfigMap from it, so that it can later be mounted into the Envoy sidecar.
# Creating the config map in the default namespace
$ kubectl create configmap http-filter --from-file=http-filter.wasm=main.go.wasm
Deploying Sample Application
To test the Envoy filter deployment on Istio, we will be using the httpbin application from Istio’s examples.
Before deploying the application, we will add some Istio-specific annotations to the httpbin example. These annotations are explained below, followed by a snippet showing how they appear in the pod template.
- sidecar.istio.io/userVolume: defines which resource has to be mounted as a volume; here we mount the ConfigMap containing the WASM binary
- sidecar.istio.io/userVolumeMount: specifies where in the Envoy sidecar the volume should be mounted
- sidecar.istio.io/logLevel: the default log level for Envoy is warning; since the filter logs the HTTP request and response headers at the info level, we set this to info
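For illustration, an excerpt of the pod template in the linked httpbin.yaml would look like this (the userVolume and userVolumeMount values match the pod description shown later; the logLevel value is assumed to be info):

template:
  metadata:
    annotations:
      sidecar.istio.io/userVolume: '[{"name":"http-filter","configMap":{"name":"http-filter"}}]'
      sidecar.istio.io/userVolumeMount: '[{"mountPath":"/var/local/wasm","name":"http-filter"}]'
      sidecar.istio.io/logLevel: info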
Deploy the application with the above annotations. A copy of the httpbin deployment YAML file is available here.
$ kubectl label namespace default istio-injection=enabled
$ kubectl apply -f https://raw.githubusercontent.com/SirishaGopigiri/Istio-WASM-plugin/main/httpbin.yaml
Check that the pod has reached the Running state and has the wasm binary mounted as a volume from the ConfigMap.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
httpbin-5dcc45899f-rcwl2 2/2 Running 0 4m58s

$ kubectl describe pod httpbin-5dcc45899f-rcwl2 | grep -i http-filter
sidecar.istio.io/userVolume: [{"name":"http-filter","configMap":{"name":"http-filter"}}]
sidecar.istio.io/userVolumeMount: [{"mountPath":"/var/local/wasm","name":"http-filter"}]
/var/local/wasm from http-filter (rw)
http-filter:
Name: http-filter
Patch the httpbin service to type NodePort and try accessing it.
$ kubectl patch svc httpbin --type='json' -p '[{"op":"replace","path":"/spec/type","value":"NodePort"}]'
service/httpbin patched

$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
httpbin NodePort 10.96.66.221 <none> 8000:31242/TCP 7m23s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 143m

$ curl -v -X GET http://localhost:31242/status/200
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 127.0.0.1:31242...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 31242 (#0)
> GET /status/200 HTTP/1.1
> Host: localhost:31242
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< server: istio-envoy
< date: Thu, 30 Sep 2021 10:17:05 GMT
< content-type: text/html; charset=utf-8
< access-control-allow-origin: *
< access-control-allow-credentials: true
< content-length: 0
< x-envoy-upstream-service-time: 3
< x-envoy-decorator-operation: httpbin.default.svc.cluster.local:8000/*
<
* Connection #0 to host localhost left intact
Alternatively, we can test the application from the Istio sidecar itself.
$ kubectl exec -ti deploy/httpbin -c istio-proxy -- curl -v http://httpbin.default:8000/status/200
* Trying 10.96.66.221:8000...
* TCP_NODELAY set
* Connected to httpbin.default (10.96.66.221) port 8000 (#0)
> GET /status/200 HTTP/1.1
> Host: httpbin.default:8000
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< server: istio-envoy
< date: Thu, 30 Sep 2021 10:18:25 GMT
< content-type: text/html; charset=utf-8
< access-control-allow-origin: *
< access-control-allow-credentials: true
< content-length: 0
< x-envoy-upstream-service-time: 1
< x-envoy-decorator-operation: httpbin.default.svc.cluster.local:8000/*
<
* Connection #0 to host httpbin.default left intact
This validates that the service is running properly.
Creating Envoy Filter
Next, we will create an EnvoyFilter resource that uses the wasm binary. Once applied, the filter logs the HTTP request and response headers in the Envoy proxy sidecar of the httpbin application.
Before deploying the filter, check the Envoy sidecar logs.
$ kubectl logs -f --tail=10 httpbin-5dcc45899f-rcwl2 -c istio-proxy
2021-09-30T10:12:07.437424Z info Envoy proxy is ready
2021-09-30T10:12:31.069867Z info envoy upstream cds: add 21 cluster(s), remove 5 cluster(s)
2021-09-30T10:12:31.090024Z info envoy upstream cds: added/updated 1 cluster(s), skipped 20 unmodified cluster(s)
2021-09-30T10:13:05.478647Z info envoy main shutting down parent after drain
2021-09-30T10:16:25.821056Z info envoy upstream cds: add 21 cluster(s), remove 5 cluster(s)
2021-09-30T10:16:25.831548Z info envoy upstream cds: added/updated 0 cluster(s), skipped 21 unmodified cluster(s)
[2021-09-30T10:17:05.738Z] "GET /status/200 HTTP/1.1" 200 - via_upstream - "-" 0 0 4 3 "-" "curl/7.68.0" "441174c0-248c-9a83-8528-17f9fa980ace" "localhost:31242" "192.192.43.136:80" inbound|80|| 127.0.0.6:35235 192.192.43.136:80 192.168.1.102:14129 - default
[2021-09-30T10:18:25.128Z] "GET /status/200 HTTP/1.1" 200 - via_upstream - "-" 0 0 1 1 "-" "curl/7.68.0" "8959375c-1629-948c-bd86-4694de8d3770" "httpbin.default:8000" "192.192.43.136:80" inbound|80|| 127.0.0.6:48021 192.192.43.136:80 192.168.1.102:28045 - default
As we can see from the Envoy pod logs, currently only the access-log lines with the URL details are being logged.
Once we deploy the filter, the HTTP request and response headers will be logged in addition to the URL.
The EnvoyFilter manifest is available here; a condensed sketch of its shape follows.
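Roughly, the manifest injects the wasm module into the sidecar's inbound HTTP filter chain and points it at the mounted binary. The sketch below uses Istio 1.11 / Envoy v3 field names and is not a verbatim copy of the linked filter.yaml:

apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: golang-filter
  namespace: default
spec:
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
    patch:
      operation: INSERT_BEFORE
      value:
        name: golang-filter
        typed_config:
          "@type": type.googleapis.com/udpa.type.v1.TypedStruct
          type_url: type.googleapis.com/envoy.extensions.filters.http.wasm.v3.Wasm
          value:
            config:
              vm_config:
                runtime: envoy.wasm.runtime.v8
                code:
                  local:
                    filename: /var/local/wasm/http-filter.wasm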
$ kubectl apply -f https://raw.githubusercontent.com/SirishaGopigiri/Istio-WASM-plugin/main/filter.yaml
envoyfilter.networking.istio.io/golang-filter created

$ kubectl get envoyfilter
NAME AGE
golang-filter 29s
We can check the Envoy sidecar logs for errors; if there are none and the listeners have been updated, the filter has loaded successfully.
$ kubectl logs -f --tail=10 httpbin-5dcc45899f-rcwl2 -c istio-proxy
2021-09-30T10:13:05.478647Z info envoy main shutting down parent after drain
2021-09-30T10:16:25.821056Z info envoy upstream cds: add 21 cluster(s), remove 5 cluster(s)
2021-09-30T10:16:25.831548Z info envoy upstream cds: added/updated 0 cluster(s), skipped 21 unmodified cluster(s)
[2021-09-30T10:17:05.738Z] "GET /status/200 HTTP/1.1" 200 - via_upstream - "-" 0 0 4 3 "-" "curl/7.68.0" "441174c0-248c-9a83-8528-17f9fa980ace" "localhost:31242" "192.192.43.136:80" inbound|80|| 127.0.0.6:35235 192.192.43.136:80 192.168.1.102:14129 - default
[2021-09-30T10:18:25.128Z] "GET /status/200 HTTP/1.1" 200 - via_upstream - "-" 0 0 1 1 "-" "curl/7.68.0" "8959375c-1629-948c-bd86-4694de8d3770" "httpbin.default:8000" "192.192.43.136:80" inbound|80|| 127.0.0.6:48021 192.192.43.136:80 192.168.1.102:28045 - default
2021-09-30T10:25:43.293875Z info envoy upstream cds: add 21 cluster(s), remove 5 cluster(s)
2021-09-30T10:25:43.298821Z info envoy upstream cds: added/updated 0 cluster(s), skipped 21 unmodified cluster(s)
2021-09-30T10:25:44.073777Z info envoy upstream lds: add/update listener 'virtualInbound'
Testing Envoy Filter
Once the filter is deployed successfully, we send the same curl request to see whether the HTTP headers are logged in the Envoy sidecar.
$ kubectl exec -ti deploy/httpbin -c istio-proxy -- curl -v http://httpbin.default:8000/status/200
* Trying 10.96.66.221:8000...
* TCP_NODELAY set
* Connected to httpbin.default (10.96.66.221) port 8000 (#0)
> GET /status/200 HTTP/1.1
> Host: httpbin.default:8000
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< server: istio-envoy
< date: Thu, 30 Sep 2021 10:28:53 GMT
< content-type: text/html; charset=utf-8
< access-control-allow-origin: *
< access-control-allow-credentials: true
< content-length: 0
< x-envoy-upstream-service-time: 0
< x-envoy-decorator-operation: httpbin.default.svc.cluster.local:8000/*
<
* Connection #0 to host httpbin.default left intact
Checking logs
$ kubectl logs -f --tail=25 httpbin-5dcc45899f-rcwl2 -c istio-proxy
[2021-09-30T10:17:05.738Z] "GET /status/200 HTTP/1.1" 200 - via_upstream - "-" 0 0 4 3 "-" "curl/7.68.0" "441174c0-248c-9a83-8528-17f9fa980ace" "localhost:31242" "192.192.43.136:80" inbound|80|| 127.0.0.6:35235 192.192.43.136:80 192.168.1.102:14129 - default
[2021-09-30T10:18:25.128Z] "GET /status/200 HTTP/1.1" 200 - via_upstream - "-" 0 0 1 1 "-" "curl/7.68.0" "8959375c-1629-948c-bd86-4694de8d3770" "httpbin.default:8000" "192.192.43.136:80" inbound|80|| 127.0.0.6:48021 192.192.43.136:80 192.168.1.102:28045 - default
2021-09-30T10:25:43.293875Z info envoy upstream cds: add 21 cluster(s), remove 5 cluster(s)
2021-09-30T10:25:43.298821Z info envoy upstream cds: added/updated 0 cluster(s), skipped 21 unmodified cluster(s)
2021-09-30T10:25:44.073777Z info envoy upstream lds: add/update listener 'virtualInbound'
2021-09-30T10:28:53.764904Z info envoy wasm wasm log: request header --> :authority: httpbin.default:8000
2021-09-30T10:28:53.764921Z info envoy wasm wasm log: request header --> :path: /status/200
2021-09-30T10:28:53.764923Z info envoy wasm wasm log: request header --> :method: GET
2021-09-30T10:28:53.764925Z info envoy wasm wasm log: request header --> :scheme: http
2021-09-30T10:28:53.764927Z info envoy wasm wasm log: request header --> user-agent: curl/7.68.0
2021-09-30T10:28:53.764929Z info envoy wasm wasm log: request header --> accept: */*
2021-09-30T10:28:53.764931Z info envoy wasm wasm log: request header --> x-forwarded-proto: http
2021-09-30T10:28:53.764933Z info envoy wasm wasm log: request header --> x-request-id: 5e46439c-65cf-962b-9034-0dde53ea2219
2021-09-30T10:28:53.764938Z info envoy wasm wasm log: request header --> test: best
2021-09-30T10:28:53.766003Z info envoy wasm wasm log: response header <-- :status: 200
2021-09-30T10:28:53.766014Z info envoy wasm wasm log: response header <-- server: gunicorn/19.9.0
2021-09-30T10:28:53.766016Z info envoy wasm wasm log: response header <-- date: Thu, 30 Sep 2021 10:28:53 GMT
2021-09-30T10:28:53.766018Z info envoy wasm wasm log: response header <-- connection: keep-alive
2021-09-30T10:28:53.766020Z info envoy wasm wasm log: response header <-- content-type: text/html; charset=utf-8
2021-09-30T10:28:53.766021Z info envoy wasm wasm log: response header <-- access-control-allow-origin: *
2021-09-30T10:28:53.766023Z info envoy wasm wasm log: response header <-- access-control-allow-credentials: true
2021-09-30T10:28:53.766025Z info envoy wasm wasm log: response header <-- content-length: 0
2021-09-30T10:28:53.766027Z info envoy wasm wasm log: response header <-- x-envoy-upstream-service-time: 0
2021-09-30T10:28:53.766234Z info envoy wasm wasm log: 2 finished
[2021-09-30T10:28:53.764Z] "GET /status/200 HTTP/1.1" 200 - via_upstream - "-" 0 0 1 0 "-" "curl/7.68.0" "5e46439c-65cf-962b-9034-0dde53ea2219" "httpbin.default:8000" "192.192.43.136:80" inbound|80|| 127.0.0.6:53827 192.192.43.136:80 192.168.1.102:7970 - default
We can observe that the HTTP request and response headers are now logged in the Envoy pod. Note the extra test: best request header, which is injected by the example filter itself.
Conclusion
The EnvoyFilter CRD provides a powerful mechanism for deploying custom Envoy filters alongside an application, extending Envoy's default filter behaviour.
Instead of shipping static WASM binaries in a ConfigMap, we can also host the binary remotely and have the EnvoyFilter pull it automatically, as sketched below.
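For example, the vm_config section of the EnvoyFilter above could reference a remote binary instead of a local file. This is a hypothetical sketch using Envoy's v3 remote data source fields; the URI, cluster name, and checksum are placeholders:

vm_config:
  runtime: envoy.wasm.runtime.v8
  code:
    remote:
      http_uri:
        uri: https://example.com/filters/http-filter.wasm  # placeholder URL
        timeout: 30s
        cluster: wasm-binaries  # an Envoy cluster that can reach the host
      sha256: "<sha256 of the binary>"  # lets Envoy verify the download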