It makes sense for DevOps engineers and architects to perform canary deployments in their CI/CD workflows. They cannot skip testing a release for the sake of adhering to continuous delivery practices, can they?
In a canary deployment, the new version, called the canary, is first tested with a small share of live traffic. Ops teams and SREs then observe and analyze the canary's performance and customer experience before gradually rolling it out to the wider audience if no issues are found.
The crucial part of a canary deployment is splitting the live traffic and routing a small portion of it to the canary. Architects and DevOps teams need the right tool to carry out such traffic splitting between services. API gateways can do it at the edge, but splitting traffic between internal services or service subsets is difficult with an API gateway alone.
This is where the open-source Istio service mesh comes in. Istio provides 5 traffic management API resources to handle traffic from the edge and also between service subsets:

- Virtual services
- Destination rules
- Gateways
- Service entries
- Sidecars
Of these, virtual services and destination rules form the core of Istio's traffic routing features, and canary deployment is just one of the use cases they enable. Let us explore what virtual services and destination rules are and how they work.
But before we begin, let us understand how Istio routes and load balances traffic by default.
A quick introduction to load balancing with Istio's Envoy proxy
Istio uses Envoy proxy as its data plane. Envoy runs as a sidecar container in each application pod and intercepts all traffic going in and out of the pod.
By default, Istio's Envoy proxy uses the least requests model to load balance traffic. That is, two random service instances (pods) are picked from a service's load balancing pool (its replicas), and the request is routed to the pod with fewer active requests. This prevents individual pods from being overloaded with requests and ensures effective utilization of resources.
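If you want to make this default explicit, or later swap it for another policy, the load balancer can be declared in a `DestinationRule`. A minimal sketch, assuming a hypothetical `reviews` service in the `default` namespace:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews-lb
spec:
  host: reviews.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      # LEAST_REQUEST is Istio's default; change to ROUND_ROBIN or RANDOM as needed
      simple: LEAST_REQUEST
```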
The default load-balancing model works fine in most cases. However, there are circumstances where you want to configure specific rules. For example, consider the following scenarios:
- You want to change the default load-balancing policy to a weighted or round-robin model.
- You want to limit the number of simultaneous connections or requests to upstream services.
- You want to set an outlier detection policy to eject unhealthy pods for a certain amount of time to keep the infrastructure resilient.
- You would like to A/B test two versions of a service by splitting the traffic and routing a certain percentage of it to the new version.
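The weighted-split scenario above can be sketched as a `DestinationRule` that defines two subsets keyed on a pod label, plus a `VirtualService` that splits traffic between them. The `reviews` service name, the `version: v1`/`v2` labels, and the 90/10 split are illustrative, not from the source:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews-destination
spec:
  host: reviews
  subsets:
  - name: stable        # pods labeled version: v1
    labels:
      version: v1
  - name: canary        # pods labeled version: v2
    labels:
      version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: stable
      weight: 90        # 90% of traffic stays on the stable version
    - destination:
        host: reviews
        subset: canary
      weight: 10        # 10% goes to the canary
```

Adjusting the weights over time (10 → 25 → 50 → 100) is what turns this split into a gradual canary rollout.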
These are some instances where DevOps teams and cloud architects can use `VirtualServices` and `DestinationRules`.
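The connection-limit and outlier-detection scenarios can also be expressed in a `DestinationRule` `trafficPolicy`. A sketch, again assuming a hypothetical `reviews` service; the specific limits and timings are placeholder values:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: reviews-resilience
spec:
  host: reviews
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100           # cap simultaneous TCP connections upstream
      http:
        http1MaxPendingRequests: 50   # cap queued requests waiting for a connection
    outlierDetection:
      consecutive5xxErrors: 5         # eject a pod after 5 consecutive 5xx errors
      interval: 30s                   # how often hosts are scanned
      baseEjectionTime: 60s           # how long an unhealthy pod stays ejected
      maxEjectionPercent: 50          # never eject more than half the pool
```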