K8s - Pod Update Strategies in Kubernetes: From Default to Canary and Beyond
Updating Kubernetes pods can feel risky, especially in live environments. But with the right strategy, it can be a breeze. Today we'll cover the default Kubernetes update strategy and dive into a Canary Deployment setup, plus a quick look at Blue-Green Deployment as a bonus. Let's get into it!
Kubernetes' Default Update Strategy: The Rolling Update
Kubernetes uses Rolling Update by default, replacing old pods with new ones in small increments. Think of it as changing tires on a car that's still moving: at any given time, there's a mix of old and new pods.
- What it does: Gradually replaces old pods with new pods.
- Benefits: The service stays up with minimal downtime.
- Limitations: You don't get fine-grained control over which version of the app is served and when.
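You can still tune the pace of a rolling update through two Deployment fields, maxSurge and maxUnavailable. A minimal sketch (the field names are standard Deployment API fields; the values here are just example choices):

```yaml
# Sketch: tuning the default RollingUpdate strategy on a Deployment.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 pod above the desired replica count during the update
      maxUnavailable: 0    # never take an old pod down before its replacement is ready
```

With maxUnavailable set to 0, capacity never dips below the desired replica count during the rollout, at the cost of briefly running one extra pod.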
But what if you want to test a new version on just a small slice of traffic first? Enter Canary Deployment.
Quick Nod to Blue-Green Deployment
Blue-Green Deployment is like bringing a whole new version online alongside the current one (like a backup band for the star). Then, when you're ready, you switch all traffic to the new version (Green). If anything goes wrong, you flip back to Blue. It's a great rollback strategy, but it doubles resource requirements since you need both versions running simultaneously.
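In Kubernetes terms, the simplest blue-green switch is a Service whose selector points at one "color" at a time. A sketch (the names and labels here are illustrative, not part of this article's canary setup):

```yaml
# Sketch: the Service selects only the "blue" Deployment's pods.
# Changing the version label below (e.g. via kubectl patch or kubectl edit)
# moves ALL traffic to green at once; changing it back rolls back.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: blue   # flip to "green" to cut over
  ports:
    - port: 80
      targetPort: 80
```

Because the cutover is a single label change, the switch (and the rollback) is effectively instantaneous, unlike a gradual canary shift.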
The Update Strategies in a Nutshell
Here's a handy flowchart that summarizes the various update strategies, including Rolling Update, Blue-Green, and Canary Deployment:
Implementing a Canary Deployment in Kubernetes
A Canary Deployment lets you roll out updates gradually by sending a small slice of traffic to a "canary" version. You monitor its performance and, if all goes well, gradually scale it up until it replaces the old version. Here's how to set it up!
Step-by-Step Setup of a Canary Deployment
Let's start by creating version 1 of our app and then introduce version 2 in a limited capacity. Here's what the YAML and commands look like:
Version 1 Deployment (my-app-v1)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v1
  labels:
    app: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
      version: v1
  template:
    metadata:
      labels:
        app: my-app
        version: v1
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: workdir
              mountPath: /usr/share/nginx/html
      initContainers:
        - name: install
          image: busybox:1.28
          command:
            - /bin/sh
            - -c
            - "echo version-1 > /work-dir/index.html"
          volumeMounts:
            - name: workdir
              mountPath: "/work-dir"
      volumes:
        - name: workdir
          emptyDir: {}
Explanation:
- initContainers: An init container runs to completion before the main container starts. Here, we use busybox to write version-1 into an index.html file, which Nginx then serves.
- volumeMounts: Both the main container and the init container mount an emptyDir volume. This is temporary storage that lives only for the lifecycle of the pod and lets the init container share files with the main Nginx container.
Service for the App (my-app-svc)
The Service connects both versions (v1 and v2) through the shared label app: my-app. It acts as a middleman, distributing traffic to all matching pods.
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
  labels:
    app: my-app
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
      targetPort: 80
  selector:
    app: my-app
Testing Version 1
With everything set up, let's test version 1 to make sure it's live. Here's the command:
kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox --command -- wget -qO- my-app-svc
Explanation:
- kubectl run: Creates a temporary pod (--rm deletes it once the command exits).
- --restart=Never: Prevents Kubernetes from restarting the pod if it fails, so it's a one-time run.
- wget -qO- my-app-svc: Fetches the page served at my-app-svc and writes it to stdout (-O-). This is a quick way to check that version 1 is serving the correct content, which should read version-1.
Deploying the Canary Version (Version 2)
Now we create a deployment for version 2 with a single replica. With one canary pod running alongside three v1 pods, only a small fraction of traffic reaches the canary.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app-v2
  labels:
    app: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
      version: v2
  template:
    metadata:
      labels:
        app: my-app
        version: v2
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: workdir
              mountPath: /usr/share/nginx/html
      initContainers:
        - name: install
          image: busybox:1.28
          command:
            - /bin/sh
            - -c
            - "echo version-2 > /work-dir/index.html"
          volumeMounts:
            - name: workdir
              mountPath: "/work-dir"
      volumes:
        - name: workdir
          emptyDir: {}
Testing the Service with the Canary
With both versions running, we can observe traffic splitting by repeatedly querying the service. This is the fun part where we see if both versions are reachable:
kubectl run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- /bin/sh -c 'while sleep 1; do wget -qO- my-app-svc; done'
Explanation:
- /bin/sh -c: Runs a simple shell loop that queries the service every second.
- wget -qO- my-app-svc: Prints the response from my-app-svc, showing either version-1 or version-2. You should start seeing mixed results once the canary is live.
Output:
version-1
version-1
version-1
version-2
version-2
version-1
Kubernetes' service load balancer is now sending a small amount of traffic to version 2 (the canary), with the bulk still hitting version 1.
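Since the Service spreads requests roughly evenly across all ready pods, the canary's share of traffic is approximately its replica count divided by the total pod count. A quick back-of-the-envelope check (a local shell sketch, not a kubectl command):

```shell
# Approximate canary traffic share: canary_pods / (canary_pods + stable_pods),
# assuming the Service balances evenly across all ready endpoints.
canary_share() {
  awk -v c="$1" -v s="$2" 'BEGIN { printf "%.0f%%\n", 100 * c / (c + s) }'
}
canary_share 1 3   # 1 v2 pod vs 3 v1 pods -> 25%
canary_share 4 3   # after scaling v2 to 4 replicas -> 57%
```

These are averages over many requests; in a short sample like the one above, the observed mix will fluctuate around them.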
Scaling Up the Canary
If all looks good, we scale up version 2 to handle more traffic:
kubectl scale --replicas=4 deploy my-app-v2
Now, version 2 will start handling more requests. When we're fully confident, we can retire version 1:
kubectl delete deploy my-app-v1
And just like that, our canary deployment has evolved into the primary version with zero downtime.
Detailed Flow for Canary Deployment
To provide a clearer picture of the steps involved in Canary Deployment, here's a more detailed flowchart focusing specifically on this strategy:
Final Thoughts
There you have it! From Kubernetes' default Rolling Update to Blue-Green and Canary Deployments, you've seen multiple ways to handle updates with minimal impact. Remember, each strategy has its place: Canary is perfect for gradual rollouts, while Blue-Green is great when you need a fast rollback option. Happy deploying!