Selective Kubernetes Deployment Using Affinity

Hashedin

Manish Dave

21 Aug 2020

Picture yourself with a sample web application and a highly available (multi-zone) Kubernetes cluster, needing the application itself to be highly available on that cluster. The first move most K8s practitioners make is simply to deploy the application to the cluster, but there is no guarantee that every node ends up running at least one pod. This blog gives you a clear solution to that scenario.
 
Kubernetes allows selective placement of new pods using affinities. These are good solutions to common HA problem statements such as:

  1.  Running n pods on each node
  2.  Keeping a group of pods off a certain node group
  3.  Preferring specific regions, AZs, or nodes for auto-scaled pods

 
Let’s discuss all these in detail. Before proceeding, here are the common terms used in this blog: nodeAffinity attracts pods to nodes with matching labels; podAffinity attracts pods to nodes already running matching pods; podAntiAffinity repels them. Each rule can be HARD (requiredDuringSchedulingIgnoredDuringExecution, a strict requirement) or SOFT (preferredDuringSchedulingIgnoredDuringExecution, a weighted preference), and the topologyKey defines the failure domain (node, zone, or region) over which the rule applies.
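To illustrate points 2 and 3 from the list above, here is a minimal nodeAffinity sketch (the label key node-group, the value spot-workers, and the zone name us-east-1a are hypothetical; substitute your cluster’s actual labels):

affinity:
  nodeAffinity:
    # HARD rule: never schedule onto the hypothetical "spot-workers" node group
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node-group
          operator: NotIn
          values:
          - spot-workers
    # SOFT rule: prefer a particular zone when capacity allows
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 50
      preference:
        matchExpressions:
        - key: failure-domain.beta.kubernetes.io/zone
          operator: In
          values:
          - us-east-1a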

So, for the scenario above, a podAntiAffinity rule with SOFT scheduling will do the job!
 
First, let’s look at a deployment of the yaml file below, which uses a soft podAntiAffinity rule with a replica count of 3.
 

Deployment with soft podAntiAffinity:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: run
                  operator: In
                  values:
                  - nginx
              topologyKey: failure-domain.beta.kubernetes.io/zone
            weight: 100
      containers:
      - image: nginx
        name: nginx
        resources:
          limits:
            memory: "200Mi"
            cpu: "200m"
          requests:
            memory: "100Mi"
            cpu: "100m"

 
This soft anti-affinity rule tells the scheduler to avoid placing two pods labeled run=nginx in the same topology domain. The topologyKey here is the zone label, so pods are spread across zones; to spread strictly across nodes instead, use topologyKey: kubernetes.io/hostname. This deployment was applied to a 3-master HA cluster with each master in its own zone, so spreading by zone also spreads by node.
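Save the manifest and apply it (the file name nginx-deployment.yaml is just an example):

kubectl apply -f nginx-deployment.yaml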
 

Result:

As you can see here, each pod is deployed to a different Kubernetes node (master1, master2, and master3).
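The placement described above can be checked with kubectl’s wide output, which adds a NODE column:

kubectl get pods -l run=nginx -o wide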
 

 
We have seen how the initial deployment behaves; now let’s see what happens when the deployment is scaled.
 
Scale the replicas to a higher count, say 6:
 
kubectl scale deployment nginx --replicas=6

Result:

As you can see, the next set of 3 pods was also distributed evenly across the nodes; each node now runs 2 nginx pods.
 

 
Will the same work with HPA (Horizontal Pod Autoscaler)?
 
Configure an HPA and create a load-generator pod that hits the application endpoint with a stream of requests, enough to trigger scaling.
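Here is one hedged way to set this up; the CPU threshold, replica bounds, and the service URL http://nginx are assumptions, and the deployment must be exposed through a Service for the load generator to reach it:

# Create an HPA for the nginx deployment (thresholds are illustrative)
kubectl autoscale deployment nginx --cpu-percent=50 --min=3 --max=9

# Generate load against the application endpoint (assumes a Service named nginx)
kubectl run load-generator --image=busybox --restart=Never -- \
  /bin/sh -c "while true; do wget -q -O- http://nginx; done"

# Watch the autoscaler react to the load
kubectl get hpa nginx --watch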
 
Result:

As you can see, the newly launched pods are again distributed evenly among the nodes.

 

 
podAntiAffinity is therefore a simple and effective way to achieve high availability for deployments. We hope this blog has helped you understand the importance of pod affinity rules.
 

