21 Aug 2020
Selective Kubernetes Deployment Using Affinity
   
Manish Dave
#Technology | 4 Min Read
Objective
Picture yourself with a sample web application and an HA (multi-zone) Kubernetes cluster, and a requirement to run that application highly available on the cluster. The first move most K8s practitioners would make is simply to deploy the application onto the nodes, but a plain deployment gives no assurance that every node ends up running at least one pod. This blog walks through a clear solution to that scenario.

Kubernetes allows selective deployment of new pods using affinities. These are good solutions to common HA problem statements such as:

  1. Running n pods on each node
  2. Keeping a group of pods off a certain node group
  3. Preferring certain regions, AZs, or nodes for auto-scaled pods

Let’s discuss all of these in detail. Before proceeding, please find below some of the common terminology used in this blog.

  • podAffinity: tells the scheduler to place a new pod on the same node as existing pods if the label selector on the new pod matches the labels on those pods.
  • podAntiAffinity: prevents the scheduler from placing a new pod on the same node as existing pods if the label selector on the new pod matches the labels on those pods.
  • weight: can be any value from 1 to 100. It gives a matching node a relatively higher score than other nodes; the more strongly you want the preference honoured, the higher the weight you should set.
  • topologyKey: a node label key that defines the topology domain (node, zone, region, and so on) over which the rule is evaluated.
  • requiredDuringSchedulingIgnoredDuringExecution (HARD): with this approach the scheduler places a pod only on a node where the rule is satisfied. With an anti-affinity rule, only one pod can land in each topology domain, and any further pods stay in the Pending state.
  • preferredDuringSchedulingIgnoredDuringExecution (SOFT): with this approach the scheduler first prefers nodes that satisfy the rule and, if none exist, deploys to non-preferred nodes. Combined with a weight in the deployment, this distributes pods evenly across nodes.
So, a podAntiAffinity rule with SOFT scheduling will do the job here!
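Before the full manifest, here is how the HARD variant of such a rule would be spelled, for contrast (a sketch only, using the same illustrative run=nginx label and zone topology key; it is not what we deploy below):

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    # note: no weight here, the rule is mandatory
    - labelSelector:
        matchExpressions:
        - key: run
          operator: In
          values:
          - nginx
      topologyKey: failure-domain.beta.kubernetes.io/zone

With this rule in place, a fourth replica on a three-zone cluster would simply stay Pending, which is the HARD behaviour described above.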

First, let’s have a look at the YAML file below, a Deployment that uses podAntiAffinity with a replica count of 3.

Deployment with soft podAntiAffinity:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      run: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      affinity:
        podAntiAffinity:
          # soft rule: prefer not to co-locate run=nginx pods in the same zone
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: run
                  operator: In
                  values:
                  - nginx
              topologyKey: failure-domain.beta.kubernetes.io/zone
            weight: 100
      containers:
      - image: nginx
        name: nginx
        resources:
          limits:
            memory: "200Mi"
            cpu: "200m"
          requests:
            memory: "100Mi"
            cpu: "100m"
status: {}

This soft anti-affinity rule tells the scheduler to avoid running two pods with the label run=nginx in the same topology domain, here the zone identified by the failure-domain.beta.kubernetes.io/zone node label. The deployment was applied to an HA cluster with 3 master nodes spread across zones.
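If you want the rule to apply per node rather than per zone, only the topologyKey changes; a sketch using the well-known kubernetes.io/hostname node label (everything else is the same illustrative rule):

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - podAffinityTerm:
        labelSelector:
          matchExpressions:
          - key: run
            operator: In
            values:
            - nginx
        # one scheduling domain per node instead of per zone
        topologyKey: kubernetes.io/hostname
      weight: 100

For the run below, we keep the zone-based rule from the manifest above.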

Result:
As you can see here, one pod is scheduled onto each K8s node (master1, master2, and master3).
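If you prefer to confirm the placement from the command line rather than a dashboard, the NODE column of a wide pod listing shows it directly (assuming the Deployment runs in the default namespace):

kubectl get pods -l run=nginx -o wide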

We have already seen what happens at deployment time; now let’s see what happens when we scale the deployment.

Scale the replica count to a higher number, let’s say 6!

kubectl scale deployment nginx --replicas=6

Result:
As you can see, the next set of 3 pods also gets distributed evenly across the nodes. Each node now runs 2 nginx pods.
Will the same work with HPA (Horizontal Pod Autoscaler) or not?
Configure the HPA and create a load-generator pod that hits the application endpoint with enough requests to trigger scaling.
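The exact HPA settings do not matter for the distribution behaviour; a minimal sketch looks like this (it assumes the Deployment is exposed through a Service named nginx, and the CPU threshold, replica bounds, and load-generator pod are all illustrative). The CPU requests already declared in the pod template are what the HPA computes utilisation against.

kubectl expose deployment nginx --port=80
kubectl autoscale deployment nginx --cpu-percent=50 --min=3 --max=9
kubectl run load-generator --image=busybox --restart=Never -- /bin/sh -c "while true; do wget -q -O- http://nginx; done"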

Now, as you can see, the newly launched pods are also distributed successfully across the nodes.

Conclusion:
podAntiAffinity is therefore a simple and effective route to highly available deployments. We hope this blog has helped you understand the importance of podAntiAffinity.