How to restrict access to a Kubernetes Pod

This blog provides a guide to restricting pod communication using Network Policies on an AWS EKS cluster.

About Network Policy

We will use the Network Policy feature of Kubernetes.

A network policy is a specification of how groups of pods are allowed to communicate with each other and other network endpoints.
By default, pods are non-isolated: they accept traffic from any source and can communicate freely with each other and with other endpoints. In a production-level cluster, such open pod-to-pod communication is not secure.
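As a minimal sketch of closing down this default-open behavior, a namespace-wide default-deny ingress policy looks roughly like the following (the default namespace here is an illustrative assumption):

```yaml
# Sketch: deny all ingress traffic to every pod in the "default" namespace.
# The empty podSelector ({}) selects all pods in the namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}
  policyTypes:
    - Ingress
```

With this in place, traffic is only allowed to pods that some other, more specific policy explicitly selects and permits.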

In order to implement network policies in your cluster, you must use a compatible container network plugin; Calico is one such plugin.

For more information, see Network Policies.

Calico on your Amazon EKS cluster

Project Calico is a network policy engine for Kubernetes.

  1. Apply the Calico manifest from the aws/amazon-vpc-cni-k8s GitHub project. This manifest creates DaemonSets in the kube-system namespace.
kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/release-1.5/config/v1.5/calico.yaml
  2. Watch the kube-system DaemonSets and wait for the calico-node DaemonSet to have the DESIRED number of pods in the READY state. When this happens, Calico is working.
kubectl get daemonset calico-node --namespace kube-system

NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
calico-node   1         1         1       1            1           beta.kubernetes.io/os=linux   27m

To uninstall Calico

kubectl delete -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/release-1.5/config/v1.5/calico.yaml

For more information, see the AWS documentation: Installing Calico on Amazon EKS

Create Network Policies to restrict access to your pod

Create manifest file

Please check and update the correct values for your access source pod in target-sample-pod-network-policy.yaml.

For this example, let's say its name is source-sample-pod.

target-sample-pod-network-policy.yaml

# Deny all access to target-sample-pod
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-to-target-sample-pod
  namespace: default # namespace of target-sample-pod
spec:
  podSelector:
    matchLabels:
      # labels of target-sample-pod
      app: target-sample-pod
      release: target-sample-pod
  policyTypes:
    - Ingress

---
# Allow only access from specified Pod
# Web application API Pod
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-source-sample-pod-to-target-sample-pod
  namespace: default # namespace of target-sample-pod
spec:
  podSelector:
    matchLabels:
      # labels of target-sample-pod
      app: target-sample-pod
      release: target-sample-pod
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              # labels of source-sample-pod pod
              # please update with your label values
              app: source-sample-pod
      # Optional - use when restricting access to a specific port
      ports:
        - protocol: TCP
          port: 8080 # port of target-sample-pod
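Note that a bare podSelector in an ingress rule only matches pods in the policy's own namespace. If source-sample-pod lives in a different namespace, a variant combining namespaceSelector and podSelector can be sketched roughly as follows (the team: sample namespace label is an assumption; you would need to add it to the source namespace yourself, e.g. kubectl label namespace <source-namespace> team=sample):

```yaml
# Sketch: allow ingress from source-sample-pod running in another namespace.
# Assumes the source namespace carries the label "team: sample".
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-cross-namespace-to-target-sample-pod
  namespace: default # namespace of target-sample-pod
spec:
  podSelector:
    matchLabels:
      app: target-sample-pod
      release: target-sample-pod
  policyTypes:
    - Ingress
  ingress:
    - from:
        # namespaceSelector and podSelector in the same "from" entry are
        # combined with AND: the pod must match both selectors.
        - namespaceSelector:
            matchLabels:
              team: sample
          podSelector:
            matchLabels:
              app: source-sample-pod
      ports:
        - protocol: TCP
          port: 8080 # port of target-sample-pod
```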

Create network policy

Use kubectl to create the NetworkPolicies from the target-sample-pod-network-policy.yaml file above:

kubectl apply -f kubernetes/target-sample-pod-network-policy.yaml

Please note that it may take about 10 seconds for the policy to take effect.

Verification

You should see the following:

  • We cannot access the target-sample-pod pod from an arbitrary pod.
  • source-sample-pod can still access the target-sample-pod pod (on TCP port 8080 only).

Verify that we cannot access target-sample-pod from another pod

kubectl run --generator=run-pod/v1 busybox --rm -ti --image=busybox -- /bin/sh

/ # wget --spider --timeout=1 target-sample-pod:8080
Connecting to target-sample-pod:8080 (172.20.238.20:8080)
wget: download timed out
/ #

Verify that we can access from a pod with the correct label app: source-sample-pod

kubectl run --generator=run-pod/v1 busybox --rm -ti --labels="app=source-sample-pod" --image=busybox -- /bin/sh

/ # wget --spider --timeout=1 target-sample-pod:8080
Connecting to target-sample-pod:8080 (172.20.238.20:8080)
remote file exists

How to delete network policy

kubectl delete -f kubernetes/target-sample-pod-network-policy.yaml

Reference

Kubernetes – Set max pod (replica) limit per node

We have to allocate exactly N pods of service A per node. When a new pod of A (the N+1th) arrives, it cannot be scheduled due to lack of capacity, and a new node is added by the Cluster Autoscaler.

We can find a similar use case in this issue on the Kubernetes GitHub repo: add a new predicate: max replicas limit per node · Issue #71930

From Kubernetes 1.16, we can use Pod Topology Spread Constraints to resolve this.

You can use topology spread constraints to control how Pods are spread across your cluster among failure-domains such as regions, zones, nodes, and other user-defined topology domains. This can help to achieve high availability as well as efficient resource utilization.
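For clusters new enough to support the feature, the constraint described above could be sketched roughly like this (the labels, maxSkew value, and image are assumptions for illustration only):

```yaml
# Sketch: spread pods of service A evenly across nodes.
apiVersion: v1
kind: Pod
metadata:
  name: service-a
  labels:
    app: service-a
spec:
  topologySpreadConstraints:
    - maxSkew: 1                          # max allowed difference in pod count between nodes
      topologyKey: kubernetes.io/hostname # treat each node as a topology domain
      whenUnsatisfiable: DoNotSchedule    # keep the pod Pending rather than violate the skew
      labelSelector:
        matchLabels:
          app: service-a
  containers:
    - name: service-a
      image: nginx # placeholder image
```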

Unfortunately, I am working on an old version of the Kubernetes cluster (v1.12), which requires a workaround for this problem.

I found a workaround that uses the Kubernetes QoS class Guaranteed to implement it.

When Kubernetes creates a Pod it assigns one of these QoS classes to the Pod:

  • Guaranteed
  • Burstable
  • BestEffort

Kubernetes QoS class

For a Pod to be given a QoS class of Guaranteed, every container must have a limit set for all resources (CPU and memory), with the request either equal to the limit or left unset (in which case it defaults to the limit). These pods are high priority, so they are terminated only if they exceed their limits and there are no lower-priority pods to terminate.

For example, one t3.xlarge node has 4 CPUs and 16 GB of memory. To place 3 pods per node, set resources.limits and resources.requests to the same value of 1 CPU, leaving 1 CPU for Kubernetes system pods and logging pods such as Fluentd, …

Pod manifest

containers:
  - name: pong
    resources:
      limits:
        cpu: 1
        memory: 1Gi
      requests:
        cpu: 1
        memory: 1Gi
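The capacity arithmetic behind this can be sketched as a quick calculation (the 1-CPU reservation for system and logging pods is the assumption stated above):

```shell
# How many 1-CPU Guaranteed pods fit on a t3.xlarge node?
NODE_CPU=4        # t3.xlarge vCPUs
RESERVED_CPU=1    # assumed headroom for system/logging pods
POD_CPU=1         # request == limit for each Guaranteed pod
PODS_PER_NODE=$(( (NODE_CPU - RESERVED_CPU) / POD_CPU ))
echo "$PODS_PER_NODE"   # prints 3
```

Once those 3 pods are running, a 4th 1-CPU request cannot fit, so the scheduler leaves it Pending and the Cluster Autoscaler brings up a new node.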

This workaround works like a charm in my case.