Certified Kubernetes Application Developer (CKAD) exam Preparation Cheatsheet

Cloud Journeys with Anindita
Nov 3, 2021

The Certified Kubernetes Application Developer (CKAD) exam is an advanced exam aimed at application developers, cloud architects, solution architects & web developers who deploy their applications in Kubernetes containers. It covers core concepts such as Pod design, observability, managing securityContexts, service accounts, ConfigMaps & Secrets, multi-container design patterns, network policies, Custom Resource Definitions (CRDs), writing StatefulSets with persistent volumes, etc.

The latest CKAD curriculum is available at the following link in the official CNCF GitHub repo.

https://github.com/cncf/curriculum/blob/master/CKAD_Curriculum_v1.22.pdf

In this blog post, topic-wise demos from the CKAD exam curriculum (updated as of September 2021) are demonstrated.

1. Application Design & Building (20%)

This module deals with defining, building & updating container images in the Kubernetes cluster, creating jobs/cronjobs, the multi-container pod design patterns (i.e. init container, sidecar pattern, ambassador pattern), managing StatefulSets with persistent volumes, etc.
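Since jobs/cronjobs belong to this module but are not shown below, here is a minimal CronJob sketch; the name, schedule & command are illustrative assumptions, not from the exam material:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello                  # hypothetical name
spec:
  schedule: "*/1 * * * *"      # run every minute (standard cron syntax)
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["/bin/sh", "-c", "date; echo Hello from CKAD"]
          restartPolicy: OnFailure   # restart the container if the job command fails
```

A plain Job uses the same pod template under `spec.template`, just without the `schedule` & `jobTemplate` wrapper.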

Here is a basic YAML manifest to create an Nginx pod with port 80 (the declarative way).

apiVersion: v1
kind: Pod
metadata:
  name: mypod
  namespace: ckad-prep
spec:
  containers:
  - name: mypod
    image: nginx:1.14
    ports:
    - containerPort: 80

Next, this YAML manifest can be executed with the kubectl create or kubectl apply command in the following manner.

kubectl create -f nginx_pod.yaml

Similarly, to create the Nginx pod imperatively, use the kubectl run command.

kubectl run nginx --image=nginx:1.14 --restart=Never --dry-run=client --port=80

In order to define the namespace imperatively with kubectl during pod creation, the namespace first has to be created & then referenced by name.

kubectl create namespace ckad
kubectl run nginx-pod --image=nginx --restart=Never --port=80 -n ckad

Alternatively, the manifest can be edited further to point to a Postgres DB backend via environment variables.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: ckad
spec:
  containers:
  - name: nginx
    image: nginx:1.17.10
    ports:
    - containerPort: 80
    env:
    - name: DB_URL
      value: postgresql://mydb:5432
    - name: DB_USERNAME
      value: admin

An init container can be used alongside the main application container, but it has its own lifecycle: it is started, executed & terminated before the app container starts. Here is a sample YAML manifest of a pod with an init container.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: business-app
spec:
  initContainers:
  - name: configurer
    image: busybox
    command:
    - wget
    - "-O"
    - "/usr/shared/app/config.json"
    - https://raw.githubusercontent.com/bmuschko/ckad-crash-course/master/exercises/07-creating-init-container/app/config/config.json
    volumeMounts:
    - name: configdir
      mountPath: "/usr/shared/app"
  containers:
  - image: bmuschko/nodejs-read-config:1.0.0
    name: web
    ports:
    - containerPort: 8080
    volumeMounts:
    - name: configdir
      mountPath: "/usr/shared/app"
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
  volumes:
  - name: configdir
    emptyDir: {}
status: {}
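The sidecar pattern mentioned earlier can be sketched in a similar way. In this hypothetical example (container names & log path are assumptions), a busybox sidecar tails the main container's logs from a shared volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp-with-sidecar      # hypothetical name
spec:
  containers:
  - name: webapp                 # main application container
    image: nginx
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-shipper            # sidecar: runs for the whole pod lifetime
    image: busybox
    command: ["/bin/sh", "-c", "tail -F /var/log/nginx/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  volumes:
  - name: logs                   # emptyDir shared by both containers
    emptyDir: {}
```

Unlike an init container, the sidecar is listed under `containers` and stays running alongside the app container.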

Containers deployed with ephemeral volumes lose their storage once the container restarts. For stateful state management, a persistent volume (PV) is required; the PV is attached with the help of a persistent volume claim (PVC).

A sample YAML manifest to create a persistent volume (PV) with 512 MiB of storage & the access mode “ReadWriteMany”:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 512Mi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: /data/config

The persistent volume claim (PVC) can be created similarly, with a storage request quota & an access mode.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 256Mi

The application container pods consume the claim through a volumeMounts specification, where the volume name, mountPath etc. are defined. An example of an Nginx pod deployed with a persistent volume claim is given below.

apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - image: nginx
    name: app
    volumeMounts:
    - mountPath: "/data/app/config"
      name: configpvc
  volumes:
  - name: configpvc
    persistentVolumeClaim:
      claimName: pvc
  restartPolicy: Never

For dynamic binding scenarios, a StorageClass is used. Here is a sample YAML manifest for a Kubernetes StorageClass with labels & annotations.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: custom
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "false"
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
provisioner: k8s.io/minikube-hostpath
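A PVC can then request dynamic provisioning by referencing the class by name; this is a sketch assuming the “custom” class above exists (claim name & sizes are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-pvc            # hypothetical name
spec:
  storageClassName: custom     # binds via the StorageClass, not a pre-created PV
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

With a StorageClass, the provisioner creates the backing PV on demand, so no PersistentVolume object has to be written by hand.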

2. Application Deployment (20%)

This module mainly deals with app container deployment using various strategies like “blue-green”, “canary” & “rolling update”. Every Deployment keeps track of its rollout history; within it, each new version of a rollout is called a “revision”. The following command lists the rollout history of a deployment.

kubectl rollout history deployment <your-deployment-name>

A new revision is created whenever the deployment is updated, for example with a new image version. A new version of the Nginx container can be rolled out as follows.

kubectl set image deployment <your-deployment-name> nginx=nginx:1.19.2 

Similarly, to inspect the details of a specific revision (e.g. revision 2) of the deployment, the following command is required to be executed.

kubectl rollout history deployment <your-deployment-name> --revision=2

The deployment revisions can be rolled back using the “undo” command, like the following:

kubectl rollout undo deployment <your-deployment-name> --to-revision=1

An example deployment.yaml manifest is shown below, where the Nginx container is deployed with 3 replicas running on containerPort 8080.

# define the deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deploy
  labels:
    app: my-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 8080
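The rolling-update behaviour of such a Deployment can be tuned with the strategy block. This fragment is a sketch (the chosen values are illustrative, not prescribed by the exam) and would sit under the Deployment's spec:

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 extra pod above the desired replica count
      maxUnavailable: 1    # at most 1 pod may be unavailable during the update
```

With `maxSurge: 1` and `maxUnavailable: 1`, a 3-replica deployment updates pods one or two at a time while keeping at least 2 serving traffic.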

The blue-green deployment strategy is based on managing two identical deployments: one is active & accepting client requests while the other stays idle. The Kubernetes health probes, i.e. the startup, readiness & liveness probes, are configured accordingly in a blue-green container deployment.

  • Startup Probe: This probe is used to identify whether the application inside the container has started.
  • Readiness Probe: This probe is used to signal whether the container is ready to accept incoming traffic/client requests.
  • Liveness Probe: This probe is used to identify whether the container is alive & working as expected, similar to a heartbeat check.
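The three probes above are declared per container. This is a hedged sketch (pod name, paths, ports & timings are assumptions) showing all three side by side:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app             # hypothetical name
spec:
  containers:
  - name: web
    image: nginx
    startupProbe:              # gates the other probes until startup succeeds
      httpGet:
        path: /
        port: 80
      failureThreshold: 30     # allow up to 30 * 10s for a slow start
      periodSeconds: 10
    readinessProbe:            # controls whether the pod receives traffic
      httpGet:
        path: /
        port: 80
      periodSeconds: 5
    livenessProbe:             # restarts the container when it fails
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 10
```

Besides `httpGet`, probes can also use `exec` commands or `tcpSocket` checks.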

The blue-green deployment strategy ensures that the target role (‘blue’/‘green’) is deployed with “readinessProbe” settings. Here is an example of a sample blue-green deployment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deployment-${TARGET_ROLE}
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tomcat
      role: ${TARGET_ROLE}
  template:
    metadata:
      labels:
        app: tomcat
        role: ${TARGET_ROLE}
    spec:
      containers:
      - name: tomcat-container
        image: tomcat:${TOMCAT_VERSION}
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /
            port: 8080

3. Application Observability & Maintenance (15%)

The basic pod design involves choosing labels & annotations. Labels act as tags used to identify & select pods (e.g. env=dev, tier=backend, image=nginx), while annotations are key-value pairs that attach non-identifying metadata to pods.

Here is a sample YAML manifest applying the labels “env=dev” & “tier=backend” to an Nginx pod.

apiVersion: v1
kind: Pod
metadata:
  name: labeled-pod
  labels:
    env: dev
    tier: backend
spec:
  containers:
  - image: nginx
    name: nginx
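Annotations, by contrast, go under metadata.annotations and are ignored by selectors; this sketch uses illustrative keys & values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: annotated-pod                 # hypothetical name
  annotations:
    commit: "866e8daa"                # non-identifying metadata, e.g. build info
    contact: "dev-team@example.com"   # illustrative value
spec:
  containers:
  - image: nginx
    name: nginx
```

Labels, unlike annotations, can be queried with selectors, e.g. kubectl get pods -l env=dev,tier=backend.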

Troubleshooting of pods/containers in Kubernetes is typically done by monitoring container logs, which provide the details behind error statuses like “ImagePullBackOff”, “CreateContainerConfigError” & “CrashLoopBackOff”. Each error has its own root cause: “ImagePullBackOff” typically points to authentication/authorization issues, or an unavailable image, while pulling from a registry, whereas “CrashLoopBackOff” occurs when the command executed in a container repeatedly crashes.

The commands which’re typically useful while debugging in pods are —

kubectl logs <pod-name-to-debug> 

Another important command is “kubectl get events”, which lists the event details of pods & containers at the namespace level.

You can analyze container crashes, disk failures, or other error details through the “describe” command.

kubectl describe pod <pod-name>
kubectl describe deployment <deployment-name>

Prometheus & Datadog provide extensive features for monitoring diagnostic logs & metrics, and for alerting based on events or failures.

Prometheus overview & architecture

Kubernetes Datadog agent integration & monitoring setup

4. Application Environment, Configuration & Security (25%)

This module is focused on building Custom Resource Definitions, securityContext definitions, creating ConfigMaps & Secrets, and understanding Kubernetes ServiceAccounts.

Authentication/authorization inside a Kubernetes cluster is managed by ServiceAccounts. For a managed Kubernetes service (AKS, EKS, GKE, etc.), it is governed by the respective cloud provider's IAM/RBAC policy. For example, authN/authZ in an AKS cluster is managed by Azure AD managed identities.

Details of managing AKS cluster role-based access control & managed identity policies can be found here.

A ConfigMap or Secret is a set of key-value pairs injected at runtime as environment variables or mounted as a volume. The Secret is specifically intended for storing sensitive values like passwords, API keys etc. & is base64-encoded. Secrets are loaded at runtime only into the specific pods requiring the credentials & are stored in memory.
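Note that base64 is an encoding, not encryption; the values stored under a Secret's data: section can be produced & reversed with the standard base64 tool:

```shell
# Encode a value the way it appears under a Secret's data: section
echo -n 'admin' | base64               # prints YWRtaW4=

# Decode it back to plain text
echo -n 'YWRtaW4=' | base64 --decode   # prints admin
```

The -n flag matters: without it, echo appends a newline that gets encoded too.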

A ConfigMap or Secret can be provisioned with the kubectl create configmap & kubectl create secret commands in four different ways:

  • from a file of key-value pairs to be exposed as environment variables (--from-env-file)
  • from literal key-value pairs (--from-literal)
  • from a file with arbitrary contents (--from-file)
  • from a directory with many files

kubectl create configmap db-config --from-literal=DB_URL=postgresql://mydb:5432
kubectl create secret generic db-credentials --from-literal=db-password=passwd

Let's take a quick look at the YAML manifest for the db-backend pod, where the ConfigMap db-config (holding the DB credentials) is injected via the configMapRef handler.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: backend
  name: backend
spec:
  containers:
  - image: nginx
    name: backend
    envFrom:
    - configMapRef:
        name: db-config
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}

Similarly, Secrets help configure & store database credentials, API keys, passwords etc.; here the value is loaded via secretKeyRef.

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: backend
  name: backend
spec:
  containers:
  - image: nginx
    name: backend
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: db-password
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}

The securityContext in Kubernetes is a primitive that helps define policies such as running a container with non-root privileges or with specific filesystem permissions.

Here is a simple example of a YAML manifest with a Kubernetes securityContext, where the container filesystem group permission is set to group ID 3000 for an Nginx pod mounting the volume data-vol at the path /data/app.

apiVersion: v1
kind: Pod
metadata:
  name: secured
  labels:
    run: secured
spec:
  securityContext:
    fsGroup: 3000
  containers:
  - image: nginx
    name: secured
    volumeMounts:
    - name: data-vol
      mountPath: /data/app
    resources: {}
  volumes:
  - name: data-vol
    emptyDir: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
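The non-root case mentioned earlier can be sketched as follows; the pod name, user ID & image choice are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-pod            # hypothetical name
spec:
  securityContext:
    runAsNonRoot: true         # kubelet refuses to start the container as root
    runAsUser: 1000            # run the container process as UID 1000
  containers:
  - image: nginxinc/nginx-unprivileged   # assumption: an image built to run as non-root
    name: web
```

With `runAsNonRoot: true`, an image whose process would run as UID 0 fails validation instead of starting.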

A Custom Resource Definition (CRD) allows you to define custom resources, i.e. to extend a Kubernetes installation with new API endpoints. Custom resources can then be managed by a custom controller in a declarative manner.

Here goes a sample manifest on Kubernetes CRD.

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              cronSpec:
                type: string
              image:
                type: string
              replicas:
                type: integer
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
    shortNames:
    - ct
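Once the CRD is registered, instances of the new kind can be created like any other resource; the field values here are illustrative:

```yaml
apiVersion: stable.example.com/v1    # group/version from the CRD above
kind: CronTab
metadata:
  name: my-new-cron-object           # hypothetical name
spec:
  cronSpec: "* * * * */5"
  image: my-awesome-cron-image       # illustrative image name
  replicas: 1
```

After applying it, the object is served by the API & can be listed with kubectl get crontabs (or the short name ct).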

5. Service & Networking (20%)

This module demonstrates the features of Kubernetes network policies, ingress & egress rules for containers, ingress controllers/rules for exposing applications, troubleshooting of apps using the kubectl top command, etc.

Network policies in Kubernetes define ingress/egress rules with allow/deny permissions on ports for communication between, for example, frontend, middleware & backend pods. Containers within the same pod can communicate via localhost, while pod-to-pod communication happens via the pod IP address.

The ClusterIP service type exposes pods on a cluster-internal IP with a static port & is reachable only from within the cluster.

The NodePort service type exposes pods on a static port on each node, so it is reachable from outside the cluster.

The service is a Kubernetes REST object.
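As an illustration of the two service types (the service name, selector & port numbers below are assumptions, not from the original post), a NodePort service exposing Nginx pods might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc        # hypothetical name
spec:
  type: NodePort         # omit this line (or use ClusterIP) for cluster-internal access only
  selector:
    app: nginx           # routes to pods carrying this label
  ports:
  - port: 80             # the service's cluster-internal port
    targetPort: 80       # the containerPort on the selected pods
    nodePort: 30080      # static port opened on every node (default range 30000-32767)
```

With `type: ClusterIP`, only `port` & `targetPort` apply and the service is reachable solely inside the cluster.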

Here are sample pods (i.e. front-end, backend, database) to which a network policy with specific ingress rules will then be attached, allowing traffic to the DB pod only from the backend pod.

kind: Pod
apiVersion: v1
metadata:
  name: frontend
  namespace: app-stack
  labels:
    app: todo
    tier: frontend
spec:
  containers:
  - name: frontend
    image: nginx
---
kind: Pod
apiVersion: v1
metadata:
  name: backend
  namespace: app-stack
  labels:
    app: todo
    tier: backend
spec:
  containers:
  - name: backend
    image: nginx
---
kind: Pod
apiVersion: v1
metadata:
  name: database
  namespace: app-stack
  labels:
    app: todo
    tier: database
spec:
  containers:
  - name: database
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: example

The network policy is attached with the help of label selectors, opening the specific MySQL DB port (3306) of the database pod only for the backend-tier pods.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-stack-network-policy
  namespace: app-stack
spec:
  podSelector:
    matchLabels:
      app: todo
      tier: database
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: todo
          tier: backend
    ports:
    - protocol: TCP
      port: 3306

Port forwarding maps a local port to a particular port on a pod. For example, local port 28015 can be mapped to a mongodb pod's port 27017.

kubectl port-forward pods/mongo-75f59d57f4-4nd6q 28015:27017

The kubectl top command collects metrics from the metrics API & assists in troubleshooting:

kubectl top pod <POD ID> 

The kubectl top pod command reports the memory working set, which can be compared with the container memory metrics when troubleshooting memory-related issues.

You can find the details of the CKAD / CKA prep code in this GitHub repo.

~ Happy Kuberneting & good luck with CKAD / CKA exams!
