Kubernetes

Kubernetes is an open-source container orchestration platform for automating deployment, scaling, and management of containerized applications.

8 Categories 24 Sections 68 Examples

Getting Started

Core Kubernetes concepts and initial setup for beginners

Kubernetes Basics

Introduction to Kubernetes architecture and core concepts

Understand Kubernetes architecture

Shows your Kubernetes cluster endpoints and components

Code
Terminal window
# Kubernetes architecture consists of:
# 1. Control Plane (Master): Manages cluster state and decisions
# 2. Worker Nodes: Run containerized applications
# 3. Pods: Smallest deployable units (wrappers around containers)
# 4. Services: Expose pods to network traffic
# 5. Storage: Persistent data storage for pods
# Analogy to VMs:
# Traditional: Cluster -> Node -> VM -> Application
# Kubernetes: Cluster -> Node -> Pod -> Container
# Key resources:
# - Pod: Single or multiple containers sharing network
# - Deployment: Manages pod replicas
# - Service: Network access to pods
# - ConfigMap: Configuration data
# - PersistentVolume: Storage resources
Execution
Terminal window
kubectl cluster-info
Output
Terminal window
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
  • Requires kubectl and KUBECONFIG configured
  • Control plane manages cluster operations
  • Worker nodes run actual workloads

Check cluster nodes and capacity

Lists all nodes in your cluster with their status and information

Code
Terminal window
# Get list of all nodes in cluster
kubectl get nodes
# Get detailed node information
kubectl get nodes -o wide
# View node resource usage (requires metrics-server)
kubectl top nodes
# Describe specific node
kubectl describe node node-1
Execution
Terminal window
kubectl get nodes -o wide
Output
Terminal window
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP
minikube Ready master 10d v1.24.0 192.168.1.1 <none>
  • STATUS Ready means node is healthy and accepting workloads
  • Roles indicate control plane vs worker nodes
  • top requires metrics-server to be installed

Verify kubectl installation and context

Verifies kubectl installation and shows active cluster context

Code
Terminal window
# Check kubectl version
kubectl version --client
# View current context
kubectl config current-context
# List all available contexts
kubectl config get-contexts
# Switch to different context
kubectl config use-context docker-desktop
# Get cluster information
kubectl config view
Execution
Terminal window
kubectl version --client
Output
Terminal window
Client Version: v1.26.0
Kustomize Version: v4.5.4
  • Context determines which cluster kubectl connects to
  • KUBECONFIG can contain multiple clusters
  • Switch contexts for multi-cluster environments

Install and Configure kubectl

Set up kubectl CLI tool and configure cluster access

Install kubectl on Linux

Installs kubectl CLI tool required for managing Kubernetes clusters

Code
Terminal window
# Download kubectl binary
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
# Make it executable
chmod +x kubectl
# Move to PATH
sudo mv kubectl /usr/local/bin/
# Verify installation
kubectl version --client
# Using package manager (Ubuntu/Debian)
sudo apt-get update
sudo apt-get install -y kubectl
Execution
Terminal window
kubectl version --client
Output
Terminal window
Client Version: v1.26.0
Kustomize Version: v4.5.4
  • Always download from official Kubernetes release repository
  • Version should be within 1 minor version of cluster API

Configure kubectl with cloud clusters

Configures kubectl to access cloud-managed Kubernetes clusters

Code
Terminal window
# AWS EKS - Get cluster config
aws eks update-kubeconfig --region us-east-1 --name my-cluster
# Google GKE - Get cluster credentials
gcloud container clusters get-credentials my-cluster --zone us-central1-a
# Azure AKS - Get cluster credentials
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
# Verify kubectl can access cluster
kubectl cluster-info
Execution
Terminal window
kubectl config view
Output
Terminal window
apiVersion: v1
clusters:
- cluster:
    server: https://example.com
  name: my-cluster
contexts:
- context:
    cluster: my-cluster
    user: my-user
  name: my-context
  • Each cloud provider has specific commands for credential setup
  • Kubeconfig stored in ~/.kube/config by default

Set up kubectl shell completion

Enables tab completion for kubectl commands in your shell

Code
Terminal window
# Bash completion
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
# Zsh completion
echo "source <(kubectl completion zsh)" >> ~/.zshrc
source ~/.zshrc
# Fish completion
kubectl completion fish | source
# Temporary completion (current session)
source <(kubectl completion bash)
Execution
Terminal window
kubectl completion bash
Output
Terminal window
# bash completion for kubectl
_kubectl_complete() { ... }
  • Greatly improves command line productivity
  • Available for bash, zsh, fish, and powershell

Namespaces and Basic Navigation

Organize resources using namespaces and navigate clusters

Explore and create namespaces

Lists and creates Kubernetes namespaces for resource organization

Code
Terminal window
# List all namespaces
kubectl get namespaces
# Create new namespace
kubectl create namespace development
# Create namespace with YAML
kubectl apply -f - <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: production
EOF
# Built-in namespaces:
# default - for user workloads
# kube-system - for system components
# kube-public - world-readable resources
# kube-node-lease - node heartbeats
Execution
Terminal window
kubectl get ns
Output
Terminal window
NAME STATUS AGE
default Active 10d
kube-system Active 10d
kube-public Active 10d
kube-node-lease Active 10d
  • Default namespace is used if not specified
  • Namespaces isolate resources within same cluster
  • Good for multi-team or multi-environment setups

Set default namespace and switch between them

Sets default namespace for kubectl commands without -n flag

Code
Terminal window
# Set permanent default namespace
kubectl config set-context --current --namespace=development
# View current namespace
kubectl config view --minify --output=jsonpath='{..namespace}'
# View resources in specific namespace
kubectl get pods --namespace=production
kubectl get pods -n production # short form
# Switch context with different namespace
kubectl config use-context dev-context
Execution
Terminal window
kubectl config set-context --current --namespace=default
Output
Terminal window
Context "minikube" modified.
  • Default context is stored in ~/.kube/config
  • -n flag overrides default namespace per command

View all resources across namespaces

Lists resources across all namespaces for cluster-wide visibility

Code
Terminal window
# List pods across all namespaces
kubectl get pods --all-namespaces
kubectl get pods -A # short form
# View services across all namespaces
kubectl get svc -A
# Get all resources in all namespaces
kubectl get all -A
# Describe resource in specific namespace
kubectl describe pod my-pod -n production
Execution
Terminal window
kubectl get pods -A
Output
Terminal window
NAMESPACE NAME READY STATUS RESTARTS
default nginx-pod 1/1 Running 0
kube-system coredns-64897fb6d9-x8z5k 1/1 Running 0
production app-deployment-abc123-xyz789 1/1 Running 1
  • -A flag is equivalent to --all-namespaces
  • Useful for troubleshooting across entire cluster

Cluster Management

Manage cluster configuration, nodes, resources, and monitoring

Cluster Context and Configuration

Manage multiple clusters and kubeconfig contexts

Manage kubeconfig contexts and clusters

Shows all available contexts and allows switching between clusters

Code
Terminal window
# View all contexts and clusters
kubectl config get-contexts
kubectl config get-clusters
# Get current context
kubectl config current-context
# Switch to different context
kubectl config use-context another-cluster
# Create new context
kubectl config set-context production --cluster=prod-cluster --user=prod-user
# Delete context
kubectl config delete-context old-context
Execution
Terminal window
kubectl config get-contexts
Output
Terminal window
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* minikube minikube minikube default
docker-desktop docker-desktop docker-desktop default
kind-cluster1 kind-cluster1 kind-cluster1 default
  • Context combines cluster, user, and namespace information
  • * indicates current context

Configure cluster authentication

Manually configure cluster, user, and context settings

Code
Terminal window
# Set cluster details
kubectl config set-cluster my-cluster \
--server=https://kubernetes.example.com:6443 \
--certificate-authority=/path/to/ca.crt
# Set user authentication
kubectl config set-credentials my-user \
--client-certificate=/path/to/client.crt \
--client-key=/path/to/client.key
# Create context binding user to cluster
kubectl config set-context my-context \
--cluster=my-cluster \
--user=my-user \
--namespace=default
# Verify configuration
kubectl config view
Execution
Terminal window
kubectl config view
Output
Terminal window
apiVersion: v1
clusters:
- cluster:
    server: https://kubernetes.example.com:6443
users:
- name: my-user
contexts:
- context:
    cluster: my-cluster
    user: my-user
  • Certificates can be base64-encoded in kubeconfig
  • kubectl config view shows merged configuration

Merge kubeconfig files and manage credentials

Manage multiple kubeconfig files for different clusters

Code
Terminal window
# View kubeconfig location
echo $KUBECONFIG
# Merge multiple kubeconfig files
export KUBECONFIG=~/.kube/config:~/.kube/prod-config:/tmp/temp-config
kubectl config view --merge
# Flatten kubeconfig (consolidate into single file)
kubectl config view --flatten > ~/.kube/consolidated-config
# Set KUBECONFIG permanently
echo "export KUBECONFIG=$HOME/.kube/config" >> ~/.bashrc
# Verify current kubeconfig
kubectl config view --minify
Execution
Terminal window
echo $KUBECONFIG
Output
Terminal window
/home/user/.kube/config
  • Multiple KUBECONFIG files are separated by colon (:)
  • Useful for managing dev, staging, and production clusters

Cluster Information and Monitoring

Monitor cluster health, resources, and component status

Check cluster health and component status

Shows running components and their endpoints

Code
Terminal window
# Get cluster information
kubectl cluster-info
# Check control plane component health (deprecated since v1.19)
kubectl get componentstatuses
# Check API server and cluster version
kubectl api-versions
# List all API resources available
kubectl api-resources
# Dump cluster state for debugging
kubectl cluster-info dump
Execution
Terminal window
kubectl cluster-info
Output
Terminal window
Kubernetes control plane is running at https://127.0.0.1:6443
CoreDNS is running at https://127.0.0.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
  • Critical for verifying cluster connectivity
  • Dump shows more detailed debug information

Monitor node resources and health

Shows CPU and memory usage for all cluster nodes

Code
Terminal window
# List nodes with resource information
kubectl get nodes --show-labels
# Get node resource usage
kubectl top nodes
# Describe specific node for details
kubectl describe node minikube
# View system pod logs (node-level logs require SSH or node access)
kubectl logs -f -n kube-system --tail=50 <pod-name>
# Get node conditions
kubectl get nodes -o json | jq '.items[].status.conditions'
Execution
Terminal window
kubectl top nodes
Output
Terminal window
NAME CPU(cores) CPU% MEMORY(Mi) MEMORY%
minikube 245m 12% 1234Mi 32%
  • Requires metrics-server installed for top command
  • CPU in millicores, memory in mebibytes (Mi)

Check persistent volume and storage status

Lists persistent storage resources in the cluster

Code
Terminal window
# List persistent volumes
kubectl get pv
# List persistent volume claims
kubectl get pvc --all-namespaces
# Check storage classes
kubectl get storageclass
# Describe specific PV
kubectl describe pv pv-name
# Check PVC status
kubectl describe pvc pvc-name -n namespace
Execution
Terminal window
kubectl get pv
Output
Terminal window
NAME CAPACITY ACCESS MODES RECLAIM STATUS CLAIM
pv-001 10Gi RWO Delete Bound ns/pvc-001
  • PV is cluster-level, PVC is namespace-level
  • Status should be Bound for normal operation
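
For reference, a claim that could bind to a volume like the one above can be sketched as follows; the claim name, namespace, size, and `standard` storage class are illustrative placeholders, not values from a real cluster:

```yaml
# Minimal PVC sketch -- names and sizes are placeholders
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-001
  namespace: ns
spec:
  accessModes:
  - ReadWriteOnce            # matches the RWO access mode shown above
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard # assumed class; list yours with: kubectl get storageclass
```

Once the claim is Bound, a pod references it by claim name under `spec.volumes`.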

Node and Resource Management

Manage cluster nodes, taints, and resource quotas

Cordon and drain nodes for maintenance

Safely cordons and drains nodes for maintenance

Code
Terminal window
# Cordon node (prevent new pods from scheduling)
kubectl cordon node-1
# Drain node (evict all pods safely)
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data
# Uncordon node (allow scheduling again)
kubectl uncordon node-1
# Check node status
kubectl get nodes
kubectl describe node node-1
Execution
Terminal window
kubectl get nodes
Output
Terminal window
NAME STATUS ROLES AGE
node-1 Ready,SchedulingDisabled <none> 10d
node-2 Ready <none> 10d
  • SchedulingDisabled status indicates cordoned node
  • Drain ensures graceful pod termination

Add and remove node taints

Adds and removes taints to control pod scheduling

Code
Terminal window
# Add taint to node (prevents scheduling unless tolerated)
kubectl taint nodes node-1 key=value:NoSchedule
# Add effect types:
# NoSchedule - new pods won't be scheduled
# NoExecute - existing pods will be evicted
# PreferNoSchedule - prefer not to schedule but may
# Remove taint from node
kubectl taint nodes node-1 key=value:NoSchedule-
# View node taints
kubectl describe node node-1 | grep Taints
Execution
Terminal window
kubectl describe node node-1 | grep Taints
Output
Terminal window
Taints: gpu=true:NoSchedule
  • Pods need matching tolerations to schedule on tainted nodes
  • Common for GPU nodes or specialized hardware
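
A pod lands on the tainted node above only with a matching toleration; a sketch of the relevant pod-spec fragment (the container name and image are placeholders):

```yaml
# Toleration matching the gpu=true:NoSchedule taint shown above
spec:
  tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
  containers:
  - name: cuda-app        # placeholder container
    image: nvidia/cuda    # placeholder image
```

Note a toleration only permits scheduling on the tainted node; pair it with a nodeSelector or affinity to require it.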

Set resource quotas and limits per namespace

Sets resource limits for namespaces to prevent overallocation

Code
Terminal window
# Create resource quota for namespace
kubectl create quota myrquota --hard=pods=10,cpu=3,memory=10Gi -n development
# View resource quotas
kubectl get resourcequota -n development
# Describe quota details
kubectl describe resourcequota myrquota -n development
# Create with YAML for more control
kubectl apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
name: compute-quota
namespace: development
spec:
hard:
requests.cpu: "10"
requests.memory: "20Gi"
limits.cpu: "20"
limits.memory: "40Gi"
EOF
Execution
Terminal window
kubectl get resourcequota -A
Output
Terminal window
NAMESPACE     NAME       AGE   REQUEST
development   myrquota   5d    cpu: 500m/3, memory: 2Gi/10Gi
  • Quotas prevent namespace from consuming excessive cluster resources
  • Pods larger than quota cannot be created
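
Quotas pair naturally with a LimitRange, which supplies per-container defaults so pods that omit requests/limits still count against the quota instead of being rejected; a sketch (the name and values are illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits        # illustrative name
  namespace: development
spec:
  limits:
  - type: Container
    default:                  # applied as limits when unset
      cpu: 500m
      memory: 512Mi
    defaultRequest:           # applied as requests when unset
      cpu: 100m
      memory: 128Mi
```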

Pod Management

Create, manage, inspect, and debug Kubernetes pods

Creating and Listing Pods

Create pods imperatively and declaratively, list and filter them

Create pods imperatively with kubectl run

Creates pods using imperative kubectl run command

Code
Terminal window
# Create simple pod from image
kubectl run nginx-pod --image=nginx
# Create pod with port mapping
kubectl run web --image=nginx --port=8080
# Note: recent kubectl removed the --requests/--limits flags from kubectl run;
# define resource requests/limits in a YAML manifest instead
# Create pod with command
kubectl run busybox --image=busybox --command -- sleep 3600
# Create pod in specific namespace
kubectl run test-pod --image=alpine -n development
# Create pod and output YAML (dry-run)
kubectl run nginx-pod --image=nginx --dry-run=client -o yaml
Execution
Terminal window
kubectl run test-pod --image=alpine --dry-run=client -o yaml
Output
Terminal window
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: test-pod
spec:
  containers:
  - image: alpine
    name: test-pod
  • Imperative approach is fast for quick testing
  • Use dry-run to preview YAML before creating

Create pods declaratively with YAML manifests

Creates pods using declarative YAML manifests

Code
Terminal window
# Create pod from YAML file
kubectl apply -f pod.yaml
# Create pod from inline YAML
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
name: web-pod
namespace: default
labels:
app: web
spec:
containers:
- name: nginx
image: nginx:latest
ports:
- containerPort: 80
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
- name: sidecar
image: busybox
command: ['sleep', '3600']
EOF
# Verify pod creation
kubectl get pods
Execution
Terminal window
kubectl get pods
Output
Terminal window
NAME READY STATUS RESTARTS AGE
web-pod 2/2 Running 0 2m
  • Declarative approach is preferred for reproducibility
  • YAML files can be version controlled
  • Multiple containers can run in same pod

List and filter pods

Lists pods with various filtering and output options

Code
Terminal window
# List pods in current namespace
kubectl get pods
# List pods with detailed info
kubectl get pods -o wide
# List pods across all namespaces
kubectl get pods -A
# List pods with labels
kubectl get pods --show-labels
# Filter pods by label
kubectl get pods -l app=web
# Filter by multiple labels
kubectl get pods -l app=web,tier=frontend
# List pods with custom columns
kubectl get pods -o custom-columns=NAME:metadata.name,STATUS:status.phase,IP:status.podIP
Execution
Terminal window
kubectl get pods -o wide
Output
Terminal window
NAME STATUS IP NODE NOMINATED
web-pod Running 10.244.0.5 minikube <none>
  • Default shows only current namespace
  • -o flag controls output format (json, yaml, custom-columns)

Inspecting and Debugging Pods

Describe, view logs, and debug pod issues

Describe pods and view details

Shows detailed pod information including events and status

Code
Terminal window
# Get basic information about pod
kubectl get pod web-pod
# Get detailed pod information
kubectl describe pod web-pod
# View pod definition in YAML
kubectl get pod web-pod -o yaml
# View pod in JSON format
kubectl get pod web-pod -o json
# Extract specific fields with JSONPath
kubectl get pod web-pod -o jsonpath='{.status.phase}'
kubectl get pod web-pod -o jsonpath='{.spec.containers[0].image}'
Execution
Terminal window
kubectl describe pod web-pod
Output
Terminal window
Name: web-pod
Namespace: default
Status: Running
IP: 10.244.0.5
Containers:
nginx:
Image: nginx:latest
State: Running
  • describe shows useful events and error messages
  • Events help identify why pods fail to start

View pod logs and stream output

Shows container logs for debugging application issues

Code
Terminal window
# View logs from pod
kubectl logs web-pod
# View logs from specific container in multi-container pod
kubectl logs web-pod -c nginx
# Stream logs in real-time
kubectl logs -f web-pod
# View logs from previous container (crashed pods)
kubectl logs web-pod --previous
# Show logs with timestamps
kubectl logs web-pod --timestamps=true
# Tail last 50 lines
kubectl logs web-pod --tail=50
# View logs from deployment pods
kubectl logs -l app=web --max-log-requests=10
Execution
Terminal window
kubectl logs web-pod --tail=20
Output
Terminal window
192.168.1.1 - - [28/Feb/2025:10:30:00] "GET / HTTP/1.1" 200 612
192.168.1.2 - - [28/Feb/2025:10:30:01] "GET /index.html HTTP/1.1" 200 612
  • -f flag tails logs in real-time like tail -f
  • --previous shows logs from before container restart

Interactive debugging and shell access

Executes commands and provides shell access to running pods

Code
Terminal window
# Execute command in running pod
kubectl exec web-pod -- ls -la
# Get interactive shell in pod
kubectl exec -it web-pod -- /bin/bash
kubectl exec -it web-pod -- /bin/sh
# Execute command in specific container
kubectl exec -it web-pod -c nginx -- /bin/bash
# Run debugging sidecar in pod
kubectl debug web-pod -it --image=busybox
# Copy files from pod
kubectl cp web-pod:/var/www/html/index.html ./index.html
# Copy files to pod
kubectl cp ./config.yaml web-pod:/etc/config.yaml
Execution
Terminal window
kubectl exec -it web-pod -- hostname
Output
Terminal window
web-pod
  • -i flag keeps stdin open, -t allocates tty
  • Useful for runtime troubleshooting and inspection

Deleting and Cleaning Up Pods

Delete pods and manage pod lifecycle

Delete single and multiple pods

Deletes pods from the cluster

Code
Terminal window
# Delete single pod
kubectl delete pod web-pod
# Delete multiple pods by name
kubectl delete pod web-pod app-pod db-pod
# Delete all pods in namespace
kubectl delete pods --all
# Delete all pods in all namespaces
kubectl delete pods -A --all
# Delete pod defined in a manifest file
kubectl delete -f pod.yaml
Execution
Terminal window
kubectl delete pod web-pod
Output
Terminal window
pod "web-pod" deleted
  • Deletion starts immediately, but pods receive a graceful termination period (30s by default)
  • Bare pods are not recreated; use Deployments for self-healing

Graceful pod termination and force delete

Gracefully terminates pods with shutdown timeout

Code
Terminal window
# Delete with grace period (seconds to shutdown cleanly)
kubectl delete pod web-pod --grace-period=30
# Force delete immediately (no grace period)
kubectl delete pod web-pod --grace-period=0 --force
# Delete using label selector
kubectl delete pods -l app=web
# Delete using field selector
kubectl delete pods --field-selector=status.phase=Failed
# Check termination status during deletion
kubectl get pod web-pod --watch
Execution
Terminal window
kubectl delete pod web-pod --grace-period=10
Output
Terminal window
pod "web-pod" deleted
  • Default grace period is 30 seconds
  • Pod has time to close connections and save state
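
Applications that need cleanup time can pair the grace period with a preStop hook, which runs before the container receives SIGTERM; a sketch (the container name, image, and sleep duration are illustrative):

```yaml
spec:
  terminationGracePeriodSeconds: 30
  containers:
  - name: app                  # placeholder container
    image: myapp:v1            # placeholder image
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 5"]  # give load balancers time to drain
```

The preStop duration counts against the grace period, so keep it well below `terminationGracePeriodSeconds`.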

Deployment Management

Deploy applications, manage replicas, and perform rolling updates

Creating Deployments

Create deployments imperatively and declaratively

Create deployments imperatively

Creates deployments using imperative kubectl commands

Code
Terminal window
# Create deployment from image
kubectl create deployment web --image=nginx
# Create deployment with replicas
kubectl create deployment web --image=nginx --replicas=3
# Create deployment and save YAML
kubectl create deployment web --image=nginx --dry-run=client -o yaml > web-deployment.yaml
# Create deployment with a container port
kubectl create deployment web --image=nginx --port=80 --replicas=3
# Verify deployment creation
kubectl get deployments
kubectl get pods
Execution
Terminal window
kubectl create deployment web --image=nginx --replicas=3
Output
Terminal window
deployment.apps/web created
  • create is imperative, while apply is declarative
  • Deployments automatically create ReplicaSet

Create deployments with YAML manifests

Creates deployments declaratively with full control

Code
Terminal window
# Create deployment from YAML
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
  labels:
    app: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 512Mi
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 10
EOF
# List deployments
kubectl get deployments
Execution
Terminal window
kubectl get deployments -o wide
Output
Terminal window
NAME READY UP-TO-DATE AVAILABLE AGE
web-deployment 3/3 3 3 2m
  • YAML approach is reproducible and version-controllable
  • spec.replicas defines number of pod replicas
  • selector must match template labels

Create deployments with health checks

Creates deployments with health checks for better reliability

Code
Terminal window
# Create deployment with liveness and readiness probes
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      containers:
      - name: app
        image: myapp:v1
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 5
EOF
Execution
Terminal window
kubectl describe deployment app-deployment
Output
Terminal window
Name: app-deployment
Replicas: 2 desired | 2 updated | 2 ready
Strategy: RollingUpdate
  • Liveness probe restarts unhealthy containers
  • Readiness probe controls traffic to pods

Scaling Deployments

Scale deployments up and down dynamically

Manually scale deployments

Scales deployments manually by changing replica count

Code
Terminal window
# Get deployment info
kubectl get deployments
# Scale deployment to 5 replicas
kubectl scale deployment web-deployment --replicas=5
# Scale multiple deployments
kubectl scale deployment web-deployment app-deployment --replicas=3
# Verify scaling
kubectl get deployments
kubectl get pods
# Scale down to 0 (stop deployment)
kubectl scale deployment web-deployment --replicas=0
Execution
Terminal window
kubectl scale deployment web-deployment --replicas=5
Output
Terminal window
deployment.apps/web-deployment scaled
  • Scaling is immediate
  • Previous pods will be terminated gracefully

Set up horizontal pod autoscaling

Sets up automatic scaling based on metrics

Code
Terminal window
# Create HPA imperatively
kubectl autoscale deployment web-deployment --min=1 --max=10 --cpu-percent=80
# View HPA status
kubectl get hpa
# Describe HPA details
kubectl describe hpa web-deployment
# Create HPA with YAML for more control
kubectl apply -f - <<EOF
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
EOF
Execution
Terminal window
kubectl get hpa
Output
Terminal window
NAME REFERENCE TARGETS MINPODS MAXPODS
web-hpa Deployment/web-deployment 45%/80% 2 10
  • Requires metrics-server for CPU/memory metrics
  • HPA v2 supports custom metrics
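
As a sketch of the custom-metrics support mentioned above, a Pods-type metric can drive scaling; this assumes a metrics adapter (e.g. Prometheus Adapter) is installed and exposes a metric named `http_requests_per_second`, which is purely illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa-custom                   # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second   # assumed metric from an adapter
      target:
        type: AverageValue
        averageValue: "100"              # scale to keep ~100 req/s per pod
```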

Monitor scaling events and history

Monitors horizontal pod autoscaling events

Code
Terminal window
# Monitor scaling in real-time
kubectl get hpa --watch
# Check HPA events
kubectl describe hpa web-hpa
# View scaling history
kubectl get events --field-selector involvedObject.name=web-deployment
# Check deployment history
kubectl rollout history deployment web-deployment
Execution
Terminal window
kubectl get hpa --watch
Output
Terminal window
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
web-hpa Deployment/web-deploy 88%/80% 2 10 8 3m
  • HPA cooldown prevents rapid scaling churn
  • Monitor targets to verify autoscaling behavior

Updating and Rolling Back Deployments

Update application versions and manage rollouts

Update deployment images

Updates deployment image to new version

Code
Terminal window
# Update image in deployment
kubectl set image deployment/web-deployment nginx=nginx:1.22
# Update multiple containers (--record is deprecated)
kubectl set image deployment/app app=myapp:v2 sidecar=sidecar:v1
# Update image via patch
kubectl patch deployment web-deployment --type=strategic \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"nginx","image":"nginx:1.22"}]}}}}'
# Watch rollout progress
kubectl rollout status deployment/web-deployment
Execution
Terminal window
kubectl set image deployment/web-deployment nginx=nginx:1.22
Output
Terminal window
deployment.apps/web-deployment image updated
  • Triggers rolling update by default
  • Old pods are gradually replaced with new version

Perform rolling updates and monitor progress

Monitors and controls rolling update process

Code
Terminal window
# Apply updated deployment
kubectl apply -f web-deployment.yaml
# Watch rollout status
kubectl rollout status deployment/web-deployment
# Check rollout history
kubectl rollout history deployment/web-deployment
# View specific revision details
kubectl rollout history deployment/web-deployment --revision=2
# Pause rollout if issues detected
kubectl rollout pause deployment/web-deployment
# Resume paused rollout
kubectl rollout resume deployment/web-deployment
Execution
Terminal window
kubectl rollout status deployment/web-deployment
Output
Terminal window
deployment "web-deployment" successfully rolled out
  • Pause allows verification before continuing update
  • History shows all previous revisions

Rollback deployments to previous versions

Reverts deployment to previous working version

Code
Terminal window
# Rollback to previous revision
kubectl rollout undo deployment/web-deployment
# Rollback to specific revision
kubectl rollout undo deployment/web-deployment --to-revision=2
# Check rollback status
kubectl rollout status deployment/web-deployment
# Verify rollback with describe
kubectl describe deployment web-deployment
# Check pod images to confirm rollback
kubectl get pods -o wide
Execution
Terminal window
kubectl rollout undo deployment/web-deployment
Output
Terminal window
deployment.apps/web-deployment rolled back
  • Undo creates new ReplicaSet with old version
  • Useful for quick recovery from bad deployments

Service & Ingress

Expose applications with Services and Ingress

Creating Services

Expose pods with ClusterIP, NodePort, and LoadBalancer services

Create services imperatively

Creates services to expose deployments within or outside cluster

Code
Terminal window
# Expose deployment as ClusterIP service
kubectl expose deployment web-deployment --type=ClusterIP --port=80
# Expose as NodePort service
kubectl expose deployment web-deployment --type=NodePort --port=80 --target-port=8080
# Expose as LoadBalancer service
kubectl expose deployment web-deployment --type=LoadBalancer --port=80
# List created services
kubectl get svc
# Get service details
kubectl describe svc web-deployment
Execution
Terminal window
kubectl expose deployment web-deployment --type=ClusterIP --port=80
Output
Terminal window
service/web-deployment exposed
  • ClusterIP: internal only
  • NodePort: accessible on node IP
  • LoadBalancer: managed external IP

Create services with YAML

Creates services declaratively with full control

Code
Terminal window
# Create ClusterIP service
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: ClusterIP
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
EOF
# Create NodePort service
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
EOF
# Create LoadBalancer service
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
EOF
Execution
Terminal window
kubectl get svc
Output
Terminal window
NAME TYPE CLUSTER-IP PORT(S)
web-service ClusterIP 10.0.0.1 80/TCP
  • selector determines which pods receive traffic
  • targetPort is container port, port is service port

List and inspect services

Lists services and their endpoints

Code
Terminal window
# List services
kubectl get svc
# List in all namespaces
kubectl get svc -A
# Get service endpoints
kubectl get endpoints
# Describe service details
kubectl describe svc web-service
# Get service YAML
kubectl get svc web-service -o yaml
# Watch for external IP (LoadBalancer)
kubectl get svc -w
Execution
Terminal window
kubectl get svc -o wide
Output
Terminal window
NAME TYPE SELECTOR IP EXTERNAL-IP
web-service ClusterIP app=web 10.0.0.1 <none>
  • Endpoints show which pods the service routes to
  • EXTERNAL-IP may take time for LoadBalancer type

Setting up Ingress

Configure Ingress for HTTP/HTTPS routing

Create basic Ingress routes

Creates basic Ingress for routing HTTP traffic

Code
Terminal window
# Create simple path-based Ingress
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
EOF
# List ingresses
kubectl get ingress
# Get Ingress IP address
kubectl get ingress -o wide
Execution
Terminal window
kubectl get ingress
Output
Terminal window
NAME CLASS HOSTS ADDRESS PORTS
web-ingress nginx * 192.168.1.1 80
  • Requires Ingress Controller (nginx, traefik, etc.)
  • Address is Ingress Controller's IP

Configure hostname-based routing

Routes different hosts to different services

Code
Terminal window
# Create host-based Ingress
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 8080
EOF
Execution
Terminal window
kubectl describe ingress web-ingress
Output
Terminal window
Name: web-ingress
Rules:
Host Path Backends
example.com / web-service:80
api.example.com / api-service:8080
  • Requires DNS pointing to Ingress IP
  • Each host rule acts as a virtual host (name-based routing)
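Host rules can be exercised before DNS is set up by pinning the Host header (or the resolution) to the Ingress address. A sketch, assuming the controller answers on 192.168.1.1 as in the output above:

```shell
# Send the request to the Ingress IP but present the routed hostname
curl -H "Host: example.com" http://192.168.1.1/
curl -H "Host: api.example.com" http://192.168.1.1/

# Equivalent: let curl fake the DNS lookup for the host
curl --resolve example.com:80:192.168.1.1 http://example.com/
```

Requests without a matching Host header fall through to the controller's default backend.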

Configure TLS termination with Ingress

Configures HTTPS/TLS termination for Ingress

Code
Terminal window
# Create TLS secret
kubectl create secret tls web-tls --cert=cert.pem --key=key.pem
# Create Ingress with TLS
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress-tls
spec:
  tls:
  - hosts:
    - example.com
    secretName: web-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80
EOF
# Verify TLS setup
kubectl describe ingress web-ingress-tls
Execution
Terminal window
kubectl get secrets
Output
Terminal window
NAME TYPE DATA AGE
web-tls kubernetes.io/tls 2 3m
  • TLS certificate stored as Secret
  • Ingress Controller terminates SSL
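The cert.pem/key.pem pair above has to come from somewhere; for testing, a self-signed certificate is enough to verify TLS termination (browsers will warn about it). A sketch using openssl, assuming the Ingress host is example.com:

```shell
# Generate a self-signed certificate and key valid for 365 days;
# the CN must match the host in the Ingress rule
openssl req -x509 -nodes -newkey rsa:2048 \
  -keyout key.pem -out cert.pem \
  -days 365 -subj "/CN=example.com"

# Inspect the subject and validity window of the generated cert
openssl x509 -in cert.pem -noout -subject -dates
```

The files feed directly into the kubectl create secret tls command above; for real deployments a tool like cert-manager can automate issuance and renewal.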

Port Forwarding and Debugging

Forward local ports to cluster resources for debugging

Forward local port to pod

Creates local port forward to pod

Code
Terminal window
# Forward local to pod
kubectl port-forward pod/web-pod 8000:80
# Forward with background process
kubectl port-forward pod/web-pod 8000:80 &
# Forward to specific pod in deployment
kubectl port-forward deployment/web-deployment 8000:80
# Forward with address binding
kubectl port-forward pod/web-pod 127.0.0.1:8000:80
# Forward random local port
kubectl port-forward pod/web-pod :80
Execution
Terminal window
kubectl port-forward pod/web-pod 8000:80 &
Output
Terminal window
Forwarding from 127.0.0.1:8000 -> 80
Forwarding from [::1]:8000 -> 80
  • Access pod at localhost:8000 from host
  • Useful for testing without exposing service

Access services through port forwarding

Forwards to Service which routes to backend pods

Code
Terminal window
# Forward to service
kubectl port-forward service/web-service 8000:80
# Forward to service in specific namespace
kubectl port-forward -n production service/db-service 5432:5432
# Forward multiple ports
kubectl port-forward pod/app 8000:8000 8080:8080
# Kill port forward
# Use Ctrl+C or kill process
ps aux | grep port-forward
kill <pid>
Execution
Terminal window
kubectl port-forward service/web-service 8000:80
Output
Terminal window
Forwarding from 127.0.0.1:8000 -> 80
  • Forwarding targets a single pod picked from the service's endpoints
  • kubectl port-forward does not load-balance across pods

Storage Management

Manage persistent storage with volumes and storage classes

Persistent Volumes and Claims

Create and manage persistent storage

Create persistent volumes and claims

Creates persistent storage volumes and claims

Code
Terminal window
# Create Persistent Volume
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-001
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  hostPath:
    path: /data/pv-001
EOF
# Create Persistent Volume Claim
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-001
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: manual
  resources:
    requests:
      storage: 5Gi
EOF
# List PVs and PVCs
kubectl get pv
kubectl get pvc
Execution
Terminal window
kubectl get pv,pvc
Output
Terminal window
NAME CAPACITY ACCESSMODES STATUS CLAIM
pv-001 10Gi RWO Bound default/pvc-001
  • PV is cluster resource, PVC is namespace resource
  • Status Bound means PVC successfully claimed PV

Use volumes in pod specifications

Mounts persistent volume in pod using PVC

Code
Terminal window
# Create pod with PVC volume
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: data-app
spec:
  containers:
  - name: app
    image: busybox
    command: ['sleep', '3600']
    volumeMounts:
    - name: data-volume
      mountPath: /data
  volumes:
  - name: data-volume
    persistentVolumeClaim:
      claimName: pvc-001
EOF
# Verify PVC is mounted
kubectl get pod data-app -o yaml | grep -A5 volumes
Execution
Terminal window
kubectl describe pod data-app | grep -A5 Mounts
Output
Terminal window
Mounts:
/data from data-volume (rw)
  • mountPath is where volume appears in container
  • The referenced PVC must exist (and bind to a PV) or the pod stays Pending
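To confirm the claim is actually writable, write through the mount and read it back. A sketch against the data-app pod above:

```shell
# Write a file through the mounted volume
kubectl exec data-app -- sh -c 'echo hello > /data/test.txt'

# Read it back; the data survives container restarts on the same PVC
kubectl exec data-app -- cat /data/test.txt
```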

Manage storage lifecycle

Manages PV/PVC lifecycle and reclamation policies

Code
Terminal window
# Check PVC details
kubectl describe pvc pvc-001
# View PV details
kubectl describe pv pv-001
# Delete PVC
kubectl delete pvc pvc-001
# Delete PV
kubectl delete pv pv-001
# Check reclaim policy behavior
# - Retain: Keep PV after PVC deletion
# - Delete: Remove PV after PVC deletion
# - Recycle: Clear PV data (deprecated)
Execution
Terminal window
kubectl describe pvc pvc-001
Output
Terminal window
Name: pvc-001
Status: Bound
Volume: pv-001
Capacity: 5Gi
  • Reclaim policy determines what happens after PVC deletion
  • Retain preserves data for manual recovery

Storage Classes and Dynamic Provisioning

Use storage classes for dynamic volume provisioning

Create and list storage classes

Creates storage classes for automatic volume provisioning

Code
Terminal window
# Create storage class
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storage
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  iops: "3000"
  throughput: "125"
reclaimPolicy: Delete
allowVolumeExpansion: true
EOF
# List storage classes
kubectl get storageclass
# Set default storage class
kubectl patch storageclass fast-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Execution
Terminal window
kubectl get storageclass
Output
Terminal window
NAME PROVISIONER RECLAIMPOLICY
fast-storage ebs.csi.aws.com Delete
  • Provisioner depends on cloud provider
  • Parameters vary by provisioner

Use storage class in PVC

Automatically provisions PV when PVC is created

Code
Terminal window
# Create PVC using storage class
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-storage
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: fast-storage
  resources:
    requests:
      storage: 50Gi
EOF
# Monitor dynamic PV creation
kubectl get pv -w
Execution
Terminal window
kubectl get pvc
Output
Terminal window
NAME STATUS VOLUME CAPACITY
app-storage Bound pvc-abc123 50Gi
  • Storage class provisioner automatically creates PV
  • No need to manually create PV first

Expand persistent volumes

Expands PVC size without downtime

Code
Terminal window
# Edit PVC to increase size
kubectl patch pvc app-storage -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'
# Or edit directly
kubectl edit pvc app-storage
# Monitor expansion progress
kubectl describe pvc app-storage
# Verify expansion in pod
kubectl exec -it <pod> -- df /data
Execution
Terminal window
kubectl describe pvc app-storage
Output
Terminal window
Name: app-storage
Capacity: 100Gi
  • allowVolumeExpansion must be true in StorageClass
  • File-system expansion may need a pod restart, depending on the CSI driver

Volume Types and EmptyDir

Use different volume types for various scenarios

Use emptyDir and hostPath volumes

Uses emptyDir for temporary storage and hostPath for node access

Code
Terminal window
# Create pod with emptyDir and hostPath
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: app-with-volumes
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: cache
      mountPath: /cache
    - name: host-data
      mountPath: /host-data
  volumes:
  - name: cache
    emptyDir: {}
  - name: host-data
    hostPath:
      path: /data
      type: Directory
EOF
Execution
Terminal window
kubectl get pod app-with-volumes -o yaml
Output
Terminal window
volumes:
- name: cache
  emptyDir: {}
  • emptyDir deleted when pod terminates
  • hostPath accesses node filesystem

Mount ConfigMaps and Secrets as volumes

Mounts ConfigMaps and Secrets as volumes

Code
Terminal window
# Create ConfigMap
kubectl create configmap app-config --from-literal=key1=value1
# Create Secret
kubectl create secret generic app-secret --from-literal=password=secret
# Create pod mounting both
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: app-with-config
spec:
  containers:
  - name: app
    image: busybox
    command: ['sleep', '3600']
    volumeMounts:
    - name: config
      mountPath: /etc/config
    - name: secret
      mountPath: /etc/secrets
  volumes:
  - name: config
    configMap:
      name: app-config
  - name: secret
    secret:
      secretName: app-secret
EOF
Execution
Terminal window
kubectl get configmap,secret
Output
Terminal window
NAME DATA AGE
configmap/app-config 1 2m
  • ConfigMap/Secret updates appear in mounted files
  • Good for configuration without pod restart
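The mounted data can be verified from inside the pod; each ConfigMap or Secret key becomes a file named after the key. A sketch against the app-with-config pod above:

```shell
# Each key appears as a file under the mount path
kubectl exec app-with-config -- ls /etc/config /etc/secrets

# File contents are the (already decoded) values
kubectl exec app-with-config -- cat /etc/config/key1
kubectl exec app-with-config -- cat /etc/secrets/password
</imports stripped>
```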

Security & RBAC

Secure cluster with authentication, authorization, and policies

RBAC Roles and Bindings

Control access with Roles and RoleBindings

Create RBAC roles and bindings

Creates RBAC roles and grants permissions to service accounts

Code
Terminal window
# Create service account
kubectl create serviceaccount app-sa -n development
# Create role with permissions
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: development
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]
EOF
# Create role binding
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: development
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: app-sa
  namespace: development
EOF
# List RBAC resources
kubectl get roles,rolebindings -n development
Execution
Terminal window
kubectl get serviceaccounts,roles,rolebindings -n development
Output
Terminal window
NAME SECRETS AGE
serviceaccount/app-sa 1 2m
NAME CREATED AT
role.rbac...pod-reader 2m
  • verbs define allowed actions (get, list, create, delete)
  • apiGroups depend on resource type (empty = core API)

Create ClusterRoles for cluster-wide permissions

Grants cluster-wide permissions across all namespaces

Code
Terminal window
# Create cluster role
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: node-reader
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["nodes/stats"]
  verbs: ["get"]
EOF
# Create cluster role binding
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-reader-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: node-reader
subjects:
- kind: ServiceAccount
  name: monitoring-sa
  namespace: monitoring
EOF
Execution
Terminal window
kubectl get clusterroles,clusterrolebindings
Output
Terminal window
NAME CREATED AT
clusterrole.rbac.../node-reader 2m
  • ClusterRole is cluster-scoped, not namespace-scoped
  • Use for cluster admins and system components

Check permissions and debug RBAC

Verifies RBAC permissions and troubleshoots access issues

Code
Terminal window
# Check what user can do
kubectl auth can-i get pods -n development --as=system:serviceaccount:development:app-sa
# Check multiple permissions
kubectl auth can-i create deployments -n default
kubectl auth can-i delete pods -n default
# List all role bindings for user
kubectl get rolebinding,clusterrolebinding -A
# Describe role to see permissions
kubectl describe role pod-reader -n development
Execution
Terminal window
kubectl auth can-i get pods --as=system:serviceaccount:development:app-sa
Output
Terminal window
yes
  • can-i helps verify permissions before assigning access
  • Format: system:serviceaccount:namespace:name
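Rather than probing one verb at a time, can-i also supports --list to enumerate everything a subject may do. A sketch:

```shell
# List all resources/verbs the service account is allowed, in one call
kubectl auth can-i --list -n development \
  --as=system:serviceaccount:development:app-sa
```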

Network Policies

Control network traffic with network policies

Create network policies for traffic control

Creates network policies to restrict traffic

Code
Terminal window
# Create deny-all network policy
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
EOF
# Create allow policy for specific pods
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-traffic
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 80
EOF
Execution
Terminal window
kubectl get networkpolicies -n production
Output
Terminal window
NAME POD-SELECTOR AGE
deny-all <none> 2m
allow-web-traffic app=web 1m
  • NetworkPolicy requires network plugin with support
  • podSelector: {} matches all pods
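Whether the ingress rules behave as intended can be probed from a temporary pod. A sketch, assuming a web-service Service selecting the app=web pods; note the deny-all policy above also blocks the probe's own egress, so it needs an egress allowance (or deny-all removed) while testing:

```shell
# This pod carries the allowed role=frontend label, so the
# allow-web-traffic ingress rule should admit its request
kubectl run probe --rm -it --restart=Never -n production \
  --labels=role=frontend --image=busybox -- \
  wget -qO- -T 5 http://web-service

# Without the label, the same request should time out
kubectl run probe2 --rm -it --restart=Never -n production \
  --image=busybox -- wget -qO- -T 5 http://web-service
```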

Configure egress policies

Restricts outbound traffic from pods

Code
Terminal window
# Allow specific egress traffic
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: database
    ports:
    - protocol: TCP
      port: 5432
  - to:
    - namespaceSelector:
        matchLabels:
          name: kube-system
    ports:
    - protocol: UDP
      port: 53
EOF
Execution
Terminal window
kubectl describe networkpolicy allow-egress -n production
Output
Terminal window
Name: allow-egress
Namespace: production
Egress:
To: app=database, port: 5432
  • Egress allows specifying allowed destination pods
  • The UDP 53 rule preserves DNS resolution, which egress policies otherwise block

Secrets and Secret Management

Securely store and manage sensitive data

Create and manage secrets

Creates secrets to store sensitive data

Code
Terminal window
# Create secret from literals
kubectl create secret generic db-secret \
--from-literal=user=admin \
--from-literal=password=secretpass
# Create secret from file
kubectl create secret generic app-config \
--from-file=config.yaml
# Create docker registry secret
kubectl create secret docker-registry regcred \
--docker-server=myregistry.com \
--docker-username=user \
--docker-password=pass
# List secrets
kubectl get secrets
Execution
Terminal window
kubectl get secrets
Output
Terminal window
NAME TYPE DATA AGE
db-secret Opaque 2 2m
app-config Opaque 1 1m
  • Secrets are base64-encoded, not encrypted by default
  • Consider using encryption at rest in production
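The base64 point is easy to demonstrate locally: encoding is fully reversible, so anyone who can read the Secret object can recover the plaintext. A minimal check:

```shell
# Encode a value the way Kubernetes stores it in .data
encoded=$(printf '%s' 'secretpass' | base64)
echo "$encoded"    # c2VjcmV0cGFzcw==

# Decoding needs no key -- base64 is encoding, not encryption
printf '%s' "$encoded" | base64 -d
```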

Use secrets in pod specifications

Uses secrets as environment variables in pods

Code
Terminal window
# Create pod using secret as environment variables
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: app-with-secret
spec:
  containers:
  - name: app
    image: myapp
    env:
    - name: DB_USER
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: user
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: password
  imagePullSecrets:
  - name: regcred
EOF
# Verify secret is applied
kubectl describe pod app-with-secret
Execution
Terminal window
kubectl get pod app-with-secret -o yaml
Output
Terminal window
env:
- name: DB_USER
  valueFrom:
    secretKeyRef:
      name: db-secret
      key: user
  • imagePullSecrets for private registry authentication
  • Secret data injected at runtime

View and update secrets

Views and manages secrets

Code
Terminal window
# View decoded secret
kubectl get secret db-secret -o jsonpath='{.data.password}' | base64 -d
# Edit secret
kubectl edit secret db-secret
# Delete secret
kubectl delete secret db-secret
# Get secret as YAML
kubectl get secret db-secret -o yaml
Execution
Terminal window
kubectl describe secret db-secret
Output
Terminal window
Name: db-secret
Type: Opaque
Data
user: 5 bytes
password: 10 bytes
  • Base64 decoding shows actual values
  • Be careful with secret exposure in logs

Advanced Operations

Logging, debugging, resource management, and advanced queries

Logging and Debugging

Collect and analyze logs for troubleshooting

Advanced logging and filtering

Gets detailed logs with filtering and streaming

Code
Terminal window
# Get logs from all containers in pod
kubectl logs pod-name --all-containers=true
# Get logs from previous pod instance
kubectl logs pod-name --previous
# Stream logs with timestamps
kubectl logs pod-name --timestamps=true -f
# Get logs from specific time range
kubectl logs pod-name --since=1h
kubectl logs pod-name --since-time='2025-02-28T10:00:00Z'
# Get logs from multiple pods
kubectl logs -f -l app=web --max-log-requests=10
# Tail specific number of lines
kubectl logs pod-name --tail=100
Execution
Terminal window
kubectl logs -f pod-name --tail=50
Output
Terminal window
2025-02-28T10:30:01.123Z INFO Starting application
2025-02-28T10:30:02.456Z INFO Connected to database
  • -f flag streams logs in real-time
  • --previous useful for crashed containers
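kubectl itself offers no log filtering beyond time and line limits, so it is commonly combined with standard shell tools. A sketch (the awk field assumes the level is the second column, as in the sample output above):

```shell
# Show only error lines from the last hour
kubectl logs pod-name --since=1h | grep -i error

# Count recent lines per log level (level assumed to be field 2)
kubectl logs pod-name --tail=1000 | awk '{print $2}' | sort | uniq -c
```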

Describe and inspect resources for debugging

Examines resource details and troubleshoots issues

Code
Terminal window
# Get full resource details
kubectl describe pod pod-name
# Get resource events
kubectl get events
# Watch resource for changes
kubectl get pods --watch
# Get events for specific resource
kubectl get events --field-selector involvedObject.name=pod-name
# Describe deployment to see replica status
kubectl describe deployment web-deployment
# Check resource conditions
kubectl get pod pod-name -o jsonpath='{.status.conditions}' | jq .
Execution
Terminal window
kubectl describe pod pod-name
Output
Terminal window
Name: pod-name
Status: Running
Conditions:
Type Status Reason
Ready True ContainersReady
  • Events show resource state changes
  • Conditions show readiness and health status

Advanced debugging with temporary containers

Creates temporary debugging containers

Code
Terminal window
# Create debug container in running pod
kubectl debug pod-name -it --image=busybox
# Debug specific container
kubectl debug pod-name -c container-name -it --image=busybox
# Debug with node access
kubectl debug node/node-name -it --image=ubuntu
# Create copy of pod for debugging
kubectl debug pod-name -it --copy-to=debug-pod
# Share process namespace for debugging
kubectl debug pod-name --target=container-name
Execution
Terminal window
kubectl debug pod-name -it --image=busybox
Output
Terminal window
Debugger started, running in pod-name ephemeral-debug-xyz
/ #
  • Debug containers have tools for troubleshooting
  • Copy-to creates standalone pod for destructive testing

JSONPath Queries and Output Formatting

Extract specific data with JSONPath queries

Extract data with JSONPath

Extracts specific fields using JSONPath syntax

Code
Terminal window
# Get pod names
kubectl get pods -o jsonpath='{.items[*].metadata.name}'
# Get pod IPs
kubectl get pods -o jsonpath='{.items[*].status.podIP}'
# Get image names from deployment
kubectl get deployment web -o jsonpath='{.spec.template.spec.containers[*].image}'
# Get container names and images
kubectl get pods -o jsonpath='{.items[*].spec.containers[*].name}'
# Format output with custom columns
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'
Execution
Terminal window
kubectl get pods -o jsonpath='{.items[*].metadata.name}'
Output
Terminal window
pod1 pod2 pod3
  • jsonpath is powerful for extracting nested data
  • Can combine with other tools like awk

Custom columns and wide output

Creates custom output columns for better readability

Code
Terminal window
# Define custom columns
kubectl get pods \
-o custom-columns=NAME:.metadata.name,READY:.status.conditions[?(@.type=="Ready")].status,IMAGE:.spec.containers[0].image,IP:.status.podIP
# Format with custom columns shorthand
kubectl get pods --sort-by=.metadata.creationTimestamp
# Get pods with sorted output
kubectl get pods --sort-by='{.status.phase}'
# Wide format (standard custom columns)
kubectl get pods -o wide
Execution
Terminal window
kubectl get pods -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,IP:.status.podIP
Output
Terminal window
NAME STATUS IP
pod1 Running 10.244.0.1
pod2 Running 10.244.0.2
  • Custom columns can format complex nested data
  • Sorting by specific fields helps organize output

Complex JSONPath queries with filters

Filters resources based on conditions in JSONPath

Code
Terminal window
# Get pods that are currently running
kubectl get pods -o jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}'
# Get pods with specific label
kubectl get pods -o jsonpath='{.items[?(@.metadata.labels.tier=="web")].metadata.name}'
# Get containers with specific resource requests
kubectl get pods -o jsonpath='{.items[?(@.spec.containers[0].resources.requests.cpu)].metadata.name}'
# Format with line breaks for readability
kubectl get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\t"}{.status.podIP}{"\n"}{end}'
Execution
Terminal window
kubectl get pods -o jsonpath='{.items[?(@.status.phase=="Running")].metadata.name}'
Output
Terminal window
pod1 pod3
  • Filter expressions use @.field syntax
  • Complex queries can extract specific information

Dry-Run and Testing Patterns

Test changes before applying with dry-run

Preview changes with dry-run

Tests manifest application without creating resources

Code
Terminal window
# Preview pod creation
kubectl run test-pod --image=nginx --dry-run=client -o yaml
# Preview deployment creation
kubectl create deployment web --image=nginx --dry-run=server -o yaml
# Preview manifest application
kubectl apply -f deployment.yaml --dry-run=client
# Apply with server-side validation
kubectl apply -f deployment.yaml --dry-run=server
# Save dry-run output for review
kubectl apply -f - --dry-run=client -o yaml > deployment-preview.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:latest
EOF
Execution
Terminal window
kubectl apply -f deployment.yaml --dry-run=client
Output
Terminal window
deployment.apps/web created (dry run)
  • client: validates locally, server: validates on server
  • Useful for checking YAML syntax before applying

Validate resources and configurations

Validates manifests before actual deployment

Code
Terminal window
# Validate YAML syntax
kubectl apply -f deployment.yaml --dry-run=server
# Check if resource would be created
kubectl create deployment test --image=alpine --dry-run=client
# Validate all manifests in directory
kubectl apply -f ./manifests/ --dry-run=server
# Test with specific namespace
kubectl apply -f deployment.yaml --namespace=test --dry-run=client
# Get validation details
kubectl apply -f deployment.yaml --dry-run=server -o yaml
Execution
Terminal window
kubectl apply -f deployment.yaml --dry-run=server -o jsonpath='{.metadata.name}'
Output
Terminal window
web
  • Server-side dry-run catches API errors
  • Good for CI/CD pipeline validation

Test resource limits and constraints

Validates resource requests, limits, and quotas with server-side dry-run

Code
Terminal window
# Create pod with resource limits to test
kubectl apply -f - --dry-run=server <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: resource-test
spec:
  containers:
  - name: app
    image: myapp
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 500m
        memory: 512Mi
EOF
# Check if quota allows creation
kubectl apply -f deployment.yaml --dry-run=server --validate=strict
# Test PVC binding
kubectl apply -f pvc.yaml --dry-run=server
Execution
Terminal window
echo '{"apiVersion":"v1","kind":"Pod","metadata":{"name":"test"},"spec":{"containers":[{"name":"app","image":"nginx"}]}}' | kubectl apply -f - --dry-run=server --validate=strict