Kubernetes Basics Every Backend Developer Should Know
You don't need to be a DevOps engineer to understand Kubernetes. Learn the core concepts — Pods, Deployments, Services — that every backend developer encounters in production.
Kubernetes (K8s) underpins much of today's production backend infrastructure. Even if you’re not the person configuring clusters, you’ll read K8s manifests, debug pod crashes, and push deployments. Understanding the basics makes you a far more effective backend developer.
This guide covers what you actually need to know day-to-day — not cluster administration, but the developer-facing concepts.
What Kubernetes Does
Kubernetes is a container orchestration system. You give it a set of containers to run and describe how you want them to run — how many copies, what resources they need, how they communicate. Kubernetes handles scheduling, restarts, scaling, and networking across a cluster of machines.
Think of it as a very smart process manager for containers at scale.
The Core Objects
Pod
A Pod is the smallest deployable unit in Kubernetes — usually one container (sometimes a few tightly coupled ones). Pods are ephemeral: they can be killed and rescheduled at any time.
You almost never create Pods directly. You create Deployments that manage Pods for you.
Deployment
A Deployment declares the desired state for a set of Pods. You say “run 3 copies of this container image” and Kubernetes keeps 3 running, restarting any that crash.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api-server
  template:
    metadata:
      labels:
        app: api-server
    spec:
      containers:
        - name: api
          image: myrepo/api-server:v1.2.3
          ports:
            - containerPort: 8000
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: db-secret
                  key: url
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
Service
Pods have dynamic IP addresses — they change every time a Pod restarts. A Service gives you a stable endpoint that routes traffic to the correct Pods using label selectors.
apiVersion: v1
kind: Service
metadata:
  name: api-server
spec:
  selector:
    app: api-server
  ports:
    - port: 80
      targetPort: 8000
  type: ClusterIP  # internal only; use LoadBalancer to expose externally
ConfigMap and Secret
- ConfigMap — non-sensitive configuration (feature flags, hostnames)
- Secret — sensitive data (passwords, API keys), stored base64-encoded (encoding, not encryption)
Never hardcode secrets in your container image or manifest. Reference them as environment variables or volume mounts.
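As a sketch, the db-secret referenced by the Deployment above might be defined like this, alongside a hypothetical ConfigMap (the api-config name and its keys are illustrative; stringData lets you write plain values, which Kubernetes stores base64-encoded):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-config
data:
  LOG_LEVEL: "info"         # non-sensitive settings live here
  FEATURE_FLAGS: "beta-ui"
---
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
stringData:                 # plain text here; stored base64-encoded at rest
  url: postgres://user:password@db-host:5432/app
```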
Essential kubectl Commands
kubectl is the CLI for interacting with a cluster.
# List resources
kubectl get pods
kubectl get deployments
kubectl get services
# Describe a resource (shows events, useful for debugging)
kubectl describe pod <pod-name>
# View logs
kubectl logs <pod-name>
kubectl logs <pod-name> -f # Stream live logs
kubectl logs <pod-name> --previous # Logs from the last crashed container
# Open a shell inside a pod
kubectl exec -it <pod-name> -- /bin/sh
# Apply a manifest
kubectl apply -f deployment.yaml
# Delete a resource
kubectl delete pod <pod-name>
# Scale a deployment
kubectl scale deployment api-server --replicas=5
# View rollout status
kubectl rollout status deployment/api-server
# Rollback if something goes wrong
kubectl rollout undo deployment/api-server
Deploying a New Version
Update the image tag in your manifest and re-apply it, or set the new image directly from the command line:
kubectl set image deployment/api-server api=myrepo/api-server:v1.2.4
Kubernetes performs a rolling update by default — it brings up new Pods before terminating old ones, keeping the service available.
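The rollout behavior can be tuned in the Deployment spec. A sketch (the field names are standard; the values are illustrative):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most 1 extra Pod above the desired replica count
      maxUnavailable: 0  # never drop below the desired count during a rollout
```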
Debugging a Crashing Pod
When a Pod is in CrashLoopBackOff:
- kubectl describe pod <pod-name> — look at Events at the bottom for the exit reason
- kubectl logs <pod-name> --previous — logs from the container before it crashed
- Check resources.limits — the container may be getting OOMKilled if it exceeds memory limits
- Check liveness/readiness probes — misconfigured probes cause Kubernetes to kill healthy containers
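Probes are worth checking because they directly control restarts. A sketch of what they might look like on the api container above (the /healthz and /ready paths are assumptions; use whatever endpoints your service actually exposes):

```yaml
containers:
  - name: api
    image: myrepo/api-server:v1.2.3
    livenessProbe:             # restart the container if this fails
      httpGet:
        path: /healthz
        port: 8000
      initialDelaySeconds: 10  # give the app time to boot before probing
      periodSeconds: 15
    readinessProbe:            # stop routing traffic to the Pod if this fails
      httpGet:
        path: /ready
        port: 8000
      periodSeconds: 5
```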
Resource Requests and Limits
Always set these. Without them, one runaway pod can starve the entire node.
resources:
  requests:
    memory: "128Mi"  # Guaranteed allocation — used for scheduling
    cpu: "100m"      # 100 millicores = 0.1 CPU cores
  limits:
    memory: "256Mi"  # Container killed if it exceeds this
    cpu: "500m"
What to Learn Next
This covers the developer fundamentals. When you’re ready to go deeper, explore:
- Ingress — HTTP routing and TLS termination at the cluster edge
- Horizontal Pod Autoscaler — automatically scale replicas based on CPU/memory
- Namespaces — logical isolation within a cluster
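For a taste of the Horizontal Pod Autoscaler, here is a minimal sketch targeting the api-server Deployment from earlier (the replica bounds and the 70% CPU target are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-server
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-server
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70  # scale up when average CPU exceeds 70% of requests
```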
The official Kubernetes docs are excellent. Start with the Concepts section.
Kubernetes has a steep initial learning curve, but once the mental model clicks, it becomes the clearest way to reason about how production services run.