☸️ Kubernetes · Intermediate

Deploy a Node.js App with zero-downtime rolling updates

⏱ 60 minutes · ☸️ Kubernetes 1.28+ · 🟢 Node.js 20 · 🐳 Docker

Kubernetes orchestrates your containers at production scale. In this complete tutorial, we explore key concepts, deploy a Node.js API on a local cluster (Minikube), expose the application via a Service, and perform a rolling update without service interruption. You'll understand the "why" behind every decision.

Understanding Kubernetes: Fundamental Concepts

📖 Term: Kubernetes Cluster

A cluster is a set of machines (physical or virtual) orchestrated by Kubernetes. Each machine is called a node. The cluster runs your containers (pods), distributes traffic, manages updates, and automatically restarts applications that crash.

Why it's useful: Instead of managing each server manually, you describe the desired state ("I want 3 instances of my API"), and Kubernetes maintains that state automatically.

📖 Term: Minikube

Minikube is a minimal Kubernetes cluster that runs locally on your machine (in a VM or container). It simulates a real Kubernetes cluster for development.

Why use it: Instead of renting a cloud cluster (GKE, EKS) during development, Minikube lets you test for free on your laptop. The concepts are identical, only the infrastructure changes.


Prerequisites

Before you begin, make sure you have:

  1. Docker installed and running
  2. Minikube installed (with Kubernetes 1.28+)
  3. kubectl, the Kubernetes command-line client
  4. Node.js 20 (only needed to run the app outside the container)

1. The Node.js Application

We'll create a simple Express API with three endpoints: the root (for testing), /health (for liveness probes), and /ready (for readiness probes). These endpoints allow Kubernetes to verify that the application is healthy.

app.js
const express = require('express');
const app = express();
const PORT = process.env.PORT || 3000;
const VERSION = process.env.APP_VERSION || 'v1';

// Main route: tests that the API works and returns info
app.get('/', (req, res) => {
  res.json({
    message: 'Node.js API operational',
    version: VERSION,  // Allows seeing which version is running during updates
    hostname: require('os').hostname(),  // Kubernetes pod name (useful to observe traffic distribution)
    timestamp: new Date().toISOString()  // Response timestamp
  });
});

// Liveness probe: Kubernetes uses this route to check if the pod is alive
// If it returns a code != 200, the pod will be restarted
app.get('/health', (req, res) => {
  res.json({ status: 'healthy', version: VERSION });
});

// Readiness probe: Kubernetes uses this route to check if the pod can receive traffic
// Here the check is trivial; a real app would verify its dependencies (DB, cache) first
app.get('/ready', (req, res) => {
  // In production, verify here that DBs and caches are connected
  res.json({ ready: true });
});

// Start the server on the specified port
app.listen(PORT, () => {
  console.log(`Server v${VERSION} started on port ${PORT}`);
});
This file defines a minimal Express API with three routes. The application reads APP_VERSION and PORT from environment variables, which Kubernetes will pass in via the Deployment.
📖 Term: Liveness Probe vs Readiness Probe

Liveness Probe (/health): verifies that the process is still alive. If it fails 3 times in a row, Kubernetes restarts the pod. This detects deadlocks or memory leaks that freeze the app.

Readiness Probe (/ready): verifies that the pod can handle traffic. If it fails, Kubernetes removes the pod from the Service (no restart). This prevents sending traffic to a pod that's initializing a DB connection or undergoing maintenance.

Use case: An app that takes 10 seconds to connect to the DB. During those 10 seconds, readiness = false, but liveness = true (the process is alive). Kubernetes doesn't send traffic, but doesn't restart either.
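The DB-startup scenario can be sketched with plain functions. This is a hypothetical illustration: connectToDb, healthResponse, and readyResponse are invented names, not part of the tutorial's app.js.

```javascript
// Hypothetical sketch of readiness gating: the pod reports alive immediately,
// but only reports ready once its dependency (a simulated DB) is connected.
let dbConnected = false;

// Stand-in for real async initialization (DB, cache, ...)
function connectToDb() {
  dbConnected = true;
}

// Liveness: the process is alive regardless of the DB,
// so Kubernetes should NOT restart the pod while the DB connects
function healthResponse() {
  return { statusCode: 200, body: { status: 'healthy' } };
}

// Readiness: 503 until the DB is up, so the Service withholds traffic
// without triggering a restart
function readyResponse() {
  return dbConnected
    ? { statusCode: 200, body: { ready: true } }
    : { statusCode: 503, body: { ready: false } };
}
```

Wired into the Express routes from app.js, each handler would simply return res.status(resp.statusCode).json(resp.body).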

package.json
{
  "name": "api-k8s",
  "version": "1.0.0",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.19.2"
  }
}
A standard package.json declaring Express as a dependency. The start script launches the application.

2. Dockerfile: Building the Container Image

Dockerfile
# Base image: Node.js 20 on Alpine Linux
# Alpine-based images are far smaller than the Debian-based default, an advantage for Kubernetes
FROM node:20-alpine

# Create working directory
WORKDIR /app

# Copy package.json and package-lock.json if it exists
# We use package*.json (wildcard): copies both files if they exist
COPY package*.json ./

# Install dependencies in production mode only
# npm ci = "clean install" = exact reproduction of versions from package-lock.json
# --omit=dev = don't install devDependencies (the older --only=production flag is deprecated)
RUN npm ci --omit=dev

# Copy the rest of the source code
COPY . .

# Create a non-root group and user for security
# NEVER run Node as root — security vulnerability
RUN addgroup -g 1001 -S nodejs && adduser -S nodeapp -u 1001
USER nodeapp

# Declare the port being listened to (informational, doesn't bind the port)
EXPOSE 3000

# Command to run at container startup
CMD ["node", "app.js"]
Alpine-based images are a fraction of the size of the Debian-based node:20 image (on the order of 130 MB versus roughly 1 GB). In Kubernetes, the smaller the image, the faster pods start and the less network bandwidth pulls consume. Every second counts in production.
The package-lock.json file pins the exact versions of all dependencies. npm ci respects this file, guaranteeing that the container built today behaves like the one built yesterday. npm install may update the lockfile and pull in newer minor versions, introducing unpredictable changes.
If an attacker breaks into the container, at least they won't be root. Running as root increases the damage from a compromise. This is a good security practice called "principle of least privilege".
The Dockerfile builds a Docker image containing: Alpine Linux + Node.js + npm dependencies + your code. This image will run in each Kubernetes pod.
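One companion file is worth adding: COPY . . copies the entire build context, including any host node_modules folder (whose native binaries may not match Alpine) and the .git history. A minimal .dockerignore keeps them out of the image:

```
node_modules
npm-debug.log
.git
Dockerfile
.dockerignore
```

Place it next to the Dockerfile; Docker reads it automatically at build time.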

3. Build and Load the Image into Minikube

Terminal
# Step 1: Start Minikube (if not already running)
minikube start

# Step 2: Point Docker to Minikube's daemon
# eval loads Minikube's Docker environment variables into your shell
eval $(minikube docker-env)

# Step 3: Build the image directly in Minikube's Docker
# (no need to push to Docker Hub)
docker build -t api-nodejs:v1 .

# Step 4: Verify that the image exists in Minikube
docker images | grep api-nodejs
# output: api-nodejs   v1   abc123def456   2 minutes ago   100MB
The complete build process:
  1. minikube start launches a Minikube VM or container with Kubernetes
  2. eval $(minikube docker-env) configures your Docker CLI to talk to the Docker daemon inside Minikube, not the one on your machine
  3. docker build -t api-nodejs:v1 . builds the image in Minikube
  4. docker images lists the images present in Minikube
📖 Term: Docker Registry

A registry is a server that stores Docker images. Examples: Docker Hub (public), Google Container Registry (GCR), Amazon ECR. In production, you push your image to a registry, then Kubernetes downloads it from there.

Why build in Minikube? Minikube runs its own Docker daemon. By running eval $(minikube docker-env), you build the image directly inside it, so it is already where Kubernetes looks for it; no external registry is needed. This is the ideal development approach: fast and free.

4. Kubernetes Manifests: Declarative Infrastructure

📖 Term: Declarative vs Imperative Infrastructure

Imperative: "Start a container, then attach a network, then configure logging..." (step-by-step commands)

Declarative: "Here's the YAML file describing the desired state. Kubernetes, make reality match it."

Kubernetes works declaratively. You describe the desired state in YAML files (Deployment, Service, etc.), then Kubernetes ensures that reality converges to that state — even if you reapply the files 10 times.

4.1 Deployment — Describing Your Application

deployment.yaml
# Which version of the Kubernetes API to use
# apps/v1 is the stable version for Deployments
apiVersion: apps/v1

# The type of resource: Deployment
kind: Deployment

# Metadata of the resource
metadata:
  # Unique name of the Deployment in the cluster
  name: api-nodejs
  # Labels to sort/search resources (optional but best practice)
  labels:
    app: api-nodejs
    tier: api

# Specification: the desired state
spec:
  # Number of pods to maintain (3 copies of the app, for high availability)
  replicas: 3

  # Selector: which pods are managed by this Deployment?
  # The pods having the label app:api-nodejs
  selector:
    matchLabels:
      app: api-nodejs

  # Update strategy: RollingUpdate = zero downtime
  strategy:
    type: RollingUpdate
    rollingUpdate:
      # maxSurge: create 1 EXTRA pod before removing the old one
      # = at one moment, 4 pods (3 + 1 surplus) instead of 3
      # Allows new pods to be ready before old ones are removed
      maxSurge: 1

      # maxUnavailable: 0 = never drop below the desired replica count
      # All 3 replicas stay available throughout the update
      # Guarantee: the service is never interrupted
      maxUnavailable: 0

  # Pod template: description of the container
  template:
    metadata:
      labels:
        # This label will match the selector matchLabels
        app: api-nodejs
    spec:
      containers:
        # Application container
        - name: api-nodejs
          # Docker image to use (built earlier)
          image: api-nodejs:v1
          # imagePullPolicy: Never = use only the local Minikube image
          # (don't try to download from a registry)
          imagePullPolicy: Never

          # Ports exposed by the container
          ports:
            - containerPort: 3000  # The container port

          # Environment variables passed to the container
          env:
            - name: APP_VERSION
              value: "v1"
            - name: PORT
              value: "3000"

          # CPU/memory resources: requests and limits
          resources:
            # MINIMUM resources guaranteed by Kubernetes
            # Kubernetes will only schedule this pod on a node with enough free resources
            requests:
              cpu: "100m"        # 100 milliCPU = 0.1 CPU = 10% of a CPU
              memory: "128Mi"     # 128 MiB
            # MAXIMUM resources allowed
            # Exceeding the memory limit gets the container killed and restarted;
            # exceeding the CPU limit only throttles it
            limits:
              cpu: "250m"        # 0.25 CPU = 25% of a CPU
              memory: "256Mi"     # 256 MiB

          # ── LIVENESS PROBE: Restart pod if API no longer responds ──
          livenessProbe:
            httpGet:
              path: /health         # Call GET /health
              port: 3000
            # Wait 15 seconds before first check (startup time)
            initialDelaySeconds: 15
            # Check every 20 seconds
            periodSeconds: 20
            # Restart after 3 consecutive failures
            failureThreshold: 3

          # ── READINESS PROBE: Remove pod from Service if not ready ──
          readinessProbe:
            httpGet:
              path: /ready          # Call GET /ready
              port: 3000
            # Wait 5 seconds before first check (quick initialization)
            initialDelaySeconds: 5
            # Check every 10 seconds
            periodSeconds: 10
The Deployment is the heart of the Kubernetes configuration: it declares how many replicas to run, which image to use, how to roll out updates, and how to probe pod health. Kubernetes applies these rules automatically and maintains them continuously.

4.2 Service — Exposing the Application to the Network

service.yaml
apiVersion: v1
kind: Service

metadata:
  name: api-nodejs-service
  labels:
    app: api-nodejs

spec:
  # Service type: NodePort
  # Exposes the service on a static port (30000-32767) on each cluster node
  # Useful for development; in production, use LoadBalancer
  type: NodePort

  # Selector: which pods are behind this service?
  # All pods with the label app:api-nodejs (created by the Deployment)
  selector:
    app: api-nodejs

  # Port mappings
  ports:
    - protocol: TCP
      # Service port within the cluster (internal endpoint)
      port: 80
      # Container port (what we expose from the pod)
      targetPort: 3000
      # Port published on each node (accessible from outside the cluster)
      # Users connect to node_ip:30080
      nodePort: 30080
The Service acts as a load balancer and service discovery layer: it provides a single stable entry point for the application, hiding the ephemeral pods behind it.
📖 Term: Kubernetes Service Types
ClusterIP (the default) exposes the Service only inside the cluster; NodePort additionally opens a static port on every node; LoadBalancer asks the cloud provider for an external load balancer. Kubernetes pods are ephemeral: they can die and be replaced at any time, so you cannot rely on a pod's IP address. Whatever the type, the Service provides the abstraction: no matter which pods exist, reaching api-nodejs-service is stable.
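As the comment in service.yaml notes, production clusters typically use type: LoadBalancer. A sketch of the same Service adapted for a cloud cluster (the nodePort line disappears; the cloud provider assigns the external endpoint):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api-nodejs-service
spec:
  # LoadBalancer: the cloud provider (GKE, EKS, ...) provisions an external
  # load balancer that forwards to the pods selected below
  type: LoadBalancer
  selector:
    app: api-nodejs
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
```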

5. Deploy to the Minikube Cluster

Terminal
# Apply the manifests (order doesn't matter: Kubernetes reconciles whatever it is given)
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml

# Check the Deployment
kubectl get deployments
# output:
# NAME          READY   UP-TO-DATE   AVAILABLE   AGE
# api-nodejs    3/3     3            3           1m

# See all running pods
kubectl get pods
# output:
# NAME                         READY   STATUS    RESTARTS   AGE
# api-nodejs-7d9f8b5c9-abc12   1/1     Running   0          2m
# api-nodejs-7d9f8b5c9-def34   1/1     Running   0          2m
# api-nodejs-7d9f8b5c9-ghi56   1/1     Running   0          2m

# See the created services
kubectl get services
# output:
# NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)
# api-nodejs-service   NodePort    10.96.201.45    <none>        80:30080/TCP

# Get the access URL via Minikube
minikube service api-nodejs-service --url
# output: http://192.168.49.2:30080
The output shows that the Service received a stable internal IP (10.96.201.45) and exposes port 30080 on every node.
Terminal — Testing the API
# Get the service URL
URL=$(minikube service api-nodejs-service --url)

# Test the root
curl $URL
# output: {"message":"Node.js API operational","version":"v1","hostname":"api-nodejs-7d9f8b5c9-abc12","timestamp":"2026-04-18T10:45:00.123Z"}

# Test the health probe
curl $URL/health
# output: {"status":"healthy","version":"v1"}

# Test the readiness probe
curl $URL/ready
# output: {"ready":true}
Your Node.js API is now running on 3 pods orchestrated by Kubernetes! The Service automatically balances traffic.

6. Rolling Update — Update Without Downtime

Kubernetes's major advantage is the rolling update: update the application without users noticing any interruption. Old instances are gradually replaced by new ones.

Traditionally, you stop all servers, deploy the new version, then restart. Users see service interruption (downtime). Kubernetes uses rolling updates: replace pods one by one, ensuring at least 3 pods remain active. The application never stops.

6.1 Build v2 of the Application

Terminal — Modify app.js for v2
# (Suppose we add a /metrics endpoint to app.js)
# Then rebuild

eval $(minikube docker-env)
docker build -t api-nodejs:v2 .

# Verify v2 exists
docker images | grep api-nodejs

6.2 Monitor Traffic During Rolling Update

Open a separate terminal and run a loop that simulates continuous traffic:

Terminal 1 — Simulate Traffic
URL=$(minikube service api-nodejs-service --url)

# Infinite loop: calls every 500ms
while true; do
  curl -s $URL | python3 -m json.tool | grep -E "version|hostname"
  sleep 0.5
done

# output:
# "version": "v1",
# "hostname": "api-nodejs-7d9f8b5c9-abc12",
# "version": "v1",
# "hostname": "api-nodejs-7d9f8b5c9-def34",

6.3 Trigger the Update

Terminal 2 — Update to v2
# Option 1: Direct update via kubectl set image
kubectl set image deployment/api-nodejs api-nodejs=api-nodejs:v2

# Option 2: Edit deployment.yaml (image: api-nodejs:v2) and reapply
kubectl apply -f deployment.yaml

# Monitor the rollout in real time
kubectl rollout status deployment/api-nodejs
# output:
# Waiting for deployment "api-nodejs" rollout to finish: 1 out of 3 new replicas updated...
# Waiting for deployment "api-nodejs" rollout to finish: 2 out of 3 new replicas updated...
# Waiting for deployment "api-nodejs" rollout to finish: 1 old replicas are pending termination...
# deployment "api-nodejs" successfully rolled out
The rolling update proceeds as follows (with maxSurge=1, maxUnavailable=0):
  1. Create 1 v2 pod (= 4 total pods: 3 v1 + 1 v2)
  2. The v2 pod passes readiness probes, becomes active
  3. Remove 1 v1 pod (= 3 pods: 2 v1 + 1 v2)
  4. Create 1 v2 pod (= 4 pods: 2 v1 + 2 v2)
  5. Remove 1 v1 pod (= 3 pods: 1 v1 + 2 v2)
  6. Create 1 v2 pod (= 4 pods: 1 v1 + 3 v2)
  7. Remove the last v1 pod (= 3 pods: 0 v1 + 3 v2)
At each step, at least 3 pods are ready, so the Service always has healthy endpoints to route traffic to.
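The step sequence above can be replayed with a small simulation. This is an illustrative sketch of the arithmetic only, not how Kubernetes is implemented:

```javascript
// Illustrative simulation of a rolling update with maxSurge=1, maxUnavailable=0.
// Each loop iteration creates one ready v2 pod, then removes one v1 pod.
function simulateRollingUpdate(replicas) {
  let oldPods = replicas; // ready v1 pods
  let newPods = 0;        // ready v2 pods
  const history = [];

  while (oldPods > 0) {
    newPods += 1; // surge: one extra v2 pod comes up and passes readiness
    history.push({ ready: oldPods + newPods });
    oldPods -= 1; // only then is one v1 pod removed
    history.push({ ready: oldPods + newPods });
  }
  return history;
}

const steps = simulateRollingUpdate(3);
const minReady = Math.min(...steps.map(s => s.ready));
console.log(minReady); // 3
```

The minimum ready count never dips below the desired 3 replicas, which is exactly the zero-downtime guarantee.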

6.4 Observe the Transition to v2

In Terminal 1 (traffic), you'll see the transition:

Terminal 1 Output During Rollout
# Before: only v1
# "version": "v1",
# "version": "v1",

# During: mix of v1 and v2
# "version": "v1",
# "version": "v2",
# "version": "v1",
# "version": "v2",

# After: only v2
# "version": "v2",
# "version": "v2",
Notice that no connection errors appear. Traffic flows seamlessly from one version to the other. This is the power of rolling updates.
Thanks to maxSurge: 1 and maxUnavailable: 0, Kubernetes guarantees we always have at least 3 ready pods. The Service sends traffic only to ready pods (readiness probe). No requests are ever lost.

7. Rollback — Undo a Bad Update

If v2 has a critical issue, Kubernetes allows instant rollback:

Terminal
# See the history of deployments
kubectl rollout history deployment/api-nodejs
# output:
# REVISION  CHANGE-CAUSE
# 1         <none>
# 2         <none>
# (CHANGE-CAUSE stays <none> unless you set the kubernetes.io/change-cause annotation on the Deployment)

# Go back to the previous revision (v1)
kubectl rollout undo deployment/api-nodejs

# Or go back to a specific revision
kubectl rollout undo deployment/api-nodejs --to-revision=1

# Check that the rollback is in progress
kubectl rollout status deployment/api-nodejs
Kubernetes keeps a revision history for each Deployment. An undo is itself a rolling update, this time toward the previous ReplicaSet: v1 pods come back and v2 pods drain. It is as fast and seamless as a normal update.

8. Manual Scaling — Adjust Replica Count

Terminal
# Scale manually to 5 replicas (increases from 3 to 5)
kubectl scale deployment api-nodejs --replicas=5

# Verify the 2 new pods are starting
kubectl get pods
# output: 5 pods listed (3 old + 2 new)

# Reduce to 2 replicas
kubectl scale deployment api-nodejs --replicas=2

# Verify 3 pods are stopped
kubectl get pods
# output: 2 pods listed
kubectl scale dynamically changes the number of replicas. Kubernetes immediately adds or removes pods to reach the desired count. This is useful to quickly react to increasing load.

9. HPA — Horizontal Pod Autoscaling

📖 Term: HPA (Horizontal Pod Autoscaler)

The HPA automatically increases or decreases the number of replicas based on observed metrics (CPU usage, memory, or custom metrics).

Example: you define "maintain average CPU around 70%". If utilization climbs above the target, the HPA adds pods; if it falls well below, the HPA removes some.

Use case: During a traffic spike (Black Friday), the HPA automatically adds pods. When traffic drops, it removes them. Zero manual intervention.

Terminal
# Note: the HPA needs metrics. In Minikube, enable the metrics-server addon first:
# minikube addons enable metrics-server

# Create an HPA: auto-scale between 2 and 10 pods based on CPU
kubectl autoscale deployment api-nodejs \
  --min=2 \
  --max=10 \
  --cpu-percent=70

# Check the HPA
kubectl get hpa
# output:
# NAME          REFERENCE                    TARGETS       MINPODS MAXPODS REPLICAS AGE
# api-nodejs    Deployment/api-nodejs        15%/70%       2       10      3        1m

# TARGETS shows current usage (15%) vs target (70%)

# Delete the HPA
kubectl delete hpa api-nodejs
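kubectl autoscale is imperative. The same policy can be written declaratively, in line with the rest of the tutorial's YAML manifests; a sketch using the stable autoscaling/v2 API:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-nodejs
spec:
  # Which workload to scale
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-nodejs
  minReplicas: 2
  maxReplicas: 10
  # Keep average CPU utilization around 70% of the pods' CPU requests
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Apply it with kubectl apply -f hpa.yaml (the filename is arbitrary).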
In production, traffic is never constant. During peaks (midnight in Asia), you need 100 pods. During troughs (3am), 10 are enough. Instead of paying for 100 pods 24/7, the HPA adds and removes them on the fly, reducing infrastructure costs while guaranteeing performance.
The HPA works as a control loop:
  1. Every 15 seconds (by default), read CPU metrics for all pods from the metrics API
  2. Compute the average utilization across pods
  3. Compute desired replicas = ceil(current replicas × average utilization / target), clamped between min (2) and max (10)
  4. Apply a small tolerance (about 10% by default) around the target so the replica count doesn't flap up and down
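The scaling decision reduces to one formula: desired = ceil(current × utilization / target), clamped to the min/max bounds. A quick sketch (simplified; desiredReplicas is an illustrative helper, not a kubectl feature, and the real controller adds tolerance and stabilization windows):

```javascript
// Simplified HPA desired-replica formula
function desiredReplicas(current, currentUtilization, target, min, max) {
  const desired = Math.ceil(current * (currentUtilization / target));
  return Math.min(max, Math.max(min, desired));
}

// 3 pods averaging 90% CPU against a 70% target: scale up
console.log(desiredReplicas(3, 90, 70, 2, 10)); // 4

// 3 pods averaging 20% CPU: scale down toward the minimum
console.log(desiredReplicas(3, 20, 70, 2, 10)); // 2
```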

10. Essential kubectl Commands

Here's a summary of the most useful commands for managing Kubernetes daily:

kubectl Cheat Sheet
# ─ Inspection ─

# List all pods in the cluster
kubectl get pods

# List all deployments
kubectl get deployments

# List all services
kubectl get services

# List all resources (pods, deployments, services, etc.)
kubectl get all

# Get detailed info about a specific pod
kubectl describe pod <pod-name>

# ─ Logs and Debugging ─

# See logs from a pod (last 50 lines)
kubectl logs <pod-name> --tail=50

# Follow logs in real time (tail -f)
kubectl logs <pod-name> -f

# Open an interactive shell in a pod
kubectl exec -it <pod-name> -- sh

# Execute a command in a pod
kubectl exec <pod-name> -- ps aux

# ─ Updates and Rollouts ─

# Apply/update the YAML manifests
kubectl apply -f deployment.yaml

# Update the image of a deployment
kubectl set image deployment/api-nodejs api-nodejs=api-nodejs:v3

# See the status of the rollout
kubectl rollout status deployment/api-nodejs

# See the history of deployments
kubectl rollout history deployment/api-nodejs

# Undo an update (go back to previous version)
kubectl rollout undo deployment/api-nodejs

# ─ Scaling ─

# Scale to a given number of replicas
kubectl scale deployment api-nodejs --replicas=5

# Create an autoscaler
kubectl autoscale deployment api-nodejs --min=2 --max=10 --cpu-percent=70

# ─ Deletion ─

# Delete a deployment (stops all associated pods)
kubectl delete deployment api-nodejs

# Delete a service
kubectl delete service api-nodejs-service

# Delete via YAML files
kubectl delete -f deployment.yaml -f service.yaml

# ─ Utilities ─

# Display cluster information
kubectl cluster-info

# See resource usage (CPU, memory) by pods
kubectl top pods

# See resource usage by nodes
kubectl top nodes

# Complete cleanup: stop and delete Minikube
minikube delete
These commands cover 80% of daily Kubernetes use cases. get to list, describe for details, logs for debugging, apply to deploy, delete to clean up.

Complete Flow Summary

  1. Create app.js with Express and /health and /ready endpoints
  2. Create Dockerfile to containerize the app
  3. Build the image in Minikube: eval $(minikube docker-env) && docker build -t api-nodejs:v1 .
  4. Create deployment.yaml describing the desired state (3 replicas, probes, resource limits)
  5. Create service.yaml exposing the app via NodePort
  6. Deploy: kubectl apply -f deployment.yaml -f service.yaml
  7. Test: curl $(minikube service api-nodejs-service --url)
  8. Update: kubectl set image deployment/api-nodejs api-nodejs=api-nodejs:v2
  9. Monitor: kubectl rollout status deployment/api-nodejs
  10. Scale: kubectl scale deployment api-nodejs --replicas=5 or kubectl autoscale ...
Tip: Regularly check logs and pod descriptions to understand what Kubernetes is doing. kubectl logs and kubectl describe pod are your best friends for debugging.

Going Further