๐Ÿณ Docker ยท Beginner

Containerise a Python API with Docker

โฑ 30 minutes ๐Ÿ“ฆ Docker 24+ ๐Ÿ Python 3.11 โšก FastAPI

Docker lets you package an application and all its dependencies into a portable container. In this tutorial, we'll create a Python API with FastAPI, wrap it in a Docker image, and run it in a few commands, with the same result guaranteed on every machine.

Prerequisites

📖 Term: Docker

Definition: A containerisation platform that packages an application and all its dependencies (runtime, libraries, code) into a portable unit called a container.

Purpose: Ensure the application runs identically wherever it executes (local machine, server, cloud).

Why here: Docker eliminates the "it works on my machine" problem by completely isolating the runtime environment.

1. Create the FastAPI application

We'll start with a simple API with two routes. Create the project folder:

Terminal
mkdir my-api && cd my-api
This command creates a project directory named "my-api" and enters it. All Docker files will be placed here.

Create the main.py file:

main.py
# Imports: load FastAPI, HTTP error handling and Pydantic models
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import List
import datetime

# Create the FastAPI instance with metadata for Swagger documentation
app = FastAPI(
    title="My API",
    description="Example API containerised with Docker",
    version="1.0.0"
)

# Define the Item data model with Pydantic validation
class Item(BaseModel):
    name: str
    price: float
    in_stock: bool = True

# Simulate an in-memory database for this tutorial
items_db: List[Item] = [
    Item(name="Laptop", price=999.99),
    Item(name="Mouse", price=29.99),
]

# Root GET route: respond with a message and timestamp
@app.get("/")
def root():
    return {
        "message": "API operational 🚀",
        "timestamp": datetime.datetime.now().isoformat()
    }

# GET /items route: return the complete list of items
@app.get("/items", response_model=List[Item])
def get_items():
    """Returns all available items."""
    return items_db

# GET /items/{item_id} route: retrieve a specific item by ID
@app.get("/items/{item_id}", response_model=Item)
def get_item(item_id: int):
    if item_id < 0 or item_id >= len(items_db):
        raise HTTPException(status_code=404, detail="Item not found")
    return items_db[item_id]

# POST /items route: add a new item to the database
@app.post("/items", response_model=Item, status_code=201)
def create_item(item: Item):
    items_db.append(item)
    return item

# GET /health route: verify that the API is operational
@app.get("/health")
def health_check():
    return {"status": "healthy"}
This file defines a complete REST API with FastAPI: each route (@app.get/@app.post) responds to an HTTP request. FastAPI automatically generates Swagger documentation from these definitions.

Create the requirements.txt file:

requirements.txt
# Modern web framework for building REST APIs
fastapi==0.111.0
# ASGI server for running FastAPI (asynchronous, performant)
uvicorn[standard]==0.29.0
# Data validation and JSON serialisation
pydantic==2.7.0
This file lists all Python dependencies with pinned versions. Pinning versions ensures everyone (you, colleagues, Docker) installs exactly the same versions, eliminating surprises in production.
Always pin versions in requirements.txt to guarantee Docker image reproducibility.
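To see why pinning matters, here is a small sketch (not one of the tutorial's files) that flags any requirement line not pinned with `==`; the regex is a simplification of the real requirements grammar:

```python
import re

def unpinned_requirements(text: str) -> list[str]:
    """Return requirement lines that are not pinned with '=='.

    Simplified check: a pinned line looks like "name==1.2.3",
    optionally with extras such as "uvicorn[standard]==0.29.0".
    """
    unpinned = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        if not re.match(r"^[A-Za-z0-9_.\-]+(\[[^\]]+\])?==", line):
            unpinned.append(line)
    return unpinned

requirements = """\
# Modern web framework
fastapi==0.111.0
uvicorn[standard]==0.29.0
pydantic>=2.7.0
"""
print(unpinned_requirements(requirements))  # ['pydantic>=2.7.0']
```

Running a check like this in CI is one way to catch an accidental `>=` before it produces a non-reproducible image.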

2. Test the API locally (without Docker)

Terminal
# Create an isolated virtual environment for this project
python -m venv venv
# Activate the environment (Windows: venv\Scripts\activate)
source venv/bin/activate

# Install dependencies listed in requirements.txt
pip install -r requirements.txt

# Start the API with auto-reload on file changes
uvicorn main:app --reload --port 8000
These commands create an isolated environment, install dependencies, and start the Uvicorn server exposing the API on port 8000. The --reload flag restarts the server when you save a file, useful for development.
Open http://localhost:8000/docs to see the interactive Swagger documentation of your API.

3. Create the Dockerfile

The Dockerfile is the recipe that describes how to build your application image.

📖 Term: Dockerfile

Definition: A text file containing a sequence of instructions to assemble a Docker image. Each instruction creates a new layer in the image.

Purpose: Automate image creation in a reproducible and documented way.

Why here: Instead of manually creating and configuring containers, the Dockerfile allows anyone to generate the identical image by simply running "docker build".

Dockerfile
# ── Official base Python image ──
# python:3.11-slim: lightweight variant (~150MB vs ~1GB for the full image)
FROM python:3.11-slim

# ── Environment variables for Python ──
# PYTHONDONTWRITEBYTECODE=1: don't generate unnecessary .pyc files
# PYTHONUNBUFFERED=1: send logs directly to stdout (real-time)
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

# ── Set the working directory in the container ──
WORKDIR /app

# ── Copy dependencies FIRST ──
# Docker uses its cache: if requirements.txt doesn't change, this layer is reused
# Changes to source code won't force a dependency reinstall
COPY requirements.txt .
# --no-cache-dir: don't store pip's cache (saves space)
RUN pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt

# ── Copy the application source code ──
COPY . .

# ── Port exposed by the application (documentation only) ──
# Note: EXPOSE doesn't publish the port; you must still use -p with docker run
EXPOSE 8000

# ── Startup command ──
# 0.0.0.0 makes the API reachable from outside the container;
# binding to 127.0.0.1 would make it reachable only from inside
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
This Dockerfile creates an image in successive layers: base image โ†’ environment variables โ†’ dependencies โ†’ code. Each layer can be reused by Docker if unchanged, speeding up future builds.
We copy requirements.txt before the source code because dependencies change rarely. If we copied code first, every code change would invalidate the cache and force reinstalling all dependencies, slowing builds.
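The cache behaviour described above can be illustrated with a toy model (purely illustrative, not how Docker is actually implemented): each layer's cache key chains the previous key with the current instruction, so changing an early input invalidates every later layer while leaving earlier ones reusable.

```python
import hashlib

def layer_keys(instructions: list[str]) -> list[str]:
    """Toy model of Docker's build cache: each layer's key chains
    the previous key with the current instruction, so one changed
    instruction invalidates all layers after it."""
    keys, prev = [], ""
    for instr in instructions:
        prev = hashlib.sha256((prev + instr).encode()).hexdigest()[:12]
        keys.append(prev)
    return keys

v1 = layer_keys(["FROM python:3.11-slim",
                 "COPY requirements.txt .",
                 "RUN pip install -r requirements.txt",
                 "COPY . ."])
# Simulate a source-code change: only the final COPY's input differs
v2 = layer_keys(["FROM python:3.11-slim",
                 "COPY requirements.txt .",
                 "RUN pip install -r requirements.txt",
                 "COPY . .  # code changed"])
print(v1[:3] == v2[:3])  # True: the first three layers are reused
print(v1[3] == v2[3])    # False: only the final layer is rebuilt
```

This is exactly why the expensive `pip install` layer sits before `COPY . .` in the Dockerfile above.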

Add a .dockerignore

Like .gitignore, the .dockerignore file prevents copying unnecessary files into the image:

📖 Term: .dockerignore

Definition: A file listing patterns of files/folders to exclude when copying into the container (COPY instruction in the Dockerfile).

Purpose: Reduce image size by avoiding non-production files.

Why here: Your local virtual environment (venv/), compiled Python files (__pycache__/), and Git history (.git/) should never enter the Docker image.

.dockerignore
# Temporary folders and local environments
venv/
__pycache__/
*.pyc
*.pyo

# Secrets and local configuration (never include in image)
.env
.env.local

# Version control and documentation
.git
.gitignore
*.md

# Cache and test files
.pytest_cache/
.coverage
Without .dockerignore, "docker build" would copy your local venv/ folder (hundreds of MB) and Python cache, bloating the image. With .dockerignore, only code and requirements.txt are copied, then Docker reinstalls them in the container (with correct versions and dependencies).
Excluding .env is critical for security: secrets should never be hard-coded in Docker images, as layers are inspectable. Secrets must be passed at runtime via environment variables.
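The exclusion logic can be sketched in a few lines of Python using `fnmatch`. This is a rough approximation: Docker's real matcher (Go's `filepath.Match` plus `**` and `!` exception rules) is more elaborate, but the idea is the same — any matching pattern keeps the path out of the build context.

```python
import fnmatch

def is_ignored(path: str, patterns: list[str]) -> bool:
    """Rough sketch of .dockerignore matching using fnmatch.
    Real Docker matching has extra rules ('**', '!' exceptions),
    but the principle is identical: a matching pattern excludes
    the path from the build context."""
    for pattern in patterns:
        # Directory patterns like "venv/" exclude everything under them
        if pattern.endswith("/") and path.startswith(pattern):
            return True
        if fnmatch.fnmatch(path, pattern):
            return True
    return False

patterns = ["venv/", "__pycache__/", "*.pyc", ".env", "*.md"]
for path in ["main.py", "venv/bin/python", "app.pyc", "README.md"]:
    print(path, is_ignored(path, patterns))
```

With the patterns above, `main.py` is kept while `venv/bin/python`, `app.pyc` and `README.md` are excluded.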

4. Build the Docker image

📖 Term: Docker Image

Definition: An immutable stack of layers containing a minimal OS, dependencies and code. It's a blueprint used to create containers.

Purpose: Have a portable, versioned package that can run on any machine with Docker.

Why here: An image is to a container what a class is to an instance. Build the image once, then create as many containers as needed.

📖 Term: Layer

Definition: A filesystem snapshot created by a Docker instruction (FROM, COPY, RUN, etc.). Layers are stacked to form the final image.

Purpose: Allow Docker to use cache: if a layer hasn't changed, it can be reused.

Why here: Understanding layers is key to optimising builds. Instruction order in the Dockerfile determines cache efficiency.

Terminal
# Build the Docker image
# -t my-api:v1: give the image a name and tag
# . : use the Dockerfile from the current directory
docker build -t my-api:v1 .

# Verify the image is created and see its size
docker images | grep my-api
The "docker build" command executes Dockerfile instructions line by line, creating a layer for each instruction. At the end, all layers are merged into a single image you can see with "docker images".
Expected output: my-api v1 abc123def456 2 minutes ago 180MB

5. Run the container

📖 Term: Container

Definition: A running instance of a Docker image. It's an isolated process with its own filesystem, network and environment variables.

Purpose: Completely isolate the application: a container only sees its own files, ports and variables, without interfering with the host or other containers.

Why here: Unlike a heavy VM, a Docker container is lightweight (starts in seconds) and runs directly on the host kernel.

Terminal
# Run a container based on the my-api:v1 image
# -d: detached (background mode, frees the terminal)
# -p 8000:8000: publish port 8000 from container to 8000 on host
#        (without this, the API would only be accessible from inside the container)
# --name api-container: give the container a name (more readable than random ID)
docker run -d -p 8000:8000 --name api-container my-api:v1

# List running containers
docker ps

# Display container logs in real time (-f = follow)
docker logs -f api-container
These commands create and start a container from the image. The -p flag establishes port mapping: requests to http://localhost:8000 are forwarded to port 8000 inside the container. The -d flag lets the container run in the background without blocking your terminal.
Your API is accessible at http://localhost:8000, exactly as it was locally, but inside an isolated container.

6. Test the containerised API

Terminal (curl)
# Test 1: root route
curl http://localhost:8000/
# Response: {"message":"API operational 🚀","timestamp":"2025-01-15T10:30:00"}

# Test 2: get all items (GET /items)
curl http://localhost:8000/items
# Response: [{"name":"Laptop","price":999.99,"in_stock":true},...]

# Test 3: create a new item (POST /items)
curl -X POST http://localhost:8000/items \
  -H "Content-Type: application/json" \
  -d '{"name":"Keyboard","price":79.99,"in_stock":true}'
# Response: {"name":"Keyboard","price":79.99,"in_stock":true}

# Test 4: health check to verify the API responds
curl http://localhost:8000/health
# Response: {"status":"healthy"}
These curl requests test each API endpoint. The API responds identically locally and in the container, proving containerisation didn't break anything. The /health route is especially useful for orchestrators (Kubernetes, Docker Compose) to verify API readiness.
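In scripts or CI, the /health route is typically polled until the container is ready. Below is a minimal sketch: the `fetch` callable is injected so the retry logic can be exercised without a live server (in practice you'd pass something built on `urllib.request.urlopen`); the stub here mimics a container that takes a moment to boot.

```python
import time
from typing import Callable

def wait_for_healthy(fetch: Callable[[], str],
                     attempts: int = 10,
                     delay: float = 0.5) -> bool:
    """Poll a health endpoint until it reports 'healthy'.

    `fetch` is any zero-argument callable returning the response
    body; it may raise OSError while the API is still starting.
    """
    for _ in range(attempts):
        try:
            if "healthy" in fetch():
                return True
        except OSError:
            pass  # container not accepting connections yet
        time.sleep(delay)
    return False

# Stub that fails twice before succeeding, like a booting container:
responses = iter([OSError(), OSError(), '{"status": "healthy"}'])
def stub_fetch():
    item = next(responses)
    if isinstance(item, Exception):
        raise item
    return item

print(wait_for_healthy(stub_fetch, attempts=5, delay=0))  # True
```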

7. Pass environment variables

In production, never hard-code secrets. Use environment variables:

main.py (with config)
# Import os to read environment variables
import os
from fastapi import FastAPI

# Read environment variables with defaults
# In production, these values will be overridden by deployment
APP_ENV = os.getenv("APP_ENV", "development")
SECRET_KEY = os.getenv("SECRET_KEY", "change-me-in-production")
DATABASE_URL = os.getenv("DATABASE_URL", "sqlite:///./test.db")

app = FastAPI()

# Endpoint that exposes the current environment (useful for debugging)
@app.get("/")
def root():
    return {"env": APP_ENV, "message": "API operational"}
Instead of hard-coding secrets in code (dangerous and inflexible), use os.getenv() to read from environment variables. This allows each environment (dev, staging, production) to have its own configuration.
Why pass secrets at runtime? If you hard-coded the secret in code or the Dockerfile, it would remain visible in all image layers (inspectable), and your Git history would expose it publicly. Environment variables stay external to the image, changing based on where the container runs, separating config from deployment.
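One common refinement (a sketch, not part of the tutorial's main.py) is to fail fast when a production container still carries a development default, rather than discovering the insecure secret later:

```python
def load_config(env: dict[str, str]) -> dict[str, str]:
    """Read configuration from an environment mapping and refuse
    to start in production with the insecure default secret."""
    config = {
        "APP_ENV": env.get("APP_ENV", "development"),
        "SECRET_KEY": env.get("SECRET_KEY", "change-me-in-production"),
        "DATABASE_URL": env.get("DATABASE_URL", "sqlite:///./test.db"),
    }
    if (config["APP_ENV"] == "production"
            and config["SECRET_KEY"] == "change-me-in-production"):
        raise RuntimeError("SECRET_KEY must be set in production")
    return config

# In the application you'd call load_config(os.environ); taking a
# plain dict makes the check easy to exercise:
print(load_config({"APP_ENV": "development"})["SECRET_KEY"])
# change-me-in-production
```

A crash at startup is far easier to diagnose than an API silently running with a placeholder secret.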
Terminal (run with variables)
# Method 1: Pass variables directly with -e
docker run -d -p 8000:8000 \
  -e APP_ENV=production \
  -e SECRET_KEY=my-super-secret \
  -e DATABASE_URL=postgresql://user:pass@db:5432/mydb \
  --name api-prod \
  my-api:v1

# Method 2: Load all variables from a .env file
# (more practical in production, less verbosity)
docker run -d -p 8000:8000 \
  --env-file .env \
  --name api-prod \
  my-api:v1
Method 1 (-e) suits development, but Method 2 (--env-file) is better for production: a single file centralises all variables, and it never appears in command history. Warning: never commit .env files with secrets to Git!
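The file read by --env-file is just KEY=VALUE lines. As a sketch of the common case (Docker's real parser has a few extra rules, e.g. around quoting and variable pass-through):

```python
def parse_env_file(text: str) -> dict[str, str]:
    """Sketch of the KEY=VALUE format read by --env-file.
    Blank lines and '#' comments are skipped; only the first '='
    separates key from value, so values may contain '='."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

sample = """\
# Production settings, never committed to Git
APP_ENV=production
SECRET_KEY=my-super-secret
DATABASE_URL=postgresql://user:pass@db:5432/mydb
"""
print(parse_env_file(sample)["APP_ENV"])  # production
```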

8. Docker Compose: API + Database

In practice, your API needs a database. Docker Compose orchestrates multiple containers together:

📖 Term: Docker Compose

Definition: A tool for defining and running multiple Docker containers in a single command via a YAML file (docker-compose.yml).

Purpose: Manage multi-container architectures: API + database + cache + queue, all locally before deploying.

Why here: A standalone API isn't useful. Docker Compose tests integration between services before production.

📖 Term: Docker Registry

Definition: A centralised server storing public or private Docker images. Docker Hub is the official free registry.

Purpose: Share and download pre-built images (postgres, redis, nginx, etc.) instead of creating from scratch.

Why here: We use postgres:16-alpine and adminer from Docker Hub rather than creating them ourselves, saving time.

docker-compose.yml
# The top-level "version" key is obsolete with Compose v2 and can be omitted;
# it is kept here for compatibility with older tooling
version: '3.9'

services:
  # ── FastAPI service ──
  api:
    # build: . builds the image from the Dockerfile
    build: .
    ports:
      # Map port 8000 on host to port 8000 in container
      - "8000:8000"
    environment:
      # Environment variables for this container
      - APP_ENV=development
      # @db: the db container is accessible by this hostname (Docker internal network)
      - DATABASE_URL=postgresql://user:password@db:5432/mydb
    depends_on:
      # Wait for db service to be "healthy" before starting API
      db:
        condition: service_healthy
    volumes:
      # Mount the local /app directory in container
      # Enables hot reload: code changes = automatic restart
      - .:/app
    # Command override: run uvicorn with --reload (watch mode)
    command: uvicorn main:app --host 0.0.0.0 --port 8000 --reload

  # ── PostgreSQL service ──
  db:
    # Use PostgreSQL 16 image on Alpine base (ultra-lightweight)
    image: postgres:16-alpine
    environment:
      # Database credentials
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: mydb
    volumes:
      # Mount a volume to persist data between restarts
      - postgres_data:/var/lib/postgresql/data
    # Health check: Docker Compose tests if PostgreSQL is ready
    healthcheck:
      # Command to check service health
      test: ["CMD-SHELL", "pg_isready -U user"]
      # Test every 5 seconds
      interval: 5s
      # Command timeout: 5 seconds
      timeout: 5s
      # Retry count before marking service unhealthy
      retries: 5

  # ── Adminer service: web interface for PostgreSQL ──
  adminer:
    # Official image for managing databases graphically
    image: adminer
    ports:
      - "8080:8080"
    # Wait for PostgreSQL to be ready
    depends_on:
      - db

# ── Named volumes ──
# postgres_data: creates a Docker volume to persist PostgreSQL data
# Survives even if container is deleted
volumes:
  postgres_data:
This docker-compose.yml declares three services: api (the application), db (PostgreSQL), and adminer (admin interface). Docker Compose creates an internal network where services communicate by hostname (api talks to db via "db:5432"). The "condition: service_healthy" flag ensures the API doesn't try connecting to PostgreSQL until it's ready.
We use postgres:16-alpine (pre-existing image) instead of creating our own database. Similarly, adminer is a public image. Reusing standard images is faster and safer than writing custom Dockerfiles for each.
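A detail worth internalising: inside the Compose network, the host in the connection URL is the service name ("db"), not localhost. A hypothetical helper (not part of the tutorial files) showing how such a URL is assembled:

```python
def database_url(user: str, password: str, host: str,
                 port: int, name: str) -> str:
    """Assemble a PostgreSQL connection URL. Inside the Compose
    network, `host` is the service name ("db"), not localhost."""
    return f"postgresql://{user}:{password}@{host}:{port}/{name}"

# Matches the DATABASE_URL set for the api service above:
print(database_url("user", "password", "db", 5432, "mydb"))
# postgresql://user:password@db:5432/mydb
```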
Terminal
# Start all services defined in docker-compose.yml
# -d: detached mode (background)
docker compose up -d

# View logs from all services in real time
# -f: follow (keep watching for new logs)
docker compose logs -f

# Stop and remove all containers (volume data persists)
docker compose down

# Stop AND remove volumes (completely reset the DB)
# โš ๏ธ Use with caution: deletes data!
docker compose down -v
These commands manage the lifecycle of all services: up creates and starts containers, down stops and removes them. The -d flag lets Compose return control of the terminal without showing logs (ideal for production). The -v flag also removes volumes, useful for a complete reset in development.

9. Essential Docker commands

Terminal (cheat sheet)
# ── Manage containers ──
# List running containers
docker ps

# List all containers (active and stopped)
docker ps -a

# Gracefully stop a container (SIGTERM)
docker stop my-container

# Remove a stopped container
docker rm my-container

# Force immediate container removal (even if running)
docker rm -f my-container

# ── Enter a container (debug interactively) ──
# Open a Bash shell inside the container
docker exec -it my-container bash

# Run a command without interaction
docker exec my-container ls /app

# ── Manage images ──
# List all downloaded/built images
docker images

# Remove a specific image
docker rmi my-api:v1

# Remove images not used by any container
docker image prune

# ── Complete cleanup ──
# Remove unused containers, images, networks and caches
# โš ๏ธ Drastic: use with caution!
docker system prune -a
These commands inspect and manage containers and images. "docker ps" always shows current containers; "docker exec" lets you enter a running container to debug; "docker rmi" and "docker image prune" clean up unused images to save disk space.

Final project structure

Directory tree
my-api/
├── main.py              # FastAPI application code
├── requirements.txt     # Python dependencies
├── Dockerfile           # Docker image recipe
├── docker-compose.yml   # Multi-service orchestration
├── .dockerignore        # Files excluded from image
└── .env                 # Environment variables (don't commit!)
Never commit your .env file containing secrets to Git. Add it to your .gitignore.

Summary

You now know how to:

- build a FastAPI application with typed routes and automatic Swagger documentation
- write a Dockerfile whose layer order makes the most of the build cache
- keep local files and secrets out of the image with .dockerignore
- build an image and run it as a container with published ports
- pass configuration and secrets at runtime via environment variables
- orchestrate the API, PostgreSQL and Adminer together with Docker Compose

The next logical step is to optimise the image for production with multi-stage builds, or deploy to Cloud Run in minutes.