Docker allows you to package an application and all its dependencies into a portable container. In this tutorial, we'll create a Python API with FastAPI, wrap it in a Docker image, and run it in a few commands, with the same result guaranteed on every machine.
Definition: A containerisation platform that packages an application and all its dependencies (runtime, libraries, code) into a portable unit called a container.
Purpose: Ensure the application runs identically wherever it executes (local machine, server, cloud).
Why here: Docker eliminates the "it works on my machine" problem by completely isolating the runtime environment.
We'll start with a simple API with two routes. Create the project folder:
mkdir my-api && cd my-api
Create the main.py file:
# Imports: load FastAPI, HTTP error handling and Pydantic models
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import List
import datetime

# Create the FastAPI instance with metadata for the Swagger documentation
app = FastAPI(
    title="My API",
    description="Example API containerised with Docker",
    version="1.0.0"
)

# Define the Item data model with Pydantic validation
class Item(BaseModel):
    name: str
    price: float
    in_stock: bool = True

# Simulate an in-memory database for this tutorial
items_db: List[Item] = [
    Item(name="Laptop", price=999.99),
    Item(name="Mouse", price=29.99),
]

# Root GET route: respond with a message and a timestamp
@app.get("/")
def root():
    return {
        "message": "API operational",
        "timestamp": datetime.datetime.now().isoformat()
    }

# GET /items route: return the complete list of items
@app.get("/items", response_model=List[Item])
def get_items():
    """Return all available items."""
    return items_db

# GET /items/{item_id} route: retrieve a specific item by ID
@app.get("/items/{item_id}", response_model=Item)
def get_item(item_id: int):
    if item_id < 0 or item_id >= len(items_db):
        raise HTTPException(status_code=404, detail="Item not found")
    return items_db[item_id]

# POST /items route: add a new item to the database
@app.post("/items", response_model=Item, status_code=201)
def create_item(item: Item):
    items_db.append(item)
    return item

# GET /health route: verify that the API is operational
@app.get("/health")
def health_check():
    return {"status": "healthy"}
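Before containerising anything, it helps to see what Pydantic's validation buys us. A minimal standalone sketch, assuming pydantic v2 is installed (the Item model is reproduced here so it runs on its own):

```python
from pydantic import BaseModel, ValidationError

class Item(BaseModel):
    name: str
    price: float
    in_stock: bool = True

# Valid input: the default value for in_stock is applied
item = Item(name="Mouse", price=29.99)
print(item.in_stock)  # True

# Invalid input: a non-numeric price is rejected with a structured error,
# which FastAPI turns into a 422 response automatically
try:
    Item(name="Broken", price="not-a-number")
except ValidationError as exc:
    print(type(exc).__name__)  # ValidationError
```

This is the same mechanism that validates the POST /items body: FastAPI parses the JSON payload into an Item and returns the validation errors to the client if parsing fails.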
Create the requirements.txt file:
# Modern web framework for building REST APIs
fastapi==0.111.0
# ASGI server for running FastAPI (asynchronous, performant)
uvicorn[standard]==0.29.0
# Data validation and JSON serialisation
pydantic==2.7.0
# Create an isolated virtual environment for this project
python -m venv venv
# Activate the environment (Windows: venv\Scripts\activate)
source venv/bin/activate
# Install dependencies listed in requirements.txt
pip install -r requirements.txt
# Start the API with auto-reload on file changes
uvicorn main:app --reload --port 8000
The Dockerfile is the recipe that describes how to build your application image.
Definition: A text file containing a sequence of instructions to assemble a Docker image. Each instruction creates a new layer in the image.
Purpose: Automate image creation in a reproducible and documented way.
Why here: Instead of manually creating and configuring containers, the Dockerfile allows anyone to generate the identical image by simply running "docker build".
# ── Official Python base image ──
# python:3.11-slim: lightweight variant (~150MB vs ~1GB for the full image)
FROM python:3.11-slim

# ── Environment variables for Python ──
# PYTHONDONTWRITEBYTECODE=1: don't generate unnecessary .pyc files
# PYTHONUNBUFFERED=1: send logs directly to stdout (real time)
ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1

# ── Set the working directory in the container ──
WORKDIR /app

# ── Copy dependencies FIRST ──
# Docker uses its cache: if requirements.txt doesn't change, this layer is reused
# Changes to source code won't force a dependency reinstall
COPY requirements.txt .
# --no-cache-dir: don't store the pip cache (saves space)
RUN pip install --no-cache-dir --upgrade pip && \
    pip install --no-cache-dir -r requirements.txt

# ── Copy the application source code ──
COPY . .

# ── Port exposed by the application (documentation only) ──
# Note: EXPOSE doesn't publish the port; you must use -p with docker run
EXPOSE 8000

# ── Startup command ──
# 0.0.0.0 makes the API reachable from outside the container
# With uvicorn's default host (127.0.0.1), it would only be reachable from inside
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
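The 0.0.0.0 vs 127.0.0.1 distinction is not Docker-specific: it is plain socket binding. A stdlib sketch (ports are chosen by the OS here, just for illustration):

```python
import socket

# Loopback only: reachable solely from the same network namespace,
# i.e. only from inside the container when running under Docker
lo = socket.socket()
lo.bind(("127.0.0.1", 0))

# All interfaces: reachable from outside the namespace,
# which is what docker run -p relies on to forward traffic
any_if = socket.socket()
any_if.bind(("0.0.0.0", 0))

print(lo.getsockname()[0], any_if.getsockname()[0])  # 127.0.0.1 0.0.0.0

lo.close()
any_if.close()
```

A server bound to 127.0.0.1 inside a container accepts connections only from processes in that same container, so `-p 8000:8000` would forward traffic to a port nobody answers on.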
Like .gitignore, the .dockerignore file prevents copying unnecessary files into the image:
Definition: A file listing patterns of files/folders to exclude from the build context sent to Docker, and therefore from COPY instructions in the Dockerfile.
Purpose: Reduce image size by avoiding non-production files.
Why here: Your local virtual environment (venv/), compiled Python files (__pycache__/), and Git history (.git/) should never enter the Docker image.
# Temporary folders and local environments
venv/
__pycache__/
*.pyc
*.pyo
# Secrets and local configuration (never include in image)
.env
.env.local
# Version control and documentation
.git
.gitignore
*.md
# Cache and test files
.pytest_cache/
.coverage
Definition: An immutable set of stacked layers containing a minimal OS, dependencies and code. It's a blueprint used to create containers.
Purpose: Have a portable, versioned package that can run on any machine with Docker.
Why here: An image is to a container what a class is to an instance. Build the image once, then create as many containers as needed.
Definition: A filesystem changeset produced by a Dockerfile instruction (FROM, COPY, RUN, etc.). Layers are stacked to form the final image.
Purpose: Allow Docker to use cache: if a layer hasn't changed, it can be reused.
Why here: Understanding layers is key to optimising builds. Instruction order in the Dockerfile determines cache efficiency.
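To make the cache effect concrete, compare two orderings (an illustrative sketch: only the second reuses the dependency layer when source code changes):

```dockerfile
# Inefficient: any source change invalidates the COPY layer,
# so pip install re-runs on every build
COPY . .
RUN pip install --no-cache-dir -r requirements.txt

# Efficient: the dependency layer is rebuilt only when
# requirements.txt itself changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
```

This is exactly why the Dockerfile above copies requirements.txt before the rest of the code.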
# Build the Docker image
# -t my-api:v1: give the image a name and tag
# . : use the Dockerfile from the current directory
docker build -t my-api:v1 .
# Verify the image is created and see its size
docker images | grep my-api
my-api       v1       abc123def456   2 minutes ago   180MB
Definition: A running instance of a Docker image. It's an isolated process with its own filesystem, network and environment variables.
Purpose: Completely isolate the application: a container only sees its own files, ports and variables, without interfering with the host or other containers.
Why here: Unlike a heavy VM, a Docker container is lightweight (starts in seconds) and runs directly on the host kernel.
# Run a container based on the my-api:v1 image
# -d: detached (background mode, frees the terminal)
# -p 8000:8000: publish port 8000 from container to 8000 on host
# (without this, the API would only be accessible from inside the container)
# --name api-container: give the container a name (more readable than random ID)
docker run -d -p 8000:8000 --name api-container my-api:v1
# List running containers
docker ps
# Display container logs in real time (-f = follow)
docker logs -f api-container
# Test 1: root route
curl http://localhost:8000/
# Response: {"message":"API operational","timestamp":"2025-01-15T10:30:00"}
# Test 2: get all items (GET /items)
curl http://localhost:8000/items
# Response: [{"name":"Laptop","price":999.99,"in_stock":true},...]
# Test 3: create a new item (POST /items)
curl -X POST http://localhost:8000/items \
-H "Content-Type: application/json" \
-d '{"name":"Keyboard","price":79.99,"in_stock":true}'
# Response: {"name":"Keyboard","price":79.99,"in_stock":true}
# Test 4: health check to verify the API responds
curl http://localhost:8000/health
# Response: {"status":"healthy"}
In production, never hard-code secrets. Use environment variables:
# Import os to read environment variables
import os
from fastapi import FastAPI

# Read environment variables with defaults
# In production, these values are overridden by the deployment
APP_ENV = os.getenv("APP_ENV", "development")
SECRET_KEY = os.getenv("SECRET_KEY", "change-me-in-production")
DATABASE_URL = os.getenv("DATABASE_URL", "sqlite:///./test.db")

app = FastAPI()

# Endpoint that exposes the current environment (useful for debugging)
@app.get("/")
def root():
    return {"env": APP_ENV, "message": "API operational"}
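The defaults above apply only when a variable is absent from the environment; a quick stdlib check of that behaviour:

```python
import os

# No APP_ENV set: the fallback is returned
os.environ.pop("APP_ENV", None)
print(os.getenv("APP_ENV", "development"))  # development

# APP_ENV set (as docker run -e would do): the real value wins
os.environ["APP_ENV"] = "production"
print(os.getenv("APP_ENV", "development"))  # production
```

Note that variables are read once at import time in the snippet above, so changing the environment after startup won't affect APP_ENV until the process restarts.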
# Method 1: Pass variables directly with -e
docker run -d -p 8000:8000 \
-e APP_ENV=production \
-e SECRET_KEY=my-super-secret \
-e DATABASE_URL=postgresql://user:pass@db:5432/mydb \
--name api-prod \
my-api:v1
# Method 2: Load all variables from a .env file
# (more practical in production, less verbosity)
docker run -d -p 8000:8000 \
--env-file .env \
--name api-prod \
my-api:v1
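The .env file read by --env-file is a plain KEY=value list, one entry per line (illustrative values; never commit this file):

```
# .env — one KEY=value per line, no quotes or export needed
APP_ENV=production
SECRET_KEY=my-super-secret
DATABASE_URL=postgresql://user:pass@db:5432/mydb
```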
In practice, your API needs a database. Docker Compose orchestrates multiple containers together:
Definition: A tool for defining and running multiple Docker containers in a single command via a YAML file (docker-compose.yml).
Purpose: Manage multi-container architectures: API + database + cache + queue, all locally before deploying.
Why here: A standalone API isn't useful. Docker Compose tests integration between services before production.
Definition: A centralised server storing public or private Docker images. Docker Hub is the default public registry.
Purpose: Share and download pre-built images (postgres, redis, nginx, etc.) instead of creating from scratch.
Why here: We use postgres:16-alpine and adminer from Docker Hub rather than creating them ourselves, saving time.
# Docker Compose syntax version (optional with Compose v2)
version: '3.9'

services:
  # ── FastAPI service ──
  api:
    # build: . builds the image from the Dockerfile
    build: .
    ports:
      # Map port 8000 on the host to port 8000 in the container
      - "8000:8000"
    environment:
      # Environment variables for this container
      - APP_ENV=development
      # @db: the db container is reachable at this hostname (Docker internal network)
      - DATABASE_URL=postgresql://user:password@db:5432/mydb
    depends_on:
      # Wait for the db service to be "healthy" before starting the API
      db:
        condition: service_healthy
    volumes:
      # Mount the local directory at /app in the container
      # Enables hot reload: code changes trigger an automatic restart
      - .:/app
    # Command override: run uvicorn with --reload (watch mode)
    command: uvicorn main:app --host 0.0.0.0 --port 8000 --reload

  # ── PostgreSQL service ──
  db:
    # PostgreSQL 16 image on an Alpine base (very lightweight)
    image: postgres:16-alpine
    environment:
      # Database credentials
      POSTGRES_USER: user
      POSTGRES_PASSWORD: password
      POSTGRES_DB: mydb
    volumes:
      # Mount a volume to persist data between restarts
      - postgres_data:/var/lib/postgresql/data
    # Health check: Docker Compose tests whether PostgreSQL is ready
    healthcheck:
      # Command used to check service health
      test: ["CMD-SHELL", "pg_isready -U user"]
      # Run the check every 5 seconds
      interval: 5s
      # Fail the check if it takes longer than 5 seconds
      timeout: 5s
      # Attempts before marking the service unhealthy
      retries: 5

  # ── Adminer service: web interface for PostgreSQL ──
  adminer:
    # Official image for managing databases graphically
    image: adminer
    ports:
      - "8080:8080"
    # Start after PostgreSQL
    depends_on:
      - db

# ── Named volumes ──
# postgres_data: a Docker volume that persists PostgreSQL data
# It survives even if the container is deleted
volumes:
  postgres_data:
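Note that the DATABASE_URL uses the service name db as its hostname: Compose's internal DNS resolves it to the database container. A stdlib sketch of what that URL encodes (same illustrative credentials as in the compose file):

```python
from urllib.parse import urlparse

url = "postgresql://user:password@db:5432/mydb"
parts = urlparse(url)

# The hostname is the Compose service name, not an IP address
print(parts.hostname)            # db
print(parts.port)                # 5432
print(parts.path.lstrip("/"))    # mydb
print(parts.username)            # user
```

Any database driver you hand this URL to performs the same decomposition; inside the Compose network, "db" resolves, but from your host machine you would need localhost plus a published port instead.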
# Start all services defined in docker-compose.yml
# -d: detached mode (background)
docker compose up -d
# View logs from all services in real time
# -f: follow (keep watching for new logs)
docker compose logs -f
# Stop and remove all containers (volume data persists)
docker compose down
# Stop AND remove volumes (completely reset the DB)
# ⚠️ Use with caution: deletes data!
docker compose down -v
# โโ Manage containers โโ
# List running containers
docker ps
# List all containers (active and stopped)
docker ps -a
# Gracefully stop a container (SIGTERM)
docker stop my-container
# Remove a stopped container
docker rm my-container
# Force immediate container removal (even if running)
docker rm -f my-container
# โโ Enter a container (debug interactively) โโ
# Open a Bash shell inside the container
docker exec -it my-container bash
# Run a command without interaction
docker exec my-container ls /app
# โโ Manage images โโ
# List all downloaded/built images
docker images
# Remove a specific image
docker rmi my-api:v1
# Remove images not used by any container
docker image prune
# โโ Complete cleanup โโ
# Remove unused containers, images, networks and caches
# ⚠️ Drastic: use with caution!
docker system prune -a
my-api/
├── main.py              # FastAPI application code
├── requirements.txt     # Python dependencies
├── Dockerfile           # Docker image recipe
├── docker-compose.yml   # Multi-service orchestration
├── .dockerignore        # Files excluded from the image
└── .env                 # Environment variables (don't commit!)
Never commit the .env file containing secrets to Git. Add it to your .gitignore.
You now know how to build a FastAPI application, package it with a Dockerfile, run and debug containers, configure them with environment variables, and orchestrate multiple services with Docker Compose.
The next logical step is to optimise the image for production with multi-stage builds, or deploy to Cloud Run in minutes.