Docker for Developers Who Just Want to Ship

2026-03-05 · Nico Brandt

You’ve been avoiding Docker for months. Maybe years.

Every tutorial starts with container theory, dives into Linux namespaces, and somehow ends up at Kubernetes before you’ve run a single command. You don’t need to understand cgroups. You need your app to run the same way on your machine, your coworker’s machine, and the server. That’s it.

This tutorial covers exactly that — and nothing more. By the end, you’ll have a working Dockerfile, a Docker Compose setup for local development, and the mental model to actually use Docker without memorizing 200 commands.

What Docker Actually Does (30-Second Version)

Docker packages your application and everything it needs — runtime, libraries, system tools, config files — into a single portable unit called a container. That container runs identically everywhere.

Think of it like a shipping container. Before standardized containers, every dock had to handle cargo differently. Different shapes, different sizes, different loading equipment. Standardized containers made shipping universal. Docker does the same thing for software.

Your app needs Node 20, PostgreSQL 16, and Redis 7? Docker bundles all of that. Your coworker has Node 18 installed globally? Doesn’t matter. The container has its own Node 20. The server runs Ubuntu while you’re on macOS? Doesn’t matter. The container is the same everywhere.

That’s the entire value proposition. Not microservices. Not Kubernetes. Not cloud-native architecture. Just: it works the same everywhere.

Install Docker and Verify It Works

Install Docker Desktop from docker.com. It runs on macOS, Windows, and Linux (on Linux you can also install Docker Engine directly from your distribution’s packages). The install is straightforward — just follow the prompts.

Verify it’s working:

docker --version
# Docker version 27.x.x

docker run hello-world
# Should print "Hello from Docker!" and some explanation

If hello-world runs successfully, you’re ready. If not, Docker Desktop probably isn’t running — check your system tray.

Your First Dockerfile: A Real App, Not Hello World

Let’s containerize something useful. Here’s a simple Node.js Express API. If you have an existing project, skip the example and apply the same Dockerfile pattern.

Create a basic Express app (or use your own):

// server.js (ESM imports, so package.json needs "type": "module")
import express from 'express';
const app = express();
const PORT = process.env.PORT || 3000;

app.get('/health', (req, res) => {
  res.json({ status: 'ok', timestamp: new Date().toISOString() });
});

app.get('/api/hello', (req, res) => {
  res.json({ message: 'Hello from Docker' });
});

app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});
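Two prerequisites before containerizing: the import syntax above is ESM, so package.json needs "type": "module", and npm ci requires a package-lock.json, which a one-time npm install generates. A minimal package.json looks like this (the express version is just an example):

```json
{
  "name": "my-api",
  "version": "1.0.0",
  "type": "module",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "express": "^4.19.2"
  }
}
```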

Now the Dockerfile:

# Start with a Node.js base image
FROM node:20-slim

# Set the working directory inside the container
WORKDIR /app

# Copy package files first (for better caching)
COPY package*.json ./

# Install production dependencies from the exact lockfile versions
RUN npm ci --omit=dev

# Copy the rest of your app
COPY . .

# Tell Docker which port your app uses
EXPOSE 3000

# The command to start your app
CMD ["node", "server.js"]

Every line does one thing. No magic. Let me explain the two decisions that actually matter.

COPY package*.json before COPY .: Docker builds in layers. Each line creates a layer that gets cached. If your code changes but package.json doesn’t, Docker skips the npm ci step entirely. This can take your build from 45 seconds to 3 seconds on subsequent builds. Layer caching is the single most important Docker performance concept.

npm ci instead of npm install: ci installs exactly what the lockfile specifies, and fails loudly if package.json and package-lock.json disagree. No surprises. No version drift. This is what you want in a container — deterministic builds.
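For contrast, here’s the ordering that kills the cache. With a single COPY before the install, any source edit invalidates the dependency layer and re-runs npm ci on every build:

```dockerfile
# Anti-pattern: code and package files copied in one layer.
# Editing any source file busts the cache for the npm ci step below.
FROM node:20-slim
WORKDIR /app
COPY . .
RUN npm ci --omit=dev
CMD ["node", "server.js"]
```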

Build and run it:

# Build the image (the -t flag gives it a name)
docker build -t my-api .

# Run the container
docker run -p 3000:3000 my-api

Hit http://localhost:3000/health in your browser. If you see {"status":"ok"}, your app is running in a container.

The -p 3000:3000 flag maps port 3000 on your machine to port 3000 inside the container. Without this, the container is running but unreachable.

Docker Compose: Your Local Development Stack

A single Dockerfile gets your app running. But most real apps need a database, maybe Redis, maybe a background worker. Running three separate docker run commands with networking flags is miserable.

Docker Compose lets you define your entire stack in one file:

# docker-compose.yml
services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://postgres:postgres@db:5432/myapp
      - REDIS_URL=redis://cache:6379
    depends_on:
      - db
      - cache
    volumes:
      - .:/app
      - /app/node_modules

  db:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=myapp
    ports:
      - "5432:5432"
    volumes:
      - pgdata:/var/lib/postgresql/data

  cache:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  pgdata:

Start everything with one command:

docker compose up

Your API, PostgreSQL, and Redis are all running. They can talk to each other using the service names as hostnames — db, cache — because Docker Compose creates a network for them automatically.
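One caveat worth knowing: depends_on only waits for the container to start, not for the service inside it to be ready, so the API can race Postgres on a cold start. A healthcheck plus a condition closes that gap. A sketch of the relevant parts of the compose file:

```yaml
services:
  api:
    depends_on:
      db:
        condition: service_healthy
      cache:
        condition: service_started

  db:
    healthcheck:
      # pg_isready ships inside the postgres image
      test: ["CMD-SHELL", "pg_isready -U postgres -d myapp"]
      interval: 2s
      timeout: 3s
      retries: 15
```

Now docker compose up holds the API back until Postgres actually accepts connections, not just until its container exists.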

Stop everything:

docker compose down

The Volumes Trick for Hot Reloading

Notice this line in the api service:

volumes:
  - .:/app
  - /app/node_modules

The first line mounts your local code directory into the container. When you edit a file on your machine, the change appears inside the container instantly. Combined with nodemon or a similar file watcher, you get hot reloading without rebuilding the container.

The second line prevents your local node_modules from overwriting the container’s node_modules. This matters because if you’re on macOS and the container runs Linux, the native binaries in node_modules won’t match. The anonymous volume keeps the container’s dependencies separate.

This pattern gives you the development experience you expect — edit, save, see changes — while running in a container.
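To get the file watcher without baking it into the image, one option is a compose override that swaps the start command. A sketch, assuming nodemon is available inside the container (for example, installed as a dependency, or in a dev image built without --omit=dev):

```yaml
# docker-compose.override.yml — merged automatically by `docker compose up`
services:
  api:
    command: npx nodemon --legacy-watch server.js
```

Compose reads docker-compose.override.yml on top of docker-compose.yml by default, so development gets the watcher while the base file stays production-shaped.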

The .dockerignore File (Don’t Skip This)

Create a .dockerignore file in your project root:

node_modules
npm-debug.log
.git
.env
.env.local
.DS_Store
coverage
dist

Without this, COPY . . copies your entire node_modules directory into the container — then npm ci installs a fresh one anyway. That adds hundreds of megabytes to your build context and can waste minutes on every build. The ignore file also keeps your .git history and .env secrets out of the image, which matters the moment you push that image anywhere. It’s the single easiest Docker performance win.

Five Commands That Cover 90% of Daily Use

You don’t need to memorize the Docker CLI. These five commands handle almost everything:

# Start your stack
docker compose up

# Start in the background (detached mode)
docker compose up -d

# See running containers
docker compose ps

# View logs
docker compose logs -f api

# Stop everything and clean up
docker compose down

For debugging:

# Open a shell inside a running container
docker compose exec api sh

# Run a one-off command
docker compose exec db psql -U postgres -d myapp

That last one is how you connect to your PostgreSQL database without installing psql locally. The database is running in a container, and you execute psql inside that container.

Common Problems and Actual Fixes

I’ve hit every Docker problem worth hitting. Here are the fixes for the ones you’ll encounter in your first week.

“Port already in use”: Something else is using port 3000 (or 5432, or whatever). Either stop the other process (lsof -i :3000 to find it) or change the port mapping: -p 3001:3000.

Build is slow: Check that .dockerignore exists and includes node_modules. If your build is still slow, look at your Dockerfile order — put things that change rarely (base image, dependencies) at the top, things that change often (your code) at the bottom. Layer caching only works for consecutive unchanged layers.

Container can’t connect to the database: In Docker Compose, services communicate by service name, not localhost. Your database URL should be postgres://postgres:postgres@db:5432/myapp where db is the service name in your compose file. A common mistake is using localhost — inside a container, localhost is the container itself, not your machine.
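The safe pattern is to never hardcode a host: read the connection string from the environment and let the hostname be whatever Compose injected. A small illustration (the fallback value here is just an example matching the compose file above):

```javascript
// db-url.js — inside Compose, the database hostname is the service name ("db"),
// not localhost. DATABASE_URL is injected via the compose file's `environment:` block.
const url = process.env.DATABASE_URL ?? 'postgres://postgres:postgres@db:5432/myapp';
const host = new URL(url).hostname;

console.log(`connecting to database host: ${host}`);
```

Run under the compose file above, host resolves to "db"; run locally against a native Postgres, the same code works with DATABASE_URL pointed at localhost.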

“No space left on device”: Docker accumulates old images and containers. Clean them up:

docker system prune -a

This removes all stopped containers, unused networks, build cache, and (because of the -a flag) every image not used by an existing container, not just dangling ones. Run it monthly.

Hot reload isn’t working: Make sure your volumes are mounted correctly and your file watcher is configured to watch the right directory. Inside the container, your code lives at /app (or whatever WORKDIR you set). Some watchers need explicit polling mode inside containers: nodemon --legacy-watch.

When Docker Isn’t Worth It

I’d be lying if I said Docker is always the right call. Here’s when I skip it.

Solo projects in the first week. If you’re prototyping alone and the tech stack is simple (one language, no database), Docker adds overhead without much benefit. Just run it locally. Add Docker when you need it — a teammate joins, or you’re ready to deploy.

When your team doesn’t use it. Docker’s value is in consistency. If you’re the only person using containers, you’re not solving the “it works on my machine” problem — you’re creating a new one where you’re the only person who can debug the container setup.

When the app is a static site. A static site generator that outputs HTML files doesn’t need Docker for development. It needs Docker for deployment, maybe, but for local dev you’re adding complexity to a simple problem.

For everything else — multi-service apps, team projects, anything that needs a database — Docker pays for itself within the first week. The first time you onboard a new developer and they run docker compose up instead of spending a day configuring PostgreSQL, Redis, and environment variables, you’ll understand why.

What’s Next After This

You now have enough Docker knowledge to be productive. You can build images, run services, compose multi-container stacks, and debug common problems. That covers the first six months of most developers’ Docker usage.

When you’re ready, the advanced topics will still be there.

But don’t rush there. Use what you’ve learned here for a few weeks first. Build muscle memory with the five commands. Get comfortable reading a Dockerfile like you read a package.json or a tsconfig. The advanced stuff matters more when you understand the fundamentals.

Docker’s real value isn’t in the technology. It’s in the guarantee: this will run the same way everywhere. Once you trust that guarantee — because you’ve verified it yourself — you’ll wonder how you ever shipped software without it.