Docker Security Best Practices: Building Secure Container Images
A comprehensive guide to securing Docker containers from image creation to runtime, covering base images, secrets management, scanning, and hardening techniques.
Container Security Team
Security Engineering
Introduction
Docker containers have revolutionized application deployment, but their security requires careful attention throughout the entire lifecycle. From selecting base images to runtime monitoring, each decision impacts your security posture. This guide covers essential Docker security practices that every development team should implement.
1. Secure Base Image Selection
Your container's security foundation starts with the base image. Choosing the right base image significantly reduces your attack surface and maintenance burden.
Use Minimal Base Images
Minimal images like Alpine, distroless, or scratch-based images reduce the attack surface by eliminating unnecessary binaries and libraries:
# Bad: Large base image with many unnecessary packages
FROM ubuntu:latest
RUN apt-get update && apt-get install -y \
curl wget vim net-tools
# Good: Minimal Alpine-based image
FROM alpine:3.19
RUN apk add --no-cache ca-certificates
# Better: Distroless for production
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=builder /app/myapp /app/myapp
ENTRYPOINT ["/app/myapp"]
Use Specific Image Tags
Always use specific version tags instead of 'latest' to ensure reproducible and predictable builds:
# Bad: Unpredictable
FROM node:latest
# Good: Specific version
FROM node:20.11.0-alpine3.19
# Better: SHA256 digest for immutability
FROM node:20.11.0-alpine3.19@sha256:4d64c5d...
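As a lightweight guardrail, a CI step can reject Dockerfiles whose FROM lines are unpinned. The sketch below is a hypothetical helper (not part of any standard tool) that flags `latest` or tag-less images while accepting digest pins:

```python
import re

def check_from_pinning(dockerfile_text: str) -> list:
    """Return a warning for each FROM line that is not pinned.

    Digest pins (@sha256:...) are accepted; bare names and :latest are flagged.
    """
    warnings = []
    for line in dockerfile_text.splitlines():
        m = re.match(r"\s*FROM\s+(\S+)", line, re.IGNORECASE)
        if not m:
            continue
        image = m.group(1)
        if image.lower() == "scratch" or "@sha256:" in image:
            continue  # scratch has no tag; digest pins are immutable
        if ":" not in image or image.endswith(":latest"):
            warnings.append(f"unpinned image: {image}")
    return warnings
```

A real linter would also need to skip `FROM` references to earlier build stages and handle flags like `--platform`; this version only illustrates the core check.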
Verify Image Signatures
Use Docker Content Trust (DCT) to verify image signatures and ensure authenticity:
# Enable Docker Content Trust
export DOCKER_CONTENT_TRUST=1
# Pull signed images only
docker pull nginx:1.25.3
# Sign your own images
docker trust sign myregistry.com/myapp:v1.0.0
2. Dockerfile Security Best Practices
Run as Non-Root User
Never run containers as root. Create and use a dedicated non-privileged user:
FROM node:20-alpine
# Create app user and group
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
# Set working directory
WORKDIR /app
# Copy and install dependencies as root
COPY package*.json ./
RUN npm ci --omit=dev
# Copy application files
COPY --chown=appuser:appgroup . .
# Switch to non-root user
USER appuser
EXPOSE 3000
CMD ["node", "server.js"]
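One way to enforce the non-root rule in review or CI is a small Dockerfile check. This is a simplified, hypothetical helper that reports the effective USER of the final stage; it does not know about defaults baked into the base image itself (for example, distroless `:nonroot` images):

```python
def final_user(dockerfile_text: str) -> str:
    """Return the effective USER of the final build stage.

    Falls back to 'root', Docker's default when no USER instruction appears.
    Each FROM starts a new stage, which resets the tracked user.
    """
    user = "root"
    for line in dockerfile_text.splitlines():
        stripped = line.strip()
        if stripped.upper().startswith("FROM "):
            user = "root"  # new stage: back to the base image default
        elif stripped.upper().startswith("USER "):
            user = stripped.split(None, 1)[1]
    return user
```

A CI job could fail the build whenever `final_user(...)` returns `root`.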
Minimize Layers and Use Multi-Stage Builds
Multi-stage builds separate build dependencies from runtime, creating smaller and more secure images:
# Stage 1: Build
FROM golang:1.21-alpine AS builder
WORKDIR /build
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo \
-ldflags '-w -s' -o app ./cmd/api
# Stage 2: Runtime
FROM alpine:3.19
RUN apk --no-cache add ca-certificates && \
addgroup -S appgroup && \
adduser -S appuser -G appgroup
WORKDIR /app
COPY --from=builder /build/app .
COPY --from=builder /build/configs ./configs
USER appuser
EXPOSE 8080
ENTRYPOINT ["/app/app"]
Set Read-Only Root Filesystem
Configure containers with read-only root filesystems and explicit tmpfs mounts for directories that need write access:
# Dockerfile
FROM nginx:1.25-alpine
RUN touch /var/run/nginx.pid && \
chown -R nginx:nginx /var/run/nginx.pid /var/cache/nginx
USER nginx
# Docker run with read-only root fs
docker run --read-only \
--tmpfs /tmp \
--tmpfs /var/run \
--tmpfs /var/cache/nginx \
myapp:latest
3. Secrets Management
Never Hardcode Secrets
Never include secrets in your Dockerfile or commit them to version control. Use Docker secrets, environment variables, or external secret managers:
# Bad: Hardcoded secret
FROM node:20-alpine
ENV DATABASE_PASSWORD="supersecret123"
# Good: Use Docker secrets (Swarm); secrets are mounted at runtime under
# /run/secrets/, never baked into the image:
#   docker service create --secret db_password ... myapp:latest
# The application reads /run/secrets/db_password at startup
FROM node:20-alpine
CMD ["node", "server.js"]
# Better: Use external secret manager
FROM node:20-alpine
# Application loads secrets from AWS Secrets Manager, HashiCorp Vault, etc.
CMD ["node", "server.js"]
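On the application side, the runtime pattern can be sketched in Python: prefer the secret file that Swarm or Compose mounts under /run/secrets, and fall back to an environment variable for local development. The function name and fallback behavior are illustrative, not a standard API:

```python
import os
from pathlib import Path

def read_secret(name: str, secrets_dir: str = "/run/secrets") -> str:
    """Read a secret, preferring the file mounted by Swarm/Compose.

    Falls back to an uppercased environment variable for local development.
    """
    path = Path(secrets_dir) / name
    if path.is_file():
        return path.read_text().strip()
    value = os.environ.get(name.upper())
    if value is None:
        raise RuntimeError(f"secret {name!r} not provided")
    return value
```

Keeping this lookup in one helper makes it easy to swap in an external secret manager later without touching call sites.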
Use .dockerignore
Prevent sensitive files from being copied into the image:
# .dockerignore
.git
.env
.env.local
*.pem
*.key
secrets/
.aws/
.ssh/
node_modules/
.vscode/
.idea/
*.log
coverage/
.DS_Store
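A complementary safety net is a pre-build check that scans the build context for files the .dockerignore should have excluded. This is an illustrative sketch with an assumed (non-exhaustive) pattern list:

```python
import fnmatch
from pathlib import Path

# Assumed pattern list for illustration; extend for your environment
SENSITIVE_PATTERNS = ["*.pem", "*.key", ".env", ".env.*", "id_rsa*"]

def find_sensitive_files(context: str) -> list:
    """List files in the build context matching known-sensitive patterns."""
    hits = []
    for path in Path(context).rglob("*"):
        if path.is_file() and any(
            fnmatch.fnmatch(path.name, p) for p in SENSITIVE_PATTERNS
        ):
            hits.append(str(path.relative_to(context)))
    return sorted(hits)
```

Running this before `docker build` and failing on any hit catches mistakes that a forgotten .dockerignore entry would otherwise bake into a layer.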
Use BuildKit Secret Mounts
Docker BuildKit provides secure secret mounting that doesn't persist in image layers:
# syntax=docker/dockerfile:1.4
FROM python:3.11-slim
WORKDIR /app
# Mount the secret during build without persisting it; BuildKit exposes it
# as a file at /run/secrets/pip_token, not as an environment variable
RUN --mount=type=secret,id=pip_token \
    pip install --index-url="https://$(cat /run/secrets/pip_token)@pypi.company.com/simple/" \
    -r requirements.txt
# Build with secret
DOCKER_BUILDKIT=1 docker build \
--secret id=pip_token,src=./pip_token.txt \
-t myapp:latest .
4. Image Scanning and Vulnerability Management
Scan Images for Vulnerabilities
Integrate vulnerability scanning into your CI/CD pipeline using tools like Trivy, Grype, or Snyk:
# Scan with Trivy
trivy image --severity HIGH,CRITICAL myapp:latest
# Fail the build on critical findings
trivy image --exit-code 1 --severity CRITICAL myapp:latest
# Generate SARIF report for GitHub Security
trivy image --format sarif --output trivy-results.sarif myapp:latest
# Scan with Docker Scout
docker scout cves myapp:latest
docker scout recommendations myapp:latest
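To gate a pipeline on scan results without relying on exit codes alone, the JSON report can be parsed directly. The sketch below assumes Trivy's JSON report layout (`Results[].Vulnerabilities[].Severity`, with `Vulnerabilities` possibly null); treat it as illustrative rather than a schema guarantee:

```python
import json

def count_by_severity(report_json: str) -> dict:
    """Tally vulnerabilities per severity in a Trivy JSON report."""
    report = json.loads(report_json)
    counts = {}
    for result in report.get("Results", []):
        for vuln in result.get("Vulnerabilities") or []:
            sev = vuln.get("Severity", "UNKNOWN")
            counts[sev] = counts.get(sev, 0) + 1
    return counts

def gate(report_json: str, blocking=("CRITICAL",)) -> bool:
    """Return True if the image passes (no blocking-severity findings)."""
    counts = count_by_severity(report_json)
    return not any(counts.get(sev, 0) for sev in blocking)
```

Parsing the report also lets you print a per-severity summary in the build log, which plain exit codes cannot.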
CI/CD Integration Example
# .github/workflows/security-scan.yml
name: Container Security Scan
on:
  push:
    branches: [main]
  pull_request:
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'myapp:${{ github.sha }}'
          format: 'sarif'
          output: 'trivy-results.sarif'
          severity: 'CRITICAL,HIGH'
      - name: Upload Trivy results to GitHub Security
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: 'trivy-results.sarif'
      - name: Fail on critical vulnerabilities
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'myapp:${{ github.sha }}'
          exit-code: '1'
          severity: 'CRITICAL'
5. Runtime Security
Use Security Profiles
Apply AppArmor, SELinux, or seccomp profiles to restrict container capabilities:
# Custom seccomp profile (seccomp-profile.json)
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "open", "close", "stat", "fstat",
                "mmap", "exit_group", "rt_sigreturn"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
# Run with seccomp profile
docker run --security-opt seccomp=seccomp-profile.json myapp:latest
# Run with AppArmor profile
docker run --security-opt apparmor=docker-default myapp:latest
# Drop all capabilities and add only needed ones
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE myapp:latest
Set Resource Limits
Prevent resource exhaustion attacks by setting memory and CPU limits:
docker run -d \
--name myapp \
--memory="512m" \
--memory-swap="512m" \
--cpus="1.0" \
--pids-limit 100 \
--ulimit nofile=1024:1024 \
myapp:latest
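From inside the container, the effect of `--memory` can be confirmed by reading the cgroup interface. This sketch assumes a cgroup v2 host, where the limit appears in the `memory.max` file (the literal value `max` means unlimited):

```python
from pathlib import Path
from typing import Optional

def memory_limit_bytes(cgroup_root: str = "/sys/fs/cgroup") -> Optional[int]:
    """Read the container's memory limit from the cgroup v2 memory.max file.

    Returns None when the file is absent (e.g. cgroup v1 host) or when
    the limit is 'max' (unlimited).
    """
    path = Path(cgroup_root) / "memory.max"
    if not path.is_file():
        return None
    value = path.read_text().strip()
    return None if value == "max" else int(value)
```

A startup self-check like this can log a warning when an app is accidentally deployed without the expected limit.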
# Docker Compose
services:
  app:
    image: myapp:latest
    deploy:
      resources:
        limits:
          cpus: '1.0'
          memory: 512M
        reservations:
          cpus: '0.5'
          memory: 256M
    security_opt:
      - no-new-privileges:true
    read_only: true
    tmpfs:
      - /tmp
Enable User Namespace Remapping
Configure Docker daemon to remap container users to unprivileged host users:
# /etc/docker/daemon.json
{
  "userns-remap": "default",
  "live-restore": true,
  "userland-proxy": false,
  "no-new-privileges": true
}
6. Network Security
Use Custom Bridge Networks
Isolate containers using custom networks instead of the default bridge:
# Create isolated networks
docker network create --driver bridge app-network
docker network create --driver bridge db-network
# Run containers on specific networks
docker run -d --name frontend --network app-network frontend:latest
docker run -d --name backend --network app-network backend:latest
# Older Docker CLI versions accept only one --network at run time;
# attaching the second network afterwards works everywhere
docker network connect db-network backend
docker run -d --name database --network db-network postgres:15
# Backend can reach database, frontend cannot
Docker Compose Network Segmentation
version: '3.8'
services:
  frontend:
    image: frontend:latest
    networks:
      - frontend-network
    ports:
      - "80:80"
  backend:
    image: backend:latest
    networks:
      - frontend-network
      - backend-network
    environment:
      - DB_HOST=database
  database:
    image: postgres:15
    networks:
      - backend-network
    volumes:
      - db-data:/var/lib/postgresql/data
networks:
  frontend-network:
    driver: bridge
  backend-network:
    driver: bridge
    internal: true  # No external access
volumes:
  db-data:
7. Image Signing and Supply Chain Security
Implement Cosign for Image Signing
# Generate key pair
cosign generate-key-pair
# Sign image
cosign sign --key cosign.key myregistry.com/myapp:v1.0.0
# Verify signature before deployment
cosign verify --key cosign.pub myregistry.com/myapp:v1.0.0
# Sign with keyless signing (OIDC)
cosign sign myregistry.com/myapp:v1.0.0
Generate and Attach SBOM
# Generate SBOM with Syft
syft myapp:latest -o spdx-json > sbom.spdx.json
# Attach SBOM to image
cosign attach sbom --sbom sbom.spdx.json myregistry.com/myapp:v1.0.0
# Verify SBOM
cosign verify-attestation --key cosign.pub \
  --type spdx myregistry.com/myapp:v1.0.0
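Once an SBOM exists, downstream jobs can consume it, for example to inventory packages before an audit. This sketch assumes the SPDX 2.x JSON layout that Syft emits (`packages[].name` / `packages[].versionInfo`):

```python
import json

def list_packages(spdx_json: str) -> list:
    """Extract sorted (name, version) pairs from an SPDX JSON SBOM."""
    doc = json.loads(spdx_json)
    return sorted(
        (pkg.get("name", "?"), pkg.get("versionInfo", "?"))
        for pkg in doc.get("packages", [])
    )
```

Diffing this list between two image builds is a quick way to spot unexpected dependency changes.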
8. Logging and Monitoring
Centralized Logging
# Configure logging driver
docker run -d \
  --log-driver=json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  myapp:latest
# Forward logs to external system
docker run -d \
  --log-driver=syslog \
  --log-opt syslog-address=tcp://logs.example.com:514 \
  myapp:latest
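If you post-process logs yourself, note that the json-file driver writes one JSON object per line with `log`, `stream`, and `time` fields. A minimal parser sketch:

```python
import json

def parse_json_file_logs(raw: str) -> list:
    """Parse lines written by Docker's json-file logging driver.

    Each line is a JSON object with 'log', 'stream', and 'time' fields.
    """
    entries = []
    for line in raw.splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        entries.append({
            "time": record["time"],
            "stream": record["stream"],
            "message": record["log"].rstrip("\n"),
        })
    return entries
```

In practice a log shipper does this for you, but the same parsing is handy for ad-hoc analysis of files under /var/lib/docker/containers.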
Runtime Monitoring with Falco
# Falco rules for Docker
- rule: Unauthorized Process in Container
  desc: Detect unexpected process execution
  condition: >
    spawned_process and container and
    not proc.name in (allowed_processes)
  output: >
    Unexpected process started in container
    (user=%user.name command=%proc.cmdline container=%container.name)
  priority: WARNING
- rule: Write below root
  desc: Detect attempt to write to root filesystem
  condition: >
    open_write and container and
    fd.name startswith / and
    not fd.name startswith /tmp
  output: >
    File write below root in container
    (file=%fd.name container=%container.name)
  priority: ERROR
Best Practices Checklist
Essential Docker Security Checklist:
- ✅ Use minimal base images (Alpine, distroless)
- ✅ Run containers as non-root users
- ✅ Use specific image tags and verify signatures
- ✅ Implement multi-stage builds
- ✅ Never hardcode secrets in Dockerfiles
- ✅ Scan images for vulnerabilities in CI/CD
- ✅ Use read-only root filesystem
- ✅ Drop unnecessary capabilities
- ✅ Set resource limits (CPU, memory, PIDs)
- ✅ Apply seccomp/AppArmor profiles
- ✅ Use custom bridge networks
- ✅ Enable user namespace remapping
- ✅ Sign images and attach SBOMs
- ✅ Monitor containers with runtime security tools
- ✅ Regularly update base images and dependencies
Conclusion
Docker security is not a one-time task but a continuous practice integrated throughout the container lifecycle. By following these best practices—from selecting minimal base images to implementing runtime monitoring—you significantly reduce your attack surface and improve your security posture.
Start by implementing the foundational practices like running as non-root and scanning for vulnerabilities, then progressively adopt advanced techniques like image signing and runtime security monitoring. Remember that the most secure container is one that's regularly updated, continuously monitored, and deployed with defense-in-depth principles.