Docker FAQ & Answers
61 expert Docker answers researched from official documentation. Every answer cites authoritative sources you can verify.
Core Concepts
37 questions
Named volumes are Docker-managed storage locations created and controlled by Docker. They are stored in Docker's internal directory (/var/lib/docker/volumes/ on Linux; inside the Docker Desktop VM on macOS/Windows) and provide persistent data storage that survives container restarts and removals. Docker handles filesystem creation, permissions, and management automatically. Named volumes are abstracted from host filesystem paths, making them portable across different environments and platforms. They support volume drivers for extended functionality like cloud storage (rexray/s3fs), NFS, or custom backends. Commands: docker volume create data-volume, docker run -v data-volume:/app/data nginx, docker volume inspect data-volume (view metadata), docker volume ls. Best for production databases, stateful applications, and container-to-container data sharing. Backup: docker run --rm -v data-volume:/volume -v $(pwd):/backup alpine tar czf /backup/backup.tar.gz /volume.
Use tmpfs mounts for sensitive temporary files that shouldn't persist to disk (encryption keys, passwords, session tokens, JWT secrets), high-speed temporary processing (video transcoding, image manipulation, data transformations), application caches that can be rebuilt (Redis session store), build temporary directories (/tmp), and any data requiring maximum I/O performance with zero persistence. Perfect for security-sensitive applications (tmpfs ensures data vanishes on container stop, no disk forensics), CI/CD pipeline temporary artifacts, in-memory data processing, and compliance requirements (PCI-DSS, HIPAA). Monitor memory usage: tmpfs consumes host RAM, so set appropriate limits (e.g., tmpfs-size=100M). Avoid for: large datasets, persistent data, logs. Common pattern: --tmpfs /tmp:rw,noexec,nosuid,size=100M for hardened /tmp. Linux-only feature.
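A minimal sketch of the hardened /tmp pattern (the alpine image and the sizes are illustrative): the container writes 50 MB into a 100 MB RAM-backed mount, then reports usage to show the size cap in action.
  docker run --rm \
    --tmpfs /tmp:rw,noexec,nosuid,size=100m \
    alpine sh -c "dd if=/dev/zero of=/tmp/fill bs=1M count=50 && df -h /tmp"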
Use Macvlan/IPvlan when containers need direct network access without NAT, legacy applications expecting physical network integration, containers requiring routable IPs on physical network, or network monitoring tools needing packet capture. Perfect for IoT gateways, DHCP/DNS servers, VoIP applications, network appliances, and environments where containers must appear as physical machines. Macvlan vs IPvlan: Use Macvlan for general physical network integration; use IPvlan when MAC address limits exist (some switches limit MAC addresses per port) or for better performance with many containers. Requirements: Promiscuous mode on host NIC (may not work on public cloud VMs due to security restrictions), adequate IP address space. Security: Containers have full network access, implement firewall rules and network segmentation. Production: Test network compatibility before deployment.
BuildKit is Docker's next-generation build engine, default since Docker 23.0, now at version 0.12+ in Docker 25.0. Major advantages: (1) Parallel build stages for dramatically faster builds, (2) Improved layer caching with content-addressable storage, (3) Build secrets (--secret) never stored in layers, (4) SSH agent forwarding (--ssh), (5) Cache mounts (--mount=type=cache) for package managers, (6) Efficient context transfer (only sends changed files), (7) Multiple output formats (local, tar, OCI, registry), (8) Better error messages with source locations. Enable: DOCKER_BUILDKIT=1 or set in daemon.json "features": {"buildkit": true}. Modern Dockerfile syntax: # syntax=docker/dockerfile:1 enables latest features. Example cache mount: RUN --mount=type=cache,target=/root/.cache/pip pip install -r requirements.txt. BuildKit v0.12+ adds: Improved performance, better network handling, SBOM generation. Significantly faster builds (30-80% improvement) and better cache utilization. Essential for modern CI/CD workflows.
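A minimal Dockerfile sketch combining these BuildKit features (a Python app with a requirements.txt is assumed for illustration):
  # syntax=docker/dockerfile:1
  FROM python:3.12-slim
  WORKDIR /app
  COPY requirements.txt .
  # Cache mount: pip's download cache persists across builds without entering any layer
  RUN --mount=type=cache,target=/root/.cache/pip pip install -r requirements.txt
  COPY . .
  CMD ["python", "app.py"]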
2025 Secure Secret Management Options: (1) BuildKit secret mounts (BEST for build-time): RUN --mount=type=secret,id=mysecret,target=/run/secrets/mysecret command, pass with docker build --secret id=mysecret,src=./secret.txt - secrets never stored in layers or image history. (2) Docker secrets (Swarm mode): docker secret create mysecret secret.txt, use in service: docker service create --secret mysecret myapp - encrypted at rest and in transit, mounted as files in /run/secrets/. (3) Environment variables (runtime only): docker run -e SECRET=value or --env-file secrets.env - visible in 'docker inspect', use only for non-sensitive config. (4) Bind mount config files: docker run -v ./secrets:/secrets:ro - ensure proper file permissions (chmod 600). (5) External secrets managers: HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, Google Secret Manager - containers fetch at runtime. (6) Kubernetes secrets (K8s): Define secrets, mount as volumes or env vars. NEVER: Use ARG for secrets (visible in history), commit secrets to images, use plain ENV in Dockerfile. Scan images: 'docker scout cves' to detect leaked secrets. Production pattern: External secrets manager + init container to fetch secrets + mounted volumes.
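A hedged sketch of the BuildKit secret-mount pattern (the npm_token id and file name are illustrative); the secret exists only for the duration of the RUN step and never enters a layer:
  # syntax=docker/dockerfile:1
  FROM alpine:3.19
  # Secret is mounted at /run/secrets/<id> for this step only
  RUN --mount=type=secret,id=npm_token \
      wc -c /run/secrets/npm_token
Build with: docker build --secret id=npm_token,src=./npm_token.txt .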
ARG: build-time variable, available only during docker build, not in the running container, set with the --build-arg flag. Scope ends after the stage in multi-stage builds. ENV: runtime environment variable set in the Dockerfile; it persists in the built image and running containers and can be overridden at runtime with docker run -e. ARG can provide default values for ENV using ARG VERSION=1.0 then ENV APP_VERSION=$VERSION. Use ARG for build configuration (version numbers, build targets, URLs), ENV for runtime configuration (API keys, ports, database URLs). Security: ARG values are visible in image history (docker history) - never use them for secrets; use BuildKit secrets instead (--secret). ENV increases image size slightly due to layer metadata; ARG doesn't affect the final image. BuildKit supports --build-arg for dynamic builds.
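A short Dockerfile sketch of the ARG-feeds-ENV pattern described above (the version values are illustrative):
  FROM alpine:3.19
  # Build-time only; override with: docker build --build-arg VERSION=2.0 .
  ARG VERSION=1.0
  # Promote to runtime: the value is baked into the image as an environment variable
  ENV APP_VERSION=$VERSION
  RUN echo "building version $APP_VERSION"
  CMD ["sh", "-c", "echo running version $APP_VERSION"]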
Host networks eliminate network isolation by making containers share the host's network stack directly without network namespaces. Container processes bind to host network interfaces with no NAT translation, port mapping, or virtual networking overhead. The container sees the same network interfaces (eth0, lo) as the host and can bind to any available ports directly. Provides maximum network performance (zero overhead, no iptables rules). Commands: docker run --network host nginx (container serves directly on host's port 80), docker run --network host alpine ip addr (shows host network interfaces). Port mappings (-p) are ignored with host networking. Linux-only feature (macOS/Windows use VM networking). Security: Container has full host network access. Use cases: High-performance applications, monitoring tools (Prometheus node_exporter), legacy apps requiring specific network behavior.
Macvlan assigns containers unique MAC addresses, making them appear as physical devices on the network. Containers get direct access to physical network without NAT, receiving IPs from physical network DHCP or static assignment. Macvlan modes: bridge (default, container-to-container via bridge), private (no container communication), vepa (hairpin mode via external switch), passthru (single container gets full control). IPvlan provides similar functionality but shares host's MAC address while using different IP addresses, avoiding MAC address exhaustion. IPvlan modes: l2 (layer 2 switching), l3 (layer 3 routing, no ARP). Commands: docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=eth0 macnet. Requires promiscuous mode on host NIC. Use cases: Legacy application integration, network monitoring tools, containers needing routable IPs.
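A hedged sketch of creating a macvlan network and attaching a container with a static, routable IP (the subnet, gateway, parent interface, and address must match your physical LAN):
  docker network create -d macvlan \
    --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
    -o parent=eth0 macnet
  docker run --rm --network macnet --ip=192.168.1.50 alpine ip addr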
Bridge networks are Docker's default network type, providing private network communication between containers on the same host. Docker creates a virtual bridge (docker0) acting as a software switch. Containers get private IP addresses (typically in the 172.17.0.0/16 range) and, on user-defined bridges, can communicate by container name via Docker's embedded DNS server. External access requires port mapping with the -p flag (NAT translation). Commands: docker network create app-net (user-defined bridge with automatic DNS), docker run --network app-net --name web nginx, docker run --network app-net alpine ping web. User-defined bridges provide better isolation than the default bridge, support container name resolution, and allow runtime network connect/disconnect. Network inspection: docker network inspect app-net. Use for most single-host applications, microservices, and development environments.
Key differences: (1) Storage location - bind mounts use absolute host paths (/host/path), named volumes use Docker-managed locations (/var/lib/docker/volumes/). (2) Portability - bind mounts tied to host structure, volumes portable across environments. (3) Permissions - bind mounts inherit host user/group, volumes managed by Docker daemon. (4) Backup - bind mounts require manual host backups, volumes use docker volume commands. (5) Use cases - bind mounts for development (hot reload, source code sync), volumes for production data. (6) Docker Compose - volumes defined in top-level volumes: section, bind mounts inline in service definition. Security: Bind mounts expose host filesystem, use read-only flag (ro) when possible. Both have similar I/O performance. Use --mount syntax (not -v) for explicit control and validation.
Multi-stage builds use multiple FROM statements in one Dockerfile, creating intermediate stages for building and a final stage with only runtime dependencies. BuildKit (default since Docker 23.0) only builds required stages and executes them in parallel when possible, dramatically improving build speed. This reduces final image size by 10-50x by excluding build tools, compilers, source code, and intermediate artifacts. Example: FROM golang:1.22 AS builder → build binary → FROM alpine:3.19 → COPY --from=builder. Stages can be named with AS keyword and referenced with --from. Critical for production security (minimal attack surface), faster deployments, and reduced storage costs. Supports --target flag to build specific stages during development.
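A minimal sketch of the Go pattern described above (assumes a Go module in the build context; image versions are illustrative):
  FROM golang:1.22 AS builder
  WORKDIR /src
  COPY . .
  RUN CGO_ENABLED=0 go build -o /bin/app .

  # Final stage: only the static binary ships; no compiler, no source code
  FROM alpine:3.19
  COPY --from=builder /bin/app /usr/local/bin/app
  ENTRYPOINT ["app"]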
Volume drivers extend Docker storage capabilities beyond local filesystem to external systems like cloud storage, NFS, or specialized plugins. Enable remote storage, advanced features like encryption, compression, snapshots, or backup. Common drivers: local (default), vieux/sshfs (SSH/SFTP), rexray/s3fs (AWS S3), azure-file (Azure Storage), nfs (NFS mounts). Commands: docker volume create --driver vieux/sshfs -o sshcmd=user@host:/path -o password=secret ssh-volume, docker run -v ssh-volume:/data nginx. Install drivers: docker plugin install vieux/sshfs. List installed plugins: docker plugin ls. Use cases: Multi-host data sharing, cloud storage integration, backup/disaster recovery, compliance requirements (data residency). Production: Prefer managed storage (AWS EFS, Azure Files) over self-managed NFS for reliability.
docker exec: Runs new process inside running container, creates new session (independent stdin, stdout, stderr), exiting doesn't stop container, can run as different user with --user, can set environment with --env. Best for debugging, running commands, accessing shell. Example: 'docker exec -it container bash', 'docker exec container ps aux'. Process gets new PID inside container. Can run multiple exec sessions simultaneously. Common flags: -i (interactive), -t (TTY), -w (working directory), -e (environment vars). docker attach: Attaches terminal to container's main process (PID 1), shares same stdin/stdout/stderr, exiting (Ctrl+C) sends signal to main process and may stop container, detaching requires Ctrl+P Ctrl+Q. Use for viewing main process output, interactive containers, or applications expecting terminal input. Only one attach session possible at a time. Modern practice: Use 'exec' for most debugging, 'attach' only for interactive applications or viewing startup logs.
Docker captures stdout/stderr from container's main process (PID 1). Applications must log to stdout/stderr, not files. View logs: 'docker logs container-name', --follow/-f for real-time streaming, --tail N for last N lines, --since/--until for time filtering, --timestamps to show timestamps. Logging drivers: json-file (default, stored on disk at /var/lib/docker/containers/), syslog (system log), journald (systemd journal), gelf (Graylog), fluentd (Fluentd aggregator), awslogs (AWS CloudWatch), splunk (Splunk), gcplogs (Google Cloud Logging), local (optimized json-file), none (disable logging). Configure driver: --log-driver=json-file --log-opt max-size=10m --log-opt max-file=3 (rotation). Global config in daemon.json. json-file driver supports log rotation to prevent disk exhaustion. Centralized logging strongly recommended for production: ELK Stack (Elasticsearch, Logstash, Kibana), Grafana Loki, Datadog, Splunk. Docker Compose logging: Configure per service in docker-compose.yml.
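A sketch of the daemon-wide rotation config mentioned above, placed in /etc/docker/daemon.json (the values are illustrative; restart the daemon after editing):
  {
    "log-driver": "json-file",
    "log-opts": {
      "max-size": "10m",
      "max-file": "3"
    }
  }
The same driver and options can be set per service under the logging: key in docker-compose.yml.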
Distroless images contain only application and runtime dependencies (language runtime, libraries), no OS distribution, no package managers (apt, yum), no shells (sh, bash), no GNU coreutils, no debugging tools. Pioneered by Google (gcr.io/distroless/*). Benefits: (1) Minimal attack surface (70-90% smaller than full OS images, no shell prevents shell injection), (2) Smaller image size (10-50 MB vs 100-500 MB for OS base images), (3) Faster deployments and pulls, (4) Compliance and audit-friendly (fewer CVEs), (5) Forces better security practices. Trade-offs: Harder debugging (no shell - use 'docker cp', or debug variants with :debug tag that include busybox). Available for: Java, Python, Node.js, Go, Rust, .NET. Use multi-stage build: FROM golang:1.22 AS builder (full image for build) → FROM gcr.io/distroless/base-debian12 (distroless for runtime). Not suitable for: Complex apps needing system tools, applications requiring dynamic troubleshooting. 2025 trend: Distroless gaining adoption in security-conscious organizations. Alternative: Alpine Linux (still has package manager but 5MB base).
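A hedged multi-stage sketch targeting a distroless runtime (assumes a statically linked Go binary; image tags are illustrative):
  FROM golang:1.22 AS builder
  WORKDIR /src
  COPY . .
  RUN CGO_ENABLED=0 go build -o /app .

  # Distroless runtime: no shell, no package manager, minimal attack surface
  FROM gcr.io/distroless/static-debian12
  COPY --from=builder /app /app
  ENTRYPOINT ["/app"]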
COPY: straightforward copying of files/directories from host to image, recommended for most use cases. Transparent, predictable behavior. ADD: like COPY but with extra features: auto-extracts local tar archives (including gzip, bzip2, and xz compressed ones) and can fetch files from URLs (URL downloads are not extracted). Docker best practices (2025) strongly recommend using COPY unless you specifically need ADD's extraction feature. ADD's magic behaviors can be surprising and reduce Dockerfile clarity. Both support the --chown flag for setting ownership: COPY --chown=user:group. BuildKit provides additional options: --link for improved layer reuse, --chmod for setting permissions. Security: Never ADD from untrusted URLs. Use COPY for application code, ADD only for extracting trusted archives.
BuildKit caches each layer (instruction result) during build using content-addressable storage. If instruction and context haven't changed, BuildKit reuses cached layer. Cache invalidates when instruction changes, invalidating all subsequent layers. Optimize by: (1) Order instructions from least to most frequently changing, (2) COPY package.json first, RUN npm install, then COPY source code, (3) Combine related RUN commands with &&, (4) Use .dockerignore to exclude unnecessary files. BuildKit cache mounts (--mount=type=cache,target=/path) persist directories across builds without layer commits. Example: RUN --mount=type=cache,target=/root/.cache/pip pip install -r requirements.txt. COPY/ADD invalidate cache on file content changes (checksum-based). Use docker buildx build --cache-from/--cache-to for distributed cache with registries.
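A sketch of the cache-friendly ordering for a Node.js app (file names assume a standard npm project):
  FROM node:20-slim
  WORKDIR /app
  # Dependency manifests first: this layer is reused until the package files change
  COPY package.json package-lock.json ./
  RUN --mount=type=cache,target=/root/.npm npm ci
  # Source code last: edits here no longer invalidate the install layer
  COPY . .
  CMD ["node", "server.js"]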
None network disables all networking for containers, creating complete network isolation with no network interfaces except loopback (lo). The container cannot communicate externally or with other containers - completely air-gapped. Commands: docker run --network none alpine ifconfig (shows only the lo interface), docker run --network none alpine ping 8.8.8.8 (fails, no network). Use cases: (1) Maximum security workloads processing sensitive data, (2) Batch processing jobs not needing network, (3) Testing network isolation, (4) Manual network configuration (disable Docker networking, configure interfaces yourself from the host with low-level tools), (5) Compliance requirements mandating network isolation. Containers can still use volumes for data exchange. Performance: Zero network overhead. Note that the none driver cannot be combined with other Docker networks on the same container, so plan isolation up front. Combine with a read-only filesystem (--read-only) for maximum security posture.
Use named volumes for production databases (PostgreSQL, MySQL, MongoDB, Redis), application data persistence, log aggregation, and any data that must survive container lifecycle events. They are ideal for stateful applications, data requiring backup/restore, multi-container data sharing, and when you want Docker to manage storage complexity. Best practices: (1) Use explicit naming (db-data, app-logs), (2) Define in docker-compose.yml for consistency, (3) Set volume drivers for cloud storage integration. Named volumes work with volume plugins for advanced features (snapshots, encryption, replication). Avoid for: development source code (use bind mounts), secrets (use Docker secrets), temporary data (use tmpfs). Performance equivalent to native filesystem. In Kubernetes, these map to PersistentVolumeClaims.
Tags are aliases/pointers for image IDs (SHA256 digests), format: [registry/]repository:tag. 'latest' is just the default tag applied when none is specified - it is NOT automatically the newest version; you must explicitly re-tag a release as 'latest' to move it. Semantic versioning recommended for production: myapp:1.2.3 (specific), myapp:1.2 (minor updates), myapp:1 (major updates), myapp:latest (bleeding edge). Tags are mutable (can be reassigned to different images) - a security risk. Best practices (2025): (1) Always specify explicit version tags in production deployments, never use 'latest'. (2) Use multi-tagging: apply multiple tags to the same image with repeated docker tag commands (see the sketch below). (3) Use the SHA256 digest for immutability: myapp@sha256:abc123 (guarantees the exact image). (4) Include build metadata: myapp:v1.2.3-alpine-20250101. (5) Use the commit SHA for CI/CD: myapp:git-abc1234. Tag commands: 'docker tag source target', 'docker push registry/myapp:tag'. Automated tagging in CI/CD pipelines recommended.
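A sketch of the multi-tagging workflow (registry.example.com and the version numbers are illustrative):
  docker build -t registry.example.com/myapp:1.2.3 .
  docker tag registry.example.com/myapp:1.2.3 registry.example.com/myapp:1.2
  docker tag registry.example.com/myapp:1.2.3 registry.example.com/myapp:latest
  docker push -a registry.example.com/myapp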
Use host networks for high-performance networking applications (database servers, high-throughput web servers, real-time data processing), monitoring tools needing full host visibility (Prometheus node_exporter, cAdvisor, netdata), legacy applications requiring specific network behavior, and when you need maximum network throughput without NAT overhead. Perfect for local development, CI/CD pipelines, performance-critical microservices, and avoiding port conflicts. Trade-offs: No port isolation (container port 80 = host port 80), reduced security (full network access), Linux-only. Best practices: Combine with resource limits (--memory, --cpus), run as non-root user (--user), monitor with docker stats. Avoid on multi-tenant systems or when network isolation required. Alternative: Use macvlan for network performance with container isolation.
CMD: provides default arguments, easily overridden by docker run arguments. ENTRYPOINT: defines executable, harder to override (requires --entrypoint flag). When both used: ENTRYPOINT is executable, CMD provides default args that can be overridden. Exec form (JSON array) recommended: ["executable", "arg1"] - avoids shell processing, handles signals correctly. Shell form runs as /bin/sh -c - use only when you need shell features. Use ENTRYPOINT for executable containers (databases, CLI tools), CMD for default arguments or simple commands. Combine both for flexible but opinionated containers. Example: ENTRYPOINT ["python", "app.py"], CMD ["--port", "8080"] - users can override port but not the executable. Security: Avoid shell form to prevent shell injection.
HEALTHCHECK instruction tells Docker how to test if container is working correctly beyond process running. Syntax: HEALTHCHECK CMD command (exit 0 = healthy, 1 = unhealthy, 2 = reserved). Options: --interval=30s (check frequency), --timeout=30s (command timeout), --start-period=0s (initialization time before checks count), --retries=3 (consecutive failures before unhealthy). Container health states: starting (during start-period), healthy (passing checks), unhealthy (failed retries). Orchestrators (Kubernetes, Docker Swarm, ECS) use health checks for automated recovery, rolling updates, and load balancer integration. Example: HEALTHCHECK --interval=30s --timeout=3s CMD curl -f http://localhost/ || exit 1. Can override at runtime: docker run --health-cmd='curl -f http://localhost/' --health-interval=10s nginx. Critical for production reliability, zero-downtime deployments, and automatic failover. View health: docker inspect --format='{{.State.Health.Status}}' container.
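A minimal HEALTHCHECK sketch (nginx:alpine is assumed here because it ships busybox wget; the timings are illustrative):
  FROM nginx:1.25-alpine
  HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
    CMD wget -q --spider http://localhost/ || exit 1
Check the result with: docker inspect --format='{{.State.Health.Status}}' container.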
Bind mounts map a host directory or file directly into a container using absolute filesystem paths. They bypass Docker's storage management and provide direct access to the host filesystem. Unlike named volumes, bind mounts don't create new storage - they reference existing host paths. They can mount individual files or entire directories. Commands: docker run -v /host/path:/container/path nginx, docker run --mount type=bind,source=/host/path,target=/container/path nginx. Bind mounts give containers direct access to host files with the same permissions as the host user running Docker. They are the original Docker storage mechanism and remain essential for many use cases.
FROM: base image (required, must be first instruction, supports multi-platform). WORKDIR: sets working directory. COPY/ADD: copies files from host (use COPY for transparency, ADD for archives). RUN: executes commands (combine with && for layer optimization). ENV: sets environment variables. EXPOSE: documents ports (metadata only). CMD: default command when container starts. ENTRYPOINT: configures container as executable. ARG: build-time variables. USER: sets non-root user (security best practice). LABEL: adds metadata (maintainer, version). HEALTHCHECK: defines container health checks. Order matters critically for BuildKit layer caching. Modern syntax: # syntax=docker/dockerfile:1 enables latest features.
An image is a read-only template containing application code, libraries, dependencies, and configuration - essentially an immutable snapshot built from layers. A container is a runnable instance of an image with an added writable layer on top using copy-on-write (CoW) filesystem. Multiple containers can run from the same image independently, each with its own writable layer. Analogy: image is like a class, container is like an object instance. Images are built with Dockerfiles using BuildKit (default in Docker 23+), stored in registries (Docker Hub, ECR, GCR). Containers are created with 'docker run' and managed through their lifecycle states. Images are identified by SHA256 digest for immutability; containers have unique IDs and names.
Like .gitignore, .dockerignore excludes files/directories from Docker build context sent to daemon before build starts. Improves build performance (smaller context, faster upload to daemon), reduces final image size (prevents accidental COPY of large files), prevents sensitive files (credentials, .env, .git, secrets, private keys) from being copied into image. Supports glob patterns: wildcards (*.log, **/temp), negation (! for exceptions). Place in same directory as Dockerfile. Syntax: one pattern per line, # for comments. Critical for security (prevent secret leakage) and performance (exclude node_modules, .git, test files). Common exclusions: node_modules/, .git/, *.log, .env, dist/, build/, **/*.md. BuildKit respects .dockerignore for better caching. Modern pattern: Also use .dockerignore to exclude Dockerfile itself and docker-compose.yml from context.
Docker Compose v2 is a tool integrated into Docker CLI for defining and running multi-container applications using YAML files (compose.yaml or docker-compose.yml). Solves problems: (1) Manage multiple containers together, (2) Define services, networks, volumes declaratively, (3) Automatic service discovery and networking, (4) Environment-specific configurations with .env files, (5) One command to start/stop entire stack. Commands: 'docker compose up -d' (detached), 'docker compose down' (cleanup), 'docker compose watch' (hot reload). Compose v2 (v2.39.2+ in 2025) requires Docker 28.3.3+, written in Go (faster than v1 Python). New features: Compose Bridge for Kubernetes/Helm conversion, GPU support for AI/LLM workloads, direct cloud deployment to Google Cloud Run/Azure Container Apps. Use 'docker compose' not 'docker-compose'.
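A minimal compose.yaml sketch (the service names, images, and password are illustrative; use secrets for real credentials):
  services:
    web:
      image: nginx:1.25
      ports:
        - "8080:80"
      depends_on:
        - db
    db:
      image: postgres:16
      environment:
        POSTGRES_PASSWORD: example
      volumes:
        - db-data:/var/lib/postgresql/data
  volumes:
    db-data:
Start the stack with docker compose up -d; tear it down with docker compose down.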
Tmpfs mounts store container data directly in the host's memory (RAM), creating a virtual filesystem that exists only while the container is running. They provide the fastest possible storage I/O since they bypass disk entirely. Data is completely non-persistent and disappears when the container stops. Tmpfs mounts are limited by available host memory and can be sized with specific limits. Commands: docker run --tmpfs /tmp nginx, docker run --mount type=tmpfs,destination=/app/cache,tmpfs-size=100M nginx, docker run --tmpfs /app/data:rw,size=500m,uid=1000,gid=1000,mode=0755 app. They support standard mount options like size, uid, gid, mode, and read/write permissions.
Docker Hub is Docker's official public registry for storing and distributing Docker images at hub.docker.com. Free for public repositories, paid for private (with rate limiting on the free tier: 100 pulls/6hrs for anonymous, 200/6hrs for authenticated). Contains official images (curated by Docker), verified publishers (Docker Verified Publisher program), and community images. Other major registries: AWS ECR (Elastic Container Registry), Google Artifact Registry (replaces GCR), Azure ACR (Azure Container Registry), GitHub Container Registry (ghcr.io), GitLab Container Registry. Self-hosted private registries: Harbor (CNCF), Sonatype Nexus, JFrog Artifactory. Registries use image naming: [registry.domain/]repository:tag. Docker pull/push commands interact with registries. Critical for CI/CD pipelines. Security: Enable Docker Content Trust for image signing, scan images for vulnerabilities.
2025 Security Best Practices: (1) Use official/verified images from trusted sources (Docker Official Images, Verified Publishers). (2) Scan for vulnerabilities using Docker Scout (built-in), Trivy (open-source), or Snyk (enterprise). (3) Run as non-root user (USER directive, avoid UID 0). (4) Use multi-stage builds to minimize attack surface. (5) Never store secrets in images - use Docker secrets (Swarm), BuildKit secret mounts (--mount=type=secret), or external secrets managers (Vault, AWS Secrets Manager). (6) Use read-only root filesystem (--read-only, --tmpfs /tmp). (7) Limit container resources (--memory, --cpus, --pids-limit). (8) Keep Docker Engine updated (latest stable: 25.x). (9) Enable Docker Content Trust for image signing (DOCKER_CONTENT_TRUST=1). (10) Drop unnecessary Linux capabilities (--cap-drop ALL, --cap-add specific). (11) Use security scanning in CI/CD pipeline. (12) Apply security profiles: AppArmor, SELinux, Seccomp. (13) Regular image updates for security patches. (14) Use distroless or alpine base images.
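A hedged sketch combining several of these runtime hardening flags in one command (the image name and values are illustrative):
  docker run -d --name hardened-app \
    --read-only --tmpfs /tmp:rw,noexec,nosuid,size=64m \
    --cap-drop ALL \
    --user 1000:1000 \
    --memory 256m --cpus 0.5 --pids-limit 100 \
    --security-opt no-new-privileges \
    myapp:1.2.3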
Overlay networks enable communication between containers across multiple Docker hosts, creating a virtual network spanning physical machines. Uses VXLAN (Virtual Extensible LAN) encapsulation for packets; traffic can optionally be encrypted by creating the network with --opt encrypted (IPsec). Essential for Docker Swarm mode. Containers discover each other by name via distributed DNS regardless of physical location. Requires cluster setup: docker swarm init, then docker network create --driver overlay --attachable app-overlay. Commands: docker service create --network app-overlay nginx (Swarm mode), docker run --network app-overlay nginx (standalone with --attachable). Built-in load balancing via the ingress network. Use cases: Distributed microservices, multi-host applications, Swarm clusters. Performance: Slight overhead from encapsulation (~5-10%). Modern alternative: Cilium with eBPF for better performance (common in Kubernetes, which uses CNI plugins rather than Docker overlay networks).
Registry is a stateless server-side application for storing and distributing Docker images, implements Docker Registry HTTP API V2. Docker Hub is default public registry. Private registries provide: access control, internal hosting, faster pulls (network proximity), compliance (data residency), cost savings (no Hub rate limits). Setup options: (1) Official registry image: 'docker run -d -p 5000:5000 --name registry registry:2' (basic, insecure). (2) Production setup: Enable TLS with certificates, add authentication (htpasswd, token), use persistent storage (volume or S3/Azure Blob backend), configure garbage collection. Tag images: 'docker tag myapp localhost:5000/myapp:v1', push: 'docker push localhost:5000/myapp:v1', pull: 'docker pull localhost:5000/myapp:v1'. Enterprise alternatives: Harbor (CNCF, full-featured with UI, vulnerability scanning, replication), JFrog Artifactory, Sonatype Nexus, AWS ECR, Azure ACR, Google Artifact Registry, GitLab Container Registry. Modern recommendation: Use managed registry services for production.
Cleans up unused Docker resources to reclaim disk space, removing stopped containers, dangling images, unused networks, and build cache. Commands: 'docker system prune' (safe, removes only dangling resources), 'docker system prune -a' (aggressive, removes ALL unused images not referenced by containers), 'docker system prune --volumes' (includes unused volumes - DESTRUCTIVE). Individual resource cleanup: 'docker container prune' (stopped containers), 'docker image prune' (dangling images), 'docker image prune -a' (all unused images), 'docker volume prune' (unused volumes), 'docker network prune' (unused networks), 'docker builder prune' (build cache). Shows space reclaimed after operation. Use cases: CI/CD environments (run after builds), development machines (periodic cleanup), disk space emergencies. Production: Use with caution, understand what will be deleted. Best practice: Run 'docker system df' first to see space usage breakdown. Filters: --filter "until=24h" (older than 24 hours), --filter "label=project=myapp". Automated cleanup: Configure with cron jobs or Docker's log rotation settings. BuildKit cache: Use 'docker buildx prune' for BuildKit cache.
Resource limits prevent containers from exhausting host resources and implement resource isolation. Memory: --memory/-m 512m (hard limit, OOM killer if exceeded), --memory-swap (memory+swap total, -1 for unlimited), --memory-reservation 256m (soft limit enforced under memory pressure). CPU: --cpus 1.5 (number of CPUs, fractional), --cpu-shares 512 (relative weight for CPU scheduling, default 1024), --cpuset-cpus 0-3 (pin to specific CPU cores). I/O: --device-read-bps, --device-write-bps (block I/O limits). PIDs: --pids-limit 100 (max processes). Limits prevent noisy-neighbor problems in multi-tenant environments. The OOM killer terminates the container if the hard memory limit is exceeded. In docker-compose.yml: deploy.resources.limits/reservations (see the sketch below). Monitor resources: 'docker stats' (real-time), 'docker stats --no-stream' (one-time). Production: Always set memory limits, use CPU limits for fairness. Kubernetes equivalents: requests (reservations), limits (hard limits).
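A sketch of the equivalent limits in docker-compose.yml (the service name, image, and values are illustrative):
  services:
    api:
      image: myapp:1.2.3
      deploy:
        resources:
          limits:
            cpus: "1.5"
            memory: 512M
          reservations:
            memory: 256M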
Docker is a platform for developing, shipping, and running applications in containers. Unlike VMs which virtualize hardware and run full OS instances (heavyweight, GB-sized, slower startup), Docker containers share the host OS kernel, package only application and dependencies (lightweight, MB-sized, seconds startup). Containers provide process isolation using Linux namespaces and cgroups without OS overhead. VMs provide stronger isolation with hypervisor-level security boundaries. Docker is optimal for microservices, CI/CD pipelines, and cloud-native applications. VMs better for running different OS types or workloads requiring hardware-level isolation. Modern Docker (24+) uses containerd runtime with improved security and OCI compliance.
Created: container created but not started (docker create). Running: actively executing (docker run/start), main process running (PID 1). Paused: processes suspended using cgroups freezer, memory preserved (docker pause/unpause). Stopped/Exited: container stopped gracefully (docker stop sends SIGTERM then SIGKILL after timeout) or after main process exits. Restarting: restarting per restart policy (no, on-failure[:max-retries], always, unless-stopped). Dead: removal failed, requires manual intervention. Commands: start, stop, restart, pause, unpause, rm, kill. Stopped containers persist until removed (docker rm). Use 'docker ps' for running, 'docker ps -a' for all states. Health states (with HEALTHCHECK): starting, healthy, unhealthy. Restart policies: Crucial for production reliability, set with --restart flag or in docker-compose.yml. OOM (Out of Memory) state: Container killed by OOM killer if exceeds memory limit.
Image Management
7 questions
Use docker history <image> to see all layers and commands that created the image. Shows: IMAGE (layer ID), CREATED (when), CREATED BY (command), SIZE (layer size). Options: --no-trunc shows full commands (not truncated). --quiet shows only layer IDs. --format for custom output. Useful for: understanding image composition, debugging large images, checking for security issues, seeing what was installed. Layers from base image show as 'missing' for IMAGE ID. Size column helps identify bloated layers.
Remove specific image: docker rmi image:tag or docker image rm image:tag. Remove by ID: docker rmi abc123. Force remove (even if used by stopped containers): docker rmi -f image:tag. Remove multiple: docker rmi image1 image2. Remove dangling images: docker image prune. Remove all unused images: docker image prune -a. Remove all images: docker rmi $(docker images -q). Cannot remove image used by running container - stop container first. Images share layers - removing one image may not free all its disk space.
Check usage: docker system df. Remove all stopped containers, unused networks, dangling images, and build cache: docker system prune. Add -a to also remove unused images (not just dangling): docker system prune -a. Add --volumes to also remove unused volumes: docker system prune --volumes. Individual cleanup: docker container prune, docker image prune, docker volume prune, docker network prune. Use --filter to limit scope: docker system prune --filter 'until=24h'. Add -f to skip confirmation prompt.
Export container filesystem: docker export container > container.tar. Import as image: docker import container.tar newimage:tag. Save image(s) with layers: docker save image:tag > image.tar or docker save -o image.tar image1 image2. Load saved image: docker load < image.tar or docker load -i image.tar. Key difference: export/import loses image history and metadata (single layer), save/load preserves full image with all layers and history. Use save/load for image distribution, export/import for filesystem backup.
First login: docker login (Docker Hub) or docker login registry.example.com. Tag image with registry path: docker tag myimage:1.0 username/myimage:1.0. Push: docker push username/myimage:1.0. Push all tags: docker push -a username/myimage. For private registries, include the hostname in the tag: docker push registry.example.com/myimage:1.0. Credentials are stored in ~/.docker/config.json. For CI/CD, prefer docker login -u user --password-stdin (the -p flag exposes the password in shell history) or credential helpers. Push creates layers remotely - only new/changed layers are uploaded.
Basic pull: docker pull image:tag. Examples: docker pull nginx:latest, docker pull ubuntu:22.04. Without tag, defaults to :latest. Pull from private registry: docker pull registry.example.com/myimage:1.0 (requires docker login first). Pull all tags: docker pull -a nginx. Pull by digest (immutable): docker pull nginx@sha256:abc123.... Images are cached locally - subsequent pulls only download changed layers. Use docker pull --quiet to suppress progress output. Check local images with docker images.
List all images: docker images or docker image ls. Show all including intermediate layers: docker images -a. Show only image IDs: docker images -q. Filter images: docker images --filter 'dangling=true' (untagged images). docker images --filter 'reference=nginx'. Format output: docker images --format '{{.Repository}}:{{.Tag}} {{.Size}}'. Show digests: docker images --digests. Check disk usage: docker system df shows total image size. Dangling images waste space - remove with docker image prune.
Container Management
7 questions
Use the -u or --user flag: docker exec -u username container command or docker exec -u uid:gid container command. Examples: docker exec -u root mycontainer apt update runs as root. docker exec -u 1000:1000 mycontainer whoami runs as uid 1000. docker exec -u www-data mycontainer ls /var/www runs as www-data user. Useful for debugging permission issues or running commands that require specific user context. The user must exist in the container unless using numeric uid/gid.
Use the -w or --workdir flag: docker exec -w /app container_name ls. This overrides the container's default working directory for that specific exec command. Example: docker exec -w /var/log mycontainer cat syslog. Combine with interactive shell: docker exec -it -w /app mycontainer /bin/bash starts shell in /app directory. The working directory must exist in the container. Without -w, commands run in the directory set by WORKDIR in Dockerfile or / (root) if unset.
Use --restart policy: docker run --restart=always <image>. Policies: no (default, never restart), on-failure[:max-retries] (restart on non-zero exit), always (always restart, including daemon restart), unless-stopped (like always, but not if manually stopped). Examples: docker run --restart=on-failure:5 <image> restarts max 5 times on failure. docker run --restart=unless-stopped <image> survives host reboot. Update existing container: docker update --restart=always container. Check policy: docker inspect --format='{{.HostConfig.RestartPolicy.Name}}' container.
Memory limit: docker run --memory=512m <image> or --memory=1g. Memory with swap: --memory=512m --memory-swap=1g (512MB RAM + 512MB swap). Soft limit (reservation): --memory-reservation=256m. CPU limit: --cpus=1.5 limits to 1.5 CPU cores. CPU shares (relative weight): --cpu-shares=512 (default 1024). Pin to specific CPUs: --cpuset-cpus='0,1'. If container exceeds memory limit with no swap, it's killed (exit code 137 OOM). Check limits: docker inspect --format='{{.HostConfig.Memory}}' container.
Use --entrypoint flag: docker run --entrypoint /bin/bash <image>. This replaces the image's ENTRYPOINT. To run a different command with the original entrypoint, just append it: docker run <image> my-command. To disable entrypoint entirely: docker run --entrypoint '' <image> my-command. Example: Debug a container that normally runs a service: docker run -it --entrypoint /bin/sh nginx gives you a shell instead of starting nginx. The --entrypoint flag only accepts the executable, not arguments - pass arguments after the image name.
docker run creates a new container from an image and starts it. docker start starts an existing stopped container. docker run = docker create + docker start. Use docker run for new containers: docker run nginx. Use docker start to restart stopped containers: docker start my_container (preserves container state and configuration). docker start uses original run parameters. To see stopped containers: docker ps -a. docker run with same name fails if container exists - use docker start instead or docker rm first.
Use docker attach <container> to connect your terminal to the container's main process (PID 1) STDIN/STDOUT/STDERR. Different from docker exec which runs a new process. Detach without stopping: Ctrl+P, Ctrl+Q (if started with -it). Ctrl+C sends SIGINT to the main process (may stop container). For interactive shells, prefer docker exec: docker exec -it container /bin/bash. Use docker attach when you need to interact with the original process, like viewing real-time logs from the main app.
Building Images
3 questions
Multi-stage builds use multiple FROM statements to create intermediate build stages, copying only needed artifacts to final image. Example: FROM node:18 AS builder (build stage) followed by FROM node:18-slim (final stage) with COPY --from=builder /app/dist /app. Benefits: smaller final images (no build tools), better security (reduced attack surface), cleaner Dockerfiles. Name stages with AS: FROM golang AS build. Copy from any stage: COPY --from=build /app/binary /app/. Build specific stage: docker build --target builder .
Tag during build: docker build -t myrepo/myimage:1.0 .. Tag existing image: docker tag source_image:tag target_image:tag. Example: docker tag myimage:latest myrepo/myimage:v1.0. Images can have multiple tags pointing to same image ID. For registries, include registry hostname: docker tag myimage registry.example.com/myimage:1.0. The 'latest' tag is not automatically the newest - it's just applied to untagged pushes. Best practice: Use semantic versioning (1.0.0, 1.0.1) in production, not 'latest'.
Create .dockerignore in your build context root to exclude files from COPY/ADD and reduce build context size. Syntax similar to .gitignore. Examples: node_modules, *.log, .git, Dockerfile, .env. Patterns: **/temp* matches temp in any directory. !important.log negates a previous exclusion. Comments start with #. Benefits: faster builds (less data sent to daemon), smaller images, prevents secrets from being copied. Common entries: .git, node_modules, __pycache__, *.pyc, .env, .DS_Store, *.md, tests/, docs/. A sample file follows below.
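A sample .dockerignore for a typical Node or Python project (adjust the entries to your tree):
  .git
  node_modules
  __pycache__
  *.pyc
  *.log
  .env
  dist/
  build/
  .DS_Store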
Security
2 questions
In Dockerfile: USER username or USER uid:gid. Create user first: RUN adduser --disabled-password --gecos '' appuser then USER appuser. At runtime: docker run --user 1000:1000 <image> or docker run --user appuser <image>. Check current user: docker exec container whoami. For file permissions, ensure files are owned by the user: COPY --chown=appuser:appuser . /app. Running as non-root is a security best practice - limits damage if container is compromised. Some images like nginx have built-in non-root support.
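A minimal Dockerfile sketch of the non-root pattern described above (the debian base, appuser name, and run.sh are illustrative):
  FROM debian:12-slim
  RUN adduser --disabled-password --gecos '' appuser
  WORKDIR /app
  # Copy with correct ownership so appuser can read/write its files
  COPY --chown=appuser:appuser . /app
  USER appuser
  CMD ["./run.sh"]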
Use --privileged flag: docker run --privileged <image>. This gives the container full access to host devices and disables security isolation (AppArmor/SELinux). Use cases: running Docker-in-Docker, accessing hardware devices, kernel operations. Warning: Major security risk - container can escape to host. Prefer minimal capabilities instead: docker run --cap-add=SYS_ADMIN <image>. For specific devices, use --device: docker run --device=/dev/sda:/dev/sda <image>. Only use --privileged when absolutely necessary and in trusted environments.
Monitoring
2 questions
Use docker top <container> to see processes running in a container. Shows PID, user, time, and command. Example: docker top mycontainer lists all processes. Add ps options: docker top mycontainer aux for detailed view. For real-time monitoring, exec into container: docker exec container ps aux or docker exec container top. docker top runs from host perspective (shows host PIDs), exec ps shows container perspective. For a single container's resource usage, use docker stats container_name.
Real-time stats: docker stats shows CPU, memory, network, and I/O for all running containers. Single container: docker stats container_name. One-time snapshot (no streaming): docker stats --no-stream. Custom format: docker stats --format 'table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}'. CPU% can exceed 100% on multi-core systems (200% = 2 cores). Memory limit shown is container limit or host memory. For historical metrics, use monitoring tools like Prometheus with cAdvisor, or Docker Desktop's Resource Usage extension.
Dockerfile
2 questions
In Dockerfile: HEALTHCHECK --interval=30s --timeout=5s --retries=3 CMD curl -f http://localhost/ || exit 1. Options: --interval (time between checks, default 30s), --timeout (check timeout, default 30s), --retries (failures before unhealthy, default 3), --start-period (grace period for startup). Command exit codes: 0=healthy, 1=unhealthy. At runtime: docker run --health-cmd='curl -f http://localhost/' --health-interval=30s <image>. Check health: docker inspect --format='{{.State.Health.Status}}' container. Status: starting, healthy, unhealthy.
FROM specifies the base image for your Docker build. It must be the first instruction in a Dockerfile (except ARG for pre-FROM variables). Syntax: FROM image:tag or FROM image@digest. Examples: FROM ubuntu:22.04, FROM python:3.11-slim, FROM node:18-alpine. Use specific tags for reproducibility (avoid :latest in production). FROM scratch creates an image from nothing (for static binaries). Multi-stage builds use multiple FROM statements. ARG before FROM: ARG VERSION=3.11 then FROM python:$VERSION.
Debugging
1 question
Error: 'Cannot connect to the Docker daemon at unix:///var/run/docker.sock'. Solutions: 1) Check if Docker daemon is running: sudo systemctl status docker or sudo service docker status. Start it: sudo systemctl start docker. 2) Check user permissions: add user to docker group: sudo usermod -aG docker $USER then log out/in. 3) Check socket permissions: ls -la /var/run/docker.sock. 4) On WSL/Mac, ensure Docker Desktop is running. 5) Check DOCKER_HOST environment variable isn't set incorrectly.
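A typical diagnosis sequence on a systemd-based Linux host (the commands correspond to the numbered steps above):
  sudo systemctl status docker      # 1) is the daemon running?
  sudo systemctl start docker       #    start it if not
  sudo usermod -aG docker $USER     # 2) grant your user access (log out/in after)
  ls -la /var/run/docker.sock       # 3) socket should be owned by root:docker
  echo $DOCKER_HOST                 # 5) normally empty unless using a remote daemon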