Docker Disk Space FAQ & Answers

8 expert Docker Disk Space answers researched from official documentation. Every answer cites authoritative sources you can verify.

A container that runs out of disk space crashes with a 'no space left on device' error and often enters a zombie state (running but non-functional). Application writes fail silently or with cryptic errors, database writes are corrupted by partial transactions, and logs truncate mid-line. The container may appear as running in docker ps yet not respond to requests. Recovery is difficult and often requires a container restart. Common causes: log explosion, temp file buildup, caches that are never cleared, and improper log rotation. Prevention is critical because zombie containers are hard to diagnose.
Multi-layered prevention: (1) Use multi-stage builds to minimize image size (70MB vs 700MB+), (2) Set container storage limits: docker run --storage-opt size=10G, (3) Configure log rotation inside the container (logrotate with daily rotation), (4) Monitor disk usage in application code, e.g. with the check-disk-space npm package, (5) Set Docker logging driver limits: --log-opt max-size=10m --log-opt max-file=3. Application monitoring: setInterval(() => checkDiskSpace('/'), 300000) checks every 5 minutes; alert at 90% usage and trigger cleanup at 85%. Prevention > Recovery, always.
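The application-level monitoring in step (4) can be sketched in shell as well as with the npm package the answer mentions. This is a minimal sketch: the thresholds come from the answer above, but the alert/cleanup actions are placeholders, not from the source.

```shell
#!/bin/sh
# Sketch: check filesystem usage and act on the thresholds from the
# answer above (alert at 90%, trigger cleanup at 85%).

usage_pct() {
  # Print fullness of the given mount point as a bare integer, e.g. "42".
  df -P "$1" | awk 'NR==2 { gsub("%", "", $5); print $5 }'
}

check_disk() {
  pct=$(usage_pct "${1:-/}")
  if [ "$pct" -ge 90 ]; then
    echo "ALERT: disk at ${pct}%"      # placeholder: page an operator
  elif [ "$pct" -ge 85 ]; then
    echo "CLEANUP: disk at ${pct}%"    # placeholder: rotate logs, purge /tmp
  else
    echo "OK: disk at ${pct}%"
  fi
}

check_disk /
```

Run it from cron (or a sidecar) on the same 5-minute cadence as the setInterval example.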
Run docker system prune on host (not inside container). Options: (1) docker system prune removes stopped containers, unused networks, dangling images. (2) docker system prune -a removes ALL unused images (not just dangling). (3) docker system prune -a --volumes removes unused volumes too (careful: data loss). Add -f flag to skip confirmation. Schedule as cron job: 0 3 * * * docker system prune -f. This runs at 3 AM daily. Monitor reclaimable space first: docker system df. Typical recovery: 10-50GB depending on workload.
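The scheduled cleanup above can be wrapped in a small script that reviews reclaimable space before pruning. The three aggressiveness levels mirror the answer's options (1)-(3); the level names ("basic", "all", "full") are this sketch's own naming, not Docker's.

```shell
#!/bin/sh
# Sketch: pick a prune aggressiveness level, then run it non-interactively.

prune_cmd() {
  # basic -> stopped containers, unused networks, dangling images
  # all   -> also ALL unused images (not just dangling)
  # full  -> also unused volumes (careful: data loss)
  case "$1" in
    basic) echo "docker system prune -f" ;;
    all)   echo "docker system prune -a -f" ;;
    full)  echo "docker system prune -a --volumes -f" ;;
    *)     return 1 ;;
  esac
}

if command -v docker >/dev/null 2>&1; then
  docker system df      # review reclaimable space before pruning
  $(prune_cmd basic)
fi
```

Scheduled via cron exactly as in the answer: 0 3 * * * docker system prune -f.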
Execute cleanup commands from host: (1) Remove old temp files: docker exec
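The temp-file cleanup in step (1) presumably continues with a docker exec into the container. A sketch of the age-based cleanup logic is below; the container name "myapp", the paths, and the 7-day cutoff are assumptions for illustration, not from the source.

```shell
#!/bin/sh
# Sketch: delete files older than N days under a directory.
# The same logic runs inside a container via docker exec (commented below).

cleanup_old_files() {
  dir=$1
  days=$2
  # -mtime +N matches files last modified more than N days ago.
  find "$dir" -type f -mtime +"$days" -exec rm -f {} +
}

# Against a running container (requires Docker; names/paths are examples):
# docker exec myapp find /tmp -type f -mtime +7 -delete
# docker exec myapp sh -c 'rm -rf /var/cache/app/*'
```

The find-based form is safer than a blanket rm -rf because it only touches files past the age cutoff.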
Recovery steps: (1) Assess disk usage: docker system df to identify space hogs, (2) docker commit
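Step (1) of the recovery can be scripted: pull the reclaimable figures out of docker system df to see where the space went. The sample output below mimics the command's typical table layout (an assumption; exact columns can vary by Docker version).

```shell
#!/bin/sh
# Sketch: extract reclaimable image space from `docker system df` output.
# `sample` stands in for live output so the parsing is testable offline.

sample='TYPE            TOTAL     ACTIVE    SIZE      RECLAIMABLE
Images          10        3         5.2GB     3.1GB (59%)
Containers      5         2         120MB     80MB (66%)
Local Volumes   4         1         2.4GB     2.1GB (87%)'

images_reclaimable() {
  # Reclaimable size is the second-to-last column of the Images row
  # (the trailing "(59%)" is the last column).
  printf '%s\n' "$1" | awk '$1 == "Images" { print $(NF-1) }'
}

images_reclaimable "$sample"   # prints 3.1GB for the sample above
```

Against a live daemon, replace the sample with "$(docker system df)".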
Two approaches: (1) Docker logging driver (recommended): docker run --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 myapp. Limits each log file to 10MB, keeps 3 files max. (2) Internal logrotate: Install logrotate in Dockerfile: RUN apk add logrotate, create /etc/logrotate.d/app with config: /var/log/app.log { daily, rotate 7, compress, missingok }. Run via cron inside container. Docker driver method is simpler and recommended. Logs still accessible via docker logs command. Monitor with: docker inspect --format='{{.HostConfig.LogConfig}}'
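The per-container flags in approach (1) can also be set daemon-wide as defaults in /etc/docker/daemon.json (standard Docker daemon configuration; the values mirror the flags above and apply to newly created containers after a daemon restart):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

Note that log-opts values must be JSON strings, including the file count.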
Set limits with 50% buffer over peak usage. Docker: --storage-opt size=10G (requires overlay2 on xfs with pquota mount). Typical values: Small APIs 5-10GB, Medium services 10-20GB, Large processing 20-50GB. Account for 20-30% overhead for logs and system files. Kubernetes: resources.limits.ephemeral-storage: "10Gi" (GA in K8s 1.25+). Note: All log files including rotated logs count toward ephemeral-storage limit. Monitor: docker exec
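One way to combine the two sizing rules above (50% buffer over peak, plus 20-30% overhead for logs and system files) is a small calculator; using 25% as the overhead midpoint is this sketch's assumption.

```shell
#!/bin/sh
# Sketch: derive a storage limit (in GB) from measured peak usage.

size_limit() {
  peak=$1
  buffered=$(( peak + peak / 2 ))              # +50% buffer over peak
  echo $(( buffered + buffered * 25 / 100 ))   # +25% log/system overhead
}

size_limit 6   # peak 6GB -> 11, i.e. docker run --storage-opt size=11G
```

The same figure feeds resources.limits.ephemeral-storage on Kubernetes, e.g. "11Gi".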