Docker Disk Space FAQ & Answers

8 expert Docker Disk Space answers researched from official documentation. Every answer cites authoritative sources you can verify.

What happens when a Docker container runs out of disk space?

A

The container fails with a 'no space left on device' error, often entering a zombie state (still running but non-functional). Application writes fail silently or with cryptic errors, database writes can corrupt with partial transactions, and logs truncate mid-line. The container may still show as running in docker ps but no longer responds to requests. Recovery is difficult and often requires a container restart. Common causes: log explosion, temp file buildup, caches that are never cleared, and missing log rotation. Prevention is critical because zombie containers are hard to diagnose.
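
As an illustration, a minimal host-side watchdog along the following lines can catch the problem before writes start failing; the container name myapp and the 90% threshold are assumptions, and it requires df to be present in the container image.

    # Warn before a container hits "no space left on device".
    # Assumptions: container is named "myapp" and its image ships df.
    CONTAINER=myapp
    THRESHOLD=90

    # Highest Use% across the container's filesystems, tmpfs excluded
    usage=$(docker exec "$CONTAINER" df -h | grep -v tmpfs \
        | awk 'NR>1 {gsub("%","",$5); print $5}' | sort -n | tail -1)

    if [ "${usage:-0}" -ge "$THRESHOLD" ]; then
        echo "WARNING: $CONTAINER is at ${usage}% disk usage - writes may start failing"
    fi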

How do I check disk usage inside a running container?

A

Use docker exec <container> df -h to check filesystem usage from the host. It shows mounted volumes and the container filesystem; look at the Use% column - above 90% is critical. Pattern: docker exec myapp df -h | grep -v tmpfs. For a detailed breakdown, docker exec myapp du -sh /var/log /tmp /app identifies the space hogs. From inside the container (if you have shell access): df -h and du -sh /* for directory sizes. Check /var/log first (logs), /tmp second (temp files), then application directories.
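
A small triage script (sketch only - myapp and the directory list are assumptions) runs those checks in one pass from the host:

    # One-pass disk triage for a container, run from the host.
    # "myapp" and the directories below are assumptions - adjust to your app.
    CONTAINER=myapp

    echo "== Filesystem usage =="
    docker exec "$CONTAINER" df -h | grep -v tmpfs

    echo "== Usual suspects =="
    docker exec "$CONTAINER" du -sh /var/log /tmp /app 2>/dev/null

    echo "== Largest top-level directories (sizes in KB) =="
    docker exec "$CONTAINER" sh -c 'du -s /* 2>/dev/null | sort -rn | head -10'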

How do I prevent a container from running out of disk space?

A

Multi-layered prevention: (1) Use multi-stage builds to minimize image size (e.g. 70MB instead of 700MB+), (2) Set container storage limits: docker run --storage-opt size=10G, (3) Configure log rotation inside the container (logrotate with daily rotation), (4) Monitor disk usage in application code with the check-disk-space npm package, (5) Set Docker logging driver limits: --log-opt max-size=10m --log-opt max-file=3. Application monitoring: setInterval(() => checkDiskSpace('/'), 300000), alert at 90%, trigger cleanup at 85%. Prevention > Recovery, always.
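
Pulling those flags together, a launch command might look like the sketch below; myapp, myimage, and the sizes are placeholders, and --storage-opt size= only works with storage drivers that support per-container quotas (e.g. overlay2 on xfs mounted with pquota).

    # Illustrative launch combining the preventive flags discussed above.
    # myapp/myimage and the size values are placeholders.
    docker run -d \
        --name myapp \
        --storage-opt size=10G \
        --log-driver json-file \
        --log-opt max-size=10m \
        --log-opt max-file=3 \
        --restart on-failure:3 \
        myimage:latest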

How do I reclaim Docker disk space on the host?

A

Run docker system prune on the host (not inside the container). Options: (1) docker system prune removes stopped containers, unused networks, and dangling images. (2) docker system prune -a removes ALL unused images (not just dangling ones). (3) docker system prune -a --volumes removes unused volumes too (careful: data loss). Add the -f flag to skip the confirmation prompt. Schedule it as a cron job: 0 3 * * * docker system prune -f runs at 3 AM daily. Check the reclaimable space first with docker system df. Typical recovery: 10-50GB depending on workload.
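
A hedged example of the host-side routine - inspect reclaimable space, prune non-interactively, and optionally schedule it (the log path in the cron line is an assumption):

    # See how much space each category could reclaim
    docker system df

    # Non-interactive prune: stopped containers, unused networks, dangling images
    docker system prune -f

    # Example crontab entry for a nightly prune at 3 AM (crontab -e as root).
    # Add -a and/or --volumes only if you accept the extra data-loss risk.
    # 0 3 * * * /usr/bin/docker system prune -f >> /var/log/docker-prune.log 2>&1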

How do I free up disk space inside a running container?

A

Execute cleanup commands from the host: (1) Remove old temp files: docker exec <container> find /tmp -type f -atime +7 -delete (files not accessed in 7+ days), (2) Truncate logs: docker exec <container> truncate -s 0 /var/log/app.log, (3) Clean the package manager cache: docker exec <container> apt-get clean or docker exec <container> npm cache clean --force, (4) Remove application cache: docker exec <container> sh -c 'rm -rf /app/cache/*' (the glob needs a shell inside the container). Check space after each step: docker exec <container> df -h. Don't remove node_modules or system files. If the disk is more than 95% full, an emergency restart may be needed.
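
Put together, the emergency cleanup might look like the following sketch; the container name, log path, and cache directory are assumptions for a typical app:

    # In-place cleanup run from the host; name and paths are placeholders.
    CONTAINER=myapp

    # 1. Temp files not accessed for 7+ days
    docker exec "$CONTAINER" find /tmp -type f -atime +7 -delete

    # 2. Empty the main log without disturbing the open file handle
    docker exec "$CONTAINER" truncate -s 0 /var/log/app.log

    # 3. Application cache (the glob needs a shell inside the container)
    docker exec "$CONTAINER" sh -c 'rm -rf /app/cache/*'

    # 4. Verify the result
    docker exec "$CONTAINER" df -h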

How do I recover a zombie container that has filled its disk?

A

Recovery steps: (1) Assess disk usage: docker system df to identify the space hogs, (2) docker commit <container> backup:latest to save its state, (3) docker stop <container> (may fail - check docker ps), (4) Clean the host: docker system prune -a -f to remove unused images and stopped containers. For stuck zombies, a last resort is sudo rm -rf /var/lib/docker/overlay2/ followed by sudo systemctl restart docker - be aware this wipes all local images and containers. Extract data first: docker cp <container>:/critical/path ./backup before removal. Recreate with docker run backup:latest and an increased storage limit. Prevention: use .dockerignore files, set restart policies (on-failure:3), and monitor with docker stats. Zombie containers rarely self-recover - prioritize data extraction over in-place repair.
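
The sketch below walks that recovery path end to end; myapp, the backup:latest tag, /critical/path, and the 20G size are placeholders rather than required values:

    # Zombie-container recovery sketch - all names and paths are placeholders.
    CONTAINER=myapp

    # 1. Assess where the space went
    docker system df

    # 2. Preserve state and data before doing anything destructive
    docker commit "$CONTAINER" backup:latest
    docker cp "$CONTAINER":/critical/path ./backup

    # 3. Stop the container (force-kill if it hangs), then remove it
    docker stop -t 30 "$CONTAINER" || docker kill "$CONTAINER"
    docker rm -f "$CONTAINER"

    # 4. Reclaim host space and recreate with more room
    docker system prune -a -f
    docker run -d --name "$CONTAINER" --storage-opt size=20G backup:latest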

How do I set up log rotation for a container?

A

Two approaches: (1) Docker logging driver (recommended): docker run --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 myapp. This limits each log file to 10MB and keeps at most 3 files. (2) Internal logrotate: install logrotate in the Dockerfile (RUN apk add logrotate on Alpine), create /etc/logrotate.d/app with directives such as daily, rotate 7, compress, and missingok for /var/log/app.log, and run it via cron inside the container. The Docker driver method is simpler and recommended, and logs remain accessible via the docker logs command. Verify a container's settings with: docker inspect --format='{{.HostConfig.LogConfig}}' <container>.
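
One way to apply this (a sketch, not the only layout) is to set host-wide defaults in /etc/docker/daemon.json and then override or verify per container; myapp and myimage are placeholders:

    # Host-wide defaults for new containers: /etc/docker/daemon.json
    # {
    #   "log-driver": "json-file",
    #   "log-opts": { "max-size": "10m", "max-file": "3" }
    # }
    # Apply with: sudo systemctl restart docker

    # Per-container override at run time
    docker run -d --name myapp \
        --log-opt max-size=10m --log-opt max-file=3 \
        myimage:latest

    # Confirm which log settings a container actually got
    docker inspect --format '{{.HostConfig.LogConfig}}' myapp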

How much disk space should I allocate to a container?

A

Set limits with a 50% buffer over peak usage. Docker: --storage-opt size=10G (requires overlay2 on an xfs backing filesystem mounted with pquota). Typical values: small APIs 5-10GB, medium services 10-20GB, large processing jobs 20-50GB. Account for 20-30% overhead for logs and system files. Kubernetes: resources.limits.ephemeral-storage: "10Gi" (GA in K8s 1.25+). Note: all log files, including rotated ones, count toward the ephemeral-storage limit. Monitor with docker exec <container> df -h and set the limit to roughly peak × 1.5. Also set --log-opt max-size=10m --log-opt max-file=3. Namespace-level: a ResourceQuota with requests.ephemeral-storage: "50Gi" and limits.ephemeral-storage: "100Gi". Alert at 80% usage and re-evaluate quarterly.
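
As a sizing sketch (assuming a container named myapp whose image provides du with the -s, -k, and -x flags), measure current usage and scale it by 1.5; the Kubernetes fragment in the comments mirrors the prose above:

    # Rough sizing helper: current root-filesystem usage x 1.5.
    # "myapp" is a placeholder; run this during representative load.
    CONTAINER=myapp
    used_kb=$(docker exec "$CONTAINER" sh -c 'du -skx / 2>/dev/null' | awk '{print $1}')
    used_kb=${used_kb:-0}
    echo "Current usage:   $((used_kb / 1024)) MiB"
    echo "Suggested limit: $((used_kb * 3 / 2 / 1024)) MiB (~1.5x current usage)"

    # Kubernetes equivalent (container spec fragment):
    # resources:
    #   requests:
    #     ephemeral-storage: "5Gi"
    #   limits:
    #     ephemeral-storage: "10Gi"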
