A Kubernetes Job ensures one or more Pods run to successful completion (exit code 0), making it ideal for finite tasks, in contrast to the long-running services managed by Deployments. Key characteristics: it tracks successful completions (spec.completions: 5 means 5 Pods must succeed), tolerates failures up to spec.backoffLimit (default 6 retries), and deletes finished Pods automatically if spec.ttlSecondsAfterFinished is set. Unlike Deployment Pods, completed Job Pods are not restarted. Use cases: one-time database migrations (run a pg_migrate script once), batch processing (process 1000 images with parallelism: 50 workers), backup tasks, report generation, ETL pipeline steps. Configuration: spec.completions (Pods that must succeed), spec.parallelism (max concurrent Pods), spec.backoffLimit (max retries), spec.activeDeadlineSeconds (overall timeout), spec.ttlSecondsAfterFinished (cleanup delay). Failure handling: when a Pod fails, the Job creates a replacement and increments the retry count; once backoffLimit is exceeded, the Job is marked Failed.
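As a minimal sketch of the fields above (the image name, Job name, and command are hypothetical), a one-time migration Job might look like:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate                # hypothetical name
spec:
  backoffLimit: 4                 # retry failed Pods up to 4 times (default is 6)
  activeDeadlineSeconds: 600      # mark the Job Failed if it runs longer than 10 minutes
  ttlSecondsAfterFinished: 3600   # clean up the Job and its Pods 1 hour after it finishes
  template:
    spec:
      restartPolicy: Never        # Job Pods must use Never or OnFailure
      containers:
      - name: migrate
        image: example/pg-migrate:1.0     # hypothetical image
        command: ["./pg_migrate", "--target", "latest"]
```

Applied with kubectl apply -f job.yaml; kubectl wait --for=condition=complete job/db-migrate blocks until the migration finishes.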
Kubernetes Job vs CronJob FAQ & Answers
5 expert Kubernetes Job vs CronJob answers researched from official documentation. Every answer cites authoritative sources you can verify.
A Kubernetes CronJob creates Jobs on a recurring schedule (Cron syntax) and manages their lifecycle (creation, monitoring, cleanup), making it ideal for periodic tasks. Configuration: (1) spec.schedule - Cron syntax (0 2 * * * = 2am daily, */15 * * * * = every 15 minutes; @hourly, @daily, @weekly also work). Timezone: spec.timeZone: America/New_York (Kubernetes 1.27+). (2) spec.jobTemplate - the Job spec used for each run. (3) spec.concurrencyPolicy - Allow (run concurrently), Forbid (skip the new run if the previous is still running), Replace (cancel the previous run, start a new one). (4) spec.successfulJobsHistoryLimit: 3 (keep the last 3 successful Jobs), spec.failedJobsHistoryLimit: 1. Use cases: scheduled backups (database dump every 6 hours), log rotation (compress logs daily), cache warming (preload before a traffic spike), data cleanup (delete old records weekly), certificate checks (validate expiry monthly).
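A minimal sketch tying these fields together (the backup image and dump command are hypothetical):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup-daily
spec:
  schedule: "0 2 * * *"           # 2am daily
  timeZone: "America/New_York"    # requires Kubernetes 1.27+
  concurrencyPolicy: Forbid       # skip a run if the previous backup is still active
  successfulJobsHistoryLimit: 3   # keep the last 3 successful Jobs
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: example/db-backup:1.0    # hypothetical image
            command: ["/bin/sh", "-c", "pg_dump mydb > /backups/dump.sql"]
```

Each trigger stamps out a Job from jobTemplate (e.g. backup-daily-1705392000), and that Job in turn creates the Pod that performs the backup.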
Key differences: a Job is a one-time finite task, manually created (kubectl apply -f job.yaml), that runs until completion or failure, with no scheduling; a CronJob is a recurring scheduled task that automatically creates Jobs per its schedule, manages Job history, and supports concurrency policies. Relationship: a CronJob → Job → Pod hierarchy. Example: CronJob backup-daily creates Job backup-daily-1705392000 at the scheduled time, and that Job creates Pod backup-daily-1705392000-abc123. Use cases - Job: one-time database migration, batch data import, report generation, an ETL pipeline step, ML model training. CronJob: scheduled backups (every 6 hours), log rotation (daily), cache warming (before a traffic spike), data cleanup (weekly), health checks (nightly). Configuration differences: a Job has completions/parallelism/backoffLimit; a CronJob adds schedule/concurrencyPolicy/jobTemplate/history limits. A CronJob manages multiple Job instances over time; a Job is a single execution unit.
Three Job patterns: (1) Single completion (spec.completions: 1, spec.parallelism: 1): one Pod runs to completion. Use: one-time database migration, single backup, report generation. Example: a pg_migrate script runs once and the Job is done. (2) Fixed completion count (spec.completions: 10, spec.parallelism: 3): 10 Pods must succeed, with at most 3 running concurrently. Use: batch processing (process 10 files, 3 workers at a time), distributed ETL (10 data chunks in parallel). Example: image processing with completions: 1000, parallelism: 50 (50 workers process roughly 20 images each). (3) Work queue (spec.completions omitted, spec.parallelism: 5): Pods pull tasks from an external queue (Redis, RabbitMQ), and the Job completes when the queue is empty. Use: dynamic workloads (unknown queue length), distributed tasks. Example: video transcoding - workers poll SQS, transcode videos, and finish when the queue is drained. Indexed Jobs (stable in Kubernetes 1.24+): spec.completionMode: Indexed assigns each Pod a unique index (0 to N-1) via the JOB_COMPLETION_INDEX env var.
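A sketch of the fixed-completion-count pattern with indexed mode (the worker image and command are hypothetical); each of the 10 Pods can use its index to pick its own slice of the work:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: image-batch               # hypothetical name
spec:
  completions: 10                 # 10 Pods must succeed
  parallelism: 3                  # at most 3 run at once
  completionMode: Indexed         # each Pod gets JOB_COMPLETION_INDEX = 0..9
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: example/image-worker:1.0   # hypothetical image
        command: ["/bin/sh", "-c", "process-chunk --index=$JOB_COMPLETION_INDEX"]
```

Omitting completions while keeping parallelism gives the work-queue pattern instead: workers exit 0 once the external queue is drained.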
spec.concurrencyPolicy controls behavior when a new Job is scheduled while the previous one is still running. Three policies: (1) Allow (default): run concurrent Jobs (multiple instances simultaneously). Use: parallelizable tasks where overlap is safe (sending email reminders, processing event logs). Risk: resource exhaustion if Jobs pile up. (2) Forbid: skip the new run if the previous is still active (prevents overlap). Use: exclusive tasks (database backups, schema migrations). Example: a backup every hour with Forbid - if a backup takes 90 minutes, the 1-hour trigger is skipped (preventing concurrent backups from corrupting data). (3) Replace: cancel the previous Job and start a new one (only one at a time). Use: when the latest data always wins (cache refresh, health checks). Example: a cache-warming CronJob - a new run cancels the old warming Job and starts fresh. Best practices: use Forbid for exclusive tasks (backups, migrations), Allow for parallelizable tasks, and Replace for idempotent refresh operations. Monitor with kubectl get jobs to track concurrency.
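The policy is a single field on the CronJob spec. A minimal fragment for the cache-warming case (names and image are hypothetical):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cache-warm                # hypothetical name
spec:
  schedule: "*/15 * * * *"        # every 15 minutes
  concurrencyPolicy: Replace      # cancel a still-running warm-up Job, start fresh
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: warm
            image: example/cache-warmer:1.0   # hypothetical image
```

Swapping Replace for Forbid here would instead skip the 15-minute trigger whenever the previous warm-up is still running.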