kubernetes_daemonset 10 Q&As

Kubernetes DaemonSet FAQ & Answers

10 expert Kubernetes DaemonSet answers researched from official documentation. Every answer cites authoritative sources you can verify.


10 questions
Q: What is a Kubernetes DaemonSet and when should you use one?

A

Controller that ensures a copy of a Pod runs on all (or selected) nodes in a cluster. Automatically adds Pod to new nodes, removes from deleted nodes. Use cases: node monitoring (Prometheus Node Exporter), log collection (Fluentd), storage daemons (Ceph, GlusterFS), network plugins (Cilium, Calico), security agents (Falco).

99% confidence
Q: How do you create a DaemonSet?

A

Define YAML with kind: DaemonSet, spec.template (Pod template), spec.selector (label selector). Example: apiVersion: apps/v1, kind: DaemonSet, metadata: {name: fluentd}, spec: {selector: {matchLabels: {name: fluentd}}, template: {metadata: {labels: {name: fluentd}}, spec: {containers: [...]}}}. Apply with kubectl apply -f daemonset.yaml.

99% confidence
Q: How do you run a DaemonSet only on specific nodes?

A

Use nodeSelector or node affinity. Example nodeSelector: {disktype: ssd} runs only on nodes with label disktype=ssd. Advanced: spec.affinity.nodeAffinity with requiredDuringSchedulingIgnoredDuringExecution. Inverse: use taints/tolerations to exclude nodes (control plane nodes often tainted NoSchedule).

99% confidence
Q: How do DaemonSet updates and rollbacks work?

A

Two strategies: (1) RollingUpdate (default): updates Pods gradually, controlled by maxUnavailable (e.g., 1 = one Pod at a time), (2) OnDelete: manual deletion required for update. Check with: kubectl rollout status daemonset/name. Rollback: kubectl rollout undo daemonset/name. History: kubectl rollout history daemonset/name.

99% confidence
Q: Which taints do DaemonSet Pods tolerate by default?

A

DaemonSets automatically tolerate certain taints: node.kubernetes.io/not-ready, node.kubernetes.io/unreachable, node.kubernetes.io/disk-pressure, node.kubernetes.io/memory-pressure, node.kubernetes.io/unschedulable. Enables DaemonSet Pods to run on nodes unavailable to other workloads. Add custom tolerations in Pod spec for other taints.

99% confidence
Q: What are resource-management best practices for DaemonSets?

A

Best practices: (1) Set resource requests/limits (prevents node resource exhaustion), (2) Use Guaranteed QoS (requests = limits) for critical daemons, (3) Reserve CPU/memory for DaemonSet overhead (kubelet --system-reserved), (4) Monitor with kubectl top pod, (5) Use PriorityClass for critical daemons (prevents preemption).

99% confidence
Q: How do you troubleshoot a DaemonSet Pod that is not scheduling on a node?

A

Check: (1) Node labels match nodeSelector, (2) Taints block scheduling (kubectl describe node), (3) Resource requests exceed node capacity, (4) PodSecurityPolicy/PodSecurity admission (check events), (5) DaemonSet selector matches Pod template labels. Debug: kubectl describe daemonset, kubectl get events --field-selector involvedObject.name=daemonset-name.

99% confidence
Q: How does a DaemonSet differ from a Deployment?

A

DaemonSet: one Pod per node, no replica count, follows node lifecycle, used for node-level services. Deployment: arbitrary replica count, cluster-wide distribution, used for application services. DaemonSet Pods tolerate the node.kubernetes.io/unschedulable taint by default, so they still run on cordoned nodes; Deployment Pods respect all scheduling constraints.

99% confidence
Q: How do you monitor DaemonSet health?

A

Metrics: (1) .status.numberReady vs .status.desiredNumberScheduled (should match), (2) .status.numberMisscheduled (should be 0), (3) .status.numberUnavailable during updates. Query: kubectl get daemonset -o wide. Alert on: numberReady < desired, numberMisscheduled > 0, update stuck (watch rollout status).

99% confidence
Q: What are common production use cases for DaemonSets?

A

Production use cases: (1) Logging: Fluentd/Fluent Bit for node logs, (2) Monitoring: Prometheus Node Exporter, (3) Networking: CNI plugins (Cilium, Calico), kube-proxy, (4) Storage: Ceph/GlusterFS client daemons, (5) Security: Falco runtime security, Twistlock defender, (6) GPU drivers: NVIDIA device plugin. Essential for cluster infrastructure.

99% confidence