
Pod Disruption Budget Treats Terminating Pods as Unmanaged During Rolling Restarts #130723

@atilsensalduz

Description


I have been experiencing a recurring issue with PodDisruptionBudgets (PDBs) during every rolling restart or deployment: for a moment, a terminating pod from the old ReplicaSet is treated as unmanaged, causing the following warning event:

Warning CalculateExpectedPodCountFailed 49s (x3 over 49s) controllermanager Failed to calculate the number of expected pods: found no controllers for pod "service-6958844f49-rw9cr"

Just before the pod is deleted, the PDB controller incorrectly classifies it as an unmanaged pod, which produces the warning above.
Although the condition is transient (it clears as soon as the pod is removed), I have observed it consistently across multiple microservices.
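For context, here is a simplified sketch of the kind of expected-pod-count calculation described above. This is not the actual Kubernetes controller source; the function and variable names are illustrative. It shows why a pod whose controlling owner cannot be resolved yields a "found no controllers" error rather than being counted:

```python
# Hedged sketch (illustrative, not the real disruption controller code):
# the expected pod count is derived from each pod's controlling owner,
# so a pod whose owner lookup fails aborts the calculation.

def controller_ref(pod):
    """Return the pod's controlling ownerReference, or None."""
    for ref in pod.get("ownerReferences", []):
        if ref.get("controller"):
            return ref
    return None

def expected_pod_count(pods, controllers_by_uid):
    """Sum the desired replicas of each distinct controller owning the pods.

    Raises ValueError (analogous to the CalculateExpectedPodCountFailed
    event) when a pod has no resolvable controller.
    """
    expected = 0
    seen = set()
    for pod in pods:
        ref = controller_ref(pod)
        if ref is None or ref["uid"] not in controllers_by_uid:
            raise ValueError(f'found no controllers for pod "{pod["name"]}"')
        if ref["uid"] not in seen:
            seen.add(ref["uid"])
            expected += controllers_by_uid[ref["uid"]]["replicas"]
    return expected

# A terminating pod still carries its ownerReference, but if the lookup of
# its ReplicaSet fails at that instant, the calculation errors out:
old_pod = {"name": "service-6958844f49-rw9cr",
           "ownerReferences": [{"controller": True, "uid": "old-rs"}]}
try:
    expected_pod_count([old_pod], {})  # controller lookup comes up empty
except ValueError as e:
    print(e)  # prints: found no controllers for pod "service-6958844f49-rw9cr"
```

In this sketch a single unresolvable pod fails the whole calculation, which matches the observed behavior: one terminating pod is enough to trigger the warning for the entire PDB.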

Expected Behaviour: The terminating pod should still be recognized as managed by its ReplicaSet and should not cause a miscalculation of the expected pod count.

Environment Details:
Kubernetes Version: 1.30 (EKS)

PDB Configuration:

  apiVersion: policy/v1
  kind: PodDisruptionBudget
  metadata:
    labels:
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: test-pdb
      argocd.argoproj.io/instance: test
      helm.sh/chart: generic-1.0.0
    name: test
    namespace: test
  spec:
    maxUnavailable: 1
    selector:
      matchLabels:
        app.kubernetes.io/name: test

Labels: lifecycle/stale, needs-triage, sig/apps
