DeletingPolicy

Deletes Matching Resources

Introduction

DeletingPolicy is a Kyverno custom resource that allows cluster administrators to automatically delete Kubernetes resources matching specified criteria, based on a cron schedule. This policy is helpful for implementing lifecycle management, garbage collection, or enforcing retention policies.

This policy provides the same functionality as the CleanupPolicy, but is designed to use CEL expressions for compatibility with Kubernetes.

Unlike admission policies that react to API requests, DeletingPolicy:

  • Runs periodically at scheduled times

  • Evaluates existing resources in the cluster

  • Deletes resources when matching rules and conditions are satisfied

Key Use Cases

  • Find and delete orphaned resources, such as completed Jobs, periodically (a minimal sketch follows this list)

  • Remove expired secrets or configmaps

  • Implement time-bound leases for critical resources
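
As a minimal sketch of the first use case (the policy name and completion check are illustrative, not taken from the Kyverno docs), the policy below deletes completed Jobs every night at 2 AM:

apiVersion: policies.kyverno.io/v1alpha1
kind: DeletingPolicy
metadata:
  name: cleanup-completed-jobs  # illustrative name
spec:
  schedule: "0 2 * * *"  # every day at 2 AM
  matchConstraints:
    resourceRules:
      - apiGroups: ["batch"]
        apiVersions: ["v1"]
        operations: ["*"]
        resources: ["jobs"]
  conditions:
    - name: isCompleted
      # assumption: a finished Job reports the number of successful pods in status.succeeded
      expression: "has(object.status.succeeded) && object.status.succeeded > 0"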

Additional Fields

schedule
A cron expression that defines when the policy will be evaluated.

schedule: "0 0 * * *" # every day at midnight
  • Must follow standard cron format
  • Minimum granularity is 1 minute
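
For reference, a few more example values using the standard five-field cron syntax:

schedule: "*/30 * * * *" # every 30 minutes
schedule: "0 * * * *"    # every hour, on the hour
schedule: "0 0 * * 0"    # every Sunday at midnight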

matchPolicy
Controls how the resource rules are matched against resource group/versions:

matchPolicy: "Equivalent"
  • Exact: strict matching on group/version

  • Equivalent: match across equivalent group/versions (recommended)

deletionPropagationPolicy
Defines how dependent resources are handled when a matched resource is deleted (Foreground, Background, or Orphan). See the sketch after this list.

deletionPropagationPolicy: "Orphan"
  • Foreground: dependent resources are deleted before the primary resource is removed.
  • Background: the primary resource is deleted first, and its dependents are removed asynchronously.
  • Orphan: the primary resource is deleted, but its dependents are left untouched.
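
As an illustrative sketch (the name, label, and conditions are assumptions, not taken from the Kyverno docs), Foreground propagation is a reasonable choice when deleting Deployments, so that their ReplicaSets and Pods are removed before the Deployment itself:

apiVersion: policies.kyverno.io/v1alpha1
kind: DeletingPolicy
metadata:
  name: cleanup-stale-deployments  # illustrative name
spec:
  schedule: "0 3 * * *"
  deletionPropagationPolicy: Foreground  # delete ReplicaSets/Pods before the Deployment
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["*"]
        resources: ["deployments"]
  conditions:
    - name: isMarkedForCleanup
      # assumption: teams label throwaway Deployments with cleanup: enabled
      expression: "has(object.metadata.labels.cleanup) && object.metadata.labels.cleanup == 'enabled'"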

lastExecutionTime
Records the last time the DeletingPolicy was executed or triggered.
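
As a rough sketch (assuming this is reported under the policy's status; check the live object with kubectl), inspecting the policy after a run might show something like:

status:
  lastExecutionTime: "2025-06-26T01:00:00Z"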

Example

This DeletingPolicy named cleanup-old-test-pods is configured to automatically delete pods in Kubernetes once per day at 1 AM. It targets pods that are:

  • Located in namespaces labeled environment: test

  • Older than 72 hours

The policy uses a cron schedule to run periodically and applies conditions using CEL expressions to ensure only stale pods are cleaned up. Additionally, it defines a variable (isEphemeral) that could be used to further refine deletion logic, such as deleting only temporary or ephemeral pods.

policy:

apiVersion: policies.kyverno.io/v1alpha1
kind: DeletingPolicy
metadata:
  name: cleanup-old-test-pods
spec:
  schedule: "0 1 * * *"  # Run daily at 1 AM
  matchConstraints:
    resourceRules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["*"]
        resources: ["pods"]
        scope: "Namespaced"
    namespaceSelector:
      matchLabels:
        environment: test
  conditions:
    - name: isOld
      expression: "now() - object.metadata.creationTimestamp > duration('72h')"
  variables:
    - name: isEphemeral
      expression: "has(object.metadata.labels.ephemeral) && object.metadata.labels.ephemeral == 'true'"

resource:

apiVersion: v1
kind: Pod
metadata:
  name: example
  namespace: default
spec:
  containers:
  - image: nginx:latest
    name: example

Kyverno CEL Libraries

Kyverno extends the standard CEL environment with built-in libraries to support advanced deletion logic. While it includes the standard Kyverno CEL libraries, the DeletingPolicy explicitly excludes support for the user-defined CEL library (user-lib), which is available in other policy types such as ImageValidatingPolicy. For comprehensive documentation of all available CEL libraries, see the Kyverno CEL Libraries documentation.

Resource library Examples

apiVersion: policies.kyverno.io/v1alpha1
kind: DeletingPolicy
metadata:
  name: dpol-resource-lib-check
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
  conditions:
    - name: check-cm-value
      expression: >-
        resource.Get("v1", "configmaps", "default", "clusterregistries").data["registries"] == "enabled"
  schedule: "*/1 * * * *"

HTTP library Examples

apiVersion: policies.kyverno.io/v1alpha1
kind: DeletingPolicy
metadata:
  name: http-delete-check
spec:
  conditions:
  - name: http-200-check
    expression: >
      http.Get("http://test-api-service.default.svc.cluster.local:80").metadata.labels.app == "test"
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
  schedule: "*/1 * * * *"

ImageData library Examples

apiVersion: policies.kyverno.io/v1alpha1
kind: DeletingPolicy
metadata:
  name: image-date-delete
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
  schedule: "*/1 * * * *"
  conditions:
    - name: arch-check
      expression: >
        object.spec.containers.all(c, image.GetMetadata(c.image).config.architecture == "amd64")

Image library Examples

apiVersion: policies.kyverno.io/v1alpha1
kind: DeletingPolicy
metadata:
  name: image-registry-delete
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
  schedule: "*/1 * * * *"
  conditions:
    - name: test-isImage
      expression: >
        object.spec.containers.all(c, isImage(c.image))

GlobalContext library Examples

policy:

apiVersion: policies.kyverno.io/v1alpha1
kind: DeletingPolicy
metadata:
  name: delete-if-deployment-exists
spec:
  schedule: "*/1 * * * *"
  matchConstraints:
    resourceRules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        resources: ["pods"]
  conditions:
    - name: require-deployment
      expression: globalContext.Get("gctxentry-apicall-correct", "") != 0

gctxentry:

apiVersion: kyverno.io/v2alpha1
kind: GlobalContextEntry
metadata:
  name: gctxentry-apicall-correct
spec:
  apiCall:
    urlPath: "/apis/apps/v1/namespaces/test-globalcontext-apicall-correct/deployments"
    refreshInterval: 1h

Observability: Tracking Deletion Events

Kyverno’s DeletingPolicy not only removes resources on a schedule but also emits a Kubernetes event each time a deletion is executed. This event allows administrators and users to trace exactly which policy deleted which resource, improving transparency, auditing and troubleshooting.

How to View Events

To view deletion events:

  1. List policy-applied events (summary view):

kubectl get events --field-selector reason=PolicyApplied -A

  2. Get the event name:

kubectl get events -n <namespace> \
  -o custom-columns=NAME:.metadata.name,REASON:.reason,MESSAGE:.message

  3. Get full event details:

kubectl get event <event-name> -n <namespace> -o yaml

Example Event

apiVersion: v1
kind: Event
metadata:
  name: cleanup-old-test-pods.184c935c5c7c52c0
  namespace: default
  creationTimestamp: "2025-06-26T11:13:00Z"
  resourceVersion: "3894"
  uid: 064e08ef-4547-43a3-b199-d2bbadd93b65
action: Resource Cleaned Up
reason: PolicyApplied
message: successfully deleted the target resource Pod/default/example
involvedObject:
  apiVersion: policies.kyverno.io/v1alpha1
  kind: DeletingPolicy
  name: deleting-pod
  uid: cc44fb71-9413-4bbf-bc37-036a10f02c7c
related:
  apiVersion: v1
  kind: Pod
  name: example
  namespace: default
reportingComponent: kyverno-cleanup
reportingInstance: kyverno-cleanup-kyverno-cleanup-controller-76c8b69df6-89mjj
type: Normal

What Gets Logged

When a DeletingPolicy triggers a deletion, Kyverno creates an event with:

  • action: Resource Cleaned Up
  • reason: PolicyApplied
  • message: human-readable success message
  • involvedObject: the DeletingPolicy that triggered the action
  • related: the resource that was deleted
  • reportingComponent: kyverno-cleanup

Caution

DeletingPolicy performs destructive actions. Always test your policies in a staging or dry run environment before applying them to production clusters. Ensure your selectors and conditions are strict enough to avoid accidental deletions.

RBAC Requirements

The Kyverno cleanup controller requires RBAC permissions to delete the targeted resources. Ensure that the following verbs are allowed in its ClusterRole:

  • get, list, watch, and delete on the targeted resources.

For example, to delete ConfigMaps:

rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch", "delete"]

The Kyverno cleanup controller always requires RBAC permissions to delete resources (even Pods). However, in test clusters such as minikube and kind, the default Kyverno installation often already includes permissions to manage core resources such as:

  • Pods, ConfigMaps, Secrets, etc.

As a result, you may not run into RBAC issues when deleting Pods, because:

  • Kyverno already has access to them
  • There are no extra CRDs or cluster-scoped permissions needed

Other resources, such as Deployments and NetworkPolicies, are not always covered by Kyverno's default permissions. You must explicitly grant delete RBAC for those resources, as in the example below.
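
For instance, following the same pattern as the ConfigMap rule above, delete access for Deployments could be granted with a rule like the one below (how the rule is attached to the cleanup controller, for example via an aggregated ClusterRole, depends on your installation):

rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "delete"]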

Tips & Best Practices

  • Use dry runs or audit mode before enabling destructive deletes

  • Be careful when using wildcards * in resources

  • Always validate your CEL expressions with Kyverno CLI

  • Use meaningful variable/condition names for observability

