
Kyverno CLI

FEATURE STATE: Stable Kyverno v1.17

The Kyverno Command Line Interface (CLI) is designed to validate policies and test their behavior against resources prior to adding them to a cluster.

The CLI can be used in CI/CD pipelines to assist with the resource authoring process, ensuring resources conform to standards prior to being deployed. It can be used as a kubectl plugin or as a standalone CLI.

The CLI, although built from the same Kyverno codebase, is a purpose-built binary available via multiple installation methods, and is distinct from the Kyverno container image which runs as a Pod in a target Kubernetes cluster.

This page covers the main Kyverno CLI commands. For other commands, please refer to the Reference documentation.

The Kyverno CLI can also be installed with Homebrew as a formula.

Terminal window
brew install kyverno

You can use Krew to install the Kyverno CLI:

Terminal window
# Install Kyverno CLI using kubectl krew plugin manager
kubectl krew install kyverno
# test the Kyverno CLI
kubectl kyverno version

You can install the Kyverno CLI via your favorite AUR helper (e.g., yay):

Terminal window
yay -S kyverno-git

The Kyverno CLI can be installed in GitHub Actions using kyverno-cli-installer from the GitHub Marketplace. Please refer to kyverno-cli-installer for more information.
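
A minimal workflow sketch is shown below. The action reference, version, and paths are illustrative; consult the kyverno-cli-installer listing for the current values.

name: kyverno
on: push
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Install the Kyverno CLI (illustrative action reference; see the Marketplace listing)
      - name: Install Kyverno CLI
        uses: kyverno/action-install-cli@v0.2.0
      # Illustrative paths: validate a manifest in this repository against its policies
      - name: Validate manifests
        run: kyverno apply policies/ --resource manifests/deployment.yaml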

The Kyverno CLI may also be installed by manually downloading the compiled binary available on the releases page. An example of installing the Kyverno CLI v1.12.0 on a Linux x86_64 system is shown below.

Terminal window
curl -LO https://github.com/kyverno/kyverno/releases/download/v1.12.0/kyverno-cli_v1.12.0_linux_x86_64.tar.gz
tar -xvf kyverno-cli_v1.12.0_linux_x86_64.tar.gz
sudo cp kyverno /usr/local/bin/

You can also build the CLI binary from the Git repository (requires Go).

Terminal window
git clone https://github.com/kyverno/kyverno
cd kyverno
make build-cli
sudo mv ./cmd/cli/kubectl-kyverno/kubectl-kyverno /usr/local/bin/
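
With the CLI installed via any of these methods, verify it by printing the version:

Terminal window
kyverno version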

The apply command is used to perform a dry run of one or more policies against a given set of input resources. This can be useful to determine a policy’s effectiveness prior to committing it to a cluster. In the case of mutate policies, the apply command can show the mutated resource as an output. The input resources can either be resource manifests (one or multiple) or can be taken from a running Kubernetes cluster. The apply command supports files from URLs both as policies and resources.

Apply to a resource:

Terminal window
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml
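
Since URLs are supported, a policy can also be applied directly from a remote location (the URL below is illustrative):

Terminal window
kyverno apply https://raw.githubusercontent.com/kyverno/policies/main/best-practices/disallow-latest-tag/disallow-latest-tag.yaml --resource /path/to/resource.yaml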

Apply a policy to all matching resources in a cluster based on the current kubectl context:

Terminal window
kyverno apply /path/to/policy.yaml --cluster

The resources can also be passed from stdin:

Terminal window
kustomize build nginx/overlays/envs/prod/ | kyverno apply /path/to/policy.yaml --resource -

Apply all ClusterPolicies in the current cluster to all matching resources, based on the current kubectl context:

Terminal window
kubectl get clusterpolicies -o yaml | kyverno apply - --cluster

Apply multiple policies to multiple resources:

Terminal window
kyverno apply /path/to/policy1.yaml /path/to/folderFullOfPolicies --resource /path/to/resource1.yaml --resource /path/to/resource2.yaml --cluster

Apply a policy to a resource with a policy exception:

Terminal window
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml --exception /path/to/exception.yaml

Apply multiple policies to multiple resources with exceptions:

Terminal window
kyverno apply /path/to/policy1.yaml /path/to/folderFullOfPolicies --resource /path/to/resource1.yaml --resource /path/to/resource2.yaml --exception /path/to/exception1.yaml --exception /path/to/exception2.yaml

Apply multiple policies to multiple resources where exceptions are evaluated from the provided resources:

Terminal window
kyverno apply /path/to/policy1.yaml /path/to/folderFullOfPolicies --resource /path/to/resource1.yaml --resource /path/to/resource2.yaml --exceptions-with-resources

Apply a mutation policy to a specific resource:

Terminal window
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml
applying 1 policy to 1 resource...
mutate policy <policy_name> applied to <resource_name>:
<final mutated resource output>

Save the mutated resource to a file:

Terminal window
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml -o newresource.yaml

Save the mutated resource to a directory:

Terminal window
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml -o foo/

Run a policy with a mutate existing rule on a group of target resources:

Terminal window
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml --target-resource /path/to/target1.yaml --target-resource /path/to/target2.yaml
Applying 1 policy rule(s) to 1 resource(s)...
mutate policy <policy-name> applied to <trigger-name>:
<trigger-resource>
---
patched targets:
<patched-target1>
---
<patched-target2>
---
pass: 2, fail: 0, warn: 0, error: 0, skip: 0

Run a policy with a mutate existing rule on target resources from a directory:

Terminal window
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml --target-resources /path/to/targets/
Applying 1 policy rule(s) to 1 resource(s)...
mutate policy <policy-name> applied to <trigger-name>:
<trigger-resource>
---
patched targets:
<patched-targets>
pass: 5, fail: 0, warn: 0, error: 0, skip: 0

Apply a policy containing variables using the --set or -s flag to pass in the values. Variables that begin with {{request.object}} normally do not need to be specified as these will be read from the resource.

Terminal window
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml --set <variable1>=<value1>,<variable2>=<value2>

Use -f or --values-file for applying multiple policies to multiple resources while passing a file containing variables and their values. Variables specified can be of various types, including AdmissionReview fields, ConfigMap context data, API call context data, and Global Context Entries.

Use -u or --userinfo for applying policies while passing an optional user_info.yaml file which contains admission request data such as roles, cluster roles, and subjects.

Terminal window
kyverno apply /path/to/policy1.yaml /path/to/policy2.yaml --resource /path/to/resource1.yaml --resource /path/to/resource2.yaml -f /path/to/value.yaml --userinfo /path/to/user_info.yaml

Format of value.yaml with all possible fields:

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
policies:
  - name: <policy1 name>
    rules:
      - name: <rule1 name>
        values:
          <context variable1 in policy1 rule1>: <value>
          <context variable2 in policy1 rule1>: <value>
      - name: <rule2 name>
        values:
          <context variable1 in policy1 rule2>: <value>
          <context variable2 in policy1 rule2>: <value>
    resources:
      - name: <resource1 name>
        values:
          <variable1 in policy1>: <value>
          <variable2 in policy1>: <value>
      - name: <resource2 name>
        values:
          <variable1 in policy1>: <value>
          <variable2 in policy1>: <value>
namespaceSelector:
  - name: <namespace1 name>
    labels:
      <label key>: <label value>
  - name: <namespace2 name>
    labels:
      <label key>: <label value>

Format of user_info.yaml:

apiVersion: cli.kyverno.io/v1alpha1
kind: UserInfo
metadata:
  name: user-info
clusterRoles:
  - admin
userInfo:
  username: molybdenum@somecorp.com

Example:

Policy manifest (add_network_policy.yaml):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-networkpolicy
spec:
  background: false
  rules:
    - name: default-deny-ingress
      match:
        any:
          - resources:
              kinds:
                - Namespace
            clusterRoles:
              - cluster-admin
      generate:
        apiVersion: networking.k8s.io/v1
        kind: NetworkPolicy
        name: default-deny-ingress
        namespace: '{{request.object.metadata.name}}'
        synchronize: true
        data:
          spec:
            # select all pods in the namespace
            podSelector: {}
            policyTypes:
              - Ingress

Resource manifest (required_default_network_policy.yaml):

kind: Namespace
apiVersion: v1
metadata:
  name: devtest

Apply a policy to a resource using the --set or -s flag to pass a variable directly:

Terminal window
kyverno apply /path/to/add_network_policy.yaml --resource /path/to/required_default_network_policy.yaml -s request.object.metadata.name=devtest

Apply a policy to a resource using the --values-file or -f flag:

YAML file containing variables (value.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
policies:
  - name: add-networkpolicy
    resources:
      - name: devtest
        values:
          request.namespace: devtest
Terminal window
kyverno apply /path/to/add_network_policy.yaml --resource /path/to/required_default_network_policy.yaml -f /path/to/value.yaml

On applying the above policy to the mentioned resources, the following output will be generated:

Terminal window
Applying 1 policy to 1 resource...
(Total number of result count may vary as the policy is mutated by Kyverno. To check the mutated policy please try with log level 5)
pass: 1, fail: 0, warn: 0, error: 0, skip: 0

The summary count is based on the number of rules applied to the number of resources. For example, a policy containing two rules applied to three resources produces six results.

Value files also support global values, which can be passed to all resources the policy is being applied to.

Format of value.yaml:

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
policies:
  - name: <policy1 name>
    resources:
      - name: <resource1 name>
        values:
          <variable1 in policy1>: <value>
          <variable2 in policy1>: <value>
      - name: <resource2 name>
        values:
          <variable1 in policy1>: <value>
          <variable2 in policy1>: <value>
  - name: <policy2 name>
    resources:
      - name: <resource1 name>
        values:
          <variable1 in policy2>: <value>
          <variable2 in policy2>: <value>
      - name: <resource2 name>
        values:
          <variable1 in policy2>: <value>
          <variable2 in policy2>: <value>
globalValues:
  <global variable1>: <value>
  <global variable2>: <value>

If a resource-specific value and a global value have the same variable name, the resource value takes precedence over the global value. See the Pod test-global-prod in the following example.

Example:

Policy manifest (add_dev_pod.yaml):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: cm-globalval-example
spec:
  background: false
  rules:
    - name: validate-mode
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        failureAction: Enforce
        message: "The value {{ request.mode }} for val1 is not equal to 'dev'."
        deny:
          conditions:
            any:
              - key: '{{ request.mode }}'
                operator: NotEquals
                value: dev

Resource manifest (dev_prod_pod.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: test-global-prod
spec:
  containers:
    - name: nginx
      image: nginx:latest
---
apiVersion: v1
kind: Pod
metadata:
  name: test-global-dev
spec:
  containers:
    - name: nginx
      image: nginx:1.12

YAML file containing variables (value.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
policies:
  - name: cm-globalval-example
    resources:
      - name: test-global-prod
        values:
          request.mode: prod
globalValues:
  request.mode: dev
Terminal window
kyverno apply /path/to/add_dev_pod.yaml --resource /path/to/dev_prod_pod.yaml -f /path/to/value.yaml

The Pod test-global-dev passes the validation, and test-global-prod fails.

Apply a policy with the Namespace selector:

Use --values-file or -f for passing a file containing Namespace details. See the documentation on Namespace selectors to learn more.

Terminal window
kyverno apply /path/to/policy1.yaml /path/to/policy2.yaml --resource /path/to/resource1.yaml --resource /path/to/resource2.yaml -f /path/to/value.yaml

Format of value.yaml:

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
namespaceSelector:
  - name: <namespace1 name>
    labels:
      <namespace label key>: <namespace label value>
  - name: <namespace2 name>
    labels:
      <namespace label key>: <namespace label value>

Example:

Policy manifest (enforce-pod-name.yaml):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-pod-name
spec:
  background: true
  rules:
    - name: validate-name
      match:
        any:
          - resources:
              kinds:
                - Pod
              namespaceSelector:
                matchExpressions:
                  - key: foo.com/managed-state
                    operator: In
                    values:
                      - managed
      validate:
        failureAction: Audit
        message: 'The Pod must end with -nginx'
        pattern:
          metadata:
            name: '*-nginx'

Resource manifest (nginx.yaml):

kind: Pod
apiVersion: v1
metadata:
  name: test-nginx
  namespace: test1
spec:
  containers:
    - name: nginx
      image: nginx:latest

Namespace manifest (namespace.yaml):

apiVersion: v1
kind: Namespace
metadata:
  name: test1
  labels:
    foo.com/managed-state: managed

YAML file containing variables (value.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
namespaceSelector:
  - name: test1
    labels:
      foo.com/managed-state: managed

To test the above policy, use the following command:

Terminal window
kyverno apply /path/to/enforce-pod-name.yaml --resource /path/to/nginx.yaml -f /path/to/value.yaml

Apply a policy which uses a context variable to a resource:

Use --values-file or -f for passing a file containing the context variable.

Terminal window
kyverno apply /path/to/policy1.yaml --resource /path/to/resource1.yaml -f /path/to/value.yaml

policy1.yaml

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: cm-variable-example
  annotations:
    pod-policies.kyverno.io/autogen-controllers: DaemonSet,Deployment,StatefulSet
spec:
  background: false
  rules:
    - name: example-configmap-lookup
      context:
        - name: dictionary
          configMap:
            name: mycmap
            namespace: default
      match:
        any:
          - resources:
              kinds:
                - Pod
      mutate:
        patchStrategicMerge:
          metadata:
            labels:
              my-environment-name: '{{dictionary.data.env}}'

resource1.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx-config-test
spec:
  containers:
    - image: nginx:latest
      name: test-nginx

value.yaml

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
policies:
  - name: cm-variable-example
    rules:
      - name: example-configmap-lookup
        values:
          dictionary.data.env: dev1

You can also inject global context entries using variables. Here’s an example of a Values file that injects a global context entry:

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
globalValues:
  request.operation: CREATE
policies:
  - name: gctx
    rules:
      - name: main-deployment-exists
        values:
          deploymentCount: 1

In this example, request.operation is set as a global value, and deploymentCount is set for a specific rule in the gctx policy.

Policies that have their failureAction set to Audit can be set to produce a warning instead of a failure using the --audit-warn flag. This will also cause an exit code of zero if no enforcing policies failed.

Terminal window
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml --audit-warn

Additionally, you can use the --warn-exit-code flag with the apply command to control the exit code when warnings are reported. This is useful in CI/CD systems when used with the --audit-warn flag to treat Audit policies as warnings. When no failures or errors are found, but warnings are encountered, the CLI will exit with the defined exit code.

Terminal window
kyverno apply disallow-latest-tag.yaml --resource=echo-test.yaml --audit-warn --warn-exit-code 3
echo $?
3

You can also use --warn-exit-code in combination with --warn-no-pass flag to make the CLI exit with the warning code if no objects were found that satisfy a policy. This may be useful during the initial development of a policy or if you want to make sure that an object exists in the Kubernetes manifest.

Terminal window
kyverno apply disallow-latest-tag.yaml --resource=empty.yaml --warn-exit-code 3 --warn-no-pass
echo $?
3

Policy reports provide information about policy execution and violations. Use --policy-report with the apply command to generate a policy report for validate policies. mutate and generate policies do not trigger policy reports.

Policy reports can also be generated for a live cluster. When generating a policy report for a live cluster, the -r flag declares a resource by name, which is assumed to be globally unique; it does not support qualifying the name with a resource type (e.g., Pod/foo), which matters when the cluster contains resources of different types with the same name. To generate a policy report for a live cluster, use --cluster with --policy-report.

Terminal window
kyverno apply policy.yaml --cluster --policy-report

The above example applies policy.yaml to all resources in the cluster.

Below are the combinations of inputs that can be used for generating the policy report from the Kyverno CLI.

Policy      | Resource         | Cluster | Namespace        | Interpretation
policy.yaml | -r resource.yaml | false   |                  | Apply policy from policy.yaml to the resources specified in resource.yaml
policy.yaml | -r resourceName  | true    |                  | Apply policy from policy.yaml to the resource with a given name in the cluster
policy.yaml |                  | true    |                  | Apply policy from policy.yaml to all the resources in the cluster
policy.yaml | -r resourceName  | true    | -n=namespaceName | Apply policy from policy.yaml to the resource with a given name in a specific Namespace
policy.yaml |                  | true    | -n=namespaceName | Apply policy from policy.yaml to all the resources in a specific Namespace

Example:

Consider the following policy and resources:

policy.yaml

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-pod-requests-limits
spec:
  rules:
    - name: validate-resources
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        failureAction: Audit
        message: 'CPU and memory resource requests and limits are required'
        pattern:
          spec:
            containers:
              - resources:
                  requests:
                    memory: '?*'
                    cpu: '?*'
                  limits:
                    memory: '?*'

resource1.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx1
  labels:
    env: test
spec:
  containers:
    - name: nginx
      image: nginx
      imagePullPolicy: IfNotPresent
      resources:
        requests:
          memory: '64Mi'
          cpu: '250m'
        limits:
          memory: '128Mi'
          cpu: '500m'

resource2.yaml

apiVersion: v1
kind: Pod
metadata:
  name: nginx2
  labels:
    env: test
spec:
  containers:
    - name: nginx
      image: nginx
      imagePullPolicy: IfNotPresent

Case 1: Apply a policy manifest to multiple resource manifests

Terminal window
kyverno apply policy.yaml -r resource1.yaml -r resource2.yaml --policy-report

Case 2: Apply a policy manifest to multiple resources in the cluster

Create the resources by first applying manifests resource1.yaml and resource2.yaml.

Terminal window
kyverno apply policy.yaml -r nginx1 -r nginx2 --cluster --policy-report

Case 3: Apply a policy manifest to all resources in the cluster

Terminal window
kyverno apply policy.yaml --cluster --policy-report

Given the contents of policy.yaml shown earlier, this will produce a report validating against all Pods in the cluster.

Case 4: Apply a policy manifest to multiple resources by name within a specific Namespace

Terminal window
kyverno apply policy.yaml -r nginx1 -r nginx2 --cluster --policy-report -n default

Case 5: Apply a policy manifest to all resources within the default Namespace

Terminal window
kyverno apply policy.yaml --cluster --policy-report -n default

Given the contents of policy.yaml shown earlier, this will produce a report validating all Pods within the default Namespace.

On applying policy.yaml to the mentioned resources, the following report will be generated:

apiVersion: wgpolicyk8s.io/v1alpha1
kind: ClusterPolicyReport
metadata:
  name: clusterpolicyreport
results:
  - message: Validation rule 'validate-resources' succeeded.
    policy: require-pod-requests-limits
    resources:
      - apiVersion: v1
        kind: Pod
        name: nginx1
        namespace: default
    rule: validate-resources
    scored: true
    status: pass
  - message: 'Validation error: CPU and memory resource requests and limits are required; Validation rule validate-resources failed at path /spec/containers/0/resources/limits/'
    policy: require-pod-requests-limits
    resources:
      - apiVersion: v1
        kind: Pod
        name: nginx2
        namespace: default
    rule: validate-resources
    scored: true
    status: fail
summary:
  error: 0
  fail: 1
  pass: 1
  skip: 0
  warn: 0

Policy Exceptions can be applied alongside policies by using the -e or --exception flag to pass the Policy Exception manifest.

Terminal window
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml --exception /path/to/exception.yaml

Example:

Applying a policy to a resource with a policy exception.

Policy manifest (policy.yaml):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: max-containers
spec:
  background: false
  rules:
    - name: max-two-containers
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        failureAction: Enforce
        message: 'A maximum of 2 containers are allowed inside a Pod.'
        deny:
          conditions:
            any:
              - key: '{{request.object.spec.containers[] | length(@)}}'
                operator: GreaterThan
                value: 2

Policy Exception manifest (exception.yaml):

apiVersion: kyverno.io/v2
kind: PolicyException
metadata:
  name: container-exception
spec:
  exceptions:
    - policyName: max-containers
      ruleNames:
        - max-two-containers
        - autogen-max-two-containers
  match:
    any:
      - resources:
          kinds:
            - Pod
            - Deployment
  conditions:
    any:
      - key: "{{ request.object.metadata.labels.color || '' }}"
        operator: Equals
        value: blue

Resource manifest (resource.yaml):

A Deployment matching the characteristics defined in the PolicyException, shown below, will be allowed to be created even though it technically violates the rule’s definition.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: three-containers-deployment
  labels:
    app: my-app
    color: blue
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
        color: blue
    spec:
      containers:
        - name: nginx-container
          image: nginx:latest
          ports:
            - containerPort: 80
        - name: redis-container
          image: redis:latest
          ports:
            - containerPort: 6379
        - name: busybox-container
          image: busybox:latest
          command:
            [
              '/bin/sh',
              '-c',
              "while true; do echo 'Hello from BusyBox'; sleep 10; done",
            ]

Apply the above policy to the resource along with the exception:

Terminal window
kyverno apply /path/to/policy.yaml --resource /path/to/resource.yaml --exception /path/to/exception.yaml

The following output will be generated:

Terminal window
Applying 3 policy rule(s) to 1 resource(s) with 1 exception(s)...
pass: 0, fail: 0, warn: 0, error: 0, skip: 1

The kyverno apply command can be used to apply native Kubernetes policies and their corresponding bindings to resources, allowing you to test them locally without a cluster.

With the apply command, Kubernetes ValidatingAdmissionPolicies can be applied to resources as follows:

Policy manifest (check-deployment-replicas.yaml):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: check-deployments-replicas
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ['apps']
        apiVersions: ['v1']
        operations: ['CREATE', 'UPDATE']
        resources: ['deployments']
  validations:
    - expression: 'object.spec.replicas <= 3'
      message: 'Replicas must be less than or equal 3'

Resource manifest (deployment.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-pass
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-pass
  template:
    metadata:
      labels:
        app: nginx-pass
    spec:
      containers:
        - name: nginx-server
          image: nginx

Apply the ValidatingAdmissionPolicy to the resource:

Terminal window
kyverno apply /path/to/check-deployment-replicas.yaml --resource /path/to/deployment.yaml

The following output will be generated:

Terminal window
Applying 1 policy rule(s) to 1 resource(s)...
pass: 1, fail: 0, warn: 0, error: 0, skip: 0

The below example applies a ValidatingAdmissionPolicyBinding along with the policy to all resources in the cluster.

Policy manifest (check-deployment-replicas.yaml):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: 'check-deployment-replicas'
spec:
  matchConstraints:
    resourceRules:
      - apiGroups:
          - apps
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - deployments
  validations:
    - expression: object.spec.replicas <= 5
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: 'check-deployment-replicas-binding'
spec:
  policyName: 'check-deployment-replicas'
  validationActions: [Deny]
  matchResources:
    namespaceSelector:
      matchLabels:
        environment: staging

The above policy verifies that the number of deployment replicas is not greater than 5 and is limited to a namespace labeled environment: staging.

Create a Namespace with the label environment: staging:

Terminal window
kubectl create ns staging
kubectl label ns staging environment=staging

Create two Deployments that exceed the replica limit, one of them in the staging Namespace; due to the binding, only that one is subject to the policy.

Terminal window
kubectl create deployment nginx-1 --image=nginx --replicas=6 -n staging
kubectl create deployment nginx-2 --image=nginx --replicas=6

Get all Deployments from the cluster:

Terminal window
kubectl get deployments -A
NAMESPACE            NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
default              nginx-2                  6/6     6            6           7m26s
kube-system          coredns                  2/2     2            2           13m
local-path-storage   local-path-provisioner   1/1     1            1           13m
staging              nginx-1                  6/6     6            6           7m44s

Apply the ValidatingAdmissionPolicy with its binding to all resources in the cluster:

Terminal window
kyverno apply /path/to/check-deployment-replicas.yaml --cluster --policy-report

The following output will be generated:

Terminal window
Applying 1 policy rule(s) to 4 resource(s)...
----------------------------------------------------------------------
POLICY REPORT:
----------------------------------------------------------------------
apiVersion: wgpolicyk8s.io/v1alpha2
kind: ClusterPolicyReport
metadata:
  creationTimestamp: null
  name: merged
results:
  - message: 'failed expression: object.spec.replicas <= 5'
    policy: check-deployment-replicas
    resources:
      - apiVersion: apps/v1
        kind: Deployment
        name: nginx-1
        namespace: staging
        uid: a95d1594-44a7-4c8a-9225-04ac34cb9494
    result: fail
    scored: true
    source: kyverno
    timestamp:
      nanos: 0
      seconds: 1707394871
summary:
  error: 0
  fail: 1
  pass: 0
  skip: 0
  warn: 0

As expected, the policy is only applied to nginx-1 as it matches both the policy definition and its binding.

Similarly, you can test a MutatingAdmissionPolicy to preview the changes it would make to a resource. The CLI will output the final, mutated resource.

For instance, you can test a MutatingAdmissionPolicy that adds a label to a ConfigMap.

Policy manifest (add-label-to-configmap.yaml):

apiVersion: admissionregistration.k8s.io/v1alpha1
kind: MutatingAdmissionPolicy
metadata:
  name: 'add-label-to-configmap'
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: ['']
        apiVersions: ['v1']
        operations: ['CREATE']
        resources: ['configmaps']
  failurePolicy: Fail
  reinvocationPolicy: Never
  mutations:
    - patchType: 'ApplyConfiguration'
      applyConfiguration:
        expression: >
          object.metadata.?labels["lfx-mentorship"].hasValue() ?
          Object{} :
          Object{ metadata: Object.metadata{ labels: {"lfx-mentorship": "kyverno"}}}

Resource manifest (configmap.yaml):

apiVersion: v1
kind: ConfigMap
metadata:
  name: game-demo
  labels:
    app: game
data:
  player_initial_lives: '3'

Now, apply the MutatingAdmissionPolicy to the ConfigMap resource:

Terminal window
kyverno apply /path/to/add-label-to-configmap.yaml --resource /path/to/configmap.yaml

The output will show the mutated ConfigMap with the added label:

Terminal window
Applying 1 policy rule(s) to 1 resource(s)...
policy add-label-to-configmap applied to default/ConfigMap/game-demo:
apiVersion: v1
data:
  player_initial_lives: "3"
kind: ConfigMap
metadata:
  labels:
    app: game
    lfx-mentorship: kyverno
  name: game-demo
  namespace: default
---
Mutation has been applied successfully.
pass: 1, fail: 0, warn: 0, error: 0, skip: 0

The output displays the ConfigMap with the new lfx-mentorship: kyverno label, confirming the mutation was applied correctly.

Example 2: Mutation with a Binding and Namespace Selector

You can also test policies that include a MutatingAdmissionPolicyBinding to control where the policy is applied. This example makes use of a namespace selector to apply the policy only to ConfigMaps in a specific namespace.

To do this, you must provide a values.yaml file to simulate the labels of the Namespaces your resources belong to.

Policy manifest (add-label-to-configmap.yaml):

This file defines a policy to add a label and a binding that restricts it to Namespaces labeled environment: staging or environment: production.

apiVersion: admissionregistration.k8s.io/v1alpha1
kind: MutatingAdmissionPolicy
metadata:
  name: 'add-label-to-configmap'
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: ['']
        apiVersions: ['v1']
        operations: ['CREATE']
        resources: ['configmaps']
  failurePolicy: Fail
  reinvocationPolicy: Never
  mutations:
    - patchType: 'ApplyConfiguration'
      applyConfiguration:
        expression: >
          object.metadata.?labels["lfx-mentorship"].hasValue() ?
          Object{} :
          Object{ metadata: Object.metadata{ labels: {"lfx-mentorship": "kyverno"}}}
---
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: MutatingAdmissionPolicyBinding
metadata:
  name: 'add-label-to-configmap-binding'
spec:
  policyName: 'add-label-to-configmap'
  matchResources:
    namespaceSelector:
      matchExpressions:
        - key: environment
          operator: In
          values:
            - staging
            - production

Resource manifest (configmaps.yaml):

This file contains three ConfigMap resources in different Namespaces. Only the ones in staging and production should be mutated.

apiVersion: v1
kind: ConfigMap
metadata:
  name: matched-cm-1
  namespace: staging
  labels:
    color: red
data:
  player_initial_lives: '3'
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: matched-cm-2
  namespace: production
  labels:
    color: red
data:
  player_initial_lives: '3'
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: unmatched-cm
  namespace: testing
  labels:
    color: blue
data:
  player_initial_lives: '3'

Values file (values.yaml):

This file provides the necessary context. It tells the Kyverno CLI what labels are associated with the staging, production, and testing Namespaces so it can correctly evaluate the namespaceSelector in the binding.

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
namespaceSelector:
  - labels:
      environment: staging
    name: staging
  - labels:
      environment: production
    name: production
  - labels:
      environment: testing
    name: testing

Now, apply the MutatingAdmissionPolicy and its binding to the ConfigMaps:

Terminal window
kyverno apply /path/to/add-label-to-configmap.yaml --resource /path/to/configmaps.yaml -f /path/to/values.yaml

The output will show the mutated ConfigMaps in the staging and production Namespaces, while the one in the testing Namespace remains unchanged:

Terminal window
Applying 1 policy rule(s) to 3 resource(s)...
policy add-label-to-configmap applied to staging/ConfigMap/matched-cm-1:
apiVersion: v1
data:
  player_initial_lives: "3"
kind: ConfigMap
metadata:
  labels:
    color: red
    lfx-mentorship: kyverno
  name: matched-cm-1
  namespace: staging
---
Mutation has been applied successfully.
policy add-label-to-configmap applied to production/ConfigMap/matched-cm-2:
apiVersion: v1
data:
  player_initial_lives: "3"
kind: ConfigMap
metadata:
  labels:
    color: red
    lfx-mentorship: kyverno
  name: matched-cm-2
  namespace: production
---
Mutation has been applied successfully.
pass: 2, fail: 0, warn: 0, error: 0, skip: 0

In this example, we will apply a ValidatingPolicy against two Deployment manifests: one that complies with the policy and one that violates it.

First, we define a ValidatingPolicy that ensures any Deployment has no more than two replicas.

Policy manifest (check-deployment-replicas.yaml):

apiVersion: policies.kyverno.io/v1alpha1
kind: ValidatingPolicy
metadata:
  name: check-deployment-replicas
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: ['apps']
        apiVersions: ['v1']
        operations: ['CREATE', 'UPDATE']
        resources: ['deployments']
  validations:
    - expression: 'object.spec.replicas <= 2'
      message: 'Deployment replicas must be less than or equal to 2'

Next, we have two Deployment manifests. The good-deployment is compliant with 2 replicas, while the bad-deployment is non-compliant with 3 replicas.

Resource manifest (deployments.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: good-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bad-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest

Now, we use the kyverno apply command to test the policy against both resources.

Terminal window
kyverno apply /path/to/check-deployment-replicas.yaml --resource /path/to/deployments.yaml --policy-report

The following output will be generated:

Terminal window
apiVersion: openreports.io/v1alpha1
kind: ClusterReport
metadata:
  creationTimestamp: null
  name: merged
results:
  - message: Deployment replicas must be less than or equal to 2
    policy: check-deployment-replicas
    properties:
      process: background scan
    resources:
      - apiVersion: apps/v1
        kind: Deployment
        name: bad-deployment
        namespace: default
    result: fail
    scored: true
    source: KyvernoValidatingPolicy
    timestamp:
      nanos: 0
      seconds: 1752755472
  - message: success
    policy: check-deployment-replicas
    properties:
      process: background scan
    resources:
      - apiVersion: apps/v1
        kind: Deployment
        name: good-deployment
        namespace: default
    result: pass
    scored: true
    source: KyvernoValidatingPolicy
    timestamp:
      nanos: 0
      seconds: 1752755472
source: ""
summary:
  error: 0
  fail: 1
  pass: 1
  skip: 0
  warn: 0

In addition to testing local YAML files, you can use the kyverno apply command to validate policies against resources that are already running in a Kubernetes cluster. Instead of specifying resource files with the --resource flag, you can use the --cluster flag.

For example, to test the check-deployment-replicas policy against all Deployment resources in your currently active cluster, you would run:

Terminal window
kyverno apply /path/to/check-deployment-replicas.yaml --cluster --policy-report

Many advanced policies need to look up the state of other resources in the cluster using Kyverno’s custom CEL functions like resource.Get(). When testing such policies locally with the kyverno apply command, the CLI cannot connect to the cluster to retrieve the required resources, so you have to provide them as input via the --context-file flag.

This flag allows you to specify the resources that the policy will reference. The CLI will then use these resources to evaluate the policy.

This example demonstrates how to test a policy that validates an incoming Pod by checking its name against a value stored in a ConfigMap.

First, we define a ValidatingPolicy that uses resource.Get() to fetch a ConfigMap named policy-cm. The policy then validates that the incoming Pod’s name matches the name key in the ConfigMap’s data.

Policy manifest (check-pod-name-from-configmap.yaml):

apiVersion: policies.kyverno.io/v1alpha1
kind: ValidatingPolicy
metadata:
  name: check-pod-name-from-configmap
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: ['']
        apiVersions: ['v1']
        operations: ['CREATE', 'UPDATE']
        resources: ['pods']
  variables:
    # This variable uses a Kyverno CEL function to get a ConfigMap from the cluster.
    - name: cm
      expression: >-
        resource.Get("v1", "configmaps", object.metadata.namespace, "policy-cm")
  validations:
    # This rule validates that the Pod's name matches the 'name' key in the ConfigMap's data.
    - expression: >-
        object.metadata.name == variables.cm.data.name

Next, we define two Pod manifests: good-pod, which should pass the validation, and bad-pod, which should fail.

Resource manifest (pods.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: good-pod
spec:
  containers:
    - name: nginx
      image: nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: bad-pod
spec:
  containers:
    - name: nginx
      image: nginx

Because the CLI cannot connect to a cluster to fetch the policy-cm ConfigMap, we must provide it in a context file. This file contains a mock ConfigMap that resource.Get() will use during local evaluation.

Context file (context.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Context
metadata:
  name: context
spec:
  # The resources defined here will be available to functions like resource.Get()
  resources:
    - apiVersion: v1
      kind: ConfigMap
      metadata:
        namespace: default
        name: policy-cm
      data:
        # According to this, the valid pod name is 'good-pod'.
        name: good-pod

Now, we can run the kyverno apply command, providing the policy, the resources, and the context file. We also use the -p (or --policy-report) flag to generate a ClusterReport detailing the results.

Terminal window
kyverno apply /path/to/policy.yaml --resource /path/to/pods.yaml --context-file /path/to/context.yaml -p

The following output will be generated:

apiVersion: openreports.io/v1alpha1
kind: ClusterReport
metadata:
  creationTimestamp: null
  name: merged
results:
  - message: success
    policy: check-pod-name-from-configmap
    properties:
      process: background scan
    resources:
      - apiVersion: v1
        kind: Pod
        name: good-pod
        namespace: default
    result: pass
    scored: true
    source: KyvernoValidatingPolicy
    timestamp:
      nanos: 0
      seconds: 1752756617
  - policy: check-pod-name-from-configmap
    properties:
      process: background scan
    resources:
      - apiVersion: v1
        kind: Pod
        name: bad-pod
        namespace: default
    result: fail
    scored: true
    source: KyvernoValidatingPolicy
    timestamp:
      nanos: 0
      seconds: 1752756617
source: ''
summary:
  error: 0
  fail: 1
  pass: 1
  skip: 0
  warn: 0
  • The good-pod resource resulted in a pass because its name matches the value in the ConfigMap provided by the context file.

  • The bad-pod resource resulted in a fail because its name does not match the value in the ConfigMap.

When using the --cluster flag, the CLI connects to your active Kubernetes cluster, so a local context file is not needed. The resource.Get() function will fetch the live ConfigMap directly from the cluster, so ensure the ConfigMap and the Pod resources exist in your cluster before running the command.

Terminal window
kyverno apply /path/to/check-pod-name-from-configmap.yaml --cluster --policy-report

When applying a ValidatingPolicy with a PolicyException, use the --exception flag to specify the exception manifest. The CLI will then apply the policy and the exception together.

In this example, we will test a policy that disallows hostPath volumes, but we will use a PolicyException to create an exemption for a specific Pod.

Policy manifest (disallow-host-path.yaml):

apiVersion: policies.kyverno.io/v1alpha1
kind: ValidatingPolicy
metadata:
  name: disallow-host-path
spec:
  matchConstraints:
    resourceRules:
      - apiGroups: ['']
        apiVersions: ['v1']
        operations: ['CREATE', 'UPDATE']
        resources: ['pods']
  validations:
    - expression: '!has(object.spec.volumes) || object.spec.volumes.all(volume, !has(volume.hostPath))'
      message: 'HostPath volumes are forbidden. The field spec.volumes[*].hostPath must be unset.'

Next, we define a Pod that clearly violates this policy by mounting a hostPath volume. Without an exception, this Pod would be blocked.

Resource manifest (pod-with-hostpath.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-hostpath
spec:
  containers:
    - name: nginx
      image: nginx
  volumes:
    - name: udev
      hostPath:
        path: /etc/udev

Now, we create a PolicyException to exempt our specific Pod from this policy. The exception matches the Pod by name and references the disallow-host-path policy.

Policy Exception manifest (exception.yaml):

apiVersion: policies.kyverno.io/v1alpha1
kind: PolicyException
metadata:
  name: exempt-hostpath-pod
spec:
  policyRefs:
    - name: disallow-host-path
      kind: ValidatingPolicy
  matchConditions:
    - name: 'skip-pod-by-name'
      expression: "object.metadata.name == 'pod-with-hostpath'"

Now, we use the kyverno apply command, providing the policy, the resource, and the exception using the --exception flag. We will also use -p to generate a detailed report.

Terminal window
kyverno apply disallow-host-path.yaml --resource pod-with-hostpath.yaml --exception exception.yaml -p

The following output will be generated:

apiVersion: openreports.io/v1alpha1
kind: ClusterReport
metadata:
  creationTimestamp: null
  name: merged
results:
  - message: 'rule is skipped due to policy exception: exempt-hostpath-pod'
    policy: disallow-host-path
    properties:
      exceptions: exempt-hostpath-pod
      process: background scan
    resources:
      - apiVersion: v1
        kind: Pod
        name: pod-with-hostpath
        namespace: default
    result: skip
    rule: exception
    scored: true
    source: KyvernoValidatingPolicy
    timestamp:
      nanos: 0
      seconds: 1752759828
source: ''
summary:
  error: 0
  fail: 0
  pass: 0
  skip: 1
  warn: 0

The output confirms that the PolicyException worked as intended:

  • result: skip: The policy rule was not enforced on the resource.

  • properties.exceptions: exempt-hostpath-pod: The report explicitly names the PolicyException responsible for the skip.

  • summary.skip: 1: The final count reflects that one rule was skipped.

The test command is used to test a given set of resources against one or more policies by checking desired results, declared in advance in a separate test manifest file, against the actual results. test is useful when you wish to declare what your expected results should be by defining the intent, which then assists with locating discrepancies should those results change.

test works by scanning a given location, which can be either a Git repository or a local folder, and executing the tests defined within. The rule types validate, mutate, and generate are currently supported. The command recursively looks for YAML files with a specified file name containing policy test declarations (described below) and then executes those tests; directory recursion is supported. All files applicable to the same test must be co-located. test supports the auto-gen feature, making it possible to test, for example, Deployment resources against a Pod policy.
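
For example, when testing a Deployment against a Pod policy via auto-gen, the results entry references the auto-generated rule, which carries the autogen- prefix. A minimal sketch, assuming a deployment.yaml containing a Deployment named my-deployment (an illustrative name) and the disallow-latest-tag policy shown later on this page:

apiVersion: cli.kyverno.io/v1alpha1
kind: Test
metadata:
  name: autogen-example
policies:
  - disallow_latest_tag.yaml
resources:
  - deployment.yaml
results:
  - policy: disallow-latest-tag
    rule: autogen-validate-image-tag # the auto-generated counterpart of the validate-image-tag rule
    resources:
      - my-deployment
    kind: Deployment
    result: pass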

test will search for a file named kyverno-test.yaml and, if found, will execute the tests within.

In each test, there are four desired results which can be tested for. If the actual result of the test, once executed, matches the desired result as defined in the test manifest, it will be scored as a pass in the command output. For example, if the specified result of a given test of a resource against a policy is declared to be a pass and the actual result when tested is also a pass, the command output will show as pass. If the actual result was instead a skip, the command output will show as fail because the two results do not agree. The following are the desired results which can be specified in a test manifest.

  1. pass: The resource passes the policy definition. For validate rules which are written with a deny statement, this will not be a possible result. mutate and generate rules can declare a pass.
  2. skip: The resource does not meet either the match or exclude block, or does not pass the preconditions statements. For validate rules which are written with a deny statement, this is a possible result. If a rule contains certain conditional anchors which are not satisfied, the result may also be scored as a skip.
  3. fail: The resource does not pass the policy definition. Typically used for validate rules with pattern-style policy definitions.
  4. warn: Setting the annotation policies.kyverno.io/scored to "false" on a resource or policy which would otherwise fail will be considered a warn.

Use --detailed-results for a comprehensive output (the default is false). For help with the test command, pass the -h flag for extensive output including usage, flags, and sample manifests.
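
For example:

Terminal window
kyverno test . --detailed-results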

The kyverno-test.yaml test declaration file must follow the format below.

apiVersion: cli.kyverno.io/v1alpha1
kind: Test
metadata:
  name: kyverno-test
policies:
  - <path/to/policy.yaml>
  - <path/to/policy.yaml>
resources:
  - <path/to/resource.yaml>
  - <path/to/resource.yaml>
targetResources: # optional key for specifying target resources when testing mutate existing rules
  - <path/to/target-resource.yaml>
  - <path/to/target-resource.yaml>
exceptions: # optional files for specifying exceptions. See below for an example.
  - <path/to/exception.yaml>
  - <path/to/exception.yaml>
variables: variables.yaml # optional file for declaring variables. See below for an example.
userinfo: user_info.yaml # optional file for declaring admission request information (roles, cluster roles, and subjects). See below for an example.
context: context.yaml # optional file for declaring context variables. Used with the new policy types: ValidatingPolicies, ImageValidatingPolicies, MutatingPolicies, and GeneratingPolicies.
results:
  - policy: <name> # a namespaced Policy is specified as <namespace>/<name>
    isValidatingAdmissionPolicy: false # required when the policy is a ValidatingAdmissionPolicy.
    isValidatingPolicy: false # required when the policy is a ValidatingPolicy.
    rule: <name> # required when the policy is a Kyverno policy.
    resources: # optional, primarily for `validate` rules.
      - <namespace_1/name_1>
      - <namespace_2/name_2>
    patchedResources: <file_name.yaml> # required when testing a mutate rule. The file may contain one or more resources separated by ---.
    generatedResource: <file_name.yaml> # required when testing a generate rule.
    cloneSourceResource: <file_name.yaml> # required when testing a generate rule that uses a `clone` object.
    kind: <kind> # optional
    result: pass
checks:
  - match:
      resource: {} # match results associated with a resource
      policy: {} # match results associated with a policy
      rule: {} # match results associated with a rule
    assert: {} # assertion to validate the content of matched elements
    error: {} # negative assertion to validate the content of matched elements

The test declaration consists of the following parts:

  1. The policies element which lists one or more policies to be applied.
  2. The resources element which lists one or more resources to which the policies are applied.
  3. The exceptions element which lists one or more policy exceptions. Cannot be used with ValidatingAdmissionPolicy. Optional.
  4. The variables element which defines a file in which variables and their values are stored for use in the policy test. Optional depending on policy content.
  5. The userinfo element which declares admission request data for subjects and roles. Optional depending on policy content.
  6. The results element which declares the expected results. Depending on the type of rule being tested, this section may vary.
  7. The checks element which declares the assertions to be evaluated against the results (see Working with Assertion Trees).

If needing to pass variables, such as those from external data sources like context variables built from API calls or others, a variables.yaml file can be defined with the same format as accepted by the apply command. If a variable needs to contain an array of strings, it must be JSON encoded. As with the apply command, variables that begin with request.object normally do not need to be specified in the variables file as these will be sourced from the resource. Policies which trigger based upon request.operation equaling CREATE do not need a variables file. The CLI will assume a value of CREATE if no variable for request.operation is defined.

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
policies:
  - name: exclude-namespaces-example
    rules:
      - name: exclude-namespaces-dynamically
        values:
          namespacefilters.data.exclude: asdf
    resources:
      - name: nonroot-pod
        values:
          namespacefilters.data.exclude: foo
      - name: root-pod
        values:
          namespacefilters.data.exclude: '["cluster-admin", "cluster-operator", "tenant-admin"]'

A variables file may also optionally specify global variable values without the need to name specific rules or resources avoiding repetition for the same variable and same value.

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
globalValues:
  request.operation: UPDATE

If policies use a namespaceSelector, these can also be specified in the variables file.

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
namespaceSelector:
  - name: test1
    labels:
      foo.com/managed-state: managed

The user can also declare a user_info.yaml file that can be used to pass admission request information such as roles, cluster roles, and subjects.

apiVersion: cli.kyverno.io/v1alpha1
kind: UserInfo
metadata:
  name: user-info
clusterRoles:
  - admin
userInfo:
  username: someone@somecorp.com

Testing for subresources in Kind/Subresource matching format also requires a subresources{} section in the values file.

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
subresources:
  - subresource:
      name: <name of subresource>
      kind: <kind of subresource>
      group: <group of subresource>
      version: <version of subresource>
    parentResource:
      name: <name of parent resource>
      kind: <kind of parent resource>
      group: <group of parent resource>
      version: <version of parent resource>

Here is an example when testing for subresources:

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
subresources:
  - subresource:
      name: 'deployments/scale'
      kind: 'Scale'
      group: 'autoscaling'
      version: 'v1'
    parentResource:
      name: 'deployments'
      kind: 'Deployment'
      group: 'apps'
      version: 'v1'

Test a set of local files in the working directory.

Terminal window
kyverno test .

Test a set of local files by specifying the directory.

Terminal window
kyverno test /path/to/folderContainingTestYamls

Test an entire Git repository by specifying the branch name within the repo URL. If a branch is not specified, main will be used by default.

Terminal window
kyverno test https://github.com/kyverno/policies/release-1.6

Test a specific directory of the repository by specifying the directory within the repo URL and the branch with the --git-branch or -b flag. Even when testing against main, using a directory in the repo URL requires passing the --git-branch or -b flag.

Terminal window
kyverno test https://github.com/kyverno/policies/pod-security/restricted -b release-1.6

Use the -f flag to set a custom file name which includes test cases. By default, test will search for a file called kyverno-test.yaml.
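
For example, assuming the test cases are declared in a file named custom-test.yaml (an illustrative name):

Terminal window
kyverno test . -f custom-test.yaml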

Testing Policies with Image Registry Access


For policies which require image registry access to set context variables, those variables may be sourced from a variables file (described above) or from a “live” registry by passing the --registry flag.
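
For example, to run the tests while resolving image data from live registries:

Terminal window
kyverno test . --registry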

In some cases, you may wish to test only a subset of the policy, rule, and resource combinations rather than all those defined in a test manifest. Use the --test-case-selector flag to specify the exact tests you wish to execute.

Terminal window
kyverno test . --test-case-selector "policy=add-default-resources, rule=add-default-requests, resource=nginx-demo2"

The test command executes a test declaration by applying the policies to the resources and comparing the actual results with the desired/expected results. The test passes if the actual results match the expected results.

Below is an example of testing a policy containing two validate rules against the same resource where each is supposed to pass the policy.

Policy manifest (disallow_latest_tag.yaml):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
spec:
  rules:
    - name: require-image-tag
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        failureAction: Audit
        message: 'An image tag is required.'
        pattern:
          spec:
            containers:
              - image: '*:*'
    - name: validate-image-tag
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        failureAction: Audit
        message: "Using a mutable image tag e.g. 'latest' is not allowed."
        pattern:
          spec:
            containers:
              - image: '!*:latest'

Resource manifest (resource.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
    - name: nginx
      image: nginx:1.12

Test manifest (kyverno-test.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Test
metadata:
  name: disallow_latest_tag
policies:
  - disallow_latest_tag.yaml
resources:
  - resource.yaml
results:
  - policy: disallow-latest-tag
    rule: require-image-tag
    resources:
      - myapp-pod
    kind: Pod
    result: pass
  - policy: disallow-latest-tag
    rule: validate-image-tag
    resources:
      - myapp-pod
    kind: Pod
    result: pass
Terminal window
$ kyverno test .
Loading test ( kyverno-test.yaml ) ...
Loading values/variables ...
Loading policies ...
Loading resources ...
Loading exceptions ...
Applying 1 policy to 1 resource ...
Checking results ...
│────│─────────────────────│────────────────────│───────────────│────────│────────│
│ ID │ POLICY              │ RULE               │ RESOURCE      │ RESULT │ REASON │
│────│─────────────────────│────────────────────│───────────────│────────│────────│
│ 1  │ disallow-latest-tag │ require-image-tag  │ Pod/myapp-pod │ Pass   │ Ok     │
│ 2  │ disallow-latest-tag │ validate-image-tag │ Pod/myapp-pod │ Pass   │ Ok     │
│────│─────────────────────│────────────────────│───────────────│────────│────────│
Test Summary: 2 tests passed and 0 tests failed

In the below case, a mutate policy which adds default resources to a Pod is being tested against two resources. Notice the addition of the patchedResources field in the results[] array, which is a requirement when testing mutate rules.

Policy manifest (add-default-resources.yaml):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-resources
spec:
  background: false
  rules:
    - name: add-default-requests
      match:
        any:
          - resources:
              kinds:
                - Pod
      preconditions:
        any:
          - key: '{{request.operation}}'
            operator: AnyIn
            value:
              - CREATE
              - UPDATE
      mutate:
        patchStrategicMerge:
          spec:
            containers:
              - (name): '*'
                resources:
                  requests:
                    +(memory): '100Mi'
                    +(cpu): '100m'

Resource manifest (resource.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo1
spec:
  containers:
    - name: nginx
      image: nginx:1.14.2
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx-demo2
spec:
  containers:
    - name: nginx
      image: nginx:latest
      resources:
        requests:
          memory: '200Mi'
          cpu: '200m'

Variables manifest (values.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
  name: values
policies:
  - name: add-default-resources
    resources:
      - name: nginx-demo1
        values:
          request.operation: CREATE
      - name: nginx-demo2
        values:
          request.operation: UPDATE

Test manifest (kyverno-test.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Test
metadata:
  name: add-default-resources
policies:
  - add-default-resources.yaml
resources:
  - resource.yaml
variables: values.yaml
results:
  - policy: add-default-resources
    rule: add-default-requests
    resources:
      - nginx-demo1
    patchedResources: patchedResource1.yaml
    kind: Pod
    result: pass
  - policy: add-default-resources
    rule: add-default-requests
    resources:
      - nginx-demo2
    patchedResources: patchedResource2.yaml
    kind: Pod
    result: skip
Terminal window
$ kyverno test .
Executing add-default-resources...
applying 1 policy to 2 resources...
skipped mutate policy add-default-resources -> resource default/Pod/nginx-demo2
│───│───────────────────────│──────────────────────│─────────────────────────│────────│
│ # │ POLICY                │ RULE                 │ RESOURCE                │ RESULT │
│───│───────────────────────│──────────────────────│─────────────────────────│────────│
│ 1 │ add-default-resources │ add-default-requests │ default/Pod/nginx-demo1 │ Pass   │
│ 2 │ add-default-resources │ add-default-requests │ default/Pod/nginx-demo2 │ Pass   │
│───│───────────────────────│──────────────────────│─────────────────────────│────────│
Test Summary: 2 tests passed and 0 tests failed

In this scenario, a mutate existing policy adds a label to Secrets when a request is made on a particular ConfigMap; note the use of the targetResources field.

Test manifest (kyverno-test.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Test
metadata:
  name: kyverno-test.yaml
policies:
  - policy.yaml
resources:
  - trigger-cm.yaml
targetResources:
  - raw-secret.yaml
results:
  - patchedResources: mutated-secret.yaml
    policy: mutate-existing-secret
    resources:
      - secret-1
    result: pass
    rule: mutate-secret-on-configmap-create

Policy (policy.yaml):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: mutate-existing-secret
spec:
  rules:
    - match:
        any:
          - resources:
              kinds:
                - ConfigMap
              names:
                - dictionary-1
              namespaces:
                - staging
      mutate:
        mutateExistingOnPolicyUpdate: false
        patchStrategicMerge:
          metadata:
            labels:
              foo: bar
        targets:
          - apiVersion: v1
            kind: Secret
            name: '*'
            namespace: staging
      name: mutate-secret-on-configmap-create

Trigger (trigger-cm.yaml):

apiVersion: v1
kind: ConfigMap
metadata:
  name: dictionary-1
  namespace: staging

Target (raw-secret.yaml):

apiVersion: v1
kind: Secret
metadata:
  name: secret-1
  namespace: staging

Mutated target (mutated-secret.yaml):

apiVersion: v1
kind: Secret
metadata:
  labels:
    foo: bar
  name: secret-1
  namespace: staging
Terminal window
$ kyverno test .
Loading test ( 3-test/kyverno-test.yaml ) ...
Loading values/variables ...
Loading policies ...
Loading resources ...
Loading exceptions ...
Applying 1 policy to 1 resource ...
Checking results ...
│────│────────────────────────│───────────────────────────────────│────────────────────────────│────────│────────│
│ ID │ POLICY                 │ RULE                              │ RESOURCE                   │ RESULT │ REASON │
│────│────────────────────────│───────────────────────────────────│────────────────────────────│────────│────────│
│ 1  │ mutate-existing-secret │ mutate-secret-on-configmap-create │ v1/Secret/staging/secret-1 │ Pass   │ Ok     │
│────│────────────────────────│───────────────────────────────────│────────────────────────────│────────│────────│
Test Summary: 1 tests passed and 0 tests failed

If you don’t specify an entry in the resources field of a result, the CLI checks results for all trigger and target resources involved in the test and matches them against the resources specified in the patchedResources file:

Patched resources (mutated-resources.yaml):

apiVersion: v1
kind: ConfigMap
metadata:
name: dictionary-1
namespace: staging
---
apiVersion: v1
kind: Secret
metadata:
labels:
foo: bar
name: secret-1
namespace: staging
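
The Test manifest for this variant is the same as in the previous example, except that the results entry omits the resources field. A minimal sketch (reusing the file names from the previous example, with patchedResources pointing at the combined file above):

apiVersion: cli.kyverno.io/v1alpha1
kind: Test
metadata:
  name: kyverno-test.yaml
policies:
  - policy.yaml
resources:
  - trigger-cm.yaml
targetResources:
  - raw-secret.yaml
results:
  - patchedResources: mutated-resources.yaml
    policy: mutate-existing-secret
    result: pass
    rule: mutate-secret-on-configmap-create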
Terminal window
$ kyverno test .
Loading test ( 5-test-with-selection/kyverno-test.yaml ) ...
Loading values/variables ...
Loading policies ...
Loading resources ...
Loading exceptions ...
Applying 1 policy to 1 resource ...
Checking results ...
│────│────────────────────────│───────────────────────────────────│───────────────────────────────────│────────│────────│
│ ID │ POLICY │ RULE │ RESOURCE │ RESULT │ REASON │
│────│────────────────────────│───────────────────────────────────│───────────────────────────────────│────────│────────│
│ 1 │ mutate-existing-secret │ mutate-secret-on-configmap-create │ v1/Secret/staging/secret-1 │ Pass │ Ok │
│ 2 │ mutate-existing-secret │ mutate-secret-on-configmap-create │ v1/ConfigMap/staging/dictionary-1 │ Pass │ Ok │
│────│────────────────────────│───────────────────────────────────│───────────────────────────────────│────────│────────│
Test Summary: 2 tests passed and 0 tests failed

In the following policy test, a generate policy rule is applied which generates a new resource from an existing resource present in resource.yaml. To test a generate policy, add a generatedResource field to the corresponding results[] entry; the CLI compares this file against the resource generated by the policy.

Policy manifest (add_network_policy.yaml):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: add-networkpolicy
spec:
rules:
- name: default-deny
match:
any:
- resources:
kinds:
- Namespace
generate:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
name: default-deny
namespace: '{{request.object.metadata.name}}'
synchronize: true
data:
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress

Resource manifest (resource.yaml):

apiVersion: v1
kind: Namespace
metadata:
name: hello-world-namespace

Generated Resource (generatedResource.yaml):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny
namespace: hello-world-namespace
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress

Test manifest (kyverno-test.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Test
metadata:
name: deny-all-traffic
policies:
- add_network_policy.yaml
resources:
- resource.yaml
results:
- policy: add-networkpolicy
rule: default-deny
resources:
- hello-world-namespace
generatedResource: generatedResource.yaml
kind: Namespace
result: pass
Terminal window
$ kyverno test .
Executing deny-all-traffic...
applying 1 policy to 1 resource...
│───│───────────────────│──────────────│──────────────────────────────────│────────│
│ # │ POLICY            │ RULE         │ RESOURCE                         │ RESULT │
│───│───────────────────│──────────────│──────────────────────────────────│────────│
│ 1 │ add-networkpolicy │ default-deny │ /Namespace/hello-world-namespace │ Pass   │
│───│───────────────────│──────────────│──────────────────────────────────│────────│
Test Summary: 1 tests passed and 0 tests failed

In the following policy test, a validate rule ensures that Pods aren’t allowed to access host namespaces. A Policy Exception exempts Pods and Deployments whose names begin with important-tool in the delta namespace from this rule. The exceptions field in the Test manifest declares the Policy Exception manifest. Resources that violate the rule but match a policy exception are expected to be skipped; resources that violate the rule without a matching exception are expected to fail.

Policy manifest (disallow-host-namespaces.yaml):

apiVersion: kyverno.io/v2beta1
kind: ClusterPolicy
metadata:
name: disallow-host-namespaces
spec:
background: false
rules:
- name: host-namespaces
match:
any:
- resources:
kinds:
- Pod
validate:
failureAction: Enforce
message: >-
Sharing the host namespaces is disallowed. The fields spec.hostNetwork,
spec.hostIPC, and spec.hostPID must be unset or set to `false`.
pattern:
spec:
=(hostPID): 'false'
=(hostIPC): 'false'
=(hostNetwork): 'false'

Policy Exception manifest (delta-exception.yaml):

apiVersion: kyverno.io/v2
kind: PolicyException
metadata:
name: delta-exception
namespace: delta
spec:
exceptions:
- policyName: disallow-host-namespaces
ruleNames:
- host-namespaces
- autogen-host-namespaces
match:
any:
- resources:
kinds:
- Pod
- Deployment
namespaces:
- delta
names:
- important-tool*

Resource manifest (resource.yaml):

Both Deployments violate the policy but only one matches an exception. The Deployment without an exception will fail while the one with an exception will be skipped.

apiVersion: apps/v1
kind: Deployment
metadata:
name: important-tool
namespace: delta
labels:
app: busybox
spec:
replicas: 1
selector:
matchLabels:
app: busybox
template:
metadata:
labels:
app: busybox
spec:
hostIPC: true
containers:
- image: busybox:1.35
name: busybox
command: ['sleep', '1d']
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: not-important
namespace: gamma
labels:
app: busybox
spec:
replicas: 1
selector:
matchLabels:
app: busybox
template:
metadata:
labels:
app: busybox
spec:
hostIPC: true
containers:
- image: busybox:1.35
name: busybox
command: ['sleep', '1d']

Test manifest (kyverno-test.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Test
metadata:
name: disallow-host-namespaces-test-exception
policies:
- disallow-host-namespaces.yaml
resources:
- resource.yaml
exceptions:
- delta-exception.yaml
results:
- kind: Deployment
policy: disallow-host-namespaces
resources:
- important-tool
rule: host-namespaces
result: skip
- kind: Deployment
policy: disallow-host-namespaces
resources:
- not-important
rule: host-namespaces
result: fail
Terminal window
$ kyverno test .
Loading test ( .kyverno-test/kyverno-test.yaml ) ...
Loading values/variables ...
Loading policies ...
Loading resources ...
Loading exceptions ...
Applying 1 policy to 2 resources with 1 exception ...
Checking results ...
│────│──────────────────────────│─────────────────│───────────────────────────│────────│────────│
│ ID │ POLICY                   │ RULE            │ RESOURCE                  │ RESULT │ REASON │
│────│──────────────────────────│─────────────────│───────────────────────────│────────│────────│
│ 1  │ disallow-host-namespaces │ host-namespaces │ Deployment/important-tool │ Pass   │ Ok     │
│ 2  │ disallow-host-namespaces │ host-namespaces │ Deployment/not-important  │ Pass   │ Ok     │
│────│──────────────────────────│─────────────────│───────────────────────────│────────│────────│
Test Summary: 2 tests passed and 0 tests failed

For many more examples of test cases, please see the kyverno/policies repository which strives to have test cases for all the sample policies which appear on the website.

The kyverno test command can be used to test native Kubernetes policies.

To test a ValidatingAdmissionPolicy, the test manifest must include the isValidatingAdmissionPolicy: true field in each result entry.

Below is an example of testing a ValidatingAdmissionPolicy against two resources, one of which violates the policy.

Policy manifest (disallow-host-path.yaml):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
name: disallow-host-path
spec:
failurePolicy: Fail
matchConstraints:
resourceRules:
- apiGroups: ['apps']
apiVersions: ['v1']
operations: ['CREATE', 'UPDATE']
resources: ['deployments']
validations:
- expression: '!has(object.spec.template.spec.volumes) || object.spec.template.spec.volumes.all(volume, !has(volume.hostPath))'
message: 'HostPath volumes are forbidden. The field spec.template.spec.volumes[*].hostPath must be unset.'

Resource manifest (deployments.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment-pass
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx-server
image: nginx
volumeMounts:
- name: temp
mountPath: /scratch
volumes:
- name: temp
emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: deployment-fail
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx-server
image: nginx
volumeMounts:
- name: udev
mountPath: /data
volumes:
- name: udev
hostPath:
path: /etc/udev

Test manifest (kyverno-test.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Test
metadata:
name: disallow-host-path-test
policies:
- disallow-host-path.yaml
resources:
- deployments.yaml
results:
- policy: disallow-host-path
resources:
- deployment-pass
isValidatingAdmissionPolicy: true
kind: Deployment
result: pass
- policy: disallow-host-path
resources:
- deployment-fail
isValidatingAdmissionPolicy: true
kind: Deployment
result: fail
Terminal window
$ kyverno test .
Loading test ( kyverno-test.yaml ) ...
Loading values/variables ...
Loading policies ...
Loading resources ...
Applying 1 policy to 2 resources ...
Checking results ...
│────│────────────────────│──────│────────────────────────────│────────│────────│
│ ID │ POLICY             │ RULE │ RESOURCE                   │ RESULT │ REASON │
│────│────────────────────│──────│────────────────────────────│────────│────────│
│ 1  │ disallow-host-path │      │ Deployment/deployment-pass │ Pass   │ Ok     │
│ 2  │ disallow-host-path │      │ Deployment/deployment-fail │ Pass   │ Ok     │
│────│────────────────────│──────│────────────────────────────│────────│────────│
Test Summary: 2 tests passed and 0 tests failed

In the below example, a ValidatingAdmissionPolicy and its corresponding ValidatingAdmissionPolicyBinding are tested against six resources. Two of these resources do not match the binding, two match the binding but violate the policy, and the remaining two match the binding and do not violate the policy.

Policy manifest (check-deployment-replicas.yaml):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
name: 'check-deployment-replicas'
spec:
matchConstraints:
resourceRules:
- apiGroups:
- apps
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- deployments
validations:
- expression: object.spec.replicas <= 2
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
name: 'check-deployment-replicas-binding'
spec:
policyName: 'check-deployment-replicas'
validationActions: [Deny]
matchResources:
namespaceSelector:
matchExpressions:
- key: environment
operator: In
values:
- staging
- production

Resource manifest (resource.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
name: testing-deployment-1
namespace: testing
labels:
app: busybox
spec:
replicas: 4
selector:
matchLabels:
app: busybox
template:
metadata:
labels:
app: busybox
spec:
containers:
- name: busybox
image: busybox:latest
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: testing-deployment-2
namespace: testing
labels:
app: busybox
spec:
replicas: 2
selector:
matchLabels:
app: busybox
template:
metadata:
labels:
app: busybox
spec:
containers:
- name: busybox
image: busybox:latest
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: staging-deployment-1
namespace: staging
labels:
app: nginx
spec:
replicas: 4
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:latest
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: staging-deployment-2
namespace: staging
labels:
app: nginx
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:latest
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: production-deployment-1
namespace: production
labels:
app: nginx
spec:
replicas: 4
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:latest
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: production-deployment-2
namespace: production
labels:
app: nginx
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:latest

The above resource manifest contains the following:

  1. Two Deployments named testing-deployment-1 and testing-deployment-2 in the testing namespace. The first Deployment has four replicas, while the second Deployment has two.

  2. Two Deployments named staging-deployment-1 and staging-deployment-2 in the staging namespace. The first Deployment has four replicas, while the second Deployment has two.

  3. Two Deployments named production-deployment-1 and production-deployment-2 in the production namespace. The first Deployment has four replicas, while the second Deployment has two.

Because the binding selects namespaces by their environment label and the test runs offline, a values manifest supplies the labels for each mock namespace.

Variables manifest (values.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
name: values
namespaceSelector:
- name: staging
labels:
environment: staging
- name: production
labels:
environment: production
- name: testing
labels:
environment: testing

Test manifest (kyverno-test.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Test
metadata:
name: kyverno-test.yaml
policies:
- check-deployment-replicas.yaml
resources:
- resource.yaml
variables: values.yaml
results:
- kind: Deployment
policy: check-deployment-replicas
isValidatingAdmissionPolicy: true
resources:
- testing-deployment-1
- testing-deployment-2
result: skip
- kind: Deployment
policy: check-deployment-replicas
isValidatingAdmissionPolicy: true
resources:
- staging-deployment-1
- production-deployment-1
result: fail
- kind: Deployment
policy: check-deployment-replicas
isValidatingAdmissionPolicy: true
resources:
- staging-deployment-2
- production-deployment-2
result: pass
Terminal window
$ kyverno test .
Loading test ( kyverno-test.yaml ) ...
Loading values/variables ...
Loading policies ...
Loading resources ...
Loading exceptions ...
Applying 1 policy to 6 resources ...
Checking results ...
│────│───────────────────────────│──────│────────────────────────────────────│────────│──────────│
│ ID │ POLICY                    │ RULE │ RESOURCE                           │ RESULT │ REASON   │
│────│───────────────────────────│──────│────────────────────────────────────│────────│──────────│
│ 1  │ check-deployment-replicas │      │ Deployment/testing-deployment-1    │ Pass   │ Excluded │
│ 2  │ check-deployment-replicas │      │ Deployment/testing-deployment-2    │ Pass   │ Excluded │
│ 3  │ check-deployment-replicas │      │ Deployment/staging-deployment-1    │ Pass   │ Ok       │
│ 4  │ check-deployment-replicas │      │ Deployment/production-deployment-1 │ Pass   │ Ok       │
│ 5  │ check-deployment-replicas │      │ Deployment/staging-deployment-2    │ Pass   │ Ok       │
│ 6  │ check-deployment-replicas │      │ Deployment/production-deployment-2 │ Pass   │ Ok       │
│────│───────────────────────────│──────│────────────────────────────────────│────────│──────────│
Test Summary: 6 tests passed and 0 tests failed

To test a MutatingAdmissionPolicy, the test manifest must include the isMutatingAdmissionPolicy field set to true in the test results array. In addition, the patchedResources field must be included to specify the resource that is expected to be patched by the policy.

This example tests a policy that adds a label to a ConfigMap.

Policy manifest (policy.yaml):

apiVersion: admissionregistration.k8s.io/v1alpha1
kind: MutatingAdmissionPolicy
metadata:
name: 'add-label-to-configmap'
spec:
matchConstraints:
resourceRules:
- apiGroups: ['']
apiVersions: ['v1']
operations: ['CREATE']
resources: ['configmaps']
failurePolicy: Fail
reinvocationPolicy: Never
mutations:
- patchType: 'ApplyConfiguration'
applyConfiguration:
expression: >
object.metadata.?labels["lfx-mentorship"].hasValue() ?
Object{} :
Object{ metadata: Object.metadata{ labels: {"lfx-mentorship": "kyverno"}}}

Resource manifest (resource.yaml):

apiVersion: v1
kind: ConfigMap
metadata:
name: game-demo
labels:
app: game
data:
player_initial_lives: '3'

Patched resource manifest (patched-resource.yaml):

This file defines what the ConfigMap should look like after the policy is applied.

apiVersion: v1
kind: ConfigMap
metadata:
name: game-demo
labels:
app: game
lfx-mentorship: kyverno
data:
player_initial_lives: '3'

Test manifest (kyverno-test.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Test
metadata:
name: test
policies:
- policy.yaml
resources:
- resource.yaml
results:
- isMutatingAdmissionPolicy: true
kind: ConfigMap
patchedResources: patched-resource.yaml
policy: add-label-to-configmap
resources:
- game-demo
result: pass
Terminal window
$ kyverno test .
Loading test ( kyverno-test.yaml ) ...
Loading values/variables ...
Loading policies ...
Loading resources ...
Loading exceptions ...
Applying 1 policy to 1 resource with 0 exceptions ...
Checking results ...
│────│────────────────────────│──────│────────────────────────────────│────────│────────│
│ ID │ POLICY                 │ RULE │ RESOURCE                       │ RESULT │ REASON │
│────│────────────────────────│──────│────────────────────────────────│────────│────────│
│ 1  │ add-label-to-configmap │      │ v1/ConfigMap/default/game-demo │ Pass   │ Ok     │
│────│────────────────────────│──────│────────────────────────────────│────────│────────│
Test Summary: 1 tests passed and 0 tests failed
Example 2: Mutation with a Binding and Namespace Selector

This example tests a policy that adds a label to ConfigMaps in specific namespaces.

Policy manifest (policy.yaml):

apiVersion: admissionregistration.k8s.io/v1alpha1
kind: MutatingAdmissionPolicy
metadata:
name: 'add-label-to-configmap'
spec:
matchConstraints:
resourceRules:
- apiGroups: ['']
apiVersions: ['v1']
operations: ['CREATE']
resources: ['configmaps']
failurePolicy: Fail
reinvocationPolicy: Never
mutations:
- patchType: 'ApplyConfiguration'
applyConfiguration:
expression: >
object.metadata.?labels["lfx-mentorship"].hasValue() ?
Object{} :
Object{ metadata: Object.metadata{ labels: {"lfx-mentorship": "kyverno"}}}
---
apiVersion: admissionregistration.k8s.io/v1alpha1
kind: MutatingAdmissionPolicyBinding
metadata:
name: 'add-label-to-configmap-binding'
spec:
policyName: 'add-label-to-configmap'
matchResources:
namespaceSelector:
matchExpressions:
- key: environment
operator: In
values:
- staging
- production

Resource manifest (resource.yaml):

apiVersion: v1
kind: ConfigMap
metadata:
name: matched-cm-1
namespace: staging
labels:
color: red
data:
player_initial_lives: '3'
---
apiVersion: v1
kind: ConfigMap
metadata:
name: matched-cm-2
namespace: production
labels:
color: red
data:
player_initial_lives: '3'
---
apiVersion: v1
kind: ConfigMap
metadata:
name: unmatched-cm
namespace: testing
labels:
color: blue
data:
player_initial_lives: '3'

Patched resource manifest (patched-resource.yaml):

apiVersion: v1
kind: ConfigMap
metadata:
name: matched-cm-1
namespace: staging
labels:
color: red
lfx-mentorship: kyverno
data:
player_initial_lives: '3'
---
apiVersion: v1
kind: ConfigMap
metadata:
name: matched-cm-2
namespace: production
labels:
color: red
lfx-mentorship: kyverno
data:
player_initial_lives: '3'

Variables manifest (values.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
name: values
namespaceSelector:
- labels:
environment: staging
name: staging
- labels:
environment: production
name: production
- labels:
environment: testing
name: testing

Test manifest (kyverno-test.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Test
metadata:
name: test
policies:
- policy.yaml
resources:
- resource.yaml
results:
- isMutatingAdmissionPolicy: true
kind: ConfigMap
patchedResources: patched-resource.yaml
policy: add-label-to-configmap
resources:
- matched-cm-1
- matched-cm-2
result: pass
variables: values.yaml
Terminal window
$ kyverno test .
Loading test ( kyverno-test.yaml ) ...
Loading values/variables ...
Loading policies ...
Loading resources ...
Loading exceptions ...
Applying 1 policy to 3 resources with 0 exceptions ...
Checking results ...
│────│────────────────────────│──────│──────────────────────────────────────│────────│────────│
│ ID │ POLICY                 │ RULE │ RESOURCE                             │ RESULT │ REASON │
│────│────────────────────────│──────│──────────────────────────────────────│────────│────────│
│ 1  │ add-label-to-configmap │      │ v1/ConfigMap/staging/matched-cm-1    │ Pass   │ Ok     │
│ 2  │ add-label-to-configmap │      │ v1/ConfigMap/production/matched-cm-2 │ Pass   │ Ok     │
│────│────────────────────────│──────│──────────────────────────────────────│────────│────────│
Test Summary: 2 tests passed and 0 tests failed

To test a ValidatingPolicy, the test manifest must include the isValidatingPolicy field set to true in the results[] array.

Below is an example of testing a ValidatingPolicy against two resources, one of which violates the policy.

Policy manifest (check-deployment-replicas.yaml):

apiVersion: policies.kyverno.io/v1alpha1
kind: ValidatingPolicy
metadata:
name: check-deployment-replicas
spec:
matchConstraints:
resourceRules:
- apiGroups: ['apps']
apiVersions: ['v1']
operations: ['CREATE', 'UPDATE']
resources: ['deployments']
validations:
- expression: 'object.spec.replicas <= 2'
message: 'Deployment replicas must be less than or equal to 2'

Resource manifest (deployments.yaml):

apiVersion: apps/v1
kind: Deployment
metadata:
name: good-deployment
labels:
app: nginx
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:latest
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: bad-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:latest

Test manifest (kyverno-test.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Test
metadata:
name: kyverno-test
policies:
- check-deployment-replicas.yaml
resources:
- deployments.yaml
results:
- isValidatingPolicy: true
kind: Deployment
policy: check-deployment-replicas
resources:
- bad-deployment
result: fail
- isValidatingPolicy: true
kind: Deployment
policy: check-deployment-replicas
resources:
- good-deployment
result: pass
Terminal window
$ kyverno test .
Loading test ( kyverno-test.yaml ) ...
Loading values/variables ...
Loading policies ...
Loading resources ...
Loading exceptions ...
Applying 1 policy to 2 resources with 0 exceptions ...
Checking results ...
│────│───────────────────────────│──────│────────────────────────────────────────────│────────│────────│
│ ID │ POLICY                    │ RULE │ RESOURCE                                   │ RESULT │ REASON │
│────│───────────────────────────│──────│────────────────────────────────────────────│────────│────────│
│ 1  │ check-deployment-replicas │      │ apps/v1/Deployment/default/bad-deployment  │ Pass   │ Ok     │
│ 2  │ check-deployment-replicas │      │ apps/v1/Deployment/default/good-deployment │ Pass   │ Ok     │
│────│───────────────────────────│──────│────────────────────────────────────────────│────────│────────│
Test Summary: 2 tests passed and 0 tests failed

Some policies need to make decisions based on the properties of the namespace where a resource is being created, such as its labels or name. The namespaceObject variable in CEL provides access to the full namespace object for this purpose.

When running kyverno test, the command operates in an offline mode and does not have access to a live cluster’s namespaces. To test policies that use namespaceObject, you must provide mock namespace definitions. This is done by creating a “values file” and referencing it in your kyverno-test.yaml.

Let’s start with a policy that disallows creating certain Deployments in the default namespace. It uses namespaceObject.metadata.name to check the name of the Namespace.

Policy manifest (disallow-default-deployment.yaml):

apiVersion: policies.kyverno.io/v1alpha1
kind: ValidatingPolicy
metadata:
name: check-deployment-namespace
spec:
matchConstraints:
resourceRules:
- apiGroups: ['apps']
apiVersions: ['v1']
operations: ['CREATE', 'UPDATE']
resources: ['deployments']
# This policy only applies to Deployments with the label `app: nginx`
objectSelector:
matchLabels:
app: nginx
validations:
# The validation logic checks the name of the Namespace object.
- expression: "namespaceObject.metadata.name != 'default'"
message: "Using 'default' namespace is not allowed for this application."

We define several Deployments to cover all test cases:

  1. A Deployment in the default namespace (should fail).
  2. A Deployment in a different namespace, staging (should pass).
  3. Two Deployments that do not have the app: nginx label (should be skipped).

Resource manifest (deployments.yaml):

# This deployment should FAIL because it's in the 'default' namespace.
apiVersion: apps/v1
kind: Deployment
metadata:
name: bad-deployment
namespace: default
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:latest
---
# This deployment should PASS because it's in the 'staging' namespace.
apiVersion: apps/v1
kind: Deployment
metadata:
name: good-deployment
namespace: staging
labels:
app: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:latest
---
# This deployment should be SKIPPED because it lacks the 'app: nginx' label.
apiVersion: apps/v1
kind: Deployment
metadata:
name: skipped-deployment-1
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: busybox
template:
metadata:
labels:
app: busybox
spec:
containers:
- name: busybox
image: busybox:latest
---
# This deployment should also be SKIPPED for the same reason.
apiVersion: apps/v1
kind: Deployment
metadata:
name: skipped-deployment-2
namespace: staging
spec:
replicas: 1
selector:
matchLabels:
app: busybox
template:
metadata:
labels:
app: busybox
spec:
containers:
- name: busybox
image: busybox:latest

This is the crucial step. We create a values file (e.g., values.yaml) to provide the mock Namespace objects. The kyverno test command will use these objects to populate the namespaceObject variable during the test run.

Values manifest (values.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Values
metadata:
name: values
namespaces:
- apiVersion: v1
kind: Namespace
metadata:
labels:
environment: staging
name: staging
- apiVersion: v1
kind: Namespace
metadata:
labels:
environment: default
name: default

Finally, we create the test manifest that ties everything together.

Test manifest (kyverno-test.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Test
metadata:
name: kyverno-test.yaml
policies:
- disallow-default-deployment.yaml
resources:
- deployments.yaml
results:
- isValidatingPolicy: true
kind: Deployment
policy: check-deployment-namespace
resources:
- bad-deployment
result: fail
- isValidatingPolicy: true
kind: Deployment
policy: check-deployment-namespace
resources:
- good-deployment
result: pass
- isValidatingPolicy: true
kind: Deployment
policy: check-deployment-namespace
resources:
- skipped-deployment-1
- skipped-deployment-2
result: skip
variables: values.yaml
Terminal window
$ kyverno test .
Loading test ( kyverno-test.yaml ) ...
Loading values/variables ...
Loading policies ...
Loading resources ...
Loading exceptions ...
Applying 1 policy to 4 resources with 0 exceptions ...
Checking results ...
│────│────────────────────────────│──────│─────────────────────────────────────────────────│────────│──────────│
│ ID │ POLICY                     │ RULE │ RESOURCE                                        │ RESULT │ REASON   │
│────│────────────────────────────│──────│─────────────────────────────────────────────────│────────│──────────│
│ 1  │ check-deployment-namespace │      │ apps/v1/Deployment/default/bad-deployment       │ Pass   │ Ok       │
│ 2  │ check-deployment-namespace │      │ apps/v1/Deployment/staging/good-deployment      │ Pass   │ Ok       │
│ 3  │ check-deployment-namespace │      │ apps/v1/Deployment/default/skipped-deployment-1 │ Pass   │ Excluded │
│ 4  │ check-deployment-namespace │      │ apps/v1/Deployment/staging/skipped-deployment-2 │ Pass   │ Excluded │
│────│────────────────────────────│──────│─────────────────────────────────────────────────│────────│──────────│
Test Summary: 4 tests passed and 0 tests failed

Policies often need to validate resources based on external data, such as a value stored in a ConfigMap. Kyverno’s custom CEL function resource.Get() allows policies to fetch these external resources from the cluster.

When using kyverno test for offline testing, you must provide this external resource as “context” so that the resource.Get() function can resolve successfully. This can be done by referencing a context file directly from your kyverno-test.yaml.

In this example, we will test a policy that validates a Pod’s name against a value stored in a ConfigMap.

First, let’s define a policy named check-pod-name-from-configmap. This policy uses resource.Get() to fetch a ConfigMap named policy-cm and then checks if the incoming Pod’s name matches the name key in the ConfigMap’s data.

Policy manifest (check-pod-name-from-configmap.yaml):

apiVersion: policies.kyverno.io/v1alpha1
kind: ValidatingPolicy
metadata:
name: check-pod-name-from-configmap
spec:
matchConstraints:
resourceRules:
- apiGroups: ['']
apiVersions: ['v1']
operations: ['CREATE', 'UPDATE']
resources: ['pods']
variables:
# Get the ConfigMap 'policy-cm' from the Pod's namespace.
- name: cm
expression: >-
resource.Get("v1", "configmaps", object.metadata.namespace, "policy-cm")
validations:
# The Pod is valid only if its name matches the value from the ConfigMap.
- expression: >-
object.metadata.name == variables.cm.data.name

Next, we create a context file that contains the mock ConfigMap policy-cm. The kyverno test command will load this file and make its contents available to the resource.Get() function during the test.

Context manifest (context.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Context
metadata:
name: test-context
spec:
resources:
- apiVersion: v1
kind: ConfigMap
metadata:
namespace: default
name: policy-cm
data:
# The 'name' key specifies that the only valid pod name is 'good-pod'.
name: good-pod

We define two Pod manifests: good-pod, whose name matches the value in our context, and bad-pod, whose name does not.

Resource manifest (pods.yaml):

# This Pod should PASS validation.
apiVersion: v1
kind: Pod
metadata:
name: good-pod
spec:
containers:
- name: nginx
image: nginx
---
# This Pod should FAIL validation.
apiVersion: v1
kind: Pod
metadata:
name: bad-pod
spec:
containers:
- name: nginx
image: nginx

Finally, we create the kyverno-test.yaml file. The key here is the context: context.yaml field, which links our mock ConfigMap to the test.

Test manifest (kyverno-test.yaml):

apiVersion: cli.kyverno.io/v1alpha1
kind: Test
metadata:
name: kyverno-test.yaml
policies:
- check-pod-name-from-configmap.yaml
resources:
- pods.yaml
results:
- isValidatingPolicy: true
kind: Pod
policy: check-pod-name-from-configmap
resources:
- bad-pod
result: fail
- isValidatingPolicy: true
kind: Pod
policy: check-pod-name-from-configmap
resources:
- good-pod
result: pass
context: context.yaml
Terminal window
$ kyverno test .
Loading test ( kyverno-test.yaml ) ...
Loading values/variables ...
Loading policies ...
Loading resources ...
Loading exceptions ...
Applying 1 policy to 2 resources with 0 exceptions ...
Checking results ...
│────│───────────────────────────────│──────│─────────────────────────│────────│────────│
│ ID │ POLICY                        │ RULE │ RESOURCE                │ RESULT │ REASON │
│────│───────────────────────────────│──────│─────────────────────────│────────│────────│
│ 1  │ check-pod-name-from-configmap │      │ v1/Pod/default/bad-pod  │ Pass   │ Ok     │
│ 2  │ check-pod-name-from-configmap │      │ v1/Pod/default/good-pod │ Pass   │ Ok     │
│────│───────────────────────────────│──────│─────────────────────────│────────│────────│
Test Summary: 2 tests passed and 0 tests failed

The Kyverno CLI has a jp subcommand which makes it possible to test not only the custom JMESPath filters endemic to Kyverno but also the full array of capabilities of JMESPath included in the upstream jp tool. Both the input document (JSON or YAML) and the expression can be passed via stdin or a file; the jp subcommand evaluates the JMESPath expression and prints the output.

Examples:

List available Kyverno custom JMESPath filters. Please refer to the JMESPath documentation page for extensive details on each custom filter. Note this does not show the built-in JMESPath filters available upstream, only the custom Kyverno filters.

Terminal window
$ kyverno jp function
Name: add
Signature: add(any, any) any
Note: does arithmetic addition of two specified values of numbers, quantities, and durations
Name: base64_decode
Signature: base64_decode(string) string
Note: decodes a base 64 string
Name: base64_encode
Signature: base64_encode(string) string
Note: encodes a regular, plaintext and unencoded string to base64
Name: compare
Signature: compare(string, string) number
Note: compares two strings lexicographically
<snip>

Test a custom JMESPath filter using stdin inputs.

Terminal window
$ echo '{"foo": "BAR"}' | kyverno jp query 'to_lower(foo)'
Reading from terminal input.
Enter input object and hit Ctrl+D.
# to_lower(foo)
"bar"

Test a custom JMESPath filter using an input JSON file. YAML files are also supported.

Terminal window
$ cat foo.json
{"bar": "this-is-a-dashed-string"}
$ kyverno jp query -i foo.json "split(bar, '-')"
# split(bar, '-')
[
"this",
"is",
"a",
"dashed",
"string"
]
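
As noted above, YAML input files work the same way. A sketch using an equivalent YAML document (the file foo.yaml here is an assumed counterpart to foo.json):

Terminal window
$ cat foo.yaml
bar: this-is-a-dashed-string
$ kyverno jp query -i foo.yaml "split(bar, '-')"
# split(bar, '-')
[
  "this",
  "is",
  "a",
  "dashed",
  "string"
]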

Test a custom JMESPath filter as well as an upstream JMESPath filter.

Terminal window
$ kyverno jp query -i foo.json "split(bar, '-') | length(@)"
# split(bar, '-') | length(@)
5

Test a custom JMESPath filter using an expression from a file.

Terminal window
$ cat add
add(`1`,`2`)
$ echo {} | kyverno jp query -q add
Reading from terminal input.
Enter input object and hit Ctrl+D.
# add(`1`,`2`)
3

Test upstream JMESPath functionality using an input JSON file and show cleaned output.

Terminal window
$ cat pod.json
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {
"name": "mypod",
"namespace": "foo"
},
"spec": {
"containers": [
{
"name": "busybox",
"image": "busybox"
}
]
}
}
$ kyverno jp query -i pod.json 'spec.containers[0].name' -u
# spec.containers[0].name
busybox

Parse a JMESPath expression and show the corresponding AST to see how it was interpreted.

Terminal window
$ kyverno jp parse 'request.object.metadata.name | truncate(@, `9`)'
# request.object.metadata.name | truncate(@, `9`)
ASTPipe {
  children: {
    ASTSubexpression {
      children: {
        ASTSubexpression {
          children: {
            ASTSubexpression {
              children: {
                ASTField {
                  value: "request"
                }
                ASTField {
                  value: "object"
                }
              }
            }
            ASTField {
              value: "metadata"
            }
          }
        }
        ASTField {
          value: "name"
        }
      }
    }
    ASTFunctionExpression {
      value: "truncate"
      children: {
        ASTCurrentNode {
        }
        ASTLiteral {
          value: 9
        }
      }
    }
  }
}

For more specific information on writing JMESPath for use in Kyverno, see the JMESPath page.

Kyverno 1.12 introduced support for assertion trees in the test command.

The purpose of assertion trees is to offer more flexibility than the traditional results syntax.

Assertion trees reside under the checks stanza as shown in the example below:

checks:
- match:
resource:
kind: Namespace
metadata:
name: hello-world-namespace
policy:
kind: ClusterPolicy
metadata:
name: sync-secret
rule:
name: sync-my-secret
assert:
status: pass
error:
(status != 'pass'): true

A check is made of the following parts:

  • A match statement to select the elements considered by a check. This match can act on the resource, the policy and/or the rule. It is not limited to matching by kind or name but can match on anything in the payload (labels, annotations, etc…).
  • An assert statement defining the conditions to verify on the matched elements.
  • An error statement (the opposite of an assert) defining the conditions that must NOT evaluate to true on the matched elements.

In the example above the check is matching Namespace elements named hello-world-namespace for the cluster policy named sync-secret and rule named sync-my-secret. For those elements the status is expected to be equal to pass and the expression (status != 'pass') is NOT expected to be true.

Implementation is based on Kyverno JSON - assertion trees. Please refer to the documentation for more details on the syntax.
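
Putting this together, a complete Test manifest using checks in place of results might look like the following sketch, which reuses the generate-policy example from earlier on this page (the file names, policy, rule, and expected status come from that example):

apiVersion: cli.kyverno.io/v1alpha1
kind: Test
metadata:
  name: deny-all-traffic
policies:
  - add_network_policy.yaml
resources:
  - resource.yaml
checks:
  - match:
      resource:
        kind: Namespace
        metadata:
          name: hello-world-namespace
      policy:
        kind: ClusterPolicy
        metadata:
          name: add-networkpolicy
      rule:
        name: default-deny
    assert:
      status: pass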

To select all results, all you need to do is to provide an empty match statement:

- match: {} # this will match everything
assert:
# ...
error:
# ...

To select results based on labels, specify those labels in the stanza where they apply:

- match:
resource:
metadata:
labels:
foo: bar
policy:
metadata:
labels:
bar: baz
assert:
# ...
error:
# ...
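
Filling in the placeholders, a complete label-based check might look like the following sketch (the labels are illustrative; the assert and error statements mirror the first example above):

- match:
    resource:
      metadata:
        labels:
          foo: bar
    policy:
      metadata:
        labels:
          bar: baz
  assert:
    status: pass
  error:
    (status != 'pass'): true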