Scaling Kyverno

Scaling considerations for a Kyverno installation.


Kyverno supports scaling in multiple dimensions, both vertically and horizontally. It is important to understand when to scale, how to scale, and what effects that scaling will have on operation. See the sections below to understand these topics better.

Because Kyverno is an admission controller with many capabilities, and because Kubernetes clusters vary widely in environment type, size, and composition, the amount of processing Kyverno performs can vary greatly. Sizing a Kyverno installation based solely upon Node or Pod count is therefore often not an accurate way to predict the resources it will require.

For example, a large production cluster hosting 60,000 Pods but with no installed Kyverno policies that match on Pod places no Pod-related load on Kyverno. Because Kyverno dynamically manages its webhooks according to the policies installed in the cluster, when no policies match on Pod the API server sends no Pod-related information to Kyverno, reducing its processing load accordingly.

However, any policy which matches on a wildcard ("*") forces Kyverno to process every operation (CREATE, UPDATE, DELETE, and CONNECT) on every resource in the cluster. Even if the policy logic itself is simple, a single policy written in such a manner and installed in a large cluster can and will have a significant impact on the resources Kyverno requires.
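To make this concrete, the following is a minimal sketch of such a policy (the name and validation logic are illustrative only). Even though the check itself is trivial, the wildcard in `kinds` causes webhooks to be configured for every resource type, so every admission request in the cluster is forwarded to Kyverno:

```yaml
# Illustrative only: a trivial policy whose wildcard match forces Kyverno
# to receive every admission request in the cluster.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: match-everything   # hypothetical name
spec:
  validationFailureAction: Audit
  rules:
    - name: require-app-label
      match:
        any:
          - resources:
              kinds:
                - "*"       # matches ALL kinds; avoid in large clusters
      validate:
        message: "The label `app` is required."
        pattern:
          metadata:
            labels:
              app: "?*"
```

Scoping `kinds` to only the resources a rule actually needs avoids this amplification entirely.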

Vertical Scale

Vertical scaling refers to increasing the resources allocated to existing Pods, which amounts to resource requests and limits.

We recommend conducting tests in your own environment to determine real-world utilization in order to best set resource requests and limits, but as a best practice we also recommend not setting CPU limits.
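As a starting point, the container resources for a Kyverno controller might look like the sketch below. The values are examples only (they mirror the baseline used in the scale tests later on this page), not recommendations; note the deliberate absence of a CPU limit:

```yaml
# Illustrative container resources for a Kyverno controller.
# Derive real numbers from testing in your own environment.
resources:
  requests:
    cpu: 100m        # guaranteed CPU; used by the scheduler for placement
    memory: 128Mi
  limits:
    memory: 384Mi    # cap memory to protect the Node
    # No CPU limit: avoids throttling latency-sensitive admission requests.
```

Omitting the CPU limit allows the controller to burst during admission spikes instead of being throttled, while the memory limit still bounds worst-case consumption.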

Horizontal Scale

Horizontal scaling refers to increasing the number of replicas of a given controller. Kyverno supports multiple replicas for each of its controllers, but the effect of multiple replicas is handled differently according to the controller. See the high availability section for more details.
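When installing via the official Helm chart, replica counts can be set per controller. The value paths below are a sketch assuming the current chart layout; verify them against the chart's values.yaml for your version:

```yaml
# Hypothetical Helm values sketch: per-controller replica counts.
# Verify key names against the Kyverno chart's values.yaml.
admissionController:
  replicas: 3        # commonly 3 or more for high availability
backgroundController:
  replicas: 2
cleanupController:
  replicas: 2
reportsController:
  replicas: 2
```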

Scale Testing

The following tables show Kyverno performance test results for the admission and reports controllers. The admission controller table shows the resource consumption (memory and CPU) and latency as a result of increased AdmissionReviews per Second (ARPS) and how this is influenced by the queries per second (QPS) and burst settings.

The reports controller table shows the impact on policy report count and size, including the various intermediary resources. Also shown are the resource consumption figures at a scale of up to 100,000 Pods.

In both tables, the testing was performed using K3d on an Ubuntu 20.04 system with an AMD EPYC 7502P 32-core processor @ 2.5GHz (max 3.35GHz) and 256GB of RAM.

For additional specifics on these tests along with a set of instructions which can be used to reproduce the environment, see the developer documentation here.

Admission Controller

| replicas | # policies | Rule Type | Mode | Subject | memory request / limit | CPU request | ARPS | Latency (avg, ms) | Memory (max) | CPU (max) | admission reports | bgscan reports | policy reports | reports controller memory (max) | reports controller CPU (max) | # nodes | # pods | QPS/burst |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 3 | 17 | Validate | Enforce | Pods | 128Mi / 384Mi | 100m | 14.92 | 44 | 150.60Mi | 2.16 | 1000 | 1368 | 88 | 604.49Mi | 8.51 | 300 | 1k | 15/15 |
| 3 | 17 | Validate | Enforce | Pods | 128Mi / 384Mi | 100m | 43.47 | 32 | 169Mi | 5.55 | 5000 | 5369 | 164 | 781.25Mi | 8.22 | 300 | 5k | 50/50 |
| 3 | 17 | Validate | Enforce | Pods | 128Mi / 384Mi | 100m | 81.97 | 78 | 215.64Mi | 10.37 | 5000 | 5369 | 164 | 702.15Mi | 4 | 300 | 5k | 100/100 |
| 3 | 17 | Validate | Enforce | Pods | 128Mi / 512Mi | 100m | 83.88 | 129 | 267.29Mi | 8.75 | 4552 | 4907 | 146 | 598.70Mi | 7.88 | 300 | 4552/5000 | 150/150 |
| 3 | 17 | Validate | Enforce | Pods | 128Mi / 512Mi | 100m | 108.7 | 151 | 243.10Mi | 15.34 | 2139 | 2630 | 124 | 375.98Mi | 7.51 | 300 | 2262/5000 | 200/200 |

Reports Controller

| # validate policies | # pods | memory request / limit | memory (max) | CPU request | CPU (max) | periodic scan interval / workers | total etcd size | policyreports count | admission reports count | background reports count | QPS/burst | # nodes | admission controller memory request/limit |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 17 PSS policies | 1000 | 64Mi / 4Gi | 240504832 (229.36Mi) | 100m | 6.28 | 30 mins / 2 | 43.54Mi | 88 | 1000 | 1369 | 5/10 | 300 | 128Mi/384Mi |
| 17 PSS policies | 5000 | 64Mi / 4Gi | 823582720 (785.43Mi) | 100m | 8 | 30 mins / 2 | 145.33Mi | 164 | 5000 | 5369 | 50/50 | 300 | 128Mi/384Mi |
| 17 PSS policies | 10000 | 64Mi / 4Gi | 1381728256 (1.32Gi) | 100m | 8.51 | 30 mins / 2 | 251.48Mi | 258 | 10000 | 10369 | 50/50 | 300 | 128Mi/384Mi |
| 17 PSS policies | 10000 | 64Mi / 4Gi | 1700921344 (1.62Gi) | 100m | 8.44 | 1h / 2 | 251.48Mi | 258 | 10000 | 10369 | 50/50 | 300 | 128Mi/384Mi |
| 17 PSS policies | 19924 / 20000 | 64Mi / 4Gi | 2693844992 (2.51Gi) | 100m | 9.62 | 1h / 2 | 470.42Mi | 448 | 19885 | 20289 | 50/50 | 300 | 128Mi/384Mi |
| 17 PSS policies | 100940 | 64Mi / 20Gi | 6866862080 (6.40Gi) | 100m | 5.55 | 1h / 2 | | 1356 | 100587 | 11441 | 50/50 | 1000 | 128Mi/384Mi (OOM) |
| 17 PSS policies | 53456 | 64Mi / 10Gi | 1.89Gi | 100m | 8.12 | 1h / 2 | | 1077 | 52893 | 22742 | 50/50 | 500 | 128Mi/1Gi |
| 17 PSS policies | 53457 | 64Mi / 10Gi | 2.84Gi | 100m | 7.39 | 2h / 2 | | 1077 | 52893 | 33303 | 50/50 | 500 | 128Mi/1Gi |
| 17 PSS policies | 53457 | 64Mi / 10Gi | 2.55Gi | 100m | 7.66 | 3h / 2 | 1.10Gi | 1077 | 52893 | 35520 | 50/50 | 500 | 128Mi/1Gi |
| 17 PSS policies | 83716 | 64Mi / 10Gi | | 100m | | 3h / 2 | | 1510/1305 | 82868 | 33768 | 50/50 | 800 | 128Mi/1Gi |
| 17 PSS policies | 80856 | 64Mi / 10Gi | 2.20Gi | 100m | 19.13 | 2h / 10 | 2.24Gi | 1573 | n/a | 80891 | 50/50 | 818 | 128Mi/384Mi |
| 17 PSS policies | 100392 | 64Mi / 10Gi | 4.83Gi | 100m | 23.14 | 2h / 10 | 2.38Gi | 1873 | 100033 | 73728 | 50/50 | 960 | 128Mi/512Mi |

AdmissionReview Reference

API requests, operations, and activities which match corresponding Kyverno rules result in an AdmissionReview request being sent to admission controllers like Kyverno. The number and frequency of these requests can vary greatly depending on the amount and type of activity in the cluster. The following table gives a sense of the minimum number of AdmissionReview requests that common operations may produce. These figures are minimums only; in practice the final count will almost certainly be greater, depending on factors such as finalizers and other controllers in the cluster.

| Operation | Resource | Details | Minimum AdmissionReviews |
|---|---|---|---|
| UPDATE | Deployment | Change image | 8 |
| UPDATE | Deployment | Change image | 13 |
| CREATE | Job | restartPolicy=Never, backoffLimit=4 | 3 |
| CREATE | CronJob | schedule="*/1 * * * *" | 4 (3 per invocation) |
| DELETE | CronJob | schedule="*/1 * * * *", 2 completed | 9 |

These figures were captured using K3d v5.4.9 on Kubernetes v1.26.2 and Kyverno 1.10.0-alpha.2 with a 3-replica admission controller. When testing against KinD, there may be one less DELETE AdmissionReview for Pod-related operations.

Last modified May 30, 2023 at 11:31 AM PST: Add scale testing results (#877) (8ff14e5)