Kubernetes Cluster Autoscaler - cluster-autoscaler-1.26.0



  • Parallel node drain for scale-down was implemented. It can be enabled by setting the --parallel-drain flag to true and is configurable via the --max-scale-down-parallelism and --max-drain-parallelism flags. It is still off by default and has not been sufficiently tested yet; treat it as experimental.
  • Priority expander no longer blocks scale up when its ConfigMap is missing or malformed.
  • NodeGroup.DeleteNodes calls during scale-down can be batched now, configurable via --node-deletion-batcher-interval (batching is turned off by default).
  • Nodes hosting kube-apiserver are no longer treated specially in scale-down logic. The special handling was removed after it was discovered that scale-down resource limiting was inconsistent with scale-up and had not actually worked correctly since CA 1.18.
  • GPU-accelerated Windows nodes can now be scaled down.
  • A new annotation, cluster-autoscaler.kubernetes.io/pod-scale-up-delay, has been added. It allows users to set new-pod-scale-up-delay per pod instead of relying on the autoscaler-wide configuration. The annotation value is used only if it is larger than the global new-pod-scale-up-delay.
  • Added the --node-delete-delay-after-taint flag, which controls how long Cluster Autoscaler waits after tainting a node before rechecking and deleting it.
  • Introduced a new flag, --enforce-node-group-min-size, to enforce the node group minimum size. For node groups with fewer nodes than the configured minimum, Cluster Autoscaler scales them up to that minimum.
  • Cluster Autoscaler no longer applies beta.kubernetes.io/os and beta.kubernetes.io/arch labels to template nodes when scaling a node group from 0 nodes. Pods selecting these labels will no longer trigger a scale-up from 0 nodes.
  • The max_nodes_count metric is now computed dynamically by summing the MaxSize of every node group.
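A minimal sketch of how the new flags and annotation above might be used. The Deployment fragment, flag values, pod name, and annotation value are illustrative assumptions, not recommended settings; the flag and annotation names are from the notes above.

```yaml
# Fragment of a cluster-autoscaler Deployment pod spec
# (values are illustrative assumptions):
spec:
  containers:
    - name: cluster-autoscaler
      command:
        - ./cluster-autoscaler
        - --parallel-drain=true                  # experimental, off by default
        - --max-scale-down-parallelism=10        # illustrative value
        - --max-drain-parallelism=5              # illustrative value
        - --node-deletion-batcher-interval=10s   # illustrative; batching off by default
        - --node-delete-delay-after-taint=5s     # illustrative value
        - --enforce-node-group-min-size=true
---
# Per-pod override of new-pod-scale-up-delay; the value is illustrative
# and is used only if larger than the global new-pod-scale-up-delay.
apiVersion: v1
kind: Pod
metadata:
  name: example
  annotations:
    cluster-autoscaler.kubernetes.io/pod-scale-up-delay: "5m"
```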


  • Support for extended resource definition in MIG templates was added. The resources can be configured via extended_resources in AUTOSCALER_ENV_VARS.
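As a sketch, the extended resources could be declared alongside the other AUTOSCALER_ENV_VARS entries in the MIG instance template metadata. The resource name, count, and the exact value syntax shown here are assumptions; only the AUTOSCALER_ENV_VARS key and the extended_resources entry name come from the note above.

```yaml
# Instance-template metadata fragment (syntax of the value is an assumption):
metadata:
  items:
    - key: AUTOSCALER_ENV_VARS
      value: "os=linux;extended_resources=example.com/dongle=2"
```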


  • Updated AWS instance types, adding support for the Mac Metal, r6a, and r6is families.
  • When scaling an AWS ASG from 0, the node group name label (eks.amazonaws.com/nodegroup) can now be used in a nodeSelector without any additional configuration.
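With the change above, a pod can target a specific ASG-backed node group by name even when that group is currently at 0 nodes. The pod name, node group name, and image below are illustrative assumptions; the label key is from the note above.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  nodeSelector:
    eks.amazonaws.com/nodegroup: my-nodegroup  # illustrative node group name
  containers:
    - name: app
      image: nginx  # illustrative image
```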


  • Node label keys containing underscores are now supported.


  • Pre-existing volumes no longer break autoscaling.


  • Support monitoring instance-pool work-requests for capacity/quota issues during scale-up.


  • k8s.gcr.io/autoscaling/cluster-autoscaler:v1.26.0
  • k8s.gcr.io/autoscaling/cluster-autoscaler-arm64:v1.26.0
  • k8s.gcr.io/autoscaling/cluster-autoscaler-amd64:v1.26.0


Dec. 21, 2022, 2:17 p.m.