RKE2 - v1.20.9+rke2r1

Attention: On August 5th, we discovered a bug in this release related to label processing. It will be fixed in our next release. We recommend not upgrading to this release and have marked it as a pre-release to prevent automatic upgrades to it.

This release updates Kubernetes to v1.20.9.
For more details on what's new, see the Kubernetes release notes.

Upgrade Notes

If you installed RKE2 from RPMs (default on RHEL-based distributions), you will need to either re-run the installer, or edit /etc/yum.repos.d/rancher-rke2.repo to point at the latest/1.20 or stable/1.20 channel (depending on how quickly you would like to receive new releases) in order to update RKE2 via yum.
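As an illustration of the repo edit described above, the channel appears in the `baseurl` of the repo definition. The sketch below is illustrative only: the exact section names, paths, and GPG key URL in your `/etc/yum.repos.d/rancher-rke2.repo` may differ depending on your distribution and how the installer was run.

```ini
# Illustrative sketch of a stable/1.20 channel entry in
# /etc/yum.repos.d/rancher-rke2.repo -- section name and paths are examples.
[rancher-rke2-1-20-stable]
name=Rancher RKE2 1.20 (stable)
baseurl=https://rpm.rancher.io/rke2/stable/1.20/centos/$releasever/$basearch
enabled=1
gpgcheck=1
gpgkey=https://rpm.rancher.io/public.key
```

To receive new releases sooner, substitute `latest` for `stable` in the `baseurl`, matching the latest/1.20 channel mentioned above.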

Important Note

If your server (control-plane) nodes were not started with the --token CLI flag or config file key, a randomized token was generated during initial cluster startup. This token is used both for joining new nodes to the cluster and for encrypting cluster bootstrap data within the datastore. Ensure that you retain a copy of this token, as it is required when restoring from backup.

You may retrieve the token value from any server already joined to the cluster:

cat /var/lib/rancher/rke2/server/token
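When joining an additional node, the retrieved token can be supplied through the RKE2 config file. A minimal sketch, in which the server address is a placeholder you must replace with your own first server's hostname or IP:

```yaml
# /etc/rancher/rke2/config.yaml on a node joining the cluster.
# "server.example.com" is a placeholder for your existing server's address;
# 9345 is the RKE2 supervisor port.
server: https://server.example.com:9345
token: <contents of /var/lib/rancher/rke2/server/token>
```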

Changes since v1.20.8+rke2r1

  • Upgrade Kubernetes to v1.20.9 (#1421)
  • Upgrade containerd to v1.4.8-k3s1 (#1398)
    Addresses GHSA-c72p-9xmj-rx3w
  • Bootstrap data is now reliably encrypted with the cluster token (#1429)
    Addresses GHSA-hvj9-vfxp-c3cf
  • The rke2-kube-proxy chart has been deprecated; kube-proxy now runs as a static pod (#1205)
    With this change, the --kube-proxy-arg flag is now supported in RKE2. RKE2 will automatically disable the kube-proxy static pod and retain the legacy rke2-kube-proxy chart if a rke2-kube-proxy HelmChartConfig resource is detected.
  • The RKE2 cloud controller has been moved into a static pod to improve resilience under high datastore latency (#1219)
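As a sketch of how kube-proxy arguments could be passed via the config file under the change above, see the fragment below. The specific argument shown is only an illustrative example, not a recommended setting:

```yaml
# /etc/rancher/rke2/config.yaml on a server node.
# metrics-bind-address is an example argument; any kube-proxy flag
# (without the leading dashes) can be listed here.
kube-proxy-arg:
  - "metrics-bind-address=0.0.0.0:10249"
```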

Packaged Component Versions

| Component | Version |
| --------- | ------- |
| Kubernetes | v1.20.9 |
| Etcd | v3.4.13-k3s1 |
| Containerd | v1.4.8-k3s1 |
| Runc | v1.0.0-rc95 |
| CNI Plugins | v0.8.7 |
| Flannel | v0.13.0-rancher1 |
| Calico | v3.13.3 |
| Metrics-server | v0.3.6 |
| CoreDNS | v1.6.9 |
| Ingress-Nginx | v1.36.3 |
| Helm-controller | v0.9.2 |

Known Issues

  • #1447 - When restoring RKE2 from backup to a new node, you should ensure that all pods are stopped following the initial restore (for example, with the bundled rke2-killall.sh script):
curl -sfL https://get.rke2.io | sudo INSTALL_RKE2_VERSION=v1.20.9+rke2r1 sh -
rke2 server \
  --cluster-reset \
  --cluster-reset-restore-path=<PATH-TO-SNAPSHOT> \
  --token <token used in the original cluster>
systemctl enable rke2-server
systemctl start rke2-server

Helpful Links

As always, we welcome and appreciate feedback from our community of users. Please feel free to:
- Open issues here
- Join our Slack channel
- Check out our documentation for guidance on how to get started.


July 22, 2021, 8:36 p.m.