Kubernetes 1.24

Posted on 3 May 2022, updated on 8 August 2022.

Kubernetes 1.24 is the first release of 2022. Several features have reached general availability (GA), others are now in alpha or beta, and some have been deprecated or removed entirely. Which of these changes matter, and which don't? Here is a quick recap to get you up to speed on the most important updates to the popular container orchestrator.

Dockershim is now entirely removed

This change made a lot of noise when it was announced: Kubernetes no longer supports using Docker as a container runtime. Don't panic: this change has no impact at all on most users of Kubernetes.

The Cloud Native ecosystem uses the Container Runtime Interface (CRI) to avoid any dependency on a specific container runtime. In fact, most Kubernetes clusters use containerd or CRI-O rather than Docker.

If you use a managed Kubernetes service, this change should not impact you in any way. If you manage your own cluster and still use Docker as a container runtime, moving to containerd is fairly simple.
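As a sketch, on a kubeadm-managed node the switch mostly comes down to installing containerd and pointing kubelet at its CRI socket. The file path and socket path below are common defaults, not universal:

```
# /etc/default/kubelet (or /etc/sysconfig/kubelet, depending on the distribution)
# Point kubelet at containerd's CRI socket; the path is containerd's usual default
KUBELET_EXTRA_ARGS=--container-runtime-endpoint=unix:///run/containerd/containerd.sock
```

After restarting kubelet, kubectl get nodes -o wide should report containerd as the node's container runtime.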

New kubelet metric: OOM events

Every Kubernetes release adds new metrics to kubelet, and this latest release is no exception. One new metric in particular will be useful to Site Reliability Engineers: container_oom_events_total. This Prometheus metric allows cluster operators to count the out-of-memory (OOM) events that happen in each container running in the Kubernetes cluster.

Best practice is to set memory limits for each container. When software does not run as expected, it can hit this limit. When that happens, the Linux kernel's OOM killer terminates the faulty process. Finding out exactly what happened is not always easy. This new metric will definitely help.
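For example, a minimal Prometheus alerting rule built on this metric could look like the following sketch (the alert name, time window, and severity label are illustrative, not prescriptive):

```yaml
groups:
  - name: oom-events
    rules:
      - alert: ContainerOOMKilled
        # Fires when a container had at least one OOM event in the last 10 minutes
        expr: increase(container_oom_events_total[10m]) > 0
        labels:
          severity: warning
        annotations:
          summary: "OOM event in {{ $labels.namespace }}/{{ $labels.pod }}"
```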

Choose the type of LoadBalancer you want

In Kubernetes 1.24, the Service.Spec.LoadBalancerClass field reaches general availability.

Creating a Service of type LoadBalancer in a managed Kubernetes cluster provisions a load balancer: the cloud provider is responsible for creating it.

With this LoadBalancerClass field, users can specify what kind of load balancer they want. This will allow cloud providers to natively offer different types of load balancers. Up to this point, they have mostly relied on annotations or custom controllers, which can be clunky and counter-intuitive: annotations often vary from one version of a provider-specific controller to another. Hopefully, this will make networking easier for users of managed Kubernetes services.
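In practice, the field sits directly in the Service spec. Here is a minimal sketch; the class name is provider-specific, and service.k8s.aws/nlb (the class used by the AWS Load Balancer Controller) appears purely as an example:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  # Which load balancer implementation should handle this Service; provider-specific
  loadBalancerClass: service.k8s.aws/nlb
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```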

The Service.Spec.LoadBalancerIP field is deprecated

Speaking of annotations, the Kubernetes team has decided to deprecate the Service.Spec.LoadBalancerIP field. The reason is that it was under-specified and had different meanings across implementations. It also cannot support dual-stack services, which require both IPv4 and IPv6 addresses.
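For reference, this is the field in question; the address below is a placeholder:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: legacy-service
spec:
  type: LoadBalancer
  # Deprecated in Kubernetes 1.24; providers will expose annotations instead
  loadBalancerIP: 203.0.113.10
  ports:
    - port: 443
```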

Even though this field has been useful, its usage indeed varies widely from one cloud provider to the next. Some providers even expect the field to contain a hostname rather than an IP address (looking at you, EKS).

The official recommendation is for providers to rely on annotations rather than this field. Once cloud providers start including Kubernetes 1.24 in their offerings, we will get a better feel for these new annotations. Be ready.

No Secret by default for service account tokens

This change sounds scary, but it only impacts Kubernetes users who rely on the long-lived service account tokens that Kubernetes stores inside Secrets.

Up to and including Kubernetes 1.23, creating a service account in a cluster resulted in Kubernetes automatically creating a Secret with a token for that service account. This token never expires, which can be convenient but is also a security risk. Starting with Kubernetes 1.24, these Secrets are no longer created automatically.

You might ask yourself: isn't that token mounted into Pods that use that service account? Well, no. Not anymore. When a new Pod is created, kubelet uses the TokenRequest API to generate a token specifically for that Pod, which is mounted as a projected volume. The token expires after an hour or when the Pod is deleted, whichever comes first. Kubelet renews the token at regular intervals so the Pod always has a valid token mounted.
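Under the hood, this is equivalent to a serviceAccountToken projected volume, which you can also declare explicitly, for instance to control the token's lifetime. A minimal sketch (the names and the expiration value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: token-demo
spec:
  serviceAccountName: my-sa
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: sa-token
          mountPath: /var/run/secrets/tokens
  volumes:
    - name: sa-token
      projected:
        sources:
          - serviceAccountToken:
              path: token
              # kubelet rotates the token before this expires
              expirationSeconds: 3600
```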

If you absolutely need a never-expiring token stored in a Secret, you can still get one by creating the Secret yourself and adding a special annotation. Kubernetes will then add the token to the Secret for you.
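The annotation in question is kubernetes.io/service-account.name, combined with the kubernetes.io/service-account-token Secret type. A minimal example (the Secret and service account names are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-sa-token
  annotations:
    # The service account this long-lived token belongs to
    kubernetes.io/service-account.name: my-sa
# The token controller populates Secrets of this type with a token
type: kubernetes.io/service-account-token
```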

The RuntimeClass.Overhead field is now GA

There are many reasons to use a container runtime other than the default: running untrusted workloads or workloads that require GPUs, to name only a couple. Kubernetes natively supports defining custom runtime classes for these use cases.

These custom runtimes can have some overhead: CPU and memory consumed by the runtime itself rather than by the containers running inside it. The RuntimeClass.Overhead field, which reaches general availability in Kubernetes 1.24, allows cluster operators to specify this overhead so that Kubernetes can take it into account when making scheduling decisions.
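A RuntimeClass declaring its overhead might look like this sketch (the handler name and the overhead values are illustrative and depend on your runtime):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata
# Must match a runtime handler configured in your CRI implementation
handler: kata
overhead:
  # Per-Pod resources consumed by the runtime itself; the scheduler adds
  # these to the Pod's own requests when choosing a node
  podFixed:
    cpu: 250m
    memory: 120Mi
```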

Future Beta APIs will be off by default

This final change is not a new feature or a deprecation, but rather a change in how the Kubernetes team manages beta APIs.

As a reminder, APIs start in alpha, can then reach beta, and finally general availability. Alpha and beta APIs can be enabled or disabled with feature gates: boolean settings in Kubernetes components.
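Feature gates are passed as flags to the relevant component; the gate name below is a placeholder, not a real feature:

```
kube-apiserver --feature-gates=SomeBetaFeature=true
```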

So far, alpha APIs have been disabled by default and beta APIs enabled by default. Starting with Kubernetes 1.24, this changes: future beta APIs will be disabled by default. The relevant Kubernetes Enhancement Proposal explains it best:

From the Kubernetes release where this change is introduced, and onwards, beta APIs will not be enabled in clusters by default. Existing beta APIs and new versions of existing beta APIs, will continue to be enabled by default: if v1beta.some.group is currently enabled by default and we create v1beta2.some.group, v1beta2.some.group will still be enabled by default.
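Concretely, this means that operators who want one of these new beta APIs will have to enable it explicitly on the API server, along these lines (the group and version are taken from the example in the quote above):

```
kube-apiserver --runtime-config=some.group/v1beta2=true
```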

Other changes