Kubernetes 1.23

Kubernetes 1.23 is the last release of 2021. Several features have reached general availability (GA) while others are now in Alpha or Beta. Here is a quick recap to get you up to speed on the most important updates to the popular container orchestrator.

Horizontal Pod Autoscaler v2 is now GA

Kubernetes has built-in features to scale your applications. The most commonly used is the Horizontal Pod Autoscaler, or HPA for short. The HPA can add or remove Pods from your cluster to handle peaks in traffic and minimize resource waste.

The first version of the Horizontal Pod Autoscaler could only scale your application based on your Pods' CPU or memory usage. The newly stable second version can also act on custom metrics, like queue sizes.

HPA v2 has been in Beta for over a year now and has proven very useful. Padok uses it actively in multiple production clusters. In the 1.23 release, the v2 API finally reaches General Availability! This means the object's structure is final and will see no breaking changes. If you have been waiting for these stability guarantees to adopt v2's new features, now is the time.
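As an illustration, here is a minimal HPA manifest using the now-stable autoscaling/v2 API. The Deployment name and the custom metric are hypothetical, and a metrics adapter such as prometheus-adapter would have to expose the metric in your cluster:

```yaml
# A minimal HPA using the stable autoscaling/v2 API.
# "worker" and "queue_messages_ready" are placeholder names.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: queue_messages_ready  # hypothetical custom metric
        target:
          type: AverageValue
          averageValue: "30"
```

With this configuration, the HPA keeps adding Pods until each one handles 30 or fewer ready messages on average.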

Clean up finished Jobs with a TTL

Running short-lived tasks like database backups in your Kubernetes cluster is a breeze with Jobs. Once a Job is finished — either because it ran to completion or failed — it sticks around so that you can see its final status and read any logs it produced. If you run many Jobs, these can accumulate even though often only the most recent Jobs are of any interest.

Kubernetes 1.23 stabilizes the ttlSecondsAfterFinished field in each Job's spec. Setting this field tells the built-in TTL-after-finished controller to delete the Job once it has been finished for the given number of seconds. No need to schedule cleanup of your Jobs anymore.
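For example, this hypothetical backup Job will be deleted automatically one hour after it finishes (the Job name and container image are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-backup  # hypothetical Job name
spec:
  ttlSecondsAfterFinished: 3600  # delete this Job 1 hour after it finishes
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: backup
          image: backup-tool:latest  # hypothetical image
```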

You can even use a custom mutating admission webhook to set a different TTL based on the Job's status. Maybe successful Jobs don't need to stick around, but failed ones should stay for future troubleshooting. If you want to implement admission webhooks easily, I recommend having a look at Kyverno. Padok uses it in production regularly.
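As a sketch of the simpler case, this Kyverno ClusterPolicy adds a default TTL to every Job that does not already set one. The policy name and TTL value are arbitrary, and varying the TTL by status, as described above, would need a slightly more involved policy:

```yaml
# Hypothetical Kyverno policy: default ttlSecondsAfterFinished on all Jobs.
# The +() anchor only adds the field if it is not already set.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: default-job-ttl
spec:
  rules:
    - name: add-default-ttl
      match:
        any:
          - resources:
              kinds:
                - Job
      mutate:
        patchStrategicMerge:
          spec:
            +(ttlSecondsAfterFinished): 300
```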

IPv4/IPv6 dual-stack is now GA

Every node and Pod in your Kubernetes cluster needs a unique IP address. The IPv4 address space is too small for some use cases, so some companies have started using IPv6. Support for using both IPv4 and IPv6 at the same time in a Kubernetes cluster has been around for a while. It has even been enabled by default since Kubernetes 1.21.

In the 1.23 release, support for an IPv4/IPv6 dual-stack reaches General Availability. The feature flag that allowed cluster administrators to disable the feature is gone. All Kubernetes clusters with version 1.23 and above will support using both IPv4 and IPv6 at the same time.

To use this feature, your nodes must have both IPv4 and IPv6 network interfaces, your cluster must use a dual-stack-capable CNI plugin like Calico, and your Services must opt in to both address families with the ipFamilyPolicy field.
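A dual-stack Service then looks something like this; the Service name, selector, and port are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service  # hypothetical Service name
spec:
  ipFamilyPolicy: PreferDualStack  # use both families when the cluster allows it
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: my-app  # hypothetical label selector
  ports:
    - port: 80
```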

PodSecurity is now in Beta

You've probably heard that Pod Security Policies, or PSPs for short, were deprecated in Kubernetes 1.21 and will be removed entirely in the 1.25 release coming in 2022. PSPs have significant usability problems and are overall very confusing. Attempting to build secure systems on top of confusing primitives is a recipe for disaster.

The Kubernetes community opted for a different security model based on validating admission webhooks. The new Pod Security Admission functionality follows this same approach, implemented directly in the Kubernetes API. The feature reaches Beta in the 1.23 release and is now enabled by default.

Security is a central concern for any production cluster. You can now use Kubernetes' built-in functionality to prevent Pods from ever gaining dangerous capabilities. This will not stop us from adding additional layers of protection to our clusters, but more built-in security is always a good thing.
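To try it out, you label a namespace with the pod-security.kubernetes.io labels. For instance, this hypothetical namespace rejects any Pod that does not meet the restricted Pod Security Standard, pinned to the 1.23 version of that standard:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: production  # hypothetical namespace
  labels:
    # Reject Pods that violate the "restricted" Pod Security Standard.
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: v1.23
    # Also warn clients when a Pod would violate the standard.
    pod-security.kubernetes.io/warn: restricted
```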

Ephemeral Containers are in Beta

The kubectl debug command can create a new container and attach it to another container in a running Pod. The new container can access processes running in the pre-existing container, which is particularly useful when you want to debug a running process.

To do this, kubectl uses Ephemeral Containers. This feature reaches Beta in the 1.23 release, which means the feature is now enabled by default. The kubectl debug command comes as a welcome complement to kubectl exec for debugging running applications.
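Assuming a Pod named my-app with a container called app (both names are placeholders), the following command starts an ephemeral busybox container that shares the target container's process namespace:

```shell
# Attach an interactive ephemeral debugging container to a running Pod.
# "my-app" and "app" are hypothetical names.
kubectl debug -it my-app --image=busybox --target=app
```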

CRD validation is in Alpha

If you have written a Kubernetes operator to manage custom resources in your cluster's API, then you know that validating these resources can be a pain. You need to implement a custom admission webhook that validates everything. Wouldn't it be better if Kubernetes could do all that validation for you?

This is where CRD validation expressions come in. The feature reaches Alpha in the 1.23 release; enable the CustomResourceValidationExpressions feature gate to use it. You can then embed validation rules, written in the Common Expression Language (CEL), directly in your CRD's OpenAPI schema. This definitely makes writing custom Kubernetes controllers easier.
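Here is a trimmed sketch of what this could look like, with the feature gate enabled. The resource kind and its fields are made up for the example:

```yaml
# Excerpt of a hypothetical CRD using the new CEL validation rules.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: scalers.example.com
spec:
  group: example.com
  names:
    kind: Scaler
    plural: scalers
    singular: scaler
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              # The API server evaluates these CEL rules on every write,
              # so no custom admission webhook is needed.
              x-kubernetes-validations:
                - rule: "self.minReplicas <= self.maxReplicas"
                  message: "minReplicas cannot be larger than maxReplicas"
              properties:
                minReplicas:
                  type: integer
                maxReplicas:
                  type: integer
```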

Other improvements

That's it for the most important changes in the latest Kubernetes release, which also ships many smaller updates.

For more details, have a look at the official release announcement.