Posted on 31 March 2023, updated on 1 June 2023.

Kube-downscaler is an open-source tool that allows users to define schedules for the automatic downscaling of pod resources in Kubernetes. This helps reduce infrastructure costs by reducing resource usage during off-peak hours.


In this article, we will take a detailed look at the features of kube-downscaler, its installation, and configuration, as well as its use cases and future prospects.

Features of kube-downscaler

Kube-downscaler is a powerful schedule-based tool for upscaling or downscaling applications in a Kubernetes cluster. In this section, we will explore some of the key features of this tool:

Compatibility with Kubernetes features or tools

Kube-downscaler supports horizontal pod autoscaling (HPA) and can work in conjunction with an HPA to ensure that the desired number of replicas is maintained for an application. This allows kube-downscaler to provide additional flexibility and fine-grained control over the scaling of applications in Kubernetes.
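As a sketch of how the two interact (the resource names here are made up for illustration), kube-downscaler can target a HorizontalPodAutoscaler directly: during the downtime window it lowers the HPA's minReplicas to the annotated value instead of fighting the HPA over the Deployment's replica count, and the HPA keeps handling load-based scaling within the new bounds.

```yaml
# Hypothetical HPA managed by kube-downscaler: on weekends, minReplicas
# is lowered to 1; the HPA still scales on CPU within the new bounds.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
  annotations:
    downscaler/downtime: "Sat-Sun 00:00-24:00 Europe/Berlin"
    downscaler/downtime-replicas: "1"
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```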

Karpenter and kube-downscaler are two tools that can be used in conjunction to provide a complete and powerful resource management solution for Kubernetes clusters. By using Karpenter and kube-downscaler together, Kubernetes clusters can benefit from both horizontal and vertical scaling. Downscaler allows reducing the number of pods, while Karpenter enables optimizing node utilization by consolidating pods onto fewer or different types of machines.

Automatic scaling of deployment replicas based on defined periods

Kube-downscaler can automatically scale deployment replicas based on predefined periods. This means that you can set up a schedule that will increase or decrease the number of replicas during certain times of day, week, or month.

For example, if you know that your application experiences high traffic during certain hours of the day, you can configure kube-downscaler to automatically scale up replicas during those times, and then scale them back down when traffic subsides.

This can allow scaling up in anticipation of a peak load instead of waiting for it to happen and be handled by HPA. This can help optimize resource usage and ensure that your application is always available and responsive.
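For example, with the downscaler/uptime annotation (the deployment name below is invented for illustration), a workload can be kept at its normal size during business hours and scaled down outside of that window:

```yaml
# Hypothetical deployment: kept at its normal replica count during the
# uptime window, scaled down outside of it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shop-frontend
  annotations:
    downscaler/uptime: "Mon-Fri 07:00-20:00 Europe/Berlin"
```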

That said, kube-downscaler is mainly used to scale down replicas and optimize the cost of your cluster; the scale-up is usually left to HPA.

Installation and Configuration of kube-downscaler

Explanation of the installation of kube-downscaler on a Kubernetes cluster

  1. Clone the kube-downscaler repository from Codeberg:
git clone https://codeberg.org/hjacobs/kube-downscaler.git
  2. Navigate to the kube-downscaler directory:
cd kube-downscaler
  3. Edit the deploy/kube-downscaler.yaml file to customize the configuration according to your specific needs. For example, you can adjust the time zone, schedules, and scaling rules.
  4. Apply the configuration to your Kubernetes cluster:
kubectl apply -f deploy/

This command will deploy the kube-downscaler controller and create a kube-downscaler Deployment.

You can verify that the kube-downscaler controller is running by checking the logs of the kube-downscaler Deployment:

kubectl logs -f deployment/kube-downscaler

After this installation, you’ll need to configure your deployments.

Configuring kube-downscaler according to specific user needs

Kube-downscaler provides customization of scaling schedules through the use of annotations on Kubernetes deployment objects.

The downscaler/downtime annotation on the deployment object can be used to specify a downtime period during which the deployment is scaled down.

The downscaler/downtime-replicas annotation can be used to set the number of replicas the deployment is reduced to during that period.

These annotations allow you to create a customized scaling schedule based on your specific business needs and resource utilization patterns.

By adjusting them, you can configure kube-downscaler to scale your deployments in a way that optimizes your application's availability and cost efficiency.

Here is a simple configuration for a deployment using kube-downscaler.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: random-deployment
  annotations:
    # Kube-downscaler
    downscaler/downtime: "Mon-Fri 00:00-07:00 Europe/Berlin"
    downscaler/downtime-replicas: "1"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: random
  template:
    metadata:
      labels:
        app: random
    spec:
      containers:
      - name: random-container
        image: random-image

With this configuration, from Monday to Friday between midnight and 7 am (in the Europe/Berlin time zone), the number of replicas will be reduced to 1.

kube-downscaler will automatically start scaling down pods based on the defined schedules.
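Conversely, if some workloads must never be touched by kube-downscaler (a critical system component, for instance), they can opt out explicitly with the exclude annotation:

```yaml
# Annotation fragment: this workload is ignored by kube-downscaler.
metadata:
  annotations:
    downscaler/exclude: "true"
```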

That's it! You now have kube-downscaler installed and running on your Kubernetes cluster.

Use Cases

The main use case of this tool is to reduce costs by optimizing the utilization of Kubernetes cluster resources. However, it can also be used to warm up the cluster and avoid relying too much on HPA.

While this is not its primary purpose, this combination offers an alternative solution to ensure high application availability while minimizing infrastructure costs.

Cost Reduction

One of the main benefits of kube-downscaler is its ability to reduce infrastructure costs by limiting resource usage during off-peak hours. By scaling resources down during periods of low usage, it cuts costs without impacting the availability of the applications running on the cluster.

Service Disruption Prevention

Another use case for kube-downscaler is to prevent service disruptions during peak usage periods. By defining a schedule for scaling up resources ahead of periods of high demand, kube-downscaler can pre-scale deployments and avoid HPA reaction latency, ensuring that applications remain available and responsive even during peak usage.


Kube-downscaler is a useful tool for managing resource usage in Kubernetes clusters, but it does have some limitations. For example, it only supports scaling based on a predefined schedule, which may not be suitable for all use cases. Additionally, it does not support metric-based auto-scaling, which means that users must manually adjust the scaling schedules to meet changing demands.

An alternative solution to consider is Keda. Keda is an open-source project that provides dynamic auto-scaling capabilities for Kubernetes applications. With Keda, users can set custom scaling rules based on a variety of metrics, such as queue length, CPU usage, or custom metrics.

This allows for more granular control over resource usage and ensures that applications are always properly scaled to meet demand.
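As a minimal sketch (the resource names and thresholds below are assumptions, not taken from any real setup), a KEDA ScaledObject binds a scaling rule to a target Deployment:

```yaml
# Hypothetical KEDA ScaledObject: scales the "worker" Deployment between
# 1 and 20 replicas based on average CPU utilization.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    name: worker
  minReplicaCount: 1
  maxReplicaCount: 20
  triggers:
  - type: cpu
    metricType: Utilization
    metadata:
      value: "70"
```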

Moreover, Keda is compatible with a wide range of Kubernetes applications, including stateful and stateless applications, and supports a variety of event sources, such as Azure Event Hubs, Kafka, and RabbitMQ.


Kube-downscaler is a powerful tool for managing resource usage in Kubernetes clusters. By defining scaling schedules, users can optimize resource usage in their clusters and reduce costs, while ensuring that applications remain available and responsive, even during periods of peak usage.

While kube-downscaler is a valuable tool for managing resource usage in Kubernetes clusters, it may have some limitations. If you require more granular control over resource scaling or need auto-scaling capabilities, it may be worth considering an alternative solution like Keda.