Custom HPA scaling for Kubernetes with Prometheus and RabbitMQ metrics

HPA is commonly used with metrics like CPU or memory to scale your pods. You can learn more about autoscaling and Kubernetes here.

These metrics are not always the right KPI for your microservice: other signals, like the size of a queue or a metric exposed by another microservice, can matter more when determining the number of required pods.


Setup

You are going to need:

  • A Kubernetes cluster
  • Prometheus operator
  • Prometheus adapter
  • An exporter for the metric you need

Step by step

First of all, you need to be connected to a Kubernetes cluster running version 1.13 or later, with the autoscaling/v2beta2 API available. (This is important: without it, your HPA will not be able to read custom metrics.)
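You can quickly check which autoscaling API versions your cluster serves:

```shell
# v2beta2 must appear in the list for the HPA to consume custom metrics
kubectl api-versions | grep autoscaling
```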

Make sure your cluster has enough nodes to scale onto.

Install the Prometheus operator

The Prometheus operator deploys Prometheus on Kubernetes and lets it store both cluster metrics and your custom metrics!

To install Prometheus in our cluster we used the prometheus-operator chart from the helm/charts repository:

helm install prometheus-operator -f prometheus-operator-value.yaml stable/prometheus-operator

This deploys Prometheus, Grafana, Alertmanager, and more.

You can now access the Prometheus dashboard with the following command:

kubectl port-forward svc/prometheus-operator-prometheus 8002:9090

For Grafana (on a different local port, so both forwards can run at the same time):

kubectl port-forward svc/prometheus-operator-grafana 8003:80

 

Install the Prometheus adapter

To allow Kubernetes to read metrics from Prometheus, an adapter is needed.

We used the prometheus-adapter chart from the helm/charts repository to install it in our cluster:

helm install -f prometheus-adapter-value.yaml prometheus-adapter stable/prometheus-adapter
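The adapter needs to know where to reach Prometheus. A minimal prometheus-adapter-value.yaml could look like the following sketch (the service name assumes the prometheus-operator release installed above, in the default namespace; adjust both to your setup):

```yaml
# Hypothetical prometheus-adapter-value.yaml
prometheus:
  # In-cluster DNS name of the Prometheus service created by the operator
  url: http://prometheus-operator-prometheus.default.svc
  port: 9090
```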

Once installed, you can use the following command to see all the metrics now exposed to Kubernetes:

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/"

or, piped through jq for readability:

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/" | jq .
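Once a custom metric is registered, you can also read its current value through the same API. A sketch, assuming a metric named queue_messages_ready attached to a rabbitmq-exporter service in the default namespace (all three names are placeholders):

```shell
# Read the current value of one custom metric for one Kubernetes object
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/services/rabbitmq-exporter/queue_messages_ready" | jq .
```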

 

Install an exporter for your custom metric

To scrape data from our RabbitMQ deployment and make it available to Prometheus, we need to deploy an exporter pod that will do that for us.

We used the Prometheus exporter kbudde/rabbitmq_exporter.
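A common way to run it is as a sidecar container next to RabbitMQ. A sketch, assuming the RabbitMQ management API is reachable on localhost (the credentials below are placeholders; the exporter listens on 9419 by default):

```yaml
# Hypothetical container spec for the exporter, added to your RabbitMQ pod
- name: rabbitmq-exporter
  image: kbudde/rabbitmq-exporter:latest
  ports:
    - name: metrics
      containerPort: 9419
  env:
    - name: RABBIT_URL
      value: "http://localhost:15672"   # RabbitMQ management API
    - name: RABBIT_USER
      value: "guest"                    # use a Secret in production
    - name: RABBIT_PASSWORD
      value: "guest"
```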

 

Service monitor

A ServiceMonitor tells the Prometheus operator which services to scrape. Once it targets your exporter, you should be able to see data in the Kubernetes metrics API.
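A minimal ServiceMonitor for the exporter could look like this (the names and labels are assumptions; the release label must match your Prometheus operator's serviceMonitorSelector or the target will be ignored):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: rabbitmq-exporter
  labels:
    release: prometheus-operator   # picked up by the operator's serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: rabbitmq-exporter       # labels of the Service exposing the exporter
  endpoints:
    - port: metrics                # named port serving /metrics (9419 for rabbitmq_exporter)
      interval: 30s
```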

 

Prometheus rules

Now, to extract specific information from a metric, we need to query Prometheus. To do so, we create a PrometheusRule.

This configuration will expose a new metric for the HPA to consume.

Here it will be the number of messages in a specific RabbitMQ queue.

The syntax looks like this:
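A sketch of such a rule, assuming the queue is named my-queue and using the rabbitmq_queue_messages_ready metric exposed by the exporter (the rule name, recorded metric name, and labels are our assumptions):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: rabbitmq-queue-size
  labels:
    release: prometheus-operator        # must match the operator's ruleSelector
spec:
  groups:
    - name: rabbitmq.rules
      rules:
        # Record the number of ready messages in one queue under a new metric name
        - record: queue_messages_ready
          expr: sum(rabbitmq_queue_messages_ready{queue="my-queue"})
```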

 

HPA

Now you can configure your HPA (Horizontal Pod Autoscaler) with a custom metric.
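A sketch of an autoscaling/v2beta2 HPA consuming such a metric (the deployment, service, metric name, and threshold below are assumptions to adapt to your setup):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: queue-consumer
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-consumer           # the deployment reading from the queue
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Object
      object:
        metric:
          name: queue_messages_ready
        describedObject:
          apiVersion: v1
          kind: Service
          name: rabbitmq-exporter  # the object the metric is attached to
        target:
          type: Value
          value: "100"             # add pods when more than 100 messages are ready
```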

Done! You should now be able to see your metrics when describing your HPA (kubectl describe hpa).

 

Taking some time to figure out the right KPI for scaling will make the difference between successfully absorbing a surge in traffic and failing to.

Refining an autoscaling rule or HPA is a necessary step for any resilient Kubernetes architecture. The example here is the size of a RabbitMQ queue, but it requires the consumers to process messages at a roughly constant rate: if some messages take 1 hour to process while others take 2 seconds, the scaling will not be reliable.

You can make sure your infrastructure is robust by using chaos engineering.

Benjamin Sanvoisin


Benjamin is a Site Reliability Engineer (SRE) at Padok. He is also our DevSecOps & Azure Cloud specialist. Security and infrastructure hold no secrets for him.

What do you think? Leave your comments below!