Posted on 10 May 2021, updated on 19 December 2023.

Kubernetes is a great tool for deploying resilient and scalable applications. However, container security remains a major issue for Kubernetes. Even if your pods are well secured, there is no guarantee that your neighbor's pods will not be exploited to attack yours. Thankfully, there is a way to prevent this: use Firecracker as your Kubernetes container runtime.

Why should you use Firecracker?

As a container shares its kernel with its host, the host has access to a lot of information about the containers it runs, such as their network traffic and the list of processes they run. If you want to learn more about containers, check the article about what a container is.

So if a container can break through the security layer that prevents it from accessing host processes (a container escape, usually achieved through privilege escalation), there is nothing to stop it from accessing information from other containers, potentially yours.

How can we prevent that? The right answer is to use virtual machines. Indeed, they do not share the host's kernel but boot their very own OS, isolating them from the rest of the containers or virtual machines.

But how can you keep the features that made Kubernetes as popular as it is today? It uses containers, not virtual machines! And even if it could use virtual machines, they are known to be slow to start and resource-hungry. If we used QEMU, for instance, we would need more powerful machines and ultimately pay more for almost the same service. That is where Firecracker's micro-VMs come in.

Firecracker is a way to run virtual machines, but it was designed from the start to back container workloads, so it uses very few resources by design. It is incredibly lightweight (you can run up to 4,000 micro-VMs on a single EC2 i3.metal instance!) and blazing fast to launch. This gives it an edge over competitors such as gVisor, which has proved to be quite slow in use. Besides, Firecracker comes with the security advantages of virtual machines we listed earlier. And to drive the point home, it offers almost all the features that are missing from other projects like Nabla containers. All of that's great! But how do we deploy a Kubernetes cluster using Firecracker?

How to deploy Kubernetes with Firecracker?


To install your Kubernetes cluster with Firecracker as its container runtime, we are going to need a few things:

  • At least one machine, physical or virtual, running a Debian-like OS.
  • A dedicated partition on this machine to store the micro-VMs' volumes.
  • Hardware virtualization support on this machine (you can check with virsh).
  • A lot of courage and determination.
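A quick way to check the virtualization prerequisite is to look for the CPU flags directly, or to use libvirt's validation tool if you have the client tools installed. A minimal sketch:

```shell
# Returns a non-zero count if the CPU exposes Intel VT-x (vmx) or AMD-V (svm)
egrep -c '(vmx|svm)' /proc/cpuinfo

# Or, with the libvirt client tools installed, run the full host validation
virt-host-validate qemu
```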

Architecture schema

Here is what our infrastructure will look like by the end of this article:


For the sake of simplicity, we will use a single node Kubernetes deployed with kubeadm. 

We will use CRI-O as the runtime because CRI-O acts as a runtime selector: it lets us install multiple runtimes and choose which one we want for each pod. And to be fair, it's nice to have something as cold as CRI-O when we play with Fire. Moreover, it is specifically designed to implement the Open Container Initiative (OCI) runtime specification for Kubernetes, so we're unlocking its full potential here.

Then, we will use Kata Containers, an Open Infrastructure Foundation (formerly OpenStack Foundation) project, and more specifically its kata-fc runtime, which lets us create containers running inside Firecracker micro-VMs.

Install and configure CRI-O

The first step is to install CRI-O. As it usually doesn't support devicemapper out of the box, which is the only storage driver supported by Firecracker, we will have to recompile CRI-O. CRI-O is written in Go and compiles quickly (you sadly won't have much time for a chair-sword duel while it's compiling). You can do so by using the following script:
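Here is a sketch of that build, assuming a Debian-like host; the package names, the `v1.20.2` tag, and the exact set of build tags are assumptions to adapt to your distro and Kubernetes version (CRI-O's default `BUILDTAGS` exclude the devicemapper graph driver, so we rebuild with a tag set that doesn't exclude it):

```shell
# Build dependencies (Debian/Ubuntu package names; adapt to your distro)
sudo apt-get update && sudo apt-get install -y \
  golang-go make git pkg-config runc conntrack \
  libdevmapper-dev libgpgme-dev libseccomp-dev libselinux1-dev

# Fetch the CRI-O sources and pin a tag matching your Kubernetes version
git clone https://github.com/cri-o/cri-o.git
cd cri-o
git checkout v1.20.2   # example tag, adjust to your cluster version

# The default BUILDTAGS contain exclude_graphdriver_devicemapper;
# build with a tag set that leaves devicemapper support in
make BUILDTAGS="seccomp selinux containers_image_openpgp"
sudo make install
```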

Once this is done, we need to edit the CRI-O configuration to make sure we use devicemapper as its preferred storage.

Edit /etc/crio/crio.conf and add the following lines:
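Here is a sketch of the storage settings to add, appended via a heredoc for convenience; `/dev/sdb1` and the `dm.*` thin-pool values are assumptions, so point them at the partition you set aside earlier and tune them to its size:

```shell
# Assumes /dev/sdb1 is the spare partition reserved for micro-VM volumes
sudo tee -a /etc/crio/crio.conf > /dev/null <<'EOF'
[crio]
storage_driver = "devicemapper"
storage_option = [
  "dm.directlvm_device=/dev/sdb1",
  "dm.thinp_percent=95",
  "dm.thinp_metapercent=1",
  "dm.thinp_autoextend_threshold=80",
  "dm.thinp_autoextend_percent=20",
  "dm.directlvm_device_force=true"
]
EOF
```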

Then restart CRI-O by using the following commands:
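Something along these lines, assuming CRI-O was installed with a systemd unit:

```shell
sudo systemctl daemon-reload
sudo systemctl restart crio
sudo systemctl status crio --no-pager   # check it came back up cleanly
```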

Install Kubernetes with kubeadm and configure it for CRI-O

We'll cover this section quickly. For more details, check out our guide on how to set up a Kubernetes cluster with kubeadm.

Run the following commands in your shell:
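A sketch of the standard kubeadm installation; the repository URL and the `v1.28` channel are assumptions (the historic apt.kubernetes.io repository has since moved to pkgs.k8s.io), so adapt them to the Kubernetes version you target:

```shell
sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl gpg

# Add the Kubernetes apt repository and its signing key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key \
  | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /" \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Install the Kubernetes tooling and pin its version
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
```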

We then have to tell Kubernetes to use our CRI-O endpoint. Create a file at /etc/systemd/system/kubelet.service.d/0-crio.conf with the following content:
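A minimal sketch of that drop-in, written with a heredoc; note that the `--container-runtime=remote` flag applies to kubelets of this era and was removed in recent Kubernetes releases, where only the socket endpoint is needed:

```shell
sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/0-crio.conf > /dev/null <<'EOF'
[Service]
Environment="KUBELET_EXTRA_ARGS=--container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --cgroup-driver=systemd"
EOF
```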

Then we just have to initialize the Kubernetes cluster with CRI-O, and connect to it. Run the following commands. The last one is used to remove the "master" taint on our node, as we have a single-node cluster. You can skip it if you are running a multiple-node cluster.
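Sketched out, this looks like the following; the pod CIDR is an assumption tied to your network plugin, and on recent Kubernetes versions the taint key is `node-role.kubernetes.io/control-plane` rather than `master`:

```shell
# Bootstrap the control plane against the CRI-O socket
sudo kubeadm init --cri-socket=unix:///var/run/crio/crio.sock --pod-network-cidr=10.244.0.0/16

# Point kubectl at the new cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Single-node cluster only: allow regular workloads on the control-plane node
kubectl taint nodes --all node-role.kubernetes.io/master-
```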

Finally, as we are not using Docker as the runtime, we won't be able to search for images on Docker Hub natively. We have to tell CRI-O to do so, and we can do that by using a config file in /etc/containers.

Run the following command to set docker.io up as the default search registry:
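A minimal sketch using the newer registries.conf v2 syntax (older CRI-O versions use a `[registries.search]` table instead, so check which format your version expects):

```shell
sudo tee /etc/containers/registries.conf > /dev/null <<'EOF'
# Registries tried for unqualified image names such as "nginx"
unqualified-search-registries = ["docker.io"]
EOF
sudo systemctl restart crio
```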

Now our cluster should be ready to go and we could use it as it is, but then we wouldn't be able to take advantage of Firecracker. Having come this far, that would be a shame. So let's install Kata Containers and Firecracker in one go.

Install Kata containers

To do so, simply use the following kubectl commands. They follow the best practices from the documentation and create a pod that will install and configure Kata Containers for you.
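This is the kata-deploy installer from the Kata Containers repository; the exact raw URLs and the `name=kata-deploy` label are taken from the project's layout at the time of writing and may have moved between releases, so double-check them against the kata-deploy documentation:

```shell
# RBAC first, then the installer DaemonSet
kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-deploy/base/kata-deploy.yaml

# Wait until the installer pod has configured the node
kubectl -n kube-system wait --for=condition=Ready pod -l name=kata-deploy --timeout=300s
```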

When the job ends, Kata should be installed and have modified your CRI-O configuration to set up kata-fc, i.e. Kata Containers using Firecracker, as a runtime.

All that’s left is to create a Firecracker RuntimeClass in Kubernetes and use our cluster.

Create Kubernetes RuntimeClass

Creating a RuntimeClass is pretty straightforward. Just follow the documentation and use the following command: 
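A sketch of that RuntimeClass, applied from a heredoc; the `handler` must match the runtime name kata-deploy registered in your CRI-O configuration, and older clusters expose the `node.k8s.io/v1beta1` API instead of `v1`:

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-fc
handler: kata-fc   # runtime name added to crio.conf by kata-deploy
EOF
```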


To validate that our cluster is working well and that we have actually managed to secure our processes, we’ll create two nginx pods: one without Firecracker, and one with Firecracker (because, as we said earlier, CRI-O allows us to select multiple runtimes).

Let's start with the one with Firecracker. Type the following command and let the pod initialize:
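A minimal sketch, assuming the RuntimeClass created above is named `kata-fc` (the pod name is arbitrary):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-fc
spec:
  runtimeClassName: kata-fc   # run this pod inside a Firecracker micro-VM
  containers:
  - name: nginx
    image: nginx
EOF

kubectl wait --for=condition=Ready pod/nginx-fc --timeout=120s
```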

Let’s check the processes on the host to see if we see any nginx. If you haven’t installed nginx on the host, ps aux | grep nginx should be empty. You can check that Firecracker is running with ps aux | grep firecracker.

If we create a pod that is running on the default runtime, which is usually runc, we will see a nginx process.
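The same pod without a `runtimeClassName` falls back to the default runtime; a sketch (again, the pod name is arbitrary):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-runc   # no runtimeClassName: uses the default runtime
spec:
  containers:
  - name: nginx
    image: nginx
EOF
```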

If we type ps aux | grep nginx again, we see that an nginx process is now shown.

We have successfully created our Kubernetes cluster that runs with Firecracker! Hurray, we will never be hacked again!

Unfortunately, using Firecracker is not enough to say that sentence and be right. There are other ways to further protect Kubernetes clusters, often by following best practices, and our blog is filled with them! For instance, you could start with our Kubernetes security beginner guide.