Posted on 22 July 2020, updated on 21 December 2023.

A few months ago, we wrote an article about how to use NFS for Kubernetes dynamic storage provisioning, and raised some concerns regarding its resilience, for example what may happen when the nfs-provisioner crashes.

Here we will look at another possible way to solve this problem, this time backed by an AWS EFS volume. You will see how much more resilient this solution is.

EFS provisioner Architecture

What we want to deploy

Here is the architecture that we will set up in the following sections:

(Figure: EFS provisioner architecture)

The EFS volume at the top of the figure is an AWS-provisioned EFS volume, therefore managed by AWS, separately from Kubernetes. Like most AWS resources, it is attached to a VPC, availability zones and subnets, and it is protected by security groups.

This volume can basically be mounted anywhere you can mount volumes using the NFS protocol. So you can mount it on your laptop (provided you configured the AWS security groups accordingly), which can be very useful for testing or debugging purposes. Or you can mount it in Kubernetes.

And that is exactly what both the EFS-provisioner (in order to configure sub-volumes inside the EFS volume) and your pods (in order to access those sub-volumes) will do.

What will happen

When the EFS provisioner is deployed in Kubernetes, a new StorageClass (named “aws-efs” in this article) is available and managed by this provisioner.

You can then create a PVC that references this StorageClass. By doing so, the EFS provisioner will see your PVC and begin to take care of it, by doing the following:

  • Create a subdir in the EFS volume, dedicated to this PVC
  • Create a PV with the URI of this subdir (the address of the EFS volume plus the subdir path) and the related info that will enable pods to use this subdir as a storage location over the NFS protocol
  • Bind this PV to the PVC

Now, when a pod is configured to use the PVC, it will use the PV’s info in order to connect directly to the EFS volume and use the subdir.
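
To make this concrete, here is a rough sketch of what such a generated PV could look like (all names, IDs and paths below are hypothetical, and the exact layout produced by the efs-provisioner may differ slightly):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-0a1b2c3d                                   # name generated by the provisioner (hypothetical)
spec:
  storageClassName: aws-efs
  capacity:
    storage: 1Mi                                       # EFS is elastic, so this value is mostly symbolic
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  nfs:
    server: fs-12345678.efs.eu-west-1.amazonaws.com    # address of the EFS volume
    path: /my-efs-pvc-pvc-0a1b2c3d                     # subdir dedicated to this PVC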

About resilience

The great advantage of this architecture is that the provisioner’s only job is to configure the EFS volume, provisioning it with sub-volumes.

Indeed, for a pod to use the EFS volume, it only needs the info contained in the PV resource. But it doesn’t need to connect to or access the EFS provisioner at all.

This ensures that your pods will continue to work properly even if the provisioner fails, reboots, etc.

And since your pods connect directly to the EFS volume, which is a multi-availability-zone service, your pods access a natively highly available storage service.

How to use the EFS provisioner

Create an EFS volume

You can easily create an AWS EFS volume (AWS Elastic File System) by going to the EFS service in the AWS console.

You basically just need to provide:

  • The VPC that will host the EFS volume
  • 3 subnets inside this VPC (1 per availability zone)
  • At least one security group to restrict access to the volume
    • Careful: your security group rules need to allow access to the EFS volume from your cluster’s nodes

When this is done, your volume will be accessible right away.
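
If you prefer the command line over the console, here is a rough equivalent with the AWS CLI (a sketch only; all IDs are placeholders, and the mount-target command must be repeated once per subnet):

# Create the file system (the creation token is an arbitrary placeholder)
aws efs create-file-system --creation-token my-efs --region eu-west-1

# Create one mount target per subnet, attached to your security group
aws efs create-mount-target \
  --file-system-id fs-12345678 \
  --subnet-id subnet-0aaa1111 \
  --security-groups sg-0bbb2222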

Access your EFS volume

You can try your volume by adding the following line to your “/etc/fstab” file:

<DNS_EFS>:/<SUBDIR> /<MOUNT_POINT> nfs user,noauto 0 0

And then perform the following command line:

mount <MOUNT_POINT>

This way you can have access to the whole volume or to some <SUBDIR> that corresponds to a particular PVC.

Doing this on your local machine implies that your security policy enables your machine to have access to the EFS volume. This is not always a good practice, but it’s often useful during the development and test phases.

Create the IAM role

The EFS-provisioner will need a role that has access to your EFS volume.

You can create one with the following IAM policy:
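
Below is a sketch of what such a policy could look like (the policy and file names are placeholders, and the action list is deliberately broad, as noted just below):

# Hypothetical file and policy names; this grants broad EFS access on all resources
cat > efs-provisioner-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "elasticfilesystem:*",
      "Resource": "*"
    }
  ]
}
EOF

aws iam create-policy --policy-name efs-provisioner --policy-document file://efs-provisioner-policy.json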

Please note that this policy will work but it’s pretty open and not very secure. You could restrain it a little by allowing access only on resources whose ARN is the ARN of your EFS volume.

Deploy the EFS provisioner

Now that you have an EFS volume and have checked that it’s up and ready, you can deploy the EFS-provisioner inside Kubernetes.

An easy way to do this is to use Helm (if you don’t already know this technology, you can find info about Helm 3 here).

To do so you first need to create a Helm value file:
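
Here is a minimal sketch of such a value file (myvalues.yaml), assuming the usual keys of the stable/efs-provisioner chart (check the chart’s values.yaml for the exact schema); the file system ID, region and path are placeholders to replace with your own:

efsProvisioner:
  efsFileSystemId: fs-12345678          # ID of the EFS volume created earlier
  awsRegion: eu-west-1                  # region where the EFS volume lives
  path: /pv-volumes                     # base directory for the provisioned subdirs
  provisionerName: example.com/aws-efs
  storageClass:
    name: aws-efs                       # name referenced by your PVCs
    reclaimPolicy: Delete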

And then use Helm in order to install the efs-provisioner with these values:

helm repo add stable https://kubernetes-charts.storage.googleapis.com

helm install -f myvalues.yaml efs-provisioner stable/efs-provisioner

Note: For more information about the efs-provisioner Helm Chart, please refer to its documentation.

Use the EFS PVs

Using your EFS-provisioner in order to provide a pod with a PV is quite simple. Just use the following yaml file:
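
Here is a minimal sketch of such a PVC (my_pvc.yaml); the name and requested size are arbitrary placeholders:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-efs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: aws-efs
  resources:
    requests:
      storage: 1Mi        # EFS is elastic, so the requested size is mostly symbolic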

And then apply it:

kubectl apply -f my_pvc.yaml

Please note the storageClassName: aws-efs in the file: that’s how you tell the EFS-provisioner that it should handle this particular PVC.

You can then create a pod that uses this PVC with the following yaml file:
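
Here is a minimal sketch of such a pod (my_pod.yaml); the image, names and mount path are placeholders, and claimName must match the PVC created above:

apiVersion: v1
kind: Pod
metadata:
  name: my-efs-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello from EFS > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: efs-volume
          mountPath: /data        # the EFS subdir is mounted here
  volumes:
    - name: efs-volume
      persistentVolumeClaim:
        claimName: my-efs-pvc     # must match the PVC above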

The command that will actually create that pod is the following:

kubectl apply -f my_pod.yaml

We saw in this article how you can quickly set up an efs-provisioner that will provide your pods with persistent volumes that are both resilient and ReadWriteMany. Once the provisioner is up, using a new EFS PV is as easy as creating the appropriate PVC in Kubernetes.