Set up an SSH bastion on AWS with Terraform modules in a few minutes

SSH bastions, or jump hosts, are pieces of infrastructure that allow certain users — usually a few selected members of your organization — to access resources inside your private infrastructure via SSH. This access has to be restricted and thoroughly secured, since an SSH bastion is a gateway to resources that can be critical.

Nonetheless, since it is an element that won’t be seen by your end users and will only be used occasionally by programmers for debugging purposes, it is often neglected and only set up when the need arises.

The goal of this article is to show that a robust SSH bastion can be set up quickly and most importantly, securely while still being easy to use.

I chose to set it up on an AWS cloud environment with Terraform, with the help of modules available on the registry.

The case for an SSH bastion

The first question you have to ask yourself is this one:

Why do we need an SSH bastion?

The purpose of an SSH bastion is to create a way of accessing servers via SSH from an outside network.

This is particularly useful for debugging purposes, to understand why a piece of software “works on my machine” but does not when deployed on the servers.

An SSH session could also be useful for server maintenance or post-mortem investigation after a crash.

Let’s take as an example the following AWS environment:


A simple AWS environment without SSH bastion

Now let’s say your app stops working. You inspect the logs and suspect the issue to be a network issue between the app server and the database.

Could it be that the app can’t initiate a connection to the database?

A simple way to verify the hypothesis would be to open a shell session on the app server and try to connect to the database with a CLI tool like psql or mysql. If the connection succeeds, we can eliminate the hypothesis and continue investigating other possible issues.

But how do I open that said shell session?

As we can see on the diagram, the app server is on the private subnet (thus unavailable directly from the internet) and the load balancer, if configured correctly, only allows HTTPS connections to go through to the app server.

Here comes the need for an SSH bastion.


A simple AWS environment with SSH bastion

Now, with this new architecture, there is a dedicated entry point for the SSH protocol. You can connect to the elastic IP attached to the server deployed in the public subnet and then open a new session to the app server or directly try to connect to the database.
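With hypothetical hostnames and addresses, this two-hop connection can be written down once in an `~/.ssh/config` so a single command reaches the app server:

```
# ~/.ssh/config — all names and addresses below are illustrative
Host bastion
    HostName 203.0.113.10      # the elastic IP attached to the bastion
    User ec2-user

Host app-server
    HostName 10.0.2.15         # private IP of the app server
    User ec2-user
    ProxyJump bastion          # hop transparently through the bastion
```

With this in place, `ssh app-server` opens the session through the bastion in one step, and you can run `psql` or `mysql` from there against the database.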

This setup is as simple as can be, but it can be a source of performance and security issues.

In the next parts of this article, we’ll set up a more complex SSH bastion architecture to address these concerns.

Terraform modules

If you are using Terraform, you are probably familiar with the concept of modules, which is Terraform’s way of encapsulating chunks of code.

I am used to using modules in my Terraform scripts to separate the different parts of my architecture (one module creates a Kubernetes cluster, another creates a database, and so on), but there is a simple reflex that I didn’t have and that could have saved me hours: searching for community modules on the Terraform registry.

The Terraform registry lists modules created by the community that you can use very easily in your own Terraform repository.

For example, you can deploy a Kubernetes cluster on AWS with only about fifteen lines of code.
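As an illustration, a registry module such as terraform-aws-modules/eks/aws lets you stand up an EKS cluster with a handful of inputs. The values below are illustrative, and the exact variable names vary between module versions, so check the documentation of the version you pin:

```hcl
# Minimal sketch of an EKS cluster via a community registry module.
module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name    = "demo-cluster"   # illustrative name
  cluster_version = "1.21"

  vpc_id     = module.vpc.vpc_id          # assumes a companion VPC module
  subnet_ids = module.vpc.private_subnets # worker nodes stay private
}
```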

Most of these modules take a multitude of input variables that allow you to deploy exactly the kind of resources you have in mind, without having to compromise or having to reinvent the wheel each time you have to add a new element to your architecture.

So, let’s digress no further and come back to our task at hand: deploying an SSH Bastion.

After some research on the Terraform registry, I settled on a module named ssh-bastion-service. Reading the documentation in its GitHub repository told me it would do the job I wanted done, and do it in quite a nice fashion.
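A minimal instantiation could look like the sketch below. The source address and input names are placeholders from memory and may differ from the version you use, so refer to the module’s README for the exact variables:

```hcl
module "ssh_bastion" {
  # Placeholder source; use the exact registry path and pin a version
  source = "joshuamkite/ssh-bastion-service/aws"

  vpc         = module.vpc.vpc_id
  subnets_asg = module.vpc.private_subnets # bastion instances stay private
  subnets_lb  = module.vpc.public_subnets  # only the load-balancer is public

  # CIDR blocks allowed to reach port 22 (offices, VPN, home IPs)
  extra_ssh_cidr = ["203.0.113.0/24"]
}
```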

We’ll dive deeper into that specific module in the next section.

Walkthrough of your new SSH bastion



As you can see in the diagram, the SSH bastion deployed by the Terraform module differs from the simple example presented previously in a number of ways:

  • An autoscaling group is deployed rather than a single instance. That means that if an instance fails for any reason, a new one will be created to ensure high availability of the system. If the instances come under resource pressure (CPU, memory, etc.), new instances will be created so the user experience stays as smooth as possible.
  • A Network Load-Balancer is deployed to proxy traffic to the SSH bastions. This load-balancer simply balances incoming traffic on port 22 across the SSH bastions. Its presence in the public subnet also allows the SSH bastion instances to be deployed in the private subnet, making them inaccessible directly from the internet (except through the load-balancer).



Security measures are also built into the module, such as security groups and IP whitelisting. The SSH bastion instances are deployed in a separate security group. By default, connections coming from this security group won’t be able to reach other security groups; you will have to authorize them on the security group containing your app infrastructure.

When instantiating the module in your Terraform code, you can pass as a variable a list of IP subnets that are allowed to connect to the SSH bastion. In practice, these IP addresses would be your organization’s office IPs or your employees’ home addresses, depending on your remote-working situation.

One of the main struggles when maintaining an SSH bastion is the management of SSH keys. For a user to be able to connect to the bastion, his or her public key has to be authorized on the SSH server. Each time a new person needs to access the SSH bastion, his or her SSH key has to be added, not even accounting for the fact that people tend to lose their SSH keys or forget their passphrases. This maintenance adds up to time-consuming and uninteresting work.

This Terraform module remedies this problem by deploying a piece of code that fetches SSH keys directly from each user’s AWS IAM profile. That means each user is in charge of managing his or her own keys; no action needed from the system administrator!
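For instance, each user can publish his or her own public key with the AWS CLI (the user name and key path below are illustrative):

```
# Attach your SSH public key to your own IAM user,
# so the bastion can fetch it at connection time.
aws iam upload-ssh-public-key \
  --user-name jane.doe \
  --ssh-public-key-body "$(cat ~/.ssh/id_rsa.pub)"
```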

And by the way, if you have centralized your AWS users in a parent account (a strategy described in this article on AWS IAM), you can also retrieve the keys using AWS roles.

Another major feature is that SSH sessions are not opened directly on the SSH bastion, but in a container inside that bastion. Every time a user opens a session, a new container is created, and it is deleted when the session closes. This is a very welcome security feature, as nobody can leave files containing possibly sensitive information on the server.



Here it is: a method to deploy an SSH bastion in only a few minutes with Terraform. Not only is it very easy to set up (or tear down, if you want to deploy this bastion only when the need arises), it is also easy to use and secure.

Furthermore, I recommend that any Terraform user who doesn’t have the habit of using the Terraform registry give it a try; it could save you a lot of time and headaches!

If you are new to using Terraform (and you are a Kubernetes enthusiast), I recommend you follow this article to set up a Kubernetes cluster on AWS, or that one if you prefer to use GCP.

Sadiki Camil

Camil is a Site Reliability Engineer (SRE) at Padok. He shares with our clients his expertise in DevOps technologies such as Kubernetes, Helm, AWS, and GitLab.
