Deploying an App on ECS with Fargate

Posted on 13 April 2023.

Congratulations! You just finished creating your new application, properly containerized it, and are now ready to become the next tech billionaire.


Sadly, you have no idea how to deploy and maintain this new app. This is where Amazon Web Services’ ECS using Fargate comes to the rescue.

What are ECS and Fargate?

ECS


Amazon Web Services (AWS) is a cloud computing platform that offers a wide range of services for running your applications and services with ease.

Among the many services provided by AWS, Elastic Container Service (ECS) is a fully managed container orchestration service that lets you easily run and scale containers on AWS.

It provides features like automatic load balancing, service discovery, and auto-scaling, so developers can manage their containerized applications without building that tooling themselves.

One of the benefits of using ECS is that its control plane is fully managed: AWS runs the orchestration layer for you. And when you pair it with Fargate (more on that below), AWS also manages the servers, operating systems, and other resources needed to run the containers, letting you focus on writing code instead of worrying about infrastructure.

Fargate


AWS ECS also provides the Fargate capacity provider, a serverless compute engine for containers. Fargate enables you to run containers without having to manage the underlying EC2 instances, which means you can run containers as a true serverless service, paying only for the resources used by the container rather than for an entire instance.

With the Fargate capacity provider, you can launch and scale containerized applications automatically, based on demand. This makes it a flexible and cost-effective way to run applications with varying resource requirements, since you only pay for the exact resources your application uses, without any waste.

All of this makes AWS ECS a highly flexible and cost-effective solution for deploying containerized applications in the cloud. And if you want to make sure your app is running well and is well on its way to making you rich, you can monitor it by enabling the CloudWatch service.

ECS Components

In this part, I will quickly summarize the core components of ECS and what they do; the short CLI sketch after the list shows how they fit together.

If you have any questions or doubts, I suggest you read and explore the AWS ECS documentation.

  • Task Definition: A task definition is a blueprint for a group of one or more containers that make up an application or service.
  • Task: A running instance of a task definition, launched in your cluster (on an EC2 container instance or on Fargate).
  • Service: A service in ECS allows you to run and maintain a specified number of instances of a task definition simultaneously in a cluster.
  • Cluster: A logical grouping of one or more EC2 instances or Fargate tasks, where you can run containerized applications.
  • Capacity Provider: A way to provide capacity for your tasks in your cluster. You can use Fargate or EC2 instances as capacity providers.
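
To see how these components fit together in practice, here’s a quick AWS CLI sketch that walks down the hierarchy. This is a minimal sketch: my-cluster, my-service, and my-task-family are placeholder names, and it assumes the AWS CLI is installed and configured.

    # List the clusters in your account
    aws ecs list-clusters

    # List the services running inside a cluster
    aws ecs list-services --cluster my-cluster

    # List the tasks a service is currently running
    aws ecs list-tasks --cluster my-cluster --service-name my-service

    # Show the task definition (the blueprint) those tasks were built from
    aws ecs describe-task-definition --task-definition my-task-family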

How to

Now that we’ve covered the theoretical part, let’s dig into the practical part of this article.

This section will explain how to deploy a minimalistic hello-world nginx app on ECS using Fargate. While this is not a full app, it is enough to cover the basics of how this AWS service works.

During this tutorial, I will include links to the documentation of AWS resources that I haven’t already covered in the two previous parts. Be sure to check them out if you don’t know what these services are and how they work!

First step: create a Virtual Private Cloud


We’ll start this tutorial by creating a VPC, which will be the foundation for everything else: every resource we create going forward will live inside this VPC.

This VPC spans two Availability Zones, contains two public and two private subnets, and one NAT Gateway. In summary, it should look like this:

[Image: diagram of the VPC spanning two Availability Zones, with two public subnets, two private subnets, and a NAT Gateway]

To get this result, click on the VPC panel of the AWS console, click on “Create VPC” and follow this configuration:

[Image: “Create VPC” configuration screen]
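
If you prefer scripting over clicking, here’s a minimal CLI sketch of the same step. The CIDR blocks and Availability Zone are assumptions, and the route tables, Internet Gateway, and NAT Gateway that the console wizard creates for you are omitted:

    # Create the VPC
    aws ec2 create-vpc --cidr-block 10.0.0.0/16

    # Create one public and one private subnet in the first AZ
    # (repeat with different CIDR blocks and AZ for the second pair)
    aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.0.0.0/24 \
        --availability-zone eu-west-1a
    aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.0.128.0/24 \
        --availability-zone eu-west-1a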

Second step: create a Cluster inside your VPC


Once your VPC is created, we need to create a cluster in which we’ll later deploy a service containing our tasks.

All the following work will be done inside the private subnets of our VPC, since you probably don’t want the users of your app to access your resources directly. More on that later.

To create your Cluster, click on the Elastic Container Service panel of your AWS console, then select “Clusters” on the left panel, click on “Create cluster” and follow this configuration.

[Image: “Create cluster” configuration screen]
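
For reference, the CLI equivalent is a single command (my-cluster is a placeholder name); attaching the Fargate capacity providers at creation time mirrors what the console configuration above does:

    # Create a cluster backed by the Fargate capacity providers
    aws ecs create-cluster --cluster-name my-cluster \
        --capacity-providers FARGATE FARGATE_SPOT \
        --default-capacity-provider-strategy capacityProvider=FARGATE,weight=1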

Third step: create a Service inside your Cluster


Now that you have your Cluster up and running, let’s get into the crucial part of this tutorial. We are now going to create a Service and a Task Definition to actually do something with the resources we created.

Click on your Cluster, then create a new Service with the following configuration.

Environment

[Image: Service environment configuration screen]

Make sure you have the “FARGATE” capacity provider selected, since it manages the underlying compute for you and makes your life easier.

Deployment configuration

If you followed this tutorial correctly, you should see this panel:

[Image: Deployment configuration panel]

I created two replicas of my future containers in this Service. Replicas are really useful for avoiding downtime when a container fails, and for keeping the system from saturating during a surge in traffic. They are overkill in our case, but worth mentioning.

At this point, you should not have any “Family” available, since we have not created a Task Definition yet. To address this, click the “Task definition” link, and let’s create one for our Service!

Task Definition


As described earlier, a task definition is a blueprint used to build tasks, which will be actual containers running on AWS.

In this tutorial, I’m going to create a single task definition to build containers based on a single nginx image. In the real world, you’ll probably need several task definitions, since you’ll want to deploy several images.

There are two main parts to this screen: the container details and the port mappings.

The container details let you choose the name and the image of the containers you want to create. In this case, I’ll go with the latest nginx image, which will be automatically pulled from Docker Hub.

The port mapping section exposes a port on your containers, in this case port 80. Since nginx listens on port 80 by default, this is the equivalent of docker run -p 80:80 nginx:latest with Docker.

[Image: Task Definition container and port mapping configuration]

The environment screen is a bit more straightforward. The two things you have to choose are the CPU and memory you want to allocate to your containers, and the IAM roles you want to give your tasks.

I gave my containers very little CPU and memory since they will only run an nginx hello-world.

You can customize the IAM roles given to your containers, but the default ecsTaskExecutionRole works fine for our use case.
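
Putting the two screens together, here’s roughly the task definition this produces, expressed as a CLI call. This is a sketch: the family name, account ID, and role ARN are placeholders, and the CPU/memory values match the small allocation chosen above.

    # Register the task definition (values are placeholders)
    aws ecs register-task-definition --cli-input-json '{
        "family": "hello-nginx",
        "networkMode": "awsvpc",
        "requiresCompatibilities": ["FARGATE"],
        "cpu": "256",
        "memory": "512",
        "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
        "containerDefinitions": [{
            "name": "nginx",
            "image": "nginx:latest",
            "portMappings": [{ "containerPort": 80, "protocol": "tcp" }]
        }]
    }'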


Once those two screens are completed, we can go back to our Service and finish setting it up!

Back to the Service!


Time for some networking! On this screen, select the VPC created earlier and the two private subnets to deploy your Service in.

You can select the default security group; we will configure it a bit later.

Finally, make sure that the public IP is disabled. We want to access our service through a Load Balancer and not have it directly reachable on the Internet. More on that later as well!
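
For reference, here’s what the equivalent create-service call looks like once the task definition exists. This is a sketch: the cluster, service, subnet, and security group names are placeholders. Note assignPublicIp=DISABLED, matching the choice above, and the desired count of 2 for the two replicas:

    # Create the service with two replicas in the private subnets
    aws ecs create-service --cluster my-cluster \
        --service-name hello-nginx-service \
        --task-definition hello-nginx \
        --desired-count 2 \
        --launch-type FARGATE \
        --network-configuration 'awsvpcConfiguration={subnets=[subnet-aaa,subnet-bbb],securityGroups=[sg-ccc],assignPublicIp=DISABLED}'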


Create an Application Load Balancer


As mentioned in the previous paragraph, you don’t want to have your services available publicly, but would rather access them through a Load Balancer. Luckily for you, ECS has the option to create one at the same time as your Service!

If you don’t know how ALBs work in AWS, I highly recommend you check the documentation.

Configuring it is pretty straightforward. Give it a name, then select the containers it should balance the load to.

You then have to configure a Listener, which consists of choosing an entry port and the protocol of the traffic you want to accept (HTTP in this case, to keep it simple, but in real life you should really use HTTPS).

You can then create a new target group, which is the group of resources your ALB should route incoming traffic to. In our case, it will be the containers created by the Service.

Follow the configuration below to create a nice and functioning ALB.

[Image: Load balancing configuration screen]
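
If you ever need to build the same setup outside the Service wizard, the CLI steps look roughly like this (names, IDs, and ARNs are placeholders). The important detail is --target-type ip: Fargate tasks register with the target group by IP address.

    # Target group the ALB will route to (Fargate tasks register by IP)
    aws elbv2 create-target-group --name hello-nginx-tg \
        --protocol HTTP --port 80 --vpc-id <vpc-id> --target-type ip

    # The Application Load Balancer itself
    aws elbv2 create-load-balancer --name hello-nginx-alb \
        --type application --subnets <subnet-a> <subnet-b>

    # Listener on port 8080 forwarding to the target group
    aws elbv2 create-listener --load-balancer-arn <alb-arn> \
        --protocol HTTP --port 8080 \
        --default-actions Type=forward,TargetGroupArn=<tg-arn>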

Create the Service and wait a few minutes. If all went well, you should see the following screen in the Cluster panel.

[Image: Cluster overview showing the running Service]

Fourth Step: Final corrections


Theoretically, all our work should be done here and our app accessible from the DNS name of our ALB, on port 8080. However, if you try to access your app right now, it won’t load.

To fix this issue, we have two small changes to apply to the AWS resources we deployed earlier.

Firstly, your ALB is by default deployed in your private subnets, which are not connected to the Internet. With this configuration, no one can access your ALB, which is the entry point of your app.

On the EC2 panel, go to the “Load Balancers” dashboard and select your brand-new ALB. In the “Network Mappings” section, click “Edit subnets” and replace your private subnets with your public ones.

[Image: “Edit subnets” panel of the ALB]
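
The same fix can be applied from the CLI (the ARN and subnet IDs are placeholders):

    # Move the ALB into the public subnets
    aws elbv2 set-subnets --load-balancer-arn <alb-arn> \
        --subnets <public-subnet-a> <public-subnet-b>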

Finally, we have to allow ingress traffic in the default security group, which does not accept inbound traffic from the Internet by default.

Still in the EC2 panel, go to the “Security Groups” dashboard and select your security group. Then, under “Inbound Rules”, select “Edit inbound rules”. Create a new rule with “Custom TCP” as the Type, 8080 as the port range (the port given to the Listener created with the ALB), and “Anywhere-IPv4” as the Source.

[Image: “Edit inbound rules” panel of the security group]
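
The CLI equivalent, with a placeholder security group ID:

    # Allow inbound TCP traffic on port 8080 from anywhere
    aws ec2 authorize-security-group-ingress --group-id <sg-id> \
        --protocol tcp --port 8080 --cidr 0.0.0.0/0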

Final Step: access your app!


All the configuration is complete and all the AWS resources have been deployed; time to check if you can access your nginx server!

Go back to EC2/Load balancers, select your ALB and, in the “Details” section, copy the DNS name.

[Image: ALB “Details” section with the DNS name]

Add your listener port after this DNS name, so the URL looks like this: http://<DNS name>:<port>.

In our case, this port is 8080, so I’ll add “:8080” behind my DNS name.

You can now access your app on this URL with the specified port!

[Image: nginx welcome page served through the ALB]
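
You can also run a quick smoke test from the terminal (replace the placeholder with your ALB’s DNS name); an HTTP 200 with the nginx welcome page means everything is wired up correctly:

    # Smoke test the app through the ALB
    curl -i http://<alb-dns-name>:8080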

Conclusion

While extremely simplified, this tutorial showed you how to properly deploy and configure the numerous AWS resources required to run a real-life app.

To go further, I suggest you read about RDS databases to deploy a real-life backend database, or about ElastiCache to deploy a Redis cache for your app.