Serverless on AWS

Posted on 11 January 2022, updated on 21 December 2023.

IaC tools are multiplying on the market today, and the same is true for serverless technologies across cloud providers. In this article, I'll look at the frameworks available for deploying serverless infrastructure on AWS and compare them, to give you a clearer picture and stronger arguments when choosing a technology to manage your serverless infrastructure.
This article is the first in a two-part series; I invite you to read the second part if you want my feedback on IaC technologies for implementing serverless on AWS.

Introduction to serverless

What is serverless computing?

Serverless computing is a cloud-native development model that allows developers to build and run applications without having to manage the underlying infrastructure.

This model still requires servers, but their management is decoupled from application development. A cloud provider takes care of the routine work of provisioning the server infrastructure, keeping it running smoothly, and scaling it up. Developers then only need to package their code into functions or containers to deploy their applications.

Why is serverless a good idea?

It gives more freedom to developers, who are no longer responsible for maintaining infrastructure and can therefore concentrate on their core business: development. This significantly accelerates application delivery.

It guarantees high availability and resilience of resources thanks to auto-scaling, which automatically adjusts the resources allocated to applications, and to replication across multiple data centers.

The third and equally important advantage is cost savings, thanks to pay-per-use billing based on actual code execution time and resource consumption, billed to the millisecond. This eliminates the cost of purchasing and operating servers, which are often underused anyway, and encourages developers to optimize the performance of their code. Your costs are then proportional to your business activity and the traffic you generate, which ultimately means proportional to your revenue.
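To make the pay-per-use model concrete, here is a back-of-envelope monthly cost calculation. The unit prices below are illustrative assumptions based on published Lambda pricing at the time of writing; check the current AWS pricing page for real figures.

```javascript
// Back-of-envelope monthly Lambda cost (unit prices are assumptions)
const requests = 1000000;               // invocations per month
const durationMs = 120;                 // average execution time
const memoryGb = 128 / 1024;            // 128 MB of allocated memory
const pricePerMillionRequests = 0.20;   // USD (assumed)
const pricePerGbSecond = 0.0000166667;  // USD (assumed)

const gbSeconds = requests * (durationMs / 1000) * memoryGb;
const computeCost = gbSeconds * pricePerGbSecond;
const requestCost = (requests / 1000000) * pricePerMillionRequests;
const total = computeCost + requestCost;
console.log(total.toFixed(2)); // ≈ 0.45 USD for a million 120 ms calls
```

Idle time costs nothing: if traffic drops to zero, so does the bill (excluding storage).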

What do I mean by serverless?

For this article, I will deploy a simple serverless solution on AWS using several frameworks / IaC tools and compare them. Why AWS? Simply because it is currently the most widely used cloud provider and the most advanced in serverless technologies.

This very simple serverless architecture implements an API that triggers a Lambda function, which in turn interacts with a NoSQL database (DynamoDB). All the components used here are serverless (API Gateway, Lambda & DynamoDB) and can be deployed in several ways, which we will detail below. Moreover, this infrastructure is free when the stack is not used: you only pay for the amount of data stored in DynamoDB.

Deploying with Console

Why deploy using the management console?

Although this mode of deployment is rarely used in practice, it is the first view we get of the AWS components, and it is the easiest way to get started without setting up a development environment or learning a configuration/programming language.

Moreover, deploying with the console makes you aware of all the resources necessary for the example you wish to deploy to function properly.

Tutorial on how to make the "Hello World" in the console

Create a role for the Lambda function
  • Go to IAM > Roles > Create Role

  • Give it a name and add the following policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1428341300017",
      "Action": [
        "dynamodb:DeleteItem",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Query",
        "dynamodb:Scan",
        "dynamodb:UpdateItem"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Sid": "",
      "Resource": "*",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Effect": "Allow"
    }
  ]
}
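Note that the statement above grants DynamoDB access on every table (`"Resource": "*"`), which keeps the tutorial simple but is broader than necessary. In a real setup you would typically scope it to the table's ARN; a sketch, where the region and account ID are placeholders:

```json
{
  "Sid": "ScopedDynamoDBAccess",
  "Effect": "Allow",
  "Action": [
    "dynamodb:DeleteItem",
    "dynamodb:GetItem",
    "dynamodb:PutItem",
    "dynamodb:Query",
    "dynamodb:Scan",
    "dynamodb:UpdateItem"
  ],
  "Resource": "arn:aws:dynamodb:eu-west-1:123456789012:table/lambda-apigateway"
}
```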

Once the role is created, we can create the Lambda function that will write to the DynamoDB table.

Create Lambda function

In AWS Lambda Management console > Create Function

  • Name the function (this tutorial uses LambdaFunctionOverHttps, referenced again later when wiring up API Gateway)

  • For the runtime, select Node.js 12.x (note: AWS has since deprecated this runtime; the code below relies on the v2 AWS SDK that it bundles).

  • For the architecture, select x86_64.

  • For permissions, select Use an existing role and choose the previously created role.

  • Click on “Create function”

Once the function is created, copy the code below into the function's code editor ("Code source" tab).

console.log('Loading function');

var AWS = require('aws-sdk');
var dynamo = new AWS.DynamoDB.DocumentClient();

/**
 * Provide an event that contains the following keys:
 *
 *   - operation: one of the operations in the switch statement below
 *   - tableName: required for operations that interact with DynamoDB
 *   - payload: a parameter to pass to the operation being performed
 */
exports.handler = function(event, context, callback) {
    //console.log('Received event:', JSON.stringify(event, null, 2));

    var operation = event.operation;

    if (event.tableName) {
        event.payload.TableName = event.tableName;
    }

    switch (operation) {
        case 'create':
            dynamo.put(event.payload, callback);
            break;
        case 'read':
            dynamo.get(event.payload, callback);
            break;
        case 'update':
            dynamo.update(event.payload, callback);
            break;
        case 'delete':
            dynamo.delete(event.payload, callback);
            break;
        case 'list':
            dynamo.scan(event.payload, callback);
            break;
        case 'echo':
            callback(null, "Success");
            break;
        case 'ping':
            callback(null, "pong");
            break;
        default:
            callback(`Unknown operation: ${operation}`);
    }
};
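The routing logic above can be exercised locally without AWS credentials by substituting a stub for the DocumentClient. The sketch below is for illustration only (the `route` function and the stub are my own, not part of the tutorial); it mirrors the handler's switch and shows the event shape that API Gateway will forward.

```javascript
// Stand-alone sketch of the handler's routing, with a stub in place of
// AWS.DynamoDB.DocumentClient (no AWS credentials needed).
function route(event, dynamo, callback) {
    // Same convention as the handler: copy tableName into the payload
    if (event.tableName) {
        event.payload.TableName = event.tableName;
    }
    switch (event.operation) {
        case 'create': return dynamo.put(event.payload, callback);
        case 'read':   return dynamo.get(event.payload, callback);
        case 'echo':   return callback(null, 'Success');
        default:       return callback(`Unknown operation: ${event.operation}`);
    }
}

// Stub that records calls instead of hitting DynamoDB
const recorded = [];
const stub = {
    put: (params, cb) => { recorded.push(['put', params]); cb(null, {}); },
    get: (params, cb) => { recorded.push(['get', params]); cb(null, {}); },
};

route(
    { operation: 'create', tableName: 'lambda-apigateway',
      payload: { Item: { id: '1234ABCD', number: 5 } } },
    stub,
    (err, res) => console.log(err || 'ok')
);
// The stub received a put() with TableName injected from the event
console.log(recorded[0][1].TableName);
```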

In short, this code behaves differently depending on the incoming event: it creates, reads, updates, deletes, or lists items in the DynamoDB table.

Now that our function is created, we need a resource that allows users to call it. This is where API Gateway, the AWS service for creating APIs, comes into play.

Create a REST API and DynamoDB table

To create the API

  1. Open the API Gateway console.

  2. Choose Create API.

  3. In the REST API box, choose Build.

  4. Under Create new API, choose New API.

  5. Under Settings, do the following:

    1. For API name, enter DynamoDBOperations.
    2. For Endpoint Type, choose Regional.
  6. Choose Create API.

Create a resource in the API


In the following steps, you create a resource named DynamoDBManager in your REST API.

To create the resource:

  1. In the API Gateway console, in the Resources tree of your API, make sure that the root (/) level is highlighted. Then, choose Actions, Create Resource.

  2. Under New child resource, do the following:

    1. For Resource Name, enter DynamoDBManager.
    2. Keep Resource Path set to /dynamodbmanager.
  3. Choose Create Resource.

Create a POST method on the resource

In the following steps, you create a POST method on the DynamoDBManager resource that you created in the previous section.

To create the method:

  1. In the API Gateway console, in the Resources tree of your API, make sure that /dynamodbmanager is highlighted. Then, choose Actions, Create Method.

  2. In the small dropdown menu that appears under /dynamodbmanager, choose POST, and then choose the checkmark icon.

  3. In the method's Setup pane, do the following:

    1. For Integration type, choose Lambda Function.
    2. For Lambda Region, choose the same AWS Region as your Lambda function.
    3. For Lambda Function, enter the name of your function (LambdaFunctionOverHttps).
    4. Select Use Default Timeout.
    5. Choose Save.
  4. In the Add Permission to Lambda Function dialog box, choose OK.

Once all these steps are done, your POST method is configured and ready to test.

Create a DynamoDB table

Create the DynamoDB table that your Lambda function uses.

To create the DynamoDB table:

  1. Open the Tables page of the DynamoDB console.

  2. Choose Create table.

  3. Under Table details, do the following:

    1. For Table name, enter lambda-apigateway.
    2. For Partition key, enter id, and keep the data type set as String.
  4. Under Settings, keep the Default settings.

  5. Choose Create table.

Test the API

Once all the elements of our example are created, we need to check that everything works (and that our example is viable). To do so, go to the POST method created earlier in the API and click "Test".

In the body of the request, copy one of the following payloads and click "Test".

Create Object
{
  "operation": "create",
  "tableName": "lambda-apigateway",
  "payload": {
    "Item": {
      "id": "1234ABCD",
      "number": 5
    }
  }
}


Update Object
{
    "operation": "update",
    "tableName": "lambda-apigateway",
    "payload": {
        "Key": {
            "id": "1234ABCD"
        },
        "AttributeUpdates": {
            "number": {
                "Value": 10
            }
        }
    }
}

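The handler also supports the other operations in its switch. For example, following the same pattern as above and based on the handler's `read` branch (`dynamo.get`), a request fetching the item created earlier would look like this:

```json
{
  "operation": "read",
  "tableName": "lambda-apigateway",
  "payload": {
    "Key": {
      "id": "1234ABCD"
    }
  }
}
```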
Conclusion

  • It took me almost an hour to set everything up correctly in the console the first time. Removing all the resources also takes a long time, with a high risk of forgetting some. Moreover, if I wanted to reproduce the same type of architecture, I would have to repeat the same steps and lose another hour: you don't capitalize on the work you've already done!
  • It is very easy to make mistakes when deploying this architecture by hand (I made mistakes both times I deployed via the console, especially on the Lambda policy).
  • To summarize, deploying an architecture this way is not a serious approach. We have to find another way to avoid all the problems mentioned above.

Our standard for avoiding the above-mentioned problems is to use IaC. However, several solutions are available for building a serverless infrastructure, and we may rightly ask which one best fits our needs when dealing with serverless. If you want to learn more, I invite you to read the next article in the series on IaC technologies 🙂