Newsletter System with AWS, Part 1: AWS Lambda and API Gateway

A newsletter system seems like such a good fit for the AWS platform; I was surprised there were no up-to-date tutorials covering this and decided to create one. This first post is essentially a "getting started" tutorial for AWS Lambda and API Gateway, going over all the required concepts and preparing the ground for a follow-up article implementing the actual "business logic" in NodeJS, with DynamoDB as the database and AWS SES for actually sending emails.

The command line will be used for most things; this is not because it's faster but because the GUI does certain things implicitly, and when learning about something I believe it is best to do things manually, to really understand how the pieces fit together. In simple cases such as this one, using the GUI would actually be faster; and for "real" AWS automation, tools like CloudFormation or Terraform would be used - not Bash scripts.

Requirements: an AWS account, the AWS CLI installed and configured (credentials and a default region), and a few command-line tools used throughout - zip, jq and httpie.

Lambda Basics

Lambda is Amazon's function as a service offering, and these functions can be written in a number of languages - one of them is JavaScript. As of June 2020, AWS will run JavaScript Lambdas in a NodeJS 12 environment; 12 is the current LTS (long-term support version), but the most recent major version is 14.

We want to enable users to subscribe to a newsletter - so we'll need a simple function to handle subscription requests. It should persist an entry in a database, generate a confirmation email which contains a generated confirmation link, and then actually send the email. The goal of this article is to explain how these functions can be created and invoked from public endpoints, and how their development lifecycle can be managed - so the functions will only contain placeholder code.

But before we can actually create a Lambda function, we need to discuss permissions. Concepts like users, roles or permissions are generally not available at the language level; conceptually, all code is executed as the same user. However, in a cloud environment multiple users can co-exist so, for every individual function, we need to control who can invoke it and, once it's running, we need to restrict what it can do.

When a Lambda function is created, one of the parameters is the role it will run as, known as the execution role. Roles are supposed to be assumed - either by actual users, or by AWS services. Roles can be created with a specific type of policy that controls who can assume the role. If a role is created without a trust policy which allows the Lambda service to assume it, Lambda will not be able to execute the function. Below, we have a trust policy that allows the Lambda service to assume whatever role the trust policy is associated with. The policy file will be created in the ~/code/newsletter folder, so before running the command make sure it exists:

$ tee ~/code/newsletter/lambda-trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

Next let's actually create the role. It won't do much - all it does is allow the AWS Lambda service to assume it, via the trust policy created above. For this, we must use the create-role subcommand of the IAM service. On success, the response will be a JSON payload from which we need to take note of the ARN, as we'll use it later.

$ aws iam create-role \
    --role-name lambda-ex \
    --assume-role-policy-document file://lambda-trust-policy.json
{
    "Role": {
        "Path": "/",
        "RoleName": "lambda-ex",
        "RoleId": "AROAYCLCPE5WRDYY5QMWJ",
        "Arn": "arn:aws:iam::012345678901:role/lambda-ex",
        "CreateDate": "2020-06-15T16:32:31Z",
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Principal": {
                        "Service": "lambda.amazonaws.com"
                    },
                    "Action": "sts:AssumeRole"
                }
            ]
        }
    }
}
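
Incidentally, rather than copying ARNs out of responses by hand, the --query option (a JMESPath expression supported globally by the AWS CLI) can extract them directly - for example:

$ aws iam get-role --role-name lambda-ex --query Role.Arn --output text
arn:aws:iam::012345678901:role/lambda-ex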

Now let's actually write the code of our first cloud function; for now, it will be just a placeholder. Note that functions can be created and edited from the AWS Lambda management console, using a web-based code editor. But keeping cloud functions as regular JavaScript files does have advantages: it allows a free choice of code editor, works with version control, and facilitates local testing.

$ tee ~/code/newsletter/index.js <<EOF
exports.handler = async function(event, context) {
    return "hello from Lambda!"
}
EOF

Once the file is created, let's package it up in a zip file and upload it as a Lambda function. Note that Lambda functions, like most AWS resources, are region-specific and unless a region is explicitly specified, the function will be created in the default region; the aws configure get region command will reveal the default region.
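
In my case, for example, that prints the region you'll see in the ARNs below:

$ aws configure get region
eu-west-2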

The create-function command can be used to create Lambda functions. The --runtime parameter specifies the required language runtime; here, NodeJS 12 is used. Because the logic of a Lambda function can be spread across multiple JavaScript files, and the same JavaScript file can contain multiple functions, the "entry point" - that is, the JavaScript function to be invoked when the Lambda function is invoked - needs to be specified using the --handler parameter, in the <file-name>.<function-name> format. In our case, the file is index.js and the function name is handler, so the --handler is index.handler (the file name does not include the extension).

The -j flag for zip will remove folders from the archived file names, storing all files at the root of the archive.

$ zip -j ~/code/newsletter/newsletter.zip ~/code/newsletter/index.js
  adding: Users/Mihai/code/newsletter/index.js (164 bytes security) (deflated 45%)

$ aws lambda create-function \
    --function-name subscription-create \
    --zip-file fileb://~/code/newsletter/newsletter.zip \
    --handler index.handler \
    --runtime nodejs12.x \
    --role arn:aws:iam::012345678901:role/lambda-ex
{
    "FunctionName": "subscription-create",
    "FunctionArn": "arn:aws:lambda:eu-west-2:012345678901:function:subscription-create",
    "Runtime": "nodejs12.x",
    "Role": "arn:aws:iam::012345678901:role/lambda-ex",
    "Handler": "index.handler",
    "CodeSize": 513,
    "Description": "",
    "Timeout": 3,
    "MemorySize": 128,
    "LastModified": "2020-06-17T14:15:38.593+0000",
    "CodeSha256": "1qFBRxCAS6Hm6LmkacL/viCRMw12CBWTY0tmqgM+7SM=",
    "Version": "$LATEST",
    "TracingConfig": {
        "Mode": "PassThrough"
    },
    "RevisionId": "a7527e0e-7234-44b5-a7f5-a2aa8f9e878b",
    "State": "Active",
    "LastUpdateStatus": "Successful"
}
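
Before involving any other services, the function can be sanity-checked with a direct invocation; the invoke subcommand needs the name of a file to write the function's response to, so something along these lines should print the placeholder message:

$ aws lambda invoke --function-name subscription-create /tmp/out.json
{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}

$ cat /tmp/out.json
"hello from Lambda!"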

Let's save the function's ARN in an environment variable, as it will be required later - along with some other useful identifiers:

$ REGION=$(aws configure get region)
$ USER_ID="012345678901"
$ FN_ARN="arn:aws:lambda:$REGION:$USER_ID:function:subscription-create"
$ ROLE_ARN="arn:aws:iam::012345678901:role/lambda-ex"

Speaking of shell environment variables - when trying to "read" from a variable that is not set, by default the shell returns an empty string. Some code relies on this behaviour, but in many cases it's a genuine mistake that can lead to unintended consequences. Therefore, if using a bash shell, consider activating the nounset option, which makes reading from unset variables an error condition. To activate it for the currently running shell: set -o nounset.
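
A quick illustration of the difference - note the deliberately misspelled variable name:

$ echo "deploying to: $REGOIN"
deploying to:

$ set -o nounset
$ echo "deploying to: $REGOIN"
bash: REGOIN: unbound variable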

Tagging

As this function is the first resource specific to the subscription application, now is a good time to start tagging these resources. This helps with managing costs, and with getting an overall view of which resources are involved in a particular application. To tag Lambda functions, we can use the aws lambda tag-resource command. In AWS a tag has two components - a key, and a value. Below, the key - also known as the tag's name - is project, and the value is my-org/www/newsletter. Of course, you can choose a different naming convention, but keep in mind that tag names and values are case-sensitive. In order for these tags to be useful, they have to be applied consistently; to make this easier, I'll save the tag value in an environment variable - and then tag the Lambda function:

$ PROJECT_TAG="my-org/www/newsletter"

$ aws lambda tag-resource \
    --resource $FN_ARN \
    --tags project=$PROJECT_TAG

To find all resources with a particular tag, we have the aws resourcegroupstaggingapi command. The aws resource-groups command offers more advanced query capabilities - for example, it allows searching by resource type, not only tags. The disadvantage is that the query syntax is a bit more cumbersome, so we're going to stick with resourcegroupstaggingapi:

$ aws resourcegroupstaggingapi get-resources --tag-filters "Key=project,Values=$PROJECT_TAG"
{
    "ResourceTagMappingList": [
        {
            "ResourceARN": "arn:aws:lambda:eu-west-2:012345678901:function:subscription-create",
            "Tags": [
                {
                    "Key": "project",
                    "Value": "my-org/www/newsletter"
                }
            ]
        }
    ]
}

To extract the ARNs from such a query, we can use jq. This will work for both resourcegroupstaggingapi and resource-groups, because both have the same output format. The command below lists all AWS resources for the current account which have the value of the $PROJECT_TAG environment variable, my-org/www/newsletter, as their project tag.

$ aws resourcegroupstaggingapi get-resources --tag-filters "Key=project,Values=$PROJECT_TAG" | jq -r .ResourceTagMappingList[].ResourceARN
arn:aws:lambda:eu-west-2:012345678901:function:subscription-create

Tags are also helpful when setting up budgets - but we're not going to cover that here.

Creating the Newsletter API

We have a function that is "in the cloud" - but how exactly can we invoke it? One way is over HTTP. To create HTTP endpoints in AWS, we can use the API Gateway service. The docs tell us that it can be used to create HTTP, REST or WebSocket APIs. This left me wondering about the distinction between REST and HTTP, because REST APIs are normally consumed over HTTP; this document explains what Amazon means. Basically, with REST you have to define your API in terms of REST resources - whereas with plain HTTP you can create endpoints directly. Another difference is that AWS added HTTP APIs more recently; being newer functionality, they do not yet have all the features of REST APIs. I also noticed that REST APIs seem to be better documented - so I'll be using REST.

So, let's create a REST API, using the create-rest-api subcommand. Such an API can be edge-optimized or regional. By default, APIs are edge-optimized - meaning the CloudFront CDN network will be used automatically. A regionally-deployed API is suitable if most requests to the API are expected to come from the same AWS region; in our case, we'll stick with the default.

$ aws apigateway create-rest-api \
        --name 'Newsletter' \
        --description 'Newsletter functionality for my-org.com'\
        --tags project=$PROJECT_TAG
{
    "id": "zdpya2a6ca",
    "name": "Newsletter",
    "description": "Newsletter functionality for my-org.com",
    "createdDate": 1592318574,
    "apiKeySource": "HEADER",
    "endpointConfiguration": {
        "types": [
            "EDGE"
        ]
    },
    "tags": {
        "project": "my-org/www/newsletter"
    }
}

$ API_ID=zdpya2a6ca

REST is built around the concept of "resources", which are acted upon using HTTP "verbs". In our use case, the application-specific "resource" would be a subscription; by convention, the HTTP verb (or method) associated with creating resources is POST. So to create a subscription, we'd send a POST HTTP request to an endpoint with the resource name (like /api/subscription), with the subscription payload as the request body.
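
To make this concrete: once everything is wired up (a few sections below), a subscription request could look roughly like the one here - the email field is only for illustration, as the actual payload is the subject of the follow-up article:

$ http POST https://$API_ID.execute-api.$REGION.amazonaws.com/dev/subscriptions email=jane@example.com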

Before the API can be made aware of the "subscription" resource, we need to get the ID of the resource at the root location, /, also known as the root resource:

$ aws apigateway get-resources --rest-api-id $API_ID --region $REGION
{
    "items": [
        {
            "id": "o8yzyxal4h",
            "path": "/"
        }
    ]
}

The ID will be used as the value of the --parent-id parameter to create-resource, which also requires the ID of the API:

$ aws apigateway create-resource \
      --rest-api-id $API_ID \
      --region $REGION \
      --parent-id o8yzyxal4h \
      --path-part subscriptions
{
    "id": "x5z38s",
    "parentId": "o8yzyxal4h",
    "pathPart": "subscriptions",
    "path": "/subscriptions"
}

$ RESOURCE_ID=x5z38s

After creating a resource and mapping it to an endpoint (/subscriptions), the supported HTTP methods (such as GET, POST, etc.) need to be configured. This is done with the put-method subcommand, and it does not configure the actual functionality - that is, what happens when an HTTP request arrives using the configured method. Rather, it is just a way of "whitelisting" certain methods so that they will be accepted; the actual functionality will be configured later.

$ aws apigateway put-method \
       --rest-api-id $API_ID \
       --region $REGION \
       --resource-id $RESOURCE_ID \
       --http-method POST \
       --authorization-type "NONE" 
{
    "httpMethod": "POST",
    "authorizationType": "NONE",
    "apiKeyRequired": false
}
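
The result can be double-checked with the get-method subcommand, which returns a similar payload:

$ aws apigateway get-method \
    --rest-api-id $API_ID \
    --resource-id $RESOURCE_ID \
    --http-method POST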

Associate an Endpoint with a Lambda Function

We have an endpoint, and we have a Lambda function; now it's time to connect the two, so that the endpoint will forward the requests it receives to the Lambda function. When we connect an API Gateway endpoint to some other AWS service, we're creating an "integration" between the two services.

When it comes to Gateway/Lambda integrations, AWS has two options. The first one allows configuring transformation and validation operations on the request before it is passed on to the Lambda function. Transformations are performed using the Velocity template engine (similar to XSLT, but more generic) and validations are configured using the draft-04 version of JSON Schema. This type of integration is known as a "custom" integration.

The second type of integration will simply forward the request to the Lambda function as-is; AWS calls this a "Lambda proxy" integration. The forwarded data includes meta-information such as the HTTP headers found on the request, as received by the API.

Integrations are created with the put-integration subcommand of apigateway. For simple cases, a proxy integration is sufficient, and can be requested by giving "AWS_PROXY" as the value of the --type argument. We also need to specify the resource ID and the method (HTTP verb) to identify the "source", while the --uri parameter identifies the "destination" - in other words, the actual piece of functionality (in this case, the Lambda function) to be invoked for this particular integration.

$ aws apigateway put-integration \
        --region $REGION \
        --rest-api-id $API_ID \
        --resource-id $RESOURCE_ID \
        --http-method POST \
        --type AWS_PROXY \
        --integration-http-method POST \
        --uri arn:aws:apigateway:$REGION:lambda:path/2015-03-31/functions/$FN_ARN/invocations
{
    "type": "AWS_PROXY",
    "httpMethod": "POST",
    "uri": "arn:aws:apigateway:eu-west-2:lambda:path/2015-03-31/functions/arn:aws:lambda:eu-west-2:012345678901:function:subscription-create/invocations",
    "passthroughBehavior": "WHEN_NO_MATCH",
    "timeoutInMillis": 29000,
    "cacheNamespace": "x5z38s",
    "cacheKeyParameters": []
}

Deploy the Endpoint

The integration connected the /subscriptions endpoint with the Lambda function - but the API is still not deployed. As you might know from "traditional" development, when you deploy an API you deploy it somewhere - to an environment - whether that's development, staging or production. In AWS there is a similar concept of "stages", but things work a bit differently than one might expect.

Normally, before you can deploy something you need to get the target environment ready. In AWS, you create the stage after the deployment; essentially, a stage is a way of attaching behaviour to an existing deployment. A stage can have associated variables, which can be used to select which backend endpoints to contact; in the case of Lambda proxy integrations, these variables are also passed to the function, as part of the incoming event. In AWS, stages are the mechanism through which much of the configuration normally associated with an API takes place - such as throttling, caching, logging and "canary" deployments.

The create-deployment subcommand creates deployments, and it takes --stage-name as an argument; if the specified stage does not exist, it will be created. Note that each create-deployment invocation is akin to triggering a run of a Jenkins job or some other task runner: a deployment is not a reusable definition, it's an actual act of deploying.

$ aws apigateway create-deployment \
        --rest-api-id $API_ID \
        --stage-name dev \
        --stage-description 'Development Stage' \
        --description 'First deployment to the dev stage'
{
    "id": "kyty58",
    "description": "First deployment to the dev stage",
    "createdDate": 1592340382
}

If the above is successful, then the API is live and public. The invocation URL has a specific template (https://{restapi-id}.execute-api.{region}.amazonaws.com/{stageName}) and we can use tools like curl or httpie to poke at it:

$ http https://$API_ID.execute-api.$REGION.amazonaws.com/dev
HTTP/1.1 403 Forbidden
Connection: keep-alive
Content-Length: 42
Content-Type: application/json
Date: Sat, 06 Jun 2020 21:27:47 GMT
x-amz-apigw-id: NuaM_GE-LPEFh4w=
x-amzn-ErrorType: MissingAuthenticationTokenException
x-amzn-RequestId: d7594b35-b595-4315-9627-05eabc380a29

{
    "message": "Missing Authentication Token"
}

Authentication token? Didn't we configure the endpoint with --authorization-type "NONE"? That is true, but that was for a specific method (POST) on a specific resource. Let's try again, by appending the resource's path after the stage name, and using POST:

$ http POST https://$API_ID.execute-api.$REGION.amazonaws.com/dev/subscriptions
HTTP/1.1 500 Internal Server Error
Connection: keep-alive
Content-Length: 36
Content-Type: application/json
Date: Sat, 06 Jun 2020 21:32:44 GMT
x-amz-apigw-id: Nua7bHmKLPEFS8A=
x-amzn-ErrorType: InternalServerErrorException
x-amzn-RequestId: 63c8b583-0b15-481c-ad21-4974b8b88b8a

{
    "message": "Internal server error"
}

Still an error message, but it's no longer an authorization issue. Before troubleshooting the problem, let's take care of another issue: this is a development environment, and it should not be publicly exposed. For a truly private API we would have to use a VPC - but even a public API can be protected by requiring an API key to be included with each request, so let's do that.

Protect an Endpoint by Requiring an API Key - Usage Plans

First we'll create the API key with create-api-key:

$ aws apigateway create-api-key \
    --name "Dev API Key" \
    --description "Used for development" \
    --enabled
{
    "id": "ts28bqo19k",
    "value": "wqGU36ngAd5GKtHxKTN7C3CNnpXa8YXi5a8dF2T0",
    "name": "Dev API Key",
    "description": "Used for development",
    "enabled": true,
    "createdDate": 1592340512,
    "lastUpdatedDate": 1592340512,
    "stageKeys": []
}

$ API_KEY_ID=ts28bqo19k

$ API_KEY=wqGU36ngAd5GKtHxKTN7C3CNnpXa8YXi5a8dF2T0

API keys are intended to be distributed to third-party developers who consume the API - not necessarily as an authorization mechanism, but to enforce quotas and throttling. Accordingly, API keys can only be used as part of a usage plan, which is then associated with a stage which, in turn, points to a deployment of an API.

When creating a usage plan we'll have to provide throttling settings. AWS uses the token bucket algorithm; the idea is that when a request arrives, it takes a token from the bucket. New tokens are added to the bucket at a constant rate, until the bucket reaches its maximum capacity. This mechanism means that the server will be able to cope with bursts, as long as there are still tokens in the bucket; the docs provide some concrete examples.

The rate limit is the bucket refill rate, measured in tokens per second. With the settings below (rateLimit set to 5) a new token is added every 200ms, so you can make 5 requests per second indefinitely. You can also make a burst of 10 requests (the burstLimit is the bucket's capacity), but that will exhaust the available tokens, and any further requests will be dropped until new tokens become available. Dropped requests do not reach the integration; for each such request, the client will receive an HTTP response with code 429. Note that this is a client error - it is the client's responsibility to back off and stop making further requests.

The --quota option is much easier to understand - for the given time period (MONTH), this usage plan only allows 100 requests. The offset can be used to set an initial value, as if that many requests have already been made.

$ aws apigateway create-usage-plan \
    --name "The Basic Plan" \
    --description "Limited requests" \
    --throttle burstLimit=10,rateLimit=5 \
    --quota limit=100,offset=0,period=MONTH \
    --tags project=$PROJECT_TAG
{
    "id": "sf70th",
    "name": "The Basic Plan",
    "description": "Limited requests",
    "apiStages": [],
    "throttle": {
        "burstLimit": 10,
        "rateLimit": 5.0
    },
    "quota": {
        "limit": 100,
        "offset": 0,
        "period": "MONTH"
    },
    "tags": {
        "project": "my-org/www/newsletter"
    }
}

$ USAGE_PLAN_ID=sf70th

Once a usage plan is ready, it needs to be associated with a stage. This is one of those operations which cannot be achieved with "porcelain", top-level commands; we need to use the cumbersome "patch" syntax, which is described here.

$ aws apigateway update-usage-plan \
    --usage-plan-id $USAGE_PLAN_ID \
    --patch-operations op=add,path="/apiStages",value="$API_ID:dev"
{
    "id": "sf70th",
    "name": "The Basic Plan",
    "description": "Limited requests",
    "apiStages": [
        {
            "apiId": "zdpya2a6ca",
            "stage": "dev"
        }
    ],
    "throttle": {
        "burstLimit": 10,
        "rateLimit": 5.0
    },
    "quota": {
        "limit": 100,
        "offset": 0,
        "period": "MONTH"
    },
    "tags": {
        "project": "my-org/www/newsletter"
    }
}

For every resource method (HTTP verb) we have the option of making it require an API key. Again, we'll use the patch syntax for this:

$ aws apigateway update-method \
    --rest-api-id $API_ID \
    --resource-id $RESOURCE_ID \
    --http-method POST \
    --patch-operations op=replace,path="/apiKeyRequired",value="true"
{
    "httpMethod": "POST",
    "authorizationType": "NONE",
    "apiKeyRequired": true,
    "methodIntegration": {
        "type": "AWS_PROXY",
        "httpMethod": "POST",
        "uri": "arn:aws:apigateway:eu-west-2:lambda:path/2015-03-31/functions/arn:aws:lambda:eu-west-2:012345678901:function:subscription-create/invocations",
        "passthroughBehavior": "WHEN_NO_MATCH",
        "timeoutInMillis": 29000,
        "cacheNamespace": "x5z38s",
        "cacheKeyParameters": []
    }
}

Once an API has been modified, it needs to be re-deployed:

$ aws apigateway create-deployment \
        --rest-api-id $API_ID \
        --stage-name dev \
        --description 'Second deployment to the dev stage'

Now, instead of the 500, we're getting a 403 - because the /subscriptions endpoint now requires an API key:

$ http POST https://$API_ID.execute-api.$REGION.amazonaws.com/dev/subscriptions
HTTP/1.1 403 Forbidden
Connection: keep-alive
Content-Length: 23
Content-Type: application/json
Date: Tue, 09 Jun 2020 20:02:59 GMT
x-amz-apigw-id: N4GmCFVULPEFo7Q=
x-amzn-ErrorType: ForbiddenException
x-amzn-RequestId: 5525066b-3ddf-4740-ad33-782513d45be4

{
    "message": "Forbidden"
}

To actually associate a key with an endpoint, the key needs to be associated with the usage plan - which is associated with the corresponding API/stage pair. So the create-usage-plan-key subcommand requires a usage plan ID and the ID of a pre-existing API key - and it will associate the key with the usage plan. The value for the --key-type parameter is API_KEY, and that's the only valid value for it at the moment. No re-deployment is necessary for the change to take effect.

$ aws apigateway create-usage-plan-key \
    --usage-plan-id $USAGE_PLAN_ID \
    --key-type "API_KEY" \
    --key-id $API_KEY_ID
{
    "id": "ts28bqo19k",
    "type": "API_KEY",
    "name": "Dev API Key"
}

After this we can supply the API key as the value of the x-api-key header, and we'll get the 500 again - instead of the 403:

$ http POST https://$API_ID.execute-api.$REGION.amazonaws.com/dev/subscriptions x-api-key:$API_KEY
HTTP/1.1 500 Internal Server Error
Connection: keep-alive
Content-Length: 36
Content-Type: application/json
Date: Tue, 09 Jun 2020 21:02:06 GMT
x-amz-apigw-id: N4PQQFtErPEFjug=
x-amzn-ErrorType: InternalServerErrorException
x-amzn-RequestId: d42ce746-44e8-4359-8e4d-3a2c0c759f61

{
    "message": "Internal server error"
}
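
As a quick aside - with the usage plan now active, the throttling settings from earlier can be observed with a crude burst test. The placeholder function still produces a malformed response, so expect mostly 500s (requests that reached the integration) and, if enough requests land within the burst window, some 429s; exact numbers will vary with latency. Keep in mind that every request also counts against the 100-requests-per-month quota:

$ seq 1 15 | xargs -P 15 -I{} curl -s -o /dev/null -w "%{http_code}\n" \
    -X POST -H "x-api-key: $API_KEY" \
    "https://$API_ID.execute-api.$REGION.amazonaws.com/dev/subscriptions" \
  | sort | uniq -c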

Troubleshooting - Programmatically Invoke an HTTP Method

After the detour of setting up the API key, let's take a look at fixing the 500. The API Gateway service allows programmatic invocation of specific endpoint methods; in this case, the endpoint is /subscriptions and the method is POST. We will need the ID of the resource with which the endpoint is associated, and the API ID - and then a test invocation can be performed with test-invoke-method. This allows us to bypass the whole API Gateway/HTTP layer, and invoke the method directly, "from the inside".

$ aws apigateway test-invoke-method \
    --rest-api-id $API_ID \
    --resource-id $RESOURCE_ID \
    --http-method POST
{
    "status": 500,
    "body": "{\"message\": \"Internal server error\"}",
    "headers": {
        "x-amzn-ErrorType": "InternalServerErrorException"
    },
    "multiValueHeaders": {
        "x-amzn-ErrorType": [
            "InternalServerErrorException"
        ]
    },
    "log": "Execution log for request 62cf5973-33dd-479a-962e-889d2ba03086\nWed Jun 17 13:46:41 UTC 2020 : Starting execution for request: 62cf5973-33dd-479a-962e-889d2ba03086\nWed Jun 17 13:46:41 UTC 2020 : HTTP Method: POST, Resource Path: /subscriptions\nWed Jun 17 13:46:41 UTC 2020 : Method request path: {}\nWed Jun 17 13:46:41 UTC 2020 : Method request query string: {}\nWed Jun 17 13:46:41 UTC 2020 : Method request headers: {}\nWed Jun 17 13:46:41 UTC 2020 : Method request body before transformations: \nWed Jun 17 13:46:41 UTC 2020 : Endpoint request URI: https://lambda.eu-west-2.amazonaws.com/2015-03-31/functions/arn:aws:lambda:eu-west-2:012345678901:function:subscription-create/invocations\nWed Jun 17 13:46:41 UTC 2020 : Endpoint request headers: {x-amzn-lambda-integration-tag=62cf5973-33dd-479a-962e-889d2ba03086, Authorization=************************************************************************************************************************************************************************************************************************************************************************************************************************ad52f1, X-Amz-Date=20200617T134641Z, x-amzn-apigateway-api-id=zdpya2a6ca, X-Amz-Source-Arn=arn:aws:execute-api:eu-west-2:012345678901:zdpya2a6ca/test-invoke-stage/POST/subscriptions, Accept=application/json, User-Agent=AmazonAPIGateway_zdpya2a6ca, X-Amz-Security-Token=IQoJb3JpZ2luX2VjEF0aCWV1LXdlc3QtMiJGMEQCIBBc55aL9qH7rG5QFdEXkWN2eYH8C3pnS9PGycRzKIz9AiAQSXm8PdAlhlZgmTyFgKhGY3jXXmHVZHuBgCJAvGzZCyq9AwjW//////////8BEAEaDDU0NDM4ODgxNjY2MyIMaczaS4jiUz0m7QdDKpEDGg4uzgVPP22K1w+i8j2F2lm5+cVXu3xFclikbFya4Gojuv15uc79oAu3GTTu6H3URuPf0Q4WWilf1+IhW7D5KCkWTkQw4hTaifrLWltexrh9CD/50Wj4lBEywpfMnY4CBVMcz7811foNP8 [TRUNCATED]\nWed Jun 17 13:46:41 UTC 2020 : Endpoint request body after transformations: {\"resource\":\"/subscriptions\",\"path\":\"/subscriptions\",\"httpMethod\":\"POST\",\"headers\":null,\"multiValueHeaders\":null,\"queryStringParameters\":null,\"multiValueQueryStringParameters\":null,\"pathParameters\":null,\"stageVariables\":null,\"requestContext\":{\"resourceId\":\"x5z38s\",\"resourcePath\":\"/subscriptions\",\"httpMethod\":\"POST\",\"extendedRequestId\":\"ORm-TGoJrPEFQcQ=\",\"requestTime\":\"17/Jun/2020:13:46:41 +0000\",\"path\":\"/subscriptions\",\"accountId\":\"012345678901\",\"protocol\":\"HTTP/1.1\",\"stage\":\"test-invoke-stage\",\"domainPrefix\":\"testPrefix\",\"requestTimeEpoch\":1592401601871,\"requestId\":\"62cf5973-33dd-479a-962e-889d2ba03086\",\"identity\":{\"cognitoIdentityPoolId\":null,\"cognitoIdentityId\":null,\"apiKey\":\"test-invoke-api-key\",\"principalOrgId\":null,\"cognitoAuthenticationType\":null,\"userArn\":\"arn:aws:iam::012345678901:user/mrotaru\",\"apiKeyId\":\"test-invoke-api-key-id\",\"userAgent\":\"aws-cli/1.18.75 Python/3.8.2 Windows/10 botocore/1.16.25\",\"accountId\":\"012345678901\",\"caller\":\"AIDAYCLCPE5WYSFXDD4A [TRUNCATED]\nWed Jun 17 13:46:41 UTC 2020 : Sending request to https://lambda.eu-west-2.amazonaws.com/2015-03-31/functions/arn:aws:lambda:eu-west-2:012345678901:function:subscription-create/invocations\nWed Jun 17 13:46:41 UTC 2020 : Execution failed due to configuration error: Invalid permissions on Lambda function\nWed Jun 17 13:46:41 UTC 2020 : Method completed with status: 500\n",
    "latency": 9
}

There's clearly some useful information in the log property, but it's difficult to read as-is; jq to the rescue:

$ aws apigateway test-invoke-method \
    --rest-api-id $API_ID \
    --resource-id $RESOURCE_ID \
    --http-method POST \
  | jq -r .log

Execution log for request 26d544b9-2d5a-4e0e-a81e-782dab9e347c
Wed Jun 17 14:18:54 UTC 2020 : Starting execution for request: 26d544b9-2d5a-4e0e-a81e-782dab9e347c
Wed Jun 17 14:18:54 UTC 2020 : HTTP Method: POST, Resource Path: /subscriptions
Wed Jun 17 14:18:54 UTC 2020 : Method request path: {}
Wed Jun 17 14:18:54 UTC 2020 : Method request query string: {}
Wed Jun 17 14:18:54 UTC 2020 : Method request headers: {}
Wed Jun 17 14:18:54 UTC 2020 : Method request body before transformations:
Wed Jun 17 14:18:54 UTC 2020 : Endpoint request URI: https://lambda.eu-west-2.amazonaws.com/2015-03-31/functions/arn:aws:lambda:eu-west-2:012345678901:function:subscription-create/invocations
Wed Jun 17 14:18:54 UTC 2020 : Endpoint request headers: {x-amzn-lambda-integration-tag=26d544b9-2d5a-4e0e-a81e-782dab9e347c, Authorization=************************************************************************************************************************************************************************************************************************************************************************************************************************f0b868, X-Amz-Date=20200617T141854Z, x-amzn-apigateway-api-id=zdpya2a6ca, X-Amz-Source-Arn=arn:aws:execute-api:eu-west-2:012345678901:zdpya2a6ca/test-invoke-stage/POST/subscriptions, Accept=application/json, User-Agent=AmazonAPIGateway_zdpya2a6ca, X-Amz-Security-Token=IQoJb3JpZ2luX2VjEF4aCWV1LXdlc3QtMiJHMEUCIQC8FcETbRVn5/fojL/0kBo9Bx1nPU6mJqW+F/+GAQ2/mAIgby9Nk8z0SMKK/UqGDWj6qqE4DfF47W6AwtRe+0t3WJAqvQMI1///////////ARABGgw1NDQzODg4MTY2NjMiDGR2XVzFwuMy1uvyRSqRA7oruNumJizTPF9WB0uPtY+z76hKdkfn6t26hTWLrqYR8mmGiKia48mlCLEvDW8s8nD97yxo4IwxYyw5JPRdLQQg+W1E+O5BXNGxGtlnRAdbQM0W/39b16/+w+5JdmZ/IJXVdcgQ8/dO6I [TRUNCATED]
Wed Jun 17 14:18:54 UTC 2020 : Endpoint request body after transformations: {"resource":"/subscriptions","path":"/subscriptions","httpMethod":"POST","headers":null,"multiValueHeaders":null,"queryStringParameters":null,"multiValueQueryStringParameters":null,"pathParameters":null,"stageVariables":null,"requestContext":{"resourceId":"x5z38s","resourcePath":"/subscriptions","httpMethod":"POST","extendedRequestId":"ORrsPHUErPEFoVw=","requestTime":"17/Jun/2020:14:18:54 +0000","path":"/subscriptions","accountId":"012345678901","protocol":"HTTP/1.1","stage":"test-invoke-stage","domainPrefix":"testPrefix","requestTimeEpoch":1592403534277,"requestId":"26d544b9-2d5a-4e0e-a81e-782dab9e347c","identity":{"cognitoIdentityPoolId":null,"cognitoIdentityId":null,"apiKey":"test-invoke-api-key","principalOrgId":null,"cognitoAuthenticationType":null,"userArn":"arn:aws:iam::012345678901:user/mrotaru","apiKeyId":"test-invoke-api-key-id","userAgent":"aws-cli/1.18.75 Python/3.8.2 Windows/10 botocore/1.16.25","accountId":"012345678901","caller":"AIDAYCLCPE5WYSFXDD4A [TRUNCATED]
Wed Jun 17 14:18:54 UTC 2020 : Sending request to https://lambda.eu-west-2.amazonaws.com/2015-03-31/functions/arn:aws:lambda:eu-west-2:012345678901:function:subscription-create/invocations
Wed Jun 17 14:18:54 UTC 2020 : Execution failed due to configuration error: Invalid permissions on Lambda function
Wed Jun 17 14:18:54 UTC 2020 : Method completed with status: 500

So we're still not done with permissions. We need to ensure that the API Gateway service is allowed to invoke the Lambda function. We can specify this as a resource policy - in this case, the resource is the Lambda function, and we apply the policy to it.

$ aws lambda add-permission \
    --statement-id "AllowExecutionFromAPIGateway" \
    --function-name "arn:aws:lambda:$REGION:$USER_ID:function:subscription-create" \
    --action "lambda:InvokeFunction" \
    --principal "apigateway.amazonaws.com" \
    --source-arn "arn:aws:execute-api:$REGION:$USER_ID:$API_ID/*/POST/subscriptions"
{
    "Statement": "{\"Sid\":\"AllowExecutionFromAPIGateway\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"apigateway.amazonaws.com\"},\"Action\":\"lambda:InvokeFunction\",\"Resource\":\"arn:aws:lambda:eu-west-2:012345678901:function:subscription-create\",\"Condition\":{\"ArnLike\":{\"AWS:SourceArn\":\"arn:aws:execute-api:eu-west-2:012345678901:zdpya2a6ca/*/POST/subscriptions\"}}}"
}
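
The resource policy statements attached to the function so far can be inspected with get-policy; the policy itself comes back as an escaped JSON string, so a second pass through jq makes it readable:

$ aws lambda get-policy --function-name subscription-create | jq -r .Policy | jq .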

In addition, once the Lambda function starts logging things (via console.log statements, for example), those logs are sent to a CloudWatch log group - so we need to make sure the function has the appropriate permissions for that. For this, we can create a policy and then attach it to the existing lambda-ex role:

$ tee ~/code/newsletter/lambda-logs-policy.json <<EOF
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "logs:CreateLogGroup",
            "Resource": "arn:aws:logs:eu-west-2:$USER_ID:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:eu-west-2:$USER_ID:log-group:/aws/lambda/subscription-create:*"
            ]
        }
    ]
}
EOF

$ aws iam create-policy \
    --policy-name "AllowLambdaToLog" \
    --policy-document file://~/code/newsletter/lambda-logs-policy.json
{
    "Policy": {
        "PolicyName": "AllowLambdaToLog",
        "PolicyId": "ANPAYCLCPE5WQP3BM3AHS",
        "Arn": "arn:aws:iam::012345678901:policy/AllowLambdaToLog",
        "Path": "/",
        "DefaultVersionId": "v1",
        "AttachmentCount": 0,
        "PermissionsBoundaryUsageCount": 0,
        "IsAttachable": true,
        "CreateDate": "2020-06-11T20:00:35Z",
        "UpdateDate": "2020-06-11T20:00:35Z"
    }
}

$ aws iam attach-role-policy \
    --role-name lambda-ex \
    --policy-arn arn:aws:iam::$USER_ID:policy/AllowLambdaToLog
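
A quick check that the policy is indeed attached to the role:

$ aws iam list-attached-role-policies --role-name lambda-ex
{
    "AttachedPolicies": [
        {
            "PolicyName": "AllowLambdaToLog",
            "PolicyArn": "arn:aws:iam::012345678901:policy/AllowLambdaToLog"
        }
    ]
}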

Now the function should have all the required permissions, so let's try another test invocation:

$ aws apigateway test-invoke-method \
    --rest-api-id $API_ID \
    --resource-id $RESOURCE_ID \
    --http-method POST \
  | jq -r .log | tail -n 7
Wed Jun 17 15:18:16 UTC 2020 : Sending request to https://lambda.eu-west-2.amazonaws.com/2015-03-31/functions/arn:aws:lambda:eu-west-2:012345678901:function:subscription-create/invocations
Wed Jun 17 15:18:16 UTC 2020 : Received response. Status: 200, Integration latency: 10 ms
Wed Jun 17 15:18:16 UTC 2020 : Endpoint response headers: {Date=Wed, 17 Jun 2020 15:18:16 GMT, Content-Type=application/json, Content-Length=20, Connection=keep-alive, x-amzn-RequestId=d174993c-661e-4afa-95cd-f4575b9dfb1e, x-amzn-Remapped-Content-Length=0, X-Amz-Executed-Version=$LATEST, X-Amzn-Trace-Id=root=1-5eea3438-793aa206bfa4d92d84321842;sampled=0}
Wed Jun 17 15:18:16 UTC 2020 : Endpoint response body before transformations: "hello from Lambda!"
Wed Jun 17 15:18:16 UTC 2020 : Execution failed due to configuration error: Malformed Lambda proxy response
Wed Jun 17 15:18:16 UTC 2020 : Method completed with status: 502

We're getting a different error this time - but it's not related to permissions. To fix the "Malformed Lambda proxy response" error, we must update the function to use the appropriate format for the return value. To understand the process of updating a Lambda function, we must discuss publishing, versions and aliases.
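
Before getting to that - for reference, this is roughly the shape a proxy integration expects the handler to return; statusCode and a string body are the essential parts, the rest is optional:

{
    "statusCode": 200,
    "headers": { "Content-Type": "application/json" },
    "body": "{\"message\": \"the body must be a string\"}",
    "isBase64Encoded": false
}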

Updating a Lambda Function - Versions

When a Lambda function is first created, it is assigned the special $LATEST version. The function can be updated, but no new versions will be created; instead, $LATEST is updated to reflect the latest changes. Only when we explicitly choose to publish the function will AWS create a dedicated version, by cloning the current $LATEST. A version is an immutable snapshot which includes the function's code, as well as its dependencies and associated meta-information - such as environment variables and the runtime version. By default, each version gets a name which is just an integer, incremented with every published version. Note that only the $LATEST version can be changed - all other, published versions are immutable.

The integration we set up previously, between the /subscriptions endpoint and the subscription-create Lambda function, does not explicitly mention a version - it points to the Lambda function using the so-called "unqualified" ARN. In this situation, the $LATEST version of the function is used. An important consequence is that changes made to the function will be reflected in any existing integrations, regardless of API deployments or stages. Note that "publish" has a specific meaning in the context of Lambda functions; perhaps a more suitable name would be "snapshot". We'll take a look at a better way of managing the lifecycle later, but for now let's just update the function so it uses the correct return format, publish it, and verify that the API endpoint can be invoked without errors.

After updating the JavaScript file locally, the update-function-code subcommand can be used to "push" the updated version to the Lambda service; these changes will be reflected in the $LATEST version:

$ tee ~/code/newsletter/index.js <<EOF
exports.handler = async function(event, context) {
    return {
      statusCode: 200,
      body: "hello from Lambda!"
    }
}
EOF

$ zip -j ~/code/newsletter/newsletter.zip ~/code/newsletter/index.js
updating: index.js (164 bytes security) (deflated 45%)

$ aws lambda update-function-code \
    --function-name $FN_ARN \
    --zip-file fileb://~/code/newsletter/newsletter.zip
{
    "FunctionName": "subscription-create",
    "FunctionArn": "arn:aws:lambda:eu-west-2:012345678901:function:subscription-create",
    "Runtime": "nodejs12.x",
    "Role": "arn:aws:iam::012345678901:role/lambda-ex",
    "Handler": "index.handler",
    "CodeSize": 537,
    "Description": "",
    "Timeout": 3,
    "MemorySize": 128,
    "LastModified": "2020-06-18T18:12:03.484+0000",
    "CodeSha256": "k+B+6gFCUNrFff2X80/tD/K5C2W1h/XnT/eK5ivKoGU=",
    "Version": "$LATEST",
    "TracingConfig": {
        "Mode": "PassThrough"
    },
    "RevisionId": "405c9714-661b-4969-bba1-ba54230a55bb",
    "State": "Active",
    "LastUpdateStatus": "Successful"
}

From the response, we can see that we updated the $LATEST version - and since the integration points to the function's unqualified ARN (which always resolves to $LATEST), there is no need to create a new API deployment and we can finally use the endpoint without errors:

$ http POST https://$API_ID.execute-api.$REGION.amazonaws.com/dev/subscriptions x-api-key:$API_KEY
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 18
Content-Type: application/json
Date: Thu, 18 Jun 2020 18:12:12 GMT
Via: 1.1 7dc4dc0842848b027020e8c90aa3042c.cloudfront.net (CloudFront)
X-Amz-Cf-Id: PQTmBQFHXEY9lh52UVHDBAUgPfHvEDBMVTuNMCtgHiK1b8ULHp6asA==
X-Amz-Cf-Pop: LHR3-C1
X-Amzn-Trace-Id: Root=1-5eebae7c-9f2125e630db301fd2e03311;Sampled=0
X-Cache: Miss from cloudfront
x-amz-apigw-id: OVgzeGuKLPEFg-w=
x-amzn-RequestId: 8b4630a1-73fa-4e81-832d-553c30810d9d

hello from Lambda!

All the versions of a function can be revealed with the list-versions-by-function subcommand:

$ aws lambda list-versions-by-function --function-name subscription-create
{
    "Versions": [
        {
            "FunctionName": "subscription-create",
            "FunctionArn": "arn:aws:lambda:eu-west-2:012345678901:function:subscription-create:$LATEST",
            "Runtime": "nodejs12.x",
            "Role": "arn:aws:iam::012345678901:role/lambda-ex",
            "Handler": "index.handler",
            "CodeSize": 537,
            "Description": "",
            "Timeout": 3,
            "MemorySize": 128,
            "LastModified": "2020-06-18T18:12:03.484+0000",
            "CodeSha256": "k+B+6gFCUNrFff2X80/tD/K5C2W1h/XnT/eK5ivKoGU=",
            "Version": "$LATEST",
            "TracingConfig": {
                "Mode": "PassThrough"
            },
            "RevisionId": "405c9714-661b-4969-bba1-ba54230a55bb"
        }
    ]
}

To publish the current $LATEST - and, therefore, create a new version - use the publish-version subcommand.

$ aws lambda publish-version --function-name subscription-create
{
    "FunctionName": "subscription-create",
    "FunctionArn": "arn:aws:lambda:eu-west-2:012345678901:function:subscription-create:1",
    "Runtime": "nodejs12.x",
    "Role": "arn:aws:iam::012345678901:role/lambda-ex",
    "Handler": "index.handler",
    "CodeSize": 537,
    "Description": "",
    "Timeout": 3,
    "MemorySize": 128,
    "LastModified": "2020-06-18T18:12:03.484+0000",
    "CodeSha256": "k+B+6gFCUNrFff2X80/tD/K5C2W1h/XnT/eK5ivKoGU=",
    "Version": "1",
    "TracingConfig": {
        "Mode": "PassThrough"
    },
    "RevisionId": "81b22b94-ddbe-4a7e-9660-b84774ce9bb5",
    "State": "Active",
    "LastUpdateStatus": "Successful"
}

Note the FunctionArn in the response - it is very similar to the ARN we've been using so far, except it has the version number (1) appended after a colon; this is a "qualified ARN" - it points to a specific version, and it can be used anywhere a Lambda function ARN can be used. Let's update the integration with it, to prevent the public API from automatically "following" the $LATEST version:

$ aws apigateway update-integration \
    --rest-api-id $API_ID \
    --resource-id $RESOURCE_ID \
    --http-method POST \
    --patch-operations op=replace,path="/uri",value="arn:aws:apigateway:$REGION:lambda:path/2015-03-31/functions/$FN_ARN:1/invocations"
{
    "type": "AWS_PROXY",
    "httpMethod": "POST",
    "uri": "arn:aws:apigateway:eu-west-2:lambda:path/2015-03-31/functions/arn:aws:lambda:eu-west-2:012345678901:function:subscription-create:1/invocations",
    "passthroughBehavior": "WHEN_NO_MATCH",
    "timeoutInMillis": 29000,
    "cacheNamespace": "x5z38s",
    "cacheKeyParameters": []
}

Now we can update the function freely, as it will not impact the deployed endpoint.

Aliases

In the previous section, we "pinned" the function version referenced by an integration. But imagine the same function was used in more than one integration; we'd have to go over each of them, and repeat the "pinning" command. In this scenario, we can take advantage of another Lambda feature - aliases. An alias points to a specific version, but it's mutable - the version it points to can be updated. So, for all the integrations that need to reference the same function version, we can use the alias instead of a specific version number.

Let's create two aliases, one for production and one for development use; initially, they will both refer to the same version. With the create-alias subcommand:

$ aws lambda create-alias \
    --function-name subscription-create \
    --function-version 1 \
    --name prod \
    --description "For production use"
{
    "AliasArn": "arn:aws:lambda:eu-west-2:012345678901:function:subscription-create:prod",
    "Name": "prod",
    "FunctionVersion": "1",
    "Description": "Used in production",
    "RevisionId": "dbf9f186-2a89-452f-b6d1-8c6646c68e2b"
}

$ aws lambda create-alias \
    --function-name subscription-create \
    --function-version 1 \
    --name dev \
    --description "For development use"
{
    "AliasArn": "arn:aws:lambda:eu-west-2:012345678901:function:subscription-create:dev",
    "Name": "dev",
    "FunctionVersion": "1",
    "Description": "For development use",
    "RevisionId": "23ae613a-5594-4ae5-9184-54e74ae318fb"
}
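
At this point both aliases point to version 1, which can be confirmed with list-aliases:

$ aws lambda list-aliases --function-name subscription-create | jq -r '.Aliases[] | "\(.Name) -> \(.FunctionVersion)"'
dev -> 1
prod -> 1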

Going back to the API we set up using API Gateway - it already has a "dev" stage. It would make sense for the API stage to also determine which Lambda alias is used for the function; for this we can use stage variables. Each stage (currently, we only have "dev") can have any number of variables associated with it; so let's create a variable, env, for storing the environment type represented by the stage. For the "dev" stage, it seems logical for the variable to have the value "dev":

$ aws apigateway update-stage \
    --rest-api-id $API_ID \
    --stage-name dev \
    --patch-operations op=replace,path="/variables/env",value="dev"
{
    "deploymentId": "ekng6a",
    "stageName": "dev",
    "description": "Development Stage",
    "cacheClusterEnabled": false,
    "cacheClusterStatus": "NOT_AVAILABLE",
    "methodSettings": {},
    "variables": {
        "env": "dev"
    },
    "tracingEnabled": false,
    "createdDate": 1592340382,
    "lastUpdatedDate": 1592509664
}

Also, let's create a stage for production, and add a variable with the same name (env), but with the value "prod". Because a stage cannot be created without a deployment, we will associate it with the latest "dev" deployment. For simplicity, we will also associate the same usage plan with the new stage - because the /subscriptions resource is configured to require an API key, and we'd get a 403 otherwise. In practice, it is common to have an additional stage between dev and prod - often called "qa" or "staging" - with each individual deployment being "promoted" through the different stages.

$ aws apigateway get-deployments --rest-api-id $API_ID
{
    "items": [
        {
            "id": "ekng6a",
            "description": "Second deployment to the dev stage",
            "createdDate": 1592342220
        },
        {
            "id": "kyty58",
            "description": "First deployment to the dev stage",
            "createdDate": 1592340382
        }
    ]
}

$ aws apigateway create-stage \
        --rest-api-id $API_ID \
        --deployment-id ekng6a \
        --stage-name prod
{
    "deploymentId": "ekng6a",
    "stageName": "prod",
    "cacheClusterEnabled": false,
    "cacheClusterStatus": "NOT_AVAILABLE",
    "methodSettings": {},
    "tracingEnabled": false,
    "createdDate": 1592511245,
    "lastUpdatedDate": 1592511245
}

$ aws apigateway update-stage \
    --rest-api-id $API_ID \
    --stage-name prod \
    --patch-operations op=replace,path="/variables/env",value="prod"
{
    "deploymentId": "ekng6a",
    "stageName": "prod",
    "cacheClusterEnabled": false,
    "cacheClusterStatus": "NOT_AVAILABLE",
    "methodSettings": {},
    "variables": {
        "env": "prod"
    },
    "tracingEnabled": false,
    "createdDate": 1592511245,
    "lastUpdatedDate": 1592560373
}


$ aws apigateway update-usage-plan \
    --usage-plan-id $USAGE_PLAN_ID \
    --patch-operations op=add,path="/apiStages",value="$API_ID:prod"

Then we will have another endpoint available, corresponding to the "prod" stage. However, both "dev" and "prod" will invoke the same version of the Lambda function. To use Lambda aliases, we must update how the integration references the Lambda function; instead of referencing a concrete version, we can use an alias. But the alias should depend on the stage the request came through; this dynamic value is represented by the ${stageVariables.env} placeholder. So, in the AWS GUI console, the "Lambda function" field would have the value subscription-create:${stageVariables.env}. I used the GUI because the CLI command does not seem to work:

$ aws apigateway update-integration \
    --rest-api-id $API_ID \
    --resource-id $RESOURCE_ID \
    --http-method POST \
    --patch-operations op=replace,path="/uri",value="arn:aws:apigateway:$REGION:lambda:path/2015-03-31/functions/$FN_ARN:\${stageVariables.env}/invocations"

Error parsing parameter '--patch-operations': Expected: ',', received: '}' for input:
op=replace,path=/uri,value=arn:aws:apigateway:eu-west-2:lambda:path/2015-03-31/functions/arn:aws:lambda:eu-west-2:012345678901:function:subscription-create:${stageVariables.env}/invocations
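
A possible workaround - which I have not verified - is to sidestep the shorthand parser entirely and pass the patch operations as a JSON document instead:

$ aws apigateway update-integration \
    --rest-api-id $API_ID \
    --resource-id $RESOURCE_ID \
    --http-method POST \
    --patch-operations '[{"op":"replace","path":"/uri","value":"arn:aws:apigateway:'$REGION':lambda:path/2015-03-31/functions/'$FN_ARN':${stageVariables.env}/invocations"}]'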

Also note that we previously granted the API Gateway service permission to invoke the Lambda function using its unqualified ARN; this means the permission applies only to the $LATEST version. If an alias or a specific version is to be used in an integration, additional permissions are required. When updating the integration in the GUI, AWS shows a popup with an auto-generated add-permission command; an equivalent command is also required for the "prod" alias.

$ aws lambda add-permission \
    --function-name "$FN_ARN:dev" \
    --source-arn "arn:aws:execute-api:$REGION:$USER_ID:$API_ID/*/POST/subscriptions" \
    --principal apigateway.amazonaws.com \
    --statement-id 22c0c40b-55df-47d6-9025-cdbc48830a37 \
    --action lambda:InvokeFunction
{
    "Statement": "{\"Sid\":\"22c0c40b-55df-47d6-9025-cdbc48830a37\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"apigateway.amazonaws.com\"},\"Action\":\"lambda:InvokeFunction\",\"Resource\":\"arn:aws:lambda:eu-west-2:012345678901:function:subscription-create:dev\",\"Condition\":{\"ArnLike\":{\"AWS:SourceArn\":\"arn:aws:execute-api:eu-west-2:012345678901:zdpya2a6ca/*/POST/subscriptions\"}}}"
}

$ aws lambda add-permission \
    --function-name "$FN_ARN:prod" \
    --source-arn "arn:aws:execute-api:$REGION:$USER_ID:$API_ID/*/POST/subscriptions" \
    --principal apigateway.amazonaws.com \
    --statement-id allow-api-gateway-to-invoke-prod \
    --action lambda:InvokeFunction
{
    "Statement": "{\"Sid\":\"allow-api-gateway-to-invoke-prod\",\"Effect\":\"Allow\",\"Principal\":{\"Service\":\"apigateway.amazonaws.com\"},\"Action\":\"lambda:InvokeFunction\",\"Resource\":\"arn:aws:lambda:eu-west-2:012345678901:function:subscription-create:prod\",\"Condition\":{\"ArnLike\":{\"AWS:SourceArn\":\"arn:aws:execute-api:eu-west-2:012345678901:zdpya2a6ca/*/POST/subscriptions\"}}}"
}

With environments and aliases in place, the workflow is as follows: when a function is ready for deployment, it is published - which generates a new version. The alias corresponding to the environment ("dev" or "prod") is then updated to reference the newly created version. Because the integration references Lambda functions by alias, the newly published version will be invoked for future requests. This workflow will be illustrated in the next post, when we update the function to use a database.
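
As a rough sketch - assuming the updated code has already been pushed to $LATEST with update-function-code - promoting it to production boils down to two commands:

$ NEW_VERSION=$(aws lambda publish-version \
    --function-name subscription-create \
    --query Version --output text)

$ aws lambda update-alias \
    --function-name subscription-create \
    --name prod \
    --function-version "$NEW_VERSION"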

We now have a working connection between a public endpoint and the Lambda function. The endpoint is intended to handle POST requests containing the subscriber's email in the request body. The integration provides information about the HTTP request via the event parameter - including the request body, accessible as event.body. Let's update the function so that the response body contains a stringified version of the request body. Note that the --publish parameter of update-function-code will result in a new version being published with the updated code; without --publish, the changes would only be reflected in the $LATEST version.

$ tee ~/code/newsletter/index.js <<EOF
exports.handler = async function(event, context) {
    const body = JSON.stringify(event.body);
    return {
      statusCode: 200,
      body: "Just echoing back the received body: " + body
    }
}
EOF

$ zip -j ~/code/newsletter/newsletter.zip ~/code/newsletter/index.js
updating: index.js (164 bytes security) (deflated 45%)

$ aws lambda update-function-code \
    --function-name $FN_ARN \
    --zip-file fileb://~/code/newsletter/newsletter.zip \
    --publish
{
    "FunctionName": "subscription-create",
    "FunctionArn": "arn:aws:lambda:eu-west-2:012345678901:function:subscription-create:2",
    "Version": "2",
    ...
}

Note that at this point both stages point to the same deployment - and because the integration used the unqualified ARN of the function when that deployment was created, both endpoints will end up invoking the $LATEST version of the Lambda function. Let's explicitly set the "dev" alias to point to $LATEST, and "prod" to the first version of the function:

$ aws lambda update-alias \
    --function-name subscription-create \
    --name dev \
    --function-version '$LATEST'
{
    "AliasArn": "arn:aws:lambda:eu-west-2:012345678901:function:subscription-create:dev",
    "Name": "dev",
    "FunctionVersion": "$LATEST",
    "Description": "For development use",
    "RevisionId": "623f6b57-43c6-442c-96e9-5fe3244d973d"
}

$ aws lambda update-alias \
    --function-name subscription-create \
    --name prod \
    --function-version 1
{
    "AliasArn": "arn:aws:lambda:eu-west-2:554794166125:function:subscription-create:prod",
    "Name": "prod",
    "FunctionVersion": "1",
    "Description": "Used in production",
    "RevisionId": "6caec996-ef4f-4df3-b8b1-56290df2b8df"
}

$ aws apigateway create-deployment \
        --rest-api-id $API_ID \
        --stage-name prod \
        --description 'First deployment to the prod stage'
{
    "id": "my6ag5",
    "description": "First deployment to the prod stage",
    "createdDate": 1592572291
}

This concludes the first part of this series. In the next post, we will fill in the actual business logic in the Lambda functions, demonstrate how these functions can be augmented with packages from NPM and how they can interact with other AWS services - DynamoDB for storing the subscribers and their associated data, and AWS SES for actually sending emails.