Serverless with GitLab CI, AWS and the JAMStack

By Tim Gifford

Why JAMstack?

We’re using Jekyll to convert our markdown into a static HTML site.

We’re using the JAMstack because it provides the content and tools to create the HTML. While that’s important for our site, this story will focus on the GitLab and AWS services we’re using to deliver and host our website.

Preview and Deploy the Website

tl;dr: a push to any non-master branch deploys a preview to our sandbox site, and a push to the master branch deploys to the production site.

Let’s walk through how this is done.

Every git push is picked up by GitLab CI, which reads .gitlab-ci.yml to find the continuous delivery pipeline. Let’s take a closer look at this file:

image:
  name: ruby:2.4
  entrypoint:
    - "/usr/bin/env"
    - "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

deploy-prod:
  stage: deploy
  environment:
    name: prod
  script:
    - do/ci-deploy
  only:
    - master

deploy-branch:
  stage: deploy
  environment:
    name: $CI_COMMIT_REF_NAME
  script:
    - do/ci-deploy-sandbox
  except:
    - master

The .gitlab-ci.yml file starts out by defining, in the image section, the Docker container used to run the build. It then defines a production deploy job that runs only on the master branch, while the deploy-branch job runs for all changes except the master branch.

GitLab provides a list of your environments, so the name and url settings under the environment section are used to populate that page. We use the branch name as the environment name for non-production deployments. It’s a little “status quo”, but we use prod as our production environment name.

The script section tells GitLab CI what scripts to run. Let’s take a look at the do/ci-deploy-sandbox script.

ci-deploy-sandbox does everything needed to get the current branch built and deployed to the sandbox environment. The production deployment script, ci-deploy, does the same thing with variables for the production environment.
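The script bodies aren’t shown in this post, but as a rough sketch in Ruby (the bucket naming scheme and helper names below are illustrative assumptions, not the real implementation), a Jekyll-to-S3 deploy boils down to a build plus a sync:

```ruby
# Illustrative sketch only -- the real do/ci-deploy-sandbox isn't shown in the post.
# Assumed shape: build the Jekyll site, then sync _site/ to a per-branch S3 bucket.

# Hypothetical naming scheme: one sandbox bucket per branch.
def sandbox_bucket(branch)
  # S3 bucket names only allow lowercase letters, digits, and hyphens
  "sandbox-site-#{branch.downcase.gsub(/[^a-z0-9-]/, '-')}"
end

def deploy_sandbox!(branch = ENV.fetch("CI_COMMIT_REF_NAME"))
  system("bundle exec jekyll build") or abort("jekyll build failed")
  system("aws", "s3", "sync", "_site/", "s3://#{sandbox_bucket(branch)}", "--delete") or
    abort("s3 sync failed")
end
```

ci-deploy would look much the same with the production settings substituted in.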

AWS Accounts

Our two environments, sandbox and production, could have been defined as separate S3 buckets within a single AWS account, but we created separate accounts. This helps ensure our deployments are consistent and our sandbox environment can be destroyed without impacting our production website.

Our production AWS account is locked down; it’s where we keep everything we can’t afford to lose.

The sandbox AWS account is much more open. It has the domain and IAM Roles that are trusted by the Groups in the master AWS account.

Website Hosting with S3

After launching the site on S3, I learned a couple of things about S3 web hosting.

First, S3 is very picky about links to sub-folders: if you don’t put a trailing forward slash / on a link, S3 assumes it’s a file (not a folder) and returns a 404 Not Found error.

We are too lazy to do a full manual regression of every link each time we publish the website, so we put some logic into the build scripts. Brandon packaged the logic into a CloudFront Link Checker Ruby gem. The gem checks every relative link and throws an error if one is missing its trailing forward slash.
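The heart of that check is small. Here it is sketched in Ruby (these helper names are illustrative, not the gem’s actual API):

```ruby
# Illustrative version of the trailing-slash rule; NOT the gem's actual API.
def relative?(href)
  !href.start_with?("http://", "https://", "//", "mailto:", "#")
end

def needs_trailing_slash?(href)
  # A "folder" link (no file extension) must end in / or S3 will 404.
  relative?(href) && File.extname(href).empty? && !href.end_with?("/")
end

def bad_links(html)
  # Collect every href in the rendered page that breaks the rule.
  html.scan(/href="([^"]+)"/).flatten.select { |h| needs_trailing_slash?(h) }
end
```

Running a check like this over the generated HTML on every build fails the pipeline before a broken link ever reaches S3.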


Also, S3 website hosting doesn’t support HTTPS/TLS/SSL: for that you need a CloudFront distribution in front of the bucket. CloudFront is the AWS CDN, so it caches the static contents of our site (all of it) and spreads them around the globe to make the site more responsive. We created the distribution and used AWS Certificate Manager (ACM) to get our TLS certificate issued. ACM will handle our renewals for us too, so we shouldn’t have an outage (fingers crossed) from forgetting to renew the certificate by October 19, 2019.

Fixing CloudFront

CloudFront has some odd behaviors too. It doesn’t know how to serve a default document when a user enters a folder-style path. This means a request for /rants/index.html works, but a request for /rants/ or /rants returns a 404. I know it’s petty, but I wanted our navigation links to reference just the folder, not point at index.html. The “RESTful” side of me wasn’t going to be happy with navigation links like /team/index.html. I know most people wouldn’t notice, but it irritated me.

Luckily, Amazon has a tool, Lambda@Edge, to handle these redirects when running on CloudFront.
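Lambda@Edge functions themselves are written in Node.js, but the rewrite rule is tiny. Here it is sketched in Ruby to match the rest of this post (illustrative, not the deployed code):

```ruby
# Illustrative path rewrite, not the deployed Lambda@Edge code: map folder-style
# request paths onto the index.html object that actually exists in the S3 bucket.
def rewrite_uri(uri)
  return "#{uri}index.html" if uri.end_with?("/")        # /rants/ -> /rants/index.html
  return "#{uri}/index.html" if File.extname(uri).empty? # /rants  -> /rants/index.html
  uri                                                    # /css/site.css passes through
end
```

Attached to CloudFront’s origin requests, a function like this makes CloudFront ask S3 for /rants/index.html whenever a visitor requests /rants or /rants/.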


APIs

The title says it all… APIs.

Our website is a static site; it can’t run anything server-side because we are “Serverless”. One feature we wanted that required a server was reCAPTCHA, a tool that provides proof that the current user isn’t a bot. We wanted reCAPTCHA on any page that sends an email, to prevent spam.

We chose a “Serverless” solution: AWS Lambda. We’ve created an individual git repository for each AWS Lambda, containing the code and all deployment scripts for that function. This gives each function an independent lifecycle. More details about what we learned about AWS Lambda will likely be a rant for another day.
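As a hedged sketch of what one of those Lambdas might look like (AWS Lambda gained a Ruby runtime in late 2018; the field names, env var name, and email step below are placeholders, though the siteverify endpoint is Google’s real one), the handler verifies the reCAPTCHA token server-side before doing anything else:

```ruby
# Hypothetical sketch of the email Lambda's handler. Field names, the
# RECAPTCHA_SECRET env var, and the email step are placeholders.
require "json"
require "net/http"

def recaptcha_valid?(token, secret: ENV.fetch("RECAPTCHA_SECRET"))
  # Ask Google's siteverify endpoint whether the client-side token is genuine.
  res = Net::HTTP.post_form(
    URI("https://www.google.com/recaptcha/api/siteverify"),
    "secret" => secret, "response" => token
  )
  JSON.parse(res.body)["success"] == true
end

def handler(event:, context:)
  body = JSON.parse(event["body"] || "{}")
  unless recaptcha_valid?(body["recaptcha_token"].to_s)
    return { statusCode: 403, body: JSON.generate(error: "captcha failed") }
  end
  # ...send the email here (e.g. via SES)...
  { statusCode: 200, body: JSON.generate(ok: true) }
end
```

Rejecting the request before the email step is what keeps the bots from turning our contact form into a spam cannon.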


Basically, the website is scalable, fast, and easy to update without needing to manage any servers. Static HTML isn’t the coolest tech, but at least it isn’t “status quo”. :)

Published: 10 Feb 2019