We’re using the JAMstack because:

- When you push a change to any non-master branch, it deploys to http://sandbox.leantechniques.com
- When you push to the master branch, it deploys to https://leantechniques.com
Let’s walk through how this is done.
Every git push is picked up by Gitlab CI, which reads the .gitlab-ci.yml file for the continuous delivery pipeline. Let’s take a closer look at this file:
```yaml
---
image:
  name: ruby:2.4
  entrypoint:
    - '/usr/bin/env'
    - 'PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'

deploy-master:
  stage: deploy
  environment:
    name: prod
    url: http://leantechniques.com
  script:
    - do/ci-deploy
  only:
    - master

deploy-branch:
  stage: deploy
  environment:
    name: $CI_COMMIT_REF_NAME
    url: http://sandbox.leantechniques.com
  script:
    - do/ci-deploy-sandbox
  except:
    - master
```
The .gitlab-ci.yml file starts out by defining, in the image section, the Docker container used to run the build. Then it defines the deploy-master job, which deploys to production and runs only on the master branch, and the deploy-branch job, which runs for changes on every other branch.
Gitlab provides a list of your environments, so the url settings under the environment section are used to populate that page. We use the branch name as the environment name for non-production deployments. It’s a little “status quo”, but we use prod as our production environment name.
The script section tells Gitlab CI what scripts to run. Let’s take a look at the sandbox deployment script. ci-deploy-sandbox does a few things, starting with bundle install to install the gems required by the Jekyll site.
The production deployment script ci-deploy does the same thing, with variables pointed at the production environment.
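The scripts themselves aren’t shown in this post, but the overall flow boils down to build-then-sync. Here’s a hypothetical Ruby sketch of that flow, assuming a Jekyll build synced to an S3 bucket named for the environment (the bucket variable and commands are assumptions, not our actual script):

```ruby
# Hypothetical sketch of the deploy flow -- not the real do/ci-deploy-sandbox.
# Assumes the Jekyll output in _site/ is synced to an environment's S3 bucket.
BUCKET = ENV.fetch('DEPLOY_BUCKET', 'sandbox.leantechniques.com')

def run!(cmd)
  puts "> #{cmd}"
  system(cmd) || abort("command failed: #{cmd}")
end

run! 'bundle install'                               # install the Jekyll site's gems
run! 'bundle exec jekyll build'                     # render the static site into _site/
run! "aws s3 sync _site/ s3://#{BUCKET} --delete"   # publish to the environment's bucket
```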
Our two environments, sandbox and production, could have been defined as separate S3 buckets within a single AWS account, but we created separate accounts instead. This helps ensure our deployments are consistent, and our sandbox environment can be destroyed without impacting our production website.
Our production AWS account is where we keep the leantechniques.com domain and the rest of our production resources. The sandbox AWS account is much more open. It has the sandbox.leantechniques.com domain and IAM Roles that are trusted by the Groups in the master AWS account.
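A cross-account trust like that comes down to a trust policy on each sandbox role. A minimal sketch (the account ID is a placeholder, not ours):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```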
After launching the site on S3, I learned a couple of things about S3 web hosting.

First, S3 is very picky about links to sub-folders. If you don’t put a trailing forward slash (/) in your links, S3 assumes it’s a file (not a folder) and returns a 404 Not Found error.
We are too lazy to do a full manual regression of every link each time we publish the website, so we put some logic into the build scripts. Brandon packaged the logic into a CloudFront Link Checker Ruby Gem (source). The gem checks all relative links and throws an error if any of them is missing a trailing forward slash.
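As a rough illustration of the kind of check the gem performs (not the gem’s actual code), the core logic might look something like this:

```ruby
# Rough illustration of the trailing-slash check -- not the gem's actual code.
# Flags root-relative links that look like folders but lack a trailing slash,
# since S3 would return a 404 for them.
require 'nokogiri'  # assumes the nokogiri gem is available

def offending_links(html)
  Nokogiri::HTML(html).css('a[href]').map { |a| a['href'] }
    .select { |href| href.start_with?('/') }               # relative links only
    .reject { |href| href.end_with?('/') }                 # trailing slash is fine
    .reject { |href| File.basename(href).include?('.') }   # real files (.html, .pdf) are fine
end

bad = offending_links(File.read('_site/index.html'))
abort "Links missing a trailing slash: #{bad.join(', ')}" unless bad.empty?
```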
Second, S3 doesn’t support HTTPS/TLS/SSL on its own; for that you need a CloudFront distribution. CloudFront is AWS’s CDN: it caches the static content of our site (all of it) and spreads it around the globe to make the site more responsive. We created the distribution and used Amazon Certificate Manager (ACM) to get our TLS certificate issued. ACM will handle renewals for us too, so we shouldn’t have an outage (fingers crossed) from forgetting to renew the certificate by October 19, 2019.
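The post doesn’t show how the certificate was requested, but for reference, here’s a hypothetical sketch using the AWS SDK for Ruby (the alternate domain name is an assumption):

```ruby
# Hypothetical sketch using the aws-sdk-acm gem -- not necessarily how we
# requested ours. Certificates used by CloudFront must be issued in us-east-1.
require 'aws-sdk-acm'

acm = Aws::ACM::Client.new(region: 'us-east-1')
resp = acm.request_certificate(
  domain_name: 'leantechniques.com',
  subject_alternative_names: ['www.leantechniques.com'],  # assumed SAN
  validation_method: 'DNS'  # DNS validation lets ACM renew automatically
)
puts resp.certificate_arn
```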
CloudFront has some odd behaviors too. Unlike an S3 website endpoint, it doesn’t know how to resolve a folder path to the index document inside it. This means that a request for /rants/index.html works, but a request for /rants returns a 404. I know it’s petty, but I wanted our navigation links to just reference the folder, not point at the index.html. The “RESTful” side of me wasn’t going to be happy with navigation links like /team/index.html. I know most people wouldn’t notice it, but it irritated me.
Luckily, Amazon has a tool, Lambda@Edge, that lets us rewrite requests as they pass through CloudFront.
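Lambda@Edge functions run on Node.js, so the real function is JavaScript, but the rewrite rule it applies is simple enough to illustrate in Ruby (illustration only, not our actual function):

```ruby
# Illustration only: rewrite "folder" requests to the index.html inside them
# before CloudFront forwards the request to S3.
def rewrite_uri(uri)
  return uri if File.basename(uri).include?('.')  # already points at a file
  uri.end_with?('/') ? "#{uri}index.html" : "#{uri}/index.html"
end

rewrite_uri('/rants')            # => "/rants/index.html"
rewrite_uri('/rants/')           # => "/rants/index.html"
rewrite_uri('/rants/index.html') # => "/rants/index.html"
```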
The title says it all… APIs. And our API is api.leantechniques.com.
Our website is a static site. It doesn’t have the ability to run code server-side because we are “Serverless”. One feature we wanted that required a server was ReCaptcha. ReCaptcha is a tool that provides proof that the current user isn’t a bot. We wanted to use ReCaptcha on any page that sends an email, to prevent spam.
We chose a “Serverless” solution: AWS Lambda. We’ve created an individual git repository for each AWS Lambda, containing the code and all deployment scripts for that function. This allows each function to have an independent lifecycle. More details about what we learned about AWS Lambdas will likely be a rant for another day.
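As a sketch of how the ReCaptcha piece fits together (hypothetical code, not our actual function): the static site POSTs the ReCaptcha token to the API, and a Lambda verifies it with Google before sending the email. The field name and email step here are assumptions.

```ruby
# Hypothetical sketch, not our actual function: a Lambda behind
# api.leantechniques.com that verifies a ReCaptcha token with Google
# before doing the real work (sending the email).
require 'json'
require 'net/http'

def handler(event:, context:)
  token = JSON.parse(event['body'])['recaptcha_token']  # assumed field name
  resp = Net::HTTP.post_form(
    URI('https://www.google.com/recaptcha/api/siteverify'),
    'secret'   => ENV['RECAPTCHA_SECRET'],  # secret key issued by Google
    'response' => token
  )
  if JSON.parse(resp.body)['success']
    # ... send the email (e.g. via SES) ...
    { statusCode: 200, body: JSON.generate(message: 'sent') }
  else
    { statusCode: 403, body: JSON.generate(error: 'recaptcha verification failed') }
  end
end
```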
Basically, the leantechniques.com website is scalable, fast, and easy to update, all without our needing to manage any servers. Static HTML isn’t the coolest tech, but at least it isn’t “status quo”. :)