“There should be two tasks for a human being to perform to deploy software into a development, test, or production environment: to pick the version and environment and to press the “deploy” button.”
- Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation
Introduction
In a previous article, we built a serverless Python Discord bot on AWS Lambda. In this article, we’ll go over how to integrate this bot with GitHub and GitHub Actions so that every push automatically redeploys the lambda and updates the registered slash commands, at no extra cost.
Overview
Continuous Integration/Continuous Delivery (CI/CD), on a hand-wavy level, refers to the idea that code deployment should be automated. Manually deploying applications is repetitive, boring, and therefore error-prone. In our specific case, writing code locally and copying it into Lambda, or writing code directly in the web editor, has these additional drawbacks:
1. Weak version control;
2. Hard to share code and collaborate with others on the same lambda;
3. Hard to import dependencies, and harder to keep them up to date;
4. Slash commands have to be registered separately.
We can address points (1) and (2) by tracking our code in a version-control tool such as git; we can address points (3) and (4) by connecting push events to scripts that update the packages and publish the slash commands. Conveniently, GitHub provides both of these capabilities.
Small disclaimer: there are many viable ways to implement CI/CD. I chose the git-based method mostly because so many of the components are prebuilt, although we pay for that convenience by taking on external dependencies and limitations, such as being pinned to certain Python versions (some of the components used in this tutorial only work with 3.6). We could’ve done the same thing through AWS CodePipeline, Jenkins, or even just bash scripts off of an internet-enabled toaster. Each tool has its own tradeoffs, and I’d love to hear how you think the process below could be improved.
We’ll be doing these two things:
1. Create a git repository to track code. As part of this, we’ll:
   - Create a GitHub Action in the repo to update the lambda with new code/dependencies on push; and
   - Create a GitHub Action in the repo to publish new slash commands on push.
2. Create helper AWS resources.
After everything is set up, the deployment flow will look like this:
1. Push code to GitHub;
2. GitHub Actions updates the lambda with new code;
3. GitHub Actions uploads the commands to S3, so they can be accessed by other arbitrary scripts;
4. GitHub Actions runs a script that reads the file we uploaded to S3 and then registers those commands with Discord.
After (2), the primary lambda will have the new code, and after (4), the new commands (if any) will become accessible.
Create and populate a git repository
First, let’s create a git repo to store our code. GitHub’s documentation for creating a repository is pretty thorough.
Once the repo exists, we need to create the following files. Here’s an example repo; it’s where the links in the list below point as well.
- lambda_function.py: the code that your lambda function runs, kind of like its __main__. Feel free to create more files that lambda_function.py can then import, but we need an entry file to know where code execution starts. (The name of the entry file/function can be changed in the lambda’s configuration.)
- requirements.txt: records the non-standard-library dependencies that your lambda function needs. At minimum it should have pynacl, boto3, and requests in it, which are needed to verify the bot integration with Discord (see the previous article for more details). By providing this list, we can skip manually compiling and adding the lambda layers.
- commands/commands.json: a JSON file with all the slash commands that you’d like associated with this bot. Each time code is pushed to the repo, a job is triggered that registers these slash commands.
- scripts/publish_commands.py: a Python script that registers Discord commands to specified guilds; a rough sketch of what it does follows the workflow below. It can be tuned to optionally reset all guild commands, or to register commands globally as well.
- .github/workflows/main.yml: this file tells GitHub when and how to run which Actions, which are automated jobs tied to triggers. I’d recommend copying it directly and editing as necessary (e.g. change the lambda_function_name field). Here’s what it does:
name: deploy Python to lambda
on: # the trigger for this Action is whenever changes are pushed to the master branch
  push:
    branches:
      - master
jobs:
  build: # "packages" the code/dependencies and puts them into AWS
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master # checks out the code; needed for the next step
      # Uses https://github.com/mariamrf/py-lambda-action to upload the code
      # checked out above into the specified lambda with the specified
      # credentials. Only supports Python 3.6.
      - name: Deploy code to Lambda
        uses: mariamrf/py-lambda-action@v1.0.0
        with:
          # Can be an existing layer or a new layer, e.g. the arn can be
          # "arn:aws:lambda:us-east-2:<your account number>:layer:<any layer name>"
          lambda_layer_arn: 'arn:aws:lambda:us-east-2:391107963258:layer:lambda_deps'
          lambda_function_name: 'lambda_name'
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: ${{ secrets.AWS_REGION }}
      # Uses https://github.com/tpaschalis/s3-cp-action to copy FILE into
      # AWS_S3_BUCKET with the specified credentials.
      - name: Upload commands to S3
        uses: tpaschalis/s3-cp-action@master
        env:
          FILE: ./commands/commands.json
          AWS_REGION: ${{ secrets.AWS_REGION }}
          AWS_S3_BUCKET: ${{ secrets.AWS_BUCKET }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  # Publishes the contents of commands.json to TEST_SERVERS by running
  # scripts/publish_commands.py with the specified environment variables.
  publish:
    needs: build
    if: needs.build.result == 'success'
    runs-on: ubuntu-latest
    steps:
      - name: Check out code # needed so scripts/publish_commands.py is available
        uses: actions/checkout@master
      - name: Install Python 3
        uses: actions/setup-python@v1
        with:
          python-version: 3.6
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
      - name: Publish slash commands to Discord
        id: step1
        env:
          APPLICATION_ID: ${{ secrets.APPLICATION_ID }}
          TEST_SERVERS: ${{ secrets.TEST_SERVERS }}
          BOT_TOKEN: ${{ secrets.BOT_TOKEN }}
          AWS_BUCKET: ${{ secrets.AWS_BUCKET }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: ${{ secrets.AWS_REGION }}
        run: |
          output=$(python scripts/publish_commands.py)
          echo "::set-output name=publishStatus::$output"
      - name: Print status
        run: echo "${{ steps.step1.outputs.publishStatus }}"
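The publish job leans entirely on scripts/publish_commands.py. The exact script is in the example repo; here’s a rough sketch of what it has to do, assuming the upload step put the file at the bucket root under the key commands.json and that the file holds a JSON array of Discord application commands (e.g. [{"name": "hello", "description": "Say hello"}]):

# publish_commands.py (sketch): read the uploaded commands.json back out of S3
# and bulk-overwrite each test guild's slash commands with its contents.
# Assumes the S3 key is "commands.json"; adjust if your copy step differs.
import json
import os

import boto3
import requests

def main():
    # The AWS_* env vars from the workflow provide the credentials here.
    s3 = boto3.client("s3")
    obj = s3.get_object(Bucket=os.environ["AWS_BUCKET"], Key="commands.json")
    commands = json.loads(obj["Body"].read())

    # TEST_SERVERS is a JSON-like list of guild IDs (see the secrets below).
    guild_ids = json.loads(os.environ["TEST_SERVERS"])
    app_id = os.environ["APPLICATION_ID"]
    headers = {"Authorization": f"Bot {os.environ['BOT_TOKEN']}"}

    # PUT on the guild-commands endpoint replaces that guild's full command
    # set, so commands.json is always the single source of truth.
    # (v8 was the current Discord API version when slash commands launched.)
    for guild_id in guild_ids:
        url = f"https://discord.com/api/v8/applications/{app_id}/guilds/{guild_id}/commands"
        resp = requests.put(url, headers=headers, json=commands)
        resp.raise_for_status()

    print(f"published {len(commands)} commands to {len(guild_ids)} guilds")

if __name__ == "__main__":
    main()

Publishing to guilds rather than globally is deliberate: guild commands update immediately, while global commands can take up to an hour to propagate.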
Finally, we need to configure some secrets under repo -> Settings -> Secrets. The workflow YAML references these:
- APPLICATION_ID: your bot’s application ID, from Discord’s developer portal:
- BOT_TOKEN: the bot token, also from Discord’s developer portal:
- TEST_SERVERS: a list of the server IDs to which the commands should be published. It needs to be JSON-like, e.g. ["123456789012345678", "234567890123456789"].
- AWS_BUCKET: to be filled in step 2.
- AWS_ACCESS_KEY_ID: to be filled in step 2.
- AWS_SECRET_ACCESS_KEY: to be filled in step 2.
- AWS_REGION: your AWS account’s region, e.g. us-east-2.
We can leave the step-2 secrets blank for now; when we create the corresponding resources in AWS, we’ll come back and fill them in.
Create helper AWS resources
We need to create a set of credentials so GitHub Actions can interact with our AWS account, as well as an S3 bucket it can upload to.
Credentials:
AWS’s IAM (“Identity and Access Management”) system can be used to create users and roles with different levels of access. Here, we’ll create a user whose credentials GitHub Actions will use; it’ll have the permissions to upload to S3 and to write to Lambda.
- Go to the IAM/users page and click “Add User”:
- Give it a name and programmatic access, then click Next:
- There are three ways to grant permissions. Defining a group is best practice, but *laziness sounds* let’s attach the “AmazonS3FullAccess” and “AWSLambda_FullAccess” policies directly and continue. Technically we should also scope these down to the bare minimum of what we need, but *more laziness sounds*
- The next step is tagging, which helps organize resources; add whatever, then click Next.
- Review: we’re generating a user, identified by access keys, that can interact with Lambda and S3.
- After creating the user, copy the access key ID and the secret access key into the corresponding GitHub secrets (repo -> Settings -> Secrets). It’s also worth saving them somewhere else, since, as the page warns, they won’t be viewable again later.
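Before handing the keys to GitHub, it’s worth a quick local sanity check that they actually work; here’s a minimal sketch using boto3, assuming the new keys are exported as AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_DEFAULT_REGION:

# Sanity-check the new IAM user's keys from a local shell.
import boto3

# Confirms the keys authenticate at all, and prints which user they belong to.
print(boto3.client("sts").get_caller_identity()["Arn"])

# Confirms the S3 policy took effect (lists buckets visible to this user).
print([b["Name"] for b in boto3.client("s3").list_buckets()["Buckets"]])

# Confirms the Lambda policy took effect; "lambda_name" is a placeholder for
# your actual function name.
print(boto3.client("lambda").get_function(FunctionName="lambda_name")["Configuration"]["LastModified"])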
S3
S3, short for “Simple Storage Service”, is AWS’s file dumpster. Files are stored as flat objects inside “buckets”.
Here, we want to create a bucket into which we can upload our commands.json.
- Go to the S3 home page; click Create Bucket.
- Give the bucket a name; it needs to be fairly specific, since bucket names must be globally unique across AWS. The default settings work for everything else: it’s good to restrict object access to authorized users, and we don’t really need versioning (it can incur extra costs) or encryption (we’re not storing anything critically secret, and decrypting can be a pain).
- Create the bucket, and update the git secret:
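As a quick smoke test that the bucket and the new keys play together, you can try the same upload the Action will perform; a sketch, with "my-commands-bucket" standing in for whatever you named your bucket:

# Upload commands.json into the new bucket, then read it back.
import boto3

s3 = boto3.client("s3")
s3.upload_file("commands/commands.json", "my-commands-bucket", "commands.json")
obj = s3.get_object(Bucket="my-commands-bucket", Key="commands.json")
print(obj["Body"].read().decode()[:200])  # first 200 chars of the round-tripped file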
Now that we’ve created the necessary helper resources in AWS and defined the Actions secrets in GitHub, all the setup is done!
Let’s try the whole thing out.
Try it out
Make sure you’ve saved your lambda’s current contents somewhere else; if everything works, they will be overwritten. Then push some changes to master; maybe add a new command.
Under Actions, a workflow should start:
If a workflow fails, click on it to get more details:
Click to expand the jobs within the workflow and see the actual failure:
Successfully completed workflows look like this:
We can verify that:
- The lambda was updated:
- The file was uploaded to S3:
- The new slash command appears in each TEST_SERVER:
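If you’d rather check from a script than from the Discord client, you can also ask Discord’s API directly which commands a guild has registered; a sketch using the same values as the workflow secrets:

# List the slash commands currently registered in one test guild.
import os

import requests

app_id = os.environ["APPLICATION_ID"]
guild_id = "123456789012345678"  # placeholder: one of your TEST_SERVERS
url = f"https://discord.com/api/v8/applications/{app_id}/guilds/{guild_id}/commands"
resp = requests.get(url, headers={"Authorization": f"Bot {os.environ['BOT_TOKEN']}"})
resp.raise_for_status()
for cmd in resp.json():
    print(cmd["name"], "-", cmd["description"])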
Summary
Using Github Actions and a few AWS resources, we set up push-to-deploy for a Discord bot that runs in Lambda.
Was this worth the trouble?
Some pros:
- Saves deployment time
- Makes collaboration possible
- Saves code/code history in git
Some cons:
- Relying on prebuilt components, such as predefined Actions, constrains what we can do to what those tools support (e.g. Python 3.6 here)