
Deploying to your Terraform provisioned stack with Kamal

In a previous post, I went through automating your MRSK/Kamal setup with Terraform, which essentially covers the Digital Ocean portion of DHH’s intro video. Digital Ocean and Hetzner both have great Terraform providers that make it easy to provision the resources you need.

Once you’ve provisioned those resources, you want to be able to deploy to them without having to copy/paste resource details like IP addresses. For instance, if you provision two droplets within Digital Ocean for production via Terraform, the idea is that you could then just run kamal deploy and it would know where to deploy your application. Terraform’s outputs, combined with some simple JSON parsing in Ruby, make this really easy.

Also, just to outline the relationship between Terraform and Kamal and when you’ll end up running each:

  • Terraform - You’ll only apply a plan when you need to change, add, or remove infrastructure. For a Kamal-based deployment, Terraform handles the initial upfront work of standing everything up; once it’s there, Kamal takes care of the rest via Docker. For instance, on a project I’ve been working on, we provisioned our stack with Terraform about 6 months ago and haven’t had to run an apply since. We’ll be adding a few more droplets in the coming months; we’ll use Terraform to bring those online and then Kamal to deploy to them.
  • Kamal - My preferred route is to have Terraform write an outputs file per destination containing non-sensitive information, and have Kamal reference that file in your deploy.yml. Usually this is just a set of IP addresses, nothing involving keys or anything along those lines. A good rule of thumb: if these outputs were exposed, they wouldn’t give anyone access to those resources.

Isolating Terraform and Kamal also means that when you run Kamal, it’s just looking at a static file; you’re not relying on Terraform to be able to deploy. This is also great for permissions. For instance, maybe you’ve separated who can provision resources (IT/DevOps) from who can deploy (Software). You can easily create roles or only give keys to the appropriate teams rather than everyone having access to everything.

Picking up from the previous post, I want to highlight how I apply my Terraform plans, which is what gets Kamal connected to the correct machines.

Here’s an example bin/apply script, though yours will vary based on how you’re handling variable and secrets storage within Terraform:

#!/bin/bash -eux

# Select the workspace for this destination, creating it if it doesn't exist
terraform workspace select -or-create=true "$1"

# Apply the plan using the destination's variable files
terraform apply -var-file="$1/secret.tfvars" -var-file="$1/variables.tfvars"

# Write this destination's outputs to a static JSON file for Kamal to read
terraform output -json > "$1.json"

You would then call this script with bin/apply staging or bin/apply production, or whichever destinations (environments) you have. The destination is passed in as $1 within the bash script, which then expects the following files to be in place when running bin/apply staging, for example:

  1. staging/secret.tfvars
  2. staging/variables.tfvars
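
To make that concrete, here’s a hypothetical sketch of the split between the two files. The variable names (do_token, region, droplet_count) are placeholders for whatever variables your configuration actually defines:

# staging/variables.tfvars - non-sensitive values, fine to commit
region        = "nyc3"
droplet_count = 2

# staging/secret.tfvars - sensitive values, kept out of version control
do_token = "REPLACE_WITH_YOUR_DIGITAL_OCEAN_TOKEN"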

The final line of the bin/apply script is what writes our output file for our destination; you’ll see it’s writing in JSON format (-json) to $1.json, which in our case would be staging.json.

From our previous post you’ll notice that we have one output, web_ip_addresses, which holds the web droplet IP addresses:

output "droplet_ip_address" {
  value = digitalocean_droplet.web.*.ipv4_address
}

If you have a lot of outputs, or you do output sensitive information from Terraform, then I’d recommend passing the specific names of the outputs you need to terraform output rather than writing all of them to the file.
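
As a sketch of what that could look like, the final line of bin/apply would name the output explicitly. Note that terraform output -json web_ip_addresses writes just that output’s bare value (an array of strings), so the Ruby lookup shown later would drop the dig and parse the array directly:

# Export only the single output Kamal needs
terraform output -json web_ip_addresses > "$1.json"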

Once you’ve run bin/apply staging and you have your new terraform/staging.json file, you can look up the value with Kamal. Your staging.json file will look something like this:

{
  "web_ip_addresses": {
    "sensitive": false,
    "type": [
      "tuple",
      [
        "string"
      ]
    ],
    "value": [
      "867.53.0.9"
    ]
  }
}

First, we ensure that we have our environment set in our destination file:

.env.staging

RAILS_ENV=staging

Then we make sure our environment is loaded within the env section, and we can utilize some simple JSON parsing within Ruby to fetch our IP addresses:

env:
  clear:
    RAILS_ENV: <%= ENV['RAILS_ENV'] %>
servers:
  web:
    hosts: <%= JSON.parse(File.read("terraform/#{ENV['RAILS_ENV']}.json")).dig('web_ip_addresses', 'value') %>
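
If you want to sanity-check that lookup outside of Kamal, you can run the same parsing as a one-liner from your project root (this assumes the output file was written to terraform/staging.json as above):

ruby -rjson -e 'puts JSON.parse(File.read("terraform/staging.json")).dig("web_ip_addresses", "value")'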

A simple way to check whether Kamal is pointed at the correct hosts is to check the lock status:

kamal lock status -d staging

This should show your IP address if everything is connected properly:

$ kamal lock status -d staging
  INFO [c5b57be3] Running /usr/bin/env mkdir -p .kamal on 867.53.0.9
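
From there, deploying to a destination is the usual Kamal flow:

# First-time setup of a destination
kamal setup -d staging

# Day-to-day deploys after that
kamal deploy -d staging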