
Automating your MRSK setup with Terraform

MRSK is a great leap forward in the container management and orchestration world. It’s a simple and fast approach to managing containers on whatever cloud or hardware you want to run your applications on. In the MRSK demo video DHH uses Digital Ocean and Hetzner to set up the infrastructure he eventually deploys to, and it’s really just a few clicks in the relevant hosting dashboard, easy peasy. I went through the same dashboard setup to get my first MRSK project deployed, but then figured I’d codify it into Terraform so that it’s easy to get started with a new project or to start migrating an existing one.

Terraform and Ansible are both great tools for the initial orchestration of standing up your machines, controlling some networking, and any additional customization you need to do. Terraform is a particularly straightforward fit here: it lets us define our infrastructure in a simple plan with just a Digital Ocean API key and an existing SSH key pair.

To get going with this you’ll need a couple of things from Digital Ocean:

  1. A Digital Ocean API key
  2. The name of an SSH key pair that you already have stored in your Digital Ocean account

From there, create a new file called droplets.tf, which for now will be the main file for our Terraform plan. I’ve added a few comments inline.

# Add in the requirement for the DO provider
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

# We'll define this variable in a secret.auto.tfvars file so that we
# don't have to commit it to git.
variable "do_token" {
  sensitive = true
}

# We'll also define this in the secret.auto.tfvars file.
variable "ssh_key_name" {
  sensitive = true
}

# Configuring the DO provider with our secret token.
provider "digitalocean" {
  token = var.do_token
}

# Adding a data source for the key name so that we can utilize it
# when creating our droplet, this lookup will grab the key ID which
# we'll need.
data "digitalocean_ssh_key" "default" {
  name = var.ssh_key_name
}

# Configure the new droplet and pass in the SSH key for connecting,
# I'm using the Docker image from the DO marketplace so we'll already
# have Docker running on our machine and MRSK won't have to deal with
# installing it. Size and region appropriately for your use.
resource "digitalocean_droplet" "web" {
  image      = "docker-20-04"
  name       = "web-1"
  region     = "sfo3"
  size       = "s-1vcpu-1gb"
  monitoring = true
  ssh_keys = [
    data.digitalocean_ssh_key.default.id
  ]
}

Now we can create the secret.auto.tfvars file containing the DO token and SSH key name that droplets.tf will use. Update it with your actual values, of course.

do_token="dop_v1_1..."
ssh_key_name="key_name"
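Since this file holds secrets, make sure it stays out of version control. Terraform state files can also contain sensitive values, so it’s worth ignoring those too. A minimal .gitignore for this project might look like:

```gitignore
# Secrets and local Terraform state should never be committed
secret.auto.tfvars
.terraform/
*.tfstate
*.tfstate.backup
```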

With those two files in place we can now test out a Terraform init and apply.

➜  terraform init

Initializing the backend...

Initializing provider plugins...
- Finding digitalocean/digitalocean versions matching "~> 2.0"...
- Installing digitalocean/digitalocean v2.28.0...
- Installed digitalocean/digitalocean v2.28.0 (signed by a HashiCorp partner, key ID F82037E524B9C0E8)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

init configures the backend, which for now is just a local backend, and writes a .terraform.lock.hcl file to record the provider selections it made. Later on you could update the backend to use one of the remote providers or Terraform Cloud. As long as that initializes properly, we can try applying our plan.

We included .auto in the secrets file name so that Terraform automatically loads the variables from that file. If you’re managing multiple environments, or have some other reason to load different variable files, drop the .auto from the name and pass the file explicitly when running apply via -var-file=secret.tfvars

terraform apply

Running that takes about 30-40 seconds to spin up a new droplet and then Terraform is done. If you log in to your DO dashboard you’ll see your newly created droplet, yay!

➜  terraform apply
data.digitalocean_ssh_key.default: Reading...
data.digitalocean_ssh_key.default: Read complete after 0s

Terraform used the selected providers to generate the following execution plan. Resource
actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_droplet.web will be created
  + resource "digitalocean_droplet" "web" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + graceful_shutdown    = false
      + id                   = (known after apply)
      + image                = "docker-20-04"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = true
      + name                 = "web-1"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "sfo3"
      + resize_disk          = true
      + size                 = "s-1vcpu-1gb"
      + ssh_keys             = [
          + "123456",
        ]
      + status               = (known after apply)
      + urn                  = (known after apply)
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

digitalocean_droplet.web: Creating...

digitalocean_droplet.web: Still creating... [10s elapsed]
digitalocean_droplet.web: Still creating... [20s elapsed]
digitalocean_droplet.web: Still creating... [30s elapsed]
digitalocean_droplet.web: Still creating... [40s elapsed]
digitalocean_droplet.web: Creation complete after 41s [id=123456]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Now that we have a running droplet we can go ahead and add firewall rules that allow only ports 80 (HTTP) and 22 (SSH).

At the bottom of your droplets.tf file, add a few firewall rules that reference the existing droplet. Feel free to adjust source_addresses to your own IP address if you’d like to lock down access, or add more rules if you need them.

resource "digitalocean_firewall" "web" {
  name = "only-22-80"

  droplet_ids = [digitalocean_droplet.web.id]

  inbound_rule {
    protocol         = "tcp"
    port_range       = "22"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }

  inbound_rule {
    protocol         = "tcp"
    port_range       = "80"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }
}

Go ahead and run terraform apply again and it’ll create your new firewall rules.

At this point you’re ready to deploy with MRSK: we have a running droplet and we’ve opened up port 80. I’m not going to repeat the MRSK deploy portion here; the README has a great overview of how to get going.

A great thing about Terraform, though, is that we can output the droplet’s IP address and grab it for our MRSK deploy.yml file. Go ahead and add this output value to the end of droplets.tf so it always shows us the droplet IP address.

output "droplet_ip_address" {
  value = digitalocean_droplet.web.ipv4_address
}

Running terraform apply again, you’ll see your droplet’s IPv4 address at the end of the output. You can also read it at any time with terraform output droplet_ip_address.

droplet_ip_address = "144.126.211.196"
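That IP is what goes into the servers list of your MRSK config. As a minimal sketch, with hypothetical service and image names:

```yaml
# config/deploy.yml -- service and image names are placeholders
service: my-app
image: myuser/my-app
servers:
  - 144.126.211.196
```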

Since we’ll probably want to load balance our application, we can make a few tweaks to tell Terraform that we want 2 droplets instead of just the one by setting count to 2. We’ll use count.index to name the droplets appropriately so that we get web-1 and web-2.

resource "digitalocean_droplet" "web" {
  count      = 2
  image      = "docker-20-04"
  name       = "web-${count.index + 1}"
  region     = "sfo3"
  size       = "s-1vcpu-1gb"
  monitoring = true
  ssh_keys = [
    data.digitalocean_ssh_key.default.id
  ]
}

And then we need to tweak our firewall resource and our output to work with multiple droplet resources.

resource "digitalocean_firewall" "web" {
  name = "only-22-80"

  droplet_ids = toset(digitalocean_droplet.web.*.id)

  inbound_rule {
    protocol         = "tcp"
    port_range       = "22"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }

  inbound_rule {
    protocol         = "tcp"
    port_range       = "80"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }
}

output "droplet_ip_address" {
  value = digitalocean_droplet.web.*.ipv4_address
}

Once we’ve made those changes and run terraform apply, we should see both of our droplet IP addresses at the end of the output.

droplet_ip_address = [
  "144.126.211.196",
  "143.126.148.197",
]

Now that we have 2 web servers we can put a load balancer in front of them. For simplicity I’m just going to connect the web servers to the load balancer over port 80 and not deal with certificates. You could of course use the digitalocean_certificate resource to bring an existing certificate or provision a new Let’s Encrypt certificate.

resource "digitalocean_loadbalancer" "public" {
  name   = "loadbalancer-1"
  region = "sfo3"

  forwarding_rule {
    entry_port     = 80
    entry_protocol = "http"

    target_port     = 80
    target_protocol = "http"
  }

  healthcheck {
    port     = 22
    protocol = "tcp"
  }

  droplet_ids = toset(digitalocean_droplet.web.*.id)
}
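As a hedged sketch of that certificate option (the domain is a placeholder, and you’d also need to switch the forwarding rule to https and reference the certificate there):

```hcl
# Provision a Let's Encrypt certificate via DO; "example.com" is a
# placeholder for a domain you actually control.
resource "digitalocean_certificate" "cert" {
  name    = "web-cert"
  type    = "lets_encrypt"
  domains = ["example.com"]
}
```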

With 2 web servers connected to the load balancer, we’ll need a dedicated database server for both of them to connect to. We could go through a similar flow to our web droplet setup and adjust the firewall ports accordingly. For this, though, I’m going to use DO’s managed databases to change it up a bit. Terraform has resources ready for provisioning a database within a cluster, so let’s get that going.

To get a database going in DO we set up a cluster and then add a database to that cluster. I’ve also added a simple firewall step that allows connections only from our web droplets, since Terraform already knows where those are.

For the database firewall I decided to use tags to control which droplets can connect. That’s mostly because the Terraform resource doesn’t accept an array of droplets unless you use for_each, but it’s also a good way to highlight a different option for dealing with multiple instances of something in DO. Since we’re using tags for the database firewall rule, we’ll need the same tag on our web droplets: add tags = ["terraform-web"] to the web droplet resource.
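With that tag in place, the web droplet resource now looks like this:

```hcl
resource "digitalocean_droplet" "web" {
  count      = 2
  image      = "docker-20-04"
  name       = "web-${count.index + 1}"
  region     = "sfo3"
  size       = "s-1vcpu-1gb"
  tags       = ["terraform-web"]
  monitoring = true
  ssh_keys = [
    data.digitalocean_ssh_key.default.id
  ]
}
```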

resource "digitalocean_database_db" "primary" {
  cluster_id = digitalocean_database_cluster.primary.id
  name       = "primary"
}

resource "digitalocean_database_cluster" "primary" {
  name       = "primary-mysql-cluster"
  engine     = "mysql"
  version    = "8"
  size       = "db-s-1vcpu-1gb"
  region     = "sfo3"
  node_count = 1
}

resource "digitalocean_database_firewall" "web" {
  cluster_id = digitalocean_database_cluster.primary.id

  rule {
    type  = "tag"
    value = "terraform-web"
  }
}

With your cluster provisioned, grab the connection string from the DO web console or via the DO CLI and update your application configuration to point at this new database. Once you’ve done that you can deploy with MRSK, and you should be connected to your shiny new database.
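If you’d rather not copy the connection string from the dashboard, the cluster resource also exports connection details. A sketch of that, assuming you want the full URI (it includes credentials, so the output must be marked sensitive):

```hcl
# The uri attribute contains the user and password, hence sensitive = true.
output "database_uri" {
  sensitive = true
  value     = digitalocean_database_cluster.primary.uri
}
```

Querying it by name with terraform output database_uri will print the value.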

The final step from the demo is getting Cloudflare set up to terminate our SSL and point to our load balancer. I’ll leave that for another post if anyone is interested. You could also use the DO Let’s Encrypt certificate resource to easily get going with that as well.

Another thing that’s easy to facilitate with Terraform is creating plans for multiple cloud providers; then you’re just pointing at different resources in Terraform. You could easily create an additional plan for Hetzner, for instance.

While writing this all out with Terraform is a bit more verbose, it’s nice to have it managed via Terraform state, and it makes future changes or adding capacity even easier. If we need an additional web server, we just increment the count and Terraform takes care of everything else.

Now that you have your infrastructure built out with Terraform, take a look at this next post to deploy utilizing Terraform outputs with Kamal.

Here’s our final Terraform plan:

terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"
    }
  }
}

variable "do_token" {
  sensitive = true
}

variable "ssh_key_name" {
  sensitive = true
}

provider "digitalocean" {
  token = var.do_token
}

data "digitalocean_ssh_key" "default" {
  name = var.ssh_key_name
}

resource "digitalocean_droplet" "web" {
  count      = 2
  image      = "docker-20-04"
  name       = "web-${count.index + 1}"
  region     = "sfo3"
  size       = "s-1vcpu-1gb"
  tags       = ["terraform-web"]
  monitoring = true
  ssh_keys = [
    data.digitalocean_ssh_key.default.id
  ]
}

resource "digitalocean_firewall" "web" {
  name = "only-22-80"

  droplet_ids = toset(digitalocean_droplet.web.*.id)

  inbound_rule {
    protocol         = "tcp"
    port_range       = "22"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }

  inbound_rule {
    protocol         = "tcp"
    port_range       = "80"
    source_addresses = ["0.0.0.0/0", "::/0"]
  }
}

resource "digitalocean_loadbalancer" "public" {
  name   = "loadbalancer-1"
  region = "sfo3"

  forwarding_rule {
    entry_port     = 80
    entry_protocol = "http"

    target_port     = 80
    target_protocol = "http"
  }

  healthcheck {
    port     = 22
    protocol = "tcp"
  }

  droplet_ids = toset(digitalocean_droplet.web.*.id)
}

resource "digitalocean_database_db" "primary" {
  cluster_id = digitalocean_database_cluster.primary.id
  name       = "primary"
}

resource "digitalocean_database_cluster" "primary" {
  name       = "primary-mysql-cluster"
  engine     = "mysql"
  version    = "8"
  size       = "db-s-1vcpu-1gb"
  region     = "sfo3"
  node_count = 1
}

resource "digitalocean_database_firewall" "web" {
  cluster_id = digitalocean_database_cluster.primary.id

  rule {
    type  = "tag"
    value = "terraform-web"
  }
}

output "droplet_ip_address" {
  value = digitalocean_droplet.web.*.ipv4_address
}