
Terraform AWS KMS Multi-Region Keys

The Terraform AWS provider just released (November 2021) the resource to create replica KMS keys! As the name says, a Multi-Region Key is a single key that’s available in two or more AWS regions. There are a few use cases, such as reducing the cost of keys. An even better use case is the ability to share encrypted objects like AMIs with other regions or accounts. Before I start showing the Terraform AWS KMS Multi-Region Keys module, you have to know what AWS KMS is. Check out my previous posts, AWS Key management service (KMS) – Part 1 and AWS KMS Customer Managed CMK with Terraform.

Terraform AWS KMS Multi-Region Keys Module code

We’ll need another “aws” provider; the second one is for your replicated key, and its region must be different from the first provider’s.

provider "aws" {
  alias  = "replica"
  region = var.replica_region
}
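
These multi-region resources are brand new, so it’s worth pinning a minimum AWS provider version in the module. A minimal sketch; the exact floor version is my recollection, so check the provider changelog:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 3.64.0" # assumed first release with aws_kms_replica_key
    }
  }
}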

The primary key still uses the original “aws_kms_key” Terraform resource; setting multi_region = true is what makes the key replicable. I just added additional tags. Don’t forget the key alias!

resource "aws_kms_key" "primary" {
  multi_region             = true
  description              = var.description
  customer_master_key_spec = var.key_spec
  is_enabled               = var.is_enabled
  enable_key_rotation      = var.rotation_enabled
  policy                   = var.primary_key_policy
  deletion_window_in_days  = var.deletion_window_in_days

  tags = merge(
    var.tags,
    {
      "Multi-Region" = "true",
      "Primary"      = "true"
    }
  )
}

# Add an alias to the primary key
resource "aws_kms_alias" "primary" {
  name          = "alias/${var.alias}"
  target_key_id = aws_kms_key.primary.key_id
}

Here comes the boom! The “aws_kms_replica_key” Terraform resource replicates the key that was just created with the above resource. That’s done with the “primary_key_arn” parameter, which points the replica at the primary key’s ARN. (The replica gets its own ARN in its own region, but shares the same key ID.)

Notice the “provider” meta-argument is required in order to ensure this resource is created in the other region. You can reverse this design and put the provider on your primary key instead, but this is my preference.

The replica can use the same or a different key policy. The alias, tags, description, and deletion_window_in_days can also match the primary’s or not; it doesn’t matter. The replica is created enabled, and there’s no option to rotate a replica key because rotation is managed by the primary key.

# Create the replica key using the primary's arn.
resource "aws_kms_replica_key" "replica" {
  provider = aws.replica

  description             = var.description
  deletion_window_in_days = var.deletion_window_in_days
  primary_key_arn         = aws_kms_key.primary.arn
  policy                  = var.replica_key_policy

  tags = merge(
    var.tags,
    {
      "Multi-Region" = "true",
      "Primary"      = "false"
    }
  )
}

# Add an alias to the replica key
resource "aws_kms_alias" "replica" {
  provider = aws.replica

  name          = "alias/${var.alias}"
  target_key_id = aws_kms_replica_key.replica.key_id
}
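
Since this is a module, it helps to expose the key ARNs so callers can reference them. Here’s a minimal outputs sketch; the output names are my own choice, not something the resources dictate:

output "primary_key_arn" {
  description = "ARN of the primary multi-region key."
  value       = aws_kms_key.primary.arn
}

output "replica_key_arn" {
  description = "ARN of the replica key in the replica region."
  value       = aws_kms_replica_key.replica.arn
}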

Module usage

Here’s an example of how to use this module.

data "aws_iam_policy_document" "ebs_key" {
  statement {
    sid       = "Enable IAM User Permissions"
    effect    = "Allow"
    actions   = ["kms:*"]
    resources = ["*"]

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::${local.account_id}:root"]
    }
  }

  statement {
    sid    = "Allow access for Key Administrators"
    effect = "Allow"
    actions = [
      "kms:Create*",
      "kms:Describe*",
      "kms:Enable*",
      "kms:List*",
      "kms:Put*",
      "kms:Update*",
      "kms:Revoke*",
      "kms:Disable*",
      "kms:Get*",
      "kms:Delete*",
      "kms:TagResource",
      "kms:UntagResource",
      "kms:ScheduleKeyDeletion",
      "kms:CancelKeyDeletion"
    ]
    resources = ["*"]

    principals {
      type = "AWS"
      identifiers = [
        "arn:aws:iam::${local.account_id}:user/${local.admin_username}",
        "arn:aws:iam::${local.account_id}:role/${local.role_name}"
      ]
    }
  }

  statement {
    sid    = "Allow use of the key"
    effect = "Allow"
    actions = [
      "kms:Encrypt",
      "kms:Decrypt",
      "kms:ReEncrypt*",
      "kms:GenerateDataKey*",
      "kms:DescribeKey"
    ]
    resources = ["*"]

    principals {
      type = "AWS"
      identifiers = [
        "arn:aws:iam::${local.account_id}:user/${local.admin_username}",
        "arn:aws:iam::${local.account_id}:role/${local.role_name}"
      ]
    }
  }

  statement {
    sid    = "Allow attachment of persistent resources"
    effect = "Allow"
    actions = [
      "kms:CreateGrant",
      "kms:ListGrants",
      "kms:RevokeGrant"
    ]
    resources = ["*"]

    principals {
      type = "AWS"
      identifiers = [
        "arn:aws:iam::${local.account_id}:user/${local.admin_username}",
        "arn:aws:iam::${local.account_id}:role/${local.role_name}"
      ]
    }

    condition {
      test     = "Bool"
      variable = "kms:GrantIsForAWSResource"
      values   = ["true"]
    }
  }
}

module "ebs_key" {
  source = "git@github.com:masterwali/terraform-kms-multi-region-module.git"

  description        = "KMS key for EBS volumes."
  alias              = "multi-region-ebs"
  primary_key_policy = data.aws_iam_policy_document.ebs_key.json
  replica_key_policy = data.aws_iam_policy_document.ebs_key.json
  replica_region     = "us-west-2"

  tags = {
    Name  = "multi-region-ebs"
    Owner = "Waleed"
  }
}

Here’s my applied code. I set the EBS default encryption to use the multi-region-ebs key that I created using the module. Notice that Multi-Region key IDs start with the “mrk-” prefix, for Multi-Region Key.
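
For reference, wiring the key into EBS default encryption looks roughly like this. It’s a sketch that assumes the module exposes the primary key’s ARN through an output named primary_key_arn:

# Turn on EBS encryption by default in this region
resource "aws_ebs_encryption_by_default" "this" {
  enabled = true
}

# Point the EBS default key at the multi-region key
resource "aws_ebs_default_kms_key" "this" {
  key_arn = module.ebs_key.primary_key_arn # assumed module output
}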

A volume resource using a multi-region KMS key.

Regions supported

Multi-Region keys are supported in all AWS Regions where AWS KMS is available.

Cost

Every pair, primary and replica, is priced as a single key! But the KMS quotas are still counted separately.

Complete Code

Here’s what you came for: https://github.com/masterwali/terraform-kms-multi-region-module. Learn more about AWS KMS Multi-Region Keys.

Don’t forget to subscribe for more 🙂


AWS Three-Tier VPC with ALB in Terraform

This AWS Three-Tier VPC with ALB in Terraform is the second part of AWS Three-Tier VPC network with Terraform. In the first post I created many of the VPC components, such as the VPC, app subnets, web subnets, data subnets, route tables for each subnet, internet and NAT gateways, NACLs for each subnet, and a generic security group. In this post I’ll reveal the Terraform code for creating an Elastic Load Balancer, specifically the Application Load Balancer (ALB). The ALB requires a listener, so I’ll add that too. For simplicity, the listener will monitor the non-secure port 80. Remember to use certificates and port 443 in production!

Cost

Charges may accrue on various resources created by this module; load balancers, in particular, are billed at a low hourly rate for as long as they exist.

The design

Here’s the original diagram from the previous post.

Three-tier VPC diagram

ALB (Application Load Balancer)

The ALB diagram is below. The ALB performs a health check on port 80 against every instance in the target group and only routes traffic to the instances that pass. In this drawing there’s only one web server; in production you should have at least two, in different availability zones.

The ALB

ALB Terraform Code

# ALB for the web servers
resource "aws_lb" "web_servers" {
  name               = format("%s-alb", var.vpc_name)
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.web.id]
  subnets            = aws_subnet.public.*.id
  enable_http2       = false
  enable_deletion_protection = true

  tags = {
    Name = format("%s-alb", var.vpc_name)
  }
}

This is an example of an external ALB; note the “internal = false” argument. The load balancer type is what makes this an application load balancer; the other options are network and gateway. Since this is an ALB, it does require a security group. This external load balancer must be placed in public subnets so outside users can reach the web servers, which are hosted in private subnets. That’s one of the design features that makes this a three-tier VPC! See the Terraform documentation for more options.

Target Groups

Let’s make the ALB useful by creating a target group and a load balancer listener.

# Target group for the web servers
resource "aws_lb_target_group" "web_servers" {
  name     = "sharepoint-web-servers-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.this.id
}

resource "aws_lb_listener" "front_end" {
  load_balancer_arn = aws_lb.web_servers.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web_servers.arn
  }
}
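
By default the target group health-checks the traffic port over HTTP. If you want to probe a specific path or tune the thresholds, add a health_check block to the target group. Here’s the same resource sketched with illustrative values:

resource "aws_lb_target_group" "web_servers" {
  name     = "sharepoint-web-servers-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.this.id

  health_check {
    path                = "/"   # endpoint to probe
    matcher             = "200" # expect an HTTP 200 back
    interval            = 30    # seconds between checks
    healthy_threshold   = 2
    unhealthy_threshold = 3
  }
}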

An empty target group is not useful either. From now on, when you launch a new EC2 web server, be sure to add it to the target group like this.

# Find the target group
data "aws_lb_target_group" "web_servers" {
  name = "sharepoint-web-servers-tg"
}

# Attach an EC2 instance to the target group on port 80
resource "aws_lb_target_group_attachment" "web" {
  target_group_arn = data.aws_lb_target_group.web_servers.arn
  target_id        = aws_instance.web.id
  port             = 80
}

EC2 added to a target group

Notice the URL is using the ALB’s DNS record to reach the Nginx web server.

Nginx website using the ALB URL

Here’s the complete Terraform module code: https://github.com/masterwali/tf-module-aws-three-tier-network-vpc

Here’s an example of how to call or reference the module. Depending on which variables have defaults, you may also need to pass values such as vpc_name and the subnet CIDR lists.

module "sharepoint_network" {
  source = "git@github.com:masterwali/tf-module-aws-three-tier-network-vpc.git"
  
  aws_cli_profile = var.aws_cli_profile
  additional_tags = var.additional_tags
}

Subscribe to the newsletter for more tutorials!


Full disclosure: I am an AWS employee; this post is my own opinion.


AWS Three-Tier VPC network with Terraform

A three-tier network is an enterprise architecture that delivers the best performance and security to end-users, with each component of the design separated into tiers. As a reminder, a typical three-tier network consists of the website, then the application, then the database, from the end-user’s perspective. Not every website automatically works like that: the developers and engineers have to build the web application by separating the user interface from the logic and the data. An AWS three-tier VPC network is not too difficult to build in the cloud, either. In this post, I’ll be using Terraform and Terragrunt to build and deploy an AWS three-tier VPC network using, of course, a VPC, subnets, route tables, network access control lists (NACLs), and a few other VPC parts. In a follow-up post, I’ll share how to create the AWS application load balancer (ALB) and the target groups with health checks.

The design

This will be a simple start; in future posts, I’ll add more details. This AWS three-tier VPC network module will create a VPC, subnets, Network Access Control Lists (NACLs), an Internet Gateway, NAT Gateways, route tables, Elastic IPs, and a few other resources using Terraform, and I’ll deploy it with Terragrunt.

Notice there are two NAT gateways; this provides high availability and fault tolerance. If the NAT gateway in availability zone (AZ) A fails or gets corrupted, the EC2 instances in AZ B will still function as expected. It’s a little more costly… it all depends on your requirements.

The Terraform Module

This module will be generic so I can reuse the three-tier VPC network over and over again. Creating a module also keeps my main code much smaller, and therefore a lot cleaner to view and understand.

What are Terraform and Terragrunt? Visit my Intro to Terragrunt and Terraform post first, then come back here! You can name your module anything you like… I named mine three-tier-vpc. Reminder: in Terraform we can use one or more .tf files to build a module. I’ll be separating this module into a few different Terraform files just for organizational purposes. Here’s my structure.

aws/tf-modules/three-tier-vpc
               ├── README.md
               ├── gateways.tf
               ├── main.tf
               ├── nacls.tf
               ├── outputs.tf
               ├── routes.tf
               ├── sec-grps.tf
               └── vars.tf

AWS VPC

Let’s start with the main.tf file which contains the VPC resource. The VPC CIDR will be a variable so we can plugin any CIDR during deployment.

# Create the VPC
resource "aws_vpc" "this" {
  cidr_block           = var.vpc_cidr
  instance_tenancy     = "default"
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = merge(
    var.additional_tags,
    {
      Name = "${var.vpc_name}-vpc"
    }
  )
}

In my module I’m giving vpc_cidr a default value, but you’re not required to do so. One caution: AWS will happily let you create multiple VPCs with the same or overlapping CIDR blocks, so a duplicate range won’t fail the apply; it will only cause pain later if you try to peer or route between those VPCs. Pick a range that fits your overall IP plan.

# vars.tf
variable "vpc_cidr" {
  type    = string
  default = "10.0.0.0/16"
}

The public subnet CIDR blocks will be variable too; never hardcode values in modules that could or should be changed. The public_subnet_cidrs variable is a list of one or many CIDR blocks. If we provide one string value in the list, it will create one public subnet; if we provide four CIDR blocks, it will create four. How sweet is that! (The variable declarations are sketched after the resource below.)

# Create the public subnets
resource "aws_subnet" "public" {
  count = length(var.public_subnet_cidrs)

  vpc_id                  = aws_vpc.this.id
  cidr_block              = var.public_subnet_cidrs[count.index]
  availability_zone       = "${var.aws_region}${var.zones[count.index]}"
  map_public_ip_on_launch = true

  tags = {
    Name = "${var.vpc_name}-public-subnet-${var.zones[count.index]}"
  }
}
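
The resource above also references var.zones and var.aws_region, which the post doesn’t show. Here’s a sketch of how those declarations might look; the defaults are my assumptions, not necessarily the module’s actual values:

variable "public_subnet_cidrs" {
  type    = list(string)
  default = ["10.0.1.0/24", "10.0.2.0/24"] # assumed defaults
}

variable "zones" {
  description = "AZ letters appended to the region, e.g. a and b."
  type        = list(string)
  default     = ["a", "b"]
}

variable "aws_region" {
  type = string
}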

Back in the subnet resource, setting the map_public_ip_on_launch attribute to true is one of the configurations that makes this a public subnet (the other key ingredient is a route to an internet gateway, which comes later); just naming it public doesn’t make it public. Next are the private subnets. Note that map_public_ip_on_launch is now set to false.

# Create the web subnets
resource "aws_subnet" "web" {
  count = length(var.web_subnet_cidrs)

  vpc_id                  = aws_vpc.this.id
  cidr_block              = var.web_subnet_cidrs[count.index]
  availability_zone       = "${var.aws_region}${var.zones[count.index]}"
  map_public_ip_on_launch = false

  tags = {
    Name = "${var.vpc_name}-web-subnet-${var.zones[count.index]}"
  }
}

The rest of the main.tf file contains the resources for the data and app subnets; those subnets are non-public too.

VPC Gateways

In this three-tier VPC network architecture, only a few subnets are public, meaning instances launched in them get public IPs and are routable without a NAT gateway. The public subnets need an internet gateway to complete the design. The private (non-public) subnets need a NAT gateway so that instances with only private IPs can still communicate with the internet (AKA outbound internet access). Each NAT gateway needs 1) an Elastic IP, and 2) to be placed in a public subnet, one per availability zone you are using!

# Add internet gateway
resource "aws_internet_gateway" "this" {
  vpc_id = aws_vpc.this.id

  tags = {
    Name = "${var.vpc_name}-internet-gateway"
  }
}

# Charges may occur

# Reserve EIPs
resource "aws_eip" "nat_a" {
  vpc = true

  tags = {
    Name = "${var.vpc_name}-eip-nat-a"
  }

}

# NAT Gateway in AZ A
resource "aws_nat_gateway" "zone_a" {
  allocation_id = aws_eip.nat_a.id
  subnet_id     = aws_subnet.public[0].id

  tags = {
    Name = "${var.vpc_name}-nat-gateway-aza"
  }

  depends_on = [
    aws_subnet.public
  ]
}

# Reserve EIPs
resource "aws_eip" "nat_b" {
  vpc = true

  tags = {
    Name = "${var.vpc_name}-eip-nat-b"
  }

}

# NAT Gateway in AZ B
resource "aws_nat_gateway" "zone_b" {
  allocation_id = aws_eip.nat_b.id
  subnet_id     = aws_subnet.public[1].id

  tags = {
    Name = "${var.vpc_name}-nat-gateway-azb"
  }

  depends_on = [
    aws_subnet.public
  ]
}

Note one of the comments in the code: the EIPs and the NAT gateways may cost you for the duration of their existence.

VPC Routes

Every time a VPC is created, the main route table is automatically provisioned too. I’m going to tag it and create my own route tables for this module. Do note that I have some hardcoded values in this module: I’ve decided this design is for exactly two AZs, so I hardcoded the index values zero and one in the resources below.

# Tag the main route table
resource "aws_ec2_tag" "main_route_table" {
  resource_id = aws_vpc.this.main_route_table_id
  key         = "Name"
  value       = "${var.vpc_name}-main-route-table"
}

# Create route table for the public subnets
# Uses IG
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.this.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.this.id
  }

  tags = {
    Name = "${var.vpc_name}-public-route-table"
  }

  depends_on = [
    aws_internet_gateway.this
  ]
}

# Associate the public subnets with the public route table
resource "aws_route_table_association" "public" {
  count = length(var.public_subnet_cidrs)

  subnet_id      = element(aws_subnet.public.*.id, count.index)
  route_table_id = aws_route_table.public.id
}

# Create a route table for the web and app subnets in AZ A
# Uses NAT gateway in AZ A
resource "aws_route_table" "private_aza" {
  vpc_id = aws_vpc.this.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.zone_a.id
  }

  tags = {
    Name = "${var.vpc_name}-private-route-table-aza"
  }

  depends_on = [
    aws_nat_gateway.zone_a
  ]
}

# Create a route table for the web and app subnets in AZ B
# Uses NAT gateway in AZ B
resource "aws_route_table" "private_azb" {
  vpc_id = aws_vpc.this.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.zone_b.id
  }

  tags = {
    Name = "${var.vpc_name}-private-route-table-azb"
  }

  depends_on = [
    aws_nat_gateway.zone_b
  ]
}

resource "aws_route_table" "data" {
  vpc_id = aws_vpc.this.id

  tags = {
    Name = "${var.vpc_name}-data-route-table"
  }
}

# Associate these subnets with the private route tables accordingly 
resource "aws_route_table_association" "web_aza" {
  subnet_id      = aws_subnet.web[0].id
  route_table_id = aws_route_table.private_aza.id
}

resource "aws_route_table_association" "app_aza" {
  subnet_id      = aws_subnet.app[0].id
  route_table_id = aws_route_table.private_aza.id
}

resource "aws_route_table_association" "web_azb" {
  subnet_id      = aws_subnet.web[1].id
  route_table_id = aws_route_table.private_azb.id
}

resource "aws_route_table_association" "app_azb" {
  subnet_id      = aws_subnet.app[1].id
  route_table_id = aws_route_table.private_azb.id
}

resource "aws_route_table_association" "data" {
  count = length(var.data_subnet_cidrs)

  subnet_id      = element(aws_subnet.data.*.id, count.index)
  route_table_id = aws_route_table.data.id
}

The routes are another set of configurations that differentiate a public from an internal sub-network. In the public route table, all non-local traffic is sent to the internet gateway. A private route table sends its non-local traffic to the NAT gateway, which then forwards it out through the internet gateway and routes the responses back.

VPC NACLs

Now it’s time to control which traffic is allowed or denied in this network. You may need to modify these rules to meet your requirements. For most web application projects, the public NACLs will look something like this.

# Public NACLS
resource "aws_network_acl" "public" {
  vpc_id     = aws_vpc.this.id
  subnet_ids = [aws_subnet.public[0].id, aws_subnet.public[1].id]

  # Ingress rules
  # Allow all local traffic
  ingress {
    protocol   = -1
    rule_no    = 100
    action     = "allow"
    cidr_block = aws_vpc.this.cidr_block
    from_port  = 0
    to_port    = 0
  }

  # Allow HTTPS traffic from the internet
  ingress {
    protocol   = "6"
    rule_no    = 105
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 443
    to_port    = 443
  }

  # Allow HTTP traffic from the internet
  ingress {
    protocol   = "6"
    rule_no    = 110
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 80
    to_port    = 80
  }

  # Allow the ephemeral ports from the internet
  ingress {
    protocol   = "6"
    rule_no    = 120
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 1025
    to_port    = 65534
  }

  ingress {
    protocol   = "17"
    rule_no    = 125
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 1025
    to_port    = 65534
  }

  # Egress rules
  # Allow all ports, protocols, and IPs outbound
  egress {
    protocol   = -1
    rule_no    = 100
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 0
    to_port    = 0
  }

  tags = {
    Name = "${var.vpc_name}-public-nacl"
  }

  depends_on = [aws_subnet.public]
}

The web subnet NACLs for this module.

resource "aws_network_acl" "web" {
  vpc_id     = aws_vpc.this.id
  subnet_ids = [aws_subnet.web[0].id, aws_subnet.web[1].id]

  # Ingress rules
  # Allow all local traffic
  ingress {
    protocol   = -1
    rule_no    = 100
    action     = "allow"
    cidr_block = aws_vpc.this.cidr_block
    from_port  = 0
    to_port    = 0
  }

  # Allow HTTP web traffic from anywhere
  ingress {
    protocol   = 6
    rule_no    = 105
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 80
    to_port    = 80
  }

  # Allow HTTPS web traffic from anywhere
  ingress {
    protocol   = 6
    rule_no    = 110
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 443
    to_port    = 443
  }

  # Allow the ephemeral ports from the internet
  ingress {
    protocol   = "6"
    rule_no    = 120
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 1025
    to_port    = 65534
  }

  ingress {
    protocol   = "17"
    rule_no    = 125
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 1025
    to_port    = 65534
  }

  # Egress rules
  # Allow all ports, protocols, and IPs outbound
  egress {
    protocol   = -1
    rule_no    = 100
    action     = "allow"
    cidr_block = "0.0.0.0/0"
    from_port  = 0
    to_port    = 0
  }

  tags = {
    Name = "${var.vpc_name}-web-nacl"
  }
}

The App and Data subnet NACLs have been set to allow only local traffic (see the sketch below). You may adjust these rules.
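
For reference, a local-only NACL for the app subnets might look like this sketch, assuming the app subnets are defined the same way as the web subnets:

resource "aws_network_acl" "app" {
  vpc_id     = aws_vpc.this.id
  subnet_ids = [aws_subnet.app[0].id, aws_subnet.app[1].id]

  # Allow traffic from within the VPC only
  ingress {
    protocol   = -1
    rule_no    = 100
    action     = "allow"
    cidr_block = aws_vpc.this.cidr_block
    from_port  = 0
    to_port    = 0
  }

  # Allow traffic to within the VPC only
  egress {
    protocol   = -1
    rule_no    = 100
    action     = "allow"
    cidr_block = aws_vpc.this.cidr_block
    from_port  = 0
    to_port    = 0
  }

  tags = {
    Name = "${var.vpc_name}-app-nacl"
  }
}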

VPC Security Group

A default security group is created every time a new VPC is provisioned. Here I’ll just give it some tags and a few generic rules.

# Modify the default security group
resource "aws_default_security_group" "this" {
  vpc_id = aws_vpc.this.id

  dynamic "ingress" {
    for_each = var.default_security_group_ingress
    content {
      self        = lookup(ingress.value, "self", null)
      cidr_blocks = compact(split(",", lookup(ingress.value, "cidr_blocks", "")))
      description = lookup(ingress.value, "description", null)
      from_port   = lookup(ingress.value, "from_port", 0)
      to_port     = lookup(ingress.value, "to_port", 0)
      protocol    = lookup(ingress.value, "protocol", "-1")
    }
  }

  dynamic "egress" {
    for_each = var.default_security_group_egress
    content {
      self        = lookup(egress.value, "self", null)
      cidr_blocks = compact(split(",", lookup(egress.value, "cidr_blocks", "")))
      description = lookup(egress.value, "description", null)
      from_port   = lookup(egress.value, "from_port", 0)
      to_port     = lookup(egress.value, "to_port", 0)
      protocol    = lookup(egress.value, "protocol", "-1")
    }
  }

  tags = merge(
    {
      Name = format("%s-default-security-group", var.vpc_name)
    },
    var.additional_tags
  )
}

Now the values for this security group are passed in as variables, like so. Be sure to change the ports and protocols to meet your needs.

variable "default_security_group_ingress" {
  description = "List of maps of ingress rules to set on the default security group"
  type        = list(map(string))
  default = [
    {
      cidr_blocks = "10.0.0.0/16"
      description = "Allow all from the local network."
      from_port   = 0
      protocol    = "-1"
      self        = false
      to_port     = 0
    },
    {
      cidr_blocks = "0.0.0.0/0"
      description = "Allow all HTTPS from the internet."
      from_port   = 443
      protocol    = "6"
      self        = false
      to_port     = 443
    },
    {
      cidr_blocks = "0.0.0.0/0"
      description = "Allow all HTTP from the internet."
      from_port   = 80
      protocol    = "6"
      self        = false
      to_port     = 80
    },
    {
      cidr_blocks = "0.0.0.0/0"
      description = "Allow all ephemeral ports from the internet."
      from_port   = 32768
      protocol    = "6"
      self        = false
      to_port     = 60999
    }
  ]
}

variable "default_security_group_egress" {
  description = "List of maps of egress rules to set on the default security group"
  type        = list(map(string))
  default = [
    {
      cidr_blocks = "0.0.0.0/0"
      description = "Allow all"
      from_port   = 0
      protocol    = "-1"
      self        = false
      to_port     = 0
    }
  ]
}

Here’s a link to the complete code so far in the dev branch. That’s all for now! In the next post, we’ll add application load balancers, target groups, listeners, etc. For a more in-depth explanation of VPC resources, check out the AWS technical documentation. Be sure to subscribe for more content like this!


Create an EC2 IAM role with Terraform

There are hundreds of posts online about what an EC2 IAM role is, so I’m not going to discuss that here. Instead, we’ll develop the code to create an EC2 IAM role with Terraform and deploy it with Terragrunt. I’m assuming you know what both of these tools are… if not, check out my introduction posts on how to set up your development environment, or click here if you need an introduction to Terraform modules and how to use them. Do note that an EC2 IAM role can only be used by EC2 instances; the IAM policies can be shared with other resources or services, though.

This Terraform module creates an AWS IAM policy, then creates an IAM role specifically designed to be used by EC2 instances. After that, it attaches the IAM role to an EC2 instance profile, and lastly attaches the IAM policy to the EC2 IAM role. Remember, every IAM role needs a set of policies (permissions).

Terraform EC2 IAM role module

Module structure

Here’s the main.tf file of the module.

# Create the AWS IAM role. 
resource "aws_iam_role" "this" {
  name = var.ec2_iam_role_name
  path = "/"

  assume_role_policy = var.assume_role_policy
}

# Create AWS IAM instance profile
# Attach the role to the instance profile
resource "aws_iam_instance_profile" "this" {
  name = var.ec2_iam_role_name
  role = aws_iam_role.this.name
}

# Create a policy for the role
resource "aws_iam_policy" "this" {
  name        = var.ec2_iam_role_name
  path        = "/"
  description = var.policy_description
  policy      = var.policy
}

# Attaches the policy to the IAM role
resource "aws_iam_policy_attachment" "this" {
  name       = var.ec2_iam_role_name
  roles      = [aws_iam_role.this.name]
  policy_arn = aws_iam_policy.this.arn
}
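
One caveat worth knowing: aws_iam_policy_attachment manages the policy’s attachments exclusively, so it will detach the policy from any role, user, or group not listed in it. If you’d rather attach the policy to just this role without claiming exclusivity, aws_iam_role_policy_attachment is a drop-in alternative:

# Non-exclusive alternative to aws_iam_policy_attachment
resource "aws_iam_role_policy_attachment" "this" {
  role       = aws_iam_role.this.name
  policy_arn = aws_iam_policy.this.arn
}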

As always, there’s a vars.tf.

variable "ec2_iam_role_name" {
  type = string

  validation {
    condition     = length(var.ec2_iam_role_name) > 4 && substr(var.ec2_iam_role_name, 0, 4) == "svc-"
    error_message = "The ec2_iam_role_name value must be a valid IAM role name, starting with \"svc-\"."
  }
}

variable "policy_description" {
  type = string

  validation {
    condition     = length(var.policy_description) > 4
    error_message = "The policy_description value must contain more than 4 characters."
  }
}

variable "assume_role_policy" {}

variable "policy" {}

The module full code: https://github.com/masterwali/tf-module-iam-ec2-role

Terraform variable type constraints

If you noticed the “type” keyword in the variable declarations, that’s there to ensure the value provided matches exactly what the module expects. Without it, someone could pass an integer to a string-typed variable; that might break terraform apply without ever erroring during terraform plan. We want to catch problems like that as soon as possible, so with type constraints (and the validations below), everything is checked at plan time.

Terraform variable validation

Terraform, starting with version 0.13.x, added the capability to apply custom validation to the values provided for variables. Looking at the validation blocks above, the ec2_iam_role_name variable checks both the length and the required starting characters; this is an excellent way to ensure everyone follows the naming conventions your organization has created. The second validation, on policy_description, simply ensures the provided value is more than 4 characters. Learn more about variable conditions.

How to use the module

In another git project we’ll have this setup.

Terragrunt setup
Each environment has an inputs.yml, a terragrunt.hcl, and a vars.tf.

Here’s the main.tf that will call the Terraform module; be sure to update the git URL to your own code. Now we can create many, many EC2 IAM roles with the same naming convention! By default a git module source pulls the default (master) branch; append a ?ref= query string to the source URL if you want to pin a tag or branch.

provider "aws" {
  region  = var.aws_region
  profile = var.aws_cli_profile
}

terraform {
  backend "s3" {}
}

module "web_server" {
  source = "git@gitlab.com:cloudly-engineer/aws/tf-modules/iam-ec2-role.git"

  ec2_iam_role_name  = "svc-web-server-role"
  policy_description = "IAM ec2 instance profile for the web servers."
  assume_role_policy = file("assumption-policy.json")
  policy             = data.aws_iam_policy_document.web_server.json
}
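
The contents of assumption-policy.json aren’t shown in this post, but an EC2 trust policy generally just allows the EC2 service to assume the role. Here’s an equivalent sketch expressed as a Terraform data source instead of a JSON file; this is my own variation, not something the module requires:

data "aws_iam_policy_document" "ec2_assume" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ec2.amazonaws.com"]
    }
  }
}

# Then pass: assume_role_policy = data.aws_iam_policy_document.ec2_assume.json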

Here’s the sample “web server” IAM policy that’s attached to this role.

data "aws_iam_policy_document" "web_server" {
  statement {
    sid    = "GetS3Stuff"
    effect = "Allow"
    actions = [
      "s3:List*",
      "s3:Get*"
    ]
    resources = ["*"]
  }
}
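
To put the role to work, launch an EC2 instance with the instance profile the module created. A sketch, assuming the module exposes the profile name through an output named instance_profile_name and that web_ami_id is a variable you’ve defined:

resource "aws_instance" "web" {
  ami                  = var.web_ami_id # hypothetical variable
  instance_type        = "t3.micro"
  iam_instance_profile = module.web_server.instance_profile_name # assumed output

  tags = {
    Name = "web-server"
  }
}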

Then do a terragrunt init, plan and apply!

Here’s what the error message looks like when the validation fails during the plan.

Terraform validation check

The calling terraform/terragrunt code: https://github.com/masterwali/ec2-iam-role

Subscribe to get notified when more AWS and Terraform code is published!


AWS IAM groups and policies – Terraform

Let’s create a module to create and manage AWS IAM groups and policies with Terraform! In summary, a Terraform module is one or more Terraform resources bundled together to be used as a single resource. You can learn more about Terraform modules here. If you have no idea what Terraform or Terragrunt are, then start here. Let’s code!

The Terraform module structure

aws-iam-group/
├── main.tf
├── vars.tf
└── README.md

The main file

The main.tf contains all the resources required to create AWS IAM groups and their policies. Notice this one uses three resources!

resource "aws_iam_group" "this" {
name = var.iam_group_name
}

resource "aws_iam_policy" "this" {
name = var.policy_name
description = var.policy_description
policy = var.policy
}

resource "aws_iam_group_policy_attachment" "this" {
group = aws_iam_group.this.name
policy_arn = aws_iam_policy.this.arn
}

The vars file

variable "iam_group_name" {}

variable "policy_name" {}

variable "policy_description" {}

variable "policy" {}

Here’s the AWS IAM groups and policies Terraform module on GitHub: https://github.com/masterwali/tf-module-aws-iam-group

Create AWS IAM groups and policies with Terraform

Let’s use the Terraform module to create one or many IAM groups and their policies! You can deploy with plain Terraform, but I like to use Terragrunt.

The Terragrunt/Terraform structure

This is one way to create your deployment structure. You can name this directory anything you like.

aws-iam
├── LICENSE
├── README.md
├── cloud-engineers-policy.tf
├── database-admins-policy.tf
├── developers-policy.tf
├── network-admins-policies.tf
├── groups.tf
├── main.tf
├── dev
│   ├── inputs.yml
│   ├── terragrunt.hcl
│   └── vars.tf
├── prod
│   ├── inputs.yml
│   ├── terragrunt.hcl
│   └── vars.tf
├── qa
│   ├── inputs.yml
│   ├── terragrunt.hcl
│   └── vars.tf
├── sec
│   ├── inputs.yml
│   ├── terragrunt.hcl
│   └── vars.tf
└── terragrunt.hcl
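
The root terragrunt.hcl typically holds the shared remote state configuration that each environment inherits. A minimal sketch, with the bucket and lock table names invented for illustration:

# Root terragrunt.hcl: one S3 backend definition for every environment
remote_state {
  backend = "s3"
  config = {
    bucket         = "my-terraform-state-bucket" # hypothetical bucket
    key            = "aws-iam/${path_relative_to_include()}/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-locks" # hypothetical lock table
  }
}

Each environment’s terragrunt.hcl then includes this root file with include { path = find_in_parent_folders() } and supplies its own inputs.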

In this example we will be creating groups and policies for developers, cloud engineers, database admins, and network admins, all with one and the same Terraform module! Using a module ensures consistency and governance for our AWS resources.

The Terraform files

I like to keep the main file clean. I mainly use it for data calls and local variables.

main.tf

locals {
  account_id = data.aws_caller_identity.current.account_id
}

provider "aws" {
  region  = var.aws_region
  profile = var.aws_cli_profile
}

terraform {
  backend "s3" {}
}

data "aws_caller_identity" "current" {}

groups.tf

# CLOUD ENGINEERS
module "cloud_engineers" {
  source = "git@github.com:masterwali/tf-module-aws-iam-group.git"

  iam_group_name     = "cloud-engineers"
  policy_name        = "cloud-engineers"
  policy_description = "Cloud Engineers policy"
  policy             = data.aws_iam_policy_document.cloud_engineers.json
}
# ......... etc. 

# NETWORK ADMINS
# Modules creates group and policy and attaches policy to group. 
module "network_admins" {
  source = "git@github.com:masterwali/tf-module-aws-iam-group.git"

  iam_group_name     = "network-admins"
  policy_name        = "network-admins"
  policy_description = "Network Admins policy"
  policy             = data.aws_iam_policy_document.network_admins_main.json
}
# Create the network admins misc policy 
resource "aws_iam_policy" "network_admins_misc" {
  name        = "network-admins-misc"
  description = "Network Admins"
  policy      = data.aws_iam_policy_document.network_admins_misc.json
}
# Attach the above policy to the network admins group. 
resource "aws_iam_group_policy_attachment" "network_admins_misc" {
  group      = "network-admins"
  policy_arn = aws_iam_policy.network_admins_misc.arn

  depends_on = [aws_iam_policy.network_admins_misc]
}

The “cloud engineers” IAM policy file

data "aws_iam_policy_document" "cloud_engineers" {
  statement {
    sid    = "FullAccess"
    effect = "Allow"
    actions = [
      "iam:*",
      "kms:*",
      "s3:*"
    ]
    resources = ["*"]
  }
}

To see the developers policy and more take a look at the complete code at https://github.com/masterwali/aws-iam

Now let’s apply the code by navigating to the desired environment directory, then initialize with ‘terragrunt init’ and apply with ‘terragrunt apply’. More on Terragrunt initiation, plans, and applies.

Subscribe to get notified when more AWS and Terraform code is published!

