AWS Service Control Policies with Terraform

AWS Organizations

AWS Organizations is a cloud service designed to centrally manage AWS accounts and to roll up billing from multiple AWS accounts into a single account. That account may be referred to as the “master” account because it can manage permissions of all the accounts “attached” to it, or as the “billing” account because it’s the one that gets the invoice each month. Any commercial AWS account can play this role; it becomes the master account as soon as other accounts join its organization. On top of that sits Service Control Policies, a feature that allows permission management across all the AWS accounts in your organization. In this post, I’ll share with you how to implement AWS service control policies with Terraform! Let’s break it down.

How to join an AWS Organization

Check out my previous post for the details.

AWS Organization Units

This has nothing to do with Microsoft’s Active Directory (AD) organizational units, and there’s no integration between the two, either. AWS organizational units (OUs) are a way of grouping AWS accounts so we can manage account permissions in groups or in a hierarchy. There are dozens of ways to create this hierarchy and it all depends on your objective: you can group accounts by project, department, mission, environment, classification, etc.

Simple OU structure by environments

Our simple structure: AWS OUs by environment
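
Here’s a minimal Terraform sketch of that structure, assuming the aws_organizations_organization resource defined later in this post; it uses the same aws_organizations_organizational_unit resource you’ll see again for the prod OU:

# One OU per environment, all directly under the organization root
resource "aws_organizations_organizational_unit" "dev" {
  name      = "dev"
  parent_id = aws_organizations_organization.this.roots[0].id
}

resource "aws_organizations_organizational_unit" "staging" {
  name      = "staging"
  parent_id = aws_organizations_organization.this.roots[0].id
}

resource "aws_organizations_organizational_unit" "prod" {
  name      = "prod"
  parent_id = aws_organizations_organization.this.roots[0].id
}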

Service Control Policies with Terraform

Service Control Policies (SCPs) are a critical feature to learn and understand. Questions about them appear on many, many AWS certification exams. The feature is highly useful and heavily used in the real world. These policies can allow or deny actions or services at a high level, meaning above the AWS account itself. For example, let’s say you want to experiment with the most expensive EC2 instance type, and let’s say you also have IAM permissions that allow those actions. You try to launch an EC2 instance of type p4d.24xlarge and bam, you get the encoded authorization failure message! How?! You gave yourself full permissions! Bingo… it’s the service control policies. Even with a full administrator policy on the account, you can still be denied by an SCP.

It’s best practice to enable SCPs, create the OUs and policies, and attach the policies even before building your systems! For more information on SCPs, check out AWS’s documentation.

Show me the Terraform code!

I personally like to break the main.tf Terraform file down into separate, manageable Terraform files. Here’s my file structure. Notice there’s only one environment, aka account, here: “master”. Yes, I also use Terragrunt.

org/
├── README.md
├── dev-ou.tf
├── main.tf
├── master
│   ├── inputs.yml
│   ├── terragrunt.hcl
│   └── vars.tf
├── prod-ou.tf
├── root-ou.tf
├── staging-ou.tf
└── terragrunt.hcl

main.tf usually contains common code. Enable SCPs by adding “SERVICE_CONTROL_POLICY” to the enabled_policy_types list.

provider "aws" {
  region  = var.aws_region
  profile = var.aws_cli_profile
}

terraform {
  backend "s3" {}
}

# Provides a resource to create an AWS organization.
resource "aws_organizations_organization" "this" {

  # List of AWS service principal names for which 
  # you want to enable integration with your organization
  aws_service_access_principals = [
    "cloudtrail.amazonaws.com",
    "config.amazonaws.com",
  ]

  feature_set = "ALL"

  enabled_policy_types = [
    "TAG_POLICY",
    "SERVICE_CONTROL_POLICY"
  ]
}

My root-ou.tf contains the master account code and all the service control policies that I want applied to all accounts. Notice the policies are attached to the root OU, so they are inherited by all the accounts below the root OU!

resource "aws_organizations_account" "master" {
  # A friendly name for the member account
  name  = "my-master"
  email = "mymaster@email.com"

  # Enables IAM users to access account billing information 
  # if they have the required permissions
  # iam_user_access_to_billing = "ALLOW"

  tags = {
    Name  = "my-master"
    Owner = "Waleed"
    Role  = "billing"
  }

  parent_id = aws_organizations_organization.this.roots[0].id
}

# ---------------------------------------- # 
# Service Control Policies for all accounts
# ---------------------------------------- #

# ---------------------------- #
# REGION RESTRICTION 
# ---------------------------- #

data "aws_iam_policy_document" "restrict_regions" {
  statement {
    sid       = "RegionRestriction"
    effect    = "Deny"
    actions   = ["*"]
    resources = ["*"]

    condition {
      test     = "StringNotEquals"
      variable = "aws:RequestedRegion"

      values = [
        "us-east-1"
      ]
    }
  }
}

resource "aws_organizations_policy" "restrict_regions" {
  name        = "restrict_regions"
  description = "Deny all regions except US East 1."
  content     = data.aws_iam_policy_document.restrict_regions.json
}

resource "aws_organizations_policy_attachment" "restrict_regions_on_root" {
  policy_id = aws_organizations_policy.restrict_regions.id
  target_id = aws_organizations_organization.this.roots[0].id
}

# ---------------------------- #
# EC2 INSTANCE TYPE RESTRICTION 
# ---------------------------- #

data "aws_iam_policy_document" "restrict_ec2_types" {
  statement {
    sid       = "RestrictEc2Types"
    effect    = "Deny"
    actions   = ["ec2:RunInstances"]
    resources = ["arn:aws:ec2:*:*:instance/*"]

    condition {
      # StringNotLike (not StringNotEquals) is required for the wildcard values below
      test     = "StringNotLike"
      variable = "ec2:InstanceType"

      values = [
        "t3*",
        "t4g*",
        "a1.medium",
        "a1.large"
      ]
    }
  }
}

resource "aws_organizations_policy" "restrict_ec2_types" {
  name        = "restrict_ec2_types"
  description = "Allow certain EC2 instance types only."
  content     = data.aws_iam_policy_document.restrict_ec2_types.json
}

resource "aws_organizations_policy_attachment" "restrict_ec2_types_on_root" {
  policy_id = aws_organizations_policy.restrict_ec2_types.id
  target_id = aws_organizations_organization.this.roots[0].id
}

# ---------------------------- #
# REQUIRE EC2 TAGS 
# ---------------------------- #

data "aws_iam_policy_document" "require_ec2_tags" {
  statement {
    sid    = "RequireTag"
    effect = "Deny"
    actions = [
      "ec2:RunInstances",
      "ec2:CreateVolume"
    ]
    resources = [
      "arn:aws:ec2:*:*:instance/*",
      "arn:aws:ec2:*:*:volume/*"
    ]

    condition {
      test     = "Null"
      variable = "aws:RequestTag/Name"

      values = ["true"]
    }
  }
}

resource "aws_organizations_policy" "require_ec2_tags" {
  name        = "require_ec2_tags"
  description = "Name tag is required for EC2 instances and volumes."
  content     = data.aws_iam_policy_document.require_ec2_tags.json
}

resource "aws_organizations_policy_attachment" "require_ec2_tags_on_root" {
  policy_id = aws_organizations_policy.require_ec2_tags.id
  target_id = aws_organizations_organization.this.roots[0].id
}

Here’s the authorization failure I got when I attempted to launch an EC2 instance with a type not approved in the SCP defined above!

The console returns an encoded authorization failure message; decoding it (for example with the AWS CLI’s sts decode-authorization-message command) reveals the deny statement from the SCP. Likewise, creating a volume without a Name tag now fails, and the decoded message shows the Name tag deny statement from the SCP.

Account or OU specific SCP

What if you want to restrict certain actions or services in a single account? The code below shows how to block internet access in the prod environment. It creates the prod account, creates the SCP, and attaches the policy directly to the prod OU, which contains only the prod account. To attach an SCP to a single account instead, see the sketch below or check the documentation.
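
For reference, the attachment target can be an account ID as well as an OU ID. Here’s a sketch, reusing the resource names from below, of attaching the same policy straight to the prod account only:

# Attach the SCP directly to a single member account instead of an OU
resource "aws_organizations_policy_attachment" "block_internet_on_prod_account" {
  policy_id = aws_organizations_policy.block_internet.id
  target_id = aws_organizations_account.prod.id
}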

prod-ou.tf

resource "aws_organizations_account" "prod" {
  # A friendly name for the member account
  name  = "my-prod"
  email = "my-prod@email.com"

  # Enables IAM users to access account billing information 
  # if they have the required permissions
  iam_user_access_to_billing = "ALLOW"

  tags = {
    Name  = "my-prod"
    Owner = "Waleed"
    Role  = "prod"
  }

  parent_id = aws_organizations_organizational_unit.prod.id
}

resource "aws_organizations_organizational_unit" "prod" {
  name      = "prod"
  parent_id = aws_organizations_organization.this.roots[0].id
}

# ------------------------------- #
# PREVENT INTERNET ACCESS TO A VPC 
# ------------------------------- #

data "aws_iam_policy_document" "block_internet" {
  statement {
    sid    = "BlockInternet"
    effect = "Deny"
    actions = [
      "ec2:AttachInternetGateway",
      "ec2:CreateInternetGateway",
      "ec2:CreateEgressOnlyInternetGateway",
      "ec2:CreateVpcPeeringConnection",
      "ec2:AcceptVpcPeeringConnection",
      "globalaccelerator:Create*",
      "globalaccelerator:Update*"
    ]
    resources = ["*"]

  }
}

resource "aws_organizations_policy" "block_internet" {
  name        = "block_internet"
  description = "Block internet access to the production network."
  content     = data.aws_iam_policy_document.block_internet.json
}

resource "aws_organizations_policy_attachment" "block_internet_on_prod" {
  policy_id = aws_organizations_policy.block_internet.id
  target_id = aws_organizations_organizational_unit.prod.id
}

I think you get the idea, go forth and implement governance with AWS’s organizations and Service Control Policies! Subscribe for more cloud code!

AWS CloudShell

AWS CloudShell was just announced this month (December 2020). Let’s go over what AWS CloudShell is, its use cases, when you shouldn’t use it, and much more.

What is AWS CloudShell?

This kind of feature is not new to the cloud; Azure and Google Cloud engineers are already familiar with it. As an AWS fanatic, I think it’s a great new feature!

  • AWS CloudShell’s permissions are managed by IAM
  • Inactive and long-running sessions are automatically stopped and recycled
  • Create start-up scripts
  • Available on the latest versions of Google Chrome, Mozilla Firefox, Microsoft Edge, and Apple Safari
  • Sudo privileges to install and modify your session
  • Since it’s Amazon Linux 2, you can use yum to manage its packages

AWS CloudShell IAM permissions

By default you may not have the appropriate IAM permissions to use CloudShell, and you may see various unauthorized error messages if you attempt to launch it. Example error message: “Unable to start the environment…is not authorized to perform: cloudshell:{action} on resource…”. Going with the principle of least privilege, here are the minimum IAM permissions required to get started!

{
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowUsersCloudShell",
        "Effect": "Allow",
        "Action": [
            "cloudshell:CreateSession",
            "cloudshell:CreateEnvironment",
            "cloudshell:GetEnvironmentStatus",
            "cloudshell:PutCredentials"
        ],
        "Resource": "*"
    }]
}

Additionally, you can allow your engineers to upload and download files between their local machine and AWS CloudShell! Just add these IAM actions (see the Terraform sketch after them):

cloudshell:GetFileDownloadUrls
cloudshell:GetFileUploadUrls
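
If you manage these permissions with Terraform like the rest of this post, the extra statement could look something like this sketch (the policy document name is my own):

data "aws_iam_policy_document" "cloudshell_file_transfer" {
  statement {
    sid    = "AllowCloudShellFileTransfer"
    effect = "Allow"
    actions = [
      "cloudshell:GetFileDownloadUrls",
      "cloudshell:GetFileUploadUrls"
    ]
    resources = ["*"]
  }
}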

AWS CloudShell Cost

Zero! Nada! The shell itself and its storage do not incur any charges, but of course any resources you create from it may.

Launch AWS CloudShell

At the time of this writing (December 2020), it’s available in these regions:

  • US East (Ohio)
  • US East (N. Virginia)
  • US West (Oregon)
  • Asia Pacific (Tokyo)
  • Europe (Ireland)

Once logged into the AWS console, there’s a new icon to the left of the Notifications (bell) icon. Click the icon showing a greater-than sign and an underscore (>_).

AWS CloudShell

It loads your IAM permissions for you automatically! Next let’s run some AWS CLI commands… and notice we don’t have to provide a region or profile! My user account is allowed to list IAM users, so this works right away.

$ aws iam list-users

# result
{
    "Users": [
        {
            "Path": "/",
            "UserName": "waleed",
            "UserId": "ABCAEFEFAEAFEAFE1",
            "Arn": "arn:aws:iam::1234567890:user/waleed",
            "CreateDate": "2020-12-16T02:05:25+00:00",
            "PasswordLastUsed": "2020-12-16T02:08:50+00:00"
        }
    ]
}

Switch shell

By default it’s the bash shell represented by the dollar ($) sign. Switch to PowerShell by typing pwsh. PowerShell is represented by the letters PS. Finally if you want to use Z shell enter zsh at any time. Z shell is represented by the percentage (%) symbol. If you want to switch back to Bash, enter bash.

Actions and options

Pretty simple and self-explanatory.

What’s it good for?

  • Ad-hoc actions to query information
  • Check permissions and such in case your AWS CLI profile is broken on your workstation or elsewhere
  • Learning environment

What’s it not good for?

  • Long term development
  • CI/CD
  • Production workloads
  • It’s your responsibility to manage all user-installed software/packages!
  • You will not be able to access your private EC2 instances; it’s not a VPN solution.
  • Users can access the internet, which may not be allowed in your organization

That’s all for now; read more about AWS CloudShell. Subscribe for more tutorials like this!


Intro to Terragrunt and Terraform

Terragrunt is a command line tool that makes Terraform better and helps you build a better infrastructure-as-code pipeline. Terragrunt is built on the concept of DRY; as their website states, DRY stands for “Don’t Repeat Yourself”. Terragrunt can help you structure your code directories so you write the Terraform code once and apply the same code with different variables and different remote state locations for each environment. Another useful feature of Terragrunt is before and after hooks (see the sketch below); if you are a developer, you know you’ll need this feature at some point. Let’s get started with this intro to Terragrunt and Terraform.
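
As a quick taste of hooks, here’s a minimal sketch of a before hook in a terragrunt.hcl; the echo command is just a stand-in for whatever you need to run:

terraform {
  # Runs before every plan and apply in this directory
  before_hook "announce" {
    commands = ["plan", "apply"]
    execute  = ["echo", "Running Terraform for this environment"]
  }
}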

Installing Terraform and Terragrunt

https://cloudly.engineer/2020/setup-infrastructure-as-code-environment/aws/

Terraform code: resource

Let’s define the anatomy of a Terraform resource.

resource "aws_instance" "this" {
  ami           = data.aws_ami.centos.id
  instance_type = "t3a.medium"
}

Let’s break down this small piece of code.

  1. resource (no quotes) is a reserved keyword; it means create or ensure this type of resource exists.
  2. "aws_instance" (double quotes with underscores) is the type of resource you want to create. Here’s a list of the types available today. Terraform attempts to always be up to date, but it could be missing resource types or some features of a resource. Most of the time, it has all the core resource types and options available.
  3. "this" (double quotes) is the last part of the first line: a name you give this resource in Terraform’s state file. My best practice is to always use "this" unless you have multiple of the same resource; then be specific, but don’t put the resource type in this name. That’s redundant nonsense.
  4. Within the braces are one or more required or optional arguments. They differ from resource to resource.

    If you want to learn more Terraform click here.

Terragrunt Code: Deploy Resources

Terraform code just defines our infrastructure as code, so we need Terragrunt to help with multi-environment deployments. The combination of the two prevents us from repeating our code for however many AWS accounts we have. Below is my Terragrunt project for my AWS account “settings” repository.

└── settings
    ├── README.md
    ├── dev
    │   ├── terragrunt.hcl
    │   ├── inputs.yml
    │   └── vars.tf
    ├── qa
    │   ├── terragrunt.hcl
    │   ├── inputs.yml
    │   └── vars.tf
    ├── sec
    │   ├── terragrunt.hcl
    │   ├── inputs.yml
    │   └── vars.tf
    ├── prod
    │   ├── terragrunt.hcl
    │   ├── inputs.yml
    │   └── vars.tf
    ├── terragrunt.hcl
    ├── inputs.yml
    └── vars.tf

terragrunt.hcl – the root terragrunt.hcl and the environment terragrunt.hcl files are a must
inputs.yml – this YAML file contains variables specific to that environment, such as the AWS CLI profile name
vars.tf – Terraform variable files for each environment, plus one common vars.tf for all deployments

This separation of projects allows each environment to run a different version of your Terraform code at the same time. Read on to see what I mean.

Main ‘terragrunt.hcl’

remote_state {
  backend = "s3"
  config = {
    bucket  = "bucket-name-for-terraform-state"
    key     = "${path_relative_to_include()}/terraform.tfstate"
    region  = local.local_inputs.aws_region
    profile = local.local_inputs.aws_cli_profile
    encrypt = true
  }
}

locals {
  # Environment-specific inputs live next to this file
  local_inputs  = yamldecode(file("${get_terragrunt_dir()}/inputs.yml"))
  # Shared inputs come from the root of the project
  global_inputs = yamldecode(file(find_in_parent_folders("inputs.yml")))
}

inputs = merge(local.global_inputs, local.local_inputs)

remote state
I’ll be storing the Terraform state file in Amazon S3.

  1. key All of my environments’/accounts’ Terraform state files will be stored in one AWS S3 bucket, separated by environment using the directory names. The Terragrunt function path_relative_to_include() helps with that (see the sketch after this list). But you can pass a different bucket for each environment through the local inputs.
  2. profile Since we’ll have several AWS accounts and profiles, this value will be dynamic and passed in from the environments input file.
  3. I think the rest are obvious.
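
To illustrate item 1, here’s roughly what the rendered remote state config becomes for the dev environment under the layout above, since path_relative_to_include() evaluates to the environment’s directory name:

# Rendered for settings/dev (sketch; qa, sec, and prod get their own keys)
remote_state {
  backend = "s3"
  config = {
    bucket  = "bucket-name-for-terraform-state"
    key     = "dev/terraform.tfstate"
    encrypt = true
  }
}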

locals

  1. local_inputs During terragrunt plan or apply, Terragrunt grabs the variables for this environment and renders the Terraform files for it in the local .terragrunt-cache directory.
  2. global_inputs contains variables that are common to all environments (if needed)

dev ‘terragrunt.hcl’

include {
  path = find_in_parent_folders()
}

terraform {
  source = "git@giturl.com:path/to/tf-modules/settings.git?ref=dev"
}

This says ‘hey, go fetch the TF code from this URL, but only the dev branch’. It also says ‘the Terraform backend configuration is in the main terragrunt.hcl file’. Next let’s init Terragrunt. If you haven’t already created the S3 bucket for your state file, it will prompt you to create it.

I’ll be using AWS CLI profiles; this assumes you have already set them up.
AWS Permissions Required

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowS3ForTerraform",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketVersioning",
                "s3:CreateBucket"
            ],
            "Resource": "arn:aws:s3:::YOUR-TF-BUCKET-NAME-HERE"
        },
        {
            "Sid": "AllowDownloadNUploadtoPrefix",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::YOUR-TF-BUCKET-NAME-HERE/*"
        }
    ]
}

dev ‘inputs.yml’

aws_cli_profile: "your-env-aws-cli-profile-name"
aws_region: "us-east-1"
other_var: "other-value"

dev ‘vars.tf’

variable aws_account_alias {
  default = "acct-nickname-here"
}

variable aws_region {}

variable aws_cli_profile {}

variable other_var {}

As mentioned before, these are environment-specific values. Let’s initialize already!

cd settings/dev/
terragrunt init
Output

----------------------------------------------------------------------
.....
.....
Plan: 1 to add, 0 to change, 0 to destroy.

---------------------------------------------------------------------

Then run ‘terragrunt apply’ to apply the configuration, and verify.

Terragrunt cache

Don’t put the Terragrunt cache in git. Add the following to the .gitignore file of your Terragrunt repositories.

*.terraform*
*.terragrunt*

If you update your Terraform repository, you’ll have to tell Terragrunt to fetch the latest code too. You can do that with this additional argument:

terragrunt init --terragrunt-source-update

An alternative design with Terragrunt and Terraform

In this alternative design structure you have a main.tf file at the root of your project. This main.tf can contain all your main code in one place, instead of you having to create and manage several different git repositories and branches. See the structure example below, and here’s a full working code example of this design.

├── README.md
├── dev
│   ├── inputs.yml
│   ├── terragrunt.hcl
│   └── vars.tf
├── prod
│   ├── inputs.yml
│   ├── terragrunt.hcl
│   └── vars.tf
├── qa
│   ├── inputs.yml
│   ├── terragrunt.hcl
│   └── vars.tf
├── sec
│   ├── inputs.yml
│   ├── terragrunt.hcl
│   └── vars.tf
├── terragrunt.hcl
├── main.tf
└── web-server-policy.tf

Create an EC2 IAM role with Terraform

There are hundreds of posts online about what an EC2 IAM role is, so I’m not going to discuss that here. Instead, we will develop the code to create an EC2 IAM role with Terraform and deploy it with Terragrunt. I’m assuming you know what both of these tools are… if not, check out my introduction posts on how to set up your development environment. Or if you need an introduction to Terraform modules and how to use them, click here. Do note that an EC2 IAM role can only be used by EC2 instances; the IAM policies can be shared with other resources or services, though.

This Terraform module creates an AWS IAM policy, then creates an IAM role specifically designed to be used by EC2 instances. After that it attaches the IAM role to an EC2 instance profile. Lastly it attaches the IAM policy to the EC2 IAM role. Remember, every IAM role needs a set of policies (permissions).

Terraform EC2 IAM role module

EC2 IAM role Terraform module structure

Here’s the main.tf file of the module.

# Create the AWS IAM role. 
resource "aws_iam_role" "this" {
  name = var.ec2_iam_role_name
  path = "/"

  assume_role_policy = var.assume_role_policy
}

# Create AWS IAM instance profile
# Attach the role to the instance profile
resource "aws_iam_instance_profile" "this" {
  name = var.ec2_iam_role_name
  role = aws_iam_role.this.name
}

# Create a policy for the role
resource "aws_iam_policy" "this" {
  name        = var.ec2_iam_role_name
  path        = "/"
  description = var.policy_description
  policy      = var.policy
}

# Attach the policy to the IAM role.
# Note: aws_iam_policy_attachment manages this policy's attachments
# exclusively across the account; aws_iam_role_policy_attachment is a
# safer alternative when the policy is shared.
resource "aws_iam_policy_attachment" "this" {
  name       = var.ec2_iam_role_name
  roles      = [aws_iam_role.this.name]
  policy_arn = aws_iam_policy.this.arn
}

As always there’s a vars.tf

variable ec2_iam_role_name {
  type = string

  validation {
    condition     = length(var.ec2_iam_role_name) > 4 && substr(var.ec2_iam_role_name, 0, 4) == "svc-"
    error_message = "The ec2_iam_role_name value must be a valid IAM role name, starting with \"svc-\"."
  }
}

variable policy_description {
  type = string

  validation {
    condition     = length(var.policy_description) > 4
    error_message = "The policy_description value must contain more than 4 characters."
  }
}

variable assume_role_policy {}

variable policy {}

The module full code: https://github.com/masterwali/tf-module-iam-ec2-role

Terraform variable type constraints

If you noticed the “type” keyword in the variable declarations, it’s there to ensure the value provided matches exactly what the module expects. Without it, someone could hand the wrong kind of value to a variable, which may break terraform apply without erroring during terraform plan. We want to catch problems like that as soon as possible, and with type constraints all the checks happen during terraform plan.
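
For example, here’s a hypothetical typed variable and a module call it rejects. Note that Terraform still auto-converts compatible primitives (a number becomes a string), but a structurally wrong value such as a list fails during plan:

variable "ec2_iam_role_name" {
  # A value that can't convert to a string is rejected at plan time
  type = string
}

# In the calling code:
# ec2_iam_role_name = ["svc-web"]   # Error: string required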

Terraform variable validation

Terraform, starting with version 0.13.x, released this new capability to apply validation to the values provided for variables. If you noticed the validation blocks above, the ec2_iam_role_name variable checks both the length and the required starting characters. This is an excellent way to ensure everyone follows the naming conventions your organization has created. The second validation, on policy_description, just ensures the provided value is more than 4 characters. Learn more about variable conditions.

How to use the module

In another git project we’ll have this setup.

Terragrunt EC2 IAM role setup: each environment has an inputs.yml, terragrunt.hcl, and vars.tf

Here’s the main.tf that calls the Terraform module. Be sure to update the git source URL for your code. Now we can create many, many EC2 IAM roles with the same naming convention! Note that by default the source variable pulls the master branch.

provider "aws" {
  region  = var.aws_region
  profile = var.aws_cli_profile
}

terraform {
  backend "s3" {}
}

module "web_server" {
  source = "git@gitlab.com:cloudly-engineer/aws/tf-modules/iam-ec2-role.git"

  ec2_iam_role_name  = "svc-web-server-role"
  policy_description = "IAM ec2 instance profile for the web servers."
  assume_role_policy = file("assumption-policy.json")
  policy             = data.aws_iam_policy_document.web_server.json
}

Here’s the sample “web server” IAM policy that’s attached to this role.

data "aws_iam_policy_document" "web_server" {
  statement {
    sid    = "GetS3Stuff"
    effect = "Allow"
    actions = [
      "s3:List*",
      "s3:Get*"
    ]
    resources = ["*"]
  }
}

Then do a terragrunt init, plan and apply!

Here’s what the error message looks like when the validation fails during the plan.

Terraform EC2 IAM role module: validation check failure

The calling terraform/terragrunt code: https://github.com/masterwali/ec2-iam-role

Subscribe to get notified when more AWS and Terraform code is published!


AWS IAM groups and policies – Terraform

Let’s create a module to create and manage AWS IAM groups and policies with Terraform! In summary, a Terraform module is one or more Terraform resources bundled together to be used as a single Terraform resource. You can learn more about Terraform modules here. If you have no idea what Terraform or Terragrunt are, then start here. Let’s code!

The Terraform module structure

aws-iam-group/
├── main.tf
├── vars.tf
└── README.md

The main file

The main.tf contains all the resources required to create AWS IAM groups and their policies. Notice this one uses three resources!

resource "aws_iam_group" "this" {
name = var.iam_group_name
}

resource "aws_iam_policy" "this" {
name = var.policy_name
description = var.policy_description
policy = var.policy
}

resource "aws_iam_group_policy_attachment" "this" {
group = aws_iam_group.this.name
policy_arn = aws_iam_policy.this.arn
}

The vars file

variable iam_group_name {}

variable policy_name {}

variable policy_description {}

variable policy {}

Here’s the AWS IAM groups and policies Terraform module on GitHub: https://github.com/masterwali/tf-module-aws-iam-group

Create AWS IAM groups and policies with Terraform

Let’s use the Terraform module to create one or many IAM groups and their policies! You can deploy with Terraform but I like to use Terragrunt.

The Terragrunt/Terraform structure

This is one way to create your deployment structure. You can name this directory anything you like.

aws-iam
├── LICENSE
├── README.md
├── cloud-engineers-policy.tf
├── database-admins-policy.tf
├── developers-policy.tf
├── network-admins-policies.tf
├── groups.tf
├── main.tf
├── dev
│   ├── inputs.yml
│   ├── terragrunt.hcl
│   └── vars.tf
├── prod
│   ├── inputs.yml
│   ├── terragrunt.hcl
│   └── vars.tf
├── qa
│   ├── inputs.yml
│   ├── terragrunt.hcl
│   └── vars.tf
├── sec
│   ├── inputs.yml
│   ├── terragrunt.hcl
│   └── vars.tf
└── terragrunt.hcl

In this example we will create groups and policies for developers, cloud engineers, database admins, and network admins, all with one and the same Terraform module! Using a module ensures consistency and governance for our AWS resources.

The Terraform files

I like to keep the main file clean. I mainly use it for data calls and local variables.

main.tf

locals {
  account_id = data.aws_caller_identity.current.account_id
}

provider "aws" {
  region  = var.aws_region
  profile = var.aws_cli_profile
}

terraform {
  backend "s3" {}
}

data "aws_caller_identity" "current" {}

groups.tf

# CLOUD ENGINEERS
module "cloud_engineers" {
  source = "git@github.com:masterwali/tf-module-aws-iam-group.git"

  iam_group_name     = "cloud-engineers"
  policy_name        = "cloud-engineers"
  policy_description = "Cloud Engineers policy"
  policy             = data.aws_iam_policy_document.cloud_engineers.json
}
# ......... etc. 

# NETWORK ADMINS
# Modules creates group and policy and attaches policy to group. 
module "network_admins" {
  source = "git@github.com:masterwali/tf-module-aws-iam-group.git"

  iam_group_name     = "network-admins"
  policy_name        = "network-admins"
  policy_description = "Network Admins policy"
  policy             = data.aws_iam_policy_document.network_admins_main.json
}
# Create the network admins misc policy 
resource "aws_iam_policy" "network_admins_misc" {
  name        = "network-admins-misc"
  description = "Network Admins"
  policy      = data.aws_iam_policy_document.network_admins_misc.json
}
# Attach the above policy to the network admins group. 
resource "aws_iam_group_policy_attachment" "network_admins_misc" {
  group      = "network-admins"
  policy_arn = aws_iam_policy.network_admins_misc.arn

  depends_on = [aws_iam_policy.network_admins_misc]
}

The “cloud engineers” IAM policy file

data "aws_iam_policy_document" "cloud_engineers" {
  statement {
    sid    = "FullAccess"
    effect = "Allow"
    actions = [
      "iam:*",
      "kms:*",
      "s3:*"
    ]
    resources = ["*"]
  }
}

To see the developers policy and more take a look at the complete code at https://github.com/masterwali/aws-iam

Now let’s apply the code by navigating to the desired environment directory, initiating with ‘terragrunt init’, and applying with ‘terragrunt apply’. More on Terragrunt initiation, plans, and applies.

Subscribe to get notified when more AWS and Terraform code is published!


AWS IAM console: the created groups and their attached policies

AWS KMS Customer Managed CMK with Terraform

AWS Key Management Service (KMS) is an AWS managed service that allows us to create, manage, and delete customer master keys (CMKs), or simply use AWS managed keys, for encrypting our data in the AWS cloud. From my experience passing both the AWS Certified Security – Specialty and the AWS Certified Solutions Architect – Professional, AWS Key Management Service is a must to learn inside and out. If you understand all of the KMS features, you’ll have a better chance of passing those two exams. Let’s learn how to create and manage AWS KMS customer managed CMKs with Terraform! I will also be using Terragrunt so we can follow the DRY (Don’t Repeat Yourself) model.

This is part two of AWS Key management service (KMS) – Part 1.

KMS key types

Key properties

  • Key Id
  • Creation date
  • Description
  • Key state

Customer managed keys (CMK)

Given the name, you can guess the differences right away, right? For starters, you as the customer have to explicitly create the key with the AWS CLI, the AWS API, Terraform, or any other available method. You can set the CMK policies to allow services or users to use the key, and the key policy can pass the permission responsibilities over to IAM policies instead of the KMS CMK key policies. A CMK can be enabled or disabled at any time to allow or stop usage of the key.

Key aliases are a great way to tag and identify customer managed CMKs; they help users quickly find the desired key in the AWS console or in AWS CLI query results. Rotation can also be enabled to happen automatically on a yearly basis, and aliases help there too, because the alias is simply re-assigned to the new key.

You can definitely delete a key, but you must be damn sure that no user, data, or service is using it! Once the key is gone you cannot decrypt the data that was encrypted with the deleted key! So, to semi-control this chaos, AWS enforces scheduled deletion rather than immediate deletion. You delete any customer managed key by scheduling its deletion; the minimum window is 7 days. Best practice is to set this to a month or more.

NOTE: Before proceeding

  • If you haven’t installed Terraform or Terragrunt then you must follow this guide
  • For Terraform deep dive explanation follow this guide

Pricing

Each customer master key (CMK) that you create in AWS Key Management Service (KMS) costs $1/month until you delete it. For the N. VA region:

  • $0.03 per 10,000 requests
  • $0.03 per 10,000 requests involving RSA 2048 keys
  • $0.10 per 10,000 ECC GenerateDataKeyPair requests
  • $0.15 per 10,000 asymmetric requests except RSA 2048
  • $12.00 per 10,000 RSA GenerateDataKeyPair requests

Learn more at https://aws.amazon.com/kms/pricing/

Create and edit KMS Keys

Terraform module: main.tf

# Creates/manages KMS CMK
resource "aws_kms_key" "this" {
  description              = var.description
  customer_master_key_spec = var.key_spec
  is_enabled               = var.enabled
  enable_key_rotation      = var.rotation_enabled
  tags                     = var.tags
  policy                   = var.policy
  deletion_window_in_days  = 30
}

# Add an alias to the key
resource "aws_kms_alias" "this" {
  name          = "alias/${var.alias}"
  target_key_id = aws_kms_key.this.key_id
}

Terraform module: vars.tf

variable description {}

# Options available
# SYMMETRIC_DEFAULT, RSA_2048, RSA_3072,
# RSA_4096, ECC_NIST_P256, ECC_NIST_P384,
# ECC_NIST_P521, or ECC_SECG_P256K1
variable key_spec {
  default = "SYMMETRIC_DEFAULT"
}

variable enabled {
  default = true
}

variable rotation_enabled {
  default = true
}

variable tags {}

variable alias {}

variable policy {}

Terragrunt KMS directory structure

terragrunt directory structure

Terraform plan

main.tf 

locals {
  admin_username = "waleed"
  account_id     = data.aws_caller_identity.current.account_id
}

provider "aws" {
  region  = var.aws_region
  profile = var.aws_cli_profile
}

terraform {
  backend "s3" {}
}

data "aws_caller_identity" "current" {}

data "aws_iam_policy_document" "ssm_key" {
  statement {
    sid       = "Enable IAM User Permissions"
    effect    = "Allow"
    actions   = ["kms:*"]
    resources = ["*"]

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::${local.account_id}:root"]
    }
  }

  statement {
    sid       = "Allow access for Key Administrators"
    effect    = "Allow"
    actions   = ["kms:*"]
    resources = ["*"]

    principals {
      type = "AWS"
      identifiers = [
        "arn:aws:iam::${local.account_id}:user/${local.admin_username}",
        "arn:aws:iam::${local.account_id}:role/aws-service-role/support.amazonaws.com/AWSServiceRoleForSupport",
        "arn:aws:iam::${local.account_id}:role/aws-service-role/trustedadvisor.amazonaws.com/AWSServiceRoleForTrustedAdvisor"
      ]
    }
  }

  statement {
    sid    = "Allow use of the key"
    effect = "Allow"
    actions = [
      "kms:Encrypt",
      "kms:Decrypt",
      "kms:ReEncrypt*",
      "kms:GenerateDataKey*",
      "kms:DescribeKey"
    ]
    resources = ["*"]

    principals {
      type = "AWS"
      identifiers = [
        "arn:aws:iam::${local.account_id}:user/${local.admin_username}",
        "arn:aws:iam::${local.account_id}:role/aws-service-role/support.amazonaws.com/AWSServiceRoleForSupport",
        "arn:aws:iam::${local.account_id}:role/aws-service-role/trustedadvisor.amazonaws.com/AWSServiceRoleForTrustedAdvisor"
      ]
    }
  }

  statement {
    sid    = "Allow attachment of persistent resources"
    effect = "Allow"
    actions = [
      "kms:CreateGrant",
      "kms:ListGrants",
      "kms:RevokeGrant"
    ]
    resources = ["*"]

    principals {
      type = "AWS"
      identifiers = [
        "arn:aws:iam::${local.account_id}:user/${local.admin_username}",
        "arn:aws:iam::${local.account_id}:role/aws-service-role/support.amazonaws.com/AWSServiceRoleForSupport",
        "arn:aws:iam::${local.account_id}:role/aws-service-role/trustedadvisor.amazonaws.com/AWSServiceRoleForTrustedAdvisor"
      ]
    }

    condition {
      test     = "Bool"
      variable = "kms:GrantIsForAWSResource"
      values   = ["true"]
    }
  }
}

ssm-key.tf: It’s best practice to have each type of key in its own Terraform file. In this example the key is for SSM. You can create keys for any service available in your region!

Notice the source is pulling only the ‘dev’ branch. Once tested and verified, the source branch would be changed for each environment.

module "ssm" {
  source = "git@gitlab.com:cloudly-engineer/aws/tf-modules/kms.git?ref=dev"

  description = "KMS key for System Manager"
  alias       = "ssm"
  policy      = data.aws_iam_policy_document.ssm_key.json

  tags = {
    Name  = "ssm"
    Owner = "wsarwari"
  }
}

Part of the terragrunt plan output:

# module.ssm.aws_kms_alias.this will be created
  + resource "aws_kms_alias" "this" {
      + arn            = (known after apply)
      + id             = (known after apply)
      + name           = "alias/ssm"
      + target_key_arn = (known after apply)
      + target_key_id  = (known after apply)
    }

# module.ssm.aws_kms_key.this will be created
  + resource "aws_kms_key" "this" {
      + arn                      = (known after apply)
      + customer_master_key_spec = "SYMMETRIC_DEFAULT"
      + deletion_window_in_days  = 30
      + description              = "KMS key for System Manager"
      + enable_key_rotation      = true
      + id                       = (known after apply)
      + is_enabled               = true
      + key_id                   = (known after apply)
      + key_usage                = "ENCRYPT_DECRYPT"
      + policy                   = jsonencode(
            {
              + Statement = [
                  + {
                      + Action    = "kms:*"
                      + Effect    = "Allow"
                      + Principal = {
                          + AWS = "arn:aws:iam::111122223334:root"
                        }
                      + Resource  = "*"
                      + Sid       = "Enable IAM User Permissions"
                    },
                  + {
                      + Action    = "kms:*"
                      + Effect    = "Allow"
                      + Principal = {
                          + AWS = [
                              + "arn:aws:iam::111122223334:user/waleed",
                              + "arn:aws:iam::111122223334:role/aws-service-role/trustedadvisor.amazonaws.com/AWSServiceRoleForTrustedAdvisor",
                              + "arn:aws:iam::111122223334:role/aws-service-role/support.amazonaws.com/AWSServiceRoleForSupport",
                            ]
                        }
                      + Resource  = "*"
                      + Sid       = "Allow access for Key Administrators"
                    },
                  + {
                      + Action    = [
                          + "kms:ReEncrypt*",
                          + "kms:GenerateDataKey*",
                          + "kms:Encrypt",
                          + "kms:DescribeKey",
                          + "kms:Decrypt",
                        ]
                      + Effect    = "Allow"
                      + Principal = {
                          + AWS = [
                              + "arn:aws:iam::111122223334:user/waleed",
                              + "arn:aws:iam::111122223334:role/aws-service-role/trustedadvisor.amazonaws.com/AWSServiceRoleForTrustedAdvisor",
                              + "arn:aws:iam::111122223334:role/aws-service-role/support.amazonaws.com/AWSServiceRoleForSupport",
                            ]
                        }
                      + Resource  = "*"
                      + Sid       = "Allow use of the key"
                    },
                  + {
                      + Action    = [
                          + "kms:RevokeGrant",
                          + "kms:ListGrants",
                          + "kms:CreateGrant",
                        ]
                      + Condition = {
                          + Bool = {
                              + kms:GrantIsForAWSResource = "true"
                            }
                        }
                      + Effect    = "Allow"
                      + Principal = {
                          + AWS = [
                              + "arn:aws:iam::111122223334:user/waleed",
                              + "arn:aws:iam::111122223334:role/aws-service-role/trustedadvisor.amazonaws.com/AWSServiceRoleForTrustedAdvisor",
                              + "arn:aws:iam::111122223334:role/aws-service-role/support.amazonaws.com/AWSServiceRoleForSupport",
                            ]
                        }
                      + Resource  = "*"
                      + Sid       = "Allow attachment of persistent resources"
                    },
                ]
              + Version   = "2012-10-17"
            }
        )
      + tags                     = {
          + "Name"  = "ssm"
          + "Owner" = "wsarwari"
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.

After applying the code you should see the key in Key Management Service.

kms key created

Enable and disable CMKs

Simple, just update the “enabled” variable to false! Plan and apply.

module "ssm" {
  source = "git@gitlab.com:cloudly-engineer/aws/tf-modules/kms.git?ref=dev"

  description = "KMS key for System Manager"
  alias       = "ssm"
  ...
  enabled     = false

  ....
}

Automatic rotation

Another easy change with Terraform! By default I have set the key to rotate in the custom module I created above. If you want to disable automatic rotation, all you have to do is set the variable rotation_enabled to false.

module "ssm" {
  source = "git@gitlab.com:cloudly-engineer/aws/tf-modules/kms.git?ref=dev"

  description = "KMS key for System Manager"
  alias       = "ssm"
  ...
  rotation_enabled     = false

  ....
}

Key Alias

Keys are identified by randomly generated GUIDs, so it’s best to create an alias for each key so it can be easily identified by everyone. Aliases are also easy to create and update in Terraform. In the custom module above I added the ability to create the key alias right after the key is provisioned. The alias is passed in as a variable, as shown below.

module "ssm" {
  source = "git@gitlab.com:cloudly-engineer/aws/tf-modules/kms.git?ref=dev"

  description = "KMS key for System Manager"
  alias       = "ssm"
  ...
}

This chart is from the AWS documentation:

  • Customer managed CMK – can view CMK metadata: yes; can manage CMK: yes; used only for my AWS account: yes; automatic rotation: optional, every 365 days (1 year)
  • AWS managed CMK – can view CMK metadata: yes; can manage CMK: no; used only for my AWS account: yes; automatic rotation: required, every 1095 days (3 years)
  • AWS owned CMK – can view CMK metadata: no; can manage CMK: no; used only for my AWS account: no; automatic rotation: varies

Key Policy

Each key must have a policy that allows or denies users or services the use of the key. The design I have created allows you to give each key its own policy. Let’s break down each statement of the key policy.

Enable IAM User Permissions

statement {
    sid       = "Enable IAM User Permissions"
    effect    = "Allow"
    actions   = ["kms:*"]
    resources = ["*"]

    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::${local.account_id}:root"]
    }
  }

This statement allows all users and services in this account to execute all KMS actions on this key. It’s best practice to follow up on this open statement by creating an IAM policy to restrict the key usage. The identifiers in the principals section can also be other AWS accounts.

Allow access for Key Administrators

statement {
    sid       = "Allow access for Key Administrators"
    effect    = "Allow"
    actions   = ["kms:*"]
    resources = ["*"]

    principals {
      type = "AWS"
      identifiers = [
        "arn:aws:iam::${local.account_id}:user/${local.admin_username}",
        "arn:aws:iam::${local.account_id}:role/aws-service-role/support.amazonaws.com/AWSServiceRoleForSupport",
        "arn:aws:iam::${local.account_id}:role/aws-service-role/trustedadvisor.amazonaws.com/AWSServiceRoleForTrustedAdvisor"
      ]
    }
  }

This statement allows selected individual users and IAM roles to fully manage this key. This statement must exist for every single key; without it you will absolutely lose access to and management of the key.

Allow use of the key

statement {
    sid    = "Allow use of the key"
    effect = "Allow"
    actions = [
      "kms:Encrypt",
      "kms:Decrypt",
      "kms:ReEncrypt*",
      "kms:GenerateDataKey*",
      "kms:DescribeKey"
    ]
    resources = ["*"]

    principals {
      type = "AWS"
      identifiers = [
        "arn:aws:iam::${local.account_id}:user/${local.admin_username}",
        "arn:aws:iam::${local.account_id}:role/aws-service-role/support.amazonaws.com/AWSServiceRoleForSupport",
        "arn:aws:iam::${local.account_id}:role/aws-service-role/trustedadvisor.amazonaws.com/AWSServiceRoleForTrustedAdvisor"
      ]
    }
  }

This statement specifically grants usage of the key. If you do not provide the “Enable IAM User Permissions” statement, then you must include this statement; otherwise the key will not be usable by anyone besides the key administrators.

Allow attachment of persistent resources

statement {
    sid    = "Allow attachment of persistent resources"
    effect = "Allow"
    actions = [
      "kms:CreateGrant",
      "kms:ListGrants",
      "kms:RevokeGrant"
    ]
    resources = ["*"]

    principals {
      type = "AWS"
      identifiers = [
        "arn:aws:iam::${local.account_id}:user/${local.admin_username}",
        "arn:aws:iam::${local.account_id}:role/aws-service-role/support.amazonaws.com/AWSServiceRoleForSupport",
        "arn:aws:iam::${local.account_id}:role/aws-service-role/trustedadvisor.amazonaws.com/AWSServiceRoleForTrustedAdvisor"
      ]
    }

    condition {
      test     = "Bool"
      variable = "kms:GrantIsForAWSResource"
      values   = ["true"]
    }
  }

This statement allows listing, creating, and revoking grants for the key by the principals identified in the statement.

Deleting the key

Do not delete production keys! You must be 100% sure that no service or data has ever used the key you are about to delete. Once a key is deleted it’s not possible to restore it, and you cannot decrypt data that was encrypted with the deleted key. In Terraform you can delete the key by deleting or commenting out the code, then running a plan and apply. Or, if you want to keep the code and just remove the resource from AWS, execute terragrunt destroy. The destroy also releases the alias so it can be reused.

kms key pending deletion
Key deletion


The status is now pending deletion. The key will be deleted 30 days from the day of the destroy.

Stop KMS key deletion

If you decide not to delete it, you can select the key in the AWS console, click on Key actions, and select Cancel key deletion. This option is only available before the deletion date.

Cancel key deletion
Key actions

Read more about deleting KMS customer managed keys.

As always if you see any errors, mistakes, have suggestions or questions please comment below. Don’t forget to like, share, and subscribe below for more! 


AWS managed CMKs

I talked about AWS managed customer master keys in my previous post here.

AWS Key management service (KMS) – Part 1

AWS Key Management Service (AWS KMS) is an AWS service that allows us to encrypt and decrypt our data with customer master keys (CMKs) in the AWS cloud. As a result of passing both the AWS Certified Security – Specialty and the AWS Certified Solutions Architect – Professional exams, I feel it’s critical that you know AWS KMS inside and out. If you understand all of the AWS KMS features, I believe you’ll have a better chance of passing. This deep dive (with infographic) will span several posts about AWS Key Management Service (AWS KMS).

AWS Key Management Service (AWS KMS) is one of the most critical services for securing your AWS account and all of its data. KMS provides security that has to be put in place before new services and data come in. A lot of other AWS services need KMS keys, so it’s best practice to set KMS up before starting other services like EC2, S3, CloudTrail, Lambda, and so on.

Intro to CMKs

AWS Key Management Service (AWS KMS) offers different types of keys for encrypting and decrypting our data with a customer master key (CMK): AWS managed CMKs and customer managed CMKs. AWS KMS can support both symmetric and asymmetric keys. By default, none of these key types exist when the account is created. An AWS managed key is automatically created when you choose to encrypt with a certain service, while the customer managed type must be created by you before it’s available for use. Let’s deep dive into each of these types and the Terraform code to create them.

KMS key types

CMK properties

  • Key Id
  • Creation date
  • Description
  • Key state

Symmetric CMK

A symmetric key is a 256-bit encryption key that never leaves AWS KMS unencrypted. It’s a single key that’s used for both encryption and decryption (more about symmetric keys here). When you create a KMS key, this is the default key type. This type of key requires valid AWS credentials to use, which means that if users require encryption without AWS credentials, you need an asymmetric key type. Symmetric keys are the better option for most cases. This key type cannot be used for signing or verification.

Asymmetric CMK

An asymmetric CMK is either an RSA key pair (a public key and a private key) used for encryption and decryption or for signing and verification (but not both), or an elliptic curve (ECC) key pair used for signing and verification. The private key never leaves AWS KMS unencrypted. You can use the public key within AWS KMS by calling the AWS KMS API operations, or download the public key and use it outside of AWS KMS.

Symmetric Key vs. Asymmetric Keys

AWS managed CMKs

AWS managed CMKs are created, managed, and used on behalf of the customer by an AWS service. A few AWS services only accept this type. As a customer you can view the keys and their policies and track their usage via CloudTrail, but you cannot manage them, rotate them, or modify their key policies.


AWS Managed keys – Pricing

There is no cost to the customer for the creation and storage of AWS managed CMKs. Under the free tier, 20,000 requests/month, calculated across all regions, have no cost. For example, 50,000 symmetric requests in a month would leave 30,000 billable requests, which at $0.03 per 10,000 comes to $0.09.

Here’s the KMS pricing breakdown for the Northern Virginia region. The price does vary by region; use the pricing calculator for the latest numbers.

  • $0.03 per 10,000 requests
  • $0.03 per 10,000 requests involving RSA 2048 keys
  • $0.10 per 10,000 ECC GenerateDataKeyPair requests
  • $0.15 per 10,000 asymmetric requests except RSA 2048
  • $12.00 per 10,000 RSA GenerateDataKeyPair requests

Note: Customer managed CMKs do have a cost, that’s another post. Subscribe to get notified when that’s released.

AWS MANAGED KEYS – Aliases

AWS managed CMKs already have aliases, and they cannot be modified. The naming pattern is aws/service-name: aws/s3, aws/ssm, etc.

The KeyManager field will be either “AWS” or “Customer”, so you can tell whether a key is managed by AWS or by you.

AWS MANAGED KEYS – Create

Remember, the customer cannot create AWS managed CMKs directly. This type of key becomes available to you when you first encrypt an object or resource with the corresponding service.

By default the customer has no keys. Let’s run this command to see the list of keys.

aws kms list-keys --profile { profile_name }

{
    "Keys": []
}

Note: If you haven’t set up your AWS CLI, be sure to visit this page: “Setup infrastructure as code environment”. You’ll need the following IAM policy in order to list the keys.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadKMSkeys",
            "Effect": "Allow",
            "Action": [
                "kms:ListKeys",
                "kms:ListKeyPolicies",
                "kms:ListRetirableGrants",
                "kms:ListAliases",
                "kms:GetKeyPolicy",
                "kms:DescribeKey"
            ],
            "Resource": "*"
        }
    ]
}
By default no AWS managed keys exist

Let’s say you want to encrypt an object or a bucket in S3. You will see the aws/s3 key available all of a sudden. Hint: you can tell this is an AWS managed CMK because of the alias format!

Encrypting an S3 object with an AWS managed CMK

Now if we list the keys you will see the AWS managed CMK.

aws kms list-keys --profile dev

{
    "Keys": [
        {
            "KeyArn": "arn:aws:kms:us-east-1:111111111:key/2323232-2323-2424-23424a-a2324a3", 
            "KeyId": "2323232-2323-2424-23424a-a2324a3"
        }
    ]
}

Key Type: Symmetric
Origin: AWS_KMS
Key Spec: SYMMETRIC_DEFAULT
Key Usage: Encrypt and decrypt

Here’s the default, non-editable key policy for the CMK created above. Each AWS managed CMK policy is restricted to a single AWS service.

{
    "Version": "2012-10-17",
    "Id": "auto-s3-2",
    "Statement": [
        {
            "Sid": "Allow access through S3 for all principals in the account that are authorized to use S3",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:DescribeKey"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "kms:CallerAccount": "1111111111",
                    "kms:ViaService": "s3.us-east-1.amazonaws.com"
                }
            }
        },
        {
            "Sid": "Allow direct access to key metadata to the account",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::1111111111:root"
            },
            "Action": [
                "kms:Describe*",
                "kms:Get*",
                "kms:List*"
            ],
            "Resource": "*"
        }
    ]
}

Note: A “Principal” of type “AWS” pointing at the account’s “root” ARN means that any authenticated user or service in that account can use this key.

AWS MANAGED CMKS – Enable and disable keys

Not possible with AWS managed CMKs.

AWS MANAGED CMKS – automatic rotation

Automatic rotation is enabled to rotate the key every 3 years for AWS Managed CMKs. You cannot modify this setting.

AWS MANAGED CMKS – Delete Keys

Not possible with AWS managed CMKs.

Quotas and Limits

Because the cloud services are shared with hundreds of thousands of customers, AWS has put some limits on requests and resources to ensure acceptable performance for all customers.

Resource limits

Grants for a given principal per CMK: 500

Key policy document size: 32 KB

Request limits

If you see this error

You have exceeded the rate at which you may call KMS. Reduce the frequency of your calls.
(Service: AWSKMS; Status Code: 400; Error Code: ThrottlingException; Request ID:

This “ThrottlingException” means your request was valid, but you have passed the quota and AWS is purposefully throttling your requests. Use the Service Quotas console or the RequestServiceQuotaIncrease API operation to request an increase.

Request quotas apply to both AWS managed and customer managed CMKs, but not to AWS owned CMKs, which are designed to protect your resources.

Requests such as updating a CMK’s alias or disabling a key also have limits. If you aren’t making changes in large quantities, you shouldn’t have to worry about hitting these limits. Here are a few default request quotas; for the full list click here.

  • UpdateAlias: 5/second
  • DisableKey: 5/second
  • ListKeys: 100/second

KMS Security

  • Dedicated hardened hardware security modules (HSMs)
  • HSMs are physical devices that do not have a virtualization layer
  • Key usage is isolated within an AWS Region.
  • Multiple Amazon employees with role-specific access are required to perform administrative actions on the HSMs. There is no mechanism to export plaintext CMKs.
  • Approved for the following compliance programs: SOC 1, 2, and 3, FedRAMP, DoD Impact Levels 2-6, HIPAA BAA, C5, and more.
  • Available in the U.S. GovCloud and U.S. Secret regions
  • All symmetric key encrypt commands used within the HSMs use the Advanced Encryption Standard (AES) with 256-bit keys
  • AWS KMS uses envelope encryption internally to secure confidential material between service endpoints
  • KMS does not store the data, just the keys
  • Use VPC endpoints to keep KMS traffic off the public internet (a sketch follows this list)
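Here’s a minimal sketch of that last point: a KMS interface endpoint in Terraform, assuming an existing VPC. The VPC, subnet, and security group IDs are placeholders:

resource "aws_vpc_endpoint" "kms" {
  vpc_id              = "vpc-0123456789abcdef0"       # placeholder VPC ID
  service_name        = "com.amazonaws.us-east-1.kms" # match your region
  vpc_endpoint_type   = "Interface"
  subnet_ids          = ["subnet-0123456789abcdef0"]  # placeholder subnet ID
  security_group_ids  = ["sg-0123456789abcdef0"]      # placeholder security group ID
  private_dns_enabled = true # resolves the regional KMS hostname to the endpoint
}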

Encryption at Rest

  • Customer master keys (CMKs) are stored in FIPS 140-2 Level 2–compliant hardware security modules (HSMs).
  • The key material for CMKs and the encryption keys that protect the key material never leave the HSMs in plaintext form.

Encryption in Transit

  • The key material that AWS KMS generates for CMKs is never exported or transmitted in AWS KMS API operations
  • All AWS KMS API calls must be signed and transmitted using at minimum Transport Layer Security (TLS) 1.2
  • Calls to AWS KMS also require a modern cipher suite that supports perfect forward secrecy

That wraps up KMS for now. In the future I’ll cover AWS KMS monitoring in detail, how KMS integrates with other services, cross-account KMS permissions, customer managed CMKs, and more deep dives, so be sure to subscribe! Here’s part two: https://cloudly.engineer/2020/aws-kms-customer-managed-cmk-with-terraform/aws/


As always if you see any errors, mistakes, have suggestions or questions please comment below. Don’t forget to like, share, and subscribe for more!

AWS Account settings with Terraform and terragrunt Part 2

This is a continuation of AWS account settings as code with Terraform and Terragrunt. Be sure to start with part one. In this part I’ll block Amazon S3 bucket public access, enable EBS volume encryption at the AWS account level, and apply IAM account password policies.

Cost: These exact settings applied on the account have no cost unless you use customer managed keys from KMS.

AWS IAM account password policies

# password policy
resource "aws_iam_account_password_policy" "this" {
  minimum_password_length        = 10
  max_password_age               = 365
  password_reuse_prevention      = 10
  require_lowercase_characters   = true
  require_numbers                = true
  require_uppercase_characters   = true
  require_symbols                = true
  allow_users_to_change_password = true
}

This applies the IAM account password settings as code.

This change requires the IAM “UpdateAccountPasswordPolicy” action to be allowed.
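If you manage that permission in Terraform as well, a minimal sketch (the policy and statement names are my own) might look like this:

# Grants only what's needed to read and update the account password policy
data "aws_iam_policy_document" "password_policy" {
  statement {
    sid    = "AllowPasswordPolicyUpdates"
    effect = "Allow"
    actions = [
      "iam:GetAccountPasswordPolicy",   # lets Terraform read the current policy
      "iam:UpdateAccountPasswordPolicy" # lets Terraform apply changes
    ]
    resources = ["*"]
  }
}

resource "aws_iam_policy" "password_policy" {
  name   = "allow-password-policy-updates"
  policy = data.aws_iam_policy_document.password_policy.json
}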

Update your “settings” repository dev branch. Then in the “settings” Terragrunt directory, update your code with:

terragrunt init --terragrunt-source-update

terragrunt plan

# then
terragrunt apply

Check by going to the IAM service dashboard… you should now see a green check mark for “Apply an IAM password policy”.

Block Amazon S3 bucket public access

There are so many horror stories in the news about Amazon Simple Storage Service (S3) buckets accidentally left open to the public. Let’s prevent accidental public access on S3 buckets at the account level, just in case you forget to block it at the bucket level.

resource "aws_s3_account_public_access_block" "this" {
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

This applies the following settings, as stated under “Block public access…” in the S3 console.

  • Blocks public access to buckets and objects granted through new access control lists (ACLs)
  • Blocks public access to buckets and objects granted through any access control lists (ACLs)
  • Blocks public access to buckets and objects granted through new public bucket or access point policies
  • Blocks public and cross-account access to buckets and objects through any public bucket or access point policies

You’ll need the S3 “PutAccountPublicAccessBlock” action for this setting.

Update your “settings” repository dev branch. Then in the “settings” Terragrunt directory, update your code with:

terragrunt init --terragrunt-source-update

terragrunt plan

# then
terragrunt apply

Verify by going to S3 service -> on the left navigation click “Block public access (account settings)“. You should see all green “On” for every single line.

block s3 public settings

Default EBS volume encryption

This account-level setting applies default EBS volume encryption to every EBS volume at creation, regardless of how it’s provisioned. If you provision an EC2 instance through the console, the AWS CLI, or any of the AWS SDKs and don’t explicitly enable EBS volume encryption, this setting does it for you! It’s quite amazing and simple to apply or remove.

What key will it use? It can use the default AWS managed key or a customer managed KMS key. I haven’t set up KMS keys yet, so I’ll use the default AWS managed key for now.

resource "aws_ebs_encryption_by_default" "this" {
  enabled = true
}
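If you later create a customer managed KMS key, you can point the account’s default EBS encryption key at it with the aws_ebs_default_kms_key resource. A hedged sketch, assuming an aws_kms_key.ebs defined elsewhere in your code:

resource "aws_ebs_default_kms_key" "this" {
  key_arn = aws_kms_key.ebs.arn # assumes a customer managed key defined elsewhere
}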

You’ll need the following IAM policy statement to apply this setting.

{
    "Sid": "AllowsEBSdefaultEncryption",
    "Effect": "Allow",
    "Action": [
        "ec2:GetEbsEncryptionByDefault",
        "ec2:EnableEbsEncryptionByDefault",
        "ec2:DisableEbsEncryptionByDefault",
        "ec2:ResetEbsDefaultKmsKeyId",
        "ec2:ModifyEbsDefaultKmsKeyId"
    ],
    "Resource": "*"
}

And again, update your Terraform git repository, then update your Terragrunt deployment code and apply.

Navigate to the EC2 service; on the main page, in the “Account attributes” panel on the right, click “EBS encryption“.

Do note this only applies to a single region!
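Since the setting is regional, one way to cover additional regions from the same code is a provider alias per region. A minimal sketch, assuming you also operate in us-west-2:

# Second provider pinned to another region
provider "aws" {
  alias   = "us_west_2"
  region  = "us-west-2"
  profile = var.aws_cli_profile
}

# The same account-level setting, applied in that region
resource "aws_ebs_encryption_by_default" "us_west_2" {
  provider = aws.us_west_2
  enabled  = true
}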

Bonus

Billing settings

These aren’t automated; they’re only available once your account is the consolidated billing account and you have permissions to enable billing alerts and emails. Within Billing Preferences, select the following settings:

  • Sends a PDF version of your invoice by email
  • As stated, get notified on your free tier usage and cost

That’s it for now!

As always if you see any errors, mistakes, have suggestions or questions please comment below. Don’t forget to like, share, and subscribe for more!

Image by Денис Марчук from Pixabay

AWS Account settings with Terraform and terragrunt

I’m going to use Terraform (TF) plus Terragrunt (TG) to apply the various AWS account settings as code. Writing these settings as code assures me I can make a change once and apply it to all my environments or AWS accounts with the same settings. Terragrunt calls this DRY, which stands for “Don’t Repeat Yourself”. Write once and apply to hundreds of AWS accounts!

If you haven’t installed Terraform and Terragrunt, go back a step by going through this first https://cloudly.engineer/2020/setup-infrastructure-as-code-environment/aws/ and maybe this https://cloudly.engineer/2019/aws-cloud-account-initial-configuration/aws/

Terraform code: Define resources

The first account configuration sets the account alias with the Terraform aws_iam_account_alias resource. This “settings” code is a single git project.

resource "aws_iam_account_alias" "this" {
  account_alias = "acct-nickname-here"
}

Let’s break down this small piece of code.

  1. resource (no quotes) is a reserved keyword; it means create or ensure this type of resource exists
  2. “aws_iam_account_alias” (double quotes with underscores) is the type of resource you want to create. Here’s a list of the types available today. Terraform attempts to always be up to date, but it could be missing resource types or some features of a resource. Most of the time it has all the core resource types and options available.
  3. “this” (double quotes) is the last part of the first line: a name you give this resource in Terraform’s state file. My best practice is to always use “this” unless you have multiple of the same resource; then be specific, but don’t put the resource type in this name. That’s redundant nonsense.
  4. Within the braces are one or more arguments of different types

    If you want to learn more about Terraform, click here.

Push this code up to your dev branch. We want to make sure it works before merging it to master.

Terragrunt Code: Deploy Resources

Terraform code just defines our infrastructure as code; Terragrunt, with the help of Terraform, does the actual deployment. The combination of the two prevents us from repeating our code for however many AWS accounts we have. Below is my Terragrunt project for “settings”.

└── settings
    ├── README.md
    ├── dev
    │   ├── terragrunt.hcl
    │   ├── inputs.yml
    │   └── vars.tf
    ├── qa
    │   ├── terragrunt.hcl
    │   ├── inputs.yml
    │   └── vars.tf
    ├── sec
    │   ├── terragrunt.hcl
    │   ├── inputs.yml
    │   └── vars.tf
    ├── prod
    │   ├── terragrunt.hcl
    │   ├── inputs.yml
    │   └── vars.tf
    ├── terragrunt.hcl
    ├── inputs.yml
    └── vars.tf

terragrunt.hcl: The root terragrunt.hcl and the environment terragrunt.hcl files are a must
inputs.yml: This YAML file contains variables specific to that environment, such as the AWS CLI profile name
vars.tf: Terraform variable files for each environment, plus one common vars.tf for all deployments

This separation allows each environment to run a different version of your Terraform code at the same time. Let’s continue to see what I mean.

Main ‘terragrunt.hcl’

remote_state {
  backend = "s3"
  config = {
    bucket  = "bucket-name-for-terraform-state"
    key     = "${path_relative_to_include()}/terraform.tfstate"
    region  = local.local_inputs.aws_region
    profile = local.local_inputs.aws_cli_profile
    encrypt = true
  }
}

locals {
  # Environment-specific variables from this environment's inputs.yml
  local_inputs  = yamldecode(file("${get_terragrunt_dir()}/inputs.yml"))
  # Common variables for all environments from the root inputs.yml
  global_inputs = yamldecode(file("${get_parent_terragrunt_dir()}/inputs.yml"))
}

inputs = merge(local.global_inputs, local.local_inputs)

remote state
I’ll be storing the Terraform state file in Amazon S3.

  1. key All of my environments’/accounts’ Terraform state files will be stored in one AWS S3 bucket, separated by environment using directory names. The Terragrunt function path_relative_to_include() helps with that.
  2. profile Since we’ll have several AWS accounts and profiles, this value is dynamic and passed in from each environment’s inputs file.
  3. I think the rest are obvious.

locals

  1. local_inputs During terragrunt plan or apply, Terragrunt grabs the variables for the environment and generates the Terraform files for that specific environment in the .terragrunt-cache directory
  2. global_inputs contains variables that are common to all environments (if needed)

dev ‘terragrunt.hcl’

include {
  path = find_in_parent_folders()
}

terraform {
  source = "git@giturl.com:path/to/tf-modules/settings.git?ref=dev"
}

This says ‘hey, go fetch the TF code from this URL, but only the dev branch’. The include block also says ‘the Terraform backend configuration is in the main terragrunt.hcl file’. Next, let’s init Terragrunt. If you haven’t already created the S3 bucket for your state file, Terragrunt will ask to create it.
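As a side note, once the dev branch is merged you might pin stable environments to an immutable tag instead of a branch. A sketch, assuming your module repository tags releases:

terraform {
  source = "git@giturl.com:path/to/tf-modules/settings.git?ref=v1.0.0" # hypothetical release tag
}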

I’ll be using AWS CLI profiles; this assumes you have already set them up.

AWS Permissions Required

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowS3ForTerraform",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketVersioning",
                "s3:CreateBucket"
            ],
            "Resource": "arn:aws:s3:::YOUR-TF-BUCKET-NAME-HERE"
        },
        {
            "Sid": "AllowDownloadNUploadtoPrefix",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::YOUR-TF-BUCKET-NAME-HERE/*"
        }
    ]
}

dev ‘inputs.yml’

aws_cli_profile: "your-env-aws-cli-profile-name"
aws_region: "us-east-1"

dev ‘vars.tf’

variable "aws_account_alias" {
  default = "acct-nickname-here"
}

variable "aws_region" {}

variable "aws_cli_profile" {}

As said before, these are environment-specific values. Let’s initiate already!

cd settings/dev/
terragrunt init
Output

----------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_iam_account_alias.this will be created
  + resource "aws_iam_account_alias" "this" {
      + account_alias = "acct-nickname-here"
      + id            = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

---------------------------------------------------------------------

Then run ‘terragrunt apply’ to apply the configuration, and verify the new alias at the AWS IAM sign-in link.

IAM sign-in URL

Going forward you can use the friendlier alias-based sign-in link for the AWS commercial console, which takes the form https://acct-nickname-here.signin.aws.amazon.com/console.

Terragrunt cache

Don’t put terragrunt cache in git. Add the following to your .gitignore file for your Terragrunt repositories.

*.terraform*
*.terragrunt*

If you update your Terraform code you’ll have to update Terragrunt’s cached copy too. You can do that with this additional argument:

terragrunt init --terragrunt-source-update

This is the end of part 1. Subscribe for part 2!

As always if you see any errors, mistakes, have suggestions or questions please comment below. Don’t forget to like, share, and subscribe for more!
