AWS Key Management Service (KMS) – Part 1

AWS Key Management Service (AWS KMS) is an AWS service that allows us to encrypt and decrypt our data with customer master keys (CMKs) in the AWS cloud. As a result of passing both the AWS Certified Security – Speciality and the AWS Certified Solutions Architect – Professional exams, I feel it’s critical that you know AWS KMS inside and out. If you understand all of the AWS KMS features, I believe you’ll have a better chance of passing. This deep dive (with infographic) will span several posts about AWS Key Management Service (AWS KMS).

AWS Key Management Service (AWS KMS) is one of the most critical services for securing your AWS account and all of its data. KMS provides security that has to be put in place before new services and data come in. A lot of other AWS services need KMS keys, so it is best practice to set KMS up before starting other services like EC2, S3, CloudTrail, Lambda, and so on.

Intro to CMKs

There are two types of AWS Key Management Service (AWS KMS) keys that allow us to encrypt and decrypt our data with a customer master key (CMK): AWS managed CMKs and customer managed CMKs. AWS KMS supports both symmetric and asymmetric keys. By default, none of these keys exist when the account is created. An AWS managed key is created automatically the first time you choose to encrypt data in a given service, while a customer managed CMK must be created by you before it’s available for use. Let’s deep dive into each of these types and the Terraform code to create them.

KMS key types

CMK properties

  • Key Id
  • Creation date
  • Description
  • Key state

Symmetric CMK

A symmetric key is a 256-bit encryption key that never leaves AWS KMS unencrypted. It’s a single key that’s used for both encryption and decryption. More about symmetric keys here. When you create a KMS key, this is the default key type. This type of key requires valid AWS credentials to use; if users need to encrypt or decrypt data without AWS credentials, consider an asymmetric key instead. Symmetric keys are the better option for most use cases. This key type cannot be used for signing or verification.
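
As a quick sketch of what using a symmetric CMK looks like from the AWS CLI (the key alias and file names here are placeholders, not anything from this post):

# Encrypt a small file with a symmetric CMK
aws kms encrypt \
  --key-id alias/my-app-key \
  --plaintext fileb://secret.txt \
  --query CiphertextBlob --output text | base64 --decode > secret.bin

# Decrypt it again; with a symmetric CMK you don't pass the key id,
# KMS reads it from the ciphertext metadata
aws kms decrypt \
  --ciphertext-blob fileb://secret.bin \
  --query Plaintext --output text | base64 --decode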

Asymmetric CMK

An asymmetric CMK is an RSA key pair (a public key and a private key) that is used for encryption and decryption or signing and verification (but not both), or an elliptic curve (ECC) key pair that is used for signing and verification. The private key never leaves AWS KMS unencrypted. You can use the public key within AWS KMS by calling the AWS KMS API operations, or download the public key and use it outside of AWS KMS.
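
Here’s a rough example (the key alias and file names are placeholders) of downloading the public key and signing a message with an asymmetric CMK that was created for signing and verification:

# Download the public key so it can be used outside of AWS KMS
aws kms get-public-key \
  --key-id alias/my-signing-key \
  --query PublicKey --output text | base64 --decode > public_key.der

# Sign a message; the private key never leaves KMS
aws kms sign \
  --key-id alias/my-signing-key \
  --message fileb://message.txt \
  --message-type RAW \
  --signing-algorithm RSASSA_PKCS1_V1_5_SHA_256 \
  --query Signature --output text | base64 --decode > message.sig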

Symmetric key vs. asymmetric keys

AWS managed CMKs

AWS managed CMKs are created, managed, and used on the customer’s behalf by an AWS service. A few AWS services only accept this type. As a customer you can view the keys, view their policies, and track their usage via CloudTrail, but you cannot manage them, rotate them, or modify their key policies.


AWS Managed keys – Pricing

There is no cost to the customer for the creation and storage of AWS managed CMKs. Under the free tier, the first 20,000 requests per month (calculated across all regions) have no cost.

Here’s the KMS pricing breakdown for the Northern Virginia region. Prices vary by region, so use the pricing calculator for the latest numbers.

  • $0.03 per 10,000 requests
  • $0.03 per 10,000 requests involving RSA 2048 keys
  • $0.10 per 10,000 ECC GenerateDataKeyPair requests
  • $0.15 per 10,000 asymmetric requests except RSA 2048
  • $12.00 per 10,000 RSA GenerateDataKeyPair requests

Note: Customer managed CMKs do have a cost; that’s covered in another post. Subscribe to get notified when it’s released.

AWS MANAGED KEYS – Aliases

AWS managed CMKs come with aliases already defined, and they cannot be modified. The naming pattern is aws/service-name, for example aws/s3 or aws/ssm.

The KeyManager field will be either “AWS” or “CUSTOMER”, so you know whether a key is managed by AWS or by you.
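
For example, you can check both the aliases and the KeyManager field from the CLI (using the dev profile used elsewhere in this post):

# AWS managed CMK aliases follow the aws/<service-name> pattern
aws kms list-aliases --profile dev --query "Aliases[].AliasName"

# KeyManager comes back as "AWS" or "CUSTOMER"
aws kms describe-key --key-id alias/aws/s3 --profile dev \
  --query "KeyMetadata.KeyManager"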

AWS MANAGED KEYS – Create

Remember, the customer cannot create AWS managed CMKs directly. This type of key becomes available to you when you encrypt an object or resource in the corresponding service.

By default the customer has no keys. Let’s run this command to see the list of keys.

aws kms list-keys --profile { profile_name }

{
    "Keys": []
}

Note: If you haven’t set up your AWS CLI, be sure to visit this page “Setup infrastructure as code environment“. You’ll need the following IAM policy in order to list the keys.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadKMSkeys",
            "Effect": "Allow",
            "Action": [
                "kms:ListKeys",
                "kms:ListKeyPolicies",
                "kms:ListRetirableGrants",
                "kms:ListAliases",
                "kms:GetKeyPolicy",
                "kms:DescribeKey"
            ],
            "Resource": "*"
        }
    ]
}
By default no AWS managed keys exist

Let’s say you want to encrypt an object or a bucket in S3. You will see the aws/s3 key available all of a sudden. Hint: you can tell this is an AWS managed CMK because of the alias format!

Encrypting an S3 object with an AWS managed CMK

Now if we list the keys you will see the AWS managed CMK.

aws kms list-keys --profile dev

{
    "Keys": [
        {
            "KeyArn": "arn:aws:kms:us-east-1:111111111:key/2323232-2323-2424-23424a-a2324a3", 
            "KeyId": "2323232-2323-2424-23424a-a2324a3"
        }
    ]
}
  • Key Type: Symmetric
  • Origin: AWS_KMS
  • Key Spec: SYMMETRIC_DEFAULT
  • Key Usage: Encrypt and decrypt

More information on the AWS created managed CMK

Here’s the default and non-editable key policy for the CMK I just created above. Each AWS managed CMK policy is restricted to a single AWS service.

{
    "Version": "2012-10-17",
    "Id": "auto-s3-2",
    "Statement": [
        {
            "Sid": "Allow access through S3 for all principals in the account that are authorized to use S3",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "kms:Encrypt",
                "kms:Decrypt",
                "kms:ReEncrypt*",
                "kms:GenerateDataKey*",
                "kms:DescribeKey"
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "kms:CallerAccount": "1111111111",
                    "kms:ViaService": "s3.us-east-1.amazonaws.com"
                }
            }
        },
        {
            "Sid": "Allow direct access to key metadata to the account",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::1111111111:root"
            },
            "Action": [
                "kms:Describe*",
                "kms:Get*",
                "kms:List*"
            ],
            "Resource": "*"
        }
    ]
}

Note: The “Principal” value of “AWS” with the account number and “root” means that any authenticated user or service in that account can use this key, subject to the conditions in the statement.

AWS MANAGED CMKS – Enable and disable keys

Not possible with AWS managed CMKs.

AWS MANAGED CMKS – automatic rotation

Automatic rotation is enabled for AWS managed CMKs and rotates the key every 3 years. You cannot modify this setting.
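
Assuming you have the kms:GetKeyRotationStatus permission, you can confirm this from the CLI; here’s a quick sketch using the key ID from the earlier list-keys output:

# For an AWS managed CMK this should report KeyRotationEnabled: true
aws kms get-key-rotation-status \
  --key-id 2323232-2323-2424-23424a-a2324a3 \
  --profile dev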

AWS MANAGED CMKS – Delete Keys

Not possible with AWS managed CMKs.

Quotas and Limits

Because the cloud services are shared with hundreds of thousands of customers, AWS has put some limits on requests and resources to ensure acceptable performance for all customers.

Resource limits

Grants for a given principal per CMK: 500

Key policy document size: 32 KB

Request limits

If you see this error

You have exceeded the rate at which you may call KMS. Reduce the frequency of your calls.
(Service: AWSKMS; Status Code: 400; Error Code: ThrottlingException; Request ID:

This “ThrottlingException” means your request was valid but exceeded the quota, so AWS purposefully throttled it. Use the Service Quotas console or the RequestServiceQuotaIncrease API operation to request an increase.
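
If you only hit this occasionally, a simple mitigation (assuming AWS CLI v2 or a recent SDK) is to let the client back off and retry instead of failing fast:

# Adaptive retry mode adds client-side rate limiting and exponential backoff
export AWS_RETRY_MODE=adaptive
export AWS_MAX_ATTEMPTS=10

# Re-run the call that was throttled, for example:
aws kms list-keys --profile dev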

Request quotas apply to both AWS managed and customer managed CMKs, but not to the AWS owned CMKs that are designed to protect your resources.

Requests such as updating a CMK’s alias or disabling a key also have limits. If you aren’t making bulk changes you shouldn’t have to worry about hitting them. Here are a few default limits; for a full list click here.

  • UpdateAlias: 5/second
  • DisableKey: 5/second
  • ListKeys: 100/second

Request quotas

KMS Security

  • Dedicated hardened hardware security modules (HSMs)
  • HSMs are physical devices that do not have a virtualization layer
  • Key usage is isolated within an AWS Region.
  • Multiple Amazon employees with role-specific access are required to perform administrative actions on the HSMs. There is no mechanism to export plaintext CMKs.
  • Approved for these compliance programs: SOC 1, 2, and 3, FedRAMP, DoD Impact Levels 2-6, HIPAA BAA, C5, and many more.
  • Available in the U.S. GovCloud and U.S. Secret regions
  • All symmetric key encryption within the HSMs uses the Advanced Encryption Standard (AES) 256
  • AWS KMS uses envelope encryption internally to secure confidential material between service endpoints
  • KMS does not store the data, just the keys
  • Use VPC endpoints to keep KMS traffic off the public internet (see the example below)
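
Here’s a rough sketch of that last point, creating an interface VPC endpoint for KMS with the CLI (the VPC, subnet, and security group IDs are placeholders for your own):

# Keeps KMS API traffic on the AWS network instead of the public internet
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.kms \
  --subnet-ids subnet-0123456789abcdef0 \
  --security-group-ids sg-0123456789abcdef0 \
  --private-dns-enabled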

Encryption at Rest

  • Customer master keys (CMKs) are stored in FIPS 140-2 Level 2–compliant hardware security modules (HSMs).
  • The key material for CMKs and the encryption keys that protect the key material never leave the HSMs in plaintext form.

Encryption in Transit

  • The key material that AWS KMS generates for CMKs is never exported or transmitted in AWS KMS API operations
  • All AWS KMS API calls must be signed and be transmitted using minimum Transport Layer Security (TLS) 1.2
  • Calls to AWS KMS also require a modern cipher suite that supports perfect forward secrecy

That wraps up KMS for now. In future posts I’ll cover AWS KMS monitoring in detail, how KMS integrates with other services, cross-account KMS permissions, customer managed CMKs, and more deep dives, so be sure to subscribe!


As always if you see any errors, mistakes, have suggestions or questions please comment below. Don’t forget to like, share, and subscribe for more!

AWS Account settings with Terraform and terragrunt Part 2

This is a continuation of AWS account settings as code with Terraform and Terragrunt. Be sure to start with part one. In this part I’ll be blocking Amazon S3 bucket public access, enabling EBS volume encryption at the AWS account level, and applying the IAM account password policy.

Cost: These exact settings applied on the account have no cost unless you use customer managed keys from KMS.

AWS IAM account password policies

# password policy
resource "aws_iam_account_password_policy" "this" {
  minimum_password_length        = 10
  max_password_age               = 365
  password_reuse_prevention      = 10
  require_lowercase_characters   = true
  require_numbers                = true
  require_uppercase_characters   = true
  require_symbols                = true
  allow_users_to_change_password = true
}

This applies the IAM account password settings as code.

This change requires the IAM “iam:UpdateAccountPasswordPolicy” action to be allowed.

Update your “settings” repository dev branch. Then in the “settings” Terragrunt project, update and apply your code with:

terragrunt init --terragrunt-source-update

terragrunt plan

# then
terragrunt apply

Check by going to the IAM service dashboard; you should now have a green check mark for “Apply an IAM password policy”.
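
You can also verify from the CLI instead of the console; a quick check (the profile name is a placeholder) looks like this:

# Prints the password policy that was just applied
aws iam get-account-password-policy --profile dev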

Block Amazon S3 bucket public access

There are so many horror stories in the news about Amazon Simple Storage Service (S3) buckets being accidentally open to the public. Let’s prevent accidental public access on S3 buckets at the account level, just in case you forget to block it at the bucket level.

resource "aws_s3_account_public_access_block" "this" {
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

This applies the following settings on S3, as stated under “Block public access…” in the S3 console.

  • Blocks public access to buckets and objects granted through new access control lists (ACLs)
  • Blocks public access to buckets and objects granted through any access control lists (ACLs)
  • Blocks public access to buckets and objects granted through new public bucket or access point policies
  • Blocks public and cross-account access to buckets and objects through any public bucket or access point policies

You’ll need the “s3:PutAccountPublicAccessBlock” action for this setting.

Update your “settings” repository dev branch. Then in the “settings” Terragrunt project, update and apply your code with:

terragrunt init --terragrunt-source-update

terragrunt plan

# then
terragrunt apply

Verify by going to the S3 service -> on the left navigation click “Block public access (account settings)“. You should see all green “On” for every single line.

block s3 public settings
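
If you prefer the CLI over the console, you can confirm the account-level block as well (the account ID and profile are placeholders):

# All four flags should come back as true
aws s3control get-public-access-block --account-id 111111111111 --profile dev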

Default EBS volume encryption

This account-level setting will always apply default EBS volume encryption during creation of any EBS volume, regardless of what and how it’s provisioned. If you provision an EC2 instance using the console, any of the AWS CLI commands, or any of the AWS SDKs and you don’t explicitly apply EBS volume encryption, then this will do it for you! It’s quite amazing and simple to apply or remove.

What key will it use? It can use the default AWS managed key or a customer managed KMS key (managed by you). I haven’t set up KMS keys yet, so I’ll use the default AWS managed key for now.

resource "aws_ebs_encryption_by_default" "this" {
  enabled = true
}

You’ll need the following IAM policy statement to apply this setting.

{
    "Sid": "AllowsEBSdefaultEncryption",
    "Effect": "Allow",
    "Action": [
        "ec2:GetEbsEncryptionByDefault",
        "ec2:EnableEbsEncryptionByDefault",
        "ec2:DisableEbsEncryptionByDefault",
        "ec2:ResetEbsDefaultKmsKeyId",
        "ec2:ModifyEbsDefaultKmsKeyId"
    ],
    "Resource": "*"
}

And again, update your Terraform git repository, then update your Terragrunt deployment code and apply.

Navigate to the EC2 service, then on the main page, in the “Account attributes“ panel on the right, click on “EBS encryption“.

Do note this setting only applies to a single region!
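
Because it’s per region, a quick CLI check in each region you care about doesn’t hurt; for example:

# Should return {"EbsEncryptionByDefault": true} for the region you applied it in
aws ec2 get-ebs-encryption-by-default --region us-east-1 --profile dev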

Bonus

Billing settings

These aren’t automated, and they only need to be enabled once on your consolidated billing account, provided you have permissions to manage billing alerts and emails. Within Billing Preferences, select the following settings:

  • Send a PDF version of your invoice by email
  • Get notified on your free tier usage and cost

That’s it for now!

As always if you see any errors, mistakes, have suggestions or questions please comment below. Don’t forget to like, share, and subscribe for more!

Image by Денис Марчук from Pixabay

AWS Account settings with Terraform and terragrunt

I’m going to use Terraform (TF) plus Terragrunt (TG) to apply the various AWS account settings as code. Writing these settings as code assures me I can make a code change once and apply it to all my environments or AWS accounts with the same settings. Terragrunt calls this DRY, which stands for “Don’t Repeat Yourself”. Write once and apply to hundreds of AWS accounts!

If you haven’t installed Terraform and Terragrunt, go back a step by going through this first https://cloudly.engineer/2020/setup-infrastructure-as-code-environment/aws/ and maybe this https://cloudly.engineer/2019/aws-cloud-account-initial-configuration/aws/

Terraform code: Define resources

The first account configuration is setting the account alias with the Terraform aws_iam_account_alias resource. This “settings” code lives in a single git project.

resource "aws_iam_account_alias" "this" {
  account_alias = "acct-nickname-here"
}

Let’s break down this small piece of code.

  1. resource (no quotes) is a reserved keyword; it means create or ensure this type of resource exists
  2. "aws_iam_account_alias" (double quotes with underscores) is the type of resource you want to create. Here's a list of the types available today. Terraform attempts to always be up to date, but it could be missing resource types or some features of a resource. Most of the time, it has all the core resource types and options available.
  3. "this" (double quotes) is the last part of the first line, the name you want to give this resource in Terraform's state file. My best practice is to always use "this" unless you have multiple of the same resource; then be specific, but don't put the resource type in the name. That's redundant nonsense.
  4. Within the braces there are always one or more options of different types

    If you want to learn more about Terraform, click here.

Push this code up to your dev branch. We want to make sure it works before merging it to master.

Terragrunt Code: Deploy Resources

Terraform code just defines our infrastructure as code; Terragrunt, with the help of Terraform, will do the actual deployment. The combination of the two prevents us from repeating our code for however many AWS accounts we have. Below is my Terragrunt project for “settings”.

└── settings
    ├── README.md
    ├── dev
    │   ├── terragrunt.hcl
    │   ├── inputs.yml
    │   └── vars.tf
    ├── qa
    │   ├── terragrunt.hcl
    │   ├── inputs.yml
    │   └── vars.tf
    ├── sec
    │   ├── terragrunt.hcl
    │   ├── inputs.yml
    │   └── vars.tf
    ├── prod
    │   ├── terragrunt.hcl
    │   ├── inputs.yml
    │   └── vars.tf
    ├── terragrunt.hcl
    ├── inputs.yml
    └── vars.tf

  • terragrunt.hcl – The root terragrunt.hcl and the environment terragrunt.hcl files are a must
  • inputs.yml – This YAML file contains variables specific to that environment, such as the AWS CLI profile name
  • vars.tf – Terraform variable files for each environment plus one common vars.tf for all deployments

This separation of projects allows each environment to run a different version of your Terraform code at the same time. Let’s continue and you’ll see what I mean.

Main ‘terragrunt.hcl’

remote_state {
  backend = "s3"
  config = {
    bucket  = "bucket-name-for-terraform-state"
    key     = "${path_relative_to_include()}/terraform.tfstate"
    region  = local.local_inputs.aws_region,
    profile = local.local_inputs.aws_cli_profile
    encrypt = true
  }
}

locals {
  local_inputs  = yamldecode(file("${get_terragrunt_dir()}/inputs.yml"))
  global_inputs = yamldecode(file("${get_terragrunt_dir()}/inputs.yml"))
}

inputs = merge(local.global_inputs, local.local_inputs)

remote state
I’ll be storing the Terraform state file in Amazon S3.

  1. key All of my environments/accounts Terraform state files will be stored in one AWS S3 bucket separated by environments using the directory names. The Terragrunt function path_relative_to_include() is going to help with that.
  2. profile Since we’ll have several AWS accounts and profiles, this value will be dynamic and passed in from the environments input file.
  3. I think the rest are obvious.

locals

  1. local_inputs During a Terraform plan or apply, Terragrunt grabs the variables for that environment and uses them when it generates the working Terraform files for that specific environment in the .terragrunt-cache directory
  2. global_inputs contains variables that are common for all environments (if needed)

dev ‘terragrunt.hcl’

include {
  path = find_in_parent_folders()
}

terraform {
  source = "git@giturl.com:path/to/tf-modules/settings.git?ref=dev"
}

This says "hey, go fetch the TF code from this URL, but only the dev branch". It also says "the Terraform backend configuration is in the parent terragrunt.hcl file" (that's what find_in_parent_folders() points to). Next, let's init Terragrunt. If you haven't already created the S3 bucket for your state file, it will ask to create it.

I’ll be using AWS CLI profiles; this assumes you have already set them up.
AWS Permissions Required

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowS3ForTerraform",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketVersioning",
                "s3:CreateBucket"
            ],
            "Resource": "arn:aws:s3:::YOUR-TF-BUCKET-NAME-HERE"
        },
        {
            "Sid": "AllowDownloadNUploadtoPrefix",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::YOUR-TF-BUCKET-NAME-HERE/*"
        }
    ]
}

dev ‘inputs.yml’

aws_cli_profile: "your-env-aws-cli-profile-name"
aws_region: "us-east-1"

dev ‘vars.tf’

variable aws_account_alias {
  default = "acct-nickname-here"
}

variable aws_region {}

variable aws_cli_profile {}

As said before, these are environment-specific values. Let’s initialize already!

cd settings/dev/
terragrunt init
Output

----------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_iam_account_alias.this will be created
  + resource "aws_iam_account_alias" "this" {
      + account_alias = "acct-nickname-here"
      + id            = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

---------------------------------------------------------------------

Then run ‘terragrunt apply’ to apply the configuration, and verify via the AWS IAM sign-in link.

IAM Sign in url

Going forward you can use this somewhat easier AWS sign-in link for the AWS commercial console, in the form https://acct-nickname-here.signin.aws.amazon.com/console (using the alias set above).

Terragrunt cache

Don’t put terragrunt cache in git. Add the following to your .gitignore file for your Terragrunt repositories.

*.terraform*
*.terragrunt*

If you update your Terraform source, you’ll have to refresh Terragrunt’s cached copy too. You can do that with this additional argument:

terragrunt init --terragrunt-source-update

This is the end of part 1. Subscribe for part 2!

As always if you see any errors, mistakes, have suggestions or questions please comment below. Don’t forget to like, share, and subscribe for more!


AWS Access Keys setup and Best Practices

What are AWS Access keys?

Automation is a must in the cloud, regardless of which Cloud Service Provider you work on. Start automating from the beginning; you can’t afford to backlog security controls anymore, especially in the cloud computing realm. In order to automate you first have to connect your tools to AWS, and we can do this by creating a set of keys. This set of keys can be used with the AWS Command Line Interface (CLI). Read about the CLI here. Keep in mind this pair of keys is sensitive information. Let’s first create a set!

Install AWS CLI

It’s best to install the AWS CLI prior to creating keys so you can enter the keys into the CLI immediately.

Generate AWS Access Keys – Console

Your very first set can only be created from the console.

  1. Login with an account that has the correct permissions to create keys. (See IAM statement policy below)
  2. Navigate to the IAM service.
  3. Click on users -> your username and then Security Credentials
  4. Click on Create Access Key
  5. Jump into the terminal so we can enter the info directly into the CLI instead of saving the .csv file.
  6. aws configure --profile {account-name}
    aws configure --profile master
  7. Copy and paste the key Id
  8. Copy and paste the secret
  9. Enter region code; i.e us-east-1
  10. Enter output format: Either json or table or text

The statement policy below allows each user to create access keys only for themselves.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CreateOwnAccessKeys",
            "Effect": "Allow",
            "Action": [
                "iam:CreateAccessKey",
                "iam:GetUser",
                "iam:ListAccessKeys"
            ],
            "Resource": "arn:aws:iam::*:user/${aws:username}"
        }
    ]
}

GENERATE AWS ACCESS KEYS – CLI

Now going forward you can manage your keys with code using the CLI or any of the SDKs.

TIP: Always create a new set of keys prior to setting the existing set to inactive or deleting it. Otherwise you’ll have to use the console again.

You’ll need this IAM statement policy to allow rotation of your own keys.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ManageOwnAccessKeys",
            "Effect": "Allow",
            "Action": [
                "iam:CreateAccessKey",
                "iam:DeleteAccessKey",
                "iam:GetAccessKeyLastUsed",
                "iam:GetUser",
                "iam:ListAccessKeys",
                "iam:UpdateAccessKey"
            ],
            "Resource": "arn:aws:iam::*:user/${aws:username}"
        }
    ]
}

List your access keys

aws iam list-access-keys --user-name {your-username}

Create a new set of access keys

aws iam create-access-key --user-name {your-username}

List your access keys, this time with the new one included

aws iam list-access-keys --user-name {your-username}

Delete the old access key – You should be able to get the key id from the previous command. Look for the “CreateDate” that has the oldest timestamp.

aws iam delete-access-key --access-key-id AKIDPMS9RO4H3FEXAMPLE --user-name {your-username}

For more details refer to the original AWS documentation.

Rotate keys regularly

Let’s say you accidentally put your access keys in a public GitHub repository. If no one has discovered your keys yet and you don’t want to scrub your entire repo to find them, then just rotate them. This way the old set is useless, and you’ll update your code so that the new set is not in code going forward. No need to go back and scrub the old set.

One way to set up notifications for rotation is using AWS Config; I’ll set this up later. “access-keys-rotated” is the name of that AWS Config rule. You can set up an SNS notification as the remediation for this rule to let you know when someone’s access keys are older than X days.

Script

A good friend (Tim) who’s also a Cloud Engineer has written a script that can rotate the keys as described above.

Link: https://github.com/easttimor/aws-scripts/blob/master/aws_rotate_access_key.sh

Usage:

Execute:
#	Assumes the default profile
#       ./aws_rotate_access_key.sh
#
#	For a specific profile
#	./aws_rotate_access_key.sh -p abc
#
#	For specific profiles
#	./aws_rotate_access_key.sh -p abc -p def
#
#	For all profiles
#	./aws_rotate_access_key.sh -a
#
# Description:
#   Creates a new access key and deletes the old one

Never share

Each person who needs keys should have their own; do not create shared keys! Don’t share personal keys with third-party vendors and software. Everything is logged in AWS, and any calls made with your personal keys are your responsibility!


Never use in AMIs or EC2s

A lot of rookies make this mistake! Don’t ever use your personal keys to make CLI calls from an EC2 instance; use IAM roles for that. On top of that, some engineers create AMIs with those personal keys baked in so they can build bootstrap solutions. So bad!

Separate personal & service keys

I know that these days support for IAM roles in a multi-account architecture is limited in some third-party software, or setting up cross-account IAM roles is too complex for beginners, so creating keys is the fastest solution. Create a service user like “service-{software-name}” or “service-ansible”, create a group with the same name, such as “service-ansible”, and place that user in that group. Then create the minimum IAM policy and attach it to the group. Lastly, generate keys for the service account (see the CLI sketch after the tip below).

  • Disable console login
  • Set MFA for the account
  • Tag the account

TIP: Naming convention is very important for resource management and security purposes.
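
Here’s a rough CLI sketch of the service-account setup above (all names and the policy ARN are examples for illustration, not a definitive setup):

# Create the service group and user, and tag the user
aws iam create-group --group-name service-ansible
aws iam create-user --user-name service-ansible --tags Key=purpose,Value=automation
aws iam add-user-to-group --group-name service-ansible --user-name service-ansible

# Attach your least-privilege policy to the group
aws iam attach-group-policy --group-name service-ansible \
  --policy-arn arn:aws:iam::111111111111:policy/service-ansible-minimum

# Finally, generate keys for the service account
aws iam create-access-key --user-name service-ansible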

Delete AWS access keys

Monitor all AWS access keys, specifically whether they’re being used at all. Delete them and recreate them when needed. On the Security Credentials tab, the “Last Used” column will show either N/A or a timestamp; N/A obviously means the key has never been used. The Credential Report also lists unused access keys. You can automate this process by using the API/CLI “get-access-key-last-used” command to view the “LastUsedDate” and applying simple date logic. I’ll create this automation with Ansible later, so be sure to subscribe to get notified.
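
Here’s a minimal sketch of that check from the CLI (the username and key ID are the placeholders used earlier in this post):

# List the keys and their creation dates for a user
aws iam list-access-keys --user-name {your-username} \
  --query "AccessKeyMetadata[].[AccessKeyId,CreateDate]" --output table

# Then look up when a specific key was last used
aws iam get-access-key-last-used --access-key-id AKIDPMS9RO4H3FEXAMPLE \
  --query "AccessKeyLastUsed.LastUsedDate"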

AWS CLI cheat sheets

Be sure to check out the cheat sheets at the Download page on this site.

As always if you see any errors, mistakes, have suggestions or questions please comment below. Don’t forget to like, share, and subscribe for more!

Setup infrastructure as code environment

What is infrastructure as code?

It’s 2020, I shouldn’t have to explain this… instead, here’s a good post explaining it. What I will say is that Infrastructure as Code (IaC) is a must, no debate needed! Let’s set up an infrastructure as code environment using Homebrew, Terragrunt, the Terraform CLI, the AWS CLI, Git, and GitLab.

I know that’s a lot of tools, but it’s worth it in the long run! Do make sure you have an AWS account already. Follow this guide to sign up.

Mac OS: Install homebrew

/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

https://brew.sh/

Install Terragrunt

brew install terragrunt

Verify install

terragrunt --version

If you already know Terraform: I’m going to wrap it with Terragrunt.

“Terragrunt is a thin wrapper that provides extra tools for keeping your configurations DRY, working with multiple Terraform modules, and managing remote state.” – Gruntwork

Install AWS CLI

brew install awscli

or

pip install awscli

https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html

Verify install

aws --version

Install git

Code source management: https://git-scm.com/book/en/v2/Getting-Started-Installing-Git

git --version

GitLab sign up

I’m going to also use GitLab for code repositories and pipelines. GitLab has a free version on the cloud. Sign up and create a new project.

GitLab Project create page

You’ll have to add your ssh keys to your project. Follow the instructions on that page on how to create or add existing keys.

Let’s copy down (clone) the empty project into our workstation

git clone {project-url}

Here’s another blog about git commands

Pre-commit

Lastly, add this awesome utility to your toolbox. Pre-commit checks your code for various types of errors or formatting issues before it lets you commit. For example, it can fix your trailing whitespace or end-of-line formatting, or check whether your Terraform is formatted correctly, and so on. Here’s a full list of the various checks.
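
A minimal way to get started (assuming Homebrew, as used earlier in this post) might look like this:

# Install pre-commit and wire it into the current git repository
# (hooks are defined in a .pre-commit-config.yaml at the repo root)
brew install pre-commit
pre-commit install

# Run every configured hook against all files once
pre-commit run --all-files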

Next we’ll set up our AWS access keys, set up Terragrunt, set up Terraform, and manage AWS from code! Finally I’ll push the code to GitLab. Sign up below to get notified when that blog gets posted!

As always if you see any errors, mistakes, have suggestions or questions please comment below. Don’t forget to like, share, and subscribe for more!

Post image by: Image by 024-657-834 from Pixabay

Tips on passing the AWS Certified Solutions Architect – Professional

Certification #5

This is my fifth AWS certification that I have passed! I also have one MS Azure certification, for a total of 6 cloud-related certs. The reason I am telling you this is because I hope you take my guidance seriously. Let’s jump right into it!

Pass these exams first

Tip #1

I highly recommend climbing your way up the ladder. This process helped me study a bit faster, and sometimes skip some domains entirely, because I already had that knowledge from the previous certification. Here’s the order I recommend.

Timing

Tip #2

Take the AWS Certified Cloud Practitioner as soon as you can. Then study for the AWS Certified SysOps Administrator – Associate, and schedule that exam and the AWS Certified Solutions Architect – Associate in the same month! I spent the majority of my study time on the AWS Certified SysOps Administrator and a bit on the AWS Certified Solutions Architect, and took each exam just a week apart. Then, a year or so later, study for the AWS Certified Security – Speciality, and schedule it and the AWS Certified Solutions Architect – Professional about 2-5 weeks apart. Dedicate the majority of your time to the Security exam and a good portion to the Professional. If you barely passed any of the previous exams, then dedicate more time to the Professional.

Be prepared mentally

Tip #3

The AWS Certified Solutions Architect – Professional was by far the most challenging, intimidating, verbose, and exhausting exam I have ever taken! It took me all 170 minutes to complete. I’m usually done with an exam with plenty of time left on the clock. There was so much reading and comprehending that I was mentally drained and wanted to just give up in the last 30 minutes! (P.S. it was also a Saturday, so that didn’t help; maybe do it on a Sunday?) So take a deep breath here and there, look away from the screen for a few seconds to recollect yourself, and keep on pushing.

Be motivated

Tip #4

I hope your motivation is strong enough to get you to study this much and sit through a super long exam. If you have no motivation, then I don’t advise you to take on this challenge. Ask your supervisor whether there are any cash rewards, or whether it helps you take on a new role or get a better wage. Or it might be a personal challenge. Whatever it is, be sure it’s strong!

Experience, Study, do labs

Tip #5

As always, experience is your best tool for passing an exam. On top of that, study using online services like LinuxAcademy.com and their hands-on labs. Read all the necessary white papers mentioned in the exam guide.

Shoutout to Adrian Cantrill for his awesome and precise explanations of each topic and all the exam tips and tricks!

Be sure to be on the list to get more exam tips and tricks and cloud related technical posts

As always if you see any errors, mistakes, have suggestions or questions please comment below. Don’t forget to like, share, and subscribe for more!

Interested in becoming a Cloudly Engineer writer?

Send me a message at cloudlyengineer@gmail.com or comment below. Thanks!

Featured image: Photo by Joyce McCown on Unsplash

How to pass the AWS Certified Security – Speciality

I recently took the AWS Certified Security – Speciality exam for the first time and I passed it with over 800 points! The scale is from 100 to 1,000 and 750 is the passing mark. More about the exam here. Let’s go over how to pass the AWS Certified Security – Speciality certification.

Take these certifications first

Although AWS has lifted the requirement to hold an associate-level certification before taking a speciality exam, I still believe an associate-level certification and understanding is very much needed to pass. From my personal experience, I always advise taking the AWS SysOps Administrator over the Solutions Architect if you’re planning on taking the security speciality. Knowing the capabilities of many AWS services will help you cross out the wrong answers immediately. The AWS security training does not cover those capabilities in depth.

  • Pass the AWS Cloud Practitioner
  • Pass the AWS SysOps Administrator

Experience

Nothing beats real experience, or at least LinuxAcademy.com labs. I have been in the AWS world for more than 4 years now.

Security topics

I believe the exam varies across editions and test takers, so this is all based on my experience.

  • KMS: Know this inside and out!
    • All of its capabilities
    • Key policies/permissions/cross account
    • Usage with S3 in depth
    • Usage with all other top services like EBS, RDS, DynamoDB, etc.
    • Be able to troubleshoot issues related to permissions and missing keys, etc.
  • IAM: This one is obvious and a must
    • Be able to read and write IAM policies, S3 bucket policies, KMS policies, etc.
    • Master roles for services, for role switching (how to secure it)
    • Cross account setup
  • S3
    • Replication
    • KMS integration
    • KMS cross account integration
    • Troubleshoot permission issues

This is just my top list; always use the exam guide and study based on that, and don’t ignore other topics.

Training tools / White papers

Exam guide

Click here

Subscribe for future updates and AWS tutorials

As always if you see any errors, mistakes, have suggestions or questions please comment below. Don’t forget to like, share, and subscribe for more! 

GOOGLE FIREBASE DETAILED SECURITY RULES Part 2

This is a follow-on post from Google Firebase detailed security rules part 1.

Advanced Rules

Only allow user to edit or add their own

This prevents users from modifying other users’ content. Here’s an example for the Firebase Realtime Database.

"users": {
      "$user_id": {
        ".write": "data.child('user_id').val() === auth.uid || newData.child('user_id').val() === auth.uid"               
        }
}

Similarly, in Firebase Storage rules, this allows authenticated users to read each user’s images. This example shows folders and child folders to help apply different rules to each folder.

match /user-images {
      // Allow all to read each other's profile if authenticated
      allow read: if request.auth != null;
      
      match /{user_id} {
      // Only allow current user to write to its own folder
      	allow write: if request.auth.uid == user_id;
        allow read: if request.auth != null;
        
        match /{allPaths=**} {
        	allow read: if request.auth != null;
      		allow write: if request.auth.uid == user_id;
        }
      }
}

Multi-layer permission

Sometimes one key will need a different permission than the rest of the keys in the Firebase Realtime Database.

"users": {
      "$uid": {
        ".write": "$uid === auth.uid",
        "key1": {
          ".write": "auth != null"
        },
        "key2": {
          ".write": "auth == null"
        },
        "key3": {
          ".write": "auth === auth.uid"
        },
        "key4": {
          ".write": "auth != auth.uid"
        }
      }
  }

In Firebase Storage: allow only authenticated users to read in the /users/ folder. Then in /users/1233/, only allow the owner to write and other authenticated users to read. In the folder /users/1233/anotherFolder/, allow read for all authenticated users and write for the owner. Lastly, in /users/1233/private/ only the owner is able to read and write.

match /users {
      allow read: if request.auth != null;
      
      match /{user_id} {
      	allow write: if request.auth.uid == user_id;
        allow read: if request.auth != null;
        
        match /{allPaths=**} {
        	allow read: if request.auth != null;
      		allow write: if request.auth.uid == user_id;
        }

        match /private {
      		allow write: if request.auth.uid == user_id;
        	allow read: if request.auth.uid == user_id;
      	}
      }
    }

For more info check out https://firebase.google.com/docs/database/security and don’t forget to subscribe below for more cloud engineer posts!

As always if you see any errors, mistakes, have suggestions or questions please comment below. Don’t forget to like, share, and subscribe for more!