AWS Access Keys Setup and Best Practices

What are AWS access keys?

Automation is a must in the cloud, regardless of which cloud service provider you work with. Start automating from the beginning; you can’t afford to backlog security controls anymore, especially in the cloud computing realm. In order to automate, you first have to connect your tools to AWS, and we do this by creating a set of access keys. This set of keys can be used with the AWS Command Line Interface (CLI). Read about the CLI here. Keep in mind this pair of keys is sensitive information. Let’s first create a set!

Install AWS CLI

It’s best to install the AWS CLI prior to creating keys so you can enter the keys into the CLI immediately.

Generate AWS Access Keys – Console

Your very first set of keys can only be created from the console.

  1. Log in with an account that has the correct permissions to create keys. (See the IAM statement policy below.)
  2. Navigate to the IAM service.
  3. Click on Users -> your username, and then Security Credentials.
  4. Click on Create Access Key.
  5. Jump into the terminal so we can enter the info directly into the CLI instead of saving the .csv file.
  6. aws configure --profile {account-name}
    aws configure --profile master
  7. Copy and paste the access key ID.
  8. Copy and paste the secret access key.
  9. Enter the region code, e.g. us-east-1.
  10. Enter the output format: json, table, or text.
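
For reference, aws configure stores what you enter in two local files. After the steps above they look roughly like this (the keys shown are AWS’s documentation placeholders):

cat ~/.aws/credentials
# [master]
# aws_access_key_id = AKIAIOSFODNN7EXAMPLE
# aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

cat ~/.aws/config
# [profile master]
# region = us-east-1
# output = json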

The statement policy below allows each user to create access keys only for themselves.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "CreateOwnAccessKeys",
            "Effect": "Allow",
            "Action": [
                "iam:CreateAccessKey",
                "iam:GetUser",
                "iam:ListAccessKeys"
            ],
            "Resource": "arn:aws:iam::*:user/${aws:username}"
        }
    ]
}

Generate AWS Access Keys – CLI

Going forward, you can manage your keys with code using the CLI or any of the SDKs.

TIP: Always create a new set of keys before setting the existing set to inactive or deleting it. Otherwise you’ll have to use the console again.

You’ll need the IAM statement policy below to allow users to rotate their own keys.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ManageOwnAccessKeys",
            "Effect": "Allow",
            "Action": [
                "iam:CreateAccessKey",
                "iam:DeleteAccessKey",
                "iam:GetAccessKeyLastUsed",
                "iam:GetUser",
                "iam:ListAccessKeys",
                "iam:UpdateAccessKey"
            ],
            "Resource": "arn:aws:iam::*:user/${aws:username}"
        }
    ]
}

List your access keys

aws iam list-access-keys --user-name {your-username}

Create a new set of access keys

aws iam create-access-key --user-name {your-username}

List your access keys again – this time the new key appears alongside the old one

aws iam list-access-keys --user-name {your-username}

Delete the old access key – you can get the key ID from the previous command. Look for the “CreateDate” with the oldest timestamp.

aws iam delete-access-key --access-key-id AKIDPMS9RO4H3FEXAMPLE --user-name {your-username}
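
If you’d rather script the lookup, here’s a minimal sketch that grabs the oldest key’s ID automatically using the CLI’s built-in --query option ({your-username} is a placeholder, as above):

# Sort keys by creation date and take the oldest one's ID
OLD_KEY=$(aws iam list-access-keys --user-name {your-username} \
  --query 'sort_by(AccessKeyMetadata,&CreateDate)[0].AccessKeyId' --output text)
aws iam delete-access-key --access-key-id "$OLD_KEY" --user-name {your-username}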

For more details refer to the original AWS documentation.

Rotate keys regularly

Let’s say you accidentally pushed your access keys to a public GitHub repository. If no one has discovered your keys yet and you don’t want to scrub your entire repo to remove them, just rotate them. This way the old set is useless, and you’ll update your code so that the new set never ends up in code. No need to go back and scrub the old set.

One way to set up notifications for rotation is AWS Config; I’ll cover the setup in a later post. “access-keys-rotated” is the name of the managed AWS Config rule. You can set up an SNS notification as the remediation for this rule to let you know when someone’s access keys are older than X days.
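
As a rough sketch, enabling that managed rule from the CLI could look like this (the 90-day threshold is just an example, and AWS Config must already be recording in your account):

aws configservice put-config-rule --config-rule '{
  "ConfigRuleName": "access-keys-rotated",
  "Source": {"Owner": "AWS", "SourceIdentifier": "ACCESS_KEYS_ROTATED"},
  "InputParameters": "{\"maxAccessKeyAge\":\"90\"}"
}'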

Script

A good friend (Tim), who’s also a cloud engineer, has written a script that rotates keys as described above.

Link: https://github.com/easttimor/aws-scripts/blob/master/aws_rotate_access_key.sh

Usage:

Execute:
#	Assumes the default profile
#       ./aws_rotate_access_key.sh
#
#	For a specific profile
#	./aws_rotate_access_key.sh -p abc
#
#	For specific profiles
#	./aws_rotate_access_key.sh -p abc -p def
#
#	For all profiles
#	./aws_rotate_access_key.sh -a
#
# Description:
#   Creates a new access key and deletes the old one

Never share

Each person who needs keys should have their own; do not create shared keys! Don’t share personal keys with third-party vendors or software. Everything is logged in AWS, and any calls made with your personal keys are your responsibility!


Never use in AMIs or EC2 instances

A lot of rookies make this mistake! Don’t ever use your personal keys to make CLI calls from an EC2 instance; you must use IAM roles for that. Worse, some engineers bake AMIs with those personal keys inside so they can build bootstrap solutions. So bad!
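
For example, instead of baking keys into an AMI, attach an IAM role to the instance through an instance profile; the CLI on the instance then picks up temporary credentials automatically. A minimal sketch (the role is assumed to already exist, and all names and the AMI ID are placeholders):

aws iam create-instance-profile --instance-profile-name web-server-profile
aws iam add-role-to-instance-profile --instance-profile-name web-server-profile \
  --role-name web-server-role
aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t3.micro \
  --iam-instance-profile Name=web-server-profile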

Separate personal & service keys

I know IAM roles are still poorly supported by some third-party software in multi-account architectures, and setting up cross-account IAM roles can be too complex for beginners, so creating keys is the fastest solution. Create a service user like “service-{software-name}”, e.g. “service-ansible”, create a group with the same name, and place that user in the group. Then create the minimum IAM policy and attach it to the group. Lastly, generate keys for the service account (see the sketch after this list).

  • Disable console login
  • Set MFA for the account
  • Tag the account
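
A minimal sketch of those steps from the CLI (the names, tag, and policy ARN are placeholders, and the minimum policy is assumed to already exist):

aws iam create-group --group-name service-ansible
aws iam create-user --user-name service-ansible --tags Key=type,Value=service
aws iam add-user-to-group --user-name service-ansible --group-name service-ansible
aws iam attach-group-policy --group-name service-ansible \
  --policy-arn arn:aws:iam::123456789012:policy/service-ansible-minimal
aws iam create-access-key --user-name service-ansible

A user created this way has no console password at all, so console login stays disabled unless you explicitly create a login profile.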

TIP: Naming conventions are very important for resource management and security purposes.

Delete AWS access keys

Monitor all AWS access keys, especially ones that aren’t being used at all: delete them, and re-create them when they’re needed. On the Security Credentials tab, “Last Used” shows either N/A or a timestamp; N/A means the key has never been used. The Credential Report also lists unused access keys. You can automate this process by reading the “LastUsedDate” from the “get-access-key-last-used” API/CLI command and applying simple date logic. I’ll create this automation with Ansible later, so be sure to subscribe to get notified.
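
Here’s a minimal sketch of that check for a single key (the key ID is AWS’s documentation placeholder):

# Prints the last-used timestamp, or None if the key has never been used
aws iam get-access-key-last-used --access-key-id AKIAIOSFODNN7EXAMPLE \
  --query 'AccessKeyLastUsed.LastUsedDate' --output text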

AWS CLI cheat sheets

Be sure to check out the cheat sheets on the Download page of this site.

As always if you see any errors, mistakes, have suggestions or questions please comment below. Don’t forget to like, share, and subscribe for more!

Set up an infrastructure as code environment

What is infrastructure as code?

It’s 2020, I shouldn’t have to explain this… instead, here’s a good post explaining it. What I will say is that Infrastructure as Code (IaC) is a must, no debate needed! Let’s set up an infrastructure as code environment using Homebrew, Terragrunt, the Terraform CLI, the AWS CLI, Git, and GitLab.

I know that’s a lot of tools, but it’s worth it in the long run! Do make sure you already have an AWS account. Follow this guide to sign up.

macOS: Install Homebrew

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

https://brew.sh/

Install Terragrunt

brew install terragrunt

Verify install

terragrunt --version

If you know Terraform: I’m going to wrap it with Terragrunt.

“Terragrunt is a thin wrapper that provides extra tools for keeping your configurations DRY, working with multiple Terraform modules, and managing remote state.” – Gruntwork

Install AWS CLI

brew install awscli

or

pip install awscli

https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html

Verify install

aws --version

Install git

Source code management: https://git-scm.com/book/en/v2/Getting-Started-Installing-Git

git --version

GitLab sign up

I’m going to also use GitLab for code repositories and pipelines. GitLab has a free version on the cloud. Sign up and create a new project.

GitLab Project create page

You’ll have to add your SSH keys to your project. Follow the instructions on that page to create new keys or add existing ones.

Let’s copy down (clone) the empty project into our workstation

git clone {your-project-url}

Here’s another blog about git commands

Pre-commit

Lastly, add this awesome utility to your toolbox. Pre-commit checks your code for various types of errors and formatting issues before Git commits it. For example, it can fix trailing whitespace or end-of-file formatting, or check that your Terraform is formatted correctly, and so on. Here’s a full list of the various checks.
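
Here’s a minimal sketch of wiring pre-commit into a repo (the hook revision is just an example; pin whichever version is current):

brew install pre-commit
cat > .pre-commit-config.yaml <<'EOF'
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v3.4.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
EOF
# Installs the Git hook into .git/hooks so the checks run on every commit
pre-commit install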

Next we’ll set up our AWS access keys, set up Terragrunt and Terraform, and manage AWS from code! Finally, I’ll push the code to GitLab. Sign up below to get notified when that blog gets posted!

As always if you see any errors, mistakes, have suggestions or questions please comment below. Don’t forget to like, share, and subscribe for more!

Post image: Image by 024-657-834 from Pixabay

Tips on passing the AWS Certified Solutions Architect – Professional

Certification #5

This is the fifth AWS certification I have passed! I also have one MS Azure certification, for a total of six cloud-related certs. The reason I’m telling you this is that I hope you take my guidance seriously. Let’s jump right into it!

Pass these exams first

Tip #1

I very highly recommend climbing your way up the ladder. This process helped me study a bit faster, and sometimes skip entire domains, because I already had that knowledge from a previous certification. Here’s the order I recommend.

Timing

Tip #2

Take the AWS Certified Cloud Practitioner as soon as you can. Then study for the AWS Certified SysOps Administrator – Associate, and schedule that exam and the AWS Certified Solutions Architect – Associate in the same month! I spent the majority of my study time on the SysOps Administrator and a bit on the Solutions Architect, and took the two exams just a week apart. Then, a year or so later, study for the AWS Certified Security – Specialty, and schedule it and the AWS Certified Solutions Architect – Professional about 2-5 weeks apart. Dedicate the majority of your time to the Security exam and a good portion to the Professional. If you barely passed any of the previous exams, dedicate more time to the Professional.

Be prepared mentally

Tip #3

The AWS Certified Solutions Architect – Professional was by far the most challenging, intimidating, verbose, and exhausting exam I have ever taken! It took me all 170 minutes to complete; I’m usually done with an exam with plenty of time left on the clock. There was so much reading and comprehension that I was mentally drained and wanted to just give up in the last 30 minutes! (P.S. It was also a Saturday, so that didn’t help. Maybe do it on a Sunday?) So take a deep breath here and there, look away from the screen for a few seconds to recollect yourself, and keep on pushing.

Be motivated

Tip #4

I hope your motivation is strong enough to get you through so much studying and such a long exam. If you have no motivation, I don’t advise taking on this challenge. Ask your supervisor whether there are any cash rewards, or whether it will help you take on a new role or get a better wage. Or it might be a personal challenge. Whatever it is, be sure it’s strong!

Experience, study, do labs

Tip #5

As always, experience is your best tool for passing an exam. On top of that, study using online services like LinuxAcademy.com and their hands-on labs. Read all the necessary white papers mentioned in the exam guide.

Shoutout to Adrian Cantrill for his awesome and precise explanations of each topic and all the exam tips and tricks!

Be sure to be on the list to get more exam tips and tricks and cloud related technical posts

As always if you see any errors, mistakes, have suggestions or questions please comment below. Don’t forget to like, share, and subscribe for more!

Interested in becoming a Cloudly Engineer writer?

Send me a message at cloudlyengineer@gmail.com or comment below. Thanks!

Featured image: Photo by Joyce McCown on Unsplash

How to pass the AWS Certified Security – Specialty

I recently took the AWS Certified Security – Specialty exam for the first time and I passed it with over 800 points! The scale runs from 100 to 1,000, and 750 is the passing mark. More about the exam here. Let’s go over how to pass the AWS Certified Security – Specialty certification.

Take these certifications first

Although AWS has lifted the requirement to pass an associate-level exam before taking a specialty exam, I still believe associate-level certification and understanding are very much needed to pass. From my personal experience, I always advise taking the AWS SysOps Administrator over the Solutions Architect if you’re planning on this security specialty. Knowing the capabilities of many AWS services will help you cross out the wrong answers immediately; the AWS security training does not cover those capabilities in depth.

  • Pass the AWS Cloud Practitioner
  • Pass the AWS SysOps Administrator

Experience

Nothing beats real experience. Or at least LinuxAcademy.com labs. I have been in the AWS world for more than 4 years now.

Security topics

Keep in mind the exam varies across editions and test takers. This is all based on my experience.

  • KMS: Know this inside and out!
    • All of its capabilities
    • Key policies/permissions/cross account
    • Usage with S3 in depth
    • Usage with all other top services like EBS, RDS, DynamoDB, etc.
    • Be able to troubleshoot issues related to permissions and missing keys, etc.
  • IAM: This one is obvious and a must
    • Be able to read and write IAM policies, S3 bucket policies, KMS policies, etc.
    • Master roles for services and for role switching (and how to secure it)
    • Cross account setup
  • S3
    • Replication
    • KMS integration
    • KMS cross account integration
    • Troubleshoot permission issues

This is just my top list; always use the exam guide and study based on that. Don’t ignore other topics.

Training tools / White papers

Exam guide

Click here

Subscribe for future updates and AWS tutorials

As always if you see any errors, mistakes, have suggestions or questions please comment below. Don’t forget to like, share, and subscribe for more! 

AWS CLI Cheat sheet!

Head over to my download page to find AWS CLI cheat sheets for free! There are AWS CLI cheat sheets for Amazon S3 commands and EC2 commands.

I have recently designed and uploaded an AWS S3 CLI cheat sheet to https://cloudly.engineer/downloads/. This is in addition to the AWS EC2 and general AWS CLI cheat sheets!

As always if you see any errors, mistakes, have suggestions or questions please comment below. Don’t forget to like, share, and subscribe for more! 

Google Firebase detailed security rules – Part 2

This is a follow-on post from Google Firebase detailed security rules part 1.

Advanced Rules

Only allow users to edit or add their own content

This prevents users from modifying other users’ content. Here’s an example for the Firebase Realtime Database.

"users": {
      "$user_id": {
        ".write": "data.child('user_id').val() === auth.uid || newData.child('user_id').val() === auth.uid"               
        }
}

Similarly, in Firebase Storage rules, the following allows authenticated users to read each other’s images. This example shows folders and child folders to help apply different rules to each folder.

match /user-images {
  // Allow all authenticated users to read each other's profile
  allow read: if request.auth != null;

  match /{user_id} {
    // Only allow the current user to write to their own folder
    allow write: if request.auth.uid == user_id;
    allow read: if request.auth != null;

    match /{allPaths=**} {
      allow read: if request.auth != null;
      allow write: if request.auth.uid == user_id;
    }
  }
}

Multi-layer permission

Sometimes one key needs a different permission than the rest of the keys in the Firebase Realtime Database.

"users": {
      "$uid": {
        ".write": "$uid === auth.uid",
        "key1": {
          ".write": "auth != null"
        },
        "key2": {
          ".write": "auth == null"
        },
        "key3": {
          ".write": "auth === auth.uid"
        },
        "key4": {
          ".write": "auth != auth.uid"
        }
      }
  }

In Firebase Storage: allow only authenticated users to read in the /users/ folder. Then in /users/1233/, only allow the owner to write and other authenticated users to read. In the folder /users/1233/anotherFolder/, allow read for all authenticated users and write for the owner. Last, in /users/1233/private/, only the owner is able to read and write.

match /users {
  allow read: if request.auth != null;

  match /{user_id} {
    allow write: if request.auth.uid == user_id;
    allow read: if request.auth != null;

    match /{allPaths=**} {
      allow read: if request.auth != null;
      allow write: if request.auth.uid == user_id;
    }

    match /private {
      allow write: if request.auth.uid == user_id;
      allow read: if request.auth.uid == user_id;
    }
  }
}

For more info, check out https://firebase.google.com/docs/database/security. Don’t forget to subscribe below for more cloud engineer posts!

As always if you see any errors, mistakes, have suggestions or questions please comment below. Don’t forget to like, share, and subscribe for more! 

AWS Route 53 Routing Policies explained with diagrams

Reading time: 5 minutes

Wait, route what? Yes, you’re reading that correctly. AWS (Amazon Web Services) Route 53 is a DNS (Domain Name System) service. In this post I’ll be writing about AWS Route 53 routing policies, not about what DNS is; search elsewhere for that information. Keep in mind these diagrams are simplified.

What is Route 53?

Again, Route 53 is a DNS service built and fully managed by AWS. Did you know Route 53 has a freaking 100% SLA?! Read here if you don’t believe me. That means AWS commits to it never, ever being down (and credits you if it ever is)! I don’t know about you, but that’s beyond amazing to me! Here’s a link to learn more in depth about Route 53. Why 53? Port 53 is the standard DNS port.

Route 53 Routing Policies

Simple

This is the default routing policy. Use it only when you have exactly one resource, such as a single EC2 web server. A simple record can contain multiple values, but Route 53 returns them all in a random order and the client ends up using just one resource. This policy is not recommended for production sites.

AWS Route 53 routing policy simple diagram

Failover

This policy lets you create two records for the same name. It starts like the simple policy but adds a health check: if that single web server is unhealthy, Route 53 points elsewhere. The failover target can be another web server, or perhaps an error.html page hosted in Amazon S3.

AWS Route 53 failover routing policy diagram

Geolocation

Use this when you want to serve your site based on the location of the client or user.

AWS Route 53 geolocation routing policy

Geoproximity

This one is somewhat complicated, so I’d like to point you to the original documentation for a full explanation.

Let’s subscribe to learn more or suggest topics!

Latency

When you have multiple resources in multiple regions, this policy routes the user not necessarily to the closest resource, but to the one that responds with the lowest latency.

AWS Route 53 latency policy diagram
In this example, the latency from France to the United States was the lowest, so the website traffic is routed to a U.S. region and not to an Australian region.

Multivalue answer

This one lets you return multiple values, one for each of your resources, and the client or user’s browser randomly chooses one. Optionally you can add health checks: if any value becomes unhealthy, the client chooses another value to resolve. This is not an alternative to load balancing, it’s an enhancement.

AWS Route 53 multivalue answer routing policy diagram

Weighted

This one is fantastic for new deployments or testing new release versions. It’s based on a numerical value ranging from 0 to 255 for each record. If you specify a value of 0 for all records, traffic is routed equally.

AWS Route 53 weighted routing policy
The math is a little more complicated than simplified here, but you get the idea: each record receives traffic in proportion to its weight divided by the sum of all weights. Over time, if release 2.0 is going well, you would increase its value and lower release 1.0’s.
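
As a preview, here’s a minimal sketch of what the weighted setup above could look like via the AWS CLI (the hosted zone ID, record name, IPs, and weights are placeholders):

aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE \
  --change-batch '{
    "Changes": [
      {"Action": "UPSERT", "ResourceRecordSet": {
        "Name": "app.example.com", "Type": "A",
        "SetIdentifier": "release-1.0", "Weight": 200, "TTL": 60,
        "ResourceRecords": [{"Value": "192.0.2.10"}]}},
      {"Action": "UPSERT", "ResourceRecordSet": {
        "Name": "app.example.com", "Type": "A",
        "SetIdentifier": "release-2.0", "Weight": 55, "TTL": 60,
        "ResourceRecords": [{"Value": "192.0.2.20"}]}}
    ]
  }'

With these weights, release 1.0 receives 200/255 of the traffic and release 2.0 receives 55/255; shifting traffic is just a matter of adjusting the two numbers.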

In the future I’ll be showing how to actually implement these via code. Subscribe to get notified when that gets released!

As always if you see any errors or mistakes, please comment below. Don’t forget to like, share, and subscribe for more! 🙂