Before you use or approve Amazon EKS in production, you must have a security checklist. Everyone's list is different, but every list should include must-have items that ensure authentication and authorization follow least privilege. Let's explore Amazon EKS IAM roles and policies written in Terraform!
What are some suggestions to improve your Amazon EKS IAM design?
- Start with the managed roles and policies, then review AWS CloudTrail logs to see which events or API calls actually occur
- Start creating your own customer managed IAM policies and IAM roles, one at a time
- If possible, require MFA to ensure the user is who they say they are
- Once you have validated your custom roles and policies, add conditions to your IAM policies; again, one condition at a time
Before I show any code, it's important to know basic AWS IAM terminology. Let's add identity-based policies and resource-based policies to your vocabulary. Resource-based policies are about the "what"; identity-based policies are about the "who". Then there's the "action" that the identity or resource can use, based on the "effect". There are dozens of EKS IAM actions available; see the Actions defined by Amazon Elastic Kubernetes Service page.
EKS Cluster Authentication
Just a reminder on how EKS cluster authentication works: the bottom line is that all permissions are essentially managed by Kubernetes Role-Based Access Control (RBAC), while AWS IAM handles authentication.

Prerequisites
Prior to creating your EKS cluster, be sure to identify which IAM role or user will be the "primary" identity that creates the EKS cluster. The identity that first creates the EKS cluster is automatically added to the Kubernetes system:masters group. Which is great; however, you will not be able to see that identity in the aws-auth ConfigMap!
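Every other identity has to be mapped explicitly. A typical mapping in the aws-auth ConfigMap looks roughly like this (a sketch; the role ARN and username below are hypothetical, not from this article's cluster):

```yaml
# kube-system/aws-auth ConfigMap (sketch; ARN and username are hypothetical)
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::1234567890:role/eks-admin-role
      username: eks-admin
      groups:
        - system:masters
```

The cluster-creator identity works without appearing in this map, which is exactly why it is easy to lose track of.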
Amazon EKS IAM Resource
The Amazon EKS cluster resource has the following ARN:
arn:${Partition}:eks:${Region}:${Account}:cluster/${ClusterName}
Note: Use a wildcard ("*") if you really need to specify all clusters. Also, you cannot use this resource filter pattern for certain actions, such as creating a new cluster. How would anyone know that? Take another look at the Actions defined by Amazon Elastic Kubernetes Service page. You'll notice that for the CreateCluster action the Resource box is empty.
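In practice that means a statement granting eks:CreateCluster must use a wildcard resource. A minimal sketch in Terraform's aws_iam_policy_document syntax (the Sid is my own naming):

```hcl
# eks:CreateCluster does not support resource-level permissions,
# so the resource has to be "*"
statement {
  sid       = "CreateAnyCluster"
  actions   = ["eks:CreateCluster"]
  resources = ["*"]
}
```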

IAM policies based on the cluster name
Read/View all clusters: Terraform IAM example
Initially the user or role may not have any EKS permissions, so if you attempt to list all the clusters it returns an error like this:
aws eks list-clusters
An error occurred (AccessDeniedException) when calling the ListClusters operation: User: arn:aws:iam::1234567890:user/read-only-all is not authorized to perform: eks:ListClusters on resource: arn:aws:eks:us-east-2:1234567890:cluster/*
The eks:ListClusters action does not support resource-level restrictions, so its Resource must be "*"; it therefore goes in a different statement than the other actions, which do allow the EKS resource ARN. See the example below.
{
  "Statement": [
    {
      "Action": [
        "eks:ListUpdates",
        "eks:ListTagsForResource",
        "eks:ListNodegroups",
        "eks:ListIdentityProviderConfigs",
        "eks:ListFargateProfiles",
        "eks:ListAddons",
        "eks:DescribeCluster"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:eks:us-east-2:1234567890:cluster/*",
      "Sid": "ReadAllEKSclusters"
    },
    {
      "Action": "eks:ListClusters",
      "Effect": "Allow",
      "Resource": "*",
      "Sid": "ListAllEKSclusters"
    }
  ],
  "Version": "2012-10-17"
}
Rerun the list-clusters action with this newly attached policy.
aws eks list-clusters
# My results
{
  "clusters": [
    "aws001-preprod-dev-eks"
  ]
}
The DescribeCluster action and a few others require the exact name of the cluster you want to describe. Because the ARN above ends in a wildcard, this policy allows describing all clusters.
aws eks describe-cluster --name aws001-preprod-dev-eks
# result
{
  "cluster": {
    "name": "aws001-preprod-dev-eks",
    "arn": "arn:aws:eks:us-east-2:1234567890:cluster/aws001-preprod-dev-eks",
    "createdAt": "2022-03-22T06:37:57.278000-04:00",
    "version": "1.21",
    "endpoint": "https://abc123456789.gr7.us-east-2.eks.amazonaws.com",
    "roleArn": "arn:aws:iam::1234567890:role/aws001-preprod-dev-eks-cluster-role",
    "resourcesVpcConfig": {
      "subnetIds": [
...
}
Read only specific cluster name
This time, simply replace the last asterisk in the resource ARN with the cluster name. In this case I'm using Terraform, so I'm passing the name via a local variable.
statement {
  actions = [
    "eks:AccessKubernetesApi",
    "eks:DescribeCluster",
    "eks:ListAddons",
    "eks:ListFargateProfiles",
    "eks:ListIdentityProviderConfigs",
    "eks:ListNodegroups",
    "eks:ListTagsForResource",
    "eks:ListUpdates"
  ]
  resources = [
    "arn:aws:eks:${var.region}:${data.aws_caller_identity.current.account_id}:cluster/${local.cluster_name}",
  ]
}
aws eks list-clusters --profile iam
An error occurred (AccessDeniedException) when calling the ListClusters operation: User: arn:aws:iam::1234567890:user/Waleed is not authorized to perform: eks:ListClusters on resource: arn:aws:eks:us-east-2:1234567890:cluster/*
Now listing all EKS clusters is not allowed.
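To try this yourself, wrap the policy document in an IAM policy and attach it to the identity. A minimal sketch; the policy name and the data source name read_one_cluster are my own assumptions, and the user is the one from the error message above:

```hcl
# Hypothetical names: "read_one_cluster" policy document is assumed
# to be the statement shown above.
resource "aws_iam_policy" "read_one_cluster" {
  name   = "eks-read-one-cluster"
  policy = data.aws_iam_policy_document.read_one_cluster.json
}

resource "aws_iam_user_policy_attachment" "read_one_cluster" {
  user       = "Waleed" # existing IAM user from the example above
  policy_arn = aws_iam_policy.read_one_cluster.arn
}
```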
Modify all clusters
This role should only be assigned to a few EKS/Kubernetes administrators who are highly experienced or certified to manage Kubernetes clusters in the AWS Cloud. Larger organizations may have a single administrator per cluster, while others have one administrator managing multiple clusters.
statement {
  sid = "ModifyAllEKSclusters"
  actions = [
    "eks:AccessKubernetesApi",
    "eks:Associate*",
    "eks:Create*",
    "eks:Delete*",
    "eks:DeregisterCluster",
    "eks:Describe*",
    "eks:List*",
    "eks:RegisterCluster",
    "eks:TagResource",
    "eks:UntagResource",
    "eks:Update*"
  ]
  resources = [
    "*"
  ]
}
statement {
  sid    = "Deny"
  effect = "Deny"
  # No major updates allowed in this example
  actions = [
    "eks:CreateCluster",
    "eks:DeleteCluster"
  ]
  resources = [
    "*"
  ]
}
Modify a specific cluster
statement {
  sid = "ModifyaEKScluster"
  actions = [
    "eks:AccessKubernetesApi",
    "eks:Associate*",
    "eks:Create*",
    "eks:Delete*",
    "eks:DeregisterCluster",
    "eks:DescribeCluster",
    "eks:DescribeUpdate",
    "eks:List*",
    "eks:TagResource",
    "eks:UntagResource",
    "eks:Update*"
  ]
  resources = [
    "arn:aws:eks:${var.region}:${data.aws_caller_identity.current.account_id}:cluster/${local.cluster_name}",
  ]
}
statement {
  sid = "ModifyaEKSclusterResource"
  actions = [
    "eks:DescribeNodegroup",
    "eks:DescribeFargateProfile",
    "eks:DescribeIdentityProviderConfig",
    "eks:DescribeAddon"
  ]
  resources = [
    "arn:aws:eks:${var.region}:${data.aws_caller_identity.current.account_id}:cluster/${local.cluster_name}",
    "arn:aws:eks:${var.region}:${data.aws_caller_identity.current.account_id}:nodegroup/${local.cluster_name}/*/*",
    "arn:aws:eks:${var.region}:${data.aws_caller_identity.current.account_id}:addon/${local.cluster_name}/*/*",
    "arn:aws:eks:${var.region}:${data.aws_caller_identity.current.account_id}:identityproviderconfig/${local.cluster_name}/*/*/*",
    "arn:aws:eks:${var.region}:${data.aws_caller_identity.current.account_id}:fargateprofile/${local.cluster_name}/*/*"
  ]
}
# These actions don't use the 'cluster' resource type
statement {
  sid = "Modify"
  actions = [
    "eks:RegisterCluster",
    "eks:DisassociateIdentityProviderConfig"
  ]
  resources = [
    "*",
  ]
}
statement {
  sid    = "Deny"
  effect = "Deny"
  # No major updates allowed in this example
  actions = [
    "eks:CreateCluster",
    "eks:DeleteCluster"
  ]
  resources = [
    "*"
  ]
}
For a complete list of Amazon EKS actions see the original documentation.
Amazon EKS resources ARN
Use these various ARN patterns to constrain permissions for certain teams.
Type | ARN pattern |
---|---|
cluster | arn:aws:eks:${Region}:${Account}:cluster/${ClusterName} |
nodegroup | arn:aws:eks:${Region}:${Account}:nodegroup/${ClusterName}/${NodegroupName}/${UUID} |
addon | arn:aws:eks:${Region}:${Account}:addon/${ClusterName}/${AddonName}/${UUID} |
fargateprofile | arn:aws:eks:${Region}:${Account}:fargateprofile/${ClusterName}/${FargateProfileName}/${UUID} |
identityproviderconfig | arn:aws:eks:${Region}:${Account}:identityproviderconfig/${ClusterName}/${IdentityProviderType}/${IdentityProviderConfigName}/${UUID} |
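In Terraform it can help to build these ARN patterns once in locals and reuse them across statements. A sketch, assuming var.region, the aws_caller_identity data source, and local.cluster_name as used elsewhere in this article:

```hcl
locals {
  # Common prefix for all EKS ARNs in this account/region
  eks_arn_prefix = "arn:aws:eks:${var.region}:${data.aws_caller_identity.current.account_id}"

  cluster_arn         = "${local.eks_arn_prefix}:cluster/${local.cluster_name}"
  nodegroup_arns      = "${local.eks_arn_prefix}:nodegroup/${local.cluster_name}/*/*"
  addon_arns          = "${local.eks_arn_prefix}:addon/${local.cluster_name}/*/*"
  fargateprofile_arns = "${local.eks_arn_prefix}:fargateprofile/${local.cluster_name}/*/*"
}
```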
EKS IAM Condition Keys
In addition to filtering EKS permissions with the resource name(s), you can filter further using condition keys.
Key | Type | Description |
---|---|---|
aws:RequestTag/${TagKey} | string | Use this to ensure tags are present in the “request”/create call. Basically before creating new resources. |
aws:ResourceTag/${TagKey} | string | Use this to find resources that have the tags already. It’s for existing resources with tags. |
aws:TagKeys | ArrayOfString | Similar to aws:RequestTag/${TagKey} but it’s a list of tag keys, instead of just one. |
eks:clientId | string | The “clientId” value in the associateIdentityProviderConfig call |
eks:issuerUrl | string | The “issuerUrl” value in the associateIdentityProviderConfig call |
{
  "Sid": "TagEKSWithTheseTags",
  "Effect": "Allow",
  "Action": [
    "eks:CreateCluster",
    "eks:TagResource"
  ],
  "Resource": "*",
  "Condition": {
    "StringEqualsIfExists": {
      "aws:RequestTag/environment": [
        "development",
        "sandbox"
      ],
      "aws:RequestTag/jobfunction": "DevOps"
    },
    "ForAllValues:StringEquals": {
      "aws:TagKeys": [
        "environment",
        "jobfunction"
      ]
    }
  }
}
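Since this article leans on Terraform, the same JSON statement can be expressed with condition blocks in an aws_iam_policy_document; a sketch of the equivalent:

```hcl
statement {
  sid       = "TagEKSWithTheseTags"
  effect    = "Allow"
  actions   = ["eks:CreateCluster", "eks:TagResource"]
  resources = ["*"]

  # Tags supplied in the create/tag request must match these values
  condition {
    test     = "StringEqualsIfExists"
    variable = "aws:RequestTag/environment"
    values   = ["development", "sandbox"]
  }
  condition {
    test     = "StringEqualsIfExists"
    variable = "aws:RequestTag/jobfunction"
    values   = ["DevOps"]
  }
  # Only these tag keys may appear in the request
  condition {
    test     = "ForAllValues:StringEquals"
    variable = "aws:TagKeys"
    values   = ["environment", "jobfunction"]
  }
}
```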
Other IAM policies
EKS Console Admin policy: this policy allows full read and write access to the Configuration tab on the EKS console. The Resources and Overview tabs require Kubernetes RBAC permissions.
data "aws_iam_policy_document" "console_admin" {
  statement {
    sid = "admin"
    actions = [
      "eks:*"
    ]
    resources = [
      "*"
    ]
  }
  statement {
    sid    = "console"
    effect = "Allow"
    actions = [
      "iam:PassRole"
    ]
    resources = [
      "*"
    ]
    condition {
      test     = "StringEquals"
      variable = "iam:PassedToService"
      values   = ["eks.amazonaws.com"]
    }
  }
}
Update a Kubernetes cluster version: this policy only allows updating the Kubernetes cluster version. In the Terraform example below, updating the cluster version is allowed only when the EKS cluster has a tag key of "environment" with a value of "sandbox" (for clusters in the current account and region, of course).
data "aws_iam_policy_document" "cluster_version" {
  statement {
    sid = "admin"
    actions = [
      "eks:UpdateClusterVersion"
    ]
    resources = [
      "arn:aws:eks:${var.region}:${data.aws_caller_identity.current.account_id}:cluster/*"
    ]
    condition {
      test     = "StringEquals"
      variable = "aws:ResourceTag/environment"
      values   = ["sandbox"]
    }
  }
}
EKS Service-linked roles
The following table shows all the service-linked roles that are automatically created when you create the cluster and its components.
Component | Role name | Service URL | IAM Policy | IAM Policy ARN |
---|---|---|---|---|
EKS Cluster role | AWSServiceRoleForAmazonEKS | eks.amazonaws.com | AmazonEKSServiceRolePolicy | arn:aws:iam::aws:policy/aws-service-role/AmazonEKSServiceRolePolicy |
EKS node groups | AWSServiceRoleForAmazonEKSNodegroup | eks-nodegroup.amazonaws.com | AWSServiceRoleForAmazonEKSNodegroup | arn:aws:iam::aws:policy/aws-service-role/AWSServiceRoleForAmazonEKSNodegroup |
EKS Fargate profiles | AWSServiceRoleForAmazonEKSForFargate | eks-fargate.amazonaws.com | AmazonEKSForFargateServiceRolePolicy | arn:aws:iam::aws:policy/aws-service-role/AmazonEKSForFargateServiceRolePolicy |
EKS Connector | AWSServiceRoleForAmazonEKSConnector | eks-connector.amazonaws.com | AmazonEKSConnectorServiceRolePolicy | arn:aws:iam::aws:policy/aws-service-role/AmazonEKSConnectorServiceRolePolicy |
EKS IAM Roles
Amazon EKS Cluster Role
The AmazonEKSClusterPolicy must be attached to your EKS cluster role before you create your cluster.
resource "aws_iam_role" "eks_cluster_role" {
  name = "eks-cluster-role"
  tags = local.required_tags

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "eks_cluster_role" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks_cluster_role.name
}
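If you'd rather avoid heredocs, the same trust policy can be written with jsonencode, which catches JSON syntax mistakes at plan time; an equivalent sketch of the role above:

```hcl
# Same trust policy as the heredoc version, expressed with jsonencode
resource "aws_iam_role" "eks_cluster_role" {
  name = "eks-cluster-role"
  tags = local.required_tags

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "eks.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}
```

Use one form or the other, not both; they define the same role.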
Amazon EKS node IAM role
Each node (an EC2 instance, for example) uses an IAM role to make AWS API calls. Before you can create and register nodes to the EKS cluster, they must have an IAM role with the following policies attached: AmazonEKSWorkerNodePolicy and AmazonEC2ContainerRegistryReadOnly. We'll add the AmazonEKS_CNI_Policy later.
locals {
  eks_node_policies = ["AmazonEC2ContainerRegistryReadOnly", "AmazonEKSWorkerNodePolicy"]
}

resource "aws_iam_role" "eks_node_role" {
  name = "eks-node-role"
  tags = local.required_tags

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "eks_node_role" {
  for_each   = toset(local.eks_node_policies)
  policy_arn = "arn:aws:iam::aws:policy/${each.value}"
  role       = aws_iam_role.eks_node_role.name
}
Amazon EKS CNI Policy
You can attach the AmazonEKS_CNI_Policy to the node role above. However, to follow the least privilege model and protect your nodes as much as possible, you should instead create an IAM role for the Kubernetes service account, or IRSA. There are multiple steps involved, but we're in luck: there's an IRSA Terraform module built by the AWS open source community (the AmazonEKS_CNI_Policy applies if you're using IPv4). There's also another IRSA Terraform module maintained by the community.
locals {
  addon_context = {
    aws_caller_identity_account_id = data.aws_caller_identity.current.account_id
    aws_caller_identity_arn        = data.aws_caller_identity.current.arn
    aws_eks_cluster_endpoint       = data.aws_eks_cluster.eks_cluster.endpoint
    aws_partition_id               = data.aws_partition.current.partition
    aws_region_name                = data.aws_region.current.name
    eks_oidc_issuer_url            = local.eks_oidc_issuer_url
    eks_cluster_id                 = aws_eks_cluster.this.id
    eks_oidc_provider_arn          = "arn:${data.aws_partition.current.partition}:iam::${data.aws_caller_identity.current.account_id}:oidc-provider/${local.eks_oidc_issuer_url}"
    tags                           = local.required_tags
  }
}

module "vpc_cni_irsa" {
  source = "git@github.com:aws-ia/terraform-aws-eks-blueprints.git//modules/irsa?ref=v4.2.1"

  kubernetes_namespace              = "kube-system"
  kubernetes_service_account        = "aws-node"
  create_kubernetes_namespace       = false
  create_kubernetes_service_account = false
  irsa_iam_policies                 = ["arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"]
  addon_context                     = local.addon_context
}
Component | Service URL | IAM Policy | IAM Policy ARN |
---|---|---|---|
EKS Cluster role | eks.amazonaws.com | AmazonEKSClusterPolicy | arn:aws:iam::aws:policy/AmazonEKSClusterPolicy |
EKS node role | ec2.amazonaws.com | AmazonEKSWorkerNodePolicy AmazonEC2ContainerRegistryReadOnly | arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly |
EKS Fargate profiles | eks-fargate.amazonaws.com | AmazonEKSForFargateServiceRolePolicy | arn:aws:iam::aws:policy/aws-service-role/AmazonEKSForFargateServiceRolePolicy |
EKS Connector | eks-connector.amazonaws.com | AmazonEKSConnectorServiceRolePolicy | arn:aws:iam::aws:policy/aws-service-role/AmazonEKSConnectorServiceRolePolicy |
EKS Fargate profiles
We cannot use the node IAM role for EKS Fargate profiles; we have to create a pod execution IAM role. Kubernetes Role-Based Access Control (RBAC) will use this pod execution IAM role for authorization to AWS services, for example to pull an image from Amazon Elastic Container Registry (ECR). The code below creates the Amazon EKS pod execution IAM role with the required policy and trust settings.
resource "aws_iam_role" "eks_pod_exe_role" {
  name = "eks-fargate-pod-execution-role"
  tags = local.required_tags

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks-fargate-pods.amazonaws.com"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "ArnLike": {
          "aws:SourceArn": "arn:aws:eks:${var.region}:${data.aws_caller_identity.current.account_id}:fargateprofile/${local.cluster_name}/*"
        }
      }
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "eks_pod_exe_role" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
  role       = aws_iam_role.eks_pod_exe_role.name
}
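A Fargate profile then references this pod execution role. A minimal sketch; the profile name, subnet variable, and namespace selector below are hypothetical:

```hcl
resource "aws_eks_fargate_profile" "example" {
  cluster_name           = local.cluster_name
  fargate_profile_name   = "example" # hypothetical name
  pod_execution_role_arn = aws_iam_role.eks_pod_exe_role.arn
  subnet_ids             = var.private_subnet_ids # hypothetical variable

  # Pods in this namespace are scheduled onto Fargate
  selector {
    namespace = "default"
  }
}
```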
EKS Connector
The EKS Connector is a read/view-only feature that lets you see Kubernetes clusters running in other cloud providers, on-premises, or on your own EC2 instances. It also needs a different IAM role.
# #########################################
# EKS Connector
# #########################################
data "aws_iam_policy_document" "connector" {
  statement {
    sid = "SsmControlChannel"
    actions = [
      "ssmmessages:CreateControlChannel"
    ]
    resources = [
      "arn:aws:eks:*:*:cluster/*"
    ]
  }
  statement {
    sid = "ssmDataplaneOperations"
    actions = [
      "ssmmessages:CreateDataChannel",
      "ssmmessages:OpenDataChannel",
      "ssmmessages:OpenControlChannel"
    ]
    resources = ["*"]
  }
}

resource "aws_iam_policy" "connector" {
  name   = "eks-connector"
  path   = "/"
  policy = data.aws_iam_policy_document.connector.json

  tags = {
    "Name" = "eks-connector"
  }
}

resource "aws_iam_role" "eks_connector_role" {
  name = "eks-connector-role"
  tags = local.required_tags

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ssm.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "eks_connector_role" {
  policy_arn = aws_iam_policy.connector.arn
  role       = aws_iam_role.eks_connector_role.name
}
To learn more about AWS managed policies, see https://docs.aws.amazon.com/eks/latest/userguide/security-iam-awsmanpol.html
To see all the code in Terraform, visit the GitHub repo.
If you don’t know how Terraform works, then jump to the Intro to Terraform guide first.