I can safely assume a lot of engineers already know of HashiCorp’s Packer utility. Packer is an automated virtual machine image template builder that can create images for all the major cloud providers, such as Amazon Machine Images (AMIs) in AWS or Virtual Machine Images in Azure. Not too long ago, AWS released its own automated image builder, called EC2 Image Builder! In this getting-started guide to EC2 Image Builder in Terraform, I will show you how to quickly put together the Terraform code to create a simple AMI.
Tools
You will need Terraform, and if you are deploying this exact code then you’ll need Terragrunt too. Here are the Setup infrastructure as code environment instructions, and if you’re new to Terragrunt then check out the Intro to Terragrunt and Terraform post. I also suggest installing pre-commit.
EC2 Image Builder Cost
The service itself doesn’t cost anything, but the various resources it creates can cost you. For example, you have to select an EC2 instance type to run for the duration of the AMI creation; the instance is terminated once the job completes. Also, as you know, AMIs are backed by EBS snapshots, hence storage costs. You get the point, let’s continue!
Permissions
You will need full permissions on the EC2 Image Builder service.
"imagebuilder:*"
Now the EC2 Image Builder IAM role will need at least the statements below. Here I’m creating the policy and role with Terraform. You may need more or less; adjust accordingly!
data "aws_iam_policy_document" "image_builder" {
  statement {
    effect = "Allow"
    actions = [
      "ssm:DescribeAssociation",
      "ssm:GetDeployablePatchSnapshotForInstance",
      "ssm:GetDocument",
      "ssm:DescribeDocument",
      "ssm:GetManifest",
      "ssm:GetParameter",
      "ssm:GetParameters",
      "ssm:ListAssociations",
      "ssm:ListInstanceAssociations",
      "ssm:PutInventory",
      "ssm:PutComplianceItems",
      "ssm:PutConfigurePackageResult",
      "ssm:UpdateAssociationStatus",
      "ssm:UpdateInstanceAssociationStatus",
      "ssm:UpdateInstanceInformation",
      "ssmmessages:CreateControlChannel",
      "ssmmessages:CreateDataChannel",
      "ssmmessages:OpenControlChannel",
      "ssmmessages:OpenDataChannel",
      "ec2messages:AcknowledgeMessage",
      "ec2messages:DeleteMessage",
      "ec2messages:FailMessage",
      "ec2messages:GetEndpoint",
      "ec2messages:GetMessages",
      "ec2messages:SendReply",
      "imagebuilder:GetComponent",
    ]
    resources = ["*"]
  }

  statement {
    effect = "Allow"
    actions = [
      "s3:ListBucket",
      "s3:GetObject"
    ]
    resources = ["*"]
  }

  statement {
    effect    = "Allow"
    actions   = ["s3:PutObject"]
    resources = ["arn:aws:s3:::${var.aws_s3_log_bucket}/image-builder/*"]
  }

  statement {
    effect = "Allow"
    actions = [
      "logs:CreateLogStream",
      "logs:CreateLogGroup",
      "logs:PutLogEvents"
    ]
    resources = ["arn:aws:logs:*:*:log-group:/aws/imagebuilder/*"]
  }

  statement {
    effect    = "Allow"
    actions   = ["kms:Decrypt"]
    resources = ["*"]

    condition {
      test     = "ForAnyValue:StringEquals"
      variable = "kms:EncryptionContextKeys"
      values   = ["aws:imagebuilder:arn"]
    }

    condition {
      test     = "ForAnyValue:StringEquals"
      variable = "aws:CalledVia"
      values   = ["imagebuilder.amazonaws.com"]
    }
  }
}
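The policy document above still needs to be attached to an IAM role, and the infrastructure configuration expects an instance profile wrapping that role. A minimal sketch of that wiring could look like the following; the role and instance profile names here are my assumptions, not code from the repo:

```hcl
# Hypothetical role wiring for the policy document above.
resource "aws_iam_role" "image_builder" {
  name = "ec2-image-builder" # assumed name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "ec2.amazonaws.com" }
    }]
  })
}

resource "aws_iam_role_policy" "image_builder" {
  name   = "image-builder-permissions"
  role   = aws_iam_role.image_builder.id
  policy = data.aws_iam_policy_document.image_builder.json
}

# The infrastructure configuration takes an instance profile name, not a role name.
resource "aws_iam_instance_profile" "image_builder" {
  name = "ec2-image-builder" # assumed name
  role = aws_iam_role.image_builder.name
}
```

With something like this in place, the instance profile name is what you would feed into the infrastructure configuration later on.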
EC2 Image Builder features
It has automated pipelines! You can set it to build your AMI on a schedule or on-demand. It has a package installer and security components. You can share the AMI across multiple AWS accounts. You can read more about its features at EC2 Image Builder Features.
EC2 Image Builder Pipeline
I’m setting this pipeline to run every Tuesday morning at 8 AM. The pipeline will trigger on that schedule only if there are dependency updates available. I have also enabled testing of the image, with a timeout of 60 minutes.
resource "aws_imagebuilder_image_pipeline" "this" {
  image_recipe_arn                 = aws_imagebuilder_image_recipe.this.arn
  infrastructure_configuration_arn = aws_imagebuilder_infrastructure_configuration.this.arn
  name                             = "amazon-linux-baseline"
  status                           = "ENABLED"
  description                      = "Creates an Amazon Linux 2 image."

  schedule {
    # This cron expression states: every Tuesday at 8 AM.
    schedule_expression                = "cron(0 8 ? * tue)"
    pipeline_execution_start_condition = "EXPRESSION_MATCH_AND_DEPENDENCY_UPDATES_AVAILABLE"
  }

  # Test the image after the build
  image_tests_configuration {
    image_tests_enabled = true
    timeout_minutes     = 60
  }

  tags = {
    "Name" = "${var.ami_name_tag}-pipeline"
  }
}
EC2 Image Builder Recipe
In the image recipe, I’m defining the AMI’s volume size and type, plus its components. For this simple example, I’m only installing the CloudWatch agent on the AMI. The aws_imagebuilder_image resource shown first ties the recipe, infrastructure configuration, and distribution configuration together into an actual image build.
resource "aws_imagebuilder_image" "this" {
  distribution_configuration_arn   = aws_imagebuilder_distribution_configuration.this.arn
  image_recipe_arn                 = aws_imagebuilder_image_recipe.this.arn
  infrastructure_configuration_arn = aws_imagebuilder_infrastructure_configuration.this.arn

  depends_on = [
    data.aws_iam_policy_document.image_builder
  ]
}

resource "aws_imagebuilder_image_recipe" "this" {
  name         = "amazon-linux-recipe"
  parent_image = "arn:${data.aws_partition.current.partition}:imagebuilder:${data.aws_region.current.name}:aws:image/amazon-linux-2-x86/x.x.x"
  version      = var.image_receipe_version

  block_device_mapping {
    device_name = "/dev/xvdb"

    ebs {
      delete_on_termination = true
      volume_size           = var.ebs_root_vol_size
      volume_type           = "gp3"
    }
  }

  component {
    component_arn = aws_imagebuilder_component.cw_agent.arn
  }
}

resource "aws_s3_bucket_object" "cw_agent_upload" {
  bucket = var.aws_s3_bucket_object
  key    = "files/amazon-cloudwatch-agent-linux.yml"
  source = "${path.module}/files/amazon-cloudwatch-agent-linux.yml"
  # Re-upload whenever the file's MD5 hash changes
  etag = filemd5("${path.module}/files/amazon-cloudwatch-agent-linux.yml")
}

data "aws_kms_key" "image_builder" {
  key_id = "alias/image-builder"
}

# Amazon CloudWatch agent component
resource "aws_imagebuilder_component" "cw_agent" {
  name       = "amazon-cloudwatch-agent-linux"
  platform   = "Linux"
  uri        = "s3://${var.aws_s3_bucket_object}/files/amazon-cloudwatch-agent-linux.yml"
  version    = "1.0.0"
  kms_key_id = data.aws_kms_key.image_builder.arn

  depends_on = [
    aws_s3_bucket_object.cw_agent_upload
  ]
}
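If you’d rather not manage an S3 upload for the component document, the provider also accepts the document inline through the `data` argument instead of `uri` (exactly one of the two may be set). A minimal sketch follows; the component document itself is a trimmed-down assumption for illustration, not the exact YAML from the repo:

```hcl
# Hypothetical inline variant of the CloudWatch agent component.
resource "aws_imagebuilder_component" "cw_agent_inline" {
  name     = "amazon-cloudwatch-agent-linux-inline"
  platform = "Linux"
  version  = "1.0.0"

  # Inline component document instead of an S3 uri.
  data = yamlencode({
    name          = "InstallCloudWatchAgent"
    description   = "Installs the Amazon CloudWatch agent." # assumed content
    schemaVersion = 1.0
    phases = [{
      name = "build"
      steps = [{
        name   = "InstallAgent"
        action = "ExecuteBash"
        inputs = {
          commands = ["sudo yum install -y amazon-cloudwatch-agent"]
        }
      }]
    }]
  })
}
```

The trade-off is that a long component document clutters your Terraform, which is why the post uploads it to S3 instead.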
EC2 Image Builder Infrastructure Configuration
Select the EC2 instance type, the IAM instance profile, security group, subnet, logging bucket, and much more in the infrastructure configuration resource.
resource "aws_imagebuilder_infrastructure_configuration" "this" {
  description                   = "Simple infrastructure configuration"
  instance_profile_name         = var.ec2_iam_role_name
  instance_types                = ["t2.micro"]
  key_pair                      = var.aws_key_pair_name
  name                          = "amazon-linux-infr"
  security_group_ids            = [data.aws_security_group.this.id]
  subnet_id                     = data.aws_subnet.this.id
  terminate_instance_on_failure = true

  logging {
    s3_logs {
      s3_bucket_name = var.aws_s3_log_bucket
      s3_key_prefix  = "image-builder"
    }
  }

  tags = {
    Name = "amazon-linux-infr"
  }
}
EC2 Image Builder Distribution Configuration
Here you can choose whether to share this AMI with other accounts or keep it in this account only. You can also tag the AMI in this resource.
resource "aws_imagebuilder_distribution_configuration" "this" {
  name = "local-distribution"

  distribution {
    region = var.aws_region

    ami_distribution_configuration {
      name = "amzn-linux-{{ imagebuilder:buildDate }}"

      ami_tags = {
        Project = "IT"
      }

      launch_permission {
        user_ids = ["123456789012"]
      }
    }
  }
}
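For reference, the input variables used throughout these snippets could be declared roughly as below. These declarations are my assumptions for completeness; the defaults are illustrative, not values from the repo:

```hcl
# Hypothetical declarations for the variables referenced above.
variable "aws_region" {
  type        = string
  description = "Region the AMI is distributed to."
}

variable "aws_s3_log_bucket" {
  type        = string
  description = "Bucket that receives EC2 Image Builder logs."
}

variable "aws_s3_bucket_object" {
  type        = string
  description = "Bucket that stores component documents."
}

variable "ami_name_tag" {
  type        = string
  description = "Base Name tag for created resources."
}

variable "ebs_root_vol_size" {
  type    = number
  default = 30 # GiB, illustrative
}

variable "image_receipe_version" {
  type    = string
  default = "1.0.0" # illustrative
}

variable "ec2_iam_role_name" {
  type        = string
  description = "Instance profile name used by the build instance."
}

variable "aws_key_pair_name" {
  type        = string
  description = "EC2 key pair for debugging the build instance."
}
```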
Testing the AMI
Since I have enabled testing of the image, EC2 Image Builder will automatically create an instance from the new AMI and run its tests against it.


Complete Code
A lot of other files and code aren’t shown here. You can find a completed working example at https://github.com/masterwali/ec2-image-builder
Hi Waleed, thank you kindly for this example and the accompanying code. One problem I have implementing this: whenever I upgrade the version number of a recipe component and recipe, Terraform attempts to destroy the previous version of the recipe. However, since the previous version of the component still exists, there is a dependency error.
So the question is, in the code you shared, what is the correct way to increment component versions and recipe versions?
Thanks!
You’re welcome!
I do have a solution for that. You’ll need to add the Terraform lifecycle “create_before_destroy” to your recipe. I’ll add it to my code when I get time.
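For readers hitting the same error, the fix being described is roughly this, sketched on the recipe (the same block can go on the component as well):

```hcl
resource "aws_imagebuilder_image_recipe" "this" {
  # ...existing arguments from the post...

  # Create the new recipe version before destroying the old one,
  # so dependent resources never reference a deleted recipe.
  lifecycle {
    create_before_destroy = true
  }
}
```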
The latest code in GitHub now has the fix.
The code in GitHub has been updated.
Thank you for sharing. How can I prevent the pipeline from running every time I run terraform apply? I just want to update the resources without running the pipeline. I tried status = "DISABLED", but that doesn’t stop the pipeline from running after terraform apply.
The pipeline continues to run on subsequent applies even when the status is set to DISABLED?