How To Create S3 Buckets Using Terraform
In this blog post, we will see how to create S3 buckets using Terraform.
What Is Terraform?
Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as on-premises datacenters.
Hence it is often described as Infrastructure as Code.
Installing Terraform
If you haven’t installed Terraform yet, you can go ahead and install it using the article below.
Setting Up Project Directory
This is the place where you will store all the Terraform files.
So what we are going to do is create a folder, and inside it we will create the Terraform files.
We need to create the following files:
creds.tf, providers.tf, .gitignore, main.tf
Terraform automatically picks up all the .tf files within the directory.
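For illustration, the project directory could look something like this (the folder name terraform-s3 is just a placeholder):

terraform-s3/
├── creds.tf
├── providers.tf
├── main.tf
└── .gitignore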
Explanation Of The .tf Files
creds.tf
This is the place where we will store all the AWS secrets, such as the Access Key ID, Secret Access Key, and Region.

aws_access_key – The Access Key ID used to make API calls to AWS resources from your machine.
aws_secret_key – The Secret Access Key associated with the Access Key ID; it is used to authenticate those calls.
aws_region – The AWS region where you want to create all your resources.
Providers.tf

Providers are interfaces to the services that will maintain our resources. There are many cloud providers supported by Terraform, such as AWS, Azure, Google Cloud, IBM Cloud, Oracle Cloud, and DigitalOcean.
Amazon Web Services (AWS) is one such provider.
The AWS provider requires aws_access_key (which tells Terraform which IAM user to act as), aws_secret_key (used to authenticate the requests), and aws_region (where Terraform should create the infrastructure).
Main.tf
You can change the name of this file as per your requirements and directory structure.
What Not To Do With Access Keys?
It is always recommended not to use AWS access and secret keys directly in a file.
What Should We Do?
Instead, we will set up the AWS CLI, an open-source tool that enables you to interact with AWS services using commands in your command-line shell.
Then we will add the AWS keys to the /home/rahul/.aws/credentials file.
We will then ask Terraform to use a particular profile when it runs.
I have written an article on how to install the AWS CLI, configure profiles, and use them with Terraform.
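As a rough sketch, assuming you have already created a profile with aws configure --profile myprofile (the profile name here is only an example), the provider block can reference that profile instead of raw keys:

provider "aws" {
  region                  = "${var.aws_region}"
  # "myprofile" is a placeholder – use the profile name you configured with the AWS CLI.
  profile                 = "myprofile"
  shared_credentials_file = "/home/rahul/.aws/credentials"
}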
Let’s go ahead and set up the Terraform scripts to create S3 buckets.
Creating Single S3 Bucket Using Terraform
Let’s say you have to create an S3 bucket.
We will be creating the files discussed above.
In the providers.tf file, we will mention the provider as AWS and the region where the S3 bucket should be created.
provider "aws" { access_key = "${var.aws_access_key}" secret_key = "${var.aws_secret_key}" region = "${var.aws_region}" }
Next is the creds.tf file, which holds the AWS credentials and lets Terraform create the S3 bucket.
You can also configure an AWS profile to provide the credentials instead of placing them directly in the creds.tf file.
# AWS Config
variable "aws_access_key" {
  default = "PASTE_ACCESS_KEY_HERE"
}

variable "aws_secret_key" {
  default = "PASTE_SECRET_KEY_HERE"
}

variable "aws_region" {
  default = "ENTER_AWS_REGION"
}
And then we will create a file called s3.tf, which contains the Terraform script to create the S3 bucket.
The below script will create one S3 bucket; the ACL of the bucket will be private and versioning will be enabled.
resource "aws_s3_bucket" "onebucket" { bucket = "testing-s3-with-terraform" acl = "private" versioning { enabled = true } tags = { Name = "Bucket1" Environment = "Test" } }
The above script will create a bucket named “testing-s3-with-terraform”, which will be private and have versioning enabled.
We are also tagging the bucket with Name and Environment.
Also, in the script we have used the bucket argument to set the name of the bucket. If the bucket name is not mentioned, Terraform will assign a random bucket name, since the name of the bucket has to be globally unique.
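As a minimal sketch, you could also leave the exact name to Terraform by using the bucket_prefix argument instead of bucket; the prefix shown here is just an illustration:

resource "aws_s3_bucket" "prefixed_bucket" {
  # Terraform appends a unique suffix to this prefix, keeping the bucket name globally unique.
  bucket_prefix = "testing-s3-with-terraform-"
  acl           = "private"
}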
If this is the first run in the directory, run terraform init first so Terraform can download the AWS provider plugin. Then run terraform plan to verify the script; it will let us know what will happen if the above script is executed.

Now run terraform apply to create the S3 bucket.
Let’s verify the same by logging in to the S3 console.
Search for the name of the bucket you have mentioned.

Also, click the bucket and choose Properties to verify whether versioning is enabled.

If you wish to delete the S3 bucket, run terraform destroy.
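Note that terraform destroy cannot delete a bucket that still contains objects. As a minimal sketch, adding the force_destroy argument to the resource (the multiple-bucket example below uses it as well) tells Terraform to empty the bucket before deleting it:

resource "aws_s3_bucket" "onebucket" {
  bucket = "testing-s3-with-terraform"
  acl    = "private"

  # Allow terraform destroy to remove the bucket even if it still contains objects.
  force_destroy = true

  versioning {
    enabled = true
  }
}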
Creating Multiple S3 Buckets At Once
The below script will create multiple S3 buckets; the ACL of the buckets will be private and versioning will be enabled.
variable "s3_bucket_name" { type = "list" default = ["terr-test-buc-1", "terr-test-buc-1", "terr-test-buc-1"] } resource "aws_s3_bucket" "henrys_bucket" { count = "${length(var.s3_bucket_name)}" bucket = "${var.s3_bucket_name[count.index]}" acl = "private" versioning { enabled = true } force_destroy = "true" }
In the above script, the s3_bucket_name variable contains the list of bucket names that you want to create.
The bucket names are mentioned in the default key.
Then count calculates the number of buckets to create from the length of the s3_bucket_name variable.
Run terraform plan to verify the script and then run terraform apply to create multiple S3 buckets as per your requirement.
Using the above script, we can create multiple private S3 buckets with versioning enabled.
Conclusion
We have learnt to create S3 buckets using Terraform.
Thanks for reading. Hope you find it helpful.
Please do check out my other articles.