Deployment of AWS, Azure & GCP resources through Terraform using the S3 remote backend with DynamoDB
Deploying AWS resources through Terraform with S3 as the backend is a reliable way to store remote state. The remote state is kept in an S3 bucket, while a DynamoDB table provides state locking.
First, create the S3 bucket with the following Terraform configuration.
# Create S3 storage
resource "aws_s3_bucket" "example" {
  bucket = var.aws_s3_bucket_name

  tags = {
    Name        = "<your_s3_bucket_name>"
    Environment = "Dev"
  }
}

/*
resource "aws_s3_access_point" "example" {
  bucket = aws_s3_bucket.example.id
  name   = var.aws_s3_bucket_name
}
*/

# Note: the aws_s3_bucket_acl resource requires AWS provider v4.0 or later;
# on the 3.x provider used below, set acl = "private" directly on the
# aws_s3_bucket resource instead.
resource "aws_s3_bucket_acl" "example" {
  bucket = aws_s3_bucket.example.id
  acl    = "private"
}
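The DynamoDB table used for state locking is referenced later but never defined; a minimal sketch is below. The variable name is an assumption (any table name works, as long as it matches the `dynamodb_table` argument in the backend block), but the `LockID` string hash key is required by Terraform's locking mechanism.

```hcl
# DynamoDB table used by the S3 backend for state locking.
# The hash key must be named "LockID" and be of type string ("S").
resource "aws_dynamodb_table" "terraform_lock" {
  name         = var.aws_dynamodb_table_name # assumed variable name
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"

  attribute {
    name = "LockID"
    type = "S"
  }
}
```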
Once the S3 bucket is created, you can specify it as the backend for remote state when provisioning AWS resources. Define the backend in the terraform block as follows.
terraform {
  backend "s3" {
    bucket         = "<your_s3_bucket_name>"
    key            = "terraform"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "<your_dynamodb_table_name>"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.74"
    }
  }
}
provider "aws" {
  endpoints {
    s3 = "https://s3.us-east-1.amazonaws.com"
  }

  region                      = "us-east-1"
  skip_region_validation      = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true
  profile                     = "default"
  shared_credentials_file     = "~/.aws/credentials"
}
With this in place, the DynamoDB table handles state locking while the S3 bucket holds the .tfstate of the provisioned resources.
Let's provision a simple EC2 instance through Terraform using the S3 backend with a DynamoDB table for state management. The EC2 instance can be provisioned with the corresponding Terraform configuration from GitHub.
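The exact configuration lives in the linked GitHub repo; as an illustrative stand-in, a minimal EC2 resource might look like the following. The AMI variable and instance type here are assumptions, not the repo's actual values.

```hcl
# Minimal EC2 instance; its state is stored via the S3 backend configured above.
resource "aws_instance" "example" {
  ami           = var.ami_id # assumed variable, e.g. an Amazon Linux 2 AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "terraform-s3-backend-demo"
  }
}
```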
Next, provision a GCP Cloud SQL resource through the following Terraform configuration (main.tf) file. Make sure to set the "database_version" argument in the "google_sql_database_instance" resource block.
# The main configuration of the Cloud SQL Terraform module
resource "google_sql_database" "sql_database" {
  name     = var.sql_database_name
  instance = google_sql_database_instance.instance_name.name
}

resource "google_sql_database_instance" "instance_name" {
  name             = var.sql_database_instance_name
  region           = var.sql_database_instance_region
  database_version = "POSTGRES_14"

  settings {
    tier = "db-f1-micro"
  }

  deletion_protection = true
}
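The configuration above references several input variables; a hypothetical variables.tf declaring them could look like this (the default region is illustrative only):

```hcl
variable "sql_database_name" {
  type = string
}

variable "sql_database_instance_name" {
  type = string
}

variable "sql_database_instance_region" {
  type    = string
  default = "us-central1" # illustrative default
}
```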
Now, specify a similar terraform backend "s3" block with the DynamoDB details so that the remote state for the GCP resources is managed the same way. Define the following configuration for the terraform and provider blocks.
terraform {
  backend "s3" {
    bucket         = "<your_s3_bucket_name>"
    key            = "<your_s3_bucket_key>"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "<your_dynamodb_table>"
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.74"
    }
  }
}
provider "google" {
  project = var.project
  region  = var.region
  zone    = var.zone
}
Next, deploy an AKS cluster node pool through Terraform using the S3 backend. When you run "terraform init" with the S3 backend configured, Terraform reports that the backend has been successfully initialized, as shown in the following screenshot.
As you can see, the AKS cluster node pool is provisioned through Terraform using S3 as a remote backend.
You can refer to the Terraform providers block for the AKS cluster node pool, or for the other cloud resources provisioned here, in the configuration file in the GitHub repo.
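The node pool configuration itself is in the linked repo; as an illustrative sketch (the names, VM size, and the referenced cluster resource are assumptions, and the azurerm provider must also be configured), an additional node pool on an existing AKS cluster is typically declared like this:

```hcl
# Additional node pool attached to an existing AKS cluster;
# state is again stored via the S3 remote backend.
resource "azurerm_kubernetes_cluster_node_pool" "example" {
  name                  = "userpool"
  kubernetes_cluster_id = azurerm_kubernetes_cluster.example.id # assumed cluster resource
  vm_size               = "Standard_DS2_v2"
  node_count            = 1
}
```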
# Happy Terraforming!