Getting Started with Terraform: Infrastructure as Code for Developers
Terraform lets you define cloud infrastructure in code, version it like source code, and apply it repeatably. Here's how to go from zero to your first AWS resource.
Clicking through the AWS console to provision resources is fine for experimentation. It falls apart when you need to reproduce the same infrastructure in staging and production, collaborate with a team, or recover from an outage quickly.
Terraform solves this. You describe your infrastructure in code, commit it to git, and apply it. Changes are auditable, repeatable, and reversible.
What Is Infrastructure as Code?
Infrastructure as Code (IaC) means defining servers, databases, networking, and other cloud resources in configuration files — just like application code. The benefits:
- Reproducibility: apply the same config in any environment
- Version control: every infrastructure change is tracked in git with a diff and a commit message
- Collaboration: teams review infrastructure changes the same way they review code
- Recovery: re-create an environment from scratch in minutes
Terraform is among the most widely used IaC tools, with providers for AWS, GCP, Azure, and hundreds of other platforms.
Installing Terraform
# macOS with Homebrew
brew tap hashicorp/tap
brew install hashicorp/tap/terraform
# Ubuntu/Debian
wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update && sudo apt install terraform
# Verify
terraform --version
HCL: The Terraform Language
Terraform uses HCL (HashiCorp Configuration Language) — a human-readable syntax for describing resources. (A JSON-compatible variant also exists for machine-generated configs.)
A Terraform project is a directory of .tf files. The main concepts:
Providers
Providers are plugins that talk to cloud APIs. You declare which provider you need:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "ap-southeast-1"
}
Resources
Resources are the actual infrastructure you want to create:
resource "aws_s3_bucket" "uploads" {
  bucket = "my-app-uploads-prod"

  tags = {
    Environment = "production"
    Team        = "backend"
  }
}
The format is resource "<resource_type>" "<local_name>". The resource type (e.g. aws_s3_bucket) determines which provider handles it; the local name is how you reference this resource elsewhere in your config.
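For example, a second resource can point at the bucket via that local name — a sketch using the AWS provider's aws_s3_bucket_versioning resource:

```hcl
# References the bucket defined above through its local name, "uploads"
resource "aws_s3_bucket_versioning" "uploads" {
  bucket = aws_s3_bucket.uploads.id

  versioning_configuration {
    status = "Enabled"
  }
}
```

Terraform uses these references to build a dependency graph, so the bucket is always created before the versioning configuration that depends on it.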
Variables
Parameterise your config so you can reuse it across environments:
variable "environment" {
  type    = string
  default = "staging"
}

variable "db_instance_class" {
  type = string
}
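Variables are referenced as var.<name> inside other blocks. A small sketch (the logs bucket here is illustrative, not part of the example project):

```hcl
# Illustrative: interpolate a variable into a resource argument
resource "aws_s3_bucket" "logs" {
  bucket = "my-app-logs-${var.environment}"
}
```

Values can be supplied in a terraform.tfvars file, on the command line (terraform apply -var="environment=production"), or via TF_VAR_environment-style environment variables. A variable without a default must be set one of these ways.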
Outputs
Expose values from your infrastructure (e.g. to use in application config):
output "bucket_name" {
  value = aws_s3_bucket.uploads.bucket
}

output "db_endpoint" {
  value     = aws_db_instance.main.endpoint
  sensitive = true
}
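One way another Terraform configuration can consume these outputs is the built-in terraform_remote_state data source — a sketch, assuming state lives in an S3 backend (the bucket and key values are illustrative):

```hcl
# Sketch: read outputs from another configuration's state.
# The bucket/key values are illustrative — point them at your real backend.
data "terraform_remote_state" "storage" {
  backend = "s3"

  config = {
    bucket = "my-terraform-state"
    key    = "prod/terraform.tfstate"
    region = "ap-southeast-1"
  }
}

# Outputs are then available as:
#   data.terraform_remote_state.storage.outputs.bucket_name
```

For quick inspection, terraform output (or terraform output -raw bucket_name in scripts) prints the same values from the command line.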
A Real Example: S3 Bucket + IAM Policy
resource "aws_s3_bucket" "uploads" {
  bucket = "my-app-uploads-${var.environment}"
}

resource "aws_s3_bucket_public_access_block" "uploads" {
  bucket = aws_s3_bucket.uploads.id

  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_iam_policy" "s3_write" {
  name = "s3-upload-write-${var.environment}"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"]
      Resource = "${aws_s3_bucket.uploads.arn}/*"
    }]
  })
}
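On its own, the policy grants nothing — it has to be attached to a principal. A minimal sketch; the role name here is hypothetical and assumes your application servers already run under an IAM role:

```hcl
# Hypothetical: attach the policy above to an existing IAM role.
# "app-server-${var.environment}" is an assumed name — substitute the
# role your application servers actually use.
resource "aws_iam_role_policy_attachment" "s3_write" {
  role       = "app-server-${var.environment}"
  policy_arn = aws_iam_policy.s3_write.arn
}
```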
The Core Workflow
# 1. Initialise — downloads providers
terraform init
# 2. Plan — shows what will change, no modifications yet
terraform plan
# 3. Apply — creates/modifies/destroys resources
terraform apply
# 4. Destroy — tear everything down (use carefully)
terraform destroy
Always run terraform plan before apply. Review the output carefully — it shows exactly what will be created, changed, or destroyed. In automation, save the plan (terraform plan -out=tfplan) and apply that exact file (terraform apply tfplan) so what you reviewed is precisely what runs.
State
Terraform stores the mapping between your config and real-world resources in a state file (terraform.tfstate). This file is critical — don't delete it, and don't commit it to git, since it can contain secrets in plain text.
For teams, store state remotely so everyone shares the same view:
terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "prod/terraform.tfstate"
    region         = "ap-southeast-1"
    dynamodb_table = "terraform-locks" # prevents concurrent apply
    encrypt        = true
  }
}
Project Structure for Real Projects
infrastructure/
├── main.tf # Core resources
├── variables.tf # Input variables
├── outputs.tf # Output values
├── versions.tf # Provider version constraints
└── terraform.tfvars # Variable values (don't commit secrets)
For larger projects, use modules to group related resources into reusable components.
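A sketch of what a module call looks like — the ./modules/s3-bucket path and its bucket_name output are hypothetical, standing in for a module you would write yourself:

```hcl
# Hypothetical module call; assumes ./modules/s3-bucket contains its own
# main.tf, variables.tf, and outputs.tf defining a bucket_name output.
module "uploads_bucket" {
  source      = "./modules/s3-bucket"
  environment = var.environment
}

# Module outputs are referenced as module.<name>.<output>
output "uploads_bucket_name" {
  value = module.uploads_bucket.bucket_name
}
```

The same module can then be instantiated once per environment with different variable values, which is usually the first big payoff of modularising.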
What to Learn Next
- Terraform documentation — the official docs are excellent
- Terraform workspaces — manage multiple environments with the same config
- Modules — reusable infrastructure components
- Terragrunt — a thin wrapper that helps manage large multi-account setups
Terraform has a learning curve, but once you’ve provisioned your first environment from code, manually clicking through cloud consoles feels painfully fragile by comparison.