Terraform Tutorial
1. Download Terraform from the Terraform website based on the operating system:
https://www.terraform.io/downloads.html
2. Copy the zip file to the server where Terraform needs to be installed.
3. Unzip the file, for example: unzip terraform_*.zip
1. Install Visual Studio Code on the Windows machine and install the Terraform extension in
Visual Studio Code.
2. Create a file named instance.tf in VS Code as shown below.
provider "aws" {
access_key = "Your Key"
secret_key = "Your secret key"
region = "us-east-1"
}
1. Terraform variables were completely reworked for the Terraform 0.12 release.
2. You now have more control over variables, and can use for and for_each loops,
which were not possible with earlier versions.
3. You don't have to specify the type in variables, but it's recommended.
4. Terraform's simple variable types.
• Number
• String
• Bool
variable "a-string"{
type = string
}
variable "this-is-a-number"{
type = number
}
variable "true-or-false"{
type = bool
}
providers.tf
provider "aws"{
access_key = "${var.AWS_ACCESS_KEY}"
secret_key = "${var.AWS_SECRET_KEY}"
region = "${var.AWS_REGION}"
}
vars.tf
variable "AWS_ACCESS_KEY"{}
variable "AWS_SECRET_KEY"{}
variable "AWS_REGION"{
default = "eu-west-1"
}
terraform.tfvars
AWS_ACCESS_KEY = ""
AWS_SECRET_KEY = ""
AWS_REGION = ""
vars.tf
variable "AWS_ACCESS_KEY"{}
variable "AWS_SECRET_KEY"{}
variable "AWS_REGION"{
default = "eu-west-1"
}
variable "AMIS"
type = "map"
default = {
us-east-1 = "ami-13be557e"
us-west-2 = "ami-06b94666"
us-west-1 = "ami-0d729a60"
}
}
instance.tf
File Uploads:
resource "aws_instance" "web" {
ami = "${lookup(var.AMIS, var.AWS_REGION)}"
instance_type = "t2.micro"
provisioner "file"{
source = "app.conf"
destination = "/etc/myapp.conf"
}
}
provisioner "file"{
source = "script.sh"
destination = "/opt/script.sh"
connection{
user = "${var.instance_username}"
password = "${var.instance_password}"
}
}
}
5. When spinning up instances on AWS, ec2-user is the default user for Amazon Linux, and
ubuntu is the default user for Ubuntu.
6. Typically, on AWS, you will use SSH keypairs:
resource "aws_key_pair" "edward-key" {
  key_name   = "mykey"
  public_key = "ssh-rsa my-public-key"
}
provisioner "file"{
source = "script.sh"
destination = "/opt/script.sh"
}
provisioner "remote-exec"{
inline = [
"chmod +x /opt/script.sh",
"/opt/script.sh arguments"
]
}
}
output "ip" {
value = "${aws_instance.example.public_ip}"
}
5. You can refer to any attribute by specifying the following elements in your variables:
• The resource type: aws_instance
• The resource name: example
• The attribute name: public_ip
6. You can also use the attributes in a Script:
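For example, here is a minimal sketch of using an attribute inside a provisioned script; the script path and the database1 instance are illustrative and assume such a resource exists elsewhere in the configuration:
resource "aws_instance" "example" {
  ami           = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"

  provisioner "remote-exec" {
    inline = [
      "chmod +x /opt/script.sh",
      # pass another resource's attribute to the script as an argument
      "/opt/script.sh ${aws_instance.database1.private_ip}"
    ]
  }
}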
9. Local state works well in the beginning, but when your project becomes bigger, you might
want to store your state remotely.
10. The Terraform state can be saved remotely, using the backend functionality in Terraform.
11. The default is a local backend (the local terraform state file).
12. Other backends include, for example, Consul:
terraform {
  backend "consul" {
    address = "demo.consul.io" # hostname of consul cluster
    path    = "terraform/myproject"
  }
}
16. You can also store your state in S3:
terraform {
  backend "s3" {
    bucket = "mybucket"
    key    = "terraform/myproject"
    region = "eu-west-1"
  }
}
17. When using an S3 remote state, it's best to configure the AWS credentials:
$ aws configure
AWS Access Key ID []: AWS-key
AWS Secret Access Key []: AWS_secret_key
Default region name []: eu-west-1
Default output format [None]:
18. Next step: run terraform init.
19. Using a remote store for the terraform state will ensure that you always have the latest
version of the state.
20. It avoids having to commit and push the terraform.tfstate to version control.
21. Terraform remote stores don’t always support locking
• The documentation always mentions if locking is available for a remote
store.
• S3 and consul support it.
22. You can also specify a (read-only) remote store directly in the .tf file.
data "terraform_remote_state" "aws_state" {
backend = "s3"
config {
bucket = "terraform-state"
key = "terraform.tfstate"
access_key = "${var.AWS_ACCESS_KEY}"
secret_key = "${var.AWS_SECRET_KEY}"
region = "${var.AWS_REGION}"
}
}
23. This is only useful as a read-only feed from your remote state file.
24. It's a data source.
25. It is very useful for generating outputs.
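The ingress rule below uses the aws_ip_ranges data source; a minimal declaration might look like the following sketch (the exact region list is an assumption):
data "aws_ip_ranges" "european_ec2" {
  regions  = ["eu-west-1", "eu-central-1"]
  services = ["ec2"]
}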
ingress {
from_port = "443"
to_port = "443"
protocol = "tcp"
cidr_blocks = slice(data.aws_ip_ranges.european_ec2.cidr_blocks, 0, 50)
}
tags = {
CreateDate = data.aws_ip_ranges.european_ec2.create_date
SyncToken = data.aws_ip_ranges.european_ec2.sync_token
}
}
2.9: Templates:
9. Then you create a template_file data source that will read the template file and replace
${myip} with the IP address of an AWS instance created by Terraform.
  vars {
    myip = "${aws_instance.database1.private_ip}"
  }
}
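For completeness, the full data source might look like the sketch below; the template file name is an assumption:
data "template_file" "my-template" {
  template = "${file("templates/init.tpl")}"

  vars {
    myip = "${aws_instance.database1.private_ip}"
  }
}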
10. Then you can use the my-template resource when creating a new instance
# Create a web server
resource "aws_instance" "web" {
# ---
user_data = "${data.template_file.my-template.rendered}"
}
11. When Terraform runs, it will see that it first needs to spin up the database1 instance, then
generate the template, and only then spin up the web instance.
12. The web instance will have the template injected into its user_data, and when it launches,
the user-data will create a file /etc/myapp.config containing the IP address of the database.
13. First you create a template file:
#!/bin/bash
echo "database-ip = ${myip}" >>/etc/myapp.config
2.11: Modules:
module-example/output.tf
output "aws_cluster"
value = "${aws_instance.instance-1.public_ip}, ${aws_instance.instance-2.public_ip},
${aws_instance.instance-3.public_ip}"
8. Use the output from the module in the main part of your code:
output "some-output" {
values = "${module.module-example.aws-cluster"
}
9. I am just using an output here, but you can use module outputs anywhere in the
Terraform code.
Command                      Description
terraform apply              Apply changes to reach the desired state
terraform destroy            Destroy all terraform-managed infrastructure
terraform fmt                Rewrite terraform configuration files to the canonical format
terraform get                Download and update modules
terraform graph              Create a visual representation of a configuration
terraform import ADDR ID     Import existing infrastructure into the terraform state
terraform output             Show the output values of your resources
terraform init               Initialize a terraform working directory
terraform plan               Show the infrastructure changes that would be applied
terraform push               Push changes to Atlas
terraform refresh            Refresh the remote state
terraform remote             Configure remote state storage
terraform show               Show human-readable output from the state
terraform state              Advanced state management
terraform validate           Validate the terraform syntax
terraform taint              Manually mark a resource as tainted
terraform untaint            Undo a taint
Section-3: Packer:
Packer is an open-source DevOps tool made by HashiCorp to create machine images from a single JSON
config file, which helps in keeping track of changes in the long run. The software is cross-platform
and can create multiple images in parallel.
➢ Packer is a command line tool that can build AWS AMIs based on templates.
➢ Instead of installing the software after booting up an instance, you can create an AMI with all the
necessary software already on it.
➢ This can speed up the boot times of instances.
➢ It's a common approach when you run a horizontally scaled app layer or a cluster of something.
1. On Amazon AWS, you have a default VPC (Virtual Private Cloud) created for you by
AWS to launch instances in.
2. Up until now we used this default VPC
3. VPC isolates the instances on a network level
• It's like your own private network in the cloud
4. Best Practice is to always launch your instances in a VPC
• The default VPC
• or one you create yourself (managed by terraform)
5. There is also EC2-Classic, which is basically one big network where all AWS customers
could launch their instances in
6. For smaller to medium setups, one VPC (per region) will be suitable for your needs.
7. An instance launched in one VPC can never communicate with an instance in another
VPC using their private IP addresses.
• They could still communicate, but using their public IPs (not
recommended).
• You could also link two VPCs, which is called peering.
8. On Amazon AWS, you start by creating your own Virtual Private Cloud to deploy
your instances (servers) / databases in.
9. This VPC uses the 10.0.0.0/16 addressing space, allowing you to use the IP addresses that
start with "10.0", like this: 10.0.x.x
10. This VPC covers the eu-west-1 region, which is an Amazon AWS Region in Ireland.
Private Subnets:
Range From To
10.0.0.0/8 10.0.0.0 10.255.255.255
172.16.0.0/12 172.16.0.0 172.31.255.255
192.168.0.0/16 192.168.0.0 192.168.255.255
Subnet Masks:
Range Netmask Total Addresses Examples
10.0.0.0/8 255.0.0.0 16,777,216 10.0.0.1
10.0.0.0/16 255.255.0.0 65,536 10.0.5.1
10.1.0.0/16 255.255.0.0 65,536 10.1.5.1
10.0.0.0/24 255.255.255.0 256 10.0.0.1
10.0.1.0/24 255.255.255.0 256 10.0.1.5
10.0.0.5/32 255.255.255.255 1 10.0.0.5
11. Every availability zone has its own public and private subnet.
12. Instances started in subnet main-public-3 will have IP address 10.0.3.x and will be
launched in the eu-west-1c availability zone (Amazon calls 1 datacenter an availability
zone)
13. An instance launched in main-private-1 will have an IP address 10.0.4.x and will reside
in Amazon's eu-west-1a Availability Zone (AZ)
14. All the public subnets are connected to an Internet Gateway. These instances will also
have a public IP address, allowing them to be reachable from the internet.
15. Instances launched in the private subnets don't get a public IP address, so they will not be
reachable from the internet.
16. Instances from main-public can reach instances from main-private, because they are all in
the same VPC. This is of course only if you set the firewall to allow traffic from one to the other.
17. Typically, you use the public subnets for internet-facing services/applications.
18. Databases, Caching services and Backends all go into the private subnets.
19. If you use a Load Balancer (LB), you will typically put the LB in the public subnets and
the instances serving an application in the private subnets.
4.2: Launching EC2 instance in VPC:
9. The keys/mykeypair.pub file will be uploaded to AWS and will allow an instance to be
launched with this public key installed on it.
10. You never upload your private key! You use your private key to log in to the instance.
1. The t2.micro instance with this particular AMI automatically adds 8 GB of EBS storage.
2. Some instance types have local storage on the instance itself.
• This is called ephemeral storage
• This type of storage is always lost when the instance terminates.
3. The 8 GB EBS root volume storage that comes with the instance is also set to be
automatically removed when the instance is terminated.
• You could still instruct AWS not to do so, but that would be counter-
intuitive (anti-pattern)
4. In most cases the 8 GB for the OS (root block devices) suffices.
5. In our next example I am adding an extra EBS storage volume
• Extra volumes can be used for the log files, any real data that is put on the
instance.
• That data will be persisted until you instruct AWS to remove it.
6. EBS storage can be added using a terraform resource and then attached to our instance (see the sketch after this list).
7. In the previous example we added an extra volume
8. The root volume of 8 GB still exists
9. If you want to increase the size or change the type of the root volume, you can use
root_block_device within the aws_instance resource.
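A minimal sketch of adding and attaching an extra volume; the size, volume type, device name, availability zone, and the aws_instance.example resource name are assumptions:
resource "aws_ebs_volume" "extra-volume" {
  availability_zone = "us-east-1a"
  size              = 20
  type              = "gp2"

  tags = {
    Name = "extra volume data"
  }
}

resource "aws_volume_attachment" "extra-volume-attachment" {
  device_name = "/dev/xvdh"
  volume_id   = "${aws_ebs_volume.extra-volume.id}"
  instance_id = "${aws_instance.example.id}"
}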
4.4: Userdata:
4.8: Autoscaling:
1. In AWS autoscaling groups can be created to automatically add/remove instances when
certain thresholds are reached.
• e.g. your application layer can be scaled out when you have more visitors.
2. To set up autoscaling in AWS you need to set up at least 2 resources:
• An AWS launch configuration
• An autoscaling group
3. Once the autoscaling group is set up, you can create autoscaling policies.
• A policy is triggered based on a threshold (CloudWatch Alarm)
• An adjustment will be executed.
4. First, the launch configuration and the autoscaling group need to be created (see the sketch after this list).
5. To create a policy, you need an aws_autoscaling_policy resource.
6. Then, you can create a CloudWatch alarm which will trigger the autoscaling policy.
7. If you want to receive an alert (e.g. an email) when autoscaling is invoked, you need to create an
SNS topic (Simple Notification Service).
8. That SNS topic needs to be attached to the autoscaling group.
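A minimal sketch of the two required resources; the subnet reference, sizes, and instance type are assumptions:
resource "aws_launch_configuration" "example-launchconfig" {
  name_prefix   = "example-launchconfig"
  image_id      = "${lookup(var.AMIS, var.AWS_REGION)}"
  instance_type = "t2.micro"
}

resource "aws_autoscaling_group" "example-autoscaling" {
  name                 = "example-autoscaling"
  vpc_zone_identifier  = ["${aws_subnet.main-public-1.id}"]
  launch_configuration = "${aws_launch_configuration.example-launchconfig.name}"
  min_size             = 1
  max_size             = 2
  health_check_type    = "EC2"

  tag {
    key                 = "Name"
    value               = "ec2 instance"
    propagate_at_launch = true
  }
}
The autoscaling policy and the CloudWatch alarm from steps 5 and 6 would then reference this autoscaling group.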
1. Now that you have autoscaled instances, you might want to put a load balancer in front of
them.
2. The AWS Elastic Load Balancer (ELB) automatically distributes incoming traffic across
multiple EC2 instances.
• The ELB itself scales when you receive more traffic.
• The ELB will health check your instances.
• If an instance fails its health check, no traffic will be sent to it.
• If a new instance is added by the autoscaling group, the ELB will
automatically add the new instances and will start health checking it.
3. The ELB can also be used as an SSL terminator.
• It can offload the encryption away from the EC2 instances.
• AWS can even manage the SSL certificates for you.
4. ELBs can be spread over multiple Availability Zones for higher fault tolerance.
5. You will in general achieve higher levels of fault tolerance with an ELB routing the
traffic for your application.
6. An ELB is comparable to nginx/haproxy, but provided as a service (see the sketch after this list).
7. AWS provides 2 different types of load balancers:
• The Classic Load Balancer (ELB)
➢ Routes traffic based on network information.
➢ E.g. Forwards all traffic from port 80 (HTTP) to port 8080
(application)
• The Application Load Balancer (ALB)
➢ Routes traffic based on application level information.
➢ E.g. can route /api and /website to different EC2 instances.
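A minimal sketch of a classic ELB forwarding port 80; the subnet and security group references are assumptions:
resource "aws_elb" "my-elb" {
  name            = "my-elb"
  subnets         = ["${aws_subnet.main-public-1.id}", "${aws_subnet.main-public-2.id}"]
  security_groups = ["${aws_security_group.elb-securitygroup.id}"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }

  health_check {
    healthy_threshold   = 2
    unhealthy_threshold = 2
    timeout             = 3
    target              = "HTTP:80/"
    interval            = 30
  }
}
The ELB can then be referenced from the autoscaling group (for example via its load_balancers argument) so new instances register with it automatically.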
1. Starting from terraform 0.12 you can use for and for_each loops.
2. The for-loop features can help you loop over variables, transform them and output them in
different formats.
3. For example:
• [for s in ["this is a", "list"] : upper(s)]
• You can loop over a list [1,2,3,4] or even a map like {"key" = "value"}
• You can transform them by doing a calculation or a string operation.
• Then you can output them as a list or map
4. For loops are typically used when assigning a value to an argument.
5. For Example:
• security_groups = ["sg-12345", "sg-5678"]
➢ This could be replaced by a for loop if you need to transform the input
data.
• Tags = {Name = "resource name"}
➢ This is a map which can be "hardcoded" or which can be the output of
a for loop.
6. For_each loops are not used when assigning a value to an argument, but rather to repeat
nested blocks.
7. The way to loop over data and output multiple nested blocks is with a dynamic block and
for_each (see the sketch after this list).
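A minimal sketch of repeating nested ingress blocks with a dynamic block and for_each; the variable, ports, and security group name are assumptions:
variable "ingress_ports" {
  type    = list(number)
  default = [80, 443]
}

resource "aws_security_group" "allow-web" {
  name = "allow-web"

  dynamic "ingress" {
    for_each = var.ingress_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}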
1. When starting with terraform on Production environments, you quickly realize that you
need a decent project structure.
2. Ideally, you want to separate your development and production environments completely.
• That way, if you always test terraform changes in development first,
mistakes will be caught before they can have a production impact.
• For complete isolation, it's best to create multiple AWS accounts and use
one account for Dev, one for Prod, and a third one for billing.
• Splitting out terraform in multiple projects will also reduce the resources
that you will need to manage during one terraform apply.
1. Starting from terraform 0.14(November 2020), terraform will use a provider dependency
lockfile.
2. The file is created when you run terraform init and is called .terraform.lock.hcl
3. This file tracks the versions of the providers you use (module versions are not tracked in the lock file).
4. The lockfile should be committed to git.
5. When committed to git, re-runs of terraform will use the same provider versions
you used during execution.
6. Terraform also stores checksums of the provider archives in the lock file, so it can verify them on later runs.
7. Terraform will update the lockfile, when you make changes to the provider requirements.
1. The command "terraform state" can be used to manipulate your terraform state file.
Command Description
terraform state list                 List resources in the state
terraform state mv                   Move an item in the state
terraform state pull                 Pull the current state and output it to stdout
terraform state push                 Overwrite the remote state by pushing a local state file
terraform state replace-provider     Replace a provider in the state file
terraform state rm                   Remove an item from the state
terraform state show                 Show an item in the state
2. Here are a few use cases when you will need to modify the state:
• When upgrading between versions, ex: 0.11 -> 0.12
• When you want to rename a resource in terraform without recreating it.
• When you changed a key in a for_each, but you don’t want to recreate the
resources.
• Change the position of a resource in a list (resource[0], resource[1], ...)
• When you want to stop managing a resource, but you don't want to destroy
the resource (terraform state rm)
• When you want to show the attributes in the state of a resource (terraform
state show)
1. Just like packer builds AMIs, you can use docker to build docker images.
2. Those images can then be run on any Linux host with Docker Engine installed.
3. I am using my Vagrant "DevOps Box" (the Ubuntu box).
4. Alternatively, you can download Docker for:
• Windows: https://docs.docker.com/engine/installation/windows/
• MacOS: https://docs.docker.com/engine/installation/mac
• Linux: https://docs.docker.com/engine/installation/linux
5. The demos will be done using Docker Engine installed in the Vagrant DevOps box
(https://github.com/wardviaene/devops-box)
ecr.tf
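The contents of ecr.tf are not shown here; a minimal repository definition might look like this sketch (the repository name value is an assumption):
resource "aws_ecr_repository" "myapp" {
  name = "myapp"
}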
output.tf
output "myapp-repository-URL"{
value = "${aws_ecr_repository,myapp.repository_url}"
}
1. Now that your app is dockerized and uploaded to ECR, you can start the ECS cluster.
2. ECS (EC2 Container Service) will manage your docker containers.
3. You just need to start an autoscaling group with a custom AMI
• The custom AMI contains the ECS agent
4. Once the ECS cluster is online, tasks and services can be started on the cluster.
5. First, the ECS cluster needs to be defined:
resource "aws_ecs_cluster" "example-cluster"{
name = "example-cluster"
}
6. Then, an autoscaling group launches EC2 instances that will join this cluster.
7. An IAM role policy (aws_iam_role_policy.ecs-ec2-role-policy) is attached to those instances so
that the ECS agent can communicate with the ECS cluster.
8. Before the docker app can be launched, a task definition needs to be provided.
9. The task definition describes what docker container to be run on the cluster:
• Specifies Docker image (the Docker image in ECR)
• Max CPU usage, Max memory usage
• Whether containers should be linked
• Environment variables
• And any other container specific definitions
10. A service definition is going to run a specific number of containers based on the task
definition (see the sketch after this list).
11. A service is always running; if the container stops, it will be restarted.
12. A service can be scaled; you can run one instance of a container or multiple.
13. You can put an Elastic Load Balancer in front of a service.
14. You typically run multiple instances of a container, spread over Availability Zones.
• If one container fails, your load balancer stops sending traffic to it.
• Running multiple instances with an ELB / ALB allows you to have HA.
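A minimal sketch of a task definition and a service for the cluster defined above; the image tag, CPU/memory limits, and desired count are assumptions:
resource "aws_ecs_task_definition" "myapp-task-definition" {
  family = "myapp"

  container_definitions = <<EOF
[
  {
    "name": "myapp",
    "image": "${aws_ecr_repository.myapp.repository_url}:latest",
    "essential": true,
    "memory": 256,
    "cpu": 256
  }
]
EOF
}

resource "aws_ecs_service" "myapp-service" {
  name            = "myapp"
  cluster         = aws_ecs_cluster.example-cluster.id
  task_definition = aws_ecs_task_definition.myapp-task-definition.arn
  desired_count   = 2
}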
1. Amazon Elastic Container Service for Kubernetes is a highly available, scalable and
secure Kubernetes Service.
2. It has been generally available since June 2018.
3. Kubernetes is an alternative to AWS ECS
• ECS is AWS-specific, whereas Kubernetes can run on any public cloud
provider or even on-premises.
• They are both great solutions to Orchestrate containers.
4. AWS EKS provides managed Kubernetes master nodes
• There are no master nodes to manage.
• The master nodes are multi-AZ to provide redundancy
• The Master nodes will scale automatically when necessary
• Secure by default: EKS Integrates with IAM.
5. AWS charges money to run an EKS cluster
• For smaller setups, ECS is cheaper
6. Kubernetes is much more popular than ECS, so if you are planning to deploy on more
cloud providers / on-prem, it is a more natural choice.
7. Kubernetes has more features, but it is also much more complicated than ECS for deploying
simpler apps/solutions.
8. ECS has very tight integration with other AWS services, but it is expected that EKS will
also be tightly integrated over time.
3. Select the Terraform Associate and you will get the complete information.
https://www.hashicorp.com/certification/terraform-associate
1. Infrastructure As Code:
• Instead of going through the UI, write code to create resources.
• Code will be stored in git or other VCS
• Audit Log
• Ability to have a review process (PRs)
• Code can be used within a Disaster Recovery process
• Reusability of code
• Possible automation of provisioning
2. Terraform applies IaC using HCL (Hashicorp Configuration Language)
3. Terraform can run an execution plan to show you how the described code differs from
what is actually provisioned.
4. Terraform can resolve dependencies for you, it reads all your *.tf files at once and creates
a "Resource Graph" to know what resource should be created before another resource.
5. You know exactly what terraform will apply, using the plan and apply workflow.
• Terraform will only update resources that need to be changed.
1. For the certification, you need to know about a few CLI commands (besides
init/plan/apply).
2. Commands:
• terraform fmt
• terraform taint
• terraform import
• terraform workspace
• terraform state
3. Terraform starts with a single workspace "default"
4. You can create a new workspace using "terraform workspace new"
$ terraform workspace new mytestworkspace
5. Switching to another workspace (or back to default) can be done with "terraform
workspace select name-of-workspace"
6. Once you are in a new workspace, you will have an "empty" state.
7. Your previous state is still accessible if you select the "default" workspace again.
8. When you run terraform apply in your new workspace you will be able to re-create all the
resources and those resources will be managed by this new state in this new workspace.
9. This can be useful if you, for example, want to test something in your code without
making changes to your existing resources: create a new instance with
encrypted root devices in a new workspace to test whether your new code works, rather
than immediately trying this on your existing resources.
1. In this course, we covered a lot of material on modules, so let's recap what we learned
in this lecture.
2. This is a typical module declaration:
module "consul" {
source = "hashicorp/consul/aws"
version = "0.1.0"
}
3. This will download a specific module version from the terraform registry
4. We can also see that the module is maintained by HashiCorp, because the source starts with
hashicorp/.
5. You don't necessarily need to use the registry; you can also use modules directly from a
local directory, for example:
module "mymodule" {
source = "./mymodule" # refers to a local path
}
module "mymodule" {
source = "./mymodule"
myvalue = "123"
}
module "other-module" {
public_ip = module.mymodule.instance_public_ip
}
12. In ./mymodule/output.tf:
output "instance_public_ip" {
value = aws_instance.myinstance.public_ip
}
13. In a module you can only use the variables that are declared within that module.
14. In the root module (the root project), you can only access parameters that are defined as
output in that module.
15. To pass data from the root module or other modules into a module, you use inputs (see the
sketch after this list).
16. To provide data back to the root module, you use outputs.
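A minimal sketch of the input side, assuming the module layout above; inside the module, the value passed from the root module is declared as a variable:
# ./mymodule/vars.tf
variable "myvalue" {
  type = string
}
Inside the module the value is then available as var.myvalue; outside the module it is only visible through the module's outputs.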
1. When using modules and also providers, you can specify a version constraint
version = ">=1.2.0, <2.0.0"
2. This allows every version greater than or equal to 1.2.0, but it needs to be less than
2.0.0.
3. You can separate conditions with a comma
4. The version numbering should follow semantic versioning (major.minor.patch)
5. The following operators can be used with version conditions:
• = : Exactly one version
• != : Excludes an exact version
• >,>=,<,<=
• ~> : Allows the rightmost version component to increment
• Ex: "~> 1.2.3" will match 1.2.4, 1.2.5 but not 1.3.0
6. Best Practices:
• The Terraform documentation recommends using specific versions for third-
party modules.
• For modules within your organization, you can use a range, for example
"~> 1.2.0" to avoid big changes when you bump to 1.3.0.
• Within modules, you should supply a minimum terraform core version to
ensure compatibility.
• For providers you can use the ~> constraint to set lower and upper bounds
(see the sketch after this list).
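A minimal sketch of pinning the core version and a provider version inside a module; the version numbers are illustrative:
terraform {
  required_version = ">= 0.13"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}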
1. The default backend in terraform is the local backend; it requires no configuration.
2. A terraform.tfstate file will be written to your project folder.
• This is where the state is stored.
• Every time you run terraform apply, the state will be changed and the file
will be updated.
3. Once you start working in a team, you are going to want to use a remote backend.
4. Working with a remote state has benefits:
• You can easily work in a team, as the state is separate from the code
(the alternative would be committing the state to version control, which is
far from ideal if you need to work in a team).
• A remote backend can keep sensitive information off disk.
• S3 supports encryption at rest, authentication and authorization, which
protects your state file much more than having it on your disk / in version
control.
• Remote operations: terraform apply can run for a long time in bigger
projects. Backends like the "remote" backend support remote operations
that are executed fully remotely, so the whole operation runs
asynchronously. You don't need to stay connected / keep your laptop
running during the terraform apply.
5. State locking ensures nobody can write to the state at the same time.
6. Sometimes, when terraform crashes or a user's internet connection breaks during
terraform apply, the lock will stay.
7. "Terraform force-unlock <id>" can be used to force unlock the state, in case there is a
lock, but nobody is running terraform apply.
• This command will not touch the state, it will just remove the lock file, so
it's safe, as long as nobody is really still doing an apply.
8. There is also an option -lock=false that can be passed to terraform apply, which will not
use the lock file. This is discouraged and should only be used when your locking
mechanism is not working.
9. Supported standard backends:
• Artifactory (artifact storage software)
• Azurerm (azure)
• Consul (Hashicorp key value store)
• Cos (Tencent cloud)
• Etcd, Etcdv3 (similar to consul)
• Gcs (Google Cloud)
• http
• Kubernetes
• Manta (also object storage)
• OSS (Alibaba cloud storage)
• pg (PostgreSQL)
• S3
• Swift (Openstack blob storage)
10. Every backend will also have a specific authentication method
11. The configuration is done within the terraform {} block:
terraform {
backend "azurerm" {
}
}
terraform {
backend "s3" {
}
}
12. You can have a partial backend configuration, where you leave out some of the
information.
13. This can be useful if you would like to use different backends when executing the code
14. This is often scripted with shell scripts that call terraform with the correct arguments,
to avoid having to do this manually every time.
15. Most commonly this is used to avoid having to hardcode secrets in the terraform files,
which would end up in version control.
16. There are 3 ways to pass this backend information:
• Interactively, when the information is missing, terraform init will ask for it
• A file
• Key / Value pairs
$ terraform init -backend-config=path-to-file
$ terraform init -backend-config="bucket=mybucket" -backend-config="otherkey=othervalue"
17. If at some point you'd like to update your state file to reflect the "actual" state of your
infrastructure, but you don't want to run terraform apply, you can run "terraform
refresh"
18. Terraform refresh will look at your infrastructure that has been applied and will update
your state file to reflect any changes
19. It will not modify your infrastructure; it will only update your state file.
20. This is often useful if you have outputs that need to be refreshed or something changed
outside terraform and you need to make terraform aware of it without having to run an
apply.
21. You need to be aware that secrets can be stored in your state file.
• For example, when you create a database, the initial database password
will be in the state file.
22. If you use a remote state, the state will not be stored locally on disk.
• As a result, storing state remotely can increase security.
23. Make sure your remote state backend is protected sufficiently.
• For example, for S3, make sure only terraform administrators have access
to the bucket and enable encryption at rest. Also make sure that for every
backend TLS is used when communicating with the backend.