Terraform Notes PPT 25th August 2024 - KPLABS


HashiCorp Certified Terraform Associate 2024

PPT Version
PPT Release Date = 25th August 2024

We regularly release new versions of the PPT when we update this course.

Please check regularly that you are using the latest version.

The Latest Version Details are mentioned in the PPT Lecture in Section 1.
Understanding the Need
My personal journey started with implementing “AWS Hardening” guidelines.

There were 100+ pages of guidelines, and it used to take 2-3 days to implement
in 1 account.

Nowadays, it is more than 250+ pages.


Challenge that Terraform Solves
Terraform allows us to create reusable code that can deploy an identical set of infrastructure in a repeatable fashion.

[Diagram: a single HCL configuration containing Hardening Rules 1-100, which Terraform deploys identically to AWS Account 1 through AWS Account 99.]


Amazing Terraform
One of the great benefits of Terraform is that it supports thousands of providers.

Once you learn Terraform Core concepts, you can write code to create and
manage infrastructure across all the providers.

Overview of Terraform Certification
Terraform has become one of the most popular and widely used tools to create and manage infrastructure, and one of the de facto IaC tools for DevOps.

HashiCorp has released the official Terraform certification to certify candidates on core Terraform concepts and skills.
What Does this Course Cover?
We start this Terraform course from absolute scratch and then move ahead with advanced topics.

We cover ALL the topics of the official exam.


About Me
● DevSecOps Engineer - Defensive Security.
● Teaching is one of my passions.
● I have a total of 16 courses, and around 350,000+ students now.

Something about me :-

● HashiCorp Certified [Terraform, Vault, Consul] Associate.


● AWS Certified [DevOps Pro, SA Pro, Advanced Networking, Security Specialty …]
● RedHat Certified Architect (RHCA) + 13 more Certifications
● Part time Security Consultant
Join us in our Adventure

kplabs.in/chat

Be Awesome

kplabs.in/linkedin
About the Course
Understanding the Basics
This is a certification-specific course, and we cover all the pointers that are part of the official exam blueprint.
Point to Note

The arrangement of topics in this course is a little different from the exam blueprint to ensure the course remains beginner friendly and topics are covered in a step-by-step manner.
Course Resource - GitHub
All the code that we use during practicals has been added to our GitHub page.
Course Resource - PPT Slides
ALL the slides that we use in this course are available to download as a PDF.

The PDF is attached as part of the lecture titled “Central PPT Notes”.
Our Community (Optional)
We also have a Discord community that allows all the individuals who are
preparing for the same certification to connect with each other for discussions as
well as technical support.

https://kplabs.in/chat
Important Note - Platform for This Course
Terraform supports hundreds of platforms like AWS, Azure, GCP, etc.

To learn Terraform concepts, we have to choose one platform for our testing.

For this course we have chosen AWS.


Clarification Regarding AWS Platform
The aim of this course is to learn the core concepts of Terraform, not AWS.

We use very basic AWS services like virtual machines and AWS users to demonstrate and learn the core Terraform concepts.

The Terraform structure and concepts remain the SAME irrespective of the platform.

We have hundreds of users from different platforms like Azure who have completed this course and are actively implementing Terraform for those platforms.
Infrastructure as Code (IAC)
Understanding the Basics
There are two ways in which you can create and manage your infrastructure:

● Manual approach

● Through automation
Work Requirement: Database Backup
I was assigned a task to take database backup every day at 10 PM and the
backup had to be stored in Amazon S3 Storage with appropriate timestamp.

● db-backup-01-01-2024.sql
● db-backup-02-01-2024.sql

Initially, due to lack of time, I used to manually take the DB backup at 10 PM and upload it to S3.

[Diagram: initiate the backup on the database, then upload the backup to Amazon S3.]
Learning from this Work Requirement
If a particular task has to be done in a repeatable manner, it MUST be automated.

Points to Note:

1. Depending on the type of task, the tools for automation will change.

2. There is a wide variety of tools & technologies used for automation, like Ansible, CloudFormation, Terraform, Python, etc.
Example of a Single Service
A set of resources (Virtual Machine, Database, S3, AWS Users) must be created with the exact same configuration in the Dev, Stage, and Production environments.

[Diagram: identical resources created manually in the Development, Staging, and Production environments.]


Example of a Single Service - Automated Way

[Diagram: an IaC tool deploying the same resources to the Development, Staging, and Production environments.]


Basics of Infrastructure as Code
Infrastructure as Code (IaC) is the managing and provisioning of infrastructure
through code instead of through manual processes.
Benefits of Infrastructure As Code

There are several benefits of designing your infrastructure as code:

● Speed of Infrastructure Management.

● Low Risk of Human Errors.

● Version Control.

● Easy collaboration between Teams.


Choosing Right IAC Tool
Available Tools
There are various types of tools that allow you to deploy infrastructure as code:

● Terraform
● CloudFormation
● Heat
● Ansible
● SaltStack
● Chef, Puppet and others
Categories of Tools
The tools are broadly divided into two major categories:

Infrastructure as Code
● Infrastructure Orchestration - Terraform, CloudFormation
● Configuration Management - Ansible, Chef


Configuration Management
Configuration management tools are primarily used to maintain the desired configuration of systems (inside the servers).

Example: ALL servers should have Antivirus installed with version 10.0.2

[Diagram: Ansible installing the antivirus across the server fleet.]
Infrastructure Orchestration
Infrastructure Orchestration is primarily used to create and manage
infrastructure environments.

Example: Create 3 servers with 4 GB RAM and 2 vCPUs. Each server should have a firewall rule to allow SSH connections from office IPs.

[Diagram: Terraform creating the infrastructure fleet.]
IAC & Configuration Management = Friends

[Diagram: Terraform reads first_server.tf and deploys a new EC2 instance in AWS; once the EC2 instance is running, Ansible installs and configures the application on it.]
How to choose IAC Tool?
i) Is your infrastructure going to be vendor-specific in the longer term? Example: AWS.

ii) Are you planning to have a multi-cloud / hybrid cloud based infrastructure?

iii) How well does it integrate with configuration management tools?

iv) Price and support.


Use-Case 1 - Requirement of Organization 1
1. The organization is going to be based on AWS for the next 25 years.

2. Official support is required in case the team faces any issue related to the IaC tool or the code itself.

3. They want some kind of GUI interface that supports automatic code generation.
Use-Case 2 - Requirement of Organization 2

1. The organization is based on a hybrid solution. They use VMware for the on-premise setup, and AWS, Azure, and GCP for the cloud.

2. Official support is required in case the IaC tool has any issues.


Installing Terraform
Terraform in detail
Overview of Installation Process

Terraform installation is very simple.

Terraform ships as a single binary file - you just download it and use it.

Supported Platforms

Terraform works on multiple platforms; these include:

● Windows
● macOS
● Linux
● FreeBSD
● OpenBSD
● Solaris

Terraform Installation - Mac & Linux

There are two primary steps required to install Terraform on Mac and Linux:

1) Download the Terraform binary file.

2) Move it to the right path (a directory that is in your PATH).
Choosing IDE For Terraform
Terraform in detail
Terraform Code in NotePad!
You can write Terraform code in Notepad, and it will work just the same - the editor has no impact on the code itself.

Downsides:

● Slower Development
● Limited Features

Need of a Better Software
There is a need for a better application that allows us to develop code faster.
What are the Options!
There are many popular source code editors available in the market.

Editor for This Course
We are going to make use of Visual Studio Code as the primary editor in this course.

Advantages:
1. Supports Windows, Mac, Linux.
2. Supports a wide variety of programming languages.
3. Many extensions.

Visual Studio Code Extensions
Understanding the Basics
Extensions are add-ons that allow you to customize and enhance your experience in Visual Studio Code by adding new features or integrating existing tools.

They offer a wide range of functionality related to colors, auto-completion, spell-checking, etc.
Terraform Extension
HashiCorp also provides a Terraform extension for Visual Studio Code.
Setting up the Lab
Let’s start Rolling !
Let’s Start

i) Create a new AWS Account.

ii) Begin the course

Registering an AWS Account

Authentication and Authorization
Understanding the Basics
Before we start working on managing environments through Terraform, the first
important step is related to Authentication and Authorization.

[Diagram: Terraform asks AWS Cloud to create a new server; AWS Cloud responds "Dude, authenticate first".]

Basics of Authentication and Authorization
Authentication is the process of verifying who a user is.

Authorization is the process of verifying what they have access to

Example:

Alice is a user in AWS with no access to any service.


Learning for Today's Video
Terraform needs access credentials with relevant permissions to create and
manage the environments.

[Diagram: Terraform authenticates to the provider with credentials (username: Bob, password: pwd928#), requests a new server, and gets back "Done".]
Access Credentials
Depending on the provider, the type of access credentials would change.

Provider - Access Credentials

AWS - Access Keys and Secret Keys
GitHub - Tokens
Kubernetes - Kubeconfig file, credentials config
DigitalOcean - Tokens


First Virtual Machine Through Terraform
Revising the Basics of EC2
EC2 stands for Elastic Compute Cloud.

In-short, it's a name for a virtual server that you launch in AWS.

Available Regions
Cloud providers offer multiple regions in which we can create our resources.

You need to decide the region in which Terraform would create the resource.
Virtual Machine Configuration
A Virtual Machine would have its own set of configurations.

● CPU
● Memory
● Storage
● Operating System

While creating VM through Terraform, you will need to define these.


Providers and Resources
Basics of Providers
Terraform supports multiple providers.

Depending on what type of infrastructure we want to launch, we have to use the appropriate provider accordingly.
Learning 1 - Provider Plugins
A provider is a plugin that lets Terraform manage an external API.

When we run terraform init, plugins required for the provider are automatically
downloaded and saved locally to a .terraform directory.
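As a minimal sketch (the region value is illustrative), a provider block for AWS might look like this; running terraform init against it downloads the AWS provider plugin into the .terraform directory:

provider "aws" {
  region = "us-east-1"   # illustrative region; pick the region you want to work in
}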
Learning 2 - Resource
A resource block describes one or more infrastructure objects.

Example:

● resource aws_instance
● resource aws_alb
● resource iam_user
● resource digitalocean_droplet
Learning 3 - Resource Blocks
A resource block declares a resource of a given type ("aws_instance") with a
given local name ("myec2").

The resource type and name together serve as an identifier for a given resource and so must be unique.

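A minimal sketch of such a resource block, using placeholder values for the AMI ID and instance type:

resource "aws_instance" "myec2" {
  ami           = "ami-0abcd1234example"   # placeholder AMI ID
  instance_type = "t2.micro"               # placeholder instance type
}

Here aws_instance is the resource type and myec2 is the local name.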


Point to Note
You can only use the resources that are supported by a specific provider.

In the example below, the Azure provider is used with an aws_instance resource, which will not work.


Important Question
The core concepts and standard syntax remain similar across all providers.

If you learn the basics, you should be able to work with all providers easily.
Issues and Bugs with Providers
Even a provider that is maintained by HashiCorp can still have bugs.

It can happen that there are inconsistencies between your output and what is mentioned in the documentation. You can raise an issue on the provider's page.
Relax and Have a Meme Before Proceeding

Provider Tiers
Provider Maintainers
There are 3 primary provider tiers in Terraform.

Provider Tier - Description

Official - Owned and maintained by HashiCorp.
Partner - Owned and maintained by a technology company that maintains a direct partnership with HashiCorp.
Community - Owned and maintained by individual contributors.


Provider Namespace
Namespaces are used to help users identify the organization or publisher
responsible for the integration

Tier - Namespace

Official - hashicorp
Partner - Third-party organization, e.g. mongodb/mongodbatlas
Community - Maintainer's individual or organization account, e.g. DeviaVir/gsuite
Important Learning
Terraform requires explicit source information for any providers that are not HashiCorp-maintained, using a new syntax in the required_providers nested block inside the terraform configuration block.

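A minimal sketch of that syntax for a non-HashiCorp-maintained provider (the DigitalOcean provider and the version constraint are illustrative):

terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.0"   # illustrative version constraint
    }
  }
}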
Terraform Destroy
Learning to Destroy Resources
If you keep the infrastructure running, you will get charged for it.

Hence, it is important for us to also know how we can delete the infrastructure resources created via Terraform.

Approach 1 - Destroy ALL
terraform destroy allows us to destroy all the resources that are created within the folder.

terraform destroy

Approach 2 - Destroy Some
terraform destroy with the -target flag allows us to destroy a specific resource.

terraform destroy -target aws_instance.myec2

Terraform Destroy with Target
The -target option can be used to focus Terraform's attention on only a subset of
resources.

Combination of: Resource Type + Local Resource Name

Resource Type - Local Resource Name
aws_instance - myec2
github_repository - example
Desired & Current State
Terraform in detail
Desired State
Terraform's primary function is to create, modify, and destroy infrastructure resources to
match the desired state described in a Terraform configuration

Example: the configuration declares an EC2 instance of type t2.micro (the desired state).
Current State
Current state is the actual state of a resource that is currently deployed.

Example: the instance has actually been changed to t2.medium (the current state).
Important Pointer

Terraform tries to ensure that the deployed infrastructure is based on the desired state.

If there is a difference between the two, terraform plan presents a description of the
changes necessary to achieve the desired state.

Provider Versioning
Terraform in detail
Provider Architecture

[Diagram: do_droplet.tf is read by Terraform, which uses the DigitalOcean provider plugin to make API calls to DigitalOcean and provision the new server.]
Overview of Provider Versioning
Provider plugins are released separately from Terraform itself.

They have their own set of version numbers.

Explicitly Setting Provider Version
During terraform init, if the version argument is not specified, the most recent provider version will be downloaded during initialization.

For production use, you should constrain the acceptable provider versions via configuration, to
ensure that new versions with breaking changes will not be automatically installed.
Arguments for Specifying Provider Version
There are multiple ways for specifying the version of a provider.

Version Constraint - Description

>= 1.0 - Greater than or equal to the version
<= 1.0 - Less than or equal to the version
~> 2.0 - Any version in the 2.x range
>= 2.10, <= 2.30 - Any version between 2.10 and 2.30

Dependency Lock File
Terraform dependency lock file allows us to lock to a specific version of the provider.

If a particular provider already has a selection recorded in the lock file, Terraform will always
re-select that version for installation, even if a newer version has become available.

You can override that behavior by adding the -upgrade option when you run terraform init.
Terraform Refresh
Understanding the Challenge
Terraform can create infrastructure based on the configuration you specified.

It can happen that the infrastructure gets modified manually.

[Diagram: the real EC2 instance is t2.micro, and the state file records type: t2.micro, storage: 20, sg: default.]
Introducing Terraform Refresh
The terraform refresh command will check the latest state of your infrastructure
and update the state file accordingly.

[Diagram: terraform refresh scans the real infrastructure, finds the instance is now t2.large, and updates the state file to type: t2.large, storage: 20, sg: default.]
Points to Note

You shouldn't typically need to use this command, because Terraform automatically performs the same refreshing actions as part of creating a plan in both the terraform plan and terraform apply commands.
Understanding the Usage

The terraform refresh command is deprecated in newer versions of Terraform.

The -refresh-only option for terraform plan and terraform apply was introduced in
Terraform v0.15.4.
AWS Provider - Authentication Configuration
Understanding the Basics
At this stage, we have been manually hardcoding the access / secret keys within
the provider block.

Although it is a working solution, it is not optimal from a security point of view.
Better Way
We want our code to run successfully without hardcoding the secrets in the
provider block.
Better Approach
The AWS Provider can source credentials and other settings from the shared
configuration and credentials files.
Default Configurations
If the shared file locations are not added to the provider block, by default Terraform will locate these files at $HOME/.aws/config and $HOME/.aws/credentials on Linux and macOS, and at "%USERPROFILE%\.aws\config" and "%USERPROFILE%\.aws\credentials" on Windows.
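A minimal sketch of a provider block that points to the shared files explicitly (paths and profile name are illustrative; if they match the defaults above, these lines can be omitted entirely):

provider "aws" {
  region                   = "us-east-1"
  shared_config_files      = ["~/.aws/config"]
  shared_credentials_files = ["~/.aws/credentials"]
  profile                  = "default"
}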
AWS CLI
The AWS CLI allows customers to manage AWS resources directly from the CLI.

When you configure access/secret keys in the AWS CLI, they are stored in the same default location from which Terraform reads credentials.



Lecture Format - Terraform Course
Terraform in detail
Overview of the Format

We tend to use a different folder for each practical that we do in the course.

This allows us to be more systematic and makes it easier to revisit a practical if required.

Lecture Name - Folder Name

Create First EC2 Instance - folder1
Tainting Resource - folder2
Conditional Expression - folder3
Find the appropriate code from GitHub

Code in GitHub is arranged according to sections that are matched to the domains in the course.

Every section in GitHub has an easy-to-follow README file for quick navigation.
Destroy Resource After Practical

We know how to destroy resources by now

terraform destroy

After you have completed your practical, make sure you destroy the resource before moving to
the next practical.

This is easier if you are maintaining a separate folder for each practical.
Relax and Have a Meme Before Proceeding

Learning Scope - AWS Services for Terraform Course
Understanding the Basics
AWS has more than 200 services available.
Aim of the Course
The primary aim of this course is to master the core concepts of Terraform.

Terraform = Infrastructure as Code Tool.

To learn Terraform, we need to create infrastructure somewhere.


Services that we Choose
Throughout the course, we use very basic AWS services to demonstrate
Terraform concepts.

● Virtual Machine (EC2)

● Firewall (Security Groups)

● AWS Users (IAM Users)

● IP Address (Elastic IP)


Basics of These Services are Covered
We have 100,000+ students from different backgrounds who are learning Terraform.

Some are AWS pros, some are from Azure/GCP, and some are students.

To align everyone on the same page, we also cover the basics of the AWS services that we use throughout the course.
Example - Creating Firewall Through Terraform

1. Basics of Firewalls in AWS

2. Firewall Practical in AWS (GUI Console)

3. Creating Firewall Rules Through Terraform.


Basics of Firewalls
Basics of Ports
A port acts as an endpoint of communication to identify a given application or process on a Linux operating system.

[Diagram: on server 1.2.3.4, port 22 is opened for SSH and port 80 for HTTPD, both reachable by internet users.]
Basics of Firewall
A firewall is a network security system that monitors and controls incoming and outgoing network traffic based on predetermined security rules.

[Diagram: a user from the internet connects to the server; the firewall denies connections to port 22 (SSH) and allows connections to port 80 (HTTPD).]
Firewall in AWS
A security group acts as a virtual firewall for your instance to control inbound and
outbound traffic.

[Diagram: the security group on the EC2 instance denies connections to port 22 and allows connections to port 80 from a user on the internet.]
Sample Security Group with Rules
Inbound and Outbound Rules
Firewalls control both inbound and outbound connections to and from the server.

[Diagram: EC2 instance - Inbound: allow 80 from 0.0.0.0/0; Outbound: allow 3306 to 172.31.10.50.]


Creating Firewall Rules with Terraform
Architecture of Today’s Video
We will create a firewall (security group) named terraform-firewall in AWS with the following configuration:

Inbound - Allow 80 from 0.0.0.0/0
Outbound - Allow ALL


Reference - Final Code in Video
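The final code is shown in the video; as a rough sketch along the same lines (resource names are illustrative, and the newer aws_vpc_security_group_*_rule resources of the AWS provider are assumed):

resource "aws_security_group" "terraform_firewall" {
  name        = "terraform-firewall"
  description = "Managed by Terraform"
}

# Inbound: allow port 80 from everywhere
resource "aws_vpc_security_group_ingress_rule" "allow_http" {
  security_group_id = aws_security_group.terraform_firewall.id
  cidr_ipv4         = "0.0.0.0/0"
  from_port         = 80
  to_port           = 80
  ip_protocol       = "tcp"
}

# Outbound: allow ALL traffic
resource "aws_vpc_security_group_egress_rule" "allow_all" {
  security_group_id = aws_security_group.terraform_firewall.id
  cidr_ipv4         = "0.0.0.0/0"
  ip_protocol       = "-1"   # all protocols
}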
Dealing with Documentation Code Updates - Terraform
Understanding the Challenge
Occasionally, in newer versions of providers, you will see some changes in the way you create a resource.



Points to Note

Just because a better approach is recommended does NOT always mean that the older approach will stop working.

Organizations can continue to use the approach that best suits their environment.
Switching to Older Provider Doc
You can always switch to an older version of the provider documentation page to understand the changes.
Closing Pointers

For larger enterprises, it becomes difficult to upgrade their code base to the newer approach that the provider recommends.

In such cases, they stick with the provider version that supports the older approach of creating the resource.
Create Elastic IP with Terraform
Basics of Elastic IP in AWS
An Elastic IP address is a static IPv4 address in AWS.

You can create one and associate it with an EC2 instance.

Aim of Today’s Video

We are going to use Terraform to create an Elastic IP resource in AWS.
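A minimal sketch of the resource (assuming AWS provider v5, where the domain argument replaces the older vpc flag):

resource "aws_eip" "lb" {
  domain = "vpc"
}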


Attributes
Basics of Attributes
Each resource has its associated set of attributes.

Attributes are the fields in a resource that hold the values that end up in state.

Attribute - Value

id - i-abcd
public_ip - 52.74.32.50
private_ip - 172.31.10.50
private_dns - ip-172-31-10-50.ec2.internal
Points to Note
Each resource type has a predefined set of attributes determined by the
provider.
Cross-Resource Attribute References
Typical Challenge
It can happen that in a single Terraform file, you are defining two different resources.

However, Resource 2 might be dependent on some value of Resource 1.

[Diagram: a firewall rule must allow 443 from the Elastic IP address created by another resource.]
Understanding The Workflow

[Diagram: the Elastic IP resource is assigned 52.72.30.50, and the firewall rule must allow 443 from 52.72.30.50.]
Analyzing the Attributes of EIP
We have to find which attribute stores the Public IP associated with EIP
Resource.
Referencing Attribute in Other Resource
We have to find a way in which the attribute value of "public_ip" is referenced in the cidr_ipv4 argument of the security group rule resource.

Elastic IP attribute: public_ip = 52.72.52.72
Cross Referencing Resource Attribute
Terraform allows us to reference the attribute of one resource to be used in a
different resource.

Overall syntax:

<RESOURCE TYPE>.<NAME>.<ATTRIBUTE>
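As a sketch tying the earlier illustrative resources together, the public_ip attribute of the Elastic IP can be referenced inside the security group rule (the /32 suffix turns the single IP into a CIDR block):

resource "aws_eip" "lb" {
  domain = "vpc"
}

resource "aws_vpc_security_group_ingress_rule" "allow_https_from_eip" {
  security_group_id = aws_security_group.terraform_firewall.id   # assumes the security group sketched earlier
  cidr_ipv4         = "${aws_eip.lb.public_ip}/32"
  from_port         = 443
  to_port           = 443
  ip_protocol       = "tcp"
}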
Cross Referencing Resource Attribute
We can specify the resource address with attribute for cross-referencing.

String Interpolation in Terraform
${...}: This syntax indicates that Terraform will replace the expression inside the curly braces with its calculated value.
Joke Time

Why did the Terraform attribute take a break?

...It was feeling over-referenced.

How did the Terraform attribute become a detective?

...It followed the resource trail.


Output Values
Understanding the Basics
Output values make information about your infrastructure available on the
command line, and can expose information for other Terraform configurations to
use.

[Diagram: the user asks Terraform to create an EC2 instance and give back its public IP; Terraform creates the EC2 instance, fetches its info, and returns "IP of EC2: 172.32.10.50".]


Sample Example
Use-Case:

Create an Elastic IP (public IP) resource in AWS and output the value of the EIP.
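A minimal sketch of this use-case (the aws_eip.lb resource is the illustrative one from earlier):

resource "aws_eip" "lb" {
  domain = "vpc"
}

output "eip_public_ip" {
  description = "Public IP of the Elastic IP resource"
  value       = aws_eip.lb.public_ip
}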
Point to Note
Output values defined in Project A can be referenced from code in Project B as
well.

[Diagram: Project A exposes the output value ip = 54.146.20.28, which the Terraform code in Project B fetches.]
Terraform Variables
Understanding the Challenge
Repeated static values in the code can create more work in the future.

Example: VPN IP needs to be whitelisted for 5 ports through Firewall Rules.

Port Number - CIDR Block - Description

80 - 101.0.62.210/32 - VPN IP Whitelist
443 - 101.0.62.210/32 - VPN IP Whitelist
22 - 101.0.62.210/32 - VPN IP Whitelist
21 - 101.0.62.210/32 - VPN IP Whitelist
8080 - 101.0.62.210/32 - VPN IP Whitelist


Better Approach
A better solution would be to define the repeated static value in one central place.

Central location:
vpn_ip - 101.0.62.210/32
Basics of Variables
Terraform input variables are used to pass certain values from outside of the
configuration

Variable file:
vpn_ip - 101.0.62.210/32
app_port - 8080
Benefits of Variables
1. Update important values in one central place instead of searching and
replacing them throughout your code, saving time and potential mistakes.

2. No need to touch the core Terraform configuration file. This can avoid
human mistakes while editing.
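A minimal sketch of the VPN IP use-case with a variable (the variable name, value, and rule are illustrative):

variable "vpn_ip" {
  default = "101.0.62.210/32"
}

resource "aws_vpc_security_group_ingress_rule" "ssh_from_vpn" {
  security_group_id = aws_security_group.terraform_firewall.id   # assumes the security group sketched earlier
  cidr_ipv4         = var.vpn_ip   # the central value is referenced wherever it is needed
  from_port         = 22
  to_port           = 22
  ip_protocol       = "tcp"
}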
Variable Definitions File (TFVars)
Understanding the Base

Managing variables in a production environment is one of the most important aspects of keeping code clean and reusable.

HashiCorp recommends creating a separate file with the name *.tfvars to define all variable values in a project.
What the Recommended Folder Structure Looks Like
1. Main Terraform Configuration File.

2. variables.tf file that defines all the variables.

3. terraform.tfvars file that defines value to all the variables.


Configuration for Different Environments
Organizations can have a wide set of environments: Dev, Stage, Prod.

[Diagram: the same main configuration file and variables.tf are used with a different tfvars file for Dev and for Prod.]


Selecting tfvars File
If you have multiple variable definition (*.tfvars) files, you can explicitly specify which file to use on the command line.
Point to Note
If the file name is terraform.tfvars → Terraform will automatically load values from it.

If the file name is different, like prod.tfvars → you have to explicitly specify the file during the plan / apply operation.
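For example:

terraform plan -var-file="prod.tfvars"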
Approach to Variable Assignment
Understanding the Base
By default, whenever you define a variable, you must also set a value
associated with it.


Add a Value in CLI
If you have not defined a value for a variable, Terraform will ask you to input the
value in CLI Prompt when you run terraform plan / apply operation.
Declaring Variable Values
When variables are declared in your configuration, they can be set in a number
of ways:

1. Variable Defaults.

2. Variable Definition File (*.tfvars)

3. Environment Variables

4. Setting Variables in the Command Line.


Variable Defaults
You can set a default value for a variable.

If there is no value supplied, the default value will be taken.
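A minimal sketch (the variable name and default are illustrative):

variable "instance_type" {
  type    = string
  default = "t2.micro"   # used when no other value is supplied
}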


Variable Definition File (*.tfvars)

Variable Values can be defined in *.tfvars file.


Setting Variable in Command Line
To specify individual variables on the command line, use the -var option when
running the terraform plan and terraform apply commands:
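For example (assuming a variable named instance_type is declared):

terraform plan -var="instance_type=t2.small"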
Setting Variable through Environment Variables
Terraform searches the environment of its own process for environment
variables named TF_VAR_ followed by the name of a declared variable.
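For example, on a Linux/macOS shell (again assuming a variable named instance_type is declared):

export TF_VAR_instance_type=t2.large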
Variable Definition Precedence
Understanding the Base
Values for a variable can be defined at multiple different places.

What if values for a variable are different?


Simple Example
variable “instance_type” {}

1. Default Value is t2.micro

2. Terraform.tfvars value is “t2.small”

3. Environment Variable TF_VAR_instance_type = “t2.large”

Which value will Terraform take?


Variable Definition Precedence
Terraform loads variables in the following order, with later sources taking
precedence over earlier ones:

1. Environment variables
2. The terraform.tfvars file, if present.
3. The terraform.tfvars.json file, if present.
4. Any *.auto.tfvars or *.auto.tfvars.json files, processed in lexical order of their filenames.
5. Any -var and -var-file options on the command line.


Example 1

ENV Variable of TF_VAR_instance_type = “t2.micro”

Value in terraform.tfvars = “t2.large”

Final Result = “t2.large”


Example 2

1. ENV Variable of TF_VAR_instance_type = “t2.micro”

2. Value in terraform.tfvars = “t2.large”

3. terraform plan -var="instance_type=m5.large"

Final Result = “m5.large”


Data Types
Setting the Base
Data type refers to the type of value.

Depending on the requirement, you can use a wide variety of values in the Terraform configuration.

Example Value - Data Type

"Hello World" - String
7575 - Number
Restricting Variable Value to Data Type
We can restrict the value of a variable to a data type.

Example:

Only numbers should be allowed in AWS Usernames.


Data Types in Terraform
Data Type - Description

string - A sequence of Unicode characters representing some text, like "hello".
number - A numeric value.
bool - A boolean value, either true or false.
list - A sequence of values, like ["us-west-1a", "us-west-1c"].
set - A collection of unique values that do not have any secondary identifiers or ordering.
map - A group of values identified by named labels, like {name = "Mabel", age = 52}.
null - A value that represents absence or omission.


Data Type - List
List Data Type
Allows us to store a collection of values for a single variable / argument.

Represented by a pair of square brackets containing a comma-separated sequence of values, like ["a", 15, true].

Useful when multiple values need to be added for a specific argument.

Data Type and Documentation
Arguments for a resource require specific data types.

Some arguments require a list, some require a map, and so on.

The data type expected for an argument is mentioned in the documentation.
Use-Case 1: List Data Type

EC2 instance can have one or more security groups.

Requirement:

Create EC2 instance with 2 security groups attached.


Specify the Type of Values in List

We can also specify the type of values expected in a list.
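A minimal sketch of this use-case (the security group IDs and AMI are placeholder values):

variable "sg_ids" {
  type    = list(string)
  default = ["sg-0123456789abcdef0", "sg-0fedcba9876543210"]   # placeholder security group IDs
}

resource "aws_instance" "myec2" {
  ami                    = "ami-0abcd1234example"   # placeholder AMI ID
  instance_type          = "t2.micro"
  vpc_security_group_ids = var.sg_ids   # this argument expects a list of strings
}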


Map - Data Type
Map Data Type
A map data type represents a collection of key-value pair elements
Use-Case of Map
We can add multiple tags to AWS resources.

These tags are key-value pairs.
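A minimal sketch of this use-case (the tag keys/values and AMI are illustrative):

variable "project_tags" {
  type = map(string)
  default = {
    Name = "HelloWorld"
    Env  = "Production"
  }
}

resource "aws_instance" "myec2" {
  ami           = "ami-0abcd1234example"   # placeholder AMI ID
  instance_type = "t2.micro"
  tags          = var.project_tags   # the tags argument expects a map of key-value pairs
}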


The COUNT Meta-Argument
Understanding the Challenge
By default, a resource block configures one real infrastructure object.
Understanding the Use-Case
Sometimes you want to manage several similar objects (like a fixed pool of
compute instances) without writing a separate block for each one.

Introducing Count Argument
The count argument accepts a whole number, and creates that many instances
of the resource.

Challenges with Count
The instances created through count are identical copies, but you might want to customize certain properties for each one.
Example - IAM User
For many resources, exact identical copies are not required and will not work.

Example: You cannot have multiple AWS Users with exact same name.
COUNT.INDEX
Introducing Count Index
When using count, you can also make use of count.index which allows better
flexibility.

This attribute holds a distinct index number, starting from 0, that uniquely
identifies each instance created by the count meta-argument.

Tabular Representation
Following representation shows each EC2 instance’s resource address that
contains the index.

Resource Address - Description

aws_instance.myec2[0] - First EC2 instance
aws_instance.myec2[1] - Second EC2 instance
aws_instance.myec2[2] - Third EC2 instance
CLI Output
Within CLI output, you will be able to see the index value of resource.



Example - IAM User Use-Case
The ${count.index} is a dynamic expression that utilizes the count.index attribute so that each username will be unique.
Enhancing with Count Index
You can use count.index to iterate through the list to have more customization.
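A minimal sketch of both patterns (the user names are illustrative):

# count.index makes each generated name unique
resource "aws_iam_user" "numbered" {
  count = 3
  name  = "demo-user.${count.index}"   # demo-user.0, demo-user.1, demo-user.2
}

# count.index can also pick a distinct value out of a list
variable "user_names" {
  type    = list(string)
  default = ["alice", "bob", "charlie"]
}

resource "aws_iam_user" "from_list" {
  count = length(var.user_names)
  name  = var.user_names[count.index]
}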
Conditional Expressions
Setting the Base
Conditional expressions in Terraform allow you to choose between two values
based on a condition

[Diagram: a variable holding "dev" or "production" feeds a conditional expression.]

Logic - Result

Env = Development - Launch a small server
Env = Production - Launch a large server


Syntax of Conditional Expression
The syntax of a conditional expression is as follows:

condition ? true_val : false_val

If condition is true then the result is true_val. If condition is false then the result is
false_val.
Conditional Expression Based on Use-Case
If Environment is Development, t2.micro instance type should be used.

If Environment is NOT development, m5.large instance type should be used.
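A minimal sketch of this use-case (the variable name and AMI are illustrative):

variable "environment" {
  default = "development"
}

resource "aws_instance" "myec2" {
  ami           = "ami-0abcd1234example"   # placeholder AMI ID
  instance_type = var.environment == "development" ? "t2.micro" : "m5.large"
}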


Conditional Expression with Multiple Variables
In the following example, only if env=production and region=us-east-1, the larger
instance type of m5.large can be used.
Terraform Functions
Basics of Function
A function is a block of code that performs a specific task.

Example: the max() function takes the input numbers 10, 30, 20 and returns the output 30.
Function 1 - MAX
max () takes one or more numbers and returns the greatest number.
Function 2 - FILE
file () reads the contents of a file at the given path and returns them as a string.
Introducing Terraform Console
Terraform Console provides an interactive environment specifically designed to
test functions and experiment with expressions before integrating them into your
main code.
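For example, you can type an expression into terraform console and immediately see its result:

> max(10, 30, 20)
30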
Importance of File Function
file reads the contents of a file at the given path and returns them as a string.

Functions in Terraform
Terraform has wide variety of functions available to achieve different set of
use-cases.

Functions are grouped into categories. Some of these include:

Function Category - Functions Available

Numeric Functions - abs, ceil, floor, max, min
String Functions - replace, split, tolower, toupper
Collection Functions - concat, element, keys, length, merge, sort
Filesystem Functions - file, filebase64, dirname

Important Point to Note

The Terraform language does not support user-defined functions, and so only
the functions built in to the language are available for use

The documentation includes a page for all of the available built-in functions.
Challenge - Analyzing Code Containing Functions
Setting the Base
As part of this challenge, you will be given a code that contains multiple sets of
Terraform Functions.

You have to analyze what this code does without running the “apply” operation.
Overall Workflow
1. Analyze what exactly the given code in GitHub will do without running the
“apply operation”.

2. Analyze the outcome by applying the functions using Terraform Console and reading the documentation.

3. Make a note of it.

4. Run the “terraform apply” operation to verify if it matches your findings.


Solution - Analyzing Code Containing Functions
1- Analyzing Lookup Function
lookup retrieves the value of a single element from a map, given its key.

Format: lookup(map, key, default)


Testing Lookup Function
To test the lookup function, add the details that are part of the map associated with the ami variable and the default value of the region variable.

terraform console
2 - Analyzing Length Function
length determines the length of a given list, map, or string.
Testing Length Function
Code: count = length(var.tags)
3 - Analyzing Element Function
element retrieves a single element from a list.

Format: element(list, index)


Testing Element Function
Code: Name = element(var.tags,count.index)
4 - Analyzing TimeStamp Function
timestamp returns a UTC timestamp string in RFC 3339 format.
Testing TimeDate Function
A simple call to timestamp() returns the current timestamp value.
5 - Analyzing Formatdate Function
formatdate converts a timestamp into a different time format.
5 - Testing Formatdate Function
Code Block:

CreationDate = formatdate("DD MMM YYYY hh:mm ZZZ",timestamp())


Final Result

1. Two EC2 instances will be created.

2. The names of the EC2 instances will be "firstec2" and "secondec2".

3. Each EC2 instance will have a CreationDate tag with the formatted timestamp value.
You are Awesome
Learning Terraform functions is a longer learning journey compared to other topics.

In today's video, we learned the practical aspects of functions in Terraform code.


Local Values
Understanding the Challenge
Various resources in your project can have common values like tags.

Repeating these values across multiple resource blocks increases the code
length and makes it difficult to manage in larger projects.
Solution using Variables
One solution is to centralize these common values using variables.
Introducing Local Values
Local values are similar to variables in the sense that they allow you to store data centrally, which can then be referenced in multiple parts of the configuration.
Additional Benefit of Locals
You can add expressions to locals, which allows you to compute values dynamically.
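A minimal sketch of a locals block (the names and values are illustrative):

locals {
  common_tags = {
    Team = "payments"
    Env  = "production"
  }
  instance_name = "web-${local.common_tags.Env}"   # an expression computed dynamically
}

resource "aws_instance" "myec2" {
  ami           = "ami-0abcd1234example"   # placeholder AMI ID
  instance_type = "t2.micro"
  tags          = merge(local.common_tags, { Name = local.instance_name })
}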
Locals vs Variables

Variable values can be defined in a wide variety of places like terraform.tfvars, ENV variables, the CLI, and so on.

Locals are more of a private resource - you have to directly modify the code to change them.

Locals are used when you want to avoid repeating the same expression multiple times.
Important Points to Note

Local values are often referred to as just "locals"

Local values are created by a locals block (plural), but you reference them as
attributes on an object named local (singular)
Data Sources
Introducing Data Sources
Data sources allow Terraform to use / fetch information defined outside of
Terraform

[Diagram: inside the Terraform code, a data source block fetches details of EC2 instances in the region and passes the details on internally to a resource block.]
Example 1 - Reading Info of DO Account
Following data source code is used to get information on your DigitalOcean
account.
Example 2 - Reading a File
Following data source allows you to read contents of a file in your local filesystem.
Clarity regarding path.module

${path.module} returns the current file system path where your code is located.
Example 3 - Fetch EC2 Instance Details
Following data source code is used to fetch details about the EC2 instance in
your AWS region.
Data Sources Documentation Reference
Finding Available Data Sources
List of available data source are associated with each resource of a provider.
Data Sources Format
Understanding the Basic Structure
A data source is accessed via a special kind of resource known as a data
resource, declared using a data block:

Following data block requests that Terraform read from a given data source
("aws_instance") and export the result under the given local name ("foo").
Filter Structure
Within the block body (between { and }) are query constraints defined by the
data source.
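A minimal sketch of such a data block with a filter (the tag value is illustrative):

data "aws_instance" "foo" {
  filter {
    name   = "tag:Name"
    values = ["app-server"]   # illustrative tag value used as a query constraint
  }
}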
Fetching Latest OS Image Using Data Sources
Understanding the Requirement

You have been given a requirement to write a Terraform code that creates EC2
instance using latest OS Image of Amazon Linux.
Approach that New User will Take
We want to use the latest OS image for creating server in AWS.

Steps that we typically follow:

1. Go to EC2 Console.

2. Fetch the latest AMI ID

3. Add that AMI ID in Terraform code.


Sample Reference Code

(The AMI ID in the sample code is a hard-coded static value.)


Static Information is Boring
Hardcoding static details in your Terraform code will lead you to repeatedly
modify your code to meet changing requirements.
Another Challenge with Static Values
In many of the cases, the static value changes depending on the region.

Example: AMI IDs are specific to region.

Hardcoded AMI in code will only work for single region.

Mumbai Region - ami-1234
Singapore Region - ami-5678
Tokyo Region - ami-9012


Time to be Pros - Dynamic Configuration
We want Terraform to automatically query the latest OS image in AWS (or any other provider) and use that for creating the server.

We need code which works for all regions without modification.

[Diagram: Terraform is asked to fetch the latest Ubuntu OS image and returns ami-1234.]
Introducing Data Sources
Data sources allow Terraform to use information defined outside of Terraform
and we can use that information to provision resources.

[Diagram: inside the Terraform code, a data source block fetches the latest AMI ID and passes it to the EC2 resource block.]
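A minimal sketch of the dynamic approach (the name pattern for the Amazon Linux image is an assumption and may need adjusting):

data "aws_ami" "latest_amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-*-x86_64"]   # assumed name pattern for Amazon Linux 2023 images
  }
}

resource "aws_instance" "myec2" {
  ami           = data.aws_ami.latest_amazon_linux.id   # resolved dynamically, works in any region
  instance_type = "t2.micro"
}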
Debugging Terraform
Basics of Debugging
Debugging is the process of finding the root cause of a specific issue.

30-40% of the time of a System Administrator goes into Debugging.


Example - SSH Verbosity
One of the important requirements in debugging is getting detailed logs.

Depending on the application, the approach to get detailed logs will differ.
Debugging in Terraform
Similar to SSH verbosity, Terraform also allows us to set a wide variety of log levels for getting detailed logs for debugging purposes.
Understanding the Basics
Terraform has detailed logs that you can enable by setting the TF_LOG
environment variable to any value.

You can set TF_LOG to one of the log levels (in order of decreasing verbosity)

Log levels: TRACE, DEBUG, INFO, WARN, ERROR
Storing the Logs to File
To persist logged output you can set TF_LOG_PATH in order to force the log to
always be appended to a specific file when logging is enabled
Terraform Troubleshooting Model
Terraform Troubleshooting Model
There are four potential types of issues that you could experience with Terraform:

Language, State, Core, and Provider errors.


1 - Language Errors
In most of the cases, the errors that you will face will be related to this.

When Terraform encounters a syntax error in your configuration, it prints out the
line numbers and an explanation of the error.
2 - State Errors
If state is out of sync, Terraform may destroy or change your existing resources.

If state is locked, you will also be blocked from running write operations.
3 - Core errors
These errors are directly related to the main Terraform application.

Errors produced at this level may be a bug.


4 - Provider errors
This set of errors is primarily related to the provider plugins.

Use the Provider GitHub page for reporting and identifying the issue.
Reporting Terraform Bugs
Reporting Bugs
You can report bugs in the Terraform Core GitHub page or appropriate provider
page.
1 - Navigate to Issues
First, navigate to the Terraform GitHub repository and choose "Issues" from the
top tabs.
2 - Choose "New Issue".
3 - Click “Get Started”
4 - Fill Core Terraform Template
Terraform Format
Terraform in detail
Importance of Readability

Anyone who is into programming knows the importance of formatting the code for readability.

The terraform fmt command is used to rewrite Terraform configuration files to take care of the
overall formatting.
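A small illustration of what terraform fmt does (the resource values are placeholders):

# Before terraform fmt - inconsistent spacing and indentation:
resource "aws_instance" "myec2" {
ami="ami-0abcd1234example"
      instance_type =   "t2.micro"
}

# After terraform fmt - aligned and indented:
resource "aws_instance" "myec2" {
  ami           = "ami-0abcd1234example"
  instance_type = "t2.micro"
}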

Terraform Validate
Terraform in detail
Overview of Terraform Validate

Terraform Validate primarily checks whether a configuration is syntactically valid.

It can check various aspects including unsupported arguments, undeclared variables and others.

Load Order & Semantics
Terraform in detail
Understanding Semantics

Terraform generally loads all the configuration files within the directory specified in
alphabetical order.

The files loaded must end in either .tf or .tf.json to specify the format that is in use.

[Diagram: the terraform-kplabs directory containing web.tf, app.tf, sg.tf, and providers.tf, all loaded together.]
Dynamic Block
Terraform In Depth
Understanding the Challenge

In many use-cases, there are repeatable nested blocks that need to be defined.

This can lead to long code that becomes difficult to manage over time.
Dynamic Blocks

A dynamic block allows us to dynamically construct repeatable nested blocks, and is supported inside resource, data, provider, and provisioner blocks.
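A minimal sketch of a dynamic block (the ports and resource name are illustrative):

variable "ingress_ports" {
  type    = list(number)
  default = [80, 443, 8080]
}

resource "aws_security_group" "web" {
  name = "dynamic-block-demo"

  dynamic "ingress" {
    for_each = var.ingress_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}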
Iterators

The iterator argument (optional) sets the name of a temporary variable that represents the current element of the complex value.

If omitted, the name of the variable defaults to the label of the dynamic block ("ingress" in the example above).
Terraform Taint
Understanding the Use-Case
You have created a new resource via Terraform.

Users have made a lot of manual changes (both infrastructure and inside the
server)

Two ways to deal with this: Import Changes to Terraform / Delete & Recreate
the resource


Recreating the Resource
The -replace option with terraform apply forces Terraform to replace an object even though there are no configuration changes that would require it.

terraform apply -replace="aws_instance.web"

Points to Note

A similar kind of functionality was achieved using the terraform taint command in older versions of Terraform.

For Terraform v0.15.2 and later, HashiCorp recommends using the -replace option with terraform apply.
Splat Expression

Terraform Expressions
Overview of Splat Expression
Splat Expression allows us to get a list of all the attributes.
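A minimal sketch (the IAM users are illustrative):

resource "aws_iam_user" "users" {
  count = 3
  name  = "demo-user.${count.index}"
}

output "user_arns" {
  value = aws_iam_user.users[*].arn   # splat expression: a list of the arn attribute of every instance
}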

Terraform Graph
Understanding the Base Structure
Terraform graph refers to a visual representation of the dependency
relationships between resources defined in your Terraform configuration.
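For example, the graph output can be rendered into an image (assuming Graphviz's dot tool is installed):

terraform graph | dot -Tsvg > graph.svg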
Summary and Conclusion

Terraform graphs are a valuable tool for visualizing and understanding the
relationships between resources in your infrastructure defined with Terraform.

They can improve your overall workflow by aiding in planning, debugging, and managing complex infrastructure configurations.
Saving Terraform Plan to File
Setting the Base
Terraform allows saving a plan to a file.

terraform plan -out ec2.plan


Apply from Plan File
You can run terraform apply by referencing the plan file.

This ensures the changes applied match exactly what was shown in the plan, for consistency.
Exploring Terraform Plan File
The saved Terraform plan file will be a binary file.

You can use the terraform show command to read the contents in detail.
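For example:

terraform plan -out ec2.plan
terraform show ec2.plan
terraform apply ec2.plan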
Use-Cases of Saving Plan to a File

Many organizations require documented proof of planned changes before implementation.

These changes will then be reviewed and approved.

Running apply from the plan file ensures a consistent, desired outcome.


Terraform Output
Terraform in detail
Terraform Output

The terraform output command is used to extract the value of an output variable from the state
file.

Terraform Settings
Setting the Base
We can use the provider block to define various aspects of the provider, like
region, credentials and so on.
Specific Version to Run Your Code
In a Terraform project, your code might require a very specific set of versions to
run.

Sample requirements to run the code:

● Terraform version must be 1.8
● AWS provider version must be 5.56


Introducing Terraform Settings
Terraform Settings are used to configure project-specific Terraform behaviors,
such as requiring a minimum Terraform version to apply to your configuration.

Terraform settings are gathered together into terraform blocks:


1 - Specifying a Required Terraform Version
If your code is compatible with specific versions of Terraform, you can use the
required_version block to add your constraints.
2 - Specifying Provider Requirements
The required_providers block can be used to specify all of the providers required
by your Terraform code.

You can further fine-tune to include a specific version of the provider plugins.
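A minimal sketch matching the sample requirements above (the exact version constraints are illustrative):

terraform {
  required_version = ">= 1.8.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.56"
    }
  }
}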
Flexibility in Settings Block
There are a wide variety of options that can be specified in the Terraform block.

Options that can be defined in the terraform { ... } block:

● Required Terraform version
● Required providers and their versions
● Backend configuration
● Experimental features
Point to Note

It is a good practice to include the terraform { } block with details like required_providers as part of your project.

The provider { } block is still important to specify various other aspects like regions, credentials, aliases, and others.
Dealing with Larger Infrastructure
Terraform in detail
Challenges with Larger Infrastructure

When you have a larger infrastructure, you will face issues related to API limits for a provider.

[Diagram: a single infra.tf containing 5 EC2 instances, 3 RDS instances, 100 SG rules, and the VPC infra - terraform plan must update the state of each resource.]
Dealing With Larger Infrastructure

Switch to smaller configurations where each can be applied independently.

[Diagram: infra.tf is split into ec2.tf (5 EC2), rds.tf (3 RDS), sg.tf (100 SG rules), and vpc.tf (VPC infra), and terraform plan is run on each independently.]
Slow Down, My Man
We can prevent terraform from querying the current state during operations like terraform plan.

This can be achieved with the -refresh=false flag


Specify the Target
The -target=resource flag can be used to target a specific resource.

Generally used as a means to operate on isolated portions of very large configurations.

terraform plan -target=aws_instance.myec2


Zipmap
Terraform Function
Overview of Zipmap

The zipmap function constructs a map from a list of keys and a corresponding list of
values.

Example: zipmap(["pineapple", "orange", "strawberry"], ["yellow", "orange", "red"]) returns {pineapple = "yellow", orange = "orange", strawberry = "red"}.
Simple Use-Case
You are creating multiple IAM users.

You need an output which contains a direct mapping of IAM user names and ARNs.
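A minimal sketch of this use-case (the IAM users are illustrative):

resource "aws_iam_user" "users" {
  count = 3
  name  = "demo-user.${count.index}"
}

output "user_name_to_arn" {
  value = zipmap(aws_iam_user.users[*].name, aws_iam_user.users[*].arn)   # map of user name => ARN
}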
Comments in Terraform Code
Commenting the Code!
Overview of Comments
A comment is a text note added to source code to provide explanatory information, usually about the function of the code.
Comments in Terraform
The Terraform language supports three different syntaxes for comments:

Type - Description

# - Begins a single-line comment, ending at the end of the line.
// - Also begins a single-line comment, as an alternative to #.
/* and */ - Start and end delimiters for a comment that might span over multiple lines.
Resource Behavior and Meta-Argument
Understanding the Basics
A resource block declares that you want a particular infrastructure object to exist
with the given settings
How Terraform Applies a Configuration
Create resources that exist in the configuration but are not associated with a real
infrastructure object in the state.

Destroy resources that exist in the state but no longer exist in the configuration.

Update in-place resources whose arguments have changed.

Destroy and re-create resources whose arguments have changed but which
cannot be updated in-place due to remote API limitations.
Understanding the Limitations
What happens if we want to change the default behavior?

Example: Some modification happened in the real infrastructure object outside of Terraform, but you want to ignore those changes during terraform apply.

[Example: the instance's tags were changed manually to Name = HelloWorld, Env = Production.]
Solution - Using Meta Arguments
Terraform allows us to include meta-arguments within the resource block, which allow some details of this standard resource behavior to be customized on a per-resource basis.


Different Meta-Arguments
Meta-Argument - Description

depends_on - Handle hidden resource or module dependencies that Terraform cannot automatically infer.
count - Accepts a whole number, and creates that many instances of the resource.
for_each - Accepts a map or a set of strings, and creates an instance for each item in that map or set.
lifecycle - Allows modification of the resource lifecycle.
provider - Specifies which provider configuration to use for a resource, overriding Terraform's default behavior of selecting one based on the resource type name.
Meta Argument - LifeCycle
Basics of Lifecycle Meta-Argument
Some details of the default resource behavior can be customized using the
special nested lifecycle block within a resource block body:
Arguments Available
There are four arguments available within the lifecycle block.

Argument - Description

create_before_destroy - The new replacement object is created first, and the prior object is destroyed after the replacement is created.
prevent_destroy - Causes Terraform to reject with an error any plan that would destroy the infrastructure object associated with the resource.
ignore_changes - Ignore certain changes to the live resource that do not match the configuration.
replace_triggered_by - Replaces the resource when any of the referenced items change.
Replace Triggered By
Replaces the resource when any of the referenced items change.
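A minimal sketch of a lifecycle block inside a resource (the values are illustrative):

resource "aws_instance" "myec2" {
  ami           = "ami-0abcd1234example"   # placeholder AMI ID
  instance_type = "t2.micro"

  lifecycle {
    create_before_destroy = true
  }
}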
Create Before Destroy Argument
Understanding the Default Behavior
By default, when Terraform must change a resource argument that cannot be
updated in-place due to remote API limitations, Terraform will instead destroy the
existing object and then create a new replacement object with the new
configured arguments.

[Diagram: with the default behavior, a changed AMI causes Terraform to destroy the old instance first and create the new one second.]
Create Before Destroy Argument
The create_before_destroy meta-argument changes this behavior so that the
new replacement object is created first, and the prior object is destroyed after
the replacement is created.

[Diagram: with create_before_destroy, the new instance is created first and the old one is destroyed second.]
Join us in our Adventure

kplabs.in/chat

Be Awesome

kplabs.in/linkedin
LifeCycle - Prevent Destroy Argument
Prevent Destroy Argument
This meta-argument, when set to true, will cause Terraform to reject with an
error any plan that would destroy the infrastructure object associated with the
resource, as long as the argument remains present in the configuration.
Points to Note
This can be used as a measure of safety against the accidental replacement of
objects that may be costly to reproduce, such as database instances.

Since this argument must be present in configuration for the protection to apply,
note that this setting does not prevent the remote object from being destroyed if
the resource block were removed from configuration entirely.
LifeCycle - Ignore Changes Argument
Ignore Changes
In cases where settings of a remote object are modified by processes outside of Terraform, Terraform would attempt to "fix" them on the next run.

In order to change this behavior and ignore the manually applied change, we can make use of the ignore_changes argument under lifecycle.
Points to Note
Instead of a list, the special keyword all may be used to instruct Terraform to
ignore all attributes, which means that Terraform can create and destroy the
remote object but will never propose updates to it.
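A minimal sketch (the resource values are illustrative):

resource "aws_instance" "myec2" {
  ami           = "ami-0abcd1234example"   # placeholder AMI ID
  instance_type = "t2.micro"
  tags = {
    Name = "HelloWorld"
  }

  lifecycle {
    ignore_changes = [tags]   # manual changes to tags outside Terraform will not be "fixed"
  }
}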
Challenges with Count
Meta-Argument
Revising the Basics
Resources are identified by their index value from the list.

Resource Address - Infrastructure

aws_iam_user.iam[0] - user-01
aws_iam_user.iam[1] - user-02
aws_iam_user.iam[2] - user-03
Challenge - 1

If the order of elements in the list is changed, this can impact all of the other resources.

Resource Address - Infrastructure

aws_iam_user.iam[0] - user-01
aws_iam_user.iam[1] - user-02
aws_iam_user.iam[2] - user-03
Important Note

If your resources are almost identical, count is appropriate.

If distinctive values are needed in the arguments, usage of for_each is recommended.

Data Type - SET
Let’s Revise Programming
Basics of List

● Lists are used to store multiple items in a single variable.
● List items are ordered, changeable, and allow duplicate values.
● List items are indexed; the first item has index [0], the second item has index [1], etc.
Understanding SET

● A SET is used to store multiple items in a single variable.
● SET items are unordered, and no duplicate values are allowed.
toset Function

The toset function will convert a list of values to a SET.
for_each
Meta-Argument
Basics of For Each

for_each makes use of a map/set value as the index of the created resource.

Resource Address - Infrastructure

aws_iam_user.iam["user-01"] - user-01
aws_iam_user.iam["user-02"] - user-02
aws_iam_user.iam["user-03"] - user-03
Replication Count Challenge

If a new element is added, it will not affect the other resources.

Resource Address - Infrastructure

aws_iam_user.iam["user-01"] - user-01
aws_iam_user.iam["user-02"] - user-02
aws_iam_user.iam["user-03"] - user-03
aws_iam_user.iam["user-0"] - user-0
The each object

In blocks where for_each is set, an additional each object is available.

This object has two attributes:

Attribute - Description

each.key - The map key (or set member) corresponding to this instance.
each.value - The map value corresponding to this instance.
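A minimal sketch of for_each with a set of user names (the names are illustrative):

resource "aws_iam_user" "iam" {
  for_each = toset(["user-01", "user-02", "user-03"])
  name     = each.value   # for a set, each.key and each.value are the same
}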
Relax and Have a Meme Before Proceeding

Terraform Provisioners
Setting the Base
We have been using Terraform to create and manage resources for a specific
provider.

Organizations would want an end-to-end solution for creating the infrastructure and configuring the appropriate packages required for the application.

[Diagram: Terraform launches the VM - now what?]
Introducing Provisioners
Provisioners are used to execute scripts on a local or remote machine as part of
resource creation or destruction.

Example: After VM is launched, install software package required for application.

Launch VM

Terraform

Install Software
Types of Provisioners in Terraform
Setting the Base
Provisioners are used to execute scripts on a local or remote machine as part
of resource creation or destruction.

There are 2 major types of provisioners available

Provisioners

local-exec remote-exec
Type 1 - local-exec provisioner
The local-exec provisioner invokes a local executable after a resource is
created.

Example: After EC2 is launched, fetch the IP and store it in file server_ip.txt

server_ip.txt 1

2
Launch Server

Terraform
Store IP
Type 2 - remote-exec provisioner
remote-exec provisioners allow us to invoke scripts or run commands directly on
the remote server.

Example: After EC2 is launched, install “apache” software

Launch VM

Terraform

Install Software
Today’s Demo
For today’s demo, the Terraform code will run two provisioners.

1. Remote-Exec will install Nginx software in EC2 to have basic website.


2. Local-Exec will fetch the Public IP of EC2 and store it in a new file.

server_ip.txt 1

2 Launch Server + Install Nginx

Terraform
Store IP
Format of Provisioners
1 - Defining Provisioners

Provisioners are defined inside a specific resource.


2 - Defining provisioner

Provisioners are declared with the provisioner keyword followed by the type of provisioner.


3 - Local Provisioner Approach

For local provisioners, we have to specify the command that needs to be run locally.
4 - Remote Exec Provisioner Approach
Since commands are executed on the remote server, we have to provide a way for
Terraform to connect to the remote server.

Details to connect to the Server

Commands to Run on the Server
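A minimal sketch combining both provisioner types (the AMI ID, key file, and install commands are placeholder values for this course's Nginx demo):

resource "aws_instance" "myec2" {
  ami           = "ami-0123456789"   # hypothetical AMI ID
  instance_type = "t2.micro"

  # local-exec: runs on the machine where terraform apply is executed
  provisioner "local-exec" {
    command = "echo ${self.public_ip} > server_ip.txt"
  }

  # remote-exec: runs on the newly created server, so a connection block is required
  provisioner "remote-exec" {
    inline = [
      "sudo yum install -y nginx",
      "sudo systemctl start nginx"
    ]

    connection {
      type        = "ssh"
      user        = "ec2-user"
      private_key = file("./terraform-key.pem")   # hypothetical key file
      host        = self.public_ip
    }
  }
}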


Points to Note - Provisioners
Provisioners are Defined inside Resource Block
It is not necessary to define an aws_instance resource block for a provisioner to
run.

They can be defined inside other resource types as well.


Multiple Provisioners Blocks for Single Resource
We can define multiple provisioner blocks in a single resource block.
Creation-Time & Destroy-Time Provisioners
Basic of Creation-Time Provisioners
By default, provisioners run when the resource they are defined within is
created.

Creation-time provisioners are only run during creation, not during updating or
any other lifecycle.

Launch VM

Terraform

Install Software
Destroy-Time Provisioner
Destroy provisioners are run before the resource is destroyed.

Example:

Remove and De-Link Anti-Virus software before EC2 gets terminated.

Define destroy-time
provisioner
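A minimal sketch of a destroy-time provisioner (the command is a placeholder for the real de-registration step):

resource "aws_instance" "myec2" {
  ami           = "ami-0123456789"
  instance_type = "t2.micro"

  provisioner "local-exec" {
    when    = destroy
    command = "echo 'De-linking anti-virus agent for instance ${self.id}'"
  }
}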
Tainting Resource in Creation-Time Provisioners
If a creation-time provisioner fails, the resource is marked as tainted.

A tainted resource will be planned for destruction and recreation upon the next
terraform apply.

Terraform does this because a failed provisioner can leave a resource in a


semi-configured state.

Install Software

Provisioner

Failed. Permission Denied


Reference Screenshot - Resource Marked as Tainted
Following screenshot shows state file that has marked the resource as “tainted”
because the provisioner had failed.
Failure Behaviour in Provisioners
Understanding the Challenge
By default, provisioners that fail will also cause the terraform apply itself to fail.

This will lead to the resource being tainted, and we will have to re-create the resource.
Basics of On Failure Setting
The on_failure setting can be used to change the default behaviour.

Allowed Values Description

continue Ignore the error and continue with creation or destruction.

fail Raise an error and stop applying (the default behavior). If this is a
creation provisioner, taint the resource.
Reference Code - On-Failure
Following screenshot shows a reference code where on_failure is set to
continue.
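Since the screenshot is not reproduced here, a minimal sketch of what such code might look like (the failing command is just an illustration):

resource "aws_instance" "myec2" {
  ami           = "ami-0123456789"
  instance_type = "t2.micro"

  provisioner "local-exec" {
    command    = "cat /restricted/file.txt"   # expected to fail (permission denied)
    on_failure = continue                     # apply proceeds in spite of the failure
  }
}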
Reference Screenshot - Failed Provisioner
Following screenshot shows that the provisioner has failed but still the apply has
completed successfully.

This is an example of on_failure = continue


Join us in our Adventure

kplabs.in/chat

Be Awesome

kplabs.in/linkedin
Terraform Modules
Understanding the Basic
In software engineering, don't repeat yourself (DRY) is a principle of software
development aimed at reducing repetition of software patterns.
Understanding the Challenge
Let’s assume there are 10 teams in your organization using Terraform to create
and manage EC2 instances.

Team 3 Team 5
Team1

Team 2 Team 4 Team 6


Challenge with the Previous Example
1. Repetition of Code.

2. Change in AWS Provider specific option will require change in EC2 code
blocks of all the teams.

3. Lack of standardization.

4. Difficult to manage.

5. Difficult for developers to use.


Better Approach
In this approach, the DevOps Team has defined standard EC2 template in a
central location that all can use.

Team 1 Code

Team 2 Code

Team 3 Code

Standard Template Team 4 Code


Introducing Terraform Modules
Terraform Modules allow us to centralize resource configuration, making it easier
for multiple projects to re-use the same Terraform code.

Team 1 Code

Team 2 Code

Team 3 Code

Terraform Module Team 4 Code


Multiple Modules for a Single Project
Instead of writing code from scratch, we can use multiple ready-made modules
available.

Team 1 Code

Team 2 Code
EC2 Module

Team 3 Code
Infrastructure Created

Team 4 Code

VPC Module
Points to Note - Referencing Terraform Modules
Understanding the Base
For some infrastructure resources, you can directly use the module calling code,
and the entire infrastructure will be created for you.

terraform apply
Avoiding Confusion

Just by referencing any module, it is not always the case that the infrastructure
resource will be created for you directly.

Some modules require specific inputs and values from the user side to be filled
in before a resource gets created.
Example Module - AWS EKS
If you try to use an AWS EKS Module directly and run “terraform apply”, it will
throw an error.

terraform apply
Module Structure Can be Different
Some module pages in GitHub can contain multiple sets of modules together for
different features.

In such cases, you have to reference the exact sub-module required.


Learnings for Today’s Video

Always read the module documentation to understand the overall structure, important
information, and what is expected from the user side when creating a resource.
Choosing the Right Terraform Module
Understanding the Base
Terraform Registry can contain multiple modules for a specific infrastructure
resource maintained by different users
1 - Check Total Downloads
Module downloads can provide an early indication of the level of acceptance by
users in the Terraform community.
2 - Check GitHub Page of Module
GitHub page can provide important information related to the Contributors,
Reported Issues and other data.
3 - Avoid Modules Maintained by a Single Individual
Avoid modules that are maintained by a single contributor, as regular updates,
issue fixes, and other areas might not always be taken care of.
4 - Analyze Module Documentation
Good documentation should include an overview, usage instructions, input and
output variables, and examples.
5 - Check Version History of Module
Look at the version history. Frequent updates and a clear versioning strategy
suggest active maintenance.
6 - Analyze the Code
Inspect the module's source code on GitHub or another platform. Clean,
well-structured code is a good sign.
7 - Check the Community Feedback
The number of stars and forks on GitHub can indicate popularity and community
interest.
8 - Modules Maintained by HashiCorp Partner
Search for modules that are maintained by HashiCorp Partners
Important Point to Note

Avoid directly trying any random Terraform module that is not actively maintained
and looks shady (primarily by sole individual contributors)

An attacker can include malicious code in a module that sends information about
your environment to the attacker.
Which Modules do Organizations Use?

In most of the scenarios, organizations maintain their own set of modules.

They might initially fork a module from the Terraform registry and modify it based
on their use case.
Creating Base Module Structure
Understanding the Base Structure
A base “modules” folder.

A sub-folder for each module that is available.

EC2 IAM

VPC SG

modules folder
What is Inside the Sub-Folders
Each module’s sub-folder contains the actual module Terraform code that other
projects can reference from.

EC2

modules folder
main.tf
Calling the Module
Each Team can call various set of modules that are available in the modules
folder based on their requirements.

Team 1 Code

EC2 IAM
Team 2 Code

VPC SG
Team 3 Code

modules folder Team 4 Code


Our Practical Structure
Our practical structure will include two main folders (modules and teams).

The modules folder will contain a sub-folder for each module that is available.

The teams folder will contain a sub-folder for each team that we want to support.

EC2 A

SG B

modules folder
Teams folder
Module Sources - Calling a Module
Understanding the Base
Module source code can be present in a wide variety of locations.

These includes:

1. GitHub
2. HTTP URLs
3. S3 Buckets
4. Terraform Registry
5. Local paths
Base - Calling the Module
In order to reference a module, you need to make use of the module block.

The module block must contain the source argument that points to the location of the
referenced module.
Example 1 - Local Paths
Local paths are used to reference a module that is available in the local filesystem.

A local path must begin with either ./ or ../ to indicate that it is a local path.
Example 2 - Generic Git Repository
Arbitrary Git repositories can be used by prefixing the address with the special
git:: prefix.
Module Version
A specific module can have multiple versions.

You can reference a specific version of a module with the version argument.
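A minimal sketch of the different source types (the Git URL is a placeholder; the registry example uses the widely used community terraform-aws-modules EC2 module):

module "ec2_from_local_path" {
  source = "../modules/ec2"                     # local paths must start with ./ or ../
}

module "ec2_from_git" {
  source = "git::https://example.com/ec2.git"   # hypothetical Git repository
}

module "ec2_from_registry" {
  source  = "terraform-aws-modules/ec2-instance/aws"
  version = "~> 5.0"                            # version constraint for registry modules
}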
Improvements in Custom Module Code
Our Simple Module
We had created a very simple module that allows developers to launch an EC2
instance when calling the module.
Need to Analyze Shortcomings
Being a simplistic and basic module, there is good room for improvement.

In today’s video, we will discuss some of the important shortcomings of the code.
Challenge 1 - Hardcoded Values
The values are hardcoded as part of the module.

If a developer is calling the module, they will have to stick with the same values.

Developers will not be able to override the hardcoded values of the module.

Hard-Coded Values
Challenge 2 - Provider Improvements
Avoid hard-coding the region in the module code as much as possible.

A required_providers block with version constraints for the module to work is important.
Variables in Terraform Modules
Point to Note
As much as possible, avoid hardcoding values as part of the modules.

Hardcoded values make the module less flexible.


Convert Hard Coded Values to Variables
For modules, it is especially recommended to convert hard-coded values to
variables so that they can be overridden based on user requirements.

Bad Approach Good Approach


Advantages of Variables in Module Code
A variable-based approach allows each team to override the values.

Team 1    instance_type = "t2.micro"
Team 2    instance_type = "m5.large"
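A minimal sketch of the idea, assuming a local module folder named modules/ec2:

Module code (modules/ec2/main.tf):

variable "instance_type" {
  default = "t2.micro"
}

resource "aws_instance" "this" {
  ami           = "ami-0123456789"   # hypothetical AMI ID
  instance_type = var.instance_type
}

Calling code for Team 2:

module "team2_ec2" {
  source        = "../../modules/ec2"
  instance_type = "m5.large"         # overrides the module default
}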
Reviewing Professional EC2 Module Code
Reviewing an EC2 Module code that is professionally written, we see that the
values associated with arguments are not hardcoded and variables are used
extensively.
Module Outputs
Revising Output Values
Output values make information about your infrastructure available on the
command line, and can expose information for other Terraform configurations to
use.
Understanding the Challenge
If you want to create a resource that has a dependency on infrastructure created
through a module, you won’t be able to reference that infrastructure without output
values.
Accessing Child Module Outputs
Ensure to include output values in the module code for better flexibility and
integration with other resources and projects.

Format: module.<MODULE NAME>.<OUTPUT NAME>
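A minimal sketch, assuming the EC2 module exposes an instance_id output:

Inside the module (modules/ec2/outputs.tf):

output "instance_id" {
  value = aws_instance.this.id
}

In the calling configuration:

module "ec2" {
  source = "../../modules/ec2"
}

output "ec2_instance_id" {
  value = module.ec2.instance_id   # module.<MODULE NAME>.<OUTPUT NAME>
}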


Root and Child Modules
Root Module
Root Module resides in the main working directory of your Terraform
configuration. This is the entry point for your infrastructure definition

Root Module
Child Module
A module that has been called by another module is often referred to as a child
module.

Child Module Root Module


Standard Module Structure
Setting the Base

At this stage, we have been keeping the overall module structure very simple to
understand the concepts.

In production environments, it is important to follow recommendations and


best-practices set by HashiCorp.
Basic of Standard Module Structure
The standard module structure is a file and directory layout HashiCorp
recommends for reusable modules.

A minimal recommended module following the standard structure is shown below.
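The minimal layout recommended by HashiCorp looks like this:

minimal-module/
  README.md
  main.tf
  variables.tf
  outputs.tf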
Scope the Requirements for Module Creation
A team wants to provision their infrastructure using Terraform.

The following architecture diagram depicts the desired outcome.


Planning a Module Structure

In this scenario, a team of Terraform producers, who write Terraform code from
scratch, will build a collection of modules to provision the infrastructure and
applications.

The members of the team in charge of the application will consume these
modules to provision the infrastructure they need.
Final Module Output
After reviewing the consumer team's requirements, the producer team has
broken up the application infrastructure into the following modules:

Network, Web, App, Database, Routing, and Security.


Publishing Modules
Publish Modules to Terraform Registry
Overview of Publishing Modules
Anyone can publish and share modules on the Terraform Registry.

Published modules support versioning, automatically generate documentation, allow


browsing version histories, show examples and READMEs, and more.

Requirements for Publishing Module
Requirement: GitHub
The module must be on GitHub and must be a public repo. This is only a requirement for the public registry.

Requirement: Named
Module repositories must use the three-part name format terraform-<PROVIDER>-<NAME>.

Requirement: Repository description
The GitHub repository description is used to populate the short description of the module.

Requirement: Standard module structure
The module must adhere to the standard module structure.

Requirement: x.y.z tags for releases
The registry uses tags to identify module versions. Release tag names must be a semantic version, which can optionally be prefixed with a v. For example, v1.0.4 and 0.9.2.
Standard Module Structure
The standard module structure is a file and directory layout that is recommended for
reusable modules distributed in separate repositories.
Terraform Workspace
Setting the Base
An infrastructure created through Terraform is tied to the underlying Terraform
configuration and a state file.

EC2 Instance

terraform.tfstate
What If?
What if we have multiple state files for a single Terraform configuration?

Can we manage different environments through them separately?

Environment 1

State File 1 State File 2 Environment 2


Introducing Terraform Workspace
Terraform workspaces enable us to manage multiple sets of deployments from
the same set of configuration files.

Development
dev.tfstate

Development ENV

Production
prod.tfstate

Production ENV
Workspace
Flexibility with Workspace
Depending on the workspace being used, the value of a specific argument in
your Terraform code can also change.

Environment    instance_type    State File
Development    t2.micro         dev.tfstate
Production     m5.large         prod.tfstate
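A minimal sketch of how the current workspace can drive a value (the map and AMI are placeholders):

variable "instance_type_map" {
  type = map(string)
  default = {
    dev  = "t2.micro"
    prod = "m5.large"
  }
}

resource "aws_instance" "myec2" {
  ami           = "ami-0123456789"
  instance_type = lookup(var.instance_type_map, terraform.workspace, "t2.micro")
}

Typical workflow:

terraform workspace new dev
terraform workspace new prod
terraform workspace select prod
terraform apply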


Team Collaboration
Terraform in detail
Local Changes are not always good
Currently, we have been working with Terraform code locally.
Centralized Management

Central Repository

Terraform Code
Terraform Code
Relax and Have a Meme Before Proceeding

Terraform & GitIgnore
Terraform in detail
Overview of gitignore

The .gitignore file is a text file that tells Git which files or folders to ignore in a project.

.gitignore

conf/

*.artifacts

credentials

Terraform and .gitignore

Depending on the environments, it is recommended to avoid committing certain files to GIT.

Files to Ignore      Description

.terraform           This directory will be recreated when terraform init is run.

terraform.tfvars     Likely to contain sensitive data like usernames/passwords and secrets.

terraform.tfstate    Should be stored on the remote side.

crash.log            If Terraform crashes, the logs are stored in a file named crash.log.
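A minimal .gitignore for a Terraform project, reflecting the table above:

# Local .terraform directories
.terraform/

# State files
*.tfstate
*.tfstate.*

# Variable files that may contain secrets
*.tfvars

# Crash logs
crash.log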
Terraform Backend
Terraform in detail
Basics of Backends
Backends primarily determine where Terraform stores its state.

By default, Terraform implicitly uses a backend called local to store state as a local file on disk.

demo.tf
terraform.tfstate

Challenge with Local Backend
Nowadays, a Terraform project is handled and collaborated on by an entire team.

Storing the state file in the local laptop will not allow collaboration.

Ideal Architecture
Following describes one of the recommended architectures:

1. The Terraform Code is stored in Git Repository.


2. The State file is stored in a Central backend.

TF files Central Git Repo

terraform.tfstate
Project Collaborators

Central Backend

Backends Supported in Terraform

Terraform supports multiple backends that allow remote, service-related operations.

Some of the popular backends include:

● S3
● Consul
● Azurerm
● Kubernetes
● HTTP
● ETCD

Important Note
Accessing state in a remote service generally requires some kind of access credentials

Some backends act like plain "remote disks" for state files; others support locking the state while
operations are being performed, which helps prevent conflicts and inconsistencies.

Store State File

Terraform User S3 Bucket


Authenticate First

State Locking
Let’s Lock the State
Understanding State Lock
Whenever you are performing a write operation, Terraform will lock the state file.

This is very important; otherwise, if others run operations during your ongoing terraform apply, the state file can get corrupted.

Basic Working

terraform apply
User 1

State File

User 2 terraform destroy

Hold on Dude! State is locked

Important Note

State locking happens automatically on all operations that could write state. You won't see any
message that it is happening

If state locking fails, Terraform will not continue

Not all backends support locking. The documentation for each backend includes details on
whether it supports locking or not.

Force Unlocking State
Terraform has a force-unlock command to manually unlock the state if unlocking failed.

If you unlock the state when someone else is holding the lock it could cause multiple writers.

Force unlock should only be used to unlock your own lock in the situation where automatic
unlocking failed.

State Locking in S3 Backend
Back to Providers
State Locking in S3
By default, S3 does not support State Locking functionality.

You need to make use of a DynamoDB table to achieve the state locking functionality.

terraform.tfstate S3 Bucket

State Lock DynamoDB

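A minimal sketch of an S3 backend with DynamoDB-based locking (bucket and table names are hypothetical; the DynamoDB table needs a LockID partition key):

terraform {
  backend "s3" {
    bucket         = "kplabs-terraform-state"     # hypothetical bucket name
    key            = "network/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-state-lock"       # hypothetical table name
  }
}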
Terraform State Management
Setting the Base
As your Terraform usage becomes more advanced, there are some cases where
you may need to modify the Terraform state.

It is NOT recommended to modify the state file manually.


State Management
The terraform state command is used for advanced state management

Sub-Command        Description

list               List resources within the Terraform state file.

mv                 Move an item within the Terraform state.

pull               Manually download and output the state from remote state.

push               Manually upload a local state file to remote state.

rm                 Remove items from the Terraform state.

show               Show the attributes of a single resource in the state.

replace-provider   Replace the provider for resources in the Terraform state.


Sub-Command 1 - List
The terraform state list command is used to list resources within a Terraform
state.

Useful if you want to quickly view all resources managed by Terraform.


Sub-Command 2 - Show
The terraform state show command is used to show the attributes of a single
resource in the state.

Useful for debugging and understanding the current attributes of a resource.


Sub-Command 3 - pull
The terraform state pull command is used to pull the state from a remote
backend and output it to stdout.

Useful to view or backup the current state stored in a remote backend.


Sub-Command 4 - rm
The terraform state rm command is used to remove items from the state.

Use this when you need to remove a resource from Terraform’s state
management without destroying it.
Sub-Command 5 - mv
The terraform state mv command is used to move an item in the state to a
different address.
Sub-Command 6 - replace-provider
The terraform state replace-provider command is used to replace the provider
for resources in a Terraform state.
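Typical usage of these sub-commands (the resource addresses and provider names are illustrative):

terraform state list
terraform state show aws_instance.myec2
terraform state pull > backup.tfstate
terraform state rm aws_instance.myec2
terraform state mv aws_instance.myec2 aws_instance.webserver
terraform state replace-provider hashicorp/aws registry.example.com/example/aws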
Remote State Data Source
Setting up the Base
In larger enterprises, there can be multiple teams working on different
aspects of an infrastructure resource.

Output Values
Remote State
52.30.20.5
Public IPs
52.50.20.5
Networking Team

Firewall Rules

Security Team
Understanding the Challenge
The Security Team wants all the IP addresses added as output values in the
tfstate file of the Networking Team's project to be whitelisted in the firewall.

Output Values
Remote State
52.30.20.5
Public IPs
52.50.20.5
Networking Team

Firewall Rules
Fetch Output Values and Whitelist
Security Team
What Needs to be Achieved

1. The code from Security Team project should connect to the terraform.tfstate
file managed by the Networking team.

2. The code should fetch all the IP addresses mentioned in the output values
in the state file.

3. The code should whitelist these IP addresses in Firewall rules.


Practical Workflow Steps
1. Create two folders for networking-team and security-team

2. Create Elastic IP resource in Networking Team and Store the State file in S3
bucket. Output values should have information of EIP.

3. In Security Team, use Terraform Remote State data source to connect to the
tfstate file of Networking Team.

4. Use the Remote State to fetch EIP and whitelist it in Security Group rule.
Introducing Remote State Data Source
The terraform_remote_state data source allows us to fetch output values from a
specific state backend

Step 1 - Define Remote State Source Step 2 - Define Data to Fetch
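A minimal sketch of both steps, assuming the networking team's state lives in S3 and exposes an eip_public_ip output (bucket name and security group ID are hypothetical):

data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "kplabs-terraform-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

resource "aws_security_group_rule" "allow_eip" {
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["${data.terraform_remote_state.network.outputs.eip_public_ip}/32"]
  security_group_id = "sg-0123456789"
}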


Terraform Import
Typical Challenge
It can happen that all the resources in an organization are created manually.

Organization now wants to start using Terraform and manage these resources
via Terraform.

Manually Created
Earlier Approach
In the older approach, Terraform import would create the state file associated
with the resources running in your environment.

Users still had to write the tf files from scratch.

s3.tf
terraform import
terraform.tfstate create manually

ec2.tf

Manually Created
Newer Approach
In the newer approach, terraform import can automatically create the terraform
configuration files for the resources you want to import.

resources.tf
Terraform Import

terraform.tfstate

Manually Created
Point to Note

Terraform 1.5 introduces automatic code generation for imported resources.

This dramatically reduces the amount of time you need to spend writing code to
match the imported resources.

This feature is not available in older versions of Terraform.
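A minimal sketch of the newer import workflow, assuming an existing, manually created S3 bucket:

import {
  to = aws_s3_bucket.legacy
  id = "manually-created-bucket"   # hypothetical name of the existing bucket
}

Generate the matching configuration and then apply:

terraform plan -generate-config-out=generated.tf
terraform apply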


Multiple Provider Configuration
Understanding the Requirement
There can be multiple resource types in the same project, and you may want to deploy
them in different AWS regions.

Singapore Region

Mumbai Region
Setting the Base
At this stage, we have been dealing with single provider configuration.

In below code, both resources will be created in Singapore region.


Alias Meta-Argument
Each provider can have one default configuration, and any number of alternate
configurations that include an extra name segment (or "alias").
Final Output Using Alias
By using the provider meta-argument, you can select an alternate provider
configuration for a resource.
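A minimal sketch of a default and an aliased provider configuration (AMI IDs are placeholders):

provider "aws" {
  region = "ap-southeast-1"    # default configuration (Singapore)
}

provider "aws" {
  alias  = "mumbai"
  region = "ap-south-1"        # alternate configuration (Mumbai)
}

resource "aws_instance" "singapore_vm" {
  ami           = "ami-0123456789"
  instance_type = "t2.micro"
}

resource "aws_instance" "mumbai_vm" {
  provider      = aws.mumbai   # selects the alternate configuration
  ami           = "ami-9876543210"
  instance_type = "t2.micro"
}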
Sensitive Parameter
Setting the Base
By default, Terraform will show the values associated with defined attributes in
the CLI output during plan, apply operations for most of the resources.
What to Expect
We should design our Terraform code in such way that no sensitive information
is available and shown out of the box in CLI Output, Logs, etc.
Basics of Sensitive Parameter
Adding the sensitive parameter ensures that you do not accidentally expose this
data in CLI output or log output.
Sensitive Values AND Output Values
If you try to reference a sensitive value in an output value, Terraform will immediately
give you an error.
Sensitive Values AND Output Values
If you still want the sensitive content to be available in the output section of the state
file, but not visible in CLI output and logs, the following approach can be used.
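A minimal sketch of the sensitive parameter on a variable and its output:

variable "db_password" {
  type      = string
  sensitive = true
}

output "db_password" {
  value     = var.db_password
  sensitive = true   # required, otherwise Terraform errors when an output references a sensitive value
}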
Important Point to Note
The sensitive parameter will NOT protect or redact information in the state file.

Configuration File State File


Benefits of Mature Providers
Various providers like AWS automatically consider the password argument of a
database instance as sensitive and will redact it as a sensitive value.
Overview of Vault
HashiCorp Certified: Vault Associate
Let’s get started
HashiCorp Vault allows organizations to securely store secrets like tokens, passwords, certificates
along with access management for protecting secrets.

One of the common challenges nowadays in an organization is “Secrets Management”

Secrets can include, database passwords, AWS access/secret keys, API Tokens, encryption keys
and others.
Dynamic Secrets

Life Becomes Easier
Once Vault is integrated with multiple backends, your life will become much easier and you can
focus more on the right work.

Major aspect related to Access Management can be taken over by vault.

Vault Provider
Back to Providers
Vault Provider
The Vault provider allows Terraform to read from, write to, and configure HashiCorp
Vault.

Inject in Terraform
admin
password123

db_creds

Vault

Important Note

Interacting with Vault from Terraform causes any secrets that you read and write to be
persisted in both Terraform's state file and in any generated plan files.

Terraform Cloud
Terraform in detail
Overview of Terraform Cloud
Terraform Cloud manages Terraform runs in a consistent and reliable environment with various
features like access controls, private registry for sharing modules, policy controls and others.

Sentinel
Terraform Cloud In Detail
Overview of the Sentinel
Sentinel is a policy-as-code framework integrated with the HashiCorp Enterprise products.

It enables fine-grained, logic-based policy decisions, and can be extended to use information
from external sources.

Note: Sentinel policies are a paid feature.

terraform plan sentinel checks terraform apply

High Level Structure

Policy Policy Sets Workspace

Block EC2 without tags

Air Gapped Environments
Installation Methods
Understanding Concept of Air Gap
An air gap is a network security measure employed to ensure that a secure computer network
is physically isolated from unsecured networks, such as the public Internet.

Internet Gateway Internal Router

Internet Connectivity Air Gapped System

Usage of Air Gapped Systems

Air Gapped Environments are used in various areas. Some of these include:

● Military/governmental computer networks/systems

● Financial computer systems, such as stock exchanges

● Industrial control systems, such as SCADA in Oil & Gas fields

Terraform Enterprise Installation Methods
Terraform Enterprise installs using either an online or an air gapped method; as the
names imply, one requires internet connectivity and the other does not.

Air Gap Install

Isolated Server
Terraform Enterprise

Relax and Have a Meme Before Proceeding

Terraform Challenges
Key Observations
At this stage, we have been learning core concepts of Terraform step by step.

Whenever learning a new technology, a small set of practical projects is always
useful to grasp the practical aspects of the technology.
Introducing Terraform Challenges
With Terraform Challenges, we aim to reduce the gap between learning and
gaining practical experience.

Terraform Master
About the Challenges
Each Challenge will test you in different areas of Terraform that will help you gain
some kind of hands-on experience.

Troubleshoot Secure

Optimize Analyze

Awesome Students
Terraform
Workflow Steps
We will have multiple sets of challenges.

After each challenge video, we will have a Solution Hints video and then the
Practical Solution video.

Challenge - 1 Solution Hints Practical Solution


Terraform Challenge 1
Understanding the Challenge
A Developer at Sample Small Corp had created a Terraform File for creating
certain resources.

The code was written a few years back based on the old Terraform version.
What you need to do?
1. Create Infrastructure using the provided code (without modifications).

2. Verify if the code works in the latest version of Terraform and the provider.

3. Modify and Fix the code so that it works with latest version of Terraform.

4. Feel free to edit the code as you like.


TF Challenge 1 - Solution Discussion and Hints
Hint 1 - Create Infrastructure with Base Code
Based on the initial code given to you, use appropriate version of binaries to
ensure infrastructure gets created successfully.
Hint 2 - Access/Secret Keys
There are hardcoded AWS Access/Secret keys in the code.

This MUST be fixed.


Hint 3 - Provider Block
The provider block was used to define the provider version along with 3rd party providers.

Instead, use the new required_providers block to define providers and version constraints.
Hint 4 - Terraform Core Version Requirement

Since the challenge states that latest version of Terraform should be used, you
can plan to remove the required_version block from the code.
Hint 5 - Code Upgrade
Does the resource block of “aws_eip” work with the latest version of Terraform?

It can happen that latest AWS provider requires some changes in the aws_eip
resource block. Incorporate these changes to ensure EIP gets created.
Join us in our Adventure

kplabs.in/chat

Be Awesome

kplabs.in/linkedin
Terraform Challenge 2
Understanding the Challenge
A sample code has been provided to you that creates certain resources.

You are required to optimize the code following the Best Practices.
Conditions to Meet
1. Ensure the code is working and the resources get created.

2. Do NOT delete the existing terraform.lock.hcl file. The file is free to be modified
based on requirements.

3. Demonstrate the ability to modify the variable “splunk” from 8088 to 8089 without
modifying the Terraform code.
TF Challenge 2 - Solution Discussion and Hints
Hint 1 - Indentation
Indentation issues are present in the code.

Make sure that code is properly indented.


Hint 2 - Using Variables and TFVars
Many values are hard-coded as part of the code.

This makes it difficult to modify if code base becomes larger.


Hint 3 - Using Tags
It is important that resources are properly tagged

This will make it easier to identify the resource among all others.
Hint 4 - Variable Precedence

Consider using appropriate variable definition precedence to override variables
defined in the Terraform code.
Hint 5 - Right Folder Structure

Having right naming convention for files is important.

Bad Structure: Everything in one single file named main.tf

Good Structure: providers.tf , variables.tf , ec2.tf and so on.


Terraform Challenge 3
Understanding the Requirements
You will be provided with a variable named instance_config

The variable type is map.


Conditions to Meet

1. Based on the values specified in the map, EC2 instances should be created
accordingly.

2. If a key/value is removed from the map, EC2 instances should be destroyed
accordingly.
TF Challenge 3 - Hints
Hint 1 - Loops
The requirement indicates that based on key/value specified in map, the
resources should be created and destroyed accordingly.

We need to use some kind of loops to achieve this.


Hint 2 - for_each

If a resource block includes a for_each argument whose value is a map or a set
of strings, Terraform creates one instance for each member of that map or set.
Terraform Challenge 4
Requirement - 1
The client wants code that can create an IAM user in an AWS account with the following
naming syntax:

admin-user-{account-number-of-aws}

admin-user-12345
AWS: 12345

admin-user-67890
AWS: 67890
Requirement - 2
The client wants logic that will show the names of ALL users in the AWS account
in the output.

AWS Account
Requirement - 3
Along with the list of users, the client also wants Terraform to show the total number
of users in AWS.

3 Users

AWS Account
TF Challenge 4 - Solution Hints
Hint 1 - Data Sources

Data sources allow us to dynamically fetch information from infrastructure
resources or other state backends.

You can try to dynamically fetch information like the AWS Account ID and user names
using data sources.
Hint 2 - Functions

Calculating the number of users is outside the scope of a data source.

You need to make use of a Terraform function that can calculate the total number of
users and output it.
Join us in our Adventure

kplabs.in/chat

Be Awesome

kplabs.in/linkedin
Overview of HashiCorp Exams
Let’s Get Certified!
Overview of HashiCorp Associate Exams
Overview of the basic exam related information.

Assessment Type Description

Type of Exams Multiple Choice

Format Online Proctored

Duration 1 hour

Questions 57

Price 70.50 USD + Taxes

Language English

Expiration 2 years

Multiple Choice

This includes various sub-formats, including:

● True or False
● Multiple Choice
● Fill in the blank

Delta Type of Question
Example 1:

Demo Software stores information in which type of backend?

Format - Online Proctored

Important Rules to be followed:

● You are alone in the room


● Your desk and work area are clear
● You are connected to a power source
● No phones or headphones
● No dual monitors
● No leaving your seat
● No talking
● Webcam, speakers, and microphone must remain on throughout the test.
● The proctor must be able to see you for the duration of the test.

My Experience - Before Room

My Experience - After Room

My Experience - My Desk

Registration Process

The high-level steps for registering for the exams are as follows:

1. Login to the HashiCorp Certification Page.


2. Register for Exams.
3. Check System Requirements
4. Download PSI Software
5. Best of Luck & Good Luck!

Make sure to complete system check.

Exam Preparation - Part 1
Providers in AWS
A provider is responsible for understanding API interactions and exposing
resources.

When we run terraform init, plugins required for the provider are automatically
downloaded and saved locally to a .terraform directory.
Interesting Question
Is provider block {..} mandatory to be added as part of your Terraform
configuration? Yes/No
Two Pointers from Documentation

All Terraform configurations must declare which providers they require so that
Terraform can install and use them.

A provider block may be omitted if its contents would otherwise be empty.


Concluding this Question
If you plan to explicitly add some contents to provider {} like region, credentials,
then defining this block is required.

Otherwise, even if you skip, the terraform apply will work fine.
Alias in Providers
alias can be used for using the same provider with different configurations for
different resources
Point to Note - Providers

It is a good practice to store the credentials outside of the Terraform configuration,
such as in environment variables.
Terraform Settings
Terraform Settings are used to configure project-specific Terraform behaviours,
such as requiring a minimum Terraform version to apply to your configuration.

Options That Can be Defined

Required Terraform Version

Required Provider and Version

BackEnd Configuration

Experimental Features
Point to Note

You cannot define configuration related to regions or Access/Secret keys inside the
required_providers block.

For these, you have to use a provider {} block.


Versioning Constraint
Version Constraint allows you to specify mix of multiple operators to select a
suitable version of Terraform and Provider Plugins.

Operators and Examples    Description

>= 1.0                    Greater than or equal to the version.

<= 1.0                    Less than or equal to the version.

~> 2.0                    Any version in the 2.x range.

>= 2.10, <= 2.30          Any version between 2.10 and 2.30.
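A minimal sketch of version constraints in practice (version numbers are illustrative):

terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"   # any 5.x release
    }
  }
}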


Provider Tiers
There are 3 primary type of provider tiers in Terraform.

Provider Tier   Description

Official        Owned and maintained by HashiCorp.

Partner         Owned and maintained by a technology company that maintains a direct
                partnership with HashiCorp.

Community       Owned and maintained by individual contributors.


Terraform Init
The terraform init command initializes a working directory.

Initialization includes installing provider plugins, backend initialization, copying source
modules, etc.

This is the first command that should be run after writing a new Terraform
configuration. It is safe to run multiple times.
Terraform Init Upgrade
The terraform init -upgrade command installs the latest module and provider versions
allowed within the configured constraints.

If you have the latest provider plugins installed and you define new version
constraints that match a different version, you will have to run terraform init -upgrade.
Terraform Plan
The terraform plan command is used to create an execution plan.

The infrastructure is not modified as part of this plan.

The state file is not modified even when it detects drift in real-world and current
infrastructure.
Saving Plan to File
You can use the optional -out=FILE option to save the generated plan to a file on
disk, which you can later execute by passing the file to terraform apply as an
extra argument.

This ensures consistent infrastructure as defined in the plan.

terraform plan -out ec2.plan


Terraform Apply
terraform apply command is used to apply the changes required to reach the
desired state of the configuration.

The state file gets modified in this command.

Name of state file = terraform.tfstate

Terraform Apply can change, destroy and provision resources but cannot import
any resource.
Terraform Destroy

terraform destroy command is used to destroy the Terraform-managed infrastructure.

terraform destroy command is not the only command through which infrastructure can
be destroyed.
Terraform Format
terraform fmt command is used to rewrite Terraform configuration files to a
canonical format and style. It directly performs a "write" operation, not just a "read".

For use-case, where the all configuration written by team members needs to
have a proper style of code, terraform fmt can be used.

After
Before
terraform fmt options
You have to keep a note of two important flags for terraform fmt command

Command Description

-check Check if the input is formatted. Files are not modified.

-recursive Also process files in subdirectories. By default, only the given directory
(or current directory) is processed.
Exam Preparation - Part 2
Terraform Validate
terraform validate command validates the configuration files in a directory.

Requires an initialized working directory with any referenced plugins and modules
installed.

terraform plan uses an implied validation check.


Terraform Refresh

terraform refresh command reads the current settings from all managed remote
objects and updates the Terraform state to match.

This won't modify your real remote objects, but it will modify the Terraform state.

This command is deprecated, because its default behavior is unsafe.


Resource Blocks
A resource block declares a resource of a given type ("aws_instance") with a
given local name ("web").

Resource type and Name together serve as an identifier for a given resource
and so must be unique.

Address of the following resource is: aws_instance.web


Important Terminology

aws_instance Resource Type

myec2 Local name for the resource

ami Argument Name

ami-123 Argument value
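Putting the terminology together in a minimal resource block:

resource "aws_instance" "myec2" {
  ami           = "ami-123"     # argument name = ami, argument value = ami-123
  instance_type = "t2.micro"
}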


Data Types in Terraform
Data Types Description

string a sequence of Unicode characters representing some text, like "hello".

number A Numeric value

bool a boolean value, either true or false

list a sequence of values, like ["us-west-1a", "us-west-1c"]

set a collection of unique values that do not have any secondary identifiers or
ordering.

map a group of values identified by named labels, like {name = "Mabel", age =
52}.

null a value that represents absence or omission.


Point to Note - Data Types

Array data types are not supported in Terraform


State Management
The terraform state command is used for advanced state management

Sub-Commands Description

list List resources within terraform state file.

mv Moves an item within the Terraform state.

pull Manually download and output the state from remote state.

push Manually upload a local state file to remote state.

rm Remove items from the Terraform state

show Show the attributes of a single resource in the state.

replace-provider Used to replace the provider for resources in a Terraform state.


Use-Case - Removing Item from State

There are 5 EC2 instances created through Terraform using count =5

The Team wants to destroy all the EC2 instances except the second instance
with the resource address of aws_instance.web[1].

1. This is not possible since the instance created through Count.


2. Apply taint on the EC2 instance.
3. Use terraform state rm aws_instance.web[1]
4. Use terraform state mv aws_instance.web[1]
5. None of the Above
Debugging in Terraform
Terraform has detailed logs that can be enabled by setting the TF_LOG
environment variable to any value.

You can set TF_LOG to one of the log levels TRACE, DEBUG, INFO, WARN or
ERROR to change the verbosity of the logs.

To persist logged output, you can set TF_LOG_PATH
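A typical debugging session might look like this:

export TF_LOG=TRACE
export TF_LOG_PATH="terraform-debug.log"
terraform apply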


Terraform Import
Allows importing existing infrastructure to Terraform.

Automatic code generation for imported resources is supported.

You can use import blocks to import more than one resource at a time.

resources.tf
Terraform Import

terraform.tfstate

Manually Created
Import Workflow Steps
Local Values
Locals are used when you want to avoid repeating the same expression multiple
times.

Local values are created by a locals block (plural), but you reference them as
attributes on an object named local (singular)

Local value can reference values from other variables, locals etc.
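A minimal sketch of local values (the tag values are placeholders):

locals {
  common_tags = {
    Team    = "payments"
    Project = "kplabs-demo"
  }
}

resource "aws_instance" "myec2" {
  ami           = "ami-0123456789"
  instance_type = "t2.micro"
  tags          = local.common_tags   # defined in "locals", referenced as "local"
}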
Terraform Workspace
Terraform workspaces enable us to manage multiple sets of deployments from
the same sets of configuration files.

State File Directory = terraform.tfstate.d

Not suitable for strong isolation between environments (e.g., stage/prod).

Use-Case Command

Create New Workspace terraform workspace new kplabs

Switch to a specific Workspace terraform workspace select prod


Terraform Modules
Terraform Modules allow us to centralize the resource configuration, and it
makes it easier for multiple projects to re-use the Terraform code.

Instead of writing code from scratch, we can use multiple ready-made modules
available.
Calling a Module
Module source code can be present in a wide variety of locations including:

GitHub, Local Paths, Terraform Registry, S3 Buckets, HTTP URLs

To reference a module, you need to make use of module block and source

Terraform uses this during the module installation step of terraform init to download the source
code to a directory on local disk so that other Terraform commands can use it.
Example 1 - Local Paths
Local paths are used to reference to a module that is available in local
filesystem.

A local path must begin with either ./ or ../ to indicate that a local path.

Modules sourced from local paths do NOT support versions.


Example 2 - Generic Git Repository
Arbitrary Git repositories can be used by prefixing the address with the special
git:: prefix.
Root vs Child Modules
Root Module resides in the main working directory of your Terraform
configuration. This is the entry point for your infrastructure definition

A module that has been called by another module is often referred to as a child
module.

Child Module Root Module


Module Outputs
A child module can use outputs to expose a subset of its resource attributes to a
parent module.

Format: module.<MODULE NAME>.<OUTPUT NAME>


Module Versioning
When using modules installed from a module registry, HashiCorp recommends
explicitly constraining the acceptable version numbers to avoid unexpected or
unwanted changes.

It is not mandatory to specify a version argument.


Terraform Registry
● Hosts a broad collection of publicly available Terraform modules.
● Each Terraform module has an associated address.
● A module address has the syntax hostname/namespace/name/system

The hostname/ portion of a module address is optional, and if omitted, it defaults to
registry.terraform.io/.
Functions in Terraform
The Terraform language includes a number of built-in functions that you can use
to transform and combine values.

NO SUPPORT for User-Defined Functions.

Function Categories Functions Available

Numeric Functions abs, ceil, floor, max, min

String Functions concat, replace, split, join, tolower,toupper

Collection Functions element, keys, length, merge, sort, slice

Filesystem Functions: file, filebase64, dirname


Lookup function
lookup retrieves the value of a single element from a map, given its key. If the
given key does not exist, the given default value is returned instead.
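For example, in terraform console:

> lookup({a = "ay", b = "bee"}, "a", "what?")
"ay"
> lookup({a = "ay", b = "bee"}, "c", "what?")
"what?"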
Zipmap function
zipmap constructs a map from a list of keys and a corresponding list of values.
Index function
index finds the element index for a given value in a list.
Element Function
element retrieves a single element from a list.

Format: element(list, index)


Testing Element Function
Code: Name = element(var.tags,count.index)
toset Function
toset function will convert the list of values to SET
TimeStamp Function
timestamp returns a UTC timestamp string in RFC 3339 format.
File Function
File function can reduce the overall Terraform code size by loading contents
from external sources during terraform operations.

After

Before
Meta Arguments in Terraform
Terraform allows us to include meta-arguments within the resource block, which
allows some details of this standard resource behaviour to be customized on a
per-resource basis.

Inside resource block


Different Meta-Arguments
Meta-Argument   Description

depends_on      Handle hidden resource or module dependencies that Terraform cannot
                automatically infer.

count           Accepts a whole number, and creates that many instances of the resource.

for_each        Accepts a map or a set of strings, and creates an instance for each item in that
                map or set.

lifecycle       Allows modification of the resource lifecycle.

provider        Specifies which provider configuration to use for a resource, overriding
                Terraform's default behavior of selecting one based on the resource type name.
Lifecycle Meta-Argument
Some details of the default resource behaviour can be customized using the
special nested lifecycle block within a resource block body:
Arguments Available for LifeCycle Block
There are four arguments available within the lifecycle block.

Arguments               Description

create_before_destroy   The new replacement object is created first, and the prior object is destroyed
                        after the replacement is created.

prevent_destroy         Causes Terraform to reject with an error any plan that would destroy the
                        infrastructure object associated with the resource.

ignore_changes          Ignore certain changes to the live resource that do not match the
                        configuration.

replace_triggered_by    Replaces the resource when any of the referenced items change.
Count and Count Index
The count argument accepts a whole number, and creates that many instances
of the resource.

count.index: the distinct index number (starting with 0) corresponding to this instance.
Exam Preparation - Part 3
Find the Issue - Use-Cases
You can expect a use case in exam with a sample Terraform code, and you must
find what should be removed as part of Terraform best practice.
Sentinel
Sentinel is an embedded policy-as-code framework integrated with the
HashiCorp Enterprise products. Sentinel is a proactive service.

Can be used for various use-cases like:

● Verify if EC2 instance has tags.


● Verify if the S3 bucket has encryption enabled.

terraform plan sentinel checks terraform apply


Terraform Graph
Terraform graph refers to a visual representation of the dependency
relationships between resources defined in your Terraform configuration.

The output of terraform graph is in the DOT format, which can easily be
converted to an image.
Input Variables
Terraform input variables are used to pass certain values from outside of the
configuration

Name Value

vpn_ip 101.0.62.210/32

app_port 8080

Variable File
Terraform TFVARS
The terraform.tfvars file can be used to define values for all the variables.

This approach leads to an easier setup for multi-project deployments.


Selecting tfvars File
If you have multiple variable definition files (*.tfvars), you can manually specify which
file to use on the command line.
Declaring Variable Values
When variables are declared in your configuration, they can be set in a number
of ways:

1. Variable Defaults.

2. Variable Definition File (*.tfvars)

3. Environment Variables

4. Setting Variables in the Command Line.


Setting Variable in Command Line
To specify individual variables on the command line, use the -var option when
running the terraform plan and terraform apply commands:
Setting Variable through Environment Variables
Terraform searches the environment of its own process for environment
variables named TF_VAR_ followed by the name of a declared variable.
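Typical ways of supplying values (the names and values are illustrative):

terraform plan -var="instance_type=t2.large"
terraform apply -var-file="prod.tfvars"
export TF_VAR_instance_type="t2.large"   # picked up automatically for variable "instance_type"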
Variable Definition Precedence
Terraform loads variables in the following order, with later sources taking
precedence over earlier ones:

1. Environment variables
2. The terraform.tfvars file, if present.
3. The terraform.tfvars.json file, if present.
4. Any *.auto.tfvars or *.auto.tfvars.json files, processed in lexical order of their filenames.
5. Any -var and -var-file options on the command line.


Variables with undefined values
If you have variables with undefined values, it will NOT directly result in an error.

Terraform will ask you to supply the value associated with them.
Not Allowed Variable Names
We cannot use all words within variable names.

Terraform reserves some additional names that can no longer be used as input
variable names for modules. These reserved names are:

● count
● depends_on
● for_each
● lifecycle
● providers
● source
Points to Note - State File
Terraform state file generally stores details about the resources that it manages.

Various aspects like “Input Variables” are not stored.

Output values are stored in the state file, but their descriptions are not.
Terraform Console
Terraform Console provides an interactive environment specifically designed to
test functions and experiment with expressions before integrating them into your
main code.
Dependency Lock File
The Terraform dependency lock file allows us to lock to a specific version of a provider.
The file is named terraform.lock.hcl and is used for tracking provider dependencies.

If a particular provider already has a selection recorded in the lock file, Terraform
will always re-select that version for installation, even if a newer version has
become available.

You can override that behavior by running terraform init -upgrade


Dependencies - Implicit
With implicit dependency, Terraform can automatically find references of the
object, and create an implicit ordering requirement between the two resources.

In the following screenshot, Terraform will create EC2 first before EIP.
Dependencies - Explicit
Explicitly specifying a dependency is only necessary when a resource relies on
some other resource's behavior but doesn't access any of that resource's data in
its arguments.

Uses the depends_on meta-argument
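A minimal sketch of an explicit dependency (resource names are placeholders):

resource "aws_s3_bucket" "app_data" {
  bucket = "kplabs-demo-app-data"   # hypothetical bucket name
}

resource "aws_instance" "myec2" {
  ami           = "ami-0123456789"
  instance_type = "t2.micro"

  depends_on = [aws_s3_bucket.app_data]   # created only after the bucket exists
}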


Data Sources
Data sources allow Terraform to use and fetch information defined outside of
Terraform.
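A minimal sketch of a data source that fetches the current AWS account ID:

data "aws_caller_identity" "current" {}

output "account_id" {
  value = data.aws_caller_identity.current.account_id
}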
Terraform Enterprise
Terraform Enterprise provides several added advantages compared to Terraform
Cloud.

Some of these include:

● Single Sign-On
● Auditing
● Private Data Center Networking
● Clustering

Team & Governance features are not available in the Terraform Cloud Free tier (they are paid features).
Remote Backend
The remote backend stores Terraform state and may be used to run operations
in Terraform Cloud.

When using full remote operations, operations like terraform plan or terraform
apply can be executed in Terraform Cloud's run environment, with log output
streaming to the local terminal.

The remote backend was the primary implementation of HCP Terraform's CLI-driven run
workflow for Terraform versions 0.11.13 through 1.0.x. We recommend using the native
cloud integration for Terraform versions 1.1 or later, as it provides an improved user
experience and various enhancements.
Points to Note
HCP = HashiCorp Cloud Platform

Secure Variable Storage is available in Terraform Enterprise and Cloud but not in
the normal version of Terraform.

Terraform Cloud / Enterprise comes with a Private Module registry which allows
organizations to restrict access based on requirements.

Terraform Cloud provides the feature of Remote State storage.

Encryption of state file is available


Points to Note
In a HCP workspace linked to a VCS repository, runs start automatically when
you merge or commit changes to version control.

A workspace is linked to one branch of a VCS repository and ignores changes to
other branches.
Sensitive Values in HCP
To protect secret values in HCP, you can mark any Terraform or environment
variable as sensitive data by clicking its Sensitive checkbox that is visible during
editing.

Marking a variable as sensitive makes it write-only and prevents all users
(including you) from viewing its value.
Recreating the Resource
The -replace option can be used with terraform apply to force Terraform to replace an
object even though there are no configuration changes that would require it.

terraform apply -replace="aws_instance.web"

A similar kind of functionality was achieved using the terraform taint command in
older versions of Terraform. Not recommended now.
Benefits of IAC Tool
There are three primary benefits of Infrastructure as Code tools:

Automation, Versioning, and Reusability.

Various IAC Tools Available in the market:

● Terraform
● CloudFormation
● Azure Resource Manager
● Google Cloud Deployment Manager
terraform output

The terraform output command is used to extract the value of an output variable
from the state file.
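A minimal sketch (output name and attribute are illustrative):

output "instance_public_ip" {
  value = aws_instance.web.public_ip
}

# After terraform apply, read the value back from state:
#   terraform output instance_public_ip
#   terraform output -raw instance_public_ip   # plain value, useful in scripts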
Module Source and Git Branches

By default, Terraform will clone and use the default branch (referenced by
HEAD) in the selected repository.

You can override this using the ref argument.

Format: ?ref=<version-number>
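A minimal sketch (repository URL and tag are placeholders):

module "vpc" {
  # Clone the repository and check out the v1.2.0 tag instead of HEAD
  source = "git::https://github.com/example-org/terraform-aws-vpc.git?ref=v1.2.0"
}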
Splat Expressions
A splat expression allows us to get a list of a particular attribute from all elements of a list of objects.

Resources that use the for_each argument will appear in expressions as a map
of objects, so you can't use splat expressions with those resources.
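A minimal sketch of a splat expression with a count-based resource (names are illustrative); with for_each, the resource is a map of objects, so a splat does not work directly:

resource "aws_iam_user" "team" {
  count = 3
  name  = "user-${count.index}"   # illustrative naming
}

output "user_names" {
  # Splat expression: collects the name attribute from every element
  value = aws_iam_user.team[*].name
}

# With a for_each resource (a map of objects), one option instead of a splat is:
#   value = values(aws_iam_user.team)[*].name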
Important Documentation Reference - Splat
Point to Note

Will this code block display the names of all the IAM usernames created?

Answer = NO
Legacy Splat Expression
Earlier versions of the Terraform language had a slightly different version of splat
expressions, which Terraform continues to support for backward compatibility.

The legacy "attribute-only" splat expressions use the sequence .*, instead of [*]:
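For example, these two expressions are equivalent; the first uses the legacy attribute-only syntax (resource name is illustrative):

aws_iam_user.team.*.name    # legacy splat
aws_iam_user.team[*].name   # current splat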
Fetching Values from List

To fetch the instance_type value of m5.xlarge from the list, you can reference index 1:

var.size[1]
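A minimal sketch of the list being indexed (values are illustrative):

variable "size" {
  type    = list(string)
  default = ["t2.micro", "m5.xlarge", "c5.large"]
}

# var.size[1] evaluates to "m5.xlarge" (list indexes start at 0)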
Fetching Values from Map
To reference the "t2.small" instance type from the map below, the following approach can be used:

var.types["ap-south-1"]
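A minimal sketch of the map being referenced (keys and values are illustrative):

variable "types" {
  type = map(string)
  default = {
    "us-east-1"  = "t2.micro"
    "ap-south-1" = "t2.small"
  }
}

# var.types["ap-south-1"] evaluates to "t2.small"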
Dealing with Larger Infrastructure
Cloud providers set rate limits on their APIs, so Terraform can only request a certain number of resources over a given period of time.

It is important to break larger configurations into multiple smaller configurations that can be independently applied.

Alternatively, you can make use of the -refresh=false and -target flags as a workaround (not recommended).
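For example (the resource address is illustrative):

# Skip refreshing existing state and limit the operation to a single resource
terraform plan -refresh=false -target=aws_instance.web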
Backends
Backends primarily determine where Terraform stores its state.

By default, Terraform implicitly uses a backend called local to store state as a local file on disk.

If required, you can store the state file in other backends as well.
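A minimal sketch of a non-local backend, using S3 (bucket name, key, and region are placeholders):

terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"   # placeholder bucket name
    key    = "prod/terraform.tfstate"
    region = "us-east-1"
  }
}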


Points to Note - Initializing Backend

When configuring a backend for the first time (moving from no defined backend
to explicitly configuring one), Terraform will give you the option to migrate your
state to the new backend.

This lets you adopt new backends without losing any existing state.
Local Backend
The local backend stores the state on the local filesystem, locks that state using
system APIs, and performs operations locally.

By default, Terraform uses the "local" backend, which is the normal behavior of Terraform that you are used to.
Air Gapped Environments
An air gap is a network security measure employed to ensure that a secure
computer network is physically isolated from unsecured networks, such as the
public Internet.

[Diagram: an air-gapped install of Terraform Enterprise on an isolated server]
Requirements for Publishing Module in Registry
Core requirements and descriptions:

● GitHub: The module must be on GitHub and must be a public repo. This is only a requirement for the public registry and not for a private registry.

● Named terraform-<PROVIDER>-<NAME>: The repository must follow this naming format, for example terraform-aws-ec2-instance.

● Repository description: The GitHub repository description is used to populate the short description of the module.

● Standard module structure: The module must adhere to the standard module structure. This allows the registry to inspect your module and generate documentation, track resource usage, parse submodules and examples, and more.

● x.y.z tags for releases: For example, v1.0.4 and 0.9.2.

Comments in Terraform
A comment is a text note added to the source code to provide explanatory
information, usually about the function of the code.
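Terraform (HCL) supports three comment styles; the resource below is purely illustrative:

# Single-line comment (the default, idiomatic style)
// Single-line comment (alternative style)
/*
  Multi-line
  comment
*/
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"   # illustrative AMI ID
  instance_type = "t2.micro"                # comments can also follow an argument
}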
Dynamic Blocks
Dynamic Block allows us to dynamically construct repeatable nested blocks,
which are supported inside resource, data, provider, and provisioner blocks.

Overuse of Dynamic blocks can make configuration hard to read and maintain.
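A minimal sketch of a dynamic block generating repeated ingress blocks inside a security group (ports, names, and CIDR range are illustrative):

variable "ingress_ports" {
  type    = list(number)
  default = [80, 443]
}

resource "aws_security_group" "web" {
  name = "web-sg"   # illustrative name

  dynamic "ingress" {
    for_each = var.ingress_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}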
Miscellaneous Pointers
GitHub is not a supported backend type in Terraform.

API and CLI access for Terraform Cloud can be managed through API tokens
that can be generated from Terraform Cloud UI.

Terraform uses parallelism to reduce the time it takes to create resources. By default, this value is set to 10.
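The default can be changed per run with the -parallelism flag, for example:

terraform apply -parallelism=5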
Code Formatting Recommended Practices
Indent two spaces for each nesting level

When multiple arguments with single-line values appear on consecutive lines at the same nesting level, align their equals signs:
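For example (values are illustrative); terraform fmt applies this formatting automatically:

resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t2.micro"
  monitoring    = true
}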
Miscellaneous Pointers

Terraform does not require Go as a prerequisite.

Terraform providers are NOT always installed through the internet. There is a
different offline approach for air-gapped systems.

Terraform and a Terraform provider NEED NOT have the same major version to be compatible.
Sensitive Parameter
Adding the sensitive parameter ensures that you do not accidentally expose this data in CLI output or log output.

The sensitive value will still be present in the state file.
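A minimal sketch (the variable name is illustrative):

variable "db_password" {
  type      = string
  sensitive = true
}

output "db_password" {
  value     = var.db_password
  sensitive = true   # redacted in CLI output, but still stored in the state file
}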


Actions Forbidden When State File is Locked
When the state file is locked, the following actions are forbidden:

Running terraform apply or any other command that modifies the state file (e.g.
terraform plan, terraform destroy, etc.)

Running terraform refresh, which updates the state file to reflect the current state
of the infrastructure

terraform state [push, rm, mv]


Miscellaneous Pointers

1. terraform.tfstate will NOT always match the current state of the infrastructure.

If you are making use of a Git repository for committing Terraform code, the .gitignore file should be configured to ignore certain Terraform files that might contain sensitive data.

Some of these can include:


terraform.tfstate file (this can include sensitive information)
*.tfvars (may contain sensitive data like passwords)
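A minimal .gitignore sketch covering the files mentioned above (adjust to your project; the extra entries are commonly ignored as well):

# Local state and its backup (may contain sensitive values)
terraform.tfstate
terraform.tfstate.backup

# Variable definition files that may hold secrets
*.tfvars

# Local provider/plugin cache (commonly ignored in addition to the above)
.terraform/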
Points to Note - State

1. If supported by your backend, Terraform will lock your state for all
operations that could write state.

2. Not all backends support locking functionality.

3. Terraform has a force-unlock command to manually unlock the state if unlocking failed: terraform force-unlock LOCK_ID [DIR]
Join us in our Adventure

kplabs.in/chat

Be Awesome

kplabs.in/linkedin
