Ansible for AWS Sample
This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publishing
process. Lean Publishing is the act of publishing an in-progress ebook using lightweight tools and
many iterations to get reader feedback, pivot until you have the right book and build traction once
you do.
2014 - 2015 Yan Kurniawan
Contents
Preface
    What this book covers
    Who this book is for
    What you need for this book
    Conventions
    Example Code Files
    OpenVPN Server
    Getting VPC and Subnet ID
Preface
Since Mark Burgess began the CFEngine project in 1993, configuration management tools
have been revolutionizing IT operations. With the later emergence of Puppet and Chef, which
gained even more popularity, there are now many choices available for IT automation. The new
generation of configuration management tools can build servers in seconds and automate your entire
infrastructure.
Ansible, first released in 2012, is one of the newer tools in the IT automation space. While other tools
like Puppet and Chef focused on completeness and configurability, Ansible focused on simplicity and
a low learning curve, without sacrificing security and reliability.
Amazon Web Services (AWS) began offering IT infrastructure services to businesses in 2006, in the
form of web services now commonly known as cloud computing. One of the key benefits of cloud
computing is the opportunity to replace up-front capital infrastructure expenses with low variable
costs that scale with your business. With the Cloud, businesses no longer need to plan for and procure
servers and other IT infrastructure weeks or months in advance. Instead, they can instantly spin up
hundreds or thousands of servers in minutes and deliver results faster.
This book will show you how to use Ansible's cloud modules to easily provision and manage AWS
resources, including EC2, VPC, RDS, S3, ELB, ElastiCache, and Route 53. This book takes you
beyond the basics of Ansible, showing you real-world examples of AWS infrastructure automation
and management using Ansible, with detailed steps, complete code, and screen captures from the AWS
console.
The example projects will help you grasp the concepts quickly. From a single WordPress site, to a
highly available and scalable WordPress site, Ansible will help you automate all tasks.
https://cfengine.com
https://puppetlabs.com
http://www.getchef.com
http://www.ansible.com
http://aws.amazon.com/about-aws
Conventions
In this book, you will find a number of styles of text that distinguish between different kinds of
information. Here are some examples of these styles, and an explanation of their meaning.
Code words in text are shown as follows: "We can include other contexts through the use of the
include directive."
A block of code is set as follows:
[group]
host1
host2
host3
https://github.com/yankurniawan/ansible-for-aws
AWS Functionality
In each category, there are one or more services. For example, AWS offers five database services, each
one optimized for a certain type of use. With so many offerings, you can design an AWS solution
that is tailored to your needs.
Amazon has produced a set of short videos to help you understand AWS basics:
What is Cloud Computing
What is Amazon Web Services
In this book we will only use the following AWS services from the Foundation Services category:
http://youtu.be/jOhbTAU4OPI
http://youtu.be/mZ5H8sn_2ZI
VPC
Amazon Virtual Private Cloud (Amazon VPC) lets you provision a logically
isolated section of the Amazon Web Services (AWS) Cloud where you can launch
AWS resources in a virtual network that you define. You have complete control
over your virtual networking environment, including selection of your own IP
address range, creation of subnets, and configuration of route tables and network
gateways.
You can easily customize the network configuration for your Amazon VPC. For
example, you can create a public-facing subnet for your webservers that has access to the Internet,
and place your backend systems such as databases or application servers in a private-facing subnet
with no Internet access. You can leverage multiple layers of security, including security groups and
network access control lists, to help control access to Amazon EC2 instances in each subnet.
RDS
Amazon Relational Database Service (Amazon RDS) automatically patches the database software and backs up
your database, storing the backups for a user-defined retention period and enabling point-in-time
recovery. You benefit from the flexibility of being able to scale the compute resources or storage
capacity associated with your Database Instance (DB Instance) via a single API call.
S3
Amazon S3 provides a simple web-services interface that can be used to store and
retrieve any amount of data, at any time, from anywhere on the web. It gives any
developer access to the same highly scalable, reliable, secure, fast, inexpensive
infrastructure that Amazon uses to run its own global network of web sites.
The service aims to maximize benefits of scale and to pass those benefits on to
developers.
Amazon S3 provides a highly durable and available store for a variety of content, ranging from web
applications to media files. It allows you to offload your entire storage infrastructure onto the cloud,
where you can take advantage of Amazon S3's scalability and pay-as-you-go pricing to handle your
growing storage needs. You can distribute your content directly from Amazon S3 or use Amazon S3
as an origin store for pushing content to your Amazon CloudFront edge locations.
ELB
Elastic Load Balancing automatically scales its request handling capacity to meet
the demands of application traffic. Additionally, Elastic Load Balancing offers integration with Auto
Scaling to ensure that you have back-end capacity to meet varying levels of traffic without
requiring manual intervention.
http://aws.amazon.com/rds
http://aws.amazon.com/s3
http://aws.amazon.com/elasticloadbalancing
ElastiCache
ElastiCache is a web service that makes it easy to deploy, operate, and scale an
in-memory cache in the cloud. The service improves the performance of web
applications by allowing you to retrieve information from fast, managed, in-memory caches, instead of relying entirely on slower disk-based databases.
Amazon ElastiCache automatically detects and replaces failed nodes, reducing
the overhead associated with self-managed infrastructures, and provides a resilient system that mitigates the risk of overloaded databases, which slow website
and application load times. Through integration with Amazon CloudWatch, Amazon ElastiCache
provides enhanced visibility into key performance metrics associated with your Memcached or Redis
nodes.
ElastiCache supports two open-source caching engines:
Memcached - a widely adopted memory object caching system. ElastiCache is protocol
compliant with Memcached, so popular tools that you use today with existing Memcached
environments will work seamlessly with the service.
Redis - a popular open-source in-memory key-value store that supports data structures such
as sorted sets and lists. ElastiCache supports Redis master/slave replication, which can be used
to achieve cross-AZ redundancy.
Using Amazon ElastiCache, you can add an in-memory caching layer to your infrastructure in a
matter of minutes by using the AWS Management Console.
Route 53
Amazon Route 53 is a highly available and scalable Domain Name System (DNS)
web service. It is designed to give developers and businesses an extremely reliable
and cost effective way to route end users to Internet applications by translating
names like www.example.com into the numeric IP addresses like 192.0.2.1 that
computers use to connect to each other.
Route 53 effectively connects user requests to infrastructure running in AWS
such as Amazon EC2 instances, Elastic Load Balancers, or Amazon S3 buckets
and can also be used to route users to infrastructure outside of AWS.
Route 53 is designed to be fast, easy to use, and cost-effective. It answers DNS queries with low
latency by using a global network of DNS servers. Queries for your domain are automatically routed
to the nearest DNS server, and thus answered with the best possible performance. With Route 53,
you can create and manage your public DNS records with the AWS Management Console or with
http://aws.amazon.com/elasticache
an easy-to-use API. It's also integrated with other Amazon Web Services. For instance, by using
the AWS Identity and Access Management (IAM) service with Route 53, you can control who in
your organization can make changes to your DNS records. Like other Amazon Web Services, there
are no long-term contracts or minimum usage requirements for using Route 53; you pay only for
managing domains through the service and the number of queries that the service answers.
http://aws.amazon.com/route53
http://aws.amazon.com/what-is-cloud-computing/#global-reach
http://aws.amazon.com/about-aws/globalinfrastructure/regional-product-services
2. On the next screen, select the I am a new user radio button, fill in your e-mail address in the
given field, and then click the Sign in using our secure server button.
Technically, if you have an Amazon retail account, you can sign in using your Amazon.com
account, but it is recommended that you set up a new AWS account.
3. On the next page, enter a username, type your e-mail address again, and enter your password
(twice), then click the Continue button.
4. On the next screen, enter the required personal information on the Contact Information
form, type the Security Check characters, confirm your acceptance of the AWS customer
agreement, and then click the Create Account and Continue button.
5. The next page asks you for a credit card number and your billing address information. Enter
the required information and click the Continue button.
6. On the next page, Amazon wants to confirm your identity. Enter your valid phone or mobile
number and click the Call Me Now button.
Answer the phone and enter the displayed PIN code on the telephone keypad, or you can say
the PIN numbers. After the identity verification has completed successfully, click the Continue
button.
7. On the next page, choose your support plan and click the Continue button.
8. Setup is now complete; you'll get an e-mail confirming your account setup.
You have given AWS your credit card information to pay for the resources you use.
Be careful about how many AWS resources you use, and try to understand the pricing scheme
for each service.
EC2 Pricing Scheme
S3 Pricing Scheme
Your initial account sign-up provides a free usage tier for a year. For a complete list of services
that you can use for free, check out the AWS Free Usage Tier page.
http://aws.amazon.com/ec2/pricing
http://aws.amazon.com/s3/pricing
http://aws.amazon.com/free
http://calculator.s3.amazonaws.com/index.html
To set the start page or landing page after you log in, you can choose it from the Set Start Page
pull-down menu. For example, if you work with EC2 instances most of the time, you can set your
landing page to EC2 dashboard.
After you choose EC2 Dashboard as your start page, the next time you log in to your AWS account
you will land on the EC2 Dashboard.
To see your monthly billing, you can choose Billing from the Services pull-down menu on the menu
bar at the top of the page.
You can customize one-click navigation on the menu bar. Click on the Edit pull-down menu and
drag your selected service to/from the menu bar.
For a complete guide to AWS Management Console, visit AWS Management Console
Getting Started Guide page.
http://docs.aws.amazon.com/awsconsolehelpdocs/latest/gsg/getting-started.html
5. Enter a name for the new key pair in the Key pair name field of the Create Key Pair dialog
box, and then click Create. Choose a name that is easy for you to remember, such as your
name, followed by -key-pair, plus the region name. For example, yan-key-pair-apsydney.
6. If you use Google Chrome as your browser, the private key file is automatically downloaded by
your browser. The base file name is the name you specified as the name of your key pair, and
the file name extension is .pem. If you use Firefox, the browser might ask you to Open/Save
the file. Save the private key file in a safe place.
Important
This is the only chance for you to save the private key file. You'll need to provide the name of
your key pair when you launch an instance, and the corresponding private key each time you
connect to the instance.
7. If you will use an SSH client on a Mac or Linux computer to connect to your Linux instance,
use the chmod command to set the permissions of your private key file so that only you can
read it.
$ chmod 400 yan-key-pair-apsydney.pem
If you'll connect to your Linux instance from a computer running Windows, you can use PuTTY or
MindTerm. If you use PuTTY, you'll need to install it and use the following procedure to convert
the .pem file to a .ppk file.
To prepare to connect to a Linux instance from Windows using PuTTY
1. Download and install PuTTY from http://www.chiark.greenend.org.uk/~sgtatham/putty. Be
sure to install the entire suite (Download the installer file under A Windows installer for
everything except PuTTYtel section).
2. Start PuTTYgen (for example, from the Start menu, click All Programs > PuTTY >
PuTTYgen).
3. Under Type of key to generate, select SSH-2 RSA.
4. Click Load. By default, PuTTYgen displays only files with the extension .ppk. To locate your
.pem file, select the option to display files of all types.
5. Select the private key file that you created in the previous procedure and click Open. Click
OK to dismiss the confirmation dialog box.
6. Click Save private key. PuTTYgen displays a warning about saving the key without a
passphrase. Click Yes.
7. Specify the same name for the key that you used for the key pair. PuTTY automatically adds
the .ppk file extension.
5. Enter a name for the new security group and a description. Choose a name that is easy for
you to remember, such as SG_ plus the region name. For example, SG_apsydney.
On the Inbound tab, create the following rules (click Add Rule for each new rule), and click
Create when you're done:
Select HTTP from the Type list, and make sure that Source is set to Anywhere
(0.0.0.0/0).
Select HTTPS from the Type list, and make sure that Source is set to Anywhere
(0.0.0.0/0).
Select SSH from the Type list. In the Source box, ensure Custom IP is selected, and
specify the public IP address of your computer or network in CIDR notation. To specify
an individual IP address in CIDR notation, add the routing prefix /32. For example, if
your IP address is 203.0.100.2, specify 203.0.100.2/32. If your company allocates addresses
from a range, specify the entire range, such as 203.0.100.0/24.
Caution
For security reasons, Amazon doesn't recommend that you allow SSH access from all
IP addresses (0.0.0.0/0) to your instance, except for testing purposes and only for a short
time.
The following procedure is intended to help you launch your first instance quickly and doesn't
go through all possible options. For more information about the advanced options, see the AWS
Documentation on Launching an Instance.
To launch an instance
1. Open the Amazon EC2 Dashboard https://console.aws.amazon.com/ec2.
2. From the console dashboard, click Launch Instance.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/launching-instance.html
3. The Choose an Amazon Machine Image (AMI) page displays a list of basic configurations
called Amazon Machine Images (AMIs) that serve as templates for your instance. Select the
64-bit Amazon Linux AMI. Notice that this configuration is marked Free tier eligible. Click
the Select button.
4. On the Choose an Instance Type page, you can select the hardware configuration of your
instance. The t1.micro instance is selected by default. Click Review and Launch to let the
wizard complete other configuration settings for you, so you can get started quickly.
5. On the Review Instance Launch page, you can review the settings for your instance.
Under Security Groups, you'll see that the wizard created and selected a security group for
you. Instead, select the security group that you created when getting set up, using the following
steps:
Click Edit security groups.
On the Configure Security Group page, ensure the Select an existing security group
option is selected.
Select your security group from the list of existing security groups, and click Review
and Launch.
8. A confirmation page lets you know that your instance is launching. Click View Instances to close
the confirmation page and return to the console.
9. On the Instances screen, you can view the status of your instance. It takes a short time for an
instance to launch. When you launch an instance, its initial state is pending. After the instance
starts, its state changes to running, and it receives a public DNS name. (If the Public DNS column
is hidden, click the Show/Hide icon and select Public DNS).
3. In the Category pane, expand Connection, expand SSH, and then select Auth. Complete the
following:
Click Browse
Select the .ppk file that you generated for your key pair, and then click Open
Click Open to start the PuTTY session.
4. If this is the first time you have connected to this instance, PuTTY displays a security alert
dialog box that asks whether you trust the host you are connecting to. Click Yes. A window
opens and you are connected to your instance.
For Amazon Linux, the default user name is ec2-user. For RHEL5, the user name is often
root but might be ec2-user. For Ubuntu, the user name is ubuntu. For SUSE Linux, the
user name is root. Otherwise, check with your AMI provider.
Most of this chapter is based on the Amazon AWS Online Documentation. This chapter
only covers the basics of AWS and EC2. If you want to learn more about EC2, see
Amazon EC2 User Guide. For a complete list of Amazon AWS Documentation, visit AWS
Documentation.
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/concepts.html
http://aws.amazon.com/documentation
Unlike Chef and Puppet, Ansible uses an agentless architecture. You only need to install Ansible on
the machines that you use to manage your infrastructure. Managed nodes are not required to install
and run background daemons to connect with a controlling machine and pull configuration, which
reduces the overhead on the network.
In this chapter you'll learn how to install Ansible, build an inventory file, use Ansible from the
command line, write a simple playbook, and use Ansible modules.
Installing Ansible
Ansible is written in Python. To install the latest version of Ansible, we will use pip. Pip is a tool
used to manage packages of Python software and libraries. Ansible releases are pushed to pip as
soon as they are released.
To install Ansible via pip:
1. Install RHEL EPEL repository. The EPEL (Extra Packages for Enterprise Linux) repository is
a package repository for Red Hat Enterprise Linux (RHEL) or CentOS, maintained by people
from Fedora Project community, to provide add-on packages from Fedora, which are not
included in the commercially supported Red Hat product line.
# yum -y update
# yum -y install wget
# wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# rpm -Uvh epel-release-6*.rpm
2. Install Development tools group. The Development tools are a yum group, which is a
predefined bundle of software that can be installed at once, instead of having to install
each application separately. The Development tools will allow you to build and compile
software from source code. Tools for building RPMs are also included, as well as source code
management tools like Git, SVN, and CVS.
# yum groupinstall -y 'development tools'
4. Upgrade setuptools
# pip install setuptools --upgrade
After the installation has completed successfully, you will be able to run this command to show your
Ansible version number:
# ansible --version
ansible 1.6.3
SSH Keys
Ansible communicates with remote machines over SSH. By default, Ansible 1.3 and later will try to
use native OpenSSH for remote communication when possible. It is recommended that you use SSH
keys for SSH authentication, so Ansible won't have to ask for a password to communicate with remote
hosts.
To enable SSH keys authentication:
1. Create public and private keys using ssh-keygen
# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):[press ENTER key]
Enter passphrase (empty for no passphrase):[press ENTER key]
Enter same passphrase again:[press ENTER key]
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
Inventory
Ansible works against multiple nodes in your infrastructure at the same time, by selecting portions of
the nodes listed in Ansible's inventory file. By default, Ansible uses /etc/ansible/hosts for the inventory
file.
The format for /etc/ansible/hosts is an INI format and looks like this:
one.example.com

[webservers]
web1.example.com
web2.example.com
two.example.com

[dbservers]
db1.example.com
db2.example.com
two.example.com
The words in brackets are group names, which are used in classifying nodes and deciding what
hosts you are controlling in an Ansible task. One node can be a member of more than one group,
like two.example.com in the above example.
To add a lot of hosts with similar patterns, you can specify a range like this:
[webservers]
web[01:50].example.com

[databases]
db-[a:f].example.com
You can also make a group of groups using the :children suffix:

[sydney]
host1
host2

[singapore]
host3
host4

[asiapacific:children]
sydney
singapore
Host Variables
You can specify variables in the hosts file that will be used later in Ansible playbooks:

[webservers]
web1.example.com ansible_ssh_user=ec2-user
web2.example.com ansible_ssh_user=ubuntu
Assuming the inventory file path is /etc/ansible/hosts, you can store host variables, in YAML
format, in individual files:
/etc/ansible/host_vars/web1.example.com
/etc/ansible/host_vars/web2.example.com
For example, the data in the host variables file /etc/ansible/host_vars/web1.example.com might
look like:
---
ansible_ssh_user: ec2-user
ansible_ssh_port: 5300
The following variables control how Ansible interacts with remote hosts:
ansible_ssh_host
The name of the host to connect to, if different from the alias you wish to give to it.
ansible_ssh_port
The ssh port to connect to, if not the default of 22.
ansible_ssh_pass
The ssh password to use (this is insecure; it's strongly recommended to use --ask-pass or SSH
keys)
ansible_sudo_pass
The sudo password to use (this is insecure; it's strongly recommended to use --ask-sudo-pass)
ansible_connection
Connection type of the host. Candidates are local, ssh, or paramiko. The default is paramiko
before Ansible 1.2, and smart afterwards, which detects whether use of ssh is
feasible based on whether ControlPersist is supported.
ansible_ssh_private_key_file
Private key file used by ssh. Useful if you use multiple keys and don't want to use an SSH agent.
ansible_shell_type
The shell type of the target system. By default, commands are formatted using sh-style syntax.
Setting this to csh or fish will cause commands executed on target systems to
follow those shells' syntax instead.
ansible_python_interpreter
The Python interpreter path on the target host. This is useful for systems with more than one
Python, or where Python is not located at /usr/bin/python (such as *BSD), or where
/usr/bin/python is not a 2.X series Python.
ansible_*_interpreter
Works for anything such as ruby or perl and works just like ansible_python_interpreter.
This replaces the shebang of modules which will run on that host.
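Several of these variables can be combined on a single line of the inventory file. For example (the host name and values here are only illustrative):

[webservers]
web1.example.com ansible_ssh_port=5300 ansible_ssh_user=ec2-user ansible_python_interpreter=/usr/bin/python2.6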
Group Variables
Variables can be applied to an entire group at once:
[sydney]
node1
node2

[sydney:vars]
ntp_server=ntp.sydney.example.com
proxy_server=proxy.sydney.example.com
Variables can also be applied to a group of groups:

[sydney]
host1
host2

[singapore]
host3
host4

[asiapacific:children]
sydney
singapore

[asiapacific:vars]
db_server=db1.asiapacific.example.com
Assuming the inventory file path is /etc/ansible/hosts, you can store group variables, in YAML
format, in individual files:
/etc/ansible/group_vars/sydney
For example, the data in the group variables file /etc/ansible/group_vars/sydney might look like:
---
ntp_server: ntp.sydney.example.com
proxy_server: proxy.sydney.example.com
[local]
localhost
Ansible will attempt to connect to remote machines using your current user name, just like SSH
would. To override the remote user name, use the -u parameter.
Ping command
Live command
The above examples show how to use /usr/bin/ansible to run ad-hoc tasks. An ad-hoc
command can be used to do quick things that you might not necessarily want to write a full playbook
for, such as restarting some services on remote hosts.
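For instance, to restart the httpd service on every host in the webservers group with a single ad-hoc command:

# ansible webservers -m service -a "name=httpd state=restarted"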
Playbooks
Playbooks are Ansible's configuration, deployment, and orchestration language. They are a completely different way to use Ansible than ad-hoc task execution mode.
Playbooks are written in YAML format and have a minimum of syntax, which intentionally tries
not to be a programming language or script, but rather a model of a configuration or a process. Writing
in YAML format allows you to describe your automation jobs in a way that approaches plain English.
It is easy to learn and easy to understand for new Ansible users, but it is also powerful for expert
users.
Each playbook is composed of one or more plays in a list. A play maps a group of hosts to some
well-defined roles, represented by Ansible tasks. A task is a call to an Ansible module.
Ansible Playbook
A module can control system resources, like services, packages, or files, or anything else on remote hosts.
Modules can be executed from the command line using /usr/bin/ansible, or by writing a playbook
and running it using the /usr/bin/ansible-playbook command. Each module supports taking arguments.
Nearly all modules take key=value arguments, space delimited. Some modules take no arguments,
and the command/shell modules simply take the string of the command you want to run.
Examples of executing modules from the command line:
# ansible webservers -m service -a "name=httpd state=started"
# ansible webservers -m ping
For a complete list of Ansible modules, see Ansible Module Index page.
Most modules are idempotent: they will only make changes in order to bring the system to the
desired state. It is safe to rerun the same playbook multiple times; it won't make changes if the
desired state has been achieved.
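For instance, a task that installs a package with state=present reports a change on the first run and does nothing on later runs, because the desired state is already met:

tasks:
  - name: ensure apache is installed
    yum: name=httpd state=present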
YAML Syntax
For Ansible playbooks, nearly every YAML file starts with a list. Each item in the list is a
list of key/value pairs, commonly called a hash or a dictionary. All YAML files should
begin with "---", which indicates the start of a document.
All members of a list are lines beginning at the same indentation level starting with a -
(dash) character:
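For example, a small YAML document containing a dictionary and a list might look like this (the values are only illustrative):

---
region: ap-southeast-2
instances:
  - web1
  - web2
  - db1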
A play in a playbook consists of three sections: the hosts section, the variables section, and the tasks
section. You can include as many plays as you like in a single playbook.
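Putting the three sections together, a minimal play might look like this (the group, variable, and service names are illustrative):

---
- hosts: webservers
  remote_user: root
  vars:
    http_port: 80
  tasks:
    - name: make sure apache is running
      service: name=httpd state=running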
http://docs.ansible.com/modules_by_category.html
- hosts: webservers
  remote_user: root
The hosts that a play will be run on must be set in the value of hosts. This value uses the same
host-pattern-matching syntax as the Ansible command line:
The following patterns target all hosts in inventory:
all
*
The following patterns address one or more groups. Groups separated by a colon indicate an
OR configuration (the host may be in either one group or the other):
webservers
webservers:dbservers
To exclude groups:
webservers:!sydney
all hosts must be in the webservers group but not in the sydney group
Group intersection:
webservers:&staging
all hosts must be in the webservers group and also in the staging group
You can also use regular expressions. Start the pattern with a "~":
~(web|db).*\.example\.com
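These patterns can be combined. For example, the following play header targets hosts that are in the webservers group and in the staging group, but not in the sydney group:

- hosts: webservers:&staging:!sydney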
In the hosts section you can also provide some parameters:
sudo
Set this to yes if you want Ansible to run things via sudo for the whole play.
remote_user
This defines the username Ansible will use to connect to targeted hosts.
sudo_user
If sudo is enabled, this tells Ansible which user to sudo to; the default is root.
connection
Tells Ansible what connection method to use to connect to the remote hosts. You can use ssh,
paramiko, or local.
gather_facts
If set to no Ansible will not run the setup module to collect facts from remote hosts.
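A play header that combines these parameters might look like this:

- hosts: webservers
  remote_user: ec2-user
  sudo: yes
  gather_facts: no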
- hosts: webservers
  vars:
    http_port: 80
    region: ap-southeast-2
To use the variable in the tasks section, use "{{ variable }}" syntax:
- hosts: webservers
  vars:
    region: ap-southeast-2
  tasks:
    - name: create key pair
      local_action:
        module: ec2_key
        region: "{{ region }}"
Variables can also be loaded from external YAML files. This is done using the vars_files directive:
- hosts: webservers
  vars_files:
    - /vars/external_vars.yml
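The referenced file contains ordinary YAML variables. For example, /vars/external_vars.yml might hold:

---
http_port: 80
region: ap-southeast-2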
You can instruct Ansible to prompt for variables using the vars_prompt directive:
vars_prompt:
  - name: 'vpc_subnet_id'
    prompt: 'Enter the VPC subnet ID: '

The tasks section contains a list of tasks, which are executed in order. For example:

tasks:
  - name: make sure apache is running
    service: name=httpd state=running
The command and shell modules are the only modules that just take a list of arguments and don't
use the key=value form:
tasks:
  - name: disable selinux
    command: /sbin/setenforce 0
The command and shell modules will return an error code. If the exit code is not zero, you can ignore
the error:
tasks:
  - name: run somecommand and ignore the return code
    shell: /usr/bin/somecommand
    ignore_errors: yes
If the action line is getting too long, you can break it into separate lines and indent any continuation
lines:
tasks:
  - name: Copy somefile to remote host
    copy: src=/home/somefile dest=/etc/somefile
          owner=root group=root mode=0644
Handlers
Handlers are lists of tasks, referenced by name, called by the notify directive. Notify actions are
triggered when a task makes a change on the remote system. If many tasks in a play notify one
handler, it will run only once, after all tasks have completed in that play.
tasks:
  - name: Configure ntp file
    template: src=ntp.conf.j2 dest=/etc/ntp.conf
    notify: restart ntp

handlers:
  - name: restart ntp
    service: name=ntpd state=restarted
The ntpd service will be restarted only if the template module changed the remote host's
/etc/ntp.conf file.
[local]
localhost
2361 ?  S  0:00 /usr/sbin/httpd
2362 ?  S  0:00 /usr/sbin/httpd
2363 ?  S  0:00 /usr/sbin/httpd
2364 ?  S  0:00 /usr/sbin/httpd
2365 ?  S  0:00 /usr/sbin/httpd
Include directives look like this, and can be mixed in with regular tasks in a playbook:
tasks:
  - include: tasks/task1.yml
Roles are a better way to organize your playbooks. Roles are ways of automatically loading certain
variables, tasks, and handlers based on a known file structure. Grouping content using roles makes
it easier to share roles with other users.
Example project structure:
site.yml
webservers.yml
dbservers.yml
roles/
  common/
    files/
    templates/
    tasks/
    handlers/
    vars/
    meta/
  webservers/
    files/
    templates/
    tasks/
    handlers/
    vars/
    meta/
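With that structure in place, a top-level playbook applies roles to host groups. A minimal sketch, using the role names from the tree above:

```yaml
# site.yml — a sketch of applying the roles shown in the tree above.
# Tasks, handlers, and variables are loaded automatically from each
# role's tasks/, handlers/, and vars/ directories.
- hosts: webservers
  roles:
    - common
    - webservers
```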
Ansible provides a nice quick start video. You can find it at http://www.ansible.com/resources.
The full documentation is available at http://docs.ansible.com.
Amazon has multiple locations world-wide. These locations are composed of regions and
Availability Zones. Each region is a separate geographic area. Each region has multiple,
isolated locations known as Availability Zones. Amazon provides you the ability to place
resources, such as instances, and data in multiple locations. Resources aren't replicated
across regions unless you do so specifically.
A network access control list (ACL) is an optional layer of security that acts as a firewall
for controlling traffic in and out of a subnet. You might set up network ACLs with rules
similar to your security groups in order to add an additional layer of security to your VPC.
The following figure illustrates the key components that Amazon sets up for a default VPC:
The CIDR block for a default VPC is always 172.31.0.0/16, which provides up to 65,536 private IP
addresses. A default subnet has a /20 subnet mask, which provides up to 4,096 addresses per subnet.
Some addresses are reserved for Amazon's use.
By default, a default subnet is connected to the Internet through the Internet gateway. Instances that
you launch into a default subnet receive both a private IP address and a public IP address.
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/default-vpc.html
VPC Diagram
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Scenario2.html
VPC Sizing
The allowed CIDR block size for a VPC is between a /28 netmask and /16 netmask,
therefore the VPC can contain from 16 to 65,536 IP addresses.
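These counts follow directly from the prefix lengths: a /n block contains 2^(32 - n) IPv4 addresses. A quick check (illustrative Python, not from the book):

```python
# Number of addresses in an IPv4 CIDR block: 2 ** (32 - prefix_length).
def cidr_size(prefix_length):
    return 2 ** (32 - prefix_length)

print(cidr_size(28))  # /28, smallest allowed VPC: 16 addresses
print(cidr_size(16))  # /16, largest allowed VPC: 65536 addresses
print(cidr_size(20))  # /20, default subnet size: 4096 addresses
```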
You can't change the size of a VPC after you create it. If your VPC is too small to meet your
needs, you must terminate all the instances in the VPC, delete the VPC, and then create a
new, larger VPC.
The following list describes the basic components presented in the configuration diagram for this
scenario:

- A virtual private cloud (VPC) of size /16 (CIDR: 10.0.0.0/16). This provides 65,536 private
  IP addresses.
- A public subnet of size /24 (CIDR: 10.0.0.0/24). This provides 256 private IP addresses.
- A private subnet of size /24 (CIDR: 10.0.1.0/24). This provides 256 private IP addresses.
- An Internet gateway. This connects the VPC to the Internet and to other AWS products, such
  as Amazon Simple Storage Service (Amazon S3).
- Instances with private IP addresses in the subnet range (examples: 10.0.0.5, 10.0.1.5), which
  enables them to communicate with each other and other instances in the VPC. Instances in
  the public subnet also have Elastic IP addresses (example: 198.51.100.1), which enable them to
  be reached from the Internet. Instances in the private subnet are back-end servers that don't
  need to accept incoming traffic from the Internet; however, they can send requests to the
  Internet using the NAT instance (see the next bullet).
- A network address translation (NAT) instance with its own Elastic IP address. This enables
  instances in the private subnet to send requests to the Internet (for example, for software
  updates).
- A custom route table associated with the public subnet. This route table contains an entry that
  enables instances in the subnet to communicate with other instances in the VPC, and an entry
  that enables instances in the subnet to communicate directly with the Internet.
- The main route table associated with the private subnet. The route table contains an entry
  that enables instances in the subnet to communicate with other instances in the VPC, and an
  entry that enables instances in the subnet to communicate with the Internet through the NAT
  instance.
To create a VPC:
1. Open the Amazon VPC console at https://console.aws.amazon.com/vpc/.
Each VPC has a related instance tenancy attribute. You can't change the instance tenancy
of a VPC after you create it. A dedicated VPC tenancy attribute means that all instances
launched into the VPC are Dedicated Instances, regardless of the value of the tenancy
attribute for the instance. If you set this to default, then the tenancy attribute will follow
the tenancy attribute setting on each instance. A default tenancy instance runs on shared
hardware. A dedicated tenancy instance runs on single-tenant hardware.
Dedicated Instances are Amazon EC2 instances that run in a virtual private cloud (VPC)
on hardware that's dedicated to a single customer. Your Dedicated Instances are physically
isolated at the host hardware level from your instances that aren't Dedicated Instances
and from instances that belong to other AWS accounts. To see the pricing for dedicated
instances go to http://aws.amazon.com/ec2/purchasing-options/dedicated-instances/.
Route Tables
The following are the basic things that you need to know about route tables:
Initially, the main route table (and every route table in a VPC) contains only a single route: a local
route that enables communication within the VPC. You can't modify the local route in a route table.
Whenever you launch an instance in the VPC, the local route automatically covers that instance;
you don't need to add the new instance to a route table.
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Route_Tables.html
If you don't explicitly associate a subnet with a route table, the subnet is implicitly associated with
the main route table. However, you can still explicitly associate a subnet with the main route table.
You might do that if you change which table is the main route table. The console shows the number
of subnets associated with each table. Only explicit associations are included in that number.
For the scenario in this section, the main route table should contain an entry that enables instances
in the private subnet to communicate with the Internet through the NAT instance, but I'm not
going to provide instructions for manually creating a NAT instance here. If you want to learn about
this, follow the instructions at http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html
and then you could edit the main route table and add an entry: destination
0.0.0.0/0, target nat-instance-id. I will show you later in this chapter how to provision a NAT
instance using Ansible.
4. Click Save.
6. Click Edit.
7. Select the Associate check box for public subnet and click Save.
VPC Provisioning
We will use the Ansible ec2_vpc module to create or terminate an AWS Virtual Private Cloud (VPC).
Options for this module are listed in the following table:
parameter          required   default    choices              comments
aws_access_key     no
aws_secret_key     no
cidr_block         yes
dns_hostnames      no         yes        yes, no
dns_support        no         yes        yes, no
instance_tenancy   no         default    default, dedicated
internet_gateway   no         no         yes, no
region             no
resource_tags      yes
route_tables       no                                         Subnets not listed here are
                                                              attached to the main table implicitly.
state              yes        present
subnets            no
vpc_id             no
wait               no         no         yes, no
wait_timeout       no         300
The following playbook will show you how to create a VPC, subnets, and route tables, using the same
public and private subnets scenario as in the preceding section.
# cd /etc/ansible
# vi vpc_create.yml
    - cidr: 10.0.0.0/24
      az: "{{ az }}"
      resource_tags: '{"Name":"{{ prefix }}_subnet_public"}'
    - cidr: 10.0.1.0/24
      az: "{{ az }}"
      resource_tags: '{"Name":"{{ prefix }}_subnet_private"}'
    internet_gateway: yes
    route_tables:
      - subnets:
          - 10.0.0.0/24
        routes:
          - dest: 0.0.0.0/0
            gw: igw
    register: vpc

  - name: write vpc id to {{ prefix }}_vpc_info file
    sudo: yes
    local_action: shell echo "{{ prefix }}"_vpc":" "{{ vpc.vpc_id }}"
                  > "{{ prefix }}"_vpc_info

  - name: write subnets id to {{ prefix }}_vpc_info file
    sudo: yes
    local_action: shell echo "{{ item.resource_tags.Name }}"":" "{{ item.id }}"
                  >> "{{ prefix }}"_vpc_info
    with_items: vpc.subnets
The playbook will create a VPC with resource tag Name=staging_vpc and 2 subnets with resource
tags Name=staging_subnet_public and Name=staging_subnet_private. You could easily create a
duplicate VPC with its subnets, route tables, etc. simply by changing the prefix variable and running
the playbook again.
You should see the staging_vpc created and listed on the VPC console. Open the Amazon VPC
console at https://console.aws.amazon.com/vpc/ and select Your VPCs in the navigation pane.
staging_vpc
You should also see new staging subnets on the Subnets list:
New Subnets
You can see from the preceding playbook that I registered the output of the ec2_vpc module, then
wrote the VPC ID and subnet IDs to a file called staging_vpc_info.
The content of the staging_vpc_info file should look like this:
staging_vpc: vpc-xxxxxxxx
staging_subnet_public: subnet-xxxxxxxx
staging_subnet_private: subnet-xxxxxxxx
We can use the staging_vpc_info file as a variables file for another playbook.
For example, the following is a playbook to delete the VPC we have created with the preceding
playbook.
# vi vpc_delete.yml
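The book's exact vpc_delete.yml listing is not reproduced in this excerpt; a minimal sketch of such a playbook, assuming the ec2_vpc module and the staging_vpc_info file written by vpc_create.yml, might look like this:

```yaml
# A sketch, not the book's exact listing. Assumes staging_vpc_info
# contains a line "staging_vpc: vpc-xxxxxxxx" as written above.
- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    prefix: staging
    region: ap-southeast-2
  vars_files:
    - "{{ prefix }}_vpc_info"
  tasks:
    - name: delete the VPC
      local_action:
        module: ec2_vpc
        region: "{{ region }}"
        vpc_id: "{{ staging_vpc }}"
        resource_tags: '{"Name":"{{ prefix }}_vpc"}'
        state: absent
        wait: yes
```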
You could run the vpc_delete.yml playbook to delete the staging_vpc you created with the
vpc_create.yml playbook. Simply re-run the vpc_create.yml playbook to create a new staging_vpc
after the deletion.
When you delete a VPC, Amazon deletes all its components, such as subnets, security
groups, network ACLs, route tables, Internet gateways, VPC peering connections, and
DHCP options. If you have instances launched in the VPC, you have to terminate all
instances in the VPC first before deleting the VPC.
The NAT security group will allow the NAT instance to receive inbound HTTP and HTTPS traffic
from the private subnet, allow SSH from your computer's or network's public IP address, and
allow outbound HTTP and HTTPS access to the Internet.
First, we will create a playbook to provision the security groups with empty rules. This playbook
will be useful to remove dependencies from the security groups. Later on, if you want to delete the
security groups, run this playbook first to empty the rules, so the deletion won't produce a
dependency error.
# cd /etc/ansible
# vi sg_empty.yml
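The full sg_empty.yml listing is elided in this excerpt; a sketch of one of its tasks, assuming the ec2_group module and the VPC ID from the vars file created earlier, could look like this (the web group is shown; the database and NAT groups would follow the same pattern):

```yaml
# A sketch, not the book's exact listing: create security groups
# with no rules so they carry no dependencies on each other.
- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    prefix: staging
    region: ap-southeast-2
  vars_files:
    - "{{ prefix }}_vpc_info"
  tasks:
    - name: create empty security group for web servers
      local_action:
        module: ec2_group
        region: "{{ region }}"
        vpc_id: "{{ staging_vpc }}"
        name: "{{ prefix }}_sg_web"
        description: security group for web servers
        rules: []
        rules_egress: []
```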
The following security groups should be created in your VPC with empty rules: staging_sg_web,
staging_sg_database, and staging_sg_nat. You can see the security groups on your VPC console
https://console.aws.amazon.com/vpc/, select Security Groups in the navigation pane.
We will modify the rules using another playbook:
# vi sg_modify.yml
Run the playbook and you can see the rules changed.
Now from the VPC console, try to delete staging_sg_web. Right-click on the security group and
select Delete Security Group, then click Yes, Delete. It will tell you that you cannot delete the security
group because it has a dependent object, which is staging_sg_database in its outbound rules.
You could run the sg_empty.yml playbook to remove all rules from the security groups; then you
could delete the security group without a dependency issue.
You can use the following playbook to delete the security groups:
# vi sg_delete.yml
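The full sg_delete.yml listing is elided in this excerpt; a sketch of one of its tasks, assuming the ec2_group module with state: absent, could look like this (the web group is shown; the other groups would follow the same pattern):

```yaml
# A sketch, not the book's exact listing: delete a security group
# by name once its rules have been emptied by sg_empty.yml.
- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    prefix: staging
    region: ap-southeast-2
  vars_files:
    - "{{ prefix }}_vpc_info"
  tasks:
    - name: delete web security group
      local_action:
        module: ec2_group
        region: "{{ region }}"
        vpc_id: "{{ staging_vpc }}"
        name: "{{ prefix }}_sg_web"
        description: security group for web servers
        state: absent
```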
If you run the sg_delete.yml playbook without deleting the security group rules first, it will produce
a dependency error. You have to run sg_empty.yml first before deleting the security groups.
EC2-VPC Provisioning
In chapter 3 we used Ansible to launch EC2 instances without creating and specifying a non-default
VPC, so the instances launched in the default VPC. In this chapter we have created a non-default
VPC, subnets, and VPC security groups. To launch an instance in a particular subnet in your
VPC using the Ansible ec2 module, you need to specify the subnet ID using the vpc_subnet_id
option.
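The essential difference from the chapter 3 playbooks is just this one option; a minimal fragment, with placeholder region, AMI, and subnet IDs (following the book's xxxxxxxx convention), might be:

```yaml
# Fragment, not the book's full listing: launching into a specific
# VPC subnet by passing its ID to the ec2 module.
- name: launch web server in the public subnet
  local_action:
    module: ec2
    region: ap-southeast-2          # placeholder region
    image: ami-xxxxxxxx             # placeholder AMI ID
    instance_type: t2.micro
    group: staging_sg_web
    vpc_subnet_id: subnet-xxxxxxxx  # staging_subnet_public ID
    wait: yes
```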
The following playbook will launch an EC2 instance for our web server in the public subnet of our
VPC:
# vi ec2_vpc_web_create.yml
    local_action:
      module: ec2_eip
      region: "{{ region }}"
      instance_id: "{{ item.id }}"
    with_items: ec2.instances
And the following playbook will launch an EC2 instance for our database server in the private subnet
of our VPC, without assigning a public IP address:
# vi ec2_vpc_db_create.yml
NAT Instance
Instances that you launch into a private subnet in a VPC can't communicate with the Internet. You
can optionally use a network address translation (NAT) instance in a public subnet in your VPC to
enable instances in the private subnet to initiate outbound traffic to the Internet, but prevent the
instances from receiving inbound traffic initiated by someone on the Internet.
Amazon provides Amazon Linux AMIs that are configured to run as NAT instances. These AMIs
include the string amzn-ami-vpc-nat in their names, so you can search for them in the Amazon EC2
console.
To get the NAT AMI ID:
1. Open the Amazon EC2 console https://console.aws.amazon.com/ec2
2. On the dashboard, click the Launch Instance button.
3. On the Choose an Amazon Machine Image (AMI) page, select the Community AMIs
category, and search for amzn-ami-vpc-nat. In the results list, each AMI's name includes the
version so that you can select the most recent AMI, for example, 2013.09.
4. Take a note of the AMI ID.
NAT AMI
This AMI uses paravirtualization, so it won't work with the t2.micro instance type. We will
use the t1.micro instance type instead.
The following playbook will launch a NAT instance in the public subnet of our VPC and associate
an Elastic IP address to the instance.
# vi nat_launch.yml
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_NAT_Instance.html
Run the playbook and check your AWS EC2 console. A new staging_nat instance should be created
in staging_subnet_public subnet and associated with an EIP address.
Each EC2 instance performs source/destination checks by default. This means that the instance must
be the source or destination of any traffic it sends or receives. However, a NAT instance must be able
to send and receive traffic when the source or destination is not itself. Therefore, you must disable
source/destination checks on the NAT instance. To do this, in the playbook we set the ec2 module's
option source_dest_check: no.
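In ec2 module terms, that option might appear as in the following fragment (region, AMI, and subnet IDs are placeholders, not the book's values):

```yaml
# Fragment: the source/destination check is disabled at launch time
# so the instance can forward traffic on behalf of the private subnet.
- name: launch NAT instance
  local_action:
    module: ec2
    region: ap-southeast-2          # placeholder region
    image: ami-xxxxxxxx             # the amzn-ami-vpc-nat AMI ID you noted
    instance_type: t1.micro
    vpc_subnet_id: subnet-xxxxxxxx  # the public subnet ID
    source_dest_check: no
    wait: yes
```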
To allow instances in the private subnet to connect to the Internet via the NAT instance, we must
update the main route table. We need to do this from the AWS VPC console:
1. In the VPC console navigation pane select Route tables.
2. Select the Main route table of your staging_vpc VPC and select the Routes tab.
3. Click Edit.
4. Enter the 0.0.0.0/0 CIDR block in the Destination field and select the staging_nat instance ID
from the Target list.
5. Click Save.
Now we have completed the VPC infrastructure provisioning using Ansible. At the time of writing,
there is not yet a module for Network ACL provisioning. If you want to add some Network ACLs
for your VPC subnet, you could do it from the VPC console: select Network ACLs in the navigation
pane and then click Create Network ACLs. You can find more information about ACLs here:
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_ACLs.html.
If you have finished with the examples, you can terminate the EC2 instances to avoid any cost.
Multi-AZ Deployment
You can span your Amazon VPC across multiple subnets in multiple Availability Zones (AZ) inside
a region. This will create a high availability system, adding redundancy to the system so that failure
of a component does not mean failure of the entire system.
The following diagram shows a Multi-AZ version of our public and private subnets scenario. We'll
add a public subnet and a private subnet in another availability zone within our region.
Multi-AZ VPC
We will use Ansible to provision our Multi-AZ VPC. First, we need to delete the staging_vpc VPC.
You can use vpc_delete.yml to delete the VPC or you can delete the VPC from your AWS VPC
console. Make sure you have terminated all EC2 instances in the VPC before deleting the VPC.
The following playbook will create a VPC with Multi-AZ subnets.
# cd /etc/ansible
# vi vpc_create_multi_az.yml
After running the playbook, a new VPC will be created, with 2 subnets in ap-southeast-2a zone,
named staging_subnet_private_0 and staging_subnet_public_0, and 2 subnets in ap-southeast-2b
zone, named staging_subnet_private_1 and staging_subnet_public_1.
Run the sg_empty.yml playbook, and then the sg_modify.yml playbook, to re-create our VPC security
groups.
To achieve high availability, you can deploy your web application cluster in 2 (or more) availability
zones (staging_subnet_public_0 and staging_subnet_public_1), and distribute the load using
Amazon Elastic Load Balancing (ELB). For the database tier, you can use Amazon RDS (Relational
Database Service), deployed in 2 (or more) availability zones (staging_subnet_private_0 and
staging_subnet_private_1).
Ansible in VPC
Instances in the private subnet of a VPC cannot directly receive inbound traffic from the internet.
Therefore you can't use Ansible from the internet to manage the private servers' configuration. To
use Ansible to manage the configuration of servers in the private subnet of a VPC, we have 2 options:
- Install Ansible on an instance in the public subnet of the VPC and allow SSH connections from
  the Ansible machine to the hosts to be managed in the private subnet. This Ansible machine can
  also be used as a jump box to allow SSH access from the internet to hosts in the private subnet
  (SSH to the Ansible machine first and then use the Ansible machine to SSH to hosts in the
  private subnet).
- Create a VPN (Virtual Private Network) connection between the Ansible machine (over the
  internet) and the private subnet. We can launch an OpenVPN server (available in the AWS
  marketplace) instance in the public subnet, which will allow the Ansible machine to log in using
  an OpenVPN client and connect via SSH to hosts in the private subnet.
We can use our current Ansible machine to launch a jump box instance in the public subnet and
install Ansible on the instance. First, we need to create a new security group for this instance.
The following playbook will create a new security group for the Ansible or jump box instance:
# cd /etc/ansible
# vi sg_jumpbox.yml
    local_action:
      module: ec2_group
      region: "{{ region }}"
      vpc_id: "{{ vpc_id }}"
      # your security group name
      name: "{{ prefix }}_sg_jumpbox"
      description: security group for jump box
      rules:
        # allow ssh access from your ip address
        - proto: tcp
          from_port: 22
          to_port: 22
          cidr_ip: "{{ allowed_ip }}"
      rules_egress:
        - proto: all
          cidr_ip: 0.0.0.0/0
# ansible-playbook sg_jumpbox.yml
The following playbook will launch our jump box instance in public subnet A:
# vi ec2_vpc_jumpbox.yml
      wait: yes
      group: "{{ prefix }}_sg_jumpbox"
      instance_tags:
        Name: "{{ prefix }}_jumpbox"
        class: jumpbox
        environment: "{{ prefix }}"
      id: jumpbox_launch_01
      vpc_subnet_id: "{{ vpc_subnet_id }}"
    register: ec2

  - name: associate new EIP for the instance
    local_action:
      module: ec2_eip
      region: "{{ region }}"
      instance_id: "{{ item.id }}"
    with_items: ec2.instances
Ping the instance, make sure Ansible can connect via SSH to the host:
# ansible -i ec2.py tag_class_jumpbox -m ping
You might want to disable host key checking in the ssh configuration so ssh will automatically
add new host keys to the user's known hosts file without asking (the default is ask). To
disable host key checking, set StrictHostKeyChecking no in your /etc/ssh/ssh_config
file.
To allow SSH access from this new Ansible machine, do not forget to modify the security groups of
hosts you want to manage. For example, if you want to install your own MySQL database server
in the private subnet and manage its configuration using Ansible, you can modify the rules in
sg_modify.yml:
  - name: modify sg_database rules
    local_action:
      module: ec2_group
      region: "{{ region }}"
      vpc_id: "{{ vpc_id }}"
      name: "{{ prefix }}_sg_database"
      description: security group for databases
      rules:
        # allow ssh from the jump box
        - proto: tcp
          from_port: 22
          to_port: 22
          group_name: "{{ prefix }}_sg_jumpbox"
        # allow mysql access from web servers
        - proto: tcp
          from_port: 3306
          to_port: 3306
          group_name: "{{ prefix }}_sg_web"
      rules_egress:
        - proto: tcp
          from_port: 80
          to_port: 80
          cidr_ip: 0.0.0.0/0
        - proto: tcp
          from_port: 443
          to_port: 443
          cidr_ip: 0.0.0.0/0
OpenVPN Server
This section will show you how to launch an OpenVPN EC2 instance in the public subnet using
Ansible, and configure the server from its web UI.
OpenVPN Access Server is a full-featured SSL VPN software solution that integrates OpenVPN
server capabilities, enterprise management capabilities, a simplified OpenVPN Connect UI, and
OpenVPN Client software packages that accommodate Windows, Mac, and Linux environments.
OpenVPN Access Server supports a wide range of configurations, including secure and granular
remote access to internal network and/or private cloud network resources and applications with
fine-grained access control.
To launch an OpenVPN instance, first we need to know the AMI ID of the OpenVPN Access Server
AMI for our region.
To get the AMI ID:
1. Go to your EC2 dashboard, select your region, and then click the Launch Instance button.
2. On the left hand navigation bar, select Community AMIs.
3. When the AMI selection dialog appears, type OpenVPN in the search box.
4. Locate the latest version of the OpenVPN Access Server AMI provided by openvpn.net and
note the AMI ID.
http://openvpn.net/index.php/access-server/overview.html
The following playbook will launch our OpenVPN server instance in public subnet A:
# vi ec2_vpc_openvpn.yml
The OpenVPN Access Server Setup Wizard runs automatically upon your initial login to the
appliance. If you would like to run this wizard again in the future, issue the sudo ovpn-init --ec2
command in the terminal.
>Please enter 'yes' to indicate your agreement [no]: yes
Will this be the primary Access Server node?
(enter 'no' to configure as a backup or standby node)
> Press ENTER for default [yes]:
Please specify the network interface and IP address to be
used by the Admin Web UI:
(1) all interfaces: 0.0.0.0
(2) eth0: 10.0.0.40
Please enter the option number from the list above (1-2).
> Press Enter for default [2]: 1
Please specify the port number for the Admin Web UI.
> Press ENTER for default [943]:
Please specify the TCP port number for the OpenVPN Daemon
> Press ENTER for default [443]:
Should client traffic be routed by default through the VPN?
> Press ENTER for default [no]:
Should client DNS traffic be routed by default through the VPN?
> Press ENTER for default [no]:
Use local authentication via internal DB?
> Press ENTER for default [yes]:
Private subnets detected: ['10.0.0.0/16']
Should private subnets be accessible to clients by default?
Initializing OpenVPN...
After you complete the setup wizard, you can access the Admin Web UI area to configure other
aspects of your VPN:
1. Go to https://openvpn-ipaddress/admin.
2. Go to VPN Settings menu.
3. Configure subnets for the clients. On the Dynamic IP Address network allocate address for
VPN clients, for example 10.1.0.0/23.
Static IP Address Network: (leave empty)
Group Default IP Address Network: (leave empty)
4. Click Save settings.
5. Click Update Running Server.
To add a user:
Connect Client
The Connect Client can be accessed via a preferred web browser by entering the following address
into the address bar: https://openvpn-ipaddress.
Users have the option to either connect to the VPN or log in to the Connect Client. When connecting,
the user will be connected to the VPN directly through their web browser. When the user decides
to log in to the Connect Client, they can download their user configuration file (client.ovpn) and
use it to connect to the VPN with other OpenVPN clients.
For more information on OpenVPN Access Server go to https://openvpn.net/index.php/access-server/docs.html.
#!/usr/bin/python
# author: John Jarvis

import sys

AWS_REGIONS = ['ap-northeast-1',
               'ap-southeast-1',
               'ap-southeast-2',
               'eu-west-1',
               'sa-east-1',
               'us-east-1',
               'us-west-1',
               'us-west-2']

try:
    import boto.exception
    from boto.vpc import VPCConnection
    from boto.vpc import connect_to_region
except ImportError:
    print "failed=True msg='boto required for this module'"
    sys.exit(1)

def main():
    module = AnsibleModule(
        argument_spec=dict(
            region=dict(choices=AWS_REGIONS),
            aws_secret_key=dict(aliases=['ec2_secret_key', 'secret_key'],
                                no_log=True),
            aws_access_key=dict(aliases=['ec2_access_key', 'access_key']),
            tags=dict(default=None, type='dict'),
        )
    )

    tags = module.params.get('tags')
    aws_secret_key = module.params.get('aws_secret_key')
    aws_access_key = module.params.get('aws_access_key')
    region = module.params.get('region')

    # If we have a region specified, connect to its endpoint.
    if region:
        try:
            vpc = connect_to_region(region, aws_access_key_id=aws_access_key,
                                    aws_secret_access_key=aws_secret_key)
        except boto.exception.NoAuthHandlerFound, e:
            module.fail_json(msg=str(e))
    else:
        module.fail_json(msg="region must be specified")

    subnet_ids = []
    for tag, value in tags.iteritems():
        for subnet in vpc.get_all_subnets(filters={"tag:" + tag: value}):
            subnet_ids.append(subnet.id)

    vpc_ids = []
    for tag, value in tags.iteritems():
        for vpc in vpc.get_all_vpcs(filters={"tag:" + tag: value}):
            vpc_ids.append(vpc.id)

    module.exit_json(changed=False, vpc_ids=vpc_ids, subnet_ids=subnet_ids)

# import module snippets
from ansible.module_utils.basic import *
main()
The following playbook will show you how to use this additional module. This example playbook
will get the ID of the VPC with resource tag Name=test-vpc (if it exists) and delete the VPC.
# vi vpc_delete.yml
You can use the same module to get subnet IDs based on resource tags; the JSON output key used is
subnet_ids.
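The book's listing for this playbook is elided in this excerpt; a sketch of how the module might be invoked, assuming it is saved as vpc_lookup in your Ansible library path, could look like this:

```yaml
# A sketch, not the book's exact listing. Assumes the module above is
# available as "vpc_lookup" and a VPC tagged Name=test-vpc exists.
- hosts: localhost
  connection: local
  gather_facts: no
  vars:
    region: ap-southeast-2
  tasks:
    - name: look up VPC and subnet IDs by resource tag
      local_action:
        module: vpc_lookup
        region: "{{ region }}"
        tags:
          Name: test-vpc
      register: vpc_info

    - name: delete the VPC
      local_action:
        module: ec2_vpc
        region: "{{ region }}"
        vpc_id: "{{ item }}"
        resource_tags: '{"Name":"test-vpc"}'
        state: absent
      with_items: vpc_info.vpc_ids
```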