The Good Parts of AWS
Daniel Vassallo, Josh Pschorr
Table of Contents

Preface
Part 1: The Good Parts
    The Default Heuristic
    DynamoDB
    S3
    EC2
    EC2 Auto Scaling
    Lambda
    ELB
    CloudFormation
    Route 53
    SQS
    Kinesis
Part 2: The Bootstrap Guide
    Starting from Scratch
    Infrastructure as Code
    Automatic Deployments
    Load Balancing
    Scaling
    Production
    Custom Domains
    HTTPS
    Network Security
Preface
This is not your typical reference book. It doesn’t cover all
of AWS or all its quirks. Instead, we want to help you
realize which AWS features you’d be foolish not to use.
Features for which you almost never need to consider
alternatives. Features that have passed the test of time by
being at the backbone of most things on the internet.
We have worked on all sorts of web applications, from
small projects to massive web services running on
thousands of servers. We have been using AWS since it
was just three services without a web console, and we
even got to help build a small part of AWS itself.
Part 1: The Good Parts
The Default Heuristic

likely be the optimal choice.
When you start with little experience, you might not have
a default choice for everything you want to do. In this
book we’re going to share our own default choices when
it comes to AWS services and features. We’re going to
explain why some things became our defaults, and why we
don’t even bother with other things. We hope this
information will help you build or supplement your basket
of default choices, so that when you take on your next
project you will be able to make choices quickly and
confidently.
DynamoDB
Amazon describes DynamoDB as a database, but it’s best
seen as a highly-durable data structure in the cloud. A
partitioned B-tree data structure, to be precise.
Compared to a relational database, DynamoDB requires
you to do most of the data querying yourself within your
application. You can either read a single value out of
DynamoDB, or you can get a contiguous range of data.
But if you want to aggregate, filter, or sort, you have to do
that yourself, after you receive the requested data range.
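As a concrete illustration, here is that division of labor, with a sorted array standing in for a DynamoDB partition (a sketch with made-up items, not the real DynamoDB API):

```javascript
// One DynamoDB partition behaves like a sorted map (a B-tree): you can
// fetch a single key, or a contiguous range of sort keys. Anything
// fancier happens in your application after the data comes back.
const events = [ // hypothetical items, already sorted by sort key (ts)
  { ts: 1, type: 'login' },
  { ts: 2, type: 'purchase' },
  { ts: 3, type: 'logout' },
  { ts: 4, type: 'purchase' },
];

// What DynamoDB gives you: a contiguous range of the sort key.
function queryRange(items, from, to) {
  return items.filter((e) => e.ts >= from && e.ts <= to);
}

// What you do yourself: aggregate, filter, or sort the returned range.
const range = queryRange(events, 2, 4);
const purchases = range.filter((e) => e.type === 'purchase').length;
console.log(purchases); // 2
```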
is rarely a large factor when deciding whether DynamoDB
is a viable option. Instead, it’s generally request pricing
that matters most.
would get throttled if there were a million and one
requests in a day.) Obviously, both of these assumptions
are impractical. In reality, you’re going to have to
provision abundant headroom in order to deal with the
peak request rate, as well as to handle any general
uncertainty in demand. With provisioned capacity, you
carry the burden of monitoring your utilization and
proactively provisioning the necessary capacity.
provisioned capacity for peace of mind.
happens, disaster strikes: all write operations on the main
table start failing. The most problematic part of this
behavior is that there’s no way to monitor the state of this
internal queue. So, the only way to prevent it is to
monitor the throttled request count on all your global
indexes, and then to react quickly to any throttling by
provisioning additional capacity on the affected indexes.
Nevertheless, this situation tends to only happen with
highly active tables, and short bursts of throttling rarely
cause this problem. Global indexes are still very useful,
but keep in mind that they’re eventually
consistent and that they can indirectly affect the main
table in a very consequential manner if they happen to be
underprovisioned.
S3
If you’re storing data—whatever it is—S3 should be the
very first thing to consider using. It is highly-durable, very
easy to use and, for all practical purposes, it has infinite
bandwidth and infinite storage space. It is also one of the
few AWS services that requires absolutely zero capacity
management.
Fundamentally, you can think of S3 as a highly-durable
hash table in the cloud. The key can be any string, and the
value any blob of data up to 5 TB. When you upload or
download S3 objects, there’s an initial delay of around
20 ms before the data gets streamed at a rate of around
90 MB/s. You can have as many parallel uploads and
downloads as you want—thus, the infinite bandwidth. You
can also store as many objects as you want and use as
much volume as you need, without either having to
provision capacity in advance or experiencing any
performance degradation as the scale increases.
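In code terms, the mental model is just a key-value map (a local stand-in for illustration only; real S3 access goes over HTTP through an SDK or the CLI):

```javascript
// S3, conceptually: a durable hash table from string keys to blobs
// (each blob up to 5 TB).
const bucket = new Map();

// "Upload": associate any string key with a blob.
bucket.set('images/logo.png', Buffer.from([0x89, 0x50, 0x4e, 0x47]));
bucket.set('backups/2019/db.dump', Buffer.from('...'));

// "Download": look the blob up by its exact key.
const blob = bucket.get('images/logo.png');
console.log(blob.length); // 4

// There is no query language; listing keys by prefix is the closest thing.
const backups = [...bucket.keys()].filter((k) => k.startsWith('backups/'));
console.log(backups); // [ 'backups/2019/db.dump' ]
```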
At first, you can start with the default storage class and
ignore all the other classes. Unless you’re storing several
terabytes in S3, it is almost never worth bothering with
them. In general, you can spare yourself the trouble of
understanding all the implications of the different storage
classes until you really need to start saving money on
S3 storage costs.
since 2010 (before storage classes existed), and it is
currently more expensive than the default storage class,
but with no benefits and lower availability (in theory).
A word about static website hosting on S3. Unfortunately,
S3 doesn’t support HTTPS when used as a static website
host, which is a problem. Web browsers will display a
warning, and search engines will penalize you in the
rankings. You could set up HTTPS using CloudFront, but
it’s probably much more trouble than it’s worth.
Nowadays, there are plenty of static website hosts
outside of AWS that offer a much better hosting
experience for static websites.
EC2
EC2 allows you to get a complete computer in the cloud
in a matter of seconds. The nice thing about EC2 is that
the computer you get will be very similar to the computer
you use to develop your software. If you can run your
software on your computer, you can almost certainly run
it on EC2 without any changes. This is one of EC2’s main
advantages compared to other types of compute
platforms (such as Lambda): you don’t have to adapt your
application to your host.
get to pick an instance type from its catalog. Sometimes
this may seem inefficient, because the instance type you
settle for might come with resources you don’t need. But
this commoditization of server types is what makes it
possible for EC2 to exist as a service and to have servers
available to be provisioned in a matter of seconds.
you save money by allowing EC2 to take away your
instance whenever it wants. The cost savings with spot
can be even more significant than with reserved
instances, but of course not every use case can tolerate
having compute capacity taken away from it randomly.
compensate for the excess load.
fluctuations are not significant, or they are too abrupt, or
they are not very smooth, Auto Scaling will almost
certainly not work well for you.
Having said all that, you should still almost always use
Auto Scaling if you’re using EC2! Even if you only have one
instance.
The other nice thing that comes with Auto Scaling is the
ability to add or remove instances simply by updating the
desired capacity setting. Auto Scaling becomes a
launch template for your EC2 instances, and you get a dial
that you can turn up or down depending on how many
running instances you need. There is no faster way to add
instances to your fleet than with this method.
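Conceptually, the desired-capacity dial works like this reconciliation loop (a simplified sketch; the real service also accounts for health checks, cooldowns, and scaling policies):

```javascript
// An Auto Scaling group reduces fleet management to one number: it
// continuously reconciles the running instances against desired capacity.
function reconcile(desiredCapacity, runningInstanceIds) {
  const diff = desiredCapacity - runningInstanceIds.length;
  if (diff > 0) return { action: 'launch', count: diff }; // turn the dial up
  if (diff < 0) return { action: 'terminate', count: -diff }; // turn it down
  return { action: 'none', count: 0 };
}

console.log(reconcile(3, ['i-a'])); // { action: 'launch', count: 2 }
console.log(reconcile(1, ['i-a', 'i-b', 'i-c'])); // { action: 'terminate', count: 2 }
```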
Lambda
If EC2 is a complete computer in the cloud, Lambda is a
code runner in the cloud. With EC2 you get an operating
system, a file system, access to the server’s hardware, etc.
But with Lambda, you just upload some code and Amazon
runs it for you. The beauty of Lambda is that it’s the
simplest way to run code in the cloud. It abstracts away
everything except for a function interface, which you get
to fill in with the code you want to run.
sophisticated piece of software on Lambda without
making some very drastic changes to your application and
accepting some significant new limitations from the
platform.
new TLS certificate from the AWS Certificate Manager.
Using Lambda, you can extend the CloudFormation
language to add (almost) any capability you want. And so
on—you get the idea. Lambda is a great way to extend
existing AWS features.
problematic, depending on your use case, but we’re quite
confident that these issues will improve over time.
everything in the stack beneath your code is a
phenomenal advance in software development. However,
when building software in the present, we have to assess
the options available to us today, and while Lambda has
its place, it is certainly not a substitute for EC2.
ELB
ELB is a load balancer service, and comes in three
variants: Application (ALB), Network (NLB), and Classic.
Classic is a legacy option, and remains there only because
it works with very old AWS accounts, where you can still
run EC2 instances outside of a VPC. For any new setup,
you should choose one of the other two variants.
On the other hand, NLBs behave like load balancers, but
they work by routing network packets rather than by
proxying HTTP requests. An NLB is more like a very
sophisticated network router. When a client connects to
a server through an NLB, the server sees the client as if
it had connected directly.
scale quickly enough to handle a big burst of traffic.
about any obscure capacity limits. The fact that it’s also
faster and less expensive is a nice bonus.
CloudFormation
When using AWS, you almost always want to use some
CloudFormation (or a similar tool). It lets you create and
update the things you have in AWS without having to click
around on the console or write fragile scripts. It takes a
while to get the hang of it, but the time savings pay off the
initial investment almost immediately. Even for
development, the ability to tear down everything cleanly
and recreate your AWS setup in one click is extremely
valuable.
easier to read and modify). Then you point
CloudFormation to your AWS account, and it creates all
the resources you defined. If you run the script again
without making any changes, CloudFormation won’t do
anything (it’s idempotent). If you make a change to one
resource, it will change only that resource, plus any other
resources that depend on the modified one (if necessary).
If you change your mind about an update, you can safely
tell CloudFormation to roll it back. You can also tell
CloudFormation to tear down everything it created, and it
will give you your AWS account back in the original state
(with a few exceptions).
Sometimes it will try to reconcile, but become stuck in an
endless loop.
Route 53
Route 53 is a DNS service. It lets you translate domain
names to IP addresses. There’s nothing particularly
special about Route 53’s DNS capabilities. In fact, it has a
few annoying (but mostly minor) limitations, such as the
lack of support for ALIAS records (unless they point to
AWS resources) and the lack of DNSSEC support.
However, the reason we stick to using Route 53 is that,
first of all, it’s good enough, and secondly, it integrates
very well with ELB. There is a significant benefit in having
CloudFormation automatically set up your load balancer
together with the DNS records for your custom domain.
Route 53 makes this possible, whereas if you were to use
a different DNS provider, you’d likely have to manage your
DNS records manually.
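For example, a Route 53 alias record can be declared next to the load balancer in the same template (a hedged sketch; the LoadBalancer resource name and the example.com hosted zone are assumptions):

```yaml
  DNSRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: example.com.   # assumed existing hosted zone
      Name: app.example.com.
      Type: A
      AliasTarget:                   # alias to the load balancer defined elsewhere in the template
        DNSName: !GetAtt LoadBalancer.DNSName
        HostedZoneId: !GetAtt LoadBalancer.CanonicalHostedZoneID
```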
SQS
SQS is a highly-durable queue in the cloud. You put
messages on one end, and a consumer takes them out
at the other end. The messages are consumed in
almost first-in-first-out order, but the ordering is not
strict. The lack of strict ordering happens because your
SQS queue is actually a bunch of queues behind the
scenes. When you enqueue a message, it goes to a
random queue, and when you poll, you also poll a random
queue. In addition, duplicate messages can emerge within
SQS, so your consumers should be prepared to handle
this situation.
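A toy model of these semantics makes the near-FIFO behavior easy to see (a simulation, not the SQS API):

```javascript
// Your SQS queue is really several internal queues. Enqueueing picks a
// random one, and polling reads from a random one, so FIFO order holds
// within each internal queue but not across the queue as a whole.
const internalQueues = [[], [], []];
const randomQueue = () =>
  internalQueues[Math.floor(Math.random() * internalQueues.length)];

const enqueue = (msg) => randomQueue().push(msg);

function poll() {
  for (;;) {
    if (internalQueues.every((q) => q.length === 0)) return undefined;
    const q = randomQueue();
    if (q.length > 0) return q.shift(); // FIFO within one internal queue
  }
}

for (let i = 1; i <= 10; i++) enqueue(i);

const received = [];
let msg;
while ((msg = poll()) !== undefined) received.push(msg);

// All ten messages arrive exactly once here, but usually not in 1..10 order.
console.log(received);
```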
Like S3, SQS is one of the few AWS services that requires
zero capacity management. There is no limit on the rate
of messages enqueued or consumed, and you don’t have
to worry about any throttling limits. The number of
messages stored in SQS (the backlog size) is also
unlimited. As long as you can tolerate the lack of strict
ordering and the possibility of duplicates, this property
makes SQS a great default choice for dispatching
asynchronous work.
Kinesis
You can think of a Kinesis stream as a highly-durable
linked list in the cloud. The use cases for Kinesis are often
similar to those of SQS—you would typically use either
Kinesis or SQS when you want to enqueue records for
asynchronous processing. The main difference between
the two services is that SQS can only have one consumer,
while Kinesis can have many. Once an SQS message gets
consumed, it gets deleted from the queue. But Kinesis
records get added to a list in a stable order, and any
number of consumers can read a copy of the stream by
keeping a cursor over this never-ending list. Multiple
consumers don’t affect each other, and if one falls behind,
it doesn’t slow down the other consumers. Whenever
consumers read data out of Kinesis, they will always get
their records in the same order.
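A sketch of that model, with an array as the stream and independent cursors per consumer (a simulation of the semantics, not the Kinesis API):

```javascript
// Kinesis, conceptually: a durable append-only list. Producers append to
// the end; each consumer keeps its own cursor and reads independently.
const stream = [];
const append = (record) => stream.push(record);

function makeConsumer() {
  let cursor = 0; // this consumer's position in the never-ending list
  return {
    read(max) {
      const records = stream.slice(cursor, cursor + max);
      cursor += records.length;
      return records; // always in the stream's stable order
    },
  };
}

['a', 'b', 'c', 'd'].forEach(append);

const fast = makeConsumer();
const slow = makeConsumer();

console.log(fast.read(10)); // [ 'a', 'b', 'c', 'd' ]
console.log(slow.read(2));  // [ 'a', 'b' ] -- falling behind hurts nobody else
console.log(slow.read(10)); // [ 'c', 'd' ] -- same records, same order
```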
The reason for this cost profile is simple: Kinesis streams
are optimized for sequential reads and sequential writes.
Records get added to the end of a file, and reads always
happen sequentially from a pointer on that file. Unlike
SQS, records in a Kinesis stream don’t get deleted when
consumed, so it’s a pure append-only data structure
behind the scenes. Data simply ages out of a Kinesis
stream once it exceeds its retention period, which is 24
hours by default.
Part 2: The Bootstrap
Guide
In this part, we will walk you through getting a basic web
application running in the cloud on AWS. We will start
with a blank project, and build the application and its
infrastructure step by step. Each step focuses on a single
aspect of the infrastructure, and we will try to explain in
detail what’s happening, and why.
Starting from Scratch
Objective
Get a simple web application running on a single EC2
instance.
Steps
1. Write a basic "hello world" web application.
2. Manually create basic AWS infrastructure to host
our application.
3. Manually install our application on an EC2 instance.
Creating our application
We will need git and npm installed. If you don’t already have
them, the best way to install them depends on your system,
so it’s best to check the official guidance.
git
https://git-scm.com/book/en/v2/Getting-Started-
Installing-Git
npm
https://www.npmjs.com/get-npm
server.js
❶ We’ll run the server on port 8080 because port numbers below 1024
require root privileges.
terminal
$ node server.js
Server running at http://localhost:8080/
terminal
$ curl localhost:8080
Hello World
package.json
{
  "name": "aws-bootstrap",
  "version": "1.0.0",
  "description": "",
  "main": "server.js",
  "scripts": {
    "start": "node ./node_modules/pm2/bin/pm2 start ./server.js --name hello_aws --log ../logs/app.log", ❷ ❸
    "stop": "node ./node_modules/pm2/bin/pm2 stop hello_aws", ❹
    "build": "echo 'Building...'" ❺
  },
  "dependencies": {
    "pm2": "^4.2.0" ❶
  }
}
terminal
$ npm install
now, let’s create the directory manually on our local
machine.
terminal
$ mkdir ../logs
terminal
$ npm start
terminal
$ curl localhost:8080
Hello World
Pushing our code to GitHub
If you don’t already have a GitHub account, create one
(it’s free for simple projects like this). You’ll also need to
set up SSH access.
You should never use your AWS root account credentials
other than to create an Administrator user for your account.
AWS has several best practices for managing accounts and
credentials.
If you are creating a new AWS account or you don’t yet have
an Administrator user, use the root account to create an
Administrator user now, and then use only that user.
Figure 1. EC2 Instances
Figure 2. EC2 Instance AMI
Figure 3. EC2 Instance Type
Figure 4. EC2 Security Groups
Hit the Review and Launch button, then Launch, to get our
instance started. You’ll be presented with a scary-looking
screen saying that you won’t be able to log on to the
instance without a key pair. This is not entirely true, as
EC2 Instance Connect provides a way to SSH into your
instance without a key pair. So, select Proceed without a
key pair and then Launch Instance.
Figure 5. EC2 Instance Key Pair
Figure 6. Running EC2 Instance
Figure 7. EC2 Connect
Figure 8. EC2 Connect SSH Connection
terminal
❶ Updates the installed yum packages.
❷ We’ll use NVM to install node.
❸ Makes sure NVM is available.
❹ Installs node via NVM.
terminal
$ mkdir logs
$ curl -sL https://github.com/<username>/aws-bootstrap/archive/master.zip --output master.zip ❶ ❷
$ unzip master.zip
$ mv aws-bootstrap-master app
$ cd app
$ npm install
$ npm start
$ curl localhost:8080
Hello World
configure again, either. But we’ll get there, step by step, in
the following sections.
At this point, you may also want to set up billing alerts with
CloudWatch to help you monitor your AWS charges and to
remind you if you forget to decommission any experiments
you have performed using your AWS account.
Infrastructure as Code
Objective
Recreate our infrastructure using CloudFormation.
Steps
1. Configure the AWS CLI.
2. Create a CloudFormation Stack.
3. Deploy the CloudFormation Stack.
documentation.
Infrastructure as code
Infrastructure as code is the idea of using the same
processes and tools to update your infrastructure as you
do for your application code. We will now start defining
our infrastructure into files that can be linted, schema-
checked, version controlled, and deployed without
manual processes. Within AWS, the tool for this is
CloudFormation.
We’ll use the AWS CLI to submit infrastructure updates to
CloudFormation. Although we could interact with
CloudFormation directly from the AWS CLI, it is easier to
write a script containing the necessary parameters. We’ll
call the script deploy-infra.sh and use it to deploy
changes to our CloudFormation stack. A stack is what
CloudFormation calls the collection of resources that are
managed together as a unit.
deploy-infra.sh
#!/bin/bash
STACK_NAME=awsbootstrap ❶
REGION=us-east-1 ❷
CLI_PROFILE=awsbootstrap ❸
EC2_INSTANCE_TYPE=t2.micro ❹
❶ The stack name is the name that CloudFormation will use to refer to
the group of resources it will manage.
❷ The region to deploy to.
❸ We use the awsbootstrap profile that we created in the previous
section.
❹ An instance type in the free tier.
❺ The main.yml file is the CloudFormation template that we will use to
define our infrastructure.
❻ These correspond to the input parameters in the template that we’ll
write next.
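The heart of the script, which footnotes ❺ and ❻ refer to, is a single call to aws cloudformation deploy; a sketch of that call (flag values come from the variables defined above):

```shell
# Deploy the CloudFormation template in main.yml (❺), passing the
# instance type as a parameter override (❻).
aws cloudformation deploy \
  --region "$REGION" \
  --profile "$CLI_PROFILE" \
  --stack-name "$STACK_NAME" \
  --template-file main.yml \
  --no-fail-on-empty-changeset \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameter-overrides EC2InstanceType="$EC2_INSTANCE_TYPE"
```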
terminal
$ chmod +x deploy-infra.sh
Parameters
These are the input parameters for the template. They
give us the flexibility to change some settings without
having to modify the template code.
Resources
This is the bulk of the template. Here is where we define
and configure the resources that CloudFormation will
manage for us.
Outputs
These are like return values for the template. We use
them to make it easy to find some of the resources that
CloudFormation will create for us.
We’re going to name our template file main.yml. There
will be other template files later, but they will all be
referenced from here. This file will become quite large, so
let’s start by sketching out its high-level structure.
main.yml
AWSTemplateFormatVersion: 2010-09-09
Parameters:
Resources:
Outputs:
main.yml
Parameters:
  EC2InstanceType:
    Type: String
  EC2AMI:
    Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>' ❶
    Default: '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2'
❶ This is a special parameter type that allows our template to get the
latest AMI without having to specify the exact version.
The first resource that we’re going to define is our
security group. This functions like a firewall for the EC2
instance that we’ll create. We need to add a rule to allow
TCP traffic to port 8080 (to reach our application) and to
port 22 (for SSH access).
main.yml
Resources:
  SecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: !Sub 'Internal Security group for ${AWS::StackName}' ❶ ❷
      SecurityGroupIngress:
        - IpProtocol: tcp
          FromPort: 8080
          ToPort: 8080
          CidrIp: 0.0.0.0/0
        - IpProtocol: tcp
          FromPort: 22
          ToPort: 22
          CidrIp: 0.0.0.0/0
      Tags: ❸
        - Key: Name
          Value: !Ref AWS::StackName ❹
The next resource we’ll create is an IAM role, which our
EC2 instance will use to define its permissions. At this
point our application doesn’t need much, as it isn’t using
any AWS services yet. For now, we will grant our instance
role full access to AWS CloudWatch, but there are many
other managed policies, which you can choose based on
what permissions your application needs.
main.yml
Resources:
  SecurityGroup: ...
  InstanceRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          Effect: Allow
          Principal:
            Service:
              - "ec2.amazonaws.com"
          Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/CloudWatchFullAccess
      Tags:
        - Key: Name
          Value: !Ref AWS::StackName
main.yml
Resources:
  SecurityGroup: ...
  InstanceRole: ...
  InstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Roles:
        - Ref: InstanceRole
main.yml
Resources:
  SecurityGroup: ...
  InstanceRole: ...
  InstanceProfile: ...
  Instance:
    Type: AWS::EC2::Instance
    CreationPolicy: ❶
      ResourceSignal:
        Timeout: PT15M
        Count: 1
    Metadata:
      AWS::CloudFormation::Init:
        config:
          packages: ❷
            yum:
              wget: []
              unzip: []
    Properties:
      ImageId: !Ref EC2AMI ❸
      InstanceType: !Ref EC2InstanceType ❹
      IamInstanceProfile: !Ref InstanceProfile
      Monitoring: true
      SecurityGroupIds:
        - !GetAtt SecurityGroup.GroupId ❺
      UserData:
        # ... ❻
      Tags:
        - Key: Name
          Value: !Ref AWS::StackName
❶ This tells CloudFormation to wait for a signal before marking the new
instance as created (we’ll see how in the install script).
❷ Here we define some prerequisites that CloudFormation will install on
our instance (the wget and unzip utilities). We’ll need them to install
our application.
❸ The AMI ID that we take as a template parameter.
❹ The EC2 instance type that we take as a template parameter.
❺ !GetAtt is a CloudFormation function that can reference attributes
from other resources.
❻ See the next code listing for how to fill in this part.
main.yml
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash -xe
          # Have CloudFormation install any files and packages from the metadata
          /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --region ${AWS::Region} --resource Instance ❷
bash
# Dot source the files to ensure that variables are available within the current shell
. /home/ec2-user/.nvm/nvm.sh
. /home/ec2-user/.bashrc

# Run server
cd app
npm install
npm start
EOF
instance as a template output.
main.yml
Outputs:
  InstanceEndpoint:
    Description: The DNS name for the created instance
    Value: !Sub "http://${Instance.PublicDnsName}:8080" ❶
    Export:
      Name: InstanceEndpoint
deploy-infra.sh
# If the deploy succeeded, show the DNS name of the created instance
if [ $? -eq 0 ]; then
  aws cloudformation list-exports \
    --profile awsbootstrap \
    --query "Exports[?Name=='InstanceEndpoint'].Value" ❶ ❷
fi
Deploying
Now it’s time to deploy our infrastructure. Let’s run the
deploy-infra.sh command. We can check the status of
our stack from the CloudFormation console. The events
tab shows which resources are being created, modified,
or destroyed.
terminal
$ ./deploy-infra.sh
terminal
$ curl ec2-35-174-3-173.compute-1.amazonaws.com:8080
Hello World
if we make a change to our application, our EC2 instance
won’t be updated. Next, we will make our instance receive
a new version of our application automatically as soon as
a change is pushed to GitHub.
Automatic Deployments
Objective
Automatically update our application when a change
gets pushed to GitHub.
Steps
1. Get GitHub credentials.
2. Install the CodeDeploy agent on our EC2 instance.
3. Create a CodePipeline.
Figure 9. GitHub Access Token Generation
terminal
$ mkdir -p ~/.github
$ echo "aws-bootstrap" > ~/.github/aws-bootstrap-repo
$ echo "<username>" > ~/.github/aws-bootstrap-owner ❶
$ echo "<token>" > ~/.github/aws-bootstrap-access-token ❷
S3 bucket for build artifacts
CodePipeline requires an S3 bucket to store artifacts built
by CodeBuild. We chose to create this bucket outside of
our main CloudFormation template because
CloudFormation is unable to delete S3 buckets unless
they’re empty. This limitation becomes very inconvenient
during development, because you would have to delete
the S3 bucket manually every time you tear down your
CloudFormation stack. Therefore, we like to put
resources such as these in a separate CloudFormation
template called setup.yml.
setup.yml
AWSTemplateFormatVersion: 2010-09-09

Parameters:
  CodePipelineBucket:
    Type: String
    Description: 'The S3 bucket for CodePipeline artifacts.'

Resources:
  CodePipelineS3Bucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Properties:
      BucketName: !Ref CodePipelineBucket
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
S3 bucket name for our CodePipeline.
start-service.sh
#!/bin/bash -xe
source /home/ec2-user/.bash_profile ❶
cd /home/ec2-user/app/release ❷
npm run start ❸
❶ Makes sure any user-specific software that we’ve installed (e.g., npm via
nvm) is available.
❷ Changes into the working directory in which our application expects to
be run.
❸ Runs the start script we put in package.json.
stop-service.sh
#!/bin/bash -xe
source /home/ec2-user/.bash_profile
[ -d "/home/ec2-user/app/release" ] && \
cd /home/ec2-user/app/release && \
npm stop
buildspec.yml
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 10
  pre_build:
    commands:
      # run 'npm install' using versions in package-lock.json
      - npm ci
  build:
    commands:
      - npm run build
artifacts:
  files:
    - start-service.sh
    - stop-service.sh
    - server.js
    - package.json
    - appspec.yml
    - 'node_modules/**/*'
appspec.yml
version: 0.0
os: linux
files:
  # unzip the build artifact in ~/app
  - source: /
    destination: /home/ec2-user/app/release
permissions:
  # change permissions from root to ec2-user
  - object: /home/ec2-user/app/release
    pattern: "**"
    owner: ec2-user
    group: ec2-user
hooks:
  ApplicationStart:
    # start the application
    - location: start-service.sh
      timeout: 300
      runas: ec2-user
  ApplicationStop:
    # stop the application
    - location: stop-service.sh
      timeout: 300
      runas: ec2-user
main.yml
Parameters:
  EC2InstanceType:
    Type: String
  EC2AMI:
    Type: 'AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>'
    Default: '/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2'
  CodePipelineBucket:
    Type: String
    Description: 'The S3 bucket for CodePipeline artifacts.'
  GitHubOwner:
    Type: String
    Description: 'The username of the source GitHub repo.'
  GitHubRepo:
    Type: String
    Description: 'The source GitHub repo name (without the username).'
  GitHubBranch:
    Type: String
    Default: master
    Description: 'The source GitHub branch.'
  GitHubPersonalAccessToken:
    Type: String
    NoEcho: true
    Description: 'A GitHub personal access token with "repo" and "admin:repo_hook" permissions.'
main.yml
InstanceRole:
  Type: "AWS::IAM::Role"
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        Effect: Allow
        Principal:
          Service:
            - "ec2.amazonaws.com"
        Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/CloudWatchFullAccess
      - arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforAWSCodeDeploy ❶
    Tags:
      - Key: Name
        Value: !Ref AWS::StackName
main.yml
DeploymentRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: "2012-10-17"
      Statement:
        Effect: Allow
        Principal:
          Service:
            - codepipeline.amazonaws.com
            - codedeploy.amazonaws.com
            - codebuild.amazonaws.com
        Action: sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/PowerUserAccess
main.yml
BuildProject:
Type: AWS::CodeBuild::Project
Properties:
Name: !Ref AWS::StackName
ServiceRole: !GetAtt DeploymentRole.Arn
Artifacts:
Type: CODEPIPELINE
Environment:
Type: LINUX_CONTAINER
ComputeType: BUILD_GENERAL1_SMALL
Image: aws/codebuild/standard:2.0
Source:
Type: CODEPIPELINE
main.yml
DeploymentApplication:
Type: AWS::CodeDeploy::Application
Properties:
ApplicationName: !Ref AWS::StackName
ComputePlatform: Server ❶
main.yml
StagingDeploymentGroup:
Type: AWS::CodeDeploy::DeploymentGroup
DependsOn: Instance
Properties:
DeploymentGroupName: staging
ApplicationName: !Ref DeploymentApplication
DeploymentConfigName: CodeDeployDefault.AllAtOnce ❶
ServiceRoleArn: !GetAtt DeploymentRole.Arn
Ec2TagFilters: ❷
- Key: aws:cloudformation:stack-name
Type: KEY_AND_VALUE
Value: !Ref AWS::StackName
main.yml
Pipeline:
Type: AWS::CodePipeline::Pipeline
Properties:
Name: !Ref AWS::StackName
ArtifactStore:
Location: !Ref CodePipelineBucket
Type: S3
RoleArn: !GetAtt DeploymentRole.Arn
Stages:
- Name: Source
Actions:
- Name: Source
ActionTypeId:
Category: Source
Owner: ThirdParty
Version: 1
Provider: GitHub
OutputArtifacts:
- Name: Source
Configuration:
Owner: !Ref GitHubOwner
Repo: !Ref GitHubRepo
Branch: !Ref GitHubBranch
OAuthToken: !Ref GitHubPersonalAccessToken
PollForSourceChanges: false ❶
RunOrder: 1
- Name: Build
Actions:
- Name: Build
ActionTypeId:
Category: Build
Owner: AWS
Version: 1
Provider: CodeBuild
InputArtifacts:
- Name: Source
OutputArtifacts:
- Name: Build
Configuration:
ProjectName: !Ref BuildProject
RunOrder: 1
- Name: Staging
Actions:
- Name: Staging
InputArtifacts:
- Name: Build
ActionTypeId:
Category: Deploy
Owner: AWS
Version: 1
Provider: CodeDeploy
Configuration:
ApplicationName: !Ref DeploymentApplication
DeploymentGroupName: !Ref StagingDeploymentGroup
RunOrder: 1
❶ We don’t need to poll for changes because we’ll set up a webhook to
trigger a deployment as soon as GitHub receives a change.
main.yml
PipelineWebhook:
Type: AWS::CodePipeline::Webhook
Properties:
Authentication: GITHUB_HMAC
AuthenticationConfiguration:
SecretToken: !Ref GitHubPersonalAccessToken
Filters:
- JsonPath: $.ref
MatchEquals: 'refs/heads/{Branch}'
TargetPipeline: !Ref Pipeline
TargetAction: Source
Name: !Sub 'webhook-${AWS::StackName}'
TargetPipelineVersion: !GetAtt Pipeline.Version
RegisterWithThirdParty: true
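To make the filter above concrete: GitHub's push event payload contains a `ref` field, and the webhook fires only when it matches `refs/heads/{Branch}`. We can illustrate the same selection locally with jq (the sample payload below is made up, not a real GitHub event):

```shell
# Illustrative only: GitHub's push webhook payload carries the pushed ref,
# which CodePipeline's webhook compares against 'refs/heads/{Branch}'.
PAYLOAD='{"ref":"refs/heads/master","before":"abc123","after":"def456"}'
echo "$PAYLOAD" | jq -r ".ref"   # prints: refs/heads/master
```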
main.yml
Instance:
Type: AWS::EC2::Instance
CreationPolicy:
ResourceSignal:
Timeout: PT5M
Count: 1
Metadata:
AWS::CloudFormation::Init:
config:
packages:
yum:
ruby: [] ❶
files:
/home/ec2-user/install: ❷
source: !Sub "https://aws-codedeploy-${AWS::Region}.s3.amazonaws.com/latest/install"
mode: "000755" # executable
commands:
00-install-cd-agent: ❸
command: "./install auto"
cwd: "/home/ec2-user/"
Properties:
ImageId: !Ref EC2AMI
InstanceType: !Ref EC2InstanceType
IamInstanceProfile: !Ref InstanceProfile
Monitoring: true
SecurityGroupIds:
- !GetAtt SecurityGroup.GroupId
UserData:
# ... ❹
Tags:
- Key: Name
Value: !Ref AWS::StackName
for us now.
main.yml
UserData:
Fn::Base64: !Sub |
#!/bin/bash -xe
# Dot source the files to ensure that variables are available within the current shell
. /home/ec2-user/.nvm/nvm.sh
. /home/ec2-user/.bashrc
# Have CloudFormation install any files and packages from the metadata
/opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --region ${AWS::Region} --resource Instance
# Signal to CloudFormation that the instance is ready
/opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --region ${AWS::Region} --resource Instance
And with all of that done, we can deploy our
infrastructure updates. But first, we need to delete our
stack from the CloudFormation console, because the
changes we’ve made will not trigger CloudFormation to
tear down our EC2 instance and start a new one. So, let’s
delete our stack, and recreate it by running the deploy-infra.sh script.
terminal
$ ./deploy-infra.sh
application automatically as soon as they start.
server.js
terminal
Let’s wrap this up by pushing all our infrastructure
changes to our GitHub repository.
terminal
Load Balancing
Objective
Run our application on more than one EC2 instance.
Steps
1. Add a second EC2 instance.
2. Add an Application Load Balancer.
We can almost copy our EC2 instance configuration into a
new launch template resource as is, but there are slight
differences between the two specifications. We'll also need
to change the cfn-init and cfn-signal calls at the end of
the UserData script to dynamically determine the instance
ID at runtime.
main.yml
InstanceLaunchTemplate:
Type: AWS::EC2::LaunchTemplate
Metadata:
AWS::CloudFormation::Init:
config:
packages:
yum:
ruby: []
jq: []
files:
/home/ec2-user/install:
source: !Sub "https://aws-codedeploy-${AWS::Region}.s3.amazonaws.com/latest/install"
mode: "000755" # executable
commands:
00-install-cd-agent:
command: "./install auto"
cwd: "/home/ec2-user/"
Properties:
LaunchTemplateName: !Sub 'LaunchTemplate_${AWS::StackName}'
LaunchTemplateData:
ImageId: !Ref EC2AMI
InstanceType: !Ref EC2InstanceType
IamInstanceProfile:
Arn: !GetAtt InstanceProfile.Arn
Monitoring:
Enabled: true
SecurityGroupIds:
- !GetAtt SecurityGroup.GroupId
UserData:
# ... ❶
❶ See the next code listing for how to fill in this part.
Now let’s update the UserData script.
main.yml
UserData:
Fn::Base64: !Sub |
#!/bin/bash -xe
# Dot source the files to ensure that variables are available within the current shell
. /home/ec2-user/.nvm/nvm.sh
. /home/ec2-user/.bashrc
# Have CloudFormation install any files and packages from the metadata
/opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --region ${AWS::Region} --resource InstanceLaunchTemplate
export INSTANCE_ID=`curl http://169.254.169.254/latest/meta-data/instance-id` ❶
export LOGICAL_ID=`aws --region ${AWS::Region} ec2 describe-tags \ ❷ ❸
--filters "Name=resource-id,Values=${!INSTANCE_ID}" \
"Name=key,Values=aws:cloudformation:logical-id" \
| jq -r ".Tags[0].Value"`
❶ We’re using the Instance Metadata service to get the instance ID.
❷ Here, we’re getting the tags associated with this instance. The
aws:cloudformation:logical-id tag is automatically attached by
CloudFormation. Its value is what we pass to cfn-signal to signal a
successful launch.
❸ Note the usage of ${!INSTANCE_ID}. Since this is inside a
CloudFormation !Sub, if we used ${INSTANCE_ID}, CloudFormation
would have tried to do the substitution itself. Adding the ! tells
CloudFormation to emit a literal ${INSTANCE_ID} for bash to interpret.
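To see what that jq invocation is doing, we can replay it locally against a made-up response that mimics the shape of the real `describe-tags` output (the sample JSON is an assumption, not captured from AWS):

```shell
# A local illustration of how `jq -r ".Tags[0].Value"` extracts the logical ID
# from the describe-tags response. The sample JSON below is made up to mimic
# the real response shape.
SAMPLE='{"Tags":[{"Key":"aws:cloudformation:logical-id","ResourceId":"i-0abc","ResourceType":"instance","Value":"Instance"}]}'
LOGICAL_ID=$(echo "$SAMPLE" | jq -r ".Tags[0].Value")
echo "$LOGICAL_ID"   # prints: Instance
```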
main.yml
Instance:
Type: AWS::EC2::Instance
CreationPolicy:
ResourceSignal:
Timeout: PT5M
Count: 1
Properties:
LaunchTemplate:
LaunchTemplateId: !Ref InstanceLaunchTemplate
Version: !GetAtt InstanceLaunchTemplate.LatestVersionNumber ❶
Tags:
- Key: Name
Value: !Ref AWS::StackName
❶ Each time we update our launch template, it will get a new version
number. We always want to use the latest.
Adding a second instance is now as easy as creating a new
instance resource that references the same launch
template.
main.yml
Instance2:
Type: AWS::EC2::Instance
CreationPolicy:
ResourceSignal:
Timeout: PT5M
Count: 1
Properties:
LaunchTemplate:
LaunchTemplateId: !Ref InstanceLaunchTemplate
Version: !GetAtt InstanceLaunchTemplate.LatestVersionNumber
Tags:
- Key: Name
Value: !Ref AWS::StackName
main.yml
InstanceRole:
Type: "AWS::IAM::Role"
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
Effect: Allow
Principal:
Service:
- "ec2.amazonaws.com"
Action: sts:AssumeRole
ManagedPolicyArns:
- arn:aws:iam::aws:policy/CloudWatchFullAccess
- arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforAWSCodeDeploy
Policies:
- PolicyName: ec2DescribeTags ❶
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action: 'ec2:DescribeTags'
Resource: '*'
Tags:
- Key: Name
Value: !Ref AWS::StackName
main.yml
Outputs:
InstanceEndpoint1:
Description: The DNS name for the first instance
Value: !Sub "http://${Instance.PublicDnsName}:8080"
Export:
Name: InstanceEndpoint1
InstanceEndpoint2:
Description: The DNS name for the second instance
Value: !Sub "http://${Instance2.PublicDnsName}:8080"
Export:
Name: InstanceEndpoint2
Finally, let’s change our deploy-infra.sh script to give
us these URLs.
deploy-infra.sh
# If the deploy succeeded, show the DNS name of the created instance
if [ $? -eq 0 ]; then
aws cloudformation list-exports \
--profile awsbootstrap \
--query "Exports[?starts_with(Name,'InstanceEndpoint')].Value"
fi
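The `$?` check is plain bash, nothing AWS-specific; here's a self-contained illustration of the pattern:

```shell
# Minimal demonstration of the `$?` exit-status check used above.
# `$?` holds the exit status of the most recent command; 0 means success.
true                        # a command that always succeeds
if [ $? -eq 0 ]; then
  echo "deploy succeeded"
fi
false                       # a command that always fails
if [ $? -ne 0 ]; then
  echo "deploy failed"
fi
```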
terminal
$ ./deploy-infra.sh
GitHub.
terminal
server.js
terminal
CodePipeline console. Once the deployment is complete,
we can verify our change by making a request to both
URLs.
terminal
$ curl http://ec2-52-91-223-254.compute-1.amazonaws.com:8080
Hello World from ip-10-0-113-245.ec2.internal
$ curl http://ec2-3-93-145-152.compute-1.amazonaws.com:8080
Hello World from ip-10-0-61-251.ec2.internal
We’re also going to make sure that our two instances will
be running in separate availability zones. This is a
fundamental requirement for ensuring high availability
when using EC2. We recommend James Hamilton’s video
Failures at Scale and How to Ignore Them from 2012 as a
good introduction to how AWS thinks about availability
zones and high availability.
So, let’s start by adding our VPC and two subnets to our
CloudFormation template.
main.yml
VPC:
Type: AWS::EC2::VPC
Properties:
CidrBlock: 10.0.0.0/16
EnableDnsSupport: true
EnableDnsHostnames: true
Tags:
- Key: Name
Value: !Ref AWS::StackName
SubnetAZ1:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref VPC
AvailabilityZone: !Select [ 0, !GetAZs '' ] ❶ ❷
CidrBlock: 10.0.0.0/18
MapPublicIpOnLaunch: true ❸
Tags:
- Key: Name
Value: !Ref AWS::StackName
- Key: AZ
Value: !Select [ 0, !GetAZs '' ]
SubnetAZ2:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref VPC
AvailabilityZone: !Select [ 1, !GetAZs '' ] ❶ ❷
CidrBlock: 10.0.64.0/18
MapPublicIpOnLaunch: true ❸
Tags:
- Key: Name
Value: !Ref AWS::StackName
- Key: AZ
Value: !Select [ 1, !GetAZs '' ]
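As a quick sanity check on the CIDR math (a local sketch, not part of the template): the VPC's 10.0.0.0/16 holds 65,536 addresses, each /18 subnet holds 16,384, and 10.0.64.0/18 starts exactly where 10.0.0.0/18 ends, so the two subnets don't overlap:

```shell
# Sanity-check the subnet CIDR math locally.
# Convert a dotted-quad IP address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  read -r a b c d <<< "$1"
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}
block_size=$(( 1 << (32 - 18) ))   # addresses in a /18 block: 16384
az1=$(ip_to_int 10.0.0.0)
az2=$(ip_to_int 10.0.64.0)
# The AZ2 subnet begins exactly one /18 block after AZ1.
echo $(( az2 - az1 ))              # prints: 16384
echo "$block_size"                 # prints: 16384
```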
no longer using the default one. The internet gateway
makes it possible for our hosts to route network traffic to
and from the internet.
main.yml
InternetGateway:
Type: AWS::EC2::InternetGateway
Properties:
Tags:
- Key: Name
Value: !Ref AWS::StackName
InternetGatewayAttachment:
Type: AWS::EC2::VPCGatewayAttachment
Properties:
InternetGatewayId: !Ref InternetGateway
VpcId: !Ref VPC
main.yml
RouteTable:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref VPC
Tags:
- Key: Name
Value: !Ref AWS::StackName
DefaultPublicRoute:
Type: AWS::EC2::Route
DependsOn: InternetGatewayAttachment
Properties:
RouteTableId: !Ref RouteTable
DestinationCidrBlock: 0.0.0.0/0
GatewayId: !Ref InternetGateway
SubnetRouteTableAssociationAZ1:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
RouteTableId: !Ref RouteTable
SubnetId: !Ref SubnetAZ1
SubnetRouteTableAssociationAZ2:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
RouteTableId: !Ref RouteTable
SubnetId: !Ref SubnetAZ2
Now it’s time to create the load balancer itself. The load
balancer will exist in both of our subnets.
main.yml
LoadBalancer:
Type: AWS::ElasticLoadBalancingV2::LoadBalancer
Properties:
Type: application
Scheme: internet-facing
SecurityGroups:
- !GetAtt SecurityGroup.GroupId
Subnets:
- !Ref SubnetAZ1
- !Ref SubnetAZ2
Tags:
- Key: Name
Value: !Ref AWS::StackName
Then, let’s configure our load balancer to listen for HTTP
traffic on port 80, and forward that traffic to a target
group named LoadBalancerTargetGroup.
main.yml
LoadBalancerListener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
DefaultActions:
- Type: forward
TargetGroupArn: !Ref LoadBalancerTargetGroup
LoadBalancerArn: !Ref LoadBalancer
Port: 80
Protocol: HTTP
main.yml
LoadBalancerTargetGroup:
Type: AWS::ElasticLoadBalancingV2::TargetGroup
Properties:
TargetType: instance
Port: 8080
Protocol: HTTP
VpcId: !Ref VPC
HealthCheckEnabled: true
HealthCheckProtocol: HTTP
Targets:
- Id: !Ref Instance
- Id: !Ref Instance2
Tags:
- Key: Name
Value: !Ref AWS::StackName
specified by the subnet.
main.yml
Instance:
Type: AWS::EC2::Instance
CreationPolicy:
ResourceSignal:
Timeout: PT5M
Count: 1
Properties:
SubnetId: !Ref SubnetAZ1 ❶
LaunchTemplate:
LaunchTemplateId: !Ref InstanceLaunchTemplate
Version: !GetAtt InstanceLaunchTemplate.LatestVersionNumber
Tags:
- Key: Name
Value: !Ref AWS::StackName
Instance2:
Type: AWS::EC2::Instance
CreationPolicy:
ResourceSignal:
Timeout: PT5M
Count: 1
Properties:
SubnetId: !Ref SubnetAZ2 ❶
LaunchTemplate:
LaunchTemplateId: !Ref InstanceLaunchTemplate
Version: !GetAtt InstanceLaunchTemplate.LatestVersionNumber
Tags:
- Key: Name
Value: !Ref AWS::StackName
main.yml
SecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
VpcId: !Ref VPC ❶
GroupDescription:
!Sub 'Internal Security group for ${AWS::StackName}'
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: 8080
ToPort: 8080
CidrIp: 0.0.0.0/0
- IpProtocol: tcp ❷
FromPort: 80
ToPort: 80
CidrIp: 0.0.0.0/0
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: 0.0.0.0/0
Tags:
- Key: Name
Value: !Ref AWS::StackName
main.yml
LBEndpoint:
Description: The DNS name for the LB
Value: !Sub "http://${LoadBalancer.DNSName}:80"
Export:
Name: LBEndpoint
deploy-infra.sh
# If the deploy succeeded, show the DNS name of the created instance
if [ $? -eq 0 ]; then
aws cloudformation list-exports \
--profile awsbootstrap \
--query "Exports[?ends_with(Name,'LBEndpoint')].Value" ❶
fi
❶ Using ends_with here is not necessary right now, but will be useful
when we get to the Production section.
terminal
$ ./deploy-infra.sh
reach our application through the load balancer endpoint.
If we try to make several requests to this endpoint, we
should be able to see the load balancer in action, where
we get a different message depending on which of our
two instances responded to the request.
terminal
terminal
Scaling
Objective
Replace explicit EC2 instances with Auto Scaling.
Steps
1. Add an Auto Scaling Group.
2. Remove Instance and Instance2.
The new ASG will also:
Multi-phase deployments
In a production system, we have to assume that users are
continually sending requests. As such, when we make
infrastructure or software changes, it is important to do so
in a way that causes no disruption. We must consider the
effect of every change and stage our changes so that users
do not experience any loss of service. We also need to be
able to roll back a change if we discover that it's not doing
quite what we wanted, again without affecting our users.
dependencies, and not all resources are strictly
CloudFormation resources.
main.yml
ScalingGroup:
Type: AWS::AutoScaling::AutoScalingGroup
UpdatePolicy: ❶
AutoScalingRollingUpdate:
MinInstancesInService: "1"
MaxBatchSize: "1"
PauseTime: "PT15M"
WaitOnResourceSignals: "true" ❶
SuspendProcesses:
- HealthCheck
- ReplaceUnhealthy
- AZRebalance
- AlarmNotification
- ScheduledActions
Properties:
AutoScalingGroupName: !Sub 'ASG_${AWS::StackName}'
AvailabilityZones:
- !Select [ 0, !GetAZs '' ]
- !Select [ 1, !GetAZs '' ]
MinSize: 2 ❷
MaxSize: 6
HealthCheckGracePeriod: 0
HealthCheckType: ELB ❸
LaunchTemplate: ❹
LaunchTemplateId: !Ref InstanceLaunchTemplate
Version: !GetAtt InstanceLaunchTemplate.LatestVersionNumber
TargetGroupARNs:
- !Ref LoadBalancerTargetGroup ❺
MetricsCollection:
-
Granularity: "1Minute"
Metrics:
- "GroupMaxSize"
- "GroupInServiceInstances"
VPCZoneIdentifier: ❻
- !Ref SubnetAZ1
- !Ref SubnetAZ2
Tags:
- Key: Name
Value: !Ref AWS::StackName
PropagateAtLaunch: "true" ❼
❶ The WaitOnResourceSignals setting works for ASGs in the same way that
the CreationPolicy worked on individual instances. Our launch script
will get the ASG’s logical ID when querying its tag, and will pass that to
the cfn-signal command, which in turn will signal to the ASG that
the instance has launched successfully.
❷ We have two availability zones, so we’ll set the minimum number of
hosts to two to get one instance in each.
❸ Our ASG will use our load balancer’s health check to assess the health
of its instances.
❹ All instances that the ASG launches will be created as per our launch
template.
❺ The ASG will add all launched instances to the load balancer’s target
group.
❻ The VPC and subnets into which the ASG will launch the instances.
❼ Specifying PropagateAtLaunch ensures that this tag will be copied to
all instances that are launched as part of this ASG.
main.yml
StagingDeploymentGroup:
Type: AWS::CodeDeploy::DeploymentGroup
Properties:
DeploymentGroupName: staging
AutoScalingGroups:
- !Ref ScalingGroup
ApplicationName: !Ref DeploymentApplication
DeploymentConfigName: CodeDeployDefault.AllAtOnce
ServiceRoleArn: !GetAtt DeploymentRole.Arn
Ec2TagFilters: ❶
- Key: aws:cloudformation:stack-name
Type: KEY_AND_VALUE
Value: !Ref AWS::StackName
terminal
$ ./deploy-infra.sh
Then, if we hit the load balancer endpoint, we should see
our requests spread across our two explicit instances, as
well as across two new instances that our ASG has spun
up.
terminal
terminal
LoadBalancerTargetGroup resource.
• The Ec2TagFilters property from the
StagingDeploymentGroup resource.
• The InstanceEndpoint and InstanceEndpoint2
outputs.
terminal
$ ./deploy-infra.sh
terminal
terminal
Figure 11. Auto Scaling Details
Production
Objective
Create separate environments for staging and
production.
Steps
1. Extract common resources out of main.yml.
2. Create separate stacks for staging and production.
Adding the stack name to Hello World
Let’s start by making a small change to the start-service.sh
script so that our application will know what environment
it is running in.
start-service.sh
#!/bin/bash -xe
source /home/ec2-user/.bash_profile
cd /home/ec2-user/app/release
package.json
server.js
terminal
terminal
$ cp main.yml stage.yml
• DeploymentRole
• BuildProject
• DeploymentApplication
• StagingDeploymentGroup
• Pipeline
• PipelineWebhook
• CodePipelineBucket
• GitHubOwner
• GitHubRepo
• GitHubBranch
• GitHubPersonalAccessToken
main.yml
Staging:
Type: AWS::CloudFormation::Stack
Properties:
TemplateURL: stage.yml
TimeoutInMinutes: 30
Parameters:
EC2InstanceType: !Ref EC2InstanceType
EC2AMI: !Ref EC2AMI
stage.yml
Outputs:
LBEndpoint:
Description: The DNS name for the LB
Value: !Sub "http://${LoadBalancer.DNSName}:80"
ScalingGroup:
Description: The ScalingGroup for this stage
Value: !Ref ScalingGroup
We don’t need Export properties in stage.yml. This is
because stage.yml will be referenced only by the main.yml
stack, and parent stacks can access the output variables of
nested stacks directly.
main.yml
StagingDeploymentGroup:
Type: AWS::CodeDeploy::DeploymentGroup
Properties:
DeploymentGroupName: staging
AutoScalingGroups:
- !GetAtt Staging.Outputs.ScalingGroup ❶
ApplicationName: !Ref DeploymentApplication
DeploymentConfigName: CodeDeployDefault.AllAtOnce
ServiceRoleArn: !GetAtt DeploymentRole.Arn
main.yml
StagingLBEndpoint:
Description: The DNS name for the staging LB
Value: !GetAtt Staging.Outputs.LBEndpoint
Export:
Name: StagingLBEndpoint
CloudFormation packaging to help us upload and
transform our templates.
setup.yml
CloudFormationBucket:
Type: String
Description: 'The S3 bucket for CloudFormation templates.'
setup.yml
CloudFormationS3Bucket:
Type: AWS::S3::Bucket
DeletionPolicy: Retain
Properties:
BucketName: !Ref CloudFormationBucket
PublicAccessBlockConfiguration:
BlockPublicAcls: true
BlockPublicPolicy: true
IgnorePublicAcls: true
RestrictPublicBuckets: true
BucketEncryption:
ServerSideEncryptionConfiguration:
- ServerSideEncryptionByDefault:
SSEAlgorithm: AES256
deploy-infra.sh
CFN_BUCKET="$STACK_NAME-cfn-$AWS_ACCOUNT_ID"
deploy-infra.sh
deploy-infra.sh
deploy-infra.sh
Finally, we need to change the section of deploy-infra.sh
that prints the endpoint URLs so that it catches both our
staging endpoint and the forthcoming prod endpoint.
deploy-infra.sh
terminal
$ ./deploy-infra.sh
awsbootstrap-setup
A root stack containing our S3 buckets for CodePipeline
and CloudFormation.
awsbootstrap
A root stack for our application containing our
deployment resources and our staging nested stack.
awsbootstrap-Staging-XYZ
Our new nested staging stack containing all the
application resources.
terminal
terminal
main.yml
ProdDeploymentGroup:
Type: AWS::CodeDeploy::DeploymentGroup
Properties:
DeploymentGroupName: prod
AutoScalingGroups:
- !GetAtt Prod.Outputs.ScalingGroup
ApplicationName: !Ref DeploymentApplication
DeploymentConfigName: CodeDeployDefault.OneAtATime ❶
ServiceRoleArn: !GetAtt DeploymentRole.Arn
main.yml
- Name: Prod
Actions:
- Name: Prod
InputArtifacts:
- Name: Build
ActionTypeId:
Category: Deploy
Owner: AWS
Version: 1
Provider: CodeDeploy
Configuration:
ApplicationName: !Ref DeploymentApplication
DeploymentGroupName: !Ref ProdDeploymentGroup
RunOrder: 1
main.yml
Prod:
Type: AWS::CloudFormation::Stack
DependsOn: Staging ❶
Properties:
TemplateURL: stage.yml
TimeoutInMinutes: 30
Parameters:
EC2InstanceType: !Ref EC2InstanceType
EC2AMI: !Ref EC2AMI
❶ Updates to the prod stack will not be enacted until the staging stack
successfully applies stack updates.
main.yml
ProdLBEndpoint:
Description: The DNS name for the prod LB
Value: !GetAtt Prod.Outputs.LBEndpoint
Export:
Name: ProdLBEndpoint
terminal
$ ./deploy-infra.sh
terminal
terminal
Custom Domains
Objective
Access our application from a custom domain.
Steps
1. Register a domain with Route 53.
2. Create a DNS hosted zone for our domain.
3. Map our domain to the load balancers.
Registering a domain
Registering a domain is done infrequently and requires
some human intervention. Therefore, we will do this
manually through the Route 53 console. After we’ve
chosen our domain name, Route 53 will check if it is
available, and if it is, we can proceed with the registration.
Figure 15. Hosted Zones
stage.yml
Domain:
Type: String
SubDomain:
Type: String
stage.yml
DNS:
Type: AWS::Route53::RecordSet
Properties:
HostedZoneName: !Sub '${Domain}.'
Name: !Sub '${SubDomain}.${Domain}.'
Type: A
AliasTarget:
HostedZoneId: !GetAtt LoadBalancer.CanonicalHostedZoneID
DNSName: !GetAtt LoadBalancer.DNSName
Next, let’s change the stage output to return the URL with
our custom domain rather than the load balancer’s default
endpoint.
stage.yml
LBEndpoint:
Description: The DNS name for the stage
Value: !Sub "http://${DNS}"
main.yml
Domain:
Type: String
main.yml
Staging:
Type: AWS::CloudFormation::Stack
Properties:
TemplateURL: stage.yml
TimeoutInMinutes: 30
Parameters:
EC2InstanceType: !Ref EC2InstanceType
EC2AMI: !Ref EC2AMI
Domain: !Ref Domain ❶
SubDomain: staging ❷
Prod:
Type: AWS::CloudFormation::Stack
Properties:
TemplateURL: stage.yml
TimeoutInMinutes: 30
Parameters:
EC2InstanceType: !Ref EC2InstanceType
EC2AMI: !Ref EC2AMI
Domain: !Ref Domain ❶
SubDomain: prod ❷
deploy-infra.sh
DOMAIN=the-good-parts.com ❶
deploy-infra.sh
terminal
$ ./deploy-infra.sh
And now we have a much more human-friendly endpoint
for our two stages. We should also be able to see the A
records in our Route 53 hosted zone.
terminal
terminal
If the curl commands work, but your browser times out
trying to connect, it may be trying to upgrade to HTTPS in
order to provide better security. You can try from another
browser or wait until we enable HTTPS in the next section.
terminal
HTTPS
Objective
Migrate our endpoint from HTTP to HTTPS.
Steps
1. Manually create a TLS certificate.
2. Add an HTTPS endpoint.
3. Make the application speak HTTPS.
4. Remove the HTTP endpoint.
Creating the certificate
Requesting a certificate is an infrequent operation that
requires human intervention for validation (or more
automation than makes sense for a process that happens
only once). Therefore, we’re going to create our
certificate manually. To start, let’s visit the AWS
Certificate Manager (ACM) console and hit Request a
certificate. Then, let’s select the public certificate option.
Figure 19. Add Domain Names
Figure 20. Select Validation Method
Figure 21. Create CNAME Records
Figure 22. Validated Certificate
You can also inspect the CNAME record that was added
to your hosted zone in Route 53.
Adding the HTTPS endpoint
We will now update our deploy-infra.sh script to
retrieve the certificate ARN. This should go at the top of
the script, and depends on the DOMAIN environment
variable.
deploy-infra.sh
DOMAIN=the-good-parts.com
CERT=`aws acm list-certificates --region $REGION --profile awsbootstrap --output text \
--query "CertificateSummaryList[?DomainName=='$DOMAIN'].CertificateArn | [0]"` ❶
deploy-infra.sh
We also have to add this as a parameter in the main.yml
template.
main.yml
Certificate:
Type: String
Description: 'An existing ACM certificate ARN for your domain'
main.yml
Staging:
Type: AWS::CloudFormation::Stack
Properties:
TemplateURL: stage.yml
TimeoutInMinutes: 30
Parameters:
EC2InstanceType: !Ref EC2InstanceType
EC2AMI: !Ref EC2AMI
Domain: !Ref Domain
SubDomain: staging
Certificate: !Ref Certificate ❶
Prod:
Type: AWS::CloudFormation::Stack
Properties:
TemplateURL: stage.yml
TimeoutInMinutes: 30
Parameters:
EC2InstanceType: !Ref EC2InstanceType
EC2AMI: !Ref EC2AMI
Domain: !Ref Domain
SubDomain: prod
Certificate: !Ref Certificate ❶
to receive the certificate ARN from main.yml.
stage.yml
Certificate:
Type: String
Description: 'An existing ACM certificate ARN for subdomain.domain'
stage.yml
SecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
VpcId: !Ref VPC
GroupDescription:
!Sub 'Internal Security group for ${AWS::StackName}'
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: 8080
ToPort: 8080
CidrIp: 0.0.0.0/0
- IpProtocol: tcp ❶
FromPort: 8443
ToPort: 8443
CidrIp: 0.0.0.0/0
- IpProtocol: tcp
FromPort: 80
ToPort: 80
CidrIp: 0.0.0.0/0
- IpProtocol: tcp ❶
FromPort: 443
ToPort: 443
CidrIp: 0.0.0.0/0
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: 0.0.0.0/0
Tags:
- Key: Name
Value: !Ref AWS::StackName
At this point, we need to modify the UserData section of
our EC2 launch template to make the instance generate a
self-signed certificate automatically when it starts up.
This certificate will be used for traffic between the load
balancer and the instance.
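The exact UserData commands aren't shown here, but as a rough sketch (file paths and the subject CN are our assumptions, not the book's), generating such a certificate with openssl looks like this:

```shell
# Hypothetical sketch: generate a self-signed certificate of the kind the
# instance could use for load-balancer-to-instance HTTPS traffic.
# Paths and the subject CN are assumptions, not taken from the book.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout /tmp/demo-key.pem \
  -out /tmp/demo-cert.pem \
  -subj "/CN=localhost"
# The ALB doesn't validate the certificate presented by its targets, so a
# self-signed certificate is sufficient for the back-end connection.
```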
stage.yml
# Dot source the files to ensure that variables are available within the current shell
. /home/ec2-user/.nvm/nvm.sh
. /home/ec2-user/.bashrc
stage.yml
HTTPSLoadBalancerTargetGroup:
Type: AWS::ElasticLoadBalancingV2::TargetGroup
Properties:
TargetType: instance
Port: 8443 ❶
Protocol: HTTPS
VpcId: !Ref VPC
HealthCheckEnabled: true
HealthCheckProtocol: HTTPS ❷
Tags:
- Key: Name
Value: !Ref AWS::StackName
❶ 8443 is the non-privileged port that our application will use to serve
HTTPS requests.
❷ The health check will also be made on the HTTPS port.
stage.yml
HTTPSLoadBalancerListener:
Type: AWS::ElasticLoadBalancingV2::Listener
Properties:
DefaultActions:
- Type: forward
TargetGroupArn: !Ref HTTPSLoadBalancerTargetGroup
LoadBalancerArn: !Ref LoadBalancer
Certificates:
- CertificateArn: !Ref Certificate ❶
Port: 443 ❷
Protocol: HTTPS
balancer’s HTTPS target.
stage.yml
TargetGroupARNs:
- !Ref LoadBalancerTargetGroup
- !Ref HTTPSLoadBalancerTargetGroup ❶
stage.yml
HTTPSEndpoint:
Description: The DNS name for the stage
Value: !Sub "https://${DNS}"
Finally, we’ll add two new outputs from main.yml for the
new HTTPS endpoints.
main.yml
Outputs:
StagingLBEndpoint:
Description: The DNS name for the staging LB
Value: !GetAtt Staging.Outputs.LBEndpoint
Export:
Name: StagingLBEndpoint
StagingHTTPSLBEndpoint: ❶
Description: The DNS name for the staging HTTPS LB
Value: !GetAtt Staging.Outputs.HTTPSEndpoint
Export:
Name: StagingHTTPSLBEndpoint
ProdLBEndpoint:
Description: The DNS name for the prod LB
Value: !GetAtt Prod.Outputs.LBEndpoint
Export:
Name: ProdLBEndpoint
ProdHTTPSLBEndpoint: ❶
Description: The DNS name for the prod HTTPS LB
Value: !GetAtt Prod.Outputs.HTTPSEndpoint
Export:
Name: ProdHTTPSLBEndpoint
terminal
$ ./deploy-infra.sh
terminal
$ curl https://prod.the-good-parts.com
<html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
</body>
</html>
If you were to look for the new HTTPS target group in the
AWS console, you should see no healthy hosts in the
Monitoring tab. You can also see that the EC2 instances
are being continuously created and destroyed.
terminal
server.js
migrated.
terminal
terminal
terminal
terminal
terminal
• from Resources
◦ from SecurityGroup
▪ from SecurityGroupIngress
▪ the section for traffic on port 8080
▪ the section for traffic on port 80
◦ from ScalingGroup
▪ the TargetGroupARNs entry for
LoadBalancerTargetGroup
◦ the entire LoadBalancerListener resource
◦ the entire LoadBalancerTargetGroup resource
• from Outputs
◦ the LBEndpoint output
• from Outputs
◦ the StagingLBEndpoint output
◦ the ProdLBEndpoint output
terminal
terminal
terminal
terminal
$ curl -v http://prod.the-good-parts.com
* Rebuilt URL to: http://prod.the-good-parts.com/
* Trying 35.153.128.232...
* TCP_NODELAY set
* Connection failed
* connect to 35.153.128.232 port 80 failed: Operation timed out
* Trying 54.147.46.5...
* TCP_NODELAY set
* Connection failed
* connect to 54.147.46.5 port 80 failed: Operation timed out
* Failed to connect to prod.the-good-parts.com port 80: Operation timed out
* Closing connection 0
curl: (7) Failed to connect to prod.the-good-parts.com port 80: Operation
timed out
terminal
server.js
terminal
Network Security
Objective
Make our instances inaccessible from the internet.
Steps
1. Add private subnets with a NAT gateway.
2. Switch our ASGs to use the private subnets.
3. Only allow the HTTPS port in the public subnets.
Once we make our instances use a NAT gateway to connect
to the internet, an additional data transfer charge (currently
$0.045/GB in us-east-1) will apply to all traffic that transits
the gateway. This includes traffic to other AWS services.
To avoid the extra charge, most AWS services can be
configured to expose an endpoint that doesn’t pass through
the internet. This can be done via Gateway VPC Endpoints or
Interface VPC Endpoints (AWS PrivateLink).
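For example, a Gateway VPC Endpoint for S3 could be added to stage.yml so that S3 traffic from the private subnets bypasses the NAT gateway (and its per-GB charge). This is only a sketch, not part of our template: the S3GatewayEndpoint resource name is made up here, and the route table names assume the private route tables we define in this section.

```yaml
# Sketch: a Gateway VPC Endpoint for S3. With this in place, S3 traffic
# from the private subnets is routed through the endpoint instead of the
# NAT gateway, so the NAT data transfer charge doesn't apply to it.
S3GatewayEndpoint:                 # hypothetical resource name
  Type: AWS::EC2::VPCEndpoint
  Properties:
    VpcId: !Ref VPC
    ServiceName: !Sub 'com.amazonaws.${AWS::Region}.s3'
    VpcEndpointType: Gateway
    RouteTableIds:
      - !Ref PrivateSubnetRouteTableAZ1
      - !Ref PrivateSubnetRouteTableAZ2
```

Interface endpoints for other services follow the same pattern, but they are billed per hour and per GB, so it's worth checking whether the endpoint costs less than the NAT traffic it replaces.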
stage.yml
InstanceRole:
Type: "AWS::IAM::Role"
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
Effect: Allow
Principal:
Service:
- "ec2.amazonaws.com"
Action: sts:AssumeRole
ManagedPolicyArns:
- arn:aws:iam::aws:policy/CloudWatchFullAccess
- arn:aws:iam::aws:policy/service-role/AmazonEC2RoleforAWSCodeDeploy
- arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore ❶
- arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy ❶
Policies:
- PolicyName: ec2DescribeTags
PolicyDocument:
Version: 2012-10-17
Statement:
- Effect: Allow
Action: 'ec2:DescribeTags'
Resource: '*'
Tags:
- Key: Name
Value: !Ref AWS::StackName
terminal
$ ./deploy-infra.sh
...
We can install the SSM plugin for the AWS CLI while the
CloudFormation changes are deploying.
terminal
$ curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/mac/sessionmanager-bundle.zip" -o "sessionmanager-bundle.zip"
...
$ unzip sessionmanager-bundle.zip
...
$ sudo ./sessionmanager-bundle/install -i /usr/local/sessionmanagerplugin -b /usr/local/bin/session-manager-plugin
...
Figure 24. SSM Connection
stage.yml
PrivateSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
VpcId: !Ref VPC
GroupDescription:
!Sub 'Internal Security group for ${AWS::StackName}'
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: 8443
ToPort: 8443
CidrIp: 0.0.0.0/0
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: 0.0.0.0/0
Tags:
- Key: Name
Value: !Ref AWS::StackName
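Because these instances sit in private subnets, the 0.0.0.0/0 source on these rules is already unreachable from the internet. If you want defense in depth, the ingress could additionally be scoped to the VPC's own address range. The following is a variant sketch, assuming the VPC uses 10.0.0.0/16 (the private subnet CIDRs in this section fall inside that range):

```yaml
# Variant sketch: accept traffic only from in-VPC sources, such as the
# load balancer. 10.0.0.0/16 is the assumed VPC CIDR.
SecurityGroupIngress:
  - IpProtocol: tcp
    FromPort: 8443
    ToPort: 8443
    CidrIp: 10.0.0.0/16
  - IpProtocol: tcp
    FromPort: 22
    ToPort: 22
    CidrIp: 10.0.0.0/16
```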
stage.yml
SecurityGroupIds:
- !GetAtt PrivateSecurityGroup.GroupId
stage.yml
PrivateSubnetAZ1:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref VPC
AvailabilityZone: !Select [ 0, !GetAZs '' ]
CidrBlock: 10.0.128.0/18
MapPublicIpOnLaunch: false
Tags:
- Key: Name
Value: !Ref AWS::StackName
- Key: AZ
Value: !Select [ 0, !GetAZs '' ]
PrivateSubnetAZ2:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref VPC
AvailabilityZone: !Select [ 1, !GetAZs '' ]
CidrBlock: 10.0.192.0/18
MapPublicIpOnLaunch: false
Tags:
- Key: Name
Value: !Ref AWS::StackName
- Key: AZ
Value: !Select [ 1, !GetAZs '' ]
stage.yml
EIPAZ1:
Type: AWS::EC2::EIP
DependsOn: InternetGatewayAttachment
Properties:
Domain: vpc
EIPAZ2:
Type: AWS::EC2::EIP
DependsOn: InternetGatewayAttachment
Properties:
Domain: vpc
stage.yml
NATGatewayAZ1:
Type: AWS::EC2::NatGateway
Properties:
AllocationId: !GetAtt EIPAZ1.AllocationId
SubnetId: !Ref SubnetAZ1
Tags:
- Key: Name
Value: !Ref AWS::StackName
- Key: AZ
Value: !Select [ 0, !GetAZs '' ]
NATGatewayAZ2:
Type: AWS::EC2::NatGateway
Properties:
AllocationId: !GetAtt EIPAZ2.AllocationId
SubnetId: !Ref SubnetAZ2
Tags:
- Key: Name
Value: !Ref AWS::StackName
- Key: AZ
Value: !Select [ 1, !GetAZs '' ]
stage.yml
PrivateSubnetRouteTableAZ1:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref VPC
Tags:
- Key: Name
Value: !Ref AWS::StackName
- Key: AZ
Value: !Select [ 0, !GetAZs '' ]
PrivateSubnetRouteTableAZ2:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref VPC
Tags:
- Key: Name
Value: !Ref AWS::StackName
- Key: AZ
Value: !Select [ 1, !GetAZs '' ]
PrivateRouteAZ1:
Type: AWS::EC2::Route
Properties:
RouteTableId: !Ref PrivateSubnetRouteTableAZ1
DestinationCidrBlock: 0.0.0.0/0
NatGatewayId: !Ref NATGatewayAZ1
PrivateRouteAZ2:
Type: AWS::EC2::Route
Properties:
RouteTableId: !Ref PrivateSubnetRouteTableAZ2
DestinationCidrBlock: 0.0.0.0/0
NatGatewayId: !Ref NATGatewayAZ2
PrivateSubnetRouteTableAssociationAZ1:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
RouteTableId: !Ref PrivateSubnetRouteTableAZ1
SubnetId: !Ref PrivateSubnetAZ1
PrivateSubnetRouteTableAssociationAZ2:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
RouteTableId: !Ref PrivateSubnetRouteTableAZ2
SubnetId: !Ref PrivateSubnetAZ2
Switching our ASG to use private subnets
Finally, we have to switch the ASG to launch new
instances in the private subnets rather than the public
ones. The existing instances in the public subnets won't be
terminated until the new ones in the private subnets are launched.
stage.yml
VPCZoneIdentifier:
- !Ref PrivateSubnetAZ1
- !Ref PrivateSubnetAZ2
terminal
$ ./deploy-infra.sh
stage.yml
SecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
VpcId: !Ref VPC
GroupDescription:
!Sub 'Security group for ${AWS::StackName}'
SecurityGroupIngress: ❶
- IpProtocol: tcp
FromPort: 443
ToPort: 443
CidrIp: 0.0.0.0/0
Tags:
- Key: Name
Value: !Ref AWS::StackName
terminal
$ ./deploy-infra.sh
Our instances are now isolated from the internet, and the
only way to reach them is through the load balancer.