Amazon ECS Lab1
https://medium.com/boltops/gentle-introduction-to-how-aws-ecs-works-with-example-tutorial-cea3d27ce63d
I remember when I was first introduced to all the terms, I quickly
got confused. AWS provides nice detailed diagrams to help explain the
terms. Here is a simplified diagram to help visualize and explain
them.
ECS Terms
In this diagram you can see that there are 4 running Tasks or Docker
containers. They are part of an ECS Service. The Service and Tasks
span 2 Container Instances. The Container Instances are part of a
logical group called an ECS Cluster.
Tutorial Example
In this tutorial example I will create a small Sinatra web service that
prints the meaning of life: 42.
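The app's source isn't shown in this excerpt, but based on the "ruby hi.rb" command and the "42" response seen later, a minimal sketch of what hi.rb might look like (the exact contents are an assumption):

```shell
# Hypothetical hi.rb for the Sinatra app; contents are an assumption,
# reconstructed from the "ruby hi.rb" command and the "42" response later on.
cat > hi.rb <<'EOF'
require 'sinatra'
set :bind, '0.0.0.0'   # listen on all interfaces so the container port mapping works
get '/' do
  '42'
end
EOF
```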
The ECS First Run Wizard provided in the Getting Started with
Amazon ECS documentation performs steps similar to the ones below with a
CloudFormation template and ECS API calls. I'm doing it step by
step because I believe it better helped me understand the ECS
components.
Now create an ECS Cluster called my-cluster and the EC2 instance that
belongs to the ECS Cluster. Use the my-ecs-sg security group that was
created. You can get the id of the security group from the EC2
Console / Network & Security / Security Groups. It is important to
select a Key pair so you can ssh into the instance later to verify things
are working.
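If you prefer the CLI over the console, the cluster itself can be created with a single call (a sketch; launching the instance and attaching the my-ecs-sg group still follow the console steps described here):

```shell
# Sketch: create the ECS cluster from the CLI, equivalent to the console step.
aws ecs create-cluster --cluster-name my-cluster

# Confirm it exists and is ACTIVE.
aws ecs describe-clusters --clusters my-cluster
```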
For the Networking VPC settings, I used the default VPC and all the
Subnets associated with the account to keep this tutorial simple. For
the IAM Role use ecsInstanceRole. If ecsInstanceRole does not yet
exist, create it per AWS docs. All my settings are provided in the
screenshot. You will need to change the settings according to your own
account and default VPC and Subnets.
Wait a few minutes and then confirm that the Container Instance has
successfully registered to the my-cluster ECS cluster. You can confirm
this by clicking on the ECS Instances tab under Clusters / my-cluster.
Before creating the task definition, find a Sinatra Docker image to use
and test that it's working. I'm using the tongueroo/sinatra image.
$ docker run -d -p 4567:4567 --name hi tongueroo/sinatra
6df556e1df02e93b05aa46425fc539121f5e50afee630e1cd918b337c3b6c202
$ docker ps
CONTAINER ID   IMAGE              COMMAND        CREATED         STATUS        PORTS                    NAMES
6df556e1df02   tongueroo/sinatra  "ruby hi.rb"   2 seconds ago   Up 1 second   0.0.0.0:4567->4567/tcp   hi
$ curl localhost:4567 ; echo
42
$ docker stop hi ; docker rm hi
$
Above, I've started a container with the sinatra image and curled
localhost:4567. Port 4567 is the default port that Sinatra listens on and
it is exposed in the Dockerfile. It returns "42" as expected. Now that
I've tested the sinatra image and verified that it works, let's create the
task definition. Create a task-definition.json and add:
{
  "family": "sinatra-hi",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "tongueroo/sinatra:latest",
      "cpu": 128,
      "memoryReservation": 128,
      "portMappings": [
        {
          "containerPort": 4567,
          "protocol": "tcp"
        }
      ],
      "command": [
        "ruby", "hi.rb"
      ],
      "essential": true
    }
  ]
}
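With the file in place, the definition can be registered from the CLI (a sketch):

```shell
# Sketch: register the task definition from the JSON file above.
aws ecs register-task-definition --cli-input-json file://task-definition.json

# Verify it registered; this should list sinatra-hi with a revision number.
aws ecs list-task-definitions --family-prefix sinatra-hi
```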
Confirm that the task definition successfully registered with the ECS
Console:
3. Create an ELB and Target Group to later associate with the
ECS Service
Now let’s create an ELB and a target group with it. We are creating an
ELB because we eventually want to load balance requests across
multiple containers and also want to expose the sinatra app to the
internet for testing. The easiest way to create an ELB is with the EC2
Console.
Use the default Listener with an HTTP protocol and Port 80.
Under Availability Zone, choose a VPC and the subnets you
would like. I chose all 4 subnets in the default VPC, just like step 1.
It is very important to choose the same subnets that were chosen
when you created the cluster in step 1. If the subnets are not the
same, the ELB health check can fail, and the containers will keep
getting destroyed and recreated in an infinite loop if the instance is
launched in an AZ that the ELB is not configured to see.
There will be a warning about using a secure listener, but for the
purpose of this exercise we can skip using SSL.
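The same ELB and target group can also be created from the CLI (a sketch; the subnet, security group, and VPC IDs are placeholders you would substitute from your own account):

```shell
# Sketch: create the load balancer, target group, and HTTP listener via the CLI.
# All <...> values are placeholders from your own account.
aws elbv2 create-load-balancer \
  --name my-elb \
  --subnets <subnet-id-1> <subnet-id-2> \
  --security-groups <my-elb-sg-id>

aws elbv2 create-target-group \
  --name my-target-group \
  --protocol HTTP --port 80 \
  --vpc-id <vpc-id>

aws elbv2 create-listener \
  --load-balancer-arn <elb-arn> \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>
```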
Confirm the rules were added to the security groups via the EC2
Console:
With these security group rules, only port 80 on the ELB is exposed to
the outside world, and any traffic from the ELB going to a container
instance in the my-ecs-sg group is allowed. This is a nice, simple
setup.
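The rules described above can also be added from the CLI (a sketch; the group IDs are placeholders for the my-elb-sg and my-ecs-sg groups from the earlier steps):

```shell
# Sketch: open port 80 on the ELB's security group to the outside world...
aws ec2 authorize-security-group-ingress \
  --group-id <my-elb-sg-id> \
  --protocol tcp --port 80 --cidr 0.0.0.0/0

# ...and allow all traffic from the ELB's group into the instance group.
aws ec2 authorize-security-group-ingress \
  --group-id <my-ecs-sg-id> \
  --protocol -1 --source-group <my-elb-sg-id>
```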
You can confirm that the container is running on the ECS Console. Go
to Clusters / my-cluster / my-service and view the Tasks tab.
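The service-creation step itself isn't shown in this excerpt; a sketch of what it might look like with the CLI (the target-group ARN is a placeholder, and the container name and port come from the task definition above):

```shell
# Sketch: create the ECS service that runs the sinatra-hi task behind the ELB.
aws ecs create-service \
  --cluster my-cluster \
  --service-name my-service \
  --task-definition sinatra-hi \
  --desired-count 1 \
  --role ecsServiceRole \
  --load-balancers targetGroupArn=<target-group-arn>,containerName=web,containerPort=4567
```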
5. Confirm Everything is Working
Check that the my-ecs-sg security group is allowing all traffic from
the my-elb-sg security group. This was done in Step 4 with
the authorize-security-group-ingress command after you created
the ELB.
Check that the subnets for the ELB, in step 3, are the
same subnets that you used when you created the ECS
Cluster and Container Instance in step 1. Remember, the ELB can
only detect healthy instances in AZs that it is configured to use.
Let's also ssh into the instance and check that the running docker process
is returning a good response. Under Clusters / ECS Instances, click on
the Container Instance and grab the public DNS record so you can ssh
into the instance.
$ ssh ec2-user@<instance-public-dns>
$ docker ps
CONTAINER ID   IMAGE                            COMMAND        CREATED          STATUS          PORTS                               NAMES
9e9a55399589   tongueroo/sinatra:latest         "ruby hi.rb"   16 minutes ago   Up 16 minutes   8080/tcp, 0.0.0.0:32773->4567/tcp   ecs-sinatra-hi-1-web-d8efaad38dd7c3c63a00
4fea55231363   amazon/amazon-ecs-agent:latest   "/agent"       41 minutes ago   Up 41 minutes                                       ecs-agent
$ curl 0.0.0.0:32773 ; echo
42
$
Above, I've verified that the docker container is running on the instance
by curling the app and seeing a successful response with the "42" text.
Lastly, let’s also verify by hitting the external DNS address of the ELB.
You can find the DNS address in the EC2 Console under Load
Balancing / Load Balancers and clicking on my-elb.
Verify the ELB publicly available dns endpoint with curl:
$ curl my-elb-1693572386.us-east-1.elb.amazonaws.com ; echo
42
$
6. Scale Up
This is the easiest part. To scale up and add more containers, simply go
to Clusters / my-cluster / my-service and click on "Update Service".
You can change "Number of tasks" from 1 to 4 there. After only a few
moments you should see 4 running tasks. That's it!
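The same scale-up can be done from the CLI (a sketch):

```shell
# Sketch: scale the service from 1 to 4 running tasks.
aws ecs update-service \
  --cluster my-cluster \
  --service my-service \
  --desired-count 4
```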
7. Clean It All Up
ELB: my-elb
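The full cleanup list is truncated in this excerpt; a sketch of tearing down the resources created in this tutorial via the CLI (the ELB ARN and task-definition revision are placeholders):

```shell
# Sketch: tear down the tutorial resources. <...> values are placeholders.
aws ecs update-service --cluster my-cluster --service my-service --desired-count 0
aws ecs delete-service --cluster my-cluster --service my-service
aws elbv2 delete-load-balancer --load-balancer-arn <elb-arn>
aws ecs deregister-task-definition --task-definition sinatra-hi:<revision>
# Terminate the container instance from the EC2 Console, then:
aws ecs delete-cluster --cluster my-cluster
```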
Summary
In this post I covered the ECS terminology and went through a simple
example to create a sinatra app behind an ELB.
Overall, I think ECS is a pretty amazing service: it takes away
the hassle of managing docker orchestration and provisioning
yourself.