Unit 4: Docker Containers
Patil Pratishthan’s
D. Y. Patil Institute of Master of Computer Applications and Management
(Approved by AICTE, New Delhi & Affiliated to Savitribai Phule Pune University)
Dr. D. Y. Patil Educational Complex, Sector 29, Pradhikaran, Akurdi, Pune – 411 044
Tel No: (020)27640998, Website: www.dypimca.ac.in, E-mail : [email protected]
----------------------------------------------------------------------------------------------------------------------
MONOGRAPH
4.1. Introduction
• What is Docker?
• Docker is an open platform for developing, shipping, and running applications.
• Docker enables you to separate your applications from your infrastructure so you can deliver
software quickly.
• With Docker’s methodologies for shipping, testing, and deploying code quickly, you can
significantly reduce the delay between writing code and running it in production.
• Docker is a containerization platform that packages your application and all its dependencies
together in the form of Containers to ensure that your application works seamlessly in any
environment.
Container
• Docker provides the ability to package and run an application in a loosely isolated environment called a
container.
• The isolation and security allow you to run many containers simultaneously on a given host.
• Docker provides tooling and a platform to manage the lifecycle of your containers:
• Develop your application and its supporting components using containers.
• The container becomes the unit for distributing and testing your application.
• When you’re ready, deploy your application into your production environment, as a container
or an orchestrated service. This works the same whether your production environment is a local
data center, a cloud provider, or a hybrid of the two.
• Use case of Docker
• Fast, consistent delivery of your applications
• Docker streamlines the development lifecycle by allowing developers to work in standardized
environments using local containers which provide your applications and services.
• Containers are great for continuous integration and continuous delivery (CI/CD) workflows.
Jelastic
o Jelastic is a multi-cloud platform that can host multiple tools/frameworks/applications
such as Docker, Kubernetes, Java, Ruby, Python, JavaScript, Go, etc. It combines the
Platform as a Service (PaaS) and Container as a Service (CaaS) models.
o Multi-cloud availability is the most important feature of the Jelastic platform.
Kamatera
o Kamatera lets you create servers quickly and deploy your cloud infrastructure. It
offers unlimited scale-up and scale-out along with a simple management console and an API.
o In addition to Docker hosting, you can add load balancers, private networks,
and firewalls, and run any operating system edition of Linux and Windows.
A2 Hosting
o It offers blazing-fast SwiftServer platforms to host Docker, giving the best performance
possible.
o It gives you complete access to the environment; you get root access, so you can even
edit server files according to your needs. You can even change the operating system and
start/stop/reboot the system.
StackPath
o StackPath is known for its CDN and cloud-based security platform.
o Edge computing provides distributed computing; it brings computation and storage
closer to the user’s location, which eventually saves the bandwidth and improves the
response time. StackPath platform supports the Open Container Initiative (OCI)
images.
Google Cloud Run
o Google Cloud Platform (GCP) is one of the most popular cloud service providers
which has been growing across several geographies at a fast pace. Kubernetes, a
popular container orchestration tool, was originally developed by Google, so
obviously, Docker hosting on GCP is very much possible and suitable.
o In GCP, Cloud Run is a serverless managed compute platform where you can host
and run Docker containers. It is built on top of the Knative project, which makes the
workload easily portable across different platforms.
Amazon Elastic Container Service (Amazon ECS)
o Amazon Elastic Container Service (Amazon ECS) is a highly scalable container
service with Docker support. It is used to containerize your applications on AWS.
It provides Windows compatibility and supports the management of Windows
containers.
o It uses the AWS Fargate service to deploy and manage docker containers. AWS
Fargate takes care of server provisioning, cluster management, and orchestration;
you don’t have to worry about these; you just need to focus on resource
management.
• Docker vs. Virtualization
Virtualization
• Virtualization is the technique of running a guest operating system on top of a host
operating system.
• The advantages of Virtual Machines or Virtualization are:
• Multiple operating systems can run on the same machine
• Maintenance and recovery are easy in case of failure conditions
• Total cost of ownership is lower due to the reduced need for infrastructure
Disadvantages of Virtualization:
• Running multiple Virtual Machines leads to unstable performance
• Hypervisors are not as efficient as the host operating system
• Boot up process is long and takes time
Containerization
• Containerization packages an application together with its dependencies so that it runs in an
isolated user space (a container) on the host operating system’s kernel.
• Because containers share the host kernel instead of booting a full guest OS, they start faster
and use fewer resources than virtual machines.
4.2. Architecture
• Docker Architecture
• Docker uses a client-server architecture.
• The Docker client talks to the Docker daemon, which does the heavy lifting of building, running,
and distributing your Docker containers.
• The Docker client and daemon can run on the same system, or you can connect a Docker client
to a remote Docker daemon.
• The Docker client and daemon communicate using a REST API, over UNIX sockets or a network
interface.
• Another Docker client is Docker Compose, which lets you work with applications consisting of a
set of containers.
• Understanding the Docker components
• Docker Container
• Docker Containers are the ready applications created from Docker Images.
• They are running instances of the images, and they hold the entire package needed to run the
application.
• Docker Engine
• Docker Engine is simply the application that is installed on the host machine.
• It works like a client-server application which uses:
• A server which is a type of long-running program called a daemon process
• A command line interface (CLI) client
• A REST API is used for communication between the CLI client and the Docker daemon (a minimal sketch of this communication is shown below)
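As a minimal sketch of this client-daemon communication (assuming Docker is installed and the daemon is listening on its default UNIX socket), the same version information can be requested through the CLI client or directly through the REST API:

# The CLI client reports both the client and the server (daemon) versions
docker version

# Query the daemon's REST API directly over the UNIX socket (may require sudo)
curl --unix-socket /var/run/docker.sock http://localhost/version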
4.3. Installation
Installing Docker on Linux
Step 1) To install Docker, we need to use the Docker team’s DEB packages.
$ sudo apt-get install \
apt-transport-https \
ca-certificates curl \
software-properties-common
*Note: the “\” sign is only a line-continuation marker; if you want, you can write the whole
command on a single line without it.
Step 2) Add the official Docker GPG key with the fingerprint.
Use the below Docker command to enter the GPG key
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Step 3) Next, Add the Docker APT repository.
Use the below Docker command to add the repository
$ sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
You may be prompted to confirm that you wish to add the repository and have the GPG key
automatically added to your host.
The lsb_release command should populate the Ubuntu distribution version of your host.
Step 4) After adding the GPG key,
Update APT sources using the below Docker command
$ sudo apt-get update
We can now install the Docker package itself.
Step 5) Once the APT sources are updated,
Start installing the Docker packages on Ubuntu using the below Docker command
$ sudo apt-get install docker-ce
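Once the installation finishes, a quick sanity check (assuming the daemon has started) is to run Docker’s hello-world test image:

$ sudo docker run hello-world

If everything is working, this prints a short “Hello from Docker!” message.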
• Docker commands
1. docker --version
This command is used to get the currently installed version of docker
2. docker pull
Usage: docker pull <image name>
This command is used to pull images from the Docker repository (hub.docker.com)
3. docker run
Usage: docker run -it -d <image name>
This command is used to create a container from an image
4. docker ps
This command is used to list the running containers
5. docker ps -a
This command is used to show all the running and exited containers
6. docker exec
Usage: docker exec -it <container id> bash
This command is used to access the running container
7. docker stop
Usage: docker stop <container id>
This command stops a running container
8. docker kill
Usage: docker kill <container id>
This command kills the container by stopping its execution immediately. The difference between
‘docker kill’ and ‘docker stop’ is that ‘docker stop’ gives the container time to shut down
gracefully; when stopping a container is taking too long, one can opt to kill it instead
9. docker commit
Usage: docker commit <container id> <username/imagename>
This command creates a new image of an edited container on the local system
10. docker login
This command is used to login to the docker hub repository
11. docker push
Usage: docker push <username/image name>
This command is used to push an image to the docker hub repository
12. docker images
This command lists all the locally stored docker images
13. docker rm
Usage: docker rm <container id>
This command is used to delete a stopped container
14. docker rmi
Usage: docker rmi <image-id>
This command is used to delete an image from local storage
15. docker build
Usage: docker build <path to docker file>
This command is used to build an image from a specified docker file
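As a minimal worked example tying these commands together (the nginx image and the container name web are illustrative choices), a typical container lifecycle looks like this:

# Pull the official nginx image from Docker Hub
docker pull nginx

# Create and start a container from the image in detached mode
docker run -it -d --name web nginx

# List the running containers
docker ps

# Open a shell inside the running container
docker exec -it web bash

# Stop and remove the container, then remove the image
docker stop web
docker rm web
docker rmi nginx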
• Provisioning
Docker-based provisioning involves creating a Microgateway Docker file from an
existing installation, building the image, and running it multiple times in a container
environment.
• Microgateway Docker image
o For the Docker-based provisioning the Microgateway CLI provides
the createDockerFile command. The command creates a Docker file that can be
consumed by docker build for creating a Docker image. The Microgateway Docker
image contains an unzipped Microgateway package.
o The command takes the command line options detailed in the Command Line
Reference.
o You can run the Docker image to spawn a Docker container from the created docker
image.
o The Docker images resulting from Docker files created using
the createDockerFile command feature the following:
• Docker logging
o Microgateway Docker containers log to stdout and stderr. The Microgateway logs can
be fetched with the docker logs command.
• Docker health check
o Microgateway Docker containers perform health checks.
HEALTHCHECK CMD ${MICROGW_DIR}/microgateway.sh status 2>&1 | grep 'Server active'
o The status command checks the Microgateway availability. If the status command
confirms an active Microgateway the container is considered healthy.
• Graceful shutdown
o When the docker stop command is used on a Microgateway container it performs a
graceful shutdown.
• Entrypoint support
o Microgateway Dockerfile exposes an ENTRYPOINT. The options provided to the
createDockerFile command are supplied to the ENTRYPOINT through a CMD
specification.
• JRE support
o The createDockerFile command adds a Microgateway JRE to the Docker file so
that the Microgateway Docker image can be self-contained. If the custom base image
provides a JRE, the createDockerFile command supports the jre=none option to reuse
the existing JRE and not copy the Microgateway JRE.
o Microgateway provides a musl libc compatible JRE to support Alpine Docker-based
images. The Microgateway installation provides the musl libc compatible JRE in
the microgateway-jre-linux-musl folder. You have to specify the jre=linux-musl option
in the createDockerFile command to copy the musl libc compatible JRE. If there is no
base image specified the musl libc compatible JRE is copied. The available JRE options
are linux, linux-musl, and none. The default value for the jre option depends on the
docker_from value:
• If there is no docker_from value specified, then the JRE used is linux-musl as the default base
image is Alpine.
• If you specify a docker_from value, then the JRE used is linux.
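Once createDockerFile has generated the Docker file, the remaining steps are ordinary Docker commands; a minimal sketch, assuming the generated Docker file is in the current directory and using microgateway:1.0 as an illustrative image tag:

# Build the Microgateway Docker image from the generated Docker file
docker build -t microgateway:1.0 .

# Spawn a container from the image and fetch its logs from stdout/stderr
docker run -d --name microgateway microgateway:1.0
docker logs microgateway

# Stop the container; the Microgateway performs a graceful shutdown
docker stop microgateway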
Dockerfile:
• A Dockerfile is a text document which contains all the commands that a user can call on
the command line to assemble an image.
• Docker can build images automatically by reading the instructions from a Dockerfile.
• You can use docker build to create an automated build that executes several command-line
instructions in succession.
Docker Image:
• Docker Image can be compared to a template which is used to create Docker Containers.
• Read-only templates are the building blocks of a Container. You can use docker run to run
the image and create a container.
• Docker Images are stored in the Docker Registry. It can be either a user’s local repository
or a public repository like Docker Hub, which allows multiple users to collaborate in
building an application.
Docker Container:
• A Docker Container is a running instance of a Docker Image; it holds the entire package needed
to run the application.
• Containers are the ready applications created from Docker Images, which is the ultimate utility of Docker.
Docker Compose:
Docker Compose is a tool that helps us efficiently handle multiple containers at once. It is
used to manage several containers at the same time for the same application.
It is a YAML file which contains details about the services, networks, and volumes for setting
up the application. So, you can use Docker Compose to create separate containers, host
them and get them to communicate with each other.
Each container will expose a port for communicating with other containers.
Compose Installation:
Install the Compose plugin: If you have Docker Desktop installed, then you already have the
Compose plugin installed.
Create a docker-compose.yaml file that defines the services (containers) that make up your
application so that they can be run together in an isolated environment. In this compose file, we
define all the configurations needed to build and run the services as docker containers. There
are several steps to follow to use docker-compose.
1. Split your app into services
The first thing to do is to think about how you’re going to divide the components of your
application into different services (containers).
In a simple client-server web application, there could be three main layers (frontend, backend,
and the database), so we can split the app in that way. Likewise, you will have to identify the
services of your application.
4. Configure networking
Docker containers communicate with each other through the internal network that is created by
compose (e.g., service_name:port). If you want to connect from your host machine, you will have to
expose the service to a host port.
5. Set up volumes
In most cases, we would not want our database contents to be lost each time the database service is
brought down. A simple way to persist our DB data is to mount a volume.
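Putting these steps together, here is a minimal sketch of a docker-compose.yaml for the client-server example above; the service names, images, ports, and volume name are illustrative assumptions:

services:
  frontend:
    image: my-frontend:latest        # illustrative image name
    ports:
      - "3000:3000"                  # expose the frontend to a host port
  backend:
    image: my-backend:latest         # illustrative image name
    depends_on:
      - db
  db:
    image: postgres:15
    volumes:
      - db_data:/var/lib/postgresql/data   # persist DB contents across restarts
volumes:
  db_data:

The services reach each other over the network created by Compose using their service names (for example, the backend can reach the database at db:5432), and docker compose up starts all of them together.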
Docker Swarm
It is a technique to create and maintain a cluster of Docker Engines. The Docker engines can be
hosted on different nodes, and these nodes, which are in remote locations, form a Cluster
when connected in Swarm mode.
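As a minimal sketch (the node IP address and the nginx service below are illustrative), a swarm is created on a manager node and the other nodes join it:

# On the manager node: initialize swarm mode
docker swarm init --advertise-addr 192.168.1.10

# On each worker node: join using the token printed by the init command
docker swarm join --token <worker-token> 192.168.1.10:2377

# Back on the manager: list the nodes and run a replicated service across the cluster
docker node ls
docker service create --name web --replicas 3 -p 80:80 nginx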
4.5. Custom images
Follow the below steps to create a Dockerfile, Image & Container.
Step 1: First you have to install Docker (see the installation steps in section 4.3 above).
Step 2: Once installation is complete use the below command to check the version.
docker -v
Step 3: Now create a folder in which you can create a DockerFile and change the current
working directory to that folder.
mkdir images
cd images
Step 4.1: Now create a Dockerfile by using an editor. In this case, I have used the nano
editor.
nano Dockerfile
Step 4.2: After you open a Dockerfile, you have to write it as follows (a sample Dockerfile is shown after this list).
FROM: Specifies the image that has to be downloaded
MAINTAINER: Metadata of the owner who owns the image
RUN: Specifies the commands to be executed
ENTRYPOINT: Specifies the command which will be executed first
EXPOSE: Specifies the port on which the container is exposed
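For example, a minimal sample Dockerfile using these instructions (the base image, packages, and port are illustrative choices):

# Base image to download
FROM ubuntu:22.04
# Metadata of the image owner (MAINTAINER is deprecated in favour of LABEL but still works)
MAINTAINER example@example.com
# Commands executed while building the image
RUN apt-get update && apt-get install -y nginx
# Command executed first when the container starts
ENTRYPOINT ["nginx", "-g", "daemon off;"]
# Port on which the container is exposed
EXPOSE 80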
Step 4.3: Once you are done with that, just save the file.
Step 5: Build the Dockerfile using the below command, running it from the folder that contains
the Dockerfile; the -t flag names the resulting image.
docker build -t image_name .
Step 6: Once the above command has been executed the respective docker image will be
created. To check whether Docker Image is created or not, use the following command.
docker images
Step 7: Now to create a container based on this image, you have to run the following
command:
docker run -it -p host_port:container_port -d image_id
Where -it makes the container interactive, -p maps a host port to the container’s port, and -d runs
the container in detached mode (in the background).
Step 8: Now you can check the created container by using the following command:
docker ps
Creating Repository
Step 1: Create Your Docker ID
To share images on Docker Hub, you need a Docker ID. It provides you access to Docker
Hub repositories and allows you to search for images from the Docker community and
verified publishers.
Step 2: Create Your First Repository
To generate a repository:
Please Sign in to Docker Hub.
Click on Create a Repository option on the Docker Hub welcome page.
Name it <your-username>/my-testprivate-repo.
Set the visibility as Private
Click on Create option.
Step 3: Download and Install Docker Desktop
To build and push a container image to Docker Hub:
Download and install Docker Desktop. If you want to do it on Linux, install Docker
Engine instead.
Then sign in to the Docker Desktop application with the Docker ID created in
Step 1.
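As a minimal sketch of pushing a local image to the new repository (my-image is a placeholder for an image already present locally, and <your-username> matches the repository name chosen in Step 2):

# Log in with your Docker ID
docker login

# Tag the local image with the repository name created in Step 2
docker tag my-image <your-username>/my-testprivate-repo:latest

# Push the tagged image to Docker Hub
docker push <your-username>/my-testprivate-repo:latest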
A developer writes code that stipulates the application requirements or dependencies in an
easy-to-write Dockerfile, and this Dockerfile produces Docker Images. The dependencies required
for a particular application are present in this image.
Docker Containers are runtime instances of Docker Images. These images are uploaded onto
Docker Hub (a registry for Docker Images), which contains public/private repositories.
From public repositories, you can pull your image as well and you can upload your own images
onto the Docker Hub.
From Docker Hub, various teams such as Quality Assurance or Production teams will pull that
image and prepare their own containers.
These individual containers communicate with each other through a network to perform the
required actions, and this is nothing but Docker Networking.
Container Network Model (CNM)
• Container Network Model (CNM) standardizes the steps required to provide networking for
containers using multiple network drivers. CNM requires a distributed key-value store, such as
Consul, to store the network configuration.
• CNM is mainly built on 5 objects:
• Network Controller,
• Driver
• Network
• Endpoint
• Sandbox
• CNM Objects
• Network Controller: Provides the entry-point into Libnetwork that exposes simple APIs for
Docker Engine to allocate and manage networks. Since Libnetwork supports multiple inbuilt
and remote drivers, Network Controller enables users to attach a particular driver to a given
network.
• Driver: Owns the network and is responsible for managing the network by having multiple
drivers participating to satisfy various use-cases and deployment scenarios.
• Network: Provides connectivity between a group of endpoints that belong to the same
network and isolate from the rest. So, whenever a network is created or updated, the
corresponding Driver will be notified of the event.
• Endpoint: Provides the connectivity for services exposed by a container in a network with
other services provided by other containers in the network. An endpoint represents a service
and not necessarily a particular container; an Endpoint also has a global scope within a
cluster.
• Sandbox: Created when users request to create an endpoint on a network. A Sandbox can
have multiple endpoints attached to different networks, representing a container’s network
configuration such as IP address, MAC address, routes, and DNS.
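These objects are created under the hood when you work with user-defined networks; a minimal sketch (the network and container names are illustrative):

# Create a user-defined bridge network (owned and managed by the bridge driver)
docker network create --driver bridge my_bridge

# Run a container attached to the network; an endpoint and a sandbox are created for it
docker run -d --name web --network my_bridge nginx

# Inspect the network to see its endpoints and the container's IP configuration
docker network inspect my_bridge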
• Accessing containers
Obtain the container ID by running the following command: docker ps
Access the Docker container by running the following command: docker exec -it
<container_id> /bin/bash
• Linking containers
Container Linking allows multiple containers to link with each other. It is a better
option than exposing ports.
Step 1 − Download the Jenkins image, if it is not already present, using the
docker pull command.
Step 2 − Once the image is available, run the container, but this time, you can specify a
name for the container by using the --name option. This will be our source
container.
Step 3 − Next, it is time to launch the destination container, but this time, we will link it
with our source container. For our destination container, we will use the standard
Ubuntu image.
When you do a docker ps, you will see both the containers running.
Step 4 − Now, attach to the receiving container.
Then run the env command. You will notice new variables for linking with the
source container.
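A minimal sketch of these steps using the legacy --link flag (the container names are illustrative, and the Jenkins image tag may differ):

# Steps 1-2: pull the Jenkins image and start the source container with a name
docker pull jenkins/jenkins
docker run -d --name source-jenkins jenkins/jenkins

# Step 3: launch the destination Ubuntu container linked to the source container
docker run -it --name destination --link source-jenkins:jenkins ubuntu /bin/bash

# Step 4 (inside the destination container): env shows variables describing the link
env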
• Exposing container ports
In Docker, the containers themselves can have applications running on ports.
When you run a container, if you want to access the application in the container via
a port number, you need to map the port number of the container to the port number
of the Docker host.
To understand which ports are exposed by the container, you should use the
docker inspect command to inspect the image.
docker inspect Container/Image
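For example, a minimal sketch of mapping a container port to a host port (the nginx image and the port numbers are illustrative):

# Map host port 8080 to container port 80 when starting the container
docker run -d --name web -p 8080:80 nginx

# Show the port mappings of the running container
docker port web

# The application is now reachable on the Docker host at port 8080
curl http://localhost:8080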
• Container Routing
Container routing determines how to transport containers from their origins to their
destinations in a liner shipping network.