Docker Certified Associate


MUMSHAD MANNAMBETH
YOGESH RAHEJA
Objectives

• Installation and Configuration
• Image Management
• Storage and Volumes
• Networking
• Security
• Orchestration


Exam Details
Duration: 90 minutes
MCQ vs DOMC

Q. What is the default network driver used when a container is provisioned?

o overlay
o bridge
o None
o host
Submit
MCQ vs DOMC

Q. What is the default network driver used when a container is provisioned?

o overlay Yes No
o bridge Yes No
o None Yes No
o host Yes No
Frequently Asked Questions
Q. Can we take the exam from home or at a testing center?
A. Home (proctored)

Q. What is the exam fee?
A. $195

Q. What is the passing score?
A. N/A

Q. When will I get the results?
A. Immediately

Register
https://training.mirantis.com/dca-certification-exam/
Curriculum

• Installation and Configuration
• Image Management
• Storage and Volumes
• Networking
• Security
• Orchestration
Curriculum – Installation and Configuration
• Sizing Requirements
• Docker Engine Installation
• Swarm Installation
• Docker Enterprise – UCP, DTR
• Manage Users & Teams
• Daemon Configuration
• Certificate-based auth
• Namespaces & Cgroups
• Troubleshoot issues
• Configure Backups
Curriculum – Image Management
• Dockerfile
• Dockerfile Instructions
• Create efficient images
• Docker Image CLI
• Push, Pull, Delete images
• Inspect Images
• Tag Images
• Display Layers
• Registry Functions
• Deploy & Search in Registry
Curriculum – Storage and Volumes
• Drivers for various OS
• Compare Object vs Block storage
• Image layers and filesystem
• Volumes
• Cleanup unused images
• PV, PVCs on Kubernetes
• Storage Classes
Curriculum – Networking
• Container Network Model
• Built-in Network Drivers
• Traffic flow between Docker Engine, Registry & UCP
• Docker Bridge Network
• Publish Ports
• External DNS
• Deploy a service on a docker overlay network
• Troubleshoot container and engine logs
• Kubernetes traffic using ClusterIP and NodePort Services
• Kubernetes Network Policies
Curriculum – Security
• Image signing
• Docker Engine Security
• Docker Swarm Security
• Identity Roles
• UCP Workers vs Managers
• Security scan in images
• Docker Content Trust
• RBAC with UCP
• UCP with LDAP/AD
• UCP Client Bundles
Curriculum – Orchestration
• Docker Swarm:
  • Setup Swarm Cluster
  • Quorum in a Swarm Cluster
  • Stack in swarm
  • Scale up and down replicas
  • Networks, Publish Ports
  • Replicated vs Global Services
  • Placements
  • Healthchecks
• Kubernetes:
  • Pods, Deployments
  • Services
  • ConfigMaps, Secrets
  • Liveness and Readiness Probes
Curriculum
Tools: Docker Engine, Docker Swarm, Kubernetes, Docker Enterprise
Domains: Installation and Configuration, Image Management, Storage and Volumes, Networking, Security, Orchestration
Pre-Requisites
Course & Exam Tips
Learning Format

Research Questions
• Open Book
• Refer to Lectures and Documentation
• Research
• Get familiar with the MCQ format

Notes
• Note the most difficult/confusing concepts for you
• Don't write large notes

Revision, Revision, Revision
Learning Schedule
Section                    Learning Time (Hours)   Days (2 hrs/day)   Days (4 hrs/day)   Days (6 hrs/day)
Docker Architecture        20                      10                 5                  3
Images                     20                      10                 5                  3
Security                   8                       4                  2                  1
Networking                 14                      7                  3.5                2
Storage                    8                       4                  2                  1
Compose                    12                      6                  3                  2
Docker Swarm               26                      13                 6.5                4
Kubernetes                 32                      16                 8                  5
Docker Engine Enterprise   12                      6                  3                  2
Docker Trusted Registry    6                       3                  1.5                1
Disaster Recovery          8                       4                  2                  1
Mock Exams                 28                      14                 7                  5
Total Duration             194 Hours               97 Days            48.5 Days          30 Days
Question Types
Architecture
Docker Engine
Docker Engine Architecture
• 2013: Docker CLI → REST API → Docker Daemon → LXC → namespaces & cgroups
• 2014 (v0.9): libcontainer replaces LXC as the layer that drives namespaces & cgroups
• OCI: the Open Container Initiative defines the runtime-spec and the image-spec
  (https://github.com/opencontainers/runtime-spec/blob/master/runtime.md)
• 2016 (v1.11): the engine is split up – the Docker Daemon manages images, volumes and networks; containerd manages containers; containerd-shim sits between containerd and runC; runC (built on libcontainer) runs containers on top of namespaces & cgroups
Docker Objects
• Images
• Containers
• Networks
• Volumes

Registry – stores images (e.g. httpd)
docker container run -it ubuntu
Docker CLI → REST API → Docker Daemon (images, volumes, networks; pulls images such as httpd from the Registry) → containerd (manage containers) → containerd-shim → runC (run containers, via libcontainer) → namespaces & cgroups
Docker Engine Installation
docker --version
Docker version 19.03.5, build 633a0ea

docker version
Client: Docker Engine - Community
 Version:           19.03.5
 API version:       1.40
 Go version:        go1.12.12
 Git commit:        633a0ea
 Built:             Wed Nov 13 07:25:41 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.5
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.12
  Git commit:       633a0ea
  Built:            Wed Nov 13 07:24:18 2019
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.10
  GitCommit:        b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version:          1.0.0-rc8+dev
  GitCommit:        3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

docker system info
Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 19.03.5
 Storage Driver: overlay2
  Backing Filesystem: xfs
 .
 .
 .
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
Docker Service Configuration
Check Service Status

systemctl start docker

systemctl status docker


● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2020-10-21 04:21:01 UTC; 3 days ago
Docs: https://docs.docker.com
Main PID: 4197 (dockerd)
Tasks: 13
Memory: 129.7M
CPU: 9min 6.980s
CGroup: /system.slice/docker.service
└─4197 /usr/bin/dockerd -H fd:// -H tcp://0.0.0.0 --containerd=/run/containerd/containerd.sock

systemctl stop docker


Start Manually
dockerd
INFO[2020-10-24T08:20:40.372653463Z] Starting up
INFO[2020-10-24T08:20:40.375298351Z] parsed scheme: "unix" module=grpc
INFO[2020-10-24T08:20:40.375510773Z] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2020-10-24T08:20:40.375657667Z] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock
0 <nil>}] <nil>} module=grpc
INFO[2020-10-24T08:20:40.375973480Z] ClientConn switching balancer to "pick_first" module=grpc
INFO[2020-10-24T08:20:40.377210185Z] parsed scheme: "unix" module=grpc
INFO[2020-10-24T08:20:40.377304998Z] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2020-10-24T08:20:40.377491827Z] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock
0 <nil>}] <nil>} module=grpc
INFO[2020-10-24T08:20:40.377762558Z] ClientConn switching balancer to "pick_first" module=grpc
INFO[2020-10-24T08:20:40.381198263Z] [graphdriver] using prior storage driver: overlay2
WARN[2020-10-24T08:20:40.572888603Z] Your kernel does not support swap memory limit
WARN[2020-10-24T08:20:40.573014192Z] Your kernel does not support cgroup rt period
WARN[2020-10-24T08:20:40.573404879Z] Your kernel does not support cgroup rt runtime
Start Manually With Debug
dockerd --debug
INFO[2020-10-24T08:29:00.331925176Z] Starting up
DEBU[2020-10-24T08:29:00.332463203Z] Listener created for HTTP on unix (/var/run/docker.sock)
DEBU[2020-10-24T08:29:00.333316936Z] Golang's threads limit set to 6930
INFO[2020-10-24T08:29:00.333659056Z] parsed scheme: "unix" module=grpc
INFO[2020-10-24T08:29:00.333685921Z] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2020-10-24T08:29:00.333705237Z] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock
0 <nil>}] <nil>} module=grpc
INFO[2020-10-24T08:29:00.333715024Z] ClientConn switching balancer to "pick_first" module=grpc
INFO[2020-10-24T08:29:00.334889983Z] parsed scheme: "unix" module=grpc
INFO[2020-10-24T08:29:00.334914951Z] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2020-10-24T08:29:00.334931237Z] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock
0 <nil>}] <nil>} module=grpc
INFO[2020-10-24T08:29:00.334940958Z] ClientConn switching balancer to "pick_first" module=grpc
DEBU[2020-10-24T08:29:00.335626982Z] Using default logging driver json-file
DEBU[2020-10-24T08:29:00.335808043Z] [graphdriver] priority list: [btrfs zfs overlay2 aufs overlay devicemapper vfs]
DEBU[2020-10-24T08:29:00.335969923Z] processing event stream module=libcontainerd
namespace=plugins.moby
DEBU[2020-10-24T08:29:00.337633503Z] backingFs=extfs, projectQuotaSupported=false, indexOff="" storage-driver=overlay2
INFO[2020-10-24T08:29:00.337658643Z] [graphdriver] using prior storage driver: overlay2
DEBU[2020-10-24T08:29:00.337674607Z] Initialized graph driver overlay2
WARN[2020-10-24T08:29:00.364649284Z] Your kernel does not support swap memory limit
WARN[2020-10-24T08:29:00.364679148Z] Your kernel does not support cgroup rt period
WARN[2020-10-24T08:29:00.364687757Z] Your kernel does not support cgroup rt runtime
Unix Socket
By default the Docker CLI on the host talks to the daemon over the local Unix socket /var/run/docker.sock; a Docker CLI on another machine cannot reach the daemon (192.168.1.10) until a TCP socket is opened:

dockerd --debug \
--host=tcp://192.168.1.10:2375
INFO[2020-10-24T08:29:00.331925176Z] Starting up
DEBU[2020-10-24T08:29:00.332463203Z] Listener created for HTTP on unix (/var/run/docker.sock)
DEBU[2020-10-24T08:29:00.333316936Z] Golang's threads limit set to 6930
INFO[2020-10-24T08:29:00.333659056Z] parsed scheme: "unix" module=grpc
INFO[2020-10-24T08:29:00.333685921Z] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2020-10-24T08:29:00.333705237Z] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock
0 <nil>}] <nil>} module=grpc
TCP Socket
With the daemon listening on tcp://192.168.1.10:2375 (in addition to /var/run/docker.sock), a remote Docker CLI can be pointed at it:

export DOCKER_HOST="tcp://192.168.1.10:2375"
docker ps

dockerd --debug \
--host=tcp://192.168.1.10:2375
INFO[2020-10-24T08:29:00.331925176Z] Starting up
DEBU[2020-10-24T08:29:00.332463203Z] Listener created for HTTP on unix (/var/run/docker.sock)
DEBU[2020-10-24T08:29:00.333316936Z] Golang's threads limit set to 6930
INFO[2020-10-24T08:29:00.333659056Z] parsed scheme: "unix" module=grpc
INFO[2020-10-24T08:29:00.333685921Z] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2020-10-24T08:29:00.333705237Z] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock
0 <nil>}] <nil>} module=grpc
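Since the daemon exposes a REST API on these sockets, it can also be queried directly. A minimal sketch using curl against the endpoints shown above:

curl --unix-socket /var/run/docker.sock http://localhost/version   # over the default Unix socket
curl http://192.168.1.10:2375/version                              # over the un-encrypted TCP socket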
TLS Encryption
2375 – un-encrypted
2376 – encrypted (TLS)

export DOCKER_HOST="tcp://192.168.1.10:2376"
docker ps

dockerd --debug \
  --host=tcp://192.168.1.10:2376 \
  --tls=true \
  --tlscert=/var/docker/server.pem \
  --tlskey=/var/docker/serverkey.pem
Daemon Configuration File
The same options can instead be placed in /etc/docker/daemon.json:

{
  "debug": true,
  "hosts": ["tcp://192.168.1.10:2376"],
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem"
}

An option set both as a flag and in the file causes the daemon to fail on startup:

dockerd --debug=false
unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as
a flag and in the configuration file: debug: (from flag: false, from file: true)

systemctl start docker
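Once the daemon is up again, a client can be pointed at the TLS endpoint. A minimal sketch, assuming the client certificates live under /var/docker/client-certs (that directory name is an assumption for illustration):

export DOCKER_HOST="tcp://192.168.1.10:2376"
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=/var/docker/client-certs   # assumed path; must contain ca.pem, cert.pem and key.pem
docker ps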


References
• https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file
• https://docs.docker.com/config/daemon/
• https://docs.docker.com/engine/reference/commandline/dockerd/
• https://docs.docker.com/engine/security/https/
Basic Container Operations
Docker Objects
docker <docker-object> <sub-command> [options] <Arguments/Commands>

Images:     docker image ls
Networks:   docker network ls
Containers: docker container ls
Volumes:    docker volume ls

Docker Engine Command
New syntax: docker <docker-object> <sub-command> [options] <Arguments/Commands>
Old syntax: docker <sub-command> [options] <Arguments/Commands>

docker container run -it ubuntu docker run -it ubuntu

docker image build . docker build .

docker container attach ubuntu docker attach ubuntu

docker container kill ubuntu docker kill ubuntu


Container Create - Create a new container
docker container create httpd
Unable to find image 'httpd:latest' locally
latest: Pulling from library/httpd
8ec398bc0356: Pull complete
354e6904d655: Pull complete
36412f6b2f6e: Pull complete
Digest:
sha256:769018135ba22d3a7a2b91cb89b8de711562cdf51ad6621b2b9b13e95f3798de
Status: Downloaded newer image for httpd:latest
36a391532e10d45f772f2c9430c2cc38dad4b441aa7a1c44d459f6fa3d78c6b6

ls /var/lib/docker/
builder containers network plugins swarm trust
buildkit image overlay2 runtimes tmp volumes

ls -lrt /var/lib/docker/containers/
36a391532e10d45f772f2c9430c2cc38dad4b441aa7a1c44d459f6fa3d78c6b6

ls -lrt /var/lib/docker/containers/36a391532e10*
Checkpoint hostconfig.json config.v2.json
Container ls - List containers
docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
36a391532e10 httpd "httpd-foreground" 2 minutes ago Created charming_wiles

docker container ls -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
36a391532e10 httpd "httpd-foreground" 2 minutes ago Created charming_wiles

docker container ls -q

docker container ls -aq


36a391532e10
Container Start - Start a container
docker container start 36a391532e10
36a391532e10

docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
36a391532e10 httpd "httpd-foreground" 6 minutes ago Up 1 minutes 80/tcp charming_wiles
Container Run – Create and Start a container
docker container create httpd docker container start 36a391532e10

docker container run ubuntu


Unable to find image 'httpd:latest' locally
latest: Pulling from library/httpd
8ec398bc0356: Pull complete
354e6904d655: Pull complete
36412f6b2f6e: Pull complete
Digest: sha256:769018135ba22d3a7a2b91cb89b8de711562cdf51ad6621b2b9b13e95f3798de
Status: Downloaded newer image for httpd:latest
36a391532e10d45f772f2c9430c2cc38dad4b441aa7a1c44d459f6fa3d78c6b6
Container Run – Create and Start a container
docker container run ubuntu
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
2746a4a261c9: Pull complete
4c1d20cdee96: Pull complete
0d3160e1d0de: Pull complete
c8e37668deea: Pull complete
Digest: sha256:250cc6f3f3ffc5cdaa9d8f4946ac79821aafb4d3afc93928f0de9336eba21aa4
Status: Downloaded newer image for ubuntu:latest

docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d969ecdb44ea ubuntu "/bin/bash" 2 minutes ago Exited (0) 2 minutes ago intelligent_almeida
Container Run – Create and Start a container

docker container run ubuntu

docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d969ecdb44ea ubuntu "/bin/bash" 2 minutes ago Exited (0) 2 minutes ago intelligent_almeida
Container Run – With Options
docker container run -it ubuntu
root@6caba272c8f5:/#
root@6caba272c8f5:/# hostname
6caba272c8f5
root@6caba272c8f5:/#

docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6caba272c8f5 ubuntu "/bin/bash" About a minute ago Up About a minute quizzical_austin

docker container run -it ubuntu        (correct: options come before the image)
docker container run ubuntu -it        (wrong: "-it" is treated as the command passed to the container)
Container Run – exiting a running process
docker container run -it ubuntu
root@6caba272c8f5:/#
root@6caba272c8f5:/# hostname
6caba272c8f5
root@6caba272c8f5:/# exit
exit

docker container ls -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6caba272c8f5 ubuntu "/bin/bash" 8 minutes ago Exited (0) 37 seconds ago quizzical_austin
Container Run – Container Name
docker container ls -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6caba272c8f5 ubuntu "/bin/bash" 8 minutes ago Exited (0) 37 seconds ago quizzical_austin

docker container run -itd --name=webapp ubuntu


59aa5eacd88c42970754cd6005ce315944a2efcd32288df998b29267ae54c152

docker container ls -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
59aa5eacd88c ubuntu "/bin/bash" 20 seconds ago Up 19 seconds webapp

docker container rename webapp custom-webapp

docker container ls -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
59aa5eacd88c ubuntu "/bin/bash" About a minute ago Up About a minute custom-webapp
Container Run – Detached Mode
docker container run httpd
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.3. Set the 'ServerName'
directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.3. Set the 'ServerName'
directive globally to suppress this message
[Thu Sep 17 15:39:31.138134 2020] [mpm_event:notice] [pid 1:tid 139893041316992] AH00489: Apache/2.4.46 (Unix) configured --
resuming normal operations
[Thu Sep 17 15:39:31.138584 2020] [core:notice] [pid 1:tid 139893041316992] AH00094: Command line: 'httpd -D FOREGROUND'

docker container run -d httpd


11cbd7fe7e65a9da453e159ed0fe163592dccc8a7845abc91b8305c78f50ac70

docker container attach 11cb


Interacting with a Container
Container Run – Escape Sequence
docker container run -it ubuntu
root@6caba272c8f5:/# exit
exit

docker container ls -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6caba272c8f5 ubuntu "/bin/bash" 8 minutes ago Exited (0) 37 seconds ago quizzical_austin

docker container run -it ubuntu


root@b71f15d33b60:/# [PRESS CTRL+p+q]

docker container ls -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b71f15d33b60 ubuntu "/bin/bash" 3 minutes ago Up 3 minutes magical_babbage
Container Exec – Executing Commands
docker container exec b71f15d33b60 hostname
b71f15d33b60

docker container exec -it b71f15d33b60 /bin/bash


root@b71f15d33b60:/#
root@b71f15d33b60:/# ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 12:53 pts/0 00:00:00 /bin/bash
root 86 1 0 13:10 pts/0 00:00:00 ps -ef
root@b71f15d33b60:/# tty
/dev/pts/0
root@b71f15d33b60:/# exit
exit

docker container attach b71f15d33b60


root@b71f15d33b60:/#
Inspecting a Container
Container Inspect
docker container inspect webapp
[
{
"Id": "59aa5eacd88c42970754cd6005ce315944a2efcd32288df998b29267ae54c152",
"Created": "2020-01-14T13:23:01.225868339Z",
"Path": "/bin/bash",
"Args": [],
"State": {
"Status": "running",
"Running": true,
"Paused": false,
"Restarting": false,
.
.
"IPAddress": "172.17.0.5",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:05",
"DriverOpts": null
}
}
}
}
]
Container Stats
docker container stats
CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
59aa5eacd88c webapp 50.00% 400KiB / 989.4MiB 0.04% 656B / 0B 0B / 0
a00b5535783d epic_leavitt 0.00% 404KiB / 989.4MiB 0.04% 656B / 0B 0B / 0
616f80b0f026 elegant_cohen 0.00% 404KiB / 989.4MiB 0.04% 656B / 0B 0B / 0
36a391532e10 charming_wiles 0.01% 8.363MiB / 989.4MiB 0.85% 656B / 0B 0B / 0
Container Top
docker container top webapp
UID PID PPID C STIME TTY TIME CMD
root 17001 16985 0 13:23 ? 00:00:00 stress
Container Logs
docker container logs logtest
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.6. Set the 'ServerName'
directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.6. Set the 'ServerName'
directive globally to suppress this message
[Tue Jan 14 13:38:15.699310 2020] [mpm_event:notice] [pid 1:tid 140610463122560] AH00489: Apache/2.4.41 (Unix) configured --
resuming normal operations
[Tue Jan 14 13:38:15.699520 2020] [core:notice] [pid 1:tid 140610463122560] AH00094: Command line: 'httpd -D FOREGROUND'

docker container logs -f logtest


Docker System Events
docker container start webapp
webapp

docker system events --since 60m


2020-01-14T18:30:30.423389441Z network connect d349c5984e7eebab74db57b8529df40e11a140f98a6b5e3ee1807aaeafa0e684
(container=68649c8b359f89db7a3866ee0ebcc7261c0cb9697f3a624cd314c8f4f652f84b, name=bridge, type=bridge)
2020-01-14T18:30:30.721669156Z container start 68649c8b359f89db7a3866ee0ebcc7261c0cb9697f3a624cd314c8f4f652f84b (image=ubuntu, name=casethree)
2020-01-14T18:40:46.779320656Z network connect d349c5984e7eebab74db57b8529df40e11a140f98a6b5e3ee1807aaeafa0e684
(container=71c90a19b9876c9ce2eb9d035355a062fdaceed4a714b61ddf0612651d47d3e2, name=bridge, type=bridge)
2020-01-14T18:40:47.076482525Z container start 71c90a19b9876c9ce2eb9d035355a062fdaceed4a714b61ddf0612651d47d3e2 (image=ubuntu, name=webapp)
Stopping & Removing Containers
Linux Signals
A process such as httpd can be controlled with Linux signals; each has a docker equivalent for the container running it:

docker container run --name web httpd

kill -SIGSTOP $(pgrep httpd)    →  docker container pause web      (uses the freezer cgroup)
kill -SIGCONT $(pgrep httpd)    →  docker container unpause web
kill -SIGTERM $(pgrep httpd)    →  docker container stop web
kill -SIGKILL $(pgrep httpd)
kill -9 $(pgrep httpd)          →  docker container kill --signal=9 web
Removing a container
docker container stop web
web

ls -lrt /var/lib/docker/containers/
36a391532e10d45f772f2c9430c2cc38dad4b441aa7a1c44d459f6fa3d78c6b6

docker container rm web
web

If the container had still been running, the removal would fail instead:

docker container rm web
Error response from daemon: You cannot remove a running container
36c57f29b607460fc53dace758dac47afbf8cb698694d2fcfcb0ab43a74f0d90. Stop the container
before attempting removal or force remove

ls -lrt /var/lib/docker/containers/
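A short sketch of the force-remove alternative the error message mentions (container name reused from above):

docker container rm -f web    # sends SIGKILL to the running container, then removes it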
Remove All Containers
docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
59aa5eacd88c ubuntu "/bin/bash" 23 minutes ago Up 23 minutes kodekloudagain
a00b5535783d ubuntu "/bin/bash" 25 minutes ago Up 25 minutes epic_leavitt
616f80b0f026 ubuntu "/bin/bash" 31 minutes ago Up 28 minutes elegant_cohen
36a391532e10 httpd "httpd-foreground" About an hour ago Up About an hour 80/tcp charming_wiles

docker container ls -q
59aa5eacd88c
a00b5535783d
616f80b0f026
36a391532e10

docker container stop $(docker container ls -q)


59aa5eacd88c
a00b5535783d
616f80b0f026
36a391532e10

docker container rm $(docker container ls -aq)


59aa5eacd88c
a00b5535783d
616f80b0f026
36a391532e10
Container Prune
docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
59aa5eacd88c ubuntu "/bin/bash" 23 minutes ago Up 23 minutes kodekloudagain
a00b5535783d ubuntu "/bin/bash" 25 minutes ago Up 25 minutes epic_leavitt
616f80b0f026 ubuntu "/bin/bash" 31 minutes ago Up 28 minutes elegant_cohen
36a391532e10 httpd "httpd-foreground" About an hour ago Up About an hour 80/tcp charming_wiles

docker container ls -q
59aa5eacd88c
a00b5535783d
616f80b0f026
36a391532e10

docker container stop $(docker container ls -q)


59aa5eacd88c
a00b5535783d
616f80b0f026
36a391532e10

docker container prune


WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Deleted Containers:
59aa5eacd88c
a00b5535783d
616f80b0f026
36a391532e10
Total reclaimed space: 1223423
Remove Flag
docker container run --rm ubuntu expr 4 + 5
9
Container Hostname
Container Hostname
docker container run -it --name=webapp ubuntu
root@3484d738:/# hostname
3484d738

docker container run -it --name=webapp --hostname=webapp ubuntu


root@webapp :/# hostname
webapp
Restart Policy
Container – Restart Policies
no, on-failure, always, unless-stopped

docker container run ubuntu expr 3 + 5


ubuntu "expr 3 + 5" Exited (0) 11 seconds ago

docker container run ubuntu expr three + 5


ubuntu "expr three + 5" Exited (1) 2 seconds ago

docker container stop httpd


httpd "httpd-foreground" Exited (0) 4 days ago

docker container run --restart=no ubuntu
docker container run --restart=on-failure ubuntu
docker container run --restart=always ubuntu
docker container run --restart=unless-stopped ubuntu
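A minimal sketch of how the policies behave in practice (images and the webapp name are reused from earlier slides; the retry count is illustrative):

docker container run -d --restart=on-failure:3 ubuntu expr three + 5   # restarted at most 3 times because the command exits non-zero
docker container run -d --restart=always httpd                         # restarted even after a clean exit or a daemon restart
docker container update --restart=unless-stopped webapp                # change the policy of an existing container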
Live Restore

docker container run --name web httpd

systemctl stop docker

systemctl start docker

docker container run --name web httpd

systemctl stop docker


/etc/docker/daemon.json
{
"debug": true,
"hosts": ["tcp://192.168.1.10:2376"],
"live-restore": true
}
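A sketch of enabling live-restore without stopping running containers: edit /etc/docker/daemon.json as shown above, then ask dockerd to reload its configuration (live-restore is one of the reloadable options):

sudo kill -SIGHUP $(pidof dockerd)    # dockerd re-reads daemon.json on SIGHUP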
Copy Files
Container cp – Copy Files between Host and Container
docker container cp <SRC_PATH> <DEST_PATH>

Host → container:
docker container cp /tmp/web.conf webapp:/etc/web.conf
docker container cp /tmp/web.conf webapp:/etc/
docker container cp /tmp/app/ webapp:/opt/app

Container → host:
docker container cp webapp:/root/dockerhost /tmp/

Copying to a path that does not exist inside the container (here /etccc/) fails:
docker container cp /tmp/web.conf webapp:/etccc/


Publishing Ports
Run – PORT mapping
The web app inside the container listens on port 5000 (container IP 172.17.0.2); users need to reach it through the Docker host (IP 192.168.1.5):

docker run kodekloud/simple-webapp
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)

http://172.17.0.2:5000   – internal (container) IP, not reachable from outside the host
http://192.168.1.5:80    – reachable once a host port is published to the container port

docker run -p 80:5000   kodekloud/simple-webapp
docker run -p 8000:5000 kodekloud/simple-webapp
docker run -p 8001:5000 kodekloud/simple-webapp
docker run -p 3306:3306 mysql
docker run -p 8306:3306 mysql
Container PORT Publish
The Docker host has multiple network interfaces (10.2.4.5, 192.168.1.5, 10.5.3.2). By default a published port is bound on all of them, so http://10.2.4.5:8000, http://192.168.1.5:8000 and http://10.5.3.2:8000 all reach the container (172.17.0.3:5000):

docker run -p 8000:5000 kodekloud/simple-webapp

The binding can be restricted to one interface, or to loopback only:

docker run -p 192.168.1.5:8000:5000 kodekloud/simple-webapp
docker run -p 127.0.0.1:8000:5000 kodekloud/simple-webapp

Publishing without a host port picks a random port (e.g. 41232) from the host's ephemeral port range (32768–60999):

docker run -p 5000 kodekloud/simple-webapp

cat /proc/sys/net/ipv4/ip_local_port_range
32768 60999

Publish all exposed ports of the image to random host ports:

docker run -P kodekloud/simple-webapp
Dockerfile
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y python python-pip
RUN pip install flask
COPY app.py /opt/
ENTRYPOINT flask run
EXPOSE 5000

EXPOSE records which ports the application inside the image listens on (5000, container IP 172.17.0.3); more ports can be exposed at run time:

docker run -P --expose=8080 kodekloud/simple-webapp

docker inspect kodekloud/simple-webapp
"ExposedPorts": {
    "5000/tcp": {},
    "8080/tcp": {}
},
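A small sketch for checking which host ports -P actually picked (the container name is illustrative):

docker container run -d -P --name webapp kodekloud/simple-webapp
docker container port webapp    # lists each exposed container port and the host IP:port it is published on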
IP Tables
Published ports are implemented as DNAT rules: traffic hitting host port 41232 is forwarded to the container at 172.17.0.3:5000. Packets traverse the built-in chains (INPUT, FORWARD, OUTPUT) plus Docker's own DOCKER-USER and DOCKER chains.

iptables -t nat -S DOCKER
-N DOCKER
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 41232 -j DNAT --to-destination 172.17.0.3:5000
References
https://docs.docker.com/network/links/
https://docs.docker.com/engine/reference/run/#expose-incoming-ports
https://docs.docker.com/config/containers/container-networking/
https://docs.docker.com/network/iptables/
Troubleshoot Docker Daemon
Check Service Status
docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Check Docker Host
Make sure the CLI points at the right endpoint – the local Unix socket /var/run/docker.sock, or the daemon's TCP socket (2375 un-encrypted, 2376 encrypted):

export DOCKER_HOST="tcp://192.168.1.10:2376"
docker ps
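A quick sketch for ruling out a stale DOCKER_HOST value when the CLI cannot reach the daemon:

echo $DOCKER_HOST   # if this points at a dead TCP endpoint, the CLI never tries the local socket
unset DOCKER_HOST
docker ps           # falls back to /var/run/docker.sock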
Check Service Status

systemctl start docker

systemctl status docker


● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Sat 2020-10-24 07:42:08 UTC; 21s ago
     Docs: https://docs.docker.com
  Process: 4197 ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0 --containerd=/run/containerd/containerd.sock (code=exited, status=0/SUCCESS)
 Main PID: 4197 (code=exited, status=0/SUCCESS)
View Logs
journalctl -u docker.service
-- Logs begin at Wed 2020-10-21 04:05:39 UTC, end at Sat 2020-10-24 07:41:39 UTC. --
Oct 21 04:05:42 ubuntu-xenial systemd[1]: Starting Docker Application Container Engine...
Oct 21 04:05:42 time="2020-10-21T04:05:42.565473329Z" level=info msg="parsed scheme: \"unix\"" mod
Oct 21 04:05:42 time="2020-10-21T04:05:42.565496428Z" level=info msg="scheme \"unix\" not register
Oct 21 04:05:42 time="2020-10-21T04:05:42.565554302Z" level=info msg="ccResolverWrapper: sending u
Oct 21 04:05:42 time="2020-10-21T04:05:42.565673967Z" level=info msg="ClientConn switching balance
Oct 21 04:05:42 time="2020-10-21T04:05:42.570967241Z" level=info msg="parsed scheme: \"unix\"" mod
Oct 21 04:05:42 time="2020-10-21T04:05:42.570982918Z" level=info msg="scheme \"unix\" not register
Oct 21 04:05:42 time="2020-10-21T04:05:42.571027208Z" level=info msg="ccResolverWrapper: sending u
Oct 21 04:05:42 time="2020-10-21T04:05:42.571037442Z" level=info msg="ClientConn switching balance
Oct 21 04:05:42 time="2020-10-21T04:05:42.629609680Z" level=info msg="[graphdriver] using prior st
Oct 21 04:05:42 time="2020-10-21T04:05:42.847722164Z" level=warning msg="Your kernel does not supp
Oct 21 04:05:42 time="2020-10-21T04:05:42.847808687Z" level=warning msg="Your kernel does not supp
Oct 21 04:05:42 time="2020-10-21T04:05:42.847816072Z" level=warning msg="Your kernel does not supp
Oct 21 04:05:42 time="2020-10-21T04:05:42.848125012Z" level=info msg="Loading containers: start."
Oct 21 04:05:43 time="2020-10-21T04:05:43.610553801Z" level=info msg="Removing stale sandbox ae1f6
Oct 21 04:05:43 time="2020-10-21T04:05:43.618004459Z" level=warning msg="Error (Unable to complete
Oct 21 04:05:43 time="2020-10-21T04:05:43.865861594Z" level=info msg="Removing stale sandbox c1138
Oct 21 04:05:43 time="2020-10-21T04:05:43.872335497Z" level=warning msg="Error (Unable to complete
Oct 21 04:05:44 time="2020-10-21T04:05:44.135363994Z" level=info msg="Removing stale sandbox ingre
Daemon Configuration File
/etc/docker/daemon.json
{
  "debug": true,
  "hosts": ["tcp://192.168.1.10:2376"],
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem"
}

unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in
the configuration file: debug: (from flag: true, from file: false)
Free Disk Space on Host
df -h
Filesystem Size Used Avail Use% Mounted on
dev 364M 0 364M 0% /dev
run 369M 340K 369M 1% /run
/dev/sda1 19G 14.7G 15M 99% /
tmpfs 369M 0 369M 0% /dev/shm
tmpfs 369M 0 369M 0% /sys/fs/cgroup
tmpfs 369M 4.0K 369M 1% /tmp
tmpfs 74M 0 74M 0% /run/user/0

docker container prune

docker image prune


Debug in Docker
docker system info
Client:
Debug Mode: false

Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 19.03.5
Storage Driver: overlay2
Backing Filesystem: xfs
.
.
.
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
References
https://docs.docker.com/config/daemon/
https://docs.docker.com/engine/reference/commandline/dockerd/
Logging Drivers
Logging Drivers
docker run -d --name nginx nginx

docker logs nginx


/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up

docker system info


Server:
...
Images: 54
Server Version: 19.03.6
...
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
...
Logging Drivers
docker ps
f3997637c0df nginx "/docker-entrypoint.…" 37 minutes ago Up 37 nginx

cd /var/lib/docker/containers; ls
38781779e9aa15c190746784ba23d1ae237f03b58e0479286259e275d4c8820a
c5ab1dba9b51486e0e69386c137542be2e4315a56b4ee07c825e2d41c99f89b4
f3997637c0df66becf4dd4662d3c172bf16f916a3b9289b95f0994675102de17

cat f3997637c0df66becf4dd4662d3c172bf16f916a3b9289b95f0994675102de17.json
{"log":"/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform
configuration\n","stream":"stdout","time":"2020-10-25T05:59:43.832656488Z"}
{"log":"/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/\n","stream":"stdout","time":"2020-10-
25T05:59:43.832891838Z"}
{"log":"/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh\n","stream":"stdout","time":"202
25T05:59:43.833987067Z"}
{"log":"10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf\n","stream":"stdout","time":"2
25T05:59:43.83695198Z"}
{"log":"10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf\n","stream":"stdout","time":
10-25T05:59:43.84592186Z"}
{"log":"/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh\n","stream":"stdout","time":"2020-10
25T05:59:43.846117966Z"}
{"log":"/docker-entrypoint.sh: Configuration complete; ready for start up\n","stream":"stdout","time":"2020-10-
25T05:59:43.850840102Z"}
Logging Drivers
docker system info
Server:
 ...
 Images: 54
 Server Version: 19.03.6
 ...
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 ...

The default logging driver is set in /etc/docker/daemon.json:
{
  "debug": true,
  "hosts": ["tcp://192.168.1.10:2376"],
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem",
  "log-driver": "awslogs"
}
Logging Driver - Options
docker system info
Server:
 ...
 Logging Driver: json-file
 Plugins:
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 ...

Driver-specific options go under "log-opt" in /etc/docker/daemon.json:
{
  "debug": true,
  "hosts": ["tcp://192.168.1.10:2376"],
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem",
  "log-driver": "awslogs",
  "log-opt": {
    "awslogs-region": "us-east-1"
  }
}

export AWS_ACCESS_KEY_ID=<>
export AWS_SECRET_ACCESS_KEY=<>
export AWS_SESSION_TOKEN=<>
Logging Drivers
docker run -d --log-driver json-file nginx

docker container inspect nginx


[
{
"Id": "f3997637c0df66becf4dd4662d3c172bf16f916a3b9289b95f0994675102de17",
"Created": "2020-10-25T05:59:43.543296741Z",
"Path": "/docker-entrypoint.sh",
...
"HostConfig": {
"Binds": null,
"ContainerIDFile": "",
"LogConfig": {
"Type": "json-file",
"Config": {}
},

docker container inspect -f '{{.HostConfig.LogConfig.Type}}' nginx


json-file
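A sketch of overriding the driver and its options for a single container instead of daemon-wide (the container name and the rotation values are illustrative):

docker run -d --name weblogs --log-driver json-file --log-opt max-size=10m --log-opt max-file=3 nginx
docker container inspect -f '{{.HostConfig.LogConfig}}' weblogs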
Docker Images
Image Registry

Docker Trusted Registry

Google Container Registry

Amazon Container Registry

Azure Container Registry


Image Registry
Official Images Verified Images User Images
Registry: Searching an image
Image Tags
docker run ubuntu docker run ubuntu:latest

docker run ubuntu:18.04

docker run ubuntu:trusty


Image list: List Local Available Images
docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest 549b9b86cb8d 4 weeks ago 64.2MB
Image Search: Search without GUI
docker search httpd
NAME                      DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
httpd                     The Apache HTTP Server Project                  2815    [OK]
centos/httpd-24-centos7   Platform for running Apache httpd 2.4 or bui…   29
centos/httpd                                                              26                 [OK]
armhf/httpd               The Apache HTTP Server Project                  8
salim1983hoop/httpd24     Dockerfile running apache config                2                  [OK]

docker search httpd --limit 2


NAME DESCRIPTION STARS OFFICIAL AUTOMATED
httpd The Apache HTTP Server Project 2815 [OK]
centos/httpd-24-centos7 Platform for running Apache httpd 2.4 or bui… 29

docker search --filter stars=10 httpd


NAME DESCRIPTION STARS OFFICIAL AUTOMATED
httpd The Apache HTTP Server Project 2815 [OK]
centos/httpd-24-centos7 Platform for running Apache httpd 2.4 or bui… 29
centos/httpd 26 [OK]

docker search --filter stars=10 --filter is-official=true httpd


Image Pull: Download latest Image
docker image pull httpd
Using default tag: latest
latest: Pulling from library/httpd
8ec398bc0356: Pull complete
354e6904d655: Pull complete
27298e4c749a: Pull complete
10e27104ba69: Pull complete
36412f6b2f6e: Pull complete
Digest: sha256:769018135ba22d3a7a2b91cb89b8de711562cdf51ad6621b2b9b13e95f3798de
Status: Downloaded newer image for httpd:latest
docker.io/library/httpd:latest

docker image list


REPOSITORY TAG IMAGE ID CREATED SIZE
httpd latest c2aa7e16edd8 2 weeks ago 165MB
ubuntu latest 549b9b86cb8d 4 weeks ago 64.2MB
Image Addressing Convention
Image Addressing Convention

docker image pull httpd

With no registry specified, images are pulled from the default registry, docker.io (Docker Hub).

Image Addressing Convention
image: docker.io/httpd/httpd
       = registry / user or account / image or repository
The same convention applies to other registries, e.g. gcr.io/httpd/httpd
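For official images the account defaults to "library", so the short name expands to a fully-qualified address; both of these pulls refer to the same image:

docker image pull httpd
docker image pull docker.io/library/httpd:latest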
Authenticating to Registries
Public/Private Registry
docker pull ubuntu

docker pull gcr.io/organization/ubuntu


Using default tag: latest
Error response from daemon: pull access denied for gcr.io/organization/ubuntu, repository does not exist or may
require 'docker login': denied: requested access to the resource is denied

docker push ubuntu


The push refers to repository [docker.io/library/ubuntu]
128fa0b0fb81: Layer already exists
c0151ca45f27: Layer already exists
b2fd17df2071: Layer already exists
[DEPRECATION NOTICE] registry v2 schema1 support will be removed in an upcoming release. Please contact admins of
the docker.io registry NOW to avoid future disruption. More information at
https://docs.docker.com/registry/spec/deprecated-schema-v1/
errors:
denied: requested access to the resource is denied
unauthorized: authentication required
Public/Private Registry
docker login docker.io
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head
over to https://hub.docker.com to create one.
Username: registry-user
Password:
WARNING! Your password will be stored unencrypted in /home/vagrant/.docker/config.json.

Login Succeeded

docker login gcr.io


Username: registry-user
Password:
WARNING! Your password will be stored unencrypted in /home/vagrant/.docker/config.json.

Login Succeeded

docker image push httpd


The push refers to repository [gcr.io/kodekloud/httpd]
2f159baeafde: Mounted from library/httpd
6b27de954cca: Mounted from library/httpd
httpd: digest: sha256:9a5e7d690fd4ca39ccdc9e6d39e3dc0f96bf3acda096a2567374b4c608f6dacc size: 1362
Image Tag: Retagging an image locally
docker image list
REPOSITORY TAG IMAGE ID CREATED SIZE
httpd alpine 52862a02e4e9 2 weeks ago 112MB
httpd latest c2aa7e16edd8 2 weeks ago 165MB
ubuntu latest 549b9b86cb8d 4 weeks ago 64.2MB

docker image tag httpd:alpine httpd:customv1

docker image list


REPOSITORY TAG IMAGE ID CREATED SIZE
httpd alpine 52862a02e4e9 2 weeks ago 112MB
httpd customv1 52862a02e4e9 2 weeks ago 112MB
httpd latest c2aa7e16edd8 2 weeks ago 165MB
ubuntu latest 549b9b86cb8d 4 weeks ago 64.2MB

docker image tag httpd:alpine gcr.io/company/httpd:customv1

docker image push gcr.io/company/httpd:customv1


Objects Size
docker image list
REPOSITORY TAG IMAGE ID CREATED SIZE
httpd alpine 52862a02e4e9 2 weeks ago 112MB
httpd customv1 52862a02e4e9 2 weeks ago 112MB
httpd latest c2aa7e16edd8 2 weeks ago 165MB
ubuntu latest 549b9b86cb8d 4 weeks ago 64.2MB

docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 3 0 341.9MB 341.9MB (100%)
Containers 0 0 0B 0B
Local Volumes 0 0 0B 0B
Build Cache 0 0 0B 0B
Remove Images
Image Rm: Removing an Image Locally
docker image list
REPOSITORY TAG IMAGE ID CREATED SIZE
httpd alpine 52862a02e4e9 2 weeks ago 112MB
httpd customv1 52862a02e4e9 2 weeks ago 112MB
httpd latest c2aa7e16edd8 2 weeks ago 165MB
ubuntu latest 549b9b86cb8d 4 weeks ago 64.2MB

Note: An image cannot be removed while a container depends on it. All such containers must be stopped and deleted first.

docker image rm httpd:customv1


Untagged: httpd:customv1

docker image rm httpd:alpine


untagged: httpd:alpine
deleted: sha256:549b9b86cb8d75a2b668c21c50ee092716d070f129fd1493f95ab7e43767eab8
deleted: sha256:7c52cdc1e32d67e3d5d9f83c95ebe18a58857e68bb6985b0381ebdcec73ff303
deleted: sha256:a3c2e83788e20188bb7d720f36ebeef2f111c7b939f1b19aa1b4756791beece0
deleted: sha256:61199b56f34827cbab596c63fd6e0ac0c448faa7e026e330994818190852d479
deleted: sha256:2dc9f76fb25b31e0ae9d36adce713364c682ba0d2fa70756486e5cedfaf40012
Image Prune: removing all unused images
docker image prune -a
WARNING! This will remove all images without at least one container associated to them.
Are you sure you want to continue? [y/N] y
Deleted Images:
untagged: ubuntu:latest
untagged: ubuntu@sha256:250cc6f3f3ffc5cdaa9d8f4946ac79821aafb4d3afc93928f0de9336eba21aa4
deleted: sha256:549b9b86cb8d75a2b668c21c50ee092716d070f129fd1493f95ab7e43767eab8
deleted: sha256:7c52cdc1e32d67e3d5d9f83c95ebe18a58857e68bb6985b0381ebdcec73ff303
deleted: sha256:a3c2e83788e20188bb7d720f36ebeef2f111c7b939f1b19aa1b4756791beece0
deleted: sha256:61199b56f34827cbab596c63fd6e0ac0c448faa7e026e330994818190852d479
deleted: sha256:2dc9f76fb25b31e0ae9d36adce713364c682ba0d2fa70756486e5cedfaf40012
untagged: httpd:latest
untagged: httpd@sha256:769018135ba22d3a7a2b91cb89b8de711562cdf51ad6621b2b9b13e95f3798de
deleted: sha256:c2aa7e16edd855da8827aa0ccf976d1d50f0827c08622c16e0750aa1591717e5
deleted: sha256:9fa170034369c33a4c541b38ba11c63c317f308799a46e55da9bea5f9c378643
deleted: sha256:9a41b3deb4609bec368902692dec63e858e6cd85a1312ee1931d421f51b2a07c
deleted: sha256:ed10451b31dfca751aa8d3e4264cb08ead23d4f2b661324eca5ec72b0e7c59fa
deleted: sha256:06020df9067f8f2547f53867de8e489fed315d964c9f17990c3e5e6a29838d98
deleted: sha256:556c5fb0d91b726083a8ce42e2faaed99f11bc68d3f70e2c7bbce87e7e0b3e10

Total reclaimed space: 229.4MB


Inspect Image
Image Layers: display image layers
docker image list
REPOSITORY TAG IMAGE ID CREATED SIZE
httpd latest c2aa7e16edd8 2 weeks ago 165MB
ubuntu latest 549b9b86cb8d 4 weeks ago 64.2MB

docker image history ubuntu


IMAGE CREATED CREATED BY SIZE COMMENT
549b9b86cb8d 4 weeks ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0B
<missing> 4 weeks ago /bin/sh -c mkdir -p /run/systemd && echo 'do… 7B
<missing> 4 weeks ago /bin/sh -c set -xe && echo '#!/bin/sh' > /… 745B
<missing> 4 weeks ago /bin/sh -c [ -z "$(apt-get indextargets)" ] 987kB
<missing> 4 weeks ago /bin/sh -c #(nop) ADD file:53f100793e6c0adfc… 63.2MB
Image inspect
docker image inspect httpd
[
    {
        "Parent": "",                                   ← parent/base image
        "Comment": "",
        "Created": "2020-09-15T23:05:57.348340124Z",
        "ContainerConfig": {
            "ExposedPorts": {                           ← exposed ports
                "80/tcp": {}
            }
        },
        "DockerVersion": "18.09.7",
        "Author": "",                                   ← author details
        "Architecture": "amd64",
        "Os": "linux",
        "Size": 137532780,                              ← size
        "VirtualSize": 137532780,
        "Metadata": {
            "LastTagTime": "0001-01-01T00:00:00Z"
        }
    }
]
Image inspect - with format
docker image inspect httpd -f '{{.Os}}'
linux

docker image inspect httpd -f '{{.Architecture}}'
amd64

docker image inspect httpd -f '{{.Architecture}} {{.Os}}'
amd64 linux

docker image inspect httpd -f '{{.ContainerConfig.ExposedPorts}}'
map[80/tcp:{}]
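A small sketch of extracting a nested field as JSON with the template's json function (same httpd image as above):

docker image inspect httpd -f '{{json .ContainerConfig.ExposedPorts}}'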
Save and Load
Image Save and Load

docker image save alpine:latest -o alpine.tar

docker image load -i alpine.tar


beee9f30bc1f: Loading layer [===============>] 5.862MB/5.862MB .tar
Loaded image: alpine:latest

docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine latest a187dde48cd2 4 weeks ago 5.6MB
Import and Export Operations
docker export <container-name> > testcontainer.tar

docker image import testcontainer.tar newimage:latest


sha256:8090b7da236bb21aa2e52e6e04dff4b7103753e4046e15457a3daf6dfa723a12

docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
newimage latest 8090b7da236b 2 minutes ago 5.6MB
alpine latest a187dde48cd2 4 weeks ago 5.6MB
Building Images Using Commit
Docker Container Commit
docker run -d --name httpd httpd

docker exec -it httpd bash


root@3484d738:/# cat > htdocs/index.html
Welcome to my custom web application
docker container commit -a "Ravi" httpd customhttpd

docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
customhttpd latest adac0f56a7df 5 seconds ago 138MB
httpd latest 417af7dc28bc 8 days ago 138MB
Save vs Load vs Import vs Export vs Commit
docker run -d --name httpd httpd

docker exec -it httpd bash


root@3484d738:/# cat > htdocs/index.html
Welcome to my custom web application

docker container commit -a "Ravi" httpd customhttpd

docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
customhttpd latest adac0f56a7df 5 seconds ago 138MB
httpd latest 417af7dc28bc 8 days ago 138MB
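A sketch of commit with extra metadata: -m records a commit message and --change applies a Dockerfile instruction to the new image (the CMD value and the v2 tag are assumptions for illustration):

docker container commit -a "Ravi" -m "add custom index.html" --change 'CMD ["httpd-foreground"]' httpd customhttpd:v2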
Build Context
Build Context
/opt/my-custom-app/Dockerfile
FROM ubuntu
RUN apt-get update
RUN apt-get install python
RUN pip install flask
RUN pip install flask-mysql
COPY . /opt/source-code
ENTRYPOINT FLASK_APP=/opt/source-code/app.py flask run

The directory passed to docker build is the build context; the Docker CLI sends it to the Docker Daemon:

docker build . -t my-custom-app
docker build /opt/my-custom-app
Build Context
The context is copied to the daemon under /var/lib/docker/tmp/docker-builderxxxxx before the build starts:

docker build . -t my-custom-app
docker build /opt/my-custom-app
Sending build context to Docker daemon 2.048kB
Step 1/7 : FROM ubuntu

.dockerignore
Files and directories in the context that the build does not need (here: tmp, logs, build) should be listed in a .dockerignore file so they are not sent to the daemon:

/opt/my-custom-app: Dockerfile, app.py, tmp, logs, build, .dockerignore

.dockerignore
tmp
logs
build
Build Context
docker build . -t my-custom-app
docker build /opt/my-custom-app
Sending build context to Docker daemon 2.048kB
Step 1/7 : FROM ubuntu

The build context can also come from a Git repository URL, optionally with a branch, a sub-folder, and an alternative Dockerfile:

docker build https://github.com/myaccount/myapp
docker build https://github.com/myaccount/myapp#<branch>
docker build https://github.com/myaccount/myapp#<branch>:<folder>
docker build -f Dockerfile.dev https://github.com/myaccount/myapp
Build Cache
Build Cache
Dockerfile
FROM ubuntu Layer 1. Base ubuntu Layer 120 MB

RUN apt-get update Layer 2. Update apt packages 22 MB


RUN apt-get install -y python python3-pip Layer 3. Install python and python pip 329 MB

RUN pip3 install flask Layer 4. Changes in pip packages 4.3 MB

COPY app.py /opt/source-code Layer 5. Source code 229 B

ENTRYPOINT flask run Layer 6. Update Entrypoint with “flask” command 0B


Build Cache
docker build .
Sending build context to Docker daemon 2.048kB
Step 1/6 : FROM ubuntu
 ---> bb0eaf4eee00
Step 2/6 : RUN apt-get update
 ---> Using cache
 ---> e09e593ec730
Step 3/6 : RUN apt-get install -y python python-pip
 ---> Running in e9944225690a
Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package python-pip
The command '/bin/sh -c apt-get install -y python python-pip' returned a non-zero code: 100

The unchanged steps (base image, apt-get update) are taken from the cache ("---> Using cache"); only the changed step is executed again.
Build Cache
FROM ubuntu                                  Layer 1. Base ubuntu layer               cached
RUN apt-get update                           Layer 2. Update apt packages             cached
RUN apt-get install -y python python3-pip    Layer 3. Install python and python pip   cached
RUN pip3 install flask flask-mysql           Layer 4. Changes in pip packages         invalidated
COPY app.py /opt/source-code                 Layer 5. Source code                     invalidated
ENTRYPOINT flask run                         Layer 6. Update Entrypoint with "flask"  invalidated

Once one layer's cache is invalidated, every layer after it is rebuilt as well. To decide whether a cached layer can be reused, Docker will:
1. Compare instructions in the Dockerfile
2. Compare checksums of files in ADD or COPY
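A small sketch for inspecting the cached layers and for forcing a build that ignores the cache entirely (image name reused from earlier slides):

docker image history my-custom-app           # shows each layer, the instruction that created it, and its size
docker build --no-cache -t my-custom-app .   # rebuilds every layer from scratch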
Build Cache - Cache Busting
Combining apt-get update and apt-get install in a single RUN instruction means that adding a package invalidates the layer and re-runs the update too, so the package index is never stale:

FROM ubuntu
RUN apt-get update && apt-get install -y \
    python \
    python-dev \
    python3-pip
RUN pip3 install flask flask-mysql
COPY app.py /opt/source-code
ENTRYPOINT flask run
Build Cache - Version Pinning
Pinning package versions makes a rebuilt layer install exactly the same versions every time:

FROM ubuntu
RUN apt-get update && apt-get install -y \
    python \
    python-dev \
    python3-pip=20.0.2
RUN pip3 install flask flask-mysql
COPY app.py /opt/source-code
ENTRYPOINT flask run
Build Cache
With COPY near the end of the Dockerfile, a change to app.py only invalidates the last layers:

FROM ubuntu
RUN apt-get update && apt-get install -y \
    python \
    python-dev \
    python3-pip=20.0.2                      (cached)
RUN pip3 install flask flask-mysql          (cached)
COPY app.py /opt/source-code                (invalidated)
ENTRYPOINT flask run

Build Cache
With COPY at the top, the same change invalidates every layer below it:

FROM ubuntu
COPY app.py /opt/source-code                (invalidated)
RUN apt-get update && apt-get install -y \
    python \
    python-dev \
    python3-pip=20.0.2                      (rebuilt)
RUN pip3 install flask flask-mysql          (rebuilt)
ENTRYPOINT flask run
References
• https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#leverage-build-cache
COPY vs ADD
Difference between COPY and ADD
Dockerfile Dockerfile
FROM centos:7 FROM centos:7
COPY /testdir /testdir ADD /testdir /testdir

Dockerfile
FROM centos:7
ADD app.tar.xz /testdir

Dockerfile
FROM centos:7
ADD http://app.tar.xz /testdir
RUN tar -xJf /testdir/app.tar.xz -C /tmp/app
RUN make -C /tmp/app
Copy or ADD?
Dockerfile Dockerfile
FROM centos:7 FROM centos:7
COPY /testdir /testdir ADD /testdir /testdir

Dockerfile
FROM centos:7
ADD app.tar.xz /testdir

Dockerfile                                         Dockerfile
FROM centos:7                                      FROM centos:7
RUN curl http://app.tar.xz \                       ADD http://app.tar.xz /testdir
      | tar -xJ -C /testdir \                      RUN tar -xJf /testdir/app.tar.xz -C /tmp/app
      && yarn build \                              RUN make -C /tmp/app
      && rm /testdir/file.tar.xz
Base Image
Base vs Parent Image

Dockerfile – My Custom Webapp

FROM httpd
Parent
COPY index.html htdocs/index.html httpd (Parent)

My Custom WebApp
Base vs Parent Image

Dockerfile - httpd
FROM debian:buster-slim
Parent debian
ENV HTTPD_PREFIX /usr/local/apache2
ENV PATH $HTTPD_PREFIX/bin:$PATH
httpd (Parent)
WORKDIR $HTTPD_PREFIX
<content trimmed>
My Custom WebApp

Dockerfile – My Custom Webapp

FROM httpd

COPY index.html htdocs/index.html


Base vs Parent Image
Dockerfile - debian:buster-slim

Base FROM scratch


ADD rootfs.tar.xz /
scratch
CMD ["bash"]

debian (Base)

Dockerfile - httpd httpd (Parent)


FROM debian:buster-slim
My Custom WebApp
Parent ENV HTTPD_PREFIX /usr/local/apache2
ENV PATH $HTTPD_PREFIX/bin:$PATH
WORKDIR $HTTPD_PREFIX
<content trimmed>
Dockerfile – My Custom Webapp

FROM httpd

COPY index.html htdocs/index.html


Base vs Parent Image

scratch → debian (Base) → php (Parent) → custom-php (Parent) → Custom Wordpress
scratch → debian (Base) → php (Parent) → Wordpress
scratch → ubuntu (Base) → MongoDB
scratch → debian (Base) → httpd (Parent) → My Custom WebApp
Base vs Parent Image

scratch

Dockerfile - debian:buster-slim
FROM scratch
ADD rootfs.tar.xz /
CMD ["bash"]
References
https://docs.docker.com/develop/develop-images/baseimages/
Multi-Stage Builds
my-application/
LICENSE, README.md, package.json, app.js, public, tests, config, routes, services, db, core, dist

1. Build (on the development server):
npm run build

2. Containerize for Production:
Dockerfile
FROM nginx
COPY dist /usr/share/nginx/html
CMD [ "nginx", "-g", "daemon off;" ]

docker build -t my-app .
my-application/ (same project tree as above)

1. Build:
Dockerfile.builder
FROM node
COPY . .
RUN npm install
RUN npm run build

docker build -t builder .

2. Containerize for Production:
Dockerfile
FROM nginx
COPY dist /usr/share/nginx/html
CMD [ "nginx", "-g", "daemon off;" ]

docker build -t my-app .
1. Build:
Dockerfile.builder
FROM node
COPY . .
RUN npm install
RUN npm run build

docker build -t builder .

2. Extract the build output from the builder image (copy-dist-from-builder.sh):
docker container create --name builder builder
docker container cp builder:dist ./dist
docker container rm -f builder

3. Containerize for Production:
Dockerfile
FROM nginx
COPY dist /usr/share/nginx/html
CMD [ "nginx", "-g", "daemon off;" ]

docker build -t my-app .
Multi-stage builds

Dockerfile
FROM node
COPY . .
RUN npm install
RUN npm run build

FROM nginx
COPY dist /usr/share/nginx/html
CMD [ "nginx", "-g", "daemon off;" ]

docker build -t my-app .
Multi-stage builds

Dockerfile
FROM node AS builder        (Stage 0 – 1. Build)
COPY . .
RUN npm install
RUN npm run build

FROM nginx                  (Stage 1 – 2. Containerize for Production)
COPY --from=builder dist /usr/share/nginx/html
CMD [ "nginx", "-g", "daemon off;" ]

A stage can be referenced by index (--from=0) or, when named with AS, by name (--from=builder).

docker build -t my-app .
docker build --target builder -t my-app .
Multi-Stage Builds

• Optimize Dockerfiles and keep them easy to read and maintain


• Helps keep size of images low
• Helps avoid having to maintain multiple Dockerfiles – Builder and
Production
• No intermediate images
Best Practices
Modular
Persist State
Slim/Minimal Images

1. Create slim/minimal images
2. Find an official minimal image that exists
3. Only install necessary packages
4. Maintain different images for different environments:
   • Development – debug tools
   • Production – lean
5. Use multi-stage builds to create lean, production-ready images
6. Avoid sending unwanted files to the build context
References
1. https://docs.docker.com/develop/dev-best-practices/
2. https://docs.docker.com/develop/develop-images/dockerfile_best-practices/
Networking
Network: List
docker network ls
NETWORK ID NAME DRIVER SCOPE
599dcaf4e856 bridge bridge local
c817f1bca596 host host local
e6508d3404a3 none null local

docker network inspect 599dcaf4e856


[
{
"Name": "bridge",
"Id": "599dcaf4e85684c8c3a111baa52b7530f097853b96485a8a3ffcd9088b20f0cb",
"Created": "2020-01-20T18:10:46.896056535Z",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.17.0.0/16",
"Gateway": "172.17.0.1"
Custom Network
docker network connect custom-net my-container

docker network disconnect custom-net my-container

docker network rm custom-net

docker network prune


WARNING! This will remove all networks not used by at least one container.
Are you sure you want to continue? [y/N] y
Deleted Networks:
custom-net
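
For reference, the custom network used in the commands above would first be created and then attached to a container; a minimal sketch, reusing the illustrative names custom-net and my-container:

docker network create --driver bridge custom-net
docker container run -d --name my-container --network custom-net nginx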
Volume
Volume Inspect
docker volume inspect data_volume
[
{
"CreatedAt": "2020-01-20T19:52:34Z",
"Driver": "local",
"Labels": {},
"Mountpoint": "/var/lib/docker/volumes/data_volume/_data",
"Name": "data_volume",
"Options": {},
"Scope": "local"
}
]
Volume Removal: rm and prune
docker volume remove data_volume
Error response from daemon: remove data_volume: volume is in use -
[2be4d91822964882504a31992aac9dd0b228c03f8739b1afe74984aae6409620]

docker volume remove data_volume


data_volume

docker volume prune


WARNING! This will remove all local volumes not used by at least one container.
Are you sure you want to continue? [y/N] y
Deleted Volumes:
data_vol3
data_vol1
data_vol2

Total reclaimed space: 12MB


ReadOnly Volume
docker container inspect my-container
"Mounts": [
{
"Type": "volume",
"Name": "data_vol1",
"Source": "/var/lib/docker/volumes/data_vol1/_data",
"Destination": "/var/www/html/index.html",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
],

docker container run --mount \


source=data_vol1,destination=/var/www/html/index.html,readonly httpd
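
The same read-only mount can also be expressed with the -v shorthand by appending :ro to the mapping; a minimal equivalent sketch:

docker container run -v data_vol1:/var/www/html/index.html:ro httpd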
References
https://docs.docker.com/storage/
End to End
Engine Demo
Sample application – voting application
voting-app result-app
python NodeJS
C

in-memory DB db
PostgreSQL

worker
.NET
Sample application – voting application

Build and Pull Images

Build a user-defined Network for your voting app

Create containers inside the user-defined network

Test your voting app
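
A minimal CLI sketch of these steps, reusing the yogeshraheja/*:v1 images and the 81/82 port mappings that appear in the compose example later in this section, and assuming the application reaches its dependencies by the container names redis and db:

docker network create --driver bridge appnet
docker container run -d --name=redis --network appnet yogeshraheja/redis:v1
docker container run -d --name=db --network appnet yogeshraheja/db:v1
docker container run -d --name=vote -p 81:80 --network appnet yogeshraheja/vote:v1
docker container run -d --name=result -p 82:80 --network appnet yogeshraheja/result:v1
docker container run -d --name=worker --network appnet yogeshraheja/worker:v1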


docker compose
Docker compose Public Docker registry - dockerhub
docker container run -itd --name=web nodejs

docker container run -itd --name=db mongodb

docker container run -itd --name=messaging redis

docker container run -itd --name=orchestration ansible

docker-compose.yml
services:
  web:
    image: "nodejs"
  db:
    image: "mongodb"
  messaging:
    image: "redis"
  orchestration:
    image: "ansible"

docker-compose up
Docker compose - versions
docker-compose.yml
version: "3.8"
services:
  web:
    image: httpd:alpine
    ports:
      - "80"
    networks:
      - appnet
    volumes:
      - appvol:/webfs
networks:
  appnet:
volumes:
  appvol:
configs:
secrets:
Docker compose
docker-compose.yml
version: '3.8'
services:
  vote:
    image: yogeshraheja/vote:v1
    ports:
      - "81:80"
    networks:
      - appnet
  redis:
    image: yogeshraheja/redis:v1
    networks:
      - appnet
  db:
    image: yogeshraheja/db:v1
    networks:
      - appnet
  worker:
    image: yogeshraheja/worker:v1
    networks:
      - appnet
  result:
    image: yogeshraheja/result:v1
    ports:
      - "82:80"
    networks:
      - appnet
networks:
  appnet:
    driver: bridge

(diagram: voting-app, result-app, redis, db and worker all attached to the appnet network)
Compose Commands

docker-compose up

docker-compose up -d

docker-compose ps

docker-compose logs

docker-compose stop

docker-compose start
Compose Commands

docker-compose stop

docker-compose rm

docker-compose down
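
docker-compose stop only stops the containers, docker-compose rm removes the stopped containers, while docker-compose down stops and removes the containers plus the networks created by up; adding --volumes also removes the named volumes declared in the file. A small sketch:

docker-compose down --volumes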
Docker compose

appnet
voting-app result-app

redis db

worker
docker swarm
Docker swarm

Docker Swarm

Web Web Web Web Web


Container Container Container Container Container

MySQL
Container

Docker Host Docker Host Docker Host Docker Host


Docker swarm

Docker Swarm

Web Web Web


Container Container Container

Service Task Task Task

Docker Host Docker Host Docker Host Docker Host

Manager Node Worker Node Worker Node Worker Node


Features
• Simplified Setup
• Declarative
• Scaling Docker Swarm

• Rolling Updates
• Self Healing
• Security
• Load balancing
Docker Host Docker Host Docker Host
• Service Discovery
Manager Node Worker Node Worker Node
Features
• Simplified Setup
• Declarative
• Scaling
• Rolling Updates
• Self Healing
• Security
• Load balancing
• Service Discovery

service-definition.yml
services:
  web:
    image: "simple-webapp"
  database:
    image: "mongodb"
  messaging:
    image: "redis:alpine"

Docker Swarm
Web Web
Service Task Task
Docker Host Docker Host Docker Host
Manager Node Worker Node Worker Node
Features
• Simplified Setup
• Declarative
• Scaling Docker Swarm

• Rolling Updates
• Self Healing Web Web Web Web Web Web

• Security
• Load balancing
Docker Host Docker Host Docker Host
• Service Discovery
Manager Node Worker Node Worker Node
Features
External Load Balancer

• Simplified Setup
• Declarative
• Scaling Docker Swarm

• Rolling Updates Port - 58080 Port - 58080

• Self Healing Web Web Web Web Web Web

• Security
• Load balancing
Docker Host Docker Host Docker Host
• Service Discovery
Manager Node Worker Node Worker Node
Features
• Simplified Setup
• Declarative
• Scaling Docker Swarm

• Rolling Updates
• Self Healing DNS
Server
Web Web Web Web Web Web

• Security
• Load balancing
Docker Host Docker Host Docker Host
• Service Discovery
Manager Node Worker Node Worker Node
Setup
Docker Swarm
Setup swarm

Docker Swarm

Docker Host Docker Host Docker Host

Manager Node Worker Node Worker Node


Pre-Requisites

172.31.46.126 172.31.46.127 172.31.46.128 docker system info


Server:
Containers: 31
Running: 1
Paused: 0
Stopped: 30
Images: 15
Server Version: 19.03.6
Docker Host Docker Host Docker Host Swarm: inactive
Runtimes: runc
Manager Node Worker Node Worker Node

Port Description
TCP 2377 Cluster Management Communications
TCP and UDP 7946 Communication among nodes
UDP 4789 Overlay network traffic
Cluster Setup
172.31.46.126 172.31.46.127 172.31.46.128

Docker Host Docker Host Docker Host

Manager Node Worker Node Worker Node


docker swarm init
Swarm initialized: current node (91uxgq6i78j1h1u5v7moq7vgz) is now a manager.

To add a worker to this swarm, run the following command:

docker swarm join --token SWMTKN-1-1m989y6yl10qhgyz4bqc8eks1wx13kslvuzzi7q3tt12epcwn6-4cq5kbifs4wpkyq68n9ynxmnd 172.31.46.126:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

docker system info | grep -i swarm


Swarm: active
Cluster Setup
172.31.46.126 172.31.46.127 172.31.46.128

Docker Host Docker Host Docker Host

Manager Node Worker Node Worker Node

docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE
91uxgq6i78j1h1u5v7moq7vgz * manager1 Ready Active Leader 19.03.8
2lux7z6p96gc6vtx0h6a2wo2r worker1 Ready Active 19.03.8

docker swarm join-token worker


To add a worker to this swarm, run the following command:

docker swarm join --token SWMTKN-1-1m989y6yl10qhgyz4bqc8eks1wx13kslvuzzi7q3tt12epcwn6-4cq5kbifs4wpkyq68n9ynxmnd 172.31.46.126:2377
Cluster Setup
172.31.46.126 172.31.46.127 172.31.46.128

Docker Host Docker Host Docker Host

Manager Node Worker Node Worker Node

docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE
91uxgq6i78j1h1u5v7moq7vgz * manager1 Ready Active Leader 19.03.8
2lux7z6p96gc6vtx0h6a2wo2r worker1 Ready Active 19.03.8
w0qr6k2ce03ojawmflc26pvp3 worker2 Ready Active 19.03.8

Active
Pause Leader
Drain Reachable
Unavailable
Cluster Setup
docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE
91uxgq6i78j1h1u5v7moq7vgz * manager1 Ready Active Leader 19.03.8
2lux7z6p96gc6vtx0h6a2wo2r worker1 Ready Active 19.03.8
w0qr6k2ce03ojawmflc26pvp3 worker2 Ready Active 19.03.8

Active
Pause Leader
Drain Reachable
Unavailable

docker node inspect manager1 --pretty


ID: 91uxgq6i78j1h1u5v7moq7vgz
Hostname: manager1
Status:
State: Ready
Availability: Active
Address: 172.31.46.126
Manager Status:
Address: 172.31.46.126:2377
Raft Status: Reachable
Operations
Docker Swarm
Promote a Worker to Manager
docker node promote worker1
Node worker1 promoted to a manager in the swarm.

docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE
VERSION
91uxgq6i78j1h1u5v7moq7vgz * manager1 Ready Active Leader 19.03.8
2lux7z6p96gc6vtx0h6a2wo2r worker1 Ready Active Reachable 19.03.8
w0qr6k2ce03ojawmflc26pvp3 worker2 Ready Active 19.03.8

docker node demote worker1


Manager worker1 demoted in the swarm.

docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE
VERSION
91uxgq6i78j1h1u5v7moq7vgz * manager1 Ready Active Leader 19.03.8
2lux7z6p96gc6vtx0h6a2wo2r worker1 Ready Active 19.03.8
w0qr6k2ce03ojawmflc26pvp3 worker2 Ready Active 19.03.8
Draining A Node
172.31.46.126 172.31.46.127 172.31.46.128

Web Web

Docker Host Docker Host Docker Host

Manager1 Node Worker1 Node Worker2 Node

docker node update --availability drain worker1


worker1
Draining A Node
172.31.46.126 172.31.46.128

Web Web
172.31.46.127

Docker Host Docker Host

Manager1 Node Worker2 Node


Docker Host

Worker1 Node

docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE
VERSION
91uxgq6i78j1h1u5v7moq7vgz * manager1 Ready Active Leader 19.03.8
2lux7z6p96gc6vtx0h6a2wo2r worker1 Ready Drain 19.03.8
w0qr6k2ce03ojawmflc26pvp3 worker2 Ready Active 19.03.8
Draining A Node
172.31.46.126 172.31.46.128

Web Web
172.31.46.127

Docker Host Docker Host

Manager1 Node Worker2 Node


Docker Host

Worker1 Node

docker node update --availability active worker1


worker1
Draining A Node
172.31.46.126 172.31.46.127 172.31.46.128

Web Web

Docker Host Docker Host Docker Host

Manager1 Node Worker1 Node Worker2 Node

docker node update --availability active worker1


worker1
Deleting A Node
172.31.46.126 172.31.46.127 172.31.46.128

Web Web

Docker Host Docker Host Docker Host

Manager1 Node Worker1 Node Worker2 Node

docker node update --availability drain worker2


worker2
Deleting A Node
172.31.46.126 172.31.46.127

Web Web
172.31.46.128

Docker Host Docker Host

Manager1 Node Worker1 Node Docker Host

Worker2 Node

docker node update --availability drain worker2 docker swarm leave


worker2
Deleting A Node
172.31.46.126 172.31.46.127

Web Web
172.31.46.128

Docker Host Docker Host

Manager1 Node Worker1 Node Docker Host

Worker2 Node

docker node update --availability drain worker2 docker swarm leave


worker2 Node left the swarm.
Manager Nodes
Docker Swarm
Manager nodes
Swarm Manager Swarm Manager Swarm Manager

Leader
Docker Host Docker Host Docker Host

Worker Worker Worker Worker

Docker Host Docker Host Docker Host Docker Host


Distributed consensus - RAFT

(diagram: the leader manager writes each instruction to its Raft DB and replicates it to the Raft DB on the other managers)
Quorum
How many Manager nodes?
• Docker Recommends – 7 Managers
• No limit on Managers

Managers   Majority   Fault Tolerance
1          1          0
2          2          0
3          2          1
4          3          1
5          3          2
6          4          2
7          4          3

Quorum of N = N/2 + 1 (rounded down)          e.g. Quorum of 5 = 5/2 + 1 = 3.5, rounded down to 3
Fault Tolerance of N = (N-1)/2 (rounded down)
Odd or even?

Managers   Majority   Fault Tolerance
1          1          0
2          2          0
3          2          1
4          3          1
5          3          2
6          4          2
7          4          3
Distributing Managers

(diagram: manager nodes distributed across Site A, Site B and Site C)

Managers   Majority   Fault Tolerance
1          1          0
2          2          0
3          2          1
4          3          1
5          3          2
6          4          2
7          4          3
Distributing Managers

Managers   Distribution across 3 sites
7          3-2-2
5          2-2-1
3          1-1-1

Managers   Majority   Fault Tolerance
1          1          0
2          2          0
3          2          1
4          3          1
5          3          2
6          4          2
7          4          3

(diagram: manager nodes distributed across Site A, Site B and Site C)
What happens when it fails?

Worker Worker Worker Worker Worker

Web Web Web Web Web

Docker Host Docker Host Docker Host Docker Host Docker Host

docker node promote

docker swarm init --force-new-cluster


Locking your
swarm cluster
Distributed consensus - RAFT

(diagram: the leader manager replicates each instruction to the Raft DB on every manager)
Lock your Swarm Cluster

docker swarm init --autolock=true

docker swarm update --autolock=true


Swarm updated.
To unlock a swarm manager after it restarts, run the `docker swarm unlock`
command and provide the following key:

SWMKEY-1-7K9wg5n85QeC4Zh7rZ0vSV0b5MteDsUvpVhG/lQnbl0

Please remember to store this key in a password manager, since without it you
will not be able to restart the manager.
Unlock and Join back to Swarm Cluster
docker node ls
Error response from daemon: Swarm is encrypted and needs to be unlocked before it can be used. Please use "docker swarm
unlock" to unlock it.

docker swarm unlock


Please enter unlock key: SWMKEY-1-7K9wg5n85QeC4Zh7rZ0vSV0b5MteDsUvpVhG/lQnbl0
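
From any unlocked manager, the current unlock key can be viewed, rotated or autolock disabled again; a minimal sketch:

docker swarm unlock-key              # print the current unlock key
docker swarm unlock-key --rotate     # generate a new key (store the new key safely)
docker swarm update --autolock=false # turn autolock off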
Swarm Services
Docker service

docker run httpd docker service create --replicas=3 httpd

httpd httpd httpd


httpd

Docker Host Worker Node Worker Node Worker Node

Docker Swarm
Tasks
docker service create --replicas=3 httpd

Orchestrator

Scheduler
Manager Node

Task Task Task

Web Server Web Server Web Server

Worker Node Worker Node Worker Node


Docker Swarm
Tasks docker service create --replicas=3 httpd

API

Orchestrator

Allocator

Dispatcher

Scheduler
Manager Node

Task Task Task

Web Server Web Server Web Server

Worker Node Worker Node Worker Node


Docker Swarm
Service Creation
docker service create --name=firstservice -p 80:80 httpd:alpine
3zhe91mns5vzi6dyyqhld177c
overall progress: 1 out of 1 tasks
1/1: running [==================================================>]
verify: Service converged

docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
3zhe91mns5vz firstservice replicated 1/1 httpd:alpine *:80->80/tcp

docker service ps firstservice


ID NAME IMAGE NODE DESIRED STATE CURRENT STATE PORTS
cfxpavgps2cy firstservice.1 httpd:alpine worker1 Running Running 2 minutes ago
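
The replica count of an existing service can be changed at any time; either form below works against the service created above:

docker service scale firstservice=3
docker service update --replicas=3 firstservice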
Service Inspect
docker service inspect firstservice --pretty
ID: 3zhe91mns5vzi6dyyqhld177c
Name: firstservice
Service Mode: Replicated
Replicas: 1
Placement:
UpdateConfig:
Parallelism: 1
On failure: pause
Monitoring Period: 5s
Max failure ratio: 0
Update order: stop-first
RollbackConfig:
Parallelism: 1
On failure: pause
Monitoring Period: 5s
Max failure ratio: 0
Rollback order: stop-first
ContainerSpec:
Image: httpd:alpine@sha256:30a98fa70cb11a4b388328c8512c5cd2528b3c0bd4c4f02def164f165cbb153e
Init: false
Resources:
Endpoint Mode: vip
Ports:
PublishedPort = 80
Protocol = tcp
TargetPort = 80
PublishMode = ingress
Service Logs
docker service logs firstservice
firstservice.1.cfxpavgps2cy@worker1 | AH00557: httpd: apr_sockaddr_info_get() failed for
06235d80b97e
firstservice.1.cfxpavgps2cy@worker1 | AH00558: httpd: Could not reliably determine the
server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally
to suppress this message
firstservice.1.cfxpavgps2cy@worker1 | AH00557: httpd: apr_sockaddr_info_get() failed for
06235d80b97e
firstservice.1.cfxpavgps2cy@worker1 | AH00558: httpd: Could not reliably determine the
server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally
to suppress this message
firstservice.1.cfxpavgps2cy@worker1 | [Fri Apr 24 18:55:56.440200 2020] [mpm_event:notice]
[pid 1:tid 139963811605832] AH00489: Apache/2.4.43 (Unix) configured -- resuming normal
operations
firstservice.1.cfxpavgps2cy@worker1 | [Fri Apr 24 18:55:56.440244 2020] [core:notice] [pid
1:tid 139963811605832] AH00094: Command line: 'httpd -D FOREGROUND'
firstservice.1.cfxpavgps2cy@worker1 | 10.0.0.7 - - [24/Apr/2020:18:56:10 +0000] "POST /cgi-
bin/mainfunction.cgi HTTP/1.1" 400 226
firstservice.1.cfxpavgps2cy@worker1 | 10.0.0.4 - - [24/Apr/2020:19:00:00 +0000] "POST /cgi-
bin/mainfunction.cgi HTTP/1.1" 400 226
Delete a Service
docker service rm firstservice
firstservice
Rolling Updates & Rollbacks
Docker Service

Master Node Worker Node Worker Node Worker Node

Docker Swarm
Docker Service
docker service create -p 80:80 web

web

Worker Node Worker Node Worker Node


Docker Service – Scale up
docker service create -p 80:80 web

docker service update --replicas=3 -p 80:80 web

web web web

Worker Node Worker Node Worker Node


Docker Service – Scale up
docker service create -p 80:80 web

docker service update --replicas=3 -p 80:80 web

web web web

Worker Node Worker Node Worker Node

docker service update --replicas=1 -p 80:80 web


Docker Service – Rolling Update
docker service update -p 80:80 --image=web:2 web

docker service update -p 80:80 --update-delay 60s --image=web:3 web

web:3
web:2
web web:2
web:3
web web:2
web:3
web

Worker Node Worker Node Worker Node


Docker Service – Rolling Update
docker service update -p 80:80 --image=web:2 web

docker service update -p 80:80 --update-delay 60s --image=web:3 web

web:2
web web:2
web web:2
web

web:2
web web:2
web web:2
web

web:2
web web:2
web web:2
web

Worker Node Worker Node Worker Node

docker service update -p 80:80 --update-parallelism 3 --image=web:2 web


Docker Service – Rolling Update
docker service inspect web
ID: y1k8vhoyqxulgthxrkph7xtug
Name: web
Service Mode: Replicated
Replicas: 5 web:2
web web:2
web web:2
web
Placement:
UpdateConfig: web:2
web web:2
web web:2
web
Parallelism: 3
Delay: 60s
On failure: pause web:2
web web:2
web web:2
web
Monitoring Period: 5s
Max failure ratio: 0
Update order: stop-first
RollbackConfig: Worker Node Worker Node Worker Node
Parallelism: 1
On failure: pause
Monitoring Period: 5s
Max failure ratio: 0
Rollback order: stop-first
ContainerSpec:
Image: web:2…
Init: false
Resources:
Endpoint Mode: vip
Docker Service – Rolling Update
docker service inspect web
ID: y1k8vhoyqxulgthxrkph7xtug
Name: web
Service Mode: Replicated
Replicas: 5 web:2
web web:2
web web:2
web
Placement:
UpdateConfig: web:2
web web:2
web web:2
web
Parallelism: 3
Delay: 60s
On failure: pause web web web
Monitoring Period: 5s
Max failure ratio: 0
Update order: stop-first
RollbackConfig: Worker Node Worker Node Worker Node
Parallelism: 1
On failure: pause
Monitoring Period: 5s
Max failure ratio: 0 docker service update -p 80:80 \
Rollback order: stop-first --update-failure-action pause|continue|rollback \
ContainerSpec:
--image=web:2 web
Image: web:2…
Init: false
Resources:
Endpoint Mode: vip
Docker Service – Rollback
docker service update --rollback web

web:2
web web:2
web web:2
web

web
web:2 web
web:2 web
web:2

web
web:2 web
web:2 web
web:2

Worker Node Worker Node Worker Node


Replicas vs Global
Service Types
Global vs Replicated Services
docker service create --replicas=5 web

web web web

web web

Worker Node Worker Node Worker Node

docker service inspect web --pretty | grep -i "service mode"


Service Mode: Replicated
Global vs Replicated Services
docker service create --mode=global agent

agent agent agent agent

Worker Node Worker Node Worker Node Worker Node


Placement
Swarm Service
Batch Realtime
Web Servers
Processing analytics

Worker1 Node Worker2 Node Worker3 Node


Web Servers Batch Realtime analytics
Processing

Worker1 Node Worker2 Node Worker3 Node


Web Servers
Batch Realtime analytics
Processing

Worker1 Node Worker2 Node Worker3 Node


Labels & Constraints
Web Servers Batch Realtime analytics
Processing

type=cpu-optimized type=memory-optimized type=gp

Worker1 Node Worker2 Node Worker3 Node

docker node update --label-add type=cpu-optimized worker1 docker node inspect worker1 --pretty
ID: 7t1vexyw8semg7z277mhliouv
docker node update --label-add type=memory-optimized worker2 Labels:
- type=cpu-optimized
docker node update --label-add type=gp worker3 Hostname: worker1
Joined at: 2020-04-24 11:21:42.05927
Status:
Labels & Constraints
Web Servers Batch Realtime analytics
Processing

type=cpu-optimized type=memory-optimized type=gp

Worker1 Node Worker2 Node Worker3 Node

docker service create --constraint=node.labels.type==cpu-optimized batch-processing


Labels & Constraints
Web Servers
Realtime analytics

Batch
Processing

type=cpu-optimized type=memory-optimized type=gp

Worker1 Node Worker2 Node Worker3 Node

docker service create --constraint=node.labels.type==cpu-optimized batch-processing

docker service create --constraint=node.labels.type==memory-optimized realtime-analytics


Labels & Constraints
Web Servers

Batch
Processing

Realtime analytics

type=cpu-optimized type=memory-optimized type=gp

Worker1 Node Worker2 Node Worker3 Node

docker service create --constraint=node.labels.type==cpu-optimized batch-processing

docker service create --constraint=node.labels.type==memory-optimized realtime-analytics


Labels & Constraints
docker service create --constraint=node.labels.type==cpu-optimized batch-processing

docker service create --constraint=node.labels.type==memory-optimized realtime-analytics

docker service create --constraint=node.labels.type!=memory-optimized web

docker service create --constraint=node.role==worker web
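
Labels can also be removed again, and several constraints can be combined on one service; a short sketch (the service name web is illustrative):

docker node update --label-rm type worker1
docker service create --constraint=node.role==worker --constraint=node.labels.type==gp web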


Docker
Overlay
Networks
Default networks
Bridge none host

docker run ubuntu docker run --network=none ubuntu docker run --network=host ubuntu

5000 5000
Web Web
Web Web Container Container
Container Container
172.17.0.2 172.17.0.3
172.17.0.1
docker0
Web
Container
172.17.0.4 172.17.0.5

Web Web
Container Container

Docker Host Docker Host Docker Host


Overlay network

Web Web Web Web Web Web


Container Container Container Container Container Container
172.17.0.2 172.17.0.3 172.17.0.2 172.17.0.3 172.17.0.2 172.17.0.3
172.17.0.1 172.17.0.1 172.17.0.1
docker0 docker0 docker0

Overlay Network
10.0.0.0

Docker Host Docker Host Docker Host


Ingress network
http://192.168.1.5:80

docker run \ 80
docker network ls
-p 80:5000 my-web-server
NETWORK ID NAME DRIVER
Load Balancer 68abeefb1f2e bridge bridge
5bab4adc7d02 host host
docker service create \ e43bd489dd57 none null
--replicas=2 \ mevcdb5b40zz ingress overlay
-p 80:5000 \
my-web-server 5000 5000

Web Web
Container Container
172.17.0.2 172.17.0.3

docker0
172.17.0.1

Docker Host
Docker Swarm
Ingress network

80 80 80

Load Balancer Load Balancer Load Balancer

Routing Mesh

5000 5000

Web Web
Container Container

Docker Host Docker Host Docker Host


Docker Swarm
Default Networks

Web Web Web


Container Container Container

Docker Host Docker Host Docker Host


Docker Swarm
Overlay Network
docker network ls
NETWORK ID NAME DRIVER SCOPE
68abeefb1f2e bridge bridge local
5bab4adc7d02 host host local
e43bd489dd57 none null local
mevcdb5b40zz ingress overlay swarm
c8fb2c361202 docker_gwbridge bridge local

docker network create --driver overlay my-overlay-network

docker network create --driver overlay --subnet 10.15.0.0/16 my-overlay-network

docker network create --driver overlay --attachable my-overlay-network

docker network create --driver overlay --opt encrypted my-overlay-network

docker service create --network my-overlay-network my-web-service
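
If the overlay network was created with --attachable, a standalone container (not a swarm service) can also join it; a minimal sketch, where standalone-web is just an illustrative name:

docker container run -d --name standalone-web --network my-overlay-network nginx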


Overlay Network Deletion
docker network rm my-overlay-network
my-overlay-network

docker network prune


Ports
Port Description
TCP 2377 Cluster Management Communications
TCP and UDP 7946 Communication among nodes/Container
Network Discovery
UDP 4789 Overlay network traffic
Publishing Ports
docker service create -p 80:5000 my-web-server

docker service create --publish published=80,target=5000 my-web-server

docker service create -p 80:5000/udp my-web-server

docker service create --publish published=80,target=5000,protocol=udp my-web-server
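
The long --publish syntax also accepts a mode; mode=host bypasses the ingress routing mesh and publishes the port directly on the node where each task runs (a short sketch):

docker service create --publish published=80,target=5000,mode=host my-web-server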


Default MACVLAN networks
Web Container
Web Container
eth0 eth0

MACVLAN

Docker
ETH0 Host
Interface

PHYSICAL NETWORK

docker network create --driver macvlan -o parent=eth0 my-macvlan-network


bridge: traffic goes through a physical device on the host
802.1q trunk bridge: traffic goes through an 802.1q sub-interface, which allows control over routing and filtering at a more granular level
Summary
Type Use Case

None To disable all network. This is not available for swarm services

Host To remove network isolation. Container uses host’s network.

Bridge For multiple containers to communicate on the same docker host.

Overlay Networks For multiple containers to communicate on different docker hosts.

Macvlan Legacy applications that need containers to look like physical hosts on
network with unique MAC Address. Used for multiple containers to
communicate across different docker hosts. L3 Bridge
IPVLan Used for multiple containers to communicate across different docker hosts.
L2 Bridge.
References
• https://docs.docker.com/network/overlay/
• https://docs.docker.com/engine/swarm/ingress/
Service Discovery
Docker Swarm
Service Discovery - DNS

Host IP
mysql.connect( 172.17.0.3
mysql ) web mysql web 192.168.10.2
Container Container
192.168.10.2 192.168.10.3 mysql 192.168.10.2
Docker
bridge

DNS
Server

127.0.0.11

Docker Host
docker exec -it web cat /etc/resolv.conf
search ec2.internal
nameserver 127.0.0.11
options ndots:0
Service Discovery - DNS
docker service create --name=api-server --replicas=2 api-server

docker service create --name=web web

api-server

Web API API


Container Container Container

Docker Host Docker Host Docker Host


Docker Swarm
Service Discovery - DNS
docker network create --driver=overlay app-network

docker service create --name=api-server --replicas=2 api-server

docker service create --name=web web

api-server

Web API API


Container Container Container
Service Discovery - DNS
docker network create --driver=overlay app-network

docker service create --name=api-server --replicas=2 --network=app-network api-server

docker service create --name=web --network=app-network web

api-server

Web API API


Container Container Container
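
Name resolution can be verified from inside the web task, assuming a DNS utility such as nslookup is available in that image (the container ID below is a placeholder):

docker exec -it <web-container-id> nslookup api-server
docker exec -it <web-container-id> cat /etc/resolv.conf   # should point at 127.0.0.11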
Docker Config
Docker Volume
docker run nginx

NGINX
Container

nginx.conf

nginx.conf

Docker Host
Docker Volume
docker run -v /tmp/nginx.conf:/etc/nginx/nginx.conf nginx

NGINX
Container

nginx.conf

nginx.conf

Docker Host
Docker Volume
docker run -v /tmp/nginx.conf:/etc/nginx/nginx.conf nginx

NGINX
Container

nginx.conf

nginx.conf

Docker Host Docker Host Docker Host Docker Host


Docker Swarm
Docker Volume
docker service create --replicas=4 -v /tmp/nginx.conf:/etc/nginx/nginx.conf nginx

NGINX NGINX NGINX NGINX


Container Container Container
Container

nginx.conf nginx.conf nginx.conf nginx.conf

??? ??? ???


nginx.conf

Docker Host Docker Host Docker Host Docker Host


Docker Swarm
Docker Configs
docker config create nginx-conf /tmp/nginx.conf

docker service create --replicas=4 --config src=nginx-conf,target="/etc/nginx/nginx.conf" nginx

NGINX NGINX NGINX NGINX


Container Container Container Container

(each replica mounts the config at the target path /etc/nginx/nginx.conf; without a target it would appear at /nginx-conf by default)

nginx-conf nginx.conf

Docker Host Docker Host Docker Host Docker Host


Docker Swarm
Docker Configs
docker config create nginx-conf /tmp/nginx.conf

docker service create --replicas=4 --config src=nginx-conf,target="/etc/nginx/nginx.conf" nginx

docker service update --config-rm nginx-conf nginx

docker config rm nginx-conf

docker config create nginx-conf-new /tmp/nginx-new.conf

docker service update --config-rm nginx-conf --config-add nginx-conf-new nginx
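
Configs can be listed and inspected like any other Docker object; a short sketch using the nginx-conf config from above:

docker config ls
docker config inspect --pretty nginx-conf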


Stack
Docker Swarm
Docker Compose
docker run simple-webapp

docker run mongodb

docker run redis:alpine

docker-compose.yml
services:
  web:
    image: "simple-webapp"
  database:
    image: "mongodb"
  messaging:
    image: "redis:alpine"

docker-compose up
Docker Compose
docker run simple-webapp docker service create simple-webapp

docker run mongodb docker service create mongodb

docker run redis:alpine docker service create redis

docker-compose.yml (the same file drives both workflows)
services:
  web:
    image: "simple-webapp"
  database:
    image: "mongodb"
  messaging:
    image: "redis:alpine"

docker-compose up                    docker stack deploy --compose-file docker-compose.yml


STACK

Container Container Container


Service

Stack
Container
Service
Service

Container Container Container


Service

Stack
Docker Compose
docker-compose.yml
version: 3
services:
redis:
image: redis

db:
image: postgres:9.4

vote:
image: voting-app

result:
image: result

worker:
image: worker

docker-compose up
Docker Host
Docker Compose
docker-compose.yml
version: 3
services:
redis:
image: redis

db:
image: postgres:9.4

vote:
image: voting-app

result:
image: result

worker:
image: worker

docker-compose up
Manager Node Worker Node Docker Host
Docker Swarm
Docker Compose
docker-compose.yml
version: 3
services:
redis:
image: redis
deploy:
replicas: 1
db:
image: postgres:9.4
deploy:
replicas: 1
vote:
image: voting-app
deploy:
replicas: 2

result:
image: result
deploy:
replicas: 1
worker:
image: worker Manager Node Worker Node Docker Host
Docker Swarm
docker stack deploy --compose-file docker-compose.yml
Docker Compose
docker-compose.yml
version: 3
services:
redis:
image: redis
deploy:
replicas: 1
db:
image: postgres:9.4
deploy:
replicas: 1
placement:
constraints:
- node.role == manager
vote:
image: voting-app
deploy:
replicas: 2

result:
image: result
deploy:
replicas: 1 Manager Node Worker Node Docker Host
worker: Docker Swarm
image: worker
Docker Compose
docker-compose.yml
version: 3
services:
redis:
image: redis
deploy:
replicas: 1
db:
image: postgres:9.4
deploy:
replicas: 1
placement:
constraints:
- node.role == manager
vote:
image: voting-app
deploy:
replicas: 2
resources:
limits:
cpus: 0.01
memory: 50M Manager Node Worker Node Docker Host
Docker Swarm
Docker Compose
docker-compose.yml
version: 3
services:
redis:
image: redis
deploy:
replicas: 1
db:
image: postgres:9.4
deploy:
replicas: 1
placement:
constraints:
- node.role == manager
vote:
image: voting-app

healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost"]
  interval: 1m30s
  timeout: 10s
  retries: 3
  start_period: 40s
Manager Node Worker Node Docker Host
deploy: Docker Swarm
replicas: 2
Stack Commands

docker stack deploy

docker stack ls

docker stack services

docker stack ps

docker stack rm
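
In practice each of these commands takes a stack name; a minimal sketch, where vote is just an illustrative stack name:

docker stack deploy --compose-file docker-compose.yml vote
docker stack ls
docker stack services vote
docker stack ps vote
docker stack rm vote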
Curriculum
• Kubernetes Architecture
• PODs
• ReplicaSets
Docker Engine • Deployments
• Services
• Commands & Arguments
Docker Swarm • Environment Variables
• ConfigMaps
• Secrets
Kubernetes • Readiness Probes
• Liveness Probes
• Network Policies
Docker Enterprise • Volume driver plugins
• Volumes in Kubernetes
• PVs, PVCs, Storage Classes
Kubernetes
Essentials
Kubernetes

Docker Security
Security

Namespaces Cgroups Kernel Other Kernel


Capabilities Security
- Application
Armor
- SeLinux
- GRSEC
- Seccomp
Security

Secure Docker Encrypted Overlay Docker Content


Swarm Network Trust and Signed RBAC
Image

Image Scanning
Securing the Daemon
Secure Docker Server

• Delete existing containers hosting applications


• Delete volumes storing data
• Run containers to run their applications (bit coin mining)
• Gain root access to the host system by running a privileged /var/run/docker.sock Unix Socket
container, which we will see in a bit.
• Target the other systems in the network and network itself

Docker CLI
/etc/docker/daemon.json
{
"hosts": ["tcp://192.168.1.10:2375"]

• Disable Password based authentication


• Enable SSH key based authentication
} • Determine users who needs access to the server
TLS Encryption

192.168.1.10:2375

/var/run/docker.sock Unix Socket

Docker CLI
/etc/docker/daemon.json
{
"hosts": ["tcp://192.168.1.10:2375"]

} cacert.pem server.pem serverkey.pem


CA Server
TLS Encryption
Docker CLI
192.168.1.10:2376
192.168.1.10:2375
export DOCKER_TLS=true

export DOCKER_HOST="tcp://192.168.1.10:2376"
/var/run/docker.sock Unix Socket
docker --tls ps

server.pem serverkey.pem

Docker CLI
/etc/docker/daemon.json
{
"hosts":
"hosts": ["tcp://192.168.1.10:2375"]
["tcp://192.168.1.10:2376"]

"tls": true,
"tlscert": "/var/docker/server.pem",
"tlskey": "/var/docker/serverkey.pem"

} cacert.pem
CA Server
TLS Authentication
Docker CLI
192.168.1.10:2375
192.168.1.10:2376
export DOCKER_TLS=true

export DOCKER_HOST="tcp://192.168.1.10:2376"
/var/run/docker.sock Unix Socket
docker ps

cacert.pem server.pem serverkey.pem

Docker CLI
/etc/docker/daemon.json
{
"hosts": ["tcp://192.168.1.10:2376"]

"tls": true,
"tlscert": "/var/docker/server.pem",
"tlskey": "/var/docker/serverkey.pem",
"tlsverify": true, client.pem clientkey.pem cacert.pem
"tlscacert": "/var/docker/caserver.pem" CA Server
}
Authentication
Docker CLI
192.168.1.10:2375
192.168.1.10:2376
export DOCKER_TLS_VERIFY=true

export DOCKER_HOST="tcp://192.168.1.10:2376"
/var/run/docker.sock Unix Socket
docker --tlscert=<> --tlskey=<> --tlscacert=<> ps

client.pem clientkey.pem cacert.pem server.pem serverkey.pem

cacert.pem Docker CLI


/etc/docker/daemon.json
~/.docker
{
"hosts": ["tcp://192.168.1.10:2376"]

"tls": true,
"tlscert": "/var/docker/server.pem",
"tlskey": "/var/docker/serverkey.pem",
"tlsverify": true,
cacert.pem
"tlscacert": "/var/docker/caserver.pem" CA Server
}
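
On the client side, the same settings can be supplied through environment variables instead of flags; a minimal sketch, assuming the client certificates are copied into ~/.docker with the names the CLI looks for (ca.pem, cert.pem, key.pem):

export DOCKER_HOST=tcp://192.168.1.10:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker    # contains ca.pem, cert.pem and key.pem
docker ps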
Summary
/etc/docker/daemon.json /etc/docker/daemon.json
{ {
"hosts": ["tcp://192.168.1.10:2376"] "hosts": ["tcp://192.168.1.10:2376"]
"tls": true, "tlscert": "/var/docker/server.pem",
"tlscert": "/var/docker/server.pem", "tlskey": "/var/docker/serverkey.pem",
"tlskey": "/var/docker/serverkey.pem" "tlsverify": true,
"tlscacert": "/var/docker/caserver.pem"
}
}

docker --tls ps docker --tlsverify


--tlscert=<> --tlskey=<>
--tlscacert=<> ps

Without Authentication With Authentication


References
https://docs.docker.com/engine/security/https/
Namespaces
Containerization
Process ID Unix Timesharing

Network Namespace Mount

InterProcess
Namespace - PID (On the container)
Linux System ps aux
PID : 1 USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 4528 828 ? Ss 03:06 0:00 nginx

PID : 2
(On the host)
PID : 3
Child System (Container) ps aux

PID : 4 USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
project 3720 0.1 0.1 95500 4916 ? R 06:06 0:00 sshd: project@p
project 3725 0.0 0.1 95196 4132 ? S 06:06 0:00 sshd: project@n
PID : 5   PID : 1
PID : 6   PID : 2

project 3727 0.2 0.1 21352 5340 pts/0 Ss 06:06 0:00 -bash
root 3802 0.0 0.0 8924 3616 ? Sl 06:06 0:00 docker-containerd-shim -namespace m
root 3816 1.0 0.0 4528 828 ? Ss 06:06 0:00 nginx
CGroups
CGroups

Process 1 Process 1 Process 1 Process 1

CPU

MEM

NET
Resource
Constraints
Container Memory – Limit and Reservations

CPU

MEM

Container - webapp

Docker Host
Linux – CPU Sharing

CPU

1024 512
Process 1 Process 2

Docker Host
Linux – CPU Sharing

CPU

Completely Fair
Realtime Scheduler
Scheduler (CFS) 512
Process 2

Docker Host
Containers – CPU Shares

CPU

Completely Fair 1024 1024


512 1024
Scheduler (CFS) Process 1 Process 2 Process 3

Container – webapp1 Container – webapp2 Container – webapp3

Control Groups (CGroups)


Docker Host

docker container run --cpu-shares=512 webapp4


Containers – CPU Sets

CPU

Completely Fair 1024 1024 1024 512


Scheduler (CFS) Process 1 Process 2 Process 3 Process 4

Container – webapp1 Container – webapp2 Container – webapp3 Container – webapp4

Control Groups (CGroups)


Docker Host

docker container run --cpu-shares=512 webapp4


Containers – CPU Sets

CPU -0 CPU-1 CPU -2 CPU-3

Completely Fair 1024 1024 1024 512


Scheduler (CFS) Process 1 Process 2 Process 3 Process 4

Container – webapp1 Container – webapp2 Container – webapp3 Container – webapp4

Control Groups (CGroups)


Docker Host

docker container run --cpuset-cpus=0-1 webapp1 docker container run --cpuset-cpus=2 webapp3

docker container run --cpuset-cpus=0-1 webapp2 docker container run --cpuset-cpus=2 webapp4
Containers – CPU Count

CPU -0 CPU-1 CPU -2 CPU-3

Completely Fair 1024 1024 1024 512


Scheduler (CFS) Process 1 Process 2 Process 3 Process 4

Container – webapp1 Container – webapp2 Container – webapp3 Container – webapp4

Control Groups (CGroups)


Docker Host

docker container run --cpus=2.5 webapp4

docker container update --cpus=0.5 webapp4


Containers – CPU Sharing

CPU

Container - webapp

Docker Host
Resource
Constraints-
Memory
Linux – Memory

MEM SWAP

OOM (Out of Memory)

Container - webapp

Docker Host

docker container run --memory=512m webapp

docker container run --memory=512m --memory-swap=512m webapp Swap Space = 512m – 512m = 0m

docker container run --memory=512m --memory-swap=768m webapp Swap Space = 768m – 512m = 256m
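
The limits that are actually applied can be checked at runtime; a short sketch (docker stats shows live usage against the limit, and the inspect format prints the configured memory limit in bytes):

docker stats --no-stream webapp
docker container inspect --format '{{.HostConfig.Memory}}' webapp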
References
https://www.cyberark.com/resources/threat-research-blog/the-route-to-root-container-escape-using-kernel-exploitation
Curriculum

• Docker EE Introduction
• Docker Enterprise Engine Setup
• Universal Control Plane Setup
• Node Addition in UCP cluster
• Docker Trusted Registry Setup
• Deployment in Docker EE
Docker Engine • Docker EE UCP Client Bundle
• RBAC
• UCP Setting for LDAP integration
Docker Swarm • Docker EE

• Docker Trusted Registry


Kubernetes • Image Scanning
• Image Promotions
• Garbage Collection
Docker Enterprise • Docker Content Trust and Image Signing
• Docker Trusted Registry

• Backup & Disaster Recovery


Docker Enterprise
Docker EE

Community Enterprise
Edition Edition
Docker EE

Enterprise
Edition
Docker EE

Security & Access Kubernetes


Control Service
Enterprise
Trusted Registry Docker Swarm
Edition Service

Universal Control Docker Engine -


Plane Enterprise
Docker EE

Docker Trusted Registry Docker Worker Nodes

Universal Control Plane

Docker Enterprise Edition (Enterprise Engine)

Docker Certified Infrastructure


Pre-Requisites
• Linux Kernel Version 3.10 or higher for Managers
• Static IP and Persistent Host Name
• Network Connectivity Between all Servers
• Time Sync (NTP)
• User namespaces should not be configured on any node
(Currently not supported)
• Docker Engine - Enterprise
UCP - Minimum Requirements
• 8 GB of RAM for manager nodes (16GB)
• 4 GB of RAM for worker nodes
• 2 vCPUs for manager nodes (4 vCPUs)
• 10 GB of free disk space for the /var partition for manager nodes (25-100GB)
• 500 MB of free disk space for the /var partition for worker nodes
DTR - Minimum Requirements
• 16 GB of RAM
• 2 vCPUs (4 vCPUs)
• 10 GB of free disk space (100GB)
• Port 80 and 443
Docker Engine Enterprise
Docker Enterprise Engine Setup
Docker Enterprise Engine Setup
docker version
Client: Docker Engine - Enterprise
Version: 19.03.5
API version: 1.40
Go version: go1.12.12
Git commit: 2ee0c57608
Built: Wed Nov 13 07:36:57 2019
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Enterprise


Engine:
Version: 19.03.5
API version: 1.40 (minimum version 1.12)
Go version: go1.12.12
Git commit: 2ee0c57608
Built: Wed Nov 13 07:35:23 2019
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.2.10
GitCommit: b34a5c8af56e510852c35414db4c1f4fa6172339
runc:
Version: 1.0.0-rc8+dev
GitCommit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
docker-init:
Version: 0.18.0
GitCommit: fec3683
Note!

Docker Trusted Registry Mirantis Secure Registry (MSR)

Universal Control Plane Mirantis Kubernetes Engine (MKE)

Docker Enterprise Edition (Enterprise Engine) Mirantis Container Runtime


Note!
Universal Control Plane
UCP
UCP
WEB GUI DOCKER CLI

ucp-agent ucp-agent ucp-agent

ucp-controller ucp-proxy ucp-proxy

ucp-metrics

ucp-auth-api
Type
Linux Manager,Worker
Windows Manager

Manager Node Worker Node Worker Node


Docker Swarm
UCP Setup
Make sure Docker EE is up and running

Run a container with the docker/ucp image

Set the Admin Username and Password for UCP Console

Login into the Browser

Download and Provide the Docker EE License

Add more Managers and Workers as per requirement


Worker Node Addition
Worker Node Addition

ucp-agent

ucp-controller

ucp-metrics

ucp-auth-api

Manager Node Worker Node


Docker Swarm
Worker Node Addition

Global Service ucp-agent ucp-agent

ucp-controller ucp-proxy

ucp-metrics

ucp-auth-api

Manager Node Worker Node


Docker Swarm
Docker Trusted Registry
Docker Registry
docker pull ubuntu

docker push ubuntu

docker pull gcr.io/organization/ubuntu


Docker Trusted Registry (DTR)
Worker Node Addition

ucp-agent ucp-agent

ucp-controller ucp-proxy

ucp-metrics

ucp-auth-api

Manager Node Worker Node Worker Node


Docker Swarm
Worker Node Addition

ucp-agent ucp-agent ucp-agent

ucp-controller ucp-proxy ucp-proxy


dtr-ol
ucp-metrics
dtr-* dtr-*
ucp-auth-api

Manager Node Worker Node Worker Node


Docker Swarm
Worker Node Addition

ucp-agent ucp-agent ucp-agent

ucp-controller ucp-proxy ucp-proxy


dtr-ol
ucp-metrics
dtr-* dtr-*
ucp-auth-api

Manager Node Worker Node Worker Node S3


Docker Swarm
DTR Console
Deploying Workload on Docker EE
Deploy and Test Workload on UCP Cluster

WEB GUI DOCKER CLI


Deploy and Test Workload on UCP Cluster

WEB GUI
Deploy and Test Workload on UCP Cluster

WEB GUI
UCP Client Bundles
Deploy and Test Workload on UCP Cluster

WEB GUI DOCKER CLI


Deploy and Test Workload on UCP Cluster

DOCKER_HOST=x.x.x.x

DOCKER_CERT_PATH=/tmp/client.crt

DOCKER CLI docker ps
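
After downloading and unzipping a client bundle from the UCP web UI, the bundled env.sh script exports these variables for you; a minimal sketch, where the directory name is illustrative:

cd ucp-bundle-admin/
eval "$(<env.sh)"     # sets DOCKER_HOST, DOCKER_CERT_PATH and DOCKER_TLS_VERIFY
docker ps             # the CLI now talks to the UCP cluster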


Role Based Access Control
RBAC

Who can do what operations on which resources?


RBAC - Subject
Org

Team

Service Account
(Kubernetes)
User

Who can do what operations on which resources?


RBAC - Role
Config None
View Container
View Only
Image
Create
Network Restricted Control
Delete Secret
Scheduler
Service
Update
Volume Full Control

Who can do what operations on which resources?


RBAC – Resource Sets

Swarm Collection Namespace

Swarm Collection Namespace

Docker Swarm Cluster Kubernetes Cluster

Who can do what operations on which resources?


RBAC – Grant

Swarm Collection

User Restricted Control

Swarm Collection

Docker Swarm Cluster

Who can do what operations on which resources?


Notes

• Access Control High Level Steps:


• Configure Subjects – Users, teams, organizations, service accounts
• Configure custom roles – permissions per type of resource
• Configure resource sets – Swarm Collections or Kubernetes Namespaces
• Create Grants – Subjects + Roles + Resource Sets

• Best practice is to configure a team with the right privileges and


add/remove users to it during organizational changes

• Create Users:
• Create local users from UCP Console
• Integrate UCP with LDAP/AD
Docker Trusted Registry
Image Addressing Convention
docker.io
Docker Hub

image: docker.io/ httpd/httpd

Registry User/ Image/


Account Repository
Image Addressing Convention
docker.io (Docker Hub)          registry.company.org (Docker Trusted Registry)

image: docker.io/ httpd/httpd

Registry User/ Image/


Account Repository
Image Addressing Convention
docker.io (Docker Hub)          registry.company.org / 54.145.234.153 (Docker Trusted Registry)

image: registry.company.org/httpd/httpd   (or 54.145.234.153/httpd/httpd)

Registry User/ Image/


Account Repository
docker build . -t 54.145.234.153/company/webapp

docker tag httpd/httpd 54.145.234.153/httpd/httpd


Create new Repository
Create new Repository
Push Image

docker build . -t 54.145.234.153/yogeshraheja/kodekloud

docker push 54.145.234.153/yogeshraheja/kodekloud
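
Pushing to the trusted registry normally requires authenticating against it first; a minimal sketch:

docker login 54.145.234.153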


View Repositories
Pull Image

docker pull 54.145.234.153/yogeshraheja/kodekloud


Create new repository on Push
Docker Trusted Registry
DTR Security
DTR Users
DTR Users
DTR Organizations & Teams
DTR Team Permissions
Repository operation read read-write admin
View/ browse x x x
Pull x x x
Push x x
Start a scan x x
Delete tags x x
Edit description x
Set public or private x
Manage user access x
Delete repository x
DTR
Image Scanning
Image Scanning
Image Scanning
Image Scanning
Scan Report
Summary
• Detects vulnerabilities in OS packages and libraries within images and version in which it was introduced
• Recommends fixed version

• Data about vulnerabilities are pulled either from a universal database known as the US national vulnerability
database or it can also be configured manually by uploading a file.
• Scanning can be manually trigged or automatically when an image is pushed

• The scan report reports Critical, Major or Minor categories along with the count in each

• To fix vulnerabilities check application level dependencies, upgrade packages and rebuild docker image
DTR
Image Promotion
Development Pipeline

Dev Test Stage Prod

registry.company.org/dev/app registry.company.org/test/app registry.company.org/stage/app registry.company.org/prod/app


Image Promotion

Dev Test Stage Prod

registry.company.org/dev/app registry.company.org/test/app registry.company.org/stage/app registry.company.org/prod/app


Image Promotion

Dev Test Stage Prod

registry.company.org/dev/app registry.company.org/test/app registry.company.org/stage/app registry.company.org/prod/app


Image Promotion

Dev Test Stage Prod

registry.company.org/dev/app registry.company.org/test/app registry.company.org/stage/app registry.company.org/prod/app


Image Promotion

Dev Test Stage Prod

registry.company.org/dev/app registry.company.org/test/app registry.company.org/stage/app registry.company.org/prod/app


Image Promotion
DTR
Garbage Collection
DTR Operations
Notes
• Deleting image does not delete image layers
• Does not free up space
• For this we must schedule garbage collection

• During Garbage Collection:


• DTR becomes read-only. Images can be pulled, but pushes are not allowed
• DTR identifies and marks all unused image layers
• DTR deletes the marked image layers.

• Garbage collection is a CPU intensive process


• Must be scheduled outside of business peak hours

• May be configured to run


• Until done
• For X minutes
• Never
Disaster Recovery
Docker Swarm
Backup and Restoration

web web

ucp-agent ucp-agent ucp-agent

ucp-controller ucp-proxy ucp-proxy

ucp-metrics
dtr-*
ucp-auth-api

Manager Node Worker Node Worker Node


Docker Swarm
Docker Swarm - Recovery
Quorum of 1 = 1/2 + 1 = 1.5, rounded down to 1
docker service update --force web

web web

web

Manager Node Worker Node Worker Node


Docker Swarm
Docker Swarm - Recovery
Quorum of 3 = 3/2 + 1 = 2.5, rounded down to 2
docker node promote        docker swarm init --force-new-cluster

web web

Manager Node Manager Node Manager Node Worker Node Worker Node

Docker Swarm
Docker Swarm - Recovery
Quorum of 3 = 3/2 + 1 = 2.5, rounded down to 2
docker node promote        docker swarm init --force-new-cluster

web web

Manager Node Worker Node Worker Node


Docker Swarm
Docker Swarm - Backup

web web

RAFT
DB

/var/lib/docker/swarm

/var/lib/docker /var/lib/docker /var/lib/docker

Manager Node Worker Node Worker Node


Docker Swarm
Docker Swarm - Backup

systemctl stop docker

tar cvzf /tmp/swarm-backup.tgz /var/lib/docker/swarm/


RAFT
DB
systemctl start docker

/var/lib/docker/swarm

/var/lib/docker

Manager Node

https://docs.mirantis.com/docker-enterprise/v3.0/dockeree-products/ucp/admin/disaster-recovery/backup-swarm.html
Docker Swarm - Backup
Raft keys Cluster Membership Services Networks

Configs Secrets Swarm unlock keys

RAFT
DB

docker swarm init --autolock=true /var/lib/docker/swarm

docker swarm update --autolock=true


Swarm updated. /var/lib/docker
To unlock a swarm manager after it restarts, run the `docker swarm unlock`
command and provide the following key:

SWMKEY-1-7K9wg5n85QeC4Zh7rZ0vSV0b5MteDsUvpVhG/lQnbl0

Please remember to store this key in a password manager, since without it you Manager Node
will not be able to restart the manager.
Docker Swarm - Restore

systemctl stop docker

tar xvzf /tmp/swarm-backup.tgz -C /


RAFT
DB
systemctl start docker

docker swarm init --force-new-cluster /var/lib/docker/swarm

/var/lib/docker

Manager Node
References
https://docs.mirantis.com/docker-enterprise/v3.0/dockeree-products/ucp/admin/disaster-recovery/backup-
swarm.html

https://docs.mirantis.com/docker-enterprise/v3.0/dockeree-products/ucp/admin/disaster-recovery/restore-
swarm.html
Disaster Recovery
UCP
Disaster Recovery - UCP
Services

web web
Configs

Secrets
ucp-agent ucp-agent ucp-agent

ucp-controller ucp-proxy ucp-proxy Overlay


Networks
ucp-metrics
dtr-*
ucp-auth-api

Manager Node Worker Node Worker Node


Docker Swarm
Backup - UCP
UCP
Access Control Certificates & Keys Metrics Services
Configurations

Configs
Organizations Volumes

Kubernetes Secrets
Declarative Objects
Overlay
Networks
UCP - Backup

docker container run \


--rm \
--log-driver none \
--name ucp \
--volume /var/run/docker.sock:/var/run/docker.sock \
--volume /tmp:/backup \
docker/ucp:3.2.5 backup \
--file mybackup.tar \
--passphrase "secret12chars" \
--include-logs=false

https://docs.mirantis.com/docker-enterprise/v3.0/dockeree-products/ucp/admin/disaster-recovery/backup-ucp.html
UCP - Restore
docker container run \
--rm -it \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp \
uninstall-ucp

ucp-agent ucp-agent ucp-agent

ucp-controller ucp-proxy ucp-proxy

ucp-metrics
dtr-*
ucp-auth-api
UCP - Restore
docker container run \
--rm \
--interactive \
--name ucp \
--volume /var/run/docker.sock:/var/run/docker.sock \
docker/ucp:3.2.5 restore < /tmp/mybackup.tar

ucp-agent ucp-agent ucp-agent

ucp-controller ucp-proxy ucp-proxy

ucp-metrics
dtr-*
ucp-auth-api
Notes
• One backup at a time
• UCP does not backup swarm workloads. Swarm workloads are
backed up with Swarm backup
• Cannot take a backup of a cluster that’s already crashed.
• Restore to the same version of Docker Enterprise as that of the
one that was used during backup
• Restore either to the same swarm cluster or to a Docker host and
swarm will be initialized automatically
References
https://docs.mirantis.com/docker-enterprise/v3.0/dockeree-products/ucp/admin/disaster-recovery/disaster-
recovery-ucp.html
https://docs.mirantis.com/docker-enterprise/v3.0/dockeree-products/ucp/admin/disaster-recovery/backup-ucp.html
https://docs.mirantis.com/docker-enterprise/v3.0/dockeree-products/ucp/admin/disaster-recovery/restore-ucp.html
Disaster Recovery
Docker Trusted
Registry
DTR - Backup and Restoration
Services

UCP
web web

ucp-agent ucp-agent ucp-agent


dtr-ol
ucp-controller ucp-proxy ucp-proxy

dtr-* dtr-* dtr-*

Manager Node Worker Node Worker Node


Docker Swarm
DTR - Backup
Services

UCP

Image Data
dtr-ol

dtr-* dtr-* dtr-*

Manager Node Worker Node Worker Node S3


Docker Swarm
DTR - Backup
Configurations Services

Repositories UCP
Metadata
Image Data
Access Control
dtr-ol

Notary data
dtr-* dtr-* dtr-*

Scan Results

Certificates
Manager Node Worker Node Worker Node
Docker Swarm
DTR - Backup
docker run \
docker/dtr backup \
--existing-replica-id $REPLICA_ID > dtr-metadata-backup.tar

dtr-ol

dtr-* dtr-* dtr-*


DTR - Backup
docker run --rm \
--env UCP_PASSWORD=$UCP_PASSWORD \
docker/dtr backup \
--ucp-username $UCP_ADMIN \
--ucp-url $UCP_URL \
--ucp-ca "$(curl https://${UCP_URL}/ca)" \
--existing-replica-id $REPLICA_ID > dtr-metadata-backup.tar

https://docs.mirantis.com/docker-enterprise/v3.0/dockeree-products/dtr/dtr-admin/disaster-recovery/create-a-backup.html

dtr-ol

dtr-* dtr-* dtr-*


DTR - Restore
docker run -it --rm \
docker/dtr destroy \
--ucp-insecure-tls

docker run -i --rm \


docker/dtr restore < dtr-metadata-backup.tar

https://docs.mirantis.com/docker-enterprise/v3.0/dockeree-products/dtr/dtr-admin/disaster-recovery/restore-from-backup.html

dtr-ol

S3

dtr-* dtr-* dtr-*


Backup and Restoration

web web

ucp-agent ucp-agent ucp-agent

ucp-controller ucp-proxy ucp-proxy

ucp-metrics
dtr-*
ucp-auth-api

Manager Node Worker Node Worker Node


Docker Swarm
Sample - Commands
docker run nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
fc7181108d40: Already exists
d2e987ca2267: Pull complete
0b760b431b11: Pull complete
Digest:
sha256:96fb261b66270b900ea5a2c17a26abbfabe95506e73c3a3c65869a6dbe83223a
Status: Downloaded newer image for nginx:latest
Sample - Containers
docker run ubuntu

docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS

docker ps -a

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS


45aacca36850 ubuntu "/bin/bash" 43 seconds ago Exited (0) 41 seconds ago
Sample – Highlighting command/output
docker run redis
Using default tag: latest
latest: Pulling from library/redis
f5d23c7fed46: Pull complete
Status: Downloaded newer image for redis:latest

1:C 31 Jul 2019 09:02:32.624 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo


1:C 31 Jul 2019 09:02:32.624 # Redis version=5.0.5, bits=64, commit=00000000, modified=0, pid=1, just started
1:M 31 Jul 2019 09:02:32.626 # Server initialized

docker run redis:4.0 TAG


Unable to find image 'redis:4.0' locally
4.0: Pulling from library/redis
e44f086c03a2: Pull complete
Status: Downloaded newer image for redis:4.0

1:C 31 Jul 09:02:56.527 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo


1:C 31 Jul 09:02:56.527 # Redis version=4.0.14, bits=64, commit=00000000, modified=0, pid=1, just started
1:M 31 Jul 09:02:56.530 # Server initialized
Sample – Port Mappings
docker run kodekloud/webapp
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)

(diagram: Docker Host with IP 192.168.1.5; each web app container listens on port 5000 on an internal IP such as 172.17.0.2, 172.17.0.3 or 172.17.0.4, so http://192.168.1.5:80 reaches the app while http://172.17.0.2:5000 is only reachable internally; host ports 80, 8000 and 8001 map to web app containers and host ports 3306 and 8306 map to MySQL containers)

docker run -p 80:5000 kodekloud/simple-webapp
docker run -p 8000:5000 kodekloud/simple-webapp
docker run -p 8001:5000 kodekloud/simple-webapp
docker run -p 3306:3306 mysql
docker run -p 8306:3306 mysql
Inspect Container
docker inspect blissful_hopper
[
{
"Id": "35505f7810d17291261a43391d4b6c0846594d415ce4f4d0a6ffbf9cc5109048",
"Name": "/blissful_hopper",
"Path": "python",
"Args": [
"app.py"
],
"State": {
"Status": "running",
"Running": true,
},

"Mounts": [],
"Config": {
"Entrypoint": [
"python",
"app.py"
],
},
"NetworkSettings": {..}
}
]
Sample – Application Code
app.py

import os
from flask import Flask, render_template

app = Flask(__name__)


color = "red"

@app.route("/")
def main():
print(color)
return render_template('hello.html', color=color)

if __name__ == "__main__":
app.run(host="0.0.0.0", port="8080")

python app.py