Docker Certified Associate
Mannambeth
Yogesh Raheja
Objectives

MCQ vs DOMC
In the traditional MCQ format, all options are visible at once; you pick one and Submit:
o overlay
o bridge
o none
o host
In the DOMC (Discrete Option Multiple Choice) format, options are shown one at a time and you answer Yes or No to each:
o overlay — Yes / No
o bridge — Yes / No
o none — Yes / No
o host — Yes / No
Frequently Asked Questions
Q. Can the exam be taken from home or at a testing center?
A. From home (remotely proctored).
Q. What is the passing score?
A. Not published (N/A).
Register: https://training.mirantis.com/dca-certification-exam/
Curriculum

Exam domains:
• Orchestration
• Installation and Configuration
• Image Management
• Storage and Volumes
• Networking
• Security

Networking:
• Container Network Model
• Built-in Network Drivers
• Traffic flow between Docker Engine, Registry & UCP
• Docker Bridge Network
• Publish Ports
• External DNS
• Deploy a service on a docker overlay network
• Troubleshoot container and engine logs
• Kubernetes traffic using ClusterIP and NodePort Services
• Kubernetes Network Policies

Security:
• Image signing
• Docker Engine Security
• Docker Swarm Security
• Identity Roles
• UCP Workers vs Managers
• Security scan in images
• Docker Content Trust
• RBAC with UCP
• UCP with LDAP/AD
• UCP Client Bundles

Orchestration:
• Docker Swarm: Setup Swarm Cluster, Quorum in a Swarm Cluster, Stacks in Swarm, Scale replicas up and down, Networks and Publish Ports, Replicated vs Global Services, Placements, Healthchecks
• Kubernetes: PODs, Deployments, Services, ConfigMaps, Secrets, Liveness and Readiness Probes

Course sections: Installation and Configuration, Networking, Kubernetes, Security, Docker Enterprise, Orchestration
Pre-Requisite
Course & Exam
Tips
Learning Format
Research Questions
• Open Book
• Refer to Lecture and Documentation
• Research
• Get familiar with the MCQ format
Notes
Docker Engine Architecture

2013: Docker CLI → REST API → Docker Daemon → LXC → namespaces, cgroups
2014 (v0.9): LXC is replaced by Docker's own libcontainer: Docker CLI → REST API → Docker Daemon → libcontainer → namespaces, cgroups
OCI: the Open Container Initiative standardizes the runtime-spec and the image-spec
(https://github.com/opencontainers/runtime-spec/blob/master/runtime.md)
2016 (v1.11): the monolithic daemon is split along the OCI specs. The Docker Daemon manages Images, Volumes and Networks; containerd (image-spec) manages containers; containerd-shim and runC (runtime-spec) run containers on top of libcontainer, namespaces and cgroups.
Docker Objects
Images Networks
Containers Volumes
Registry
Running a container (e.g. HTTPD) end to end:

docker container run -it ubuntu

Docker CLI → REST API → containerd (manage containers) → containerd-shim → runC (run containers) → libcontainer → namespaces, cgroups
Docker Engine Installation

docker --version
Docker version 19.03.5, build 633a0ea

docker version
Client: Docker Engine - Community
 Version: 19.03.5
 API version: 1.40
 Go version: go1.12.12
 Git commit: 633a0ea
 Built: Wed Nov 13 07:25:41 2019
 OS/Arch: linux/amd64
 Experimental: false

Server: Docker Engine - Community
 Engine:
  Version: 19.03.5
  API version: 1.40 (minimum version 1.12)
  Go version: go1.12.12
  Git commit: 633a0ea
  Built: Wed Nov 13 07:24:18 2019
  OS/Arch: linux/amd64
  Experimental: false
 containerd:
  Version: 1.2.10
  GitCommit: b34a5c8af56e510852c35414db4c1f4fa6172339
 runc:
  Version: 1.0.0-rc8+dev
  GitCommit: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
 docker-init:
  Version: 0.18.0
  GitCommit: fec3683

docker system info
Client:
 Debug Mode: false

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 19.03.5
 Storage Driver: overlay2
  Backing Filesystem: xfs
 ...
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
Docker Service Configuration
Check Service Status
Docker CLI
dockerd --debug \
--host=tcp://192.168.1.10:2375
INFO[2020-10-24T08:29:00.331925176Z] Starting up
DEBU[2020-10-24T08:29:00.332463203Z] Listener created for HTTP on unix (/var/run/docker.sock)
DEBU[2020-10-24T08:29:00.333316936Z] Golang's threads limit set to 6930
INFO[2020-10-24T08:29:00.333659056Z] parsed scheme: "unix" module=grpc
INFO[2020-10-24T08:29:00.333685921Z] scheme "unix" not registered, fallback to default scheme module=grpc
INFO[2020-10-24T08:29:00.333705237Z] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock
0 <nil>}] <nil>} module=grpc
TCP Socket

With --host=tcp://192.168.1.10:2375, a Docker CLI on another machine can run docker ps against 192.168.1.10:2375.
TLS Encryption

With TLS enabled, the remote Docker CLI (docker ps) connects to 192.168.1.10:2376 instead of 2375:

dockerd --debug \
  --host=tcp://192.168.1.10:2376 \
  --tls=true \
  --tlscert=/var/docker/server.pem \
  --tlskey=/var/docker/serverkey.pem

By convention, port 2375 carries un-encrypted traffic and port 2376 carries encrypted traffic.
Daemon Configuration File

The same flags can be moved from the command line:

dockerd --debug \
  --host=tcp://192.168.1.10:2376 \
  --tls=true \
  --tlscert=/var/docker/server.pem \
  --tlskey=/var/docker/serverkey.pem

into /etc/docker/daemon.json:

{
  "debug": true,
  "hosts": ["tcp://192.168.1.10:2376"],
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem"
}
dockerd --debug=false
unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as
a flag and in the configuration file: debug: (from flag: false, from file: true)
ls /var/lib/docker/
builder containers network plugins swarm trust
buildkit image overlay2 runtimes tmp volumes
ls -lrt /var/lib/docker/containers/
36a391532e10d45f772f2c9430c2cc38dad4b441aa7a1c44d459f6fa3d78c6b6
ls -lrt /var/lib/docker/containers/36a391532e10*
Checkpoint hostconfig.json config.v2.json
Container ls – List containers
docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
36a391532e10 httpd "httpd-foreground" 2 minutes ago Created charming_wiles
docker container ls -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
36a391532e10 httpd "httpd-foreground" 2 minutes ago Created charming_wiles
docker container ls -q
docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
36a391532e10 httpd "httpd-foreground" 6 minutes ago Up 1 minutes 80/tcp charming_wiles
Container Run – Create and Start a container
docker container create httpd
docker container start 36a391532e10
docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d969ecdb44ea ubuntu "/bin/bash" 2 minutes ago Exited (0) 2 minutes ago intelligent_almeida
Container Run – With Options
docker container run -it ubuntu
root@6caba272c8f5:/#
root@6caba272c8f5:/# hostname
6caba272c8f5
root@6caba272c8f5:/#
docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6caba272c8f5 ubuntu "/bin/bash" About a minute ago Up About a minute quizzical_austin
docker container run -it ubuntu   ← correct: options come before the image name
docker container run ubuntu -it   ← wrong: "-it" is passed as arguments to the container's command
Container Run – exiting a running process
docker container run -it ubuntu
root@6caba272c8f5:/#
root@6caba272c8f5:/# hostname
6caba272c8f5
root@6caba272c8f5:/# exit
exit
docker container ls -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6caba272c8f5 ubuntu "/bin/bash" 8 minutes ago Exited (0) 37 seconds ago quizzical_austin
Container Run – Container Name
docker container run -itd --name webapp ubuntu
docker container ls -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
59aa5eacd88c ubuntu "/bin/bash" 20 seconds ago Up 19 seconds webapp

docker container rename webapp custom-webapp
docker container ls -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
59aa5eacd88c ubuntu "/bin/bash" About a minute ago Up About a minute custom-webapp
Container Run – Detached Mode
docker container run httpd
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.3. Set the 'ServerName'
directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 172.17.0.3. Set the 'ServerName'
directive globally to suppress this message
[Thu Sep 17 15:39:31.138134 2020] [mpm_event:notice] [pid 1:tid 139893041316992] AH00489: Apache/2.4.46 (Unix) configured --
resuming normal operations
[Thu Sep 17 15:39:31.138584 2020] [core:notice] [pid 1:tid 139893041316992] AH00094: Command line: 'httpd -D FOREGROUND'
docker container run -itd ubuntu
docker container ls -l
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b71f15d33b60 ubuntu "/bin/bash" 3 minutes ago Up 3 minutes magical_babbage
Container Exec – Executing Commands
docker container exec b71f15d33b60 hostname
b71f15d33b60
Remove All Containers
docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
59aa5eacd88c ubuntu "/bin/bash" 23 minutes ago Up 23 minutes kodekloudagain
a00b5535783d ubuntu "/bin/bash" 25 minutes ago Up 25 minutes epic_leavitt
616f80b0f026 ubuntu "/bin/bash" 31 minutes ago Up 28 minutes elegant_cohen
36a391532e10 httpd "httpd-foreground" About an hour ago Up About an hour 80/tcp charming_wiles
docker container ls -q
59aa5eacd88c
a00b5535783d
616f80b0f026
36a391532e10
docker container stop $(docker container ls -q)
docker container rm $(docker container ls -aq)
Copying files with docker container cp

docker container cp webapp:/root/dockerhost /tmp/    (container webapp → host)
docker container cp /tmp/web.conf webapp:/etc/       (host /tmp/web.conf → /etc/web.conf in container webapp)
Container Port Publishing

The Docker Host (IP: 192.168.1.5) runs a Web APP container with internal IP 172.17.0.3 and EXPOSE 5000. The internal address (e.g. http://172.17.0.2:5000) is only reachable from the host itself; clients on other networks (10.2.4.0, 192.168.1.0, 10.5.3.0) need the port published on the host.

Randomly published host ports are taken from the host's ephemeral port range:

cat /proc/sys/net/ipv4/ip_local_port_range
32768 60999

Publishing is implemented with iptables NAT rules in the DOCKER chain (packets traverse DOCKER-USER first, then DOCKER):

-N DOCKER
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 41232 -j DNAT --to-destination 172.17.0.3:5000
References
https://docs.docker.com/network/links/
https://docs.docker.com/engine/reference/run/#expose-incoming-ports
https://docs.docker.com/config/containers/container-networking/
https://docs.docker.com/network/iptables/
Troubleshoot Docker Daemon
Check Service Status
docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Check Docker Host
Verify the client targets the right host and port (e.g. DOCKER_HOST=tcp://192.168.1.10:2376): by convention 2375 is un-encrypted and 2376 is encrypted.
Check Service Status
unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in
the configuration file: debug: (from flag: true, from file: false)
Free Disk Space on Host
df -h
Filesystem Size Used Avail Use% Mounted on
dev 364M 0 364M 0% /dev
run 369M 340K 369M 1% /run
/dev/sda1 19G 14.7G 15M 99% /
tmpfs 369M 0 369M 0% /dev/shm
tmpfs 369M 0 369M 0% /sys/fs/cgroup
tmpfs 369M 4.0K 369M 1% /tmp
tmpfs 74M 0 74M 0% /run/user/0
Server:
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 19.03.5
Storage Driver: overlay2
Backing Filesystem: xfs
.
.
.
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
References
https://docs.docker.com/config/daemon/
https://docs.docker.com/engine/reference/commandline/dockerd/
Logging Drivers
docker run -d --name nginx nginx
cd /var/lib/docker/containers; ls
38781779e9aa15c190746784ba23d1ae237f03b58e0479286259e275d4c8820a
c5ab1dba9b51486e0e69386c137542be2e4315a56b4ee07c825e2d41c99f89b4
f3997637c0df66becf4dd4662d3c172bf16f916a3b9289b95f0994675102de17
cat f3997637c0df66becf4dd4662d3c172bf16f916a3b9289b95f0994675102de17.json
{"log":"/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration\n","stream":"stdout","time":"2020-10-25T05:59:43.832656488Z"}
{"log":"/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/\n","stream":"stdout","time":"2020-10-25T05:59:43.832891838Z"}
{"log":"/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh\n","stream":"stdout","time":"2020-10-25T05:59:43.833987067Z"}
{"log":"10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf\n","stream":"stdout","time":"2020-10-25T05:59:43.83695198Z"}
{"log":"10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf\n","stream":"stdout","time":"2020-10-25T05:59:43.84592186Z"}
{"log":"/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh\n","stream":"stdout","time":"2020-10-25T05:59:43.846117966Z"}
{"log":"/docker-entrypoint.sh: Configuration complete; ready for start up\n","stream":"stdout","time":"2020-10-25T05:59:43.850840102Z"}
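Each line written by the json-file driver is a standalone JSON object with log, stream and time fields. A minimal Python sketch, using one sample line from the output above, shows how such a line can be parsed:

```python
import json

# One line from a json-file container log (each line is a complete JSON object)
line = ('{"log":"/docker-entrypoint.sh: Configuration complete; ready for start up\\n",'
        '"stream":"stdout","time":"2020-10-25T05:59:43.850840102Z"}')

entry = json.loads(line)
print(entry["stream"])        # stdout
print(entry["log"].rstrip())  # the message without its trailing newline
```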
Logging Drivers

docker system info
Server:
 ...
 Images: 54
 Server Version: 19.03.6
 ...
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 ...

The default driver can be changed in /etc/docker/daemon.json:

{
  "debug": true,
  "hosts": ["tcp://192.168.1.10:2376"],
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem",
  "log-driver": "awslogs"
}
...
Logging Driver - Options

Driver-specific options go under "log-opt" in /etc/docker/daemon.json:

{
  "debug": true,
  "hosts": ["tcp://192.168.1.10:2376"],
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem",
  "log-driver": "awslogs",
  "log-opt": {
    "awslogs-region": "us-east-1"
  }
}

AWS credentials for the daemon:
export AWS_ACCESS_KEY_ID=<>
export AWS_SECRET_ACCESS_KEY=<>
export AWS_SESSION_TOKEN=<>
Logging Drivers
docker run -d --log-driver json-file nginx
docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 3 0 341.9MB 341.9MB (100%)
Containers 0 0 0B 0B
Local Volumes 0 0 0B 0B
Build Cache 0 0 0B 0B
Remove Images
Image Rm: Removing an Image Locally
docker image list
REPOSITORY TAG IMAGE ID CREATED SIZE
httpd alpine 52862a02e4e9 2 weeks ago 112MB
httpd customv1 52862a02e4e9 2 weeks ago 112MB
httpd latest c2aa7e16edd8 2 weeks ago 165MB
ubuntu latest 549b9b86cb8d 4 weeks ago 64.2MB
docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine latest a187dde48cd2 4 weeks ago 5.6MB
Import and Export Operations
docker export <container-name> > testcontainer.tar
docker image import testcontainer.tar newimage

docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
newimage latest 8090b7da236b 2 minutes ago 5.6MB
alpine latest a187dde48cd2 4 weeks ago 5.6MB
Building Images
Using Commit
Docker Container Commit
docker run -d --name httpd httpd
docker container commit -a "Ravi" httpd customhttpd
docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
customhttpd latest adac0f56a7df 5 seconds ago 138MB
httpd latest 417af7dc28bc 8 days ago 138MB
Save vs Load vs Import vs Export vs Commit
docker run -d --name httpd httpd
docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
customhttpd latest adac0f56a7df 5 seconds ago 138MB
httpd latest 417af7dc28bc 8 days ago 138MB
Build Context
Dockerfile
FROM ubuntu
COPY . /opt/source-code
ENTRYPOINT FLASK_APP=/opt/source-code/app.py flask run

From the application directory /opt/my-custom-app (containing app.py and the Dockerfile), the Docker CLI sends the build context to the daemon:

docker build . -t my-custom-app

The context can also be given as a path:

docker build /opt/my-custom-app
Sending build context to Docker daemon 2.048kB
Step 1/7 : FROM ubuntu

Everything under the context directory (including tmp, logs, build artifacts) is copied to the daemon under /var/lib/docker/tmp/docker-builderxxxxx, so exclude unneeded files with a .dockerignore file.

The context can also be a Git URL:

docker build https://github.com/myaccount/myapp
docker build https://github.com/myaccount/myapp:<folder>
/opt/my-custom-app
RUN apt-get update && apt-get install –y \ cached Layer 2. Update & Install python and pyth
python \ invalid
cached
python-dev \
python3-pip =20.0.2 invalid
cached Layer 3.
4. Changes in pip packages
cached
invalid 4. Source code
Layer 5.
RUN pip3 install flask flask-mysql
cached
invalid Layer 5.
6. Update Entrypoint with “flask” co
COPY app.py /opt/source-code
Dockerfile
FROM centos:7
ADD app.tar.xz /testdir
Dockerfile
FROM centos:7
ADD http://app.tar.xz /testdir
RUN tar -xJf /testdir/app.tar.xz -C /tmp/app
RUN make -C /tmp/app
COPY or ADD?

COPY copies local files into the image:

Dockerfile
FROM centos:7
COPY /testdir /testdir

ADD can do the same, and additionally auto-extracts local archives:

Dockerfile
FROM centos:7
ADD app.tar.xz /testdir

For remote archives, best practice is to fetch, extract and clean up in a single RUN:

Dockerfile
FROM centos:7
RUN curl http://app.tar.xz \
    | tar -xJ -C /testdir \
    && yarn build \
    && rm /testdir/file.tar.xz

rather than using ADD with a URL, which downloads but does not extract:

Dockerfile
FROM centos:7
ADD http://app.tar.xz /testdir
RUN tar -xJf /testdir/app.tar.xz -C /tmp/app
RUN make -C /tmp/app
Base vs Parent Image

My Custom WebApp:
FROM httpd
COPY index.html htdocs/index.html
Its parent image is httpd.

The httpd image itself:
Dockerfile - httpd
FROM debian:buster-slim
ENV HTTPD_PREFIX /usr/local/apache2
ENV PATH $HTTPD_PREFIX/bin:$PATH
WORKDIR $HTTPD_PREFIX
<content trimmed>

So for My Custom WebApp (or a Custom Wordpress built FROM httpd), httpd is the parent image and debian is the base image.

A base image is one built FROM scratch:
Dockerfile - debian:buster-slim
FROM scratch
ADD rootfs.tar.xz /
CMD ["bash"]
References
https://docs.docker.com/develop/develop-images/baseimages/
Multi-Stage Builds

my-application/ contains the Dockerfile, LICENSE, README.md and source. Two steps: 1. Build (npm run build) 2. Containerize for Production.

Without multi-stage builds, two Dockerfiles:

Dockerfile (build):
FROM node
COPY . .
RUN npm install
RUN npm run build

Dockerfile (production):
FROM nginx
COPY dist /usr/share/nginx/html

With a multi-stage build, a single Dockerfile; the first stage (Stage 0) is named builder:

FROM node AS builder
COPY . .
RUN npm install
RUN npm run build
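The slide stops after the build stage; a minimal sketch of the complete multi-stage Dockerfile, with the nginx stage and dist path carried over from the two separate Dockerfiles above:

```dockerfile
# Stage 0: build the application
FROM node AS builder
COPY . .
RUN npm install
RUN npm run build

# Final stage: ship only the build output; the node stage is discarded
FROM nginx
COPY --from=builder dist /usr/share/nginx/html
```

Only the final stage is tagged as the image; the intermediate builder stage is thrown away, keeping the production image small.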
Sample application – voting application
Components: voting-app, an in-memory DB (redis), a PostgreSQL db, a .NET worker, and result-app, tied together with Docker compose; the images come from the public Docker registry (Docker Hub).
docker container run -itd --name=web nodejs
docker-compose.yml
services:
  web:
    image: "nodejs"
  db:
    image: "mongodb"
  messaging:
    image: "redis"
  orchestration:
    image: "ansible"

docker-compose up
Docker compose - versions

docker-compose.yml
version: "3.8"
services:
  web:
    image: httpd:alpine
    ports:
      - "80"
    networks:
      - appnet
    volumes:
      - appvol:/webfs
networks:
  appnet:
volumes:
  appvol:
configs:
secrets:
Docker compose

docker-compose.yml
version: '3.8'
services:
  vote:
    image: yogeshraheja/vote:v1
    ports:
      - "81:80"
    networks:
      - appnet
  redis:
    image: yogeshraheja/redis:v1
    networks:
      - appnet
  db:
    image: yogeshraheja/db:v1
    networks:
      - appnet
  worker:
    image: yogeshraheja/worker:v1
    networks:
      - appnet
  result:
    image: yogeshraheja/result:v1
    ports:
      - "82:80"
    networks:
      - appnet
networks:
  appnet:
    driver: bridge

All five services (voting-app, result-app, redis, db, worker) share the appnet bridge network.
Compose Commands
docker-compose up
docker-compose up -d
docker-compose ps
docker-compose logs
docker-compose stop
docker-compose start
Compose Commands
docker-compose stop
docker-compose rm
docker-compose down
Docker Swarm
Docker Swarm joins multiple Docker Hosts into a cluster of Manager and Worker Nodes and schedules containers (e.g. MySQL, Web) across them as services.

Features:
• Simplified Setup
• Declarative
• Scaling
• Rolling Updates
• Self Healing
• Security
• Load balancing (an External Load Balancer in front, and an internal DNS Server inside the swarm)
• Service Discovery

service-definition.yml
services:
  web:
    image: "simple-webapp"
  database:
    image: "mongodb"
  messaging:
    image: "redis:alpine"

A Service is made up of Tasks; each task runs a container (Web) on a node. Scaling the service adds or removes Web tasks across the Manager and Worker Nodes, and failed tasks are rescheduled (self healing).
Setup Docker Swarm
Port Description
TCP 2377 Cluster Management Communications
TCP and UDP 7946 Communication among nodes
UDP 4789 Overlay network traffic
Cluster Setup
172.31.46.126 172.31.46.127 172.31.46.128
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
91uxgq6i78j1h1u5v7moq7vgz * manager1 Ready Active Leader 19.03.8
2lux7z6p96gc6vtx0h6a2wo2r worker1 Ready Active 19.03.8
docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
91uxgq6i78j1h1u5v7moq7vgz * manager1 Ready Active Leader 19.03.8
2lux7z6p96gc6vtx0h6a2wo2r worker1 Ready Active 19.03.8
w0qr6k2ce03ojawmflc26pvp3 worker2 Ready Active 19.03.8
Availability: Active, Pause, Drain
Manager Status: Leader, Reachable, Unavailable
Promoting and Demoting Nodes

docker node promote worker1
docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
91uxgq6i78j1h1u5v7moq7vgz * manager1 Ready Active Leader 19.03.8
2lux7z6p96gc6vtx0h6a2wo2r worker1 Ready Active Reachable 19.03.8
w0qr6k2ce03ojawmflc26pvp3 worker2 Ready Active 19.03.8

docker node demote worker1
docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
91uxgq6i78j1h1u5v7moq7vgz * manager1 Ready Active Leader 19.03.8
2lux7z6p96gc6vtx0h6a2wo2r worker1 Ready Active 19.03.8
w0qr6k2ce03ojawmflc26pvp3 worker2 Ready Active 19.03.8
Draining A Node

With Web replicas spread across manager1 (172.31.46.126), worker1 (172.31.46.127) and worker2 (172.31.46.128):

docker node update --availability drain worker1
docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
91uxgq6i78j1h1u5v7moq7vgz * manager1 Ready Active Leader 19.03.8
2lux7z6p96gc6vtx0h6a2wo2r worker1 Ready Drain 19.03.8
w0qr6k2ce03ojawmflc26pvp3 worker2 Ready Active 19.03.8

The Web tasks that were running on worker1 are shut down and rescheduled on the remaining active nodes (e.g. worker2 at 172.31.46.128).
Distributed consensus - RAFT

Every manager (Docker Host) holds a copy of the cluster state DB. One manager is elected Leader; instructions go through the leader, which replicates each change to the other managers' DBs before it is committed.
Quorum

How many Manager nodes?
• Docker recommends a maximum of 7 managers
• There is no hard limit on the number of managers

Managers  Quorum (Majority)  Fault Tolerance
1         1                  0
2         2                  0
3         2                  1
4         3                  1
5         3                  2
6         4                  2
7         4                  3

Fault Tolerance of N = (N-1)/2, rounded down.

Odd or even? Going from an odd count to the next even count (e.g. 3 → 4) raises the quorum without raising fault tolerance, so prefer an odd number of managers.
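The table follows directly from the RAFT majority rule; a small Python sketch reproduces it:

```python
def quorum(n: int) -> int:
    """Managers needed for a majority: floor(N/2) + 1."""
    return n // 2 + 1

def fault_tolerance(n: int) -> int:
    """Managers that may fail while keeping quorum: floor((N-1)/2)."""
    return (n - 1) // 2

# Reproduce the Managers / Quorum / Fault Tolerance table
for n in range(1, 8):
    print(n, quorum(n), fault_tolerance(n))
```

Note that fault_tolerance(4) == fault_tolerance(3): the even count adds a manager without adding any tolerance.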
Distributing Managers

Spread managers across availability zones (Site A, Site B, Site C) so that losing any single site cannot break quorum:

Managers  Distribution
3         1-1-1
5         2-2-1
7         3-2-2
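The distributions in the table are simply "as even as possible across the three sites"; a Python sketch of that rule:

```python
def distribute(managers: int, sites: int = 3) -> list[int]:
    """Split managers across sites as evenly as possible, largest site first."""
    base, extra = divmod(managers, sites)
    return [base + 1 if i < extra else base for i in range(sites)]

print(distribute(7))  # [3, 2, 2]
print(distribute(5))  # [2, 2, 1]
print(distribute(3))  # [1, 1, 1]
```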
What happens when it fails?

If so many managers (Docker Hosts) fail that quorum is lost, the surviving managers can no longer commit instructions to the replicated cluster DB, and the cluster state cannot be changed until quorum is restored.
Lock your Swarm Cluster

docker swarm update --autolock=true

SWMKEY-1-7K9wg5n85QeC4Zh7rZ0vSV0b5MteDsUvpVhG/lQnbl0

Please remember to store this key in a password manager, since without it you will not be able to restart the manager.
Unlock and Join back to Swarm Cluster
docker node ls
Error response from daemon: Swarm is encrypted and needs to be unlocked before it can be used. Please use "docker swarm
unlock" to unlock it.
Docker Swarm Tasks

docker service create --replicas=3 httpd

On the Manager Node the request flows through the API, Orchestrator, Allocator, Dispatcher and Scheduler, which place the tasks on nodes.
docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
3zhe91mns5vz firstservice replicated 1/1 httpd:alpine *:80->80/tcp
Docker Service

docker service create -p 80:80 web

The manager stores the desired state of the service (its replica count, e.g. web:2 or web:3) and reconciles continuously: if a web task or a whole node fails, replacement web tasks are scheduled on the remaining nodes until the desired count is restored.
docker node update --label-add type=cpu-optimized worker1
docker node update --label-add type=memory-optimized worker2
docker node update --label-add type=gp worker3

docker node inspect worker1 --pretty
ID: 7t1vexyw8semg7z277mhliouv
Labels:
 - type=cpu-optimized
Hostname: worker1
Joined at: 2020-04-24 11:21:42.05927
Status:
Labels & Constraints

With node labels in place, placement constraints steer workloads (Web Servers, Batch Processing, Realtime analytics) onto the matching node types.
docker run ubuntu
docker run --network=none ubuntu
docker run --network=host ubuntu
Bridge (default): Web containers on a host attach to the docker0 bridge (172.17.0.1) and get IPs such as 172.17.0.2 and 172.17.0.3 (later containers get 172.17.0.4, 172.17.0.5), each exposing its own port (e.g. 5000).

Overlay Network: a network such as 10.0.0.0 spans multiple Docker hosts, so Web containers on different hosts can communicate directly.
docker network ls
NETWORK ID NAME DRIVER
68abeefb1f2e bridge bridge
5bab4adc7d02 host host
e43bd489dd57 none null
mevcdb5b40zz ingress overlay

On a single Docker Host, publish a container port:

docker run \
  -p 80:5000 my-web-server

A Load Balancer can then target port 80 on the host; traffic is forwarded to the Web containers (172.17.0.2 and 172.17.0.3 behind docker0 at 172.17.0.1) on port 5000.

In a swarm:

docker service create \
  --replicas=2 \
  -p 80:5000 \
  my-web-server
Docker Swarm - Ingress network

The published port 80 is opened on every node. The Routing Mesh over the ingress (overlay) network forwards a request arriving at any node to a node that actually runs a Web container listening on 5000.
MACVLAN

The container attaches directly to the Docker host's physical interface (ETH0) on the PHYSICAL NETWORK.

None: disables all networking. Not available for swarm services.
Macvlan: for legacy applications that need containers to look like physical hosts on the network, each with a unique MAC address; lets containers communicate across Docker hosts at L2.
IPvlan: also lets containers communicate across Docker hosts, but they share the parent interface's MAC address; supports L2 and L3 modes.
References
• https://docs.docker.com/network/overlay/
• https://docs.docker.com/engine/swarm/ingress/
Service Discovery
Docker Swarm
Service Discovery - DNS

The web container (192.168.10.2) and the mysql container (192.168.10.3) sit on a Docker bridge network. web can call mysql.connect("mysql") by name instead of hard-coding an address like 172.17.0.3, because Docker's embedded DNS Server at 127.0.0.11 maps container names to IPs (mysql → 192.168.10.3).

docker exec -it web cat /etc/resolv.conf
search ec2.internal
nameserver 127.0.0.11
options ndots:0
Service Discovery - DNS

docker service create --name=api-server --replicas=2 api-server

Other containers reach the replicas through the service name api-server; the swarm DNS resolves it and balances requests across the tasks.
Docker Volume

A configuration file on the Docker Host can be bind-mounted into an NGINX container:

docker run -v /tmp/nginx.conf:/etc/nginx/nginx.conf nginx
Docker Configs

docker run -v /tmp/nginx.conf:/etc/nginx/nginx.conf works on one host, but a swarm service's replicas run on many nodes. Create a swarm config object and attach it to the service instead:

docker config create nginx-conf /tmp/nginx.conf

docker service create \
  --replicas=4 \
  --config src=nginx-conf,target="/etc/nginx/nginx.conf" \
  nginx

Each of the four tasks gets nginx-conf mounted at /etc/nginx/nginx.conf.
docker-compose.yml
services:
  web:
    image: "simple-webapp"
  database:
    image: "mongodb"
  messaging:
    image: "redis:alpine"

docker-compose up
docker run simple-webapp docker service create simple-webapp
docker-compose.yml docker-compose.yml
services: services:
web: web:
image: “simple-webapp" image: “simple-webapp"
database: database:
image: “mongodb“ image: “mongodb“
messaging: messaging:
image: "redis:alpine“ image: "redis:alpine“
Stack
Container
Service
Service
Stack
Docker Compose
docker-compose.yml
version: "3"
services:
  redis:
    image: redis
  db:
    image: postgres:9.4
  vote:
    image: voting-app
  result:
    image: result
  worker:
    image: worker

docker-compose up
Docker Host
The same file can also be deployed onto a swarm as a stack:
Manager Node Worker Node Docker Host
Docker Swarm
Docker Compose

docker-compose.yml
version: "3"
services:
  redis:
    image: redis
    deploy:
      replicas: 1
  db:
    image: postgres:9.4
    deploy:
      replicas: 1
  vote:
    image: voting-app
    deploy:
      replicas: 2
  result:
    image: result
    deploy:
      replicas: 1
  worker:
    image: worker

docker stack deploy --compose-file docker-compose.yml
Docker Compose

docker-compose.yml
version: "3"
services:
  redis:
    image: redis
    deploy:
      replicas: 1
  db:
    image: postgres:9.4
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
  vote:
    image: voting-app
    deploy:
      replicas: 2
  result:
    image: result
    deploy:
      replicas: 1
  worker:
    image: worker
Docker Compose

docker-compose.yml
version: "3"
services:
  redis:
    image: redis
    deploy:
      replicas: 1
  db:
    image: postgres:9.4
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
  vote:
    image: voting-app
    deploy:
      replicas: 2
      resources:
        limits:
          cpus: "0.01"
          memory: 50M
Docker Compose

docker-compose.yml
version: "3"
services:
  redis:
    image: redis
    deploy:
      replicas: 1
  db:
    image: postgres:9.4
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
  vote:
    image: voting-app
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost"]
      interval: 1m30s
      timeout: 10s
      retries: 3
      start_period: 40s
    deploy:
      replicas: 2
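The same probe can also be baked into the image itself with a HEALTHCHECK instruction — a minimal sketch, assuming the voting-app image contains curl:

```dockerfile
FROM voting-app
# Mirrors the compose healthcheck parameters shown for the vote service
HEALTHCHECK --interval=1m30s --timeout=10s --retries=3 --start-period=40s \
  CMD curl -f http://localhost || exit 1
```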
Stack Commands
docker stack ls
docker stack ps
docker stack rm
Curriculum

Course sections: Docker Engine, Docker Swarm, Kubernetes, Docker Enterprise.

Kubernetes topics:
• Kubernetes Architecture
• PODs
• ReplicaSets
• Deployments
• Services
• Commands & Arguments
• Environment Variables
• ConfigMaps
• Secrets
• Readiness Probes
• Liveness Probes
• Network Policies
• Volume driver plugins
• Volumes in Kubernetes
• PVs, PVCs, Storage Classes
Kubernetes
Essentials
Kubernetes
NOTE: The demo of the Voting App using Kubernetes objects has already been created and uploaded to the drive.
Docker Security
Security
Image Scanning
Securing the Daemon
Secure Docker Server
Docker CLI
/etc/docker/daemon.json
{
  "hosts": ["tcp://192.168.1.10:2375"]
}
192.168.1.10:2375
Docker CLI
/etc/docker/daemon.json
{
  "hosts": ["tcp://192.168.1.10:2375"]
}
export DOCKER_HOST="tcp://192.168.1.10:2375"
/var/run/docker.sock Unix Socket
docker ps
docker --tls ps
server.pem serverkey.pem
Docker CLI
/etc/docker/daemon.json
{
  "hosts": ["tcp://192.168.1.10:2376"],
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem"
}
cacert.pem
CA Server
TLS Authentication
Docker CLI
192.168.1.10:2375
192.168.1.10:2376
export DOCKER_TLS=true
export DOCKER_HOST="tcp://192.168.1.10:2376"
/var/run/docker.sock Unix Socket
docker ps
Docker CLI
/etc/docker/daemon.json
{
  "hosts": ["tcp://192.168.1.10:2376"],
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem",
  "tlsverify": true,
  "tlscacert": "/var/docker/caserver.pem"
}
client.pem clientkey.pem cacert.pem
CA Server
Authentication
Docker CLI
192.168.1.10:2375
192.168.1.10:2376
DOCKER_TLS=true
export DOCKER_TLS_VERIFY=true
export DOCKER_HOST="tcp://192.168.1.10:2376"
/var/run/docker.sock Unix Socket
docker --tlscert=<> --tlskey=<> --tlscacert=<> ps
"tls": true,
"tlscert": "/var/docker/server.pem",
"tlskey": "/var/docker/serverkey.pem",
"tlsverify": true,
cacert.pem
"tlscacert": "/var/docker/caserver.pem" CA Server
}
Summary
/etc/docker/daemon.json (TLS only)
{
  "hosts": ["tcp://192.168.1.10:2376"],
  "tls": true,
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem"
}

/etc/docker/daemon.json (TLS with client verification)
{
  "hosts": ["tcp://192.168.1.10:2376"],
  "tlscert": "/var/docker/server.pem",
  "tlskey": "/var/docker/serverkey.pem",
  "tlsverify": true,
  "tlscacert": "/var/docker/caserver.pem"
}
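The two daemon configurations above pair with environment variables on the client side. As a sketch, a small helper (the function name and structure are illustrative, not part of Docker) that assembles the variables a client shell would export:

```python
def tls_client_env(host: str, port: int = 2376, verify: bool = True) -> dict:
    """Build the environment a Docker client shell would export.

    With verify=True the client also checks the daemon's certificate
    against the CA (DOCKER_TLS_VERIFY); otherwise the channel is only
    encrypted, matching the slide's DOCKER_TLS toggle.
    """
    env = {"DOCKER_HOST": f"tcp://{host}:{port}"}
    if verify:
        env["DOCKER_TLS_VERIFY"] = "true"
    else:
        env["DOCKER_TLS"] = "true"
    return env

print(tls_client_env("192.168.1.10"))
```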
InterProcess
Namespace - PID
Linux System: PID 1-6    Child System (Container): PID 1-2

(On the container) ps aux
USER     PID   %CPU %MEM  VSZ   RSS  TTY   STAT START TIME COMMAND
root     1     0.0  0.0   4528  828  ?     Ss   03:06 0:00 nginx

(On the host) ps aux
USER     PID   %CPU %MEM  VSZ   RSS  TTY   STAT START TIME COMMAND
project  3720  0.1  0.1   95500 4916 ?     R    06:06 0:00 sshd: project@p
project  3725  0.0  0.1   95196 4132 ?     S    06:06 0:00 sshd: project@n
project  3727  0.2  0.1   21352 5340 pts/0 Ss   06:06 0:00 -bash
root     3802  0.0  0.0   8924  3616 ?     Sl   06:06 0:00 docker-containerd-shim -namespace m
root     3816  1.0  0.0   4528  828  ?     Ss   06:06 0:00 nginx
CGroups
CPU
MEM
NET
Resource
Constraints
Container Memory – Limit and Reservations
CPU
MEM
Container - webapp
Docker Host
Linux – CPU Sharing
CPU
Process 1: 1024 shares    Process 2: 512 shares
Docker Host
Linux – CPU Sharing
CPU
Completely Fair Scheduler (CFS)
Realtime Scheduler
Process 2: 512 shares
Docker Host
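Under the CFS, shares are relative weights: each contending process receives CPU time in proportion to its share of the total. A minimal sketch of that arithmetic (the helper name is illustrative):

```python
def cpu_fraction(shares: int, all_shares: list) -> float:
    # CFS grants each contending process CPU time in proportion
    # to its weight relative to the sum of all weights.
    return shares / sum(all_shares)

# Process 1 (1024 shares) vs Process 2 (512 shares):
# Process 1 gets about two thirds of the CPU, Process 2 one third.
print(cpu_fraction(1024, [1024, 512]))
print(cpu_fraction(512, [1024, 512]))
```

Note the shares only matter under contention; an idle host lets either process use the whole CPU.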
Containers – CPU Shares
CPU
CPU
docker container run --cpuset-cpus=0-1 webapp1
docker container run --cpuset-cpus=0-1 webapp2
docker container run --cpuset-cpus=2 webapp3
docker container run --cpuset-cpus=2 webapp4
Containers – CPU Count
CPU
Container - webapp
Docker Host
Resource
Constraints-
Memory
Linux – Memory
MEM SWAP
Container - webapp
Docker Host
docker container run --memory=512m --memory-swap=512m webapp   # Swap Space = 512m - 512m = 0m
docker container run --memory=512m --memory-swap=768m webapp   # Swap Space = 768m - 512m = 256m
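`--memory-swap` sets the combined limit of memory plus swap, so the swap actually available is the difference between the two flags. The arithmetic from the two commands above, as a sketch:

```python
def swap_space(memory_mb: int, memory_swap_mb: int) -> int:
    # --memory-swap is the combined memory + swap limit,
    # so swap available = total - memory.
    return memory_swap_mb - memory_mb

print(swap_space(512, 512))  # 0m of swap
print(swap_space(512, 768))  # 256m of swap
```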
References
https://www.cyberark.com/resources/threat-research-blog/the-route-to-root-container-escape-using-kernel-exploitation
Curriculum
Docker Engine
Docker Swarm
Docker EE
• Docker EE Introduction
• Docker Enterprise Engine Setup
• Universal Control Plane Setup
• Node Addition in UCP cluster
• Docker Trusted Registry Setup
• Deployment in Docker EE
• Docker EE UCP Client Bundle
• RBAC
• UCP Setting for LDAP integration
Community Edition | Enterprise Edition
Docker EE (Enterprise Edition)
Docker EE
ucp-metrics
ucp-auth-api
Type     | Nodes
Linux    | Manager, Worker
Windows  | Manager
ucp-agent
ucp-controller
ucp-metrics
ucp-auth-api
ucp-controller ucp-proxy
ucp-metrics
ucp-auth-api
ucp-agent ucp-agent
ucp-controller ucp-proxy
ucp-metrics
ucp-auth-api
WEB GUI
Deploy and Test Workload on UCP Cluster
WEB GUI
UCP Client Bundles
Deploy and Test Workload on UCP Cluster
DOCKER_HOST=x.x.x.x
DOCKER_CERT_PATH=/tmp/client.crt
Team
Service Account
(Kubernetes)
User
Swarm Collection
Swarm Collection
• Create Users:
• Create local users from UCP Console
• Integrate UCP with LDAP/AD
Docker Trusted Registry
Image Addressing Convention
docker.io
Docker Hub
image: registry.company.org/httpd/httpd
image: 54.145.234.153/httpd/httpd
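The addressing convention is `[registry/][account/]repository[:tag]`, with defaults of `docker.io`, `library`, and `latest`. A rough sketch of how a client could resolve such a reference (the function is illustrative, not Docker's actual parser):

```python
def parse_image(ref: str):
    """Split an image reference into (registry, account, repository, tag)."""
    # Split off the tag, ignoring any ':' that belongs to a registry port.
    last = ref.split("/")[-1]
    name, tag = (ref.rsplit(":", 1) if ":" in last else (ref, "latest"))
    parts = name.split("/")
    if len(parts) == 1:                        # e.g. "nginx"
        return "docker.io", "library", parts[0], tag
    if len(parts) == 2:
        # A first component containing '.' or ':' is a registry, not an account.
        if "." in parts[0] or ":" in parts[0]:
            return parts[0], "library", parts[1], tag
        return "docker.io", parts[0], parts[1], tag
    return parts[0], "/".join(parts[1:-1]), parts[-1], tag

print(parse_image("nginx"))                            # ('docker.io', 'library', 'nginx', 'latest')
print(parse_image("registry.company.org/httpd/httpd"))
```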
• Data about vulnerabilities is pulled either from the US National Vulnerability Database or configured manually by uploading a file.
• Scanning can be triggered manually or automatically when an image is pushed.
• The scan report groups findings into Critical, Major and Minor categories, along with a count for each.
• To fix vulnerabilities, check application-level dependencies, upgrade packages, and rebuild the Docker image.
DTR
Image Promotion
Development Pipeline
web web
ucp-metrics
dtr-*
ucp-auth-api
web web
web
web web
Manager Node Manager Node Manager Node Worker Node Worker Node
Docker Swarm
Docker Swarm - Recovery
Quorum of 3 = floor(3/2) + 1 = floor(2.5) = 2
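The quorum arithmetic generalizes to any manager count; a small sketch (helper names are illustrative):

```python
def quorum(managers: int) -> int:
    # Majority of managers needed for the Raft consensus to make decisions.
    return managers // 2 + 1

def fault_tolerance(managers: int) -> int:
    # How many managers can be lost while the cluster keeps its quorum.
    return (managers - 1) // 2

for n in (1, 3, 5, 7):
    print(f"{n} managers: quorum {quorum(n)}, tolerates {fault_tolerance(n)} failure(s)")
```

This is why odd manager counts are recommended: 4 managers tolerate no more failures than 3.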
docker node promote
docker swarm init --force-new-cluster
web web
web web
RAFT
DB
/var/lib/docker/swarm
/var/lib/docker/swarm
/var/lib/docker
Manager Node
https://docs.mirantis.com/docker-enterprise/v3.0/dockeree-products/ucp/admin/disaster-recovery/backup-swarm.html
Docker Swarm - Backup
Raft keys Cluster Membership Services Networks
RAFT
DB
SWMKEY-1-7K9wg5n85QeC4Zh7rZ0vSV0b5MteDsUvpVhG/lQnbl0
Please remember to store this key in a password manager, since without it you will not be able to restart the manager.
Manager Node
Docker Swarm - Restore
/var/lib/docker
Manager Node
References
https://docs.mirantis.com/docker-enterprise/v3.0/dockeree-products/ucp/admin/disaster-recovery/backup-
swarm.html
https://docs.mirantis.com/docker-enterprise/v3.0/dockeree-products/ucp/admin/disaster-recovery/restore-
swarm.html
Disaster Recovery
UCP
Disaster Recovery - UCP
Services
web web
Configs
Secrets
ucp-agent ucp-agent ucp-agent
Configs
Organizations Volumes
Kubernetes Secrets
Declarative Objects
Overlay
Networks
UCP - Backup
https://docs.mirantis.com/docker-enterprise/v3.0/dockeree-products/ucp/admin/disaster-recovery/backup-ucp.html
UCP - Restore
docker container run \
--rm -it \
--name ucp \
-v /var/run/docker.sock:/var/run/docker.sock \
docker/ucp \
uninstall-ucp
ucp-metrics
dtr-*
ucp-auth-api
UCP - Restore
docker container run \
--rm \
--interactive \
--name ucp \
--volume /var/run/docker.sock:/var/run/docker.sock \
docker/ucp:3.2.5 restore < /tmp/mybackup.tar
ucp-metrics
dtr-*
ucp-auth-api
Notes
• One backup at a time
• UCP does not backup swarm workloads. Swarm workloads are
backed up with Swarm backup
• Cannot take a backup of a cluster that’s already crashed.
• Restore to the same version of Docker Enterprise as that of the
one that was used during backup
• Restore either to the same swarm cluster or to a Docker host and
swarm will be initialized automatically
References
https://docs.mirantis.com/docker-enterprise/v3.0/dockeree-products/ucp/admin/disaster-recovery/disaster-
recovery-ucp.html
https://docs.mirantis.com/docker-enterprise/v3.0/dockeree-products/ucp/admin/disaster-recovery/backup-ucp.html
https://docs.mirantis.com/docker-enterprise/v3.0/dockeree-products/ucp/admin/disaster-recovery/restore-ucp.html
Disaster Recovery
Docker Trusted
Registry
DTR - Backup and Restoration
Services
UCP
web web
UCP
Image Data
dtr-ol
Repositories UCP
Metadata
Image Data
Access Control
dtr-ol
Notary data
dtr-* dtr-* dtr-*
Scan Results
Certificates
Manager Node Worker Node Worker Node
Docker Swarm
DTR - Backup
docker run -i --rm \
  docker/dtr backup \
  --existing-replica-id $REPLICA_ID > dtr-metadata-backup.tar
dtr-ol
https://docs.mirantis.com/docker-enterprise/v3.0/dockeree-products/dtr/dtr-admin/disaster-recovery/create-a-backup.html
dtr-ol
https://docs.mirantis.com/docker-enterprise/v3.0/dockeree-products/dtr/dtr-admin/disaster-recovery/restore-from-backup.html
dtr-ol
S3
web web
ucp-metrics
dtr-*
ucp-auth-api
docker ps
docker ps -a
IP: 192.168.1.5
http://172.17.0.2:5000 Internal IP
5000 5000 5000
"Mounts": [],
"Config": {
"Entrypoint": [
"python",
"app.py"
],
},
"NetworkSettings": {..}
}
]
Sample – Application Code
app.py
import os
from flask import Flask, render_template
app = Flask(__name__)
…
…
color = "red"
@app.route("/")
def main():
    print(color)
    return render_template('hello.html', color=color)
if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
python app.py
Applying Finishing Touches
We will be here soon!