Docker Course
DOCKER
RHEL7
Table of Contents

Chapter 1: CONTAINER TECHNOLOGY OVERVIEW
  Application Management Landscape 2
  Application Isolation 3
  Container Resource Control & Security 5
  Container Types 6
  Container Ecosystem 7
  Lab Tasks 8
    1. Container Concepts LXC 9
    2. Container Concepts Systemd 15

Chapter 2: MANAGING CONTAINERS
  Installing Docker 2
  Docker Control Socket 4
  Creating a New Container 5
  Listing Containers 6
  Viewing Container Operational Details 7
  Running Commands in an Existing Container 8
  Interacting with a Running Container 9
  Stopping, Starting, and Removing Containers 10
  Lab Tasks 11
    1. Docker Basics 12
    2. Install Docker via Docker Machine 22
    3. Configure a docker container to start at boot 28

Chapter 3: MANAGING IMAGES
  Docker Images 2
  Listing and Removing Images 4
  Searching for Images 6
  Downloading Images 8
  Committing Changes 9
  Uploading Images 10
  Export/Import Images 11
  Save/Load Images 12
  Lab Tasks 13
    1. Docker Images 14
    2. Docker Platform Images 24

Chapter 4: CREATING IMAGES WITH DOCKERFILE
  Dockerfile 2
  Caching 3
  docker build 4
  Dockerfile Instructions 6
  ENV and WORKDIR 7
  Running Commands 8
  Getting Files into the Image 9
  Defining Container Executable 10
  Best Practices 11
  Lab Tasks 12
    1. Dockerfile Fundamentals 13

Chapter 5: DOCKER NETWORKING
  Overview 2
  Data-Link Layer Details 3
  Network Layer Details 5
  Hostnames and DNS 6
  Local Host <--> Container 7
  Container <--> Container (same node) 8
  Container <--> Container: Links 9
  Container <--> Container: Private Network 10
  Managing Private Networks 12
  Remote Host <--> Container 13
  Multi-host Networks with Overlay Driver 14
  Lab Tasks 16
    1. Docker Networking 17
    2. Docker Ports and Links 26
    3. Multi-host Networks 37

Chapter 6: DOCKER VOLUMES
  Volume Concepts 2
  Creating and Using Volumes 3
  Managing Volumes (cont.) 4
  Changing Data in Volumes 5
  Removing Volumes 6
  Backing up Volumes 7
  SELinux Considerations 8
  Mapping Devices 9
  Lab Tasks 10
    1. Docker Volumes 11

Chapter 7: DOCKER COMPOSE/SWARM
  Concepts 2
  Compose CLI 3
  Defining a Service Set 4
  Docker Swarm 5
  Lab Tasks 7
    1. Docker Compose 8
    2. Docker Swarm 19

Appendix A: CONTINUOUS INTEGRATION WITH GITLAB, GITLAB CI, AND DOCKER
  Lab Tasks 2
    1. GitLab and GitLab CI Setup 3
    2. Unit and Functional Tests 7
Typographic Conventions
The fonts, layout, and typographic conventions of this book have been
carefully chosen to increase readability. Please take a moment to
familiarize yourself with them.
The following format is used to introduce and define a series of terms:

deprecate ⇒ To indicate that something is considered obsolete, with the intent of future removal.
frob ⇒ To manipulate or adjust, typically for fun, as opposed to tweak.
grok ⇒ To understand. Connotes intimate and exhaustive knowledge.
hork ⇒ To break, generally beyond hope of repair.
hosed ⇒ A metaphor referring to a Cray that crashed after the disconnection of coolant hoses. Upon correction, users were assured the system was rehosed.
mung (or munge) ⇒ Mash Until No Good: to modify a file, often irreversibly.
troll ⇒ To bait, or provoke, an argument, often targeted towards the newbie. Also used to refer to a person that regularly trolls.
twiddle ⇒ To make small, often aimless, changes. Similar to frob.

When discussing a command, this same format is also used to show and describe a list of common or important command options. For example, the following ssh options:

-X ⇒ Enables X11 forwarding. In older versions of OpenSSH that do not include -Y, this enables trusted X11 forwarding. In newer versions of OpenSSH, this enables a more secure, limited type of forwarding.
-Y ⇒ Enables trusted X11 forwarding. Although less secure, trusted forwarding may be required for compatibility with certain programs.

Representing Keyboard Keystrokes

When it is necessary to press a series of keys, the series of keystrokes will be represented without a space between each key. For example, the following means to press the "j" key three times: jjj

When it is necessary to press keys at the same time, the combination will be represented with a plus between each key. For example, the following means to press the "ctrl," "alt," and "backspace" keys at the same time: Ctrl+Alt+Backspace. Uppercase letters are treated the same: Shift+A

Occasionally content that should be on a single line, such as command line input or URLs, must be broken across multiple lines in order to fit on the page. When this is the case, a special symbol is used to indicate to the reader what has happened. When copying the content, the line breaks should not be included. For example, the following hypothetical PAM configuration should only take two actual lines:

password required /lib/security/pam_cracklib.so retry=3 type= minlen=12 dcredit=2 ucredit=2 lcredit=0 ocredit=2
password required /lib/security/pam_unix.so use_authtok

Representing File Edits

File edits are represented using a consistent layout similar to the unified diff format. When a line should be added, it is shown in bold with a plus sign to the left. When a line should be deleted, it is shown struck out with a minus sign to the left. When a line should be modified, it is shown twice. The old version of the line is shown struck out with a minus sign to the left. The new version of the line is shown below the old version, bold and with a plus sign to the left. Unmodified lines are often included to provide context for the edit. For example, the following describes modification of an existing line and addition of a new line to the OpenSSH server configuration file:

File: /etc/ssh/sshd_config
#LoginGraceTime 2m
- #PermitRootLogin yes
+ PermitRootLogin no
+ AllowUsers sjansen
#StrictModes yes

Note that the standard file edit representation may not be used when it is important that the edit be performed using a specific editor or method. In these rare cases, the editor specific actions will be given instead.
Lab Conventions

Every lab task begins with three standard informational headers: "Objectives," "Requirements," and "Relevance". Some tasks also include a "Notices" section. Each section has a distinct purpose.

Objectives ⇒ An outline of what will be accomplished in the lab task.
Requirements ⇒ A list of requirements for the task. For example, whether it must be performed in the graphical environment, or whether multiple computers are needed for the lab task.
Relevance ⇒ A brief example of how concepts presented in the lab task might be applied in the real world.
Notices ⇒ Special information or warnings needed to successfully complete the lab task. For example, unusual prerequisites or common sources of difficulty.

Though different shells, and distributions, have different prompt characters, examples will use a $ prompt for commands to be run as a normal user (like guru or visitor), and commands with a # prompt should be run as the root user. For example:

$ whoami
guru
$ su -
Password: password
# whoami
root

Occasionally the prompt will contain additional information. For example, when portions of a lab task should be performed on two different stations (always of the same distribution), the prompt will be expanded to identify the station (for example, [node1]# as seen in later lab tasks).

In some lab tasks, students are required to replace portions of commands with variable data. Variable substitutions are represented using italic fonts, for example X and Y.

Substitutions are used most often in lab tasks requiring more than one computer. For example, if a student on station4 were working with a student on station2, the lab task would refer to stationX and stationY:

stationX$ ssh root@stationY

and each would be responsible for interpreting the X and Y as 4 and 2:

station4$ ssh root@station2

Command output is occasionally omitted or truncated in examples. There are two types of omissions: complete or partial.

Sometimes the existence of a command's output, and not its content, is all that matters. Other times, a command's output is too variable to reliably represent. In both cases, when a command should produce output, but an example of that output is not provided, the following format is used:

$ cat /etc/passwd
. . . output omitted . . .

In general, at least a partial output example is included after commands. When example output has been trimmed to include only certain lines, the following format is used:

. . . snip . . .

This courseware is designed to support multiple Linux distributions. When there are differences between supported distributions, each version is labeled with the appropriate base strings:

R ⇒ Red Hat Enterprise Linux (RHEL)
S ⇒ SUSE Linux Enterprise Server (SLES)
U ⇒ Ubuntu

The specific supported version is appended to the base distribution strings, so for Red Hat Enterprise Linux version 6 the complete string is: R6.

Certain lab tasks are designed to be completed on only a sub-set of the supported Linux distributions. If the distribution you are using is not shown in the list of supported distributions for the lab task, then you should skip that task.

Certain lab steps are only to be performed on a sub-set of the supported Linux distributions. In this case, the step will start with a standardized string that indicates which distributions the step should be performed on. When completing lab tasks, skip any steps that do not list your chosen distribution. For example:

1) [R4] This step should only be performed on RHEL4.
Because of a bug in RHEL4's Japanese fonts...

Sometimes commands or command output is distribution specific. In these cases, the matching distribution string will be shown to the left of the command or output. For example:

$ grep -i linux /etc/*-release | cut -d: -f2
[R6] Red Hat Enterprise Linux Server release 6.0 (Santiago)
[S11] SUSE Linux Enterprise Server 11 (i586)

Some lab steps consist of a list of conceptually related actions. A description of each action and its effect is shown to the right or under the action. Alternating actions are shaded to aid readability. For example, the following action list describes one possible way to launch and use xkill to kill a graphical application:

Alt+F2 ⇒ Open the "Run Application" dialog.
xkill Enter ⇒ Launch xkill. The cursor should change, usually to a skull and crossbones.
Click on a window of the application to kill ⇒ Indicate which process to kill by clicking on it. All of the application's windows should disappear.

Callouts

Occasionally lab steps will feature a shaded line that extends to a note in the right margin. This note, referred to as a "callout," is used to provide additional commentary. This commentary is never necessary to complete the lab successfully and could in theory be ignored. However, callouts do provide valuable information such as insight into why a particular command or option is being used, the meaning of less obvious command output, and tips or tricks such as alternate ways of accomplishing the task at hand.

[S10] $ sux -          On SLES10, the sux command copies the MIT-MAGIC-COOKIE-1 so that graphical applications can be run after switching to another user account. The SLES10 su command did not do this.
Password: password
# xclock
Content
Application Management Landscape . . . . . . . . . . . . . . . . . . 2
Application Isolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Container Resource Control & Security . . . . . . . . . . . . . . . . 5
Container Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Container Ecosystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Lab Tasks 8
1. Container Concepts LXC . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2. Container Concepts Systemd . . . . . . . . . . . . . . . . . . . . . 15
Chapter 1: CONTAINER TECHNOLOGY OVERVIEW
Application Management Landscape
Approaches to server and application management have evolved
• Full featured server installs vs Just enough Operating System
(JeOS)
• Many applications per server vs single application per server
• Physical servers vs virtual servers
Many different technologies address specific pain points
• Security, deployment, isolation, resource management
Linux Containers
• Goal: comprehensive, secure, lightweight application isolation
and deployment
• Leverages modern features in kernel
Application Management Challenges

A modern server can run many application services (daemons) at the same time. One Linux system can serve as a mail server, database server, web server, file server, etc. However, this non-isolated approach isn't considered best practice for several reasons. Firstly, running more daemons on the same server increases the attack surface. With the daemons all running on the same operating system, a successful attack on one daemon can lead to compromise of the other daemons. Techniques such as chroot(2) and SELinux can help to prevent an attacker from moving laterally from one daemon to another.

Upgrades can be more difficult with multiple daemons on the same operating system. Suppose one web application requires a particular version of Python, and then an update to another web application requires a newer version of Python that the first web application is not compatible with. This situation can crop up when running third-party web application frameworks that don't ship with a specific Enterprise Linux distribution.

Deploying an application often requires tweaks and modifications to the operating system, installing application data, and configuring the application's configuration files. Once an application is installed on one Linux system it is no trivial matter to migrate the application to another Linux system. Automation and orchestration frameworks such as Puppet, Salt, and Chef can help by abstracting and centrally managing application deployments.

Without configured resource limits, applications can consume more CPU, memory, or I/O than is desirable. If multiple applications are competing for a resource, the Linux kernel does its best to prevent monopolization and keep resource allocation as fair as possible. Sometimes resource allocation should purposely not be fair. For example, a database server should allow the database processes to monopolize the I/O.

Containers

Linux containers aim to solve many of the application deployment challenges in a comprehensive fashion. Linux containers bring together many modern Linux features to provide very lightweight application isolation with image-based application packaging and delivery.

Docker provides a framework for creating, deploying, and managing containers. Competing frameworks exist.
1-2
Application Isolation
Pre-Container approaches to isolation
• One server per application (physical or virtual)
cons: expensive/heavyweight
• Change root (chroot(2)) — added to UNIX in 1979
changes apparent root directory for a process
cons: only isolates filesystem, limited security
• SELinux — enforces proper/allowed application behavior
Linux container features
• Good isolation of global system resources via namespaces(7)
filesystems, hostname, network, PID, IPC and UID/GIDs
• Single kernel = no virtualization overhead
1-3
Examples of Namespace Use

By default, processes inherit the namespaces of their parent. Alternatively, when creating a child process through the clone(2) system call, a process can use the CLONE_NEW* flags to have one or more private namespaces created for the child process. A process can also use the unshare(2) system call to have a new namespace created and join it, or the setns(2) call to join an existing namespace.

The unshare command is a wrapper around the unshare(2) call, and the nsenter command wraps the setns(2) call. Using these programs, any program can be placed into alternate namespaces. Some other userspace commands also have an awareness of namespaces; a notable example is the ip command used to configure networking. The following examples show the use of these commands:

Private namespaces with unshare

Create a container running a shell in private IPC and PID namespaces:

# unshare -ip -f --mount-proc /bin/bash
# ps aux
USER PID %CPU %MEM VSZ RSS TTY START TIME COMMAND
root 1 0.0 0.1 116528 3312 pts/3 15:19 0:00 /bin/bash
root 29 0.0 0.0 123372 1376 pts/3 15:20 0:00 ps aux
# ipcs
------ Message Queues --------
key msqid owner perms used-bytes messages
------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
------ Semaphore Arrays --------
key semid owner perms nsems

Network namespaces with ip netns and nsenter

Create a new network namespace (dmz1), and run a process in that namespace that will return a process details listing when someone connects to the loopback on port 80:

# ip netns add dmz1
# ip netns exec dmz1 nc -l 127.0.0.1 80 <<<$(ps aux) &
[1] 22173

Launch a shell into the existing network namespace that PID 22173 is in:

# nsenter -t 22173 -n /bin/bash
# lsof -i
COMMAND PID USER FD TYPE DEVICE NODE NAME
nc 22173 root 3u IPv4 1644993 TCP localhost:http (LISTEN)
# nc 127.0.0.1 80
. . . output omitted . . .

User namespace with unshare

Run a shell with an apparent UID of root that actually maps to an unprivileged user within the global namespace:

$ id -un
guru
$ unshare --user --map-root-user /bin/bash
# id -un
root
# cat /proc/$$/uid_map
0 500 1
# touch file
# ls -l file
-rw-r--r--. 1 root root 0 Oct 1 13:08 file
# exit
$ ls -l file
-rw-r--r--. 1 guru guru 0 Oct 1 13:08 file
1-4
Container Resource Control & Security
System Resource Management via Linux Control Groups (cgroups)
• Can allocate and control CPU time, system memory, I/O bandwidth, etc.
SELinux sVirt
• Isolates VMs or Containers from each other and host
• Enabled by default
• See docker_selinux(8)
POSIX Capabilities
• Docker runs as root but drops unneeded privileged capabilities
by default
Control Groups

The Linux kernel 2.6.24 added a new feature called "control groups" (cgroups) that was primarily created by engineers at Google. The use of cgroups allows for labeling, control, isolation, accounting, prioritization, and resource limiting of groups of processes. With modern Linux systems, cgroups are managed with systemd service, scope, and slice units. Every child process inherits the cgroup of its parent and can't escape that cgroup unless it is privileged. View cgroups with systemd-cgls either in their entirety or for an individual service.

When running Docker containers, proportional memory, CPU, and I/O weights can be specified as parameters for the container. The constraints only take effect if there is contention. A container with constraints can use resources beyond the constraints if nothing else needs them. For more advanced cgroup functionality, a persistent cgroup can be defined in a systemd unit file and then referenced as the cgroup parent in a Docker container's parameters.

Containers and SELinux Type Enforcement

By default, containers are assigned the svirt_lxc_net_t SELinux type. This type is allowed to read and execute from certain types (all of /usr and most of /etc), but denied access to most other types (such as content in /var, /home, /root, etc). The svirt_lxc_net_t type can only write to the svirt_sandbox_file_t and docker_var_lib_t types.

Files within the container are labeled by default as follows:

svirt_sandbox_file_t ⇒ default for all files within a container image.
docker_var_lib_t ⇒ default type for internal volumes.
sysfs_t, proc_*_t, sysctl_*_t, etc. ⇒ virtual filesystems mounted within the container have specific types related to their security needs.
* ⇒ files within external volumes keep their assigned types.

Containers and SELinux Multi-Category Security Separation

Files and processes for containers are further separated from one another via the assignment of MCS labels. By default a unique label is generated on container start, and assigned to that container's processes and files. The MCS label is formed from a pair of category numbers. For example, the complete label on a file within a container might be system_u:object_r:svirt_sandbox_file_t:s0:c241,c344, with a corresponding process for that container having a complete label of system_u:system_r:svirt_lxc_net_t:s0:c338,c578.

Note that within the container it will appear as if SELinux is disabled even though it can be confirmed that SELinux is in enforcing mode on the host.
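Such limits can be passed directly on the docker run command line. The flags below are standard Docker options of this era; the image, container name, and values are illustrative only and are not taken from the labs:

# docker run -d --name constrained --memory 256m --cpu-shares 512 --blkio-weight 300 busybox /bin/sleep 86400
# docker run -d --cgroup-parent=myapp.slice busybox /bin/sleep 86400     place the container under a pre-defined cgroup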
1-5
Container Types
Image-based Containers — Most common production usage
• Application packaged with JeOS runtime
JeOS can be based on any Linux distribution
• Portability
• Multiple image layers
Host Containers
• Lightweight application sandboxes
• Same userspace as the host
Security updates applied to the host apply to the
containers
Container Types
One of the innovations of the Docker container system is packaging applications into self-contained images that are separate from the Docker host. This way applications can be deployed onto any Docker host, or onto any host that implements the Open Container Specification.

A container is based on an image that contains multiple layers. At the base is a platform image; all changes are made in the top-most writable layer.
1-6
Container Ecosystem
Open Container Initiative
• Industry Standard for Containers — based on Docker
Minimal Container Hosts
• CoreOS
• Red Hat's Project Atomic
• Canonical's Snappy Core
Orchestration
• Kubernetes
• Docker Compose
1-7
Lab 1
Estimated Time: 35 minutes
1-8
Lab 1, Task 1: Container Concepts LXC
Estimated Time: 20 minutes

Objectives
• Create and administer a simple container using virsh and the libvirt-lxc driver
• Use systemd tools to examine a running container.
• Explore the use of namespaces to isolate processes.

Requirements
(1 station) (classroom server)

Relevance
Libvirt and systemd both have container support and provide a good environment to explore the basic concepts of containers without involving Docker.

Notices
• The libvirt-lxc backend is deprecated as of RHEL7.1
• The native systemd container capabilities are expanding rapidly and newer systemd versions have features very similar to Docker. See the following man pages for more details:
  http://www.freedesktop.org/software/systemd/man/machinectl.html
  http://www.freedesktop.org/software/systemd/man/systemd-nspawn.html
$ su -
Password: makeitso Õ
1-9
3) Configure the virsh command to use the lxc backend by default and verify it connects:

# export LIBVIRT_DEFAULT_URI=lxc:///          Use the lxc backend by default to avoid typing virsh -c lxc:/// each time.
# virsh uri
error: failed to connect to the hypervisor    libvirtd doesn't see the additional backend drivers until it is restarted.
error: no valid connection
error: no connection driver available for lxc:///
error: Failed to reconnect to the hypervisor
# systemctl restart libvirtd
# virsh uri
lxc:///
File: /root/test-lxc.xml
+ <domain type='lxc'>
+   <name>test-lxc</name>
+   <memory>25600</memory>
+   <os>
+     <type>exe</type>
+     <init>/bin/sh</init>
+   </os>
+   <devices>
+     <console type='pty'/>
+   </devices>
+ </domain>
Note that the memory defined above will become a limit enforced by cgroups.
5) Define a new machine from the XML, and start the container:
1-10
Id Name State
----------------------------------------------------
- test-lxc shut off
# virsh start test-lxc
Domain test-lxc started
# virsh list
Id Name State
----------------------------------------------------
6013 test-lxc running
6) Connect to the shell running in the container and explore the isolating effect of
namespaces:
1-11
# head -n3 /proc/meminfo
MemTotal: 25600 kB          /proc reports the container total memory as "MemTotal", and the host system total memory as "MemAvailable". This confuses many tools that do calculations with these numbers.
MemFree: 24788 kB
MemAvailable: 310476 kB
# Ctrl+]                    This escape exits the virsh console. Typing exit would instead terminate the shell, which would cause the container to stop.

An alternative way to view the memory limit is to run the following from the host (while the container is still running):
# grep memory_limit /sys/fs/cgroup/memory/machine.slice/machine-lxc\\x2dtest\\x2dlxc.scope/memory.stat
hierarchical_memory_limit 26214400
7) libvirtd registered the container with the systemd container services. Use the
native systemd commands to examine the container:
# machinectl list
MACHINE CONTAINER SERVICE
lxc-test-lxc container libvirt-lxc
1 machines listed.
# machinectl status lxc-test-lxc
lxc-test-lxc(79daf6b4787d4dc89d4f031f73a3a2f4)
Since: Wed 2015-06-03 16:40:19 MDT; 2min 31s ago
Leader: 6225
Service: libvirt-lxc; class container
Unit: machine-lxc\x2dtest\x2dlxc.scope
|-6226 /usr/libexec/libvirt_lxc --name test-lxca
--console 22 --security=selinux --handshake 25 --background
-6227 /bin/sh
Take note of the PID number for the Bourne shell from your output:
Result:
8) Examine the namespaces in effect for the containerized shell, and your current
shell:
# readlink /proc/PID_of_shell_from_step_7/ns/*
ipc:[4026532213]
mnt:[4026532211]
1-12
net:[4026531956]
pid:[4026532214]
uts:[4026532212]
# readlink /proc/$$/ns/*
ipc:[4026531839]
mnt:[4026531840]
net:[4026531956]
pid:[4026531836]
uts:[4026531838]
Note which namespaces are different (ipc, mnt, pid, and uts) and which are the
same (net).
9) Discover how many namespaces are currently in use by processes and how many
processes are using each namespace:
Result:
10) Track down the PID number of all processes using that pid namespace and use
the ps command to view them (should be the containerized shell):
1-13
/proc/6227/ns/pid
# ps e -p PID_from_previous_command_output
PID TTY STAT TIME COMMAND
6227 pts/0 Ss+ 0:00 /bin/sh PATH=/bin:/sbin TERM=linux container=lxc-libvirta
container_uuid=79daf6b4-787d-4dc8-9d4f-031f7
Cleanup
1-14
Lab 1, Task 2: Container Concepts Systemd
Estimated Time: 15 minutes

Objectives
• Create and administer a simple container using systemd-nspawn
• Use systemd tools to manage a running container.

Requirements
(1 station) (classroom server)

Relevance

Notices
• The native systemd container capabilities are expanding rapidly and newer systemd versions have features very similar to Docker. See the following man pages for more details:
  http://www.freedesktop.org/software/systemd/man/machinectl.html
  http://www.freedesktop.org/software/systemd/man/systemd-nspawn.html
$ su -
Password: makeitso Õ
2) The kernel audit facility is not currently supported when running systemd
containers. Disable it by adding the following parameter to the existing line, and
then rebooting:
File: /etc/default/grub
→ GRUB_CMDLINE_LINUX="video=800x600 crashkernel=auto rd.lvm.lv=vg0/swap rd.lvm.lv=vg0/root rhgb quiet audit=0"
1-15
3) Install a minimal R7 system into the /var/lib/machines/c1 directory using yum:
# mkdir /var/lib/machines/c1
# yum -y --releasever=7 --nogpg --installroot=/var/lib/machines/c1 --disablerepo='*' --enablerepo=base \
    install systemd passwd yum redhat-release-server vim-minimal
. . . snip . . .
tzdata.noarch 0:2015g-1.el7 ustr.x86_64 0:1.0.4-16.el7
util-linux.x86_64 0:2.23.2-26.el7 xz.x86_64 0:5.1.2-12alpha.el7
xz-libs.x86_64 0:5.1.2-12alpha.el7 yum-metadata-parser.x86_64 0:1.1.4-10.el7
zlib.x86_64 0:1.2.7-15.el7
Complete!
# setenforce 0
# systemd-nspawn -D /var/lib/machines/c1
Spawning namespace container on /var/lib/machines/c1 (console is /dev/pts/2).
Init process in the container running as PID 2855.
-bash-4.2# passwd
Changing password for user root.
New password: makeitso Õ
BAD PASSWORD: The password fails the dictionary check - it is based on a dictionary word
Retype new password: makeitso Õ
passwd: all authentication tokens updated successfully.
-bash-4.2# exit
#
# systemd-nspawn -D /var/lib/machines/c1/ -b
Spawning container c1 on /var/lib/machines/c1.
Press ^] three times within 1s to kill container.
systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSE
TUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
1-16
Detected virtualization systemd-nspawn.
Detected architecture x86-64.
Welcome to Red Hat Enterprise Linux Server 7.2 (Maipo)!
Cannot add dependency job for unit display-manager.service, ignoring: Unit display-manager.servicea
failed to load: No such file or directory.
[ OK ] Reached target Remote File Systems.
[ OK ] Created slice Root Slice.
. . . snip . . .
[ OK ] Reached target Multi-User System.
[ OK ] Reached target Graphical Interface.
Red Hat Enterprise Linux Server 7.2 (Maipo)
Kernel 3.10.0-327.4.4.el7.x86_64 on an x86_64
c1 login: root
Password: makeitso Õ
-bash-4.2#
Leave this terminal logged in and perform the next steps in another terminal.
7) From another terminal, examine the container using the systemd machinectl
command:
[2]# machinectl list
MACHINE CONTAINER SERVICE
c1 container nspawn
1 machines listed.
[2]# machinectl status c1
c1
Since: Fri 2016-03-11 22:34:46 MST; 1min 37s ago
Leader: 13535 (systemd)
Service: nspawn; class container
Root: /var/lib/machines/c1
Address: 10.100.0.1
192.168.122.1
fe80::52:ff:fe17:1
OS: Red Hat Enterprise Linux Server 7.2 (Maipo)
Unit: machine-c1.scope
|-13535 /usr/lib/systemd/systemd
|-user.slice
1-17
| -user-0.slice
| -session-31.scope
| |-13587 login -- root
| -13658 -bash
-system.slice
|-dbus.service
| -13581 /bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-a
|-systemd-logind.service
| -13580 /usr/lib/systemd/systemd-logind
-systemd-journald.service
-13547 /usr/lib/systemd/systemd-journald
Mar 11 22:34:46 station1.example.com systemd[1]: Starting Container c1.
q
Bonus
1-18
10) After downloading the image to your system, extract it and then add an account:
# unxz Fedora-Cloud-Base-20141203-21.x86_64.raw.xz
# systemd-nspawn -i Fedora-Cloud-Base-20141203-21.x86_64.raw
Spawning container Fedora-Cloud-Base-20141203-21.x86_64.raw on /root/Fedora-Cloud-Base-20141203-21.x86_64.raw.
Press ^] three times within 1s to kill container.
[root@Fedora-Cloud-Base-20141203-21 ~]#
# useradd guru
# passwd guru
Changing password for user guru.
New password: work Õ
BAD PASSWORD: The password is shorter than 8 characters
Retype new password: work Õ
passwd: all authentication tokens updated successfully.
# exit
logout
Container Fedora-Cloud-Base-20141203-21.x86_64.raw exited successfully.
11) Boot the image and login with the added account:
# systemd-nspawn -i Fedora-Cloud-Base-20141203-21.x86_64.raw -b -M fedora21
Spawning container fedora21 on /root/Fedora-Cloud-Base-20141203-21.x86_64.raw.
Press ^] three times within 1s to kill container.
systemd 217 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GN
Detected virtualization •systemd-nspawn•.
Detected architecture •x86-64•.
Welcome to Fedora 21 (Twenty One)!
Set hostname to <localhost.localdomain>.
Running in a container, ignoring fstab device entry for /dev/disk/by-uuid/c3dbe26e-e200-496e-af8e-d3071afe1a29.
. . . snip . . .
[21614.879720] cloud-init[240]: ci-info: | 0 | 0.0.0.0 | 10.100.0.254 | 0.0.0.0 | eth0 | UG |
[21614.879842] cloud-init[240]: ci-info: | 1 | 10.100.0.0 | 0.0.0.0 | 255.255.255.0 | eth0 | U |
[21614.879976] cloud-init[240]: ci-info: | 2 | 192.168.122.0 | 0.0.0.0 | 255.255.255.0 | virbr0 | U |
[21614.880273] cloud-init[240]: ci-info: +-------+---------------+--------------+---------------+-----------+-------+
Fedora release 21 (Twenty One)
Kernel 3.10.0-327.4.4.el7.x86_64 on an x86_64 (console)
station1 login: guru
1-19
Password: work Õ
Last login: Sat Mar 12 02:02:37 on pts/0
[guru@station1 ~]$ ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 02:08 ? 00:00:00 /usr/lib/systemd/systemd
root 18 1 0 02:08 ? 00:00:00 /usr/lib/systemd/systemd-journald
root 35 1 0 02:08 ? 00:00:00 /usr/lib/systemd/systemd-logind
. . . output omitted . . .
$ exit
logout
Fedora release 21 (Twenty One)
Kernel 3.10.0-327.4.4.el7.x86_64 on an x86_64 (console)
station1 login: Ctrl+]Ctrl+]Ctrl+]
Container fedora21 terminated by signal KILL.
Cleanup
12) Remove the container filesystem and restore SELinux enforcing mode:
# rm -rf /var/lib/machines/c1
# rm Fedora-Cloud-Base-20141203-21.x86_64.raw
# setenforce 1
1-20
Content
Installing Docker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Docker Control Socket . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Creating a New Container . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Listing Containers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Viewing Container Operational Details . . . . . . . . . . . . . . . . 7
Running Commands in an Existing Container . . . . . . . . . . 8
Interacting with a Running Container . . . . . . . . . . . . . . . . . . 9
Stopping, Starting, and Removing Containers . . . . . . . . . 10
Lab Tasks 11
1. Docker Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2. Install Docker via Docker Machine . . . . . . . . . . . . . . . . . 22
3. Configure a docker container to start at boot . . . . . . . . 28

Chapter 2: MANAGING CONTAINERS
Installing Docker
Use packages provided by distro
• RHEL, Fedora, CentOS, Oracle Linux, SLES, Debian, Ubuntu, etc.
Use sources from Docker
• curl -sSL https://get.docker.com/ | sh
Use docker-machine
• Drivers for: Amazon EC2, Azure, Digital Ocean, Exoscale, Google Cloud, OpenStack, Rackspace, SoftLayer, VirtualBox, VMware Cloud, VMware vSphere
Installing Docker
A growing list of Linux distributions include Docker packages in their official supported repositories. Generally, installing Docker is as simple as
installing the provided package via the native package system (YUM, APT, etc.); for example:
2-2
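On RHEL7, for instance, the install is typically a single package transaction followed by starting the service; the package name shown here is an assumption and varies by distribution and repository:

# yum install -y docker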
. . . output omitted . . .
# systemctl start docker
# docker -v
Docker version 1.10.3, build 20f81dd
Cloud or VM Installs with docker-machine
The docker-machine command can be installed onto a host and then used to deploy and manage Docker daemon instances across a large
number of platforms; for example:
2-3
Docker Control Socket
Docker client to daemon control socket
• no fine grained ACL, all or nothing!
Default /var/run/docker.sock
• -G daemon option to set group
-H daemon option to set alternate socket
• -H tcp://IP:port
• $DOCKER_HOST=
• Can secure with TLS certs
Docker Client to Daemon via UNIX Socket

By default, the Docker daemon creates the /var/run/docker.sock control socket and sets it owned by root:docker mode 660 so that only the root user or group can interact with Docker. To allow a different group access, modify the socket options specified in the systemd socket definition. For example:

# cp /lib/systemd/system/docker.socket /etc/systemd/system/docker.socket

File: /etc/systemd/system/docker.socket
[Socket]
ListenStream=/var/run/docker.sock
SocketMode=0660
SocketUser=root
→ SocketGroup=container

# groupadd container
# usermod -G container guru
# systemctl daemon-reload
# systemctl restart docker.socket
# ls -l /var/run/docker.sock
srw-rw----. 1 root container 0 Jul 31 11:04 /var/run/docker.sock

Using Docker via Network Socket

Instead of creating a local UNIX socket, the Docker daemon can bind to a specified TCP port allowing remote hosts access. Simply launch the daemon using the -H IP:port option and then connect using the same option from the client; for example:

File: /etc/systemd/system/docker.service
→ ExecStart=/usr/bin/docker daemon -H 10.100.0.1:2375

[host1]# systemctl restart docker
[host2]$ export DOCKER_HOST="tcp://10.100.0.1:2375"
[host2]$ docker info
. . . snip . . .
Operating System: Red Hat Enterprise Linux Server 7.2 (Maipo)
OSType: linux
Architecture: x86_64
CPUs: 1
Total Memory: 740.2 MiB
Name: host1.example.com
ID: J3U4:TZW3:6GLX:3VHK:WUQW:U4W2:QKMU:D35P:UUEI:U3L3:H4NJ:WTMS

Security Warning

Access to the Docker socket must be protected as it is trivial to obtain root level access to the host system if arbitrary containers can be run. Treat access to the socket as the equivalent of giving root access. If listening on a network socket, firewall rules should be used to limit access to authorized individuals.
2-4
Creating a New Container
docker run [OPTIONS] image [COMMAND] [ARG...]
• image downloaded if not found locally
/var/lib/docker/
• default command and args generally defined by image
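As a brief illustration of the syntax above (the busybox image and commands are examples, not prescribed here):

# docker run busybox /bin/echo "Hello"     image is pulled if not present, echo runs, then the container exits
# docker run -ti busybox /bin/sh           override the image's default command with an interactive shell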
Listing Containers
The docker ps command will show the list of containers on the connect Docker host. By default, only running containers are shown. To see all
containers (even stopped ones), use the -a option. To list just container IDs, use the -q option. To list just the last container, use the -l option.
For example:
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f3dbedddb763 nginx:latest "nginx -g •daemon of 2 days ago Up 3 minutes 0.0.0.0:32768->80/tcp web2
c6bda13fe3e8 nginx:latest "nginx -g •daemon of 2 days ago Up 17 minutes 80/tcp, 443/tcp web1
# docker ps -q
f3dbedddb763
c6bda13fe3e8
To store the ID of the last created container in a variable:
# CONTAINER_ID=$(docker ps -ql)
To stop any running containers and then delete all containers:
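One way to do this, using the same command nesting style shown in the examples above (a sketch, not necessarily the exact command intended here):

# docker stop $(docker ps -q)
# docker rm $(docker ps -qa)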
2-6
Viewing Container Operational Details
docker top → process listing for container
docker stats → mem, CPU, and net I/O
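For example, run against the web1 container used elsewhere in this chapter (the container name is illustrative):

# docker top web1
# docker stats web1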
2-7
Running Commands in an Existing Container
docker exec → Run command within a running container
• -ti → interactive session
• -d → command runs in background
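For example, an interactive shell can be opened inside a running container (the web1 name is illustrative):

# docker exec -ti web1 /bin/sh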
Interacting with a Running Container

docker logs → view output (STDOUT/STDERR) captured from a container's primary process
-f|--follow ⇒ continue reading log output for new data (like tail -f)
-t|--timestamps ⇒ Show date/time stamps with log output
--tail num ⇒ output the specified number of lines from the end of the log

# docker logs web1
172.17.42.1 - - [28/Jul/2015:19:11:36 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
172.17.42.1 - - [28/Jul/2015:19:14:41 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
172.17.42.1 - - [28/Jul/2015:19:14:54 +0000] "POST / HTTP/1.1" 405 172 "-" "curl/7.29.0" "-"
2-9
Stopping, Starting, and Removing Containers
docker stop → Stop container; SIGTERM then SIGKILL
docker kill → Kill container; SIGKILL
• -s|--signal send alternate signal
docker start → Start a previously defined container
• -a → attach STD(OUT|ERR)
• -i → attach STDIN; interactive use
docker rm → Remove container
• -f|--force → remove even if running
• -v|--volumes → remove associated volumes
$ docker kill -s SIGSTOP db

Starting Containers

To start a container that was previously created and either terminated naturally or was manually stopped, use the docker start command. For processes that are not listening for input, but have output, use the -a option to attach to the STD(OUT|ERR). For processes that accept interactive input, use the -i option to attach the local terminal STDIN to the container STDIN.

The following example shows using command nesting to get a list of all containers and remove them (and their associated volumes), even if they are running:

$ docker rm -fv $(docker ps -qa)

Delete all containers that are not currently running:

[bash]$ docker rm $(comm -3 <(docker ps -qa|sort) <(docker ps -q|sort))
2-10
Lab 2
Estimated Time: 60 minutes
2-11
Lab 2, Task 1: Docker Basics
Estimated Time: 30 minutes

Objectives
• Install the Docker client and server
• Configure Docker to use a local registry
• Configure Docker to use devicemapper for storage
• Download an image and launch simple containers
• Manage containers and images
• Examine the changes made to a host system by Docker when running containers

Requirements
(1 station) (classroom server)

Relevance
$ su -
Password: makeitso Õ
3) Examine the systemd unit file used by Docker to discover how it is configured and
run:
# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
2-12
After=network.target docker.socket
Requires=docker.socket
[Service]
Type=notify
ExecStart=/usr/bin/docker daemon -H fd://
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
[Install]
WantedBy=multi-user.target
4) Create a copy of the unit file and edit it to reference an external environment file:
# cp /usr/lib/systemd/system/docker.service /etc/systemd/system/docker.service
File: /etc/systemd/system/docker.service
[Service]
Type=notify
+ EnvironmentFile=/etc/sysconfig/docker
→ ExecStart=/usr/bin/docker daemon -H fd://$OPTIONS
5) Edit the main Docker config adding these lines so that it uses the registry on the
classroom server:
File: /etc/sysconfig/docker
+ OPTIONS='-H fd:// --registry-mirror http://server1:5000 --insecure-registry 10.100.0.0/24'
The insecure option is to avoid having to install the TLS certificate for now.
2-13
Loaded: loaded (/etc/systemd/system/docker.service; enabled)
Active: active (running) since Thu 2015-03-30 16:59:42 MDT; 18s ago
Docs: http://docs.docker.com
Main PID: 4664 (docker)
CGroup: /system.slice/docker.service
        └─4664 /usr/bin/docker daemon -H fd:// --registry-mirror http://server1:5000 --insecure-registry 10.100.0.0/24
If you see errors reported, then recheck your configs and restart.
7) Examine the detailed version info for this Docker host instance:
# docker version
Client:
Version: 1.10.3
API version: 1.22
Go version: go1.5.3
Git commit: 20f81dd
Built: Thu Mar 10 15:39:25 2016
OS/Arch: linux/amd64
Server:
Version: 1.10.3
API version: 1.22
Go version: go1.5.3
Git commit: 20f81dd
Built: Thu Mar 10 15:39:25 2016
OS/Arch: linux/amd64
2-14
9) Tag the image with an alternate name that is easier to use:
# docker tag server1:5000/busybox busybox
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
busybox latest 298a170cb181 12 days ago 1.113 MB
server1:5000/busybox latest 298a170cb181 12 days ago 1.113 MB
10) Create a new container from the image and run the echo command within the
container:
# docker run busybox /bin/echo "Hello"
Hello                              STDOUT from processes in the container is sent to the terminal by default.
# docker ps
. . . output omitted . . .         No output is shown because the container terminated immediately after running the command.
# docker ps -a                     -a shows recent containers (even stopped)
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
28dcc6a4d179 busybox:latest "/bin/echo Hello" 17 seconds ago Exited (0) 16 seconds ago stoic_heisenberg

Container names are automatically generated if one is not explicitly assigned when the container is launched. Names can be used instead of container IDs when referring to the container later.

11) Start a new container (based on the same image) that runs a long running process and also detaches from the terminal (runs in background):

# docker run -d busybox /bin/sh -c 'while true; do echo Hello; sleep 2; done'
c732093ba0812dfa8e0c0c6401f62c7dc85633d9389aa489bdcb513e0240d75e     This is the full container ID.
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c732093ba081 busybox:latest "/bin/sh -c 'while t 9 seconds ago Up 9 seconds
2-15
Hello
. . . snip . . .
14) Detach from the container again by running the following in another terminal
window as the root user:
. . . snip . . .
Hello
Hello
Killed
#
16) Identity the container ID of the first container (not test) and remove it:
# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS . . . snip
c732093ba081 busybox:latest "/bin/sh -c •while t 13 minutes ago Exited (137) 12 seconds ago . . . snip
28dcc6a4d179 busybox:latest "/bin/echo Hello" 15 minutes ago Exited (0) 15 minutes ago . . . snip
2-16
# docker rm 28dcc6a4d179
28dcc6a4d179
18) Execute a shell, running inside of the container and attach that shell to the
terminal:
# hostname
stationX.example.com
# docker exec -ti test /bin/sh
/ # hostname
c732093ba081
19) Explore the environment within the container to see the effect of the
namespaces:
/ # ps -ef
PID USER COMMAND
1 root /bin/sh -c while true; do echo Hello; sleep 2; done
87 root /bin/sh
92 root sleep 2
93 root ps -ef
/ # ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
14: eth0@if15: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
/ # exit
20) Examine the virtual Ethernet interface and bridge created automatically by Docker
to network the container:
# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT qlen 1000
2-17
link/ether 02:52:00:13:01:03 brd ff:ff:ff:ff:ff:ff
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT
link/ether 52:54:00:3e:fa:22 brd ff:ff:ff:ff:ff:ff
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN mode DEFAULT qlen 500
link/ether 52:54:00:3e:fa:22 brd ff:ff:ff:ff:ff:ff
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT
link/ether 02:42:9f:74:64:db brd ff:ff:ff:ff:ff:ff
15: veth5d55869@if14: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT
link/ether b6:5f:9a:b6:2c:9c brd ff:ff:ff:ff:ff:ff link-netnsid 0
# brctl show docker0
bridge name bridge id STP enabled interfaces
docker0 8000.56847afe9799 no veth5d55869
21) Examine (from within the container) the writable filesystem layer and associated
device created for the container:
22) Examine (from the host system) the device mapper device used by Docker:
2-18
Configuring devicemapper Storage
24) Resize the /var filesystem so that it has enough space to hold later Docker
images:
# df -h /var
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg0-var 2.0G 157M 1.9G 8% /var
# fsadm -l resize /dev/vg0/var 5G
Size of logical volume vg0/var changed from 2.00 GiB (512 extents) to 5.00 GiB (1280 extents).
Logical volume var successfully resized.
meta-data=/dev/mapper/vg0-var isize=256 agcount=4, agsize=131072 blks
= sectsz=512 attr=2, projid32bit=1
= crc=0 finobt=0
data = bsize=4096 blocks=524288, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 524288 to 1310720
# df -h /var
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg0-var 5.0G 233M 4.8G 5% /var
25) Create an LVM thin pool within the existing volume group:
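A sketch of one way to do this, matching the vg0-dockerpool device referenced in the configuration below (the 2G size is an assumption):

# lvcreate --thin -L 2G vg0/dockerpool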
26) Modify the daemon options to use the thinpool for storage (and to make it easier
to read):
2-19
File: /etc/sysconfig/docker
- OPTIONS='-H fd:// --registry-mirror http://server1:5000 --insecure-registry 10.100.0.0/24'
+ OPTIONS='-H fd:// \
+   --registry-mirror http://server1:5000 \
+   --insecure-registry 10.100.0.0/24 \
+   --storage-driver=devicemapper \
+   --storage-opt=dm.thinpooldev=/dev/mapper/vg0-dockerpool'
28) Examine the devicemapper devices for both the thinpool and the thinly
provisioned storage space (backed by the pool) for the launched container:
2-20
29) Within the LVM thin pool the individual container filesystems are composed. By
default, each container gets an apparent 10G (with space only really allocated
when needed allowing for over subscription within the thin-pool LV):
# docker exec test df -Ph | grep docker
/dev/mapper/docker-253:3-8388736-90230ce9e3cfc1dfc68301896ec2367324a69264eab188679eded16015b9ea07 10.0G 33.8M
31) Notice that the image persists even if all containers associated with it have been
deleted:
# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
busybox latest 298a170cb181 12 days ago 1.113 MB
server1:5000/busybox latest 298a170cb181 12 days ago 1.113 MB
2-21
Lab 2, Task 2: Install Docker via Docker Machine
Estimated Time: 15 minutes

Objectives
• Install the Docker Engine to your second lab node.

Requirements
(2 stations) (classroom server)

Relevance
When hosting a larger number of containers, it is helpful to have an automated method of installing, configuring, and using the Docker engine on a host. Docker Machine supports many backend drivers allowing you to easily provision Docker onto a wide variety of virtual and cloud hosting providers.

Notices
• In this lab task, the first lab system assigned to you is referred to as node1 or stationX. The second lab system assigned to you is referred to as node2 or stationY.
$ su -
Password: makeitso Õ
2) From your first assigned lab host (node1), create an SSH key and install it as
trusted onto your second host:
[node1]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): Õ
Enter passphrase (empty for no passphrase): Õ
Enter same passphrase again: Õ
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
e1:9b:cf:9c:58:8f:67:17:c3:b1:35:fb:01:94:75:3a root@stationX.example.com
. . . snip . . .
[node1]# ssh-copy-id stationY
The authenticity of host 'stationY (10.100.0.Y)' can't be established.
ECDSA key fingerprint is 11:9d:dd:80:7f:69:96:67:3d:e2:53:54:be:96:5b:39.
Are you sure you want to continue connecting (yes/no)? yes
2-22
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@stationY's password: makeitso Õ
Number of key(s) added: 1
. . . snip . . .
[node1]# ssh stationY hostname -i          Verify the newly installed key works by running a command on node2.
10.100.0.Y
4) Docker Machine invokes the install script from the Docker website by default.
Create a simple service that simply installs Docker from the local yum repo
instead and serve that on a network accessible URL:
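Judging from the completed-job line shown later in this task, the "service" can be as simple as a one-shot nc listener that returns the install command (addresses as assigned to your stations):

[node1]# echo "yum install -y docker-engine" | nc -l 10.100.0.X 8000 &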
5) Use Docker Machine and the generic driver (SSH) to install and configure Docker
onto your second lab host (node2). Be sure to replace X with your first node
address, and Y with your second node address:
2-23
User-Agent: curl/7.29.0
Host: stationX:8000
Accept: */*
Copying certs to the local machine directory...          This may take a minute due to entropy depletion. Be patient.
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env stationY.example.com
[1]+ Done echo "yum install -y docker-engine" | nc -l 10.100.0.X 8000
# docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
stationY.example.com - generic Running tcp://10.100.0.Y:2376 v1.10.3
7) Examine the configuration that was created for the Docker instance installed on
your second node:
8) Import the necessary environment variables to point the docker command to your
second node:
2-24
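The variables are typically imported with an eval of docker-machine env (machine name as created above):

[node1]# eval $(docker-machine env stationY.example.com)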
# env | grep DOCKER Examine the variables that were set
DOCKER_HOST=tcp://10.100.0.Y:2376
DOCKER_MACHINE_NAME=stationY.example.com
DOCKER_TLS_VERIFY=1
DOCKER_CERT_PATH=/root/.docker/machine/machines/stationY.example.com
9) Verify that the docker command is now accessing the second node:
# docker-machine active
stationY.example.com
# docker info 2>/dev/null | grep ^Name
Name: stationY.example.com
File: /root/.docker/machine/machines/stationY.example.com/config.json
. . . snip . . .
"HostOptions": {
"Driver": "",
"Memory": 0,
"Disk": 0,
"EngineOptions": {
. . . snip . . .
→ "InsecureRegistry": ["10.100.0.0/24"],
. . . snip . . .
→ "RegistryMirror": ["http://server1:5000"],
2-25
12) Re-deploy with the new config and restart:
13) Test your node2 configuration by launching a simple web server container and
connecting to it from node1:
Cleanup
2-26
webtest
[node1]# docker rmi server1:5000/nginx
Untagged: server1:5000/nginx:latest
Deleted: sha256:50836e9a7005ec9036a2f846dcd4561fd21f708b02cda80f0b6d508dfaee6515
Deleted: sha256:2bd2cdbf8a04fa01956fdf075059c4b2bae0203787cb8a5eaa7dd084a23f0724
. . . snip . . .
15) Unset the variables exported by the earlier eval command so that the docker
command points to the local system again:
2-27
Lab 2, Task 3: Configure a docker container to start at boot
Estimated Time: 15 minutes

Objectives
• Configure a docker container to restart if it fails
• Configure a docker container to auto-start at boot

Requirements
(1 station) (classroom server)

Relevance
$ su -
Password: makeitso Õ
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2556113a7f65 ubuntu "bin/sleep 86400" 53 seconds ago Up 51 seconds c1
4) Kill the container via the docker command to see if it automatically restarts:
# docker kill c1
c1
# docker ps -q
Containers that are manually stopped or killed via docker commands are never
auto-restarted.
2-28
5) Restart the container and then kill the primary process within the container to see
if the container is automatically restarted:
# docker start c1
c1
# docker top c1 -o pid
PID
10089
# kill -9 10089
# docker ps -q
The container did not get auto-restarted. However, this is governed by the policy
assigned to the container which is explored in the next steps.
6) Inspect the restart policy for the container and start a new container with a
different policy:
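A sketch of what these commands might look like (the c2 name and on-failure policy are assumptions, not necessarily the lab's exact choices):

# docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' c1
# docker run -dti --name c2 --restart=on-failure server1:5000/ubuntu /bin/sleep 86400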
7) Kill the primary process within the container to see if the container is
automatically restarted:
2-29
The container did get auto-restarted because of the assigned policy.
Cleanup
12) Create a new systemd unit file to control a simple Ubuntu container:
File: /etc/systemd/system/ubuntu-container.service
+ [Unit]
+ Description=Simple Ubuntu container
+ Requires=docker.service
+ After=docker.service
+
+ [Service]
+ ExecStartPre=-/bin/docker run --name c1 -dti server1:5000/ubuntu
+ ExecStart=/bin/docker start -a c1
+ ExecStop=/bin/docker stop c1
+
+ [Install]
+ WantedBy=multi-user.target
2-30
13) Notice the unit is disabled:
# systemctl reboot
2-31
18) Attach to the running container:
# docker attach c1
Õ
root@492e9afe4cb4:/#
root@492e9afe4cb4:/# cat /etc/os-release
NAME="Ubuntu"
VERSION="14.04.4 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.4 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
root@492e9afe4cb4:/# Ctrl+p Ctrl+q
Cleanup
19) Stop and disable the systemd unit, and remove the unit file:
# docker rm c1
c1
# docker rmi server1:5000/ubuntu
. . . output omitted . . .
2-32
Content
Docker Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Listing and Removing Images . . . . . . . . . . . . . . . . . . . . . . . . 4
Searching for Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Downloading Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Committing Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Uploading Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Export/Import Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Save/Load Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
Lab Tasks 13
1. Docker Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2. Docker Platform Images . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Chapter 3: MANAGING IMAGES
Docker Images
The filesystem that will be used by a container
• built by progressively layering tar images
The set of instructions for running containers
• contained in layer_id/json
Examine image's metadata with
• docker inspect image
• docker history image
What is an Image?

Docker images are the foundation for running containers. Images contain two types of things:

1. A collection of layers that are combined to form the filesystem that will be seen within the container.
2. A collection of configuration options that provide the defaults for any containers launched using that image.

The configuration data for an image (contained in a JSON array) provides the information Docker needs to determine the order that filesystem layers are assembled in. Each layer has a name which is a hash and includes a reference to a parent layer (by hash number). The final base layer lacks the reference to a parent.

When a container is started, the desired image is referenced and a union of the various layers leading from that layer back to the base are mounted into a namespace and handed to the container.

When a container is in operation, a writable layer is created under /var/lib/docker/tmp/ and mounted as the final layer to store any temporary changes that accumulate.
Inspecting Images
The metadata associated with each image can be viewed using the docker inspect command. The -f option allows passing of a Go template
to select and format the output; for example:
3-2
"nginx",
"-g",
"daemon off;"
],
. . . output omitted . . .
# docker inspect -f •Image was created on: {{ .Created }}• nginx
Image was created on: 2015-07-14T00:04:53.560160038Z
# docker inspect -f •{{ .Config.Env }}• nginx
[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin NGINX_VERSION=1.9.2-1~jessie]
The operations committed in each layer of the image can be viewed using the docker history command. By default, the CREATED BY column
is truncated. To get a full listing include the --no-trunc option. The following example shows listing the commit history of the python image:
3-3
Listing and Removing Images
docker images → list images
• Only lists top parent images by default
• -a to list all layers
docker rmi image → remove images
• -f remove image even if in use
• --no-prune do not delete untagged parents
Listing Images
The images already downloaded to the Docker host being used can be listed as follows:
# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
server1/postgres latest c0ee64808866 7 days ago 265.5 MB
server1/mysql latest a5e5891111da 7 days ago 283.5 MB
server1/python 2.7 0d1c644f790b 12 days ago 672.9 MB
server1/elasticsearch latest bc6061565360 4 weeks ago 514.7 MB
server1/redis latest 0ecdc1a8a4c9 4 weeks ago 111 MB
server1/nginx latest 319d2015d149 4 weeks ago 132.8 MB
server1/debian jessie bf84c1d84a8f 4 weeks ago 125.1 MB
server1/ubuntu 15.10 906f2d7d1b8b 4 weeks ago 133.6 MB
To see all the layers in an image, use the -a option as follows:
# docker images -a
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> <none> 61f2d0dc0350 43 hours ago 132.8 MB
server1/nginx latest ce293cb4a3c9 43 hours ago 132.8 MB
<none> <none> 4b1798a9cafa 43 hours ago 132.8 MB
<none> <none> 2b12d66de277 43 hours ago 132.8 MB
<none> <none> 51ed40dd3671 43 hours ago 132.8 MB
<none> <none> 7340fb61bdef 43 hours ago 132.8 MB
<none> <none> 97df1ddba09e 44 hours ago 125.2 MB
<none> <none> 438d75921414 44 hours ago 125.2 MB
<none> <none> 5dd2638d10a1 44 hours ago 125.2 MB
<none> <none> aface2a79f55 44 hours ago 125.2 MB
3-4
<none> <none> 9a61b6b1315e 2 days ago 125.2 MB
<none> <none> 902b87aaaec9 2 days ago 125.2 MB
Removing Images
Images can be removed with the docker rmi command. If the specified image is tagged with another name, then the tag name used in the
docker rmi command will be removed, but the image will remain. If no other tags use that image, then the image file itself will be removed. If
the image is used by a defined container, then a warning is issued instead. To delete images even if they are in use, use the -f option. When
an image is removed, parent images that are not referenced by another image are also removed:
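As a brief sketch (the image name and ID below are taken from the earlier listing and are only illustrative):
# docker rmi server1/redis:latest
. . . output omitted . . .
# docker rmi -f a5e5891111da Force removal by image ID even if containers still reference the image.
. . . output omitted . . .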
3-5
Searching for Images
Find image
• Web search: https://registry.hub.docker.com/
• Index CLI search: docker search image
Finding Images
Docker images are generally stored in a Docker Registry. A registry can be run on the local network or even local host. When the docker
command is run, it will connect to the Docker daemon specified by the -H option if used, or the local Docker UNIX socket. The daemon in turn
will have one or more registries configured. Registries store one or more repositories and can optionally have an associated Index service that
can provide the ability to search for the images and their related metadata.
The official Docker registry can be accessed at: https://registry.hub.docker.com/. It supports a web search interface and is currently the
best way to search for images.
Searching via CLI
The docker search command can be used to search the configured registry. The search string provided can optionally include the repository
name to further qualify the search. It also allows filtering by number of stars as shown in the following example:
3-6
# docker search -s 3 ssh
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
docker.io/jdeathe/centos-ssh CentOS-6 6.6 x86_64 / EPEL Repo. / OpenSSH... 3 [OK]
If more details are needed, the Registry REST API can be used. For example, to get a list of all tags associated with the ubuntu image you
could run:
# curl https://registry.hub.docker.com/v1/repositories/ubuntu/tags
[{"layer": "d2a0ecff", "name": "latest"}, {"layer": "3db9c44f", "name": "10.04"},a
{"layer": "6d021018", "name": "12.04"}, {"layer": "6d021018", "name": "12.04.5"},a
{"layer": "c5881f11", "name": "12.10"}, {"layer": "463ff6be", "name": "13.04"},. . . snip . . .
Many tools exist that could parse and format the JSON in easier to read ways. For example:
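One simple option (assuming Python is installed on the Docker host) is to pipe the response through Python's json.tool module:
# curl -s https://registry.hub.docker.com/v1/repositories/ubuntu/tags | python -mjson.tool
. . . output omitted . . .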
3-7
Downloading Images
docker pull image
• -a|--all-tags=true
Downloading Images
Once the desired image has been identified, it can be downloaded
using the docker pull image command. At a minimum, a bare image
name can be used in which case it will try to connect to the main
Docker Registry and pull the latest official library image by that name.
Optionally, the registry host, repository name, image name, and tag
can all be specified if desired. As an example, assuming that tag
version 22 is the same as tag latest, all of the following commands
would download the same image:
# docker pull fedora
# docker pull fedora:latest
# docker pull fedora:22
# docker pull registry.hub.docker.com/fedora:22
To download tag version v2 of the image named project_red from
the bcroft repository from the registry running on server1 port 5000,
use:
# docker pull server1.example.com:5000/bcroft/project_red:v2
All tags for an image can be pulled by adding the -a option when
executing the pull. Be careful with this option as some repositories
have a large number of image versions and corresponding tags.
3-8
Committing Changes
docker commit container [REPO[:TAG]]
• changes to filesystem automatically included
• -c → change container metadata
• -a author
• -m commit message
docker diff container
• lists accumulated changes to container's filesystem
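A minimal sketch of the typical workflow (container name, image tag, and commit metadata are illustrative):
# docker diff c1
. . . output omitted . . .
# docker commit -a "StudentX" -m "install curl" -c 'CMD ["/bin/bash"]' c1 mydev:v1
. . . output omitted . . .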
3-9
Uploading Images
docker login
• -e|--email=
• -p|--password=
• -u|--username=
~/.dockercfg
docker push
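A minimal sketch of tagging and pushing an image to a private registry (registry host and repository names are illustrative):
# docker login server1:5000
. . . output omitted . . .
# docker tag mydev:v1 server1:5000/studentX/mydev:v1
# docker push server1:5000/studentX/mydev:v1
. . . output omitted . . .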
3-11
Save/Load Images
docker save image
• save image to tar archive
• STDOUT by default, -o filename
docker load
• load image from tar archive
• STDIN by default, -i filename
Saving an Image
The docker save command will save the specified image including all
its individual layers and associated metadata. This is an excellent way
to backup an image. The following example shows saving an image
into a compressed archive and then examining its contents:
# docker save agentapp:v47 | gzip > agent47.tgz
# tar tvzf agent47.tgz
drwxr-xr-x 0/0 0 2015-07-16 13:30 2b12d66de2777efbc3fcb84ac615f358573137f1e7e6d31078414a9376b8019a/
-rw-r--r-- 0/0 3 2015-07-16 13:30 2b12d66de2777efbc3fcb84ac615f358573137f1e7e6d31078414a9376b8019a/VERSION
-rw-r--r-- 0/0 1620 2015-07-16 13:30 2b12d66de2777efbc3fcb84ac615f358573137f1e7e6d31078414a9376b8019a/json
-rw-r--r-- 0/0 3072 2015-07-16 13:30 2b12d66de2777efbc3fcb84ac615f358573137f1e7e6d31078414a9376b8019a/layer.tar
drwxr-xr-x 0/0 0 2015-07-16 13:30 438d75921414c48c09defacd519b2339fd5c06a1192562b40e68465fd06df017/
-rw-r--r-- 0/0 3 2015-07-16 13:30 438d75921414c48c09defacd519b2339fd5c06a1192562b40e68465fd06df017/VERSION
-rw-r--r-- 0/0 1615 2015-07-16 13:30 438d75921414c48c09defacd519b2339fd5c06a1192562b40e68465fd06df017/json
-rw-r--r-- 0/0 1024 2015-07-16 13:30 438d75921414c48c09defacd519b2339fd5c06a1192562b40e68465fd06df017/layer.tar
. . . output omitted . . .
Loading an Image
The docker load command will load the specified image (contained
in an optionally compressed tar archive) into the Docker image store.
The archive is expected to contain a repositories file that defines
tags, and one or more directories named by digest values which
contain layer.tar and json layer metadata files. The following
example shows loading a compressed image:
# docker load -i agent47.tgz
3-12
Lab 3
Estimated Time: 40 minutes
3-13
Lab 3 Task 1: Docker Images
Estimated Time: 35 minutes
Objectives
y Pull and push images.
y Make and commit a change to an image.
y Export, change, and import a container's filesystem.
y Save, examine, and load a docker image.
Requirements
b (1 station) c (classroom server)
Relevance
$ su -
Password: makeitso Õ
2) Verify that your system has the Docker daemon running, and has no images:
# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
# docker pull server1:5000/ubuntu Since no tag is specified, the default of "latest" will be
Trying to pull repository server1:5000/ubuntu ... used.
09b55647f2b8: Download complete
6071b4945dcf: Download complete
5bff21ba5409: Download complete
e5855facec0b: Download complete
8251da35e7a7: Download complete
6b977291cadf: Download complete
baea68532906: Download complete
Status: Downloaded newer image for server1:5000/ubuntu:latest
# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
server1:5000/ubuntu latest 09b55647f2b8 11 days ago 188.3 MB
3-14
4) View all the layers that compose the ubuntu:latest image:
5) Tag the image with a simpler name for convenience in launching containers:
6) Launch a container using the image and verify that the curl command is not
currently in the image:
3-15
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
ca-certificates krb5-locales libasn1-8-heimdal libcurl3 libgssapi-krb5-2
. . . snip . . .
Processing triggers for libc-bin (2.19-0ubuntu6.6) ...
Processing triggers for ca-certificates (20130906ubuntu2) ...
Updating certificates in /etc/ssl/certs... 164 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d....done.
8) Minimize the changes that will be stored in the new image layer by deleting
unneeded info, and then exit:
/# apt-get clean
/# history -c
/# exit
exit
# docker diff c1
C /usr
C /usr/bin
A /usr/bin/curl
A /usr/bin/c_rehash
. . . snip . . .
# docker diff c1
C /usr Note that the diff output order is not consistent
C /usr/local between runs.
C /usr/local/share
A /usr/local/share/ca-certificates
. . . snip . . .
10) Commit the changes into a new layer and tag the new image as mydev:v1:
3-16
11) Launch a new container that uses the modified image and verify that it has the
curl command:
12) Tag the new image so that it can be pushed back up to the registry:
14) Create another variant of the image that uses the newly installed Curl program as
the entrypoint, so that args can be passed to it, and tag it as v2:
3-17
> -c 'ENTRYPOINT ["/usr/bin/curl"]' c2 server1:5000/studentX/mydev:v2
sha256:a57f67de34ae6725f4fbb8082a07c387d44381bf99dd137e82aaf1c876fc40f5
15) Test the new image to verify that the entrypoint works:
# docker run --rm server1:5000/studentX/mydev:v2 -V
curl 7.35.0 (x86_64-pc-linux-gnu) libcurl/7.35.0 OpenSSL/1.0.1f zlib/1.2.8 libidn/1.28 librtmp/2.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtmp rtsp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
16) Push the v2 tagged image up to the remote registry into the previously created
repository:
# docker push server1:5000/studentX/mydev:v2
. . . output omitted . . .
0ea724891d39: Layer already exists
v2: digest: sha256:11ca0157eb83ed95ac13b427a97e0fee01a89de4c9d8926884a715e6437a4ee8 size: 2167
17) Use the Curl binary in the new image to connect to the classroom registry and list
all variants (tags) for your image via the Registry REST API:
# docker run --rm server1:5000/studentX/mydev:v2 \
> -sS http://server1:5000/v2/studentX/mydev/tags/list
{"name":"studentX/mydev","tags":["v1","v2"]}
18) Export the filesystem image of the previously created container and unpack it into
a directory for examination:
# mkdir /tmp/image-export
# cd /tmp/image-export
# docker export c2 | tar x
19) Verify that this is the modified image that contains the curl command:
# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
# find . -name curl
./usr/bin/curl
3-18
./usr/share/doc/curl
20) Make another change to the filesystem by adding a new file to the root:
21) Archive (tar) the filesystem and import it back into Docker tagged as v1.1:
22) Verify that the imported image contains the additional file by launching a new
container and trying to access the file:
23) Save the modified mydev:v1 image layers so they can be examined:
# mkdir /tmp/image-save
# cd /tmp/image-save
# docker save mydev:v1 | tar x
24) Compare the structure of the saved image layers with the layers stored by the
local Docker process:
# ls -F
3-19
038018d99ee757a2556a5128c7f528803ecd10eac1753ee8f2f47e02b48b09bc/
083f56e714ec1eaa29ece1c64af36c6bb6ef63c9ef2c6597edb7cb1388d54f31/
5b69eeda9041268b37a106ccf5354ef2c6632bf2fd216cbe2181e83558c2f586/
5e096974a66d7ad8c336111513948db1b304cac8be317a79dedb08ae5ba3ee19/
623e3c7fbb2c6b71406d0ecc87fc2e7e2cc8d9e4991b5c3a63c3df1c13a6ff4b/
6cafb7bc591d8e441a48bf51c56cda96afbd19b8e2b57a01fb56a6908d143341/
dbcd063980209fe1e5a4938ee6fe7592a3e836ee01c3d1dae598169e0d6664df/
ebc52fba86e3984d8ca6911b9eee4ad3fe7952e84bed57f6ee3ec99eade5a5b3.json
eca838e93dc15acf6a979eb99961a5656edade094bf34ba61f139e16a2103a86/
manifest.json
repositories
# cat manifest.json | python -mjson.tool
[
{
"Config": "ebc52fba86e3984d8ca6911b9eee4ad3fe7952e84bed57f6ee3ec99eade5a5b3.json",
"Layers": [
"038018d99ee757a2556a5128c7f528803ecd10eac1753ee8f2f47e02b48b09bc/layer.tar",
"5b69eeda9041268b37a106ccf5354ef2c6632bf2fd216cbe2181e83558c2f586/layer.tar",
"eca838e93dc15acf6a979eb99961a5656edade094bf34ba61f139e16a2103a86/layer.tar",
"dbcd063980209fe1e5a4938ee6fe7592a3e836ee01c3d1dae598169e0d6664df/layer.tar",
"623e3c7fbb2c6b71406d0ecc87fc2e7e2cc8d9e4991b5c3a63c3df1c13a6ff4b/layer.tar",
"5e096974a66d7ad8c336111513948db1b304cac8be317a79dedb08ae5ba3ee19/layer.tar",
"083f56e714ec1eaa29ece1c64af36c6bb6ef63c9ef2c6597edb7cb1388d54f31/layer.tar",
"6cafb7bc591d8e441a48bf51c56cda96afbd19b8e2b57a01fb56a6908d143341/layer.tar"
],
"RepoTags": [
"mydev:v1"
]
}
]
# sha256sum 038018d99e...2b48b09bc/layer.tar Use the first layer ID from the manifest output.
0ea724891d39006f7c9d4704f7307191e2fae896604911f478a1ae09ae7813f3 038018d99e...2b48b09bc/layer.tar
# ls -F /var/lib/docker/image/devicemapper/layerdb/sha256/
0ea724891d39006f7c9d4704f7307191e2fae896604911f478a1ae09ae7813f3/ Matching image layers stored by docker
1a9dcce1578874aa8bae8f886bb980ce07d3ca4a3b73f2878562259dc5394eb9/
1aeffc90c7f0a72e8c2cc6fa7e4a3ad78318f51e44a1e644e57486c96acbcb99/
. . . output omitted . . .
# cat ebc52fba86e3984...9eade5a5b3.json |python -mjson.tool JSON file is the config from the manifest.
. . . snip . . .
"rootfs": {
"diff_ids": [
3-20
"sha256:0ea724891d39006f7c9d4704f7307191e2fae896604911f478a1ae09ae7813f3",
"sha256:3873458c48b8bc033deb7642793140d2f9ddbdf6cef44649f53a05a18f383c97",
"sha256:2c1bd54c37a8c71aa3c0c6161204362c630775df014ed7825285a15556e8929c",
"sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
"sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef",
"sha256:030a3fe9584fb61f8e5f7a8f548eab71645ee5af4f7980d1372cf33e0aa5b8f1",
"sha256:f86c02b0df1a0f1f7d43a05a3908bf55c19ba65366d3bebe66bc4a7939335369",
"sha256:48e51c7724844f1d7ad59628f11df9b6135d1345b44a97b359be665a2ae98808"
],
"type": "layers"
#
Prior to 1.10, Docker stored the corresponding image layers in
/var/lib/docker/graph/layer_id_name. 1.10 moved to using content
addressable image IDs that are a SHA256 hash of the actual image's content and
now stores those layers in the
/var/lib/docker/image/devicemapper/layerdb/sha256/ path by default.
25) Determine which layer is the top of the stack (the last commit to the image) and
change to the corresponding directory:
# cat repositories
{"mydev":{"v1":"4c496cee7f8588c1bbd8d439402ef1349124e9224adb7520eacd80bb24cba51e"}}
# cd 4c496cee7f8588c1bbd8d439402ef1349124e9224adb7520eacd80bb24cba51e/ Use shell completion.
26) Examine the files which define this layer and verify that it is your commit
containing the curl command:
# ls
json layer.tar VERSION
# sed 's/,/\n/g' json
{"id":"98c86fd202ec8d4a6a50d917a1f69fcec83d1297543a03735d398f34a641a29e"
"parent":"981b0f18b5d6aaf9766057db958a6ce63d51680a6113033688c24ce5aa104a78"
. . . snip . . .
"docker_version":"1.10.3"
"author":"StudentX \[email protected]\u003e"
. . . snip . . .
# tar tvf layer.tar | grep bin/curl
-rwxr-xr-x 0/0 154328 2016-04-14 12:03 usr/bin/curl
3-21
27) Repackage the layers into a compressed file that could be distributed and used by
Docker:
# cd /tmp/image-save/
# tar czf /tmp/mydev.v1.tgz .
# ls -lh /tmp/mydev.v1.tgz
-rw-r--r--. 1 root root 68M Apr 14 13:10 /tmp/mydev.v1.tgz
29) Load the mydev:v1 image (and all of its layers) back into Docker storage from the
local compressed archive:
30) Pull the original ubuntu image from the remote registry:
3-22
image. Effectively, the only change was that the ubuntu base image was tagged
and given a repository name.
32) Verify that all three images again are listed in the local Docker images store:
# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
server1:5000/studentX/mydev v2 3e8a810affc1 56 minutes ago 200.7 MB
mydev v1 1e2af1c32183 2 hours ago 200.7 MB
server1:5000/ubuntu latest 09b55647f2b8 6 weeks ago 188.3 MB
Cleanup
3-23
Lab 3 Task 2: Docker Platform Images
Estimated Time: 5 minutes
Objectives
y Create a basic platform container image.
Requirements
b (1 station) c (classroom server)
Relevance
3-24
5) Create a compressed archive of the image to be used in a later lab:
3-25
Content
Dockerfile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Caching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
docker build . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Dockerfile Instructions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
ENV and WORKDIR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Running Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Getting Files into the Image . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Defining Container Executable . . . . . . . . . . . . . . . . . . . . . . 10
Best Practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Lab Tasks 12
1. Dockerfile Fundamentals . . . . . . . . . . . . . . . . . . . . . . . . . 13
Chapter
4
CREATING IMAGES
WITH DOCKERFILE
Dockerfile
Text file with instructions to build an image
Processed by docker build
Results in one or more layers
• images can share layers
4-2
Caching
Each layer produced by an instruction is cached
• can greatly accelerate subsequent builds
• can result in a cached layer being used when a new build is
needed instead
Cache invalidations caused by
• changed instruction
• referenced file checksum changed
• previous layer cache invalid
4-4
Using Intermediate Images and Containers
When building images, if any instruction results in a non-zero return
code, the build will fail. Any layers that did build successfully will
remain, and one effective method of troubleshooting is to start a
container using the previous layer's image and then run the command
manually (perhaps with changes to produce more verbose output) to
determine the source of the problem:
# docker build -t borken:v1 .
Sending build context to Docker daemon 2.56 kB
Sending build context to Docker daemon
Step 0 : FROM ubuntu:14.04
---> d2a0ecffe6fa
. . . snip . . .
Step 6 : RUN sed 's/enabled/disabled/' -i /etc/thing.conf
---> Using cache
---> c91999ebec3e
Step 7 : RUN apt-get -qqy install borken 2>/dev/null
---> Running in e7332456eb92
INFO[0002] The command [/bin/sh -c apt-get -y install borken] returned a non-zero code: 100
# docker run -ti c91999ebec3e /bin/bash
root@0db2507034ee:/# apt-get -y install borken
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package borken
After a successful build, all the intermediate containers used to
process Dockerfile instructions are discarded. If it is useful
(generally for troubleshooting), you can suppress this with the
--rm=false option.
When a build fails, the intermediate containers are not removed by
default. To force removal, use the --force-rm option.
4-5
Dockerfile Instructions
https://docs.docker.com/reference/builder/
FROM image → what base image to build on
• hub.docker.com official library
• scratch
MAINTAINER → who is responsible
LABEL → extra metadata
4-6
ENV and WORKDIR
ENV → set an environment variable in containers based on this image
WORKDIR → sets a working directory for other instructions
File: Dockerfile
ENV var1=val1 var2="value two" \
var3=val\ 3
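A short sketch showing how WORKDIR affects later instructions (the paths are illustrative):
File: Dockerfile
ENV APP_HOME=/opt/app
# Relative paths in later RUN, COPY, ADD, CMD, and ENTRYPOINT instructions resolve from here
WORKDIR /opt/app
RUN touch app.conf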
4-7
Running Commands
RUN → commands to run within container
New layer created and commands run within a temporary container
If successful, layer is committed and used for next step
Cache and non-idempotent commands
The RUN instruction can take the following two forms:
exec form: RUN ["command", "arg1", "arg2", "..."] ⇒ This allows running commands in images that lack a shell.
shell form: RUN commands ⇒ Passes the command(s) as an argument to /bin/sh -c allowing for shell expansion, redirects, etc.
Optimizing Number of Image Layers
When using RUN instructions, seek a balance in the number of layers generated. Commands should be grouped logically and ordered in a sequence that allows lower layers to be reused by other images. When grouping commands, use shell line continuation to improve readability (see the sketch at the end of this section).
When ordering RUN instructions, remember that changing the line will result in that layer and all later layers being rebuilt (instead of using the cache). If practical, put commonly edited RUN instructions near the end of the Dockerfile to minimize the number of layers that must be rebuilt.
Most commands result in the exact same action every time they are run. For commands that are not deterministic, and may produce a different result, remember that Docker has no way of detecting this since it only uses a checksum of the instruction line. The most common example of this is updating the package list (e.g. yum update, or apt-get update). To force a rebuild, either use the docker build --no-cache option, or update a previous line to force the change; for example:
File: Dockerfile
ENV update_date="Wed Jul 15 11:06:33 MDT 2015"
RUN yum update && yum install -y
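As an illustration of grouping related commands with shell line continuation so they are committed as a single layer (the package names are only examples):
File: Dockerfile
RUN yum -y update && \
    yum -y install httpd php && \
    yum clean all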
4-8
Getting Files into the Image
COPY → copies files or directories into container
ADD → expands local or remote files into container
• source can be a tar archive (optionally compressed with gzip,
bzip2, or xz)
• source can be URL such as
http://server1.example.com/conf/sshd.conf
If the source is a list of files, then the destination files created in the image will be owned by root:root, but will retain their original mode. If the file ownership is critical, it can be modified by a later RUN directive, or the ADD instruction can be used instead to extract a tar archive.
The following example shows using the ADD instruction to create a new image from a local tar file and then download an application and data into it:
File: Dockerfile
FROM scratch
ADD rootfs.tar.xz /
ADD http://example.com/proj1/app1 /code
ADD http://example.com/proj1/data1.gz /data
4-9
Defining Container Executable
CMD → primarily defines default command for image
• exec form
• shell form
• parameter form
ENTRYPOINT → treat container as a command (allows passing of
arguments)
• exec form
• shell form
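A brief sketch contrasting the forms (the echo command is purely illustrative; only one CMD would be active in a real Dockerfile):
File: Dockerfile
# exec form: runs the binary directly, no shell required
CMD ["/bin/echo", "hello"]
# shell form: equivalent to /bin/sh -c "echo hello"
# CMD echo hello
# parameter form: supplies default arguments to an ENTRYPOINT
# ENTRYPOINT ["/bin/echo"]
# CMD ["hello"]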
4-10
Best Practices
Keep images minimal
• one process per container, use linking
• no need for SSH, Vim, or even /bin/sh in every image
Examine official library Dockerfiles
• https://registry.hub.docker.com/search?q=library
Best Practices
Containers are designed to be ephemeral. Avoid creating images that
require significant setup or manual configuration. Keep the content of
an image to a minimum. Full OS images are rarely needed and not
the primary use case for Docker. Even basic tools like a text editor
can often be omitted from images. Content that must be modifiable
can be placed in a volume and either edited from the host system, or
edited by creating an editor image and launching it with the --volumes-from
option.
The Docker Hub includes many official images that are well cared for.
The Dockerfiles used to build the images can be examined for ideas
and a better understanding of best practices. It is recommended that
you examine the associated Dockerfile and any published
documentation for any base images you use.
Maintaining the official image repository is a community effort. You
can submit new images for inclusion, or suggest updates to existing
images. See
https://docs.docker.com/docker-hub/official_repos/ for details.
4-11
Lab 4
Estimated Time: 45 minutes
4-12
Lab 4 Task 1: Dockerfile Fundamentals
Estimated Time: 45 minutes
Objectives
y Create simple Dockerfiles and use them to build images
y Understand the use of the cache and the effect of commands on image layers
y Make effective use of the .dockerignore file
y Understand the CMD and ENTRYPOINT Dockerfile directives
y Use the RUN directive to execute commands while building images
Requirements
b (1 station) c (classroom server)
Relevance
2) Remove all containers and images to ensure the lab starts at a known state:
# docker rm -f $(docker ps -aq)
. . . output omitted . . .
# docker rmi -f $(docker images -q)
. . . output omitted . . .
4) Create a simple Dockerfile that will use an empty base image (scratch), define a
maintainer (insert your own name), and copy a single file into the image:
File: Dockerfile
+ FROM scratch
+ MAINTAINER "your_name"
+ COPY file1 /
4-13
5) Create the referenced file and build the image using default options:
# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
<none> <none> 926d6f79ed13 4 seconds ago 6B
4-14
# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
image v1 926d6f79ed13 4 minutes ago 6B
Notice that the cached images are used instead of having to regenerate those
layers.
10) Examine the internal structure of the new image listing what layers it contains:
11) Modify the Dockerfile so that the files are added in a single copy:
4-15
File: Dockerfile
FROM scratch
MAINTAINER "your_name"
+ COPY file* /
- COPY file1 /
- COPY file2 /
12) Build the image using a new tag, and verify that it results in one less layer being
used:
4-16
Sending build context to Docker daemon 1.049GB Notice the time spent uploading.
Sending build context to Docker daemon
. . . output omitted . . .
The full build context is uploaded to the Docker daemon (including bigfile) even
though it is not referenced by the Dockerfile or needed. This could take even
longer if the Docker daemon was on a remote host. Best practice is to avoid
putting anything in the build root that is not actually needed.
CMD Directives
17) Create the following two simple scripts in the build root:
File: script1
+ echo "script1"
File: script2
+ #!/bin/sh
+ echo "script2"
Notice that the first script lacks the magic interpreter line.
4-17
script2
From a shell on the host, the magic line is not essential in this case. If not
specified, the script is interpreted by the current shell.
File: Dockerfile
+ FROM server1:5000/busybox
+ MAINTAINER "your_name"
+ COPY script* /bin/
4-18
# docker run --rm image:v4 script2
script2 Works fine if interpreter is specified.
22) Define a default command for the container by adding the following:
File: Dockerfile
FROM server1:5000/busybox
MAINTAINER "your_name"
COPY script* /bin/
+ CMD ["/bin/sh", "-c", "/bin/script1"]
23) Rebuild the image and test by launching a container without specifying a
command:
24) Create another script in the build root and set it executable:
File: script3
+ #!/bin/sh
+ echo "from environment: $VAR1"
+ echo "first arg: $1"
+ echo "second arg: $2"
4-19
25) Edit the Dockerfile so that it has the following content:
File: Dockerfile
FROM server1:5000/busybox
MAINTAINER "your_name"
COPY script* /bin/
- CMD ["/bin/sh", "-c", "/bin/script1"]
+ ENV VAR1 foo
+ ENTRYPOINT ["/bin/script3"]
28) Edit the Dockerfile and add a CMD instruction to provide default arguments:
4-20
File: Dockerfile
FROM server1:5000/busybox
MAINTAINER "your_name"
COPY script* /bin/
ENV VAR1 foo
ENTRYPOINT ["/bin/script3"]
+ CMD ["bar", "baz"]
When used with an ENTRYPOINT, the CMD instruction defines default args
instead of the default command to execute.
29) Rebuild the image and try running without passing any arguments this time:
30) Run another container this time overriding the arguments by passing one on
execution:
32) Show that the other environment variables are unaffected during container
execution by overriding the entrypoint and executing the env command inside the
container:
4-21
# docker run --rm -e VAR1=flurp --entrypoint=/bin/sh image:v5 -c env
container_uuid=1ccf50da-3507-6bfe-fde5-fbb6390df906
HOSTNAME=1ccf50da3507
SHLVL=1
HOME=/root
VAR1=flurp
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/
RUN Directive
33) Remove the need to change permissions on the scripts before building the image
by moving that step into the image build script:
File: Dockerfile
FROM server1:5000/busybox
MAINTAINER "your_name"
COPY script* /bin/
ENV VAR1 foo
+ RUN chmod a+x /bin/script*
ENTRYPOINT ["/bin/script3"]
CMD ["bar", "baz"]
4-22
35) Create a completely new Dockerfile in your build directory with the following
content:
File: /root/build/Dockerfile
+ FROM server1:5000/ubuntu:latest
+ MAINTAINER "your_name"
+ RUN apt-get update && \
+ apt-get install -y --force-yes curl && \
+ rm -rf /var/lib/apt/lists/*
+ RUN curl http://server1.example.com/index.html > /tmp/doc
Note how commands can be chained in a single RUN directive. This results in
them being processed in a single image layer.
36) Build a new image and examine the output to see the specified commands
running:
# docker build -t image:v7 .
Sending build context to Docker daemon 7.168 kB
Sending build context to Docker daemon
Step 0 : FROM ubuntu:latest
Trying to pull repository server1:5000/ubuntu ...
09b55647f2b8: Download complete
6071b4945dcf: Download complete
5bff21ba5409: Download complete
e5855facec0b: Download complete
8251da35e7a7: Download complete
6b977291cadf: Download complete
baea68532906: Download complete
Status: Downloaded newer image for server1:5000/ubuntu:latest
---> 09b55647f2b8
Step 1 : MAINTAINER "your_name"
---> Running in 15972d816ff0
---> 010cde86f527
Removing intermediate container 15972d816ff0
Step 2 : RUN apt-get update && apt-get install -y --force-yes curl && rm -rf /var/lib/apt/lists/*
---> Running in b2a790656961
. . . snip . . .
Setting up curl (7.38.0-4+deb8u2) ...
Setting up libsasl2-modules:amd64 (2.1.26.dfsg1-13) ...
4-23
Processing triggers for libc-bin (2.19-18) ...
Processing triggers for ca-certificates (20141019) ...
Updating certificates in /etc/ssl/certs... 173 added, 0 removed; done.
Running hooks in /etc/ca-certificates/update.d....done.
---> c0f36195bcd5
Removing intermediate container b2a790656961
Step 3 : RUN curl http://server1.example.com/index.html > /tmp/doc
---> Running in eeba8f2ca473
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 29 100 29 0 0 737 0 --:--:-- --:--:-- --:--:-- 743
---> 8d6fad54cfe8
Removing intermediate container eeba8f2ca473
Successfully built 8d6fad54cfe8
37) Run the container and verify that the content was downloaded into the container:
4-24
Content
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Data-Link Layer Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Network Layer Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Hostnames and DNS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Local Host <--> Container . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Container <--> Container (same node) . . . . . . . . . . . . . . . . 8
Container <--> Container: Links . . . . . . . . . . . . . . . . . . . . . . 9
Container <--> Container: Private Network . . . . . . . . . . . 10
Managing Private Networks . . . . . . . . . . . . . . . . . . . . . . . . . 12
Remote Host <--> Container . . . . . . . . . . . . . . . . . . . . . . . 13
Multi-host Networks with Overlay Driver . . . . . . . . . . . . . . 14
Lab Tasks 16
1. Docker Networking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2. Docker Ports and Links . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3. Multi-host Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Chapter
5
DOCKER
NETWORKING
Overview
docker0 bridge
veth interface pairs
IPv4 and IPv6 supported
• routing
• NAT
/etc/sysconfig/docker-network
• $DOCKER_NETWORK_OPTIONS
Docker Default Networking
Docker has a flexible networking system. By default, it will place containers in their own isolated network namespace and then connect them to the host system (and each other) through a combination of virtual interfaces and a bridge. Nearly every aspect of Docker's network configuration can be changed or disabled (allowing for highly custom network configurations to be manually defined).
The central point of connectivity for containers at the data-link layer is a Linux bridge (called docker0 by default). When a new container is started, a virtual Ethernet (veth) interface pair is created. This pair of interfaces behaves like a tunnel where traffic sent in one interface is received on the other interface. Docker places one member of this interface pair into the container namespace where it becomes eth0 for that container. The other member of the pair is added to the docker0 bridge.
The following example output shows the state of the bridge when two containers are running:
# brctl show docker0
bridge name bridge id STP enabled interfaces
docker0 8000.56847afe9799 no veth2623067
At the network layer, Docker dynamically assigns IPv4 and/or IPv6 addresses to the container interfaces. Basic IP routing provides connectivity between containers and the host system. For connectivity to remote systems with IPv4, Docker can automatically expose specified ports through NAT rules. IPv6 enabled containers with a global address can simply route directly.
Passing Network Options to the Docker Daemon
Some of the network options must be specified when the Docker daemon is launched. Other container-specific options are defined when images are built, or containers are started. Although the Docker daemon can be started by hand, on a typical Docker host system, the startup will be integrated with the normal system startup scripts.
On RHEL 7, Docker uses a simple systemd unit file that references environment from the following files: /etc/sysconfig/docker{,-network,-storage}. Define network-related daemon options as values of the DOCKER_NETWORK_OPTIONS variable.
5-2
Data-Link Layer Details
-b|--bridge= → Alternate bridge
• OpenvSwitch, weave, flannel, etc.
--mtu= → Maximum transmittable unit
--mac-address= → VETH MAC addresses
--net= → Network namespace
Using an Alternate Switch
By default, data-link layer connectivity for Docker containers is provided by a Linux bridge (docker0) as previously described. This works well for connecting a set of containers that reside on a single host. If a more complex or feature-rich solution is needed, Docker's auto-creation of a bridge can be disabled and another piece of software can manage the connectivity. To have Docker use an existing bridge, start the daemon with the --bridge="alternate_bridge_name" option.
Connecting containers that reside on multiple hosts can be accomplished by creating tunnels between the hosts. One popular solution is to use OpenvSwitch to create a switch on each Docker host, and then connect them into a mesh with GRE tunnels.
Other multi-host networking solutions include flannel, which runs an additional daemon (flanneld) on each Docker host. Hosts are then connected via a back-end encapsulation and routing method (by default simple UDP encapsulation). Weave features include peer message exchanges that allow for rapid adaptation to network topology changes (such as failed links), crypto for data, multicast support, and connections to non-containerized applications.
Starting with Docker 1.9, multi-host networks are natively supported through the docker network create -d overlay command.
Maximum Transmittable Unit - MTU
Auto detection of PMTU can be unreliable (firewalls blocking ICMP fragmentation needed messages, etc.). Tunneling packets to remote Docker hosts will also add additional packet overhead which can result in additional fragmentation and possible packet loss. Calculate the MTU based on the encapsulation method used. For example, if using OvS GRE tunnels, you would have 1500 bytes, minus 14 bytes for Ethernet header, minus 20 bytes for IP header, minus 4 bytes for GRE header, yielding an interface MTU of 1462 bytes. Set MTU by passing the --mtu= option to the Docker daemon.
MAC Address Range
By default, Docker will assign MAC addresses from the following range: 02:42:ac:11:00:00 → 02:42:ac:11:ff:ff. Docker selects a MAC based on the IP address that will be assigned to the container's interface. MACs use the prefix of 02:42:ac:11 and then the final two octets of the IP address are converted to hex and mapped to the final four hex digits. For example, the IP of 172.17.30.42 would map to a MAC of 02:42:ac:11:1e:2a. To assign a specific MAC address, pass the --mac-address= option when creating a container.
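For example, a host tunneling container traffic over GRE might set the calculated MTU as a daemon option, and a specific MAC can be requested per container (values are illustrative):
File: /etc/sysconfig/docker-network
DOCKER_NETWORK_OPTIONS="--mtu=1462"
# docker run --mac-address=02:42:ac:11:1e:2a -ti busybox /bin/sh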
5-3
Network Namespace Considerations
When creating a container, the --net= option can be passed to
determine what network namespace the container will use. This
option accepts five modes:
bridge ⇒ container placed in a private network and UTS namespace.
A VETH interface pair will be created, one interface added to the
container namespace, and the other to the docker0 bridge. This is
the default.
none ⇒ container placed in a private network and UTS namespace.
No VETH interfaces are created, but a loopback interface will
exist. Any other network interfaces will need to be created
manually.
host ⇒ container remains in the root network and UTS namespaces.
No VETH interfaces are created, but all host interfaces are
accessible. Container will use the host's
/etc/{hosts,resolv.conf} files instead of the typical bind
mounted private files.
container:container_name ⇒ container placed in the same shared
network and UTS namespace as the specified container. No new
VETH interfaces are created, but any interfaces in the other
container are seen.
network_{name,id} ⇒ container connected to the specified user
defined network and to the docker_gwbridge.
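A few illustrative invocations of these modes (image and container names are arbitrary):
# docker run --net=host -ti busybox /bin/sh Shares the host's interfaces and hostname
# docker run --net=none -ti busybox /bin/sh Only a loopback interface is present
# docker run --net=container:web1 -ti busybox /bin/sh Joins the namespaces of the web1 container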
5-4
Network Layer Details
--bip= → IP assigned to Docker bridge
--fixed-cidr= → Range used for containers
--ipv6=true|false → Link-local IPv6 address for containers
--fixed-cidr-v6= → Globally routable IPv6 network prefix
File: /etc/sysconfig/docker-network
+ DOCKER_NETWORK_OPTIONS="--bip=10.200.0.1/16 --fixed-cidr=10.200.3.0/24"
5-5
Hostnames and DNS
-h|--hostname= → Set hostname
--add-host=[] → Add extra entries to /etc/hosts
--dns=[] → Specify DNS server for /etc/resolv.conf
--dns-search=[] → DNS search path for resolv.conf
Connecting Direct to Container Services from the Host System
Because the Docker host has a network interface (the docker0 bridge) in the same network as the containers, the host can connect directly to the IP:port of any service run by a container. This does not require publishing ports to the outside world; for example:
# docker run -d --name web1 nginx
c6bda13fe3e83832a3c716367b60b05c1b8197fba1c902f1bde8389cc639
# docker inspect -f '{{ .NetworkSettings.IPAddress }}' web1
172.17.0.1
# curl 172.17.0.1
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
. . . output omitted . . .
Connecting to Published Container Ports from the Host System
When the -P or -p run options are used, Docker will select an available port from the ephemeral range defined in /proc/sys/net/ipv4/ip_local_port_range and run a proxy on that port. Connections to that IP:port are processed by Netfilter DNAT rules to map the packets back to the container IP:port; for example:
# docker run -dP --name web2 nginx
f3dbedddb7632b413c574d062db26bdb2064d62d0d90fe2109a7af70178d
# docker port web2
443/tcp -> 0.0.0.0:32768
80/tcp -> 0.0.0.0:32769
# lsof -Pi -a -c docker
COMMAND PID USER FD TYPE DEVICE S/OFF NODE NAME
docker 16670 root 4u IPv6 363639 0t0 TCP *:32768 (LISTEN)
docker 16677 root 4u IPv6 363661 0t0 TCP *:32769 (LISTEN)
# iptables -t nat -nL DOCKER
Chain DOCKER (2 references)
target prot source destination
DNAT tcp 0.0.0.0/0 0.0.0.0/0 tcp dpt:32768 to:172.17.0.2:443
DNAT tcp 0.0.0.0/0 0.0.0.0/0 tcp dpt:32769 to:172.17.0.2:80
# curl 127.0.0.1:32769
. . . snip . . .
<title>Welcome to nginx!</title>
Changing the Host Proxy Binding Address
The host IP used when binding the proxy for published ports can be specified with the --ip= Docker daemon option. As with all daemon options, Docker must be restarted for it to take effect. For example, to have container services only bind to the loopback address:
File: /etc/sysconfig/docker-network
DOCKER_NETWORK_OPTIONS="--ip=127.0.0.1"
# docker run -dP --name web3 nginx
417f8884e0f014c60f095520c8094460659df15735547561df4ed879ef89
# docker port web3
443/tcp -> 127.0.0.1:32768
80/tcp -> 127.0.0.1:32769
5-7
Container <--> Container (same node)
Connect to direct container IP:port, or published host IP:port
--iptables=true|false → Manual or automatic firewall rules
--icc=true|false → Default ACCEPT or DROP policy
Firewall Considerations
By default, all networked containers have an interface placed in the docker0 bridge allowing connections between them. Firewall rules on the
host system can further limit connectivity between containers. If --iptables=false is set, then you must manage any firewall rules manually.
With the default of --iptables=true, Docker can create rules as containers are started based on the services they expose.
Assuming --iptables=true is set, the --icc= daemon option determines what the default policy is for traffic between containers. With the
default of --icc=true, all inter-container traffic is permitted as follows:
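One way to confirm the resulting policy (a sketch; the exact rules vary by Docker version) is to list the FORWARD chain and look for entries matching the docker0 bridge:
# iptables -nvL FORWARD | grep docker0
. . . output omitted . . .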
5-8
Container <--> Container: Links
--link=[] → Make new container aware of "exposed" services in
linked container
• environment variables reveal linked service info within container
• entry made in /etc/hosts for linked host
5-11
Managing Private Networks
docker network {connect,disconnect}
Name resolution on private networks
• docker-engine-1.9 → /etc/hosts automatically updated when
hosts are connected/disconnected on that network
• docker-engine-1.10 → embedded DNS server in docker daemon
Connecting Existing Containers to Private Networks
The docker network {connect,disconnect} commands can be used to dynamically add and remove existing containers from networks. When changes are made, the hosts file of all containers connected to that network is also updated to reflect the change.
In the following sequence, notice how host1 is connected to both the default bridge, and the explicitly connected net1 bridge, while host2 is only connected to the private net1 bridge:
# docker run -dtih host1 --name host1 busybox
c3cc32a0aa163153cddd17717fec0a3d5ef7f39269c3a226bb609acd0
# docker network create net1
7403a3a68acf51e6fd663539045b5cf48658af061cda14a3223a34c14
# docker network connect net1 host1
# docker exec host1 ip -4 addr li
. . . snip . . .
24: eth0@if25: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
inet 172.17.0.2/16 scope global eth0
valid_lft forever preferred_lft forever
27: eth1@if28: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
inet 172.18.0.2/16 scope global eth1
valid_lft forever preferred_lft forever
# docker run -dtih host2 --name host2 --net=net1 busybox
72f6957ff44db80a682483b7d0c6dc06d703146315ef651bc576ccffbd3ee574
# docker exec host2 ip -4 addr li
. . . snip . . .
31: eth0@if32: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
inet 172.18.0.3/16 scope global eth0
valid_lft forever preferred_lft forever
# docker exec host1 cat /etc/resolv.conf
search example.com
nameserver 127.0.0.11
# docker exec host1 ping -c1 host2
PING host2 (172.18.0.4): 56 data bytes
64 bytes from 172.18.0.4: seq=0 ttl=64 time=0.049 ms
. . . output omitted . . .
# brctl show
bridge name bridge id STP enabled interfaces
br-7403a3a68acf 8000.0242b97a1281 no veth6b441b8
vethb777f68
docker0 8000.0242253f74ac no veth19503eb
5-12
Remote Host <--> Container
--ip-forward=true|false → Controls kernel routing of packets
• /proc/sys/net/ipv4/conf/all/forwarding
--ip-masq=true|false → SNAT for outbound packets
-p, --publish=[] → Publish specific port
-P, --publish-all=true|false → Publish all exported ports
Routing of Packets
By default, Docker places containers on the 172.17.0.0/16 IP network (part of the RFC1918 internal space). Connections between the containers and remote hosts require packets to be routed to this internal network. By default, the Docker daemon configures the kernel to route packets. This can be disabled by setting --ip-forward=false as a daemon option.
Since the RFC1918 address space is not globally routed, most network configurations will also require packet readdressing via NAT before they leave the local host or network. The default setting of --ip-masq=true causes the Docker daemon to add a Netfilter rule sending all container traffic leaving the host to the MASQUERADE target, resulting in the traffic source address being changed to the outbound host IP.
Publishing Container Services and DNAT
Getting traffic from the containers to remote hosts is handled via normal routing, and SNAT as described. For return remote traffic to be routed back to the proper container, port forwarding via Netfilter DNAT rules is used. DNAT rules are put in place for all container services that are published via either the -p, or -P run options.
Individual container services can be published to remote hosts using the -p IP:host_port:container_port run option. In the following example output, a container is started mapping port 80 on the host system to port 80 in the container. Notice the Docker proxy listening on the published port on the host, and the corresponding DNAT rule to map the traffic back to the container service:
# docker run --name web1 -p 80:80 -d nginx
daf4672ab6353e56d0011f21460ef5f8addcc69e2eb8ca683ddb3c6da941
# docker inspect -f '{{ .NetworkSettings.IPAddress }}' web1
172.17.0.1
# docker port web1
80/tcp -> 0.0.0.0:80
# lsof -i :80
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
docker 11066 root 4u IPv6 540231 0t0 TCP *:http (LISTEN)
# iptables -t nat -L DOCKER
Chain DOCKER (2 references)
target prot opt source destination
DNAT tcp -- anywhere anywhere tcp dpt:http to:172.17.0.1:80
When a container is started using the -P (publish all) option, all ports listed in the image EXPOSE parameter, or specified by the --expose port run option, are mapped.
5-13
Multi-host Networks with Overlay Driver
docker network create -d overlay net_name
• Require kernel 3.16 or higher
• Key/Value store: Consul, Etcd, ZooKeeper
• Cluster of Docker hosts with connectivity to key/value store
• Each Docker host engine launched with appropriate
--cluster-store options
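A minimal sketch of the workflow, assuming a Consul key/value store is already reachable at consul1:8500 (the store address and the OPTIONS variable used by the RHEL 7 docker unit are illustrative assumptions):
File: /etc/sysconfig/docker
OPTIONS="--cluster-store=consul://consul1:8500 --cluster-advertise=eth0:2376"
# systemctl restart docker
# docker network create -d overlay net1
. . . output omitted . . .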
5-14
"Name": "net1",
"Id": "8a257a4240ff33f59334a76775cb30845c78cf3180ffb4110a5b4bab9957fcd6",
"Scope": "global",
"Driver": "overlay",
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{ "Subnet": "10.0.0.0/24",
"Gateway": "10.0.0.1/24" }
]
},
"Containers": {
"a7fe8a1bbcd46d192ea3b6d95386506ae9ed2b34bdf9a452167bd765de81b574": {
"Name": "host1",
"EndpointID": "e1b563dc87c47ee40576da53fd43b4edeefea0be1997dd694f0e4d56e3c50b53",
"MacAddress": "02:42:0a:00:00:02",
"IPv4Address": "10.0.0.2/24",
"IPv6Address": ""
}
},
"Options": {}
} ]
# brctl show
bridge name bridge id STP enabled interfaces
docker0 8000.0242e79082ad no
docker_gwbridge 8000.0242e2b618da no
Once created, the network is seen by all hosts connected to the common key/value store. Containers running on different hosts can then be
added to the network and communicate with one another:
[node1]# docker run -dtih container1 --net=net1 busybox
a7fe8a1bbcd46d192ea3b6d95386506ae9ed2b34bdf9a452167bd765de81b574
[node2]# docker run -dtih container2 --net=net1 busybox
[node1]# docker exec container1 ping -c3 container2
PING container2 (10.0.0.3) 56(84) bytes of data.
64 bytes from container2 (10.0.0.3): icmp_seq=1 ttl=64 time=0.348 ms
64 bytes from container2 (10.0.0.3): icmp_seq=2 ttl=64 time=0.244 ms
64 bytes from container2 (10.0.0.3): icmp_seq=3 ttl=64 time=0.268 ms
--- container2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2001ms
rtt min/avg/max/mdev = 0.244/0.286/0.348/0.048 ms
5-15
Lab 5
Estimated Time: 90 minutes
5-16
Lab 5 Task 1: Docker Networking
Estimated Time: 30 minutes
Objectives
y Specify networking options when launching a container
y Create and use private local networks
Requirements
b (1 station) c (classroom server)
Relevance
$ su -
Password: makeitso Õ
2) Launch a simple Debian container and examine the default network state:
5-17
Do not exit yet.
5) Launch another container overriding several network settings and examining the
effect:
# docker run --cap-add NET_ADMIN \ grant the container cap_net_admin
-ti -h node1 \ static hostname instead of random
--dns=8.8.8.8 --dns-search=example.com \ specified DNS instead of inheriting from host system
server1:5000/ubuntu
/# cat /etc/hosts
172.17.0.158 node1
. . . snip . . .
5-18
/# cat /etc/resolv.conf
nameserver 8.8.8.8
search example.com
/# capsh --print
Current: = cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,a
cap_net_bind_service,cap_net_admin,cap_net_raw,cap_sys_chroot,cap_mknod,cap_audit_write,cap_setfcap+eip
Bounding set =cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpca
ap,cap_net_bind_service,cap_net_admin,cap_net_raw,cap_sys_chroot,cap_mknod,cap_audit_write,cap_setfcap
Securebits: 00/0x0/1'b0
secure-noroot: no (unlocked)
secure-no-suid-fixup: no (unlocked)
secure-keep-caps: no (unlocked)
uid=0(root)
gid=0(root)
groups=
/# ip link add type veth Works because the cap_net_admin capability is present
/# ip link show
. . . snip . . .
2: veth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 0a:96:f2:3b:1b:f1 brd ff:ff:ff:ff:ff:ff
3: veth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 1e:c3:21:ee:a1:83 brd ff:ff:ff:ff:ff:ff
/# exit
6) Restart the container and check for the presence of the manually added network
interfaces:
5-19
7) Run a container that uses the network namespace of the host system instead of
an isolated one:
# docker run --name web --net=host -d server1:5000/nginx Launch container as daemon
. . . output omitted . . .
# lsof -i :80
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
nginx 3035 root 6u IPv4 2801789 0t0 TCP *:http (LISTEN) Containerized nginx listening on port 80 on the host
system
lsof: no pwd entry for UID 104 The nginx user is only defined within the container and
nginx 3046 104 6u IPv4 2801789 0t0 TCP *:http (LISTEN) not resolvable from the host system
# curl localhost
. . . snip . . .
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
8) Connect a shell to the running container to inspect it interactively from the inside:
# docker exec -ti web /bin/bash
/# ps -ef Container has other namespaces active such as PID
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 23:21 ? 00:00:00 nginx: master process nginx -g daemon off;
nginx 12 1 0 23:22 ? 00:00:00 nginx: worker process
root 13 0 0 23:32 ? 00:00:00 /bin/bash
root 17 13 0 23:32 ? 00:00:00 ps -ef
/# hostname
stationX.example.com The hostname in the container is that of the host
system
/# ip link show All host network interfaces are seen
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 52:54:00:02:00:03 brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
link/ether 56:84:7a:fe:97:99 brd ff:ff:ff:ff:ff:ff
/# exit
5-20
web
10) Launch a container and start a simple server within it listening on port 5000:
11) Start another container named nc-client that shares the same network
namespace as nc-server, and verify that you can connect to the service running
in the other container:
# docker rm -f nc-{server,client}
nc-server
nc-client
5-21
Private Inter-container Networks
13) Create two local networks that use the default bridge driver:
15) Launch another container, this time connected to the second private networks:
5-22
/# ping -c 1 172.18.0.2 Containers on different private networks can't connect to
PING 172.18.0.2 (172.18.0.2) 56(84) bytes of data. one another.
16) Create a third container initially connected to the standard docker0 bridge
network, and then connect it to both private networks after the fact:
/# cat /etc/resolv.conf
search example.com
nameserver 127.0.0.11 Note that this is NOT normal loopback, but a special
options ndots:0 DNS that Docker is running.
/# getent hosts c1
172.18.0.2 c1
/# ping -c 1 c1
PING c1 (172.18.0.2) 56(84) bytes of data.
64 bytes from c1.net1 (172.18.0.2): icmp_seq=1 ttl=64 time=0.036 ms
--- c1 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms
/# ping -c 1 c2
5-23
PING c2 (172.19.0.2) 56(84) bytes of data.
64 bytes from c2.net2 (172.19.0.2): icmp_seq=1 ttl=64 time=0.054 ms
--- c2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms
Containers in both private networks are reachable with container names resolved
via DNS. Prior to 1.10 release, name resolution was accomplished via /etc/hosts
file manipulation.
18) Try again to send packets between the first and second containers (routed
through the third container):
19) Find the Netfilter rules that are blocking the traffic and remove them to verify that
the first two containers can indeed communicate by routing packets through the
third:
5-24
3 10 840 DROP all -- br-962b15f712d2 br-1a8ba2305f8b anywhere anywhere
4 10 840 DROP all -- br-1a8ba2305f8b br-962b15f712d2 anywhere anywhere
. . . output omitted . . .
# iptables -D DOCKER-ISOLATION 3 Use the rule number of the first rule in the pair
identified by the matched traffic of 10 packets.
# iptables -D DOCKER-ISOLATION 3 Repeat the number since the deletion of the first rule
# docker exec c2 ping -c10 172.18.0.2 caused the line numbers to shift up and the second rule
PING 172.18.0.2 (172.18.0.2) 56(84) bytes of data. is now the same number that the first was.
64 bytes from 172.18.0.2: icmp_seq=1 ttl=63 time=0.094 ms
. . . snip . . .
64 bytes from 172.18.0.2: icmp_seq=9 ttl=63 time=0.073 ms
64 bytes from 172.18.0.2: icmp_seq=10 ttl=63 time=0.071 ms
--- 172.18.0.2 ping statistics ---
10 packets transmitted, 10 received, 0% packet loss, time 8999ms
rtt min/avg/max/mdev = 0.069/0.074/0.094/0.013 ms
With the isolation rules deleted, the containers can now communicate.
Cleanup
# docker rm -f c1 c2 c3
. . . output omitted . . .
# docker network rm net1
# docker network rm net2
5-25
Lab 5 Task 2: Docker Ports and Links
Estimated Time: 30 minutes
Objectives
y Understand the use of the EXPOSE Dockerfile instruction
y Use the -P option to automatically map ports
y Manually map ports with the -p option
y Connect containers via links
Requirements
b (1 station)
Relevance
$ su -
Password: makeitso Õ
# mkdir /root/bin
File: /root/bin/docker-ip
+ #!/bin/sh
+ exec docker inspect --format '{{ .NetworkSettings.IPAddress }}' "$@"
# rm -rf ~/build/
# mkdir ~/build
# cd ~/build
File: ~/build/Dockerfile
+ FROM server1:5000/nginx
+ COPY start.sh /
+ EXPOSE 80 443
+ CMD ["sh", "-c", "/start.sh"]
5-26
File: ~/build/start.sh
+ #!/bin/sh
+ echo "$HOSTNAME website" > /usr/share/nginx/html/index.html
+ exec nginx -g 'daemon off;'
6) Determine what ports the container processes are actually listening on, by
examining the container from the inside with the ss command:
5-27
# docker exec web ss -taun
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp LISTEN 0 128 *:80 *:*
Notice that the service is not listening on port 443 (only port 80). Remember that
the ports listed in the docker ps output are not an indication of which ports the
container is actually using.
9) Remove the container, and start a new one this time using the -P option:
# docker rm -f web
web
# docker run -dP --name web nginx:v2
9b93a1a55aed003e31ee583506636ef8bd6b2c2eb48c775bef29a6dd83e52a46
5-28
10) Again examine which internal container ports are mapped to host ports and
subsequently reachable from outside the container:
11) Try to connect to the mapped ports and access the service within the container:
5-29
13) Start a container with a manually defined port mapping and test:
14) Temporarily bind an additional IP to the host system's eth0 interface that is the
same as its main IP, but with 100 added to the final octet:
# IP2="10.100.0.$(hostname -i | awk -F. •{print $4+100}•)" Again store in variable for easy access.
# echo $IP2
10.100.0.10X
# ip addr add $IP2 dev eth0
# ip -4 addr show dev eth0| grep inet
inet 10.100.0.X/24 brd 10.100.0.255 scope global dynamic eth0
inet 10.100.0.X+100/32 scope global eth0
15) Launch containers with mappings bound to specific IPs, but random ports:
5-30
16) Launch containers with mappings bound to specific IPs, and assigned ports:
17) Examine the collective docker proxy processes listening for the six web containers
and the corresponding forwarding rules in the DNAT tables:
5-31
exe 30318 root 4u IPv4 3702389 0t0 TCP stationX+100.example.com:80 (LISTEN)
# iptables -t nat -nL DOCKER
Chain DOCKER (2 references)
target prot opt source destination
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:32785 to:172.17.0.41:80
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8000 to:172.17.0.42:80
DNAT tcp -- 0.0.0.0/0 10.100.0.3 tcp dpt:32769 to:172.17.0.43:80
DNAT tcp -- 0.0.0.0/0 10.100.0.103 tcp dpt:32768 to:172.17.0.44:80
DNAT tcp -- 0.0.0.0/0 10.100.0.3 tcp dpt:80 to:172.17.0.45:80
DNAT tcp -- 0.0.0.0/0 10.100.0.103 tcp dpt:80 to:172.17.0.46:80
5-32
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=3d46df778618
container_uuid=3d46df77-8618-d752-3c0c-95cc89a68997
MYSQL_ROOT_PASSWORD=pass
MYSQL_MAJOR=5.7
MYSQL_VERSION=5.7.12-1debian8
HOME=/root
21) Launch a temporary container that has a link to the MySQL db container, and
examine the changes made by the link option:
# docker run --rm -ti --link db:db mysql bash
/# env | grep DB
DB_NAME=/sleepy_engelbart/db
DB_PORT=tcp://172.17.0.1:3306
DB_PORT_3306_TCP_PORT=3306
DB_PORT_3306_TCP_PROTO=tcp
DB_ENV_MYSQL_ROOT_PASSWORD=pass
DB_PORT_3306_TCP_ADDR=172.17.0.1
DB_PORT_3306_TCP=tcp://172.17.0.1:3306
DB_ENV_MYSQL_VERSION=5.7.12-1debian8
DB_ENV_MYSQL_MAJOR=5.7
/# grep db /etc/hosts
172.17.0.1 db 3d46df778618
/# ping -c1 db
PING db (172.17.0.1): 48 data bytes
56 bytes from 172.17.0.1: icmp_seq=0 ttl=64 time=0.067 ms
--- db ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.067/0.067/0.067/0.000 ms
Environment variables and a host entry allow applications to identify where the
linked services from the db container are located.
22) Use the environment variables to provide the necessary info to connect to the
MySQL server instance in the other container:
/# mysql -h${DB_PORT_3306_TCP_ADDR} -P${DB_PORT_3306_TCP_PORT} -uroot -p${DB_ENV_MYSQL_ROOT_PASSWORD}
. . . snip . . .
mysql> \s
--------------
5-33
mysql Ver 14.14 Distrib 5.7.12, for Linux (x86_64) using EditLine wrapper
Connection id: 3
Current database:
Current user: [email protected]
SSL: Not in use
Current pager: stdout
Using outfile: ••
Using delimiter: ;
Server version: 5.7.12 MySQL Community Server (GPL)
Protocol version: 10
Connection: 172.17.0.9 via TCP/IP
Server characterset: latin1
Db characterset: latin1
Client characterset: latin1
Conn. characterset: latin1
TCP port: 3306
Uptime: 2 min 52 sec
Threads: 1 Questions: 5 Slow queries: 0 Opens: 67 Flush tables: 1 Open tables: 60 Queries per second avg: 0.029
--------------
mysql> \q
Bye
/# exit
#
24) Launch another container that has links to both of the database containers:
# docker run -ti --name db-client --link db:db1a Note the use of an alias (db1) for the db container to
--link db2:db2 mysql /bin/bash make variable names more uniform.
/# grep db /etc/hosts Host entries exist for both containers
172.17.0.1 db1 3d46df778618 db
172.17.0.3 db2 089d70b4f8c7
/# env | grep DB | sort
DB1_ENV_MYSQL_MAJOR=5.7
DB1_ENV_MYSQL_ROOT_PASSWORD=pass
5-34
DB1_ENV_MYSQL_VERSION=5.7.12-1debian8
DB1_NAME=/sad_mcclintock/db1
DB1_PORT=tcp://172.17.0.1:3306
DB1_PORT_3306_TCP=tcp://172.17.0.1:3306
DB1_PORT_3306_TCP_ADDR=172.17.0.1
DB1_PORT_3306_TCP_PORT=3306
DB1_PORT_3306_TCP_PROTO=tcp
DB2_ENV_MYSQL_MAJOR=5.7
DB2_ENV_MYSQL_ROOT_PASSWORD=pass2
DB2_ENV_MYSQL_VERSION=5.7.12-1debian8
DB2_NAME=/sad_mcclintock/db2
DB2_PORT=tcp://172.17.0.3:3306
DB2_PORT_3306_TCP=tcp://172.17.0.3:3306
DB2_PORT_3306_TCP_ADDR=172.17.0.3
DB2_PORT_3306_TCP_PORT=3306
DB2_PORT_3306_TCP_PROTO=tcp
Ó¿pÓ¿q Detach from the terminal, but leave the client container
running.
25) Restart one of the database server container instances and compare the IP it has
before and after the restart:
# docker-ip db
172.17.0.1
# docker stop db
db
# docker run -d --name sleeper server1:5000/busybox /bin/sleep 8000
. . . output omitted . . .
# docker start db
db
# docker-ip db
172.17.0.5
Remember that container IP addresses are assigned from a dynamic pool by
default and subject to change. The new pluggable IP address management (IPAM)
system introduced in version 1.9 gives control over this and allows different
backend modules to use different schemes (including just assigning addresses
statically).
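For example, a fixed address can be requested when a container is attached to a
user-defined network (a sketch, assuming a 1.10 or later engine; the network name,
subnet, and address are illustrative):
# docker network create --subnet 192.168.50.0/24 staticnet
. . . output omitted . . .
# docker run -dti --net staticnet --ip 192.168.50.10 --name fixed server1:5000/busybox
. . . output omitted . . .
# docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' fixed
192.168.50.10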
26) Reconnect to the client container that was linked to the two database server
containers and see if the new IP is recorded there:
5-35
# docker attach db-client
Õ
/# echo $DB1_PORT_3306_TCP_ADDR
172.17.0.1 Old IP :(
/# grep db1 /etc/hosts
172.17.0.5 db1 3d46df778618 db New IP :)
/# exit
Whenever possible, use the hostnames, and not the link environment variables, to
refer to linked hosts since they are updated dynamically if a linked container is
restarted.
Cleanup
5-36
Lab 5 Task 3
Multi-host Networks
Estimated Time: 30 minutes
Objectives
y Examine how overlay networks are built and function
y Use the overlay driver to create Docker networks that span multiple
hosts
y Connect and disconnect containers to multi-host networks
Requirements
bb (2 stations) c (classroom server)
Relevance
$ su -
Password: makeitso Õ
2) Edit the docker daemon config options on your first node so that it uses the
shared Consul server:
File: /etc/sysconfig/docker
--registry-mirror http://server1:5000 \
--insecure-registry 10.100.0.0/24 \
--storage-driver=devicemapper \
→ --storage-opt=dm.thinpooldev=/dev/mapper/vg0-dockerpool• \
+ --cluster-store=consul://server1.example.com:8500 \
+ --cluster-advertise=eth0:2376•
3) Verify that the daemon restarts without error, and is using the defined cluster
store:
5-37
CGroup: /system.slice/docker.service
13011 /usr/bin/docker daemon -H fd:// --registry-mirror http://server1:5000a
--insecure-registry 10.100.0.0/24 --cluster-store=consul://server1.example.com:8500 --cluster-advertise=eth0:2376
. . . output omitted . . .
# docker info 2>/dev/null | tail -n 2
Cluster store: consul://server1.example.com:8500
Cluster advertise: 10.100.0.X:2376
4) Configure your second node to also use the classroom Consul server by editing
its config:
File: /root/.docker/machine/machines/stationY.example.com/config.json
. . . snip . . .
"HostOptions": {
"Driver": "",
"Memory": 0,
"Disk": 0,
"EngineOptions": {
→ "ArbitraryFlags": [],
→ "ArbitraryFlags": [
+ "cluster-store=consul://server1.example.com:8500",
+ "cluster-advertise=eth0:2376"
+ ],
5-38
[node2]# exit
For most Docker Machine drivers, the docker-machine restart host_name (which
reboots the host) would be used after updating the config. For the generic SSH
driver used in the lab, this will not work.
6) Create a multi-host network from your first node. Since the entire class is sharing
a single Key/Value store, the network name must be unique within the classroom.
Be sure to replace X and Y with the station numbers of your assigned nodes:
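A sketch of the likely command, assuming the overlay driver and the sX-sY name used in the cleanup at the end of this task:
[node1]# docker network create -d overlay sX-sY
. . . output omitted . . .
[node1]# docker network ls
. . . output omitted . . .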
5-39
7) Launch a container attached to the overlay network:
[node1]# docker run -dti --net=s1-s2 --name=c1 server1:5000/busybox
fad3097a6c9a6266c86714225e46ba5546abf84c49071aa806b5e310fc5d65c6
10) Launch a container on your second node that is also connected to the overlay
network, and examine its interfaces and addresses:
[node1]# docker run -dti --net=s1-s2 --name=c2 server1:5000/busybox
5-40
[node1]# docker exec c2 ip -f inet addr li
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
6: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
inet 10.0.0.3/24 scope global eth0
valid_lft forever preferred_lft forever
9: eth1@if10: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
inet 172.18.0.2/16 scope global eth1
valid_lft forever preferred_lft forever
11) Capture traffic on the standard VXLAN UDP port to show the encapsulated traffic
flowing across the overlay network:
[node1]# tcpdump -nei eth0 udp port 4789 &
[1] 26390 Õ
[node1]# docker exec c2 ping -c1 c1 &>/dev/null
18:07:22.069084 02:52:00:13:01:02 > 02:52:00:13:01:01, ethertype IPv4 (0x0800), length 148: 10.100.0.2.51734a
> 10.100.0.1.4789: VXLAN, flags [I] (0x08), vni 256
02:42:0a:00:00:03 > 02:42:0a:00:00:02, ethertype IPv4 (0x0800), length 98: 10.0.0.3 > 10.0.0.2: ICMP echo a
request, id 10496, seq 0, length 64
18:07:22.069230 02:52:00:13:01:01 > 02:52:00:13:01:02, ethertype IPv4 (0x0800), length 148: 10.100.0.1.41434a
> 10.100.0.2.4789: VXLAN, flags [I] (0x08), vni 256
02:42:0a:00:00:02 > 02:42:0a:00:00:03, ethertype IPv4 (0x0800), length 98: 10.0.0.2 > 10.0.0.3: ICMP echo a
reply, id 10496, seq 0, length 64
[node1]# kill %1
2 packets captured
2 packets received by filter
0 packets dropped by kernel
Õ
[1]+ Done tcpdump -nei eth0 udp port 4789
Note that a single ping (ICMP echo-request/echo-reply) is tunneled inside the
VXLAN connection over UDP port 4789. The MAC/IP addresses of the "inner"
frames/packets are those of the eth0 interfaces of the containers. The MAC/IP
addresses of the "outer" frames/packets are those of the eth0 interfaces of the
nodes.
5-41
11:28:36.823114 02:52:00:13:01:03 > 02:52:00:13:01:01, ethertype IPv4 (0x0800), length 81: 10.100.0.3.7946a
> 10.100.0.1.7946: UDP, length 39
11:28:36.823257 02:52:00:13:01:01 > 02:52:00:13:01:03, ethertype IPv4 (0x0800), length 55: 10.100.0.1.7946a
> 10.100.0.3.7946: UDP, length 13
11:28:37.517141 02:52:00:13:01:01 > 02:52:00:13:01:03, ethertype IPv4 (0x0800), length 81: 10.100.0.1.7946a
> 10.100.0.3.7946: UDP, length 39
. . . snip . . .
15 packets captured
15 packets received by filter
0 packets dropped by kernel
Serf over TCP/UDP 7946 is used to propagate the MAC address info needed to
populate the VXLAN bridge forwarding tables (to permit unicast frames) and is an
alternative to using traditional multicast routing and multicast frames to carry
VXLANs.
Cleanup
[node1]# docker rm -f c2
c2
[node1]# unset ${!DOCKER_*}
[node1]# docker rm -f c1
c1
[node1]# docker network rm sX-sY
[node1]# docker network ls
NETWORK ID NAME DRIVER
697b1747f06b bridge bridge
49ee96048008 none null
3dfc2fe3a2de host host
52b52168c13d docker_gwbridge bridge
. . . output omitted . . . Depending on the progress of other students, you may
still see their overlay networks listed, but your network
should be gone.
5-42
Content
Volume Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Creating and Using Volumes . . . . . . . . . . . . . . . . . . . . . . . . . 3
Managing Volumes (cont.) . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Changing Data in Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Removing Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Backing up Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
SELinux Considerations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Mapping Devices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Lab Tasks 10
1. Docker Volumes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Chapter
6
DOCKER VOLUMES
Volume Concepts
Bypasses union file system
• mounts directory from host within container
Decouples data life cycle from container life cycle
Share data between containers
Volume Concepts
As previously explored, the filesystem image presented to a container
consists of one or more layers that are combined together using a
union file system driver. The combination of these layers is mounted
read-only and then an additional write layer is mounted on top of that
to hold changes. When a file within a lower layer is modified, a copy
is made within the top read-write layer and then changed. Unless
committed, the top layer is discarded when the container is
terminated.
Volumes provide a way to bypass the normal union file system and
attach a data directory from the host system. This has several
advantages as it decouples the life cycle of the data volume from the
life cycle of the container. Best practice for containers is to design
them so that they can be easily discarded and new containers
launched (ex. when upgrading a software component within the
container). Volumes facilitate:
y Persisting data independent of the container life cycle.
y A convenient way to share data between containers (since a
volume can be mounted by multiple containers simultaneously).
y Exposing the data to the host system (ex. to allow easy editing
of the data from the host).
6-2
Creating and Using Volumes
-v /container_dir
• VOLUME Dockerfile instruction
• Data in /var/lib/docker/volumes/vol_hash_id/_data
• Meta-data linking container to volume:
/var/lib/docker/containers/container_id/config.json
-v /host_{dir,file}:/container_{dir,file}
• bind mounts the host directory or file inside the container
Using a data volume container
• --volumes-from=container
:ro or :rw
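The following sketches illustrate these forms (image, container, and path names are examples only):
# docker run -d -v /data --name data1 server1:5000/busybox sleep 3600         anonymous internal volume
# docker run -d -v /srv/web:/usr/share/nginx/html:ro server1:5000/nginx       bind mount a host directory read-only
# docker run -d --volumes-from=data1 server1:5000/busybox sleep 3600          reuse the volumes of the data1 container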
6-3
Managing Volumes (cont.)
docker volume {create,rm}
• Added in 1.9 release
docker volume {inspect,ls}
• Added in 1.10 release
Supports growing list of volume drivers, many which allow volume to
span multiple nodes
Docker Volume Command
Starting with the 1.9 release, the Docker engine supports a new
volume sub-command. This allows named volumes to be created and
managed without requiring the volumes to be tied to a container.
Subsequently, the older method of creating "data volume containers"
is no longer considered best practice. The following example shows
creating a named volume and then attaching a container to it later.
Note that this volume will persist even if all containers referencing it
are removed with the -v option:
# docker volume create --name datavol1
datavol1
# docker volume inspect datavol1
[
    {
        "Name": "datavol1",
        "Driver": "local",
        "Mountpoint": "/var/lib/docker/volumes/datavol1/_data"
    }
]
# docker run -v datavol1:/data -ti --rm busybox
/ # echo "example data" > /data/file
/ # exit
# cat /var/lib/docker/volumes/datavol1/_data/file
example data
Volumes can be listed with optional filters, and removed. In the
following example, the first three volumes are in use by containers
and the fourth is an orphaned volume no longer in use:
# docker volume ls
DRIVER VOLUME NAME
local datavol3
local datavol4
local datavol1
local datavol2
# docker volume rm datavol1
Error response from daemon: Conflict: remove datavol1: volume is in use
# docker volume ls -f dangling=true -q
datavol4
# docker volume rm datavol4
datavol4
6-4
Changing Data in Volumes
Edit data within container
• Disadvantage: requires editor within container
Edit on host via /var/lib/docker/volumes/volume_{name,id}/*
• Disadvantage: difficult to identify directory
• May not have access to host system
Use --volumes-from with "editor container"
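For example, a short-lived "editor container" can mount the volumes of another container so the data can be changed with full-featured tools (a sketch; container names are illustrative and the image used must provide an editor):
# docker run --rm -ti --volumes-from=web server1:5000/ubuntu /bin/bash
/# vi /usr/share/nginx/html/index.html
/# exit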
6-5
Removing Volumes
docker rm -v container
• removes volume if no other containers reference it
docker volume rm volume_{name,id}
Removing Volumes
When a container is removed, all volumes associated with the
container can be deleted by using the -v option as shown in the
following example:
$ docker run -v /root/code1:/app1 -v /root/code2:/app2 --name code_vol ubuntu echo "Data container for code"
$ ls /var/lib/docker/volumes/
10b42870c37cf6bb53961ffa369028c83902cbf561137877a9886276dc840291
e68bfb7b2ad5193f78e72e3b33aa07a6dd0fdcae18cd4dc33cd61b28c5928873
$ docker run --name app1 -d --volumes-from code_vol ubuntu /app1/main
$ docker rm -fv app1 #running container deleted, but volumes remain in spite of -v
$ ls /var/lib/docker/volumes/
10b42870c37cf6bb53961ffa369028c83902cbf561137877a9886276dc840291
e68bfb7b2ad5193f78e72e3b33aa07a6dd0fdcae18cd4dc33cd61b28c5928873
$ docker rm -v code_vol #final reference to volumes removed with -v option, so volumes removed
$ ls /var/lib/docker/volumes/
Orphaned Volumes
Since one of the primary purposes of volumes is the persistent
storage of data, as a precaution they are never removed by default.
This can easily result in orphaned volumes. Discover these using the
docker volume ls -f dangling=true command as previously shown.
6-6
Backing up Volumes
docker run --volumes-from
• Allows use of full featured container to perform backup
docker cp container:/src_file /dst_file
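A minimal sketch of both approaches (container, volume path, and file names are illustrative):
# docker run --rm --volumes-from=db -v /root/backup:/backup server1:5000/ubuntu \
    tar -czf /backup/db-data.tar.gz /var/lib/mysql
# docker cp db:/var/lib/mysql/ibdata1 /root/ibdata1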
6-8
Mapping Devices
Don't map devices as volumes on SELinux enabled hosts
• normal host labels will block access to device within container
• using -Z to relabel will block access to device from host
--device=host_devname:container_devname
• does not use bind mounting, creates separate device file
• host device untouched, internal device labeled correctly
• may require additional capabilities within container
6-9
Lab 6
Estimated Time: 30 minutes
6-10
Lab 6 Task 1
Docker Volumes
Estimated Time: 30 minutes
Objectives
y Run a container with a persistent internal data volume attached
y Attach a host directory to a container as a volume
y Attach a host file to a container as a volume
y Create and use a data-volume container
y Map a host device into a container
Requirements
b (1 station)
Relevance
3) Download and inspect the mysql image to see what internal volumes it defines:
# docker pull server1:5000/mysql
Trying to pull repository server1:5000/mysql ...
319d2015d149: Download complete
64e5325c0d9d: Download complete
. . . snip . . .
107c338c1d31: Download complete
Status: Downloaded newer image for server1:5000/mysql:latest
# docker tag server1:5000/mysql mysql
# docker inspect -f •{{ .Config.Volumes }}• mysql
map[/var/lib/mysql:{}]
When containers are created from this image, Docker will create an internal
volume that will be mounted at /var/lib/mysql/ within the container.
6-11
4) Start a container and verify that the volume required by the image config is
created by the Docker daemon:
5) Identify the directory storing that volume and list its contents:
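A sketch of likely commands for these two steps (the db container name matches the removal below; the volume hash in the output will differ on your system):
# docker run -d --name db -e MYSQL_ROOT_PASSWORD=pass mysql
. . . output omitted . . .
# docker inspect -f '{{ range .Mounts }}{{ .Source }}{{ end }}' db
. . . output omitted . . .
# ls /var/lib/docker/volumes/*/_data/
. . . output omitted . . .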
# docker rm -fv db
db
If the container is removed without specifying the -v option then its volumes are
orphaned. They still exist, but must be located with docker volume ls -f dangling=true
(or by looking under /var/lib/docker/volumes/) and removed with docker volume rm.
Additionally, when removing a container, there is currently no Docker
command to remove specific volumes but leave others.
6-12
7) Create a container with a persistent external data volume:
8) Restart the container and verify that the data in the volume is persistent across
container runs:
9) Access the file contained in the data volume directly from the host system:
# cat /var/lib/docker/volumes/*/_data/file
persistent data
10) Start another container that uses the volumes defined by the data1 container:
6-13
# docker rm -v data1 Docker doesn't remove the volume even though the -v
data1 option is used because another container references the
# docker start -i data2 volume.
persistent data
# docker rm -v data2
data2
# ls /var/lib/docker/volumes/ When the last container referencing a volume is
# docker volume ls removed, the volume is removed if the -v option is
DRIVER VOLUME NAME used, otherwise the volume becomes orphaned.
12) Create a new project directory and related files to host a simple web application:
# mkdir -p ž/webapp/content
# cd ž/webapp
# echo "container web content" > content/index.html
13) Start a container that mounts the content/ directory as the document root for the
web server and access the content:
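A sketch of a likely command for this step, following the same pattern used (with SELinux labels) later in this task; the web container name is illustrative:
# docker run --name web -dP -v /root/webapp/content:/usr/share/nginx/html server1:5000/nginx
. . . output omitted . . .
# curl $(docker-ip web)
container web content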
14) Configure Docker to use SELinux enforcing on your first assigned node by adding
the following daemon option to the existing config:
File: /etc/sysconfig/docker
. . . snip . . .
--cluster-store=consul://server1.example.com:8500 \
→ --cluster-advertise=eth0:2376• \
+ --selinux-enabled•
6-14
15) Restart the container and test whether it can access the content with SELinux
now enabled:
16) Start a new container this time running as a protected sVirt type, and attempt to
access the content (will fail):
6-15
17) Start a new container requesting the volume content to be relabeled for exclusive
access by this container, and test access:
# docker run --name web3 -dP -v /root/webapp/content:/usr/share/nginx/html:Z server1:5000/nginx
3a76da1274457e3b7121c2788d56d94f3c8768385ae2786afedb22cbadf76c45
# docker inspect web3 | egrep •(Mount|Process)Label•
"MountLabel": "system_u:object_r:svirt_sandbox_file_t:s0:c360,c613",
"ProcessLabel": "system_u:system_r:svirt_lxc_net_t:s0:c360,c613",
# ls -lZ ž/webapp/content/
-rw-r--r--. root root system_u:object_r:svirt_sandbox_file_t:s0:c360,c613 index.html
# curl $(docker-ip web3)
container web content
# docker stop web3
The files within the volume are properly labeled because of the :Z prefix used
when declaring the volume.
18) Create two new containers that use a shared sVirt label to share access to
SELinux protected volume content:
# docker run --name web4 -dp 8000:80 -v /root/webapp/content:/usr/share/nginx/html:z server1:5000/nginx
2f0734ad2bd72551c7e83bd033dbc031403b11eb93b8c661d585bd848445f75c
# docker run --name web5 -dp 8001:80 -v /root/webapp/content:/usr/share/nginx/html:ro,z server1:5000/nginx
49a71c0919341232503f47e65eedde1ed2558183e3025b7ce8480fbc0d3bd632
# ls -Z content/
-rw-r--r--. root root system_u:object_r:svirt_sandbox_file_t:s0 index.html
# curl localhost:8000
container web content
# curl localhost:8001
container web content
Both containers can access the content because it was labeled without restricting
sVirt categories. The content is still protected from other apps due to the
svirt_sandbox_file_t type label.
19) Try creating content from within the containers to test the operation of mounting
the volume RW vs. RO:
# docker exec web4 bin/sh -c "echo web4 > /usr/share/nginx/html/web4"
# docker exec web5 bin/sh -c "echo web5 > /usr/share/nginx/html/two"
bin/sh: 1: cannot create /usr/share/nginx/html/two: Read-only file system
6-16
20) Compare the config info related to the bind mounted volumes from the host:
22) Launch a container that has an individual file mounted from the host system:
# touch ž/container_history File must exist or Docker will create a directory by that
# docker run -ti --rm -v /root/container_history:/root/.bash_history:Za name
server1:5000/ubuntu /bin/bash
/# ls -l
. . . output omitted . . .
/# ps -ef
. . . output omitted . . .
/# exit
23) Verify the command history was recorded to the persistent file on the host:
# cat ž/container_history
ls -l
ps -ef
exit
6-17
24) Launch a container that runs unconfined and logs to the logging socket of the
host system:
# docker run --rm -ti --security-opt=label:disable -v /dev/log:/dev/log server1:5000/ubuntu
/# ls -lZ /dev/log
srw-rw-rw-. 1 root root system_u:object_r:devlog_t:s0:c0 0 Aug 19 21:36 /dev/log
/# logger "Test message from within container"
/# exit
exit
# journalctl -rn1 _COMM=logger
-- Logs begin at Wed 2015-08-19 15:36:05 MDT, end at Fri 2015-08-21 12:21:20 MDT. --
Aug 21 12:21:08 stationX.example.com logger[1577]: Test message from within container
25) Map the host primary disk device into a container as a read only device:
# docker run --rm -ti --device=/dev/sda:/dev/sdb:r server1:5000/ubuntu
/# ls -l /dev/sdb
brw-rw----. 1 root disk 8, 0 Aug 21 18:26 /dev/sdb Device file shows read/write permissions.
/# fdisk /dev/sdb
You will not be able to write the partition table. fdisk program correctly detects and reports that writes
are actually blocked.
Command (m for help): p
. . . snip . . .
Device Boot Start End Blocks Id System
/dev/sdb1 * 2048 1026047 512000 83 Linux
/dev/sdb2 1026048 72706047 35840000 8e Linux LVM
Command (m for help): q
/# exit
26) Map the host primary disk device into a container this time read/write:
# docker run --rm -ti --device=/dev/sda:/dev/sdb:rw server1:5000/ubuntu
/# ls -l /dev/sdb
brw-rw----. 1 root disk 8, 0 Aug 21 18:26 /dev/sdb Device file shows read/write permissions.
/# fdisk /dev/sdb
fdisk program does not display warning that writes are
Command (m for help): q blocked.
/# exit
6-18
Content
Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Compose CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Defining a Service Set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Docker Swarm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Lab Tasks 7
1. Docker Compose . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2. Docker Swarm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Chapter
7
DOCKER
COMPOSE/SWARM
Concepts
Define and run multi-container applications
• currently all containers on a single host
docker-compose.yml → describes a set of containers
docker-compose build → build set
docker-compose up → start set
7-2
Compose CLI
docker-compose command
• operates on set of containers
• looks for ./docker-compose.yml
https://docs.docker.com/compose/cli/
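Commonly used subcommands include the following (a sketch; each operates on the service set defined in ./docker-compose.yml):
# docker-compose up -d      build (if needed) and start the set in the background
# docker-compose ps         list the containers in the set
# docker-compose logs       view aggregated output from the set
# docker-compose stop       stop the set without removing the containers
# docker-compose rm         remove the stopped containers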
7-3
Defining a Service Set
docker-compose.yml
• https://docs.docker.com/compose/yml/
Requires either image or build
Remaining keys map to docker run options
docker-compose.yml
The heart of Docker Compose is the YAML formatted file that defines
the set of services to launch. This is looked for in the current
directory, and relative references to other files are relative to the
location of the file. The only required key is either image which
specifies the name of the image to use for the container, or build
which specifies the location of the Dockerfile to build an image
from. The remaining keys all correspond to options to the docker run
command.
The following table shows the relationship between a few common
keys used in a Compose YAML file, and the corresponding docker
run options:
docker-compose key docker run option
image: IMAGE[:TAG|@DIGEST]
command: [COMMAND] [ARG...]
entrypoint: --entrypoint=
links: --link
ports: -p
expose: --expose=
environment: --env=
volumes: -v
In addition to the keys that map to docker run options, Compose
also has a few keys that are unique to its operation. For example:
external_links: ⇒ defines a link to a container outside of the set
described by the local YAML file.
dockerfile: ⇒ an alternate name for the file used to build that
service. Allows a collection of Dockerfiles to share the same
directory.
extends: ⇒ an alternative to image: or build:, this essentially
provides a reference to another YAML file and service name and
then extends that definition.
labels: ⇒ equivalent to the LABEL directive in a Dockerfile. Allows
adding arbitrary metadata layers to the image used by that
service.
env_file: ⇒ adds environment variables from a file.
7-4
Docker Swarm
Clustering of Docker hosts
• based on Docker API
Needs key/value store for operation: Consul, Etcd, Zookeeper, Docker
Hub
Includes basic scheduler: spread, binpack, random
Filters implement constraints (label driven)
Can run in HA configuration: primary + secondary manager(s)
7-5
The second factor involved in scheduling containers is the use of filters which provide further constraints on the list of nodes the container is
allowed to run on. Current filters include:
Node filters ⇒ Attributes of the node in the form of arbitrary labels (specified when launching the Docker daemon for that node). Some labels
are defined automatically such as: node (name or ID of node), storagedriver, executiondriver, kernelversion, and operatingsystem.
Container affinity filters ⇒ Constrain new container to a node: running a particular container, which has the specified image, or has a
container with a specific label.
Container dependency filters ⇒ Constrain new container to a node which has a container matching the specified: --volumes-from, --link, or
--net=container run option.
Port filter ⇒ New container can only run on a node where the requested exposed port is not already taken by another container.
Filter expressions are specified at container run time using the -e option as if they were environment variables. For example: docker run -d -e
affinity:image==baseapp app1. The general form for filters is: [filter_type]:[filter_name][operator][filter_value].
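For example (a sketch using the syntax described above; node, container, and image names are illustrative):
# docker run -d -e constraint:node==station3.example.com server1:5000/nginx     node filter
# docker run -d -e affinity:container==web1 server1:5000/redis                  affinity to a running container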
7-6
Lab 7
Estimated Time: 80 minutes
7-7
Lab 7 Task 1
Docker Compose
Estimated Time: 60 minutes
Objectives
y Create a .yml config for Docker Compose that launches multiple
connected containers
y Build and run a set of containers using docker-compose
y Use docker-compose commands to manage a set of containers
Requirements
b (1 station) c (classroom server)
Relevance
$ su -
Password: makeitso Õ
7-8
(integer) 1
127.0.0.1:6379> incr hits
(integer) 1
127.0.0.1:6379> incr hits
(integer) 2
127.0.0.1:6379> del hits
(integer) 1
127.0.0.1:6379> quit Exit Redis CLi
/# exit Exit container
4) Create an empty directory to contain all the files for building this set of containers:
# mkdir ž/compose
# cd ž/compose
5) Create a Dockerfile that can be used for building a web application container:
File: ž/compose/Dockerfile
+ FROM server1:5000/python:2.7
+ ADD . /code
+ WORKDIR /code
+ RUN pip install --trusted-host=server1.example.com -r requirements.txt
+ CMD python app.py
The --trusted-host option is to allow pip to use the local server over HTTP
instead of requiring HTTPS and a trusted cert.
File: ž/compose/requirements.txt
+ --index-url=http://server1.example.com/packages/simple/
+ flask
+ redis
The --index-url option is to point pip to the locally hosted repo instead of the
normal Internet PyPI repo.
7-9
7) Create the referenced app.py file with the following content:
File: ž/compose/app.py
+ from flask import Flask
+ from redis import Redis
+ import os
+ app = Flask(__name__)
+ redis = Redis(host=•redis•, port=6379)
+
+ @app.route(•/•)
+ def hello():
+     redis.incr(•hits•)
+     return •Hello from container %s. Site accessed %s times.\n• % a
      (os.environ.get(•HOSTNAME•),redis.get(•hits•))
+
+ if __name__ == "__main__":
+     app.run(host="0.0.0.0", debug=True)
7-10
Collecting flask (from -r requirements.txt (line 2))
Downloading http://server1.example.com/packages/simple/flask/Flask-0.10.1.tar.gz (544kB)
. . . snip . . .
Successfully built flask redis itsdangerous MarkupSafe
Installing collected packages: Werkzeug, MarkupSafe, Jinja2, itsdangerous, flask, redis
Successfully installed Jinja2-2.8 MarkupSafe-0.23 Werkzeug-0.10.4 flask-0.10.1 itsdangerous-0.24 redis-2.10.3
---> 08b077cca263
Removing intermediate container d8fb4c472027
Step 4 : CMD python app.py
---> Running in e24a2cb7129c
---> 2b5d27e56990
Removing intermediate container e24a2cb7129c
Successfully built 2b5d27e56990
10) Remove both containers in preparation of building them with Docker Compose:
11) Create a new file with the following content (YAML files are very sensitive to
whitespace. Use spaces not tabs and align things carefully):
7-11
File: ž/compose/docker-compose.yml
+ myapp:
+   build: .
+   ports:
+     - "80:5000"
+   volumes:
+     - .:/code:Z
+   links:
+     - redis
+ redis:
+   image: redis
12) Install the Docker Compose command onto your system and verify it runs:
# curl server1:/packages/docker-compose > /usr/bin/docker-compose
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 216 100 216 0 0 1360 0 --:--:-- --:--:-- --:--:-- 1367
# chmod a+x /usr/bin/docker-compose
# docker-compose -v
docker-compose version 1.7.0, build 0d7bf73
# docker-compose -h
Define and run multi-container applications with Docker.
Usage:
docker-compose [options] [COMMAND] [ARGS...]
docker-compose -h|--help
. . . snip . . .
13) Use Docker Compose to build and launch the set of containers:
# docker-compose up
Creating compose_redis_1...
Building myapp...
Step 0 : FROM python:2.7
---> 0d1c644f790b
Step 1 : ADD . /code
---> 74aa96b220ec
Removing intermediate container a47d114f34b1
Step 2 : WORKDIR /code
7-12
. . . snip . . .
Step 4 : CMD python app.py
---> Running in f353e94e4a8f
---> 1e11bec6bae5
Removing intermediate container f353e94e4a8f
Successfully built 1e11bec6bae5
Creating compose_myapp_1...
Attaching to compose_redis_1, compose_myapp_1
redis_1 | 1:C 13 Jul 19:15:33.148 # Warning: no config file specified, using the default config.a
In order to specify a config file use redis-server /path/to/redis.conf
redis_1 | _._
redis_1 | _.- __ ••-._
redis_1 | _.- . _. ••-._ Redis 3.0.2 (00000000/0) 64 bit
. . . snip . . .
redis_1 | 1:M 13 Jul 19:15:33.150 * The server is now ready to accept connections on port 6379
myapp_1 | * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
myapp_1 | * Restarting with stat
Leave this terminal window running with both containers connected.
14) In another terminal window, verify that you can connect to the application and
that it is using the redis container for data storage:
# curl localhost
Hello from container 8f5ef700507. Site accessed 1 times.
# curl localhost
Hello from container 8f5ef700507. Site accessed 2 times.
15) List the location where the code volume maps to:
16) Edit the code changing it to display the hostname instead of the container ID:
7-13
File: ž/compose/app.py
def hello():
    redis.incr(•hits•)
→     return •Hello from host %s. Site accessed %s times.\n• % a
      (os.environ.get(•HOSTNAME•),redis.get(•hits•))
# curl localhost
Hello from host 045fdeace2f3. Site accessed 3 times.
18) Return to the original terminal where the containers are connected and stop them:
redis_1 | 1:M 13 Jul 19:22:02.727 * The server is now ready to accept connections on port 6379
myapp_1 | * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
myapp_1 | * Restarting with stat
Ó¿cGracefully stopping... (press Ctrl+C again to force)
Stopping compose_myapp_1... done
Stopping compose_redis_1... done
# docker-compose up -d
Starting compose_redis_1...
Starting compose_myapp_1...
20) List all containers that are defined by the docker-compose.yml in the current
directory:
# docker-compose ps
Name Command State Ports
----------------------------------------------------------------------------
compose_myapp_1 /bin/sh -c python app.py Up 0.0.0.0:80->5000/tcp
compose_redis_1 /entrypoint.sh redis-server Up 6379/tcp
21) Run a command inside one of the containers that is part of this compose group:
7-14
. . . snip . . .
172.17.0.2 compose_redis_1 1778a66a1cc4
172.17.0.2 redis 1778a66a1cc4 compose_redis_1
172.17.0.2 redis_1 1778a66a1cc4 compose_redis_1
172.17.0.3 compose_myapp_1 87633385b0c6
172.17.0.3 myapp 87633385b0c6 compose_myapp_1
172.17.0.3 myapp_1 87633385b0c6 compose_myapp_1
172.17.0.4 6c1f7fd6ad24
Note the additional aliases that include a number, for example redis_1. These are
used when the scale features of Docker Compose are invoked.
# docker-compose stop
Stopping compose_myapp_1...
Stopping compose_redis_1... done
23) Create a set of subdirectories to store the builds and move the application into
the new location:
# cd ž/compose
# mkdir {web,app}
# mv app.py requirements.txt Dockerfile app/
24) Create a new Dockerfile for building the Nginx web frontend:
File: ž/compose/web/Dockerfile
+ FROM nginx
+ COPY nginx.conf /etc/nginx/nginx.conf
25) Create a config for nginx that can proxy connections on port 80 to two servers
each on port 5000:
7-15
File: web/nginx.conf
+ events { worker_connections 1024; }
+ http {
+     upstream myapp {
+         server myapp1:5000;
+         server myapp2:5000;
+     }
+     server {
+         listen 80;
+         location / {
+             proxy_pass http://myapp;
+             proxy_http_version 1.1;
+             proxy_set_header Upgrade $http_upgrade;
+             proxy_set_header Connection •upgrade•;
+             proxy_set_header Host $host;
+             proxy_cache_bypass $http_upgrade;
+         }
+     }
+ }
7-16
File: docker-compose.yml
+ web:
+   build: ./web
+   links:
+     - myapp1:myapp1
+     - myapp2:myapp2
+   ports:
+     - "80:80"
+ myapp1:
+   build: ./app
+   ports:
+     - "5000"
+   volumes:
+     - ./app:/code:z
+   links:
+     - redis
+ myapp2:
+   build: ./app
+   ports:
+     - "5000"
+   volumes:
+     - ./app:/code:z
+   links:
+     - redis
+ redis:
+   image: redis
# docker-compose up
. . . snip . . .
redis_1 | 1:M 14 Jul 02:38:41.540 * DB loaded from disk: 0.000 seconds
redis_1 | 1:M 14 Jul 02:38:41.540 * The server is now ready to accept connections on port 6379
myapp2_1 | * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
myapp2_1 | * Restarting with stat
myapp1_1 | * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
myapp1_1 | * Restarting with stat
7-17
28) In another terminal window, examine the set of containers:
# docker-compose ps
Name Command State Ports
------------------------------------------------------------------------------------
compose_myapp1_1 /bin/sh -c python app.py Up 0.0.0.0:32780->5000/tcp
compose_myapp2_1 /bin/sh -c python app.py Up 0.0.0.0:32779->5000/tcp
compose_redis_1 /entrypoint.sh redis-server Up 6379/tcp
compose_web_1 nginx -g daemon off; Up 443/tcp, 0.0.0.0:80->80/tcp
29) Test the round robin load balancing by connecting to port 80 twice and examining
the host names to verify you are directed to both containers:
# curl localhost
container 51eb8e777a21 accessed 3 times.
# curl localhost
container a959cc019324 accessed 4 times.
30) Return to the original terminal where you launched the containers and stop them
by pressing Ó¿c:
7-18
Lab 7 Task 2
Docker Swarm
Estimated Time: 20 minutes
Objectives
y Connect a Docker host to an existing Swarm using the containerized
Swarm binary.
y Explore the differences between a single Docker host and a swarm
when creating containers and networks.
Requirements
b (1 station) c (classroom server)
Relevance
Docker can be scaled to handle larger production demands by clustering a
group of Docker hosts together. Docker Swarm manages a collection of
Docker hosts and provides a Docker API compliant interface to manage
that set of hosts.
Notices
y This lab task requires the shared classroom server to be running a
Swarm Primary Manager process listening on port 4000, and a Consul
server listening on port 8500. Normally this is setup ahead of time by the
instructor. The following two containers (running on the server) will
provide the needed services:
# docker run -dp 8500:8500 -h consul --name consul \
--restart=always consul -server -bootstrap
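The second container, the Swarm primary manager listening on port 4000, might be launched along these lines (a sketch only, assuming the standard swarm image and the Consul store shown above):
# docker run -dp 4000:4000 --name swarm-manager --restart=always \
    swarm manage -H :4000 --replication --advertise server1.example.com:4000 \
    consul://server1.example.com:8500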
$ su -
Password: makeitso Õ
2) Reconfigure your first assigned node so that the Docker process listens on the
network as well as the normal UNIX socket:
7-19
File: /etc/sysconfig/docker
5) Query the Swarm primary manager to verify that your node is seen as part of the
swarm and reports a status of Healthy:
7-20
6) Identify how many nodes have joined the swarm:
7) Configure your environment so that the Docker client uses the Swarm by default:
8) Create a network and launch a container connected to that network (be sure to
replace all instances of X with the number of your first assigned node):
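A sketch of likely commands, matching the sX-net and sX-test names removed in the cleanup step (X is your first node's station number; the constraint pins the container to your own node):
# docker network create sX-net
. . . output omitted . . .
# docker run -dti --net=sX-net --name sX-test -e constraint:node==stationX.example.com server1:5000/busybox
. . . output omitted . . .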
7-21
11) List all the containers in the Swarm showing their container IDs, and names
sorted by the node they are running on:
12) Pull the names of all containers within the Swarm directly via the API endpoint:
Cleanup
14) Remove the test container and network, and leave the Swarm:
7-22
# docker rm -f stationY.example.com/sX-test
station3.example.com/sX-test
# docker network rm sX-net
# unset DOCKER_HOST
# docker stop swarm
swarm
# docker rm swarm
7-23
Appendix
A
CONTINUOUS
INTEGRATION
WITH GITLAB,
GITLAB CI, AND
DOCKER
Lab 1
Estimated Time: 135 minutes
1-2
Lab 1 Task 1
GitLab and GitLab CI Setup
Estimated Time: 15 minutes
Objectives
y Install and configure GitLab
y Install and configure GitLab CI
Requirements
(0 stations) c (classroom server)
Relevance
GitLab and GitLab CI is one of the most popular open-source combinations
for a CI enabled workflow.
Notices
y A single instance of this service is shared in the classroom and the
instructor will normally perform this lab as a demo on the shared
classroom server.
1) DO NOT DO THIS LAB (on student stations). The below steps are for your
reference. The lab is completed by the instructor on the classroom server system.
2) Verify that the system has at least 4G of RAM and 2 CPU cores:
# free
total used free shared buff/cache available
Mem: 4047620 701960 2829652 10720 516008 3088692
Swap: 524284 0 524284
# grep processor /proc/cpuinfo
processor : 0
processor : 1
. . . output omitted . . .
1-3
# echo "ci IN A 10.100.0.254" >> /var/named/data/named.example.com
# systemctl restart named
# host ci
ci.example.com has address 10.100.0.254
# host git
git.example.com has address 10.100.0.254
5) GitLab sends some email to admin. Create an alias that directs that email to root:
# gitlab-ctl reconfigure
. . . output omitted (This can take a few minutes) . . .
Hangs during install can be caused by the gitlab-runsvdir.service not being started.
Manually start and retry if a hang occurs.
File: /etc/gitlab/gitlab.rb
- external_url •http://server1.example.com•
+ external_url •http://git.example.com:8000•
. . . output omitted . . .
- # ci_external_url •http://ci.example.com•
+ ci_external_url •http://ci.example.com:8001•
1-4
File: /etc/gitlab/gitlab.rb
+ gitlab_rails[•ldap_enabled•] = true
+ gitlab_rails[•ldap_servers•] = YAML.load <<-•EOS•
+ main:
+ label: •Server1 LDAP Accounts•
+ host: •server1.example.com•
+ port: 389
+ uid: •uid•
+ method: •tls•
+ bind_dn: •cn=Manager,dc=example,dc=com•
+ password: •secret-ldap-pass•
+ active_directory: false
+ base: •ou=DevAccts,dc=example,dc=com•
+ EOS
File: /etc/gitlab/gitlab.rb
+ gitlab_rails[•gravatar_plain_url•] = •http://127.0.0.1•
+ gitlab_rails[•gravatar_ssl_url•] = •https://127.0.0.1•
+ gitlab_ci[•gravatar_enabled•] = false
11) Change the password for the root GitLab user by signing in via the web interface:
http://git.example.com:8000
Current credentials: u: root, p: 5iveL!fe
12) Connect to the GitLab CI site to show the authorization token for registering
shared runners:
1-5
Open a browser to ci.example.com:8001
The GitLab CI main page opens
Click 'Login with GitLab account'.
The page refreshes to the GitLab login
Login as the root user
The page redirects to an 'Authorize Required' page
Click 'Authorize'. The page refreshes to the GitLab CI Dashboard
Click on 'Admin' --> 'Runners'
The needed token is displayed
1-6
Lab 1 Task 2
Unit and Functional Tests
Estimated Time: 120 minutes
Objectives
y Create and use a project within GitLab
y Use the Python virtualenvwrapper to create a project sandbox
y Create unit tests, and functional tests
Requirements
b (1 station) c (classroom server)
Relevance
Continuous integration relies on unit and functional tests run by the CI
server to determine if code commits still result in working software.
Login to the http://git.example.com:8000 URL with the assigned devX LDAP user (password=password).
The 'Welcome to GitLab!' screen is displayed
Click 'New Project'
The screen refreshes to the 'New Project' page
Enter a project path of: mod10-example-stationX
Click to change 'Visibility Level' to public.
Click 'Create Project'
The screen refreshes and confirms that the project was created and is currently empty
Click on the HTTP button
The HTTP project address is displayed instead of the SSH address
Scroll down to where the example git commands are shown for a reference
1-7
$ git init
Initialized empty Git repository in /home/guru/mod10-example/.git/
File: /home/guru/.netrc
+ machine git.example.com
+ login devX
+ password password
6) Configure remote HTTP access to GitLab and push into project (Be sure to replace
the X with your proper user/station number):
$ git remote add gitlab http://git.example.com:8000/devX/mod10-example-stationX.git
$ git push gitlab master
Counting objects: 3, done.
Writing objects: 100% (3/3), 261 bytes | 0 bytes/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To http://git.example.com:8000/devX/mod10-example-stationX.git
* [new branch] master -> master
1-8
$ sudo pip install --trusted-host=server1.example.coma
--index-url=http://server1.example.com/packages/simple virtualenvwrapper
. . . snip . . .
Collecting virtualenv (from virtualenvwrapper)
Downloading http://server1.example.com/packages/simple/virtualenv/a
virtualenv-13.1.2-py2.py3-none-any.whl (1.7MB)
. . . snip . . .
Running setup.py install for virtualenv-clone
Successfully installed argparse-1.3.0 pbr-1.6.0 six-1.9.0 stevedore-1.7.0a
virtualenv-13.1.2 virtualenv-clone-0.2.6 virtualenvwrapper-4.7.0
8) Create a directory to serve as a sandbox environment home and set your shell to
load the related functions on login:
$ mkdir -p ž/AppData/env
$ echo "export WORKON_HOME=$HOME/AppData/env" >> ž/.bashrc
$ echo "source /bin/virtualenvwrapper.sh" >> ž/.bashrc
$ source ž/.bashrc
$ mkvirtualenv mod10-example
New python executable in mod10-example/bin/python
Installing setuptools, pip, wheel...done.
(mod10-example)[guru@stationX ž]$ Prompt changes to reflect current environment.
$ workon Can be used anytime to see current environment name.
(mod10-example)
10) Examine the files created to support the new virtual environment:
$ du -sh ž/AppData/env/mod10-example/
9.5M AppData/env/mod10-example/
$ ls -R ž/AppData/env/mod10-example/
. . . output omitted . . .
11) Configure the structure of the mod10-example project and source tree. Due to its
module structure, Python allows for great flexibility in how the code tree can be
structured. There are a few general guidelines, however, so that projects are
generally compatible with pip:
1-9
y scripts/ or bin/ folder for command-line interfaces to the program or
library
y tests/ folder for tests
y lib/ for C-language libraries
y doc/ for documentation
y a folder for the source code of the program, in this case mod10/
$ cd ž/mod10-example/
$ mkdir {tests,ftests,mod10}
$ touch ./{tests,ftests,mod10}/__init__.py Allow directories to be used as Python packages
12) The Luhn algorithm (also called the "modulus 10" or "mod 10" algorithm) is a
simple way to partially validate whether a credit card number is valid without
having to explicitly send a request to the credit card company. Note: it doesn't
tell you anything about whether the card is owned by anyone, or if it is authorized
for charges. That still requires a request to a payment processor.
Implemented as a simple checksum, mod10 is a component of the numbering
systems used by all of the major credit card companies today. Though it isn't
cryptographically secure, it is a useful check for mistaken or mistyped credit card
numbers. Briefly, here is how it determines whether a credit card number is valid:
1. Reverse the card number, collect the digits in odd and even positions.
2. For digits in even positions, multiply by two, this creates the "Luhn" digits.
3. For Luhn digits greater than nine, subtract nine.
4. Add the Luhn and off-digit numbers together.
5. Take the number from step 4 and calculate the modulo 10. If there is no
remainder, the credit card number is valid.
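As a compact illustration of these five steps (a sketch only; the lab builds the real implementation as ccnum_isvalid in the following steps):

def luhn_valid(ccstr):
    # 1. Reverse the number; collect the digits in odd ("off") and even positions
    off_digits = [int(d) for d in ccstr[-1::-2]]
    # 2. Double the digits in even positions to create the Luhn digits
    luhn_digits = [int(d) * 2 for d in ccstr[-2::-2]]
    # 3. For Luhn digits greater than nine, subtract nine
    scaled = [d - 9 if d > 9 else d for d in luhn_digits]
    # 4. and 5. Sum both lists; the number is valid when the total is a multiple of 10
    return (sum(scaled) + sum(off_digits)) % 10 == 0

luhn_valid('4111111111111111')    # True for the common Visa test number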
Note: The next steps show the code listing as a reference. Your instructor may
provide you with the completed code to save time in class.
13) Create the core mod10 library which provides the logic for the Luhn credit card
validation:
1-10
File: ž/mod10-example/mod10/mod10.py
+ import six
+ import string
+ SET_DIGITS = set(string.digits)
File: ž/mod10-example/mod10/mod10.py
    ::returns bool: True if valid, false otherwise
    •••
+     # Validate input as a string
+     if not isinstance(ccstr, six.string_types):
+         raise TypeError(
+             (•Invalid input format %r, CC numbers should be input •
+              •as a valid Python string type•) % type(ccstr))
+     # Check input length, credit card numbers are always a fixed length
+     if len(ccstr) > length_limit:
+         raise ValueError(
+             (•Invalid credit card input number, value is longer than allowed •
+              •maximum length•))
+     # Ensure that only numeric characters are present
+     if set(ccstr).difference(SET_DIGITS):
+         raise ValueError(
+             (•Invalid credit card number %s, credit card numbers may not •
+              •contain letter values•) % ccstr)
1-11
15) Add the Luhn checksum check:
File: ž/mod10-example/mod10/mod10.py
            (•Invalid credit card number %s, credit card numbers may not •
             •contain letter values•) % ccstr)
+     # Reverse the card number, collect the digits in odd and even positions
+     # For digits in even positions, multiply by two
+     off_digits = [ int(ccdigit) for ccdigit in ccstr[-1::-2] ]
+     luhn_digits = [ int(ccdigit)*2 for ccdigit in ccstr[-2::-2] ]
+     # For Luhn digits greater than nine, subtract nine
+     scaled_luhn_digits = [ digit - 9 if digit > 9 else digit for digit in luhn_digits ]
+     # Take the sum of the scaled Luhn digits, add the sum of the off digits, and
+     # calculate modulo 10; if there is no remainder, the card is valid
+     return (sum(scaled_luhn_digits) + sum(off_digits)) % 10 == 0
16) The server component of the credit card validation is implemented as a simple
Python web app using the Flask framework:
y It will have a single `/validate` endpoint that will take a credit card number
encoded as a POST parameter.
y It will use the mod10 library to determine if the number is valid.
y If the number is valid, it will return a HTTP 200 status code with a JSON
body.
y If the number is invalid (or the request was badly formed), it will return a
HTTP 400 status code with a JSON body providing details as to why the
credit card number was invalid.
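Putting those requirements together, the heart of server.py looks roughly like the following (a sketch only; the actual file is assembled over the next steps, and the credit card number is read from the Credit-Card request header to match the functional test later in this lab):

from flask import Flask, Response, request
import json
from mod10 import ccnum_isvalid

app = Flask(__name__)

@app.route('/validate', methods=['POST'])
def validate():
    response = {}
    statuscode = 200
    try:
        ccnum = request.headers.get('Credit-Card')
        if not ccnum_isvalid(ccnum):
            raise ValueError('Invalid credit card number: %s' % ccnum)
        response['valid'] = True
    except (TypeError, ValueError) as err:
        # Bad input or an invalid number
        response['valid'] = False
        response['error'] = str(err)
        statuscode = 400
    return Response(json.dumps(response), status=statuscode,
                    mimetype='application/json')

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080, debug=True)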
17) Create the Python web app that uses the CC validation library starting with
boilerplate includes:
1-12
File: ž/mod10-example/mod10/server.py
+ import json
File: ž/mod10-example/mod10/server.py
# Import mod10 validation
from mod10 import ccnum_isvalid
1-13
19) Add the error checking and application startup:
File: ž/mod10-example/mod10/server.py
        raise ValueError(•Invalid credit card number: %s•
                         % request.headers.get(•Credit-Card•))
+     # Unspecified errors
+     except Exception as err:
+         response[•valid•] = False
+         response[•error•] = str(err)
+         statuscode = 500
+     return Response(
+         json.dumps(response),
+         status=statuscode,
+         mimetype=•application/json•)
20) Create a file that specifies the required libraries to run the app:
File: ž/mod10-example/requirements.txt
+ --index-url=http://server1.example.com/packages/simple/
+ six==1.9.0
+ Flask==0.10.1
+ requests==2.7.0
+ argparse==1.3.0
1-14
21) Install the required libraries:
$ pip install --trusted-host=server1.example.com -r requirements.txt
. . . snip . . .
Installing collected packages: six, Werkzeug, MarkupSafe, Jinja2,a
itsdangerous, Flask, requests, argparse
Successfully installed Flask-0.10.1 Jinja2-2.8 MarkupSafe-0.23a
Werkzeug-0.10.4 argparse-1.3.0 itsdangerous-0.24 requests-2.7.0 six-1.9.0
Note that root access is not required because the packages are being installed
into the virtual environment.
22) Test that the application launches and listens on local port 8080:
$ python mod10/server.py
* Running on http://0.0.0.0:8080/ (Press CTRL+C to quit)
Ó¿c
24) Commit the working core code to git and post to GitLab:
$ git add .gitignore mod10 requirements.txt
$ git commit -m "Core project code"
[master c282199] Core project code
6 files changed, 104 insertions(+)
create mode 100644 .gitignore
create mode 100644 mod10/__init__.py
create mode 100644 mod10/hello.py
create mode 100644 mod10/mod10.py
create mode 100644 mod10/server.py
create mode 100644 requirements.txt
$ git push gitlab master
Counting objects: 10, done.
Compressing objects: 100% (7/7), done.
Writing objects: 100% (9/9), 2.05 KiB | 0 bytes/s, done.
Total 9 (delta 0), reused 0 (delta 0)
To http://git.example.com:8000/dev1/mod10-example-station1.git
1-15
7b3fa07..c282199 master -> master
25) With the project skeleton, source code, and dependencies in place; a robust test
suite can be created that will ensure the code functions the way that it is
supposed to. Provided as part of the Python standard library is a robust testing
system called unittest, which is inspired by a similar system written for the Java
programming language called JUnit. Both unittest and JUnit are the de-facto
standards for unit testing for Python and Java, respectively.
Unittest supports test automation, sharing of setup and shutdown code for tests,
aggregation of tests into collection, and the independence of the tests from the
reporting framework. A full discussion of its capabilities and features is well
beyond the scope of this course. However, to be comfortable with the testing
code provided below, there are several high-level concepts of which you should be
aware.
Unittest organizes its code into fixtures, cases, and suites. These are in turn run
by a test runner.
y Test fixtures represent the preparation needed to perform one or more tests,
and any associated cleanup actions.
y Test cases are the smallest unit of testing. They are used to check for
specific responses to a particular set of inputs. Unittest provides a class
which implements many of the methods needed for test cases.
y Test suites represent a collection of test cases, test suites, or both. They are
used to aggregate tests which should be executed together.
The TestCase class provided by unittest is the primary tool that we will use to
create and execute tests. We will, in turn, use the default runner provided by
unittest to run the tests and report the results. We will keep all of our test cases
in the tests/ folder, which helps to separate it from the program logic. The test
suite should cover:
1. A set of "valid" numbers, following the general rules for Visa, American
Express, and MasterCard.
2. A set of "invalid" numbers, which are plausible but do not validate.
3. A number which includes alphabetic characters to check that the method
raises a ValueError.
1-16
4. A number which is submitted as an integer or float type (rather than a string)
to ensure that a TypeError is raised.
5. A number longer than the maximum number of allowed digits to ensure
that a ValueError is raised.
Due to the way that unittest organizes test suites, each test case will be
implemented on a common TestCase class with a method name which begins
with test_. Individual test assertions are performed using the assert methods
provided by the TestCase class.
26) Start by creating a small suite of tests which checks the logic for our
ccnum_isvalid method:
1-17
File: ž/mod10-example/tests/test_mod10.py
+ import os, sys
+ # Add the project root to the Python path to allow the import of project code
+ TESTS_ROOT = os.path.abspath(os.path.dirname(__file__))
+ PROJECT_ROOT = os.path.dirname(TESTS_ROOT)
+ if not PROJECT_ROOT in sys.path:
+     sys.path.append(PROJECT_ROOT)
+ # Test imports
+ import unittest
+ from mod10.mod10 import ccnum_isvalid
+ class Mod10ValidationTestCases(unittest.TestCase):
+     ••• Test cases for mod10 validation
+     •••
+
+     def test_ValidMod10CreditCards(self):
+         ••• Check mod10 validation of valid credit card numbers
+         •••
+         # American Express Test Numbers
+         self.assertEqual(ccnum_isvalid(•378282246310005•), True)
+         self.assertEqual(ccnum_isvalid(•371449635398431•), True)
+         # Visa
+         self.assertEqual(ccnum_isvalid(•4111111111111111•), True)
+         self.assertEqual(ccnum_isvalid(•4012888888881881•), True)
+         # MasterCard
+         self.assertEqual(ccnum_isvalid(•5555555555554444•), True)
+         self.assertEqual(ccnum_isvalid(•5105105105105100•), True)
1-18
28) Add additional tests by appending the following lines to the already existing file:
File: ž/mod10-example/tests/test_mod10.py
+     def test_InvalidMod10CreditCards(self):
+         ••• Check mod10 validation of invalid credit card numbers
+         •••
+         self.assertEqual(ccnum_isvalid(•123456780123•), False)
+         self.assertEqual(ccnum_isvalid(•98765421•), False)
+         self.assertEqual(ccnum_isvalid(•4111123456789000•), False)
+         self.assertEqual(ccnum_isvalid(•5105015987654321•), False)
+         self.assertEqual(ccnum_isvalid(•3782822463100013•), False)
+
+     def test_InvalidCreditCardErrors(self):
+         ••• Check that ccnum_isvalid raises expected errors
+         •••
+         self.assertRaises(TypeError, ccnum_isvalid, 123456780123)
+         self.assertRaises(ValueError, ccnum_isvalid, •378282246310005a•)
+         self.assertRaises(ValueError, ccnum_isvalid,
+                           •378282246310005378282246310005378282246310005378282246310005•)
30) Commit the working unit test code to git and post to GitLab:
1-19
Creating a Functional Test Suite
31) Unit tests provide a good degree of confidence that the core logic of the program
is working as expected, but they are only one component of a robust testing strategy.
Also important are functional tests. As opposed to unit tests (which only test
individual components), functional tests run against a running program and ensure
that it is working according to specification.
The functional tests for our program are kept in the ftests/ directory, and
invoked using a custom runner script.
Create the runner script:
File: ž/mod10-example/ftests.py
+ # Functional tests for the credit card validation server
+ import argparse, logging, os, sys, urlparse
+ import unittest
+ # from ftests.ftests_mod10validation import Mod10HttpTest
1-20
File: ž/mod10-example/ftests.py
+ def add_functional_test_suite(testcase_class=None, method=None):
+     ••• Helper function to load test classes into a custom runner
+     •••
+     suite = unittest.TestSuite()
+     if testcase_class:
+         testloader = unittest.TestLoader()
+         testnames = testloader.getTestCaseNames(testcase_class)
+         for name in testnames:
+             if name == method or method == None:
+                 suite.addTest(testcase_class(name))
+     return suite
File: ž/mod10-example/ftests.py
+ # Execute test cases
+ if __name__ == •__main__•:
+     args = parser.parse_args()
+
+     # Log level and script verbosity
+     if args.loglevel:
+         logoptions[•level•] = args.loglevel
+     logging.basicConfig(**logoptions)
+     if args.testdetail:
+         verbosity = int(args.testdetail)
+     else:
+         verbosity = 2
34) Create a functional test that uses the REST API of the CC validation server to
check a sample card:
File: ž/mod10-example/ftests/ftests_mod10validation.py
+ import os, logging, posixpath, unittest, requests, json
+ logger = logging.getLogger(__name__)
+ class Mod10HttpTest(unittest.TestCase):
+     def test_Mod10CCNumValidation(self):
+         r = requests.post('http://127.0.0.1:8080/validate',
+                           headers={'Credit-Card': '378282246310005'})
+         self.assertEqual(r.status_code, 200)
+         rdata = json.loads(r.content)
+         self.assertEqual(rdata.get('valid'), True)
35) Activate the new functional test by adding, or uncommenting, the following lines
within the test runner script:
File: ž/mod10-example/ftests.py
import argparse, logging, os, sys, urlparse
import unittest
+ from ftests.ftests_mod10validation import Mod10HttpTest
. . . snip . . .
testloader = unittest.TestLoader()
+ mod10_suite = unittest.TestSuite()
+ mod10_suite.addTest(add_functional_test_suite(Mod10HttpTest, args.method))
+ result = unittest.TextTestRunner(verbosity=int(verbosity)).run(mod10_suite)
sys.exit(not result.wasSuccessful())
36) Start the server application in the background and then run the functional test to
verify that it works:
$ cd ž/mod10-example/
$ python mod10/server.py &
[1] 4819
* Running on http://0.0.0.0:8080/ (Press CTRL+C to quit)
$ python ftests.py
test_Mod10CCNumValidation (ftests.ftests_mod10validation.Mod10HttpTest) ...a
INFO 2015-08-07 15:07: Starting new HTTP connection (1): 127.0.0.1
127.0.0.1 - - [07/Aug/2015 15:07:07] "POST /validate HTTP/1.1" 200 -
DEBUG 2015-08-07 15:07: "POST /validate HTTP/1.1" 200 52
ok
----------------------------------------------------------------------
Ran 1 test in 0.020s
OK
$ kill %1
$ Õ
[1]+ Terminated python mod10/server.py
37) Commit the working functional test code to git and post to GitLab:
$ git add ftests.py ftests/*.py
$ git commit -m "Functional test code"
. . . output omitted . . .
$ git push gitlab master
. . . output omitted . . .
38) Click on the GitLab 'Dashboard' link in the browser and confirm that you see the
master branch and list of recent commits.
39) The following actions require administrative privileges. Switch to a root login
shell:
$ su -
Password: makeitso Õ
42) Obtain the project's security token from the GitLab CI web interface and use it to
register a runner for your project:
# gitlab-ci-multi-runner register
Please enter the gitlab-ci coordinator URL (e.g. http://gitlab-ci.org:3000/):
http://ci.example.com:8001
Please enter the gitlab-ci token for this runner:
46eb7d1074a9f68dcc5fb5bd7c03c0
Please enter the gitlab-ci description for this runner:
[server1.example.com]:Õ
INFO[0077] 46eb7d1 Registering runner... succeeded
Please enter the executor: ssh, shell, parallels, docker, docker-ssh:
[shell]: docker
Please enter the Docker image (eg. ruby:2.1):
ubuntu:14.04
If you want to enable mysql please enter version (X.Y) or enter latest?Õ
If you want to enable postgres please enter version (X.Y) or enter latest?Õ
If you want to enable redis please enter version (X.Y) or enter latest?Õ
If you want to enable mongo please enter version (X.Y) or enter latest?Õ
INFO[0399] Runner registered successfully. Feel free to start it, but if it's running already the config shoulda
be automatically reloaded!
44) GitLab CI limits the use of custom images when invoking builds through Docker.
The runner register program does not allow passing the config options that enable
the images and services needed for this project.
As root, edit the config file, adding the needed lines:
File: /etc/gitlab-runner/config.toml
concurrent = 1
[[runners]]
name = "stationX.example.com"
url = "http://ci.example.com:8001"
token = "46eb7d1074a9f68dcc5fb5bd7c03c0"
limit = 1
executor = "docker"
[runners.docker]
image = "ubuntu:14.04"
+ allowed_images = ["ubuntu:*", "python:*"]
+ allowed_services = ["mysql:*", "redis:*", "postgres:*"]
privileged = false
volumes = ["/cache"]
With concurrent set to 1, the tests will run serially. For parallel operation, you can
change that value to 2 or more.
45) Verify the runner shows an active (running) status before proceeding:
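The verification command is not shown on this page. Assuming the runner was installed as a systemd service (the service name depends on the package version, commonly gitlab-runner or gitlab-ci-multi-runner), its status could be checked with something like:
# systemctl status gitlab-runner.service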
46) Refresh the GitLab CI web page showing your project runners and verify your
runner is listed as 'Activated' for the project (you may need to scroll down the
page).
Do not proceed until you have confirmed the runner is activated with the CI
server!
47) Administrative privileges are no longer required; exit the root shell to return to an
unprivileged account:
# exit
48) To prepare for builds to run, you must create a configuration file to manage the
execution of scripts. GitLab makes use of a configuration file (named
.gitlab-ci.yml) which describes how a project will be built. The file uses YAML
syntax for specifying setup, configuration, and testing. The file should be created
in the root of the repository.
A GitLab CI configuration includes several sections:
• optional parameters which describe the Docker environment (the image and
services parameters)
• parameters which control the environment configuration (before_script)
• parameters which govern the types of jobs which can be defined (types)
• specific jobs which will be executed (the initial config file for this project will
not have any specified jobs; they will be defined later)
To have GitLab CI use Docker for tests and builds, we must provide information
about the Docker configuration. This includes which image (ubuntu:14.04) and
which services are required for testing.
• image specifies the name of a Docker image that is present in a local Docker
repository, or in a repository that can be found at Docker Hub.
• services are separate images that run at the same time and are linked to
the build. Common services include postgres or mysql for data access.
For this example project, only build and test jobs will be used.
49) As the guru user, create the following file that will serve as the foundation of this
project's CI configuration and setup:
File: ž/mod10-example/.gitlab-ci.yml
+ image: ubuntu:14.04
+ before_script:
+   - ./envprep.sh
+ types:
+   - build
+   - test
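The step that stages the new file is not reproduced on this page. Because the commit below references .gitlab-ci.yml by path, the file presumably has to be added to the repository first, for example:
$ git add .gitlab-ci.yml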
$ git commit .gitlab-ci.yml -m "Added GitLab CI base configuration"
. . . output omitted . . .
51) For builds to execute correctly, the environment preparation program referenced
in before_script must be created.
Create a script that performs the setup for the ubuntu:14.04 Docker image used
for testing this project:
File: ž/mod10-example/envprep.sh
+ #!/bin/bash
+ # Install virtualenv if it is not already available
+ if command -v virtualenv > /dev/null; then
+     echo "virtualenv installed"
+ else
+     echo "virtualenv not installed, installing python-virtualenv"
+     apt-get install -y python-virtualenv
+ fi
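The CI jobs defined later in this lab run source ./env/bin/activate, so the complete environment preparation (the remainder of this listing is not shown here) presumably also creates the env virtual environment and installs the project's Python dependencies. A minimal sketch, assuming a requirements.txt exists at the project root:
# Create the virtualenv that the CI jobs activate, then install dependencies
# (the requirements.txt file name is an assumption)
if [ ! -d ./env ]; then
    virtualenv ./env
fi
./env/bin/pip install -r requirements.txt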
52) Set the new script executable and commit the change:
$ chmod +x envprep.sh
$ git add envprep.sh
$ git commit envprep.sh -m "Script to build Python virtual environment"
. . . output omitted . . .
53) Modify the .gitlab-ci.yml configuration to include a job that runs the unit tests
for the project. Add the code in the following listing at the bottom of the existing
file:
File: ž/mod10-example/.gitlab-ci.yml
+ mod10_unittest:
+   type: test
+   script:
+     - source ./env/bin/activate
+     - python -m unittest tests.test_mod10
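To confirm the job's script lines work before pushing, the same commands can be run locally, assuming the env virtual environment already exists on the workstation:
$ source ./env/bin/activate
(env)$ python -m unittest tests.test_mod10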
56) Use the GitLab CI interface to examine the commit and associated build. Notice
that it is failing because the CI server is trying to clone the project code into the
container, but the container lacks the git command.
57) Create a Dockerfile to build an image that includes the git command:
File: ž/mod10-example/Dockerfile
+ FROM ubuntu:14.04
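Only the FROM line of the listing is reproduced here. To meet the stated goal of an image that includes the git command, the Dockerfile presumably also needs an instruction along these lines (the exact packages are an assumption):
RUN apt-get update && apt-get install -y git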
59) Modify the .gitlab-ci.yml so that the CI server uses the new image:
File: ž/mod10-example/.gitlab-ci.yml
→ image: ubuntu:14.04-mod10
60) Spawn a new build with the new image by committing the Dockerfile and
modified .gitlab-ci.yml to the source code repository:
$ git add Dockerfile .gitlab-ci.yml
$ git commit -m "Build and use custom image that includes Git"
[master 7b77e71] Build and use custom image that includes Git
2 files changed, 5 insertions(+), 1 deletion(-)
create mode 100644 Dockerfile
$ git push gitlab master
. . . output omitted . . .
61) Again, use the GitLab CI web interface to examine the commit and associated
build. Notice that it is still failing because the envprep.sh script is trying to install
the Python virtualenv package, but the minimal image lacks the needed package
repository configuration.
Modify the Dockerfile so the image includes the needed items:
File: ž/mod10-example/Dockerfile
FROM ubuntu:14.04-mod10
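The remainder of the modified listing falls on the next page and is not reproduced here. To address the missing package repository data, the added instructions would presumably resemble the following sketch (not the book's exact listing):
RUN apt-get update && apt-get install -y python-virtualenv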
62) Build the new image and commit the change to git:
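The commands for this step are not reproduced here; based on the image name referenced in .gitlab-ci.yml, they would look something like (the commit message is only an example):
$ docker build -t ubuntu:14.04-mod10 .
$ git commit Dockerfile -m "Install packages needed by envprep.sh"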
63) Push the commit to GitLab (which will trigger another build):
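As before, pushing to the gitlab remote triggers a new CI build:
$ git push gitlab master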
Adding a CI Runner for Functional Testing
64) Knowing that the unit tests are passing correctly provides a great degree of
comfort, but only assures us that the components of the program are working as
expected. Equally important is to ensure that the functional tests are also passing.
Green functional tests provide a degree of confidence that the program is working
according to specification and are an integral part of any testing strategy.
Create a second CI job to execute the functional test suite by appending the
following to the bottom of the .gitlab-ci.yml config file:
File: ž/mod10-example/.gitlab-ci.yml
mod10_unittest:
  type: test
  script:
    - source ./env/bin/activate
    - python -m unittest tests.test_mod10
+ mod10_functest:
+   type: test
+   script:
+     - source ./env/bin/activate
+     - ./ftests.sh
Similar to the first job, this attempts to activate the virtualenv (with dependencies)
and run a set of tests. Unlike the unit tests, however, the functional tests require
a running server to connect to.
There are many ways that this problem could be solved. One option might be to
build two separate docker containers, one with the server source code and the
second with the test client. Another might be to configure and launch the server
as a background daemon in the container and then run the testing client. The
solution used in the lab is to use a shell script, ftests.sh, to manage the
functional test process.
65) Create the ftests.sh shell script with the following content:
File: ž/mod10-example/ftests.sh
+ #!/bin/bash
+ python ./mod10/server.py &
+ sleep 1
+ python ftests.py
+ testexit=$?
+ pkill python
+ exit $testexit
1. Starts the server as a background process, and then pauses the script for 1
second (this is done to allow the server to initialize and avoid race
conditions).
2. Launches the functional test client (ftests.py), which will run the test
suite.
3. Captures the ftests.py exit code so that it can be returned at the end of
the script. This is necessary so that it can signal to the test runner whether
the tests passed or failed.
4. Shuts down the server (using pkill).
5. Exits with the exit value of the functional test client.
$ chmod +x ftests.sh
$ git add ftests.sh .gitlab-ci.yml
$ git commit -m "Added functional test job and management script to CI"
2 files changed, 13 insertions(+)
create mode 100755 ftests.sh
$ git push gitlab master
67) Again examine the GitLab CI entry for the new commit and associated builds.
Both unit and functional tests should now be passing (listing a status of 'success').
At this point, the suite of tests will be run after every commit, and the CI server
could now be configured (see the services link in the web interface) to provide
updates via a number of services when something breaks.
Bonus: Using Branches and Verifying Breakage is Quickly Reported
69) Edit the existing mod10.py file (which contains the ccnum_isvalid source), and
modify the line where the luhn_digits are calculated:
File: mod10/mod10.py
off_digits = [int(ccdigit) for ccdigit in ccstr[-1::-2]]
→ luhn_digits = [int(ccdigit)*23 for ccdigit in ccstr[-2::-2]]
scaled_luhn_digits = map(lambda i: i-9 if i > 9 else i, luhn_digits)
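The push below targets a new branch, guru/mod10_enhancements. The step that creates the branch is not reproduced here, but it would presumably be created and checked out before committing, for example:
$ git checkout -b guru/mod10_enhancements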
70) Commit the change and push the new branch to GitLab:
$ git commit mod10/mod10.py -m "Enhancements to Luhn algorithm"
$ git push gitlab guru/mod10_enhancements
Counting objects: 7, done.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (4/4), 427 bytes | 0 bytes/s, done.
Total 4 (delta 2), reused 0 (delta 0)
To http://git.example.com:8000/dev3/mod10-example-station3.git
* [new branch] guru/mod10_enhancements -> guru/mod10_enhancements
71) Return to the GitLab CI web interface and examine the latest commit under the
'All Commits' tab. It should report a status of 'Failed' for the build.
When there are multiple GitLab branches, the build status will be neatly
separated by branch. This makes it convenient to see which branches may be
passing or failing the test suite. You can then drill down further into the build to
get additional details about which jobs and tests are failing.
The modifications made represent a fundamental change to the algorithm. For
that reason, both the functional and the unit tests are failing. If the program
were more complex, you could break the unit tests into multiple jobs in order to get a
high-level overview of which groups of tests are failing. Clicking on the Build ID
link lets us drill down further and review the logs from the test run, providing still
more detail about where the failures occur.
The reporting and automation tools that continuous integration brings make it a
valuable tool in the arsenal of any organization that works with software. When
coupled with other best practices, such as test driven development, continuous
integration can improve the quality and compliance of code. Because tests can be
run after every commit, it becomes much easier to find bugs and breaks earlier in
development, and patch them more quickly.