CI/CD with Docker and Kubernetes
How to Deliver Cloud Native Applications at High Velocity
Semaphore
Contents

Preface
Who Is This Book For, and What Does It Cover?
How to Contact Us
About the Authors
2 Deploying to Kubernetes
2.1 Containers and Pods
2.2 Declarative vs Imperative Systems
2.3 Replica Sets Make Scaling Pods Easy
2.4 Deployments Drive Replica Sets
2.4.1 What Happens When You Change Configuration
2.5 Detecting Broken Deployments with Readiness Probes
2.6 Rollbacks for Quick Recovery from Bad Deploys
2.7 MaxSurge and MaxUnavailable
2.8 Quick Demo
2.9 Selectors and Labels
2.9.1 Services as Load Balancers
2.10 Advanced Kubernetes Deployment Strategies
2.10.1 Blue / Green Deployment
2.10.2 Canary Deployment
2.11 Summary
3.1 What Makes a Good CI/CD Pipeline
3.1.1 Speed
3.1.2 Reliability
3.1.3 Completeness
3.2 General Principles
3.2.1 Architect the System in a Way That Supports Iterative Releases
3.2.2 You Build It, You Run It
3.2.3 Use Ephemeral Resources
3.2.4 Automate Everything
3.3 Continuous Integration Best Practices
3.3.1 Treat Master Build as If You’re Going to Make a Release at Any Time
3.3.2 Keep the Build Fast: Up to 10 Minutes
3.3.3 Build Only Once and Promote the Result Through the Pipeline
3.3.4 Run Fast and Fundamental Tests First
3.3.5 Minimize Feature Branches, Embrace Feature Flags
3.3.6 Use CI to Maintain Your Code
3.4 Continuous Delivery Best Practices
3.4.1 The CI/CD Pipeline is the Only Way to Deploy to Production
3.4.2 Developers Can Deploy to Production-Like Staging Environments at a Push of a Button
3.4.3 Always Use the Same Environment
4.4.2 Creating a Semaphore Project For The Demo Repository
4.4.3 The Semaphore Workflow Builder
4.4.4 The Continuous Integration Pipeline
4.4.5 Your First Build
4.5 Provisioning a Kubernetes Cluster
4.6 Provisioning a Database
4.7 The Canary Pipeline
4.7.1 Creating a Promotion and Deployment Pipeline
4.8 Your First Release
4.8.1 The Stable Deployment Pipeline
4.8.2 Releasing the Canary
4.8.3 Releasing the Stable
4.8.4 The Rollback Pipeline
4.8.5 Troubleshooting and Tips
4.9 Summary
5 Final Words
5.1 Share This Book With The World
5.2 Tell Us What You Think
5.3 About Semaphore
© 2020 Rendered Text. All rights reserved.
This work is licensed under Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-nd/4.0
This book is open source: https://github.com/semaphoreci/book-cicd-docker-kubernetes
Published on the Semaphore website: https://semaphoreci.com
May 2020: First edition v1.0 (revision 69f24f4)
Share this book:
I’ve just started reading “CI/CD with Docker and Kubernetes”, a
free ebook by @semaphoreci: https://bit.ly/3bJELLQ (Tweet this!)
Preface
To maximize the rate of learning, we must minimize the time to try things.
In software development, the cloud has been a critical factor in increasing the
speed of building innovative products.
Today there’s a massive change going on in the way we’re using the cloud.
To borrow the metaphor from Adrian Cockroft, who led cloud architecture at
Netflix, we need to think of cloud resources not as long-lived and stable pets,
but as transitory and disposable cattle.
Doing so successfully, however, requires our applications to adapt. They need to
be disposable and horizontally scalable. They should have a minimal divergence
between development and production so that we can continuously deploy them
multiple times per day.
A new generation of tools has democratized the way we build such cloud native software. Docker containers are now the standard way of packaging software so that it can be deployed, scaled, and dynamically distributed on any cloud. And Kubernetes is the leading platform to run containers in production.
Over time new platforms with higher-order interfaces will emerge, but it’s
almost certain that they will be based on Kubernetes.
This great opportunity potentially comes at a high cost. Countless organizations
have spent many engineering months learning how to deliver their apps with
this new stack, making sense of disparate information from the web. Delaying
new features by months is not exactly the outcome any business wants when
engineers announce that they’re moving to new tools that are supposed to
make them more productive.
This is where this book comes into play, dear reader. Our goal is to help
you transition to delivering cloud native apps quickly. The fundamentals
don’t change: we still need a rock-solid delivery pipeline, which automatically
configures, builds, tests, and deploys code. This book shows you how to do
that in a cloud native way — so you can focus on building great products and
solutions.
Who Is This Book For, and What Does It Cover?
The main goal of this book is to provide a practical roadmap for software
development teams who want to:
• Use Docker containers to package their code,
• Run it on Kubernetes, and
• Continuously deliver all changes.
We don’t spend much time explaining why you should, or should not, use container technologies to ship your applications. We also don’t provide a general reference for using Docker and Kubernetes. When you encounter a Docker or Kubernetes concept that you’re not familiar with, we recommend that you consult the official documentation.
We assume that you’re fairly new to the container technology stack and that
your goal is to establish a standardized and fully automated build, test, and
release process.
We believe that both technology leaders and individual contributors will benefit
from reading this book.
If you are a CTO or otherwise ultimately responsible for delivering working
software to customers, this book will provide you with a clear vision of what a
reliable CI/CD pipeline to Kubernetes looks like, and what it takes to build
one.
If you are a developer or systems administrator, besides understanding the big
picture, you will also find working code and configuration that you can reuse
in your projects.
Chapter 1, “Using Docker for Development and CI/CD”, outlines the key
benefits of using Docker and provides a detailed roadmap to adopting it.
Chapter 2, “Deploying to Kubernetes”, explains what you need to know about
Kubernetes deployments to deliver your containers to production.
Chapter 3, “Best Practices for Cloud Native Applications”, describes how both
our culture and tools related to software delivery need to change to fully benefit
from the agility that containers and cloud can offer.
Chapter 4, “A Complete CI/CD Pipeline”, is a step-by-step guide to imple-
menting a CI/CD pipeline with Semaphore that builds, tests, and deploys a
Dockerized microservice to Kubernetes.
How to Contact Us
We would very much love to hear your feedback after reading this book. What
did you like and learn? What could be improved? Is there something we could
explain further?
A benefit of publishing an ebook is that we can continuously improve it. And
that’s exactly what we intend to do based on your feedback.
You can send us feedback by sending an email to [email protected].
Find us on Twitter: https://twitter.com/semaphoreci
Find us on Facebook: https://facebook.com/SemaphoreCI
Find us on LinkedIn: https://www.linkedin.com/company/rendered-text
1 Using Docker for Development and CI/CD
In 2013, Solomon Hykes showed a demo of the first version of Docker at the PyCon conference in Santa Clara. Since then, the benefits of Docker containers have spread to seemingly every corner of the software industry. While Docker (the project and the company) made containers popular, it was neither the first project to leverage containers, nor will it be the last.
Several years later, we can hopefully see beyond the hype, as some powerful, efficient patterns have emerged for leveraging containers to develop and ship better software, faster.
In this chapter, you will first learn about the benefits you can expect from implementing Docker containers. Then, you will see a realistic roadmap that any organization can follow to attain these benefits.
You can run these three lines on any machine where Docker is installed (Linux,
macOS, Windows), and in a few minutes, you will get the DockerCoins demo app
up and running. DockerCoins was created in 2015; it has multiple components
written in Python, Ruby, and Node.js, as well as a Redis store. Years later,
without changing anything in the code, we can still bring it up with the same
three commands.
This means that onboarding a new team member, or switching from one project to another, can now be quick and reliable. It doesn’t matter if DockerCoins uses Python 2.7 and Node.js 8 while your other apps use Python 3 and Node.js 10, or if your system uses different versions of these languages still; each container is perfectly isolated from the others and from the host system.
We will see how to get there.
1.1.3 Less Risky Releases
Containers can help us to reduce the risks associated with a new release.
When we start a new version of our app by running the corresponding container
image, if something goes wrong, rolling back is very easy. All we have to do is
stop the container, and restart the previous version. The image for the previous
version will still be around and will start immediately.
This is far safer than attempting a code rollback, especially if the new version involved dependency upgrades. Are we sure that we can downgrade to
the previous version? Is it still available on the package repositories? If we are
using containers, we don’t have to worry about that, since our container image
is available and ready.
This pattern is sometimes called immutable infrastructure, because instead of changing our services, we deploy new ones. Initially, immutable infrastructure was implemented with virtual machines: each new release would mean starting a new fleet of virtual machines. Containers make this pattern even easier to adopt.
As a result, we can deploy with more confidence, because we know that if
something goes wrong, we can easily go back to the previous version.
7. The last logical step is continuous deployment to production.
Each step is a self-contained iteration. Some steps are easy, others are more
work; but each of them will improve your workflow.
FROM ruby
RUN gem install sinatra
RUN gem install thin
ADD hasher.rb /
CMD ["ruby", "hasher.rb"]
EXPOSE 80
Once we have a working Dockerfile for an app, we can start using this container
image as the official development environment for this specific service or
component. If we picked a fast-moving one, we will see the benefits very
quickly, since Docker makes library and other dependency upgrades completely
seamless. Rebuilding the entire environment with a different language version
now becomes effortless. And if we realize after a difficult upgrade that the new
version doesn’t work as well, rolling back is just as easy and instantaneous,
because Docker keeps a cache of previous image builds around.
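For instance, trying the hasher service against a newer Ruby is just a matter of editing the FROM line and rebuilding; a sketch of such a session (the image tags are illustrative, and the commands assume the Dockerfile above sits next to hasher.rb):

```
$ docker build -t hasher:ruby-3 .
$ docker run --rm -p 8002:80 hasher:ruby-3
$ # If the upgrade misbehaves, the previous build is still in the local cache:
$ docker run --rm -p 8002:80 hasher:ruby-2.7
```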
1.2.4 Writing a Docker Compose File
A Dockerfile makes it easy to build and run a single container; a Docker
Compose file makes it easy to build and run a stack of multiple containers.
So once each component runs correctly in a container, we can describe the
whole application with a Compose file.
Here’s what the docker-compose.yml for the DockerCoins demo looks like:
rng:
build: rng
ports:
- "8001:80"
hasher:
build: hasher
ports:
- "8002:80"
webui:
build: webui
links:
- redis
ports:
- "8000:80"
volumes:
- "./webui/files/:/files/"
redis:
image: redis
worker:
build: worker
links:
- rng
- hasher
- redis
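With this file in place, the whole stack can be built and started with a couple of commands; a sketch of the typical session (run from the directory containing docker-compose.yml):

```
$ docker-compose up -d
$ docker-compose ps
```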
for the application, and start all the containers in that network. Why use a
private network for the application? Isn’t that a bit overkill?
Since Compose will create a new network for each app that it starts, this lets
us run multiple apps next to each other (or multiple versions of the same app)
without any risk of interference.
This pairs with Docker’s service discovery mechanism, which relies on DNS. When an application needs to connect to, say, a Redis server, it doesn’t need to specify the IP address or FQDN of the Redis server. Instead, it can just use redis as the server host name.
Docker will make sure that the name redis resolves to the IP address of the
Redis container in the current network. So multiple applications can each
have a redis service, and the name redis will resolve to the right one in each
network.
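As a sketch of this pattern in Python (the helper function and the REDIS_HOST fallback are illustrative, not from the book; any Redis client accepts a host name the same way):

```python
import os

def redis_url(host: str = "redis", port: int = 6379) -> str:
    """Build the URL an application would hand to its Redis client.

    Inside the Compose network, the service name "redis" doubles as the
    host name, so no IP address or FQDN is needed.
    """
    return f"redis://{host}:{port}/0"

# Outside the Compose network (e.g. in local tests), the host can be
# overridden through an environment variable:
url = redis_url(os.getenv("REDIS_HOST", "redis"))
```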
1.2.6 End-To-End Testing and QA
When we want to automate a task, it’s a good idea to start by having it done
by a human, and write down the necessary steps. In other words: do things
manually first, but document them. Then, these instructions can be given to
another person, who will execute them. That person will probably ask us some
clarifying questions, which will allow us to refine our manual instructions.
Once these manual instructions are perfectly accurate, we can turn them
into a program (a simple script will often suffice) that we can then execute
automatically.
Follow these principles to deploy test environments, and execute CI (Continuous
Integration) and end-to-end testing, depending on the kind of tests that you
use in your organization. Even if you don’t have automated testing, you surely
have some kind of testing happening before you ship a feature, even if it’s just
someone messing around with the app in staging before your users see it.
In practice, this means that we will document and then automate the deployment of our application, so that anyone can get it up and running by running a script.
Our final deployment scripts will be way simpler to write and to run than
full-blown configuration management manifests, VM images, and so on.
If we have a QA team, they are now empowered to test new releases without
relying on someone else to deploy the code for them.
If you’re doing any kind of unit testing or end-to-end testing, you can now
automate these tasks as well, by following the same principle as we did to
automate the deployment process.
We now have a whole sequence of actions: building images, starting containers,
executing initialization or migration hooks, and running tests. From now on,
we will call this the pipeline, because all these actions have to happen in a
specific order, and if one of them fails, we don’t execute the subsequent stages.
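A first version of such a pipeline can be a plain script that stops at the first failing stage; a sketch (the two test scripts are hypothetical placeholders for whatever test suites you have):

```
#!/bin/sh
set -e                 # abort the pipeline as soon as one stage fails
docker-compose build   # build the container images
docker-compose up -d   # start containers, run initialization/migration hooks
./run-unit-tests.sh    # hypothetical unit test stage
./run-e2e-tests.sh     # hypothetical end-to-end test stage
```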
pipeline can also run on a specific branch, or a specific set of branches.
Each time there are relevant changes, our pipeline will automatically perform
a sequence similar to the following:
• Build new container images;
• Run unit tests on these images (if applicable);
• Deploy them in a temporary environment;
• Run end-to-end tests on the application;
• Make the application available for human testing.
Later in this book, we will see how to actually implement such a pipeline.
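As a preview, here is roughly what such a pipeline looks like in Semaphore’s YAML syntax; this is a sketch, with illustrative block names, commands, and a hypothetical test script:

```yaml
# .semaphore/semaphore.yml (sketch)
version: v1.0
name: DockerCoins pipeline
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804
blocks:
  - name: Build images
    task:
      jobs:
        - name: docker build
          commands:
            - checkout
            - docker-compose build
  - name: End-to-end tests
    task:
      jobs:
        - name: e2e
          commands:
            - checkout
            - docker-compose up -d
            - ./run-e2e-tests.sh   # hypothetical test script
```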
Note that we still don’t require container orchestration for all of this to work. If our application in a staging environment can fit on a single machine, we don’t need to worry about setting up a cluster yet. In fact, thanks to Docker’s layer system, running side-by-side images that share a common ancestry (as images of successive versions of the same component do) is very disk- and memory-efficient; so there is a good chance that we will be able to run many copies of our app on a single Docker Engine.
But this is also the right time to start looking into orchestration, and a platform
like Kubernetes. Again, at this stage we don’t need to roll that out straight to
production; but we could use one of these orchestrators to deploy the staging
versions of our application.
This will give us a low-risk environment where we can ramp up our skills in container orchestration and scheduling, with the same level of complexity as our production environment, minus the volume of requests and data.
1.3 Summary
Building a delivery pipeline with new tools from scratch is certainly a lot of
work. But with the roadmap described above, we can get there one step at a
time, while enjoying concrete benefits at each step.
In the next chapter, we will learn about deploying code to Kubernetes, including
strategies that might not have been possible in your previous technology stack.
2 Deploying to Kubernetes
When getting started with Kubernetes, one of the first commands that you
learn and use is generally kubectl run. Folks who have experience with Docker
tend to compare it to docker run and think: “Ah, this is how I can simply
run a container!”
As it turns out, when you use Kubernetes, you don’t simply run a container. The way in which Kubernetes handles containers depends heavily on which version you are running. You can check the server version with:
$ kubectl version
Kubernetes containers on versions 1.17 and lower
When using a version lower than 1.18, look at what happens after running a
very basic kubectl run command:
$ kubectl run web --image=nginx
deployment.apps/web created
Alright! Then you check what was created on the cluster, and…
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/web-65899c769f-dhtdx 1/1 Running 0 11s
• a pod (web-65899c769f-dhtdx).
Note: you can ignore the service named kubernetes in the example above;
that one already existed before the kubectl run command.
Kubernetes containers in versions 1.18 and higher
When you are running version 1.18 or higher, Kubernetes does indeed create a
single pod. Look how different Kubernetes acts on newer versions:
$ kubectl run web --image=nginx
pod/web created
As you can see, more recent Kubernetes versions behave pretty much in line
with what seasoned Docker users would expect. Notice that no deployments or replica sets are created:
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/web 1/1 Running 0 3m14s
For instance, the containers within a pod can communicate with each other over
localhost. From a network perspective, all the processes in these containers
are local.
But you can never create a standalone container: the closest you can do is
create a pod with a single container in it.
That’s what happens here: when you tell Kubernetes, “create me an NGINX!”,
you’re really saying, “I would like a pod, in which there should be a single
container, using the nginx image.”
# pod-nginx.yml
# Create it with:
# kubectl apply -f pod-nginx.yml
apiVersion: v1
kind: Pod
metadata:
name: web
spec:
containers:
- image: nginx
name: nginx
ports:
- containerPort: 80
name: http
Alright, then, why isn’t there just a pod? Why the replica set and the deployment?
In software container terms, you can say, “I would like a pod named web, in
which there should be a single container, that will run the nginx image.”
If that pod doesn’t exist yet, Kubernetes will create it. If that pod already
exists and matches your spec, Kubernetes doesn’t need to do anything.
With that in mind, how do you scale your web application, so that it runs in
multiple containers or pods?
# pod-replicas.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
name: web-replicas
labels:
app: web
tier: frontend
spec:
replicas: 3
selector:
matchLabels:
tier: frontend
template:
metadata:
labels:
app: web
tier: frontend
spec:
containers:
- name: nginx
image: nginx
ports:
- containerPort: 80
Replica sets are particularly relevant for scaling and high availability.
They matter for scaling because you can update an existing replica set to change the desired number of replicas. Kubernetes will then create or delete pods so that the actual number matches the desired number.
They matter for high availability because Kubernetes will continuously monitor what’s going on in the cluster and ensure that, no matter what happens, you still have the desired number of pods.
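Scaling the replica set defined above up or down is then a single command; a sketch:

```
$ kubectl scale replicaset web-replicas --replicas=5
```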
If a node goes down, taking one of the web pods with it, Kubernetes creates
another pod to replace it. If it turns out that the node wasn’t down, but merely
unreachable or unresponsive for a while, you may have one extra pod when it
comes back. Kubernetes will then terminate a pod to make sure that you still
have the exact requested number.
What happens, however, if you want to change the definition of a pod within
your replica set? For instance, what happens when you want to switch the
image that you are using with a newer version?
Remember: the mission of the replica set is, “Make sure that there are N
pods matching this specification.” What happens if you change that definition?
Suddenly, there are zero pods matching the new specification.
By now you know how a declarative system is supposed to work: Kubernetes
should immediately create N pods matching your new specification. The old
pods would just stay around until you clean them up manually.
2.4 Deployments Drive Replica Sets
It would be nice if the old pods could be removed cleanly and automatically, and if the creation of new pods could happen in a more gradual manner.
This is the exact role of deployments in Kubernetes. At first glance, the specification for a deployment looks very much like the one for a replica set: it features a pod specification, a number of replicas, and a few additional parameters that you’ll read about later in this guide.
# deployment-nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: web
spec:
selector:
matchLabels:
app: nginx
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
Deployments, however, don’t create or delete pods directly. They delegate that
work to one or more replica sets.
When you create a deployment, it creates a replica set, using the exact pod
specification that you gave it.
When you update a deployment and adjust the number of replicas, it passes
that update down to the replica set.
releasing a new version), or the application’s parameters (through command-line
arguments, environment variables, or configuration files).
When you update the pod specification, the deployment creates a new replica
set with the updated pod specification. That replica set has an initial size
of zero. Then, the size of that replica set is progressively increased, while
decreasing the size of the other replica set.
You could imagine that you have a sound mixing board in front of you, and
you are going to fade in (turn up the volume) on the new replica set while
fading out (turn down the volume) on the old one.
During the whole process, requests are sent to pods of both the old and new
replica sets, without any downtime for your users.
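You can watch this hand-over happen from the command line; a sketch:

```
$ kubectl rollout status deployment web
```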
That’s the big picture, but there are many little details that make this process
even more robust.
application keeps running with the old version until you address the issue.
Note: if there is no readiness probe, then the container is considered ready as soon as it has started. So make sure that you define a readiness probe if you want to leverage that feature!
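A readiness probe is declared per container in the pod template; here is a minimal sketch (the path and timings are examples, not from the book):

```yaml
spec:
  containers:
  - name: nginx
    image: nginx
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5   # wait before the first check
      periodSeconds: 5         # re-check every 5 seconds
```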
This, however, implies that you have some spare capacity available on your cluster. It might be the case that you can’t afford to run any extra pods, because your cluster is full to the brim, and that you prefer to shut down an old pod before starting a new one.
MaxSurge indicates how many extra pods you are willing to run during a
rolling update, while MaxUnavailable indicates how many pods you can lose
during the rolling update. Both parameters are specific to a deployment:
each deployment can have different values for them. Both parameters can be
expressed as an absolute number of pods, or as a percentage of the deployment
size; and both parameters can be zero, but not at the same time.
Below, you’ll find a few typical values for MaxSurge and MaxUnavailable and
what they mean.
Setting MaxUnavailable to 0 means, “do not shut down any old pod before a new one is up and ready to serve traffic.”
Setting MaxSurge to 100% means, “immediately start all the new pods”, implying that you have enough spare capacity on your cluster and that you want to go as fast as possible.
The default values for both parameters are 25%, meaning that when updating a deployment of size 100, 25 new pods are immediately created, while 25 old pods are shut down. Each time a new pod comes up and is marked ready, another old pod can be shut down. Each time an old pod has completed its shutdown and its resources have been freed, another new pod can be created.
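In a deployment manifest, both parameters live under the rolling update strategy; a sketch using the default values discussed above:

```yaml
spec:
  replicas: 100
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # up to 25 extra pods during the update
      maxUnavailable: 25%  # at most 25 pods below the desired count
```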
• kubectl get events -w
Then, create, scale, and update a deployment with the following commands:
kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=10
kubectl set image deployment web nginx=that-image-does-not-exist
You can see that the deployment is stuck, but 80% of the application’s capacity
is still available.
If you run kubectl rollout undo deployment web, Kubernetes will go back
to the initial version, running the nginx image.
with the following command:
kubectl expose deployment web --port=80
The service will have its own internal IP address (denoted by the name ClusterIP), and connections to this IP address on port 80 will be load-balanced across all the pods of this deployment.
In fact, these connections will be load-balanced across all the pods matching the service’s selector; in this case, that selector is run=web.
When you edit the deployment and trigger a rolling update, a new replica set is created. This replica set will create pods whose labels will include, among others, run=web. As such, these pods will receive connections automatically.
This means that during a rollout, the deployment doesn’t reconfigure or inform the load balancer that pods are started and stopped. It happens automatically through the selector of the service associated with the load balancer.
If you’re wondering how probes and healthchecks play into this, a pod is added
as a valid endpoint for a service only if all its containers pass their readiness
check. In other words, a pod starts receiving traffic only once it’s actually
ready for it.
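You can observe this by listing the service’s endpoints while a rollout is in progress; a sketch:

```
$ kubectl get endpoints web
```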
You can achieve blue/green deployment by creating multiple deployments (in the Kubernetes sense), and then switching from one to the other by changing the selector of your service.
Let’s see how this would work in a quick demo.
The following commands will create two deployments, blue and green, respectively using the nginx and httpd container images:
kubectl create deployment blue --image=nginx
kubectl create deployment green --image=httpd
Then, you create a service called web, which initially won’t send traffic anywhere:
kubectl create service clusterip web --tcp=80
Now, you can update the selector of service web by running kubectl edit
service web. This will retrieve the definition of service web from the Kuber-
netes API, and open it in a text editor. Look for the section that says:
selector:
app: web
Replace web with blue or green, to your liking. Save and exit. kubectl will
push your updated definition back to the Kubernetes API, and voilà! Service
web is now sending traffic to the corresponding deployment.
You can verify for yourself by retrieving the IP address of that service with
kubectl get svc web and connecting to that IP address with curl.
The modification that you did with a text editor can also be done entirely from
the command line, using for instance kubectl patch as follows:
kubectl patch service web -p '{"spec": {"selector": {"app": "green"}}}'
The advantage of blue/green deployment is that the traffic switch is almost
instantaneous, and you can roll back to the previous version just as fast by
updating the service definition again.
This technique, which would be fairly involved to set up, ends up being
relatively straightforward thanks to Kubernetes’ native mechanisms of labels
and selectors.
It’s worth noting that in the previous example, we changed the service’s selector,
but it is also possible to change the pods’ labels.
For instance, if a service’s selector is set to look for pods with the label status=enabled, you can apply such a label to a specific pod with:
kubectl label pod frontend-aabbccdd-xyz status=enabled
You can apply labels en masse as well, for instance:
kubectl label pods -l app=blue,version=v1.5 status=enabled
And you can remove them just as easily:
kubectl label pods -l app=blue,version=v1.4 status-
2.11 Summary
You now know a few techniques that can be used to deploy with more confidence.
Some of these techniques simply reduce the downtime caused by the deployment
itself, meaning that you can deploy more often, without being afraid of affecting
your users.
Some of these techniques give you a safety belt, preventing a bad version from taking down your service. And some others give you extra peace of mind, like hitting the “SAVE” button in a video game before trying a particularly difficult sequence, knowing that if something goes wrong, you can always go back to where you were.
Kubernetes makes it possible for developers and operations teams to leverage these techniques, which leads to safer deployments. If the risk associated with deployments is lower, you can deploy more often, incrementally, and more easily see the results of your changes as you implement them, instead of deploying once a week or once a month.
The end result is higher development velocity, lower time-to-market for fixes and new features, and better availability of your applications. That is the whole point of implementing containers in the first place.
3 CI/CD Best Practices for Cloud Native Applications
Engineering leaders strive to deliver bug-free products to customers as productively as possible. Today's cloud-native technology empowers teams to iterate, at scale, faster than ever. But to experience the promised agility, we need to change how we deliver software.
“CI/CD” stands for the combined practices of Continuous Integration (CI)
and Continuous Delivery (CD). It is a timeless way of developing software in
which you’re able to release updates at any time in a sustainable way. When
changing code is routine, development cycles are faster. Work is more fulfilling.
Companies can improve their products many times per day and delight their
customers.
In this chapter, we’ll review the principles of CI/CD and see how we can apply
them to developing cloud native applications.
3.1 What Makes a Good CI/CD Pipeline
3.1.1 Speed
Pipeline velocity manifests itself in several ways:
How quickly do we get feedback on the correctness of our work? If
it’s longer than the time it takes to get a cup of coffee, pushing code to CI
becomes too distracting. It’s like asking a developer to join a meeting in the
middle of solving a problem. Developers will work less effectively due to context
switching.
How long does it take us to build, test and deploy a simple code
commit? Take a project with a total time of one hour to run CI and deployment
and a team of about a dozen engineers. Such CI/CD runtime means that the
entire team has a hard limit of up to six or seven deploys in a workday. In other
words, there is less than one deploy per developer per day available. The team
will settle on a workflow with less frequent and thus more risky deployments.
This workflow is in stark contrast to the rapid iterations that businesses today
need.
How quickly can we set up a new pipeline? Difficulty with scaling CI/CD infrastructure or reusing existing configuration creates friction. You make the
best use of the cloud by writing software as a composition of small services.
Developers need new CI/CD pipelines often, and they need them fast. The
best way to solve this is to let developers create and own CI/CD pipelines for
their projects.
For this to happen, the CI/CD tool of choice should fit into the existing
development workflows. Such a CI/CD tool should support storing all pipeline
configuration as code. The team can review, version, and reuse pipelines like
any other code. But most importantly, CI/CD should be easy to use for every
developer. That way, projects don’t depend on individuals or teams who set
up and maintain CI for others.
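With Semaphore, for example, the pipeline configuration lives in the repository as YAML. A minimal sketch (names are illustrative):

```yaml
# .semaphore/semaphore.yml
version: v1.0
name: Example CI pipeline
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804
blocks:
  - name: Test
    task:
      jobs:
        - name: Unit tests
          commands:
            - checkout
            - npm test
```

Because the file is code, it can be reviewed in pull requests and copied into new projects as a starting point.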
3.1.2 Reliability
A reliable pipeline always produces the same output for a given input, and with consistent runtime. Intermittent failures cause intense frustration among developers.
Engineers like to do things on their own, and often they opt to maintain their
CI/CD system. But operating CI/CD that provides on-demand, clean, stable,
and fast resources is a complicated job. What seems to work well for one project
or a few developers usually breaks down later. The team and the number of projects grow, the technology stack changes. Then someone from management
realizes that by delegating that task, the team could spend more time on the
actual product. At that point, if not earlier, the engineering team moves from
a self-hosted to a cloud-based CI/CD solution.
3.1.3 Completeness
Any increase in automation is a positive change. However, a CI/CD pipeline
needs to run and visualize everything that happens to a code change — from
the moment it enters the repository until it runs in production. This requires
the CI/CD tool to be able to model both simple and, when needed, complex
workflows. That way, manual errors are all but impossible.
For example, it’s not uncommon to have the pipeline run only the build and
test steps. Deployment remains a manual operation, often performed by a
single person. This is a relic of the past when CI tools were unable to model delivery workflows.
Today a service like Semaphore provides features like:
• Secret management
• Multi-stage pipelines
• Container registry
• Connections to multiple environments (staging, production, etc.)
• Audit log
There is no longer a reason not to automate the entire software delivery process.
3.2.2 You Build It, You Run It
In the seminal 2006 interview with ACM, Werner Vogels, Amazon's CTO, pioneered the mindset of you build it, you run it. The idea is that developers should be in direct contact with the operation of their software, which, in turn, puts them in close contact with customers.
The critical insight is that involving developers in the customer feedback loop is essential for improving the quality of the service, which ultimately leads to better business results.
Back then, that view was radical. The tooling required was missing. So only
the biggest companies could afford to invest in building software that way.
Since then, the philosophy has passed the test of time. Today the best product
organizations are made of small autonomous teams. They own the full lifecycle
of their services. They have more freedom to react to feedback from users and
make the right decisions quickly.
Being responsible for the quality of software requires being responsible for
releasing it. This breaks down the silos between traditional developers and
operations groups. Everyone must work together to achieve high-level goals.
It’s not rare that in newly formed teams there is no dedicated operations person.
Instead, the approach is to do “NoOps”. Developers who write code also own
the delivery pipeline. The cloud providers take care of hosting and monitoring
production services.
As we’ve seen in chapter 1, containers allow us to use one environment in
development, CI/CD, and production. There’s no need to set up and maintain
infrastructure or sacrifice environmental fidelity.
pipeline, even while the application has no real functionality. The pipeline will
discourage any manual or risky processes from creeping in and slowing you
down in the future.
If you have an existing project with some technical debt, you can start by
committing to a “no broken windows” policy on the CI pipeline. When someone
breaks master, they should drop what they’re doing and fix it.
Every test failure is a bug. It needs to be logged, investigated, and fixed.
Assume that the defect is in application code unless tests can prove otherwise.
However, sometimes the test itself is the problem. Then the solution is to
rewrite it to be more reliable.
The process of cleaning up the master build usually starts out being frustrating. But if you're committed and stick to the process, over time, the pain goes away.
One day you reach a stage when a failed test means there is a real bug. You
don’t have to re-run the CI build to move on with your work. No one has to
impose a code freeze. Days become productive again.
productivity: we want feedback as soon as possible. Fast feedback loops keep
us in a state of flow, which is the source of our happiness at work.
So, it’s helpful to establish criteria for how fast should a CI process be:
Proper continuous integration is when it takes you less than 10 minutes from
pushing new code to getting results.
The 10-minute mark is about how long a developer can wait without getting too distracted. It's also the threshold adopted by one of the pioneers of continuous delivery, Jez Humble. He performs the following informal poll at conferences⁴.
First, he asks his audience to raise their hands if they do continuous integration.
Usually, most of the audience raise their hands.
He then asks them to keep their hands up if everyone on their team commits
and pushes to the master branch at least daily.
Over half the hands go down. He then asks them to keep their hands up if
each such commit causes an automated build and test. Half the remaining
hands are lowered.
Finally, he asks if, when the build fails, it’s usually back to green within ten
minutes.
With that last question, only a few hands remain. Those are the people who
pass the informal CI certification test.
There are a couple of tactics which you can employ to reduce CI build time:
• Caching: Project dependencies should be independently reused across
builds. When building Docker containers, use the layer caching feature
to reuse known layers from the registry.
• Built-in Docker registry: A container-native CI solution should include a built-in registry. This saves a lot of money compared to using the registry provided by your cloud provider. It also speeds up CI, often by several minutes.
• Test parallelization: A large test suite is the most common reason why
CI is slow. The solution is to distribute tests across as many parallel jobs
as needed.
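In Semaphore, for instance, a job can be fanned out with the parallelism setting; each instance receives SEMAPHORE_JOB_INDEX and SEMAPHORE_JOB_COUNT so it can pick its slice of the suite. A sketch, with an illustrative splitting command:

```yaml
jobs:
  - name: Tests
    parallelism: 4
    commands:
      - checkout
      - ./run_tests --split $SEMAPHORE_JOB_INDEX/$SEMAPHORE_JOB_COUNT
```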
⁴ What is Proper Continuous Integration, Semaphore: https://semaphoreci.com/blog/2017/03/02/what-is-proper-continuous-integration.html
3.2.3 Build Only Once and Promote the Result Through the Pipeline
In the context of container-based services, this principle means building containers only once and then reusing the images throughout the pipeline.
For example, consider a case where you need to run tests in parallel and then
deploy a container. The desired pipeline should build the container image in
the first stage. The later stages of testing and deployment reuse the container
from the registry. Ideally, the registry would be part of the CI service to save
costs and avoid network overhead.
The same principle applies to any other assets that you need to create from
source code and use later. The most common are binary packages and website
assets.
Besides speed, there is the aspect of reliability. The goal is to be sure that
every automated test ran against the artifact that will go to production.
To support such workflows, your CI system should be able to:
• Execute pipelines in multiple stages.
• Run each stage in an identical, clean, and isolated environment.
• Version and upload the resulting artifact to an artifact or container
storage system.
• Reuse the artifacts in later stages of the pipeline.
These steps ensure that the build doesn’t change as it progresses through the
system.
This strategy allows developers to get feedback on trivial errors in seconds. It
also encourages all team members to understand the performance impact of
individual tests as the code base grows.
There are additional tactics that you can use with your CI system to get fast
feedback:
Conditional stage execution lets you defer running certain parts of your
build for the right moment. For example, you can configure your CI to run a
subset of end-to-end tests only if a related component has changed.
In the pipeline above, backend and frontend tests run if code changed in the corresponding directories. End-to-end tests run if either of the two has passed and none has failed.
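In Semaphore this is expressed with run conditions on a block, for example with the change_in function (paths and commands are illustrative):

```yaml
blocks:
  - name: Backend tests
    run:
      when: "change_in('/backend/')"
    task:
      jobs:
        - name: Unit tests
          commands:
            - checkout
            - make test-backend
```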
A fail-fast strategy gives you instant feedback when a job fails. CI stops
all currently running jobs in the pipeline as soon as one of the jobs has failed.
This approach is particularly useful when running parallel jobs with variable
duration.
Automatic cancelation of queued builds can help in situations when you
push some changes, only to realize that you have made a mistake. So you push
a new revision immediately but would then need to wait for twice as long for
feedback. Using automatic cancelations, you can get feedback on revisions that
44
matter while skipping the intermediate ones.
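Both behaviors can be enabled at the pipeline level in Semaphore with a few lines of configuration, roughly like this:

```yaml
# Stop all running jobs on the first failure (except on master),
# and cancel queued workflows when a newer revision arrives.
fail_fast:
  stop:
    when: "branch != 'master'"
auto_cancel:
  queued:
    when: "true"
```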
if current_user.can_use_feature?("new-feature")
render_new_feature_widget
end
So you don’t even load the related code unless the user is a developer working
on it, or a small group of beta testers. No matter how unfinished the code
is, nobody will be affected. So you can work on it in short iterations and
make sure each iteration is well integrated with the system as a whole. Such
integrations are much easier to deal with than a big-bang merge.
No one may touch the service’s repository for months. And then, one day,
there’s an urgent need to deploy a change. The CI build unexpectedly fails:
there are security vulnerabilities in several dependencies, some of which have
introduced breaking changes. What seemed like a minor update becomes a
high-risk operation that may drag into days of work.
To prevent this from happening, you can schedule a daily CI build. A
scheduled build is an excellent way of detecting any issues with dependencies
early, regardless of how often your code changes (or doesn’t).
You can further support the quality of your code by incorporating in your CI
pipeline:
• Code style checkers
• Code smell detectors
• Security scanners
And running them first, before unit tests.
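In pipeline terms, that ordering is just a matter of placing a block with the checkers before the test blocks. A sketch, with the tools named here as examples only:

```yaml
blocks:
  - name: Lint
    task:
      jobs:
        - name: Style and security checks
          commands:
            - checkout
            - npx jshint src/
            - npm audit
  - name: Unit tests
    task:
      jobs:
        - name: Tests
          commands:
            - checkout
            - npm test
```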
3.3.2 Developers Can Deploy to Production-Like Staging Environments at a Push of a Button
An ideal CI/CD pipeline is almost invisible. Developers get feedback from
tests without losing focus and deploy with a single command or button press.
There’s no delay between intent and actualization. Anything that gets in the
way of that ideal state is undesirable.
Developers should be the ones who deploy their code. This is in line with
the general principle of “You build it, you run it”. Delegating that task to
anyone else simply makes the process an order of magnitude slower and more
complicated.
Developers who build containerized microservices need to have a staging Kubernetes cluster where they can deploy at will. Alternatively, they need a way to deploy a canary build, which we describe later in the book.
Other environments are still not the same as production, since reproducing
the same infrastructure and load is expensive. However, the differences are
manageable, and we get to avoid most of the errors that would have occurred
with non-identical environments.
Chapter 1 includes a roadmap for adopting Docker for this purpose. Chapter 2 describes some of the advanced deployment strategies that you can use with Kubernetes. Strategies like blue-green and canary deployment reduce the risk
of bad deploys. Now that we know what a proper CI/CD pipeline should look
like, it’s time to start implementing it.
4 Implementing a CI/CD Pipeline
Going to a restaurant and looking at the menu with all those delicious dishes
is undoubtedly fun. But in the end, we have to pick something and eat it—the
whole point of going out is to have a nice meal. So far, this book has been like
a menu, showing you all the possibilities and their ingredients. In this chapter,
you are ready to order. Bon appétit.
Our goal is to get an application running on Kubernetes using CI/CD best
practices.
• login: we need to log in before we can push images. Takes a username,
password, and an optional registry URL.
• build: creates a custom image from a Dockerfile.
• tag: renames an image or changes its tag.
• exec: starts a process in an already-running container. Compare it with
docker run which starts a new container instead.
• git (https://git-scm.com) to manage the code.
• docker (https://www.docker.com) to run containers.
• kubectl (https://kubernetes.io/docs/tasks/tools/install-kubectl/) to control the Kubernetes cluster.
• curl (https://curl.haxx.se) to test the application.
$ docker-compose up --build
Docker Compose builds and runs the container image as required. It also
downloads and starts a PostgreSQL database for you.
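A docker-compose.yml for a setup like this might look roughly as follows; the demo project's actual file may differ:

```yaml
version: "3"
services:
  app:
    build: .            # build the image from the local Dockerfile
    ports:
      - "3000:3000"
    environment:
      DB_HOST: db
      DB_USER: postgres
    depends_on:
      - db
  db:
    image: postgres:11  # the database is pulled, not built
    environment:
      POSTGRES_HOST_AUTH_METHOD: trust
```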
The included Dockerfile builds a container image from an official Node.js
image:
FROM node:12.16.1-alpine3.10
USER $APP_USER
WORKDIR $APP_HOME
EXPOSE 3000
CMD ["node", "src/app.js"]
4.2.4 Reviewing Kubernetes Manifests
In chapter 2, we learned that Kubernetes is a declarative system: instead of telling it what to do, we state what we want and trust it knows how to get there.
The manifests directory contains all the Kubernetes manifest files.
service.yml describes the LoadBalancer service. It forwards traffic from port 80 (HTTP) to port 3000.
apiVersion: v1
kind: Service
metadata:
name: addressbook-lb
spec:
selector:
app: addressbook
type: LoadBalancer
ports:
- port: 80
targetPort: 3000
deployment.yml describes the application deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: $deployment
spec:
replicas: $replicas
selector:
matchLabels:
app: addressbook
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
template:
metadata:
labels:
app: addressbook
deployment: $deployment
spec:
containers:
- name: addressbook
image: $img
readinessProbe:
httpGet:
path: /ready
port: 3000
env:
- name: NODE_ENV
value: "production"
- name: PORT
value: "$PORT"
- name: DB_SCHEMA
value: "$DB_SCHEMA"
- name: DB_USER
value: "$DB_USER"
- name: DB_PASSWORD
value: "$DB_PASSWORD"
- name: DB_HOST
value: "$DB_HOST"
- name: DB_PORT
value: "$DB_PORT"
- name: DB_SSL
value: "$DB_SSL"
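Note that $deployment, $replicas, and the other $-values are not Kubernetes syntax: the manifest is a template whose placeholders must be filled in before it reaches kubectl apply. A minimal sketch of that step, using a shell heredoc in place of whatever templating the demo project actually uses:

```shell
# Fill in the template's variables; in a real pipeline the output
# would be piped into: kubectl apply -f -
deployment="addressbook-canary"
replicas=1
cat <<EOF
metadata:
  name: $deployment
spec:
  replicas: $replicas
EOF
```

Tools like envsubst do the same substitution for a whole file.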
The CI pipeline performs the following steps:
• Git checkout: Get the latest source code.
• Docker pull: Get the latest available application image, if it exists, from
the CI Docker registry. This optional step decreases the build time in
the following step.
• Docker build: Create a Docker image.
• Test: Start the container and run tests inside.
• Docker push: If all tests pass, push the accepted image to the production registry.
In this process, we’ll use Semaphore’s built-in Docker registry. This is faster
and cheaper than using a registry from a cloud vendor to work with containers
in the CI/CD context.
We can do a canary deployment by connecting the canary pods to the same
load balancer as the rest of the pods. As a result, a set fraction of user traffic
goes to the canary. For example, if we have nine stable pods and one canary
pod, 10% of the users would get the canary release.
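The arithmetic is just the canary's share of the pod count, which can be sanity-checked in the shell:

```shell
# Traffic share of the canary = canary pods / total pods.
STABLE=9
CANARY=1
echo "$(( 100 * CANARY / (STABLE + CANARY) ))% of traffic hits the canary"
```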
When you deploy v2 as a canary, you scale down the number of v1 pods to two, to keep the total number of pods at three.
Then, you can start a rolling update to version v2 on the stable deployment.
One at a time, all its pods are updated and restarted, until they are all running
on v2 and you can get rid of the canary.
4.4 Implementing a CI/CD Pipeline With Semaphore
In this section, we’ll learn about Semaphore and how to use it to build cloud-
based CI/CD pipelines.
4.4.1 Creating a Semaphore Account
To get started with Semaphore:
• Go to https://semaphoreci.com and click to sign up with your GitHub
account.
• GitHub will ask you to let Semaphore access your profile information.
Allow this so that Semaphore can create an account for you.
• Semaphore will walk you through the process of creating an organization.
Since software development is a team sport, all Semaphore projects
belong to an organization. Your organization will have its own domain,
for example, awesomecode.semaphoreci.com.
• Semaphore will ask you to choose between a time-limited free trial with
unlimited capacity, a free plan, and an open-source plan. Since we’re going
to work with an open-source repository, you can choose the open-source
option.
• Finally, you’ll be greeted with a quick product tour.
To keep things simple, select the “Public repositories” option. If you later
decide that you want to use Semaphore with your private projects as well, you
can extend the permission at any time.
Next, Semaphore will present you with a list of repositories to choose from as the source of your project:
The next screen lets you invite collaborators to your project. Semaphore mirrors the access permissions of GitHub, so if you add people to the GitHub repository later, you can "sync" them inside project settings on Semaphore.
Click on Go to Workflow Builder. Semaphore will ask you if you want to use
the existing pipelines or create one from scratch. At this point, you can choose
to use the existing configuration to get directly to the final workflow. In this
chapter, however, we want to learn how to create the pipelines so we’ll make a
fresh start.
Click on the option to configure the project from scratch.
Semaphore will immediately start the workflow. Wait a few seconds and your
first Docker image is ready, congratulations!
Since we haven’t told Semaphore where to store the image yet, it’s lost as soon
63
as the job ends. We’ll correct that next.
See the Edit Workflow button on the top right corner? Click it to open the
Workflow Builder.
Now it’s a good moment to learn the basic concepts of Semaphore by exploring
the Workflow Builder.
Pipelines
Pipelines are represented in Workflow Builder as big gray boxes. Pipelines
organize the workflow in blocks that are executed from left to right. Each
pipeline usually has a specific objective such as test, build, or deploy. Pipelines
can be chained together to make complex workflows.
Agent
The agent is the combination of hardware and software that powers the pipeline. The machine type determines the number of CPUs and the memory allocated to the virtual machine⁶. The operating system is controlled by the Environment Type and OS Image settings.
The default machine is called e1-standard-2 and has 2 CPUs, 4 GB RAM,
and runs a custom Ubuntu 18.04 image.
⁶ To see all the available machines, go to https://docs.semaphoreci.com/ci-cd-environment/machine-types
Jobs and Blocks
Blocks and jobs define what to do at each step. Jobs define the commands that
do the work. Blocks contain jobs with a common objective and shared settings.
Jobs inherit their configuration from their parent block. All the jobs in a block
run in parallel, each in its isolated environment. If any of the jobs fails, the
pipeline stops with an error.
Blocks run sequentially: once all the jobs in a block complete, the next block starts.
Each line in the job is a command to execute. The first command in the job is checkout, which is a built-in script that clones the repository at the correct revision⁷. The next command, docker build, builds the image using our Dockerfile.
Note: Long commands have been broken down into two or more lines with backslash (\) to fit on the page. Semaphore expects one command per line, so when typing them, remove the backslashes and newlines.

⁷ You can find the complete Semaphore toolbox at https://docs.semaphoreci.com/reference/toolbox-reference
Replace the contents of the job with the following commands:
checkout
docker login -u $SEMAPHORE_REGISTRY_USERNAME \
-p $SEMAPHORE_REGISTRY_PASSWORD $SEMAPHORE_REGISTRY_URL
docker pull \
$SEMAPHORE_REGISTRY_URL/semaphore-demo-cicd-kubernetes:latest || true
docker build \
--cache-from $SEMAPHORE_REGISTRY_URL/semaphore-demo-cicd-kubernetes:latest \
-t $SEMAPHORE_REGISTRY_URL/semaphore-demo-cicd-kubernetes:$SEMAPHORE_WORKFLOW_ID .
docker push \
$SEMAPHORE_REGISTRY_URL/semaphore-demo-cicd-kubernetes:$SEMAPHORE_WORKFLOW_ID
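The tag deserves attention: instead of latest, the pushed image is tagged with the workflow ID, so every run produces a uniquely addressable image that later stages (and rollbacks) can refer to. The values below are illustrative; Semaphore injects the real ones into the job environment:

```shell
# Compose the image reference the same way the pipeline does.
SEMAPHORE_REGISTRY_URL="registry.semaphoreci.com"
SEMAPHORE_WORKFLOW_ID="0dd982e8-4b63-839f-a03e2419c2ac"
echo "$SEMAPHORE_REGISTRY_URL/semaphore-demo-cicd-kubernetes:$SEMAPHORE_WORKFLOW_ID"
```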
Now that we have a Docker image that we can test, let's add a second block. Click on the +Add Block dotted box.
The Test block will have three jobs:
• Static tests.
• Integration tests.
• Functional tests.
The general sequence is the same for all tests:
1. Pull the image from the registry.
2. Start the container.
3. Run the tests.
Blocks can have a prologue in which we can place shared initialization commands.
Open the prologue section on the right side of the block and type the following
commands, which will be executed before each job:
Next, rename the first job as “Unit test” and type the following command,
which runs JSHint, a static code analysis tool:
Next, click on the +Add another job link below the job to create a new one
called “Functional test”. Type these commands:
This job tests two things: that the container connects to the database (ping)
and that it can create the tables (migrate). Obviously, we’ll need a database
for this to work; fortunately, we have sem-service, which lets us start database
engines like MySQL, Postgres, or MongoDB with a single command⁹.
Finally, add a third job called “Integration test” and type these commands:
This last test runs the code in src/database.test.js, which checks if the
application can write and delete rows in the database.
⁹ For the complete list of services sem-service can manage, check https://docs.semaphoreci.com/ci-cd-environment/sem-service-managing-databases-and-services-on-linux/
Create the third block in the pipeline and call it “Push”. This last job will tag
the current Docker image as latest. Type these commands in the job:
This completes the setup of the CI pipeline.
Wait until the pipeline is complete, then go to the top level of the project. Click on the Docker Registry button and open the repository to verify that the Docker image is there.
4.5 Provisioning a Kubernetes Cluster
In this book we will show you how to deploy to Kubernetes hosted on three public cloud providers: Amazon AWS, Google Cloud Platform, and DigitalOcean. With small modifications, the process will work with any other cloud or Kubernetes instance.
We’ll deploy the application in a three-node Kubernetes cluster. You can pick
a different size based on your needs, but you’ll need at least three nodes to run
an effective canary deployment with rolling updates.
• DOCKER_PASSWORD with the corresponding password.
Open a terminal and sign in to AWS:
$ aws configure
AWS Access Key ID: TYPE YOUR ACCESS KEY ID
AWS Secret Access Key: TYPE YOUR SECRET ACCESS KEY
Default region name: TYPE A REGION
• In the Connectivity tab, whitelist the 0.0.0.0/0 network.
• Go to the Users & Databases tab and create a database called “demo”
and a user named “demouser”.
• In the Overview tab, take note of the PostgreSQL IP address and port.
• DB_PORT points to the database port (default is 5432).
• DB_SCHEMA for AWS should be "postgres"; for the other clouds its value should be "demo".
• DB_USER for the database user.
• DB_PASSWORD with the password.
• DB_SSL should be "true" for DigitalOcean; it can be left empty for the rest.
5. Click on Save changes.
Check the Enable automatic promotion box. Now we can define the following auto-starting condition for the new pipeline:
result = 'passed' and (branch = 'master' or tag =~ '^hotfix*')
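The same condition can also be kept in the pipeline file itself; in Semaphore's YAML, promotions look roughly like this (the file name is illustrative):

```yaml
promotions:
  - name: Canary deployment
    pipeline_file: deploy-canary.yml
    auto_promote:
      when: "result = 'passed' and (branch = 'master' or tag =~ '^hotfix*')"
```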
In the new pipeline, click on the first block. Let’s call it “Push”. The push
block takes the Docker image that we built earlier and uploads it to Docker
Hub. The secrets and the login command will vary depending on the cloud of
choice. For DigitalOcean, we’ll use Docker Hub as a repository:
Open the Secrets section and check the dockerhub secret.
Type the following commands in the job:
Create a new block called “Deploy” and enable secrets:
• dockerhub to communicate with Docker Hub;
• db-params to use the cloud database;
• do-key which is the cloud-specific access token.
Open the Environment Variables section and create a variable called
CLUSTER_NAME with the DigitalOcean cluster name (semaphore-demo-cicd-kubernetes).
To connect with the DigitalOcean cluster, we can use the official doctl tool,
which comes preinstalled in Semaphore.
First, type these commands in the prologue:
• Create a load balancer service with kubectl apply.
• Execute apply.sh, which creates the canary deployment.
• Reduce the size of the stable deployment with kubectl scale.
Create a third block called “Functional test and migration” and enable the
do-key secret. Repeat the environment variables and prologue steps from
the previous block. This is the last block in the pipeline and it runs some
automated tests on the canary. By combining kubectl get pod and kubectl
exec, we can run commands inside the pod.
Type the following commands in the job:
4.8 Your First Release
So far, so good. Let's see where we are: we built the Docker image, and, after testing it, we've set up the one-pod canary deployment pipeline. In this section, we'll extend the workflow with a stable deployment pipeline.
Create the “Deploy to Kubernetes” block with the do-key, db-params, and
dockerhub secrets. Also, create the CLUSTER_NAME variable and repeat the
same commands in the prologue as we did in the previous step.
In the job command box, type the following lines to make the rolling deployment
and delete the canary pods:
Good! We’re done with the release pipeline.
the deployment continues. If not, after collecting the necessary error reports and stack traces, we roll back and regroup.
Let’s say we decide to go ahead. So go on and hit the Promote button next to
the stable pipeline.
While the block runs, you should see both the existing canary and a new
“addressbook-stable” deployment:
One at a time, the number of replicas should increase until reaching the target of three:
We can use curl to test the API endpoint directly. For example, to create a
person in the addressbook:
$ curl -w "\n" -X PUT -d "firstName=Sammy&lastName=David Jr" 34.68.150.168/person
{
"id": 1,
"firstName": "Sammy",
"lastName": "David Jr",
"updatedAt": "2019-11-10T16:48:15.900Z",
"createdAt": "2019-11-10T16:48:15.900Z"
}
The rollback job collects information to help diagnose the problem. Create a new block called "Rollback Canary", import the do-key secret, and create CLUSTER_NAME. Repeat the prologue commands like we did before and type these lines in the job:
The first four lines print out information about the cluster. The last two undo the changes by scaling up the stable deployment and removing the canary.
Run the workflow once more and make a canary release, but this time try the rollback pipeline by clicking on its promote button:
And we’re back to normal, phew! Now its time to check the job logs to see
what went wrong and fix it before merging to master again.
But what if we discover a problem after we deploy a stable release?
Let’s imagine that a defect sneaked its way into production. It can happen,
maybe there was some subtle bug that no one found out hours or days in. Or
perhaps some error not picked up by the functional test. Is it too late? Can
we go back to the previous version?
The answer is yes, we can go back to the previous version, but manual intervention is required. Remember that we tagged each Docker image with a unique ID (the
SEMAPHORE_WORKFLOW_ID)? We can re-promote the stable deployment pipeline
from the last good version in Semaphore. If the Docker image is no longer in
the registry, we can just regenerate it using the Rerun button in the top right
corner.
And services:

$ kubectl get services

And the log output of the pods using:

$ kubectl logs <pod-name>

If you need to jump into one of the containers, you can start a shell, as long as the pod is running, with:

$ kubectl exec -it <pod-name> -- sh

To access a pod's network from your machine, forward a port with port-forward, for instance:

$ kubectl port-forward <pod-name> 8080:3000
These are some common error messages that you might run into:
• Manifest is invalid: it usually means that the manifest YAML syntax is incorrect. Use kubectl's --dry-run or --validate options to verify the manifest.
• ImagePullBackOff or ErrImagePull: the requested image is invalid or
was not found. Check that the image is in the registry and that the
reference in the manifest is correct.
• CrashLoopBackOff: the application is crashing, and the pod is shutting
down. Check the logs for application errors.
• Pod never leaves Pending status: this could mean that one of the Kubernetes secrets is missing.
• Log message says that "container is unhealthy": this message may indicate that the pod is not passing a probe. Check that the probe definitions are correct.
• Log message says that there are “insufficient resources”: this may happen
when the cluster is running low on memory or CPU.
4.9 Summary
You have learned how to put together the puzzle of CI/CD, Docker, and Kubernetes into a practical application. In this chapter, you have put into practice all that you've learned in this book:
• How to set up pipelines in Semaphore CI/CD and use them to deploy to the cloud.
• How to build Docker images and start a dev environment with the help
of Docker Compose.
• How to do canary deployments and rolling updates in Kubernetes.
• How to scale deployments and how to recover when things don’t go as
planned.
Each of the pieces had its role: Docker brings portability, Kubernetes adds
orchestration, and Semaphore CI/CD drives the test and deployment process.
5 Final Words
Congratulations, you’ve made it through the entire book. We wrote it with
the goal to help you deliver great cloud native applications. Now it’s up to
you—go make something awesome!