26 Operating ReplicaSets
In this lesson, we will explore the operating procedure of ReplicaSets and see its self-healing property in action.
• Deleting ReplicaSets
• Re-using the Same Pods
• Updating the Definition
• Self-healing in Action
• Destroying a Pod
• Removing a label
• Re-adding the Label
Deleting ReplicaSets #
What would happen if we delete the ReplicaSet? As you might have guessed,
both the ReplicaSet and everything it created (the Pods) would disappear
with a single kubectl delete -f rs/go-demo-2.yml command.
However, since ReplicaSets and Pods are loosely coupled objects with
matching labels, we can remove one without deleting the other.
We can, for example, remove the ReplicaSet we created while leaving the two
Pods intact.
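The text above implies a non-cascading delete. A sketch of what it might look like (the exact `--cascade` value is an assumption; newer kubectl versions use `orphan`, while older releases accepted `false`):

```shell
# Delete only the ReplicaSet object, leaving the Pods it created running.
# --cascade=orphan works on kubectl 1.20+; older releases used --cascade=false.
kubectl delete -f rs/go-demo-2.yml --cascade=orphan
```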
kubectl get rs
The output confirms that the ReplicaSet is gone. However, the two Pods it created are still running in the cluster (listing them with kubectl get pods would show both). Those Pods no longer have any relation with the ReplicaSet we created earlier: we deleted the controller, and the Pods remained intact.
Re-using the Same Pods #
Knowing that the ReplicaSet uses labels to decide whether the desired number
of Pods is already running in the cluster should lead us to the conclusion that,
if we create the same ReplicaSet again, it will reuse the two Pods that are
already running in the cluster. Let’s confirm that.
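The re-creation step is not shown in the text; based on the surrounding description and the --save-config note below, it might look like this:

```shell
# Re-create the ReplicaSet; --save-config stores the configuration
# so that kubectl apply can be used for later updates.
kubectl create -f rs/go-demo-2.yml --save-config

# List the Pods; the names should match the Pods that were left running.
kubectl get pods
```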
If you compare the names of the Pods, you’ll see that they are the same as
before we created the ReplicaSet. The ReplicaSet looked for matching labels,
deduced that two Pods matched them, and decided that there was no need to
create new ones. The matching Pods already fulfill the desired number of replicas.
We could have created the ReplicaSet with apply in the first place, but we
didn’t. The apply command automatically saves the configuration so that we
can edit it later on. The create command does no such thing by default, so
we had to save it with --save-config .
Updating the Definition #
Let’s apply an updated definition, one that increases the number of replicas to
four. This time, the output is slightly different. Instead of saying that the
ReplicaSet was created, we can see that it was configured .
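The update itself is not shown in the text. A sketch, assuming a hypothetical variant of the definition (rs/go-demo-2-scaled.yml is an invented name here) with replicas set to 4:

```shell
# Apply the updated definition; since the configuration was saved earlier,
# kubectl reports that the ReplicaSet was "configured" rather than "created".
kubectl apply -f rs/go-demo-2-scaled.yml
```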
As expected, now there are four Pods in the cluster. If you pay closer
attention to the names of the Pods, you’ll notice that two of them are the same
as before.
Self-healing in Action #
We have already discussed that ReplicaSets have a self-healing property. Let’s
test this property by making a few changes to our system.
Destroying a Pod #
Let’s see what happens when a Pod is destroyed.
We retrieved all the Pods and used -o name to output only their names. The
result was piped to tail -1 so that only one of the names remains, and that
name was stored in the environment variable POD_NAME . The last command used the
variable to delete the Pod, simulating a failure.
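The steps just described might be expressed as follows:

```shell
# Grab the name of one of the Pods (tail -1 keeps only the last line).
POD_NAME=$(kubectl get pods -o name | tail -1)

# Delete that Pod to simulate a failure.
kubectl delete $POD_NAME

# List the Pods; the deleted one should be Terminating while a new one starts.
kubectl get pods
```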
We can see that the Pod we deleted is terminating . However, since we have a
ReplicaSet with replicas set to 4 , as soon as it discovered that the number of
Pods dropped to 3 , it created a new one. We just witnessed self-healing in
action.
📝 We get the final output after the system goes through several stages
so your output might differ from the above.
As long as there are enough available resources in the cluster, ReplicaSets will
make sure that the specified number of Pod replicas are (almost) always up-
and-running.
Removing a label #
Let’s see what happens if we remove one of the Pod labels the ReplicaSet uses
in its selector.
We used the same command to retrieve the name of one of the Pods, and then
executed a command that removed the label service .
ℹ Please note the - at the end of the name of the label. It is the syntax
that indicates that a label should be removed.
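The commands described above might look like this (the trailing dash in service- is what removes the label):

```shell
# Retrieve the name of one of the Pods, as before.
POD_NAME=$(kubectl get pods -o name | tail -1)

# Remove the "service" label; the trailing "-" means "delete this label".
kubectl label $POD_NAME service-

# Inspect the Pod to confirm the label is gone.
kubectl describe $POD_NAME
```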
The output of the last command, limited to the labels section, is as follows.
...
Labels: db=mongo
language=go
type=backend
...
Now, let’s list the Pods in the cluster and check whether there is any change.
The total number of Pods increased to five. The moment we removed the
service label from one of the Pods, the ReplicaSet discovered that the number
of Pods matching the selector labels is three and created a new Pod.
Right now, we have four Pods controlled by the ReplicaSet and one running
freely due to non-matching labels.
Re-adding the Label #
Next, let’s re-add the service label to the Pod we modified. The moment we
added the label, the ReplicaSet discovered that there are five Pods with
matching selector labels. Since the specification states that there should be
four replicas of the Pod, it removed one of the Pods so that the actual state
matches the desired state.
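The re-add step above might look like the following (the label value service=go-demo-2 is an assumption based on the go-demo-2 example; it must match whatever the ReplicaSet’s selector expects):

```shell
# Re-add the "service" label to the Pod we modified earlier.
# The value is assumed; it has to match the ReplicaSet's selector.
kubectl label $POD_NAME service=go-demo-2

# List the Pods; the ReplicaSet will terminate one to get back to four.
kubectl get pods
```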
The previous few examples showed, one more time, that ReplicaSets and Pods
are loosely coupled through matching labels, and that ReplicaSets use those
labels to maintain parity between the actual and the desired state.
In the next lesson, we will go through a quick quiz to test our understanding
of ReplicaSets.