Kubernetes
2. etcd cluster: This plays a crucial role in data storage. It's a highly
available, distributed key-value store that holds all the cluster's configuration
and state information. Any changes made through the kube-apiserver are reflected in
the etcd cluster, ensuring that all components have the latest information. It's
essentially the persistent memory of the cluster.
4. kube-scheduler: This component takes care of Pod placement. When a new Pod is
created without a specified node, the kube-scheduler kicks in. It analyzes
available nodes, their resources, and Pod requirements to find the most suitable
node for the Pod to run on. It considers various factors like resource
availability, anti-affinity rules, and node labels to make optimal placement
decisions. Think of it as the matchmaker, pairing Pods with the most compatible
nodes.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1. kubelet: Imagine this as the agent on each worker node in the cluster. It acts
as a bridge between the control plane (the components you mentioned before) and the
actual container runtime environment on the node. Here's what it does:
Registers the worker node with the kube-apiserver.
Watches the API server for Pods assigned to its node and instructs the container
runtime to create and run their containers.
Monitors the health of the node and its Pods and reports status back to the API
server.
2. kube-proxy: This component handles network traffic routing within the cluster.
It ensures Pods from different services can communicate seamlessly. Here's its
role:
Watches for changes in Services and Endpoints resources in the API server.
Maintains network rules (like iptables) on each node based on these resources.
Routes traffic to the appropriate Pods based on the service definition and Pod IPs.
Supports different proxy modes, such as iptables and IPVS, for flexibility and
performance (see the iptables sketch after this list).
Think of kube-proxy as the traffic cop, directing network flow based on service
definitions and ensuring smooth communication between Pods.
3. Container Runtime Engine: This is the software responsible for the actual
creation and execution of containers on the node. Kubernetes itself doesn't manage
containers directly; it relies on a separate container runtime engine. Some popular
options include:
Docker Engine: The original and still widely used container runtime (since
Kubernetes 1.24 it requires the cri-dockerd adapter, as dockershim was removed).
containerd: A newer, lightweight container runtime used by many Kubernetes
distributions.
CRI-O: A Kubernetes-specific container runtime focused on security and efficiency.
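To make kube-proxy's default iptables mode more concrete, here is a rough sketch
(run on a worker node with root access) of how to list the KUBE-SVC and KUBE-SEP
NAT chains that kube-proxy programs for Services and their endpoint Pods:
sudo iptables-save -t nat | grep KUBE-SVC   # one chain per Service
sudo iptables-save -t nat | grep KUBE-SEP   # one chain per endpoint (Pod)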
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
imagespec: This defines the structure and layout of container images, ensuring
portability and consistency across different runtime engines.
runtimespec: This defines how container runtimes should operate, including process
management, resource allocation, and isolation.
distribution-spec: This defines how container images are distributed, i.e. pushed
to and pulled from registries.
3. imagespec:
This OCI specification outlines the structure of a container image.
It consists of layers, where each layer represents a filesystem snapshot with
changes on top of the previous layer.
This layering approach allows efficient image creation and distribution by only
sending the changed layers instead of the entire image each time.
The imagespec also defines metadata associated with the image, such as entrypoint,
environment variables, and user information.
4. runtimespec:
This OCI specification defines the behavior and interface of container runtimes.
It specifies how a runtime should start, stop, pause, resume, and manage the
lifecycle of a container.
It also defines resource allocation, isolation, and security aspects of container
execution.
By standardizing the runtime behavior, OCI enables interoperability between
different runtime engines and tools.
Summary -
CRI provides a communication layer between Kubernetes and container runtimes.
OCI defines standards for container image formats (imagespec) and runtime behavior
(runtimespec).
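For a hands-on look at the imagespec metadata, you can dump an image's config with
a runtime CLI (a sketch assuming Docker or nerdctl is installed locally; nginx is
just an example image):
docker image inspect nginx    # shows Entrypoint, Env, layer digests, etc.
# or, on a containerd host:
nerdctl image inspect nginx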
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Three different CLI tools for containerd are ctr, nerdctl, and crictl.
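Roughly how each is used (image and container names are illustrative):
ctr images pull docker.io/library/nginx:latest   # low-level containerd CLI, mainly for debugging
nerdctl run -d --name web nginx                  # Docker-like CLI for containerd
crictl ps                                        # CRI-level CLI, lists containers created via the kubelet
crictl pods                                      # lists pod sandboxes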
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ETCD Commands -
For example, ETCDCTL version 2 supports the following commands:
etcdctl backup
etcdctl cluster-health
etcdctl mk
etcdctl mkdir
etcdctl set
To set the right API version, set the ETCDCTL_API environment variable:
export ETCDCTL_API=3
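With ETCDCTL_API=3 the command set changes; a few common v3 commands (in a real
cluster you would also pass --endpoints, --cacert, --cert and --key, omitted here):
etcdctl put key1 value1
etcdctl get key1
etcdctl snapshot save /tmp/etcd-backup.db
etcdctl endpoint health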
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx-container
    image: nginx
    ports:
    - containerPort: 80
kubectl run nginx --image=nginx   # run a pod with the nginx image
kubectl get pods
kubectl describe pods
kubectl apply -f pod.yml
kubectl edit pod <pod-name>
# Get commands with basic output
kubectl get services # List all services in the namespace
kubectl get pods --all-namespaces # List all pods in all namespaces
kubectl get pods -o wide                           # List all pods in the current namespace, with more details
kubectl get deployment my-dep # List a particular deployment
kubectl get pods # List all pods in the namespace
kubectl get pod my-pod -o yaml # Get a pod's YAML
kubectl delete pods --all
kubectl delete pods --all -n <namespace>
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Replicaset -
kubectl create -f replicaset-definition.yaml
kubectl get replicaset
kubectl delete replicaset myapp-replicaset
kubectl edit replicaset <replicaset-name> or kubectl replace -f replicaset-definition.yaml
kubectl scale --replicas=6 -f replicaset.yaml
kubectl scale rs new-replica-set --replicas=5
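A minimal sketch of the replicaset-definition.yaml referenced above (labels and
image are illustrative):
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: nginx
        image: nginx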
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Deployment -
kubectl create -f deployment.yml
Create a deployment - kubectl create deployment --image=nginx nginx
Generate Deployment YAML file (-o yaml) without creating it (--dry-run) - kubectl
create deployment --image=nginx nginx --dry-run=client -o yaml
Generate Deployment YAML file (-o yaml) without creating it (--dry-run), save it to
a file, and edit that file to set 4 replicas (replicas: 4) - kubectl create
deployment --image=nginx nginx --dry-run=client -o yaml > nginx-deployment.yaml
In k8s version 1.19+, we can specify the --replicas option to create a deployment
with 4 replicas - kubectl create deployment --image=nginx nginx --replicas=4
--dry-run=client -o yaml > nginx-deployment.yaml
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Services -
NodePort -
kubectl create service nodeport my-service --tcp=80:80 --dry-run=client -o yaml
A NodePort service exposes a set of pods to the outside world (external network)
via a static port on each node in the cluster. When an external client connects to
this port, traffic is forwarded to the service and then to one of the pods.
NodePort services are typically used when you need to expose your application
externally or to a specific group of users or systems outside of the Kubernetes
cluster. They are often used for testing or development purposes, and not
recommended for production use.
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: my-service
  name: my-service
spec:
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: my-service
  type: NodePort
status:
  loadBalancer: {}
ClusterIP -
kubectl create service clusterip my-service --tcp=80:80 --dry-run=client -o yaml
A ClusterIP service exposes a set of pods to other pods and services within the
Kubernetes cluster via a virtual IP address. This virtual IP address is only
reachable from within the cluster, and traffic is load-balanced between the pods
associated with the service.
ClusterIP services are commonly used when you need to expose your application
internally or to other services within the cluster. They are often used for web
services or APIs that need to communicate with other services in the same
Kubernetes cluster.
LoadBalancer -
kubectl create service loadbalancer my-service --tcp=80:80 --dry-run=client -o yaml
A LoadBalancer service exposes a set of pods externally through a cloud provider's
load balancer, which distributes incoming traffic across the cluster nodes and on
to the backing pods. On clusters without a supported cloud provider, it behaves
like a NodePort service.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Namespaces -
kubectl get pods --namespace=kube-system = to get information about pods from a
different namespace
kubectl create -f pod-definition.yml --namespace=dev = to create pod in a specific
namespace
kubectl create namespace brahma = to create a new namespace
kubectl config set-context $(kubectl config current-context) --namespace=dev = to
switch to the namespace permanently
kubectl get pods --all-namespaces = to get pods information from all namespaces
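The same namespace can also be created declaratively (a minimal sketch using the
brahma name from the command above):
apiVersion: v1
kind: Namespace
metadata:
  name: brahma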
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Declarative configuration files describe the desired state of a resource. The user
specifies the desired state, and Kubernetes works to make that state a reality. The
configuration files are typically written in YAML format and include all the
necessary information for creating, modifying, or deleting a resource.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: my-image
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Scheduling -
Manual Scheduling -
Add a pod to a node manually -
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeName: node01
  containers:
  - image: nginx
    name: nginx
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Node Selectors -
Labelling Node - kubectl label nodes node01 size=large
spec:
  containers:
  - image: nginx
    name: bee
    resources: {}
  nodeSelector:
    size: large
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Node Affinity -
Q. Node affinity label examples -
A. Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=node01
kubernetes.io/os=linux
Q. Apply a label color=blue to node node01
A. kubectl label node node01 color=blue
Q. Set Node Affinity to the deployment to place the pods on node01 only
A. affinity:
     nodeAffinity:
       requiredDuringSchedulingIgnoredDuringExecution:
         nodeSelectorTerms:
         - matchExpressions:
           - key: color
             operator: In   # or NotIn
             values:
             - blue
Planned -
requiredDuringSchedulingRequiredDuringExecution
preferredDuringSchedulingRequiredDuringExecution
Q. Create a new deployment named red with the nginx image and 2 replicas, and
ensure it gets placed on the controlplane node only.
Use the label key - node-role.kubernetes.io/control-plane - which is already set on
the controlplane node.
A. spec:
     containers:
     - image: nginx
       name: nginx
       resources: {}
     affinity:
       nodeAffinity:
         requiredDuringSchedulingIgnoredDuringExecution:
           nodeSelectorTerms:
           - matchExpressions:
             - key: node-role.kubernetes.io/control-plane
               operator: Exists
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Resource Limits -
Requests specify the minimum amount of resources that a container needs to run,
while limits specify the maximum amount of resources that a container can use.
For example, let's say you have a container that requires at least 1 CPU and 512MB
of memory to run properly, but it may need more resources depending on its
workload. In this case, you would set the request for CPU and memory to 1 and
512MB, respectively, and the limit for CPU and memory to a higher value, such as 2
and 1GB, respectively
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-image
    resources:
      requests:
        cpu: "1"
        memory: "512Mi"
      limits:
        cpu: "2"
        memory: "1Gi"
Daemon Sets -
In Kubernetes, DaemonSets are a type of controller that ensures that a particular
pod runs on all or some of the nodes in a cluster. They are used for deploying
system daemons or other system-level agents that should run on all nodes, such as
log collectors, monitoring agents, or storage agents.
A DaemonSet creates and maintains a copy of a pod on each node in the cluster,
which allows the system-level agents to operate on each node in the cluster. If new
nodes are added to the cluster, the DaemonSet automatically creates new pods on
those nodes. If nodes are removed, the DaemonSet automatically terminates the pods
running on those nodes.
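A minimal DaemonSet sketch (the name and the fluentd log-collector image are
illustrative):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
      - name: fluentd
        image: fluentd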
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Static Pods -
A static pod is a pod managed directly by the kubelet daemon on a node, rather than
being managed by the Kubernetes API server and controller manager. Static pods are
defined by YAML or JSON files placed in a directory watched by the kubelet on each
node. When the kubelet detects a change to a static pod file, it creates or updates
the corresponding pod on the node.
Static pods are typically used for system-level daemons that should run on every
node in a cluster, such as a network or storage agent. Static pods are useful in
situations where running a daemon as a regular Kubernetes deployment or daemonset
is not desirable or practical.
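For example, a simple static pod definition might look like this (a minimal sketch;
the name and image are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx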
To use this definition file as a static pod, it should be saved to a directory
watched by the kubelet on a node, such as /etc/kubernetes/manifests.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Multiple Schedulers -
In Kubernetes, multiple schedulers allow you to define and use alternative
scheduling algorithms besides the default scheduler. This can be useful in
scenarios where you want to prioritize certain types of workloads or use a custom
scheduler that meets your specific needs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-scheduler
  namespace: kube-system
data:
  scheduler.yml: |-
    apiVersion: kubescheduler.config.k8s.io/v1alpha1
    kind: KubeSchedulerConfiguration
    leaderElection:
      leaderElect: true
    profiles:
    - schedulerName: my-scheduler
      plugins:
        preFilter:
          enabled:
          - name: NodeResourcesFit
          - name: NodeName
        filter:
          enabled:
          - name: NodeSelector
        postFilter:
          enabled:
          - name: DefaultPreemption
        score:
          enabled:
          - name: NodeResourcesLeastAllocated
            weight: 1
kubectl apply -f my-scheduler-config.yaml
kubectl get pods -n kube-system | grep my-scheduler = Verify that the new scheduler
is running
Once you have created and deployed a new scheduler, you can use it to schedule your
workloads by adding a scheduler name to the spec section of your deployment or pod
definition file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      schedulerName: my-scheduler
      containers:
      - name: my-container
        image: my-image
        ports:
        - containerPort: 80
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Logging & Monitoring -
Heapster is deprecated
Metrics Server - The Metrics Server is a cluster component responsible for
collecting resource metrics such as CPU and memory usage from nodes and pods. It
provides these metrics to other components, such as the Horizontal Pod Autoscaler,
which uses them to automatically scale the number of replicas of a deployment based
on the observed resource utilization.
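Once the Metrics Server add-on is installed in the cluster, resource usage can be
viewed with kubectl top:
kubectl top node    # CPU/memory usage per node
kubectl top pod     # CPU/memory usage per pod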
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++