KodeKloud: Installation the Hard Way


Course Objectives

Core Concepts

Scheduling
Logging & Monitoring

Application Lifecycle Management


Cluster Maintenance
Security

Storage
Networking

Installation, Configuration & Validation

Design a Kubernetes Cluster
Choose Kubernetes Infrastructure
Choose a Network Solution
HA Kubernetes Cluster
Provision Infrastructure
Secure Cluster Communication
Kubernetes Release Binaries
Install Kubernetes Master Nodes
Install Kubernetes Worker Nodes
TLS Bootstrapping a Node
Node end-to-end tests
Run & Analyze end-to-end tests
Troubleshooting

DESIGN A KUBERNETES CLUSTER
Objectives
• Node Considerations
• Resource Requirements
• Network Considerations
Questions to Ask
• Purpose
  • Education
  • Development & Testing
  • Hosting Production Applications
• Cloud or On-Prem?
• Workloads
  • How many?
  • What kind?
    • Web
    • Big Data/Analytics
  • Application Resource Requirements
    • CPU Intensive
    • Memory Intensive
  • Traffic
    • Heavy traffic
    • Burst traffic
Purpose
• Education
  • Minikube
  • Single-node cluster with kubeadm/GCP/AWS
• Development & Testing
  • Multi-node cluster with a single master and multiple workers
  • Set up using the kubeadm tool, or quickly provisioned on GCP, AWS, or AKS
• Hosting Production Applications

Hosting Production Applications
• High-availability multi-node cluster with multiple master nodes
• kubeadm, GCP, kops on AWS, or other supported platforms
• Up to 5,000 nodes
• Up to 150,000 PODs in the cluster
• Up to 300,000 total containers
• Up to 100 PODs per node
Nodes     GCP                                AWS
1-5       n1-standard-1 (1 vCPU, 3.75 GB)    m3.medium (1 vCPU, 3.75 GB)
6-10      n1-standard-2 (2 vCPU, 7.5 GB)     m3.large (2 vCPU, 7.5 GB)
11-100    n1-standard-4 (4 vCPU, 15 GB)      m3.xlarge (4 vCPU, 15 GB)
101-250   n1-standard-8 (8 vCPU, 30 GB)      m3.2xlarge (8 vCPU, 30 GB)
251-500   n1-standard-16 (16 vCPU, 60 GB)    c4.4xlarge (16 vCPU, 30 GB)
> 500     n1-standard-32 (32 vCPU, 120 GB)   c4.8xlarge (36 vCPU, 60 GB)


Cloud or On-Prem?
• Use kubeadm for on-prem
• GKE for GCP
• kops for AWS
• Azure Kubernetes Service (AKS) for Azure
Storage
• High Performance – SSD Backed Storage
• Multiple Concurrent connections – Network based storage
• Persistent shared volumes for shared access across multiple PODs
• Label nodes with specific disk types
• Use Node Selectors to assign applications to nodes with specific disk types
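As an illustration of the last two points, a node backed by SSDs could be labeled and then targeted from a POD definition. A minimal sketch; the label key/value, node name, and POD below are hypothetical, not from the course:

kubectl label node worker-1 disktype=ssd

pod-definition.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: data-processor
spec:
  containers:
  - name: data-processor
    image: data-processor        # hypothetical image
  nodeSelector:
    disktype: ssd                # schedule only onto nodes carrying this label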
Nodes
• Virtual or physical machines
• Minimum of a 4-node cluster (size based on workload)
• Master vs Worker nodes
• Linux x86_64 architecture
• Master nodes can host workloads
• Best practice is to not host workloads on master nodes
Master Nodes
[Diagram: two master nodes (M), each running ETCD, API Server, Controller Manager, and Scheduler]
Our Design
[Diagram: two master nodes (M) and two worker nodes (W)]

Choosing Kubernetes Infrastructure
Linux / Windows

Minikube: deploys VMs; single-node cluster
kubeadm: requires VMs to be provisioned already; single/multi-node cluster


Turnkey Solutions
• You provision VMs
• You configure VMs
• You use scripts to deploy the cluster
• You maintain VMs yourself
• E.g.: Kubernetes on AWS using kOps

Hosted Solutions (Managed Solutions)
• Kubernetes-as-a-Service
• Provider provisions VMs
• Provider installs Kubernetes
• Provider maintains VMs
• E.g.: Google Container Engine (GKE)
Turnkey Solutions: OpenShift, Cloud Foundry Container Runtime, VMware Cloud PKS, Vagrant
Hosted Solutions: Google Container Engine (GKE), OpenShift Online, Azure Kubernetes Service (AKS), Amazon Elastic Container Service for Kubernetes (EKS)
Our Choice: locally provisioned VMs (set up with Vagrant later in the course)
Our Design
[Diagram: two master nodes (M) and two worker nodes (W)]

Choose a Networking Solution

Our Design
• POD CIDR: 10.32.0.0/12
• Service CIDR: 10.96.0.0/24
[Diagram: two worker nodes (W)]
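For example, Weave Net (the network solution seen in this course's test output later) could be deployed with the POD CIDR above. The URL and the IPALLOC_RANGE parameter follow Weave's documented install command of that era; treat the exact form as an assumption:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')&env.IPALLOC_RANGE=10.32.0.0/12"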

HA Kubernetes Cluster
[Diagram: multiple master nodes (M) and worker nodes (W); each master runs API Server, ETCD, Controller Manager, and Scheduler]

API Servers: Active-Active
• https://master1:6443 and https://master2:6443, fronted by https://load-balancer:6443
• All API Server instances serve requests at the same time

Controller Manager & Scheduler: Active-Standby
• Only one instance is active at a time, chosen through leader election
kube-controller-manager leader election
[Diagram: two masters run the Controller Manager and Scheduler Active-Standby; the active instance holds the leader lock on the kube-controller-manager endpoint (here, master1)]
kube-controller-manager --leader-elect true [other options]


--leader-elect-lease-duration 15s
--leader-elect-renew-deadline 10s
--leader-elect-retry-period 2s
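In clusters of this vintage the active instance records itself in an annotation on the kube-controller-manager endpoint in kube-system, so the current leader can be inspected like this (a sketch, assuming that annotation-based lock):

kubectl -n kube-system get endpoints kube-controller-manager \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'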
Stacked Topology
[Diagram: each master node (M) runs ETCD locally, alongside the API Server, Controller Manager, and Scheduler]
✓ Easier to setup
✓ Easier to manage
✓ Fewer Servers
❖ Risk during failures
External ETCD Topology
[Diagram: ETCD runs on dedicated servers, separate from the master nodes that run the API Server, Controller Manager, and Scheduler]
✓ Less Risky
❖ Harder to Setup
❖ More Servers
cat /etc/systemd/system/kube-apiserver.service
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
--etcd-cafile=/var/lib/kubernetes/ca.pem \\
--etcd-certfile=/var/lib/kubernetes/kubernetes.pem \\
--etcd-keyfile=/var/lib/kubernetes/kubernetes-key.pem \\
--etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379
[Diagram: the API Server on each master connects to both ETCD instances via --etcd-servers]
Our Design
[Diagram: a load balancer (LB) in front of two master nodes (M, each with a stacked ETCD) and two worker nodes (W)]

ETCD in HA
Objectives
• What is ETCD?
• What is a Key-Value Store?
• How to get started quickly?
• How to operate ETCD?
• What is a distributed system?
• How ETCD Operates
• RAFT Protocol
• Best practices on number of nodes
ETCD is a distributed, reliable key-value store that is simple, secure & fast.
key-value store vs Tabular/Relational Databases

A relational table stores every record in the same set of columns:

Name          Age   Location    Salary   Grade
John Doe      45    New York    5000
Dave Smith    34    New York    4000
Aryan Kumar   10    New York             A
Lauren Rob    13    Bangalore            C
Lily Oliver   15    Bangalore            B
key-value store

Each record is stored as its own set of key-value pairs:

Name: John Doe          Name: Dave Smith
Age: 45                 Age: 34
Location: New York      Location: New York
Salary: 5000            Salary: 4000

Name: Aryan Kumar       Name: Lauren Rob        Name: Lily Oliver
Age: 10                 Age: 13                 Age: 15
Location: New York      Location: Bangalore     Location: Bangalore
Grade: A                Grade: C                Grade: B

key-value store

Records can also be stored as JSON documents, and each document can carry its own fields:

{ "name": "John Doe", "age": 45, "location": "New York", "salary": 5000 }
{ "name": "Dave Smith", "age": 34, "location": "New York", "salary": 4000, "organization": "ACME" }
{ "name": "Aryan Kumar", "age": 10, "location": "New York", "Grade": "A" }
{ "name": "Lily Oliver", "age": 15, "location": "Bangalore", "Grade": "B" }
{ "name": "Lauren Rob", "age": 13, "location": "Bangalore", "Grade": "C" }
distributed
[Diagram: three ETCD instances, each listening on port 2379]

• Reads can be served by any instance, since all nodes hold the same data.
• Writes are processed by only one instance; if conflicting writes (e.g., Name=John and Name=Joe) arrive at different nodes, they are forwarded internally to the one node responsible for writes — the leader.
Leader Election - RAFT
[Diagram: the RAFT protocol elects one of the three instances as leader (L); writes such as Name=John or Age=10 go through the leader and are replicated to the other instances]
Instances   Quorum   Fault Tolerance
1           1        0
2           2        0
3           2        1
4           3        1
5           3        2
6           4        2
7           4        3

Quorum (majority) = N/2 + 1, fractions rounded down:
Quorum of 2 = 2/2 + 1 = 2
Quorum of 3 = 3/2 + 1 = 2.5 ≈ 2
Quorum of 5 = 5/2 + 1 = 3.5 ≈ 3
Odd or even?

Managers   Majority   Fault Tolerance
1          1          0
2          2          0
3          2          1
4          3          1
5          3          2
6          4          2
7          4          3

[Diagram: a 6-node cluster (quorum 4) can be split 3-3 by a network partition, leaving neither segment with quorum; with an odd number of nodes, one segment always retains the majority]
Getting Started
wget -q --https-only \
"https://github.com/coreos/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz"

tar -xvf etcd-v3.3.9-linux-amd64.tar.gz

mv etcd-v3.3.9-linux-amd64/etcd* /usr/local/bin/

mkdir -p /etc/etcd /var/lib/etcd

cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/


etcd.service
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/kubernetes.pem \\
--key-file=/etc/etcd/kubernetes-key.pem \\
--peer-cert-file=/etc/etcd/kubernetes.pem \\
--peer-key-file=/etc/etcd/kubernetes-key.pem \\
--trusted-ca-file=/etc/etcd/ca.pem \\
--peer-trusted-ca-file=/etc/etcd/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster controller-0=https://${CONTROLLER0_IP}:2380,controller-1=https://${CONTROLLER1_IP}:2380 \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
ETCDCTL
export ETCDCTL_API=3

etcdctl put name john

etcdctl get name

name
john

etcdctl get / --prefix --keys-only

name
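With the cluster secured by the certificates above, membership can also be checked with etcdctl. A minimal sketch, assuming the certificate paths from etcd.service:

export ETCDCTL_API=3
etcdctl member list \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem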
Number of Nodes

Instances   Quorum   Fault Tolerance
1           1        0
2           2        0
3           2        1
4           3        1
5           3        2
6           4        2
7           4        3

[Diagram: our design uses two master nodes, each running a stacked ETCD alongside the API Server, Controller Manager, and Scheduler]
Our Design
[Diagram: a load balancer (LB) in front of two master nodes (M, each with ETCD) and two worker nodes (W)]

DEMO
Pre-Requisites

Provision Infrastructure
Our Design
[Diagram: a load balancer (LB), two master nodes (M, each with ETCD), and two worker nodes (W)]
vagrant up

• Deploys 5 VMs - 2 master, 2 worker and 1 load balancer, named 'kubernetes-ha-*'
• Sets IP addresses in the 192.168.5.x range
• Adds a DNS entry to each of the nodes for internet access
• Installs Docker on the nodes
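Once provisioning finishes, the VMs can be checked before moving on; the VM name below is a hypothetical example following the 'kubernetes-ha-*' pattern:

vagrant status                        # all five VMs should show 'running'
vagrant ssh kubernetes-ha-master-1    # log in to a node (hypothetical name)
docker version                        # run inside the VM to verify Docker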

DEMO
Provision Infrastructure

DEMO
Install Client Tools

DEMO
Secure Cluster
Communication

DEMO
Kube Config Files

DEMO
Data Encryption

Kubernetes
Release Binaries
wget https://github.com/kubernetes/kubernetes/releases/download/v1.13.3/kubernetes.tar.gz
kubernetes.tar.gz

tar -xzvf kubernetes.tar.gz


kubernetes

cd kubernetes; ls
client  cluster  docs  hack  LICENSES  platforms  README.md  server  version

cluster/get-kube-binaries.sh
client/kubernetes-client-linux-amd64.tar.gz
server/kubernetes-server-linux-amd64.tar.gz

Extracting /root/kubernetes/client/kubernetes-client-linux-amd64.tar.gz into /root/kubernetes/platforms/linux/amd64


Add '/root/kubernetes/client/bin' to your PATH to use newly-installed binaries.

cd server; tar -xzvf kubernetes-server-linux-amd64.tar.gz

kubernetes/server

ls kubernetes/server/bin
apiextensions-apiserver kubeadm kube-proxy.docker_tag mounter
cloud-controller-manager kube-apiserver kube-controller-manager.tar kube-proxy.tar
kubectl kube-scheduler cloud-controller-manager.tar kube-apiserver.tar
kubelet kube-scheduler.docker_tag
hyperkube kube-controller-manager kube-proxy kube-scheduler.tar
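From here the binaries are typically copied onto the target hosts. A minimal sketch; the /usr/local/bin destination and the per-role split are conventions of this setup, not output of the tool:

# on master nodes
cp kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl} /usr/local/bin/
chmod +x /usr/local/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl}

# on worker nodes
cp kubernetes/server/bin/{kubelet,kube-proxy} /usr/local/bin/
chmod +x /usr/local/bin/{kubelet,kube-proxy}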

DEMO
Download Release Binaries

Install Master Nodes
Our Design
[Diagram: an HAProxy load balancer (LB) in front of two master nodes (M), each running API Server, ETCD, Controller Manager, and Scheduler, plus two worker nodes (W)]

❑ Deploy ETCD Cluster
❑ Deploy Control Plane Components
❑ Network Load Balancer
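A minimal HAProxy configuration for the network load balancer might look like this; the master IPs are assumptions consistent with the 192.168.5.x range used in this design, and the bind address matches the load balancer URL used in the kubeconfigs later:

# /etc/haproxy/haproxy.cfg
frontend kubernetes
    bind 192.168.5.30:6443               # the LB address clients point at
    mode tcp
    option tcplog
    default_backend kubernetes-master-nodes

backend kubernetes-master-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server master-1 192.168.5.11:6443 check fall 3 rise 2
    server master-2 192.168.5.12:6443 check fall 3 rise 2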

DEMO
Install ETCD Cluster

DEMO
Install Control-plane
Components

DEMO
Install Load Balancer

Install Worker Nodes
Our Design
[Diagram: an HAProxy load balancer (LB), two master nodes (M), and two worker nodes (W), each worker running kubelet and kube-proxy]

✓ Deploy ETCD Cluster
✓ Deploy Control Plane Components
✓ Network Load Balancer

Worker-1 (manual setup):
❑ Generate CERTs for Worker-1
❑ Configure Kubelet for Worker-1
❑ Renew certificates
❑ Configure kube-proxy

Worker-2 (TLS Bootstrap):
❑ Worker-2 to create and configure certificates by itself
❑ Configure Kubelet for Worker-2
❑ Worker-2 to renew certificates by itself
❑ Configure kube-proxy

DEMO
Install Worker-1

TLS Bootstrap Kubelet
Our Design
[Diagram: an HAProxy load balancer (LB), two master nodes (M), and two worker nodes (W)]

✓ Deploy ETCD Cluster
✓ Deploy Control Plane Components
✓ Network Load Balancer

Worker-1 (manual setup):
✓ Generate CERTs for Worker-1
✓ Configure Kubelet for Worker-1
✓ Renew certificates
✓ Configure kube-proxy

Worker-2 (TLS Bootstrap):
❑ Worker-2 to create and configure certificates by itself
❑ Configure Kubelet for Worker-2
❑ Worker-2 to renew certificates by itself
❑ Configure kube-proxy
[Diagram: the kubelet on a worker node (W) needs Client Certs to authenticate to the API server on the master (M), and Server Certs to serve requests coming from the API server]
kubelet.service
ExecStart=/usr/local/bin/kubelet \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--tls-cert-file=/var/lib/kubelet/worker-1.crt \\
--tls-private-key-file=/var/lib/kubelet/worker-1.key \\
--network-plugin=cni \\
--register-node=true \\
--v=2
Groups and roles involved in TLS bootstrapping:
• Submit CSR: role system:node-bootstrapper (group system:bootstrappers)
• Auto-approve CSR: role certificatesigningrequests:nodeclient (group system:bootstrappers)
• Auto-renew CSR: role certificatesigningrequests:selfnodeclient (group system:nodes)

kubectl get csr
NAME                                                   AGE   REQUESTOR                 CONDITION
node-csr-VnzfkE6WdOMOna_S7jIuMTtQzu1-utgAA5gbk3dooUY   13m   system:bootstrap:07401b   Approved,Issued
1. Create a Bootstrap Token and associate it with the group system:bootstrappers
2. Assign role system:node-bootstrapper to group system:bootstrappers
3. Assign role system:certificates.k8s.io:certificatesigningrequests:nodeclient to group system:bootstrappers
4. Assign role system:certificates.k8s.io:certificatesigningrequests:selfnodeclient to group system:nodes
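A sketch of those four steps; the token id/secret reuse the values shown on this slide, while the binding names and the extra-group suffix are illustrative:

bootstrap-token.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-07401b
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: "07401b"
  token-secret: "f395accd246ae52d"
  usage-bootstrap-authentication: "true"
  auth-extra-groups: system:bootstrappers:worker   # illustrative group suffix

kubectl create clusterrolebinding create-csrs-for-bootstrapping \
  --clusterrole=system:node-bootstrapper --group=system:bootstrappers

kubectl create clusterrolebinding auto-approve-csrs-for-group \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
  --group=system:bootstrappers

kubectl create clusterrolebinding auto-approve-renewals-for-nodes \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient \
  --group=system:nodes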
bootstrap-kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /var/lib/kubernetes/ca.crt
    server: https://192.168.5.30:6443
  name: bootstrap
contexts:
- context:
    cluster: bootstrap
    user: kubelet-bootstrap
  name: bootstrap
current-context: bootstrap
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: 07401b.f395accd246ae52d

kubelet.service (Worker-2, with TLS bootstrapping)
ExecStart=/usr/local/bin/kubelet \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--bootstrap-kubeconfig="/var/lib/kubelet/bootstrap-kubeconfig" \\
--rotate-certificates=true \\
--rotate-server-certificates=true \\
--network-plugin=cni \\
--register-node=true \\
--v=2

Compared to Worker-1's unit, the pre-generated --tls-cert-file and --tls-private-key-file flags are gone: --bootstrap-kubeconfig points the kubelet at the bootstrap token above, --rotate-certificates=true has it obtain and renew its client certificate (Client Certs) automatically, and --rotate-server-certificates=true has it request and rotate its serving certificate (Server Certs) as well.
Server Certs → CSR approval: Manual
Client Certs → CSR approval: Automatic
kubectl get csr


NAME AGE REQUESTOR CONDITION
csr-x254z 13m system:node:worker-2 Pending
node-csr-VnzfkE6WdOMOna_S7jIuMTtQzu1-utgAA5gbk3dooUY 13m system:bootstrap:07401b Approved,Issued

kubectl certificate approve csr-x254z


csr-x254z approved!

DEMO
TLS Bootstrap Kubelet

DEMO
Configure KubeConfig File

DEMO
Provision Networking

DEMO
KubeApi Server to Kubelet
Connectivity

DEMO
Deploy DNS - CoreDNS

Node end-to-end
Tests
Test - Manual
kubectl get nodes
NAME STATUS ROLES AGE VERSION
worker-1 Ready <none> 8d v1.13.0
worker-2 Ready <none> 8d v1.13.0

kubectl get pods --all-namespaces


NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-69cbb76ff8-9p45d 1/1 Running 1 8d
kube-system coredns-69cbb76ff8-rmhzt 1/1 Running 0 8d
kube-system weave-net-58j2j 2/2 Running 2 8d
kube-system weave-net-rr5dk 2/2 Running 2 8d
Test - Manual
kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcdf6894-5dntv 1/1 Running 0 1h
coredns-78fcdf6894-knpzl 1/1 Running 0 1h
etcd-master 1/1 Running 0 1h
kube-apiserver-master 1/1 Running 0 1h
kube-controller-manager-master 1/1 Running 0 1h
kube-proxy-fvbpj 1/1 Running 0 1h
kube-proxy-v5r2t 1/1 Running 0 1h
kube-scheduler-master 1/1 Running 0 1h
weave-net-7kd52 2/2 Running 1 1h
weave-net-jtl5m 2/2 Running 1 1h
Test - Manual
service kube-apiserver status
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/etc/systemd/system/kube-apiserver.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2019-03-20 07:57:25 UTC; 1 weeks 1 days ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 15767 (kube-apiserver)
Tasks: 13 (limit: 2362)

service kube-controller-manager status


● kube-controller-manager.service - Kubernetes Controller Manager
Loaded: loaded (/etc/systemd/system/kube-controller-manager.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2019-03-20 07:57:25 UTC; 1 weeks 1 days ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 15771 (kube-controller)
Tasks: 10 (limit: 2362)

service kube-scheduler status


● kube-scheduler.service - Kubernetes Scheduler
Loaded: loaded (/etc/systemd/system/kube-scheduler.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2019-03-29 01:45:32 UTC; 11min ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 28390 (kube-scheduler)
Tasks: 10 (limit: 2362)
Test - Manual
service kubelet status
● kubelet.service - Kubernetes Kubelet
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2019-03-20 14:22:06 UTC; 1 weeks 1 days ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 1281 (kubelet)
Tasks: 24 (limit: 1152)

service kube-proxy status


● kube-proxy.service - Kubernetes Kube Proxy
Loaded: loaded (/etc/systemd/system/kube-proxy.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2019-03-20 14:21:54 UTC; 1 weeks 1 days ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 794 (kube-proxy)
Tasks: 7 (limit: 1152)
Test - Manual
kubectl run nginx
deployment.apps/nginx created

kubectl get pods


NAME READY STATUS RESTARTS AGE
nginx-7cdbd8cdc9-g5q8d 1/1 Running 0 19s

kubectl scale --replicas=3 deploy/nginx


deployment.extensions/nginx scaled

kubectl get pods -o wide


NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7cdbd8cdc9-djj6x 1/1 Running 0 74s 10.40.0.5 worker-2 <none> <none>
nginx-7cdbd8cdc9-g5q8d 1/1 Running 0 3m29s 10.32.0.5 worker-1 <none> <none>
nginx-7cdbd8cdc9-rsskt 1/1 Running 0 74s 10.32.0.6 worker-1 <none> <none>
Test - Manual
kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

kubectl get service


NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
nginx NodePort 10.96.0.88 <none> 80:31850/TCP 3s

curl http://worker-1:31850
...
<h1>Welcome to nginx!</h1>
...
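To leave the cluster clean after the smoke test, the objects created above can be removed:

kubectl delete service nginx
kubectl delete deployment nginx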
kubetest

kubetest - Tests
e2e: ~1000 tests, grouped by SIG:
sig-api-machinery   sig-apps   sig-auth   sig-cli
sig-network   sig-scheduling   sig-storage
✓ Networking should function for intra-pod communication (http)


✓ Services should serve a basic endpoint from pods
✓ Service endpoints latency should not be very high
✓ DNS should provide DNS for services

✓ Secrets should be consumable in multiple volumes in a pod


✓ Secrets should be consumable from pods in volume with mappings
✓ ConfigMap should be consumable from pods in volume
kubetest - Tests
✓ Networking should function for intra-pod communication (http)

1. Prepare: create a namespace for this test
2. Create test PODs in this namespace; wait for the PODs to come up
3. Test: execute curl on one POD to reach the IP of another over HTTP
4. Record the result
5. Cleanup: delete the namespace

Sample output:
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
STEP: Building a namespace api object
STEP: Performing setup for networking test in namespace e2e-tests-pod-networ...
Mar 14 11:35:19.315: INFO: Waiting up to 10m0s for all (but 0) nodes to be s...
STEP: Creating test pods
Mar 14 11:35:39.522: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q ...
'http://10.32.0.8:8080/dial?request=hostName&protocol=http&host=10.32.0.7&po...
Mar 14 11:35:39.522: INFO: >>> kubeConfig: /root/.kube/config
Mar 14 11:35:39.656: INFO: Waiting for endpoints: map[]
STEP: Destroying namespace "e2e-tests-pod-network-test-drstd" for this suite
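Using the same --ginkgo.focus mechanism shown in the run section below, this single networking test could be run on its own; the focus pattern here is an assumption, matched against the test's name:

kubetest --test --provider=skeleton --test_args="--ginkgo.focus=intra-pod" > testout.txt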
Kubernetes Test-Infra
[Pipeline: Build → Deploy → Test → Cleanup, where the Deploy stage can be your own script, e.g. my-kubeadm.sh]
kubetest - Tests
e2e: ~1000
conformance: ~160

kubetest - Time
Full e2e = ~1000 Tests / 12 Hours
Conformance = 164 Tests / 1.5 hours


Run & Analyze E2E Tests
kubetest - Run
go get -u k8s.io/test-infra/kubetest

kubetest --extract=v1.11.3   (Note: the version must match the Kubernetes server version)

cd kubernetes

export KUBE_MASTER_IP="192.168.26.10:6443"

export KUBE_MASTER=kube-master

kubetest --test --provider=skeleton > testout.txt

kubetest --test --provider=skeleton --test_args="--ginkgo.focus=Secrets" > testout.txt

kubetest --test --provider=skeleton --test_args="--ginkgo.focus=\[Conformance\]" > testout.txt


kubetest - Run
kubetest --test --provider=skeleton --test_args="--ginkgo.focus=\[Conformance\]" > testout.txt

cat testout.txt
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.8", GitCommit:"4e209c9383fa00631d124c8adcc011d617339b3c", GitTreeState:"clean", BuildDate:"2019-02-28T18:49:34Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.8", GitCommit:"4e209c9383fa00631d124c8adcc011d617339b3c", GitTreeState:"clean", BuildDate:"2019-02-28T18:40:05Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
Setting up for KUBERNETES_PROVIDER="skeleton".
Mar 14 11:16:12.419: INFO: Overriding default scale value of zero to 1
Mar 14 11:16:12.419: INFO: Overriding default milliseconds value of zero to 5000
I0314 11:16:12.674596 20093 e2e.go:333] Starting e2e run "933b1eae-464a-11e9-81ea-02f0aa2d49f4" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1552562172 - Will randomize all specs
Will run 167 of 1008 specs

Mar 14 11:16:12.731: INFO: >>> kubeConfig: /root/.kube/config


Mar 14 11:16:12.745: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Mar 14 11:16:12.770: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running
and ready
Mar 14 11:16:12.831: INFO: 12 / 13 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
• [SLOW TEST:8.486 seconds]
[sig-storage] EmptyDir volumes

kubetest - Run
/workspace/anago-v1.11.8-beta.0.41+4e209c9383fa00/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
should support (root,0777,tmpfs) [NodeConformance] [Conformance]
/workspace/anago-v1.11.8-beta.0.41+4e209c9383fa00/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:684
------------------------------
SSMar 14 13:01:15.397: INFO: Running AfterSuite actions on all node
Mar 14 13:01:15.397: INFO: Running AfterSuite actions on node 1

Summarizing 2 Failures:

[Fail] [sig-network] DNS [It] should provide DNS for services [Conformance]
/workspace/anago-v1.11.8-beta.0.41+4e209c9383fa00/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:497

[Fail] [sig-network] DNS [It] should provide DNS for the cluster [Conformance]
/workspace/anago-v1.11.8-beta.0.41+4e209c9383fa00/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:497

Ran 166 of 1008 Specs in 6302.670 seconds


FAIL! -- 164 Passed | 2 Failed | 0 Pending | 842 Skipped --- FAIL: TestE2E (6302.72s)
FAIL

Ginkgo ran 1 suite in 1h45m3.31433997s


Test Suite Failed

DEMO
Run Smoke Test

DEMO
Run End-to-End Tests
