
Component Pack 6.0.0.6 Installation Guide

Martti Garden – IBM

Roberto Boccadoro – ELD Engineering


Note: this document details a test installation. For production installations, refer to the Knowledge Base.
We will install on three servers:

Component Pack Master: soc.yourserver.com

Component Pack Generic Worker OM+Customizer: soc1.yourserver.com

Component Pack ES Worker Elasticsearch: soc2.yourserver.com

The Connections server is con.yourserver.com

Preparing the system:


Open the firewall ports on each machine:
firewall-cmd --zone=public --add-port=6443/tcp --permanent
firewall-cmd --zone=public --add-port=10250/tcp --permanent
firewall-cmd --zone=public --add-port=10251/tcp --permanent
firewall-cmd --zone=public --add-port=10252/tcp --permanent
firewall-cmd --zone=public --add-port=80/tcp --permanent
firewall-cmd --zone=public --add-port=443/tcp --permanent
firewall-cmd --zone=public --add-port=5000/tcp --permanent
firewall-cmd --zone=public --add-port=30001/tcp --permanent
firewall-cmd --zone=public --add-port=30099/tcp --permanent
firewall-cmd --zone=public --add-port=31100/tcp --permanent
firewall-cmd --zone=public --add-port=32721/tcp --permanent
firewall-cmd --zone=public --add-port=32200/tcp --permanent
firewall-cmd --zone=public --add-port=27017/tcp --permanent
firewall-cmd --zone=public --add-port=30484/tcp --permanent
firewall-cmd --zone=public --add-port=32333/tcp --permanent
firewall-cmd --reload
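To confirm the ports are open after the reload (a quick optional check, assuming the default public zone):

firewall-cmd --zone=public --list-ports
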
Installing pre-requisites

Installing Docker 17.03 (on each server)


yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum-config-manager --disable docker*
yum-config-manager --enable docker-ce-stable
yum makecache fast
yum install -y --setopt=obsoletes=0 docker-ce-17.03*
sudo systemctl start docker
sudo systemctl enable docker.service
yum-config-manager --disable docker*

Configure Docker with the devicemapper storage driver (loop-lvm) (on each server)
sudo systemctl stop docker
vi /etc/docker/daemon.json

add:
{
"storage-driver": "devicemapper"
}

save & exit


sudo systemctl start docker

Check that the devicemapper storage driver is in use:


docker info
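To see just the relevant line of the output (assuming grep is available):

docker info | grep -i "storage driver"
# should report: Storage Driver: devicemapper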

Disable swap on each server


swapoff -a
vi /etc/fstab

comment out the following line:


/dev/mapper/cl-swap swap swap defaults 0 0

save and exit

if changes were made in fstab run the following command:


mount -a
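To double-check that swap is really disabled (an optional sanity check):

swapon -s
# should list no swap devices
free -h
# the Swap line should show 0B total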

Install kubeadm, kubelet, and kubectl (on each server)


vi /etc/yum.repos.d/kubernetes.repo

add
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
save and close
The setenforce 0 command disables SELinux to allow containers to access the host file system (required by
pod networks, for example).
setenforce 0
yum install -y kubelet-1.11.1* kubeadm-1.11.1* kubectl-1.11.1*
systemctl enable kubelet && systemctl start kubelet
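To confirm that the pinned 1.11.1 versions were installed (optional check):

kubeadm version
kubectl version --client
kubelet --version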
Ensure that the packages do not upgrade to a later version by running the following command to disable
the kubernetes yum repo:
yum-config-manager --disable kubernetes*
Some users on RHEL/CentOS 7 have reported issues with traffic being routed incorrectly due to iptables being bypassed. To avoid this problem, run the following commands to ensure that net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config:
vi /etc/sysctl.d/k8s.conf
add
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
save and close
sysctl --system
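Once the bridge netfilter module is loaded (Docker normally takes care of this), you can verify that the settings took effect:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
# both should report = 1
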
Initializing Master (on Master)
Using Calico as pod network addon
kubeadm init --kubernetes-version=v1.11.1 --pod-network-cidr=192.168.0.0/16

ATTENTION: Copy the kubeadm join command from the output - it will be needed later!

(kubeadm join IP_ADDR:6443 --token euh9gv.a3hjyafpplr88t8q --discovery-token-ca-cert-hash sha256:4ea5cda8d56a8907644965e6bc8a4e41ebb4028eaa9c8bb5c92357003fab6f71)

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
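At this point kubectl on the Master should be able to reach the cluster; a quick check:

kubectl cluster-info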

Install a pod network add-on (here Calico) so that your pods can communicate with each other.
kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

Join Workers (on Worker Nodes)


Run the command you copied on both Workers:
kubeadm join IP_ADDR:6443 --token euh9gv.a3hjyafpplr88t8q --discovery-token-ca-cert-hash sha256:4ea5cda8d56a8907644965e6bc8a4e41ebb4028eaa9c8bb5c92357003fab6f71

check success on master with:


kubectl get nodes
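Once the Calico pods are running, all three nodes should report Ready. The output should look roughly like this (ages, roles, and ordering will differ):

NAME                  STATUS    ROLES     AGE       VERSION
soc.yourserver.com    Ready     master    15m       v1.11.1
soc1.yourserver.com   Ready     <none>    3m        v1.11.1
soc2.yourserver.com   Ready     <none>    3m        v1.11.1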

copy the Master configuration to the Worker nodes

mkdir -p $HOME/.kube

scp root@IP_ADDR:$HOME/.kube/config $HOME/.kube

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Install Helm (on Master)


wget https://storage.googleapis.com/kubernetes-helm/helm-v2.11.0-linux-amd64.tar.gz
tar -zxvf helm-v2.11.0-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm init
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
sudo rm -f helm-v2.11.0-linux-amd64.tar.gz

Test environment (every pod should be running):


kubectl get pods -n kube-system

Create Connections Namespace (on Master)


kubectl create namespace connections

Install Docker Registry (on Master)


Create directories:
mkdir /docker-registry
mkdir /docker-registry/{auth,certs,registry}

Create password file:


docker run --entrypoint htpasswd registry:2 -Bbn admin mypassword > /docker-registry/auth/htpasswd

Create self-signed certs:


openssl req -newkey rsa:4096 -nodes -sha256 -keyout key.pem -x509 -days 3650 -out cert.pem
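The command above prompts for the certificate details interactively. To run it non-interactively you can pass the subject directly; the CN value below (the registry host used in this guide) is an assumption to adapt to your environment:

openssl req -newkey rsa:4096 -nodes -sha256 -keyout key.pem -x509 -days 3650 -out cert.pem -subj "/CN=soc.yourserver.com"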

Copy cert and key to docker directory:


cp key.pem cert.pem /docker-registry/certs

Create directories on all machines in cluster:


mkdir /etc/docker/certs.d
mkdir /etc/docker/certs.d/soc.yourserver.com\:5000/

Copy cert to docker dir:


cp cert.pem /etc/docker/certs.d/soc.yourserver.com\:5000/ca.crt

SCP the cert from the docker registry machine to all other machines in the kubernetes cluster:
scp cert.pem soc1.yourserver.com:/etc/docker/certs.d/soc.yourserver.com\:5000/ca.crt
scp cert.pem soc2.yourserver.com:/etc/docker/certs.d/soc.yourserver.com\:5000/ca.crt

Create registry:
docker run -d -p 5000:5000 --restart=always --name registry -v /docker-registry/auth:/auth -v /docker-registry/certs:/certs -v /docker-registry/registry:/var/lib/registry -e "REGISTRY_AUTH=htpasswd" -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" -e "REGISTRY_HTTP_TLS_CERTIFICATE=/certs/cert.pem" -e "REGISTRY_HTTP_TLS_KEY=/certs/key.pem" registry:2

Verify:
docker login -u admin -p mypassword soc.yourserver.com:5000

Create the image pull secret:


kubectl create secret docker-registry myregkey -n connections --docker-server=soc.yourserver.com:5000 --docker-username=admin --docker-password=mypassword

Create persistent volumes (on Master / NFS Server)


Note: this is valid for PoC installations and not for production. In production, it is best practice to have
the NFS share on a storage server that is not part of the Kubernetes cluster, but for a proof of concept,
non-HA deployment, it is acceptable to host the NFS share on your Kubernetes master.
sudo mkdir -p /pv-connections/esdata-{0,1,2}
sudo mkdir -p /pv-connections/esbackup
sudo mkdir -p /pv-connections/customizations
sudo mkdir -p /pv-connections/mongo-node-{0,1,2}/data/db
sudo mkdir -p /pv-connections/solr-data-solr-{0,1,2}
sudo mkdir -p /pv-connections/zookeeper-data-zookeeper-{0,1,2}
sudo chmod -R 777 /pv-connections

unzip -p hybridcloud_20180925-031433.zip microservices_connections/hybridcloud/support/nfsSetup.sh > nfsSetup.sh
unzip -p hybridcloud_20180925-031433.zip microservices_connections/hybridcloud/support/volumes.sh > volumes.sh

If you need only a few components, change volumes.txt in extractedFolder/microservices_connections/hybridcloud/support

cd /root/cp6006/microservices_connections/hybridcloud/support/
sudo bash nfsSetup.sh

To check the created shares, run:


sudo cat /etc/exports
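Alternatively, if the nfs-utils tools are installed, you can list the active exports directly from the NFS server:

showmount -e localhost
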

Install persistent volumes using Helm


helm install --name=connections-volumes /root/cp6006/microservices_connections/hybridcloud/helmbuilds/connections-persistent-storage-nfs-0.1.0.tgz --set nfs.server=IP_ADDR

If you only need some of the components, disable the volumes you do not need, for example:

helm install --name=connections-volumes /root/cp6006/microservices_connections/hybridcloud/helmbuilds/connections-persistent-storage-nfs-0.1.0.tgz --set solr.enabled=false,zk.enabled=false,mongo.enabled=false,customizer.enabled=false,nfs.server=IP_ADDR
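To check that the persistent volumes were created (optional), list them; they should appear as Available until the Component Pack pods claim them:

kubectl get pv
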

Labeling and tainting worker nodes for Elasticsearch (on Master)


Get list of available nodes
kubectl get nodes

Run the commands with the node name added:


kubectl label nodes soc2.yourserver.com type=infrastructure --overwrite
kubectl taint nodes soc2.yourserver.com dedicated=infrastructure:NoSchedule --overwrite
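To confirm that the label and taint were applied (optional check):

kubectl get nodes --show-labels | grep soc2.yourserver.com
kubectl describe node soc2.yourserver.com | grep -i -A1 taints
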

Pushing the images to the Docker registry (on Master)


cd /root/cp6006/microservices_connections/hybridcloud/support
./setupImages.sh -dr soc.yourserver.com:5000 -u admin -p mypassword -st customizer,elasticsearch,orientme
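To verify that the images reached the registry, you can query its catalog API (curl -k is used because the certificate is self-signed):

curl -k -u admin:mypassword https://soc.yourserver.com:5000/v2/_catalog
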

Bootstrapping the Kubernetes cluster (on Master)


Bootstrapping a Kubernetes cluster performs the following tasks: validates the Kubernetes configuration, creates the required Kubernetes secrets, creates the required IBM Connections certificates, and configures Redis for use by the Orient Me component.
helm install --name=bootstrap /root/cp6006/microservices_connections/hybridcloud/helmbuilds/bootstrap-0.1.0-20180924-133245.tgz --set image.repository="soc.yourserver.com:5000/connections",env.set_ic_admin_user=wasadmin,env.set_ic_admin_password=YOUR_PASSWORD,env.set_ic_internal=con.yourserver.com,env.set_master_ip=IP_ADDR,env.set_elasticsearch_ca_password=mypassword,env.set_elasticsearch_key_password=mypassword,env.set_redis_secret=mypassword,env.set_search_secret=mypassword,env.set_solr_secret=mypassword

Check success (Should show "Complete")


kubectl get pods -n connections -a | grep bootstrap

Restart the Common and News applications on the Connections server.


Installing the Component Pack
Installing the Component Pack's connections-env (on master)
helm install --name=connections-env /root/cp6006/microservices_connections/hybridcloud/helmbuilds/connections-env-0.1.40-20180919-173326.tgz --set createSecret=false,ic.host=con.yourserver.com,ic.internal=con.yourserver.com

verify with (should show deployed):


helm list

Installing the Component Pack's infrastructure (on master)


helm install --name=infrastructure /root/cp6006/microservices_connections/hybridcloud/helmbuilds/infrastructure-0.1.0-20180925-030258.tgz --set global.onPrem=true,global.image.repository=soc.yourserver.com:5000/connections,mongodb.createSecret=false,appregistry-service.deploymentType=hybrid_cloud

verify with (should show deployed):


helm list

and (can take up to 10 minutes for all pods to come up):


kubectl get pods -n connections

Installing the Component Pack's Orient Me (on master)


helm install --name=orientme /root/cp6006/microservices_connections/hybridcloud/helmbuilds/orientme-0.1.0-20180925-030334.tgz --set global.onPrem=true,global.image.repository=soc.yourserver.com:5000/connections,orient-web-client.service.nodePort=30001,itm-services.service.nodePort=31100,mail-service.service.nodePort=32721,community-suggestions.service.nodePort=32200

verify with (should show deployed):


helm list

and (can take up to 10 minutes for all pods to come up):


kubectl get pods -n connections

Installing the Component Pack's ElasticSearch (on master)


helm install --name=elasticsearch /root/cp6006/microservices_connections/hybridcloud/helmbuilds/elasticsearch-0.1.0-20180921-115419.tgz --set image.repository=soc.yourserver.com:5000/connections,nodeAffinityRequired=true

verify with (should show deployed):


helm list

and (can take up to 10 minutes for all pods to come up):


kubectl get pods -n connections

Installing the Component Pack's Customizer (on master)


helm install --name=mw-proxy /root/cp6006/microservices_connections/hybridcloud/helmbuilds/mw-proxy-0.1.0-20180924-103122.tgz --set image.repository=soc.yourserver.com:5000/connections,deploymentType=hybrid_cloud

verify with (should show deployed):


helm list

and (can take up to 10 minutes for all pods to come up):


kubectl get pods -n connections

Installing the Dashboards for monitoring and logging (on master)
mkdir /opt/kubernetes-dashboard

Create keys:
openssl req -nodes -new -x509 -keyout /opt/kubernetes-dashboard/dashboard.key -out /opt/kubernetes-dashboard/dashboard.crt -subj "/CN=dashboard"
kubectl create secret generic kubernetes-dashboard-certs --from-file=/opt/kubernetes-dashboard -n kube-system

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl apply -f /root/cp6006/microservices_connections/hybridcloud/support/dashboard-admin.yaml
kubectl patch svc kubernetes-dashboard -n kube-system -p '{"spec":{"type": "NodePort"}}'
kubectl create -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml
nohup kubectl proxy --address=IP_ADDR -p 443 --accept-hosts='^*$' &
Verify with:
http://IP_ADDR:443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
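The dashboard login page asks for a token. Assuming dashboard-admin.yaml creates an admin service account in kube-system (the exact account and secret names depend on that file), the token can be read from its secret, for example:

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin | awk '{print $1}')
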

Installing the Component Pack's Sanity Dashboard (on master)


helm install --name=sanity /root/cp6006/microservices_connections/hybridcloud/helmbuilds/sanity-0.1.8-20180924-121014.tgz --set image.repository=soc.yourserver.com:5000/connections

Get the application URL by running these commands:


export NODE_PORT=$(kubectl get --namespace connections -o jsonpath="{.spec.ports[0].nodePort}" services sanity)
export NODE_IP=$(kubectl get nodes --namespace connections -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT

helm install --name=sanity-watcher /root/cp6006/microservices_connections/hybridcloud/helmbuilds/sanity-watcher-0.1.0-20180830-052154.tgz --set image.repository=soc.yourserver.com:5000/connections

Installing the Component Pack's Elastic Stack (on master)


helm install --name=elasticstack /root/cp6006/microservices_connections/hybridcloud/helmbuilds/elasticstack-0.1.0-20180925-030346.tgz --set global.onPrem=true,global.image.repository=soc.yourserver.com:5000/connections

Accessing the Kibana Dashboard

Open a browser and navigate to


https://soc.yourserver.com:32333

First Time Setup: Enter 'comppackk8s-*' as the index name or pattern and click 'Create'.
