Configure High Availability Cluster in Centos 7 (Step by Step Guide)
In my last article I explained the different kinds of clustering and their architecture. Before you start configuring a High Availability Cluster, you must be aware of the basic terminologies related to clustering. In this article I will share a step by step guide to configure a high availability cluster in CentOS Linux 7 using 3 virtual machines. These virtual machines are running on Oracle VirtualBox installed on my Linux server.
Still installing Linux manually?
I would recommend configuring one-click installation using a Network PXE Boot Server. Using a PXE server you can install Oracle VirtualBox virtual machines, KVM-based virtual machines, or any type of physical server without any manual intervention, saving time and effort.
NOTE:
The steps to configure a High Availability Cluster on Red Hat Enterprise Linux 7 are the same as on CentOS 7. On an RHEL system you must have an active subscription to RHN, or you can configure a local offline repository from which the "yum" package manager can install the provided rpms and their dependencies.
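For example, a minimal sketch of such a local repository definition, assuming the RHEL 7 installation DVD is mounted at /mnt (the repo file name and mount point are only placeholders):
[root@node1 ~]# cat /etc/yum.repos.d/local.repo
[local-rhel7-dvd]
name=Local RHEL 7 DVD repository
baseurl=file:///mnt
enabled=1
gpgcheck=0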
What Is Pacemaker?
We will use pacemaker and corosync to configure the High Availability Cluster. Pacemaker is a cluster resource manager, that is, the logic responsible for the life-cycle of deployed software (indirectly perhaps even whole systems or their interconnections) under its control within a set of computers (a.k.a. nodes), driven by prescribed rules.
It achieves maximum availability for your cluster services (a.k.a. resources) by detecting and
recovering from node- and resource-level failures by making use of the messaging and membership
capabilities provided by your preferred cluster infrastructure (either Corosync or Heartbeat), and
possibly by utilizing other parts of the overall cluster stack.
Bring up Environment
First of all, before we start to configure the High Availability Cluster, let us bring up our virtual machines with CentOS 7. I am using Oracle VirtualBox; you can also install Oracle VirtualBox on a Linux environment. Below are my VMs' configuration details:
Hostname            IP Address   OS
node1.example.com   10.0.2.20    CentOS 7
node2.example.com   10.0.2.21    CentOS 7
node3.example.com   10.0.2.22    CentOS 7
Edit the /etc/hosts file and add the IP address, followed by an FQDN and a short cluster
node name for every available cluster node network interface.
[root@node1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.2.20 node1.example.com node1
10.0.2.21 node2.example.com node2
10.0.2.22 node3.example.com node3
To finish, you must check and confirm connectivity among the cluster nodes. You can do this by simply issuing a ping command to every cluster node.
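For example, from node1 you can ping the other nodes using the names defined in /etc/hosts:
[root@node1 ~]# ping -c 2 node2.example.com
[root@node1 ~]# ping -c 2 node3.example.com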
Stop and disable NetworkManager on all the nodes
[root@node1 ~]# systemctl disable NetworkManager
Removed symlink /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service.
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
NOTE:
You must remove or disable the NetworkManager service because you will want to avoid any
automated configuration of network interfaces on your cluster nodes.
After removing or disabling the NetworkManager service, you must restart the networking service.
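A minimal sketch of that sequence, assuming the legacy network service (network-scripts) is in use, as is the default on CentOS 7:
[root@node1 ~]# systemctl stop NetworkManager
[root@node1 ~]# systemctl restart network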
Configure NTP
To configure the High Availability Cluster it is important that all the nodes in the cluster are connected and synced to an NTP server. Since my machines are in the IST timezone, I will use the India pool of NTP servers.
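A minimal sketch of that setup, assuming the ntp package is not yet installed and the public India pool servers (in.pool.ntp.org) are used in /etc/ntp.conf in place of the default server entries:
[root@node1 ~]# yum install ntp -y
[root@node1 ~]# vi /etc/ntp.conf
server 0.in.pool.ntp.org iburst
server 1.in.pool.ntp.org iburst
server 2.in.pool.ntp.org iburst
server 3.in.pool.ntp.org iburst
Then start and enable the ntpd service on all the nodes so it comes up at boot: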
[root@node1 ~]# systemctl start ntpd
[root@node1 ~]# systemctl enable ntpd
Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to
/usr/lib/systemd/system/ntpd.service.
Install prerequisite rpms
First install the EPEL repository on all the cluster nodes:
[root@node1 ~]# yum install epel-release -y
The pcs package provides the pacemaker and corosync configuration tool and will pull in pacemaker, corosync, and all of their dependencies. The fence-agents-all package installs all the default fencing agents available for a Red Hat cluster:
[root@node1 ~]# yum install pcs fence-agents-all -y
NOTE:
If you are using iptables directly, or some other firewall solution besides firewalld, simply open the following ports: TCP ports 2224, 3121, and 21064, and UDP ports 5404 and 5405.
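If you are using firewalld (the default on CentOS 7), you can instead allow the built-in high-availability service on every node, for example:
[root@node1 ~]# firewall-cmd --permanent --add-service=high-availability
[root@node1 ~]# firewall-cmd --reload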
If you run into any problems during testing, you might want to disable the firewall and SELinux
entirely until you have everything working. This may create significant security issues and should not
be performed on machines that will be exposed to the outside world, but may be appropriate during
development and testing on a protected host.
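Before the cluster nodes can authenticate each other, the pcsd daemon must be running on every node and the hacluster user created by the pcs package needs a password. A minimal sketch, to be repeated on all three nodes:
[root@node1 ~]# passwd hacluster
[root@node1 ~]# systemctl start pcsd
[root@node1 ~]# systemctl enable pcsd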
Configure Corosync
To configure the High Availability Cluster we need to configure corosync. On any one of the nodes, use pcs cluster auth to authenticate as the hacluster user:
[root@node1 ~]# pcs cluster auth node1.example.com node2.example.com node3.example.com
Username: hacluster
Password:
node2.example.com: Authorized
node1.example.com: Authorized
node3.example.com: Authorized
NOTE:
If you face any issues at this step, check your firewalld/iptables rules or SELinux policy.
Finally, run the following command on the first node to create the cluster and start it. Here our cluster name will be mycluster:
[root@node1 ~]# pcs cluster setup --start --name mycluster node1.example.com node2.example.com node3.example.com
Destroying cluster on nodes: node1.example.com, node2.example.com,
node3.example.com...
node3.example.com: Stopping Cluster (pacemaker)...
node2.example.com: Stopping Cluster (pacemaker)...
node1.example.com: Stopping Cluster (pacemaker)...
node1.example.com: Successfully destroyed cluster
node2.example.com: Successfully destroyed cluster
node3.example.com: Successfully destroyed cluster
Enable the cluster services, i.e. pacemaker and corosync, so they start automatically on boot:
[root@node1 ~]# pcs cluster enable --all
node1.example.com: Cluster Enabled
node2.example.com: Cluster Enabled
node3.example.com: Cluster Enabled
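To confirm that the pcsd daemon is reachable on all three nodes, you can check the cluster status; only the PCSD Status section of the output is shown below:
[root@node1 ~]# pcs cluster status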
PCSD Status:
node3.example.com: Online
node1.example.com: Online
node2.example.com: Online
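The quorum and membership details come from corosync and can be inspected with corosync-quorumtool; the relevant sections of its output are shown below:
[root@node1 ~]# corosync-quorumtool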
Votequorum information
----------------------
Expected votes: 3
Highest expected: 3
Total votes: 3
Quorum: 2
Flags: Quorate
Membership information
----------------------
Nodeid   Votes   Name
1        1       node1.example.com (local)
2        1       node2.example.com
3        1       node3.example.com
This is all about configuring a High Availability Cluster on Linux. Below are some more articles on clustering which you can use to understand cluster architecture, resource groups, resource constraints, etc.