Pacemaker with GFS2 on RHEL 7

Install and initial cluster configuration

/etc/hosts configuration example on both nodes:

    192.168.122.10        cluster-1.example.com    cluster-1
    192.168.122.15        cluster-2.example.com    cluster-2
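
Name resolution can be checked from each node before continuing (hostnames as in the example above; adjust to the real environment):

getent hosts cluster-1 cluster-2
ping -c1 cluster-2.example.com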

Yum configuration on both nodes:

[cdrom]
name=cdrom
baseurl=file:///media
gpgcheck=0
enabled=1

[ha]
name=ha
baseurl=file:///media/addons/HighAvailability/
gpgcheck=0
enabled=1

[storage]
name=storage
baseurl=file:///media/addons/ResilientStorage/
gpgcheck=0
enabled=1
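
A quick way to confirm that all three repositories are usable on both nodes:

sudo yum repolist enabled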

On both nodes:

sudo yum install pcs pacemaker fence-agents-all sbd lvm2-cluster gfs2-utils
sudo systemctl disable firewalld
sudo systemctl stop firewalld
echo 'manager' | sudo passwd --stdin hacluster
sudo systemctl start pcsd.service
sudo systemctl enable pcsd.service
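
pcsd listens on TCP port 2224; before authenticating the nodes it can help to confirm the daemon is up on both of them:

sudo systemctl status pcsd
sudo ss -tlnp | grep 2224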

On one node:

sudo pcs cluster auth -u hacluster cluster-1.example.com cluster-2.example.com
sudo pcs cluster setup --start --name spyderdb cluster-1.example.com cluster-2.example.com
sudo pcs cluster enable --all
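
At this point the cluster should be up on both nodes; this can be verified with:

sudo pcs cluster status
sudo pcs status corosync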

Configure fencing

Edit /etc/modules-load.d/softdog.conf on both nodes:

softdog

Execute:

sudo systemctl enable systemd-modules-load
sudo systemctl start systemd-modules-load
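# Optional check: confirm the softdog watchdog module is now loaded
lsmod | grep softdog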
sudo pcs stonith sbd device setup --device=/dev/sdb
sudo pcs cluster stop --all
sudo pcs stonith sbd enable
sudo pcs cluster start --all
sudo pcs property set stonith-watchdog-timeout=10
sudo pcs stonith create sbd fence_sbd devices=/dev/sdb
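
The fencing device should now appear in the cluster configuration; a quick check (pcs 0.9 syntax, as shipped with RHEL 7):

sudo pcs stonith show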

Edit /etc/sysconfig/sbd on both nodes:

SBD_DEVICE="/dev/sdb"

Reboot both nodes. Afterwards, verify that the SBD watcher for the device is running:

ps aux | grep -e COMMAND -e "sbd: watcher: /dev" | grep -v grep
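
The on-disk SBD metadata and message slots can also be inspected; both commands assume /dev/sdb as configured above:

sudo sbd -d /dev/sdb dump
sudo sbd -d /dev/sdb list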

Configure GFS2

sudo pcs property set no-quorum-policy=freeze
sudo pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true

On both nodes:

sudo /sbin/lvmconf --enable-cluster
sudo mkdir /spyderha

On one node:

sudo pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
sudo pcs constraint order start dlm-clone then clvmd-clone
sudo pcs constraint colocation add clvmd-clone with dlm-clone
sudo pvcreate /dev/sda
sudo vgcreate -Ay -cy spyderDB_VG /dev/sda
sudo lvcreate -L10G -n spyderDB_LV spyderDB_VG
sudo mkfs.gfs2 -j2 -p lock_dlm -t spyderdb:gfs2db /dev/mapper/spyderDB_VG-spyderDB_LV
sudo pcs resource create clusterfs Filesystem device="/dev/mapper/spyderDB_VG-spyderDB_LV" directory="/spyderha" fstype="gfs2" "options=noatime" op monitor interval=10s on-fail=fence clone interleave=true
sudo pcs constraint order start clvmd-clone then clusterfs-clone
sudo pcs constraint colocation add clusterfs-clone with clvmd-clone
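
With the constraints in place, the GFS2 filesystem should be mounted on both nodes; a quick check (mount point as created above):

sudo pcs status
mount | grep /spyderha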

Problems

If LVM commands report "Skipping clustered volume group spyderDB_VG" (for example when a node comes up without clvmd running), the clustered flag can be cleared temporarily:

vgchange -cn $vgname --config 'global {locking_type = 0}'
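
As a concrete example with the volume group from this guide; restoring the clustered flag afterwards is an assumption, to be done once clvmd is running again:

sudo vgchange -cn spyderDB_VG --config 'global {locking_type = 0}'
sudo vgchange -cy spyderDB_VG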

Resource HTTPd example

Edit /etc/sysconfig/selinux on both nodes:

SELINUX=permissive

On both nodes:

sudo setenforce 0
sudo yum -y install httpd
sudo sed -i 's/\/var\/www/\/spyderha/g' /etc/httpd/conf/httpd.conf

On one node:

sudo mkdir /spyderha/cgi-bin
sudo mkdir /spyderha/html
sudo pcs resource defaults resource-stickiness=15000
sudo pcs resource create httpd systemd:httpd --group spyderdb
sudo pcs resource create VirtualIP IPaddr2 ip=192.168.122.20 cidr_netmask=24 --group spyderdb
sudo pcs constraint order clusterfs-clone then httpd
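
A simple end-to-end test is to drop a page onto the shared filesystem and query the virtual IP (file name and content are only illustrative):

echo 'hello from gfs2' | sudo tee /spyderha/html/index.html
curl http://192.168.122.20/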

Disable monitoring of resources

pcs resource update httpd op monitor enabled=false

Monitor but take no recovery action on failure

pcs resource update httpd op monitor on-fail=block
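
The effective operation settings can be reviewed with (pcs 0.9 syntax on RHEL 7):

sudo pcs resource show httpd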
