Clustering Storage LUNs: Sharing an iSCSI LUN with Multiple Servers
Install Packages
yum -y install pcs fence-agents-all iscsi-initiator-utils
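The GFS2 and clustered-LVM pieces used later in this guide may live in separate packages. The package names below are an assumption for RHEL/CentOS 7; adjust for your distribution:
yum -y install gfs2-utils lvm2-cluster dlm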
Configure the hacluster user
Set a password for the hacluster user; make sure to use the same password on both servers.
On both servers:
[root@controller ~]# passwd hacluster
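If you want to script this instead of typing the password interactively, RHEL's passwd supports --stdin; the value below is only a placeholder:
echo '<password of hacluster>' | passwd --stdin hacluster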
Make sure the host entries are correct.
vi /etc/hosts
10.1.15.32 controller
10.1.15.36 controller2
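A quick way to confirm that both names resolve to the expected addresses on each server:
getent hosts controller controller2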
Start and enable the services so they come up on the next boot (on both servers):
systemctl start pcsd.service
systemctl enable pcsd.service
systemctl start pacemaker
systemctl enable pacemaker
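Before moving on, it is worth confirming on each node that the services are running and enabled, for example:
systemctl is-active pcsd pacemaker
systemctl is-enabled pcsd pacemaker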
Authenticate the nodes
[root@controller ~]# pcs cluster auth controller controller2
<password of hacluster>
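The same authentication can be done non-interactively by passing the hacluster credentials on the command line (pcs 0.9 syntax; the password is a placeholder):
pcs cluster auth controller controller2 -u hacluster -p '<password of hacluster>'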
Enable the cluster for the next boot (the --all flag applies this to both servers):
[root@controller ~]# pcs cluster enable --all
[root@controller ~]# pcs cluster status
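For a fuller view than pcs cluster status, the overall state of nodes and resources can be checked at any time with:
pcs status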
Creating the Cluster with Controller Nodes
[root@controller ~]# pcs cluster setup --start --name storage-cluster controller controller2
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop pacemaker.service
Redirecting to /bin/systemctl stop corosync.service
Killing any remaining services...
Removing all cluster configuration files...
controller: Succeeded
controller: Starting Cluster...
controller2: Succeeded
controller2: Starting Cluster...
[root@controller ~]#
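To confirm that corosync is communicating between the two nodes, the ring status can be inspected on either node:
corosync-cfgtool -s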
Add a STONITH device – i.e. a fencing device
>>pcs stonith create iscsi-stonith-device fence_scsi devices=/dev/mapper/LUN1 meta provides=unfencing
>>pcs stonith show iscsi-stonith-device
Resource: iscsi-stonith-device (class=stonith type=fence_scsi)
Attributes: devices=/dev/mapper/LUN1
Meta Attrs: provides=unfencing
Operations: monitor interval=60s (iscsi-stonith-device-monitor-interval-60s)
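To verify that fence_scsi has registered keys against the shared LUN, the SCSI persistent reservations can be read with sg_persist (from the sg3_utils package); the device path follows the example above:
sg_persist -n -i -k /dev/mapper/LUN1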
Create clone resources for DLM and CLVMD
This enables the services to run on both nodes. Run the pcs commands from a single node only.
>>pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
>>pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
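Note that clvmd expects LVM to be configured for cluster-wide locking (locking_type 3 in /etc/lvm/lvm.conf); on RHEL/CentOS this is typically done on both nodes beforehand, for example with:
lvmconf --enable-cluster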
Create an ordering and a colocation constraint to make sure that DLM starts before CLVMD, and that both resources start on the same node:
>>pcs constraint order start dlm-clone then clvmd-clone
>>pcs constraint colocation add clvmd-clone with dlm-clone
Set the no-quorum-policy of the cluster
Setting this to ignore means that when quorum is lost, the remaining node continues to run its resources. Keep in mind that GFS2 itself requires quorum to operate.
pcs property set no-quorum-policy=ignore
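Red Hat's GFS2 documentation recommends freeze rather than ignore here, since GFS2 cannot safely continue without quorum; if you prefer that behaviour, the equivalent command is:
pcs property set no-quorum-policy=freeze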
Create the GFS2 filesystem
The -t option should be specified as <clustername>:<fsname>, and the right number of journals should be specified (here 2 as we have two nodes accessing the filesystem):
mkfs.gfs2 -p lock_dlm -t storage-cluster:glance -j 2 /dev/mapper/LUN0
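If a third node joins the cluster later, extra journals can be added to the mounted filesystem instead of recreating it; a hedged example, assuming the mount point used below:
gfs2_jadd -j 1 /var/lib/glance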
Mounting the GFS2 file system using a pcs resource
Here we don't use /etc/fstab; instead we use a pcs resource to mount the LUN.
pcs resource create gfs2_res Filesystem device="/dev/mapper/LUN0" directory="/var/lib/glance" fstype="gfs2" options="noatime,nodiratime" op monitor interval=10s on-fail=fence clone interleave=true
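Once the resource has started on both nodes, the mount can be confirmed on each of them with, for example:
mount -t gfs2
df -hT /var/lib/glance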
Create an ordering constraint so that the filesystem resource starts after the CLVMD resource, and a colocation constraint so that both start on the same node:
pcs constraint order start clvmd-clone then gfs2_res-clone
pcs constraint colocation add gfs2_res-clone with clvmd-clone
pcs constraint show
Finally, a small shutdown script stops the cluster stack and the iSCSI services in the right order before the node goes down:
[root@controller ~]# cat /usr/lib/systemd/system-shutdown/turnoff.service
#!/bin/sh
# Stop the cluster stack first, then log out of the iSCSI sessions
systemctl stop pacemaker
systemctl stop pcsd
/usr/sbin/iscsiadm -m node -u
systemctl stop multipathd
systemctl stop iscsi
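Scripts in /usr/lib/systemd/system-shutdown/ are only executed if they are marked executable, so remember to set the bit on both servers:
chmod +x /usr/lib/systemd/system-shutdown/turnoff.service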