
Sunday, November 30, 2014

GFS Storage Cluster on CentOS 7

Clustering the Storage LUNs: Sharing an iSCSI LUN with Multiple Servers.

Install Packages
yum -y install pcs fence-agents-all iscsi-initiator-utils
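Before building the cluster, the shared LUN has to be visible on both servers. A minimal sketch of the iSCSI discovery and login, assuming a target portal at 10.1.15.10 (an example address, replace with your storage portal) and that the /dev/mapper/LUN0 and /dev/mapper/LUN1 names used below are multipath aliases defined in /etc/multipath.conf; gfs2-utils and lvm2-cluster are also assumed for the mkfs.gfs2 and clvmd steps later:

# run on both servers; the portal address is an example
yum -y install gfs2-utils lvm2-cluster device-mapper-multipath
iscsiadm -m discovery -t sendtargets -p 10.1.15.10
iscsiadm -m node -l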

Configure the hacluster user
Set a password for the hacluster user; make sure the same password is used on both servers.
On both servers:

[root@controller ~]# passwd hacluster

Make sure the host entries are correct.
vi /etc/hosts
10.1.15.32 controller
10.1.15.36 controller2

Start the services on both servers and enable them for the next boot:

systemctl start pcsd.service
systemctl enable pcsd.service
systemctl start pacemaker
systemctl enable pacemaker

Authenticate the nodes
[root@controller ~]#  pcs cluster auth controller controller2
<password of hacluster>

Enabling the cluster for the next boot (--all applies this to both servers):

[root@controller ~]#  pcs cluster enable --all
[root@controller ~]#  pcs cluster status

Creating the Cluster with Controller Nodes
[root@controller ~]# pcs cluster setup --start --name storage-cluster controller controller2
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop  pacemaker.service
Redirecting to /bin/systemctl stop  corosync.service
Killing any remaining services...
Removing all cluster configuration files...
controller: Succeeded
controller: Starting Cluster...
controller2: Succeeded
controller2: Starting Cluster...
[root@controller ~]#

Add a STONITH device (i.e. a fencing device)

>>pcs stonith create iscsi-stonith-device fence_scsi devices=/dev/mapper/LUN1 meta provides=unfencing
>>pcs stonith show iscsi-stonith-device
 Resource: iscsi-stonith-device (class=stonith type=fence_scsi)
  Attributes: devices=/dev/mapper/LUN1
  Meta Attrs: provides=unfencing
  Operations: monitor interval=60s (iscsi-stonith-device-monitor-interval-60s)
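Fencing can be tested manually if needed; keep in mind that with fence_scsi the fenced node loses its SCSI reservation on the LUN until it is unfenced or rebooted, so only try this while testing. A hedged check, assuming controller2 is the node being fenced:

>>pcs stonith fence controller2
>>pcs status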

Create clone resources for DLM and CLVMD
This enables the services to run on both nodes. Run the pcs commands from a single node only.

>>pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
>>pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true
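CLVMD also needs cluster-wide LVM locking (locking_type=3); if that was not enabled beforehand, a small sketch, assuming the lvm2-cluster package is installed, to be run on both servers:

>>lvmconf --enable-cluster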

Create an ordering and a colocation constraint
to make sure that DLM starts before CLVMD, and that both resources start on the same node:

>>pcs constraint order start dlm-clone then clvmd-clone
>>pcs constraint colocation add clvmd-clone with dlm-clone

Set the no-quorum-policy of the cluster
In a two-node cluster, quorum is lost as soon as one node goes down, so no-quorum-policy is set to ignore to let the remaining node keep running (GFS2 itself requires quorum to operate, so working fencing is essential here).

pcs property set no-quorum-policy=ignore
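The current quorum state can be checked at any time with corosync's own tool, for example:

corosync-quorumtool -s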


Create the GFS2 filesystem
The -t option should be specified as <clustername>:<fsname>, and the right number of journals should be specified (here 2 as we have two nodes accessing the filesystem):

 mkfs.gfs2 -p lock_dlm -t storage-cluster:glance -j 2 /dev/mapper/LUN0
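If a third node is added to the cluster later, the filesystem will need one more journal; gfs2_jadd can add it while the filesystem is mounted. A hedged example, assuming the filesystem is mounted at /var/lib/glance as below:

gfs2_jadd -j 1 /var/lib/glance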

Mounting the GFS2 file system using a pcs resource

Here we don't use /etc/fstab; instead a pcs Filesystem resource mounts the LUN.
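The mount point itself must already exist on both servers; assuming the /var/lib/glance path used below:

mkdir -p /var/lib/glance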

 pcs resource create gfs2_res Filesystem device="/dev/mapper/LUN0" directory="/var/lib/glance" fstype="gfs2" options="noatime,nodiratime" op monitor interval=10s on-fail=fence clone interleave=true
 
Create an ordering constraint so that the filesystem resource is started after the CLVMD resource, and a colocation constraint so that both start on the same node:

pcs constraint order start clvmd-clone then gfs2_res-clone

pcs constraint colocation add gfs2_res-clone with clvmd-clone

pcs constraint show
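Once the constraints are in place, the filesystem should come up on both nodes; a quick check on each server:

mount | grep gfs2
pcs status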


Optionally, a small shutdown script stops the cluster services and logs out of the iSCSI sessions cleanly at power-off:

[root@controller ~]# cat /usr/lib/systemd/system-shutdown/turnoff.service
#!/bin/sh
systemctl stop pacemaker
systemctl stop pcsd
/usr/sbin/iscsiadm -m node -u
systemctl stop multipathd
systemctl stop iscsi
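Scripts under /usr/lib/systemd/system-shutdown/ only run if they are executable; assuming the file shown above:

chmod +x /usr/lib/systemd/system-shutdown/turnoff.service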

Wednesday, July 30, 2014

GFS - Global File System from Red Hat + iSCSI drive sharing.

# install packages

yum groupinstall -y "High Availability"
yum install -y cman gfs2-utils modcluster ricci luci cluster-snmp iscsi-initiator-utils openais oddjob rgmanager
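The shared iSCSI disk must be discovered and logged in on every node before it can be formatted; a minimal sketch, with an example portal address of 10.1.15.10:

iscsiadm -m discovery -t sendtargets -p 10.1.15.10
iscsiadm -m node -l
chkconfig iscsi on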

On each node create a cluster config file

# /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="3" name="cluster1">
<clusternodes>
<clusternode name="node1" nodeid="1"/>
<clusternode name="node2" nodeid="2"/>
</clusternodes>
</cluster>
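The config can be sanity-checked against the cluster schema before starting cman; a quick check, assuming the cluster packages above are installed:

ccs_config_validate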

passwd ricci
chkconfig iptables off
#or configure the Ports to be opened.
chkconfig ip6tables off
chkconfig ricci on
chkconfig cman on
chkconfig rgmanager on
chkconfig modclusterd on
service iptables stop
service ip6tables stop
service ricci start
service cman start
service rgmanager start
service modclusterd start

service ricci restart
service cman restart
service rgmanager restart
service modclusterd restart
# for node1 only
chkconfig luci on
service luci start

[root@controller1 ~]# chkconfig luci on
[root@controller1 ~]# service luci start
Adding following auto-detected host IDs (IP addresses/domain names), corresponding to `controller1' address, to the configuration of self-managed certificate `/var/lib/luci/etc/cacert.config' (you can change them by editing `/var/lib/luci/etc/cacert.config', removing the generated certificate `/var/lib/luci/certs/host.pem' and restarting luci):
(none suitable found, you can still do it manually as mentioned above)

Generating a 2048 bit RSA private key
writing new private key to '/var/lib/luci/certs/host.pem'
Start luci... [ OK ]
Point your web browser to https://controller1:8084 (or equivalent) to access luci
[root@controller1 ~]#

Making the GFS2 file system
/sbin/mkfs.gfs2 -j 10 -p lock_dlm -t cluster1:GFS /dev/sdb

Mounting the partition
# edit /etc/fstab and append the following.
/dev/sdb /path_to_mount gfs2 defaults,noatime,nodiratime 0 0
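For the fstab entry to be mounted automatically in the right order (only after the cluster services are up), the gfs2 init script can be used; a hedged sketch, assuming gfs2-utils is installed:

chkconfig gfs2 on
service gfs2 start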