
Wednesday, August 12, 2015

MySQL Cluster Using MySQL NDB


MySQL Cluster using NDB (Network Database) provides a self-healing MySQL cluster with good performance. A MySQL Cluster setup consists of three components: management nodes, SQL nodes, and data nodes.

Here we will configure two management nodes and two combined Data/SQL nodes (both roles on one server) for HA. Once the configuration is complete we will have two SQL endpoints to connect to the database, so we need to place a load balancer in front of them.




OS used is RHEL 7
SELinux enabled
Firewall disabled

Management Servers ## Perform the following steps on both management servers.

Install Needed Packages 
=================
yum install glibc.i686  ncurses-libs.i686 libstdc++.i686 libgcc.i686 -y


Make Directories and Download the Cluster Files
====================================

mkdir /usr/src/mysql-mgm
cd /usr/src/mysql-mgm
wget http://cdn.mysql.com/Downloads/MySQL-Cluster-7.4/mysql-cluster-gpl-7.4.7-linux-glibc2.5-i686.tar.gz
tar zxvf mysql-cluster-gpl-7.4.7-linux-glibc2.5-i686.tar.gz

cd mysql-cluster-gpl-7.4.7-linux-glibc2.5-i686
cp bin/ndb_mgm* /usr/bin/
chmod 755 /usr/bin/ndb_mgm*


mkdir /var/lib/mysql-cluster
vi /var/lib/mysql-cluster/config.ini
==========================================
[NDBD DEFAULT]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M
[MYSQLD DEFAULT]

[NDB_MGMD DEFAULT]
DataDir=/var/lib/mysql-cluster
[TCP DEFAULT]

# Section for the cluster management node
[NDB_MGMD]
NodeId=1
# IP address of the first management node (this system)
HostName=192.168.70.130

[NDB_MGMD]
NodeId=2
#IP address of the second management node
HostName=192.168.70.131

# Section for the storage nodes
[NDBD]
# IP address of the first storage node
HostName=192.168.70.132
DataDir= /var/lib/mysql-cluster
[NDBD]
# IP address of the second storage node
HostName=192.168.70.133
DataDir=/var/lib/mysql-cluster
# one [MYSQLD] per storage node
[MYSQLD]
[MYSQLD]
==========================================

chown mysql. /var/lib/mysql-cluster -R

To start the Management Service
========================
ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster/
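
To confirm that the management daemon came up, point the ndb_mgm client at this node; a quick sanity check (the data and SQL nodes will still show as not connected at this stage):

# list all nodes known to the management server
ndb_mgm -e show
# show the status of the data nodes
ndb_mgm -e "all status"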

Data and SQL Servers # Perform the following steps on both servers
==============================================

Install the needed Packages
====================
yum install libaio.i686 libaio-devel.i686 -y
yum install perl -y
yum -y install perl-Data-Dumper

Download the packages
cd /usr/local/
wget http://cdn.mysql.com/Downloads/MySQL-Cluster-7.4/mysql-cluster-gpl-7.4.7-linux-glibc2.5-i686.tar.gz
tar zxvf mysql-cluster-gpl-7.4.7-linux-glibc2.5-i686.tar.gz
mv mysql-cluster-gpl-7.4.7-linux-glibc2.5-i686 mysql
chown mysql. mysql -R
cd mysql

Initializing the database
scripts/mysql_install_db --user=mysql --datadir=/usr/local/mysql/data

cp support-files/mysql.server /etc/init.d/
chmod 755 /etc/init.d/mysql.server

cd /usr/local/mysql/bin
mv * /usr/bin
cd ../

vi /etc/my.cnf
============
[mysqld]
ndbcluster
# IP address of the cluster management node
ndb-connectstring=192.168.70.130,192.168.70.131
[mysql_cluster]
# IP address of the cluster management node
ndb-connectstring=192.168.70.130,192.168.70.131
============

mkdir /var/lib/mysql-cluster

cd /var/lib/mysql-cluster
ndbd --initial
/etc/init.d/mysql.server start

After this, secure the MySQL installation by running the appropriate script:

/usr/local/mysql/bin/mysql_secure_installation


Testing
On a management node, run the ndb_mgm client and check that all data and SQL nodes show as connected.
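
Once all nodes report as connected, a simple way to exercise the cluster is to create a table using the NDBCLUSTER storage engine on one SQL node and read it back from the other. This is a minimal sketch; the test database and table names are arbitrary.

# on the first SQL node
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS clustertest;"
mysql -u root -p -e "CREATE TABLE clustertest.t1 (id INT PRIMARY KEY) ENGINE=NDBCLUSTER;"
mysql -u root -p -e "INSERT INTO clustertest.t1 VALUES (1);"

# on the second SQL node - the row inserted above should be visible
mysql -u root -p -e "SELECT * FROM clustertest.t1;"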



Sunday, November 30, 2014

GFS Storage Cluster in CentOS 7

Clustering storage LUNs: sharing an iSCSI LUN with multiple servers.

Install Packages
yum -y install pcs fence-agents-all iscsi-initiator-utils

Configure the hacluster user
Set a password for the hacluster user; make sure the same password is used on both servers.
On both servers:

[root@controller ~]# passwd hacluster

Make sure the host entries are correct.
vi /etc/hosts
10.1.15.32 controller
10.1.15.36 controller2

Start the services and enable them at boot

systemctl start pcsd.service
systemctl enable pcsd.service
systemctl start pacemaker
systemctl enable pacemaker

Authenticate the nodes
[root@controller ~]#  pcs cluster auth controller controller2
<password of hacluster>

Enable the cluster at boot (on both servers)

[root@controller ~]#  pcs cluster enable --all
[root@controller ~]#  pcs cluster status

Creating the Cluster with Controller Nodes
[root@controller ~]# pcs cluster setup --start --name storage-cluster controller controller2
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop  pacemaker.service
Redirecting to /bin/systemctl stop  corosync.service
Killing any remaining services...
Removing all cluster configuration files...
controller: Succeeded
controller: Starting Cluster...
controller2: Succeeded
controller2: Starting Cluster...
[root@controller ~]#

 Add a STONITH device – i.e. a fencing device

>>pcs stonith create iscsi-stonith-device fence_scsi devices=/dev/mapper/LUN1 meta provides=unfencing
>>pcs stonith show iscsi-stonith-device
 Resource: iscsi-stonith-device (class=stonith type=fence_scsi)
  Attributes: devices=/dev/mapper/LUN1
  Meta Attrs: provides=unfencing
  Operations: monitor interval=60s (iscsi-stonith-device-monitor-interval-60s)

Create clone resources for DLM and CLVMD
This enables the services to run on both nodes. Run pcs commands from a single node only.

>>pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
>>pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true

Create an ordering and a colocation constraint
to make sure that DLM starts before CLVMD, and that both resources start on the same node:

>>pcs constraint order start dlm-clone then clvmd-clone
>>pcs constraint colocation add clvmd-clone with dlm-clone

Set the no-quorum-policy of the cluster
Set it to ignore so that when quorum is lost the remaining node keeps running; note that GFS2 itself requires quorum to operate.

pcs property set no-quorum-policy=ignore


Create the GFS2 filesystem
The -t option should be specified as <clustername>:<fsname>, and the right number of journals should be specified (here 2 as we have two nodes accessing the filesystem):

 mkfs.gfs2 -p lock_dlm -t storage-cluster:glance -j 2 /dev/mapper/LUN0

Mounting the GFS2 file system using a pcs resource

Here we don't use fstab; instead we use a pcs resource to mount the LUN.

 pcs resource create gfs2_res Filesystem device="/dev/mapper/LUN0" directory="/var/lib/glance" fstype="gfs2" options="noatime,nodiratime" op monitor interval=10s on-fail=fence clone interleave=true
 
Create an ordering constraint so that the filesystem resource is started after the CLVMD resource, and a colocation constraint so that both start on the same node:

pcs constraint order start clvmd-clone then gfs2_res-clone

pcs constraint colocation add gfs2_res-clone with clvmd-clone

pcs constraint show
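
At this point the DLM, CLVMD and GFS2 resources should be running on both nodes. A quick way to verify, from either node:

# overall cluster, resource and fencing state
pcs status
# confirm the GFS2 filesystem is mounted locally
mount | grep gfs2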


A shutdown script is used so that the cluster services and the iSCSI session are stopped cleanly when the node goes down:

[root@controller ~]# cat /usr/lib/systemd/system-shutdown/turnoff.service
systemctl stop pacemaker
systemctl stop pcsd
/usr/sbin/iscsiadm -m node -u
systemctl stop multipathd
systemctl stop iscsi

Monday, September 29, 2014

Configure HA using Corosync and Pacemaker

Opening the needed ports in iptables (if we are using iptables)
Edit /etc/sysconfig/iptables. Towards the end of the file, but before any REJECT statements, add the following lines:
-A INPUT -p udp -m state --state NEW -m multiport --dports 5404,5405 -j ACCEPT
-A INPUT -m tcp -p tcp --dport 7788 -j ACCEPT
-A INPUT -m tcp -p tcp --dport 3306 -j ACCEPT

Installing modules
yum -y install wget
rpm -Uvh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm
yum -y install drbd84-utils kmod-drbd84 --enablerepo=elrepo
yum -y install pacemaker corosync cluster-glue

wget -P /etc/yum.repos.d/ http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo
yum install crmsh

Configure Corosync 

vi /etc/corosync/corosync.conf
totem {
version: 2
secauth: off
threads: 0
interface {
ringnumber: 0
bindnetaddr: 10.0.0.0
mcastaddr: 226.94.1.1
mcastport: 5405
ttl: 1
}
}

logging {
fileline: off
to_stderr: no
to_logfile: yes
to_syslog: yes
logfile: /var/log/cluster/corosync.log
debug: off
timestamp: on
logger_subsys {
subsys: AMF
debug: off
}
}

amf {
mode: disabled
}

service {
        # Load the Pacemaker Cluster Resource Manager
        ver:       1
        name:      pacemaker
}

aisexec {
        user:   root
        group:  root
}


chkconfig --level 3 corosync on
service corosync start
chkconfig --level 3 pacemaker on
service pacemaker start

Checking the Cluster Connectivity
corosync-objctl runtime.totem.pg.mrp.srp.members

Check the service and cluster status
crm_mon -1


Configuring the cluster
>>crm configure
property no-quorum-policy="ignore" pe-warn-series-max="1000" pe-input-series-max="1000" pe-error-series-max="1000" cluster-recheck-interval="5min"
property stonith-enabled=false
commit
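
The resulting configuration can be reviewed at any time with:

crm configure show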

Adding a cluster resource for the common IP (VIP)
>>crm configure
primitive p_api-ip ocf:heartbeat:IPaddr2 params ip="10.0.0.199" cidr_netmask="24" op monitor interval="30s"
commit

Now we need to configure the needed services in the CRM.
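
As an illustration only, a service is added the same way as the VIP; the sketch below registers a hypothetical MySQL init-script resource and groups it with the VIP so they fail over together (the resource and group names are made up for this example):

>>crm configure
primitive p_mysql lsb:mysqld op monitor interval="30s"
group g_services p_api-ip p_mysql
commit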

  

Thursday, June 26, 2014

Virtual IP with Keepalived as a front end for HAProxy servers

Install Keepalived
Virtual IP: 192.168.216.100
HAProxy IP: 192.168.216.101
1. Install Keepalived package:

On RHEL/CentOS:

$ yum install -y centos-release
$ yum install -y keepalived
$ chkconfig keepalived on

2. Tell the kernel to allow binding a non-local IP on the hosts and apply the change:

$ echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
$ sysctl -p
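
To confirm the setting took effect:

$ sysctl net.ipv4.ip_nonlocal_bind
net.ipv4.ip_nonlocal_bind = 1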

Configure Keepalived and Virtual IP
1. Log in to LB1 and add the following lines to /etc/keepalived/keepalived.conf:

vrrp_script chk_haproxy {
script "killall -0 haproxy" # verify the pid existance
interval 2 # check every 2 seconds
weight 2 # add 2 points of prio if OK
}

vrrp_instance VI_1 {
interface eth2 # interface to monitor
state MASTER
virtual_router_id 51 # Assign one ID for this route
priority 101 # 101 on master, 100 on backup
virtual_ipaddress {
192.168.216.100 # the virtual IP
}
track_script {
chk_haproxy
}
}
2. Log in to LB2 and add the following lines to /etc/keepalived/keepalived.conf:

vrrp_script chk_haproxy {
script "killall -0 haproxy" # verify the pid existance
interval 2 # check every 2 seconds
weight 2 # add 2 points of prio if OK
}

vrrp_instance VI_1 {
interface eth2 # interface to monitor
state BACKUP
virtual_router_id 51 # Assign one ID for this route
priority 100 # 101 on master, 100 on backup
virtual_ipaddress {
192.168.216.100 # the virtual IP
}
track_script {
chk_haproxy
}
}
3. Start Keepalived on both nodes:

$ sudo /etc/init.d/keepalived start
4. Verify the Keepalived status. LB1 should hold the VIP in the MASTER state, while LB2 should run in the BACKUP state without the VIP:

LB1 IP:

$ ip a | grep -e inet.*eth2
inet 192.168.216.101/24 brd 192.168.216.255 scope global eth2
inet 192.168.216.100/32 scope global eth2
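
A simple failover test: stop Keepalived on LB1 and confirm the VIP moves to LB2, then start it again and watch the VIP return to LB1 (assuming the default preempt behaviour, since LB1 has the higher priority):

$ sudo /etc/init.d/keepalived stop    # on LB1
$ ip a | grep -e inet.*eth2           # on LB2 - 192.168.216.100 should now appear here
$ sudo /etc/init.d/keepalived start   # on LB1 - the VIP moves back to LB1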

Wednesday, June 25, 2014

Mysql replication-Master-Master

MySQL Master-Master replication.

Master-1 my.cnf configuration (create the log directory first: mkdir /var/lib/mysql/log/):

log-bin=/var/lib/mysql/log/mysql-bin
log_warnings
log_slow_queries = /var/lib/mysql/log/slow.log
long_query_time = 5
log_long_format
tmpdir = /tmp
server-id = 1
log_slave_updates
replicate-same-server-id = 0
auto_increment_increment = 2
auto_increment_offset = 1
relay-log = mysql-relay-bin

Master-2 my.cnf configuration (create the log directory first: mkdir /var/lib/mysql/log/):

log-bin=/var/lib/mysql/log/mysql-bin
log_warnings
log_slow_queries = /var/lib/mysql/log/slow.log
long_query_time = 5
log_long_format
tmpdir = /tmp
server-id = 2
replicate-same-server-id = 0
auto_increment_increment = 2
auto_increment_offset = 2
relay-log = mysql-relay-bin

First set up Master-1 as master and Master-2 as its slave:

Follow the steps below.
On Master-1:

grant replication slave on *.* to 'root'@'192.168.216.135' identified by 'admin';
show master status;

This shows the binary log file name and position; use these values on Master-2 to run it as a slave of Master-1.
Step 3: Now log on to Master-2 and run the query below:

CHANGE MASTER TO MASTER_HOST='192.168.216.130', MASTER_USER='root',MASTER_PASSWORD='admin', MASTER_LOG_FILE='mysql-bin.000001',MASTER_LOG_POS=106;

Step 4: start slave
Step 5: show slave status \G

In this status output, the following two fields should read as follows:
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

These two fields indicate the replication status. If both show “Yes”, replication is running successfully.

Set up Master-2 as master and Master-1 as its slave:

On Master-2 server:

grant replication slave on *.* to 'root'@'192.168.216.130' identified by 'admin';

Step 2: mysql> show master status;

Step 3: Now log on to Master-1 and run the query below:
CHANGE MASTER TO MASTER_HOST='192.168.216.135', MASTER_USER='root',MASTER_PASSWORD='admin', MASTER_LOG_FILE='mysql-bin.000001',MASTER_LOG_POS=346;
Step 4: start slave
Step 5: show slave status \G

The following parameters should show “Yes”, which means replication is running successfully:

Slave_IO_Running: Yes
Slave_SQL_Running: Yes

On both servers the “Slave_IO_Running” and “Slave_SQL_Running” parameters should always be “Yes” for successful master-master replication.
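
A quick end-to-end check is to create a test database on one master and confirm it appears on the other; the database and table names below are arbitrary.

# on Master-1
mysql -u root -p -e "CREATE DATABASE repltest;"

# on Master-2 - repltest should now be listed
mysql -u root -p -e "SHOW DATABASES;"

# on Master-2, write in the other direction and check it back on Master-1
mysql -u root -p -e "CREATE TABLE repltest.t1 (id INT PRIMARY KEY);"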

Tuesday, May 20, 2014

MySQL Server Cluster (MariaDB + Galera)

Add MariaDB Repositories
========================
Create a MariaDB repository file /etc/yum.repos.d/mariadb.repo with the following content on your system. The repository below works on CentOS 6.x systems; for other systems, use the MariaDB repository generation tool and add the result to your system.

Disable SELinux on Red Hat servers.


For CentOS 6 – 64bit

[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/5.5/centos6-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
For CentOS 6 – 32bit

[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/5.5/centos6-x86
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
Install MariaDB and Galera
==========================
Before installing the MariaDB Galera cluster packages, remove any existing MySQL or MariaDB packages installed on the system. After that, use the following command to install the packages on all nodes.

# yum install MariaDB-Galera-server MariaDB-client galera

Initial MariaDB Configuration
=============================
After successfully installing the packages in the steps above, do some initial MariaDB configuration. Run the following commands and follow the instructions on all nodes of the cluster. You will also be prompted to set the root account password.

# service mysql start
# mysql_secure_installation
After that, create a user in MariaDB on all nodes that can access the database from the cluster network.

# mysql -u root -p

MariaDB [(none)]> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'password' WITH GRANT OPTION;
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> exit
Then stop the MariaDB service before starting the cluster configuration:

# service mysql stop
Setup Cluster Configuration on database1
========================================
Let's start setting up the MariaDB Galera cluster from the database1 server. Edit the MariaDB server configuration file and add the following values under the [mariadb] section.

[root@database1 ~]# vim /etc/my.cnf.d/server.cnf
query_cache_size=0
binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://10.0.0.4,10.0.0.5
wsrep_cluster_name='cluster1'
wsrep_node_address='10.0.0.2'
wsrep_node_name='database1'
wsrep_sst_method=rsync
wsrep_sst_auth=root:password
Start the cluster using the following command.

[root@database1 ~]# /etc/init.d/mysql bootstrap
Bootstrapping the clusterStarting MySQL.... SUCCESS!
If you get any problems during startup, check the MariaDB error log file /var/lib/mysql/<hostname>.err

Add database2 in MariaDB Cluster
================================
After successfully starting the cluster on database1, start the configuration on database2. Edit the MariaDB server configuration file and add the following values under the [mariadb] section. All settings are the same as database1 except wsrep_node_address, wsrep_cluster_address and wsrep_node_name.

[root@database2 ~]# vim /etc/my.cnf.d/server.cnf

query_cache_size=0
binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://10.0.0.2,10.0.0.5
wsrep_cluster_name='cluster1'
wsrep_node_address='10.0.0.4'
wsrep_node_name='database2'
wsrep_sst_method=rsync
wsrep_sst_auth=root:password

Start the cluster using the following command.

[root@database2 ~]# /etc/init.d/mysql start
Starting MySQL..... SUCCESS!

Add database3 in MariaDB Cluster
================================
This server is optional. If you want only two servers in the cluster, you can skip this step, but you then need to remove the third server's IP from the database1/database2 configuration files. To add this server, make the same changes as on database2.

[root@database3 ~]# vim /etc/my.cnf.d/server.cnf
query_cache_size=0
binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://10.0.0.2,10.0.0.4
wsrep_cluster_name='cluster1'
wsrep_node_address='10.0.0.5'
wsrep_node_name='database3'
wsrep_sst_method=rsync
wsrep_sst_auth=root:password
Start the cluster using the following command.

[root@db3 ~]# /etc/init.d/mysql start
Starting MySQL..... SUCCESS!
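
To confirm that all nodes have joined, check the Galera status variables on any node; with three nodes wsrep_cluster_size should report 3 and each node should show as Synced:

mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"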

Thursday, May 15, 2014

Tomcat-Static-Unicast-Clustering

Tomcat needs to be configured to allow a two-node cluster over unicast. The following is the relevant section of my ${LIFERAY_HOME}/tomcat-6.0.32/conf/server.xml on server1. In server.xml on server2, replace node1 with node2, swap the locations of the IP addresses, and change uniqueId to any 16-byte value other than {0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2}. The IP_ADDRESS placeholders here refer to the private IP addresses of server1 and server2 respectively.

================================

<Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">

<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="6" channelStartOptions="3">

<Manager className="org.apache.catalina.ha.session.DeltaManager" expireSessionsOnShutdown="false" notifyListenersOnReplication="true" />

<Channel className="org.apache.catalina.tribes.group.GroupChannel">

<Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
autoBind="0" selectorTimeout="5000" maxThreads="6"
address="IP_ADDRESS_SERVER1" port="4444" />
<Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
<Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"
timeout="60000"
keepAliveTime="10"
keepAliveCount="0"
/>
</Sender>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpPingInterceptor" staticOnly="true"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
<Interceptor className="org.apache.catalina.tribes.group.interceptors.StaticMembershipInterceptor">
<Member className="org.apache.catalina.tribes.membership.StaticMember"
host="IP_ADDRESS_SERVER2"
port="4444"
uniqueId="{0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,2}"/>
</Interceptor>
</Channel>
<Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter="" />
<Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve" />
<ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
<ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>

=================================
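
Once both Tomcat nodes are started, a quick sanity check is that the Tribes receiver is listening on port 4444 on each server and that the nodes can reach each other on that port (netcat is used here as an assumption; any port-check tool will do):

# on each server: confirm the NioReceiver is bound to port 4444
netstat -tlnp | grep 4444

# from server1: confirm server2's receiver port is reachable
nc -z IP_ADDRESS_SERVER2 4444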