
Wednesday, August 12, 2015

MySQL Cluster Using MySQL NDB


MySQL Cluster using NDB (Network Database) provides a self-healing MySQL cluster with good performance. A MySQL Cluster setup has three main components: management nodes, SQL nodes, and data nodes.

Here we will configure two management nodes and two combined Data/SQL nodes (data and SQL roles on the same server) for HA. Once the configuration is complete we will have two SQL endpoints to connect to, so we need to place a load balancer in front of them.




OS used is RHEL7
SELinux enabled
Firewall disabled

Management Servers ## Perform the following steps on both management servers.

Install Needed Packages 
=================
yum install glibc.i686  ncurses-libs.i686 libstdc++.i686 libgcc.i686 -y


Make Directories and Download the Cluster Files
====================================

mkdir /usr/src/mysql-mgm
cd /usr/src/mysql-mgm
wget http://cdn.mysql.com/Downloads/MySQL-Cluster-7.4/mysql-cluster-gpl-7.4.7-linux-glibc2.5-i686.tar.gz
tar zxvf mysql-cluster-gpl-7.4.7-linux-glibc2.5-i686.tar.gz

cd mysql-cluster-gpl-7.4.7-linux-glibc2.5-i686
cp bin/ndb_mgm* /usr/bin/
chmod 755 /usr/bin/ndb_mgm*


mkdir /var/lib/mysql-cluster
vi /var/lib/mysql-cluster/config.ini
==========================================
[NDBD DEFAULT]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M
[MYSQLD DEFAULT]

[NDB_MGMD DEFAULT]
DataDir=/var/lib/mysql-cluster
[TCP DEFAULT]

# Section for the cluster management node
[NDB_MGMD]
NodeId=1
# IP address of the first management node (this system)
HostName=192.168.70.130

[NDB_MGMD]
NodeId=2
#IP address of the second management node
HostName=192.168.70.131

# Section for the storage nodes
[NDBD]
# IP address of the first storage node
HostName=192.168.70.132
DataDir= /var/lib/mysql-cluster
[NDBD]
# IP address of the second storage node
HostName=192.168.70.133
DataDir=/var/lib/mysql-cluster
# one [MYSQLD] per storage node
[MYSQLD]
[MYSQLD]
==========================================

chown mysql. /var/lib/mysql-cluster -R

To start the Management Service
========================
ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster/
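
With two management nodes sharing the same config.ini, ndb_mgmd normally works out its own node ID by matching the local host against the HostName entries; if that matching fails, the node ID can be passed explicitly (a hedged sketch, node IDs taken from the config.ini above):

# on the first management node (NodeId=1)
ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster/ --ndb-nodeid=1
# on the second management node (NodeId=2)
ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster/ --ndb-nodeid=2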

Data and SQL Servers # Perform this on both servers
==============================================

Install the needed Packages
====================
yum install libaio.i686 libaio-devel.i686 -y
yum install perl -y
yum -y install perl-Data-Dumper
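
The chown and mysql_install_db steps below assume a mysql user and group exist on these servers; if the MySQL packages have never been installed here, create them first (a minimal sketch):

groupadd mysql
useradd -r -g mysql -s /bin/false mysql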

Download the packages
cd /usr/local/
wget http://cdn.mysql.com/Downloads/MySQL-Cluster-7.4/mysql-cluster-gpl-7.4.7-linux-glibc2.5-i686.tar.gz
tar zxvf mysql-cluster-gpl-7.4.7-linux-glibc2.5-i686.tar.gz
mv mysql-cluster-gpl-7.4.7-linux-glibc2.5-i686 mysql
chown mysql. mysql -R
cd mysql

Initializing the database
scripts/mysql_install_db --user=mysql --datadir=/usr/local/mysql/data

cp support-files/mysql.server /etc/init.d/
chmod 755 /etc/init.d/mysql.server

cd /usr/local/mysql/bin
mv * /usr/bin
cd ../

vi /etc/my.cnf
============
[mysqld]
ndbcluster
# IP address of the cluster management node
ndb-connectstring=192.168.70.130,192.168.70.131
[mysql_cluster]
# IP address of the cluster management node
ndb-connectstring=192.168.70.130,192.168.70.131
============

mkdir /var/lib/mysql-cluster

cd /var/lib/mysql-cluster
ndbd --initial
/etc/init.d/mysql.server start

After this, secure the MySQL installation by running the appropriate script (the binaries were moved to /usr/bin above):

mysql_secure_installation


Testing
On a management node, run the ndb_mgm client and issue the show command to verify that all management, data, and SQL nodes are connected.
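
A quick way to confirm that both SQL nodes really share data is to create a table with the NDB engine on one SQL node and query it from the other (an illustrative check; the database and table names here are arbitrary):

# on the first SQL node
mysql -u root -p -e "CREATE DATABASE IF NOT EXISTS clustertest; CREATE TABLE clustertest.t1 (id INT PRIMARY KEY) ENGINE=NDBCLUSTER; INSERT INTO clustertest.t1 VALUES (1);"

# on the second SQL node the same row should be visible
mysql -u root -p -e "SELECT * FROM clustertest.t1;"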



Monday, March 23, 2015

Creating Replicated Volumes with GlusterFS

In the following scenario we replicate data from one server to another using GlusterFS replicated volumes.

Mount the partition
On both servers:
mkfs.ext3 /dev/sdb1
mkdir /root/glusterfs
mount /dev/sdb1 /root/glusterfs/
tail -n 1 /etc/mtab >> /etc/fstab
mkdir /root/glusterfs/images


First enable the EPEL repository (see: How to Enable EPEL Repository in RHEL/CentOS).
Next, enable the GlusterFS repository on both servers.

wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
yum install glusterfs-server -y
service glusterd start
chkconfig glusterd on


#Configure the trusted pool
#Run the following command on controller1 (Server1).
gluster peer probe controller2
#Run the following command on controller2 (Server2).
gluster peer probe controller1
#Note: Once this pool has been connected, only trusted users may probe new servers into this pool.
gluster peer status


#Set up a GlusterFS volume
#The brick directories must exist on both controller1 and controller2 (created earlier as /root/glusterfs/images).
#Create the volume on any single server and start it. Here, I've taken controller1 (Server1).

gluster volume create images replica 2 controller1:/root/glusterfs/images controller2:/root/glusterfs/images

 gluster volume start images

# Next, confirm the status of volume.
gluster volume info



 echo "
 mount.glusterfs 192.168.216.135:images /var/lib/glance/images/
 " >> /etc/rc.local

 mount.glusterfs 192.168.216.135:images /var/lib/glance/images/
chown glance.glance /var/lib/glance/images -R
chmod g+s /var/lib/glance/images
chmod 775 /var/lib/glance -R
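
To verify that replication is working, write a test file through the mounted volume on one node and check that it shows up on the other node's mount (an illustrative check):

# on controller1
touch /var/lib/glance/images/replication-test

# on controller2 (with the volume mounted the same way)
ls -l /var/lib/glance/images/replication-test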

Monday, September 29, 2014

Configure HA using Corosync and Pacemaker

Open the needed ports in iptables (if iptables is in use).
Edit /etc/sysconfig/iptables and, towards the end of the file but before any REJECT statements, add the following lines:
-A INPUT -p udp -m state --state NEW -m multiport --dports 5404,5405 -j ACCEPT
-A INPUT -m tcp -p tcp --dport 7788 -j ACCEPT
-A INPUT -m tcp -p tcp --dport 3306 -j ACCEPT

Installing modules
yum -y install wget
rpm -Uvh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm
yum -y install drbd84-utils kmod-drbd84 --enablerepo=elrepo
yum -y install pacemaker corosync cluster-glue

wget -P /etc/yum.repos.d/ http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-6/network:ha-clustering:Stable.repo
yum install crmsh

Configure Corosync 

vi /etc/corosync/corosync.conf
totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 10.0.0.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
                ttl: 1
        }
}

logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: off
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: off
        }
}

amf {
        mode: disabled
}

service {
        # Load the Pacemaker Cluster Resource Manager
        ver:       1
        name:      pacemaker
}

aisexec {
        user:   root
        group:  root
}


chkconfig --level 3 corosync on
service corosync start
chkconfig --level 3 pacemaker on
service pacemaker start

Checking the Cluster Connectivity
corosync-objctl runtime.totem.pg.mrp.srp.members

Check the service and cluster status
crm_mon -1


Configuring the cluster
>>crm configure
property no-quorum-policy="ignore" pe-warn-series-max="1000" pe-input-series-max="1000" pe-error-series-max="1000" cluster-recheck-interval="5min"
property stonith-enabled=false
commit

Adding a cluster resource for the common IP (VIP)
>>crm configure
primitive p_api-ip ocf:heartbeat:IPaddr2 params ip="10.0.0.199" cidr_netmask="24" op monitor interval="30s"
commit

Now we need to configure the needed services in the CRM.
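
For example, assuming the MySQL init script is available as an LSB resource on both nodes (a hedged sketch; the resource agent and script name may differ in your setup), a service can be grouped with the VIP so they always run together:

>>crm configure
primitive p_mysql lsb:mysql op monitor interval="30s"
group g_mysql p_api-ip p_mysql
commit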

  

Thursday, June 26, 2014

MySQL HA with HAProxy and Master-Master Replication

Once MySQL master-master replication is done, we set up HA on top of it using HAProxy:

http://enekumvenamorublog.wordpress.com/2014/06/25/mysql-replication-master-master/

HAProxy

yum install haproxy

To use HAProxy with MySQL we need to create a MySQL user that HAProxy can use for its health checks.

GRANT ALL PRIVILEGES ON *.* TO 'haproxy'@'192.168.216.180' IDENTIFIED BY '';

Sample configuration  /etc/haproxy/haproxy.cfg

global
    log 127.0.0.1 local0 notice
    user haproxy
    group haproxy

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats mode 777

defaults
    log global
    retries 2
    timeout connect 1000
    timeout server 5000
    timeout client 5000

listen stats 192.168.255.180:80
    mode http
    stats enable
    stats uri /stats
    stats realm HAProxy\ Statistics
    stats auth admin:password

listen MYSQL 192.168.255.190:3306
    balance source
    mode tcp
    option mysql-check user haproxy
    server controller1 192.168.216.130 check
    server controller2 192.168.216.135 check
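
To confirm that HAProxy is forwarding MySQL connections, connect through the listener address and check which backend answered (an illustrative test; it assumes a MySQL user that is allowed to connect from the HAProxy host):

mysql -h 192.168.255.190 -u testuser -p -e "SHOW VARIABLES LIKE 'server_id';"

With balance source, repeated connections from the same client should keep returning the same server_id; a different client IP may land on the other backend.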

Wednesday, June 25, 2014

Enable HAProxy logging on CentOS

By default, HAProxy will not log to files unless we make some modifications
1. Create rsyslog configuration file
nano /etc/rsyslog.d/haproxy.conf
Add these lines to the file
# Enable UDP port 514 to listen to incoming log messages from haproxy
$ModLoad imudp
$UDPServerRun 514
$template Haproxy,"%msg%\n"
local0.=info -/var/log/haproxy/haproxy.log;Haproxy
local0.notice -/var/log/haproxy/admin.log;Haproxy
# don't log anywhere else
local0.* ~
Restart rsyslog service
/etc/init.d/rsyslog restart
Ref: http://blog.hintcafe.com/post/33689067443/haproxy-logging-with-rsyslog-on-linux
2. Modify the log rotate config to match the new folder:
nano /etc/logrotate.d/haproxy
Change
/var/log/haproxy.log {
daily
rotate 10
missingok
[...]
to
/var/log/haproxy/*.log {
daily
rotate 10
missingok
[...]
Now we can check if HAProxy logging is working.
tail -f /var/log/haproxy/haproxy.log
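
If nothing appears, you can push a test message into the local0 facility to verify the rsyslog side independently of HAProxy (logger is part of util-linux; the tag here is arbitrary):

logger -p local0.info -t haproxy "rsyslog test message"
tail -n 5 /var/log/haproxy/haproxy.log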

===================================================

Alternative (using the local2 facility referenced in the default haproxy.cfg global section): to have these messages end up in /var/log/haproxy.log you will need to:

1) configure syslog to accept network log events. This is done by adding the '-r' option to the SYSLOGD_OPTIONS in /etc/sysconfig/syslog

2) configure local2 events to go to the /var/log/haproxy.log file. A line like the following can be added to /etc/sysconfig/syslog
local2.* /var/log/haproxy.log

In the HAProxy config file, add to the global section:
log 127.0.0.1 local2 info

Tuesday, May 20, 2014

MySQL Server Cluster (MariaDB + Galera)

Add MariaDB Repositories
========================
Create a MariaDB repository file /etc/yum.repos.d/mariadb.repo with the following content. The repository below works on CentOS 6.x systems; for other systems use the MariaDB repository generation tool.

Disable SELinux on the Red Hat servers.


For CentOS 6 – 64bit

[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/5.5/centos6-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
For CentOS 6 – 32bit

[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/5.5/centos6-x86
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
Install MariaDB and Galera
==========================
Before installing the MariaDB Galera cluster packages, remove any existing MySQL or MariaDB packages from the system. Then install on all nodes with the following command.

# yum install MariaDB-Galera-server MariaDB-client galera

Initial MariaDB Configuration
=============================
After successfully installing the packages, do some initial MariaDB configuration. Start the service, then run mysql_secure_installation on all nodes of the cluster and follow the prompts; it will also ask you to set the root account password.

# service mysql start
# mysql_secure_installation
After that, create a user in MariaDB on all nodes that can access the database from your network.

# mysql -u root -p

MariaDB [(none)]> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'password' WITH GRANT OPTION;
MariaDB [(none)]> FLUSH PRIVILEGES;
MariaDB [(none)]> exit
Then stop the MariaDB service before starting the cluster configuration:

# service mysql stop
Setup Cluster Configuration on database1
========================================
Let's start setting up the MariaDB Galera cluster from the database1 server. Edit the MariaDB server configuration file and add the following values under the [mariadb] section.

[root@database1 ~]# vim /etc/my.cnf.d/server.cnf
query_cache_size=0
binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://10.0.0.4,10.0.0.5
wsrep_cluster_name='cluster1'
wsrep_node_address='10.0.0.2'
wsrep_node_name='database1'
wsrep_sst_method=rsync
wsrep_sst_auth=root:password
Start the cluster using the following command.

[root@database1 ~]# /etc/init.d/mysql bootstrap
Bootstrapping the cluster. Starting MySQL.... SUCCESS!
If you get any problems during startup, check the MariaDB error log file /var/lib/mysql/<hostname>.err

Add database2 in MariaDB Cluster
================================
After successfully starting the cluster on database1, start the configuration on database2. Edit the MariaDB server configuration file and add the following values under the [mariadb] section. All the settings are the same as database1 except wsrep_node_address, wsrep_cluster_address and wsrep_node_name.

[root@database2 ~]# vim /etc/my.cnf.d/server.cnf

query_cache_size=0
binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://10.0.0.2,10.0.0.5
wsrep_cluster_name='cluster1'
wsrep_node_address='10.0.0.4'
wsrep_node_name='database2'
wsrep_sst_method=rsync
wsrep_sst_auth=root:password

Start the cluster using the following command.

[root@database2 ~]# /etc/init.d/mysql start
Starting MySQL..... SUCCESS!

Add database3 in MariaDB Cluster
================================
This server is optional. If you want only two servers in the cluster, you can skip this step, but you then need to remove the third server's IP from the database1/database2 configuration files. To add this server, make the same changes as on database2.

[root@database3 ~]# vim /etc/my.cnf.d/server.cnf
query_cache_size=0
binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
wsrep_provider=/usr/lib/galera/libgalera_smm.so
wsrep_cluster_address=gcomm://10.0.0.2,10.0.0.4
wsrep_cluster_name='cluster1'
wsrep_node_address='10.0.0.5'
wsrep_node_name='database3'
wsrep_sst_method=rsync
wsrep_sst_auth=root:password
Start the cluster using the following command.

[root@db3 ~]# /etc/init.d/mysql start
Starting MySQL..... SUCCESS!
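
Once all nodes are started, the cluster membership can be checked from any node; the expected value here is 3, or 2 if the optional third node was skipped:

mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"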

Thursday, May 15, 2014

Apache load balancing using mod_jk

This assumes you have two Tomcat servers, both already configured, with the AJP connector listening on port 8009 on each.

Download the module from http://tomcat.apache.org/download-connectors.cgi

Sample Version http://apache.mirrors.hoobly.com/tomcat/tomcat-connectors/jk/tomcat-connectors-1.2.40-src.tar.gz

#tar -xvf tomcat-connectors-1.2.40-src.tar.gz

# cd tomcat-connectors-1.2.40-src/native/

# which /usr/sbin/apxs

# ./configure --with-apxs=/usr/sbin/apxs --enable-api-compatibility

# make

# make install

After completing this you will have the mod_jk.so file at /usr/lib64/httpd/modules/mod_jk.so.

Otherwise, copy the module into Apache's modules directory.

 

If you got this far, everything went well.

The installation part is complete; let's start on the configuration.

Open the httpd.conf file and add the following at the end:

# vi /etc/httpd/conf/httpd.conf

JkWorkersFile "/etc/httpd/conf/worker.properties"
JkLogFile "/var/log/httpd/mod_jk.log"
JkRequestLogFormat "%w %V %T"
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
JkLogLevel info
JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"

Add the two lines below inside the VirtualHost:

JkMount / loadbalancer
JkMount /status status

Content of the worker.properties

cat /etc/httpd/conf/worker.properties
worker.list=loadbalancer,status

worker.template.type=ajp13
worker.template.connection_pool_size=50
worker.template.socket_timeout=1200

worker.node1.reference=worker.template
worker.node1.port=8009
worker.node1.host=54.86.231.61
worker.node1.type=ajp13
worker.node1.jvm_route=node1

worker.node2.reference=worker.template
worker.node2.port=8009
worker.node2.host=54.86.17.252
worker.node2.type=ajp13
worker.node2.jvm_route=node2

worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
#worker.loadbalancer.sticky_session=TRUE

To enable the status worker referenced by JkMount /status above, add:

worker.status.type=status
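
For the jvm_route values above to give sticky sessions, each Tomcat's Engine element in server.xml must carry a matching jvmRoute (shown here for node1; use node2 on the second server):

<Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">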


Monday, May 12, 2014

Apache load balancer

Apache's mod_proxy_balancer is an add-on module that acts as a software load balancer, ensuring that traffic is split across back-end servers or workers to reduce latency and give users a better experience.

mod_proxy_balancer distributes requests to multiple worker processes running on back-end servers to let multiple resources service incoming traffic and processing. It ensures efficient utilization of the back-end workers to prevent any single worker from getting overloaded.

When you configure mod_proxy_balancer, you can choose among three load-balancing algorithms: Request Counting, Weighted Traffic Counting, and Pending Request Counting, which we'll discuss in detail in a moment. The best algorithm to use depends on the individual use case; if you are not sure which to try first, go with Pending Request Counting.

The module also supports session stickiness, meaning you can optionally ensure that all requests from a particular IP address or in a particular session go to the same back-end server. The easiest way to achieve stickiness is with cookies, inserted either by the Apache web server or by the back-end servers, as sketched below.
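
For the Apache-inserted cookie case, the mod_proxy_balancer documentation uses a ROUTEID cookie set through mod_headers; a minimal sketch, assuming mod_headers is loaded and reusing the example back ends from further down:

Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED
<Proxy balancer://myapp>
BalancerMember http://192.168.10.11/ route=1
BalancerMember http://192.168.10.10/ route=2
ProxySet stickysession=ROUTEID
</Proxy>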

A general configuration for load balancing defined in /etc/httpd/httpd.conf would look like this:

<Proxy balancer://A_name_signifying_your_app>
BalancerMember http://ip_address:port/ loadfactor=appropriate_load_factor # Balancer member 1
BalancerMember http://ip_address:port/ loadfactor=appropriate_load_factor # Balancer member 2
ProxySet lbmethod=the_Load_Balancing_algorithm
</Proxy>
You can specify anything for a name, but it's good to choose one that's significant. BalancerMember specifies a back-end worker's IP address and port number. A worker can be a back-end HTTP server or anything that can serve HTTP traffic. You can omit the port number if you use the web server's default port of 80. You can define as many BalancerMembers as you want; the optimal number depends on the capabilities of each server and the incoming traffic load. The loadfactor variable specifies the load that a back-end worker can take. Depending upon the algorithm, this can represent a number of requests or a number of bytes. lbmethod specifies the algorithm to be used for load balancing.

 

Let's look at how to configure each of the three options.

Request Counting
With this algorithm, incoming requests are distributed among back-end workers in such a way that each back end gets a proportional number of requests defined in the configuration by the loadfactor variable. For example, consider this Apache config snippet:
<Proxy balancer://myapp>
BalancerMember http://192.168.10.11/ loadfactor=1 # Balancer member 1
BalancerMember http://192.168.10.10/ loadfactor=3 # Balancer member 2
ProxySet lbmethod=byrequests
</Proxy>
In this example, one request out of every four will be sent to 192.168.10.11, while three will be sent to 192.168.10.10. This might be an appropriate configuration for a site with two servers, one of which is more powerful than the other.

 

Weighted Traffic Counting Algorithm
The Weighted Traffic Counting algorithm is similar to Request Counting algorithm, with a minor difference: Weighted Traffic Counting considers the number of bytes instead of number of requests. In the configuration example below, the number of bytes processed by 192.168.10.10 will be three times that of 192.168.10.11.
<Proxy balancer://myapp>
BalancerMember http://192.168.10.11/ loadfactor=1 # Balancer member 1
BalancerMember http://192.168.10.10/ loadfactor=3 # Balancer member 2
ProxySet lbmethod=bytraffic
</Proxy>
Pending Request Counting Algorithm
The Pending Request Counting algorithm is the latest and most sophisticated algorithm provided by Apache for load balancing. It is available from Apache 2.2.10 onward.

 

In this algorithm, the scheduler keeps track of the number of requests that are assigned to each back-end worker at any given time. Each new incoming request will be sent to the back end that has least number of pending requests – in other words, to the back-end worker that is relatively least loaded. This helps keep the request queues even among the back-end workers, and each request generally goes to the worker that can process it the fastest.

 

If two workers are equally lightly loaded, the scheduler uses the Request Counting algorithm to break the tie.
<Proxy balancer://myapp>
BalancerMember http://192.168.10.11/ # Balancer member 1
BalancerMember http://192.168.10.10/ # Balancer member 2
ProxySet lbmethod=bybusyness
</Proxy>
Enable the Balancer Manager
Sometimes you may need to change your load balancing configuration, but that may not be easy to do without affecting the running server. For such situations, the Balancer Manager module provides a web interface to change the status of back-end workers on the fly. You can use Balancer Manager to put a worker in offline mode or change its loadfactor. You must have mod_status installed in order to use Balancer Manager. A sample config, which should be defined in /etc/httpd/httpd.conf, might look like:

 

<Location /balancer-manager>

SetHandler balancer-manager

Order Deny,Allow
Deny from all
Allow from .test.com
</Location>
Once you add directives like those above to httpd.conf and restart Apache you can open the Balancer Manager by pointing a browser at http://test.com/balancer-manager.

 

<VirtualHost *:80>
ProxyRequests off

ServerName domain.com

<Proxy balancer://mycluster>
# WebHead1
BalancerMember http://10.176.42.144:80
# WebHead2
BalancerMember http://10.176.42.148:80

# Security: technically we aren't blocking
# anyone, but this is the place to make those
# changes
Order Deny,Allow
Deny from none
Allow from all

# Load Balancer Settings
# We will be configuring a simple Round
# Robin style load balancer. This means
# that all webheads take an equal share
# of the load.
ProxySet lbmethod=byrequests

</Proxy>

# balancer-manager
# This tool is built into the mod_proxy_balancer
# module and will allow you to do some simple
# modifications to the balanced group via a gui
# web interface.
<Location /balancer-manager>
SetHandler balancer-manager

# I recommend locking this one down to
# your office
Order deny,allow
Allow from all
</Location>

# Point of Balance
# This setting will allow to explicitly name the
# the location in the site that we want to be
# balanced, in this example we will balance "/"
# or everything in the site.
ProxyPass /balancer-manager !
ProxyPass / balancer://mycluster/

</VirtualHost>

 

========================

Enable proxy_module, proxy_balancer_module and proxy_http_module in httpd.conf of Apache web server
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_http_module modules/mod_proxy_http.so
Add a ProxyPass along with the balancer name for the application context root. In this example the proxy path is /examples and the balancer name is mycluster. It is very important to include stickysession: without this option the same session's requests may be distributed across multiple Tomcat servers and you will get session expiry issues in the application.

<IfModule proxy_module>
ProxyRequests Off
ProxyPass /examples balancer://mycluster stickysession=JSESSIONID
ProxyPassReverse /examples balancer://mycluster stickysession=JSESSIONID
<Proxy balancer://mycluster>
BalancerMember http://localhost:8080/examples route=server1
BalancerMember http://localhost:8090/examples route=server2
</Proxy>
</IfModule>
As you can see in the above configuration, I have added route= to each BalancerMember so that the route value can be appended to the session ID. Now, let's configure Apache to print JSESSIONID in the access logs.

Add the following to the LogFormat directive:
%{JSESSIONID}C
Example:

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"\"%{JSESSIONID}C\"" combined
Restart Apache Web Server
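
On RHEL/CentOS that is typically:

service httpd restart
tail -f /var/log/httpd/access_log

In the access log the JSESSIONID values should end in .server1 or .server2, matching the route that served each request (assuming the Tomcat instances set matching jvmRoute values).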