Wednesday, August 12, 2015

MySQL Cluster Using MySQL NDB


MySQL Cluster using NDB (Network DataBase) provides a self-healing MySQL cluster with good performance. A MySQL Cluster deployment consists of three component types: management nodes, SQL nodes, and data nodes.

Here we will configure two management nodes and two combined data/SQL nodes (data and SQL together on one server) for HA. Once the configuration is complete we will have two SQL endpoints to connect to the database, so we need to put a load balancer in front of them.
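To sketch that front end: a minimal HAProxy configuration for the two SQL endpoints might look like the following. This is only a sketch; the listen port and the haproxy_check user are assumptions, and the check user would have to be created in MySQL first.

```
# /etc/haproxy/haproxy.cfg (sketch) -- balance the two SQL nodes
listen mysql-cluster
    bind *:3306
    mode tcp
    balance roundrobin
    option mysql-check user haproxy_check   # this user must exist in MySQL
    server sql1 192.168.70.132:3306 check
    server sql2 192.168.70.133:3306 check
```

Applications then connect to the load balancer's address instead of either SQL node directly.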




OS used is RHEL 7
SELinux enabled
Firewall disabled

Management Servers
==================
Perform the following steps on both management servers.

Install Needed Packages 
=================
yum install glibc.i686  ncurses-libs.i686 libstdc++.i686 libgcc.i686 -y


Make Directories and Download the Cluster Files
====================================

mkdir /usr/src/mysql-mgm
cd /usr/src/mysql-mgm
wget http://cdn.mysql.com/Downloads/MySQL-Cluster-7.4/mysql-cluster-gpl-7.4.7-linux-glibc2.5-i686.tar.gz
tar zxvf mysql-cluster-gpl-7.4.7-linux-glibc2.5-i686.tar.gz

cd mysql-cluster-gpl-7.4.7-linux-glibc2.5-i686
cp bin/ndb_mgm* /usr/bin/
chmod 755 /usr/bin/ndb_mgm*


mkdir /var/lib/mysql-cluster
vi /var/lib/mysql-cluster/config.ini
==========================================
[NDBD DEFAULT]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M
[MYSQLD DEFAULT]

[NDB_MGMD DEFAULT]
DataDir=/var/lib/mysql-cluster
[TCP DEFAULT]

# Section for the cluster management node
[NDB_MGMD]
NodeId=1
# IP address of the first management node (this system)
HostName=192.168.70.130

[NDB_MGMD]
NodeId=2
#IP address of the second management node
HostName=192.168.70.131

# Section for the storage nodes
[NDBD]
# IP address of the first storage node
HostName=192.168.70.132
DataDir= /var/lib/mysql-cluster
[NDBD]
# IP address of the second storage node
HostName=192.168.70.133
DataDir=/var/lib/mysql-cluster
# one [MYSQLD] per storage node
[MYSQLD]
[MYSQLD]
==========================================

chown mysql. /var/lib/mysql-cluster -R

To start the Management Service
========================
ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster/

Data and SQL Servers
====================
Perform the following steps on both servers.

Install the needed Packages
====================
yum install libaio.i686 libaio-devel.i686 -y
yum install perl -y
yum -y install perl-Data-Dumper

Download the packages
cd /usr/local/
wget http://cdn.mysql.com/Downloads/MySQL-Cluster-7.4/mysql-cluster-gpl-7.4.7-linux-glibc2.5-i686.tar.gz
tar zxvf mysql-cluster-gpl-7.4.7-linux-glibc2.5-i686.tar.gz
mv mysql-cluster-gpl-7.4.7-linux-glibc2.5-i686 mysql
id mysql >/dev/null 2>&1 || useradd -r -s /sbin/nologin mysql   # create the mysql user if it does not exist
chown mysql. mysql -R
cd mysql

Initializing the database
scripts/mysql_install_db --user=mysql --datadir=/usr/local/mysql/data

cp support-files/mysql.server /etc/init.d/
chmod 755 /etc/init.d/mysql.server

cd /usr/local/mysql/bin
cp * /usr/bin/   # copy rather than move, so paths under /usr/local/mysql/bin keep working
cd ../

vi /etc/my.cnf
============
[mysqld]
ndbcluster
# IP address of the cluster management node
ndb-connectstring=192.168.70.130,192.168.70.131
[mysql_cluster]
# IP address of the cluster management node
ndb-connectstring=192.168.70.130,192.168.70.131
============

mkdir /var/lib/mysql-cluster

cd /var/lib/mysql-cluster
ndbd --initial   # use --initial only on the very first start; it re-initializes the node's data
/etc/init.d/mysql.server start

After this, secure the MySQL installation by running the appropriate script:

/usr/local/mysql/bin/mysql_secure_installation


Testing
=======
On a management node, run ndb_mgm and issue the "show" command (or run it non-interactively with ndb_mgm -e show). All management, data, and SQL nodes should report as connected.



Thursday, July 9, 2015

Delete a nat rule in iptables


First of all, I list all the rules with line numbers like this:

iptables -L -t nat --line-numbers

I then look at the output to find the number of the rule I want to delete.

In this example, let's say I want to delete rule number 2 in the PREROUTING chain. I would enter the following:

iptables -t nat -D PREROUTING 2

In plain English, the above command removes rule number 2 from the PREROUTING chain. I then run the first command again to verify the change, save the rules, and restart the iptables service.

iptables -L -t nat --line-numbers

service iptables save

service iptables restart

All of the above was carried out on CentOS; you may have to adjust slightly for your particular distribution.
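If you do this often, looking up the rule number can be scripted. Here is a small Python sketch; the helper name is mine, and the sample listing only approximates `iptables -L PREROUTING -t nat --line-numbers` output:

```python
def find_rule_number(listing, match):
    """Return the line number of the first rule whose text contains `match`,
    or None. `listing` is the text of `iptables -L <chain> -t nat --line-numbers`."""
    for line in listing.splitlines():
        parts = line.split(None, 1)
        # rule lines start with their number; header lines do not
        if parts and parts[0].isdigit() and match in line:
            return int(parts[0])
    return None

sample = """Chain PREROUTING (policy ACCEPT)
num  target     prot opt source      destination
1    DNAT       tcp  --  anywhere    anywhere    tcp dpt:http to:10.0.0.5
2    DNAT       tcp  --  anywhere    anywhere    tcp dpt:https to:10.0.0.6
"""
print(find_rule_number(sample, "to:10.0.0.6"))  # 2
```

The returned number can then be fed straight into `iptables -t nat -D PREROUTING <n>`.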

Friday, June 12, 2015

Getting the Client IP Behind an AWS ELB (HTTP/HTTPS Mode)

We need to add the following LogFormat to get the client's IP. We use the X-Forwarded-For request header in the Apache configuration to get it done.

# The following directives define some format nicknames for use with
# a CustomLog directive (see below).
#
LogFormat "\"%{X-Forwarded-For}i\" %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined_new
#....

#...
#
# START_HOST example.com
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot "/var/www/example.com/html"

    <Directory "/var/www/example.com/html">
        Options Includes FollowSymLinks
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>

    CustomLog /var/www/logs/example.com/access_log combined_new
    ErrorLog /var/www/logs/example.com/error_log
</VirtualHost>
# END_HOST example.com
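With the combined_new format above, the first quoted field is the X-Forwarded-For value, and the left-most address in it is the original client (later entries are intermediate proxies). A small Python sketch that pulls the client IP out of such a log line; the regex and helper name are mine:

```python
import re

# First quoted field is %{X-Forwarded-For}i, which may hold a
# comma-separated chain: client, proxy1, proxy2 ...
LOG_RE = re.compile(
    r'^"(?P<xff>[^"]*)" \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\S+)'
)

def client_ip(line):
    """Return the original client IP from a combined_new log line, or None."""
    m = LOG_RE.match(line)
    if not m:
        return None
    # the left-most X-Forwarded-For entry is the real client
    return m.group("xff").split(",")[0].strip()

sample = ('"203.0.113.7, 10.0.0.5" - - [05/Jun/2015:10:00:00 +0000] '
          '"GET / HTTP/1.1" 200 512 "-" "curl/7.29.0"')
print(client_ip(sample))  # 203.0.113.7
```

Note that X-Forwarded-For is client-suppliable; only trust the entries appended by your own ELB/proxies.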

Friday, June 5, 2015

Jira
===
JIRA is a commercial software product that can be licensed for running on-premises or used as a hosted application. Pricing depends on the maximum number of users.

Installing Java
yum install java-1.7.0*

Installing Database
yum install -y mariadb-server
systemctl start mariadb
systemctl enable mariadb
mysql -u root -p
CREATE DATABASE jiradb CHARACTER SET utf8 COLLATE utf8_bin;
GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,ALTER,INDEX on jiradb.* TO 'jira'@'localhost' IDENTIFIED BY 'jira_xuZEKE4N';
flush privileges;
SHOW GRANTS FOR 'jira'@'localhost';
exit;

Install Jira:
Download atlassian-jira-6.4.5-x64.bin from https://www.atlassian.com/software/jira/download and install as below:
wget https://downloads.atlassian.com/software/jira/downloads/atlassian-jira-6.4.5-x64.bin
sh atlassian-jira-6.4.5-x64.bin
===================================================================
[root@adancsvso002 opt]# sh atlassian-jira-6.4.5-x64.bin
Unpacking JRE ...
Starting Installer ...
May 26, 2015 6:28:39 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.

This will install JIRA 6.4.5 on your computer.
OK [o, Enter], Cancel [c]

Choose the appropriate installation or upgrade option.
Please choose one of the following:
Express Install (use default settings) [1], Custom Install (recommended for advanced users) [2, Enter], Upgrade an existing JIRA installation [3]


Where should JIRA 6.4.5 be installed?
[/opt/atlassian/jira]

Default location for JIRA data
[/var/atlassian/application-data/jira]

Configure which ports JIRA will use.
JIRA requires two TCP ports that are not being used by any other
applications on this machine. The HTTP port is where you will access JIRA
through your browser. The Control port is used to Startup and Shutdown JIRA.
Use default ports (HTTP: 8080, Control: 8005) - Recommended [1, Enter], Set custom value for HTTP and Control ports [2]

JIRA can be run in the background.
You may choose to run JIRA as a service, which means it will start
automatically whenever the computer restarts.
Install JIRA as Service?
Yes [y, Enter], No [n]


Extracting files ...


Please wait a few moments while JIRA starts up.
Launching JIRA ...
Installation of JIRA 6.4.5 is complete
Your installation of JIRA 6.4.5 is now ready and can be accessed via your
browser.
JIRA 6.4.5 can be accessed at http://localhost:8080
Finishing installation ...
[root@adancsvso002 opt]#
===================================================================

firewall-cmd --zone=public --add-port=8080/tcp --permanent
firewall-cmd --zone=public --add-port=8005/tcp --permanent
firewall-cmd --reload

wget http://cdn.mysql.com/Downloads/Connector-J/mysql-connector-java-5.1.35.tar.gz
tar zxvf mysql-connector-java-5.1.35.tar.gz
cp -rp mysql-connector-java-5.1.35/mysql-connector-java-5.1.35-bin.jar /opt/atlassian/jira/lib/

systemctl restart mariadb
systemctl status mariadb
service jira start

http://xxx.xxx.xxx.xxx:8080/

Sunday, May 31, 2015

Jenkins Starting issue.


Issue with starting
===================
Note: if you get the following error message, ensure that Java has been installed:
Starting jenkins (via systemctl):  Job for jenkins.service failed. See 'systemctl status jenkins.service' and 'journalctl -xn' for details                                            [FAILED]


Check whether the noexec option is enabled on /tmp; if it is, try remounting it with exec:

mount -o remount,exec /tmp

The other way around is to point Jenkins at a different tmp directory.

Edit /etc/sysconfig/jenkins
JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Djava.io.tmpdir=$JENKINS_HOME/tmp"
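To confirm /tmp really is mounted noexec before remounting, /proc/mounts can be inspected. A small Python sketch (Linux-only; the helper name is mine), factored so the parsing can be tested on sample data:

```python
def is_noexec(mount_lines, target="/tmp"):
    """Return True if `target` appears in the mount table with the noexec option."""
    for line in mount_lines:
        parts = line.split()
        # /proc/mounts format: device mountpoint fstype options dump pass
        if len(parts) >= 4 and parts[1] == target:
            return "noexec" in parts[3].split(",")
    return False  # not a separate mount: it inherits options from its parent

# On a real system:
#   with open("/proc/mounts") as f:
#       print(is_noexec(f))
sample = ["tmpfs /tmp tmpfs rw,nosuid,nodev,noexec,relatime 0 0"]
print(is_noexec(sample))  # True
```

If it returns True, either remount as shown above or switch Jenkins to its own tmp directory.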

We can then access Jenkins at:
http://xxx.xxx.xxx.xxx:8080/jenkins/

Wednesday, May 27, 2015

Jenkins Integration/Automation Tools

Integration/Automation tool
==================
Jenkins is an open source continuous integration tool written in Java. The project was forked from Hudson after a dispute with Oracle. Jenkins provides continuous integration services for software development. It is a server-based system running in a servlet container such as Apache Tomcat.

Installing Jenkins Latest Version
=================================
sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat/jenkins.repo
sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
sudo yum install jenkins

Installation of a stable version
===========================================================
sudo wget -O /etc/yum.repos.d/jenkins.repo http://pkg.jenkins-ci.org/redhat-stable/jenkins.repo
sudo rpm --import https://jenkins-ci.org/redhat/jenkins-ci.org.key
sudo yum install jenkins

Installation of Java
====================
yum install java-1.7.0-openjdk
yum install java-1.7.0*

Start/Stop The Jenkins Services
===============================
service jenkins start/stop/restart
chkconfig jenkins on
/etc/init.d/jenkins
Usage: /etc/init.d/jenkins {start|stop|status|try-restart|restart|force-reload|reload|probe}

Open the firewall ports
firewall-cmd --zone=public --add-port=8080/tcp --permanent
firewall-cmd --zone=public --add-service=http --permanent
firewall-cmd --reload
firewall-cmd --list-all

Friday, May 8, 2015

Openstack KVM libvirtError: internal error: no supported architecture for os type 'hvm'

Nova Error Log
===========
2015-05-06 16:50:22.982 1187 ERROR nova.compute.manager [-] [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545] Instance failed to spawn
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545] Traceback (most recent call last):
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2246, in _build_resources
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     yield resources
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2116, in _build_and_run_instance
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     block_device_info=block_device_info)
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2622, in spawn
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     block_device_info, disk_info=disk_info)
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4425, in _create_domain_and_network
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     power_on=power_on)
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4349, in _create_domain
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     LOG.error(err)
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     six.reraise(self.type_, self.value, self.tb)
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4333, in _create_domain
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     domain = self._conn.defineXML(xml)
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in doit
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     rv = execute(f, *args, **kwargs)
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     six.reraise(c, e, tb)
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     rv = meth(*args, **kwargs)
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3445, in defineXML
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     if ret is None:raise libvirtError('virDomainDefineXML() failed', conn=self)
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545] libvirtError: internal error: no supported architecture for os type 'hvm'
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]
2015-05-06 16:50:22.987 1187 WARNING nova.virt.libvirt.driver [-] [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545] During wait destroy, instance disappeared


Fix
===
If we need to fall back to qemu (for example, the host lacks hardware virtualization support), set the virt type. The first command covers older releases (libvirt_type under [DEFAULT]); the second sets virt_type in the [libvirt] section used by newer releases:
openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_type qemu
openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
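For reference, after running the commands above, /etc/nova/nova.conf should contain something like this sketch:

```
[DEFAULT]
libvirt_type = qemu

[libvirt]
virt_type = qemu
```

Then restart the compute service (systemctl restart openstack-nova-compute) and retry launching the instance.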