
Monday, July 21, 2014

Neutron + Pacemaker for HA gives an error

I was trying to configure HA for the Neutron server in an Icehouse deployment. I was able to set up HA for every other service except Neutron. I was using Pacemaker, following http://docs.openstack.org/high-availability-guide/content/_add_neutron_l3_agent_resource_to_pacemaker.html

but I still get the following error. The DHCP agent and the metadata agent show no errors, but the L3 agent does not start.

Output of crm_mon -1:
Last updated: Fri Jul 18 14:03:25 2014
Last change: Fri Jul 18 13:54:04 2014 via cibadmin on network1
Stack: classic openais (with plugin)
Current DC: network2 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c7262
2 Nodes configured, 2 expected votes
4 Resources configured

Online: [ network1 network2 ]

p_api-ip (ocf::heartbeat:IPaddr2): Started network2
p_neutron-dhcp-agent (ocf::openstack:neutron-dhcp-agent): Started network1
p_neutron-metadata-agent (ocf::openstack:neutron-metadata-agent): Started network1

Failed actions:
    p_neutron-l3-agent_start_0 on network2 'unknown error' (1): call=13, status=TimedOut, last-rc-change='Fri Jul 18 04:32:06 2014', queued=20091ms, exec=0ms
    p_neutron-l3-agent_start_0 on network1 'unknown error' (1): call=23, status=TimedOut, last-rc-change='Fri Jul 18 14:03:01 2014', queued=20010ms, exec=0ms
[root@network1 openstack]#

Solution

The neutron-agent-l3 OCF resource script is to blame: it tries to communicate with the Neutron server directly on port 9696, while the agent's communication is actually handled by the AMQP service (Qpid in my case). We need to modify the script so its check uses the Qpid port rather than the Neutron server port.
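
A rough sketch of the change, assuming the port check is hard-coded in the OCF resource script installed per the guide linked above (the exact file name and line differ between versions of the script, and 5672 is Qpid's default AMQP port, so adjust both to your environment):

# Find where the Neutron API port (9696) is hard-coded in the L3 agent resource script
grep -n 9696 /usr/lib/ocf/resource.d/openstack/neutron-agent-l3

# Point the check at the AMQP broker port instead (5672 for Qpid by default)
sed -i 's/9696/5672/' /usr/lib/ocf/resource.d/openstack/neutron-agent-l3

# Clear the failed actions so Pacemaker retries the resource
crm resource cleanup p_neutron-l3-agent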

Friday, July 18, 2014

Neutron Network Issue. Gateway not pinging for the external network.

On the network node, run:

ip netns

The above command lists the virtual routers (network namespaces). Select the qrouter ID from the output and run:

ip netns exec <qrouter-id> ip addr

ip netns exec <qrouter-id> route -n

The above commands show the IPs assigned inside the virtual router and the qrouter's routing table.

Make sure the routing table has a default gateway. If it does not, try setting one with:

ip netns exec <qrouter-id> route add default gw *** *** *** ***

ip netns exec <qrouter-id> iptables-save
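
To confirm the fix from inside the router namespace, you can ping the external gateway (replace the placeholder with your own gateway address):

ip netns exec <qrouter-id> ping -c 3 <external-gateway-ip>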

 

 

Thursday, July 10, 2014

Configure Amazon CloudWatch Monitoring Scripts for Linux

Installing dependencies

yum install cpan
yum install perl-Time-HiRes
cpan   # then at the cpan> prompt, run: install Switch
yum install zip unzip
yum install wget
yum install perl-Crypt-SSLeay


wget http://ec2-downloads.s3.amazonaws.com/cloudwatch-samples/CloudWatchMonitoringScripts-v1.1.0.zip
unzip CloudWatchMonitoringScripts-v1.1.0.zip
cd aws-scripts-mon/

Fill in your AWS credentials:
vi awscreds.template
AWSAccessKeyId=
AWSSecretKey=

Test it
./mon-put-instance-data.pl --mem-util --mem-used --mem-avail --disk-path=/ --disk-space-util --disk-space-used --disk-space-avail --swap-used --aws-credential-file=/root/aws-scripts-mon/awscreds.template

Add it to cron
* * * * * /usr/bin/perl /root/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail --aws-credential-file=/root/aws-scripts-mon/awscreds.template --from-cron
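
If you also have the AWS CLI installed and configured (not required by the scripts themselves, just a convenient check), you can confirm the metrics are arriving; these scripts publish under the System/Linux namespace:

aws cloudwatch list-metrics --namespace "System/Linux"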

 

 

Saturday, June 28, 2014

[Errno 13] Permission denied: '/var/log/keystone/keystone.log'

[root@controller2 ~]# tail -f /var/log/keystone/keystone-startup.log
    _setup_logging_from_conf(product_name, version)
  File "/usr/lib/python2.6/site-packages/keystone/openstack/common/log.py", line 525, in _setup_logging_from_conf
    filelog = logging.handlers.WatchedFileHandler(logpath)
  File "/usr/lib64/python2.6/logging/handlers.py", line 377, in __init__
    logging.FileHandler.__init__(self, filename, mode, encoding, delay)
  File "/usr/lib64/python2.6/logging/__init__.py", line 827, in __init__
    StreamHandler.__init__(self, self._open())
  File "/usr/lib64/python2.6/logging/__init__.py", line 846, in _open
    stream = open(self.baseFilename, self.mode)
IOError: [Errno 13] Permission denied: '/var/log/keystone/keystone.log'

 

Solution: fix the ownership of the log file so the keystone user can write to it:

chown keystone:keystone /var/log/keystone/keystone.log
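
After fixing the ownership, restart Keystone and confirm it can now write its log (assuming the RHEL/CentOS service name openstack-keystone):

service openstack-keystone restart
tail -n 5 /var/log/keystone/keystone.log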

 

Thursday, June 26, 2014

Virtual IP with Keepalived as a Front End for HAProxy Servers

Virtual IP: 192.168.216.100
HAProxy IP: 192.168.216.101
1. Install Keepalived package:

On RHEL/CentOS:

$ yum install -y centos-release
$ yum install -y keepalived
$ chkconfig keepalived on

2. Tell the kernel to allow binding to a non-local IP, and apply the change:

$ echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
$ sysctl -p
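
You can confirm the setting took effect:

sysctl net.ipv4.ip_nonlocal_bind
# should print: net.ipv4.ip_nonlocal_bind = 1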

Configure Keepalived and Virtual IP
1. Log in to LB1 and add the following to /etc/keepalived/keepalived.conf:

vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # verify the haproxy process exists
    interval 2                    # check every 2 seconds
    weight 2                      # add 2 points of priority if OK
}

vrrp_instance VI_1 {
    interface eth2                # interface to monitor
    state MASTER
    virtual_router_id 51          # assign one ID for this router
    priority 101                  # 101 on the master, 100 on the backup
    virtual_ipaddress {
        192.168.216.100           # the virtual IP
    }
    track_script {
        chk_haproxy
    }
}
2. Log in to LB2 and add the following to /etc/keepalived/keepalived.conf:

vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # verify the haproxy process exists
    interval 2                    # check every 2 seconds
    weight 2                      # add 2 points of priority if OK
}

vrrp_instance VI_1 {
    interface eth2                # interface to monitor
    state MASTER
    virtual_router_id 51          # assign one ID for this router
    priority 100                  # 101 on the master, 100 on the backup
    virtual_ipaddress {
        192.168.216.100           # the virtual IP
    }
    track_script {
        chk_haproxy
    }
}
3. Start Keepalived on both nodes:

$ sudo /etc/init.d/keepalived start
4. Verify the Keepalived status. LB1 should hold the VIP and the MASTER state, while LB2 should run in the BACKUP state without the VIP:

LB1 IP:

$ ip a | grep -e inet.*eth2
inet 192.168.216.101/24 brd 192.168.216.255 scope global eth2
inet 192.168.216.100/32 scope global eth2
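
To test failover, stop Keepalived on LB1; the VIP should move to LB2 within a couple of seconds. A quick sketch:

# On LB1:
service keepalived stop

# On LB2, the VIP should now appear:
ip a | grep -e inet.*eth2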

MySQL HA with HAProxy and Master-Master Replication

Once MySQL master-master replication is in place (see the link below), we add HA on top of it using HAProxy.

http://enekumvenamorublog.wordpress.com/2014/06/25/mysql-replication-master-master/

HAProxy

yum install haproxy

To use HAProxy with MySQL we need to create a MySQL user that HAProxy can log in as for its health check.

GRANT ALL PRIVILEGES ON *.* TO 'haproxy'@'192.168.216.180' IDENTIFIED BY '';
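
The mysql-check option only needs to be able to log in as that user, so granting all privileges is not strictly required. A narrower alternative (a sketch; adjust the host to your HAProxy server's address):

mysql -u root -p -e "CREATE USER 'haproxy'@'192.168.216.180'; FLUSH PRIVILEGES;"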

Sample configuration, /etc/haproxy/haproxy.cfg:

global
    log 127.0.0.1 local0 notice
    user haproxy
    group haproxy

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats mode 777

defaults
    log global
    retries 2
    timeout connect 1000
    timeout server 5000
    timeout client 5000

listen stats 192.168.255.180:80
    mode http
    stats enable
    stats uri /stats
    stats realm HAProxy\ Statistics
    stats auth admin:password

listen MYSQL 192.168.255.190:3306
    balance source
    mode tcp
    option mysql-check user haproxy
    server controller1 192.168.216.130 check
    server controller2 192.168.216.135 check
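
Before putting this into service, you can validate the configuration and then restart HAProxy (on a sysvinit CentOS box):

haproxy -c -f /etc/haproxy/haproxy.cfg
service haproxy restart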

Wednesday, June 25, 2014

Enable HAProxy logging on CentOS

By default, HAProxy will not log to a file unless we make some modifications.
1. Create an rsyslog configuration file (note: the log directory used below must exist; see the note after this step):
nano /etc/rsyslog.d/haproxy.conf
Add these lines to the file:
# Enable UDP port 514 to listen to incoming log messages from haproxy
$ModLoad imudp
$UDPServerRun 514
$template Haproxy,"%msg%\n"
local0.=info -/var/log/haproxy/haproxy.log;Haproxy
local0.notice -/var/log/haproxy/admin.log;Haproxy
# don't log anywhere else
local0.* ~
Restart the rsyslog service:
/etc/init.d/rsyslog restart
Ref: http://blog.hintcafe.com/post/33689067443/haproxy-logging-with-rsyslog-on-linux
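
Note that rsyslog will not create the target directory for you, so create it before restarting rsyslog (skip if /var/log/haproxy already exists):

mkdir -p /var/log/haproxy
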
2. Modify the logrotate config to match the new folder:
nano /etc/logrotate.d/haproxy
Change
/var/log/haproxy.log {
daily
rotate 10
missingok
[...]
to
/var/log/haproxy/*.log {
daily
rotate 10
missingok
[...]
Now we can check if HAProxy logging is working.
tail -f /var/log/haproxy/haproxy.log
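
If HAProxy has not produced any traffic yet, you can push a test message into the local0 facility to confirm the rsyslog rules work (the rules match on the facility, not the sender, so logger is enough for a quick check):

logger -p local0.info "haproxy logging test"
tail -n 1 /var/log/haproxy/haproxy.log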

===================================================

Alternatively, as described in the comments at the top of the stock haproxy.cfg global section, to have these messages end up in /var/log/haproxy.log you will need to:

1) configure syslog to accept network log events. This is done by adding the '-r' option to the SYSLOGD_OPTIONS in /etc/sysconfig/syslog

2) configure local2 events to go to the /var/log/haproxy.log file. A line like the following can be added to /etc/sysconfig/syslog
local2.* /var/log/haproxy.log

In the haproxy config file's global section, add:
log 127.0.0.1 local2 info
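
After making those changes, restart the syslog daemon so the local2 rule is picked up, and reload HAProxy (on newer CentOS releases the daemon is rsyslog rather than syslog, so adjust the service name accordingly):

service rsyslog restart
service haproxy reload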