Friday, July 25, 2014

Managing Two Gateways in a Linux Environment Using Routing Rules

With two NICs on different subnets, replies must leave through the interface the traffic arrived on, so each interface gets its own routing table and rules.

eth3
IP address: 192.168.1.45
Gateway: 192.168.1.1

cat /etc/iproute2/rt_tables
echo "# dual nic-gateway below" >> /etc/iproute2/rt_tables
echo "10 routetable15" >> /etc/iproute2/rt_tables
cat /etc/iproute2/rt_tables
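After the second cat, the end of the file should show the new entry:

# dual nic-gateway below
10 routetable15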

echo "
192.168.1.0 dev eth3 src 192.168.1.45 table routetable15
default via 192.168.1.1 dev eth3 table routetable15
" >> /etc/sysconfig/network-scripts/route-eth3
echo "
from 192.168.1.0/24 table routetable15
to 192.168.1.45 table routetable15
" >> /etc/sysconfig/network-scripts/rule-eth3

eth1
IP address: 192.168.2.45
Gateway: 192.168.2.1

cat /etc/iproute2/rt_tables
echo "11 routetable17" >> /etc/iproute2/rt_tables
cat /etc/iproute2/rt_tables

echo "
192.168.2.0 dev eth1 src 192.168.2.45 table routetable17
default via 192.168.2.1 dev eth1 table routetable17
" >> /etc/sysconfig/network-scripts/route-eth1
echo "
from 192.168.2.0/24 table routetable17
to 192.168.2.45 table routetable17
" >> /etc/sysconfig/network-scripts/rule-eth1

Monday, July 21, 2014

Neutron + Pacemaker for HA Gives error

I was trying to configure HA for the neutron server in an Icehouse deployment. I was able to set up HA for all other services except neutron. I was using Pacemaker for the setup, following http://docs.openstack.org/high-availability-guide/content/_add_neutron_l3_agent_resource_to_pacemaker.html

but I still get the following error: the DHCP agent and metadata agent show no errors, but the L3 agent is not working.

output of crm_mon -1
Last updated: Fri Jul 18 14:03:25 2014
Last change: Fri Jul 18 13:54:04 2014 via cibadmin on network1
Stack: classic openais (with plugin)
Current DC: network2 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c7262
2 Nodes configured, 2 expected votes
4 Resources configured

Online: [ network1 network2 ]

p_api-ip (ocf::heartbeat:IPaddr2): Started network2

p_neutron-dhcp-agent (ocf::openstack:neutron-dhcp-agent): Started network1

p_neutron-metadata-agent (ocf::openstack:neutron-metadata-agent): Started network1

Failed actions:
    p_neutron-l3-agent_start_0 on network2 'unknown error' (1): call=13, status=Timed Out, last-rc-change='Fri Jul 18 04:32:06 2014', queued=20091ms, exec=0ms
    p_neutron-l3-agent_start_0 on network1 'unknown error' (1): call=23, status=Timed Out, last-rc-change='Fri Jul 18 14:03:01 2014', queued=20010ms, exec=0ms

Solution

The neutron-agent-l3 resource script is to blame, as it tries to communicate with the neutron server directly on port 9696, while communication is actually handled by the AMQP service (Qpid in my case). We need to modify the script to use the Qpid port rather than the neutron server's.
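A minimal sketch of the change, assuming the resource script lives at the path used in the HA guide and that 5672 is your AMQP port (adjust both to your setup):

# locate the hard-coded neutron server port in the resource script
grep -n 9696 /usr/lib/ocf/resource.d/openstack/neutron-agent-l3

# swap it for the Qpid port, then re-try the failed resource
sed -i 's/9696/5672/' /usr/lib/ocf/resource.d/openstack/neutron-agent-l3
crm resource cleanup p_neutron-l3-agent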

Friday, July 18, 2014

Neutron Network Issue: Gateway Not Pinging for the External Network

On the network node, run:

ip netns

The above command lists the virtual router namespaces. From that output, pick the qrouter ID and try:

ip netns exec <qrouter-id> ip addr

ip netns exec <qrouter-id> route -n

The above commands show the IPs assigned inside the virtual router and the qrouter's routing table.
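For reference, a healthy qrouter looks something like this (the IDs and addresses here are made up); the qr- interface faces the tenant network and the qg- interface faces the external network:

ip netns exec <qrouter-id> route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         203.0.113.1     0.0.0.0         UG    0      0        0 qg-5c8a15a6-d9
203.0.113.0     0.0.0.0         255.255.255.0   U     0      0        0 qg-5c8a15a6-d9
192.168.100.0   0.0.0.0         255.255.255.0   U     0      0        0 qr-a3b4c5d6-e7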

Make sure the routing table shown has a default gateway. If not, try setting one, then dump the namespace's iptables rules to confirm the NAT entries:

ip netns exec <qrouter-id> route add default gw *** *** *** ***

ip netns exec <qrouter-id> iptables-save
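Once the default route is in place, verify external reachability from inside the namespace (substitute your gateway address):

ip netns exec <qrouter-id> ping -c 3 <external-gateway-ip>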


Thursday, July 10, 2014

Configure Amazon CloudWatch Monitoring Scripts for Linux

Installing dependencies

yum install cpan
yum install perl-Time-HiRes
cpan Switch
yum install zip unzip
yum install wget
yum install perl-Crypt-SSLeay


wget http://ec2-downloads.s3.amazonaws.com/cloudwatch-samples/CloudWatchMonitoringScripts-v1.1.0.zip
unzip CloudWatchMonitoringScripts-v1.1.0.zip
cd aws-scripts-mon/

vi awscreds.template
AWSAccessKeyId=
AWSSecretKey=

Test it
./mon-put-instance-data.pl --mem-util --mem-used --mem-avail --disk-path=/ --disk-space-util --disk-space-used --disk-space-avail --swap-used --aws-credential-file=/root/aws-scripts-mon/awscreds.template
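The scripts also support a dry run: --verify checks the configuration and credentials without pushing any data, and --verbose shows what would be sent:

./mon-put-instance-data.pl --mem-util --verify --verbose --aws-credential-file=/root/aws-scripts-mon/awscreds.template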

Add it to cron
* * * * * /usr/bin/perl /root/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-used --mem-avail --aws-credential-file=/root/aws-scripts-mon/awscreds.template --from-cron
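The --from-cron flag suppresses the script's normal diagnostic output so cron does not mail a report on every run.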


Tuesday, July 8, 2014

Shell In A Box – A Web-Based SSH Terminal to Access Remote Linux Servers

Shell In A Box (pronounced "shellinabox") is a web-based terminal emulator. It has a built-in web server that acts as a web-based SSH client on a specified port, giving you a web terminal emulator to access and control your Linux server's SSH shell remotely from any AJAX/JavaScript- and CSS-enabled browser, with no additional browser plugin needed.

RHEL/CentOS 6 (32-bit and 64-bit)

## RHEL/CentOS 6 32-Bit ##
# wget http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
# rpm -ivh epel-release-6-8.noarch.rpm

## RHEL/CentOS 6 64-Bit ##
# wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
# rpm -ivh epel-release-6-8.noarch.rpm
# yum install -y shellinabox

# vi /etc/sysconfig/shellinaboxd
# TCP port that shellinaboxd's web server listens on; whichever you need, here I am choosing port 80
PORT=80

# specify the IP address of a destination SSH server
OPTS="-s /:SSH:172.16.25.125"

# if you want to restrict access to shellinaboxd from localhost only
OPTS="-s /:SSH:172.16.25.125 --localhost-only"

Saturday, June 28, 2014

[Errno 13] Permission denied: '/var/log/keystone/keystone.log'

[root@controller2 ~]# tail -f /var/log/keystone/keystone-startup.log
_setup_logging_from_conf(product_name, version)
File "/usr/lib/python2.6/site-packages/keystone/openstack/common/log.py", line 525, in _setup_logging_from_conf
filelog = logging.handlers.WatchedFileHandler(logpath)
File "/usr/lib64/python2.6/logging/handlers.py", line 377, in __init__
logging.FileHandler.__init__(self, filename, mode, encoding, delay)
File "/usr/lib64/python2.6/logging/__init__.py", line 827, in __init__
StreamHandler.__init__(self, self._open())
File "/usr/lib64/python2.6/logging/__init__.py", line 846, in _open
stream = open(self.baseFilename, self.mode)
IOError: [Errno 13] Permission denied: '/var/log/keystone/keystone.log'


The keystone user cannot write its own log file; fix the ownership:

chown keystone:keystone /var/log/keystone/keystone.log
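Then restart the service so it can reopen the log:

service openstack-keystone restart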


Thursday, June 26, 2014

Virtual IP with Keepalived as a Front End for HAProxy Servers

Virtual IP: 192.168.216.100
HAProxy IP: 192.168.216.101

1. Install the Keepalived package:

On RHEL/CentOS:

$ yum install -y centos-release
$ yum install -y keepalived
$ chkconfig keepalived on

2. Tell the kernel to allow binding a non-local IP on the hosts, and apply the change:

$ echo "net.ipv4.ip_nonlocal_bind = 1" >> /etc/sysctl.conf
$ sysctl -p
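You can confirm the setting took effect:

$ sysctl net.ipv4.ip_nonlocal_bind
net.ipv4.ip_nonlocal_bind = 1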

Configure Keepalived and Virtual IP
1. Log in to LB1 and add the following to /etc/keepalived/keepalived.conf:

vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # verify the pid existence
    interval 2                    # check every 2 seconds
    weight 2                      # add 2 points of prio if OK
}

vrrp_instance VI_1 {
    interface eth2                # interface to monitor
    state MASTER
    virtual_router_id 51          # assign one ID for this router
    priority 101                  # 101 on master, 100 on backup
    virtual_ipaddress {
        192.168.216.100           # the virtual IP
    }
    track_script {
        chk_haproxy
    }
}
2. Log in to LB2 and add the following to /etc/keepalived/keepalived.conf:

vrrp_script chk_haproxy {
    script "killall -0 haproxy"   # verify the pid existence
    interval 2                    # check every 2 seconds
    weight 2                      # add 2 points of prio if OK
}

vrrp_instance VI_1 {
    interface eth2                # interface to monitor
    state BACKUP                  # LB2 starts as backup
    virtual_router_id 51          # assign one ID for this router
    priority 100                  # 101 on master, 100 on backup
    virtual_ipaddress {
        192.168.216.100           # the virtual IP
    }
    track_script {
        chk_haproxy
    }
}
3. Start Keepalived on both nodes:

$ sudo /etc/init.d/keepalived start
4. Verify the Keepalived status. LB1 should hold the VIP and the MASTER state, while LB2 should run in BACKUP state without the VIP:

LB1 IP:

$ ip a | grep -e inet.*eth2
inet 192.168.216.101/24 brd 192.168.216.255 scope global eth2
inet 192.168.216.100/32 scope global eth2
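To test failover, stop Keepalived on LB1 and confirm the VIP moves to LB2 (LB2's own address below is made up, since it is not listed above):

$ sudo /etc/init.d/keepalived stop

LB2 IP:

$ ip a | grep -e inet.*eth2
inet 192.168.216.102/24 brd 192.168.216.255 scope global eth2
inet 192.168.216.100/32 scope global eth2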