Friday, October 31, 2014

Installing Swish Module for php

The Swish-e package does not come with the current repos of CentOS or Red Hat, so we need to compile and install it before installing the Swish PHP extension through PECL. Otherwise we may end up with errors while installing the Swish package with pecl.

Downloading and installing the Swish-e package:
wget http://swish-e.org/distribution/swish-e-2.4.7.tar.gz
tar zxvf swish-e-2.4.7.tar.gz
cd swish-e-2.4.7
./configure
make
make check
make install

cd ~

Installing the Swish PHP module using pecl:
pecl install swish-beta
chmod 755 /usr/lib64/php/modules/swish.so
echo "extension=swish.so" >> /etc/php.ini

Thursday, October 30, 2014

Installing PHP modules using pecl command.

Once you have installed PHP, you need to install the modules required to support your development process. We can use the pecl command to install them.

To install the pecl command:

yum install php-pear

Now, to install the needed modules, just use pecl:

pecl install <Module Name>

To install a beta version
pecl install <Module Name>-beta

To list all modules in pecl database

pecl list-all

To check whether the module is installed or not

php -m
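As a worked example (the memcache module is chosen arbitrarily here; building a PECL module needs gcc and php-devel, and zlib-devel is added as a likely build dependency):

yum install -y gcc php-devel zlib-devel
pecl install memcache
echo "extension=memcache.so" >> /etc/php.ini
php -m | grep memcache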

Wednesday, October 29, 2014

Installing PHP 5.6 in Centos6/7

Compiling PHP can be difficult at times, but we can simply install the latest version of PHP from the proper Remi repo.

Install Remi repository

CentOS and Red Hat (RHEL)
Remi and EPEL (Dependency) on CentOS 7 and Red Hat (RHEL) 7

64 bit : yum install -y http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-2.noarch.rpm
yum install -y http://rpms.famillecollet.com/enterprise/remi-release-7.rpm


Remi and Epel repo ( Dependency ) on CentOS 6 and Red Hat (RHEL) 6
64 bit  : yum install -y http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
32 bit  : yum install -y http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm

yum install -y http://rpms.famillecollet.com/enterprise/remi-release-6.rpm


Installing PHP 5.6 from the Remi repo and httpd from the local repo
CentOS 7/6.5/5.10 and Red Hat (RHEL) 7/6.5/5.10
yum --enablerepo=remi,remi-php56 install httpd php php-common

Install PHP 5.6.0 modules

yum --enablerepo=remi,remi-php56 install php-pecl-apcu php-cli php-pear php-pdo php-mysqlnd php-pgsql php-pecl-mongo php-sqlite php-pecl-memcache php-pecl-memcached php-gd php-mbstring php-mcrypt php-xml

Start Apache HTTP server (httpd) and autostart Apache HTTP server (httpd) on boot
## CentOS/RHEL 7 ##
systemctl start httpd.service ## use restart after update


## CentOS / RHEL 6.5/5.10 ##
/etc/init.d/httpd start ## use restart after update
## OR ##
service httpd start ## use restart after update


##CentOS/RHEL 7 ##
systemctl enable httpd.service

## CentOS / RHEL 6.5/5.10 ##
chkconfig --levels 235 httpd on


Create a test PHP page to check that Apache, PHP and the PHP modules are working.
Add the following content to the /var/www/html/test.php file.

<?php

    phpinfo();
?>

Now check the PHP page at http://<<SERVER_IP>>/test.php

Make sure that the EPEL and Remi repos are disabled afterwards to avoid further issues in the future; one way to do that is shown below.
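A sketch using yum-config-manager from the yum-utils package (you can equally set enabled=0 in the repo files under /etc/yum.repos.d/):

yum install -y yum-utils
yum-config-manager --disable epel remi remi-php56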

Modules available in the latest PHP:

bcmath
bz2
calendar
com_dotnet
ctype
curl
date
dba
dom
enchant
ereg
exif
fileinfo
filter
ftp
gd
gettext
gmp
hash
iconv
imap
interbase
intl
json
ldap
libxml
mbstring
mcrypt
mssql
mysql
mysqli
mysqlnd
oci8
odbc
opcache
openssl
pcntl
pcre
pdo
pdo_dblib
pdo_firebird
pdo_mysql
pdo_oci
pdo_odbc
pdo_pgsql
pdo_sqlite
pgsql
phar
posix
pspell
readline
recode
reflection
session
shmop
simplexml
skeleton
snmp
soap
sockets
spl
sqlite3
standard
sybase_ct
sysvmsg
sysvsem
sysvshm
tidy
tokenizer
wddx
xml
xmlreader
xmlrpc
xmlwriter
xsl
zip
zlib

Monday, October 27, 2014

Openstack Juno - Neutron HA using VRRP (Keepalived)


First, configure two Neutron servers; call them network and network2.
http://www.adminz.in/2014/10/openstack-juno-part-5-neutron.html

Then install Keepalived on both Neutron servers.

#Add the following entries on both neutron servers
#in  /etc/neutron/neutron.conf
l3_ha = True
#And the HA scheduler has to be used:
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler
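#Optionally, pin how many L3 agents host each HA router. These options exist in
#Juno's neutron.conf; setting both to 2 is an assumption for a two-node setup:
max_l3_agents_per_router = 2
min_l3_agents_per_router = 2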


On the controller server, update the database:
neutron-db-manage --config-file=/etc/neutron/neutron.conf  --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini upgrade head

mkdir /etc/neutron/rootwrap.d
cp /usr/share/neutron/rootwrap/l3.filters /etc/neutron/rootwrap.d/

Now restart the OpenStack services on all the controller and Neutron nodes.



On the controller server, create a new set of network settings:

source admin-openrc.sh
neutron net-create ext-net --shared --router:external True --provider:physical_network external --provider:network_type flat
neutron subnet-create ext-net --name ext-subnet --allocation-pool start=10.1.0.101,end=10.1.0.200 --disable-dhcp --gateway 10.1.0.42 10.1.0.0/24


To create the tenant network
neutron net-create cli-net
neutron subnet-create cli-net --name cli-subnet --gateway 192.168.1.1 192.168.1.0/24
neutron router-create cli-router
neutron router-interface-add cli-router cli-subnet
neutron router-gateway-set cli-router ext-net
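To confirm the HA router got scheduled to both L3 agents, you can ask Neutron which agents host it (a standard client command; output columns may vary by version):

neutron l3-agent-list-hosting-router cli-router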


Now if we check both Neutron nodes, we can see the router namespace on each:

[root@network ~]# ip netns
qrouter-26aed9ea-b9d5-4427-a3e4-9e75be3e1bfa
[root@network ~]#

[root@network2 ~]# ip netns
qrouter-26aed9ea-b9d5-4427-a3e4-9e75be3e1bfa
[root@network2 ~]#


[root@network ~]#  ip netns exec qrouter-26aed9ea-b9d5-4427-a3e4-9e75be3e1bfa ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
10: ha-224b2c85-81: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:42:4d:52 brd ff:ff:ff:ff:ff:ff
    inet 169.254.192.8/18 brd 169.254.255.255 scope global ha-224b2c85-81
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe42:4d52/64 scope link
       valid_lft forever preferred_lft forever
11: qr-842e3e41-3a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:13:bc:63 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 scope global qr-842e3e41-3a
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe13:bc63/64 scope link
       valid_lft forever preferred_lft forever
12: qg-04d4c06e-49: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:b7:19:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.1.0.101/24 scope global qg-04d4c06e-49
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:feb7:19b8/64 scope link
       valid_lft forever preferred_lft forever
[root@network ~]#
[root@network ~]#



[root@network2 ~]# ip netns exec qrouter-26aed9ea-b9d5-4427-a3e4-9e75be3e1bfa ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
16: ha-37517361-ec: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:6f:a0:11 brd ff:ff:ff:ff:ff:ff
    inet 169.254.192.7/18 brd 169.254.255.255 scope global ha-37517361-ec
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe6f:a011/64 scope link
       valid_lft forever preferred_lft forever
17: qr-842e3e41-3a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:13:bc:63 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 scope global qr-842e3e41-3a
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe13:bc63/64 scope link
       valid_lft forever preferred_lft forever
18: qg-04d4c06e-49: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:b7:19:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.1.0.101/24 scope global qg-04d4c06e-49
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:feb7:19b8/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
[root@network2 ~]#


In the above output you can see that the devices qg-04d4c06e-49 and qr-842e3e41-3a have been created on both servers.

Friday, October 24, 2014

Removing Blank Lines from a File

In sed
Type the following sed command to delete all empty lines:

Display without blank lines:
sed '/^$/d' input.txt

Remove all blank lines from the file:
sed -i '/^$/d' input.txt
cat input.txt

In awk 

Type the following awk command to delete all empty lines:

Display without blank lines:
awk NF input.txt

Remove all blank lines from the file:
awk 'NF' input.txt > output.txt
cat output.txt


In perl
Type the following Perl one-liner to delete all empty lines and save the original file as input.txt.backup:
Remove all blank lines from the file:
perl -i.backup -n -e "print if /\S/" input.txt


In the vi editor
:g/^$/d
:g executes a command on every line matching a regex; here the regex matches blank lines (^$) and the command is :d (delete).


In tr
tr -s '\n' < abc.txt

In grep
grep -v "^$" abc.txt
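A quick way to sanity-check any of the above on a throwaway file:

printf 'one\n\ntwo\n\n\nthree\n' > input.txt
wc -l < input.txt               # 6 lines
sed '/^$/d' input.txt | wc -l   # 3 lines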



Wednesday, October 22, 2014

Openstack Juno Part 6 - Neutron Configuration on Compute Service

Installing the packages

yum install openstack-neutron-ml2 openstack-neutron-openvswitch ipset -y


Configure the Service 
#Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password mar4neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_password guest

openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True

#Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network interface on your compute node. This guide uses 10.0.1.31 for the IP address of the instance tunnels network interface on the first compute node.
#Dedicated Ip for Tunneling in Compute Node

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 10.0.0.214
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True


systemctl enable openvswitch.service
systemctl start openvswitch.service


Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_strategy keystone
openstack-config --set /etc/nova/nova.conf neutron admin_tenant_name service
openstack-config --set /etc/nova/nova.conf neutron admin_username neutron
openstack-config --set /etc/nova/nova.conf neutron admin_password mar4neutron
openstack-config --set /etc/nova/nova.conf neutron admin_auth_url http://controller:35357/v2.0

openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

#Due to a packaging bug, the Open vSwitch agent initialization script explicitly looks for the Open vSwitch plug-in configuration file rather than a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file. Run the following commands to resolve this issue:

cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service


Starting the Services
systemctl enable neutron-openvswitch-agent.service
systemctl restart neutron-openvswitch-agent.service
systemctl restart openstack-nova-compute.service
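To verify the agent came up, list the agents from the controller; the Open vSwitch agent on the compute node should show as alive (assuming the admin credentials file from the earlier parts):

source admin-openrc.sh
neutron agent-list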

Tuesday, October 21, 2014

Openstack Juno Part 5 - Neutron configuring Network Node

Installing the Packages

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch ipset  -y

Configuring the Service
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password mar4neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_password guest


openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True


#Add verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.
#Comment out any lines in the [service_providers] section.

openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT use_namespaces True

#We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/l3_agent.ini to assist with troubleshooting.


openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT use_namespaces True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf

echo "dhcp-option-force=26,1454" >> /etc/neutron/dnsmasq-neutron.conf
chown neutron:neutron /etc/neutron/dnsmasq-neutron.conf

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url http://controller:5000/v2.0
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_region regionOne
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_tenant_name service
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_user neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_password mar4neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret mar4meta

#We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/metadata_agent.ini to assist with troubleshooting.

#Perform the next two steps on the controller node.
#On the controller node, configure Compute to use the metadata service:
#Replace METADATA_SECRET with the secret you chose for the metadata proxy.

openstack-config --set /etc/nova/nova.conf DEFAULT service_neutron_metadata_proxy true
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_metadata_proxy_shared_secret mar4meta

On the controller node, restart the Compute API service:
systemctl restart openstack-nova-api.service

# To configure the Modular Layer 2 (ML2) plug-in

# Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network interface on your network node. This guide uses 10.0.1.21 for the IP address of the instance tunnels network interface on the network node.
#Dedicated IP for tunneling in network node
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks external

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 10.0.0.212
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs bridge_mappings external:br-ex


openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True


systemctl enable openvswitch.service
systemctl start openvswitch.service

#Add the external bridge:
ovs-vsctl add-br br-ex
#Add a port to the external bridge that connects to the physical external network interface:
#Replace INTERFACE_NAME with the actual interface name. For example, eth2 or ens256.
ovs-vsctl add-port br-ex eth1
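#A quick check that the bridge and port exist (standard ovs-vsctl queries):
ovs-vsctl list-ports br-ex   # should print eth1
ovs-vsctl show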


#Depending on your network interface driver, you may need to disable Generic Receive Offload (GRO) to achieve suitable throughput between your instances and the external network.
#To temporarily disable GRO on the external network interface while testing your environment:
# ethtool -K INTERFACE_NAME gro off



ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service


Starting the services

systemctl enable neutron-openvswitch-agent.service
systemctl enable neutron-l3-agent.service
systemctl enable neutron-dhcp-agent.service
systemctl enable neutron-metadata-agent.service
systemctl enable neutron-ovs-cleanup.service
systemctl start neutron-openvswitch-agent.service
systemctl start neutron-l3-agent.service
systemctl start neutron-dhcp-agent.service
systemctl start neutron-metadata-agent.service

Monday, October 20, 2014

Openstack Juno + Docker error "Docker daemon is not running or is not reachable"

I was getting the following error while integrating Docker with OpenStack Juno.

"2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     _('Docker daemon is not running or is not reachable'
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup NovaException: Docker daemon is not running or is not reachable (check the rights on /var/run/docker.sock)"

I tried changing the permissions on docker.sock, but that didn't help. When I upgraded Docker to version 1.2, the issue was fixed. The Docker version that ships with CentOS is a little old; we can get RPMs of a newer Docker for CentOS 7 from the CentOS build system:

Download the following RPMs

wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-1.2.0-4.el7.centos.x86_64.rpm
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-devel-1.2.0-4.el7.centos.x86_64.rpm
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-pkg-devel-1.2.0-4.el7.centos.x86_64.rpm

Install the RPMs

In the same directory:
yum install docker-*.rpm
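After the upgrade, restart Docker and nova-compute and confirm the new version (service names assumed from a standard CentOS 7 setup):

systemctl restart docker
docker version
systemctl restart openstack-nova-compute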


Error
====
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 125, in wait
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     x.wait()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 47, in wait
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     return self.thread.wait()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 173, in wait
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     return self._exit_event.wait()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     return hubs.get_hub().switch()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 293, in switch
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     return self.greenlet.switch()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 212, in main
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     result = function(*args, **kwargs)
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 492, in run_service
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     service.start()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/service.py", line 164, in start
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     self.manager.init_host()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1125, in init_host
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     self.driver.init_host(host=self.host)
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/etc/nova/src/novadocker/novadocker/virt/docker/driver.py", line 82, in init_host
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     _('Docker daemon is not running or is not reachable'
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup NovaException: Docker daemon is not running or is not reachable (check the rights on /var/run/docker.sock)
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup
2014-10-20 14:24:22.876 2995 INFO oslo.messaging._drivers.impl_rabbit [req-aadcbda1-ccd1-4b49-8dac-43ce49afa0fa ] Connecting to AMQP server on controller:5672
2014-10-20 14:24:22.901 2995 INFO oslo.messaging._drivers.impl_rabbit [req-aadcbda1-ccd1-4b49-8dac-43ce49afa0fa ] Connected to AMQP server on controller:5672
2014-10-20 14:24:22.906 2995 INFO oslo.messaging._drivers.impl_rabbit [req-aadcbda1-ccd1-4b49-8dac-43ce49afa0fa ] Connecting to AMQP server on controller:5672
2014-10-20 14:24:22.919 2995 INFO oslo.messaging._drivers.impl_rabbit [req-aadcbda1-ccd1-4b49-8dac-43ce49afa0fa ] Connected to AMQP server on controller:5672
2014-10-20 14:24:22.954 2995 ERROR nova.openstack.common.threadgroup [-] Docker daemon is not running or is not reachable (check the rights on /var/run/docker.sock)
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 125, in wait
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     x.wait()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 47, in wait
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     return self.thread.wait()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 173, in wait
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     return self._exit_event.wait()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     return hubs.get_hub().switch()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 293, in switch
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     return self.greenlet.switch()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 212, in main
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     result = function(*args, **kwargs)
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 492, in run_service
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     service.start()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/service.py", line 164, in start
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     self.manager.init_host()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1125, in init_host
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     self.driver.init_host(host=self.host)
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/etc/nova/src/novadocker/novadocker/virt/docker/driver.py", line 82, in init_host
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     _('Docker daemon is not running or is not reachable'
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup NovaException: Docker daemon is not running or is not reachable (check the rights on /var/run/docker.sock)
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup

Openstack Juno Part 4 - Neutron on the Controller

Create the Mysql Database

create database neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'mar4neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'mar4neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'10.0.0.211' IDENTIFIED BY 'mar4neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'10.0.0.212' IDENTIFIED BY 'mar4neutron';
flush privileges;

Create Keystone endpoints and users
source /root/admin-openrc.sh
keystone user-create --name neutron --pass mar4neutron
keystone user-role-add --user neutron --tenant service --role admin
keystone service-create --name neutron --type network --description "OpenStack Networking"
keystone endpoint-create --service-id $(keystone service-list | awk '/ network / {print $2}') --publicurl http://controller:9696 --adminurl http://controller:9696 --internalurl http://controller:9696 --region regionOne

Installing the packages 
yum install openstack-neutron openstack-neutron-ml2 python-neutronclient which -y

Configuring the Packages
openstack-config --set /etc/neutron/neutron.conf database connection mysql://neutron:mar4neutron@controller/neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password mar4neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_password guest

openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_url http://controller:8774/v2
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_auth_url http://controller:35357/v2.0
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_username nova
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_tenant_id $(keystone tenant-list | awk '/ service / { print $2 }')
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_password mar4nova
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_region_name regionOne

openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True


openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True


openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_strategy keystone
openstack-config --set /etc/nova/nova.conf neutron admin_tenant_name service
openstack-config --set /etc/nova/nova.conf neutron admin_username neutron
openstack-config --set /etc/nova/nova.conf neutron admin_password mar4neutron
openstack-config --set /etc/nova/nova.conf neutron admin_auth_url http://controller:35357/v2.0



ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Populating the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron

Starting the Services.
systemctl restart openstack-nova-api.service
systemctl restart openstack-nova-scheduler.service
systemctl restart openstack-nova-conductor.service
systemctl enable neutron-server.service
systemctl start neutron-server.service
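A quick verification that neutron-server is up and answering; listing the loaded extensions is the usual smoke test:

source /root/admin-openrc.sh
neutron ext-list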

Checking the database
MariaDB [neutron]> show tables;
+-------------------------------------+
| Tables_in_neutron                   |
+-------------------------------------+
| agents                              |
| alembic_version                     |
| allowedaddresspairs                 |
| arista_provisioned_nets             |
| arista_provisioned_tenants          |
| arista_provisioned_vms              |
| brocadenetworks                     |
| brocadeports                        |
| cisco_credentials                   |
| cisco_csr_identifier_map            |
| cisco_hosting_devices               |
| cisco_ml2_apic_contracts            |
| cisco_ml2_apic_host_links           |
| cisco_ml2_apic_names                |
| cisco_ml2_nexusport_bindings        |
| cisco_n1kv_multi_segments           |
| cisco_n1kv_network_bindings         |
| cisco_n1kv_port_bindings            |
| cisco_n1kv_profile_bindings         |
| cisco_n1kv_trunk_segments           |
| cisco_n1kv_vlan_allocations         |
| cisco_n1kv_vmnetworks               |
| cisco_n1kv_vxlan_allocations        |
| cisco_network_profiles              |
| cisco_policy_profiles               |
| cisco_port_mappings                 |
| cisco_provider_networks             |
| cisco_qos_policies                  |
| cisco_router_mappings               |
| consistencyhashes                   |
| csnat_l3_agent_bindings             |
| dnsnameservers                      |
| dvr_host_macs                       |
| embrane_pool_port                   |
| externalnetworks                    |
| extradhcpopts                       |
| firewall_policies                   |
| firewall_rules                      |
| firewalls                           |
| floatingips                         |
| ha_router_agent_port_bindings       |
| ha_router_networks                  |
| ha_router_vrid_allocations          |
| healthmonitors                      |
| hyperv_network_bindings             |
| hyperv_vlan_allocations             |
| ikepolicies                         |
| ipallocationpools                   |
| ipallocations                       |
| ipavailabilityranges                |
| ipsec_site_connections              |
| ipsecpeercidrs                      |
| ipsecpolicies                       |
| lsn                                 |
| lsn_port                            |
| maclearningstates                   |
| members                             |
| meteringlabelrules                  |
| meteringlabels                      |
| ml2_brocadenetworks                 |
| ml2_brocadeports                    |
| ml2_dvr_port_bindings               |
| ml2_flat_allocations                |
| ml2_gre_allocations                 |
| ml2_gre_endpoints                   |
| ml2_network_segments                |
| ml2_port_bindings                   |
| ml2_vlan_allocations                |
| ml2_vxlan_allocations               |
| ml2_vxlan_endpoints                 |
| mlnx_network_bindings               |
| multi_provider_networks             |
| network_bindings                    |
| network_states                      |
| networkconnections                  |
| networkdhcpagentbindings            |
| networkflavors                      |
| networkgatewaydevicereferences      |
| networkgatewaydevices               |
| networkgateways                     |
| networkqueuemappings                |
| networks                            |
| networksecuritybindings             |
| neutron_nsx_network_mappings        |
| neutron_nsx_port_mappings           |
| neutron_nsx_router_mappings         |
| neutron_nsx_security_group_mappings |
| nexthops                            |
| nuage_net_partition_router_mapping  |
| nuage_net_partitions                |
| nuage_provider_net_bindings         |
| nuage_subnet_l2dom_mapping          |
| ofcfiltermappings                   |
| ofcnetworkmappings                  |
| ofcportmappings                     |
| ofcroutermappings                   |
| ofctenantmappings                   |
| ovs_network_bindings                |
| ovs_tunnel_allocations              |
| ovs_tunnel_endpoints                |
| ovs_vlan_allocations                |
| packetfilters                       |
| poolloadbalanceragentbindings       |
| poolmonitorassociations             |
| pools                               |
| poolstatisticss                     |
| port_profile                        |
| portbindingports                    |
| portinfos                           |
| portqueuemappings                   |
| ports                               |
| portsecuritybindings                |
| providerresourceassociations        |
| qosqueues                           |
| quotas                              |
| router_extra_attributes             |
| routerflavors                       |
| routerl3agentbindings               |
| routerports                         |
| routerproviders                     |
| routerroutes                        |
| routerrules                         |
| routers                             |
| routerservicetypebindings           |
| securitygroupportbindings           |
| securitygrouprules                  |
| securitygroups                      |
| segmentation_id_allocation          |
| servicerouterbindings               |
| sessionpersistences                 |
| subnetroutes                        |
| subnets                             |
| tunnelkeylasts                      |
| tunnelkeys                          |
| tz_network_bindings                 |
| vcns_edge_monitor_bindings          |
| vcns_edge_pool_bindings             |
| vcns_edge_vip_bindings              |
| vcns_firewall_rule_bindings         |
| vcns_router_bindings                |
| vips                                |
| vpnservices                         |
+-------------------------------------+
142 rows in set (0.00 sec)

Sunday, October 19, 2014

Failed to issue method call: Unit iptables.service failed to load In Centos7

In RHEL 7 / CentOS 7, firewalld was introduced to manage iptables. IMHO, firewalld is more suited for workstations than for server environments.

It is possible to go back to a more classic iptables setup. First, stop and mask the firewalld service:

systemctl stop firewalld
systemctl mask firewalld
Then, install the iptables-services package:

yum install iptables-services
Enable the service at boot-time:

systemctl enable iptables
Managing the service

systemctl [stop|start|restart] iptables
Systemctl doesn't seem to manage the save action like you were able to do in the past with service:

/usr/libexec/iptables/iptables.init save
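For example, to open port 80 and persist the rule across reboots:

iptables -I INPUT -p tcp --dport 80 -j ACCEPT
/usr/libexec/iptables/iptables.init save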

Friday, October 17, 2014

Logstash to parse local files and Apache/nginx logs

Filters in Logstash
Filters are an in-line processing mechanism that provides the flexibility to slice and dice your data to fit your needs. Let's see one in action, namely the grok filter.

input { stdin { } }

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
Run logstash with this configuration:

bin/logstash -f logstash-filter.conf
Now paste this line into the terminal (so it will be processed by the stdin input):

127.0.0.1 - - [11/Dec/2013:00:01:45 -0800] "GET /xampp/status.php HTTP/1.1" 200 3891 "http://cadenza/xampp/navi.php" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:25.0) Gecko/20100101 Firefox/25.0"


Run Logstash on a local file by configuring the input section. Below we parse an Apache access log from the local server.

input {
  file {
    path => "/Users/kurt/logs/access_log"
    start_position => beginning
  }
}

filter {
  if [path] =~ "access" {
    mutate { replace => { "type" => "apache_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch {
    host => localhost
  }
  stdout { codec => rubydebug }
}

Logstash configuration for parsing nginx logs. Nginx's default access log uses the same combined format as Apache, so the COMBINEDAPACHELOG grok pattern works unchanged; only the file path and type label differ:

input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => beginning
  }
}

filter {
  if [path] =~ "access" {
    mutate { replace => { "type" => "nginx_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch {
    host => localhost
  }
  stdout { codec => rubydebug }
}

Log Monitoring with Kibana + Logstash + Elasticsearch



Centralized logging using Logstash and Elasticsearch can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place.


Installing Java 

yum install java-1.7.0-openjdk-*

Install Elasticsearch

yum install https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.3.4.noarch.rpm

Elasticsearch is now installed. Let's edit the configuration file /etc/elasticsearch/elasticsearch.yml.

Add the following line somewhere in the file, to disable dynamic scripts:

script.disable_dynamic: true

You will also want to restrict outside access to your Elasticsearch instance, so outsiders can't read your data or shut down your Elasticsearch cluster through the HTTP API. Find the line that specifies network.host and uncomment it so it looks like this:

network.host: localhost

Then disable multicast by finding the discovery.zen.ping.multicast.enabled item and uncommenting so it looks like this:

discovery.zen.ping.multicast.enabled: false


Now start Elasticsearch:

sudo service elasticsearch restart
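Elasticsearch should now answer on localhost only; a quick check:

curl http://localhost:9200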


Install Nginx

yum install -y http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

yum install nginx -y

Download the sample Nginx configuration from Kibana's github repository to your home directory:

cd ~; curl -OL https://gist.githubusercontent.com/thisismitch/2205786838a6a5d61f55/raw/f91e06198a7c455925f6e3099e3ea7c186d0b263/nginx.conf

Open the sample configuration file for editing:

vi nginx.conf

Find and change the values of the server_name to your FQDN (or localhost if you aren't using a domain name) and root to where we installed Kibana, so they look like the following entries:

server_name FQDN;
root  /usr/share/nginx/kibana3;

Save and exit. Now copy it over your Nginx default server block with the following command:

sudo cp ~/nginx.conf /etc/nginx/conf.d/default.conf


Installing Kibana to parse the logs
wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.1.tar.gz
tar zxvf kibana-3.1.1.tar.gz


Open the Kibana configuration file kibana-3.1.1/config.js  and  find the line that specifies the elasticsearch server URL, and replace the port number (9200 by default) with 80:

   elasticsearch: "http://"+window.location.hostname+":80",

mv kibana-3.1.1 /usr/share/nginx/kibana3

Start Nginx

service nginx start

sudo yum install httpd-tools-2.2.15
Then generate a login that will be used in Kibana to save and share dashboards (substitute your own username):
sudo htpasswd -c /etc/nginx/conf.d/kibana.myhost.org.htpasswd user

Install Logstash

yum install https://download.elasticsearch.org/logstash/logstash/packages/centos/logstash-1.4.2-1_2c0f5a1.noarch.rpm -y

Creating Certificates

cd /etc/pki/tls; sudo openssl req -x509 -batch -nodes -days 3650 -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt


cat << EOF >> /etc/logstash/conf.d/01-lumberjack-input.conf
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
EOF

cat << EOF >> /etc/logstash/conf.d/10-syslog.conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
EOF


cat << EOF >> /etc/logstash/conf.d/30-lumberjack-output.conf
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
EOF




Set up the Logstash Forwarder on the servers to be monitored

On the Logstash server, copy the SSL certificate to the client server (substitute your own login):

scp /etc/pki/tls/certs/logstash-forwarder.crt user@server_private_IP:/tmp


Install Logstash Forwarder Package

yum install -y http://packages.elasticsearch.org/logstashforwarder/centos/logstash-forwarder-0.3.1-1.x86_64.rpm

Next, you will want to install the Logstash Forwarder init script, so it starts on bootup. We will use the init script provided by logstashbook.com:

cd /etc/init.d/; sudo curl -o logstash-forwarder http://logstashbook.com/code/4/logstash_forwarder_redhat_init
sudo chmod +x logstash-forwarder

The init script depends on a file called /etc/sysconfig/logstash-forwarder. A sample file is available to download:

sudo curl -o /etc/sysconfig/logstash-forwarder http://logstashbook.com/code/4/logstash_forwarder_redhat_sysconfig

sudo vi /etc/sysconfig/logstash-forwarder
And modify the LOGSTASH_FORWARDER_OPTIONS value so it looks like the following:
LOGSTASH_FORWARDER_OPTIONS="-config /etc/logstash-forwarder -spool-size 100"
Save and quit.

Now copy the SSL certificate into the appropriate location (/etc/pki/tls/certs):

sudo cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/

Configure Logstash Forwarder
On the client server, create the Logstash Forwarder configuration file, which is in JSON format:

cat << EOF > /etc/logstash-forwarder
{
  "network": {
    "servers": [ "192.168.255.1:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure"
       ],
      "fields": { "type": "syslog" }
    }
   ]
}

EOF


Note that this is where you would add more files/types to configure Logstash Forwarder to ship other log files to Logstash on port 5000.

Now we will want to add the Logstash Forwarder service with chkconfig:

sudo chkconfig --add logstash-forwarder

Now start Logstash Forwarder to put our changes into place:

sudo service logstash-forwarder start
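Back on the Logstash/Elasticsearch server you can confirm that events are arriving with a query against the Elasticsearch search API (a simple smoke test):

curl 'http://localhost:9200/_search?q=type:syslog&pretty'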


Now browse to the Kibana server's IP to see the dashboard.

Thursday, October 16, 2014

Poodle-SSLv3 Vulnerability

A vulnerability in the SSLv3 encryption protocol was disclosed. This vulnerability, known as POODLE (Padding Oracle On Downgraded Legacy Encryption), allows an attacker to read information encrypted with this version of the protocol in plain text using a man-in-the-middle attack.

Although SSLv3 is an older version of the protocol which is mainly obsolete, many pieces of software still fall back on SSLv3 if better encryption options are not available. More importantly, it is possible for an attacker to force SSLv3 connections if it is an available alternative for both participants attempting a connection.

How to test for SSL POODLE vulnerability?
$ openssl s_client -connect google.com:443 -ssl3
If there is a handshake failure, the server does not support SSLv3 and is safe from this vulnerability; otherwise SSLv3 support must be disabled.
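Since s_client exits non-zero on a handshake failure, a small loop can scan several endpoints at once (the host list here is illustrative):

for h in example.com:443 mail.example.com:465; do
  echo | openssl s_client -connect "$h" -ssl3 > /dev/null 2>&1 \
    && echo "$h: SSLv3 accepted (vulnerable)" \
    || echo "$h: SSLv3 refused (OK)"
done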


The POODLE vulnerability exists because the SSLv3 protocol does not adequately check the padding bytes that are sent with encrypted messages.

Since these cannot be verified by the receiving party, an attacker can replace these and pass them on to the intended destination. When done in a specific way, the modified payload will potentially be accepted by the recipient without complaint.

Because the POODLE vulnerability is not an implementation problem but an inherent issue with the entire protocol, there is no workaround; the only reliable solution is to not use SSLv3.

In nginx configuration, just after the "ssl on;" line, add the following to allow only TLS protocols:

ssl_protocols TLSv1.2 TLSv1.1 TLSv1;

Apache Web Server

Inside /etc/httpd/conf.d/ssl.conf or httpd.conf you can find the SSLProtocol directive. If this is not available, create it. Modify this to explicitly remove support for SSLv3:

SSLProtocol all -SSLv3 -SSLv2


HAProxy
To disable SSLv3 in an HAProxy load balancer, you will need to open the haproxy.cfg file.

sudo nano /etc/haproxy/haproxy.cfg
frontend name
    bind public_ip:443 ssl crt /path/to/certs no-sslv3

Postfix

In Postfix conf /etc/postfix/main.cf add.
smtpd_tls_mandatory_protocols=!SSLv2, !SSLv3


In Dovecot

sudo nano /etc/dovecot/conf.d/10-ssl.conf
ssl_protocols = !SSLv3 !SSLv2

Tomcat

Edit $TOMCAT_HOME/conf/server.xml.

Tomcat 5 and 6:

    <Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol"
               maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
               clientAuth="false" sslEnabledProtocols = "TLSv1,TLSv1.1,TLSv1.2" />
Tomcat >= 7

    <Connector port="8443" protocol="org.apache.coyote.http11.Http11Protocol"
               maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
               clientAuth="false" sslProtocols = "TLSv1,TLSv1.1,TLSv1.2" />







Openstack Juno - Part 3 - Compute service Nova

Creating the Nova database
create database nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'mar4nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'mar4nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'10.0.0.200' IDENTIFIED BY 'mar4nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'10.0.0.202' IDENTIFIED BY 'mar4nova';
flush privileges;

Configuring users in Keystone
source admin-openrc.sh
keystone user-create --name nova --pass mar4nova --email EMAIL_ADDRESS
keystone user-role-add --user nova --tenant service --role admin
keystone service-create --name nova --type compute --description "OpenStack Compute"
keystone endpoint-create --service-id $(keystone service-list | awk '/ compute / {print $2}') --publicurl http://controller:8774/v2/%\(tenant_id\)s --internalurl http://controller:8774/v2/%\(tenant_id\)s --adminurl http://controller:8774/v2/%\(tenant_id\)s --region regionOne

#On Controller

Installing packages 
yum -y install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient

Configuring Service 
openstack-config --set /etc/nova/nova.conf database connection mysql://nova:mar4nova@controller/nova

openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host controller
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_password guest

#On the controller: use the controller server's public IP (hostnames don't work). Configure the my_ip option to use the management interface IP address of the controller node.
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.1.15.142
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 10.1.15.142
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.1.15.142

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/nova/nova.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password mar4nova

openstack-config --set /etc/nova/nova.conf glance host controller

#Populate the database 

su -s /bin/sh -c "nova-manage db sync" nova

Database changed
MariaDB [nova]> show tables;
+--------------------------------------------+
| Tables_in_nova                             |
+--------------------------------------------+
| agent_builds                               |
| aggregate_hosts                            |
| aggregate_metadata                         |
| aggregates                                 |
| block_device_mapping                       |
| bw_usage_cache                             |
| cells                                      |
| certificates                               |
| compute_nodes                              |
| console_pools                              |
| consoles                                   |
| dns_domains                                |
| fixed_ips                                  |
| floating_ips                               |
| instance_actions                           |
| instance_actions_events                    |
| instance_extra                             |
| instance_faults                            |
| instance_group_member                      |
| instance_group_policy                      |
| instance_groups                            |
| instance_id_mappings                       |
| instance_info_caches                       |
| instance_metadata                          |
| instance_system_metadata                   |
| instance_type_extra_specs                  |
| instance_type_projects                     |
| instance_types                             |
| instances                                  |
| iscsi_targets                              |
| key_pairs                                  |
| migrate_version                            |
| migrations                                 |
| networks                                   |
| pci_devices                                |
| project_user_quotas                        |
| provider_fw_rules                          |
| quota_classes                              |
| quota_usages                               |
| quotas                                     |
| reservations                               |
| s3_images                                  |
| security_group_default_rules               |
| security_group_instance_association        |
| security_group_rules                       |
| security_groups                            |
| services                                   |
| shadow_agent_builds                        |
| shadow_aggregate_hosts                     |
| shadow_aggregate_metadata                  |
| shadow_aggregates                          |
| shadow_block_device_mapping                |
| shadow_bw_usage_cache                      |
| shadow_cells                               |
| shadow_certificates                        |
| shadow_compute_nodes                       |
| shadow_console_pools                       |
| shadow_consoles                            |
| shadow_dns_domains                         |
| shadow_fixed_ips                           |
| shadow_floating_ips                        |
| shadow_instance_actions                    |
| shadow_instance_actions_events             |
| shadow_instance_extra                      |
| shadow_instance_faults                     |
| shadow_instance_group_member               |
| shadow_instance_group_policy               |
| shadow_instance_groups                     |
| shadow_instance_id_mappings                |
| shadow_instance_info_caches                |
| shadow_instance_metadata                   |
| shadow_instance_system_metadata            |
| shadow_instance_type_extra_specs           |
| shadow_instance_type_projects              |
| shadow_instance_types                      |
| shadow_instances                           |
| shadow_iscsi_targets                       |
| shadow_key_pairs                           |
| shadow_migrate_version                     |
| shadow_migrations                          |
| shadow_networks                            |
| shadow_pci_devices                         |
| shadow_project_user_quotas                 |
| shadow_provider_fw_rules                   |
| shadow_quota_classes                       |
| shadow_quota_usages                        |
| shadow_quotas                              |
| shadow_reservations                        |
| shadow_s3_images                           |
| shadow_security_group_default_rules        |
| shadow_security_group_instance_association |
| shadow_security_group_rules                |
| shadow_security_groups                     |
| shadow_services                            |
| shadow_snapshot_id_mappings                |
| shadow_snapshots                           |
| shadow_task_log                            |
| shadow_virtual_interfaces                  |
| shadow_volume_id_mappings                  |
| shadow_volume_usage_cache                  |
| shadow_volumes                             |
| snapshot_id_mappings                       |
| snapshots                                  |
| task_log                                   |
| virtual_interfaces                         |
| volume_id_mappings                         |
| volume_usage_cache                         |
| volumes                                    |
+--------------------------------------------+
108 rows in set (0.00 sec)

MariaDB [nova]>

systemctl enable openstack-nova-api.service
systemctl enable openstack-nova-cert.service
systemctl enable openstack-nova-consoleauth.service
systemctl enable openstack-nova-scheduler.service
systemctl enable openstack-nova-conductor.service
systemctl enable openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service
systemctl start openstack-nova-cert.service
systemctl start openstack-nova-consoleauth.service
systemctl start openstack-nova-scheduler.service
systemctl start openstack-nova-conductor.service
systemctl start openstack-nova-novncproxy.service
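
If you prefer a single loop over the twelve commands above, something like this should be equivalent (same unit names as above):

for svc in api cert consoleauth scheduler conductor novncproxy; do
    systemctl enable openstack-nova-${svc}.service
    systemctl start openstack-nova-${svc}.service
done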



On the Compute Node

Installing Packages
yum install openstack-nova-compute -y

Configuring the service
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host controller
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_password guest

Note: these options go in the [keystone_authtoken] section, and openstack-config takes the value as a plain argument (no "="):

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/nova/nova.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password mar4nova
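
openstack-config also supports --get, so a quick sanity check that the values landed in the right section might look like this:

openstack-config --get /etc/nova/nova.conf keystone_authtoken admin_user
openstack-config --get /etc/nova/nova.conf keystone_authtoken auth_uri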


# Set my_ip and vncserver_proxyclient_address to the management-interface IP address of this compute node; a hostname does not work here.
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.1.15.144
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.1.15.144
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://controller:6080/vnc_auto.html

openstack-config --set /etc/nova/nova.conf glance host controller

Determine whether your compute node supports hardware acceleration for virtual machines:
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, your compute node supports hardware acceleration, which typically requires no additional configuration.
If this command returns a value of zero, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
Edit the [libvirt] section in the /etc/nova/nova.conf file as follows:
[libvirt]
...
virt_type = qemu

openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
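
Tying the check and the fallback together, a small sketch like this (same file and key as above) sets virt_type to qemu only when the CPU lacks the VT-x/AMD-V flags:

if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then
    # No hardware virtualization flags found: fall back to plain QEMU emulation
    openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
fi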


systemctl enable libvirtd.service
systemctl start libvirtd.service
systemctl enable openstack-nova-compute.service
systemctl start openstack-nova-compute.service
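
If openstack-nova-compute fails to come up, the first things to check are the unit status and the compute log (the RDO packages usually log under /var/log/nova/):

systemctl status openstack-nova-compute.service
tail -n 50 /var/log/nova/nova-compute.log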


#Verify operation

$ nova service-list
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-conductor   | controller | internal | enabled | up    | 2014-09-16T23:54:02.000000 | -               |
| 2  | nova-consoleauth | controller | internal | enabled | up    | 2014-09-16T23:54:04.000000 | -               |
| 3  | nova-scheduler   | controller | internal | enabled | up    | 2014-09-16T23:54:07.000000 | -               |
| 4  | nova-cert        | controller | internal | enabled | up    | 2014-09-16T23:54:00.000000 | -               |
| 5  | nova-compute     | compute1   | nova     | enabled | up    | 2014-09-16T23:54:06.000000 | -               |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+

[root@controller ~]# nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| e54cb5b2-4717-4139-8258-2a0366216b92 | cirros-0.3.3-x86_64 | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+
[root@controller ~]#

Wednesday, October 15, 2014

Openstack Juno - Part 2 - Image Service Glance

Create the database 
create database glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'mar4glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'mar4glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'10.0.0.200' IDENTIFIED BY 'mar4glance';
flush privileges;
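
The same statements can be run non-interactively with a heredoc, which is handy for scripted installs (mysql still prompts for the root password on the terminal even with stdin redirected):

mysql -u root -p <<'EOF'
create database glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'mar4glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'mar4glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'10.0.0.200' IDENTIFIED BY 'mar4glance';
flush privileges;
EOF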

Creating the Keystone Endpoints and Users
source /root/admin-openrc.sh
keystone user-create --name=glance --pass=mar4glance --email=glance@example.com
keystone user-role-add --user=glance --tenant=service --role=admin
keystone service-create --name=glance --type=image --description="OpenStack Image Service"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ image / {print $2}') --publicurl=http://controller:9292 --internalurl=http://controller:9292 --adminurl=http://controller:9292
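
To confirm the service and endpoint were registered, you can list them back with the same keystone client:

keystone service-list
keystone endpoint-list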

Install the packages
yum install openstack-glance python-glanceclient -y

Configuring the service 
openstack-config --set /etc/glance/glance-api.conf database connection mysql://glance:mar4glance@controller/glance
openstack-config --set /etc/glance/glance-registry.conf database connection mysql://glance:mar4glance@controller/glance

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password mar4glance
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone

openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password mar4glance
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

Populating the DB
su -s /bin/sh -c "glance-manage db_sync" glance

MariaDB [(none)]> use glance;
Database changed
MariaDB [glance]> show tables;
+----------------------------------+
| Tables_in_glance                 |
+----------------------------------+
| image_locations                  |
| image_members                    |
| image_properties                 |
| image_tags                       |
| images                           |
| metadef_namespace_resource_types |
| metadef_namespaces               |
| metadef_objects                  |
| metadef_properties               |
| metadef_resource_types           |
| migrate_version                  |
| task_info                        |
| tasks                            |
+----------------------------------+
13 rows in set (0.00 sec)
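
Assuming the glance@'%' grant created above, the same check can also be run non-interactively:

mysql -u glance -pmar4glance -h controller glance -e 'show tables;'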


systemctl enable openstack-glance-api.service
systemctl enable openstack-glance-registry.service
systemctl start openstack-glance-api.service
systemctl start openstack-glance-registry.service


Verifying Glance
mkdir /tmp/images
cd /tmp/images
wget http://cdn.download.cirros-cloud.net/0.3.3/cirros-0.3.3-x86_64-disk.img
source /root/admin-openrc.sh
glance image-create --name "cirros-0.3.3-x86_64" --file /tmp/images/cirros-0.3.3-x86_64-disk.img --disk-format qcow2 --container-format bare --is-public True --progress
glance image-list
rm -r /tmp/images