
Friday, October 31, 2014

Installing Swish Module for php

The Swish-e package does not come with the current repositories of CentOS or Red Hat, so we need to compile and install it before installing the Swish PHP module through pecl. Otherwise we may end up with an error while installing the Swish module with pecl.

Downloading and installing the Swish-e package:
wget http://swish-e.org/distribution/swish-e-2.4.7.tar.gz
tar zxvf swish-e-2.4.7.tar.gz
cd swish-e-2.4.7
./configure
make
make check
make install

cd ~

Installing swish php module using pecl
pecl install swish-beta
chmod 755 /usr/lib64/php/modules/swish.so
echo "extension=swish.so" >> /etc/php.ini
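After appending the extension line, it is worth confirming that PHP actually loads the module (a quick sanity check; the module name swish comes from the pecl package above):

```shell
# Restart the web server so the new extension is picked up, then
# confirm "swish" appears in the list of loaded modules
service httpd restart
php -m | grep -i swish
```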

Thursday, October 30, 2014

Installing PHP modules using pecl command.

Once you have installed PHP, you need to install the modules required to support the development process. We can use the pecl command to install them.

To install the pecl command:

yum install php-pear

Now, to install the needed modules, just use pecl:

pecl install <Module Name>

To install a beta version
pecl install <Module Name>-beta

To list all modules in pecl database

pecl list-all

To check whether the module is installed or not

php -m
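Putting these commands together, a typical session might look like this (the module name memcache is just an example; substitute whichever module you need):

```shell
# Install the pecl tool, install an example module, enable it, then verify
yum install -y php-pear
pecl install memcache
echo "extension=memcache.so" >> /etc/php.ini
php -m | grep -i memcache
```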

Wednesday, October 29, 2014

Installing PHP 5.6 in Centos6/7

Compiling PHP can be difficult at times, but we can simply install the latest version of PHP from the proper Remi repository.

Install Remi repository

CentOS and Red Hat (RHEL)
Remi and EPEL (Dependency) on CentOS 7 and Red Hat (RHEL) 7

64 bit : yum install -y http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-2.noarch.rpm
yum install -y http://rpms.famillecollet.com/enterprise/remi-release-7.rpm


Remi and Epel repo ( Dependency ) on CentOS 6 and Red Hat (RHEL) 6
64 bit  : yum install -y http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
32 bit  : yum install -y http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm

yum install -y http://rpms.famillecollet.com/enterprise/remi-release-6.rpm


Installing PHP5.6 from the remi and httpd from local repo
CentOS 7/6.5/5.10 and Red Hat (RHEL) 7/6.5/5.10
yum --enablerepo=remi,remi-php56 install httpd php php-common

Install PHP 5.6.0 modules

yum --enablerepo=remi,remi-php56 install php-pecl-apcu php-cli php-pear php-pdo php-mysqlnd php-pgsql php-pecl-mongo php-sqlite php-pecl-memcache php-pecl-memcached php-gd php-mbstring php-mcrypt php-xml

Start Apache HTTP server (httpd) and autostart Apache HTTP server (httpd) on boot
## CentOS/RHEL 7 ##
systemctl start httpd.service ## use restart after update


## CentOS / RHEL 6.5/5.10 ##
/etc/init.d/httpd start ## use restart after update
## OR ##
service httpd start ## use restart after update


##CentOS/RHEL 7 ##
systemctl enable httpd.service

## CentOS / RHEL 6.5/5.10 ##
chkconfig --levels 235 httpd on


Create test PHP page to check that Apache, PHP and PHP modules are working
Add following content to /var/www/html/test.php file.

<?php

    phpinfo();
?>

Now check the PHP page at http://<<SERVER_IP>>/test.php
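Instead of a browser, you can also verify the page from the server itself (assuming httpd is already running locally):

```shell
# Fetch the test page and look for the PHP version banner
curl -s http://localhost/test.php | grep -i "PHP Version"
```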

Make sure that the EPEL and Remi repos are disabled afterwards, to avoid unintended package upgrades in the future.
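One way to disable the repos, assuming the default file names under /etc/yum.repos.d/, is to flip their enabled flags (a sketch; verify the actual file names on your system):

```shell
# Set enabled=0 so the repos are only used when explicitly
# requested with --enablerepo
sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/remi.repo
sed -i 's/enabled=1/enabled=0/' /etc/yum.repos.d/epel.repo
```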

Modules available in the latest PHP

bcmath
bz2
calendar
com_dotnet
ctype
curl
date
dba
dom
enchant
ereg
exif
fileinfo
filter
ftp
gd
gettext
gmp
hash
iconv
imap
interbase
intl
json
ldap
libxml
mbstring
mcrypt
mssql
mysql
mysqli
mysqlnd
oci8
odbc
opcache
openssl
pcntl
pcre
pdo
pdo_dblib
pdo_firebird
pdo_mysql
pdo_oci
pdo_odbc
pdo_pgsql
pdo_sqlite
pgsql
phar
posix
pspell
readline
recode
reflection
session
shmop
simplexml
skeleton
snmp
soap
sockets
spl
sqlite3
standard
sybase_ct
sysvmsg
sysvsem
sysvshm
tidy
tokenizer
wddx
xml
xmlreader
xmlrpc
xmlwriter
xsl
zip
zlib

Monday, October 27, 2014

Openstack Juno - Neutron HA using VRRP (Keepalived)


First configure two Neutron servers; here they are named network and network2.
http://www.adminz.in/2014/10/openstack-juno-part-5-neutron.html

Then install Keepalived on both Neutron servers.

#Add the following entries on both neutron servers
#in /etc/neutron/neutron.conf
l3_ha = True
#And the HA Scheduler has to be used :
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler
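With l3_ha = True, newly created routers are scheduled as HA pairs automatically; you can also request or inspect the HA flag explicitly (the router name demo-router is just an example):

```shell
# Create a router with HA explicitly enabled, then confirm the
# ha attribute in its details
neutron router-create --ha True demo-router
neutron router-show demo-router
```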


On the controller server, update the database:
neutron-db-manage --config-file=/etc/neutron/neutron.conf  --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini upgrade head

mkdir /etc/neutron/rootwrap.d
cp /usr/share/neutron/rootwrap/l3.filters /etc/neutron/rootwrap.d/

Now restart the OpenStack services on all controller and Neutron nodes.



On the controller server, create a new set of network settings

source admin-openrc.sh
neutron net-create ext-net --shared --router:external True --provider:physical_network external --provider:network_type flat
neutron subnet-create ext-net --name ext-subnet --allocation-pool start=10.1.0.101,end=10.1.0.200 --disable-dhcp --gateway 10.1.0.42 10.1.0.0/24


To create the tenant network
neutron net-create cli-net
neutron subnet-create cli-net --name cli-subnet --gateway 192.168.1.1 192.168.1.0/24
neutron router-create cli-router
neutron router-interface-add cli-router cli-subnet
neutron router-gateway-set cli-router ext-net


Now, if we check both Neutron nodes, we can see the routers:

[root@network ~]# ip netns
qrouter-26aed9ea-b9d5-4427-a3e4-9e75be3e1bfa
[root@network ~]#

[root@network2 ~]# ip netns
qrouter-26aed9ea-b9d5-4427-a3e4-9e75be3e1bfa
[root@network2 ~]#


[root@network ~]#  ip netns exec qrouter-26aed9ea-b9d5-4427-a3e4-9e75be3e1bfa ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
10: ha-224b2c85-81: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:42:4d:52 brd ff:ff:ff:ff:ff:ff
    inet 169.254.192.8/18 brd 169.254.255.255 scope global ha-224b2c85-81
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe42:4d52/64 scope link
       valid_lft forever preferred_lft forever
11: qr-842e3e41-3a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:13:bc:63 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 scope global qr-842e3e41-3a
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe13:bc63/64 scope link
       valid_lft forever preferred_lft forever
12: qg-04d4c06e-49: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:b7:19:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.1.0.101/24 scope global qg-04d4c06e-49
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:feb7:19b8/64 scope link
       valid_lft forever preferred_lft forever
[root@network ~]#
[root@network ~]#



[root@network2 ~]# ip netns exec qrouter-26aed9ea-b9d5-4427-a3e4-9e75be3e1bfa ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
16: ha-37517361-ec: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:6f:a0:11 brd ff:ff:ff:ff:ff:ff
    inet 169.254.192.7/18 brd 169.254.255.255 scope global ha-37517361-ec
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe6f:a011/64 scope link
       valid_lft forever preferred_lft forever
17: qr-842e3e41-3a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:13:bc:63 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 scope global qr-842e3e41-3a
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe13:bc63/64 scope link
       valid_lft forever preferred_lft forever
18: qg-04d4c06e-49: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:b7:19:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.1.0.101/24 scope global qg-04d4c06e-49
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:feb7:19b8/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
[root@network2 ~]#


In the above output you can see that the devices qg-04d4c06e-49 and qr-842e3e41-3a have been created on both servers. Note the `tentative dadfailed` flag on the standby node's qg interface: duplicate address detection fails there because the active router already holds the same address, which is expected with VRRP.

Friday, October 24, 2014

Removing Blank Lines from a File

In sed
Type the following sed command to delete all empty lines:

Display without blank lines
sed '/^$/d' input.txt

Remove all blank lines from the file
sed -i '/^$/d' input.txt
cat input.txt

In awk 

Type the following awk command to delete all empty lines:

Display without blank lines
awk NF input.txt

Remove all blank lines from the file
awk 'NF' input.txt > output.txt
cat output.txt


In perl
Type the following perl one-liner to delete all empty lines, saving the original file as input.txt.backup:
Remove all blank lines from the file
perl -i.backup -n -e "print if /\S/" input.txt


In the vi editor
:g/^$/d
:g executes a command on every line that matches a regex; here the regex matches a blank line, and the command is
:d (delete)


In tr
tr -s '\n' < abc.txt

In grep
grep -v "^$" abc.txt
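The variants above can be sanity-checked against a small sample file; the sed, awk, and grep forms all produce the same result (tr -s only squeezes runs of newlines down to one, so a leading blank line would survive it):

```shell
# Build a sample file containing blank lines
printf 'one\n\ntwo\n\n\nthree\n' > input.txt

# Each command prints the file with the blank lines removed
sed '/^$/d' input.txt
awk 'NF' input.txt
grep -v '^$' input.txt
```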



Wednesday, October 22, 2014

Openstack Juno Part 6 - Neutron Configuration on Compute Service

Installing the packages

yum install openstack-neutron-ml2 openstack-neutron-openvswitch ipset -y


Configure the Service 
#Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password mar4neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_password guest

openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True

#Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network interface on your compute node. This guide uses 10.0.1.31 for the IP address of the instance tunnels network interface on the first compute node.
#Dedicated Ip for Tunneling in Compute Node

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 10.0.0.214
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True


systemctl enable openvswitch.service
systemctl start openvswitch.service


Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_strategy keystone
openstack-config --set /etc/nova/nova.conf neutron admin_tenant_name service
openstack-config --set /etc/nova/nova.conf neutron admin_username neutron
openstack-config --set /etc/nova/nova.conf neutron admin_password mar4neutron
openstack-config --set /etc/nova/nova.conf neutron admin_auth_url http://controller:35357/v2.0

openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

#Due to a packaging bug, the Open vSwitch agent initialization script explicitly looks for the Open vSwitch plug-in configuration file rather than a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file. Run the following commands to resolve this issue:

cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service
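Since a systemd unit file was just edited, systemd should also be told to reload its configuration before the agent is (re)started; this is a small but easy-to-miss step:

```shell
# Re-read unit files so the edited service definition takes effect
systemctl daemon-reload
```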


Starting the Services
systemctl enable neutron-openvswitch-agent.service
systemctl restart neutron-openvswitch-agent.service
systemctl restart openstack-nova-compute.service

Tuesday, October 21, 2014

Openstack Juno Part 5 - Neutron configuring Network Node

Installing the Packages

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch ipset  -y

Configuring  the Service
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password mar4neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_password guest


openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True


#Add verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.
#Comment out any lines in the [service_providers] section.

openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT use_namespaces True

#We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/l3_agent.ini to assist with troubleshooting.


openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT use_namespaces True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf

echo "dhcp-option-force=26,1454" >> /etc/neutron/dnsmasq-neutron.conf
chown neutron:neutron /etc/neutron/dnsmasq-neutron.conf

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url http://controller:5000/v2.0
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_region regionOne
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_tenant_name service
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_user neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_password mar4neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret mar4meta

#We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/metadata_agent.ini to assist with troubleshooting.

#Perform the next two steps on the controller node.
#On the controller node, configure Compute to use the metadata service:
#Replace METADATA_SECRET with the secret you chose for the metadata proxy.

openstack-config --set /etc/nova/nova.conf DEFAULT service_neutron_metadata_proxy true
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_metadata_proxy_shared_secret mar4meta

On the controller node, restart the Compute API service:
systemctl restart openstack-nova-api.service

# To configure the Modular Layer 2 (ML2) plug-in

# Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network interface on your network node. This guide uses 10.0.1.21 for the IP address of the instance tunnels network interface on the network node.
#Dedicated IP for tunneling in network node
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks external

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 10.0.0.212
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs bridge_mappings external:br-ex


openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True


systemctl enable openvswitch.service
systemctl start openvswitch.service

#Add the external bridge:
ovs-vsctl add-br br-ex
#Add a port to the external bridge that connects to the physical external network interface:
#Replace INTERFACE_NAME with the actual interface name. For example, eth2 or ens256.
ovs-vsctl add-port br-ex eth1
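You can confirm the bridge and port were created with ovs-vsctl (the interface name eth1 here follows the command above; yours may differ):

```shell
# Show the OVS configuration; br-ex should list eth1 as a port
ovs-vsctl show
ovs-vsctl list-ports br-ex
```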


#Depending on your network interface driver, you may need to disable Generic Receive Offload (GRO) to achieve suitable throughput between your instances and the external network.
#To temporarily disable GRO on the external network interface while testing your environment:
# ethtool -K INTERFACE_NAME gro off



ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service


Starting the services

systemctl enable neutron-openvswitch-agent.service
systemctl enable neutron-l3-agent.service
systemctl enable neutron-dhcp-agent.service
systemctl enable neutron-metadata-agent.service
systemctl enable neutron-ovs-cleanup.service
systemctl start neutron-openvswitch-agent.service
systemctl start neutron-l3-agent.service
systemctl start neutron-dhcp-agent.service
systemctl start neutron-metadata-agent.service