
Monday, October 27, 2014

Openstack Juno - Neutron HA using VRRP (Keepalived)


First, configure two Neutron network nodes; here they are named network and network2. The base Neutron setup is covered in this post:
http://www.adminz.in/2014/10/openstack-juno-part-5-neutron.html

Then install Keepalived on both network nodes.

#Add the following entries on both network nodes
#in /etc/neutron/neutron.conf
l3_ha = True
#And the HA schedulers have to be used:
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler
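#Optionally, you can also control how many L3 agents host each HA
#router. A sketch, assuming the Juno option names (the defaults are
#max 3, min 2); with only two network nodes, both host every router:
max_l3_agents_per_router = 2
min_l3_agents_per_router = 2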


On the controller node, update the database:
neutron-db-manage --config-file=/etc/neutron/neutron.conf  --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini upgrade head

mkdir /etc/neutron/rootwrap.d
cp /usr/share/neutron/rootwrap/l3.filters /etc/neutron/rootwrap.d/

Now restart the OpenStack services on the controller and both network nodes.



On the controller node, create a new set of networks:

source admin-openrc.sh
neutron net-create ext-net --shared --router:external True --provider:physical_network external --provider:network_type flat
neutron subnet-create ext-net --name ext-subnet --allocation-pool start=10.1.0.101,end=10.1.0.200 --disable-dhcp --gateway 10.1.0.42 10.1.0.0/24


To create the tenant network
neutron net-create cli-net
neutron subnet-create cli-net --name cli-subnet --gateway 192.168.1.1 192.168.1.0/24
neutron router-create cli-router
neutron router-interface-add cli-router cli-subnet
neutron router-gateway-set cli-router ext-net
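Since l3_ha = True, the new router should come up as an HA router. As a quick check (run as admin; the ha field comes from the Juno HA extension, and the IDs will differ in your setup), the router details should show ha set to True:

neutron router-show cli-router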


Now, if we check both network nodes, we can see the router namespace on each:

[root@network ~]# ip netns
qrouter-26aed9ea-b9d5-4427-a3e4-9e75be3e1bfa
[root@network ~]#

[root@network2 ~]# ip netns
qrouter-26aed9ea-b9d5-4427-a3e4-9e75be3e1bfa
[root@network2 ~]#
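We can also confirm from the controller that the router is scheduled to both L3 agents (assuming the admin credentials are sourced):

neutron l3-agent-list-hosting-router cli-router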


[root@network ~]#  ip netns exec qrouter-26aed9ea-b9d5-4427-a3e4-9e75be3e1bfa ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
10: ha-224b2c85-81: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:42:4d:52 brd ff:ff:ff:ff:ff:ff
    inet 169.254.192.8/18 brd 169.254.255.255 scope global ha-224b2c85-81
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe42:4d52/64 scope link
       valid_lft forever preferred_lft forever
11: qr-842e3e41-3a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:13:bc:63 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 scope global qr-842e3e41-3a
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe13:bc63/64 scope link
       valid_lft forever preferred_lft forever
12: qg-04d4c06e-49: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:b7:19:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.1.0.101/24 scope global qg-04d4c06e-49
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:feb7:19b8/64 scope link
       valid_lft forever preferred_lft forever
[root@network ~]#
[root@network ~]#



[root@network2 ~]# ip netns exec qrouter-26aed9ea-b9d5-4427-a3e4-9e75be3e1bfa ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
16: ha-37517361-ec: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:6f:a0:11 brd ff:ff:ff:ff:ff:ff
    inet 169.254.192.7/18 brd 169.254.255.255 scope global ha-37517361-ec
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe6f:a011/64 scope link
       valid_lft forever preferred_lft forever
17: qr-842e3e41-3a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:13:bc:63 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 scope global qr-842e3e41-3a
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe13:bc63/64 scope link
       valid_lft forever preferred_lft forever
18: qg-04d4c06e-49: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:b7:19:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.1.0.101/24 scope global qg-04d4c06e-49
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:feb7:19b8/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
[root@network2 ~]#


In the output above, you can see that the qg-04d4c06e-49 and qr-842e3e41-3a interfaces have been created on both servers; Keepalived (VRRP) running over the ha- interface decides which node actively forwards the traffic.
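To dig into the VRRP side, the keepalived configuration generated by the L3 agent can be inspected on either node (the path below is the Juno default ha_confs location), and a failover can be tested by rebooting the active node while pinging the router's external address:

cat /var/lib/neutron/ha_confs/26aed9ea-b9d5-4427-a3e4-9e75be3e1bfa/keepalived.conf
ping 10.1.0.101    # from a host on the external network, while the active node reboots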

Friday, October 24, 2014

Removing Blank Lines from a File

In sed

Type the following sed command to delete all blank lines:

Display without blank lines:
sed '/^$/d' input.txt

Remove all blank lines from the file in place:
sed -i '/^$/d' input.txt
cat input.txt
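Note that ^$ only matches completely empty lines. If lines containing only spaces or tabs should also count as blank (your call), the character-class version handles them:

sed -i '/^[[:space:]]*$/d' input.txt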

In awk

Type the following awk command to delete all blank lines:

Display without blank lines:
awk NF input.txt

Remove all blank lines and write the result to a new file:
awk NF input.txt > output.txt
cat output.txt


In perl
Type the following perl one-liner to delete all blank lines, saving the original file as input.txt.backup:
perl -i.backup -n -e "print if /\S/" input.txt


In vi editor
:g/^$/d
:g executes a command on every line that matches a regex; here the regex ^$ matches blank lines and the command d deletes them.


In tr
tr -s '\n' < abc.txt
The -s flag squeezes each run of consecutive newlines into a single newline, which removes the blank lines.

In grep
grep -v "^$" abc.txt



Wednesday, October 22, 2014

Openstack Juno Part 6 - Neutron Configuration on Compute Service

Installing the packages

yum install openstack-neutron-ml2 openstack-neutron-openvswitch ipset -y


Configure the service
#Replace mar4neutron with the password you chose for the neutron user in the Identity service.
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password mar4neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_password guest

openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True

#Replace the local_ip value below with the IP address of the instance tunnels network interface on your compute node. This setup uses 10.0.0.214 for the first compute node.
#Dedicated IP for tunneling on the compute node

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 10.0.0.214
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
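To sanity-check the result, dump the non-comment lines of the plug-in configuration; given the commands above it should look roughly like this:

egrep -v '^#|^$' /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch
[ml2_type_gre]
tunnel_id_ranges = 1:1000
[ovs]
local_ip = 10.0.0.214
tunnel_type = gre
enable_tunneling = True
[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
enable_ipset = True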


systemctl enable openvswitch.service
systemctl start openvswitch.service


Replace mar4neutron with the password you chose for the neutron user in the Identity service.

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_strategy keystone
openstack-config --set /etc/nova/nova.conf neutron admin_tenant_name service
openstack-config --set /etc/nova/nova.conf neutron admin_username neutron
openstack-config --set /etc/nova/nova.conf neutron admin_password mar4neutron
openstack-config --set /etc/nova/nova.conf neutron admin_auth_url http://controller:35357/v2.0

openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

#Due to a packaging bug, the Open vSwitch agent initialization script explicitly looks for the Open vSwitch plug-in configuration file rather than the symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file. Run the following commands to resolve this issue:

cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service
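After editing the unit file, reload systemd so the change takes effect (standard systemd step):

systemctl daemon-reload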


Starting the Services
systemctl enable neutron-openvswitch-agent.service
systemctl restart neutron-openvswitch-agent.service
systemctl restart openstack-nova-compute.service
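Back on the controller, you can verify that the compute node's agent registered (hostnames and IDs will differ; the Open vSwitch agent should report alive as :-)):

source admin-openrc.sh
neutron agent-list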

Tuesday, October 21, 2014

Openstack Juno Part 5 - Neutron configuring Network Node

Installing the Packages

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch ipset  -y

Configuring the Service
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password mar4neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_password guest


openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True


#We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.
#Comment out any lines in the [service_providers] section.

openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT use_namespaces True

#We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/l3_agent.ini to assist with troubleshooting.


openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT use_namespaces True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf

echo "dhcp-option-force=26,1454" >> /etc/neutron/dnsmasq-neutron.conf
chown neutron:neutron /etc/neutron/dnsmasq-neutron.conf

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url http://controller:5000/v2.0
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_region regionOne
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_tenant_name service
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_user neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_password mar4neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret mar4meta

#We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/metadata_agent.ini to assist with troubleshooting.

#Perform the next two steps on the controller node.
#On the controller node, configure Compute to use the metadata service:
#Replace mar4meta with the secret you chose for the metadata proxy.

openstack-config --set /etc/nova/nova.conf DEFAULT service_neutron_metadata_proxy true
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_metadata_proxy_shared_secret mar4meta

On the controller node, restart the Compute API service:
systemctl restart openstack-nova-api.service

# To configure the Modular Layer 2 (ML2) plug-in

#Replace the local_ip value below with the IP address of the instance tunnels network interface on your network node. This setup uses 10.0.0.212.
#Dedicated IP for tunneling on the network node
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks external

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 10.0.0.212
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs bridge_mappings external:br-ex


openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True


systemctl enable openvswitch.service
systemctl start openvswitch.service

#Add the external bridge:
ovs-vsctl add-br br-ex
#Add a port to the external bridge that connects to the physical external network interface:
#Replace eth1 below with your actual interface name, for example eth2 or ens256.
ovs-vsctl add-port br-ex eth1
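A quick look at the resulting bridge layout (the br-ex bridge with the eth1 port should be listed):

ovs-vsctl show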


#Depending on your network interface driver, you may need to disable Generic Receive Offload (GRO) to achieve suitable throughput between your instances and the external network.
#To temporarily disable GRO on the external network interface while testing your environment:
# ethtool -K INTERFACE_NAME gro off



ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service
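Reload systemd so it picks up the modified unit file:

systemctl daemon-reload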


Starting the services

systemctl enable neutron-openvswitch-agent.service
systemctl enable neutron-l3-agent.service
systemctl enable neutron-dhcp-agent.service
systemctl enable neutron-metadata-agent.service
systemctl enable neutron-ovs-cleanup.service
systemctl start neutron-openvswitch-agent.service
systemctl start neutron-l3-agent.service
systemctl start neutron-dhcp-agent.service
systemctl start neutron-metadata-agent.service
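A simple loop confirms all four agents came up (plain shell; service names as enabled above):

for svc in neutron-openvswitch-agent neutron-l3-agent neutron-dhcp-agent neutron-metadata-agent; do
    echo -n "$svc: "; systemctl is-active $svc
done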

Monday, October 20, 2014

Openstack Juno + Docker error "Docker daemon is not running or is not reachable"

I was getting the following error while integrating Docker with OpenStack Juno.

"2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     _('Docker daemon is not running or is not reachable'
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup NovaException: Docker daemon is not running or is not reachable (check the rights on /var/run/docker.sock)"

I tried changing the permissions on docker.sock, but that didn't help. When I upgraded Docker to version 1.2, the issue was fixed. The Docker version that ships with CentOS is a little old; we can get RPMs of a newer Docker for CentOS 7 from the CentOS Community Build Service (cbs.centos.org).

Download the following RPMs:

wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-1.2.0-4.el7.centos.x86_64.rpm
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-devel-1.2.0-4.el7.centos.x86_64.rpm
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-pkg-devel-1.2.0-4.el7.centos.x86_64.rpm

Install the RPMs

From the directory where you downloaded them:
yum install docker-*.rpm
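Then enable and start the daemon, and confirm the new version:

systemctl enable docker
systemctl start docker
docker version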


Error
====
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 125, in wait
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     x.wait()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 47, in wait
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     return self.thread.wait()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 173, in wait
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     return self._exit_event.wait()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     return hubs.get_hub().switch()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 293, in switch
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     return self.greenlet.switch()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 212, in main
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     result = function(*args, **kwargs)
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 492, in run_service
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     service.start()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/service.py", line 164, in start
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     self.manager.init_host()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1125, in init_host
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     self.driver.init_host(host=self.host)
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/etc/nova/src/novadocker/novadocker/virt/docker/driver.py", line 82, in init_host
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     _('Docker daemon is not running or is not reachable'
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup NovaException: Docker daemon is not running or is not reachable (check the rights on /var/run/docker.sock)
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup
2014-10-20 14:24:22.876 2995 INFO oslo.messaging._drivers.impl_rabbit [req-aadcbda1-ccd1-4b49-8dac-43ce49afa0fa ] Connecting to AMQP server on controller:5672
2014-10-20 14:24:22.901 2995 INFO oslo.messaging._drivers.impl_rabbit [req-aadcbda1-ccd1-4b49-8dac-43ce49afa0fa ] Connected to AMQP server on controller:5672
2014-10-20 14:24:22.906 2995 INFO oslo.messaging._drivers.impl_rabbit [req-aadcbda1-ccd1-4b49-8dac-43ce49afa0fa ] Connecting to AMQP server on controller:5672
2014-10-20 14:24:22.919 2995 INFO oslo.messaging._drivers.impl_rabbit [req-aadcbda1-ccd1-4b49-8dac-43ce49afa0fa ] Connected to AMQP server on controller:5672
2014-10-20 14:24:22.954 2995 ERROR nova.openstack.common.threadgroup [-] Docker daemon is not running or is not reachable (check the rights on /var/run/docker.sock)
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 125, in wait
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     x.wait()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 47, in wait
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     return self.thread.wait()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 173, in wait
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     return self._exit_event.wait()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     return hubs.get_hub().switch()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 293, in switch
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     return self.greenlet.switch()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 212, in main
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     result = function(*args, **kwargs)
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 492, in run_service
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     service.start()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/service.py", line 164, in start
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     self.manager.init_host()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1125, in init_host
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     self.driver.init_host(host=self.host)
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/etc/nova/src/novadocker/novadocker/virt/docker/driver.py", line 82, in init_host
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     _('Docker daemon is not running or is not reachable'
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup NovaException: Docker daemon is not running or is not reachable (check the rights on /var/run/docker.sock)
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup

Openstack Juno Part 4 - Neutron on the Controller

Create the MySQL Database

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'mar4neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'mar4neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'10.0.0.211' IDENTIFIED BY 'mar4neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'10.0.0.212' IDENTIFIED BY 'mar4neutron';
FLUSH PRIVILEGES;
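The grants can be double-checked from the same MySQL prompt:

SHOW GRANTS FOR 'neutron'@'localhost';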

Create the Keystone user, service, and endpoint
source /root/admin-openrc.sh
keystone user-create --name neutron --pass mar4neutron
keystone user-role-add --user neutron --tenant service --role admin
keystone service-create --name neutron --type network --description "OpenStack Networking"
keystone endpoint-create --service-id $(keystone service-list | awk '/ network / {print $2}') --publicurl http://controller:9696 --adminurl http://controller:9696 --internalurl http://controller:9696 --region regionOne
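Confirm the registration (the IDs in the output will differ):

keystone service-list
keystone endpoint-list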

Installing the packages 
yum install openstack-neutron openstack-neutron-ml2 python-neutronclient which -y

Configuring the Packages
openstack-config --set /etc/neutron/neutron.conf database connection mysql://neutron:mar4neutron@controller/neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password mar4neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_password guest

openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_url http://controller:8774/v2
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_auth_url http://controller:35357/v2.0
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_username nova
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_tenant_id $(keystone tenant-list | awk '/ service / { print $2 }')
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_password mar4nova
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_region_name regionOne

openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True


openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True


openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_strategy keystone
openstack-config --set /etc/nova/nova.conf neutron admin_tenant_name service
openstack-config --set /etc/nova/nova.conf neutron admin_username neutron
openstack-config --set /etc/nova/nova.conf neutron admin_password mar4neutron
openstack-config --set /etc/nova/nova.conf neutron admin_auth_url http://controller:35357/v2.0



ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Populating the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron

Starting the Services.
systemctl restart openstack-nova-api.service
systemctl restart openstack-nova-scheduler.service
systemctl restart openstack-nova-conductor.service
systemctl enable neutron-server.service
systemctl start neutron-server.service
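To verify that neutron-server is answering API requests, list the loaded extensions (requires the admin credentials; expect a long table):

source /root/admin-openrc.sh
neutron ext-list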

Checking the database
MariaDB [neutron]> show tables;
+-------------------------------------+
| Tables_in_neutron                   |
+-------------------------------------+
| agents                              |
| alembic_version                     |
| allowedaddresspairs                 |
| arista_provisioned_nets             |
| arista_provisioned_tenants          |
| arista_provisioned_vms              |
| brocadenetworks                     |
| brocadeports                        |
| cisco_credentials                   |
| cisco_csr_identifier_map            |
| cisco_hosting_devices               |
| cisco_ml2_apic_contracts            |
| cisco_ml2_apic_host_links           |
| cisco_ml2_apic_names                |
| cisco_ml2_nexusport_bindings        |
| cisco_n1kv_multi_segments           |
| cisco_n1kv_network_bindings         |
| cisco_n1kv_port_bindings            |
| cisco_n1kv_profile_bindings         |
| cisco_n1kv_trunk_segments           |
| cisco_n1kv_vlan_allocations         |
| cisco_n1kv_vmnetworks               |
| cisco_n1kv_vxlan_allocations        |
| cisco_network_profiles              |
| cisco_policy_profiles               |
| cisco_port_mappings                 |
| cisco_provider_networks             |
| cisco_qos_policies                  |
| cisco_router_mappings               |
| consistencyhashes                   |
| csnat_l3_agent_bindings             |
| dnsnameservers                      |
| dvr_host_macs                       |
| embrane_pool_port                   |
| externalnetworks                    |
| extradhcpopts                       |
| firewall_policies                   |
| firewall_rules                      |
| firewalls                           |
| floatingips                         |
| ha_router_agent_port_bindings       |
| ha_router_networks                  |
| ha_router_vrid_allocations          |
| healthmonitors                      |
| hyperv_network_bindings             |
| hyperv_vlan_allocations             |
| ikepolicies                         |
| ipallocationpools                   |
| ipallocations                       |
| ipavailabilityranges                |
| ipsec_site_connections              |
| ipsecpeercidrs                      |
| ipsecpolicies                       |
| lsn                                 |
| lsn_port                            |
| maclearningstates                   |
| members                             |
| meteringlabelrules                  |
| meteringlabels                      |
| ml2_brocadenetworks                 |
| ml2_brocadeports                    |
| ml2_dvr_port_bindings               |
| ml2_flat_allocations                |
| ml2_gre_allocations                 |
| ml2_gre_endpoints                   |
| ml2_network_segments                |
| ml2_port_bindings                   |
| ml2_vlan_allocations                |
| ml2_vxlan_allocations               |
| ml2_vxlan_endpoints                 |
| mlnx_network_bindings               |
| multi_provider_networks             |
| network_bindings                    |
| network_states                      |
| networkconnections                  |
| networkdhcpagentbindings            |
| networkflavors                      |
| networkgatewaydevicereferences      |
| networkgatewaydevices               |
| networkgateways                     |
| networkqueuemappings                |
| networks                            |
| networksecuritybindings             |
| neutron_nsx_network_mappings        |
| neutron_nsx_port_mappings           |
| neutron_nsx_router_mappings         |
| neutron_nsx_security_group_mappings |
| nexthops                            |
| nuage_net_partition_router_mapping  |
| nuage_net_partitions                |
| nuage_provider_net_bindings         |
| nuage_subnet_l2dom_mapping          |
| ofcfiltermappings                   |
| ofcnetworkmappings                  |
| ofcportmappings                     |
| ofcroutermappings                   |
| ofctenantmappings                   |
| ovs_network_bindings                |
| ovs_tunnel_allocations              |
| ovs_tunnel_endpoints                |
| ovs_vlan_allocations                |
| packetfilters                       |
| poolloadbalanceragentbindings       |
| poolmonitorassociations             |
| pools                               |
| poolstatisticss                     |
| port_profile                        |
| portbindingports                    |
| portinfos                           |
| portqueuemappings                   |
| ports                               |
| portsecuritybindings                |
| providerresourceassociations        |
| qosqueues                           |
| quotas                              |
| router_extra_attributes             |
| routerflavors                       |
| routerl3agentbindings               |
| routerports                         |
| routerproviders                     |
| routerroutes                        |
| routerrules                         |
| routers                             |
| routerservicetypebindings           |
| securitygroupportbindings           |
| securitygrouprules                  |
| securitygroups                      |
| segmentation_id_allocation          |
| servicerouterbindings               |
| sessionpersistences                 |
| subnetroutes                        |
| subnets                             |
| tunnelkeylasts                      |
| tunnelkeys                          |
| tz_network_bindings                 |
| vcns_edge_monitor_bindings          |
| vcns_edge_pool_bindings             |
| vcns_edge_vip_bindings              |
| vcns_firewall_rule_bindings         |
| vcns_router_bindings                |
| vips                                |
| vpnservices                         |
+-------------------------------------+
142 rows in set (0.00 sec)

Sunday, October 19, 2014

Failed to issue method call: Unit iptables.service failed to load In Centos7

In RHEL 7 / CentOS 7, firewalld was introduced to manage iptables. IMHO, firewalld is more suited for workstations than for server environments.

It is possible to go back to a more classic iptables setup. First, stop and mask the firewalld service:

systemctl stop firewalld
systemctl mask firewalld

Then, install the iptables-services package:

yum install iptables-services
Enable the service at boot-time:

systemctl enable iptables

Managing the service:

systemctl [stop|start|restart] iptables
systemctl doesn't handle the save action the way the old service command did; call the init script directly:

/usr/libexec/iptables/iptables.init save
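For example, to make a rule survive reboots (hypothetical rule; iptables-services writes the saved rules to /etc/sysconfig/iptables):

iptables -I INPUT -p tcp --dport 2222 -j ACCEPT
/usr/libexec/iptables/iptables.init save
grep 2222 /etc/sysconfig/iptables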