
Thursday, September 4, 2014

Openstack Icehouse Install Part 7 : Cinder Block Storage Service

Install Cinder- Block Storage Service

On Controller Node
Install the appropriate packages

yum install openstack-cinder -y

Configure Block Storage to use your database

openstack-config --set /etc/cinder/cinder.conf database connection mysql://cinder:cinder4admin@controller/cinder

Creating Database
On Mysql Server

mysql -u root -p

CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder4admin';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'10.1.15.30' IDENTIFIED BY 'cinder4admin';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder4admin';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'10.1.15.31' IDENTIFIED BY 'cinder4admin';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'10.1.15.35' IDENTIFIED BY 'cinder4admin';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'10.1.15.36' IDENTIFIED BY 'cinder4admin';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'10.1.15.32' IDENTIFIED BY 'cinder4admin';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'10.1.15.42' IDENTIFIED BY 'cinder4admin';
exit;

Create the database tables

su -s /bin/sh -c "cinder-manage db sync" cinder

Create a cinder user.

keystone user-create --name=cinder --pass=cinder4admin --email=cinder@example.com
keystone user-role-add --user=cinder --tenant=service --role=admin

Edit the /etc/cinder/cinder.conf configuration file:

openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host controller
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password cinder4admin
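
For reference, the commands above should leave the authentication sections of /etc/cinder/cinder.conf looking roughly like this (values as chosen in this guide):

[DEFAULT]
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000
auth_host = controller
auth_protocol = http
auth_port = 35357
admin_user = cinder
admin_tenant_name = service
admin_password = cinder4admin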

Configure Block Storage to use the Qpid message broker:

openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend cinder.openstack.common.rpc.impl_qpid
openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_hostname 10.1.15.40

Register the Block Storage service with the Identity service so that other OpenStack services can locate it:

keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ volume / {print $2}') --publicurl=http://controller:8776/v1/%\(tenant_id\)s --internalurl=http://controller:8776/v1/%\(tenant_id\)s --adminurl=http://controller:8776/v1/%\(tenant_id\)s

Register a service and endpoint for version 2 of the Block Storage service API:

keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ volumev2 / {print $2}') --publicurl=http://controller:8776/v2/%\(tenant_id\)s --internalurl=http://controller:8776/v2/%\(tenant_id\)s --adminurl=http://controller:8776/v2/%\(tenant_id\)s
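
To double-check what was registered, the Identity service can list the services and endpoints; the volume and volumev2 entries created above should appear:

keystone service-list
keystone endpoint-list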

Start and configure the Block Storage services to start when the system boots:

service openstack-cinder-api start
service openstack-cinder-scheduler start
chkconfig openstack-cinder-api on
chkconfig openstack-cinder-scheduler on

On Cinder Service Node

Setting Up the NFS Share

Installing NFS packages
yum install nfs-utils nfs-utils-lib

Make and configure the partition
mkfs.ext4 /dev/mapper/vg_cloud2-LogVol03
mkdir /home/cinder_nfs
mount /dev/mapper/vg_cloud2-LogVol03 /home/cinder_nfs/
Add an entry in /etc/fstab
/dev/mapper/vg_cloud2-LogVol03 /home/cinder_nfs ext4 rw 0 0

Add Share to NFS
vi /etc/exports
/home/cinder_nfs *(rw,sync,no_root_squash,no_subtree_check)
exportfs -a
showmount -e 192.168.11.42

service nfs start
service nfs restart
service iptables stop
chkconfig iptables off
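
Before installing Cinder, it may help to confirm the export is actually mountable (from the Cinder node or any NFS client); a quick sketch using /mnt as a temporary mount point:

mount -t nfs 192.168.11.42:/home/cinder_nfs /mnt
df -h /mnt        # should show the exported filesystem
umount /mnt
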
Install the Cinder Software
yum install openstack-cinder scsi-target-utils

Configure the Service

Copy the /etc/cinder/cinder.conf configuration file from the controller, or perform the following steps to set the keystone credentials:
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host controller
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password cinder4admin
openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend cinder.openstack.common.rpc.impl_qpid
openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_hostname 10.1.15.40

openstack-config --set /etc/cinder/cinder.conf database connection mysql://cinder:cinder4admin@controller/cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_host controller

Create the /etc/cinder/nfsshares file listing the NFS export:

[root@compute2 ~]# cat /etc/cinder/nfsshares
192.168.11.42:/home/cinder_nfs
[root@compute2 ~]#

openstack-config --set /etc/cinder/cinder.conf DEFAULT nfs_shares_config /etc/cinder/nfsshares
openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.nfs.NfsDriver
service openstack-cinder-volume start
chkconfig openstack-cinder-volume on
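
Once openstack-cinder-volume is running, a quick end-to-end check is to create a small test volume from the controller (using the admin credentials sourced earlier in this guide); the backing file should appear under the NFS export. A minimal sketch, assuming a 1 GB test volume:

source /root/admin-openrc.sh
cinder create --display-name test-vol 1
cinder list                  # status should go from 'creating' to 'available'
ls /home/cinder_nfs/         # on the NFS server: a volume-<uuid> file should appear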

Monday, September 1, 2014

Creating Custom Windows Images for Openstack

Setting up the KVM environment to create the custom images.

yum install kvm qemu-kvm python-virtinst libvirt libvirt-python virt-manager libguestfs-tools

Once the packages are installed, we need to get the ISOs.

For example, we are getting Windows 7 from http://www.w7forums.com/threads/official-windows-7-sp1-iso-image-downloads.12325/
wget http://msft.digitalrivercontent.net/win/X17-24395.iso

Now we need the VirtIO drivers so that Windows can detect the paravirtualized (VirtIO) devices. They are available from http://alt.fedoraproject.org/pub/alt/virtio-win/latest/

wget http://alt.fedoraproject.org/pub/alt/virtio-win/latest/virtio-win-0.1-81.iso

First, create the disk on which the OS will be installed.

qemu-img create -f qcow2 -o preallocation=metadata windows.qcow2 20G
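
To confirm the disk was created with the expected format and virtual size, qemu-img can report it:

qemu-img info windows.qcow2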

Start the KVM installation
/usr/libexec/qemu-kvm -m 2048 -smp 2 -cdrom X17-24395.iso -drive file=virtio-win-0.1-81.iso,index=3,media=cdrom -drive file=windows.qcow2,if=virtio,boot=off -boot d -vga std -k en-us -vnc 10.1.17.42:1 -usbdevice tablet

Connect to Installation

Once the above step is done, you will be able to connect over VNC using 10.1.17.42:1.
Once connected, you will be at the installation screen. Click Next to continue.

[Screenshot: Windows-install00]

Select the Install option to continue with the installation.

[Screenshot: Windows-install01]

When selecting the installation disk, we need to load the storage driver. Select the Load Driver option and load the driver from the VirtIO ISO we have mounted.

[Screenshot: Windows-install02]

Continue with the installation

[Screenshot: Windows-install04]

Once you are done, download Cloud-Init for Windows (Cloudbase-Init) from

https://github.com/cloudbase/cloudbase-init
Once the installation is completed, boot the machine with a VirtIO NIC using the following command:

/usr/libexec/qemu-kvm -m 2048 -smp 2 -drive file=virtio-win-0.1-81.iso,index=3,media=cdrom -drive file=windows-7.qcow2,if=virtio -boot d -vga std -k en-us -vnc 10.1.17.42:1 -usbdevice tablet -net nic,model=virtio
Connect to VNC and add the VirtIO NIC driver from Device Manager.

[Screenshot: Windows-install06]

Now install Cloudbase-Init and initialize the image.

[Screenshot: Windows-install10]

Enable RDP for access.

Now the image is ready for use.

You can get the Windows password with:

nova get-password <instance ID> <ssh-key>

Add a Nova security group rule to allow RDP:

nova secgroup-add-rule default tcp 3389 3389 0.0.0.0/0

Tuesday, August 26, 2014

"Host key verification failed" error while resizing instance Openstack

Setting allow_resize_to_same_host=true enables resizing in an all-in-one installation, but if we have multiple controller and compute nodes we will get an authentication error like the following:

"/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 4907,
in migrate_disk_and_power_off\n utils.execute(\'ssh\', dest, \'mkdir\', \'-p\', inst_base)\n', '
File "/usr/lib/python2.6/site-packages/nova/utils.py", line 165, in execute\n return processutils.execute(*cmd, **kwargs)\n', '
File "/usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py", line 193, in execute\n cmd=\' \'.join(cmd))\n',
"ProcessExecutionError: Unexpected error while running command.\nCommand: ssh 10.1.15.44
mkdir -p /var/lib/nova/instances/5d5ced81-6fb1-4028-97cd-686e450d1bab\nExit code: 255\nStdout: ''\nStderr:
'Host key verification failed.\\r\\n'\n"]

This is due to a bug in Nova, described in the bug report below, and can be solved by enabling passwordless SSH authentication for the nova user between the compute and controller nodes.

https://bugzilla.redhat.com/show_bug.cgi?id=975014#c3

By default the nova user does not have a login shell, so we need to enable the shell and then set up passwordless authentication between all the servers.

Enable shell for nova user


On all the controller and compute servers, enable a login shell for the nova user.

sed -i "s/\/var\/lib\/nova:\/sbin\/nologin/\/var\/lib\/nova:\/bin\/bash/g" /etc/passwd

cat /etc/passwd |grep nova

nova:x:162:162:OpenStack Nova Daemons:/var/lib/nova:/bin/bash

Enable password less authentication

Between all the compute and controller nodes, create passwordless authentication for the nova user (a worked sketch follows the commands below).

su - nova

ssh-keygen

ssh-copy-id
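
A minimal sketch of the key exchange, assuming two example hosts named controller1 and compute1 (substitute your own node names and repeat for every controller and compute node):

# run as the nova user on each node; host names below are examples only
su - nova
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa     # key pair with an empty passphrase
ssh-copy-id nova@controller1
ssh-copy-id nova@compute1
ssh nova@compute1 hostname                   # should return the hostname with no password or host-key prompt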

Tuesday, August 19, 2014

Creating Multiple Network in Openstack – Icehouse

Adding first network

On Network Node

#Add the integration bridge:
ovs-vsctl add-br br-int
#Add the external bridge:
ovs-vsctl add-br br-ex
#Add a port to the external bridge that connects to the physical external network interface:
#Replace INTERFACE_NAME with the actual interface name. For example, eth2 or ens256.
ovs-vsctl add-port br-ex eth2

On Controller Node

[root@controller1 ~]# source /root/admin-openrc.sh
[root@controller1 ~]# neutron net-create ext-net --shared --router:external=True
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | 2fdfff06-5837-4ab8-b971-c08696165e9d |
| name | ext-net |
| provider:network_type | gre |
| provider:physical_network | |
| provider:segmentation_id | 1 |
| router:external | True |
| shared | True |
| status | ACTIVE |
| subnets | |
| tenant_id | 8c9fb6577a1e45879f35e3e43b34de58 |
+---------------------------+--------------------------------------+
[root@controller1 ~]#
[root@controller1 ~]# neutron subnet-create ext-net --name ext-subnet --allocation-pool start=192.168.10.129,end=192.168.10.254 --disable-dhcp --gateway 192.168.10.1 192.168.10.128/24
Created a new subnet:
+------------------+------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------+
| allocation_pools | {"start": "192.168.10.129", "end": "192.168.10.254"} |
| cidr | 192.168.10.0/24 |
| dns_nameservers | |
| enable_dhcp | False |
| gateway_ip | 192.168.10.1 |
| host_routes | |
| id | 09ec246e-4b1d-4fcb-babc-fdfd73ad13d0 |
| ip_version | 4 |
| name | ext-subnet |
| network_id | 2fdfff06-5837-4ab8-b971-c08696165e9d |
| tenant_id | 8c9fb6577a1e45879f35e3e43b34de58 |
+------------------+------------------------------------------------+
[root@controller1 ~]# neutron net-create demo-net
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | df86ab38-57f2-432b-8ea8-ef30ceb72607 |
| name | demo-net |
| provider:network_type | gre |
| provider:physical_network | |
| provider:segmentation_id | 2 |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | 8c9fb6577a1e45879f35e3e43b34de58 |
+---------------------------+--------------------------------------+
[root@controller1 ~]# neutron subnet-create demo-net --name demo-subnet --gateway 10.0.0.1 10.0.0.0/24
Created a new subnet:
+------------------+--------------------------------------------+
| Field | Value |
+------------------+--------------------------------------------+
| allocation_pools | {"start": "10.0.0.2", "end": "10.0.0.254"} |
| cidr | 10.0.0.0/24 |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 10.0.0.1 |
| host_routes | |
| id | 7f103f72-1d77-48d9-a4eb-fdb10c6dc11d |
| ip_version | 4 |
| name | demo-subnet |
| network_id | df86ab38-57f2-432b-8ea8-ef30ceb72607 |
| tenant_id | 8c9fb6577a1e45879f35e3e43b34de58 |
+------------------+--------------------------------------------+
[root@controller1 ~]# neutron router-create demo-router
Created a new router:
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| admin_state_up | True |
| external_gateway_info | |
| id | d2e481dd-a624-4447-8ea7-54fac383ec52 |
| name | demo-router |
| status | ACTIVE |
| tenant_id | 8c9fb6577a1e45879f35e3e43b34de58 |
+-----------------------+--------------------------------------+
[root@controller1 ~]# neutron router-interface-add demo-router demo-subnet
Added interface 628fbec4-625b-4b03-a2c5-92666cbc72af to router demo-router.
[root@controller1 ~]# neutron router-gateway-set demo-router ext-net
Set gateway for router demo-router
[root@controller1 ~]#

On Network Node

Edit /etc/neutron/l3_agent.ini and add the following details:
handle_internal_only_routers = True
gateway_external_network_id = <external network ID>
external_network_bridge = br-ex

As per the above example:
handle_internal_only_routers = True
gateway_external_network_id = 2fdfff06-5837-4ab8-b971-c08696165e9d
external_network_bridge = br-ex
Adding the Second Network

On Network Node

ovs-vsctl add-br br-ex-2
ovs-vsctl add-port br-ex-2 eth3
On Controller Node

[root@controller1 ~]# source /root/admin-openrc.sh
[root@controller1 ~]# neutron net-create ext-net-2 --shared --router:external=True
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | 19b85183-9637-4bea-9b26-92caf2a5cb99 |
| name | ext-net-2 |
| provider:network_type | gre |
| provider:physical_network | |
| provider:segmentation_id | 3 |
| router:external | True |
| shared | True |
| status | ACTIVE |
| subnets | |
| tenant_id | 8c9fb6577a1e45879f35e3e43b34de58 |
+---------------------------+--------------------------------------+
[root@controller1 ~]# neutron subnet-create ext-net-2 --name ext-subnet-2 --allocation-pool start=192.168.11.50,end=192.168.11.254 --disable-dhcp --gateway 192.168.11.1 192.168.11.0/24
Created a new subnet:
+------------------+-----------------------------------------------+
| Field | Value |
+------------------+-----------------------------------------------+
| allocation_pools | {"start": "192.168.11.50", "end": "192.168.11.254"} |
| cidr | 192.168.11.0/24 |
| dns_nameservers | |
| enable_dhcp | False |
| gateway_ip | 192.168.11.1 |
| host_routes | |
| id | 92dcc66d-00a3-4678-a8ac-9d72a94613ed |
| ip_version | 4 |
| name | ext-subnet-2 |
| network_id | 19b85183-9637-4bea-9b26-92caf2a5cb99 |
| tenant_id | 8c9fb6577a1e45879f35e3e43b34de58 |
+------------------+-----------------------------------------------+
[root@controller1 ~]# neutron router-create demo-router-2
Created a new router:
+-----------------------+--------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------+
| admin_state_up | True |
| external_gateway_info | |
| id | e9e194bb-dafe-4e41-b867-0a64c2e74e29 |
| name | demo-router-2 |
| status | ACTIVE |
| tenant_id | 8c9fb6577a1e45879f35e3e43b34de58 |
+-----------------------+--------------------------------------+
[root@controller1 ~]# neutron router-gateway-set demo-router-2 ext-net-2
Set gateway for router demo-router-2
[root@controller1 ~]# neutron net-create demo-net-2
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | f6238c45-9de4-4869-ad3e-cf3d1a647285 |
| name | demo-net-2 |
| provider:network_type | gre |
| provider:physical_network | |
| provider:segmentation_id | 4 |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | 8c9fb6577a1e45879f35e3e43b34de58 |
+---------------------------+--------------------------------------+
[root@controller1 ~]# neutron subnet-create demo-net-2 --name demo-subnet-2 --gateway 10.1.0.1 10.1.0.0/24
Created a new subnet:
+------------------+--------------------------------------------+
| Field | Value |
+------------------+--------------------------------------------+
| allocation_pools | {"start": "10.1.0.2", "end": "10.1.0.254"} |
| cidr | 10.1.0.0/24 |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 10.1.0.1 |
| host_routes | |
| id | 30bcb484-e7e7-42a5-9a08-b7364f94180a |
| ip_version | 4 |
| name | demo-subnet-2 |
| network_id | f6238c45-9de4-4869-ad3e-cf3d1a647285 |
| tenant_id | 8c9fb6577a1e45879f35e3e43b34de58 |
+------------------+--------------------------------------------+
[root@controller1 ~]# neutron router-interface-add demo-router-2 demo-subnet-2
Added interface daa4f64d-117c-49ee-a5b6-430c837a59d5 to router demo-router-2.
[root@controller1 ~]#

On Network Node

For the second L3 agent, create a separate configuration file, /etc/neutron/l3_agent-2.ini, and add the following details (a sketch for creating the file follows the example below):
handle_internal_only_routers = False
gateway_external_network_id = <external network ID>
external_network_bridge = br-ex-2

As per the above example:
handle_internal_only_routers = False
gateway_external_network_id = 19b85183-9637-4bea-9b26-92caf2a5cb99
external_network_bridge = br-ex-2
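
A minimal sketch for creating that second configuration file, assuming it starts as a copy of the original l3_agent.ini and using openstack-config in the same style as elsewhere in this guide:

cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent-2.ini
openstack-config --set /etc/neutron/l3_agent-2.ini DEFAULT handle_internal_only_routers False
openstack-config --set /etc/neutron/l3_agent-2.ini DEFAULT gateway_external_network_id 19b85183-9637-4bea-9b26-92caf2a5cb99
openstack-config --set /etc/neutron/l3_agent-2.ini DEFAULT external_network_bridge br-ex-2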

[root@neutron1 ~]# cp /etc/init.d/neutron-l3-agent /etc/init.d/neutron-l3-agent-2

Make changes to the copied init script so that the diff between the two files looks like the following:
[root@neutron1 ~]# diff -n /etc/init.d/neutron-l3-agent /etc/init.d/neutron-l3-agent-2
d3 1
a3 1
# neutron-l3-agent-2 OpenStack Neutron Layer 3 Agent
d18 1
a18 1
"/etc/$proj/l3_agent-2.ini" \
d21 1
a21 1
pidfile="/var/run/$proj/$prog-2.pid"
d23 1
a23 1
[ -e /etc/sysconfig/$prog-2 ] && . /etc/sysconfig/$prog-2
d25 1
a25 1
lockfile=/var/lock/subsys/$prog-2
d32 2
a33 3
echo -n $"Starting $prog-2: "
daemon --user neutron --pidfile $pidfile "$exec --log-file /var/log/$proj/$plugin-2.log --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/l3_agent-2.ini --config-file /etc/neutron/fwaas_driver.ini &>/dev/null & echo \$! > $pidfile"

d41 2
a42 2
echo -n $"Stopping $prog-2: "
killproc -p $pidfile $prog-2
[root@neutron1 ~]#

[root@neutron1 ~]# chkconfig neutron-l3-agent-2 on
[root@neutron1 ~]# openstack-service restart
Stopping neutron-dhcp-agent: [ OK ]
Starting neutron-dhcp-agent: [ OK ]
Stopping neutron-l3-agent: [ OK ]
Starting neutron-l3-agent: [ OK ]
Stopping neutron-l3-agent-2: [ OK ]
Starting neutron-l3-agent-2: [ OK ]
Stopping neutron-metadata-agent: [ OK ]
Starting neutron-metadata-agent: [ OK ]
Stopping neutron-openvswitch-agent: [ OK ]
Starting neutron-openvswitch-agent: [ OK ]
[root@neutron1 ~]#

Tuesday, August 12, 2014

Openstack - Error While resizing the instance

Error While resizing the instance

=========
2014-08-12 09:38:08.449 13602 WARNING nova.scheduler.utils [req-7cf9b53a-24ed-46e7-adce-6c5e83c2839b 2ca1137e67d741e0839e67e
4959a8eea 8c9fb6577a1e45879f35e3e43b34de58] Failed to compute_task_migrate_server: No valid host was found.
=========
Solution

Make sure the following settings are enabled in nova.conf on all nodes (a way to set them is sketched after the list below):

"allow_resize_to_same_host=true"
"allow_migrate_to_same_host=true"

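For example, these can be set with openstack-config, in the same style used elsewhere in this guide, on every node that carries a nova.conf:

openstack-config --set /etc/nova/nova.conf DEFAULT allow_resize_to_same_host true
openstack-config --set /etc/nova/nova.conf DEFAULT allow_migrate_to_same_host true
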
After enabling these settings, restart all the OpenStack services and try the resize from the dashboard.

We will be asked for a confirmation of the resize; if we don't confirm it, the resize will fail.

From the command prompt:
[root@controller1 ~]# nova list
+--------------------------------------+------------------+--------+------------+-------------+-------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------------------+--------+------------+-------------+-------------------+
| 0c86c7bd-e815-4616-9c01-30cb8eb09414 | Test Instance 00 | ACTIVE | - | Running | demo-net=10.0.0.9 |
+--------------------------------------+------------------+--------+------------+-------------+-------------------+

List the available flavors with the following command:

$ nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1 | m1.tiny | 512 | 0 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
To resize the server, pass the server ID or name and the new flavor to the nova resize command. Include the --poll parameter to report the resize progress.

$ nova resize myCirrosServer 4 --poll
Instance resizing... 100% complete
Finished
Show the status for your server:

$ nova list

[root@controller1 ~]# nova list
+--------------------------------------+------------------+--------+------------------+-------------+-------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------------------+--------+------------------+-------------+-------------------+
| 0c86c7bd-e815-4616-9c01-30cb8eb09414 | Test Instance 00 | RESIZE | resize_migrating | Running | demo-net=10.0.0.9 |
+--------------------------------------+------------------+--------+------------------+-------------+-------------------+

When the resize completes, the status becomes VERIFY_RESIZE.

Confirm the resize:

$ nova resize-confirm 0c86c7bd-e815-4616-9c01-30cb8eb09414
The server status becomes ACTIVE.

If the resize fails or does not work as expected, you can revert the resize:

$ nova resize-revert 0c86c7bd-e815-4616-9c01-30cb8eb09414
The server status becomes ACTIVE.

Monday, July 21, 2014

Neutron + Pacemaker for HA Gives error

I was trying to configure HA for the Neutron server in an Icehouse implementation. I was able to set up HA for all other services except Neutron. I was using Pacemaker to set up HA, following http://docs.openstack.org/high-availability-guide/content/_add_neutron_l3_agent_resource_to_pacemaker.html

But I still get the following error. The DHCP agent and metadata agent show no errors, but the L3 agent is not working.

Output of crm_mon -1:

Last updated: Fri Jul 18 14:03:25 2014
Last change: Fri Jul 18 13:54:04 2014 via cibadmin on network1
Stack: classic openais (with plugin)
Current DC: network2 - partition with quorum
Version: 1.1.10-14.el6_5.3-368c726
2 Nodes configured, 2 expected votes
4 Resources configured

Online:[ network1 network2 ]

p_api-ip (ocf::heartbeat:IPaddr2):Started network2

p_neutron-dhcp-agent (ocf::openstack:neutron-dhcp-agent):Started network1

p_neutron-metadata-agent (ocf::openstack:neutron-metadata-agent):Started network1

Failed actions:
p_neutron-l3-agent_start_0 on network2 'unknown error' (1): call=13, status=TimedOut, last-rc-change='Fri Jul 18 04:32:06 2014', queued=20091ms, exec=0ms
p_neutron-l3-agent_start_0 on network1 'unknown error' (1): call=23, status=TimedOut, last-rc-change='Fri Jul 18 14:03:01 2014', queued=20010ms, exec=0ms

[root@network1 openstack]#

Solution

The neutron-agent-l3 resource script is to blame, as it tries to communicate with the Neutron server directly on port 9696, while communication is actually handled by the AMQP service (Qpid in my case). We need to modify the script to check the Qpid port rather than the Neutron server port.

Friday, July 18, 2014

Neutron Network Issue. Gateway not pinging for the external network.

On the Network node

ip netns

The above command will list the virtual routers. From the output, select the qrouter ID and try the following commands:

ip netns exec <qrouter-id> ip addr

ip netns exec <qrouter-id> route -n

The above commands should show the IPs assigned inside the virtual router and the qrouter's routing table.

Make sure the routing table shown has a default gateway. If not, try setting it using:

ip netns exec <qrouter-id> route add default gw <gateway IP>

ip netns exec <qrouter-id> iptables-save
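
To verify connectivity from inside the router namespace, a quick check is to ping the external gateway (substitute the gateway IP of your external subnet):

ip netns exec <qrouter-id> ping -c 3 <gateway IP>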

 

 

Saturday, June 28, 2014

[Errno 13] Permission denied: '/var/log/keystone/keystone.log'

[root@controller2 ~]# tail -f /var/log/keystone/keystone-startup.log
_setup_logging_from_conf(product_name, version)
File "/usr/lib/python2.6/site-packages/keystone/openstack/common/log.py", line 525, in _setup_logging_from_conf
filelog = logging.handlers.WatchedFileHandler(logpath)
File "/usr/lib64/python2.6/logging/handlers.py", line 377, in __init__
logging.FileHandler.__init__(self, filename, mode, encoding, delay)
File "/usr/lib64/python2.6/logging/__init__.py", line 827, in __init__
StreamHandler.__init__(self, self._open())
File "/usr/lib64/python2.6/logging/__init__.py", line 846, in _open
stream = open(self.baseFilename, self.mode)
IOError: [Errno 13] Permission denied: '/var/log/keystone/keystone.log'

 

Fix the ownership of the log file so the keystone user can write to it:

chown keystone:keystone /var/log/keystone/keystone.log

Tuesday, June 24, 2014

Openstack Live Migration failure: operation failed

Error in the nova-compute log:

2014-06-25 01:32:50.752 2703 ERROR nova.virt.libvirt.driver [-]
[instance: fc118bff-77a3-4300-ab27-371a314b819f] Live Migration failure: operation failed: Failed to connect to remote libvirt URI qemu+tcp://compute2/system

Try updating the libvirt configuration. Modify /etc/libvirt/libvirtd.conf; the comments in that file describe all of the available options. Make the following changes (see the additional note after the list):


before : #listen_tls = 0
after  : listen_tls = 0

before : #listen_tcp = 1
after  : listen_tcp = 1

add    : auth_tcp = "none"
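
Note (an assumption based on stock RHEL/CentOS 6 packaging, not stated in the original post): for libvirtd to actually listen on TCP it must also be started with the --listen flag, typically set in /etc/sysconfig/libvirtd, and then restarted:

# uncomment or add the LIBVIRTD_ARGS line in /etc/sysconfig/libvirtd
echo 'LIBVIRTD_ARGS="--listen"' >> /etc/sysconfig/libvirtd
service libvirtd restart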


Openstack + Shared Storage (NFS) + Permission Denied

While setting up OpenStack with shared storage, if we get the following error while creating an instance, ensure that the server is in permissive SELinux mode with:

getenforce

and if that doesn't work, try giving 755 permissions to the mounted directory, here /var/lib/nova.

2014-06-24 18:58:20.642 5119 TRACE nova.compute.manager [instance: a7996f1f-9af2-4410-8351-139d43f00786] libvirtError: internal error Process exited while reading console log output: qemu-kvm: -chardev file,id=charserial0,path=/var/lib/nova/instances/a7996f1f-9af2-4410-8351-139d43f00786/console.log: Could not open '/var/lib/nova/instances/a7996f1f-9af2-4410-8351-139d43f00786/console.log': Permission denied

 

Finally, if nothing else worked, tell libvirtd/qemu to use the root user to access the data:
[root@compute nova]# cat /etc/libvirt/qemu.conf |grep root
user = "root"
#group = "root"
[root@compute nova]#

Monday, June 23, 2014

Openstack Icehouse - VNC console not connecting to server

Make sure that the settings on the controller and compute nodes are correct, and double-check the IPs. Also replace the hostname with the IP.

controller - 192.168.216.130

running:
nova-consoleauth
nova-novncproxy

nova.conf:
novncproxy_host=0.0.0.0
novncproxy_port=6080
novncproxy_base_url=http://192.168.216.130:6080/vnc_auto.html

compute - 192.168.216.140

running:
nova-compute

nova.conf:
vnc_enabled=True
novncproxy_base_url=http://192.168.216.130:6080/vnc_auto.html
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.216.140

Wednesday, June 18, 2014

Openstack Icehouse Part 6 : Testing the Setup + Horizon

Creating the Key


$ ssh-keygen

Add the public key to your OpenStack environment:
$ nova keypair-add --pub-key ~/.ssh/id_rsa.pub demo-key

Verify addition of the public key:
$ nova keypair-list
+----------+-------------------------------------------------+
| Name | Fingerprint |
+----------+-------------------------------------------------+
| demo-key | 6c:74:ec:3a:08:05:4e:9e:21:22:a6:dd:b2:62:b8:28 |
+----------+-------------------------------------------------+

nova image-list
+--------------------------------------+---------------------+--------+--------+
| ID | Name | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| acafc7c0-40aa-4026-9673-b879898e1fc2 | cirros-0.3.2-x86_64 | ACTIVE | |
+--------------------------------------+---------------------+--------+--------+
Your first instance uses the cirros-0.3.2-x86_64 image.


List available networks:

$ neutron net-list
+--------------------------------------+----------+-------------------------------------------------------+
| id | name | subnets |
+--------------------------------------+----------+-------------------------------------------------------+
| 3c612b5a-d1db-498a-babb-a4c50e344cb1 | demo-net | 20bcd3fd-5785-41fe-ac42-55ff884e3180 192.168.1.0/24 |
| 9bce64a3-a963-4c05-bfcd-161f708042d1 | ext-net | b54a8d85-b434-4e85-a8aa-74873841a90d 203.0.113.0/24 |
+--------------------------------------+----------+-------------------------------------------------------+
Your first instance uses the demo-net tenant network. However, you must reference this network using the ID instead of the name.


List available security groups:

$ nova secgroup-list
+--------------------------------------+---------+-------------+
| Id | Name | Description |
+--------------------------------------+---------+-------------+
| ad8d4ea5-3cad-4f7d-b164-ada67ec59473 | default | default |
+--------------------------------------+---------+-------------+



Booting an Instance


nova boot --flavor m1.tiny --image cirros-0.3.2-x86_64 --nic net-id=8cc217b0-96a6-4e98-a901-a694ebff173f --security-group default --key-name demo-key demo-instance1

Creating the instance from back end


nova boot --poll --flavor m1.tiny --image cirros-0.3.2-x86_64 --nic net-id=69c6ca95-2f5d-4173-8973-164c5129cb27 --security-group default --key-name Chumma demo-instance

 

Install Horizon on the Controller Node


yum install memcached python-memcached mod_wsgi openstack-dashboard

Edit /etc/openstack-dashboard/local_settings:

ALLOWED_HOSTS = ['localhost', 'my-desktop']
service httpd start
service memcached start
chkconfig httpd on
chkconfig memcached on
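
With httpd and memcached running, the dashboard should be reachable at http://controller/dashboard (the default path used by the RDO openstack-dashboard package; adjust if your packaging differs). A quick sanity check from the shell:

curl -I http://controller/dashboard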

 

Openstack Icehouse Part 5 : Configuring EXTERNAL NETWORK

To create the external network on controller


source /root/admin-openrc.sh

neutron net-create ext-net --shared --router:external=True

To create a subnet on the external network

neutron subnet-create ext-net --name ext-subnet --allocation-pool start=192.168.255.160,end=192.168.255.180 --disable-dhcp --gateway 192.168.255.2 192.168.255.0/24
#To create the tenant network

source /root/demo-openrc.sh

#Create the network:

neutron net-create demo-net

#To create a subnet on the tenant network

neutron subnet-create demo-net --name demo-subnet --gateway 10.0.0.1 10.0.0.0/24

#Create the router:

neutron router-create demo-router

#Attach the router to the demo tenant subnet:

neutron router-interface-add demo-router demo-subnet

#Added interface b1a894fd-aee8-475c-9262-4342afdc1b58 to router demo-router.

neutron router-gateway-set demo-router ext-net

#Set gateway for router demo-router

 

Now check whether the gateway of the external network responds; here it will be 192.168.255.160, which is the first IP of the allocation range. Try pinging that IP, and if it is not working, stop there, remove all the routers and the gateway, and redo it using the IDs. If the gateway doesn't ping, instances won't be able to reach anything outside the network.

 

Set the neutron router-interface-add and neutron router-gateway-set by ID


neutron router-list
+--------------------------------------+-------------+--------------------------------------------------------+

| id                                   | name        | external_gateway_info                                  |

+--------------------------------------+-------------+--------------------------------------------------------+

| 020f48d9-182e-4e33-a73f-813333533092 | router-demo | {"network_id": "9a457578-8f85-486b-9cd0-f7f04922ba0c"} |

+--------------------------------------+-------------+--------------------------------------------------------+


# neutron net-list

+--------------------------------------+----------+----------------------------------------------------+

| id                                   | name     | subnets                                            |

+--------------------------------------+----------+----------------------------------------------------+

| 07e10f48-0637-46bb-a444-695646e6bd15 | net-demo | c042e65e-3892-45bc-aeb0-625ce5f4aaaf 50.50.1.0/24  |

| 9a457578-8f85-486b-9cd0-f7f04922ba0c | ext_net  | 0bcccf59-be17-48c7-8032-e00fd4f15b46 1.2.3.0/24 |

+--------------------------------------+----------+----------------------------------------------------+


#neutron router-gateway-set 020f48d9-182e-4e33-a73f-813333533092 9a457578-8f85-486b-9cd0-f7f04922ba0c

Openstack Icehouse Part 4 Neutron

OpenStack Networking (neutron) Configure controller node


$ mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron4mar';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron4mar';
exit

keystone user-create --name neutron --pass neutron4mar --email neutron@example.com
keystone user-role-add --user neutron --tenant service --role admin
keystone service-create --name neutron --type network --description "OpenStack Networking"
keystone endpoint-create --service-id $(keystone service-list | awk '/ network / {print $2}') --publicurl http://controller:9696 --adminurl http://controller:9696 --internalurl http://controller:9696

To install the Networking components

# yum -y install openstack-neutron openstack-neutron-ml2 python-neutronclient

openstack-config --set /etc/neutron/neutron.conf database connection mysql://neutron:neutron4mar@controller/neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host controller
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password neutron4mar

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid
openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_url http://controller:8774/v2
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_username nova
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_tenant_id $(keystone tenant-list | awk '/ service / { print $2 }')
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_password nova4mar
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_admin_auth_url http://controller:35357/v2.0

openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
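
For reference, after the commands above /etc/neutron/plugins/ml2/ml2_conf.ini on the controller should contain sections roughly like this:

[ml2]
type_drivers = gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True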

openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://controller:9696
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_tenant_name service
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_username neutron
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_password neutron4mar
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://controller:35357/v2.0
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

service openstack-nova-api restart
service openstack-nova-scheduler restart
service openstack-nova-conductor restart

service neutron-server start
chkconfig neutron-server on

Neutron ON NETWORK NODE


Edit /etc/sysctl.conf to contain the following:
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
Implement the changes:
sysctl -p

yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host controller
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password neutron4mar

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid
openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname controller

openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router

Add verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting, and comment out any lines in the [service_providers] section.

openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT use_namespaces True

We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/l3_agent.ini to assist with troubleshooting.
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT use_namespaces True

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url http://controller:5000/v2.0
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_region regionOne
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_tenant_name service
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_user neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_password neutron4mar
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret meta4mar

We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/metadata_agent.ini to assist with troubleshooting.

Perform the next two steps on the controller node.
On the controller node, configure Compute to use the metadata service:
Replace METADATA_SECRET with the secret you chose for the metadata proxy.
openstack-config --set /etc/nova/nova.conf DEFAULT service_neutron_metadata_proxy true
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_metadata_proxy_shared_secret meta4mar
#On the controller node, restart the Compute API service:
service openstack-nova-api restart
To configure the Modular Layer 2 (ML2) plug-in

The local_ip value in the commands below (192.168.216.151) is the IP address of the instance tunnels network interface on the network node; replace it with your own.

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 192.168.216.151
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True

service openvswitch start
chkconfig openvswitch on
#Add the integration bridge:
ovs-vsctl add-br br-int
#Add the external bridge:
ovs-vsctl add-br br-ex
#Add a port to the external bridge that connects to the physical external network interface:
#Replace INTERFACE_NAME with the actual interface name. For example, eth2 or ens256.
ovs-vsctl add-port br-ex eth4

Depending on your network interface driver, you may need to disable Generic Receive Offload (GRO) to achieve suitable throughput between your instances and the external network.
To temporarily disable GRO on the external network interface while testing your environment:
# ethtool -K INTERFACE_NAME gro off

 

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
service neutron-openvswitch-agent start
service neutron-l3-agent start
service neutron-dhcp-agent start
service neutron-metadata-agent start
chkconfig neutron-openvswitch-agent on
chkconfig neutron-l3-agent on
chkconfig neutron-dhcp-agent on
chkconfig neutron-metadata-agent on
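
With the agents started, you can verify from the controller that they registered with the Neutron server; the DHCP, L3, metadata and Open vSwitch agents should all show as alive:

source /root/admin-openrc.sh
neutron agent-list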

Neutron Configure compute node


Edit /etc/sysctl.conf to contain the following:
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
Implement the changes:
# sysctl -p

To install the Networking components

yum -y install openstack-neutron-ml2 openstack-neutron-openvswitch

Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.

openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_host controller
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password neutron4mar

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend neutron.openstack.common.rpc.impl_qpid
openstack-config --set /etc/neutron/neutron.conf DEFAULT qpid_hostname controller

openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router

The local_ip value in the commands below (192.168.216.141) is the IP address of the instance tunnels network interface on the compute node; replace it with your own.

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 192.168.216.141
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
service openvswitch start
chkconfig openvswitch on

#Add the integration bridge:

ovs-vsctl add-br br-int


Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.

openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_url http://controller:9696
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_auth_strategy keystone
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_tenant_name service
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_username neutron
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_password neutron4mar
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_admin_auth_url http://controller:35357/v2.0
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
#Due to a packaging bug, the Open vSwitch agent initialization script explicitly looks for the Open vSwitch plug-in configuration file rather than a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file. Run the following commands to resolve this issue:

cp /etc/init.d/neutron-openvswitch-agent /etc/init.d/neutron-openvswitch-agent.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /etc/init.d/neutron-openvswitch-agent
#Restart the Compute service:

service openstack-nova-compute restart
#Start the Open vSwitch (OVS) agent and configure it to start when the system boots:
service neutron-openvswitch-agent start
chkconfig neutron-openvswitch-agent on

Openstack Icehouse Part 3 NOVA

COMPUTE SERVER CONFIGURATION On Controller


yum -y install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient

openstack-config --set /etc/nova/nova.conf database connection mysql://nova:nova4mar@controller/nova
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.216.130
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 192.168.216.130
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.216.130

mysql -u root -p
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova4mar';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova4mar';
exit

#Create the Compute service tables:

su -s /bin/sh -c "nova-manage db sync" nova



keystone user-create --name=nova --pass=nova4mar --email=nova@example.com
keystone user-role-add --user=nova --tenant=service --role=admin

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password nova4mar
keystone service-create --name=nova --type=compute --description="OpenStack Compute"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ compute / {print $2}') --publicurl=http://controller:8774/v2/%\(tenant_id\)s --internalurl=http://controller:8774/v2/%\(tenant_id\)s --adminurl=http://controller:8774/v2/%\(tenant_id\)s

service openstack-nova-api start
service openstack-nova-cert start
service openstack-nova-consoleauth start
service openstack-nova-scheduler start
service openstack-nova-conductor start
service openstack-nova-novncproxy start
chkconfig openstack-nova-api on
chkconfig openstack-nova-cert on
chkconfig openstack-nova-consoleauth on
chkconfig openstack-nova-scheduler on
chkconfig openstack-nova-conductor on
chkconfig openstack-nova-novncproxy on

nova image-list



Add a rule to the default Nova Security Group to allow SSH access and Ping to instances:
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0


NOVA ON COMPUTE NODE


Install the Compute packages:

yum -y install openstack-nova-compute

Edit the /etc/nova/nova.conf configuration file:

openstack-config --set /etc/nova/nova.conf database connection mysql://nova:nova4mar@controller/nova
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_host controller
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password nova4mar

openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend qpid
openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller

#Configure Compute to provide remote console access to instances.

openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.216.140
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.216.140
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://controller:6080/vnc_auto.html

#Specify the host that runs the Image Service.

openstack-config --set /etc/nova/nova.conf DEFAULT glance_host controller

#You must determine whether your system's processor and/or hypervisor support hardware acceleration for virtual machines.

Run the following command:
$ egrep -c '(vmx|svm)' /proc/cpuinfo
If this command returns a value of one or greater, your system supports hardware acceleration which typically requires no additional configuration.
If this command returns a value of zero, your system does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
Run the following command:
# openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
Start the Compute service and configure it to start when the system boots:

service libvirtd start
service messagebus start
chkconfig libvirtd on
chkconfig messagebus on
service openstack-nova-compute start
chkconfig openstack-nova-compute on
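
To confirm the compute node registered with the controller, run the following on the controller with admin credentials; nova-compute should appear with state 'up' alongside the controller services:

source /root/admin-openrc.sh
nova service-list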

OpenStack – Icehouse –Part 2 Glance

Configure the Image Service On controller Server


yum install openstack-glance python-glanceclient -y

openstack-config --set /etc/glance/glance-api.conf database connection mysql://glance:glance4mar@controller/glance
openstack-config --set /etc/glance/glance-registry.conf database connection mysql://glance:glance4mar@controller/glance

openstack-config --set /etc/glance/glance-api.conf DEFAULT rpc_backend qpid
openstack-config --set /etc/glance/glance-api.conf DEFAULT qpid_hostname controller

mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance4mar';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance4mar';
exit

su -s /bin/sh -c "glance-manage db_sync" glance
keystone user-create --name=glance --pass=glance4mar --email=glance@example.com
keystone user-role-add --user=glance --tenant=service --role=admin

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host controller
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password glance4mar
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host controller
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password glance4mar
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

keystone service-create --name=glance --type=image --description="OpenStack Image Service"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ image / {print $2}') --publicurl=http://controller:9292 --internalurl=http://controller:9292 --adminurl=http://controller:9292

service openstack-glance-api start
service openstack-glance-registry start
chkconfig openstack-glance-api on
chkconfig openstack-glance-registry on

#Verify the Image Service installation


mkdir /tmp/images
cd /tmp/images/
wget http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img

source /root/admin-openrc.sh
glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 --container-format bare --is-public True --progress < cirros-0.3.2-x86_64-disk.img

cd /
rm -rf /tmp/images

glance image-list


Importing Images into Glance


You can load an image from the command line with glance, e.g.:
glance image-create --name 'Fedora 19 x86_64' --disk-format qcow2 --container-format bare --is-public true \
--copy-from http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2
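
The --copy-from import runs in the background, so the image may stay in the "queued" or "saving" state for a while. To check on it, using the image name from the example above:

glance image-show 'Fedora 19 x86_64'
# Status changes to "active" once the download completes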

Building Your Own Images


Alternatively, one can use diskimage-builder, which is available in the RDO repository:

yum install diskimage-builder

$ disk-image-create -a amd64 fedora vm -o fedora-image.qcow2
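
The resulting file still has to be uploaded to Glance before instances can use it. A minimal sketch, assuming the fedora-image.qcow2 built above and sourced admin credentials; the image name here is only an example:

source /root/admin-openrc.sh
glance image-create --name "Fedora-diskimage-builder" --disk-format qcow2 --container-format bare --is-public True --progress < fedora-image.qcow2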

More images are available at the following URL:


http://openstack.redhat.com/Image_resources

Tuesday, June 17, 2014

OpenStack - Icehouse -- Part 1 Keystone.

The OpenStack project is an open source cloud computing platform that supports all types of cloud environments. The project aims for simple implementation, massive scalability, and a rich set of features. Cloud computing experts from around the world contribute to the project.

OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a variety of complementary services. Each service offers an application programming interface (API) that facilitates integration between the services. The following table provides a list of OpenStack services:


Dashboard (Horizon): Provides a web-based self-service portal to interact with underlying OpenStack services, such as launching an instance, assigning IP addresses and configuring access controls.
Compute (Nova): Manages the lifecycle of compute instances in an OpenStack environment. Responsibilities include spawning, scheduling and decommissioning of virtual machines on demand.
Networking (Neutron): Enables network connectivity as a service for other OpenStack services, such as OpenStack Compute. Provides an API for users to define networks and the attachments into them. Has a pluggable architecture that supports many popular networking vendors and technologies.

Storage
Object Storage (Swift): Stores and retrieves arbitrary unstructured data objects via a RESTful, HTTP-based API. It is highly fault tolerant with its data replication and scale-out architecture. Its implementation is not like a file server with mountable directories.
Block Storage (Cinder): Provides persistent block storage to running instances. Its pluggable driver architecture facilitates the creation and management of block storage devices.

Shared services
Identity Service (Keystone): Provides an authentication and authorization service for other OpenStack services. Provides a catalog of endpoints for all OpenStack services.
Image Service (Glance): Stores and retrieves virtual machine disk images. OpenStack Compute makes use of this during instance provisioning.
Telemetry (Ceilometer): Monitors and meters the OpenStack cloud for billing, benchmarking, scalability, and statistical purposes.

Higher-level services
Orchestration (Heat): Orchestrates multiple composite cloud applications by using either the native HOT template format or the AWS CloudFormation template format, through both an OpenStack-native REST API and a CloudFormation-compatible Query API.
Database Service (Trove): Provides scalable and reliable Cloud Database-as-a-Service functionality for both relational and non-relational database engines.



Sample architecture we are trying to set up. The IPs will vary; please check and adjust them for your environment.

[Figure: installguide_arch-neutron]

ON ALL NODES

# Set SELinux to permissive
sed -i "s/SELINUX=.*/SELINUX=permissive/g" /etc/sysconfig/selinux

yum -y install policycoreutils setroubleshoot
setenforce 0
yum install -y euca2ools
yum install -y yum-plugin-priorities gedit curl wget nc

yum -y install ntp

service ntpd start
chkconfig ntpd on

yum -y install http://repos.fedorapeople.org/repos/openstack/openstack-icehouse/rdo-release-icehouse-3.noarch.rpm
yum -y install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

yum -y install openstack-utils
yum -y install openstack-selinux
yum -y upgrade



On all nodes, add the following rules to iptables:

-A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 443 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5000 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 5672 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 6080 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 8774 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 9292 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 9696 -j ACCEPT
-A INPUT -p tcp -m state --state NEW -m tcp --dport 35357 -j ACCEPT
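
These rules need to end up in /etc/sysconfig/iptables (above the final REJECT rule) so that they survive a reboot, and the firewall has to be reloaded afterwards. A minimal sketch:

vi /etc/sysconfig/iptables      # paste the ACCEPT rules above into the filter section
service iptables restart
chkconfig iptables on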

ON ALL OTHER NODES


yum install MySQL-python -y

ON CONTROLLER NODE


yum -y install qpid-cpp-server memcached
sed -i "s/auth=yes/auth=no/g" /etc/qpidd.conf
service qpidd start
chkconfig qpidd on
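
To confirm the message broker came up and is listening on its default AMQP port, something like the following should show qpidd bound to 5672:

netstat -lntp | grep 5672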

yum install mysql mysql-server MySQL-python -y
service mysqld start
chkconfig mysqld on
mysql_secure_installation

Overall Network


192.168.255.130 controller

192.168.216.130 controller
192.168.216.140 compute
192.168.216.141 compute
192.168.255.150 network
eth4 network << Public connection
192.168.216.150 network
192.168.216.151 network
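
These name-to-address mappings should also be present in /etc/hosts on every node so that the hostnames used throughout this guide (controller, compute, network) resolve. A minimal sketch, assuming the 192.168.216.x addresses are the management network:

# /etc/hosts (adjust to your own management IPs)
192.168.216.130   controller
192.168.216.140   compute
192.168.216.150   network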


On NETWORK NODE


The external interface uses a special configuration without an IP address assigned to it. Configure the third interface as the external interface:
Replace INTERFACE_NAME with the actual interface name. For example, eth2 or ens256.
Edit the /etc/sysconfig/network-scripts/ifcfg-INTERFACE_NAME file to contain the following:
Do not change the HWADDR and UUID keys.

DEVICE=INTERFACE_NAME
TYPE=Ethernet
ONBOOT="yes"
BOOTPROTO="none"

Restart networking:
service network restart
=========================================
KEYSTONE_DBPASS keystone4mar Database password of Identity service
DEMO_PASS demo4mar Password of user demo
ADMIN_PASS admin4mar Password of user admin
GLANCE_DBPASS glance4mar Database password for Image Service
GLANCE_PASS glance4mar Password of Image Service user glance
NOVA_DBPASS nova4mar Database password for Compute service
NOVA_PASS nova4mar Password of Compute service user nova
DASH_DBPASS dash4mar Database password for the dashboard
CINDER_DBPASS cinder4mar Database password for the Block Storage service
CINDER_PASS cinder4mar Password of Block Storage service user cinder
NEUTRON_DBPASS neutron4mar Database password for the Networking service
NEUTRON_PASS neutron4mar Password of Networking service user neutron
HEAT_DBPASS heat4mar Database password for the Orchestration service
HEAT_PASS heat4mar Password of Orchestration service user heat
CEILOMETER_DBPASS ceil4mar Database password for the Telemetry service
CEILOMETER_PASS ceil4mar Password of Telemetry service user ceilometer
TROVE_DBPASS trove4mar Database password of Database service
TROVE_PASS trove4mar Password of Database Service user trove
=========================================

On Controller Node


In my.cnf, configure the InnoDB defaults:

bind-address = ***.***.***.***
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
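
A minimal sketch of how this ends up looking in /etc/my.cnf, assuming the bind-address is the controller's management IP from the network table above; restart MySQL afterwards so the settings take effect:

[mysqld]
bind-address = 192.168.216.130
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8

service mysqld restart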


Installing Identity Service On Controller Node




yum install openstack-keystone python-keystoneclient -y
openstack-config --set /etc/keystone/keystone.conf database connection mysql://keystone:keystone4mar@controller/keystone
$ mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone4mar';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone4mar';
exit

Create the database tables for the Identity Service:
su -s /bin/sh -c "keystone-manage db_sync" keystone
ADMIN_TOKEN=$(openssl rand -hex 10)
echo $ADMIN_TOKEN
openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN
keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
chown -R keystone:keystone /etc/keystone/ssl
chmod -R o-rwx /etc/keystone/ssl

service openstack-keystone start
chkconfig openstack-keystone on
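
Optionally, expired tokens can be flushed periodically so the keystone database does not grow without bound; the upstream install guide suggests an hourly cron job for the keystone user, roughly:

(crontab -l -u keystone 2>&1 | grep -q token_flush) || echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone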
# Define users, tenants, and roles
# Replace ADMIN_TOKEN with your authorization token if it is not already set in this shell
export OS_SERVICE_TOKEN=$ADMIN_TOKEN
export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0

#Create an administrative user

keystone user-create --name=admin --pass=admin4mar --email=ADMIN_EMAIL
keystone role-create --name=admin
keystone tenant-create --name=admin --description="Admin Tenant"
keystone user-role-add --user=admin --tenant=admin --role=admin
keystone user-role-add --user=admin --role=_member_ --tenant=admin

#Create a normal user

keystone user-create --name=demo --pass=demo4mar --email=DEMO_EMAIL
keystone tenant-create --name=demo --description="Demo Tenant"
keystone user-role-add --user=demo --role=_member_ --tenant=demo

#Create a service tenant

keystone tenant-create --name=service --description="Service Tenant"
#Define services and API endpoints

keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ identity / {print $2}') --publicurl=http://controller:5000/v2.0 --internalurl=http://controller:5000/v2.0 --adminurl=http://controller:35357/v2.0

#Verify the Identity Service installation

unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
keystone --os-username=admin --os-password=admin4mar --os-auth-url=http://controller:35357/v2.0 token-get
keystone --os-username=admin --os-password=admin4mar --os-tenant-name=admin --os-auth-url=http://controller:35357/v2.0 token-get

echo "export OS_USERNAME=admin
export OS_PASSWORD=admin4mar
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0" >> /root/admin-openrc.sh

cat /root/admin-openrc.sh

source /root/admin-openrc.sh
keystone token-get
keystone user-list
keystone user-role-list --user admin --tenant admin

IF YOU WANT TO INSTALL THE OPENSTACK CLIENTS


yum install python-pip
pip install python-PROJECTclient

ceilometer - Telemetry API
cinder - Block Storage API and extensions
glance - Image Service API
heat - Orchestration API
keystone - Identity service API and extensions
neutron - Networking API
nova - Compute API and extensions
swift - Object Storage API
trove - Database Service API

#On Red Hat Enterprise Linux, CentOS, or Fedora, use yum to install the clients from the packaged versions available in RDO:

yum install python-PROJECTclient
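
For example, to install just the Compute client, replace PROJECT with nova:

yum install python-novaclient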


Creating a Client Profile File


=====================
echo "export OS_USERNAME=demo
export OS_PASSWORD=demo4mar
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://controller:5000/v2.0" >> /root/demo-openrc.sh
cat /root/demo-openrc.sh
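
To work as the demo user in a new shell, source the profile and run a quick sanity check:

source /root/demo-openrc.sh
keystone token-get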