Friday, May 8, 2015

Openstack KVM libvirtError: internal error: no supported architecture for os type 'hvm'

Nova Error Log
===========
2015-05-06 16:50:22.982 1187 ERROR nova.compute.manager [-] [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545] Instance failed to spawn
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545] Traceback (most recent call last):
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2246, in _build_resources
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     yield resources
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2116, in _build_and_run_instance
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     block_device_info=block_device_info)
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 2622, in spawn
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     block_device_info, disk_info=disk_info)
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4425, in _create_domain_and_network
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     power_on=power_on)
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4349, in _create_domain
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     LOG.error(err)
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib/python2.7/site-packages/nova/openstack/common/excutils.py", line 82, in __exit__
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     six.reraise(self.type_, self.value, self.tb)
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4333, in _create_domain
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     domain = self._conn.defineXML(xml)
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 183, in doit
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     result = proxy_call(self._autowrap, f, *args, **kwargs)
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 141, in proxy_call
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     rv = execute(f, *args, **kwargs)
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 122, in execute
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     six.reraise(c, e, tb)
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib/python2.7/site-packages/eventlet/tpool.py", line 80, in tworker
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     rv = meth(*args, **kwargs)
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]   File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3445, in defineXML
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]     if ret is None:raise libvirtError('virDomainDefineXML() failed', conn=self)
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545] libvirtError: internal error: no supported architecture for os type 'hvm'
2015-05-06 16:50:22.982 1187 TRACE nova.compute.manager [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545]
2015-05-06 16:50:22.987 1187 WARNING nova.virt.libvirt.driver [-] [instance: fdc97e3f-25f0-4d4d-b649-4a6d4aff8545] During wait destroy, instance disappeared


Fix
===
This error typically means libvirt could not find a hypervisor capable of full (HVM) virtualization on the host, i.e. KVM is not usable. If we need to fall back to plain QEMU emulation:

openstack-config --set /etc/nova/nova.conf DEFAULT libvirt_type qemu
openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu
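
Before falling back to QEMU, it is worth checking whether the host actually supports KVM at all; a quick sketch using standard tools (these checks are assumptions about the host, not part of the original post):

egrep -c '(vmx|svm)' /proc/cpuinfo    # 0 means no hardware virtualization extensions
ls -l /dev/kvm                        # a missing device node means KVM is unusable
lsmod | grep kvm                      # kvm_intel or kvm_amd should be loaded

After changing nova.conf, restart the compute service so the new setting takes effect:

systemctl restart openstack-nova-compute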

Tuesday, April 21, 2015

Enabling Instance Resizing in Openstack Juno


Editing Configuration
# Run on ALL compute servers and the controller server

sed -i "s/#allow_resize_to_same_host.*/allow_resize_to_same_host=true/g" /etc/nova/nova.conf
sed -i "s/#allow_migrate_to_same_host.*/allow_migrate_to_same_host=true/g" /etc/nova/nova.conf

Configure the nova user
usermod -s /bin/bash nova

Then enable passwordless SSH authentication between the nova users on all servers.

Create a public/private key pair for the user:
ssh-keygen

Copy the public key to the nova user on every other server:
ssh-copy-id <server>   # repeat for every server

Add the following configuration file under the nova user on every server which has a nova user:
su - nova
cat << EOF > ~/.ssh/config
Host *
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
EOF
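
With the keys and SSH config in place, a resize can be tested end to end; the instance and flavor IDs below are placeholders:

nova resize <instance-id> <flavor-id>
# once the status changes to VERIFY_RESIZE:
nova resize-confirm <instance-id>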

Monday, April 13, 2015

Creating Custom Windows Image for Openstack

Setting up the KVM environment to create the custom images.

Installing Packages # This can be done on any spare KVM-capable host (here, compute node 2)

yum install kvm qemu-kvm python-virtinst libvirt libvirt-python virt-manager libguestfs-tools

Once the packages are installed, we need the ISOs.

Next we need the VirtIO drivers, so that Windows can detect the paravirtualized devices the way Linux does; they are available from http://alt.fedoraproject.org/pub/alt/virtio-win/latest/

wget http://alt.fedoraproject.org/pub/alt/virtio-win/latest/virtio-win-0.1-81.iso

First, create the disk on which the OS will be installed:

qemu-img create -f qcow2 -o preallocation=metadata windows.qcow2 20G

Start the KVM installation (adjust the virtio-win ISO filename to the version you downloaded):

qemu-system-x86_64 -enable-kvm -m 4096 -cdrom en_windows_7_professional_with_sp1_x64_dvd_u_676939.iso -drive file=windows.qcow2,if=virtio -drive file=virtio-win-0.1-100.iso,index=3,media=cdrom  -boot d -vga std -k en-us -vnc 10.1.52.42:1 -usbdevice tablet

Connect to the Installation
Once the above command is running, you can connect with a VNC client to 10.1.52.42:1 (display 1, i.e. TCP port 5901).

You will be at the Windows installation screen. Click Next to continue.


Select the Install option to continue with the installation.


Selecting the Hard Disk Driver

The installer will not see the VirtIO disk until its driver is loaded. Select the Load Driver option and load the driver from the VirtIO ISO we mounted.


Continue with the installation.

Once the installation is done, the instance will have an Internet connection (the default NIC setting provides user-mode networking), so download Cloudbase-Init for Windows from
https://github.com/cloudbase/cloudbase-init

To allow Cloudbase-Init to run scripts during an instance boot, set the PowerShell execution policy to be unrestricted:

C:\> powershell
PS C:\> Set-ExecutionPolicy Unrestricted

Download Cloudbase-Init:

PS C:\> Invoke-WebRequest -UseBasicParsing http://www.cloudbase.it/downloads/CloudbaseInitSetup_Beta_x64.msi -OutFile cloudbaseinit.msi

Shut down the instance.

Final Configuration

Once the Windows installation is complete, boot the machine again with a VirtIO NIC attached, using the following command:

qemu-system-x86_64 -enable-kvm -m 4096 -drive file=windows.qcow2,if=virtio -drive file=virtio-win-0.1-100.iso,index=3,media=cdrom  -boot d -vga std -k en-us -vnc 10.1.52.42:1 -usbdevice tablet -net nic,model=virtio

Connect to VNC and install the VirtIO NIC driver from Device Manager.

Enable RDP in the Server.


Installing Cloudbase-Init

Run the Cloudbase-Init installer and configure it as below:

C:\> .\cloudbaseinit.msi
In the configuration options window, change the following settings:
Username: Administrator
Network adapter to configure: Red Hat VirtIO Ethernet Adapter
Serial port for logging: COM1
When the installation is done, in the Complete the Cloudbase-Init Setup Wizard window, select the Run Sysprep and Shutdown check boxes and click Finish.


Now the image is ready for use.

You can retrieve the Windows password with:

nova get-password <instance ID> <ssh-key>

Add the image through the dashboard: Images >> Create Image.
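
The image can also be uploaded from the command line instead of the dashboard; a sketch using the Juno-era glance CLI (the image name is an assumption):

glance image-create --name windows7 --disk-format qcow2 --container-format bare --is-public True --file windows.qcow2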

Thursday, March 19, 2015

Openstack Recovering Data from Failed Instances Disk

****************************
Qemu-nbd tools in Ubuntu
****************************

In some scenarios, instances are running but are inaccessible through SSH and do not respond to any command. The VNC console could be displaying a boot failure or kernel panic error messages. This could be an indication of file system corruption on the VM itself. If you need to recover files or inspect the content of the instance, qemu-nbd can be used to mount the disk.

We can get the path of the instance's directory by grepping the instance name in the common instance path:

>>egrep -i "Instance-name" /var/lib/nova/instances/*/*.xml

To access the instance's disk (/var/lib/nova/instances/xxxx-instance-uuid-xxxxxx/disk), use the following steps:
1.)Suspend the instance using the virsh command.
2.)Connect the qemu-nbd device to the disk.
3.)Mount the qemu-nbd device.
4.)Unmount the device after inspecting.
5.)Disconnect the qemu-nbd device.
6.)Resume the instance.

If you do not follow steps 4 through 6, OpenStack Compute cannot manage the instance any longer. It fails to respond to any command issued by OpenStack Compute, and it is marked as shut down.

Once you mount the disk file, you should be able to access it and treat it as a collection of normal directories with files and a directory structure. However, we do not recommend that you edit or touch any files because this could change the access control lists (ACLs) that are used to determine which accounts can perform what operations on files and directories. Changing ACLs can make the instance unbootable if it is not already.

Suspend the instance using the virsh command, taking note of the internal ID:

# virsh list
Id Name                 State
----------------------------------
1 instance-00000981    running
2 instance-000009f5    running
30 instance-0000274a    running

# virsh suspend 30
Domain 30 suspended
Connect the qemu-nbd device to the disk:

# cd /var/lib/nova/instances/instance-0000274a
# ls -lh
total 33M
-rw-rw---- 1 libvirt-qemu kvm  6.3K Jan 15 11:31 console.log
-rw-r--r-- 1 libvirt-qemu kvm   33M Jan 15 22:06 disk
-rw-r--r-- 1 libvirt-qemu kvm  384K Jan 15 22:06 disk.local
-rw-rw-r-- 1 nova         nova 1.7K Jan 15 11:30 libvirt.xml
# qemu-nbd -c /dev/nbd0 `pwd`/disk
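
If /dev/nbd0 does not exist, the nbd kernel module is probably not loaded; loading it first is a standard prerequisite (not part of the original walkthrough):

# modprobe nbd max_part=16
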
Mount the qemu-nbd device.

The qemu-nbd device tries to export the instance disk's different partitions as separate devices. For example, if vda is the disk and vda1 is the root partition, qemu-nbd exports the device as /dev/nbd0 and /dev/nbd0p1, respectively:

# mount /dev/nbd0p1 /mnt/
You can now access the contents of /mnt, which correspond to the first partition of the instance's disk.

To examine the secondary or ephemeral disk, use an alternate mount point if you want both primary and secondary drives mounted at the same time:

# umount /mnt
# qemu-nbd -c /dev/nbd1 `pwd`/disk.local
# mount /dev/nbd1 /mnt/
# ls -lh /mnt/
total 76K
lrwxrwxrwx.  1 root root    7 Jan 15 00:44 bin -> usr/bin
dr-xr-xr-x.  4 root root 4.0K Jan 15 01:07 boot
drwxr-xr-x.  2 root root 4.0K Jan 15 00:42 dev
drwxr-xr-x. 70 root root 4.0K Jan 15 11:31 etc
drwxr-xr-x.  3 root root 4.0K Jan 15 01:07 home
lrwxrwxrwx.  1 root root    7 Jan 15 00:44 lib -> usr/lib
lrwxrwxrwx.  1 root root    9 Jan 15 00:44 lib64 -> usr/lib64
drwx------.  2 root root  16K Jan 15 00:42 lost+found
drwxr-xr-x.  2 root root 4.0K Feb  3  2012 media
drwxr-xr-x.  2 root root 4.0K Feb  3  2012 mnt
drwxr-xr-x.  2 root root 4.0K Feb  3  2012 opt
drwxr-xr-x.  2 root root 4.0K Jan 15 00:42 proc
dr-xr-x---.  3 root root 4.0K Jan 15 21:56 root
drwxr-xr-x. 14 root root 4.0K Jan 15 01:07 run
lrwxrwxrwx.  1 root root    8 Jan 15 00:44 sbin -> usr/sbin
drwxr-xr-x.  2 root root 4.0K Feb  3  2012 srv
drwxr-xr-x.  2 root root 4.0K Jan 15 00:42 sys
drwxrwxrwt.  9 root root 4.0K Jan 15 16:29 tmp
drwxr-xr-x. 13 root root 4.0K Jan 15 00:44 usr
drwxr-xr-x. 17 root root 4.0K Jan 15 00:44 var
Once you have completed the inspection, unmount the mount point and release the qemu-nbd device:

# umount /mnt
# qemu-nbd -d /dev/nbd0
/dev/nbd0 disconnected
Resume the instance using virsh:

# virsh list
Id Name                 State
----------------------------------
1 instance-00000981    running
2 instance-000009f5    running
30 instance-0000274a    paused

# virsh resume 30
Domain 30 resumed


****************************
Libguestfs  tools in Centos7
****************************

sudo yum install libguestfs-tools      # Fedora/RHEL/CentOS
sudo apt-get install libguestfs-tools  # Debian/Ubuntu


[boris@icehouse1 Downloads]$ guestfish --rw -a disk

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: 'help' for help on commands
      'man' to read the manual
      'quit' to quit the shell

> run
> list-filesystems
/dev/sda1: ext4
> mount /dev/sda1 /
> ls /
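
To actually pull files out of the image, guestfish's copy-out command copies a directory tree from the guest to the local machine; the paths below are placeholders:

> copy-out /home /tmp/recovered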



****************************
Guestmount tools in Centos7
****************************

[root@compute ea1aeb3xxxxxxxxxxxxxxxxx3157a9b81621]# ls
console.log  disk  disk.info  libvirt.xml
[root@compute ea1aeb3xxxxxxxxxxxxxxxxx3157a9b81621]# ls -al
total 13790864
drwxr-xr-x.  2 nova nova        3864 Mar 18 15:07 .
drwxr-xr-x. 20 nova nova        3864 Mar 19 11:01 ..
-rw-rw----.  1 root root           0 Mar 19 11:01 console.log
-rw-r--r--.  1 qemu qemu 14094106624 Mar 19 12:09 disk
-rw-r--r--.  1 nova nova          79 Mar 18 15:07 disk.info
-rw-r--r--.  1 nova nova        2603 Mar 19 10:59 libvirt.xml
[root@compute ea1aeb3xxxxxxxxxxxxxxxxx3157a9b81621]# guestmount -a disk -i /mnt
[root@compute ea1aeb3xxxxxxxxxxxxxxxxx3157a9b81621]# ll /mnt/
total 136
dr-xr-xr-x.  2 root root  4096 Mar 18 15:44 bin
dr-xr-xr-x.  4 root root  4096 Apr 16  2014 boot
drwxr-xr-x. 10 root root  4096 Mar 19 10:22 cgroup
drwxr-xr-x.  2 root root  4096 Apr 16  2014 dev
drwxr-xr-x. 80 root root  4096 Mar 19 11:00 etc
drwxr-xr-x.  3 root root  4096 Mar 18 15:08 home
dr-xr-xr-x.  8 root root  4096 Apr 16  2014 lib
dr-xr-xr-x. 11 root root 12288 Mar 18 15:44 lib64
drwx------.  2 root root 16384 Apr 16  2014 lost+found
drwxr-xr-x.  2 root root  4096 Sep 23  2011 media
drwxr-xr-x.  2 root root  4096 Sep 23  2011 mnt
drwxr-xr-x.  2 root root  4096 Sep 23  2011 opt
drwxr-xr-x.  2 root root  4096 Apr 16  2014 proc
dr-xr-x---.  4 root root 24576 Mar 19 10:59 root
dr-xr-xr-x.  2 root root 12288 Mar 18 15:45 sbin
drwxr-xr-x.  2 root root  4096 Apr 16  2014 selinux
drwxr-xr-x.  2 root root  4096 Sep 23  2011 srv
drwxr-xr-x.  2 root root  4096 Apr 16  2014 sys
drwxrwxrwt.  3 root root  4096 Mar 19 11:00 tmp
drwxr-xr-x. 13 root root  4096 Apr 16  2014 usr
drwxr-xr-x. 19 root root  4096 Mar 19 10:14 var
[root@compute ea1aeb3xxxxxxxxxxxxxxxxx3157a9b81621]# guestunmount /mnt/
[root@compute ea1aeb3xxxxxxxxxxxxxxxxx3157a9b81621]#

Tuesday, March 10, 2015

Swift Tips


Swift stores the objects we upload to containers as .data files on the corresponding drives.

[root@compute ~]# find /srv/node/sdc1 -iname "*.data"
/srv/node/sdc1/objects/58511/456/3923e942436c9de6e832f944fb30c456/1421697356.06343.data
/srv/node/sdc1/objects/66216/445/40aa0b832ae8dff8681916972fd13445/1422560956.02659.data
/srv/node/sdc1/objects/142841/960/8b7e465403a5b5017ae51c0c0ab5a960/1422459278.52978.data
/srv/node/sdc1/objects/53083/6af/33d6dc4a65e40f2c539e26649c2d96af/1422459797.37964.data
/srv/node/sdc1/objects/37756/61e/24df3295c06d9770e1cd4f1d15ee861e/1422560823.75913.data
/srv/node/sdc1/objects/206317/924/c97b770dc9f2170f2434631423ccb924/1422560870.83562.data
/srv/node/sdc1/objects/1056/c1d/01081c2d99e3ed7cc3408249335b9c1d/1422560871.31131.data
/srv/node/sdc1/objects/107854/6aa/6953b4ba90867f1b2ee0ff36e8f7d6aa/1422560871.63875.data
/srv/node/sdc1/objects/262004/dfc/ffdd367dd12034d5f3c066845e4d8dfc/1422560873.82851.data
/srv/node/sdc1/objects/71710/393/4607a45373f2b0f6632b2f56501cf393/1422560874.16764.data

In the above output, the Swift drive is mounted at /srv/node/sdc1.


We can get the date when a data file was created from its name, which is a Unix timestamp:

/srv/node/sdc1/objects/71784/771/461a3fd11073d0a88222403d4a7d1771/1422561047.39847.data
[root@compute ~]# date --date @1422561047
Thu Jan 29 14:50:47 EST 2015
[root@compute ~]#

If we have configured 2 replicas in the Swift ring, there will be two data files with the same name. If we have multiple Swift servers, the replicas are stored on different servers rather than on the same one.
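
To go the other way, from an object to its on-disk locations, swift-get-nodes maps a ring entry to the servers and paths holding the replicas; the account, container and object names below are placeholders:

swift-get-nodes /etc/swift/object.ring.gz AUTH_test mycontainer myobject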

Friday, January 30, 2015

Openstack - Auto evacuation Script

The following script will:
1.) Check for the compute hosts which are down
2.) Check for the instances on the down hosts
3.) Check for the compute hosts which are up
4.) Calculate the vCPU and memory needed by the instances on the down hosts
5.) Calculate the free vCPU and memory on the up hosts
6.) Find a proper host for each instance from the down hosts
7.) Once a proper compute host is found for each instance, the MySQL entries are modified and the instance is hard rebooted.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import subprocess

import MySQLdb


# Determine which compute nodes are down.
# Returns a list of nodes which are down.
def select_compute_down_host():
    nova_service_list = os.popen("nova-manage service list 2> /dev/null").read().strip().split("\n")
    compute_down_list = []
    for line in nova_service_list:
        # A down nova-compute service is marked 'XXX' in the state column.
        if 'nova-compute' in line and 'enabled' in line and 'XXX' in line:
            compute_down_list.append(line.split()[1])
    if len(compute_down_list) == 0:
        print "No compute nodes are down, the program will exit!"
        exit(0)
    return list(set(compute_down_list))


# Determine which compute nodes are up.
# Returns a list of nodes which are up.
def select_compute_up_host():
    nova_service_list = os.popen("nova-manage service list 2> /dev/null").read().strip().split("\n")
    compute_up_list = []
    for line in nova_service_list:
        # A healthy nova-compute service is marked ':-)' in the state column.
        if 'nova-compute' in line and 'enabled' in line and ':-)' in line:
            compute_up_list.append(line.split()[1])
    if len(compute_up_list) == 0:
        print "No compute nodes are up, the program will exit!"
        exit(0)
    return list(set(compute_up_list))


# Determine which instances are on the down nodes.
# Input is the list of down nodes; returns a list of tuples of instance UUIDs.
def instance_in_down_node(down_nodes, host='controller', user='nova', passwd='nova4key', db='nova'):
    instances_dict = {}
    down_instances_list = []
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    for node in down_nodes:
        sql_select = "select uuid from instances where host = '%s' and vm_state = 'active'" % (node)
        cursor.execute(sql_select)
        instances_name = cursor.fetchall()
        if instances_name != ():
            instances_dict[node] = instances_name
            down_instances_list.append(instances_dict[node])
    cursor.close()
    connection_mysql.close()
    if down_instances_list == []:
        print 'No running virtual machines on the down compute nodes'
        exit(0)
    return down_instances_list


# Determine the resource usage of a down instance.
# Input is an instance UUID; returns its (vcpus, memory_mb, instance_type_id).
def usage_instance(instance, host='controller', user='nova', passwd='nova4key', db='nova'):
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    sql_select = "select instance_type_id from instances where uuid = '%s'" % (instance)
    cursor.execute(sql_select)
    type_instance = cursor.fetchall()[0][0]
    sql_select = "select vcpus,memory_mb,id from instance_types where id = '%d'" % (type_instance)
    cursor.execute(sql_select)
    instance_details = cursor.fetchall()[0]
    cursor.close()
    connection_mysql.close()
    return instance_details


# Determine the resources left on a compute node which is up.
# Input is a node name; returns [free_vcpus, free_ram_mb].
def usage_of_compute_node(node, host='controller', user='nova', passwd='nova4key', db='nova'):
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    sql_select = "select vcpus,vcpus_used,free_ram_mb from compute_nodes where hypervisor_hostname = '%s'" % (node)
    cursor.execute(sql_select)
    detail = cursor.fetchall()[0]
    free_vcpu = detail[0] - detail[1]
    free_mem = detail[2]
    cursor.close()
    connection_mysql.close()
    return [free_vcpu, free_mem]


# Update the usage counters of a node in the database to include the
# resources of the instance that has just been moved onto it.
def update_compute_node_usage(instance, node, host='controller', user='nova', passwd='nova4key', db='nova'):
    instance_usage = usage_instance(instance)
    instance_vcpu = instance_usage[0]
    instance_memory = instance_usage[1]
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    sql_select = "select root_gb from instances where uuid = '%s'" % (instance)
    cursor.execute(sql_select)
    instance_space = cursor.fetchall()[0][0]

    sql_select = "select vcpus_used,memory_mb_used,free_ram_mb,local_gb_used,running_vms,disk_available_least from compute_nodes where hypervisor_hostname = '%s'" % (node)
    cursor.execute(sql_select)
    details = cursor.fetchall()[0]
    node_vcpus_used = details[0] + instance_vcpu
    node_memory_mb_used = details[1] + instance_memory
    node_free_ram_mb = details[2] - instance_memory
    node_local_gb = details[3] + instance_space
    node_running_vms = details[4] + 1
    node_disk_available_least = details[5] - instance_space

    sql_select = "update compute_nodes set vcpus_used = '%s',memory_mb_used = '%s',free_ram_mb = '%s',local_gb_used = '%s',running_vms = '%s',disk_available_least = '%s' where hypervisor_hostname = '%s'" % (node_vcpus_used, node_memory_mb_used, node_free_ram_mb, node_local_gb, node_running_vms, node_disk_available_least, node)
    print sql_select
    cursor.execute(sql_select)
    connection_mysql.commit()
    cursor.close()
    connection_mysql.close()


# Take an instance and a node. If the node has enough free resources for the
# instance, move the instance onto the node and hard-reboot it.
# Returns 0 on success, 1 otherwise.
def rescue_instance(instance, node, host='controller', user='nova', passwd='nova4key', db='nova'):
    print instance, node
    node_usage = usage_of_compute_node(node)
    instance_usage = usage_instance(instance)
    node_vcpu, node_memory = node_usage[0], node_usage[1]
    instance_vcpu, instance_memory = instance_usage[0], instance_usage[1]
    if node_vcpu > instance_vcpu and node_memory > instance_memory:
        print "Transfer possible"
        connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
        cursor = connection_mysql.cursor()
        connection_mysql.autocommit(True)
        sql_select = "update instances set vm_state = 'stopped',power_state = 4 where uuid = '%s'" % (instance)
        print sql_select
        cursor.execute(sql_select)
        connection_mysql.commit()
        sql_select = "update instances set host = '%s' where uuid = '%s' and vm_state = 'stopped'" % (node, instance)
        print sql_select
        cursor.execute(sql_select)
        connection_mysql.commit()
        instance_reboot = "source /root/admin-openrc.sh;nova reboot --hard %s" % (instance)
        print instance_reboot
        subprocess.call(instance_reboot, shell=True, stderr=subprocess.PIPE)
        update_compute_node_usage(instance, node)
        cursor.close()
        connection_mysql.close()
        return 0
    print "Transfer not possible: not enough free vcpus or memory"
    return 1


# Reset the usage counters of the down nodes to an empty-host baseline.
def update_down_host(host='controller', user='nova', passwd='nova4key', db='nova'):
    down_nodes = select_compute_down_host()
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    for node in down_nodes:
        print ("The Node {} is Down".format(node))
        sql_select = "select memory_mb,local_gb from compute_nodes where hypervisor_hostname = '%s'" % (node)
        print sql_select
        cursor.execute(sql_select)
        detail = cursor.fetchall()[0]
        memory_mb, local_gb = detail[0], detail[1]
        # Reset the counters as if the node were empty.
        memory_used = 512
        vcpu_used = 0
        local_gb_used = 0
        running_vm = 0
        free_ram = memory_mb - memory_used
        free_disk_gb = local_gb - local_gb_used
        sql_select = "update compute_nodes set memory_mb_used = '%s',vcpus_used = '%s',local_gb_used = '%s',running_vms = '%s',free_ram_mb = '%s',free_disk_gb = '%s' where hypervisor_hostname = '%s'" % (memory_used, vcpu_used, local_gb_used, running_vm, free_ram, free_disk_gb, node)
        print sql_select
        cursor.execute(sql_select)
        connection_mysql.commit()
    cursor.close()
    connection_mysql.close()


def select_compute_down_host_instances():
    # ================================
    # Scanning for nodes which are down
    # ================================
    print "Scanning For Nodes Which Are Down.."
    down_nodes = select_compute_down_host()
    for node in down_nodes:
        print ("The Node {} is Down".format(node))

    # ====================================
    # Scanning for instances which are down
    # ====================================
    instance_down = instance_in_down_node(down_nodes)
    for node in instance_down:
        for instance in node:
            print ("The Instance {} is Down".format(instance[0]))

    # ==================================
    # Scanning for nodes which are up
    # ==================================
    up_nodes = select_compute_up_host()
    for node in up_nodes:
        print ("The Node {} is Up".format(node))

    # =====================================
    # Rescue the instances from the down nodes
    # =====================================
    for node in instance_down:
        for instance in node:
            for live_node in up_nodes:
                if rescue_instance(instance[0], live_node) == 0:
                    break

    update_down_host()


if __name__ == "__main__":
    select_compute_down_host_instances()
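
The script is meant to run on the controller, since it shells out to nova-manage and talks to the Nova database directly. One way to run it periodically is a cron entry; the script path and interval below are assumptions:

*/5 * * * * /usr/bin/python /root/auto_evacuate.py >> /var/log/auto_evacuate.log 2>&1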

Tuesday, January 20, 2015

Openstack - Hypervisor Usage Update Script

This is a basic script used to correct the usage figures Openstack displays if they go wrong. It finds all the compute nodes and the instances on each compute node, adds up the CPU, memory and storage, and updates the database.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os

import MySQLdb


# Determine which compute nodes are up.
# Returns a list of nodes which are up.
def select_compute_up_host():
    nova_service_list = os.popen("nova-manage service list 2> /dev/null").read().strip().split("\n")
    compute_up_list = []
    for line in nova_service_list:
        # A healthy nova-compute service is marked ':-)' in the state column.
        if 'nova-compute' in line and 'enabled' in line and ':-)' in line:
            compute_up_list.append(line.split()[1])
    if len(compute_up_list) == 0:
        print "No compute nodes are up, the program will exit!"
        exit(0)
    return list(set(compute_up_list))


# Reset the usage counters of every enabled compute node to an empty-host
# baseline; they are rebuilt instance by instance afterwards.
def intialize_compute_usage(host='controller', user='nova', passwd='nova', db='nova'):
    nova_service_list = os.popen("nova-manage service list 2> /dev/null").read().strip().split("\n")
    compute_list = []
    for line in nova_service_list:
        if 'nova-compute' in line and 'enabled' in line:
            compute_list.append(line.split()[1])
    if len(compute_list) == 0:
        print "No compute nodes found, the program will exit!"
        exit(0)
    compute_list = list(set(compute_list))

    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    for node in compute_list:
        print node
        sql_select = "select memory_mb,local_gb from compute_nodes where hypervisor_hostname = '%s'" % (node)
        print sql_select
        cursor.execute(sql_select)
        detail = cursor.fetchall()[0]
        memory_mb, local_gb = detail[0], detail[1]
        # Reset the counters as if the node were empty.
        memory_used = 512
        vcpu_used = 0
        local_gb_used = 0
        running_vm = 0
        free_ram = memory_mb - memory_used
        free_disk_gb = local_gb - local_gb_used
        sql_select = "update compute_nodes set memory_mb_used = '%s',vcpus_used = '%s',local_gb_used = '%s',running_vms = '%s',free_ram_mb = '%s',free_disk_gb = '%s' where hypervisor_hostname = '%s'" % (memory_used, vcpu_used, local_gb_used, running_vm, free_ram, free_disk_gb, node)
        print sql_select
        cursor.execute(sql_select)
        connection_mysql.commit()
    cursor.close()
    connection_mysql.close()


# Determine the resource usage of an instance.
# Input is an instance UUID; returns its (vcpus, memory_mb, instance_type_id).
def usage_instance(instance, host='controller', user='nova', passwd='nova', db='nova'):
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    sql_select = "select instance_type_id from instances where uuid = '%s'" % (instance)
    cursor.execute(sql_select)
    type_instance = cursor.fetchall()[0][0]
    sql_select = "select vcpus,memory_mb,id from instance_types where id = '%d'" % (type_instance)
    cursor.execute(sql_select)
    instance_details = cursor.fetchall()[0]
    print instance_details[0], instance_details[1], instance_details[2]
    cursor.close()
    connection_mysql.close()
    return instance_details


# Add an instance's resources to its node's usage counters in the database.
def update_compute_node_usage(instance, node, host='controller', user='nova', passwd='nova', db='nova'):
    instance_usage = usage_instance(instance)
    instance_vcpu = instance_usage[0]
    instance_memory = instance_usage[1]
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    sql_select = "select root_gb from instances where uuid = '%s'" % (instance)
    cursor.execute(sql_select)
    instance_space = cursor.fetchall()[0][0]

    sql_select = "select vcpus_used,memory_mb_used,free_ram_mb,local_gb_used,running_vms,disk_available_least from compute_nodes where hypervisor_hostname = '%s'" % (node)
    cursor.execute(sql_select)
    details = cursor.fetchall()[0]
    node_vcpus_used = details[0] + instance_vcpu
    node_memory_mb_used = details[1] + instance_memory
    node_free_ram_mb = details[2] - instance_memory
    node_local_gb = details[3] + instance_space
    node_running_vms = details[4] + 1
    node_disk_available_least = details[5] - instance_space

    sql_select = "update compute_nodes set vcpus_used = '%s',memory_mb_used = '%s',free_ram_mb = '%s',local_gb_used = '%s',running_vms = '%s',disk_available_least = '%s' where hypervisor_hostname = '%s'" % (node_vcpus_used, node_memory_mb_used, node_free_ram_mb, node_local_gb, node_running_vms, node_disk_available_least, node)
    print sql_select
    cursor.execute(sql_select)
    connection_mysql.commit()
    cursor.close()
    connection_mysql.close()


# Find the active instances on a node and add each one's usage to the
# node's counters.
def instances_node(node, host='controller', user='nova', passwd='nova', db='nova'):
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    sql_select = "select uuid from instances where host = '%s' and vm_state = 'active'" % (node)
    cursor.execute(sql_select)
    instances_name = cursor.fetchall()
    cursor.close()
    connection_mysql.close()
    if instances_name == ():
        print 'No running instances'
        return
    for instance in instances_name:
        print instance[0], node
        update_compute_node_usage(instance[0], node)


def update_hypervisor_usage():
    intialize_compute_usage()
    print "Scanning For Nodes Which Are Up.."
    up_nodes = select_compute_up_host()
    for node in up_nodes:
        print ("The Node {} is up".format(node))
        instances_node(node)


if __name__ == "__main__":
    update_hypervisor_usage()

Friday, December 5, 2014

NovaException: Unexpected vif_type=binding_failed In Openstack Juno Migration


Sample Error
=============
ERROR nova.compute.manager [req-] [instance: ******-******-******-*******] Setting instance vm_state to ERROR
TRACE nova.compute.manager [instance: ******-******-******-*******] Traceback (most recent call last):
TRACE nova.compute.manager [instance: ******-******-******-*******]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5596, in _error_out_instance_on_exception
TRACE nova.compute.manager [instance: ******-******-******-*******]     yield
TRACE nova.compute.manager [instance: ******-******-******-*******]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3459, in resize_instance
TRACE nova.compute.manager [instance: ******-******-******-*******]     block_device_info)
TRACE nova.compute.manager [instance: ******-******-******-*******]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4980, in migrate_disk_and_power_off
TRACE nova.compute.manager [instance: ******-******-******-*******]     utils.execute('ssh', dest, 'mkdir', '-p', inst_base)
TRACE nova.compute.manager [instance: ******-******-******-*******]   File "/usr/lib/python2.7/site-packages/nova/utils.py", line 165, in execute
TRACE nova.compute.manager [instance: ******-******-******-*******]     return processutils.execute(*cmd, **kwargs)
TRACE nova.compute.manager [instance: ******-******-******-*******]   File "/usr/lib/python2.7/site-packages/nova/openstack/common/processutils.py", line 193, in execute
TRACE nova.compute.manager [instance: ******-******-******-*******]     cmd=' '.join(cmd))
TRACE nova.compute.manager [instance: ******-******-******-*******] ProcessExecutionError: Unexpected error while running command.
TRACE nova.compute.manager [instance: ******-******-******-*******] Command: ssh 10.5.2.20 mkdir -p /var/lib/nova/instances/******-******-******-*******
TRACE nova.compute.manager [instance: ******-******-******-*******] Exit code: 255
TRACE nova.compute.manager [instance: ******-******-******-*******] Stdout: ''
TRACE nova.compute.manager [instance: ******-******-******-*******] Stderr: 'Host key verification failed.\r\n'
TRACE nova.compute.manager [instance: ******-******-******-*******]
ERROR oslo.messaging.rpc.dispatcher [-] Exception during message handling: Unexpected error while running command.
Command: ssh 10.5.2.20 mkdir -p /var/lib/nova/instances/******-******-******-*******
Exit code: 255
Stdout: ''
Stderr: 'Host key verification failed.\r\n'

Things that need to be checked

Configure the nova user
First things first, let's make sure our nova user has an appropriate shell set:

cat /etc/passwd | grep nova
Verify that the shell field at the end of the nova entry is /bin/bash.

If not, let's modify the user and make it so:

usermod -s /bin/bash nova


After doing this the next steps are all run as the nova user.
SSH Configuration
su - nova
We need to generate an SSH key:

ssh-keygen

Next up we need to configure SSH to not do host key verification, unless you want to manually SSH to all compute nodes that exist and accept the key (and continue to do so for each new compute node you add).

cat << EOF > ~/.ssh/config
Host *
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
EOF

Set up passwordless authentication between the nova users on all servers (ssh-copy-id, as described in the resize post above).
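
To confirm the fix, reproduce the failing operation by hand; if host key verification was the problem, this should now succeed without prompting (the hostname is a placeholder):

su - nova -c "ssh <compute-host> hostname"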

Wednesday, November 19, 2014

Docker+Juno Giving MissingSectionHeaderError while creating docker instance

I was able to configure Docker with Juno by following the instructions in http://www.adminz.in/2014/11/integrating-docker-into-juno-nova.html

First I got a timeout error with the Docker service and the nova service would not start, so I edited connectionpool.py as described in the following URL: http://www.adminz.in/2014/11/docker-n...

After that the service was running fine, but while launching an instance I got the following error.

2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]   File "/usr/lib/python2.7/site-packages/novadocker/virt/docker/driver.py", line 404, in spawn
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]     self._start_container(container_id, instance, network_info)
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]   File "/usr/lib/python2.7/site-packages/novadocker/virt/docker/driver.py", line 376, in _start_container
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]     instance_id=instance['name'])
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] InstanceDeployFailure: Cannot setup network: Unexpected error while running command.
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Command: sudo nova-rootwrap /etc/nova/rootwrap.conf ip link add name tapb97f8d6e-a6 type veth peer name nsb97f8d6e-a6
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Exit code: 1
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Stdout: u''
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Stderr: u'Traceback (most recent call last):\n  File "/usr/bin/nova-rootwrap", line 10, in <module>\n    sys.exit(main())\n  File "/usr/lib/python2.7/site-packages/oslo/rootwrap/cmd.py", line 91, in main\n    filters = wrapper.load_filters(config.filters_path)\n  File "/usr/lib/python2.7/site-packages/oslo/rootwrap/wrapper.py", line 120, in load_filters\n    filterconfig.read(os.path.join(filterdir, filterfile))\n  File "/usr/lib64/python2.7/ConfigParser.py", line 305, in read\n    self._read(fp, filename)\n  File "/usr/lib64/python2.7/ConfigParser.py", line 512, in _read\n    raise MissingSectionHeaderError(fpname, lineno, line)\nConfigParser.MissingSectionHeaderError: File contains no section headers.\nfile: /etc/nova/rootwrap.d/docker.filters, line: 1\n\' [Filters]\\n\'\n'
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]

FIX
The issue was caused by a blank space before the [Filters] entry in the docker.filters file in the rootwrap.d directory on the Docker server. Once the leading space was removed, the Docker instance launched correctly.
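
A quick way to spot and strip the offending whitespace (the sed invocation is an assumption, not from the original post):

grep -n '^[[:space:]]' /etc/nova/rootwrap.d/docker.filters
sed -i 's/^[[:space:]]*//' /etc/nova/rootwrap.d/docker.filters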

[root@docker ~]# docker ps
CONTAINER ID        IMAGE                    COMMAND             CREATED             STATUS              PORTS               NAMES
d37ea1ce08b9        tutum/wordpress:latest   "/run.sh"           16 seconds ago      Up 15 seconds                           nova-73a4f67a-b6d0-4251-a292-d28c5137e6d4
[root@docker ~]#

Tuesday, November 18, 2014

Integrating Docker into Juno Nova Service as a Hypervisor


Installing Python Modules Needed for Docker
===========================================
yum install -y python-six
yum install -y python-pbr
yum install -y python-babel
yum install -y python-openbabel
yum install -y python-oslo-*
yum install -y python-docker-py

Installing Latest Version of Docker
==================================
yum install wget
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-1.2.0-4.el7.centos.x86_64.rpm
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-devel-1.2.0-4.el7.centos.x86_64.rpm
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-pkg-devel-1.2.0-4.el7.centos.x86_64.rpm
yum install docker-*

Starting the Docker Service
===========================
systemctl start docker
systemctl status docker
systemctl enable docker


Installing and configuring Nova-Docker Driver
=============================================
yum install -y python-pip git
pip install -e git+https://github.com/stackforge/nova-docker#egg=novadocker
cd src/novadocker/
python setup.py install


Install and configure Neutron Service on the Docker Server
======================================================
http://www.adminz.in/2014/10/openstack-juno-part-6-neutron.html

Install and configure Nova Service to use Docker
======================================================
Installing Packages
yum install openstack-nova-compute -y ; usermod -G docker nova


openstack-config --set /etc/nova/nova.conf DEFAULT compute_driver novadocker.virt.docker.DockerDriver


openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host controller
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_password guest

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/nova/nova.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password mar4nova

# On the compute/Docker node: set my_ip to the management-interface IP address of this node (a hostname does not work here); novncproxy_base_url points at the controller's public IP
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.1.15.144
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.1.15.144
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://10.1.15.140:6080/vnc_auto.html

openstack-config --set /etc/nova/nova.conf glance host controller

After these steps, /etc/nova/nova.conf should contain:

[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver

systemctl enable openstack-nova-compute.service
systemctl start openstack-nova-compute.service

Configure Glance to Include Docker Images
==========================================
On the controller server, edit /etc/glance/glance-api.conf:
# Supported values for the 'container_format' image attribute
container_formats=ami,ari,aki,bare,ovf,ova,docker

systemctl restart openstack-glance-api

Creating Custom Rootwrap Filters on the Docker Server
=================================
mkdir /etc/nova/rootwrap.d/
cat << EOF >> /etc/nova/rootwrap.d/docker.filters
# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root
EOF

chgrp nova /etc/nova/rootwrap.d -R
chmod 640 /etc/nova/rootwrap.d -R

systemctl restart openstack-nova-compute

If you face a timeout issue with Nova, try the fix at the following URL:

http://www.adminz.in/2014/11/docker-nova-time-out-error.html

Adding a Docker image on the Docker server:
docker pull tutum/wordpress
docker save tutum/wordpress | glance image-create --is-public=True --container-format=docker --disk-format=raw --name tutum/wordpress
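
To verify the whole chain, boot an instance from the saved image; the flavor and instance name here are assumptions:

nova boot --flavor m1.small --image tutum/wordpress wordpress-test
nova list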

Monday, November 10, 2014

Docker + Nova Time Out Error

http://paste.openstack.org/show/131728/

Sample Error
==========
    out = f(*args, **kwds)
  File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 468, in get
    return self.request('GET', url, **kwargs)
  File "/usr/lib/python2.7/site-packages/novadocker/virt/docker/client.py", line 36, in wrapper
    out = f(*args, **kwds)
  File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 456, in request
    resp = self.send(prep, **send_kwargs)
 File "/usr/lib/python2.7/site-packages/novadocker/virt/docker/client.py", line 36, in wrapper
    out = f(*args, **kwds)
 File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 559, in send
    r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 327, in send
    timeout=timeout
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 516, in urlopen
    body=body, headers=headers)
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 299, in _make_request
    timeout_obj = self._get_timeout(timeout)
 File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 279, in _get_timeout
    return Timeout.from_float(timeout)
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/timeout.py", line 152, in from_float
    return Timeout(read=timeout, connect=timeout)
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/timeout.py", line 95, in __init__
    self._connect = self._validate_timeout(connect, 'connect')
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/timeout.py", line 125, in _validate_timeout
    "int or float." % (name, value))
ValueError: Timeout value connect was Timeout(connect=10, read=10, total=None), but it must be an int or float.



To fix the problem I had to directly modify "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py":

def _get_timeout(self, timeout):
    """ Helper that always returns a :class:`urllib3.util.Timeout` """
    if timeout is _Default:
        return self.timeout.clone()

    if isinstance(timeout, Timeout):  # <== the Timeout being passed in is not a urllib3 Timeout
        return timeout.clone()
    else:
        # User passed us an int/float. This is for backwards compatibility,
        # can be removed later
        return Timeout.from_float(timeout._connect)  # <== manually changed to use _connect

Removing Nova and Neutron Services from Mysql

Sometimes we need to remove services listed in Nova or Neutron because they are duplicated or have been removed from the system entirely. We can do it the following way.

Removing Nova Service from Mysql Database. 

>>nova service-list
>>nova hypervisor-list

mysql> use nova;
mysql> SELECT id, created_at, updated_at, hypervisor_hostname FROM compute_nodes;

mysql> DELETE FROM compute_node_stats WHERE compute_node_id='1';
mysql> DELETE FROM compute_nodes WHERE hypervisor_hostname='compute1';
mysql> DELETE FROM services WHERE host='compute1';



Removing Neutron Service from Mysql Database.

>>neutron agent-list

mysql> use neutron;
mysql> DELETE FROM agents WHERE host='compute1';

Thursday, November 6, 2014

Parse Error Caused by a Blank Space Before the Entries

I noticed that in Openstack Juno, if there is whitespace at the beginning of lines containing 'key = value' entries, we get a parse error in the logs.

Sample Error. 

Nov 06 13:29:42 controller.novalocal neutron-server[14563]: File "/usr/lib64/python2.7/argparse.py", line 1794...ion
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: action(self, namespace, argument_values, option_string)
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: File "/usr/lib/python2.7/site-packages/oslo/config...l__
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: ConfigParser._parse_file(values, namespace)
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: File "/usr/lib/python2.7/site-packages/oslo/config...ile
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: raise ConfigFileParseError(pe.filename, str(pe))
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: oslo.config.cfg.ConfigFileParseError: Failed to pa...ue'

Nov 06 13:29:42 controller.novalocal systemd[1]: neutron-server.service: main process exited, code=exited, st...LURE

The solution is to find the offending line and remove the leading blank space.
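
Offending lines can be located across the config files with a simple grep; the pattern and file paths below are assumptions, not from the original post:

grep -nE '^[[:space:]]+[^#[:space:]]' /etc/neutron/*.conf /etc/nova/*.conf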