
Monday, April 10, 2023

NextCloud Setup with Docker

Nextcloud is one of the most commonly used self-hosted alternatives to cloud storage services, and it is now easy to deploy with Docker. The following Docker Compose file and Nginx configuration can be used to deploy the Nextcloud application behind an Nginx proxy server with SSL termination.
We can bring the containers up and down with the following commands:

docker-compose up -d
docker-compose down
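
To check that the stack came up, the container status and logs can be inspected (a quick sketch; the service name app matches the Compose file below):

docker-compose ps
docker-compose logs -f app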

===========

version: '2'
#volumes:
#  nextcloud: /root/nextcloud/ncdata
#  db: /root/nextcloud/mysql
services:
  db:
    image: mariadb:10.5
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - /root/nextcloud/mysql:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=XXXXXXXXX
      - MYSQL_PASSWORD=XXXXXXXX
      - MYSQL_DATABASE=XXXXXXXX
      - MYSQL_USER=XXXXXXXX
  app:
    image: nextcloud
    restart: always
    links:
      - db
    volumes:
      - /root/nextcloud/ncdata:/var/www/html
    environment:
      - MYSQL_PASSWORD=XXXXXXXX
      - MYSQL_DATABASE=XXXXXXXX
      - MYSQL_USER=XXXXXXXX
      - MYSQL_HOST=XXXXXXXX
      - NEXTCLOUD_TRUSTED_DOMAINS=abc.xyz.aa
      - OVERWRITEHOST=abc.xyz.aa:XXXX
      - OVERWRITEPROTOCOL=https
        
  web:
    image: nginx
    restart: always
    ports:
      - 8082:8080
    links:
      - app
    volumes:
      - /root/nextcloud/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
      - /root/nextcloud/cert:/etc/cert
===========
Nginx Configuration file
===========

server {
  listen 80;
  server_name abc.xyz.aa;
  return 301 https://$server_name:8080$request_uri;
  add_header X-Content-Type-Options              "nosniff";
}
server {
  listen 8080 ssl;
  server_name abc.xyz.aa;
  ssl_certificate /etc/cert/abc.xyz.aa.crt;
  ssl_certificate_key /etc/cert/abc.xyz.aa.key;
  ssl_prefer_server_ciphers on;
  location / {
        proxy_pass              http://app;
        proxy_set_header        Host $host;
        proxy_set_header        X-Real-IP $remote_addr;
        proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header        X-Forwarded-Proto $scheme;
  }
}

===========
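
The Nginx configuration above expects a certificate and key under /root/nextcloud/cert (mounted as /etc/cert in the web container). A minimal self-signed pair can be generated as follows; this is only a sketch, and a CA-issued certificate should be used in production:

mkdir -p /root/nextcloud/cert
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /root/nextcloud/cert/abc.xyz.aa.key \
  -out /root/nextcloud/cert/abc.xyz.aa.crt \
  -subj "/CN=abc.xyz.aa"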




Tuesday, November 28, 2017

Increase swap in an Azure Linux machine

In Azure, to create a swap file in the directory defined by the ResourceDisk.MountPoint parameter, update the /etc/waagent.conf file by setting the following three parameters:

ResourceDisk.Format=y
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=xx


Note: The xx placeholder represents the desired number of megabytes (MB) for the swap file.
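
For example, to configure a 2048 MB swap file non-interactively (a sketch; the 2048 MB size is an assumption, adjust as needed):

sed -i 's/^ResourceDisk.Format=.*/ResourceDisk.Format=y/' /etc/waagent.conf
sed -i 's/^ResourceDisk.EnableSwap=.*/ResourceDisk.EnableSwap=y/' /etc/waagent.conf
sed -i 's/^ResourceDisk.SwapSizeMB=.*/ResourceDisk.SwapSizeMB=2048/' /etc/waagent.conf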
Restart the WALinuxAgent service by running one of the following commands, depending on the distribution in question:

Ubuntu: service walinuxagent restart
Red Hat/CentOS: service waagent restart


Run one of the following commands to show the new swap space that's being used after the restart:

dmesg | grep swap
swapon -s
cat /proc/swaps
file /mnt/resource/swapfile
free| grep -i swap


If the swap file isn't created, you can restart the virtual machine by using one of the following commands:

shutdown -r now
init 6

Sunday, August 13, 2017

Qubole: Load multiple tables into Qubole Hive from a Data Store

API call to load multiple tables from a Qubole Data Store into Hive tables.


[rahul@local qubole]$ cat /databasescript 
#!/bin/bash

#Qubole API Key
AUTH="***********"
#Database Name
DB_NAME="***********"
#Host Name
DB_HOST="***********"
#User Name
DB_USER="***********"
#Password 
DB_PASS='***********'

echo $DB_PASS


## request table import from tap;
function tableImport() {

request_body=$(cat <<EOF
{
   "command_type":"DbImportCommand",
   "mode":"1",
   "hive_serde":"orc",
   "hive_table":"<HIVE TABLE NAME>.$1",
   "dbtap_id":"$2",
   "db_table":"$1",
   "db_parallelism":"1",
   "use_customer_cluster":"1",
   "customer_cluster_label":"Qubole_Data_Import",
   "tags":[" Data"]
}
EOF
)

echo $request_body
   curl -X POST \
-H "X-AUTH-TOKEN: $AUTH" \
-H "Content-Type:application/json" \
-d "$request_body" https://api.qubole.com/api/v1.2/commands/
}

##register database with tap
request_body=$(cat <<EOF
{
  "db_name":"$DB_NAME",
  "db_host":"$DB_HOST",
  "db_user":"$DB_USER",
  "db_passwd":"$DB_PASS",
  "db_type":"sqlserver",
  "db_location":"on-premise",
  "gateway_ip": "***********",
  "gateway_port": "***********",
  "gateway_username": "***********",
  "gateway_private_key": "***********"}

EOF
)

echo $KEY
ID=$(curl -s -X POST \
-H "X-AUTH-TOKEN: $AUTH" \
-H "Content-Type:application/json" \
-d "$request_body" https://api.qubole.com/api/v1.2/db_taps/ | jq .id)

#get the tables and call import
curl -s -H "X-AUTH-TOKEN: $AUTH" \
     -H "Content-Type:application/json" \
     https://api.qubole.com/api/v1.2/db_taps/$ID/tables | jq -r .[] | while read x; do  tableImport $x $ID; done

# can't delete the tap at the end unless we continuously poll for no active jobs;
STATUS="null"

while [ "$STATUS" = "null" ]
do
STATUS=$(curl  -s -X DELETE \
 -H "X-AUTH-TOKEN: $AUTH" \
 -H "Content-Type:application/json" \
 https://api.qubole.com/api/v1.2/db_taps/$ID | jq .status)
echo -n "."
sleep 5
done
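
Each DbImportCommand call returns a JSON document that includes an id; the progress of a submitted import can be polled through the commands endpoint (a sketch, assuming jq is available and the returned id is placed in CMD_ID):

CMD_ID=<id returned by the import call>
curl -s -H "X-AUTH-TOKEN: $AUTH" \
     -H "Content-Type:application/json" \
     https://api.qubole.com/api/v1.2/commands/$CMD_ID | jq .status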

Friday, January 30, 2015

Openstack - Auto evacuation Script

The following script will:
1.) Check for compute hosts which are down
2.) Check for the instances on those down hosts
3.) Check for compute hosts which are up
4.) Calculate the vCPU and memory needed for each instance on the down hosts
5.) Calculate the free vCPU and memory on the up hosts
6.) Find a suitable host for each instance from the down hosts
7.) Once a suitable compute host is found for an instance, the MySQL entries are modified and the instance is hard rebooted.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import time
import os
import re
import MySQLdb
import subprocess
from subprocess import Popen, PIPE


# Determine which compute nodes are down; compute_down_list is the list of down compute nodes.
# Returns a list of nodes which are down.
def select_compute_down_host():
    nova_service_list = os.popen("nova-manage service list 2> /dev/null").read().strip().split("\n")
    compute_down_list = []
    for compute_num in range(len(nova_service_list)):
        new_val = nova_service_list[compute_num]
        if 'nova-compute' in new_val:
            if 'enabled' in new_val:
                if 'XXX' in new_val:
                    compute_down_list.append(nova_service_list[compute_num].split()[1])
    if len(compute_down_list) == 0:
        print "No compute nodes are down, the program will exit!"
        exit(0)
    else:
        compute_down_list = list(set(compute_down_list))
    return compute_down_list


# Determine which compute nodes are up; compute_up_list is the list of up compute nodes.
# Returns a list of nodes which are up.
def select_compute_up_host():
    nova_service_list = os.popen("nova-manage service list 2> /dev/null").read().strip().split("\n")
    compute_up_list = []
    for compute_num in range(len(nova_service_list)):
        new_val = nova_service_list[compute_num]
        if 'nova-compute' in new_val:
            if 'enabled' in new_val:
                if ':-)' in new_val:
                    compute_up_list.append(nova_service_list[compute_num].split()[1])
    if len(compute_up_list) == 0:
        print "No compute nodes are up, the program will exit!"
        exit(0)
    else:
        compute_up_list = list(set(compute_up_list))
    return compute_up_list



# Determine which instances are down; down_instances_list is the list of instances which are down.
# Takes a list of down nodes and returns a list of tuples of instances which are down.
def instance_in_down_node(down_nodes, host='controller', user='nova', passwd='nova4key', db='nova'):
    instances_dict = {}
    down_instances_list = []
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    for node in down_nodes:
        sql_select = "select uuid from instances where host = '%s' and vm_state = 'active'" % (node)
        cursor.execute(sql_select)
        instances_name = cursor.fetchall()
        if instances_name == ():
            pass
        else:
            instances_dict[node] = instances_name
            down_instances_list.append(instances_dict[node])
    if down_instances_list == []:
        print '\nNo virtual machines are running on the down compute nodes\n'
        exit()

    # for node in down_instances_list:
    #     for instance in node:
    #         print instance[0]
    #         usage_of_instance = usage_instance(instance[0])
    #         print usage_of_instance[0], usage_of_instance[1], usage_of_instance[2]
    cursor.close()
    connection_mysql.commit()
    connection_mysql.close()
    return down_instances_list



# Determines the resource usage of a down instance.
# Takes an instance UUID and returns its vcpus, memory and instance type id as a tuple.
def usage_instance(instances, host='controller', user='nova', passwd='nova4key', db='nova'):
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    instance_dict = {}
    type_down_instance_list = []
    usage_type_down_instance_list = []
    instances_type_details = []
    # print ("Checking Usage of Instance {}".format(instances))
    sql_select = "select instance_type_id from instances where uuid = '%s' " % (instances)
    cursor.execute(sql_select)
    instances_type = cursor.fetchall()
    instance_dict[instances] = instances_type
    type_down_instance_list.append(instance_dict[instances])
    for node in type_down_instance_list:
        for instance in node:
            type_instance = instance[0]

    sql_select = "select vcpus,memory_mb,id from instance_types where id = '%d'" % (type_instance)
    cursor.execute(sql_select)
    instances_type_details = cursor.fetchall()
    instance_dict[instances_type_details] = instances_type_details
    usage_type_down_instance_list.append(instance_dict[instances_type_details])

    for instance in usage_type_down_instance_list:
        for instance_details in instance:
            # print instance_details[0], instance_details[1], instance_details[2]
            return instance_details

    cursor.close()
    connection_mysql.commit()
    connection_mysql.close()



# Determine the resources left on a compute node which is up.
# Takes a node name and returns the free vcpus and free memory of that node.
def usage_of_compute_node(node, host='controller', user='nova', passwd='nova4key', db='nova'):
    instance_dict = {}
    usage_instance_detail = []
    free = []
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    sql_select = "select vcpus,vcpus_used,free_ram_mb from compute_nodes where hypervisor_hostname = '%s'" % (node)
    cursor.execute(sql_select)
    instance_usage = cursor.fetchall()
    instance_dict[node] = instance_usage
    usage_instance_detail.append(instance_dict[node])

    for node in usage_instance_detail:
        for detail in node:
            free_vcpu = (detail[0] - detail[1])
            free_mem = detail[2]
            # print ("Free Vcpu {} : : Free Memory {}".format(free_vcpu, free_mem))
            free.append(free_vcpu)
            free.append(free_mem)
    cursor.close()
    connection_mysql.commit()
    connection_mysql.close()
    return free

# Update the database with the node's usage.
# Takes an instance UUID and a node name; the node's record is updated to include the resource usage of the new instance.
def update_compute_node_usage(instance, node, host='controller', user='nova', passwd='nova4key', db='nova'):
    free_instance = []
    free_node = []
    instance_dict = {}
    instance_details = []
    node_details = []
    free_instance = usage_instance(instance)
    instance_memory = free_instance[1]
    instance_vcpu = free_instance[0]
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    sql_select = "select root_gb from instances where uuid = '%s'" % (instance)
    cursor.execute(sql_select)
    storage = cursor.fetchall()
    instance_dict[instance] = storage
    instance_details.append(instance_dict[instance])
    for instance in instance_details:
        for details in instance:
            instance_space = details[0]

    # print ("Instance details Vcpu={},Memory={},Space={}".format(instance_vcpu, instance_memory, instance_space))

    sql_select = "select vcpus_used,memory_mb_used,free_ram_mb,local_gb_used,running_vms,disk_available_least from compute_nodes where hypervisor_hostname = '%s'" % (node)
    cursor.execute(sql_select)
    storage = cursor.fetchall()
    instance_dict[node] = storage
    node_details.append(instance_dict[node])
    for host in node_details:
        for details in host:
            node_vcpus_used = details[0]
            node_memory_mb_used = details[1]
            node_free_ram_mb = details[2]
            node_local_gb = details[3]
            node_running_vms = details[4]
            node_disk_available_least = details[5]

    # Add the incoming instance's footprint to the node's usage counters.
    node_vcpus_used = node_vcpus_used + instance_vcpu
    node_memory_mb_used = node_memory_mb_used + instance_memory
    node_free_ram_mb = node_free_ram_mb - instance_memory
    node_local_gb = node_local_gb + instance_space
    node_running_vms = node_running_vms + 1
    node_disk_available_least = node_disk_available_least - instance_space

    sql_select = "update compute_nodes set vcpus_used = '%s',memory_mb_used = '%s',free_ram_mb = '%s',local_gb_used = '%s',running_vms = '%s',disk_available_least = '%s' where hypervisor_hostname = '%s'" % (node_vcpus_used, node_memory_mb_used, node_free_ram_mb, node_local_gb, node_running_vms, node_disk_available_least, node)
    print sql_select
    cursor.execute(sql_select)
    storage = cursor.fetchall()
    connection_mysql.commit()
    cursor.close()
    connection_mysql.commit()
    connection_mysql.close()



# Takes an instance and a node. If the node has enough resources for the instance, the instance is moved onto that node.
def rescue_instance(instance, node, host='controller', user='nova', passwd='nova4key', db='nova'):
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    print instance, node
    node_usage = usage_of_compute_node(node)
    instance_usage = usage_instance(instance)
    node_vcpu = node_usage[0]
    node_memory = node_usage[1]
    instance_vcpu = instance_usage[0]
    instance_memory = instance_usage[1]
    # print ("Node Vcpu {} Node Memory {}".format(node_vcpu, node_memory))
    # print ("Instance Vcpu {} Instance Memory {}".format(instance_vcpu, instance_memory))
    if node_vcpu > instance_vcpu:
        if node_memory > instance_memory:
            print "Transfer possible"
            # Mark the instance as stopped on its old (down) host.
            sql_select = "update instances set vm_state = 'stopped',power_state = 4 where uuid = '%s'" % (instance)
            print sql_select
            cursor.execute(sql_select)
            storage = cursor.fetchall()
            connection_mysql.commit()
            # Point the instance at its new host.
            sql_select = "update instances set host = '%s' where uuid = '%s' and vm_state = 'stopped'" % (node, instance)
            print sql_select
            cursor.execute(sql_select)
            storage = cursor.fetchall()
            connection_mysql.commit()
            # Hard reboot so the instance is rebuilt on the new host.
            instance_reboot = "source /root/admin-openrc.sh;nova reboot --hard %s" % (instance)
            print instance_reboot
            subprocess.call(instance_reboot, shell=True, stderr=subprocess.PIPE)
            update_compute_node_usage(instance, node)
            return 0
        else:
            print "Not Possible"
    else:
        print "not possible"
    cursor.close()
    connection_mysql.commit()
    connection_mysql.close()
    return 1

# Reset the usage counters of the down hosts in the database.
def update_down_host(host='controller', user='nova', passwd='nova4key', db='nova'):
    down_nodes = select_compute_down_host()
    instance_dict = {}
    down_host_usage = []
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    for host in down_nodes:
        print ("The Node {} is Down".format(host))
        sql_select = "select memory_mb_used,vcpus_used,local_gb_used,running_vms,free_ram_mb,memory_mb,free_disk_gb,local_gb from compute_nodes where hypervisor_hostname = '%s'" % (host)
        print sql_select
        cursor.execute(sql_select)
        node = cursor.fetchall()
        instance_dict[host] = node
        down_host_usage.append(instance_dict[host])
        for node in down_host_usage:
            for detail in node:
                memory_used = detail[0]
                vcpu_used = detail[1]
                local_gb_used = detail[2]
                running_vm = detail[3]
                free_ram = detail[4]
                memory_mb = detail[5]
                free_disk_gb = detail[6]
                local_gb = detail[7]
        # The down host has been emptied, so reset its usage to the baseline values.
        memory_used = 512
        vcpu_used = 0
        local_gb_used = 0
        running_vm = 0
        free_ram = memory_mb - memory_used
        free_disk_gb = local_gb - local_gb_used
        sql_select = "update compute_nodes set memory_mb_used = '%s',vcpus_used = '%s',local_gb_used = '%s',running_vms = '%s',free_ram_mb = '%s',free_disk_gb = '%s' where hypervisor_hostname = '%s'" % (memory_used, vcpu_used, local_gb_used, running_vm, free_ram, free_disk_gb, host)
        print sql_select
        cursor.execute(sql_select)
        connection_mysql.commit()
    cursor.close()
    connection_mysql.commit()
    connection_mysql.close()





def select_compute_down_host_instances(host='controller', user='nova', passwd='nova4key', db='nova'):
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    instances_dict = {}

    down_nodes = []
    # ================================
    # Scanning for nodes which are down
    # ================================
    print "Scanning For Nodes Which Are Down.."
    down_nodes = select_compute_down_host()
    for host in down_nodes:
        print ("The Node {} is Down".format(host))

    instance_down = []
    # ====================================
    # Scanning for instances which are down
    # ====================================
    instance_down = instance_in_down_node(down_nodes)
    for node in instance_down:
        for instance in node:
            print ("The Instance {} is Down ".format(instance[0]))
            usage_of_instance = usage_instance(instance)
            # print ("Vcpus {} : : Memory {} : : Instance_type {}".format(usage_of_instance[0], usage_of_instance[1], usage_of_instance[2]))

    up_nodes = []
    free_resource_node = []
    # ==================================
    # Scanning for nodes which are up
    # ==================================
    up_nodes = select_compute_up_host()
    for node in up_nodes:
        print ("The Node {} is Up".format(node))
        free_resource_node = usage_of_compute_node(node)
        # print ("Free Vcpus:{} , Free Memory:{}".format(free_resource_node[0], free_resource_node[1]))

    # =====================================
    # Rescue the instances from the down nodes
    # =====================================
    for node in instance_down:
        for instance in node:
            for live_node in up_nodes:
                success = rescue_instance(instance[0], live_node)
                if success == 0:
                    break
                else:
                    continue

    update_down_host()

    cursor.close()
    connection_mysql.commit()
    connection_mysql.close()


if __name__ == "__main__":
    select_compute_down_host_instances()
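
A minimal sketch of running the check periodically, assuming the script is saved as /root/auto_evacuate.py on the controller (the path and interval are only illustrative):

# /etc/cron.d/auto-evacuate: run the evacuation check every five minutes
*/5 * * * * root /usr/bin/python /root/auto_evacuate.py >> /var/log/auto_evacuate.log 2>&1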

Tuesday, January 27, 2015

Clustering CoreOS Docker Hosts Using Fleet

Once the CoreOS Docker hosts are clustered, we will be able to manage the Docker hosts from a single server.

Getting the new Discovery URL.

curl -w "\n" https://discovery.etcd.io/new

We will get something like

https://discovery.etcd.io/16043bf6be5ecf5c42a0bcc0d9237954

Make sure we use the new discovery URL in config-core.yaml.

Configure the new config file with the URL:
>>cat config-core.yaml
===============================================
#cloud-config
coreos:
  etcd:
    # generate a new token for each unique cluster from https://discovery.etcd.io/new
    #discovery: https://discovery.etcd.io/<token>
    discovery: https://discovery.etcd.io/a41bcad1e117d272d47eb938e060e6c8
    # multi-region and multi-cloud deployments need to use $public_ipv4
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
write_files:
  - path: /etc/resolv.conf
    permissions: 0644
    owner: root
    content: |
      nameserver 8.8.8.8
ssh_authorized_keys:
  # include one or more SSH public keys
  - ssh-rsa AA7dasdlakdjlksajdlkjasa654d6s5a4d6sa5d465df46sdg4rdfghfdhdg74wefg4sd32f13f468e Generated-by-Nova
===============================================
If we are using OpenStack, make sure that the Neutron metadata service is running properly, because the $private_ipv4 attribute works only if the instance gets its metadata correctly.

Use the above config file to start the CoreOS VMs. Once the CoreOS VMs are running, we can use the following commands to check the fleet status.

List the machines in the Fleet
>> fleetctl list-machines
MACHINE         IP              METADATA
0a1cad1d...     192.36.0.65      -
220f3e64...     192.36.0.67      -
31dbd5ca...     192.36.0.66      -

If we need to add a new server to the same fleet, we can get the discovery URL with:

grep DISCOVERY /run/systemd/system/etcd.service.d/20-cloudinit.conf

Using the above command we can get the discovery URL from a running machine in the fleet.

Once we have the URL, start the new instance with the same config-core.yaml.
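
Once the cluster is up, units can be scheduled across it with fleetctl. Below is a minimal sketch; the unit name and container image are only illustrative, not part of the original setup.

>>cat hello.service
[Unit]
Description=Hello World container

[Service]
ExecStartPre=-/usr/bin/docker rm -f hello
ExecStart=/usr/bin/docker run --name hello busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"
ExecStop=/usr/bin/docker stop hello

>> fleetctl start hello.service
>> fleetctl list-units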

Wednesday, November 19, 2014

Docker+Juno Giving MissingSectionHeaderError while creating docker instance

I was able to configure Docker with Juno by following the instructions in http://www.adminz.in/2014/11/integrating-docker-into-juno-nova.html

First I got a time-out error with the Docker service and the nova service was not starting up, so I edited connectionpool.py as described in the following URL: http://www.adminz.in/2014/11/docker-n...

After that the service was running fine, but while launching an instance I got the following error.

2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]   File "/usr/lib/python2.7/site-packages/novadocker/virt/docker/driver.py", line 404, in spawn
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]     self._start_container(container_id, instance, network_info)
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]   File "/usr/lib/python2.7/site-packages/novadocker/virt/docker/driver.py", line 376, in _start_container
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]     instance_id=instance['name'])
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] InstanceDeployFailure: Cannot setup network: Unexpected error while running command.
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Command: sudo nova-rootwrap /etc/nova/rootwrap.conf ip link add name tapb97f8d6e-a6 type veth peer name nsb97f8d6e-a6
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Exit code: 1
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Stdout: u''
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Stderr: u'Traceback (most recent call last):\n  File "/usr/bin/nova-rootwrap", line 10, in <module>\n    sys.exit(main())\n  File "/usr/lib/python2.7/site-packages/oslo/rootwrap/cmd.py", line 91, in main\n    filters = wrapper.load_filters(config.filters_path)\n  File "/usr/lib/python2.7/site-packages/oslo/rootwrap/wrapper.py", line 120, in load_filters\n    filterconfig.read(os.path.join(filterdir, filterfile))\n  File "/usr/lib64/python2.7/ConfigParser.py", line 305, in read\n    self._read(fp, filename)\n  File "/usr/lib64/python2.7/ConfigParser.py", line 512, in _read\n    raise MissingSectionHeaderError(fpname, lineno, line)\nConfigParser.MissingSectionHeaderError: File contains no section headers.\nfile: /etc/nova/rootwrap.d/docker.filters, line: 1\n\' [Filters]\\n\'\n'
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]

FIX
The issue was caused by a blank space before the [Filters] entry in the docker.filters file under the rootwrap.d directory on the Docker server. Once the space was removed, the Docker instance launched correctly.
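
A quick way to spot and strip the offending leading whitespace (a sketch; adjust the path if your filter file lives elsewhere):

# Show non-printing characters so leading spaces become visible
cat -A /etc/nova/rootwrap.d/docker.filters | head -n 3
# Strip leading whitespace from every line, in place
sed -i 's/^[[:space:]]*//' /etc/nova/rootwrap.d/docker.filters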

[root@docker ~]# docker ps
CONTAINER ID        IMAGE                    COMMAND             CREATED             STATUS              PORTS               NAMES
d37ea1ce08b9        tutum/wordpress:latest   "/run.sh"           16 seconds ago      Up 15 seconds                           nova-73a4f67a-b6d0-4251-a292-d28c5137e6d4
[root@docker ~]#

Tuesday, November 18, 2014

Integrating Docker into Juno Nova Service as a Hypervisor


Installing Python Modules Needed for Docker
===========================================
yum install -y python-six
yum install -y python-pbr
yum install -y python-babel
yum install -y python-openbabel
yum install -y python-oslo-*
yum install -y python-docker-py

Installing Latest Version of Docker
==================================
yum install wget
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-1.2.0-4.el7.centos.x86_64.rpm
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-devel-1.2.0-4.el7.centos.x86_64.rpm
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-pkg-devel-1.2.0-4.el7.centos.x86_64.rpm
yum install docker-*

Starting the Docker Service
===========================
systemctl start docker
systemctl status docker
systemctl enable docker


Installing and configuring Nova-Docker Driver
=============================================
yum install -y python-pip git
pip install -e git+https://github.com/stackforge/nova-docker#egg=novadocker
cd src/novadocker/
python setup.py install


Install and configure Neutron Service on the Docker Server
======================================================
http://www.adminz.in/2014/10/openstack-juno-part-6-neutron.html

Install and configure Nova Service to use Docker
======================================================
Installing Packages
yum install openstack-nova-compute -y ; usermod -G docker nova


openstack-config --set /etc/nova/nova.conf DEFAULT compute_driver novadocker.virt.docker.DockerDriver


openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host controller
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_password guest

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/nova/nova.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password mar4nova

#On Controller1 #Public IP of the controller server. Hostnames don't work. Configure the my_ip option to use the management interface IP address of the controller node
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.1.15.144
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.1.15.144
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://10.1.15.140:6080/vnc_auto.html

openstack-config --set /etc/nova/nova.conf glance host controller

[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver

systemctl enable openstack-nova-compute.service
systemctl start openstack-nova-compute.service

Configure Glance to Include Docker Images
==========================================
On Controller server
# Supported values for the 'container_format' image attribute
container_formats=ami,ari,aki,bare,ovf,ova,docker

systemctl restart openstack-glance-api

Creating Custom Rootwrap Filters. On Docker Server
=================================
mkdir /etc/nova/rootwrap.d/
cat << EOF >> /etc/nova/rootwrap.d/docker.filters
# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root
EOF

chgrp nova /etc/nova/rootwrap.d -R
chmod 640 /etc/nova/rootwrap.d -R

systemctl restart openstack-nova-compute

If you face a time-out issue with Nova, try the fix at the following URL:

http://www.adminz.in/2014/11/docker-nova-time-out-error.html

On the Docker server, add a Docker image:
docker pull tutum/wordpress
docker save tutum/wordpress | glance image-create --is-public=True --container-format=docker --disk-format=raw --name tutum/wordpress
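
After the image is in Glance, a container can be launched through Nova like any other instance. A quick sketch (the flavor and network id are assumptions from a typical setup):

nova boot --image tutum/wordpress --flavor m1.small --nic net-id=<tenant-net-id> wordpress-docker
nova list
docker ps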

Monday, November 10, 2014

Docker + Nova Time Out Error

http://paste.openstack.org/show/131728/

Sample Error
==========
    out = f(*args, **kwds)
  File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 468, in get
    return self.request('GET', url, **kwargs)
  File "/usr/lib/python2.7/site-packages/novadocker/virt/docker/client.py", line 36, in wrapper
    out = f(*args, **kwds)
  File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 456, in request
    resp = self.send(prep, **send_kwargs)
 File "/usr/lib/python2.7/site-packages/novadocker/virt/docker/client.py", line 36, in wrapper
    out = f(*args, **kwds)
 File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 559, in send
    r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 327, in send
    timeout=timeout
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 516, in urlopen
    body=body, headers=headers)
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 299, in _make_request
    timeout_obj = self._get_timeout(timeout)
 File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 279, in _get_timeout
    return Timeout.from_float(timeout)
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/timeout.py", line 152, in from_float
    return Timeout(read=timeout, connect=timeout)
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/timeout.py", line 95, in __init__
    self._connect = self._validate_timeout(connect, 'connect')
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/timeout.py", line 125, in _validate_timeout
    "int or float." % (name, value))
ValueError: Timeout value connect was Timeout(connect=10, read=10, total=None), but it must be an int or float.



To fix the problem I had to directly modify "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py":

def _get_timeout(self, timeout):
    """ Helper that always returns a :class:`urllib3.util.Timeout` """
    if timeout is _Default:
        return self.timeout.clone()

    if isinstance(timeout, Timeout):  # <== the Timeout passed in is not a urllib3 timeout
        return timeout.clone()
    else:
        # User passed us an int/float. This is for backwards compatibility,
        # can be removed later
        return Timeout.from_float(timeout._connect)  # <== manually changed to pass _connect

Removing Nova and Neutron Services from MySQL

Sometimes we need to remove the services listed in Nova or Neutron because they are duplicated or have been removed from the system entirely. We can do that in the following way.

Removing a Nova service from the MySQL database:

>>nova service-list
>>nova hypervisor-list

mysql> use nova;
mysql> SELECT id, created_at, updated_at, hypervisor_hostname FROM compute_nodes;

mysql> DELETE FROM compute_node_stats WHERE compute_node_id='1';
mysql> DELETE FROM compute_nodes WHERE hypervisor_hostname='compute1';
mysql> DELETE FROM services WHERE host='compute1';



Removing a Neutron service from the MySQL database:

>>neutron agent-list

mysql> use neutron;
mysql> DELETE FROM agents WHERE host='compute1';

Thursday, November 6, 2014

Parse Error Caused by Blank Space Before Entries

I noticed that in OpenStack Juno, if there is whitespace at the beginning of lines containing 'key = value' entries, we get a parse error in the logs.

Sample Error. 

Nov 06 13:29:42 controller.novalocal neutron-server[14563]: File "/usr/lib64/python2.7/argparse.py", line 1794...ion
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: action(self, namespace, argument_values, option_string)
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: File "/usr/lib/python2.7/site-packages/oslo/config...l__
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: ConfigParser._parse_file(values, namespace)
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: File "/usr/lib/python2.7/site-packages/oslo/config...ile
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: raise ConfigFileParseError(pe.filename, str(pe))
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: oslo.config.cfg.ConfigFileParseError: Failed to pa...ue'

Nov 06 13:29:42 controller.novalocal systemd[1]: neutron-server.service: main process exited, code=exited, st...LURE

The solution is to find the offending line and remove the blank space.
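
A quick way to locate such lines (a sketch, using neutron.conf as an example; point it at whichever config file the traceback names):

# List lines that begin with whitespace followed by a key = value entry
grep -nE '^[[:space:]]+[A-Za-z_]+[[:space:]]*=' /etc/neutron/neutron.conf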

Tuesday, November 4, 2014

Docker with Openstack Giving Error "ova.openstack.common.threadgroup ValueError: Timeout value connect was Timeout"

When I tried to integrate Docker into OpenStack Juno, I was not able to start the nova service on the compute node. I followed https://wiki.openstack.org/wiki/Docker .

When I remove or comment out #compute_driver = novadocker.virt.docker.DockerDriver from the nova configuration, the service is able to start but the pid gets killed soon after.

I was getting the following error while trying to start the nova service.

Complete Error.
http://paste.openstack.org/show/128805/

Sample Error
****2014-11-03 14:14:08.138 5264 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/timeout.py", line 125, in _validate_timeout
2014-11-03 14:14:08.138 5264 TRACE nova.openstack.common.threadgroup     "int or float." % (name, value))
2014-11-03 14:14:08.138 5264 TRACE nova.openstack.common.threadgroup ValueError: Timeout value connect was Timeout(connect=10, read=10, total=None), but it must be an int or float.
2014-11-03 14:14:08.138 5264 TRACE nova.openstack.common.threadgroup****


The issue has been fixed: I hadn't installed the Docker requirements. Once I installed them and rebooted the server, it worked fine.

For testing I used wildcards (*) for the installation; we just need to install the correct packages listed at https://github.com/stackforge/nova-do...

yum install *pbr*
yum install *six*
yum install *babel*
yum install *oslo*
yum install docker-py

Monday, October 27, 2014

Openstack Juno - Neutron HA using VRRP (Keepalived)


First configure two Neutron servers; let them be network and network2.
http://www.adminz.in/2014/10/openstack-juno-part-5-neutron.html

Then install Keepalived on both Neutron servers.

#Add the following entries on both neutron servers
#in /etc/neutron/neutron.conf
l3_ha = True
#And the HA Scheduler has to be used :
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler


On the controller server, update the database:
neutron-db-manage --config-file=/etc/neutron/neutron.conf  --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini upgrade head

mkdir /etc/neutron/rootwrap.d
cp /usr/share/neutron/rootwrap/l3.filters /etc/neutron/rootwrap.d/

Now restart the OpenStack services on all the controller and neutron nodes.



On the controller server, create a new set of network settings:

source admin-openrc.sh
neutron net-create ext-net --shared --router:external True --provider:physical_network external --provider:network_type flat
neutron subnet-create ext-net --name ext-subnet --allocation-pool start=10.1.0.101,end=10.1.0.200 --disable-dhcp --gateway 10.1.0.42 10.1.0.0/24


To create the tenant network
neutron net-create cli-net
neutron subnet-create cli-net --name cli-subnet --gateway 192.168.1.1 192.168.1.0/24
neutron router-create cli-router
neutron router-interface-add cli-router cli-subnet
neutron router-gateway-set cli-router ext-net


Now if we check both Neutron nodes, we can see the router namespaces:

[root@network ~]# ip netns
qrouter-26aed9ea-b9d5-4427-a3e4-9e75be3e1bfa
[root@network ~]#

[root@network2 ~]# ip netns
qrouter-26aed9ea-b9d5-4427-a3e4-9e75be3e1bfa
[root@network2 ~]#


[root@network ~]#  ip netns exec qrouter-26aed9ea-b9d5-4427-a3e4-9e75be3e1bfa ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
10: ha-224b2c85-81: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:42:4d:52 brd ff:ff:ff:ff:ff:ff
    inet 169.254.192.8/18 brd 169.254.255.255 scope global ha-224b2c85-81
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe42:4d52/64 scope link
       valid_lft forever preferred_lft forever
11: qr-842e3e41-3a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:13:bc:63 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 scope global qr-842e3e41-3a
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe13:bc63/64 scope link
       valid_lft forever preferred_lft forever
12: qg-04d4c06e-49: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:b7:19:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.1.0.101/24 scope global qg-04d4c06e-49
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:feb7:19b8/64 scope link
       valid_lft forever preferred_lft forever
[root@network ~]#
[root@network ~]#



[root@network2 ~]# ip netns exec qrouter-26aed9ea-b9d5-4427-a3e4-9e75be3e1bfa ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
16: ha-37517361-ec: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:6f:a0:11 brd ff:ff:ff:ff:ff:ff
    inet 169.254.192.7/18 brd 169.254.255.255 scope global ha-37517361-ec
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe6f:a011/64 scope link
       valid_lft forever preferred_lft forever
17: qr-842e3e41-3a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:13:bc:63 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 scope global qr-842e3e41-3a
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe13:bc63/64 scope link
       valid_lft forever preferred_lft forever
18: qg-04d4c06e-49: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:b7:19:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.1.0.101/24 scope global qg-04d4c06e-49
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:feb7:19b8/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
[root@network2 ~]#


In the above output you can see that the devices qg-04d4c06e-49 and qr-842e3e41-3a have been created on both servers.
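
To confirm which L3 agents are hosting the HA router, the router can be checked from the controller node. A quick sketch (cli-router is the router created above):

source admin-openrc.sh
neutron l3-agent-list-hosting-router cli-router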

Wednesday, October 22, 2014

Openstack Juno Part 6 - Neutron Configuration on Compute Service

Installing the packages

yum install openstack-neutron-ml2 openstack-neutron-openvswitch ipset -y


Configure the Service 
#Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password mar4neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_password guest

openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True

#Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network interface on your compute node. This guide uses 10.0.1.31 for the IP address of the instance tunnels network interface on the first compute node.
#Dedicated Ip for Tunneling in Compute Node

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 10.0.0.214
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True


systemctl enable openvswitch.service
systemctl start openvswitch.service


Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_strategy keystone
openstack-config --set /etc/nova/nova.conf neutron admin_tenant_name service
openstack-config --set /etc/nova/nova.conf neutron admin_username neutron
openstack-config --set /etc/nova/nova.conf neutron admin_password mar4neutron
openstack-config --set /etc/nova/nova.conf neutron admin_auth_url http://controller:35357/v2.0

openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

#Due to a packaging bug, the Open vSwitch agent initialization script explicitly looks for the Open vSwitch plug-in #configuration file rather than a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file. Run the #following commands to resolve this issue:

cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service


Starting the Services
systemctl enable neutron-openvswitch-agent.service
systemctl restart neutron-openvswitch-agent.service
systemctl restart openstack-nova-compute.service
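
To verify that the Open vSwitch agent on the compute node has registered, the agent list can be checked from the controller. A quick sketch:

source admin-openrc.sh
neutron agent-list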

Tuesday, October 21, 2014

Openstack Juno Part 5 - Neutron configuring Network Node

Installing the Packages

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch ipset  -y

Configuring  the Service
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password mar4neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_password guest


openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True


#verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.
#Comment out any lines in the [service_providers] section.

openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT use_namespaces True

#We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/l3_agent.ini to assist with #troubleshooting.


openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT use_namespaces True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf

echo "dhcp-option-force=26,1454" >> /etc/neutron/dnsmasq-neutron.conf
chown neutron:neutron /etc/neutron/dnsmasq-neutron.conf

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url http://controller:5000/v2.0
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_region regionOne
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_tenant_name service
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_user neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_password mar4neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret mar4meta

#We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/metadata_agent.ini to assist with #troubleshooting.

#Perform the next two steps on the controller node.
#On the controller node, configure Compute to use the metadata service:
#Replace METADATA_SECRET with the secret you chose for the metadata proxy.

openstack-config --set /etc/nova/nova.conf DEFAULT service_neutron_metadata_proxy true
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_metadata_proxy_shared_secret mar4meta

On the controller node, restart the Compute API service:
systemctl restart openstack-nova-api.service

# To configure the Modular Layer 2 (ML2) plug-in

 # Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network #interface on your network node. This guide uses 10.0.1.21 for the IP address of the instance tunnels network interface #on the network node.
#Dedicated IP for tunneling in network node
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks external

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 10.0.0.212
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs bridge_mappings external:br-ex


openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True


systemctl enable openvswitch.service
systemctl start openvswitch.service

#Add the external bridge:
ovs-vsctl add-br br-ex
#Add a port to the external bridge that connects to the physical external network interface:
#Replace INTERFACE_NAME with the actual interface name. For example, eth2 or ens256.
ovs-vsctl add-port br-ex eth1
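
The resulting bridge and port layout can be verified with the commands below (a quick sketch):

ovs-vsctl show
ovs-vsctl list-ports br-ex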


#Depending on your network interface driver, you may need to disable Generic Receive Offload (GRO) to achieve #suitable throughput between your instances and the external network.
#To temporarily disable GRO on the external network interface while testing your environment:
# ethtool -K INTERFACE_NAME gro off



ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service


Starting the services

systemctl enable neutron-openvswitch-agent.service
systemctl enable neutron-l3-agent.service
systemctl enable neutron-dhcp-agent.service
systemctl enable neutron-metadata-agent.service
systemctl enable neutron-ovs-cleanup.service
systemctl start neutron-openvswitch-agent.service
systemctl start neutron-l3-agent.service
systemctl start neutron-dhcp-agent.service
systemctl start neutron-metadata-agent.service

Monday, October 20, 2014

Openstack Juno + Docker error "Docker daemon is not running or is not reachable"

I was getting the following error while integrating Docker with OpenStack Juno.

"2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     _('Docker daemon is not running or is not reachable'
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup NovaException: Docker daemon is not running or is not reachable (check the rights on /var/run/docker.sock)"

I tried changing the permissions on docker.sock, but that didn't help. When I upgraded Docker to version 1.2 the issue was fixed. The Docker version that ships with CentOS is a bit old; we can get RPMs of a newer Docker build for CentOS 7 from the CentOS community build system (cbs.centos.org).

Download the following RPMs:

wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-1.2.0-4.el7.centos.x86_64.rpm
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-devel-1.2.0-4.el7.centos.x86_64.rpm
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-pkg-devel-1.2.0-4.el7.centos.x86_64.rpm

Install the RPMs

In the same directory:
yum install docker-1.2.0
yum install docker*
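
After the upgrade, the daemon can be restarted and the client/daemon versions and socket permissions verified (a quick sketch):

systemctl restart docker
docker version
ls -l /var/run/docker.sock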


Error
====
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 125, in wait
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     x.wait()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 47, in wait
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     return self.thread.wait()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 173, in wait
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     return self._exit_event.wait()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     return hubs.get_hub().switch()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 293, in switch
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     return self.greenlet.switch()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 212, in main
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     result = function(*args, **kwargs)
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 492, in run_service
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     service.start()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/service.py", line 164, in start
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     self.manager.init_host()
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1125, in init_host
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     self.driver.init_host(host=self.host)
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup   File "/etc/nova/src/novadocker/novadocker/virt/docker/driver.py", line 82, in init_host
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup     _('Docker daemon is not running or is not reachable'
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup NovaException: Docker daemon is not running or is not reachable (check the rights on /var/run/docker.sock)
2014-10-20 14:24:21.940 2987 TRACE nova.openstack.common.threadgroup
2014-10-20 14:24:22.876 2995 INFO oslo.messaging._drivers.impl_rabbit [req-aadcbda1-ccd1-4b49-8dac-43ce49afa0fa ] Connecting to AMQP server on controller:5672
2014-10-20 14:24:22.901 2995 INFO oslo.messaging._drivers.impl_rabbit [req-aadcbda1-ccd1-4b49-8dac-43ce49afa0fa ] Connected to AMQP server on controller:5672
2014-10-20 14:24:22.906 2995 INFO oslo.messaging._drivers.impl_rabbit [req-aadcbda1-ccd1-4b49-8dac-43ce49afa0fa ] Connecting to AMQP server on controller:5672
2014-10-20 14:24:22.919 2995 INFO oslo.messaging._drivers.impl_rabbit [req-aadcbda1-ccd1-4b49-8dac-43ce49afa0fa ] Connected to AMQP server on controller:5672
2014-10-20 14:24:22.954 2995 ERROR nova.openstack.common.threadgroup [-] Docker daemon is not running or is not reachable (check the rights on /var/run/docker.sock)
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup Traceback (most recent call last):
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 125, in wait
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     x.wait()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/threadgroup.py", line 47, in wait
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     return self.thread.wait()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 173, in wait
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     return self._exit_event.wait()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/event.py", line 121, in wait
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     return hubs.get_hub().switch()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/hubs/hub.py", line 293, in switch
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     return self.greenlet.switch()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/eventlet/greenthread.py", line 212, in main
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     result = function(*args, **kwargs)
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/openstack/common/service.py", line 492, in run_service
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     service.start()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/service.py", line 164, in start
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     self.manager.init_host()
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1125, in init_host
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     self.driver.init_host(host=self.host)
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup   File "/etc/nova/src/novadocker/novadocker/virt/docker/driver.py", line 82, in init_host
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup     _('Docker daemon is not running or is not reachable'
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup NovaException: Docker daemon is not running or is not reachable (check the rights on /var/run/docker.sock)
2014-10-20 14:24:22.954 2995 TRACE nova.openstack.common.threadgroup