
Friday, February 13, 2015

Running a Script on Client Servers Using Puppet

Enable the Puppet File Server
=============================
Add the following entries to /etc/puppet/fileserver.conf:
[extra_files]
path /var/lib/puppet/bucket
allow *


The script is stored in the path mentioned above
================================================
[root@master ~]# ll /var/lib/puppet/bucket/
total 4
-rw-r--r--. 1 root root 39 Feb 10 16:45 startup.sh

In the manifest below, the script is first fetched from the master and saved to a local file, and then executed
===============================================================================================================
[root@master ~]# cat /etc/puppet/manifests/site.pp
node "client" {
file { '/tmp/startup.sh':
          owner => 'root',
          group => 'root',
          mode => '700',
          source => 'puppet:///extra_files/startup.sh',
       }
exec    {'run_startup':
        command => '/tmp/startup.sh',
        }
}
[root@master ~]#

Tuesday, February 10, 2015

Puppet Master-Client Setup/Usage

Puppet is a system for automating system administration tasks. The master server holds the client configurations; on each client an agent runs, fetches its configuration from the master, and applies it.

Environment
Master and client run on CentOS 7.

Open port 8140 in the firewall and set SELinux to permissive mode, for example as shown below.
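
A minimal sketch for CentOS 7 with firewalld (adjust if you use a different firewall):

firewall-cmd --permanent --add-port=8140/tcp
firewall-cmd --reload
setenforce 0                                                    # permissive until next reboot
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config  # persist the setting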

Installing the packages
=======================
rpm -ivh https://yum.puppetlabs.com/el/7/products/x86_64/puppetlabs-release-7-11.noarch.rpm
yum install -y puppet-server

Start the service
============
systemctl start  puppetmaster.service
puppet resource service puppetmaster ensure=running enable=true
--------------------------
Notice: /Service[puppetmaster]/enable: enable changed 'false' to 'true'
service { 'puppetmaster':
  ensure => 'running',
  enable => 'true',
}
[root@master ~]#

The certificate and keys will now have been created
===================================================
[root@master ~]# ll /var/lib/puppet/ssl/certs
total 8
-rw-r--r--. 1 puppet puppet 2013 Feb  9 14:48 ca.pem
-rw-r--r--. 1 puppet puppet 2098 Feb  9 14:48 master.example.com.novalocal.pem
[root@master ~]#
[root@master ~]# ll /var/lib/puppet/ssl/private_keys/
total 4
-rw-r--r--. 1 puppet puppet 3243 Feb  9 14:48 master.example.com.novalocal.pem
[root@master ~]#


Add the following entries to the file below # The cert name is the file name in /var/lib/puppet/ssl/certs, minus the .pem extension
================================================
vim /etc/puppet/puppet.conf
[master]
certname = master.example.com.novalocal
autosign = true

Restart the Service
systemctl restart  puppetmaster.service

[root@master ~]# netstat -plan |grep 8140
tcp6       0      0 :::8140                 :::*                    LISTEN      5870/ruby
[root@master ~]#

####################
Client Configuration 
####################

Install the Packages
====================
rpm -ivh https://yum.puppetlabs.com/el/7/products/x86_64/puppetlabs-release-7-11.noarch.rpm
yum install -y puppet

Configure the Client
=====================
 vim /etc/puppet/puppet.conf
# In the [agent] section
    server = master.example.com.novalocal
    report = true
    pluginsync = true

Now the following command will send a certificate signing request to the server
===============================================================================
puppet agent -t --debug --verbose

From the server, sign the client certificate if it was not signed automatically
===============================================================================
puppet cert sign --all
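
If you want to review requests before signing them all blindly, the cert list subcommand shows them:

puppet cert list        # pending signing requests
puppet cert list --all  # signed and pending certificates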

Now run the agent again from the client to get synced
=====================================================
puppet agent -t --debug --verbose



Now create the configuration file on the server
===============================================
cat /etc/puppet/manifests/site.pp
node "client.example.com" {
file { '/root/example_file.txt':
    ensure => "file",
    owner  => "root",
    group  => "root",
    mode   => "700",
    content => "Congratulations!
Puppet has created this file.
",}
}

Once the above file is created on the server, run the agent on the client:
puppet agent -t --debug --verbose

We can see in the agent output that the file is created:

Info: Applying configuration version '1423504520'
Notice: /Stage[main]/Main/Node[client.example.com]/File[/root/example_file.txt]/ensure: defined content as '{md5}8a2d86dd40aa579c3fabac1453fcffa5'
Debug: /Stage[main]/Main/Node[client.example.com]/File[/root/example_file.txt]: The container Node[client.example.com] will propagate my refresh event
Debug: Node[client.example.com]: The container Class[Main] will propagate my refresh event
Debug: Class[Main]: The container Stage[main] will propagate my refresh event
Debug: Finishing transaction 23483900
Debug: Storing state
Debug: Stored state in 0.01 seconds
Notice: Finished catalog run in 0.03 seconds
Debug: Using cached connection for https://master.example.com.novalocal:8140
Debug: Caching connection for https://master.example.com.novalocal:8140
Debug: Closing connection for https://master.example.com.novalocal:8140
[root@client ~]# ll /root/
total 4
-rwx------. 1 root root 47 Feb  9 17:55 example_file.txt
[root@client ~]#



Tuesday, February 3, 2015

Configuring an HTTP Proxy on a Linux Server


Open the .bash_profile file for editing.

(example: vi ~/.bash_profile)
Add the following lines to the end of the file:
http_proxy=http://proxy_server_address:port
export no_proxy=localhost,127.0.0.1,192.168.0.34
export http_proxy
http_proxy should be the IP address or hostname of your proxy server, plus its port.
no_proxy should list any exclusions you want to make, i.e. addresses that you don’t want to send via the proxy.
NOTE: This must be done for each individual user, including root.
If you don’t want to log out of your shell session, you can reload the bash profile with the following:
source ~/.bash_profile
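
To verify that the variable is set and that traffic actually goes through the proxy (example.com is just a placeholder destination):

echo $http_proxy
curl -I http://example.com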

Configuring YUM to use proxy
To configure yum to use the HTTP/HTTPS proxy you will need to edit the /etc/yum.conf configuration file. Open /etc/yum.conf in your favorite editor and add the following line to the [main] section.
proxy=http://proxy_server_address:port
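
If the proxy requires authentication, yum also understands the following directives (replace the credentials with your own):

proxy_username=yourusername
proxy_password=yourpassword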

Save and close the file, then clear the cache used by yum with the following command:
yum clean all

Friday, January 30, 2015

OpenStack - Auto Evacuation Script

The following script will:
1.) Check for the compute hosts which are down
2.) Check for the instances on the down hosts
3.) Check for the compute hosts which are up
4.) Calculate the vCPUs and memory needed by each instance on the down hosts
5.) Calculate the free vCPUs and memory on the up hosts
6.) Find a suitable host for each instance from the down hosts
7.) Once a suitable compute host is found for an instance, the MySQL entries are modified and the instance is hard rebooted.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import time
import os
import re
import MySQLdb
import subprocess
from subprocess import Popen, PIPE


# Determine which compute nodes are down ('XXX' in the service listing).
# Returns a list of nodes which are down.
def select_compute_down_host():
    nova_service_list = os.popen("nova-manage service list 2> /dev/null").read().strip().split("\n")
    compute_down_list = []
    for line in nova_service_list:
        if 'nova-compute' in line and 'enabled' in line and 'XXX' in line:
            compute_down_list.append(line.split()[1])
    if len(compute_down_list) == 0:
        print "No compute node is down, exiting."
        exit(0)
    compute_down_list = list(set(compute_down_list))
    return compute_down_list


# Determine which compute nodes are up (':-)' in the service listing).
# Returns a list of nodes which are up.
def select_compute_up_host():
    nova_service_list = os.popen("nova-manage service list 2> /dev/null").read().strip().split("\n")
    compute_up_list = []
    for line in nova_service_list:
        if 'nova-compute' in line and 'enabled' in line and ':-)' in line:
            compute_up_list.append(line.split()[1])
    if len(compute_up_list) == 0:
        print "No compute node is up, exiting."
        exit(0)
    compute_up_list = list(set(compute_up_list))
    return compute_up_list



# Determine which instances were running on the down nodes.
# Input: a list of down nodes; returns a list of tuples of instance UUIDs.
def instance_in_down_node(down_nodes, host='controller', user='nova', passwd='nova4key', db='nova'):
    instances_dict = {}
    down_instances_list = []
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    for node in down_nodes:
        sql_select = "select uuid from instances where host = '%s' and vm_state = 'active'" % (node)
        cursor.execute(sql_select)
        instances_name = cursor.fetchall()
        if instances_name != ():
            instances_dict[node] = instances_name
            down_instances_list.append(instances_dict[node])
    cursor.close()
    connection_mysql.close()
    if down_instances_list == []:
        print "\nNo virtual machines were running on the down compute nodes.\n"
        exit()
    return down_instances_list



# Determine the resource usage of an instance.
# Input: an instance UUID; returns its (vcpus, memory_mb, instance_type_id).
def usage_instance(instances, host='controller', user='nova', passwd='nova4key', db='nova'):
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    sql_select = "select instance_type_id from instances where uuid = '%s'" % (instances)
    cursor.execute(sql_select)
    instances_type = cursor.fetchall()
    type_instance = instances_type[0][0]
    sql_select = "select vcpus,memory_mb,id from instance_types where id = '%d'" % (type_instance)
    cursor.execute(sql_select)
    instances_type_details = cursor.fetchall()
    cursor.close()
    connection_mysql.close()
    return instances_type_details[0]



# Determine the resources left on a compute node which is up.
# Input: a node name; returns [free_vcpus, free_memory_mb].
def usage_of_compute_node(node, host='controller', user='nova', passwd='nova4key', db='nova'):
    free = []
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    sql_select = "select vcpus,vcpus_used,free_ram_mb from compute_nodes where hypervisor_hostname = '%s'" % (node)
    cursor.execute(sql_select)
    instance_usage = cursor.fetchall()
    for detail in instance_usage:
        free.append(detail[0] - detail[1])  # free vcpus
        free.append(detail[2])              # free memory (MB)
    cursor.close()
    connection_mysql.close()
    return free

# Update the usage records of a node after it receives an instance.
# Input: an instance UUID and a node name; the node's records are updated
# to include the resource usage of the new instance.
def update_compute_node_usage(instance, node, host='controller', user='nova', passwd='nova4key', db='nova'):
    free_instance = usage_instance(instance)
    instance_vcpu = free_instance[0]
    instance_memory = free_instance[1]
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    sql_select = "select root_gb from instances where uuid = '%s'" % (instance)
    cursor.execute(sql_select)
    storage = cursor.fetchall()
    instance_space = storage[0][0]
    sql_select = "select vcpus_used,memory_mb_used,free_ram_mb,local_gb_used,running_vms,disk_available_least from compute_nodes where hypervisor_hostname = '%s'" % (node)
    cursor.execute(sql_select)
    node_details = cursor.fetchall()
    for details in node_details:
        node_vcpus_used = details[0] + instance_vcpu
        node_memory_mb_used = details[1] + instance_memory
        node_free_ram_mb = details[2] - instance_memory
        node_local_gb = details[3] + instance_space
        node_running_vms = details[4] + 1
        node_disk_available_least = details[5] - instance_space
    sql_select = "update compute_nodes set vcpus_used = '%s',memory_mb_used = '%s',free_ram_mb = '%s',local_gb_used = '%s',running_vms = '%s',disk_available_least = '%s' where hypervisor_hostname = '%s'" % (node_vcpus_used, node_memory_mb_used, node_free_ram_mb, node_local_gb, node_running_vms, node_disk_available_least, node)
    print sql_select
    cursor.execute(sql_select)
    connection_mysql.commit()
    cursor.close()
    connection_mysql.close()



# Take an instance and a node. If the node has enough resources to host the
# instance, the instance is re-homed onto the node and hard rebooted.
def rescue_instance(instance, node, host='controller', user='nova', passwd='nova4key', db='nova'):
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    print instance, node
    node_usage = usage_of_compute_node(node)
    instance_usage = usage_instance(instance)
    node_vcpu = node_usage[0]
    node_memory = node_usage[1]
    instance_vcpu = instance_usage[0]
    instance_memory = instance_usage[1]
    if node_vcpu > instance_vcpu and node_memory > instance_memory:
        print "Transfer possible"
        # Mark the instance as stopped, point it at the new host, then
        # hard-reboot it so it comes up on that host.
        sql_select = "update instances set vm_state = 'stopped',power_state = 4 where uuid = '%s'" % (instance)
        print sql_select
        cursor.execute(sql_select)
        connection_mysql.commit()
        sql_select = "update instances set host = '%s' where uuid = '%s' and vm_state = 'stopped'" % (node, instance)
        print sql_select
        cursor.execute(sql_select)
        connection_mysql.commit()
        instance_reboot = "source /root/admin-openrc.sh;nova reboot --hard %s" % (instance)
        print instance_reboot
        subprocess.call(instance_reboot, shell=True, stderr=subprocess.PIPE)
        update_compute_node_usage(instance, node)
        cursor.close()
        connection_mysql.close()
        return 0
    else:
        print "Transfer not possible"
    cursor.close()
    connection_mysql.close()
    return 1

# Reset the usage records of the down nodes to an empty baseline.
def update_down_host(host='controller', user='nova', passwd='nova4key', db='nova'):
    down_nodes = select_compute_down_host()
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    for host in down_nodes:
        print("The Node {} is Down".format(host))
        sql_select = "select memory_mb_used,vcpus_used,local_gb_used,running_vms,free_ram_mb,memory_mb,free_disk_gb,local_gb from compute_nodes where hypervisor_hostname = '%s'" % (host)
        print sql_select
        cursor.execute(sql_select)
        node = cursor.fetchall()
        for detail in node:
            memory_mb = detail[5]
            local_gb = detail[7]
        # Baseline for an empty node: 512 MB reserved, no vcpus, no disk
        # used, no running VMs.
        memory_used = 512
        vcpu_used = 0
        local_gb_used = 0
        running_vm = 0
        free_ram = memory_mb - memory_used
        free_disk_gb = local_gb - local_gb_used
        sql_select = "update compute_nodes set memory_mb_used = '%s',vcpus_used = '%s',local_gb_used = '%s',running_vms = '%s',free_ram_mb = '%s',free_disk_gb = '%s' where hypervisor_hostname = '%s'" % (memory_used, vcpu_used, local_gb_used, running_vm, free_ram, free_disk_gb, host)
        print sql_select
        cursor.execute(sql_select)
        connection_mysql.commit()
    cursor.close()
    connection_mysql.close()





def select_compute_down_host_instances():
    #================================
    # Scanning for nodes which are down
    #================================
    print "Scanning For Nodes Which Are Down.."
    down_nodes = select_compute_down_host()
    for host in down_nodes:
        print("The Node {} is Down".format(host))

    #====================================
    # Scanning for instances which are down
    #====================================
    instance_down = instance_in_down_node(down_nodes)
    for node in instance_down:
        for instance in node:
            print("The Instance {} is Down".format(instance[0]))
            usage_of_instance = usage_instance(instance[0])

    #==================================
    # Scanning for nodes which are up
    #==================================
    up_nodes = select_compute_up_host()
    for node in up_nodes:
        print("The Node {} is Up".format(node))
        free_resource_node = usage_of_compute_node(node)

    #=====================================
    # Rescue the instances from the down nodes: try each up node in
    # turn until one has room for the instance.
    #=====================================
    for node in instance_down:
        for instance in node:
            for live_node in up_nodes:
                success = rescue_instance(instance[0], live_node)
                if success == 0:
                    break

    update_down_host()

if __name__ == "__main__":
    select_compute_down_host_instances()
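
To run this check periodically, a cron entry along the following lines could be used (the script path and interval here are assumptions, adjust to taste):

# /etc/crontab -- run the evacuation check every 5 minutes
*/5 * * * * root /usr/bin/python /root/auto_evacuate.py >> /var/log/auto_evacuate.log 2>&1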

Tuesday, January 27, 2015

Clustering CoreOS Docker Hosts Using Fleet

Once the CoreOS Docker hosts are clustered, we can manage all of them from a single server.

Getting the new Discovery URL.

curl -w "\n" https://discovery.etcd.io/new

We will get something like

https://discovery.etcd.io/16043bf6be5ecf5c42a0bcc0d9237954

Make sure to use the new discovery URL in config-core.yaml.

Configure the new Config File with the URL
>>cat config-core.yaml
===============================================
#cloud-config
coreos:
  etcd:
    # generate a new token for each unique cluster from https://discovery.etcd.io/new
    #discovery: https://discovery.etcd.io/<token>
    discovery: https://discovery.etcd.io/a41bcad1e117d272d47eb938e060e6c8
    # multi-region and multi-cloud deployments need to use $public_ipv4
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
write_files:
  - path: /etc/resolv.conf
    permissions: 0644
    owner: root
    content: |
      nameserver 8.8.8.8
ssh_authorized_keys:
  # include one or more SSH public keys
  - ssh-rsa AA7dasdlakdjlksajdlkjasa654d6s5a4d6sa5d465df46sdg4rdfghfdhdg74wefg4sd32f13f468e Generated-by-Nova
===============================================
If we are using OpenStack, make sure the Neutron metadata service is running properly, because the $private_ipv4 substitution only works if the instance can fetch its metadata.
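
A quick way to check the metadata service from inside an instance (169.254.169.254 is the standard metadata address):

curl http://169.254.169.254/latest/meta-data/local-ipv4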

Use the above config file to start the CoreOS VMs. Once the CoreOS VMs are running, we can use the following commands to check the fleet status.

List the machines in the Fleet
>> fleetctl list-machines
MACHINE         IP              METADATA
0a1cad1d...     192.36.0.65      -
220f3e64...     192.36.0.67      -
31dbd5ca...     192.36.0.66      -

If we need to add a new server to the same fleet, we can recover the discovery URL from a running machine in the fleet:

grep DISCOVERY /run/systemd/system/etcd.service.d/20-cloudinit.conf

Once we have the URL, start the new instance with the same config-core.yaml.
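
With the machines clustered, fleetctl can schedule systemd units anywhere in the fleet. A minimal sketch (the unit name and the busybox image are only for illustration):

cat > hello.service <<'EOF'
[Unit]
Description=Hello World container

[Service]
ExecStart=/usr/bin/docker run --rm --name hello busybox /bin/sh -c "while true; do echo Hello World; sleep 1; done"
ExecStop=/usr/bin/docker stop hello
EOF

fleetctl submit hello.service
fleetctl start hello.service
fleetctl list-units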

Tuesday, January 20, 2015

OpenStack - Hypervisor Usage Update Script

This is a basic script used to correct the usage display in OpenStack when it has drifted out of sync. It finds all the compute nodes and the instances on each node, adds up the CPU, memory, and storage, and updates the database.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import time
import os
import re
import MySQLdb
import subprocess
from subprocess import Popen, PIPE

# Determine which compute nodes are up (':-)' in the service listing).
# Returns a list of nodes which are up.
def select_compute_up_host():
    nova_service_list = os.popen("nova-manage service list 2> /dev/null").read().strip().split("\n")
    compute_up_list = []
    for line in nova_service_list:
        if 'nova-compute' in line and 'enabled' in line and ':-)' in line:
            compute_up_list.append(line.split()[1])
    if len(compute_up_list) == 0:
        print "No compute node is up, exiting."
        exit(0)
    compute_up_list = list(set(compute_up_list))
    return compute_up_list

# Reset the usage records of every enabled compute node to an empty baseline;
# the per-instance usage is added back afterwards.
def initialize_compute_usage(host='controller', user='nova', passwd='nova', db='nova'):
    nova_service_list = os.popen("nova-manage service list 2> /dev/null").read().strip().split("\n")
    compute_list = []
    for line in nova_service_list:
        if 'nova-compute' in line and 'enabled' in line:
            compute_list.append(line.split()[1])
    if len(compute_list) == 0:
        print "No compute nodes found, exiting."
        exit(0)
    compute_list = list(set(compute_list))
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    for host in compute_list:
        print host
        sql_select = "select memory_mb_used,vcpus_used,local_gb_used,running_vms,free_ram_mb,memory_mb,free_disk_gb,local_gb from compute_nodes where hypervisor_hostname = '%s'" % (host)
        print sql_select
        cursor.execute(sql_select)
        node = cursor.fetchall()
        for detail in node:
            memory_mb = detail[5]
            local_gb = detail[7]
        # Baseline for an empty node: 512 MB reserved, no vcpus, no disk
        # used, no running VMs.
        memory_used = 512
        vcpu_used = 0
        local_gb_used = 0
        running_vm = 0
        free_ram = memory_mb - memory_used
        free_disk_gb = local_gb - local_gb_used
        sql_select = "update compute_nodes set memory_mb_used = '%s',vcpus_used = '%s',local_gb_used = '%s',running_vms = '%s',free_ram_mb = '%s',free_disk_gb = '%s' where hypervisor_hostname = '%s'" % (memory_used, vcpu_used, local_gb_used, running_vm, free_ram, free_disk_gb, host)
        print sql_select
        cursor.execute(sql_select)
        connection_mysql.commit()
    cursor.close()
    connection_mysql.close()

# Determine the resource usage of an instance.
# Input: an instance UUID; returns its (vcpus, memory_mb, instance_type_id).
def usage_instance(instances, host='controller', user='nova', passwd='nova', db='nova'):
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    sql_select = "select instance_type_id from instances where uuid = '%s'" % (instances)
    cursor.execute(sql_select)
    instances_type = cursor.fetchall()
    type_instance = instances_type[0][0]
    sql_select = "select vcpus,memory_mb,id from instance_types where id = '%d'" % (type_instance)
    cursor.execute(sql_select)
    instances_type_details = cursor.fetchall()
    cursor.close()
    connection_mysql.close()
    return instances_type_details[0]



# Update the usage records of a node after an instance is counted onto it.
# Input: an instance UUID and a node name; the node's records are updated
# to include the resource usage of the instance.
def update_compute_node_usage(instance, node, host='controller', user='nova', passwd='nova', db='nova'):
    free_instance = usage_instance(instance)
    instance_vcpu = free_instance[0]
    instance_memory = free_instance[1]
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    sql_select = "select root_gb from instances where uuid = '%s'" % (instance)
    cursor.execute(sql_select)
    storage = cursor.fetchall()
    instance_space = storage[0][0]
    sql_select = "select vcpus_used,memory_mb_used,free_ram_mb,local_gb_used,running_vms,disk_available_least from compute_nodes where hypervisor_hostname = '%s'" % (node)
    cursor.execute(sql_select)
    node_details = cursor.fetchall()
    for details in node_details:
        node_vcpus_used = details[0] + instance_vcpu
        node_memory_mb_used = details[1] + instance_memory
        node_free_ram_mb = details[2] - instance_memory
        node_local_gb = details[3] + instance_space
        node_running_vms = details[4] + 1
        node_disk_available_least = details[5] - instance_space
    sql_select = "update compute_nodes set vcpus_used = '%s',memory_mb_used = '%s',free_ram_mb = '%s',local_gb_used = '%s',running_vms = '%s',disk_available_least = '%s' where hypervisor_hostname = '%s'" % (node_vcpus_used, node_memory_mb_used, node_free_ram_mb, node_local_gb, node_running_vms, node_disk_available_least, node)
    print sql_select
    cursor.execute(sql_select)
    connection_mysql.commit()
    cursor.close()
    connection_mysql.close()




# For one node, find its active instances and add each instance's usage
# back onto the node's records.
def instances_node(node, host='controller', user='nova', passwd='nova', db='nova'):
    instances_dict = {}
    instances_list = []
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    sql_select = "select uuid from instances where host = '%s' and vm_state = 'active'" % (node)
    cursor.execute(sql_select)
    instances_name = cursor.fetchall()
    if instances_name != ():
        instances_dict[node] = instances_name
        instances_list.append(instances_dict[node])
    if instances_list == []:
        print "\nNo running instances\n"
    for nodes in instances_list:
        for instance in nodes:
            print instance[0], node
            update_compute_node_usage(instance[0], node)
    cursor.close()
    connection_mysql.close()





def select_compute_down_host_instances():
    # First reset every node's records, then re-add the usage of the
    # instances actually running on each node that is up.
    initialize_compute_usage()

    #==============================
    # Scanning for nodes which are up
    #==============================
    print "Scanning For Nodes Which Are Up.."
    up_nodes = select_compute_up_host()
    for host in up_nodes:
        print("The Node {} is up".format(host))
        instances_node(host)

if __name__ == "__main__":
    select_compute_down_host_instances()

Friday, January 9, 2015

Pushing Images into private Docker-Registry

Pushing to a private Docker registry.

Configure CoreOS to use the Private Docker Registry

To use the private registry from CoreOS, we need to copy the CA certificate from the registry server to the CoreOS Docker server.
Copy the CA certificate to /etc/ssl/certs/docker-registry.pem in PEM format.
Now update the certificate list using the command
>>sudo update-ca-certificates
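
For example, assuming the CA certificate lives at /etc/ssl/ca.crt on the registry host (that path is an assumption):

scp core@dockerregistry:/etc/ssl/ca.crt /etc/ssl/certs/docker-registry.pem
sudo update-ca-certificates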

Let our private registry be https://dockerregistry:8080

On the Docker server.
Listing the images.
core@coreos ~ $ docker images
REPOSITORY                      TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
centos                          6                   510cf09a7986        3 days ago          215.8 MB
centos                          centos6             510cf09a7986        3 days ago          215.8 MB

List the running containers
core@coreos ~ $ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                                        NAMES
4867ea72bd6a        centos:6            "/bin/bash"         41 minutes ago      Up 41 minutes       0.0.0.0:2221->22/tcp, 0.0.0.0:8080->80/tcp   boring_babbage

Commit the container
core@coreos ~ $ docker commit 4867ea72bd6a dockeradmin/centos-wordpress
9d1b81492b51653710745cad6614444d16b78551981ec44a53804b196b683fdb

Check whether the image of the new container is ready
core@coreos ~ $ docker images
REPOSITORY                      TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
dockeradmin/centos-wordpress    latest              9d1b81492b51        4 minutes ago       591.3 MB
centos                          6                   510cf09a7986        3 days ago          215.8 MB
centos                          centos6             510cf09a7986        3 days ago          215.8 MB

Tag the image in the name format <private-registry>/<repo-name>
core@coreos ~ $ docker tag dockeradmin/centos-wordpress dockerregistry:8080/wordpress
core@coreos ~ $ docker images
REPOSITORY                      TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
dockeradmin/centos-wordpress    latest              9d1b81492b51        4 minutes ago       591.3 MB
dockerregistry:8080/wordpress   latest              9d1b81492b51        4 minutes ago       591.3 MB
centos                          6                   510cf09a7986        3 days ago          215.8 MB
centos                          centos6             510cf09a7986        3 days ago          215.8 MB

Try logging in to the Docker registry
core@coreos ~ $ docker login https://dockerregistry:8080
Username (dockeradmin):
Login Succeeded

Finally, push to the registry.
core@coreos ~ $ docker push dockerregistry:8080/wordpress
The push refers to a repository [dockerregistry:8080/wordpress] (len: 1)
Sending image list
Pushing repository dockerregistry:8080/wordpress (1 tags)
511136ea3c5a: Image successfully pushed
5b12ef8fd570: Image successfully pushed
510cf09a7986: Image successfully pushed
9d1b81492b51: Image successfully pushed
Pushing tag for rev [9d1b81492b51] on {https://dockerregistry:8080/v1/repositories/wordpress/tags/latest}
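
From any other host that trusts the registry's CA certificate, the image can now be pulled back:

docker pull dockerregistry:8080/wordpress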