
Thursday, March 12, 2015

Google Two-Factor Authentication on Linux Server



Google Authenticator is an open-source module, developed by Google, that implements time-based one-time passcode (TOTP) verification tokens. It supports several mobile platforms, as well as PAM (Pluggable Authentication Module). These one-time passcodes are generated using open standards created by OATH (the Initiative for Open Authentication).

Install the needed packages
yum install pam-devel make gcc-c++ wget bzip*

cd /root
wget https://google-authenticator.googlecode.com/files/libpam-google-authenticator-1.0-source.tar.bz2
tar -xvf libpam-google-authenticator-1.0-source.tar.bz2

cd libpam-google-authenticator-1.0
make
make install
google-authenticator

Do you want authentication tokens to be time-based (y/n) y

Your new secret key is: FGHLERMHLCISCSU6
Your verification code is 485035
Your emergency scratch codes are:
  90385136
  97173523
  18612791
  73040662
  45704109

Do you want me to update your "/root/.google_authenticator" file (y/n) y

Do you want to disallow multiple uses of the same authentication
token? This restricts you to one login about every 30s, but it increases
your chances to notice or even prevent man-in-the-middle attacks (y/n) y

By default, tokens are good for 30 seconds and in order to compensate for
possible time-skew between the client and the server, we allow an extra
token before and after the current time. If you experience problems with poor
time synchronization, you can increase the window from its default
size of 1:30min to about 4min. Do you want to do so (y/n) y

If the computer that you are logging into isn't hardened against brute-force
login attempts, you can enable rate-limiting for the authentication module.
By default, this limits attackers to no more than 3 login attempts every 30s.
Do you want to enable rate-limiting (y/n) y
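
For the curious, the verification code is an RFC 6238 time-based one-time password: an HMAC-SHA1 of the current 30-second interval counter, keyed with the shared secret, truncated to six digits. A minimal Python sketch (using the example secret key generated above):

import base64
import hashlib
import hmac
import struct
import time

def totp(secret_base32, interval=30, digits=6):
    # Decode the base32 secret key printed by google-authenticator.
    key = base64.b32decode(secret_base32)
    # The counter is the number of 30-second intervals since the epoch.
    counter = struct.pack(">Q", int(time.time()) // interval)
    # HMAC-SHA1 the counter with the shared secret (RFC 4226 / RFC 6238).
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation: take 4 bytes at an offset given by the last nibble.
    offset = ord(digest[-1]) & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print totp("FGHLERMHLCISCSU6")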



Configuring SSH to use Google Authenticator Module
Open the PAM configuration file ‘/etc/pam.d/sshd‘ and add the following line to the top of the file.

auth       required     pam_google_authenticator.so
Next, open the SSH configuration file ‘/etc/ssh/sshd_config‘ and find the line that says:

ChallengeResponseAuthentication no
Change it to “yes“, so that it reads:

ChallengeResponseAuthentication yes
Finally, restart the SSH service to apply the changes.

# systemctl restart sshd

Install the Google Authenticator application on your mobile phone, or make use of the Firefox add-on GAuth Authenticator. Below we show how the GAuth application is used on Android phones.



Once we enter the secret key in the above settings, we will get the verification code as below, which changes every 30 seconds.


Login to the Server using Google Authentication
[root@localhost ~]# ssh root@xxx.xxx.xxx.xxx
Password: <<User Password
Verification code: <<The Code which we get from the Phone
Last failed login: Fri Mar 13 04:49:59 UTC 2015 from xxx.xxx.xxx.xxx on ssh:notty
There was 1 failed login attempt since the last successful login.
Last login: Fri Mar 13 04:48:35 2015 from xxx.xxx.xxx.xxx
[root@server ~]#

Important: The two-factor authentication works with password-based SSH logins. If you log in with a private/public key pair, SSH bypasses two-factor authentication and logs you in directly.
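
If you want key-based logins to also ask for the verification code, OpenSSH 6.2 and later (CentOS 7 qualifies) can require both methods. This is only a sketch; test it from a second session before logging out, since a mistake here can lock you out. Add to ‘/etc/ssh/sshd_config‘ and restart sshd:

AuthenticationMethods publickey,keyboard-interactive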

Tuesday, March 10, 2015

Swift Tips


Swift stores the objects we upload to containers as .data files on the corresponding drives.

[root@compute ~]# find /srv/node/sdc1 -iname '*.data'
/srv/node/sdc1/objects/58511/456/3923e942436c9de6e832f944fb30c456/1421697356.06343.data
/srv/node/sdc1/objects/66216/445/40aa0b832ae8dff8681916972fd13445/1422560956.02659.data
/srv/node/sdc1/objects/142841/960/8b7e465403a5b5017ae51c0c0ab5a960/1422459278.52978.data
/srv/node/sdc1/objects/53083/6af/33d6dc4a65e40f2c539e26649c2d96af/1422459797.37964.data
/srv/node/sdc1/objects/37756/61e/24df3295c06d9770e1cd4f1d15ee861e/1422560823.75913.data
/srv/node/sdc1/objects/206317/924/c97b770dc9f2170f2434631423ccb924/1422560870.83562.data
/srv/node/sdc1/objects/1056/c1d/01081c2d99e3ed7cc3408249335b9c1d/1422560871.31131.data
/srv/node/sdc1/objects/107854/6aa/6953b4ba90867f1b2ee0ff36e8f7d6aa/1422560871.63875.data
/srv/node/sdc1/objects/262004/dfc/ffdd367dd12034d5f3c066845e4d8dfc/1422560873.82851.data
/srv/node/sdc1/objects/71710/393/4607a45373f2b0f6632b2f56501cf393/1422560874.16764.data

In the above output, the Swift drive is mounted at /srv/node/sdc1.


We can get the date when an object was created from the name of its .data file, since the file name is the Unix timestamp at which it was written.

/srv/node/sdc1/objects/71784/771/461a3fd11073d0a88222403d4a7d1771/1422561047.39847.data
[root@compute ~]# date --date @1422561047
Thu Jan 29 14:50:47 EST 2015
[root@compute ~]#
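
The same conversion in Python, a small sketch that pulls the timestamp out of a .data path:

import os.path
from datetime import datetime

def data_file_date(path):
    # The file name is the Unix timestamp at which the object was written,
    # e.g. "1422561047.39847.data".
    timestamp = float(os.path.basename(path).rsplit(".data", 1)[0])
    return datetime.fromtimestamp(timestamp)

print data_file_date("/srv/node/sdc1/objects/71784/771/461a3fd11073d0a88222403d4a7d1771/1422561047.39847.data")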

If we have set a replica count of 2 in the Swift ring configuration, there will be two .data files with the same name. If we have multiple Swift servers, the replicas are stored on different servers rather than on the same one.

Friday, March 6, 2015

Directory Sharing between Host Machine and Docker

Mount a Host Directory as a Data Volume
To mount a host directory onto a container:

$ sudo docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py

This will mount the host directory /src/webapp into the container at /opt/webapp.
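
To confirm the directory really is shared, create a file on the host and look for it inside the container (docker exec needs Docker 1.3 or later):

$ sudo touch /src/webapp/hello.txt
$ sudo docker exec web ls /opt/webapp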

Friday, February 13, 2015

Running a Script on Client Servers using Puppet Master


Enable the puppet File Server
=============================
Add the following entries to /etc/puppet/fileserver.conf:
[extra_files]
path /var/lib/puppet/bucket
allow *


The file is stored in the mentioned path:
========================================
[root@master ~]# ll /var/lib/puppet/bucket/
total 4
-rw-r--r--. 1 root root 39 Feb 10 16:45 startup.sh

In the manifest below, the script is first fetched from the master and saved to a local file, and then executed. A require on the file resource ensures the script exists before Puppet runs it.
==============================================================================================================
[root@master ~]# cat /etc/puppet/manifests/site.pp
node "client" {
  file { '/tmp/startup.sh':
    owner  => 'root',
    group  => 'root',
    mode   => '700',
    source => 'puppet:///extra_files/startup.sh',
  }
  exec { 'run_startup':
    command => '/tmp/startup.sh',
    # Make sure the file is in place before trying to execute it.
    require => File['/tmp/startup.sh'],
  }
}
[root@master ~]#
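
On the client, run the agent to fetch and apply this manifest:

puppet agent -t --debug --verbose

Note that an exec resource runs on every agent run, so the script will be re-executed each time unless you guard it with the creates or refreshonly parameters.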

Tuesday, February 10, 2015

Puppet Master-Client Setup/Usage

Puppet is a system for automating system administration tasks. It has a master server on which we define the client configurations; on each client an agent runs, fetching its configuration from the master server and applying it.

Environment
The master and client both run on CentOS 7.

Open port 8140 in the firewall and set SELinux to permissive mode.

Installing the packages
=======================
rpm -ivh https://yum.puppetlabs.com/el/7/products/x86_64/puppetlabs-release-7-11.noarch.rpm
yum install -y puppet-server

Start the service
============
systemctl start  puppetmaster.service
puppet resource service puppetmaster ensure=running enable=true
--------------------------
Notice: /Service[puppetmaster]/enable: enable changed 'false' to 'true'
service { 'puppetmaster':
  ensure => 'running',
  enable => 'true',
}
[root@master ~]#

Now the certificate and keys will have been created:
====================================================
[root@master ~]# ll /var/lib/puppet/ssl/certs
total 8
-rw-r--r--. 1 puppet puppet 2013 Feb  9 14:48 ca.pem
-rw-r--r--. 1 puppet puppet 2098 Feb  9 14:48 master.example.com.novalocal.pem
[root@master ~]#
[root@master ~]# ll /var/lib/puppet/ssl/private_keys/
total 4
-rw-r--r--. 1 puppet puppet 3243 Feb  9 14:48 master.example.com.novalocal.pem
[root@master ~]#


Add the following entries to the following file. # You will find the cert name in /var/lib/puppet/ssl/certs (drop the .pem extension)
================================================
vim /etc/puppet/puppet.conf
[master]
certname = master.example.com.novalocal
autosign = true

Restart the Service
systemctl restart  puppetmaster.service

[root@master ~]# netstat -plan |grep 8140
tcp6       0      0 :::8140                 :::*                    LISTEN      5870/ruby
[root@master ~]#

####################
Client Configuration 
####################

Install the Packages
====================
rpm -ivh https://yum.puppetlabs.com/el/7/products/x86_64/puppetlabs-release-7-11.noarch.rpm
yum install -y puppet

Configure the Client
=====================
 vim /etc/puppet/puppet.conf
# In the [agent] section
    server = master.example.com.novalocal
    report = true
    pluginsync = true

Now the following command will submit the client's certificate request to the server
====================================================================================
puppet agent -t --debug --verbose

On the server, sign the client certificate if it is not signed automatically
============================================================================
puppet cert sign --all

Now, from the client, run again
===============================
puppet agent -t --debug --verbose
to get synced.



Now, on the server, create the configuration file
==================================================
cat /etc/puppet/manifests/site.pp
node "client.example.com" {
  file { '/root/example_file.txt':
    ensure  => "file",
    owner   => "root",
    group   => "root",
    mode    => "700",
    content => "Congratulations!
Puppet has created this file.
",
  }
}

Once the above file is created on the server, we need to run the agent on the client:
puppet agent -t --debug --verbose

We can see that the file has been created:

Info: Applying configuration version '1423504520'
Notice: /Stage[main]/Main/Node[client.example.com]/File[/root/example_file.txt]/ensure: defined content as '{md5}8a2d86dd40aa579c3fabac1453fcffa5'
Debug: /Stage[main]/Main/Node[client.example.com]/File[/root/example_file.txt]: The container Node[client.example.com] will propagate my refresh event
Debug: Node[client.example.com]: The container Class[Main] will propagate my refresh event
Debug: Class[Main]: The container Stage[main] will propagate my refresh event
Debug: Finishing transaction 23483900
Debug: Storing state
Debug: Stored state in 0.01 seconds
Notice: Finished catalog run in 0.03 seconds
Debug: Using cached connection for https://master.example.com.novalocal:8140
Debug: Caching connection for https://master.example.com.novalocal:8140
Debug: Closing connection for https://master.example.com.novalocal:8140
[root@client ~]# ll /root/
total 4
-rwx------. 1 root root 47 Feb  9 17:55 example_file.txt
[root@client ~]#



Tuesday, February 3, 2015

Configuring an HTTP Proxy on a Linux Server


Open the .bash_profile file for editing.

(example: vi ~/.bash_profile)
Add the following lines to the end of the file:
http_proxy=http://proxy_server_address:port
export no_proxy=localhost,127.0.0.1,192.168.0.34
export http_proxy
http_proxy should be the IP address or hostname of your proxy server, plus its port.
no_proxy should be any exclusions you want to make – addresses that you don’t want to send via the proxy.
NOTE: This must be done for each individual user, including root.
If you don’t want to log out of your shell session, you can reload the bash profile with the following:
source ~/.bash_profile
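
A quick way to check that the variables are picked up is Python's urllib, which reads the proxy settings from the environment:

python -c 'import urllib; print urllib.getproxies()'

The output should include your http proxy.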

Configuring YUM to use proxy
To configure yum to use the HTTP/HTTPS proxy, you will need to edit the /etc/yum.conf configuration file. Open /etc/yum.conf in your favorite editor and add the following line to the [main] section:
proxy=http://proxy_server_address:port

Save and close the file, then clear the cache used by yum with the following command:
yum clean all

Friday, January 30, 2015

OpenStack - Auto Evacuation Script

The following script will:
1.) Check for compute hosts which are down.
2.) Check for the instances on the down hosts.
3.) Check for compute hosts which are up.
4.) Calculate the vCPUs and memory needed by the instances on the down hosts.
5.) Calculate the free vCPUs and memory on the up hosts.
6.) Find a suitable host for each instance on a down host.
7.) Once a suitable compute host is found for each instance, the MySQL entries are modified and the instance is hard rebooted.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import subprocess
import MySQLdb


# Determine which compute nodes are down.
# Returns a list of nodes which are down.
def select_compute_down_host():
    nova_service_list = os.popen("nova-manage service list 2> /dev/null").read().strip().split("\n")
    compute_down_list = []
    for line in nova_service_list:
        # A down nova-compute service is marked with 'XXX' in the state column.
        if 'nova-compute' in line and 'enabled' in line and 'XXX' in line:
            compute_down_list.append(line.split()[1])
    if len(compute_down_list) == 0:
        print "No compute nodes are down; exiting."
        exit(0)
    return list(set(compute_down_list))


# Determine which compute nodes are up.
# Returns a list of nodes which are up.
def select_compute_up_host():
    nova_service_list = os.popen("nova-manage service list 2> /dev/null").read().strip().split("\n")
    compute_up_list = []
    for line in nova_service_list:
        # A healthy nova-compute service is marked with ':-)' in the state column.
        if 'nova-compute' in line and 'enabled' in line and ':-)' in line:
            compute_up_list.append(line.split()[1])
    if len(compute_up_list) == 0:
        print "No compute nodes are up; exiting."
        exit(0)
    return list(set(compute_up_list))


# Determine which instances are down.
# Takes a list of down nodes; returns a list of tuples of instance UUIDs.
def instance_in_down_node(down_nodes, host='controller', user='nova', passwd='nova4key', db='nova'):
    down_instances_list = []
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    for node in down_nodes:
        sql_select = "select uuid from instances where host = '%s' and vm_state = 'active'" % (node)
        cursor.execute(sql_select)
        instances_name = cursor.fetchall()
        if instances_name != ():
            down_instances_list.append(instances_name)
    cursor.close()
    connection_mysql.close()
    if down_instances_list == []:
        print '\nNo running virtual machines on the down compute nodes.\n'
        exit()
    return down_instances_list


# Determine the resource usage of an instance.
# Takes an instance UUID; returns its (vcpus, memory_mb, instance_type_id).
def usage_instance(instance_uuid, host='controller', user='nova', passwd='nova4key', db='nova'):
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    instance_details = None
    sql_select = "select instance_type_id from instances where uuid = '%s'" % (instance_uuid)
    cursor.execute(sql_select)
    for instance in cursor.fetchall():
        type_instance = instance[0]
        sql_select = "select vcpus,memory_mb,id from instance_types where id = '%d'" % (type_instance)
        cursor.execute(sql_select)
        for details in cursor.fetchall():
            instance_details = details
    cursor.close()
    connection_mysql.close()
    return instance_details


# Determine the resources left on a compute node which is up.
# Takes a node name; returns its [free_vcpus, free_ram_mb].
def usage_of_compute_node(node, host='controller', user='nova', passwd='nova4key', db='nova'):
    free = []
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    sql_select = "select vcpus,vcpus_used,free_ram_mb from compute_nodes where hypervisor_hostname = '%s'" % (node)
    cursor.execute(sql_select)
    for detail in cursor.fetchall():
        free.append(detail[0] - detail[1])   # free vCPUs
        free.append(detail[2])               # free RAM in MB
    cursor.close()
    connection_mysql.close()
    return free


# Update the database with the node's new usage.
# Takes an instance UUID and a node name; the node's usage records are
# updated to include the resources of the newly placed instance.
def update_compute_node_usage(instance, node, host='controller', user='nova', passwd='nova4key', db='nova'):
    instance_usage = usage_instance(instance)
    instance_vcpu = instance_usage[0]
    instance_memory = instance_usage[1]
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    sql_select = "select root_gb from instances where uuid = '%s'" % (instance)
    cursor.execute(sql_select)
    for details in cursor.fetchall():
        instance_space = details[0]
    sql_select = "select vcpus_used,memory_mb_used,free_ram_mb,local_gb_used,running_vms,disk_available_least from compute_nodes where hypervisor_hostname = '%s'" % (node)
    cursor.execute(sql_select)
    for details in cursor.fetchall():
        node_vcpus_used = details[0] + instance_vcpu
        node_memory_mb_used = details[1] + instance_memory
        node_free_ram_mb = details[2] - instance_memory
        node_local_gb = details[3] + instance_space
        node_running_vms = details[4] + 1
        node_disk_available_least = details[5] - instance_space
    sql_select = "update compute_nodes set vcpus_used = '%s',memory_mb_used = '%s',free_ram_mb = '%s',local_gb_used = '%s',running_vms = '%s',disk_available_least = '%s' where hypervisor_hostname = '%s'" % (node_vcpus_used, node_memory_mb_used, node_free_ram_mb, node_local_gb, node_running_vms, node_disk_available_least, node)
    print sql_select
    cursor.execute(sql_select)
    connection_mysql.commit()
    cursor.close()
    connection_mysql.close()


# Take an instance and a node. If the node has enough free resources to
# hold the instance, the instance is moved onto the node and hard rebooted.
# Returns 0 on success and 1 otherwise.
def rescue_instance(instance, node, host='controller', user='nova', passwd='nova4key', db='nova'):
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    print instance, node
    node_usage = usage_of_compute_node(node)
    instance_usage = usage_instance(instance)
    node_vcpu, node_memory = node_usage[0], node_usage[1]
    instance_vcpu, instance_memory = instance_usage[0], instance_usage[1]
    if node_vcpu > instance_vcpu and node_memory > instance_memory:
        print "Transfer possible"
        # Mark the instance as stopped ...
        sql_select = "update instances set vm_state = 'stopped',power_state = 4 where uuid = '%s'" % (instance)
        print sql_select
        cursor.execute(sql_select)
        connection_mysql.commit()
        # ... point it at the new host ...
        sql_select = "update instances set host = '%s' where uuid = '%s' and vm_state = 'stopped'" % (node, instance)
        print sql_select
        cursor.execute(sql_select)
        connection_mysql.commit()
        # ... and hard reboot it so that it is rebuilt on the new host.
        instance_reboot = "source /root/admin-openrc.sh;nova reboot --hard %s" % (instance)
        print instance_reboot
        subprocess.call(instance_reboot, shell=True, stderr=subprocess.PIPE)
        update_compute_node_usage(instance, node)
        cursor.close()
        connection_mysql.close()
        return 0
    print "Transfer not possible"
    cursor.close()
    connection_mysql.close()
    return 1


# Reset the usage records of the down hosts so that they appear empty.
def update_down_host(host='controller', user='nova', passwd='nova4key', db='nova'):
    down_nodes = select_compute_down_host()
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    for down_host in down_nodes:
        print ("The Node {} is Down".format(down_host))
        sql_select = "select memory_mb_used,vcpus_used,local_gb_used,running_vms,free_ram_mb,memory_mb,free_disk_gb,local_gb from compute_nodes where hypervisor_hostname = '%s'" % (down_host)
        print sql_select
        cursor.execute(sql_select)
        for detail in cursor.fetchall():
            memory_mb = detail[5]
            local_gb = detail[7]
        # Treat the down host as empty: base memory usage only, no vCPUs,
        # no disk and no running VMs.
        memory_used = 512
        vcpu_used = 0
        local_gb_used = 0
        running_vm = 0
        free_ram = memory_mb - memory_used
        free_disk_gb = local_gb - local_gb_used
        sql_select = "update compute_nodes set memory_mb_used = '%s',vcpus_used = '%s',local_gb_used = '%s',running_vms = '%s',free_ram_mb = '%s',free_disk_gb = '%s' where hypervisor_hostname = '%s'" % (memory_used, vcpu_used, local_gb_used, running_vm, free_ram, free_disk_gb, down_host)
        print sql_select
        cursor.execute(sql_select)
        connection_mysql.commit()
    cursor.close()
    connection_mysql.close()


# Main driver: find the down nodes and the instances on them, find the up
# nodes, rescue each down instance onto a suitable up node, and finally
# reset the usage records of the down hosts.
def select_compute_down_host_instances():
    # Scan for nodes which are down.
    print "Scanning for nodes which are down..."
    down_nodes = select_compute_down_host()
    for down_host in down_nodes:
        print ("The Node {} is Down".format(down_host))

    # Scan for instances on the down nodes.
    instance_down = instance_in_down_node(down_nodes)
    for node in instance_down:
        for instance in node:
            print ("The Instance {} is Down".format(instance[0]))

    # Scan for nodes which are up.
    up_nodes = select_compute_up_host()
    for node in up_nodes:
        print ("The Node {} is Up".format(node))

    # Rescue each instance from the down nodes onto the first up node
    # with enough free vCPUs and memory.
    for node in instance_down:
        for instance in node:
            for live_node in up_nodes:
                if rescue_instance(instance[0], live_node) == 0:
                    break

    update_down_host()


if __name__ == "__main__":
    select_compute_down_host_instances()
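
To run this automatically, you could drop it into cron on the controller; the script path below is just an example:

*/5 * * * * root python /root/auto_evacuate.py >> /var/log/auto_evacuate.log 2>&1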