
Thursday, December 28, 2023

Mastering Puppet: A Guide to Configuring the Puppet Master and Client

Puppet is a powerful configuration management tool that automates the process of managing your infrastructure. Setting up a Puppet Master and its clients can seem daunting, but with this guide, you'll be equipped to handle the initial configuration with ease. This blog will walk you through the steps needed to set up a Puppet Master and client, ensuring a smooth and secure connection between them.

Step 1: Initial Setup for Both Master and Client

Downloading and Installing the Needed RPM
Before you begin, ensure that both the Master and the client machines have the EPEL repository RPM installed. This can be done by running:

rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

This command will install the EPEL (Extra Packages for Enterprise Linux) repository, providing additional packages for your setup.

Step 2: Installing the Puppet Server and Client


Master: Installing Puppet Server
On your Master machine, install the Puppet Server with Yum:

yum install puppet-server
Client: Installing Puppet
On the client machine, install the Puppet client software:

yum install puppet

Step 3: Configuring Hostnames and Network

Ensure that the Master and client can communicate with each other by setting up the hostnames correctly.

Edit the Hosts File
Add the following entries to the /etc/hosts file on both the Master and client:

xxx.xxx.xxx.xxx master.puppet.com
xxx.xxx.xxx.xxx client.puppet.com

Replace xxx.xxx.xxx.xxx with the appropriate IP addresses.

Test the Connection
Test the connectivity between the Master and client using the ping command:

ping -c 3 client.puppet.com
ping -c 3 master.puppet.com

Step 4: Setting Up Iptables

For secure communication, you need to ensure that the correct port is open on both the Master and client.

Modify Iptables Rules
You can either disable Iptables or open port 8140, which Puppet uses for communication:


iptables -A INPUT -p tcp --dport 8140 -m state --state NEW,ESTABLISHED -j ACCEPT
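
To keep this rule across reboots on RHEL/CentOS, you can save the running rules (a minimal sketch, assuming the standard iptables init service is in use):

service iptables save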

Step 5: Starting the Puppet Master Server

With the configurations set, it's time to start the Puppet Master.

Start the Server
On the Master machine, start (or restart) the Puppet Master service:
/etc/init.d/puppetmaster restart
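
Optionally, point the client at the Master once in its Puppet configuration so you don't have to pass --server on every run (a minimal sketch of /etc/puppet/puppet.conf on the client, using the hostname configured above):

[agent]
    server = master.puppet.com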

Step 6: Client Certificate Signing

Puppet uses a certificate-based authentication system. The client will request a certificate from the Master, which needs to be signed.
Request a Certificate
From the client machine, initiate a certificate signing request:

puppetd --server=master.puppet.com --waitforcert 60 --test

Sign the Client's Certificate
On the Master, list all unsigned certificates:

puppetca --list

Sign the client's certificate:

puppetca --sign client.puppet.com
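
To confirm the signing worked, list all certificates on the Master; signed entries should appear with a + prefix:

puppetca --list --all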

Step 7: Creating Configuration for Clients

With the infrastructure in place, you'll now need to define the desired state of your client systems in the Puppet Master.

Edit the Manifest File
Add configurations to /etc/puppet/manifests/site.pp on the Master. Here's a sample configuration:


# Create "/tmp/outside" if it doesn't exist.
file { "/tmp/outside":
  ensure => present,
  mode   => 644,
  owner  => root,
  group  => root,
}

class test_class {
  file { "/tmp/testfile":
    ensure => present,
    mode   => 644,
    owner  => root,
    group  => root,
  }
}

package { 'httpd':
  ensure => installed,
}

service { 'httpd':
  ensure  => running,
  enable  => true,
  require => Package['httpd'],
}

# Tell Puppet on which client node to run the class
node client {
  include test_class
}
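
With the manifest in place, trigger a run on the client to apply the catalog (the same command used in Step 6); /tmp/testfile and /tmp/outside should then exist on the client:

puppetd --server=master.puppet.com --test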
Conclusion

Congratulations! If you've followed these steps without error, your Puppet Master and client are now configured and communicating securely. With your infrastructure now under Puppet's management, you're set to automate your configurations, ensuring consistency and reliability across your environment. Remember, Puppet is incredibly powerful and flexible. Continue exploring its capabilities to fully harness its potential in managing your infrastructure.

Sunday, December 24, 2023

Building a Custom NAT Server on AWS: A Step-by-Step Guide

Network Address Translation (NAT) servers are essential components in a cloud infrastructure, allowing instances in a private subnet to connect to the internet or other AWS services while preventing the internet from initiating a connection with those instances. This blog provides a detailed guide on setting up a NAT server from scratch in an AWS cloud environment.

Step 1: Launching an AWS Instance

Start a t1.micro instance:

  • Navigate to the AWS Management Console.
  • Select the EC2 service and choose to launch a t1.micro instance.
  • Pick an Amazon Machine Image (AMI) that suits your needs (commonly Amazon Linux or Ubuntu).
  • Configure instance details ensuring it's in the same VPC as your private subnet but in a public subnet.

Step 2: Configuring the Instance

Disable "Change Source / Dest Check":

  • Right-click on the instance from the EC2 dashboard.
  • Navigate to "Networking" and select "Change Source / Dest Check."
  • Disable this setting to allow the instance to route traffic not specifically destined for itself.

Security Group Settings:

  • Ensure the Security Group associated with your NAT instance allows the necessary traffic.
  • Typically, it should allow inbound traffic on ports 80 (HTTP) and 443 (HTTPS) for updates and patches.

Step 3: Configuring the NAT Server

Access your instance via SSH and perform the following configurations:

Enable IP Forwarding:

  1. Edit the /etc/sysctl.conf file to enable IP forwarding. This setting allows the instance to forward traffic from the private subnet to the internet.

    sed -i "s/net.ipv4.ip_forward.*/net.ipv4.ip_forward = 1/g" /etc/sysctl.conf
  2. Activate the change immediately:

    echo 1 > /proc/sys/net/ipv4/ip_forward
  3. Confirm the change:

    cat /etc/sysctl.conf | grep net.ipv4.ip_forward

    Expected output: net.ipv4.ip_forward = 1

Configure iptables:

  1. Set up NAT using iptables to masquerade outbound traffic, making it appear as if it originates from the NAT server:

    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    This command rewrites the source address of traffic leaving eth0 (the primary network interface) so that it appears to originate from the NAT server, and replies are routed back through it.

  2. Allow traffic on ports 80 and 443 for updates and external access:

    iptables -A INPUT -m state --state NEW -p tcp --dport 80 -j ACCEPT
    iptables -A INPUT -m state --state NEW -p tcp --dport 443 -j ACCEPT
    iptables -A FORWARD -i eth0 -j ACCEPT
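
    These rules are lost on reboot; on Amazon Linux you can persist them (a minimal sketch, assuming the standard iptables init scripts are in use):

    iptables-save > /etc/sysconfig/iptables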

Step 4: Routing Configuration

Configure Route Tables:

  • In the AWS Console, go to the VPC Dashboard and select Route Tables.
  • Modify the route table associated with your private subnet:
    • Add a route where the destination is 0.0.0.0/0 (representing all traffic), and the target is the instance ID of your NAT server.
  • Modify the route table associated with your NAT instance:
    • Ensure there's a route where the destination is 0.0.0.0/0, and the target is the internet gateway of your VPC.
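
The same private-subnet route can also be added from the command line (a minimal sketch using the AWS CLI; rtb-xxxxxxxx and i-xxxxxxxx are placeholders for your route table and NAT instance IDs):

aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --instance-id i-xxxxxxxx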

Conclusion

With these steps, you've successfully created a NAT server in your AWS environment, allowing instances in a private subnet to securely access the internet for updates and communicate with other AWS services. This setup is crucial for maintaining a secure and efficient cloud infrastructure. Always monitor and maintain your NAT server to ensure it operates smoothly and securely. AWS now also offers a managed NAT gateway service, which is the recommended option for production-grade environments.

Monday, April 10, 2023

NextCloud Setup with Docker

Nextcloud is one of the most commonly used self-hosted alternatives to cloud storage services, and it is now easy to deploy with Docker. The following Docker Compose file and Nginx configuration can be used to deploy the Nextcloud application behind an Nginx proxy server with SSL termination.
We can bring the containers up and down with the following commands:

docker-compose up -d
docker-compose down

===========

version: '2'
#volumes:
#  nextcloud: /root/nextcloud/ncdata
#  db: /root/nextcloud/mysql
services:
  db:
    image: mariadb:10.5
    restart: always
    command: --transaction-isolation=READ-COMMITTED --binlog-format=ROW
    volumes:
      - /root/nextcloud/mysql:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=XXXXXXXXX
      - MYSQL_PASSWORD=XXXXXXXX
      - MYSQL_DATABASE=XXXXXXXX
      - MYSQL_USER=XXXXXXXX
  app:
    image: nextcloud
    restart: always
    links:
      - db
    volumes:
      - /root/nextcloud/ncdata:/var/www/html
    environment:
      - MYSQL_PASSWORD=XXXXXXXX
      - MYSQL_DATABASE=XXXXXXXX
      - MYSQL_USER=XXXXXXXX
      - MYSQL_HOST=XXXXXXXX
      - NEXTCLOUD_TRUSTED_DOMAINS=abc.xyz.aa
      - OVERWRITEHOST=abc.xyz.aa:XXXX
      - OVERWRITEPROTOCOL=https
        
  web:
    image: nginx
    restart: always
    ports:
      - 8082:8080
    links:
      - app
    volumes:
      - /root/nextcloud/nginx/nginx.conf:/etc/nginx/conf.d/default.conf
      - /root/nextcloud/cert:/etc/cert
===========
Nginx Configuration file
===========

server {
  listen 80;
  server_name abc.xyz.aa;
  return 301 https://$server_name:8080$request_uri;
  add_header X-Content-Type-Options              "nosniff";
}
server {
  listen 8080 ssl;
  server_name abc.xyz.aa;
  ssl_certificate /etc/cert/abc.xyz.aa.crt;
  ssl_certificate_key /etc/cert/abc.xyz.aa.key;
  ssl_prefer_server_ciphers on;
  location / {
    proxy_pass              http://app;
    proxy_set_header        Host $host;
    proxy_set_header        X-Real-IP $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header        X-Forwarded-Proto $scheme;
  }
}

===========
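
Once the stack is up, you can verify that Nextcloud is reachable through the proxy (a minimal check; the hostname matches the configuration above, and 8082 is the host port mapped to Nginx):

docker-compose ps
curl -kI https://abc.xyz.aa:8082/status.php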




Tuesday, November 28, 2017

Increase swap on an Azure Linux machine

In Azure, to create a swap file in the directory defined by the ResourceDisk.MountPoint parameter, update the /etc/waagent.conf file by setting the following three parameters:

ResourceDisk.Format=y
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=xx


Note: The xx placeholder represents the desired size of the swap file in megabytes (MB).
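For example, to create a 2 GB swap file on the resource disk (2048 is an illustrative value; size it to your workload):

ResourceDisk.Format=y
ResourceDisk.EnableSwap=y
ResourceDisk.SwapSizeMB=2048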
Restart the WALinuxAgent service by running one of the following commands, depending on the system in question:

Ubuntu: service walinuxagent restart
Red Hat/CentOS: service waagent restart


Run one of the following commands to show the new swap space in use after the restart:

dmesg | grep swap
swapon -s
cat /proc/swaps
file /mnt/resource/swapfile
free| grep -i swap


If the swap file isn't created, you can restart the virtual machine by using one of the following commands:

shutdown -r now
init 6

Sunday, August 13, 2017

Qubole: Load Multiple Tables into Qubole Hive from a Data Store

The following script uses the Qubole API to load multiple tables from a Qubole Data Store into Hive tables.


[rahul@local qubole]$ cat /databasescript 
#!/bin/bash

#Qubole API Key
AUTH="***********"
#Database Name
DB_NAME="***********"
#Host Name
DB_HOST="***********"
#User Name
DB_USER="***********"
#Password 
DB_PASS='***********'



## request table import from tap;
function tableImport() {

request_body=$(cat <<EOF
{
   "command_type":"DbImportCommand",
   "mode":"1",
   "hive_serde":"orc",
   "hive_table":"<HIVE TABLE NAME>.$1",
   "dbtap_id":"$2",
   "db_table":"$1",
   "db_parallelism":"1",
   "use_customer_cluster":"1",
   "customer_cluster_label":"Qubole_Data_Import",
   "tags":[" Data"]
}
EOF
)

echo $request_body
   curl -X POST \
-H "X-AUTH-TOKEN: $AUTH" \
-H "Content-Type:application/json" \
-d "$request_body" https://api.qubole.com/api/v1.2/commands/
}

##register database with tap
request_body=$(cat <<EOF
{
  "db_name":"$DB_NAME",
  "db_host":"$DB_HOST",
  "db_user":"$DB_USER",
  "db_passwd":"$DB_PASS",
  "db_type":"sqlserver",
  "db_location":"on-premise",
  "gateway_ip": "***********",
  "gateway_port": "***********",
  "gateway_username": "***********",
  "gateway_private_key": "***********"}

EOF
)

ID=$(curl -s -X POST \
-H "X-AUTH-TOKEN: $AUTH" \
-H "Content-Type:application/json" \
-d "$request_body" https://api.qubole.com/api/v1.2/db_taps/ | jq .id)

#get the tables and call import
curl -s -H "X-AUTH-TOKEN: $AUTH" \
     -H "Content-Type:application/json" \
     https://api.qubole.com/api/v1.2/db_taps/$ID/tables | jq -r .[] | while read x; do  tableImport $x $ID; done

# can't delete the tap at the end unless we continuously poll for no active jobs;
STATUS="null"

while [ "$STATUS" = "null" ]
do
STATUS=$(curl  -s -X DELETE \
 -H "X-AUTH-TOKEN: $AUTH" \
 -H "Content-Type:application/json" \
 https://api.qubole.com/api/v1.2/db_taps/$ID | jq .status)
echo -n "."
sleep 5
done
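
To run the script (a sketch; it assumes bash, curl, and jq are installed and the placeholders above are filled in):

chmod +x /databasescript
/databasescript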

Friday, January 30, 2015

OpenStack - Auto Evacuation Script

The following script will:
1.) Check for the compute hosts which are down
2.) Check for the instances on the down hosts
3.) Check for the compute hosts which are up
4.) Calculate the vCPU and memory needed for the instances on the down hosts
5.) Calculate the free vCPU and memory on the up hosts
6.) Find a proper host for each instance on the down hosts
7.) Once a proper compute host is found for each instance, the MySQL entries are modified and the instance is hard rebooted.

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
import subprocess
import MySQLdb


# Determine which compute nodes are down.
# Returns a list of nodes which are down.
def select_compute_down_host():
    nova_service_list = os.popen("nova-manage service list 2> /dev/null").read().strip().split("\n")
    compute_down_list = []
    for line in nova_service_list:
        # Down nodes are enabled nova-compute services reported as 'XXX'.
        if 'nova-compute' in line and 'enabled' in line and 'XXX' in line:
            compute_down_list.append(line.split()[1])
    if len(compute_down_list) == 0:
        print "No Down Compute Nodes, The Program Will Automatically Exit!"
        exit(0)
    return list(set(compute_down_list))


# Determine which compute nodes are up.
# Returns a list of nodes which are up.
def select_compute_up_host():
    nova_service_list = os.popen("nova-manage service list 2> /dev/null").read().strip().split("\n")
    compute_up_list = []
    for line in nova_service_list:
        # Up nodes are enabled nova-compute services reported as ':-)'.
        if 'nova-compute' in line and 'enabled' in line and ':-)' in line:
            compute_up_list.append(line.split()[1])
    if len(compute_up_list) == 0:
        print "No Compute Nodes Are Up, The Program Will Automatically Exit!"
        exit(0)
    return list(set(compute_up_list))


# Determine which instances are down.
# Input is a list of down nodes; returns a list of tuples of instance UUIDs.
def instance_in_down_node(down_nodes, host='controller', user='nova', passwd='nova4key', db='nova'):
    instances_dict = {}
    down_instances_list = []
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    for node in down_nodes:
        sql_select = "select uuid from instances where host = '%s' and vm_state = 'active'" % (node)
        cursor.execute(sql_select)
        instances_name = cursor.fetchall()
        if instances_name != ():
            instances_dict[node] = instances_name
            down_instances_list.append(instances_dict[node])
    cursor.close()
    connection_mysql.close()
    if down_instances_list == []:
        print '\nNo running virtual machines on the down compute nodes\n'
        exit()
    return down_instances_list


# Determine the resource usage of a down instance.
# Input is an instance UUID; returns its vcpus, memory and instance type id.
def usage_instance(instance, host='controller', user='nova', passwd='nova4key', db='nova'):
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    sql_select = "select instance_type_id from instances where uuid = '%s'" % (instance)
    cursor.execute(sql_select)
    type_instance = cursor.fetchall()[0][0]
    sql_select = "select vcpus,memory_mb,id from instance_types where id = '%d'" % (type_instance)
    cursor.execute(sql_select)
    instance_details = cursor.fetchall()[0]
    cursor.close()
    connection_mysql.close()
    return instance_details


# Determine the resources left on a compute node which is up.
# Input is a node name; returns the free vcpus and free memory of that node.
def usage_of_compute_node(node, host='controller', user='nova', passwd='nova4key', db='nova'):
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    sql_select = "select vcpus,vcpus_used,free_ram_mb from compute_nodes where hypervisor_hostname = '%s'" % (node)
    cursor.execute(sql_select)
    detail = cursor.fetchall()[0]
    free_vcpu = detail[0] - detail[1]
    free_mem = detail[2]
    cursor.close()
    connection_mysql.close()
    return [free_vcpu, free_mem]


# Update the database with the usage of a node.
# Input is an instance UUID and a node name; the node's recorded usage is
# updated to include the resources of the newly placed instance.
def update_compute_node_usage(instance, node, host='controller', user='nova', passwd='nova4key', db='nova'):
    instance_usage = usage_instance(instance)
    instance_vcpu = instance_usage[0]
    instance_memory = instance_usage[1]
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    sql_select = "select root_gb from instances where uuid = '%s'" % (instance)
    cursor.execute(sql_select)
    instance_space = cursor.fetchall()[0][0]

    sql_select = "select vcpus_used,memory_mb_used,free_ram_mb,local_gb_used,running_vms,disk_available_least from compute_nodes where hypervisor_hostname = '%s'" % (node)
    cursor.execute(sql_select)
    details = cursor.fetchall()[0]
    node_vcpus_used = details[0] + instance_vcpu
    node_memory_mb_used = details[1] + instance_memory
    node_free_ram_mb = details[2] - instance_memory
    node_local_gb = details[3] + instance_space
    node_running_vms = details[4] + 1
    node_disk_available_least = details[5] - instance_space

    sql_select = "update compute_nodes set vcpus_used = '%s',memory_mb_used = '%s',free_ram_mb = '%s',local_gb_used = '%s',running_vms = '%s',disk_available_least = '%s' where hypervisor_hostname = '%s'" % (node_vcpus_used, node_memory_mb_used, node_free_ram_mb, node_local_gb, node_running_vms, node_disk_available_least, node)
    print sql_select
    cursor.execute(sql_select)
    connection_mysql.commit()
    cursor.close()
    connection_mysql.close()


# Take an instance and a node. If the node has enough resources to hold the
# instance, the MySQL entries are updated and the instance is hard rebooted
# onto the node. Returns 0 on success, 1 otherwise.
def rescue_instance(instance, node, host='controller', user='nova', passwd='nova4key', db='nova'):
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    print instance, node
    node_usage = usage_of_compute_node(node)
    instance_usage = usage_instance(instance)
    node_vcpu = node_usage[0]
    node_memory = node_usage[1]
    instance_vcpu = instance_usage[0]
    instance_memory = instance_usage[1]
    if node_vcpu > instance_vcpu and node_memory > instance_memory:
        print "Transfer possible"
        # Stop the instance record, re-home it to the new node, then hard reboot.
        sql_select = "update instances set vm_state = 'stopped',power_state = 4 where uuid = '%s'" % (instance)
        print sql_select
        cursor.execute(sql_select)
        connection_mysql.commit()
        sql_select = "update instances set host = '%s' where uuid = '%s' and vm_state = 'stopped'" % (node, instance)
        print sql_select
        cursor.execute(sql_select)
        connection_mysql.commit()
        instance_reboot = "source /root/admin-openrc.sh;nova reboot --hard %s" % (instance)
        print instance_reboot
        subprocess.call(instance_reboot, shell=True, stderr=subprocess.PIPE)
        update_compute_node_usage(instance, node)
        cursor.close()
        connection_mysql.close()
        return 0
    else:
        print "Transfer not possible"
    cursor.close()
    connection_mysql.close()
    return 1


# Reset the recorded usage of the down hosts in the database so they appear
# empty once their instances have been evacuated.
def update_down_host(host='controller', user='nova', passwd='nova4key', db='nova'):
    down_nodes = select_compute_down_host()
    connection_mysql = MySQLdb.connect(host=host, user=user, passwd=passwd, db=db)
    cursor = connection_mysql.cursor()
    connection_mysql.autocommit(True)
    for down_host in down_nodes:
        print ("The Node {} is Down".format(down_host))
        sql_select = "select memory_mb,local_gb from compute_nodes where hypervisor_hostname = '%s'" % (down_host)
        print sql_select
        cursor.execute(sql_select)
        detail = cursor.fetchall()[0]
        memory_mb = detail[0]
        local_gb = detail[1]
        # Mark the down host as empty: no running VMs, all resources free.
        memory_used = 512
        vcpu_used = 0
        local_gb_used = 0
        running_vm = 0
        free_ram = memory_mb - memory_used
        free_disk_gb = local_gb - local_gb_used
        sql_select = "update compute_nodes set memory_mb_used = '%s',vcpus_used = '%s',local_gb_used = '%s',running_vms = '%s',free_ram_mb = '%s',free_disk_gb = '%s' where hypervisor_hostname = '%s'" % (memory_used, vcpu_used, local_gb_used, running_vm, free_ram, free_disk_gb, down_host)
        print sql_select
        cursor.execute(sql_select)
        connection_mysql.commit()
    cursor.close()
    connection_mysql.close()


def select_compute_down_host_instances():
    #================================
    # Scanning for nodes which are down
    #================================
    print "Scanning For Nodes Which Are Down.."
    down_nodes = select_compute_down_host()
    for down_host in down_nodes:
        print ("The Node {} is Down".format(down_host))

    #====================================
    # Scanning for instances which are down
    #====================================
    instance_down = instance_in_down_node(down_nodes)
    for node in instance_down:
        for instance in node:
            print ("The Instance {} is Down".format(instance[0]))

    #==================================
    # Scanning for nodes which are up
    #==================================
    up_nodes = select_compute_up_host()
    for node in up_nodes:
        print ("The Node {} is Up".format(node))

    #=========================================
    # Rescue the instances from the down nodes
    #=========================================
    for node in instance_down:
        for instance in node:
            for live_node in up_nodes:
                if rescue_instance(instance[0], live_node) == 0:
                    break

    update_down_host()


if __name__ == "__main__":
    select_compute_down_host_instances()
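
To run the evacuation automatically, an /etc/crontab entry on the controller is one option (a sketch; /root/auto_evacuate.py is a hypothetical path for the script above, and the MySQL-python module must be installed):

*/5 * * * * root /usr/bin/python /root/auto_evacuate.py >> /var/log/auto_evacuate.log 2>&1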