Thursday, March 19, 2015

OpenStack: Recovering Data from a Failed Instance's Disk

****************************
qemu-nbd tool in Ubuntu
****************************

In some scenarios, an instance is running but is inaccessible through SSH and does not respond to any command. The VNC console may show boot failure or kernel panic error messages, which can indicate file system corruption on the VM itself. If you need to recover files or inspect the instance's contents, qemu-nbd can be used to mount the disk.

We can find the instance's directory by grepping for the instance name under the common instances path:

>>egrep -i "Instance-name" /var/lib/nova/instances/*/*.xml

To access the instance's disk (/var/lib/nova/instances/xxxx-instance-uuid-xxxxxx/disk), use the following steps:
1.) Suspend the instance using the virsh command.
2.) Connect the qemu-nbd device to the disk.
3.) Mount the qemu-nbd device.
4.) Unmount the device after inspecting.
5.) Disconnect the qemu-nbd device.
6.) Resume the instance.

If you do not follow steps 4 through 6, OpenStack Compute cannot manage the instance any longer. It fails to respond to any command issued by OpenStack Compute, and it is marked as shut down.

Once you mount the disk file, you should be able to access it and treat it as a collection of normal directories with files and a directory structure. However, we do not recommend that you edit or touch any files because this could change the access control lists (ACLs) that are used to determine which accounts can perform what operations on files and directories. Changing ACLs can make the instance unbootable if it is not already.

Suspend the instance using the virsh command, taking note of the internal ID:

# virsh list
Id Name                 State
----------------------------------
1 instance-00000981    running
2 instance-000009f5    running
30 instance-0000274a    running

# virsh suspend 30
Domain 30 suspended
Connect the qemu-nbd device to the disk:

# cd /var/lib/nova/instances/instance-0000274a
# ls -lh
total 33M
-rw-rw---- 1 libvirt-qemu kvm  6.3K Jan 15 11:31 console.log
-rw-r--r-- 1 libvirt-qemu kvm   33M Jan 15 22:06 disk
-rw-r--r-- 1 libvirt-qemu kvm  384K Jan 15 22:06 disk.local
-rw-rw-r-- 1 nova         nova 1.7K Jan 15 11:30 libvirt.xml
# qemu-nbd -c /dev/nbd0 `pwd`/disk
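
If /dev/nbd0 does not exist, the nbd kernel module is probably not loaded yet. Loading it first (max_part sets how many partitions per device are exposed) usually fixes this:

# modprobe nbd max_part=16
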
Mount the qemu-nbd device.

The qemu-nbd device tries to export the instance disk's different partitions as separate devices. For example, if vda is the disk and vda1 is the root partition, qemu-nbd exports the device as /dev/nbd0 and /dev/nbd0p1, respectively:

# mount /dev/nbd0p1 /mnt/
You can now access the contents of /mnt, which correspond to the first partition of the instance's disk.

To examine the secondary or ephemeral disk, use an alternate mount point if you want both primary and secondary drives mounted at the same time:

# umount /mnt
# qemu-nbd -c /dev/nbd1 `pwd`/disk.local
# mount /dev/nbd1 /mnt/
# ls -lh /mnt/
total 76K
lrwxrwxrwx.  1 root root    7 Jan 15 00:44 bin -> usr/bin
dr-xr-xr-x.  4 root root 4.0K Jan 15 01:07 boot
drwxr-xr-x.  2 root root 4.0K Jan 15 00:42 dev
drwxr-xr-x. 70 root root 4.0K Jan 15 11:31 etc
drwxr-xr-x.  3 root root 4.0K Jan 15 01:07 home
lrwxrwxrwx.  1 root root    7 Jan 15 00:44 lib -> usr/lib
lrwxrwxrwx.  1 root root    9 Jan 15 00:44 lib64 -> usr/lib64
drwx------.  2 root root  16K Jan 15 00:42 lost+found
drwxr-xr-x.  2 root root 4.0K Feb  3  2012 media
drwxr-xr-x.  2 root root 4.0K Feb  3  2012 mnt
drwxr-xr-x.  2 root root 4.0K Feb  3  2012 opt
drwxr-xr-x.  2 root root 4.0K Jan 15 00:42 proc
dr-xr-x---.  3 root root 4.0K Jan 15 21:56 root
drwxr-xr-x. 14 root root 4.0K Jan 15 01:07 run
lrwxrwxrwx.  1 root root    8 Jan 15 00:44 sbin -> usr/sbin
drwxr-xr-x.  2 root root 4.0K Feb  3  2012 srv
drwxr-xr-x.  2 root root 4.0K Jan 15 00:42 sys
drwxrwxrwt.  9 root root 4.0K Jan 15 16:29 tmp
drwxr-xr-x. 13 root root 4.0K Jan 15 00:44 usr
drwxr-xr-x. 17 root root 4.0K Jan 15 00:44 var
Once you have completed the inspection, unmount the mount point and release both qemu-nbd devices:

# umount /mnt
# qemu-nbd -d /dev/nbd1
/dev/nbd1 disconnected
# qemu-nbd -d /dev/nbd0
/dev/nbd0 disconnected
Resume the instance using virsh:

# virsh list
Id Name                 State
----------------------------------
1 instance-00000981    running
2 instance-000009f5    running
30 instance-0000274a    paused

# virsh resume 30
Domain 30 resumed


****************************
libguestfs tools in CentOS 7
****************************

sudo yum install libguestfs-tools      # Fedora/RHEL/CentOS
sudo apt-get install libguestfs-tools  # Debian/Ubuntu


[boris@icehouse1 Downloads]$ guestfish --rw -a disk

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: 'help' for help on commands
      'man' to read the manual
      'quit' to quit the shell

> run
> list-filesystems
/dev/sda1: ext4
> mount /dev/sda1 /
> ls /
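
To pull individual files out of the image without an interactive session, the same package provides virt-copy-out. A minimal sketch, assuming the disk file is named disk as above and we want /etc/fstab from the guest:

[boris@icehouse1 Downloads]$ virt-copy-out -a disk /etc/fstab /tmp/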



****************************
guestmount tool in CentOS 7
****************************

[root@compute ea1aeb3xxxxxxxxxxxxxxxxx3157a9b81621]# ls
console.log  disk  disk.info  libvirt.xml
[root@compute ea1aeb3xxxxxxxxxxxxxxxxx3157a9b81621]# ls -al
total 13790864
drwxr-xr-x.  2 nova nova        3864 Mar 18 15:07 .
drwxr-xr-x. 20 nova nova        3864 Mar 19 11:01 ..
-rw-rw----.  1 root root           0 Mar 19 11:01 console.log
-rw-r--r--.  1 qemu qemu 14094106624 Mar 19 12:09 disk
-rw-r--r--.  1 nova nova          79 Mar 18 15:07 disk.info
-rw-r--r--.  1 nova nova        2603 Mar 19 10:59 libvirt.xml
[root@compute ea1aeb3xxxxxxxxxxxxxxxxx3157a9b81621]# guestmount -a disk -i /mnt
[root@compute ea1aeb3xxxxxxxxxxxxxxxxx3157a9b81621]# ll /mnt/
total 136
dr-xr-xr-x.  2 root root  4096 Mar 18 15:44 bin
dr-xr-xr-x.  4 root root  4096 Apr 16  2014 boot
drwxr-xr-x. 10 root root  4096 Mar 19 10:22 cgroup
drwxr-xr-x.  2 root root  4096 Apr 16  2014 dev
drwxr-xr-x. 80 root root  4096 Mar 19 11:00 etc
drwxr-xr-x.  3 root root  4096 Mar 18 15:08 home
dr-xr-xr-x.  8 root root  4096 Apr 16  2014 lib
dr-xr-xr-x. 11 root root 12288 Mar 18 15:44 lib64
drwx------.  2 root root 16384 Apr 16  2014 lost+found
drwxr-xr-x.  2 root root  4096 Sep 23  2011 media
drwxr-xr-x.  2 root root  4096 Sep 23  2011 mnt
drwxr-xr-x.  2 root root  4096 Sep 23  2011 opt
drwxr-xr-x.  2 root root  4096 Apr 16  2014 proc
dr-xr-x---.  4 root root 24576 Mar 19 10:59 root
dr-xr-xr-x.  2 root root 12288 Mar 18 15:45 sbin
drwxr-xr-x.  2 root root  4096 Apr 16  2014 selinux
drwxr-xr-x.  2 root root  4096 Sep 23  2011 srv
drwxr-xr-x.  2 root root  4096 Apr 16  2014 sys
drwxrwxrwt.  3 root root  4096 Mar 19 11:00 tmp
drwxr-xr-x. 13 root root  4096 Apr 16  2014 usr
drwxr-xr-x. 19 root root  4096 Mar 19 10:14 var
[root@compute ea1aeb3xxxxxxxxxxxxxxxxx3157a9b81621]# guestunmount /mnt/
[root@compute ea1aeb3xxxxxxxxxxxxxxxxx3157a9b81621]#
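
If the instance might still be booted later, it is safer to mount the image read-only so nothing on the disk can be modified; guestmount supports this with the --ro flag. A sketch using the same disk file:

# guestmount -a disk -i --ro /mnt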

Thursday, March 12, 2015

Google Two-Factor Authentication on a Linux Server



Google Authenticator is an open-source module developed by Google that implements time-based one-time passcode (TOTP) verification tokens. It supports several mobile platforms, as well as PAM (Pluggable Authentication Modules). The one-time passcodes are generated using open standards created by OATH (the Initiative for Open Authentication).

Install the needed packages
yum install pam-devel make gcc-c++ wget bzip*

cd /root
wget https://google-authenticator.googlecode.com/files/libpam-google-authenticator-1.0-source.tar.bz2
tar -xvf libpam-google-authenticator-1.0-source.tar.bz2

cd libpam-google-authenticator-1.0
make
make install
google-authenticator

Do you want authentication tokens to be time-based (y/n) y

Your new secret key is: FGHLERMHLCISCSU6
Your verification code is 485035
Your emergency scratch codes are:
  90385136
  97173523
  18612791
  73040662
  45704109

Do you want me to update your "/root/.google_authenticator" file (y/n) y

Do you want to disallow multiple uses of the same authentication
token? This restricts you to one login about every 30s, but it increases
your chances to notice or even prevent man-in-the-middle attacks (y/n) y

By default, tokens are good for 30 seconds and in order to compensate for
possible time-skew between the client and the server, we allow an extra
token before and after the current time. If you experience problems with poor
time synchronization, you can increase the window from its default
size of 1:30min to about 4min. Do you want to do so (y/n) y

If the computer that you are logging into isn't hardened against brute-force
login attempts, you can enable rate-limiting for the authentication module.
By default, this limits attackers to no more than 3 login attempts every 30s.
Do you want to enable rate-limiting (y/n) y



Configuring SSH to use Google Authenticator Module
Open the PAM configuration file '/etc/pam.d/sshd' and add the following line to the top of the file:

auth       required     pam_google_authenticator.so
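
If some accounts have not yet run google-authenticator, the module's nullok option lets them keep logging in without a verification code until they enroll (remove it once everyone has a token). A variant of the same line:

auth       required     pam_google_authenticator.so nullok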
Next, open the SSH configuration file '/etc/ssh/sshd_config' and scroll to find the line that says:

ChallengeResponseAuthentication no
Change it to "yes", so it becomes:

ChallengeResponseAuthentication yes
Finally, restart the SSH service to apply the changes:

# systemctl restart sshd

Install the Google Authenticator application on your mobile phone, or use the Firefox add-on GAuth Authenticator. Below we show how the application is used on Android phones.



Once we enter the secret key in the application, we will get a verification code as below, which changes every 30 seconds.


Login to the Server using Google Authentication
[root@localhost ~]# ssh root@xxx.xxx.xxx.xxx
Password: << the user's password
Verification code: << the code from the phone
Last failed login: Fri Mar 13 04:49:59 UTC 2015 from xxx.xxx.xxx.xxx on ssh:notty
There was 1 failed login attempt since the last successful login.
Last login: Fri Mar 13 04:48:35 2015 from xxx.xxx.xxx.xxx
[root@server ~]#

Important: Two-factor authentication works with password-based SSH logins. If you are using a private/public key SSH session, it will skip two-factor authentication and log you in directly.

Tuesday, March 10, 2015

Swift Tips


Swift stores the objects we upload to containers as .data files on the corresponding drives.

[root@compute ~]# find /srv/node/sdc1 -iname "*.data"
/srv/node/sdc1/objects/58511/456/3923e942436c9de6e832f944fb30c456/1421697356.06343.data
/srv/node/sdc1/objects/66216/445/40aa0b832ae8dff8681916972fd13445/1422560956.02659.data
/srv/node/sdc1/objects/142841/960/8b7e465403a5b5017ae51c0c0ab5a960/1422459278.52978.data
/srv/node/sdc1/objects/53083/6af/33d6dc4a65e40f2c539e26649c2d96af/1422459797.37964.data
/srv/node/sdc1/objects/37756/61e/24df3295c06d9770e1cd4f1d15ee861e/1422560823.75913.data
/srv/node/sdc1/objects/206317/924/c97b770dc9f2170f2434631423ccb924/1422560870.83562.data
/srv/node/sdc1/objects/1056/c1d/01081c2d99e3ed7cc3408249335b9c1d/1422560871.31131.data
/srv/node/sdc1/objects/107854/6aa/6953b4ba90867f1b2ee0ff36e8f7d6aa/1422560871.63875.data
/srv/node/sdc1/objects/262004/dfc/ffdd367dd12034d5f3c066845e4d8dfc/1422560873.82851.data
/srv/node/sdc1/objects/71710/393/4607a45373f2b0f6632b2f56501cf393/1422560874.16764.data

In the above output, the Swift drive is mounted at /srv/node/sdc1.


We can get the date an object was created from the name of its .data file, which is the upload timestamp in Unix epoch seconds:

/srv/node/sdc1/objects/71784/771/461a3fd11073d0a88222403d4a7d1771/1422561047.39847.data
[root@compute ~]# date --date @1422561047
Thu Jan 29 14:50:47 EST 2015
[root@compute ~]#
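
Putting the two together, a small shell loop (just a sketch) prints the upload date of every object on the drive:

find /srv/node/sdc1 -iname "*.data" | while read -r f; do
    ts=$(basename "$f" .data | cut -d. -f1)   # epoch seconds from the file name
    echo "$(date --date @"$ts")  $f"          # upload date followed by the path
done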

If we have set a replica count of 2 in the Swift ring configuration, there will be two .data files with the same name. If we have multiple Swift servers, the replicas are stored on different servers rather than on the same one.
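
To go the other way, from an object's account, container, and name to its expected locations on disk (including every replica), the swift-get-nodes utility can be used. A sketch with hypothetical account, container, and object names:

# swift-get-nodes /etc/swift/object.ring.gz AUTH_test mycontainer myobject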

Friday, March 6, 2015

Directory Sharing between Host Machine and Docker

Mount a Host Directory as a Data Volume
To mount a host directory into a container:

>>$ sudo docker run -d -P --name web -v /src/webapp:/opt/webapp training/webapp python app.py

This mounts the host directory /src/webapp into the container at /opt/webapp.
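
Appending :ro to the volume specification makes the mount read-only inside the container, which is useful when the container should only consume the host files. A sketch with the same paths:

>>$ sudo docker run -d -P --name web -v /src/webapp:/opt/webapp:ro training/webapp python app.py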

Friday, February 13, 2015

Running a Script on Client Servers Using the Puppet Master

Enable the Puppet file server
=============================
Add the following entries to /etc/puppet/fileserver.conf
[extra_files]
path /var/lib/puppet/bucket
allow *


The script is stored in the path mentioned above
========================================
[root@master ~]# ll /var/lib/puppet/bucket/
total 4
-rw-r--r--. 1 root root 39 Feb 10 16:45 startup.sh

In the manifest below, the script is first fetched from the master and saved to a local file, and then executed:
==============================================================================================================
[root@master ~]# cat /etc/puppet/manifests/site.pp
node "client" {
file { '/tmp/startup.sh':
          owner => 'root',
          group => 'root',
          mode => '700',
          source => 'puppet:///extra_files/startup.sh',
       }
exec    {'run_startup':
        command => '/tmp/startup.sh',
        }
}
[root@master ~]#
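
Note that, as written, the exec runs the script on every agent run. If it should only run when the script's content changes, one common pattern (a sketch, not the only way) is to make the exec refreshonly and subscribe it to the file:

exec    {'run_startup':
        command     => '/tmp/startup.sh',
        refreshonly => true,                      # only run when notified of a change
        subscribe   => File['/tmp/startup.sh'],   # re-run whenever the script changes
        }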

Tuesday, February 10, 2015

Puppet Master-Client Setup/Usage

Puppet is a system for automating system administration tasks. It has a master server on which we define the client configurations, and on each client an agent runs that fetches its configuration from the master server and applies it.

Environment
Master and Client Runs on Centos7

Open port 8140 in the firewall and set SELinux to permissive mode.

Installing the packages
================
rpm -ivh https://yum.puppetlabs.com/el/7/products/x86_64/puppetlabs-release-7-11.noarch.rpm
yum install -y puppet-server

Start the service
============
systemctl start  puppetmaster.service
puppet resource service puppetmaster ensure=running enable=true
--------------------------
Notice: /Service[puppetmaster]/enable: enable changed 'false' to 'true'
service { 'puppetmaster':
  ensure => 'running',
  enable => 'true',
}
[root@master ~]#

The certificate and keys will now have been created.
====================================================
[root@master ~]# ll /var/lib/puppet/ssl/certs
total 8
-rw-r--r--. 1 puppet puppet 2013 Feb  9 14:48 ca.pem
-rw-r--r--. 1 puppet puppet 2098 Feb  9 14:48 master.example.com.novalocal.pem
[root@master ~]#
[root@master ~]# ll /var/lib/puppet/ssl/private_keys/
total 4
-rw-r--r--. 1 puppet puppet 3243 Feb  9 14:48 master.example.com.novalocal.pem
[root@master ~]#


Add the following entries to the following file. # You will find the cert name (without the .pem extension) in /var/lib/puppet/ssl/certs
================================================
vim /etc/puppet/puppet.conf
[master]
certname = master.example.com.novalocal
autosign = true

Restart the Service
systemctl restart  puppetmaster.service

[root@master ~]# netstat -plan |grep 8140
tcp6       0      0 :::8140                 :::*                    LISTEN      5870/ruby
[root@master ~]#

####################
Client Configuration 
####################

Install the Packages
====================
rpm -ivh https://yum.puppetlabs.com/el/7/products/x86_64/puppetlabs-release-7-11.noarch.rpm
yum install -y puppet

Configure the Client
=====================
 vim /etc/puppet/puppet.conf
# In the [agent] section
    server = master.example.com.novalocal
    report = true
    pluginsync = true

Now the following command will send the client's certificate signing request to the server
===============================================
puppet agent -t --debug --verbose

On the server, we need to sign the client certificate if it is not signed automatically
=============================================================
puppet cert sign --all

Now, from the client, run the agent again to get synced:
=========================
puppet agent -t --debug --verbose



Now, on the server, create the configuration file
==================================
cat /etc/puppet/manifests/site.pp
node "client.example.com" {
file { '/root/example_file.txt':
    ensure => "file",
    owner  => "root",
    group  => "root",
    mode   => "700",
    content => "Congratulations!
Puppet has created this file.
",}
}

Once the above file is created on the server, we need to run the agent on the client:
puppet agent -t --debug --verbose

We can see that the file has been created:

Info: Applying configuration version '1423504520'
Notice: /Stage[main]/Main/Node[client.example.com]/File[/root/example_file.txt]/ensure: defined content as '{md5}8a2d86dd40aa579c3fabac1453fcffa5'
Debug: /Stage[main]/Main/Node[client.example.com]/File[/root/example_file.txt]: The container Node[client.example.com] will propagate my refresh event
Debug: Node[client.example.com]: The container Class[Main] will propagate my refresh event
Debug: Class[Main]: The container Stage[main] will propagate my refresh event
Debug: Finishing transaction 23483900
Debug: Storing state
Debug: Stored state in 0.01 seconds
Notice: Finished catalog run in 0.03 seconds
Debug: Using cached connection for https://master.example.com.novalocal:8140
Debug: Caching connection for https://master.example.com.novalocal:8140
Debug: Closing connection for https://master.example.com.novalocal:8140
[root@client ~]# ll /root/
total 4
-rwx------. 1 root root 47 Feb  9 17:55 example_file.txt
[root@client ~]#



Tuesday, February 3, 2015

Configuring an HTTP Proxy on a Linux Server


Open the .bash_profile file for editing.

(example: vi ~/.bash_profile)
Add the following lines to the end of the file:
http_proxy=http://proxy_server_address:port
export no_proxy=localhost,127.0.0.1,192.168.0.34
export http_proxy
http_proxy should be the IP address or hostname of your proxy server, plus its port.
no_proxy should list any exclusions you want to make: addresses that you do not want to send via the proxy.
NOTE: This must be done for each individual user, including root.
If you don’t want to log out of your shell session, you can reload the bash profile with the following:
source ~/.bash_profile
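
Many tools also honor the https_proxy variable for TLS connections, so it is usually worth exporting it alongside. A sketch reusing the same placeholder address:

export http_proxy=http://proxy_server_address:port
export https_proxy=$http_proxy
export no_proxy=localhost,127.0.0.1,192.168.0.34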

Configuring YUM to use proxy
To configure yum to use the HTTP/HTTPS proxy, you will need to edit the /etc/yum.conf configuration file. Open /etc/yum.conf in your favorite editor and add the following line:
proxy=http://proxy_server_address:port

Save and close the file, then clear the cache used by yum with the following command:
yum clean all
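
If the proxy requires authentication, yum also supports proxy_username and proxy_password in the same file. A sketch with placeholder credentials:

proxy=http://proxy_server_address:port
proxy_username=your_proxy_user
proxy_password=your_proxy_password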