
Wednesday, April 1, 2015

Protect GRUB2 with a Password on CentOS 7 / RHEL 7


Protect GRUB2 with a Plain-Text Password
1.) Log in as the root user:
su -

2.) Back up the existing grub.cfg so that it can be restored if anything goes wrong:
>>cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg.orig
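If the new configuration ever needs to be rolled back, the backup can simply be copied back (a minimal sketch of the reverse copy):

>>cp /boot/grub2/grub.cfg.orig /boot/grub2/grub.cfg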

To specify a superuser, add the following lines to the /etc/grub.d/01_users file, where john is the name of the user designated as superuser and johnspassword is the superuser's password:

cat <<EOF
set superusers="john"
password john johnspassword
EOF
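The files under /etc/grub.d/ are executable shell scripts that grub2-mkconfig runs, and whatever they print is written into the generated grub.cfg. As a minimal sketch, the complete /etc/grub.d/01_users script holding the directives above might look like this (the shebang line is an assumption, matching the other scripts in that directory):

#!/bin/sh -e
cat << EOF
set superusers="john"
password john johnspassword
EOF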

On BIOS-based machines, issue the following command as root:
>>grub2-mkconfig -o /boot/grub2/grub.cfg
On UEFI-based machines, issue the following command as root:
>> grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

To Use an Encrypted Password
Create the encrypted password hash using:
grub2-mkpasswd-pbkdf2
Enter Password:
Reenter Password:
PBKDF2 hash of your password is grub.pbkdf2.sha512.10000.19074739ED80F115963D984BDCB35AA671C24325755377C3E9B014D862DA6ACC77BC110EED41822800A87FD3700C037320E51E9326188D53247EC0722DDF15FC.C56EC0738911AD86CEA55546139FEBC366A393DF9785A8F44D3E51BF09DB980BAFEF85281CBBC56778D8B19DC94833EA8342F7D73E3A1AA30B205091F1015A85

Now change the entry in the /etc/grub.d/01_users file as follows:

cat <<EOF
set superusers="john"
password_pbkdf2 john grub.pbkdf2.sha512.10000.19074739ED80F115963D984BDCB35AA671C24325755377C3E9B014D862DA6ACC77BC110EED41822800A87FD3700C037320E51E9326188D53247EC0722DDF15FC.C56EC0738911AD86CEA55546139FEBC366A393DF9785A8F44D3E51BF09DB980BAFEF85281CBBC56778D8B19DC94833EA8342F7D73E3A1AA30B205091F1015A85
EOF
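After editing /etc/grub.d/01_users, regenerate grub.cfg once more so the encrypted password takes effect, using the same grub2-mkconfig command as above (BIOS or UEFI path as appropriate):

>>grub2-mkconfig -o /boot/grub2/grub.cfg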





Monday, March 23, 2015

Creating Replicated Volumes with GlusterFS

In the following scenario we replicate data from one server to another using GlusterFS replicated volumes.

Prepare and mount the brick partition
On both servers:
mkfs.ext3 /dev/sdb1
mkdir /root/glusterfs
mount /dev/sdb1 /root/glusterfs/
tail -n 1 /etc/mtab >> /etc/fstab
mkdir /root/glusterfs/images


Enable the EPEL repository in RHEL/CentOS, then enable the GlusterFS repository on both servers:

wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
yum install glusterfs-server -y
service glusterd start
chkconfig glusterd on
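On CentOS 7 / RHEL 7 the native systemd equivalents of the two service commands above would be (a hedged aside, same effect):

systemctl start glusterd
systemctl enable glusterd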


#Configure the Trusted Pool
#Run the following command on controller1.
gluster peer probe controller2
#Run the following command on controller2.
gluster peer probe controller1
#Note: Once the trusted pool has been established, only servers that are already part of the pool can probe new servers into it.
gluster peer status


#Set up a GlusterFS Volume
#Create the volume on any single server and start it; here the commands are run on controller1.

gluster volume create images replica 2 controller1:/root/glusterfs/images controller2:/root/glusterfs/images
gluster volume start images

#The same approach can replicate other data; for example, the Open vSwitch configuration between two network nodes:
gluster volume create images replica 2 network1:/root/gluster/openvswitch network2:/root/gluster/openvswitch
mount.glusterfs 192.168.216.145:images /etc/openvswitch/

# Next, confirm the status of the volume.
gluster volume info



 echo "
 mount.glusterfs 192.168.216.135:images /var/lib/glance/images/
 " >> /etc/rc.local

 mount.glusterfs 192.168.216.135:images /var/lib/glance/images/
chown glance.glance /var/lib/glance/images -R
chmod g+s /var/lib/glance/images
chmod 775 /var/lib/glance -R
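A quick, hedged way to verify that replication works (the file name below is only illustrative): create a file through the client mount and confirm it appears inside the brick directory on both servers.

touch /var/lib/glance/images/replication-test
ssh controller1 ls /root/glusterfs/images/
ssh controller2 ls /root/glusterfs/images/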

Thursday, March 19, 2015

Openstack Recovering Data from Failed Instances Disk



****************************
Qemu-nbd tools in Ubuntu
****************************

In some scenarios, instances are running but are inaccessible through SSH and do not respond to any command. The VNC console could be displaying a boot failure or kernel panic error messages. This could be an indication of file system corruption on the VM itself. If you need to recover files or inspect the content of the instance, qemu-nbd can be used to mount the disk.

We can find the instance's path by grepping for the instance name under the common instance path:

>>egrep -i "Instance-name" /var/lib/nova/instances/*/*.xml
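If only the instance's UUID is known, the libvirt domain name (instance-000000xx) can be looked up first; a hedged example using the nova client (the instance_name field is admin-only and its exact name may vary by release):

>>nova show <instance-uuid> | grep instance_name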

To access the instance's disk (/var/lib/nova/instances/xxxx-instance-uuid-xxxxxx/disk), use the following steps:
1.) Suspend the instance using the virsh command.
2.) Connect the qemu-nbd device to the disk.
3.) Mount the qemu-nbd device.
4.) Unmount the device after inspecting.
5.) Disconnect the qemu-nbd device.
6.) Resume the instance.

If you do not follow steps 4 through 6, OpenStack Compute cannot manage the instance any longer. It fails to respond to any command issued by OpenStack Compute, and it is marked as shut down.

Once you mount the disk file, you should be able to access it and treat it as a collection of normal directories with files and a directory structure. However, we do not recommend that you edit or touch any files because this could change the access control lists (ACLs) that are used to determine which accounts can perform what operations on files and directories. Changing ACLs can make the instance unbootable if it is not already.

Suspend the instance using the virsh command, taking note of the internal ID:

# virsh list
Id Name                 State
----------------------------------
1 instance-00000981    running
2 instance-000009f5    running
30 instance-0000274a    running

# virsh suspend 30
Domain 30 suspended
Connect the qemu-nbd device to the disk:

# cd /var/lib/nova/instances/instance-0000274a
# ls -lh
total 33M
-rw-rw---- 1 libvirt-qemu kvm  6.3K Jan 15 11:31 console.log
-rw-r--r-- 1 libvirt-qemu kvm   33M Jan 15 22:06 disk
-rw-r--r-- 1 libvirt-qemu kvm  384K Jan 15 22:06 disk.local
-rw-rw-r-- 1 nova         nova 1.7K Jan 15 11:30 libvirt.xml
# qemu-nbd -c /dev/nbd0 `pwd`/disk
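If the qemu-nbd command complains that /dev/nbd0 does not exist, the nbd kernel module is probably not loaded; a hedged example of loading it with support for partition devices before re-running the command:

# modprobe nbd max_part=16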
Mount the qemu-nbd device.

The qemu-nbd device tries to export the instance disk's different partitions as separate devices. For example, if vda is the disk and vda1 is the root partition, qemu-nbd exports the device as /dev/nbd0 and /dev/nbd0p1, respectively:

# mount /dev/nbd0p1 /mnt/
You can now access the contents of /mnt, which correspond to the first partition of the instance's disk.

To examine the secondary or ephemeral disk, use an alternate mount point if you want both primary and secondary drives mounted at the same time:

# umount /mnt
# qemu-nbd -c /dev/nbd1 `pwd`/disk.local
# mount /dev/nbd1 /mnt/
# ls -lh /mnt/
total 76K
lrwxrwxrwx.  1 root root    7 Jan 15 00:44 bin -> usr/bin
dr-xr-xr-x.  4 root root 4.0K Jan 15 01:07 boot
drwxr-xr-x.  2 root root 4.0K Jan 15 00:42 dev
drwxr-xr-x. 70 root root 4.0K Jan 15 11:31 etc
drwxr-xr-x.  3 root root 4.0K Jan 15 01:07 home
lrwxrwxrwx.  1 root root    7 Jan 15 00:44 lib -> usr/lib
lrwxrwxrwx.  1 root root    9 Jan 15 00:44 lib64 -> usr/lib64
drwx------.  2 root root  16K Jan 15 00:42 lost+found
drwxr-xr-x.  2 root root 4.0K Feb  3  2012 media
drwxr-xr-x.  2 root root 4.0K Feb  3  2012 mnt
drwxr-xr-x.  2 root root 4.0K Feb  3  2012 opt
drwxr-xr-x.  2 root root 4.0K Jan 15 00:42 proc
dr-xr-x---.  3 root root 4.0K Jan 15 21:56 root
drwxr-xr-x. 14 root root 4.0K Jan 15 01:07 run
lrwxrwxrwx.  1 root root    8 Jan 15 00:44 sbin -> usr/sbin
drwxr-xr-x.  2 root root 4.0K Feb  3  2012 srv
drwxr-xr-x.  2 root root 4.0K Jan 15 00:42 sys
drwxrwxrwt.  9 root root 4.0K Jan 15 16:29 tmp
drwxr-xr-x. 13 root root 4.0K Jan 15 00:44 usr
drwxr-xr-x. 17 root root 4.0K Jan 15 00:44 var
Once you have completed the inspection, unmount the mount point and release the qemu-nbd device:

# umount /mnt
# qemu-nbd -d /dev/nbd0
/dev/nbd0 disconnected
Resume the instance using virsh:

# virsh list
Id Name                 State
----------------------------------
1 instance-00000981    running
2 instance-000009f5    running
30 instance-0000274a    paused

# virsh resume 30
Domain 30 resumed


****************************
Libguestfs tools in CentOS 7
****************************

sudo yum install libguestfs-tools      # Fedora/RHEL/CentOS
sudo apt-get install libguestfs-tools  # Debian/Ubuntu


[boris@icehouse1 Downloads]$ guestfish --rw -a disk

Welcome to guestfish, the guest filesystem shell for
editing virtual machine filesystems and disk images.

Type: 'help' for help on commands
      'man' to read the manual
      'quit' to quit the shell

> run
> list-filesystems
/dev/sda1: ext4
> mount /dev/sda1 /
> ls /
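Individual files can also be copied out of the image straight from the guestfish shell; a hedged example using its copy-out command (the paths are only illustrative):

> copy-out /etc/hostname /tmp
> umount-all
> quit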



****************************
Guestmount tools in CentOS 7
****************************

[root@compute ea1aeb3xxxxxxxxxxxxxxxxx3157a9b81621]# ls
console.log  disk  disk.info  libvirt.xml
[root@compute ea1aeb3xxxxxxxxxxxxxxxxx3157a9b81621]# ls -al
total 13790864
drwxr-xr-x.  2 nova nova        3864 Mar 18 15:07 .
drwxr-xr-x. 20 nova nova        3864 Mar 19 11:01 ..
-rw-rw----.  1 root root           0 Mar 19 11:01 console.log
-rw-r--r--.  1 qemu qemu 14094106624 Mar 19 12:09 disk
-rw-r--r--.  1 nova nova          79 Mar 18 15:07 disk.info
-rw-r--r--.  1 nova nova        2603 Mar 19 10:59 libvirt.xml
[root@compute ea1aeb3xxxxxxxxxxxxxxxxx3157a9b81621]# guestmount -a disk -i /mnt
[root@compute ea1aeb3xxxxxxxxxxxxxxxxx3157a9b81621]# ll /mnt/
total 136
dr-xr-xr-x.  2 root root  4096 Mar 18 15:44 bin
dr-xr-xr-x.  4 root root  4096 Apr 16  2014 boot
drwxr-xr-x. 10 root root  4096 Mar 19 10:22 cgroup
drwxr-xr-x.  2 root root  4096 Apr 16  2014 dev
drwxr-xr-x. 80 root root  4096 Mar 19 11:00 etc
drwxr-xr-x.  3 root root  4096 Mar 18 15:08 home
dr-xr-xr-x.  8 root root  4096 Apr 16  2014 lib
dr-xr-xr-x. 11 root root 12288 Mar 18 15:44 lib64
drwx------.  2 root root 16384 Apr 16  2014 lost+found
drwxr-xr-x.  2 root root  4096 Sep 23  2011 media
drwxr-xr-x.  2 root root  4096 Sep 23  2011 mnt
drwxr-xr-x.  2 root root  4096 Sep 23  2011 opt
drwxr-xr-x.  2 root root  4096 Apr 16  2014 proc
dr-xr-x---.  4 root root 24576 Mar 19 10:59 root
dr-xr-xr-x.  2 root root 12288 Mar 18 15:45 sbin
drwxr-xr-x.  2 root root  4096 Apr 16  2014 selinux
drwxr-xr-x.  2 root root  4096 Sep 23  2011 srv
drwxr-xr-x.  2 root root  4096 Apr 16  2014 sys
drwxrwxrwt.  3 root root  4096 Mar 19 11:00 tmp
drwxr-xr-x. 13 root root  4096 Apr 16  2014 usr
drwxr-xr-x. 19 root root  4096 Mar 19 10:14 var
[root@compute ea1aeb3xxxxxxxxxxxxxxxxx3157a9b81621]# guestunmount /mnt/
[root@compute ea1aeb3xxxxxxxxxxxxxxxxx3157a9b81621]#
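For pure inspection, mounting the image read-only is safer because nothing on the disk can be modified; a hedged variant of the same command:

guestmount --ro -a disk -i /mnt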