
Friday, December 12, 2014

Docker Usage Explained

Docker is a platform for developers and sysadmins to develop, ship, and run applications. It lets you quickly assemble applications from components, eliminates the friction that can come with shipping code, and helps you get your code tested and deployed into production as fast as possible.

Downloading a Docker Image
>>docker pull centos
>>docker pull ubuntu
Running a Docker Container
The -t and -i flags allocate a pseudo-TTY and keep stdin open even if not attached. This lets you use the container like a traditional VM as long as the bash prompt is running. Let's launch an Ubuntu container and install Apache inside it using the bash prompt:
>>docker run -t -i ubuntu /bin/bash

Detaching from a Container
Starting with Docker 0.6.5, you can add -t to the docker run command, which will attach a pseudo-TTY. Then you can type Control-C to detach from the container without terminating it. If you use -t and -i together, Control-C will terminate the container; in that case use Control-P Control-Q to detach without terminating it.

Listing Containers
>>docker ps -a
(-a also lists stopped containers.)

Entering a Running Container
>>docker exec -it [container-id] bash

Once inside the container, install the needed packages and configure the services as required (a minimal Apache example is sketched below). Then detach with Control-P Control-Q to keep the container running.
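A minimal sketch of the Apache install inside the Ubuntu container launched above (the exact commands are an assumption for the ubuntu image; on CentOS the package is httpd):

# run inside the container's bash prompt
apt-get update
apt-get install -y apache2
service apache2 start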
Using the Public Docker Registry
To use the public Docker registry, register with an email address and username at https://registry.hub.docker.com/
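Once registered, you can authenticate from the command line before pushing; it prompts for the same username, password, and email used during registration (shown in the push example further below):

>>docker login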
Committing Changes to a New Image
Commit the changes made in the container into a new image that can be reused later:
>>docker commit [container-id] <registered_username>/<Nameforimage>

Example:
core@coreos ~ $ docker ps -a
CONTAINER ID   IMAGE           COMMAND       CREATED          STATUS          PORTS   NAMES
5adf005708db   centos:latest   "/bin/bash"   11 minutes ago   Up 11 minutes           thirsty_ritchie
core@coreos ~ $ docker commit 5adf005708db rahulrajvn/centos-httpd
b8810f9ca8d52a289c963f57824f575341324c353707a5b1f215840c9ea88ebe
core@coreos ~ $

The image named rahulrajvn/centos-httpd is now present on the local machine; if we need to start more containers from that image on the same server we can use it directly.

Pushing the Image to the Public Docker Registry
While pushing we will be asked for the registered username and password.

core@coreos ~ $ docker push rahulrajvn/centos-httpd
The push refers to a repository [rahulrajvn/centos-httpd] (len: 1)
Sending image list
Please login prior to push:
Username: rahulrajvn
Password: ********
Email: ******************
Login Succeeded
The push refers to a repository [rahulrajvn/centos-httpd] (len: 1)
Sending image list
Pushing repository rahulrajvn/centos-httpd (1 tags)
511136ea3c5a: Image already pushed, skipping
5b12ef8fd570: Image already pushed, skipping
34943839435d: Image already pushed, skipping
b8810f9ca8d5: Image successfully pushed
Pushing tag for rev [b8810f9ca8d5] on {https://cdn-registry-1.docker.io/v1/repositories/rahulrajvn/centos-httpd/tags/latest}
core@coreos ~ $

Downloading an Image from a Public Repo
We just need to call it using the account name and image name. In the example below we use the account rahulrajvn and the image centos-httpd.

core@coreos2 ~ $ docker pull rahulrajvn/centos-httpd
Pulling repository rahulrajvn/centos-httpd
b8810f9ca8d5: Download complete
511136ea3c5a: Download complete
5b12ef8fd570: Download complete
34943839435d: Download complete
Status: Downloaded newer image for rahulrajvn/centos-httpd:latest
core@coreos2 ~ $

Network Access to Port 80
The default Apache install will be running on port 80. To give our container access to traffic over port 80, we use the -p flag and specify the port on the host that maps to the port inside the container. In our case we want 80 for each, so we include -p 80:80 in our command (a quick verification sketch follows at the end of this post):
docker run -d -p 80:80 -it rahulrajvn/centos6 /bin/bash

If we need to forward more ports we can do it by adding one more -p option:
docker run -d -p 80:80 -p 2222:22 -it rahulrajvn/centos6 /bin/bash

Listing the Images
>>docker images

Removing Images
>>docker rmi <Image-ID>
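Returning to the port-mapping example above, a quick way to verify that Apache is reachable through the mapped port (this assumes curl is available on the host and Apache was started inside the container):

# on the host
docker ps
curl -I http://localhost:80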

Friday, December 5, 2014

NovaException: Unexpected vif_type=binding_failed In Openstack Juno Migration


Sample Error
=============
ERROR nova.compute.manager [req-] [instance: ******-******-******-*******] Setting instance vm_state to ERROR
TRACE nova.compute.manager [instance: ******-******-******-*******] Traceback (most recent call last):
TRACE nova.compute.manager [instance: ******-******-******-*******]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5596, in _error_out_instance_on_exception
TRACE nova.compute.manager [instance: ******-******-******-*******]     yield
TRACE nova.compute.manager [instance: ******-******-******-*******]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3459, in resize_instance
TRACE nova.compute.manager [instance: ******-******-******-*******]     block_device_info)
TRACE nova.compute.manager [instance: ******-******-******-*******]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4980, in migrate_disk_and_power_off
TRACE nova.compute.manager [instance: ******-******-******-*******]     utils.execute('ssh', dest, 'mkdir', '-p', inst_base)
TRACE nova.compute.manager [instance: ******-******-******-*******]   File "/usr/lib/python2.7/site-packages/nova/utils.py", line 165, in execute
TRACE nova.compute.manager [instance: ******-******-******-*******]     return processutils.execute(*cmd, **kwargs)
TRACE nova.compute.manager [instance: ******-******-******-*******]   File "/usr/lib/python2.7/site-packages/nova/openstack/common/processutils.py", line 193, in execute
TRACE nova.compute.manager [instance: ******-******-******-*******]     cmd=' '.join(cmd))
TRACE nova.compute.manager [instance: ******-******-******-*******] ProcessExecutionError: Unexpected error while running command.
TRACE nova.compute.manager [instance: ******-******-******-*******] Command: ssh 10.5.2.20 mkdir -p /var/lib/nova/instances/******-******-******-*******
TRACE nova.compute.manager [instance: ******-******-******-*******] Exit code: 255
TRACE nova.compute.manager [instance: ******-******-******-*******] Stdout: ''
TRACE nova.compute.manager [instance: ******-******-******-*******] Stderr: 'Host key verification failed.\r\n'
TRACE nova.compute.manager [instance: ******-******-******-*******]
ERROR oslo.messaging.rpc.dispatcher [-] Exception during message handling: Unexpected error while running command.
Command: ssh 10.5.2.20 mkdir -p /var/lib/nova/instances/******-******-******-*******
Exit code: 255
Stdout: ''
Stderr: 'Host key verification failed.\r\n'

Things that need to be checked

Configure the nova user
First things first, let's make sure our nova user has an appropriate shell set:

cat /etc/passwd | grep nova
Verify that the last field (the login shell) is /bin/bash.

If not, let's modify the user and make it so:

usermod -s /bin/bash nova


After doing this the next steps are all run as the nova user.
SSH Configuration
Switch to the nova user:

su - nova

We need to generate an SSH key:

ssh-keygen

Next, we need to configure SSH to skip host key verification, unless you want to manually SSH to every existing compute node and accept its key (and do the same for each new compute node you add).

cat << EOF > ~/.ssh/config
Host *
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
EOF
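SSH may refuse to use this file if it is writable by other users, so it is worth tightening its permissions:

chmod 600 ~/.ssh/config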

Set up passwordless SSH authentication between the nova users on all compute nodes, as sketched below.
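A minimal sketch of one way to do this, assuming the key generated above and using 10.5.2.20 (the destination host from the error trace) as the other compute node; if password logins are disabled for the nova user, copy the key into ~/.ssh/authorized_keys by hand instead:

# as the nova user on each compute node
ssh-copy-id nova@10.5.2.20
# or append the public key manually
cat ~/.ssh/id_rsa.pub | ssh nova@10.5.2.20 'cat >> ~/.ssh/authorized_keys'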

Sunday, November 30, 2014

GFS Storage Cluster in Centos7

Clustering the Storage LUNs: Sharing an iSCSI LUN with Multiple Servers.

Install Packages
yum -y install pcs fence-agents-all iscsi-initiator-utils
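The steps below assume the shared iSCSI LUN is already visible on both nodes as a multipath device (/dev/mapper/LUN0 and /dev/mapper/LUN1 later on). If it is not, a rough sketch of discovering and logging in to the target on both servers (the portal IP is a placeholder, and the LUN0/LUN1 names come from multipath aliases configured separately):

iscsiadm -m discovery -t sendtargets -p <iscsi-portal-ip>
iscsiadm -m node -l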

Configure the hacluster user
Set a password for the hacluster user; make sure the same password is used on both servers.
On both servers:

[root@controller ~]# passwd hacluster

Make sure the host entries are correct.
vi /etc/hosts
10.1.15.32 controller
10.1.15.36 controller2

Start the services and enable them so they come up on the next boot:

systemctl start pcsd.service
systemctl enable pcsd.service
systemctl start pacemaker
systemctl enable pacemaker

Authenticate the nodes
[root@controller ~]#  pcs cluster auth controller controller2
<password of hacluster>
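The same authentication can be done non-interactively by passing the credentials on the command line (a sketch using the pcs flags shipped with CentOS 7):

pcs cluster auth controller controller2 -u hacluster -p <password of hacluster>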

Enabling the cluster for the next boot (on both servers)

[root@controller ~]#  pcs cluster enable --all
[root@controller ~]#  pcs cluster status

Creating the Cluster with Controller Nodes
[root@controller ~]# pcs cluster setup --start --name storage-cluster controller controller2
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop  pacemaker.service
Redirecting to /bin/systemctl stop  corosync.service
Killing any remaining services...
Removing all cluster configuration files...
controller: Succeeded
controller: Starting Cluster...
controller2: Succeeded
controller2: Starting Cluster...
[root@controller ~]#

 Add a STONITH device – i.e. a fencing device

>>pcs stonith create iscsi-stonith-device fence_scsi devices=/dev/mapper/LUN1 meta provides=unfencing
>>pcs stonith show iscsi-stonith-device
 Resource: iscsi-stonith-device (class=stonith type=fence_scsi)
  Attributes: devices=/dev/mapper/LUN1
  Meta Attrs: provides=unfencing
  Operations: monitor interval=60s (iscsi-stonith-device-monitor-interval-60s)

 Create clone resources for DLM and CLVMD
This enables the services to run on both nodes. Run the pcs commands from a single node only.

>>pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
>>pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true

Create an ordering and a colocation constraint
These make sure that DLM starts before CLVMD, and that both resources start on the same node:

>>pcs constraint order start dlm-clone then clvmd-clone
>>pcs constraint colocation add clvmd-clone with dlm-clone

Set the no-quorum-policy of the cluster
This is set to ignore so that when quorum is lost the surviving node continues running the remaining resources; note that GFS2 itself requires quorum to operate.

pcs property set no-quorum-policy=ignore


Create the GFS2 filesystem
The -t option should be specified as <clustername>:<fsname>, and the right number of journals should be specified (here 2 as we have two nodes accessing the filesystem):

 mkfs.gfs2 -p lock_dlm -t storage-cluster:glance -j 2 /dev/mapper/LUN0
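To confirm the filesystem was created with the expected type and lock table, a quick check (tunegfs2 is part of gfs2-utils):

blkid /dev/mapper/LUN0
tunegfs2 -l /dev/mapper/LUN0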

Mounting the GFS2 filesystem using a pcs resource

Here we don't use fstab; instead we use a pcs resource to mount the LUN.

 pcs resource create gfs2_res Filesystem device="/dev/mapper/LUN0" directory="/var/lib/glance" fstype="gfs2" options="noatime,nodiratime" op monitor interval=10s on-fail=fence clone interleave=true
 
Create an ordering constraint so that the filesystem resource is started after the CLVMD resource, and a colocation constraint so that both start on the same node:

pcs constraint order start clvmd-clone then gfs2_res-clone

pcs constraint colocation add gfs2_res-clone with clvmd-clone

pcs constraint show
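At this point a quick sanity check can be run on either node (the GFS2 filesystem should appear mounted on /var/lib/glance):

pcs status
mount | grep gfs2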


Finally, a shutdown script stops the cluster services, logs out of the iSCSI sessions, and stops multipathd so that the node leaves the cluster cleanly when it powers off:

[root@controller ~]# cat /usr/lib/systemd/system-shutdown/turnoff.service
systemctl stop pacemaker
systemctl stop pcsd
/usr/sbin/iscsiadm -m node -u
systemctl stop multipathd
systemctl stop iscsi
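Scripts under /usr/lib/systemd/system-shutdown/ are only executed if they are marked executable, so the file above would normally start with a shebang line (#!/bin/sh) and be made executable (a small assumption beyond what is shown above):

chmod +x /usr/lib/systemd/system-shutdown/turnoff.service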