
Wednesday, December 24, 2014

Python Error "ImportError: No module named pkg_resources"

I encountered this ImportError today while trying to use pip. Somehow the setuptools package had been removed from my Python environment.

===============
  File "/usr/bin/gunicorn", line 5, in <module>
    from pkg_resources import load_entry_point
ImportError: No module named pkg_resources
===============

Fix: reinstall setuptools to reset the Python environment:

curl https://bootstrap.pypa.io/ez_setup.py | python
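
A quick sanity check afterwards (assuming pip was the tool that originally failed):

python -c "import pkg_resources"    # no output means the module imports cleanly again
pip --version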

Private Docker Registry @ CentOS 7

Here we will create a private Docker registry for internal use, secured with SSL and basic authentication.

Install the Packages
>> yum update -y ;
>> yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
>> yum install docker-registry

The configuration file will be at /etc/docker-registry.yml

Edit the dev section to point the storage at the desired location:

 # This is the default configuration when no flavor is specified
dev:
    storage: local
    storage_path: /home/registry
    loglevel: debug

Create the directory at /home/registry, or configure the storage path to point wherever the images should live.
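
A minimal sketch of that step (adjust ownership if the registry service runs as a dedicated user on your system):

mkdir -p /home/registry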

Start and enable the Docker Registry

>>systemctl start docker-registry
>>systemctl status docker-registry
docker-registry.service - Registry server for Docker
   Loaded: loaded (/usr/lib/systemd/system/docker-registry.service; enabled)
   Active: active (running) since Mon 2014-12-15 21:20:26 UTC; 5s ago
 Main PID: 19468 (gunicorn)
   CGroup: /system.slice/docker-registry.service
           ├─19468 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19473 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19474 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19475 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19482 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19488 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19489 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19494 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           └─19496 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...

Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19468] [INFO] Listening a...68)
Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19468] [INFO] Using worke...ent
Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19473] [INFO] Booting wor...473
Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19474] [INFO] Booting wor...474
Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19475] [INFO] Booting wor...475
Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19482] [INFO] Booting wor...482
Dec 15 21:20:27 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:27 [19488] [INFO] Booting wor...488
Dec 15 21:20:27 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:27 [19489] [INFO] Booting wor...489
Dec 15 21:20:27 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:27 [19494] [INFO] Booting wor...494
Dec 15 21:20:27 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:27 [19496] [INFO] Booting wor...496
Hint: Some lines were ellipsized, use -l to show in full.


Check whether it's working
>>curl localhost:5000
"docker-registry server (local) (v0.6.8)"
[root@docker-registry ~]#

Now we need to add authentication to the registry and a self-signed SSL certificate.

Creating the Certificates
# Generate private key
>>openssl genrsa -out docker-registry.key 2048

# Generate CSR  **** MAKE SURE THE COMMON NAME (CN) MATCHES THE SERVER NAME ****
>>openssl req -new -key docker-registry.key -out docker-registry.csr

# Generate self-signed certificate
>>openssl x509 -req -days 365 -in docker-registry.csr -signkey docker-registry.key -out docker-registry.crt

# Copy the files to the correct locations
>>cp docker-registry.crt /etc/pki/tls/certs
>>cp docker-registry.key /etc/pki/tls/private/docker-registry.key
>>cp docker-registry.csr /etc/pki/tls/private/docker-registry.csr

# Update the CA trust store in CentOS 7
>>update-ca-trust enable
>>cp docker-registry.crt /etc/pki/ca-trust/source/anchors/
>>update-ca-trust extract
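
Optionally, sanity-check the certificate we just generated before moving on (this only prints the subject and validity dates):

openssl x509 -in /etc/pki/tls/certs/docker-registry.crt -noout -subject -dates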


>>yum install nginx httpd-tools

Create a User for authentication
>>sudo htpasswd -c /etc/nginx/docker-registry.htpasswd dockeruser
<Password will be prompted>
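
If more users need access later, run htpasswd against the same file without the -c flag (the -c flag would recreate the file); the username below is just an example:

>>sudo htpasswd /etc/nginx/docker-registry.htpasswd anotheruser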

Configure nginx Virtual Host (/etc/nginx/conf.d/virtualhost.conf).

============
# For versions of Nginx > 1.3.9 that include chunked transfer encoding support
# Replace with appropriate values where necessary

upstream docker-registry {
 server localhost:5000;
}

server {
 listen 8080;
 server_name docker.registry;

 ssl on;
 ssl_certificate /etc/pki/tls/certs/docker-registry.crt;
 ssl_certificate_key /etc/pki/tls/private/docker-registry.key;

 proxy_set_header Host       $http_host;   # required for Docker client sake
 proxy_set_header X-Real-IP  $remote_addr; # pass on real client IP

 client_max_body_size 0; # disable any limits to avoid HTTP 413 for large image uploads

 # required to avoid HTTP 411: see Issue #1486 (https://github.com/dotcloud/docker/issues/1486)
 chunked_transfer_encoding on;

 location / {
     # let Nginx know about our auth file
     auth_basic              "Restricted";
     auth_basic_user_file    /etc/nginx/docker-registry.htpasswd;

     proxy_pass http://docker-registry;
 }
 location /_ping {
     auth_basic off;
     proxy_pass http://docker-registry;
 }
 location /v1/_ping {
     auth_basic off;
     proxy_pass http://docker-registry;
 }

}
============
Restart and enable the service
>>systemctl restart nginx
>>systemctl enable nginx

Checking the Registry: curl https://<username>:<password>@<hostname>:<port>  ****** Hostname should be the one we gave in the certificate ******

>> curl https://dockeruser:docker@docker-registry:8080
"docker-registry server (local) (v0.6.8)"
[root@docker-registry ~]#


Configure CoreOS to use the Private Docker Registry

To use the private registry from CoreOS we need to copy the CA certificate from the registry server to the CoreOS Docker host.

Copy the CA certificate to /etc/ssl/certs/docker-registry.pem (in PEM format), then update the certificate list using:
>>sudo update-ca-certificates

Also, since our server name is docker-registry:8080, we need to copy the CA certificate to /etc/docker/certs.d/docker-registry:8080/ca.crt as well.
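
For example, assuming the certificate has already been copied over to the CoreOS host and the registry really is docker-registry:8080 as above:

sudo mkdir -p /etc/docker/certs.d/docker-registry:8080
sudo cp docker-registry.crt /etc/docker/certs.d/docker-registry:8080/ca.crt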

And restart the docker service
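
On CoreOS the Docker daemon is managed by systemd, so restarting it should be as simple as:

sudo systemctl restart docker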

Now try to login to registry

core@coreos ~ $ docker login https://docker-registry:8080
Username :
Password :
Email :
Login Succeeded
core@coreos ~ $


Now, to push an image to the private repo we need to tag the image in the format <registry-hostname>:<port>/<image-name>.
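
For example, using the centos-httpd image built earlier in this post (registry hostname and port as configured above):

docker tag rahulrajvn/centos-httpd docker-registry:8080/centos-httpd
docker push docker-registry:8080/centos-httpd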


Import/Export a Docker Images

To move a Docker image from one server to another we can use the private registry, or we can tar the image, copy the tar over to the new server, and import it there from the tar file.

Export a Docker image to a file.

docker save image > image.tar

Import a Docker image

docker load -i (archivefile)
Loads in a Docker image in the following formats: .tar, .tar.gz, .tar.xz. lrz is not supported.
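
Putting the two together, a typical transfer between servers might look like this (image name, host, and paths are only examples):

docker save rahulrajvn/centos-httpd > centos-httpd.tar
scp centos-httpd.tar root@newserver:/root/
ssh root@newserver 'docker load -i /root/centos-httpd.tar'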

Friday, December 12, 2014

Docker Usage Explained

Docker is a platform for developers and sysadmins to develop, ship, and run applications. Docker lets you quickly assemble applications from components and eliminates the friction that can come when shipping code. Docker lets you get your code tested and deployed into production as fast as possible.

Downloading a Docker image
>>docker pull centos
>>docker pull ubuntu

Running a Docker container
The -t and -i flags allocate a pseudo-TTY and keep stdin open even if not attached. This will allow you to use the container like a traditional VM as long as the bash prompt is running. Let's launch an Ubuntu container and install Apache inside of it using the bash prompt:
>>docker run -t -i ubuntu /bin/bash

To Quit
Starting with Docker 0.6.5, you can add -t to the docker run command, which will attach a pseudo-TTY. Then you can type Control-C to detach from the container without terminating it. If you use -t and -i then Control-C will terminate the container. When using -i with -t you have to use Control-P Control-Q to detach without terminating.
Control-P Control-Q

List the running containers
>>docker ps -a

Enter a running container
>>docker exec -it [container-id] bash
Once inside the container, install the needed items and packages and configure the services as needed. Then quit the container using Control-P Control-Q to keep it running.

For using the public Docker registry, register with an email address and username at https://registry.hub.docker.com/

Committing the changes made into a new image that can be used later:
>>docker commit [container-id] <registered_username>/<Nameforimage>
eg:
core@coreos ~ $ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
5adf005708db        centos:latest       "/bin/bash"         11 minutes ago      Up 11 minutes                           thirsty_ritchie
core@coreos ~ $ docker commit 5adf005708db rahulrajvn/centos-httpd
b8810f9ca8d52a289c963f57824f575341324c353707a5b1f215840c9ea88ebe
core@coreos ~ $
Now the image named rahulrajvn/centos-httpd is present on the local machine; if we need to create more containers from that image on the same server we can use it.

Pushing the image to the registered public Docker.io repo. While pushing we will be asked for a username and password.
core@coreos ~ $ docker push rahulrajvn/centos-httpd
The push refers to a repository [rahulrajvn/centos-httpd] (len: 1)
Sending image list
Please login prior to push:
Username: rahulrajvn
Password:********
Email: ******************
Login Succeeded
The push refers to a repository [rahulrajvn/centos-httpd] (len: 1)
Sending image list
Pushing repository rahulrajvn/centos-httpd (1 tags)
511136ea3c5a: Image already pushed, skipping
5b12ef8fd570: Image already pushed, skipping
34943839435d: Image already pushed, skipping
b8810f9ca8d5: Image successfully pushed
Pushing tag for rev [b8810f9ca8d5] on {https://cdn-registry-1.docker.io/v1/repositories/rahulrajvn/centos-httpd/tags/latest}
core@coreos ~ $

Download an image from a public repo
We just need to call it using the account name and image name. In the example below we use the account rahulrajvn and the image centos-httpd.
core@coreos2 ~ $ docker pull rahulrajvn/centos-httpd
Pulling repository rahulrajvn/centos-httpd
b8810f9ca8d5: Download complete
511136ea3c5a: Download complete
5b12ef8fd570: Download complete
34943839435d: Download complete
Status: Downloaded newer image for rahulrajvn/centos-httpd:latest
core@coreos2 ~ $

Network access to port 80
The default Apache install will be running on port 80. To give our container access to traffic over port 80, we use the -p flag and specify the port on the host that maps to the port inside the container. In our case we want 80 for each, so we include -p 80:80 in our command:
docker run -d -p 80:80 -it rahulrajvn/centos6 /bin/bash
If we need to forward more ports we can do it by adding more -p options:
docker run -d -p 80:80 -p 2222:22 -it rahulrajvn/centos6 /bin/bash

Listing the images
>>docker images

Removing images
>>docker rmi <Image-ID>
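
Once Apache has been installed and started inside the container, the port mapping can be checked from the host (a quick sanity check, assuming httpd is actually running in the container):

docker ps                # PORTS column should show 0.0.0.0:80->80/tcp
curl http://localhost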

Friday, December 5, 2014

NovaException: Unexpected vif_type=binding_failed In OpenStack Juno Migration


Sample Error
=============
ERROR nova.compute.manager [req-] [instance: ******-******-******-*******] Setting instance vm_state to ERROR
TRACE nova.compute.manager [instance: ******-******-******-*******] Traceback (most recent call last):
TRACE nova.compute.manager [instance: ******-******-******-*******]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5596, in _error_out_instance_on_exception
TRACE nova.compute.manager [instance: ******-******-******-*******]     yield
TRACE nova.compute.manager [instance: ******-******-******-*******]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3459, in resize_instance
TRACE nova.compute.manager [instance: ******-******-******-*******]     block_device_info)
TRACE nova.compute.manager [instance: ******-******-******-*******]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4980, in migrate_disk_and_power_off
TRACE nova.compute.manager [instance: ******-******-******-*******]     utils.execute('ssh', dest, 'mkdir', '-p', inst_base)
TRACE nova.compute.manager [instance: ******-******-******-*******]   File "/usr/lib/python2.7/site-packages/nova/utils.py", line 165, in execute
TRACE nova.compute.manager [instance: ******-******-******-*******]     return processutils.execute(*cmd, **kwargs)
TRACE nova.compute.manager [instance: ******-******-******-*******]   File "/usr/lib/python2.7/site-packages/nova/openstack/common/processutils.py", line 193, in execute
TRACE nova.compute.manager [instance: ******-******-******-*******]     cmd=' '.join(cmd))
TRACE nova.compute.manager [instance: ******-******-******-*******] ProcessExecutionError: Unexpected error while running command.
TRACE nova.compute.manager [instance: ******-******-******-*******] Command: ssh 10.5.2.20 mkdir -p /var/lib/nova/instances/******-******-******-*******
TRACE nova.compute.manager [instance: ******-******-******-*******] Exit code: 255
TRACE nova.compute.manager [instance: ******-******-******-*******] Stdout: ''
TRACE nova.compute.manager [instance: ******-******-******-*******] Stderr: 'Host key verification failed.\r\n'
TRACE nova.compute.manager [instance: ******-******-******-*******]
ERROR oslo.messaging.rpc.dispatcher [-] Exception during message handling: Unexpected error while running command.
Command: ssh 10.5.2.20 mkdir -p /var/lib/nova/instances/******-******-******-*******
Exit code: 255
Stdout: ''
Stderr: 'Host key verification failed.\r\n'

Things that need to be checked

Configure the nova user
First things first, let's make sure our nova user has an appropriate shell set:

cat /etc/passwd | grep nova
Verify that the last entry is /bin/bash.

If not, let's modify the user and make it so:

usermod -s /bin/bash nova


After doing this the next steps are all run as the nova user.
SSH Configuration
su - nova
We need to generate an SSH key:

ssh-keygen

Next up we need to configure SSH to not do host key verification, unless you want to manually SSH to all compute nodes that exist and accept the key (and continue to do so for each new compute node you add).

cat << EOF > ~/.ssh/config
Host *
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
EOF

Make passwordless SSH authentication work between the nova users on all compute nodes, as sketched below.
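
A minimal sketch of that, run as the nova user on each compute node (compute2 is an example hostname; if the remote nova account has no password set, append ~/.ssh/id_rsa.pub to its authorized_keys by hand instead):

ssh-copy-id nova@compute2
ssh compute2 hostname     # should print compute2 without asking for a password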

Sunday, November 30, 2014

GFS Storage Cluster in CentOS 7

Clustering the storage LUNs: sharing an iSCSI LUN with multiple servers.

Install Packages
yum -y install pcs fence-agents-all iscsi-initiator-utils

Configure the hacluster user
Set a password for the hacluster user; make sure we use the same password on both servers.
On both servers:

[root@controller ~]# passwd hacluster

Make sure the host entries are correct.
vi /etc/hosts
10.1.15.32 controller
10.1.15.36 controller2

Start and enable the services so they also come up on the next boot

systemctl start pcsd.service
systemctl enable pcsd.service
systemctl start pacemaker
systemctl enable pacemaker

Authenticate the nodes
[root@controller ~]#  pcs cluster auth controller controller2
<password of hacluster>

Enabling the Cluster for Next Boot (on both servers)

[root@controller ~]#  pcs cluster enable --all
[root@controller ~]#  pcs cluster status

Creating the Cluster with Controller Nodes
[root@controller ~]# pcs cluster setup --start --name storage-cluster controller controller2
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop  pacemaker.service
Redirecting to /bin/systemctl stop  corosync.service
Killing any remaining services...
Removing all cluster configuration files...
controller: Succeeded
controller: Starting Cluster...
controller2: Succeeded
controller2: Starting Cluster...
[root@controller ~]#

 Add a STONITH device – i.e. a fencing device

>>pcs stonith create iscsi-stonith-device fence_scsi devices=/dev/mapper/LUN1 meta provides=unfencing
>>pcs stonith show iscsi-stonith-device
 Resource: iscsi-stonith-device (class=stonith type=fence_scsi)
  Attributes: devices=/dev/mapper/LUN1
  Meta Attrs: provides=unfencing
  Operations: monitor interval=60s (iscsi-stonith-device-monitor-interval-60s)

 Create clone resources for DLM and CLVMD
This enables the services to run on both nodes. Run pcs commands from a single node only.

>>pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
>>pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true

Create an ordering and a colocation constraint
to make sure that DLM starts before CLVMD, and that both resources start on the same node:

>>pcs constraint order start dlm-clone then clvmd-clone
>>pcs constraint colocation add clvmd-clone with dlm-clone

Set the no-quorum-policy of the cluster
This is set to ignore so that, when quorum is lost, the remaining node continues running; note that GFS2 itself requires quorum to operate.

pcs property set no-quorum-policy=ignore


Create the GFS2 filesystem
The -t option should be specified as <clustername>:<fsname>, and the right number of journals should be specified (here 2 as we have two nodes accessing the filesystem):

 mkfs.gfs2 -p lock_dlm -t storage-cluster:glance -j 2 /dev/mapper/LUN0

 Mounting the GFS file system using pcs resource

Here we don’t use fstab but we use a pcs resource to mount the LUN.

 pcs resource create gfs2_res Filesystem device="/dev/mapper/LUN0" directory="/var/lib/glance" fstype="gfs2" options="noatime,nodiratime" op monitor interval=10s on-fail=fence clone interleave=true
 
Create an ordering constraint so that the filesystem resource starts after the CLVMD resource, and a colocation constraint so that both start on the same node:

pcs constraint order start clvmd-clone then gfs2_res-clone

pcs constraint colocation add gfs2_res-clone with clvmd-clone

pcs constraint show


To stop the cluster services and log out of the iSCSI sessions cleanly at shutdown, a small shutdown script is used:

[root@controller ~]# cat /usr/lib/systemd/system-shutdown/turnoff.service
systemctl stop pacemaker
systemctl stop pcsd
/usr/sbin/iscsiadm -m node -u
systemctl stop multipathd
systemctl stop iscsi
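
systemd only runs scripts under /usr/lib/systemd/system-shutdown/ if they are executable, so the file should be marked as such:

chmod +x /usr/lib/systemd/system-shutdown/turnoff.service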

Saturday, November 29, 2014

Configuring Multipath in CentOS 7 for iSCSI storage LUNs

Install Packages

yum -y install iscsi-initiator-utils
yum install device-mapper-multipath -y

Starting and Enabling the Service 

systemctl start iscsi;
systemctl start iscsid ;
systemctl start multipathd ;

systemctl enable iscsi ;
systemctl enable iscsid ;
systemctl enable multipathd ;

Discovering the iSCSI Targets
iscsiadm -m discovery -t sendtargets -p 10.1.1.100
iscsiadm -m discovery -t sendtargets -p 10.1.0.100


Login to all the targets
iscsiadm -m node -l

Configure basic multipath on both servers (controller/controller2)

mpathconf --enable --with_multipathd y

cat /etc/multipath.conf

defaults {
 polling_interval        10
 path_selector           "round-robin 0"
 path_grouping_policy    multibus
 path_checker            readsector0
 rr_min_io               100
 max_fds                 8192
 rr_weight               priorities
 failback                immediate
 no_path_retry           fail
 user_friendly_names     yes
}



[root@controller ~]# multipath -ll
mpathb (36a4badb00053ae7f00001c1c54767520) dm-3 DELL    ,MD3000i
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 13:0:0:1 sdi 8:128 active ready running
| `- 14:0:0:1 sdh 8:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 11:0:0:1 sdg 8:96  active ghost running
  `- 12:0:0:1 sdf 8:80  active ghost running
mpatha (36a4badb00053ae7f0000181654753fe5) dm-4 DELL    ,MD3000i
size=250G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 13:0:0:0 sdd 8:48  active ready running
| `- 14:0:0:0 sde 8:64  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 12:0:0:0 sdb 8:16  active ghost running
  `- 11:0:0:0 sdc 8:32  active ghost running
[root@controller ~]#

Adding the target partitions to multipath

Add multipath aliases for the iSCSI LUNs in /etc/multipath.conf:

multipaths {
        multipath {
                wwid                    36a4badb00053ae7f0000181654753fe5
                alias                   LUN0
        }
        multipath {
                wwid                    36a4badb00053ae7f00001c1c54767520
                alias                   LUN1
        }
}

[root@controller ~]# systemctl restart multipathd

[root@controller ~]# multipath -ll
LUN1 (36a4badb00053ae7f00001c1c54767520) dm-3 DELL    ,MD3000i
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 13:0:0:1 sdi 8:128 active ready running
| `- 14:0:0:1 sdh 8:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 11:0:0:1 sdg 8:96  active ghost running
  `- 12:0:0:1 sdf 8:80  active ghost running
LUN0 (36a4badb00053ae7f0000181654753fe5) dm-4 DELL    ,MD3000i
size=250G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 13:0:0:0 sdd 8:48  active ready running
| `- 14:0:0:0 sde 8:64  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 12:0:0:0 sdb 8:16  active ghost running
  `- 11:0:0:0 sdc 8:32  active ghost running
[root@controller ~]#


 [root@controller ~]# systemctl status multipathd
multipathd.service - Device-Mapper Multipath Device Controller
   Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled)
   Active: active (running) since Wed 2014-11-26 06:31:41 EST; 5s ago
  Process: 2920 ExecStart=/sbin/multipathd (code=exited, status=0/SUCCESS)
  Process: 2915 ExecStartPre=/sbin/multipath -A (code=exited, status=0/SUCCESS)
  Process: 2913 ExecStartPre=/sbin/modprobe dm-multipath (code=exited, status=0/SUCCESS)
 Main PID: 2922 (multipathd)
   CGroup: /system.slice/multipathd.service
           └─2922 /sbin/multipathd

Nov 26 06:31:41 controller systemd[1]: PID file /var/run/multipathd/multipathd.pid not readable (yet?)...tart.
Nov 26 06:31:41 controller multipathd[2922]: LUN0: load table [0 524288000 multipath 3 pg_init_retries ...4 1]
Nov 26 06:31:41 controller multipathd[2922]: LUN0: event checker started
Nov 26 06:31:41 controller systemd[1]: Started Device-Mapper Multipath Device Controller.
Nov 26 06:31:41 controller multipathd[2922]: path checkers start up