
Sunday, November 30, 2014

GFS Storage Cluster in CentOS 7

Clustering the storage LUNs: sharing an iSCSI LUN with multiple servers.

Install Packages
yum -y install pcs fence-agents-all iscsi-initiator-utils

Configure the hacluster user
Set a password for the hacluster user; make sure you use the same password on both servers.
On both servers:

[root@controller ~]# passwd hacluster
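If you prefer to script this step, a non-interactive alternative (the password below is only a placeholder; use your own value, identical on both nodes):

echo 'Str0ngPass' | passwd --stdin hacluster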

Make sure the host entries are correct.
vi /etc/hosts
10.1.15.32 controller
10.1.15.36 controller2

Start and enable the services so they start at the next boot

systemctl start pcsd.service
systemctl enable pcsd.service
systemctl start pacemaker
systemctl enable pacemaker

Authenticate the nodes
[root@controller ~]#  pcs cluster auth controller controller2
<password of hacluster>

Enabling the cluster for the next boot (on both servers)

[root@controller ~]#  pcs cluster enable --all
[root@controller ~]#  pcs cluster status

Creating the Cluster with Controller Nodes
[root@controller ~]# pcs cluster setup --start --name storage-cluster controller controller2
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop  pacemaker.service
Redirecting to /bin/systemctl stop  corosync.service
Killing any remaining services...
Removing all cluster configuration files...
controller: Succeeded
controller: Starting Cluster...
controller2: Succeeded
controller2: Starting Cluster...
[root@controller ~]#

 Add a STONITH device – i.e. a fencing device

>>pcs stonith create iscsi-stonith-device fence_scsi devices=/dev/mapper/LUN1 meta provides=unfencing
>>pcs stonith show iscsi-stonith-device
 Resource: iscsi-stonith-device (class=stonith type=fence_scsi)
  Attributes: devices=/dev/mapper/LUN1
  Meta Attrs: provides=unfencing
  Operations: monitor interval=60s (iscsi-stonith-device-monitor-interval-60s)

Create clone resources for DLM and CLVMD
This enables the services to run on both nodes. Run the pcs commands from a single node only.

>>pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
>>pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true

Create an ordering and a colocation constraint
to make sure that DLM starts before CLVMD and that both resources start on the same node:

>>pcs constraint order start dlm-clone then clvmd-clone
>>pcs constraint colocation add clvmd-clone with dlm-clone

Set the no-quorum-policy of the cluster
Setting it to ignore means that when quorum is lost the remaining node continues to run; be aware that GFS2 itself requires quorum to operate.

pcs property set no-quorum-policy=ignore


Create the GFS2 filesystem
The -t option should be specified as <clustername>:<fsname>, and the right number of journals should be specified (here 2 as we have two nodes accessing the filesystem):

 mkfs.gfs2 -p lock_dlm -t storage-cluster:glance -j 2 /dev/mapper/LUN0
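The command above puts GFS2 directly on the multipath device. If you would rather carve the LUN into clustered LVM volumes (which is what the clvmd resource is for), a rough sketch, with example volume group and logical volume names:

pvcreate /dev/mapper/LUN0
vgcreate -cy vg_glance /dev/mapper/LUN0      # -cy marks the volume group as clustered
lvcreate -n lv_glance -l 100%FREE vg_glance
mkfs.gfs2 -p lock_dlm -t storage-cluster:glance -j 2 /dev/vg_glance/lv_glance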

Mounting the GFS2 file system using a pcs resource

We don't use fstab here; instead a pcs resource mounts the LUN.

 pcs resource create gfs2_res Filesystem device="/dev/mapper/LUN0" directory="/var/lib/glance" fstype="gfs2" options="noatime,nodiratime" op monitor interval=10s on-fail=fence clone interleave=true
 
Create an ordering constraint so that the filesystem resource is started after the CLVMD resource, and a colocation constraint so that both start on the same node:

pcs constraint order start clvmd-clone then gfs2_res-clone

pcs constraint colocation add gfs2_res-clone with clvmd-clone

pcs constraint show
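With the constraints in place, the GFS2 filesystem should be mounted on /var/lib/glance on both nodes; a quick check on either node:

pcs status
mount | grep gfs2
df -h /var/lib/glance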


Finally, to stop the cluster services and log out of the iSCSI sessions cleanly at shutdown, I use a script in /usr/lib/systemd/system-shutdown/ (see the "Run a Script Before Shutdown in CentOS 7" post below):

[root@controller ~]# cat /usr/lib/systemd/system-shutdown/turnoff.service
#!/bin/sh
systemctl stop pacemaker
systemctl stop pcsd
/usr/sbin/iscsiadm -m node -u
systemctl stop multipathd
systemctl stop iscsi
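Scripts in /usr/lib/systemd/system-shutdown/ are only executed if they are marked executable, so after creating the file above:

chmod +x /usr/lib/systemd/system-shutdown/turnoff.service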

Saturday, November 29, 2014

Configuring Multipath in CentOS 7 for iSCSI Storage LUNs

Install Packages

yum -y install iscsi-initiator-utils
yum install device-mapper-multipath -y

Starting and Enabling the Services

systemctl start iscsi;
systemctl start iscsid ;
systemctl start multipathd ;

systemctl enable iscsi ;
systemctl enable iscsid ;
systemctl enable multipathd ;

Discovering the iSCSI Targets
iscsiadm -m discovery -t sendtargets -p 10.1.1.100
iscsiadm -m discovery -t sendtargets -p 10.1.0.100


Log in to all the targets
iscsiadm -m node -l
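To verify that the sessions are established and that the new block devices are visible:

iscsiadm -m session
lsblk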

Configure basic multipath on both servers (controller/controller2)

mpathconf --enable --with_multipathd y

cat /etc/multipath.conf

defaults {
 polling_interval        10
 path_selector           "round-robin 0"
 path_grouping_policy    multibus
 path_checker            readsector0
 rr_min_io               100
 max_fds                 8192
 rr_weight               priorities
 failback                immediate
 no_path_retry           fail
 user_friendly_names     yes
}



[root@controller ~]# multipath -ll
mpathb (36a4badb00053ae7f00001c1c54767520) dm-3 DELL    ,MD3000i
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 13:0:0:1 sdi 8:128 active ready running
| `- 14:0:0:1 sdh 8:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 11:0:0:1 sdg 8:96  active ghost running
  `- 12:0:0:1 sdf 8:80  active ghost running
mpatha (36a4badb00053ae7f0000181654753fe5) dm-4 DELL    ,MD3000i
size=250G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 13:0:0:0 sdd 8:48  active ready running
| `- 14:0:0:0 sde 8:64  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 12:0:0:0 sdb 8:16  active ghost running
  `- 11:0:0:0 sdc 8:32  active ghost running
[root@controller ~]#

Adding the target LUNs to multipath

Add multipath aliases for the iSCSI LUNs in /etc/multipath.conf:

multipaths {
        multipath {
                wwid                    36a4badb00053ae7f0000181654753fe5
                alias                   LUN0
        }
        multipath {
                wwid                    36a4badb00053ae7f00001c1c54767520
                alias                   LUN1
        }
}

[root@controller ~]# systemctl restart multipathd

[root@controller ~]# multipath -ll
LUN1 (36a4badb00053ae7f00001c1c54767520) dm-3 DELL    ,MD3000i
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 13:0:0:1 sdi 8:128 active ready running
| `- 14:0:0:1 sdh 8:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 11:0:0:1 sdg 8:96  active ghost running
  `- 12:0:0:1 sdf 8:80  active ghost running
LUN0 (36a4badb00053ae7f0000181654753fe5) dm-4 DELL    ,MD3000i
size=250G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 13:0:0:0 sdd 8:48  active ready running
| `- 14:0:0:0 sde 8:64  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 12:0:0:0 sdb 8:16  active ghost running
  `- 11:0:0:0 sdc 8:32  active ghost running
[root@controller ~]#


 [root@controller ~]# systemctl status multipathd
multipathd.service - Device-Mapper Multipath Device Controller
   Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled)
   Active: active (running) since Wed 2014-11-26 06:31:41 EST; 5s ago
  Process: 2920 ExecStart=/sbin/multipathd (code=exited, status=0/SUCCESS)
  Process: 2915 ExecStartPre=/sbin/multipath -A (code=exited, status=0/SUCCESS)
  Process: 2913 ExecStartPre=/sbin/modprobe dm-multipath (code=exited, status=0/SUCCESS)
 Main PID: 2922 (multipathd)
   CGroup: /system.slice/multipathd.service
           └─2922 /sbin/multipathd

Nov 26 06:31:41 controller systemd[1]: PID file /var/run/multipathd/multipathd.pid not readable (yet?)...tart.
Nov 26 06:31:41 controller multipathd[2922]: LUN0: load table [0 524288000 multipath 3 pg_init_retries ...4 1]
Nov 26 06:31:41 controller multipathd[2922]: LUN0: event checker started
Nov 26 06:31:41 controller systemd[1]: Started Device-Mapper Multipath Device Controller.
Nov 26 06:31:41 controller multipathd[2922]: path checkers start up



Thursday, November 27, 2014

Run a Script Before Shutdown in CentOS 7

Immediately before executing the actual system halt/poweroff/reboot/kexec, systemd-shutdown will run all executables in /usr/lib/systemd/system-shutdown/ and pass one argument to them: either "halt", "poweroff", "reboot" or "kexec", depending on the chosen action. All executables in this directory are executed in parallel, and execution of the action is not continued until all executables have finished.

Note that systemd-halt.service (and the related units) should never be executed directly. Instead, trigger system shutdown with a command such as "systemctl halt".
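A minimal example of such a script (the filename is arbitrary); it logs the chosen action to the kernel ring buffer, since most filesystems are already unmounted at this stage:

[root@controller ~]# cat /usr/lib/systemd/system-shutdown/log-action.sh
#!/bin/sh
# $1 is one of: halt, poweroff, reboot, kexec
echo "system-shutdown: action=$1" > /dev/kmsg
[root@controller ~]# chmod +x /usr/lib/systemd/system-shutdown/log-action.sh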

Tuesday, November 25, 2014

NIC Bonding in CentOS 7

Most of the settings are the same as in the older version, described at the following URL:

http://www.adminz.in/2014/07/nic-bonding-in-linux.html

We just need to add the following entries to the bond0 master config file so the network system understands that bond0 is the master.

In bond0's config file.

TYPE=Bond
BONDING_MASTER=yes

Sample Master File


DEVICE=bond0
NAME=bond0
TYPE=Bond
BONDING_MASTER=yes
IPADDR=192.168.1.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="bonding parameters separated by spaces"

Sample Slave File

DEVICE=ethN
NAME=bond0-slave
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
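After writing the ifcfg files, restart the network service and confirm that the bond came up with its slaves; /proc/net/bonding/bond0 shows the bonding mode and the state of each slave interface.

systemctl restart network
cat /proc/net/bonding/bond0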

Thursday, November 20, 2014

Systemd - systemctl in RHEL 7/CentOS 7


Systemd is a system and service manager for Linux, compatible with SysV and LSB init scripts. It provides aggressive parallelization capabilities, uses socket and D-Bus activation for starting services, offers on-demand starting of daemons, keeps track of processes using Linux cgroups, supports snapshotting and restoring of the system state, maintains mount and automount points, and implements an elaborate transactional dependency-based service control logic. It can work as a drop-in replacement for sysvinit.

Boot process

Systemd's primary task is to manage the boot process and provide information about it.

To get the boot process duration, type:

>> systemd-analyze
Startup finished in 422ms (kernel) + 2.722s (initrd) + 9.674s (userspace) = 12.820s
To get the time spent by each task during the boot process, type:

>> systemd-analyze blame
7.029s network.service
2.241s plymouth-start.service
1.293s kdump.service
1.156s plymouth-quit-wait.service
1.048s firewalld.service
632ms postfix.service
621ms tuned.service
460ms iprupdate.service
446ms iprinit.service
344ms accounts-daemon.service
...
7ms systemd-update-utmp-runlevel.service
5ms systemd-random-seed.service
5ms sys-kernel-config.mount
To get the list of the dependencies, type:

>> systemctl list-dependencies
default.target
├─abrt-ccpp.service
├─abrt-oops.service
...
├─tuned.service
├─basic.target
│ ├─firewalld.service
│ ├─microcode.service
...
├─getty.target
│ ├─getty@tty1.service
│ └─serial-getty@ttyS0.service
└─remote-fs.target
Note: You will find additional information on this point in Lennart Poettering's blog.

Journal analysis

In addition, Systemd handles the system event log; a syslog daemon is no longer mandatory.
To get the content of the Systemd journal, type:

>> journalctl
To get all the events related to the crond process in the journal, type:

>> journalctl /sbin/crond
Note: You can replace /sbin/crond by `which crond`.

To get all the events since the last boot, type:

>> journalctl -b
To get all the events that appeared today in the journal, type:

>> journalctl --since=today
To get all the events with a syslog priority of err, type:

>> journalctl -p err
To get the last 10 events and wait for any new ones (like "tail -f /var/log/messages"), type:

>> journalctl -f
Note: You will find additional information on this point in Lennart Poettering's blog or his video (44 min: the first ten minutes are very interesting concerning security issues).

Control groups

Systemd organizes tasks in control groups. For example, all the processes started by an apache webserver will be in the same control group, CGI scripts included.

To get the full hierarchy of control groups, type:

>> systemd-cgls
├─user.slice
│ └─user-1000.slice
│ └─session-1.scope
│ ├─2889 gdm-session-worker [pam/gdm-password]
│ ├─2899 /usr/bin/gnome-keyring-daemon --daemonize --login
│ ├─2901 gnome-session --session gnome-classic
. .
└─iprupdate.service
└─785 /sbin/iprupdate --daemon
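To inspect only the control group of a single service (here httpd, assuming it is running), pass its cgroup path:

>> systemd-cgls /system.slice/httpd.service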
To get the list of control group ordered by CPU, memory and disk I/O load, type:

>> systemd-cgtop
Path Tasks %CPU Memory Input/s Output/s
/ 213 3.9 829.7M - -
/system.slice 1 - - - -
/system.slice/ModemManager.service 1 - - - -
To kill all the processes associated with an apache server (CGI scripts included), type:

>> systemctl kill httpd
To put resource limits on a service (here 500 CPUShares), type:

>> systemctl set-property httpd.service CPUShares=500
Note1: The change is written into the service unit file. Use the --runtime option to avoid this behavior.
Note2: By default, each service owns 1024 CPUShares. Nothing prevents you from giving a value smaller or bigger.

To get the current CPUShares service value, type:

>> systemctl show -p CPUShares httpd.service
On this topic, you can additionally watch Georgios’ Magklaras demo (24min).

Sources: New control group interface, Systemd 205 announcement.

Service management

Systemd deals with all the aspects of the service management. The systemctl command replaces the chkconfig and the service commands. The old commands are now a link to the systemctl command.

To activate the NTP service at boot, type:

>> systemctl enable ntpd
Note1: You can specify ntpd.service explicitly, but the .service suffix is added by default.
Note2: If you specify a path, the .mount suffix will be added.
Note3: If you mention a device, the .device suffix will be added.

To deactivate it, start it, stop it, restart it, reload it, type:

>> systemctl disable ntpd
>> systemctl start ntpd
>> systemctl stop ntpd
>> systemctl restart ntpd
>> systemctl reload ntpd
Note: It is also possible to mask and unmask a service. Masking a service prevents it from being started manually or by another service.
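For example, to mask the NTP service and later unmask it, type:

>> systemctl mask ntpd
>> systemctl unmask ntpd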

To know if the NTP service is activated at boot, type:

>> systemctl is-enabled ntpd
enabled
To know if the NTP service is running, type:

>> systemctl is-active ntpd
inactive
To get the status of the NTP service, type:

>> systemctl status ntpd
ntpd.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)
If you change a service's unit file, you will need to reload the systemd configuration:

>> systemctl daemon-reload
To get the list of all the units (services, mount points, devices) with their status and description, type:

>> systemctl
To get a more readable list, type:

>> systemctl list-unit-files
To get the list of services that failed at boot, type:

>> systemctl --failed
To get the status of a process (here httpd) on a remote server (here test.example.com), type:

>> systemctl -H root@test.example.com status httpd.service
Run levels

Systemd also deals with run levels. As everything is represented by files in Systemd, target files replace run levels.

To move to single user mode, type:

>> systemctl rescue
To move to the level 3 (equivalent to the previous level 3), type:

>> systemctl isolate runlevel3.target
Or:

>> systemctl isolate multi-user.target
To move to the graphical level (equivalent to the previous level 5), type:

>> systemctl isolate graphical.target
To set the default run level to non-graphical mode, type:

>> systemctl set-default multi-user.target
To set the default run level to graphical mode, type:

>> systemctl set-default graphical.target
To get the current default run level, type:

>> systemctl get-default
graphical.target
To stop a server, type:

>> systemctl poweroff
Note: You can still use the poweroff command, a link to the systemctl command has been created (the same thing is true for the halt and reboot commands).

To reboot a server, suspend it or put it into hibernation, type:

>> systemctl reboot
>> systemctl suspend
>> systemctl hibernate
Linux standardization

Systemd's authors have decided to help Linux standardization among distributions. Through Systemd, the location of some configuration files has changed.

Miscellaneous

To get the server hostnames, type:

>> hostnamectl
Static hostname: test.example.com
Icon name: computer-laptop
Chassis: laptop
Machine ID: asdasdasdasdsadas9aa37e54a422938d
Boot ID: adasdasdasdasdac4a82fef4ac26d0
Operating System: Centos
CPE OS Name: cpe:/o:rCentos
Kernel: Linux 3.10.0-54.0.1.el7.x86_64
Architecture: x86_64
Note: There are three kinds of hostnames: static, pretty, and transient.
“The static host name is the traditional hostname, which can be chosen by the user, and is stored in the /etc/hostname file. The “transient” hostname is a dynamic host name maintained by the kernel. It is initialized to the static host name by default, whose value defaults to “localhost”. It can be changed by DHCP or mDNS at runtime. The pretty hostname is a free-form UTF8 host name for presentation to the user.” Source: Centos 7 Networking Guide.

To assign the test hostname permanently to the server, type:

>> hostnamectl set-hostname test
Note: With this syntax all three hostnames (static, pretty, and transient) take the test value at the same time. However, it is possible to set the three hostnames separately by using the --pretty, --static, and --transient options.
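For example, to set only the pretty and the static hostnames, type:

>> hostnamectl set-hostname "Test Laptop" --pretty
>> hostnamectl set-hostname test --static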

To get the current locale, virtual console keymap and X11 layout, type:

>> localectl
System Locale: LANG=en_US.UTF-8
VC Keymap: en_US
X11 Layout: en_US
To assign the en_GB.utf8 value to the locale, type:

>> localectl set-locale LANG=en_GB.utf8
To assign the en_GB value to the virtual console keymap, type:

>> localectl set-keymap en_GB
To assign the en_GB value to the X11 layout, type:

>> localectl set-x11-keymap en_GB
To get the current date and time, type:

>> timedatectl
Local time: Fri 2014-01-24 22:34:05 CET
Universal time: Fri 2014-01-24 21:34:05 UTC
RTC time: Fri 2014-01-24 21:34:05
Timezone: Europe/Madrid (CET, +0100)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: no
Last DST change: DST ended at
Sun 2013-10-27 02:59:59 CEST
Sun 2013-10-27 02:00:00 CET
Next DST change: DST begins (the clock jumps one hour forward) at
Sun 2014-03-30 01:59:59 CET
Sun 2014-03-30 03:00:00 CEST
To set the current date, type:

>> timedatectl set-time YYYY-MM-DD
To set the current time, type:

>> timedatectl set-time HH:MM:SS
To get the list of time zones, type:

>> timedatectl list-timezones
To change the time zone to America/New_York, type:

>> timedatectl set-timezone America/New_York
To get the users’ list, type:

>> loginctl list-users
UID USER
42 gdm
1000 tom
0 root
To get the list of all current user sessions, type:

>> loginctl list-sessions
SESSION UID USER SEAT
1 1000 tom seat0

1 sessions listed.
To get the properties of the user tom, type:

>> loginctl show-user tom
UID=1000
GID=1000
Name=tom
Timestamp=Fri 2014-01-24 21:53:43 CET
TimestampMonotonic=160754102
RuntimePath=/run/user/1000
Slice=user-1000.slice
Display=1
State=active
Sessions=1
IdleHint=no
IdleSinceHint=0
IdleSinceHintMonotonic=0

Sources: Archlinux wiki, Freedesktop wiki, Gentoo wiki, RHEL 7 System Administration Guide, Fedora wiki.

Wednesday, November 19, 2014

Docker + Juno giving MissingSectionHeaderError while creating a Docker instance

I was able to configure Docker with Juno by following the instructions in http://www.adminz.in/2014/11/integrating-docker-into-juno-nova.html

First I got a timeout error with the Docker service and the nova service was not starting up, so I edited connectionpool.py as described in the following URL: http://www.adminz.in/2014/11/docker-n...

After that the service was running fine, but while launching an instance I got the following error.

2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]   File "/usr/lib/python2.7/site-packages/novadocker/virt/docker/driver.py", line 404, in spawn
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]     self._start_container(container_id, instance, network_info)
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]   File "/usr/lib/python2.7/site-packages/novadocker/virt/docker/driver.py", line 376, in _start_container
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]     instance_id=instance['name'])
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] InstanceDeployFailure: Cannot setup network: Unexpected error while running command.
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Command: sudo nova-rootwrap /etc/nova/rootwrap.conf ip link add name tapb97f8d6e-a6 type veth peer name nsb97f8d6e-a6
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Exit code: 1
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Stdout: u''
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Stderr: u'Traceback (most recent call last):\n  File "/usr/bin/nova-rootwrap", line 10, in <module>\n    sys.exit(main())\n  File "/usr/lib/python2.7/site-packages/oslo/rootwrap/cmd.py", line 91, in main\n    filters = wrapper.load_filters(config.filters_path)\n  File "/usr/lib/python2.7/site-packages/oslo/rootwrap/wrapper.py", line 120, in load_filters\n    filterconfig.read(os.path.join(filterdir, filterfile))\n  File "/usr/lib64/python2.7/ConfigParser.py", line 305, in read\n    self._read(fp, filename)\n  File "/usr/lib64/python2.7/ConfigParser.py", line 512, in _read\n    raise MissingSectionHeaderError(fpname, lineno, line)\nConfigParser.MissingSectionHeaderError: File contains no section headers.\nfile: /etc/nova/rootwrap.d/docker.filters, line: 1\n\' [Filters]\\n\'\n'
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]

FIX
The issue was caused by a blank space before the [Filters] entry in the docker.filters file in the rootwrap.d directory on the Docker server. Once the leading space was removed, the Docker instance launched correctly.
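A quick way to spot and strip the leading whitespace (cat -A makes invisible characters visible; the sed expression removes any leading spaces or tabs before [Filters]):

cat -A /etc/nova/rootwrap.d/docker.filters | head -5
sed -i 's/^[[:space:]]*\[Filters\]/[Filters]/' /etc/nova/rootwrap.d/docker.filters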

[root@docker ~]# docker ps
CONTAINER ID        IMAGE                    COMMAND             CREATED             STATUS              PORTS               NAMES
d37ea1ce08b9        tutum/wordpress:latest   "/run.sh"           16 seconds ago      Up 15 seconds                           nova-73a4f67a-b6d0-4251-a292-d28c5137e6d4
[root@docker ~]#

Tuesday, November 18, 2014

Integrating Docker into Juno Nova Service as a Hypervisor


Installing Python Modules Needed for Docker
===========================================
yum install -y python-six
yum install -y python-pbr
yum install -y python-babel
yum install -y python-openbabel
yum install -y python-oslo-*
yum install -y python-docker-py

Installing Latest Version of Docker
==================================
yum install wget
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-1.2.0-4.el7.centos.x86_64.rpm
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-devel-1.2.0-4.el7.centos.x86_64.rpm
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-pkg-devel-1.2.0-4.el7.centos.x86_64.rpm
yum install docker-*

Starting the Docker Service
===========================
systemctl start docker
systemctl status docker
systemctl enable docker


Installing and configuring Nova-Docker Driver
=============================================
yum install -y python-pip git
pip install -e git+https://github.com/stackforge/nova-docker#egg=novadocker
cd src/novadocker/
python setup.py install


Install and configure the Neutron service on the Docker server
======================================================
http://www.adminz.in/2014/10/openstack-juno-part-6-neutron.html

Install and configure the Nova service to use Docker
======================================================
Installing Packages
yum install openstack-nova-compute -y ; usermod -G docker nova


openstack-config --set /etc/nova/nova.conf DEFAULT compute_driver novadocker.virt.docker.DockerDriver


openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host controller
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_password guest

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/nova/nova.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password mar4nova

# On Controller1: use the public IP of the controller server (hostnames don't work here). Configure the my_ip option to use the management interface IP address of the controller node.
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.1.15.144
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.1.15.144
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://10.1.15.140:6080/vnc_auto.html

openstack-config --set /etc/nova/nova.conf glance host controller

[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver

systemctl enable openstack-nova-compute.service
systemctl start openstack-nova-compute.service

Configure Glance to Include Docker Images
==========================================
On the controller server (in /etc/glance/glance-api.conf):
# Supported values for the 'container_format' image attribute
container_formats=ami,ari,aki,bare,ovf,ova,docker

systemctl restart openstack-glance-api

Creating Custom Rootwrap Filters on the Docker Server
=================================
mkdir /etc/nova/rootwrap.d/
cat << EOF >> /etc/nova/rootwrap.d/docker.filters
# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root
EOF

chgrp nova /etc/nova/rootwrap.d -R
chmod 640 /etc/nova/rootwrap.d -R

systemctl restart openstack-nova-compute

If you face a timeout issue with Nova, try the fix at the following URL:

http://www.adminz.in/2014/11/docker-nova-time-out-error.html

On the Docker server, add a Docker image:
docker pull tutum/wordpress
docker save tutum/wordpress | glance image-create --is-public=True --container-format=docker --disk-format=raw --name tutum/wordpress
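Once the image is in Glance, you can boot a container through Nova as usual; the flavor and network ID below are only examples from my setup, adjust them to your environment:

nova boot --image tutum/wordpress --flavor m1.small --nic net-id=<your-net-id> wordpress01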