
Sunday, November 30, 2014

GFS Storage Cluster in CentOS 7

Clustering the storage LUNs: sharing an iSCSI LUN with multiple servers.

Install Packages
yum -y install pcs fence-agents-all iscsi-initiator-utils

Configure the hacluster user
Set a password for the hacluster user, making sure the same password is used on both servers.
On both servers:

[root@controller ~]# passwd hacluster

Make sure the host entries are correct.
vi /etc/hosts
10.1.15.32 controller
10.1.15.36 controller2

Start the services and enable them so they start on boot

systemctl start pcsd.service
systemctl enable pcsd.service
systemctl start pacemaker
systemctl enable pacemaker

Authenticate the nodes
[root@controller ~]#  pcs cluster auth controller controller2
<password of hacluster>

Enabling the cluster for the next boot (on both servers)

[root@controller ~]#  pcs cluster enable --all
[root@controller ~]#  pcs cluster status

Creating the Cluster with Controller Nodes
[root@controller ~]# pcs cluster setup --start --name storage-cluster controller controller2
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop  pacemaker.service
Redirecting to /bin/systemctl stop  corosync.service
Killing any remaining services...
Removing all cluster configuration files...
controller: Succeeded
controller: Starting Cluster...
controller2: Succeeded
controller2: Starting Cluster...
[root@controller ~]#

 Add a STONITH device – i.e. a fencing device

>>pcs stonith create iscsi-stonith-device fence_scsi devices=/dev/mapper/LUN1 meta provides=unfencing
>>pcs stonith show iscsi-stonith-device
 Resource: iscsi-stonith-device (class=stonith type=fence_scsi)
  Attributes: devices=/dev/mapper/LUN1
  Meta Attrs: provides=unfencing
  Operations: monitor interval=60s (iscsi-stonith-device-monitor-interval-60s)

 Create clone resources for DLM and CLVMD
This enables the services to run on both nodes. Run the pcs commands from a single node only.

>>pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
>>pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true

Create an ordering and a colocation constraint
to make sure that DLM starts before CLVMD and that both resources start on the same node:

>>pcs constraint order start dlm-clone then clvmd-clone
>>pcs constraint colocation add clvmd-clone with dlm-clone

Set the no-quorum-policy of the cluster
Set it to ignore so that when quorum is lost the surviving node continues running its resources (note that GFS2 itself requires quorum to operate).

pcs property set no-quorum-policy=ignore


Create the GFS2 filesystem
The -t option should be specified as <clustername>:<fsname>, and the right number of journals should be specified (here 2 as we have two nodes accessing the filesystem):

 mkfs.gfs2 -p lock_dlm -t storage-cluster:glance -j 2 /dev/mapper/LUN0

Mounting the GFS2 file system using a pcs resource

Here we don't use fstab; instead we use a pcs resource to mount the LUN.

 pcs resource create gfs2_res Filesystem device="/dev/mapper/LUN0" directory="/var/lib/glance" fstype="gfs2" options="noatime,nodiratime" op monitor interval=10s on-fail=fence clone interleave=true
 
Create an ordering constraint so that the filesystem resource is started after the CLVMD resource, and a colocation constraint so that both start on the same node:

pcs constraint order start clvmd-clone then gfs2_res-clone

pcs constraint colocation add gfs2_res-clone with clvmd-clone

pcs constraint show
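
Once the clone resources are started on both nodes, the mount can be verified with standard tools (the output will vary with your environment):

pcs status
mount | grep gfs2
df -h /var/lib/glance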


The following script stops the cluster and iSCSI services cleanly at shutdown (see the post below on running a script before shutdown):

[root@controller ~]# cat /usr/lib/systemd/system-shutdown/turnoff.service
systemctl stop pacemaker
systemctl stop pcsd
/usr/sbin/iscsiadm -m node -u
systemctl stop multipathd
systemctl stop iscsi
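
Executables in /usr/lib/systemd/system-shutdown/ are only run if they have the execute bit set, so make the script executable:

chmod +x /usr/lib/systemd/system-shutdown/turnoff.service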

Saturday, November 29, 2014

Configuring Multipath in CentOS 7 for iSCSI Storage LUNs

Install Packages

yum -y install iscsi-initiator-utils
yum install device-mapper-multipath -y

Starting and Enabling the Services

systemctl start iscsi
systemctl start iscsid
systemctl start multipathd

systemctl enable iscsi
systemctl enable iscsid
systemctl enable multipathd

Discovering the iSCSI Targets
iscsiadm -m discovery -t sendtargets -p 10.1.1.100
iscsiadm -m discovery -t sendtargets -p 10.1.0.100


Login to all the targets
iscsiadm -m node -l
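
To confirm the logins succeeded, list the active sessions and the block devices they expose (device names will differ):

iscsiadm -m session
lsblk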

Configure basic multipath on both servers (controller/controller2)

mpathconf --enable --with_multipathd y

cat /etc/multipath.conf

defaults {
 polling_interval        10
 path_selector           "round-robin 0"
 path_grouping_policy    multibus
 path_checker            readsector0
 rr_min_io               100
 max_fds                 8192
 rr_weight               priorities
 failback                immediate
 no_path_retry           fail
 user_friendly_names     yes
}



[root@controller ~]# multipath -ll
mpathb (36a4badb00053ae7f00001c1c54767520) dm-3 DELL    ,MD3000i
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 13:0:0:1 sdi 8:128 active ready running
| `- 14:0:0:1 sdh 8:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 11:0:0:1 sdg 8:96  active ghost running
  `- 12:0:0:1 sdf 8:80  active ghost running
mpatha (36a4badb00053ae7f0000181654753fe5) dm-4 DELL    ,MD3000i
size=250G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 13:0:0:0 sdd 8:48  active ready running
| `- 14:0:0:0 sde 8:64  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 12:0:0:0 sdb 8:16  active ghost running
  `- 11:0:0:0 sdc 8:32  active ghost running
[root@controller ~]#

Adding the target partitions to multipath

Add multipath aliases for the iSCSI LUNs in /etc/multipath.conf:

multipaths {
        multipath {
                wwid                    36a4badb00053ae7f0000181654753fe5
                alias                   LUN0
        }
        multipath {
                wwid                    36a4badb00053ae7f00001c1c54767520
                alias                   LUN1
        }
}

[root@controller ~]# systemctl restart multipathd

[root@controller ~]# multipath -ll
LUN1 (36a4badb00053ae7f00001c1c54767520) dm-3 DELL    ,MD3000i
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 13:0:0:1 sdi 8:128 active ready running
| `- 14:0:0:1 sdh 8:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 11:0:0:1 sdg 8:96  active ghost running
  `- 12:0:0:1 sdf 8:80  active ghost running
LUN0 (36a4badb00053ae7f0000181654753fe5) dm-4 DELL    ,MD3000i
size=250G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 13:0:0:0 sdd 8:48  active ready running
| `- 14:0:0:0 sde 8:64  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 12:0:0:0 sdb 8:16  active ghost running
  `- 11:0:0:0 sdc 8:32  active ghost running
[root@controller ~]#


 [root@controller ~]# systemctl status multipathd
multipathd.service - Device-Mapper Multipath Device Controller
   Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled)
   Active: active (running) since Wed 2014-11-26 06:31:41 EST; 5s ago
  Process: 2920 ExecStart=/sbin/multipathd (code=exited, status=0/SUCCESS)
  Process: 2915 ExecStartPre=/sbin/multipath -A (code=exited, status=0/SUCCESS)
  Process: 2913 ExecStartPre=/sbin/modprobe dm-multipath (code=exited, status=0/SUCCESS)
 Main PID: 2922 (multipathd)
   CGroup: /system.slice/multipathd.service
           └─2922 /sbin/multipathd

Nov 26 06:31:41 controller systemd[1]: PID file /var/run/multipathd/multipathd.pid not readable (yet?)...tart.
Nov 26 06:31:41 controller multipathd[2922]: LUN0: load table [0 524288000 multipath 3 pg_init_retries ...4 1]
Nov 26 06:31:41 controller multipathd[2922]: LUN0: event checker started
Nov 26 06:31:41 controller systemd[1]: Started Device-Mapper Multipath Device Controller.
Nov 26 06:31:41 controller multipathd[2922]: path checkers start up



Thursday, November 27, 2014

Run a Script Before Shutdown in CentOS 7

Immediately before executing the actual system halt/poweroff/reboot/kexec, systemd-shutdown will run all executables in /usr/lib/systemd/system-shutdown/ and pass one argument to them: either "halt", "poweroff", "reboot" or "kexec", depending on the chosen action. All executables in this directory are executed in parallel, and execution of the action does not continue until all executables have finished.

Note that systemd-halt.service (and the related units) should never be executed directly. Instead, trigger system shutdown with a command such as "systemctl halt" or suchlike.
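
As a minimal sketch (the script name debug.sh and the log target are just examples), a script placed in that directory could record which action triggered the shutdown:

cat << 'EOF' > /usr/lib/systemd/system-shutdown/debug.sh
#!/bin/sh
# $1 is "halt", "poweroff", "reboot" or "kexec"
echo "system-shutdown action: $1" > /dev/kmsg
EOF
chmod +x /usr/lib/systemd/system-shutdown/debug.sh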

Tuesday, November 25, 2014

NIC Bonding in CentOS 7

Most of the settings are the same as in the older version, described at the following URL:

http://www.adminz.in/2014/07/nic-bonding-in-linux.html

We just need to add the following entries to the master bond0 config file so that the network scripts understand that bond0 is the master.

In bond0's config file.

TYPE=Bond
BONDING_MASTER=yes

Sample Master File


DEVICE=bond0
NAME=bond0
TYPE=Bond
BONDING_MASTER=yes
IPADDR=192.168.1.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="bonding parameters separated by spaces"

Sample Slave File

DEVICE=ethN
NAME=bond0-slave
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
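
For example (the bonding mode is an assumption; use whatever mode your switch setup supports), a typical active-backup configuration would use:

BONDING_OPTS="mode=active-backup miimon=100"

After editing the files, restart the network service:

systemctl restart network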

Thursday, November 20, 2014

Systemd - Systemctl in RHEL 7/CentOS 7


Systemd is a system and service manager for Linux, compatible with SysV and LSB init scripts. systemd provides aggressive parallelization capabilities, uses socket and D-Bus activation for starting services, offers on-demand starting of daemons, keeps track of processes using Linux cgroups, supports snapshotting and restoring of the system state, maintains mount and automount points and implements an elaborate transactional dependency-based service control logic. It can work as a drop-in replacement for sysvinit.

Boot process

Systemd's primary task is to manage the boot process and provide information about it.

To get the boot process duration, type:

>> systemd-analyze
Startup finished in 422ms (kernel) + 2.722s (initrd) + 9.674s (userspace) = 12.820s
To get the time spent by each task during the boot process, type:

>> systemd-analyze blame
7.029s network.service
2.241s plymouth-start.service
1.293s kdump.service
1.156s plymouth-quit-wait.service
1.048s firewalld.service
632ms postfix.service
621ms tuned.service
460ms iprupdate.service
446ms iprinit.service
344ms accounts-daemon.service
...
7ms systemd-update-utmp-runlevel.service
5ms systemd-random-seed.service
5ms sys-kernel-config.mount
To get the list of the dependencies, type:

>> systemctl list-dependencies
default.target
├─abrt-ccpp.service
├─abrt-oops.service
...
├─tuned.service
├─basic.target
│ ├─firewalld.service
│ ├─microcode.service
...
├─getty.target
│ ├─getty@tty1.service
│ └─serial-getty@ttyS0.service
└─remote-fs.target
Note: You will find additional information on this point in Lennart Poettering's blog.

Journal analysis

In addition, Systemd handles the system event log, so a syslog daemon is no longer mandatory.
To get the content of the Systemd journal, type:

>> journalctl
To get all the events related to the crond process in the journal, type:

>> journalctl /sbin/crond
Note: You can replace /sbin/crond by `which crond`.

To get all the events since the last boot, type:

>> journalctl -b
To get all the events that appeared today in the journal, type:

>> journalctl --since=today
To get all the events with a syslog priority of err, type:

>> journalctl -p err
To get the 10 last events and wait for any new one (like “tail -f /var/log/messages“), type:

>> journalctl -f
Note: You will find additional information on this point in Lennart Poettering's blog or his video (44 min: the first ten minutes are very interesting concerning security issues).

Control groups

Systemd organizes tasks in control groups. For example, all the processes started by an apache webserver will be in the same control group, CGI scripts included.

To get the full hierarchy of control groups, type:

>> systemd-cgls
├─user.slice
│ └─user-1000.slice
│ └─session-1.scope
│ ├─2889 gdm-session-worker [pam/gdm-password]
│ ├─2899 /usr/bin/gnome-keyring-daemon --daemonize --login
│ ├─2901 gnome-session --session gnome-classic
. .
└─iprupdate.service
└─785 /sbin/iprupdate --daemon
To get the list of control group ordered by CPU, memory and disk I/O load, type:

>> systemd-cgtop
Path Tasks %CPU Memory Input/s Output/s
/ 213 3.9 829.7M - -
/system.slice 1 - - - -
/system.slice/ModemManager.service 1 - - - -
To kill all the processes associated with an apache server (CGI scripts included), type:

>> systemctl kill httpd
To put resource limits on a service (here 500 CPUShares), type:

>> systemctl set-property httpd.service CPUShares=500
Note1: The change is written into the service unit file. Use the --runtime option to avoid this behavior.
Note2: By default, each service owns 1024 CPUShares. Nothing prevents you from giving a value smaller or bigger.

To get the current CPUShares service value, type:

>> systemctl show -p CPUShares httpd.service
On this topic, you can additionally watch Georgios Magklaras' demo (24 min).

Sources: New control group interface, Systemd 205 announcement.

Service management

Systemd deals with all the aspects of the service management. The systemctl command replaces the chkconfig and the service commands. The old commands are now a link to the systemctl command.

To activate the NTP service at boot, type:

>> systemctl enable ntpd
Note1: You should specify ntpd.service but by default the .service suffix will be added.
Note2: If you specify a path, the .mount suffix will be added.
Note3: If you mention a device, the .device suffix will be added.

To deactivate it, start it, stop it, restart it, reload it, type:

>> systemctl disable ntpd
>> systemctl start ntpd
>> systemctl stop ntpd
>> systemctl restart ntpd
>> systemctl reload ntpd
Note: It is also possible to mask and unmask a service. Masking a service prevents it from being started manually or by another service.
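
To mask it and unmask it, type:

>> systemctl mask ntpd
>> systemctl unmask ntpd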

To know if the NTP service is activated at boot, type:

>> systemctl is-enabled ntpd
enabled
To know if the NTP service is running, type:

>> systemctl is-active ntpd
inactive
To get the status of the NTP service, type:

>> systemctl status ntpd
ntpd.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)
If you change a service configuration, you will need to reload it:

>> systemctl daemon-reload
To get the list of all the units (services, mount points, devices) with their status and description, type:

>> systemctl
To get a more readable list, type:

>> systemctl list-unit-files
To get the list of services that failed at boot, type:

>> systemctl --failed
To get the status of a process (here httpd) on a remote server (here test.example.com), type:

>> systemctl -H root@test.example.com status httpd.service
Run levels

Systemd also deals with run levels. As everything is represented by files in Systemd, target files replace run levels.

To move to single user mode, type:

>> systemctl rescue
To move to level 3 (equivalent to the old runlevel 3), type:

>> systemctl isolate runlevel3.target
Or:

>> systemctl isolate multi-user.target
To move to the graphical level (equivalent to the old runlevel 5), type:

>> systemctl isolate graphical.target
To set the default run level to non-graphical mode, type:

>> systemctl set-default multi-user.target
To set the default run level to graphical mode, type:

>> systemctl set-default graphical.target
To get the current default run level, type:

>> systemctl get-default
graphical.target
To stop a server, type:

>> systemctl poweroff
Note: You can still use the poweroff command, as it is now a link to the systemctl command (the same is true for the halt and reboot commands).

To reboot a server, suspend it or put it into hibernation, type:

>> systemctl reboot
>> systemctl suspend
>> systemctl hibernate
Linux standardization

Systemd's authors have decided to help Linux standardization among distributions. With Systemd, the location of some configuration files changes.

Miscellaneous

To get the server hostnames, type:

>> hostnamectl
Static hostname: test.example.com
Icon name: computer-laptop
Chassis: laptop
Machine ID: asdasdasdasdsadas9aa37e54a422938d
Boot ID: adasdasdasdasdac4a82fef4ac26d0
Operating System: Centos
CPE OS Name: cpe:/o:rCentos
Kernel: Linux 3.10.0-54.0.1.el7.x86_64
Architecture: x86_64
Note: There are three kinds of hostnames: static, pretty, and transient.
“The static host name is the traditional hostname, which can be chosen by the user, and is stored in the /etc/hostname file. The “transient” hostname is a dynamic host name maintained by the kernel. It is initialized to the static host name by default, whose value defaults to “localhost”. It can be changed by DHCP or mDNS at runtime. The pretty hostname is a free-form UTF8 host name for presentation to the user.” Source: Centos 7 Networking Guide.

To assign the test hostname permanently to the server, type:

>> hostnamectl set-hostname test
Note: With this syntax all three hostnames (static, pretty, and transient) take the test value at the same time. However, it is possible to set the three hostnames separately by using the --pretty, --static, and --transient options.
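
For instance, to change only the pretty hostname (the name used here is just an example), type:

>> hostnamectl --pretty set-hostname "Lab Server"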

To get the current locale, virtual console keymap and X11 layout, type:

>> localectl
System Locale: LANG=en_US.UTF-8
VC Keymap: en_US
X11 Layout: en_US
To assign the en_GB.utf8 value to the locale, type:

>> localectl set-locale LANG=en_GB.utf8
To assign the en_GB value to the virtual console keymap, type:

>> localectl set-keymap en_GB
To assign the en_GB value to the X11 layout, type:

>> localectl set-x11-keymap en_GB
To get the current date and time, type:

>> timedatectl
Local time: Fri 2014-01-24 22:34:05 CET
Universal time: Fri 2014-01-24 21:34:05 UTC
RTC time: Fri 2014-01-24 21:34:05
Timezone: Europe/Madrid (CET, +0100)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: no
Last DST change: DST ended at
Sun 2013-10-27 02:59:59 CEST
Sun 2013-10-27 02:00:00 CET
Next DST change: DST begins (the clock jumps one hour forward) at
Sun 2014-03-30 01:59:59 CET
Sun 2014-03-30 03:00:00 CEST
To set the current date, type:

>> timedatectl set-time YYYY-MM-DD
To set the current time, type:

>> timedatectl set-time HH:MM:SS
To get the list of time zones, type:

>> timedatectl list-timezones
To change the time zone to America/New_York, type:

>> timedatectl set-timezone America/New_York
To get the users’ list, type:

>> loginctl list-users
UID USER
42 gdm
1000 tom
0 root
To get the list of all current user sessions, type:

>> loginctl list-sessions
SESSION UID USER SEAT
1 1000 tom seat0

1 sessions listed.
To get the properties of the user tom, type:

>> loginctl show-user tom
UID=1000
GID=1000
Name=tom
Timestamp=Fri 2014-01-24 21:53:43 CET
TimestampMonotonic=160754102
RuntimePath=/run/user/1000
Slice=user-1000.slice
Display=1
State=active
Sessions=1
IdleHint=no
IdleSinceHint=0
IdleSinceHintMonotonic=0

Sources: Archlinux wiki, Freedesktop wiki, Gentoo wiki, RHEL 7 System Administration Guide, Fedora wiki.

Wednesday, November 19, 2014

Docker+Juno Giving MissingSectionHeaderError while creating docker instance

I was able to configure Docker with Juno by following the instructions at http://www.adminz.in/2014/11/integrating-docker-into-juno-nova.html

First I got a time-out error with the Docker service and the nova service was not starting up, so I edited connectionpool.py as described in the following URL: http://www.adminz.in/2014/11/docker-n...

After that the service was running fine, but while launching an instance I got the following error.

2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]   File "/usr/lib/python2.7/site-packages/novadocker/virt/docker/driver.py", line 404, in spawn
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]     self._start_container(container_id, instance, network_info)
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]   File "/usr/lib/python2.7/site-packages/novadocker/virt/docker/driver.py", line 376, in _start_container
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]     instance_id=instance['name'])
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] InstanceDeployFailure: Cannot setup network: Unexpected error while running command.
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Command: sudo nova-rootwrap /etc/nova/rootwrap.conf ip link add name tapb97f8d6e-a6 type veth peer name nsb97f8d6e-a6
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Exit code: 1
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Stdout: u''
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Stderr: u'Traceback (most recent call last):\n  File "/usr/bin/nova-rootwrap", line 10, in <module>\n    sys.exit(main())\n  File "/usr/lib/python2.7/site-packages/oslo/rootwrap/cmd.py", line 91, in main\n    filters = wrapper.load_filters(config.filters_path)\n  File "/usr/lib/python2.7/site-packages/oslo/rootwrap/wrapper.py", line 120, in load_filters\n    filterconfig.read(os.path.join(filterdir, filterfile))\n  File "/usr/lib64/python2.7/ConfigParser.py", line 305, in read\n    self._read(fp, filename)\n  File "/usr/lib64/python2.7/ConfigParser.py", line 512, in _read\n    raise MissingSectionHeaderError(fpname, lineno, line)\nConfigParser.MissingSectionHeaderError: File contains no section headers.\nfile: /etc/nova/rootwrap.d/docker.filters, line: 1\n\' [Filters]\\n\'\n'
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]

FIX
The issue was caused by a blank space before the [Filters] entry in the docker.filters file in the rootwrap.d directory on the Docker server. Once the space was removed, the Docker instance launched correctly.
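
The offending whitespace can be spotted quickly with grep (a sketch; adjust the path if your filter file lives elsewhere):

grep -n '^[[:space:]]' /etc/nova/rootwrap.d/docker.filters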

[root@docker ~]# docker ps
CONTAINER ID        IMAGE                    COMMAND             CREATED             STATUS              PORTS               NAMES
d37ea1ce08b9        tutum/wordpress:latest   "/run.sh"           16 seconds ago      Up 15 seconds                           nova-73a4f67a-b6d0-4251-a292-d28c5137e6d4
[root@docker ~]#

Tuesday, November 18, 2014

Integrating Docker into Juno Nova Service as a Hypervisor


Installing Python Modules Needed for Docker
===========================================
yum install -y python-six
yum install -y python-pbr
yum install -y python-babel
yum install -y python-openbabel
yum install -y python-oslo-*
yum install -y python-docker-py

Installing Latest Version of Docker
==================================
yum install wget
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-1.2.0-4.el7.centos.x86_64.rpm
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-devel-1.2.0-4.el7.centos.x86_64.rpm
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-pkg-devel-1.2.0-4.el7.centos.x86_64.rpm
yum install docker-*

Starting the Docker Service
===========================
systemctl start docker
systemctl status docker
systemctl enable docker


Installing and configuring Nova-Docker Driver
=============================================
yum install -y python-pip git
pip install -e git+https://github.com/stackforge/nova-docker#egg=novadocker
cd src/novadocker/
python setup.py install


Install and configure the Neutron Service on the Docker Server
======================================================
http://www.adminz.in/2014/10/openstack-juno-part-6-neutron.html

Install and configure the Nova Service to use Docker
======================================================
Installing Packages
yum install openstack-nova-compute -y ; usermod -G docker nova


openstack-config --set /etc/nova/nova.conf DEFAULT compute_driver novadocker.virt.docker.DockerDriver


openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host controller
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_password guest

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/nova/nova.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password mar4nova

# On Controller1 # Public IP of the controller server (hostnames don't work). Configure the my_ip option to use the management interface IP address of the controller node
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.1.15.144
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.1.15.144
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://10.1.15.140:6080/vnc_auto.html

openstack-config --set /etc/nova/nova.conf glance host controller

The resulting entry in /etc/nova/nova.conf should be:

[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver

systemctl enable openstack-nova-compute.service
systemctl start openstack-nova-compute.service

Configure Glance to Include Docker Images
==========================================
On the controller server, edit /etc/glance/glance-api.conf:
# Supported values for the 'container_format' image attribute
container_formats=ami,ari,aki,bare,ovf,ova,docker

systemctl restart openstack-glance-api

Creating Custom Rootwrap Filters on the Docker Server
=================================
mkdir /etc/nova/rootwrap.d/
cat << EOF >> /etc/nova/rootwrap.d/docker.filters
# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root
EOF

chgrp nova /etc/nova/rootwrap.d -R
chmod 640 /etc/nova/rootwrap.d -R

systemctl restart openstack-nova-compute

If you face a time-out issue with Nova, try the fix at the following URL:

http://www.adminz.in/2014/11/docker-nova-time-out-error.html

On the Docker server, add a Docker image:
docker pull tutum/wordpress
docker save tutum/wordpress | glance image-create --is-public=True --container-format=docker --disk-format=raw --name tutum/wordpress
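
Once the image is in Glance, a container can be booted like a normal instance (the flavor and network ID below are assumptions; adjust them to your environment):

nova boot --image tutum/wordpress --flavor m1.small --nic net-id=<your-net-id> test-wordpress
nova list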

Monday, November 10, 2014

Docker + Nova Time Out Error

http://paste.openstack.org/show/131728/

Sample Error
==========
    out = f(*args, **kwds)
  File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 468, in get
    return self.request('GET', url, **kwargs)
  File "/usr/lib/python2.7/site-packages/novadocker/virt/docker/client.py", line 36, in wrapper
    out = f(*args, **kwds)
  File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 456, in request
    resp = self.send(prep, **send_kwargs)
 File "/usr/lib/python2.7/site-packages/novadocker/virt/docker/client.py", line 36, in wrapper
    out = f(*args, **kwds)
 File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 559, in send
    r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 327, in send
    timeout=timeout
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 516, in urlopen
    body=body, headers=headers)
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 299, in _make_request
    timeout_obj = self._get_timeout(timeout)
 File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 279, in _get_timeout
    return Timeout.from_float(timeout)
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/timeout.py", line 152, in from_float
    return Timeout(read=timeout, connect=timeout)
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/timeout.py", line 95, in __init__
    self._connect = self._validate_timeout(connect, 'connect')
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/timeout.py", line 125, in _validate_timeout
    "int or float." % (name, value))
ValueError: Timeout value connect was Timeout(connect=10, read=10, total=None), but it must be an int or float.



To fix the problem I had to directly modify "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py":

def _get_timeout(self, timeout):
    """ Helper that always returns a :class:`urllib3.util.Timeout` """
    if timeout is _Default:
        return self.timeout.clone()

    if isinstance(timeout, Timeout):   # Timeout here is not a urllib3 Timeout
        return timeout.clone()
    else:
        # User passed us an int/float. This is for backwards compatibility,
        # can be removed later
        return Timeout.from_float(timeout._connect)   # manually changed to pass _connect

Removing Nova and Neutron Services from MySQL

Sometimes we need to remove services listed in Nova or Neutron because they are duplicated or have been removed from the environment. We can do it in the following way.

Removing a Nova service from the MySQL database

>>nova service-list
>>nova hypervisor-list
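
Before deleting any rows, it is safer to take a dump of the nova database first (a sketch; adjust the credentials):

mysqldump -u root -p nova > /root/nova-backup.sql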

mysql> use nova;
mysql> SELECT id, created_at, updated_at, hypervisor_hostname FROM compute_nodes;

mysql> DELETE FROM compute_node_stats WHERE compute_node_id='1';
mysql> DELETE FROM compute_nodes WHERE hypervisor_hostname='compute1';
mysql> DELETE FROM services WHERE host='compute1';



Removing a Neutron service from the MySQL database

>>neutron agent-list

mysql> use neutron;
mysql> DELETE FROM agents WHERE host='compute1';

Thursday, November 6, 2014

Parse Error Caused by a Blank Space Before the Entries

I noticed that in OpenStack Juno, if there is whitespace at the beginning of lines containing 'key = value' entries, we get a parse error in the logs.

Sample Error. 

Nov 06 13:29:42 controller.novalocal neutron-server[14563]: File "/usr/lib64/python2.7/argparse.py", line 1794...ion
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: action(self, namespace, argument_values, option_string)
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: File "/usr/lib/python2.7/site-packages/oslo/config...l__
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: ConfigParser._parse_file(values, namespace)
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: File "/usr/lib/python2.7/site-packages/oslo/config...ile
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: raise ConfigFileParseError(pe.filename, str(pe))
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: oslo.config.cfg.ConfigFileParseError: Failed to pa...ue'

Nov 06 13:29:42 controller.novalocal systemd[1]: neutron-server.service: main process exited, code=exited, st...LURE

The solution is to find the offending line and remove the leading whitespace.
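
Leading whitespace can be located with a quick grep (the file name here is only an example):

grep -n '^[[:space:]]' /etc/neutron/neutron.conf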

Tuesday, November 4, 2014

Docker with Openstack Giving Error "nova.openstack.common.threadgroup ValueError: Timeout value connect was Timeout"

When I tried to integrate Docker into OpenStack Juno, I was not able to start the nova service on the compute node. I followed https://wiki.openstack.org/wiki/Docker .

When I remove or comment out compute_driver = novadocker.virt.docker.DockerDriver from the nova configuration, the service is able to start, but the PID gets killed soon after.

I got the following error while trying to start the nova service.

Complete Error.
http://paste.openstack.org/show/128805/

Sample Error
2014-11-03 14:14:08.138 5264 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/timeout.py", line 125, in _validate_timeout
2014-11-03 14:14:08.138 5264 TRACE nova.openstack.common.threadgroup     "int or float." % (name, value))
2014-11-03 14:14:08.138 5264 TRACE nova.openstack.common.threadgroup ValueError: Timeout value connect was Timeout(connect=10, read=10, total=None), but it must be an int or float.
2014-11-03 14:14:08.138 5264 TRACE nova.openstack.common.threadgroup


The issue has been fixed: I hadn't installed the Docker driver requirements. Once I installed them and rebooted the server, it is working fine now.

For testing I used wildcards (*) for installation; we just need to install the correct packages listed at https://github.com/stackforge/nova-do...

yum install *pbr*
yum install *six*
yum install *babel*
yum install *oslo*
yum install docker-py

Saturday, November 1, 2014

Squid Proxy Server

Squid is a proxy server and web cache daemon. It has a wide variety of uses, from speeding up a web server by caching repeated requests; to caching web, DNS and other computer network lookups for a group of people sharing network resources; to aiding security by filtering traffic. Although primarily used for HTTP and FTP, Squid includes limited support for several other protocols including TLS, SSL, Internet Gopher and HTTPS.


yum -y install squid
chkconfig squid on

IMPORTANT: First write all the ACLs and then the http_access rules. The order in which the rules are written affects how the proxy works.
#Port to which squid listens
http_port 3128


Allowing the known networks/IPs
============================
Declare all the known networks and allow those networks/IPs:

acl our_networks src 192.168.25.0/24 192.168.2.0/24 10.1.0.1
http_access allow our_networks

In the same way we can deny access using:

http_access deny our_networks


Blocking Sites using proxy.
==========================
acl blocksite1 dstdomain www.yahoo.com .facebook.com
http_access deny blocksite1

Blocking List of Sites.
======================
acl blocksitelist dstdomain "/etc/squid/restricted_sites"
http_access deny blocksitelist
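
The file simply lists one domain per line, for example (these entries are only placeholders):

cat /etc/squid/restricted_sites
.badsite.com
.example-blocked.org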


Blocking Sites with Specific Words using proxy.
==============================================
acl blockwords url_regex gmail
http_access deny blockwords

Blocking List of Words.
======================
acl blockwordlist url_regex "/etc/squid/restricted_words"
http_access deny blockwordlist


Display Custom message For Blocked Site.
========================================
deny_info <Error-Page-Name> <acl-name>

You can get the error page name from /usr/share/squid/errors/templates/; some of the error pages are as follows.
ERR_ACCESS_DENIED            ERR_FTP_FAILURE       ERR_INVALID_URL          ERR_SOCKET_FAILURE
ERR_CACHE_ACCESS_DENIED      ERR_FTP_FORBIDDEN     ERR_LIFETIME_EXP         ERR_TOO_BIG
ERR_CACHE_MGR_ACCESS_DENIED  ERR_FTP_NOT_FOUND     ERR_NEW                  ERR_UNSUP_HTTPVERSION
ERR_CANNOT_FORWARD           ERR_FTP_PUT_CREATED   ERR_NO_RELAY             ERR_UNSUP_REQ
ERR_CONNECT_FAIL             ERR_FTP_PUT_ERROR     ERR_ONLY_IF_CACHED_MISS  ERR_URN_RESOLVE
ERR_DIR_LISTING              ERR_FTP_PUT_MODIFIED  ERR_PRECONDITION_FAILED  ERR_WRITE_ERROR
ERR_DNS_FAIL                 ERR_FTP_UNAVAILABLE   ERR_READ_ERROR           ERR_ZERO_SIZE_OBJECT
ERR_ESI                      ERR_ICAP_FAILURE      ERR_READ_TIMEOUT
ERR_FORWARDING_DENIED        ERR_INVALID_REQ       ERR_SECURE_CONNECT_FAIL
ERR_FTP_DISABLED             ERR_INVALID_RESP      ERR_SHUTTING_DOWN

If we need a custom page, we create it in that directory and reference it in the deny_info directive. This should be placed just above the corresponding http_access rule.
For example, if we make an error page named ERR_NEW, the rules will look like:

acl blockwordlist url_regex "/etc/squid/restricted_words"
deny_info ERR_NEW blockwordlist
http_access deny blockwordlist

For HTTPS we will get a proxy refusal message due to https://bugzilla.mozilla.org/show_bug.cgi?id=493699 .


Blocking and Allowing By Time
=============================
In the second ACL, MTWHFA means Monday through Saturday.
16:00-19:00 is the time range in 24-hour format.

acl myip src 192.168.25.31
acl worktime time MTWHFA 16:00-19:00
http_access allow myip worktime



Setting up maxconn ACL
======================
acl ACCOUNTSDEPT src 192.168.5.0/24
acl limitusercon maxconn 3
http_access deny ACCOUNTSDEPT limitusercon

acl ACCOUNTSDEPT src 192.168.5.0/24 : our accounts department IP range
acl limitusercon maxconn 3 : allow at most 3 simultaneous web connections from the same client IP
http_access deny ACCOUNTSDEPT limitusercon : apply the ACL

Mentioning Allowed Ports
========================
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports



Adding User Authentication to Squid
==================================
Check for the ncsa_auth helper under the squid directory and enter the following line in squid.conf. ncsa_auth can be in either the lib or the lib64 directory, depending on your OS architecture.

#Add Following Line in squid.conf#
auth_param basic program /usr/lib64/squid/ncsa_auth /etc/squid/squid_user

#Creating the User file and adding the user in to the List.#
touch /etc/squid/squid_user
htpasswd /etc/squid/squid_user <username>

#To enable the authentication in the current proxy add the following Line in squid.conf along another acl and http_access rules #

acl class proxy_auth REQUIRED
http_access allow class

And finally deny all other access to this proxy
==============================================
http_access deny all
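
After any change to squid.conf, check the syntax and reload the configuration without a full restart:

squid -k parse
squid -k reconfigure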