
Wednesday, December 24, 2014

Python Error "ImportError: No module named pkg_resources"

I encountered this ImportError today while trying to use pip. Somehow the setuptools package had been removed from my Python environment.

===============
  File "/usr/bin/gunicorn", line 5, in <module>
    from pkg_resources import load_entry_point
ImportError: No module named pkg_resources
===============

Fix: reinstall setuptools to reset the Python environment:

curl https://bootstrap.pypa.io/ez_setup.py | python

Private Docker Registry @ Centos7

Here we will set up a private Docker registry for internal use, secured with SSL and basic authentication.

Install the Packages
>> yum update -y ;
>> yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
>> yum install docker-registry

The configuration file is at /etc/docker-registry.yml.

Edit the dev section so the storage settings point at the location you want to use:

 # This is the default configuration when no flavor is specified
dev:
    storage: local
    storage_path: /home/registry
    loglevel: debug

Create the directory at /home/registry, or point storage_path at whatever location you configured above.
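For example, matching the storage_path above:

>> mkdir -p /home/registry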

Start and enable the Docker Registry

>>systemctl start docker-registry
>>systemctl status docker-registry
docker-registry.service - Registry server for Docker
   Loaded: loaded (/usr/lib/systemd/system/docker-registry.service; enabled)
   Active: active (running) since Mon 2014-12-15 21:20:26 UTC; 5s ago
 Main PID: 19468 (gunicorn)
   CGroup: /system.slice/docker-registry.service
           ├─19468 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19473 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19474 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19475 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19482 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19488 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19489 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19494 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           └─19496 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...

Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19468] [INFO] Listening a...68)
Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19468] [INFO] Using worke...ent
Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19473] [INFO] Booting wor...473
Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19474] [INFO] Booting wor...474
Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19475] [INFO] Booting wor...475
Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19482] [INFO] Booting wor...482
Dec 15 21:20:27 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:27 [19488] [INFO] Booting wor...488
Dec 15 21:20:27 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:27 [19489] [INFO] Booting wor...489
Dec 15 21:20:27 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:27 [19494] [INFO] Booting wor...494
Dec 15 21:20:27 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:27 [19496] [INFO] Booting wor...496
Hint: Some lines were ellipsized, use -l to show in full.


Check whether it's working
>>curl localhost:5000
"docker-registry server (local) (v0.6.8)"
[root@docker-registry ~]#

Now we need to add authentication to the registry and a self-signed SSL certificate.

Creating the Certificates
# Generate private key
>>openssl genrsa -out docker-registry.key 2048

# Generate the CSR **** MAKE SURE WE ENTER THE SERVER NAME (Common Name) CORRECTLY ****
>>openssl req -new -key docker-registry.key -out docker-registry.csr

# Generate the self-signed certificate
>>openssl x509 -req -days 365 -in docker-registry.csr -signkey docker-registry.key -out docker-registry.crt

# Copy the files to the correct locations
>>cp docker-registry.crt /etc/pki/tls/certs
>>cp docker-registry.key /etc/pki/tls/private/docker-registry.key
>>cp docker-registry.csr /etc/pki/tls/private/docker-registry.csr

# Add the certificate to the CA trust store on CentOS 7
>>update-ca-trust enable
>>cp docker-registry.crt /etc/pki/ca-trust/source/anchors/
>>update-ca-trust extract


Install nginx and httpd-tools
>>yum install nginx httpd-tools

Create a User for authentication
>>sudo htpasswd -c /etc/nginx/docker-registry.htpasswd dockeruser
<Password will be prompted>

Configure nginx Virtual Host (/etc/nginx/conf.d/virtualhost.conf).

============
# For versions of Nginx > 1.3.9 that include chunked transfer encoding support
# Replace with appropriate values where necessary

upstream docker-registry {
 server localhost:5000;
}

server {
 listen 8080;
 server_name docker.registry;

 # ssl on;
 # ssl_certificate /etc/pki/tls/certs/docker-registry.crt;
 # ssl_certificate_key /etc/pki/tls/private/docker-registry.key;

 proxy_set_header Host       $http_host;   # required for Docker client sake
 proxy_set_header X-Real-IP  $remote_addr; # pass on real client IP

 client_max_body_size 0; # disable any limits to avoid HTTP 413 for large image uploads

 # required to avoid HTTP 411: see Issue #1486 (https://github.com/dotcloud/docker/issues/1486)
 chunked_transfer_encoding on;

 location / {
     # let Nginx know about our auth file
     auth_basic              "Restricted";
     auth_basic_user_file    docker-registry.htpasswd;

     proxy_pass http://docker-registry;
 }
 location /_ping {
     auth_basic off;
     proxy_pass http://docker-registry;
 }
 location /v1/_ping {
     auth_basic off;
     proxy_pass http://docker-registry;
 }

}
============
Restart the Service
>>systemctl restart nginx

Checking the registry: curl https://<username>:<password>@<hostname>:<port>  ****** The hostname must be the one we put in the certificate ****** (for HTTPS to work, the ssl lines in the nginx virtual host above need to be uncommented)

>> curl https://dockeruser:docker@docker-registry:8080
"docker-registry server (local) (v0.6.8)"
[root@docker-registry ~]#


Configure CoreOs to use the Private Docker Registry

To use the private registry from CoreOS, we need to copy the CA certificate from the registry server to the CoreOS Docker host.

Copy the CA certificate to /etc/ssl/certs/docker-registry.pem (PEM format), then update the certificate list with:
>>sudo update-ca-certificates

Also, because our registry address is docker-registry:8080, we need to copy the CA certificate to /etc/docker/certs.d/docker-registry:8080/ca.crt as well.

Then restart the Docker service.
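A minimal command sequence on the CoreOS host, following the paths mentioned above, might look like this:

sudo mkdir -p /etc/docker/certs.d/docker-registry:8080
sudo cp /etc/ssl/certs/docker-registry.pem /etc/docker/certs.d/docker-registry:8080/ca.crt
sudo systemctl restart docker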

Now try to login to registry

core@coreos ~ $ docker login https://docker-registry:8080
Username :
Password :
Email :
Login Succeeded
core@coreos ~ $


To push an image to the private repo we first need to tag it in the format <registry-hostname>:<port>/<image-name>, as shown below.
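For example, assuming a local image named centos-httpd and the registry host/port used above:

docker tag centos-httpd docker-registry:8080/centos-httpd
docker push docker-registry:8080/centos-httpd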


Import/Export a Docker Images

To move a Docker image from one server to another we can use a private registry, or we can save the image to a tar file, copy the tar over to the new server, and load it there.

Export a Docker image to a file.

docker save image > image.tar

Import a Docker image

docker load -i (archivefile)
This loads a Docker image from an archive in one of the following formats: .tar, .tar.gz, .tar.xz (.lrz is not supported).
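A typical sequence for moving an image between two hosts (the image name and destination host are examples):

# on the source server
docker save centos-httpd > centos-httpd.tar
scp centos-httpd.tar root@newserver:/root/

# on the destination server
docker load -i /root/centos-httpd.tar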

Friday, December 12, 2014

Docker Usage Explained

Docker is a platform for developers and sysadmins to develop, ship, and run applications. Docker lets you quickly assemble applications from components and eliminates the friction that can come when shipping code. Docker lets you get your code tested and deployed into production as fast as possible.

Downloading a Docker image
>>docker pull centos
>>docker pull ubuntu

Running a Docker container
The -t and -i flags allocate a pseudo-tty and keep stdin open even if not attached. This allows you to use the container like a traditional VM for as long as the bash prompt is running. Let's launch an Ubuntu container and install Apache inside it using the bash prompt:
>>docker run -t -i ubuntu /bin/bash

Detaching from a container
Starting with Docker 0.6.5 you can add -t to the docker run command, which attaches a pseudo-TTY; you can then type Control-C to detach from the container without terminating it. If you use -t together with -i, Control-C will terminate the container, so when using -i with -t you have to use Control-P Control-Q to detach without terminating.

List the containers
>>docker ps -a

Enter a running container
>>docker exec -it [container-id] bash

Once inside the container, install the needed packages and configure the services as required, then detach with Control-P Control-Q to keep it running.

Using the public Docker registry
Register with an email address and username at https://registry.hub.docker.com/

Committing the changes into a new image
Commit the changes made in a container into a new image that can be used later:
>>docker commit [container-id] <registered_username>/<name-for-image>

eg:
core@coreos ~ $ docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
5adf005708db        centos:latest       "/bin/bash"         11 minutes ago      Up 11 minutes                           thirsty_ritchie
core@coreos ~ $ docker commit 5adf005708db rahulrajvn/centos-httpd
b8810f9ca8d52a289c963f57824f575341324c353707a5b1f215840c9ea88ebe
core@coreos ~ $

Now the image named rahulrajvn/centos-httpd is present on the local machine; if we need to create more containers from that image on the same server we can use it directly.

Pushing the image to the registered public Docker Hub repo; while pushing we will be asked for the username and password:

core@coreos ~ $ docker push rahulrajvn/centos-httpd
The push refers to a repository [rahulrajvn/centos-httpd] (len: 1)
Sending image list
Please login prior to push:
Username: rahulrajvn
Password:********
Email: ******************
Login Succeeded
The push refers to a repository [rahulrajvn/centos-httpd] (len: 1)
Sending image list
Pushing repository rahulrajvn/centos-httpd (1 tags)
511136ea3c5a: Image already pushed, skipping
5b12ef8fd570: Image already pushed, skipping
34943839435d: Image already pushed, skipping
b8810f9ca8d5: Image successfully pushed
Pushing tag for rev [b8810f9ca8d5] on {https://cdn-registry-1.docker.io/v1/repositories/rahulrajvn/centos-httpd/tags/latest}
core@coreos ~ $

Downloading an image from a public repo
We just need to pull it using the account name and image name. In the example below we use account rahulrajvn and image centos-httpd:

core@coreos2 ~ $ docker pull rahulrajvn/centos-httpd
Pulling repository rahulrajvn/centos-httpd
b8810f9ca8d5: Download complete
511136ea3c5a: Download complete
5b12ef8fd570: Download complete
34943839435d: Download complete
Status: Downloaded newer image for rahulrajvn/centos-httpd:latest
core@coreos2 ~ $

Network access to port 80
The default Apache install will be running on port 80. To give our container access to traffic over port 80, we use the -p flag and specify the port on the host that maps to the port inside the container. In our case we want 80 for each, so we include -p 80:80 in our command:

docker run -d -p 80:80 -it rahulrajvn/centos6 /bin/bash

If we need to forward more ports we can do so by adding more -p options:

docker run -d -p 80:80 -p 2222:22 -it rahulrajvn/centos6 /bin/bash

Listing the images
>>docker images

Removing images
>>docker rmi <Image-ID>

Friday, December 5, 2014

NovaException: Unexpected vif_type=binding_failed In Openstack Juno Migration


Sample Error
=============
ERROR nova.compute.manager [req-] [instance: ******-******-******-*******] Setting instance vm_state to ERROR
TRACE nova.compute.manager [instance: ******-******-******-*******] Traceback (most recent call last):
TRACE nova.compute.manager [instance: ******-******-******-*******]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 5596, in _error_out_instance_on_exception
TRACE nova.compute.manager [instance: ******-******-******-*******]     yield
TRACE nova.compute.manager [instance: ******-******-******-*******]   File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 3459, in resize_instance
TRACE nova.compute.manager [instance: ******-******-******-*******]     block_device_info)
TRACE nova.compute.manager [instance: ******-******-******-*******]   File "/usr/lib/python2.7/site-packages/nova/virt/libvirt/driver.py", line 4980, in migrate_disk_and_power_off
TRACE nova.compute.manager [instance: ******-******-******-*******]     utils.execute('ssh', dest, 'mkdir', '-p', inst_base)
TRACE nova.compute.manager [instance: ******-******-******-*******]   File "/usr/lib/python2.7/site-packages/nova/utils.py", line 165, in execute
TRACE nova.compute.manager [instance: ******-******-******-*******]     return processutils.execute(*cmd, **kwargs)
TRACE nova.compute.manager [instance: ******-******-******-*******]   File "/usr/lib/python2.7/site-packages/nova/openstack/common/processutils.py", line 193, in execute
TRACE nova.compute.manager [instance: ******-******-******-*******]     cmd=' '.join(cmd))
TRACE nova.compute.manager [instance: ******-******-******-*******] ProcessExecutionError: Unexpected error while running command.
TRACE nova.compute.manager [instance: ******-******-******-*******] Command: ssh 10.5.2.20 mkdir -p /var/lib/nova/instances/******-******-******-*******
TRACE nova.compute.manager [instance: ******-******-******-*******] Exit code: 255
TRACE nova.compute.manager [instance: ******-******-******-*******] Stdout: ''
TRACE nova.compute.manager [instance: ******-******-******-*******] Stderr: 'Host key verification failed.\r\n'
TRACE nova.compute.manager [instance: ******-******-******-*******]
ERROR oslo.messaging.rpc.dispatcher [-] Exception during message handling: Unexpected error while running command.
Command: ssh 10.5.2.20 mkdir -p /var/lib/nova/instances/******-******-******-*******
Exit code: 255
Stdout: ''
Stderr: 'Host key verification failed.\r\n'

Things that need to be checked

Configure the nova user
First things first, let's make sure our nova user has an appropriate shell set:

cat /etc/passwd | grep nova
Verify that the last entry is /bin/bash.

If not, let's modify the user and make it so:

usermod -s /bin/bash nova


After doing this the next steps are all run as the nova user.
SSH Configuration
su - nova
We need to generate an SSH key:

ssh-keygen

Next up we need to configure SSH to not do host key verification, unless you want to manually SSH to all compute nodes that exist and accept the key (and continue to do so for each new compute node you add).

cat << EOF > ~/.ssh/config
Host *
    StrictHostKeyChecking no
    UserKnownHostsFile=/dev/null
EOF

Finally, set up passwordless SSH authentication between the nova users on all compute nodes.
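A quick way to do that, run as the nova user on each compute node (compute2 is an example hostname):

ssh-copy-id nova@compute2

or, if ssh-copy-id is not available, append the key manually:

cat ~/.ssh/id_rsa.pub | ssh nova@compute2 'cat >> ~/.ssh/authorized_keys'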

Sunday, November 30, 2014

GFS Storage Cluster in Centos7

Clustering the storage LUNs: sharing an iSCSI LUN with multiple servers.

Install Packages
yum -y install pcs fence-agents-all iscsi-initiator-utils

Configure the hacluster user
Set a password for the hacluster user; make sure we use the same password on both servers.
On both servers:

[root@controller ~]# passwd hacluster

Make sure the host entries are correct.
vi /etc/hosts
10.1.15.32 controller
10.1.15.36 controller2

Start and enable the service for next start

systemctl start pcsd.service
systemctl enable pcsd.service
systemctl start pacemaker
systemctl enable pacemaker

Authenticate the nodes
[root@controller ~]#  pcs cluster auth controller controller2
<password of hacluster>

Enabling the cluster for next boot (on both servers)

[root@controller ~]#  pcs cluster enable --all
[root@controller ~]#  pcs cluster status

Creating the Cluster with Controller Nodes
[root@controller ~]# pcs cluster setup --start --name storage-cluster controller controller2
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop  pacemaker.service
Redirecting to /bin/systemctl stop  corosync.service
Killing any remaining services...
Removing all cluster configuration files...
controller: Succeeded
controller: Starting Cluster...
controller2: Succeeded
controller2: Starting Cluster...
[root@controller ~]#

 Add a STONITH device – i.e. a fencing device

>>pcs stonith create iscsi-stonith-device fence_scsi devices=/dev/mapper/LUN1 meta provides=unfencing
>>pcs stonith show iscsi-stonith-device
 Resource: iscsi-stonith-device (class=stonith type=fence_scsi)
  Attributes: devices=/dev/mapper/LUN1
  Meta Attrs: provides=unfencing
  Operations: monitor interval=60s (iscsi-stonith-device-monitor-interval-60s)

Create clone resources for DLM and CLVMD
This enables the services to run on both nodes. Run pcs commands from a single node only.

>>pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
>>pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true

Create an ordering and a colocation constraint
to make sure that DLM starts before CLVMD, and that both resources start on the same node:

>>pcs constraint order start dlm-clone then clvmd-clone
>>pcs constraint colocation add clvmd-clone with dlm-clone

Set the no-quorum-policy of the cluster
Setting this to ignore means that when quorum is lost the system continues running the remaining resources (note that GFS2 itself requires quorum to operate).

pcs property set no-quorum-policy=ignore


Create the GFS2 filesystem
The -t option should be specified as <clustername>:<fsname>, and the right number of journals should be specified (here 2 as we have two nodes accessing the filesystem):

 mkfs.gfs2 -p lock_dlm -t storage-cluster:glance -j 2 /dev/mapper/LUN0

Mounting the GFS2 file system using a pcs resource

Here we don't use fstab; instead we use a pcs resource to mount the LUN.

 pcs resource create gfs2_res Filesystem device="/dev/mapper/LUN0" directory="/var/lib/glance" fstype="gfs2" options="noatime,nodiratime" op monitor interval=10s on-fail=fence clone interleave=true
 
Create an ordering constraint so that the filesystem resource is started after the CLVMD resource, and a colocation constraint so that both start on the same node:

pcs constraint order start clvmd-clone then gfs2_res-clone

pcs constraint colocation add gfs2_res-clone with clvmd-clone

pcs constraint show


Finally, a shutdown script stops the cluster services and logs out of the iSCSI sessions cleanly before the node goes down (see the post below on running a script before shutdown):

[root@controller ~]# cat /usr/lib/systemd/system-shutdown/turnoff.service
systemctl stop pacemaker
systemctl stop pcsd
/usr/sbin/iscsiadm -m node -u
systemctl stop multipathd
systemctl stop iscsi

Saturday, November 29, 2014

Configuring Multipath in Centos 7 for ISCSI storage LUNS

Install Packages

yum -y install iscsi-initiator-utils
yum install device-mapper-multipath -y

Starting and Enabling the Service 

systemctl start iscsi;
systemctl start iscsid ;
systemctl start multipathd ;

systemctl enable iscsi ;
systemctl enable iscsid ;
systemctl enable multipathd ;

Discovering the iSCSI Targets
iscsiadm -m discovery -t sendtargets -p 10.1.1.100
iscsiadm -m discovery -t sendtargets -p 10.1.0.100


Login to all the targets
iscsiadm -m node -l

Configure basic multipath on both servers (controller/controller2)

mpathconf --enable --with_multipathd y

cat /etc/multipath.conf

defaults {
 polling_interval        10
 path_selector           "round-robin 0"
 path_grouping_policy    multibus
 path_checker            readsector0
 rr_min_io               100
 max_fds                 8192
 rr_weight               priorities
 failback                immediate
 no_path_retry           fail
 user_friendly_names     yes
}



[root@controller ~]# multipath -ll
mpathb (36a4badb00053ae7f00001c1c54767520) dm-3 DELL    ,MD3000i
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 13:0:0:1 sdi 8:128 active ready running
| `- 14:0:0:1 sdh 8:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 11:0:0:1 sdg 8:96  active ghost running
  `- 12:0:0:1 sdf 8:80  active ghost running
mpatha (36a4badb00053ae7f0000181654753fe5) dm-4 DELL    ,MD3000i
size=250G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 13:0:0:0 sdd 8:48  active ready running
| `- 14:0:0:0 sde 8:64  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 12:0:0:0 sdb 8:16  active ghost running
  `- 11:0:0:0 sdc 8:32  active ghost running
[root@controller ~]#

Adding the target LUNs to multipath with aliases

Add multipath aliases for the iSCSI LUNs in /etc/multipath.conf:

multipaths {
        multipath {
                wwid                    36a4badb00053ae7f0000181654753fe5
                alias                   LUN0
        }
        multipath {
                wwid                    36a4badb00053ae7f00001c1c54767520
                alias                   LUN1
        }
}

[root@controller ~]# systemctl restart multipathd

[root@controller ~]# multipath -ll
LUN1 (36a4badb00053ae7f00001c1c54767520) dm-3 DELL    ,MD3000i
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 13:0:0:1 sdi 8:128 active ready running
| `- 14:0:0:1 sdh 8:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 11:0:0:1 sdg 8:96  active ghost running
  `- 12:0:0:1 sdf 8:80  active ghost running
LUN0 (36a4badb00053ae7f0000181654753fe5) dm-4 DELL    ,MD3000i
size=250G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 13:0:0:0 sdd 8:48  active ready running
| `- 14:0:0:0 sde 8:64  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 12:0:0:0 sdb 8:16  active ghost running
  `- 11:0:0:0 sdc 8:32  active ghost running
[root@controller ~]#


 [root@controller ~]# systemctl status multipathd
multipathd.service - Device-Mapper Multipath Device Controller
   Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled)
   Active: active (running) since Wed 2014-11-26 06:31:41 EST; 5s ago
  Process: 2920 ExecStart=/sbin/multipathd (code=exited, status=0/SUCCESS)
  Process: 2915 ExecStartPre=/sbin/multipath -A (code=exited, status=0/SUCCESS)
  Process: 2913 ExecStartPre=/sbin/modprobe dm-multipath (code=exited, status=0/SUCCESS)
 Main PID: 2922 (multipathd)
   CGroup: /system.slice/multipathd.service
           └─2922 /sbin/multipathd

Nov 26 06:31:41 controller systemd[1]: PID file /var/run/multipathd/multipathd.pid not readable (yet?)...tart.
Nov 26 06:31:41 controller multipathd[2922]: LUN0: load table [0 524288000 multipath 3 pg_init_retries ...4 1]
Nov 26 06:31:41 controller multipathd[2922]: LUN0: event checker started
Nov 26 06:31:41 controller systemd[1]: Started Device-Mapper Multipath Device Controller.
Nov 26 06:31:41 controller multipathd[2922]: path checkers start up



Thursday, November 27, 2014

Run a Script Before Shutdown in Centos7

Immediately before executing the actual system halt/poweroff/reboot/kexec, systemd-shutdown will run all executables in /usr/lib/systemd/system-shutdown/ and pass one argument to them: either "halt", "poweroff", "reboot" or "kexec", depending on the chosen action. All executables in this directory are executed in parallel, and execution of the action does not continue until all of them have finished.

Note that systemd-halt.service (and the related units) should never be executed directly. Instead, trigger system shutdown with a command such as "systemctl halt" or suchlike.
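A minimal example script (the filename is arbitrary; the file must be executable):

#!/bin/sh
# /usr/lib/systemd/system-shutdown/99-logaction.sh
# $1 is "halt", "poweroff", "reboot" or "kexec"
echo "system going down for $1 at $(date)" >> /var/log/shutdown-action.log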

Tuesday, November 25, 2014

NIC Bonding in the Centos7

Most of the settings are the same as in the older version, described at the following URL.

http://www.adminz.in/2014/07/nic-bonding-in-linux.html

We just need to add the following entries to the master bond0 config file so the network scripts understand that bond0 is the master.

In bond0's config file.

TYPE=Bond
BONDING_MASTER=yes

Sample Master File


DEVICE=bond0
NAME=bond0
TYPE=Bond
BONDING_MASTER=yes
IPADDR=192.168.1.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="bonding parameters separated by spaces"

Sample Slave File

DEVICE=ethN
NAME=bond0-slave
TYPE=Ethernet
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
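
For reference, a common value for the BONDING_OPTS line (an assumed example; adjust to your switch configuration) is:

BONDING_OPTS="mode=active-backup miimon=100"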

Thursday, November 20, 2014

Systemd - Systemctl In Rhel7/Centos7


Systemd is a system and service manager for Linux, compatible with SysV and LSB init scripts. systemd provides aggressive parallelization capabilities, uses socket and D-Bus activation for starting services, offers on-demand starting of daemons, keeps track of processes using Linux cgroups, supports snapshotting and restoring of the system state, maintains mount and automount points and implements an elaborate transactional dependency-based service control logic. It can work as a drop-in replacement for sysvinit.

Boot process

Systemd's primary task is to manage the boot process, and it provides information about it.

To get the boot process duration, type:

>> systemd-analyze
Startup finished in 422ms (kernel) + 2.722s (initrd) + 9.674s (userspace) = 12.820s
To get the time spent by each task during the boot process, type:

>> systemd-analyze blame
7.029s network.service
2.241s plymouth-start.service
1.293s kdump.service
1.156s plymouth-quit-wait.service
1.048s firewalld.service
632ms postfix.service
621ms tuned.service
460ms iprupdate.service
446ms iprinit.service
344ms accounts-daemon.service
...
7ms systemd-update-utmp-runlevel.service
5ms systemd-random-seed.service
5ms sys-kernel-config.mount
To get the list of the dependencies, type:

>> systemctl list-dependencies
default.target
├─abrt-ccpp.service
├─abrt-oops.service
...
├─tuned.service
├─basic.target
│ ├─firewalld.service
│ ├─microcode.service
...
├─getty.target
│ ├─getty@tty1.service
│ └─serial-getty@ttyS0.service
└─remote-fs.target
Note: You will find additional information on this point in Lennart Poettering's blog.

Journal analysis

In addition, Systemd handles the system event log; a syslog daemon is no longer mandatory.
To get the content of the Systemd journal, type:

>> journalctl
To get all the events related to the crond process in the journal, type:

>> journalctl /sbin/crond
Note: You can replace /sbin/crond by `which crond`.

To get all the events since the last boot, type:

>> journalctl -b
To get all the events that appeared today in the journal, type:

>> journalctl --since=today
To get all the events with a syslog priority of err, type:

>> journalctl -p err
To get the last 10 events and wait for any new ones (like "tail -f /var/log/messages"), type:

>> journalctl -f
Note: You will find additional information on this point in Lennart Poettering's blog or his video (44 min: the first ten minutes are very interesting concerning security issues).

Control groups

Systemd organizes tasks in control groups. For example, all the processes started by an apache webserver will be in the same control group, CGI scripts included.

To get the full hierarchy of control groups, type:

>> systemd-cgls
├─user.slice
│ └─user-1000.slice
│ └─session-1.scope
│ ├─2889 gdm-session-worker [pam/gdm-password]
│ ├─2899 /usr/bin/gnome-keyring-daemon --daemonize --login
│ ├─2901 gnome-session --session gnome-classic
. .
└─iprupdate.service
└─785 /sbin/iprupdate --daemon
To get the list of control group ordered by CPU, memory and disk I/O load, type:

>> systemd-cgtop
Path Tasks %CPU Memory Input/s Output/s
/ 213 3.9 829.7M - -
/system.slice 1 - - - -
/system.slice/ModemManager.service 1 - - - -
To kill all the processes associated with an apache server (CGI scripts included), type:

>> systemctl kill httpd
To put resource limits on a service (here 500 CPUShares), type:

>> systemctl set-property httpd.service CPUShares=500
Note1: The change is written into the service unit file. Use the --runtime option to avoid this behavior.
Note2: By default, each service owns 1024 CPUShares. Nothing prevents you from giving a smaller or bigger value.

To get the current CPUShares service value, type:

>> systemctl show -p CPUShares httpd.service
On this topic, you can additionally watch Georgios’ Magklaras demo (24min).

Sources: New control group interface, Systemd 205 announcement.

Service management

Systemd deals with all the aspects of the service management. The systemctl command replaces the chkconfig and the service commands. The old commands are now a link to the systemctl command.

To activate the NTP service at boot, type:

>> systemctl enable ntpd
Note1: You could specify ntpd.service, but the .service suffix is added by default.
Note2: If you specify a path, the .mount suffix will be added.
Note3: If you mention a device, the .device suffix will be added.

To deactivate it, start it, stop it, restart it, reload it, type:

>> systemctl disable ntpd
>> systemctl start ntpd
>> systemctl stop ntpd
>> systemctl restart ntpd
>> systemctl reload ntpd
Note: It is also possible to mask and unmask a service. Masking a service prevents it from being started manually or by another service.

To know if the NTP service is activated at boot, type:

>> systemctl is-enabled ntpd
enabled
To know if the NTP service is running, type:

>> systemctl is-active ntpd
inactive
To get the status of the NTP service, type:

>> systemctl status ntpd
ntpd.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)
If you change a service configuration, you will need to reload it:

>> systemctl daemon-reload
To get the list of all the units (services, mount points, devices) with their status and description, type:

>> systemctl
To get a more readable list, type:

>> systemctl list-unit-files
To get the list of services that failed at boot, type:

>> systemctl --failed
To get the status of a process (here httpd) on a remote server (here test.example.com), type:

>> systemctl -H root@test.example.com status httpd.service
Run levels

Systemd also deals with run levels. As everything is represented by files in Systemd, target files replace run levels.

To move to single user mode, type:

>> systemctl rescue
To move to the level 3 (equivalent to the previous level 3), type:

>> systemctl isolate runlevel3.target
Or:

>> systemctl isolate multi-user.target
To move to the graphical level (equivalent to the previous level 5), type:

>> systemctl isolate graphical.target
To set the default run level to non-graphical mode, type:

>> systemctl set-default multi-user.target
To set the default run level to graphical mode, type:

>> systemctl set-default graphical.target
To get the current default run level, type:

>> systemctl get-default
graphical.target
To stop a server, type:

>> systemctl poweroff
Note: You can still use the poweroff command, a link to the systemctl command has been created (the same thing is true for the halt and reboot commands).

To reboot a server, suspend it or put it into hibernation, type:

>> systemctl reboot
>> systemctl suspend
>> systemctl hibernate
Linux standardization

Systemd's authors have decided to help Linux standardization among distributions. Through Systemd, the location of some configuration files has changed.

Miscellaneous

To get the server hostnames, type:

>> hostnamectl
Static hostname: test.example.com
Icon name: computer-laptop
Chassis: laptop
Machine ID: asdasdasdasdsadas9aa37e54a422938d
Boot ID: adasdasdasdasdac4a82fef4ac26d0
Operating System: Centos
CPE OS Name: cpe:/o:rCentos
Kernel: Linux 3.10.0-54.0.1.el7.x86_64
Architecture: x86_64
Note: There are three kinds of hostnames: static, pretty, and transient.
“The static host name is the traditional hostname, which can be chosen by the user, and is stored in the /etc/hostname file. The “transient” hostname is a dynamic host name maintained by the kernel. It is initialized to the static host name by default, whose value defaults to “localhost”. It can be changed by DHCP or mDNS at runtime. The pretty hostname is a free-form UTF8 host name for presentation to the user.” Source: Centos 7 Networking Guide.

To assign the test hostname permanently to the server, type:

>> hostnamectl set-hostname test
Note: With this syntax all three hostnames (static, pretty, and transient) take the test value at the same time. However, it is possible to set the three hostnames separately by using the --pretty, --static, and --transient options.

To get the current locale, virtual console keymap and X11 layout, type:

>> localectl
System Locale: LANG=en_US.UTF-8
VC Keymap: en_US
X11 Layout: en_US
To assign the en_GB.utf8 value to the locale, type:

>> localectl set-locale LANG=en_GB.utf8
To assign the en_GB value to the virtual console keymap, type:

>> localectl set-keymap en_GB
To assign the en_GB value to the X11 layout, type:

>> localectl set-x11-keymap en_GB
To get the current date and time, type:

>> timedatectl
Local time: Fri 2014-01-24 22:34:05 CET
Universal time: Fri 2014-01-24 21:34:05 UTC
RTC time: Fri 2014-01-24 21:34:05
Timezone: Europe/Madrid (CET, +0100)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: no
Last DST change: DST ended at
Sun 2013-10-27 02:59:59 CEST
Sun 2013-10-27 02:00:00 CET
Next DST change: DST begins (the clock jumps one hour forward) at
Sun 2014-03-30 01:59:59 CET
Sun 2014-03-30 03:00:00 CEST
To set the current date, type:

>> timedatectl set-time YYYY-MM-DD
To set the current time, type:

>> timedatectl set-time HH:MM:SS
To get the list of time zones, type:

>> timedatectl list-timezones
To change the time zone to America/New_York, type:

>> timedatectl set-timezone America/New_York
To get the users’ list, type:

>> loginctl list-users
UID USER
42 gdm
1000 tom
0 root
To get the list of all current user sessions, type:

>> loginctl list-sessions
SESSION UID USER SEAT
1 1000 tom seat0

1 sessions listed.
To get the properties of the user tom, type:

>> loginctl show-user tom
UID=1000
GID=1000
Name=tom
Timestamp=Fri 2014-01-24 21:53:43 CET
TimestampMonotonic=160754102
RuntimePath=/run/user/1000
Slice=user-1000.slice
Display=1
State=active
Sessions=1
IdleHint=no
IdleSinceHint=0
IdleSinceHintMonotonic=0

Sources: Archlinux wiki, Freedesktop wiki, Gentoo wiki, RHEL 7 System Administration Guide, Fedora wiki.

Wednesday, November 19, 2014

Docker+Juno Giving MissingSectionHeaderError while creating docker instance

I was able to configure the docker with Juno by following The instructions in http://www.adminz.in/2014/11/integrating-docker-into-juno-nova.html

First I got a time-out error with the Docker service and the nova service was not starting up, so I edited connectionpool.py as described in the following URL: http://www.adminz.in/2014/11/docker-n...

After that the service was running fine, but while launching an instance I got the following error.

2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]   File "/usr/lib/python2.7/site-packages/novadocker/virt/docker/driver.py", line 404, in spawn
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]     self._start_container(container_id, instance, network_info)
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]   File "/usr/lib/python2.7/site-packages/novadocker/virt/docker/driver.py", line 376, in _start_container
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]     instance_id=instance['name'])
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] InstanceDeployFailure: Cannot setup network: Unexpected error while running command.
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Command: sudo nova-rootwrap /etc/nova/rootwrap.conf ip link add name tapb97f8d6e-a6 type veth peer name nsb97f8d6e-a6
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Exit code: 1
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Stdout: u''
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9] Stderr: u'Traceback (most recent call last):\n  File "/usr/bin/nova-rootwrap", line 10, in <module>\n    sys.exit(main())\n  File "/usr/lib/python2.7/site-packages/oslo/rootwrap/cmd.py", line 91, in main\n    filters = wrapper.load_filters(config.filters_path)\n  File "/usr/lib/python2.7/site-packages/oslo/rootwrap/wrapper.py", line 120, in load_filters\n    filterconfig.read(os.path.join(filterdir, filterfile))\n  File "/usr/lib64/python2.7/ConfigParser.py", line 305, in read\n    self._read(fp, filename)\n  File "/usr/lib64/python2.7/ConfigParser.py", line 512, in _read\n    raise MissingSectionHeaderError(fpname, lineno, line)\nConfigParser.MissingSectionHeaderError: File contains no section headers.\nfile: /etc/nova/rootwrap.d/docker.filters, line: 1\n\' [Filters]\\n\'\n'
2014-11-18 12:34:43.663 26963 TRACE nova.compute.manager [instance: 5c712c7c-0778-479f-94ba-1bc3343420d9]

FIX
The issue was caused by a blank space before the [Filters] entry in the docker.filters file in the rootwrap.d directory on the Docker server. Once the leading space was removed, the Docker instance launched correctly.

[root@docker ~]# docker ps
CONTAINER ID        IMAGE                    COMMAND             CREATED             STATUS              PORTS               NAMES
d37ea1ce08b9        tutum/wordpress:latest   "/run.sh"           16 seconds ago      Up 15 seconds                           nova-73a4f67a-b6d0-4251-a292-d28c5137e6d4
[root@docker ~]#

Tuesday, November 18, 2014

Integrating Docker into Juno Nova Service as a Hypervisor


Installing Python Modules Needed for Docker
===========================================
yum install -y python-six
yum install -y python-pbr
yum install -y python-babel
yum install -y python-openbabel
yum install -y python-oslo-*
yum install -y python-docker-py

Installing Latest Version of Docker
==================================
yum install wget
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-1.2.0-4.el7.centos.x86_64.rpm
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-devel-1.2.0-4.el7.centos.x86_64.rpm
wget http://cbs.centos.org/kojifiles/packages/docker/1.2.0/4.el7.centos/x86_64/docker-pkg-devel-1.2.0-4.el7.centos.x86_64.rpm
yum install docker-*

Starting the Docker Service
===========================
systemctl start docker
systemctl status docker
systemctl enable docker


Installing and configuring Nova-Docker Driver
=============================================
yum install -y python-pip git
pip install -e git+https://github.com/stackforge/nova-docker#egg=novadocker
cd src/novadocker/
python setup.py install


Install and configure the Neutron service on the Docker server
======================================================
http://www.adminz.in/2014/10/openstack-juno-part-6-neutron.html

Install and configure the Nova service to use Docker
======================================================
Installing Packages
yum install openstack-nova-compute -y ; usermod -G docker nova


openstack-config --set /etc/nova/nova.conf DEFAULT compute_driver novadocker.virt.docker.DockerDriver


openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_host controller
openstack-config --set /etc/nova/nova.conf DEFAULT rabbit_password guest

openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone

openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/nova/nova.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_user nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken admin_password mar4nova

#On Controller1: public IP of the controller server (hostnames don't work). Configure the my_ip option to use the management-interface IP address of the controller node.
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 10.1.15.144
openstack-config --set /etc/nova/nova.conf DEFAULT vnc_enabled True
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 10.1.15.144
openstack-config --set /etc/nova/nova.conf DEFAULT novncproxy_base_url http://10.1.15.140:6080/vnc_auto.html

openstack-config --set /etc/nova/nova.conf glance host controller

[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver

systemctl enable openstack-nova-compute.service
systemctl start openstack-nova-compute.service

Configure Glance to include Docker images
==========================================
On the controller server, in /etc/glance/glance-api.conf:
# Supported values for the 'container_format' image attribute
container_formats=ami,ari,aki,bare,ovf,ova,docker

systemctl restart openstack-glance-api

Creating custom rootwrap filters (on the Docker server)
=================================
mkdir /etc/nova/rootwrap.d/
cat << EOF >> /etc/nova/rootwrap.d/docker.filters
# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user
[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root
EOF

chgrp nova /etc/nova/rootwrap.d -R
chmod 640 /etc/nova/rootwrap.d -R

systemctl restart openstack-nova-compute

If you face a time-out issue with Nova, try the fix in the following URL:

http://www.adminz.in/2014/11/docker-nova-time-out-error.html

Adding a Docker image (on the Docker server):
docker pull tutum/wordpress
docker save tutum/wordpress | glance image-create --is-public=True --container-format=docker --disk-format=raw --name tutum/wordpress
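
Once the image is in Glance, an instance can be booted from it in the usual way (the flavor name here is just an example):

nova boot --flavor m1.small --image tutum/wordpress wordpress-test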

Monday, November 10, 2014

Docker + Nova Time Out Error

http://paste.openstack.org/show/131728/

Sample Error
==========
    out = f(*args, **kwds)
  File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 468, in get
    return self.request('GET', url, **kwargs)
  File "/usr/lib/python2.7/site-packages/novadocker/virt/docker/client.py", line 36, in wrapper
    out = f(*args, **kwds)
  File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 456, in request
    resp = self.send(prep, **send_kwargs)
 File "/usr/lib/python2.7/site-packages/novadocker/virt/docker/client.py", line 36, in wrapper
    out = f(*args, **kwds)
 File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 559, in send
    r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 327, in send
    timeout=timeout
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 516, in urlopen
    body=body, headers=headers)
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 299, in _make_request
    timeout_obj = self._get_timeout(timeout)
 File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 279, in _get_timeout
    return Timeout.from_float(timeout)
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/timeout.py", line 152, in from_float
    return Timeout(read=timeout, connect=timeout)
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/timeout.py", line 95, in __init__
    self._connect = self._validate_timeout(connect, 'connect')
  File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/timeout.py", line 125, in _validate_timeout
    "int or float." % (name, value))
ValueError: Timeout value connect was Timeout(connect=10, read=10, total=None), but it must be an int or float.



To fix the problem I had to directly modify "/usr/lib/python2.7/site-packages/requests/packages/urllib3/connectionpool.py":

def _get_timeout(self, timeout):
    """ Helper that always returns a :class:`urllib3.util.Timeout` """
    if timeout is _Default:
        return self.timeout.clone()

    if isinstance(timeout, Timeout):   # <== the Timeout passed in is not a urllib3 Timeout
        return timeout.clone()
    else:
        # User passed us an int/float. This is for backwards compatibility,
        # can be removed later
        return Timeout.from_float(timeout._connect)   # <== manually changed to pass _connect

Removing Nova and Neutron Services from Mysql

Sometimes we need to remove services listed in Nova or Neutron because they are duplicated or the hosts have been removed from the environment. We can do it the following way.

Removing a Nova service from the MySQL database

>>nova service-list
>>nova hypervisor-list

mysql> use nova;
mysql> SELECT id, created_at, updated_at, hypervisor_hostname FROM compute_nodes;

mysql> DELETE FROM compute_node_stats WHERE compute_node_id='1';
mysql> DELETE FROM compute_nodes WHERE hypervisor_hostname='compute1';
mysql> DELETE FROM services WHERE host='compute1';



Removing a Neutron service from the MySQL database

>>neutron agent-list

mysql> use neutron;
mysql> DELETE FROM agents WHERE host='compute1';

Thursday, November 6, 2014

Parse Error Caused by a Blank Space Before the Entries

I noticed that in OpenStack Juno, if there is white space at the beginning of lines containing 'key' = 'value' entries, we get a parse error in the logs.

Sample Error. 

Nov 06 13:29:42 controller.novalocal neutron-server[14563]: File "/usr/lib64/python2.7/argparse.py", line 1794...ion
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: action(self, namespace, argument_values, option_string)
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: File "/usr/lib/python2.7/site-packages/oslo/config...l__
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: ConfigParser._parse_file(values, namespace)
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: File "/usr/lib/python2.7/site-packages/oslo/config...ile
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: raise ConfigFileParseError(pe.filename, str(pe))
Nov 06 13:29:42 controller.novalocal neutron-server[14563]: oslo.config.cfg.ConfigFileParseError: Failed to pa...ue'

Nov 06 13:29:42 controller.novalocal systemd[1]: neutron-server.service: main process exited, code=exited, st...LURE

The solution is to find the offending line and remove the leading blank space.
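A quick way to spot such lines (checking neutron.conf here as an example):

grep -n '^[[:space:]]' /etc/neutron/neutron.conf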

Tuesday, November 4, 2014

Docker with Openstack Giving Error "ova.openstack.common.threadgroup ValueError: Timeout value connect was Timeout"

While integrating Docker with OpenStack Juno, I was not able to start the nova service on the compute node. I followed https://wiki.openstack.org/wiki/Docker .

When I remove or comment out compute_driver = novadocker.virt.docker.DockerDriver from the nova configuration, the service is able to start, but the pid gets killed soon after.

I get the following error while trying to start the nova service.

Complete Error.
http://paste.openstack.org/show/128805/

Sample Error
****2014-11-03 14:14:08.138 5264 TRACE nova.openstack.common.threadgroup   File "/usr/lib/python2.7/site-packages/requests/packages/urllib3/util/timeout.py", line 125, in _validate_timeout
2014-11-03 14:14:08.138 5264 TRACE nova.openstack.common.threadgroup     "int or float." % (name, value))
2014-11-03 14:14:08.138 5264 TRACE nova.openstack.common.threadgroup ValueError: Timeout value connect was Timeout(connect=10, read=10, total=None), but it must be an int or float.
2014-11-03 14:14:08.138 5264 TRACE nova.openstack.common.threadgroup****


The issue has been fixed: I hadn't installed the driver's requirements. Once I installed them and rebooted the server, it is working fine now.

For testing I used wildcards for installation; we just need to install the correct packages listed at https://github.com/stackforge/nova-do...

yum install *pbr*
yum install *six*
yum install *babel*
yum install *oslo*
yum install docker-py

Saturday, November 1, 2014

Squid Proxy Server

Squid is a proxy server and web cache daemon. It has a wide variety of uses, from speeding up a web server by caching repeated requests; to caching web, DNS and other computer network lookups for a group of people sharing network resources; to aiding security by filtering traffic. Although primarily used for HTTP and FTP, Squid includes limited support for several other protocols including TLS, SSL, Internet Gopher and HTTPS


yum -y install squid
chkconfig squid on

IMPORTANT: First write all the ACLs and then the http_access rules. The order in which the rules are written affects how the proxy behaves.
#Port to which squid listens
http_port 3128


Allowing the known networks/IPs
============================
Declare all the known networks and allow them:

acl our_networks src 192.168.25.0/24 192.168.2.0/24 10.1.0.1
http_access allow our_networks

In the same way we can deny access using:

http_access deny our_networks


Blocking Sites using proxy.
==========================
acl blocksite1 dstdomain www.yahoo.com .facebook.com
http_access deny blocksite1

Blocking List of Sites.
======================
acl blocksitelist dstdomain "/etc/squid/restricted_sites"
http_access deny blocksitelist
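
The file referenced by the ACL simply lists one domain per line, for example:

.facebook.com
.twitter.com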


Blocking Sites with Specific Words using proxy.
==============================================
acl blockwords url_regex gmail
http_access deny blockwords

Blocking List of Words.
======================
acl blockwordlist url_regex "/etc/squid/restricted_words"
http_access deny blockwordlist
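
Similarly, the words file contains one pattern per line, for example:

gmail
torrent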


Display Custom message For Blocked Site.
========================================
deny_info <Error-Page-Name> <acl-name>

You can get the error page names from /usr/share/squid/errors/templates/; some of them are as follows.
ERR_ACCESS_DENIED            ERR_FTP_FAILURE       ERR_INVALID_URL          ERR_SOCKET_FAILURE
ERR_CACHE_ACCESS_DENIED      ERR_FTP_FORBIDDEN     ERR_LIFETIME_EXP         ERR_TOO_BIG
ERR_CACHE_MGR_ACCESS_DENIED  ERR_FTP_NOT_FOUND     ERR_NEW                  ERR_UNSUP_HTTPVERSION
ERR_CANNOT_FORWARD           ERR_FTP_PUT_CREATED   ERR_NO_RELAY             ERR_UNSUP_REQ
ERR_CONNECT_FAIL             ERR_FTP_PUT_ERROR     ERR_ONLY_IF_CACHED_MISS  ERR_URN_RESOLVE
ERR_DIR_LISTING              ERR_FTP_PUT_MODIFIED  ERR_PRECONDITION_FAILED  ERR_WRITE_ERROR
ERR_DNS_FAIL                 ERR_FTP_UNAVAILABLE   ERR_READ_ERROR           ERR_ZERO_SIZE_OBJECT
ERR_ESI                      ERR_ICAP_FAILURE      ERR_READ_TIMEOUT
ERR_FORWARDING_DENIED        ERR_INVALID_REQ       ERR_SECURE_CONNECT_FAIL
ERR_FTP_DISABLED             ERR_INVALID_RESP      ERR_SHUTTING_DOWN

If we need a custom page, we create it in that directory and reference it in the deny_info part; the deny_info line goes just above the corresponding http_access rule.
For example, if we create an error page named ERR_NEW, the rules will look like:

acl blockwordlist url_regex "/etc/squid/restricted_words"
deny_info ERR_NEW blockwordlist
http_access deny blockwordlist

NOTE: For HTTPS sites we will get a proxy-refusing message instead of the custom page, due to https://bugzilla.mozilla.org/show_bug.cgi?id=493699 .


Blocking and Allowing By Time
=============================
In the second ACL, MTWHFA stands for Monday through Saturday.
16:00-19:00 is the time window in 24-hour format.

acl myip src 192.168.25.31
acl worktime time MTWHFA 16:00-19:00
http_access allow myip worktime



Setting up maxconn ACL
======================
acl ACCOUNTSDEPT src 192.168.5.0/24
acl limitusercon maxconn 3
http_access deny ACCOUNTSDEPT limitusercon

acl ACCOUNTSDEPT src 192.168.5.0/24 : Our accounts department IP range
acl limitusercon maxconn 3 : Allow at most 3 simultaneous web connections from the same client IP
http_access deny ACCOUNTSDEPT limitusercon : Apply the ACL

Mentioning Allowed Ports
========================
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports



Adding User Authentication to Squid
==================================
Check for the ncsa_auth binary under the squid directory and add the following line to squid.conf. ncsa_auth can be in either the lib or lib64 directory, depending on your OS architecture.

#Add Following Line in squid.conf#
auth_param basic program /usr/lib64/squid/ncsa_auth /etc/squid/squid_user

#Creating the user file and adding the user to the list#
touch /etc/squid/squid_user
htpasswd /etc/squid/squid_user <username>

#To enable authentication in the proxy, add the following ACL and http_access rules to squid.conf#

acl class proxy_auth REQUIRED
http_access allow class

And finally deny all other access to this proxy
==============================================
http_access deny all
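
After changing squid.conf, reload the configuration for the new rules to take effect; on CentOS either of these should work:

service squid reload
squid -k reconfigure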

Friday, October 31, 2014

Installing Swish Module for php

The Swish package does not come with the current CentOS or Red Hat repos, so we need to compile and install swish-e before installing the Swish PHP module through pecl. Otherwise we may run into errors while installing the module with pecl.

Downloading and installing the swish packages.
wget http://swish-e.org/distribution/swish-e-2.4.7.tar.gz
tar zxvf swish-e-2.4.7.tar.gz
cd swish-e-2.4.7
./configure
make
make check
make install

cd ~

Installing swish php module using pecl
pecl install swish-beta
chmod 755 /usr/lib64/php/modules/swish.so
echo "extension=swish.so" >> /etc/php.ini

Thursday, October 30, 2014

Installing PHP modules using pecl command.

Once you have installed PHP, you need to install the modules required for your development work. We can use the pecl command to install them.

To install the pecl command:

yum install php-pear

Now, to install the needed modules just use pecl:

pecl install <Module Name>

To install a beta version
pecl install <Module Name>-beta

To list all modules in pecl database

pecl list-all

To check whether the module is installed or not

php -m
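
For example, to install and then verify the memcache module (the module name is just an example):

pecl install memcache
php -m | grep -i memcache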

Wednesday, October 29, 2014

Installing PHP 5.6 in Centos6/7

Compiling PHP can be difficult at times, but we can simply install the latest version of PHP from the proper Remi repo.

Install Remi repository

CentOS and Red Hat (RHEL)
Remi and EPEL (Dependency) on CentOS 7 and Red Hat (RHEL) 7

64 bit : yum install -y http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-2.noarch.rpm
yum install -y http://rpms.famillecollet.com/enterprise/remi-release-7.rpm


Remi and Epel repo ( Dependency ) on CentOS 6 and Red Hat (RHEL) 6
64 bit  : yum install -y http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
32 bit  : yum install -y http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm

yum install -y http://rpms.famillecollet.com/enterprise/remi-release-6.rpm


Installing PHP 5.6 from the Remi repo and httpd from the local repo
CentOS 7/6.5/5.10 and Red Hat (RHEL) 7/6.5/5.10
yum --enablerepo=remi,remi-php56 install httpd php php-common

Install PHP 5.6.0 modules

yum --enablerepo=remi,remi-php56 install php-pecl-apcu php-cli php-pear php-pdo php-mysqlnd php-pgsql php-pecl-mongo php-sqlite php-pecl-memcache php-pecl-memcached php-gd php-mbstring php-mcrypt php-xml

Start Apache HTTP server (httpd) and autostart Apache HTTP server (httpd) on boot
## CentOS/RHEL 7 ##
systemctl start httpd.service ## use restart after update


## CentOS / RHEL 6.5/5.10 ##
/etc/init.d/httpd start ## use restart after update
## OR ##
service httpd start ## use restart after update


##CentOS/RHEL 7 ##
systemctl enable httpd.service

## CentOS / RHEL 6.5/5.10 ##
chkconfig --levels 235 httpd on


Create a test PHP page to check that Apache, PHP and the PHP modules are working.
Add the following content to the /var/www/html/test.php file.

<?php

    phpinfo();
?>

Now check the PHP page at http://<<SERVER_IP>>/test.php

Make sure that the EPEL and Remi repos are disabled afterwards to avoid further issues in the future.

Modules available in the latest PHP

bcmath
bz2
calendar
com_dotnet
ctype
curl
date
dba
dom
enchant
ereg
exif
fileinfo
filter
ftp
gd
gettext
gmp
hash
iconv
imap
interbase
intl
json
ldap
libxml
mbstring
mcrypt
mssql
mysql
mysqli
mysqlnd
oci8
odbc
opcache
openssl
pcntl
pcre
pdo
pdo_dblib
pdo_firebird
pdo_mysql
pdo_oci
pdo_odbc
pdo_pgsql
pdo_sqlite
pgsql
phar
posix
pspell
readline
recode
reflection
session
shmop
simplexml
skeleton
snmp
soap
sockets
spl
sqlite3
standard
sybase_ct
sysvmsg
sysvsem
sysvshm
tidy
tokenizer
wddx
xml
xmlreader
xmlrpc
xmlwriter
xsl
zip
zlib

Monday, October 27, 2014

Openstack Juno - Neutron HA using VRRP (Keepalived)


First configure two neutron servers; let them be network and network2.
http://www.adminz.in/2014/10/openstack-juno-part-5-neutron.html

Then install Keepalived on both neutron servers.

#Add the following entries on both neutron servers
#in /etc/neutron/neutron.conf
l3_ha = True
#And the HA Scheduler has to be used :
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler
network_scheduler_driver = neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler
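
The same entries can be set with openstack-config, matching the style used in the other Neutron posts (run on both neutron servers):

openstack-config --set /etc/neutron/neutron.conf DEFAULT l3_ha True
openstack-config --set /etc/neutron/neutron.conf DEFAULT router_scheduler_driver neutron.scheduler.l3_agent_scheduler.ChanceScheduler
openstack-config --set /etc/neutron/neutron.conf DEFAULT network_scheduler_driver neutron.scheduler.dhcp_agent_scheduler.ChanceScheduler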


On the controller server, update the database:
neutron-db-manage --config-file=/etc/neutron/neutron.conf  --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini upgrade head

mkdir /etc/neutron/rootwrap.d
cp /usr/share/neutron/rootwrap/l3.filters /etc/neutron/rootwrap.d/

Now restart the OpenStack services on all the controller and neutron nodes.



On the controller server, create a new set of network settings:

source admin-openrc.sh
neutron net-create ext-net --shared --router:external True --provider:physical_network external --provider:network_type flat
neutron subnet-create ext-net --name ext-subnet --allocation-pool start=10.1.0.101,end=10.1.0.200 --disable-dhcp --gateway 10.1.0.42 10.1.0.0/24


To create the tenant network
neutron net-create cli-net
neutron subnet-create cli-net --name cli-subnet --gateway 192.168.1.1 192.168.1.0/24
neutron router-create cli-router
neutron router-interface-add cli-router cli-subnet
neutron router-gateway-set cli-router ext-net
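
To confirm the HA router is scheduled on both L3 agents, list the hosting agents from the controller (assumes the admin credentials are already sourced):

neutron l3-agent-list-hosting-router cli-router
neutron agent-list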


Now if we check both neutron nodes, we can see the router namespaces:

[root@network ~]# ip netns
qrouter-26aed9ea-b9d5-4427-a3e4-9e75be3e1bfa
[root@network ~]#

[root@network2 ~]# ip netns
qrouter-26aed9ea-b9d5-4427-a3e4-9e75be3e1bfa
[root@network2 ~]#


[root@network ~]#  ip netns exec qrouter-26aed9ea-b9d5-4427-a3e4-9e75be3e1bfa ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
10: ha-224b2c85-81: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:42:4d:52 brd ff:ff:ff:ff:ff:ff
    inet 169.254.192.8/18 brd 169.254.255.255 scope global ha-224b2c85-81
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe42:4d52/64 scope link
       valid_lft forever preferred_lft forever
11: qr-842e3e41-3a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:13:bc:63 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 scope global qr-842e3e41-3a
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe13:bc63/64 scope link
       valid_lft forever preferred_lft forever
12: qg-04d4c06e-49: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:b7:19:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.1.0.101/24 scope global qg-04d4c06e-49
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:feb7:19b8/64 scope link
       valid_lft forever preferred_lft forever
[root@network ~]#
[root@network ~]#



[root@network2 ~]# ip netns exec qrouter-26aed9ea-b9d5-4427-a3e4-9e75be3e1bfa ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
16: ha-37517361-ec: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:6f:a0:11 brd ff:ff:ff:ff:ff:ff
    inet 169.254.192.7/18 brd 169.254.255.255 scope global ha-37517361-ec
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe6f:a011/64 scope link
       valid_lft forever preferred_lft forever
17: qr-842e3e41-3a: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:13:bc:63 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.1/24 scope global qr-842e3e41-3a
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:fe13:bc63/64 scope link
       valid_lft forever preferred_lft forever
18: qg-04d4c06e-49: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fa:16:3e:b7:19:b8 brd ff:ff:ff:ff:ff:ff
    inet 10.1.0.101/24 scope global qg-04d4c06e-49
       valid_lft forever preferred_lft forever
    inet6 fe80::f816:3eff:feb7:19b8/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
[root@network2 ~]#


In the above output you can see that the devices qg-04d4c06e-49 and qr-842e3e41-3a have been created on both servers.
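
To confirm that the L3 agent spawned a keepalived instance for the HA router on each node, a simple process check is enough (the VRRP traffic itself runs over the ha- interfaces shown above):

ps -ef | grep [k]eepalived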

Friday, October 24, 2014

Removing Blank Lines from the File.

In sed 
Type the following sed command to delete all empty lines:

Display without blank lines
sed '/^$/d' input.txt

Remove all blank lines from the file
sed -i '/^$/d' input.txt
cat input.txt

In awk 

Type the following awk command to delete all empty lines:

Display without blank lines
awk NF input.txt

Remove all blank lines from the file
awk 'NF' input.txt > output.txt
cat output.txt


In perl
Type the following perl one-liner to delete all empty lines and save the original file as input.txt.backup:
Remove all blank lines from the file
perl -i.backup -n -e "print if /\S/" input.txt


In vi editor
:g/^$/d
:g runs a command on every line that matches a regex; here the regex is ^$ (a blank line) and the command is
d (delete)


In tr
tr -s '\n' < abc.txt

In grep
grep -v "^$" abc.txt



Wednesday, October 22, 2014

Openstack Juno Part 6 - Neutron Configuration on Compute Service

Installing the packages

yum install openstack-neutron-ml2 openstack-neutron-openvswitch ipset -y


Configure the Service 
#Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password mar4neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_password guest

openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True

#Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network interface on your compute node. This guide uses 10.0.1.31 for the IP address of the instance tunnels network interface on the first compute node.
#Dedicated Ip for Tunneling in Compute Node

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 10.0.0.214
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True


systemctl enable openvswitch.service
systemctl start openvswitch.service


Replace NEUTRON_PASS with the password you chose for the neutron user in the Identity service.

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_strategy keystone
openstack-config --set /etc/nova/nova.conf neutron admin_tenant_name service
openstack-config --set /etc/nova/nova.conf neutron admin_username neutron
openstack-config --set /etc/nova/nova.conf neutron admin_password mar4neutron
openstack-config --set /etc/nova/nova.conf neutron admin_auth_url http://controller:35357/v2.0

openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.LinuxOVSInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

#Due to a packaging bug, the Open vSwitch agent initialization script explicitly looks for the Open vSwitch plug-in #configuration file rather than a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file. Run the #following commands to resolve this issue:

cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service


Starting the Services
systemctl enable neutron-openvswitch-agent.service
systemctl restart neutron-openvswitch-agent.service
systemctl restart openstack-nova-compute.service
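
After the restart, the new Open vSwitch agent should appear as alive in the agent list on the controller (assumes the admin credentials are sourced there):

source admin-openrc.sh
neutron agent-list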

Tuesday, October 21, 2014

Openstack Juno Part 5 - Neutron Configuration on Network Node

Installing the Packages

yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch ipset  -y

Configuring the Service
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000/v2.0
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken identity_uri http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_user neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken admin_password mar4neutron

openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT rabbit_password guest


openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips True


#We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.
#Comment out any lines in the [service_providers] section.

openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT use_namespaces True

#We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/l3_agent.ini to assist with #troubleshooting.


openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.OVSInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT use_namespaces True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dnsmasq_config_file /etc/neutron/dnsmasq-neutron.conf

echo "dhcp-option-force=26,1454" >> /etc/neutron/dnsmasq-neutron.conf
chown neutron:neutron /etc/neutron/dnsmasq-neutron.conf

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url http://controller:5000/v2.0
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_region regionOne
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_tenant_name service
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_user neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT admin_password mar4neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret mar4meta

#We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/metadata_agent.ini to assist with #troubleshooting.

#Perform the next two steps on the controller node.
#On the controller node, configure Compute to use the metadata service:
#Replace METADATA_SECRET with the secret you chose for the metadata proxy.

openstack-config --set /etc/nova/nova.conf DEFAULT service_neutron_metadata_proxy true
openstack-config --set /etc/nova/nova.conf DEFAULT neutron_metadata_proxy_shared_secret mar4meta

On the controller node, restart the Compute API service:
systemctl restart openstack-nova-api.service

# To configure the Modular Layer 2 (ML2) plug-in

 # Replace INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS with the IP address of the instance tunnels network #interface on your network node. This guide uses 10.0.1.21 for the IP address of the instance tunnels network interface #on the network node.
#Dedicated IP for tunneling in network node
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers openvswitch
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_gre tunnel_id_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks external

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs local_ip 10.0.0.212
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs tunnel_type gre
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs enable_tunneling True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ovs bridge_mappings external:br-ex


openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True


systemctl enable openvswitch.service
systemctl start openvswitch.service

#Add the external bridge:
ovs-vsctl add-br br-ex
#Add a port to the external bridge that connects to the physical external network interface:
#Replace INTERFACE_NAME with the actual interface name. For example, eth2 or ens256.
ovs-vsctl add-port br-ex eth1
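
To verify the bridge and the newly added port:

ovs-vsctl show
ovs-vsctl list-ports br-ex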


#Depending on your network interface driver, you may need to disable Generic Receive Offload (GRO) to achieve #suitable throughput between your instances and the external network.
#To temporarily disable GRO on the external network interface while testing your environment:
# ethtool -K INTERFACE_NAME gro off



ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service


Starting the services

systemctl enable neutron-openvswitch-agent.service
systemctl enable neutron-l3-agent.service
systemctl enable neutron-dhcp-agent.service
systemctl enable neutron-metadata-agent.service
systemctl enable neutron-ovs-cleanup.service
systemctl start neutron-openvswitch-agent.service
systemctl start neutron-l3-agent.service
systemctl start neutron-dhcp-agent.service
systemctl start neutron-metadata-agent.service
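
A quick local check that all four agents came up cleanly (any failure will show in the status output):

systemctl status neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service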