
Friday, February 16, 2018

Docker Management using Portainer

Portainer is a lightweight management UI that allows easy management of Docker environments, including creating, deploying, and managing containers, services, and stacks. It is particularly useful for those who are new to Docker or those who prefer a visual interface over command-line management.

To install Portainer with a persistent container, you can follow these steps:
  • Pull the Portainer image: docker pull portainer/portainer
  • Create a directory for Portainer data: mkdir -p /mnt/docker/portainer/data
  • Create a Docker service for Portainer with the following command:  
docker service create \
  --name portainer \
  --publish 9090:9000 \
  --constraint 'node.role == manager' \
  --mount type=bind,src=/mnt/docker/portainer/data,dst=/data \
  --mount type=bind,src=/var/run/docker.sock,dst=/var/run/docker.sock \
  portainer/portainer \
  -H unix:///var/run/docker.sock

 

The above command creates a new Docker service named "portainer" that publishes port 9090 (mapped to the container's port 9000), bind-mounts the data directory for persistence, and is constrained to run on a manager node. A quick verification sketch follows the steps below.

  • Access the Portainer UI by visiting the IP address or hostname of the Docker swarm manager node on port 9090 in a web browser.
  • Create a new user account and start managing your Docker environment using the Portainer UI.
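Before opening the UI, you can confirm the service is running from the manager node (standard Docker CLI; the service name and port match the command above):

docker service ls               # the portainer service should show 1/1 replicas
docker service ps portainer     # shows which manager node the task is running on
curl -I http://localhost:9090   # run on the manager node; an HTTP response means the UI is up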




Wednesday, November 22, 2017

Docker Clustering with Swarm in Centos7

Docker Clustering with Swarm in Centos7 is a process of creating a cluster of Docker hosts using the Docker Swarm feature in the CentOS 7 operating system. The Swarm feature is a native clustering and orchestration tool within Docker that enables users to create and manage a cluster of Docker hosts. This process involves setting up a Docker Swarm manager and one or more Docker Swarm nodes, configuring the network and storage for the cluster, and deploying and scaling Docker services across the cluster. The benefits of clustering Docker hosts with Swarm in CentOS 7 include increased scalability, high availability, and load balancing of Docker services, as well as simplified management and deployment of containerized applications.

Installing Docker

mkdir /install-files ; cd /install-files
wget https://yum.dockerproject.org/repo/main/centos/7/Packages/docker-engine-1.13.1-1.el7.centos.x86_64.rpm
wget https://yum.dockerproject.org/repo/main/centos/7/Packages/docker-engine-selinux-1.13.1-1.el7.centos.noarch.rpm


Install the docker-engine-selinux package and its dependency

yum install -y policycoreutils-python
rpm -i docker-engine-selinux-1.13.1-1.el7.centos.noarch.rpm

Install the docker-engine package and its dependencies

yum install -y libtool-ltdl libseccomp
rpm -i docker-engine-1.13.1-1.el7.centos.x86_64.rpm

Remove the downloaded rpm packages

rm -f docker-engine-*

Enable and start the docker service

systemctl enable docker
systemctl start docker

Firewalld Enabling Firewall Rules

firewall-cmd --get-active-zones
firewall-cmd --list-all
firewall-cmd --zone=public --add-port=2377/tcp --permanent
firewall-cmd --permanent --add-source=192.168.56.0/24
firewall-cmd --permanent --add-port=2377/tcp
firewall-cmd --permanent --add-port=7946/tcp
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --reload
Enable and Restart systemd service
systemctl enable docker;
systemctl restart docker
Docker Cluster Env

docker swarm init --advertise-addr=192.168.56.105

Swarm initialized: current node (b4b79zi3t1mq1572r0iubxdhc) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-1wcz7xfyvhewvj3dd4wcbhufw4lub3b1vgpuoybh90myzookbf-4ksxoxrilifb2tmvuligp9krs \
    192.168.56.101:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

To join as a Swarm manager

docker swarm join-token manager

  docker swarm join \
    --token SWMTKN-1-10cqx6yryq5kyfe128m2xhyxzplsc90lzksqggmscv1nfipsbb-bfdbvfhuw9sg8mx2i1a4rkvlv \
    192.168.56.101:2377
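
Once the workers (and any extra managers) have joined, a quick check of the cluster (standard Docker CLI; the service name and image below are only examples):

docker node ls                                                   # all nodes should show STATUS Ready
docker service create --name web --replicas 2 -p 8080:80 nginx   # example service to exercise the cluster
docker service ps web                                            # tasks should be spread across the nodes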


Wednesday, April 1, 2015

Protect Grub2 with Password Centos7/rhel7


Protect Grub2 with Plain Password Method
1.) Log in as the root user
su -

2.) Backup the existing grub.cfg so if anything goes wrong we can always restore it.
>>cp /boot/grub2/grub.cfg /boot/grub2/grub.cfg.orig

To specify a superuser, add the following lines to the /etc/grub.d/01_users file, where john is the name of the user designated as the superuser and johnspassword is the superuser's password. (01_users is a shell script whose output is included in the generated grub.cfg, so the cat <<EOF block below is the content of that file, not a command to run at the prompt.)

cat <<EOF
set superusers="john"
password john johnspassword
EOF

On BIOS-based machines, issue the following command as root:
>>grub2-mkconfig -o /boot/grub2/grub.cfg
On UEFI-based machines, issue the following command as root:
>> grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg

To Use an Encrypted Password
Create the encrypted password hash using:
grub2-mkpasswd-pbkdf2
Enter Password:
Reenter Password:
PBKDF2 hash of your password is grub.pbkdf2.sha512.10000.19074739ED80F115963D984BDCB35AA671C24325755377C3E9B014D862DA6ACC77BC110EED41822800A87FD3700C037320E51E9326188D53247EC0722DDF15FC.C56EC0738911AD86CEA55546139FEBC366A393DF9785A8F44D3E51BF09DB980BAFEF85281CBBC56778D8B19DC94833EA8342F7D73E3A1AA30B205091F1015A85

Now we can change the entry in the file /etc/grub.d/01_users as follows

cat <<EOF
set superusers="john"
password_pbkdf2 john grub.pbkdf2.sha512.10000.19074739ED80F115963D984BDCB35AA671C24325755377C3E9B014D862DA6ACC77BC110EED41822800A87FD3700C037320E51E9326188D53247EC0722DDF15FC.C56EC0738911AD86CEA55546139FEBC366A393DF9785A8F44D3E51BF09DB980BAFEF85281CBBC56778D8B19DC94833EA8342F7D73E3A1AA30B205091F1015A85
EOF
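
After updating 01_users with the PBKDF2 hash, regenerate grub.cfg again so the change takes effect (same commands as in the plain-password method above).
On BIOS-based machines:
>>grub2-mkconfig -o /boot/grub2/grub.cfg
On UEFI-based machines:
>>grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg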





Friday, February 13, 2015

Running a Script on Client Servers using Puppet Master


Enable the puppet File Server
=============================
Add the following entries to /etc/puppet/fileserver.conf:
[extra_files]
path /var/lib/puppet/bucket
allow *
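
Changes to fileserver.conf typically require restarting the puppet master before the new mount point is served (a small sketch using the service name from the setup post below):

systemctl restart puppetmaster.service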


The File is stored in the mentioned path
========================================
[root@master ~]# ll /var/lib/puppet/bucket/
total 4
-rw-r--r--. 1 root root 39 Feb 10 16:45 startup.sh

In the manifest below, the script is first fetched from the master and saved to a local file on the client, and then executed
=============================================================================================================================
[root@master ~]# cat /etc/puppet/manifests/site.pp
node "client" {
file { '/tmp/startup.sh':
          owner => 'root',
          group => 'root',
          mode => '700',
          source => 'puppet:///extra_files/startup.sh',
       }
exec    {'run_startup':
        command => '/tmp/startup.sh',
        }
}
[root@master ~]#
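
Once site.pp is in place on the master, run the agent on the client to apply the manifest (same agent command as in the Puppet setup post below):

puppet agent -t --debug --verbose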

Tuesday, February 10, 2015

Puppet Master-Client Setup/Usage

Puppet is a system for automating system administration tasks. It has a master server on which we describe the client configurations, and each client runs an agent that fetches its configuration from the master server and applies it.

Environment
Master and Client Runs on Centos7

Open port 8140 in the firewall and set SELinux to permissive mode, as shown below.
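
A minimal sketch of those two prerequisites, assuming firewalld is in use (adjust the zone to your setup):

firewall-cmd --permanent --add-port=8140/tcp
firewall-cmd --reload
setenforce 0                                                     # permissive for the running system
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config   # persist across reboots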

Installing the packages.
================
rpm -ivh https://yum.puppetlabs.com/el/7/products/x86_64/puppetlabs-release-7-11.noarch.rpm
yum install -y puppet-server

Start the service
============
systemctl start  puppetmaster.service
puppet resource service puppetmaster ensure=running enable=true
--------------------------
Notice: /Service[puppetmaster]/enable: enable changed 'false' to 'true'
service { 'puppetmaster':
  ensure => 'running',
  enable => 'true',
}
[root@master ~]#

By now the certificate and keys should have been created.
====================================================
[root@master ~]# ll /var/lib/puppet/ssl/certs
total 8
-rw-r--r--. 1 puppet puppet 2013 Feb  9 14:48 ca.pem
-rw-r--r--. 1 puppet puppet 2098 Feb  9 14:48 master.example.com.novalocal.pem
[root@master ~]#
[root@master ~]# ll /var/lib/puppet/ssl/private_keys/
total 4
-rw-r--r--. 1 puppet puppet 3243 Feb  9 14:48 master.example.com.novalocal.pem
[root@master ~]#


Add the following entries to /etc/puppet/puppet.conf. # You will find the cert name in /var/lib/puppet/ssl/certs
================================================
vim /etc/puppet/puppet.conf
[master]
certname = master.example.com.novalocal
autosign = true

Restart the Service
systemctl restart  puppetmaster.service

[root@master ~]# netstat -plan |grep 8140
tcp6       0      0 :::8140                 :::*                    LISTEN      5870/ruby
[root@master ~]#

####################
Client Configuration 
####################

Install the Packages
====================
rpm -ivh https://yum.puppetlabs.com/el/7/products/x86_64/puppetlabs-release-7-11.noarch.rpm
yum install -y puppet

Configure the Client
=====================
 vim /etc/puppet/puppet.conf
# In the [agent] section
    server = master.example.com.novalocal
    report = true
    pluginsync = true

Now the following command will submit the client's certificate request to the server
===============================================
puppet agent -t --debug --verbose

From the server we need to sign the client certificate, if it is not signed automatically
=============================================================
puppet cert sign --all

Now run the agent again from the client to get synced
======================================================
puppet agent -t --debug --verbose



Now on the server, create the configuration file
==================================
cat /etc/puppet/manifests/site.pp
node "client.example.com" {
file { '/root/example_file.txt':
    ensure => "file",
    owner  => "root",
    group  => "root",
    mode   => "700",
    content => "Congratulations!
Puppet has created this file.
",}
}

Once the above file is created on the server, we need to run the agent on the client:
puppet agent -t --debug --verbose

We can see that the file is created:

Info: Applying configuration version '1423504520'
Notice: /Stage[main]/Main/Node[client.example.com]/File[/root/example_file.txt]/ensure: defined content as '{md5}8a2d86dd40aa579c3fabac1453fcffa5'
Debug: /Stage[main]/Main/Node[client.example.com]/File[/root/example_file.txt]: The container Node[client.example.com] will propagate my refresh event
Debug: Node[client.example.com]: The container Class[Main] will propagate my refresh event
Debug: Class[Main]: The container Stage[main] will propagate my refresh event
Debug: Finishing transaction 23483900
Debug: Storing state
Debug: Stored state in 0.01 seconds
Notice: Finished catalog run in 0.03 seconds
Debug: Using cached connection for https://master.example.com.novalocal:8140
Debug: Caching connection for https://master.example.com.novalocal:8140
Debug: Closing connection for https://master.example.com.novalocal:8140
[root@client ~]# ll /root/
total 4
-rwx------. 1 root root 47 Feb  9 17:55 example_file.txt
[root@client ~]#



Wednesday, December 24, 2014

Private Docker Registry @ Centos7

Here we will create a private Docker registry for internal use, secured with authentication and SSL.

Install the Packages
>> yum update -y ;
>> yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
>> yum install docker-registry

The configuration file is at /etc/docker-registry.yml

Edit the dev section to point at the desired storage location:

 # This is the default configuration when no flavor is specified
dev:
    storage: local
    storage_path: /home/registry
    loglevel: debug

Create the directory at /home/registry (or configure storage_path to point at an existing location), as below.
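
For the default local flavor this is all that is needed (the path matches the storage_path above):

>>mkdir -p /home/registry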

Start and enable the Docker Registry

>>systemctl start docker-registry
>>systemctl status docker-registry
docker-registry.service - Registry server for Docker
   Loaded: loaded (/usr/lib/systemd/system/docker-registry.service; enabled)
   Active: active (running) since Mon 2014-12-15 21:20:26 UTC; 5s ago
 Main PID: 19468 (gunicorn)
   CGroup: /system.slice/docker-registry.service
           ├─19468 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19473 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19474 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19475 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19482 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19488 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19489 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           ├─19494 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...
           └─19496 /usr/bin/python /usr/bin/gunicorn --access-logfile - --debug --max-requests 100 --gracefu...

Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19468] [INFO] Listening a...68)
Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19468] [INFO] Using worke...ent
Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19473] [INFO] Booting wor...473
Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19474] [INFO] Booting wor...474
Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19475] [INFO] Booting wor...475
Dec 15 21:20:26 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:26 [19482] [INFO] Booting wor...482
Dec 15 21:20:27 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:27 [19488] [INFO] Booting wor...488
Dec 15 21:20:27 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:27 [19489] [INFO] Booting wor...489
Dec 15 21:20:27 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:27 [19494] [INFO] Booting wor...494
Dec 15 21:20:27 docker-registry.novalocal gunicorn[19468]: 2014-12-15 21:20:27 [19496] [INFO] Booting wor...496
Hint: Some lines were ellipsized, use -l to show in full.


Check whether it's working
>>curl localhost:5000
"docker-registry server (local) (v0.6.8)"
[root@docker-registry ~]#

Now we need to add authentication to the registry and a self-signed SSL certificate.

Creating the Certificates
# Generate private key
>>openssl genrsa -out docker-registry.key 2048

# Generate CSR **** MAKE SURE THE SERVER NAME (Common Name) IS GIVEN CORRECTLY ****
>>openssl req -new -key docker-registry.key -out docker-registry.csr

# Generate Self Signed Key
>>openssl x509 -req -days 365 -in docker-registry.csr -signkey docker-registry.key -out docker-registry.crt

# Copy the files to the correct locations
>>cp docker-registry.crt /etc/pki/tls/certs
>>cp docker-registry.key /etc/pki/tls/private/docker-registry.key
>>cp docker-registry.csr /etc/pki/tls/private/docker-registry.csr

# Update the CA trust store in CentOS 7
>>update-ca-trust enable
>>cp docker-registry.crt /etc/pki/ca-trust/source/anchors/
>>update-ca-trust extract


>>yum install nginx httpd-tools

Create a User for authentication
>>sudo htpasswd -c /etc/nginx/docker-registry.htpasswd dockeruser
<Password will be prompted>

Configure nginx Virtual Host (/etc/nginx/conf.d/virtualhost.conf).

============
# For versions of Nginx > 1.3.9 that include chunked transfer encoding support
# Replace with appropriate values where necessary

upstream docker-registry {
 server localhost:5000;
}

server {
 listen 8080;
 server_name docker.registry;

 ssl on;
 ssl_certificate /etc/pki/tls/certs/docker-registry.crt;
 ssl_certificate_key /etc/pki/tls/private/docker-registry.key;

 proxy_set_header Host       $http_host;   # required for Docker client sake
 proxy_set_header X-Real-IP  $remote_addr; # pass on real client IP

 client_max_body_size 0; # disable any limits to avoid HTTP 413 for large image uploads

 # required to avoid HTTP 411: see Issue #1486 (https://github.com/dotcloud/docker/issues/1486)
 chunked_transfer_encoding on;

 location / {
     # let Nginx know about our auth file
     auth_basic              "Restricted";
     auth_basic_user_file    docker-registry.htpasswd;

     proxy_pass http://docker-registry;
 }
 location /_ping {
     auth_basic off;
     proxy_pass http://docker-registry;
 }
 location /v1/_ping {
     auth_basic off;
     proxy_pass http://docker-registry;
 }

}
============
Restart the Service
>>>systemctl restart nginx

Checking the registry: curl https://<username>:<password>@<hostname>:<port>  ****** Hostname should be the one we gave in the certificate ******

>> curl https://dockeruser:docker@docker-registry:8080
"docker-registry server (local) (v0.6.8)"
[root@docker-registry ~]#


Configure CoreOs to use the Private Docker Registry

To use the Private Registry in the coreos we need to Copy the CA certificate from the registry server to the Coreos Docker server.

Copy the CA certificate to /etc/ssl/certs/docker-registry.pem (in PEM format).
Now update the certificate list using the command:
>>sudo update-ca-certificates

Also, since our server name is docker-registry:8080, we need to copy the CA certificate to /etc/docker/certs.d/docker-registry:8080/ca.crt as well.

And restart the docker service

Now try to login to registry

core@coreos ~ $ docker login https://docker-registry:8080
Username :
Password :
Email :
Login Succeeded
core@coreos ~ $


Now, to push an image to the private repo, we need to tag the image in the format <registry-hostname>:<port>/<image-name>, as in the example below.
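
A minimal sketch, assuming a local image named myimage already exists (image name is only an example; registry host and port match this post):

docker tag myimage docker-registry:8080/myimage
docker push docker-registry:8080/myimage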


Sunday, November 30, 2014

GFS Storage Cluster in Centos7

Clustering the storage LUNs: sharing an iSCSI LUN with multiple servers.

Install Packages
yum -y install pcs fence-agents-all iscsi-initiator-utils

Configure the hacluster user
Set a password for the hacluster user; make sure the same password is used on both servers.
On both servers:

[root@controller ~]# passwd hacluster

Make sure the host entries are correct.
vi /etc/hosts
10.1.15.32 controller
10.1.15.36 controller2

Start and enable the services so they come up on the next boot

systemctl start pcsd.service
systemctl enable pcsd.service
systemctl start pacemaker
systemctl enable pacemaker

Authenticate the nodes
[root@controller ~]#  pcs cluster auth controller controller2
<password of hacluster>

Enabling the cluster for the next boot (on both servers)

[root@controller ~]#  pcs cluster enable --all
[root@controller ~]#  pcs cluster status

Creating the Cluster with Controller Nodes
[root@controller ~]# pcs cluster setup --start --name storage-cluster controller controller2
Shutting down pacemaker/corosync services...
Redirecting to /bin/systemctl stop  pacemaker.service
Redirecting to /bin/systemctl stop  corosync.service
Killing any remaining services...
Removing all cluster configuration files...
controller: Succeeded
controller: Starting Cluster...
controller2: Succeeded
controller2: Starting Cluster...
[root@controller ~]#

 Add a STONITH device – i.e. a fencing device

>>pcs stonith create iscsi-stonith-device fence_scsi devices=/dev/mapper/LUN1 meta provides=unfencing
>>pcs stonith show iscsi-stonith-device
 Resource: iscsi-stonith-device (class=stonith type=fence_scsi)
  Attributes: devices=/dev/mapper/LUN1
  Meta Attrs: provides=unfencing
  Operations: monitor interval=60s (iscsi-stonith-device-monitor-interval-60s)

Create clone resources for DLM and CLVMD
This enables the services to run on both nodes. Run pcs commands from a single node only.

>>pcs resource create dlm ocf:pacemaker:controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
>>pcs resource create clvmd ocf:heartbeat:clvm op monitor interval=30s on-fail=fence clone interleave=true ordered=true

Create an ordering and a colocation constraint
to make sure that DLM starts before CLVMD, and that both resources start on the same node:

>>pcs constraint order start dlm-clone then clvmd-clone
>>pcs constraint colocation add clvmd-clone with dlm-clone

Set the no-quorum-policy of the cluster
Set this to ignore so that when quorum is lost the remaining node carries on with the resources; note that GFS2 itself requires quorum to operate.

pcs property set no-quorum-policy=ignore


Create the GFS2 filesystem
The -t option should be specified as <clustername>:<fsname>, and the right number of journals should be specified (here 2 as we have two nodes accessing the filesystem):

 mkfs.gfs2 -p lock_dlm -t storage-cluster:glance -j 2 /dev/mapper/LUN0

 Mounting the GFS file system using a pcs resource

Here we don't use fstab; instead we use a pcs resource to mount the LUN.

 pcs resource create gfs2_res Filesystem device="/dev/mapper/LUN0" directory="/var/lib/glance" fstype="gfs2" options="noatime,nodiratime" op monitor interval=10s on-fail=fence clone interleave=true
 
Create an ordering constraint so that the filesystem resource is started after the CLVMD resource, and a colocation constraint so that both start on the same node:

pcs constraint order start clvmd-clone then gfs2_res-clone

pcs constraint colocation add gfs2_res-clone with clvmd-clone

pcs constraint show


A script placed in /usr/lib/systemd/system-shutdown/ (see the post below on running a script before shutdown) stops the cluster services and logs out of the iSCSI sessions cleanly at shutdown:

[root@controller ~]# cat /usr/lib/systemd/system-shutdown/turnoff.service
systemctl stop pacemaker
systemctl stop pcsd
/usr/sbin/iscsiadm -m node -u
systemctl stop multipathd
systemctl stop iscsi

Saturday, November 29, 2014

Configuring Multipath in Centos 7 for ISCSI storage LUNS

Install Packages

yum -y install iscsi-initiator-utils
yum install device-mapper-multipath -y

Starting and Enabling the Service 

systemctl start iscsi;
systemctl start iscsid ;
systemctl start multipathd ;

systemctl enable iscsi ;
systemctl enable iscsid ;
systemctl enable multipathd ;

Discovering the iSCSI Targets
iscsiadm -m discovery -t sendtargets -p 10.1.1.100
iscsiadm -m discovery -t sendtargets -p 10.1.0.100


Login to all the targets
iscsiadm -m node -l
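
To confirm the logins worked, list the active sessions and check that the new block devices are visible:

iscsiadm -m session
lsblk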

Configure basic multipath on both servers (controller/controller2)

mpathconf --enable --with_multipathd y

cat /etc/multipath.conf

defaults {
 polling_interval        10
 path_selector           "round-robin 0"
 path_grouping_policy    multibus
 path_checker            readsector0
 rr_min_io               100
 max_fds                 8192
 rr_weight               priorities
 failback                immediate
 no_path_retry           fail
 user_friendly_names     yes
}



[root@controller ~]# multipath -ll
mpathb (36a4badb00053ae7f00001c1c54767520) dm-3 DELL    ,MD3000i
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 13:0:0:1 sdi 8:128 active ready running
| `- 14:0:0:1 sdh 8:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 11:0:0:1 sdg 8:96  active ghost running
  `- 12:0:0:1 sdf 8:80  active ghost running
maptha (36a4badb00053ae7f0000181654753fe5) dm-4 DELL    ,MD3000i
size=250G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 13:0:0:0 sdd 8:48  active ready running
| `- 14:0:0:0 sde 8:64  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 12:0:0:0 sdb 8:16  active ghost running
  `- 11:0:0:0 sdc 8:32  active ghost running
[root@controller ~]#

Adding the target partitions to multipath

Add multipath aliases for the iSCSI LUNs in /etc/multipath.conf

multipaths {
        multipath {
                wwid    36a4badb00053ae7f0000181654753fe5
                alias   LUN0
        }
        multipath {
                wwid    36a4badb00053ae7f00001c1c54767520
                alias   LUN1
        }
}

[root@controller ~]# systemctl restart multipathd

[root@controller ~]# multipath -ll
LUN1 (36a4badb00053ae7f00001c1c54767520) dm-3 DELL    ,MD3000i
size=2.0G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 13:0:0:1 sdi 8:128 active ready running
| `- 14:0:0:1 sdh 8:112 active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 11:0:0:1 sdg 8:96  active ghost running
  `- 12:0:0:1 sdf 8:80  active ghost running
LUN0 (36a4badb00053ae7f0000181654753fe5) dm-4 DELL    ,MD3000i
size=250G features='3 queue_if_no_path pg_init_retries 50' hwhandler='1 rdac' wp=rw
|-+- policy='round-robin 0' prio=6 status=active
| |- 13:0:0:0 sdd 8:48  active ready running
| `- 14:0:0:0 sde 8:64  active ready running
`-+- policy='round-robin 0' prio=1 status=enabled
  |- 12:0:0:0 sdb 8:16  active ghost running
  `- 11:0:0:0 sdc 8:32  active ghost running
[root@controller ~]#


 [root@controller ~]# systemctl status multipathd
multipathd.service - Device-Mapper Multipath Device Controller
   Loaded: loaded (/usr/lib/systemd/system/multipathd.service; enabled)
   Active: active (running) since Wed 2014-11-26 06:31:41 EST; 5s ago
  Process: 2920 ExecStart=/sbin/multipathd (code=exited, status=0/SUCCESS)
  Process: 2915 ExecStartPre=/sbin/multipath -A (code=exited, status=0/SUCCESS)
  Process: 2913 ExecStartPre=/sbin/modprobe dm-multipath (code=exited, status=0/SUCCESS)
 Main PID: 2922 (multipathd)
   CGroup: /system.slice/multipathd.service
           └─2922 /sbin/multipathd

Nov 26 06:31:41 controller systemd[1]: PID file /var/run/multipathd/multipathd.pid not readable (yet?)...tart.
Nov 26 06:31:41 controller multipathd[2922]: LUN0: load table [0 524288000 multipath 3 pg_init_retries ...4 1]
Nov 26 06:31:41 controller multipathd[2922]: LUN0: event checker started
Nov 26 06:31:41 controller systemd[1]: Started Device-Mapper Multipath Device Controller.
Nov 26 06:31:41 controller multipathd[2922]: path checkers start up



Thursday, November 27, 2014

Run a Script Before Shutdown in Centos7

Immediately before executing the actual system halt/poweroff/reboot/kexec, systemd-shutdown will run all executables in /usr/lib/systemd/system-shutdown/ and pass one argument to them: either "halt", "poweroff", "reboot" or "kexec", depending on the chosen action. All executables in this directory are executed in parallel, and execution of the action does not continue until all executables have finished.

Note that systemd-halt.service (and the related units) should never be executed directly. Instead, trigger system shutdown with a command such as "systemctl halt" or suchlike.
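
A minimal sketch of such a script (the filename and the message are arbitrary examples); it must be executable and receives the action as its first argument:

cat > /usr/lib/systemd/system-shutdown/debug.sh <<'EOF'
#!/bin/sh
# $1 is "halt", "poweroff", "reboot" or "kexec"
echo "shutdown action: $1" > /run/shutdown-action.log
EOF
chmod +x /usr/lib/systemd/system-shutdown/debug.sh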

Thursday, November 20, 2014

Systemd - Systemctl In Rhel7/Centos7


Systemd is a system and service manager for Linux, compatible with SysV and LSB init scripts. systemd provides aggressive parallelization capabilities, uses socket and D-Bus activation for starting services, offers on-demand starting of daemons, keeps track of processes using Linux cgroups, supports snapshotting and restoring of the system state, maintains mount and automount points and implements an elaborate transactional dependency-based service control logic. It can work as a drop-in replacement for sysvinit.

Boot process

Systemd's primary task is to manage the boot process and provide information about it.

To get the boot process duration, type:

>> systemd-analyze
Startup finished in 422ms (kernel) + 2.722s (initrd) + 9.674s (userspace) = 12.820s
To get the time spent by each task during the boot process, type:

>> systemd-analyze blame
7.029s network.service
2.241s plymouth-start.service
1.293s kdump.service
1.156s plymouth-quit-wait.service
1.048s firewalld.service
632ms postfix.service
621ms tuned.service
460ms iprupdate.service
446ms iprinit.service
344ms accounts-daemon.service
...
7ms systemd-update-utmp-runlevel.service
5ms systemd-random-seed.service
5ms sys-kernel-config.mount
To get the list of the dependencies, type:

>> systemctl list-dependencies
default.target
├─abrt-ccpp.service
├─abrt-oops.service
...
├─tuned.service
├─basic.target
│ ├─firewalld.service
│ ├─microcode.service
...
├─getty.target
│ ├─getty@tty1.service
│ └─serial-getty@ttyS0.service
└─remote-fs.target
Note: You will find additional information on this point in Lennart Poettering's blog.

Journal analysis

In addition, Systemd handles the system event log; a syslog daemon is no longer mandatory.
To get the content of the Systemd journal, type:

>> journalctl
To get all the events related to the crond process in the journal, type:

>> journalctl /sbin/crond
Note: You can replace /sbin/crond by `which crond`.

To get all the events since the last boot, type:

>> journalctl -b
To get all the events that appeared today in the journal, type:

>> journalctl --since=today
To get all the events with a syslog priority of err, type:

>> journalctl -p err
To get the 10 last events and wait for any new one (like “tail -f /var/log/messages“), type:

>> journalctl -f
Note: You will find additional information on this point in Lennart Poettering's blog or in his video (44 min; the first ten minutes are very interesting concerning security issues).

Control groups

Systemd organizes tasks in control groups. For example, all the processes started by an apache webserver will be in the same control group, CGI scripts included.

To get the full hierarchy of control groups, type:

>> systemd-cgls
├─user.slice
│ └─user-1000.slice
│ └─session-1.scope
│ ├─2889 gdm-session-worker [pam/gdm-password]
│ ├─2899 /usr/bin/gnome-keyring-daemon --daemonize --login
│ ├─2901 gnome-session --session gnome-classic
. .
└─iprupdate.service
└─785 /sbin/iprupdate --daemon
To get the list of control group ordered by CPU, memory and disk I/O load, type:

>> systemd-cgtop
Path Tasks %CPU Memory Input/s Output/s
/ 213 3.9 829.7M - -
/system.slice 1 - - - -
/system.slice/ModemManager.service 1 - - - -
To kill all the processes associated with an apache server (CGI scripts included), type:

>> systemctl kill httpd
To put resource limits on a service (here 500 CPUShares), type:

>> systemctl set-property httpd.service CPUShares=500
Note1: The change is written into the service unit file. Use the --runtime option to avoid this behavior.
Note2: By default, each service owns 1024 CPUShares. Nothing prevents you from giving a value smaller or bigger.

To get the current CPUShares service value, type:

>> systemctl show -p CPUShares httpd.service
On this topic, you can additionally watch Georgios’ Magklaras demo (24min).

Sources: New control group interface, Systemd 205 announcement.

Service management

Systemd deals with all the aspects of the service management. The systemctl command replaces the chkconfig and the service commands. The old commands are now a link to the systemctl command.

To activate the NTP service at boot, type:

>> systemctl enable ntpd
Note1: You should specify ntpd.service but by default the .service suffix will be added.
Note2: If you specify a path, the .mount suffix will be added.
Note3: If you mention a device, the .device suffix will be added.

To deactivate it, start it, stop it, restart it, reload it, type:

>> systemctl disable ntpd
>> systemctl start ntpd
>> systemctl stop ntpd
>> systemctl restart ntpd
>> systemctl reload ntpd
Note: It is also possible to mask and unmask a service. Masking a service prevents it from being started manually or by another service.
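For example (ntpd is just the service used throughout this post):

>> systemctl mask ntpd
>> systemctl unmask ntpd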

To know if the NTP service is activated at boot, type:

>> systemctl is-enabled ntpd
enabled
To know if the NTP service is running, type:

>> systemctl is-active ntpd
inactive
To get the status of the NTP service, type:

>> systemctl status ntpd
ntpd.service
   Loaded: not-found (Reason: No such file or directory)
   Active: inactive (dead)
If you change a service configuration, you will need to reload it:

>> systemctl daemon-reload
To get the list of all the units (services, mount points, devices) with their status and description, type:

>> systemctl
To get a more readable list, type:

>> systemctl list-unit-files
To get the list of services that failed at boot, type:

>> systemctl --failed
To get the status of a process (here httpd) on a remote server (here test.example.com), type:

>> systemctl -H root@test.example.com status httpd.service
Run levels

Systemd also deals with run levels. As everything is represented by files in Systemd, target files replace run levels.

To move to single user mode, type:

>> systemctl rescue
To move to the level 3 (equivalent to the previous level 3), type:

>> systemctl isolate runlevel3.target
Or:

>> systemctl isolate multi-user.target
To move to the graphical level (equivalent to the previous level 5), type:

>> systemctl isolate graphical.target
To set the default run level to non-graphical mode, type:

>> systemctl set-default multi-user.target
To set the default run level to graphical mode, type:

>> systemctl set-default graphical.target
To get the current default run level, type:

>> systemctl get-default
graphical.target
To stop a server, type:

>> systemctl poweroff
Note: You can still use the poweroff command, a link to the systemctl command has been created (the same thing is true for the halt and reboot commands).

To reboot a server, suspend it or put it into hibernation, type:

>> systemctl reboot
>> systemctl suspend
>> systemctl hibernate
Linux standardization

Systemd's authors have decided to help Linux standardization among distributions. Through Systemd, the location of some configuration files has been standardized.

Miscellaneous

To get the server hostnames, type:

>> hostnamectl
Static hostname: test.example.com
Icon name: computer-laptop
Chassis: laptop
Machine ID: asdasdasdasdsadas9aa37e54a422938d
Boot ID: adasdasdasdasdac4a82fef4ac26d0
Operating System: Centos
CPE OS Name: cpe:/o:rCentos
Kernel: Linux 3.10.0-54.0.1.el7.x86_64
Architecture: x86_64
Note: There are three kinds of hostnames: static, pretty, and transient.
“The static host name is the traditional hostname, which can be chosen by the user, and is stored in the /etc/hostname file. The “transient” hostname is a dynamic host name maintained by the kernel. It is initialized to the static host name by default, whose value defaults to “localhost”. It can be changed by DHCP or mDNS at runtime. The pretty hostname is a free-form UTF8 host name for presentation to the user.” Source: Centos 7 Networking Guide.

To assign the test hostname permanently to the server, type:

>> hostnamectl set-hostname test
Note: With this syntax all three hostnames (static, pretty, and transient) take the test value at the same time. However, it is possible to set the three hostnames separately by using the --pretty, --static, and --transient options.

To get the current locale, virtual console keymap and X11 layout, type:

>> localectl
System Locale: LANG=en_US.UTF-8
VC Keymap: en_US
X11 Layout: en_US
To assign the en_GB.utf8 value to the locale, type:

>> localectl set-locale LANG=en_GB.utf8
To assign the en_GB value to the virtual console keymap, type:

>> localectl set-keymap en_GB
To assign the en_GB value to the X11 layout, type:

>> localectl set-x11-keymap en_GB
To get the current date and time, type:

>> timedatectl
Local time: Fri 2014-01-24 22:34:05 CET
Universal time: Fri 2014-01-24 21:34:05 UTC
RTC time: Fri 2014-01-24 21:34:05
Timezone: Europe/Madrid (CET, +0100)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: no
Last DST change: DST ended at
Sun 2013-10-27 02:59:59 CEST
Sun 2013-10-27 02:00:00 CET
Next DST change: DST begins (the clock jumps one hour forward) at
Sun 2014-03-30 01:59:59 CET
Sun 2014-03-30 03:00:00 CEST
To set the current date, type:

>> timedatectl set-time YYYY-MM-DD
To set the current time, type:

>> timedatectl set-time HH:MM:SS
To get the list of time zones, type:

>> timedatectl list-timezones
To change the time zone to America/New_York, type:

>> timedatectl set-timezone America/New_York
To get the users’ list, type:

>> loginctl list-users
UID USER
42 gdm
1000 tom
0 root
To get the list of all current user sessions, type:

>> loginctl list-sessions
SESSION UID USER SEAT
1 1000 tom seat0

1 sessions listed.
To get the properties of the user tom, type:

>> loginctl show-user tom
UID=1000
GID=1000
Name=tom
Timestamp=Fri 2014-01-24 21:53:43 CET
TimestampMonotonic=160754102
RuntimePath=/run/user/1000
Slice=user-1000.slice
Display=1
State=active
Sessions=1
IdleHint=no
IdleSinceHint=0
IdleSinceHintMonotonic=0

Sources: Archlinux wiki, Freedesktop wiki, Gentoo wiki, RHEL 7 System Administration Guide, Fedora wiki.

Sunday, October 19, 2014

Failed to issue method call: Unit iptables.service failed to load In Centos7

In RHEL 7 / CentOS 7, firewalld was introduced to manage iptables. IMHO, firewalld is more suited for workstations than for server environments.

It is possible to go back to a more classic iptables setup. First, stop and mask the firewalld service:

systemctl stop firewalld
systemctl mask firewalld
Then, install the iptables-services package:

yum install iptables-services
Enable the service at boot-time:

systemctl enable iptables
Managing the service

systemctl [stop|start|restart] iptables
Systemctl doesn't seem to manage the save action like you were able to do in the past with service:

/usr/libexec/iptables/iptables.init save

Monday, October 13, 2014

Single User Mode for Centos/Rhel 7

Single user mode is one of the run levels in the Linux operating system; Linux has 6 run levels that are used for different requirements and situations. Single user mode is mainly used for administrative tasks such as cleaning the file system, managing quotas, recovering the file system, and recovering a lost root password. In this mode services are not started, no user other than root is allowed to log in, and the system does not ask for a password to log in.

Step 1: While booting you will see the splash screen where GRUB counts down before booting the default operating system as mentioned in /etc/grub2.cfg; at this point press any key to interrupt the automatic boot.

Step 2: GRUB will list the operating systems installed on the machine (in my case only CentOS); below the list you will find some information about booting an entry and editing its parameters. To enter single user mode, select the operating system and press "e" to edit the kernel arguments.


Step 3: Once you have pressed "e", you will see the boot entry for the selected operating system. It shows the hard disk and partition where the OS is installed, the location of the kernel, language, video output, keyboard type, keyboard table, crash kernel and initrd (initial ram disk).
To enter single user mode, go to the line that starts with linux16 (or linuxefi) using the up and down arrows, then find the ro argument.



Step 4: Change ro to "rw init=/sysroot/bin/sh". Once done, press "Ctrl+x".

Now you should be at a command line with root privileges (without entering a password), and you can start troubleshooting or doing maintenance on your system.



You are now in single user mode. Use chroot to access your system:

chroot /sysroot
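
As an example of the password-recovery use case mentioned above, a common sequence after the chroot is (a sketch; the autorelabel file makes SELinux relabel the changed files on the next boot):

passwd root
touch /.autorelabel
exit
# then reboot the machine (e.g. reboot -f, or a hard reset if that fails)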

Saturday, October 11, 2014

CentOS 7 network install on VMWare Workstation network problem


If we install CentOS 7 on VMware Workstation 10, you can see that the network is not detected. This is because the CentOS 7 3.10 kernel no longer supports the emulated Ethernet controller device, so once the CentOS 7 OS is installed in VMware Workstation there will be no network inside the VM.

I found at https://access.redhat.com/discussions/722093 that its a Bug and can be fixed by editing the vmx file of Vmware Workstation.

I added the following line to my .vmx file:

ethernet0.virtualDev = "e1000"

Friday, October 3, 2014

Changing qpidd to rabbitmq for Openstack


Stopping qpidd on all servers

service qpidd stop
chkconfig qpidd off

Installing packages
yum install ntp rabbitmq-server

Configuring Cluster
On Controller1
yum install ntp rabbitmq-server
chown rabbitmq:rabbitmq /var/lib/rabbitmq/*
chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
/etc/init.d/rabbitmq-server restart
scp /var/lib/rabbitmq/.erlang.cookie root@controller2:/var/lib/rabbitmq/


On Controller2
yum install ntp rabbitmq-server
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@controller1
rabbitmqctl start_app
rabbitmqctl cluster_status


Edit the configuration files
Comment out the qpid entries in all the OpenStack configuration files and add the entry below. As RabbitMQ is the default message broker, the other RPC backend entries are not needed.

rabbit_host = controller

On both controllers, set the RabbitMQ HA policy as below


rabbitmqctl set_policy HA '^(?!amq\.).*' '{"ha-mode": "all"}'
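
To confirm the policy is in place (standard rabbitmqctl usage):

rabbitmqctl list_policies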