Sunday, September 29, 2013

Using mdadm to Configure RAID-Based and Multipath Storage

Like the tools in the raidtools package, the mdadm command can be used to perform all the necessary functions for administering multiple-device (md) sets. This section explains how mdadm can be used to:

Create a RAID device

Create a multipath device

22.3.1. Creating a RAID Device With mdadm

To create a RAID device, edit the /etc/mdadm.conf file to define appropriate DEVICE and ARRAY values:

DEVICE /dev/sd[abcd]1
ARRAY /dev/md0 devices=/dev/sda1,/dev/sdb1,/dev/sdc1,/dev/sdd1
In this example, the DEVICE line is using traditional file name globbing (refer to the glob(7) man page for more information) to define the following SCSI devices:

/dev/sda1

/dev/sdb1

/dev/sdc1

/dev/sdd1

The ARRAY line defines a RAID device (/dev/md0) that is comprised of the SCSI devices defined by the DEVICE line.

Prior to the creation or usage of any RAID devices, the /proc/mdstat file shows no active RAID devices:

Personalities :
read_ahead not set
Event: 0
unused devices: none
Next, use the above configuration and the mdadm command to create a RAID 0 array:

mdadm -C /dev/md0 --level=raid0 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 \
/dev/sdd1

Continue creating array? yes
mdadm: array /dev/md0 started.

(A two-disk RAID 1 mirror would be created in the same way, for example:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb1)
Once created, the RAID device can be queried at any time to provide status information. The following example shows the output from the command mdadm --detail /dev/md0:

/dev/md0:
Version : 00.90.00
Creation Time : Mon Mar 1 13:49:10 2004
Raid Level : raid0
Array Size : 15621632 (14.90 GiB 15.100 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Mon Mar 1 13:49:10 2004
State : dirty, no-errors
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0

Chunk Size : 64K

Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
3 8 49 3 active sync /dev/sdd1
UUID : 25c0f2a1:e882dfc0:c0fe135e:6940d932
Events : 0.1
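Rather than writing the ARRAY line for /etc/mdadm.conf by hand, it can also be generated from the running array; a minimal sketch:

# print an ARRAY line for every running array
mdadm --detail --scan
# append it to the configuration file if desired
mdadm --detail --scan >> /etc/mdadm.conf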

Boot Process in Linux (Redhat Linux & CentOS 5&6)







 

1. BIOS

§  BIOS stands for Basic Input/Output System

 

§  Performs some system integrity checks

 

§  Searches, loads, and executes the boot loader program.

 

§  It looks for a boot loader on floppy, CD-ROM, or hard drive. You can press a key (typically F12 or F2, but it depends on your system) during the BIOS startup to change the boot sequence.

 

§  Once the boot loader program is detected and loaded into the memory, BIOS gives the control to it.

 

§  So, in simple terms BIOS loads and executes the MBR boot loader.

2. MBR

§  MBR stands for Master Boot Record.

 

§  It is located in the 1st sector of the bootable disk. Typically /dev/hda, or /dev/sda

 

§  The MBR is 512 bytes in size: the first 446 bytes hold the primary boot loader, the next 64 bytes the partition table, and the last 2 bytes the MBR validation check.

 

§  It contains information about GRUB (or LILO in old systems).

 

§  So, in simple terms MBR loads and executes the GRUB boot loader.
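As a quick, read-only illustration (assuming the boot disk is /dev/sda), you can copy the first 512-byte sector to a temporary file and inspect it:

# copy only the first sector of the disk
dd if=/dev/sda of=/tmp/mbr.bin bs=512 count=1
# the file command typically identifies it as an x86 boot sector
file /tmp/mbr.bin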

3. GRUB

§  GRUB stands for Grand Unified Bootloader.

 

§  If you have multiple kernel images installed on your system, you can choose which one to be executed.

 

§  GRUB displays a splash screen, waits for a few seconds and, if you don't enter anything, loads the default kernel image as specified in the grub configuration file.

 

§  GRUB understands the filesystem (the older Linux loader LILO did not).

 

§  The GRUB configuration file is /boot/grub/grub.conf (/etc/grub.conf is a symlink to it). The following is a sample grub.conf from CentOS.


#boot=/dev/sda

default=0

timeout=5

splashimage=(hd0,0)/boot/grub/splash.xpm.gz

hiddenmenu

title CentOS (2.6.18-194.el5PAE)

          root (hd0,0)

          kernel /boot/vmlinuz-2.6.18-194.el5PAE ro root=LABEL=/

          initrd /boot/initrd-2.6.18-194.el5PAE.img


 

§  As you can see above, it specifies the kernel and initrd images.

 

§  So, in simple terms GRUB just loads and executes Kernel and initrd images.

4. Init

§  Looks at the /etc/inittab file to decide the Linux run level.

 

§  Following are the available run levels

§  0 – halt

§  1 – Single user mode

§  2 – Multiuser, without NFS

§  3 – Full multiuser mode

§  4 – unused

§  5 – X11

§  6 – reboot

 

§  Init identifies the default run level from /etc/inittab and uses that to load all the appropriate programs.

 

§  Execute ‘grep initdefault /etc/inittab’ on your system to identify the default run level

 

§  If you want to get into trouble, set the default run level to 0 or 6; since 0 halts the system and 6 reboots it, you probably do not want to do that.

 

§  Typically you would set the default run level to either 3 or 5.
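A minimal check and change of the default run level (the inittab line shown here is only a typical example; your file may differ):

# show the configured default run level
grep initdefault /etc/inittab
# typical output: id:5:initdefault:

# switch to run level 3 for the current session only
init 3

# to change the default permanently, edit the initdefault line in /etc/inittab, e.g.:
# id:3:initdefault: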

5. Runlevel programs

§  When the Linux system is booting up, you might see various services getting started. For example, it might say “starting sendmail …. OK”. Those are the runlevel programs, executed from the run level directory as defined by your run level.

 

§  Depending on your default init level setting, the system will execute the programs from one of the following directories.

 

§  Run level 0 – /etc/rc.d/rc0.d/

§  Run level 1 – /etc/rc.d/rc1.d/

§  Run level 2 – /etc/rc.d/rc2.d/

§  Run level 3 – /etc/rc.d/rc3.d/

§  Run level 4 – /etc/rc.d/rc4.d/

§  Run level 5 – /etc/rc.d/rc5.d/

§  Run level 6 – /etc/rc.d/rc6.d/

 

§  Please note that there are also symbolic links to these directories directly under /etc. So, /etc/rc0.d is linked to /etc/rc.d/rc0.d.

 

§  Under the /etc/rc.d/rc*.d/ directories, you would see programs that start with S and K.

 

§  Programs starting with S are run during startup (S for startup).

 

§  Programs starting with K are run during shutdown (K for kill).

 

§  The numbers right next to S and K in the program names are the sequence in which the programs should be started or killed.

 

§  For example, S12syslog starts the syslog daemon, which has sequence number 12. S80sendmail starts the sendmail daemon, which has sequence number 80. So the syslog program will be started before sendmail.
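To see this on a real system, list the run level 3 directory; the exact entries vary from system to system, and on RHEL/CentOS the chkconfig tool manages these links for you:

ls /etc/rc.d/rc3.d/
# sample entries: K35smb  S12syslog  S55sshd  S80sendmail

# show or change whether a service starts at given run levels
chkconfig --list sshd
chkconfig --level 35 sshd on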

 





How to Check and Modify Linux Kernel Performance on Linux RHEL / CentOS VM swap tuning

vm.swappiness is a tunable kernel parameter that controls how much the kernel favors swap over RAM. At the source code level, it’s also defined as the tendency to steal mapped memory. A high swappiness value means that the kernel will be more apt to unmap mapped pages. A low swappiness value means the opposite, the kernel will be less apt to unmap mapped pages. In other words, the higher the vm.swappiness value, the more the system will swap.

The default value I have seen on RHEL/CentOS/SLES  is 60.

To find out what the value is on a particular server, run this command:

[root@station1 Documents]# sysctl vm.swappiness
vm.swappiness = 60

The value is also located in /proc/sys/vm/swappiness.

[root@station1 Documents]# cat /proc/sys/vm/swappiness
60

Note: the value can range from 0 (minimum) to 100 (maximum).
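To change the value, set it at runtime with sysctl and make it persistent in /etc/sysctl.conf; a sketch using 10 as an example value:

# change the running value immediately
sysctl -w vm.swappiness=10
# or, equivalently
echo 10 > /proc/sys/vm/swappiness

# make the setting survive reboots
echo "vm.swappiness = 10" >> /etc/sysctl.conf
sysctl -p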

LVM and RAID

Logical volume management is a widely-used technique for deploying logical rather than physical storage. With LVM, "logical" partitions can span across physical hard drives and can be resized (unlike traditional ext3 "raw" partitions). A physical disk is divided into one or more physical volumes (PVs), and volume groups (VGs) are created by combining PVs, as shown in Figure 1 (LVM internal organization). Notice that a VG can be an aggregate of PVs from multiple physical disks.

Figure 2 (Mapping logical extents to physical extents) shows how the logical volumes are mapped onto physical volumes. Each PV consists of a number of fixed-size physical extents (PEs); similarly, each LV consists of a number of fixed-size logical extents (LEs). (LEs and PEs are always the same size; the default in LVM 2 is 4 MB.) An LV is created by mapping logical extents to physical extents, so that references to logical block numbers are resolved to physical block numbers. These mappings can be constructed to achieve particular performance, scalability, or availability goals.

 

 

For example, multiple PVs can be connected together to create a single large logical volume, as shown in Figure 3 (LVM linear mapping). This approach, known as a linear mapping, allows a file system or database larger than a single volume to be created using two physical disks. An alternative approach is a striped mapping, in which stripes (groups of contiguous physical extents) from alternate PVs are mapped to a single LV, as shown in Figure 4 (LVM striped mapping). The striped mapping allows a single logical volume to nearly achieve the combined performance of two PVs and is used quite often to achieve high-bandwidth disk transfers.

Through these different types of logical-to-physical mappings, LVM can achieve four important advantages over raw physical partitions:

1.   Logical volumes can be resized while they are mounted and accessible by the database or file system, removing the downtime associated with adding or deleting storage from a Linux server

2.   Data from one (potentially faulty or damaged) physical device may be relocated to another device that is newer, faster or more resilient, while the original volume remains online and accessible

3.   Logical volumes can be constructed by aggregating physical devices to increase performance (via disk striping) or redundancy (via disk mirroring and I/O multipathing)

4.   Logical volume snapshots can be created to represent the exact state of the volume at a certain point in time, allowing accurate backups to proceed simultaneously with regular system operation
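As an illustration of point 4, a snapshot LV can be created with lvcreate -s; the volume group, volume, and mount point names below are placeholders:

# create a 1 GB snapshot of an existing logical volume
lvcreate -s -L 1G -n homevol_snap /dev/myvg/homevol

# mount it read-only, take the backup, then discard the snapshot
mkdir -p /mnt/snap
mount -o ro /dev/myvg/homevol_snap /mnt/snap
umount /mnt/snap
lvremove /dev/myvg/homevol_snap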


Basic LVM commands

Initializing disks or disk partitions

To use LVM, partitions and whole disks must first be converted into physical volumes (PVs) using the pvcreate command. For example, to convert /dev/hda and /dev/hdb into PVs use the following commands:

#pvcreate /dev/hda /dev/hdb

If a Linux partition is to be converted make sure that it is given partition type 0x8E using fdisk, then use pvcreate:

#pvcreate /dev/hda1

Creating a volume group:

Once you have one or more physical volumes created, you can create a volume group from these
PVs using the vgcreate command. The following command:

#vgcreate volume_group_one /dev/hda /dev/hdb

creates a new VG called volume_group_one with two disks, /dev/hda and /dev/hdb, and 4 MB PEs. If both /dev/hda and /dev/hdb are 128 GB in size, then the VG volume_group_one will have a total of 2**16 physical extents that can be allocated to logical volumes.

Additional PVs can be added to this volume group using the vgextend command. The following commands convert /dev/hdc into a PV and then adds that PV to volume_group_one:
#pvcreate /dev/hdc
#vgextend volume_group_one /dev/hdc
This same PV can be removed from volume_group_one by the vgreduce command:

#vgreduce volume_group_one /dev/hdc

Note that any logical volumes using physical extents from PV /dev/hdc will be removed as well. This raises the issue of how we create an LV within a volume group in the first place.

Creating a logical volume:

We use the lvcreate command to create a new logical volume using the free physical extents in the VG pool. Continuing our example using VG volume_group_one (with two PVs /dev/hda and /dev/hdb and a total capacity of 256 GB), we could allocate nearly all the PEs in the
volume group to a single linear LV called logical_volume_one with the following LVM
command:


#lvcreate -n logical_volume_one --size 255G volume_group_one
Instead of specifying the LV size in GB we could also specify it in terms of logical extents. First we use vgdisplay to determine the number of PEs in the volume_group_one:

#vgdisplay volume_group_one | grep "Total PE"

which returns

Total     PE      65536

Then the following lvcreate command will create a logical volume with 65536 logical extents and fill the volume group completely:

#lvcreate -n logical_volume_one -l 65536 volume_group_one

To create a 1500MB linear LV named logical_volume_one and its block device special file
/dev/volume_group_one/logical_volume_one use the following command:
#lvcreate -L1500 -n logical_volume_one volume_group_one

The lvcreate command uses linear mappings by default.

Striped mappings can also be created with lvcreate. For example, to create a 255 GB large logical volume with two stripes and stripe size of 4 KB the following command can be used:
#lvcreate -i2 -I4 --size 255G -n logical_volume_one_striped volume_group_one

If you want the logical volume to be allocated from a specific physical volume in the volume group, specify the PV or PVs at the end of the lvcreate command line. For example, this command:

#lvcreate -i2 -I4 -L128G -n logical_volume_one_striped volume_group_one /dev/hda /dev/hdb

creates a 128 GB striped LV named logical_volume_one_striped that is striped across two PVs (/dev/hda and /dev/hdb) with a stripe size of 4 KB.

An LV can be removed from a VG through the lvremove command, but first the LV must be unmounted:

#umount /dev/volume_group_one/logical_volume_one
#lvremove /dev/volume_group_one/logical_volume_one

Note that LVM volume groups and underlying logical volumes are included in the device special file directory tree in the /dev directory with the following layout:

#/dev/<volume_group_name>/<logical_volume_name>

so that if we had two volume groups myvg1 and myvg2 and each with three logical volumes named lv01, lv02, lv03, six device special files would be created:

/dev/myvg1/lv01
/dev/myvg1/lv02
/dev/myvg1/lv03
/dev/myvg2/lv01
/dev/myvg2/lv02
/dev/myvg2/lv03

Extending a logical volume

An LV can be extended by using the lvextend command. You can specify either an absolute size for the extended LV or how much additional storage you want to add to the LV. For example:

#lvextend -L120G /dev/myvg/homevol

will extend LV /dev/myvg/homevol to 120 GB, while

#lvextend -L+10G /dev/myvg/homevol

will extend LV /dev/myvg/homevol by an additional 10 GB. Once a logical volume has been extended, the underlying file system can be expanded to exploit the additional storage now available on the LV. With Red Hat Enterprise Linux 4, it is possible to expand both the ext3fs and GFS file systems online, without bringing the system down. (The ext3 file system can be shrunk or expanded offline using the ext2resize command.) To resize ext3fs, the following command

#ext2online /dev/myvg/homevol

will extend the ext3 file system to completely fill the LV, /dev/myvg/homevol, on which it resides.

The file system specified by device (partition, loop device, or logical volume) or mount point must currently be mounted, and it will be enlarged to fill the device, by default. If an optional size parameter is specified, then this size will be used instead.

Differences between LVM1 and LVM2

The new release of LVM, LVM 2, is available only on Red Hat Enterprise Linux 4 and later kernels. It is upwardly compatible with LVM 1 and retains the same command line interface structure. However it uses a new, more scalable and resilient metadata structure that allows for transactional metadata updates (that allow quick recovery after server failures), very large numbers of devices, and clustering. For Enterprise Linux servers deployed in mission-critical environments that require high availability, LVM2 is the right choice for Linux volume management. Table 1 (A comparison of LVM 1 and LVM 2) summarizes the differences between LVM1 and LVM2 in features, kernel support, and other areas.

Features                                    LVM1                  LVM2
RHEL AS 2.1 support                         No                    No
RHEL 3 support                              Yes                   No
RHEL 4 support                              No                    Yes
Transactional metadata for fast recovery    No                    Yes
Shared volume mounts with GFS               No                    Yes
Cluster Suite failover supported            Yes                   Yes
Striped volume expansion                    No                    Yes
Max number PVs, LVs                         256 PVs, 256 LVs      2**32 PVs, 2**32 LVs
Max device size                             2 Terabytes           8 Exabytes (64-bit CPUs)
Volume mirroring support                    No                    Yes, in Fall 2005

(Table 1. A comparison of LVM 1 and LVM 2)


RAID

Introduction
RAID stands for Redundant Array of Inexpensive Disks. This is a solution where several physical hard disks (two or more) are governed by a unit called RAID controller, which turns them into a single, cohesive data storage block.

An example of a RAID configuration would be to take two hard disks, each 80GB in size, and RAID them into a single unit 160GB in size. Another example of RAID would be to take these two disks and write data to each, creating two identical copies of everything.

RAID controllers can be implemented in hardware, which makes the RAID completely transparent to the operating systems running on top of these disks, or it can be implemented in software, which is the case we are interested in.

Purpose of RAID

RAID is used to increase the logical capacity of storage devices used, improve read/write performance and ensure redundancy in case of a hard disk failure. All these needs can be addressed by other means, usually more expensive than the RAID configuration of several hard disks. The adjective Inexpensive used in the name is not without a reason.

Advantages

The major pluses of RAID are the cost and flexibility. It is possible to dynamically adapt to the growing or changing needs of a storage center, server performance or machine backup requirements merely by changing parameters in software, without physically touching the hardware. This makes RAID more easily implemented than equivalent hardware solutions.

For instance, improved performance can be achieved by buying better, faster hard disks and using them instead of the old ones. This necessitates spending money, turning off the machine, swapping out physical components, and performing a new installation. RAID can achieve the same with only a new installation required. In general, advantages include:

•     Improved read/write performance in some RAID configurations.

•     Improved redundancy in the case of a failure in some RAID configurations.

•     Increased flexibility in hard disk & partition layout.

Disadvantages

The problems with RAID are directly related to its advantages. For instance, a configuration such as RAID 0 improves performance but necessarily reduces the safety of the data; conversely, configurations with increased redundancy reduce space efficiency. Other possible problems with RAID include:

•     Increased wear of hard disks, leading to an increased failure rate.

•     Lack of compatibility with other hardware components and some software, like system imaging programs.

•     Greater difficulty in performing backups and system rescue/restore in the case of a failure.

•     The need for support from any operating system expected to use the RAID.

Limitations

RAID introduces a higher level of complexity into the system compared to conventional disk layout. This means that certain operating systems and/or software solutions may not work as intended. A good example of this problem is the  LKCD kernel crash utility, which cannot be used in local dump configuration with RAID devices.

The problem with software limitations is that they might not be apparent until after the system has been configured, complicating things.

To sum things up for this section, using RAID requires careful consideration of system needs. In home setups, RAID is usually not needed, except for people who require exceptional performance or a very high level of redundancy. Still, if you do opt for RAID, be aware of the pros and cons and plan accordingly.

This means testing the backup and imaging solutions, the stability of installed software and the ability to switch away from RAID without significantly disrupting your existing setup.

RAID levels

In the section above, we have mentioned several scenarios, where this or that RAID configuration may benefit this or that aspect of system work. These configurations are known as RAID levels and they govern all aspects of RAID benefits and drawbacks, including read/write performance, redundancy and space efficiency.

There are many RAID levels. It will be impossible to list them all here. For details on all available solutions, you might want to read the  Wikipedia article on the subject. The article not only presents the different levels, it also lists the support for each on different operating systems.

In this tutorial, we will mention the most common, most important RAID types, all of which are fully supported by Linux.

RAID 0 (Striping)

This level is achieved by grouping 2 or more hard disks into a single unit with the total size equaling that of all disks used. Practical example: 3 disks, each 80GB in size, can be used in a 240GB RAID 0 configuration.

RAID 0 works by breaking data into fragments and writing to all disks simultaneously. This significantly improves the read and write performance. On the other hand, no single disk contains the entire information for any bit of data committed. This means that if one of the disks fails, the entire RAID is rendered inoperable, with unrecoverable loss of data.

RAID 0 is suitable for non-critical operations that require good performance, like the system partition or the /tmp partition where lots of temporary data is constantly written. It is not suitable for data storage.
  

 

RAID 1 (Mirroring)

This level is achieved by grouping 2 or more hard disks into a single unit with the total size equaling that of the smallest of the disks used. This is because RAID 1 keeps every bit of data replicated on each of its devices in exactly the same fashion, creating identical clones. Hence the name, mirroring. Practical example: 2 disks, each 80GB in size, can be used in an 80GB RAID 1 configuration. On a side note, in mathematical terms, RAID 1 is an AND function, whereas RAID 0 is an OR.

Because of its configuration, RAID 1 reduces write performance, as every chunk of data has to be written n times, once on each of the paired devices. The read performance is identical to that of single disks. Redundancy is improved, as normal operation of the system can be maintained as long as any one disk is functional. RAID 1 is suitable for data storage, especially with non-intensive I/O tasks.

 

 

RAID 5

This is a more complex solution, with a minimum of three devices used. Data is striped across the devices as in RAID 0, but parity information is also stored (in RAID 5 the parity blocks are distributed across all of the devices rather than kept on a single dedicated disk, which would be RAID 4). If one of the devices malfunctions, the array will continue operating, rebuilding the missing data from parity. The failure will be transparent to the user, save for the reduced performance.

RAID 5 improves read performance, as well as redundancy, and is useful in mission-critical scenarios where both good throughput and data integrity are important. RAID 5 does induce a slight CPU penalty due to parity calculations.
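Using the mdadm tool covered earlier, a three-disk RAID 5 array could be created roughly as follows (the partition names are placeholders):

# three active devices; add --spare-devices=1 and a fourth partition for a hot spare
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

# watch the initial parity build
cat /proc/mdstat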
  

 

Linear RAID

This is a less common level, although fully usable. Linear is similar to RAID 0, except that data is written sequentially rather than in parallel. Linear RAID is a simple grouping of several devices into a larger volume, the total size of which is the sum of all members. For instance, three disks of 40, 60 and 250GB can be grouped into a linear RAID with a total size of 350GB.

Linear RAID provides no read/write performance gain, nor does it provide redundancy; the loss of any member will render the entire array unusable. It merely increases size. It is very similar to LVM's linear mapping. Linear RAID is suitable when data larger than the individual size of any disk or partition must be stored.

Other levels

There are several other levels available. For example, RAID 6 is very similar to RAID 5, except that it has dual parity. Then, there are also nested levels, which combine different levels in a single set. For instance, RAID 0+1 is a nested set of striped devices in a mirror configuration. This setup requires a minimum of four disks.

These setups are less common, more complex and more suitable for business rather than home environment, therefore we won't talk about those in this tutorial. Still, it is good to know about them, in case you ever need them.

Bash Shell Scripting

1. Hello bourne  -  Bash Shell Scripting


First you need to find out where your bash interpreter is located. Enter the following into your command line:

$ which bash
 
bash interpreter location:
/bin/bash


Open up your favorite text editor and create a file called hello_bourne.sh. Insert the following lines into the file:
NOTE: Every bash shell script in this tutorial starts with a shebang ("#!"), which is not read as a comment. The first line is also the place where you put your interpreter, which in this case is /bin/bash.
Here is our first bash shell script example:

#!/bin/bash
# declare STRING variable
STRING="Hello bourne"
#print variable on a screen
echo $STRING


Navigate to a directory where your hello_bourne.sh is located and make the file executable:
$ chmod +x hello_bourne.sh 
 
Make bash shell script executable


Now you are ready to execute your first bash script:


./hello_bourne.sh 
 


Example of simple bash shell script




2. Simple Backup bash shell script


 

#!/bin/bash
tar -czf myhome_directory.tar.gz /home/linuxconfig







Simple Backup bash script

3. Variables


In this example we declare a simple bash variable and print it on the screen (stdout) with the echo command.
#!/bin/bash
STRING="HELLO BOURNE!!!"
echo $STRING


Bash string Variables in bash script


Your backup script and variables:
#!/bin/bash
OF=myhome_directory_$(date +%Y%m%d).tar.gz
tar -czf $OF /home/linuxconfig




Bash backup Script with bash Variables

3.1. Global vs. Local variables


 
#!/bin/bash
#Define bash global variable
#This variable is global and can be used anywhere in this bash script
VAR="global variable"
function bash {
#Define bash local variable
#This variable is local to bash function only
local VAR="local variable"
echo $VAR
}
echo $VAR
bash
# Note the bash global variable did not change
# "local" is bash reserved word
echo $VAR



Global vs. Local Bash variables in bash script

4. Passing arguments to the bash script


 
#!/bin/bash
# use predefined variables to access passed arguments
#echo arguments to the shell
echo $1 $2 $3 ' -> echo $1 $2 $3'

# We can also store arguments from bash command line in special array
args=("$@")
#echo arguments to the shell
echo ${args[0]} ${args[1]} ${args[2]} ' -> args=("$@"); echo ${args[0]} ${args[1]} ${args[2]}'

#use $@ to print out all arguments at once
echo $@ ' -> echo $@'

# use $# variable to print out
# number of arguments passed to the bash script
echo Number of arguments passed: $# ' -> echo Number of arguments passed: $#'

 
./arguments.sh Bash Scripting Tutorial



 
Passing arguments to the bash script

5. Executing shell commands with bash


 
#!/bin/bash
# use backticks " ` ` " to execute shell command
echo `uname -o`
# executing bash command without backticks
echo uname -o



 Executing shell commands with bash
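Note that the POSIX-style $( ) form of command substitution does the same thing as backticks and nests more cleanly:

#!/bin/bash
# $( ) is equivalent to the backticks above
echo $(uname -o)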

6. Reading User Input


 
#!/bin/bash

echo -e "Hi, please type the word: \c "
read word
echo "The word you entered is: $word"
echo -e "Can you please enter two words? "
read word1 word2
echo "Here is your input: \"$word1\" \"$word2\""
echo -e "How do you feel about bash scripting? "
# read command now stores a reply into the default built-in variable $REPLY
read
echo "You said $REPLY, I'm glad to hear that! "
echo -e "What are your favorite colours ? "
# -a makes read command to read into an array
read -a colours
echo "My favorite colours are also ${colours[0]}, ${colours[1]} and ${colours[2]}:-)"



 Reading User Input with bash

7. Bash Trap Command


 









#!/bin/bash
# bash trap command
trap bashtrap INT
# bash clear screen command
clear;
# bash trap function is executed when CTRL-C is pressed:
# bash prints message => Executing bash trap subroutine !
bashtrap()
{
echo "CTRL+C Detected !...executing bash trap !"
}
# for loop from 1/10 to 10/10
for a in `seq 1 10`; do
echo "$a/10 to Exit."
sleep 1;
done
echo "Exit Bash Trap Example!!!"





 

8. Arrays


 

8.1. Declare simple bash array


 
#!/bin/bash
#Declare array with 4 elements
ARRAY=( 'Debian Linux' 'Redhat Linux' Ubuntu Linux )
# get number of elements in the array
ELEMENTS=${#ARRAY[@]}

# echo each element in array
# for loop
for (( i=0;i<$ELEMENTS;i++)); do
echo ${ARRAY[${i}]}
done


Declare simple bash array

8.2. Read file into bash array


 

#!/bin/bash
# Declare array
declare -a ARRAY
# Link filedescriptor 10 with stdin
exec 10<&0
# stdin replaced with a file supplied as a first argument
exec < $1
let count=0

while read LINE; do

ARRAY[$count]=$LINE
((count++))
done

echo Number of elements: ${#ARRAY[@]}
# echo array's content
echo ${ARRAY[@]}
# restore stdin from filedescriptor 10
# and close filedescriptor 10
exec 0<&10 10<&-


Bash script execution with an output:
linuxconfig.org $ cat bash.txt 
Bash
Scripting
Tutorial
Guide
linuxconfig.org $ ./bash-script.sh bash.txt
Number of elements: 4
Bash Scripting Tutorial Guide
linuxconfig.org $

 

9. Bash if / else / fi statements


 

9.1. Simple Bash if/else statement


Please note the spacing inside the [ and ] brackets! Without the spaces, it won't work!
#!/bin/bash
directory="./BashScripting"

# bash check if directory exists
if [ -d $directory ]; then
echo "Directory exists"
else
echo "Directory does not exists"
fi


 Bash if else fi statement

9.2. Nested if/else


 
#!/bin/bash

# Declare variable choice and assign value 4
choice=4
# Print to stdout
echo "1. Bash"
echo "2. Scripting"
echo "3. Tutorial"
echo -n "Please choose a word [1,2 or 3]? "
# Loop while the variable choice is equal 4
# bash while loop
while [ $choice -eq 4 ]; do

# read user input
read choice
# bash nested if/else
if [ $choice -eq 1 ] ; then

echo "You have chosen word: Bash"

else

if [ $choice -eq 2 ] ; then
echo "You have chosen word: Scripting"
else

if [ $choice -eq 3 ] ; then
echo "You have chosen word: Tutorial"
else
echo "Please make a choice between 1-3 !"
echo "1. Bash"
echo "2. Scripting"
echo "3. Tutorial"
echo -n "Please choose a word [1,2 or 3]? "
choice=4
fi
fi
fi
done


Nested Bash if else statement

10. Bash Comparisons


 

10.1. Arithmetic Comparisons


 



























-lt    <
-gt    >
-le    <=
-ge    >=
-eq    ==
-ne    !=

 
#!/bin/bash
# declare integers
NUM1=2
NUM2=2
if [ $NUM1 -eq $NUM2 ]; then
echo "Both Values are equal"
else
echo "Values are NOT equal"
fi



  Bash Arithmetic Comparisons


#!/bin/bash
# declare integers
NUM1=2
NUM2=1
if [ $NUM1 -eq $NUM2 ]; then
echo "Both Values are equal"
else
echo "Values are NOT equal"
fi



  Bash Arithmetic Comparisons - values are NOT equal


#!/bin/bash
# declare integers
NUM1=2
NUM2=1
if [ $NUM1 -eq $NUM2 ]; then
echo "Both Values are equal"
elif [ $NUM1 -gt $NUM2 ]; then
echo "NUM1 is greater then NUM2"
else
echo "NUM2 is greater then NUM1"
fi


Bash Arithmetic Comparisons - greater than

10.2. String Comparisons


 



























=        equal
!=       not equal
<        less than
>        greater than
-n s1    string s1 is not empty
-z s1    string s1 is empty

 
#!/bin/bash
#Declare string S1
S1="Bash"
#Declare string S2
S2="Scripting"
if [ $S1 = $S2 ]; then
echo "Both Strings are equal"
else
echo "Strings are NOT equal"
fi



Bash String Comparisons - values are NOT equal
 
#!/bin/bash
#Declare string S1
S1="Bash"
#Declare string S2
S2="Bash"
if [ $S1 = $S2 ]; then
echo "Both Strings are equal"
else
echo "Strings are NOT equal"
fi




11. Bash File Testing


 



































































-b filename         Block special file
-c filename         Special character file
-d directoryname    Check for directory existence
-e filename         Check for file existence
-f filename         Check for regular file existence, not a directory
-G filename         Check if file exists and is owned by effective group ID
-g filename         True if file exists and is set-group-id
-k filename         Sticky bit
-L filename         Symbolic link
-O filename         True if file exists and is owned by the effective user id
-r filename         Check if file is readable
-S filename         Check if file is a socket
-s filename         Check if file is nonzero size
-u filename         Check if file set-user-id bit is set
-w filename         Check if file is writable
-x filename         Check if file is executable

 
#!/bin/bash
file="./file"
if [ -e $file ]; then
echo "File exists"
else
echo "File does not exists"
fi



Bash File Testing - File does not exist                      Bash File Testing - File exists


Similarly, we can use a while loop to check if a file does not exist. This script will sleep until the file exists. Note the bash negation operator "!", which negates the -e option.
#!/bin/bash

while [ ! -e myfile ]; do
# Sleep until file does exists/is created
sleep 1
done

 

12. Loops


 

12.1. Bash for loop


 
#!/bin/bash

# bash for loop
for f in $( ls /var/ ); do
echo $f
done

Running for loop from bash shell command line:
$ for f in $( ls /var/ ); do echo $f; done 



  Bash for loop

12.2. Bash while loop


 
#!/bin/bash
COUNT=6
# bash while loop
while [ $COUNT -gt 0 ]; do
echo Value of count is: $COUNT
let COUNT=COUNT-1
done



  Bash while loop


12.3. Bash until loop


 
#!/bin/bash
COUNT=0
# bash until loop
until [ $COUNT -gt 5 ]; do
echo Value of count is: $COUNT
let COUNT=COUNT+1
done



Bash until loop


12.4. Control bash loop with standard input


Here is an example of a while loop controlled by standard input. As long as the redirection chain from STDOUT to STDIN to the read command exists, the while loop continues.
#!/bin/bash
# This bash script will locate and replace spaces
# in the filenames
DIR="."
# Controlling a loop with bash read command by redirecting STDOUT as
# a STDIN to while loop
# find will not truncate filenames containing spaces
find $DIR -type f | while read file; do
# using POSIX class [:space:] to find space in the filename
if [[ "$file" = *[[:space:]]* ]]; then
# substitute space with "_" character and consequently rename the file
mv "$file" `echo $file | tr ' ' '_'`
fi;
# end of while loop
done



Bash script to replace spaces in the filenames with   _


13. Bash Functions


 
#!/bin/bash
# BASH FUNCTIONS CAN BE DECLARED IN ANY ORDER
function function_B {
echo Function B.
}
function function_A {
echo $1
}
function function_D {
echo Function D.
}
function function_C {
echo $1
}
# FUNCTION CALLS
# Pass parameter to function A
function_A "Function A."
function_B
# Pass parameter to function C
function_C "Function C."
function_D



Bash Functions


14. Bash Select


 
#!/bin/bash

PS3='Choose one word: '

# bash select
select word in "linux" "bash" "scripting" "tutorial"
do
echo "The word you have selected is: $word"
# Break, otherwise endless loop
break
done

exit 0



Bash Select


15. Case statement conditional


 
#!/bin/bash
echo "What is your preferred programming / scripting language"
echo "1) bash"
echo "2) perl"
echo "3) phyton"
echo "4) c++"
echo "5) I do not know !"
read case;
#simple case bash structure
# note in this case $case is variable and does not have to
# be named case this is just an example
case $case in
1) echo "You selected bash";;
2) echo "You selected perl";;
3) echo "You selected phyton";;
4) echo "You selected c++";;
5) exit
esac



bash case statement conditional


16. Bash quotes and quotations


Quotations and quotes are an important part of bash and bash scripting. Here are some bash quotes and quotations basics.

16.1. Escaping Meta characters


Before we start with quotes and quotations, we should know something about escaping meta characters. Escaping will suppress the special meaning of meta characters, so that they are read by bash literally. To do this we need to use the backslash "\" character. Example:
#!/bin/bash

#Declare bash string variable
BASH_VAR="Bash Script"

# echo variable BASH_VAR
echo $BASH_VAR

#when a meta character such as "$" is escaped with "\" it will be read literally
echo \$BASH_VAR

# backslash has also special meaning and it can be suppressed with yet another "\"
echo "\\"



  escaping meta characters in bash

16.2. Single quotes


Single quotes in bash will suppress the special meaning of every meta character. Therefore, meta characters will be read literally. It is not possible to use another single quote within two single quotes, not even if the single quote is escaped by a backslash.
#!/bin/bash

#Declare bash string variable
BASH_VAR="Bash Script"

# echo variable BASH_VAR
echo $BASH_VAR

# meta characters special meaning in bash is suppressed when using single quotes
echo '$BASH_VAR "$BASH_VAR"'



Using single quotes in bash


16.3. Double Quotes


Double quotes in bash will suppress the special meaning of every meta character except "$", "\" and "`". Any other meta characters will be read literally. It is also possible to use a single quote within double quotes. If we need to use double quotes within double quotes, bash can read them literally when we escape them with "\". Example:
#!/bin/bash

#Declare bash string variable
BASH_VAR="Bash Script"

# echo variable BASH_VAR
echo $BASH_VAR

# meta characters and its special meaning in bash is
# suppressed when using double quotes except "$", "\" and "`"

echo "It's $BASH_VAR and \"$BASH_VAR\" using backticks: `date`"



  Using double quotes in bash

16.4. Bash quoting with ANSI-C style


There is also another type of quoting and that is ANSI-C. In this type of quoting characters escaped with "\" will gain special meaning according to the ANSI-C standard.







































\a      alert (bell)
\b      backspace
\e      an escape character
\f      form feed
\n      newline
\r      carriage return
\t      horizontal tab
\v      vertical tab
\\      backslash
\'      single quote
\nnn    octal value of characters (see the ASCII table: http://www.asciitable.com/)
\xnn    hexadecimal value of characters (see the ASCII table: http://www.asciitable.com/)

The syntax for ANSI-C bash quoting is: $'...' . Here is an example:
#!/bin/bash

# as a example we have used \n as a new line, \x40 is hex value for @
# and \56 is octal value for .
echo $'web: www.linuxconfig.org\nemail: web\x40linuxconfig\56org'



  quoting in bash with ansi-c stype


17. Arithmetic Operations


 

17.1. Bash Addition Calculator Example


 
#!/bin/bash

let RESULT1=$1+$2
echo $1+$2=$RESULT1 ' -> # let RESULT1=$1+$2'
declare -i RESULT2
RESULT2=$1+$2
echo $1+$2=$RESULT2 ' -> # declare -i RESULT2; RESULT2=$1+$2'
echo $1+$2=$(($1 + $2)) ' -> # $(($1 + $2))'



Bash Addition Calculator


17.2. Bash Arithmetics


 
#!/bin/bash

echo '### let ###'
# bash addition
let ADDITION=3+5
echo "3 + 5 =" $ADDITION

# bash subtraction
let SUBTRACTION=7-8
echo "7 - 8 =" $SUBTRACTION

# bash multiplication
let MULTIPLICATION=5*8
echo "5 * 8 =" $MULTIPLICATION

# bash division
let DIVISION=4/2
echo "4 / 2 =" $DIVISION

# bash modulus
let MODULUS=9%4
echo "9 % 4 =" $MODULUS

# bash power of two
let POWEROFTWO=2**2
echo "2 ^ 2 =" $POWEROFTWO

echo '### Bash Arithmetic Expansion ###'
# There are two formats for arithmetic expansion: $[ expression ]
# and $(( expression )); it's your choice which one you use

echo 4 + 5 = $((4 + 5))
echo 7 - 7 = $[ 7 - 7 ]
echo 4 x 6 = $((4 * 6))
echo 6 / 3 = $((6 / 3))
echo 8 % 7 = $((8 % 7))
echo 2 ^ 8 = $[ 2 ** 8 ]

echo '### Declare ###'

echo -e "Please enter two numbers \c"
# read user input
read num1 num2
declare -i result
result=$num1+$num2
echo "Result is:$result "

# bash convert binary number 10001
result=2#10001
echo $result

# bash convert octal number 16
result=8#16
echo $result

# bash convert hex number 0xE6A
result=16#E6A
echo $result 

 



  Bash Arithmetic Operations

17.3. Round floating point number


 
#!/bin/bash
# get floating point number
floating_point_number=3.3446
echo $floating_point_number
# round floating point number with bash
for bash_rounded_number in $(printf %.0f $floating_point_number); do
echo "Rounded number with bash:" $bash_rounded_number
done 

 



 Round floating point number with bash

17.4. Bash floating point calculations


 
#!/bin/bash
# Simple linux bash calculator
echo "Enter input:"
read userinput
echo "Result with 2 digits after decimal point:"
echo "scale=2; ${userinput}" | bc
echo "Result with 10 digits after decimal point:"
echo "scale=10; ${userinput}" | bc
echo "Result as rounded integer:"
echo $userinput | bc 

 

   Bash floating point calculations

18. Redirections


 

18.1. STDOUT from bash script to STDERR


 
#!/bin/bash

echo "Redirect this STDOUT to STDERR" 1>&2

To prove that STDOUT is redirected to STDERR we can redirect script's output to file:
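Assuming the script above is saved as stdout-to-stderr.sh (the name is arbitrary), nothing lands in the file, because the message now travels on STDERR and is still printed to the terminal:

$ ./stdout-to-stderr.sh > output.txt
Redirect this STDOUT to STDERR
$ cat output.txt
$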

STDOUT from bash script to STDERR


18.2. STDERR from bash script to STDOUT


 
#!/bin/bash

cat $1 2>&1

To prove that STDERR is redirected to STDOUT we can redirect script's output to file:
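Assuming the script above is saved as stderr-to-stdout.sh (the name is arbitrary) and is given a file that does not exist, the error message ends up in the output file because STDERR was merged into STDOUT:

$ ./stderr-to-stdout.sh no_such_file > output.txt
$ cat output.txt
cat: no_such_file: No such file or directory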

  STDERR from bash script to STDOUT


18.3. stdout to screen


The simplest case of redirecting standard output ( stdout ) is just to use any command, because by default stdout is automatically directed to the screen. First create a file "file1":
$ touch file1
$ ls file1
file1

As you can see from the example above execution of ls command produces STDOUT which by default is redirected to screen.

18.4. stdout to file


To override the default behavior of STDOUT we can use ">" to redirect this output to a file:
$ ls file1 > STDOUT
$ cat STDOUT
file1

 

18.5. stderr to file


By default STDERR is displayed on the screen:
$ ls
file1 STDOUT
$ ls file2
ls: cannot access file2: No such file or directory

In the following example we will redirect the standard error ( stderr ) to a file and stdout to a screen as default. Please note that STDOUT is displayed on the screen, however STDERR is redirected to a file called STDERR:
 
$ ls
file1 STDOUT
$ ls file1 file2 2> STDERR
file1
$ cat STDERR
ls: cannot access file2: No such file or directory

 

18.6. stdout to stderr


It is also possible to redirect STDOUT and STDERR to the same file. In the next example we will redirect STDOUT to the same descriptor as STDERR. Both STDOUT and STDERR will be redirected to file "STDERR_STDOUT".
 
$ ls
file1 STDERR STDOUT
$ ls file1 file2 2> STDERR_STDOUT 1>&2
$ cat STDERR_STDOUT
ls: cannot access file2: No such file or directory
file1


File STDERR_STDOUT now contains STDOUT and STDERR.

18.7. stderr to stdout


The above example can be reversed by redirecting STDERR to the same descriptor as STDOUT:
 
$ ls
file1 STDERR STDOUT
$ ls file1 file2 > STDERR_STDOUT 2>&1
$ cat STDERR_STDOUT
ls: cannot access file2: No such file or directory
file1

 

18.8. stderr and stdout to file


Previous two examples redirected both STDOUT and STDERR to a file. Another way to achieve the same effect is illustrated below:
 
$ ls
file1 STDERR STDOUT
$ ls file1 file2 &> STDERR_STDOUT
$ cat STDERR_STDOUT
ls: cannot access file2: No such file or directory
file1


or
 
$ ls file1 file2 >& STDERR_STDOUT
$ cat STDERR_STDOUT
ls: cannot access file2: No such file or directory
file1

Installation FFmpeg on Linux RHEL/CentOS 6.X

FFmpeg :

FFmpeg is simply a tool which implements a decoder and then an encoder. It is a complete, cross-platform solution to record, convert and stream audio and video. This allows users to convert files from one format to another.

Features :

  • FFmpeg is free software licensed under the LGPL or GPL depending on your choice of configuration options.

  • FFmpeg can convert any video format to the web-optimized .flv format so that it can be streamed on a website.

  • FFmpeg provides a command line tool to convert multimedia files between formats.


Steps to Installation FFmpeg on Linux RHEL/CentOS 6.X

  

Step 1 : Create FFmpeg Repository

Open repository Directory

[root@bsrtech ~]# cd /etc/yum.repos.d/

Create a repository file named ffmpeg.repo (any name will do) and open it with the vi command

[root@bsrtech yum.repos.d]# vim ffmpeg.repo

Step 2 : Write the following data on that file

     [ffmpeg]
name=FFmpeg RPM Repository for Red Hat Enterprise Linux
baseurl=http://apt.sw.be/redhat/el6/en/x86_64/dag/  (64 Bit OS)
#baseurl=http://apt.sw.be/redhat/el6/en/i386/dag/   (32 Bit OS)
gpgcheck=1
enabled=1


Save & quit the file (:wq)

Step 3 : Copy the conf file to the lib directory

Copy the /etc/ld.so.conf file to the /usr/local/lib/ directory

[root@bsrtech ~]# cp -r /etc/ld.so.conf  /usr/local/lib/

Then run the following command:

[root@bsrtech ~]# ldconfig -v  (Enter)

Step 4 : Install rpmforge Repository

For 32 Bit OS


[root@bsrtech ~]# rpm -Uvh http://apt.sw.be/redhat/el6/en/i386/rpmforge/RPMS/rpmforge-release-0.5.3-1.el6.rf.i686.rpm

For 64 Bit OS

[root@bsrtech ~]# rpm -Uvh http://apt.sw.be/redhat/el6/en/x86_64/rpmforge/RPMS/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm

Then update the installed packages using the yum update command

[root@bsrtech ~]# yum update

Step 5 : Now Install ffmpeg & ffmpeg-devel

   [root@bsrtech ~]# yum -y install ffmpeg ffmpeg-devel
( or )

   [root@bsrtech ~]# yum -y install ffmpeg*

After completion, use the ffmpeg command to see the full details of FFmpeg.

[root@bsrtech ~]# ffmpeg
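As a quick test of the installation, you can convert a file from one format to another; the file names below are only placeholders:

# convert an AVI file to the web-friendly FLV format
[root@bsrtech ~]# ffmpeg -i input.avi output.flv

# print information about a media file (no output file given)
[root@bsrtech ~]# ffmpeg -i input.avi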

Saturday, September 28, 2013

PHP Handlers

 

PHP handlers are the programs that interpret the PHP code in your web application and process it to be sent as HTML (or another static format) by your web server. Out of the box none of the major web servers can handle PHP by themselves so they need another program to do it for them. This program, known as a PHP handler takes all of your PHP code and generates the output which is then sent to the web server which forwards it on to the user.
Currently there are 4 major PHP handlers available on Apache. These include mod_php (AKA DSO), CGI, FastCGI, and suPHP. If you're using another web server your options may be different (for example, nginx requires FastCGI). Each of these handles memory, CPU, and file permissions in a different way, which can then manifest itself in your web app in everything from performance to important features of your application. Here's a breakdown of each of the options.

mod_php (DSO)
DSO (which is short for Dynamic Shared Object) or mod_php is the oldest and, some would say, the fastest PHP handler available. It essentially makes PHP a part of Apache by having the Apache server interpret the PHP code itself through use of an Apache module known as mod_php. This is the default handler typically installed when installing a web server package on your server.
On the plus side mod_php is fast, in fact very fast as it runs directly in the same process as your Apache server. Running it together with Apache also means that it has a very low CPU and memory requirement which may be beneficial in situations where computing resources are limited.
The major drawback of mod_php is that it runs as part of Apache, which means that it runs as the same user that your Apache process runs as (if you're on Ubuntu this would be www-data). This means that all work on files will be done as the Apache user, which therefore must have permissions to all of your files. In most cases when you upload files to your server you do so as a different user that has login rights to the machine. This means that all the files and folders you upload are "owned" by the user that you used to upload them. If you don't give the Apache user permissions to them, the web server will not be able to read or write the files; but if you do give the Apache user access and your machine is compromised by an attacker, that attacker could have access to much more than just the files in the website they used to get into your system, potentially creating problems for every site hosted on your machine.
The file permission issue is also the biggest source of headache for users of content management systems such as WordPress or Drupal. Because the files of your site are often owned by an account other than that which they are running as, users of mod_php are often unable to upload or modify files from within their CMS without substantial work arounds. Not only could this prevent an administrator from adding pictures and other media to their site easily, but it could also lead to security patches not being installed due to the added complexity of doing so which causes another security hole in your site.
CGI

CGI is the fallback in most servers when mod_php is not available. Instead of running the PHP code within Apache, it is now run as its own CGI process, that is, in a program outside of your Apache server.
By default CGI will be called by the Apache server, meaning that it will run as the Apache user with all the problems of doing so that mod_php encountered. Unlike mod_php, however, CGI has the ability to run the PHP as another user (presumably the user that owns the files) using another Apache module known as suexec.
For performance CGI is not nearly as fast as mod_php and requires more CPU time. It is however still soft on memory usage which may be a benefit to some users.

suPHP

suPHP runs PHP outside of the Apache script as CGI. Unlike CGI however it will run the scripts as a user other than the Apache user (presumably the user that owns the files). This means that if you are using a CMS you will be able to upload files from within your web application using suPHP. In addition, because your PHP is being run as a different user any vulnerability in your site can be restricted to only the files of your website thereby providing substantial security benefits particularly on servers that run multiple websites.
The cost of the upload ability and security of suPHP is not cheap. suPHP is slow and requires quite a bit of CPU to process all the files. In addition, as it must process the file each and every time it is called, suPHP cannot use any OPCode caching such as APC or memcached, resulting in even higher CPU usage by your application. If you are running on a low-end VPS or other server with an application such as WordPress, this configuration can easily push you past any CPU limits you might have whenever traffic starts to climb.

FastCGI

FastCGI is the last major PHP handler. It offers the security benefits of suPHP by executing files as the owner of the file. Unlike suPHP however it keeps open a session for the file when the processing is done resulting in significant memory use but also allowing for the use of OPCode caching such as APC or memcached.
                      mod_php   CGI    suPHP   FastCGI
Memory usage          Low       Low    Low     High
CPU usage             Low       High   High    Low
Security              Low       Low    High    High
Run as file owner     No        No     Yes     Yes
Overall performance   Fast      Slow   Slow    Fast
To determine the PHP Handler used in Cpanel servers :

/usr/local/cpanel/bin/rebuild_phpconfig --current

To determine the PHP version :

php -v

To determine the PHP modules currently enabled :

php -m

To create a phpinfo file, open a plain text file, add the following lines and save :

<?php

// Show all information, defaults to INFO_ALL
phpinfo();

?>

How to Stop Open Relay of Exim

An open relay is an SMTP server configured in such a way that it allows a third party to relay mail (send / receive email messages that are neither from nor for local users). Therefore, such servers are usually targets for spam senders.

You can test if a server is an open relay via this link : http://www.mailradar.com/openrelay/

If the server supports open relay, you can stop it via the following script in Cpanel servers

/scripts/fixrelayd

Enable SSL for WHM and cPanel etc.

 

Enable SSL through WHM

WHM >> Manage Service SSL Certificates >> cPanel/WHM/Webmail Service

 

back end file

/var/cpanel/ssl/cpanel/cpanel.pem

Exim cheat sheet

To remove all mails from exim queue :

rm -rf /var/spool/exim/input/*

Deleting Frozen Mails:

exim -bpr | grep frozen | awk '{print $3}' | xargs exim -Mrm

exiqgrep -z -i | xargs exim -Mrm

To delete only frozen messages older than a day:

exiqgrep -zi -o 86400 | xargs exim -Mrm

where you can change 86400 depending on the time frame you want to keep.( 1 day = 86400 seconds. ).

To forcefully deliver mails in queue, use the following exim command:

exim -bpru | awk '{print $3}' | xargs -n 1 -P 40 exim -v -M

To flush the mail queue:

exim -qff

/usr/sbin/exim -qff

exim -qf - force another queue run

To clear spam mails from Exim Queue:

grep -R -l '\[SPAM\]' /var/spool/exim/msglog/* | cut -b26- | xargs exim -Mrm

To clear frozen mails from Exim Queue.

grep -R -l '*** Frozen' /var/spool/exim/msglog/* | cut -b26- | xargs exim -Mrm

To clear mails from Exim Queue for which recipient cannot not be verified.

grep -R -l 'The recipient cannot be verified' /var/spool/exim/msglog/* | cut -b26- | xargs exim -Mrm

To find exim queue details. It will show ( Count Volume Oldest Newest Domain ) details.

exim -bp |exiqsumm

To remove root mails from exim queue :

When mail queue is high due to root mails, and you only need to remove the root mails and not any other valid mails.

exim -bp | grep "HOSTNAME" | awk '{print $3}' | xargs exim -Mrm

Replace “HOSTNAME” with server hostname

To remove nobody mails from exim queue :

When you need to clear nobody mails, you can use the following command.

exiqgrep -i -f nobody@HOSTNAME | xargs exim -Mrm (Use -f to search the queue for messages from a specific sender)

exiqgrep -i -r nobody@HOSTNAME | xargs exim -Mrm (Use -r to search the queue for messages for a specific recipient/domain)

Replace “HOSTNAME” with server hostname

Run a pretend SMTP transaction from the command line, as if it were coming from the given IP address. This will display Exim’s checks, ACLs, and filters as they are applied. The message will NOT actually be delivered.

# exim -bh <IP address>

To forcefully deliver mails of a particular domain :

exim -v -Rff domain

To find the number of frozen mails in queue :

exim -bpr | grep frozen | wc -l

To find the number of mails in Queue:

exim -bpr | grep "<" | wc -l

exim -bpc

To view the log for the message :

exim -Mvl message ID

To show the mail in queue for $name

exim -bp|grep $name

To view the message header

exim -Mvh $MSGID

To view the message body

exim -Mvb $MSGID

To forcefully deliver the message

exim -M $MSGID

To view the transact of the message

exim -v -M $MSGID

To remove message without sending any error message

exim -Mrm messageID

To check the mails in the queue

exim -bp

To check the syntactic errors

exim -C /config/file.new -bV

To delete mails for a particular domain

exim -bp | grep "DOMAIN" | awk '{print $3}' | xargs exim -Mrm (replace DOMAIN with the domain name)

To view number of mails in queue for each domain

exim -bp | exiqsumm | grep -v '\-\-' | grep -v 'Volume' | grep -v '^$' | sort -bg | awk '{print "Volume: " $1 " \t Domain: " $5}'

Run the following command to pull the most used mailing script’s location from the Exim mail log:

grep cwd /var/log/exim_mainlog | grep -v /var/spool | awk -F"cwd=" '{print $2}' | awk '{print $1}' | sort | uniq -c | sort -n

How to monitor file access on Linux with "auditd"

If you are running a mission critical web server, or maintaining a storage server loaded with sensitive data, you probably want to closely monitor file access activities within the server. For example, you want to track any unauthorized change in system configuration files such as /etc/passwd.

To monitor who changed or accessed files or directories on Linux, you can use the Linux Audit System which provides system call auditing and monitoring. In the Linux Audit System, a daemon called auditd is responsible for monitoring individual system calls, and logging them for inspection.

In this tutorial, I will describe how to monitor file access on Linux by using auditd.

To install auditd on Debian, Ubuntu or Linux Mint:

$ sudo apt-get install auditd
Once installed by apt-get, auditd will be set to start automatically upon boot.

To install auditd on Fedora, CentOS or RHEL:

$ sudo yum install audit
If you want to start auditd automatically upon boot on Fedora, CentOS or RHEL, you need to run the following.

$ sudo chkconfig auditd on
Once you installed auditd, you can configure it by two methods. One is to use a command-line utility called auditctl. The other method is to edit the audit configuration file located at /etc/audit/audit.rules. In this tutorial, I will use the audit configuration file.

The following is an example auditd configuration file.

$ sudo vi /etc/audit/audit.rules
# First rule - delete all
-D

# increase the buffers to survive stress events. make this bigger for busy systems.
-b 1024

# monitor unlink() and rmdir() system calls.
-a exit,always -S unlink -S rmdir

# monitor open() system call by Linux UID 1001.
-a exit,always -S open -F loginuid=1001

# monitor write-access and change in file properties (read/write/execute) of the following files.
-w /etc/group -p wa
-w /etc/passwd -p wa
-w /etc/shadow -p wa
-w /etc/sudoers -p wa

# monitor read-access of the following directory.
-w /etc/secret_directory -p r

# lock the audit configuration to prevent any modification of this file.
-e 2
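For reference, similar rules can also be loaded at run time with auditctl, the other configuration method mentioned earlier. A minimal sketch (the -k key name is my own; rules added this way are lost on reboot unless written to audit.rules):

# watch /etc/passwd for write access and attribute changes, tagged with a search key
sudo auditctl -w /etc/passwd -p wa -k passwd_watch

# monitor unlink() and rmdir() system calls
sudo auditctl -a exit,always -S unlink -S rmdir

# list the currently loaded rules
sudo auditctl -l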
Once you finish editing the audit configuration, restart auditd.

$ sudo service auditd restart
Once auditd starts running, it will start generating an audit daemon log in /var/log/audit/audit.log as auditing is in progress.

A command-line tool called ausearch allows you to query audit daemon logs for specific violations.

To check whether a specific file (e.g., /etc/passwd) has been accessed or modified by anyone, run the following. With the example configuration above, auditd logs write access to /etc/passwd as well as changes to its attributes, such as those made with chmod.

$ sudo ausearch -f /etc/passwd
time->Sun May 12 19:22:31 2013
type=PATH msg=audit(1368411751.734:94): item=0 name="/etc/passwd" inode=655761 dev=08:01 mode=0100644 ouid=0 ogid=0 rdev=00:00
type=CWD msg=audit(1368411751.734:94): cwd="/home/xmodulo"
type=SYSCALL msg=audit(1368411751.734:94): arch=40000003 syscall=306 success=yes exit=0 a0=ffffff9c a1=8624900 a2=1a6 a3=8000 items=1 ppid=14971 pid=14972 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts2 ses=19 comm="chmod" exe="/bin/chmod" key=(null)
The ausearch output above shows that chmod has been applied to /etc/passwd by root once.

To check if a specific directory (e.g., /etc/secret_directory) has been accessed by anyone, run the following.

$ sudo ausearch -f /etc/secret_directory
time->Sun May 12 19:59:58 2013
type=PATH msg=audit(1368413998.927:108): item=0 name="/etc/secret_directory/" inode=686341 dev=08:01 mode=040755 ouid=0 ogid=0 rdev=00:00
type=CWD msg=audit(1368413998.927:108): cwd="/home/xmodulo"
type=SYSCALL msg=audit(1368413998.927:108): arch=40000003 syscall=230 success=no exit=-61 a0=bfcdc4e4 a1=b76f0fa9 a2=8c65c70 a3=ff items=1 ppid=2792 pid=11300 auid=1001 uid=1001 gid=1001 euid=1001 suid=1001 fsuid=1001 egid=1001 sgid=1001 fsgid=1001 tty=pts1 ses=2 comm="ls" exe="/bin/ls" key=(null)
The output shows that /etc/secret_directory was looked into by Linux UID 1001.

In our example audit configuration, auditd was placed in immutable mode, which means that if you attempt to modify /etc/audit/audit.rules, and restart auditd, you will get the following error.

$ sudo /etc/init.d/auditd restart
Error deleting rule (Operation not permitted)
The audit system is in immutable mode, no rules loaded
If you want to be able to modify the audit rules again after auditd is put in immutable mode, you need to reboot your machine after changing the rules in /etc/audit/audit.rules.

If you want to enable daily log rotation for the audit log generated in /var/log/audit directory, use the following command in a daily cronjob.

$ sudo service auditd rotate
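For example, a root cron entry along the following lines would rotate the audit log every night at midnight (the file name and schedule are just an illustration):

# /etc/cron.d/audit-rotate
0 0 * * * root /sbin/service auditd rotate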

Dynamic ssh tunneling with putty to secure web traffic






Sometimes you might want to tunnel traffic over ssh to protect it from prying eyes on wireless/untrusted networks.

You can use an ssh tunnel to a Linux server to encrypt all of your browsing traffic. However, after it leaves the ssh server, it will no longer be encrypted.

Launch putty and head to Connection > SSH Tunnels

In the Source port field, enter a port number that your computer will listen for traffic on. Be sure to pick one that isn’t being used by another program. (8910 should be a safe bet)

Then select Dynamic and Auto as the port type and then click Add.

The window should look like this.

Dynamic Port in Putty

Then scroll back up and click on Session.

Enter the IP address of the machine running the SSH server in the Host Name (or IP address) field.

Then type a name in the Saved Sessions box and click Save for future usage.

Now you can double click on the name of the saved session to start up the tunnel.

You will have to enter your username and password before the tunnel will work correctly, unless the server is configured for anonymous logins.

You may also use key-based authentication to avoid entering a username and password for each login.
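As an aside, if you are connecting from a Linux or Mac machine rather than PuTTY, the OpenSSH client can create the same kind of dynamic SOCKS tunnel with its -D option (a quick sketch; substitute your own user and server):

# listen on local port 8910 and forward everything through the SSH server
ssh -D 8910 -N user@your.ssh.server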

Once the SSH session is open and the tunnel is up, your browser needs to be configured to use the tunnel.

Firefox

Click Tools > Options…

Head to the Advanced tab and then the Network sub-tab and click Settings…

Change the setting to Manual proxy configuration:

In the SOCKS Host: field, type 127.0.0.1 and enter the port number you chose earlier (8910 in the example).

All of the other fields should be blank except the No Proxy for: field. This tells Firefox to skip the proxy server when visiting those addresses.

Mozilla Proxy Config

Click OK and then OK to return to the browser. Your web traffic through Firefox will now be tunneled.

When you don't want to use the proxy any more, head back to this configuration window and set it back to No proxy.

Google Chrome & Internet Explorer

Google Chrome uses Internet Explorer’s proxy settings, so changing the configuration for Internet Explorer will apply to Chrome as well.

Go to Start > Run and type inetcpl.cpl and then hit enter. (In Vista/7, just type that command in the Search programs and files box in the start menu and hit enter.)

Click on the Connections tab and then click LAN settings.

Check the Use a proxy server for your LAN option and then click Advanced.

In the Socks: field, enter 127.0.0.1 and then enter the port you chose earlier in the Port field (8910 in the example).

IE/Chrome Proxy Settings

Click OK, then OK, and then OK.

Your traffic for IE and Chrome will now be tunneled through the SSH server.

To disable it, just clear the Use a proxy server for your LAN option. The Advanced settings don't have to be cleared.

Reset Root Password via Rescue Mode

Make sure you are in rescue mode, or have at least requested rescue mode from your datacenter or dedicated server reseller.

1) Login to the SSH console (rescue mode)

Issue the following command ;

fdisk -l

Then find the root partition (you can usually identify it by its size), and mount it using

mount /dev/xvda1 /mnt

/dev/xvda1 will be shown when you type fdisk -l

NOTE: it will not always be /dev/xvda1, so make sure you choose the right partition from the fdisk -l output.

Sometimes chroot /mnt may not work because of your partition scheme, or because it cannot find a shell such as bash; in that case you can run

chroot /mnt /bin/bash

 Then issue the following command to reset your REAL ROOT password

passwd root

and reset it.

Then type the following to exit chroot;

exit

Then you will have to unmount the temporary partition using the following command;

umount /mnt

And finally reboot the server;

reboot
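Putting the steps together, the whole sequence looks roughly like this (a sketch that assumes the root partition really is /dev/xvda1; always confirm with fdisk -l first):

fdisk -l               # identify the root partition
mount /dev/xvda1 /mnt  # mount it under /mnt
chroot /mnt /bin/bash  # switch into the installed system
passwd root            # set the new root password
exit                   # leave the chroot
umount /mnt            # unmount the temporary mount
reboot                 # boot back into the normal system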

Saturday, September 21, 2013

Running multiple Skype instances on the same computer

I found a solution. Skype saves its data in the ~/.Skype directory (~ means your home directory). You can run a second Skype 4.0 session using a separate data directory.

e.g. from a bash terminal:

Preparation:

> cd

> mkdir .Skype2

Starting the second Skype session:

> skype --dbpath=~/.Skype2 &

Once that works, you can add an icon or menu entry in your window manager to simplify starting it; a sketch of such a launcher follows.
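For example, a launcher for the second instance could be created with something like this (a sketch; the file name and Name value are placeholders, and the unquoted heredoc lets $HOME expand to an absolute path, since ~ is not expanded inside .desktop files):

mkdir -p ~/.local/share/applications
cat > ~/.local/share/applications/skype2.desktop <<EOF
[Desktop Entry]
Type=Application
Name=Skype (second account)
Exec=skype --dbpath=$HOME/.Skype2
Icon=skype
EOF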

Mounting /var to a new partition



  • Created a new reiserfs partition for var on my HDD using gparted. Labelled it as "var"

  • Rebooted to emergency mode and mounted it. (I remounted root as read-write)

     # mount /dev/sda8 /mnt/new_var


  • Copied var contents to new_var

     # cd /var
    # cp -Rax * /mnt/new_var/
    # cd /
    # mv var var.old
    # umount /mnt/new_var
    # mkdir var
    # mount /dev/sda8 /var


  • Added the following line to /etc/fstab
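A typical entry, assuming the /dev/sda8 reiserfs partition created above, would look something like this:

     /dev/sda8   /var   reiserfs   defaults   0 2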

Add more space to /tmp in cPanel server.

The following can help you increase the space available in /tmp.

You need to make an alteration in the file /scripts/securetmp

#vi /scripts/securetmp

Find the entry my $tmpdsksize under Global Variables as follows:
# Global Variables
my $tmpdsksize = 512000; # Must be larger than 250000

Change the value of that entry to the desired size.
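If you prefer to change it non-interactively, a one-liner along these lines would set it to roughly 1 GB (a sketch; back up /scripts/securetmp first and verify the result with grep):

sed -i 's/^my \$tmpdsksize = .*/my $tmpdsksize = 1024000; # Must be larger than 250000/' /scripts/securetmp
grep tmpdsksize /scripts/securetmp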

Then make sure that no processes are using /tmp, using the command lsof /tmp.

Stop the MySQL service with /etc/init.d/mysql stop. Also delete the file /usr/tmpDSK, if it exists, with rm -rf /usr/tmpDSK.

Then

umount /tmp

Run the script

#/scripts/securetmp

Then you will be asked for some confirmation steps.

“Would you like to secure /tmp at boot time?” Press y

“Would you like to secure /tmp now?” Press y

Finally, you can see the increased space for /tmp on the server. :)

 

//////////////////////////////////////////////

1.) Stop the MySQL service and kill the tailwatchd process.

[root@server ~]# /etc/init.d/mysqld stop
Stopping MySQL: [ OK ]
[root@server ~]# pstree -p | grep tailwatchd
Find the tailwatchd process id and kill it
[root@server ~]# kill -9 2522
2.) Take a backup of /tmp as /tmp.bak

[root@server ~]#cp -prf /tmp /tmp.bak
3.) Create a 2GB file in the available free space

[root@server ~]# dd if=/dev/zero of=/usr/tmpDSK bs=1024k count=2048
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 73.6908 seconds, 29.1 MB/s
[root@server~]# du -sch /usr/tmpDSK
2.1G /usr/tmpDSK
2.1G total
4.) Assign ext3 filesystem to the file

[root@server~]# mkfs -t ext3 /usr/tmpDSK
mke2fs 1.39 (29-May-2006)
/usr/tmpDSK is not a block special device.
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
262144 inodes, 524288 blocks
26214 blocks (5.00%) reserved for the super user
First data block=0
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 25 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
5.) Check the file system type:-

[root@server ~]# file /usr/tmpDSK
/usr/tmpDSK: Linux rev 1.0 ext3 filesystem data (large files)
Note:-

You may also use the following commands for making an ext3 file system on a file:

[root@server ~]# mkfs.ext3 /usr/tmpDSK
[root@server ~]# mke2fs /usr/tmpDSK
6.) Unmount /tmp partition

[root@server ~]# umount /tmp
7.) Mount the new /tmp filesystem with noexec

[root@server ~]# mount -o loop,noexec,nosuid,rw /usr/tmpDSK /tmp
8.) Set the correct permission for /tmp

[root@server ~]# install -d --mode=1777 /tmp
[root@antg ~]# ls -ld /tmp
drwxrwxrwt 3 root root 4096 Feb 6 08:42 /tmp
( you may use the command chmod 1777 /tmp for doing the same )

[root@server ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda9 28G 6.4G 20G 25% /
/dev/sda8 99M 10M 84M 11% /boot
tmpfs 500M 0 500M 0% /dev/shm
/usr/tmpDSK 2.0G 68M 1.9G 4% /tmp
9.) Restore the content of the old /tmp.bak directory

[root@server ~]# cp -rpf /tmp.bak/* /tmp
10.) Restart the mysql and tailwatchd services.

[root@server ~]# /etc/init.d/mysql start
[root@server ~]# /scripts/restartsrv_tailwatchd
11.)Edit the fstab and replace /tmp entry line with :-

/usr/tmpDSK /tmp ext3 loop,noexec,nosuid,rw 0 0
12.) Mount all filesystems

[root@server~]# mount -a
Check it now:-

[root@server ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda9 28G 6.4G 20G 25% /
/dev/sda8 99M 10M 84M 11% /boot
tmpfs 500M 0 500M 0% /dev/shm
/usr/tmpDSK 2.0G 68M 1.9G 4% /tmp

I really hope this little tutorial can help you. :)

 

 

Wednesday, September 18, 2013

No apache MPM package installed - debian/ubuntu

apt-get purge apache2-mpm-prefork
apt-get install apache2-mpm-prefork
/etc/init.d/nginx stop
service apache2 start
cp -arp /root/apache2/mods-available/php5* /etc/apache2/mods-available/
cp -arp /root/apache2/mods-enabled/php5* /etc/apache2/mods-enabled/
service apache2 start
apt-get install libapache2-mod-php5
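Afterwards, you can confirm that an MPM module is actually loaded (a quick check, not part of the original note):

apache2ctl -M | grep -i mpm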

Thursday, September 12, 2013

How to Move MySQL Datadir to an alternate location



 

Before anything else, take a full backup of all databases:

mysqldump --add-drop-table --all-databases | gzip > /home/alldatabases.sql.gz

I have put together a guide on how to correctly move the MySQL datadir to free up space on the /var partition. Pay attention while following it: I take no responsibility if you crash MySQL. If you are truly unsure of how to do this, get an upper-tier admin to assist you.

First check the free space on /var and /home partitions:

root@localhost [/var/lib/mysql]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda8 198G 29G 159G 16% /home
/dev/sda2 9.7G 9.2G 0 100% /var

Make sure the Disk Space is being used by MySQL:

root@localhost [/var/lib/mysql]# du -h --max-depth=1
4.9G .

You need to stop MySQL before copying the files so that the InnoDB databases do not become corrupt. We also need to make sure that MySQL does not start back up while we are copying the files to the new location, so we will add a syntax error to /etc/my.cnf:

root@localhost [/var/lib]# vi /etc/my.cnf
Add "die" to the top of the file

root@localhost [/var/lib]# grep die /etc/my.cnf
die

Now Stop MySQL:

root@localhost [/var/lib]# /etc/init.d/mysql stop
Shutting down MySQL.. [ OK ]

Make sure that MySQL is not running:

root@localhost [/var/lib]# ps aufx |grep mysql
root@localhost [/var/lib]#

Attempt to start MySQL, it should error out if the syntax error is working correctly:

root@localhost [/var/lib]# /etc/init.d/mysql start
error: Found option without preceding group in config file: /etc/my.cnf at line: 1
Fatal error in defaults handling. Program aborted
error: Found option without preceding group in config file: /etc/my.cnf at line: 1
Fatal error in defaults handling. Program aborted
Starting MySQL.Manager of pid-file quit without updating fi[FAILED]
root@localhost [/var/lib]#

Make sure MySQL is not running:

root@localhost [/var/lib]# ps aufx |grep mysql
root@localhost [/var/lib]#

Let's rsync the data to the new location /home/mysql:

root@localhost [/var/lib]# rsync -avz --progress mysql /home/
building file list ...
6127 files to consider
mysql/

sent 2199787710 bytes received 134014 bytes 2565506.38 bytes/sec
total size is 5063821822 speedup is 2.30

When the rsync completes let's make sure MySQL did not start:

root@localhost [/var/lib]# ps aufx |grep mysql
root@localhost [/var/lib]#

If MySQL did not start and is still not running, go ahead and make sure the data in both folders is identical by running the rsync again:

root@localhost [/var/lib]# rsync -avz --progress mysql /home/
building file list ...
6127 files to consider
mysql/hiphopishere.hiphopishere.com.err
18030 100% 0.00kB/s 0:00:00 (xfer#1, to-check=6123/6127)

sent 150883 bytes received 42 bytes 100616.67 bytes/sec
total size is 5063822606 speedup is 33551.91

Now if everything was the same let's go ahead and remove the syntax error from /etc/my.cnf and update the datadir location:

root@localhost [/var/lib]# vi /etc/my.cnf

Remove die syntax and change:

datadir=/var/lib/mysql

TO:

datadir=/home/mysql

Now that we have verified the data let's move the MySQL datadir to mysql.bk:

root@localhost [/var/lib]# mv /var/lib/mysql /var/lib/mysql.bk

Create a symlink to the new MySQL datadir:

root@localhost [/var/lib]# ln -s /home/mysql /var/lib/mysql

Verify the symlink is pointed to the correct location:

root@localhost [/var/lib]# ll |grep mysql
lrwxrwxrwx 1 root root 11 Dec 31 00:39 mysql -> /home/mysql/
drwxr-x--x 51 mysql mysql 4096 Dec 31 00:15 mysql.bk/

Start MySQL:

root@localhost [/var/lib]# /etc/init.d/mysql start
Starting MySQL. [ OK ]

Check the status of MySQL:

root@localhost [/var/lib]# /etc/init.d/mysql status
MySQL running (8283) [ OK ]

root@hiphopishere [/var/lib]# mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 7
Server version: 5.0.85-community MySQL Community Edition (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> status
--------------
mysql Ver 14.12 Distrib 5.0.85, for pc-linux-gnu (i686) using readline 5.1

Go ahead and Check/Repair all databases:

root@localhost [/var/lib]# for i in `mysql -N -e "show databases"`; do mysqlcheck -r $i; done;

Once it has completed, if everything looks good, you can go ahead and remove the old MySQL datadir, which is now /var/lib/mysql.bk:

root@localhost [/var/lib]# rm -rfv /var/lib/mysql.bk/
root@localhost [/var/lib]#

Check free space:

root@localhost [/var/lib]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda8 198G 34G 154G 18% /home
/dev/sda2 9.7G 4.4G 4.9G 48% /var

=========================================

In this tutorial we show you how to change the MySQL data directory to another location. By default, MySQL on Linux stores its data in the following location:
/var/lib/mysql

What will happen if /var runs low on space? Obviously MySQL will start failing from time to time and new data will be discarded. So what is the solution?

The solution is to move the MySQL data directory from /var/lib to another partition with plenty of free space. Use the following method to change the MySQL data location.

SSH to the server using a terminal (Linux) or PuTTY (Windows) and log in as root. Now let's create a backup of the whole database server in a single dump file, to be safe.
mysqldump --add-drop-table --all-databases | gzip > /home/alldatabases.sql.gz

Stop the MySQL server
/etc/init.d/mysql stop

Let’s move the data directory
cd /var/lib
mv mysql /home/mysql
ln -s /home/mysql mysql
chown -R mysql:mysql /home/mysql

Start MySQL server
/etc/init.d/mysql start

Verify MySQL is running fine
ps aux | grep mysql
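As a final sanity check (not part of the original steps), you can ask MySQL where it now thinks its data directory is:

mysql -e "SHOW VARIABLES LIKE 'datadir';"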

 


Wednesday, September 11, 2013

Exim: filter mails by body content

Enable /etc/cpanel_exim_system_filter in WHM->Exim Configurations.

Add these lines to /etc/cpanel_exim_system_filter :

Code:
if $message_body contains "TEXT" and not error_message
then
seen finish
endif
Replace TEXT with the word you want to block, for example "Viagra".
That's it. Mail containing TEXT in the body will no longer be received or sent.
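You can also test the filter against a saved sample message before relying on it (the message file path here is just a placeholder):

exim -bF /etc/cpanel_exim_system_filter < /tmp/sample-message.txt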

Tuesday, September 3, 2013

SSH - Securing

SSH V2 Configuration


SSH supports the SSH1 and SSH2 protocols. SSH2 is more advanced and secure than SSH1.

Solution:
Modify the /etc/ssh/sshd_config file to set the parameter as below:
Protocol 2

VERBOSE Parameter


The VERBOSE parameter is used to record login and logout activity in SSH. This will be helpful in incident handling.

Solution:
Modify the /etc/ssh/sshd_config file to set the parameter as below:
LogLevel VERBOSE

X11Forwarding Parameter


The X11Forwarding parameter controls the ability to tunnel X11 traffic through the connection, enabling remote graphical sessions. Other users on the remote host could potentially compromise the X11 displays of users who log in via SSH with X11 forwarding enabled. Disable X11 forwarding if it is not used.

Solution:
Modify the /etc/ssh/sshd_config file to set the parameter as below:
X11Forwarding no

MaxAuthTries Parameter


The MaxAuthTries parameter specifies the maximum number of authentication attempts permitted per connection. Setting it to a low value minimizes the risk of successful brute-force attacks against the SSH server and improves Linux security.

Solution:
Modify the /etc/ssh/sshd_config file to set the parameter as below:
MaxAuthTries 4

IgnoreRhosts Parameter


The IgnoreRhosts parameter specifies that .rhosts and .shosts files will not be used in RhostsRSAAuthentication or HostbasedAuthentication. This parameter forces users to enter a password when authenticating with ssh.

Solution:
Modify the /etc/ssh/sshd_config file to set the parameter as below:
IgnoreRhosts yes

HostbasedAuthentication Parameter


The HostbasedAuthentication parameter specifies whether rhosts or /etc/hosts.equiv authentication together with successful public key client host authentication is allowed. Disabling the ability to use .rhosts files in SSH provides an extra layer of protection.

Solution:
Modify the /etc/ssh/sshd_config file to set the parameter as below:
HostbasedAuthentication no

PermitRootLogin Parameter


The PermitRootLogin parameter specifies whether root can log in using SSH. Disabling root logins over SSH requires administrators to authenticate with their own accounts and then escalate to root via sudo or su. This provides a clear audit trail in the event of a security incident and improves Linux security.

Solution:
Modify the /etc/ssh/sshd_config file to set the parameter as below:
PermitRootLogin no

PermitEmptyPasswords Parameter


The PermitEmptyPasswords parameter specifies whether the server allows login to accounts with empty password strings. Disallowing login with empty password reduces the probability of unauthorized access to the system.

Solution:
Modify the /etc/ssh/sshd_config file to set the parameter as below:
PermitEmptyPasswords no

PermitUserEnvironment Parameter


The PermitUserEnvironment parameter specifies whether users may pass environment options to the ssh daemon. Allowing users to set environment variables through the SSH daemon could potentially let them bypass security controls.

Solution:
Modify the /etc/ssh/sshd_config file to set the parameter as below:
PermitUserEnvironment no

Use of Strong Ciphers in Counter Mode


This parameter limits the ciphers that SSH may use during communication, avoiding weak ciphers and helping to protect against man-in-the-middle attacks.

Solution:
Modify the /etc/ssh/sshd_config file to set the parameter as below:
Ciphers aes128-ctr,aes192-ctr,aes256-ctr

Idle Timeout Interval Parameter


The ClientAliveInterval and ClientAliveCountMax parameters control the timeout of idle ssh sessions. Setting a timeout on connections helps prevent an unauthorized user from gaining access to another user's abandoned ssh session.

Solution:
Modify the /etc/ssh/sshd_config file to set the parameter as below:
ClientAliveInterval 300
ClientAliveCountMax 0

Restrict Access via SSH


There are several options available to restrict which users and group can access the system via SSH.

AllowUsers
The AllowUsers parameter provides the administrator the option of allowing specific users to ssh into the system. This keyword can be followed by a list of user names, separated by spaces.
AllowGroups
The AllowGroups parameter provides the administrator the option of allowing specific groups to ssh into the system. This keyword can be followed by a list of group names, separated by spaces.
DenyUsers
The DenyUsers parameter provides the administrator the option of denying specific users the ability to ssh into the system. This keyword can be followed by a list of user names, separated by spaces.
DenyGroups
The DenyGroups parameter provides the administrator the option of denying specific groups to ssh into the system. This keyword can be followed by a list of group names, separated by spaces.

Solution:
Modify the /etc/ssh/sshd_config file to set the parameter as below:
AllowUsers <userlist>
AllowGroups <grouplist>
DenyUsers <userlist>
DenyGroups <grouplist>

SSH Login Banner


The Banner parameter specifies a file whose contents are sent to the remote user before authentication is allowed.

Solution:
Modify the /etc/ssh/sshd_config file to set the parameter as below:
Banner <bannerfile>

sshd_config Permissions


The /etc/ssh/sshd_config file should be protected from unauthorized access. The ownership and file permissions should be properly configured.

Solution:
# chown root:root /etc/ssh/sshd_config
# chmod 644 /etc/ssh/sshd_config
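Putting the settings from the sections above together, a hardened /etc/ssh/sshd_config fragment might look like this (the user list, group list and banner path are placeholders to adapt to your environment):

Protocol 2
LogLevel VERBOSE
X11Forwarding no
MaxAuthTries 4
IgnoreRhosts yes
HostbasedAuthentication no
PermitRootLogin no
PermitEmptyPasswords no
PermitUserEnvironment no
Ciphers aes128-ctr,aes192-ctr,aes256-ctr
ClientAliveInterval 300
ClientAliveCountMax 0
AllowUsers admin1 admin2
AllowGroups sshadmins
Banner /etc/issue.net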