
Wednesday, February 27, 2013

Easy cPanel/WHM or Linux remote backup – SSH pull rsync backups for security and integrity using incrementals

$ sudo useradd -d /home/backup -m backup
$ sudo su - backup
$ ssh-keygen -t rsa -b 2048
Generating public/private rsa key pair.
Enter file in which to save the key (/home/backup/.ssh/id_rsa):
Created directory '/home/backup/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/backup/.ssh/id_rsa.
Your public key has been saved in /home/backup/.ssh/id_rsa.pub.
The key fingerprint is:
05:8c:df:24:18:a9:9e:22:87:08:49:5b:11:7c:2f:f1 backup@host

You now need to put the public key onto your server for the root user (or, if you prefer, a user with a sudo role – it's more secure, though you will need to adjust your rsync commands to account for that)

$ scp .ssh/id_rsa.pub root@your.cpanel.server.com:/root/.ssh/authorized_keys
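Note that the scp above overwrites any authorized_keys file that may already exist for root on the remote server. If other keys might already be present, appending the key instead is safer; a minimal sketch of that approach:

$ cat ~/.ssh/id_rsa.pub | ssh root@your.cpanel.server.com 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'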

Once that's done you can test that the key is working by SSH'ing in. If you don't get asked for a password, your SSH key is set up:

$ ssh root@your.cpanel.server.com
root@your.cpanel.server.com:$

Configuring the backup
So now you have SSH key access from your backup machine to the cPanel/WHM server(s), it's just a case of setting up a cron job to grab your data!

$ mkdir /home/backup/server1
$ crontab -e

In crontab, add the following entry (adjust the time the job runs to ensure that your cPanel/WHM server(s) have enough time to finish their backups. For example, I know my cPanel backups finish around 3:30 am, so I set my rsync to run at 4:30 am). You can adjust bwlimit to something you prefer. I set it to 5000 KB/sec (roughly 40 Mbps, well under half of my available bandwidth) to ensure my regular users aren't inconvenienced because something is chewing up all of the server's bandwidth. I also don't back up the SpamAssassin bloat. This should all be on one line:

30 4 * * * rsync -av --bwlimit=5000 --progress -e ssh --exclude '*spamass*' root@your.cpanel.server.com:/backup/cpbackup /home/backup/server1/ > /home/backup/server1.results.txt 2>&1
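The entry above keeps a single mirror of the cPanel backup directory. If you would rather keep dated snapshots (the "incremental" part of the title), rsync's --link-dest option can hard-link unchanged files against the previous day's snapshot, so each extra day costs very little disk. A rough sketch, better run from a wrapper script called by cron than as a one-line crontab entry (paths and dates are only examples; the first run will warn that yesterday's directory doesn't exist yet, which is harmless):

#!/bin/bash
# Pull today's backup into a dated directory, hard-linking unchanged
# files against yesterday's snapshot.
TODAY=$(date +%F)
YESTERDAY=$(date -d yesterday +%F)
rsync -a --bwlimit=5000 --exclude '*spamass*' \
  --link-dest=/home/backup/server1/$YESTERDAY \
  root@your.cpanel.server.com:/backup/cpbackup/ \
  /home/backup/server1/$TODAY/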

Finishing up
That should be all you need. Check back the following day and look in the /home/backup/server1.results.txt file; it should look something like this:

backup@host:~$ tail server1.results.txt
up 8 100% 0.04kB/s 0:00:00 (xfer#2755, to-check=32/437710)
cpbackup/daily/user/mysql/horde.sql
3156258 100% 4.47MB/s 0:00:00 (xfer#2756, to-check=24/437710)
cpbackup/daily/user/resellerconfig/resellers
0 100% 0.00kB/s 0:00:00 (xfer#2757, to-check=20/437710)
cpbackup/daily/user/resellerconfig/resellers-nameservers
0 100% 0.00kB/s 0:00:00 (xfer#2758, to-check=19/437710)
sent 3351898 bytes received 329706615 bytes 476137.97 bytes/sec
total size is 34722766009 speedup is 104.25

How to make automatic backup in cPanel

Using the script* provided below you will be able to make automatic backups of your hosting account (domains and MySQL databases). This backup script includes SSL support. SSL is not necessary if you run the script on the server for which you are generating the backup, but it could be important if you are running the script somewhere else to connect to your cPanel hosting account.
<?php
// PHP script to allow periodic cPanel backups automatically, optionally to a remote FTP server.

// This script contains passwords. It is important to keep access to this file secure (we would suggest you to place it in your home directory, not public_html)

// You need to create a 'backups' folder in your home directory (or any other folder that you would like to store your backups in).

// ********* THE FOLLOWING ITEMS NEED TO BE CONFIGURED *********

// Information required for cPanel access

$cpuser = "username"; // Username used to login to cPanel

$cppass = "password"; // Password used to login to cPanel

$domain = "example.com";// Your main domain name

$skin = "x"; // Set to cPanel skin you use (script will not work if it does not match). Most people run the default "x" theme or "x3" theme

// Information required for FTP host

$ftpuser = "ftpusername"; // Username for FTP account

$ftppass = "ftppassword"; // Password for FTP account

$ftphost = "ip_address"; // IP address of your hosting account

$ftpmode = "passiveftp"; // FTP mode

// Notification information
$notifyemail = "any@example.com"; // Email address to send results

// Secure or non-secure mode
$secure = 0; // Set to 1 for SSL (requires SSL support), otherwise will use standard HTTP

// Set to 1 to have web page result appear in your cron log
$debug = 0;

// *********** NO CONFIGURATION ITEMS BELOW THIS LINE *********

$ftpport = "21";

$ftpdir = "/backups/"; // Directory where backups are stored (create it in your /home/ directory), or change 'backups' to the name of any other folder created for the backups

if ($secure) {

$url = "ssl://".$domain;

$port = 2083;

} else {

$url = $domain;

$port = 2082;

}

$socket = fsockopen($url,$port);

if (!$socket) { echo "Failed to open socket connection... Bailing out!\n"; exit; }

// Encode authentication string

$authstr = $cpuser.":".$cppass;

$pass = base64_encode($authstr);

$params = "dest=$ftpmode&email=$notifyemail&server=$ftphost&user=$ftpuser&pass=$ftppass&port=$ftpport&rdir=$ftpdir&submit=Generate Backup";

// Make POST to cPanel

fputs($socket,"POST /frontend/".$skin."/backup/dofullbackup.html?".$params." HTTP/1.0\r\n");

fputs($socket,"Host: $domain\r\n");

fputs($socket,"Authorization: Basic $pass\r\n");

fputs($socket,"Connection: Close\r\n");

fputs($socket,"\r\n");

// Grab response even if we do not do anything with it.

while (!feof($socket)) {

$response = fgets($socket,4096); if ($debug) echo $response;

}

fclose($socket);

?>

To schedule the script to run regularly, save it as fullbackup.php in your home directory and enter a new cron job** like the following:

00 2 * * 1 /usr/local/bin/php /home/youraccount/fullbackup.php

(Runs at 2:00 a.m. every Monday.)

Cpanel /scripts/restorepkg in detail

restorepkg [--force] [--skipres] [--override] [--ip=(y|n|Custom IP)] -- [cpuser|/path/to/cpuser-file]

/scripts/restorepkg --force xxxxxxx

--force
If there's one thing I advise, it's to never use this flag unless you've exhausted normal means of restoring the account. Even then, I'd prefer you contact cPanel support instead so we can figure out what's going on. This option essentially instructs restorepkg to disregard all logic that we put in place to prevent conflicts when an account is being restored.

If the backup you're restoring does contain actual conflicts (domains owned by other users, for example), then this sets you up for a world of hurt and unexpected behavior. All too often I see a sysadmin force-restore an account onto a box where it conflicts with already existing accounts.

It does not terminate the account first; it just "shoehorns" the restore on top. The intention of this feature is for when you're trying to reduce downtime or are trying to keep 'new' files (like email) that otherwise don't exist in the backup you're restoring from.

But, personally, I would never use --force on my own personal box just for the peace of mind. I'd perform a clean terminate/restore of an account and rest assured that our restorepkg logic has guaranteed me that there are no conflicts.

--skipres
This stands for "Skip Reseller Privileges". Pretty self-explanatory. Using this option will ensure that reseller privileges are NOT restored (if the account had them, that is). If it's not a reseller, this argument effectively does nothing.

--override
This allows you to override the stock cPanel restorepkg code with your own custom written restorepkg logic if you've written some.
Stock Code: /usr/local/cpanel/Whostmgr/Transfers.pm

If you desire to create your own customized version, you would place it at:
Override location: /var/cpanel/lib/Whostmgr/Transfers.pm

Then, when you use "--override" it will call upon the override location code in lieu of the stock code. Note that if you don't have an override setup at that location, the "--override" argument effectively does nothing at all.

--ip=(y|n|Custom IP)
Pretty much self-explanatory.
--ip=y
^-- Allocates the next available IP in the IP Pool to the account upon restore. If none available, uses shared IP.
--ip=n
^-- The same as leaving this flag absent. The account will restore using the shared IP of the box.
--ip=123.123.123.123 (Or any other valid IP)
^-- Attempts to allocate the specified IP to the account upon restore. If not available/does not exist, uses shared IP

cpuser|/path/to/cpuser-file
Self-explanatory again.
cpuser
Will search for the archive in various common locations to try to automatically identify the backup you're requesting it to restore. If it cannot find it, it will tell you where it looked and what it was trying to find.
/path/to/cpuser-file
Simply attempts to restore using the archive that the path specifies.
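Putting the options together, a typical invocation that restores from an explicit archive path and requests a specific IP might look like the following (the IP and path are only placeholders):

/scripts/restorepkg --ip=192.0.2.25 /home/cpmove-username.tar.gz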

shell script to backup mysql databases

#!/bin/bash
# Script for MySQL database backup: dump every database directory
# found under /var/lib/mysql into /mysqlbackup
cd /var/lib/mysql
for DBs in $(ls -d */ | tr -d /)
do
    mysqldump -u root -p'password' $DBs > /mysqlbackup/$DBs.sql
done
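Listing /var/lib/mysql assumes every directory there is a database. A variant that asks the MySQL server itself for the list (same hard-coded root password assumption as above, skipping the information_schema/performance_schema system schemas):

#!/bin/bash
# Dump each database reported by the server into /mysqlbackup
for DB in $(mysql -u root -p'password' -N -e 'SHOW DATABASES' | grep -vE '^(information_schema|performance_schema)$')
do
    mysqldump -u root -p'password' "$DB" > /mysqlbackup/"$DB".sql
done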

Backup cPanel Account SSH using rsync

#!/bin/bash
# Script to back up cPanel account home directories to a remote server
# (usernames come from the second field of /etc/userdomains)

for x in `awk '{print $2}' /etc/userdomains | sed -e '/nobody/d'`
do
    ssh root@xx.xx.xx.xx mkdir -p /backup/$x
    rsync -arv /home/$x/* root@xx.xx.xx.xx:/backup/$x/.
done
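Note that rsyncing /home/$x only copies home directories; databases, DNS zones and mail configuration are not included. If full restorable archives are wanted instead, cPanel's /scripts/pkgacct can be used in a similar loop before transferring; a rough sketch (the /home/cpmove staging directory and remote path are just examples):

#!/bin/bash
# Package each account with pkgacct and push the archive to the backup host
mkdir -p /home/cpmove
for x in `awk '{print $2}' /etc/userdomains | sed -e '/nobody/d'`
do
    /scripts/pkgacct $x /home/cpmove
    rsync -av /home/cpmove/cpmove-$x.tar.gz root@xx.xx.xx.xx:/backup/
done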

Tuesday, February 26, 2013

Linux Split and Join Command to Manage Large Files


Join and split command syntax:

join [OPTION]… FILE1 FILE2
split [OPTION]… [INPUT [PREFIX]]

Use the split command to do this:

split --bytes=1024m bigfile.iso small_file_

That command will split bigfile.iso into pieces that are 1024 MB (1 GB) each and name the parts small_file_aa, small_file_ab, etc. You can use b for bytes, k for kilobytes and m for megabytes when specifying sizes.

To join the files back together on Linux:

cat small_file_* > joined_file.iso
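To confirm that nothing was lost in the split/join round trip, comparing checksums of the original and the rejoined file is a quick sanity check; the two sums should be identical:

md5sum bigfile.iso joined_file.iso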


Linux Split Command Examples
1. Basic Split Example

Here is a basic example of split command.

$ split split.zip

$ ls
split.zip  xab  xad  xaf  xah  xaj  xal  xan  xap  xar  xat  xav  xax  xaz  xbb  xbd  xbf  xbh  xbj  xbl  xbn
xaa        xac  xae  xag  xai  xak  xam  xao  xaq  xas  xau  xaw  xay  xba  xbc  xbe  xbg  xbi  xbk  xbm  xbo

So we see that the file split.zip was split into smaller files named x**, where ** is the two-character suffix that is added by default. Also, by default each x** file contains 1000 lines.

$ wc -l *
40947 split.zip
1000 xaa
1000 xab
1000 xac
1000 xad
1000 xae
1000 xaf
1000 xag
1000 xah
1000 xai
...
...
...

So the output above confirms that by default each x** file contains 1000 lines.

2. Change the Suffix Length using the -a option

As discussed in example 1 above, the default suffix length is 2. But this can be changed by using -a option.

As you see in the following example, it is using suffix of length 5 on the split files.

$ split -a5 split.zip
$ ls
split.zip  xaaaac  xaaaaf  xaaaai  xaaaal  xaaaao  xaaaar  xaaaau  xaaaax  xaaaba  xaaabd  xaaabg  xaaabj  xaaabm
xaaaaa     xaaaad  xaaaag  xaaaaj  xaaaam  xaaaap  xaaaas  xaaaav  xaaaay  xaaabb  xaaabe  xaaabh  xaaabk  xaaabn
xaaaab     xaaaae  xaaaah  xaaaak  xaaaan  xaaaaq  xaaaat  xaaaaw  xaaaaz  xaaabc  xaaabf  xaaabi  xaaabl  xaaabo

Note: Earlier we also discussed other file manipulation utilities – tac, rev, paste.
3. Customize Split File Size using the -b option

Size of each output split file can be controlled using -b option.

In this example, the split files were created with a size of 200000 bytes.

$ split -b200000 split.zip

$ ls -lart
total 21084
drwxrwxr-x 3 himanshu himanshu     4096 Sep 26 21:20 ..
-rw-rw-r-- 1 himanshu himanshu 10767315 Sep 26 21:21 split.zip
-rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xad
-rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xac
-rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xab
-rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xaa
-rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xah
-rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xag
-rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xaf
-rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xae
-rw-rw-r-- 1 himanshu himanshu   200000 Sep 26 21:35 xar
...
...
...

4. Create Split Files with Numeric Suffix using -d option

As seen in the examples above, the output files have the format x**, where ** is an alphabetic suffix. You can change this to a numeric suffix using the -d option.

Here is an example. This has numeric suffix on the split files.

$ split -d split.zip
$ ls
split.zip  x01  x03  x05  x07  x09  x11  x13  x15  x17  x19  x21  x23  x25  x27  x29  x31  x33  x35  x37  x39
x00        x02  x04  x06  x08  x10  x12  x14  x16  x18  x20  x22  x24  x26  x28  x30  x32  x34  x36  x38  x40

5. Customize the Number of Split Chunks using the -n option

To get control over the number of chunks, use the -n option.

This example will create 50 chunks of split files.

$ split -n50 split.zip
$ ls
split.zip  xac  xaf  xai  xal  xao  xar  xau  xax  xba  xbd  xbg  xbj  xbm  xbp  xbs  xbv
xaa        xad  xag  xaj  xam  xap  xas  xav  xay  xbb  xbe  xbh  xbk  xbn  xbq  xbt  xbw
xab        xae  xah  xak  xan  xaq  xat  xaw  xaz  xbc  xbf  xbi  xbl  xbo  xbr  xbu  xbx

6. Avoid Zero Sized Chunks using -e option

While splitting a relatively small file into a large number of chunks, it's good to avoid zero-sized chunks as they do not add any value. This can be done using the -e option.

Here is an example:

$ split -n50 testfile

$ split -n50 -e testfile

$ ls
split.zip  testfile  xaa  xab  xac  xad  xae  xaf

So we see that no zero-sized chunks were produced in the above output.

7. Customize Number of Lines using -l option

Number of lines per output split file can be customized using the -l option.

As seen in the example below, split files are created with 20000 lines.

$ split -l20000 split.zip

$ ls
split.zip  testfile  xaa  xab  xac

$ wc -l x*
20000 xaa
20000 xab
947 xac
40947 total

Get Detailed Information using the --verbose option

To get a diagnostic message each time a new split file is opened, use the --verbose option as shown below.

$ split -l20000 --verbose split.zip
creating file `xaa'
creating file `xab'
creating file `xac'

Linux Join Command Examples


8. Basic Join Example

The join command works on the first field of the two files supplied as input, joining lines whose first fields match.

Here is an example :

$ cat testfile1
1 India
2 US
3 Ireland
4 UK
5 Canada

$ cat testfile2
1 NewDelhi
2 Washington
3 Dublin
4 London
5 Toronto

$ join testfile1 testfile2
1 India NewDelhi
2 US Washington
3 Ireland Dublin
4 UK London
5 Canada Toronto

So we see that a file containing countries was joined with another file containing capitals on the basis of first field.
9. Join works on Sorted List

If either of the two files supplied to the join command is not sorted, join prints a warning and the offending entry is not joined.

In this example, since the input file is not sorted, it will display a warning/error message.

$ cat testfile1
1 India
2 US
3 Ireland
5 Canada
4 UK

$ cat testfile2
1 NewDelhi
2 Washington
3 Dublin
4 London
5 Toronto

$ join testfile1 testfile2
1 India NewDelhi
2 US Washington
3 Ireland Dublin
join: testfile1:5: is not sorted: 4 UK
5 Canada Toronto

10. Ignore Case using -i option

When comparing fields, the difference in case can be ignored using -i option as shown below.

$ cat testfile1
a India
b US
c Ireland
d UK
e Canada

$ cat testfile2
a NewDelhi
B Washington
c Dublin
d London
e Toronto

$ join testfile1 testfile2
a India NewDelhi
c Ireland Dublin
d UK London
e Canada Toronto

$ join -i testfile1 testfile2
a India NewDelhi
b US Washington
c Ireland Dublin
d UK London
e Canada Toronto

11. Verify that Input is Sorted using the --check-order option

Here is an example. Since testfile1 is unsorted towards the end, an error is produced in the output.

$ cat testfile1
a India
b US
c Ireland
d UK
f Australia
e Canada

$ cat testfile2
a NewDelhi
b Washington
c Dublin
d London
e Toronto

$ join --check-order testfile1 testfile2
a India NewDelhi
b US Washington
c Ireland Dublin
d UK London
join: testfile1:6: is not sorted: e Canada

12. Do not Check the Sort Order using the --nocheck-order option

This is the opposite of the previous example: no check of the sort order is done, and no error message is displayed.

$ join --nocheck-order testfile1 testfile2
a India NewDelhi
b US Washington
c Ireland Dublin
d UK London

13. Print Unpairable Lines using -a option

If the two input files cannot be mapped one to one, the -a FILENUM option prints the lines that could not be paired while comparing. FILENUM is the file number (1 or 2).

In the following example, we see that using -a1 also produced the last line of testfile1 (f Australia), which has no pair in testfile2.

$ cat testfile1
a India
b US
c Ireland
d UK
e Canada
f Australia

$ cat testfile2
a NewDelhi
b Washington
c Dublin
d London
e Toronto

$ join testfile1 testfile2
a India NewDelhi
b US Washington
c Ireland Dublin
d UK London
e Canada Toronto

$ join -a1 testfile1 testfile2
a India NewDelhi
b US Washington
c Ireland Dublin
d UK London
e Canada Toronto
f Australia

14. Print Only Unpaired Lines using -v option

In the above example both paired and unpaired lines were produced in the output. But, if only unpaired output is desired then use -v option as shown below.

$ join -v1 testfile1 testfile2
f Australia

15. Join Based on Different Columns from Both Files using -1 and -2 option

By default the first column in both files is used for comparing before joining. You can change this behavior using the -1 and -2 options.

In the following example, the first column of testfile1 was compared with the second column of testfile2 to produce the join command output.

$ cat testfile1
a India
b US
c Ireland
d UK
e Canada

$ cat testfile2
NewDelhi a
Washington b
Dublin c
London d
Toronto e

$ join -1 1 -2 2 testfile1 testfile2
a India NewDelhi
b US Washington
c Ireland Dublin
d UK London
e Canada Toronto

Rsync in detail

Important features of rsync

Speed: First time, rsync replicates the whole content between the source and destination directories. Next time, rsync transfers only the changed blocks or bytes to the destination location, which makes the transfer really fast.
Security: rsync allows encryption of data using ssh protocol during transfer.
Less Bandwidth: rsync uses compression and decompression of data block by block at the sending and receiving ends respectively. So the bandwidth used by rsync will always be less compared to other file transfer protocols.
Privileges: No special privileges are required to install and execute rsync

Syntax

$ rsync options source destination

Source and destination could be either local or remote. In case of remote, specify the login name, remote server name and location.
Example 1. Synchronize Two Directories in a Local Server

To sync two directories in a local computer, use the following rsync -zvr command.

$ rsync -zvr /var/opt/installation/inventory/ /root/temp
building file list ... done
sva.xml
svB.xml
.
sent 26385 bytes  received 1098 bytes  54966.00 bytes/sec
total size is 44867  speedup is 1.63
$

In the above rsync example:

-z is to enable compression
-v verbose
-r indicates recursive

Now let us see the timestamp on one of the files that was copied from source to destination. As you see below, rsync didn’t preserve timestamps during sync.

$ ls -l /var/opt/installation/inventory/sva.xml /root/temp/sva.xml
-r--r--r-- 1 bin  bin  949 Jun 18  2009 /var/opt/installation/inventory/sva.xml
-r--r--r-- 1 root bin  949 Sep  2  2009 /root/temp/sva.xml

Example 2. Preserve timestamps during Sync using rsync -a

rsync option -a indicates archive mode. -a option does the following,

Recursive mode
Preserves symbolic links
Preserves permissions
Preserves timestamp
Preserves owner and group

Now, executing the same command provided in example 1 (But with the rsync option -a) as shown below:

$ rsync -azv /var/opt/installation/inventory/ /root/temp/
building file list ... done
./
sva.xml
svB.xml
.
sent 26499 bytes  received 1104 bytes  55206.00 bytes/sec
total size is 44867  speedup is 1.63
$

As you see below, rsync preserved timestamps during sync.

$ ls -l /var/opt/installation/inventory/sva.xml /root/temp/sva.xml
-r--r--r-- 1 root  bin  949 Jun 18  2009 /var/opt/installation/inventory/sva.xml
-r--r--r-- 1 root  bin  949 Jun 18  2009 /root/temp/sva.xml

Example 3. Synchronize Only One File

To copy only one file, specify the file name to rsync command, as shown below.

$ rsync -v /var/lib/rpm/Pubkeys /root/temp/
Pubkeys

sent 42 bytes  received 12380 bytes  3549.14 bytes/sec
total size is 12288  speedup is 0.99

Example 4. Synchronize Files From Local to Remote

rsync allows you to synchronize files/directories between the local and remote system.

$ rsync -avz /root/temp/ thegeekstuff@192.168.200.10:/home/thegeekstuff/temp/
Password:
building file list ... done
./
rpm/
rpm/Basenames
rpm/Conflictname

sent 15810261 bytes  received 412 bytes  2432411.23 bytes/sec
total size is 45305958  speedup is 2.87

While doing synchronization with the remote server, you need to specify username and ip-address of the remote server. You should also specify the destination directory on the remote server. The format is username@machinename:path

As you see above, it asks for password while doing rsync from local to remote server.

Sometimes you don’t want to enter the password while backing up files from local to remote server. For example, If you have a backup shell script, that copies files from local to remote server using rsync, you need the ability to rsync without having to enter the password.

To do that, set up passwordless SSH login as we explained earlier.
Example 5. Synchronize Files From Remote to Local

When you want to synchronize files from remote to local, specify remote path in source and local path in target as shown below.

$ rsync -avz thegeekstuff@192.168.200.10:/var/lib/rpm /root/temp
Password:
receiving file list ... done
rpm/
rpm/Basenames
.
sent 406 bytes  received 15810230 bytes  2432405.54 bytes/sec
total size is 45305958  speedup is 2.87

Example 6. Remote shell for Synchronization

rsync allows you to specify the remote shell which you want to use. You can use rsync ssh to enable the secured remote connection.

Use rsync -e ssh to specify which remote shell to use. In this case, rsync will use ssh.

$ rsync -avz -e ssh thegeekstuff@192.168.200.10:/var/lib/rpm /root/temp
Password:
receiving file list ... done
rpm/
rpm/Basenames

sent 406 bytes  received 15810230 bytes  2432405.54 bytes/sec
total size is 45305958  speedup is 2.87

Example 7. Do Not Overwrite the Modified Files at the Destination

In a typical sync situation, if a file is modified at the destination, we might not want to overwrite the file with the old file from the source.

Use the rsync -u option to do exactly that (i.e. do not overwrite a file at the destination if it has been modified there). In the following example, the file called Basenames has already been modified at the destination, so it will not be overwritten with rsync -u.

$ ls -l /root/temp/Basenames
total 39088
-rwxr-xr-x 1 root root        4096 Sep  2 11:35 Basenames

$ rsync -avzu thegeekstuff@192.168.200.10:/var/lib/rpm /root/temp
Password:
receiving file list ... done
rpm/

sent 122 bytes  received 505 bytes  114.00 bytes/sec
total size is 45305958  speedup is 72258.31

$ ls -lrt
total 39088
-rwxr-xr-x 1 root root        4096 Sep  2 11:35 Basenames

Example 8. Synchronize only the Directory Tree Structure (not the files)

Use the rsync -d option to synchronize only the directory structure from the source to the destination. The example below copies the directory entries under /var/lib, not the files inside those directories.

$ rsync -v -d thegeekstuff@192.168.200.10:/var/lib/ .
Password:
receiving file list ... done
logrotate.status
CAM/
YaST2/
acpi/

sent 240 bytes  received 1830 bytes  318.46 bytes/sec
total size is 956  speedup is 0.46

Example 9. View the rsync Progress during Transfer

When you use rsync for backup, you might want to know the progress of the backup: how many files have been copied, at what rate they are being copied, and so on.

The rsync --progress option displays detailed progress of the rsync execution as shown below.

$ rsync -avz --progress thegeekstuff@192.168.200.10:/var/lib/rpm/ /root/temp/
Password:
receiving file list ...
19 files to consider
./
Basenames
5357568 100%   14.98MB/s    0:00:00 (xfer#1, to-check=17/19)
Conflictname
12288 100%   35.09kB/s    0:00:00 (xfer#2, to-check=16/19)
.
.
.
sent 406 bytes  received 15810211 bytes  2108082.27 bytes/sec
total size is 45305958  speedup is 2.87

You can also use the rsnapshot utility (which uses rsync) to back up a local Linux server, or to back up a remote Linux server.
Example 10. Delete the Files Created at the Target

If a file is not present at the source, but is present at the target, you might want to delete the file at the target during rsync.

In that case, use the --delete option as shown below. The rsync --delete option deletes files at the target that are not present in the source directory.

# Source and target are in sync. Now creating new file at the target.
$ > new-file.txt

$ rsync -avz --delete thegeekstuff@192.168.200.10:/var/lib/rpm/ .
Password:
receiving file list ... done
deleting new-file.txt
./

sent 26 bytes  received 390 bytes  48.94 bytes/sec
total size is 45305958  speedup is 108908.55

The target had the new file new-file.txt; when synchronized with the source using the --delete option, the file new-file.txt was removed.
Example 11. Do not Create New File at the Target

If you like, you can update (sync) only the files that already exist at the target. If the source has new files that are not yet at the target, you can avoid creating them there. If you want this behavior, use the --existing option with the rsync command.

First, add a new-file.txt at the source.

[/var/lib/rpm ]$ > new-file.txt

Next, execute the rsync from the target.

$ rsync -avz --existing root@192.168.1.2:/var/lib/rpm/ .
root@192.168.1.2's password:
receiving file list ... done
./

sent 26 bytes  received 419 bytes  46.84 bytes/sec
total size is 88551424  speedup is 198991.96

As you can see from the above output, it didn't receive the new file new-file.txt.
Example 12. View the Changes Between Source and Destination

This option is useful to view the difference in the files or directories between source and destination.

At the source:

$ ls -l /var/lib/rpm
-rw-r--r-- 1 root root  5357568 2010-06-24 08:57 Basenames
-rw-r--r-- 1 root root    12288 2008-05-28 22:03 Conflictname
-rw-r--r-- 1 root root  1179648 2010-06-24 08:57 Dirnames

At the destination:

$ ls -l /root/temp
-rw-r--r-- 1 root root    12288 May 28  2008 Conflictname
-rw-r--r-- 1 bin  bin   1179648 Jun 24 05:27 Dirnames
-rw-r--r-- 1 root root        0 Sep  3 06:39 Basenames

In the above example, there are two differences between the source and destination. First, the owner and group of the file Dirnames differ. Next, the size differs for the file Basenames.

Now let us see how rsync displays this difference. -i option displays the item changes.

$ rsync -avzi thegeekstuff@192.168.200.10:/var/lib/rpm/ /root/temp/
Password:
receiving file list ... done
>f.st.... Basenames
.f....og. Dirnames

sent 48 bytes  received 2182544 bytes  291012.27 bytes/sec
total size is 45305958  speedup is 20.76

In the output, rsync displays 9 characters in front of the file name or directory name, indicating the changes.

In our example, the letters in front of the Basenames (and Dirnames) says the following:

> specifies that a file is being transferred to the local host.
f represents that it is a file.
s represents size changes are there.
t represents timestamp changes are there.
o owner changed
g group changed.

Example 13. Include and Exclude Pattern during File Transfer

rsync allows you to specify patterns for including and excluding files or directories while doing the synchronization.

$ rsync -avz --include 'P*' --exclude '*' thegeekstuff@192.168.200.10:/var/lib/rpm/ /root/temp/
Password:
receiving file list ... done
./
Packages
Providename
Provideversion
Pubkeys

sent 129 bytes  received 10286798 bytes  2285983.78 bytes/sec
total size is 32768000  speedup is 3.19

In the above example, it includes only the files or directories starting with ‘P’ (using rsync include) and excludes all other files. (using rsync exclude ‘*’ )
Example 14. Do Not Transfer Large Files

You can tell rsync not to transfer files that are greater than a specific size using the rsync --max-size option.

$ rsync -avz --max-size='100K' thegeekstuff@192.168.200.10:/var/lib/rpm/ /root/temp/
Password:
receiving file list ... done
./
Conflictname
Group
Installtid
Name
Sha1header
Sigmd5
Triggername

sent 252 bytes  received 123081 bytes  18974.31 bytes/sec
total size is 45305958  speedup is 367.35

max-size=100K makes rsync transfer only files that are less than or equal to 100K. You can indicate M for megabytes and G for gigabytes.
Example 15. Transfer the Whole File

One of the main features of rsync is that it transfers only the changed blocks to the destination, instead of sending the whole file.

If network bandwidth is not an issue for you (but CPU is), you can transfer whole files using the rsync -W option. This speeds up the rsync process, as it doesn't have to perform checksums at the source and destination.

#  rsync -avzW  thegeekstuff@192.168.200.10:/var/lib/rpm/ /root/temp
Password:
receiving file list ... done
./
Basenames
Conflictname
Dirnames
Filemd5s
Group
Installtid
Name

sent 406 bytes  received 15810211 bytes  2874657.64 bytes/sec
total size is 45305958  speedup is 2.87
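One more rsync option worth knowing before trying these examples on production data, especially with --delete: -n (--dry-run) makes rsync report what it would transfer or remove without actually changing anything.

$ rsync -avzn --delete thegeekstuff@192.168.200.10:/var/lib/rpm/ /root/temp/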

Awk Introduction and Printing Operations


Awk is a programming language which allows easy manipulation of structured data and the generation of formatted reports. Awk stands for the names of its authors “Aho, Weinberger, and Kernighan”

Awk is mostly used for pattern scanning and processing. It searches one or more files to see if they contain lines that match the specified patterns and then performs the associated actions.

Some of the key features of Awk are:

Awk views a text file as records and fields.
Like common programming language, Awk has variables, conditionals and loops
Awk has arithmetic and string operators.
Awk can generate formatted reports

Awk reads from a file or from its standard input, and outputs to its standard output. Awk does not get along with non-text files.

Syntax:

awk '/search pattern1/ {Actions}
/search pattern2/ {Actions}' file

In the above awk syntax:

search pattern is a regular expression.
Actions – statement(s) to be performed.
several patterns and actions are possible in Awk.
file – Input file.
The single quotes around the program prevent the shell from interpreting any of its special characters.

Awk Working Methodology

Awk reads the input files one line at a time.
For each line, it matches against the given patterns in order and, if a pattern matches, performs the corresponding action.
If no pattern matches, no action will be performed.
In the above syntax, either the search pattern or the action is optional, but not both.
If the search pattern is not given, then Awk performs the given actions for each line of the input.
If the action is not given, the default action is to print all the lines that match the given patterns.
Empty braces without any action do nothing; they won't perform the default printing operation.
Each statement in Actions should be delimited by a semicolon.

Let us create an employee.txt file with the following content, which will be used in the examples below.

$cat employee.txt
100  Thomas  Manager    Sales       $5,000
200  Jason   Developer  Technology  $5,500
300  Sanjay  Sysadmin   Technology  $7,000
400  Nisha   Manager    Marketing   $9,500
500  Randy   DBA        Technology  $6,000

Awk Example 1. Default behavior of Awk

By default Awk prints every line from the file.

$ awk '{print;}' employee.txt
100  Thomas  Manager    Sales       $5,000
200  Jason   Developer  Technology  $5,500
300  Sanjay  Sysadmin   Technology  $7,000
400  Nisha   Manager    Marketing   $9,500
500  Randy   DBA        Technology  $6,000

In the above example no pattern is given, so the actions apply to all the lines. The action print without any argument prints the whole line by default, so it prints every line of the file. Actions have to be enclosed within braces.
Awk Example 2. Print the lines which match the pattern.

$ awk '/Thomas/
> /Nisha/' employee.txt
100  Thomas  Manager    Sales       $5,000
400  Nisha   Manager    Marketing   $9,500

In the above example it prints all the lines which match 'Thomas' or 'Nisha'. It has two patterns. Awk accepts any number of patterns, but each set (pattern and its corresponding actions) has to be separated by a newline.
Awk Example 3. Print only specific field.

Awk has a number of built-in variables. For each record, i.e. line, it splits the record on whitespace by default and stores the fields in the $n variables. If the line has 4 words, they will be stored in $1, $2, $3 and $4. $0 represents the whole line. NF is a built-in variable which represents the total number of fields in a record.

$ awk '{print $2,$5;}' employee.txt
Thomas $5,000
Jason $5,500
Sanjay $7,000
Nisha $9,500
Randy $6,000

$ awk '{print $2,$NF;}' employee.txt
Thomas $5,000
Jason $5,500
Sanjay $7,000
Nisha $9,500
Randy $6,000

In the above example $2 and $5 represent Name and Salary respectively. We can also get the Salary using $NF, where $NF represents the last field. In the print statement, the ',' inserts the output field separator (a space by default) between the fields.
Awk Example 4. Initialization and Final Action

Awk has two important patterns which are specified by the keyword called BEGIN and END.

Syntax:

BEGIN { Actions}
{ACTION} # Action for every line in a file
END { Actions }

# is for comments in Awk

Actions specified in the BEGIN section will be executed before Awk starts reading lines from the input.
END actions will be performed after it has finished reading and processing the lines from the input.

$ awk 'BEGIN {print "Name\tDesignation\tDepartment\tSalary";}
> {print $2,"\t",$3,"\t",$4,"\t",$NF;}
> END{print "Report Generated\n--------------";
> }' employee.txt
Name    Designation    Department    Salary
Thomas      Manager      Sales              $5,000
Jason      Developer      Technology      $5,500
Sanjay      Sysadmin      Technology      $7,000
Nisha      Manager      Marketing      $9,500
Randy      DBA           Technology      $6,000
Report Generated
--------------

In the above example, it prints a header line and a closing line for the report.
Awk Example 5. Find the employees who have an employee id greater than 200

$ awk '$1 >200' employee.txt
300  Sanjay  Sysadmin   Technology  $7,000
400  Nisha   Manager    Marketing   $9,500
500  Randy   DBA        Technology  $6,000

In the above example, the first field ($1) is the employee id. So if $1 is greater than 200, the default print action prints the whole line.
Awk Example 6. Print the list of employees in Technology department

The department name is in the fourth field, so we need to check whether $4 matches the string "Technology"; if yes, print the line.

$ awk '$4 ~/Technology/' employee.txt
200  Jason   Developer  Technology  $5,500
300  Sanjay  Sysadmin   Technology  $7,000
500  Randy   DBA        Technology  $6,000

The ~ operator compares against regular expressions. If it matches, the default action, i.e. printing the whole line, is performed.
Awk Example 7. Print number of employees in Technology department

The example below checks whether the department is Technology; if so, in the Action, it just increments the count variable, which was initialized to zero in the BEGIN section.

$ awk 'BEGIN { count=0;}
$4 ~ /Technology/ { count++; }
END { print "Number of employees in Technology Dept =",count;}' employee.txt
Number of employees in Technology Dept = 3

Then at the end of processing, it just prints the value of count, which gives you the number of employees in the Technology department.

 

This will print all but the very first column:
cat somefile | awk '{$1=""; print $0}'

This will print all but the first two columns:
cat somefile | awk '{$1=$2=""; print $0}'
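As a small extension of Example 7, awk's associative arrays let you count the employees in every department in a single pass over employee.txt (the order of the output lines may vary):

$ awk '{dept[$4]++;} END { for (d in dept) print d, dept[d]; }' employee.txt
Technology 3
Sales 1
Marketing 1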

Sunday, February 24, 2013

Libsafe installation

Libsafe is a tool, written in C, for protecting a server against some of the more common buffer overflow attacks.

When you first install Libsafe, it's advisable to use the first method, since if Libsafe causes problems, one can easily unset LD_PRELOAD to stop Libsafe being used.

cd /usr/local/src/
wget http://pubs.research.avayalabs.com/src/libsafe-2.0-16.tgz
tar -xzvf libsafe-2.0-16.tgz
cd libsafe-2.0-16/
make
yes y | make install

Now that Libsafe has been built and installed, we need to ensure that it intercepts all function calls to the standard C library. We can do this in two ways.

1) We can set the environmental variable LD_PRELOAD e.g. (in bash):
$ LD_PRELOAD=/lib/libsafe.so.2
$ export LD_PRELOAD

To set this on a system-wide basis, just add this to e.g. /etc/profile (or maybe /etc/profile.local)

2) Alternatively, we can add a line to /etc/ld.so.preload
echo '/lib/libsafe.so.2' >> /etc/ld.so.preload

This will ensure that Libsafe is used for all programs, and cannot be disabled by a normal user (unlike environmental variables).

Problems with Libsafe

At this point the reader will no doubt be wondering why Libsafe isn't included by default with all Linux distributions; unfortunately, Libsafe doesn't always work, and worse still, can even cause extra problems.
Because of certain assumptions made about the stack, Libsafe will only work with x86 processors.
Programs that have been linked against libc5 won't work with Libsafe.
If a program has been compiled without a stack pointer (i.e. by using the -fomit-frame-pointer option in GCC or perhaps due to an optimizer), then Libsafe won't be able to catch any overflows.
Libsafe won't catch overflows in statically compiled programs since Libsafe works by intercepting calls to shared libraries.
If a function is included inline, then for the same reason as above Libsafe won't catch overflows.
And of course, since Libsafe only works with a limited set of functions, it won't catch buffer overflows which involve other (user-defined) functions !

from http://www.symantec.com/connect/articles/protecting-systems-libsafe

Tuesday, February 19, 2013

Linux Sed Command



You can use the sed command to change all occurrences of one string to another within a file, just like the search-and-replace feature of your word processor. The sed command can also delete a range of lines from a file. Since sed is a stream editor, it takes the file given as input, and sends the output to the screen, unless you redirect output to a file. In other words, sed does not change the input file. The general forms of the sed command are as follows:



Substitution: sed 's/<oldstring>/<newstring>/g' <file>
Deletion:     sed '<start>,<end>d' <file>




Let's start with a substitution example. If you want to change all occurrences of lamb to ham in the poem.txt file in the grep example, enter this:




sed 's/lamb/ham/g' poem.txt
Mary had a little ham
Mary fried a lot of spam
Jack ate a Spam sandwich
Jill had a ham spamwich




In the quoted string, the "s" means substitute, and the "g" means make a global change. You can also leave off the "g" (to change only the first occurrence on each line) or specify a number instead (to change only the nth occurrence on each line).




Now let's try an example involving deletion of lines. The values for start and end can be either a line number or a pattern to match. All lines from the start line to the end line are removed from the output. This example will delete starting at line 2, up to and including line 3:




sed '2,3d' poem.txt
Mary had a little lamb
Jill had a lamb spamwich




This example will delete starting at line 1, up to and including the next line containing Jack:




sed '1,/Jack/d' poem.txt
Jill had a lamb spamwich




The most common use of sed is to change one string of text to another string of text. But I should mention that the strings that sed uses for search and delete are actually regular expressions. This means you can use pattern matching, just as with grep. Although you'll probably never need to do anything like this, here's an example anyway. To change any occurrences of lamb at the end of a line to ham, and save the results in a new file, enter this:




sed 's/lamb$/ham/g' poem.txt > new.file




Since we directed output to a file, sed didn't print anything on the screen. If you look at the contents of new.file it will show these lines:




Mary had a little ham
Mary fried a lot of spam
Jack ate a Spam sandwich
Jill had a lamb spamwich




Use the man sed command for more information on using sed.
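If you do want sed to modify the file itself instead of writing to the screen or a new file, GNU sed provides the -i (in-place) option, for example:

sed -i 's/lamb/ham/g' poem.txt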

Connecting wireless through command line

The first command you need to use is ifconfig. With this command you are going to enable your wireless device. Most likely your device will be called wlan0. So in order to enable this you would enter the command (as root):

ifconfig wlan0 up

You won’t see any feedback unless there is a problem.

The next step is to scan for your wireless network to make sure it is available. Do this with the following command:

iwlist wlan0 scan

With this command you will see output like the following:

Cell 01 - Address: 00:21:43:4E:9B:F0
ESSID:"HAIR STROBEL"
Mode:Master
Channel:5
Frequency:2.432 GHz (Channel 5)
Quality=100/100  Signal level:-45 dBm  Noise level=-95 dBm
Encryption key:on
IE: WPA Version 1
Group Cipher : TKIP
Pairwise Ciphers (1) : TKIP
Authentication Suites (1) : PSK
IE: IEEE 802.11i/WPA2 Version 1
Group Cipher : TKIP
Pairwise Ciphers (1) : CCMP
Authentication Suites (1) : PSK
Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 22 Mb/s
6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s
36 Mb/s; 48 Mb/s; 54 Mb/s
Extra:tsf=000002f1d9be01b7


So you know this network is available. From the above output you can also see this network is employing WPA2, so you will need a passkey. If you don’t know that passkey, you are out of luck (which would be the case no matter if you were using a front end in Linux, Windows, or Mac.)

Now it’s time to configure your connection. To do this issue the command:

iwconfig wlan0 essid NETWORK_ID key WIRELESS_KEY

Where NETWORK_ID is the ESSID of the network with which you want to connect and WIRELESS_KEY is the security key needed to connect to the wireless access point.

Note: iwconfig defaults to using a HEX key. If you want to use an ASCII key you will have to add the "s:" prefix to your key like so:

iwconfig wlan0 essid NETWORK_ID key s:WIRELESS_KEY

Now that you have your configuration set, it’s time to get an IP address with the help of dhclient. Issue the command:

dhclient wlan0

If no output is reported there are no errors. You should now be up and running.
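Before moving on you can confirm the association and connectivity, for example:

iwconfig wlan0        # should now show your ESSID and an Access Point address
ping -c 3 google.com  # quick connectivity test (any reachable host will do)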

Make it a script

Of course, who wants to type out all of those commands? Instead, you could create a script for this like so:

#!/bin/bash
ifconfig wlan0 up
iwconfig wlan0 essid NETWORK_ID key WIRELESS_KEY
dhclient wlan0


Where NETWORK_ID is the actual ESSID of the network and WIRELESS_KEY is the security key for that network. Save this script with the filename wireless_up.sh and then make the script executable with the command:

chmod u+x wireless_up.sh

You can make this a global command by placing this script in /usr/local/bin. You can now issue the command wireless_up.sh from anywhere in your directory structure and it will run, connecting you to the configured wireless access point.

 
sudo iwconfig wlan0 freq 2.422G

Or by running:
sudo iwconfig wlan0 channel 3

ifconfig wlan0 down
iwconfig wlan0 mode managed
ifconfig wlan0 up
iwconfig wlan0 channel 3
iwconfig wlan0 key xxxxxxxxxx
iwconfig wlan0 key restricted
iwconfig wlan0 essid "Blah Blah Foo Bar"
iwconfig wlan0 ap xx:yy:zz:aa:bb:cc
sleep 5
dhcpcd -d wlan0



Hosts file in linux and windows

The hosts file is a computer file used by an operating system to map hostnames to IP addresses. The hosts file is a plain text file, and is conventionally named hosts.

The hosts file is one of several system facilities that assists in addressing network nodes in a computer network. It is a common part of an operating system's Internet Protocol (IP) implementation, and serves the function of translating human-friendly hostnames into numeric protocol addresses, called IP addresses, that identify and locate a host in an IP network.

In some operating systems, the hosts file's content is used preferentially over other methods, such as the Domain Name System (DNS), but many systems implement name service switches (e.g., nsswitch.conf for Linux and Unix) to provide customization. Unlike DNS, the hosts file is under the direct control of the local computer's administrator.
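The file format is simply an IP address followed by one or more hostnames per line. A couple of illustrative entries (the 192.168.1.10 host is just an example):

127.0.0.1       localhost
192.168.1.10    myserver.example.com   myserver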

Typical locations of the hosts file on various operating systems:

Operating System / Version(s)                           Location
Unix, Unix-like, POSIX                                  /etc/hosts
Microsoft Windows 3.1                                   %WinDir%\HOSTS
Windows 95, 98/98SE, Me                                 %WinDir%\hosts
Windows NT, 2000, XP (x86 & x64), 2003, Vista, 7 and 8  %SystemRoot%\system32\drivers\etc\hosts
Windows Mobile                                          Registry key under HKEY_LOCAL_MACHINE\Comm\Tcpip\Hosts
Apple Macintosh 9 and earlier
Mac OS X 10.0 – 10.1.5                                  (Added through NetInfo or niload)
Mac OS X 10.2 and newer                                 /etc/hosts (a symbolic link to /private/etc/hosts)
Novell NetWare                                          SYS:etc\hosts
OS/2 & eComStation                                      "bootdrive":\mptn\etc\
Symbian OS 6.1–9.0                                      C:\system\data\hosts
Symbian OS 9.1+                                         C:\private\10000882\hosts
MorphOS (NetStack)                                      ENVARC:sys/net/hosts
AmigaOS 4                                               DEVS:Internet/hosts
Android                                                 /etc/hosts (a symbolic link to /system/etc/hosts)
iOS 2.0 and newer                                       /etc/hosts (a symbolic link to /private/etc/hosts)
TOPS-20
Plan 9                                                  /lib/ndb/hosts
BeOS                                                    /boot/beos/etc/hosts
Haiku                                                   /boot/common/settings/network/hosts
OpenVMS (UCX)                                           UCX$HOST
OpenVMS (TCPware)                                       TCPIP$HOST

 

Saturday, February 16, 2013

update RVSiteBuilder

For cPanel


Go to root WHM / Add-ons / RVSiteBuilder Manager. On the Manager homepage, if you are not on the latest version, it will show you the link 'Upgrade to latest version'. Following the link will upgrade your RVSiteBuilder.

If you cannot access the RVSiteBuilder Manager interface, you can update from the command line:

perl /usr/local/cpanel/whostmgr/docroot/cgi/rvsitebuilderinstaller/autoinstaller.cgi

Server Hardening

1.)chkrootkit (Check Rootkit) is a common Unix-based program intended to help system administrators check their system for known rootkits
cd /usr/local/src
wget ftp://ftp.pangeia.com.br/pub/seg/pac/chkrootkit.tar.gz
wget ftp://ftp.pangeia.com.br/pub/seg/pac/chkrootkit.md5
md5sum -c chkrootkit.md5
tar -zxvf chkrootkit.tar.gz
cd chkrootkit-*/
make sense
./chkrootkit
cd ..

Adding program to daily cron job
===============================
You can add a cron entry for running chkrootkit automatically and send a scan report to your mail address.
Create and add the following entries to “/etc/cron.daily/chkrootkit.sh”

#!/bin/sh
(
/usr/local/chkrootkit/chkrootkit
) | /bin/mail -s 'CHROOTKIT Daily Run (ServerName)' your@email.com

chmod +x /etc/cron.daily/chkrootkit.sh

2.)RootKit Hunter – A tool which scans for backdoors and malicious softwares present in the server.
cd /usr/local/src
wget http://downloads.rootkit.nl/rkhunter-1.2.7.tar.gz
wget http://nchc.dl.sourceforge.net/project/rkhunter/rkhunter/1.4.0/rkhunter-1.4.0.tar.gz
tar -zxvf rkhunter*
cd rkhunter*
./installer.sh --install
rkhunter --check
log : /var/log/rkhunter.log

To update it
=========
rkhunter --update
rkhunter --propupd
=========

How to setup a daily scan report
================================
pico /etc/cron.daily/rkhunter.sh

set crontab to scan and email the report
#!/bin/sh
(
/usr/local/bin/rkhunter --versioncheck
/usr/local/bin/rkhunter --update
/usr/local/bin/rkhunter --cronjob --report-warnings-only
) | /bin/mail -s 'rkhunter Daily Run (PutYourServerNameHere)' your@email.com

chmod +x /etc/cron.daily/rkhunter.sh

3.)    APF or CSF – A policy based iptables firewall system used for the easy configuration of iptables rules.

CSF
================
cd /usr/local/src
wget http://www.configserver.com/free/csf.tgz
tar -xzf csf.tgz
cd csf
sh install.sh
echo "CSF successfully installed!"
When your configuration is complete, you need to set the following in /etc/csf/csf.conf to disable "TESTING" mode and enable your firewall:
TESTING = "1"
to
TESTING = "0"

csf -r
===============
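Once the firewall is live, day-to-day management is mostly done with the csf command itself; a few commonly used flags (the IPs are just examples):

csf -d 203.0.113.5     # deny (block) an IP
csf -a 203.0.113.6     # allow (whitelist) an IP
csf -g 203.0.113.5     # search the iptables rules for an IP
csf -r                 # restart the firewall after config changes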

APF
===============
cd /usr/local
wget http://www.rfxnetworks.com/downloads/apf-current.tar.gz
tar -xvzf apf-current.tar.gz
cd apf*
./install.sh

in config file

Change the value of USE_AD to
USE_AD="1"

Change the value of DEVEL_MODE to
DEVEL_MODE="1"

Save and quit.
chkconfig --del apf
apf -s
If there are no issues while DEVEL_MODE has the firewall flushing every five minutes,
you can go back into the conf file and edit the value
DEVEL_MODE="1"
that is, change it to
DEVEL_MODE="0"
===============

sample
TCP_CPORTS="21,22,25,26,53,80,110,143,443,465,953,993,995,2082,2083,2086,2087,2095,2096,3306,5666,3000_3500"

4.) Brute Force Detection – BFD is a shell script for parsing applicable logs and checking for authentication failures; it blocks the attacker's IP in the firewall.

cd /usr/local/src
wget http://www.rfxn.com/downloads/bfd-current.tar.gz
tar -xvzf bfd-current.tar.gz
cd bfd*
./install.sh

echo -e "Please enter your email:"
read email
echo "You entered: $email"
echo 'ALERT_USR="1"' >>  /usr/local/bfd/conf.bfd
echo "EMAIL_USR=\"$email\"" >>  /usr/local/bfd/conf.bfd
echo "Brute Force Detection has been installed!"
echo "Email would be sent to $email"
/usr/local/sbin/bfd -s

5.) SSH Securing – For better security of SSH connections.
Disabling root login and changing the listening port.

1. Create a user for SSH, e.g. sshadminz
2. Give the user wheel privileges through WHM
3. In /etc/ssh/sshd_config change the entry PermitRootLogin to no
4. In /etc/ssh/sshd_config change the entry #Port to Port xxxx (the port you need; make sure that port is open in csf/iptables rules)
5. Restart the sshd service

>ssh sshadmin@***.***.***.*** -p xxxx
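The relevant sshd_config lines and the restart step would look roughly like this (2222 is only an example port; use whatever you opened in the firewall in step 4):

# /etc/ssh/sshd_config
Port 2222
PermitRootLogin no

# then restart sshd (cPanel servers also have /scripts/restartsrv_sshd)
service sshd restart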

Setting an SSH Legal Message

The message is contained within the following file: /etc/motd

ALERT! You are entering a secured area! Your IP and login information
have been recorded. System administration has been notified.

This system is restricted to authorized access only. All activities on
this system are recorded and logged. Unauthorized access will be fully
investigated and reported to the appropriate law enforcement agencies.

6.) Host.conf Hardening –Prevents IP spoofing and dns poisoning

The host.conf file resides in /etc/host.conf.
order bind,hosts
multi on
nospoof on

7.)  Sysctl.conf Hardening – Prevents syn-flood attacks and other network abuses.
#Kernel sysctl configuration file for Red Hat Linux
#
# For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and
# sysctl.conf(5) for more details.

# Disables packet forwarding
net.ipv4.ip_forward=0

# Disables IP source routing
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.lo.accept_source_route = 0
net.ipv4.conf.eth0.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0

# Enable IP spoofing protection, turn on source route verification
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.lo.rp_filter = 1
net.ipv4.conf.eth0.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1

# Disable ICMP Redirect Acceptance
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.lo.accept_redirects = 0
net.ipv4.conf.eth0.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0

# Enable Log Spoofed Packets, Source Routed Packets, Redirect Packets
net.ipv4.conf.all.log_martians = 1
net.ipv4.conf.lo.log_martians = 1
net.ipv4.conf.eth0.log_martians = 1

# Disables the magic-sysrq key
kernel.sysrq = 0

# Decrease the time default value for tcp_fin_timeout connection
net.ipv4.tcp_fin_timeout = 15

# Decrease the time default value for tcp_keepalive_time connection
net.ipv4.tcp_keepalive_time = 1800

# Turn off the tcp_window_scaling
net.ipv4.tcp_window_scaling = 0

# Turn off the tcp_sack
net.ipv4.tcp_sack = 0

# Turn off the tcp_timestamps
net.ipv4.tcp_timestamps = 0

# Enable TCP SYN Cookie Protection
net.ipv4.tcp_syncookies = 1

# Enable ignoring broadcasts request
net.ipv4.icmp_echo_ignore_broadcasts = 1

# Enable bad error message Protection
net.ipv4.icmp_ignore_bogus_error_responses = 1

# Log Spoofed Packets, Source Routed Packets, Redirect Packets
net.ipv4.conf.all.log_martians = 1

# Increases the size of the socket queue (effectively, q0).
net.ipv4.tcp_max_syn_backlog = 1024

# Increase the tcp-time-wait buckets pool size
net.ipv4.tcp_max_tw_buckets = 1440000

# Allowed local port range
net.ipv4.ip_local_port_range = 16384 65535

# Turn on execshield
kernel.exec-shield=1
kernel.randomize_va_space=1

After you make the changes to the file you need to run /sbin/sysctl -p and sysctl -w net.ipv4.route.flush=1 to enable the changes without a reboot.

The rules were taken from: http://ipsysctl-tutorial.frozentux.net/ipsysctl-tutorial.html

8.) FTP Hardening – Secure FTP software by upgrading to latest version

FTP: In WHM >> Service Configuration, there is an option to change 2 settings for FTP. By default
the first will be set to use pure-ftpd (this is good) and
the second is to allow anonymous FTP (this is very bad).
Turn anonymous FTP OFF.
Also consider how many FTP logons you allow each account in your Feature Lists. Up to 3 is fine - anything over 10 is getting silly and simply invites your users to use your server for file sharing.
===
OR
===
"Hardening Pure/Proftpd"
cp -p /etc/pure-ftpd.conf /etc/pure-ftpd.conf.bk
vi /etc/pure-ftpd.conf
AnonymousOnly no
NoAnonymous yes
PassivePortRange 30000 30050
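After editing pure-ftpd.conf, restart the FTP service so the changes take effect; on a cPanel server /scripts/restartsrv_pureftpd should do it, otherwise use your distribution's init script (e.g. service pure-ftpd restart):

/scripts/restartsrv_pureftpd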

9.) TMP Hardening – Secure the /tmp partition
>/scripts/securetmp

10.) Secure and Optimize Apache – Tweak Apache for better performance, stability and security.

[root@host /] vim /etc/httpd/conf/httpd.conf
This list is a composite of the settings we will be reviewing from fresh install on a cPanel server:

===
OR AT WHM   Home » Service Configuration » Apache Configuration
===
MinSpareServers 5
MaxSpareServers 10
ServerLimit 600
MaxClients 600
MaxRequestsPerChild 0
KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 3
Timeout 30
===========

Timeout 300
Usually this value doesn’t require editing and a default of 300 is sufficient. Lowering the ‘Timeout’ value will cause a long running script to terminate earlier than expected.
On virtualized servers like VPS servers, lowering this value to 100 can help improve performance.
KeepAlive On
This setting should be “On” unless the server is getting requests from hundreds of IPs at once.
High volume and/or load balanced servers should have this setting disabled (Off) to increase connection throughput.
MaxKeepAliveRequests 100
This setting limits the number of requests allowed per persistent connection when KeepAlive is on. If it is set to 0, unlimited requests will be allowed.
It is recommended to keep this value at 100 for virtualized accounts like VPS accounts. On dedicated servers it is recommended that this value be modified to 150.
KeepAliveTimeout 15
The number of seconds Apache will wait for another request before closing the connection. Setting this to a high value may cause performance problems in heavily loaded servers. The higher the timeout, the more server processes will be kept occupied waiting on connections with idle clients.
It is recommended that this value be lowered to 5 on all servers.
MinSpareServers 5
This directive sets the desired minimum number of idle child server processes. An idle process is one which is not handling a request. If there are fewer spare servers idle than specified by this value, then the parent process creates new children at a maximum rate of 1 per second. Setting this parameter to a large number is almost always a bad idea.
Liquidweb recommends adjusting the value for this setting to the following:
Virtualized server, ie VPS 5
Dedicated server with 1-2GB RAM 10
Dedicated server with 2-4GB RAM 20
Dedicated server with 4+ GB RAM 25
===========
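
When sizing ServerLimit/MaxClients for your RAM, a rough rule of thumb is available memory divided by the average Apache process size. A minimal sketch for estimating that average (assumes the Apache binary shows up as httpd, as on a stock cPanel build):

# Average resident size (MB) of running Apache children
ps -ylC httpd --sort=rss | awk '$8 ~ /^[0-9]+$/ {sum+=$8; n++} END {if (n) printf "Average RSS: %.1f MB over %d processes\n", sum/n/1024, n}'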

11.)WHM Tweaking – Tweak WHM for better security and performance.

Server Setup =>> Tweak Settings
Check the following items...
Under Domains: Prevent users from parking/adding on common internet domains (e.g. hotmail.com, aol.com).
Under Mail: Attempt to prevent pop3 connection floods.
Default catch-all/default address behavior for new accounts - set to blackhole.

Goto Server Setup =>> Tweak Security
Enable php open_basedir Protection

12.) PHP Tightening – Tweak PHP by changing the php.ini settings below

Edit php.ini as per need

[root@server ]# nano /usr/local/lib/php.ini
safe_mode = On
allow_url_fopen = Off
expose_php = Off
enable_dl = Off
magic_quotes_gpc = On
register_globals = Off
display_errors = Off
disable_functions = system, show_source, symlink, exec, dl, shell_exec, passthru, phpinfo, escapeshellarg, escapeshellcmd, popen, proc_open, ini_set
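
After saving php.ini, restart Apache so the new settings are picked up (use whichever restart method you normally rely on; the cPanel wrapper below is one option):

service httpd restart
(or: /scripts/restartsrv_httpd)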

13.) PHP Upgrade – Compile PHP to its latest stable version, which increases server security.

/scripts/easyapache

14.)Shell Fork Bomb/Memory Hog Protection

Home »  Security Center »  Shell Fork Bomb Protection

15.) ClamAV – A cross-platform antivirus toolkit able to detect many types of malicious software, including viruses
Main >> cPanel >> Manage Plugins
* Tick ClamAV (install and keep updated) and save to install the plugin

*********
cd /usr/local/src/

yum install zlib zlib-devel

wget http://sourceforge.net/projects/clamav/files/clamav/0.95.2/clamav-0.95.2.tar.gz/download -O clamav-0.95.2.tar.gz

tar -zxvf clamav-0.95.2.tar.gz

cd clamav-0.95.2

useradd clamav

./configure

make

make install

ldconfig
*********
* Run the scan
[root@server ]# clamscan -r /home
In WHM -> Plugins -> ClamAV Connector, ensure that "Scan Mail" is checked.

clamscan -ir / -l clamscanreport
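
If you want the scan to run unattended, a cron entry along these lines works; refresh the definitions first with freshclam (part of the ClamAV suite). The log path is just an illustration, and the binary paths assume the source install above with its default /usr/local prefix:

# Nightly at 02:00: update definitions, then scan /home and log only infected files
0 2 * * * /usr/local/bin/freshclam --quiet ; /usr/local/bin/clamscan -ri /home -l /var/log/clamscan-nightly.log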

16.) System Integrity Monitor – Service monitoring of HTTP, FTP, DNS, SSH, MySQL & more

cd /usr/src/
wget http://www.rfxn.com/downloads/sim-current.tar.gz
tar zxf sim-current.tar.gz
cd sim-3*
./setup -i
perl -pi -e "s/^init.named.*/init.named on/" /usr/local/sim/config/mods.control
perl -pi -e "s/^init.httpd.*/init.httpd on/" /usr/local/sim/config/mods.control
perl -pi -e "s/^init.mysqld.*/init.mysqld on/" /usr/local/sim/config/mods.control
perl -pi -e "s/^init.exim.*/init.exim on/" /usr/local/sim/config/mods.control
sim -j

17.) SPRI – Tool for changing the priority of different processes running on the server according to their level of importance, thereby increasing the performance and productivity of the server.

cd /usr/src
wget http://www.rfxn.com/downloads/spri-current.tar.gz
tar zxvf spri-current.tar.gz
cd spri-0*
./install.sh
spri -v

18.) MySQL optimization – Optimize MySQL values for better performance and stability
/usr/local/cpanel/3rdparty/mysqltuner/mysqltuner.pl

#DO NOT MODIFY THE FOLLOWING COMMENTED LINES!
[mysqld]
max_connections = 400
key_buffer = 16M
myisam_sort_buffer_size = 32M
join_buffer_size = 1M
read_buffer_size = 1M
sort_buffer_size = 2M
table_cache = 1024
thread_cache_size = 286
interactive_timeout = 25
wait_timeout = 1000
connect_timeout = 10
max_allowed_packet = 16M
max_connect_errors = 10
query_cache_limit = 1M
query_cache_size = 16M
query_cache_type = 1
tmp_table_size = 16M
skip-innodb

[mysqld_safe]
open_files_limit = 8192

[mysqldump]
quick
max_allowed_packet = 16M

[myisamchk]
key_buffer = 32M
sort_buffer = 32M
read_buffer = 16M
write_buffer = 16M

MySQL parameters like query_cache_size, key_buffer_size, table_cache, sort_buffer_size, read_rnd_buffer_size, thread_cache_size, tmp_table_size etc. should be altered according to your server's status.
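
A typical workflow is to run the tuner, adjust /etc/my.cnf accordingly, and restart MySQL so the new values take effect (a sketch; the restart script path assumes a standard cPanel install):

/usr/local/cpanel/3rdparty/mysqltuner/mysqltuner.pl
vi /etc/my.cnf
/scripts/restartsrv_mysql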

19.) Root Logger – Root Login Email Alert

1. cd /root
2. vi .bashrc
3. Scroll to the end of the file, then add the following (all on one line):

echo 'ALERT - Root Shell Access (YourServerName) on:' `date` `who` | mail -s "Alert: Root Access from `who | cut -d'(' -f2 | cut -d')' -f1`" you@yourdomain.com

Replace YourServerName with the handle for your actual server.
Replace you@yourdomain.com with your actual email address.

20.) MyTOP – A console-based (non-GUI) tool for monitoring the threads and overall performance of MySQL

/scripts/realperlinstaller --force Getopt::Long
/scripts/realperlinstaller --force DBI
/scripts/realperlinstaller --force DBD::mysql
/scripts/realperlinstaller --force Term::ReadKey

wget http://jeremy.zawodny.com/mysql/mytop/mytop-1.6.tar.gz
tar zxpfv mytop-1.6.tar.gz
cd mytop-1.6
perl Makefile.PL && make && make install

You may see an 'Error in option spec: "long|!"' error message when you try to execute the mytop command. To solve this error:
After running perl Makefile.PL, edit the mytop script inside the installation location and search for the line
"long|!"              => \$config{long_nums},
and comment it out:
#"long|!"              => \$config{long_nums},
then execute make install from the source directory to install the altered mytop script.

After installing mytop you need to create a new file at /root/.mytop (the mytop config file for root) with the lines below (the MySQL root password can be found in /root/.my.cnf):
user=root
pass=<your mysql password>
host=localhost
db=mysql
delay=5
port=3306
socket=
batchmode=0
header=1
color=1
idle=1
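
Since this file contains the MySQL root password, it is worth restricting its permissions (a small extra step, not part of the original instructions):

chmod 600 /root/.mytop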

To run it:
mytop -d mysql

21.) MultiTail – MultiTail is a program for monitoring multiple log files, in the fashion of the original tail program
cd /usr/src/
wget http://www.vanheusden.com/multitail/multitail-5.2.12.tgz
tar zxvf multitail-5.2.12.tgz
cd multitail-*
yum install ncurses ncurses-devel -y
make install
multitail -i /etc/host.conf -i /etc/sysctl.conf
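
On a cPanel box the more useful targets are usually the web and mail logs, for example (log paths assume a standard cPanel layout):

multitail /usr/local/apache/logs/error_log /var/log/exim_mainlog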

22.) Mod_Security – ModSecurity is an embeddable web application firewall.

To install mod_security, go to WHM => EasyApache (or alternatively via CLI, run /scripts/easyapache). After you select your Apache and PHP versions,
you’ll be brought to the Short Options page. Select mod_security from the list, then proceed with the build.

When the build is done, mod_security will be installed. The files are kept in the following location:
/usr/local/apache/conf/modsec2.user.conf

Once installed, mod_security provides some default rules. The rules file resides under /usr/local/apache/conf/.
The file modsec2.user.conf.default contains the rules, which should be copied over to modsec2.user.conf:

cp -p modsec2.user.conf.default modsec2.user.conf

Restart the httpd service once.
**********
http://www.modsecurity.org/documentation/quick-examples.html
mod_security rules
http://www.webhostingtalk.com/showthread.php?t=1072701
http://www.apachelounge.com/viewtopic.php?t=74
**********
When hack attempts are identified by mod_security, they are logged in /usr/local/apache/logs/audit_log with the IP of the offender and what rule was violated.
Visitors that trigger mod_security rules are greeted with a “406: Not Acceptable” error when doing so.
However, mod_security does occasionally block legitimate website access attempts,
specifically for software that happens to make calls consistent with a specific rule that mod_security is configured to block.
Therefore, you may wish to either disable that rule, or disable mod_security for a specific domain or part of your website.
Doing this is rather easy from command line.

First, open up your httpd.conf (/usr/local/apache/conf/httpd.conf) and locate your domain’s <virtualhost> block.
Under it you’ll see a line like this that is commented out:
# Include "/usr/local/apache/conf/userdata/std/2/$user/$domain/*.conf"
Uncomment this line, then create the folder indicated (note that $user is your username, and $domain is your domain name):
mkdir -p /usr/local/apache/conf/userdata/std/2/$user/$domain/
cd /usr/local/apache/conf/userdata/std/2/$user/$domain/
Create a file called modsec.conf, and insert this line:
SecRuleEngine Off
To apply, restart Apache
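
On a cPanel server that typically means rebuilding the vhost includes and restarting Apache, for example (a sketch; script names assume a stock cPanel layout, and $user is your actual username):

/scripts/ensure_vhost_includes --user=$user
/scripts/restartsrv_httpd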
====
OR
====
Disabling Mod-Security for a single account
To disable the mod_security for a particular account, just add the following in the users .htaccess file
SecFilterEngine Off
SecFilterScanPOST Off

If mod_security2
<IfModule mod_security2.c>
SecRuleEngine Off
</IfModule>

23.) Mod_Evasive – mod_evasive is an evasive maneuvers module for Apache that provides evasive action in the
event of an HTTP DoS attack or brute force attack. It is also designed to be a detection and network
management tool and can be easily configured to talk to ipchains, firewalls, routers, and more.

Download the latest source file from http://www.zdziarski.com
cd /usr/local/src/
wget http://www.zdziarski.com/projects/mod_evasive/mod_evasive_1.10.1.tar.gz
tar -xvzf mod_evasive_1.10.1.tar.gz
cd mod_evasive/

We also have cPanel running on this box, so, to install, we run the following:

/usr/local/apache/bin/apxs -i -a -c mod_evasive20.c

Now, that will create an entry in the httpd.conf file, and, if we want to retain that after an upgrade/rebuild, we need to tell cPanel not to take it out! To do this, we now run this:

/usr/local/cpanel/bin/apache_conf_distiller --update

Now, to change the settings for mod_evasive, we need to add them somewhere. All we have done so far is install the actual module into Apache, and, even with a restart, it would not be using it. So, I like to add things into my includes files, either through WHM or directly through the terminal. To do this, we run the following:

vim /usr/local/apache/conf/includes/post_virtualhost_2.conf

Once the file is open, let's add the following lines to the bottom of the file:

DOSHashTableSize 3097
DOSPageCount 2
DOSSiteCount 50
DOSPageInterval 1
DOSSiteInterval 1
DOSBlockingPeriod 3600
DOSEmailNotify root

=====
OR
=====
<IfModule mod_evasive20.c>
DOSHashTableSize 3097
DOSPageCount 5
DOSSiteCount 100
DOSPageInterval 2
DOSSiteInterval 2
DOSBlockingPeriod 600
</IfModule>
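
Restart Apache afterwards so mod_evasive starts enforcing the limits. If you have monitoring boxes or a static office IP that should never be blocked, mod_evasive also supports a DOSWhitelist directive that can sit alongside the settings above (the IP below is only a placeholder):

DOSWhitelist 127.0.0.1
service httpd restart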

24.) Maldetect – Linux Malware Detect (LMD), a malware scanner for Linux

cd /usr/local/src
wget  http://www.rfxn.com/downloads/maldetect-current.tar.gz
tar -xzvf maldetect-current.tar.gz
cd maldetect-*
sh install.sh
cd ..

To run maldet against the whole filesystem:

maldet -a /
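
A full filesystem scan is slow; for routine use you can limit the scan to recently changed files and pull up the report afterwards (options as documented in the maldet README; paths are just examples):

maldet -r /home 7
maldet --report SCANID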

Friday, February 15, 2013

DomainKeys (DKIM) and SPF Installations cpanel

DomainKeys (DKIM) and SPF records are becoming a common, and annoying, demand among email providers, mainly Yahoo and Hotmail. In short, both are methods of email authentication designed to verify email integrity, by linking a sender to a specific server or hostname. In other words, DomainKeys and SPF records specify what servers can send email on behalf of a domain name.

/usr/local/cpanel/bin/domain_keys_installer $user
/usr/local/cpanel/bin/spf_installer $user

# /usr/local/cpanel/bin/dkim_keys_install <CPANELUSER>
# /usr/local/cpanel/bin/spf_installer <CPANELUSER>

for user in `ls -A /var/cpanel/users` ; do /usr/local/cpanel/bin/dkim_keys_installer $user && /usr/local/cpanel/bin/spf_installer $user ; done

To verify an SPF record and/or DomainKey, you can run a DNS check:

dig default._domainkey.$domain TXT
dig $domain TXT

SPF
domain.co 14400 IN TXT "v=spf1 +a +mx +ip4:108.163.165.58 ?all"

%domain%. IN TXT "v=spf1 a mx ptr ~all"

Put this in the zone:
domain.extension. IN TXT "v=spf1 a mx ip4:XXX.XXX.XXX.XXX ?all"

XXX.XXX.XXX.XXX is the primary IP of the network interface, not the shared or dedicated hosting IP.
DKIM

default._domainkey 14400 IN TXT "v=DKIM1\; k=rsa\; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDAJXzC1vhEoH7yfJfusEWNkFz6DbcS1Ij/fAGi30HltiprZowdlCKIXq1TIWFjJE2vOOxJCnOSYMjxiLYXBzrDN9jVH8sd8H/ZpVMdvV7PUVPWOlbRYIqLwqBM8dvnxzmEvvrXP1r2nNviWrALARt1kJDr5EI+xzCNvfDxXKGDzwIDAQAB\;"

Root Login Email alert

Root Login Email alert
1. cd /root
2. vi .bashrc
3. Scroll to the end of the file, then add the following (all on one line):
echo 'ALERT - Root Shell Access (YourServerName) on:' `date` `who` | mail -s "Alert: Root Access from `who | cut -d'(' -f2 | cut -d')' -f1`" you@yourdomain.com
Replace YourServerName with the handle for your actual server.
Replace you@yourdomain.com with your actual email address.

Special permissions on files and directories: SetUID, SetGID and Sticky bit.

Special permissions on files and directories: SetUID, SetGID and Sticky bit.

Special Permission | on a File | on a Directory
SUID or Set User ID | A program is executed with the file owner's permissions (rather than with the permissions of the user who executes it). | Files created in the directory inherit its UID.
SGID or Set Group ID | The effective group of an executing program is the file owner group. | Files created in the directory inherit its GID.
Sticky (bit) | A program sticks in memory after execution. | Any user can create files, but only the owner of a file can delete it.

Permissions as output in columns 2 to 10 of ls -l and their meaning:

Permissions | Meaning
--S------   | SUID is set, but user (owner) execute is not set.
--s------   | SUID and user execute are both set.
-----S---   | SGID is set, but group execute is not set.
-----s---   | SGID and group execute are both set.
--------T   | Sticky bit is set, but other execute is not set.
--------t   | Sticky bit and other execute are both set.

Thursday, February 14, 2013

Disabling Mod-Security for a single account

Disabling Mod-Security for a single account

To disable the mod_security for a particular account, just add the following in the users .htaccess file

SecFilterEngine Off

SecFilterScanPOST Off

Wednesday, February 13, 2013

lsof – List all Open Files with the lsof Command

1. List all Open Files with lsof Command

The example below shows a long listing of open files; only some entries are extracted here for better understanding. The output displays columns like COMMAND, PID, USER, FD, TYPE etc.

# lsof

COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
init 1 root cwd DIR 253,0 4096 2 /

The sections and their values are mostly self-explanatory. However, we'll review the FD & TYPE columns more closely.

FD – stands for File Descriptor and may contain values such as:

cwd current working directory
rtd root directory
txt program text (code and data)
mem memory-mapped file

Also, in the FD column, a number like 1u is an actual file descriptor, followed by u, r or w for its mode:

r for read access.
w for write access.
u for read and write access.

TYPE – the type of file and its identification:

DIR – Directory
REG – Regular file
CHR – Character special file.
FIFO – First In First Out

2. List User Specific Opened Files

# lsof -u user

COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 1838 user cwd DIR 253,0 4096 2 /
sshd 1838 user rtd DIR 253,0 4096 2 /

3. Find Processes running on Specific Port

To find all processes running on a specific port, just use the following command with the -i option. The example below lists all running processes on port 22.

# lsof -i TCP:22

COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 1471 root 3u IPv4 12683 0t0 TCP *:ssh (LISTEN)
sshd 1471 root 4u IPv6 12685 0t0 TCP *:ssh (LISTEN)

4. List Only IPv4 & IPv6 Open Files

The examples below show only IPv4 and only IPv6 open network files, using separate commands.

# lsof -i 4

COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
rpcbind 1203 rpc 6u IPv4 11326 0t0 UDP *:sunrpc
rpcbind 1203 rpc 7u IPv4 11330 0t0 UDP *:954

# lsof -i 6

COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
rpcbind 1203 rpc 9u IPv6 11333 0t0 UDP *:sunrpc
rpcbind 1203 rpc 10u IPv6 11335 0t0 UDP *:954

5. List Open Files of TCP Port ranges 1-1024

To list all running processes with open files on TCP ports 1 through 1024:

# lsof -i TCP:1-1024

COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
rpcbind 1203 rpc 11u IPv6 11336 0t0 TCP *:sunrpc (LISTEN)

6. Exclude User with ‘^’ Character

You can exclude a particular user by prefixing the username with '^'. In the example below, the root user is excluded.

# lsof -i -u^root

COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
rpcbind 1203 rpc 6u IPv4 11326 0t0 UDP *:sunrpc
rpcbind 1203 rpc 7u IPv4 11330 0t0 UDP *:954
rpcbind 1203 rpc 8u IPv4 11331 0t0 TCP *:sunrpc (LISTEN)

7. Find Out Who Is Using Which Files and Commands

The example below shows that the user 'user' is running the ping command and working in the /etc directory.

# lsof -i -u user

COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
bash 1839 user cwd DIR 253,0 12288 15 /etc
ping 2525 user cwd DIR 253,0 12288 15 /etc

8. List all Network Connections

The following command with the '-i' option lists all network connections (LISTENING & ESTABLISHED).

# lsof -i

COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
rpcbind 1203 rpc 6u IPv4 11326 0t0 UDP *:sunrpc
rpcbind 1203 rpc 7u IPv4 11330 0t0 UDP *:954

9. Search by PID

The example below only shows the process whose PID is 1.

# lsof -p 1

COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
init 1 root cwd DIR 253,0 4096 2 /
init 1 root rtd DIR 253,0 4096 2 /

10. Kill all Activity of Particular User

Sometimes you may have to kill all the processes of a specific user. The command below kills all processes belonging to the user 'user'.

# kill -9 `lsof -t -u user`
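
The same -t trick can be combined with a port filter, which is handy when a service will not restart because its old process is still holding the port (illustrative only; check the PIDs before killing anything):

# kill -9 `lsof -t -i TCP:8080`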

Tuesday, February 12, 2013

FFMPEG installation along with all support modules

yum install gcc gcc-c++ libgcc gd gd-devel gettext freetype \
freetype-devel ImageMagick ImageMagick-devel libjpeg* libjpeg-devel* \
libpng* libpng-devel* libstdc++* libstdc++-devel* libtiff* \
libtiff-devel* libtool* libungif* libungif-deve* libxml* libxml2* \
libxml2-devel* zlib* zlib-devel* automake* autoconf* samba-common* \
ncurses-devel ncurses patch make -y

 

mkdir  /usr/src/ffmpegscript

mkdir /usr/local/cpffmpeg

==================
libwmf-0.2.8.3.tar.gz
==================
ldconfig
cd /usr/src/ffmpegscript
wget https://www.dropbox.com/s/177e3hskscfvkfa/libwmf-0.2.8.3.tar.gz
tar -zxvf libwmf-0.2.8.3.tar.gz
cd libwmf-0.2.8.3/
./configure --prefix=/usr/local/cpffmpeg
make
make install

====================
ruby-1.8.6.tar.gz
====================
ldconfig
cd /usr/src/ffmpegscript
wget https://www.dropbox.com/s/45q2nalubstz5jk/ruby-1.8.6.tar.gz
tar -zxvf ruby-1.8.6.tar.gz
cd ruby-1.8.6/
./configure --prefix=/usr/local/cpffmpeg
make
make install

=====================
flvtool2_1.0.5_rc6.tgz
=====================
ldconfig
cd /usr/src/ffmpegscript
wget https://www.dropbox.com/s/hppyhpev0ylyi8f/flvtool2_1.0.5_rc6.tgz
tar -zxvf flvtool2_1.0.5_rc6.tgz
cd flvtool2_1.0.5_rc6
/usr/local/cpffmpeg/bin/ruby setup.rb config
/usr/local/cpffmpeg/bin/ruby setup.rb setup
/usr/local/cpffmpeg/bin/ruby setup.rb install
ln -s /usr/local/cpffmpeg/bin/flvtool2 /usr/local/bin/flvtool2
ln -s /usr/local/cpffmpeg/bin/flvtool2 /usr/bin/flvtool2

====================
lame-3.97.tar.gz
====================
ldconfig
cd /usr/src/ffmpegscript
wget https://www.dropbox.com/s/vd99zooleq1rlgv/lame-3.97.tar.gz
tar -zxvf lame-3.97.tar.gz
cd lame-3.97
./configure --prefix=/usr/local/cpffmpeg --enable-mp3x --enable-mp3rtp
make
make install

======================
all-20071007.tar.bz2
======================
ldconfig
cd /usr/src/ffmpegscript
wget https://www.dropbox.com/s/5mvcho608fagu8r/all-20071007.tar.bz2
tar -xvjf all-20071007.tar.bz2
chown -R root.root all-20071007/
mkdir -pv /usr/local/cpffmpeg/lib/codecs/
cp -vrf all-20071007/* /usr/local/cpffmpeg/lib/codecs/
chmod -R 755 /usr/local/cpffmpeg/lib/codecs/

====================
libogg-1.1.3.tar.gz
====================
ldconfig
cd /usr/src/ffmpegscript
wget https://www.dropbox.com/s/1t33o4r1qpx2jv2/libogg-1.1.3.tar.gz
tar -xvzf libogg-1.1.3.tar.gz
cd libogg-1.1.3/
./configure --prefix=/usr/local/cpffmpeg
make
make install

====================
libvorbis-1.1.2.tar.gz
====================
ldconfig
cd /usr/src/ffmpegscript
wget https://www.dropbox.com/s/cc8rb3ikk8zjr37/libvorbis-1.1.2.tar.gz
tar -xvzf libvorbis-1.1.2.tar.gz
cd libvorbis-1.1.2
./configure --prefix=/usr/local/cpffmpeg
make
make install

=====================
vorbis-tools-1.1.1.tar.gz
=====================
ldconfig
cd /usr/src/ffmpegscript
wget https://www.dropbox.com/s/3gif7xywt42e1aa/vorbis-tools-1.1.1.tar.gz
tar -xvzf vorbis-tools-1.1.1.tar.gz
cd vorbis-tools-1.1.1/
./configure --prefix=/usr/local/cpffmpeg
make
make install

========================
libtheora-1.0alpha7.tar.gz
========================
ldconfig
cd /usr/src/ffmpegscript
wget https://www.dropbox.com/s/os8jkmj20bppb9t/libtheora-1.0alpha7.tar.gz
tar -xvzf libtheora-1.0alpha7.tar.gz
cd libtheora-1.0alpha7/
./configure --prefix=/usr/local/cpffmpeg --with-ogg=/usr/local/cpffmpeg --with-vorbis=/usr/local/cpffmpeg
make
make install

==========================
amrnb-7.0.0.1.tar.bz2
==========================
ldconfig
cd /usr/src/ffmpegscript
wget https://www.dropbox.com/s/di95tlyrkip4asw/amrnb-7.0.0.1.tar.bz2
tar -xvjf amrnb-7.0.0.1.tar.bz2
cd amrnb-7.0.0.1/
./configure --prefix=/usr/local/cpffmpeg
make
make install

==========================
amrwb-7.0.0.2.tar.bz2
==========================
ldconfig
cd /usr/src/ffmpegscript
wget https://www.dropbox.com/s/xl4utvgikqkwpgl/amrwb-7.0.0.2.tar.bz2
tar -xvjf amrwb-7.0.0.2.tar.bz2
cd amrwb-7.0.0.2/
./configure --prefix=/usr/local/cpffmpeg
make
make install

=========================
a52dec-0.7.4.tar.gz
=========================
ldconfig
cd /usr/src/ffmpegscript
wget https://www.dropbox.com/s/eaugb32ppzbmmyo/a52dec-0.7.4.tar.gz
tar -xvzf a52dec-0.7.4.tar.gz
cd a52dec-0.7.4/
./bootstrap
ARCh=`arch`
#64bit processor bug fix
if [[ $ARCh == 'x86_64' ]];then
./configure --prefix=/usr/local/cpffmpeg --enable-shared 'CFLAGS=-fPIC'

else
./configure --prefix=/usr/local/cpffmpeg --enable-shared
fi
make
make install

==========================
faac-1.26.tar.gz
==========================
ldconfig
cd /usr/src/ffmpegscript
wget https://www.dropbox.com/s/55y56ud5quqtssa/faac-1.26.tar.gz
tar -xvzf faac-1.26.tar.gz
cd faac/
./bootstrap
./configure --prefix=/usr/local/cpffmpeg --with-mp4v2
make
make install

============================
faad2-2.6.1.tar.gz
============================
ldconfig
cd /usr/src/ffmpegscript
wget https://www.dropbox.com/s/zwk4gcxkk3km0ge/faad2-2.6.1.tar.gz
tar -xvzf faad2-2.6.1.tar.gz
cd faad2/
./bootstrap
./configure --prefix=/usr/local/cpffmpeg --with-mpeg4ip
make
make install

=========================
yasm-0.6.1.tar.gz
=========================
ldconfig
cd /usr/src/ffmpegscript
wget https://www.dropbox.com/s/facxu4ofmxdz7cb/yasm-0.6.1.tar.gz
tar -xvzf yasm-0.6.1.tar.gz
cd yasm-0.6.1/
./configure --prefix=/usr/local/cpffmpeg/
make
make install
ln -sf /usr/local/cpffmpeg/bin/yasm /usr/local/bin/yasm
ldconfig

========================
nasm-2.02.tar.gz
========================
ldconfig
cd /usr/src/ffmpegscript
wget https://www.dropbox.com/s/a6jkolmxva0jxmv/nasm-2.02.tar_2.gz -O nasm-2.02.tar.gz
tar -xvzf nasm-2.02.tar.gz
cd nasm-2.02/
./configure --prefix=/usr/local/cpffmpeg/
make
make install
ln -sf /usr/local/cpffmpeg/bin/nasm /usr/local/bin/nasm
ldconfig

=======================
xvidcore-1.1.0.tar.bz2
=======================
ldconfig
cd /usr/src/ffmpegscript
wget https://www.dropbox.com/s/iqpwt8qkjja6m5f/xvidcore-1.1.0.tar.bz2
tar -xvjf xvidcore-1.1.0.tar.bz2
cd xvidcore-1.1.0/build/generic/
./configure --prefix=/usr/local/cpffmpeg/
make
make install

========================
x264-snapshot-20080516-2245.tar.gz
========================
ldconfig
cd /usr/src/ffmpegscript
wget https://www.dropbox.com/s/wzd3215mvygwy6o/x264-snapshot-20080516-2245.tar.gz
tar -xvzf x264-snapshot-20080516-2245.tar.gz
cd x264-snapshot-20080516-2245/
./configure --enable-shared
make
make install

========================
re2c-0.13.5.tar.gz
========================
ldconfig
cd /usr/src/ffmpegscript
wget https://www.dropbox.com/s/7gua46n3t2clvgo/re2c-0.13.5.tar.gz
tar -xvzf re2c-0.13.5.tar.gz
cd re2c-0.13.5/
./configure --prefix=/usr/local/cpffmpeg/
make
make install
ln -s /usr/local/cpffmpeg/bin/re2c /usr/local/bin/re2c

=========================
MPlayer-1.0rc1.tar.bz2
=========================
ldconfig
cd /usr/src/ffmpegscript
wget https://www.dropbox.com/s/mxd56u8pgmmdm30/MPlayer-1.0rc1.tar.bz2
tar -xvjf MPlayer-1.0rc1.tar.bz2
cd MPlayer-1.0rc1/
./configure --prefix=/usr/local/cpffmpeg/ --with-codecsdir=/usr/local/cpffmpeg/lib/codecs/ \
--with-extraincdir=/usr/local/cpffmpeg/include --with-extralibdir=/usr/local/cpffmpeg/lib
make
make install
cp -f etc/codecs.conf /usr/local/cpffmpeg/etc/mplayer/codecs.conf
ln -sf /usr/local/cpffmpeg/bin/mplayer /usr/local/bin/mplayer
ln -sf /usr/local/cpffmpeg/bin/mplayer /usr/bin/mplayer
ln -sf /usr/local/cpffmpeg/bin/mencoder /usr/bin/mencoder
ln -sf /usr/local/cpffmpeg/bin/mencoder /usr/local/bin/mencoder

=========================
ffmpeg-SVN-r14473.tar.gz
=========================
ldconfig
cd /usr/src/ffmpegscript
wget https://www.dropbox.com/s/lmvpaz0e7hgirab/ffmpeg-SVN-r14473.tar.gz
tar -xvzf ffmpeg-SVN-r14473.tar.gz
cd ffmpeg/
ldconfig
./configure --prefix=/usr/local/cpffmpeg --enable-shared --enable-nonfree \
--enable-gpl --enable-pthreads --enable-liba52 --enable-libamr-nb \
--enable-libamr-wb --enable-libfaac --enable-libfaad --enable-libmp3lame \
--enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid \
--extra-cflags=-I/usr/local/cpffmpeg/include/ --extra-ldflags=-L/usr/local/cpffmpeg/lib \
--enable-cross-compile
make
make tools/qt-faststart
make install
cp -vf tools/qt-faststart /usr/local/cpffmpeg/bin/
ln -sf /usr/local/cpffmpeg/bin/ffmpeg /usr/local/bin/ffmpeg
ln -sf /usr/local/cpffmpeg/bin/ffmpeg /usr/bin/ffmpeg
ln -sf /usr/local/cpffmpeg/bin/qt-faststart /usr/local/bin/qt-faststart
ln -sf /usr/local/cpffmpeg/bin/qt-faststart /usr/bin/qt-faststart
ldconfig
/usr/bin/ffmpeg -formats

=============================

export LD_LIBRARY_PATH=/usr/local/cpffmpeg/lib:/usr/local/lib:/usr/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=/usr/local/cpffmpeg/lib:/usr/lib:/usr/local/lib:$LIBRARY_PATH
export CPATH=/usr/local/cpffmpeg/include:/usr/include/:/usr/local/include:$CPATH

=========================
ffmpeg-php-0.5.3.1.tbz2
=========================
ldconfig
cd /usr/src/ffmpegscript
wget https://www.dropbox.com/s/o5shxs6zsxps7ur/ffmpeg-php-0.5.3.1.tbz2
tar -jxvf ffmpeg-php-0.5.3.1.tbz2
cd ffmpeg-php-0.5.3.1/
phpize
./configure --enable-shared --with-ffmpeg=/usr/local/cpffmpeg
make
make install

=====================
echo '[PHP]' > $PHP_INI
echo " " >> $PHP_INI
echo "extension_dir = \"$EXTENSION_DIR\" " >> $PHP_INI
echo "post_max_size = 200M " >> $PHP_INI
echo "upload_max_filesize = 200M " >> $PHP_INI
echo "extension=ffmpeg.so" >>$PHP_INI
echo " " >> $PHP_INI
cat $PHP_INI.ffmpeg >> $PHP_INI
=====================
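
Note: $PHP_INI and $EXTENSION_DIR come from the original installer script and are not defined in the snippet above; it also assumes a copy of the original php.ini was saved as $PHP_INI.ffmpeg beforehand, since the last line appends that copy back. A minimal sketch of the missing setup (the values here are assumptions - adjust them for your server):

# Assumed values, not part of the original snippet
PHP_INI=/usr/local/lib/php.ini
EXTENSION_DIR=`php-config --extension-dir`
# Back up the original php.ini so the last line has something to append
cp -p $PHP_INI $PHP_INI.ffmpeg

Once the new php.ini is written, restart Apache and confirm the extension loads with: php -i | grep ffmpeg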