Wednesday, December 18, 2013

Account DNS Check plugin for cPanel/WHM

This plugin gives administrators an easy way to list the domains on their cPanel/WHM server that do not resolve to the correct IP. This is very helpful when doing server-to-server transfers, or when auditing a server to remove old accounts. You can run this script and quickly see which domains still point to the old server, or which customers are no longer hosted on your server.

 

[Screenshots: Account DNS Check running from the WHM, and from the command line]
Installation Instructions
The installation procedure for this plugin requires root access to the server via the console or SSH. Below are step-by-step instructions for installing the plugin.

# cd /home
# rm -f latest-accountdnscheck
# wget -O latest-accountdnscheck http://www.ndchost.com/cpanel-whm/plugins/accountdnscheck/download.php
# sh latest-accountdnscheck
Using the plugin from the WebHostManager
Log into WHM, click on Plugins, then Account DNS Check. Depending on how many domains are on the server, the speed of your resolver, how many domains don't resolve, and so on, the plugin may take a few minutes to show the output.

Using the plugin from the Command Line
For those of you who would rather run this plugin from the command line, that can be done too.

# /var/cpanel/accountdnscheck/scripts/cli_run.sh
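For reference, the core of what the plugin does can be sketched in a few lines of shell. The resolver below is stubbed with made-up domains and IPs so the logic is readable on its own; on a real server you would replace `resolve` with something like `dig +short "$1" | tail -n1` and loop over the domains listed in /etc/localdomains.

```shell
#!/bin/sh
# Sketch of the plugin's core check: flag domains whose A record does
# not match the server IP. SERVER_IP and the domains are placeholders.
SERVER_IP="203.0.113.10"

# Stub resolver for illustration; swap in `dig +short "$1" | tail -n1`
# on a real server.
resolve() {
    case "$1" in
        good.example)  echo "203.0.113.10" ;;
        stale.example) echo "198.51.100.7" ;;
    esac
}

for domain in good.example stale.example; do
    ip=$(resolve "$domain")
    [ "$ip" = "$SERVER_IP" ] || echo "MISMATCH: $domain -> ${ip:-no A record}"
done
```

Every domain printed is a candidate for cleanup or a stale DNS record.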

Sunday, December 15, 2013

“exim dead but subsys locked”

1) Stop the service

/etc/init.d/exim stop
or
service exim stop
2) Create an empty file called "eximdisable" under "/etc"
touch /etc/eximdisable
That’s it!! Now when you try to restart or start the exim service, you will get the following error.

/etc/init.d/exim status
exim dead but subsys locked
That means it will remain stopped, and “chkservd” can't start it! :)

So if you ever find the error “exim dead but subsys locked” after disabling exim this way, now you know how to fix it. It's simple: just remove the “eximdisable” file and you are good to go.

rm -f /etc/eximdisable
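Both halves of this trick fit in one small helper. This is a sketch, not official cPanel tooling: with DRY_RUN=1 (the default here) it only prints the commands so you can inspect the flow, and you would run it with DRY_RUN=0 as root on a real cPanel server.

```shell
#!/bin/sh
# Toggle exim via the /etc/eximdisable flag file described above.
DRY_RUN=${DRY_RUN:-1}
FLAG=/etc/eximdisable

run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

case "${1:-disable}" in
    disable)
        run service exim stop     # step 1: stop the service
        run touch "$FLAG"         # step 2: create the flag file
        ;;
    enable)
        run rm -f "$FLAG"         # remove the flag so chkservd can start exim
        run service exim start
        ;;
esac
```

Usage: `sh eximtoggle.sh disable` or `sh eximtoggle.sh enable` (name is hypothetical).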
Now you know how to disable exim and how to fix the error “exim dead but subsys locked” :)

Let us know if you face any further issues. We will be right here to help.

Saturday, December 14, 2013

cPanel: Moving accounts from one partition (hard drive) to another

(Home >> Account Functions >> Rearrange an Account)

 

Rearrange an Account
To change an account’s hard drive:
Select the desired account from the list.
You may use the Account Search feature to search for an account by domain or by user.

Click Rearrange.
Select the drive from the menu.
Note: To move accounts between hard drives, each hard drive must match the value of /home set in the Basic Config section of Basic cPanel & WHM Setup. Any additional home directories that match the value set in Basic cPanel & WHM Setup will also be used for new home directory creation, for example /home, /home2, /newhome. If the value does not match the hard drive, you cannot move the account. This feature is disabled if left blank.

Click Move Account.

Mod Security plugin: Access denied for user 'modsec'@'localhost' (using password: YES)

Error:

The mod_security plugin could not connect to the database. Please verify that MySQL is running. Error: Access denied for user 'modsec'@'localhost' (using password: YES)

Answer

First find the password the hourly cron job expects:

grep dbpassword /etc/cron.hourly/modsecparse.pl

Then re-create the grant in MySQL with that password (example value shown):

GRANT ALL ON modsec.* TO 'modsec'@'localhost' IDENTIFIED BY 'odu6lGYKAIyP';
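If you want to script the fix, the only fiddly part is pulling the password out of the Perl script. The sed pattern below is an assumption about how modsecparse.pl quotes the value, so compare it against the grep output first; the mysql command is left commented because it must run as root on the real server.

```shell
#!/bin/sh
# Pull the quoted value off a `dbpassword` line (the quoting pattern is
# an assumption about modsecparse.pl; verify with grep first).
extract_pass() {
    sed -n "s/.*dbpassword[^'\"]*['\"]\([^'\"]*\)['\"].*/\1/p"
}

# On the real server (as root):
#   PASS=$(grep dbpassword /etc/cron.hourly/modsecparse.pl | extract_pass)
#   mysql -e "GRANT ALL ON modsec.* TO 'modsec'@'localhost' IDENTIFIED BY '$PASS'; FLUSH PRIVILEGES;"

# Demo on a sample line:
echo "\$dbpassword = 'odu6lGYKAIyP';" | extract_pass
```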

Sunday, December 1, 2013

LVM RESTORE

While working on Linux production boxes, a system admin sometimes deletes LVM partitions by mistake. Using the command “vgcfgrestore” we can recover deleted LVM partitions, because Linux keeps backup copies of the LVM configuration in the /etc/lvm/archive directory. In my scenario I deleted a 10GB LVM partition; follow the steps below to recover it:

Step:1 First find the backed up configurations of Volume Group (my-vg)

Syntax:

# vgcfgrestore --list <Volume-Group-Name>

# vgcfgrestore --list my-vg

[Screenshot: vgcfgrestore --list output for my-vg]

As you can see in the example above, the configurations are backed up; in my case “my-vg_00002-692643462.vg” is the correct file through which I will recover my LVM partition.

Step:2 Now recover the LVM partition using vgcfgrestore and archive file
Syntax

# vgcfgrestore -f /etc/lvm/archive/<file-name> <Volume-Group-Name>

# vgcfgrestore -f /etc/lvm/archive/my-vg_00002-692643462.vg  my-vg

The output should be: “Restored volume group my-vg”
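The whole recovery can be scripted; this is a dry-run sketch (it only echoes the commands by default), with the archive file name from my example hard-coded as a placeholder. The final lvchange reactivates the recovered logical volumes, a step worth remembering after a restore.

```shell
#!/bin/sh
# Dry-run sketch of the vgcfgrestore recovery; set DRY_RUN=0 to execute.
VG="my-vg"
ARCHIVE="/etc/lvm/archive/my-vg_00002-692643462.vg"   # pick yours from --list
DRY_RUN=${DRY_RUN:-1}

run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run vgcfgrestore --list "$VG"          # step 1: inspect archived configs
run vgcfgrestore -f "$ARCHIVE" "$VG"   # step 2: restore the chosen one
run lvchange -ay "$VG"                 # reactivate the recovered LVs
```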

Thursday, November 28, 2013

Changing the IP for a subdomain or addon domain

Edit the IPs in the needed domain's files under:

/var/cpanel/userdata

/var/cpanel/users
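The edit itself is a one-line substitution. `change_ip` below works on stdin so it can be read in isolation; on a real server you would run the same sed with -i against the matching files under /var/cpanel/userdata/<user>/ and then rebuild the web server config (for example with /scripts/rebuildhttpdconf followed by an httpd restart; check the exact steps for your cPanel version). The IPs are placeholders, and the unescaped dots in the pattern are loose (a dot matches any character), which is acceptable for a sketch.

```shell
#!/bin/sh
# Swap OLD_IP for NEW_IP in cPanel userdata-style lines (sketch).
OLD_IP="192.0.2.10"
NEW_IP="203.0.113.10"

change_ip() { sed "s/$OLD_IP/$NEW_IP/g"; }

# Demo on a sample userdata line:
printf 'ip: 192.0.2.10\n' | change_ip
```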

Roundcube-Horde-Squirrelmail

First, check whether you get any errors in the logs.
Commonly recommended fixes are:

1 ) vi /usr/local/cpanel/base/3rdparty/roundcube/config/main.inc.php
and change
$rcmail_config['smtp_user'] = '%u';
to
$rcmail_config['smtp_user'] = '';
2) If you are using CSF, check the CSF configuration file (/etc/csf/csf.conf) for the entries
SMTP_BLOCK = "1"
SMTP_ALLOWLOCAL = "1"
If SMTP_ALLOWLOCAL is set to 0, change it to 1 and restart CSF, so that local SMTP connections from webmail are allowed.
3) update cPanel ( /scripts/upcp --force ) to latest "STABLE" version.
4) increase the memory_limit in the php.ini under /usr/local/cpanel/base/3rdparty/roundcube/

Related paths and scripts:
/var/cpanel/roundcube/install
/usr/local/cpanel/bin/update-roundcube
/usr/local/cpanel/install/webmail

The update-roundcube script then does the following:
1.Removes the existing Roundcube installation (via the command rm -rf /usr/local/cpanel/base/3rdparty/roundcube).
2.Extracts the appropriate Roundcube source tarball to /usr/local/cpanel/base/3rdparty using the version specified in update-roundcube.
3.Changes the ownership of the Roundcube installation to the root user and the wheel group.
4.Extracts configuration values for Maildir, mbox, and MySQL from the system settings.
5.Backs up the MySQL Roundcube database to /var/cpanel/roundcube/roundcube.backup.sql.«current timestamp».
6.(Note: only 4 copies of the Roundcube database backup are retained in /var/cpanel/roundcube.)
7.Drops the Roundcube database from MySQL.
8.Updates the Roundcube configuration files and Roundcube database SQL files with the server's settings.
9.Recreates the Roundcube database from the provided SQL files.
10.Reloads the previous Roundcube database backup, finishing the Roundcube update.
Please try the following procedure. In the MySQL shell:
=====
drop database roundcube;
create database roundcube;
exit;
Then reload the initial schema:
mysql -u root -p -D roundcube < /usr/local/cpanel/base/3rdparty/roundcube/SQL/mysql.initial.sql
After that, edit the file:
vi /usr/local/cpanel/base/3rdparty/roundcube/config/db.inc.php
In the line similar to: mysql://roundcube:ROUNDCUBE_PASSWORD@localhost/roundcube
replace with:
mysql://root:ROOT_PASSWORD@localhost/roundcube
(the placeholders stand for the Roundcube database password and the MySQL root password)
Now try your webmail using Roundcube.
Also:
* Check that db.inc.php and main.inc.php contain the correct entries, such as username, password, and database name
* Running the command /usr/local/cpanel/bin/update-roundcube --force will also help
* Go to phpMyAdmin and repair all Roundcube tables
* Also check and reset the MySQL root password in WHM, just to make sure it is set
* Last but not least, try /scripts/upcp --force
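The drop-and-reimport procedure above can be collected into one dry-run sequence. As before, this only echoes the commands by default; run the real thing as root, and where possible prefer letting update-roundcube rebuild the config instead of hand-editing db.inc.php.

```shell
#!/bin/sh
# Dry-run sketch of rebuilding the Roundcube database; DRY_RUN=0 executes.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run mysql -u root -p -e "DROP DATABASE IF EXISTS roundcube; CREATE DATABASE roundcube;"
run sh -c "mysql -u root -p -D roundcube < /usr/local/cpanel/base/3rdparty/roundcube/SQL/mysql.initial.sql"
run /usr/local/cpanel/bin/update-roundcube --force
```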

 

Horde Mail
One of the many nice features of Horde webmail is its spell-checker facility. However, using it requires that certain components be installed. To install these, follow these instructions:
1) Login to your Linux box via SSH as root.
2) Install aspell by running the following command:
yum -y install aspell aspell-en-gb
Leave it to install; you shouldn’t need to make any changes to Horde as it should automatically detect the ASPELL installation.

/scripts/fullhordereset

mysql horde
Then:
REPAIR TABLE horde_sessionhandler;

For Horde
/usr/local/cpanel/bin/update-horde --force

For SquirrelMail
/usr/local/cpanel/bin/update-squirrelmail --force

For Roundcube
/usr/local/cpanel/bin/update-roundcube --force

 

 

fsck: Mount a partition using an alternate superblock


Find out superblock location for /dev/sda2:
# dumpe2fs /dev/sda2 | grep superblock

Sample output:

Primary superblock at 0, Group descriptors at 1-6
Backup superblock at 32768, Group descriptors at 32769-32774
Backup superblock at 98304, Group descriptors at 98305-98310
Backup superblock at 163840, Group descriptors at 163841-163846
Backup superblock at 229376, Group descriptors at 229377-229382
Backup superblock at 294912, Group descriptors at 294913-294918
Backup superblock at 819200, Group descriptors at 819201-819206
Backup superblock at 884736, Group descriptors at 884737-884742
Backup superblock at 1605632, Group descriptors at 1605633-1605638
Backup superblock at 2654208, Group descriptors at 2654209-2654214
Backup superblock at 4096000, Group descriptors at 4096001-4096006
Backup superblock at 7962624, Group descriptors at 7962625-7962630
Backup superblock at 11239424, Group descriptors at 11239425-11239430
Backup superblock at 20480000, Group descriptors at 20480001-20480006
Backup superblock at 23887872, Group descriptors at 23887873-23887878
Now check and repair a Linux file system using alternate superblock # 32768:
# fsck -b 32768 /dev/sda2

Sample output:

fsck 1.40.2 (12-Jul-2007)
e2fsck 1.40.2 (12-Jul-2007)
/dev/sda2 was not cleanly unmounted, check forced.
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
Free blocks count wrong for group #241 (32254, counted=32253).
Fix? yes
Free blocks count wrong for group #362 (32254, counted=32248).
Fix? yes
Free blocks count wrong for group #368 (32254, counted=27774).
Fix? yes
..........
/dev/sda2: ***** FILE SYSTEM WAS MODIFIED *****
/dev/sda2: 59586/30539776 files (0.6% non-contiguous), 3604682/61059048 blocks
Now try to mount file system using mount command:
# mount /dev/sda2 /mnt

You can also mount the partition directly using a backup superblock. Note that mount's sb= option expects the location in units of 1 KB, so for a filesystem with 4 KB blocks the superblock at block 32768 is given as 32768*4=131072:
# mount -o sb={alternate-superblock} /dev/device /mnt
# mount -o sb=131072 /dev/sda2 /mnt

Try to browse and access file system:
# cd /mnt
# mkdir test
# ls -l
# cp file /path/to/safe/location
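Picking the first backup superblock out of dumpe2fs output is easy to script. `parse_backup_sb` reads the dumpe2fs output on stdin so the parsing can be checked on its own; on a real server you would do `SB=$(dumpe2fs /dev/sda2 | parse_backup_sb)` and then `fsck -b "$SB" /dev/sda2`.

```shell
#!/bin/sh
# Extract the first "Backup superblock at N" value from dumpe2fs output.
parse_backup_sb() {
    sed -n 's/.*Backup superblock at \([0-9]*\),.*/\1/p' | head -n1
}

# Demo on a sample line from the output above:
printf 'Backup superblock at 32768, Group descriptors at 32769-32774\n' | parse_backup_sb
```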

You should always keep backups of all important data, including configuration files.

Sunday, November 24, 2013

How to Change the Location of MySQL on cPanel

There may be some situations where you have to move the location of MySQL, for example if you're out of disk space, or perhaps to host it on another device to increase performance. Whatever the reason, moving MySQL is simple and has no impact on cPanel's functionality.

1) Create a backup

This should go without saying, but never mess with your data without making a backup of it. One simple way:
tar -cvf mysql.tar /var/lib/mysql

2) Modify my.cnf

In the [mysqld] section of /etc/my.cnf, add/modify this line:

datadir=/new/path/to/mysql

For example, if you are moving MySQL from /var/lib/mysql to /home2/mysql:

datadir=/home2/mysql

Don’t restart MySQL yet.

3) Sync the data

Now migrate the data to the new location using rsync. Typically you’ll want to stop MySQL, sync the data, then start it up again. If you have a lot of data and know the sync will take a while, do several syncs while the server is running, until they take less time. However, your last sync should always be done with MySQL stopped, especially if you have InnoDB tables. Here’s the command to sync with the example of MySQL being moved to /home2/mysql:
rsync -av /var/lib/mysql /home2
Now, relink the socket:
ln -sf /home2/mysql/mysql.sock /tmp

4) Restart MySQL

Since you already added the datadir entry to my.cnf, all you need to do is start MySQL again and everything should be working.
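Steps 1 through 4 can be strung together like this. It is a dry-run sketch (echoes only, by default); the paths are the ones used in this post, the service name may differ on your system, and the datadir edit in /etc/my.cnf still has to be done by hand before the final restart.

```shell
#!/bin/sh
# Dry-run sketch of moving MySQL's datadir to /home2; DRY_RUN=0 executes.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run tar -cvf /root/mysql.tar /var/lib/mysql   # 1. backup first
run service mysql stop                        # stop before the final sync
run rsync -av /var/lib/mysql /home2           # 3. sync the data
run ln -sf /home2/mysql/mysql.sock /tmp       # relink the socket
run service mysql start                       # 4. restart with the new datadir
```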

CPAN to install Perl modules

cPanel has an internal script that uses CPAN to install Perl modules. Learn it and love it:
/scripts/perlinstaller
Most common Perl modules can be installed from WHM ~> Install a Perl Module, or from command line. If you don't know the name of the Perl module you're installing, you may want to use the WHM installer instead, as it has a search feature and its usage is pretty self-explanatory.
For command line installations, pass the name of the perl module (case-sensitive) to the installer like so:
/scripts/perlinstaller MD5
/scripts/perlinstaller IO::Compress::Base
If the module is already installed and you need to update or reinstall it, pass --force:
/scripts/perlinstaller --force MD5
Since cPanel 11, you can also allow your users to install their own Perl modules locally in /home/$user/perl (which is automatically added to their Perl module path) so they don't have to bug you when they need a Perl module, nor do they need SSH access. You can enable this in WHM ~> Module Installers ~> Perl Module [Manage]. You do need to have compilers enabled for users though, which can be done in WHM ~> Security Center ~> Compilers Tweak.

cPanel Out of Memory Errors

I’ve seen several features of cPanel appear to malfunction, and upon reviewing /usr/local/cpanel/logs/error_log, I’d see something similar to this:
Out of memory during request for 2180 bytes, total sbrk() is 130234368 bytes!
Common places this has been known to occur:
In Webmail (Horde and Roundcube) when opening large attachments
Using cPanel’s perl module installer
You could legitimately be out of RAM, but most likely the cause is cPanel’s internal memory limit. You can raise this in WHM > Tweak Settings:
“The maximum memory a cPanel process can use before it is killed off (in megabytes). Values less than 128 megabytes can not be specified. A value of “0” will disable the memory limits.”
Or you can adjust the maxmem setting in /var/cpanel/cpanel.config.
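If you prefer to script the cpanel.config route, the setting is a plain `maxmem=<MB>` line. `set_maxmem` edits stdin here so the substitution can be checked safely; on a real server run the same sed with -i against /var/cpanel/cpanel.config (and note that cPanel may rewrite this file during updates, so re-check the value after an upcp).

```shell
#!/bin/sh
# Raise cPanel's maxmem value in cpanel.config-style input (sketch).
set_maxmem() { sed "s/^maxmem=.*/maxmem=$1/"; }

# Demo: bump the limit from 256 MB to 512 MB.
printf 'maxmem=256\n' | set_maxmem 512
```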

FFMPEG & FFMPEG-PHP CENTOS 5, 6 EASY INSTALL

########First way#########

Download the installer

Code:

wget http://9xhost.net/scripts/ffmpeg.sh


run the installer

Code:

sh ffmpeg.sh


The make install command will show PHP extensions path where ffmpeg PHP extension is installed:

Code:

root@server [~/ffmpeg-php-0.6.0]# make install
Installing shared extensions: /usr/local/lib/php/extensions/no-debug-non-zts-20060613/


Now edit php.ini file

Code:

nano /usr/local/lib/php.ini


Add following line at end of php.ini and this will enable ffmpeg PHP extension:

Code:

extension="ffmpeg.so"


Restart Apache to make this change effective:

Code:

/scripts/restartsrv_httpd


You can verify the status of ffmpeg extension on a PHP info web page or from command line as given below:

Code:

root@server [~]# php -i | grep ffmpeg
ffmpeg
ffmpeg-php version => 0.6.0-svn
ffmpeg-php built on => Jun 2 2012 20:48:04
ffmpeg-php gd support => enabled
ffmpeg libavcodec version => Lavc52.123.0
ffmpeg libavformat version => Lavf52.111.0
ffmpeg swscaler version => SwS0.14.1
ffmpeg.allow_persistent => 0 => 0
ffmpeg.show_warnings => 0 => 0
OLDPWD => /root/ffmpeg-php-0.6.0
_SERVER["OLDPWD"] => /root/ffmpeg-php-0.6.0
_ENV["OLDPWD"] => /root/ffmpeg-php-0.6.0


6. Installation paths

Following are the file system paths of tools that we installed:

Code:

ffmpeg: /usr/bin/ffmpeg


Now open yum.conf

Code:

nano /etc/yum.conf


and add ffmpeg* to the exclude line (this stops yum from overwriting the custom build).

===============================
#######Second way##########

Create and open a new file called /etc/yum.repos.d/dag.repo

Code:

nano /etc/yum.repos.d/dag.repo


Add the following text to the file:

Code:

[dag]
name=DAG RPM Repository
baseurl=http://apt.sw.be/redhat/el$releasever/en/$basearch/dag
gpgcheck=1
enabled=1


then run

Code:

rpm --import http://apt.sw.be/RPM-GPG-KEY.dag.txt


Now we are ready to install ffmpeg

First run

Code:

yum update


then

Code:

yum install ffmpeg ffmpeg-devel ffmpeg-libpostproc


Now ffmpeg is installed

Preparing for ffmpeg-php
download the latest ffmpeg-php package:

Code:

wget http://downloads.sourceforge.net/ffmpeg-php/ffmpeg-php-0.6.0.tbz2


Untar this package, build and install it with following commands:

Code:

tar xjf ffmpeg-php-0.6.0.tbz2



Code:

cd ffmpeg-php-0.6.0


Before building, apply this fix (needed because newer ffmpeg versions renamed PIX_FMT_RGBA32 to PIX_FMT_RGB32):

Code:

sed -i 's/PIX_FMT_RGBA32/PIX_FMT_RGB32/g' ffmpeg_frame.c

Code:

phpize



Code:

./configure



Code:

make



Code:

make install


The make install command will show PHP extensions path where ffmpeg PHP extension is installed:

Code:

root@server [~/ffmpeg-php-0.6.0]# make install
Installing shared extensions: /usr/local/lib/php/extensions/no-debug-non-zts-20060613/


Now edit php.ini file

Code:

nano /usr/local/lib/php.ini


and make sure that value of extension_dir is set to PHP extension directory as given by above make install command:

Code:

extension_dir = "/usr/local/lib/php/extensions/no-debug-non-zts-20060613"


Add following line just below extension_dir and this will enable ffmpeg PHP extension:

Code:

extension="ffmpeg.so"


Restart Apache to make this change effective:

Code:

/scripts/restartsrv_httpd


You can verify the status of ffmpeg extension on a PHP info web page or from command line as given below:

Code:

root@server [~]# php -i | grep ffmpeg
ffmpeg
ffmpeg-php version => 0.6.0-svn
ffmpeg-php built on => Jun 2 2012 20:48:04
ffmpeg-php gd support => enabled
ffmpeg libavcodec version => Lavc52.123.0
ffmpeg libavformat version => Lavf52.111.0
ffmpeg swscaler version => SwS0.14.1
ffmpeg.allow_persistent => 0 => 0
ffmpeg.show_warnings => 0 => 0
OLDPWD => /root/ffmpeg-php-0.6.0
_SERVER["OLDPWD"] => /root/ffmpeg-php-0.6.0
_ENV["OLDPWD"] => /root/ffmpeg-php-0.6.0


6. Installation paths

Following are the file system paths of tools that we installed:

Code:

ffmpeg: /usr/bin/ffmpeg


Now open yum.conf

Code:

nano /etc/yum.conf


and add ffmpeg* to the exclude line (this stops yum from overwriting the custom build).
-------------------
Error 1

Code:

/root/ffmpeg/ffmpeg-php-0.7.0/ffmpeg_movie.c: In function 'zim_ffmpeg_movie___construct':
/root/ffmpeg/ffmpeg-php-0.7.0/ffmpeg_movie.c:318: error: 'list_entry' undeclared (first use in this function)
/root/ffmpeg/ffmpeg-php-0.7.0/ffmpeg_movie.c:318: error: (Each undeclared identifier is reported only once
/root/ffmpeg/ffmpeg-php-0.7.0/ffmpeg_movie.c:318: error: for each function it appears in.)
/root/ffmpeg/ffmpeg-php-0.7.0/ffmpeg_movie.c:318: error: 'le' undeclared (first use in this function)
/root/ffmpeg/ffmpeg-php-0.7.0/ffmpeg_movie.c:353: error: expected ';' before 'new_le'
/root/ffmpeg/ffmpeg-php-0.7.0/ffmpeg_movie.c:363: error: 'new_le' undeclared (first use in this function)

This build error shows up when compiling ffmpeg-php 0.7.0 against newer PHP releases, where the list_entry type used in ffmpeg_movie.c is no longer declared. A commonly reported fix is to edit ffmpeg_movie.c and change the list_entry declarations to zend_rsrc_list_entry; check the ffmpeg-php bug tracker for the exact patch.


19 ffmpeg commands for all needs

ffmpeg is a multi-platform, open-source library for working with video and audio files. I have compiled 19 useful and amazing commands covering almost all needs: video conversion, sound extraction, encoding files for iPod or PSP, and more.

 


Getting info from a video file

ffmpeg -i video.avi

Turn X images to a video sequence

ffmpeg -f image2 -i image%d.jpg video.mpg

This command will transform all the images from the current directory (named image1.jpg, image2.jpg, etc…) to a video file named video.mpg.
Turn a video to X images

ffmpeg -i video.mpg image%d.jpg

This command will generate the files named image1.jpg, image2.jpg, …

The following image formats are also available: PGM, PPM, PAM, PGMYUV, JPEG, GIF, PNG, TIFF, SGI.
Encode a video sequence for the iPod/iPhone

ffmpeg -i source_video.avi -acodec aac -ab 128kb -vcodec mpeg4 -b 1200kb -mbd 2 -flags +4mv+trell -aic 2 -cmp 2 -subcmp 2 -s 320x180 -title X final_video.mp4

Explanations :

  • Source : source_video.avi

  • Audio codec : aac

  • Audio bitrate : 128kb/s

  • Video codec : mpeg4

  • Video bitrate : 1200kb/s

  • Video size : 320px by 180px

  • Generated video : final_video.mp4


Encode video for the PSP

ffmpeg -i source_video.avi -b 300 -s 320x240 -vcodec xvid -ab 32 -ar 24000 -acodec aac final_video.mp4

Explanations :

  • Source : source_video.avi

  • Audio codec : aac

  • Audio bitrate : 32kb/s

  • Audio sampling rate : 24000Hz

  • Video codec : xvid

  • Video bitrate : 300kb/s

  • Video size : 320px by 240px

  • Generated video : final_video.mp4


Extracting sound from a video, and saving it as MP3

ffmpeg -i source_video.avi -vn -ar 44100 -ac 2 -ab 192 -f mp3 sound.mp3

Explanations :

  • Source video : source_video.avi

  • Audio bitrate : 192kb/s

  • output format : mp3

  • Generated sound : sound.mp3


Convert a wav file to Mp3

ffmpeg -i son_origine.wav -vn -ar 44100 -ac 2 -ab 192 -f mp3 son_final.mp3

Convert .avi video to .mpg

ffmpeg -i video_origine.avi video_finale.mpg

Convert .mpg to .avi

ffmpeg -i video_origine.mpg video_finale.avi

Convert .avi to animated GIF (uncompressed)

ffmpeg -i video_origine.avi gif_anime.gif

Mix a video with a sound file

ffmpeg -i son.wav -i video_origine.avi video_finale.mpg

Convert .avi to .flv

ffmpeg -i video_origine.avi -ab 56 -ar 44100 -b 200 -r 15 -s 320x240 -f flv video_finale.flv

Convert .avi to dv

ffmpeg -i video_origine.avi -s pal -r pal -aspect 4:3 -ar 48000 -ac 2 video_finale.dv

Or:
ffmpeg -i video_origine.avi -target pal-dv video_finale.dv

Convert .avi to mpeg for dvd players

ffmpeg -i source_video.avi -target pal-dvd -ps 2000000000 -aspect 16:9 finale_video.mpeg

Explanations :

  • target pal-dvd : Output format

  • ps 2000000000 maximum size for the output file, in bits (here, 2 Gb)

  • aspect 16:9 : Widescreen


Compress .avi to divx

ffmpeg -i video_origine.avi -s 320x240 -vcodec msmpeg4v2 video_finale.avi

Compress Ogg Theora to Mpeg dvd

ffmpeg -i film_sortie_cinelerra.ogm -s 720x576 -vcodec mpeg2video -acodec mp3 film_terminée.mpg

Compress .avi to SVCD mpeg2

NTSC format:
ffmpeg -i video_origine.avi -target ntsc-svcd video_finale.mpg

PAL format:
ffmpeg -i video_origine.avi -target pal-svcd video_finale.mpg

Compress .avi to VCD mpeg2

NTSC format:
ffmpeg -i video_origine.avi -target ntsc-vcd video_finale.mpg

PAL format:
ffmpeg -i video_origine.avi -target pal-vcd video_finale.mpg

Multi-pass encoding with ffmpeg

ffmpeg -i fichierentree -pass 2 -passlogfile ffmpeg2pass fichiersortie-2

Wednesday, November 20, 2013

Mysql Innodb Recovery

InnoDB

130306 22:02:18 mysqld_safe Number of processes running now: 0
130306 22:02:18 mysqld_safe mysqld restarted
130306 22:02:18 [Note] Plugin 'FEDERATED' is disabled.
130306 22:02:18 InnoDB: The InnoDB memory heap is disabled
130306 22:02:18 InnoDB: Mutexes and rw_locks use GCC atomic builtins
130306 22:02:18 InnoDB: Compressed tables use zlib 1.2.3
130306 22:02:18 InnoDB: Using Linux native AIO
130306 22:02:18 InnoDB: Initializing buffer pool, size = 128.0M
130306 22:02:18 InnoDB: Completed initialization of buffer pool
130306 22:02:18 InnoDB: highest supported file format is Barracuda.
130306 22:02:18 InnoDB: 5.5.30 started; log sequence number 1629186928
130306 22:02:18 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306
130306 22:02:18 [Note] - '0.0.0.0' resolves to '0.0.0.0';
130306 22:02:18 [Note] Server socket created on IP: '0.0.0.0'.
130306 22:02:18 [Note] Event Scheduler: Loaded 0 events
130306 22:02:18 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.5.30-cll' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Community Server (GPL)
130306 22:02:19 InnoDB: Assertion failure in thread 47204348393792 in file trx0purge.c line 840
InnoDB: Failing assertion: purge_sys->purge_trx_no <= purge_sys->rseg->last_trx_no
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
03:02:19 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.

We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail.

Steps to get it back up.

1. Stop mysqld.
2. Backup /var/lib/mysql/ib*
3. Add the following line into /etc/my.cnf

innodb_force_recovery = 4

4. Restart mysqld.
5. Dump all tables: # mysqldump -A > dump.sql
6. Drop all databases which need recovery.
7. Stop mysqld.
8. Remove /var/lib/mysql/ib*
9. Comment out innodb_force_recovery in /etc/my.cnf
10. Restart mysqld. Check the MySQL error log (by default /var/lib/mysql/<server-hostname>.err) to see how it creates new ib* files.
11. Restore databases from the dump: # mysql < dump.sql
12. Repair everything: # mysqlcheck --all-databases --repair
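The steps above condense into the following dry-run sequence (echo-only by default). The my.cnf edit is shown as a shell append for brevity; in practice you would edit the file by hand. Treat every line as a sketch to adapt, not a script to paste on a wounded server.

```shell
#!/bin/sh
# Dry-run sketch of the InnoDB forced-recovery procedure; DRY_RUN=0 executes.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

run service mysql stop
run mkdir -p /root/ib-backup
run sh -c "cp -a /var/lib/mysql/ib* /root/ib-backup/"         # 2. backup ib files
run sh -c "echo 'innodb_force_recovery = 4' >> /etc/my.cnf"   # 3. (edit by hand in practice)
run service mysql start
run sh -c "mysqldump -A > /root/dump.sql"                     # 5. dump everything
# ... drop the damaged databases, stop mysqld, remove the ib* files,
#     comment out innodb_force_recovery, restart mysqld ...
run sh -c "mysql < /root/dump.sql"                            # 11. restore
run mysqlcheck --all-databases --repair
```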

innodb force recovery options
Mode 1 - Doesn't crash MySQL when it sees a corrupt page
Mode 2 - Doesn't run background operations
Mode 3 - Doesn't attempt to roll back transactions
Mode 4 - Doesn't calculate stats or apply stored/buffered changes
Mode 5 - Doesn't look at the undo logs during start-up
Mode 6 - Doesn't roll-forward from the redo logs (ib_logfiles) during start-up


1 (SRV_FORCE_IGNORE_CORRUPT)
Let the server run even if it detects a corrupt page. Try to make SELECT * FROM tbl_name jump over corrupt index records and pages, which helps in dumping tables.
2 (SRV_FORCE_NO_BACKGROUND)
Prevent the main thread from running. If a crash would occur during the purge operation, this recovery value prevents it.
3 (SRV_FORCE_NO_TRX_UNDO)
Do not run transaction rollbacks after recovery.
4 (SRV_FORCE_NO_IBUF_MERGE)
Prevent insert buffer merge operations. If they would cause a crash, do not do them. Do not calculate table statistics.
5 (SRV_FORCE_NO_UNDO_LOG_SCAN)
Do not look at undo logs when starting the database: InnoDB treats even incomplete transactions as committed.
6 (SRV_FORCE_NO_LOG_REDO)
Do not do the log roll-forward in connection with recovery.
The database must not otherwise be used with any nonzero value of innodb_force_recovery. As a safety measure, InnoDB prevents users from performing INSERT, UPDATE, or DELETE operations when innodb_force_recovery is greater than 0.
Hint: A simple query for finding all of your InnoDB tables, in case you want to specifically target the corruption:

SELECT table_schema, table_name
FROM INFORMATION_SCHEMA.TABLES
WHERE engine = 'innodb';

Wednesday, November 6, 2013

How To Extract a Single File / Directory from Tarball Archive

The tar command allows you to extract a single file or directory using the following format. It works under UNIX, Linux, and BSD operating systems.

tar xvf /dev/st0 filename
tar xvf /dev/st0 directory-name
tar xvf mytar.ball.tar filename
tar -zxvf mytar.ball.tar.gz directory-name


Extract file to /tmp directory
tar -zxvf mytar.ball.tar.gz -C /tmp filename
tar -zxvf mytar.ball.tar.gz -C /tmp dir-name


Read tar man page for more information:
man tar

Analyse slow-query-log using mysqldumpslow & pt-query-digest

MySQL can log slow queries that take a long time to execute. In some cases this is expected, but some queries take longer because of coding mistakes. The slow-query-log can definitely help you find those queries and make it easy to debug your application.

In the WordPress world, many plugins are coded by amateurs who have no idea about the scale at which big sites operate! It's better to use the slow-query-log to find out such plugins.

Enable slow-query-log
You can enable slow-log by un-commenting following lines in /etc/mysql/my.cnf

slow-query-log = 1
slow-query-log-file = /var/log/mysql/mysql-slow.log
long_query_time = 1
log-queries-not-using-indexes
The last line tells the slow-log to also record queries not using indexes. You can keep it commented if you want to ignore queries that are not using indexes.

If your server has less RAM and you are seeing many of your queries in the slow-query-log, you may increase the value of long_query_time.

It's advisable to enable the slow-query-log only while debugging and to disable it once you are done. Let's move on to the analysis part.

mysqldumpslow

This comes bundled with mysql-server.

mysqldumpslow /var/log/mysql/mysql-slow.log
The following will show the top 5 queries that returned the maximum number of rows. It can find queries where you missed a LIMIT clause, a common performance killer!

mysqldumpslow -a -s r -t 5 /var/log/mysql/mysql-slow.log
The following will sort output by count, i.e. the number of times the query appears in the slow-log. The most frequent queries sometimes turn out to be unexpected ones!

mysqldumpslow -a -s c -t 5 /var/log/mysql/mysql-slow.log
pt-query-digest

This is part of percona toolkit.

The basic usage is:

pt-query-digest /var/log/mysql/mysql-slow.log
If you have multiple databases, you can enable filtering for a particular database:

pt-query-digest /var/log/mysql/mysql-slow.log --filter '$event->{db} eq "db_wordpress"'
mysqlsla

This is another third-party tool. It can be downloaded from here.

Basic Usage:

./mysqlsla /var/log/mysql/mysql-slow.log
Filter for a database:

./mysqlsla /var/log/mysql/mysql-slow.log -mf "db=db_name"
See also: https://github.com/box/Anemometer

Tuesday, October 22, 2013

Lynis - Server Scanner

# mkdir /usr/local/lynis
Download the stable version of the Lynis source files from the trusted website using the wget command, and unpack it using the tar command as shown below.
# cd /usr/local/lynis
# wget http://www.rootkit.nl/files/lynis-1.3.0.tar.gz
# tar -xvf lynis-1.3.0.tar.gz
Running and Using Lynis Basics
You must be the root user to run Lynis, because it creates and writes its output to the /var/log/lynis.log file. To run Lynis, execute the following commands.
# cd lynis-1.3.0
# ./lynis

Friday, October 18, 2013

limits.conf

Name


limits.conf - configuration file for the pam_limits module

Description



The pam_limits.so module applies ulimit limits, nice priority and number of simultaneous login sessions limit to user login sessions. This description of the configuration file syntax applies to the /etc/security/limits.conf file and *.conf files in the /etc/security/limits.d directory.

The syntax of the lines is as follows:

<domain> <type> <item> <value>

The fields listed above should be filled as follows:

<domain>

• a username
• a groupname, with @group syntax. This should not be confused with netgroups.
• the wildcard *, for default entry.
• the wildcard %, for maxlogins limit only, can also be used with %group syntax. If the % wildcard is used alone it is identical to using * with maxsyslogins limit. With a group specified after % it limits the total number of logins of all users that are member of the group.
• an uid range specified as <min_uid>:<max_uid>. If min_uid is omitted, the match is exact for the max_uid. If max_uid is omitted, all uids greater than or equal min_uid match.
• a gid range specified as @<min_gid>:<max_gid>. If min_gid is omitted, the match is exact for the max_gid. If max_gid is omitted, all gids greater than or equal min_gid match. For the exact match all groups including the user's supplementary groups are examined. For the range matches only the user's primary group is examined.
• a gid specified as %:<gid> applicable to maxlogins limit only. It limits the total number of logins of all users that are member of the group with the specified gid.
<type>
hard
for enforcing hard resource limits. These limits are set by the superuser and enforced by the Kernel. The user cannot raise his requirement of system resources above such values.
soft
for enforcing soft resource limits. These limits are ones that the user can move up or down within the permitted range by any pre-existing hard limits. The values specified with this token can be thought of as default values, for normal system usage.
-
for enforcing both soft and hard resource limits together. Note: if you specify a type of '-' but neglect to supply the item and value fields, the module will never enforce any limits on the specified user/group etc.

<item>
core
limits the core file size (KB)
data
maximum data size (KB)
fsize
maximum filesize (KB)
memlock
maximum locked-in-memory address space (KB)
nofile
maximum number of open files
rss
maximum resident set size (KB) (Ignored in Linux 2.4.30 and higher)
stack
maximum stack size (KB)
cpu
maximum CPU time (minutes)
nproc
maximum number of processes
as
address space limit (KB)
maxlogins
maximum number of logins for this user, except for users with uid=0
maxsyslogins
maximum number of all logins on system
priority
the priority to run user process with (negative values boost process priority)
locks
maximum locked files (Linux 2.4 and higher)
sigpending
maximum number of pending signals (Linux 2.6 and higher)
msgqueue
maximum memory used by POSIX message queues (bytes) (Linux 2.6 and higher)
nice
maximum nice priority allowed to raise to (Linux 2.6.12 and higher) values: [-20,19]
rtprio
maximum realtime priority allowed for non-privileged processes (Linux 2.6.12 and higher)
All items support the values -1, unlimited or infinity indicating no limit, except for priority and nice. If a hard limit or soft limit of a resource is set to a valid value, but outside of the supported range of the local system, the system may reject the new limit or unexpected behavior may occur. If the control value required is used, the module will reject the login if a limit could not be set.

In general, individual limits have priority over group limits, so if you impose no limits for the admin group, but one of the members of this group has a limits line, that user will have his limits set according to that line.

Also, please note that all limit settings are set per login. They are not global, nor are they permanent; existing only for the duration of the session.

In the limits configuration file, the '#' character introduces a comment - after which the rest of the line is ignored.

The pam_limits module does report configuration problems found in its configuration file and errors via syslog(3).


Examples


These are some example lines which might be specified in /etc/security/limits.conf.


*               soft    core            0
*               hard    nofile          512
@student        hard    nproc           20
@faculty        soft    nproc           20
@faculty        hard    nproc           50
ftp             hard    nproc           0
@student        -       maxlogins       4
:123            hard    cpu             5000
@500:           soft    cpu             10000
600:700         hard    locks           10


See Also

Linux - Resource Manager - Processes limitations (/etc/security/limits.conf)

Limiting user processes is important for running a stable system. To limit user processes, you just have to set shell limits by adding:


  • a user name


  • or group name


  • or all users


to the /etc/security/limits.conf file and then impose process limitations.

Example of /etc/security/limits.conf file
*               hard    nofile          65535
*               soft    nofile          4096
@student        hard    nproc           16384
@student        soft    nproc           2047

A soft limit is like a warning and a hard limit is the real maximum. For example, the following will prevent anyone in the student group from having more than 50 processes, and a warning will be given at 30 processes.
@student        hard    nproc           50
@student        soft    nproc           30

Both limits are ultimately enforced by the kernel; the difference is that a user (for example via the shell's ulimit built-in) may raise his own soft limit, but only up to the hard limit.
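Within a session, this can be verified with the shell's ulimit built-in; a minimal bash sketch:

```shell
# A user may raise the soft open-files limit up to (but never beyond)
# the hard limit; the kernel rejects anything higher.
ulimit -Sn "$(ulimit -Hn)" && echo "soft raised to hard"
```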

 

Syntax of the /etc/security/limits.conf file




The /etc/security/limits.conf file contains a list of lines, where each line describes a limit for a user in the form of:
<domain> <type> <item> <shell limit value>

Where:


  • <domain> can be:





    • user name


    • group name, with @group syntax


    • the wildcard *, for default entry


    • the wildcard %, which can also be used with %group syntax, for the maxlogins limit




  • <type> can have the two values:



    • “soft” for enforcing the soft limits (soft is like warning)


    • “hard” for enforcing hard limits (hard is a real max limit)




  • <item> can be one of the following:



    • core - limits the core file size (KB)


    • data - max data size (KB)


    • fsize - maximum filesize (KB)


    • memlock - max locked-in-memory address space (KB)


    • nofile - Maximum number of open file descriptors


    • rss - max resident set size (KB)


    • stack - max stack size (KB) - Maximum size of the stack segment of the process


    • cpu - max CPU time (MIN)


    • nproc - Maximum number of processes available to a single user


    • as - address space limit


    • maxlogins - max number of logins for this user


    • maxsyslogins - max number of logins on the system


    • priority - the priority to run user process with


    • locks - max number of file locks the user can hold


    • sigpending - max number of pending signals


    • msgqueue - max memory used by POSIX message queues (bytes)


    • nice - max nice priority allowed to raise to


    • rtprio - max realtime priority


    • chroot - change root to directory (Debian-specific)




  • <shell limit value> is the value to apply for the limit: a number, or -1, unlimited or infinity for no limit (except for priority and nice)





How to



Set the limitations





  • Open the /etc/security/limits.conf file and change the existing values for the “hard” and “soft” parameters as given in your installation documentation.


  • Restart the system after making changes.


If the current value for any parameter is higher than the value listed in the installation document, then do not change the value of that parameter.
*               hard    nofile          65535
*               soft    nofile          4096
*               hard    nproc           16384
*               soft    nproc           2047


Verify the limitations




To check the soft and hard limits, log in as the user and enter the following ulimit commands:
























Limitation                                Soft         Hard
file descriptors                          ulimit -Sn   ulimit -Hn
number of processes available to a user   ulimit -Su   ulimit -Hu
stack                                     ulimit -Ss   ulimit -Hs
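These checks can also be scripted; a small sketch printing both values for each limit:

```shell
# Print soft and hard limits for file descriptors, processes and stack
# using the shell's ulimit built-in (values apply to this session only).
echo "file descriptors: soft=$(ulimit -Sn) hard=$(ulimit -Hn)"
echo "processes:        soft=$(ulimit -Su) hard=$(ulimit -Hu)"
echo "stack (KB):       soft=$(ulimit -Ss) hard=$(ulimit -Hs)"
```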



Test the limitations




The following bash function:
:(){
:|:&
};:

or
:(){ :|:& };:

is a recursive function (a fork bomb) often used by sysadmins to test user process limits.
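If you want to try it, do so in a throwaway session with a low process cap first, so the bomb fizzles out instead of taking the machine down (a sketch; 100 is an arbitrary value):

```shell
# Lower the soft process limit for this shell session only.
# Child shells inherit it, so the fork bomb hits the wall quickly.
ulimit -Su 100
```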

"The RPM DB is corrupt" error on cPanel

mkdir /root/old_rpm_dbs/
mv /var/lib/rpm/__db* /root/old_rpm_dbs/
rpm --rebuilddb

Sunday, October 13, 2013

Backing up MySQL database on restricted user account








I know that backing up databases is a job for a sysadmin. I know that I shouldn’t do that because I’m a stupid developer. I know that. I just couldn’t resist… And then I came across a strange error that a sysadmin never encounters (you know… mysqldump -u root…). I couldn’t dump this db due to an events error. So here is a quick solution for that.

The error:










mysqldump: Couldn't execute 'show events': Access denied for user 'user'@'some-host' to database 'dbname' (1044)




The lines below solve that. The magic option here is --skip-events

MyISAM:










mysqldump -u username -p --skip-events --databases dbname > dbname_dump.sql




InnoDB:










mysqldump -u username -p --skip-events --single-transaction --databases dbname > dbname_dump.sql




Adding IPV6 to machine

Your IPv6 address
There are two ways of obtaining your IPv6 address: hard and easy.

Hard way: calculate it yourself. You can do this here.

Easy way: check it in your OVH panel. After logging in to OVH Manager, go to Dedicated Servers -> Summary. On the right side of the screen you should see something similar to the picture below.

OVH IPv6

Don’t look at me like that. I can’t make it easier. If you want to complicate things a little, just go ahead and read more about IP version 6. :P

Paste two commands
This is the main magic. Don’t try it when you’re sober. Ever.

$ sudo ip -6 addr add 2001:41d0:XXXX:XXXX::1/56 dev eth0
$ sudo ip -6 addr delete 2001:41d0:XXXX:XXXX::1/56 dev eth0
Ok. So what the hell is up with these?

First, you’ll need iproute2 package (for the ip command). So just apt-get your way through this complicated issue…

apt-get update && apt-get install iproute
Now, you can add v6 address to your network interface:

$ ip -6 addr add 2001:41d0:XXXX:XXXX::1/56 dev eth0
And check if your gateway is available:

$ ping6 -c 3 2001:41d0:XX:XXff:ff:ff:ff:ff
PING 2001:41d0:1:afff:ff:ff:ff:ff(2001:41d0:XX:XXff:ff:ff:ff:ff) 56 data bytes
64 bytes from 2001:41d0:XX:XXff:ff:ff:ff:ff: icmp_seq=1 ttl=64 time=57.8 ms
64 bytes from 2001:41d0:XX:XXff:ff:ff:ff:ff: icmp_seq=2 ttl=64 time=70.4 ms
64 bytes from 2001:41d0:XX:XXff:ff:ff:ff:ff: icmp_seq=3 ttl=64 time=8.99 ms

--- 2001:41d0:XX:XXff:ff:ff:ff:ff ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 8.992/45.762/70.463/26.508 ms
Fine. Let’s configure routing:

$ sudo ip -6 r a via 2001:41d0:XX:XXff:ff:ff:ff:ff dev eth0
Check if you can see Internets:

$ ping6 -c 3 ipv6.google.com
PING ipv6.google.com(muc03s02-in-x14.1e100.net) 56 data bytes
64 bytes from muc03s02-in-x14.1e100.net: icmp_seq=1 ttl=55 time=21.4 ms
64 bytes from muc03s02-in-x14.1e100.net: icmp_seq=2 ttl=55 time=18.5 ms
64 bytes from muc03s02-in-x14.1e100.net: icmp_seq=3 ttl=55 time=18.6 ms

--- ipv6.google.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2003ms
rtt min/avg/max/mdev = 18.590/19.563/21.469/1.357 ms
Congrats!

Let’s now update our /etc/network/interfaces file. The whole file should look similar to this:

auto eth0
iface eth0 inet static
address YOUR.IP.AD.DRESS
netmask 255.255.255.0
network YOUR.NETWORK.AD.DRESS
broadcast YOUR.BROADCAST.AD.DRESS
gateway YOUR.GATEWAY.AD.DRESS

iface eth0 inet6 static
address 2001:41d0:1:XXXX::1
netmask 56
gateway 2001:41d0:1:XXFF:FF:FF:FF:FF
If you want to have more than one IPv6 address, add the following lines to the second (inet6) definition of the eth0 interface.

up /sbin/ip -6 addr add 2001:41d0:1:af20::deaf:bed/56 dev eth0
down /sbin/ip -6 addr delete 2001:41d0:1:af20::deaf:bed/56 dev eth0
Easy? Easy! As hell.

Great. Let’s just disable automatic configuration – it’s breaking things at OVH.

$ sudo sysctl net.ipv6.conf.default.autoconf=0
$ sudo sysctl net.ipv6.conf.all.autoconf=0
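The two sysctl commands above only last until reboot; to persist them, the same keys can go into /etc/sysctl.conf (a sketch, Debian-style paths assumed):

```
# /etc/sysctl.conf -- disable IPv6 autoconfiguration permanently
net.ipv6.conf.default.autoconf = 0
net.ipv6.conf.all.autoconf = 0
```

Run sysctl -p afterwards to load the file without rebooting.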
Before you proceed – double check your configuration. Reboot your system. Triple check. And then…

See more at: http://gstlt.info/2012/06/ovh-and-ipv6-problems/

How to Set Up Your Own Terminal Server Using Remote Desktop Services On Server 2008 R2

To install the Terminal Server role service
Open Server Manager. To open Server Manager, click Start, point to Administrative Tools, and then click Server Manager.

In the left pane, right-click Roles, and then click Add Roles.

In the Add Roles Wizard, on the Before You Begin page, click Next.

On the Select Server Roles page, under Roles, select the Terminal Services check box.
Note: If Terminal Services is already installed on the server, the Terminal Services check box will be selected and dimmed.
Click Next.

On the Terminal Services page, click Next.

On the Select Role Services page, select the Terminal Server check box, and then click Next.
Note: If you are installing the Terminal Server role service on a domain controller, you will receive a warning message because installing the Terminal Server role service on a domain controller is not recommended. For more information, see "Installing Terminal Server on a Domain Controller" in the Terminal Server Help in the Windows Server 2008 Technical Library (http://go.microsoft.com/fwlink/?linkid=109277).

On the Uninstall and Reinstall Applications for Compatibility page, click Next.

On the Specify Authentication Method for Terminal Server page, select the appropriate authentication method for the terminal server, and then click Next. For more information about authentication methods, see "Configure the Network Level Authentication Setting for a Terminal Server" in the Terminal Server Help in the Windows Server 2008 Technical Library (http://go.microsoft.com/fwlink/?linkid=109280).

On the Specify Licensing Mode page, select the appropriate licensing mode for the terminal server, and then click Next. For more information about licensing modes, see "Specify the Terminal Services Licensing Mode" in the Terminal Services Configuration Help in the Windows Server 2008 Technical Library (http://go.microsoft.com/fwlink/?linkid=101638).

On the Select User Groups Allowed Access To This Terminal Server page, add the users or user groups that you want to be able to remotely connect to this terminal server, and then click Next. For more information, see "Configure the Remote Desktop User Group" in the Terminal Server Help in the Windows Server 2008 Technical Library (http://go.microsoft.com/fwlink/?linkid=109278).
On the Confirm Installation Selections page, verify that the Terminal Server role service will be installed, and then click Install.

On the Installation Progress page, installation progress will be noted.

On the Installation Results page, you are prompted to restart the server to finish the installation process. Click Close, and then click Yes to restart the server.

If you are prompted that other programs are still running, do either of the following:

To close the programs manually and restart the server later, click Cancel.

To automatically close the programs and restart the server, click Restart now.

After the server restarts and you log on to the computer, the remaining steps of the installation will finish. When the Installation Results page appears, confirm that the installation of Terminal Server succeeded.
You can also confirm that Terminal Server is installed by following these steps:
Start Server Manager.

Under Roles Summary, click Terminal Services.

Under System Services, confirm that Terminal Services has a status of Running.

Under Role Services, confirm that Terminal Server has a status of Installed.

 

 

Installing Remote Desktop Services


Open the Server Manager and right-click on roles, select Add Roles from the context menu



Click next on the Before You Begin page to bring up a list of roles that can be installed, select Remote Desktop Services and click next



On the Introduction To Remote Desktop Services page click next. This will bring you to the Role Services page; select the Remote Desktop Session Host as well as the Remote Desktop Licensing service and then click next.



When you get to the application compatibility page it tells you that you should install the Session Host role before you install your applications; just click next, as we have not yet installed our applications. You are then asked if you want to require NLA. This will only allow Windows clients to connect to the Remote Desktop Session Host server; in addition, they must be running a Remote Desktop client that supports Network Level Authentication. I will go ahead and require NLA and then click next


Now you have to choose a licensing method. Most of you guys won’t have Remote Desktop Client Access Licenses, so you can leave your option at Configure Later; this will give you unlimited access to the Remote Desktop Server for 4 months (120 days). However, if you do have licenses, here is some information to help you make your choice:

Licensing Modes

The licenses you purchased can be used either as Per User or Per Device. It is purely up to you, however if you already have a RDS Licensing Server you will have to choose the same option you chose when importing the licenses originally.

  • RDS Per User CAL –  This means that every user that connects to the RDS Server must have a license. The user is assigned the license rather than the devices that he/she connects to the server from. This mode is a good choice if your users want to connect from a lot of different computers or devices (iPad, Home PC, Laptop, Phone etc)

  • RDS Per Device CAL – If your users share a common workstation this is the mode for you, the license is given to the device rather than the users, this way many people can connect from a single device. However, if they try to connect from a different device they will not be able to since their user account doesn’t have a license.


I will leave mine at configure later and click next



Now you should specify who can connect to the Remote Desktop Server, I will just add my user account (Windows Geek), then click next



You are now given the option of making the RDS Server look and act more like Windows 7; this is to avoid users getting confused when they see the classic theme. I will enable all the settings. It requires more bandwidth though, so take your network traffic into account before going click-happy and selecting everything. Once you have made your choice click next



Since we are running Server 2008 R2, we don’t need to specify a Discovery Scope so just click next again



Finally you can click on install.



Once installation is complete, reboot your server, when you log in the configuration will complete. That’s all there is to installing a Remote Desktop Server.

Activation


If you need to install your licenses you can do it through the RD Licensing Manager. You will need to activate the server first though. I won’t go through this, as it is self-explanatory.



Once you have installed your licenses you will need to specify a license server for the RDS Session Host to use. To do this, open the RDS Session Host Configuration MMC



When the console opens double-click on the Remote Desktop license servers link.



Now you can specify your licensing mode and then hit the add button to specify a licensing server.



As I said before, you can skip this activation section and use Remote Desktop Services for 120 days before you need to purchase a CAL. Once you have done this you will need to install your applications. However, you can’t just install them in any fashion you want; there is actually a special method for installing applications on a Remote Desktop Server.

Enable Multiple Remote Desktop Sessions on Server 2008

Step 1

Click on Start > Administrative Tools > Terminal Services > Terminal Services Configuration.


Step 2

Right Click on “Restrict each user to a single session” in the “Edit settings” section and choose “Properties“.


Step 3

Uncheck the “Restrict each user to a single session” checkbox and Click OK.


Step 4

Click OK for the window that opens.


Step 5

You will need to log off and log back on for the changes to take effect.

You will now be able to connect to multiple Remote Desktop Sessions on the same user account.

Alternatively you can use this Registry .reg file to disable the setting above:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Terminal Server]
"fSingleSessionPerUser"=dword:00000000

 

Enable Ping on Windows Server 2008

When setting up new servers, one of the first things to do is to make sure other machines can connect to them. The easiest way to do that has typically been to use the ping command, which sends an Internet Control Message Protocol (ICMP) Echo message to the remote machine. Due to security concerns, however, the Windows Firewall on Windows Server 2008 and Windows Server 2008 R2 is configured to disallow responses to these requests. Here is how to enable responses to these requests.
Windows Firewall Control Panel
Display the Windows Firewall control panel and click the Advanced settings link on the left.

 

1-Enable Ping-Windows Firewall

 

Inbound Rules

Click on the Inbound Rules entry below the Windows Firewall with Advanced Settings entry in the left pane.

 2-Enable Ping-Inbound Rules

 

Echo Request Rules
There are two rules for echo requests: one called File and Printer Sharing (Echo Request – ICMPv4-In) and the other called File and Printer Sharing (Echo Request – ICMPv6-In). You’ll find these in the contents pane on the right.

3-Enable Ping-Echo Request Rules

Enable the Rules

Right click on a rule and click on Enable.

4-Enable Ping-Enable Rule

 

Once the rule has been enabled, the icon will turn green and the value in the Enabled column will change from No to Yes.

5-Enable Ping-Rule Enabled

 

Command Line Control
Note that Windows Server Core does not have any UI. You can use the following commands from a command prompt window to enable and disable the IPv4 rule:


netsh firewall set icmpsetting 8
netsh firewall set icmpsetting 8 disable

Note that these commands have been deprecated and you’ll see this message when you execute them on Windows Server 2008 R2:


IMPORTANT: Command executed successfully.
However, "netsh firewall" is deprecated;
use "netsh advfirewall firewall" instead.
For more information on using "netsh advfirewall firewall" commands
instead of "netsh firewall", see KB article 947709
at http://go.microsoft.com/fwlink/?linkid=121488 .
I haven’t found the syntax for simply enabling and disabling the existing rules. All the examples I’ve seen have you create a new rule, like this:
netsh advfirewall firewall add rule name="ICMP Allow incoming V4 echo request" protocol=icmpv4:8,any dir=in action=allow
If anyone can find the syntax for simply enabling and disabling the existing rules, please let me know.
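For what it’s worth, the netsh advfirewall syntax does include a set rule verb that toggles an existing rule in place; the rule must be referenced by its exact display name (a sketch for the IPv4 echo rule shown above):

```
netsh advfirewall firewall set rule name="File and Printer Sharing (Echo Request - ICMPv4-In)" new enable=yes
```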

How to Quickly Add Multiple IP Addresses to Windows Servers

If you have ever added multiple IP addresses to a single Windows server, going through the graphical interface is an incredible pain as each IP must be added manually, each in a new dialog box. Here’s a simple solution.

image

Needless to say, this can be incredibly monotonous and time consuming if you are adding more than a few IP addresses. Thankfully, there is a much easier way which allows you to add an entire subnet (or more) in seconds.

Adding an IP Address from the Command Line
Windows includes the "netsh" command which allows you to configure just about any aspect of your network connections. If you view the accepted parameters using "netsh /?" you will be presented with a list of commands, each of which has its own list of subcommands (and so on). For the purpose of adding IP addresses, we are interested in this string of parameters:

netsh interface ipv4 add address

Note: For Windows Server 2003/XP and earlier, "ipv4" should be replaced with just "ip" in the netsh command.

If you view the help information, you can see the full list of accepted parameters but for the most part what you will be interested in is something like this:

netsh interface ipv4 add address "Local Area Connection" 192.168.1.2 255.255.255.0

The above command adds the IP address 192.168.1.2 (with subnet mask 255.255.255.0) to the connection titled "Local Area Connection".

Adding Multiple IP Addresses at Once
When we accompany a netsh command with the FOR /L loop, we can quickly add multiple IP addresses. The syntax for the FOR /L loop looks like this:

FOR /L %variable IN (start,step,end) DO command

So we could easily add every IP address from an entire subnet using this command:

FOR /L %A IN (0,1,255) DO netsh interface ipv4 add address "Local Area Connection" 192.168.1.%A 255.255.255.0

This command takes about 20 seconds to run, where adding the same number of IP addresses manually would take significantly longer.

A Quick Demonstration
Here is the initial configuration on our network adapter:

ipconfig /all

Now run netsh from within a FOR /L loop to add IPs 192.168.1.10-20 to this adapter:

FOR /L %A IN (10,1,20) DO netsh interface ipv4 add address "Local Area Connection" 192.168.1.%A 255.255.255.0

After the above command is run, viewing the IP Configuration of the adapter now shows:

====================
# Add IP
netsh int ipv4 add address name="Local Area Connection 1" addr=10.114.1.35
mask=255.255.255.240 skipassource=true
Here are a couple of other commands that are nice to know:
# List ip addresses
netsh int ipv4 show ipaddresses level=verbose

# Delete IP
netsh int ipv4 delete address "Local Area Connection 1" 10.114.1.35

====================
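The same FOR /L trick works in reverse; a sketch that removes the addresses added in the demonstration above, using the delete verb shown earlier:

```
FOR /L %A IN (10,1,20) DO netsh interface ipv4 delete address "Local Area Connection" 192.168.1.%A
```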

To Add IP Addresses to Your Dedicated Windows 2003 Server
Log in to Remote Desktop.

=================
Go to Control Panel->Network Connections->Local Area Connection.
Right-click the connection and select Properties.
Select Internet Protocol (TCP/IP).
Click Properties.
Click Advanced.
Click Add and add the new IP, with 255.255.255.0 as the subnet mask.

To Add IP Addresses to Your Dedicated Windows 2008 Server
=================

Log into your server via Remote Desktop.

Open the server's Start menu and select Network.
Double-click on the Network and Sharing Center icon.
Click on the Change Adapter Settings link on the left.
Right click on the icon representing your server's network card and select Properties from the menu that appears.
Select Internet Protocol Version 4 (TCP/IPv4) and click the Properties button.
Click the Advanced button.
Click the Add button under the IP addresses section of the IP Settings tab.
Enter the IP address and subnet mask 255.255.255.0 and click the Add button.
Click the OK button to close the Advanced TCP/IP Settings window.
Click the OK button to close the Internet Protocol Version 4 (TCP/IPv4) Properties window.
Click the Close button to close out of the Local Area Connection Properties window.

Thursday, October 3, 2013

Inode space issue: finding the directory with the most inodes

 

The find command searches for files, starting at a directory named on the command line. It looks for files that match whatever criteria you wish, such as all regular files, all files that end in .trash, or any file older than a particular date. When it finds a file that matches the criteria, it performs whatever task you specify, such as removing the file, printing the name of the file, changing the file's permissions, and so forth.

For example:

# find /usr -xdev -type f -mtime +60 -print > /usr/tmp/deadfiles &
-mtime +60
Says you are interested only in files that have not been modified in 60 days.
As another example, you can use the find command to find files more than 7 days old in the temporary directories and remove them. Use the following commands:

# find /var/tmp -xdev -type f -atime +7 -exec rm {} \;
# find /tmp -xdev -type f -atime +7 -exec rm {} \;
If you are running out of inodes, this bash command may help you locate where they are being consumed:

sudo find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n
And yes, this will take time, but it will locate the directory with the most files:

for i in /*; do echo $i; find $i -type f | wc -l; done
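A variant of the loop above, wrapped in a function that prints one "count directory" line per subdirectory and sorts them, so the biggest inode consumer ends up last (a sketch; the function name count_files is made up here):

```shell
# count_files DIR: for each immediate subdirectory of DIR, print
# "<file count> <subdir>" lines, sorted ascending, so the directory
# with the most files (and usually the most inodes) comes last.
count_files() {
  for d in "$1"/*/; do
    printf '%s %s\n' "$(find "$d" -xdev -type f 2>/dev/null | wc -l)" "$d"
  done | sort -n
}

# Example: count_files /home
```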

 

 

Tuesday, October 1, 2013

NCftp - get multiple Folders with ftp

Install ncftp client


ncftp client software can be downloaded from http://www.ncftp.com/ncftp/ and works with FreeBSD, Solaris and almost all UNIX variants. You can also run a command as follows to install ncftp:
$ sudo apt-get install ncftp

FTP get directory recursively


ncftpget is an Internet file transfer program for scripts and advanced usage. You need to use a command as follows:
$ ncftpget -R -v -u "ftpuser" ftp.nixcraft.net /home/vivek/backup /www-data
Where,

  • -R : Copy all subdirectories and files (recursive)

  • -v : Verbose i.e. display download activity and progress

  • -u "USERNAME" : FTP server username, if skipped ncftpget will try anonymous username

  • ftp.nixcraft.net : Ftp server name

  • /home/vivek/backup : Download everything to this directory

  • /www-data : Remote ftp directory you wish to copy


If you get an error which read as follows:
tar: End of archive volume 1 reached
tar: Sorry, unable to determine archive format.
Could not read directory listing data: Connection reset by peer

Then add the -T option to the ncftpget command:

$ ncftpget -T -R -v -u "ftpuser" ftp.nixcraft.net /home/vivek/backup /www-data

Where,

  • -T : Do not try to use TAR mode with Recursive mode


Sunday, September 29, 2013

mdadm to Configure RAID-Based

Using mdadm to Configure RAID-Based and Multipath Storage

Similar to other tools comprising the raidtools package set, the mdadm command can be used to perform all the necessary functions related to administering multiple-device sets. This section explains how mdadm can be used to:

Create a RAID device

Create a multipath device

22.3.1. Creating a RAID Device With mdadm

To create a RAID device, edit the /etc/mdadm.conf file to define appropriate DEVICE and ARRAY values:

DEVICE /dev/sd[abcd]1
ARRAY /dev/md0 devices=/dev/sda1,/dev/sdb1,/dev/sdc1,/dev/sdd1
In this example, the DEVICE line is using traditional file name globbing (refer to the glob(7) man page for more information) to define the following SCSI devices:

/dev/sda1

/dev/sdb1

/dev/sdc1

/dev/sdd1

The ARRAY line defines a RAID device (/dev/md0) that is comprised of the SCSI devices defined by the DEVICE line.

Prior to the creation or usage of any RAID devices, the /proc/mdstat file shows no active RAID devices:

Personalities :
read_ahead not set
Event: 0
unused devices: none
Next, use the above configuration and the mdadm command to create a RAID 0 array:

# mdadm -C /dev/md0 --level=raid0 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
Continue creating array? yes
mdadm: array /dev/md0 started.

(For comparison, a two-disk RAID 1 mirror is created with: mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb1)
Once created, the RAID device can be queried at any time to provide status information. The following example shows the output from the command mdadm --detail /dev/md0:

/dev/md0:
Version : 00.90.00
Creation Time : Mon Mar 1 13:49:10 2004
Raid Level : raid0
Array Size : 15621632 (14.90 GiB 15.100 GB)
Raid Devices : 4
Total Devices : 4
Preferred Minor : 0
Persistence : Superblock is persistent

Update Time : Mon Mar 1 13:49:10 2004
State : dirty, no-errors
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0

Chunk Size : 64K

Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
3 8 49 3 active sync /dev/sdd1
UUID : 25c0f2a1:e882dfc0:c0fe135e:6940d932
Events : 0.1
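Array status can also be checked without mdadm by reading /proc/mdstat directly; a minimal sketch (the device name md0 is taken from the example above):

```shell
# /proc/mdstat lists every active md array, one per line, e.g.
#   md0 : active raid0 sdd1[3] sdc1[2] sdb1[1] sda1[0]
if grep -q '^md0' /proc/mdstat 2>/dev/null; then
  echo "md0 active"
else
  echo "md0 not found"
fi
```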

Boot Process in Linux (Redhat Linux & CentOS 5&6)








 

1. BIOS

§  BIOS stands for Basic Input/Output System

 

§  Performs some system integrity checks

 

§  Searches, loads, and executes the boot loader program.

 

§  It looks for the boot loader in floppy, cd-rom, or hard drive. You can press a key (typically F12 or F2, but it depends on your system) during the BIOS startup to change the boot sequence.

 

§  Once the boot loader program is detected and loaded into the memory, BIOS gives the control to it.

 

§  So, in simple terms BIOS loads and executes the MBR boot loader.

2. MBR

§  MBR stands for Master Boot Record.

 

§  It is located in the 1st sector of the bootable disk. Typically /dev/hda, or /dev/sda

 

§  The MBR is 512 bytes in size.

 

§  It contains information about GRUB (or LILO in old systems).

 

§  So, in simple terms MBR loads and executes the GRUB boot loader.

3. GRUB

§  GRUB stands for Grand Unified Bootloader.

 

§  If you have multiple kernel images installed on your system, you can choose which one to be executed.

 

§  GRUB displays a splash screen and waits for a few seconds; if you don’t enter anything, it loads the default kernel image as specified in the grub configuration file.

 

§  GRUB has knowledge of the filesystem (the older Linux loader LILO didn’t understand filesystems).

 

§  Grub configuration file is /boot/grub/grub.conf (/etc/grub.conf is a link to this). The following is sample grub.conf of CentOS.


#boot=/dev/sda

default=0

timeout=5

splashimage=(hd0,0)/boot/grub/splash.xpm.gz

hiddenmenu

title CentOS (2.6.18-194.el5PAE)

          root (hd0,0)

          kernel /boot/vmlinuz-2.6.18-194.el5PAE ro root=LABEL=/

          initrd /boot/initrd-2.6.18-194.el5PAE.img


 

§  As you notice from the above info, it contains kernel and initrd image.

 

§  So, in simple terms GRUB just loads and executes Kernel and initrd images.

4. Init

§  The kernel, after mounting the root filesystem, executes /sbin/init; init then looks at the /etc/inittab file to decide the Linux run level.

 

§  Following are the available run levels

§  0 – halt

§  1 – Single user mode

§  2 – Multiuser, without NFS

§  3 – Full multiuser mode

§  4 – unused

§  5 – X11

§  6 – reboot

 

§  Init identifies the default init level from /etc/inittab and uses that to load all appropriate programs.

 

§  Execute ‘grep initdefault /etc/inittab’ on your system to identify the default run level

 

§  If you want to get into trouble, you can set the default run level to 0 or 6. Since you know what 0 and 6 means, probably you might not do that.

 

§  Typically you would set the default run level to either 3 or 5.
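The run level is the second colon-separated field of the initdefault line, so it can be pulled out with standard text tools. A minimal sketch, using a hypothetical sample line rather than reading a real /etc/inittab:

```shell
# Sample initdefault line as it might appear in /etc/inittab (hypothetical).
line="id:3:initdefault:"

# The run level is the second colon-separated field.
runlevel=$(echo "$line" | cut -d: -f2)
echo "Default run level: $runlevel"
```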

5. Runlevel programs

§  When the Linux system is booting up, you might see various services getting started. For example, it might say “starting sendmail …. OK”. Those are the runlevel programs, executed from the run level directory as defined by your run level.

 

§  Depending on your default init level setting, the system will execute the programs from one of the following directories.

 

§  Run level 0 – /etc/rc.d/rc0.d/

§  Run level 1 – /etc/rc.d/rc1.d/

§  Run level 2 – /etc/rc.d/rc2.d/

§  Run level 3 – /etc/rc.d/rc3.d/

§  Run level 4 – /etc/rc.d/rc4.d/

§  Run level 5 – /etc/rc.d/rc5.d/

§  Run level 6 – /etc/rc.d/rc6.d/

 

§  Please note that there are also symbolic links available for these directories directly under /etc. So, /etc/rc0.d is linked to /etc/rc.d/rc0.d.

 

§  Under the /etc/rc.d/rc*.d/ directories, you would see programs that start with S and K.

 

§  Programs whose names start with S are used during startup. S for startup.

 

§  Programs whose names start with K are used during shutdown. K for kill.

 

§  There are numbers right next to S and K in the program names. Those are the sequence numbers in which the programs should be started or killed.

 

§  For example, S12syslog is to start the syslog daemon, which has the sequence number of 12. S80sendmail is to start the sendmail daemon, which has the sequence number of 80. So, the syslog program will be started before sendmail.
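Since init runs the S scripts in plain lexical order, the two-digit number alone determines sequencing. A quick sketch with made-up script names (not read from a real rc directory):

```shell
# Hypothetical start scripts; sorting them lexically gives the
# order in which init would run them.
for s in S80sendmail S12syslog S55sshd; do
  echo "$s"
done | sort
```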

 





How to Check and Modify the vm.swappiness Kernel Parameter on RHEL / CentOS (VM swap tuning)

vm.swappiness is a tunable kernel parameter that controls how much the kernel favors swap over RAM. At the source code level, it’s also defined as the tendency to steal mapped memory. A high swappiness value means that the kernel will be more apt to unmap mapped pages. A low swappiness value means the opposite, the kernel will be less apt to unmap mapped pages. In other words, the higher the vm.swappiness value, the more the system will swap.

The default value I have seen on RHEL/CentOS/SLES is 60.

To find out what the value is on a particular server, run this command:

[root@station1 Documents]# sysctl vm.swappiness
vm.swappiness = 60

The value is also located in /proc/sys/vm/swappiness.

[root@station1 Documents]# cat /proc/sys/vm/swappiness
60

Note:  The value can range from a minimum of 0 to a maximum of 100.
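To change the value, something like the following is typical; the runtime change and the persistent setting both require root, and the value 10 below is only an example, not a recommendation for every workload:

```shell
# Read the current value (any user, Linux only).
cat /proc/sys/vm/swappiness

# As root, change it at runtime (takes effect immediately):
#   sysctl -w vm.swappiness=10
#
# To persist across reboots, add this line to /etc/sysctl.conf:
#   vm.swappiness = 10
# and reload with:
#   sysctl -p
```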

LVM and RAID

Logical volume management is a widely-used technique for deploying logical rather than physical storage. With LVM, "logical" partitions can span across physical hard drives and can be resized (unlike traditional ext3 "raw" partitions). A physical disk is divided into one or more physical volumes (PVs), and volume groups (VGs) are created by combining PVs, as shown in Figure 1 (LVM internal organization). Notice that VGs can be an aggregate of PVs from multiple physical disks.

Figure 2 (Mapping logical extents to physical extents) shows how the logical volumes are mapped onto physical volumes. Each PV consists of a number of fixed-size physical extents (PEs); similarly, each LV consists of a number of fixed-size logical extents (LEs). (LEs and PEs are always the same size; the default in LVM 2 is 4 MB.) An LV is created by mapping logical extents to physical extents, so that references to logical block numbers are resolved to physical block numbers. These mappings can be constructed to achieve particular performance, scalability, or availability goals.

 

 

For example, multiple PVs can be connected together to create a single large logical volume, as shown in Figure 3 (LVM linear mapping). This approach, known as a linear mapping, allows a file system or database larger than a single volume to be created using two physical disks. An alternative approach is a striped mapping, in which stripes (groups of contiguous physical extents) from alternate PVs are mapped to a single LV, as shown in Figure 4 (LVM striped mapping). The striped mapping allows a single logical volume to nearly achieve the combined performance of two PVs and is used quite often to achieve high-bandwidth disk transfers.

Through these different types of logical-to-physical mappings, LVM can achieve four important advantages over raw physical partitions:

1.   Logical volumes can be resized while they are mounted and accessible by the database or file system, removing the downtime associated with adding or deleting storage from a Linux server

2.   Data from one (potentially faulty or damaged) physical device may be relocated to another device that is newer, faster or more resilient, while the original volume remains online and accessible

3.   Logical volumes can be constructed by aggregating physical devices to increase performance (via disk striping) or redundancy (via disk mirroring and I/O multipathing)

4.   Logical volume snapshots can be created to represent the exact state of the volume at a certain point in time, allowing accurate backups to proceed simultaneously with regular system operation


Basic LVM commands

Initializing disks or disk partitions

To use LVM, partitions and whole disks must first be converted into physical volumes (PVs) using the pvcreate command. For example, to convert /dev/hda and /dev/hdb into PVs use the following commands:

#pvcreate /dev/hda
#pvcreate /dev/hdb

If a Linux partition is to be converted make sure that it is given partition type 0x8E using fdisk, then use pvcreate:

#pvcreate /dev/hda1

Creating a volume group:

Once you have one or more physical volumes created, you can create a volume group from these
PVs using the vgcreate command. The following command:

#vgcreate volume_group_one /dev/hda /dev/hdb

creates a new VG called volume_group_one with two disks, /dev/hda and /dev/hdb, and 4 MB PEs. If both /dev/hda and /dev/hdb are 128 GB in size, then the VG volume_group_one will have a total of 2**16 physical extents that can be allocated to logical volumes.
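That extent count is easy to verify with shell arithmetic: two 128 GB PVs give 256 GB total, and at the default 4 MB per physical extent that works out to 2**16 extents:

```shell
# 2 disks * 128 GB each, converted to MB, divided by the 4 MB PE size.
echo $(( 2 * 128 * 1024 / 4 ))   # 65536, i.e. 2**16
```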

Additional PVs can be added to this volume group using the vgextend command. The following commands convert /dev/hdc into a PV and then adds that PV to volume_group_one:
#pvcreate /dev/hdc
#vgextend volume_group_one /dev/hdc
This same PV can be removed from volume_group_one by the vgreduce command:

#vgreduce volume_group_one /dev/hdc

Note that any logical volumes using physical extents from PV /dev/hdc will be removed as well. This raises the issue of how we create an LV within a volume group in the first place.

Creating a logical volume:

We use the lvcreate command to create a new logical volume using the free physical extents in the VG pool. Continuing our example using VG volume_group_one (with two PVs /dev/hda and /dev/hdb and a total capacity of 256 GB), we could allocate nearly all the PEs in the
volume group to a single linear LV called logical_volume_one with the following LVM
command:


#lvcreate -n logical_volume_one --size 255G volume_group_one
Instead of specifying the LV size in GB we could also specify it in terms of logical extents. First we use vgdisplay to determine the number of PEs in the volume_group_one:

#vgdisplay volume_group_one | grep "Total PE"

which returns

Total     PE      65536

Then the following lvcreate command will create a logical volume with 65536 logical extents and fill the volume group completely:

#lvcreate -n logical_volume_one -l 65536 volume_group_one

To create a 1500MB linear LV named logical_volume_one and its block device special file
/dev/volume_group_one/logical_volume_one use the following command:
#lvcreate -L1500 -n logical_volume_one volume_group_one

The lvcreate command uses linear mappings by default.

Striped mappings can also be created with lvcreate. For example, to create a 255 GB large logical volume with two stripes and stripe size of 4 KB the following command can be used:
#lvcreate -i2 -I4 --size 255G -n logical_volume_one_striped volume_group_one

If you want the logical volume to be allocated from a specific physical volume in the volume group, specify the PV or PVs at the end of the lvcreate command line. For example, this command:

#lvcreate -i2 -I4 -L128G -n logical_volume_one_striped volume_group_one /dev/hda /dev/hdb

creates a striped LV named logical_volume_one_striped that is striped across two PVs (/dev/hda and /dev/hdb) with a stripe size of 4 KB and a total size of 128 GB.

An LV can be removed from a VG through the lvremove command, but first the LV must be unmounted:

#umount /dev/volume_group_one/logical_volume_one
#lvremove /dev/volume_group_one/logical_volume_one

Note that LVM volume groups and underlying logical volumes are included in the device special file directory tree in the /dev directory with the following layout:

#/dev/<volume_group_name>/<logical_volume_name>

so that if we had two volume groups myvg1 and myvg2 and each with three logical volumes named lv01, lv02, lv03, six device special files would be created:

/dev/myvg1/lv01
/dev/myvg1/lv02
/dev/myvg1/lv03
/dev/myvg2/lv01
/dev/myvg2/lv02
/dev/myvg2/lv03
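That layout is purely combinatorial, so the six paths can be generated mechanically; a small sketch:

```shell
# Generate the device special file paths for two VGs with three LVs each.
for vg in myvg1 myvg2; do
  for lv in lv01 lv02 lv03; do
    echo "/dev/$vg/$lv"
  done
done
```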

Extending a logical volume

An LV can be extended by using the lvextend command. You can specify either an absolute size for the extended LV or how much additional storage you want to add to the LV. For example:

#lvextend -L120G /dev/myvg/homevol

will extend LV /dev/myvg/homevol to 120 GB, while

#lvextend -L+10G /dev/myvg/homevol

will extend LV /dev/myvg/homevol by an additional 10 GB. Once a logical volume has been extended, the underlying file system can be expanded to exploit the additional storage now available on the LV. With Red Hat Enterprise Linux 4, it is possible to expand both the ext3fs and GFS file systems online, without bringing the system down. (The ext3 file system can be shrunk or expanded offline using the ext2resize command.) To resize ext3fs, the following command

#ext2online /dev/myvg/homevol

will extend the ext3 file system to completely fill the LV, /dev/myvg/homevol, on which it resides.

The file system specified by device (partition, loop device, or logical volume) or mount point must currently be mounted, and it will be enlarged to fill the device, by default. If an optional size parameter is specified, then this size will be used instead.

Differences between LVM1 and LVM2

The new release of LVM, LVM 2, is available only on Red Hat Enterprise Linux 4 and later kernels. It is upwardly compatible with LVM 1 and retains the same command line interface structure. However, it uses a new, more scalable and resilient metadata structure that allows for transactional metadata updates (for quick recovery after server failures), very large numbers of devices, and clustering. For Enterprise Linux servers deployed in mission-critical environments that require high availability, LVM2 is the right choice for Linux volume management. Table 1 (A comparison of LVM 1 and LVM 2) summarizes the differences between LVM1 and LVM2 in features, kernel support, and other areas.



Features                                    LVM1                  LVM2

RHEL AS 2.1 support                         No                    No
RHEL 3 support                              Yes                   No
RHEL 4 support                              No                    Yes
Transactional metadata for fast recovery    No                    Yes
Shared volume mounts with GFS               No                    Yes
Cluster Suite failover supported            Yes                   Yes
Striped volume expansion                    No                    Yes
Max number PVs, LVs                         256 PVs, 256 LVs      2**32 PVs, 2**32 LVs
Max device size                             2 Terabytes           8 Exabytes (64-bit CPUs)
Volume mirroring support                    No                    Yes, in Fall 2005

(Table 1. A comparison of LVM 1 and LVM 2)


RAID

Introduction
RAID stands for Redundant Array of Inexpensive Disks. This is a solution where several physical hard disks (two or more) are governed by a unit called RAID controller, which turns them into a single, cohesive data storage block.

An example of a RAID configuration would be to take two hard disks, each 80GB in size, and RAID them into a single unit 160GB in size. Another example of RAID would be to take these two disks and write data to each, creating two identical copies of everything.

RAID controllers can be implemented in hardware, which makes the RAID completely transparent to the operating systems running on top of these disks, or it can be implemented in software, which is the case we are interested in.

Purpose of RAID

RAID is used to increase the logical capacity of storage devices used, improve read/write performance and ensure redundancy in case of a hard disk failure. All these needs can be addressed by other means, usually more expensive than the RAID configuration of several hard disks. The adjective Inexpensive used in the name is not without a reason.

Advantages

The major pluses of RAID are the cost and flexibility. It is possible to dynamically adapt to the growing or changing needs of a storage center, server performance or machine backup requirements merely by changing parameters in software, without physically touching the hardware. This makes RAID more easily implemented than equivalent hardware solutions.

For instance, improved performance can be achieved by buying better, faster hard disks and using them instead of the old ones. This necessitates spending money, turning off the machine, swapping out physical components, and performing a new installation. RAID can achieve the same with only a new installation required. In general, advantages include:

•     Improved read/write performance in some RAID configurations.

•     Improved redundancy in the case of a failure in some RAID configurations.

•     Increased flexibility in hard disk & partition layout.

Disadvantages

The problems with RAID are directly related to its advantages. For instance, while RAID striping can improve performance, that setup necessarily reduces the safety of the implementation. On the other hand, with increased redundancy, space efficiency is reduced. Other possible problems with RAID include:

•     Increased wear of hard disks, leading to an increased failure rate.

•     Lack of compatibility with other hardware components and some software, like system imaging programs.

•     Greater difficulty in performing backups and system rescue/restore in the case of a failure.

•     Limited support by operating systems expected to use the RAID.

Limitations

RAID introduces a higher level of complexity into the system compared to conventional disk layout. This means that certain operating systems and/or software solutions may not work as intended. A good example of this problem is the LKCD kernel crash utility, which cannot be used in local dump configuration with RAID devices.

The problem with software limitations is that they might not be apparent until after the system has been configured, complicating things.

To sum things up for this section, using RAID requires careful consideration of system needs. In home setups, RAID is usually not needed, except for people who require exceptional performance or a very high level of redundancy. Still, if you do opt for RAID, be aware of the pros and cons and plan accordingly.

This means testing the backup and imaging solutions, the stability of installed software and the ability to switch away from RAID without significantly disrupting your existing setup.

RAID levels

In the section above, we have mentioned several scenarios, where this or that RAID configuration may benefit this or that aspect of system work. These configurations are known as RAID levels and they govern all aspects of RAID benefits and drawbacks, including read/write performance, redundancy and space efficiency.

There are many RAID levels. It will be impossible to list them all here. For details on all available solutions, you might want to read the  Wikipedia article on the subject. The article not only presents the different levels, it also lists the support for each on different operating systems.

In this tutorial, we will mention the most common, most important RAID types, all of which are fully supported by Linux.

RAID 0 (Striping)

This level is achieved by grouping 2 or more hard disks into a single unit with the total size equaling that of all disks used. Practical example: 3 disks, each 80GB in size, can be used in a 240GB RAID 0 configuration.

RAID 0 works by breaking data into fragments and writing to all disks simultaneously. This significantly improves the read and write performance. On the other hand, no single disk contains the entire information for any bit of data committed. This means that if one of the disks fails, the entire RAID is rendered inoperable, with unrecoverable loss of data.

RAID 0 is suitable for non-critical operations that require good performance, like the system partition or the /tmp partition where lots of temporary data is constantly written. It is not suitable for data storage.
  

 

RAID 1 (Mirroring)

This level is achieved by grouping 2 or more hard disks into a single unit with the total size equaling that of the smallest of the disks used. This is because RAID 1 keeps every bit of data replicated on each of its devices in exactly the same fashion, creating identical clones. Hence the name, mirroring. Practical example: 2 disks, each 80GB in size, can be used in an 80GB RAID 1 configuration. On a side note, in mathematical terms, RAID 1 is an AND function, whereas RAID 0 is an OR.

Because of its configuration, RAID 1 reduces write performance, as every chunk of data has to be written n times, once on each of the paired devices. The read performance is identical to that of single disks. Redundancy is improved, as the normal operation of the system can be maintained as long as any one disk is functional. RAID 1 is suitable for data storage, especially with non-intensive I/O tasks.

 

 

RAID 5

This is a more complex solution, with a minimum of three devices used. Data and parity information are striped across all of the devices (the variant with a single dedicated parity disk is actually RAID 4, which is rarely used). If one of the devices malfunctions, the array will continue operating, reconstructing the missing data from the parity information. The failure will be transparent to the user, save for the reduced performance.

RAID 5 improves the read performance, as well as redundancy, and is useful in mission-critical scenarios where both good throughput and data integrity are important. Writes are somewhat slower, and RAID 5 does induce a slight CPU penalty due to parity calculations.
  

 

Linear RAID

This is a less common level, although fully usable. Linear is similar to RAID 0, except that data is written sequentially rather than in parallel. Linear RAID is a simple grouping of several devices into a larger volume, the total size of which is the sum of all members. For instance, three disks the sizes of 40, 60 and 250GB can be grouped into a linear RAID the total size of 350GB.

Linear RAID provides no read/write performance benefit, nor does it provide redundancy; a loss of any member will render the entire array unusable. It merely increases size. It's very similar to LVM's linear mapping. Linear RAID is suitable when large data exceeding the individual size of any disk or partition must be stored.
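The capacity rules of the levels discussed above reduce to simple arithmetic. For n equally sized disks of size s: RAID 0 and linear RAID give n*s, RAID 1 gives s, and RAID 5 gives (n-1)*s. A quick check with three hypothetical 80GB disks:

```shell
n=3   # number of disks
s=80  # size of each disk in GB

echo "RAID 0 / linear: $(( n * s ))GB"        # 240GB
echo "RAID 1:          ${s}GB"                # 80GB
echo "RAID 5:          $(( (n - 1) * s ))GB"  # 160GB
```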

Other levels

There are several other levels available. For example, RAID 6 is very similar to RAID 5, except that it has dual parity. Then, there are also nested levels, which combine different level solutions in a single set. For instance, RAID 0+1 is a nested set of striped devices in a mirror configuration. This setup requires a minimum of four disks.

These setups are less common, more complex and more suitable for business rather than home environments, therefore we won't talk about them in this tutorial. Still, it is good to know about them, in case you ever need them.