Saturday, April 6, 2013

SCP in detail

Example syntax for Secure Copy (scp)
What is Secure Copy?
scp allows files to be copied to, from, or between different hosts. It uses ssh for data transfer and provides the same authentication and same level of security as ssh.
Examples
Copy the file "foobar.txt" from a remote host to the local host

$ scp your_username@remotehost.edu:foobar.txt /some/local/directory

Copy the file "foobar.txt" from the local host to a remote host

$ scp foobar.txt your_username@remotehost.edu:/some/remote/directory

Copy the directory "foo" from the local host to a remote host's directory "bar"


$ scp -r foo your_username@remotehost.edu:/some/remote/directory/bar

Copy the file "foobar.txt" from remote host "rh1.edu" to remote host "rh2.edu"

$ scp your_username@rh1.edu:/some/remote/directory/foobar.txt \
      your_username@rh2.edu:/some/remote/directory/

Copy the files "foo.txt" and "bar.txt" from the local host to your home directory on the remote host

$ scp foo.txt bar.txt your_username@remotehost.edu:~

Copy the file "foobar.txt" from the local host to a remote host using port 2264


$ scp -P 2264 foobar.txt your_username@remotehost.edu:/some/remote/directory

Copy multiple files from the remote host to your current directory on the local host


$ scp your_username@remotehost.edu:/some/remote/directory/\{a,b,c\} .


$ scp your_username@remotehost.edu:~/\{foo.txt,bar.txt\} .

scp Performance
By default, scp uses the Triple-DES cipher to encrypt the data being sent. Using the Blowfish cipher has been shown to increase speed. This can be done by using the option -c blowfish on the command line.

$ scp -c blowfish some_file your_username@remotehost.edu:~

It is often suggested that the -C option for compression should also be used to increase speed. Compression, however, only significantly increases speed if your connection is very slow; otherwise it may just add extra load on the CPU. An example of using blowfish and compression:


$ scp -c blowfish -C local_file your_username@remotehost.edu:~

How do I turn on/off mod_userdir on my cPanel/WHM server?

Apache's mod_userdir allows users to view their sites by entering a tilde (~) and their username as the URI on a specific host. For example, http://test.cpanel.net/~fred/ will bring up the user fred's domain. The disadvantage of this feature is that any bandwidth used by such a site is counted against the domain it is accessed under (in this case test.cpanel.net). mod_userdir protection prevents this from happening. You may, however, want to disable the protection on specific virtual hosts (generally shared SSL hosts).

First, you'll need to log in to WHM for your server at http://serversip/whm (serversip being the IP address of your dedicated server or VPS).

Once you are logged into WHM, you will want to browse over to the following path:

Main >> Security Center >> Apache mod_userdir Tweak

From there, you can select which accounts you want to enable mod_userdir for.
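If you'd like to confirm from the command line that Apache on the server has mod_userdir available at all, one quick check (a sketch, assuming the cPanel-built Apache binary is reachable as httpd in your PATH) is to list the loaded modules and filter for it:

# httpd -M | grep userdir

If the module is loaded, the output should include a userdir_module line.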

Hard drive replacement in a software RAID

The following configuration is assumed:
# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sda4[0] sdb4[1]
      1822442815 blocks super 1.2 [2/2] [UU]

md2 : active raid1 sda3[0] sdb3[1]
      1073740664 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      524276 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      33553336 blocks super 1.2 [2/2] [UU]

unused devices: <none>

There are four partitions in total:

  • /dev/md0 as swap

  • /dev/md1 as /boot

  • /dev/md2 as /

  • /dev/md3 as /home


/dev/sdb is the defective drive in this case. A missing or defective drive is shown by [U_] and/or [_U]. If the RAID array is intact, it shows [UU].
# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sda4[0] sdb4[1](F)
      1822442815 blocks super 1.2 [2/1] [U_]

md2 : active raid1 sda3[0] sdb3[1](F)
      1073740664 blocks super 1.2 [2/1] [U_]

md1 : active raid1 sda2[0] sdb2[1](F)
      524276 blocks super 1.2 [2/1] [U_]

md0 : active raid1 sda1[0] sdb1[1](F)
      33553336 blocks super 1.2 [2/1] [U_]

unused devices: <none>

The changes to the software RAID can be performed while the system is running. If /proc/mdstat shows that the drive is failing, as in the example here, then an appointment can be made with the support technicians to replace the drive:
# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sda4[0]
      1822442815 blocks super 1.2 [2/1] [U_]

md2 : active raid1 sda3[0]
      1073740664 blocks super 1.2 [2/1] [U_]

md1 : active raid1 sda2[0]
      524276 blocks super 1.2 [2/1] [U_]

md0 : active raid1 sda1[0]
      33553336 blocks super 1.2 [2/1] [U_]

unused devices: <none>

Removal of the defective drive


Before a new drive can be added the old defective drive needs to be removed from the RAID array. This needs to be done for each individual partition.
# mdadm /dev/md0 -r /dev/sdb1
# mdadm /dev/md1 -r /dev/sdb2
# mdadm /dev/md2 -r /dev/sdb3
# mdadm /dev/md3 -r /dev/sdb4

The following command shows the drives that are part of an array:
# mdadm --detail /dev/md0

In some cases a drive may only be partly defective, so that, for example, only /dev/md0 is in the [U_] state, whereas all other devices are in the [UU] state. In this case the command
# mdadm /dev/md1 -r /dev/sdb2

fails, as the /dev/md1 array is OK.

In this event, the command
# mdadm --manage /dev/md1 --fail /dev/sdb2

needs to be executed first, to put the RAID into the [U_] status.

Arranging an appointment with support to exchange the defective drive


In order to exchange the defective drive, it is necessary to arrange an appointment with support in advance. The server will need to be taken offline for a short time.

Please use the support request section in Robot to get in contact with the technicians.

Preparing the new drive


Both drives in the array need to have the exact same partitioning. Depending on the partition table type used (MBR or GPT), appropriate utilities have to be used to copy the partition table. GPT partition tables are usually used on drives larger than 2 TiB (e.g. the 3 TB HDDs in EX4 and EX6 servers).
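If you are unsure which partition table type a drive uses, parted can report it (shown here for /dev/sda purely as an example; the "Partition Table" field in the output will read either gpt or msdos):

# parted /dev/sda print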

Drives with GPT


There are several redundant copies of the GUID Partition Table (GPT) stored on the drive, so tools that support GPT, for example parted or GPT fdisk, need to be used to edit the table. The sgdisk tool from GPT fdisk (pre-installed when using the Rescue System) can be used to easily copy the partition table to a new drive. Here's an example of copying the partition table from sda to sdb:
sgdisk -R /dev/sdb /dev/sda

The drive then needs to be assigned a new random UUID:
sgdisk -G /dev/sdb

After this the drive can be added to the array. As a final step the boot loader needs to be installed.

Drives with MBR


The partition table can simply be copied to the new drive using sfdisk:
# sfdisk -d /dev/sda | sfdisk /dev/sdb

where /dev/sda is the source drive and /dev/sdb is the target drive.

(Optional) If the partitions are not detected by the system, the partition table has to be re-read by the kernel:
# sfdisk -R /dev/sdb

Alternatively, the partitions can also be created manually using fdisk, cfdisk or other tools. The partitions should be of the type Linux raid autodetect (id fd).
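As a rough illustration of the manual route, an abbreviated interactive fdisk session for changing partition 1 on /dev/sdb to type fd might look like this (drive and partition number are only examples, and the prompts differ slightly between fdisk versions):

# fdisk /dev/sdb
Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Command (m for help): w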

Integration of the new drive


Once the defective drive has been removed and the new one installed, it needs to be integrated into the RAID array. This needs to be done for each partition.
# mdadm /dev/md0 -a /dev/sdb1
# mdadm /dev/md1 -a /dev/sdb2
# mdadm /dev/md2 -a /dev/sdb3
# mdadm /dev/md3 -a /dev/sdb4

The new drive is now part of the array and will be synchronized. Depending on the size of the partitions, this procedure can take some time. The status of the synchronization can be observed using cat /proc/mdstat.
# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sdb4[1] sda4[0]
      1028096 blocks [2/2] [UU]
      [==========>..........]  resync = 50.0% (514048/1028096) finish=97.3min speed=65787K/sec

md2 : active raid1 sdb3[1] sda3[0]
      208768 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
      2104448 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
      208768 blocks [2/2] [UU]

unused devices: <none>

Boot loader installation


If you are doing this repair in a booted system, then for GRUB2 running grub-install on the new drive is enough. For example:
grub-install /dev/sdb

With GRUB1 (legacy GRUB), depending on which drive is defective, more steps might be required.

  • Start the GRUB console: grub

  • Specify the partition where /boot is located: root (hd0,1) (/dev/sda2 = (hd0,1))

  • Install the boot loader into the MBR: setup (hd0)

  • Then, to install the boot loader on the second drive:

    • Map the second drive as hd0: device (hd0) /dev/sdb

    • Repeat steps 2 and 3 exactly (do not change the commands)



  • Exit the GRUB console: quit


Probing devices to guess BIOS drives. This may take a long time.

    GNU GRUB  version 0.97  (640K lower / 3072K upper memory)

 [ Minimal BASH-like line editing is supported. For the first word, TAB
   lists possible command completions. Anywhere else TAB lists the possible
   completions of a device/filename. ]
grub> device (hd0) /dev/sdb
device (hd0) /dev/sdb
grub> root (hd0,1)
root (hd0,1)
 Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd0)
setup (hd0)
 Checking if "/boot/grub/stage1" exists... yes
 Checking if "/boot/grub/stage2" exists... yes
 Checking if "/boot/grub/e2fs_stage1_5" exists... yes
 Running "embed /boot/grub/e2fs_stage1_5 (hd0)"... 26 sectors are embedded.
succeeded
 Running "install /boot/grub/stage1 (hd0) (hd0)1+26 p (hd0,1)/boot/grub/stage2 /boot/grub/grub.conf"... succeeded
Done.
grub> quit
#

For a repair via the Rescue System, the installed system has to be mounted first, as described here. All GRUB installation steps then have to be performed after a chroot.
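As a minimal sketch of that procedure, assuming the partition layout from the start of this article (/dev/md2 as /, /dev/md1 as /boot) and /mnt as the mount point, the steps in the Rescue System look roughly like this:

# mount /dev/md2 /mnt
# mount /dev/md1 /mnt/boot
# mount --bind /dev /mnt/dev
# mount --bind /proc /mnt/proc
# mount --bind /sys /mnt/sys
# chroot /mnt

After the chroot, grub-install /dev/sdb (or the GRUB1 steps shown above) can be run as usual.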