
Saturday, November 1, 2014

Squid Proxy Server

Squid is a proxy server and web cache daemon. It has a wide variety of uses, from speeding up a web server by caching repeated requests, to caching web, DNS and other network lookups for a group of people sharing network resources, to aiding security by filtering traffic. Although primarily used for HTTP and FTP, Squid includes limited support for several other protocols, including TLS, SSL, Internet Gopher and HTTPS.


yum -y install squid
chkconfig squid on

IMPORTANT: Write all the ACLs first and the http_access rules after them. The order in which the rules are written affects how the proxy evaluates them.
#Port to which squid listens
http_port 3128


Allowing the Known Networks/IPs
============================
Declare all the known networks and allow those networks/IPs:

acl our_networks src 192.168.25.0/24 192.168.2.0/24 10.1.0.1
http_access allow our_networks

In the same way we can deny access using:

http_access deny our_networks


Blocking Sites using proxy.
==========================
acl blocksite1 dstdomain www.yahoo.com .facebook.com
http_access deny blocksite1

Blocking List of Sites.
======================
acl blocksitelist dstdomain "/etc/squid/restricted_sites"
http_access deny blocksitelist
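
The restricted sites file is just a plain list of domains, one per line; a leading dot matches the domain together with all of its subdomains. A hypothetical /etc/squid/restricted_sites might look like:

www.yahoo.com
.facebook.com
.example.org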


Blocking Sites with Specific Words using proxy.
==============================================
acl blockwords url_regex gmail
http_access deny blockwords

Blocking List of Words.
======================
acl blockwordlist url_regex "/etc/squid/restricted_words"
http_access deny blockwordlist
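
The restricted words file holds one regular expression per line; a URL matching any of them is denied. A hypothetical /etc/squid/restricted_words might look like:

gmail
facebook
torrent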


Display Custom message For Blocked Site.
========================================
deny_info <Error-Page-Name> <acl-name>

You can get the error page names from /usr/share/squid/errors/templates/; some of the error pages are as follows.
ERR_ACCESS_DENIED            ERR_FTP_FAILURE       ERR_INVALID_URL          ERR_SOCKET_FAILURE
ERR_CACHE_ACCESS_DENIED      ERR_FTP_FORBIDDEN     ERR_LIFETIME_EXP         ERR_TOO_BIG
ERR_CACHE_MGR_ACCESS_DENIED  ERR_FTP_NOT_FOUND     ERR_NEW                  ERR_UNSUP_HTTPVERSION
ERR_CANNOT_FORWARD           ERR_FTP_PUT_CREATED   ERR_NO_RELAY             ERR_UNSUP_REQ
ERR_CONNECT_FAIL             ERR_FTP_PUT_ERROR     ERR_ONLY_IF_CACHED_MISS  ERR_URN_RESOLVE
ERR_DIR_LISTING              ERR_FTP_PUT_MODIFIED  ERR_PRECONDITION_FAILED  ERR_WRITE_ERROR
ERR_DNS_FAIL                 ERR_FTP_UNAVAILABLE   ERR_READ_ERROR           ERR_ZERO_SIZE_OBJECT
ERR_ESI                      ERR_ICAP_FAILURE      ERR_READ_TIMEOUT
ERR_FORWARDING_DENIED        ERR_INVALID_REQ       ERR_SECURE_CONNECT_FAIL
ERR_FTP_DISABLED             ERR_INVALID_RESP      ERR_SHUTTING_DOWN

If we need a custom page, we create the page in that directory and reference it in the deny_info directive. This can be placed just above the corresponding http_access rule.
For example, if we create an error page named ERR_NEW, the rules will look like:

acl blockwordlist url_regex "/etc/squid/restricted_words"
deny_info ERR_NEW blockwordlist
http_access deny blockwordlist

For HTTPS requests the browser shows its own proxy-refusal message instead of the custom error page; see https://bugzilla.mozilla.org/show_bug.cgi?id=493699 .


Blocking and Allowing By Time
=============================
In the second ACL, MTWHFA means Monday through Saturday (M=Monday, T=Tuesday, W=Wednesday, H=Thursday, F=Friday, A=Saturday).
16:00-19:00 is the time window, in 24-hour format.

acl myip src 192.168.25.31
acl worktime time MTWHFA 16:00-19:00
http_access allow myip worktime



Setting up maxconn ACL
======================
acl ACCOUNTSDEPT src 192.168.5.0/24
acl limitusercon maxconn 3
http_access deny ACCOUNTSDEPT limitusercon

acl ACCOUNTSDEPT src 192.168.5.0/24 : Our accounts department IP range
acl limitusercon maxconn 3 : Allow at most 3 simultaneous web connections from the same client IP
http_access deny ACCOUNTSDEPT limitusercon : Apply the ACL

Mentioning Allowed Ports
========================
acl SSL_ports port 443
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 70          # gopher

# Deny requests to certain unsafe ports
http_access deny !Safe_ports

# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports



Adding User Authentication to Squid
==================================
Check for the ncsa_auth binary under the squid directory and enter the following line in squid.conf. ncsa_auth can be in either the lib or lib64 directory, depending on your OS architecture.

#Add Following Line in squid.conf#
auth_param basic program /usr/lib64/squid/ncsa_auth /etc/squid/squid_user

#Creating the User file and adding the user in to the List.#
touch /etc/squid/squid_user
htpasswd /etc/squid/squid_user <username>

#To enable the authentication in the current proxy, add the following lines in squid.conf along with the other acl and http_access rules #

acl class proxy_auth REQUIRED
http_access allow class

And finally deny all other access to this proxy
==============================================
http_access deny all
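
Putting it all together, a minimal squid.conf following the ordering rule from the top of this post (all ACL definitions first, then the http_access rules) could look like the sketch below; the networks and file paths are just the example values used above:

http_port 3128

acl our_networks src 192.168.25.0/24 192.168.2.0/24
acl blocksitelist dstdomain "/etc/squid/restricted_sites"
acl SSL_ports port 443
acl Safe_ports port 80 21 443 70
acl CONNECT method CONNECT

http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
deny_info ERR_NEW blocksitelist
http_access deny blocksitelist
http_access allow our_networks
http_access deny all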

Friday, October 24, 2014

Removing Blank Lines from the File.

In sed 
Type the following sed command to delete all blank lines:

Display without blank lines
sed '/^$/d' input.txt

Remove all the Blank Lines from file
sed -i '/^$/d' input.txt
cat input.txt

In awk 

Type the following awk command to delete all blank lines:

Display without blank lines
awk NF input.txt

Remove all the Blank Lines from file
awk 'NF' input.txt > output.txt
cat output.txt


In perl
Type the following perl one-liner to delete all blank lines and save the original file as input.txt.backup:
Remove all the Blank Lines from file
perl -i.backup -n -e "print if /\S/" input.txt


In vi editor
:g/^$/d
:g executes a command on every line matching a regex; here the regex matches blank lines and the command is :d (delete).


In tr
tr -s '\n' < abc.txt   # squeeze runs of newlines into one, removing blank lines

In grep
grep -v "^$" abc.txt
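
A quick sanity check of any of these, e.g. with grep, assuming an abc.txt that contains a blank line:

$ printf 'one\n\ntwo\n' > abc.txt
$ grep -v "^$" abc.txt
one
two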



Wednesday, October 8, 2014

List installed Python Modules.

First install the python-pip package, then use the "pip freeze" command to display the installed modules.
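
On CentOS/RHEL the python-pip package typically comes from the EPEL repository, so something like the following should work (package names can vary by distribution and release):

yum -y install epel-release
yum -y install python-pip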

[root@ceph01-server ~]# python --version
Python 2.6.6
[root@ceph01-server ~]# pip freeze
Cheetah==2.4.1
Markdown==2.0.1
PyYAML==3.10
Pygments==1.1.1
argparse==1.2.1
backports.ssl-match-hostname==3.4.0.2
boto==2.32.1
ceph-deploy==1.5.17
chardet==2.0.1
cloud-init==0.7.4
configobj==4.6.0
distribute==0.7.3
heat-cfntools==1.2.6
iniparse==0.3.1
jsonpatch==1.2
jsonpointer==1.0
ordereddict==1.1
policycoreutils-default-encoding==0.1
prettytable==0.7.2
psutil==0.6.1
pycurl==7.19.0
pygpgme==0.1
requests==1.1.0
setools==1.0
six==1.7.3
urlgrabber==3.9.1
urllib3==1.5
yum-metadata-parser==1.1.2
[root@ceph01-server ~]#

Thursday, September 25, 2014

Script to check the loading time of a Site



cat test.sh
#!/bin/bash
CURL="/usr/bin/curl"
GAWK="/usr/bin/gawk"
echo -n "Please pass the url you want to measure: "
read url
URL="$url"
result=`$CURL -o /dev/null -s -w %{time_connect}:%{time_starttransfer}:%{time_total} $URL`
echo " Time_Connect Time_startTransfer Time_total "
echo $result | $GAWK -F: '{ print $1" "$2" "$3}'

Sample Testing
[root@vps examples]# sh test.sh
Please pass the url you want to measure: http://www.adminz.in
 Time_Connect Time_startTransfer Time_total
0.294 0.604 1.255
[root@vps examples]#
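
The same measurement also works as a plain one-liner, without the helper script; the URL is just the one from the sample run above:

curl -o /dev/null -s -w "connect: %{time_connect}  ttfb: %{time_starttransfer}  total: %{time_total}\n" http://www.adminz.in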

BASH Shellshock vulnerability and FIX


It allows the attacker to execute arbitrary commands by changing an environment variable in a specific way. Bash is the default command interpreter for Linux and many other Unix versions and is consequently in widespread use. But by itself the vulnerability is not that terrible; after all, it is a local vulnerability, and Bash is a command interpreter whose only reason to exist is to execute commands, so not such a big deal...

Unfortunately this is not quite true, as we need to look at how Bash is used. True, in its normal form as a command interpreter the attack vectors are quite small. However, Bash is very often involved in a networked setup to execute commands, and that opens up an interesting attack vector. Imagine a web server that allows you to ping an IP address (my router at home has that function, for example); it will most likely just call the "ping" executable with the argument you supplied, probably checking whether the argument is formatted correctly as an IP address.


RedHat has an extended list of situations that involve Bash in a remote context, and you can see it has the potential to be a widespread problem, similar to Heartbleed in April. Some of the security researchers involved at the time, namely @ErrataRob, have already started their Internet-wide scans looking for vulnerable servers:

  • Apache server using mod_cgi or mod_cgid are affected if CGI scripts are either written in bash, or spawn subshells. Such subshells are implicitly used by system/popen in C, by os.system/os.popen in Python, system/exec in PHP (when run in CGI mode), and open/system in Perl if a shell is used (which depends on the command string)
  • ForceCommand is used in sshd configs to provide limited command execution capabilities for remote users. This flaw can be used to bypass that and provide arbitrary command execution. Some Git and Subversion deployments use such restricted shells. Regular use of OpenSSH is not affected because users already have shell access.
  • DHCP clients invoke shell scripts to configure the system, with values taken from a potentially malicious server. This would allow arbitrary commands to be run, typically as root, on the DHCP client machine.
  • Various daemons and SUID/privileged programs may execute shell scripts with environment variable values set / influenced by the user, which would allow for arbitrary commands to be run.
  • Any other application which is hooked onto a shell or runs a shell script using bash as the interpreter. Shell scripts which do not export variables are not vulnerable to this issue, even if they process untrusted content and store it in (unexported) shell variables and open subshells.


To check the vulnerability, run:

env x='() { :;}; echo vulnerable' bash -c 'echo hello'

If you get an output like
==
vulnerable
hello
==
Bash is vulnerable.

If you get an output like 
====
bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
hello
====
Bash is not vulnerable.

Fixes,
For Red Hat, CentOS, Debian and Ubuntu the patches are already available in the repos.

In redhat/Centos
yum update bash

In debian/Ubuntu 
apt-get update && apt-get install --only-upgrade bash

Or, if you want to compile and install the latest bash:


apt-get install wget patch gcc make

yum install wget patch gcc make 

mkdir src
cd src
wget http://ftp.gnu.org/gnu/bash/bash-4.3.tar.gz
#download all patches
for i in $(seq -f "%03g" 1 25); do wget http://ftp.gnu.org/gnu/bash/bash-4.3-patches/bash43-$i; done
tar zxvf bash-4.3.tar.gz 
cd bash-4.3
#apply all patches
for i in $(seq -f "%03g" 1 25); do patch -p0 < ../bash43-$i; done
#build and install (note: this installs to /usr/local/bin/bash by default; the system /bin/bash is only replaced if you set --prefix accordingly or copy the binary)
./configure && make && make install
cd .. 
cd ..
rm -r src

Once the patches are applied, check the vulnerability again and make sure it is fine.
More updates are coming regarding this; will keep you updated.

Workaround: Using mod_security:

The following mod_security rules can be used to reject HTTP requests containing data that may be interpreted by Bash as function definition if set in its environment. They can be used to block attacks against web services, such as attacks against CGI applications outlined above.
Request Header values:
SecRule REQUEST_HEADERS "^\(\) {" "phase:1,deny,id:1000000,t:urlDecode,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
SERVER_PROTOCOL values:
SecRule REQUEST_LINE "\(\) {" "phase:1,deny,id:1000001,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
GET/POST names:
SecRule ARGS_NAMES "^\(\) {" "phase:2,deny,id:1000002,t:urlDecode,t:urlDecodeUni,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
GET/POST values:
SecRule ARGS "^\(\) {" "phase:2,deny,id:1000003,t:urlDecode,t:urlDecodeUni,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
File names for uploads:
SecRule FILES_NAMES "^\(\) {" "phase:2,deny,id:1000004,t:urlDecode,t:urlDecodeUni,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"
These may result in false positives, but it's unlikely; you can log matches and keep an eye on them. You may also want to avoid logging, as it could generate a significant amount of log data.

Workaround: Using IPTables:

A note on using IPTables string matching:
iptables -m string --hex-string '|28 29 20 7B|'
is not a good option, because the attacker can easily send one or two characters per packet and evade this signature. However, it may provide an overview of automated attempts at exploiting this vulnerability.

Tuesday, September 16, 2014

Mysql error : Message: Transaction level 'READ-COMMITTED' in InnoDB is not safe for binlog mode 'STATEMENT'))

Error creating issue: Could not create workflow instance: root cause: while inserting: [GenericEntity:OSWorkflowEntry][id,null][name,jira][state,0] (SQL Exception while executing the following:INSERT INTO OS_WFENTRY (ID, NAME, INITIALIZED, STATE) VALUES (?, ?, ?, ?) (Binary logging not possible. Message: Transaction level 'READ-COMMITTED' in InnoDB is not safe for binlog mode 'STATEMENT'))

 

Cause: this is required by MySQL. Statement-based binlogging does not work in the isolation levels READ UNCOMMITTED and READ COMMITTED, since the necessary locks cannot be taken.

 

Resolution
To change to row based binary logging, set the following in /etc/my.cnf (or your my.cnf if it's elsewhere):

binlog_format=row
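
After restarting MySQL you can verify the setting from the client; on a running server (assuming sufficient privileges) it can also be switched on the fly:

mysql> SET GLOBAL binlog_format = 'ROW';
mysql> SHOW VARIABLES LIKE 'binlog_format';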

Adding License for Vmware Esxi

The following command allows us to add the VMware license through SSH access to the ESXi server.



vim-cmd vimsvc/license --set *********************

Wednesday, September 10, 2014

logrotate not working

When the default logrotate is not working, we can force a manual run of the full configuration using the command

/usr/sbin/logrotate -f /etc/logrotate.conf

and test a single configuration in debug mode (the -d flag makes it a dry run, so no changes are made) using

logrotate -fd /etc/logrotate.d/test

where test is the configuration file name.
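
For illustration, a hypothetical /etc/logrotate.d/test could look like this; the log path and options are placeholders for your own service:

/var/log/test/test.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
}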

Thursday, September 4, 2014

Openstack Icehouse install Part -7 Cinder Service Block storage

Install Cinder- Block Storage Service

On Controller Node
Install the appropriate packages

yum install openstack-cinder -y

Configure Block Storage to use your database

openstack-config --set /etc/cinder/cinder.conf database connection mysql://cinder:cinder4admin@controller/cinder

Creating Database
On Mysql Server

mysql -u root -p

CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder4admin';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'10.1.15.30' IDENTIFIED BY 'cinder4admin';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder4admin';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'10.1.15.31' IDENTIFIED BY 'cinder4admin';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'10.1.15.35' IDENTIFIED BY 'cinder4admin';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'10.1.15.36' IDENTIFIED BY 'cinder4admin';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'10.1.15.32' IDENTIFIED BY 'cinder4admin';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'10.1.15.42' IDENTIFIED BY 'cinder4admin';
exit;

Create the database tables

su -s /bin/sh -c "cinder-manage db sync" cinder

Create a cinder user.

keystone user-create --name=cinder --pass=cinder4admin --email=cinder@example.com
keystone user-role-add --user=cinder --tenant=service --role=admin

Edit the /etc/cinder/cinder.conf configuration file:

openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host controller
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password cinder4admin

Configure Block Storage to use the Qpid message broker:

openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend cinder.openstack.common.rpc.impl_qpid
openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_hostname 10.1.15.40

Register the Block Storage service with the Identity service so that other OpenStack services can locate it:

keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ volume / {print $2}') --publicurl=http://controller:8776/v1/%\(tenant_id\)s --internalurl=http://controller:8776/v1/%\(tenant_id\)s --adminurl=http://controller:8776/v1/%\(tenant_id\)s

Register a service and endpoint for version 2 of the Block Storage service API:

keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ volumev2 / {print $2}') --publicurl=http://controller:8776/v2/%\(tenant_id\)s --internalurl=http://controller:8776/v2/%\(tenant_id\)s --adminurl=http://controller:8776/v2/%\(tenant_id\)s

Start and configure the Block Storage services to start when the system boots:

service openstack-cinder-api start
service openstack-cinder-scheduler start
chkconfig openstack-cinder-api on
chkconfig openstack-cinder-scheduler on

On Cinder Service Node.

Setting Up the NFS Share

Installing NFS packages
yum install nfs-utils nfs-utils-lib

Make and configure partition
mkfs.ext4 /dev/mapper/vg_cloud2-LogVol03
mkdir /home/cinder_nfs
mount /dev/mapper/vg_cloud2-LogVol03 /home/cinder_nfs/
Add entries in Fstab
/dev/mapper/vg_cloud2-LogVol03 /home/cinder_nfs ext4 rw 0 0

Add Share to NFS
vi /etc/exports
/home/cinder_nfs *(rw,sync,no_root_squash,no_subtree_check)
exportfs -a
showmount -e 192.168.11.42

service nfs start
service nfs restart
service iptables stop
chkconfig iptables off
Install the Cinder Software
yum install openstack-cinder scsi-target-utils

Configure the Service

Copy the /etc/cinder/cinder.conf configuration file from the controller, or perform the following steps to set the keystone credentials:
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_host controller
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_user cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken admin_password cinder4admin
openstack-config --set /etc/cinder/cinder.conf DEFAULT rpc_backend cinder.openstack.common.rpc.impl_qpid
openstack-config --set /etc/cinder/cinder.conf DEFAULT qpid_hostname 10.1.15.40

openstack-config --set /etc/cinder/cinder.conf database connection mysql://cinder:cinder4admin@controller/cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT glance_host controller

[root@compute2 ~]# cat /etc/cinder/nfsshares
192.168.11.42:/home/cinder_nfs
[root@compute2 ~]#

openstack-config --set /etc/cinder/cinder.conf DEFAULT nfs_shares_config /etc/cinder/nfsshares
openstack-config --set /etc/cinder/cinder.conf DEFAULT volume_driver cinder.volume.drivers.nfs.NfsDriver
service openstack-cinder-volume start
chkconfig openstack-cinder-volume on
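
As a quick check from the controller, assuming the admin credentials are sourced (e.g. from /root/admin-openrc.sh as in the Glance post below), create and list a test volume:

source /root/admin-openrc.sh
cinder create --display-name demo-volume1 1
cinder list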

Wednesday, June 18, 2014

OpenStack – Icehouse –Part 2 Glance

Configure the Image Service on the controller server


yum install openstack-glance python-glanceclient -y

openstack-config --set /etc/glance/glance-api.conf database connection mysql://glance:glance4mar@controller/glance
openstack-config --set /etc/glance/glance-registry.conf database connection mysql://glance:glance4mar@controller/glance

openstack-config --set /etc/glance/glance-api.conf DEFAULT rpc_backend qpid
openstack-config --set /etc/glance/glance-api.conf DEFAULT qpid_hostname controller

mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance4mar';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance4mar';
exit

su -s /bin/sh -c "glance-manage db_sync" glance
keystone user-create --name=glance --pass=glance4mar --email=glance@example.com
keystone user-role-add --user=glance --tenant=service --role=admin

openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host controller
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password glance4mar
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host controller
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_port 35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_protocol http
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password glance4mar
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone

keystone service-create --name=glance --type=image --description="OpenStack Image Service"
keystone endpoint-create --service-id=$(keystone service-list | awk '/ image / {print $2}') --publicurl=http://controller:9292 --internalurl=http://controller:9292 --adminurl=http://controller:9292

service openstack-glance-api start
service openstack-glance-registry start
chkconfig openstack-glance-api on
chkconfig openstack-glance-registry on

#Verify the Image Service installation


mkdir /tmp/images
cd /tmp/images/
wget http://cdn.download.cirros-cloud.net/0.3.2/cirros-0.3.2-x86_64-disk.img

source /root/admin-openrc.sh
glance image-create --name "cirros-0.3.2-x86_64" --disk-format qcow2 --container-format bare --is-public True --progress < cirros-0.3.2-x86_64-disk.img

cd /
rm -rf /tmp/images

glance image-list

 

Importing Images into Glance


You can load an image from the command line with glance, e.g.:
glance image-create --name 'Fedora 19 x86_64' --disk-format qcow2 --container-format bare --is-public true \
--copy-from http://cloud.fedoraproject.org/fedora-19.x86_64.qcow2

Building Your Own Images


Alternatively, one can use diskimage-builder, which is available in the RDO repository:

yum install diskimage-builder

$ disk-image-create -a amd64 fedora vm -o fedora-image.qcow2

More Images In Following URL


http://openstack.redhat.com/Image_resources

Thursday, May 29, 2014

Git Part 2

http://superuser.com/questions/261060/git-how-can-i-config-git-to-ignore-file-permissions-changes
Turn off filemode so that changes to file permissions are not considered, as shown below.
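
The setting referred to there is git's core.fileMode; turning it off for the current repository looks like this:

git config core.fileMode false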

For Mac Machines
http://stackoverflow.com/questions/8402281/github-push-error-permission-denied
cd ~
ssh-keygen
cat .ssh/id_rsa.pub >> .ssh/authorized_keys

Initialize a project on the server
cd /opt/git/
mkdir <Project-name>
cd <Project-name>
git init --bare

In client
git clone xxxx@xxx.xxx.xxx.xxx:/opt/git/<Project-name>
cd <Project-name>
git add *
git commit -m "Test Files"
>>git remote add <remote-name> <git-repo-URL>
git remote add origin xxxx@xxx.xxx.xxx.xxx:/opt/git/<Project-name>
git push origin master

Branching
git checkout -b <Branch-name>
git push <remote-name> <branch-name>
git push <remote-name> <local-branch-name>:<remote-branch-name>

List ALL Branching
git branch -a
List Remote Branching
git branch -r

Merge two branch
git checkout a (you will switch to branch a)
git merge b (this will merge all changes from branch b into branch a)
git commit -a (this will commit your changes)

List Merged Branches
git branch --merged lists the branches that have been merged into the current branch
git branch --no-merged lists the branches that have not been merged into the current branch

 

Wednesday, May 21, 2014

IFS Internal Field Separator in Bash Scripting

IFS stands for Internal Field Separator - it is the set of characters that separate fields. By default it contains space, tab and newline. In the example below it is set to the newline character (\n), so the for loop processes the text line by line. You can change the value of $IFS (to some character that appears in your input file) and watch how the text gets split.

 

[root@ip-192-168-1-36 tmp]# for i in `cat sample.txt`; do echo $i; done
Mar 10
Mar 11
Mar 7
Mar 8
Mar 9
[root@ip-192-168-1-36 tmp]# IFS=$' '
[root@ip-192-168-1-36 tmp]# for i in `cat sample.txt`; do echo $i; done

Mar
10
Mar
11
Mar
7
Mar
8
Mar
9
[root@ip-192-168-1-36 tmp]# IFS=$'\n'
[root@ip-192-168-1-36 tmp]# for i in `cat sample.txt`; do echo $i; done
Mar 10
Mar 11
Mar 7
Mar 8
Mar 9
[root@ip-192-168-1-36 tmp]#
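
Instead of juggling $IFS around a for loop over `cat`, the usual idiom for line-by-line processing is a while/read loop; a minimal sketch using the same sample.txt:

while IFS= read -r line; do
    echo "$line"
done < sample.txt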

Installing and configuring Amazon EC2 command line

Download the Amazon API CLI tools using the following commands and extract them to a proper place. For this example, we are using the /opt directory.

# mkdir /opt/ec2
# wget http://s3.amazonaws.com/ec2-downloads/ec2-api-tools.zip
# unzip ec2-api-tools.zip -d /tmp
# mv /tmp/ec2-api-tools-* /opt/ec2/tools
Step 3- Download Private Key and Certificate Files

Now create and download the X.509 certificate files (a private key file and a certificate file) from the Security Credentials page of your account and copy them to the /opt/ec2/certs/ directory.

# ls -l /opt/ec2/certs/

-rw-r--r--. 1 root root 1281 May 15 12:57 my-ec2-cert.pem
-rw-r--r--. 1 root root 1704 May 15 12:56 my-ec2-pk.pem
Step 4- Configure Environment

Install JAVA
The Amazon EC2 command line tools require Java 1.6 or later. Make sure you have a suitable Java installed on your system. You can install either the JRE or the JDK; both are fine to use.
# java -version
java version "1.8.0_05"
Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
Java HotSpot(TM) Client VM (build 25.5-b02, mixed mode)
If you don't have Java installed on your system, use the links below to install Java first:
Installing JAVA/JDK 8 on CentOS, RHEL and Fedora
Installing JAVA/JDK 8 on Ubuntu

Now edit the ~/.bashrc file and add the following values at the end of the file:

export EC2_BASE=/opt/ec2
export EC2_HOME=$EC2_BASE/tools
export EC2_PRIVATE_KEY=$EC2_BASE/certs/my-ec2-pk.pem
export EC2_CERT=$EC2_BASE/certs/my-ec2-cert.pem
export EC2_URL=https://ec2.xxxxxxx.amazonaws.com
export AWS_ACCOUNT_NUMBER=
export PATH=$PATH:$EC2_HOME/bin
export JAVA_HOME=/opt/jdk1.8.0_05
Now execute the following command to set environment variables

$ source ~/.bashrc

After completing all configuration, run the following command to quickly verify the setup.

# ec2-describe-regions

REGION eu-west-1 ec2.eu-west-1.amazonaws.com
REGION sa-east-1 ec2.sa-east-1.amazonaws.com
REGION us-east-1 ec2.us-east-1.amazonaws.com
REGION ap-northeast-1 ec2.ap-northeast-1.amazonaws.com
REGION us-west-2 ec2.us-west-2.amazonaws.com
REGION us-west-1 ec2.us-west-1.amazonaws.com
REGION ap-southeast-1 ec2.ap-southeast-1.amazonaws.com
REGION ap-southeast-2 ec2.ap-southeast-2.amazonaws.com

Saturday, May 17, 2014

Enable up/Down arrow in powershell

For this, you need PSReadline. First, install PsGet if you don't have it; to install it, just run the following in PowerShell:

(new-object Net.WebClient).DownloadString("http://psget.net/GetPsGet.ps1") | iex
Then, install PSReadline:

install-module PSReadline
Import PSReadline after loading the persistent history:

Import-Module PSReadLine
And you will be able to recall previous commands with the up arrow key. Add the following to get partial history search with the up/down arrow keys:

Set-PSReadlineKeyHandler -Key UpArrow -Function HistorySearchBackward
Set-PSReadlineKeyHandler -Key DownArrow -Function HistorySearchForward
Lastly, to enable bash style completion:

Set-PSReadlineKeyHandler -Key Tab -Function Complete

Tuesday, February 26, 2013

Awk Introduction and Printing Operations


Awk is a programming language which allows easy manipulation of structured data and the generation of formatted reports. Awk stands for the names of its authors: Aho, Weinberger, and Kernighan.

Awk is mostly used for pattern scanning and processing. It searches one or more files to see if they contain lines that match the specified patterns, and then performs the associated actions.

Some of the key features of Awk are:

Awk views a text file as records and fields.
Like common programming languages, Awk has variables, conditionals and loops.
Awk has arithmetic and string operators.
Awk can generate formatted reports.

Awk reads from a file or from its standard input, and outputs to its standard output. Awk does not get along with non-text files.

Syntax:

awk '/search pattern1/ {Actions}
/search pattern2/ {Actions}' file

In the above awk syntax:

search pattern is a regular expression.
Actions – statement(s) to be performed.
several patterns and actions are possible in Awk.
file – Input file.
The single quotes around the program prevent the shell from interpreting any of its special characters.

Awk Working Methodology

Awk reads the input files one line at a time.
For each line, it tries the given patterns in order; if a pattern matches, it performs the corresponding action.
If no pattern matches, no action is performed.
In the above syntax, either the search pattern or the action is optional, but not both.
If the search pattern is not given, Awk performs the given actions for each line of the input.
If the action is not given, the default action is to print all lines that match the given patterns.
Empty braces without any action do nothing; they suppress even the default printing.
Each statement in Actions should be delimited by a semicolon.

Let us create an employee.txt file with the following content, which will be used in the examples mentioned below.

$cat employee.txt
100  Thomas  Manager    Sales       $5,000
200  Jason   Developer  Technology  $5,500
300  Sanjay  Sysadmin   Technology  $7,000
400  Nisha   Manager    Marketing   $9,500
500  Randy   DBA        Technology  $6,000

Awk Example 1. Default behavior of Awk

By default Awk prints every line from the file.

$ awk '{print;}' employee.txt
100  Thomas  Manager    Sales       $5,000
200  Jason   Developer  Technology  $5,500
300  Sanjay  Sysadmin   Technology  $7,000
400  Nisha   Manager    Marketing   $9,500
500  Randy   DBA        Technology  $6,000

In the above example no pattern is given, so the actions apply to all the lines. The action print without any argument prints the whole line by default, so it prints every line of the file. Actions have to be enclosed within braces.
Awk Example 2. Print the lines which match the pattern.

$ awk '/Thomas/
> /Nisha/' employee.txt
100  Thomas  Manager    Sales       $5,000
400  Nisha   Manager    Marketing   $9,500

In the above example it prints every line that matches 'Thomas' or 'Nisha'. It has two patterns. Awk accepts any number of patterns, but each set (pattern and its corresponding actions) has to be separated by a newline.
Awk Example 3. Print only specific fields.

Awk has a number of built-in variables. For each record, i.e. line, it splits the record on whitespace by default and stores the fields in the $n variables. If a line has 4 fields, they are stored in $1, $2, $3 and $4; $0 represents the whole line. NF is a built-in variable which holds the total number of fields in a record.

$ awk '{print $2,$5;}' employee.txt
Thomas $5,000
Jason $5,500
Sanjay $7,000
Nisha $9,500
Randy $6,000

$ awk '{print $2,$NF;}' employee.txt
Thomas $5,000
Jason $5,500
Sanjay $7,000
Nisha $9,500
Randy $6,000

In the above example $2 and $5 represent Name and Salary respectively. We can also get the Salary using $NF, where $NF represents the last field. In the print statement, the ',' inserts the output field separator (a space by default) between the fields.
Awk Example 4. Initialization and Final Action

Awk has two important patterns which are specified by the keyword called BEGIN and END.

Syntax:

BEGIN { Actions}
{ACTION} # Action for every line in the file
END { Actions }

# is for comments in Awk

Actions specified in the BEGIN section are executed before Awk starts reading lines from the input.
END actions are performed after it has finished reading and processing all lines from the input.

$ awk 'BEGIN {print "Name\tDesignation\tDepartment\tSalary";}
> {print $2,"\t",$3,"\t",$4,"\t",$NF;}
> END{print "Report Generated\n--------------";
> }' employee.txt
Name    Designation  Department  Salary
Thomas  Manager      Sales       $5,000
Jason   Developer    Technology  $5,500
Sanjay  Sysadmin     Technology  $7,000
Nisha   Manager      Marketing   $9,500
Randy   DBA          Technology  $6,000
Report Generated
--------------

In the above example, it prints a header line and a footer for the report.
Awk Example 5. Find the employees who have an employee id greater than 200

$ awk '$1 >200' employee.txt
300  Sanjay  Sysadmin   Technology  $7,000
400  Nisha   Manager    Marketing   $9,500
500  Randy   DBA        Technology  $6,000

In the above example, the first field ($1) is the employee id. So if $1 is greater than 200, the default print action prints the whole line.
Awk Example 6. Print the list of employees in Technology department

The department name is available as the fourth field, so we need to check whether $4 matches the string “Technology”; if yes, print the line.

$ awk '$4 ~/Technology/' employee.txt
200  Jason   Developer  Technology  $5,500
300  Sanjay  Sysadmin   Technology  $7,000
500  Randy   DBA        Technology  $6,000

The operator ~ compares against a regular expression. If it matches, the default action, i.e. printing the whole line, is performed.
Awk Example 7. Print number of employees in Technology department

The example below checks whether the department is Technology; if it is, the action just increments the count variable, which was initialized to zero in the BEGIN section.

$ awk 'BEGIN { count=0;}
$4 ~ /Technology/ { count++; }
END { print "Number of employees in Technology Dept =",count;}' employee.txt
Number of employees in Technology Dept = 3

Then at the end of processing, it just prints the value of count, which gives you the number of employees in the Technology department.

 

Print all but the very first column:
cat somefile | awk '{$1=""; print $0}'

Print all but the first two columns:
cat somefile | awk '{$1=$2=""; print $0}'
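
By default Awk splits fields on whitespace, as all the examples above assumed; the -F option selects a different field separator. A quick sketch against /etc/passwd, which is colon-delimited:

# print the username (field 1) and login shell (last field) of every account
awk -F: '{print $1, $NF}' /etc/passwd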