
Monday, August 15, 2016

AWS Flow Logs for Traffic Monitoring

VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. Flow log data is stored using Amazon CloudWatch Logs. After you've created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs.

Flow logs can help you with a number of tasks; for example, to troubleshoot why specific traffic is not reaching an instance, which in turn can help you diagnose overly restrictive security group rules. You can also use flow logs as a security tool to monitor the traffic that is reaching your instance.

To create a flow log for your subnet, follow these steps:
1. Create a log group in CloudWatch:
  - Create a new log group in CloudWatch to hold the flow log entries.
  - Note that you can reuse the same log group for multiple flow logs.
  - To create the log group: AWS Management Console -> CloudWatch -> Logs -> Create new log group

2. Create a flow log for the VPC:
  - Open the AWS Management Console -> Service VPC
  - In the navigation pane, choose Your VPCs, then select your VPC
  - From VPC Actions, select Create Flow Log
Refer to this link for more information: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html#create-flow-log
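
If you prefer the AWS CLI, the same two steps look roughly like this. A minimal sketch, assuming the CLI is configured; the log group name, VPC ID, and IAM role ARN are placeholders:

# Create the CloudWatch Logs log group (name is a placeholder)
aws logs create-log-group --log-group-name my-vpc-flow-logs

# Create the flow log for the VPC (VPC ID and role ARN are placeholders)
aws ec2 create-flow-logs --resource-type VPC --resource-ids vpc-0123456789abcdef0 \
    --traffic-type ALL --log-group-name my-vpc-flow-logs \
    --deliver-logs-permission-arn arn:aws:iam::123456789012:role/flow-logs-role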

Note that a VPC flow log record[1] is a space-separated string with the format outlined in the AWS documentation; 14 fields are available, and fields #11 and #12 (start and end) record the capture window in Unix seconds.
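
For reference, a record looks like the following (values are illustrative; the layout follows [1], so the 11th and 12th fields are the start and end timestamps):

2 123456789010 eni-abc123de 172.31.16.139 172.31.16.21 20641 22 6 20 4249 1418530010 1418530070 ACCEPT OK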

According to the Filter & Pattern Syntax[2], we can filter the log events matching our conditions for space-delimited logs.

Example filter (since we don't care about the first 10 fields in this case, we use ... to skip them):

[..., start, end, action, status]

Say we need to capture the VPC flow log entries between Sat, 06 Aug 2016 04:35:56 GMT and Sun, 07 Aug 2016 04:35:56 GMT.

Using an epoch time converter (http://www.epochconverter.com/, for example), we get the corresponding Unix times in seconds: 1470458156 and 1470544556.

So the filter we will be using becomes:

[..., start>1470458156, end<1470544556, action, status]

You can then follow link [3] to search all log entries after a given start time using the Amazon CloudWatch console:

Go to the AWS CloudWatch console -> Logs -> select the VPC flow log group -> above the "Log Streams List", click "Search Events",

and use [..., start>1470458156, end<1470544556, action, status] in the filter field, then press Enter.

You can modify the filter accordingly for more conditions.
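
The same search also works from the AWS CLI, if that is handier (the log group name is a placeholder):

aws logs filter-log-events --log-group-name my-vpc-flow-logs \
    --filter-pattern '[..., start>1470458156, end<1470544556, action, status]'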


Resource Links:
[1] AWS - VPC - VPC Flow Logs - Flow Log Records: https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html#flow-log-records
[2] AWS - CloudWatch - Searching and Filtering Log Data - Filter and Pattern Syntax - Using Metric Filters to Extract Values from Space-Delimited Log Events: https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/FilterAndPatternSyntax.html#d0e26783
[3] AWS - CloudWatch - To search all log entries after a given start time using the Amazon CloudWatch console: https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/SearchDataFilterPattern.html

Friday, October 17, 2014

Logstash to Parse Local Files, Apache/Nginx Logs

Filters in Logstash

Filters are an in-line processing mechanism that provides the flexibility to slice and dice your data to fit your needs. Let's see one in action, namely the grok filter.

input { stdin { } }

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
Run logstash with this configuration:

bin/logstash -f logstash-filter.conf
Now paste this line into the terminal (so it will be processed by the stdin input):

127.0.0.1 - - [11/Dec/2013:00:01:45 -0800] "GET /xampp/status.php HTTP/1.1" 200 3891 "http://cadenza/xampp/navi.php" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:25.0) Gecko/20100101 Firefox/25.0"


Run Logstash on a local file by configuring the input section. Below we parse an Apache access log from the local server.

input {
  file {
    path => "/Users/kurt/logs/access_log"
    start_position => beginning
  }
}

filter {
  if [path] =~ "access" {
    mutate { replace => { "type" => "apache_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch {
    host => localhost
  }
  stdout { codec => rubydebug }
}
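
Save the configuration and run Logstash against it the same way (the filename logstash-apache.conf is just an assumed name):

bin/logstash -f logstash-apache.conf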

Logstash Configuration for Parsing Nginx Logs

Nginx's default access log uses the same combined format as Apache, so the same COMBINEDAPACHELOG grok pattern applies; only the file path and type change:

input {
  file {
    path => "/var/log/nginx/access.log"
    start_position => beginning
  }
}

filter {
  if [path] =~ "access" {
    mutate { replace => { "type" => "nginx_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch {
    host => localhost
  }
  stdout { codec => rubydebug }
}

Log Monitoring with Kibana + Logstash + Elasticsearch



Centralized logging using Logstash and Elasticsearch can be very useful when attempting to identify problems with your servers or applications, as it allows you to search through all of your logs in a single place.


Installing Java 

yum install java-1.7.0-openjdk-*

Install Elasticsearch

yum install https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.3.4.noarch.rpm

Elasticsearch is now installed. Let's edit the configuration file: /etc/elasticsearch/elasticsearch.yml

Add the following line somewhere in the file, to disable dynamic scripts:

script.disable_dynamic: true

You will also want to restrict outside access to your Elasticsearch instance, so outsiders can't read your data or shut down your Elasticsearch cluster through the HTTP API. Find the line that specifies network.host and uncomment it so it looks like this:

network.host: localhost

Then disable multicast by finding the discovery.zen.ping.multicast.enabled item and uncommenting it so it looks like this:

discovery.zen.ping.multicast.enabled: false


Now start Elasticsearch:

sudo service elasticsearch restart
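
To verify that Elasticsearch is up and answering on the loopback interface, query the HTTP API; it should return a small JSON blob with the cluster name and version:

curl http://localhost:9200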


Install Nginx

yum install -y http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

yum install nginx -y

Download the sample Nginx configuration from Kibana's github repository to your home directory:

cd ~; curl -OL https://gist.githubusercontent.com/thisismitch/2205786838a6a5d61f55/raw/f91e06198a7c455925f6e3099e3ea7c186d0b263/nginx.conf

Open the sample configuration file for editing:

vi nginx.conf

Find and change the values of server_name to your FQDN (or localhost if you aren't using a domain name) and root to where we installed Kibana, so they look like the following entries:

server_name FQDN;
root  /usr/share/nginx/kibana3;

Save and exit. Now copy it over your Nginx default server block with the following command:

sudo cp ~/nginx.conf /etc/nginx/conf.d/default.conf
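
Before starting Nginx, you can validate the copied configuration:

sudo nginx -t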


Installing Kibana to visualize the logs
wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.1.tar.gz
tar zxvf kibana-3.1.1.tar.gz


Open the Kibana configuration file kibana-3.1.1/config.js, find the line that specifies the Elasticsearch server URL, and replace the port number (9200 by default) with 80:

   elasticsearch: "http://"+window.location.hostname+":80",

mv kibana-3.1.1 /usr/share/nginx/kibana3

Start Nginx:

service nginx start

Install htpasswd, which is part of httpd-tools:

sudo yum install httpd-tools-2.2.15

Then generate a login that will be used in Kibana to save and share dashboards (substitute your own username):

sudo htpasswd -c /etc/nginx/conf.d/kibana.myhost.org.htpasswd user

Install Logstash

yum install https://download.elasticsearch.org/logstash/logstash/packages/centos/logstash-1.4.2-1_2c0f5a1.noarch.rpm -y

Creating Certificates

cd /etc/pki/tls; sudo openssl req -x509 -batch -nodes -days 3650 -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt


cat << EOF >> /etc/logstash/conf.d/01-lumberjack-input.conf
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
EOF

cat << EOF >> /etc/logstash/conf.d/10-syslog.conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
EOF


cat << EOF >> /etc/logstash/conf.d/30-lumberjack-output.conf
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
EOF
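
With the three config files in place, it is worth sanity-checking them before restarting Logstash (a sketch; the RPM installs Logstash under /opt/logstash):

sudo /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/
sudo service logstash restart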




On the Logstash Server

Copy the SSL certificate to the client server (substitute with your own login):

scp /etc/pki/tls/certs/logstash-forwarder.crt user@server_private_IP:/tmp


Install the Logstash Forwarder Package (on the client server)

yum install -y http://packages.elasticsearch.org/logstashforwarder/centos/logstash-forwarder-0.3.1-1.x86_64.rpm

Next, you will want to install the Logstash Forwarder init script so that it starts on boot. We will use the init script provided by logstashbook.com:

cd /etc/init.d/; sudo curl -o logstash-forwarder http://logstashbook.com/code/4/logstash_forwarder_redhat_init
sudo chmod +x logstash-forwarder

The init script depends on a file called /etc/sysconfig/logstash-forwarder. A sample file is available to download:

sudo curl -o /etc/sysconfig/logstash-forwarder http://logstashbook.com/code/4/logstash_forwarder_redhat_sysconfig

sudo vi /etc/sysconfig/logstash-forwarder
And modify the LOGSTASH_FORWARDER_OPTIONS value so it looks like the following:
LOGSTASH_FORWARDER_OPTIONS="-config /etc/logstash-forwarder -spool-size 100"
Save and quit.

Now copy the SSL certificate into the appropriate location (/etc/pki/tls/certs):

sudo cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/

Configure Logstash Forwarder
On the client server, create and edit the Logstash Forwarder configuration file, which is in JSON format:

cat << EOF > /etc/logstash-forwarder
{
  "network": {
    "servers": [ "192.168.255.1:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure"
       ],
      "fields": { "type": "syslog" }
    }
   ]
}

EOF


Note that this is where you would add more files/types to configure Logstash Forwarder to ship other log files to Logstash on port 5000.

Now we will want to add the Logstash Forwarder service with chkconfig:

sudo chkconfig --add logstash-forwarder

Now start Logstash Forwarder to put our changes into place:

sudo service logstash-forwarder start
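
To confirm events are flowing end to end, query Elasticsearch on the Logstash server directly; a quick check with the search API should return recent syslog documents:

curl 'http://localhost:9200/_search?q=type:syslog&pretty'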


Now browse to the Kibana server's IP to see the dashboard.

Thursday, September 25, 2014

Checking Load Speed of a Site Using PhantomJS

Using PhantomJS to check different parameters of a site.

Installing the dependencies:
sudo yum install fontconfig freetype libfreetype.so.6 libfontconfig.so.1 libstdc++.so.6

wget https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-1.9.7-linux-x86_64.tar.bz2
tar jxvf phantomjs-1.9.7-linux-x86_64.tar.bz2
mv phantomjs-1.9.7-linux-x86_64 phantomjs
export PATH=$PATH:/root/phantomjs/bin/

In the phantomjs directory there are example JS scripts. We use the loadspeed.js test script shipped with the distribution to check the load speed of a site.

phantomjs loadspeed.js http://www.adminz.in
Page title is Linux Conquering Cloud
Loading time 4559 msec


Basic examples

  • arguments.js shows the arguments passed to the script
  • countdown.js prints a 10 second countdown
  • echoToFile.js writes the command line arguments to a file
  • fibo.js lists the first few numbers in the Fibonacci sequence
  • hello.js displays the famous message
  • module.js and universe.js demonstrate the use of module system
  • outputEncoding.js displays a string in various encodings
  • printenv.js displays the system's environment variables
  • scandir.js lists all files in a directory and its subdirectories
  • sleepsort.js sorts integers and delays display depending on their values
  • version.js prints out PhantomJS version number
  • page_events.js prints out page events firing: useful to better grasp page.on* callbacks

Rendering/rasterization

  • colorwheel.js creates a color wheel using HTML5 canvas
  • rasterize.js rasterizes a web page to image or PDF
  • rendermultiurl.js renders multiple web pages to images
  • technews.js captures Google News as a PNG image

Page automation

  • direction.js uses Google Maps to print driving direction
  • follow.js shows the number of followers of some Twitter accounts
  • imagebin.js uploads an image to imagebin.org
  • injectme.js injects itself into a web page context
  • ipgeocode.js deduces the location via IP geocoding
  • movies.js lists movies from kids-in-mind.com
  • phantomwebintro.js uses jQuery to read #intro element text from phantomjs.org
  • pizza.js uses yelp.com to find pizza places in Mountain View
  • seasonfood.js displays the BBC seasonal food list
  • tweets.js displays the most recent tweets
  • unrandomize.js modifies a global object at page initialization
  • waitfor.js waits until a test condition is true or a timeout occurs

Network

  • detectsniff.js detects if a web page sniffs the user agent
  • loadspeed.js computes the loading speed of a web site
  • netlog.js dumps all network requests and responses
  • netsniff.js captures network traffic in HAR format (see the example run after this list)
  • post.js sends an HTTP POST request to a test server
  • postserver.js starts a web server and sends an HTTP POST request to it
  • server.js starts a web server and sends an HTTP GET request to it
  • serverkeepalive.js starts a web server which answers in plain text
  • simpleserver.js starts a web server which answers in HTML
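
For example, netsniff.js can be run the same way as loadspeed.js; it prints the HAR capture to stdout, so redirect it to a file:

phantomjs netsniff.js http://www.adminz.in > site.har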

Script to check the loading time of a Site

cat test.sh
#!/bin/bash
# Measure connect time, time to first byte, and total time for a URL with curl
CURL="/usr/bin/curl"
GAWK="/usr/bin/gawk"
echo -n "Please pass the url you want to measure: "
read url
URL="$url"
# -o /dev/null discards the body, -s silences progress, -w prints the timing variables
result=`$CURL -o /dev/null -s -w %{time_connect}:%{time_starttransfer}:%{time_total} $URL`
echo " Time_Connect Time_startTransfer Time_total "
echo $result | $GAWK -F: '{ print $1" "$2" "$3}'

Sample Testing
[root@vps examples]# sh test.sh
Please pass the url you want to measure: http://www.adminz.in
 Time_Connect Time_startTransfer Time_total
0.294 0.604 1.255
[root@vps examples]#
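
The same timings are available as a one-liner if you don't want the interactive prompt:

curl -o /dev/null -s -w '%{time_connect} %{time_starttransfer} %{time_total}\n' http://www.adminz.in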