
Thursday, July 6, 2017

Graylog Configuration Error: Please verify that the server is healthy and working correctly.

First, we need to make sure Elasticsearch is running fine.

The following was the Elasticsearch configuration:
cluster.name: graylog
network.host: 127.0.0.1

Then make sure the entries for the following attributes in the Graylog server configuration are correct.



rest_listen_uri = http://0.0.0.0:9000/api
web_listen_uri = http://0.0.0.0:9000/
rest_transport_uri = http://192.168.0.66:9000/api
web_endpoint_uri = http://192.168.0.66:9000/api

** On AWS/Azure, make sure to use the server's public IP or the load balancer's IP.
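
To quickly verify that both layers are reachable, following is a minimal Python 3 sketch. It reuses the host from rest_transport_uri above and assumes Graylog 2.x, where the load-balancer status endpoint /api/system/lbstatus answers without authentication.

#!/usr/bin/env python3
# Probe Elasticsearch and the Graylog REST API; both should answer
# before the web interface can come up healthy.
import urllib.request

CHECKS = [
    ("Elasticsearch", "http://127.0.0.1:9200/_cluster/health"),
    ("Graylog API", "http://192.168.0.66:9000/api/system/lbstatus"),
]

for name, url in CHECKS:
    try:
        body = urllib.request.urlopen(url, timeout=5).read().decode()
        print("%s OK: %s" % (name, body.strip()))
    except Exception as exc:
        print("%s FAILED: %s" % (name, exc))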

Friday, June 30, 2017

Reduce TIME_WAIT socket connections

We can reduce the number of TIME_WAIT sockets by tweaking sysctl so that connections time out sooner and sockets in TIME_WAIT can be reused.

List the number of TIME_WAIT and ESTABLISHED connections:

>>netstat -nat | awk '{print $6}' | sort | uniq -c | sort -n

cat /proc/sys/net/ipv4/tcp_fin_timeout
cat /proc/sys/net/ipv4/tcp_tw_recycle
cat /proc/sys/net/ipv4/tcp_tw_reuse

If you have the default settings, you’ll probably see values of 60, 0 and 0. Let’s change those values to 30, 1 and 1.

Now, edit /etc/sysctl.conf with your favorite editor and add these lines to the end of it (or edit the values if they already exist):


# Decrease TIME_WAIT seconds
net.ipv4.tcp_fin_timeout = 30

# Recycle and Reuse TIME_WAIT sockets faster
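# (note: tcp_tw_recycle is known to break connections from clients behind
#  NAT and was removed in Linux 4.12; tcp_tw_reuse is the safer of the two)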
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1

Apply the new settings:

sysctl -p

Then list the connection states again to verify:

netstat -nat | awk '{print $6}' | sort | uniq -c | sort -n
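
The same counts can also be gathered without netstat by reading /proc/net/tcp directly, which is the data netstat itself parses. Following is a minimal Python 3 sketch (IPv4 only; read /proc/net/tcp6 as well for IPv6).

#!/usr/bin/env python3
# Count TCP sockets per state by parsing /proc/net/tcp; the "st" column
# holds the kernel's hex state code.
from collections import Counter

TCP_STATES = {
    "01": "ESTABLISHED", "02": "SYN_SENT", "03": "SYN_RECV",
    "04": "FIN_WAIT1", "05": "FIN_WAIT2", "06": "TIME_WAIT",
    "07": "CLOSE", "08": "CLOSE_WAIT", "09": "LAST_ACK",
    "0A": "LISTEN", "0B": "CLOSING",
}

counts = Counter()
with open("/proc/net/tcp") as f:
    next(f)  # skip the header line
    for line in f:
        state = line.split()[3]
        counts[TCP_STATES.get(state, state)] += 1

for state, count in sorted(counts.items(), key=lambda kv: kv[1]):
    print("%7d %s" % (count, state))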

Tuesday, June 6, 2017

ELK: JSON Data Not Logged Correctly in Elasticsearch

Symptom
Data written to S3 from Logstash is in the format:
2016-12-08T21:55:36.381Z %{host} %{message}
2016-12-08T21:55:36.385Z %{host} %{message}
2016-12-08T21:55:36.385Z %{host} %{message}
2016-12-08T21:55:36.390Z %{host} %{message}
2016-12-08T21:55:36.391Z %{host} %{message}
2016-12-08T21:55:36.421Z %{host} %{message}
2016-12-08T21:55:36.421Z %{host} %{message}
2016-12-08T21:55:36.421Z %{host} %{message}
Cause
What happens here is that the default plain codec is being used for the S3 output. In the configuration for custom Logstash outputs, you should use the json_lines codec; more codecs are listed in the Logstash codec plugin documentation.
Resolution
You can add the json_lines codec to your Custom Logstash Outputs configuration in the Logstash tile settings. Your configuration should look like the following:
output {
...
    s3 {
        access_key_id => "****************"
        secret_access_key => "*********************"
        region => "region name"
        bucket => "bucket-name"
        time_file => 15
        codec => "json_lines"
    }
...
}
After adding the json_lines codec, your S3 bucket Logstash entries should look more like this:
{"@timestamp":"2016-12-12T15:58:37.000Z","port":34854,"@type":"CounterEvent","@message":"{\"cf_origin\":\"firehose\",\"delta\":65,\"deployment\":\"cf\",\"event_type\":\"CounterEvent\",\"index\":\"9439da9a-fb72-4064-839f-934d4e8a6a5c\",\"ip\":\"192.0.2.1\",\"job\":\"router\",\"level\":\"info\",\"msg\":\"\",\"name\":\"udp.sentMessageCount\",\"origin\":\"MetronAgent\",\"time\":\"2016-12-12T15:58:37Z\",\"total\":5257491}","syslog_pri":"6","syslog_pid":"6229","@raw":"<6>2016-12-12T15:58:37Z f7643aae-c011-4715-a88b-2333aaf770ab doppler[6229]: {\"cf_origin\":\"firehose\",\"delta\":65,\"deployment\":\"cf\",\"event_type\":\"CounterEvent\",\"index\":\"9439da9a-fb72-4064-839f-934d4e8a6a5c\",\"ip\":\"192.0.2.1\",\"job\":\"router\",\"level\":\"info\",\"msg\":\"\",\"name\":\"udp.sentMessageCount\",\"origin\":\"MetronAgent\",\"time\":\"2016-12-12T15:58:37Z\",\"total\":5257491}","tags":["syslog_standard","firehose","CounterEvent"],"syslog_severity_code":6,"syslog_facility_code":0,"syslog_facility":"kernel","syslog_severity":"informational","@source":{"host":"f7643aae-c011-4715-a88b-2333aaf770ab","deployment":"cf","job":"router","ip":"192.0.2.1","program":"doppler","index":9439,"vm":"router/9439"},"@level":"INFO","CounterEvent":{"delta":65,"name":"udp.sentMessageCount","origin":"MetronAgent","total":5257491}}
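
Because json_lines writes one self-contained JSON document per line, the S3 objects become trivially machine-readable. Following is a minimal Python 3 sketch (the file name is hypothetical):

#!/usr/bin/env python3
# Parse a Logstash S3 output object written with the json_lines codec:
# every non-empty line is one JSON event.
import json

with open("ls.s3.part0.txt") as f:  # hypothetical file name
    for line in f:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        print("%s %s" % (event["@timestamp"], event.get("@type")))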

Saturday, April 29, 2017

Checking Oracle Database Connection from Java

========================
import java.sql.Connection;
import java.sql.Date;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class OracleSample {

    public static final String DBURL = "jdbc:oracle:thin:@***.***.***.***:1521:oracledatabase";
    public static final String DBUSER = "username";
    public static final String DBPASS = "Oracle8521";

    public static void main(String[] args) throws SQLException {
     
        // Load Oracle JDBC Driver
        DriverManager.registerDriver(new oracle.jdbc.OracleDriver());
     
        // Connect to Oracle Database
        Connection con = DriverManager.getConnection(DBURL, DBUSER, DBPASS);

        Statement statement = con.createStatement();

        // Execute a SELECT query on Oracle Dummy DUAL Table. Useful for retrieving system values
        // Enables us to retrieve values as if querying from a table
        ResultSet rs = statement.executeQuery("SELECT SYSDATE FROM DUAL");
     
     
        if (rs.next()) {
            Date currentDate = rs.getDate(1); // get first column returned
            System.out.println("Current Date from Oracle is : "+currentDate);
        }
        rs.close();
        statement.close();
        con.close();
    }
}
===================

>># javac -cp "./ojdbc7.jar:." OracleSample.java
>># java -cp "./ojdbc7.jar:." OracleSample
Current Date from Oracle is : 2017-02-09

Tuesday, January 17, 2017

Kibana Authentication with Nginx on CentOS


Kibana doesn’t support authentication or restricting access to dashboards by default. We can restrict access to Kibana 4 by using Nginx as a proxy in front of Kibana.

Install nginx server:
To install Nginx using yum, we first need to add the Nginx repository:
rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
Install Nginx and httpd-tools by issuing the following command:
yum -y install nginx httpd-tools
Create a password file for basic authentication of HTTP users; this enables password-protected access to the Kibana portal. Replace “admin” with your own user name:

htpasswd -c /etc/nginx/conf.d/kibana.htpasswd admin
Configure Nginx:
Create a configuration file named kibana.conf in the /etc/nginx/conf.d directory:
vi /etc/nginx/conf.d/kibana.conf
Place the following content in the kibana.conf file, assuming that both Kibana and Nginx are installed on the same server:

server {
    listen *:8080;
    server_name 192.168.0.1;
    access_log /var/log/nginx/kibana-access.log;
    error_log /var/log/nginx/kibana-error.log;
    location / {
        auth_basic "Restricted Access";
        auth_basic_user_file /etc/nginx/conf.d/kibana.htpasswd;
        proxy_pass http://192.168.0.1:5601;
        #proxy_connect_timeout 150;
        #proxy_send_timeout 100;
        #proxy_read_timeout 100;
    }
}
Restart nginx server:
sudo service nginx restart
Go to the URL http://192.168.0.1:8080; on a successful setup, the browser should show a basic authentication prompt.
If nothing shows up, check the logs to see whether you have encountered an error like the one below:
2015/08/11 22:31:13 [crit] 80274#0: *3 connect() to 192.168.1.5:5601 failed (13: Permission denied) while connecting to upstream, client: 10.200.100.29, server: 10.242.126.73, request: "GET / HTTP/1.1", upstream: "http://192.168.1.5:5601/", host: "192.168.1.5:8080"
Error Resolution:
This happens because SELinux is enabled on the machine. The fix is not to disable SELinux entirely, but to allow the web server to make network connections by setting the corresponding SELinux boolean:

sudo setsebool -P httpd_can_network_connect 1
Restart nginx:
sudo service nginx restart
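
To confirm that the proxy is actually enforcing authentication, probe it from any machine. Following is a minimal Python 3 sketch; the user name and password are hypothetical, use whatever you created with htpasswd above.

#!/usr/bin/env python3
# Expect HTTP 401 without credentials and HTTP 200 with valid ones.
import base64
import urllib.request
from urllib.error import HTTPError

URL = "http://192.168.0.1:8080/"  # the Nginx proxy configured above

try:
    urllib.request.urlopen(URL, timeout=5)
    print("WARNING: proxy answered without credentials")
except HTTPError as exc:
    print("No credentials -> HTTP %d (401 expected)" % exc.code)

req = urllib.request.Request(URL)
token = base64.b64encode(b"admin:secret").decode()  # hypothetical credentials
req.add_header("Authorization", "Basic " + token)
print("With credentials -> HTTP %d" % urllib.request.urlopen(req, timeout=5).getcode())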

Friday, December 23, 2016

AWS Volume Tagging Script

The following script generates the AWS CLI commands to tag each volume with the tags of the instance it is attached to.

Prerequisites
1. All instances are tagged properly.
2. The AWS CLI is installed and configured properly.
3. The AWS CLI output format is set to JSON.
4. Create the file Full_Json_Instances.json with the output of describe-instances (aws ec2 describe-instances > Full_Json_Instances.json).

===================
#!/usr/bin/env python
import json

def fun_Instance_Volume_Tagging(Instance):
    # Emit one create-tags command per attached EBS volume and instance tag
    for Volumes in Instance["BlockDeviceMappings"]:
        if "Ebs" not in Volumes:
            continue  # skip instance-store mappings, which have no volume ID
        VOL_ID = Volumes["Ebs"]["VolumeId"]
        for Tags in Instance.get("Tags", []):
            TAG_KEY = Tags["Key"]
            TAG_VALUE = Tags["Value"]
            print("aws ec2 create-tags --resources " + VOL_ID + " --tags Key=" + TAG_KEY + ",Value=" + TAG_VALUE)

with open("Full_Json_Instances.json") as json_file:
    json_data = json.load(json_file)
    for Reservation in json_data["Reservations"]:
        # A reservation can contain more than one instance
        for Instance in Reservation["Instances"]:
            fun_Instance_Volume_Tagging(Instance)
====================
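
The script only prints the create-tags commands; review the output and then pipe it to a shell to apply them, e.g. python aws_volume_tagging.py | bash (the script file name here is hypothetical).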

Wednesday, December 7, 2016

Enabling Password Protection for the WordPress Admin Page

To enable password protection for the login page of WordPress, add the following content to the .htaccess file in the WordPress root directory.

<Files wp-login.php>
   AuthName "xxxxxx Website  Access"
   AuthType Basic
   AuthUserFile /etc/xxxx/apacheuser
   Require valid-user
</Files>
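
The user file referenced by AuthUserFile is a standard htpasswd file; it can be created the same way as in the Kibana post above, e.g. htpasswd -c /etc/xxxx/apacheuser admin.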