
Thursday, July 6, 2017

GrayLog Configuration Error: Please verify that the server is healthy and working correctly.

First, we need to make sure Elasticsearch is running fine.
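A quick way to confirm this is to query the cluster health endpoint (assuming Elasticsearch is listening on the default port 9200 and bound to localhost, as in the configuration below):

curl -s 'http://127.0.0.1:9200/_cluster/health?pretty'
curl -s 'http://127.0.0.1:9200/'    # the cluster_name shown here should match "graylog"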

The following was the Elasticsearch configuration:
cluster.name: graylog
network.host: 127.0.0.1

Then make sure the entries in the Graylog configuration for the following attributes are correct.



rest_listen_uri = http://0.0.0.0:9000/api
web_listen_uri = http://0.0.0.0:9000/
rest_transport_uri = http://192.168.0.66:9000/api
web_endpoint_uri = http://192.168.0.66:9000/api

** In AWS/Azure, make sure to use the server's public IP or the load balancer's IP.
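After restarting graylog-server (the service name below assumes the official packages), a quick sanity check is to hit the REST API root; depending on your Graylog version it should return basic node/version information rather than a connection error:

sudo systemctl restart graylog-server
curl -s 'http://127.0.0.1:9000/api/'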

Friday, June 30, 2017

Reduce TIME_WAIT socket connections

We will reduce the number of TIME_WAIT connections by tweaking sysctl settings so that sockets time out sooner and can be reused.

List the number of TIME_WAIT and ESTABLISHED connections:

netstat -nat | awk '{print $6}' | sort | uniq -c | sort -n

Check the current kernel settings:

cat /proc/sys/net/ipv4/tcp_fin_timeout
cat /proc/sys/net/ipv4/tcp_tw_recycle
cat /proc/sys/net/ipv4/tcp_tw_reuse
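Equivalently, you can read all three values with a single sysctl call:

sysctl net.ipv4.tcp_fin_timeout net.ipv4.tcp_tw_recycle net.ipv4.tcp_tw_reuse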

If you have the default settings, you’ll probably see values of 60, 0 and 0. Let’s change those values to 30, 1 and 1.

Now, edit /etc/sysctl.conf with your favorite editor and add these lines to the end of it (or edit the values if they already exist):


# Decrease TIME_WAIT seconds
net.ipv4.tcp_fin_timeout = 30

# Recycle and Reuse TIME_WAIT sockets faster
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1

Apply the new settings:

sysctl -p

Then run the same netstat command again to verify that the TIME_WAIT count drops:

netstat -nat | awk '{print $6}' | sort | uniq -c | sort -n
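If ss (from iproute2) is available, it gives a quicker view of the same information; the timewait figure in the TCP summary should go down once the new settings take effect:

ss -s
ss -tan state time-wait | wc -l    # subtract 1 for the header line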

Tuesday, June 6, 2017

ELK: JSON Data Not Logged Correctly in Elasticsearch

Symptom
Data written to S3 from Logstash is in the following format:
2016-12-08T21:55:36.381Z %{host} %{message}
2016-12-08T21:55:36.385Z %{host} %{message}
2016-12-08T21:55:36.385Z %{host} %{message}
2016-12-08T21:55:36.390Z %{host} %{message}
2016-12-08T21:55:36.391Z %{host} %{message}
2016-12-08T21:55:36.421Z %{host} %{message}
2016-12-08T21:55:36.421Z %{host} %{message}
2016-12-08T21:55:36.421Z %{host} %{message}
Cause
What happens here is that the default plain codec is being used for the S3 output from Logsearch. In the configuration for custom Logstash outputs, you should use the json_lines codec instead. Additional codecs are listed in the Logstash codec plugin documentation.
Resolution
You can fix this by adding the json_lines codec to your Custom Logstash Outputs configuration in the Logstash tile settings. Your configuration should look like the following:
output {
...
    s3 {
        access_key_id => "****************"
        secret_access_key => "*********************"
        region => "region name"
        bucket => "bucket-name"
        time_file => 15
        codec => "json_lines"
    }
...
}
After adding the json_lines codec, your S3 bucket Logstash entries should look more like this:
{"@timestamp":"2016-12-12T15:58:37.000Z","port":34854,"@type":"CounterEvent","@message":"{\"cf_origin\":\"firehose\",\"delta\":65,\"deployment\":\"cf\",\"event_type\":\"CounterEvent\",\"index\":\"9439da9a-fb72-4064-839f-934d4e8a6a5c\",\"ip\":\"192.0.2.1\",\"job\":\"router\",\"level\":\"info\",\"msg\":\"\",\"name\":\"udp.sentMessageCount\",\"origin\":\"MetronAgent\",\"time\":\"2016-12-12T15:58:37Z\",\"total\":5257491}","syslog_pri":"6","syslog_pid":"6229","@raw":"<6>2016-12-12T15:58:37Z f7643aae-c011-4715-a88b-2333aaf770ab doppler[6229]: {\"cf_origin\":\"firehose\",\"delta\":65,\"deployment\":\"cf\",\"event_type\":\"CounterEvent\",\"index\":\"9439da9a-fb72-4064-839f-934d4e8a6a5c\",\"ip\":\"192.0.2.1\",\"job\":\"router\",\"level\":\"info\",\"msg\":\"\",\"name\":\"udp.sentMessageCount\",\"origin\":\"MetronAgent\",\"time\":\"2016-12-12T15:58:37Z\",\"total\":5257491}","tags":["syslog_standard","firehose","CounterEvent"],"syslog_severity_code":6,"syslog_facility_code":0,"syslog_facility":"kernel","syslog_severity":"informational","@source":{"host":"f7643aae-c011-4715-a88b-2333aaf770ab","deployment":"cf","job":"router","ip":"192.0.2.1","program":"doppler","index":9439,"vm":"router/9439"},"@level":"INFO","CounterEvent":{"delta":65,"name":"udp.sentMessageCount","origin":"MetronAgent","total":5257491}}
Additional Information