ELK Log Analysis System (Elasticsearch + Logstash + Kibana) Quick Start -- Revised Edition

Table of Contents

ELK stack
ELK environment preparation
Server environment
Modify the hosts file on both servers
Download and install the public signing key
Install the JDK (version 1.8 or higher)
Elasticsearch installation
Add the yum repository
Install Elasticsearch
Logstash installation
Add the yum repository
Install Logstash
Kibana installation
Add the yum repository
Install Kibana
Managing and configuring Elasticsearch
Modify the Elasticsearch configuration file
Create the data directory and change ownership
Start Elasticsearch
Verify that Elasticsearch started successfully
Interacting with Elasticsearch
Two ways to interact
Install the head plugin to view indices and shards
Elasticsearch cluster
Configure the second Elasticsearch node
Start Elasticsearch
Join node1 and node2 into a cluster
Check the Elasticsearch logs
Check cluster node status
Elasticsearch monitoring: the kopf plugin
Install the kopf plugin
bigdesk can also monitor ES
Getting started with Logstash
Logstash configuration files
input plugins
file input
output plugins
filter plugins
Practice 1: collect /var/log/messages
Collecting Java logs
codec plugins
Collecting nginx access logs with Logstash
Errors
Collecting system syslog
syslog stdout test configuration
Start Logstash, then watch stdout
Modify the rsyslog configuration
Restart rsyslog
Check stdout again
Move the tested configuration into the main config file
Write test data and check whether it lands in Elasticsearch
View the results in Elasticsearch
Monitoring TCP logs with Logstash
filter plugins
Using grok
Collecting MySQL slow query logs
Logstash architecture design
Adding Redis to the architecture
Install Redis
Logstash stdout
Read the data back out of Redis
Start Logstash
Verify the data is stored in Elasticsearch
Write the whole pipeline to Redis, then read from Redis into Elasticsearch
Kibana introduction
Download Kibana 4
Start Kibana
Add the nginx-log index to Kibana
Add the system-syslog index to Kibana
Searching in Kibana
Visualizations
markdown
Taking ELK to production

 

 

 

ELK stack

 

Elasticsearch is built on top of Lucene.

 

(Elasticsearch concepts: to be filled in.)

 

 

ELK environment preparation

 

This environment uses two machines:

192.168.29.139 elk-node2

192.168.29.140 elk-node1

 

Server environment

 

[root@elk-node1 ~]# cat /etc/redhat-release

CentOS release 6.4 (Final)

[root@elk-node1 ~]# uname -r

2.6.32-358.el6.x86_64

[root@elk-node1 ~]# uname -m

x86_64

[root@elk-node1 ~]# uname -a

Linux elk-node1 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 00:31:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux

The firewall and SELinux are both disabled:

[root@elk-node1 ~]# getenforce

Disabled

[root@elk-node1 ~]# /etc/init.d/iptables status

iptables: Firewall is not running.

 

Modify the hosts file on both servers

 

[root@elk-node2 ~]# cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.29.140 elk-node1

192.168.29.139 elk-node2

If you skip this step, Elasticsearch may fail to start later.

 

Download and install the public signing key

 

rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

 

Install the JDK (version 1.8 or higher)

 

tar xf jdk-8u101-linux-x64.tar.gz

mv jdk1.8.0_101/ /usr/local/

cd /usr/local/

ln -sv jdk1.8.0_101/ jdk

cat >> /etc/profile.d/java.sh <<'EOF'
JAVA_HOME=/usr/local/jdk
JAVA_BIN=/usr/local/jdk/bin
JRE_HOME=/usr/local/jdk/jre
PATH=/usr/local/jdk/bin:/usr/local/jdk/jre/bin:$PATH
CLASSPATH=/usr/local/jdk/jre/lib:/usr/local/jdk/lib:/usr/local/jdk/jre/lib/charsets.jar
export JAVA_HOME JAVA_BIN JRE_HOME PATH CLASSPATH
EOF

source /etc/profile.d/java.sh

java -version

 

Elasticsearch installation

 

 

Add the yum repository

 

See the installation docs: https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-repositories.html

vi /etc/yum.repos.d/elasticsearch.repo

or append it directly with cat:

cat >>/etc/yum.repos.d/elasticsearch.repo <<EOF

[elasticsearch-2.x]

name=Elasticsearch repository for 2.x packages

baseurl=http://packages.elastic.co/elasticsearch/2.x/centos

gpgcheck=1

gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch

enabled=1

EOF

 

Install Elasticsearch

 

yum install -y elasticsearch

 

Logstash installation

 

 

Add the yum repository

 

cat >>/etc/yum.repos.d/logstash.repo <<EOF

[logstash-2.1]

name=Logstash repository for 2.1.x packages

baseurl=http://packages.elastic.co/logstash/2.1/centos

gpgcheck=1

gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch

enabled=1

EOF

 

Install Logstash

 

yum install -y logstash

 

Kibana installation

 

 

Add the yum repository

 

cat >>/etc/yum.repos.d/kibana.repo <<EOF

[kibana-4.5]

name=Kibana repository for 4.5.x packages

baseurl=http://packages.elastic.co/kibana/4.5/centos

gpgcheck=1

gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch

enabled=1

EOF

 

Install Kibana

 

yum install -y kibana

 

You can also download kibana-4.5.4-1.x86_64.rpm and install it manually, or build from source.

Elasticsearch, Logstash, and Kibana are now all installed.

Next, edit each configuration file and start the ELK stack.

 

Managing and configuring Elasticsearch

 

 

Modify the Elasticsearch configuration file

 

[root@elk-node2 ~]# grep -n '^[a-z]' /etc/elasticsearch/elasticsearch.yml
17:cluster.name: dongbo_elk
23:node.name: elk-node2
33:path.data: /data/elk
37:path.logs: /var/log/elasticsearch/
43:bootstrap.mlockall: true
54:network.host: 0.0.0.0
58:http.port: 9200

 

Create the data directory and change ownership

 

[root@elk-node2 ~]# id elasticsearch

uid=498(elasticsearch) gid=499(elasticsearch) groups=499(elasticsearch)

[root@elk-node2 ~]# mkdir -p /data/elk

[root@elk-node2 ~]# chown -R elasticsearch.elasticsearch /data/elk/

 

Start Elasticsearch

 

[root@elk-node2 ~]# /etc/init.d/elasticsearch start

If it complains about JAVA_HOME, set it near the top of the init script:

[root@baidu_elk_30 tools]# /etc/init.d/elasticsearch start

which: no java in (/sbin:/usr/sbin:/bin:/usr/bin)

Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME

[root@baidu_elk_30 tools]# vi /etc/init.d/elasticsearch

[root@baidu_elk_30 tools]# head -3 /etc/init.d/elasticsearch

#!/bin/sh

#

JAVA_HOME=/usr/local/jdk

[root@baidu_elk_30 tools]# /etc/init.d/elasticsearch start

Starting elasticsearch: [ OK ]

 

Verify that Elasticsearch started successfully

 

[root@elk-node2 local]# netstat -ntulp|grep java

tcp 0 0 :::9200 :::* LISTEN 2058/java

tcp 0 0 :::9300 :::* LISTEN 2058/java

If it fails to start, the virtual machine may not have enough memory; Elasticsearch needs at least 256 MB:

[root@elk-node2 ~]# /etc/init.d/elasticsearch start

Starting elasticsearch: Can't start up: not enough memory [FAILED]

Increase the VM memory to 1 GB. Also check the Java version; a version that is too old will cause startup failure as well.

[root@elk-node2 ~]# /etc/init.d/elasticsearch start

Starting elasticsearch: [ OK ]

Now visit http://192.168.29.139:9200/ in a browser.
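If memory is tight, the JVM heap for the RPM install can also be pinned explicitly; a minimal sketch, assuming the stock 2.x sysconfig file (adjust the value to your RAM):

vi /etc/sysconfig/elasticsearch
ES_HEAP_SIZE=512m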

 

Interacting with Elasticsearch

 

 

Two ways to interact

 

Java API:
  node client
  Transport client
RESTful API:
  JavaScript
  .NET
  PHP
  Perl
  Python
  Ruby

 

[root@elk-node2 ~]# curl -i -XGET 'http://192.168.29.139:9200/_count?pretty' -d '{
"query": {
  "match_all": {}
}
}'
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 95

{
  "count" : 0,
  "_shards" : {
    "total" : 0,
    "successful" : 0,
    "failed" : 0
  }
}

Querying this way is tedious; you can install a plugin and manage Elasticsearch through a web UI instead:

https://www.elastic.co/guide/en/marvel/current/introduction.html

Marvel (above) requires Kibana.
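For reference, installing Marvel on ES 2.x was two plugin commands of this form (a sketch; verify against the docs for your exact version):

/usr/share/elasticsearch/bin/plugin install license
/usr/share/elasticsearch/bin/plugin install marvel-agent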

To remove Marvel:

[root@elk-node2 plugins]# /usr/share/elasticsearch/bin/plugin remove marvel-agent

 

Install the head plugin to view indices and shards

 

[root@elk-node2 ~]# /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head

After installation, open the web UI:

http://192.168.29.139:9200/_plugin/head/

 

Using the compound-query tab, POST a test document to index-demo/test with this body:

{
  "user": "dongbos",
  "mesg": "hello"
}

Result:

{
  "_index": "index-demo",
  "_type": "test",
  "_id": "AVaX49D0Yf2GPoBFU2cz",
  "_version": 1,
  "_shards": {
    "total": 2,
    "successful": 1,
    "failed": 0
  },
  "created": true
}

Then retrieve the document just inserted: click Basic Query to see that two documents exist, and click Search to display them.
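The same document can also be fetched without the UI, using the _id returned above:

curl -XGET 'http://192.168.29.139:9200/index-demo/test/AVaX49D0Yf2GPoBFU2cz?pretty'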

 

Elasticsearch cluster

 

 

Configure the second Elasticsearch node

 

Installation is the same as on the first node; change its configuration file as follows:

[root@elk-node1 ~]# grep '^[a-z]' /etc/elasticsearch/elasticsearch.yml
cluster.name: dongbo_elk    # must be identical on every node to form one cluster
node.name: elk-node1
path.data: /data/elk
path.logs: /var/log/elasticsearch/
bootstrap.mlockall: true
network.host: 0.0.0.0
http.port: 9200

 

Start Elasticsearch

 

 

Join node1 and node2 into a cluster

 

In the head plugin's connection field, enter http://192.168.29.140:9200/ and click Connect; the other node is added to the view.

 

Check the Elasticsearch logs

 

After a while the cluster still had not recognized the other node, so check the logs:

[root@elk-node1 ~]# vi /var/log/elasticsearch/dongbo_elk.log
(the directory also holds dongbo_elk_deprecation.log, the index slowlogs, and dated rotations such as dongbo_elk.log.2016-08-16)
[2016-08-17 10:48:31,356][INFO ][node ] [elk-node1] stopping ...
[2016-08-17 10:48:31,382][INFO ][node ] [elk-node1] stopped
[2016-08-17 10:48:31,382][INFO ][node ] [elk-node1] closing ...
[2016-08-17 10:48:31,404][INFO ][node ] [elk-node1] closed
[2016-08-17 10:48:32,592][WARN ][bootstrap ] unable to install syscall filter: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in
[2016-08-17 10:48:32,597][WARN ][bootstrap ] Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
[2016-08-17 10:48:32,597][WARN ][bootstrap ] This can result in part of the JVM being swapped out.
[2016-08-17 10:48:32,600][WARN ][bootstrap ] Increase RLIMIT_MEMLOCK, soft limit: 65536, hard limit: 65536
[2016-08-17 10:48:32,600][WARN ][bootstrap ] These can be adjusted by modifying /etc/security/limits.conf, for example:
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

Fix (on both servers): append at the end of /etc/security/limits.conf:

vim /etc/security/limits.conf

# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

Verify:

[root@elk-node2 src]# tail -3 /etc/security/limits.conf
# allow user 'elasticsearch' mlockall
elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

Also raise the open files limit; the default of 1024 is low:

[root@elk-node2 ~]# ulimit -a|grep open
open files (-n) 1024
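To raise it for the elasticsearch user, append to /etc/security/limits.conf as well (65536 is a commonly used value; re-login or restart the service afterwards):

elasticsearch soft nofile 65536
elasticsearch hard nofile 65536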

Switch Elasticsearch from multicast discovery (the default) to unicast:

vim /etc/elasticsearch/elasticsearch.yml
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["192.168.29.140", "192.168.29.139"]

Note: some networks do not support multicast, or multicast simply fails to discover the other nodes; in that case use unicast. Unicast is recommended in general, since multicast discovery is slow.

Restart Elasticsearch:

[root@elk-node2 src]# /etc/init.d/elasticsearch restart

 

Check cluster node status

 

In the head UI, the node drawn with a bold border and a star is the master node; the thin-bordered nodes are secondary.
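The master can also be identified from the command line; in the cat API the elected master is marked with an asterisk:

curl 'http://192.168.29.139:9200/_cat/nodes?v'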

 

Elasticsearch monitoring: the kopf plugin

 

https://github.com/lmenezes/elasticsearch-kopf

 

Install the kopf plugin

 

[root@elk-node2 head]# /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf

Then browse to:

http://192.168.29.139:9200/_plugin/kopf

 

bigdesk can also monitor ES

 

https://github.com/hlstudio/bigdesk

 

bigdesk cannot be used with the 2.3.5 version here; it does not support ES 2.1+.

Also note that on CentOS 6.4 the default multicast discovery fails to find the other nodes; switch to unicast.

 

When the first node starts, it discovers other nodes via multicast; any node with the same cluster name joins the cluster automatically. You can connect to any node, not only the master; the node you connect to merely aggregates and displays cluster information.

The number of shards can be chosen when an index is created, but once set it cannot be changed. If a primary shard and all of its replicas are lost, that data is gone and cannot be recovered; useless indices can simply be deleted. Old or rarely used indices should be removed periodically, otherwise ES resources run low: they eat disk space and slow down searches. If you are not ready to delete an index, close it in the plugin UI and it will stop consuming memory.
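Deleting an index is a single API call (irreversible, so double-check the name; the index name here is just a placeholder):

curl -XDELETE 'http://192.168.29.139:9200/some-old-index'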

 

Getting started with Logstash

 

Logstash installed via yum lives under /opt/logstash; confirm with find:

[root@elk-node1 tools]# find / -type d -name "logstash"
/opt/logstash

 

Start a Logstash instance. -e runs a configuration given on the command line; stdin{} is an input plugin reading standard input, and stdout{} is an output plugin writing to standard output:

[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }'

Wait quietly for a moment until these two lines appear:

Settings: Default filter workers: 1
Logstash startup completed
hello world            # type: hello world
2016-08-17T09:38:27.155Z elk-node1 hello world        # echoed on stdout

 

Use the rubydebug codec for detailed output (a codec is an encoder/decoder):

[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug } }'
Settings: Default filter workers: 1
Logstash startup completed
hello world
{
  "message" => "hello world",
  "@version" => "1",
  "@timestamp" => "2016-08-17T09:59:06.768Z",
  "host" => "elk-node1"
}

 

[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin {} } output { elasticsearch { hosts => ["192.168.29.140:9200"] } }'
Settings: Default filter workers: 1
Logstash startup completed
wangluozhongxin
hahah                # lines typed in
dongbo
chenxiaoyan

Check whether this data arrived in Elasticsearch.
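One quick check: with no index configured, the elasticsearch output writes to daily logstash-* indices by default, so a wildcard search should show the typed lines (the query term is just an example):

curl 'http://192.168.29.140:9200/logstash-*/_search?q=dongbo&pretty'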

Writing one copy to Elasticsearch while emitting a second copy locally keeps a plain-text file on disk, so there is no need to schedule separate backups out of Elasticsearch. Keeping a text copy has three big advantages: 1) text is the simplest format; 2) text is easy to post-process; 3) text compresses extremely well.

[root@elk-node1 ~]# /opt/logstash/bin/logstash -e 'input { stdin {} } output { elasticsearch { hosts => ["192.168.29.140:9200"] } stdout { codec => rubydebug } }'
Settings: Default filter workers: 1
Logstash startup completed
shanghai
{
  "message" => "shanghai",
  "@version" => "1",
  "@timestamp" => "2016-08-17T10:38:06.332Z",
  "host" => "elk-node1"
}

 

The command-line form above is good for quick tests; in production, write the same thing into a configuration file:

[root@elk-node1 ~]# vi /etc/logstash/conf.d/oneday-logstash.conf
input { stdin { } }
output {
  elasticsearch { hosts => ["192.168.29.140:9200"] }
  stdout { codec => rubydebug }
}

Start it:

[root@elk-node1 ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/oneday-logstash.conf

 

 

Logstash configuration files

 

 

input plugins

 

 

file input

 

https://www.elastic.co/guide/en/logstash/current/plugins-inputs-file.html

Required format:

 

file {
  path => ...
}

 

Optional parameters (a few deserve attention):

start_position
By default collection starts after the last line of the file (like tail -f); to also pick up lines that already exist, set this parameter. Valid values are ["beginning", "end"].

 

 

output plugins

 

 

 

filter plugins

 

 

 

Practice 1: collect /var/log/messages

 

Prerequisite: logging is enabled and the log is being written:

/etc/init.d/rsyslog start

chkconfig rsyslog on

[root@elk-node1 ~]# vi /etc/logstash/conf.d/file.conf
[root@elk-node1 ~]# cat /etc/logstash/conf.d/file.conf
input {
  file {
    path => "/var/log/messages"
    type => "system"
    start_position => "beginning"
  }
}

output {
  elasticsearch {
    hosts => ["192.168.29.139:9200"]
    index => "system-%{+YYYY.MM.dd}"
  }
}

[root@elk-node1 ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/file.conf

You can see that a system-2016.08.17 index has been created; view the log data in the head UI.

For high-volume logs, create one index per day; for a source with little daily traffic, one index per month is enough.
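A monthly index is just a shorter date pattern in the output, for example:

index => "system-%{+YYYY.MM}"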

To a file a log entry is a line, but to Logstash it is an event; several lines can be merged into a single event.

 

 

Collecting Java logs

 

There is no Tomcat or similar here, but Elasticsearch itself runs on Java, so we can simply collect its own log: /var/log/elasticsearch/dongbo_elk.log.

When several file inputs feed the same configuration, their events would otherwise all land in the same index. Give each input a type and use if conditions in the output section to route each log to its own index:

[root@elk-node1 ~]# vi /etc/logstash/conf.d/file.conf
[root@elk-node1 ~]# cat /etc/logstash/conf.d/file.conf
input {
  file {
    path => "/var/log/messages*"
    type => "system"
    start_position => "beginning"
  }
  file {
    path => "/var/log/elasticsearch/dongbo_elk.log"
    type => "es-error"
    start_position => "beginning"
  }
}

output {
  if [type] == "system" {
    elasticsearch {
      hosts => ["192.168.29.139:9200"]
      index => "system-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "es-error" {
    elasticsearch {
      hosts => ["192.168.29.139:9200"]
      index => "es-error-%{+YYYY.MM.dd}"
    }
  }
}

Start Logstash:

[root@elk-node1 ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/file.conf

Check the newly created indices, then inspect the data in each.

 

 

One problem: a Java error typically spans many consecutive lines. Read in a file, the whole stack trace appears together, and developers can locate the fault quickly; collected line by line into Elasticsearch, the trace gets chopped apart. We want each error to become a single event so the whole problem can be seen at once.

 

codec plugins

 

Format:

 

input {
  stdin {
    codec => multiline {
      pattern => "pattern, a regexp"
      negate => "true" or "false"
      what => "previous" or "next"
    }
  }
}

 

Test the multiline codec:

[root@elk-node1 log]# cat /etc/logstash/conf.d/codec.conf
input {
  stdin {
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
}
output {
  stdout {
    codec => "rubydebug"
  }
}

Start Logstash and watch the output:

[root@elk-node1 ~]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/codec.conf

[1]        # lines like this are typed by hand

[2]

{
  "@timestamp" => "2016-08-17T14:32:59.278Z",
  "message" => "[1]",
  "@version" => "1",
  "host" => "elk-node1"
}
[3dasjdljsf
{
  "@timestamp" => "2016-08-17T14:33:06.678Z",
  "message" => "[2]",
  "@version" => "1",
  "host" => "elk-node1"
}
sdlfjaldjfa
sdlkfjasdf
sdjlajfl
sdjlfkajdf
sdlfjal
[4]
{
  "@timestamp" => "2016-08-17T14:33:15.356Z",
  "message" => "[3dasjdljsf\nsdlfjaldjfa\nsdlkfjasdf\nsdjlajfl\nsdjlfkajdf\nsdlfjal",
  "@version" => "1",
  "tags" => [
    [0] "multiline"
  ],
  "host" => "elk-node1"
}

Update file.conf accordingly:

[root@elk-node1 log]# cat /etc/logstash/conf.d/file.conf
input {
  file {
    path => "/var/log/messages*"
    type => "system"
    start_position => "beginning"
  }
  file {
    path => "/var/log/elasticsearch/dongbo_elk.log"
    type => "es-error"
    start_position => "beginning"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
}
output {
  if [type] == "system" {
    elasticsearch {
      hosts => ["192.168.29.139:9200"]
      index => "system-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "es-error" {
    elasticsearch {
      hosts => ["192.168.29.139:9200"]
      index => "es-error-%{+YYYY.MM.dd}"
    }
  }
}

Viewing logs directly in Elasticsearch is inconvenient; later we will view them through Kibana instead.

 

Collecting nginx access logs with Logstash

 

Install nginx, then add a JSON log format to its configuration file:

log_format json '{"@timestamp":"$time_iso8601",'
    '"@version":"1",'
    '"url":"$uri",'
    '"status":"$status",'
    '"domain":"$host",'
    '"size":"$body_bytes_sent",'
    '"responsetime":"$request_time",'
    '"ua":"$http_user_agent"'
    '}';

server {
    access_log    logs/access_json.log json;

First write a test configuration to see whether the log lines print:

[root@elk-node1 conf.d]# cat json.conf
input {
  file {
    path => "/application/nginx/logs/access_json.log"
    codec => "json"
  }
}
output {
  stdout {
    codec => "rubydebug"
  }
}

Start it and refresh the web page; the log entries print correctly.

Then merge it into the main configuration file:

[root@elk-node1 conf.d]# cat file.conf
input {
  file {
    path => "/var/log/messages*"
    type => "system"
    start_position => "beginning"
  }
  file {
    path => "/application/nginx/logs/access_json.log"
    codec => json
    type => "nginx-log"
    start_position => "beginning"
  }
  file {
    path => "/var/log/elasticsearch/dongbo_elk.log"
    type => "es-error"
    start_position => "beginning"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
}
output {
  if [type] == "system" {
    elasticsearch {
      hosts => ["192.168.29.139:9200"]
      index => "system-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "es-error" {
    elasticsearch {
      hosts => ["192.168.29.139:9200"]
      index => "es-error-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "nginx-log" {
    elasticsearch {
      hosts => ["192.168.29.139:9200"]
      index => "nginx-log-%{+YYYY.MM.dd}"
    }
  }
}

 

Errors

 

[root@elk-node1 conf.d]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/file.conf
Unknown setting 'pattern' for file {:level=>:error}
Unknown setting 'negate' for file {:level=>:error}
Unknown setting 'what' for file {:level=>:error}
Error: Something is wrong with your configuration.
You may be interested in the '--configtest' flag which you can
use to validate logstash's configuration before you choose
to restart a running system.

For errors like this, check the configuration file, especially for missing or unbalanced braces. Here, pattern/negate/what had been set directly on the file input instead of inside its codec => multiline { } block.
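The --configtest flag it mentions catches this class of mistake before startup:

/opt/logstash/bin/logstash -f /etc/logstash/conf.d/file.conf --configtest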

 

Collecting system syslog

 

When writing a collection config, it is good practice to test against stdout first, then move the tested snippet into the formal configuration file.

 

syslog stdout test configuration

 

[root@elk-node1 ~]# cd /etc/logstash/conf.d/
[root@elk-node1 conf.d]# vi syslog.conf
[root@elk-node1 conf.d]# cat syslog.conf
input {
  syslog {
    type => "system-syslog"
    host => "192.168.29.140"
    port => "514"
  }
}
output {
  stdout {
    codec => "rubydebug"
  }
}

 

Start Logstash, then watch stdout

 

[root@elk-node1 conf.d]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/syslog.conf

Check that the port is listening.
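For example (the syslog input listens on both TCP and UDP):

[root@elk-node1 ~]# netstat -ntulp | grep 514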

 

Modify the rsyslog configuration

 

[root@elk-node1 conf.d]# vi /etc/rsyslog.conf

*.* @@192.168.29.140:514

 

Restart rsyslog
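On CentOS 6 this is the init script:

[root@elk-node1 conf.d]# /etc/init.d/rsyslog restart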

 

 

Check stdout again; the syslog events should now appear.

 

 

Move the tested configuration into the main config file

 

[root@elk-node1 conf.d]# vi file.conf
[root@elk-node1 conf.d]# cat file.conf
input {
  syslog {
    type => "system-syslog"
    host => "192.168.29.140"
    port => "514"
  }
  file {
    path => "/var/log/messages*"
    type => "system"
    start_position => "beginning"
  }
  file {
    path => "/application/nginx/logs/access_json.log"
    codec => json
    type => "nginx-log"
    start_position => "beginning"
  }
  file {
    path => "/var/log/elasticsearch/dongbo_elk.log"
    type => "es-error"
    start_position => "beginning"
    codec => multiline {
      pattern => "^\["
      negate => true
      what => "previous"
    }
  }
}

output {
  if [type] == "system" {
    elasticsearch {
      hosts => ["192.168.29.139:9200"]
      index => "system-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "es-error" {
    elasticsearch {
      hosts => ["192.168.29.139:9200"]
      index => "es-error-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "nginx-log" {
    elasticsearch {
      hosts => ["192.168.29.139:9200"]
      index => "nginx-log-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "system-syslog" {
    elasticsearch {
      hosts => ["192.168.29.139:9200"]
      index => "system-syslog-%{+YYYY.MM.dd}"
    }
  }
}

 

Write test data and check whether it lands in Elasticsearch

 

[root@elk-node1 conf.d]# logger "hehe1"
[root@elk-node1 conf.d]# logger "hehe2"
[root@elk-node1 conf.d]# logger "hehe3"
[root@elk-node1 conf.d]# logger "hehe4"
[root@elk-node1 conf.d]# logger "hehe5"
[root@elk-node1 conf.d]# logger "hehe6"
[root@elk-node1 conf.d]# logger "hehe7"
[root@elk-node1 conf.d]# logger "hehe8"

 

View the results in Elasticsearch
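Besides the head UI, a count query works; the index name below assumes the date the test was run:

curl 'http://192.168.29.139:9200/system-syslog-2016.08.17/_count?pretty'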

 

 

 

Monitoring TCP logs with Logstash

 

[root@elk-node1 conf.d]# vi tcp.conf
[root@elk-node1 conf.d]# cat tcp.conf
input {
  tcp {
    host => "192.168.29.140"
    port => "6666"
  }
}
output {
  stdout {
    codec => "rubydebug"
  }
}

[root@elk-node1 conf.d]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/tcp.conf

 

Use nc to send a file to port 6666 on 192.168.29.140:

[root@elk-node1 ~]# yum install nc -y
[root@elk-node1 ~]# nc 192.168.29.140 6666 < /etc/hosts
[root@elk-node1 ~]# echo "haha" | nc 192.168.29.140 6666

Or write to bash's TCP pseudo-device:

[root@elk-node1 ~]# echo "dongbo" > /dev/tcp/192.168.29.140/6666

 

filter plugins

 

https://www.elastic.co/guide/en/logstash/current/filter-plugins.html

Click grok:

https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html

Logstash ships with about 120 patterns by default. You can find them here: https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns. You can add your own trivially. (See the patterns_dir setting)

 

Using grok

 

[root@elk-node1 /]# cd /etc/logstash/conf.d/
[root@elk-node1 conf.d]# vi grok.conf
[root@elk-node1 conf.d]# cat grok.conf
input {
  stdin {}
}
filter {
  grok {
    match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
  }
}
output {
  stdout {
    codec => "rubydebug"
  }
}

[root@elk-node1 conf.d]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/grok.conf
Settings: Default filter workers: 1
Logstash startup completed
55.3.244.1 GET /index.html 15824 0.043        # type this line and watch the output
{
  "message" => "55.3.244.1 GET /index.html 15824 0.043",
  "@version" => "1",
  "@timestamp" => "2016-08-20T18:22:01.319Z",
  "host" => "elk-node1",
  "client" => "55.3.244.1",
  "method" => "GET",
  "request" => "/index.html",
  "bytes" => "15824",
  "duration" => "0.043"
}

These matching rules ship with Logstash; they are defined under:

/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-2.0.2/patterns/

[root@elk-node1 patterns]# ls -l
total 96
-rw-r--r-- 1 logstash logstash 1197 Feb 17 2016 aws
-rw-r--r-- 1 logstash logstash 4831 Feb 17 2016 bacula
-rw-r--r-- 1 logstash logstash 2154 Feb 17 2016 bro
-rw-r--r-- 1 logstash logstash 879 Feb 17 2016 exim
-rw-r--r-- 1 logstash logstash 9544 Feb 17 2016 firewalls
-rw-r--r-- 1 logstash logstash 6007 Feb 17 2016 grok-patterns
-rw-r--r-- 1 logstash logstash 3251 Feb 17 2016 haproxy
-rw-r--r-- 1 logstash logstash 1339 Feb 17 2016 java
-rw-r--r-- 1 logstash logstash 1087 Feb 17 2016 junos
-rw-r--r-- 1 logstash logstash 1037 Feb 17 2016 linux-syslog
-rw-r--r-- 1 logstash logstash 49 Feb 17 2016 mcollective
-rw-r--r-- 1 logstash logstash 190 Feb 17 2016 mcollective-patterns
-rw-r--r-- 1 logstash logstash 614 Feb 17 2016 mongodb
-rw-r--r-- 1 logstash logstash 9597 Feb 17 2016 nagios
-rw-r--r-- 1 logstash logstash 142 Feb 17 2016 postgresql
-rw-r--r-- 1 logstash logstash 845 Feb 17 2016 rails
-rw-r--r-- 1 logstash logstash 104 Feb 17 2016 redis
-rw-r--r-- 1 logstash logstash 188 Feb 17 2016 ruby

 

Collecting MySQL slow query logs

 

 

 

 

 

 

grok is very memory-hungry, so preprocess the logs with a script or another tool before collecting them. (A full slow-log example appears in the appendix at the end of this post.)

 

 

 

 

 

 

 

 

 

Logstash architecture design
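In outline, the design the following sections build up: shipper Logstash instances on each application server push events into Redis, which acts as a broker; an indexer Logstash pulls from Redis and writes to Elasticsearch; Kibana reads from Elasticsearch.

shipper (logstash) -> redis (broker) -> indexer (logstash) -> elasticsearch -> kibana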

 

 

 

 

Adding Redis to the architecture

 

 

Install Redis

 

[root@elk-node1 ~]# yum install redis -y

or build from source:

yum -y install gcc gcc-c++ libstdc++-devel

cd /home/dongbo/tools/

tar xf redis-3.2.3.tar.gz

cd redis-3.2.3

make MALLOC=jemalloc

make PREFIX=/application/redis-3.2.3 install

ln -sv /application/redis-3.2.3/ /application/redis

echo 'export PATH=/application/redis/bin/:$PATH' >>/etc/profile

. /etc/profile

mkdir /application/redis/conf

cp redis.conf /application/redis/conf/

vi /application/redis/conf/redis.conf

[root@elk-node1 ~]# grep '^[a-z]' /application/redis/conf/redis.conf
bind 192.168.29.140
protected-mode yes

grep -Ev "^$|#|;" /application/redis/conf/redis.conf

echo "vm.overcommit_memory=1" >>/etc/sysctl.conf

echo 511 > /proc/sys/net/core/somaxconn

sysctl -p

/application/redis/bin/redis-server /application/redis/conf/redis.conf

ps aux|grep redis

 

[root@elk-node1 conf.d]# redis-cli

Could not connect to Redis at 127.0.0.1:6379: Connection refused

Could not connect to Redis at 127.0.0.1:6379: Connection refused

not connected> exit

[root@elk-node1 conf.d]# redis-cli -h 192.168.29.140

192.168.29.140:6379> exit

 

[root@elk-node1 ~]# cd /etc/logstash/conf.d/
[root@elk-node1 conf.d]# cat redis.conf
input {
  stdin {}
}
output {
  redis {
    host => "192.168.29.140"
    port => "6379"
    db => "10"
    data_type => "list"
    key => "demo"
  }
}

 

 

Logstash stdout

 

192.168.29.140:6379> info
# Keyspace
db10:keys=1,expires=0,avg_ttl=0
192.168.29.140:6379> select 10
OK
192.168.29.140:6379[10]> keys *
1) "demo"

View the last element:

192.168.29.140:6379[10]> LINDEX demo -1
"{\"message\":\"heke\",\"@version\":\"1\",\"@timestamp\":\"2016-08-21T05:25:35.752Z\",\"host\":\"elk-node1\"}"

Type some more entries, then check the length of the list:

192.168.29.140:6379[10]> LLEN demo
(integer) 77

 

Read the data back out of Redis

 

[root@elk-node1 conf.d]# cat /etc/logstash/conf.d/redis_in.conf
input {
  redis {
    host => "192.168.29.140"
    port => "6379"
    db => "10"
    data_type => "list"
    key => "demo"
  }
}

output {
  elasticsearch {
    hosts => ["192.168.29.139:9200"]
    index => "redis_demo-%{+YYYY.MM.dd}"
  }
}

 

Start Logstash

 

[root@elk-node1 conf.d]# /opt/logstash/bin/logstash -f /etc/logstash/conf.d/redis_in.conf

Settings: Default filter workers: 1

Logstash startup completed

Check Redis; the data is drained immediately:

[root@elk-node1 ~]# redis-cli -h 192.168.29.140

192.168.29.140:6379> LLEN demo

(integer) 0

 

Verify the data is stored in Elasticsearch

 

 

 

Write the whole pipeline to Redis, then read from Redis into Elasticsearch
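A minimal sketch of the two configs, reusing the inputs and index names from file.conf above; the Redis key name all-logs is a placeholder:

# shipper.conf: all inputs -> redis
input {
  syslog { type => "system-syslog"  host => "192.168.29.140"  port => "514" }
  file { path => "/var/log/messages*"  type => "system"  start_position => "beginning" }
  file { path => "/application/nginx/logs/access_json.log"  codec => json  type => "nginx-log"  start_position => "beginning" }
}
output {
  redis {
    host => "192.168.29.140"
    port => "6379"
    db => "10"
    data_type => "list"
    key => "all-logs"
  }
}

# indexer.conf: redis -> elasticsearch, routed by type
input {
  redis { host => "192.168.29.140"  port => "6379"  db => "10"  data_type => "list"  key => "all-logs" }
}
output {
  if [type] == "system" {
    elasticsearch { hosts => ["192.168.29.139:9200"]  index => "system-%{+YYYY.MM.dd}" }
  }
  if [type] == "nginx-log" {
    elasticsearch { hosts => ["192.168.29.139:9200"]  index => "nginx-log-%{+YYYY.MM.dd}" }
  }
  if [type] == "system-syslog" {
    elasticsearch { hosts => ["192.168.29.139:9200"]  index => "system-syslog-%{+YYYY.MM.dd}" }
  }
}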

 

 

 

 

 

 

 

 

 

 

 

 

 

 

 

Kibana introduction

 

 

Download Kibana 4

 

[root@elk-node1 tools]# wget https://download.elastic.co/kibana/kibana/kibana-4.5.4-linux-x64.tar.gz

[root@elk-node1 tools]# tar xf kibana-4.5.4-linux-x64.tar.gz

[root@elk-node1 tools]# mv kibana-4.5.4-linux-x64 /usr/local/

[root@elk-node1 tools]# ln -sv /usr/local/kibana-4.5.4-linux-x64/ /usr/local/kibana

[root@elk-node1 tools]# cd /usr/local/kibana/config/

[root@elk-node1 config]# vi kibana.yml     # edit the kibana configuration
[root@elk-node1 config]# grep -i '^[a-z]' kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.29.139:9200"
kibana.index: ".kibana"

 

Start Kibana

 

[root@elk-node1 kibana]# /usr/local/kibana/bin/kibana
log [09:26:56.988] [info][status][plugin:kibana] Status changed from uninitialized to green - Ready
log [09:26:57.040] [info][status][plugin:elasticsearch] Status changed from uninitialized to yellow - Waiting for Elasticsearch
log [09:26:57.083] [info][status][plugin:kbn_vislib_vis_types] Status changed from uninitialized to green - Ready
log [09:26:57.099] [info][status][plugin:markdown_vis] Status changed from uninitialized to green - Ready
log [09:26:57.118] [info][status][plugin:metric_vis] Status changed from uninitialized to green - Ready
log [09:26:57.131] [info][status][plugin:spyModes] Status changed from uninitialized to green - Ready
log [09:26:57.142] [info][status][plugin:elasticsearch] Status changed from yellow to green - Kibana index ready
log [09:26:57.145] [info][status][plugin:statusPage] Status changed from uninitialized to green - Ready
log [09:26:57.159] [info][status][plugin:table_vis] Status changed from uninitialized to green - Ready
log [09:26:57.169] [info][listening] Server running at http://0.0.0.0:5601

Visit http://192.168.29.140:5601/ in a browser.

 

 

Click Discover. If no logs appear, it is probably the time range; after changing the range to the last week, the logs showed up.

 

Add the nginx-log index to Kibana

 

Click Settings in the menu bar, add an index pattern (e.g. nginx-log-*), and set it as the default index.

 

 

Add the system-syslog index to Kibana

 

Search for ssh* to see when, and from which IPs, users logged in or tried to log in to the server.

 

Searching in Kibana

 

 

 

 

 

 

 

Visualizations

 

 

markdown

 

## On-call ops staff

* 董波 1525555555
* 老板 1526666666

# Quick links

http://www.baidu.com

Then click Save.

 

 

 

 

 

Taking ELK to production

 

1. Log classification

System logs      rsyslog       logstash syslog plugin
Access logs      nginx         logstash codec json
Error logs       file          logstash file + multiline
Runtime logs     file          logstash codec json
Device logs      syslog        logstash syslog plugin
Debug logs       file          logstash json or multiline

2. Log standardization

Paths: fixed
Format: JSON wherever possible

Roll out collection in stages: system logs first -> error logs -> runtime logs -> access logs

 

 

 

Appendix: collecting MySQL slow query logs (from another author)

Using Logstash to collect MySQL slow query logs.

Import a production MySQL slow log; a sample entry looks like this:

 

# Time: 160108 15:46:14
# User@Host: dev_select_user[dev_select_user] @ [192.168.97.86] Id: 714519
# Query_time: 1.638396 Lock_time: 0.000163 Rows_sent: 40 Rows_examined: 939155
SET timestamp=1452239174;
SELECT DATE(create_time) as day,HOUR(create_time) as h,round(avg(low_price),2) as low_price
FROM t_actual_ad_num_log WHERE create_time>='2016-01-07' and ad_num<=10
GROUP BY DATE(create_time),HOUR(create_time);

Handle it with multiline and write slow.conf:

 

[root@linux-node1 ~]# cat mysql-slow.conf
input {
  file {
    path => "/root/slow.log"
    type => "mysql-slow-log"
    start_position => "beginning"
    codec => multiline {
      pattern => "^# User@Host:"
      negate => true
      what => "previous"
    }
  }
}
filter {
  # drop sleep events
  grok {
    match => { "message" => "SELECT SLEEP" }
    add_tag => [ "sleep_drop" ]
    tag_on_failure => [] # prevent default _grokparsefailure tag on real records
  }
  if "sleep_drop" in [tags] {
    drop {}
  }
  grok {
    match => [ "message", "(?m)^# User@Host: %{USER:user}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s+Id: %{NUMBER:row_id:int}\s*# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\s*(?:use %{DATA:database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<query>(?<action>\w+)\s+.*)\n#\s*" ]
  }
  date {
    match => [ "timestamp", "UNIX" ]
    remove_field => [ "timestamp" ]
  }
}
output {
  stdout {
    codec => "rubydebug"
  }
}

 

 

 

 

 

Troubleshooting selected errors:

[root@elk-node2 ~]# /etc/init.d/elasticsearch start

which: no java in (/sbin:/usr/sbin:/bin:/usr/bin)

Could not find any executable java binary. Please install java in your PATH or set JAVA_HOME

Fix: Java is not installed.

yum install java -y resolves it.

 

Error:

[root@elk-node2 ~]# /etc/init.d/elasticsearch start

Starting elasticsearch: Can't start up: not enough memory [FAILED]

Fix: check the Java version; here it is far below 1.8:

[root@elk-node2 etc]# java -version
java version "1.5.0"
gij (GNU libgcj) version 4.4.7 20120313 (Red Hat 4.4.7-17)

 

tar xf jdk-8u101-linux-x64.tar.gz

mv jdk1.8.0_101/ /usr/local/

cd /usr/local/

ln -sv jdk1.8.0_101/ jdk

vi /etc/profile.d/java.sh
JAVA_HOME=/usr/local/jdk
JAVA_BIN=/usr/local/jdk/bin
JRE_HOME=/usr/local/jdk/jre
PATH=$PATH:/usr/local/jdk/bin:/usr/local/jdk/jre/bin
CLASSPATH=/usr/local/jdk/jre/lib:/usr/local/jdk/lib:/usr/local/jdk/jre/lib/charsets.jar
export JAVA_HOME JAVA_BIN JRE_HOME PATH CLASSPATH

 

source /etc/profile.d/java.sh

[root@elk-node2 local]# /etc/init.d/elasticsearch start

Starting elasticsearch: [ OK ]

 

 

 

 

It is recommended to download and install logstash-2.1.3-1.noarch.rpm yourself.

 

 

chkconfig --add kibana

Start kibana:

/etc/init.d/kibana start

 

Check that Kibana started properly:

[root@elk-node2 src]# ps aux|grep kibana|grep -v grep

kibana 2898 28.2 9.8 1257092 99516 pts/0 Sl 06:21 0:03 /opt/kibana/bin/../node/bin/node /opt/kibana/bin/../src/cli

[root@elk-node2 bin]# netstat -ntulp|grep 5601

tcp 0 0 0.0.0.0:5601 0.0.0.0:* LISTEN 2898/node

 

 

Monitoring a Java application's log. Create some mock data:

vi /tmp/test.log

Caller+1 at com.alibaba.dubbo.rpc.protocol.dubbo.DubboProtocol$1.reply(DubboProtocol.java:115)

{"startTime":1459095505006,"time":5592,"arguments":[{"businessLicNum":null,"optLock":null,"phone":"18511451798","overdueRate":0.02,"schoolName":null,"mobileIncome":null,"macIos":null,"sourceFrom":null,"password":null,"employedDate":null,"city":"成都","username":null,"vocation":"工程师","QQ":null,"isApplyFinish":1,"idfaIos":null,"longitude":null,"openid":null,"iosAndroidIp":null,"verifyAmount":null,"deviceId":null,"cashQuota":5000.0,"enteroriseName":null,"iostoken":null,"channelId":null,"channelCustId":null,"idcard":"420116198508233317","code":null,"iosAndroidId":null,"companyName":"lx","talkingDeviceId":null,"onlineStoreName":null,"schoolAble":null,"appversionAd":null,"businessCircle":null,"appVersion":null,"email":"lx@lx.com","inviteCode":null,"latitude":null,"rrUrl":null,"xlUrl":null,"sex":"0","sourceMark":"adr","registerDate":"2015-08-12 17:16:32","businessTime":null,"mac":null,"mainBusiness":null,"couponCodeId":null,"electricPlatform":null,"id":1764155,"bankVerify":null,"name":"lx","independentPassword":"e10adc3949ba59abbe56e057f20f883e","adrToken":null,"picCode":null,"examineAmount":5000.0,"payPassword":null,"customerType":1,"adpromoteFrom":null,"wxId":null,"prevDate":null,"isTkOn":null,"baitiao":1,"isOpenidEnable":null,"logout":0,"newDate":null,"monthIncome":null,"address":null,"regeditNum":null,"monthRate":0.0,"majorName":null,"versionIos":null,"admissionTime":null}],"monitorKey":"lx:investorController:listInvestor:4B1EC75C25D55FC0_20160328121824","service":"com.lx.business.investor.service.api.InvestorService","method":"listByCustomer"}

 

Collection path: logstash -> redis -> elasticsearch.

 

 

On the application server, write shipper.conf:
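A minimal shipper sketch for this log, assuming the app writes to /tmp/test.log as above; the type and Redis key names are placeholders:

input {
  file {
    path => "/tmp/test.log"
    type => "java-app-log"
    start_position => "beginning"
  }
}
output {
  redis {
    host => "192.168.29.140"
    port => "6379"
    db => "10"
    data_type => "list"
    key => "java-app-log"
  }
}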
