ELK

https://www.elastic.co/cn/downloads/past-releases#
https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.5.4.rpm

In plain terms, ELK is a combination of three open-source programs: Elasticsearch, Logstash, and Kibana, each of which covers a different function. ELK is also known as the ELK stack; the official domain is elastic.co. Its main strengths are:
Flexible processing: Elasticsearch provides real-time full-text indexing with powerful search capabilities
Relatively simple configuration: Elasticsearch is driven entirely through JSON APIs, Logstash uses modular configuration, and Kibana's configuration file is simpler still
Efficient retrieval: thanks to a well-designed architecture, queries run in real time yet can answer in seconds even over tens of billions of records
Linear cluster scaling: both Elasticsearch and Logstash scale out flexibly
Polished front end: Kibana's UI is attractive and easy to operate

Elasticsearch:
A highly scalable open-source full-text search and analytics engine. It provides real-time full-text search, runs distributed for high availability, exposes an HTTP API, and can handle large volumes of log data from sources such as Nginx, Tomcat, and the system logs.

Logstash
Collects and forwards logs through plugins; it supports filtering and can parse both plain logs and custom JSON-formatted logs.

Kibana
Pulls data from Elasticsearch through its API and renders it as front-end visualizations.

1. ELK is mainly used for log collection, storage, analysis, and display
2. It lets developers inspect logs without being granted login accounts on the servers

Elasticsearch

Install the Java environment first

rpm -ivh jdk-8u221-linux-x64.rpm

Install the RPM package

yum install elasticsearch-6.5.4.rpm -y
grep "^[a-zA-Z]" /etc/elasticsearch/elasticsearch.yml
cluster.name: cluster-e # cluster name; nodes with the same cluster name belong to the same cluster
node.name: e1 # this node's name within the cluster
path.data: /esdata/data # data directory
path.logs: /esdata/logs # log directory
network.host: 0.0.0.0  # listen address
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.10.100", "192.168.10.101"]
#bootstrap.memory_lock: true # lock enough memory at startup so data never hits swap; swapping hurts performance

Create the data directory and fix its ownership

mkdir /esdata
id elasticsearch
uid=998(elasticsearch) gid=996(elasticsearch) groups=996(elasticsearch)
chown 998:996 /esdata/ -R

Start the service

systemctl start elasticsearch

View the startup log

tail -f /esdata/logs/cluster-e.log

Ports

9200: HTTP port for client access
9300: transport port for intra-cluster communication and master election

Verify

curl http://192.168.10.100:9200/
{
  "name" : "e1", # this node's name
  "cluster_name" : "cluster-e", # cluster name
  "cluster_uuid" : "GSegz58CSrmLgAemNjIvpA", 
  "version" : {
    "number" : "6.5.4", # Elasticsearch version
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "d2ef93d",
    "build_date" : "2018-12-17T21:17:40.758843Z",
    "build_snapshot" : false,
    "lucene_version" : "7.5.0", # Lucene version; Elasticsearch builds its search on Lucene
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search" # slogan
}
curl http://192.168.10.101:9200/
{
  "name" : "e2",
  "cluster_name" : "cluster-e",
  "cluster_uuid" : "GSegz58CSrmLgAemNjIvpA",
  "version" : {
    "number" : "6.5.4",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "d2ef93d",
    "build_date" : "2018-12-17T21:17:40.758843Z",
    "build_snapshot" : false,
    "lucene_version" : "7.5.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
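The responses above are plain JSON (the `#` annotations are not part of them), so they can also be consumed programmatically. A minimal Python sketch over a trimmed copy of the first node's response:

```python
import json

# Trimmed copy of the "curl http://192.168.10.100:9200/" response (annotations removed)
resp = '''{"name": "e1", "cluster_name": "cluster-e",
           "version": {"number": "6.5.4", "lucene_version": "7.5.0"},
           "tagline": "You Know, for Search"}'''

info = json.loads(resp)
print(info["cluster_name"])          # -> cluster-e
print(info["version"]["number"])     # -> 6.5.4
```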

Enable the memory lock, then sync the configuration file to the other node

vim /etc/elasticsearch/elasticsearch.yml 
bootstrap.memory_lock: true   # uncomment this line

Set the minimum and maximum heap limits; they are made equal so the JVM never has to resize the heap at runtime

vim /etc/elasticsearch/jvm.options
-Xms2g
-Xmx2g

vim /usr/lib/systemd/system/elasticsearch.service
LimitMEMLOCK=infinity  # uncomment this line
systemctl daemon-reload
systemctl restart elasticsearch
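Whether the lock actually took effect can be checked through the nodes API; `mlockall` should be reported as `true`. A sketch, assuming the node above:

```shell
# Query the nodes API for the mlockall flag (true once the memory lock is active)
curl -s 'http://192.168.10.100:9200/_nodes?filter_path=**.mlockall&pretty'
```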

Install the head plugin for Elasticsearch

Method 1: run on any node

elasticsearch-head is a web front end for browsing and interacting with an Elasticsearch cluster.
https://mobz.github.io/elasticsearch-head/
https://github.com/mobz/elasticsearch-head

vim /etc/elasticsearch/elasticsearch.yml
http.cors.enabled: true # append these two lines at the bottom
http.cors.allow-origin: "*"

yum install npm -y
git clone git://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
npm install
npm run start
open http://localhost:9100/


Method 2: with Docker

yum install docker -y
systemctl  start docker && systemctl  enable docker
docker run -p 9100:9100 mobz/elasticsearch-head:5

open http://localhost:9100/

Submit test data


Verify the index exists

View the data
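The screenshots here showed these steps in the head UI; the same can be done from the command line. A sketch, assuming the cluster above and a hypothetical index named `mytest`:

```shell
# Submit one test document into a hypothetical index named "mytest"
curl -XPOST http://192.168.10.100:9200/mytest/_doc \
     -H 'Content-Type: application/json' -d '{"message":"hello"}'

# Verify the index now exists
curl http://192.168.10.100:9200/_cat/indices?v

# View the data
curl 'http://192.168.10.100:9200/mytest/_search?pretty'
```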

Differences between master and slave

Master responsibilities:
Track per-node status and overall cluster state, create and delete indices, manage shard allocation, decommission nodes, etc.
Slave responsibilities:
Replicate data and wait for the chance to be elected master

Monitor the Elasticsearch cluster status

The health endpoint returns JSON, so its fields can be analyzed with Python. For example, check status: green means the cluster is healthy, yellow means replica shards are missing, and red means primary shards are missing.
curl -sXGET http://192.168.10.100:9200/_cluster/health?pretty=true

cat els_monitor.py 
#!/usr/bin/env python
#coding:utf-8

import json
import subprocess

# Query the cluster health endpoint and parse the JSON response
obj = subprocess.Popen(("curl -sXGET http://192.168.10.100:9200/_cluster/health?pretty=true"),
                       shell=True, stdout=subprocess.PIPE)
data = obj.stdout.read()
status = json.loads(data).get("status")
if status == "green":
    print("50")
else:
    print("100")

# 50 means healthy, 100 means abnormal

Logstash

Install Logstash

rpm -ivh logstash-6.5.4.rpm

Test Logstash

Test output to a file

/usr/share/logstash/bin/logstash   -e 'input {  stdin{} } output { file { path => "/tmp/log-%{+YYYY.MM.dd}messages.gz"}}'
hello

file log-2019.08.16messages.gz 
log-2019.08.16messages.gz: ASCII text
cat log-2019.08.16messages.gz
{"host":"logstash1","@version":"1","@timestamp":"2019-08-16T05:27:35.612Z","message":"11:01:15.229 [[main]>worker1] INFO  logstash.outputs.file - Opening file {:path=>\"/tmp/log-2017-04-20messages.gz\"}"}
{"host":"logstash1","@version":"1","@timestamp":"2019-08-16T05:27:35.566Z","message":"hello"}
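Each line the Logstash file output writes is one JSON event, so the fields can be read back programmatically. A minimal Python sketch using the second event above:

```python
import json

# One event line as written by the Logstash file output above
line = ('{"host":"logstash1","@version":"1",'
        '"@timestamp":"2019-08-16T05:27:35.566Z","message":"hello"}')

event = json.loads(line)
print(event["message"])       # -> hello
print(event["@timestamp"])    # -> 2019-08-16T05:27:35.566Z
```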

Test output to Elasticsearch

Check the configuration file first

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/log-es.conf -t

Send stdin input to Elasticsearch

/usr/share/logstash/bin/logstash   -e 'input {  stdin{} } output { elasticsearch {hosts => ["192.168.10.100:9200"] index => "mytest-%{+YYYY.MM.dd}" }}'

Verify on the Elasticsearch server that data arrived

ll /esdata/data/nodes/0/indices/
total 0
drwxr-xr-x 8 elasticsearch elasticsearch 65 Aug 16 13:18 bvx-zQINSA6t6hxXHsgbiQ
drwxr-xr-x 8 elasticsearch elasticsearch 65 Aug 16 14:44 DWRgHkutQsSpH1HUCRJqcg
drwxr-xr-x 3 elasticsearch elasticsearch 20 Aug 16 13:18 N7_a6rlZQTSuQ_PFxnPxKw
drwxr-xr-x 4 elasticsearch elasticsearch 29 Aug 16 13:18 OyJkEOVMQr2oYLvRp0CZfg

Collect log files with Logstash

[root@logstash1 ~]# cat /etc/logstash/conf.d/log-es.conf 
input {
  file {
    path => "/var/log/messages" # path of the log file
    type => "systemlog" # unique type tag for these events
    start_position => "beginning" # where to start reading on the first collection
    stat_interval => "3" # interval in seconds between checks of the file
  }
}

output { 
    elasticsearch {
      hosts => ["192.168.10.100:9200"]
      index => "192.168.10.102-syslog-%{+YYYY.MM.dd}"
    }
}
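The `%{+YYYY.MM.dd}` in the index name is Logstash's Joda-style date pattern, so one index is created per day. A sketch of the resulting names, using Python's equivalent strftime pattern:

```python
from datetime import date

# Logstash's %{+YYYY.MM.dd} corresponds to strftime "%Y.%m.%d"
def index_name(prefix, day):
    return "%s-%s" % (prefix, day.strftime("%Y.%m.%d"))

print(index_name("192.168.10.102-syslog", date(2019, 8, 16)))
# -> 192.168.10.102-syslog-2019.08.16
```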

Check whether the configuration syntax is correct

[root@logstash1 ~]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/log-es.conf -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2019-08-16 16:31:50.625 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[INFO ] 2019-08-16 16:31:54.072 [LogStash::Runner] runner - Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash

Logstash needs read permission on this log file

tail -f /var/log/logstash/logstash-plain.log
[2019-08-16T16:38:05,151][WARN ][filewatch.tailmode.handlers.createinitial] open_file OPEN_WARN_INTERVAL is '300'
Fix: grant read permission on the log file
chmod 644 /var/log/messages

Restart the service

systemctl restart logstash

Browser access


Open Kibana in the browser and create an index pattern


echo testlog11111111 >> /var/log/messages


Collect HTTP access logs

Install the httpd service

yum install httpd -y
echo http > /var/www/html/index.html
systemctl start httpd

Configure Logstash to collect the Apache access log

[root@logstash1 conf.d]# cat log-es.conf 
input {
  file {
    path => "/var/log/messages"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "3"
  }

  file {
    path => "/var/log/httpd/access_log"
    type => "apache-accesslog"
    start_position => "beginning"
    #stat_interval => "3"
  }
}


output {
  if [type] == "systemlog" {
  elasticsearch {
    hosts => ["192.168.10.100:9200"]
    index => "192.168.10.102-syslog-%{+YYYY.MM.dd}"
  }}

  if [type] == "apache-accesslog" {
  elasticsearch {
    hosts => ["192.168.10.100:9200"]
    index => "192.168.10.102-apache-accesslog-%{+YYYY.MM.dd}"
  }}
}

Run Logstash as root (otherwise it cannot read the httpd log files)

vim /etc/systemd/system/logstash.service
User=root
Group=root

systemctl daemon-reload
systemctl restart logstash


Collect Nginx access logs

Write the Nginx access log in JSON format

cat conf/nginx.conf
    log_format access_json '{"@timestamp":"$time_iso8601",'
        '"host":"$server_addr",'
        '"clientip":"$remote_addr",'
        '"size":$body_bytes_sent,'
        '"responsetime":$request_time,'
        '"upstreamtime":"$upstream_response_time",'
        '"upstreamhost":"$upstream_addr",'
        '"http_host":"$host",'
        '"url":"$uri",'
        '"domain":"$host",'
        '"xff":"$http_x_forwarded_for",'
        '"referer":"$http_referer",'
        '"status":"$status"}';
    access_log  /usr/local/nginx/logs/access_json.log access_json;
    
./sbin/nginx -t
/etc/init.d/nginx restart
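Because `$body_bytes_sent` and `$request_time` are emitted unquoted, every access_json.log line should parse as valid JSON. A quick Python check against a hypothetical sample line (all field values below are made up):

```python
import json

# Hypothetical line in the shape produced by the log_format above
line = ('{"@timestamp":"2019-08-16T14:00:00+08:00",'
        '"host":"192.168.10.102",'
        '"clientip":"10.0.0.1",'
        '"size":612,'
        '"responsetime":0.003,'
        '"url":"/index.html",'
        '"status":"200"}')

record = json.loads(line)
print(record["clientip"], record["status"], record["size"])
# -> 10.0.0.1 200 612
```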

Configure Logstash to collect the Nginx access log

[root@logstash1 conf.d]# cat log-es.conf 
input {
  file {
    path => "/var/log/messages"
    type => "systemlog"
    start_position => "beginning"
    stat_interval => "3"
  }

  file {
    path => "/var/log/httpd/access_log"
    type => "apache-accesslog"
    start_position => "beginning"
    #stat_interval => "3"
  }

  file {
    path => "/usr/local/nginx/logs/access_json.log"
    type => "nginx-accesslog"
    start_position => "beginning"
    stat_interval => "3"
    codec => "json"
  }
}


output {
  if [type] == "systemlog" {
  elasticsearch {
    hosts => ["192.168.10.100:9200"]
    index => "192.168.10.102-syslog-%{+YYYY.MM.dd}"
  }}

  if [type] == "apache-accesslog" {
  elasticsearch {
    hosts => ["192.168.10.100:9200"]
    index => "192.168.10.102-apache-accesslog-%{+YYYY.MM.dd}"
  }}

  if [type] == "nginx-accesslog" {
  elasticsearch {
    hosts => ["192.168.10.100:9200"]
    index => "192.168.10.102-nginx-accesslog-%{+YYYY.MM.dd}"
  }}
}

Restart the service

systemctl restart logstash

Add the index pattern in the Kibana UI


View charts in Kibana


Kibana

Install Kibana

yum install kibana-6.5.4-x86_64.rpm -y
grep "^[a-zA-Z]" /etc/kibana/kibana.yml
server.port: 5601  # listen port
server.host: "0.0.0.0" # listen address
elasticsearch.url: "http://192.168.10.100:9200"  # Elasticsearch server URL

systemctl  start kibana


posted @ 2019-08-16 18:25  Final233