ELK 6.2 Deployment

 

ELK 6.2 For CentOS 7.4

(Kibana+ES+Logstash+FileBeat)

 

Components to install on the server side:

1.elasticsearch-6.2.3.tar.gz

2.kibana-6.2.3-linux-x86_64.tar.gz

3.logstash-6.2.3.tar.gz

 

Client-side components:

1.filebeat-6.2.3-linux-x86_64.tar.gz (newer versions use filebeat for log collection because of its small resource footprint)

 

Configuration flow diagram: (image not reproduced in this text version)

 

 

I. Install JDK 1.8

#tar -zxf jdk-8u151-linux-x64.tar.gz
#mv /tmp/jdk1.8.0_151 /usr/local/java

 

Set the environment variables:

#vi /etc/profile
 export JAVA_HOME=/usr/local/java
 export PATH=$PATH:$JAVA_HOME/bin
 export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$CLASSPATH

#source /etc/profile
#java -version
  java version "1.8.0_151"
  Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
  Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)

 

II. Install elasticsearch-6.2.3

1. Create a regular (non-root) user; otherwise ES will refuse to start its services

#useradd es
#passwd es

 

2. Extract ES and change ownership

#tar -zxf elasticsearch-6.2.3.tar.gz -C /usr/local/
#chown -R es:es /usr/local/elasticsearch-6.2.3

 

3. Edit the following parameters in the config file:

#su - es
$vi /usr/local/elasticsearch-6.2.3/config/elasticsearch.yml

  http.port: 9200
  network.host: 172.16.68.73
  path.data: /usr/local/elasticsearch-6.2.3/data
  path.logs: /usr/local/elasticsearch-6.2.3/logs

 

4. Start elasticsearch

#su - es
$/usr/local/elasticsearch-6.2.3/bin/elasticsearch -d   # start in the background

 

 5. Verify the startup succeeded

[test@redis ~]$ ps -ef | grep java
es 107524 1 66 13:54 pts/1 00:00:02 /usr/local/java/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.Lxepf6Jn -XX:+HeapDumpOnOutOfMemoryError -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:logs/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/local/elasticsearch-6.2.3 -Des.path.conf=/usr/local/elasticsearch-6.2.3/config -cp /usr/local/elasticsearch-6.2.3/lib/* org.elasticsearch.bootstrap.Elasticsearch -d

[test@redis ~]$ netstat -anlp | grep :9200
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 ::ffff:127.0.0.1:9200 :::* LISTEN 107524/java
tcp 0 0 ::1:9200 :::* LISTEN 107524/java

 

6. Troubleshooting Elasticsearch startup failures:

  Q: [1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]

  A: Switch to the root user and run:

#vi /etc/security/limits.conf

    * soft nofile 65536
    * hard nofile 131072
    * soft nproc 4096
    * hard nproc 4096

 

  Q: [2]: max number of threads [1024] for user [test] is too low, increase to at least [4096]

  A: Switch to the root user and run:

#vi /etc/security/limits.d/90-nproc.conf

Change this line:
    * soft nproc 1024
to:
    * soft nproc 4096

 

  Q:[3]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

  A: Switch to root and run:

#vi /etc/sysctl.conf    # add the following line

    vm.max_map_count=262144

#sysctl -p    # apply the change without rebooting

 

  Q :[4]: system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk

  A: Edit the ES config file and add the following line:
#vi /usr/local/elasticsearch-6.2.3/config/elasticsearch.yml

    bootstrap.system_call_filter: false
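The limit-related errors above (file descriptors, process count, vm.max_map_count) can be caught before starting ES with a small preflight sketch; `check` is a hypothetical helper written for this document, not part of any ELK tool:

```shell
# Preflight sketch for the three limit-related startup errors above.
# check() is a hypothetical helper; the thresholds are the minimums
# Elasticsearch reports in its error messages.
check() {
    # $1 = current value, $2 = required minimum, $3 = label
    if [ "$1" -ge "$2" ]; then
        echo "$3 OK ($1)"
    else
        echo "$3 too low: $1 < $2"
    fi
}
check "$(ulimit -n)" 65536 "max file descriptors"
check "$(ulimit -u)" 4096 "max user processes"
check "$(cat /proc/sys/vm/max_map_count 2>/dev/null || echo 0)" 262144 "vm.max_map_count"
```

Run it as the es user, since the ulimit values are per-user.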

 

7. Verify by visiting the URL:

http://172.16.68.73:9200 returns the following:

{
  "name" : "HTBtQtK",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "GSNVLpjKQPe5YCEJqMwelQ",
  "version" : {
    "number" : "6.2.3",
    "build_hash" : "c59ff00",
    "build_date" : "2018-03-13T10:06:29.741383Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
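In a scripted post-install check, the version number can be pulled out of this response with standard tools; the sample string below is an abbreviated copy of the response above (in practice it would come from `curl -s http://172.16.68.73:9200`):

```shell
# Extract "version.number" from the ES root-endpoint JSON with grep/cut.
# The sample is an abbreviated copy of the response shown above.
json='{ "name" : "HTBtQtK", "version" : { "number" : "6.2.3" }, "tagline" : "You Know, for Search" }'
echo "$json" | grep -o '"number" *: *"[^"]*"' | cut -d'"' -f4   # prints: 6.2.3
```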

 

III. Install kibana-6.2.3

#tar -zxf kibana-6.2.3-linux-x86_64.tar.gz -C /usr/local/

#Edit the config parameters:
#vi /usr/local/kibana-6.2.3-linux-x86_64/config/kibana.yml

    server.port: 5601
    server.host: "172.16.68.73"
    elasticsearch.url: "http://172.16.68.73:9200"
    kibana.index: ".kibana"

#Start kibana
#/usr/local/kibana-6.2.3-linux-x86_64/bin/kibana &   # run in the background

 

IV. Install logstash-6.2.3

#tar -zxf /tmp/logstash-6.2.3.tar.gz -C /usr/local/
#vi /usr/local/logstash-6.2.3/logstash.conf

--Sample logstash.conf contents:

input {
   beats {
      port => 5044
   }
}
#filter {
#   if [fields][logIndex] == "nginx" {
#      grok {
#         patterns_dir => "/home/elk/apps/logstash-5.1.1/patterns"
#         match => {
#            "message" => "%{NGINXACCESS}"
#         }
#      }
#      urldecode {
#         charset => "UTF-8"
#         field => "url"
#      }
#      if [upstreamtime] == "" or [upstreamtime] == "null" {
#         mutate {
#            update => { "upstreamtime" => "0" }
#         }
#      }
#      date {
#         match => ["logtime", "dd/MMM/yyyy:HH:mm:ss Z"]
#         target => "@timestamp"
#      }
#      mutate {
#         convert => {
#            "responsetime" => "float"
#            "upstreamtime" => "float"
#            "size" => "integer"
#         }
#         remove_field  => ["port","logtime","message"]
#      }
#
#   }
#}
output {
   elasticsearch {
      hosts => "172.16.68.73:9200"
   }
}

 

#(Optional) validate the config first: /usr/local/logstash-6.2.3/bin/logstash -f /usr/local/logstash-6.2.3/logstash.conf --config.test_and_exit
#Start logstash in the background
#nohup /usr/local/logstash-6.2.3/bin/logstash -f /usr/local/logstash-6.2.3/logstash.conf &

 

V. Install FileBeat-6.2.3

# tar -zxf filebeat-6.2.3-linux-x86_64.tar.gz -C /usr/local/
#Edit the following parameters in filebeat.yml:

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true   # set to true to enable this prospector

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/messages    # log file to harvest

#----------------------------- Logstash output --------------------------------
output.logstash:                      # send harvested logs to logstash
  # The Logstash hosts
  hosts: ["172.16.68.73:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

 

#Start filebeat
#/usr/local/filebeat-6.2.3-linux-x86_64/filebeat -e -c /usr/local/filebeat-6.2.3-linux-x86_64/filebeat.yml -d "publish"

 

VI. Advanced configuration

1. Classify, match, and index different log types so that each type can be browsed separately

Configuration example:

   Filebeat ships logs to logstash; to match them up on the logstash side, custom entries must be set under the fields option, as shown in the annotated lines below

 a. Logstash configuration

input {
   beats {
      port => 5044
   }
}
output {
   if [fields][test] == "system_log" {
     elasticsearch {
        hosts => "172.16.68.73:9200"
        index => "system-%{+YYYY-MM-dd}"               # create the index so it can be matched in kibana
     }
   }

   if [fields][service] == "68_224_b2c_root" {
     elasticsearch {
        hosts => "172.16.68.73:9200"
        index => "68_224_b2c_root"                      # create the index so it can be matched in kibana
     }
   }

   if [fields][service] == "68_224_b2c_order" {
     elasticsearch {
        hosts => "172.16.68.73:9200"
        index => "68_224_b2c_order"
     }
   }

   if [fields][service] == "68_224_b2c_sale" {
     elasticsearch {
        hosts => "172.16.68.73:9200"
        index => "68_224_b2c_sale"
     }
   }

   if [fields][service] == "68_224_b2c_srv" {
     elasticsearch {
        hosts => "172.16.68.73:9200"
        index => "68_224_b2c_srv"
     }
   }
}                
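The `%{+YYYY-MM-dd}` part of the first index name is a logstash date pattern: it is expanded from each event's @timestamp, producing one index per day. A shell sketch of the name generated for today's date:

```shell
# Sketch: the daily index name that logstash's "system-%{+YYYY-MM-dd}"
# pattern yields for today's date (logstash actually derives it from
# each event's @timestamp rather than the wall clock).
idx="system-$(date +%Y-%m-%d)"
echo "$idx"   # e.g. system-2018-04-25
```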

 

 b. Filebeat configuration

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /data/tomcatwww/logs/b2c/root.log
  fields:
    service: 68_224_b2c_root                # custom field matched by Logstash; in 6.x this is how log types are tagged
  scan.order: desc

- type: log
  enabled: true
  paths:
    - /data/tomcatwww/logs/b2c/order.log
  fields:
    service: 68_224_b2c_order              # custom field matched by Logstash; in 6.x this is how log types are tagged
  scan.order: desc

- type: log
  enabled: true
  paths:
    - /data/tomcatwww/logs/b2c/sale.log
  fields:
    service: 68_224_b2c_sale
  scan.order: desc

- type: log
  enabled: true
  paths:
    - /data/tomcatwww/logs/b2c/srv.log
  fields:
    service: 68_224_b2c_srv
  scan.order: desc
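Because the entries above sit under `fields` (and `fields_under_root` is not enabled), filebeat nests them inside a `fields` object in each event, which is why the logstash conditionals test `[fields][service]`. A sketch with a hypothetical, abbreviated event:

```shell
# Sketch: filebeat nests custom fields under "fields" in each event
# (unless fields_under_root is set), so logstash matches [fields][service].
# The event below is a hypothetical, abbreviated example.
event='{"message":"order created","fields":{"service":"68_224_b2c_order"}}'
echo "$event" | grep -o '"service":"[^"]*"' | cut -d'"' -f4   # prints: 68_224_b2c_order
```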

  

 c. Create matching index patterns under Kibana → Management so the data can be retrieved

 

 

posted @ 2018-04-25 10:26  2240930501