Introduction to EFK
This setup runs Elasticsearch and Kibana in Docker, with Filebeat installed locally.
It is recommended to keep all three components on the same version.
1. Installing Elasticsearch and Kibana with Docker
1.1 Installing Elasticsearch
# Pull the image
docker pull elasticsearch:7.8.0
# Create a custom network (so other services on the same network, such as Kibana, can connect)
docker network create somenetwork
# Run Elasticsearch
docker run -d --name elasticsearch --net somenetwork -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:7.8.0
# Check container status
docker ps
Elasticsearch is now installed and running.
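Before moving on, it is worth confirming that the node actually answers over HTTP. A minimal smoke test, assuming the container has had a few seconds to boot:

```shell
# A healthy Elasticsearch node answers on port 9200 with a JSON banner;
# grep for the cluster_name key to confirm it is up.
curl -s http://127.0.0.1:9200 | grep '"cluster_name"'
```

With the default Docker image this should print the `"cluster_name" : "docker-cluster"` line of the banner; an empty result means the node is not ready yet.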
1.2 Installing Kibana
# Pull the image
docker pull kibana:7.8.0
# 运行 Kibana
docker run -d --name kibana --net somenetwork -p 5601:5601 kibana:7.8.0
Open http://127.0.0.1:5601 (Kibana can be slow to start; if the page fails to load, wait a few seconds and refresh).
And that's it: everything is installed and running. Docker makes the setup pleasantly simple.
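Rather than refreshing the browser blindly, you can poll Kibana's status API from the command line. A sketch, assuming the default port; the exact JSON layout varies slightly across versions:

```shell
# Ask Kibana for its overall status; once the state reads "green",
# the UI at http://127.0.0.1:5601 is ready.
curl -s http://127.0.0.1:5601/api/status | grep -o '"state":"[a-z]*"'
```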
2. Installing Filebeat 7.8.0
To stay consistent with the components above, install version 7.8.0. Just download the archive and extract it.
2.1 Configuring filebeat.yml
#=========================== Filebeat inputs =============================
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    # Actual path of the log files
    - /Users/zhenghengbin/code/bailing-service-drug/logs/bailing-service-drug/bailing-service-drug-info*.log
  fields:
    # Tag to distinguish different logs; the index definition below matches on it
    type: "bailing-service-drug"
  fields_under_root: true
  # Encoding of the monitored files; both plain and utf-8 handle Chinese logs correctly
  encoding: utf-8
  # Pattern that matches the first line of a multiline log entry
  multiline.pattern: ^\s*\d\d\d\d-\d\d-\d\d
  # Whether to negate the pattern: with true, lines that do NOT match the
  # pattern are treated as continuation lines. [Recommended: true]
  multiline.negate: true
  # Whether continuation lines are joined after or before the line that
  # matched the pattern; with negate: true, use after
  multiline.match: after
#============================== Kibana =====================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
  # Kibana host address
  host: "127.0.0.1:5601"
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"
  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:
#============================= Elastic Cloud ==================================
#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  enabled: true
  # Array of hosts to connect to
  hosts: ["127.0.0.1:9200"]
#  index: "logs-%{[beat.version]}-%{+yyyy.MM.dd}"
  indices:
    # Index name, typically 'service name + ip + -%{+yyyy.MM.dd}'
    - index: "bailing-service-drug-%{+yyyy.MM.dd}"
      when.contains:
        # Tag tying this index to the log input; must match the type set above
        type: "bailing-service-drug"
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  # username: "elastic"
  # password: "bljk@123"
#----------------------------- Logstash output --------------------------------
#================================ Processors =====================================
# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - drop_fields:
      # Drop redundant fields
      fields: ["agent.type", "agent.version", "log.file.path", "log.offset", "input.type", "ecs.version", "host.name", "agent.ephemeral_id", "agent.hostname", "agent.id", "_id", "_index", "_score", "_suricata.eve.timestamp", "cloud.availability_zone", "host.containerized", "host.os.kernel", "host.os.name", "host.os.version"]
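The three multiline settings are easiest to understand with a tiny example. The sketch below uses `grep -E` with a POSIX rewrite of the same timestamp pattern to count how many lines would start a new event; Filebeat's own matcher behaves equivalently on this input (the sample log lines are invented for illustration):

```shell
# Four physical lines, but only two events: the stack-trace lines do not
# match the timestamp pattern, so (with negate: true, match: after) they
# are appended to the preceding event.
printf '%s\n' \
  '2024-07-01 10:00:00 INFO started' \
  '2024-07-01 10:00:01 ERROR boom' \
  'java.lang.RuntimeException: boom' \
  '    at com.example.Main.main(Main.java:10)' \
| grep -Ec '^[[:space:]]*[0-9]{4}-[0-9]{2}-[0-9]{2}'
# prints 2
```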
2.2 Start the Filebeat program (run from inside the filebeat directory)
nohup ./filebeat -c filebeat.yml -e >/dev/null 2>&1 &
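One caveat: `-e` sends Filebeat's own log to stderr, and the redirect above discards it, which makes startup failures hard to diagnose. A variant that keeps the log (the `filebeat.out` filename is just a name chosen here) and confirms the process started:

```shell
# Keep Filebeat's stderr log in a file instead of discarding it,
# and remember the PID so the process can be checked on later.
nohup ./filebeat -c filebeat.yml -e > filebeat.out 2>&1 &
fb_pid=$!
# kill -0 sends no signal; it only tests that the process still exists.
kill -0 "$fb_pid" && echo "filebeat is running (pid $fb_pid)"
```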
3. Configuring Kibana
At this point the basic setup is complete. If everything went well, the configured index name should appear in Kibana's index management page; if it does not, check whether the Filebeat process started correctly and whether filebeat.yml is valid.
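You can also check from the command line instead of the Kibana UI: compute today's expected index name from the `indices` rule in filebeat.yml, then ask Elasticsearch for it (the `_cat/indices` call assumes Elasticsearch is still reachable on port 9200):

```shell
# The %{+yyyy.MM.dd} suffix is the event date, so today's index should be:
idx="bailing-service-drug-$(date +%Y.%m.%d)"
echo "$idx"
# Ask Elasticsearch whether it exists (requires the stack to be running):
# curl -s "http://127.0.0.1:9200/_cat/indices/${idx}?v"
```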