filebeat+kafka+logstash+es

Hostname  IP address      Service
node01    192.168.15.234  zk + kafka
node02    192.168.15.235  zk + kafka
node03    192.168.15.236  zk + kafka
master    192.168.15.60   ES cluster (running in containers)
node01    192.168.15.234  logstash
lmps      192.168.0.183   filebeat

 

Note: when connecting to Kafka, you must use hostnames rather than IP addresses.
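Since Kafka is reached by hostname, every machine running Filebeat or Logstash must be able to resolve the broker hostnames. One way is /etc/hosts entries; the sketch below writes them to a local file first (the filename `hosts.kafka` is just an example, and the IPs come from the host table above):

```shell
# Hostname -> IP mappings for the Kafka brokers (from the host table above).
cat << 'EOF' > hosts.kafka
192.168.15.234 node01
192.168.15.235 node02
192.168.15.236 node03
EOF
# Then append them on every Filebeat/Logstash machine (as root):
# cat hosts.kafka >> /etc/hosts
```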

The overall flow: Filebeat pushes messages to Kafka, and Logstash pulls the data from Kafka and ships it to Elasticsearch. Even if several machines need Filebeat configured, a single Logstash instance is enough.
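Before wiring up the pipeline, the `catalina` topic can be created on the Kafka cluster. This is a sketch under assumptions: the install path `/opt/kafka`, the ZooKeeper port 2181, and the partition/replication counts are all placeholders to adjust for your cluster (older Kafka versions use the `--zookeeper` flag shown here; newer ones use `--bootstrap-server` instead):

```shell
# Create the topic Filebeat will publish to (run on any Kafka node).
# Install path, ZK addresses, and partition/replication settings are assumptions.
/opt/kafka/bin/kafka-topics.sh --create \
  --zookeeper node01:2181,node02:2181,node03:2181 \
  --topic catalina \
  --partitions 3 \
  --replication-factor 2
```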

References:

https://www.cnblogs.com/saneri/p/8822116.html

https://www.cnblogs.com/wangzhuxing/p/9678578.html

https://www.cnblogs.com/xiaobaozi-95/p/9214307.html

Download Filebeat and Logstash

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.5.1-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.3.0.tar.gz
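After downloading, unpack both archives before moving on to configuration:

```shell
# Extract the two tarballs downloaded above
tar -xzf filebeat-5.5.1-linux-x86_64.tar.gz
tar -xzf logstash-6.3.0.tar.gz
```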

Filebeat configuration

Enter the filebeat directory and create a new file, filebeat.kafka.yml:

[root@bogon filebeat-5.5.1]# ls
data filebeat filebeat.yml logs module scripts start.sh
[root@bogon filebeat-5.5.1]# cat << EOF > filebeat.kafka.yml
filebeat.prospectors:
- type: log
  encoding: GB2312      # character set of the log files being collected
  enabled: true
  paths:
    - /home/lmps/log/*/switch.log      # log file to collect; add more paths on new lines below
output.kafka:
  hosts: ["node01:9092","node02:9092","node03:9092"]
  topic: catalina       # Kafka topic to publish to; any name works, but it must match the Logstash input
EOF
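Before starting, the YAML can be sanity-checked. Filebeat 5.x accepts a `-configtest` flag for this (later versions replaced it with `filebeat test config`):

```shell
# Validate the config file without starting the shipper (Filebeat 5.x syntax)
./filebeat -configtest -c filebeat.kafka.yml
```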

Start Filebeat

./filebeat -e -c filebeat.kafka.yml &> logs/filebeat.log &
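To confirm messages are actually landing on the topic, a console consumer can be run on any Kafka node (the install path `/opt/kafka` is an assumption; adjust for your cluster):

```shell
# Tail the topic from the beginning; you should see JSON events from Filebeat
/opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server node01:9092 \
  --topic catalina --from-beginning
```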

Logstash configuration

Enter the logstash directory and create a new file, kafka-to-es.conf:

[root@bogon logstash-6.3.0]# mkdir -p config/conf.d && cd config/conf.d/
[root@bogon conf.d]# cat << EOF > kafka-to-es.conf
input{
    kafka{
        bootstrap_servers => "node01:9092,node02:9092,node03:9092"
        topics => ["catalina"]     # Kafka topic to consume; must match the topic Filebeat publishes to
        codec => plain
    }
}
filter {
    json {
        source => "message"
    }
    date {
        match => [ "timestamp", "dd/MM/YYYY:HH:mm:ss Z" ]
    }
}
output {
    elasticsearch {
        hosts => ["192.168.15.60:31200"]
        index => "catalina-%{+YYYY.MM.dd}"     # Elasticsearch index name
    }
}
EOF
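Logstash 6.x can syntax-check a pipeline file before it is started for real:

```shell
# Parse the pipeline config and exit; reports any syntax errors
./bin/logstash -f config/conf.d/kafka-to-es.conf --config.test_and_exit
```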

Start Logstash

./bin/logstash -f config/conf.d/kafka-to-es.conf &> logs/logstash.out &
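Once the whole pipeline is running, the daily index should appear in Elasticsearch. A quick check against the ES endpoint from the host table:

```shell
# List indices and look for the catalina-YYYY.MM.dd index created by Logstash
curl -s 'http://192.168.15.60:31200/_cat/indices?v' | grep catalina
```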

posted @ 2020-04-13 11:53  一条咸鱼的梦想