Filebeat + Kafka + Logstash + Elasticsearch
| Hostname | IP Address | Service |
| --- | --- | --- |
| node01 | 192.168.15.234 | ZooKeeper + Kafka |
| node02 | 192.168.15.235 | ZooKeeper + Kafka |
| node03 | 192.168.15.236 | ZooKeeper + Kafka |
| master | 192.168.15.60 | Elasticsearch cluster (in containers) |
| node01 | 192.168.15.234 | Logstash |
| lmps | 192.168.0.183 | Filebeat |
Note: clients must connect to Kafka by hostname, not by IP, so the Filebeat and Logstash hosts need to be able to resolve node01–node03.
The overall flow: Filebeat pushes messages to Kafka, and Logstash pulls the data from Kafka and ships it to Elasticsearch. Even if Filebeat has to be configured on several machines, a single Logstash instance is still enough.
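For that resolution to work, the Filebeat and Logstash hosts must map node01–node03 to their IPs. A minimal sketch, assuming there is no internal DNS, using /etc/hosts entries taken from the table above:
cat << EOF >> /etc/hosts
192.168.15.234 node01
192.168.15.235 node02
192.168.15.236 node03
EOF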
Reference:
https://www.cnblogs.com/saneri/p/8822116.html
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.5.1-linux-x86_64.tar.gz
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.3.0.tar.gz
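Extract both tarballs before configuring them (the rename step is an optional assumption, added only so the directory matches the shell prompts shown below):
tar -zxvf filebeat-5.5.1-linux-x86_64.tar.gz
mv filebeat-5.5.1-linux-x86_64 filebeat-5.5.1   # optional, matches the prompts below
tar -zxvf logstash-6.3.0.tar.gz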
Go into the Filebeat directory and create a new file, filebeat.kafka.yml:
[root@bogon filebeat-5.5.1]# ls
data filebeat filebeat.yml logs module scripts start.sh
[root@bogon filebeat-5.5.1]# cat << EOF > filebeat.kafka.yml
filebeat.prospectors:
- type: log
  encoding: GB2312                  # character encoding of the source logs
  enabled: true
  paths:
    - /home/lmps/log/*/switch.log   # log files to collect; add more paths on separate lines
output.kafka:
  hosts: ["node01:9092","node02:9092","node03:9092"]
  topic: catalina                   # Kafka topic to publish to; the name is customizable
EOF
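Depending on the broker settings, the catalina topic may need to be created manually (many Kafka clusters auto-create topics on first write). A sketch using the stock Kafka CLI, assuming Kafka's bin/ directory is on PATH and ZooKeeper listens on its default port 2181; the partition and replica counts are illustrative:
kafka-topics.sh --create --zookeeper node01:2181,node02:2181,node03:2181 --replication-factor 2 --partitions 3 --topic catalina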
Start Filebeat:
./filebeat -e -c filebeat.kafka.yml &> logs/filebeat.log &
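To confirm that events are actually landing in Kafka before wiring up Logstash, consume the topic directly with the stock console consumer (again assuming Kafka's bin/ is on PATH):
kafka-console-consumer.sh --bootstrap-server node01:9092 --topic catalina --from-beginning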
In the Logstash directory, create the pipeline config (the conf.d directory is not part of the stock tarball, so create it first):
[root@bogon logstash-6.3.0]# mkdir -p config/conf.d
[root@bogon logstash-6.3.0]# cat << EOF > config/conf.d/kafka-to-es.conf
input {
  kafka {
    bootstrap_servers => "node01:9092,node02:9092,node03:9092"
    topics => ["catalina"]          # must match the topic Filebeat publishes to
    codec => plain
  }
}
filter {
  json {
    source => "message"
  }
  date {
    match => [ "timestamp", "dd/MM/YYYY:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch {
    hosts => ["192.168.15.60:31200"]
    index => "catalina-%{+YYYY.MM.dd}"   # Elasticsearch index name pattern
  }
}
EOF
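Before launching for real, the pipeline syntax can be validated; Logstash 6.x supports a test-and-exit flag:
./bin/logstash -f config/conf.d/kafka-to-es.conf --config.test_and_exit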
Start Logstash:
./bin/logstash -f config/conf.d/kafka-to-es.conf &> logs/logstash.out &
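Once Logstash is running, the daily index should show up in Elasticsearch; a quick check against the cluster from the table above:
curl 'http://192.168.15.60:31200/_cat/indices?v' | grep catalina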
