Collecting raw Zabbix data

Posted on 2021-11-15 16:35 by 呱嗒呱嗒

Collecting Zabbix data with Filebeat

1. Pre-installation notes

1.1 Preliminary configuration

1. SSH into the zabbix_server host, check the disk partition layout, pick a directory with ample free space, and create a dump directory there, e.g. /data/dump (command: mkdir -p /data/dump)

 

2. On the test zabbix-server host, edit the configuration file:

(the default path is /etc/zabbix/zabbix_server.conf; adjust to your actual installation)

Add the following lines (the #### remarks are annotations for this guide, not part of the file):

ExportDir=/data/dump       #### directory for dump files; choose it according to your partition layout, e.g. /data/dump

ExportFileSize=30M         #### rotation size of each dump file; tune this during testing so the disk does not fill up (Zabbix accepts K/M/G/T size suffixes, hence 30M rather than 30MB)

ExportType=history,trends,events
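A quick sanity check can confirm the three Export* lines made it into the file. This is a hypothetical helper, not part of the original procedure; CONF defaults to the config path used in this guide.

```shell
# Hypothetical sanity check: list the Export* settings actually present.
# CONF defaults to the path used in this guide; override if yours differs.
CONF=${CONF:-/etc/zabbix/zabbix_server.conf}
if [ -r "$CONF" ]; then
  grep -E '^Export(Dir|FileSize|Type)=' "$CONF"
else
  echo "config not found: $CONF"
fi
```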

 

  • Manually create the folder: mkdir -p /data/dump/backup/
  • Upload the scripts to /data/dump/backup/ and make them executable

(Note 1: fileprocess.sh periodically moves /data/dump/*.old files into /data/dump/backup/ and renames them with a timestamp; Filebeat then collects the files under /data/dump/backup/. It runs every 2 minutes.)

(Note 2: clean.sh removes files under /data/dump/backup/ generated more than 10 minutes earlier, so that accumulated files do not fill the local disk. It runs every 10 minutes.)

chmod +x /data/dump/backup/*.sh

clean.sh

#!/bin/bash
## dump directory (must match ExportDir configured above)
cleanPath=/data/dump

# uncomment to preview the files that would be removed:
#find "$cleanPath"/backup -name '*.old-*' -type f -mmin +10 -exec ls -l {} \;
find "$cleanPath"/backup -name '*.old-*' -type f -mmin +10 -exec rm -f {} \;
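Before wiring clean.sh into cron, its find expression can be exercised safely in a scratch directory (the mktemp path below is a stand-in for /data/dump):

```shell
# Exercise the clean-up logic in a throwaway directory (stand-in for /data/dump).
cleanPath=$(mktemp -d)
mkdir -p "$cleanPath/backup"
stale="$cleanPath/backup/history-history-syncer-1.ndjson.old-20210915120001"
touch -d '20 minutes ago' "$stale"   # backdate so it counts as >10 minutes old
find "$cleanPath/backup" -name '*.old-*' -type f -mmin +10 -delete
ls "$cleanPath/backup"               # empty: the stale file was removed
```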
  • Script test (optional):

cp /data/dump/history-history-syncer-1.ndjson /data/dump/history-history-syncer-1.ndjson.old

 

/data/dump/backup/fileprocess.sh

#!/bin/bash
date=$(date "+%Y%m%d%H%M%S")
# dump directory (must match ExportDir configured above)
dumpPath="/data/dump"
if ls "$dumpPath"/*.old >/dev/null 2>&1; then
    # move rotated export files into backup/
    find "$dumpPath" -maxdepth 1 -name '*.old' -type f -exec mv {} "$dumpPath/backup" \;
    cd "$dumpPath/backup" || exit 1
    Count=0
    for line in *.old; do
        # append a timestamp so successive rotations do not overwrite each other
        mv "$line" "$line-$date"
        (( Count = Count + 1 ))
    done
    #echo "$Count files renamed"
else
    echo "Notice: no new .old files in $dumpPath!"
fi

ls -l /data/dump/backup   ### verify the files were moved to /data/dump/backup and renamed, e.g. history-history-syncer-1.ndjson.old-20210915120001
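The move-and-rename step above can likewise be dry-run in a scratch directory before pointing the script at /data/dump (paths below are hypothetical stand-ins):

```shell
# Dry-run of the fileprocess.sh move/rename logic in a throwaway directory.
dumpPath=$(mktemp -d)              # stand-in for /data/dump
mkdir -p "$dumpPath/backup"
touch "$dumpPath/history-history-syncer-1.ndjson.old"
ts=$(date "+%Y%m%d%H%M%S")
find "$dumpPath" -maxdepth 1 -name '*.old' -type f -exec mv {} "$dumpPath/backup" \;
for f in "$dumpPath"/backup/*.old; do
  mv "$f" "$f-$ts"
done
ls "$dumpPath/backup"              # history-history-syncer-1.ndjson.old-<timestamp>
```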

 

  • Create the scheduled tasks: crontab -e

*/2 * * * * sh /data/dump/backup/fileprocess.sh

*/10 * * * * sh /data/dump/backup/clean.sh

 

  • Restart the test zabbix-server service.

(e.g. systemctl restart zabbix-server.service; run the restart command appropriate to your installation)

 

3. Wait about 3 minutes, check whether *.ndjson files appear under /data/dump, and keep watching the files under /data/dump/backup for changes

 

4. Log in to the log platform and start the UDP ingestion instances (after completing steps 1-3, contact a 鼎茂 engineer for step 4 before continuing)

  Log in via browser: http://10.92.*.*:7088    admin / zaq12wsx

  On the edge node, start UDP ingestion instances that accept client data from source address 0.0.0.0 on ports 10515 (zabbix-history), 10516 (zabbix-trends), and 10517 (zabbix-problems).
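Once the three listeners are up, a quick reachability probe can be sent from the zabbix-server host using bash's /dev/udp pseudo-device. EDGE_NODE_IP is a placeholder for the masked 10.92.*.* address above; this only confirms a packet can be sent, not that it was ingested.

```shell
# Send one UDP probe packet to each listener port (bash-only /dev/udp feature).
EDGE_NODE_IP=${EDGE_NODE_IP:-127.0.0.1}   # placeholder; set to the edge node IP
for port in 10515 10516 10517; do
  if echo "udp-probe" > "/dev/udp/$EDGE_NODE_IP/$port"; then
    echo "probe sent to $EDGE_NODE_IP:$port"
  fi
done
```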

2. Deploying and installing Filebeat on RHEL 7

2.1 Monitoring purpose and scope

Filebeat closely monitors changes to machine-generated data files. We use Filebeat to ship the data exported in real time by Zabbix over UDP to the Arcana intelligent operations platform.

Filebeat collects three kinds of data: Zabbix history, Zabbix problems, and Zabbix trends.

2.2 Obtaining the Filebeat installation package

Official download: https://www.elastic.co/cn/beats/filebeats

Version used: filebeat-7.8.1-linux-x86_64.tar.gz (provided separately)

Configuration templates: problems.yml, trends.yml, history.yml

problems.yml

filebeat.inputs:
  - type: log
    fields:
      metadata.log_filename: "/data/dump/backup/problems-*.ndjson*"
      metadata.data_type: "problems"
    paths:
      - "/data/dump/backup/problems-*.ndjson*"
    #multiline.pattern: '^\d'
    #multiline.negate: true
    #multiline.match: after
output.logstash:
  hosts: ["192.168.100.101:10517"]

trends.yml

filebeat.inputs:
  - type: log
    fields:
      metadata.log_filename: "/data/dump/backup/trends-*.ndjson*"
      metadata.data_type: "trends"
    paths:
      - "/data/dump/backup/trends-*.ndjson*"
    #multiline.pattern: '^\d'
    #multiline.negate: true
    #multiline.match: after
output.logstash:
  hosts: ["192.168.100.101:10516"]

history.yml

filebeat.inputs:
  - type: log
    fields:
      metadata.log_filename: "/data/dump/backup/history-*.ndjson*"
      metadata.data_type: "history"
    paths:
      - "/data/dump/backup/history-*.ndjson*"
    #multiline.pattern: '^\d'
    #multiline.negate: true
    #multiline.match: after
    # include_lines entries are regular expressions, so literal parentheses must be escaped
    include_lines: ['CPU utilization','Memory utilization','Load average \(5m avg\)','sda: Disk write rate','sda: Disk read rate']
output.logstash:
  hosts: ["192.168.100.101:10515"]

 

 

2.3 Filebeat installation procedure

(1) Upload the five files problems.yml, trends.yml, history.yml, start.sh and stop.sh to /root/ on the target server, and make sure any existing filebeat is stopped: systemctl status filebeat-arcana

start.sh

#!/bin/bash
nohup /usr/local/filebeat-arcana/filebeat -e -c /usr/local/filebeat-arcana/prospectors.d/history.yml --path.data=/usr/local/filebeat-arcana/data/history > /usr/local/filebeat-arcana/logs/history.log 2>&1 &
nohup /usr/local/filebeat-arcana/filebeat -e -c /usr/local/filebeat-arcana/prospectors.d/trends.yml --path.data=/usr/local/filebeat-arcana/data/trends > /usr/local/filebeat-arcana/logs/trends.log 2>&1 &
nohup /usr/local/filebeat-arcana/filebeat -e -c /usr/local/filebeat-arcana/prospectors.d/problems.yml --path.data=/usr/local/filebeat-arcana/data/problems > /usr/local/filebeat-arcana/logs/problems.log 2>&1 &
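stop.sh is listed among the uploaded files but its contents are not shown in the original. A minimal sketch, assuming the three instances were launched by the start.sh above (it matches on the binary path used there), could be:

```shell
#!/bin/bash
# Hypothetical stop.sh: terminate the three filebeat instances started by
# start.sh above, matching on the binary path used in start.sh.
if pgrep -f '/usr/local/filebeat-arcana/filebeat' >/dev/null; then
  pkill -f '/usr/local/filebeat-arcana/filebeat'
  echo "filebeat-arcana instances stopped"
else
  echo "no filebeat-arcana process found"
fi
```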

(2) Extract the filebeat tarball into /usr/local/ and rename the directory filebeat-arcana

tar -zxvf filebeat-7.8.1-linux-x86_64.tar.gz -C /usr/local/

mv /usr/local/filebeat-7.8.1-linux-x86_64 /usr/local/filebeat-arcana

cd /usr/local/filebeat-arcana             ### enter the filebeat-arcana directory

(3) Manually create the data and logs directories and the log files:

mkdir -p /usr/local/filebeat-arcana/data/history

mkdir -p /usr/local/filebeat-arcana/data/trends

mkdir -p /usr/local/filebeat-arcana/data/problems

mkdir -p /usr/local/filebeat-arcana/logs

touch /usr/local/filebeat-arcana/logs/history.log

touch /usr/local/filebeat-arcana/logs/trends.log

touch /usr/local/filebeat-arcana/logs/problems.log

(4) Create the prospectors.d directory, copy problems.yml, trends.yml and history.yml into it, and adjust the configuration

mkdir -p /usr/local/filebeat-arcana/prospectors.d

cd /usr/local/filebeat-arcana/prospectors.d

cp /root/problems.yml /usr/local/filebeat-arcana/prospectors.d/

cp /root/history.yml /usr/local/filebeat-arcana/prospectors.d/

cp /root/trends.yml /usr/local/filebeat-arcana/prospectors.d/

 

(5) Start filebeat

cd  /usr/local/filebeat-arcana/

sh start.sh

 

Notice: filebeat is no longer managed via systemctl!

(6) Verify that the filebeat processes are running (there should be 3)

ps -ef | grep filebeat-arcana

2.4 Data verification (DM)

On the log platform, check whether data has arrived in the corresponding index.

 

Appendix: data parsing engine

{
  "createTime": 1631160121767,
  "updateTime": 1631160121767,
  "createBy": "admin",
  "updateBy": "admin",
  "name": "zabbix_json",
  "dataType": "zabbix_json",
  "description": "",
  "extractType": "json",
  "config": {
    "pattern": "\"clock\"\\:(?<time>\\d+)",
    "delimiter": ",",
    "delimiterKv": ":"
  },
  "schema": ["host.host", "host.name", "groups", "applications", "itemid", "name", "clock", "ns", "value", "type", "_raw"],
  "timeModel": "custom",
  "timeFormat": "",
  "timeRegex": "\"clock\"\\:(?<time>\\d+)",
  "exampleData": "{\"host\":{\"host\":\"192.168.100.213\",\"name\":\"192.168.100.213\"},\"groups\":[\"Linux servers\"],\"applications\":[\"Disk sda\"],\"itemid\":36248,\"name\":\"sda: Disk read request avg waiting time (r_await)\",\"clock\":1629320708,\"ns\":392581519,\"value\":0.0,\"type\":0}",
  "regexSchema": [],
  "regexOn": false
}