Filebeat log collection configuration

Filebeat deployment

1. Binary package deployment

  1. Download the binary package from the official website.

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.0.0-linux-x86_64.tar.gz

  2. Extract the archive and move the directory to /usr/local/.

tar zxf filebeat-8.0.0-linux-x86_64.tar.gz
mv filebeat-8.0.0-linux-x86_64 /usr/local/filebeat
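
A quick sanity check that the binary runs on this host:

/usr/local/filebeat/filebeat version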

  3. Configure systemd to manage the service.

[Unit]
Description=Filebeat
After=network-online.target

[Service]
ExecStart=/usr/local/filebeat/filebeat -c /usr/local/filebeat/filebeat.yml

[Install]
WantedBy=multi-user.target
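
A minimal activation sketch, assuming the unit above is saved as /etc/systemd/system/filebeat.service (the path is an assumption; any systemd unit directory works):

systemctl daemon-reload
systemctl enable --now filebeat    # start now and on every boot
systemctl status filebeat          # verify the service is running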


2. Docker deployment

  Deploy directly with the docker run command, mapping the configuration file and the data directory to the local host.

docker run -d --name=filebeat --restart=always \
  -v /data/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml \
  -v /data/filebeat/data/:/usr/share/filebeat/data/ \
  elastic/filebeat:8.0.0
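
The configuration file must already exist at /data/filebeat/filebeat.yml on the host before the container starts. To confirm the container came up and is shipping logs:

docker ps --filter name=filebeat   # the container should be Up
docker logs -f filebeat            # follow Filebeat's own log output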


3. Kubernetes deployment

  1. Create a config-filebeat.yaml file, using a Kubernetes ConfigMap to store Filebeat's configuration.

vim config-filebeat.yaml
# Add the following configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: monitor
  labels:
    app: filebeat
data:
  filebeat.yml: |-
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          templates:
            - condition:
                equals:
                  kubernetes.namespace: "%{[kubernetes.namespace]}"
              config:
                - type: log
                  containers.ids:
                    - "${data.kubernetes.container.id}"
                  paths:
                    - "/var/lib/kubelet/pods/${data.kubernetes.pod.uid}/volumes/kubernetes.io~empty-dir/logs/*/*.log"
                  encoding: utf-8
                  scan_frequency: 1s
                  tail_files: true
                  fields_under_root: true
                  fields:
                    type: "%{[kubernetes.namespace]}"
    processors:
    - add_kubernetes_metadata:
        in_cluster: true
    output.elasticsearch:
      hosts: ["xxx.xxx.xxx.xxx:9200"]
      indices:
        - index: "k8s-%{[kubernetes.namespace]}-%{+yyyy.MM.dd}"
          when.contains:
            type: "%{[kubernetes.namespace]}"

    This configuration serves a fairly specific requirement: index names are derived from the namespace, and the files collected are log files written inside the Pod rather than Docker's container logs. Each Pod must therefore write its logs to an emptyDir volume, as shown in the example Pod below.
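
A minimal sketch of a Pod that fits this layout (the image, command, and log path are illustrative assumptions). The emptyDir volume must be named logs so its files land under .../volumes/kubernetes.io~empty-dir/logs/ on the node, and the application should write into a subdirectory to match the logs/*/*.log pattern above:

apiVersion: v1
kind: Pod
metadata:
  name: demo-app                 # hypothetical Pod, for illustration only
  namespace: default
spec:
  containers:
  - name: app
    image: busybox               # assumed image; replace with your application
    command: ["sh", "-c", "mkdir -p /logs/app && while true; do date >> /logs/app/app.log; sleep 5; done"]
    volumeMounts:
    - name: logs                 # must be named logs to match the Filebeat path
      mountPath: /logs           # the application writes its log files here
  volumes:
  - name: logs
    emptyDir: {}                 # appears as kubernetes.io~empty-dir/logs/ under the Pod directory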


  2. Create a filebeat.yaml file that deploys Filebeat as a DaemonSet, including the RBAC permissions, a data directory mapped to the host, and the Filebeat configuration mounted from the ConfigMap.

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: monitor
  labels:
    app: filebeat
spec:
  selector:
    matchLabels:
      app: filebeat
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: elastic/filebeat:8.0.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        env:
        - name: ELASTICSEARCH_HOST
          value: "xxx.xxx.xxx.xxx"
        - name: ELASTICSEARCH_PORT
          value: "9200"
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlibkubeletpods
          mountPath: /var/lib/kubelet/pods
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibkubeletpods
        hostPath:
          path: /var/lib/kubelet/pods
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: data
        hostPath:
          path: /data/filebeat-data
          type: DirectoryOrCreate
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: monitor
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    app: filebeat
rules:
- apiGroups: [""]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: monitor
  labels:
    app: filebeat
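
Apply both manifests and check that one Filebeat Pod is running per node (the monitor namespace must exist beforehand):

kubectl apply -f config-filebeat.yaml
kubectl apply -f filebeat.yaml
kubectl -n monitor get pods -o wide    # expect one filebeat Pod per node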

Filebeat log collection configuration, with output to Elasticsearch.

1. Collecting service log files

  1. Modify the Elasticsearch configuration so that ES is accessed over plain HTTP instead of HTTPS.

vim /opt/elasticsearch-8.0.0/config/elasticsearch.yml
# Original:
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
# Change to:
xpack.security.http.ssl:
  enabled: false
  keystore.path: certs/http.p12

  2. Set the kibana_system user's password; the password is entered interactively.

/opt/elasticsearch-8.0.0/bin/elasticsearch-reset-password -u kibana_system -i

  3. Restart Elasticsearch.

systemctl restart elasticsearch

  4. Modify the Kibana configuration.

vim /opt/kibana-8.0.0/config/kibana.yml
# Add the following settings
elasticsearch.hosts: ["http://xxx.xxx.xxx.xxx:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "xxxxxxxx"

  5. Restart Kibana.

systemctl restart kibana

  6. Configure Filebeat to collect the Nginx service logs.

vim /opt/filebeat-8.0.0/filebeat.yml
# Change the configuration to the following
#logging.level: debug
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/*.log
  encoding: utf-8
  fields:
    type: "nginx"
  fields_under_root: true

- type: log
  enabled: true
  paths:
    - /usr/local/nginx/logs/*
  encoding: utf-8
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
  fields:
    type: "app"
  fields_under_root: true
  exclude_files: ['\.tgz$']
  exclude_lines: ['^DEBUG', '^INFO']
  include_lines: ['^WARN', '^ERROR']
  #scan_frequency: 10s
  #ignore_older: 10m
  #harvester_buffer_size: 16384
  #max_bytes: 1024000
  #backoff: 1s
  #close_eof: false
  #close_inactive: 5m
  #close_removed: true
  #close_renamed: false

setup.ilm.enabled: false   # ILM index lifecycle management; the default is true.
output.elasticsearch:
  hosts: ["xxx.xxx.xxx.xxx:9200"]
  username: "elastic"
  password: "xxxxxxxxxxxxxxxx"
  indices:
    - index: "nginx-%{+yyyy.MM.dd}"
      when.contains:
        type: "nginx"
    - index: "app-%{+yyyy.MM.dd}"
      when.contains:
        type: "app"
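
Before restarting Filebeat, the file syntax and the Elasticsearch connection can be checked with Filebeat's built-in test subcommands:

/opt/filebeat-8.0.0/filebeat test config -c /opt/filebeat-8.0.0/filebeat.yml   # validate the configuration
/opt/filebeat-8.0.0/filebeat test output -c /opt/filebeat-8.0.0/filebeat.yml   # verify ES connectivity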

    Here is a brief explanation of the configuration options that may be used above.

exclude_files: ['\.tgz$']              # Skip files whose names match these regular expressions
exclude_lines: ['^DEBUG', '^INFO']     # Drop lines that match these regular expressions
include_lines: ['^WARN', '^ERROR']     # Collect only lines that match these regular expressions; by default all non-empty lines are collected. This runs before exclude_lines
scan_frequency: 10s                    # How often to scan the configured paths for file changes; when a file changes, a harvester is started to collect it. Default is 10s
ignore_older: 10m                      # Skip files whose last modification time is older than this; unlimited by default. Should be greater than close_inactive
harvester_buffer_size: 16384           # Buffer size in bytes each harvester uses when reading a log file; default is 16384
max_bytes: 1024000                     # Maximum number of bytes in a single log message; bytes beyond this are not sent (but are still read). Default is 10 MB
backoff: 1s                            # Once a harvester reaches EOF, how often it checks the file for new content; default is 1s
close_eof: false                       # Close the harvester as soon as it reaches EOF; default is false
close_inactive: 5m                     # Close the harvester if no new lines arrive for this long after reaching EOF; default is 5m
close_removed: true                    # Close the harvester if the log file is deleted after it reaches EOF; default is true
close_renamed: false                   # Close the harvester if the log file is renamed after it reaches EOF; default is false
setup.ilm.enabled: false               # ILM index lifecycle management; the default is true
encoding: utf-8                        # Encoding used when reading the log files
fields_under_root: true                # Whether the keys defined under fields are added to the top level of the event; default is false
multiline.pattern: '^\['               # Pattern marking the start of a multiline event; this one matches lines beginning with [. multiline.pattern: '^[[:space:]]' would match leading whitespace
multiline.negate: true                 # true: lines that do NOT match the pattern are merged with the previous line; false: lines that DO match are merged with the previous line
multiline.match: after                 # after: the merged lines are appended after the matching line
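
As a hypothetical illustration of the multiline settings above: the first line below matches '^\[' and starts a new event, while the stack-trace lines do not match and are therefore appended to it, so all three lines are shipped as a single event.

[2024-01-22 16:36:00] ERROR request failed
java.lang.NullPointerException
    at com.example.Demo.run(Demo.java:42)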


2. Collecting Docker log files

   For the other steps, refer to the section above; only the configuration file is shown here.

#logging.level: debug
#logging.to_files: true
filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
      processors:
        - add_docker_metadata:
            match_source: true
      templates:
        - condition:
          config:
            - type: log
              containers.ids:
                - "${data.docker.container.id}"
              paths:
                - "/var/lib/docker/containers/${data.docker.container.id}/*.log"
              fields:
                type: "docker"
              fields_under_root: true


setup.ilm.enabled: false
output.elasticsearch:
  hosts: ["xxx.xxx.xxx.xxx:9200"]
  username: "elastic"
  password: "xxxxxxxxxxxx"
  indices:
    - index: "docker-%{+yyyy.MM.dd}"
      when.contains:
        type: "docker"
