Project Log Formatting

1. Modify the project's logback-spring.xml

<encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
    <providers>
        <pattern>
            <pattern>
                {
                "time": "%d{yyyy-MM-dd HH:mm:ss:SSS}",
                "tid": "%X{tid}",
                "thread": "%thread",
                "class": "%logger{40}",
                "describe": "%message",
                "hystrixFunction": "%X{hystrixFunction}",
                "hystrixTime": "%X{hystrixTime}",
                "executionTime": "%X{executionTime}",
                "stack_trace": "%exception{20}"
                }
            </pattern>
        </pattern>
    </providers>
</encoder>
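With this encoder, each log event is written as a single JSON object on one line, which is what lets Filebeat and Logstash parse it downstream. An illustrative example of one emitted line (all values below are made up; MDC fields like %X{tid} render as empty strings when not set):

```json
{
  "time": "2021-05-28 14:44:01:123",
  "tid": "7f9c2a1e",
  "thread": "http-nio-8080-exec-3",
  "class": "c.e.forum.controller.PostController",
  "describe": "failed to load post list",
  "hystrixFunction": "",
  "hystrixTime": "",
  "executionTime": "35",
  "stack_trace": "java.lang.NullPointerException: ..."
}
```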

2. Modify the Filebeat configuration file (directory: /data/filebeat)


filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- input_type: log

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /data/springcloud/logs/mobile/mapi-app-forum/error.log
    #- c:\programdata\elasticsearch\logs\*

  json.keys_under_root: true
  json.add_error_key: true

  # Exclude lines. A list of regular expressions to match. It drops the lines
  # that match any regular expression in the list.
  #exclude_lines: ["^DBG"]

  # Include lines. A list of regular expressions to match. It exports the lines
  # that match any regular expression in the list.
  #include_lines: ["^ERR", "^WARN"]

  # Exclude files. A list of regular expressions to match. Filebeat drops the
  # files that match any regular expression in the list. By default, no files
  # are dropped.
  #exclude_files: [".gz$"]

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering.
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is
  # common for Java stack traces or C line continuations.

  # The regexp pattern that has to be matched. The example pattern matches all
  # lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: true

  # Match can be set to "after" or "before". It is used to define if lines
  # should be appended to a pattern that was (not) matched before or after, or
  # as long as a pattern is not matched, based on negate.
  # Note: "after" is the equivalent of "previous" and "before" is the
  # equivalent of "next" in Logstash.
  #multiline.match: after

Start command: nohup ./filebeat -e -c xxx.yml > /dev/null 2>&1 &
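The two json.* options control how Filebeat decodes each log line. A minimal Python sketch (an illustration of the behavior, not Filebeat's actual code) of the difference json.keys_under_root makes:

```python
import json

def decode_line(line: str, keys_under_root: bool) -> dict:
    """Rough illustration of Filebeat's per-line JSON decoding."""
    fields = json.loads(line)
    event = {"source": "error.log"}  # Filebeat metadata (simplified)
    if keys_under_root:
        event.update(fields)       # decoded keys land at the event root
    else:
        event["json"] = fields     # decoded keys stay under a "json" sub-object
    return event

line = '{"tid": "abc123", "describe": "boom"}'
print(decode_line(line, keys_under_root=True)["tid"])  # abc123
```

With json.add_error_key: true, Filebeat additionally attaches an error field to the event when a line fails to parse as JSON.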

3. Modify the Logstash configuration file (add a filter)

 

filter {
  if [type] == "mapi-app-forum-error" {
    json {
      source => "message"
      target => "jsoncontent"
      remove_field => ["message"]
    }
  }

  if [type] == "web-forum-error" {
    json {
      source => "message"
      target => "jsoncontent"
      remove_field => ["message"]
    }
  }
}
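The json filter parses the string held in the event's message field into a structured object under jsoncontent, then drops the original message. An illustrative Python analogue (not Logstash internals) of what happens to one event:

```python
import json

def apply_json_filter(event: dict) -> dict:
    """Illustrative analogue of the Logstash filter above:
    source => "message", target => "jsoncontent", remove_field => ["message"]."""
    if event.get("type") in ("mapi-app-forum-error", "web-forum-error"):
        event["jsoncontent"] = json.loads(event["message"])
        del event["message"]
    return event

event = {"type": "mapi-app-forum-error",
         "message": '{"tid": "abc", "describe": "NPE in controller"}'}
out = apply_json_filter(event)
print(out["jsoncontent"]["tid"])  # abc
```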

Switch to root first if required:

sudo bash

Start command: nohup ./logstash -f ./xxx.conf > ./xxx.log 2>&1 &

4. Logstash type conversion for nested JSON logs (nested fields addressed with square brackets)

if [type] == "web-forum-info" {
  json {
    source => "message"
    target => "jsoncontent"
    remove_field => ["message"]
  }
  mutate {
    convert => {
      "[jsoncontent][backendReqTime]" => "integer"
      "[jsoncontent][requestCost]" => "integer"
    }
  }
}
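mutate/convert coerces the nested string values to integers so Elasticsearch can aggregate and sort on them. An illustrative Python analogue of the conversion step:

```python
def convert_nested_to_int(event: dict) -> dict:
    """Illustrative analogue of the mutate/convert block above:
    [jsoncontent][backendReqTime] and [jsoncontent][requestCost] -> integer."""
    jc = event.setdefault("jsoncontent", {})
    for field in ("backendReqTime", "requestCost"):
        if field in jc:
            jc[field] = int(jc[field])
    return event

event = {"jsoncontent": {"backendReqTime": "120", "requestCost": "35"}}
out = convert_nested_to_int(event)
print(out["jsoncontent"]["requestCost"] + 1)  # 36
```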

5. Logstash: removing redundant fields from nested JSON logs (square-bracket field references)

json {
  source => "message"
  target => "jsoncontent"
  remove_field => ["message", "[jsoncontent][source]", "[jsoncontent][input_type]", "[jsoncontent][type]", "[jsoncontent][offset]"]
}
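Square-bracket references such as [jsoncontent][source] address fields nested inside jsoncontent, here dropping Filebeat metadata that was duplicated into the parsed payload. An illustrative Python analogue of this parse-and-prune step:

```python
import json

def parse_and_prune(event: dict) -> dict:
    """Illustrative analogue: parse "message" into "jsoncontent", then drop
    "message" plus the duplicated Filebeat metadata fields nested inside it."""
    event["jsoncontent"] = json.loads(event.pop("message"))
    for field in ("source", "input_type", "type", "offset"):
        event["jsoncontent"].pop(field, None)
    return event

event = {"message": '{"tid": "abc", "source": "error.log", "offset": 42}'}
out = parse_and_prune(event)
print(sorted(out["jsoncontent"]))  # ['tid']
```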

posted @ 2021-05-28 14:44 不撞南墙