ELK Cluster Setup
2019-12-17 16:43 dribs

Part 1: Download the packages
Versions: elasticsearch-7.0.1-linux-x86_64.tar.gz, kibana-7.0.1-linux-x86_64.tar.gz, logstash-7.0.1.tar.gz
Download: https://pan.baidu.com/s/1uCfnTFqvNPeTrVfAUS2wcQ (extraction code: m0uu)
Part 2: Extract and configure
1. Add host entries:

[root@kafka7 config]# cat /etc/hosts
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
10.250.1.196 kafka000008 kafka000008
10.250.1.196 kafka7
10.250.1.197 kafka8
10.250.1.198 kafka9
2. Configure Elasticsearch:

[root@kafka7 config]# cat elasticsearch.yml
cluster.name: azero-dev-logger
node.name: kafka7
path.data: /data/es7-9-data
path.logs: /data/es7-9-logs
network.host: 10.250.1.196
http.port: 9200
discovery.seed_hosts: ["kafka7", "kafka8", "kafka9"]
cluster.initial_master_nodes: ["kafka7", "kafka8", "kafka9"]
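Before starting, the path.data and path.logs directories from the config above must exist and be writable by the non-root user that will run Elasticsearch. A minimal sketch; the user name "elastic" is an assumption, substitute your own:

```shell
# Create the data/log paths from elasticsearch.yml and hand them to the
# service user. "elastic" is an assumed name; substitute your own user.
ES_USER=elastic
mkdir -p /data/es7-9-data /data/es7-9-logs
if id "$ES_USER" >/dev/null 2>&1; then
  chown -R "$ES_USER:$ES_USER" /data/es7-9-data /data/es7-9-logs
fi
```

Repeat this on all three nodes; Elasticsearch will refuse to start if it cannot write to either path.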
3. Adjust jvm.options:
[root@kafka7 config]# cat jvm.options
-Xms4g
-Xmx4g
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
-Des.networkaddress.cache.ttl=60
-Des.networkaddress.cache.negative.ttl=10
-XX:+AlwaysPreTouch
-Xss1m
-Djava.awt.headless=true
-Dfile.encoding=UTF-8
-Djna.nosys=true
-XX:-OmitStackTraceInFastThrow
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true
-Djava.io.tmpdir=${ES_TMPDIR}
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=data
-XX:ErrorFile=logs/hs_err_pid%p.log
8:-XX:+PrintGCDetails
8:-XX:+PrintGCDateStamps
8:-XX:+PrintTenuringDistribution
8:-XX:+PrintGCApplicationStoppedTime
8:-Xloggc:logs/gc.log
8:-XX:+UseGCLogFileRotation
8:-XX:NumberOfGCLogFiles=32
8:-XX:GCLogFileSize=64m
9-:-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m
9-:-Djava.locale.providers=COMPAT
The other two nodes are configured the same way; just change node.name and network.host in elasticsearch.yml to match each host, and tune the jvm.options values to your own needs.
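Since the three nodes differ only in node.name and network.host, the per-node files can be generated from one shared template instead of edited by hand. A minimal sketch; the template and output paths under /tmp are for illustration only:

```shell
# Sketch: generate each node's elasticsearch.yml from a shared template.
# Template and output paths under /tmp are for illustration only.
cat > /tmp/elasticsearch.yml.tmpl <<'EOF'
cluster.name: azero-dev-logger
node.name: NODE_NAME
path.data: /data/es7-9-data
path.logs: /data/es7-9-logs
network.host: NODE_IP
http.port: 9200
discovery.seed_hosts: ["kafka7", "kafka8", "kafka9"]
cluster.initial_master_nodes: ["kafka7", "kafka8", "kafka9"]
EOF

# name:ip pairs for the three nodes in this cluster
for node in kafka7:10.250.1.196 kafka8:10.250.1.197 kafka9:10.250.1.198; do
  name=${node%%:*}; ip=${node##*:}
  sed -e "s/NODE_NAME/$name/" -e "s/NODE_IP/$ip/" \
    /tmp/elasticsearch.yml.tmpl > "/tmp/elasticsearch-$name.yml"
done
```

Copy each generated file to the matching host's config directory.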
Logstash
1. Go into the logstash directory and configure the pipeline. Pay attention to the topic names (topics => ["xxx.log.live"], topics => ["xxx.log.runtime"]) and replace them with your own.
[root@kafka7 config]# cat logstash-kafka.conf
input {
  kafka {
    bootstrap_servers => ["10.250.1.196:9092,10.250.1.197:9092,10.250.1.198:9092"]
    client_id => "kafka_client_1"
    group_id => "logstash"
    auto_offset_reset => "latest"
    consumer_threads => 16
    topics => ["xxx.log.live"]
    type => "live"
    decorate_events => true
    codec => json
  }
  kafka {
    bootstrap_servers => ["10.250.1.196:9092,10.250.1.197:9092,10.250.1.198:9092"]
    client_id => "kafka_client_2"
    group_id => "logstash"
    auto_offset_reset => "latest"
    consumer_threads => 16
    topics => ["xxx.log.runtime"]
    type => "runtime"
    decorate_events => true
    codec => json
  }
}
filter {
  # Match the "time" field in the original log and use it as @timestamp.
  # "time" is already local time, so there is no 8-hour offset to correct.
  date {
    match => ["time", "yyyy-MM-dd'T'HH:mm:ss.S'Z'"]
    target => "@timestamp"
  }
}
output {
  if [type] == "live" {
    elasticsearch {
      hosts => ["10.250.1.196:9200","10.250.1.197:9200","10.250.1.198:9200"]
      index => "xxxx.log.live-%{+YYYY-MM-dd}"
      #document_type => "form"
      #document_id => "%{id}"
    }
  }
  if [type] == "runtime" {
    elasticsearch {
      hosts => ["10.250.1.196:9200","10.250.1.197:9200","10.250.1.198:9200"]
      index => "xxxx.log.runtime-%{+YYYY-MM-dd}"
      #document_type => "form"
      #document_id => "%{id}"
    }
  }
}
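The date filter above decides what UTC instant each raw "time" value becomes; when in doubt, a quick round trip through GNU date shows what a sample timestamp represents. The sample value below is made up for illustration:

```shell
# Sketch: confirm what UTC instant a sample "time" value represents, so
# the parsed @timestamp matches expectations. Sample value is made up.
sample="2019-12-17T08:43:00Z"
epoch=$(date -u -d "$sample" +%s)
date -u -d "@$epoch" "+%Y-%m-%d %H:%M:%S UTC"
```

Separately, `bin/logstash -f config/logstash-kafka.conf --config.test_and_exit` validates the pipeline syntax before a real start.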
2. Adjust Logstash's jvm.options to suit your hardware:

[root@kafka7 config]# cat jvm.options
-Xms4g
-Xmx4g
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
-Djava.awt.headless=true
-Dfile.encoding=UTF-8
-Djruby.compile.invokedynamic=true
-Djruby.jit.threshold=0
-XX:+HeapDumpOnOutOfMemoryError
-Djava.security.egd=file:/dev/urandom
Kibana

Kibana only needs to be configured on one node. Go into the kibana directory:

[root@kafka7 config]# cat kibana.yml
server.port: 5601
server.host: "10.250.1.196"
elasticsearch.hosts: ["http://10.250.1.196:9200","http://10.250.1.197:9200","http://10.250.1.198:9200"]
Part 3: Start the services
Elasticsearch refuses to run as root, so start it as a normal user and make sure that user has the needed permissions on the install, data, and log directories. On first start you may also trip the vm.max_map_count bootstrap check; fix it as root with: sysctl -w vm.max_map_count=262144
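The sysctl fix above only lasts until reboot; it and the open-files limit Elasticsearch requires can be made permanent. A sketch of the relevant fragments; the file name under /etc/sysctl.d and the "elastic" user are assumptions, adjust to your setup:

```
# /etc/sysctl.d/99-elasticsearch.conf
vm.max_map_count = 262144

# /etc/security/limits.conf (Elasticsearch needs at least 65535 open files)
elastic  soft  nofile  65535
elastic  hard  nofile  65535
```

Run `sysctl --system` (or reboot) and log in again for both changes to take effect.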
# ELK on kafka7/8/9: su to the normal user first, then start each service

# start elasticsearch
[root@kafka7 elasticsearch-7.0.1]# pwd
/data/elasticsearch-7.0.1
[root@kafka7 elasticsearch-7.0.1]# sh ./bin/elasticsearch &

# start logstash
[root@kafka7 logstash-7.0.1]# pwd
/data/logstash-7.0.1
[root@kafka7 logstash-7.0.1]# sh ./bin/logstash -f config/logstash-kafka.conf

# start kibana
/data/kibana-7.0.1-linux-x86_64/bin/../node/bin/node --no-warnings --max-http-header-size=65536 /data/kibana-7.0.1-linux-x86_64/bin/../src/cli
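Once all three services are up, a few curl probes confirm the cluster actually formed. A sketch using this cluster's IPs, wrapped in functions so they can be rerun at will:

```shell
# Quick post-start health checks for this cluster (sketch).
# A "green" or "yellow" cluster status means the ES nodes found each other.
check_es()     { curl -s "http://10.250.1.196:9200/_cluster/health?pretty"; }
check_nodes()  { curl -s "http://10.250.1.196:9200/_cat/nodes?v"; }
check_kibana() { curl -s "http://10.250.1.196:5601/api/status"; }
```

Run check_es after startup; with all three nodes joined, the number_of_nodes field should read 3.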