logstash

Installation

sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

vi /etc/yum.repos.d/logstash.repo

[logstash-8.x]
name=Elastic repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

 sudo yum install logstash
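Once the install finishes, a quick sanity check of the binary and package (the exact version depends on what yum resolved):

rpm -qi logstash
/usr/share/logstash/bin/logstash --version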

Installation layout

logstash.service   :   /etc/systemd/system/logstash.service

home:   /usr/share/logstash

bin:         /usr/share/logstash/bin

config:    /etc/logstash

log:         /var/log/logstash/

plugins:  /usr/share/logstash/plugins

data:      /var/lib/logstash (contains .lock, etc.)
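The layout above can be cross-checked against the package itself (optional, assuming the rpm install from the previous step):

rpm -ql logstash | head
ls /etc/logstash /var/log/logstash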

 

Configuration

 

pipeline configuration files        which define the Logstash processing pipeline

/etc/logstash/conf.d    

 

 settings files                             which specify options that control Logstash startup and execution

/etc/logstash/logstash.yml        

/etc/logstash/pipelines.yml  Contains the framework and instructions for running multiple pipelines in a single Logstash instance.

/etc/logstash/jvm.options    Contains JVM configuration flags. Use this file to set initial and maximum values for total heap space.

/etc/logstash/startup.options 
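For reference, a minimal pipelines.yml sketch that wires a single pipeline to the conf.d directory (this mirrors the packaged default; the pipeline.id value is arbitrary):

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"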

 

Testing the installation

First pipeline

cd /usr/share/logstash/

bin/logstash -e 'input {stdin{}} output{stdout{}}'          this opens a shell; after a short wait it prints "The stdin plugin is now waiting for input:"

Type hello world and press Enter; it prints:

{
    "@version" => "1",
    "message" => "hello world",
    "@timestamp" => 2022-04-26T09:18:26.485741Z,
    "event" => {
        "original" => "hello world"
    },
    "host" => {
        "hostname" => "10-52-6-111"
    }
}

 

Installation succeeded.
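The same -e flag accepts a fuller pipeline. For example, a small illustrative variation that tags each event and prints it with the rubydebug codec:

bin/logstash -e 'input { stdin {} } filter { mutate { add_tag => ["debug"] } } output { stdout { codec => rubydebug } }'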

 ----------------------------------------------------------------------------------------------------

 

Pipeline configuration

pwd
/etc/logstash/conf.d

 

Final working configuration:

Relevant logstash, kafka, and es versions:

Using bundled JDK: /usr/share/logstash/jdk   logstash 8.1.3

es    7.16.3

kafka  2.7.1

cat /etc/logstash/conf.d/kafka_to_es.conf
input {
  kafka {
    bootstrap_servers => "106:9093,10.1:9093,10:9093"
    topics => "vslogs"
    group_id => "vslogs_group_id_1"
    client_id => "vslogs_client_id_1"
    auto_offset_reset => "latest"
    consumer_threads => 3
    decorate_events => true
    type => "vslogs"
    codec => "json"
    sasl_mechanism => "SCRAM-SHA-256"
    security_protocol => "SASL_PLAINTEXT"
    sasl_jaas_config => "org.apache.kafka.common.security.scram.ScramLoginModule required username='' password='';"
  }

  kafka {
    bootstrap_servers => "6:9093,2.31:9093,1.4.0.112:9093"
    topics => "vsulblog"
    group_id => "vsulblog_group_id_1"
    client_id => "vsulblog_client_id_1"
    auto_offset_reset => "latest"
    consumer_threads => 3
    decorate_events => true
    type => "vsulblog"
    codec => "json"
    sasl_mechanism => "SCRAM-SHA-256"
    security_protocol => "SASL_PLAINTEXT"
    sasl_jaas_config => "org.apache.kafka.common.security.scram.ScramLoginModule required username='' password='';"
  }
}

filter {}

output {
  if [type] == "vslogs" {
    elasticsearch {
      hosts => [ ":9200", ":9200", ":9200" ]
      index => "vs-vslogs"
      user => ""
      password => ""
    }
  }
  if [type] == "vsulblog" {
    elasticsearch {
      hosts => [ ":9200", ":9200", "1.4.1.9:9200" ]
      index => "vs-vsulblog"
      user => ""
      password => ""
    }
  }
}

Notes on this configuration file:

1. When the input section contains multiple kafka inputs, client_id => "client1" and
client_id => "client2" must both be set and must be different.

In cases when multiple inputs are being used in a single pipeline, reading from different topics,
it’s essential to set a different group_id => ... for each input. Setting a unique client_id => ... is also recommended.

2. Parameter names differ in older versions:
  topics => "accesslogs"    -- older versions of logstash use the parameter topic_id instead
  bootstrap_servers => "JANSON01:9092,JANSON02:9092,JANSON03:9092"    -- older versions of logstash use zk_connect => "JANSON01:2181,xx" instead



Configuration for reading from a file and writing to kafka:

input {
  file {
    codec => plain {
      charset => "UTF-8"
    }
    # the path can also be a glob such as "/tmp/log/*" to pick up everything under that directory
    path => "/root/logserver/gamelog.txt"
    discover_interval => 5
    start_position => "beginning"
  }
}

output {
  kafka {
    topic_id => "gamelogs"
    codec => plain {
      format => "%{message}"
      charset => "UTF-8"
    }
    bootstrap_servers => "node01:9092,node02:9092,node03:9092"
  }
}
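While testing a pipeline like this, it can help to temporarily print events to the console as well. Logstash merges multiple output sections, so a separate block like the sketch below (or an extra stdout plugin inside the existing output) works; it is a debugging aid, not part of the original config:

output {
  # temporary: print each event to the console; remove once events reach kafka
  stdout { codec => rubydebug }
}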

 

Using the bin tools:

List the installed plugins

/usr/share/logstash/bin/logstash-plugin list

Check that a configuration file is written correctly

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/kafka_to_es.conf --config.test_and_exit
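The same config can also be run in the foreground for debugging, optionally reloading it on change (standard command-line flags):

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/kafka_to_es.conf --config.reload.automatic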

 

 

Running as a service

 

systemctl cat logstash
# /etc/systemd/system/logstash.service
[Unit]
Description=logstash

[Service]
Type=simple
User=logstash
Group=logstash
# Load env vars from /etc/default/ and /etc/sysconfig/ if they exist.
# Prefixing the path with '-' makes it try to load, but if the file doesn't
# exist, it continues onward.
EnvironmentFile=-/etc/default/logstash
EnvironmentFile=-/etc/sysconfig/logstash
ExecStart=/usr/share/logstash/bin/logstash "--path.settings" "/etc/logstash"
Restart=always
WorkingDirectory=/
Nice=19
LimitNOFILE=16384

# When stopping, how long to wait before giving up and sending SIGKILL?
# Keep in mind that SIGKILL on a process can cause data loss.
TimeoutStopSec=infinity

[Install]
WantedBy=multi-user.target
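With the unit in place, the usual systemd commands apply:

sudo systemctl daemon-reload
sudo systemctl enable --now logstash
systemctl status logstash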

 

 --------------------------------------------------------------------------------------------------------------------

running on docker

https://www.elastic.co/guide/en/logstash/8.1/docker-config.html

 

1.

docker pull docker.elastic.co/logstash/logstash:8.1.3

2.

docker run --rm -it -v ~/pipeline/:/usr/share/logstash/pipeline/ docker.elastic.co/logstash/logstash:8.1.3

  --rm                             Automatically remove the container when it exits   

 -i, --interactive                    Keep STDIN open even if not attached

-t, --tty                            Allocate a pseudo-TTY

 -v, --volume list                    Bind mount a volume

 

3.

docker run --rm -it -v ~/settings/logstash.yml:/usr/share/logstash/config/logstash.yml docker.elastic.co/logstash/logstash:8.1.3

 

Or build a custom image with a Dockerfile like this one:

FROM docker.elastic.co/logstash/logstash:8.1.3
# remove the default pipeline config
RUN rm -f /usr/share/logstash/pipeline/logstash.conf
ADD pipeline/ /usr/share/logstash/pipeline/
ADD config/ /usr/share/logstash/config/

In plain terms, as the Dockerfile above shows, you provide two directories of configuration and mount (or copy) them in:

/usr/share/logstash/config/
    the program configuration, corresponding to /etc/logstash on a non-containerized install
/usr/share/logstash/pipeline/
    the pipeline configuration, corresponding to /etc/logstash/conf.d on a non-containerized install
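In other words, besides baking them into an image, the same two directories can simply be bind-mounted at run time (host paths are illustrative; the mounted config/ directory replaces the image's defaults, so it should contain at least logstash.yml, pipelines.yml, jvm.options and log4j2.properties):

docker run --rm -it \
  -v /root/docker-logstash/config/:/usr/share/logstash/config/ \
  -v /root/docker-logstash/pipeline/:/usr/share/logstash/pipeline/ \
  docker.elastic.co/logstash/logstash:8.1.3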

With an rpm/yum install the files follow the standard Linux layout, spread across /usr, /var and /etc,
whereas in the container everything lives under /usr/share/logstash.


Another gotcha with this image is that it enables xpack.management.enabled by default, so starting it as-is reports errors:

[2022-05-06T14:12:50,706][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2022-05-06T14:13:00,832][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"elasticsearch: Temporary failure in name resolution", :exception=>Manticore::ResolutionFailure, :cause=>java.net.UnknownHostException: elasticsearch: Temporary failure in name resolution}
[2022-05-06T14:13:00,833][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Temporary failure in name resolution"}

 

The fix is to uncomment xpack.management.enabled: false in logstash.yml, i.e. do not turn on the license-restricted xpack features.
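A minimal logstash.yml along those lines might look like this sketch (the xpack.monitoring line is an extra assumption, since the license/monitoring errors above come from the image's default monitoring target http://elasticsearch:9200):

# /usr/share/logstash/config/logstash.yml
http.host: "0.0.0.0"
xpack.management.enabled: false
# assumption: also switch off legacy internal monitoring so nothing tries to reach elasticsearch:9200
xpack.monitoring.enabled: false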

 

Debug in the foreground first

docker run  docker.elastic.co/logstash/logstash:8.1.3

 

docker run --rm -it -v /root/docker-logstash-config/logstash.yml:/usr/share/logstash/config/logstash.yml  docker.elastic.co/logstash/logstash:8.1.3

 

Once that works, run it in the background

docker run -itd --name logstash -v /root/docker-logstash-config/logstash.yml:/usr/share/logstash/config/logstash.yml  docker.elastic.co/logstash/logstash:8.1.3

 

docker logs -f logstash    view the logs of the corresponding container


---------------------------------------------------------------------------
Dockerfile, docker build

docker build [OPTIONS] PATH | URL | -
-f, --file string             Name of the Dockerfile (Default is 'PATH/Dockerfile')
 -t, --tag list                Name and optionally a tag in the 'name:tag' format

docker build -t nginx:v3 .
In plain terms: build an image tagged nginx:v3 from the Dockerfile in the current directory.

Note: do not leave unneeded files in the context path, because everything there is packaged and sent to the docker engine; too many files slows the build down. That trailing "." is the PATH.
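If the context directory contains anything that should not be shipped to the daemon, a .dockerignore file next to the Dockerfile keeps it out (entries are illustrative):

# .dockerignore
.git
*.log
*.tar.gz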

cat Dockerfile
FROM docker.elastic.co/logstash/logstash:8.1.3
RUN rm -f /usr/share/logstash/pipeline/logstash.conf
ADD pipeline/ /usr/share/logstash/pipeline/
ADD config/ /usr/share/logstash/config/

 

[root@1 dockerfile]# ls
config Dockerfile pipeline

 

[root@11 dockerfile]# ls config/
logstash.yml
[root@1 dockerfile]# ls pipeline/
kafka_to_es.conf

 

[root@1 dockerfile]# docker build -t logstash-8.1.3-kafka-to-es:v1 .
Sending build context to Docker daemon 19.46 kB
Step 1/4 : FROM docker.elastic.co/logstash/logstash:8.1.3
Trying to pull repository docker.elastic.co/logstash/logstash ...
8.1.3: Pulling from docker.elastic.co/logstash/logstash
0b785679cd71: Pull complete
14979cbdfceb: Pull complete
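Once the build completes, the custom image runs without any volume mounts, since the config is baked in (container name is illustrative):

docker run -itd --name logstash-kafka-to-es logstash-8.1.3-kafka-to-es:v1
docker logs -f logstash-kafka-to-es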

 





 

References:

https://www.elastic.co/guide/en/logstash/current/index.html

https://cloud.tencent.com/developer/article/1353068?from=10680

https://www.cnblogs.com/lshan/p/14121342.html
