Integrating Kafka with Flume

1. Environment Preparation

1. Prepare and start a Kafka cluster

Kafka 3.6.1 Cluster Installation and Deployment

2. Install Flume on any node of the Kafka cluster

Flume 1.11 Deployment

2. Flume as a Producer

1. Configure Flume

cd /usr/flume/apache-flume-1.11.0-bin/
mkdir jobs
mkdir /mnt/applog
vi jobs/file_to_kafka.conf
# 1 Define the components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# 2 Configure the source
a1.sources.r1.type = TAILDIR
a1.sources.r1.filegroups = f1
a1.sources.r1.filegroups.f1 = /mnt/applog/app.*
a1.sources.r1.positionFile = /usr/flume/apache-flume-1.11.0-bin/taildir_position.json

# 3 Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# 4 Configure the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.bootstrap.servers = 192.168.58.130:9092,192.168.58.131:9092,192.168.58.132:9092
a1.sinks.k1.kafka.topic = first
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1

# 5 Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
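
The TAILDIR source records how far it has read into each tailed file in the positionFile configured above, so tailing resumes where it left off after an agent restart. For illustration only (the inode and pos values below are made up), taildir_position.json looks roughly like:

[{"inode":16797001,"pos":7,"file":"/mnt/applog/app.log"}]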

2. Start Flume

bin/flume-ng agent -c conf/ -n a1 -f jobs/file_to_kafka.conf &
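
Since the agent was sent to the background with &, a quick sanity check before continuing is to confirm the process is actually running, for example:

ps -ef | grep flume | grep -v grep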

3. Create a topic named first

/usr/kafka/kafka_2.13-3.6.1/bin/kafka-topics.sh --bootstrap-server 192.168.58.130:9092 --create --partitions 1 --replication-factor 3 --topic first
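
To confirm the topic was created with the expected partition and replica layout, describe it:

/usr/kafka/kafka_2.13-3.6.1/bin/kafka-topics.sh --bootstrap-server 192.168.58.130:9092 --describe --topic first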

4. Start a Kafka console consumer

/usr/kafka/kafka_2.13-3.6.1/bin/kafka-console-consumer.sh --bootstrap-server 192.168.58.130:9092 --topic first

5. Append data to the monitored file

echo coreqi >> /mnt/applog/app.log
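
To generate a steadier stream of test data (purely an illustrative helper), append lines in a loop; any file matching /mnt/applog/app.* is picked up by the TAILDIR source:

for i in $(seq 1 10); do echo "coreqi-$i" >> /mnt/applog/app.log; sleep 1; done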

6. Check the Kafka consumer: the data appended to the file should show up in the consumer output

3. Flume as a Consumer

1. Configure Flume

vi /usr/flume/apache-flume-1.11.0-bin/jobs/kafka_to_file.conf
# 1 Define the components
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# 2 Configure the source
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.batchSize = 50
a1.sources.r1.batchDurationMillis = 200
a1.sources.r1.kafka.bootstrap.servers = 192.168.58.130:9092
a1.sources.r1.kafka.topics = first
a1.sources.r1.kafka.consumer.group.id = custom.g.id

# 3 Configure the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# 4 Configure the sink
a1.sinks.k1.type = logger

# 5 Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
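
Note that despite the job file's name, the logger sink above only prints events to the Flume log/console; it does not write a file. If you actually want events written to local files, one option is Flume's built-in file_roll sink; a minimal sketch, assuming the directory /mnt/flumeout already exists:

a1.sinks.k1.type = file_roll
a1.sinks.k1.sink.directory = /mnt/flumeout
a1.sinks.k1.sink.rollInterval = 30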

2. Start Flume

cd /usr/flume/apache-flume-1.11.0-bin/
bin/flume-ng agent -c conf/ -n a1 -f jobs/kafka_to_file.conf -Dflume.root.logger=INFO,console
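
Once the agent is up, you can verify that the Kafka source has joined the consumer group configured above (custom.g.id) and check its offsets:

/usr/kafka/kafka_2.13-3.6.1/bin/kafka-consumer-groups.sh --bootstrap-server 192.168.58.130:9092 --describe --group custom.g.id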

3. Start a Kafka console producer

/usr/kafka/kafka_2.13-3.6.1/bin/kafka-console-producer.sh --bootstrap-server 192.168.58.130:9092 --topic first

Enter some data, for example: hello world

4. Watch the log output on the console: every event the source pulls from Kafka is printed by the logger sink, as illustrated below
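
For reference, the logger sink dumps each event as its headers plus a hex dump of the body (truncated to 16 bytes by default). For the hello world input you should see a line roughly like the following; the exact header contents vary by version:

Event: { headers:{...} body: 68 65 6C 6C 6F 20 77 6F 72 6C 64 hello world }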
