Install a Kafka cluster using Kafka's KRaft mode, with no dependency on a ZooKeeper cluster.

Kafka deployment

This example uses Kafka's KRaft mode. Its advantage is that it no longer depends on a ZooKeeper cluster, which makes deployment and maintenance much simpler. Three controller nodes take the place of ZooKeeper: cluster metadata is stored in the controllers, and the controllers manage the Kafka cluster directly.

Contents of the hosts file on all three hosts

# cat /etc/hosts 
192.168.8.198 es-hot1
192.168.8.199 es-warm1
192.168.8.201 es-cold1
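
Once these entries are in place on all three hosts, a quick connectivity check can be run from any node (a minimal sketch; it assumes ICMP is not blocked between the hosts):

# Confirm that every hostname resolves and responds
for h in es-hot1 es-warm1 es-cold1; do ping -c 1 $h; done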

Download

http://archive.apache.org/dist/kafka/

[root@es-hot1 ~]# wget http://archive.apache.org/dist/kafka/3.5.0/kafka_2.13-3.5.0.tgz
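
The Apache archive publishes a .sha512 file next to each release; comparing its digest with a locally computed one is a quick integrity check (the checksum file's layout varies, so the comparison here is left to the eye):

# Download the published checksum and compare it with the local digest
wget http://archive.apache.org/dist/kafka/3.5.0/kafka_2.13-3.5.0.tgz.sha512
sha512sum kafka_2.13-3.5.0.tgz
cat kafka_2.13-3.5.0.tgz.sha512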

Install the JDK

Install via dnf

[root@es-hot1 ~]# dnf -y install java-11-openjdk

Or install from a binary tarball

[root@es-hot1 ~]# wget https://builds.openlogic.com/downloadJDK/openlogic-openjdk/17.0.7+7/openlogic-openjdk-17.0.7+7-linux-x64.tar.gz
[root@es-hot1 ~]# tar -zxf openlogic-openjdk-17.0.7+7-linux-x64.tar.gz -C /usr/local
[root@es-hot1 ~]# cd /usr/local/openlogic-openjdk-17.0.7+7-linux-x64/
[root@es-hot1 openlogic-openjdk-17.0.7+7-linux-x64]# ls
bin  conf  demo  include  jmods  legal  lib  man  release

# Add environment variables (a CLASSPATH entry is unnecessary on JDK 9+; rt.jar/dt.jar/tools.jar no longer exist)
[root@es-hot1 ~]# vim /etc/profile
export JAVA_HOME=/usr/local/openlogic-openjdk-17.0.7+7-linux-x64
export PATH=$PATH:${JAVA_HOME}/bin
[root@es-hot1 ~]# source /etc/profile
[root@es-hot1 ~]# java -version
openjdk version "17.0.7" 2023-04-18
OpenJDK Runtime Environment OpenLogic-OpenJDK (build 17.0.7+7-adhoc.root.jdk17u)
OpenJDK 64-Bit Server VM OpenLogic-OpenJDK (build 17.0.7+7-adhoc.root.jdk17u, mixed mode, sharing)

# Create a symlink
[root@es-hot1 ~]# ln -s /usr/local/openlogic-openjdk-17.0.7+7-linux-x64/bin/java /usr/bin/java

Extract and install Kafka

[root@es-hot1 ~]# tar -zxf kafka_2.13-3.5.0.tgz -C /usr/local
[root@es-hot1 ~]# cd /usr/local/kafka_2.13-3.5.0/
[root@es-hot1 kafka_2.13-3.5.0]# ls
LICENSE  NOTICE  bin  config  libs  licenses  site-docs
# Create the Kafka data directory
[root@es-hot1 kafka_2.13-3.5.0]# mkdir -p /data/kafka-data

Edit the configuration

Note: in a properties file a "#" only starts a comment at the beginning of a line, so do not append comments after a value.

[root@es-hot1 kafka_2.13-3.5.0]# vim /usr/local/kafka_2.13-3.5.0/config/kraft/server.properties
# Roles of this node: the controller manages cluster metadata (replacing ZooKeeper), the broker serves client traffic
process.roles=broker,controller
# Node ID; must be different on every node
node.id=1
# Controller quorum voters: the nodes that manage cluster state (taking over ZooKeeper's role)
controller.quorum.voters=1@es-hot1:9093,2@es-warm1:9093,3@es-cold1:9093
listeners=PLAINTEXT://:9092,CONTROLLER://:9093
inter.broker.listener.name=PLAINTEXT
# IP:port the broker advertises to clients; each node fills in its own IP
advertised.listeners=PLAINTEXT://192.168.8.198:9092
controller.listener.names=CONTROLLER
listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
# Data storage location
log.dirs=/data/kafka-data
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
# Messages are retained for one week by default
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000

These settings must be changed on all three servers; each node uses its own values:

# Node ID; must be unique per node and consistent with the controller.quorum.voters list, e.g. 1@es-hot1:9093
node.id=1
# Advertised listener; each node fills in its own IP and port
advertised.listeners=PLAINTEXT://192.168.8.198:9092
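
A small scripted way to apply these two per-node values (a sketch only; it assumes the stock config has already been copied to every host and that the ID/IP pair below is adjusted for each node):

# Run on each node with that node's own ID and IP (example values for es-warm1 shown)
NODE_ID=2
NODE_IP=192.168.8.199
CONF=/usr/local/kafka_2.13-3.5.0/config/kraft/server.properties
sed -i "s/^node.id=.*/node.id=${NODE_ID}/" $CONF
sed -i "s#^advertised.listeners=.*#advertised.listeners=PLAINTEXT://${NODE_IP}:9092#" $CONF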

Initialize the cluster

Run the following command on one of the servers to generate a cluster UUID

[root@es-hot1 ~]# sh /usr/local/kafka_2.13-3.5.0/bin/kafka-storage.sh random-uuid
mO5FD8M9S0aRmVZxHZkZIA

Use this UUID to format the Kafka storage directory; run the following command on all three servers

[root@es-hot1 ~]# sh /usr/local/kafka_2.13-3.5.0/bin/kafka-storage.sh format -t mO5FD8M9S0aRmVZxHZkZIA -c /usr/local/kafka_2.13-3.5.0/config/kraft/server.properties
Formatting /data/kafka-data with metadata.version 3.5-IV2.
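
The format step writes a meta.properties file into the data directory; checking it confirms that the node recorded the shared cluster ID and its own node.id (exact fields may differ slightly between versions):

# Inspect the metadata written by the format step
cat /data/kafka-data/meta.properties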

Start the cluster

All three nodes need to be started

[root@es-hot1 ~]# sh /usr/local/kafka_2.13-3.5.0/bin/kafka-server-start.sh -daemon /usr/local/kafka_2.13-3.5.0/config/kraft/server.properties
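
If password-less root SSH between the nodes happens to be available (an assumption, it is not set up in this article), all three brokers can also be started from one host in a loop:

# Start Kafka on every node; assumes identical install paths on all hosts
for h in es-hot1 es-warm1 es-cold1; do
  ssh root@$h "sh /usr/local/kafka_2.13-3.5.0/bin/kafka-server-start.sh -daemon /usr/local/kafka_2.13-3.5.0/config/kraft/server.properties"
done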

Verification

View the logs

[root@es-hot1 ~]# tail -n 30 /usr/local/kafka_2.13-3.5.0/logs/kafkaServer.out 
[2023-07-18 21:46:44,857] INFO [BrokerServer id=1] Finished waiting for all of the authorizer futures to be completed (kafka.server.BrokerServer)
[2023-07-18 21:46:44,857] INFO [BrokerServer id=1] Waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer)
[2023-07-18 21:46:44,857] INFO [BrokerServer id=1] Finished waiting for all of the SocketServer Acceptors to be started (kafka.server.BrokerServer)
[2023-07-18 21:46:44,857] INFO [BrokerServer id=1] Transition from STARTING to STARTED (kafka.server.BrokerServer)
[2023-07-18 21:46:44,858] INFO Kafka version: 3.5.0 (org.apache.kafka.common.utils.AppInfoParser)
[2023-07-18 21:46:44,858] INFO Kafka commitId: c97b88d5db4de28d (org.apache.kafka.common.utils.AppInfoParser)
[2023-07-18 21:46:44,858] INFO Kafka startTimeMs: 1689688004857 (org.apache.kafka.common.utils.AppInfoParser)
[2023-07-18 21:46:44,862] INFO [KafkaRaftServer nodeId=1] Kafka Server started (kafka.server.KafkaRaftServer)

Check the process

[root@es-hot1 ~]# ps -aux | grep kafka
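
Kafka 3.5 also ships kafka-metadata-quorum.sh, which reports the state of the KRaft controller quorum; on a healthy cluster it should show one leader and the three voters (output omitted here):

# Describe the KRaft quorum status and replication progress
sh /usr/local/kafka_2.13-3.5.0/bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe --status
sh /usr/local/kafka_2.13-3.5.0/bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe --replication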

Create a test topic

[root@es-hot1 ~]# sh /usr/local/kafka_2.13-3.5.0/bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic test --partitions 3 --replication-factor 3
Created topic test.

View topic details

[root@es-hot1 ~]# sh /usr/local/kafka_2.13-3.5.0/bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic test
Topic: test     TopicId: QvDV8pJwSTC7NQwrNfjfAw PartitionCount: 3       ReplicationFactor: 3    Configs: segment.bytes=1073741824
        Topic: test     Partition: 0    Leader: 2       Replicas: 2,3,1 Isr: 2,3,1
        Topic: test     Partition: 1    Leader: 3       Replicas: 3,1,2 Isr: 3,1,2
        Topic: test     Partition: 2    Leader: 1       Replicas: 1,2,3 Isr: 1,2,3
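
A quick end-to-end smoke test with the bundled console clients (run the consumer in a second terminal and type a few lines into the producer; Ctrl-C stops either one):

# Produce messages to the test topic
sh /usr/local/kafka_2.13-3.5.0/bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test
# Consume them from the beginning, in another terminal
sh /usr/local/kafka_2.13-3.5.0/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning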

Configure a systemd service for management

All three nodes need this configured

[root@es-hot1 ~]# cat > /usr/lib/systemd/system/kafka.service << EOF
[Unit]
Description=Apache Kafka server (broker)
After=network.target

[Service]
Type=forking
User=root
Group=root
ExecStart=/usr/local/kafka_2.13-3.5.0/bin/kafka-server-start.sh -daemon /usr/local/kafka_2.13-3.5.0/config/kraft/server.properties
ExecStop=/usr/local/kafka_2.13-3.5.0/bin/kafka-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

[root@es-hot1 ~]# systemctl daemon-reload
[root@es-hot1 ~]# systemctl restart kafka
[root@es-hot1 ~]# systemctl enable kafka
Created symlink /etc/systemd/system/multi-user.target.wants/kafka.service → /usr/lib/systemd/system/kafka.service.
[root@es-hot1 ~]# systemctl status kafka
● kafka.service - Apache Kafka server (broker)
     Loaded: loaded (/usr/lib/systemd/system/kafka.service; enabled; preset: disabled)
     Active: active (running) since Tue 2023-07-18 22:42:07 CST; 7s ago
    Process: 3067 ExecStart=/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/kraft/server.properties (code=exited, status=0/SUCCESS)
   Main PID: 3412 (java)
      Tasks: 87 (limit: 23012)
     Memory: 315.7M
        CPU: 12.183s
     CGroup: /system.slice/kafka.service
             └─3412 java -Xmx1G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:Max>

Jul 18 22:42:06 es-hot2 systemd[1]: Starting Apache Kafka server (broker)...
Jul 18 22:42:07 es-hot2 systemd[1]: Started Apache Kafka server (broker).
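
A final check that both listeners are bound on each node (9092 for client traffic, 9093 for the controller listener):

# Confirm the broker and controller ports are listening
ss -lntp | grep -E '9092|9093'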

Kafka UI installation and deployment

Project page: https://github.com/provectus/kafka-ui

Start the container

[root@es-master ~]# docker run --name kafka-ui -d -p 8080:8080 -e DYNAMIC_CONFIG_ENABLED=true -e KAFKA_CLUSTERS_0_NAME="local" -e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS="192.168.8.198:9092" --restart always provectuslabs/kafka-ui:master
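
Listing all three brokers in the bootstrap-servers value is a reasonable variant (assuming the UI host can reach every broker), so the UI keeps working even if a single broker is down:

# Same command, pointing the UI at the full broker list
docker run --name kafka-ui -d -p 8080:8080 \
  -e DYNAMIC_CONFIG_ENABLED=true \
  -e KAFKA_CLUSTERS_0_NAME="local" \
  -e KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS="192.168.8.198:9092,192.168.8.199:9092,192.168.8.201:9092" \
  --restart always provectuslabs/kafka-ui:master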

Browse to port 8080 on the host running the container to verify access and view the cluster status.
