Sending Messages to Kafka Across Clusters
1. Scenario
An application in cluster B needs to send messages to a Kafka cluster in cluster A, but clusters A and B are not directly routable to each other; traffic must pass through a forwarding layer.
2. Problem
Addresses of the Kafka cluster:
10.10.10.1:9092
10.10.10.2:9092
10.10.10.3:9092
After one layer of forwarding (port 9092 mapped to 19092), the Kafka addresses reachable from cluster B are:
20.20.20.1:19092
20.20.20.2:19092
20.20.20.3:19092
From cluster B, the forwarded address responds to curl (curl 20.20.20.1:19092), yet sending a message to Kafka with the console producer fails:
${KAFKA_HOME}/bin/kafka-console-producer.sh --topic kafka-test --bootstrap-server 20.20.20.1:19092
3. Analysis
Through 20.20.20.1:19092 the client can indeed connect to the Kafka cluster. The cluster then returns the list of broker addresses that clients should connect to, and all subsequent messages are sent to those returned addresses.
That list is the set of addresses the brokers register in ZooKeeper, i.e. the value of the KAFKA_ADVERTISED_LISTENERS configuration.
Those advertised addresses do not go through the forwarding layer, so the client cannot send messages to Kafka.
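The bootstrap flow described above can be sketched in plain Python (a simulation only, no real Kafka client; the addresses are the ones from this article):

```python
# Sketch of the Kafka client bootstrap flow described above. The bootstrap
# address is only used for the initial metadata request; produce requests go
# to the advertised addresses the metadata response hands back.

# What the brokers advertise (the addresses they registered in ZooKeeper).
ADVERTISED = ["10.10.10.1:9092", "10.10.10.2:9092", "10.10.10.3:9092"]

# Addresses reachable from cluster B through the forwarding layer.
REACHABLE = {"20.20.20.1:19092", "20.20.20.2:19092", "20.20.20.3:19092"}

def broker_addresses(bootstrap):
    """Step 1: connect to the bootstrap address.
    Step 2: the metadata response replaces it with the advertised addresses."""
    if bootstrap not in REACHABLE:
        raise ConnectionError(f"cannot reach bootstrap {bootstrap}")
    return ADVERTISED

# The bootstrap connection succeeds, but every address the client is then
# told to produce to is unreachable from cluster B, so sending fails.
brokers = broker_addresses("20.20.20.1:19092")
unreachable = [b for b in brokers if b not in REACHABLE]
print(unreachable)  # all three advertised 10.10.10.x:9092 addresses
```

This is exactly why curl to the forwarded address succeeds while producing fails: curl only exercises step 1.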
4. Solution
Solve it by mapping hostnames.
4.1. Change each broker's advertised listener to a hostname with the forwarded port (19092). In the containerized image below this is the KAFKA_ADVERTISED_LISTENERS environment variable (advertised.listeners in server.properties); each broker gets one value:
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka-1:19092
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka-2:19092
KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka-3:19092
4.2. On the three nodes running Kafka, add entries to /etc/hosts:
10.10.10.1 kafka-1
10.10.10.2 kafka-2
10.10.10.3 kafka-3
4.3. On the cluster B nodes, add entries to /etc/hosts:
20.20.20.1 kafka-1
20.20.20.2 kafka-2
20.20.20.3 kafka-3
4.4. The broker list returned to the application in cluster B is now:
kafka-1:19092
kafka-2:19092
kafka-3:19092
which hostname resolution on the cluster B side turns into:
20.20.20.1:19092
20.20.20.2:19092
20.20.20.3:19092
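The trick is that the same advertised hostnames resolve differently on each side, because each side carries its own /etc/hosts mapping (steps 4.2 and 4.3). A quick sketch, using only the mappings above:

```python
# The advertised list is now hostname-based, so each side resolves it
# through its own /etc/hosts view.
ADVERTISED = ["kafka-1:19092", "kafka-2:19092", "kafka-3:19092"]

# /etc/hosts on the Kafka nodes (step 4.2): direct broker IPs.
HOSTS_KAFKA_SIDE = {"kafka-1": "10.10.10.1",
                    "kafka-2": "10.10.10.2",
                    "kafka-3": "10.10.10.3"}

# /etc/hosts on the cluster B nodes (step 4.3): forwarded IPs.
HOSTS_CLUSTER_B = {"kafka-1": "20.20.20.1",
                   "kafka-2": "20.20.20.2",
                   "kafka-3": "20.20.20.3"}

def resolve(addr, hosts):
    """Resolve a host:port string through a hosts mapping."""
    host, port = addr.rsplit(":", 1)
    return f"{hosts[host]}:{port}"

# Cluster B resolves to the forwarded addresses, which it can reach.
print([resolve(a, HOSTS_CLUSTER_B) for a in ADVERTISED])
# The brokers resolve the same names to their own direct addresses.
print([resolve(a, HOSTS_KAFKA_SIDE) for a in ADVERTISED])
```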
4.5. Messages can now be sent to Kafka across clusters:
${KAFKA_HOME}/bin/kafka-console-producer.sh --topic kafka-test --bootstrap-server 20.20.20.1:19092
5. Appendix: a kafka.yaml for containerized deployment (a single broker only).
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kafka
    name: kafka-1
  name: kafka-1
  namespace: my-ns
spec:
  ports:
  - nodePort: 30090
    port: 19092
    protocol: TCP
    targetPort: 19092
  selector:
    app: kafka
    name: kafka-broker-1
  type: NodePort
---
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: kafka
    name: kafka-cm
  name: kafka-cm
  namespace: my-ns
data:
  server.properties: |-
    broker.id=0
    listeners=PLAINTEXT://:19092
    # Note: this address must be reachable by clients, i.e. the port must be
    # mapped to the host via hostPort.
    advertised.listeners=PLAINTEXT://kafka-1:19092
    num.network.threads=6
    num.io.threads=16
    socket.send.buffer.bytes=10485760
    socket.receive.buffer.bytes=10485760
    socket.request.max.bytes=214748364
    auto.create.topics.enable=true
    delete.topic.enable=true
    log.dirs=/kafka/kafka-data
    num.partitions=3
    default.replication.factor=2
    offsets.topic.num.partitions=50
    num.recovery.threads.per.data.dir=1
    offsets.topic.replication.factor=1
    transaction.state.log.replication.factor=1
    transaction.state.log.min.isr=1
    log.retention.hours=2
    log.segment.bytes=1073741824
    log.retention.check.interval.ms=300000
    log.cleanup.policy=delete
    zookeeper.connect=zk-hs.my-ns.svc.cluster.local:2181/kafka-base-test
    zookeeper.connection.timeout.ms=6000
    group.initial.rebalance.delay.ms=0
  start.sh: |
    #!/bin/bash -e
    cp /mount/kafka/config/server.properties $KAFKA_HOME/config/server.properties
    exec start-kafka.sh
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: kafka
    name: kafka-broker-1
  name: kafka-broker-1
  namespace: my-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka
      name: kafka-broker-1
  template:
    metadata:
      labels:
        app: kafka
        name: kafka-broker-1
    spec:
      containers:
      - command:
        - /mount/kafka/config/start.sh
        env:
        - name: TZ
          value: Asia/Shanghai
        image: kafka-standard-main-2.13_2.7.1:latest
        imagePullPolicy: Always
        name: kafka
        ports:
        - containerPort: 19092
          hostPort: 19092
          name: port
        volumeMounts:
        - mountPath: /mount/kafka/config
          name: kafka-cm
      dnsPolicy: ClusterFirstWithHostNet
      #hostNetwork: true  # note: hostNetwork cannot bind the same port twice; leave commented out if it conflicts
      hostAliases:
      - hostnames:
        - kafka-1
        ip: 10.10.10.1
      - hostnames:
        - kafka-2
        ip: 10.10.10.2
      - hostnames:
        - kafka-3
        ip: 10.10.10.3
      nodeSelector:
        kubernetes.io/hostname: 10.10.10.1
      restartPolicy: Always
      volumes:
      - configMap:
          defaultMode: 493
          name: kafka-cm
        name: kafka-cm
Reference:
https://www.luyouqi.com/shezhi/4703.html