Deploying Single-Node Kafka (KRaft Mode) on Kubernetes with SASL Authentication

1. Prerequisites

  • A working Kubernetes cluster
  • A StorageClass or PVs already provisioned; this guide uses the NFS CSI storage driver

2. Kubernetes cluster deployment guide

https://www.cnblogs.com/Leonardo-li/p/18796443

3. csi-driver-nfs deployment guide

https://www.cnblogs.com/Leonardo-li/p/18813140

4. Single-node Kafka deployment

4.1 Deploy the Kafka ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-config
  namespace: kafka-new
data:
  client.properties: |
    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=PLAIN
    sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
      username="admin" \
      password="123456";
kubectl apply -f kafka-cm.yaml
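For comparison with the Python client configuration used later, the `sasl.jaas.config` entry above is the standard JAAS string format expected by Java Kafka clients. A minimal sketch of rendering the same `client.properties` payload from a username/password pair (the `build_client_properties` helper is purely illustrative, not part of any Kafka API):

```python
# Hypothetical helper: renders a client.properties payload equivalent to the
# ConfigMap above. This is plain string templating for illustration only.
def build_client_properties(username: str, password: str) -> str:
    jaas = (
        'org.apache.kafka.common.security.plain.PlainLoginModule required '
        f'username="{username}" password="{password}";'
    )
    return (
        "security.protocol=SASL_PLAINTEXT\n"
        "sasl.mechanism=PLAIN\n"
        f"sasl.jaas.config={jaas}\n"
    )

props = build_client_properties("admin", "123456")
print(props)
```

Note that the Java-style JAAS string is only needed by JVM clients and Kafka's own CLI tools; the Python client in section 5 takes `sasl.username` and `sasl.password` directly.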

4.2 Deploy the Kafka headless Service

apiVersion: v1
kind: Service
metadata:
  name: kafka-headless
  namespace: kafka-new
spec:
  clusterIP: None  # Headless Service
  selector:
    app: kafka
  ports:
    - name: client
      port: 9092
      targetPort: 9092
    - name: controller
      port: 9093
      targetPort: 9093
kubectl apply -f kafka-headless.yaml
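The headless Service (`clusterIP: None`) is what gives each StatefulSet pod a stable DNS name of the form `<pod>.<service>.<namespace>.svc.cluster.local`; that name is reused below in `KAFKA_CFG_CONTROLLER_QUORUM_VOTERS` and `KAFKA_CFG_ADVERTISED_LISTENERS`. A small sketch of how the name is composed (assuming the default `cluster.local` cluster domain):

```python
# Sketch: how the stable DNS name for a StatefulSet pod is composed.
# With clusterIP: None, each pod is resolvable as
# <pod>.<service>.<namespace>.svc.<cluster-domain>.
def pod_fqdn(statefulset: str, ordinal: int, service: str, namespace: str,
             cluster_domain: str = "cluster.local") -> str:
    pod = f"{statefulset}-{ordinal}"  # StatefulSet pods are named <name>-<ordinal>
    return f"{pod}.{service}.{namespace}.svc.{cluster_domain}"

print(pod_fqdn("kafka", 0, "kafka-headless", "kafka-new"))
# -> kafka-0.kafka-headless.kafka-new.svc.cluster.local
```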

4.3 Deploy the Kafka StatefulSet

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
  namespace: kafka-new
spec:
  serviceName: kafka-headless
  replicas: 1
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      securityContext:
        runAsUser: 0
        runAsGroup: 0
      containers:
      - name: kafka
        image: bitnami/kafka:4.0.0
        env:
        - name: KAFKA_KRAFT_MODE
          value: "true"
        - name: KAFKA_CFG_NODE_ID
          value: "1"
        - name: KAFKA_CFG_PROCESS_ROLES
          value: "broker,controller"
        - name: KAFKA_CFG_CONTROLLER_QUORUM_VOTERS
          value: "1@kafka-0.kafka-headless.kafka-new.svc.cluster.local:9093"  # 重要:使用完整域名
        - name: KAFKA_CFG_LISTENERS
          value: "SASL_PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093"
        - name: KAFKA_CFG_ADVERTISED_LISTENERS
          value: "SASL_PLAINTEXT://kafka-0.kafka-headless.kafka-new.svc.cluster.local:9092"  # 使用 StatefulSet 域名
        - name: KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP
          value: "SASL_PLAINTEXT:SASL_PLAINTEXT,CONTROLLER:PLAINTEXT"
        - name: KAFKA_CFG_INTER_BROKER_LISTENER_NAME
          value: "SASL_PLAINTEXT"
        - name: KAFKA_CFG_CONTROLLER_LISTENER_NAMES
          value: "CONTROLLER"
        - name: KAFKA_CFG_SASL_ENABLED_MECHANISMS
          value: "PLAIN"
        - name: KAFKA_CFG_SASL_MECHANISM_INTER_BROKER_PROTOCOL
          value: "PLAIN"
        - name: KAFKA_GENERATE_CLUSTER_ID
          value: "true"
        - name: KAFKA_KRAFT_CLUSTER_ID
          value: "NR8Ovok8Q3u0nnGLfuqLtQ"
        - name: ALLOW_PLAINTEXT_LISTENER
          value: "yes"
        - name: KAFKA_CLIENT_USERS
          value: "admin"
        - name: KAFKA_CLIENT_PASSWORDS
          value: "123456"
        - name: KAFKA_HEAP_OPTS
          value: "-Xms1024m -Xmx2048m"
        ports:
        - containerPort: 9092
        - containerPort: 9093
        volumeMounts:
        - name: kafka-data
          mountPath: /bitnami/kafka
        - name: kafka-config  # mount the ConfigMap
          mountPath: /tmp/client.properties
          subPath: client.properties
      volumes:
      - name: kafka-config
        configMap:
          name: kafka-config  # must match the ConfigMap name
          items:
          - key: client.properties
            path: client.properties
  volumeClaimTemplates:
  - metadata:
      name: kafka-data  # must match the volumeMounts name above
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: nfs-csi
      resources:
        requests:
          storage: 10Gi
kubectl apply -f kafka-sts.yaml
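The `KAFKA_KRAFT_CLUSTER_ID` value above is a 16-byte UUID encoded as URL-safe base64 with the padding stripped (22 characters), the same format that `kafka-storage.sh random-uuid` emits. If you need a fresh one, a sketch in Python:

```python
import base64
import uuid

# Generate a KRaft cluster ID: 16 random bytes, URL-safe base64 encoded with
# the trailing "==" padding stripped, yielding a 22-character string such as
# the NR8Ovok8Q3u0nnGLfuqLtQ used in the manifest above.
def random_cluster_id() -> str:
    raw = uuid.uuid4().bytes  # 16 random bytes
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

cluster_id = random_cluster_id()
print(cluster_id)
```

Keep the ID fixed in the manifest: if the pod restarts against an existing data volume, the stored cluster ID must match the configured one.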

5. Producing and consuming messages with Python

5.1 Install the client library

pip install confluent-kafka

5.2 Producing messages: Producer.py

from confluent_kafka import Producer

conf = {
    'bootstrap.servers': '192.168.4.60:9092',  # replace with an address reachable from your client
    'security.protocol': 'SASL_PLAINTEXT',
    'sasl.mechanism': 'PLAIN',
    'sasl.username': 'admin',
    'sasl.password': '123456'
}

def delivery_report(err, msg):
    """Delivery callback; stay silent on success, report only errors."""
    if err:
        print(f"Delivery failed: {err}")

try:
    producer = Producer(conf)
    message = "Hello from Python"

    # core output: what was produced
    print(f"Produced message: {message}")

    producer.produce('test-topic', message, callback=delivery_report)
    producer.flush(timeout=5)

except Exception as e:
    print(f"Producer error: {e}")

5.3 Consuming messages: Consumer.py

from confluent_kafka import Consumer

conf = {
    'bootstrap.servers': '192.168.4.60:9092',  # replace with an address reachable from your client
    'security.protocol': 'SASL_PLAINTEXT',
    'sasl.mechanism': 'PLAIN',
    'sasl.username': 'admin',
    'sasl.password': '123456',
    'group.id': 'test-group',
    'auto.offset.reset': 'earliest'
}

try:
    consumer = Consumer(conf)
    consumer.subscribe(['test-topic'])

    print("等待消费消息... (Ctrl+C退出)")
    while True:
        msg = consumer.poll(1.0)
        if msg and not msg.error():
            # 核心输出:消费内容
            print(f"已消费消息: {msg.value().decode('utf-8')}")

except KeyboardInterrupt:
    pass
finally:
    consumer.close()

5.4 Running Producer.py prints

Produced message: Hello from Python

5.5 Running Consumer.py prints

Waiting for messages... (Ctrl+C to exit)
Consumed message: Hello from Python


With that, the single-node Kafka (KRaft mode) deployment with SASL authentication is complete!

posted @ 2025-05-29 11:25  Leonardo-li