Deploying single-node Kafka on Alibaba Cloud ECS with external access (with authentication)

1. ZooKeeper configuration (standalone)

zoo.cfg

tickTime=2000
initLimit=10
syncLimit=5
dataLogDir=/data/zookeeper/log/
dataDir=/data/zookeeper/data
clientPort=2181
server.1=*.*.*.*:2888:3888

With only one server entry configured, ZooKeeper starts in standalone mode.

./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/zookeeper/bin/../conf/zoo.cfg
Mode: standalone
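The `Mode: standalone` above follows directly from the config: ZooKeeper runs standalone when zoo.cfg defines at most one `server.N` entry. A minimal offline sketch of that check (the sample config content and IP below are stand-ins; the real file is `conf/zoo.cfg`):

```shell
#!/bin/sh
# Sketch: decide standalone vs ensemble from the number of server.N lines.
# Sample zoo.cfg content is inlined via a temp file; adjust the path for real use.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
tickTime=2000
clientPort=2181
server.1=10.0.0.1:2888:3888
EOF
count=$(grep -c '^server\.' "$cfg")
if [ "$count" -le 1 ]; then
  echo "standalone"
else
  echo "replicated ensemble of $count servers"
fi
rm -f "$cfg"
```

Run against the sample above, this prints `standalone`.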

2. Kafka configuration

cat server.properties |grep -v '#'|grep -v '^$'
broker.id=0
listeners=SASL_PLAINTEXT://<internal-IP>:9092
advertised.listeners=SASL_PLAINTEXT://<public-IP>:<public-port>
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka270/datalog
num.partitions=3
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=<zookeeper-host>:<port>
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
delete.topic.enable=true
log.cleaner.enable=true
auto.create.topics.enable=true
default.replication.factor=1
auto.leader.rebalance.enable=true
request.required.acks=-1

Note: the placeholder values (IPs, ports, paths) must be replaced with your own; the `listeners`/`advertised.listeners` and `sasl.*`/`security.*` lines are the ones that must be added for authentication.
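Before restarting the broker, it is easy to sanity-check that the authentication-related settings are all present. A minimal sketch (the sample properties content and IPs below are placeholders; point the greps at your real `server.properties`):

```shell
#!/bin/sh
# Sketch: confirm the settings SASL/PLAIN authentication requires are present.
# Sample server.properties content is inlined; IPs are placeholders.
props=$(mktemp)
cat > "$props" <<'EOF'
broker.id=0
listeners=SASL_PLAINTEXT://10.0.0.2:9092
advertised.listeners=SASL_PLAINTEXT://203.0.113.10:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
EOF
for key in security.inter.broker.protocol sasl.enabled.mechanisms \
           sasl.mechanism.inter.broker.protocol; do
  if grep -q "^${key}=" "$props"; then echo "ok: $key"; else echo "MISSING: $key"; fi
done
grep -q '^listeners=SASL_PLAINTEXT://' "$props" && echo "ok: SASL listener"
rm -f "$props"
```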

Add two files under the config directory:

cat kafka_server_jaas.conf
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin"
user_admin="admin";
};

Note: in a `user_<name>="<password>"` entry, `<name>` declares a user and the quoted value is that user's password. The broker itself logs in with `username`/`password`, so that pair must exactly match one of the `user_*` entries (here, `user_admin="admin"` corresponds to `username="admin"` / `password="admin"`).
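That correspondence is easy to get wrong, so a quick self-check can help. A minimal sketch that parses the broker's own `username`/`password` out of the JAAS file and verifies a matching `user_<name>` entry exists (the sample content is inlined; the temp path stands in for `config/kafka_server_jaas.conf`):

```shell
#!/bin/sh
# Sketch: check that the broker's own username/password pair also exists
# as a user_<name>="<password>" entry in kafka_server_jaas.conf.
jaas=$(mktemp)
cat > "$jaas" <<'EOF'
KafkaServer {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin"
user_admin="admin";
};
EOF
user=$(sed -n 's/^username="\(.*\)"$/\1/p' "$jaas")
pass=$(sed -n 's/^password="\(.*\)"$/\1/p' "$jaas")
if grep -q "^user_${user}=\"${pass}\"" "$jaas"; then
  echo "JAAS self-login entry matches"
else
  echo "MISMATCH: add user_${user}=\"${pass}\";"
fi
rm -f "$jaas"
```

With the sample content above, this prints `JAAS self-login entry matches`.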

cat kafka_client_jaas.conf
KafkaClient {
org.apache.kafka.common.security.plain.PlainLoginModule required
username="admin"
password="admin";
};

The credentials here must match a user defined in the server-side file above.

Modify two configuration files:

cat consumer.properties |grep -v '#'|grep -v '^$'
bootstrap.servers=<internal-IP>:<port>
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
group.id=test-consumer-group

`security.protocol` and `sasl.mechanism` are the lines newly added for authentication.

cat producer.properties |grep -v '#'|grep -v '^$'
bootstrap.servers=<internal-IP>:<port>
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
compression.type=none

Same as above: `security.protocol` and `sasl.mechanism` are the additions.

Add an environment variable

cat /etc/profile

export KAFKA_OPTS="-Djava.security.auth.login.config=<path-to-config>/kafka_server_jaas.conf"
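A mistyped JAAS path in this export only surfaces later as an opaque broker authentication failure, so it can be worth guarding. A minimal sketch (the `mktemp` file stands in for your real `kafka_server_jaas.conf`):

```shell
#!/bin/sh
# Sketch: fail loudly if the JAAS file is missing instead of exporting
# KAFKA_OPTS pointing at a nonexistent path.
jaas=$(mktemp)   # stand-in for <path-to-config>/kafka_server_jaas.conf
if [ -f "$jaas" ]; then
  export KAFKA_OPTS="-Djava.security.auth.login.config=$jaas"
  echo "KAFKA_OPTS set"
else
  echo "JAAS file not found: $jaas" >&2
  exit 1
fi
rm -f "$jaas"
```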

Add the client JAAS configuration to the console producer and consumer scripts

cat kafka-console-producer.sh |grep -v '#'|grep -v '^$'
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx512M"
fi
export KAFKA_OPTS="-Djava.security.auth.login.config=<path-to-config>/kafka_client_jaas.conf"
exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleProducer "$@"

The `KAFKA_OPTS` export is the line added for authentication; it deliberately overrides the server-side JAAS setting from /etc/profile so the console client logs in with the `KafkaClient` section.

cat kafka-console-consumer.sh |grep -v '#'|grep -v '^$'
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
export KAFKA_HEAP_OPTS="-Xmx512M"
fi
export KAFKA_OPTS="-Djava.security.auth.login.config=<path-to-config>/kafka_client_jaas.conf"
exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleConsumer "$@"

Same change as in the producer script.
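A side note on the shell idiom in these scripts: the stock guard `[ "x$KAFKA_HEAP_OPTS" = "x" ]` is true only when the variable is empty, whereas a bare `[ "x$VAR" ]` is always true (the string contains at least the literal `x`), which is why wrapping the client override in such a test adds nothing. A quick sketch of the difference:

```shell
#!/bin/sh
# Sketch: "x$VAR" is never empty, so [ "x$VAR" ] always succeeds;
# [ "x$VAR" = "x" ] succeeds only when VAR is empty or unset.
unset VAR
[ "x$VAR" ] && echo "bare form: true even when VAR is unset"
[ "x$VAR" = "x" ] && echo "equality form: VAR is empty"
VAR=something
[ "x$VAR" = "x" ] || echo "equality form: VAR is non-empty"
```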

3. Test that everything works

Produce:

./kafka-console-producer.sh --bootstrap-server <internal-IP>:<port> --topic test1 --producer.config ../config/producer.properties

Consume:

./kafka-console-consumer.sh --bootstrap-server <internal-IP>:<port> --from-beginning --topic test1 --consumer.config ../config/consumer.properties

Type a message in the producer console; if it appears in the consumer, the setup is working.
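For external clients that cannot patch scripts or set `KAFKA_OPTS`, Kafka clients (0.10.2 and later) also accept the JAAS login inline via the `sasl.jaas.config` property, so a single properties file suffices. A sketch of such a client config for the `admin` user defined above (host and port are placeholders):

```properties
bootstrap.servers=<public-IP>:<public-port>
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="admin" password="admin";
```

Pass it to the console tools with `--producer.config` / `--consumer.config` just like the files above.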

posted on 2022-05-18 18:00 by net2817