Kafka Installation and Configuration

1.tar -zxvf kafka_2.11-0.11.0.2.tgz -C /usr/local/software/
2.mv kafka_2.11-0.11.0.2/  kafka
3.cd /usr/local/software/kafka && mkdir logs
4.cd config/
5.vim server.properties
    Server 42:
    broker.id=42
    delete.topic.enable=true
    log.dirs=/usr/local/software/kafka/logs
    zookeeper.connect=192.168.31.42:2181,192.168.31.43:2181,192.168.31.44:2181
    Server 43:
    broker.id=43
    delete.topic.enable=true
    log.dirs=/usr/local/software/kafka/logs
    zookeeper.connect=192.168.31.42:2181,192.168.31.43:2181,192.168.31.44:2181
    Server 44:
    broker.id=44
    delete.topic.enable=true
    log.dirs=/usr/local/software/kafka/logs
    zookeeper.connect=192.168.31.42:2181,192.168.31.43:2181,192.168.31.44:2181
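    -- a fuller server.properties sketch for server 42; the listeners line and port 9092 are an assumption (Kafka default), not in the original notes
    broker.id=42
    delete.topic.enable=true
    listeners=PLAINTEXT://192.168.31.42:9092
    log.dirs=/usr/local/software/kafka/logs
    zookeeper.connect=192.168.31.42:2181,192.168.31.43:2181,192.168.31.44:2181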
 6. Start Kafka
 	bin/kafka-server-start.sh config/server.properties  
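    -- the command above runs in the foreground; a sketch for running the broker as a background daemon instead (repeat on all three servers)
    bin/kafka-server-start.sh -daemon config/server.properties
    -- stop the broker later with
    bin/kafka-server-stop.sh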
    Create a topic
    bin/kafka-topics.sh --create --zookeeper 192.168.31.42:2181 --partitions 2 --replication-factor 2  --topic first
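    -- to verify partition and replica placement, a sketch using --describe (any of the three ZooKeeper addresses works)
    bin/kafka-topics.sh --describe --zookeeper 192.168.31.42:2181 --topic first
    							-- prints one line per partition with its leader, replicas and ISR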
    List topics
    bin/kafka-topics.sh --list --zookeeper 192.168.31.42:2181
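    -- since delete.topic.enable=true is set in server.properties, a topic can also be deleted; a sketch
    bin/kafka-topics.sh --delete --zookeeper 192.168.31.42:2181 --topic first
    							-- without delete.topic.enable=true the topic would only be marked for deletion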
    -- Start a console producer
    bin/kafka-console-producer.sh --broker-list 192.168.31.42:9092 --topic first
    							-- the producer connects to the Kafka cluster (broker list)
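    -- a sketch passing the whole broker list so the producer survives a single broker being down (port 9092 assumed on every node)
    bin/kafka-console-producer.sh --broker-list 192.168.31.42:9092,192.168.31.43:9092,192.168.31.44:9092 --topic first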
    -- Start a console consumer
    bin/kafka-console-consumer.sh --zookeeper 192.168.31.43:2181 --from-beginning --topic first
    bin/kafka-console-consumer.sh --zookeeper pengyy43:2181 --topic first
    							-- the old-style consumer connects to the ZooKeeper cluster
    bin/kafka-console-consumer.sh --bootstrap-server 192.168.31.42:9092 --from-beginning --topic first
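    -- consumers started with --bootstrap-server commit their offsets under a consumer group; a sketch listing the groups
    bin/kafka-consumer-groups.sh --bootstrap-server 192.168.31.42:9092 --list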
    Check the processes: jps (or jps -l to show the full class names)
    New versions keep consumer offsets in the Kafka cluster itself. If offsets were kept in the ZooKeeper cluster, the consumer would have to talk to the Kafka cluster leader to fetch data and also talk to the ZooKeeper cluster to save the offsets; that design is too cumbersome, so the offsets are simply stored in the Kafka cluster.
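    -- a sketch inspecting the committed offsets of one group; the group name below is hypothetical (console consumers get a generated one)
    bin/kafka-consumer-groups.sh --bootstrap-server 192.168.31.42:9092 --describe --group console-consumer-12345
    							-- shows CURRENT-OFFSET, LOG-END-OFFSET and LAG per partition; the offsets themselves live in the internal __consumer_offsets topic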
    
    The number of Kafka replicas (--replication-factor) cannot be greater than the number of brokers in the Kafka cluster.
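    -- for example, asking for 4 replicas on this 3-broker cluster fails; a sketch, the exact error text may vary by version
    bin/kafka-topics.sh --create --zookeeper 192.168.31.42:2181 --partitions 2 --replication-factor 4 --topic too-many-replicas
    							-- fails with something like: InvalidReplicationFactorException: Replication factor: 4 larger than available brokers: 3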