Kafka commands

group=orchestrate-factory && export KAFKA_HEAP_OPTS="" && /kafka/bin/kafka-consumer-groups.sh --command-config config/consumer.properties --describe --bootstrap-server kafka-1-master.basis.svc.cluster.local:9092 --group $group | awk '{print $5}'|awk '{sum+=$1} END {print sum}'
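The one-liner above sums the LAG column (field 5 of the --describe output) to get the total backlog of a single consumer group. A minimal sketch that reports the same figure for every group, reusing the same consumer.properties and bootstrap address (adjust if your deployment differs):

export KAFKA_HEAP_OPTS=""
for g in $(/kafka/bin/kafka-consumer-groups.sh --command-config config/consumer.properties --bootstrap-server kafka-1-master.basis.svc.cluster.local:9092 --list); do
    # Sum the LAG column, skipping the header row.
    lag=$(/kafka/bin/kafka-consumer-groups.sh --command-config config/consumer.properties --bootstrap-server kafka-1-master.basis.svc.cluster.local:9092 --describe --group "$g" | awk 'NR>1{sum+=$5} END {print sum+0}')
    echo "$g total lag: $lag"
done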

  

Clean up the Prometheus Pushgateway

#!/bin/bash
# Delete Pushgateway job groups whose last push is older than 10 minutes.
NOW=$(date +%s)
# Primary IP of the second interface reported by `ip addr`.
LOCALIP=$(ip addr | grep "$(ip addr | grep ^2: | awk '{print $2}' | cut -d ":" -f1)" | awk 'NR==2{print $2}' | cut -d "/" -f1)

# Collect every push_time_seconds sample (skip comment lines).
curl -s "http://$LOCALIP:9091/metrics" | grep push_time_seconds | grep -v "#" > /tmp/metricsjob

for i in $(awk '{print $NF}' /tmp/metricsjob); do
    # Truncate the timestamp (possibly in scientific notation) to an integer.
    JOBTIME=$(echo "$i" | awk '{printf "%d", $0}')
    if [ $((NOW - JOBTIME)) -gt 600 ]; then
        # Recover the job label that owns this stale timestamp and delete its group.
        JOB=$(grep "$i" /tmp/metricsjob | awk -F "job=" '{print $2}' | awk -F '"' '{print $2}')
        curl -s -X DELETE "http://$LOCALIP:9091/metrics/job/$JOB"
        echo "$JOB" >> /tmp/deletejobs
    fi
done

# Install this script into /etc/crontab (every 3 minutes) on first run.
SCRIPT=$(basename "$0")
if [ "$(grep -c "$SCRIPT" /etc/crontab)" -eq 0 ]; then
        sudo cp -f "$0" "/etc/$SCRIPT"
        sudo chmod +x "/etc/$SCRIPT"
        sudo bash -c "echo '*/3 * * * * root /etc/$SCRIPT' >> /etc/crontab"
fi
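Before letting the cron job loose, the stale jobs can be spot-checked by hand; a quick sketch, assuming the Pushgateway listens on port 9091 of the local host:

# List every job's last push time, oldest first (sort -g copes with the scientific notation).
curl -s http://127.0.0.1:9091/metrics | grep push_time_seconds | grep -v '^#' | sort -k2 -g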

Kafka cluster deployment

https://www.cnblogs.com/saneri/p/8762168.html

 

#!/bin/bash
# Check ZooKeeper status, then grant Kafka ACLs to the platform users.
for i in {1..3};do
kubectl -n basis exec -it `kubectl get po -n basis|grep zookeeper-$i| awk '{print $1}'` -- bash 2>/dev/null << EOF
export JVMFLAGS=""
/opt/zookeeper/bin/zkServer.sh status
EOF
done

sleep 5
kubectl -n basis exec -it `kubectl get po -n basis|grep kafka-deployment-1| awk '{print $1}'` -- bash 2>/dev/null << EOF
export KAFKA_HEAP_OPTS=""
# Full access on all topics and groups for the internal users.
/kafka/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=zookeeper-1.basis.svc.cluster.local:2181,zookeeper-2.basis.svc.cluster.local:2181,zookeeper-3.basis.svc.cluster.local:2181 --add --allow-principal User:interior --operation All --topic=*
/kafka/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=zookeeper-1.basis.svc.cluster.local:2181,zookeeper-2.basis.svc.cluster.local:2181,zookeeper-3.basis.svc.cluster.local:2181 --add --allow-principal User:interior --operation All --group=*

/kafka/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=zookeeper-1.basis.svc.cluster.local:2181,zookeeper-2.basis.svc.cluster.local:2181,zookeeper-3.basis.svc.cluster.local:2181 --add --allow-principal User:interior2 --operation All --topic=*
/kafka/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=zookeeper-1.basis.svc.cluster.local:2181,zookeeper-2.basis.svc.cluster.local:2181,zookeeper-3.basis.svc.cluster.local:2181 --add --allow-principal User:interior2 --operation All --group=*

# Read-only access for the consumer-side users.
# Keep the space before --operation; without it the principal becomes e.g. "businesscase--operation".
/kafka/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=zookeeper-1.basis.svc.cluster.local:2181,zookeeper-2.basis.svc.cluster.local:2181,zookeeper-3.basis.svc.cluster.local:2181 --add --allow-principal User:businesscase --operation Read --topic=*
/kafka/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=zookeeper-1.basis.svc.cluster.local:2181,zookeeper-2.basis.svc.cluster.local:2181,zookeeper-3.basis.svc.cluster.local:2181 --add --allow-principal User:open --operation Read --group=*

/kafka/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=zookeeper-1.basis.svc.cluster.local:2181,zookeeper-2.basis.svc.cluster.local:2181,zookeeper-3.basis.svc.cluster.local:2181 --add --allow-principal User:maintainer --operation Read --topic=*
/kafka/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=zookeeper-1.basis.svc.cluster.local:2181,zookeeper-2.basis.svc.cluster.local:2181,zookeeper-3.basis.svc.cluster.local:2181 --add --allow-principal User:maintainer --operation Read --group=*
EOF
kafka_acl.sh
#!/bin/bash

kubectl exec -it -n basis `kubectl get po -n basis|grep kafka-deployment-1| awk '{print $1}'` -- bash 2>/dev/null << EOF
export KAFKA_HEAP_OPTS=""
/kafka/bin/kafka-topics.sh --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --delete --topic ODL-SWITCH-STATUS
/kafka/bin/kafka-topics.sh --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --delete --topic ODL-SWITCH-STATUS-INIT
/kafka/bin/kafka-topics.sh --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --delete --topic BUSINESSCASE-TERMINAL
/kafka/bin/kafka-topics.sh --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --delete --topic BUSINESSCASE-RECOGNITION
/kafka/bin/kafka-topics.sh --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --delete --topic ODL-FLOWCHANGE-COOKIE-80001

/kafka/bin/kafka-topics.sh --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --create --topic ODL-SWITCH-STATUS --partitions 30 --replication-factor 3
/kafka/bin/kafka-topics.sh --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --create --topic ODL-SWITCH-STATUS-INIT --partitions 30 --replication-factor 3
/kafka/bin/kafka-topics.sh --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --create --topic BUSINESSCASE-TERMINAL --partitions 50 --replication-factor 3
/kafka/bin/kafka-topics.sh --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --create --topic BUSINESSCASE-RECOGNITION --partitions 50 --replication-factor 3
/kafka/bin/kafka-topics.sh --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --create --topic ODL-FLOWCHANGE-COOKIE-80001 --partitions 30 --replication-factor 3
# check list
/kafka/bin/kafka-topics.sh --zookeeper zookeeper-1 --topic BUSINESSCASE-RECOGNITION --describe
/kafka/bin/kafka-topics.sh --zookeeper zookeeper-1 --topic ODL-FLOWCHANGE-COOKIE-80001 --describe
/kafka/bin/kafka-topics.sh --zookeeper zookeeper-1 --topic ODL-SWITCH-STATUS --describe
/kafka/bin/kafka-topics.sh --zookeeper zookeeper-1 --topic ODL-SWITCH-STATUS-INIT --describe
/kafka/bin/kafka-topics.sh --zookeeper zookeeper-1 --topic BUSINESSCASE-TERMINAL --describe

EOF
kafka_create_topic.sh

 

Kafka operation commands

1. Overview

    A collection of commands needed in day-to-day work.

2. Commands

List the Kafka pods in Kubernetes

kubectl get pods -o wide -n basis| grep kafka

 

Enter one of the pods

kubectl -n basis exec -it `kubectl -n basis get pods -o wide | grep kafka-deployment-1 | awk '{print $1}'` -- /bin/bash

 

 

Before running any of the commands below, first execute:

export KAFKA_HEAP_OPTS=""

Otherwise the commands will fail with:

FATAL ERROR in native method: processing of -javaagent failed

Aborted (core dumped)

 

List topics

/kafka/bin/kafka-topics.sh --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --list

This prints all topics in the current Kafka cluster; topic metadata is stored in ZooKeeper.
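Because the topic metadata lives in ZooKeeper, the same list can also be read straight from the znode tree; a sketch that assumes zkCli.sh sits next to the zkServer.sh used earlier:

/opt/zookeeper/bin/zkCli.sh -server zookeeper-1.basis.svc.cluster.local:2181 ls /brokers/topics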

List ACLs

/kafka/bin/kafka-acls.sh --list --authorizer-properties zookeeper.connect=zookeeper-1.basis.svc.cluster.local:2181

Output:

Sample output:

Current ACLs for resource `Group:*`: 
          User:interior has Allow permission for operations: All from hosts: *
          User:interior2 has Allow permission for operations: All from hosts: *
          User:open has Allow permission for operations: Read from hosts: *
          User:maintainer has Allow permission for operations: Read from hosts: * 
Current ACLs for resource `Topic:*`: 
          User:interior has Allow permission for operations: All from hosts: *
          User:interior2 has Allow permission for operations: All from hosts: *
          User:businesscase--operation has Allow permission for operations: All from hosts: *
          User:maintainer--operation has Allow permission for operations: All from hosts: *


Note: the name after User: is the authorized principal. When granting ACLs, do not put a space after the colon in User:<name>; otherwise the listing shows a dangling entry such as "User: has ...". Likewise, a missing space before the next option fuses it into the principal, which is why the listing above contains User:businesscase--operation.
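If a malformed principal such as User:businesscase--operation has already been created, it can be removed and re-granted with the space in place; a hedged sketch using kafka-acls.sh's --remove (with --force to skip the confirmation prompt):

/kafka/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=zookeeper-1.basis.svc.cluster.local:2181 --remove --force --allow-principal User:businesscase--operation --operation All --topic=*
/kafka/bin/kafka-acls.sh --authorizer kafka.security.auth.SimpleAclAuthorizer --authorizer-properties zookeeper.connect=zookeeper-1.basis.svc.cluster.local:2181 --add --allow-principal User:businesscase --operation Read --topic=*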

 

Producer mode

Create a test topic

/kafka/bin/kafka-topics.sh --create --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --topic test --partitions 1 --replication-factor 1

 

Connect to kafka-1 and produce to the test topic. Make sure the topic already exists!

/kafka/bin/kafka-console-producer.sh --broker-list kafka-1-master.basis.svc.cluster.local:9092 --topic test --producer.config config/producer.properties
 
# type any message
> 123
> 

 

Consumer mode

Open another terminal and enter one of the Kafka pods

kubectl -n basis exec -it `kubectl -n basis get pods -o wide | grep kafka-deployment-1 | awk '{print $1}'` -- /bin/bash

 

Run the console consumer against the test topic

/kafka/bin/kafka-console-consumer.sh --bootstrap-server kafka-1-master.basis.svc.cluster.local:9092 --topic test --from-beginning  --consumer.config config/consumer.properties

 

Wait about 20 seconds; if the consumer receives the 123 sent by the producer, it works!

Note: this consumer uses the consumer group test-consumer-group.
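The group name comes from the consumer.properties file passed via --consumer.config; the stock Kafka distribution ships that file with the line below (worth confirming against this deployment's copy):

# /kafka/config/consumer.properties
group.id=test-consumer-group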

# Delete the test topic

/kafka/bin/kafka-topics.sh --delete --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --topic test

 

Stress testing

Producing messages

Benchmark by writing 100,000 messages:

/kafka/bin/kafka-producer-perf-test.sh --producer.config config/producer.properties --topic test_perf --num-records 100000 --record-size 1000  --throughput 5000 --producer-props bootstrap.servers=kafka-1-master.basis.svc.cluster.local:9092

Parameter reference:

Parameters of kafka-producer-perf-test.sh (for the 100,000-message write test):

--topic             topic name, test_perf in this example
--num-records       total number of messages to send, 100000 here
--record-size       size of each record in bytes, 1000 here
--throughput        number of records to send per second, 5000 here
--producer-props    producer connection settings, here bootstrap.servers=kafka-1-master.basis.svc.cluster.local:9092
--producer.config   producer configuration file

Sample output:

100000 records sent, 4999.375078 records/sec (4.77 MB/sec), 88.83 ms avg latency, 2869.00 ms max latency, 1 ms 50th, 327 ms 95th, 2593 ms 99th, 2838 ms 99.9th.

In this 100,000-message write test, Kafka ingested an average of 4.77 MB/s, roughly 4999.375 messages per second, with an average write latency of 88.83 ms and a maximum latency of 2869 ms.
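To probe the cluster's ceiling rather than hold a fixed rate of 5000 messages/s, the throttle can be disabled: kafka-producer-perf-test.sh treats --throughput -1 as "no limit". The same test, unthrottled:

/kafka/bin/kafka-producer-perf-test.sh --producer.config config/producer.properties --topic test_perf --num-records 100000 --record-size 1000 --throughput -1 --producer-props bootstrap.servers=kafka-1-master.basis.svc.cluster.local:9092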

 

Consuming messages

Consume 100,000 messages:

/kafka/bin/kafka-consumer-perf-test.sh --consumer.config config/consumer.properties --broker-list kafka-1-master.basis.svc.cluster.local:9092 --topic test_perf --fetch-size 1048576 --messages 100000 --threads 1

Parameter reference:

Parameters of kafka-consumer-perf-test.sh:
--broker-list       Kafka connection string, kafka-1-master.basis.svc.cluster.local:9092 here
--topic             topic name, test_perf here
--fetch-size        size of each fetch request, 1048576 bytes (1 MB) here
--messages          total number of messages to consume, 100000 here
--consumer.config   consumer configuration file

Sample output:

start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec, rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec
2018-12-06 05:59:32:360, 2018-12-06 05:59:51:624, 954.0758, 49.5264, 1000421, 51932.1532, 41, 19223, 49.6320, 52042.9173

In this consumption test, 954.07 MB of data was consumed in total at 49.52 MB/s: 1,000,421 messages, or about 51,932.15 messages per second.

 

Result analysis

As a rule of thumb, when producing at 5000 messages/s, a latency at or below 1 ms (the 50th-percentile figure in the output above) is within the acceptable range and means writes are keeping up.

When consuming, if Kafka can work through a backlog of 10 million messages at more than 200,000 messages per second, the result is considered ideal.


Based on Kafka's throughput when handling messages at the 100-thousand, 1-million, and 10-million scales, you can judge whether the cluster is capable of handling messages at the hundred-million scale; a scale sweep like the sketch below helps collect those numbers.
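A small sketch of such a scale sweep, reusing the test_perf topic and the paths from the commands above (the record counts are illustrative):

export KAFKA_HEAP_OPTS=""
for n in 100000 1000000 10000000; do
    echo "=== producing $n records ==="
    /kafka/bin/kafka-producer-perf-test.sh --producer.config config/producer.properties --topic test_perf --num-records $n --record-size 1000 --throughput -1 --producer-props bootstrap.servers=kafka-1-master.basis.svc.cluster.local:9092
done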

 

Check consumer group lag

First, list all consumer groups:

/kafka/bin/kafka-consumer-groups.sh --bootstrap-server kafka-1-master.basis.svc.cluster.local:9092 --command-config /kafka/config/consumer.properties --list

Sample output:

deviceservice
accelerator
perf-consumer-62504
orchestrate
94160169a549b8c9f67de7f8d91fd652
terminalManager
userCenter
918da735f22aed164a712f968d989967
7839afcfea98dad03a7589bcb7f953bd
5a193606bfb232a82248007677a322ca
baseBusiness
odl
eventservice
977e71876f358291469a67b963636849
ReloadId=30786940-e63d-4b67-b7ab-05b6e6a83338
bossAdapter
test-consumer-group

 

Check consumption for the odl group:

/kafka/bin/kafka-consumer-groups.sh --bootstrap-server kafka-1-master.basis.svc.cluster.local:9092 --command-config config/consumer.properties --describe --group odl

Explanation:

--group   the consumer group to describe; make sure the group exists, otherwise the command fails!

 

Output:

Sample output:

TOPIC                         PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG             CONSUMER-ID                                     HOST             CLIENT-ID
sdn-pending-business-c0a9dc6d 0          4               4               0               consumer-1-dea56f02-3387-4343-9498-60ca69733047 /192.169.220.109 consumer-1
sdn-pending-business-c0a9dc6d 1          4               4               0               consumer-1-dea56f02-3387-4343-9498-60ca69733047 /192.169.220.109 consumer-1
sdn-pending-business-c0a9dc6d 2          4               4               0               consumer-1-dea56f02-3387-4343-9498-60ca69733047 /192.169.220.109 consumer-1

If LAG is 0, the queue has no backlog!

 

Field reference:

TOPIC            topic name
PARTITION        partition id
CURRENT-OFFSET   number of messages consumed so far
LOG-END-OFFSET   total number of messages (end of log)
LAG              number of messages not yet consumed
CONSUMER-ID      consumer id
HOST             host IP
CLIENT-ID        client id

 

Describe a topic

To view the details of topic sdn-pending-business-c0a9e31f, use the following command:

/kafka/bin/kafka-topics.sh --describe --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --topic sdn-pending-business-c0a9e31f

 

Change the number of partitions

Syntax:

/kafka/bin/kafka-topics.sh --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --alter --topic <topic name> --partitions <number of partitions>

 

Example:

/kafka/bin/kafka-topics.sh --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --alter --topic test --partitions 3
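Note that Kafka only allows increasing the partition count, never decreasing it. To confirm the change took effect, describe the topic again; the PartitionCount field should now read 3:

/kafka/bin/kafka-topics.sh --describe --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --topic test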

 

Increase the replication factor dynamically

Note: increasing the replication factor relies on a JSON reassignment file.

Walkthrough:

View the partition details of topic sdn-pending-business-c0a9e31f:
/kafka/bin/kafka-topics.sh --describe --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --topic sdn-pending-business-c0a9e31f

Result:
Topic:sdn-pending-business-c0a9e31f      PartitionCount:3    ReplicationFactor:1  Configs:
          Topic: sdn-pending-business-c0a9e31f     Partition: 0        Leader: 2 Replicas: 2         Isr: 2
          Topic: sdn-pending-business-c0a9e31f     Partition: 1        Leader: 3 Replicas: 3         Isr: 3
          Topic: sdn-pending-business-c0a9e31f     Partition: 2        Leader: 1 Replicas: 1         Isr: 1

The replication factor is currently 1 and needs to be raised to 3.

Create a file named sdn-pending-business-c0a9e31f.json with the following content:

{
  "version": 1,
  "partitions": [
    {"topic":"sdn-pending-business-c0a9e31f","partition":0,"replicas":[1,2,3]},
    {"topic":"sdn-pending-business-c0a9e31f","partition":1,"replicas":[1,2,3]},
    {"topic":"sdn-pending-business-c0a9e31f","partition":2,"replicas":[1,2,3]}
  ]
}

Explanation:
The cluster has 3 brokers with ids 1, 2 and 3,
so replicas is set to [1,2,3], meaning each partition is replicated across those 3 brokers.

Then run the reassignment:
/kafka/bin/kafka-reassign-partitions.sh --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --reassignment-json-file sdn-pending-business-c0a9e31f.json --execute

Parameters:

--reassignment-json-file  the JSON file describing the target partition assignment
--execute                 start the reassignment described in that JSON file

Output:

Current partition replica assignment
{"version":1,"partitions":[{"topic":"sdn-pending-business-c0a9e31f","partition":2,"replicas":[1],"log_dirs":["any"]},{"topic":"sdn-pending-business-c0a9e31f","partition":1,"replicas":[3],"log_dirs":["any"]},{"topic":"sdn-pending-business-c0a9e31f","partition":0,"replicas":[2],"log_dirs":["any"]}]}
Save this to use as the --reassignment-json-file option during rollback
Successfully started reassignment of partitions.

"Successfully" means the reassignment has started.

Describe the topic again:
/kafka/bin/kafka-topics.sh --describe --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --topic sdn-pending-business-c0a9e31f

Result:

Topic:sdn-pending-business-c0a9e31f      PartitionCount:3    ReplicationFactor:3  Configs:
          Topic: sdn-pending-business-c0a9e31f     Partition: 0        Leader: 2 Replicas: 1,2,3     Isr: 2,3,1
          Topic: sdn-pending-business-c0a9e31f     Partition: 1        Leader: 3 Replicas: 1,2,3     Isr: 3,1,2
          Topic: sdn-pending-business-c0a9e31f     Partition: 2        Leader: 1 Replicas: 1,2,3     Isr: 1,3,2

The replication factor is now 3.
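The progress of a reassignment can also be checked with the tool's --verify mode, reusing the same JSON file:

/kafka/bin/kafka-reassign-partitions.sh --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --reassignment-json-file sdn-pending-business-c0a9e31f.json --verify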

 

Delete a consumer group

For example, if the consumer group j5pdbojpneojfh5c is no longer needed, delete it with:

/kafka/bin/kafka-consumer-groups.sh --bootstrap-server kafka-1-master.basis.svc.cluster.local:9092 --command-config config/consumer.properties --delete --group j5pdbojpneojfh5c

 

Delete a topic

Proceed with caution: make sure no application is still using the topic, otherwise deleting it can bring the Kafka cluster down!

For example, delete the topic test:

/kafka/bin/kafka-topics.sh --delete --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --topic test
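Whether --delete actually removes the topic (rather than only marking it for deletion) depends on the broker setting delete.topic.enable in server.properties; on this cluster's Kafka version it may need to be enabled explicitly, so treat the line below as something to verify against the actual broker config:

# server.properties (on every broker)
delete.topic.enable=true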

 

Adjust topic retention dynamically

Modify the retention policy of a topic
Syntax:

/kafka/bin/kafka-configs.sh --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --entity-type topics --entity-name <topic name> --alter --add-config retention.ms=<retention time in milliseconds>

 

For example, set the retention of topic test to 30 minutes:

/kafka/bin/kafka-configs.sh --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --entity-type topics --entity-name test --alter --add-config retention.ms=1800000

 

View the retention policy of a topic

Syntax:

/kafka/bin/kafka-configs.sh --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --describe --entity-type topics --entity-name <topic name>

 

For example, view the retention policy of topic test:

/kafka/bin/kafka-configs.sh --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --describe --entity-type topics --entity-name test

 

Output:

Configs for topic 'test' are retention.ms=1800000

 

Note: setting the retention to 30 minutes does not mean the data is removed exactly 30 minutes later. Kafka checks topics periodically and, when it gets to this topic, deletes data older than 30 minutes; the check interval is governed by log.retention.check.interval.ms in server.properties.

 

For instance, if log.retention.check.interval.ms is 5 minutes, then after about 35 minutes the topic's expired data will have been deleted automatically.
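To drop the per-topic override later and fall back to the broker-wide retention, kafka-configs.sh supports --delete-config; a sketch using the same test topic:

/kafka/bin/kafka-configs.sh --zookeeper zookeeper-1.basis.svc.cluster.local:2181 --entity-type topics --entity-name test --alter --delete-config retention.ms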

 
