What exactly is a Kafka consumer, and what exactly is the consumer offsets topic (Python client, 1.01 broker)

Kafka has the concept of a consumer group: on the consumer side, subscribing to topics and all interaction with them is done through a consumer group.

Once a group id is set on the consumer client, that client can subscribe to multiple topics. The group-id also lets us identify who is consuming messages, and which group a given consumer belongs to.

A consumer group is used most effectively when the number of consumers equals the number of partitions of the topic on the broker.

Peak efficiency is reached when each partition is assigned to its own dedicated consumer. For example, if a topic has 20 partitions, then during peak hours we ideally want 20 consumers consuming them. Can we run 25 instead? We can, but 5 of those consumers will sit idle.

One more thing to note: if a consumer group subscribes to multiple topics, the number of consumers needed to fully load the group is the sum of the partition counts of all subscribed topics.
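As a rough illustration of the sizing rule above, here is a small pure-Python sketch (no broker needed) that mimics a round-robin style distribution of partitions over group members. It is only a model of the outcome, not Kafka's actual assignor, and the topic names and counts are invented for the example:

```python
from itertools import cycle

def assign(partitions, members):
    """Round-robin partitions over group members.
    Simplified model of assignment, not Kafka's real assignor."""
    assignment = {m: [] for m in members}
    for member, partition in zip(cycle(members), partitions):
        assignment[member].append(partition)
    return assignment

# A group subscribed to two topics: full capacity = 20 + 10 = 30 partitions.
partitions = [("topic_a", p) for p in range(20)] + \
             [("topic_b", p) for p in range(10)]

# 35 consumers for 30 partitions: 5 of them will get nothing to do.
members = [f"consumer-{i}" for i in range(35)]
assignment = assign(partitions, members)
idle = [m for m, parts in assignment.items() if not parts]
print(len(idle))  # -> 5
```

Running more consumers than total partitions never increases throughput; the extras stay idle until a rebalance gives them work.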

 

Older consumers stored offsets in ZooKeeper, but in large production deployments this made the write load on ZooKeeper grow linearly with the size of the Kafka cluster. Newer consumers therefore store offsets inside the Kafka cluster itself, in the internal offsets topic __consumer_offsets.

 

Let's look in detail at how the new offset storage works and what __consumer_offsets is for. The new offset management mechanism simply commits offset data, record by record, to __consumer_offsets.

The key of a regular offset record has three parts, <group_id, topic_name, partition_no>, identifying which consumer group it comes from, which topic it is consuming, and which partition number.

There are two other kinds of records:

1. Records used to register a new consumer group.

2. Records used to delete expired group offsets, or to delete the group itself. Once every consumer instance in a group has stopped and all of the group's offset data has been deleted, Kafka writes a tombstone message to the corresponding partition of the offsets topic; the tombstone signals that this group's information should be removed completely.

By default, Kafka creates __consumer_offsets automatically when the first consumer starts, with 50 partitions.
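Which of those 50 partitions a group's offset records land in is determined by hashing the group id. The sketch below mimics that routing in pure Python by reimplementing Java's String.hashCode, which is what the broker uses (for ASCII group names); the group name here is made up:

```python
def java_string_hashcode(s: str) -> int:
    """Reimplementation of Java's String.hashCode() with 32-bit overflow.
    Accurate for ASCII strings (Java hashes UTF-16 code units)."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    # Reinterpret as a signed 32-bit int, like Java.
    return h - 0x100000000 if h >= 0x80000000 else h

def offsets_partition_for(group_id: str, num_partitions: int = 50) -> int:
    """Broker side this is roughly:
    Utils.abs(groupId.hashCode) % offsets.topic.num.partitions,
    where Utils.abs masks off the sign bit."""
    return (java_string_hashcode(group_id) & 0x7FFFFFFF) % num_partitions

print(offsets_partition_for("my-group"))
```

The routing is deterministic, so all offset commits for one group always go to the same partition of __consumer_offsets, which is what lets a single group coordinator own that group.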

The consumer has parameters that control whether offsets are committed automatically, and how often:

enable.auto.commit = true        # defaults to true
auto.commit.interval.ms = 5000   # defaults to 5000 ms; the client commits offsets in the background at this interval
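In kafka-python, those dotted properties map to underscore-named constructor arguments on KafkaConsumer. A minimal configuration sketch follows; the broker address, topic, and group names are placeholders, and this needs a running broker to actually consume anything:

```python
from kafka import KafkaConsumer  # pip install kafka-python

# Broker/topic/group names below are hypothetical, for illustration only.
consumer = KafkaConsumer(
    "my_topic",
    bootstrap_servers="localhost:9092",
    group_id="my-group",
    enable_auto_commit=True,        # default: True
    auto_commit_interval_ms=5000,   # default: 5000 ms
)
for message in consumer:
    print(message.topic, message.partition, message.offset, message.value)
```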

As offset records keep being committed, more and more of them accumulate, so they need to be cleaned up.

Kafka uses log compaction to clean up stale records. The Compact policy is applied to delete expired messages from the offsets topic, so that it does not grow without bound.

The official documentation has a diagram illustrating what the compact algorithm actually does.

 

As described above, the key is roughly <group_id, topic_name, partition_no>: who is consuming which partition of which topic. Because we keep consuming, the consumed position for each partition owned by a group_id keeps moving forward; that is, the value under that key is constantly being updated. Compaction can therefore use the identical key to discard the older records and keep only the latest one. For the details of the algorithm, see the log compaction source-code walkthrough in the references.
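The net effect of compaction can be sketched in a few lines of pure Python: for each key, keep only the newest record, and drop any key whose newest value is a tombstone (modeled here as None). This models only the outcome, not the segment-by-segment cleaner itself; the group and topic names are invented:

```python
def compact(log):
    """log: a list of (key, value) records in append order.
    Keep only the latest value per key; a latest value of None
    (a tombstone) removes the key entirely."""
    latest = {}
    for key, value in log:  # later records overwrite earlier ones
        latest[key] = value
    return {k: v for k, v in latest.items() if v is not None}

log = [
    (("grp", "orders", 0), 10),
    (("grp", "orders", 1), 7),
    (("grp", "orders", 0), 25),    # newer committed offset, same key
    (("grp", "orders", 1), None),  # tombstone for partition 1
]
print(compact(log))  # -> {('grp', 'orders', 0): 25}
```

This is why the log-cleaner output below can shrink 100 MB of offset records down to a couple dozen messages: only the latest record per key survives.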

Kafka runs a dedicated background thread that periodically scans topics eligible for compaction. We can get some insight by looking at the Kafka log file log-cleaner.log:

[2019-07-15 08:59:05,891] INFO Cleaner 0: Beginning cleaning of log __consumer_offsets-43. (kafka.log.LogCleaner)
[2019-07-15 08:59:06,024] INFO Cleaner 0: Building offset map for __consumer_offsets-43... (kafka.log.LogCleaner)
[2019-07-15 08:59:06,043] INFO Cleaner 0: Building offset map for log __consumer_offsets-43 for 1 segments in offset range [393705580, 394772832). (kafka.log.LogCleaner)
[2019-07-15 08:59:08,852] INFO Cleaner 0: Offset map for log __consumer_offsets-43 complete. (kafka.log.LogCleaner)
[2019-07-15 08:59:08,853] INFO Cleaner 0: Cleaning log __consumer_offsets-43 (cleaning prior to Mon Jul 15 08:58:51 CST 2019, discarding tombstones prior to Sat Jul 13 10:45:09 CST 2019)... (kafka.log.LogCleaner)
[2019-07-15 08:59:08,853] INFO Cleaner 0: Cleaning segment 0 in log __consumer_offsets-43 (largest timestamp Fri May 31 18:33:38 CST 2019) into 0, retaining deletes. (kafka.log.LogCleaner)
[2019-07-15 08:59:08,856] INFO Cleaner 0: Cleaning segment 392638328 in log __consumer_offsets-43 (largest timestamp Sun Jul 14 10:45:09 CST 2019) into 0, retaining deletes. (kafka.log.LogCleaner)
[2019-07-15 08:59:08,865] INFO Cleaner 0: Swapping in cleaned segment 0 for segment(s) 0,392638328 in log __consumer_offsets-43. (kafka.log.LogCleaner)
[2019-07-15 08:59:08,868] INFO Cleaner 0: Cleaning segment 393705580 in log __consumer_offsets-43 (largest timestamp Mon Jul 15 08:58:51 CST 2019) into 393705580, retaining deletes. (kafka.log.LogCleaner)
[2019-07-15 08:59:09,439] INFO Cleaner 0: Swapping in cleaned segment 393705580 for segment(s) 393705580 in log __consumer_offsets-43. (kafka.log.LogCleaner)
[2019-07-15 08:59:09,440] INFO [kafka-log-cleaner-thread-0]:
        Log cleaner thread 0 cleaned log __consumer_offsets-43 (dirty section = [393705580, 393705580])
        100.0 MB of log processed in 3.4 seconds (29.3 MB/sec).
        Indexed 100.0 MB in 2.8 seconds (35.4 Mb/sec, 82.8% of total time)
        Buffer utilization: 0.0%
        Cleaned 100.0 MB in 0.6 seconds (170.4 Mb/sec, 17.2% of total time)
        Start size: 100.0 MB (1,067,273 messages)
        End size: 0.0 MB (21 messages)
        100.0% size reduction (100.0% fewer messages)
 (kafka.log.LogCleaner)
[2019-07-15 10:35:39,652] INFO Cleaner 0: Beginning cleaning of log __consumer_offsets-9. (kafka.log.LogCleaner)
[2019-07-15 10:35:39,653] INFO Cleaner 0: Building offset map for __consumer_offsets-9... (kafka.log.LogCleaner)
[2019-07-15 10:35:39,671] INFO Cleaner 0: Building offset map for log __consumer_offsets-9 for 1 segments in offset range [17880293, 17888838). (kafka.log.LogCleaner)
[2019-07-15 10:35:39,787] INFO Cleaner 0: Offset map for log __consumer_offsets-9 complete. (kafka.log.LogCleaner)
[2019-07-15 10:35:39,789] INFO Cleaner 0: Cleaning log __consumer_offsets-9 (cleaning prior to Mon Jul 15 10:31:56 CST 2019, discarding tombstones prior to Sun Jul 07 10:27:56 CST 2019)... (kafka.log.LogCleaner)
[2019-07-15 10:35:39,789] INFO Cleaner 0: Cleaning segment 0 in log __consumer_offsets-9 (largest timestamp Mon Jul 08 10:27:56 CST 2019) into 0, retaining deletes. (kafka.log.LogCleaner)
[2019-07-15 10:35:39,792] INFO Cleaner 0: Cleaning segment 17880293 in log __consumer_offsets-9 (largest timestamp Mon Jul 15 10:31:56 CST 2019) into 0, retaining deletes. (kafka.log.LogCleaner)
[2019-07-15 10:35:39,806] INFO Cleaner 0: Swapping in cleaned segment 0 for segment(s) 0,17880293 in log __consumer_offsets-9. (kafka.log.LogCleaner)
[2019-07-15 10:35:39,808] INFO [kafka-log-cleaner-thread-0]:
        Log cleaner thread 0 cleaned log __consumer_offsets-9 (dirty section = [17880293, 17880293])
        1.8 MB of log processed in 0.2 seconds (11.5 MB/sec).
        Indexed 1.8 MB in 0.1 seconds (13.3 Mb/sec, 86.5% of total time)
        Buffer utilization: 0.0%
        Cleaned 1.8 MB in 0.0 seconds (84.8 Mb/sec, 13.5% of total time)
        Start size: 1.8 MB (8,566 messages)
        End size: 0.0 MB (21 messages)
        99.9% size reduction (99.8% fewer messages)
 (kafka.log.LogCleaner)

The next post will focus on rebalancing, and on hands-on manual offset committing with the Python client.

 

References:

https://time.geekbang.org/column/article/105112  Geektime column "Kafka Core Technology and Practice", part 15: What exactly is a consumer group

https://time.geekbang.org/column/article/105473  Geektime column "Kafka Core Technology and Practice", part 16: Unveiling the mystery of the offsets topic

https://time.geekbang.org/column/article/105473  Geektime column "Kafka Core Technology and Practice", part 17: Can consumer group rebalancing be avoided

https://github.com/dpkp/kafka-python/issues/948  KIP-62 / KAFKA-3888: Allow consumer to send heartbeats from a background thread

https://github.com/dpkp/kafka-python/pull/1266/files  KAFKA-3888 Use background thread to process consumer heartbeats

https://segmentfault.com/a/1190000007922290  Kafka Log Compaction explained

 

posted @ 2019-07-15 15:12  piperck