Kafka: max.poll.interval.ms configured too short

max.poll.interval.ms is the maximum allowed interval between two consecutive calls to poll(). If the consumer does not poll again within this interval, the group coordinator considers the consumer dead and triggers a rebalance, reassigning its partitions to other group members. (The default is 300000 ms, i.e. 5 minutes.)

Demo:
 1     public ConsumerDemo(String topicName) {
 2         Properties props = new Properties();
 3         props.put("bootstrap.servers", "localhost:9092");
 4         props.put("group.id", GROUPID);
 5         props.put("enable.auto.commit", "false");
 6         props.put("max.poll.interval.ms", "1000");
 7         props.put("auto.offset.reset", "earliest");
 8         props.put("key.deserializer", StringDeserializer.class.getName());
 9         props.put("value.deserializer", StringDeserializer.class.getName());
10         this.consumer = new KafkaConsumer<String, String>(props);
11         this.topic = topicName;
12         this.consumer.subscribe(Arrays.asList(topic));
13     }

Line 5 sets enable.auto.commit to false, so offsets are committed manually. Line 6 sets max.poll.interval.ms to 1 second.

 1     public void receiveMsg() {
 2         int messageNo = 1;
 3         System.out.println("---------start consuming---------");
 4         try {
 5             for (;;) {
 6                 ConsumerRecords<String, String> msgList = consumer.poll(1000);
 7                 System.out.println("start sleep" + System.currentTimeMillis() / 1000);
 8                 Thread.sleep(10000);
 9                 System.out.println("end sleep" + System.currentTimeMillis() / 1000);
10                 if (null != msgList && msgList.count() > 0) {
11                     for (ConsumerRecord<String, String> record : msgList) {
12                         System.out.println(messageNo++ + "=======receive: key = " + record.key() + ", value = " + record.value() + " offset===" + record.offset());
13                     }
14                 } else {
15                     Thread.sleep(1000);
16                 }
17                 consumer.commitSync();
18             }
19         } catch (InterruptedException e) {
20             e.printStackTrace();
21         } finally {
22             consumer.close();
23         }
24     }

Line 8 sleeps for 10 seconds to simulate time-consuming message processing. Committing the offsets then fails with the following error:

Exception in thread "main" org.apache.kafka.clients.consumer.CommitFailedException: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.sendOffsetCommitRequest(ConsumerCoordinator.java:722)
    at org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:600)
    at org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1211)
    at com.gxf.kafka.ConsumerDemo.receiveMsg(ConsumerDemo.java:49)
    at com.gxf.kafka.ConsumerDemo.main(ConsumerDemo.java:59)
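The cause is plain arithmetic: the loop sleeps for 10 seconds between poll() calls while max.poll.interval.ms is 1 second, so the group rebalances before commitSync() runs. One way to keep the poll cadence is to hand each fetched batch to a worker thread and return to poll() immediately. A minimal sketch of that hand-off using only a plain ExecutorService (no Kafka classes; the class and method names here are illustrative, not from the original demo):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AsyncHandoff {
    // Single worker thread so batches are still processed in submission order.
    static final ExecutorService worker = Executors.newSingleThreadExecutor();

    // Stand-in for the real processing that took 10 s in the demo;
    // here it just records what it saw.
    static final List<String> processed = new CopyOnWriteArrayList<>();

    static void handOff(List<String> batch) {
        worker.submit(() -> {
            for (String record : batch) {
                processed.add(record);   // slow per-record work would go here
            }
        });
        // poll() can be called again immediately: the poll interval is
        // no longer coupled to the processing time.
    }

    public static void main(String[] args) throws Exception {
        handOff(List.of("a", "b"));
        worker.shutdown();
        worker.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(processed);
    }
}
```

Note the trade-off: if offsets are committed right after the hand-off, a crash of the worker can lose records that were committed but never processed, so in practice the commit should be deferred until the worker has finished the batch.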

To fix this, configure max.poll.interval.ms somewhat larger, or shorten the processing time, pull fewer records per poll (max.poll.records), or process the records asynchronously.
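The configuration side of the fix can be sketched like this: raise max.poll.interval.ms and cap the batch size with max.poll.records so each poll() returns less work. The values 300000 ms (the Kafka default) and 100 records are illustrative, not recommendations from the original post; tune them to your actual processing speed.

```java
import java.util.Properties;

public class ConsumerTuning {
    public static Properties tunedProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("enable.auto.commit", "false");
        // Allow up to 5 minutes between poll() calls (the Kafka default)
        // instead of the 1 second used in the demo above.
        props.put("max.poll.interval.ms", "300000");
        // Cap how many records one poll() returns, so one batch
        // can always be processed within the poll interval.
        props.put("max.poll.records", "100");
        return props;
    }

    public static void main(String[] args) {
        Properties props = tunedProps();
        System.out.println(props.getProperty("max.poll.interval.ms"));
        System.out.println(props.getProperty("max.poll.records"));
    }
}
```

These Properties would be passed to `new KafkaConsumer<>(props)` exactly as in the constructor above.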

posted on 2021-10-27 01:02 by luckygxf