Three ways to send messages with Kafka

1. Method 1: fire-and-forget

ProducerRecord<String, String> record = new ProducerRecord<String, String>("topic", "1", "TestProducer"); // topic, key, value

Properties properties = new Properties();
properties.put("bootstrap.servers", "127.0.0.1:9092");
properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

KafkaProducer<String, String> kafkaProducer = new KafkaProducer<>(properties);

kafkaProducer.send(record); // fire-and-forget: ignore the returned Future
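
Fire-and-forget still buffers the record client-side until its batch is sent, so the producer should be flushed or closed before the process exits, otherwise buffered records can be lost. A minimal sketch, assuming the same properties object as above (topic, key and value are placeholders):

// Fire-and-forget sketch: ignore the returned Future, but close the producer
// so buffered records are flushed before the JVM exits.
KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
try {
    producer.send(new ProducerRecord<String, String>("topic", "1", "TestProducer"));
} finally {
    producer.close(); // blocks until previously sent records have been delivered
}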

2. Method 2: synchronous (blocking) send

ProducerRecord<String, String> record = new ProducerRecord<String, String>("topic", "1", "TestProducer");

Properties properties = new Properties();
properties.put("bootstrap.servers", "127.0.0.1:9092");
properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
// Send-related configuration (important)
properties.put("acks", "1"); // acknowledgement level for replication in a cluster; default "1", other values are "0" and "all"
properties.put("batch.size", 16384); // memory per batch in bytes; default 16384 (16 KB)
properties.put("linger.ms", 0L); // how long the producer waits for more records to fill a batch before sending; default 0
properties.put("max.request.size", 1 * 1024 * 1024); // maximum size of a produce request; default 1 MB (related to the broker's message.max.bytes setting)
// Send-related configuration (less important)
properties.put("buffer.memory", 32 * 1024 * 1024); // total memory for the producer's send buffer
properties.put("retries", 0); // number of times to retry a failed send
properties.put("request.timeout.ms", 30 * 1000); // maximum time the client waits for a response to a request; default 30 s
properties.put("max.block.ms", 60 * 1000); // maximum time send() may block before throwing an exception; default 60000 ms
properties.put("compression.type", "none"); // compression codec for the data; default none, other options include gzip and snappy
KafkaProducer<String, String> kafkaProducer = new KafkaProducer<>(properties);
// blocks here until the broker responds; get() throws the checked InterruptedException/ExecutionException
RecordMetadata recordMetadata = kafkaProducer.send(record).get();
if (null != recordMetadata) {
  System.out.println("offset:" + recordMetadata.offset() + "-" + "partition:" + recordMetadata.partition());
}
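
Since get() declares those checked exceptions, compiling code has to handle them. A minimal sketch of the same synchronous send with error handling, reusing record and kafkaProducer from above (ExecutionException is java.util.concurrent.ExecutionException and wraps the real send failure):

// Synchronous send with explicit error handling.
try {
    RecordMetadata metadata = kafkaProducer.send(record).get(); // block until the broker acknowledges
    System.out.println("offset:" + metadata.offset() + "-" + "partition:" + metadata.partition());
} catch (InterruptedException e) {
    Thread.currentThread().interrupt(); // restore the interrupt flag
} catch (ExecutionException e) {
    e.getCause().printStackTrace(); // the actual send failure, e.g. a timeout or broker error
} finally {
    kafkaProducer.close();
}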

3. Method 3: asynchronous send (with callback)

Properties properties = new Properties();
properties.put("bootstrap.servers", "127.0.0.1:9092");
properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
ProducerRecord<String, String> record;
KafkaProducer<String, String> kafkaProducer = new KafkaProducer<>(properties);
try {
  record = new ProducerRecord<String, String>("topic", "1", "TestProducer");
  kafkaProducer.send(record, new Callback() {
    public void onCompletion(RecordMetadata metadata, Exception exception) {
      if (null != exception) {
        exception.printStackTrace();
      }
      if (null != metadata) {
        System.out.println("offset:" + metadata.offset() + "-" + "partition:" + metadata.partition());
      }
    }
  });
} finally {
  kafkaProducer.close();
}
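
Callback has a single onCompletion method, so on Java 8+ the send(record, new Callback() {...}) call above can be written more compactly with a lambda. A minimal sketch; this snippet would replace the send call inside the try block:

// Same asynchronous send, with the callback written as a lambda (Java 8+).
kafkaProducer.send(record, (metadata, exception) -> {
    if (exception != null) {
        exception.printStackTrace(); // the send failed; metadata may be null here
    } else {
        System.out.println("offset:" + metadata.offset() + "-" + "partition:" + metadata.partition());
    }
});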

 
