Kafka

1. Commands

2. Producer operations

3. Consumer operations

4. ZooKeeper

1. Commands:

 Start ZooKeeper:

bin\windows\zookeeper-server-start.bat config\zookeeper.properties

   Start Kafka:

bin\windows\kafka-server-start.bat config\server.properties

   Create a topic:

bin\windows\kafka-topics.bat --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic test
bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
(the first form talks to a broker on port 9092; the older --zookeeper form talks to ZooKeeper on port 2181)

bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 --replica-assignment 1:2,2:0,0:2 --topic test2
(--replica-assignment places replicas explicitly: commas separate partitions, colons separate replicas, so partition 0 lives on brokers 1 and 2, partition 1 on brokers 2 and 0, partition 2 on brokers 0 and 2)

   Create a console producer:

bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic test

   Create a console consumer:

bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --from-beginning

   Show topic details (replicas, ISR, etc.):

bin\windows\kafka-topics.bat  --zookeeper localhost:2181 --describe --topic test

   List all topics:

bin\windows\kafka-topics.bat  --zookeeper localhost:2181 --list

 ------- Configuration management

 View configuration:

bin\windows\kafka-configs.bat --zookeeper localhost:2181 --describe --entity-type topics --entity-name test

 

 Note: --entity-type accepts topics, brokers, clients, or users.

Modify configuration:

bin\windows\kafka-configs.bat --zookeeper localhost:2181 --alter --entity-type topics --entity-name test --add-config max.message.bytes=9999,cleanup.policy=compact

 

 Delete configuration:

bin\windows\kafka-configs.bat --zookeeper localhost:2181 --alter --entity-type topics --entity-name test --delete-config max.message.bytes

 --------- Preferred replica election

Partition rebalancing (when leadership is unevenly distributed, this moves each partition's leader back to its preferred replica):

bin\windows\kafka-preferred-replica-election.bat  --zookeeper localhost:2181  --path-to-json-file topics.json

 

 A JSON file can limit the election to specific partitions; file contents: {"partitions":[{"partition":0,"topic":"test"}]}

 -------- Partition reassignment

Generate a plan:

bin\windows\kafka-reassign-partitions.bat  --zookeeper localhost:2181 --generate --topics-to-move-json-file topics_reassign.json --broker-list 0,1

 Execute the plan:

bin\windows\kafka-reassign-partitions.bat  --zookeeper localhost:2181 --execute --reassignment-json-file project.json

 

2. Producer operations

Key parameters:

buffer.memory  // size of the RecordAccumulator message cache, default 32 MB. When it fills up, send() either blocks or throws, depending on max.block.ms

max.block.ms  // how long send() blocks once the cache is full. Default 60 s.

retries     // number of resend attempts

retry.backoff.ms  // interval between resend attempts

batch.size   // size of each ProducerBatch created in the accumulator (on ProducerBatch, see p. 38)

acks      // 1: success once the leader has written the message; 0: the producer does not wait for any response; -1/all: success only after the leader and all in-sync followers have written the message
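As a sketch, the parameters above map onto the producer's Properties like this. The values shown are the documented defaults except retries, which is an arbitrary example, and the broker address, which is an assumption:

```java
import java.util.Properties;

public class ProducerTuningSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("buffer.memory", "33554432");           // 32 MB RecordAccumulator cache (the default)
        props.put("max.block.ms", "60000");               // block up to 60 s when the cache is full (default)
        props.put("retries", "3");                        // resend up to 3 times on transient errors
        props.put("retry.backoff.ms", "100");             // wait 100 ms between resends (default)
        props.put("batch.size", "16384");                 // ProducerBatch size in bytes (default)
        props.put("acks", "all");                         // wait for leader and all in-sync replicas
        System.out.println(props.getProperty("acks"));
    }
}
```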

 

 

   (1) Sending a message from the producer:

import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.PartitionInfo;

public class Kafka {
    private static KafkaProducer<String, String> producer;

    public static void main(String[] args) {
        Properties kafkaProps = new Properties();
        kafkaProps.put("bootstrap.servers", "localhost:9092");
        kafkaProps.put("acks", "all");
        kafkaProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        kafkaProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        producer = new KafkaProducer<>(kafkaProps);
        ProducerRecord<String, String> record = new ProducerRecord<>("test", "444");
        try {
            // send() is asynchronous; get() blocks until the broker acknowledges
            System.out.println(producer.send(record).get());
            // list the partitions of the topic
            List<PartitionInfo> partitions = producer.partitionsFor("test");
            for (PartitionInfo p : partitions) {
                System.out.println(p);
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            producer.close();
        }
    }
}

 

 

  (2) Producer send callback

            producer.send(record, new Callback() {
                @Override
                public void onCompletion(RecordMetadata metadata, Exception exception) {
                    System.out.println("Callback! " + metadata.topic() + "   " + exception);
                }
            });

 

 Note: when the send succeeds, the callback's Exception is null.

 

 (3) Producer interceptors

 Step 1: create the interceptor class

import java.util.Map;

import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class MyInterceptor implements ProducerInterceptor<String, String> {
    @Override
    public void configure(Map<String, ?> configs) {
        // no configuration needed
    }

    @Override
    public void close() {
        // no resources to release
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        // called once the broker acknowledges the record (or the send fails)
    }

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        System.out.println("Interceptor invoked for topic " + record.topic());
        if (record.value().contains("wgy")) {
            // rewrite the value before it is serialized and sent
            return new ProducerRecord<>(record.topic(), record.partition(),
                    record.timestamp(), record.key(), "we are good");
        }
        return record;
    }
}

 

 Step 2: register it in the producer configuration

kafkaProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG,MyInterceptor.class.getName());

 

Multiple interceptors can be configured, separated by commas; they run in the order listed. For example:

        kafkaProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG,
                MyInterceptor.class.getName()+","+MyInterceptor.class.getName()
                );
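For longer interceptor chains, the comma-joined value can also be built with String.join. A standalone sketch; the class names here are hypothetical placeholders:

```java
import java.util.Arrays;

public class InterceptorListSketch {
    public static void main(String[] args) {
        // Build the comma-separated list that interceptor.classes expects.
        // Interceptors run in the order they appear in this string.
        String value = String.join(",", Arrays.asList(
                "com.example.MyInterceptor",
                "com.example.AuditInterceptor"));
        System.out.println(value);
    }
}
```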

 

 

3. Consumer operations

Parameters

enable.auto.commit        // automatic offset commit, default true

auto.commit.interval.ms // interval between automatic commits, default 5000 ms (5 s)

auto.offset.reset             // where to start consuming when there is no committed offset; default latest, also earliest and none
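These map onto the consumer's Properties the same way; a minimal sketch (the choice of values here is illustrative, not the defaults):

```java
import java.util.Properties;

public class ConsumerTuningSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("enable.auto.commit", "false");      // disable auto-commit to commit offsets manually
        props.put("auto.commit.interval.ms", "5000");  // 5 s between auto-commits (the default)
        props.put("auto.offset.reset", "earliest");    // start from the oldest record when no offset is committed
        System.out.println(props.getProperty("auto.offset.reset"));
    }
}
```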

 

Methods

List<PartitionInfo> partitionsFor = consumer.partitionsFor("test3");     // partition info for a topic; useful for deciding what to subscribe to or assign.

Set<TopicPartition> assignment = consumer.assignment();                // the partitions currently assigned to this consumer

Map<TopicPartition, Long> endOffsets = consumer.endOffsets(assignment);    // end offset of each given partition; beginningOffsets() works the same way.

consumer.position(tp);        // position of the next record to be fetched

consumer.committed(tp);     // the last committed offset, i.e. where consumption resumes after a restart.

 

(1) Consumer reading messages:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class KafkaRead {
    private static KafkaConsumer<String, String> consumer;

    public static void main(String[] args) {
        Properties kafkaProps = new Properties();
        kafkaProps.put("bootstrap.servers", "localhost:9092");
        kafkaProps.put("group.id", "test");
        kafkaProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        kafkaProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumer = new KafkaConsumer<>(kafkaProps);
        consumer.subscribe(Collections.singletonList("test"));
        try {
            while (true) {
                // poll(long) is deprecated; pass a Duration instead
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.toString());
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            consumer.close();
        }
    }
}

 

(2) Subscribing to topics

            consumer = new KafkaConsumer<>(kafkaProps);
            // In practice, pick one of these; each call replaces the previous subscription.
            // Option 1: subscribe() with a topic list
            consumer.subscribe(Arrays.asList("test3", "test2"));
            // Option 2: subscribe() with a regular expression matching topic names
            consumer.subscribe(Pattern.compile("test.*"));
            // Option 3: assign() specific partitions
            List<PartitionInfo> partitionsFor = consumer.partitionsFor("test3"); // partition info for the topic
            TopicPartition topicPartition = new TopicPartition("test3", 2); // topic, partition
            consumer.assign(Arrays.asList(topicPartition));
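The pattern in option 2 is an ordinary java.util.regex pattern matched against full topic names. A standalone sketch (the topic names are made up) showing which names it would match:

```java
import java.util.Arrays;
import java.util.List;
import java.util.regex.Pattern;

public class TopicPatternSketch {
    public static void main(String[] args) {
        Pattern p = Pattern.compile("test.*");
        List<String> topics = Arrays.asList("test", "test2", "test3", "prod-log");
        for (String t : topics) {
            // matches() requires the whole topic name to match the pattern
            System.out.println(t + " -> " + p.matcher(t).matches());
        }
    }
}
```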

 

(3) Detailed operations (manual commit, clean shutdown, per-partition consumption, inspecting consumption progress)

                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Long.MAX_VALUE); // or Duration.ofMillis(1000)
                    System.out.println("Starting a new batch of records!");
                    Set<TopicPartition> partitions = records.partitions(); // all partitions present in this batch
                    for (TopicPartition tp : partitions) {
                        for (ConsumerRecord<String, String> record : records.records(tp)) { // consume per partition
                            System.out.println("Current offset/partition/value: " + record.offset() + ":" + record.partition() + ":" + record.value());
                            System.out.println("Next fetch position: " + consumer.position(tp));
                            System.out.println("Last committed offset: " + consumer.committed(tp));
//                            if (record.value().equals("break")) {
//                                consumer.wakeup(); // clean shutdown: the next poll() throws WakeupException
//                            }
                        }
//                        for (ConsumerRecord<String, String> record : records.records("test3")) {
//                            System.out.println(record.partition() + ":" + record.toString());
//                        }
                    }
                    consumer.commitAsync(new OffsetCommitCallback() {
                        @Override
                        public void onComplete(Map<TopicPartition, OffsetAndMetadata> map, Exception exception) {
                            System.out.println("Offset commit finished!");
                            if (exception == null) {
                                for (TopicPartition key : map.keySet()) {
                                    System.out.println(map.get(key));
                                }
                            }
                        }
                    });
                }

 

(4) Seeking to a specific offset

        Set<TopicPartition> assignment = consumer.assignment();
        while (assignment.isEmpty()) { // poll until partitions have actually been assigned
            consumer.poll(Duration.ofMillis(1000));
            assignment = consumer.assignment();
        }
        Map<TopicPartition, Long> endOffsets = consumer.endOffsets(assignment); // end offset of each assigned partition
        Map<TopicPartition, Long> t2S = new HashMap<>();
        for (TopicPartition tp : assignment) {
            t2S.put(tp, System.currentTimeMillis() - 1 * 24 * 3600 * 1000); // timestamp one day ago
        }
        // look up, per partition, the offset of the first message at or after the timestamp
        Map<TopicPartition, OffsetAndTimestamp> offsetsForTimes = consumer.offsetsForTimes(t2S);
        for (TopicPartition tp : assignment) {
            OffsetAndTimestamp offsetAndTimestamp = offsetsForTimes.get(tp);
            if (offsetAndTimestamp != null) { // null when no message exists at or after the timestamp
                consumer.seek(tp, offsetAndTimestamp.offset());
            }
        }
        // alternatively, jump straight to the beginning or end of the assigned partitions:
        // consumer.seekToBeginning(assignment);
        // consumer.seekToEnd(assignment);
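One pitfall in the timestamp arithmetic above: `1 * 24 * 3600 * 1000` happens to fit in an int, but longer spans silently overflow int arithmetic before the result is widened to long. A small standalone check:

```java
public class TimestampArithmetic {
    public static void main(String[] args) {
        long oneDay = 1 * 24 * 3600 * 1000;            // 86_400_000, fits in an int
        long thirtyDaysWrong = 30 * 24 * 3600 * 1000;  // int overflow happens before widening to long
        long thirtyDaysRight = 30L * 24 * 3600 * 1000; // the long literal forces long arithmetic
        System.out.println(oneDay);
        System.out.println(thirtyDaysWrong);
        System.out.println(thirtyDaysRight);
    }
}
```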

 

(5) Rebalance listener

Step 1: create a class implementing the ConsumerRebalanceListener interface

import java.util.Collection;

import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

public class MyConsumerRebalanceListener implements ConsumerRebalanceListener {
    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> collection) {
        // called before the rebalance, after the consumer stops fetching
        System.out.println("Before rebalance: " + collection);
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> collection) {
        // called after the rebalance, once the new partitions are assigned
        System.out.println("After rebalance: " + collection);
    }
}

 

Step 2: pass it when subscribing

consumer.subscribe(Arrays.asList("test3"), new MyConsumerRebalanceListener()); // subscribe with the listener

 

 

 

4. ZooKeeper

(1) Client command line

Create a node:

create /wgy/test1    --- create a node
-s: create a sequential node   -e: create an ephemeral node

 

 Reading and writing values:

get -s /wgy     --- read the value (with stat)
set /wgy "123"  --- set the value

Watching a node: a watch fires only once per registration

get -w /wgy

 

Delete nodes:

delete /wgy/test1    --- delete a node
deleteall /wgy       --- delete a node and all its children

 

View node status:

stat /wgy

 

 

(2) Client API

Dependencies:

        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.13.2</version>
            <scope>test</scope>
        </dependency>

        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-core</artifactId>
            <version>2.8.2</version>
        </dependency>

        <dependency>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
            <version>3.5.7</version>
        </dependency>

Main program:

import java.util.List;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class MyTest {
    public ZooKeeper zooKeeper = null;

    public static void main(String[] args) throws Exception {
        String connect = "localhost:2181";
        MyTest myTest = new MyTest();
        myTest.zooKeeper = new ZooKeeper(connect, 2000, new Watcher() {
            @Override
            public void process(WatchedEvent watchedEvent) {
                // re-register the watch on every event so it keeps firing
                try {
                    List<String> children = myTest.zooKeeper.getChildren("/wgy", true);
                    for (String child : children) {
                        System.out.println(child);
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
        // create a node
        myTest.zooKeeper.create("/wgy/test1", "test1".getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        // get the node's children and register a watch
        List<String> children = myTest.zooKeeper.getChildren("/wgy", true);
        // update the value (version -1 means match any version)
        System.out.println(myTest.zooKeeper.setData("/wgy", "wll".getBytes(), -1));
        // read the value without registering a watch
        System.out.println(new String(myTest.zooKeeper.getData("/wgy", false, new Stat())));
        // check whether the node exists
        System.out.println(myTest.zooKeeper.exists("/wgy", false));
        Thread.sleep(Long.MAX_VALUE);
    }
}

 

posted @ 2022-03-27 12:33  王啦啦