Kafka learning notes 1: hello, Kafka

This post covers:

  1. Installing Kafka, and quickly standing up Kafka in a development environment with Docker
  2. Integrating and using spring-kafka

I. Installation and usage (Windows)

Reference:
https://www.w3cschool.cn/apache_kafka/apache_kafka_basic_operations.html

Downloads

https://mirrors.bfsu.edu.cn/apache/zookeeper/zookeeper-3.7.0/apache-zookeeper-3.7.0-bin.tar.gz
https://mirrors.bfsu.edu.cn/apache/kafka/2.8.0/kafka_2.13-2.8.0.tgz

Preparation

  1. Do not put Kafka in a deeply nested directory; otherwise the startup script fails with a "the input line is too long / the syntax of the command is incorrect" error

  2. The development machine does not have much memory, so edit bin/windows/kafka-server-start.bat to reduce the heap:
    set KAFKA_HEAP_OPTS=-Xmx1G -Xms128M (the default is -Xmx1G -Xms1G)

  3. Edit the config files to run multiple brokers on a single node

    1. config/server.properties

      broker.id=0
      listeners=PLAINTEXT://:9092
      log.dirs=/tmp/kafka-logs
      
    2. config/server-9093.properties

      broker.id=1
      listeners=PLAINTEXT://:9093
      log.dirs=/tmp/kafka-logs-9093
      
    3. config/server-9094.properties

      broker.id=2
      listeners=PLAINTEXT://:9094
      log.dirs=/tmp/kafka-logs-9094
      
  4. Adjust the ZooKeeper configuration

    1. Copy zoo_sample.cfg to zoo.cfg
    2. Add admin.serverPort=8888 to zoo.cfg to avoid a port conflict with the Tomcat instance used for development (for details on AdminServer, see https://blog.csdn.net/fenglllle/article/details/107966591)

Startup and usage

  1. Start ZooKeeper

    bin/zkServer.sh start-foreground (on a plain Windows shell, use bin\zkServer.cmd instead)
    
  2. Start Kafka

    bin/windows/kafka-server-start.bat config/server.properties
    bin/windows/kafka-server-start.bat config/server-9093.properties
    bin/windows/kafka-server-start.bat config/server-9094.properties
    
  3. Stop Kafka

    bin/windows/kafka-server-stop.bat config/server.properties
    
  4. Usage (single replica)

    Create a topic
    bin/windows/kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic Hello-Kafka
    List topics
    bin/windows/kafka-topics.bat --list --zookeeper localhost:2181
    Send messages
    bin/windows/kafka-console-producer.bat --broker-list localhost:9092 --topic Hello-Kafka
    Consume messages
    bin/windows/kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic Hello-Kafka --from-beginning
    
  5. Usage (multiple replicas)

    bin/windows/kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic Multibrokerapplication
    bin/windows/kafka-topics.bat --describe --zookeeper localhost:2181 --topic Multibrokerapplication
    

    Note: each topic can have multiple replicas, and the replicas are placed on different brokers in the cluster. The replication factor cannot exceed the number of brokers; otherwise topic creation fails.
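
    The same constraint applies when creating topics programmatically. Below is a minimal sketch using the Java AdminClient from kafka-clients; it is my own addition (class name, topic, and broker address are just placeholders), not part of the original CLI walkthrough:

    import java.util.Collections;
    import java.util.Properties;

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreateTopicExample {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            // try-with-resources closes the AdminClient when done
            try (AdminClient admin = AdminClient.create(props)) {
                // 1 partition, replication factor 3: this only succeeds if the cluster
                // has at least 3 brokers (e.g. the 9092/9093/9094 setup above)
                NewTopic topic = new NewTopic("Multibrokerapplication", 1, (short) 3);
                admin.createTopics(Collections.singleton(topic)).all().get();
            }
        }
    }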


II. Quickly setting up a development environment with docker-compose

docker-compose.yml (adapted from https://github.com/wurstmeister/kafka-docker)

version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
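      # KAFKA_CREATE_TOPICS format is topic:partitions:replicas; this "test" topic is the one used by the spring-kafka example in section IV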
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_HEAP_OPTS: -Xmx1G
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

Start and stop commands

Start the services
docker-compose up -d
Stop the services
docker-compose stop
Stop and remove the services
docker-compose down

See also: installing Kafka with docker-compose

III. Native Kafka API

https://www.w3cschool.cn/apache_kafka/apache_kafka_simple_producer_example.html
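The page above walks through a producer built directly on the kafka-clients API. For quick reference, a minimal producer along those lines might look like the sketch below; the class name SimpleProducer, the loop, and the message contents are placeholders of mine, not code taken from that tutorial:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleProducer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // try-with-resources closes (and flushes) the producer on exit
        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 10; i++) {
                // send() is asynchronous; records are batched and sent in the background
                producer.send(new ProducerRecord<>("Hello-Kafka", Integer.toString(i), "message-" + i));
            }
        }
    }
}

The messages can then be verified with the kafka-console-consumer command from section I.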

IV. spring-kafka integration

Reference documentation

https://spring.io/projects/spring-kafka/

https://mvnrepository.com/artifact/org.springframework.kafka/spring-kafka

Kafka is set up with docker-compose (see section II)

Maven configuration

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <spring.version>4.3.30.RELEASE</spring.version>
        <spring.boot.version>1.5.22.RELEASE</spring.boot.version>
    </properties>
    
    <dependencies>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
            <version>1.3.11.RELEASE</version>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework</groupId>
                <artifactId>spring-framework-bom</artifactId>
                <version>${spring.version}</version>
                <scope>import</scope>
                <type>pom</type>
            </dependency>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-dependencies</artifactId>
                <version>${spring.boot.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
            <dependency>
                <groupId>com.google.code.gson</groupId>
                <artifactId>gson</artifactId>
                <version>2.8.1</version>
            </dependency>
        </dependencies>
    </dependencyManagement>
    
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <version>${spring.boot.version}</version>
            </plugin>
        </plugins>
    </build>

hello, spring-kafka

application.properties

spring.kafka.bootstrap-servers=localhost:9092

spring.kafka.producer.acks=1
spring.kafka.producer.batch-size=1000
spring.kafka.producer.retries=0
spring.kafka.producer.buffer-memory=40960
spring.kafka.producer.linger-ms=1
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer

spring.kafka.consumer.group-id=kafka_group_1
spring.kafka.consumer.enable-auto-commit=true
spring.kafka.consumer.auto-commit-interval=100
spring.kafka.consumer.session-timeout-ms=15000
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer

KafkaApplication

package com.exmaple;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class KafkaApplication {

    public static void main(String[] args) {
        SpringApplication.run(KafkaApplication.class, args);
    }

    @Autowired
    private MsgProducer msgProducer;

    @GetMapping("/hello")
    public String hello(@RequestParam(value = "name", defaultValue = "World") String name) {
        String msg = String.format("Hello %s!", name);
        msgProducer.send(msg);
        return msg;
    }
}

Producer

package com.exmaple;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class MsgProducer {

    private final Logger logger = LoggerFactory.getLogger(MsgProducer.class);

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void send(String value) {
        logger.info("sending msg: {}", value);
        kafkaTemplate.send("test", value);

        String key = "key1";
        logger.info("sending msg: {} with key: {}", value, key);
        kafkaTemplate.send("test", key, value + "2");

        logger.info("msg sent.");
    }
}
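
Note that kafkaTemplate.send() is asynchronous, so the example above never checks whether a send actually succeeded. A minimal sketch of registering a callback on the returned ListenableFuture is shown below; the class name CallbackMsgProducer is my own, the rest relies only on the spring-kafka 1.x / Spring 4.x API:

package com.exmaple;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Component;
import org.springframework.util.concurrent.ListenableFuture;

@Component
public class CallbackMsgProducer {

    private final Logger logger = LoggerFactory.getLogger(CallbackMsgProducer.class);

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void send(String value) {
        // send() returns immediately; the future completes once the broker acknowledges the record
        ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send("test", value);
        future.addCallback(
                result -> logger.info("sent to {}-{} at offset {}",
                        result.getRecordMetadata().topic(),
                        result.getRecordMetadata().partition(),
                        result.getRecordMetadata().offset()),
                ex -> logger.error("send failed", ex));
    }
}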

Consumer

package com.exmaple;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;


@Component
public class MsgConsumer {
    private final Logger logger = LoggerFactory.getLogger(MsgConsumer.class);

    @KafkaListener(topics = "test")
    public void processMsg(ConsumerRecord<?, ?> record) {
        logger.info("listener1: {}|{}|{}|{}", record.topic(), record.partition(), record.offset(), record.value());
    }

}
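
A listener method does not have to take a ConsumerRecord; spring-kafka can bind the payload and individual headers directly. The sketch below is my own variant (the class MsgConsumer2 and the group kafka_group_2 are made up); the groupId attribute requires spring-kafka 1.3+, and using a separate group means both listeners receive every message on the single-partition topic:

package com.exmaple;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.messaging.handler.annotation.Payload;
import org.springframework.stereotype.Component;

@Component
public class MsgConsumer2 {

    private final Logger logger = LoggerFactory.getLogger(MsgConsumer2.class);

    // Binds the record value and selected Kafka headers instead of the whole ConsumerRecord.
    @KafkaListener(topics = "test", groupId = "kafka_group_2")
    public void processMsg(@Payload String value,
                           @Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
                           @Header(KafkaHeaders.RECEIVED_PARTITION_ID) int partition) {
        logger.info("listener2: {}|{}|{}", topic, partition, value);
    }
}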