Kafka

1. Definition and Features

Official site: kafka.apache.org

Apache Kafka is a distributed publish-subscribe messaging system capable of moving massive volumes of data.

Its features include: (1) high throughput and low latency; (2) scalability; (3) durability; (4) fault tolerance; (5) high concurrency.

2. Installation and Configuration

2.1 Environment Preparation

The machine must already have the JDK and Zookeeper installed; their setup is not covered here.

2.2 Installing Kafka

1) Download

Official site: https://kafka.apache.org

Netdisk: https://pan.baidu.com/s/1Um2fGe36-ajiHIHlYuZuBQ (extraction code: jy4q)

2) Copy the downloaded archive to the Linux machine

3) Extract the archive

tar -zxvf kafka_2.13-2.5.0.tgz -C /usr/local

4) Enter the extracted directory

cd /usr/local/kafka_2.13-2.5.0

5) Edit the configuration file: uncomment the listeners=PLAINTEXT://:9092 line and set it to this machine's IP

vi config/server.properties

listeners=PLAINTEXT://192.168.159.128:9092

6) Start the broker

bin/kafka-server-start.sh config/server.properties

When the log ends with a line like "[KafkaServer id=0] started (kafka.server.KafkaServer)", the broker has started successfully.
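As an extra sanity check (not part of the original steps; a sketch assuming the broker address 192.168.159.128:9092 configured above), you can connect from Java with the AdminClient and ask the cluster for its id:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

import java.util.Properties;

public class BrokerCheck {

    // Build the AdminClient configuration for the given broker address.
    static Properties adminProps(String bootstrapServers) {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        return props;
    }

    public static void main(String[] args) throws Exception {
        try (AdminClient admin = AdminClient.create(adminProps("192.168.159.128:9092"))) {
            // describeCluster() returns futures; get() blocks until the broker answers
            String clusterId = admin.describeCluster().clusterId().get();
            System.out.println("Connected, cluster id: " + clusterId);
        }
    }
}
```

If the broker is not reachable, `get()` eventually fails with a timeout instead of printing the cluster id.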

2.3 Testing the producer and consumer with Kafka's built-in command-line tools

Create a topic:

bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic mykfk --partitions 2 --replication-factor 1

--zookeeper: the address of the Zookeeper service Kafka is connected to
--create: the action directive to create a topic
--topic: the name of the topic to create
--partitions: the number of partitions
--replication-factor: the replication factor
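The same topic can also be created from code. This is a sketch (not from the original post) using the Kafka AdminClient, mirroring the CLI flags above (topic mykfk, 2 partitions, replication factor 1) and assuming the broker listens at 192.168.159.128:9092:

```java
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class TopicCreate {

    // Mirror the CLI flags: topic name, --partitions, --replication-factor
    static NewTopic buildTopic(String name, int partitions, short replicationFactor) {
        return new NewTopic(name, partitions, replicationFactor);
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.159.128:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // createTopics is asynchronous; all().get() blocks until the broker confirms
            admin.createTopics(Collections.singleton(buildTopic("mykfk", 2, (short) 1)))
                 .all().get();
        }
    }
}
```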

List all topics:

bin/kafka-topics.sh --zookeeper localhost:2181 --list

Show details of a topic:

bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic mykfk 

Receive messages on the consumer side:

bin/kafka-console-consumer.sh --bootstrap-server 192.168.159.128:9092 --topic mykfk

Send messages from the producer side (open a new terminal):

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic mykfk

Type a message in the producer terminal and press Enter; it will appear in the consumer terminal.

3. Getting-Started Programs

3.1 Plain Java (hard-coded) approach

1) Environment preparation

First create a Spring Boot project and add the following dependencies:

<!-- kafka-clients -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.0.0</version>
</dependency>
<!-- kafka -->
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>2.0.0</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
        </exclusion>
    </exclusions>
</dependency>

2) A simple producer

package com.example.kafkaserver.controller;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

/**
 * @author zhongyushi
 * @date 2020/6/28
 * @dec simple producer demo
 */
public class ProductSimple {

    private static final String brokerList = "192.168.159.128:9092";
    private static final String topic = "mykfk";

    public static void main(String[] args) {
        Properties properties = new Properties();
        // set the key serializer
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // set the retry count
        properties.put(ProducerConfig.RETRIES_CONFIG, 10);
        // set the value serializer
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // set the cluster address
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerList);
        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
        ProducerRecord<String, String> record = new ProducerRecord<>(topic, "kafka", "hello,kafka!");
        try {
            producer.send(record);
        } catch (Exception e) {
            e.printStackTrace();
        }
        producer.close();
    }
}
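The send() above is fire-and-forget: it returns before the broker has acknowledged the write. As a sketch (same broker, topic, and serializer assumptions as the class above), you can block on the returned Future, or register a callback, to confirm where the record landed:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class ProducerWithAck {

    // Build the producer configuration for the given broker list.
    static Properties producerProps(String brokers) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        return props;
    }

    public static void main(String[] args) throws Exception {
        try (KafkaProducer<String, String> producer =
                 new KafkaProducer<>(producerProps("192.168.159.128:9092"))) {
            ProducerRecord<String, String> record =
                new ProducerRecord<>("mykfk", "kafka", "hello,kafka!");

            // Synchronous send: get() blocks until the broker acknowledges
            RecordMetadata meta = producer.send(record).get();
            System.out.println("written to partition " + meta.partition()
                    + " at offset " + meta.offset());

            // Asynchronous alternative: the callback fires when the ack (or error) arrives
            producer.send(record, (metadata, exception) -> {
                if (exception != null) exception.printStackTrace();
            });
        }
    }
}
```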

3) A simple consumer

package com.example.kafkaserver.controller;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;


import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

/**
 * @author zhongyushi
 * @date 2020/6/28
 * @dec simple consumer demo
 */
public class ConsumerSimple {
    private static final String brokerList = "192.168.159.128:9092";
    private static final String topic = "mykfk";
    private static final String groupId = "group.demo";

    public static void main(String[] args) {
        Properties properties = new Properties();
        // set the key deserializer
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // set the value deserializer
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // set the cluster address
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokerList);
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
        consumer.subscribe(Collections.singletonList(topic));

        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}

Start the consumer first, then start the producer; the consumer will print the message it receives.
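The consumer above loops forever and never releases its resources. A common shutdown pattern (sketched here under the same broker/topic/group assumptions) calls wakeup() from a JVM shutdown hook, which makes poll() throw WakeupException so the consumer can close cleanly:

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.errors.WakeupException;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ConsumerWithShutdown {

    // Build the consumer configuration for the given brokers and group.
    static Properties consumerProps(String brokers, String groupId) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, brokers);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        return props;
    }

    public static void main(String[] args) {
        KafkaConsumer<String, String> consumer =
            new KafkaConsumer<>(consumerProps("192.168.159.128:9092", "group.demo"));
        // wakeup() is the one thread-safe KafkaConsumer method; the hook uses it to break poll()
        Runtime.getRuntime().addShutdownHook(new Thread(consumer::wakeup));
        try {
            consumer.subscribe(Collections.singletonList("mykfk"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println(record.value());
                }
            }
        } catch (WakeupException e) {
            // expected on shutdown; fall through to close()
        } finally {
            consumer.close();
        }
    }
}
```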

3.2 Annotation-based approach

1) Add the dependency

<!-- kafka -->
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.5.2.RELEASE</version>
</dependency>

2) Add configuration

spring.kafka.producer.bootstrap-servers=192.168.159.128:9092
spring.kafka.consumer.bootstrap-servers=192.168.159.128:9092
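The two bootstrap-server entries are the only settings strictly required here; other client options live under the same spring.kafka.* prefix. For example (optional, a sketch showing values used elsewhere in this post; the String serializers are already the Spring Boot defaults):

```properties
# consumer group used by the listener (the @KafkaListener annotation can also set it)
spring.kafka.consumer.group-id=group.demo
# String (de)serializers are the Spring Boot defaults, shown here for clarity
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
```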

3) Write a controller that sends and receives messages

package com.example.kafkaserver.controller;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import javax.annotation.Resource;

/**
 * @author zhongyushi
 * @date 2020/6/28
 * @dec Kafka endpoint test
 */
@RestController
@RequestMapping("/msg")
public class MessageController {

    // the topic to use
    private static final String topic = "mykfk";

    // inject the Kafka template
    @Resource
    private KafkaTemplate<String, String> template;


    /**
     * Send a message
     *
     * @param message
     * @return
     */
    @GetMapping("/send/{message}")
    public String send(@PathVariable String message) {
        template.send(topic, message);
        return message + ", message sent successfully";
    }

    /**
     * Receive messages
     * @param message
     */
    // listener
    @KafkaListener(id = "111", topics = topic, groupId = "group.demo")
    public void get(String message) {
        System.out.println("received message: " + message);
    }
}

4) Test

Start the project and visit http://localhost:8080/msg/send/123; the send succeeds and the message content is printed on the console.

 

posted @ 2020-07-12 20:41  钟小嘿