Setting up Kafka with Docker

1. Pull the images

docker pull wurstmeister/zookeeper
docker pull wurstmeister/kafka
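
Optionally, confirm that both images are now available locally:

docker images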

2. Check that docker-compose is installed

docker-compose -v

3. Create the docker-compose.yml file

cd /data && mkdir docker-compose && cd docker-compose

touch docker-compose.yml

Add the following content:

version: '2'
services:
  zookeeper:
    image: "wurstmeister/zookeeper"
    hostname: "zookeeper"
    container_name: "zookeeper"
    networks:
      - local
  kafka:
    image: "wurstmeister/kafka"
    hostname: "kafka"
    container_name: "kafka"
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    networks:
      - local
# define a bridge network named local and attach both services to it
networks:
  local:
    driver: bridge
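
Here KAFKA_ADVERTISED_HOST_NAME is the hostname the broker advertises to its clients, and KAFKA_ZOOKEEPER_CONNECT points the broker at the zookeeper service on its default port 2181. Before starting anything, you can optionally let docker-compose validate and print the resolved file:

docker-compose config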

4. Change into the directory containing the file and run

cd /data/docker-compose
docker-compose build

5. Start the services

docker-compose up -d

6. Check the running containers

docker ps

7. Enter the Kafka container

docker exec -it kafka bash

8. Create a topic

kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic mykafka

If the command prints Created topic mykafka. the topic was created successfully.
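
To double-check the partition and replica assignment, the same script's --describe option can be used (an optional check, using the same topic name as above):

kafka-topics.sh --describe --zookeeper zookeeper:2181 --topic mykafka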

9. List topics

kafka-topics.sh --list --zookeeper zookeeper:2181

10. Start a console producer

kafka-console-producer.sh --broker-list kafka:9092 --topic mykafka

11. Consume the topic

kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic mykafka --from-beginning
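
The console consumer can also join a named consumer group, which is what the group.id / auto.offset.reset link in the notes below is about; a variant assuming a group called mygroup:

kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic mykafka --from-beginning --group mygroup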

  

Reposted from:

https://www.jianshu.com/p/16e4fb821fa8

 

Notes:

The error host kafka not found is thrown frequently. My fix is to edit the /etc/hosts file on the host and add the line below, because the broker advertises the hostname kafka, so clients outside Docker must be able to resolve it:

127.0.0.1 kafka
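
An alternative to editing /etc/hosts is to have the broker advertise an address the client can already resolve. In the wurstmeister/kafka image this is the same KAFKA_ADVERTISED_HOST_NAME variable; a sketch assuming the docker host's IP is 192.168.1.100 (replace with your own, and avoid 127.0.0.1 if other containers also need to reach the broker):

    environment:
      KAFKA_ADVERTISED_HOST_NAME: 192.168.1.100
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181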

My Java program used for verification:

package com.example.one.kafka;
import java.util.Properties;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// 示例地址: https://www.yiibai.com/kafka/apache_kafka_simple_producer_example.html
public class SimpleProducer {
    public static void main(String[] args) throws Exception {
        //Assign topicName to string variable
        String topicName = "mykafka";
        // create instance for properties to access producer configs
        Properties props = new Properties();
        // broker address; localhost:9092 is the host port mapped to the kafka container
        props.put("bootstrap.servers", "localhost:9092");
        //Set acknowledgements for producer requests.
        props.put("acks", "all");
        // retries=0: do not retry automatically if a request fails
        props.put("retries", 0);
        //Specify buffer size in config
        props.put("batch.size", 16384);
        // linger.ms=1: wait up to 1 ms so records can be batched together
        props.put("linger.ms", 1);
        //The buffer.memory controls the total amount of memory available to the producer for buffering.
        props.put("buffer.memory", 33554432);
        props.put("key.serializer","org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer","org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<>(props);

        for (int i = 8; i < 19; i++)
            producer.send(new ProducerRecord<String, String>(topicName,
                    Integer.toString(i), Integer.toString(i)));
        System.out.println("Message sent successfully");
        producer.close();
    }
}
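
For completeness, a minimal consumer counterpart to the producer above. This is a sketch under my own assumptions: the same localhost:9092 mapping and topic mykafka, a hypothetical group name mygroup, and a kafka-clients 2.x dependency (for poll(Duration)). The group.id and auto.offset.reset settings are the ones discussed in the link below.

package com.example.one.kafka;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // same broker address as the producer above
        props.put("bootstrap.servers", "localhost:9092");
        // consumer group id; committed offsets are tracked per group
        props.put("group.id", "mygroup");
        // start from the earliest offset when the group has no committed offset
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("mykafka"));
        try {
            while (true) {
                // poll for new records and print each one
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
                for (ConsumerRecord<String, String> record : records)
                    System.out.printf("offset=%d, key=%s, value=%s%n",
                            record.offset(), record.key(), record.value());
            }
        } finally {
            consumer.close();
        }
    }
}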

 

Docker download:

https://download.docker.com/

Kafka group.id and auto.offset.reset:

https://zhuanlan.zhihu.com/p/439936732

 
