ZooKeeper + Kafka

Introduction

ZooKeeper

As the name suggests, ZooKeeper is the "zoo keeper" that looks after Hadoop (the elephant), Hive (the bee), and Pig (the pig); the distributed clusters of Apache HBase and Apache Solr also rely on it. ZooKeeper is a distributed, open-source coordination service that started as a sub-project of Hadoop. Its main features are configuration management, naming service, distributed locks, and cluster management.
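To make those features a little more concrete, here is a small sketch using zkCli.sh, the command-line client shipped in ZooKeeper's bin directory, run against the cluster built later in this post. The znode paths and values (/myapp, /myapp/config, /job-1-lock) are invented for the example.

### Connect to any node of the ensemble
zkCli.sh -server 192.168.200.197:2181
### Store a configuration value under a znode and read it back (configuration management / naming)
create /myapp ""
create /myapp/config "db=192.168.200.198:3306"
get /myapp/config
### An ephemeral znode disappears when the client session ends -- the primitive behind distributed locks
create -e /job-1-lock "owner=node1"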

Kafka

Kafka is a distributed streaming platform. It can run on a single server or be deployed across multiple servers as a cluster. It provides publish and subscribe functionality: producers send data into Kafka, and consumers read data back out of it. Kafka is known for high throughput, low latency, and strong fault tolerance.

Environment Preparation

IP                Hostname    Services
192.168.200.197   1           zookeeper + kafka
192.168.200.198   2           zookeeper + kafka
192.168.200.199   3           zookeeper + kafka

 

### Install the JDK
yum install -y java-1.8.0-openjdk
### Disable the firewall and set SELinux to permissive
systemctl stop firewalld
setenforce 0
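These two commands only take effect for the current boot. If you want the firewall to stay off and SELinux to stay permissive after a reboot, and want to double-check the JDK, the usual commands are roughly the following (the sed line edits the standard /etc/selinux/config file):

### Verify the JDK
java -version
### Keep the firewall off and SELinux permissive after a reboot
systemctl disable firewalld
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config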

Install ZooKeeper

### Upload the installation packages
[root@1 ~]# ls
anaconda-ks.cfg  apache-zookeeper-3.7.0-bin.tar.gz  kafka_2.12-2.8.0.tgz
### Extract, and move the result under /usr/zookeeper so it matches ZOOKEEPER_HOME below
[root@1 ~]# tar -zxvf apache-zookeeper-3.7.0-bin.tar.gz
[root@1 ~]# mkdir -p /usr/zookeeper && mv apache-zookeeper-3.7.0-bin /usr/zookeeper/
### Configure the environment variables
[root@1 ~]# vim /etc/profile
export ZOOKEEPER_HOME=/usr/zookeeper/apache-zookeeper-3.7.0-bin
export PATH=$ZOOKEEPER_HOME/bin:$PATH
### Make the environment variables take effect
[root@1 ~]# source /etc/profile
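To confirm the profile change took effect, echo the variable and check that zkServer.sh now resolves from PATH; both should point into /usr/zookeeper/apache-zookeeper-3.7.0-bin.

[root@1 ~]# echo $ZOOKEEPER_HOME
[root@1 ~]# which zkServer.sh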
### Go to the apache-zookeeper-3.7.0-bin/conf directory, copy zoo_sample.cfg to zoo.cfg, and edit it
[root@1 conf]# cat zoo.cfg 
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/opt/zookeeper/data
dataLogDir=/opt/zookeeper/logs
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
server.1=192.168.200.197:2888:3888
server.2=192.168.200.198:2888:3888
server.3=192.168.200.199:2888:3888
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true
### Create the data and log directories
[root@1 ~]# mkdir -p /opt/zookeeper/data
[root@1 ~]# mkdir -p /opt/zookeeper/logs
### Set the node ID (myid must be unique: 1, 2 and 3 on the three nodes)
[root@1 ~]# echo "1" > /opt/zookeeper/data/myid
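zoo.cfg is identical on all three machines; only myid differs, and it has to match the server.N entry for that host. Assuming ZooKeeper is unpacked at the same path on nodes 2 and 3 and they are reachable over SSH, the configuration can be pushed out along these lines:

### Copy zoo.cfg to the other nodes and give each its own myid
[root@1 ~]# scp /usr/zookeeper/apache-zookeeper-3.7.0-bin/conf/zoo.cfg 192.168.200.198:/usr/zookeeper/apache-zookeeper-3.7.0-bin/conf/
[root@1 ~]# scp /usr/zookeeper/apache-zookeeper-3.7.0-bin/conf/zoo.cfg 192.168.200.199:/usr/zookeeper/apache-zookeeper-3.7.0-bin/conf/
[root@1 ~]# ssh 192.168.200.198 'mkdir -p /opt/zookeeper/data /opt/zookeeper/logs && echo "2" > /opt/zookeeper/data/myid'
[root@1 ~]# ssh 192.168.200.199 'mkdir -p /opt/zookeeper/data /opt/zookeeper/logs && echo "3" > /opt/zookeeper/data/myid'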
### Start ZooKeeper (in apache-zookeeper-3.7.0-bin/bin, on every node)
[root@1 bin]# sh zkServer.sh start
### Check the status
[root@1 bin]# sh zkServer.sh status
/bin/java
ZooKeeper JMX enabled by default
Using config: /usr/zookeeper/apache-zookeeper-3.7.0-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
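Once ZooKeeper is running on all three machines, exactly one node should report Mode: leader and the other two Mode: follower. Assuming the same install path on nodes 2 and 3, their status can be checked from node 1 like this:

[root@1 ~]# ssh 192.168.200.198 '/usr/zookeeper/apache-zookeeper-3.7.0-bin/bin/zkServer.sh status'
[root@1 ~]# ssh 192.168.200.199 '/usr/zookeeper/apache-zookeeper-3.7.0-bin/bin/zkServer.sh status'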

Install Kafka

### Extract
[root@1 ~]# tar -zxvf kafka_2.12-2.8.0.tgz
### Configure Kafka (broker.id and listeners must be different on every broker)
[root@1 ~]# vim kafka_2.12-2.8.0/config/server.properties
broker.id=0
listeners=PLAINTEXT://192.168.200.197:9092
zookeeper.connect=192.168.200.197:2181,192.168.200.198:2181,192.168.200.199:2181
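For reference, the only lines that change on the other two brokers in this layout are broker.id and the listener address; the exact id values just need to be unique, for example:

### Node 2 (192.168.200.198)
broker.id=1
listeners=PLAINTEXT://192.168.200.198:9092
### Node 3 (192.168.200.199)
broker.id=2
listeners=PLAINTEXT://192.168.200.199:9092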
### Start Kafka from inside the kafka_2.12-2.8.0 directory
nohup bin/kafka-server-start.sh config/server.properties >> /var/log/kafka-server.log 2>&1 &
### Stop Kafka
bin/kafka-server-stop.sh
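To confirm the broker came up, check the log file the start command writes to and that something is listening on port 9092; a healthy broker logs a line containing "started (kafka.server.KafkaServer)".

### Check the startup log and the listener port
tail -n 20 /var/log/kafka-server.log
ss -tlnp | grep 9092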

Test Kafka

[root@1 kafka_2.12-2.8.0]# bin/kafka-topics.sh --create --bootstrap-server 192.168.200.198:9092 --replication-factor 3 --partitions 2 --topic test-ken-io
[root@1 kafka_2.12-2.8.0]# bin/kafka-topics.sh --list --bootstrap-server 192.168.200.198:9092
test-ken-io
[root@1 kafka_2.12-2.8.0]# bin/kafka-console-producer.sh --broker-list  192.168.200.198:9092  --topic test-ken-io
>wuhu
>gezi
[root@1 kafka_2.12-2.8.0]# bin/kafka-console-consumer.sh --bootstrap-server 192.168.200.198:9092 --topic test-ken-io --from-beginning
gezi
wuhu
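The consumer printing the messages in a different order than they were sent is expected: the topic has 2 partitions, and Kafka only guarantees ordering within a single partition. To see how the partitions and their replicas are spread across the three brokers, the topic can be inspected with --describe:

[root@1 kafka_2.12-2.8.0]# bin/kafka-topics.sh --describe --bootstrap-server 192.168.200.198:9092 --topic test-ken-io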

 
