Big Data (Flink & Hadoop & Kibana & ES) Cluster Restart and Web Addresses

1. Cluster Restart

1.1 Real-Time Computing Cluster Restart

1.1.1 Stop all firewalls

systemctl stop firewalld
systemctl disable firewalld   ## disable (not enable), so the firewall stays off after a reboot
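
A quick way to confirm the firewall is actually down on each node (is-active prints "inactive" once the unit has stopped):

## run on every node; expect "inactive"
systemctl is-active firewalld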

1.1.2 Start ZooKeeper

cd apache-zookeeper-3.8.4-bin/bin
./zkServer.sh start   ## run on every node

## check that a QuorumPeerMain process is present
jps
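
Beyond jps, zkServer.sh reports each node's quorum role directly; in a healthy three-node ensemble one node reports leader and the other two follower:

./zkServer.sh status
## expect "Mode: leader" on one node and "Mode: follower" on the others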

1.1.3 Start Kafka

cd kafka_2.13-3.4.0
./bin/kafka-server-start.sh -daemon ./config/server.properties

## check for a Kafka process
jps
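
jps only proves the JVM is up; listing topics through the broker proves it is actually serving requests (after the create step below, real_time should appear in the output):

## should return without error; lists all topics on the broker
./bin/kafka-topics.sh --list --bootstrap-server localhost:9092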

## create the topic
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1  --topic real_time

## start a console consumer (config file shown below)
./kafka-console-consumer.sh --topic real_time --bootstrap-server 192.168.0.104:9092,192.168.0.105:9092,192.168.0.106:9092 --consumer.config ../config/consumer.properties

########### consumer.properties ########### 
# format: host1:port1,host2:port2 ...
bootstrap.servers=192.168.0.104:9092,192.168.0.105:9092,192.168.0.106:9092

# consumer group id
group.id=test-consumer-group
########### consumer.properties ########### 
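
To verify the pipeline end to end, a console producer can feed the topic while the consumer above is running (a minimal sketch using one of the same brokers):

## type messages; each line should show up in the consumer terminal
./kafka-console-producer.sh --topic real_time --bootstrap-server 192.168.0.104:9092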

1.1.4 Start Flume

# start in the foreground
cd apache-flume-1.11.0-bin/bin
./flume-ng agent --conf ../conf/ --conf-file ../conf/test.conf --name client -Dflume.root.logger=INFO,console

## run in the background
nohup ./flume-ng agent --conf ../conf/ --conf-file ../conf/test.conf --name client -Dflume.root.logger=INFO,console &
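
The runbook never shows test.conf itself; a minimal sketch of a config the commands above would accept (the agent name must be client to match --name client; the netcat source, memory channel, and logger sink are illustrative assumptions, not the original file):

## test.conf -- hypothetical example: netcat source -> memory channel -> logger sink
client.sources = r1
client.channels = c1
client.sinks = k1

client.sources.r1.type = netcat
client.sources.r1.bind = localhost
client.sources.r1.port = 44444
client.sources.r1.channels = c1

client.channels.c1.type = memory
client.channels.c1.capacity = 1000

client.sinks.k1.type = logger
client.sinks.k1.channel = c1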

1.1.5 Start MySQL

su mysqladmin
service mysql start

1.1.6 Start Flink

cd $flinkhome
bin/start-cluster.sh
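
Once the standalone cluster is up, the running processes and the job list confirm it (run from $flinkhome):

## expect StandaloneSessionClusterEntrypoint (JobManager) and TaskManagerRunner
jps

## lists running and scheduled jobs; an empty list still proves the JobManager answers
bin/flink list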

1.2 Hadoop Cluster Restart

cd /opt/hadoop/hadoop-2.10.2/sbin
./start-all.sh
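
start-all.sh launches both HDFS and YARN; jps should show the daemons, and a dfsadmin report confirms the DataNodes registered (assuming $HADOOP_HOME/bin is on PATH):

## expect NameNode, SecondaryNameNode, ResourceManager on the master
## and DataNode, NodeManager on the workers
jps

## summary of live DataNodes and HDFS capacity
hdfs dfsadmin -report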

1.3 Restart Kibana

/opt/hadoop/kibana-8.13.0/bin/kibana --allow-root
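
Started this way, Kibana ties up the terminal and exits with it; a backgrounded variant in the spirit of the Flume command above (the log file path is an assumption):

nohup /opt/hadoop/kibana-8.13.0/bin/kibana --allow-root > /tmp/kibana.out 2>&1 &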

2. Common Cluster Addresses

Flink:         http://192.168.0.104:8081/
HBase:         http://192.168.0.104:60010/
YARN:          http://192.168.0.104:8088/
Hadoop (HDFS): http://192.168.0.104:9870/
Kibana:        http://192.168.0.104:5601/app/integrations/browse
Elasticsearch: http://192.168.0.104:9200/
Neo4j:         http://master:7474/browser/
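
A quick curl against the cluster health endpoint confirms Elasticsearch is actually serving (in 8.x security is on by default, so https and -u elastic:<password> may be required):

curl "http://192.168.0.104:9200/_cluster/health?pretty"
## "status" of green (or yellow on a single-replica setup) means the cluster is up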