Notes from a Hadoop High-Availability (HA) Setup Experiment
Original article: https://www.cnblogs.com/liugp/p/16607424.html
OS image: CentOS-7-x86_64-Everything-2207-02.iso
ZooKeeper binary package: zookeeper-3.4.8.tar.gz
1. Download
Official archive: https://archive.apache.org/dist/hadoop/common/
Note: the Aliyun, Tsinghua, and USTC mirrors do not carry every release.
Aliyun mirror: https://mirrors.aliyun.com/apache/hadoop/common/
Tsinghua mirror: https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common/
USTC mirror: https://mirrors.ustc.edu.cn/apache/hadoop/common/
2. Add the Hadoop binaries to the PATH
vi /etc/profile
export HADOOP_HOME=/opt/moulds/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
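The new variables only take effect in the current shell after the profile is re-read (or a new login shell is opened):
source /etc/profile
echo $HADOOP_HOME    # should print /opt/moulds/hadoop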
3. Extract
tar zxf /opt/hadoop-3.1.3.tar.gz -C /opt/moulds
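Note that the archive unpacks to /opt/moulds/hadoop-3.1.3, while HADOOP_HOME above points to /opt/moulds/hadoop, so the directory was presumably renamed (or symlinked) to match, e.g.:
mv /opt/moulds/hadoop-3.1.3 /opt/moulds/hadoop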
4. Configuration
4.1 Core configuration (core-site.xml)
Create the storage directory:
mkdir -p /root/HadoopCluster/hadoop/dfs/namenode
vi $HADOOP_HOME/etc/hadoop/core-site.xml
<configuration>
  <!-- Default file system; Hadoop supports file://, HDFS, GFS, Aliyun/Amazon cloud storage, and others -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://masters</value>
  </property>
  <!-- Local path where Hadoop stores its data -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/root/HadoopCluster/hadoop/dfs/namenode/</value>
  </property>
  <!-- ZooKeeper quorum addresses -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>master1:2181,master2:2181,slaver1:2181,slaver2:2181,slaver3:2181</value>
  </property>
  <!-- Static user identity for the HDFS web UI -->
  <property>
    <name>hadoop.http.staticuser.user</name>
    <value>root</value>
  </property>
  <!-- Hosts from which the root user is allowed to act as a proxy -->
  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
  </property>
  <!-- Groups whose users root is allowed to impersonate -->
  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
  <!-- Users that root is allowed to impersonate -->
  <property>
    <name>hadoop.proxyuser.root.users</name>
    <value>*</value>
  </property>
  <!-- For each proxy user (root here), hosts must be configured, and at least one of groups or users must be configured. -->
  <!-- How long deleted files are kept in the trash, in minutes -->
  <property>
    <name>fs.trash.interval</name>
    <value>1440</value>
  </property>
</configuration>
4.2 HDFS (hdfs-site.xml)
mkdir -p /root/HadoopCluster/hadoop/dfs/journalnode
vi $HADOOP_HOME/etc/hadoop/hdfs-site.xml
<configuration>
  <!-- Logical service name for the NameNode cluster. Default: null -->
  <property>
    <name>dfs.nameservices</name>
    <value>masters</value>
  </property>
  <!-- NameNodes that belong to this nameservice, each given an ID; here mt1 and mt2. Default: null -->
  <property>
    <name>dfs.ha.namenodes.masters</name>
    <value>mt1,mt2</value>
  </property>
  <!-- RPC address and port of NameNode mt1 (RPC is used to talk to DataNodes). Default port: 9000; master1 is the node's hostname -->
  <property>
    <name>dfs.namenode.rpc-address.masters.mt1</name>
    <value>master1:8082</value>
  </property>
  <!-- RPC address and port of NameNode mt2 (RPC is used to talk to DataNodes). Default port: 9000; master2 is the node's hostname -->
  <property>
    <name>dfs.namenode.rpc-address.masters.mt2</name>
    <value>master2:8082</value>
  </property>
  <!-- HTTP (web UI) address and port of NameNode mt1 -->
  <property>
    <name>dfs.namenode.http-address.masters.mt1</name>
    <value>master1:9870</value>
  </property>
  <!-- HTTP (web UI) address and port of NameNode mt2 -->
  <property>
    <name>dfs.namenode.http-address.masters.mt2</name>
    <value>master2:9870</value>
  </property>
  <!-- JournalNodes over which the NameNodes share their edit log -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://master1:8485;master2:8485;slaver1:8485;slaver2:8485;slaver3:8485/masters</value>
  </property>
  <!-- Proxy class that clients use to locate the currently active NameNode (enables client-side failover).
       Default: org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider -->
  <property>
    <name>dfs.client.failover.proxy.provider.masters</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Fencing method used to prevent split brain in HDFS HA. It can be a built-in method (shell or sshfence)
       or a user-defined one. sshfence(hadoop:9922) is recommended; the values in parentheses are the user name
       and port. Note that this requires passwordless SSH between the two NameNode machines. Fencing guarantees
       that only one NameNode is active; if both become active, the new one forcibly kills the old one. -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <!-- Private key used by sshfence for its SSH connection -->
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
  </property>
  <!-- Directory where each JournalNode stores its edits -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/root/HadoopCluster/hadoop/dfs/journalnode</value>
  </property>
  <!-- Enable automatic NameNode failover -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <!-- Number of replicas per block. Default: 3 -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <!-- Whether permission checking is enabled -->
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
</configuration>
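The sshfence method above relies on passwordless SSH between the two NameNode hosts using the key at /root/.ssh/id_rsa. The original notes set this up elsewhere; a minimal sketch (run on master1, and mirrored on master2) might look like:
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa    # skip if the key already exists
ssh-copy-id root@master1
ssh-copy-id root@master2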
4.3 Worker nodes
vi $HADOOP_HOME/etc/hadoop/workers
master1
master2
slaver1
slaver2
slaver3
4.4 ResourceManager (yarn-site.xml)
vi $HADOOP_HOME/etc/hadoop/yarn-site.xml
<configuration>
  <!-- Enable ResourceManager HA -->
  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <!-- Identifier for this ResourceManager cluster -->
  <property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>yarns</value>
  </property>
  <!-- IDs of the ResourceManagers in the cluster; the settings below reference these IDs -->
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <!-- Node that runs ResourceManager rm1 -->
  <property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>master1</value>
  </property>
  <!-- Node that runs ResourceManager rm2 -->
  <property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>master2</value>
  </property>
  <!-- Web UI address of ResourceManager rm1 -->
  <property>
    <name>yarn.resourcemanager.webapp.address.rm1</name>
    <value>master1:8088</value>
  </property>
  <!-- Web UI address of ResourceManager rm2 -->
  <property>
    <name>yarn.resourcemanager.webapp.address.rm2</name>
    <value>master2:8088</value>
  </property>
  <!-- ZooKeeper quorum addresses -->
  <property>
    <name>hadoop.zk.address</name>
    <value>master1:2181,master2:2181,slaver1:2181,slaver2:2181,slaver3:2181</value>
  </property>
  <!-- Enable ResourceManager recovery after restart; default: false -->
  <property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
  </property>
  <!-- Class used to store ResourceManager state -->
  <property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
  </property>
  <!-- Auxiliary shuffle service required by MapReduce -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <!-- Whether to enforce physical-memory limits on containers -->
  <property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>false</value>
  </property>
  <!-- Whether to enforce virtual-memory limits on containers -->
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>
  <!-- Enable log aggregation -->
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <!-- URL of the YARN log/history server -->
  <property>
    <name>yarn.log.server.url</name>
    <value>http://master1:19888/jobhistory/logs</value>
  </property>
  <!-- Retention time for aggregated logs: 7 days -->
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
  </property>
</configuration>
4.5 MapReduce (mapred-site.xml)
vi $HADOOP_HOME/etc/hadoop/mapred-site.xml
<configuration>
  <!-- Default execution framework for MapReduce jobs: yarn (cluster mode) or local -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <!-- MapReduce JobHistory server address -->
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master1:10020</value>
  </property>
  <!-- MapReduce JobHistory web UI address -->
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master1:19888</value>
  </property>
  <!-- Environment for the MapReduce ApplicationMaster -->
  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>
  <!-- Environment for map tasks -->
  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>
  <!-- Environment for reduce tasks -->
  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>
</configuration>
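After editing the four files, the effective settings can be spot-checked on master1 with hdfs getconf (not part of the original notes):
hdfs getconf -confKey fs.defaultFS    # should print hdfs://masters
hdfs getconf -namenodes               # should list master1 master2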
5. Distribute to the other nodes
cd /opt/moulds
tar -zcf hadoop.tar.gz hadoop
scp hadoop.tar.gz root@master2:/opt/moulds
scp hadoop.tar.gz root@slaver1:/opt/moulds
scp hadoop.tar.gz root@slaver2:/opt/moulds
scp hadoop.tar.gz root@slaver3:/opt/moulds
scp /etc/profile root@master2:/etc/profile
scp /etc/profile root@slaver1:/etc/profile
scp /etc/profile root@slaver2:/etc/profile
scp /etc/profile root@slaver3:/etc/profile
On master2, slaver1, slaver2, slaver3:
tar zxf /opt/moulds/hadoop.tar.gz -C /opt/moulds
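Equivalently, the copy and remote extraction can be done in one loop from master1 (assuming passwordless SSH to every node):
for h in master2 slaver1 slaver2 slaver3; do
  scp /opt/moulds/hadoop.tar.gz root@$h:/opt/moulds/
  scp /etc/profile root@$h:/etc/profile
  ssh root@$h "tar zxf /opt/moulds/hadoop.tar.gz -C /opt/moulds"
done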
6. Initialization and first start
On master1, master2, slaver1, slaver2, slaver3:
zkServer.sh start
hdfs --daemon start journalnode
On master1:
hdfs namenode -format
hdfs --daemon start namenode
On master2:
hdfs namenode -bootstrapStandby
hdfs --daemon start namenode
On master1:
hdfs zkfc -formatZK
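After formatting, the HA znode can be checked from any node with the ZooKeeper client (not shown in the original log; the path is the Hadoop default):
zkCli.sh -server master1:2181 ls /hadoop-ha    # should list the nameservice: [masters]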
On master1, master2, slaver1, slaver2, slaver3 (install psmisc from the local CD-ROM repository; it provides the fuser command that the sshfence fencing method relies on):
mkdir /opt/packages
mount /dev/cdrom /opt/packages
mv /etc/yum.repos.d/ /etc/yum.repos.d.bak
mkdir /etc/yum.repos.d/
vi /etc/yum.repos.d/a.repo
[Local]
name=Local
baseurl=file:///opt/packages
gpgcheck=0
enabled=1
yum clean all
yum -y install psmisc
On master1, add the following exports near the top of both scripts:
vi $HADOOP_HOME/sbin/start-dfs.sh
vi $HADOOP_HOME/sbin/stop-dfs.sh
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export HDFS_ZKFC_USER=root
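The log below shows start-yarn.sh succeeding as root; in Hadoop 3 that also requires the YARN_* user variables to be visible to start-yarn.sh and stop-yarn.sh, so presumably they were added there (or to hadoop-env.sh) as well, e.g.:
vi $HADOOP_HOME/sbin/start-yarn.sh
vi $HADOOP_HOME/sbin/stop-yarn.sh
# add near the top of both scripts:
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root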
On master1:
start-dfs.sh
[root@master1 ~]# jps
1312 JournalNode
2640 DFSZKFailoverController
2705 Jps
1490 NameNode
2242 DataNode
1236 QuorumPeerMain
[root@master1 ~]#
On master1:
start-yarn.sh
[root@master1 ~]# jps
1312 JournalNode
2640 DFSZKFailoverController
1490 NameNode
2242 DataNode
1236 QuorumPeerMain
3205 NodeManager
3349 Jps
3051 ResourceManager
On master1:
mapred --daemon start historyserver
7. Tests
7.1 HDFS
7.1.1 Check status
[root@master1 ~]# hdfs haadmin -getServiceState mt1
active
[root@master1 ~]# hdfs haadmin -getServiceState mt2
standby
http://192.168.30.10:9870
http://192.168.30.11:9870
Overview 'master1:8082' (active)
Overview 'master2:8082' (standby)
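A quick smoke test of HDFS could be run at this point (not part of the original log; the paths are arbitrary examples):
hdfs dfs -mkdir -p /smoke
hdfs dfs -put /etc/hosts /smoke/
hdfs dfs -ls /smoke
hdfs dfs -cat /smoke/hosts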
7.1.2 Forced active/standby switchover
[root@master1 ~]# hdfs haadmin -transitionToStandby --forcemanual mt1
You have specified the --forcemanual flag. This flag is dangerous, as it can induce a split-brain scenario that WILL CORRUPT your HDFS namespace, possibly irrecoverably.
It is recommended not to use this flag, but instead to shut down the cluster and disable automatic failover if you prefer to manually manage your HA state.
You may abort safely by answering 'n' or hitting ^C now.
Are you sure you want to continue? (Y or N) y
2025-03-09 18:58:56,592 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for NameNode at master1/192.168.30.10:8082
[root@master1 ~]# hdfs haadmin -transitionToActive --forcemanual mt2
You have specified the --forcemanual flag. This flag is dangerous, as it can induce a split-brain scenario that WILL CORRUPT your HDFS namespace, possibly irrecoverably.
It is recommended not to use this flag, but instead to shut down the cluster and disable automatic failover if you prefer to manually manage your HA state.
You may abort safely by answering 'n' or hitting ^C now.
Are you sure you want to continue? (Y or N) y
2025-03-09 18:59:09,886 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for NameNode at master1/192.168.30.10:8082
2025-03-09 18:59:10,296 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for NameNode at master2/192.168.30.11:8082
[root@master1 ~]# hdfs haadmin -getServiceState mt1
standby
[root@master1 ~]# hdfs haadmin -getServiceState mt2
active
[root@master1 ~]# hdfs haadmin -getAllServiceState
master1:8082 standby
master2:8082 active
[root@master1 ~]#
[root@master1 ~]# hdfs haadmin -transitionToStandby --forcemanual mt2
You have specified the --forcemanual flag. This flag is dangerous, as it can induce a split-brain scenario that WILL CORRUPT your HDFS namespace, possibly irrecoverably.
It is recommended not to use this flag, but instead to shut down the cluster and disable automatic failover if you prefer to manually manage your HA state.
You may abort safely by answering 'n' or hitting ^C now.
Are you sure you want to continue? (Y or N) y
2025-03-09 19:01:16,910 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for NameNode at master2/192.168.30.11:8082
[root@master1 ~]# hdfs haadmin -transitionToActive --forcemanual mt1
You have specified the --forcemanual flag. This flag is dangerous, as it can induce a split-brain scenario that WILL CORRUPT your HDFS namespace, possibly irrecoverably.
It is recommended not to use this flag, but instead to shut down the cluster and disable automatic failover if you prefer to manually manage your HA state.
You may abort safely by answering 'n' or hitting ^C now.
Are you sure you want to continue? (Y or N) y
2025-03-09 19:01:25,942 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for NameNode at master2/192.168.30.11:8082
2025-03-09 19:01:26,358 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for NameNode at master1/192.168.30.10:8082
[root@master1 ~]# hdfs haadmin -getServiceState mt1
active
[root@master1 ~]# hdfs haadmin -getServiceState mt2
standby
[root@master1 ~]# hdfs haadmin -getAllServiceState
master1:8082 active
7.1.3 Failure simulation
[root@master1 ~]# jps
1312 JournalNode
2640 DFSZKFailoverController
1490 NameNode
2242 DataNode
1236 QuorumPeerMain
3460 JobHistoryServer
3205 NodeManager
3051 ResourceManager
4175 Jps
[root@master1 ~]# jps|grep NameNode
1490 NameNode
[root@master1 ~]# jps|grep NameNode|awk '{print $1}'
1490
[root@master1 ~]# jps|grep NameNode|awk '{print $1}'|xargs kill -9
[root@master1 ~]# jps
1312 JournalNode
2640 DFSZKFailoverController
2242 DataNode
1236 QuorumPeerMain
3460 JobHistoryServer
3205 NodeManager
4249 Jps
3051 ResourceManager
[root@master1 ~]# hdfs haadmin -getServiceState mt1
2025-03-09 19:03:54,237 INFO ipc.Client: Retrying connect to server: master1/192.168.30.10:8082. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000 MILLISECONDS)
Operation failed: Call From master1/192.168.30.10 to master1:8082 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
[root@master1 ~]# hdfs haadmin -getServiceState mt2
active
[root@master1 ~]# hdfs haadmin -getAllServiceState
2025-03-09 19:04:12,772 INFO ipc.Client: Retrying connect to server: master1/192.168.30.10:8082. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000 MILLISECONDS)
master1:8082 Failed to connect: Call From master1/192.168.30.10 to master1:8082 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
master2:8082 active
[root@master1 ~]#
[root@master1 ~]# hdfs --daemon start namenode
[root@master1 ~]# jps
1312 JournalNode
2640 DFSZKFailoverController
4432 NameNode
4513 Jps
2242 DataNode
1236 QuorumPeerMain
3460 JobHistoryServer
3205 NodeManager
3051 ResourceManager
[root@master1 ~]# hdfs haadmin -getServiceState mt1
standby
[root@master1 ~]# hdfs haadmin -getServiceState mt2
active
[root@master1 ~]# hdfs haadmin -getAllServiceState
master1:8082 standby
master2:8082 active
7.2 YARN
7.2.1 Check status
[root@master1 ~]# yarn rmadmin -getServiceState rm1
active
[root@master1 ~]# yarn rmadmin -getServiceState rm2
standby
http://192.168.30.10:8088/cluster/cluster -> ResourceManager HA state: active
http://192.168.30.11:8088/cluster/cluster -> ResourceManager HA state: standby
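As a further sanity check of YARN and the history server (not in the original log), the bundled pi example could be submitted; the jar path assumes the stock Hadoop 3.1.3 distribution:
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar pi 2 10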
7.2.2 Forced active/standby switchover
[root@master1 ~]# yarn rmadmin -transitionToStandby -forcemanual rm1
You have specified the --forcemanual flag. This flag is dangerous, as it can induce a split-brain scenario that WILL CORRUPT your HDFS namespace, possibly irrecoverably.
It is recommended not to use this flag, but instead to shut down the cluster and disable automatic failover if you prefer to manually manage your HA state.
You may abort safely by answering 'n' or hitting ^C now.
Are you sure you want to continue? (Y or N) y
2025-03-09 19:49:06,396 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for org.apache.hadoop.yarn.client.RMHAServiceTarget@6b57696f
[root@master1 ~]# yarn rmadmin -getAllServiceState
master1:8033 standby
master2:8033 standby
[root@master1 ~]# yarn rmadmin -transitionToActive -forcemanual rm2
You have specified the --forcemanual flag. This flag is dangerous, as it can induce a split-brain scenario that WILL CORRUPT your HDFS namespace, possibly irrecoverably.
It is recommended not to use this flag, but instead to shut down the cluster and disable automatic failover if you prefer to manually manage your HA state.
You may abort safely by answering 'n' or hitting ^C now.
Are you sure you want to continue? (Y or N) y
2025-03-09 19:49:23,763 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for org.apache.hadoop.yarn.client.RMHAServiceTarget@6b57696f
2025-03-09 19:49:24,277 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for org.apache.hadoop.yarn.client.RMHAServiceTarget@38c6f217
[root@master1 ~]# yarn rmadmin -getAllServiceState
master1:8033 standby
master2:8033 active
Note: force the currently active ResourceManager to standby first, then promote the other one to active.
[root@master1 ~]# yarn rmadmin -transitionToStandby -forcemanual rm2
You have specified the --forcemanual flag. This flag is dangerous, as it can induce a split-brain scenario that WILL CORRUPT your HDFS namespace, possibly irrecoverably.
It is recommended not to use this flag, but instead to shut down the cluster and disable automatic failover if you prefer to manually manage your HA state.
You may abort safely by answering 'n' or hitting ^C now.
Are you sure you want to continue? (Y or N) y
2025-03-09 19:50:30,277 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for org.apache.hadoop.yarn.client.RMHAServiceTarget@6b57696f
[root@master1 ~]# yarn rmadmin -transitionToActive -forcemanual rm1
You have specified the --forcemanual flag. This flag is dangerous, as it can induce a split-brain scenario that WILL CORRUPT your HDFS namespace, possibly irrecoverably.
It is recommended not to use this flag, but instead to shut down the cluster and disable automatic failover if you prefer to manually manage your HA state.
You may abort safely by answering 'n' or hitting ^C now.
Are you sure you want to continue? (Y or N) y
2025-03-09 19:50:43,616 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for org.apache.hadoop.yarn.client.RMHAServiceTarget@6b57696f
2025-03-09 19:50:44,093 WARN ha.HAAdmin: Proceeding with manual HA state management even though
automatic failover is enabled for org.apache.hadoop.yarn.client.RMHAServiceTarget@38c6f217
[root@master1 ~]# yarn rmadmin -getAllServiceState
master1:8033 active
master2:8033 standby
7.2.3 Failure simulation
[root@master2 ~]# yarn rmadmin -getAllServiceState
master1:8033 standby
master2:8033 active
[root@master2 ~]# jps
1413 JournalNode
5029 NodeManager
2214 DFSZKFailoverController
5414 Jps
4951 ResourceManager
1561 NameNode
1258 QuorumPeerMain
2062 DataNode
[root@master2 ~]# jps|grep ResourceManager|awk '{print $1}'|xargs kill -9
[root@master2 ~]# jps
1413 JournalNode
5029 NodeManager
2214 DFSZKFailoverController
1561 NameNode
1258 QuorumPeerMain
2062 DataNode
5439 Jps
[root@master2 ~]# yarn rmadmin -getServiceState rm1
active
[root@master2 ~]# yarn rmadmin -getServiceState rm2
2025-03-09 20:01:13,657 INFO ipc.Client: Retrying connect to server: master2/192.168.30.11:8033. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000 MILLISECONDS)
Operation failed: Call From master2/192.168.30.11 to master2:8033 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
[root@master2 ~]# yarn rmadmin -getAllServiceState
master1:8033 active
2025-03-09 20:01:41,406 INFO ipc.Client: Retrying connect to server: master2/192.168.30.11:8033. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=1, sleepTime=1000 MILLISECONDS)
master2:8033 Failed to connect: Call From master2/192.168.30.11 to master2:8033 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
[root@master2 ~]# yarn --daemon start resourcemanager
[root@master2 ~]# jps
5668 ResourceManager
1413 JournalNode
5029 NodeManager
2214 DFSZKFailoverController
5718 Jps
1561 NameNode
1258 QuorumPeerMain
2062 DataNode
[root@master2 ~]# yarn rmadmin -getAllServiceState
master1:8033 active
master2:8033 standby
Notes:
1. If the ResourceManager that YARN automatically elected as active at first startup is stopped manually, a failover to the standby occurs.
2. If a forced switchover is performed first and the active node is then stopped, no failover occurs.
3. After a failover, once the failed node has been restarted, manually stopping the new active node does not make the original node active again.
4. After a failover and recovery, if the node states are forcibly swapped and the active node is then stopped, the standby does not become active.
5. If YARN is stopped and restarted manually, observations 1-4 apply again.
8. Stopping the HA cluster
On master1:
mapred --daemon stop historyserver
stop-yarn.sh
stop-dfs.sh
On master1, master2, slaver1, slaver2, slaver3:
zkServer.sh stop
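For later restarts (the formatting steps in section 6 are one-time only), presumably it is enough to bring the services back up in the reverse order:
# on master1, master2, slaver1, slaver2, slaver3
zkServer.sh start
# on master1
start-dfs.sh
start-yarn.sh
mapred --daemon start historyserver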
