Fixing the "Attempting to operate on hdfs namenode as root" error

After formatting HDFS, starting dfs produces the following errors:

[root@hadoop101 sbin]# start-dfs.sh
Starting namenodes on [hadoop101]
ERROR: Attempting to operate on hdfs namenode as root
ERROR: but there is no HDFS_NAMENODE_USER defined. Aborting operation.
Starting datanodes
ERROR: Attempting to operate on hdfs datanode as root
ERROR: but there is no HDFS_DATANODE_USER defined. Aborting operation.
Starting secondary namenodes [hadoop103]
ERROR: Attempting to operate on hdfs secondarynamenode as root
ERROR: but there is no HDFS_SECONDARYNAMENODE_USER defined. Aborting operation.

In the /hadoop/sbin directory, add the following parameters to the top of both start-dfs.sh and stop-dfs.sh (right after the shebang line):

vim start-dfs.sh
#!/usr/bin/env bash
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root


-------------------------------------
vim stop-dfs.sh (add the same parameters at the top)
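If you would rather not edit the two files by hand, the same change can be scripted. A minimal sketch; `add_hdfs_users` is a helper name of my own choosing, not part of Hadoop:

```shell
# Hypothetical helper: insert the same parameters the steps above add by
# hand, right after the shebang line, skipping files already patched.
add_hdfs_users() {
  script="$1"
  grep -q 'HDFS_NAMENODE_USER' "$script" && return 0   # already patched
  tmp=$(mktemp)
  {
    head -n 1 "$script"                                # keep the shebang first
    printf '%s\n' \
      'HDFS_DATANODE_USER=root' \
      'HADOOP_SECURE_DN_USER=hdfs' \
      'HDFS_NAMENODE_USER=root' \
      'HDFS_SECONDARYNAMENODE_USER=root'
    tail -n +2 "$script"                               # rest of the script
  } > "$tmp" && mv "$tmp" "$script"
}
# usage, from hadoop's sbin directory:
#   add_hdfs_users start-dfs.sh && add_hdfs_users stop-dfs.sh
```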

The same applies to start-yarn.sh and stop-yarn.sh; add the following to the top of both:

vim start-yarn.sh
#!/usr/bin/env bash
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
-----------------------------------
vim stop-yarn.sh (add the same parameters at the top)
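An alternative some setups use (my suggestion, not what this post does) is to declare all of these once in $HADOOP_HOME/etc/hadoop/hadoop-env.sh, which the start/stop scripts source, instead of patching four files:

```shell
# Alternative (sketch): append these once to
# $HADOOP_HOME/etc/hadoop/hadoop-env.sh instead of editing each sbin script.
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
```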

After making the changes, run start-dfs.sh again:

[root@hadoop101 sbin]# start-dfs.sh

Fixing the missing DataNode process after starting Hadoop

After starting Hadoop, the jps command shows that there is no DataNode process:

[root@hadoop-single ~]# jps
1792 SecondaryNameNode
1937 Jps
1650 NameNode

How to fix it:
The cause is a clusterID mismatch.
Most explanations online agree: either not every process was stopped before formatting Hadoop, or format was run more than once, so the DataNode's clusterID no longer matches the NameNode's clusterID, and the DataNode process does not come up after startup.
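Before wiping anything, you can confirm the mismatch by comparing the clusterID recorded in the NameNode's and DataNode's VERSION files. A minimal sketch; the paths in the comments are hypothetical, so substitute the dfs.namenode.name.dir and dfs.datanode.data.dir values from your hdfs-site.xml:

```shell
# Print the clusterID recorded in a Hadoop storage directory's VERSION file.
get_cluster_id() { sed -n 's/^clusterID=//p' "$1"; }

# Hypothetical usage -- substitute your configured storage dirs:
#   get_cluster_id /opt/hadoop/tmp/dfs/name/current/VERSION   # NameNode
#   get_cluster_id /opt/hadoop/tmp/dfs/data/current/VERSION   # DataNode
# If the two values differ, that is why the DataNode never comes up.
```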

Reformat:
1. Run stop-all.sh to shut down the cluster.
2. Delete everything under the folder holding the HDFS data blocks (hadoop/tmp/).
3. Delete Hadoop's log files under logs.
4. Run hdfs namenode -format to reformat Hadoop (hadoop namenode -format still works but is deprecated in favor of the hdfs command).
5. Restart the Hadoop cluster.

[root@hadoop102 ~]# start-all.sh
.......
[root@hadoop102 ~]# jps
1957 SecondaryNameNode
1653 DataNode
2858 Jps
1484 NameNode
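Note that reformatting erases everything stored in HDFS. If the data matters, a commonly cited alternative (not from this post, so treat it as an assumption) is to stop the cluster and copy the NameNode's clusterID into the DataNode's VERSION file instead:

```shell
# Sketch: overwrite the DataNode's stale clusterID with the NameNode's.
# Paths are hypothetical -- use your configured storage dirs, stop the
# cluster before touching the files, and note that -i assumes GNU sed.
sync_cluster_id() {
  nn_version="$1"        # e.g. .../dfs/name/current/VERSION
  dn_version="$2"        # e.g. .../dfs/data/current/VERSION
  cid=$(sed -n 's/^clusterID=//p' "$nn_version")
  [ -n "$cid" ] || return 1          # bail out if it could not be read
  sed -i "s/^clusterID=.*/clusterID=$cid/" "$dn_version"
}
```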
posted @ 2023-09-23 23:44  Docker-沫老师