A Collection of Errors from Setting Up a Hadoop Cluster on Linux
1. After configuring the Hadoop cluster, running start-dfs.sh fails with a pile of "Permission denied" errors
zf sbin $ ./start-dfs.sh
Starting namenodes on [master]
master: chown: changing ownership of '/home/zf/hadoop/hadoop-2.9.1/logs': Operation not permitted
master: starting namenode, logging to /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-namenode-master.out
master: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 159: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-namenode-master.out: Permission denied
master: head: cannot open '/home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-namenode-master.out' for reading: No such file or directory
master: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 177: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-namenode-master.out: Permission denied
master: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 178: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-namenode-master.out: Permission denied
slave-1: chown: changing ownership of '/home/zf/hadoop/hadoop-2.9.1/logs': Operation not permitted
slave-1: starting datanode, logging to /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-datanode-slave-1.out
slave-1: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 159: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-datanode-slave-1.out: Permission denied
slave-2: chown: changing ownership of '/home/zf/hadoop/hadoop-2.9.1/logs': Operation not permitted
slave-2: starting datanode, logging to /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-datanode-slave-2.out
slave-2: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 159: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-datanode-slave-2.out: Permission denied
slave-1: head: cannot open '/home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-datanode-slave-1.out' for reading: No such file or directory
slave-1: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 177: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-datanode-slave-1.out: Permission denied
slave-1: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 178: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-datanode-slave-1.out: Permission denied
slave-2: head: cannot open '/home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-datanode-slave-2.out' for reading: No such file or directory
slave-2: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 177: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-datanode-slave-2.out: Permission denied
slave-2: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 178: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-datanode-slave-2.out: Permission denied
Starting secondary namenodes [slave-1]
slave-1: chown: changing ownership of '/home/zf/hadoop/hadoop-2.9.1/logs': Operation not permitted
slave-1: starting secondarynamenode, logging to /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-secondarynamenode-slave-1.out
slave-1: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 159: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-secondarynamenode-slave-1.out: Permission denied
slave-1: head: cannot open '/home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-secondarynamenode-slave-1.out' for reading: No such file or directory
slave-1: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 177: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-secondarynamenode-slave-1.out: Permission denied
slave-1: /home/zf/hadoop/hadoop-2.9.1/sbin/hadoop-daemon.sh: line 178: /home/zf/hadoop/hadoop-2.9.1/logs/hadoop-zf-secondarynamenode-slave-1.out: Permission denied
Solution: in the Hadoop installation directory, run sudo chmod a+w * to open up write permission on the files, since the startup user cannot write to the logs directory.
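A minimal sketch of that fix, assuming the installation lives at /home/zf/hadoop/hadoop-2.9.1 as shown in the log above; the chown alternative is my own suggestion, not the author's original command:

    # Run as the zf user on every node (path taken from the log output above)
    cd /home/zf/hadoop/hadoop-2.9.1
    sudo chmod a+w *                              # open write permission on the top-level entries, including logs/
    # Narrower alternative (assumption): give the hadoop user ownership of the tree instead
    # sudo chown -R zf:zf /home/zf/hadoop/hadoop-2.9.1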
2. After running ./start-dfs.sh and ./start-yarn.sh, jps on the master host shows that the NameNode and ResourceManager failed to start, while the DataNodes on slave-1 and slave-2 start normally.
Checking the logs shows the following errors:
**************namenode.log****************************
2018-08-01 04:01:28,091 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.net.BindException: Port in use: master:50070
        at org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1014)
        at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1037)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1094)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:951)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:179)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:889)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:725)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:953)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:932)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1673)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1741)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:1002)
        at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1033)
        ... 9 more
2018-08-01 04:01:28,097 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1: java.net.BindException: Port in use: master:50070
2018-08-01 04:01:28,099 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/182.61.39.233
************************************************************/
*************secondarynamenode.log****************************
2018-08-01 02:29:33,947 INFO org.apache.hadoop.http.HttpServer2: HttpServer.start() threw a non Bind IOException
java.net.BindException: Port in use: master:9001
        at org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1014)
        at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1037)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1094)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:951)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.startInfoServer(SecondaryNameNode.java:498)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:701)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:1002)
        at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1033)
        ... 4 more
2018-08-01 02:29:33,948 FATAL org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Failed to start secondary namenode
java.net.BindException: Port in use: master:9001
        at org.apache.hadoop.http.HttpServer2.constructBindException(HttpServer2.java:1014)
        at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1037)
        at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:1094)
        at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:951)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.startInfoServer(SecondaryNameNode.java:498)
        at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:701)
Caused by: java.net.BindException: Cannot assign requested address
        at sun.nio.ch.Net.bind0(Native Method)
        at sun.nio.ch.Net.bind(Net.java:433)
        at sun.nio.ch.Net.bind(Net.java:425)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
        at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
        at org.apache.hadoop.http.HttpServer2.bindListener(HttpServer2.java:1002)
        at org.apache.hadoop.http.HttpServer2.bindForSinglePort(HttpServer2.java:1033)
        ... 4 more
The main causes are plain to see:
1. java.net.BindException: Port in use: master:50070 / master:9001
2. Caused by: java.net.BindException: Cannot assign requested address
The "port in use" message is the immediate symptom, but the underlying cause is that the requested address cannot be assigned. Since the problem is address-related, the /etc/hosts file is the natural place to look.
My original file looked like this:
127.0.0.1      localhost localhost.localdomain
182.xx.xx.33   master     (public IP, hostname)
106.xx.xx.72   slave-2
106.xx.xx.73   slave-1
After replacing all the public IPs with the internal (private) IPs, everything worked:
127.0.0.1    localhost localhost.localdomain
172.xx.x.2   master
172.xx.x.5   slave-1
172.xx.x.6   slave-2
I am not completely sure of the root cause; all three of my machines are Baidu Cloud servers. (Most likely the public IP of such a cloud VM is mapped in through NAT and is not actually configured on any local network interface, so a daemon cannot bind a listening socket to it.)
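A few shell commands help confirm this kind of diagnosis; this is only a sketch using standard Linux tools, with the port number taken from the log above:

    ip addr show                   # addresses actually configured on local interfaces;
                                   # on many cloud VMs the public IP does not appear here
    getent hosts master            # which IP the hostname resolves to via /etc/hosts
    netstat -tlnp | grep 50070     # whether anything is really listening on the NameNode HTTP port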
3. After Hadoop is configured, if the DataNodes fail to start, a very common reason is that reformatting the NameNode conflicts with leftover data from a previous format (typically the clusterID stored in the DataNode's data directory no longer matches the NameNode's), so delete the leftovers first, as sketched below. Alternatively, shut Hadoop down cleanly after each use and do not reformat the NameNode every time you restart it.
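A minimal cleanup sketch for this case; the tmp and data directory paths below are assumptions, so use whatever hadoop.tmp.dir, dfs.namenode.name.dir, and dfs.datanode.data.dir point to in your core-site.xml / hdfs-site.xml:

    stop-dfs.sh && stop-yarn.sh                 # stop the cluster first
    rm -rf /home/zf/hadoop/tmp/*                # hadoop.tmp.dir on every node (assumed path)
    rm -rf /home/zf/hadoop/hdfs/data/*          # dfs.datanode.data.dir on each slave (assumed path)
    hdfs namenode -format                       # reformat only after the old data is gone
    start-dfs.sh && start-yarn.sh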
