Summary of Hadoop 2.x Exceptions

Problem 1: Running bin/hdfs namenode -format to format HDFS throws the following exception:

16/10/26 18:32:45 ERROR namenode.NameNode: Failed to start namenode.  
java.lang.IllegalArgumentException: URI has an authority component  
    at java.io.File.<init>(File.java:423)  
    at org.apache.hadoop.hdfs.server.namenode.NNStorage.getStorageDirectory(NNStorage.java:329)  
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:276)  
    at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:247)  
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:985)  
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)  
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)  
16/10/26 18:32:45 INFO util.ExitUtil: Exiting with status 1  
16/10/26 18:32:45 INFO namenode.NameNode: SHUTDOWN_MSG:   

Cause:

hdfs-site.xml is misconfigured. In a file:// URI, everything between // and the next / is parsed as an authority (host) component, which java.io.File rejects; file://opt/... therefore treats opt as a hostname rather than as part of the path. The faulty configuration:

<configuration>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>file://opt/softwares/hadoop-2.6.5/dfs/name</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>file://opt/softwares/hadoop-2.6.5/dfs/data</value>
        </property>
        <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>master:9001</value>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>3</value>
        </property>
</configuration>

Solution: change the incorrect configuration to the correct one below (file:/ instead of file://):

<configuration>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>file:/opt/softwares/hadoop-2.6.5/dfs/name</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>file:/opt/softwares/hadoop-2.6.5/dfs/data</value>
        </property>
        <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>master:9001</value>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>3</value>
        </property>
</configuration>
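
The parsing difference between the two forms can be illustrated with Python's standard urllib.parse (the paths are the ones from the configs above); the wrong form puts "opt" into the authority slot and drops it from the path:

```python
from urllib.parse import urlparse

# Wrong form: everything between "//" and the next "/" becomes the
# authority (host) component, so "opt" disappears from the path.
bad = urlparse("file://opt/softwares/hadoop-2.6.5/dfs/name")
print(bad.netloc)  # opt
print(bad.path)    # /softwares/hadoop-2.6.5/dfs/name

# Correct form: no authority component, the path stays intact.
good = urlparse("file:/opt/softwares/hadoop-2.6.5/dfs/name")
print(good.netloc)  # (empty string)
print(good.path)    # /opt/softwares/hadoop-2.6.5/dfs/name
```

This non-empty authority is exactly what NameNode formatting rejects with "URI has an authority component".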

 

Problem 2: On startup, Hadoop reports: dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.

Cause:

core-site.xml on the worker nodes has the property name written as fs.defaultfs; the correct key is fs.defaultFS (Hadoop configuration keys are case-sensitive, so the FS suffix must be capitalized).

Solution: correct the property name in core-site.xml:

 <property>
       <name>fs.defaultFS</name>
       <value>hdfs://master:9000</value>
 </property>

 

Problem 3: After Hadoop starts, the web UI is unreachable, i.e. http://master:50070 will not open.

Cause: the firewall has not been disabled.

Solution: disable the firewall (CentOS 7 environment):

systemctl stop firewalld.service      # stop firewalld now
systemctl disable firewalld.service   # keep firewalld from starting at boot
firewall-cmd --state                  # check firewall state (prints "not running" when stopped, "running" when active)

 

Problem 4: When running a MapReduce job on Hadoop, the log stalls at "Running job" and never proceeds.

Cause: the job is probably blocked because YARN does not have enough resources allocated to schedule its containers.

Solution: adjust the memory settings in yarn-site.xml:

 

<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>20480</value>
</property>
<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>2048</value>
</property>
<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
</property>
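
As a rough sanity check on these values (using the numbers from the snippet above), a NodeManager can host at most yarn.nodemanager.resource.memory-mb / yarn.scheduler.minimum-allocation-mb containers at once; if the node offers less memory than one minimum allocation, no container can be scheduled and the job stays stuck. A minimal sketch of the arithmetic:

```python
# Values mirroring the yarn-site.xml snippet above.
node_memory_mb = 20480   # yarn.nodemanager.resource.memory-mb
min_alloc_mb = 2048      # yarn.scheduler.minimum-allocation-mb
vmem_pmem_ratio = 2.1    # yarn.nodemanager.vmem-pmem-ratio

# Upper bound on simultaneously running containers per NodeManager.
max_containers = node_memory_mb // min_alloc_mb
print(max_containers)  # 10

# Virtual-memory ceiling for a minimum-size container before YARN kills it.
vmem_limit_mb = min_alloc_mb * vmem_pmem_ratio
print(vmem_limit_mb)
```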


Problem 5: A hand-written WordCountMR job run on Hadoop throws an exception (see screenshot):

Cause: the main class must be specified by its fully qualified name, including the package.

Solution:

hadoop jar WordCountMR.jar com.enniu.study.hadoop.mr.WordCountMR /input /output_wordcount
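
If you are unsure of the fully qualified name, a jar is just a zip archive, so you can list its .class entries and convert the entry path to a class name. A minimal Python sketch (it builds a stand-in jar in memory; in practice you would open the real WordCountMR.jar):

```python
import io
import zipfile

# Build a tiny stand-in jar in memory; a real run would open WordCountMR.jar.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as jar:
    jar.writestr("com/enniu/study/hadoop/mr/WordCountMR.class", b"")

fqcn = None
with zipfile.ZipFile(buf) as jar:
    for entry in jar.namelist():
        if entry.endswith(".class"):
            # Strip ".class" and turn path separators into dots to get
            # the fully qualified class name to pass to `hadoop jar`.
            fqcn = entry[: -len(".class")].replace("/", ".")
            print(fqcn)  # com.enniu.study.hadoop.mr.WordCountMR
```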

 

posted @ 2018-02-08 00:01 by 健身男儿挑灯夜读