A roundup of Hadoop configuration errors
1:INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:12200. Already tried 0 time(s).
Running:
ximo@ubuntu:~$ hadoop fs -ls
14/01/08 22:01:41 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:12200. Already tried 0 time(s).
14/01/08 22:01:42 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:12200. Already tried 1 time(s).
14/01/08 22:01:43 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:12200. Already tried 2 time(s).
14/01/08 22:01:44 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:12200. Already tried 3 time(s).
14/01/08 22:01:45 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:12200. Already tried 4 time(s).
14/01/08 22:01:46 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:12200. Already tried 5 time(s).
14/01/08 22:01:47 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:12200. Already tried 6 time(s).
14/01/08 22:01:48 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:12200. Already tried 7 time(s).
14/01/08 22:01:49 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:12200. Already tried 8 time(s).
14/01/08 22:01:50 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:12200. Already tried 9 time(s).
mkdir: Call From Lenovo-G460-LYH/127.0.0.1 to localhost:12200 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
There are several possible causes:
1) Hadoop configuration
Check that the settings in $HADOOP_HOME/conf/hdfs-site.xml, mapred-site.xml, and core-site.xml are correct. For pseudo-distributed mode you can refer to my earlier blog posts, or to any of the many articles online.
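For reference, a minimal pseudo-distributed core-site.xml might look like the fragment below. This is an illustrative sketch for the old-style (0.x/1.x) configuration, not my exact file; the point is that the host and port in fs.default.name must match the address the client is trying to reach (localhost:12200 in the log above):

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:12200</value>
  </property>
</configuration>
```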
2) The machines cannot reach each other
In a fully distributed setup, also check that the Hadoop client machine can ping the HDFS (NameNode) machine, and pay attention to the HDFS port number.
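A quick way to check the port from the client machine is sketched below. The host and port (localhost, 12200) are taken from the log above; substitute your NameNode's address and the port from fs.default.name:

```shell
# Host/port from the log above; replace with your NameNode's address
# and the port configured in fs.default.name in core-site.xml.
NN_HOST=localhost
NN_PORT=12200

# bash's /dev/tcp pseudo-device attempts a real TCP connection; it fails
# with "Connection refused" when nothing is listening on that port, which
# is exactly the java.net.ConnectException the Hadoop client reports.
if (exec 3<>"/dev/tcp/$NN_HOST/$NN_PORT") 2>/dev/null; then
    echo "port $NN_PORT is open"
else
    echo "cannot connect to $NN_HOST:$NN_PORT"
fi
```

If the connection is refused, the NameNode is most likely not listening on that host/port (see cause 3 below).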
3) The NameNode is not running (this was my case)
$stop-all.sh (if this prints "no namenode to stop", the NameNode is indeed the problem)
$hadoop namenode -format
$start-all.sh
Be aware that "hadoop namenode -format" wipes the existing HDFS metadata; re-formatting an already-formatted cluster is also exactly what triggers the DataNode problem described in section 2 below.
4) Other causes.
2: The Hadoop DataNode fails to start (usually caused by formatting the NameNode two or more times)
$jps
10309 JobTracker
10430 TaskTracker
11707 Jps
10232 SecondaryNameNode
9966 NameNode
There is no DataNode process.
Fix:
1) Delete the DataNode's data
(On every DataNode in the cluster, delete the VERSION file under /hdfs/data/current, where /hdfs/data stands for the value of the dfs.data.dir property in hadoop-src/conf/hdfs-site.xml. In my setup that value is /usr/local/hadoop/data/hadoop/data, so my /hdfs/data/current/VERSION is /usr/local/hadoop/data/hadoop/data/current/VERSION.)
2)$./stop-all.sh
3)$./hadoop namenode -format
4)$./start-all.sh
5)$jps
12680 NameNode
12807 DataNode
12941 SecondaryNameNode
13158 TaskTracker
13022 JobTracker
14119 Jps
The DataNode process is back up.
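Why does re-formatting cause this in the first place? The VERSION file is a small Java-properties-style file, and its namespaceID is assigned when the NameNode is formatted. A second format gives the NameNode a new ID while each DataNode's VERSION still records the old one, so the DataNode refuses to register; deleting the stale VERSION file lets the DataNode pick up the new ID. The mismatch can be sketched as below (both file contents are made-up examples, not real IDs; in a real cluster you would grep the actual VERSION files under dfs.name.dir and dfs.data.dir):

```shell
# Hypothetical NameNode VERSION contents after a re-format:
nn_version='namespaceID=1113667891
cTime=0
layoutVersion=-32'
# Hypothetical DataNode VERSION contents still carrying the old ID:
dn_version='namespaceID=942072171
cTime=0
layoutVersion=-32'

# Pull the namespaceID out of each properties-style blob.
nn_id=$(printf '%s\n' "$nn_version" | sed -n 's/^namespaceID=//p')
dn_id=$(printf '%s\n' "$dn_version" | sed -n 's/^namespaceID=//p')

if [ "$nn_id" = "$dn_id" ]; then
    echo "namespaceIDs match"
else
    echo "namespaceID mismatch: NameNode=$nn_id DataNode=$dn_id"
fi
```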
To be continued......