How to fix the "ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020" error

I have just started learning Hadoop and keep running into new problems, so I will write them down here as I hit them.

Sometimes you will see messages like the following, which means the client could not connect to HDFS.
ximo@ubuntu:~$ hadoop fs -ls 
11/11/08 10:59:32 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 0 time(s). 
11/11/08 10:59:33 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 1 time(s). 
11/11/08 10:59:34 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 2 time(s). 
11/11/08 10:59:35 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 3 time(s). 
11/11/08 10:59:36 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 4 time(s). 
11/11/08 10:59:37 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:8020. Already tried 5 time(s). 

There are several possible causes:
1) Hadoop configuration
Check that the settings in $HADOOP_HOME/conf/hdfs-site.xml, mapred-site.xml, and core-site.xml are correct. For pseudo-distributed mode you can refer to my earlier blog posts, or to any of the many write-ups online (see the config sketch after this list).

2) The machines cannot reach each other
In a fully distributed setup, also check whether the Hadoop client machine can ping the HDFS (NameNode) machine, and mind the HDFS port number (see the connectivity check after this list).

3) The NameNode is not running
Check whether the NameNode failed to start:
$ stop-all.sh    # if this prints "no namenode to stop", the NameNode is the problem
$ hadoop namenode -format    # note: formatting wipes any existing HDFS data
$ start-all.sh
You can also confirm which daemons are running with jps (see the check after this list).
4) Other causes.
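
For cause 1), the setting that decides which host and port the client tries (localhost:8020 in the log above) is the filesystem URI in core-site.xml. A minimal pseudo-distributed sketch, assuming a Hadoop 1.x-style layout where the property is called fs.default.name; the host and port in the value must match where the NameNode actually listens (8020 is the default RPC port):

ximo@ubuntu:~$ cat $HADOOP_HOME/conf/core-site.xml
<configuration>
  <property>
    <!-- URI used by both the client and the NameNode; change the port if yours differs -->
    <name>fs.default.name</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>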
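
For cause 2), a quick way to test reachability from the client machine is to ping the NameNode host and then probe its RPC port. A sketch, where namenode-host and 8020 are placeholders for your own NameNode address and port (telnet or nc must be installed):

ximo@ubuntu:~$ ping -c 3 namenode-host      # basic network reachability
ximo@ubuntu:~$ telnet namenode-host 8020    # does the NameNode RPC port accept connections?
ximo@ubuntu:~$ nc -zv namenode-host 8020    # alternative to telnet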
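
For cause 3), before reformatting you can first confirm whether the NameNode process is actually running with jps (it ships with the JDK), and if it is missing, look at its log to see why it exited. A sketch, assuming the default Hadoop 1.x log location:

ximo@ubuntu:~$ jps    # a healthy pseudo-distributed node lists NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker
ximo@ubuntu:~$ tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log    # shows why the NameNode died, if it did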

Reposted from: http://tjuximo.iteye.com/blog/1242350
