Running an HDFS API test program under MyEclipse reports "Wrong FS"
Pseudo-distributed mode, Hadoop 0.20.2.
The code below is a simple API access:
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FileRW {
    public static void main(String[] args) throws Exception {
        String str;
        String uri = args[0];
        Path srcPath = new Path(uri);
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        //FileSystem fs = srcPath.getFileSystem(conf);
        if (fs.exists(srcPath)) {
            InputStream input = fs.open(srcPath);
            InputStreamReader isr = new InputStreamReader(input, "GBK");
            BufferedReader br = new BufferedReader(isr);
            while ((str = br.readLine()) != null) {
                System.out.println(str);
            }
            input.close();
        }
    }
}
Run as Java Application; it then throws:
Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS: hdfs://analyze:9000/user/root/MahoutInput/test.txt, expected: file:///
    at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:310)
    at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:47)
    at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:357)
    at org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:245)
    at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:648)
    at FileRW.main(FileRW.java:20)
A fix found online: change the original line
FileSystem fs = FileSystem.get(conf);
to
FileSystem fs = srcPath.getFileSystem(conf);
and the program runs successfully.
The cause: FileSystem.get(conf) returns whatever filesystem fs.default.name in the Configuration points to. When the program is run from Eclipse, the cluster's core-site.xml is not on the classpath, so the default stays file:/// and the local filesystem rejects the hdfs:// path. srcPath.getFileSystem(conf) instead selects the filesystem from the path's own hdfs:// scheme, so it works regardless of the default. It is not specific to pseudo-distributed mode.
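A minimal sketch of the equivalent ways to obtain the right FileSystem. This is illustrative only: the hdfs://analyze:9000 address is taken from the error message above, and the class name FsLookup is made up for this example (the fs.default.name key is the one used by Hadoop 0.20.x; later versions call it fs.defaultFS).

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FsLookup {
    public static void main(String[] args) throws Exception {
        String uri = "hdfs://analyze:9000/user/root/MahoutInput/test.txt";
        Configuration conf = new Configuration();

        // Option 1: derive the FileSystem from the path's own scheme
        // (the fix used above) -- returns a DistributedFileSystem here.
        FileSystem fs1 = new Path(uri).getFileSystem(conf);

        // Option 2: pass the URI explicitly; getFileSystem(conf) is
        // just a shorthand for this call.
        FileSystem fs2 = FileSystem.get(URI.create(uri), conf);

        // Option 3: set the default filesystem in code, after which the
        // original FileSystem.get(conf) also resolves to HDFS.
        conf.set("fs.default.name", "hdfs://analyze:9000");
        FileSystem fs3 = FileSystem.get(conf);
    }
}
```

Alternatively, adding the cluster's core-site.xml (and hdfs-site.xml) to the Eclipse run classpath makes the original FileSystem.get(conf) work unchanged, since the Configuration then loads the correct default.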