Spark-14: Reading and Writing HDFS from Spark with Hadoop in High-Availability (HA) Mode

Method 1: set the HA parameters directly in code

    // App name and master are assumed to come from spark-submit
    val sc = new SparkContext()

    // Point the default filesystem at the logical nameservice instead of a single NameNode
    sc.hadoopConfiguration.set("fs.defaultFS", "hdfs://cluster1")
    sc.hadoopConfiguration.set("dfs.nameservices", "cluster1")
    sc.hadoopConfiguration.set("dfs.ha.namenodes.cluster1", "nn1,nn2")
    sc.hadoopConfiguration.set("dfs.namenode.rpc-address.cluster1.nn1", "namenode001:8020")
    sc.hadoopConfiguration.set("dfs.namenode.rpc-address.cluster1.nn2", "namenode002:8020")
    // Client-side failover proxy that routes requests to the currently active NameNode
    sc.hadoopConfiguration.set("dfs.client.failover.proxy.provider.cluster1", "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider")

Method 2: load the cluster's configuration files

    val conf = new SparkConf().setAppName("Spark Word Count")
    // Pass the SparkConf to the SparkContext
    val sc = new SparkContext(conf)
    // Load the HA cluster's core-site.xml and hdfs-site.xml; the String overload of
    // addResource resolves the names against the classpath, so the cluster1 directory
    // must be visible there. The nameservice, NameNode addresses, and failover proxy
    // are then picked up from the files instead of being hard-coded.
    sc.hadoopConfiguration.addResource("cluster1/core-site.xml")
    sc.hadoopConfiguration.addResource("cluster1/hdfs-site.xml")
