Post category - spark
Summary: import org.apache.spark.{SparkConf, SparkContext} /** * Created by liupeng on 2017/6/16. */ object A_takeOrdered { System.setProperty("hadoop.home.dir
Read more
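The excerpt above is cut off by the listing. As a reference, a minimal sketch of the takeOrdered action, assuming a local master and made-up sample data (only the object name comes from the post):

import org.apache.spark.{SparkConf, SparkContext}

object A_takeOrdered {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("takeOrdered").setMaster("local"))
    val rdd = sc.parallelize(List(5, 3, 1, 4, 2))
    // takeOrdered(n) returns the n smallest elements, sorted in ascending order
    val smallest = rdd.takeOrdered(3) // Array(1, 2, 3)
    println(smallest.mkString(","))
    sc.stop()
  }
}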
Summary: import org.apache.hadoop.io.compress.GzipCodec import org.apache.spark.{SparkConf, SparkContext} /** * Created by liupeng on 2017/6/16. */ object A_sa
Read more
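The object name is truncated in the excerpt; given the GzipCodec import, this post appears to demonstrate writing compressed text output. A sketch under assumed names (local master, hypothetical output path, object name A_saveAsTextFile guessed from the truncated "A_sa"):

import org.apache.hadoop.io.compress.GzipCodec
import org.apache.spark.{SparkConf, SparkContext}

object A_saveAsTextFile {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("saveAsTextFile").setMaster("local"))
    val rdd = sc.parallelize(List("spark", "hadoop", "flink"))
    // saveAsTextFile writes one part file per partition; the optional codec class compresses each part (.gz here)
    rdd.saveAsTextFile("output/words_gz", classOf[GzipCodec]) // output path is hypothetical
    sc.stop()
  }
}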
Summary: import org.apache.spark.{SparkConf, SparkContext} /** * Created by liupeng on 2017/6/16. */ object A_saveAsObjectFile { System.setProperty("hadoop.hom
Read more
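A minimal sketch of saveAsObjectFile, assuming a local master, sample data, and a hypothetical output path (not taken from the post):

import org.apache.spark.{SparkConf, SparkContext}

object A_saveAsObjectFile {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("saveAsObjectFile").setMaster("local"))
    val rdd = sc.parallelize(1 to 5)
    // saveAsObjectFile serializes the elements with Java serialization and stores them as a SequenceFile
    rdd.saveAsObjectFile("output/numbers_obj") // hypothetical output path
    // sc.objectFile is the counterpart for reading the data back
    val restored = sc.objectFile[Int]("output/numbers_obj")
    println(restored.collect().mkString(","))
    sc.stop()
  }
}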
Summary: import org.apache.spark.{SparkConf, SparkContext} /** * Created by liupeng on 2017/6/16. */ object A_reduce { System.setProperty("hadoop.home.dir","F:
Read more
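A minimal sketch of the reduce action, assuming a local master and made-up input:

import org.apache.spark.{SparkConf, SparkContext}

object A_reduce {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("reduce").setMaster("local"))
    val rdd = sc.parallelize(1 to 10)
    // reduce aggregates all elements with a commutative, associative function
    val sum = rdd.reduce(_ + _) // 55
    println(sum)
    sc.stop()
  }
}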
Summary: import org.apache.spark.{SparkConf, SparkContext} /** * Created by liupeng on 2017/6/16. */ object A_first { System.setProperty("hadoop.home.dir","F:\
Read more
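A minimal sketch of the first action, again assuming a local master and sample data:

import org.apache.spark.{SparkConf, SparkContext}

object A_first {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("first").setMaster("local"))
    val rdd = sc.parallelize(List("a", "b", "c"))
    // first() returns the first element of the RDD (like take(1), but unwrapped)
    println(rdd.first()) // "a"
    sc.stop()
  }
}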
Summary: import org.apache.spark.{SparkConf, SparkContext} /** * Created by liupeng on 2017/6/16. */ object A_count { System.setProperty("hadoop.home.dir","F:\
Read more
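A minimal sketch of the count action under the same assumptions (local master, sample data):

import org.apache.spark.{SparkConf, SparkContext}

object A_count {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("count").setMaster("local"))
    val rdd = sc.parallelize(List(1, 2, 3, 4))
    // count() returns the number of elements in the RDD as a Long
    println(rdd.count()) // 4
    sc.stop()
  }
}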
Summary: import org.apache.spark.{SparkConf, SparkContext} /** * Created by liupeng on 2017/6/16. */ object A_collect { System.setProperty("hadoop.home.dir","F
Read more
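A minimal sketch of the collect action, assuming a local master and sample data:

import org.apache.spark.{SparkConf, SparkContext}

object A_collect {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("collect").setMaster("local"))
    val rdd = sc.parallelize(List(1, 2, 3)).map(_ * 2)
    // collect() pulls every element back to the driver as an Array; only safe for small results
    val arr = rdd.collect() // Array(2, 4, 6)
    println(arr.mkString(","))
    sc.stop()
  }
}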
Summary: Spark is a fast, general-purpose compute engine designed for large-scale data processing. Spark is a Hadoop MapReduce-like general parallel framework open-sourced by the UC Berkeley AMP lab (the AMP Lab at the University of California, Berkeley); it has the advantages of Hadoop MapReduce, but unlike MapReduce ...
Read more
Summary: import org.apache.log4j.{Level, Logger} import org.apache.spark.{SparkConf, SparkContext} /** * Created by liupeng on 2017/6/17. */ object A_countByKe
Read more
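A minimal sketch of countByKey, assuming a local master and made-up key/value pairs; the log4j lines only mirror the import visible in the excerpt and simply quiet Spark's console logging:

import org.apache.log4j.{Level, Logger}
import org.apache.spark.{SparkConf, SparkContext}

object A_countByKey {
  def main(args: Array[String]): Unit = {
    Logger.getLogger("org").setLevel(Level.ERROR) // reduce log noise
    val sc = new SparkContext(new SparkConf().setAppName("countByKey").setMaster("local"))
    val pairs = sc.parallelize(List(("a", 1), ("b", 1), ("a", 2)))
    // countByKey() counts the number of elements per key (it does not sum the values)
    val counts = pairs.countByKey() // Map(a -> 2, b -> 1)
    println(counts)
    sc.stop()
  }
}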
