Daily Summary

Today I worked on data cleaning. Below is the SparkSession setup used to connect to the cluster and the Hive metastore.

from pyspark.sql import SparkSession

# Build a session that runs on YARN, uses the conda env's Python on the
# workers, and talks to the Hive metastore on node01.
spark = SparkSession.builder \
    .appName("RemoteSparkConnection") \
    .master("yarn") \
    .config("spark.pyspark.python", "/opt/apps/anaconda3/envs/myspark/bin/python") \
    .config("spark.sql.warehouse.dir", "/hive/warehouse") \
    .config("hive.metastore.uris", "thrift://node01:9083") \
    .config("spark.sql.parquet.writeLegacyFormat", "true") \
    .enableHiveSupport() \
    .getOrCreate()
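The session above only establishes the connection; the note doesn't show the cleaning itself. As a minimal sketch of two common cleaning rules (drop rows with missing fields, then drop exact duplicates), using hypothetical row data in plain Python to mirror what PySpark's `DataFrame.dropna()` and `DataFrame.dropDuplicates()` would do:

```python
# Hypothetical sample rows: (name, age). None marks a missing value.
rows = [
    ("alice", 30),
    ("bob", None),   # missing age -> dropped by rule 1
    ("alice", 30),   # exact duplicate -> dropped by rule 2
    ("carol", 25),
]

# Rule 1: keep only rows where every field is present (like df.dropna()).
non_null = [r for r in rows if all(v is not None for v in r)]

# Rule 2: remove exact duplicates, preserving first-seen order
# (like df.dropDuplicates()).
seen = set()
cleaned = []
for r in non_null:
    if r not in seen:
        seen.add(r)
        cleaned.append(r)

print(cleaned)  # -> [('alice', 30), ('carol', 25)]
```

In PySpark the same result comes from chaining `spark_df.dropna().dropDuplicates()` on the DataFrame read through the session above.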

posted @ 2024-03-18 22:16  南北啊