1. Spark SQL Programming
### --- SparkSession official documentation
~~~ Official docs: http://spark.apache.org/docs/latest/sql-getting-started.html

### --- SparkSession
~~~ Before Spark 2.0:
~~~ SQLContext was the entry point for creating DataFrames and executing SQL;
~~~ HiveContext ran Hive SQL statements against Hive data and provided Hive compatibility; HiveContext extends SQLContext.
~~~ Since Spark 2.0:
~~~ these entry points are unified in SparkSession, which wraps SQLContext and HiveContext
~~~ and provides all the functionality of both;
~~~ the underlying SparkContext can also be obtained from a SparkSession (see the sketch below).
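To make the unification concrete, here is a minimal standalone sketch; the app name, the local master, and the printed field are illustrative additions, not part of the original notes. It builds a SparkSession with Hive support enabled (the old HiveContext capabilities) and retrieves the underlying SparkContext from it:

import org.apache.spark.sql.SparkSession

object UnifiedEntryPoint {
  def main(args: Array[String]): Unit = {
    // One builder replaces the old SQLContext/HiveContext constructors;
    // enableHiveSupport() adds the HiveContext side (HiveQL, Hive metastore)
    // and requires the spark-hive module on the classpath.
    val spark = SparkSession
      .builder()
      .appName("UnifiedEntryPoint")  // illustrative name
      .master("local[*]")            // illustrative: local run
      .enableHiveSupport()
      .getOrCreate()

    // The SparkContext is exposed directly on the session.
    val sc = spark.sparkContext
    println(s"appName = ${sc.appName}")

    spark.stop()
  }
}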


### --- SparkSession example in spark-shell
~~~ # In the shell the builder chain must be evaluated as one expression:
~~~ # binding `val spark = SparkSession` and then calling .builder() line by
~~~ # line leaves `spark` pointing at the companion object, which is why
~~~ # `import spark.implicits._` fails in that case.
scala> import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.SparkSession

scala> val spark = SparkSession.
     |   builder().
     |   appName("Spark SQL basic example").
     |   config("spark.some.config.option", "some-value").
     |   getOrCreate()
21/10/20 14:11:13 WARN SparkSession$Builder: Using an existing SparkSession; some configuration may not take effect.
spark: org.apache.spark.sql.SparkSession = org.apache.spark.sql.SparkSession@2a292566

~~~ # For implicit conversions like converting RDDs to DataFrames
scala> import spark.implicits._
import spark.implicits._
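With the implicits in scope, a quick follow-up check is converting a local Seq to a DataFrame via toDF; the data and column names below are illustrative, and the output is shown as spark-shell would typically render it:

scala> val df = Seq((1, "spark"), (2, "sql")).toDF("id", "name")
df: org.apache.spark.sql.DataFrame = [id: int, name: string]

scala> df.show()
+---+-----+
| id| name|
+---+-----+
|  1|spark|
|  2|  sql|
+---+-----+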