[Spark] Reading different file formats by creating DataFrames


Reading a text file

Method 1: convert an RDD to a DataFrame with a case class

Steps

1. Create the text file used for testing

Create a text file under /export/servers/ on the virtual machine:

cd /export/servers/
vim person.txt
1 zhangsan 20
2 lisi 29
3 wangwu 25
4 zhaoliu 30
5 tianqi 35
6 kobe 40
2. Run the following in spark-shell
// 1. Launch the Spark shell client
cd /export/servers/spark-2.2.0-bin-2.6.0-cdh5.14.0/
bin/spark-shell --master local[2]

// 2. Read the text file created above into an RDD named lineRDD and split each line on spaces
scala> val lineRDD = sc.textFile("file:///export/servers/person.txt").map(x => x.split(" "))
lineRDD: org.apache.spark.rdd.RDD[Array[String]] = MapPartitionsRDD[7] at map at <console>:24

// 3. Define a case class
scala> case class Person(id: Int,name: String,age: Int)
defined class Person

// 4. Map the RDD onto the case class
scala> val personRDD = lineRDD.map(x => Person(x(0).toInt,x(1),x(2).toInt))
personRDD: org.apache.spark.rdd.RDD[Person] = MapPartitionsRDD[8] at map at <console>:28

// 5. Convert the RDD to a DataFrame
scala> val personDF = personRDD.toDF
personDF: org.apache.spark.sql.DataFrame = [id: int, name: string ... 1 more field]

// 6. View the data
scala> personDF.show
+---+--------+---+
| id|    name|age|
+---+--------+---+
|  1|zhangsan| 20|
|  2|    lisi| 29|
|  3|  wangwu| 25|
|  4| zhaoliu| 30|
|  5|  tianqi| 35|
|  6|    kobe| 40|
+---+--------+---+

// Tip: to convert a DataFrame back to an RDD, just call its rdd method
scala> personDF.rdd.collect
res2: Array[org.apache.spark.sql.Row] = Array([1,zhangsan,20], [2,lisi,29], [3,wangwu,25], [4,zhaoliu,30], [5,tianqi,35], [6,kobe,40])
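
For reference, a minimal sketch of what comes back from rdd (assuming the same spark-shell session and the Person case class defined above; the rowRDD and personAgain names are just for illustration): the elements are org.apache.spark.sql.Row objects, so fields are read by name via Row's getAs accessor rather than as Person instances.

// Rows from personDF.rdd are generic Row objects; rebuild Person instances by field name
val rowRDD = personDF.rdd
val personAgain = rowRDD.map(row =>
  Person(row.getAs[Int]("id"), row.getAs[String]("name"), row.getAs[Int]("age")))
personAgain.collect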

Method 2: build the DataFrame directly with SparkSession

// 1. Read the file directly
scala> val personDF2 = spark.read.text("file:///export/servers/person.txt")
personDF2: org.apache.spark.sql.DataFrame = [value: string]

// 2. View the data
scala> personDF2.show
+-------------+
|        value|
+-------------+
|1 zhangsan 20|
|    2 lisi 29|
|  3 wangwu 25|
| 4 zhaoliu 30|
|  5 tianqi 35|
|    6 kobe 40|
+-------------+

As the output shows, when the text file is read directly through SparkSession, every row ends up in a single value column, whereas the first method splits each line into separate, typed fields. For plain text files the first approach is therefore generally preferred. If you do start from spark.read.text, the value column can still be broken into typed columns with the DataFrame API, as sketched below.
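
A minimal sketch of that conversion (assuming the personDF2 from above and the implicits that spark-shell imports automatically; parts and personDF3 are just illustrative names):

import org.apache.spark.sql.functions.split

// Split the single value column on spaces and cast each piece to the intended type
val parts = split($"value", " ")
val personDF3 = personDF2.select(
  parts.getItem(0).cast("int").as("id"),
  parts.getItem(1).as("name"),
  parts.getItem(2).cast("int").as("age"))
personDF3.show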


Reading a JSON file

// 1. Spark ships with an example JSON file, which can be read directly
scala> val jsonDF = spark.read.json("file:///export/servers/spark-2.2.0-bin-2.6.0-cdh5.14.0/examples/src/main/resources/people.json")
jsonDF: org.apache.spark.sql.DataFrame = [age: bigint, name: string]

// 2. View the data
scala> jsonDF.show
+----+-------+
| age|   name|
+----+-------+
|null|Michael|
|  30|   Andy|
|  19| Justin|
+----+-------+
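
Note that spark.read.json infers the schema from the data, which is why age shows up as bigint above. If you want to control the column types yourself, you can pass an explicit schema; a minimal sketch (peopleSchema and jsonDF2 are just illustrative names):

import org.apache.spark.sql.types.{StructType, StructField, IntegerType, StringType}

// Supply an explicit schema so age is read as int instead of the inferred bigint
val peopleSchema = StructType(Seq(
  StructField("age", IntegerType, nullable = true),
  StructField("name", StringType, nullable = true)))
val jsonDF2 = spark.read.schema(peopleSchema)
  .json("file:///export/servers/spark-2.2.0-bin-2.6.0-cdh5.14.0/examples/src/main/resources/people.json")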

Reading a Parquet (columnar) file

// 1. Spark also ships with an example Parquet file, which can be read directly
scala> val parquetDF = spark.read.parquet("file:///export/servers/spark-2.2.0-bin-2.6.0-cdh5.14.0/examples/src/main/resources/users.parquet")
parquetDF: org.apache.spark.sql.DataFrame = [name: string, favorite_color: string ... 1 more field]

// 2. View the data
scala> parquetDF.show
+------+--------------+----------------+
|  name|favorite_color|favorite_numbers|
+------+--------------+----------------+
|Alyssa|          null|  [3, 9, 15, 20]|
|   Ben|           red|              []|
+------+--------------+----------------+
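
As a quick illustration of why Parquet is convenient, here is a sketch of writing the personDF built earlier back out as Parquet and reading it again; Parquet stores the schema, so column names and types come back intact (the output path is just an example):

// Write personDF out as Parquet and read it back; the schema is preserved in the file
personDF.write.parquet("file:///export/servers/person_parquet")
val personParquetDF = spark.read.parquet("file:///export/servers/person_parquet")
personParquetDF.show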