RDD Exercise: Word Frequency Count

I. Word Frequency Count

1. Read the text file to create the RDD lines

lines=sc.textFile("file:///home/hadoop/word.txt")   # read the local file
lines.collect()
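
The steps below assume an interactive pyspark shell where the SparkContext sc already exists. If you run the code as a standalone script instead, you would create sc yourself first; a minimal sketch (the app name and the local master URL are just placeholder choices):

from pyspark import SparkConf, SparkContext

conf = SparkConf().setAppName("WordCount").setMaster("local[*]")   # placeholder app name, local mode
sc = SparkContext(conf=conf)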

2. Split each line of text into words with flatMap()

words=lines.flatMap(lambda line:line.split())   # split each line into words
words.collect()
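
flatMap() flattens the list returned by split() for each line, so the result is one flat RDD of words; a plain map() would give one list per line instead. A tiny illustration on toy data (not the exercise file):

demo = sc.parallelize(["hello spark", "hello rdd"])
demo.map(lambda line: line.split()).collect()       # [['hello', 'spark'], ['hello', 'rdd']]
demo.flatMap(lambda line: line.split()).collect()   # ['hello', 'spark', 'hello', 'rdd']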

3. Convert all words to lowercase with lower()

words=words.map(lambda word:word.lower())   # convert to lowercase
words.collect()

4. Remove words shorter than 3 characters with filter()

words=words.filter(lambda word:len(word)>=3)   # keep only words of length 3 or more
words.collect()

5. Remove stop words

with open('/home/hadoop/stopwords.txt') as f:
    stops=f.read().split()

words=words.filter(lambda word:word not in stops)
words.count()
words.collect()
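
Here stops is an ordinary Python list captured by the lambda, so Spark ships it with every task. That is fine for a small stop-word file; for a large list you could broadcast it once per executor instead. A sketch of that optional variant (stops_bc is just an illustrative name):

stops_bc = sc.broadcast(set(stops))   # one copy per executor; a set gives O(1) membership tests
words = words.filter(lambda word: word not in stops_bc.value)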

6. Convert each word into a key-value pair with map()

words=words.map(lambda word:(word,1))
words.collect()

7. Count word frequencies with reduceByKey()

words=words.reduceByKey(lambda a,b:a+b)
words.collect()
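
To see the most frequent words first, the result can be sorted by count in descending order before collecting; a small optional sketch (top10 is just an illustrative name):

top10 = words.sortBy(lambda pair: pair[1], ascending=False).take(10)   # ten most frequent words
print(top10)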

II. Student Course Scores with groupByKey()

-- Summarize all students and their scores for each course

lines = sc.textFile('file:///home/hadoop/chapter4-data01.txt')
lines.take(5)

1. Split out the fields with map()

group=lines.map(lambda line:line.split(','))   # each record becomes [student, course, score]
group.take(5)

2. Build (course, (student, score)) key-value pairs with map()

group=lines.map(lambda line:line.split(',')).map(lambda fields:(fields[1],(fields[0],fields[2])))   # (course, (student, score))
group.take(5)

3. Group by key with groupByKey()

group=group.groupByKey()
group.take(5)

4. Print the grouped result

groupByCourse=group
for i in groupByCourse.first()[1]:   # students and scores of the first course
    print(i)
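
Note that groupByKey() leaves each value as a pyspark ResultIterable, which is why the loop above iterates over it, and the loop only prints the records of the first course. For a quick per-course summary you could count the grouped records instead; a small sketch:

group.mapValues(lambda records: len(list(records))).take(5)   # (course, number of score records)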

III. Student Course Scores with reduceByKey()

-- Number of students enrolled in each course

count=lines.map(lambda line:line.split(',')).map(lambda fields:(fields[1],1))   # (course, 1)
count=count.reduceByKey(lambda a,b:a+b)
count.take(5)

-- Number of courses taken by each student

count=lines.map(lambda line:line.split(',')).map(lambda fields:(fields[0],1))   # (student, 1)
count=count.reduceByKey(lambda a,b:a+b)
count.take(5)
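
The same reduceByKey() pattern also works with (sum, count) pairs, which gives the average score of each course; a sketch under the assumption that the third field is an integer score (avg is just an illustrative name):

avg = lines.map(lambda line: line.split(',')) \
           .map(lambda fields: (fields[1], (int(fields[2]), 1))) \
           .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1])) \
           .mapValues(lambda t: round(t[0] / t[1], 2))   # (course, average score)
avg.take(5)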
