Spark: groupByKey output can be filtered too

>>> v=sc.parallelize(["one", "two", "two", "three", "three", "three"])
>>> v2=v.map(lambda x: (x,1))
>>> v2.collect()
[('one', 1), ('two', 1), ('two', 1), ('three', 1), ('three', 1), ('three', 1)]  
>>> v3=v2.groupByKey()
>>> v3.collect()
[('one', <pyspark.resultiterable.ResultIterable object at 0x7fd3c7850e90>), ('two', <pyspark.resultiterable.ResultIterable object at 0x7fd3c7850f10>), ('three', <pyspark.resultiterable.ResultIterable object at 0x7fd3c6dc83d0>)]
>>> v4=v3.filter(lambda x: len(x[1]) > 2)
>>> v4.collect()
[('three', <pyspark.resultiterable.ResultIterable object at 0x7fd3c6dc8510>)]
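The ResultIterable objects print as opaque handles in collect() output. Not part of the original session, but as a quick sketch, mapValues(list) materializes each group so the values are readable (the exact ordering of collect() results may vary):

>>> v3.mapValues(list).collect()
[('one', [1]), ('two', [1, 1]), ('three', [1, 1, 1])]

Note that len() works directly on a ResultIterable because it implements __len__, so there is no need to reach into its internal .data attribute.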

This keeps only the keys that occur more than twice: 'three' appears three times and passes the filter, while 'two' appears only twice and is dropped.
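When only the per-key counts matter, reduceByKey is usually preferable to groupByKey, since it combines values on each partition before shuffling instead of moving every record across the network. A minimal sketch of the same filter expressed on counts (not from the original post):

>>> counts = v2.reduceByKey(lambda a, b: a + b)  # sum the 1s per key
>>> counts.filter(lambda kv: kv[1] > 2).collect()
[('three', 3)]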
