
[Spark][Python][DataFrame][RDD] Example: extracting an RDD from a DataFrame


from pyspark.sql import HiveContext

# sc is the SparkContext provided by the PySpark shell
sqlContext = HiveContext(sc)

# Read a JSON Lines file (one JSON object per line) into a DataFrame
peopleDF = sqlContext.read.json("people.json")

# Extract an RDD of (pcode, name) pairs from the DataFrame.
# DataFrame.map works on Spark 1.x; on Spark 2.x it was removed,
# so use peopleDF.rdd.map(...) instead.
peopleRDD = peopleDF.map(lambda row: (row.pcode, row.name))

peopleRDD.take(5)

# Group names by postal code; values become ResultIterable objects
peopleByPCode = peopleRDD.groupByKey()

peopleByPCode.take(5)
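Note that `sqlContext.read.json` expects JSON Lines input: one complete JSON object per line, not a pretty-printed array. A minimal sketch of what a compatible `people.json` might look like (the records here are hypothetical, chosen only to match the pcodes that appear in the grouped output later in the post):

```python
import json

# Hypothetical people.json content in JSON Lines form:
# each line is an independent JSON object.
sample = "\n".join([
    '{"name": "Alice", "pcode": "94304"}',
    '{"name": "Brayden", "pcode": "94304"}',
    '{"name": "Carla", "pcode": "10036"}',
    '{"name": "Diana"}',  # pcode absent -> null/None in the DataFrame
])

with open("people.json", "w") as f:
    f.write(sample)

# Each line parses on its own, which is what lets Spark split the file:
records = [json.loads(line) for line in sample.splitlines()]
```

A field missing from a line (like `pcode` above) surfaces as a null column value, which is why `None` can show up as a grouping key.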

[(u'10036', <pyspark.resultiterable.ResultIterable at 0x7f0d683a2290>),
 (u'94104', <pyspark.resultiterable.ResultIterable at 0x7f0d683a2690>),
 (u'94304', <pyspark.resultiterable.ResultIterable at 0x7f0d683a2490>),
 (None, <pyspark.resultiterable.ResultIterable at 0x7f0d683a25d0>)]
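The `ResultIterable` values above are lazy wrappers and do not print their contents; in PySpark, `peopleByPCode.mapValues(list).take(5)` would materialize them into readable lists. Since the post may be read without a Spark cluster at hand, here is a pure-Python sketch of the same grouping semantics, using hypothetical (pcode, name) pairs that match the pcodes shown above:

```python
from collections import defaultdict

# Hypothetical (pcode, name) pairs, consistent with the keys above.
pairs = [("10036", "Carla"), ("94104", "Raul"),
         ("94304", "Alice"), ("94304", "Brayden"), (None, "Diana")]

# groupByKey collects all values sharing a key, like this:
grouped = defaultdict(list)
for pcode, name in pairs:
    grouped[pcode].append(name)

print(dict(grouped))
```

On a real RDD, `groupByKey` additionally shuffles the pairs across the cluster so that all values for one key land on one partition, which is why it is more expensive than this local loop suggests.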

This post was reproduced from the cnblogs blog 健哥的資料花園 (Jiangge's Data Garden); original link: http://www.cnblogs.com/gaojian/p/7636004.html. Please contact the original author before reposting.
