[Spark][Python][DataFrame][RDD] Example: extracting an RDD from a DataFrame
This example reads a JSON file into a DataFrame, pulls out a pair RDD of (pcode, name), and then groups the names by postal code with groupByKey.
from pyspark.sql import HiveContext   # sc is the already-running SparkContext
sqlContext = HiveContext(sc)
peopleDF = sqlContext.read.json("people.json")
# Turn each Row into a (pcode, name) pair RDD
# (on Spark 2.x, DataFrame has no map(); use peopleDF.rdd.map(...) instead)
peopleRDD = peopleDF.map(lambda row: (row.pcode, row.name))
peopleRDD.take(5)
# Group the names by postal code
peopleByPCode = peopleRDD.groupByKey()
peopleByPCode.take(5)
[(u'10036', <pyspark.resultiterable.ResultIterable at 0x7f0d683a2290>),
(u'94104', <pyspark.resultiterable.ResultIterable at 0x7f0d683a2690>),
(u'94304', <pyspark.resultiterable.ResultIterable at 0x7f0d683a2490>),
(None, <pyspark.resultiterable.ResultIterable at 0x7f0d683a25d0>)]
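The grouped values print as ResultIterable handles rather than the names themselves. A minimal follow-up sketch, assuming the same peopleByPCode RDD as above, materializes each group into a Python list so the contents are readable:
# Turn each ResultIterable into a plain list so the grouped names are visible
peopleByPCode.mapValues(list).take(5)
# Or count how many people share each postal code
peopleByPCode.mapValues(lambda names: len(list(names))).take(5)
mapValues keeps the key unchanged and only transforms the grouped values, so the keys are the same pcodes shown in the output above.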