[Spark][Python][DataFrame][RDD] Example: extracting an RDD from a DataFrame

from pyspark.sql import HiveContext

# sc is the SparkContext pre-defined by the pyspark shell
sqlContext = HiveContext(sc)

peopleDF = sqlContext.read.json("people.json")

# On Spark 1.x a DataFrame can be mapped directly; on Spark 2.x+ use peopleDF.rdd.map(...)
peopleRDD = peopleDF.map(lambda row: (row.pcode, row.name))

peopleRDD.take(5)

 

Out[5]: 
[(u'94304', u'Alice'),
(u'94304', u'Brayden'),
(u'10036', u'Carla'),
(None, u'Diana'),
(u'94104', u'Etienne')]
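The extraction step above is just a per-row lambda over `Row` objects. As a local sketch of what that lambda does (no cluster needed), the snippet below mimics `pyspark.sql.Row` with a `namedtuple`; the `Row` fields `pcode` and `name` match the example, but the stand-in type itself is an assumption for illustration:

```python
from collections import namedtuple

# Local stand-in for pyspark.sql.Row with the fields used above (assumption)
Row = namedtuple("Row", ["pcode", "name"])
rows = [Row("94304", "Alice"), Row(None, "Diana")]

# The same lambda passed to map(); on Spark 2.x+ you would call
# peopleDF.rdd.map(lambda row: (row.pcode, row.name))
extract = lambda row: (row.pcode, row.name)

pairs = [extract(r) for r in rows]
print(pairs)  # [('94304', 'Alice'), (None, 'Diana')]
```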

 

peopleByPCode = peopleRDD.groupByKey()

peopleByPCode.take(5)

 

[(u'10036', <pyspark.resultiterable.ResultIterable at 0x7f0d683a2290>),
(u'94104', <pyspark.resultiterable.ResultIterable at 0x7f0d683a2690>),
(u'94304', <pyspark.resultiterable.ResultIterable at 0x7f0d683a2490>),
(None, <pyspark.resultiterable.ResultIterable at 0x7f0d683a25d0>)]
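Each value in the `groupByKey()` result is a lazy `ResultIterable`; to see the grouped names you would typically follow up with `peopleByPCode.mapValues(list).take(5)`. The pure-Python sketch below reproduces the `groupByKey` + `mapValues(list)` semantics locally so it runs without Spark (the helper name `group_by_key` is ours, not a PySpark API):

```python
from collections import defaultdict

# (pcode, name) pairs mirroring peopleRDD above
pairs = [("94304", "Alice"), ("94304", "Brayden"), ("10036", "Carla"),
         (None, "Diana"), ("94104", "Etienne")]

def group_by_key(kv_pairs):
    """Pure-Python sketch of RDD.groupByKey(): collect all values per key."""
    groups = defaultdict(list)
    for k, v in kv_pairs:
        groups[k].append(v)
    return list(groups.items())

grouped = group_by_key(pairs)
# In PySpark, the equivalent readable form is:
#   peopleByPCode.mapValues(list).take(5)
print(grouped)
```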

 
