
The difference between Spark Streaming's reduceByKey() and updateStateByKey() operators

reduceByKey(): only aggregates the data within the current batch Duration.

updateStateByKey(): aggregates everything from the moment the StreamingContext is started up to the current batch. Data from batches before the current one is kept in memory plus the configured checkpoint directory; if no checkpoint directory is set, an error is thrown.

If the batch Duration is greater than 10 seconds, a checkpoint is written once per Duration.

If the batch Duration is less than 10 seconds, a checkpoint is written roughly every 10 seconds (the smallest multiple of the Duration that is at least 10 seconds), to avoid accessing the checkpoint directory too frequently.
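If the default interval is not what you want, it can also be set explicitly on the state DStream itself. Below is a minimal sketch, assuming the 5-second batch Duration used in the examples that follow; stateStream stands for the DStream returned by updateStateByKey (called total in the second example), and the 25-second value is only illustrative, it must be a multiple of the batch Duration.

// Sketch: explicitly setting the checkpoint interval of a stateful DStream
ssc.checkpoint("./checkpoint")        // checkpoint directory, required by updateStateByKey
stateStream.checkpoint(Seconds(25))   // illustrative interval, a multiple of the 5-second batch Duration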

The following code shows how reduceByKey and updateStateByKey are used.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}

object SparkStreamingTest {
  def main(args: Array[String]): Unit = {

    // In receiver mode, the number of simulated local threads must be at least 2: one thread runs the receiver to ingest data, another runs the job.
    val conf = new SparkConf().setMaster("local[*]").setAppName("SparkStreamingTest")

    // Create the SparkContext and set the log level to ERROR
    val sc = new SparkContext(conf)
    sc.setLogLevel("ERROR")

    // Set the batch Interval when creating the StreamingContext
    val ssc: StreamingContext = new StreamingContext(sc, Seconds(5))
    
    // Create the input DStream from a socket source
    val dstream1: ReceiverInputDStream[String] = ssc.socketTextStream("hadoop-101", 999)

    // Apply transformation operators on the DStream
    val dstream2: DStream[String] = dstream1.flatMap(x => {
      x.split(" ")
    })

    val dstream3: DStream[(String, Int)] = dstream2.map(x => {
      (x, 1)
    })

    val reducedStream: DStream[(String, Int)] = dstream3.reduceByKey(_ + _)

    // After all the logic is defined, an output operation is required to trigger execution
    reducedStream.print()

    // No new business logic can be added once the streaming framework has started.
    ssc.start()

    // Block and wait for the streaming computation to terminate
    ssc.awaitTermination()
  }

}
           
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}

object SparkStreamingTest2 extends App {

  val conf = new SparkConf().setMaster("local[*]").setAppName("SparkStreamingTest")

  // Create the SparkContext and set the log level to ERROR
  val sc = new SparkContext(conf)
  sc.setLogLevel("ERROR")

  // Set the batch Interval when creating the StreamingContext
  val ssc: StreamingContext = new StreamingContext(sc, Seconds(5))

  // Set the checkpoint directory; updateStateByKey requires it to persist state
  ssc.checkpoint("./")

  // Create the input DStream from a socket source
  val dstream1: ReceiverInputDStream[String] = ssc.socketTextStream("hadoop-101", 999)

  // Apply transformation operators on the DStream
  val dstream2: DStream[String] = dstream1.flatMap(x => {
    x.split(" ")
  })

  val dstream3: DStream[(String, Int)] = dstream2.map(x => {
    (x, 1)
  })

  // How updateStateByKey works:
  // updateStateByKey takes an update function (updateFunc: (Seq[V], Option[S]) => Option[S])
  // that is applied per key:
  //   Seq[V]    - the values for this key in the current batch, e.g. (1, 1, 1)
  //   Option[S] - the accumulated state for this key from previous batches, e.g. 8
  //   the returned Option[S] is the new state: the current batch merged into the accumulated state
  val total: DStream[(String, Int)] = dstream3.updateStateByKey((currValues: Seq[Int], prevValueState: Option[Int]) => {
    // e.g. for an input of "a a a", currValues for key "a" is Seq(1, 1, 1), so currentSum = 3
    val currentSum = currValues.sum
    // the accumulated count from previous batches (e.g. 8); 0 if this key has no state yet
    val previousCount = prevValueState.getOrElse(0)
    Some(currentSum + previousCount) // 3 + 8 = 11
  })

  total.print()
  ssc.start()
  ssc.awaitTermination()

}
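To try either example, pipe lines of text into the socket they read from, e.g. with netcat on the hadoop-101 host: nc -lk 999. The first program then prints word counts for the current 5-second batch only, while the second keeps printing counts accumulated since the StreamingContext was started.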
