Learning Spark from the Example Source Code (3): BroadcastTest

Starting with this installment, we run the examples directly in Spark instead of building a standalone project. If you want to set up a standalone project, see Parts 1 and 2.

This installment looks at BroadcastTest, which is run as follows:

jpan@jpan-Beijing:~/Software/spark-0.9.1$ ./bin/run-example org.apache.spark.examples.BroadcastTest
Usage: BroadcastTest <master> [slices] [numElem] [broadcastAlgo] [blockSize]
jpan@jpan-Beijing:~/Software/spark-0.9.1$ ./bin/run-example org.apache.spark.examples.BroadcastTest spark://jpan-Beijing:7077

The output is:

14/06/04 14:56:06 INFO spark.SparkContext: Job finished: collect at BroadcastTest.scala:53, took 0.095158958 s
1000000
1000000
1000000
1000000
1000000
1000000
1000000
1000000
1000000
1000000
Iteration 1 took 137 milliseconds
Iteration 2
===========
14/06/04 14:56:06 INFO storage.MemoryStore: ensureFreeSpace(4000128) called with curMem=8000256, maxMem=819776716
14/06/04 14:56:06 INFO scheduler.TaskSetManager: Finished TID 3 in 84 ms on jpan-Beijing.local (progress: 2/2)
14/06/04 14:56:06 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool 
14/06/04 14:56:06 INFO storage.MemoryStore: Block broadcast_2 stored as values to memory (estimated size 3.8 MB, free 770.4 MB)
14/06/04 14:56:06 INFO spark.SparkContext: Starting job: collect at BroadcastTest.scala:53
14/06/04 14:56:06 INFO scheduler.DAGScheduler: Got job 2 (collect at BroadcastTest.scala:53) with 2 output partitions (allowLocal=false)
14/06/04 14:56:06 INFO scheduler.DAGScheduler: Final stage: Stage 2 (collect at BroadcastTest.scala:53)
14/06/04 14:56:06 INFO scheduler.DAGScheduler: Parents of final stage: List()
14/06/04 14:56:06 INFO scheduler.DAGScheduler: Missing parents: List()
14/06/04 14:56:06 INFO scheduler.DAGScheduler: Submitting Stage 2 (MappedRDD[5] at map at BroadcastTest.scala:51), which has no missing parents
14/06/04 14:56:06 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 2 (MappedRDD[5] at map at BroadcastTest.scala:51)
.....................................
1000000
1000000
1000000
1000000
1000000
Iteration 2 took 124 milliseconds

The source code of BroadcastTest is as follows:

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.spark.examples

import org.apache.spark.SparkContext

object BroadcastTest {
  def main(args: Array[String]) {
    if (args.length == 0) {
      System.err.println("Usage: BroadcastTest <master> [slices] [numElem] [broadcastAlgo]" +
        " [blockSize]")
      System.exit(1)
    }

    val bcName = if (args.length > 3) args(3) else "Http"
    val blockSize = if (args.length > 4) args(4) else "4096"

    System.setProperty("spark.broadcast.factory", "org.apache.spark.broadcast." + bcName +
      "BroadcastFactory")
    System.setProperty("spark.broadcast.blockSize", blockSize)

    val sc = new SparkContext(args(0), "Broadcast Test",
      System.getenv("SPARK_HOME"), SparkContext.jarOfClass(this.getClass))

    val slices = if (args.length > 1) args(1).toInt else 2
    val num = if (args.length > 2) args(2).toInt else 1000000

    val arr1 = new Array[Int](num)
    for (i <- 0 until arr1.length) {
      arr1(i) = i
    }

    for (i <- 0 until 3) {
      println("Iteration " + i)
      println("===========")
      val startTime = System.nanoTime
      val barr1 = sc.broadcast(arr1)
      val observedSizes = sc.parallelize(1 to 10, slices).map(_ => barr1.value.size)
      // Collect the small RDD so we can print the observed sizes locally.
      observedSizes.collect().foreach(i => println(i))
      println("Iteration %d took %.0f milliseconds".format(i, (System.nanoTime - startTime) / 1E6))
    }

    sc.stop()
  }
}

As the main function shows, the program takes five arguments. Only <master> is required; the other four (slices, numElem, broadcastAlgo and blockSize) are optional and fall back to their defaults (2, 1000000, "Http" and "4096") when omitted.
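For example, passing all five arguments explicitly (the values below simply restate the defaults, against the same master as before) would look like:

./bin/run-example org.apache.spark.examples.BroadcastTest spark://jpan-Beijing:7077 2 1000000 Http 4096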

The code then allocates an array named arr1 and fills it with the values 0 through 999999 (i.e. num - 1) in a loop.
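As an aside, the same initialization can be written more idiomatically in Scala; a minimal equivalent sketch:

// Equivalent to the for loop above: element i gets the value i.
val arr1 = Array.tabulate(num)(i => i)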

Next, let's look at the broadcast function.

After being passed through the broadcast function, the input becomes a broadcast variable. A broadcast variable is read-only and is cached on every machine in the cluster, rather than being shipped to a node each time a task needs it. This gives each node its own copy of the data efficiently and reduces communication cost.

A broadcast variable is created by calling broadcast on an ordinary value; in this example it is created from arr1. Once created, the underlying data is accessed through its value method. A simpler illustration in the Spark shell:

scala> val broadcastVar = sc.broadcast(Array(1, 2, 3))
broadcastVar: spark.Broadcast[Array[Int]] = spark.Broadcast(b5c40191-a864-4c7d-b9bf-d87e1a4e787c)

scala> broadcastVar.value
res0: Array[Int] = Array(1, 2, 3)

Once the broadcast variable has been created, it is used in place of the original Array, and we access the data through broadcastVar.value.
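To see what broadcasting buys us, compare it with capturing the array directly in a task's closure. A minimal sketch, assuming the sc and arr1 from the example above:

// Without broadcast: arr1 is serialized into each task's closure
// and re-shipped to the executors for every task that uses it.
val sizesPlain = sc.parallelize(1 to 10, 2).map(_ => arr1.size)

// With broadcast: arr1 is distributed to each machine once and
// cached there; tasks read the local copy through barr1.value.
val barr1 = sc.broadcast(arr1)
val sizesBroadcast = sc.parallelize(1 to 10, 2).map(_ => barr1.value.size)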

The parallelize function

Its definition in the official API documentation is:

def parallelize[T](seq: Seq[T], numSlices: Int = defaultParallelism)(implicit arg0: ClassTag[T]): RDD[T]

Distribute a local Scala collection to form an RDD.
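A minimal usage sketch, assuming a live SparkContext sc:

// Distribute the local range 1 to 10 into an RDD with 2 partitions.
val rdd = sc.parallelize(1 to 10, numSlices = 2)
// collect() gathers the elements back to the driver as a local Array.
println(rdd.collect().mkString(", "))  // 1, 2, 3, ..., 10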

Functions such as map and collect are basic RDD operations and will be introduced in later installments.
