
Spark HA Installation, Configuration, and Usage (Spark 1.2 on CDH 5.3)

The installation environment:

  • OS: CentOS 6.6
  • Hadoop version: CDH 5.3.0
  • Spark version: 1.2

The cluster has five nodes, node01~05.

node01~03 are workers; node04 and node05 are masters.

Spark HA requires ZooKeeper as a coordination service to perform master active/standby failover. ZooKeeper installation and configuration are not covered here.

1. Installation

Check which Spark-related packages are available:

[root@node05 hadoop-yarn]# yum list |grep spark
spark-core.noarch                     1.2.0+cdh5.3.0+364-1.cdh5.3.0.p0.36.el6
spark-history-server.noarch           1.2.0+cdh5.3.0+364-1.cdh5.3.0.p0.36.el6
spark-master.noarch                   1.2.0+cdh5.3.0+364-1.cdh5.3.0.p0.36.el6
spark-python.noarch                   1.2.0+cdh5.3.0+364-1.cdh5.3.0.p0.36.el6
hue-spark.x86_64                      3.7.0+cdh5.3.0+134-1.cdh5.3.0.p0.24.el6
spark-worker.noarch                   1.2.0+cdh5.3.0+364-1.cdh5.3.0.p0.36.el6
      

The packages above serve the following purposes:

  • spark-core: Spark core functionality
  • spark-worker: init scripts for the spark-worker service
  • spark-master: init scripts for the spark-master service
  • spark-python: the Python client for Spark
  • hue-spark: Spark integration with Hue
  • spark-history-server: the Spark history server

Install the master on node04 and node05, and the worker on node01, node02, and node03.

Run on node04 and node05:
sudo yum -y install spark-core spark-master spark-worker spark-python spark-history-server
Run on node01~03:
sudo yum -y install spark-core spark-worker spark-python
      

The resulting roles per node:

 node04: spark-master, spark-history-server

 node05: spark-master

 node01: spark-worker

 node02: spark-worker

 node03: spark-worker

2. Modify the configuration files

(1) Edit the configuration file /etc/spark/conf/spark-env.sh so that its content is as follows:

export SPARK_LAUNCH_WITH_SCALA=0
export SPARK_LIBRARY_PATH=${SPARK_HOME}/lib
export SCALA_LIBRARY_PATH=${SPARK_HOME}/lib
export SPARK_MASTER_WEBUI_PORT=18080
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_PORT=7078
export SPARK_WORKER_WEBUI_PORT=18081
export SPARK_WORKER_DIR=/var/run/spark/work
export SPARK_LOG_DIR=/var/log/spark
export SPARK_PID_DIR='/var/run/spark/'
# Use ZooKeeper for HA; export the corresponding options
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=node01:2181,node02:2181,node03:2181 -Dspark.deploy.zookeeper.dir=/spark"

export JAVA_HOME=/usr/java/jdk1.7.0_71/
# With multiple masters, do not set SPARK_MASTER_IP here, otherwise additional masters cannot start; it can instead be specified per application
#export SPARK_MASTER_IP=node04
export SPARK_WORKER_CORES=1
export SPARK_WORKER_INSTANCES=1
# Memory available to each worker (global setting)
export SPARK_WORKER_MEMORY=5g


# The following is only needed when also running Spark on YARN; standalone mode does not require it
export HADOOP_HOME=/usr/lib/hadoop
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
# Prevents Spark on YARN submissions from failing to find the ResourceManager: INFO Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
export SPARK_YARN_USER_ENV="CLASSPATH=/usr/lib/hadoop"
      

SPARK_DAEMON_JAVA_OPTS can also be supplied in another form, as configuration properties:

# Spark recovery mode; ZooKeeper is used here, the default is NONE
spark.deploy.recoveryMode               ZOOKEEPER
spark.deploy.zookeeper.url              node01:2181,node02:2181,node03:2181
spark.deploy.zookeeper.dir              /spark
      

The options are:

spark.deploy.recoveryMode (default NONE): the recovery mode used when the master restarts; one of ZOOKEEPER, FILESYSTEM, or NONE.

spark.deploy.zookeeper.url: the addresses of the ZooKeeper servers.

spark.deploy.zookeeper.dir (here /spark): the ZooKeeper directory where cluster metadata is stored, including workers, drivers, and applications.
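As a quick sanity check on how these three properties map onto the -D flags in SPARK_DAEMON_JAVA_OPTS, here is a small helper sketch (the function name and defaults are my own, not part of Spark):

```python
def zk_daemon_opts(zk_hosts, zk_dir="/spark", zk_port=2181):
    """Build the SPARK_DAEMON_JAVA_OPTS value for ZooKeeper-backed HA."""
    # spark.deploy.zookeeper.url is a comma-separated host:port list
    url = ",".join("%s:%d" % (host, zk_port) for host in zk_hosts)
    return (
        "-Dspark.deploy.recoveryMode=ZOOKEEPER "
        "-Dspark.deploy.zookeeper.url=%s "
        "-Dspark.deploy.zookeeper.dir=%s" % (url, zk_dir)
    )

print(zk_daemon_opts(["node01", "node02", "node03"]))
```

This reproduces exactly the SPARK_DAEMON_JAVA_OPTS line used in spark-env.sh above.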

(2) Edit spark-defaults.conf (without the configuration below, event logs are not persisted, and you cannot review a job's history after it finishes)

Append the following options at the end:

# Enable event logging
spark.eventLog.enabled  true
# Directory where driver event logs are written
spark.eventLog.dir      hdfs://mycluster/user/spark/eventlog
# Directory monitored by the history server; requires the two settings above
spark.history.fs.logDirectory   hdfs://mycluster/user/spark/eventlog
# To let the YARN ResourceManager link to the Spark History Server, add:
spark.yarn.historyServer.address        http://node04:19888
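spark-defaults.conf is a plain whitespace-separated key/value file; a minimal parser sketch makes the format explicit (this is an illustration, not Spark's actual loader):

```python
def parse_spark_defaults(text):
    """Parse spark-defaults.conf-style text into a dict (sketch)."""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        # key is everything up to the first space; the rest is the value
        key, _, value = line.partition(" ")
        conf[key] = value.strip()
    return conf

sample = """
# Enable event logging
spark.eventLog.enabled  true
spark.eventLog.dir      hdfs://mycluster/user/spark/eventlog
"""
print(parse_spark_defaults(sample)["spark.eventLog.enabled"])  # true
```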
      

hdfs://mycluster/user/spark/eventlog is a directory on HDFS and must be created in advance.

Because the Hadoop HA nameservice mycluster is used here, copy Hadoop's hdfs-site.xml into Spark's conf directory; otherwise Spark cannot resolve the cluster name mycluster.

(3) Edit the slaves file:

 node01

 node02

 node03

After editing, distribute the configuration files to the other nodes:

scp -r /etc/spark/conf root@node01:/etc/spark
scp -r /etc/spark/conf root@node02:/etc/spark
scp -r /etc/spark/conf root@node03:/etc/spark
scp -r /etc/spark/conf root@node04:/etc/spark
      

Create the directories on HDFS:

sudo -u hdfs hadoop fs -mkdir /user/spark
sudo -u hdfs hadoop fs -mkdir /user/spark/eventlog
sudo -u hdfs hadoop fs -chown -R spark:spark /user/spark
sudo -u hdfs hadoop fs -chmod 1777 /user/spark/eventlog
      

3. Startup

On node05, go to Spark's sbin directory and run start-all.sh:

[root@node05 sbin]# ./start-all.sh 
starting org.apache.spark.deploy.master.Master, logging to /var/log/spark/spark-root-org.apache.spark.deploy.master.Master-1-node05.out
node01: starting org.apache.spark.deploy.worker.Worker, logging to /var/log/spark/spark-root-org.apache.spark.deploy.worker.Worker-1-node01.out
node02: starting org.apache.spark.deploy.worker.Worker, logging to /var/log/spark/spark-root-org.apache.spark.deploy.worker.Worker-1-node02.out
node03: starting org.apache.spark.deploy.worker.Worker, logging to /var/log/spark/spark-root-org.apache.spark.deploy.worker.Worker-1-node03.out
      

On node04, go to the sbin directory and run start-master.sh:

[root@node04 sbin]# start-master.sh 
starting org.apache.spark.deploy.master.Master, logging to /var/log/spark/spark-root-org.apache.spark.deploy.master.Master-1-node04.out
      

While node05 is ALIVE, node04 remains in standby; if node05 goes down, node04 takes over as master.
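In Spark 1.x the standalone master web UI (port 18080 in this setup) also exposes a JSON view at /json whose payload includes a "status" field of ALIVE or STANDBY; treat the exact URL and field name as assumptions for your version. A small sketch for deciding which master is active from such payloads:

```python
def alive_master(statuses):
    """statuses: {hostname: parsed JSON dict from each master's /json page}.
    Return the host reporting status ALIVE, or None if no master is active."""
    for host, info in sorted(statuses.items()):
        if info.get("status") == "ALIVE":
            return host
    return None

# Example payloads mirroring the failover described above:
before = {"node04": {"status": "STANDBY"}, "node05": {"status": "ALIVE"}}
after_failover = {"node04": {"status": "ALIVE"}}  # node05 is down

print(alive_master(before))          # node05
print(alive_master(after_failover))  # node04
```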


Stop the master on node05:

[root@node05 sbin]# ./stop-master.sh 
stopping org.apache.spark.deploy.master.Master
      

At this point node04 becomes ALIVE and is now the master.


4. Testing

4.1 Running the example programs

You can find official examples on the project website. In addition, the Spark distribution ships several examples (Scala, Java, and Python) in the examples folder. To run a Java or Scala example, pass the class name to Spark's bin/run-example script, for example:

[root@node02 bin]# run-example SparkPi 10
16/11/19 00:34:51 INFO spark.SparkContext: Spark configuration:
spark.app.name=Spark Pi
spark.deploy.recoveryMode=ZOOKEEPER
spark.deploy.zookeeper.dir=/spark
spark.deploy.zookeeper.url=node01:2181,node02:2181,node03:2181
spark.eventLog.dir=hdfs://mycluster/user/spark/eventlog
spark.eventLog.enabled=true
spark.executor.memory=4g
spark.jars=file:/usr/lib/spark/lib/spark-examples-1.2.0-cdh5.3.0-hadoop2.5.0-cdh5.3.0.jar
spark.logConf=true
spark.master=local[*]
spark.scheduler.mode=FAIR
spark.yarn.historyServer.address=http://node04:19888
spark.yarn.submit.file.replication=3
16/11/19 00:34:51 INFO spark.SecurityManager: Changing view acls to: root
16/11/19 00:34:51 INFO spark.SecurityManager: Changing modify acls to: root
16/11/19 00:34:51 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/11/19 00:34:51 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/11/19 00:34:51 INFO Remoting: Starting remoting
16/11/19 00:34:52 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@node02:45368]
16/11/19 00:34:52 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriver@node02:45368]
16/11/19 00:34:52 INFO util.Utils: Successfully started service 'sparkDriver' on port 45368.
16/11/19 00:34:52 INFO spark.SparkEnv: Registering MapOutputTracker
16/11/19 00:34:52 INFO spark.SparkEnv: Registering BlockManagerMaster
16/11/19 00:34:52 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-local-20161119003452-320d
16/11/19 00:34:52 INFO storage.MemoryStore: MemoryStore started with capacity 265.4 MB
16/11/19 00:34:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/11/19 00:34:52 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-f91a5447-3d40-4ef8-ba3f-6c4391566017
16/11/19 00:34:52 INFO spark.HttpServer: Starting HTTP Server
16/11/19 00:34:52 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/11/19 00:34:52 INFO server.AbstractConnector: Started [email protected]:46389
16/11/19 00:34:52 INFO util.Utils: Successfully started service 'HTTP file server' on port 46389.
16/11/19 00:34:53 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/11/19 00:34:53 INFO server.AbstractConnector: Started [email protected]:4040
16/11/19 00:34:53 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
16/11/19 00:34:53 INFO ui.SparkUI: Started SparkUI at http://node02:4040
16/11/19 00:34:53 INFO spark.SparkContext: Added JAR file:/usr/lib/spark/lib/spark-examples-1.2.0-cdh5.3.0-hadoop2.5.0-cdh5.3.0.jar at http://172.16.145.112:46389/jars/spark-examples-1.2.0-cdh5.3.0-hadoop2.5.0-cdh5.3.0.jar with timestamp 1479486893473
16/11/19 00:34:53 INFO scheduler.FairSchedulableBuilder: Created default pool default, schedulingMode: FIFO, minShare: 0, weight: 1
16/11/19 00:34:53 INFO util.AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@node02:45368/user/HeartbeatReceiver
16/11/19 00:34:53 INFO netty.NettyBlockTransferService: Server created on 37623
16/11/19 00:34:53 INFO storage.BlockManagerMaster: Trying to register BlockManager
16/11/19 00:34:53 INFO storage.BlockManagerMasterActor: Registering block manager localhost:37623 with 265.4 MB RAM, BlockManagerId(<driver>, localhost, 37623)
16/11/19 00:34:53 INFO storage.BlockManagerMaster: Registered BlockManager
16/11/19 00:34:54 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
16/11/19 00:34:54 INFO scheduler.EventLoggingListener: Logging events to hdfs://mycluster/user/spark/eventlog/local-1479486893516
16/11/19 00:34:55 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:35
16/11/19 00:34:55 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:35) with 10 output partitions (allowLocal=false)
16/11/19 00:34:55 INFO scheduler.DAGScheduler: Final stage: Stage 0(reduce at SparkPi.scala:35)
16/11/19 00:34:55 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/11/19 00:34:55 INFO scheduler.DAGScheduler: Missing parents: List()
16/11/19 00:34:55 INFO scheduler.DAGScheduler: Submitting Stage 0 (MappedRDD[1] at map at SparkPi.scala:31), which has no missing parents
16/11/19 00:34:55 INFO storage.MemoryStore: ensureFreeSpace(1728) called with curMem=0, maxMem=278302556
16/11/19 00:34:55 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1728.0 B, free 265.4 MB)
16/11/19 00:34:55 INFO storage.MemoryStore: ensureFreeSpace(1126) called with curMem=1728, maxMem=278302556
16/11/19 00:34:55 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1126.0 B, free 265.4 MB)
16/11/19 00:34:55 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:37623 (size: 1126.0 B, free: 265.4 MB)
16/11/19 00:34:55 INFO storage.BlockManagerMaster: Updated info of block broadcast_0_piece0
16/11/19 00:34:55 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:838
16/11/19 00:34:55 INFO scheduler.DAGScheduler: Submitting 10 missing tasks from Stage 0 (MappedRDD[1] at map at SparkPi.scala:31)
16/11/19 00:34:55 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 10 tasks
16/11/19 00:34:55 INFO scheduler.FairSchedulableBuilder: Added task set TaskSet_0 tasks to pool default
16/11/19 00:34:55 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 1357 bytes)
16/11/19 00:34:55 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, PROCESS_LOCAL, 1357 bytes)
16/11/19 00:34:55 INFO scheduler.TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, localhost, PROCESS_LOCAL, 1357 bytes)
16/11/19 00:34:55 INFO scheduler.TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, localhost, PROCESS_LOCAL, 1357 bytes)
16/11/19 00:34:55 INFO executor.Executor: Running task 1.0 in stage 0.0 (TID 1)
16/11/19 00:34:55 INFO executor.Executor: Running task 0.0 in stage 0.0 (TID 0)
16/11/19 00:34:55 INFO executor.Executor: Running task 3.0 in stage 0.0 (TID 3)
16/11/19 00:34:55 INFO executor.Executor: Running task 2.0 in stage 0.0 (TID 2)
16/11/19 00:34:55 INFO executor.Executor: Fetching http://172.16.145.112:46389/jars/spark-examples-1.2.0-cdh5.3.0-hadoop2.5.0-cdh5.3.0.jar with timestamp 1479486893473
16/11/19 00:34:55 INFO util.Utils: Fetching http://172.16.145.112:46389/jars/spark-examples-1.2.0-cdh5.3.0-hadoop2.5.0-cdh5.3.0.jar to /tmp/fetchFileTemp1952931669628282908.tmp
16/11/19 00:34:56 INFO executor.Executor: Adding file:/tmp/spark-a281a361-04d2-495d-bfa7-ccd2a9c9a2ac/spark-examples-1.2.0-cdh5.3.0-hadoop2.5.0-cdh5.3.0.jar to class loader
16/11/19 00:34:56 INFO executor.Executor: Finished task 1.0 in stage 0.0 (TID 1). 727 bytes result sent to driver
16/11/19 00:34:56 INFO executor.Executor: Finished task 3.0 in stage 0.0 (TID 3). 727 bytes result sent to driver
16/11/19 00:34:56 INFO scheduler.TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, localhost, PROCESS_LOCAL, 1357 bytes)
16/11/19 00:34:56 INFO executor.Executor: Running task 4.0 in stage 0.0 (TID 4)
16/11/19 00:34:56 INFO executor.Executor: Finished task 0.0 in stage 0.0 (TID 0). 727 bytes result sent to driver
16/11/19 00:34:56 INFO scheduler.TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, localhost, PROCESS_LOCAL, 1357 bytes)
16/11/19 00:34:56 INFO executor.Executor: Running task 5.0 in stage 0.0 (TID 5)
16/11/19 00:34:56 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 727 ms on localhost (1/10)
16/11/19 00:34:56 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 747 ms on localhost (2/10)
16/11/19 00:34:56 INFO scheduler.TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 734 ms on localhost (3/10)
16/11/19 00:34:56 INFO scheduler.TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, localhost, PROCESS_LOCAL, 1357 bytes)
16/11/19 00:34:56 INFO executor.Executor: Running task 6.0 in stage 0.0 (TID 6)
16/11/19 00:34:56 INFO executor.Executor: Finished task 4.0 in stage 0.0 (TID 4). 727 bytes result sent to driver
16/11/19 00:34:56 INFO executor.Executor: Finished task 2.0 in stage 0.0 (TID 2). 727 bytes result sent to driver
16/11/19 00:34:56 INFO scheduler.TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, localhost, PROCESS_LOCAL, 1357 bytes)
16/11/19 00:34:56 INFO executor.Executor: Running task 7.0 in stage 0.0 (TID 7)
16/11/19 00:34:56 INFO scheduler.TaskSetManager: Starting task 8.0 in stage 0.0 (TID 8, localhost, PROCESS_LOCAL, 1357 bytes)
16/11/19 00:34:56 INFO executor.Executor: Running task 8.0 in stage 0.0 (TID 8)
16/11/19 00:34:56 INFO scheduler.TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 60 ms on localhost (4/10)
16/11/19 00:34:56 INFO scheduler.TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 762 ms on localhost (5/10)
16/11/19 00:34:56 INFO executor.Executor: Finished task 5.0 in stage 0.0 (TID 5). 727 bytes result sent to driver
16/11/19 00:34:56 INFO scheduler.TaskSetManager: Starting task 9.0 in stage 0.0 (TID 9, localhost, PROCESS_LOCAL, 1357 bytes)
16/11/19 00:34:56 INFO scheduler.TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 59 ms on localhost (6/10)
16/11/19 00:34:56 INFO executor.Executor: Running task 9.0 in stage 0.0 (TID 9)
16/11/19 00:34:56 INFO executor.Executor: Finished task 8.0 in stage 0.0 (TID 8). 727 bytes result sent to driver
16/11/19 00:34:56 INFO scheduler.TaskSetManager: Finished task 8.0 in stage 0.0 (TID 8) in 113 ms on localhost (7/10)
16/11/19 00:34:56 INFO executor.Executor: Finished task 6.0 in stage 0.0 (TID 6). 727 bytes result sent to driver
16/11/19 00:34:56 INFO scheduler.TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 134 ms on localhost (8/10)
16/11/19 00:34:56 INFO executor.Executor: Finished task 9.0 in stage 0.0 (TID 9). 727 bytes result sent to driver
16/11/19 00:34:56 INFO scheduler.TaskSetManager: Finished task 9.0 in stage 0.0 (TID 9) in 136 ms on localhost (9/10)
16/11/19 00:34:56 INFO executor.Executor: Finished task 7.0 in stage 0.0 (TID 7). 727 bytes result sent to driver
16/11/19 00:34:56 INFO scheduler.TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 157 ms on localhost (10/10)
16/11/19 00:34:56 INFO scheduler.DAGScheduler: Stage 0 (reduce at SparkPi.scala:35) finished in 0.933 s
16/11/19 00:34:56 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool default
16/11/19 00:34:56 INFO scheduler.DAGScheduler: Job 0 finished: reduce at SparkPi.scala:35, took 1.468791 s
Pi is roughly 3.142804
16/11/19 00:34:56 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/stage/kill,null}
16/11/19 00:34:56 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/,null}
16/11/19 00:34:56 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/static,null}
16/11/19 00:34:56 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/executors/threadDump/json,null}
16/11/19 00:34:56 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/executors/threadDump,null}
16/11/19 00:34:56 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/executors/json,null}
16/11/19 00:34:56 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/executors,null}
16/11/19 00:34:56 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/environment/json,null}
16/11/19 00:34:56 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/environment,null}
16/11/19 00:34:56 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/storage/rdd/json,null}
16/11/19 00:34:56 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/storage/rdd,null}
16/11/19 00:34:56 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/storage/json,null}
16/11/19 00:34:56 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/storage,null}
16/11/19 00:34:56 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/pool/json,null}
16/11/19 00:34:56 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/pool,null}
16/11/19 00:34:56 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/stage/json,null}
16/11/19 00:34:56 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/stage,null}
16/11/19 00:34:56 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/json,null}
16/11/19 00:34:56 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages,null}
16/11/19 00:34:56 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/jobs/job/json,null}
16/11/19 00:34:56 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/jobs/job,null}
16/11/19 00:34:56 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/jobs/json,null}
16/11/19 00:34:56 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/jobs,null}
16/11/19 00:34:56 INFO ui.SparkUI: Stopped Spark web UI at http://node02:4040
16/11/19 00:34:56 INFO scheduler.DAGScheduler: Stopping DAGScheduler
16/11/19 00:34:57 INFO spark.MapOutputTrackerMasterActor: MapOutputTrackerActor stopped!
16/11/19 00:34:57 INFO storage.MemoryStore: MemoryStore cleared
16/11/19 00:34:57 INFO storage.BlockManager: BlockManager stopped
16/11/19 00:34:57 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
16/11/19 00:34:57 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/11/19 00:34:57 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/11/19 00:34:57 INFO spark.SparkContext: Successfully stopped SparkContext
16/11/19 00:34:57 INFO Remoting: Remoting shut down
16/11/19 00:34:57 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
      

Run in interactive mode via the Python API:

# Run Spark locally with 2 worker threads (ideally, set this to the number of CPU cores on the machine)
[root@node02 bin]# pyspark --master local[2]
Python 2.6.6 (r266:84292, Jan 22 2014, 09:42:36) 
[GCC 4.4.7 20120313 (Red Hat 4.4.7-4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
16/11/19 00:38:55 INFO spark.SparkContext: Spark configuration:
spark.app.name=PySparkShell
spark.deploy.recoveryMode=ZOOKEEPER
spark.deploy.zookeeper.dir=/spark
spark.deploy.zookeeper.url=node01:2181,node02:2181,node03:2181
spark.eventLog.dir=hdfs://mycluster/user/spark/eventlog
spark.eventLog.enabled=true
spark.executor.memory=4g
spark.logConf=true
spark.master=local[2]
spark.rdd.compress=True
spark.scheduler.mode=FAIR
spark.serializer.objectStreamReset=100
spark.yarn.historyServer.address=http://node04:19888
spark.yarn.submit.file.replication=3
16/11/19 00:38:55 INFO spark.SecurityManager: Changing view acls to: root
16/11/19 00:38:55 INFO spark.SecurityManager: Changing modify acls to: root
16/11/19 00:38:55 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/11/19 00:38:56 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/11/19 00:38:56 INFO Remoting: Starting remoting
16/11/19 00:38:56 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@node02:47345]
16/11/19 00:38:56 INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriver@node02:47345]
16/11/19 00:38:56 INFO util.Utils: Successfully started service 'sparkDriver' on port 47345.
16/11/19 00:38:56 INFO spark.SparkEnv: Registering MapOutputTracker
16/11/19 00:38:56 INFO spark.SparkEnv: Registering BlockManagerMaster
16/11/19 00:38:56 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-local-20161119003856-0d19
16/11/19 00:38:56 INFO storage.MemoryStore: MemoryStore started with capacity 265.4 MB
16/11/19 00:38:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/11/19 00:38:57 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-7d1a1480-43a8-4195-a1f1-3909f5c8d02b
16/11/19 00:38:57 INFO spark.HttpServer: Starting HTTP Server
16/11/19 00:38:57 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/11/19 00:38:57 INFO server.AbstractConnector: Started [email protected]:56686
16/11/19 00:38:57 INFO util.Utils: Successfully started service 'HTTP file server' on port 56686.
16/11/19 00:38:57 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/11/19 00:38:57 INFO server.AbstractConnector: Started [email protected]:4040
16/11/19 00:38:57 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
16/11/19 00:38:57 INFO ui.SparkUI: Started SparkUI at http://node02:4040
16/11/19 00:38:57 INFO scheduler.FairSchedulableBuilder: Created default pool default, schedulingMode: FIFO, minShare: 0, weight: 1
16/11/19 00:38:57 INFO util.AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@node02:47345/user/HeartbeatReceiver
16/11/19 00:38:58 INFO netty.NettyBlockTransferService: Server created on 49996
16/11/19 00:38:58 INFO storage.BlockManagerMaster: Trying to register BlockManager
16/11/19 00:38:58 INFO storage.BlockManagerMasterActor: Registering block manager localhost:49996 with 265.4 MB RAM, BlockManagerId(<driver>, localhost, 49996)
16/11/19 00:38:58 INFO storage.BlockManagerMaster: Registered BlockManager
16/11/19 00:38:59 WARN shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
16/11/19 00:38:59 INFO scheduler.EventLoggingListener: Logging events to hdfs://mycluster/user/spark/eventlog/local-1479487137931
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.2.0
      /_/

Using Python version 2.6.6 (r266:84292, Jan 22 2014 09:42:36)
SparkContext available as sc.
>>> 
      

You can also run applications written in Python:

$ mkdir -p /usr/lib/spark/examples/python
$ tar zxvf /usr/lib/spark/lib/python.tar.gz -C /usr/lib/spark/examples/python
$ ./bin/spark-submit examples/python/pi.py 10
      

You can also run the Spark shell in interactive mode:

# Run Spark locally with 2 worker threads (ideally, set this to the number of CPU cores on the machine)
$ ./bin/spark-shell --master local[2]

Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  `_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.2.0
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_71)
Type in expressions to have them evaluated.
Type :help for more information.

Spark context available as sc.

scala> val lines = sc.textFile("data.txt")
scala> val lineLengths = lines.map(s => s.length)
scala> val totalLength = lineLengths.reduce((a, b) => a + b)
      

The above is a simple RDD example that creates a basic RDD from an external file. To run it, make sure the file data.txt exists in the current directory.
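To illustrate what that pipeline computes, here is the same logic in plain Python, with sample lines standing in for data.txt (no Spark required):

```python
# Sample lines standing in for the contents of data.txt.
lines = ["hello world", "spark ha on cdh"]

# Equivalent of map(s => s.length):
line_lengths = [len(s) for s in lines]

# Equivalent of reduce((a, b) => a + b):
total_length = sum(line_lengths)

print(total_length)  # 11 + 15 = 26
```

In Spark the same two steps run lazily and in parallel across partitions; the result is identical.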

4.2 Running on a cluster

Standalone mode

In this mode, only one node needs the Spark components installed. Run the word-count examples below through spark-shell.

An example reading from HDFS:

$ echo "hello world" >test.txt
$ hadoop fs -put test.txt /tmp

$ spark-shell
scala> val file = sc.textFile("hdfs://mycluster/tmp/test.txt")
scala> file.count()
      

A more complex example, a MapReduce-style word count:

$ spark-shell
scala> val file = sc.textFile("hdfs://mycluster/tmp/test.txt")
scala> val counts = file.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
scala> counts.saveAsTextFile("hdfs://mycluster/tmp/output")
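The flatMap/map/reduceByKey pipeline above corresponds to this plain-Python sketch (the input string stands in for test.txt):

```python
from collections import Counter

line = "hello world"  # contents of test.txt from the earlier step

# flatMap(line => line.split(" ")): one word per element
words = line.split(" ")

# map(word => (word, 1)) then reduceByKey(_ + _): per-word counts
counts = Counter(words)

print(sorted(counts.items()))  # [('hello', 1), ('world', 1)]
```

The pairs printed here match the (hello,1) and (world,1) records Spark writes to the output directory.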
      

After it completes, you can view the file contents under hdfs://mycluster/tmp/output:

[root@node01 spark]# hadoop fs -cat /tmp/output/part-00000
(hello,1)
(world,1)
      

spark-shell also accepts further arguments, for example to connect to a specific master or to set the number of cores:

$ spark-shell --master spark://node04:7077 --cores 2
scala>
      

You can also add jars:

$ spark-shell --master spark://node04:7077 --cores 2 --jars code.jar
scala>
      

Run spark-shell --help to see the full list of options.

You can also use spark-submit to run the SparkPi program in standalone mode:

$ spark-submit --class org.apache.spark.examples.SparkPi --deploy-mode client --master spark://node04:7077 /usr/lib/spark/lib/spark-examples-1.2.0-cdh5.3.0-hadoop2.5.0-cdh5.3.0.jar 10
      

Spark on YARN

Run the SparkPi program in YARN client mode:

spark-submit --class org.apache.spark.examples.SparkPi --deploy-mode client --master yarn /usr/lib/spark/lib/spark-examples-1.2.0-cdh5.3.0-hadoop2.5.0-cdh5.3.0.jar 10
      

Run the SparkPi program in YARN cluster mode:

spark-submit --class org.apache.spark.examples.SparkPi --deploy-mode cluster --master yarn /usr/lib/spark/lib/spark-examples-1.2.0-cdh5.3.0-hadoop2.5.0-cdh5.3.0.jar 10
      

When running on the YARN cluster, you can manually copy the spark-assembly jar to HDFS and then set the SPARK_JAR environment variable:

$ hdfs dfs -mkdir -p /user/spark/share/lib
$ hdfs dfs -put $SPARK_HOME/lib/spark-assembly.jar  /user/spark/share/lib/spark-assembly.jar

$ SPARK_JAR=hdfs://<nn>:<port>/user/spark/share/lib/spark-assembly.jar
      

References:
http://blog.csdn.net/furenjievip/article/details/44003467
http://blog.csdn.net/durie_/article/details/50789560