Environment preparation
Three virtual machines: spark1, spark2, spark3
Passwordless SSH login has already been set up among the three machines
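If passwordless login still needs to be set up, the usual approach is ssh-keygen plus ssh-copy-id (a minimal sketch, assuming the root user and the default key path; run the ssh-copy-id lines once per target host):
[root@spark1 ~]# ssh-keygen -t rsa (press Enter at every prompt to accept the defaults)
[root@spark1 ~]# ssh-copy-id root@spark1 (copy to itself too, so local ssh also works)
[root@spark1 ~]# ssh-copy-id root@spark2
[root@spark1 ~]# ssh-copy-id root@spark3
[root@spark1 ~]# ssh spark2 hostname (should print spark2 without asking for a password)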
I. Configure local (single-machine) mode (on spark1; usable right after extraction)
1. Upload to Linux (using spark-1.6.1-bin-hadoop2.6.tgz as the example)
2. Extract the tarball
[root@spark1 soft]# tar -zxvf spark-1.6.1-bin-hadoop2.6.tgz
3. Test
[root@spark1 spark-1.6.1]# ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master local[1] ./lib/spark-examples-1.6.1-hadoop2.6.0.jar 100
Here SparkPi is the example program to run (it estimates the value of π); --master selects the mode, where local means local mode and [1] means one thread; the trailing 100 is the argument passed to the program (the number of slices).
The output should contain a line like: Pi is roughly 3.1425304
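For an interactive check of local mode, the bundled spark-shell works as well (a quick sketch; 5050.0 is simply 1+2+...+100):
[root@spark1 spark-1.6.1]# ./bin/spark-shell --master local[1]
scala> sc.parallelize(1 to 100).sum() (prints res0: Double = 5050.0)
scala> :quit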
II. Configure standalone cluster mode (Spark's built-in master/worker architecture)
1. Upload to Linux (spark1, spark2, spark3)
2. Extract the tarball
[root@spark1 soft]# tar -zxvf spark-1.6.1-bin-hadoop2.6.tgz
3. Modify the configuration files (on all three nodes)
[root@spark1 conf]# cp slaves.template slaves
[root@spark1 conf]# vi slaves (lists the machines that run workers)
Add the following:
--------------------------------------------------
spark2
spark3
--------------------------------------------------
This way spark1 runs only the master, not a worker.
[root@spark1 conf]# cp spark-env.sh.template spark-env.sh
[root@spark1 conf]# vi spark-env.sh
Add the following:
--------------------------------------------------
export JAVA_HOME=/opt/soft/jdk1.8.0_11
export SPARK_MASTER_IP=spark1
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=2
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=1g
Note: to enable master high availability, configure this on each master node; every master that starts will register itself with ZooKeeper.
Remember to set up passwordless login from each master node to the other nodes, change export SPARK_MASTER_IP
to the desired master host, then start it with sbin/start-master.sh. (Add the line below to this file; skip it if you do not need HA.)
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=192.168.13.135:2181,192.168.13.136:2181,192.168.13.137:2181"
-------------------------------------------------------------
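Since all three nodes need the same configuration, the files edited on spark1 can simply be copied out instead of editing each node by hand (a sketch assuming the same /opt/soft/spark-1.6.1 path on every node):
[root@spark1 conf]# scp slaves spark-env.sh root@spark2:/opt/soft/spark-1.6.1/conf/
[root@spark1 conf]# scp slaves spark-env.sh root@spark3:/opt/soft/spark-1.6.1/conf/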
4. Run
[root@spark1 spark-1.6.1]# ./sbin/start-all.sh
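To verify the cluster came up, jps should show a Master process on spark1 and a Worker on spark2 and spark3 (a quick check, assuming the JDK's jps is on the PATH of each node):
[root@spark1 spark-1.6.1]# jps (expect Master)
[root@spark1 spark-1.6.1]# ssh spark2 jps (expect Worker)
[root@spark1 spark-1.6.1]# ssh spark3 jps (expect Worker)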
5. Test
1) client mode
[root@spark1 spark-1.6.1]# ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://spark1:7077 --executor-memory 1G --total-executor-cores 1 ./lib/spark-examples-1.6.1-hadoop2.6.0.jar 100
Here --master spark://spark1:7077 selects standalone mode.
2) cluster mode (the result is visible in the web UI at spark1:8080)
[root@spark1 spark-1.6.1]# ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://spark1:7077 --deploy-mode cluster --supervise --executor-memory 1G --total-executor-cores 1 ./lib/spark-examples-1.6.1-hadoop2.6.0.jar 100
Note: if memory is insufficient, reduce the 1G, e.g., to 512M.
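In cluster mode the driver runs on a worker, so spark-submit returns right away; the submission ID it prints can be used to query or stop the driver later (a sketch; driver-20160101000000-0000 is a made-up example ID, substitute the one printed at submit time):
[root@spark1 spark-1.6.1]# ./bin/spark-submit --master spark://spark1:7077 --status driver-20160101000000-0000
[root@spark1 spark-1.6.1]# ./bin/spark-submit --master spark://spark1:7077 --kill driver-20160101000000-0000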
III. Configure YARN cluster mode
1. YARN mode does not need the standalone cluster running, so stop it
[root@spark1 spark-1.6.1]# ./sbin/stop-all.sh
2. Upload to Linux (spark1, spark2, spark3)
3. Extract the tarball
4. Modify the configuration file (spark-env.sh)
-----------------------------------------------------------------------------------
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_HOME=/opt/soft/spark-1.6.1
export SPARK_JAR=/opt/soft/spark-1.6.1/lib/spark-assembly-1.6.1-hadoop2.6.0.jar
export PATH=$SPARK_HOME/bin:$PATH
Note: to enable master high availability, configure this on each master node; every master that starts will register itself with ZooKeeper.
Remember to set up passwordless login from the master node to the other nodes, change export SPARK_MASTER_IP to the corresponding node, then start it with sbin/start-master.sh. For spark2, for example, spark2:8080 will then show its status as standby.
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=192.168.13.135:2181,192.168.13.136:2181,192.168.13.137:2181"
-----------------------------------------------------------------------------------
HADOOP_HOME must already be set in the environment variables.
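A quick way to confirm it (the /opt/soft/hadoop-2.6.0 path below is only an example; add the export to /etc/profile only if the first command prints nothing):
[root@spark1 ~]# echo $HADOOP_HOME (should print the Hadoop install path)
[root@spark1 ~]# echo 'export HADOOP_HOME=/opt/soft/hadoop-2.6.0' >> /etc/profile (example path)
[root@spark1 ~]# source /etc/profile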
5. Test
First start ZooKeeper, HDFS, and YARN
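The exact commands depend on the installation; a typical sequence looks like this (a sketch assuming ZooKeeper is installed on all three nodes and the Hadoop sbin scripts are on the PATH):
[root@spark1 ~]# zkServer.sh start (run on spark1, spark2, and spark3)
[root@spark1 ~]# start-dfs.sh (starts HDFS: NameNode and DataNodes)
[root@spark1 ~]# start-yarn.sh (starts YARN: ResourceManager and NodeManagers)
[root@spark1 ~]# jps (confirm NameNode and ResourceManager are up)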
1) client mode
[root@spark1 spark-1.6.1]# ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client --executor-memory 1G --num-executors 1 ./lib/spark-examples-1.6.1-hadoop2.6.0.jar 100
2) cluster mode (the result is visible in the web UI at spark1:8088)
[root@spark1 spark-1.6.1]# ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-cluster --executor-memory 1G --num-executors 1 ./lib/spark-examples-1.6.1-hadoop2.6.0.jar 100
Note: if memory is insufficient, reduce the 1G, e.g., to 512M.
A possible error when memory is insufficient: WARN cluster.YarnScheduler: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
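When a YARN job hangs or fails like this, the application logs usually show the cause; the application ID comes from the 8088 UI or the spark-submit output (application_1450000000000_0001 below is a made-up example):
[root@spark1 ~]# yarn application -list (find the application ID)
[root@spark1 ~]# yarn logs -applicationId application_1450000000000_0001 (requires YARN log aggregation to be enabled)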