Installing Spark in Standalone Mode

Preface: This is the opening article of my Spark learning series. In it we set up a fully distributed Spark cluster using Spark's built-in Standalone cluster manager. I hope you find it useful.

1. Upload the Spark package to the server and extract it to /opt

tar -zxvf spark-1.6.2-bin-hadoop2.6.tgz -C /opt/

# the tarball extracts to /opt/spark-1.6.2-bin-hadoop2.6; rename it so the path matches SPARK_HOME below
mv /opt/spark-1.6.2-bin-hadoop2.6 /opt/spark

2. Set the environment variables

vi /etc/profile

export SPARK_HOME=/opt/spark

export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin

source /etc/profile
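
After sourcing the profile, a quick sanity check with standard shell commands confirms the new variables are in effect:

echo $SPARK_HOME     # should print /opt/spark
which spark-shell    # should resolve to /opt/spark/bin/spark-shell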

3. Configure spark-env.sh

a. cd /opt/spark/conf

b. cp spark-env.sh.template spark-env.sh

c. vi spark-env.sh

export JAVA_HOME=/opt/jdk                        # JDK install path
export SCALA_HOME=/opt/scala                     # Scala install path
export SPARK_MASTER_IP=master                    # hostname of the master node (SPARK_MASTER_IP in Spark 1.x)
export SPARK_MASTER_PORT=7077                    # port workers connect to on the master
export SPARK_WORKER_CORES=1                      # CPU cores each worker offers to applications
export SPARK_WORKER_INSTANCES=1                  # number of worker processes per node
export SPARK_WORKER_MEMORY=3g                    # memory each worker offers to applications
export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop    # lets Spark pick up the HDFS/YARN client configs

4. Configure the slaves file

a. mv slaves.template slaves

b. vi slaves and list the hostname of every worker node, one per line:

master

slave01

slave02
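
Every node needs the same Spark installation and environment settings before the cluster can start. A minimal sketch for distributing them, assuming passwordless SSH from master and that the JDK and Scala already sit at the same paths on slave01 and slave02:

scp -r /opt/spark slave01:/opt/
scp -r /opt/spark slave02:/opt/
scp /etc/profile slave01:/etc/
scp /etc/profile slave02:/etc/

Afterwards, run source /etc/profile on each slave.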

5. Start and verify Spark

a. Start all daemons from the master node, then open the master web UI at http://master:8080 and check that all three workers have registered:

$SPARK_HOME/sbin/start-all.sh

b. Launch spark-shell against the cluster to confirm that jobs can be submitted:

spark-shell --master spark://master:7077
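
As a final end-to-end smoke test, the SparkPi example bundled with the binary distribution can be submitted to the cluster. A minimal sketch, assuming the examples jar under $SPARK_HOME/lib keeps its default 1.6.2 name (check the exact filename on your machine):

# submit the bundled SparkPi example to the standalone master
spark-submit --master spark://master:7077 \
  --class org.apache.spark.examples.SparkPi \
  $SPARK_HOME/lib/spark-examples-1.6.2-hadoop2.6.0.jar 10

A "Pi is roughly 3.14..." line in the output means the cluster executed a job end to end.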