Integrating Zeppelin 0.8.2 with Spark 2.1.0.cloudera2

Spark interpreter overview

http://zeppelin.apache.org/docs/latest/interpreter/spark.html

I recommend reading the official documentation at the address above.

Name             Class                 Description
%spark           SparkInterpreter      Creates a SparkContext and provides a Scala environment
%spark.pyspark   PySparkInterpreter    Provides a Python environment
%spark.r         SparkRInterpreter     Provides an R environment with SparkR support
%spark.sql       SparkSQLInterpreter   Provides a SQL environment
%spark.dep       DepInterpreter        Dependency loader

Zeppelin automatically creates the SparkContext, SQLContext, SparkSession, and ZeppelinContext instances for you; they are exposed as the variables sc, sqlContext, spark, and z.

Note that the Scala, Python, and R environments share the same SparkContext, SQLContext, and ZeppelinContext instances, as the sketch below shows.
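As a minimal sketch of how these built-ins appear in a note (the numbers and the temp view name demo are illustrative, not from the original post):

%spark
// Scala paragraph: sc, spark, and z are already defined by Zeppelin
val df = spark.range(0, 10).toDF("id")   // small demo DataFrame
df.createOrReplaceTempView("demo")       // register it for SQL paragraphs
z.show(df)                               // render with ZeppelinContext

%spark.sql
-- SQL paragraph: same SQLContext, so the temp view registered above is visible
select id, id * 2 as doubled from demo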

Spark interpreter configuration

Configuration can live in several places, for example in the conf/zeppelin-env.sh file, or as properties added to the Spark interpreter in the web UI. My environment has Hive with Sentry simple authentication enabled, so an identity setting is also needed (the HADOOP_USER_NAME property below).

# conf/zeppelin-env.sh
export MASTER=yarn-client                                   # submit to YARN in client mode
export ZEPPELIN_JAVA_OPTS="-Dmaster=yarn-client -Dspark.executor.memory=1g -Dspark.cores.max=4 -Dspark.executorEnv.PYTHONHASHSEED=0 -Dspark.sql.crossJoin.enabled=true"
export SPARK_HOME=/opt/cloudera/parcels/SPARK2/lib/spark2   # CDH Spark2 parcel
export SPARK_SUBMIT_OPTIONS="--driver-memory 512M --executor-memory 1G"
export SPARK_APP_NAME=zeppelin
export HADOOP_CONF_DIR=/bigdata/installer/zeppelin-0.8.2-bin-all/interpreter/spark/conf

The configuration files in the /bigdata/installer/zeppelin-0.8.2-bin-all/interpreter/spark/conf directory were copied over from /etc/hadoop/conf, plus a copy of /etc/hive/conf/hive-site.xml.


The main Spark interpreter settings in the web UI are as follows:

HADOOP_USER_NAME                hive
SPARK_HOME                      /bigdata/cloudera/parcels/SPARK2/lib/spark2
master                          yarn-client
spark.app.name                  zeppelin
spark.cores.max                 4
spark.executor.memory           1g
zeppelin.spark.useHiveContext   true
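With zeppelin.spark.useHiveContext set to true, a quick sanity check that the interpreter can actually reach Hive is a paragraph like the following (a minimal sketch; the databases listed will of course depend on your cluster):

%spark
// The built-in session is Hive-enabled, so this lists the Hive databases
spark.sql("show databases").show()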
           

The jar dependencies are as follows:

/opt/cloudera/parcels/SPARK2/lib/spark2/jars/jackson-databind-2.6.5.jar

/opt/cloudera/parcels/SPARK2/lib/spark2/jars/netty-all-4.0.42.Final.jar

Usage demo

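The original post shows the demo as a screenshot. As a rough sketch of what such a demo paragraph might look like (default.web_logs is a purely illustrative table name, not from the original post):

%spark
// Query a Hive table through the built-in session and render it
// (default.web_logs is an assumed, illustrative table name)
val logs = spark.sql("select * from default.web_logs limit 10")
z.show(logs)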
