
Production Spark on YARN with Kerberos: the workflow

Author: 若泽大数据 (Ruozedata)

1. hadooptool131

In CDH, deploy the HDFS, YARN, and HBase gateway roles on this node (the real purpose is simply to get the client configuration files onto it).
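
If the gateway roles deployed correctly, the client configuration files should appear in the usual CDH locations; a quick check (paths assumed to be the CDH defaults):

[root@hadooptool131 ~]# ls /etc/hadoop/conf/core-site.xml /etc/hadoop/conf/hdfs-site.xml /etc/hadoop/conf/yarn-site.xml
[root@hadooptool131 ~]# ls /etc/hbase/conf/hbase-site.xml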

2. keytab file permissions (on every node)

[root@hadooptool131 ~]# chmod 777 /etc/kerberos/*.keytab
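
To confirm the keytab actually contains the principal you plan to submit with, list its entries (path as used in the chmod above):

[root@hadooptool131 ~]# klist -kt /etc/kerberos/hdfs.keytab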

3. Point Apache Spark at the CDH Hadoop configuration directory

[root@hadooptool131 ~]# vi /opt/software/spark/spark-2.4.0-bin-hadoop2.6/conf/spark-env.sh

HADOOP_CONF_DIR=/etc/hadoop/conf

YARN_CONF_DIR=/etc/hadoop/conf

SPARK_HOME=/opt/software/spark/spark

SPARK_CONF_DIR=${SPARK_HOME}/conf
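
Note that SPARK_HOME is set to /opt/software/spark/spark while the file being edited lives under the versioned directory; presumably /opt/software/spark/spark is a symlink to that release. A sketch of creating it, if it does not already exist:

[root@hadooptool131 ~]# ln -s /opt/software/spark/spark-2.4.0-bin-hadoop2.6 /opt/software/spark/spark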

4. Symlink hbase-site.xml into the Hadoop configuration directory

[root@hadooptool131 ~]# ln -s /etc/hbase/conf/hbase-site.xml /etc/hadoop/conf/hbase-site.xml
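
With hbase-site.xml visible under HADOOP_CONF_DIR, Spark jobs submitted to YARN can pick up the HBase client settings. A quick sanity check that the link resolves:

[root@hadooptool131 ~]# ls -l /etc/hadoop/conf/hbase-site.xml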

5. Set the permissions on the HDFS /user directory

[root@hadooptool131 ~]# kinit hdfs

Password for [email protected]:

[root@hadooptool131 ~]# hdfs dfs -chmod -R 777 /user
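
The hdfs ticket obtained above is only needed for this one-off chmod; it can be discarded afterwards, which is also why klist in step 8 reports an empty credentials cache:

[root@hadooptool131 ~]# kdestroy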

6. CDH YARN configuration

min.user.id : 0

banned.users : yarn mapred bin

allowed.system.users : nobody impala hive llama hdfs hbase
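
These three items are the LinuxContainerExecutor settings that Cloudera Manager writes into container-executor.cfg on each NodeManager; min.user.id=0 lets low-uid system accounts (here, hdfs) start containers, which the usual default of 1000 would reject. A rough sketch of the resulting file, with values taken from the list above (the group line is an assumed CDH default):

yarn.nodemanager.linux-container-executor.group=yarn
banned.users=yarn,mapred,bin
min.user.id=0
allowed.system.users=nobody,impala,hive,llama,hdfs,hbase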

7. Restart the YARN service so the configuration takes effect

8. Submit the jar

When submitting Spark jobs to YARN with Kerberos enabled, the principal and keytab parameters are required.

[root@hadooptool131 ~]# klist

klist: No credentials cache found (filename: /tmp/krb5cc_0)

Submitting the bundled SparkPi example:

${SPARK_HOME}/bin/spark-submit \
--master yarn \
--deploy-mode cluster \
--queue ruozedata \
--driver-memory 1G \
--num-executors 1 \
--executor-memory 1G \
--executor-cores 1 \
--principal [email protected] \
--keytab /etc/kerberos/hdfs.keytab \
--class org.apache.spark.examples.SparkPi \
/root/spark-2.4.2-bin-hadoop2.6/examples/jars/spark-examples_2.12-2.4.2.jar           
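
After the submission finishes, the run can be verified from the same client; the application ID below is a placeholder to be replaced with the one printed by spark-submit:

[root@hadooptool131 ~]# yarn application -list -appStates FINISHED
[root@hadooptool131 ~]# yarn logs -applicationId application_xxxxxxxxxx_xxxx | grep "Pi is roughly"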

Standard production ETL submit parameters (worth bookmarking if you are interested):

${SPARK_HOME}/bin/spark-submit \
--master yarn \
--deploy-mode cluster \
--queue ruozedata \
--driver-memory 4G \
--num-executors 10 \
--executor-memory 6G \
--executor-cores 3 \
--principal [email protected] \
--keytab /etc/kerberos/hdfs.keytab \
--class com.ruozedata.homepage.CarrierAmount \
--conf "spark.yarn.archive=hdfs://ruozedata/spark/sparkjar20220324.zip" \
--conf "spark.ui.showConsoleProgress=false" \
--conf "spark.yarn.am.memory=1024m" \
--conf "spark.yarn.am.memoryOverhead=1024m" \
--conf "spark.driver.memoryOverhead=1024m" \
--conf "spark.executor.memoryOverhead=1024m" \
--conf "spark.yarn.am.extraJavaOptions=-XX:+UseG1GC -XX:MaxGCPauseMillis=300 -XX:InitiatingHeapOccupancyPercent=50 -XX:G1ReservePercent=20 -XX:+DisableExplicitGC -Dcdh.version=5.16.1" \
--conf "spark.driver.extraJavaOptions=-XX:+UseG1GC -XX:MaxGCPauseMillis=300 -XX:InitiatingHeapOccupancyPercent=50 -XX:G1ReservePercent=20 -XX:+DisableExplicitGC -Dcdh.version=5.16.1" \
--conf "spark.executor.extraJavaOptions=-XX:+UseG1GC -XX:MaxGCPauseMillis=300 -XX:InitiatingHeapOccupancyPercent=50 -XX:G1ReservePercent=20 -XX:+DisableExplicitGC -Dcdh.version=5.16.1" \
--conf "spark.streaming.backpressure.enabled=true" \
--conf "spark.streaming.kafka.maxRatePerPartition=1250" \
--conf "spark.locality.wait=10s" \
--conf "spark.shuffle.consolidateFiles=true" \
--conf "spark.executor.heartbeatInterval=360000" \
--conf "spark.network.timeout=420000" \
--conf "spark.serializer=org.apache.spark.serializer.KryoSerializer" \
--conf "spark.hadoop.fs.hdfs.impl.disable.cache=true" \
/opt/maintaim/core/ruozedata-1.0.jar           
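
The spark.yarn.archive referenced above must already exist on HDFS before the job is submitted. A minimal sketch of building and uploading such an archive, assuming the default filesystem is the ruozedata nameservice and a valid Kerberos ticket is available:

[root@hadooptool131 ~]# zip -q -j /tmp/sparkjar20220324.zip ${SPARK_HOME}/jars/*
[root@hadooptool131 ~]# hdfs dfs -mkdir -p /spark
[root@hadooptool131 ~]# hdfs dfs -put -f /tmp/sparkjar20220324.zip /spark/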
