
Installing Anaconda3 on Windows to Run Spark

1. Preparation

1.1 Required software:

Anaconda3-5.0.0-Windows-x86_64

hadoop-2.7.4

jdk1.8+

spark-2.2.0-bin-hadoop2.7

1.2 Download the software

Anaconda official download page: https://www.continuum.io/downloads

The current latest version is Python 3.6, and the default download is also Python 3.6. Baidu Netdisk download link: http://pan.baidu.com/s/1jIePjPc, password: robu. Of course, you can also download the latest Anaconda3 from the official site and then set it up for Python 3.6 as needed.


Hadoop official download page: http://hadoop.apache.org/releases.html


Spark official download page: http://spark.apache.org/downloads.html


JDK official download page: http://www.oracle.com/technetwork/java/javase/downloads/index.html

2. Install the software and configure Windows environment variables

The Anaconda installation is fairly simple and mostly consists of clicking Next. To avoid unnecessary trouble, it is best to keep the default installation path. The installation steps are:

Double-click the installer to launch the setup program.


Click I Agree to proceed to the next step.


Click Next to continue.


If the system has only one user, the default first option is fine; if there are multiple users who all need Anaconda, choose the second option.


To avoid unnecessary trouble later, installing to the default path is recommended; it takes roughly 1.8 GB of disk space.


The installation takes a while; just wait for it to finish.


That completes the installation. You can uncheck "Learn more about Anaconda Cloud" and "Learn more about Anaconda Support", then click "Finish".

Install jdk1.8+ to its default path as well; hadoop-2.7.4 and spark-2.2.0-bin-hadoop2.7 can be extracted to any drive.

3. Configure Windows environment variables (Hadoop/Spark/Java)

Java environment variables:


Hadoop environment variables:


Spark environment variables:


Configure Path:


After the above steps, just keep clicking "OK", and the environment variables are configured. A sketch of typical values is shown below.
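The screenshots of these settings are not reproduced here, so the following is only a sketch of typical values. The JDK path is an example assuming a default jdk1.8.0_144 installation, and the Hadoop/Spark paths assume both archives were extracted to drive E:; adjust all of them to match your machine.

JAVA_HOME   = C:\Program Files\Java\jdk1.8.0_144       (example JDK install directory)
HADOOP_HOME = E:\hadoop-2.7.4                          (directory hadoop-2.7.4 was extracted to)
SPARK_HOME  = E:\spark-2.2.0-bin-hadoop2.7             (directory Spark was extracted to)
Path        = ...;%JAVA_HOME%\bin;%HADOOP_HOME%\bin;%SPARK_HOME%\bin

SPARK_HOME in particular must be set, because the Python startup code later in this post reads it from the environment.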

4. Start Spark

Before starting, the winutils.exe file needs to be placed in the bin directory of hadoop-2.7.4; otherwise you will see an error like the following:

E:\>spark-shell
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
// :: ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
        at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:)
        at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:)
        at org.apache.hadoop.util.Shell.<clinit>(Shell.java:)
        at org.apache.hadoop.hive.conf.HiveConf$ConfVars.findHadoopBinary(HiveConf.java:)
        at org.apache.hadoop.hive.conf.HiveConf$ConfVars.<clinit>(HiveConf.java:)
        at org.apache.hadoop.hive.conf.HiveConf.<clinit>(HiveConf.java:)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:)
        at org.apache.spark.util.Utils$.classForName(Utils.scala:)
        at org.apache.spark.sql.SparkSession$.hiveClassesArePresent(SparkSession.scala:)
        at org.apache.spark.repl.Main$.createSparkSession(Main.scala:)
        at $line3.$read$$iw$$iw.<init>(<console>:)
        at $line3.$read$$iw.<init>(<console>:)
        at $line3.$read.<init>(<console>:)
        at $line3.$read$.<init>(<console>:)
        at $line3.$read$.<clinit>(<console>)
        at $line3.$eval$.$print$lzycompute(<console>:)
        at $line3.$eval$.$print(<console>:)
        at $line3.$eval.$print(<console>)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
        at java.lang.reflect.Method.invoke(Method.java:)
        at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:)
        at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:)
        at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:)
        at scala.tools.nsc.interpreter.IMain$WrappedRequest$$anonfun$loadAndRunReq$1.apply(IMain.scala:)
        at scala.reflect.internal.util.ScalaClassLoader$class.asContext(ScalaClassLoader.scala:)
        at scala.reflect.internal.util.AbstractFileClassLoader.asContext(AbstractFileClassLoader.scala:)
        at scala.tools.nsc.interpreter.IMain$WrappedRequest.loadAndRunReq(IMain.scala:)
        at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:)
        at scala.tools.nsc.interpreter.IMain.interpret(IMain.scala:)
        at scala.tools.nsc.interpreter.ILoop.interpretStartingWith(ILoop.scala:)
        at scala.tools.nsc.interpreter.ILoop.command(ILoop.scala:)
        at scala.tools.nsc.interpreter.ILoop.processLine(ILoop.scala:)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply$mcV$sp(SparkILoop.scala:)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:)
        at org.apache.spark.repl.SparkILoop$$anonfun$initializeSpark$1.apply(SparkILoop.scala:)
        at scala.tools.nsc.interpreter.IMain.beQuietDuring(IMain.scala:)
        at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:)
        at org.apache.spark.repl.SparkILoop.loadFiles(SparkILoop.scala:)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply$mcZ$sp(ILoop.scala:)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:)
        at scala.tools.nsc.interpreter.ILoop$$anonfun$process$1.apply(ILoop.scala:)
        at scala.reflect.internal.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:)
        at scala.tools.nsc.interpreter.ILoop.process(ILoop.scala:)
        at org.apache.spark.repl.Main$.doMain(Main.scala:)
        at org.apache.spark.repl.Main$.main(Main.scala:)
        at org.apache.spark.repl.Main.main(Main.scala)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
        at java.lang.reflect.Method.invoke(Method.java:)
        at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:)
        at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:)
        at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
           

Alternatively, you can skip installing Hadoop and just install winutils.exe on its own, but the environment variables must still be configured. Next, find "Jupyter Notebook" in the Start menu and double-click it to run.
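If the Start menu shortcut is missing, Jupyter can also be launched from an Anaconda Prompt (assuming Anaconda's Scripts directory is on the Path); by default it serves the notebook interface at http://localhost:8888 and opens it in your browser:

jupyter notebook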


Once it starts, you will see something like this:


The browser will then show a page like this:


Then, in the upper right corner of the page, click "New" and create a "Python 3" notebook, as shown:


Then enter the following code to start Spark:

import os
import sys

# Locate the Spark installation from the SPARK_HOME environment variable
spark_home = os.environ.get('SPARK_HOME', None)
if not spark_home:
    raise ValueError('SPARK_HOME environment variable is not set')
# Put Spark's Python bindings and the bundled py4j on the module search path
sys.path.insert(0, os.path.join(spark_home, 'python'))
sys.path.insert(0, os.path.join(spark_home, 'python/lib/py4j-0.10.4-src.zip'))
comm = os.path.join(spark_home, 'python/lib/py4j-0.10.4-src.zip')
print('start spark....', comm)
# Run PySpark's interactive shell bootstrap, which creates the SparkContext (sc) and SparkSession (spark)
exec(open(os.path.join(spark_home, 'python/pyspark/shell.py')).read())
           

As shown below:


At this point, Spark has started successfully.
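As a quick smoke test (not part of the original steps), you can run a trivial job in the next notebook cell. shell.py has already created the SparkContext as sc, so this should print 4950 in local mode:

# distribute the integers 0..99 as an RDD and sum them on Spark
rdd = sc.parallelize(range(100))
print(rdd.sum())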

5. Errors at startup and how to fix them

1. The error is as follows:

java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState':
  at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$reflect(SparkSession.scala:)
  at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:)
  at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:)
  at org.apache.spark.sql.SparkSession$Builder$$anonfun$getOrCreate$apply(SparkSession.scala:)
  at org.apache.spark.sql.SparkSession$Builder$$anonfun$getOrCreate$apply(SparkSession.scala:)
  at scala.collection.mutable.HashMap$$anonfun$foreach$apply(HashMap.scala:)
  at scala.collection.mutable.HashMap$$anonfun$foreach$apply(HashMap.scala:)
  at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:)
  at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:)
  at scala.collection.mutable.HashMap.foreach(HashMap.scala:)
  at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:)
  at org.apache.spark.repl.Main$.createSparkSession(Main.scala:)
  ...  elided
Caused by: java.lang.reflect.InvocationTargetException: java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveExternalCatalog':
  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:)
  at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:)
  at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$reflect(SparkSession.scala:)
  ...  more
Caused by: java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveExternalCatalog':
  at org.apache.spark.sql.internal.SharedState$.org$apache$spark$sql$internal$SharedState$$reflect(SharedState.scala:)
  at org.apache.spark.sql.internal.SharedState.<init>(SharedState.scala:)
  at org.apache.spark.sql.SparkSession$$anonfun$sharedState$apply(SparkSession.scala:)
  at org.apache.spark.sql.SparkSession$$anonfun$sharedState$apply(SparkSession.scala:)
  at scala.Option.getOrElse(Option.scala:)
  at org.apache.spark.sql.SparkSession.sharedState$lzycompute(SparkSession.scala:)
  at org.apache.spark.sql.SparkSession.sharedState(SparkSession.scala:)
  at org.apache.spark.sql.internal.SessionState.<init>(SessionState.scala:)
  at org.apache.spark.sql.hive.HiveSessionState.<init>(HiveSessionState.scala:)
  ...  more
Caused by: java.lang.reflect.InvocationTargetException: java.lang.reflect.InvocationTargetException: java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: ---------
  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:)
  at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:)
  at org.apache.spark.sql.internal.SharedState$.org$apache$spark$sql$internal$SharedState$$reflect(SharedState.scala:)
  ...  more
Caused by: java.lang.reflect.InvocationTargetException: java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: ---------
  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
  at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:)
  at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:)
  at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:)
  at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:)
  at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:)
  at org.apache.spark.sql.hive.HiveExternalCatalog.<init>(HiveExternalCatalog.scala:)
  ...  more
Caused by: java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: ---------
  at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:)
  at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:)
  ...  more
Caused by: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: ---------
  at org.apache.hadoop.hive.ql.session.SessionState.createRootHDFSDir(SessionState.java:)
  at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:)
  at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:)
  ...  more
           

The error message indicates that the temporary directory /tmp/hive is not writable:

So we need to update the permissions on E:/tmp/hive (I ran the spark-shell command from drive E:; if you run it from another drive, use that drive letter plus /tmp/hive instead). Run the following command:
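A command commonly used for this, assuming winutils.exe sits in %HADOOP_HOME%\bin (or is otherwise on the Path) and that spark-shell was run from drive E: so the scratch directory is E:\tmp\hive, is:

E:\> winutils.exe chmod 777 \tmp\hive

Mode 777 makes the directory writable by everyone, which is acceptable for a local development setup.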

Run spark-shell again, and Spark starts successfully. The Spark UI is now available at http://localhost:4040.

Reference blog posts for this fix: https://yq.aliyun.com/articles/96424?t=t1

http://www.cnblogs.com/czm1032851561/p/5751722.html
