
Big Data Troubleshooting

errors

  • Spark
    • Spark version changed from 2.4.2 -> 2.2.0: conflicting dependency versions
  • JAVA_HOME is not set when starting the Spark cluster
  • hadoop cluster: jps prints nothing on one server
  • IDEA
    • internal java compiler error or java compiler failed
  • kafka
    • Connection to node -1 could not be established. Broker may not be available
  • hadoop
    • namenode stuck in safe mode
    • Could not locate executable null \bin\winutils.exe in the hadoop binaries
    • hdfs DataNode space cleanup
  • spark sql
    • join operations
    • timeout
  • file permissions
  • Local testing in IDEA - OutOfMemoryError: GC overhead limit exceeded
  • hdfs load balancing

Spark

Spark version changed from 2.4.2 -> 2.2.0: conflicting dependency versions

Caused by: com.fasterxml.jackson.databind.JsonMappingException: Incompatible Jackson version: 2.9.4

Approach

Delete com.fasterxml from the local Maven repository, check which dependencies in the project pull in the conflicting versions, and remove those dependencies from the pom.
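
To see which dependencies drag in the conflicting Jackson artifacts before editing the pom, the Maven dependency tree is usually enough; a minimal sketch (the -Dincludes pattern is illustrative, widen or narrow it to match the groupIds shown in the error):

mvn dependency:tree -Dincludes=com.fasterxml.jackson.core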

JAVA_HOME is not set when starting the Spark cluster

When running sbin/start-all.sh from the Spark root directory, the console prints:

slave107 JAVA_HOME not set

Append a line to spark-config.sh under the sbin directory:

export JAVA_HOME=/home/hadoop/soft/jdk

hadoop cluster: jps prints nothing on one server

When the cluster starts, a folder named hsperfdata_username is created under /tmp, and the files inside it are named after the PIDs of the Java processes. So when jps lists the current processes, it is really just walking the file names under /tmp/hsperfdata_username and printing them. If the owner and group of /tmp/hsperfdata_username do not match the user that starts the processes, the processes have no permission to write into /tmp/hsperfdata_username after they start, the directory stays empty, and jps naturally shows nothing.

Fix:

Use chown to change the owner and group of /tmp/hsperfdata_dumz and restart the cluster, or delete everything under /tmp and restart the cluster.
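
For example (assuming, as above, that the daemons run as the user dumz; substitute the real account and group):

ls -ld /tmp/hsperfdata_dumz                # check the current owner and group
chown -R dumz:dumz /tmp/hsperfdata_dumz    # hand the directory back to the daemon user, then restart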

IDEA

internal java compiler error or java compiler failed

1. Project JDK setting

(screenshot not preserved)

2. Per-module JDK setting

(screenshot not preserved)

3. Global setting: Settings → Java Compiler

(screenshot not preserved)

kafka

Connection to node -1 could not be established. Broker may not be available

Analysis: the connection is not being established.

Fix: you should first describe the topic so that you can see what partitions and broker ids are available.

https://stackoverflow.com/questions/49947075/kafka-console-consumer-connection-to-node-1-could-not-be-established-broker?answertab=votes
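
A sketch of that describe step (topic name, host and port are placeholders; older clusters take --zookeeper, newer ones --bootstrap-server):

bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
bin/kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic test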

hadoop

namenode stuck in safe mode

The log shows:

Resources are low on NN. Please add or free up more resources then turn off safe mode manually

Approach: check file system disk usage; the /home directory is full.

Check the configuration files: core-site.xml, hdfs-site.xml, hdfs-default.xml

[core-site.xml]

hadoop.tmp.dir = /home/yk/hadoop/hadoopData

[hdfs-default.xml]

dfs.name.dir = ${hadoop.tmp.dir}/dfs/name

dfs.data.dir = ${hadoop.tmp.dir}/dfs/data

Modify core-site.xml and hdfs-site.xml:

Remove hadoop.tmp.dir from core-site.xml.

Add dfs.name.dir and dfs.data.dir to hdfs-site.xml.

Note: point dfs.name.dir and dfs.data.dir at different directories.

Root cause: the name and data directories sat under the same path, so when space ran out the namenode metadata could not be loaded.

df - report file system disk space usage
-h, --human-readable   -> print sizes in human readable format (e.g., 1K 234M 2G)
-l, --local            -> limit listing to local file systems
           
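Once space has been freed up (or the directories moved), the namenode can be taken out of safe mode by hand, as the log message suggests; for example:

hdfs dfsadmin -safemode get     # check whether safe mode is still on
hdfs dfsadmin -safemode leave   # turn it off manually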

Could not locate executable null \bin\winutils.exe in the hadoop binaries

// excerpt from org.apache.hadoop.util.Shell (Shell.java)
......
String home = System.getProperty("hadoop.home.dir");

// fall back to the system/user-global env variable
if (home == null) {
  home = System.getenv("HADOOP_HOME");
}
......

Fix: configure the environment variables

HADOOP_HOME=F:\soft\hadoop\hadoop-2.7.3

Path=%HADOOP_HOME%\bin
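
(This assumes winutils.exe is actually present under %HADOOP_HOME%\bin for the Hadoop version in use.)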

hdfs DataNode space cleanup

Inspect the HDFS file system and note when the data that needs cleaning was uploaded:

hdfs dfs -du -h /

Change into the directory:

/home/yk/hadoop/hadoopData/dfs/data/current/BP-369437989-172.16.2.106-1552725600401/current/finalized

Print / delete the intermediate data produced at a given time (for example, the failed job ran on Mar 24):

find . -maxdepth 3 -type f -newermt "Mar 24" -print

find . -maxdepth 3 -type f -newermt "Mar 24" -delete

spark sql

join operations

df1.join(df2) fails with:

org.apache.spark.sql.AnalysisException: Detected cartesian product for INNER join between logical plans

Fix: set spark.sql.crossJoin.enabled=true

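One way to pass the property is on spark-submit, here reusing the submit command from the timeout section below (it can equally be set on the SparkConf/SparkSession in code):

/home/hadoop/soft/spark/bin/spark-submit \
--master spark://172.16.2.106:7077 \
--conf spark.sql.crossJoin.enabled=true \
--class sql.cluster \
/home/hadoop/test/jars/sparkDemo-1.0-SNAPSHOT-jar-with-dependencies.jar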

timeout

The submit script:

/home/hadoop/soft/spark/bin/spark-submit \
--master spark://172.16.2.106:7077 \
--class sql.cluster \
/home/hadoop/test/jars/sparkDemo-1.0-SNAPSHOT-jar-with-dependencies.jar
           

It fails with:

Connection to slave106/172.16.2.106:33562 has been quiet for 120000 ms while there are outstanding requests. Assuming connection is dead; please adjust spark.network.timeout if this is wrong
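
As the message says, spark.network.timeout can be raised if the connection is in fact alive; for example (300s is an illustrative value, the default is the 120000 ms seen in the error):

/home/hadoop/soft/spark/bin/spark-submit \
--master spark://172.16.2.106:7077 \
--conf spark.network.timeout=300s \
--class sql.cluster \
/home/hadoop/test/jars/sparkDemo-1.0-SNAPSHOT-jar-with-dependencies.jar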
           

file permissions

Save the file locally:

(screenshot not preserved)

The file layout looks like this:

(screenshot not preserved)

Loading the file fails:

(screenshot not preserved)

Fix: val accessRDD = spark.sparkContext.textFile("sparkDemo/data/test_good/*")

Local testing in IDEA - OutOfMemoryError: GC overhead limit exceeded
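
This error means the JVM is spending nearly all of its time in GC with too little heap; a typical workaround for a local IDEA run (not necessarily the exact fix used here) is to enlarge the heap in the run configuration's VM options, for example -Xmx2g (value illustrative).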

hdfs load balancing

(screenshot not preserved)

Add the configuration:

(screenshot not preserved)

Start command:

(screenshot not preserved)
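
As an illustrative sketch of a typical setup (not necessarily the exact configuration and command used here): the per-DataNode balancer bandwidth is capped via dfs.datanode.balance.bandwidthPerSec in hdfs-site.xml (or set at runtime), and the balancer is then started with a utilization threshold:

hdfs dfsadmin -setBalancerBandwidth 10485760   # cap balancer traffic at ~10 MB/s per DataNode (illustrative)
sbin/start-balancer.sh -threshold 10           # rebalance until every DataNode is within 10% of the average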
