
SparkCore Summary

1. RDD

1.1 Definition:

    1.1.1 Dataset: an RDD stores the computation logic over the data, not the data itself
    1.1.2 Distributed: both the data sources and the computation are distributed
    1.1.3 Resilient (illustrated in the sketch after this list):

        - Lineage (dependency chain): Spark can simplify the dependency chain through special handling such as checkpointing
        - Computation: Spark computes in memory, which gives high performance, and can flexibly fall back to disk
        - Partitioning: after Spark creates the default partitions, the partition count can be changed with specific operators
        - Fault tolerance: if an error occurs during execution, Spark performs fault-tolerant retries, recomputing lost data from lineage
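A minimal Scala sketch of these resilience features, assuming a local SparkContext; the checkpoint directory and sample data are placeholders:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.storage.StorageLevel

    val sc = new SparkContext(new SparkConf().setMaster("local[*]").setAppName("resilience"))
    sc.setCheckpointDir("/tmp/ckpt")  // placeholder directory for checkpoint files

    val rdd = sc.parallelize(1 to 100).map(_ * 2).filter(_ % 3 == 0)

    // Lineage: each RDD remembers how it was derived; lost partitions are recomputed from it
    println(rdd.toDebugString)

    // Memory/disk flexibility: keep data in memory, spill to disk when memory runs out
    rdd.persist(StorageLevel.MEMORY_AND_DISK)

    // Simplifying the dependency chain: checkpoint materializes the RDD and truncates its lineage
    rdd.checkpoint()
    rdd.count()  // an action triggers both the computation and the checkpoint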

    1.1.4 Key counts in Spark (see the sketch after this list):

        - Executor: the number of executors can be set through the application submission parameters (e.g. spark-submit's --num-executors on YARN)
        - Partition: by default, reading a file follows Hadoop's input-split rules, while data read from memory can be partitioned via the specific operator's argument; the count can also be changed later with other operators. When there are multiple stages, the next stage's partition count defaults to the final partition count of the previous stage, but it can be overridden in the corresponding operator
        - Stage: number of stages = 1 (the ResultStage) + the number of shuffle dependencies (one ShuffleMapStage each). Stages are divided so that task execution can wait at the boundary, because the shuffle process must write to disk
        - Task: in principle one partition corresponds to one task, but in practice the number can be adjusted dynamically
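A sketch of how these counts show up in code, continuing with the sc from the first sketch; the file path is a placeholder:

    // Partitions of in-memory data: set via the operator's numSlices argument
    val mem = sc.parallelize(1 to 10, numSlices = 4)
    println(mem.getNumPartitions)  // 4

    // Partitions of file data: Hadoop split rules, with a minimum hint
    val file = sc.textFile("hdfs:///data/words.txt", minPartitions = 3)  // placeholder path

    // Changing the partition count afterwards
    val widened  = mem.repartition(8)  // shuffles up to 8 partitions
    val narrowed = mem.coalesce(2)     // narrows to 2 partitions without a shuffle

    // Stages: reduceByKey adds one shuffle dependency, so this job has
    // 1 ShuffleMapStage + 1 ResultStage; each stage runs one task per partition
    val counts = mem.map(n => (n % 2, 1)).reduceByKey(_ + _, numPartitions = 2)
    counts.collect()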

1.2 Creation (see the sketch after this list):

    1.2.1 From an in-memory collection
    1.2.2 From external storage
    1.2.3 From another RDD
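The three creation routes in Scala, reusing the same sc; the file path is a placeholder:

    // 1. From an in-memory collection
    val fromMemory = sc.parallelize(Seq(1, 2, 3, 4))
    val alsoFromMemory = sc.makeRDD(Seq(1, 2, 3, 4))  // makeRDD delegates to parallelize

    // 2. From external storage (local file, HDFS, ...)
    val fromStorage = sc.textFile("data/input.txt")  // placeholder path

    // 3. From another RDD, via a transformation
    val fromRdd = fromMemory.map(_ * 10)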

1.3 Properties (see the sketch after this list):

    1.3.1 A list of partitions
    1.3.2 Dependencies on parent RDDs
    1.3.3 A partitioner (for key-value RDDs)
    1.3.4 Preferred locations for each partition
    1.3.5 A compute function for each partition
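All five properties are visible on the RDD API; a small inspection sketch (the compute function is internal to each RDD subclass, so it appears only as a comment):

    val pairs = sc.parallelize(Seq(("a", 1), ("b", 2)), 2).reduceByKey(_ + _)

    println(pairs.partitions.length)  // 1.3.1 the list of partitions
    println(pairs.dependencies)       // 1.3.2 dependencies on the parent RDD
    println(pairs.partitioner)        // 1.3.3 Some(HashPartitioner(2)) for this K-V RDD
    println(pairs.preferredLocations(pairs.partitions(0)))  // 1.3.4 empty for in-memory data

    // 1.3.5 compute function: every RDD subclass implements compute(split, context),
    // which Spark calls internally to produce the data of a single partition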

1.4 Usage:

    1.4.1 Transformations (see the sketch after this list):

        - Single-value type
        - Double-value (two-RDD) type
        - Key-value (K-V) type

    1.4.2 Actions:

        - runJob (every action is ultimately submitted through SparkContext.runJob)
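A sketch covering each transformation family and an action; all calls are standard RDD API:

    val nums  = sc.parallelize(Seq(1, 2, 3, 4))
    val other = sc.parallelize(Seq(3, 4, 5, 6))

    // Single-value transformations
    val doubled = nums.map(_ * 2).filter(_ > 2)

    // Double-value (two-RDD) transformations
    val both   = nums.union(other)
    val common = nums.intersection(other)

    // K-V transformations
    val wordCounts = sc.parallelize(Seq("a", "b", "a")).map(w => (w, 1)).reduceByKey(_ + _)

    // Actions trigger execution; internally they all go through SparkContext.runJob
    println(wordCounts.collect().mkString(", "))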

2. Broadcast variables: distributed, shared, read-only data (see the sketch below)
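A sketch of a broadcast variable; executors can only read it:

    // Ship one read-only copy of the lookup table to each executor,
    // instead of serializing one copy into every task closure
    val lookup = sc.broadcast(Map("a" -> 1, "b" -> 2))

    val result = sc.parallelize(Seq("a", "b", "a"))
      .map(k => lookup.value.getOrElse(k, 0))  // tasks only read the broadcast value
      .collect()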

3. Accumulators: distributed, shared, write-only data (see the sketch below)
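A sketch of an accumulator; tasks only write to it, and the merged value is read back on the driver:

    val errorCount = sc.longAccumulator("errors")  // built-in long accumulator

    sc.parallelize(Seq(1, -2, 3, -4)).foreach { n =>
      if (n < 0) errorCount.add(1)  // write-only from the task's point of view
    }
    println(errorCount.value)  // 2, available on the driver after the action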
