
How to Write Spark Programs in IDEA (Local, Cluster, and Java: Three Ways to Write the Code)

In this post, Alice walks you through writing Spark programs in IDEA.


Before We Begin

For this walkthrough I will use a classic example — WordCount, the one nobody skips when learning MapReduce — to show how to write Spark program code for several different scenarios. As you type along, keep one question in mind: is Spark really that much simpler than MapReduce?

Materials

wordcount.txt

hello me you her
hello you her
hello her
hello           
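As a quick sanity check, the counts we should expect from this file can be computed with plain Scala collections, no Spark required (the object name `WordCountCheck` is just for illustration):

```scala
object WordCountCheck {
  // Count how often each word occurs across a collection of lines
  def wordCount(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.split(" "))                  // split every line into words and flatten
      .filter(_.nonEmpty)                     // guard against empty tokens
      .groupBy(identity)                      // group identical words together
      .map { case (w, ws) => (w, ws.size) }   // group size = occurrence count

  def main(args: Array[String]): Unit = {
    // The four lines of wordcount.txt
    val lines = Seq("hello me you her", "hello you her", "hello her", "hello")
    wordCount(lines).toSeq.sortBy(-_._2).foreach(println)
  }
}
```

Running this shows `hello` appearing 4 times, `her` 3, `you` 2, and `me` 1 — the same numbers the Spark jobs below should produce.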


WordCount, Illustrated

(Figure: the WordCount data flow — lines are split into words, each word is mapped to (word, 1), and the pairs are aggregated by key.)

pom.xml

  • Create a Maven project, fill out the directory structure, and configure pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.czxy</groupId>
    <artifactId>spark_demo</artifactId>
    <version>1.0-SNAPSHOT</version>


    <!-- Repository locations: the aliyun, cloudera, and jboss repositories, in that order -->
    <repositories>
        <repository>
            <id>aliyun</id>
            <url>http://maven.aliyun.com/nexus/content/groups/public/</url>
        </repository>
        <repository>
            <id>cloudera</id>
            <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
        </repository>
        <repository>
            <id>jboss</id>
            <url>http://repository.jboss.com/nexus/content/groups/public</url>
        </repository>
    </repositories>
    <properties>
        <maven.compiler.source>1.8</maven.compiler.source>
        <maven.compiler.target>1.8</maven.compiler.target>
        <encoding>UTF-8</encoding>
        <scala.version>2.11.8</scala.version>
        <scala.compat.version>2.11</scala.compat.version>
        <hadoop.version>2.7.4</hadoop.version>
        <spark.version>2.2.0</spark.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-hive-thriftserver_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <!-- <dependency>
             <groupId>org.apache.spark</groupId>
             <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
             <version>${spark.version}</version>
         </dependency>-->
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-kafka-0-10_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-sql-kafka-0-10_2.11</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <!--<dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.6.0-mr1-cdh5.14.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>1.2.0-cdh5.14.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-server</artifactId>
            <version>1.2.0-cdh5.14.0</version>
        </dependency>-->

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.7.4</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>1.3.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-server</artifactId>
            <version>1.3.1</version>
        </dependency>
        <dependency>
            <groupId>com.typesafe</groupId>
            <artifactId>config</artifactId>
            <version>1.3.3</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.38</version>
        </dependency>
    </dependencies>

    <build>
        <sourceDirectory>src/main/java</sourceDirectory>
        <testSourceDirectory>src/test/scala</testSourceDirectory>
        <plugins>
            <!-- Plugin for compiling Java -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.5.1</version>
            </plugin>
            <!-- Plugin for compiling Scala -->
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.2.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                        <configuration>
                            <args>
                                <arg>-dependencyfile</arg>
                                <arg>${project.build.directory}/.scala_dependencies</arg>
                            </args>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.18.1</version>
                <configuration>
                    <useFile>false</useFile>
                    <disableXmlReport>true</disableXmlReport>
                    <includes>
                        <include>**/*Test.*</include>
                        <include>**/*Suite.*</include>
                    </includes>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>2.3</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <filters>
                                <filter>
                                    <artifact>*:*</artifact>
                                    <excludes>
                                        <exclude>META-INF/*.SF</exclude>
                                        <exclude>META-INF/*.DSA</exclude>
                                        <exclude>META-INF/*.RSA</exclude>
                                    </excludes>
                                </filter>
                            </filters>
                            <transformers>
                                <transformer
                                        implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                    <mainClass></mainClass>
                                </transformer>
                            </transformers>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>           


  • The difference between maven-assembly-plugin and maven-shade-plugin

For a comparison, see this post: https://blog.csdn.net/lisheng19870305/article/details/88300951

Running Locally

package com.czxy.scala

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

/*
 * @Author: Alice菌
 * @Date: 2020/2/19 08:39
 * @Description:
    Laugh through the fleeting years; the future is bright. Ride your dreams and make the most of your youth!
 */
/**
  * Run locally
  */
object Spark_wordcount {

  def main(args: Array[String]): Unit = {

    // 1. Create the SparkContext
    val config = new SparkConf().setAppName("wc").setMaster("local[*]")
    val sc = new SparkContext(config)
    sc.setLogLevel("WARN")

    // 2. Read the file
    // A Resilient Distributed Dataset (RDD) can be thought of as a distributed collection,
    // but Spark wraps a lot of machinery around it so that programmers
    // can work with it almost as easily as with a local collection
    val fileRDD: RDD[String] = sc.textFile("G:\\2020幹貨\\Spark\\wordcount.txt")

    // 3. Process the data
    // 3.1 Split each line on spaces and flatten the pieces into a new collection
    // flatMap applies the function to every element, then flattens the results
    val wordRDD: RDD[String] = fileRDD.flatMap(_.split(" "))
    // 3.2 Map each word to a count of 1
    val wordAndOneRDD: RDD[(String, Int)] = wordRDD.map((_,1))
    // 3.3 Aggregate by key to count each word
    // wordAndOneRDD.reduceByKey((a,b)=>a+b)
    // first _: the result accumulated so far
    // second _: the incoming value
    val wordAndCount: RDD[(String, Int)] = wordAndOneRDD.reduceByKey(_+_)
    // 4. Collect the results
    val result: Array[(String, Int)] = wordAndCount.collect()
    // Print the results to the console
    result.foreach(println)

  }
}           
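To see what `reduceByKey(_+_)` does conceptually, here is a plain-Scala sketch (no Spark; `ReduceByKeyDemo` is just an illustrative name): it groups the pairs by key, then folds each group's values with the same `(a, b) => a + b` function the job passes in.

```scala
object ReduceByKeyDemo {
  // A local-collections analogue of Spark's reduceByKey:
  // group the pairs by key, then reduce each group's values with f
  def reduceByKey[K, V](pairs: Seq[(K, V)])(f: (V, V) => V): Map[K, V] =
    pairs.groupBy(_._1).map { case (k, kvs) => (k, kvs.map(_._2).reduce(f)) }

  def main(args: Array[String]): Unit = {
    val wordAndOne = Seq(("hello", 1), ("her", 1), ("hello", 1), ("you", 1))
    // Same (a, b) => a + b reduction the Spark job uses
    println(reduceByKey(wordAndOne)(_ + _))
  }
}
```

The real `reduceByKey` additionally combines values on each partition before shuffling, which is why it is preferred over `groupByKey` followed by a reduce.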


The output:

(Screenshot: console output showing the word counts.)

Running on a Cluster

package com.czxy.scala

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

/*
 * @Author: Alice菌
 * @Date: 2020/2/19 09:12
 * @Description:
    Laugh through the fleeting years; the future is bright. Ride your dreams and make the most of your youth!
 */
/**
  * Run on a cluster
  */
object Spark_wordcount_cluster {
  def main(args: Array[String]): Unit = {

    // 1. Create the SparkContext
    val config = new SparkConf().setAppName("wc")
    val sc = new SparkContext(config)
    sc.setLogLevel("WARN")
    // 2. Read the file
    // A Resilient Distributed Dataset (RDD) can be thought of as a distributed collection,
    // but Spark wraps a lot of machinery around it so that programmers
    // can work with it almost as easily as with a local collection
    val fileRDD: RDD[String] = sc.textFile(args(0)) // input path
    // 3. Process the data
    // 3.1 Split each line on spaces and flatten the pieces into a new collection
    // flatMap applies the function to every element, then flattens the results
    val wordRDD: RDD[String] = fileRDD.flatMap(_.split(" "))
    // 3.2 Map each word to a count of 1
    val wordAndOneRDD = wordRDD.map((_,1))
    // 3.3 Aggregate by key to count each word
    // wordAndOneRDD.reduceByKey((a,b)=>a+b)
    // first _: the result accumulated so far
    // second _: the incoming value
    val wordAndCount: RDD[(String, Int)] = wordAndOneRDD.reduceByKey(_+_)
    wordAndCount.saveAsTextFile(args(1)) // output path

  }
}           
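The cluster version reads its input and output paths from `args(0)` and `args(1)`; if either is missing, the job dies at runtime with an opaque ArrayIndexOutOfBoundsException. A small guard like the following (a hypothetical addition, not part of the original code) fails fast with a usage message instead:

```scala
object SubmitArgs {
  // Validate the two positional arguments the cluster job expects;
  // Left carries a usage message, Right the (input, output) pair
  def parseArgs(args: Array[String]): Either[String, (String, String)] =
    if (args.length < 2)
      Left("Usage: Spark_wordcount_cluster <inputPath> <outputPath>")
    else
      Right((args(0), args(1)))

  def main(args: Array[String]): Unit =
    println(parseArgs(args))
}
```

In the main method you would match on the result and call `sys.exit(1)` on `Left` before ever creating the SparkContext.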


  • Package the jar
  • Upload it to the cluster
  • Run the following command to submit to the Spark HA cluster:
/export/servers/spark/bin/spark-submit \
--class com.czxy.scala.Spark_wordcount_cluster \
--master spark://node01:7077,node02:7077 \
--executor-memory 1g \
--total-executor-cores 2 \
/root/wc.jar \
hdfs://node01:8020/wordcount/input/words.txt \
hdfs://node01:8020/wordcount/output4           


  • Run the following command to submit to the YARN cluster:
/export/servers/spark/bin/spark-submit \
--class com.czxy.scala.Spark_wordcount_cluster \
--master yarn \
--deploy-mode cluster \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 2 \
--queue default \
/root/wc.jar \
hdfs://node01:8020/wordcount/input/words.txt \
hdfs://node01:8020/wordcount/output5           


Here we submit to the YARN cluster.

(Screenshot: submitting the job to YARN.)

After the job finishes, view the results in Hue:

(Screenshots: the results viewed in Hue.)

Java 8 Version [For Your Reference]

Spark is implemented in Scala, and Scala, as a JVM language, integrates well with Java. Writing the previous example in Java is just as straightforward, only a little more verbose.

package com.czxy.scala;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

import java.util.Arrays;

/**
 * @Author: Alice菌
 * @Date: 2020/2/21 09:48
 * @Description: Laugh through the fleeting years; the future is bright. Ride your dreams and make the most of your youth!
 */
public class Spark_wordcount_java8 {


    public static void main(String[] args){
        SparkConf conf = new SparkConf().setAppName("wc").setMaster("local[*]");
        JavaSparkContext jsc = new JavaSparkContext(conf);
        JavaRDD<String> fileRDD = jsc.textFile("G:\\2020幹貨\\Spark\\wordcount.txt");
        JavaRDD<String> wordRDD = fileRDD.flatMap(s -> Arrays.asList(s.split(" ")).iterator());
        JavaPairRDD<String, Integer> wordAndOne = wordRDD.mapToPair(w -> new Tuple2<>(w, 1));
        JavaPairRDD<String, Integer> wordAndCount = wordAndOne.reduceByKey((a, b) -> a + b);
        //wordAndCount.collect().forEach(t->System.out.println(t));
        wordAndCount.collect().forEach(System.out::println);
        // The core idea of functional programming: behavior parameterization!
    }

}           


The result is the same.

(Screenshot: the console output, identical to the Scala version's.)

That's all for this post. If you found it helpful, or you're interested in big data, remember to like and follow Alice (^U^)ノ~YO