
[14] Spark Streaming Integration with Kafka Using the Receiver Approach (in Scala)

Official documentation

The Kafka project introduced a new consumer API between versions 0.8 and 0.10. The 0.8 integration is compatible with 0.9 and 0.10 brokers, but the 0.10 integration is not compatible with earlier brokers.


The integration used here is spark-streaming-kafka-0-8 (see the official documentation).

There are two ways to configure Spark Streaming to receive data from Kafka. The old approach uses a Receiver; the new approach, introduced in Spark 1.3, does not:

Approach 1: Receiver-based Approach

Approach 2: Direct Approach (No Receivers)

This article covers the first approach, the one that uses a Receiver.

All data is pulled from Kafka by a Receiver and stored on Spark executors; once the job starts, Spark Streaming processes that data.

With the default configuration, data may be lost on failure. To guarantee zero data loss you need to enable the Write Ahead Log (WAL) mechanism, introduced in Spark 1.2: incoming data is first written to a log on a fault-tolerant file system (HDFS), so that after a failure it can be recovered from the log (the same idea HBase uses).
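The WAL is disabled by default. Below is a minimal sketch of turning it on; the HDFS checkpoint path is an assumption, so point it at your own cluster:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf()
  .setAppName("KafkaReceiverWithWAL")
  .setMaster("local[3]")
  // Disabled by default; enables write-ahead logging for all receivers
  .set("spark.streaming.receiver.writeAheadLog.enable", "true")
val ssc = new StreamingContext(conf, Seconds(5))
// The WAL is written under the checkpoint directory, so the directory must
// live on a fault-tolerant file system such as HDFS (path assumed here)
ssc.checkpoint("hdfs://node1:8020/spark/checkpoint")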

Notes

1. A Kafka topic partition and an RDD partition are not the same concept.

2. You can create multiple Kafka input DStreams with different groups and topics to receive data in parallel, which improves throughput (a sketch follows this list).

3. Enabling the Write Ahead Log (WAL) requires a file system like HDFS to back it, because received data is replicated into the log. The input stream's storage level should then be set to StorageLevel.MEMORY_AND_DISK_SER, since the default replicated level is redundant once the WAL provides durability.
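A minimal sketch combining notes 2 and 3, reusing the quorum and topic from this article; the group id, stream count, and master are assumptions:

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

val sparkConf = new SparkConf().setAppName("ParallelKafkaReceivers").setMaster("local[4]")
val ssc = new StreamingContext(sparkConf, Seconds(5))

// Each createStream call creates its own Receiver; running several of them
// pulls from Kafka in parallel, and their outputs are unioned into one DStream
val numStreams = 3
val streams = (1 to numStreams).map { _ =>
  KafkaUtils.createStream(ssc, "node1:2181", "spark_group",
    Map("spark_topic" -> 1), StorageLevel.MEMORY_AND_DISK_SER)
}
val messages = ssc.union(streams)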

Hands-on

1. Start ZooKeeper

cd /app/zookeeper/bin

./zkServer.sh start

2. Start Kafka

cd /app/kafka

bin/kafka-server-start.sh -daemon config/server.properties

3. Create a topic

bin/kafka-topics.sh --create --zookeeper node1:2181 --replication-factor 1 --partitions 1 --topic spark_topic
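
You can verify that the topic was created before moving on:

bin/kafka-topics.sh --describe --zookeeper node1:2181 --topic spark_topic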

4. Test from the console that the topic can produce and consume messages normally

Produce messages

bin/kafka-console-producer.sh --broker-list node1:9092 --topic spark_topic

hello kafka

hello spark streaming

9092 is the listener port configured in server.properties.


Consume messages

bin/kafka-console-consumer.sh --zookeeper node1:2181 --topic spark_topic


5. Project structure


6. pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.sid.spark</groupId>
  <artifactId>spark-train</artifactId>
  <version>1.0-SNAPSHOT</version>
  <inceptionYear>2008</inceptionYear>
  <properties>
    <scala.version>2.11.8</scala.version>
    <kafka.version>0.9.0.0</kafka.version>
    <spark.version>2.2.0</spark.version>
    <hadoop.version>2.9.0</hadoop.version>
    <hbase.version>1.4.4</hbase.version>
  </properties>

  <repositories>
    <repository>
      <id>scala-tools.org</id>
      <name>Scala-Tools Maven2 Repository</name>
      <url>http://scala-tools.org/repo-releases</url>
    </repository>
  </repositories>

  <pluginRepositories>
    <pluginRepository>
      <id>scala-tools.org</id>
      <name>Scala-Tools Maven2 Repository</name>
      <url>http://scala-tools.org/repo-releases</url>
    </pluginRepository>
  </pluginRepositories>

  <dependencies>
    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>${scala.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka_2.11</artifactId>
      <version>${kafka.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>${hadoop.version}</version>
      <exclusions>
        <exclusion>
          <artifactId>servlet-api</artifactId>
          <groupId>javax.servlet</groupId>
        </exclusion>
      </exclusions>
    </dependency>

    <!--<dependency>-->
      <!--<groupId>org.apache.hbase</groupId>-->
      <!--<artifactId>hbase-client</artifactId>-->
      <!--<version>${hbase.version}</version>-->
    <!--</dependency>-->

    <!--<dependency>-->
      <!--<groupId>org.apache.hbase</groupId>-->
      <!--<artifactId>hbase-server</artifactId>-->
      <!--<version>${hbase.version}</version>-->
    <!--</dependency>-->

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming-flume_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming-flume-sink_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>

    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming-kafka-0-8_2.11</artifactId>
      <version>2.2.0</version>
    </dependency>



    <dependency>
      <groupId>net.jpountz.lz4</groupId>
      <artifactId>lz4</artifactId>
      <version>1.3.0</version>
    </dependency>

    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>5.1.31</version>
    </dependency>

    <dependency>
      <groupId>org.apache.commons</groupId>
      <artifactId>commons-lang3</artifactId>
      <version>3.5</version>
    </dependency>

  </dependencies>

  <build>
    <sourceDirectory>src/main/scala</sourceDirectory>
    <testSourceDirectory>src/test/scala</testSourceDirectory>
    <plugins>
      <plugin>
        <groupId>org.scala-tools</groupId>
        <artifactId>maven-scala-plugin</artifactId>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
              <goal>testCompile</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <scalaVersion>${scala.version}</scalaVersion>
          <args>
            <arg>-target:jvm-1.8</arg>
          </args>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-eclipse-plugin</artifactId>
        <configuration>
          <downloadSources>true</downloadSources>
          <buildcommands>
            <buildcommand>ch.epfl.lamp.sdt.core.scalabuilder</buildcommand>
          </buildcommands>
          <additionalProjectnatures>
            <projectnature>ch.epfl.lamp.sdt.core.scalanature</projectnature>
          </additionalProjectnatures>
          <classpathContainers>
            <classpathContainer>org.eclipse.jdt.launching.JRE_CONTAINER</classpathContainer>
            <classpathContainer>ch.epfl.lamp.sdt.launching.SCALA_CONTAINER</classpathContainer>
          </classpathContainers>
        </configuration>
      </plugin>
    </plugins>
  </build>
  <reporting>
    <plugins>
      <plugin>
        <groupId>org.scala-tools</groupId>
        <artifactId>maven-scala-plugin</artifactId>
        <configuration>
          <scalaVersion>${scala.version}</scalaVersion>
        </configuration>
      </plugin>
    </plugins>
  </reporting>
</project>

7. Code

package com.sid.spark

import org.apache.spark.SparkConf
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

/**
  * Created by jy02268879 on 2018/7/19.
  *
  * Spark Streaming 基于 Receiver 对接Kafka
  */
object KafkaReceiver {
  def main(args: Array[String]): Unit = {

    if(args.length != 4){
      System.err.println("Usage: KafkaReceiver <zkQuorum> <groupId> <topics> <numPartitions>")
      System.exit(1)
    }

    val Array(zkQuorum, groupId, topics, numPartitions) = args

    val sparkConf = new SparkConf().setAppName("KafkaReceiver").setMaster("local[3]")
    val ssc = new StreamingContext(sparkConf,Seconds(5))

    /**
      * Create an input stream that pulls messages from Kafka Brokers.
      * @param ssc       StreamingContext object
      * @param zkQuorum  Zookeeper quorum (hostname:port,hostname:port,..)
      * @param groupId   The group id for this consumer
      * @param topics    Map of (topic_name to numPartitions) to consume. Each partition is consumed
      *                  in its own thread
      * @param storageLevel  Storage level to use for storing the received objects
      *                      (default: StorageLevel.MEMORY_AND_DISK_SER_2)
      * @return DStream of (Kafka message key, Kafka message value)
      */
    val topicMap = topics.split(",").map((_,numPartitions.toInt)).toMap
    val messages = KafkaUtils.createStream(ssc,zkQuorum,groupId,topicMap)

    messages.print()
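    // messages is a DStream of (Kafka message key, message value) pairs;
    // take the value, split it into words, and count each word per batch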
    messages.map(_._2).flatMap(_.split(" ")).map((_,1)).reduceByKey(_+_).print()

    ssc.start()
    ssc.awaitTermination()
  }
}

8. Run


9. Produce data in Kafka: a a a b b c c


10. Check the results in IDEA

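With the input a a a b b c c, the batch that receives those words should print roughly the following word counts (pair order may vary):

(a,3)
(b,2)
(c,2)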

Once it runs successfully locally, submit it to the server to run.

Modify the code to comment out setMaster and setAppName (a sketch follows).

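The change is a one-liner; the master and app name are now supplied by spark-submit's --master and --name flags instead of the code:

//val sparkConf = new SparkConf().setAppName("KafkaReceiver").setMaster("local[3]")
val sparkConf = new SparkConf()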

Package with Maven (e.g. mvn clean package)


Upload the jar under target to the server and submit it to Spark to run.

cd /app/spark/spark-2.2.0-bin-2.9.0/bin

./spark-submit --class com.sid.spark.KafkaReceiver --master local[2] --name KafkaReceiver --packages org.apache.spark:spark-streaming-kafka-0-8_2.11:2.2.0 /app/spark/test_data/spark-train-1.0-SNAPSHOT.jar node1:2181,node2:2181,node3:2181 test spark_topic 1

The four trailing arguments match the usage string in the code: the ZooKeeper quorum, the consumer group (test), the topic (spark_topic), and the number of consumer threads per topic (1).

Watch the job in the Spark web UI:

http://node1:4040/jobs/
