
Spark SQL in Practice: Connecting to Hive and Storing Statistics in MySQL

1. Requirement:

Use Spark SQL to connect to Hive, read the data, and store the statistics in MySQL.

2. Package the code, upload it to the cluster, and submit it to Spark. Hive and HDFS must already be running.

3. Code:

(1) pom.xml

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.1.0</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-sql_2.11</artifactId>
    <version>2.1.0</version>
</dependency>
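The MySQL JDBC driver is not declared in the pom above; in this walkthrough it is handed to spark-submit at run time instead. If you would rather bundle the driver with the application, a dependency along these lines could be added (the version is an assumption, chosen to match the connector jar used in the spark-submit command):

```xml
<!-- Optional: bundle the MySQL JDBC driver instead of shipping it
     through spark-submit. Version 5.1.27 matches the connector jar
     referenced in the submit command in this post. -->
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.27</version>
</dependency>
```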

(2) Demo4.scala

package day1209

import java.util.Properties

import org.apache.spark.sql.SparkSession

object Demo4 {

  def main(args: Array[String]): Unit = {

    val spark = SparkSession.builder()
      .appName("Hive2Mysql")
      .enableHiveSupport()
      // .config("spark.sql.inMemoryColumnarStorage.batchSize", 10)
      .getOrCreate()

    // Run the SQL query against the Hive table
    val result = spark.sql("select deptno, mgr from default.emp")

    // Save the result to MySQL
    val props = new Properties()
    props.setProperty("user", "root")
    props.setProperty("password", "000000")

    result.write.mode("append").jdbc(
      "jdbc:mysql://hadoop2:3306/company?serverTimezone=UTC&characterEncoding=utf-8",
      "emp_stat", props)

    // Stop Spark
    spark.stop()
  }
}
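The JDBC URL and credentials above are written inline; they can be factored into a small helper so the connection settings live in one place. This is only a sketch, not part of the original code: the JdbcConf name and the explicit driver property are assumptions (the driver class shown is the one for MySQL Connector/J 5.x).

```scala
import java.util.Properties

// Hypothetical helper (not in the original post): builds the JDBC URL and
// Properties consumed by DataFrameWriter.jdbc in one place.
object JdbcConf {

  // Connection properties; setting the driver class explicitly can avoid
  // "No suitable driver" errors on some cluster setups.
  def mysqlProps(user: String, password: String): Properties = {
    val props = new Properties()
    props.setProperty("user", user)
    props.setProperty("password", password)
    props.setProperty("driver", "com.mysql.jdbc.Driver")
    props
  }

  // JDBC URL with the same timezone and encoding options as the example above.
  def mysqlUrl(host: String, port: Int, db: String): String =
    s"jdbc:mysql://$host:$port/$db?serverTimezone=UTC&characterEncoding=utf-8"
}
```

With such a helper, the write in Demo4 would read: result.write.mode("append").jdbc(JdbcConf.mysqlUrl("hadoop2", 3306, "company"), "emp_stat", JdbcConf.mysqlProps("root", "000000")).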

4. Run:

(1) Start Spark and submit the job (note that --class must match the package and object name in the code, i.e. day1209.Demo4):

cd /opt/module/spark-2.1.1

./bin/spark-submit --master spark://hadoop2:7077 --jars /opt/TestFolder/mysql-connector-java-5.1.27.jar --driver-class-path /opt/TestFolder/mysql-connector-java-5.1.27.jar --class day1209.Demo4 /opt/TestFolder/Scala-1.0-SNAPSHOT.jar

5. Result:

After the job finishes, the output can be verified in the emp_stat table of the company database in MySQL.
