
Flink SQL Practical Notes

  1. Sink Kafka

    Error 1: doesn't support consuming update and delete changes which is produced by node TableSourceScan

    Answer: Flink 1.11 introduced CDC (Change Data Capture) support, open-sourced by Alibaba engineers. The error occurs because the source uses the mysql-cdc connector, so the rows it produces are in changelog format (including update and delete messages); a plain append-only Kafka sink cannot consume them, so the sink table's WITH clause must specify a changelog-capable format such as format = debezium-json, as shown below.
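    A minimal sketch of such a Kafka sink table, following the note above; the table name, topic, and broker address are illustrative, and the Flink version must support serializing changelog streams with debezium-json:

      CREATE TABLE kafka_sink (
        user_id BIGINT,
        behavior STRING
      ) WITH (
        'connector' = 'kafka',
        'topic' = 'user_behavior_sink',
        'properties.bootstrap.servers' = 'localhost:9092',
        -- a changelog format such as debezium-json can carry the update/delete
        -- messages produced by the mysql-cdc source
        'format' = 'debezium-json'
      );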

    Error 2: No operators defined in streaming topology. Cannot execute

    Answer: the streaming job contains no operator/operator chain to execute. This typically happens when the program only runs DDL (CREATE TABLE) statements and never submits a statement that actually builds a data flow, such as an INSERT INTO; see the sketch below.
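    A minimal sketch of the usual fix: submit a statement that actually produces a pipeline, here an INSERT INTO between the (hypothetical) source and sink tables used above:

      -- INSERT INTO wires source -> sink into the topology and triggers execution
      INSERT INTO kafka_sink
      SELECT user_id, behavior FROM mysql_source;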

  2. Source MySQL

        0. Error message: The server time zone value 'Öйú±ê׼ʱ¼ä' is unrecognized or represents more than one time zone (the garbled value is the mis-encoded Chinese name of the server's Windows time zone)

        1、set global time_zone = '+8:00';

        2、set time_zone = '+8:00';

        3、flush privileges; 

        4. Alternatively, specify the time zone via the serverTimeZone connection parameter (see the sketch after this list)
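        A minimal sketch of option 4: instead of changing the server's global time zone, pass the zone to the connector. For the mysql-cdc connector the option is 'server-time-zone' (for a plain JDBC URL the equivalent parameter is serverTimezone); all connection values below are illustrative:

          CREATE TABLE mysql_source (
            user_id BIGINT,
            behavior STRING
          ) WITH (
            'connector' = 'mysql-cdc',
            'hostname' = 'localhost',
            'port' = '3306',
            'username' = 'flink',
            'password' = 'flink',
            'database-name' = 'test',
            'table-name' = 'user_behavior',
            -- tells the connector which time zone the MySQL server uses,
            -- so TIMESTAMP values are interpreted correctly
            'server-time-zone' = 'Asia/Shanghai'
          );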

  3. Flink SQL job runs fine locally but fails when submitted to the cluster. The cluster's Flink distribution already ships the table planner, so declare the dependency with scope provided to keep it out of the job jar and avoid class conflicts:
    <dependency>
      <groupId>org.apache.flink</groupId>
      <artifactId>flink-table-planner-blink_2.11</artifactId>
      <version>${flink.version}</version>
      <scope>provided</scope> <!-- must be added -->
    </dependency>
  4. Before Flink 1.11, the DDL had to be declared as follows:
    CREATE TABLE user_behavior (
      ...
    ) WITH (
      'connector.type'='kafka',
      'connector.version'='universal',
      'connector.topic'='user_behavior',
      'connector.startup-mode'='earliest-offset',
      'connector.properties.zookeeper.connect'='localhost:2181',
      'connector.properties.bootstrap.servers'='localhost:9092',
      'format.type'='json'
    );           
    In Flink SQL 1.11 and later, this is simplified to:
    CREATE TABLE user_behavior (
      ...
    ) WITH (
      'connector'='kafka',
      'topic'='user_behavior',
      'scan.startup.mode'='earliest-offset',
      'properties.zookeeper.connect'='localhost:2181',
      'properties.bootstrap.servers'='localhost:9092',
      'format'='json'
    );           
