I searched around online and could not find a single practical MySQL-to-Iceberg example, so here is a summary of the pitfalls I hit over the past few days. I hope it saves someone some pain.
Status
As of Iceberg 0.11, only CDC streaming writes are supported; SQL writes and CDC reads are not supported yet.
For reading the MySQL binlog there is a ready-made connector: see flink-cdc-connectors. However, its demo only ships a String deserialization schema, while Iceberg's sink expects data in the RowData format.
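One way to bridge that gap is a custom DebeziumDeserializationSchema that emits RowData directly. The sketch below assumes the flink-cdc-connectors 1.x interface and Flink 1.12 class names; the class name and the two-column schema (id BIGINT, name STRING) are illustrative assumptions, not taken from the demo:

```java
import com.alibaba.ververica.cdc.debezium.DebeziumDeserializationSchema;

import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.table.data.GenericRowData;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.data.StringData;
import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
import org.apache.flink.table.types.logical.BigIntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;
import org.apache.flink.types.RowKind;
import org.apache.flink.util.Collector;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.source.SourceRecord;

// Emits RowData (what Iceberg's FlinkSink expects) instead of the demo's String.
// The schema (id BIGINT, name STRING) is an assumption for illustration.
public class MyRowDataDeserializer implements DebeziumDeserializationSchema<RowData> {

    @Override
    public void deserialize(SourceRecord record, Collector<RowData> out) {
        Struct value = (Struct) record.value();
        String op = value.getString("op"); // Debezium op: c(reate)/u(pdate)/d(elete)/r(ead)
        // For deletes the row image lives in "before", otherwise in "after"
        Struct payload = "d".equals(op) ? value.getStruct("before") : value.getStruct("after");

        GenericRowData row = new GenericRowData(2);
        row.setField(0, payload.getInt64("id"));
        row.setField(1, StringData.fromString(payload.getString("name")));
        row.setRowKind("d".equals(op) ? RowKind.DELETE
                : "u".equals(op) ? RowKind.UPDATE_AFTER
                : RowKind.INSERT);
        out.collect(row);
    }

    @Override
    public TypeInformation<RowData> getProducedType() {
        // On Flink 1.11, RowDataTypeInfo would be used instead of InternalTypeInfo
        return InternalTypeInfo.of(RowType.of(
                new LogicalType[] {new BigIntType(), new VarCharType(VarCharType.MAX_LENGTH)},
                new String[] {"id", "name"}));
    }
}
```

A real deserializer would map every column of the source table and handle the UPDATE_BEFORE image as well; this only shows the shape of the bridge.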
Pitfall log
java.lang.ClassCastException: org.apache.flink.table.data.GenericRowData cannot be cast to org.apache.flink.types.Row
Cause: the TypeInformation was constructed incorrectly.
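This cast error typically means the source was declared with a Row-based TypeInformation (e.g. built from Types.ROW(...)) while the deserializer actually emits RowData. A minimal sketch of the RowData-side type info, assuming Flink 1.12 class names and a hypothetical two-column schema:

```java
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.runtime.typeutils.InternalTypeInfo;
import org.apache.flink.table.types.logical.BigIntType;
import org.apache.flink.table.types.logical.LogicalType;
import org.apache.flink.table.types.logical.RowType;
import org.apache.flink.table.types.logical.VarCharType;

public class RowDataTypes {
    // Returns TypeInformation<RowData>, not TypeInformation<Row> --
    // mixing the two is exactly what triggers the ClassCastException above
    public static TypeInformation<RowData> typeInfo() {
        RowType rowType = RowType.of(
                new LogicalType[] {new BigIntType(), new VarCharType(VarCharType.MAX_LENGTH)},
                new String[] {"id", "name"});
        return InternalTypeInfo.of(rowType);
    }
}
```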
Client does not support authentication protocol requested by server; consider upgrading MySQL client
Fix: https://stackoverflow.com/questions/50093144/mysql-8-0-client-does-not-support-authentication-protocol-requested-by-server
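The linked answer boils down to switching the account back to the legacy authentication plugin, since older MySQL drivers cannot negotiate MySQL 8's default caching_sha2_password. Roughly (user name, host, and password are placeholders):

```sql
-- Switch the account to the pre-8.0 auth plugin the old driver understands
ALTER USER 'flinkuser'@'%' IDENTIFIED WITH mysql_native_password BY 'yourpassword';
FLUSH PRIVILEGES;
```

Upgrading the JDBC driver to a MySQL 8-compatible version is the cleaner long-term fix.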
2021-03-12 14:07:57
org.apache.kafka.connect.errors.ConnectException: Access denied; you need (at least one of) the RELOAD privilege(s) for this operation Error code: 1227; SQLSTATE: 42000.
at io.debezium.connector.mysql.AbstractReader.wrap(AbstractReader.java:230)
at io.debezium.connector.mysql.AbstractReader.failed(AbstractReader.java:207)
at io.debezium.connector.mysql.SnapshotReader.execute(SnapshotReader.java:831)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLSyntaxErrorException: Access denied; you need (at least one of) the RELOAD privilege(s) for this operation
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:120)
at com.mysql.cj.jdbc.exceptions.SQLError.createSQLException(SQLError.java:97)
at com.mysql.cj.jdbc.exceptions.SQLExceptionsMapping.translateException(SQLExceptionsMapping.java:122)
at com.mysql.cj.jdbc.StatementImpl.executeInternal(StatementImpl.java:782)
at com.mysql.cj.jdbc.StatementImpl.execute(StatementImpl.java:666)
at io.debezium.jdbc.JdbcConnection.executeWithoutCommitting(JdbcConnection.java:1201)
at io.debezium.connector.mysql.SnapshotReader.execute(SnapshotReader.java:465)
... 3 more
This is a permissions issue; if you hit it in production, ask your DBA to grant the missing privilege.
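For reference, the Debezium MySQL connector documents the privileges its snapshot and binlog readers need; the grant would look something like this (user name and host are placeholders):

```sql
-- Privileges Debezium's MySQL connector needs for snapshotting and binlog reading
GRANT SELECT, RELOAD, SHOW DATABASES, REPLICATION SLAVE, REPLICATION CLIENT
    ON *.* TO 'flinkuser'@'%';
FLUSH PRIVILEGES;
```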
2021-03-15 11:35:01
java.lang.IllegalArgumentException: Cannot write delete files in a v1 table
at org.apache.iceberg.ManifestFiles.writeDeleteManifest(ManifestFiles.java:154)
at org.apache.iceberg.SnapshotProducer.newDeleteManifestWriter(SnapshotProducer.java:365)
at org.apache.iceberg.MergingSnapshotProducer.newDeleteFilesAsManifest(MergingSnapshotProducer.java:480)
at org.apache.iceberg.MergingSnapshotProducer.prepareDeleteManifests(MergingSnapshotProducer.java:469)
at org.apache.iceberg.MergingSnapshotProducer.apply(MergingSnapshotProducer.java:358)
at org.apache.iceberg.SnapshotProducer.apply(SnapshotProducer.java:163)
at org.apache.iceberg.SnapshotProducer.lambda$commit$2(SnapshotProducer.java:276)
at org.apache.iceberg.util.Tasks$Builder.runTaskWithRetry(Tasks.java:404)
at org.apache.iceberg.util.Tasks$Builder.runSingleThreaded(Tasks.java:213)
at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:197)
at org.apache.iceberg.util.Tasks$Builder.run(Tasks.java:189)
at org.apache.iceberg.SnapshotProducer.commit(SnapshotProducer.java:275)
at org.apache.iceberg.flink.sink.IcebergFilesCommitter.commitOperation(IcebergFilesCommitter.java:298)
at org.apache.iceberg.flink.sink.IcebergFilesCommitter.commitDeltaTxn(IcebergFilesCommitter.java:285)
at org.apache.iceberg.flink.sink.IcebergFilesCommitter.commitUpToCheckpoint(IcebergFilesCommitter.java:210)
at org.apache.iceberg.flink.sink.IcebergFilesCommitter.initializeState(IcebergFilesCommitter.java:147)
at org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:106)
at org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:258)
at org.apache.flink.streaming.runtime.tasks.OperatorChain.initializeStateAndOpenOperators(OperatorChain.java:290)
at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:473)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:47)
at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:469)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:522)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:721)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:546)
at java.lang.Thread.run(Thread.java:748)
Fix:
The community release does not enable v2 by default, because some v2 features are not fully polished yet. If you want to test against v2, you need to upgrade the table from v1 to v2 through the Java API, like so:
// Load the table handle first (elided here)
Table table = …
// Drop down to the table's low-level metadata operations
TableOperations ops = ((BaseTable) table).operations();
TableMetadata meta = ops.current();
// Commit a metadata swap that bumps the format version from v1 to v2
ops.commit(meta, meta.upgradeToFormatVersion(2));
For the complete demo, see the code.
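For orientation, a rough end-to-end wiring of the job might look like the following. Every hostname, credential, path, and the deserializer class name are placeholders; the API names follow flink-cdc-connectors 1.x and Iceberg 0.11's FlinkSink, and equalityFieldColumns belongs to the (still experimental) v2 CDC write path, so check availability in your exact versions:

```java
import java.util.Collections;

import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.SourceFunction;
import org.apache.flink.table.data.RowData;
import org.apache.iceberg.flink.TableLoader;
import org.apache.iceberg.flink.sink.FlinkSink;

import com.alibaba.ververica.cdc.connectors.mysql.MySQLSource;

public class MysqlToIcebergJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // Iceberg only commits data files when a checkpoint completes
        env.enableCheckpointing(10_000L);

        // The deserializer is a custom DebeziumDeserializationSchema<RowData>
        // (hypothetical class, not shown here)
        SourceFunction<RowData> source = MySQLSource.<RowData>builder()
                .hostname("localhost")
                .port(3306)
                .databaseList("demo")
                .tableList("demo.users")
                .username("flinkuser")
                .password("yourpassword")
                .deserializer(new MyRowDataDeserializer())
                .build();

        DataStream<RowData> stream = env.addSource(source);

        // Point the sink at an existing Iceberg table (already upgraded to v2)
        TableLoader loader = TableLoader.fromHadoopTable("hdfs:///warehouse/demo/users");
        FlinkSink.forRowData(stream)
                .tableLoader(loader)
                // Primary-key column(s) used for equality deletes on updates/deletes
                .equalityFieldColumns(Collections.singletonList("id"))
                .append();

        env.execute("mysql-to-iceberg");
    }
}
```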