Alibaba Cloud Data Lake Analytics: net_write_timeout error during a one-click data warehouse sync

Today, a user of Alibaba Cloud Data Lake Analytics (DLA: Data Lake Analytics) ran into a data sync failure while using the one-click data warehouse feature. The error was as follows:

The whole JobGroup: Sync failed! State: JobGroupState(state=CANCELLED, message=[#1054] Failed syncing: gameinfolog_2018_05_06! message: mppQueryId=20200325_090656_18425_uuvx6, mppErrorCode=30101, because Application was streaming results when the connection failed. Consider raising value of 'net_write_timeout' on the server.), 107 out of 775 jobs are finished
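For reference, the error's own suggestion can be checked directly on the MySQL server. A minimal sketch, assuming shell access to the MySQL host and sufficient privileges (600 seconds is only an illustrative value, and as the rest of this post shows, this setting turned out not to be the real root cause here):

mysql -e "SHOW VARIABLES LIKE 'net_write_timeout';"
# temporarily raise the timeout; the value is in seconds and applies to new sessions
mysql -e "SET GLOBAL net_write_timeout = 600;"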

After some investigation, we learned that the user wanted to sync their self-hosted MySQL database into Data Lake Analytics using the one-click data warehouse feature. The MySQL host was an 8-core / 32 GB machine, and the database contained 775 tables with about 3 TB of data.
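The table count and data volume can be confirmed on the source side with information_schema (a rough sketch; 'your_db' is a placeholder, not the user's actual database name):

mysql -e "SELECT COUNT(*) AS table_count FROM information_schema.tables WHERE table_schema = 'your_db';"
mysql -e "SELECT ROUND(SUM(data_length + index_length)/1024/1024/1024, 1) AS size_gb FROM information_schema.tables WHERE table_schema = 'your_db';"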

Analysis

The backend exception stack is shown below. It shows that the connection to MySQL was unexpectedly dropped in the middle of the sync.

Caused by: java.io.EOFException: Can not read response from server. Expected to read 239 bytes, read 172 bytes before connection was unexpectedly lost.
        at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3014)
        at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3522)
        ... 21 more           
The first suspicion was that the sync was opening too many connections and putting too much load on MySQL, so the sync concurrency was reduced:

total-allowed-connections=100
connections-per-job=10

But the same error occurred again.
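To double-check whether connection pressure was really the bottleneck, the standard MySQL status variables can be inspected while the sync is running (a sketch; reasonable thresholds depend on the instance):

mysql -e "SHOW STATUS LIKE 'Threads_connected';"
mysql -e "SHOW VARIABLES LIKE 'max_connections';"
mysql -e "SHOW PROCESSLIST;"   # what the sync connections are actually doing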

  • That suggested some other cause. Since this was a self-hosted MySQL instance with fairly limited monitoring, we asked the user for the MySQL error log and found that MySQL had restarted right at the time of each failure, so the restarts were most likely what broke the connections above. But why did MySQL keep restarting?
hongdeMacBook-Pro:Downloads hongshen$ grep shutdown error.log.1 
2020-03-25 09:34:33 25035 [Note] InnoDB: Database was not shutdown normally!
2020-03-25 11:35:07 31719 [Note] InnoDB: Database was not shutdown normally!
2020-03-25 14:03:33 7018 [Note] InnoDB: Database was not shutdown normally!
2020-03-25 15:02:10 10453 [Note] InnoDB: Database was not shutdown normally!
2020-03-25 15:09:33 11194 [Note] InnoDB: Database was not shutdown normally!
2020-03-25 16:28:36 15479 [Note] InnoDB: Database was not shutdown normally!
2020-03-25 16:35:42 16069 [Note] InnoDB: Database was not shutdown normally!
2020-03-25 17:07:15 17923 [Note] InnoDB: Database was not shutdown normally!
2020-03-26 00:36:17 7953 [Note] InnoDB: Database was not shutdown normally!
2020-03-26 00:43:20 8545 [Note] InnoDB: Database was not shutdown normally!           
  • The restart times correlated closely with the sync runs, so the sync itself was probably triggering the restarts. However, DLA's one-click data warehouse had previously synced even larger volumes from ApsaraDB RDS without any problem; the only difference this time was the self-hosted MySQL, which pointed to a configuration issue. We asked the user for the MySQL configuration and the system log (syslog) and found an oom-killer event at the time of every MySQL restart, with MySQL configured to use more memory than the host could spare. The conclusion: MySQL's memory usage was triggering the machine's oom-killer (a quick way to cross-check this is sketched after the log below).
hongdeMacBook-Pro:Downloads hongshen$ grep oom-killer syslog.1 
Mar 25 09:34:30 iZwz91gspuopjn2g9y7p2bZ kernel: [778504.166514] cron invoked oom-killer: gfp_mask=0x24200ca, order=0, oom_score_adj=0
Mar 25 11:35:03 iZwz91gspuopjn2g9y7p2bZ kernel: [785737.242006] fpmmm.php invoked oom-killer: gfp_mask=0x24201ca, order=0, oom_score_adj=0
Mar 25 14:03:29 iZwz91gspuopjn2g9y7p2bZ kernel: [794642.922230] mysqld invoked oom-killer: gfp_mask=0x24201ca, order=0, oom_score_adj=0
Mar 25 15:02:07 iZwz91gspuopjn2g9y7p2bZ kernel: [798160.021297] mysqld invoked oom-killer: gfp_mask=0x24201ca, order=0, oom_score_adj=0
Mar 25 15:09:30 iZwz91gspuopjn2g9y7p2bZ kernel: [798602.927184] mysqld invoked oom-killer: gfp_mask=0x24280ca, order=0, oom_score_adj=0
Mar 25 16:28:33 iZwz91gspuopjn2g9y7p2bZ kernel: [803345.721144] mysqld invoked oom-killer: gfp_mask=0x24201ca, order=0, oom_score_adj=0
Mar 25 16:28:33 iZwz91gspuopjn2g9y7p2bZ kernel: [803345.734002] ntpd invoked oom-killer: gfp_mask=0x24201ca, order=0, oom_score_adj=0
Mar 25 16:35:38 iZwz91gspuopjn2g9y7p2bZ kernel: [803771.450129] mysqld invoked oom-killer: gfp_mask=0x24280ca, order=0, oom_score_adj=0
Mar 25 17:07:11 iZwz91gspuopjn2g9y7p2bZ kernel: [805664.819003] mysqld invoked oom-killer: gfp_mask=0x24280ca, order=0, oom_score_adj=0
Mar 26 00:36:13 iZwz91gspuopjn2g9y7p2bZ kernel: [832606.199291] mysqld invoked oom-killer: gfp_mask=0x24000c0, order=0, oom_score_adj=0
Mar 26 00:36:14 iZwz91gspuopjn2g9y7p2bZ kernel: [832606.207690] in:imklog invoked oom-killer: gfp_mask=0x24201ca, order=0, oom_score_adj=0
Mar 26 00:43:16 iZwz91gspuopjn2g9y7p2bZ kernel: [833029.210693] mysqld invoked oom-killer: gfp_mask=0x24280ca, order=0, oom_score_adj=0           
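The "memory configured too high" conclusion from the second point can be cross-checked by comparing MySQL's memory settings against the host's physical RAM. A sketch using standard MySQL variables and common Linux tools (nothing here is specific to DLA):

mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
mysql -e "SHOW VARIABLES LIKE 'max_connections';"
free -m                          # physical memory actually available on the host
dmesg -T | grep -i oom-killer    # same signal as the syslog grep above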

Solution

The user had configured MySQL with more memory than the host could afford. After the user reduced MySQL's memory configuration, the problem was resolved.
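A minimal sketch of what "reducing MySQL's memory" can look like in practice (the file path and the 16G value are assumptions for an 8c32g host shared with other processes, not figures from the user's environment):

# in /etc/mysql/my.cnf, under [mysqld], cap the buffer pool well below physical RAM, e.g.:
#   innodb_buffer_pool_size = 16G
# then restart MySQL and verify the new value:
sudo systemctl restart mysql
mysql -e "SELECT @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS buffer_pool_gb;"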