A few days ago, one of our production Sqoop jobs (extracting data from MySQL into Hadoop) suddenly threw an OOME. After rerunning it and tracing the Java process, we found that the heap was occupied almost entirely by byte[] instances, and the top 3 CPU-consuming methods were all com.mysql.jdbc.ByteArrayBuffer.getBytes; in other words, the memory was being consumed by the data itself. That was strange: with fetch-size=100 specified in the job options, and an average record length of under 1 KB, how could it possibly hit an OOME?
Looking back at the previous day's successful run, about 1 million records had used 860 MB of memory; at under 1 KB per record that is roughly the size of the entire result set, so the fetch-size setting clearly was not taking effect:
+---------+---------+------+----------+-------------+--------------+
| type    | status  | host | cpusec   | mrinput_rec | memory_mb    |
+---------+---------+------+----------+-------------+--------------+
| CLEANUP | SUCCESS | A    | 0.3400   | NULL        | 191.84765625 |
| MAP     | SUCCESS | A    | 335.6400 | 1006942     | 862.39843750 |
| SETUP   | SUCCESS | B    | 0.2000   | NULL        | 179.34765625 |
+---------+---------+------+----------+-------------+--------------+
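As a quick sanity check, the per-row footprint implied by those counters matches a fully buffered result set rather than a 100-row fetch window. A minimal sketch of the arithmetic (the constants are simply the MAP row values from the table above):

public class FootprintCheck {
    public static void main(String[] args) {
        // Counters from the non-streaming run (MAP row above).
        long rows = 1_006_942L;
        double memoryMb = 862.39843750;
        double bytesPerRow = memoryMb * 1024 * 1024 / rows;
        // Prints roughly 898 bytes per row: consistent with the whole
        // ~1 KB-per-record result set being resident in memory.
        System.out.printf("~%.0f bytes per row%n", bytesPerRow);
    }
}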
With no other leads, I dug into the Sqoop source code and finally found the root cause: the fetch size gets ignored:
protected void initOptionDefaults() {
  if (options.getFetchSize() == null) {
    LOG.info("Preparing to use a MySQL streaming resultset.");
    options.setFetchSize(Integer.MIN_VALUE);
  } else if (
      !options.getFetchSize().equals(Integer.MIN_VALUE)
      && !options.getFetchSize().equals(0)) {
    LOG.info("Argument '--fetch-size " + options.getFetchSize()
        + "' will probably get ignored by MySQL JDBC driver.");
  }
}
The underlying reason is that the API provided by MySQL only supports two modes, row-by-row streaming and all-at-once retrieval; any other fetch size is passed through to the driver but ignored, so Sqoop merely logs a warning and the driver buffers the full result set. From the MySQL documentation:
By default, ResultSets are completely retrieved and stored in memory. In most cases this is the most efficient way to operate, and due to the design of the MySQL network protocol is easier to implement. If you are working with ResultSets that have a large number of rows or large values, and cannot allocate heap space in your JVM for the memory required, you can tell the driver to stream the results back one row at a time.
http://dev.mysql.com/doc/refman/5.5/en/connector-j-reference-implementation-notes.html
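To make the driver's behavior concrete, here is a minimal JDBC sketch (connection URL, credentials, and table name are hypothetical) showing the only configuration under which Connector/J streams rows instead of buffering the whole result set:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class StreamingRead {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details.
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://db-host/mydb", "user", "pass");
        // Streaming requires a forward-only, read-only statement.
        Statement stmt = conn.createStatement(
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
        // Integer.MIN_VALUE is the driver's signal for row-by-row
        // streaming; any other positive value (e.g. 100) is ignored
        // and the entire result set is pulled into client memory.
        stmt.setFetchSize(Integer.MIN_VALUE);
        ResultSet rs = stmt.executeQuery("SELECT * FROM big_table");
        try {
            while (rs.next()) {
                // Process one row at a time; only the current row
                // is held in memory.
            }
        } finally {
            rs.close();
            stmt.close();
            conn.close();
        }
    }
}

This lines up with Sqoop's initOptionDefaults() above: only when --fetch-size is left unset does Sqoop substitute Integer.MIN_VALUE and actually get streaming behavior.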
In the end I removed the fetch-size option altogether, letting Sqoop default to streaming, and the job ran successfully: about 7 million rows using only 400 MB of memory:
+---------+---------+------+----------+-------------+--------------+
| type    | status  | host | cpusec   | mrinput_rec | memory_mb    |
+---------+---------+------+----------+-------------+--------------+
| CLEANUP | SUCCESS | A    | 0.4200   | NULL        | 183.49218750 |
| MAP     | FAILED  | B    | NULL     | NULL        | NULL         |
| MAP     | SUCCESS | A    | 377.1200 | 7195560     | 408.08593750 |
| SETUP   | SUCCESS | C    | 0.2900   | NULL        | 188.64843750 |
+---------+---------+------+----------+-------------+--------------+
This article is reposted from MIKE老畢's 51CTO blog; original link: http://blog.51cto.com/boylook/1298634. Please contact the original author for reprint permission.