
FATAL namenode.FSEditLog, Error: flush failed for required journal

Hadoop HA NameNode process exited unexpectedly

  • Exception
2018-12-24 22:45:30,418 WARN  client.QuorumJournalManager (QuorumCall.java:waitFor(134)) - Waited 19032 ms (timeout=20000 ms) for a response for sendEdits. Succeeded so far: [10.10.22.3:8485]. Exceptions so far: [10.10.22.2:8485: Journal disabled until next roll]
2018-12-24 22:45:31,688 FATAL namenode.FSEditLog (JournalSet.java:mapJournalsAndReportErrors(398)) - Error: flush failed for required journal (JournalAndStream(mgr=QJM to [10.10.22.3:8485, 10.10.22.2:8485, 10.10.22.4:8485], stream=QuorumOutputStream starting at txid 35387466))
java.io.IOException: Timed out waiting 20000ms for a quorum of nodes to respond.
        at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:137)
        at org.apache.hadoop.hdfs.qjournal.client.QuorumOutputStream.flushAndSync(QuorumOutputStream.java:107)
        at org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream.flush(EditLogOutputStream.java:113)
        at org.apache.hadoop.hdfs.server.namenode.EditLogOutputStream.flush(EditLogOutputStream.java:107)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream$8.apply(JournalSet.java:533)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet.access$100(JournalSet.java:57)
        at org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalSetOutputStream.flush(JournalSet.java:529)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:707)
        at org.apache.hadoop.hdfs.server.namenode.FSEditLog.logSync(FSEditLog.java:641)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.storeAllocatedBlock(FSNamesystem.java:3394)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3268)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:850)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:504)
           
  • Analysis

    Last night the production cluster suddenly raised an alert: the NameNode process had died while the other processes listed by jps were still running. After restarting the NameNode everything looked normal again, so I went through the logs to find the cause and discovered that the crash was triggered by timed-out requests to the JournalNodes. Why does the NameNode talk to the JournalNodes at all? That is a consequence of the HA design. The JournalNodes store the EditLog; in MR1 the edit log was kept together with the fsimage and the SecondaryNameNode merged them periodically. To keep the Standby NameNode in sync with the Active NameNode, i.e. to keep their metadata consistent, the JournalNode daemons act as the communication channel between them. That makes the JournalNodes something the NameNode depends on, which is why we normally also deploy a ZooKeeper ensemble to guarantee high availability.
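    For context, the NameNode only knows about the JournalNodes through the HA settings in its configuration. Below is a minimal sketch of the relevant properties, not this cluster's actual config: the JournalNode addresses are taken from the log above, while the nameservice id (mycluster) and the ZooKeeper hosts are placeholders.
<!-- hdfs-site.xml: the quorum of JournalNodes the Active NN writes edits to
     and the Standby NN tails them from (mycluster is a placeholder nameservice id) -->
<property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://10.10.22.2:8485;10.10.22.3:8485;10.10.22.4:8485/mycluster</value>
</property>
<!-- hdfs-site.xml: let the ZKFC daemons perform automatic failover -->
<property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
</property>
<!-- core-site.xml: the ZooKeeper ensemble the ZKFCs coordinate through (placeholder hosts) -->
<property>
        <name>ha.zookeeper.quorum</name>
        <value>zk1:2181,zk2:2181,zk3:2181</value>
</property>
    Because every logSync must be acknowledged by a majority of these JournalNodes, a quorum write that times out, as in the FATAL above, makes the NameNode abort rather than risk losing edits.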

    The exception above was therefore caused by the NameNode timing out while communicating with the JournalNodes. The default timeout is 20 s; we can set it to 60 s in hdfs-site.xml to solve the problem:
<property>
        <name>dfs.qjournal.write-txns.timeout.ms</name>
        <value>60000</value>
</property>
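After editing hdfs-site.xml the NameNodes need to be restarted to pick up the new value; running hdfs getconf -confKey dfs.qjournal.write-txns.timeout.ms on a NameNode host is a quick way to confirm what the configuration files now provide. Note that raising the timeout only widens the tolerance; if the JournalNodes are regularly this slow (long GC pauses, overloaded disks or network), the underlying slowness still deserves investigation.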
           

References:

https://blog.csdn.net/levy_cui/article/details/51143214

https://blog.csdn.net/androidlushangderen/article/details/48415073
