Today our Hadoop cluster had a problem where crontab jobs were not running. Running a job manually failed with the following error:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RetriableException): org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /user/hdfs/.staging/job_1441592436807_1892. Name node is in safe mode.
The reported blocks 4710619 needs additional 51773 blocks to reach the threshold 1.0000 of total blocks 4762391.
The number of live datanodes 34 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1211)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:3354)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:3314)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3298)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:733)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:547)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1594)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
Caused by: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot delete /user/hdfs/.staging/job_1441592436807_1892. Name node is in safe mode.
The reported blocks 4710619 needs additional 51773 blocks to reach the threshold 1.0000 of total blocks 4762391.
The number of live datanodes 34 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1207)
... 14 more
at org.apache.hadoop.ipc.Client.call(Client.java:1410)
at org.apache.hadoop.ipc.Client.call(Client.java:1363)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
The error says the NameNode is in safe mode.
I checked on the NameNode with hadoop dfsadmin -safemode get,
but the status it reported was OFF, which was strange.
Could it be that this NameNode process had died?
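In an HA pair, hadoop dfsadmin -safemode get goes through the client-side failover configuration, so it may not be talking to the NameNode that actually threw the error, and it can report OFF while the other NameNode is still stuck in safe mode. A minimal sketch for checking each node individually; nn1-host/nn2-host and port 8020 are placeholders for your own NameNode RPC addresses:

# on each NameNode host, confirm the process is actually alive
jps | grep -E 'NameNode|DFSZKFailoverController'
# query safe mode on each NameNode explicitly instead of via the default fs
hdfs dfsadmin -fs hdfs://nn1-host:8020 -safemode get
hdfs dfsadmin -fs hdfs://nn2-host:8020 -safemode get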
我嘗試将另外的namenode節點調整為active狀态,
hdfs haadmin -transitionToActive --forcemanual nn2
nn2節點變成了active狀态,之後檢視nn1
hdfs haadmin -getServiceState nn1盡然還是active狀态,
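Since --forcemanual bypasses the ZKFC checks, it is worth cross-checking what each NameNode itself thinks its HA state is, not only what haadmin reports. A small sketch using the NameNode JMX servlet; 50070 is the default Hadoop 2.x NameNode HTTP port and the hostnames are placeholders:

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
# each NameNode exposes its own view of its HA state via /jmx (tag.HAState)
curl -s 'http://nn1-host:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem' | grep HAState
curl -s 'http://nn2-host:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem' | grep HAState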
手動将它調整為standby試試,hdfs haadmin -transitionToStandby --forcemanual nn1
有時候會報錯:forcefence and forceactive flags not supported with auto-failover enabled.
意思是自動切換,不能手動。可以關閉這個Namenode節點服務,重新啟動。
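When manual transitions are refused because automatic failover is enabled, restarting the stuck NameNode and letting the ZKFCs elect the active node is the cleaner path. A sketch assuming a plain Apache Hadoop 2.x install managed with the sbin scripts (with CDH or Ambari you would restart through the management console instead); run it on the affected NameNode host as the hdfs user:

$HADOOP_HOME/sbin/hadoop-daemon.sh stop namenode
$HADOOP_HOME/sbin/hadoop-daemon.sh start namenode
# once it rejoins, the ZKFCs settle the roles; verify:
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2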
After some fiddling I ran an MR job and it finally succeeded, so I'm writing this down to help anyone else who hits this problem.
As for the root cause, it was probably the recent heavy MapReduce load combined with concurrent -put uploads.
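The "threshold 1.0000" in the error means this NameNode will not leave safe mode until 100% of the expected blocks have been reported (the Apache default for dfs.namenode.safemode.threshold-pct is 0.999), so heavy concurrent writes can keep it in safe mode longer than expected after a restart. A few related commands, assuming the standard Hadoop 2.x CLI and property names:

# see what threshold the cluster is configured with
hdfs getconf -confKey dfs.namenode.safemode.threshold-pct
# block until safe mode clears by itself (useful at the start of cron jobs)
hdfs dfsadmin -safemode wait
# last resort: force safe mode off, only after fsck/-report show no missing blocks
hdfs dfsadmin -safemode leave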