
HDFS NameNode failure: GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=

After starting the NameNode service, check the NameNode log under /var/log/hadoop-hdfs. It shows repeated entries like these:

GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=7692ms

2019-03-11 12:31:00,573 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 7899ms

GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=7951ms

2019-03-11 12:31:08,952 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 7878ms

GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=7937ms

2019-03-11 12:31:17,405 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 7951ms

GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=8037ms

2019-03-11 12:31:26,611 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 8705ms

GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=8835ms

2019-03-11 12:31:35,009 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 7897ms

GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=8083ms

2019-03-11 12:31:43,806 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 8296ms

GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=8416ms
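These repeated JvmPauseMonitor warnings show the NameNode JVM spending seconds at a time in full CMS collections. A quick way to pull them out of the log (the log directory and file name pattern here are assumptions and may differ on your hosts):

# Show the most recent JVM pause warnings in the NameNode log
grep "Detected pause in JVM" /var/log/hadoop-hdfs/*namenode*.log | tail -20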

Solution:

Open the hadoop-env.sh file, find HADOOP_HEAPSIZE= and HADOOP_NAMENODE_INIT_HEAPSIZE=, and increase these two parameters. How much to increase depends on your environment; the default is 1000m, i.e. roughly 1 GB. I adjusted mine as follows:

export HADOOP_HEAPSIZE=32000

export HADOOP_NAMENODE_INIT_HEAPSIZE=16000
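Before settling on values, it can help to check how much memory the NameNode host actually has and how full the current heap is. A rough sketch, assuming a JDK whose jmap still supports -heap (JDK 8 or earlier) and a NameNode running as the hdfs user; <namenode_pid> is a placeholder:

# Total and available memory on the host
free -g

# Find the NameNode process id
jps | grep NameNode

# Inspect the configured and currently used heap of the running NameNode
sudo -u hdfs jmap -heap <namenode_pid>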

Then restart HDFS. If the pauses persist, open hadoop-env.sh again and find HADOOP_NAMENODE_OPTS:

export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS" ---- this is the system default

Adjust it as follows:

export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} -Xms6000m -Xmx6000m -XX:+UseCompressedOops -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=0 -XX:+CMSParallelRemarkEnabled -XX:+DisableExplicitGC -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=75 -XX:SoftRefLRUPolicyMSPerMB=0 $HADOOP_NAMENODE_OPTS"
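After restarting with these options, you can watch whether long old-generation collections still occur. A minimal check, assuming the JDK's jstat tool is on the PATH and <namenode_pid> is the NameNode process id from jps (both are illustrative, not part of the original steps):

# Heap occupancy and GC times, sampled every 5 seconds, 20 samples
jstat -gcutil <namenode_pid> 5000 20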

  

Then restart HDFS again. If the same error still appears, keep increasing the values of HADOOP_HEAPSIZE and HADOOP_NAMENODE_INIT_HEAPSIZE.
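How HDFS is restarted depends on the installation, so the following is only a sketch: the service form assumes a package-based install with init scripts, the script form assumes a plain Apache tarball with HADOOP_HOME set, and checking the heap with jps -v is likewise just an illustrative verification:

# Package-based install (service name may differ on your distribution)
sudo service hadoop-hdfs-namenode restart

# Plain Apache tarball install
$HADOOP_HOME/sbin/stop-dfs.sh
$HADOOP_HOME/sbin/start-dfs.sh

# Confirm the new -Xms/-Xmx actually took effect on the NameNode JVM
jps -v | grep -i namenode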

Life is not only about the daily grind in front of you; there is also poetry and faraway places.