
HDFS fault: the NameNode service reports GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=

After starting the NameNode service, check the NameNode log under /var/log/hadoop-hdfs. It is full of entries like the following:

GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=7692ms

2019-03-11 12:31:00,573 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 7899ms

GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=7951ms

2019-03-11 12:31:08,952 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 7878ms

GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=7937ms

2019-03-11 12:31:17,405 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 7951ms

GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=8037ms

2019-03-11 12:31:26,611 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 8705ms

GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=8835ms

2019-03-11 12:31:35,009 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 7897ms

GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=8083ms

2019-03-11 12:31:43,806 INFO org.apache.hadoop.util.JvmPauseMonitor: Detected pause in JVM or host machine (eg GC): pause of approximately 8296ms

GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=8416ms
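
Before changing any settings, it is worth confirming that the pauses really are full CMS collections in the NameNode JVM rather than something on the host (swapping, CPU contention). A minimal check, assuming the JDK's jps and jstat tools are available to the user running the NameNode:

# Find the NameNode process id, then sample its GC counters every 5 seconds.
jps | grep NameNode
jstat -gcutil <namenode_pid> 5000
# A climbing FGC/FGCT (full GC count/time) together with an O column (old-generation
# utilization) stuck near 100% means the heap is too small for the namespace the
# NameNode has to hold in memory.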

Solution:

Open the hadoop-env.sh file and find HADOOP_HEAPSIZE= and HADOOP_NAMENODE_INIT_HEAPSIZE=. Adjust these two parameters; how far to raise them depends on your situation. The default is 1000m, i.e. about 1 GB. I adjusted mine as follows:

export HADOOP_HEAPSIZE=32000

export HADOOP_NAMENODE_INIT_HEAPSIZE=16000
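
Restart HDFS for the new values to take effect. The exact restart command depends on how the cluster was installed; the service name below follows CDH/Bigtop-style packaging and is only an assumption about your deployment. Since HADOOP_HEAPSIZE is normally turned into an -Xmx flag on the java command line, you can also verify the change was picked up:

# Restart the NameNode (adjust to your deployment: Cloudera Manager, init scripts, or the sbin scripts).
sudo service hadoop-hdfs-namenode restart
# Confirm the running JVM has the larger heap by inspecting its command line.
ps -ef | grep -i namenode | tr ' ' '\n' | grep '^-Xm'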

Restart HDFS. If that still is not enough, open hadoop-env.sh again and find HADOOP_NAMENODE_OPTS:

export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS" ---- this is the system default value

Change it as follows:

export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} -Xms6000m -Xmx6000m -XX:+UseCompressedOops -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled -XX:+UseCMSCompactAtFullCollection -XX:CMSFullGCsBeforeCompaction=0 -XX:+CMSParallelRemarkEnabled -XX:+DisableExplicitGC -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=75 -XX:SoftRefLRUPolicyMSPerMB=0 $HADOOP_NAMENODE_OPTS"
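
To see whether the larger heap and the CMS settings actually shorten the pauses, GC logging can be appended to the same variable. The flags below are standard JDK 7/8 options (CMS itself was deprecated and later removed in newer JDKs), and the log path is only an illustrative choice:

# Optional: write a GC log so pause times can be compared before and after the tuning.
export HADOOP_NAMENODE_OPTS="$HADOOP_NAMENODE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/hadoop-hdfs/namenode-gc.log"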

  

Then restart HDFS again. If the same errors keep appearing, continue to increase the values of HADOOP_HEAPSIZE and HADOOP_NAMENODE_INIT_HEAPSIZE above.
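
Rather than raising the values purely by trial and error, the heap can also be sized from the number of objects the NameNode keeps in memory. A commonly cited rule of thumb is roughly 1 GB of heap per million files/blocks; the grep pattern below assumes the usual fsck summary format:

# Count namespace objects; every file, directory and block lives in NameNode heap.
hdfs fsck / 2>/dev/null | grep -E 'Total (files|dirs|blocks)'
# Example: about 20 million blocks suggests planning for roughly 20 GB of heap, plus headroom.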

Life is about more than just getting by; there is also poetry and faraway places.