
HDFS HA Setup (ZKFC Automatic Failover)

For the basic cluster setup, see this earlier post: hadoop cluster setup notes

On top of that basic cluster, configure the files below.

HA configuration:

hdfs-site.xml

<configuration>
  <!-- Replication factor; set according to the number of DataNodes -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <!-- Logical name of the nameservice -->
  <property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
  </property>
  <!-- Logical names of the two NameNodes in the nameservice -->
  <property>
    <name>dfs.ha.namenodes.mycluster</name>
    <value>nn1,nn2</value>
  </property>
  <!-- RPC and HTTP addresses of nn1 and nn2 -->
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>chdp11:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>chdp12:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>chdp11:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>chdp12:50070</value>
  </property>
  <!-- JournalNode quorum that stores the shared edit log -->
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://chdp11:8485;chdp12:8485;chdp13:8485/mycluster</value>
  </property>
  <!-- Proxy provider clients use to locate the active NameNode -->
  <property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <!-- Fence the old active NameNode over SSH during failover -->
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/root/.ssh/id_rsa</value>
  </property>
  <!-- Enable ZKFC-based automatic failover -->
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
</configuration>
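Note that sshfence only works if the ZKFC on each NameNode host can SSH to the other NameNode without a password, using the private key configured above. A minimal sketch, assuming root is the login user on chdp11 and chdp12 and the key lives at the path from dfs.ha.fencing.ssh.private-key-files:

# On chdp11 (repeat symmetrically on chdp12): generate a key pair if none exists
ssh-keygen -t rsa -N "" -f /home/root/.ssh/id_rsa
# Authorize it on the peer NameNode so sshfence can log in without a password
ssh-copy-id -i /home/root/.ssh/id_rsa.pub root@chdp12
# Verify: this must succeed without prompting for a password
ssh root@chdp12 hostname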

core-site.xml (trash is configured here; drop those properties if you don't need the trash feature)

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
  <!-- Local directory where JournalNodes store edits
       (conventionally placed in hdfs-site.xml, but HDFS daemons pick it up here too) -->
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/usr/SFT/HA/hadoop-2.7.2/data/jn</value>
  </property>
  <!-- ZooKeeper quorum used by the ZKFCs -->
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>chdp11:2181,chdp12:2181,chdp13:2181</value>
  </property>
  <!-- Directory for files Hadoop produces at runtime -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/SFT/HA/hadoop-2.7.2/data/tmp</value>
  </property>
  <!-- Configuration for trash -->
  <property>
    <name>fs.trash.interval</name>
    <value>60</value>
    <description>Number of minutes after which the checkpoint
    gets deleted. If zero, the trash feature is disabled.
    This option may be configured both on the server and the
    client. If trash is disabled server side then the client
    side configuration is checked. If trash is enabled on the
    server side then the value configured on the server is
    used and the client configuration value is ignored.
    </description>
  </property>
  <property>
    <name>fs.trash.checkpoint.interval</name>
    <value>0</value>
    <description>Number of minutes between trash checkpoints.
    Should be smaller or equal to fs.trash.interval. If zero,
    the value is set to the value of fs.trash.interval.
    Every time the checkpointer runs it creates a new checkpoint
    out of current and removes checkpoints created more than
    fs.trash.interval minutes ago.
    </description>
  </property>
</configuration>
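Both files must be identical on every node of the cluster. A quick sketch for pushing them out, assuming the same /usr/SFT/HA/hadoop-2.7.2 path on all three hosts:

# Run on chdp11 after editing the files
scp /usr/SFT/HA/hadoop-2.7.2/etc/hadoop/core-site.xml /usr/SFT/HA/hadoop-2.7.2/etc/hadoop/hdfs-site.xml root@chdp12:/usr/SFT/HA/hadoop-2.7.2/etc/hadoop/
scp /usr/SFT/HA/hadoop-2.7.2/etc/hadoop/core-site.xml /usr/SFT/HA/hadoop-2.7.2/etc/hadoop/hdfs-site.xml root@chdp13:/usr/SFT/HA/hadoop-2.7.2/etc/hadoop/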

Follow-up steps (I use full paths throughout)

(1) Stop all HDFS services:

/usr/SFT/HA/hadoop-2.7.2/sbin/stop-dfs.sh

(2) Start the ZooKeeper cluster. Note that zkServer.sh ships with ZooKeeper, not Hadoop, so run it from your ZooKeeper installation on each of chdp11, chdp12, and chdp13 (the path below assumes a ZooKeeper install under /usr/SFT; adjust the version to yours):

/usr/SFT/zookeeper-3.4.10/bin/zkServer.sh start
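To confirm the quorum formed, check each node's role; one node should report "leader" and the others "follower" (same path assumption as above):

/usr/SFT/zookeeper-3.4.10/bin/zkServer.sh status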

(3) Initialize the HA state in ZooKeeper:

/usr/SFT/HA/hadoop-2.7.2/bin/hdfs zkfc -formatZK
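formatZK creates a znode under /hadoop-ha that the ZKFCs use for leader election. You can verify it with the ZooKeeper CLI (ZooKeeper path assumed as above):

/usr/SFT/zookeeper-3.4.10/bin/zkCli.sh -server chdp11:2181
# Inside the zkCli shell:
ls /hadoop-ha        # should print [mycluster]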

(4) Start the HDFS services:

/usr/SFT/HA/hadoop-2.7.2/sbin/start-dfs.sh
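With automatic failover enabled, start-dfs.sh also brings up the JournalNodes and a DFSZKFailoverController (ZKFC) next to each NameNode. jps on chdp11 should show roughly:

jps
# NameNode
# DataNode
# JournalNode
# DFSZKFailoverController
# QuorumPeerMain   (the ZooKeeper server)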

(5) Start the NameNode on the standby machine (see the note after the command):

/usr/SFT/HA/hadoop-2.7.2/sbin/hadoop-daemon.sh start namenode
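If the standby has never held any metadata (a fresh HA setup), first copy it over from the active NameNode with /usr/SFT/HA/hadoop-2.7.2/bin/hdfs namenode -bootstrapStandby before starting it. Once both NameNodes are up, check their states and exercise the automatic failover:

/usr/SFT/HA/hadoop-2.7.2/bin/hdfs haadmin -getServiceState nn1   # e.g. active
/usr/SFT/HA/hadoop-2.7.2/bin/hdfs haadmin -getServiceState nn2   # e.g. standby
# Test failover: kill the active NameNode's process (pid from jps),
# then confirm the other NameNode becomes active within a few seconds
kill -9 <pid-of-active-NameNode>
/usr/SFT/HA/hadoop-2.7.2/bin/hdfs haadmin -getServiceState nn2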
