【hadoop】Setting up an HA (high-availability) cluster

The first half of this post, which configures the HA cluster, follows "Hadoop HA high-availability cluster setup (2.7.2)"; the second half deals with problems I ran into while configuring it in my own environment. (Because there were quite a few problems, they have been moved to a separate post: 【hadoop】Problems encountered with the HA cluster.)

Configuring the HA cluster

  1. Configure ZooKeeper's zoo.cfg. Because the ZooKeeper leader is elected, the number of ZooKeeper nodes must be odd and at least 3. This cluster has only 4 nodes, so master2, slave1, and slave2 serve as the ZooKeeper nodes (a sample zoo.cfg is sketched below).
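    A minimal zoo.cfg sketch for this layout, assuming ZooKeeper is installed under /usr/local/zookeeper and uses the default ports; the dataDir path and the server IDs are illustrative, not taken from the original environment:
      tickTime=2000
      initLimit=10
      syncLimit=5
      # illustrative data directory; each node also needs a myid file here containing its own server number
      dataDir=/usr/local/zookeeper/data
      clientPort=2181
      # one entry per ZooKeeper node; the two trailing ports carry quorum traffic and leader election
      server.1=master2:2888:3888
      server.2=slave1:2888:3888
      server.3=slave2:2888:3888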
  2. Configure the Hadoop configuration files:
    • core-site.xml:
      <configuration>
          <!-- Set the HDFS nameservice to ns1 -->
          <property>
              <name>fs.defaultFS</name>
              <value>hdfs://ns1/</value>
          </property>
          <!-- Hadoop temporary directory -->
          <property>
              <name>hadoop.tmp.dir</name>
              <value>file:/usr/local/hadoop/tmp</value>
          </property>
          <!-- ZooKeeper quorum addresses -->
          <property>
              <name>ha.zookeeper.quorum</name>
              <value>master2:2181,slave1:2181,slave2:2181</value>
          </property>
      </configuration>
                 
    • hdfs-site.xml:
      <configuration>
      <!-- The HDFS nameservice is ns1; must match the value in core-site.xml -->
      <property>
          <name>dfs.nameservices</name>
          <value>ns1</value>
      </property>
      <!-- ns1 has two NameNodes: nn1 and nn2 -->
      <property>
          <name>dfs.ha.namenodes.ns1</name>
          <value>nn1,nn2</value>
      </property>
      <!-- RPC address of nn1 -->
      <property>
          <name>dfs.namenode.rpc-address.ns1.nn1</name>
          <value>master1:9000</value>
      </property>
      <!-- HTTP address of nn1 -->
      <property>
          <name>dfs.namenode.http-address.ns1.nn1</name>
          <value>master1:50070</value>
      </property>
      <!-- RPC address of nn2 -->
      <property>
          <name>dfs.namenode.rpc-address.ns1.nn2</name>
          <value>master2:9000</value>
      </property>
      <!-- HTTP address of nn2 -->
      <property>
          <name>dfs.namenode.http-address.ns1.nn2</name>
          <value>master2:50070</value>
      </property>
      <!-- Where the NameNode stores its shared edit log on the JournalNodes -->
      <property>
          <name>dfs.namenode.shared.edits.dir</name>
          <value>qjournal://master2:8485;slave1:8485;slave2:8485/ns1</value>
      </property>
      <!-- Local disk directory where each JournalNode keeps its data -->
      <property>
          <name>dfs.journalnode.edits.dir</name>
          <value>/usr/local/hadoop/journaldata</value>
      </property>
      <!-- Enable automatic NameNode failover -->
      <property>
          <name>dfs.ha.automatic-failover.enabled</name>
          <value>true</value>
      </property>
      <!-- Failover proxy provider: how clients locate the active NameNode -->
      <property>
          <name>dfs.client.failover.proxy.provider.ns1</name>
          <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
      </property>
      <!-- Fencing methods; multiple methods are separated by newlines, one per line -->
      <property>
          <name>dfs.ha.fencing.methods</name>
          <value>
              sshfence
              shell(/bin/true)
          </value>
      </property>
      <!-- sshfence requires passwordless SSH; private key to use -->
      <property>
          <name>dfs.ha.fencing.ssh.private-key-files</name>
          <value>/root/.ssh/id_rsa</value>
      </property>
      <!-- sshfence connection timeout (ms) -->
      <property>
          <name>dfs.ha.fencing.ssh.connect-timeout</name>
          <value>30000</value>
      </property>
      <!-- NameNode metadata directory -->
      <property>
          <name>dfs.namenode.name.dir</name>
          <value>file:/usr/local/hadoop/hdfs/name</value>
      </property>
      <!-- DataNode data directory -->
      <property>
          <name>dfs.datanode.data.dir</name>
          <value>file:/usr/local/hadoop/hdfs/data</value>
      </property>
      <!-- Replication factor -->
      <property>
          <name>dfs.replication</name>
          <value>2</value>
      </property>
      <property>
          <name>dfs.webhdfs.enabled</name>
          <value>true</value>
      </property>
      </configuration>
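      With fs.defaultFS set to hdfs://ns1/ and the failover proxy provider above, clients address the nameservice rather than a specific NameNode host. A quick sanity check once the cluster is up (the command is standard, the path is just an example):
      # ns1 is resolved to whichever NameNode is currently active
      /usr/local/hadoop/bin/hdfs dfs -ls hdfs://ns1/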
                 
    • mapred-site.xml
      <configuration>
      <property>
          <name>mapreduce.framework.name</name>
          <value>yarn</value>
      </property>
      <property>
          <name>mapreduce.jobhistory.address</name>
          <value>192.168.159.129:10020</value>
      </property>
      <property>
          <name>mapreduce.jobhistory.webapp.address</name>
          <value>192.168.159.129:19888</value>
      </property>
      </configuration>
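      Note that neither start-dfs.sh nor start-yarn.sh starts the JobHistory server that these two addresses refer to; if job history is needed, it is typically started separately with the standard Hadoop 2.x script (shown here as a sketch):
      # run on the host that 192.168.159.129 points to
      /usr/local/hadoop/sbin/mr-jobhistory-daemon.sh start historyserver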
                 
    • yarn-site.xml (this cluster will most likely not use YARN)
      <configuration>
      <!-- Site specific YARN configuration properties -->
      <!-- Enable ResourceManager HA -->
      <property>
         <name>yarn.resourcemanager.ha.enabled</name>
         <value>true</value>
      </property>
      <!-- ResourceManager cluster id -->
      <property>
         <name>yarn.resourcemanager.cluster-id</name>
         <value>yrc</value>
      </property>
      <!-- Logical names of the ResourceManagers -->
      <property>
         <name>yarn.resourcemanager.ha.rm-ids</name>
         <value>rm1,rm2</value>
      </property>
      <!-- Hostname of each ResourceManager -->
      <property>
         <name>yarn.resourcemanager.hostname.rm1</name>
         <value>master1</value>
      </property>
      <property>
         <name>yarn.resourcemanager.hostname.rm2</name>
         <value>master2</value>
      </property>
      <!-- ZooKeeper quorum addresses -->
      <property>
         <name>yarn.resourcemanager.zk-address</name>
         <value>master2:2181,slave1:2181,slave2:2181</value>
      </property>
      <property>
         <name>yarn.nodemanager.aux-services</name>
         <value>mapreduce_shuffle</value>
      </property>
      </configuration>
                 
    • slaves
      # just the two DataNode hosts
      slave1
      slave2

  3. Start the ZooKeeper cluster (start ZooKeeper on master2, slave1, and slave2 respectively)
    [root@master2 bin]# ./zkServer.sh start
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    
    ---
    
    [root@slave1 bin]# ./zkServer.sh start
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    
    ---
    
    [root@slave2 bin]# ./zkServer.sh start
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
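    To confirm the quorum actually formed, zkServer.sh status can be run on each of the three nodes; one should report itself as leader and the other two as followers (the output below is illustrative):
    [root@master2 bin]# ./zkServer.sh status
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Mode: follower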
               
  4. Start the JournalNodes (on master2, slave1, and slave2 respectively). Note that they only need to be started this way the first time; on later startups, starting HDFS will bring up the JournalNodes as well.
    [root@master2 bin]# cd /usr/local/hadoop/sbin/
    [root@master2 sbin]# ./hadoop-daemon.sh start journalnode
    starting journalnode, logging to /usr/local/hadoop/logs/hadoop-root-journalnode-master2.out
    [root@master2 sbin]# jps
     QuorumPeerMain
     JournalNode
     Jps
    
    ---
    
    [root@slave1 bin]# cd /usr/local/hadoop/sbin/
    [root@slave1 sbin]# ./hadoop-daemon.sh start journalnode
    starting journalnode, logging to /usr/local/hadoop/logs/hadoop-root-journalnode-slave1.out
    [root@slave1 sbin]# jps
     JournalNode
     QuorumPeerMain
     Jps
    
    ---
    
    [root@slave2 bin]# cd /usr/local/hadoop/sbin/
    [root@slave2 sbin]# ./hadoop-daemon.sh start journalnode
    starting journalnode, logging to /usr/local/hadoop/logs/hadoop-root-journalnode-slave2.out
    [root@slave2 sbin]# jps
     JournalNode
     QuorumPeerMain
     Jps
               
  5. Format HDFS (run on master1; a sketch of the commands follows below)
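    On Hadoop 2.7.x the usual commands for this step look roughly as follows; the copy to master2 is an assumption about this environment (running "hdfs namenode -bootstrapStandby" on master2 once nn1 is up is the common alternative):
    [root@master1 sbin]# /usr/local/hadoop/bin/hdfs namenode -format
    # copy the freshly formatted NameNode metadata to the standby (master2) so both NameNodes
    # start from the same namespace; the directory matches dfs.namenode.name.dir above
    [root@master1 sbin]# scp -r /usr/local/hadoop/hdfs/name root@master2:/usr/local/hadoop/hdfs/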
  6. Format ZKFC (run on master1)
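    This initializes the HA state znode (/hadoop-ha/ns1 by default) in the ZooKeeper quorum configured above; the standard command is:
    [root@master1 sbin]# /usr/local/hadoop/bin/hdfs zkfc -formatZK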
  7. Start HDFS
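    With automatic failover enabled, start-dfs.sh brings up the NameNodes, DataNodes, JournalNodes, and ZKFC daemons. The haadmin output below is illustrative; which NameNode becomes active depends on the election:
    [root@master1 sbin]# /usr/local/hadoop/sbin/start-dfs.sh
    # check which NameNode is active and which is standby
    [root@master1 sbin]# /usr/local/hadoop/bin/hdfs haadmin -getServiceState nn1
    active
    [root@master1 sbin]# /usr/local/hadoop/bin/hdfs haadmin -getServiceState nn2
    standby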
