
【hadoop】Setting up an HA (high-availability) cluster

The first half of this article, which configures the high-availability cluster, follows the guide 《Hadoop HA高可用集群搭建(2.7.2)》; the second half deals with problems I ran into while configuring it in my own environment. (Because there were quite a few problems, they have been moved to a separate post: 【hadoop】HA集群遇到的问题.)

Configuring the HA cluster

  1. Configure ZooKeeper's zoo.cfg. Because the ZooKeeper leader is elected, the number of ZK nodes must be odd and at least 3; since this cluster has only 4 nodes, master2, slave1 and slave2 are used as the ZK nodes (a minimal zoo.cfg sketch follows).
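    The sketch below is a minimal zoo.cfg for the three nodes chosen above, assuming ZooKeeper is installed under /usr/local/zookeeper (as in the start-up sessions in step 3); dataDir and the timing values are assumptions, and each server ID must match that node's myid file.
      # Standard timing settings (ZooKeeper defaults, assumed here)
      tickTime=2000
      initLimit=10
      syncLimit=5
      # Client port referenced by core-site.xml and yarn-site.xml in step 2
      clientPort=2181
      # Assumed data directory; each node's copy must contain a myid file (1, 2 or 3)
      dataDir=/usr/local/zookeeper/data
      # The three ZooKeeper nodes of this cluster
      server.1=master2:2888:3888
      server.2=slave1:2888:3888
      server.3=slave2:2888:3888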
  2. Configure the various Hadoop files:
    • core-site.xml:
      <configuration>
          <!-- Set the HDFS nameservice to ns1 -->
          <property>
              <name>fs.defaultFS</name>
              <value>hdfs://ns1/</value>
          </property>
          <!-- Hadoop temporary directory -->
          <property>
              <name>hadoop.tmp.dir</name>
              <value>file:/usr/local/hadoop/tmp</value>
          </property>
          <!-- ZooKeeper quorum addresses -->
          <property>
              <name>ha.zookeeper.quorum</name>
              <value>master2:2181,slave1:2181,slave2:2181</value>
          </property>
      </configuration>
                 
    • hdfs-site.xml:
      <configuration>
      <!-- The HDFS nameservice is ns1; must match core-site.xml -->
      <property>
          <name>dfs.nameservices</name>
          <value>ns1</value>
      </property>
      <!-- ns1 has two NameNodes: nn1 and nn2 -->
      <property>
          <name>dfs.ha.namenodes.ns1</name>
          <value>nn1,nn2</value>
      </property>
      <!-- RPC address of nn1 -->
      <property>
          <name>dfs.namenode.rpc-address.ns1.nn1</name>
          <value>master1:9000</value>
      </property>
      <!-- HTTP address of nn1 -->
      <property>
          <name>dfs.namenode.http-address.ns1.nn1</name>
          <value>master1:50070</value>
      </property>
      <!-- RPC address of nn2 -->
      <property>
          <name>dfs.namenode.rpc-address.ns1.nn2</name>
          <value>master2:9000</value>
      </property>
      <!-- HTTP address of nn2 -->
      <property>
          <name>dfs.namenode.http-address.ns1.nn2</name>
          <value>master2:50070</value>
      </property>
      <!-- Where the NameNode edit log is stored on the JournalNodes -->
      <property>
          <name>dfs.namenode.shared.edits.dir</name>
          <value>qjournal://master2:8485;slave1:8485;slave2:8485/ns1</value>
      </property>
      <!-- Local directory where the JournalNode stores its data -->
      <property>
          <name>dfs.journalnode.edits.dir</name>
          <value>/usr/local/hadoop/journaldata</value>
      </property>
      <!-- Enable automatic NameNode failover -->
      <property>
          <name>dfs.ha.automatic-failover.enabled</name>
          <value>true</value>
      </property>
      <!-- Client-side failover implementation (proxy provider) -->
      <property>
          <name>dfs.client.failover.proxy.provider.ns1</name>
          <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
      </property>
      <!-- Fencing methods; multiple methods are separated by newlines, one method per line -->
      <property>
          <name>dfs.ha.fencing.methods</name>
          <value>
              sshfence
              shell(/bin/true)
          </value>
      </property>
      <!-- sshfence requires passwordless SSH; path to the private key -->
      <property>
          <name>dfs.ha.fencing.ssh.private-key-files</name>
          <value>/root/.ssh/id_rsa</value>
      </property>
      <!-- Timeout for the sshfence mechanism (ms) -->
      <property>
          <name>dfs.ha.fencing.ssh.connect-timeout</name>
          <value>30000</value>
      </property>
      <!-- NameNode metadata directory -->
      <property>
          <name>dfs.namenode.name.dir</name>
          <value>file:/usr/local/hadoop/hdfs/name</value>
      </property>
      <!-- DataNode data directory -->
      <property>
          <name>dfs.datanode.data.dir</name>
          <value>file:/usr/local/hadoop/hdfs/data</value>
      </property>
      <!-- Replication factor -->
      <property>
          <name>dfs.replication</name>
          <value>2</value>
      </property>
      <property>
          <name>dfs.webhdfs.enabled</name>
          <value>true</value>
      </property>
      </configuration>
                 
    • mapred-site.xml
      <configuration>
      <property>
          <name>mapreduce.framework.name</name>
          <value>yarn</value>
      </property>
      <property>
          <name>mapreduce.jobhistory.address</name>
          <value>192.168.159.129:10020</value>
      </property>
      <property>
          <name>mapreduce.jobhistory.webapp.address</name>
          <value>192.168.159.129:19888</value>
      </property>
      </configuration>
                 
    • yarn-site.xml (this cluster will most likely not use YARN)
      <configuration>
      <!-- Site specific YARN configuration properties -->
      <!-- Enable ResourceManager HA -->
      <property>
         <name>yarn.resourcemanager.ha.enabled</name>
         <value>true</value>
      </property>
      <!-- ResourceManager cluster id -->
      <property>
         <name>yarn.resourcemanager.cluster-id</name>
         <value>yrc</value>
      </property>
      <!-- Logical IDs of the two ResourceManagers -->
      <property>
         <name>yarn.resourcemanager.ha.rm-ids</name>
         <value>rm1,rm2</value>
      </property>
      <!-- Hostname of each ResourceManager -->
      <property>
         <name>yarn.resourcemanager.hostname.rm1</name>
         <value>master1</value>
      </property>
      <property>
         <name>yarn.resourcemanager.hostname.rm2</name>
         <value>master2</value>
      </property>
      <!-- ZooKeeper quorum addresses -->
      <property>
         <name>yarn.resourcemanager.zk-address</name>
         <value>master2:2181,slave1:2181,slave2:2181</value>
      </property>
      <property>
         <name>yarn.nodemanager.aux-services</name>
         <value>mapreduce_shuffle</value>
      </property>
      </configuration>
                 
    • slaves
      # just the two DataNode hosts
      
      
      
                 
  3. Start the ZooKeeper ensemble (start ZooKeeper on master2, slave1 and slave2 respectively); a status check is sketched after the session output below.
    [root@master2 bin]# ./zkServer.sh start
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    
    ---
    
    [root@slave1 bin]# ./zkServer.sh start
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
    
    ---
    
    [root@slave2 bin]# ./zkServer.sh start
    ZooKeeper JMX enabled by default
    Using config: /usr/local/zookeeper/bin/../conf/zoo.cfg
    Starting zookeeper ... STARTED
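    The ensemble can then be checked on each node with zkServer.sh status; one node should report Mode: leader and the other two Mode: follower.
    # Run on each of master2, slave1 and slave2
    /usr/local/zookeeper/bin/zkServer.sh status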
               
  4. Start the JournalNodes (on master2, slave1 and slave2 respectively). Note that only the first start-up needs to be done this way; afterwards, starting HDFS also starts the JournalNodes.
    [root@master2 bin]# cd /usr/local/hadoop/sbin/
    [root@master2 sbin]# ./hadoop-daemon.sh start journalnode
    starting journalnode, logging to /usr/local/hadoop/logs/hadoop-root-journalnode-master2.out
    [root@master2 sbin]# jps
     QuorumPeerMain
     JournalNode
     Jps
    
    ---
    
    [root@slave1 bin]# cd /usr/local/hadoop/sbin/
    [root@slave1 sbin]# ./hadoop-daemon.sh start journalnode
    starting journalnode, logging to /usr/local/hadoop/logs/hadoop-root-journalnode-slave1.out
    [root@slave1 sbin]# jps
     JournalNode
     QuorumPeerMain
     Jps
    
    ---
    
    [root@slave2 bin]# cd /usr/local/hadoop/sbin/
    [root@slave2 sbin]# ./hadoop-daemon.sh start journalnode
    starting journalnode, logging to /usr/local/hadoop/logs/hadoop-root-journalnode-slave2.out
    [root@slave2 sbin]# jps
     JournalNode
     QuorumPeerMain
     Jps
               
  5. Format HDFS (run on master1)
  6. Format ZKFC (run on master1)
  7. Start HDFS (a command sketch for steps 5-7 follows)
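    A minimal command sketch for steps 5-7 (first-time set-up only), assuming the /usr/local/hadoop layout used throughout this article; the exact sequence may differ slightly from the referenced guide:
    # On master1: format the active NameNode
    /usr/local/hadoop/bin/hdfs namenode -format
    # On master1: start this NameNode so the standby can copy its metadata
    /usr/local/hadoop/sbin/hadoop-daemon.sh start namenode
    # On master2: bootstrap the standby NameNode from master1
    /usr/local/hadoop/bin/hdfs namenode -bootstrapStandby
    # On master1: format the HA state znode in ZooKeeper (ZKFC)
    /usr/local/hadoop/bin/hdfs zkfc -formatZK
    # On master1: start HDFS (NameNodes, DataNodes, JournalNodes and ZKFCs)
    /usr/local/hadoop/sbin/start-dfs.sh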
