
Setting Up a Hadoop High-Availability (HA) Environment

Note: perform the following steps on all three nodes (nodeone, nodetwo, nodethree) in the same way.

1. Extract the Hadoop tarball

[root@nodetwo install]# tar -zxvf hadoop-2.6.0-cdh5.14.2.tar.gz      

2. Rename the extracted directory

[root@nodetwo install]# mv hadoop-2.6.0-cdh5.14.2 hadoop
[root@nodetwo install]# ll
total 423732
drwxr-xr-x 14 1106 4001       241 Mar 28  2018 hadoop
-rw-r--r--  1 root root 433895552 Jan 13 19:58 hadoop-2.6.0-cdh5.14.2.tar.gz
drwxr-xr-x 32 root root      4096 Jan 12 17:27 hbase
drwxr-xr-x 12 root root       234 Jan 12 15:05 hive
drwxr-xr-x  4 root root        51 Nov 19 14:59 zookeeper

3. Configure hdfs-site.xml

vi hdfs-site.xml      
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>
    sshfence
    shell(/bin/true)
  </value>
</property>
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>nodeone:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>nodetwo:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>nodeone:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>nodetwo:50070</value>
</property>


<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://nodeone:8485;nodetwo:8485;nodethree:8485/mycluster</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/var/cdh/hadoop/journal/data</value>
</property>


<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>

<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/root/.ssh/id_rsa</value>
</property>

 <property>
   <name>dfs.ha.automatic-failover.enabled</name>
   <value>true</value>
 </property>      
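The `qjournal` value in `dfs.namenode.shared.edits.dir` follows a fixed pattern (`qjournal://host:port;host:port;.../nameservice`), so it can be generated from the node list instead of typed by hand. A minimal sketch; `journal_uri` is a hypothetical helper for illustration, not a Hadoop command:

```shell
# Build the dfs.namenode.shared.edits.dir value from a space-separated node list.
# journal_uri is a hypothetical helper, not part of Hadoop.
journal_uri() {
  nodes="$1"; port="$2"; ns="$3"
  # one host per line -> append :port -> rejoin with ';'
  hosts=$(echo "$nodes" | tr ' ' '\n' | sed "s/\$/:$port/" | paste -sd';' -)
  echo "qjournal://$hosts/$ns"
}

journal_uri "nodeone nodetwo nodethree" 8485 mycluster
# → qjournal://nodeone:8485;nodetwo:8485;nodethree:8485/mycluster
```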

4. Edit core-site.xml

vi core-site.xml      
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>

 <property>
   <name>ha.zookeeper.quorum</name>
   <value>nodeone:2181,nodetwo:2181,nodethree:2181</value>
 </property>      
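Before restarting anything, it is worth sanity-checking that every comma-separated entry in the quorum string is a well-formed `host:port` pair. A small sketch, nothing Hadoop-specific:

```shell
# Check that each ha.zookeeper.quorum entry looks like host:port.
QUORUM="nodeone:2181,nodetwo:2181,nodethree:2181"
echo "$QUORUM" | tr ',' '\n' \
  | awk -F: 'NF==2 && $2+0==$2 {n++} END {print n " valid entries"}'
# → 3 valid entries
```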

5. Edit the slaves file

vi slaves      
nodeone
nodetwo
nodethree      
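Since all three nodes need identical config files, one approach is to edit them on a single node and copy them to the rest. The sketch below only prints the scp commands (pipe it to `sh` to actually run them); the `/opt/install/hadoop` path is an assumption, adjust it to your install directory:

```shell
# Print the scp commands that would sync the Hadoop config to the other nodes.
# NOTE: /opt/install/hadoop is an assumed install path - change it to match yours.
HADOOP_HOME="/opt/install/hadoop"
for node in nodetwo nodethree; do
  echo "scp -r $HADOOP_HOME/etc/hadoop root@$node:$HADOOP_HOME/etc/"
done
```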

6. Start ZooKeeper (on all three nodes)

zkServer.sh start
zkServer.sh status      

Initial startup:

7. Start the JournalNodes first

hadoop-daemon.sh start journalnode      
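Before starting the JournalNodes, it can help to make sure the edits directory configured in `dfs.journalnode.edits.dir` exists on every node; the daemon can usually create it itself, so pre-creating it mainly surfaces permission problems early. A sketch, run on each node:

```shell
# Pre-create the JournalNode edits directory.
# The path must match the dfs.journalnode.edits.dir value in hdfs-site.xml.
EDITS_DIR="/var/cdh/hadoop/journal/data"
mkdir -p "$EDITS_DIR" && echo "ready: $EDITS_DIR"
```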

Note: the following steps are NOT run on all three machines at once.

8. Pick one NameNode and format it

hdfs namenode -format      

9. Start the formatted NameNode so the other one can sync from it

hadoop-daemon.sh start namenode      

10. On the other NameNode, sync the metadata

hdfs namenode -bootstrapStandby      

11. Format the ZKFC state in ZooKeeper

hdfs zkfc -formatZK      

12. Start the cluster

start-all.sh      
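After start-all.sh, `jps` on each node should show the expected daemons; on a NameNode host that is roughly NameNode, DFSZKFailoverController, JournalNode, plus ZooKeeper's QuorumPeerMain. Below is a small sketch that compares jps-style output against an expected list; `check_daemons` is a hypothetical helper, and the `printf` line is canned sample output standing in for real `jps` output:

```shell
# Compare process-list output (jps format: "pid Name") against expected daemons.
# check_daemons is a hypothetical helper for illustration.
check_daemons() {
  procs=$(cat)                      # read jps output once from stdin
  for d in "$@"; do
    if echo "$procs" | grep -qw "$d"; then
      echo "OK   $d"
    else
      echo "MISS $d"
    fi
  done
}

# Canned sample; on a live node use:
#   jps | check_daemons NameNode JournalNode DFSZKFailoverController
printf '1234 NameNode\n2345 JournalNode\n' \
  | check_daemons NameNode JournalNode DFSZKFailoverController
```

With the canned input this prints OK for NameNode and JournalNode and MISS for DFSZKFailoverController.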

Install psmisc, which automatic failover's sshfence fencing needs (it provides the fuser command):

yum install -y psmisc      
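A quick check that fuser is actually on the PATH after installing (a sketch):

```shell
# sshfence shells into the old active NameNode and uses fuser (from psmisc)
# to kill the stale process, so fuser must exist on every NameNode host.
if command -v fuser >/dev/null 2>&1; then
  echo "fuser available"
else
  echo "fuser missing: run 'yum install -y psmisc'"
fi
```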
