
Hadoop-2.5.1 HA Cluster Setup

Install JDK-7u79-64.rpm

Environment: CentOS-6.3, Hadoop-2.5.1, zookeeper-3.4.6, jdk-1.7

Preparation: disable the firewall and synchronize the system clocks (both required)

HBase / Hadoop / JDK version compatibility reference


service iptables stop

ntpdate 0.asia.pool.ntp.org
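Note: service iptables stop only turns the firewall off until the next boot; if you want it to stay off after a reboot (assumed acceptable for this lab setup), also disable the service:

chkconfig iptables off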

1. Host mapping on hadoop1, hadoop2, hadoop3, hadoop4

vi /etc/hosts

192.168.25.151 hadoop1

192.168.25.152 hadoop2

192.168.25.153 hadoop3

192.168.25.154 hadoop4
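
Once the same four entries are saved on every machine, a quick check that the names resolve (run from hadoop1, for example):

ping -c 1 hadoop2
ping -c 1 hadoop3
ping -c 1 hadoop4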
           

2. Passwordless SSH login

Run the following on hadoop1, hadoop2, hadoop3, hadoop4:

ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa

cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
           

Verify that each node can SSH to itself.

e.g.: ssh hadoop1

On hadoop1:

cd ~/.ssh

Copy hadoop1's public key to hadoop2, hadoop3, hadoop4:

scp ./id_dsa.pub root@hadoop2:/opt/

scp ./id_dsa.pub root@hadoop3:/opt/

scp ./id_dsa.pub root@hadoop4:/opt/
           

On hadoop2:

cat /opt/id_dsa.pub >> ~/.ssh/authorized_keys
           

On hadoop3:

cat /opt/id_dsa.pub >> ~/.ssh/authorized_keys
           

On hadoop4:

cat /opt/id_dsa.pub >> ~/.ssh/authorized_keys
           

Verify that hadoop1 can log in to hadoop2, hadoop3, hadoop4 without a password:

ssh hadoop2     exit

ssh hadoop3     exit

ssh hadoop4     exit
           

Copy hadoop2's public key to hadoop1 (hadoop2 is the second NameNode and needs passwordless SSH to hadoop1 for failover fencing):

scp ./id_dsa.pub root@hadoop1:/opt/

On hadoop1:

cat /opt/id_dsa.pub >> ~/.ssh/authorized_keys

ssh hadoop1
           

3. Environment variables on hadoop1, hadoop2, hadoop3, hadoop4

vi ~/.bash_profile

export HADOOP_HOME=/home/hadoop-2.5.1

export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

export ZOOKEEPER_HOME=/home/zookeeper-3.4.6

export PATH=$PATH:$ZOOKEEPER_HOME/bin

source ~/.bash_profile
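
If the JDK rpm did not already put java on the PATH, you can add JAVA_HOME to the same file as well (the install path below is the one used later in hadoop-env.sh) and re-run source ~/.bash_profile:

export JAVA_HOME=/usr/java/jdk1.7.0_79
export PATH=$PATH:$JAVA_HOME/bin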
           

Four machines: hadoop1, hadoop2, hadoop3, hadoop4

Role distribution (NN = NameNode, DN = DataNode, ZK = ZooKeeper, ZKFC = ZK Failover Controller, JN = JournalNode, RM = ResourceManager, NM = NodeManager):

Hadoop1: NN, ZK, ZKFC, RM
Hadoop2: NN, DN, ZK, ZKFC, JN, RM, NM
Hadoop3: DN, ZK, JN, NM
Hadoop4: DN, JN, NM

Extract hadoop-2.5.1.tar.gz on hadoop1.
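
All of the configuration files edited in the steps below live in Hadoop's configuration directory, which with the layout above is:

cd /home/hadoop-2.5.1/etc/hadoop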

1. vim core-site.xml

<configuration>

<property>

  <name>fs.defaultFS</name>

  <value>hdfs://bjsxt</value>

</property>

<property>

   <name>ha.zookeeper.quorum</name>

   <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>

</property>

<property>

  <name>hadoop.tmp.dir</name>

  <value>/opt/hadoop</value>

</property>

</configuration>
           

2. vim hdfs-site.xml

<configuration>

<property>

  <name>dfs.nameservices</name>

  <value>bjsxt</value>

</property>

<property>

  <name>dfs.ha.namenodes.bjsxt</name>

  <value>nn1,nn2</value>

</property>

<property>

  <name>dfs.namenode.rpc-address.bjsxt.nn1</name>

  <value>hadoop1:8020</value>

</property>

<property>

  <name>dfs.namenode.rpc-address.bjsxt.nn2</name>

  <value>hadoop2:8020</value>

</property>

<property>

  <name>dfs.namenode.http-address.bjsxt.nn1</name>

  <value>hadoop1:50070</value>

</property>

<property>

  <name>dfs.namenode.http-address.bjsxt.nn2</name>

  <value>hadoop2:50070</value>

</property>

<property>

  <name>dfs.namenode.shared.edits.dir</name>

  <value>qjournal://hadoop2:8485;hadoop3:8485;hadoop4:8485/bjsxt</value>

</property>

<property>

  <name>dfs.client.failover.proxy.provider.bjsxt</name>

  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>

</property>

<property>

  <name>dfs.ha.fencing.methods</name>

  <value>sshfence</value>

</property>

<property>

  <name>dfs.ha.fencing.ssh.private-key-files</name>

  <value>/root/.ssh/id_dsa</value>

</property>

<property>

  <name>dfs.journalnode.edits.dir</name>

  <value>/opt/hadoop/data</value>

</property>

<property>

   <name>dfs.ha.automatic-failover.enabled</name>

   <value>true</value>

</property>

</configuration>
           

3. vim hadoop-env.sh

Change the JAVA_HOME line in it to:

export JAVA_HOME=/usr/java/jdk1.7.0_79
           

Copy the configured hadoop-2.5.1 directory from hadoop1 to hadoop2, hadoop3, hadoop4:

[root@hadoop1 home]# scp -r hadoop-2.5.1/ root@hadoop2:/home/

[root@hadoop1 home]# scp -r hadoop-2.5.1/ root@hadoop3:/home/

[root@hadoop1 home]# scp -r hadoop-2.5.1/ root@hadoop4:/home/
           

4. Prepare zookeeper

a) Three zookeeper nodes: hadoop1, hadoop2, hadoop3

b) Create or modify the zoo.cfg configuration file:

vi zoo.cfg

dataDir=/opt/zookeeper

clientPort=2181

tickTime=2000

initLimit=5

syncLimit=2

server.1=hadoop1:2888:3888

server.2=hadoop2:2888:3888

server.3=hadoop3:2888:3888
           

c) On all three zookeeper nodes, create the dataDir directory:

mkdir /opt/zookeeper

Then create a file named myid in /opt/zookeeper; its content is 1, 2, and 3 on hadoop1, hadoop2, and hadoop3 respectively.
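
For example (run the matching line on its own node; the numbers must match the server.1/2/3 entries in zoo.cfg):

echo 1 > /opt/zookeeper/myid     # on hadoop1
echo 2 > /opt/zookeeper/myid     # on hadoop2
echo 3 > /opt/zookeeper/myid     # on hadoop3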

5. Configure Hadoop's slaves file with the following entries:

hadoop2
hadoop3
hadoop4
           

6. Start the three zookeepers: zkServer.sh start
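
To confirm the ensemble is healthy, check each zookeeper node; one should report Mode: leader and the other two Mode: follower:

zkServer.sh status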

7. Start the three JournalNodes (hadoop2, hadoop3, hadoop4): hadoop-daemon.sh start journalnode

8. Format HDFS on one of the NameNodes (hadoop1): hdfs namenode -format

9. Copy the freshly formatted metadata to the other NameNode (hadoop2): scp -r /opt/hadoop/ root@hadoop2:/opt/hadoop/

a) Start the NameNode that was just formatted (hadoop1): hadoop-daemon.sh start namenode

b) On the NameNode that was not formatted (hadoop2), run: hdfs namenode -bootstrapStandby

c) Start the second NameNode (hadoop2): hadoop-daemon.sh start namenode

10. Initialize ZKFC on one of the NameNodes (hadoop1 here): hdfs zkfc -formatZK

11. Stop the daemons started above: stop-dfs.sh

12. Start everything: start-dfs.sh

Only after completing the steps above will you see one NameNode in the active state and the other in standby.
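
To double-check from the command line (nn1/nn2 are the NameNode IDs defined in hdfs-site.xml above):

jps                                  # each node should show the daemons from the role table
hdfs haadmin -getServiceState nn1    # prints active or standby
hdfs haadmin -getServiceState nn2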

12.1 Cluster YARN configuration; set up the distributed computing layer on all nodes.

vim mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
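
Note: a fresh Hadoop 2.5.1 install normally ships only mapred-site.xml.template, so the file may need to be created first:

cp mapred-site.xml.template mapred-site.xml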
           

vim yarn-site.xml

<configuration>

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
   <name>yarn.resourcemanager.ha.enabled</name>
   <value>true</value>
 </property>
 <property>
   <name>yarn.resourcemanager.cluster-id</name>
   <value>coolfxl</value>
 </property>
 <property>
   <name>yarn.resourcemanager.ha.rm-ids</name>
   <value>rm1,rm2</value>
 </property>
 <property>
   <name>yarn.resourcemanager.hostname.rm1</name>
   <value>hadoop1</value>
 </property>
 <property>
   <name>yarn.resourcemanager.hostname.rm2</name>
   <value>hadoop2</value>
 </property>
 <property>
   <name>yarn.resourcemanager.zk-address</name>
   <value>hadoop1:2181,hadoop2:2181,hadoop3:2181</value>
 </property>
</configuration>
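
Since this section applies to all nodes, these two files also have to be copied to the other machines, for example from hadoop1:

scp mapred-site.xml yarn-site.xml root@hadoop2:/home/hadoop-2.5.1/etc/hadoop/
scp mapred-site.xml yarn-site.xml root@hadoop3:/home/hadoop-2.5.1/etc/hadoop/
scp mapred-site.xml yarn-site.xml root@hadoop4:/home/hadoop-2.5.1/etc/hadoop/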
           

13. After a server reboot, run:

service iptables stop

ntpdate 0.asia.pool.ntp.org

zkServer.sh start

【

hadoop1    start-dfs.sh

hadoop1    start-yarn.sh

hadoop2    yarn-daemon.sh start resourcemanager

】

or simply use

【hadoop1 start-all.sh】
           

After a successful start, open: http://192.168.25.151:50070


From any node that has Hadoop installed, you can create directories in HDFS.

Command: hdfs dfs -mkdir /test    (directory name)

Read a file's contents:

hdfs dfs -text /usr/nginx/html/index.html    (file name)

Upload a file:

hadoop fs -put srcFile destFile

hadoop fs -put spark.txt /
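
To confirm the upload, list the target directory:

hadoop fs -ls /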

Run a MapReduce job:

hadoop jar /root/wc.jar com.dkjhl.mr.wc.RunJob
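
If you do not have a custom jar like wc.jar at hand, the word-count example bundled with Hadoop works the same way (the output directory /wc-out is arbitrary and must not already exist):

hadoop jar /home/hadoop-2.5.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar wordcount /spark.txt /wc-out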


Open http://192.168.25.151:8088/cluster in a browser to see the YARN cluster UI.

