Installing a Hadoop Cluster (Multi-Node)

Environment

This document sets up a Hadoop cluster with one master serving as the NameNode and one slave serving as the DataNode:

(1) master:

os: CentOS release 6.5 (Final)

ip: 172.16.101.58

user: root

hadoop-2.9.0.tar.gz

(2) slave:

ip: 172.16.101.59
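
The shell prompts below use the hostnames sht-sgmhadoopdn-01 (master) and sht-sgmhadoopdn-02 (slave). If the nodes cannot yet resolve each other's names, a hosts mapping along the following lines keeps the later ssh and scp commands working (the hostname-to-IP pairing is inferred from the prompts and configuration below; adjust it to your own environment):

# /etc/hosts on both nodes
172.16.101.58   sht-sgmhadoopdn-01
172.16.101.59   sht-sgmhadoopdn-02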

Prerequisites

(1) Java is installed on both master and slave, with environment variables configured;

(2) hadoop-2.9.0.tar.gz has been extracted on the master node, with environment variables configured;

(3) This document installs everything as the root user, so root on the master must be able to log in to the slave node over SSH as root without a password. A sketch of these prerequisite steps follows this list.
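
The following is a minimal sketch of the prerequisite setup, not the author's exact steps; the JDK path /usr/java/jdk1.8.0 is an assumption, so substitute your own:

# Append to /etc/profile on both nodes, then run: source /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0         # assumed JDK path; adjust to your installation
export HADOOP_HOME=/usr/local/hadoop-2.9.0
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

# Set up passwordless SSH from the master's root user to the slave
[root@sht-sgmhadoopdn-01 ~]# ssh-keygen -t rsa
[root@sht-sgmhadoopdn-01 ~]# ssh-copy-id [email protected]
[root@sht-sgmhadoopdn-01 ~]# ssh [email protected] hostname    # should print the slave's hostname without prompting for a password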

Configuring the Cluster Files

Run the following on the master node (this document configures the files on the master first, then copies them to the slave nodes via scp).

(1) The slaves file: write the hostname or IP address of each host that will act as a DataNode into this file, one per line. It defaults to localhost, which is why in a pseudo-distributed setup the single node acts as both NameNode and DataNode.

[root@sht-sgmhadoopdn-01 hadoop]# cat slaves

172.16.101.59

(2) core-site.xml

[root@sht-sgmhadoopdn-01 hadoop]# cat /usr/local/hadoop-2.9.0/etc/hadoop/core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://172.16.101.58:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-2.9.0/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
</configuration>

(3) hdfs-site.xml

[root@sht-sgmhadoopdn-01 hadoop]# cat /usr/local/hadoop-2.9.0/etc/hadoop/hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>172.16.101.58:50090</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop-2.9.0/tmp/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop-2.9.0/tmp/dfs/data</value>
    </property>
</configuration>

(4) mapred-site.xml

[root@sht-sgmhadoopdn-01 hadoop]# cat /usr/local/hadoop-2.9.0/etc/hadoop/mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>172.16.101.58:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>172.16.101.58:19888</value>
    </property>
</configuration>

(5) yarn-site.xml

[root@sht-sgmhadoopdn-01 hadoop]# cat /usr/local/hadoop-2.9.0/etc/hadoop/yarn-site.xml

<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>172.16.101.58</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>

Once configured, copy the /usr/local/hadoop-2.9.0 directory from the master to each node. Because pseudo-distributed mode was run here previously, it is recommended to delete the old temporary files before switching to cluster mode.

[root@sht-sgmhadoopdn-01 local]# rm -rf ./hadoop-2.9.0/tmp

[root@sht-sgmhadoopdn-01 local]# rm -rf ./hadoop-2.9.0/logs

[root@sht-sgmhadoopdn-01 local]# tar -zcf hadoop-2.9.0.master.tar.gz ./hadoop-2.9.0    # archive with a relative path so it extracts to /usr/local/hadoop-2.9.0 on the slave

[root@sht-sgmhadoopdn-01 local]# scp hadoop-2.9.0.master.tar.gz sht-sgmhadoopdn-02:/usr/local/

Run on the slave node:

[root@sht-sgmhadoopdn-02 local]# tar -zxf hadoop-2.9.0.master.tar.gz

Starting the Hadoop Cluster

Run on the master node:

# HDFS must be formatted before the first start; subsequent starts skip this step

[root@sht-sgmhadoopdn-01 hadoop-2.9.0]# hdfs namenode -format

[root@sht-sgmhadoopdn-01 hadoop-2.9.0]# start-dfs.sh

[root@sht-sgmhadoopdn-01 hadoop-2.9.0]# start-yarn.sh

[root@sht-sgmhadoopdn-01 hadoop-2.9.0]# mr-jobhistory-daemon.sh start historyserver

[root@sht-sgmhadoopdn-01 hadoop-2.9.0]# jps

20289 JobHistoryServer

19730 ResourceManager

18934 NameNode

19163 SecondaryNameNode

20366 Jps

Run on the slave node:

[root@sht-sgmhadoopdn-02 hadoop]# jps

32147 DataNode

535 Jps

32559 NodeManager

[root@sht-sgmhadoopdn-01 hadoop]# hdfs dfsadmin -report

Configured Capacity: 75831140352 (70.62 GB)

Present Capacity: 21246287872 (19.79 GB)

DFS Remaining: 21246263296 (19.79 GB)

DFS Used: 24576 (24 KB)

DFS Used%: 0.00%

Under replicated blocks: 0

Blocks with corrupt replicas: 0

Missing blocks: 0

Missing blocks (with replication factor 1): 0

Pending deletion blocks: 0

-------------------------------------------------

Live datanodes (1):                              # number of live DataNodes

Name: 172.16.101.59:50010 (sht-sgmhadoopdn-02)

Hostname: sht-sgmhadoopdn-02

Decommission Status : Normal

Non DFS Used: 50732867584 (47.25 GB)

DFS Remaining%: 28.02%

Configured Cache Capacity: 0 (0 B)

Cache Used: 0 (0 B)

Cache Remaining: 0 (0 B)

Cache Used%: 100.00%

Cache Remaining%: 0.00%

Xceivers: 1

Last contact: Wed Dec 27 11:08:46 CST 2017

Last Block Report: Wed Dec 27 11:02:01 CST 2017

Management Consoles

NameNode

http://172.16.101.58:50070
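
If no browser is available on the machine, a quick reachability check from the shell (a sketch; assumes curl is installed):

# Expect HTTP 200 once the NameNode web UI is up
[root@sht-sgmhadoopdn-01 ~]# curl -s -o /dev/null -w "%{http_code}\n" http://172.16.101.58:50070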

Running a Distributed MapReduce Example Job

[root@sht-sgmhadoopdn-01 hadoop]# hdfs dfs -mkdir -p /user/root/input

[root@sht-sgmhadoopdn-01 hadoop]# hdfs dfs -put /usr/local/hadoop-2.9.0/etc/hadoop/*.xml  input

[root@sht-sgmhadoopdn-01 hadoop]# hadoop jar /usr/local/hadoop-2.9.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.0.jar grep input output 'dfs[a-z.]+'

17/12/27 11:25:33 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

17/12/27 11:25:34 INFO client.RMProxy: Connecting to ResourceManager at /172.16.101.58:8032

17/12/27 11:25:36 INFO input.FileInputFormat: Total input files to process : 9

17/12/27 11:25:36 INFO mapreduce.JobSubmitter: number of splits:9

17/12/27 11:25:37 INFO Configuration.deprecation: yarn.resourcemanager.system-metrics-publisher.enabled is deprecated. Instead, use yarn.system-metrics-publisher.enabled

17/12/27 11:25:37 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1514343869308_0001

17/12/27 11:25:38 INFO impl.YarnClientImpl: Submitted application application_1514343869308_0001

17/12/27 11:25:38 INFO mapreduce.Job: The url to track the job: http://sht-sgmhadoopdn-01:8088/proxy/application_1514343869308_0001/

17/12/27 11:25:38 INFO mapreduce.Job: Running job: job_1514343869308_0001

17/12/27 11:25:51 INFO mapreduce.Job: Job job_1514343869308_0001 running in uber mode : false

17/12/27 11:25:51 INFO mapreduce.Job:  map 0% reduce 0%

17/12/27 11:26:14 INFO mapreduce.Job:  map 11% reduce 0%

17/12/27 11:26:15 INFO mapreduce.Job:  map 67% reduce 0%

17/12/27 11:26:29 INFO mapreduce.Job:  map 100% reduce 0%

17/12/27 11:26:32 INFO mapreduce.Job:  map 100% reduce 100%

17/12/27 11:26:34 INFO mapreduce.Job: Job job_1514343869308_0001 completed successfully

17/12/27 11:26:34 INFO mapreduce.Job: Counters: 50

......

[root@sht-sgmhadoopdn-01 hadoop]# hdfs dfs -cat output/*

17/12/27 11:30:08 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

1    dfsadmin

1    dfs.replication

1    dfs.namenode.secondary.http

1    dfs.namenode.name.dir

1    dfs.datanode.data.dir
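
Note that re-running the example requires deleting the output directory first, because MapReduce refuses to write to an existing output path:

[root@sht-sgmhadoopdn-01 hadoop]# hdfs dfs -rm -r output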

You can also visit the console in a browser to see detailed job information:

ResourceManager

http://172.16.101.58:8088

Stopping the Hadoop Cluster

[root@sht-sgmhadoopdn-01 hadoop]# stop-yarn.sh

[root@sht-sgmhadoopdn-01 hadoop]# stop-dfs.sh

[root@sht-sgmhadoopdn-01 hadoop]# mr-jobhistory-daemon.sh stop historyserver

References:

http://www.powerxing.com/install-hadoop-cluster/

http://hadoop.apache.org/docs/r2.9.0/hadoop-project-dist/hadoop-common/ClusterSetup.html
