
Hadoop: Setting Up a Fully Distributed Cluster, Step by Step

The cluster has three nodes built on three virtual machines (hadoop01, hadoop02, hadoop03). The VMs run CentOS 6.5, and the Hadoop version used is 2.9.1.

Walkthrough

1. Building the base cluster

Download and install VMware Workstation Pro. Link: https://pan.baidu.com/s/1rA30rE9Px5tDJkWSrghlZg  Password: dydq

Download either a CentOS or an Ubuntu image from the official site; I use CentOS 6.5 here.

Use VMware to install the Linux system and create three virtual machines.

2. Cluster configuration

Set the hostname:

vi /etc/sysconfig/network  
           

Change:

HOSTNAME=hadoop01

The three VMs' hostnames are hadoop01, hadoop02, and hadoop03 respectively.

Edit the hosts file:

vi /etc/hosts
           

Contents:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.216.15  www.hadoop01.com  hadoop01

192.168.216.16  www.hadoop02.com  hadoop02

192.168.216.17  www.hadoop03.com  hadoop03

Note: do this on all three VMs.

Network configuration:

vi /etc/sysconfig/network-scripts/ifcfg-eth0
           

Example contents (adjust IPADDR per node; see the list below):

DEVICE=eth0

HWADDR=00:0C:29:0F:84:86

TYPE=Ethernet

UUID=70d880d5-6852-4c85-a1c9-2491c4c1ac11

ONBOOT=yes

IPADDR=192.168.216.111

PREFIX=24

GATEWAY=192.168.216.2

DNS1=8.8.8.8

DNS2=114.114.114.114

NM_CONTROLLED=yes

BOOTPROTO=static

DEFROUTE=yes

NAME="System eth0"

hadoop01:192.168.216.15

hadoop02:192.168.216.16

hadoop03:192.168.216.17

Once everything is set, test the network with ping.
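For example, from hadoop01 (the hostnames resolve through the /etc/hosts entries above):

ping -c 3 hadoop02
ping -c 3 hadoop03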

Note: if you create the VMs by copying the virtual machine files, the cloned NICs may end up with duplicate MAC addresses. Regenerate the MAC in the VMware network adapter settings, and after cloning change each VM's internal IP address, as sketched below.
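On CentOS 6, one common way to clean up a cloned VM's networking looks roughly like this (a sketch; the HWADDR to put into ifcfg-eth0 is the new MAC shown in VMware's adapter settings):

# let udev regenerate its NIC rules with the new MAC on the next boot
rm -f /etc/udev/rules.d/70-persistent-net.rules
# update HWADDR and IPADDR for this particular node
vi /etc/sysconfig/network-scripts/ifcfg-eth0
reboot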

Install the JDK:

Download the JDK. Link:

http://download.oracle.com/otn-pub/java/jdk/8u181-b13/96a7b8442fe848ef90c96a2fad6ed6d1/jdk-8u181-linux-x64.tar.gz

Extract it:

tar zxvf jdk-8u181-linux-x64.tar.gz -C /usr/local
           

Configure environment variables:

vi /etc/profile
           

Add:

export JAVA_HOME=/usr/local/jdk1.8.0_181
export PATH=$PATH:$JAVA_HOME/bin
           

Apply it:

source /etc/profile
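A quick check that the JDK is now on the PATH:

java -version    # should report version 1.8.0_181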
           

Set up passwordless SSH login:

Passwordless login means that when you ssh from hadoop01 to the other machines, no password is required. (Note: if ssh is not installed, install it first.)

First, on hadoop01:

$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 0600 ~/.ssh/authorized_keys
           

Then hadoop01 ---> hadoop02:

ssh-copy-id hadoop02
           

Then hadoop01 ---> hadoop03:

ssh-copy-id hadoop03
           

How it works:

hadoop01 sends a login request to hadoop02. hadoop02 looks in its authorized_keys for the matching user and host; if it finds the corresponding public key, it uses that key to encrypt a random string (if not, it falls back to password login) and sends the encrypted string back to hadoop01. hadoop01 decrypts it with its private key and sends the result back to hadoop02, which checks whether it matches the original string: if it does, the login is allowed; if not, it is refused.

If you need passwordless login to other nodes, the procedure is the same; you can verify it as shown below.
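To confirm the passwordless logins work, run the following on hadoop01; each command should print the remote hostname without asking for a password:

ssh hadoop02 hostname
ssh hadoop03 hostname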

3. Installing and configuring Hadoop

Before installing, disable the firewall on all three nodes:

service iptables stop : stop the firewall
chkconfig iptables off : do not start the firewall at boot
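To double-check that the firewall is stopped now and stays off after a reboot:

service iptables status      # should report that iptables is not running
chkconfig --list iptables    # every runlevel should show off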
           

Installing Hadoop

First install and configure Hadoop on the hadoop01 node; afterwards do the same on hadoop02 and hadoop03. Instead of repeating everything, we can simply copy the hadoop directory to the other nodes, e.g.: scp -r /usr/local/hadoop-2.9.1 hadoop02:/usr/local/

Download the Hadoop tarball from: https://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.9.1/hadoop-2.9.1.tar.gz

(1) Extract and configure environment variables

tar -zxvf /home/hadoop-2.9.1.tar.gz -C /usr/local/
           

 vi /etc/profile

export HADOOP_HOME=/usr/local/hadoop-2.9.1/
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
           

source /etc/profile  : apply the environment variables

hadoop version  : check the Hadoop version

[root@hadoop01 hadoop-2.9.1]# source /etc/profile
[root@hadoop01 hadoop-2.9.1]# hadoop version
Hadoop 2.9.1
Subversion https://github.com/apache/hadoop.git -r e30710aea4e6e55e69372929106cf119af06fd0e
Compiled by root on 2018-04-16T09:33Z
Compiled with protoc 2.5.0
From source with checksum 7d6d2b655115c6cc336d662cc2b919bd
This command was run using /usr/local/hadoop-2.9.1/share/hadoop/common/hadoop-common-2.9.1.jar
           

The environment variables are now configured correctly.

(2) Edit the configuration files (note: these commands are run from the Hadoop installation directory)

[root@hadoop01 hadoop-2.9.1]# pwd
/usr/local/hadoop-2.9.1
           

vi ./etc/hadoop/hadoop-env.sh  : set the path to JAVA_HOME

Change (point it at the JDK installed earlier):

export JAVA_HOME=/usr/local/jdk1.8.0_181
           

vi ./etc/hadoop/core-site.xml  : edit the core configuration file

<configuration>

<!-- Default HDFS filesystem URI -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop01:9000</value>
</property>

<!-- I/O buffer size -->
<property>
<name>io.file.buffer.size</name>
<value>4096</value>
</property>

<!-- Hadoop temporary directory -->
<property>
<name>hadoop.tmp.dir</name>
<value>/home/bigdata/tmp</value>
</property>

</configuration>
           

vi ./etc/hadoop/hdfs-site.xml

<configuration>
<!-- Block replication factor (default is 3) -->
<property>
<name>dfs.replication</name>
<value>3</value>
</property>

<!-- Block size (128 MB) -->
<property>
<name>dfs.blocksize</name>
<value>134217728</value>
</property>

<!-- NameNode metadata directory -->
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/hadoopdata/dfs/name</value>
</property>

<!-- DataNode data directory -->
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/hadoopdata/dfs/data</value>
</property>

<!-- SecondaryNameNode checkpoint directory -->
<property>
<name>fs.checkpoint.dir</name>
<value>/home/hadoopdata/dfs/checkpoint/cname</value>
</property>

<!-- Checkpoint edits directory -->
<property>
<name>fs.checkpoint.edits.dir</name>
<value>/home/hadoopdata/dfs/checkpoint/edit</value>
</property>

<!-- NameNode web UI address and port -->
<property>
<name>dfs.http.address</name>
<value>hadoop01:50070</value>
</property>

<!-- SecondaryNameNode web UI address and port -->
<property>
<name>dfs.secondary.http.address</name>
<value>hadoop02:50090</value>
</property>

<!-- Whether to enable WebHDFS -->
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>

<!-- Whether to enforce HDFS permissions -->
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>

</configuration>
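The paths above all live under /home/bigdata and /home/hadoopdata. Hadoop creates most of them on demand (and the namenode directory is created by the format step later), but if you prefer to pre-create them on all three nodes, something like the following works once passwordless ssh is in place (a sketch):

for h in hadoop01 hadoop02 hadoop03; do
  ssh $h "mkdir -p /home/bigdata/tmp /home/hadoopdata/dfs/name /home/hadoopdata/dfs/data /home/hadoopdata/dfs/checkpoint/cname /home/hadoopdata/dfs/checkpoint/edit"
done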
           

cp ./etc/hadoop/mapred-site.xml.template ./etc/hadoop/mapred-site.xml

vi ./etc/hadoop/mapred-site.xml

<configuration>

<!-- MapReduce framework name -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
<final>true</final>
</property>

<!-- JobHistoryServer RPC address -->
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop03:10020</value>
</property>

<!-- JobHistoryServer web UI address -->
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop03:19888</value>
</property>

</configuration>
           

vi ./etc/hadoop/yarn-site.xml

<configuration>
<!-- Host that runs the ResourceManager -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop02</value>
</property>

<!-- Auxiliary shuffle service for MapReduce -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>

<!-- ResourceManager RPC address -->
<property>
<name>yarn.resourcemanager.address</name>
<value>hadoop02:8032</value>
</property>

<!-- ResourceManager scheduler address -->
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>hadoop02:8030</value>
</property>

<!-- ResourceManager resource-tracker address -->
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>hadoop02:8031</value>
</property>

<!-- ResourceManager admin address -->
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>hadoop02:8033</value>
</property>

<!-- ResourceManager web UI address -->
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>hadoop02:8088</value>
</property>

</configuration>
           

vi ./etc/hadoop/slaves

hadoop01
hadoop02
hadoop03
           

Here we can use scp to copy the hadoop directory to hadoop02 and hadoop03 so we do not have to repeat the configuration on each node.

Commands:

Copy the hadoop directory to hadoop02 and hadoop03:
scp -r /usr/local/hadoop-2.9.1 hadoop02:/usr/local/
scp -r /usr/local/hadoop-2.9.1 hadoop03:/usr/local/
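Keep in mind that hadoop02 and hadoop03 also need the JDK and the /etc/profile changes made earlier. If you set those up only on hadoop01, one way to propagate them is another round of scp (a sketch, assuming the same paths on every node):

scp -r /usr/local/jdk1.8.0_181 hadoop02:/usr/local/
scp -r /usr/local/jdk1.8.0_181 hadoop03:/usr/local/
scp /etc/profile hadoop02:/etc/profile
scp /etc/profile hadoop03:/etc/profile
# profile is sourced explicitly because a non-interactive ssh shell does not load it
ssh hadoop02 "source /etc/profile && hadoop version"
ssh hadoop03 "source /etc/profile && hadoop version"

Each of the last two commands should print Hadoop 2.9.1.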
           

4. Starting Hadoop

Format the namenode on hadoop01:

bin/hdfs namenode -format
           

If the output contains a line like "INFO common.Storage: Storage directory ... has been successfully formatted." (the directory should match the dfs.namenode.name.dir configured above), the format succeeded.


Start Hadoop:

start-all.sh
           
[root@hadoop01 hadoop-2.9.1]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
18/09/13 23:29:29 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop01]
hadoop01: starting namenode, logging to /usr/local/hadoop-2.9.1/logs/hadoop-root-namenode-hadoop01.out
hadoop03: starting datanode, logging to /usr/local/hadoop-2.9.1/logs/hadoop-root-datanode-hadoop03.out
hadoop01: starting datanode, logging to /usr/local/hadoop-2.9.1/logs/hadoop-root-datanode-hadoop01.out
hadoop02: starting datanode, logging to /usr/local/hadoop-2.9.1/logs/hadoop-root-datanode-hadoop02.out
Starting secondary namenodes [hadoop02]
hadoop02: starting secondarynamenode, logging to /usr/local/hadoop-2.9.1/logs/hadoop-root-secondarynamenode-hadoop02.out
18/09/13 23:29:55 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.9.1/logs/yarn-root-resourcemanager-hadoop01.out
hadoop03: nodemanager running as process 61198. Stop it first.
hadoop01: starting nodemanager, logging to /usr/local/hadoop-2.9.1/logs/yarn-root-nodemanager-hadoop01.out
hadoop02: starting nodemanager, logging to /usr/local/hadoop-2.9.1/logs/yarn-root-nodemanager-hadoop02.out
           

The NativeCodeLoader warning here can be ignored; if you want to get rid of it, see my earlier post.

Check that everything started correctly:

hadoop01:

[root@hadoop01 hadoop-2.9.1]# jps
36432 Jps
35832 DataNode
36250 NodeManager
35676 NameNode
           

hadoop02

[root@hadoop02 hadoop-2.9.1]# jps
54083 SecondaryNameNode
53987 DataNode
57031 Jps
49338 ResourceManager
54187 NodeManager
           

hadoop03

[root@hadoop03 hadoop-2.9.1]# jps
63570 Jps
63448 DataNode
61198 NodeManager
           

Everything looks good.
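Instead of logging in to each node, you can also run jps on all three from hadoop01 (a sketch; relies on the passwordless ssh configured earlier):

for h in hadoop01 hadoop02 hadoop03; do
  echo "== $h =="
  # source /etc/profile so that jps is on the PATH in the non-interactive shell
  ssh $h "source /etc/profile && jps"
done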

Now, let's start the JobHistoryServer on hadoop03:

[root@hadoop03 hadoop-2.9.1]# mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /usr/local/hadoop-2.9.1/logs/mapred-root-historyserver-hadoop03.out
[root@hadoop03 hadoop-2.9.1]# jps
63448 DataNode
63771 Jps
63724 JobHistoryServer
61198 NodeManager
           

Check the web UIs in a browser:

  • http://hadoop01:50070       namenode web UI
  • http://hadoop02:50090       secondarynamenode web UI
  • http://hadoop02:8088        resourcemanager web UI
  • http://hadoop03:19888       jobhistoryserver web UI
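To open these by hostname from your own workstation's browser, its hosts file needs the same hadoop01/02/03 entries (or just use the IP addresses). For a quick reachability check from the command line first (a sketch; assumes curl is installed):

curl -s -o /dev/null -w "%{http_code}\n" http://hadoop01:50070    # expect 200
curl -s -o /dev/null -w "%{http_code}\n" http://hadoop02:8088     # expect 200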

Verify them in the browser (screenshots of the namenode, secondarynamenode, resourcemanager, and jobhistoryserver web UIs omitted).

 OK!!!

5. Testing the Hadoop cluster

Goal: verify that the current Hadoop cluster is installed and configured correctly.

The test case uses MapReduce to run the wordcount program.

Create a file named helloworld:

mkdir -p /home/words
echo "Hello world  Hello hadoop" > /home/words/helloworld
           

将檔案helloworld上傳至hdfs:

hdfs dfs -put /home/words/helloworld /
           

Take a look:

[root@hadoop01 words]# hdfs dfs -ls /
18/09/13 23:49:30 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 3 items
-rw-r--r--   3 root supergroup         26 2018-09-13 23:46 /helloworld
drwxr-xr-x   - root supergroup          0 2018-09-13 23:41 /input
drwxrwx---   - root supergroup          0 2018-09-13 23:36 /tmp
[root@hadoop01 words]# hdfs dfs -cat /helloworld
18/09/13 23:50:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Hello world  Hello hadoop
           

Run the wordcount program:

[root@hadoop01 words]# yarn jar /usr/local/hadoop-2.9.1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.9.1.jar wordcount /helloworld/ /out/00000
           

If output like the following appears, everything is OK:

Virtual memory (bytes) snapshot=4240179200
		Total committed heap usage (bytes)=287309824
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters 
		Bytes Read=26
	File Output Format Counters 
		Bytes Written=25
           

Look at the generated output files:

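A rough idea of what that listing shows (a sketch of typical output; exact timestamps will differ):

hdfs dfs -ls /out/00000
# Found 2 items
# /out/00000/_SUCCESS          empty marker file written when the job succeeds
# /out/00000/part-r-00000      the reducer output (25 bytes here, matching Bytes Written above)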

And check the contents from the command line:

[root@hadoop01 words]# hdfs dfs -cat /out/00000/part-r-00000
18/09/14 00:32:59 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Hello	2
hadoop	1
world	1
           

The result is correct.


At this point we can also see this job's execution details in the JobHistory web UI, along with plenty of other information that I won't go through one by one. It's 00:37 at night, time to sleep...

And the site happens to be under maintenance, of all things...

For more details, see the official documentation.
