
<a href="#_Toc21191%20">Table of Contents</a>
<a href="#_Toc26718%20">1. Introduction</a>
<a href="#_Toc11006%20">2. Installing Dependencies</a>
<a href="#_Toc26209%20">2.1. Installing Protocol Buffers</a>
<a href="#_Toc16929%20">2.2. Installing CMake</a>
<a href="#_Toc32276%20">2.3. Installing the JDK</a>
<a href="#_Toc12753%20">2.4. Installing Maven</a>
<a href="#_Toc8266%20">3. Compiling the Hadoop Source Code</a>
<a href="#_Toc13951%20">Appendix 1: Compiling Without Internet Access</a>
<a href="#_Toc4848%20">Appendix 2: Build Environment</a>
<a href="#_Toc5281%20">Appendix 3: Version Information</a>
<a href="#_Toc11767%20">Appendix 4: Common Errors</a>
<a href="#_Toc7273%20">1) unexpected end tag</a>
<a href="#_Toc8362%20">Appendix 5: Related Documents</a>
1. Introduction
The Hadoop 2.4.0 source tree contains a BUILDING.txt file that describes how to compile the source code on Linux and Windows. This article essentially follows the instructions in BUILDING.txt, distilled into a brief walkthrough.
The first build requires Internet access. Hadoop's build depends on a great many artifacts, so make sure the machine can reach the Internet; otherwise it is difficult to resolve every build problem one by one. After the first successful build, nothing needs to be downloaded again.
2. Installing Dependencies
Before compiling the Hadoop 2.4.0 source, install the following dependencies:
1) JDK 1.6 or newer (this article uses JDK 1.7; do not install JDK 1.8, which is incompatible with Hadoop 2.4.0 and produces many errors when compiling its source)
2) Maven 3.0 or newer
3) ProtocolBuffer 2.5.0
4) CMake 2.6 or newer
5) FindBugs 1.3.9, optional (not installed for the build described here)
After installing them, set up the environment variables by editing either /etc/profile or ~/.profile and adding the following:
export JAVA_HOME=/root/jdk
export CLASSPATH=$JAVA_HOME/lib/tools.jar
export PATH=$JAVA_HOME/bin:$PATH
export CMAKE_HOME=/root/cmake
export PATH=$CMAKE_HOME/bin:$PATH
export PROTOC_HOME=/root/protobuf
export PATH=$PROTOC_HOME/bin:$PATH
export MAVEN_HOME=/root/maven
export PATH=$MAVEN_HOME/bin:$PATH
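Once these lines are in place and the profile has been re-sourced, a quick sanity check is to confirm that each tool resolves on PATH. A minimal sketch; it only reports what it finds:

```shell
# Sanity check after sourcing the profile: each build tool should resolve
# on PATH. Run on the build machine once the installs below are done.
for tool in java mvn protoc cmake; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $(command -v "$tool")"
  else
    echo "$tool: NOT on PATH"
  fi
done
```

If any tool reports NOT on PATH, re-check the corresponding export lines before starting the build.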
This article installs everything as the root user under /root, but a non-root user and a different directory work just as well.
2.1. Installing Protocol Buffers
Protocol Buffers follows the standard automake-style configure, make, make install procedure:
1) cd /root
2) tar xzf protobuf-2.5.0.tar.gz
3) cd protobuf-2.5.0
4) ./configure --prefix=/root/protobuf
5) make
6) make install
2.2. Installing CMake
1) cd /root
2) tar xzf cmake-2.8.12.2.tar.gz
3) cd cmake-2.8.12.2
4) ./bootstrap --prefix=/root/cmake
5) make
6) make install
2.3. Installing the JDK
1) cd /root
2) tar xzf jdk-7u55-linux-x64.gz
3) ln -s jdk1.7.0_55 jdk
2.4. Installing Maven
Maven ships as a binary distribution, so there is nothing to compile:
1) cd /root
2) tar xzf apache-maven-3.0.5-bin.tar.gz
3) ln -s apache-maven-3.0.5 maven
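The JDK and Maven steps above deliberately create version-independent symlinks (/root/jdk, /root/maven), so the exported variables never need to change across upgrades. A small sketch of the pattern, run in a scratch directory rather than /root:

```shell
# Demonstrate the version-independent symlink pattern in a temp directory.
DEMO=$(mktemp -d)
mkdir "$DEMO/apache-maven-3.0.5"
# MAVEN_HOME points at the "maven" link; upgrading Maven later only means
# repointing this one link, with no edits to /etc/profile.
ln -s apache-maven-3.0.5 "$DEMO/maven"
readlink "$DEMO/maven"   # prints: apache-maven-3.0.5
rm -rf "$DEMO"
```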
3. Compiling the Hadoop Source Code
With the preparation above complete, start the build of the Hadoop source by running: mvn package -Pdist -DskipTests -Dtar. Once again, be sure not to use JDK 1.8.
To also build the native libraries, use: mvn package -Pdist,native -DskipTests -Dtar. If C/C++ programs need to access HDFS and similar services, the build must include the native profile so that the corresponding library files are produced. Alternatively, mvn package -Pnative -DskipTests -Dtar builds just the native library files.
Other related build commands (as listed in BUILDING.txt):
1) mvn package -Pdist -DskipTests -Dtar (binary distribution, without native code or documentation)
2) mvn package -Pdist,native,docs,src -DskipTests -Dtar (binary and source distributions, with native code and documentation)
3) mvn package -Psrc -DskipTests (source distribution only)
4) mvn package -Pdist,native,docs -DskipTests -Dtar (binary distribution with native code and documentation)
5) mvn clean site; mvn site:stage -DstagingDirectory=/tmp/hadoop-site (local staging version of the website)
After a successful build, the jar files end up in the various target subdirectories; you can locate them all by running find from the Hadoop source directory.
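For instance, the following sketch builds a tiny mock source tree and runs the search; on the real machine you would run the same find from the Hadoop source root (the mock paths here are illustrative only):

```shell
# Build a tiny mock source tree to demonstrate the search.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/hadoop-common-project/hadoop-common/target"
touch "$ROOT/hadoop-common-project/hadoop-common/target/hadoop-common-2.4.0.jar"
# The search itself: every jar sitting under any target subdirectory.
find "$ROOT" -name '*.jar' -path '*/target/*'
rm -rf "$ROOT"
```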
A successful build also produces the Hadoop binary package hadoop-2.4.0.tar.gz, placed in the hadoop-dist/target subdirectory of the source tree:
main:
[exec] $ tar cf hadoop-2.4.0.tar hadoop-2.4.0
[exec] $ gzip -f hadoop-2.4.0.tar
[exec]
[exec] Hadoop dist tar available at: /root/hadoop-2.4.0-src/hadoop-dist/target/hadoop-2.4.0.tar.gz
[INFO] Executed tasks
[INFO]
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-dist ---
[INFO] Building jar: /root/hadoop-2.4.0-src/hadoop-dist/target/hadoop-dist-2.4.0-javadoc.jar
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] Apache Hadoop Main ................................ SUCCESS [4.647s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [5.352s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [7.239s]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [0.424s]
[INFO] Apache Hadoop Project Dist POM .................... SUCCESS [2.918s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [6.261s]
[INFO] Apache Hadoop MiniKDC ............................. SUCCESS [5.321s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [5.953s]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [3.783s]
[INFO] Apache Hadoop Common .............................. SUCCESS [1:54.010s]
[INFO] Apache Hadoop NFS ................................. SUCCESS [9.721s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [0.048s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [4:15.270s]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [6:18.553s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [16.237s]
[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [6.543s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [0.036s]
[INFO] hadoop-yarn ....................................... SUCCESS [0.051s]
[INFO] hadoop-yarn-api ................................... SUCCESS [1:35.227s]
[INFO] hadoop-yarn-common ................................ SUCCESS [43.216s]
[INFO] hadoop-yarn-server ................................ SUCCESS [0.055s]
[INFO] hadoop-yarn-server-common ......................... SUCCESS [16.476s]
[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [19.942s]
[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [4.926s]
[INFO] hadoop-yarn-server-applicationhistoryservice ...... SUCCESS [9.804s]
[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [23.320s]
[INFO] hadoop-yarn-server-tests .......................... SUCCESS [1.208s]
[INFO] hadoop-yarn-client ................................ SUCCESS [9.177s]
[INFO] hadoop-yarn-applications .......................... SUCCESS [0.113s]
[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [4.106s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [3.265s]
[INFO] hadoop-yarn-site .................................. SUCCESS [0.056s]
[INFO] hadoop-yarn-project ............................... SUCCESS [5.552s]
[INFO] hadoop-mapreduce-client ........................... SUCCESS [0.096s]
[INFO] hadoop-mapreduce-client-core ...................... SUCCESS [37.231s]
[INFO] hadoop-mapreduce-client-common .................... SUCCESS [27.135s]
[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [4.886s]
[INFO] hadoop-mapreduce-client-app ....................... SUCCESS [17.876s]
[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [14.140s]
[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [11.305s]
[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [3.083s]
[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [9.855s]
[INFO] hadoop-mapreduce .................................. SUCCESS [5.110s]
[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [7.778s]
[INFO] Apache Hadoop Distributed Copy .................... SUCCESS [12.973s]
[INFO] Apache Hadoop Archives ............................ SUCCESS [3.265s]
[INFO] Apache Hadoop Rumen ............................... SUCCESS [11.060s]
[INFO] Apache Hadoop Gridmix ............................. SUCCESS [7.412s]
[INFO] Apache Hadoop Data Join ........................... SUCCESS [4.221s]
[INFO] Apache Hadoop Extras .............................. SUCCESS [4.771s]
[INFO] Apache Hadoop Pipes ............................... SUCCESS [0.032s]
[INFO] Apache Hadoop OpenStack support ................... SUCCESS [8.030s]
[INFO] Apache Hadoop Client .............................. SUCCESS [7.730s]
[INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [0.158s]
[INFO] Apache Hadoop Scheduler Load Simulator ............ SUCCESS [7.485s]
[INFO] Apache Hadoop Tools Dist .......................... SUCCESS [6.912s]
[INFO] Apache Hadoop Tools ............................... SUCCESS [0.029s]
[INFO] Apache Hadoop Distribution ........................ SUCCESS [40.425s]
[INFO] BUILD SUCCESS
[INFO] Total time: 21:57.892s
[INFO] Finished at: Mon Apr 21 14:33:22 CST 2014
[INFO] Final Memory: 88M/243M
Appendix 1: Compiling Without Internet Access
Compiling Hadoop 2.4.0 without Internet access is a very involved undertaking. It was achievable with early Hadoop releases, but for 2.4.0 it is rather hard.
There is a workaround, though: take a machine that can reach the Internet, complete one successful build there, then pack up the whole source directory and copy it to the offline machine. Note that the directory path must be kept identical on both machines, and the same build command must be executed.
Why must the paths stay the same? Suppose the networked build was done under /root/hadoop-2.4.0-src. Enter /root/hadoop-2.4.0-src and run find . -name "*.xml" | xargs grep "/root/"; you will see output like that shown below. The prefix "/root/" has been written into many XML files, and this is the root cause of the renewed downloads. Replacing those paths with the offline machine's actual directory also makes offline compilation possible.
find . -name "*.xml" |xargs grep "/root/"
./hadoop-tools/hadoop-datajoin/target/antrun/build-main.xml: /root/hadoop-2.4.0-src/hadoop-tools/hadoop-datajoin/target/test-dir"/>
./hadoop-tools/hadoop-datajoin/target/antrun/build-main.xml: /root/hadoop-2.4.0-src/hadoop-tools/hadoop-datajoin/target/log"/>
./hadoop-tools/hadoop-extras/target/antrun/build-main.xml:
./hadoop-tools/hadoop-gridmix/target/antrun/build-main.xml:
./hadoop-tools/hadoop-openstack/target/antrun/build-main.xml:
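The bulk replacement can be done with find and sed. A sketch, using /home/builder/ as a purely hypothetical target directory; substitute the offline machine's real build path:

```shell
# Simulate one generated file with the networked machine's path baked in.
SRC=$(mktemp -d)
printf '<dir path="/root/hadoop-2.4.0-src/target/test-dir"/>\n' > "$SRC/build-main.xml"
# Rewrite every baked-in /root/ prefix to the offline machine's directory
# (/home/builder/ here is a placeholder, not a path from the original build).
find "$SRC" -name '*.xml' -print0 | xargs -0 sed -i 's|/root/|/home/builder/|g'
cat "$SRC/build-main.xml"   # now references /home/builder/hadoop-2.4.0-src/...
rm -rf "$SRC"
```

On the real tree, run the find/sed pipeline from the copied source root after adjusting the replacement path.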
Appendix 2: Build Environment
The whole process was carried out on a 64-bit Aliyun host with a single 2.30 GHz core and 1 GB of memory:
[root@AY140408105805619186Z hadoop-2.4.0-src]# uname -a
Linux AY140408105805619186Z 2.6.18-308.el5 #1 SMP Tue Feb 21 20:06:06 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
[root@AY140408105805619186Z ~]# cat /etc/redhat-release
CentOS release 5.8 (Final)
Appendix 3: Version Information

Name             | Version  | Package                       | Notes
-----------------|----------|-------------------------------|-------------------------
Maven            | 3.0.5    | apache-maven-3.0.5-bin.tar.gz | 3.2.1 may cause problems
CMake            | 2.8.12.2 | cmake-2.8.12.2.tar.gz         |
JDK              | 1.7.0    | jdk-7u55-linux-x64.gz         | do not use JDK 1.8.0
Protocol Buffers | 2.5.0    | protobuf-2.5.0.tar.gz         |
Hadoop           | 2.4.0    | hadoop-2.4.0-src.tar.gz       |
Appendix 4: Common Errors
1) unexpected end tag
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:jar (module-javadocs) on project hadoop-annotations: MavenReportException: Error while creating archive:
[ERROR] Exit code: 1 - /root/hadoop-2.4.0-src/hadoop-common-project/hadoop-annotations/src/main/java/org/apache/hadoop/classification/InterfaceStability.java:27: error: unexpected end tag:
[ERROR] *
[ERROR] ^
[ERROR]
[ERROR] Command line was: /root/jdk1.8.0/jre/../bin/javadoc @options @packages
The cause is a comment in InterfaceStability.java that JDK 1.8's stricter javadoc rejects.
The fix is to switch to JDK 1.7; the error above only appears when compiling with JDK 1.8. Deleting the offending line resolves this particular error, but similar ones keep appearing afterwards, so do not use JDK 1.8 to compile Hadoop 2.4.0.
Appendix 5: Related Documents
"HBase 0.98.0 Distributed Installation Guide"
"Hive 0.12.0 Installation Guide"
"ZooKeeper 3.4.6 Distributed Installation Guide"
"Hadoop 2.3.0 Source Code Reverse Engineering"
"Compiling Hadoop 2.4.0 on Linux"
"Accumulo 1.5.1 Installation Guide"
"Drill 1.0.0 Installation Guide"
"Shark 0.9.1 Installation Guide"