
Hadoop deployment and usage

1. Basic environment


[hadoop@master ~]$ cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core)
[hadoop@master ~]$
[hadoop@master ~]$ getenforce
Disabled
[hadoop@master ~]$ systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
[hadoop@master ~]$

2. IPs and corresponding nodes

IP              Hostname  Hadoop node  Hadoop processes
192.168.56.100  master    master       namenode, jobtracker
192.168.56.101  slave1    slave        datanode, tasktracker
192.168.56.102  slave2    slave        datanode, tasktracker
192.168.56.103  slave3    slave        datanode, tasktracker

[hadoop@master ~]# cat /etc/hosts
192.168.56.100  Master
192.168.56.101  slave1
192.168.56.102  slave2
192.168.56.103  slave3
[hadoop@master ~]#

3. Add the hadoop user (on all nodes)

useradd hadoop
echo hadoop | passwd --stdin hadoop
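If root already has SSH access to every node, the same two commands can be pushed out in one loop. This is only a sketch; the node list is an assumption taken from the host table above.

```shell
# Create the hadoop user on every node from one machine.
# Assumes root can already SSH to each host; names come from /etc/hosts.
for node in master slave1 slave2 slave3; do
  ssh "root@$node" 'useradd hadoop && echo hadoop | passwd --stdin hadoop'
done
```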

4. JDK


[hadoop@slave1 application]# ll
total 4
lrwxrwxrwx 1 root root   24 Jul 10 01:35 jdk -> /application/jdk1.8.0_60
drwxr-xr-x 8 root root 4096 Aug  5  2015 jdk1.8.0_60
[hadoop@slave1 application]# pwd
/application
[hadoop@slave1 application]#
[hadoop@master ~]# java -version
java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)

5. The hadoop user on master (192.168.56.100) must be able to SSH to the hadoop user on every slave node without a password
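One way to set this up is to generate a key pair once on master and push the public key to each slave. A sketch, assuming the hostnames resolve via /etc/hosts and ssh-copy-id is installed:

```shell
# Run as the hadoop user on master.
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa   # key pair with no passphrase
for node in slave1 slave2 slave3; do
  ssh-copy-id "hadoop@$node"               # appends the key to that node's authorized_keys
done
ssh hadoop@slave1 hostname                 # should now log in without a password
```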

6. Set the Hadoop installation path and environment variables (on all nodes)

su - hadoop
tar xf hadoop-2.7.0.tar.gz        # unpacks to /home/hadoop/hadoop-2.7.0
vi /etc/profile                   # add the Hadoop environment variables:
export HADOOP_HOME=/home/hadoop/hadoop-2.7.0
export PATH=$PATH:$HADOOP_HOME/bin
source /etc/profile
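A quick sanity check that the variables took effect in the current shell (a sketch using the same path as above):

```shell
export HADOOP_HOME=/home/hadoop/hadoop-2.7.0
export PATH=$PATH:$HADOOP_HOME/bin
# If the export worked, the Hadoop bin directory is on PATH,
# so `hadoop`, `hdfs` and the other wrappers resolve without a full path.
echo "$PATH" | grep -q "$HADOOP_HOME/bin" && echo "PATH ok"
```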

7. Set the Java environment variable in Hadoop's environment script

cd /home/hadoop/hadoop-2.7.0/etc/hadoop
vi hadoop-env.sh                  # add:
###JAVA_HOME
export JAVA_HOME=/application/jdk/

8. Edit the Hadoop configuration files


cd /home/hadoop/hadoop-2.7.0/etc/hadoop

1.##############################

[hadoop@master hadoop]$ cat core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>
  </property>
</configuration>
[hadoop@master hadoop]$

2.###################################  (does not exist by default; just copy the template)

[hadoop@master hadoop]$ cat mapred-site.xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <!-- value truncated in the original listing -->
  </property>
</configuration>

3.#########################################

[hadoop@master hadoop]$ cat hdfs-site.xml
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/hadoop/name1,/home/hadoop/name2,/home/hadoop/name3</value>
    <description> </description>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/data1,/home/hadoop/data2,/home/hadoop/data3</value>
    <description> </description>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>

[hadoop@master hadoop]$ cat masters
master
[hadoop@master hadoop]$ cat slaves
slave1
slave2
slave3
[hadoop@master hadoop]$

9. Distribute to the slave nodes

scp -r /home/hadoop/hadoop-2.7.0 slave1:/home/hadoop/
scp -r /home/hadoop/hadoop-2.7.0 slave2:/home/hadoop/
scp -r /home/hadoop/hadoop-2.7.0 slave3:/home/hadoop/
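The three copies can equally be written as one loop over the slave list (a sketch; node names as in /etc/hosts above):

```shell
for node in slave1 slave2 slave3; do
  scp -r /home/hadoop/hadoop-2.7.0 "$node":/home/hadoop/
done
```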

10. Test on the master node

Do not create /home/hadoop/name1, /home/hadoop/name2 or /home/hadoop/name3 in advance; if they already exist, the format step stops and asks you to confirm re-formatting them.

cd /home/hadoop/hadoop-2.7.0

[hadoop@master hadoop-2.7.0]$ ./bin/hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

17/07/10 02:57:34 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = Master/192.168.56.100
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.0
STARTUP_MSG:   classpath = /home/hadoop/hadoop-2.7.0/etc/hadoop:…
STARTUP_MSG:   build = Unknown -r Unknown; compiled by 'root' on 2015-05-27T13:56Z
STARTUP_MSG:   java = 1.8.0_60
************************************************************/

17/07/10 02:57:34 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/07/10 02:57:34 INFO namenode.NameNode: createNameNode [-format]
17/07/10 02:57:35 WARN common.Util: Path /home/hadoop/name1 should be specified as a URI in configuration files. Please update hdfs configuration.
17/07/10 02:57:35 WARN common.Util: Path /home/hadoop/name2 should be specified as a URI in configuration files. Please update hdfs configuration.
17/07/10 02:57:35 WARN common.Util: Path /home/hadoop/name3 should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-77e0896d-bda2-49f1-8127-c5343f1c52c9
17/07/10 02:57:35 INFO namenode.FSNamesystem: No KeyProvider found.
17/07/10 02:57:35 INFO namenode.FSNamesystem: fsLock is fair:true
17/07/10 02:57:35 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/07/10 02:57:35 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/07/10 02:57:35 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/07/10 02:57:36 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Jul 10 02:57:36
17/07/10 02:57:36 INFO util.GSet: Computing capacity for map BlocksMap
17/07/10 02:57:36 INFO util.GSet: VM type       = 64-bit
17/07/10 02:57:36 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB

<code>17</code><code>/07/10</code> <code>02:57:36 INFO util.GSet: capacity      = 2^21 = 2097152 entries</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO blockmanagement.BlockManager: dfs.block.access.token.</code><code>enable</code><code>=</code><code>false</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO blockmanagement.BlockManager: defaultReplication         = 3</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO blockmanagement.BlockManager: maxReplication             = 512</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO blockmanagement.BlockManager: minReplication             = 1</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = </code><code>false</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO blockmanagement.BlockManager: encryptDataTransfer        = </code><code>false</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO namenode.FSNamesystem: fsOwner             = hadoop (auth:SIMPLE)</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO namenode.FSNamesystem: supergroup          = supergroup</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO namenode.FSNamesystem: isPermissionEnabled = </code><code>true</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO namenode.FSNamesystem: HA Enabled: </code><code>false</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO namenode.FSNamesystem: Append Enabled: </code><code>true</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO util.GSet: Computing capacity </code><code>for</code> <code>map INodeMap</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO util.GSet: capacity      = 2^20 = 1048576 entries</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO namenode.FSDirectory: ACLs enabled? </code><code>false</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO namenode.FSDirectory: XAttrs enabled? </code><code>true</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO namenode.FSDirectory: Maximum size of an xattr: 16384</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO namenode.NameNode: Caching </code><code>file</code> <code>names occuring </code><code>more</code> <code>than 10 </code><code>times</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO util.GSet: Computing capacity </code><code>for</code> <code>map cachedBlocks</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO util.GSet: capacity      = 2^18 = 262144 entries</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.</code><code>top</code><code>.window.num.buckets = 10</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.</code><code>top</code><code>.num.</code><code>users</code> <code>= 10</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.</code><code>top</code><code>.windows.minutes = 1,5,25</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO namenode.FSNamesystem: Retry cache on namenode is enabled</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry </code><code>time</code> <code>is 600000 millis</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO util.GSet: Computing capacity </code><code>for</code> <code>map NameNodeRetryCache</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO util.GSet: capacity      = 2^15 = 32768 entries</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO namenode.FSImage: Allocated new BlockPoolId: BP-467031090-192.168.56.100-1499626656612</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO common.Storage: Storage directory </code><code>/home/hadoop/name1</code> <code>has been successfully formatted.</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO common.Storage: Storage directory </code><code>/home/hadoop/name2</code> <code>has been successfully formatted.</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO common.Storage: Storage directory </code><code>/home/hadoop/name3</code> <code>has been successfully formatted.</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid &gt;= 0</code>

<code>17</code><code>/07/10</code> <code>02:57:36 INFO util.ExitUtil: Exiting with status 0</code>

<code>17</code><code>/07/10</code> <code>02:57:37 INFO namenode.NameNode: SHUTDOWN_MSG: </code>

<code>SHUTDOWN_MSG: Shutting down NameNode at Master</code><code>/192</code><code>.168.56.100</code>

<code>[hadoop@master hadoop-2.7.0]$</code>

11.Start the services

<code>[hadoop@master sbin]$ </code><code>pwd</code>

<code>/home/hadoop/hadoop-2</code><code>.7.0</code><code>/sbin</code>

<code>[hadoop@master sbin]$ </code>

<code>[hadoop@master sbin]$ .</code><code>/start-all</code><code>.sh </code>

<code>This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh</code>

<code>Starting namenodes on [master]</code>

<code>master: starting namenode, logging to </code><code>/home/hadoop/hadoop-2</code><code>.7.0</code><code>/logs/hadoop-hadoop-namenode-master</code><code>.out</code>

<code>slave3: starting datanode, logging to </code><code>/home/hadoop/hadoop-2</code><code>.7.0</code><code>/logs/hadoop-hadoop-datanode-slave3</code><code>.out</code>

<code>slave2: starting datanode, logging to </code><code>/home/hadoop/hadoop-2</code><code>.7.0</code><code>/logs/hadoop-hadoop-datanode-slave2</code><code>.out</code>

<code>slave1: starting datanode, logging to </code><code>/home/hadoop/hadoop-2</code><code>.7.0</code><code>/logs/hadoop-hadoop-datanode-slave1</code><code>.out</code>

<code>Starting secondary namenodes [0.0.0.0]</code>

<code>0.0.0.0: starting secondarynamenode, logging to </code><code>/home/hadoop/hadoop-2</code><code>.7.0</code><code>/logs/hadoop-hadoop-secondarynamenode-master</code><code>.out</code>

<code>starting yarn daemons</code>

<code>starting resourcemanager, logging to </code><code>/home/hadoop/hadoop-2</code><code>.7.0</code><code>/logs/yarn-hadoop-resourcemanager-master</code><code>.out</code>

<code>slave3: starting nodemanager, logging to </code><code>/home/hadoop/hadoop-2</code><code>.7.0</code><code>/logs/yarn-hadoop-nodemanager-slave3</code><code>.out</code>

<code>slave2: starting nodemanager, logging to </code><code>/home/hadoop/hadoop-2</code><code>.7.0</code><code>/logs/yarn-hadoop-nodemanager-slave2</code><code>.out</code>

<code>slave1: starting nodemanager, logging to </code><code>/home/hadoop/hadoop-2</code><code>.7.0</code><code>/logs/yarn-hadoop-nodemanager-slave1</code><code>.out</code>

<code>[hadoop@master sbin]$ </code><code>netstat</code>  <code>-lntup </code>

<code>(Not all processes could be identified, non-owned process info</code>

<code> </code><code>will not be shown, you would have to be root to see it all.)</code>

<code>Active Internet connections (only servers)</code>

<code>Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID</code><code>/Program</code> <code>name    </code>

<code>tcp        0      0 192.168.56.100:9000     0.0.0.0:*               LISTEN      4405</code><code>/java</code>           

<code>tcp        0      0 0.0.0.0:50090           0.0.0.0:*               LISTEN      4606</code><code>/java</code>           

<code>tcp        0      0 0.0.0.0:50070           0.0.0.0:*               LISTEN      4405</code><code>/java</code>           

<code>tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -                   </code>

<code>tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      -                   </code>

<code>tcp6       0      0 :::22                   :::*                    LISTEN      -                   </code>

<code>tcp6       0      0 :::8088                 :::*                    LISTEN      4757</code><code>/java</code>           

<code>tcp6       0      0 ::1:25                  :::*                    LISTEN      -                   </code>

<code>tcp6       0      0 :::8030                 :::*                    LISTEN      4757</code><code>/java</code>           

<code>tcp6       0      0 :::8031                 :::*                    LISTEN      4757</code><code>/java</code>           

<code>tcp6       0      0 :::8032                 :::*                    LISTEN      4757</code><code>/java</code>           

<code>tcp6       0      0 :::8033                 :::*                    LISTEN      4757</code><code>/java</code>           

<code>[hadoop@master sbin]$</code>

<a href="http://192.168.56.100:50070/dfshealth.html#tab-overview" target="_blank">http://192.168.56.100:50070/dfshealth.html#tab-overview</a>

<a href="http://192.168.56.103:8042/node/allApplications" target="_blank">http://192.168.56.103:8042/node/allApplications</a>

<a href="http://192.168.56.100:50090/status.html" target="_blank">http://192.168.56.100:50090/status.html</a>
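Once the web UIs above respond, a quick end-to-end check is to submit the bundled example job; the examples jar is the same one visible in the classpath from the format log. A sketch, assuming the install path used throughout this post:

```shell
# Submit the bundled pi estimator to exercise HDFS + YARN end to end.
HADOOP_HOME=/home/hadoop/hadoop-2.7.0
JAR="$HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.0.jar"
if [ -f "$JAR" ]; then
  "$HADOOP_HOME/bin/hadoop" jar "$JAR" pi 2 5   # 2 map tasks, 5 samples each
else
  echo "note: $JAR not found (run this on a cluster node)"
fi
```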

This article is reproduced from the 51CTO blog of 小小三郎1; original link: http://blog.51cto.com/wsxxsl/1945709. Please contact the original author before republishing.