
Running wordcount on a pseudo-distributed Hadoop system

In my wordcount example test, the job failed after running: it created the output directory (which I named /out), but the directory contained no files at all.
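For context, the run looked roughly like this (a sketch: /input and /out follow my naming, words.txt is a placeholder input file, and the examples jar path and version are assumed to match your install):

    # upload some input to HDFS, then run the built-in wordcount example
    hdfs dfs -mkdir -p /input
    hdfs dfs -put words.txt /input
    hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.10.0.jar wordcount /input /out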

The error is below. Since I forgot to copy my own error output, the messages here are copied from another blogger's post:

16/09/01 09:32:29 INFO mapreduce.Job: Running job: job_1472644198158_0001
16/09/01 09:32:46 INFO mapreduce.Job: Job job_1472644198158_0001 running in uber mode : false
16/09/01 09:32:46 INFO mapreduce.Job: map 0% reduce 0%
16/09/01 09:33:08 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000000_0, Status : FAILED
16/09/01 09:33:08 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000001_0, Status : FAILED
16/09/01 09:33:25 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000001_1, Status : FAILED
16/09/01 09:33:29 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000000_1, Status : FAILED
16/09/01 09:33:41 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000001_2, Status : FAILED
16/09/01 09:33:45 INFO mapreduce.Job: Task Id : attempt_1472644198158_0001_m_000000_2, Status : FAILED
16/09/01 09:33:58 INFO mapreduce.Job: map 100% reduce 100%
16/09/01 09:33:58 INFO mapreduce.Job: Job job_1472644198158_0001 failed with state FAILED due to: Task failed task_1472644198158_0001_m_000001
Job failed as tasks failed. failedMaps:1 failedReduces:0
16/09/01 09:33:58 INFO mapreduce.Job: Counters: 17
    Job Counters
        Failed map tasks=7
        Killed map tasks=1
        Killed reduce tasks=1
        Launched map tasks=8
        Other local map tasks=6
        Data-local map tasks=2
        Total time spent by all maps in occupied slots (ms)=123536
        Total time spent by all reduces in occupied slots (ms)=0
        Total time spent by all map tasks (ms)=123536
        Total time spent by all reduce tasks (ms)=0
        Total vcore-milliseconds taken by all map tasks=123536
        Total vcore-milliseconds taken by all reduce tasks=0
        Total megabyte-milliseconds taken by all map tasks=126500864
        Total megabyte-milliseconds taken by all reduce tasks=0
    Map-Reduce Framework
        CPU time spent (ms)=0
        Physical memory (bytes) snapshot=0
        Virtual memory (bytes) snapshot=0

[[email protected] mapreduce]# hadoop jar hadoop-mapreduce-examples-2.7.3.jar wordcount /input /output
16/09/01 10:16:30 INFO client.RMProxy: Connecting to ResourceManager at /114.XXX.XXX.XXX:8032
org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://master:9000/output already exists
    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:146)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:266)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:139)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1290)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1287)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1287)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1308)
    at org.apache.hadoop.examples.WordCount.main(WordCount.java:87)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
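Side note: the FileAlreadyExistsException at the end of the copied log is a separate, simpler problem. MapReduce refuses to write into an output directory that already exists, so the old directory has to be deleted before re-running the job. A minimal sketch, with the paths taken from the log above (substitute your own):

    hdfs dfs -rm -r /output
    hadoop jar hadoop-mapreduce-examples-2.7.3.jar wordcount /input /output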

Solution

Hadoop version: 2.10.0

Going back over my Hadoop configuration and the wordcount run, the failure most likely came from one of these places:

  1. A mistake in the yarn-site.xml configuration
  2. Accidentally formatting the NameNode twice, and not cleaning up thoroughly when undoing the second format

(I haven't figured out the exact cause yet; I'm leaving this open and will come back with an answer once I have it.)

Fix the yarn-site.xml configuration:

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
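Note that the class property key has to match the aux-service name (mapreduce_shuffle, with an underscore); my original file had it misspelled. After editing yarn-site.xml, YARN needs a restart so the NodeManager picks up the shuffle aux-service; roughly:

    stop-yarn.sh
    start-yarn.sh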

The other files (hadoop-env.sh, core-site.xml, hdfs-site.xml, mapred-site.xml) are configured as usual, so I won't repeat them here.

Completely undoing the format operation:

When configuring the files above, I had manually created the storage directories /tmp/dfs/name and /tmp/dfs/data (that is my naming); undoing the second format should have included deleting these directories. So I used rm -r <directory> to delete the tmp directory together with everything under it, then formatted again, as sketched below.
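A sketch of the cleanup, assuming my /tmp/dfs layout (adjust the path to wherever dfs.namenode.name.dir and dfs.datanode.data.dir point on your machine):

    stop-all.sh            # stop all daemons before touching the storage dirs
    rm -r /tmp/dfs         # removes both the name/ and data/ directories
    hdfs namenode -format  # reformat the NameNode once, cleanly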

Start up with start-all.sh, then check jps, localhost:50070, and localhost:8088; everything looks normal.
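For reference, in pseudo-distributed mode jps should list all five Hadoop daemons (process IDs will differ); if any are missing, the configuration or cleanup still has a problem:

    jps
    # expected processes besides Jps itself:
    # NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager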

Running wordcount now works too, and the generated output directory contains two files.
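With a single reducer those two files are typically _SUCCESS (an empty marker written on job completion) and part-r-00000 (the actual word counts). A quick check, using my /out path:

    hdfs dfs -ls /out
    hdfs dfs -cat /out/part-r-00000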
