
[Hadoop] Writing and Running WordCount.java

1. Start in the Hadoop folder, i.e., on a Linux system, the folder you get after extracting the Hadoop archive.

This folder contains subdirectories such as bin, conf, ivy, lib, sbin, share, and src, along with files such as hadoop-client-1.2.1.jar and hadoop-examples-1.2.1.jar. In what follows, assume its path is /usr/local/hadoop.

First, write /usr/local/hadoop/WordCount.java:

package org.myorg;

import java.io.IOException;
import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;

public class WordCount {

  // Mapper: splits each input line into tokens and emits a (word, 1) pair per token.
  public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
      String line = value.toString();
      StringTokenizer tokenizer = new StringTokenizer(line);
      while (tokenizer.hasMoreTokens()) {
        word.set(tokenizer.nextToken());
        output.collect(word, one);
      }
    }
  }

  // Reducer: sums the counts emitted for each word and writes (word, total).
  public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
      int sum = 0;
      while (values.hasNext()) {
        sum += values.next().get();
      }
      output.collect(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    // Configure the job using the classic (org.apache.hadoop.mapred) API.
    JobConf conf = new JobConf(WordCount.class);
    conf.setJobName("wordcount");

    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);

    conf.setMapperClass(Map.class);
    conf.setCombinerClass(Reduce.class);  // the reducer doubles as a combiner
    conf.setReducerClass(Reduce.class);

    conf.setInputFormat(TextInputFormat.class);
    conf.setOutputFormat(TextOutputFormat.class);

    // Input and output paths are taken from the command line.
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    JobClient.runJob(conf);
  }
}
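
The listing above uses the classic org.apache.hadoop.mapred API that ships with Hadoop 1.2.1. Later releases favor the newer org.apache.hadoop.mapreduce API; for comparison, here is a minimal sketch of the same job written against it (the class name WordCountNewApi is just an illustrative choice, and Job.getInstance assumes Hadoop 2.x or later, so this variant will not compile against the 1.2.1 jars used below):

package org.myorg;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountNewApi {

  // Mapper: same tokenize-and-emit logic, but written via Mapper.Context.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: values arrive as an Iterable rather than an Iterator.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "wordcount");
    job.setJarByClass(WordCountNewApi.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}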
           

2. Compile.

WordCount.java imports several packages under org.apache.hadoop.*; these classes are packaged in jar files such as hadoop-core-1.2.1.jar and hadoop-client-1.2.1.jar.

2.1 First, create a new folder /usr/local/hadoop/wordcount_classes

In the /usr/local/hadoop folder, simply run: mkdir wordcount_classes

2.2 Compile

Note that multiple jar files on the classpath are joined with a colon.

javac -classpath /usr/local/hadoop/hadoop-core-1.2.1.jar:/usr/local/hadoop/hadoop-client-1.2.1.jar -d wordcount_classes/ WordCount.java
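
If you are unsure which jars a given Hadoop build needs at compile time, and your hadoop script supports the classpath subcommand (an assumption worth checking on your install), you can let Hadoop assemble the classpath for you; a sketch of the same compile step under that assumption:

javac -classpath "$(./bin/hadoop classpath)" -d wordcount_classes/ WordCount.java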
           

After this step, the nested subdirectory org/myorg/ is created under wordcount_classes, and the myorg directory contains the class files:

WordCount.class  WordCount$Map.class  WordCount$Reduce.class

3. Package into a jar

jar -cvf WordCount.jar -C wordcount_classes ./
           

This produces the target file WordCount.jar.
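
To double-check the packaging, you can list the jar's contents; the org/myorg class files from the previous step should all appear:

jar -tf WordCount.jar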

4. Create directories in Hadoop's DFS:

./bin/hadoop dfs -mkdir /user/hadoop/wordcount
           
./bin/hadoop dfs -mkdir /user/hadoop/wordcount/input
           

5. Create input files and put them into DFS

echo "Hello World Bye World" > file0
echo "Hello Hadoop Goodbye Hadoop" > file1
./bin/hadoop dfs -put file* /user/hadoop/wordcount/input
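
To confirm that the files landed in DFS, list the input directory:

./bin/hadoop dfs -ls /user/hadoop/wordcount/input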
           

6. Run

./bin/hadoop jar WordCount.jar org.myorg.WordCount /user/hadoop/wordcount/input /user/hadoop/wordcount/output
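
One caveat: the job will fail if the output directory already exists, because FileOutputFormat refuses to overwrite it. When re-running, delete the old output first:

./bin/hadoop dfs -rmr /user/hadoop/wordcount/output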
           

7. Results

Warning: $HADOOP_HOME is deprecated.

14/06/20 17:29:55 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
14/06/20 17:29:55 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/06/20 17:29:55 WARN snappy.LoadSnappy: Snappy native library not loaded
14/06/20 17:29:55 INFO mapred.FileInputFormat: Total input paths to process : 2
14/06/20 17:29:58 INFO mapred.JobClient: Running job: job_201406201518_0001
14/06/20 17:29:59 INFO mapred.JobClient:  map 0% reduce 0%
14/06/20 17:32:55 INFO mapred.JobClient:  map 33% reduce 0%
14/06/20 17:33:05 INFO mapred.JobClient:  map 66% reduce 0%
14/06/20 17:33:24 INFO mapred.JobClient:  map 100% reduce 0%
14/06/20 17:33:29 INFO mapred.JobClient:  map 100% reduce 22%
14/06/20 17:33:31 INFO mapred.JobClient:  map 100% reduce 100%
14/06/20 17:33:33 INFO mapred.JobClient: Job complete: job_201406201518_0001
14/06/20 17:33:33 INFO mapred.JobClient: Counters: 30
14/06/20 17:33:33 INFO mapred.JobClient:   Job Counters 
14/06/20 17:33:33 INFO mapred.JobClient:     Launched reduce tasks=1
14/06/20 17:33:33 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=369084
14/06/20 17:33:33 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
14/06/20 17:33:33 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
14/06/20 17:33:33 INFO mapred.JobClient:     Launched map tasks=3
14/06/20 17:33:33 INFO mapred.JobClient:     Data-local map tasks=3
14/06/20 17:33:33 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=29759
14/06/20 17:33:33 INFO mapred.JobClient:   File Input Format Counters 
14/06/20 17:33:33 INFO mapred.JobClient:     Bytes Read=53
           
View the output:

./bin/hadoop dfs -cat /user/hadoop/wordcount/output/part-00000
           

RESULT:

Warning: $HADOOP_HOME is deprecated.

Bye	1
Goodbye	1
Hadoop	2
Hello	2
World	2
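
To copy the result out of DFS to the local file system (the local directory name here is arbitrary):

./bin/hadoop dfs -get /user/hadoop/wordcount/output ./wordcount_output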