1. Start in the Hadoop folder, i.e. the folder where the Hadoop distribution was extracted on a Linux system.
This folder contains subfolders such as bin, conf, ivy, lib, sbin, share, and src, as well as jar files such as hadoop-core-1.2.1.jar, hadoop-client-1.2.1.jar, and hadoop-examples-1.2.1.jar. In what follows, assume
this folder is /usr/local/hadoop.
First, write /usr/local/hadoop/WordCount.java:
package org.myorg;

import java.io.IOException;
import java.util.*;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;

public class WordCount {

    // Mapper: emits (word, 1) for every whitespace-separated token in the input line.
    public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    // Reducer: sums the counts collected for each word.
    public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        // Reusing the reducer as the combiner is safe here because summing
        // is associative and commutative.
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}
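The class above uses the classic org.apache.hadoop.mapred API. Hadoop 1.2.1 also ships the newer org.apache.hadoop.mapreduce API; for comparison, here is a minimal sketch of the same job against that API. The class name NewWordCount is my own choice, not part of the distribution; it compiles and packages exactly like the version above.

package org.myorg;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class NewWordCount {

    // Same tokenize-and-emit logic; Context replaces OutputCollector/Reporter.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Values arrive as an Iterable instead of an Iterator.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "wordcount");  // Job.getInstance() arrived later; 1.x uses this constructor
        job.setJarByClass(NewWordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}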
2. Compile.
WordCount.java imports several packages under org.apache.hadoop.*; the classes they contain live in jar files such as hadoop-core-1.2.1.jar and hadoop-client-1.2.1.jar.
2.1 First, create a new folder /usr/local/hadoop/wordcount_classes.
From inside /usr/local/hadoop, simply run: mkdir wordcount_classes
2.2 Compile.
Note that multiple jars in the -classpath argument are joined with a colon (:).
javac -classpath /usr/local/hadoop/hadoop-core-1.2.1.jar:/usr/local/hadoop/hadoop-client-1.2.1.jar -d wordcount_classes/ WordCount.java
After this step, the subdirectory org/myorg/ is generated under wordcount_classes, and the myorg directory contains the class files:
WordCount.class WordCount$Map.class WordCount$Reduce.class
(the nested Map and Reduce classes are each compiled into their own file).
3. Package the classes into a jar:
jar -cvf WordCount.jar -C wordcount_classes ./
This produces the target file: WordCount.jar
4. Create the input folders in Hadoop's DFS:
./bin/hadoop dfs -mkdir /user/hadoop/wordcount
./bin/hadoop dfs -mkdir /user/hadoop/wordcount/input
5. Create the input files and put them into DFS (a Java alternative is sketched after these commands):
echo "Hello World Bye World" > file0
echo "Hello Hadoop Goodbye Hadoop" > file1
./bin/hadoop dfs -put file* /user/hadoop/wordcount/input
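For reference, steps 4 and 5 can also be done from Java through the HDFS FileSystem API instead of the shell. A minimal sketch, assuming the Hadoop conf directory is on the classpath so that Configuration picks up fs.default.name; the class name PutFiles is my own, and the file and folder names mirror the commands above:

package org.myorg;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PutFiles {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();       // reads core-site.xml from the classpath
        FileSystem fs = FileSystem.get(conf);           // the DFS named by fs.default.name
        Path input = new Path("/user/hadoop/wordcount/input");
        fs.mkdirs(input);                               // equivalent of dfs -mkdir
        fs.copyFromLocalFile(new Path("file0"), input); // equivalent of dfs -put
        fs.copyFromLocalFile(new Path("file1"), input);
        fs.close();
    }
}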
6. Run the job:
./bin/hadoop jar WordCount.jar org.myorg.WordCount /user/hadoop/wordcount/input /user/hadoop/wordcount/output
7. Results:
Warning: $HADOOP_HOME is deprecated.
14/06/20 17:29:55 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
14/06/20 17:29:55 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/06/20 17:29:55 WARN snappy.LoadSnappy: Snappy native library not loaded
14/06/20 17:29:55 INFO mapred.FileInputFormat: Total input paths to process : 2
14/06/20 17:29:58 INFO mapred.JobClient: Running job: job_201406201518_0001
14/06/20 17:29:59 INFO mapred.JobClient: map 0% reduce 0%
14/06/20 17:32:55 INFO mapred.JobClient: map 33% reduce 0%
14/06/20 17:33:05 INFO mapred.JobClient: map 66% reduce 0%
14/06/20 17:33:24 INFO mapred.JobClient: map 100% reduce 0%
14/06/20 17:33:29 INFO mapred.JobClient: map 100% reduce 22%
14/06/20 17:33:31 INFO mapred.JobClient: map 100% reduce 100%
14/06/20 17:33:33 INFO mapred.JobClient: Job complete: job_201406201518_0001
14/06/20 17:33:33 INFO mapred.JobClient: Counters: 30
14/06/20 17:33:33 INFO mapred.JobClient: Job Counters
14/06/20 17:33:33 INFO mapred.JobClient: Launched reduce tasks=1
14/06/20 17:33:33 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=369084
14/06/20 17:33:33 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
14/06/20 17:33:33 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
14/06/20 17:33:33 INFO mapred.JobClient: Launched map tasks=3
14/06/20 17:33:33 INFO mapred.JobClient: Data-local map tasks=3
14/06/20 17:33:33 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=29759
14/06/20 17:33:33 INFO mapred.JobClient: File Input Format Counters
14/06/20 17:33:33 INFO mapred.JobClient: Bytes Read=53
To view the result:
./bin/hadoop dfs -cat /user/hadoop/wordcount/output/part-00000
RESULT:
Warning: $HADOOP_HOME is deprecated.
Bye 1
Goodbye 1
Hadoop 2
Hello 2
World 2
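One last note on the job output above: the first WARN line says "Use GenericOptionsParser for parsing the arguments. Applications should implement Tool". That warning goes away if the driver implements the Tool interface, which also gives the job free handling of generic options such as -D and -files. A minimal sketch of the changed driver, following the standard ToolRunner pattern rather than anything specific to this tutorial; the Map and Reduce classes stay exactly as in step 1:

package org.myorg;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

// Driver-only change: only main() moves into run().
public class WordCount extends Configured implements Tool {

    public int run(String[] args) throws Exception {
        // getConf() already reflects any generic options that
        // GenericOptionsParser consumed before run() was called.
        JobConf conf = new JobConf(getConf(), WordCount.class);
        conf.setJobName("wordcount");
        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);
        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);
        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));
        JobClient.runJob(conf);
        return 0;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new WordCount(), args));
    }
}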