Table of Contents
- 1 Serialization Overview
  - 1.1 What Is Serialization
  - 1.2 Why Serialize
  - 1.3 Why Not Use Java Serialization
- 2. Custom Bean Objects Implementing the Serialization Interface (Writable)
- 3 Serialization Case Study
  - 1. Requirements
  - 2. Requirements Analysis
  - 3. Writing the MapReduce Program
  - 4. Packaging and Testing on the Cluster
  - 5. Results
1 Serialization Overview
1.1 What Is Serialization
Serialization is the process of turning an in-memory object into a byte sequence (or another data transfer format) so that it can be stored on disk (persisted) and transmitted over the network.
Deserialization is the reverse: turning a received byte sequence (or another data transfer format), or data persisted on disk, back into an in-memory object.
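As a concrete illustration, the following minimal sketch (plain Java, with a hypothetical Person class used only for demonstration) serializes an in-memory object into a byte array and then deserializes it back; the Hadoop mechanism discussed below follows the same idea with a more compact byte format.

import java.io.*;

// A hypothetical class used only to illustrate serialization and deserialization.
class Person implements Serializable {
    private static final long serialVersionUID = 1L;
    String name;
    long id;
    Person(String name, long id) { this.name = name; this.id = id; }
}

public class SerializationDemo {
    public static void main(String[] args) throws Exception {
        // Serialization: in-memory object -> byte sequence
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(new Person("tom", 1001L));
        }
        byte[] bytes = bos.toByteArray(); // could now be written to disk or sent over the network

        // Deserialization: byte sequence -> in-memory object
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            Person p = (Person) ois.readObject();
            System.out.println(p.name + " " + p.id); // tom 1001
        }
    }
}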
1.2 Why Serialize
Generally speaking, "live" objects exist only in memory and are gone once the machine is powered off. Moreover, a "live" object can only be used by the local process and cannot be sent to another computer on the network. Serialization makes it possible to store "live" objects and to send them to remote computers.
1.3 Why Not Use Java Serialization
Java's serialization (Serializable) is a heavyweight framework: after an object is serialized, it carries a lot of extra information (various checksums, a header, the inheritance hierarchy, and so on), which makes efficient network transfer difficult. For this reason, Hadoop developed its own serialization mechanism: Writable.
Characteristics of Hadoop serialization:
(1) Compact: uses storage space efficiently.
(2) Fast: little extra overhead when reading and writing data.
(3) Extensible: can be upgraded as the communication protocol evolves.
(4) Interoperable: supports interaction across multiple languages.
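To see the "compact" point in practice, the sketch below (assuming hadoop-common is on the classpath) serializes the same long value once with Java's ObjectOutputStream and once with Hadoop's LongWritable and prints the byte counts: the Writable form is just the 8 raw bytes of the long, while the Java form also carries class metadata.

import java.io.*;
import org.apache.hadoop.io.LongWritable;

public class CompareSerialization {
    public static void main(String[] args) throws Exception {
        long value = 10000L;

        // Java serialization: class metadata + value
        ByteArrayOutputStream javaBytes = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(javaBytes)) {
            oos.writeObject(Long.valueOf(value));
        }

        // Hadoop Writable serialization: just the raw field bytes
        ByteArrayOutputStream hadoopBytes = new ByteArrayOutputStream();
        try (DataOutputStream dos = new DataOutputStream(hadoopBytes)) {
            new LongWritable(value).write(dos);
        }

        System.out.println("Java Serializable: " + javaBytes.size() + " bytes");  // dozens of bytes
        System.out.println("Hadoop Writable:   " + hadoopBytes.size() + " bytes"); // 8 bytes
    }
}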
2. Custom Bean Objects Implementing the Serialization Interface (Writable)
In enterprise development, the commonly used basic serialization types often cannot satisfy every need. For example, to pass a bean object around inside the Hadoop framework, that object must implement the serialization interface.
The concrete steps for making a bean object serializable are the following seven:
(1) The class must implement the Writable interface.
(2) During deserialization, the framework invokes the empty (no-argument) constructor via reflection, so the class must provide one:
public FlowBean() {
super();
}
(3) Override the serialization method:
@Override
public void write(DataOutput out) throws IOException {
out.writeLong(upFlow);
out.writeLong(downFlow);
out.writeLong(sumFlow);
}
(4) Override the deserialization method:
@Override
public void readFields(DataInput in) throws IOException {
upFlow = in.readLong();
downFlow = in.readLong();
sumFlow = in.readLong();
}
(5) Note that the field order during deserialization must be exactly the same as during serialization.
(6) To display the result in the output file, override toString(); separating the fields with "\t" makes later processing easier.
(7) If the custom bean needs to be transmitted as a key, it must also implement the Comparable interface, because the Shuffle phase of the MapReduce framework requires that keys be sortable (see the sorting case study later):
@Override
public int compareTo(FlowBean o) {
// Sort in descending order, from largest to smallest
return this.sumFlow > o.getSumFlow() ? -1 : 1;
}
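A quick way to confirm that steps (3) through (5) fit together is to round-trip the bean through a byte stream. The sketch below is a hypothetical test snippet (it assumes the complete FlowBean class shown in the case study that follows): the fields read back must equal the fields written, and reordering the reads in readFields() would silently scramble the values.

import java.io.*;

public class FlowBeanRoundTrip {
    public static void main(String[] args) throws IOException {
        FlowBean original = new FlowBean(1116L, 954L);

        // Serialize: write() emits upFlow, downFlow, sumFlow in that order
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        original.write(new DataOutputStream(bos));

        // Deserialize: readFields() must read the longs back in the same order
        FlowBean copy = new FlowBean();
        copy.readFields(new DataInputStream(new ByteArrayInputStream(bos.toByteArray())));

        System.out.println(original); // 1116	954	2070
        System.out.println(copy);     // identical if the read/write orders match
    }
}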
3 Serialization Case Study
1. Requirements
Count the total upstream traffic, downstream traffic, and overall traffic consumed by each phone number.
(1) Input data: phone_data.txt
1 13736230513 192.196.100.1 www.atguigu.com 2481 24681 200
2 13846544121 192.196.100.2 264 0 200
3 13956435636 192.196.100.3 132 1512 200
4 13966251146 192.168.100.1 240 0 404
5 18271575951 192.168.100.2 www.atguigu.com 1527 2106 200
6 84188413 192.168.100.3 www.atguigu.com 4116 1432 200
7 13590439668 192.168.100.4 1116 954 200
8 15910133277 192.168.100.5 www.hao123.com 3156 2936 200
9 13729199489 192.168.100.6 240 0 200
10 13630577991 192.168.100.7 www.shouhu.com 6960 690 200
11 15043685818 192.168.100.8 www.baidu.com 3659 3538 200
12 15959002129 192.168.100.9 www.atguigu.com 1938 180 500
13 13560439638 192.168.100.10 918 4938 200
14 13470253144 192.168.100.11 180 180 200
15 13682846555 192.168.100.12 www.qq.com 1938 2910 200
16 13992314666 192.168.100.13 www.gaga.com 3008 3720 200
17 13509468723 192.168.100.14 www.qinghua.com 7335 110349 404
18 18390173782 192.168.100.15 www.sogou.com 9531 2412 200
19 13975057813 192.168.100.16 www.baidu.com 11058 48243 200
20 13768778790 192.168.100.17 120 120 200
21 13568436656 192.168.100.18 www.alibaba.com 2481 24681 200
22 13568436656 192.168.100.19 1116 954 200
(2) Input data format:
7 13560436666 120.196.100.99 1116 954 200
id  phone number  network IP  upstream traffic  downstream traffic  network status code
(3) Expected output data format:
13560436666 1116 954 2070
phone number  upstream traffic  downstream traffic  total traffic
2. Requirements Analysis
In the Map stage, each input line is split on tabs; the phone number becomes the output key, and the upstream and downstream traffic (taken as the third- and second-to-last fields, because some records have no URL column) are wrapped in a FlowBean as the output value. In the Reduce stage, all FlowBean values for the same phone number are accumulated, and the phone number together with its upstream, downstream, and total traffic is written out.
3. Writing the MapReduce Program
(1) Write the FlowBean object for the traffic statistics
public class FlowBean implements Writable {
private long upFlow; // upstream traffic
private long downFlow; // downstream traffic
private long sumFlow; // total traffic
// Deserialization invokes the no-argument constructor via reflection, so it must exist
public FlowBean() { }
public FlowBean(long upFlow, long downFlow) {
this.upFlow = upFlow;
this.downFlow = downFlow;
this.sumFlow = upFlow + downFlow;
}
// Serialization
@Override
public void write(DataOutput out) throws IOException {
out.writeLong( upFlow );
out.writeLong( downFlow );
out.writeLong( sumFlow );
}
// Deserialization: the read order must exactly match the write order used in the serialization method
@Override
public void readFields(DataInput in) throws IOException {
upFlow = in.readLong();
downFlow = in.readLong();
sumFlow = in.readLong();
}
public long getUpFlow() {
return upFlow;
}
public void setUpFlow(long upFlow) {
this.upFlow = upFlow;
}
public long getDownFlow() {
return downFlow;
}
public void setDownFlow(long downFlow) {
this.downFlow = downFlow;
}
public long getSumFlow() {
return sumFlow;
}
public void setSumFlow(long sumFlow) {
this.sumFlow = sumFlow;
}
@Override
public String toString() {
return upFlow + "\t" + downFlow + "\t" + sumFlow ;
}
public void set(long upFlow, long downFlow) {
this.upFlow = upFlow;
this.downFlow = downFlow;
this.sumFlow = upFlow + downFlow;
}
}
(2) Write the Mapper class
public class FlowCountMapper extends Mapper<LongWritable, Text,Text,FlowBean> {
Text k = new Text();
FlowBean v = new FlowBean();
@Override
protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
// Get the input line and split it on tabs
String[] splits = value.toString().split( "\t" );
// The phone number becomes the output key
k.set( splits[1]);
// Take the upstream and downstream traffic counting from the end of the line, because some records have no URL field
long upFlow = Long.parseLong( splits[splits.length - 3] );
long downFlow = Long.parseLong( splits[splits.length - 2] );
// Populate the FlowBean value (set() fills upFlow, downFlow and sumFlow in one call)
v.set( upFlow,downFlow );
// Emit the key/value pair
context.write( k,v );
}
}
(3) Write the Reducer class
public class FlowCountReducer extends Reducer<Text,FlowBean,Text,FlowBean> {
FlowBean v = new FlowBean();
@Override
protected void reduce(Text key, Iterable<FlowBean> values, Context context) throws IOException, InterruptedException {
// Accumulators for upstream and downstream traffic
long sum_upFlow = 0;
long sum_downFlow = 0;
// Iterate over all beans for this phone number and accumulate their upstream and downstream traffic
for (FlowBean value : values) {
sum_upFlow += value.getUpFlow();
sum_downFlow += value.getDownFlow();
}
// Wrap the totals in the output bean
v.set( sum_upFlow,sum_downFlow );
// Emit the result
context.write( key,v );
}
}
(4) Write the Driver class
public class FlowsumDriver {
public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
// Get the job instance
Configuration conf = new Configuration();
Job job = Job.getInstance( conf );
// Specify the jar by the driver class
job.setJarByClass( FlowsumDriver.class );
// Set the Mapper and Reducer classes
job.setMapperClass( FlowCountMapper.class );
job.setReducerClass( FlowCountReducer.class );
// Set the Map-side output types
job.setMapOutputKeyClass( Text.class );
job.setMapOutputValueClass( FlowBean.class );
// Set the final output types
job.setOutputKeyClass( Text.class );
job.setOutputValueClass( FlowBean.class );
// Set the input and output paths
FileInputFormat.setInputPaths( job,new Path( args[0] ) );
FileOutputFormat.setOutputPath( job,new Path( args[1] ) );
// Submit the job and wait for completion
boolean result = job.waitForCompletion( true );
System.exit( result ? 0:1 );
}
}
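The job log in the next step prints a warning that the driver does not implement the Tool interface ("Implement the Tool interface and execute your application with ToolRunner to remedy this"). The following is a minimal sketch of the same driver rewritten with ToolRunner (the class name FlowsumToolDriver is hypothetical); it is not required for the case to run, but it lets generic options such as -D properties be parsed from the command line.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class FlowsumToolDriver extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
        // getConf() returns the Configuration already populated by ToolRunner's option parsing
        Job job = Job.getInstance(getConf());
        job.setJarByClass(FlowsumToolDriver.class);
        job.setMapperClass(FlowCountMapper.class);
        job.setReducerClass(FlowCountReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(FlowBean.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(FlowBean.class);
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new Configuration(), new FlowsumToolDriver(), args));
    }
}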
4. Packaging and Testing on the Cluster
[[email protected] jar]# yarn jar FlowSum.jar com.huan.flowsun.FlowsumDriver /huan/input1/phone_data.txt /huan/output2
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop/hadoop-2.7.2/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hbase/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
20/08/04 02:12:46 INFO client.RMProxy: Connecting to ResourceManager at huan01/192.168.168.234:8032
20/08/04 02:12:49 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
20/08/04 02:12:51 INFO input.FileInputFormat: Total input paths to process : 1
20/08/04 02:12:51 INFO lzo.GPLNativeCodeLoader: Loaded native gpl library from the embedded binaries
20/08/04 02:12:51 INFO lzo.LzoCodec: Successfully loaded & initialized native-lzo library [hadoop-lzo rev 52decc77982b58949890770d22720a91adce0c3f]
20/08/04 02:12:51 INFO mapreduce.JobSubmitter: number of splits:1
20/08/04 02:12:52 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1596447552943_0002
20/08/04 02:12:53 INFO impl.YarnClientImpl: Submitted application application_1596447552943_0002
20/08/04 02:12:54 INFO mapreduce.Job: The url to track the job: http://huan01:8088/proxy/application_1596447552943_0002/
20/08/04 02:12:54 INFO mapreduce.Job: Running job: job_1596447552943_0002
20/08/04 02:13:26 INFO mapreduce.Job: Job job_1596447552943_0002 running in uber mode : false
20/08/04 02:13:26 INFO mapreduce.Job: map 0% reduce 0%
20/08/04 02:13:59 INFO mapreduce.Job: map 100% reduce 0%
20/08/04 02:14:19 INFO mapreduce.Job: map 100% reduce 100%
20/08/04 02:14:20 INFO mapreduce.Job: Job job_1596447552943_0002 completed successfully
20/08/04 02:14:21 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=839
FILE: Number of bytes written=237431
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=1288
HDFS: Number of bytes written=550
HDFS: Number of read operations=6
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters
Launched map tasks=1
Launched reduce tasks=1
Data-local map tasks=1
Total time spent by all maps in occupied slots (ms)=27015
Total time spent by all reduces in occupied slots (ms)=17082
Total time spent by all map tasks (ms)=27015
Total time spent by all reduce tasks (ms)=17082
Total vcore-milliseconds taken by all map tasks=27015
Total vcore-milliseconds taken by all reduce tasks=17082
Total megabyte-milliseconds taken by all map tasks=27663360
Total megabyte-milliseconds taken by all reduce tasks=17491968
Map-Reduce Framework
Map input records=22
Map output records=22
Map output bytes=789
Map output materialized bytes=839
Input split bytes=110
Combine input records=0
Combine output records=0
Reduce input groups=21
Reduce shuffle bytes=839
Reduce input records=22
Reduce output records=21
Spilled Records=44
Shuffled Maps =1
Failed Shuffles=0
Merged Map outputs=1
GC time elapsed (ms)=679
CPU time spent (ms)=7390
Physical memory (bytes) snapshot=325103616
Virtual memory (bytes) snapshot=4132458496
Total committed heap usage (bytes)=214433792
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=1178
File Output Format Counters
Bytes Written=550
5. Results:
[[email protected] ~]# hadoop fs -cat /huan/output2/part-r-00000
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop/hadoop-2.7.2/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hbase/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
13470253144 180 180 360
13509468723 7335 110349 117684
13560439638 918 4938 5856
13568436656 3597 25635 29232
13590439668 1116 954 2070
13630577991 6960 690 7650
13682846555 1938 2910 4848
13729199489 240 0 240
13736230513 2481 24681 27162
13768778790 120 120 240
13846544121 264 0 264
13956435636 132 1512 1644
13966251146 240 0 240
13975057813 11058 48243 59301
13992314666 3008 3720 6728
15043685818 3659 3538 7197
15910133277 3156 2936 6092
15959002129 1938 180 2118
18271575951 1527 2106 3633
18390173782 9531 2412 11943
84188413 4116 1432 5548
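As a sanity check against the input data: phone number 13568436656 appears in two input records (upstream 2481 + 1116, downstream 24681 + 954), which gives 3597 upstream, 25635 downstream, and 3597 + 25635 = 29232 total, matching the corresponding line in the output above.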