While working through the Hadoop Heima (黑马) log-analysis project, I needed to join several result tables. This post uses Hive to walk through those table joins in detail.
1. Count the daily PV (page views)
hive> create table hmbbs_pv
> as select count(*) as pv from hmbbs_table;
Check the result:
hive> describe hmbbs_pv;
OK
pv bigint
Time taken: ... seconds
hive> select pv from hmbbs_pv;
Starting Job = job_1469064014798_0058, Tracking URL = http://hadoop22:/proxy/application_1469064014798_0058/
Kill Command = /usr/local/hadoop/bin/hadoop job -Dmapred.job.tracker=ignorethis -kill job_1469064014798_0058
...(map/reduce progress and counters omitted)
Ended Job = job_1469064014798_0058
OK
...
Time taken: ... seconds
2. Count the daily register (newly registered users)
hive> create table hmbbs_register
> as select count(*) as register
> from hmbbs_table
> where instr(urllog,'member.php?mod=register') > 0;
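The filter works because Hive's built-in `instr(str, substr)` returns the 1-based position of `substr` inside `str`, and 0 when it is absent, so `> 0` keeps only requests that hit the registration page. A quick sketch (the sample URLs are made up for illustration):

```sql
-- instr returns the 1-based position of the substring, or 0 if not found
select instr('member.php?mod=register&u=1', 'member.php?mod=register');  -- 1, a register hit
select instr('forum.php?mod=viewthread',   'member.php?mod=register');  -- 0, filtered out
```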
Check the result:
hive> describe hmbbs_register;
OK
register bigint
Time taken: ... seconds
hive> select register from hmbbs_register;
Starting Job = job_1469064014798_0061, Tracking URL = http://hadoop22:/proxy/application_1469064014798_0061/
Kill Command = /usr/local/hadoop/bin/hadoop job -Dmapred.job.tracker=ignorethis -kill job_1469064014798_0061
...(map/reduce progress and counters omitted)
Ended Job = job_1469064014798_0061
OK
...
Time taken: ... seconds
3. Count the daily unique IPs
hive> create table hmbbs_ip as
> select count(distinct iplog) as ip
> from hmbbs_table;
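In classic Hive, `count(distinct iplog)` funnels every row through a single reducer. An equivalent rewrite that often scales better on large logs (same answer, purely an optimization sketch) splits the distinct and the count into separate stages:

```sql
-- same result as count(distinct iplog); the distinct runs as its own
-- stage, so the final count over the deduplicated rows is cheap
select count(*) as ip
from (select distinct iplog from hmbbs_table) t;
```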
Check the result:
hive> describe hmbbs_ip;
OK
ip bigint
Time taken: ... seconds
hive> select ip from hmbbs_ip;
Starting Job = job_1469064014798_0063, Tracking URL = http://hadoop22:/proxy/application_1469064014798_0063/
Kill Command = /usr/local/hadoop/bin/hadoop job -Dmapred.job.tracker=ignorethis -kill job_1469064014798_0063
...(map/reduce progress and counters omitted)
Ended Job = job_1469064014798_0063
OK
...
Time taken: ... seconds
4. Count the daily jumper (one-hit visitors, used for the bounce rate)
hive> CREATE TABLE hmbbs_jumper AS SELECT COUNT(*) AS jumper FROM (SELECT COUNT(iplog) AS times FROM hmbbs_table GROUP BY iplog HAVING times=1) e;
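Note that `hmbbs_jumper` stores the number of IPs that appear exactly once, not the rate itself. If the bounce *rate* is wanted, it can be derived from the two single-row tables; this follow-up query is a hypothetical addition, not part of the original project:

```sql
-- bounce rate = one-hit visitors / unique IPs; both tables hold a
-- single row, so 1=1 works as a safe cross join
select a.jumper / b.ip as jump_rate
from hmbbs_jumper a
join hmbbs_ip b on 1=1;
```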
Check the result:
hive> describe hmbbs_jumper;
OK
jumper bigint
Time taken: ... seconds
hive> select jumper from hmbbs_jumper;
Starting Job = job_1469064014798_0066, Tracking URL = http://hadoop22:/proxy/application_1469064014798_0066/
Kill Command = /usr/local/hadoop/bin/hadoop job -Dmapred.job.tracker=ignorethis -kill job_1469064014798_0066
...(map/reduce progress and counters omitted)
Ended Job = job_1469064014798_0066
OK
...
Time taken: ... seconds
At this point, all four tables above hold their results:
hive> show tables;
OK
hmbbs_ip
hmbbs_jumper
hmbbs_pv
hmbbs_register
hmbbs_table
Time taken: ... seconds
hive> select * from hmbbs_ip;
OK
...
Time taken: ... seconds
hive> select * from hmbbs_jumper;
OK
...
Time taken: ... seconds
hive> select * from hmbbs_pv;
OK
...
Time taken: ... seconds
hive> select * from hmbbs_register;
OK
...
Time taken: ... seconds
Now for the table joins, built up layer by layer:
Table join, layer 1 (just the join skeleton, conditions left blank):
select * from hmbbs_pv
join hmbbs_register on ...
join hmbbs_ip on ...
join hmbbs_jumper on ...
Table join, layer 2 (add the join conditions; each table holds one row, so 1=1 acts as a cross join):
select * from hmbbs_pv
join hmbbs_register on 1=1
join hmbbs_ip on 1=1
join hmbbs_jumper on 1=1
Table join, layer 3 (give each table an alias: hmbbs_pv a, hmbbs_register b, hmbbs_ip c, hmbbs_jumper d):
select * from hmbbs_pv a
join hmbbs_register b on 1=1
join hmbbs_ip c on 1=1
join hmbbs_jumper d on 1=1
Table join, layer 4 (select the specific field from each table):
select a.pv, b.register, c.ip, d.jumper
from hmbbs_pv a
join hmbbs_register b on 1=1
join hmbbs_ip c on 1=1
join hmbbs_jumper d on 1=1
Table join, layer 5 (add a date literal, making five columns):
select '2013_05_30', a.pv, b.register, c.ip, d.jumper
from hmbbs_pv a
join hmbbs_register b on 1=1
join hmbbs_ip c on 1=1
join hmbbs_jumper d on 1=1
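To keep a daily history, the layer-5 result can be written into a summary table. The table name `hmbbs_stat` and its schema below are assumptions for illustration, not part of the original project:

```sql
-- hypothetical daily summary table
create table if not exists hmbbs_stat (
    logdate  string,
    pv       bigint,
    register bigint,
    ip       bigint,
    jumper   bigint
);

-- persist one summary row per day
insert overwrite table hmbbs_stat
select '2013_05_30', a.pv, b.register, c.ip, d.jumper
from hmbbs_pv a
join hmbbs_register b on 1=1
join hmbbs_ip c on 1=1
join hmbbs_jumper d on 1=1;
```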
If you spot any problems, corrections are welcome!