
Collecting logs with Flume and storing them in Hadoop

Overview

Preparation

1. Copy Hadoop's hdfs-site.xml and core-site.xml into flume/conf

2. Copy the Hadoop jars into Flume's lib directory

3. Configure flume2.conf

4. Start Flume (make sure HDFS is running first)

5. Test

Preparation

Flume itself should already be configured; see the earlier post on single-node Flume setup and testing.

The Hadoop cluster should already be up; see the posts on single-node Hadoop setup and Hadoop cluster setup.

Tools: Xshell 5, Xftp 5

1. Copy Hadoop's hdfs-site.xml and core-site.xml into flume/conf

[root@<hostname> ~]# cp /usr/hadoop/hadoop-2.7.3/etc/hadoop/core-site.xml /usr/flume/apache-flume-1.8.0-bin/conf/
[root@<hostname> ~]# cp /usr/hadoop/hadoop-2.7.3/etc/hadoop/hdfs-site.xml /usr/flume/apache-flume-1.8.0-bin/conf/
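Copying these two files is what later lets the HDFS sink resolve the cluster nameservice (ns) used in hdfs.path. A quick check that both files landed in the Flume conf directory (assuming the paths above):

[root@<hostname> ~]# ls /usr/flume/apache-flume-1.8.0-bin/conf/ | grep site.xml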
           

2. Copy the Hadoop jars into Flume's lib directory

[root@<hostname> ~]# cp $HADOOP_HOME/share/hadoop/common/hadoop-common-2.7.3.jar /usr/flume/apache-flume-1.8.0-bin/lib/
[root@<hostname> ~]# cp $HADOOP_HOME/share/hadoop/common/lib/hadoop-auth-2.7.3.jar /usr/flume/apache-flume-1.8.0-bin/lib/
[root@<hostname> ~]# cp $HADOOP_HOME/share/hadoop/common/lib/commons-configuration-1.6.jar /usr/flume/apache-flume-1.8.0-bin/lib/
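To confirm the three jars are now visible to Flume, a quick listing (same paths as above) should show all of them:

[root@<hostname> ~]# ls /usr/flume/apache-flume-1.8.0-bin/lib/ | grep -E 'hadoop-common|hadoop-auth|commons-configuration'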
           

3. Configure flume2.conf

Create a file named flume2.conf under the conf directory of the Flume installation and add the following configuration:

# Define the agent name and the names of its source, channel, and sink
a4.sources = r1
a4.channels = c1
a4.sinks = k1

# Define the source
a4.sources.r1.type = spooldir
# Create this directory first and make sure it is empty (see the command after this block)
a4.sources.r1.spoolDir = /logs

# Define the channel
a4.channels.c1.type = memory
a4.channels.c1.capacity = 10000
a4.channels.c1.transactionCapacity = 100

# Define an interceptor that adds a timestamp to each event
# (the HDFS sink needs it to resolve %Y%m%d in the path)
a4.sources.r1.interceptors = i1
a4.sources.r1.interceptors.i1.type = org.apache.flume.interceptor.TimestampInterceptor$Builder

# Define the sink
a4.sinks.k1.type = hdfs
# HDFS target path; for a single node write hdfs://localhost:9000/flume/%Y%m%d
# for a cluster, ns is the nameservice name defined in hdfs-site.xml
a4.sinks.k1.hdfs.path = hdfs://ns/flume/%Y%m%d
a4.sinks.k1.hdfs.filePrefix = events-
a4.sinks.k1.hdfs.fileType = DataStream
# Do not roll files based on the number of events
a4.sinks.k1.hdfs.rollCount = 0
# Roll a new file on HDFS once it reaches 128 MB
a4.sinks.k1.hdfs.rollSize = 134217728
# Roll a new file every 60 seconds
a4.sinks.k1.hdfs.rollInterval = 60

# Wire the source, channel, and sink together
a4.sources.r1.channels = c1
a4.sinks.k1.channel = c1
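Before starting the agent, create the spooling directory referenced above and leave it empty; the HDFS path does not need to be pre-created, since the sink normally creates /flume/<date> on its own:

[root@<hostname> ~]# mkdir -p /logs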
           

4. Start Flume (make sure HDFS is started first)

To start HDFS, see the Hadoop startup steps in the single-node and cluster setup posts.

[root@<hostname> apache-flume-1.8.0-bin]# flume-ng agent -n a4 -c conf -f conf/flume2.conf -Dflume.root.logger=INFO,console
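The command above runs the agent in the foreground with console logging, which is handy for this test. If you prefer to keep it running after closing the session, a backgrounded variant (same flags, output redirected to a file) looks like this:

[root@<hostname> apache-flume-1.8.0-bin]# nohup flume-ng agent -n a4 -c conf -f conf/flume2.conf > flume.log 2>&1 &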
           

5. Test

Open a new terminal window.

Create a file named a under /usr/tmp:

[root@<hostname> ~]# cd /usr/tmp/
[root@<hostname> tmp]# ls
hive_add.jar  hive_name.jar  hive_xx.jar  student
[root@<hostname> tmp]# vim a

           

Type in some arbitrary content:

dada
dada
aaa
aa

aaa
           

Save and exit, then copy the file into the /logs directory:

[root@<hostname> tmp]# cp a /logs/
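Once the agent has ingested the file, the spooling directory source renames it, by default by appending a .COMPLETED suffix, so listing /logs is a quick way to confirm the pickup:

[root@<hostname> tmp]# ls /logs/
a.COMPLETED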
           

Switch back to the original window; the Flume console should log the file being picked up and written to HDFS.


Then open http://<your host IP>:50070 in a browser to check the result.
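You can also verify from the command line instead of the web UI; with the configuration above, the data lands under /flume/<current date> with the events- file prefix:

[root@<hostname> ~]# hdfs dfs -ls /flume
[root@<hostname> ~]# hdfs dfs -ls /flume/`date +%Y%m%d`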


If the file shows up there, the logs have been successfully collected into Hadoop.
