
Flume: real-time data collection from Kafka to HDFS, with a load-balancing strategy

Design:

Two collector machines, pc1 and pc2. There are two HDFS-writing sink agents, one deployed on each machine, plus two load-balancing agents, also one per machine. Each load-balancing agent sends its events to the avro source side of both HDFS sink agents, as sketched below.
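Sketched as a data flow (all agent names, ports and paths are taken from the configuration below):

Kafka topic sdk_function_log
  -> lb-kafka-hdfs agent on pc1 (KafkaSource -> memory channel -> avro sinks k1,k2)
  -> lb-kafka-hdfs agent on pc2 (KafkaSource -> memory channel -> avro sinks k1,k2)
     each load-balancing agent round-robins its events between:
       hdfs-sink agent on pc1 (avro source :5555 -> memory channel -> HDFS sink)
       hdfs-sink agent on pc2 (avro source :5555 -> memory channel -> HDFS sink)
  -> hdfs://nameservice1/user/dc/test/flume/sdk_function_log/%Y%m%d/%Y%m%d%H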

Configuration:

*******************************************hdfs sink

hdfs-sink.sources = r1

hdfs-sink.sinks = k1

hdfs-sink.channels = c1

# Describe/configure the source

hdfs-sink.sources.r1.type = avro

hdfs-sink.sources.r1.channels = c1

hdfs-sink.sources.r1.bind = pc2
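# note: an avro source can only bind an address that is local to its own machine;
# when this same file is started on pc1, bind should be pc1 (or 0.0.0.0)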

hdfs-sink.sources.r1.port = 5555

# Describe the sink

hdfs-sink.sinks.k1.type = hdfs

# one directory per hour, grouped under a per-day directory

hdfs-sink.sinks.k1.hdfs.path = hdfs://nameservice1/user/dc/test/flume/sdk_function_log/%Y%m%d/%Y%m%d%H

hdfs-sink.sinks.k1.hdfs.filePrefix = base.log

# close the HDFS file automatically if nothing has been written for n seconds;
# this way the last file of each hour is closed about 10s after the hour ends

hdfs-sink.sinks.k1.hdfs.idleTimeout = 10

# write to HDFS as plain text

hdfs-sink.sinks.k1.hdfs.fileType=DataStream

hdfs-sink.sinks.k1.hdfs.writeFormat=Text

# roll files by size only (time- and count-based rolling disabled)

hdfs-sink.sinks.k1.hdfs.rollInterval=0

#256mb

hdfs-sink.sinks.k1.hdfs.rollSize=256000000

hdfs-sink.sinks.k1.hdfs.rollCount=0

# Use a channel which buffers events in memory

hdfs-sink.channels.c1.type = memory

hdfs-sink.channels.c1.capacity = 10000

hdfs-sink.channels.c1.transactionCapacity = 10000

# Bind the source and sink to the channel

hdfs-sink.sources.r1.channels = c1

hdfs-sink.sinks.k1.channel = c1

*******************************************
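With rollInterval and rollCount set to 0, files roll only on size (about 256 MB), and idleTimeout=10 closes the last file of each hour shortly after that hour's traffic stops. Once events are flowing, the hourly layout can be spot-checked from the command line; the date components below are placeholders for an actual day/hour:

hdfs dfs -ls hdfs://nameservice1/user/dc/test/flume/sdk_function_log/20160115/2016011509

Files that are still open carry Flume's default .tmp in-use suffix and lose it when the sink closes them.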

*******************************************load balance

lb-kafka-hdfs.sources=r1

lb-kafka-hdfs.sinks=k1 k2

lb-kafka-hdfs.channels=c1

#load-balancing sink group conf

lb-kafka-hdfs.sinkgroups = g1

lb-kafka-hdfs.sinkgroups.g1.sinks = k1 k2

lb-kafka-hdfs.sinkgroups.g1.processor.type = load_balance

lb-kafka-hdfs.sinkgroups.g1.processor.backoff = true

lb-kafka-hdfs.sinkgroups.g1.processor.selector = round_robin
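# round_robin alternates deliveries between k1 and k2; with backoff = true a failing
# sink is temporarily taken out of the rotation, so traffic shifts to the surviving one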

#source conf

lb-kafka-hdfs.sources.r1.type = org.apache.flume.source.kafka.KafkaSource

lb-kafka-hdfs.sources.r1.channels = c1

lb-kafka-hdfs.sources.r1.zookeeperConnect = pc002:2181,pc003:2181,pc004:2181,pc005:2181,pc006:2181/kafka_0.8.2.2

lb-kafka-hdfs.sources.r1.groupId = flume-test

lb-kafka-hdfs.sources.r1.topic = sdk_function_log

lb-kafka-hdfs.sources.r1.kafka.consumer.timeout.ms = 100
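# how long the underlying Kafka consumer waits for new data before the source
# stops filling the current batch and writes it to the channel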

#sink conf

lb-kafka-hdfs.sinks.k1.type = avro

lb-kafka-hdfs.sinks.k1.channel = c1

lb-kafka-hdfs.sinks.k1.hostname = pc1

lb-kafka-hdfs.sinks.k1.port = 5555

lb-kafka-hdfs.sinks.k2.type = avro

lb-kafka-hdfs.sinks.k2.channel = c1

lb-kafka-hdfs.sinks.k2.hostname = pc2

lb-kafka-hdfs.sinks.k2.port = 5555

#channel conf

lb-kafka-hdfs.channels.c1.type = memory

lb-kafka-hdfs.channels.c1.capacity = 10000

lb-kafka-hdfs.channels.c1.transactionCapacity = 10000

*******************************************
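To push a few test events through the pipeline, messages can be written to the sdk_function_log topic with the console producer that ships with Kafka. A minimal sketch; the broker address pc002:9092 is only an assumption, since the configuration above lists the ZooKeeper quorum rather than the broker list:

kafka-console-producer.sh --broker-list pc002:9092 --topic sdk_function_log

Lines typed into the producer should show up in the hourly HDFS directories shortly afterwards.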

Startup commands:

1. Start the two sink agents first, on pc1 and pc2:

flume-ng agent --conf /home/dc/datacenter/soft/flume/default/conf -f /home/dc/datacenter/src/flume-conf/lb_kafka_hdfs/hdfs-sink.conf -Dflume.root.logger=INFO,console -n hdfs-sink

2. Then start the two load-balancing agents, on pc1 and pc2:

flume-ng agent --conf /home/dc/datacenter/soft/flume/default/conf -f /home/dc/datacenter/src/flume-conf/lb_kafka_hdfs/lb-kafka-hdfs.conf -Dflume.root.logger=INFO,console -n lb-kafka-hdfs
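A quick sanity check after startup (a minimal sketch; hostnames follow the configuration above): each hdfs-sink agent should be listening for avro connections on port 5555, and since both agents run with -Dflume.root.logger=INFO,console, delivery errors appear directly on the console.

# run on pc1 and pc2: the avro source of the hdfs-sink agent should be listening
netstat -tln | grep 5555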