This article explains how a Flume source collects data and delivers it through an in-memory channel to the console. This kind of setup trips up many people in practice, so let's walk through it step by step.
Requirement:
Flume collects data from a NetCat source and an Exec source, buffers it in a memory channel, and prints it to the console.
The memory channel here could also be a file channel for durable aggregation, in which case checkpointDir records the offsets:
# Use a channel which buffers events on disk
agent1.channels.channel1.type = file
agent1.channels.channel1.checkpointDir = /var/checkpoint
agent1.channels.channel1.dataDirs = /var/tmp
agent1.channels.channel1.capacity = 1000
agent1.channels.channel1.transactionCapacity = 100
Here we use the memory-channel configuration file:
a1.sources = r1 r2
a1.sinks = k1
a1.channels = c1

# Describe the sources
a1.sources.r1.type = netcat
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 44444
a1.sources.r2.type = exec
a1.sources.r2.command = tail -F /home/hadoop/data/data.log

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sources.r2.channels = c1
a1.sinks.k1.channel = c1
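A common failure mode with Flume properties files is a source or sink bound to a channel that was never declared. The wiring above can be sanity-checked with a short script (a minimal sketch; this toy parser only handles simple `key = value` lines, not Flume's full syntax):

```python
# Minimal sketch: parse simple "key = value" Flume properties and
# verify every source/sink is bound to a declared channel.
conf_text = """
a1.sources = r1 r2
a1.sinks = k1
a1.channels = c1
a1.sources.r1.type = netcat
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 44444
a1.sources.r2.type = exec
a1.sources.r2.command = tail -F /home/hadoop/data/data.log
a1.sinks.k1.type = logger
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.sources.r1.channels = c1
a1.sources.r2.channels = c1
a1.sinks.k1.channel = c1
"""

props = {}
for line in conf_text.splitlines():
    line = line.strip()
    if line and not line.startswith("#"):
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()

channels = set(props["a1.channels"].split())
for src in props["a1.sources"].split():
    # each source may list several channels; all must be declared
    assert set(props[f"a1.sources.{src}.channels"].split()) <= channels
for sink in props["a1.sinks"].split():
    # a sink writes to exactly one channel
    assert props[f"a1.sinks.{sink}.channel"] in channels
print("wiring OK")
```

Flume itself performs this validation at startup, but a quick check like this catches typos before restarting the agent.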
Test:
[hadoop@hadoop001 ~]$ telnet localhost 44444
Trying ::1...
Connected to localhost.
Escape character is '^]'.
ZOURC123456789
OK
[hadoop@hadoop001 data]$ echo 123 >> data.log
[hadoop@hadoop001 data]$
Console output:
2018-08-10 20:12:10,426 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:94)] Event: { headers:{} body: 31 32 33                                        123 }
2018-08-10 20:12:32,439 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:94)] Event: { headers:{} body: 5A 4F 55 52 43 31 32 33 34 35 36 37 38 39 0D    ZOURC123456789. }
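The logger sink prints each event body as raw hex bytes followed by a printable preview. Decoding the hex confirms the two inputs above (a quick check using only the Python standard library):

```python
# Decode the hex bodies printed by Flume's logger sink.
# Event 1 came from the exec source (echo 123 >> data.log);
# Event 2 came from the netcat source (the telnet session).
event1 = bytes.fromhex("31 32 33".replace(" ", ""))
event2 = bytes.fromhex("5A 4F 55 52 43 31 32 33 34 35 36 37 38 39 0D".replace(" ", ""))

print(event1.decode("ascii"))          # 123
print(event2.decode("ascii").strip())  # ZOURC123456789
```

The trailing `0D` on the second event is the carriage return that telnet sends with the line, which the logger sink renders as `.` in the preview.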
That covers how a Flume source collects data and outputs it through a memory channel to the console. Thanks for reading!