Flume Examples

1. Syslog TCP Source

The syslog TCP source is configured with a port, and Flume listens on that port: any data sent to it is received by Flume as events. Data can be sent over a plain socket.

# Configuration file: syslog_case5.conf
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
 
# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 50000
a1.sources.r1.host = 192.168.233.128
a1.sources.r1.channels = c1
 
# Describe the sink
a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1
 
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

Here the source listens on 192.168.233.128:50000.

# Run the command

flume-ng agent -c conf -f conf/syslog_case5.conf -n a1 -Dflume.root.logger=INFO,console

After the agent starts successfully, open another terminal and send data to the listening port:

echo "hello looklook5" | nc 192.168.233.128 50000

Then check the agent's terminal: the logger sink will print the received event, confirming the data has arrived.

You can also send data to the port from code, over a plain socket:

import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

Socket client = null;
try {
    // Connect to the address/port the syslogtcp source listens on
    client = new Socket("192.168.233.128", 50000);
    OutputStream out = client.getOutputStream();
    String event = "hello world\n";   // events must end with '\n'
    out.write(event.getBytes());
    out.flush();
    out.close();
} catch (IOException e) {
    e.printStackTrace();
} finally {
    try { if (client != null) client.close(); } catch (IOException ignored) {}
}

The payload must end with a newline character ("\n"), which delimits one event from the next.
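To see why the trailing newline matters, here is a self-contained sketch: a local line-oriented TCP server stands in for the Flume syslog source (the host/port in the article are placeholders here; an ephemeral local port is used instead), and two newline-terminated events are sent over one connection and split correctly on the receiving side.

```java
import java.io.*;
import java.net.*;
import java.util.*;
import java.util.concurrent.*;

public class NewlineFramingDemo {
    public static void main(String[] args) throws Exception {
        // Local stand-in for the Flume source: any line-oriented TCP
        // listener splits the stream on '\n' the same way.
        ServerSocket server = new ServerSocket(0); // ephemeral port
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<List<String>> received = pool.submit(() -> {
            try (Socket s = server.accept();
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream(), "UTF-8"))) {
                List<String> lines = new ArrayList<>();
                String line;
                while ((line = in.readLine()) != null) lines.add(line);
                return lines;
            }
        });

        // Each event ends with '\n' so the receiver can split the stream.
        try (Socket client = new Socket("127.0.0.1", server.getLocalPort());
             OutputStream out = client.getOutputStream()) {
            out.write("hello looklook5\n".getBytes("UTF-8"));
            out.write("hello world\n".getBytes("UTF-8"));
            out.flush();
        }

        System.out.println(received.get(5, TimeUnit.SECONDS));
        pool.shutdown();
        server.close();
    }
}
```

Without the trailing "\n" on the last event, the receiver would hold the bytes until the connection closes, or drop the partial line entirely, depending on the source's framing rules.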

2. Taildir Source

The Taildir source monitors a directory or a set of files; whenever new content is appended to a watched file, it reads the newly added lines into Flume, recording its read position so it can resume after a restart.

a1.sources = r1
a1.sinks = k1
a1.channels = c1
 
# Describe/configure the source
a1.sources.r1.type = TAILDIR
a1.sources.r1.positionFile = /usr/apache-flume-1.7.0-bin/taildir_position1.json
a1.sources.r1.filegroups = f1
a1.sources.r1.headers.f1.headerKey1 = value1
#a1.sources.r1.filegroups.f1 = /usr/apache-flume-1.7.0-bin/test/.*
a1.sources.r1.filegroups.f1 = /opt/apache-tomcat-7.0.72/8080/logs/.*
a1.sources.r1.fileHeader = true

# Describe the sink
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://127.0.0.1:9008/tomcat/%Y-%m-%d
a1.sinks.k1.hdfs.writeFormat = Text
a1.sinks.k1.hdfs.fileType = DataStream
a1.sinks.k1.hdfs.rollInterval = 0
a1.sinks.k1.hdfs.rollSize = 0
a1.sinks.k1.hdfs.rollCount = 0
a1.sinks.k1.hdfs.filePrefix = log
a1.sinks.k1.hdfs.useLocalTimeStamp = true
a1.sinks.k1.hdfs.idleTimeout = 30
 
# Use a file channel for durable buffering
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /usr/apache-flume-1.7.0-bin/checkpoint
a1.channels.c1.dataDirs = /usr/apache-flume-1.7.0-bin/data

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
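The core idea behind the `positionFile` above is that Taildir records a byte offset per watched file and, on each poll, reads only the bytes appended since that offset. A minimal sketch of that idea (a simplified illustration, not Flume's actual implementation; file names here are hypothetical):

```java
import java.io.*;
import java.nio.charset.StandardCharsets;
import java.nio.file.*;

public class TailSketch {
    // Read any bytes appended to `file` since `offset` into `out`,
    // and return the new offset -- the role taildir_position.json plays.
    static long readNew(Path file, long offset, StringBuilder out) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(file.toFile(), "r")) {
            raf.seek(offset);
            byte[] buf = new byte[4096];
            int n;
            while ((n = raf.read(buf)) > 0) {
                out.append(new String(buf, 0, n, StandardCharsets.UTF_8));
            }
            return raf.getFilePointer();
        }
    }

    public static void main(String[] args) throws IOException {
        Path log = Files.createTempFile("tail-demo", ".log");
        Files.write(log, "first line\n".getBytes(StandardCharsets.UTF_8));

        StringBuilder batch1 = new StringBuilder();
        long pos = readNew(log, 0L, batch1);       // picks up "first line"

        // Append more, as a running application writing a log would.
        Files.write(log, "second line\n".getBytes(StandardCharsets.UTF_8),
                    StandardOpenOption.APPEND);

        StringBuilder batch2 = new StringBuilder();
        pos = readNew(log, pos, batch2);           // only the new content
        System.out.print(batch1);
        System.out.print(batch2);
        Files.delete(log);
    }
}
```

Because the offset survives restarts (Flume persists it as JSON in `positionFile`), no appended line is read twice and none is missed.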

 

posted @ 2017-02-15 16:57 by Gyoung