Using Flume, and a First Integration with Storm

 
For a quick introduction to Flume NG, see http://blog.csdn.net/pelick/article/details/18193527; the diagrams in this post also come from that blog.


 
 
 
After downloading Flume, you can follow the tutorial at https://flume.apache.org/FlumeUserGuide.html to start an agent that logs to the console.
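For reference, the single-node example from the user guide uses a netcat source on port 44444, a memory channel, and a logger sink, with the agent named a1 (the guide binds to localhost; the log below shows the bind address changed to the server's own IP):

# example.conf: the single-node configuration from the user guide
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# netcat source listening on port 44444
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# logger sink prints received events to the console
a1.sinks.k1.type = logger

# in-memory channel buffering up to 1000 events
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

The agent is then launched with:

bin/flume-ng agent --conf conf --conf-file example.conf --name a1 -Dflume.root.logger=INFO,console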
 
Once startup completes, the console prints a log line like the following:
2016-06-21 13:00:06,890 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.source.NetcatSource.start(NetcatSource.java:164)] Created serverSocket:sun.nio.ch.ServerSocketChannelImpl[/172.16.79.12:44444]
 
 
You can send data via telnet 172.16.79.12 44444; once sent, the event appears in the running agent's log output, which completes this simple agent demo:
 
2016-06-21 13:00:28,905 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:94)] Event: { headers:{} body: 61 62 63 64 65 0D                               abcde. }
 
 

Planning a Flume Configuration for Log Collection

 
 
After some planning, our log-collection layout with Flume is shown in the figure below: each web server runs an agent that ships its logs upstream to a single consolidated agent4.
 


 
 
 
For the agent on each web server, we use an Exec source configured with a simple tail -F to pick up the log and print it to the console. The configuration is as follows: type must be declared as exec, and the command to execute must be specified (tail -F; if needed, you can also pipe through commands such as grep, as shown in the sketch after this config):
 
zhenmq-agent.sources = zhenmq-source
zhenmq-agent.sinks = zhenmq-sink
zhenmq-agent.channels = zhenmq-channel

# Describe/configure the source
zhenmq-agent.sources.zhenmq-source.type = exec
zhenmq-agent.sources.zhenmq-source.command = tail -F /usr/local/tomcat/tomcat-zhenmq/logs/apilog/common-all.log

# Describe the sink
zhenmq-agent.sinks.zhenmq-sink.type = logger

# Use a channel which buffers events in memory
zhenmq-agent.channels.zhenmq-channel.type = memory
zhenmq-agent.channels.zhenmq-channel.capacity = 1000
zhenmq-agent.channels.zhenmq-channel.transactionCapacity = 100

# Bind the source and sink to the channel
zhenmq-agent.sources.zhenmq-source.channels = zhenmq-channel
zhenmq-agent.sinks.zhenmq-sink.channel = zhenmq-channel
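
Note that pipes only work if the command runs under a shell, which is what the exec source's shell property is for. A hypothetical variant that filters the tail output through grep:

# hypothetical sketch: run the command under a shell so the pipe works
zhenmq-agent.sources.zhenmq-source.shell = /bin/sh -c
zhenmq-agent.sources.zhenmq-source.command = tail -F /usr/local/tomcat/tomcat-zhenmq/logs/apilog/common-all.log | grep ERROR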
 
 
After flowing through the channel (memory or file, chosen as conditions dictate), the log stream must be forwarded to a unified collector, which requires one of Flume's built-in serialization mechanisms. Here we use the fairly common Avro source/sink pair: the source receives log streams sent by other servers, and the sink sends log data out.
 
To layer the Flume deployment, this intermediate serialization can ship collected logs to different servers. Flume's built-in avro source and sink components require type to be set to avro, along with a hostname and port:
 
# Describe the sink
zhenmq-agent.sinks.zhenmq-sink.type = avro
zhenmq-agent.sinks.zhenmq-sink.hostname = 192.168.1.12
zhenmq-agent.sinks.zhenmq-sink.port = 23004

collector-agent.sources.collector-source.type = avro
collector-agent.sources.collector-source.bind= 192.168.1.13
collector-agent.sources.collector-source.port = 23004
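
With both ends configured, each agent can be launched with flume-ng; the config file names below are placeholders:

# on the collector machine: start this one first so the avro source is listening
bin/flume-ng agent --conf conf --conf-file collector.conf --name collector-agent -Dflume.root.logger=INFO,console

# on each web server, after the collector is up
bin/flume-ng agent --conf conf --conf-file zhenmq.conf --name zhenmq-agent -Dflume.root.logger=INFO,console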
 
 
Note the startup order: the Flume service on the 163 server must be started first. If the sending agent starts before the collector-source is listening, it reports a connection-refused error:
 
org.apache.flume.EventDeliveryException: Failed to send events
    at org.apache.flume.sink.AbstractRpcSink.process(AbstractRpcSink.java:392)
    at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.flume.FlumeException: NettyAvroRpcClient { host: 192.168.1.163, port: 23004 }: RPC connection error
    at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:182)
    at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:121)
    at org.apache.flume.api.NettyAvroRpcClient.configure(NettyAvroRpcClient.java:638)
    at org.apache.flume.api.RpcClientFactory.getInstance(RpcClientFactory.java:89)
    at org.apache.flume.sink.AvroSink.initializeRpcClient(AvroSink.java:127)
    at org.apache.flume.sink.AbstractRpcSink.createConnection(AbstractRpcSink.java:211)
    at org.apache.flume.sink.AbstractRpcSink.verifyConnection(AbstractRpcSink.java:272)
    at org.apache.flume.sink.AbstractRpcSink.process(AbstractRpcSink.java:349)
    ... 3 more
Caused by: java.io.IOException: Error connecting to /192.168.1.163:23004
    at org.apache.avro.ipc.NettyTransceiver.getChannel(NettyTransceiver.java:261)
    at org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:203)
    at org.apache.avro.ipc.NettyTransceiver.<init>(NettyTransceiver.java:152)
    at org.apache.flume.api.NettyAvroRpcClient.connect(NettyAvroRpcClient.java:168)
    ... 10 more
Caused by: java.net.ConnectException: 拒绝连接 (Connection refused)
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.connect(NioClientSocketPipelineSink.java:496)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.processSelectedKeys(NioClientSocketPipelineSink.java:452)
    at org.jboss.netty.channel.socket.nio.NioClientSocketPipelineSink$Boss.run(NioClientSocketPipelineSink.java:365)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    ... 1 more
 
 
Once both are up, the collector service on 163 shows the following log lines, indicating a successful start:
 
2016-06-22 18:48:30,179 (New I/O server boss #1 ([id: 0xb85f59b4, /192.168.1.163:23004])) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0xf57de901, /192.168.1.162:52778 => /192.168.1.163:23004] OPEN
2016-06-22 18:48:30,181 (New I/O  worker #1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0xf57de901, /192.168.1.162:52778 => /192.168.1.163:23004] BOUND: /192.168.1.163:23004
2016-06-22 18:48:30,181 (New I/O  worker #1) [INFO - org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:171)] [id: 0xf57de901, /192.168.1.162:52778 => /192.168.1.163:23004] CONNECTED: /192.168.1.162:52778
 
 
 
 
Flume Load Balancing and Failover
 
 
In the diagram, agent4 is a single point of failure: if agent4 goes down, logs can no longer be delivered. We therefore use Flume's load-balancing/failover modes to eliminate this single point. In load-balancing mode, each event is routed to a sink chosen by a configurable algorithm; when output volume is large, load balancing is well worth having, since multiple output paths relieve the delivery pressure.
 
Flume's built-in load-balancing algorithm defaults to round robin, which selects sinks in order.
 
Events from the source flow through the channel into a sink group; within the group, a sink is selected by the load-balancing algorithm (round_robin or random). By pointing the sinks at agents on different machines, load is balanced downstream.
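
A minimal sink-group sketch for load balancing, reusing the two collector sinks defined in the failover section below:

zhenmq-agent.sinkgroups = g1
zhenmq-agent.sinkgroups.g1.sinks = collector-sink1 collector-sink2
# select sinks in round-robin order; temporarily back off from failed sinks
zhenmq-agent.sinkgroups.g1.processor.type = load_balance
zhenmq-agent.sinkgroups.g1.processor.selector = round_robin
zhenmq-agent.sinkgroups.g1.processor.backoff = true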
 


 
 
 
With failover instead, the group of sinks forms a failover sink processor. If a sink fails to process events, Flume sets it aside for a cooldown period and brings it back once it can process events normally again. Events flow through one channel into the sink group; within the group a specific sink is chosen by priority, and only on failure does processing move to another sink. The flow is shown below:
 


 
 
Since our current log volume is modest, we start with failover; if that can't keep up later, we can switch to load balancing.
 
 

Configuring Failover

 
First define the sinkgroups, the group's processor type, and a priority for each sink. Logs are sent to the higher-priority sink first; if that server is unavailable, the sink is placed in the cooldown pool and the lower-priority sink takes over. (Note that in Flume a larger priority value means higher priority, so collector-sink2 below is tried first.)
 
Mind the startup order: the agent being depended on (the collector) must always be started first.
 
zhenmq-agent.sources = zhenmq-source
zhenmq-agent.sinks = collector-sink1 collector-sink2
zhenmq-agent.channels = zhenmq-channel

# Describe/configure the source
zhenmq-agent.sources.zhenmq-source.type = exec
zhenmq-agent.sources.zhenmq-source.command = tail -F /usr/local/tomcat/tomcat-zhenmq/logs/apilog/common-all.log

# Describe the sink
zhenmq-agent.sinks.collector-sink1.type = avro
zhenmq-agent.sinks.collector-sink1.channel= zhenmq-channel
zhenmq-agent.sinks.collector-sink1.hostname = 192.168.1.163
zhenmq-agent.sinks.collector-sink1.port = 23004

zhenmq-agent.sinks.collector-sink2.type = avro
zhenmq-agent.sinks.collector-sink2.channel= zhenmq-channel
zhenmq-agent.sinks.collector-sink2.hostname = 192.168.1.165
zhenmq-agent.sinks.collector-sink2.port = 23004

# Use a channel which buffers events in memory
zhenmq-agent.channels.zhenmq-channel.type = memory
zhenmq-agent.channels.zhenmq-channel.capacity = 1000
zhenmq-agent.channels.zhenmq-channel.transactionCapacity = 100

zhenmq-agent.sinkgroups = g1
zhenmq-agent.sinkgroups.g1.sinks = collector-sink1 collector-sink2

zhenmq-agent.sinkgroups.g1.processor.type = failover
zhenmq-agent.sinkgroups.g1.processor.priority.collector-sink1 = 10
zhenmq-agent.sinkgroups.g1.processor.priority.collector-sink2 = 11
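
The failover processor also accepts a maxpenalty setting, the upper bound in milliseconds on how long a failed sink stays in the cooldown pool (default 30000):

# optional: cap the cooldown period for a failed sink at 10 seconds
zhenmq-agent.sinkgroups.g1.processor.maxpenalty = 10000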
 
 

Connecting Flume to Storm

 
Typically, Flume data goes through a hop into Kafka, and Storm reads messages from Kafka to achieve real-time analysis. But we can skip Kafka for now and feed Flume's output directly into Storm.
 
We referenced the open-source implementation at https://github.com/rvisweswara/flume-storm-connector. Reading its source shows how it works: it starts Flume agent components internally (a SourceRunner, a Channel, and a SinkCounter) and receives the stream Flume sends out over the avro protocol. The overall class diagram around FlumeSpout looks like this:
 


 
 
Since the original example was written three years ago, its jars are old and it may not start; you can clone the repository below and run it locally (master branch): https://github.com/clamaa/flume-storm-connector
 
 
 
The entry class for the test case is FlumeConnectorTopology. Its main method first needs a topology.properties file, which specifies the source type and port of the agent started inside FlumeSpout (usually type avro; only the corresponding bind and port are needed):
 
flume-agent.source.type=avro
flume-agent.channel.type=memory
flume-agent.source.bind=127.0.0.1
flume-agent.source.port=10101
 
 
From MaterializedConfigurationProvider and this configuration, the MaterializedConfiguration for the embedded agent is built (a Flume-side concept). In FlumeSpout.open, the MaterializedConfiguration yields the sourceRunner (avro type) and the channel (in-memory, so events can be taken from it directly).
 
When constructing this embedded Flume agent, no sink or SinkRunner is needed; only a SinkCounter is added for output counting (an MXBean, so its key metrics can be watched via a JMX console):
flumeAgentProps = StormEmbeddedAgentConfiguration.configure(
        FLUME_AGENT_NAME, flumeAgentProps);
MaterializedConfiguration conf = configurationProvider.get(
        getFlumePropertyPrefix(), flumeAgentProps);

// the embedded agent must have exactly one channel ...
Map<String, Channel> channels = conf.getChannels();
if (channels.size() != 1) {
    throw new FlumeException("Expected one channel and got "
            + channels.size());
}
// ... and exactly one source
Map<String, SourceRunner> sources = conf.getSourceRunners();
if (sources.size() != 1) {
    throw new FlumeException("Expected one source and got "
            + sources.size());
}

this.sourceRunner = sources.values().iterator().next();
this.channel = channels.values().iterator().next();

// no sink is configured; the SinkCounter only tracks output metrics
if (sinkCounter == null) {
    sinkCounter = new SinkCounter(FlumeSpout.class.getName());
}
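
For the embedded agent to actually receive events, these components must then be started. A minimal sketch using Flume's lifecycle API (illustrative, not verbatim from the connector):

// illustrative sketch, not verbatim from flume-storm-connector
this.channel.start();       // the channel must be live before the source writes to it
this.sourceRunner.start();  // starts the avro source listening on the configured port
this.sinkCounter.start();   // begin tracking the metrics exposed via JMX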
 
 
In the nextTuple method, the spout periodically takes from the internally started Flume channel to fetch the latest events:
for (int i = 0; i < this.batchSize; i++) {
    Event event = channel.take();
    if (event == null) {
        break;
    }
    batch.add(event);
}
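
Note that in stock Flume, Channel.take() must run inside a Transaction; presumably the connector wraps the loop accordingly. The standard pattern looks like this:

Transaction tx = channel.getTransaction();
tx.begin();
try {
    // take up to batchSize events inside the transaction
    for (int i = 0; i < batchSize; i++) {
        Event event = channel.take();
        if (event == null) {
            break;
        }
        batch.add(event);
    }
    tx.commit();
} catch (Throwable t) {
    // roll back so the events remain in the channel
    tx.rollback();
    throw new FlumeException("Channel take failed", t);
} finally {
    tx.close();
}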
 
 
These events are then wrapped into Values and emitted by the collector. Since logs may come in multiple formats, FlumeSpout lets you set a TupleProducer that builds a custom message type (and declares its field names) from each event:
 
for (Event event : batch) {
    Values vals = this.getTupleProducer().toTuple(event);
    this.collector.emit(vals);
    this.pendingMessages.put(
            event.getHeaders().get(Constants.MESSAGE_ID), event);

    LOG.debug("NextTuple:"
            + event.getHeaders().get(Constants.MESSAGE_ID));
}
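
As quoted, emit(vals) is called without a message id; for Storm to invoke the spout's ack/fail callbacks, the tuple has to be emitted with one. A sketch of what that would look like, assuming the MESSAGE_ID header is used as the id:

String msgId = event.getHeaders().get(Constants.MESSAGE_ID);
// emitting with a message id enables Storm's ack()/fail() callbacks
this.collector.emit(vals, msgId);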
 
 
Before sending, each message is parked in FlumeSpout.pendingMessages (a ConcurrentHashMap) to support message acknowledgment: on ack the entry is removed, and on failure the message is re-sent by its id.
 
   
/*
 * When a message succeeds, remove it from the pending list.
 *
 * @see backtype.storm.spout.ISpout#ack(java.lang.Object)
 */
public void ack(Object msgId) {
    this.pendingMessages.remove(msgId.toString());
}

/*
 * When a message fails, retry it by pushing the event back to the channel.
 * Note: Please test this situation...
 *
 * @see backtype.storm.spout.ISpout#fail(java.lang.Object)
 */
public void fail(Object msgId) {
    // on a failure, push the message from pending back to the flume channel
    Event ev = this.pendingMessages.get(msgId.toString());
    if (null != ev) {
        this.channel.put(ev);
    }
}
 
The connector also provides an AvroSinkBolt, which sends messages generated by Storm back into Flume over avro. Its basic principle is to hold an RpcClient connection to a Flume avro agent, plus a pluggable Flume event producer that converts the Tuples Storm produces into the corresponding Flume Events; we won't go into further detail here.
 
    private RpcClient rpcClient;
    private FlumeEventProducer flumeEventProducer; 
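
A sketch of what the bolt's execute method plausibly does (illustrative; toEvent is an assumed producer method, not the connector's verbatim code):

public void execute(Tuple tuple) {
    try {
        // convert the Storm tuple into a Flume event (assumed producer method)
        Event event = flumeEventProducer.toEvent(tuple);
        // forward it to the downstream Flume agent over Avro RPC
        rpcClient.append(event);
    } catch (EventDeliveryException e) {
        // on delivery failure the RPC client must be rebuilt and reconnected
        throw new RuntimeException("Failed to deliver event to Flume", e);
    }
}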
 
One more failure mode remains for the log-collecting Flume agent: the exec'd process can die, in which case an error like this appears in the log:
 
2016-07-06 11:14:19,951 (pool-5-thread-1) [INFO - org.apache.flume.source.ExecSource$ExecRunnable.run(ExecSource.java:376)] Command [tail -F /usr/local/tomcat/tomcat-shopapi/logs/apilog/common-warn.log] exited with 137
 
The exec source has two properties for attempting a restart when the process exits abnormally:
 
restartThrottle   10000   Amount of time (in millis) to wait before attempting a restart
restart           false   Whether the executed cmd should be restarted if it dies
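
To have the tail command restarted automatically after such an exit, the source can be configured like this:

# restart the exec'd command if it dies, waiting 10s between attempts
zhenmq-agent.sources.zhenmq-source.restart = true
zhenmq-agent.sources.zhenmq-source.restartThrottle = 10000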
 
 
 
 
 
 