A Study of Flume's Avro Sink and Avro Source, Part 1: Avro Source

Question: What kind of RPC service does Avro Source provide, and how does it provide it?

Question 1.1: How does Avro Source start a Netty server to provide the RPC service?

From the avro-rpc-quickstart project on GitHub, we know a NettyServer can be started as follows to serve a specific RPC protocol. Is this how Flume's Avro Source provides its RPC service?

  server = new NettyServer(new SpecificResponder(Mail.class, new MailImpl()), new InetSocketAddress(65111));

 

  The source code in AvroSource that creates the NettyServer is:

  

    Responder responder = new SpecificResponder(AvroSourceProtocol.class, this);

    NioServerSocketChannelFactory socketChannelFactory = initSocketChannelFactory();

    ChannelPipelineFactory pipelineFactory = initChannelPipelineFactory();

    server = new NettyServer(responder, new InetSocketAddress(bindAddress, port),
          socketChannelFactory, pipelineFactory, null);

  So AvroSource also uses Avro's NettyServer class directly to build a NettyServer, but it uses a different constructor, one that specifies a ChannelFactory and a ChannelPipelineFactory.

  So what kind of ChannelFactory does AvroSource use?

  The implementation of initSocketChannelFactory() is:

  private NioServerSocketChannelFactory initSocketChannelFactory() {
    NioServerSocketChannelFactory socketChannelFactory;
    if (maxThreads <= 0) {
      socketChannelFactory = new NioServerSocketChannelFactory
          (Executors .newCachedThreadPool(), Executors.newCachedThreadPool());
    } else {
      socketChannelFactory = new NioServerSocketChannelFactory(
          Executors.newCachedThreadPool(),
          Executors.newFixedThreadPool(maxThreads));
    }
    return socketChannelFactory;
  }

  So the reason for specifying a ChannelFactory is to set the maximum number of worker threads according to AvroSource's "threads" parameter. This number determines how many threads, at most, handle RPC requests.

  See the Javadoc of NioServerSocketChannelFactory:

  

A ServerSocketChannelFactory which creates a server-side NIO-based ServerSocketChannel. It utilizes the non-blocking I/O mode which was introduced with NIO to serve many number of concurrent connections efficiently.

How threads work

There are two types of threads in a NioServerSocketChannelFactory; one is boss thread and the other is worker thread.

Boss threads

Each bound ServerSocketChannel has its own boss thread. For example, if you opened two server ports such as 80 and 443, you will have two boss threads. A boss thread accepts incoming connections until the port is unbound. Once a connection is accepted successfully, the boss thread passes the accepted Channel to one of the worker threads that the NioServerSocketChannelFactory manages.

Worker threads

One NioServerSocketChannelFactory can have one or more worker threads. A worker thread performs non-blocking read and write for one or more Channels in a non-blocking mode.

  What is ChannelPipelineFactory for, and why does AvroSource provide a specialized one as well?

  The Javadoc of the ChannelPipeline class says:

  A list of ChannelHandlers which handles or intercepts ChannelEvents of a Channel. ChannelPipeline implements an advanced form of the Intercepting Filter pattern to give a user full control over how an event is handled and how the ChannelHandlers in the pipeline interact with each other.

 

  So this provides a more sophisticated way of composing interceptors. Let's see, then, what kind of ChannelPipelineFactory AvroSource uses.

  

  private ChannelPipelineFactory initChannelPipelineFactory() {
    ChannelPipelineFactory pipelineFactory;
    boolean enableCompression = compressionType.equalsIgnoreCase("deflate");
    if (enableCompression || enableSsl) {
      pipelineFactory = new SSLCompressionChannelPipelineFactory(
          enableCompression, enableSsl, keystore,
          keystorePassword, keystoreType);
    } else {
      pipelineFactory = new ChannelPipelineFactory() {
        @Override
        public ChannelPipeline getPipeline() throws Exception {
          return Channels.pipeline();
        }
      };
    }
    return pipelineFactory;
  }

  So if compression is enabled or SSL is used, SSLCompressionChannelPipelineFactory is used, which is a private static inner class of AvroSource. Otherwise a new pipeline is created with Channels.pipeline(), and that pipeline apparently does nothing special.
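  For intuition, below is a minimal sketch of a pipeline factory that adds deflate compression handlers to every new pipeline. It is hypothetical (the class name DeflatePipelineFactory and the compression level are my own choices, not Flume's actual SSLCompressionChannelPipelineFactory), and it assumes the Netty 3.x API that this version of Avro's NettyServer is built on:

  import org.jboss.netty.channel.ChannelPipeline;
  import org.jboss.netty.channel.ChannelPipelineFactory;
  import org.jboss.netty.channel.Channels;
  import org.jboss.netty.handler.codec.compression.ZlibDecoder;
  import org.jboss.netty.handler.codec.compression.ZlibEncoder;

  // Hypothetical sketch: compress outgoing data and decompress incoming data with zlib/deflate.
  public class DeflatePipelineFactory implements ChannelPipelineFactory {
    @Override
    public ChannelPipeline getPipeline() throws Exception {
      ChannelPipeline pipeline = Channels.pipeline();
      // Handlers added first sit closest to the socket, ahead of Avro's own RPC handlers.
      pipeline.addFirst("deflater", new ZlibEncoder(6)); // compression level 6 is an arbitrary choice
      pipeline.addFirst("inflater", new ZlibDecoder());
      return pipeline;
    }
  }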

  

Question 1.2: So the server is up, but what RPC service does it actually provide?

  The key is this line:

  

Responder responder = new SpecificResponder(AvroSourceProtocol.class, this);

  Checking Avro's API, the two parameters of SpecificResponder are the protocol and the protocol's implementation. It looks like the AvroSource class itself implements AvroSourceProtocol. Indeed, AvroSource is declared as:

  

public class AvroSource extends AbstractSource implements EventDrivenSource, Configurable, AvroSourceProtocol

  So let's look at how AvroSourceProtocol is defined. It is defined by flume.avdl under the src/main/avro directory of the flume-ng-sdk project. An .avdl file is a protocol written in Avro IDL, and placing it in that particular directory is a convention of the avro-maven-plugin.

  The avdl looks like this:

  

@namespace("org.apache.flume.source.avro")

protocol AvroSourceProtocol {

enum Status {
  OK, FAILED, UNKNOWN
}

record AvroFlumeEvent {
  map<string> headers;
  bytes body;
}

Status append( AvroFlumeEvent event );

Status appendBatch( array<AvroFlumeEvent> events );

}

  

  It defines an enum used as the return value of append and appendBatch, representing the result of the Source's handling of the transmitted message; there are three states: OK, FAILED, and UNKNOWN.

  It defines a record type, AvroFlumeEvent, which matches Flume's definition of an Event: the header is a set of key-value pairs, i.e. a Map, and the body is a byte array.

  It defines two methods: append for a single AvroFlumeEvent, and appendBatch for a batch of AvroFlumeEvents.

  From this avdl, Avro generates three Java files: an enum Status, a class AvroFlumeEvent, and an interface AvroSourceProtocol. The AvroSource class implements the AvroSourceProtocol interface, exposing append and appendBatch as remote method calls.
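  Before looking at the server side, here is a hypothetical client sketch showing how an Avro RPC client (which is essentially what Avro Sink is) could call the append method exposed by AvroSource. The host "localhost", port 4141, and the sample header/body values are assumptions for illustration:

  import java.net.InetSocketAddress;
  import java.nio.ByteBuffer;
  import java.util.HashMap;
  import java.util.Map;

  import org.apache.avro.ipc.NettyTransceiver;
  import org.apache.avro.ipc.specific.SpecificRequestor;
  import org.apache.flume.source.avro.AvroFlumeEvent;
  import org.apache.flume.source.avro.AvroSourceProtocol;
  import org.apache.flume.source.avro.Status;

  // Hypothetical client sketch: send one event to an AvroSource listening on localhost:4141.
  public class AppendClientSketch {
    public static void main(String[] args) throws Exception {
      NettyTransceiver transceiver =
          new NettyTransceiver(new InetSocketAddress("localhost", 4141));
      try {
        // Build a dynamic proxy that turns calls on AvroSourceProtocol into Avro RPC requests.
        AvroSourceProtocol client =
            SpecificRequestor.getClient(AvroSourceProtocol.class, transceiver);

        Map<CharSequence, CharSequence> headers = new HashMap<CharSequence, CharSequence>();
        headers.put("host", "example.com");
        // Uses the all-args constructor that the Avro specific compiler generates for records.
        AvroFlumeEvent event = new AvroFlumeEvent(headers,
            ByteBuffer.wrap("hello flume".getBytes("UTF-8")));

        Status status = client.append(event);  // remote call, handled by AvroSource.append()
        System.out.println("append returned: " + status);
      } finally {
        transceiver.close();
      }
    }
  }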

  On the AvroSource side, the append method is implemented as:

  

  @Override
  public Status append(AvroFlumeEvent avroEvent) {
    logger.debug("Avro source {}: Received avro event: {}", getName(),
        avroEvent);
    sourceCounter.incrementAppendReceivedCount();
    sourceCounter.incrementEventReceivedCount();

    Event event = EventBuilder.withBody(avroEvent.getBody().array(),
        toStringMap(avroEvent.getHeaders()));

    try {
      getChannelProcessor().processEvent(event);
    } catch (ChannelException ex) {
      logger.warn("Avro source " + getName() + ": Unable to process event. " +
          "Exception follows.", ex);
      return Status.FAILED;
    }

    sourceCounter.incrementAppendAcceptedCount();
    sourceCounter.incrementEventAcceptedCount();

    return Status.OK;
  }

  This method takes the received AvroFlumeEvent object and converts it into an Event object. The conversion just maps the mismatched data types: avroEvent.getBody() returns a ByteBuffer, while avroEvent.getHeaders() returns a Map<CharSequence, CharSequence>.
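  The helper name toStringMap comes from the call above; its body is not shown here, but a plausible sketch of such a conversion (an assumption, not necessarily Flume's exact private helper) would be:

  import java.util.HashMap;
  import java.util.Map;

  // Plausible sketch: convert Avro's Map<CharSequence, CharSequence> (often Utf8 instances)
  // into the Map<String, String> that Flume Event headers expect.
  public class HeaderConversionSketch {
    static Map<String, String> toStringMap(Map<CharSequence, CharSequence> charSeqMap) {
      Map<String, String> stringMap = new HashMap<String, String>();
      for (Map.Entry<CharSequence, CharSequence> entry : charSeqMap.entrySet()) {
        stringMap.put(entry.getKey().toString(), entry.getValue().toString());
      }
      return stringMap;
    }
  }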

After the Event is built, it is handed to this Source's ChannelProcessor for processing.

  The appendBatch method is implemented very much like append; a sketch follows below.
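  By analogy with append, a hedged sketch of appendBatch (an assumption, simplified rather than copied from the real code, and omitting the counters and logging) could look like this: each AvroFlumeEvent is converted the same way, and the whole list is handed to the ChannelProcessor in a single processEventBatch call.

  @Override
  public Status appendBatch(List<AvroFlumeEvent> events) {
    List<Event> flumeEvents = new ArrayList<Event>();
    for (AvroFlumeEvent avroEvent : events) {
      flumeEvents.add(EventBuilder.withBody(avroEvent.getBody().array(),
          toStringMap(avroEvent.getHeaders())));
    }
    try {
      // processEventBatch puts the whole batch into the configured channel(s) at once.
      getChannelProcessor().processEventBatch(flumeEvents);
    } catch (ChannelException ex) {
      return Status.FAILED;
    }
    return Status.OK;
  }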

 

 
