Reading the Spark Source: network (3)
TransportContext creates the TransportServer and the TransportClientFactory, and uses TransportChannelHandler to set up each channel's pipeline. TransportClient supports two kinds of communication: a data plane (chunk fetching) and a control plane (RPC). RPC handling is delegated to a user-supplied RpcHandler, which is responsible for setting up the streams whose data is then transferred chunk by chunk using zero-copy I/O. TransportServer and TransportClientFactory both create a TransportChannelHandler for every channel, and each TransportChannelHandler contains a TransportClient, which lets the server side use that client to send messages back to the remote peer over the same channel.
TransportContext has two main methods: one creates a TransportChannelHandler, and the other installs the handlers on a channel's pipeline.
  private TransportChannelHandler createChannelHandler(Channel channel, RpcHandler rpcHandler) {
    TransportResponseHandler responseHandler = new TransportResponseHandler(channel);
    TransportClient client = new TransportClient(channel, responseHandler);
    TransportRequestHandler requestHandler =
      new TransportRequestHandler(channel, client, rpcHandler);
    return new TransportChannelHandler(client, responseHandler, requestHandler,
      conf.connectionTimeoutMs(), closeIdleConnections);
  }
Here we can see the dependency chain: TransportResponseHandler needs a Channel; TransportClient needs the Channel and the TransportResponseHandler; TransportRequestHandler needs the Channel, the TransportClient, and the RpcHandler; and TransportChannelHandler needs the client, requestHandler, and responseHandler. The channel and the client are passed around several times, and the TransportClient's channel could in fact be obtained from its responseHandler, so the wiring is somewhat tangled.
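As a minimal sketch of that construction order (using simplified stand-in classes, not the real Spark types), each object receives the previously built ones through its constructor, so all of them end up holding the same Channel instance:

```java
// Simplified stand-ins for the real Spark types, showing only the wiring order.
class Channel {}

class ResponseHandler {
    final Channel channel;
    ResponseHandler(Channel channel) { this.channel = channel; }
}

class Client {
    final Channel channel;
    final ResponseHandler responseHandler;
    Client(Channel channel, ResponseHandler responseHandler) {
        this.channel = channel;
        this.responseHandler = responseHandler;
    }
}

class RequestHandler {
    final Channel channel;
    final Client client;
    RequestHandler(Channel channel, Client client) {
        this.channel = channel;
        this.client = client;
    }
}

public class WiringSketch {
    // Mirrors the shape of createChannelHandler: one handler chain per channel.
    static RequestHandler wire(Channel channel) {
        ResponseHandler responseHandler = new ResponseHandler(channel);
        Client client = new Client(channel, responseHandler);
        return new RequestHandler(channel, client);
    }

    public static void main(String[] args) {
        Channel ch = new Channel();
        RequestHandler h = wire(ch);
        // All three objects share the same Channel instance.
        System.out.println(h.channel == ch
            && h.client.channel == ch
            && h.client.responseHandler.channel == ch); // true
    }
}
```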
  public TransportChannelHandler initializePipeline(SocketChannel channel, RpcHandler channelRpcHandler) {
    try {
      TransportChannelHandler channelHandler = createChannelHandler(channel, channelRpcHandler);
      channel.pipeline()
        .addLast("encoder", encoder)
        .addLast(TransportFrameDecoder.HANDLER_NAME, NettyUtils.createFrameDecoder())
        .addLast("decoder", decoder)
        .addLast("idleStateHandler",
          new IdleStateHandler(0, 0, conf.connectionTimeoutMs() / 1000))
        // NOTE: Chunks are currently guaranteed to be returned in the order of request, but this
        // would require more logic to guarantee if this were not part of the same event loop.
        .addLast("handler", channelHandler);
      return channelHandler;
    } catch (RuntimeException e) {
      logger.error("Error while initializing Netty pipeline", e);
      throw e;
    }
  }
This method installs the handlers on the channel's pipeline. The first handler added, the encoder, processes outbound traffic; the ones after it process inbound traffic.
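To make the inbound/outbound split concrete, here is a toy model in plain Java (not Netty's actual API; the Encoder and Decoder names are illustrative stand-ins) of a single pipeline where inbound events run head-to-tail but outbound writes run tail-to-head:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of how one pipeline serves both directions:
// inbound events traverse handlers head-to-tail, outbound writes tail-to-head.
public class PipelineSketch {
    interface Handler {
        default String inbound(String msg) { return msg; }   // pass-through by default
        default String outbound(String msg) { return msg; }
    }

    static class Encoder implements Handler {                // outbound only
        public String outbound(String msg) { return msg + "|framed"; }
    }

    static class Decoder implements Handler {                // inbound only
        public String inbound(String msg) { return msg.replace("|framed", ""); }
    }

    final List<Handler> handlers = new ArrayList<>();

    String fireInbound(String msg) {                         // head -> tail
        for (Handler h : handlers) msg = h.inbound(msg);
        return msg;
    }

    String write(String msg) {                               // tail -> head
        for (int i = handlers.size() - 1; i >= 0; i--) msg = handlers.get(i).outbound(msg);
        return msg;
    }

    public static void main(String[] args) {
        PipelineSketch p = new PipelineSketch();
        p.handlers.add(new Encoder());     // added first, like "encoder" in initializePipeline
        p.handlers.add(new Decoder());
        System.out.println(p.write("rpc"));              // rpc|framed
        System.out.println(p.fireInbound("rpc|framed")); // rpc
    }
}
```

Because a write travels tail-to-head, the encoder being first in `addLast` order means it is the last handler an outbound message passes through before hitting the socket; inbound bytes hit the frame decoder and decoder first.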
Next, let's look at TransportServer, which builds the server side.
  private void init(String hostToBind, int portToBind) {
    IOMode ioMode = IOMode.valueOf(conf.ioMode());
    EventLoopGroup bossGroup =
      NettyUtils.createEventLoop(ioMode, conf.serverThreads(), "shuffle-server");
    EventLoopGroup workerGroup = bossGroup;

    PooledByteBufAllocator allocator = NettyUtils.createPooledByteBufAllocator(
      conf.preferDirectBufs(), true /* allowCache */, conf.serverThreads());

    bootstrap = new ServerBootstrap()
      .group(bossGroup, workerGroup)
      .channel(NettyUtils.getServerChannelClass(ioMode))
      .option(ChannelOption.ALLOCATOR, allocator)
      .childOption(ChannelOption.ALLOCATOR, allocator);

    if (conf.backLog() > 0) {
      bootstrap.option(ChannelOption.SO_BACKLOG, conf.backLog());
    }
    if (conf.receiveBuf() > 0) {
      bootstrap.childOption(ChannelOption.SO_RCVBUF, conf.receiveBuf());
    }
    if (conf.sendBuf() > 0) {
      bootstrap.childOption(ChannelOption.SO_SNDBUF, conf.sendBuf());
    }

    bootstrap.childHandler(new ChannelInitializer<SocketChannel>() {
      @Override
      protected void initChannel(SocketChannel ch) throws Exception {
        RpcHandler rpcHandler = appRpcHandler;
        for (TransportServerBootstrap bootstrap : bootstraps) {
          rpcHandler = bootstrap.doBootstrap(ch, rpcHandler);
        }
        context.initializePipeline(ch, rpcHandler);
      }
    });

    InetSocketAddress address = hostToBind == null ?
      new InetSocketAddress(portToBind) : new InetSocketAddress(hostToBind, portToBind);
    channelFuture = bootstrap.bind(address);
    channelFuture.syncUninterruptibly();

    port = ((InetSocketAddress) channelFuture.channel().localAddress()).getPort();
    logger.debug("Shuffle server started on port :" + port);
  }
This is the standard flow for building a server in Netty. The configured buffer allocator is a pooled one (PooledByteBufAllocator), and the I/O mode is NIO (EPOLL is not available on Windows); see TransportConf for the related configuration parameters.
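The last few lines of init() bind the bootstrap and then read the actual port back from the bound local address, which is what makes requesting port 0 (an ephemeral port) work. A minimal sketch of that step using plain java.net instead of Netty, so it runs standalone:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;

// Illustrates the final step of init(): bind (possibly to an ephemeral port)
// and read back the real port from the bound local address.
public class BindSketch {
    static int bindAndGetPort(int requestedPort) throws IOException {
        try (ServerSocket server = new ServerSocket()) {
            server.bind(new InetSocketAddress(requestedPort)); // 0 = pick any free port
            // Same pattern as ((InetSocketAddress) localAddress()).getPort() in init().
            return ((InetSocketAddress) server.getLocalSocketAddress()).getPort();
        }
    }

    public static void main(String[] args) throws IOException {
        int port = bindAndGetPort(0);
        System.out.println(port > 0); // the OS assigned a real, non-zero port
    }
}
```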
That wraps up the common module of Spark's network layer; I'll study the remaining parts when time permits.
