How Netty Processes Requests in the Master-Slave Reactor Multithreading Model

1. Master-Slave Reactor Architecture Overview

1.1 Thread Model Structure

(Figure: thread model structure — a BossGroup accepting connections and dispatching them to the WorkerGroup's EventLoops)

1.2 Division of Responsibilities

  • BossGroup: dedicated to accepting client connections (Accept events)
  • WorkerGroup: dedicated to handling I/O reads and writes on established connections

2. The Complete Request Processing Flow

2.1 Server Startup and Initialization

// ServerBootstrap startup code
EventLoopGroup bossGroup = new NioEventLoopGroup(1);        // Boss group: 1 thread suffices, since one server channel registers with one EventLoop
EventLoopGroup workerGroup = new NioEventLoopGroup();       // Worker group (defaults to 2x CPU cores)

ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup)                             // Set the two thread groups
 .channel(NioServerSocketChannel.class)                     // Specify the server Channel type
 .childHandler(new ChannelInitializer<SocketChannel>() {    // Handler for accepted child channels
     @Override
     public void initChannel(SocketChannel ch) {
         ch.pipeline().addLast(new YourHandler());
     }
 });

ChannelFuture f = b.bind(port).sync();                      // Bind the port and wait until done
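
In a complete program, the bind is usually followed by waiting on the server channel's close future and then releasing both groups; a standard sketch of that lifecycle:

// Typical lifecycle around bind(): wait for the channel to close, then shut the groups down
try {
    ChannelFuture f = b.bind(port).sync();
    f.channel().closeFuture().sync();       // block until the server channel is closed
} finally {
    bossGroup.shutdownGracefully();         // gracefully stop both event loop groups
    workerGroup.shutdownGracefully();
}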

2.2 How the Boss Thread Runs and Accepts Requests

2.2.1 The Boss Thread's Core Event Loop (NioEventLoop.run())

// NioEventLoop's core event loop
@Override
protected void run() {
    for (;;) {
        try {
            try {
                switch (selectStrategy.calculateStrategy(selectNowSupplier, hasTasks())) {
                case SelectStrategy.CONTINUE:
                    continue;
                case SelectStrategy.BUSY_WAIT:
                case SelectStrategy.SELECT:
                    select(wakenUp.getAndSet(false)); // 1. Poll for I/O events
                    if (wakenUp.get()) {
                        selector.wakeup();
                    }
                default:
                }
            } catch (IOException e) {
                rebuildSelector0();
                handleLoopException(e);
                continue;
            }

            cancelledKeys = 0;
            needsToSelectAgain = false;
            final int ioRatio = this.ioRatio;
            if (ioRatio == 100) {
                try {
                    processSelectedKeys();                    // 2. Process I/O events
                } finally {
                    runAllTasks();                           // 3. Run all queued tasks
                }
            } else {
                final long ioStartTime = System.nanoTime();
                try {
                    processSelectedKeys();
                } finally {
                    final long ioTime = System.nanoTime() - ioStartTime;
                    runAllTasks(ioTime * (100 - ioRatio) / ioRatio);
                }
            }
        } catch (Throwable t) {
            handleLoopException(t);
        }
    }
}

2.2.2 How the Boss Thread Handles Accept Events

// processSelectedKey() dispatches on the ready ops (abridged; the full method also handles OP_CONNECT and OP_WRITE)
private void processSelectedKey(SelectionKey k, AbstractNioChannel ch) {
    final AbstractNioChannel.NioUnsafe unsafe = ch.unsafe();
    
    try {
        int readyOps = k.readyOps();
        
        // Handle the Accept event - the Boss thread's core job
        if ((readyOps & (SelectionKey.OP_READ | SelectionKey.OP_ACCEPT)) != 0 || readyOps == 0) {
            unsafe.read(); // for NioServerSocketChannel, read() accepts new connections
        }
    } catch (CancelledKeyException ignored) {
        unsafe.close(unsafe.voidPromise());
    }
}

2.2.3 How the ServerSocketChannel Actually Accepts Connections

// NioServerSocketChannel.doReadMessages() - where connections are actually accepted
@Override
protected int doReadMessages(List<Object> buf) throws Exception {
    // Accept the connection via the JDK's native ServerSocketChannel.accept()
    SocketChannel ch = SocketUtils.accept(javaChannel());
    
    try {
        if (ch != null) {
            // Wrap the JDK SocketChannel in a Netty NioSocketChannel
            buf.add(new NioSocketChannel(this, ch));
            return 1;
        }
    } catch (Throwable t) {
        logger.warn("Failed to create a new channel from an accepted socket.", t);
        try {
            ch.close();
        } catch (Throwable t2) {
            logger.warn("Failed to close a socket.", t2);
        }
    }
    return 0;
}

2.3 What a New Connection Is Wrapped Into and How It Reaches a Worker Thread

2.3.1 Processing Flow After a Connection Is Accepted

// AbstractNioMessageChannel.NioMessageUnsafe.read() - the Boss thread reading new connections
public void read() {
    assert eventLoop().inEventLoop();
    final ChannelConfig config = config();
    final ChannelPipeline pipeline = pipeline();
    final RecvByteBufAllocator.Handle allocHandle = unsafe().recvBufAllocHandle();
    allocHandle.reset(config);

    boolean closed = false;
    Throwable exception = null;
    try {
        try {
            do {
                // Read connections; each one is wrapped as a NioSocketChannel and added to readBuf
                int localRead = doReadMessages(readBuf);
                if (localRead == 0) {
                    break;
                }
                if (localRead < 0) {
                    closed = true;
                    break;
                }
                allocHandle.incMessagesRead(localRead);
            } while (allocHandle.continueReading());
        } catch (Throwable t) {
            exception = t;
        }

        int size = readBuf.size();
        for (int i = 0; i < size; i ++) {
            readPending = false;
            // Fire the channelRead event to propagate the new connection
            pipeline.fireChannelRead(readBuf.get(i)); // key step: ServerBootstrapAcceptor handles it next
        }
        readBuf.clear();
        allocHandle.readComplete();
        pipeline.fireChannelReadComplete();

        if (exception != null) {
            closed = closeOnReadError(exception);
            pipeline.fireExceptionCaught(exception);
        }

        if (closed) {
            inputShutdown = true;
            if (isOpen()) {
                close(voidPromise());
            }
        }
    } finally {
        if (!readPending && !config.isAutoRead()) {
            removeReadOp();
        }
    }
}

2.3.2 How New Connections Are Assigned to Worker Threads

// ServerBootstrapAcceptor - the key handler that hands new connections off to the Worker group
private static class ServerBootstrapAcceptor extends ChannelInboundHandlerAdapter {
    
    @Override
    @SuppressWarnings("unchecked")
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // msg is the NioSocketChannel that the Boss thread just accepted
        final Channel child = (Channel) msg;
        
        // Install the user's childHandler on the new connection
        child.pipeline().addLast(childHandler);
        
        // Apply the child channel options and attributes
        setChannelOptions(child, childOptions, logger);
        setAttributes(child, childAttrs);

        try {
            // Key step: register the new connection with an EventLoop in the Worker group
            childGroup.register(child).addListener(new ChannelFutureListener() {
                @Override
                public void operationComplete(ChannelFuture future) throws Exception {
                    if (!future.isSuccess()) {
                        forceClose(child, future.cause());
                    }
                }
            });
        } catch (Throwable t) {
            forceClose(child, t);
        }
    }
}
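
For context, ServerBootstrapAcceptor is installed on the server channel's own pipeline during bootstrap. A simplified sketch of ServerBootstrap.init(), abridged and slightly version-dependent:

// Abridged from ServerBootstrap.init() - runs once when the server channel is initialized
void init(Channel channel) {
    ChannelPipeline p = channel.pipeline();
    final EventLoopGroup currentChildGroup = childGroup;
    final ChannelHandler currentChildHandler = childHandler;

    p.addLast(new ChannelInitializer<Channel>() {
        @Override
        public void initChannel(final Channel ch) {
            final ChannelPipeline pipeline = ch.pipeline();
            // Added via a task so that any handler() configured by the user
            // lands in the pipeline before the acceptor
            ch.eventLoop().execute(new Runnable() {
                @Override
                public void run() {
                    pipeline.addLast(new ServerBootstrapAcceptor(
                            ch, currentChildGroup, currentChildHandler,
                            currentChildOptions, currentChildAttrs));
                }
            });
        }
    });
}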

2.3.3 How a Worker Thread Is Chosen

// MultithreadEventLoopGroup.register() - picks the Worker EventLoop
@Override
public ChannelFuture register(Channel channel) {
    return next().register(channel); // next() selects the next EventLoop
}

// DefaultEventExecutorChooserFactory.GenericEventExecutorChooser
@Override
public EventExecutor next() {
    // Round-robin selection of an EventLoop, balancing connections across Workers
    return executors[Math.abs(idx.getAndIncrement() % executors.length)];
}
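
When the number of threads is a power of two, DefaultEventExecutorChooserFactory swaps in a cheaper chooser that replaces the modulo with a bit mask:

// DefaultEventExecutorChooserFactory.PowerOfTwoEventExecutorChooser
@Override
public EventExecutor next() {
    // Bitwise AND acts as modulo because executors.length is a power of two
    return executors[idx.getAndIncrement() & executors.length - 1];
}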

2.4 How Worker Threads Process Requests

2.4.1 Handling Read Events on a Worker Thread

// NioByteUnsafe.read() - a Worker thread handling a read event
@Override
public final void read() {
    final ChannelConfig config = config();
    if (shouldBreakReadReady(config)) {
        clearReadPending();
        return;
    }
    
    final ChannelPipeline pipeline = pipeline();
    final ByteBufAllocator allocator = config.getAllocator();
    final RecvByteBufAllocator.Handle allocHandle = recvBufAllocHandle();
    allocHandle.reset(config);

    ByteBuf byteBuf = null;
    boolean close = false;
    try {
        do {
            // 1. Allocate a ByteBuf to receive the data
            byteBuf = allocHandle.allocate(allocator);
            
            // 2. Read bytes from the Channel into the ByteBuf
            allocHandle.lastBytesRead(doReadBytes(byteBuf));
            if (allocHandle.lastBytesRead() <= 0) {
                byteBuf.release();
                byteBuf = null;
                close = allocHandle.lastBytesRead() < 0;
                if (close) {
                    readPending = false;
                }
                break;
            }

            allocHandle.incMessagesRead(1);
            readPending = false;
            
            // 3. Fire the channelRead event to propagate the data
            pipeline.fireChannelRead(byteBuf);
            byteBuf = null;
        } while (allocHandle.continueReading()); // caps messages per read loop (default 16) so one channel cannot starve others

        allocHandle.readComplete();
        // 4. Fire the channelReadComplete event
        pipeline.fireChannelReadComplete();

        if (close) {
            closeOnRead(pipeline);
        }
    } catch (Throwable t) {
        handleReadException(pipeline, byteBuf, t, close, allocHandle);
    } finally {
        if (!readPending && !config.isAutoRead()) {
            removeReadOp();
        }
    }
}

2.4.2 How Data Propagates Through the Pipeline

// DefaultChannelPipeline.fireChannelRead() - event propagation
@Override
public final ChannelPipeline fireChannelRead(Object msg) {
    // Propagation starts at the HeadContext and moves toward the tail
    AbstractChannelHandlerContext.invokeChannelRead(head, msg);
    return this;
}

// AbstractChannelHandlerContext.invokeChannelRead()
static void invokeChannelRead(final AbstractChannelHandlerContext next, Object msg) {
    final Object m = next.pipeline.touch(ObjectUtil.checkNotNull(msg, "msg"), next);
    EventExecutor executor = next.executor();
    
    if (executor.inEventLoop()) {
        // Already on the EventLoop thread: invoke directly
        next.invokeChannelRead(m);
    } else {
        // Not on the EventLoop thread: submit a task to it
        executor.execute(new Runnable() {
            @Override
            public void run() {
                next.invokeChannelRead(m);
            }
        });
    }
}
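
The else branch is taken when a handler is bound to its own executor. That is the standard way to keep blocking business logic off the I/O thread; a sketch (BusinessHandler is a hypothetical handler class):

// Run BusinessHandler on a dedicated executor group instead of the I/O EventLoop
EventExecutorGroup businessGroup = new DefaultEventExecutorGroup(16);
ch.pipeline().addLast(businessGroup, "business", new BusinessHandler());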

3. The Role of the Core Components

3.1 The Essence of an EventLoop

// NioEventLoop's key fields (simplified; thread and taskQueue are actually declared in parent classes)
public final class NioEventLoop extends SingleThreadEventLoop {
    private Selector selector;           // the Java NIO Selector
    private final SelectorProvider provider; // the Selector provider

    // Key point: each EventLoop is bound to exactly one thread
    private volatile Thread thread;

    // Task queue for work submitted to this EventLoop
    private final Queue<Runnable> taskQueue;
}
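
Because each Channel is registered with exactly one EventLoop, any thread can safely hand it work by submitting a task; Netty serializes execution onto that one thread. A minimal sketch (channel and response are placeholders):

// Safe from any thread: the task runs on the Channel's own EventLoop thread,
// so no locking is needed when touching the pipeline or writing
channel.eventLoop().execute(() -> channel.writeAndFlush(response));

// The same EventLoop can also schedule delayed work, e.g. a timeout
channel.eventLoop().schedule(() -> channel.close(), 30, TimeUnit.SECONDS);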

3.2 Channel Wrapping Layers

How request data is wrapped:
1. Network byte stream → 2. ByteBuf → 3. Business object → 4. Pipeline propagation

How the SocketChannel is wrapped:
JDK SocketChannel → Netty NioSocketChannel → Unsafe interface → ChannelPipeline
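
Step 2 → 3 is typically done by a decoder in the pipeline. A minimal sketch using ByteToMessageDecoder, assuming a 4-byte length prefix (the frame format and the Request class are hypothetical):

// Hypothetical decoder: turns length-prefixed frames (ByteBuf) into Request objects
public class RequestDecoder extends ByteToMessageDecoder {
    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) throws Exception {
        if (in.readableBytes() < 4) {
            return;                        // wait until the length prefix arrives
        }
        in.markReaderIndex();
        int length = in.readInt();
        if (in.readableBytes() < length) {
            in.resetReaderIndex();         // wait until the full frame arrives
            return;
        }
        byte[] payload = new byte[length];
        in.readBytes(payload);
        out.add(new Request(payload));     // hypothetical business object
    }
}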

3.3 The Event Propagation Mechanism

// The typical pattern for a ChannelHandler
public class EchoServerHandler extends ChannelInboundHandlerAdapter {
    
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // Handle the received data
        ByteBuf in = (ByteBuf) msg;
        // business logic goes here...
        
        // Echo the data back; write() takes over the ByteBuf's reference, so no release() is needed here
        ctx.write(msg);
    }
    
    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        ctx.flush(); // flush all pending writes to the socket
    }
}

4. Key Performance Optimizations

4.1 Advantages of the Thread Model

  1. A dedicated Boss thread: handles only Accept events, so accepting connections never blocks on I/O work
  2. Pooled Worker threads: multiple Worker threads process I/O in parallel
  3. Event-driven: non-blocking I/O built on the Selector

4.2 Memory Management Optimizations

  1. ByteBuf pooling: cuts allocation overhead
  2. Direct (off-heap) memory: avoids extra copies between the heap and the socket
  3. Reference counting: deterministic buffer reclamation (see the sketch below)
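
These show up directly in the API. A small sketch using Netty's default pooled allocator and reference counting:

// A pooled, direct (off-heap) buffer from the default pooled allocator
ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(256);
try {
    buf.writeBytes("hello".getBytes(StandardCharsets.UTF_8));
    System.out.println(buf.refCnt());   // 1: the buffer is alive
} finally {
    buf.release();                      // refCnt drops to 0, memory returns to the pool
}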

4.3 Task Scheduling Optimizations

  1. I/O first: I/O events are processed before queued tasks
  2. Task queue: non-I/O work runs asynchronously on the same thread
  3. Time-slice control: ioRatio balances time between I/O and tasks (see the sketch below)
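
The ioRatio knob seen in the run() loop of section 2.2.1 can be tuned per group (a sketch; 70 is an arbitrary example value):

// Spend roughly 70% of each loop iteration on I/O and 30% on queued tasks
NioEventLoopGroup workerGroup = new NioEventLoopGroup();
workerGroup.setIoRatio(70);   // the default is 50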

With this carefully designed master-slave Reactor model, Netty handles large numbers of concurrent connections efficiently. The Boss thread is dedicated to establishing connections and the Worker threads to reading and writing data; each does one job well, and together they make full use of multi-core CPUs.
