From IO to Netty
I. pageCache
1. The main difference between FileOutputStream and BufferedOutputStream
The difference lies in the number of system calls: the former makes a syscall on every write, causing frequent switches between user mode and kernel mode, while the latter calls into the kernel only once per 8 KB.
- BufferedOutputStream: buffers up to 8 KB inside the JVM, then issues one syscall, write(byte[8192])
- FileOutputStream: issues a system call on every write
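A minimal sketch of the difference (file names and loop count are arbitrary); running each half under strace shows one write(2) per loop iteration for the plain stream, versus roughly one write per 8 KB for the buffered one:

import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class OutputStreamSyscalls {
    public static void main(String[] args) throws IOException {
        byte[] data = "123456789\n".getBytes();
        // FileOutputStream: one write(2) system call per write() -> frequent user/kernel switches
        try (FileOutputStream fos = new FileOutputStream("plain.txt")) {
            for (int i = 0; i < 100_000; i++) {
                fos.write(data);
            }
        }
        // BufferedOutputStream: data collects in an 8 KB array inside the JVM;
        // write(2) is issued only when the buffer fills (or on flush/close)
        try (BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream("buffered.txt"))) {
            for (int i = 0; i < 100_000; i++) {
                bos.write(data);
            }
        }
    }
}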
2. Kernel pageCache writeback parameters
$ sysctl -a | grep dirty
vm.dirty_background_bytes = 0
vm.dirty_background_ratio = 5
vm.dirty_bytes = 0
vm.dirty_expire_centisecs = 3000
vm.dirty_ratio = 10
vm.dirty_writeback_centisecs = 500
dirty_writeback_centisecs
- How often the background thread that flushes dirty pages is woken. The unit is hundredths of a second, so 500 means every 5 seconds.
 
dirty_expire_centisecs
- How long dirty data may stay in memory before it is flushed to disk. 3000 here means 30 seconds.
 
dirty_background_ratio
- When dirty pages exceed this percentage of total memory, the background threads start flushing them. Set too small, memory cannot be used effectively to speed up file operations; set too large, periodic write IO spikes appear.
 
dirty_ratio
- When dirty pages exceed this percentage of memory, the kernel blocks write operations and starts flushing dirty pages.
 
dirty_background_bytes, dirty_bytes
- Byte-based counterparts of dirty_background_ratio and dirty_ratio. The ratio and byte forms never take effect at the same time.
 
3. Purpose
- Use memory first and cut down on hardware IO (memory <-> disk), which speeds things up
- Drawback: reliability. If power is lost before dirty pages are flushed, data is easily lost.
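When durability matters more than speed, the application can force the flush itself instead of waiting for the kernel's writeback thresholds; a minimal sketch (file name and record content are placeholders):

import java.io.FileOutputStream;
import java.io.IOException;

public class ForceFlush {
    public static void main(String[] args) throws IOException {
        try (FileOutputStream out = new FileOutputStream("data.log")) {
            out.write("important record\n".getBytes());
            // write() only reaches the pageCache; getFD().sync() (fsync) blocks
            // until the kernel has written the dirty pages to the device
            out.getFD().sync();
        }
    }
}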
 
II. ByteBuffer
Java program heap: -Xmx1024m allocates the JVM heap.
    public void whatByteBuffer(){
//        ByteBuffer buffer = ByteBuffer.allocate(1024);      // inside the JVM heap
        ByteBuffer buffer = ByteBuffer.allocateDirect(1024);   // outside the JVM heap, inside the process heap
        System.out.println("position: " + buffer.position());
        System.out.println("limit: " + buffer.limit());
        System.out.println("capacity: " + buffer.capacity());
        System.out.println("mark: " + buffer);
        buffer.put("123".getBytes());
        System.out.println("-------------put:123......");
        System.out.println("mark: " + buffer);
        buffer.flip();   // switch from writing to reading: position goes to 0, limit to the last written position, so a read cannot run past the written data
        System.out.println("-------------flip......");
        System.out.println("mark: " + buffer);
        buffer.get();
        System.out.println("-------------get......");
        System.out.println("mark: " + buffer);
        buffer.compact(); // switch back to writing: already-read bytes are squeezed out, position points to the first free slot, limit to capacity
        System.out.println("-------------compact......");
        System.out.println("mark: " + buffer);
        buffer.clear();
        System.out.println("-------------clear......");
        System.out.println("mark: " + buffer);
    }
III. FileChannel
The map method
- A kernel system call (mmap): the file is mapped to an address in the pageCache
- map.put is not a system call; it writes directly into the pageCache
- map.force flushes to disk
 
    // raf is a RandomAccessFile opened in "rw" mode (declaration omitted here)
    FileChannel rafchannel = raf.getChannel();
    // mmap: off-heap memory mapped to the file -- raw bytes, not objects
    MappedByteBuffer map = rafchannel.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
    map.put("xxx".getBytes());  // not a system call, but the data still reaches the kernel pageCache
    map.force(); //  flush to disk
Reliability ranking
- on heap < off heap < mmap(file)
 
Applications
- Netty: on heap and off heap
- Kafka log: mmap
 
IV. TCP/IP
1.tcpdump
tcpdump -nn -i eth0 port 8080
2.socket
- Four-tuple: client IP.PORT --> server IP.PORT
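A connected socket exposes its side of the four-tuple directly; a minimal sketch (host and port are placeholders):

import java.io.IOException;
import java.net.Socket;

public class FourTuple {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("example.com", 80)) {
            // local address/port --> remote address/port is the connection's four-tuple
            System.out.println("client " + socket.getLocalAddress() + ":" + socket.getLocalPort()
                    + " --> server " + socket.getInetAddress() + ":" + socket.getPort());
        }
    }
}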
 
3.BACK_LOG
- The number of connections the kernel can hold for the listening socket before the server calls accept()
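A minimal sketch of the backlog (port and backlog value are arbitrary); since accept() is never called here, new connections pile up in the kernel queue until it is full:

import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class BacklogDemo {
    public static void main(String[] args) throws IOException {
        // The second argument is the backlog: completed connections the kernel may queue
        // before the application takes them with accept()
        ServerSocket server = new ServerSocket(9090, 2, InetAddress.getByName("0.0.0.0"));
        System.in.read();   // never accept(); extra concurrent clients will stall or be refused
        server.close();
    }
}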
 
4. Windows and congestion
- When sending and receiving data, client and server each report how much of their receive window remains; the sender decides how many packets to send based on the peer's window, which prevents congestion that would make the peer drop incoming packets
- Packet size is bounded by the NIC MTU: 1500 bytes
 
5.NODELAY
- A send-side optimization (Nagle's algorithm). The default is false: small writes are accumulated and sent as a larger packet. Setting it to true turns the optimization off, meaning buffered data is sent as soon as possible
- When messages are small, it usually makes sense to turn the optimization off and send immediately (see the socket-options sketch after the KEEPALIVE item)
 
6.KEEPALIVE
- A TCP-level heartbeat that detects dead peers and kicks them out promptly
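Both NODELAY and KEEPALIVE are plain socket options; a minimal java.net.Socket sketch (host and port are placeholders). In Netty the equivalents are ChannelOption.TCP_NODELAY and ChannelOption.SO_KEEPALIVE:

import java.io.IOException;
import java.net.Socket;

public class SocketOptions {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("example.com", 80)) {
            socket.setTcpNoDelay(true);   // disable Nagle: send small writes immediately
            socket.setKeepAlive(true);    // enable TCP-level keepalive probes on idle connections
            System.out.println("TCP_NODELAY=" + socket.getTcpNoDelay()
                    + ", SO_KEEPALIVE=" + socket.getKeepAlive());
        }
    }
}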
 
7. Handshake and teardown state transition diagram

V. Network IO
1.strace
- Traces every system call each thread makes into the kernel
- strace -ff -o out /usr/jdk1.8/bin/java Test
2.BIO
- one new Thread per connection
- BLOCKING
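A minimal BIO sketch of the one-thread-per-connection, blocking model (port is arbitrary):

import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class BioServer {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(9090);
        while (true) {
            Socket client = server.accept();           // blocks until a client connects
            new Thread(() -> {                         // one thread per connection
                try (InputStream in = client.getInputStream()) {
                    byte[] buf = new byte[1024];
                    int n;
                    while ((n = in.read(buf)) != -1) { // blocks until data arrives
                        System.out.println(new String(buf, 0, n));
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }).start();
        }
    }
}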
 
3.NIO
- JDK: new io
- OS: NONBLOCKING
- A single thread can handle many connections at the same time
- Drawback: with n connections, every loop iteration costs O(n): each connection's descriptor gets a trial read() (a system call, which is expensive), yet in practice only a few connections have data on any given pass, so most of those reads are wasted. The more connections there are, the more useless system calls are made.
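A minimal single-threaded non-blocking sketch (port is arbitrary) that makes the drawback visible: every pass over the client list issues one read() system call per connection, whether or not data has arrived:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.LinkedList;
import java.util.List;

public class NioPollingServer {
    public static void main(String[] args) throws IOException {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9090));
        server.configureBlocking(false);              // accept() now returns null instead of blocking
        List<SocketChannel> clients = new LinkedList<>();
        ByteBuffer buffer = ByteBuffer.allocateDirect(1024);
        while (true) {
            SocketChannel client = server.accept();   // non-blocking accept
            if (client != null) {
                client.configureBlocking(false);
                clients.add(client);
            }
            // O(n) per pass: one read() syscall per connection, most return 0 bytes
            for (SocketChannel c : clients) {
                buffer.clear();
                int n = c.read(buffer);               // non-blocking read
                if (n > 0) {
                    buffer.flip();
                    byte[] bytes = new byte[buffer.limit()];
                    buffer.get(bytes);
                    System.out.println(new String(bytes));
                }
                // a real server would also drop connections that return -1 (closed)
            }
        }
    }
}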
 
4.open files
1. Why does "too many open files" occur?
- The default Linux limit on open files, shown by ulimit -n, is 1024; exceeding it triggers this error
- ulimit -n
2. Why could the root user create 4000+ connections before hitting "too many open files"?
- For root, max open files is governed by the hard limit (4096 by default), while ordinary users are capped at the soft limit of 1024; check it with ulimit -Hn
- ulimit -Hn
3. Setting the open files limit
/etc/security/limits.conf
5.SELECTOR
1. What a multiplexer is
- Many IO paths go through a single system call (select, poll, or epoll), which returns the subset whose state is ready; the program then performs the ACCEPT/READ/WRITE itself
- select() and pselect() allow a program to monitor multiple file descriptors, waiting until one or more of the file descriptors become "ready" for some class of I/O operation (e.g., input possible). A file descriptor is considered ready if it is possible to perform the corresponding I/O operation (e.g., read(2)) without blocking.
- Advantage: compared with plain NIO polling, the number of system calls drops dramatically, so efficiency improves
 
2. Drawbacks of select and poll
- The full set of fds must be passed to the kernel on every call
- The kernel re-scans the entire freshly passed fd set each time
- The kernel only goes as far as pushing data arriving from the NIC through the network protocol stack (layer 4) and attaching it to each fd's buffer
 
3. epoll
- epoll_create -> creates a file descriptor epfd that points to a red-black tree
- epoll_ctl(epfd,ADD,fd,event) -> adds the given fd as a node of the red-black tree behind epfd, together with the event set to watch
- epoll_wait -> returns the set of fds on which watched events have occurred and clears that event type from the fd; to keep watching it, the event must be re-armed with epoll_ctl(epfd,EPOLL_CTL_MOD,listenfd,&ev)
- What the kernel does between epoll_ctl and epoll_wait: when data coming off the NIC raises an IO interrupt, the kernel not only attaches the data to the fd, it also appends any fd in the epfd red-black tree with a pending event to a separate list (call it the wait list). epoll_wait simply returns that list from memory, avoiding a walk over the red-black tree. In other words, the ready fds are prepared while the data is being delivered, which is the kernel's key optimization for epoll.
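On Linux the JDK's Selector is backed by epoll, so Java code maps roughly onto these calls: Selector.open ~ epoll_create, register ~ epoll_ctl(ADD), select ~ epoll_wait. A minimal sketch (port is arbitrary):

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SelectorServer {
    public static void main(String[] args) throws IOException {
        Selector selector = Selector.open();                   // ~ epoll_create
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(9090));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);     // ~ epoll_ctl(ADD, listen fd)
        while (true) {
            selector.select();                                  // ~ epoll_wait: only ready fds come back
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = ((ServerSocketChannel) key.channel()).accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);  // ~ epoll_ctl(ADD, client fd)
                } else if (key.isReadable()) {
                    SocketChannel client = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = client.read(buf);
                    if (n > 0) {
                        System.out.println(new String(buf.array(), 0, n));
                    } else if (n == -1) {
                        key.cancel();                           // peer closed: stop watching this fd
                        client.close();
                    }
                }
            }
        }
    }
}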
 
6. Synchronous, asynchronous, blocking, non-blocking
- Synchronous: the application performs the IO itself
- Asynchronous: the kernel performs the R/W; the application issues no IO call of its own (e.g. it registers an event and the kernel fills or drains the buffer)
- Blocking: BLOCKING, the call does not return until the IO is ready
- Non-blocking: NONBLOCKING, the call always returns immediately but may not have succeeded; the application must retry in a loop to make sure it does
 
7. Interrupts
- Soft interrupt: system calls, int 0x80
- Hard interrupt: the clock interrupt from the crystal oscillator; it preempts whatever the CPU is running, saves its state, and switches to another task, which is how a single CPU handles multiple tasks
- IO interrupts: NIC, keyboard, mouse (dpi)
 
VI. Server models

VII. Netty
1.ByteBuf
- A more convenient ByteBuffer
 
// the default allocator
ByteBuf byteBuf = ByteBufAllocator.DEFAULT.buffer(8, 20);
// explicitly unpooled / pooled allocators
ByteBuf buf = UnpooledByteBufAllocator.DEFAULT.heapBuffer(8, 20);
ByteBuf buf2 = PooledByteBufAllocator.DEFAULT.heapBuffer(8, 20);
// commonly used methods
buf.isReadable();
buf.readerIndex();
buf.readableBytes();
buf.isWritable();
buf.writerIndex();
buf.writableBytes();
buf.capacity();
buf.maxCapacity();
buf.isDirect();
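Unlike ByteBuffer, a ByteBuf keeps separate reader and writer indexes, so no flip()/compact() dance is needed; a minimal sketch showing how the indexes move:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.UnpooledByteBufAllocator;

public class ByteBufIndexDemo {
    public static void main(String[] args) {
        ByteBuf buf = UnpooledByteBufAllocator.DEFAULT.heapBuffer(8, 20);
        System.out.println("after alloc: r=" + buf.readerIndex() + " w=" + buf.writerIndex() + " cap=" + buf.capacity());
        buf.writeBytes("hello".getBytes());            // moves writerIndex forward by 5
        System.out.println("after write: r=" + buf.readerIndex() + " w=" + buf.writerIndex());
        char c = (char) buf.readByte();                // moves readerIndex forward by 1
        System.out.println("read '" + c + "', now r=" + buf.readerIndex() + " readable=" + buf.readableBytes());
        buf.release();                                 // ByteBuf is reference-counted
    }
}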
2. Working up to Netty step by step
package com.wod.ncc.common.netty;
import io.netty.bootstrap.Bootstrap;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.Unpooled;
import io.netty.channel.*;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.channel.socket.nio.NioSocketChannel;
import org.junit.Test;
import java.lang.reflect.ParameterizedType;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
/**
 * netty client and server demos
 *
 * @author wod.Y
 * @date 2021/03/14 16:56
 **/
public class NettyDemo {
    /**
     * Basic use of NioEventLoopGroup
     * @throws Exception
     */
    @Test
    public void loopExecutor() throws Exception {
        // GROUP: a thread pool (event loop group)
        NioEventLoopGroup selector = new NioEventLoopGroup(2);
        selector.execute(() -> {
            try {
                while (true) {
                    System.out.println("hello world 001!");
                    Thread.sleep(1000);
                }
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        });
        selector.execute(() -> {
            try {
                while (true) {
                    System.out.println("hello world 002!");
                    Thread.sleep(1000);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        System.in.read();
    }
    /**
     * Client version 1 - driving the channel by hand, step by step
     * @throws InterruptedException
     */
    @Test
    public void clientMod() throws InterruptedException {
        NioEventLoopGroup nioEventLoopGroup = new NioEventLoopGroup(1);
        NioSocketChannel client = new NioSocketChannel();
        ChannelPipeline pipeline = client.pipeline();
        pipeline.addLast(new MyHandler());
        nioEventLoopGroup.register(client);
        // connect synchronously
        ChannelFuture connect = client.connect(new InetSocketAddress("10.0.0.121", 9090));
        ChannelFuture sync = connect.sync();
        // send synchronously
        ByteBuf buf = Unpooled.copiedBuffer("hello".getBytes(StandardCharsets.UTF_8));
        ChannelFuture send = client.writeAndFlush(buf);
        send.sync();
        // request a read; the registered handler receives the data
        Channel read = client.read();
        ByteBufAllocator alloc = read.alloc();
        // block until the connection is closed
        sync.channel().closeFuture().sync();
        System.out.println("client ended");
    }
    /**
     * Netty-style client (using Bootstrap)
     * @throws InterruptedException
     */
    @Test
    public void clientMod1() throws InterruptedException {
        NioEventLoopGroup nioEventLoopGroup = new NioEventLoopGroup(1);
        Bootstrap bs = new Bootstrap();
        ChannelFuture connect = bs.group(nioEventLoopGroup)
                .handler(new MyInitHandler<Channel>() {// a ChannelInitializer could be used here instead
                    @Override
                    protected void init(Channel channel) {
                        channel.pipeline().addLast(new MyClientHandler());
                    }
                })
                .channel(NioSocketChannel.class)
                .connect(new InetSocketAddress(9090));
        ChannelFuture send = connect.channel().writeAndFlush(Unpooled.copiedBuffer("netty demo by yhm", StandardCharsets.UTF_8));
        send.sync();
        connect.sync().channel().closeFuture().sync();
        System.out.println("client ended");
    }
    /**
     * Server version 1 - hand-written acceptor
     * @throws InterruptedException
     */
    @Test
    public void serverMod() throws InterruptedException {
        NioEventLoopGroup thread = new NioEventLoopGroup(1);
        NioServerSocketChannel server = new NioServerSocketChannel();
        server.pipeline().addLast(new MyAcceptHandler(thread, new MyInitHandler<Channel>() {
            @Override
            protected void init(Channel channel) {
                channel.pipeline().addLast(new MyHandler());
            }
        }));
        thread.register(server);
        ChannelFuture bind = server.bind(new InetSocketAddress(9090));
        bind.sync().channel().closeFuture().sync();
        System.out.println("server closed!");
    }
    /**
     * Netty-style server (using ServerBootstrap)
     * @throws InterruptedException
     */
    @Test
    public void serverMod1() throws InterruptedException {
        NioEventLoopGroup group = new NioEventLoopGroup(1);
        ServerBootstrap bs = new ServerBootstrap();
        ChannelFuture bind = bs.group(group, group)
                .channel(NioServerSocketChannel.class)
                .childHandler(new MyInitHandler<Channel>() {// a ChannelInitializer could be used here instead
                    @Override
                    protected void init(Channel channel) {
                        channel.pipeline().addLast(new MyHandler());
                    }
                })
                .bind(new InetSocketAddress(9090));
        bind.sync().channel().closeFuture().sync();
        System.out.println("server closed!");
    }
}
/**
 * Imitation of Netty's ChannelInitializer
 * @param <T>
 */
@ChannelHandler.Sharable
abstract class MyInitHandler<T extends Channel> extends ChannelInboundHandlerAdapter {
    protected abstract void init(T t);
    @Override
    public void channelRegistered(ChannelHandlerContext ctx) throws Exception {
        System.out.println("client registered....");
        ChannelPipeline pipeline = ctx.pipeline();
        init((T) ctx.channel());
        if (pipeline.context(this) != null) {
            pipeline.remove(this);
        }
    }
}
class MyClientHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        ByteBuf buf = (ByteBuf) msg;
        byte[] readBytes = new byte[buf.readableBytes()];
        buf.getBytes(0, readBytes);
        System.out.println(new String(readBytes));
    }
}
class MyHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        ByteBuf buf = (ByteBuf) msg;
        byte[] readBytes = new byte[buf.readableBytes()];
        buf.getBytes(0, readBytes);
        System.out.println(new String(readBytes));
        ctx.writeAndFlush(buf);
    }
}
class MyAcceptHandler extends ChannelInboundHandlerAdapter {
    private final NioEventLoopGroup selector;
    private final ChannelHandler initHandler;
    MyAcceptHandler(NioEventLoopGroup thread, ChannelHandler inHandler) {
        this.selector = thread;
        this.initHandler = inHandler;
    }
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        SocketChannel sc = (SocketChannel) msg;
        System.out.println(sc);
        sc.pipeline().addLast(initHandler);
        selector.register(sc);
    }
}