Chrome network stack: URL request flow analysis

《Chromium内核原理之网络栈》(Chromium internals: the network stack)


 

Inspecting the network: previously via chrome://net-internals#sockets.

Capture is now done with chrome://net-export/; open the exported log in the NetLog viewer to inspect it.
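If the log needs to cover browser startup (before the net-export page can be opened), a NetLog can also be captured from the command line. A minimal sketch, assuming the standard switches (flag names from memory, verify against your build):

chrome --log-net-log=C:\temp\netlog.json --net-log-capture-mode=IncludeSensitive

The resulting JSON file is then loaded into the NetLog viewer (see the catapult link in the references).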

For example, to see how many sockets are open:

 


Disabling the automatic update (Safe Browsing list fetch)

void V4UpdateProtocolManager::IssueUpdateRequest() {
  DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);

  // If an update request is already pending, record and return silently.
  if (request_) {
    RecordUpdateResult(V4OperationResult::ALREADY_PENDING_ERROR);
    return;
  }


  net::NetworkTrafficAnnotationTag traffic_annotation =
      net::DefineNetworkTrafficAnnotation("safe_browsing_v4_update", R"(
        semantics {
          sender: "Safe Browsing"
          description:
            "Safe Browsing issues a request to Google every 30 minutes or so "
            "to get the latest database of hashes of bad URLs."
          trigger:
            "On a timer, approximately every 30 minutes."
          data:
             "The state of the local DB is sent so the server can send just "
             "the changes. This doesn't include any user data."
          destination: GOOGLE_OWNED_SERVICE
        }
        policy {
          cookies_allowed: YES
          cookies_store: "Safe Browsing cookie store"
          setting:
            "Users can disable Safe Browsing by unchecking 'Protect you and "
            "your device from dangerous sites' in Chromium settings under "
            "Privacy. The feature is enabled by default."
          chrome_policy {
            SafeBrowsingEnabled {
              policy_options {mode: MANDATORY}
              SafeBrowsingEnabled: false
            }
          }
        })");
  auto resource_request = std::make_unique<network::ResourceRequest>();
  std::string req_base64 = GetBase64SerializedUpdateRequestProto();
  GetUpdateUrlAndHeaders(req_base64, &resource_request->url,
                         &resource_request->headers);
    
    
void V4UpdateProtocolManager::GetUpdateUrlAndHeaders(
    const std::string& req_base64,
    GURL* gurl,
    net::HttpRequestHeaders* headers) const {
  V4ProtocolManagerUtil::GetRequestUrlAndHeaders(
      req_base64, "threatListUpdates:fetch", config_, gurl, headers);
}

DNS

DohProviderEntry::List& DohProviderEntry::GetList(): where the DoH provider domains come from.

HostResolverManager::IsGloballyReachable

To force it not to be called:

Set record_rtt to false:
void OnAttemptComplete(unsigned attempt_number,
                         bool record_rtt,
                         base::TimeTicks start,
                         int rv) {
    DCHECK_LT(attempt_number, attempts_.size());
    const DnsAttempt* attempt = attempts_[attempt_number].get();
    
#if 1  // zhibin: test hack - don't trigger GetList() for the DNS reachability check
    record_rtt = false;
#endif
    
    if (record_rtt && attempt->GetResponse()) {
      resolve_context_->RecordRtt(
          attempt->server_index(), secure_ /* is_doh_server */,
          base::TimeTicks::Now() - start, rv, session_.get());
    }
    if (callback_.is_null())
      return;
    AttemptResult result = ProcessAttemptResult(AttemptResult(rv, attempt));
    if (result.rv != ERR_IO_PENDING)
      DoCallback(result);
  }

src\net\log - the network logging classes

Usage:

NetLogWithSource::NetLogWithSource() {
  // Conceptually, default NetLogWithSource have no NetLog*, and will return
  // nullptr when calling |net_log()|. However for performance reasons, we
  // always store a non-null member to the NetLog in order to avoid needing
  // null checks for critical codepaths.
  //
  // The "dummy" net log used here will always return false for IsCapturing(),
  // and have no sideffects should its method be called. In practice the only
  // method that will get called on it is IsCapturing().
  static base::NoDestructor<NetLog> dummy{base::PassKey<NetLogWithSource>()};
  DCHECK(!dummy->IsCapturing());
  non_null_net_log_ = dummy.get();
}

NetLogWithSource::~NetLogWithSource() = default;

void NetLogWithSource::AddEntry(NetLogEventType type,
                                NetLogEventPhase phase) const {
  non_null_net_log_->AddEntry(type, source_, phase);
}

net_log.h
  AddEntry(NetLogEventType type,
           const NetLogSource& source,
           NetLogEventPhase phase,
           const ParametersCallback& get_params) {
    if (LIKELY(!IsCapturing()))  // when nothing is capturing, it returns right here
      return;

    AddEntryWithMaterializedParams(type, source, phase, get_params());  // to force logging, make it fall through to here
  }

Ultimately, every registered observer receives each entry:
void NetLog::AddEntryAtTimeWithMaterializedParams(NetLogEventType type,
                                                  const NetLogSource& source,
                                                  NetLogEventPhase phase,
                                                  base::TimeTicks time,
                                                  base::Value&& params) {
  NetLogEntry entry(type, source, phase, time, std::move(params));

  // Notify all of the log observers, regardless of capture mode.
  base::AutoLock lock(lock_);
  for (auto* observer : observers_) {
    observer->OnAddEntry(entry);
  }
}
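For reference, a minimal observer might look roughly like this. This is a sketch, not code from the tree; the exact AddObserver/OnAddEntry signatures and capture-mode names should be checked against net/log/net_log.h for your version:

class LoggingObserver : public net::NetLog::ThreadSafeObserver {
 public:
  // Called for every entry added while this observer is registered.
  void OnAddEntry(const net::NetLogEntry& entry) override {
    LOG(INFO) << "netlog event type=" << static_cast<int>(entry.type);
  }
};

// Once registered, NetLog::AddEntry() above fans out to this observer.
LoggingObserver observer;
net::NetLog::Get()->AddObserver(&observer,
                                net::NetLogCaptureMode::kIncludeSensitive);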

Socket implementation architecture

TCPSocket has different definitions per platform, e.g. Windows vs. POSIX. Build flags select the platform-specific class at compile time, and a typedef unifies them under one name:

\net\socket\tcp_socket.h
 
#ifndef NET_SOCKET_TCP_SOCKET_H_
#define NET_SOCKET_TCP_SOCKET_H_

#if BUILDFLAG(IS_WIN)
#include "net/socket/tcp_socket_win.h"
#elif BUILDFLAG(IS_POSIX) || BUILDFLAG(IS_FUCHSIA)
#include "net/socket/tcp_socket_posix.h"
#endif

namespace net {

#if BUILDFLAG(IS_WIN)
typedef TCPSocketWin TCPSocket;
#elif BUILDFLAG(IS_POSIX) || BUILDFLAG(IS_FUCHSIA)
typedef TCPSocketPosix TCPSocket;
#endif

}  // namespace net

This TCPSocket is an intermediate class, used mainly by TCPClientSocket and TCPServerSocket.

Client connect-timeout implementation (inside the socket layer)

When TCPClientSocket::DoConnect connects, the socket is put into non-blocking mode, so the connect call normally returns immediately in the IO_PENDING state.

The connect operation (WatchForRead registers StartWatchingOnce):

1. When connecting, a timer is started at the same time:

  start_connect_attempt_ = base::TimeTicks::Now();
  LOG(ERROR) << " == connect_attempt_timer_.Start:"<< addresses_[current_address_index_].ToString();

  // Start a timer to fail the connect attempt if it takes too long.
  base::TimeDelta attempt_timeout = GetConnectAttemptTimeout();
  if (!attempt_timeout.is_max()) {
    DCHECK(!connect_attempt_timer_.IsRunning());
    connect_attempt_timer_.Start(
        FROM_HERE, attempt_timeout,
        base::BindOnce(&TCPClientSocket::OnConnectAttemptTimeout,
                       base::Unretained(this)));
  }

The timeout callback is OnConnectAttemptTimeout, which calls DidCompleteConnect(ERR_TIMED_OUT).
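The timeout handler itself is tiny; paraphrased from net/socket/tcp_client_socket.cc (check your checkout for the exact body):

void TCPClientSocket::OnConnectAttemptTimeout() {
  // Treat the still-pending attempt as failed; this funnels into the same
  // completion path as a real connect() result.
  DidCompleteConnect(ERR_TIMED_OUT);
}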

2. The connect call registers a callback for the result:

socket_->Connect(endpoint,
                          base::BindOnce(&TCPClientSocket::DidCompleteConnect,
                                         base::Unretained(this)));

 

Whether the timeout fires or the connect completes, the watch is torn down: TCPSocketWin::Core::Detach calls read_watcher_.StopWatching().

Both the connect callback and the timeout callback end up in DidCompleteConnect, which runs DoConnectLoop and reaches DoConnectComplete(rv).

Here the timer is cancelled with connect_attempt_timer_.Stop(). DoDisconnect is also called; it too checks whether the timer is running and cancels it. The key call is socket_->Close(), which dispatches to the platform-specific socket implementation; on Windows that is TCPSocketWin::Close. Its main job is core_->Detach(), which calls read_watcher_.StopWatching(). That stops the read watch on the socket, i.e. the connect-status watch. So when the timeout path reaches this point the watch is removed as well, and there is no case where the timeout has already ended the connection but the watcher still fires afterwards.

How the timeout value is computed:

Chrome adjusts the timeout dynamically based on network conditions. It estimates an RTT itself via network_quality_estimator_->GetTransportRTT(). We configure an RTTMultiplier, and the timeout is rtt * RTTMultiplier; this is Chrome's own adaptive timeout. The result is then clamped to the minimum and maximum configured at startup: a value below the minimum becomes the minimum, and a value above the maximum becomes the maximum.

base::TimeDelta TCPClientSocket::GetConnectAttemptTimeout() {
    LOG(ERROR) << "kTimeoutTcpConnectAttempt:";
  if (!base::FeatureList::IsEnabled(features::kTimeoutTcpConnectAttempt))
    return base::TimeDelta::Max();
  LOG(ERROR) << "kTimeoutTcpConnectAttempt: Enable";
  absl::optional<base::TimeDelta> transport_rtt = absl::nullopt;
  if (network_quality_estimator_)
  {
      LOG(ERROR) << "network_quality_estimator_: yes";
      transport_rtt = network_quality_estimator_->GetTransportRTT();
  }
  else {
      LOG(ERROR) << "network_quality_estimator_: null. return max";
  }

  base::TimeDelta min_timeout = features::kTimeoutTcpConnectAttemptMin.Get();
  base::TimeDelta max_timeout = features::kTimeoutTcpConnectAttemptMax.Get();

  if (!transport_rtt)
    return max_timeout;

  base::TimeDelta adaptive_timeout =
      transport_rtt.value() *
      features::kTimeoutTcpConnectAttemptRTTMultiplier.Get();
  LOG(ERROR) << "adaptive_timeout =transport_rtt* rttMultiplier: "<< transport_rtt.value() *
      features::kTimeoutTcpConnectAttemptRTTMultiplier.Get()<<"="<< transport_rtt.value()<<"*"<< features::kTimeoutTcpConnectAttemptRTTMultiplier.Get();


  if (adaptive_timeout <= min_timeout)
  {
      LOG(ERROR) << " use min:" << min_timeout;
      return min_timeout;
  }

  if (adaptive_timeout >= max_timeout)
  {
      LOG(ERROR) << " use max:" << max_timeout;
      return max_timeout;
  }
  LOG(ERROR) << " use adaptive_timeout";
  return adaptive_timeout;
}

Chrome command-line flags: --flag-switches-begin     --enable-features=TimeoutTcpConnectAttempt:TimeoutTcpConnectAttemptMin/3s/TimeoutTcpConnectAttemptMax/5s/TimeoutTcpConnectAttemptRTTMultiplier/2,NetworkQualityEstimator --flag-switches-end 

The command line above enables the features::kTimeoutTcpConnectAttempt feature, sets the minimum and maximum timeouts, and enables the NetworkQualityEstimator (so network_quality_estimator_ is non-null).

RTTMultiplier is set to 2; it scales the RTT.
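The feature and its parameters live in net/base/features; the --enable-features=Feature:Param/Value/... syntax above fills these FeatureParams in by name. A rough sketch of the declarations (paraphrased; the default values shown here are placeholders, not the real defaults):

const base::Feature kTimeoutTcpConnectAttempt{"TimeoutTcpConnectAttempt",
                                              base::FEATURE_DISABLED_BY_DEFAULT};
constexpr base::FeatureParam<double> kTimeoutTcpConnectAttemptRTTMultiplier{
    &kTimeoutTcpConnectAttempt, "TimeoutTcpConnectAttemptRTTMultiplier", 5.0};
constexpr base::FeatureParam<base::TimeDelta> kTimeoutTcpConnectAttemptMin{
    &kTimeoutTcpConnectAttempt, "TimeoutTcpConnectAttemptMin", base::Seconds(8)};
constexpr base::FeatureParam<base::TimeDelta> kTimeoutTcpConnectAttemptMax{
    &kTimeoutTcpConnectAttempt, "TimeoutTcpConnectAttemptMax", base::Seconds(30)};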

1. transport_rtt = network_quality_estimator_->GetTransportRTT()

transport_rtt is the transport round-trip time (RTT) obtained from the NetworkQualityEstimator. RTT is the time it takes data to travel from the client to the server and back, so it reflects the current latency and quality of the network.

2. features::kTimeoutTcpConnectAttemptRTTMultiplier.Get()

features::kTimeoutTcpConnectAttemptRTTMultiplier is a multiplier obtained from a feature flag / configuration. It scales the RTT-based timeout.

3. Computing adaptive_timeout

adaptive_timeout is the product of transport_rtt and kTimeoutTcpConnectAttemptRTTMultiplier. Its purpose is to adapt the TCP connect timeout to the current network quality:

adaptive_timeout = transport_rtt.value() * kTimeoutTcpConnectAttemptRTTMultiplier.Get();

Why adaptive_timeout is computed this way

A well-chosen TCP connect timeout avoids failing the attempt too early on a poor network and avoids waiting too long on a good one. Using the RTT as a baseline, scaled by a configurable multiplier, keeps the timeout in line with the current network environment.

This adaptive computation has a couple of advantages:

  1. Longer timeouts on poor networks: higher latency means a larger RTT, so the timeout grows and the TCP connect attempt gets more time to complete.

  2. Shorter timeouts on good networks: lower latency means a smaller RTT, so the timeout shrinks, failures are detected sooner, and there is no unnecessary waiting.

In this way the connect-attempt behavior stays reasonable across different network environments.
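For example, with the command line above (min 3 s, max 5 s, multiplier 2): a measured transport RTT of 100 ms gives 100 ms * 2 = 200 ms, which is below the 3 s minimum, so 3 s is used; an RTT of 4 s gives 8 s, which exceeds the 5 s maximum, so 5 s is used; an RTT of 2 s gives 4 s, which falls inside the range and is used directly.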

 

Other timeout implementations at higher layers? The code apparently does not reach any in this path (not verified).

 

The source excerpts are collected in the post "chromium 常用函数" (Bigben, 博客园 / cnblogs.com).

Linux (POSIX) TCP connect

1. Initiating the connection:
int TCPSocketPosix::Connect(const IPEndPoint& address,
                            CompletionOnceCallback callback) {
  // ...

  int rv = socket_->Connect(
      storage, base::BindOnce(&TCPSocketPosix::ConnectCompleted,
                              base::Unretained(this), std::move(callback)));
  if (rv != ERR_IO_PENDING)
    rv = HandleConnectCompleted(rv);
  return rv;
}
This calls `socket_->Connect`, which is a method of the `SocketPosix` class.
 
2. In `SocketPosix::Connect`:
 
int SocketPosix::Connect(const SockaddrStorage& address,
                         CompletionOnceCallback callback) {
    ......
  int rv = DoConnect();
  if (rv != ERR_IO_PENDING)
    return rv;

  if (!base::CurrentIOThread::Get()->WatchFileDescriptor(
          socket_fd_, true, base::MessagePumpForIO::WATCH_WRITE,
          &write_socket_watcher_, this)) {
    PLOG(ERROR) << "WatchFileDescriptor failed on connect";
    return MapSystemError(errno);
  }

  // There is a race-condition in the above code if the kernel receive a RST
  // packet for the "connect" call before the registration of the socket file
  // descriptor to the message loop pump. On most platform it is benign as the
  // message loop pump is awakened for that socket in an error state, but on
  // iOS this does not happens. Check the status of the socket at this point
  // and if in error, consider the connection as failed.
  int os_error = 0;
  socklen_t len = sizeof(os_error);
  if (getsockopt(socket_fd_, SOL_SOCKET, SO_ERROR, &os_error, &len) == 0) {
    // TCPSocketPosix expects errno to be set.
    errno = os_error;
  }

  rv = MapConnectError(errno);
  if (rv != OK && rv != ERR_IO_PENDING) {
    write_socket_watcher_.StopWatchingFileDescriptor();
    return rv;
  }

  write_callback_ = std::move(callback);
  waiting_connect_ = true;
  return ERR_IO_PENDING;
}


int SocketPosix::DoConnect() {
  int rv = HANDLE_EINTR(connect(socket_fd_,
                                peer_address_->addr,
                                peer_address_->addr_len));
  DCHECK_GE(0, rv);
  return rv == 0 ? OK : MapConnectError(errno);
}
This uses a non-blocking connect(); a return of EINPROGRESS means the connection is in progress.
WATCH_WRITE is the event being watched: WatchFileDescriptor monitors the socket for writability, and when the socket becomes writable (the connection completed or failed) the callback fires.
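The same pattern can be seen outside Chromium with plain POSIX calls. A minimal, blocking-wait sketch for illustration only (Chromium instead hands the fd to the message pump, as described in steps 3 and 4 below):

#include <fcntl.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cerrno>

int ConnectNonBlocking(const sockaddr_in& addr) {
  int fd = socket(AF_INET, SOCK_STREAM, 0);
  fcntl(fd, F_SETFL, O_NONBLOCK);  // put the socket into non-blocking mode

  int rv = connect(fd, reinterpret_cast<const sockaddr*>(&addr), sizeof(addr));
  if (rv == 0)
    return fd;                         // connected immediately (rare)
  if (errno != EINPROGRESS) {
    close(fd);                         // immediate failure
    return -1;
  }

  // Connection in progress: wait until the socket becomes writable, which
  // signals completion (success or failure).
  fd_set wfds;
  FD_ZERO(&wfds);
  FD_SET(fd, &wfds);
  timeval timeout{10, 0};              // 10-second wait for the example
  if (select(fd + 1, nullptr, &wfds, nullptr, &timeout) <= 0) {
    close(fd);                         // timed out or select() error
    return -1;
  }

  // Writable does not mean success; SO_ERROR holds the real result.
  int os_error = 0;
  socklen_t len = sizeof(os_error);
  getsockopt(fd, SOL_SOCKET, SO_ERROR, &os_error, &len);
  if (os_error != 0) {
    close(fd);
    return -1;
  }
  return fd;
}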

3. Handling connect completion:

When the socket becomes writable, TCPSocketPosix::ConnectCompleted is called:

void TCPSocketPosix::ConnectCompleted(CompletionOnceCallback callback, int rv) {
  DCHECK_NE(ERR_IO_PENDING, rv);
  std::move(callback).Run(HandleConnectCompleted(rv));
}

int TCPSocketPosix::HandleConnectCompleted(int rv) {
  // ...
  if (rv == OK)
    NotifySocketPerformanceWatcher();
  // ...
  return rv;
}

4. How the watch is implemented

CurrentIOThread::WatchFileDescriptor calls:

GetMessagePumpForIO()->WatchFileDescriptor(fd, persistent, mode,
controller, delegate);

The concrete implementation is MessagePumpLibevent::WatchFileDescriptor:

bool MessagePumpLibevent::WatchFileDescriptor(int fd,
                                              bool persistent,
                                              int mode,
                                              FdWatchController* controller,
                                              FdWatcher* delegate) {
  int event_mask = persistent ? EV_PERSIST : 0;
  if (mode & WATCH_READ)
    event_mask |= EV_READ;
  if (mode & WATCH_WRITE)
    event_mask |= EV_WRITE;

  std::unique_ptr<event> evt(controller->ReleaseEvent());
  if (!evt) {
    // Ownership is transferred to the controller.
    evt = std::make_unique<event>();
  }

  // Set current interest mask and message pump for this event.
  event_set(evt.get(), fd, event_mask, OnLibeventNotification, controller);

  // Tell libevent which message pump this socket will belong to when we add it.
  if (event_base_set(event_base_, evt.get())) {
    DPLOG(ERROR) << "event_base_set(fd=" << fd << ")";
    return false;
  }

  // Add this socket to the list of monitored sockets.
  if (event_add(evt.get(), nullptr)) {
    DPLOG(ERROR) << "event_add failed(fd=" << fd << ")";
    return false;
  }

  controller->Init(std::move(evt));
  controller->set_watcher(delegate);
  controller->set_pump(this);

  return true;
}
The full original code:
bool MessagePumpLibevent::WatchFileDescriptor(int fd,
                                              bool persistent,
                                              int mode,
                                              FdWatchController* controller,
                                              FdWatcher* delegate) {
#if BUILDFLAG(ENABLE_MESSAGE_PUMP_EPOLL)
  if (epoll_pump_) {
    return epoll_pump_->WatchFileDescriptor(fd, persistent, mode, controller,
                                            delegate);
  }
#endif

  TRACE_EVENT("base", "MessagePumpLibevent::WatchFileDescriptor", "fd", fd,
              "persistent", persistent, "watch_read", mode & WATCH_READ,
              "watch_write", mode & WATCH_WRITE);
  DCHECK_GE(fd, 0);
  DCHECK(controller);
  DCHECK(delegate);
  DCHECK(mode == WATCH_READ || mode == WATCH_WRITE || mode == WATCH_READ_WRITE);
  // WatchFileDescriptor should be called on the pump thread. It is not
  // threadsafe, and your watcher may never be registered.
  DCHECK(watch_file_descriptor_caller_checker_.CalledOnValidThread());

  short event_mask = persistent ? EV_PERSIST : 0;
  if (mode & WATCH_READ) {
    event_mask |= EV_READ;
  }
  if (mode & WATCH_WRITE) {
    event_mask |= EV_WRITE;
  }

  std::unique_ptr<event> evt(controller->ReleaseEvent());
  if (!evt) {
    // Ownership is transferred to the controller.
    evt = std::make_unique<event>();
  } else {
    // Make sure we don't pick up any funky internal libevent masks.
    int old_interest_mask = evt->ev_events & (EV_READ | EV_WRITE | EV_PERSIST);

    // Combine old/new event masks.
    event_mask |= old_interest_mask;

    // Must disarm the event before we can reuse it.
    event_del(evt.get());

    // It's illegal to use this function to listen on 2 separate fds with the
    // same |controller|.
    if (EVENT_FD(evt.get()) != fd) {
      NOTREACHED_IN_MIGRATION()
          << "FDs don't match" << EVENT_FD(evt.get()) << "!=" << fd;
      return false;
    }
  }

  // Set current interest mask and message pump for this event.
  event_set(evt.get(), fd, event_mask, OnLibeventNotification, controller);

  // Tell libevent which message pump this socket will belong to when we add it.
  if (event_base_set(event_base_.get(), evt.get())) {
    DPLOG(ERROR) << "event_base_set(fd=" << EVENT_FD(evt.get()) << ")";
    return false;
  }

  // Add this socket to the list of monitored sockets.
  if (event_add(evt.get(), nullptr)) {
    DPLOG(ERROR) << "event_add failed(fd=" << EVENT_FD(evt.get()) << ")";
    return false;
  }

  controller->Init(std::move(evt));
  controller->set_watcher(delegate);
  controller->set_libevent_pump(this);
  return true;
}

This implementation uses the libevent library to monitor state changes on file descriptors (including sockets). For connect, the key points are:
1. It registers the EV_WRITE event (corresponding to WATCH_WRITE). With a non-blocking connect, the socket becomes writable when the connection completes.
2. It may also register EV_READ (if mode includes WATCH_READ), which is useful for detecting some kinds of connection failure.
3. The event callback is set to OnLibeventNotification, which is invoked when the event fires.
4. The event is added to libevent's event loop via event_add.
However, this function does not itself decide whether connect succeeded; it only sets up the watch. Success or failure is determined in the callback after the event fires.
Conceptually, the connect-result check happens downstream of OnLibeventNotification, roughly like this (a simplified sketch, not the actual Chromium code):

void MessagePumpLibevent::OnLibeventNotification(int fd, short events, void* context) {
  FdWatchController* controller = static_cast<FdWatchController*>(context);
  DCHECK(controller);

  if (events & EV_WRITE) {
    // Socket is writable, which usually means the connection is established
    // However, we need to check for any errors
    int error = 0;
    socklen_t len = sizeof(error);
    if (getsockopt(fd, SOL_SOCKET, SO_ERROR, &error, &len) < 0) {
      // Error occurred
    } else if (error) {
      // Connection failed
    } else {
      // Connection succeeded
    }
    controller->OnFileCanWriteWithoutBlocking(fd);
  }

  if (events & EV_READ) {
    // Handle readable event
    controller->OnFileCanReadWithoutBlocking(fd);
  }
}

 

Full code:
 // static
void MessagePumpLibevent::OnLibeventNotification(int fd,
                                                 short flags,
                                                 void* context) {
  FdWatchController* controller = static_cast<FdWatchController*>(context);
  DCHECK(controller);

  MessagePumpLibevent* pump = controller->libevent_pump();
  pump->processed_io_events_ = true;

  // Make the MessagePumpDelegate aware of this other form of "DoWork". Skip if
  // OnLibeventNotification is called outside of Run() (e.g. in unit tests).
  Delegate::ScopedDoWorkItem scoped_do_work_item;
  if (pump->run_state_)
    scoped_do_work_item = pump->run_state_->delegate->BeginWorkItem();

  // Trace events must begin after the above BeginWorkItem() so that the
  // ensuing "ThreadController active" outscopes all the events under it.
  TRACE_EVENT("toplevel", "OnLibevent", "controller_created_from",
              controller->created_from_location(), "fd", fd, "flags", flags,
              "context", context);
  TRACE_HEAP_PROFILER_API_SCOPED_TASK_EXECUTION heap_profiler_scope(
      controller->created_from_location().file_name());

  if ((flags & (EV_READ | EV_WRITE)) == (EV_READ | EV_WRITE)) {
    // Both callbacks will be called. It is necessary to check that |controller|
    // is not destroyed.
    bool controller_was_destroyed = false;
    controller->was_destroyed_ = &controller_was_destroyed;
    controller->OnFileCanWriteWithoutBlocking(fd, pump);
    if (!controller_was_destroyed)
      controller->OnFileCanReadWithoutBlocking(fd, pump);
    if (!controller_was_destroyed)
      controller->was_destroyed_ = nullptr;
  } else if (flags & EV_WRITE) {
    controller->OnFileCanWriteWithoutBlocking(fd, pump);
  } else if (flags & EV_READ) {
    controller->OnFileCanReadWithoutBlocking(fd, pump);
  }
}

Windows TCP connect

Chromium's TCPSocketWin implementation relies mainly on WSAEventSelect to handle the asynchronous connect. The key pieces:
1. Set non-blocking mode and select the event:
WSAEventSelect(socket_, core_->read_event_, FD_CONNECT);
This puts the socket into non-blocking mode (a side effect of WSAEventSelect) and associates the FD_CONNECT event with read_event_.
2. Initiate the connection:
if (!connect(socket_, storage.addr, storage.addr_len)) {
  // The connect succeeded immediately (very rare).
} else {
  int os_error = WSAGetLastError();
  if (os_error != WSAEWOULDBLOCK) {
    // Immediate failure.
  } else {
    // Connection in progress.
    return ERR_IO_PENDING;
  }
}
int TCPSocketWin::DoConnect() {
  // Log the connect attempt.
  net_log_.BeginEvent(NetLogEventType::TCP_CONNECT_ATTEMPT, [&] {
    return CreateNetLogIPEndPointParams(peer_address_.get());
  });
    
  //----
  core_ = base::MakeRefCounted<Core>(this);

  // WSAEventSelect sets the socket to non-blocking mode as a side effect.
  // Our connect() and recv() calls require that the socket be non-blocking.
  WSAEventSelect(socket_, core_->read_event_, FD_CONNECT);  // side effect: sets the socket to non-blocking

  SockaddrStorage storage;
  if (!peer_address_->ToSockAddr(storage.addr, &storage.addr_len))
    return ERR_ADDRESS_INVALID;

  if (!connect(socket_, storage.addr, storage.addr_len)) {
    // Rare case: the connect succeeded immediately.
    // Connected without waiting!
    //
    // The MSDN page for connect says:
    //   With a nonblocking socket, the connection attempt cannot be completed
    //   immediately. In this case, connect will return SOCKET_ERROR, and
    //   WSAGetLastError will return WSAEWOULDBLOCK.
    // which implies that for a nonblocking socket, connect never returns 0.
    // It's not documented whether the event object will be signaled or not
    // if connect does return 0.  So the code below is essentially dead code
    // and we don't know if it's correct.
    NOTREACHED();
    // If the event is already signaled, the connect is complete; return OK.
    if (ResetEventIfSignaled(core_->read_event_))
      return OK;
  } else {
    int os_error = WSAGetLastError();
    // The normal result here is WSAEWOULDBLOCK.
    if (os_error != WSAEWOULDBLOCK) {
      // Anything other than WSAEWOULDBLOCK means the connect failed.
      LOG(ERROR) << "connect failed: " << os_error;
      connect_os_error_ = os_error;
      int rv = MapConnectError(os_error);
      CHECK_NE(ERR_IO_PENDING, rv);
      return rv;
    }
  }
  // Start watching the event object:
  core_->WatchForRead();
  // In the WOULDBLOCK case, return ERR_IO_PENDING and wait for the WSAEventSelect event.
  return ERR_IO_PENDING;
}

Watching the network read event:
void TCPSocketWin::Core::WatchForRead() {
  // Reads use WSAEventSelect, which closesocket() cancels so unlike writes,
  // there's no need to increment the reference count here.
  read_watcher_.StartWatchingOnce(read_event_, &reader_);
}

WSAEventSelect is what makes the connect non-blocking.
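As a standalone illustration of the same Winsock pattern (plain Win32, not Chromium code; Chromium waits asynchronously through base::win::ObjectWatcher, described next, instead of blocking):

#include <winsock2.h>

// Returns 0 on success, otherwise the Winsock error for the failed connect.
int ConnectWithEvent(SOCKET s, const sockaddr* addr, int addr_len) {
  WSAEVENT connect_event = WSACreateEvent();
  // Associating FD_CONNECT with the event also puts the socket into
  // non-blocking mode.
  WSAEventSelect(s, connect_event, FD_CONNECT);

  if (connect(s, addr, addr_len) != 0 &&
      WSAGetLastError() != WSAEWOULDBLOCK) {
    int error = WSAGetLastError();     // immediate failure
    WSACloseEvent(connect_event);
    return error;
  }

  // Wait for the event object to be signaled.
  WSAWaitForMultipleEvents(1, &connect_event, TRUE, WSA_INFINITE, FALSE);

  // Read the FD_CONNECT result, the same way TCPSocketWin::DidCompleteConnect
  // (shown later) does.
  WSANETWORKEVENTS events = {};
  WSAEnumNetworkEvents(s, connect_event, &events);
  int result = -1;
  if (events.lNetworkEvents & FD_CONNECT)
    result = events.iErrorCode[FD_CONNECT_BIT];  // 0 on success
  WSACloseEvent(connect_event);
  return result;
}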

D:\chromium110\chromium\src\base\win\object_watcher.h

ObjectWatcher usage notes:
// A class that provides a means to asynchronously wait for a Windows object to
// become signaled.  It is an abstraction around RegisterWaitForSingleObject
// that provides a notification callback, OnObjectSignaled, that runs back on
// the origin sequence (i.e., the sequence that called StartWatching).
//
// This class acts like a smart pointer such that when it goes out-of-scope,
// UnregisterWaitEx is automatically called, and any in-flight notification is
// suppressed.
//
// The waiting handle MUST NOT be closed while watching is in progress. If this
// handle is closed while the wait is still pending, the behavior is undefined
// (see MSDN:RegisterWaitForSingleObject).
//
// Typical usage:
//
//   class MyClass : public base::win::ObjectWatcher::Delegate {
//    public:
//     void DoStuffWhenSignaled(HANDLE object) {
//       watcher_.StartWatchingOnce(object, this);
//     }
//     void OnObjectSignaled(HANDLE object) override {
//       // OK, time to do stuff!
//     }
//    private:
//     base::win::ObjectWatcher watcher_;
//   };
//
// In the above example, MyClass wants to "do stuff" when object becomes
// signaled.  ObjectWatcher makes this task easy.  When MyClass goes out of
// scope, the watcher_ will be destroyed, and there is no need to worry about
// OnObjectSignaled being called on a deleted MyClass pointer.  Easy!
// If the object is already signaled before being watched, OnObjectSignaled is
// still called after (but not necessarily immediately after) watch is started.
//

There is a nested class Core here:

 

TCPSocketWin::Core

It is mainly used to start or stop the read/write watches on the socket.

It contains the nested classes ReadDelegate and WriteDelegate, which implement the watcher notification OnObjectSignaled.

// This class encapsulates all the state that has to be preserved as long as
// there is a network IO operation in progress. If the owner TCPSocketWin is
// destroyed while an operation is in progress, the Core is detached and it
// lives until the operation completes and the OS doesn't reference any resource
// declared on this class anymore.
class TCPSocketWin::Core : public base::RefCounted<Core> {
 public:
  // Start watching for the end of a read or write operation.
  void WatchForRead();
  void WatchForWrite();

  // Stops watching for read.
  void StopWatchingForRead();

  // The TCPSocketWin is going away.
  void Detach();

 private:
  friend class base::RefCounted<Core>;

  class ReadDelegate : public base::win::ObjectWatcher::Delegate {
   public:
    explicit ReadDelegate(Core* core) : core_(core) {}
    ~ReadDelegate() override = default;

    // base::ObjectWatcher::Delegate methods:
    void OnObjectSignaled(HANDLE object) override;

   private:
    const raw_ptr<Core> core_;
  };

  class WriteDelegate : public base::win::ObjectWatcher::Delegate {
   public:
    explicit WriteDelegate(Core* core) : core_(core) {}
    ~WriteDelegate() override = default;

    // base::ObjectWatcher::Delegate methods:
    void OnObjectSignaled(HANDLE object) override;

   private:
    const raw_ptr<Core> core_;
  };

  ~Core();

  // The socket that created this object.
  raw_ptr<TCPSocketWin> socket_;

  // |reader_| handles the signals from |read_watcher_|.
  ReadDelegate reader_;
  // |writer_| handles the signals from |write_watcher_|.
  WriteDelegate writer_;

  // |read_watcher_| watches for events from Connect() and Read().
  base::win::ObjectWatcher read_watcher_;
  // |write_watcher_| watches for events from Write();
  base::win::ObjectWatcher write_watcher_;
};

 

 

  // The core of the socket that can live longer than the socket itself. We pass
  // resources to the Windows async IO functions and we have to make sure that
  // they are not destroyed while the OS still references them.
  scoped_refptr<Core> core_;

  // External callback; called when connect or read is complete.
  CompletionOnceCallback read_callback_;

3. Waiting for the connect to complete:

In the DidCompleteConnect method:

 

WSANETWORKEVENTS events;
int rv = WSAEnumNetworkEvents(socket_, core_->read_event_, &events);
if (rv == SOCKET_ERROR) {
  // Handle the error.
} else if (events.lNetworkEvents & FD_CONNECT) {
  os_error = events.iErrorCode[FD_CONNECT_BIT];
  result = MapConnectError(os_error);
} else {
  // Unexpected case.
}

 

TCPSocketWin implements base::win::ObjectWatcher::Delegate, which serves as the notification interface for watcher results:

void TCPSocketWin::OnObjectSignaled(HANDLE object) {
  WSANETWORKEVENTS ev;
  if (WSAEnumNetworkEvents(socket_, accept_event_, &ev) == SOCKET_ERROR) {
    PLOG(ERROR) << "WSAEnumNetworkEvents()";
    return;
  }

  if (ev.lNetworkEvents & FD_ACCEPT) {
    int result = AcceptInternal(accept_socket_, accept_address_);
    if (result != ERR_IO_PENDING) {
      accept_socket_ = nullptr;
      accept_address_ = nullptr;
      std::move(accept_callback_).Run(result);
    }
  } else {
    // This happens when a client opens a connection and closes it before we
    // have a chance to accept it.
    DCHECK(ev.lNetworkEvents == 0);

    // Start watching the next FD_ACCEPT event.
    WSAEventSelect(socket_, accept_event_, FD_ACCEPT);
    accept_watcher_.StartWatchingOnce(accept_event_, this);
  }
}

TCPSocketWin::Core::ReadDelegate, Core's delegate, implements OnObjectSignaled:

void TCPSocketWin::Core::ReadDelegate::OnObjectSignaled(HANDLE object) {
  DCHECK_EQ(object, core_->read_event_);
  DCHECK(core_->socket_);
  if (core_->socket_->waiting_connect_)  // still in the connecting state
    core_->socket_->DidCompleteConnect();
  else  // otherwise a read of network data was signaled
    core_->socket_->DidSignalRead();
}

In the connecting state, this ends up in DidCompleteConnect below:


void TCPSocketWin::DidCompleteConnect() {
  DCHECK(waiting_connect_);
  DCHECK(!read_callback_.is_null());
  int result;

  WSANETWORKEVENTS events;
  int rv = WSAEnumNetworkEvents(socket_, core_->read_event_, &events);
  int os_error = WSAGetLastError();
  if (rv == SOCKET_ERROR) {
    NOTREACHED();
    result = MapSystemError(os_error);
  } else if (events.lNetworkEvents & FD_CONNECT) {
    os_error = events.iErrorCode[FD_CONNECT_BIT];
    result = MapConnectError(os_error);
  } else {
    NOTREACHED();
    result = ERR_UNEXPECTED;
  }

  connect_os_error_ = os_error;
  DoConnectComplete(result);
  waiting_connect_ = false;

  DCHECK_NE(result, ERR_IO_PENDING);
  std::move(read_callback_).Run(result);
}

Error codes:

-118 is ERR_CONNECTION_TIMED_OUT.
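The numeric value comes from the NET_ERROR entries in net/base/net_error_list.h; the relevant line (quoted from memory, check the file) is:

NET_ERROR(CONNECTION_TIMED_OUT, -118)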
 
int MapConnectError(int os_error) {
  switch (os_error) {
    // connect fails with WSAEACCES when Windows Firewall blocks the
    // connection.
    case WSAEACCES:
      return ERR_NETWORK_ACCESS_DENIED;
    case WSAETIMEDOUT:
      return ERR_CONNECTION_TIMED_OUT;
    default: {
      int net_error = MapSystemError(os_error);
      if (net_error == ERR_FAILED)
        return ERR_CONNECTION_FAILED;  // More specific than ERR_FAILED.

      // Give a more specific error when the user is offline.
      if (net_error == ERR_ADDRESS_UNREACHABLE &&
          NetworkChangeNotifier::IsOffline()) {
        return ERR_INTERNET_DISCONNECTED;
      }

      return net_error;
    }
  }
}

 

 

void TransportClientSocketPool::OnConnectJobComplete(Group* group,
                                                     int result,
                                                     ConnectJob* job) {
 
  // Check if the ConnectJob is already bound to a Request. If so, result is
  // returned to that specific request.
  absl::optional<Group::BoundRequest> bound_request =
      group->FindAndRemoveBoundRequestForConnectJob(job);
  Request* request = nullptr;
  std::unique_ptr<Request> owned_request;
  if (bound_request) {
    --connecting_socket_count_;
 
    // If the socket pools were previously flushed with an error, return that
    // error to the bound request and discard the socket.
    if (bound_request->pending_error != OK) {
      InvokeUserCallbackLater(bound_request->request->handle(),
                              bound_request->request->release_callback(),
                              bound_request->pending_error,
                              bound_request->request->socket_tag());
      bound_request->request->net_log().EndEventWithNetErrorCode(
          NetLogEventType::SOCKET_POOL, bound_request->pending_error);
      OnAvailableSocketSlot(group->group_id(), group);
      CheckForStalledSocketGroups();
      return;
    }
 
    // If the ConnectJob is from a previous generation, add the request back to
    // the group, and kick off another request. The socket will be discarded.
    if (bound_request->generation != group->generation()) {
      group->InsertUnboundRequest(std::move(bound_request->request));
      OnAvailableSocketSlot(group->group_id(), group);
      CheckForStalledSocketGroups();
      return;
    }
 
    request = bound_request->request.get();
  } else {
    // In this case, RemoveConnectJob(job, _) must be called before exiting this
    // method. Otherwise, |job| will be leaked.
    owned_request = group->PopNextUnboundRequest();
    request = owned_request.get();
 
    if (!request) {
      if (result == OK)
        AddIdleSocket(job->PassSocket(), group);
      RemoveConnectJob(job, group);
      OnAvailableSocketSlot(group->group_id(), group);
      CheckForStalledSocketGroups();
      return;
    }
 
    LogBoundConnectJobToRequest(job->net_log().source(), *request);
  }
 
  // The case where there's no request is handled above.
  DCHECK(request);
 
  if (result != OK)
    request->handle()->SetAdditionalErrorState(job);
  if (job->socket()) {
    HandOutSocket(job->PassSocket(), ClientSocketHandle::UNUSED,
                  job->connect_timing(), request->handle(), base::TimeDelta(),
                  group, request->net_log());
  }
  request->net_log().EndEventWithNetErrorCode(NetLogEventType::SOCKET_POOL,
                                              result);
  // On a connect error, the result is delivered to the request here:
  InvokeUserCallbackLater(request->handle(), request->release_callback(),
                          result, request->socket_tag());
  if (!bound_request)
    RemoveConnectJob(job, group);
  // If no socket was handed out, there's a new socket slot available.
  if (!request->handle()->socket()) {
    OnAvailableSocketSlot(group->group_id(), group);
    CheckForStalledSocketGroups();
  }
}

 

void TransportClientSocketPool::InvokeUserCallback(ClientSocketHandle* handle) {
  auto it = pending_callback_map_.find(handle);
 
  // Exit if the request has already been cancelled.
  if (it == pending_callback_map_.end())
    return;
 
  CHECK(!handle->is_initialized());
  CompletionOnceCallback callback = std::move(it->second.callback);
  int result = it->second.result;
  pending_callback_map_.erase(it);
  std::move(callback).Run(result);
}

 

void ClientSocketHandle::OnIOComplete(int result) {
  TRACE_EVENT0(NetTracingCategory(), "ClientSocketHandle::OnIOComplete");
  CompletionOnceCallback callback = std::move(callback_);
  callback_.Reset();
  HandleInitCompletion(result);
  std::move(callback).Run(result);
}

 

At the point marked #1 in the switch below, a task is posted that invokes OnStreamFailedCallback:
 void HttpStreamFactory::Job::RunLoop(int result) {
  TRACE_EVENT0(NetTracingCategory(), "HttpStreamFactory::Job::RunLoop");
  result = DoLoop(result);

  if (result == ERR_IO_PENDING)
    return;

  // Stop watching for new SpdySessions, to avoid receiving a new SPDY session
  // while doing anything other than waiting to establish a connection.
  spdy_session_request_.reset();

  if ((job_type_ == PRECONNECT) || (job_type_ == PRECONNECT_DNS_ALPN_H3)) {
    base::SingleThreadTaskRunner::GetCurrentDefault()->PostTask(
        FROM_HERE,
        base::BindOnce(&HttpStreamFactory::Job::OnPreconnectsComplete,
                       ptr_factory_.GetWeakPtr(), result));
    return;
  }

  if (IsCertificateError(result)) {
    // Retrieve SSL information from the socket.
    SSLInfo ssl_info;
    GetSSLInfo(&ssl_info);

    next_state_ = STATE_WAITING_USER_ACTION;
    base::SingleThreadTaskRunner::GetCurrentDefault()->PostTask(
        FROM_HERE,
        base::BindOnce(&HttpStreamFactory::Job::OnCertificateErrorCallback,
                       ptr_factory_.GetWeakPtr(), result, ssl_info));
    return;
  }

  switch (result) {
    case ERR_SSL_CLIENT_AUTH_CERT_NEEDED:
      base::SingleThreadTaskRunner::GetCurrentDefault()->PostTask(
          FROM_HERE,
          base::BindOnce(
              &Job::OnNeedsClientAuthCallback, ptr_factory_.GetWeakPtr(),
              base::RetainedRef(connection_->ssl_cert_request_info())));
      return;

    case OK:
      next_state_ = STATE_DONE;
      if (is_websocket_) {
        DCHECK(websocket_stream_);
        base::SingleThreadTaskRunner::GetCurrentDefault()->PostTask(
            FROM_HERE,
            base::BindOnce(&Job::OnWebSocketHandshakeStreamReadyCallback,
                           ptr_factory_.GetWeakPtr()));
      } else if (stream_type_ == HttpStreamRequest::BIDIRECTIONAL_STREAM) {
        if (!bidirectional_stream_impl_) {
          base::SingleThreadTaskRunner::GetCurrentDefault()->PostTask(
              FROM_HERE, base::BindOnce(&Job::OnStreamFailedCallback,
                                        ptr_factory_.GetWeakPtr(), ERR_FAILED));
        } else {
          base::SingleThreadTaskRunner::GetCurrentDefault()->PostTask(
              FROM_HERE,
              base::BindOnce(&Job::OnBidirectionalStreamImplReadyCallback,
                             ptr_factory_.GetWeakPtr()));
        }
      } else {
        DCHECK(stream_.get());
        base::SingleThreadTaskRunner::GetCurrentDefault()->PostTask(
            FROM_HERE, base::BindOnce(&Job::OnStreamReadyCallback,
                                      ptr_factory_.GetWeakPtr()));
      }
      return;

    default:  // #1
      base::SingleThreadTaskRunner::GetCurrentDefault()->PostTask(
          FROM_HERE, base::BindOnce(&Job::OnStreamFailedCallback,
                                    ptr_factory_.GetWeakPtr(), result));
      return;
  }
}

Where the failure first surfaces:

void HttpStreamFactory::Job::OnStreamFailedCallback(int result) {
  delegate_->OnStreamFailed(this, result, server_ssl_config_);
  // |this| may be deleted after this call.
}

The failure reaches URLLoader::NotifyCompleted, then crosses the Mojo boundary into:

ThrottlingURLLoader::OnComplete

NavigationURLLoaderImpl::OnComplete

On failure it posts a task to NotifyRequestFailed:
 void NavigationURLLoaderImpl::OnComplete(
    const network::URLLoaderCompletionStatus& status) {
  // Successful load must have used OnResponseStarted first. In this case, the
  // URLLoaderClient has already been transferred to the renderer process and
  // OnComplete is not expected to be called here.
  if (status.error_code == net::OK) {
    SCOPED_CRASH_KEY_STRING256("NavigationURLLoader::Complete", "url",
                               url_.spec());
    base::debug::DumpWithoutCrashing();
    return;
  }

  // If the default loader (network) was used to handle the URL load request
  // we need to see if the interceptors want to potentially create a new
  // loader for the response. e.g. service worker.
  //
  // Note: Despite having received a response, the HTTP_NOT_MODIFIED(304) ones
  //       are ignored using OnComplete(net::ERR_ABORTED). No interceptor must
  //       be used in this case.
  if (!received_response_) {
    auto response = network::mojom::URLResponseHead::New();
    if (MaybeCreateLoaderForResponse(&response))
      return;
  }

  status_ = status;
  GetUIThreadTaskRunner({})->PostTask(
      FROM_HERE, base::BindOnce(&NavigationURLLoaderImpl::NotifyRequestFailed,
                                weak_factory_.GetWeakPtr(), status));
}

 

SimpleURLLoaderImpl::OnComplete

 

  • A network monitoring tool and test inside the Chrome tree: src\net\tools\net_watcher\net_watcher.cc

When an IP address change is detected, the notification is OnIPAddressChanged.

References

https://dev.chromium.org/for-testers/providing-network-details

https://chromium.googlesource.com/catapult/+/refs/heads/main/netlog_viewer

The notes below are adapted from JeffMony's article《Chromium内核原理之网络栈》(2019-02-13), part of the series:

《Chromium内核原理之blink内核工作解密》
《Chromium内核原理之多进程架构》
《Chromium内核原理之进程间通信(IPC)》
《Chromium内核原理之网络栈》
《Chromium内核原理之网络栈HTTP Cache》
《Chromium内核原理之Preconnect》
《Chromium内核原理之Prerender》
《Chromium内核原理之cronet独立化》

1. Network stack overview
2. Code structure
3. Anatomy of a network request (focusing on HTTP)

3.1 URLRequest
3.2 URLRequestHttpJob
3.3 HttpNetworkTransaction
3.4 HttpStreamFactory

3.4.1 Proxy resolution
3.4.2 Connection management
3.4.3 Host resolution
3.4.4 SSL/TLS

1. Network stack overview

The network stack is a mostly single-threaded, cross-platform library primarily used for resource fetching. Its main interfaces are URLRequest and URLRequestContext. URLRequest, as the name suggests, represents a request for a URL. URLRequestContext contains all of the associated context needed to fulfill a URL request, such as cookies, the host resolver, the proxy resolver, the cache, and so on. Many URLRequest objects can share the same URLRequestContext. Most network objects are not thread-safe, although the disk cache can use a dedicated thread and several components (host resolution, certificate verification, etc.) may use unjoined worker threads. Since it mostly runs on a single network thread, blocking operations are not allowed on that thread; instead non-blocking operations with asynchronous callbacks (typically CompletionCallback) are used. The network stack also logs most operations to NetLog, which lets consumers record those operations in memory and render them in a user-friendly format for debugging.

The Chromium developers wrote their own network stack in order to:

  • code against a cross-platform abstraction;
  • get more control than higher-level system networking libraries (e.g. WinHTTP or WinINET) provide, which:
    ** avoids bugs that may exist in system libraries;
    ** opens up more opportunities for performance optimization.

2. Code structure

  • net/base - grab-bag of networking utilities, such as host resolution, cookies, network change detection, SSL.
  • net/disk_cache - cache for web resources.
  • net/ftp - FTP implementation. The code is largely based on the old HTTP implementation.
  • net/http - HTTP implementation.
  • net/ocsp - OCSP implementation used when the system libraries are not used or do not provide one; currently contains only an NSS-based implementation.
  • net/proxy - proxy (SOCKS and HTTP) configuration, resolution, PAC script fetching, etc.
  • net/quic - QUIC implementation.
  • net/socket - cross-platform implementations of TCP sockets, "SSL sockets", and socket pools.
  • net/socket_stream - socket streams for WebSockets.
  • net/spdy - HTTP/2 and SPDY implementation.
  • net/url_request - URLRequest, URLRequestContext, and URLRequestJob implementations.
  • net/websockets - WebSockets implementation.

3. Anatomy of a network request (focusing on HTTP)

 
[Figure: http_network.jpg - HTTP network request flow]
3.1 URLRequest
class URLRequest {
 public:
  // Construct a URLRequest for |url|, notifying events to |delegate|.
  URLRequest(const GURL& url, Delegate* delegate);
  
  // Specify the shared state
  void set_context(URLRequestContext* context);

  // Start the request. Notifications will be sent to |delegate|.
  void Start();

  // Read data from the request.
  bool Read(IOBuffer* buf, int max_bytes, int* bytes_read);
};

class URLRequest::Delegate {
 public:
  // Called after the response has started coming in or an error occurred.
  virtual void OnResponseStarted(...) = 0;

  // Called when Read() calls complete.
  virtual void OnReadCompleted(...) = 0;
};

When a URLRequest starts, the first thing it does is decide what type of URLRequestJob to create. The main job type is URLRequestHttpJob, which serves http:// requests. There are various other jobs, such as URLRequestFileJob (file://), URLRequestFtpJob (ftp://), URLRequestDataJob (data://), and so on. The network stack determines the appropriate job to satisfy the request, but it gives clients two ways to customize job creation: URLRequest::Interceptor and URLRequest::ProtocolFactory. These are fairly redundant, except that URLRequest::Interceptor has a broader interface. As the job progresses, it notifies the URLRequest, which notifies URLRequest::Delegate as needed.
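A minimal sketch of how a client drives the simplified interface above (illustrative only; the real API creates requests through URLRequestContext::CreateRequest and takes more arguments, such as a traffic annotation):

class Fetcher : public URLRequest::Delegate {
 public:
  void Fetch(URLRequestContext* context, const GURL& url) {
    request_ = std::make_unique<URLRequest>(url, this);
    request_->set_context(context);
    request_->Start();  // asynchronous; results arrive on this delegate
  }

  void OnResponseStarted(...) override {
    // Headers (or an error) are available; start reading the body.
    int bytes_read = 0;
    request_->Read(buffer_.get(), 4096, &bytes_read);
  }

  void OnReadCompleted(...) override {
    // Consume the data, then keep calling Read() until it reports EOF.
  }

 private:
  std::unique_ptr<URLRequest> request_;
  scoped_refptr<IOBuffer> buffer_;
};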

3.2 URLRequestHttpJob

URLRequestHttpJob first identifies the cookies to set for the HTTP request, which requires querying the CookieMonster in the request context. This can be asynchronous because the CookieMonster may be backed by an sqlite database. After that, it asks the request context's HttpTransactionFactory to create an HttpTransaction. Typically, the HttpCache is the designated HttpTransactionFactory. The HttpCache creates an HttpCache::Transaction to handle the HTTP request. The HttpCache::Transaction first checks the HttpCache (which checks the disk cache) to see whether a cache entry already exists. If it does, the response was already cached, or a network transaction already exists for this cache entry, so it can simply read from the entry. If the cache entry does not exist, it is created and the HttpCache's HttpNetworkLayer is asked to create an HttpNetworkTransaction to serve the request. The HttpNetworkTransaction is given an HttpNetworkSession, which contains the contextual state for executing HTTP requests; some of that state comes from the URLRequestContext.

3.3 HttpNetworkTransaction
class HttpNetworkSession {
 ...

 private:
  // Shim so we can mock out ClientSockets.
  ClientSocketFactory* const socket_factory_;
  // Pointer to URLRequestContext's HostResolver.
  HostResolver* const host_resolver_;
  // Reference to URLRequestContext's ProxyService
  scoped_refptr<ProxyService> proxy_service_;
  // Contains all the socket pools.
  ClientSocketPoolManager socket_pool_manager_;
  // Contains the active SpdySessions.
  scoped_ptr<SpdySessionPool> spdy_session_pool_;
  // Handles HttpStream creation.
  HttpStreamFactory http_stream_factory_;
};

HttpNetworkTransaction asks HttpStreamFactory to create an HttpStream. HttpStreamFactory returns an HttpStreamRequest, which is expected to handle all the logic of determining how to establish the connection and, once the connection is established, to wrap it with an HttpStream subclass that mediates talking directly to the network.

class HttpStream {
 public:
  virtual int SendRequest(...) = 0;
  virtual int ReadResponseHeaders(...) = 0;
  virtual int ReadResponseBody(...) = 0;
  ...
};

Currently there are only two main HttpStream subclasses: HttpBasicStream and SpdyHttpStream, although subclasses for HTTP pipelining were planned. HttpBasicStream assumes it is reading/writing directly to a socket; SpdyHttpStream reads and writes a SpdyStream. The network transaction calls methods on the stream and, when they complete, callbacks are invoked back into HttpCache::Transaction, which notifies URLRequestHttpJob and URLRequest as needed. For the HTTP path, generation and parsing of HTTP requests and responses are handled by HttpStreamParser; for the SPDY path, request and response parsing are handled by SpdyStream and SpdySession. Depending on the HTTP response, HttpNetworkTransaction may need to perform HTTP authentication, which may involve restarting the network transaction.

3.4 HttpStreamFactory

HttpStreamFactory first performs proxy resolution to determine whether a proxy is needed; the endpoint is set to the URL host or the proxy server. HttpStreamFactory then checks the SpdySessionPool to see whether an available SpdySession exists for this endpoint. If not, the stream factory requests a "socket" (TCP / proxy / SSL / etc.) from the appropriate pool. If the socket is an SSL socket, it checks whether NPN indicated a protocol (which may be SPDY); if so, the specified protocol is used. For SPDY, we check whether a SpdySession already exists and use it if so; otherwise we create a new SpdySession from this SSL socket, create a SpdyStream from the SpdySession, and wrap a SpdyHttpStream around it. For HTTP, we simply take the socket and wrap it in an HttpBasicStream.

3.4.1 Proxy resolution

HttpStreamFactory queries the ProxyService to return the ProxyInfo for the GURL. The proxy service first needs to check whether it has an up-to-date proxy configuration; if not, it uses the ProxyConfigService to query the system for the current proxy settings. If the proxy settings specify no proxy or a specific proxy, proxy resolution is simple (we return no proxy or that specific proxy). Otherwise, a PAC script must be run to determine the appropriate proxy (or lack thereof). If the PAC script is not available yet, the proxy settings will either indicate WPAD auto-detection or specify a custom PAC URL, and the script is fetched with ProxyScriptFetcher. Once the PAC script is available, it is executed via the ProxyResolver. Note that a shim MultiThreadedProxyResolver object dispatches PAC script execution to threads running ProxyResolverV8 instances, because PAC script execution can block on host resolution. To prevent one stalled PAC script execution from blocking other proxy resolutions, multiple PAC scripts are allowed to execute concurrently (caveat: V8 is not thread-safe, so a lock is held around the JavaScript bindings; when one V8 instance blocks on host resolution, it releases the lock so another V8 instance can execute a PAC script to resolve the proxy for a different URL).

3.4.2 Connection management

After the HttpStreamRequest has determined the appropriate endpoint (the URL endpoint or the proxy endpoint), it needs to establish a connection. It does so by identifying the appropriate "socket" pool and requesting a socket from it. Note that "socket" here basically means something that can be read from and written to in order to send data over the network. An SSL socket is built on top of a transport (TCP) socket and encrypts/decrypts the raw TCP data for the user. Different socket types also handle different connection setups: HTTP/SOCKS proxies, SSL handshakes, and so on. Socket pools are designed to be layered, so the various connection setups can be layered on top of other sockets. HttpStream can stay agnostic of the actual underlying socket type, since it only needs to read and write to the socket. The socket pools perform a variety of functions: they enforce per-proxy, per-host, and per-process connection limits, currently 32 sockets per proxy, 6 sockets per destination host, and 256 sockets per process (not implemented exactly, but good enough). The socket pools also abstract the socket request from its fulfillment, giving "late binding" of sockets: a socket request can be fulfilled by a newly connected socket or by an idle socket reused from a previous HTTP transaction.
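As a toy illustration (not Chromium code) of the two behaviors described above, a per-group connection limit plus "late binding", where a request is satisfied by an idle socket if one exists, by a new connection if under the limit, and otherwise has to wait:

#include <deque>
#include <map>
#include <string>

struct ToySocketPool {
  static constexpr int kMaxSocketsPerGroup = 6;  // mirrors the per-host limit

  std::map<std::string, int> active_count;          // group -> sockets handed out
  std::map<std::string, std::deque<int>> idle_fds;  // group -> reusable sockets

  // Returns a socket id for |group|, or -1 if the request must be queued.
  int RequestSocket(const std::string& group) {
    auto& idle = idle_fds[group];
    if (!idle.empty()) {               // late binding: reuse an idle socket
      int fd = idle.front();
      idle.pop_front();
      ++active_count[group];
      return fd;
    }
    if (active_count[group] >= kMaxSocketsPerGroup)
      return -1;                       // at the per-group limit: caller waits
    ++active_count[group];
    return next_fd_++;                 // stands in for a freshly connected socket
  }

  void ReleaseSocket(const std::string& group, int fd, bool reusable) {
    --active_count[group];
    if (reusable)
      idle_fds[group].push_back(fd);   // keep it for a future request
  }

  int next_fd_ = 3;
};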

3.4.3 Host resolution

Note that connection setup for a transport socket requires not only the transport (TCP) handshake but possibly host resolution first. HostResolverImpl uses getaddrinfo() to perform host resolution, which is a blocking call, so the resolver invokes it on unjoined worker threads. Host resolution usually involves DNS, but may involve non-DNS namespaces such as NetBIOS/WINS. Note that, as of when the original document was written, the number of concurrent host resolutions was capped at 8, with the intention of tuning that value. HostResolverImpl also contains a HostCache, which caches up to 1000 hostnames.

3.4.4 SSL/TLS

SSL sockets need to perform SSL connection setup as well as certificate verification. At the time, on all platforms, NSS's libssl handled the SSL connection logic, while platform-specific APIs were used for certificate verification. A certificate verification cache was being rolled out, which coalesces multiple verification requests for the same certificate into a single verification job and caches the result for a period of time.

SSLClientSocketNSS roughly follows this sequence of events (ignoring advanced features such as Snap Start or DNSSEC-based certificate verification):

  • Connect() is called. NSS's SSL options are set based on the configuration specified in SSLConfig or on preprocessor macros, and the handshake begins.
  • The handshake completes. Assuming no errors, the server's certificate is then verified with CertVerifier. Certificate verification can take some time, so CertVerifier uses the WorkerPool to actually call X509Certificate::Verify(), which is implemented with platform-specific APIs.

Note that Chromium carried its own NSS patches supporting advanced features not necessarily present in the system's NSS installation, such as NPN, False Start, Snap Start, OCSP stapling, etc.

Reference: https://www.chromium.org/developers/design-documents/network-stack


In current Chrome the network stack runs as a separate service (the Network Service), accessed via Mojo.

 
