The run_in_rx_core workflow in trex-core
The main job of run_in_rx_core is to receive data from the network nodes bound to this port and forward it to DPDK for transmission. The workflow is shown in the diagram below:
```mermaid
flowchart LR
  subgraph CRxCore
    start --> _do_start
    _do_start --> cold_state_loop
    _do_start --> hot_state_loop
    cold_state_loop --> work_tick
    hot_state_loop --> work_tick
    work_tick --> process_all_pending_pkts
    process_all_pending_pkts --> handle_msg_packets
  end
```
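The cold/hot split above can be sketched as a small state machine: the RX core stays in a hot busy-poll loop while packets keep arriving, and falls back to a cold loop (which sleeps between polls) after a stretch of idle ticks. This is a minimal sketch, not the actual trex-core implementation; the `work_tick` callback, the idle threshold, and the class name are assumptions for illustration:

```cpp
#include <cstdint>
#include <functional>
#include <utility>

// Minimal sketch of the hot/cold RX loop, assuming a work_tick() callback
// that returns the number of packets it processed this tick.
class RxLoopSketch {
public:
    explicit RxLoopSketch(std::function<uint32_t()> work_tick)
        : m_work_tick(std::move(work_tick)) {}

    // Run a bounded number of ticks; the real core loops until stopped.
    void run(uint32_t ticks) {
        for (uint32_t i = 0; i < ticks; i++) {
            uint32_t pkts = m_work_tick();
            if (pkts) {
                m_hot = true;          // traffic seen: stay in the hot loop
                m_idle_ticks = 0;
            } else if (++m_idle_ticks > IDLE_THRESHOLD) {
                m_hot = false;         // quiet for a while: drop to the cold loop
                // the cold loop would sleep here between polls
            }
        }
    }

    bool is_hot() const { return m_hot; }

private:
    static constexpr uint32_t IDLE_THRESHOLD = 10; // assumed value
    std::function<uint32_t()> m_work_tick;
    uint32_t m_idle_ticks = 0;
    bool m_hot = false;
};
```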
- The workflow of process_all_pending_pkts
The process_all_pending_pkts function handles receiving and transmitting data-plane packets:
```cpp
for (auto &mngr_pair : m_rx_port_mngr_map) {
    total_pkts += mngr_pair.second->process_all_pending_pkts(flush_rx);
}
```
```mermaid
flowchart TD
  subgraph CRxCore
    id1[[process_all_pending_pkts]]
  end
  subgraph RXPortManager
    id2[[process_all_pending_pkts]]
    id3[["cnt_tx=handle_tx"]]
    id6[["tx_pkt(string&)"]]
    id7[["tx_pkt(rte_mbuf_t*)"]]
    id12[[handle_pkt]]
  end
  subgraph CStackLinuxBased
    id4[handle_tx]
    id15[[handle_pkt]]
  end
  subgraph CNamespacedIfNode
    id16[[filter_and_send]]
  end
  subgraph RXFeatureAPI
    id5[[tx_pkt]]
  end
  subgraph CZmqPacketWriter
    id14[["write_pkt: forward received packets to the emu plugin"]]
  end

  id1 -- "for (auto &mngr_pair : m_rx_port_mngr_map)" --> id2
  id2 --> id3
  id3 ---|"internal flow of handle_tx"| id4
  id3 -- "for (int j = 0; j < cnt_rx; j++)" --> id12
  id12 -- "is_feature_set(EZMQ) && is_emu_filter(rte_mbuf_t *m)" --> id14
  id12 -- "is_feature_set(STACK)" --> id15
  id15 -- "forward received packets to the veth interface of the matching namespace" --> id16
  id4 --> id5
  id5 --> id6
  id6 --> id7
```
Iterate over the RXPortManager objects held in the m_rx_port_mngr_map container and call process_all_pending_pkts on each one; every physical port has its own RXPortManager instance.
```cpp
int RXPortManager::process_all_pending_pkts(bool flush_rx) {
    // ...
    uint16_t cnt_tx = handle_tx();
    // ...
    for (int j = 0; j < cnt_rx; j++) {
        rte_mbuf_t *m = rx_pkts[j];
        if (!flush_rx) {
            handle_pkt(m);
        } else {
            rte_pktmbuf_free(m);
        }
    }
    // ...
}
```
- handle_tx
handle_tx calls CStackLinuxBased::handle_tx.
That function polls m_epoll_fd; when data arrives, it looks up the node matching the packet's source MAC address and forwards the received data out through m_api:
```cpp
int event_count = epoll_wait(m_epoll_fd, events, MAX_EVENTS, 0);
if (event_count) {
    uint16_t pkt_len = recv(m_rw_buf...);
    auto iter_pair = m_nodes.find(src_mac);
    CNamespacedIfNode *node = (CNamespacedIfNode*)iter_pair->second;
    string &vlans_insert_to_pkt = node->get_vlans_insert_to_pkt();
    if ( vlans_insert_to_pkt.size() ) {
        // splice the configured VLAN tag bytes in after the
        // 12-byte dst/src MAC header
        read_buf_str.assign(m_rw_buf, 12);
        read_buf_str += vlans_insert_to_pkt;
        read_buf_str.append(m_rw_buf + 12, pkt_len - 12);
    } else {
        read_buf_str.assign(m_rw_buf, pkt_len);
    }
    m_api->tx_pkt(read_buf_str);
}
```
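The VLAN branch above rebuilds the frame with the node's tag bytes spliced in right after the 12-byte destination/source MAC header. A standalone sketch of that byte manipulation (the function name and the test frame contents are made up for illustration):

```cpp
#include <cstddef>
#include <string>

// Rebuild a frame with the node's VLAN tag bytes inserted after the
// 12-byte dst/src MAC header, mirroring the branch in handle_tx.
std::string insert_vlans(const char *buf, size_t pkt_len,
                         const std::string &vlans_insert_to_pkt) {
    std::string out;
    if (!vlans_insert_to_pkt.empty() && pkt_len > 12) {
        out.assign(buf, 12);                 // dst MAC + src MAC
        out += vlans_insert_to_pkt;          // e.g. a 4-byte 802.1Q tag
        out.append(buf + 12, pkt_len - 12);  // EtherType + payload
    } else {
        out.assign(buf, pkt_len);            // no VLAN configured: copy as-is
    }
    return out;
}
```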
m_api is an instance of the RXFeatureAPI class.
Whenever a CNamespacedIfNode (either a CLinuxIfNode or a CSharedNSIfNode) is added, its fd is registered with m_epoll_fd so that data arriving on that node's veth network interface is monitored:
```cpp
CNamespacedIfNode *node;
node->m_event.events = EPOLLIN;
node->m_event.data.fd = node->get_pair_id();
// thread safe
epoll_ctl(m_epoll_fd, EPOLL_CTL_ADD, node->get_pair_id(), &node->m_event);
```
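The registration pattern can be shown end to end with a self-contained example, using a pipe in place of the node's veth pair fd (the function name and fd setup here are illustrative, not trex-core code):

```cpp
#include <cstring>
#include <sys/epoll.h>
#include <unistd.h>

// Register a pipe's read end with epoll (as CStackLinuxBased does for each
// node's veth fd), write to it, and check that epoll_wait reports it ready.
int demo_epoll_registration() {
    int fds[2];
    if (pipe(fds) != 0) return -1;

    int epfd = epoll_create1(0);
    if (epfd < 0) return -1;

    struct epoll_event ev;
    memset(&ev, 0, sizeof(ev));
    ev.events = EPOLLIN;      // same interest as node->m_event.events
    ev.data.fd = fds[0];      // analogous to node->get_pair_id()
    if (epoll_ctl(epfd, EPOLL_CTL_ADD, fds[0], &ev) != 0) return -1;

    write(fds[1], "x", 1);    // simulate traffic arriving on the veth fd

    struct epoll_event events[4];
    // timeout 0: a non-blocking poll, like the one in handle_tx
    int event_count = epoll_wait(epfd, events, 4, 0);

    close(fds[0]);
    close(fds[1]);
    close(epfd);
    return event_count;       // number of ready fds
}
```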