From patchwork Fri Sep 25 06:45:36 2020
X-Patchwork-Submitter: Li RongQing
X-Patchwork-Id: 78791
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Li RongQing
To: dev@dpdk.org, ciara.loftus@intel.com
Date: Fri, 25 Sep 2020 14:45:36 +0800
Message-Id: <1601016336-12233-1-git-send-email-lirongqing@baidu.com>
X-Mailer: git-send-email 1.7.1
Subject: [dpdk-dev] [PATCH v2] net/af_xdp: avoid unnecessary allocation and
 free of mbufs in rx path

When receiving packets, a full batch worth of mbufs is allocated up
front; if the hardware delivers fewer packets than the batch size, the
surplus mbufs have to be freed again, which hurts performance.

Optimize the rx path by allocating mbufs based on the result of
xsk_ring_cons__peek(), so that the redundant allocation and free of
mbufs on each receive is avoided.

v2: roll back rx cached_cons if the mbufs fail to be allocated

Signed-off-by: Li RongQing
Signed-off-by: Dongsheng Rong
---
 drivers/net/af_xdp/rte_eth_af_xdp.c | 67 ++++++++++++++++---------------------
 1 file changed, 29 insertions(+), 38 deletions(-)

diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 01f462b46..e04fa43f6 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -251,28 +251,29 @@ af_xdp_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	struct xsk_umem_info *umem = rxq->umem;
 	uint32_t idx_rx = 0;
 	unsigned long rx_bytes = 0;
-	int rcvd, i;
+	int i;
 	struct rte_mbuf *fq_bufs[ETH_AF_XDP_RX_BATCH_SIZE];
 
-	/* allocate bufs for fill queue replenishment after rx */
-	if (rte_pktmbuf_alloc_bulk(umem->mb_pool, fq_bufs, nb_pkts)) {
-		AF_XDP_LOG(DEBUG,
-			"Failed to get enough buffers for fq.\n");
-		return 0;
-	}
+	nb_pkts = xsk_ring_cons__peek(rx, nb_pkts, &idx_rx);
 
-	rcvd = xsk_ring_cons__peek(rx, nb_pkts, &idx_rx);
-
-	if (rcvd == 0) {
+	if (nb_pkts == 0) {
 #if defined(XDP_USE_NEED_WAKEUP)
 		if (xsk_ring_prod__needs_wakeup(fq))
 			(void)poll(rxq->fds, 1, 1000);
 #endif
-		goto out;
+		return 0;
+	}
+
+	/* allocate bufs for fill queue replenishment after rx */
+	if (rte_pktmbuf_alloc_bulk(umem->mb_pool, fq_bufs, nb_pkts)) {
+		AF_XDP_LOG(DEBUG,
+			"Failed to get enough buffers for fq.\n");
+		rx->cached_cons -= nb_pkts;
+		return 0;
 	}
 
-	for (i = 0; i < rcvd; i++) {
+	for (i = 0; i < nb_pkts; i++) {
 		const struct xdp_desc *desc;
 		uint64_t addr;
 		uint32_t len;
@@ -297,20 +298,14 @@ af_xdp_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		rx_bytes += len;
 	}
 
-	xsk_ring_cons__release(rx, rcvd);
-
-	(void)reserve_fill_queue(umem, rcvd, fq_bufs, fq);
+	xsk_ring_cons__release(rx, nb_pkts);
+	(void)reserve_fill_queue(umem, nb_pkts, fq_bufs, fq);
 
 	/* statistics */
-	rxq->stats.rx_pkts += rcvd;
+	rxq->stats.rx_pkts += nb_pkts;
 	rxq->stats.rx_bytes += rx_bytes;
 
-out:
-	if (rcvd != nb_pkts)
-		rte_mempool_put_bulk(umem->mb_pool, (void **)&fq_bufs[rcvd],
-				     nb_pkts - rcvd);
-
-	return rcvd;
+	return nb_pkts;
 }
 #else
 static uint16_t
@@ -322,7 +317,7 @@ af_xdp_rx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	struct xsk_ring_prod *fq = &rxq->fq;
 	uint32_t idx_rx = 0;
 	unsigned long rx_bytes = 0;
-	int rcvd, i;
+	int i;
 	uint32_t free_thresh = fq->size >> 1;
 	struct rte_mbuf *mbufs[ETH_AF_XDP_RX_BATCH_SIZE];
 
@@ -330,20 +325,21 @@ af_xdp_rx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		(void)reserve_fill_queue(umem, ETH_AF_XDP_RX_BATCH_SIZE,
 					 NULL, fq);
 
-	if (unlikely(rte_pktmbuf_alloc_bulk(rxq->mb_pool, mbufs, nb_pkts) != 0))
-		return 0;
-
-	rcvd = xsk_ring_cons__peek(rx, nb_pkts, &idx_rx);
-	if (rcvd == 0) {
+	nb_pkts = xsk_ring_cons__peek(rx, nb_pkts, &idx_rx);
+	if (nb_pkts == 0) {
 #if defined(XDP_USE_NEED_WAKEUP)
 		if (xsk_ring_prod__needs_wakeup(fq))
 			(void)poll(rxq->fds, 1, 1000);
 #endif
+		return 0;
+	}
 
-		goto out;
+	if (unlikely(rte_pktmbuf_alloc_bulk(rxq->mb_pool, mbufs, nb_pkts))) {
+		rx->cached_cons -= nb_pkts;
+		return 0;
 	}
 
-	for (i = 0; i < rcvd; i++) {
+	for (i = 0; i < nb_pkts; i++) {
 		const struct xdp_desc *desc;
 		uint64_t addr;
 		uint32_t len;
@@ -362,18 +358,13 @@ af_xdp_rx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		bufs[i] = mbufs[i];
 	}
 
-	xsk_ring_cons__release(rx, rcvd);
+	xsk_ring_cons__release(rx, nb_pkts);
 
 	/* statistics */
-	rxq->stats.rx_pkts += rcvd;
+	rxq->stats.rx_pkts += nb_pkts;
 	rxq->stats.rx_bytes += rx_bytes;
 
-out:
-	if (rcvd != nb_pkts)
-		rte_mempool_put_bulk(rxq->mb_pool, (void **)&mbufs[rcvd],
-				     nb_pkts - rcvd);
-
-	return rcvd;
+	return nb_pkts;
 }
 #endif
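
A note on the pattern the patch relies on, for readers unfamiliar with
the libbpf xsk ring internals: xsk_ring_cons__peek() only advances the
ring's software-cached consumer index (cached_cons); the kernel-visible
consumer pointer does not move until xsk_ring_cons__release(). That is
why subtracting the peeked count from rx->cached_cons safely "un-peeks"
the descriptors when the mbuf allocation fails. A minimal sketch of the
new flow follows; it is illustrative only -- rx_burst_sketch and pool
are hypothetical names, and the per-descriptor copy loop is elided.

	#include <stdint.h>
	#include <bpf/xsk.h>	/* xsk_ring_cons__peek/__release */
	#include <rte_mbuf.h>	/* rte_pktmbuf_alloc_bulk */

	static uint16_t
	rx_burst_sketch(struct xsk_ring_cons *rx, struct rte_mempool *pool,
			struct rte_mbuf **mbufs, uint16_t nb_pkts)
	{
		uint32_t idx_rx = 0;

		/* Ask the ring how many packets actually arrived; this
		 * only advances rx->cached_cons, nothing is visible to
		 * the kernel yet.
		 */
		nb_pkts = xsk_ring_cons__peek(rx, nb_pkts, &idx_rx);
		if (nb_pkts == 0)
			return 0;

		/* Allocate exactly that many mbufs: no surplus to free. */
		if (rte_pktmbuf_alloc_bulk(pool, mbufs, nb_pkts) != 0) {
			/* Roll back the peek; release() was never called,
			 * so restoring cached_cons un-reserves the
			 * descriptors.
			 */
			rx->cached_cons -= nb_pkts;
			return 0;
		}

		/* ... copy the nb_pkts descriptors at idx_rx into mbufs ... */

		/* Only now make the consumption visible to the kernel. */
		xsk_ring_cons__release(rx, nb_pkts);
		return nb_pkts;
	}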