From patchwork Wed Sep 20 10:12:22 2023
X-Patchwork-Submitter: Rahul Bhansali <rbhansali@marvell.com>
X-Patchwork-Id: 131710
X-Patchwork-Delegate: jerinj@marvell.com
From: Rahul Bhansali <rbhansali@marvell.com>
To: dev@dpdk.org, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori,
 Satha Rao
Cc: Rahul Bhansali
Subject: [PATCH 2/2] net/cnxk: separate callback for Rx flush on CN10k
Date: Wed, 20 Sep 2023 15:42:22 +0530
Message-ID: <20230920101222.767408-2-rbhansali@marvell.com>
In-Reply-To: <20230920101222.767408-1-rbhansali@marvell.com>
References: <20230920101222.767408-1-rbhansali@marvell.com>
List-Id: DPDK patches and discussions

In the dev stop case, the Rx packet flush callback uses LMT lines to
bulk-free the meta buffers. If dev stop is called from a non-EAL core,
the LMT address will not be valid.
To avoid this, a separate callback for flushing Rx packets is added; it
frees each individual meta packet with the NPA aura free API instead.

Signed-off-by: Rahul Bhansali <rbhansali@marvell.com>
---
 drivers/net/cnxk/cn10k_rx.h        | 93 ++++++++++++++++++++++++++++++
 drivers/net/cnxk/cn10k_rx_select.c | 10 +++-
 2 files changed, 101 insertions(+), 2 deletions(-)

diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index 41d11349fd..1d7c5215a7 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -1007,6 +1007,99 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
 	return nb_pkts;
 }
 
+static __rte_always_inline uint16_t
+cn10k_nix_flush_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
+			  const uint16_t flags)
+{
+	struct cn10k_eth_rxq *rxq = rx_queue;
+	const uint64_t mbuf_init = rxq->mbuf_initializer;
+	const void *lookup_mem = rxq->lookup_mem;
+	const uint64_t data_off = rxq->data_off;
+	struct rte_mempool *meta_pool = NULL;
+	const uint64_t wdata = rxq->wdata;
+	const uint32_t qmask = rxq->qmask;
+	const uintptr_t desc = rxq->desc;
+	uint64_t lbase = rxq->lmt_base;
+	uint16_t packets = 0, nb_pkts;
+	uint16_t lmt_id __rte_unused;
+	uint32_t head = rxq->head;
+	struct nix_cqe_hdr_s *cq;
+	struct rte_mbuf *mbuf;
+	uint64_t sa_base = 0;
+	uintptr_t cpth = 0;
+	uint8_t loff = 0;
+	uint64_t laddr;
+
+	nb_pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
+
+	if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+		sa_base = rxq->sa_base;
+		sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1);
+		ROC_LMT_BASE_ID_GET(lbase, lmt_id);
+		laddr = lbase;
+		laddr += 8;
+		if (flags & NIX_RX_REAS_F)
+			meta_pool = (struct rte_mempool *)rxq->meta_pool;
+	}
+
+	while (packets < nb_pkts) {
+		/* Prefetch N desc ahead */
+		rte_prefetch_non_temporal((void *)(desc + (CQE_SZ((head + 2) & qmask))));
+		cq = (struct nix_cqe_hdr_s *)(desc + CQE_SZ(head));
+
+		mbuf = nix_get_mbuf_from_cqe(cq, data_off);
+
+		/* Mark mempool obj as "get" as it is alloc'ed by NIX */
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
+
+		/* Translate meta to mbuf */
+		if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+			const uint64_t cq_w1 = *((const uint64_t *)cq + 1);
+			const uint64_t cq_w5 = *((const uint64_t *)cq + 5);
+			struct rte_mbuf *meta_buf = mbuf;
+
+			cpth = ((uintptr_t)meta_buf + (uint16_t)data_off);
+
+			/* Update mempool pointer for full mode pkt */
+			if ((flags & NIX_RX_REAS_F) && (cq_w1 & BIT(11)) &&
+			    !((*(uint64_t *)cpth) & BIT(15)))
+				meta_buf->pool = meta_pool;
+
+			mbuf = nix_sec_meta_to_mbuf_sc(cq_w1, cq_w5, sa_base, laddr, &loff,
+						       meta_buf, data_off, flags, mbuf_init);
+			/* Free the meta mbuf; do not use an LMT line for the flush,
+			 * as this is called from the non-datapath (dev_stop) case.
+			 */
+			if (loff) {
+				roc_npa_aura_op_free(meta_buf->pool->pool_id, 0,
+						     (uint64_t)meta_buf);
+				loff = 0;
+			}
+		}
+
+		cn10k_nix_cqe_to_mbuf(cq, cq->tag, mbuf, lookup_mem, mbuf_init,
+				      cpth, sa_base, flags);
+		cn10k_nix_mbuf_to_tstamp(mbuf, rxq->tstamp,
+					 (flags & NIX_RX_OFFLOAD_TSTAMP_F),
+					 (uint64_t *)((uint8_t *)mbuf + data_off));
+		rx_pkts[packets++] = mbuf;
+		roc_prefetch_store_keep(mbuf);
+		head++;
+		head &= qmask;
+	}
+
+	rxq->head = head;
+	rxq->available -= nb_pkts;
+
+	/* Free all the CQs that we've processed */
+	plt_write64((wdata | nb_pkts), rxq->cq_door);
+
+	if (flags & NIX_RX_OFFLOAD_SECURITY_F)
+		rte_io_wmb();
+
+	return nb_pkts;
+}
+
 #if defined(RTE_ARCH_ARM64)
 
 static __rte_always_inline uint64_t

diff --git a/drivers/net/cnxk/cn10k_rx_select.c b/drivers/net/cnxk/cn10k_rx_select.c
index 1d44f2924e..6a5c34287e 100644
--- a/drivers/net/cnxk/cn10k_rx_select.c
+++ b/drivers/net/cnxk/cn10k_rx_select.c
@@ -22,6 +22,13 @@ pick_rx_func(struct rte_eth_dev *eth_dev,
 	rte_atomic_thread_fence(__ATOMIC_RELEASE);
 }
 
+static uint16_t __rte_noinline __rte_hot __rte_unused
+cn10k_nix_flush_rx(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)
+{
+	const uint16_t flags = NIX_RX_MULTI_SEG_F | NIX_RX_REAS_F | NIX_RX_OFFLOAD_SECURITY_F;
+	return cn10k_nix_flush_recv_pkts(rx_queue, rx_pkts, pkts, flags);
+}
+
 void
 cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 {
@@ -82,8 +89,7 @@ cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 
 	/* Copy multi seg version with security for tear down sequence */
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
-		dev->rx_pkt_burst_no_offload =
-			nix_eth_rx_burst_mseg_reas[NIX_RX_OFFLOAD_SECURITY_F];
+		dev->rx_pkt_burst_no_offload = cn10k_nix_flush_rx;
 
 	if (dev->scalar_ena) {
 		if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
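
For context, below is a minimal sketch of how the new flush callback can be
exercised at teardown. It is illustrative only and not part of the patch:
drain_rxq_on_stop and FLUSH_BURST are hypothetical names and the driver's
real dev_stop path differs; only cn10k_nix_flush_rx, rx_pkt_burst_no_offload,
roc_npa_aura_op_free(), and rte_pktmbuf_free_bulk() come from the patch and
DPDK proper.

#include <rte_mbuf.h>

#include "cnxk_ethdev.h"	/* struct cnxk_eth_dev (rx_pkt_burst_no_offload) */

#define FLUSH_BURST 32	/* hypothetical drain burst size */

/* Hypothetical helper: drain one Rx queue through the no-offload burst
 * pointer, which this patch points at cn10k_nix_flush_rx for the primary
 * process. Because cn10k_nix_flush_recv_pkts() frees each security meta
 * buffer with roc_npa_aura_op_free() instead of batching frees over an LMT
 * line, this loop is safe even on a non-EAL control thread that has no
 * valid LMT address.
 */
static void
drain_rxq_on_stop(struct cnxk_eth_dev *dev, void *rxq)
{
	struct rte_mbuf *pkts[FLUSH_BURST];
	uint16_t nb_rx;

	do {
		nb_rx = dev->rx_pkt_burst_no_offload(rxq, pkts, FLUSH_BURST);
		rte_pktmbuf_free_bulk(pkts, nb_rx);
	} while (nb_rx != 0);
}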