From patchwork Mon Oct 16 05:29:12 2023
X-Patchwork-Submitter: Rahul Bhansali
X-Patchwork-Id: 132615
X-Patchwork-Delegate: jerinj@marvell.com
From: Rahul Bhansali <rbhansali@marvell.com>
To: dev@dpdk.org, Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Jerin Jacob
CC: Rahul Bhansali
Subject: [PATCH] net/cnxk: fix separate callback for Rx flush on CN10k
Date: Mon, 16 Oct 2023 10:59:12 +0530
Message-ID: <20231016052912.2565124-1-rbhansali@marvell.com>

On dev stop, the Rx packet flush routine is called, and it uses LMT lines to bulk free the pending meta buffers. However, LMT lines are not valid for non-EAL cores. As a fix, a separate callback for flushing Rx packets is added, which uses the NPA aura free API on individual meta packets.
Fixes: 4382a7ccf781 ("net/cnxk: support Rx security offload on cn10k")

Signed-off-by: Rahul Bhansali <rbhansali@marvell.com>
---
 drivers/net/cnxk/cn10k_rx.h        | 93 ++++++++++++++++++++++++++++++
 drivers/net/cnxk/cn10k_rx_select.c | 10 +++-
 2 files changed, 101 insertions(+), 2 deletions(-)

diff --git a/drivers/net/cnxk/cn10k_rx.h b/drivers/net/cnxk/cn10k_rx.h
index f5e935d383..7bb4c86d75 100644
--- a/drivers/net/cnxk/cn10k_rx.h
+++ b/drivers/net/cnxk/cn10k_rx.h
@@ -1098,6 +1098,99 @@ cn10k_nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
 	return nb_pkts;
 }
 
+static __rte_always_inline uint16_t
+cn10k_nix_flush_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts,
+			  const uint16_t flags)
+{
+	struct cn10k_eth_rxq *rxq = rx_queue;
+	const uint64_t mbuf_init = rxq->mbuf_initializer;
+	const void *lookup_mem = rxq->lookup_mem;
+	const uint64_t data_off = rxq->data_off;
+	struct rte_mempool *meta_pool = NULL;
+	const uint64_t wdata = rxq->wdata;
+	const uint32_t qmask = rxq->qmask;
+	const uintptr_t desc = rxq->desc;
+	uint64_t lbase = rxq->lmt_base;
+	uint16_t packets = 0, nb_pkts;
+	uint16_t lmt_id __rte_unused;
+	uint32_t head = rxq->head;
+	struct nix_cqe_hdr_s *cq;
+	struct rte_mbuf *mbuf;
+	uint64_t sa_base = 0;
+	uintptr_t cpth = 0;
+	uint8_t loff = 0;
+	uint64_t laddr;
+
+	nb_pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
+
+	if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+		sa_base = rxq->sa_base;
+		sa_base &= ~(ROC_NIX_INL_SA_BASE_ALIGN - 1);
+		ROC_LMT_BASE_ID_GET(lbase, lmt_id);
+		laddr = lbase;
+		laddr += 8;
+		if (flags & NIX_RX_REAS_F)
+			meta_pool = (struct rte_mempool *)rxq->meta_pool;
+	}
+
+	while (packets < nb_pkts) {
+		/* Prefetch N desc ahead */
+		rte_prefetch_non_temporal((void *)(desc + (CQE_SZ((head + 2) & qmask))));
+		cq = (struct nix_cqe_hdr_s *)(desc + CQE_SZ(head));
+
+		mbuf = nix_get_mbuf_from_cqe(cq, data_off);
+
+		/* Mark mempool obj as "get" as it is alloc'ed by NIX */
+		RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
+
+		/* Translate meta to mbuf */
+		if (flags & NIX_RX_OFFLOAD_SECURITY_F) {
+			const uint64_t cq_w1 = *((const uint64_t *)cq + 1);
+			const uint64_t cq_w5 = *((const uint64_t *)cq + 5);
+			struct rte_mbuf *meta_buf = mbuf;
+
+			cpth = ((uintptr_t)meta_buf + (uint16_t)data_off);
+
+			/* Update mempool pointer for full mode pkt */
+			if ((flags & NIX_RX_REAS_F) && (cq_w1 & BIT(11)) &&
+			    !((*(uint64_t *)cpth) & BIT(15)))
+				meta_buf->pool = meta_pool;
+
+			mbuf = nix_sec_meta_to_mbuf_sc(cq_w1, cq_w5, sa_base, laddr, &loff,
+						       meta_buf, data_off, flags, mbuf_init);
+			/* Free Meta mbuf, not use LMT line for flush as this will be called
+			 * from non-datapath i.e. dev_stop case.
+			 */
+			if (loff) {
+				roc_npa_aura_op_free(meta_buf->pool->pool_id, 0,
+						     (uint64_t)meta_buf);
+				loff = 0;
+			}
+		}
+
+		cn10k_nix_cqe_to_mbuf(cq, cq->tag, mbuf, lookup_mem, mbuf_init,
+				      cpth, sa_base, flags);
+		cn10k_nix_mbuf_to_tstamp(mbuf, rxq->tstamp,
+					 (flags & NIX_RX_OFFLOAD_TSTAMP_F),
+					 (uint64_t *)((uint8_t *)mbuf + data_off));
+		rx_pkts[packets++] = mbuf;
+		roc_prefetch_store_keep(mbuf);
+		head++;
+		head &= qmask;
+	}
+
+	rxq->head = head;
+	rxq->available -= nb_pkts;
+
+	/* Free all the CQs that we've processed */
+	plt_write64((wdata | nb_pkts), rxq->cq_door);
+
+	if (flags & NIX_RX_OFFLOAD_SECURITY_F)
+		rte_io_wmb();
+
+	return nb_pkts;
+}
+
 #if defined(RTE_ARCH_ARM64)
 
 static __rte_always_inline uint64_t
diff --git a/drivers/net/cnxk/cn10k_rx_select.c b/drivers/net/cnxk/cn10k_rx_select.c
index 1d44f2924e..6a5c34287e 100644
--- a/drivers/net/cnxk/cn10k_rx_select.c
+++ b/drivers/net/cnxk/cn10k_rx_select.c
@@ -22,6 +22,13 @@ pick_rx_func(struct rte_eth_dev *eth_dev,
 	rte_atomic_thread_fence(__ATOMIC_RELEASE);
 }
 
+static uint16_t __rte_noinline __rte_hot __rte_unused
+cn10k_nix_flush_rx(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t pkts)
+{
+	const uint16_t flags = NIX_RX_MULTI_SEG_F | NIX_RX_REAS_F | NIX_RX_OFFLOAD_SECURITY_F;
+	return cn10k_nix_flush_recv_pkts(rx_queue, rx_pkts, pkts, flags);
+}
+
 void
 cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 {
@@ -82,8 +89,7 @@ cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 
 	/* Copy multi seg version with security for tear down sequence */
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
-		dev->rx_pkt_burst_no_offload =
-			nix_eth_rx_burst_mseg_reas[NIX_RX_OFFLOAD_SECURITY_F];
+		dev->rx_pkt_burst_no_offload = cn10k_nix_flush_rx;
 
 	if (dev->scalar_ena) {
 		if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {