From patchwork Thu Nov 17 07:25:56 2022
X-Patchwork-Submitter: Ashwin Sekhar T K
X-Patchwork-Id: 119924
X-Patchwork-Delegate: jerinj@marvell.com
From: Ashwin Sekhar T K
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Subject: [PATCH v1 1/3] net/cnxk: rework no-fast-free offload handling
Date: Thu, 17 Nov 2022 12:55:56 +0530
Message-ID: <20221117072558.3582292-1-asekhar@marvell.com>

Add a separate routine to handle the no-fast-free offload in the vector
Tx path for multi-segment packets.
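The rule this rework consolidates is the per-segment invert-DF decision.
A minimal sketch of it, distilled from the duplicated blocks removed
below (sg_word_set_invert_df() is a hypothetical name for illustration,
not driver API):

#include <stdint.h>

/* Fold one segment's free decision into an SG word. prefree is the
 * result of cnxk_nix_prefree_seg() for that segment (1 = software owns
 * the free), seg is the segment's slot within the SG subdescriptor.
 */
static inline uint64_t
sg_word_set_invert_df(uint64_t sg_u, uint64_t prefree, int seg)
{
        /* Bit (55 + seg) of the SG word is the invert-DF bit; setting
         * it tells the NIX hardware not to free that segment's buffer.
         */
        return sg_u | (prefree << (55 + seg));
}

The new cn10k_nix_prepare_mseg_vec_noff() below stores the vector
command words to the LMT line and then reuses the scalar
cn10k_nix_prepare_mseg(), so this rule lives in a single place instead
of being repeated in each vector helper.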
Signed-off-by: Ashwin Sekhar T K
---
 drivers/net/cnxk/cn10k_tx.h | 124 +++++++++++++++++-------------------
 1 file changed, 59 insertions(+), 65 deletions(-)

diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index 815cd2ff1f..a4c578354c 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -956,6 +956,14 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 	rte_io_wmb();
 #endif
 	m->next = NULL;
+
+	/* Quickly handle single segmented packets. With this if-condition
+	 * compiler will completely optimize out the below do-while loop
+	 * from the Tx handler when NIX_TX_MULTI_SEG_F offload is not set.
+	 */
+	if (!(flags & NIX_TX_MULTI_SEG_F))
+		goto done;
+
 	m = m_next;
 	if (!m)
 		goto done;
@@ -1360,6 +1368,30 @@ cn10k_nix_prepare_tso(struct rte_mbuf *m, union nix_send_hdr_w1_u *w1,
 	}
 }
 
+static __rte_always_inline uint16_t
+cn10k_nix_prepare_mseg_vec_noff(struct rte_mbuf *m, uint64_t *cmd,
+				uint64x2_t *cmd0, uint64x2_t *cmd1,
+				uint64x2_t *cmd2, uint64x2_t *cmd3,
+				const uint32_t flags)
+{
+	uint16_t segdw;
+
+	vst1q_u64(cmd, *cmd0); /* Send hdr */
+	if (flags & NIX_TX_NEED_EXT_HDR) {
+		vst1q_u64(cmd + 2, *cmd2); /* ext hdr */
+		vst1q_u64(cmd + 4, *cmd1); /* sg */
+	} else {
+		vst1q_u64(cmd + 2, *cmd1); /* sg */
+	}
+
+	segdw = cn10k_nix_prepare_mseg(m, cmd, flags);
+
+	if (flags & NIX_TX_OFFLOAD_TSTAMP_F)
+		vst1q_u64(cmd + segdw * 2 - 2, *cmd3);
+
+	return segdw;
+}
+
 static __rte_always_inline void
 cn10k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
 				union nix_send_hdr_w0_u *sh,
@@ -1389,17 +1421,6 @@ cn10k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
 
 	nb_segs = m->nb_segs - 1;
 	m_next = m->next;
-
-	/* Set invert df if buffer is not to be freed by H/W */
-	if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
-		sg_u |= (cnxk_nix_prefree_seg(m) << 55);
-	/* Mark mempool object as "put" since it is freed by NIX */
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	if (!(sg_u & (1ULL << 55)))
-		RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
-	rte_io_wmb();
-#endif
 	m->next = NULL;
 	m = m_next;
 	/* Fill mbuf segments */
@@ -1409,16 +1430,6 @@
 		len -= dlen;
 		sg_u = sg_u | ((uint64_t)dlen << (i << 4));
 		*slist = rte_mbuf_data_iova(m);
-		/* Set invert df if buffer is not to be freed by H/W */
-		if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
-			sg_u |= (cnxk_nix_prefree_seg(m) << (i + 55));
-		/* Mark mempool object as "put" since it is freed by NIX
-		 */
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-		if (!(sg_u & (1ULL << (i + 55))))
-			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
-		rte_io_wmb();
-#endif
 		slist++;
 		i++;
 		nb_segs--;
@@ -1456,21 +1467,8 @@ cn10k_nix_prepare_mseg_vec(struct rte_mbuf *m, uint64_t *cmd, uint64x2_t *cmd0,
 	union nix_send_hdr_w0_u sh;
 	union nix_send_sg_s sg;
 
-	if (m->nb_segs == 1) {
-		if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
-			sg.u = vgetq_lane_u64(cmd1[0], 0);
-			sg.u |= (cnxk_nix_prefree_seg(m) << 55);
-			cmd1[0] = vsetq_lane_u64(sg.u, cmd1[0], 0);
-		}
-
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-		sg.u = vgetq_lane_u64(cmd1[0], 0);
-		if (!(sg.u & (1ULL << 55)))
-			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
-		rte_io_wmb();
-#endif
+	if (m->nb_segs == 1)
 		return;
-	}
 
 	sh.u = vgetq_lane_u64(cmd0[0], 0);
 	sg.u = vgetq_lane_u64(cmd1[0], 0);
@@ -1491,16 +1489,32 @@ cn10k_nix_prep_lmt_mseg_vector(struct rte_mbuf **mbufs, uint64x2_t *cmd0,
 			       uint64_t *lmt_addr, __uint128_t *data128,
 			       uint8_t *shift, const uint16_t flags)
 {
-	uint8_t j, off, lmt_used;
+	uint8_t j, off, lmt_used = 0;
+
+	if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
+		off = 0;
+		for (j = 0; j < NIX_DESCS_PER_LOOP; j++) {
+			if (off + segdw[j] > 8) {
+				*data128 |= ((__uint128_t)off - 1) << *shift;
+				*shift += 3;
+				lmt_used++;
+				lmt_addr += 16;
+				off = 0;
+			}
+			off += cn10k_nix_prepare_mseg_vec_noff(mbufs[j],
+					lmt_addr + off * 2, &cmd0[j], &cmd1[j],
+					&cmd2[j], &cmd3[j], flags);
+		}
+		*data128 |= ((__uint128_t)off - 1) << *shift;
+		*shift += 3;
+		lmt_used++;
+		return lmt_used;
+	}
 
 	if (!(flags & NIX_TX_NEED_EXT_HDR) &&
 	    !(flags & NIX_TX_OFFLOAD_TSTAMP_F)) {
 		/* No segments in 4 consecutive packets. */
 		if ((segdw[0] + segdw[1] + segdw[2] + segdw[3]) <= 8) {
-			for (j = 0; j < NIX_DESCS_PER_LOOP; j++)
-				cn10k_nix_prepare_mseg_vec(mbufs[j], NULL,
-							   &cmd0[j], &cmd1[j],
-							   segdw[j], flags);
 			vst1q_u64(lmt_addr, cmd0[0]);
 			vst1q_u64(lmt_addr + 2, cmd1[0]);
 			vst1q_u64(lmt_addr + 4, cmd0[1]);
@@ -1517,18 +1531,10 @@
 		}
 	}
 
-	lmt_used = 0;
 	for (j = 0; j < NIX_DESCS_PER_LOOP;) {
 		/* Fit consecutive packets in same LMTLINE. */
 		if ((segdw[j] + segdw[j + 1]) <= 8) {
 			if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
-				cn10k_nix_prepare_mseg_vec(mbufs[j], NULL,
-							   &cmd0[j], &cmd1[j],
-							   segdw[j], flags);
-				cn10k_nix_prepare_mseg_vec(mbufs[j + 1], NULL,
-							   &cmd0[j + 1],
-							   &cmd1[j + 1],
-							   segdw[j + 1], flags);
 				/* TSTAMP takes 4 each, no segs. */
 				vst1q_u64(lmt_addr, cmd0[j]);
 				vst1q_u64(lmt_addr + 2, cmd2[j]);
@@ -1643,23 +1649,11 @@ cn10k_nix_xmit_store(struct rte_mbuf *mbuf, uint8_t segdw, uintptr_t laddr,
 {
 	uint8_t off;
 
-	/* Handle no fast free when security is enabled without mseg */
-	if ((flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) &&
-	    (flags & NIX_TX_OFFLOAD_SECURITY_F) &&
-	    !(flags & NIX_TX_MULTI_SEG_F)) {
-		union nix_send_sg_s sg;
-
-		sg.u = vgetq_lane_u64(cmd1, 0);
-		sg.u |= (cnxk_nix_prefree_seg(mbuf) << 55);
-		cmd1 = vsetq_lane_u64(sg.u, cmd1, 0);
-
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-		sg.u = vgetq_lane_u64(cmd1, 0);
-		if (!(sg.u & (1ULL << 55)))
-			RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1,
-						  0);
-		rte_io_wmb();
-#endif
+	if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
+		cn10k_nix_prepare_mseg_vec_noff(mbuf, LMT_OFF(laddr, 0, 0),
+						&cmd0, &cmd1, &cmd2, &cmd3,
+						flags);
+		return;
 	}
 	if (flags & NIX_TX_MULTI_SEG_F) {
 		if ((flags & NIX_TX_NEED_EXT_HDR) &&

From patchwork Thu Nov 17 07:25:57 2022
X-Patchwork-Submitter: Ashwin Sekhar T K
X-Patchwork-Id: 119925
X-Patchwork-Delegate: jerinj@marvell.com
From: Ashwin Sekhar T K
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Subject: [PATCH v1 2/3] net/cnxk: add sg2 descriptor support
Date: Thu, 17 Nov 2022 12:55:57 +0530
Message-ID: <20221117072558.3582292-2-asekhar@marvell.com>
In-Reply-To: <20221117072558.3582292-1-asekhar@marvell.com>
References: <20221117072558.3582292-1-asekhar@marvell.com>

Add support for creating packets whose segments come from different
pools. This is enabled by using SG2 descriptors. SG2 descriptors are
used only when the segment is to be freed by the HW.
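The core of the change is the per-segment choice between the classic SG
subdescriptor and the new SG2 one. A minimal sketch of that decision,
assuming the aura values have already been extracted (needs_sg2() is a
hypothetical helper; the real check is inline in cn10k_nix_prepare_mseg()
below):

#include <stdbool.h>
#include <stdint.h>

/* Decide whether a segment needs an SG2 subdescriptor. aura0 is the
 * aura programmed in the send header, seg_aura is the aura of the
 * mempool backing this segment, and prefree is nonzero when software
 * frees the buffer (invert-DF set).
 */
static inline bool
needs_sg2(uint16_t aura0, uint16_t seg_aura, uint64_t prefree)
{
        /* SG2 carries its own aura, letting HW return the segment to
         * the right pool; plain SG suffices when software frees the
         * buffer or the segment's pool matches the send header's aura.
         */
        return seg_aura != aura0 && !prefree;
}

This mirrors the "is_sg2 = aura != aura0 && !prefree" condition in the
diff.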
Signed-off-by: Ashwin Sekhar T K
---
 drivers/net/cnxk/cn10k_tx.h | 161 +++++++++++++++++++++++++++---------
 1 file changed, 123 insertions(+), 38 deletions(-)

diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index a4c578354c..3f08a8a473 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -54,6 +54,36 @@
 
 #define NIX_NB_SEGS_TO_SEGDW(x) ((NIX_SEGDW_MAGIC >> ((x) << 2)) & 0xF)
 
+static __plt_always_inline uint8_t
+cn10k_nix_mbuf_sg_dwords(struct rte_mbuf *m)
+{
+	uint32_t nb_segs = m->nb_segs;
+	uint16_t aura0, aura;
+	int segw, sg_segs;
+
+	aura0 = roc_npa_aura_handle_to_aura(m->pool->pool_id);
+
+	nb_segs--;
+	segw = 2;
+	sg_segs = 1;
+	while (nb_segs) {
+		m = m->next;
+		aura = roc_npa_aura_handle_to_aura(m->pool->pool_id);
+		if (aura != aura0) {
+			segw += 2 + (sg_segs == 2);
+			sg_segs = 0;
+		} else {
+			segw += (sg_segs == 0); /* SUBDC */
+			segw += 1; /* IOVA */
+			sg_segs += 1;
+			sg_segs %= 3;
+		}
+		nb_segs--;
+	}
+
+	return (segw + 1) / 2;
+}
+
 static __plt_always_inline void
 cn10k_nix_vwqe_wait_fc(struct cn10k_eth_txq *txq, int64_t req)
 {
@@ -915,15 +945,15 @@ cn10k_nix_xmit_prepare_tstamp(struct cn10k_eth_txq *txq, uintptr_t lmt_addr,
 static __rte_always_inline uint16_t
 cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 {
+	uint64_t prefree = 0, aura0, aura, nb_segs, segdw;
 	struct nix_send_hdr_s *send_hdr;
-	union nix_send_sg_s *sg;
+	union nix_send_sg_s *sg, l_sg;
+	union nix_send_sg2_s l_sg2;
 	struct rte_mbuf *m_next;
-	uint64_t *slist, sg_u;
+	uint8_t off, is_sg2;
 	uint64_t len, dlen;
 	uint64_t ol_flags;
-	uint64_t nb_segs;
-	uint64_t segdw;
-	uint8_t off, i;
+	uint64_t *slist;
 
 	send_hdr = (struct nix_send_hdr_s *)cmd;
 
@@ -938,20 +968,22 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 	ol_flags = m->ol_flags;
 
 	/* Start from second segment, first segment is already there */
-	i = 1;
-	sg_u = sg->u;
-	len -= sg_u & 0xFFFF;
+	is_sg2 = 0;
+	l_sg.u = sg->u;
+	len -= l_sg.u & 0xFFFF;
 	nb_segs = m->nb_segs - 1;
 	m_next = m->next;
 	slist = &cmd[3 + off + 1];
 
 	/* Set invert df if buffer is not to be freed by H/W */
-	if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
-		sg_u |= (cnxk_nix_prefree_seg(m) << 55);
+	if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
+		prefree = cnxk_nix_prefree_seg(m);
+		l_sg.i1 = prefree;
+	}
 
-	/* Mark mempool object as "put" since it is freed by NIX */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-	if (!(sg_u & (1ULL << 55)))
+	/* Mark mempool object as "put" since it is freed by NIX */
+	if (!prefree)
 		RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
 	rte_io_wmb();
 #endif
@@ -964,55 +996,103 @@ cn10k_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
 	if (!(flags & NIX_TX_MULTI_SEG_F))
 		goto done;
 
+	aura0 = send_hdr->w0.aura;
 	m = m_next;
 	if (!m)
 		goto done;
 
 	/* Fill mbuf segments */
 	do {
+		uint64_t iova;
+
+		/* Save the current mbuf properties. These can get cleared in
+		 * cnxk_nix_prefree_seg()
+		 */
 		m_next = m->next;
+		iova = rte_mbuf_data_iova(m);
 		dlen = m->data_len;
 		len -= dlen;
-		sg_u = sg_u | ((uint64_t)dlen << (i << 4));
-		*slist = rte_mbuf_data_iova(m);
-		/* Set invert df if buffer is not to be freed by H/W */
-		if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
-			sg_u |= (cnxk_nix_prefree_seg(m) << (i + 55));
-		/* Mark mempool object as "put" since it is freed by NIX
-		 */
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
-		if (!(sg_u & (1ULL << (i + 55))))
-			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
-#endif
-		slist++;
-		i++;
+
 		nb_segs--;
-		if (i > 2 && nb_segs) {
-			i = 0;
+		aura = aura0;
+		prefree = 0;
+
+		if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
+			aura = roc_npa_aura_handle_to_aura(m->pool->pool_id);
+			prefree = cnxk_nix_prefree_seg(m);
+			is_sg2 = aura != aura0 && !prefree;
+		}
+
+		if (unlikely(is_sg2)) {
+			/* This mbuf belongs to a different pool and
+			 * DF bit is not to be set, so use SG2 subdesc
+			 * so that it is freed to the appropriate pool.
+			 */
+
+			/* Write the previous descriptor out */
+			sg->u = l_sg.u;
+
+			/* If the current SG subdc does not have any
+			 * iovas in it, then the SG2 subdc can overwrite
+			 * that SG subdc.
+			 *
+			 * If the current SG subdc has 2 iovas in it, then
+			 * the current iova word should be left empty.
+			 */
+			slist += (-1 + (int)l_sg.segs);
+			sg = (union nix_send_sg_s *)slist;
+
+			l_sg2.u = l_sg.u & 0xC00000000000000; /* LD_TYPE */
+			l_sg2.subdc = NIX_SUBDC_SG2;
+			l_sg2.aura = aura;
+			l_sg2.seg1_size = dlen;
+			l_sg.u = l_sg2.u;
+
+			slist++;
+			*slist = iova;
+			slist++;
+		} else {
+			*slist = iova;
+			/* Set invert df if buffer is not to be freed by H/W */
+			l_sg.u |= (prefree << (l_sg.segs + 55));
+			/* Set the segment length */
+			l_sg.u |= ((uint64_t)dlen << (l_sg.segs << 4));
+			l_sg.segs += 1;
+			slist++;
+		}
+
+		if ((is_sg2 || l_sg.segs > 2) && nb_segs) {
+			sg->u = l_sg.u;
 			/* Next SG subdesc */
-			*(uint64_t *)slist = sg_u & 0xFC00000000000000;
-			sg->u = sg_u;
-			sg->segs = 3;
 			sg = (union nix_send_sg_s *)slist;
-			sg_u = sg->u;
+			l_sg.u &= 0xC00000000000000; /* LD_TYPE */
+			l_sg.subdc = NIX_SUBDC_SG;
 			slist++;
 		}
 		m->next = NULL;
+
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+		/* Mark mempool object as "put" since it is freed by NIX
+		 */
+		if (!prefree)
+			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
+#endif
 		m = m_next;
 	} while (nb_segs);
 
 done:
 	/* Add remaining bytes of security data to last seg */
 	if (flags & NIX_TX_OFFLOAD_SECURITY_F && ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD &&
 	    len) {
-		uint8_t shft = ((i - 1) << 4);
+		uint8_t shft = (l_sg.subdc == NIX_SUBDC_SG) ? ((l_sg.segs - 1) << 4) : 0;
 
-		dlen = ((sg_u >> shft) & 0xFFFFULL) + len;
-		sg_u = sg_u & ~(0xFFFFULL << shft);
-		sg_u |= dlen << shft;
+		dlen = ((l_sg.u >> shft) & 0xFFFFULL) + len;
+		l_sg.u = l_sg.u & ~(0xFFFFULL << shft);
+		l_sg.u |= dlen << shft;
 	}
 
-	sg->u = sg_u;
-	sg->segs = i;
+	/* Write the last subdc out */
+	sg->u = l_sg.u;
+
 	segdw = (uint64_t *)slist - (uint64_t *)&cmd[2 + off];
 	/* Roundup extra dwords to multiple of 2 */
 	segdw = (segdw >> 1) + (segdw & 0x1);
@@ -1827,7 +1907,12 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, uint64_t *ws,
 			struct rte_mbuf *m = tx_pkts[j];
 
 			/* Get dwords based on nb_segs. */
-			segdw[j] = NIX_NB_SEGS_TO_SEGDW(m->nb_segs);
+			if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F &&
+			      flags & NIX_TX_MULTI_SEG_F))
+				segdw[j] = NIX_NB_SEGS_TO_SEGDW(m->nb_segs);
+			else
+				segdw[j] = cn10k_nix_mbuf_sg_dwords(m);
+
 			/* Add dwords based on offloads. */
 			segdw[j] += 1 + /* SEND HDR */
 				    !!(flags & NIX_TX_NEED_EXT_HDR) +
 				    !!(flags & NIX_TX_OFFLOAD_TSTAMP_F);

From patchwork Thu Nov 17 07:25:58 2022
X-Patchwork-Submitter: Ashwin Sekhar T K
X-Patchwork-Id: 119926
X-Patchwork-Delegate: jerinj@marvell.com
From: Ashwin Sekhar T K
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
Subject: [PATCH v1 3/3] net/cnxk: add debug check for number of Tx descriptors
Date: Thu, 17 Nov 2022 12:55:58 +0530
Message-ID: <20221117072558.3582292-3-asekhar@marvell.com>
In-Reply-To: <20221117072558.3582292-1-asekhar@marvell.com>
References: <20221117072558.3582292-1-asekhar@marvell.com>

When SG2 descriptors are used and more than five segments are present,
certain combinations of segments require more than 16 descriptor words,
because every switch to a segment from a different pool opens a new
SG/SG2 subdescriptor.
In debug builds, add an assert to capture this scenario.

Signed-off-by: Ashwin Sekhar T K
---
 drivers/net/cnxk/cn10k_tx.h | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index 3f08a8a473..09c332b2b5 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -84,6 +84,22 @@ cn10k_nix_mbuf_sg_dwords(struct rte_mbuf *m)
 	return (segw + 1) / 2;
 }
 
+static __plt_always_inline void
+cn10k_nix_tx_mbuf_validate(struct rte_mbuf *m, const uint32_t flags)
+{
+#ifdef RTE_LIBRTE_MBUF_DEBUG
+	uint16_t segdw;
+
+	segdw = cn10k_nix_mbuf_sg_dwords(m);
+	segdw += 1 + !!(flags & NIX_TX_NEED_EXT_HDR) + !!(flags & NIX_TX_OFFLOAD_TSTAMP_F);
+
+	PLT_ASSERT(segdw <= 8);
+#else
+	RTE_SET_USED(m);
+	RTE_SET_USED(flags);
+#endif
+}
+
 static __plt_always_inline void
 cn10k_nix_vwqe_wait_fc(struct cn10k_eth_txq *txq, int64_t req)
 {
@@ -1307,6 +1323,8 @@ cn10k_nix_xmit_pkts_mseg(void *tx_queue, uint64_t *ws,
 	}
 
 	for (i = 0; i < burst; i++) {
+		cn10k_nix_tx_mbuf_validate(tx_pkts[i], flags);
+
 		/* Perform header writes for TSO, barrier at
 		 * lmt steorl will suffice.
 		 */
@@ -1906,6 +1924,8 @@ cn10k_nix_xmit_pkts_vector(void *tx_queue, uint64_t *ws,
 		for (j = 0; j < NIX_DESCS_PER_LOOP; j++) {
 			struct rte_mbuf *m = tx_pkts[j];
 
+			cn10k_nix_tx_mbuf_validate(m, flags);
+
 			/* Get dwords based on nb_segs. */
 			if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F &&
 			      flags & NIX_TX_MULTI_SEG_F))
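To see why the assert is needed, the dword accounting of
cn10k_nix_mbuf_sg_dwords() can be reproduced standalone. The sketch
below is my reading of that function (sg_dwords_example() and its
parameters are illustrative, not driver API): every segment whose aura
differs from the first segment's opens a fresh SG2 subdescriptor, so
six segments alternating between two pools cost 12 SG/SG2 words; adding
the send header, extended header and timestamp dwords gives 9 dwords
(18 words), tripping PLT_ASSERT(segdw <= 8).

#include <stdint.h>

/* Count the 64-bit words consumed by the SG/SG2 subdescriptors of a
 * segment chain, given each segment's aura.
 */
static uint8_t
sg_dwords_example(const uint16_t *seg_aura, uint32_t nb_segs)
{
        int segw = 2;    /* first SG header word + first segment's iova */
        int sg_segs = 1; /* iovas already placed in the current SG */
        uint32_t i;

        for (i = 1; i < nb_segs; i++) {
                if (seg_aura[i] != seg_aura[0]) {
                        /* SG2 header word + iova; one extra word is
                         * wasted when the previous SG holds two iovas.
                         */
                        segw += 2 + (sg_segs == 2);
                        sg_segs = 0;
                } else {
                        segw += (sg_segs == 0); /* new SG header (SUBDC) */
                        segw += 1;              /* iova */
                        sg_segs = (sg_segs + 1) % 3;
                }
        }
        return (uint8_t)((segw + 1) / 2); /* round words up to dwords */
}

/* Example: six segments alternating between two pools.
 * uint16_t auras[6] = {1, 2, 1, 2, 1, 2};
 * sg_dwords_example(auras, 6) == 6; adding 1 (send header) + 1 (ext
 * header) + 1 (timestamp) gives 9 dwords, exceeding the 8-dword
 * (16-word) budget checked by cn10k_nix_tx_mbuf_validate().
 */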