From patchwork Thu Feb 22 10:05:30 2024
X-Patchwork-Submitter: Nithin Dabilpuram
X-Patchwork-Id: 137008
X-Patchwork-Delegate: jerinj@marvell.com
From: Nithin Dabilpuram
To: Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao
CC: dev@dpdk.org, Rahul Bhansali
Subject: [PATCH v3 14/14] net/cnxk: fix mempool debug build compile warnings
Date: Thu, 22 Feb 2024 15:35:30 +0530
Message-ID: <20240222100530.2266013-14-ndabilpuram@marvell.com>
In-Reply-To: <20240222100530.2266013-1-ndabilpuram@marvell.com>
References: <20240222100530.2266013-1-ndabilpuram@marvell.com>
List-Id: DPDK patches and discussions

From: Rahul Bhansali

Fix the "-Werror=incompatible-pointer-types" compile warnings seen when
RTE_LIBRTE_MEMPOOL_DEBUG is enabled. Also reset the mbuf next and nb_segs
fields in the multi-seg Tx path.
Fixes: e87c1a590805 ("net/cnxk: fix indirect mbuf handling in Tx path")

Signed-off-by: Rahul Bhansali
---
 drivers/net/cnxk/cn10k_tx.h | 10 ++++++----
 drivers/net/cnxk/cn9k_tx.h  | 35 ++++++++++++++++++++++++++++-------
 2 files changed, 34 insertions(+), 11 deletions(-)

diff --git a/drivers/net/cnxk/cn10k_tx.h b/drivers/net/cnxk/cn10k_tx.h
index a995696e66..94bfebf246 100644
--- a/drivers/net/cnxk/cn10k_tx.h
+++ b/drivers/net/cnxk/cn10k_tx.h
@@ -859,7 +859,7 @@ cn10k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn10k_eth_txq *txq,
 			w0 |= aura << 20;
 
 			if ((w0 & BIT_ULL(19)) == 0)
-				RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, &cookie, 1, 0);
+				RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
 		}
 
 		*senddesc01_w0 = vsetq_lane_u64(w0, *senddesc01_w0, 0);
@@ -888,7 +888,7 @@ cn10k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn10k_eth_txq *txq,
 			w0 |= aura << 20;
 
 			if ((w0 & BIT_ULL(19)) == 0)
-				RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, &cookie, 1, 0);
+				RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
 		}
 
 		*senddesc01_w0 = vsetq_lane_u64(w0, *senddesc01_w0, 1);
@@ -917,7 +917,7 @@ cn10k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn10k_eth_txq *txq,
 			w0 |= aura << 20;
 
 			if ((w0 & BIT_ULL(19)) == 0)
-				RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, &cookie, 1, 0);
+				RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
 		}
 
 		*senddesc23_w0 = vsetq_lane_u64(w0, *senddesc23_w0, 0);
@@ -946,7 +946,7 @@ cn10k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn10k_eth_txq *txq,
 			w0 |= aura << 20;
 
 			if ((w0 & BIT_ULL(19)) == 0)
-				RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, &cookie, 1, 0);
+				RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
 		}
 
 		*senddesc23_w0 = vsetq_lane_u64(w0, *senddesc23_w0, 1);
 #ifndef RTE_LIBRTE_MEMPOOL_DEBUG
@@ -1324,6 +1324,7 @@ cn10k_nix_prepare_mseg(struct cn10k_eth_txq *txq,
 	nb_segs = m->nb_segs - 1;
 	m_next = m->next;
 	m->next = NULL;
+	m->nb_segs = 1;
 	slist = &cmd[3 + off + 1];
 
 	cookie = RTE_MBUF_DIRECT(m) ? m : rte_mbuf_from_indirect(m);
@@ -1869,6 +1870,7 @@ cn10k_nix_prepare_mseg_vec_list(struct rte_mbuf *m, uint64_t *cmd,
 	nb_segs = m->nb_segs - 1;
 	m_next = m->next;
 	m->next = NULL;
+	m->nb_segs = 1;
 	m = m_next;
 	/* Fill mbuf segments */
 	do {
diff --git a/drivers/net/cnxk/cn9k_tx.h b/drivers/net/cnxk/cn9k_tx.h
index f28cecebd0..fb5e8c5f56 100644
--- a/drivers/net/cnxk/cn9k_tx.h
+++ b/drivers/net/cnxk/cn9k_tx.h
@@ -157,7 +157,7 @@ cn9k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn9k_eth_txq *txq,
 			w0 |= aura << 20;
 
 			if ((w0 & BIT_ULL(19)) == 0)
-				RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, &cookie, 1, 0);
+				RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
 		}
 
 		*senddesc01_w0 = vsetq_lane_u64(w0, *senddesc01_w0, 0);
@@ -186,7 +186,7 @@ cn9k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn9k_eth_txq *txq,
 			w0 |= aura << 20;
 
 			if ((w0 & BIT_ULL(19)) == 0)
-				RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, &cookie, 1, 0);
+				RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
 		}
 
 		*senddesc01_w0 = vsetq_lane_u64(w0, *senddesc01_w0, 1);
@@ -215,7 +215,7 @@ cn9k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn9k_eth_txq *txq,
 			w0 |= aura << 20;
 
 			if ((w0 & BIT_ULL(19)) == 0)
-				RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, &cookie, 1, 0);
+				RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
 		}
 
 		*senddesc23_w0 = vsetq_lane_u64(w0, *senddesc23_w0, 0);
@@ -244,7 +244,7 @@ cn9k_nix_prefree_seg_vec(struct rte_mbuf **mbufs, struct cn9k_eth_txq *txq,
 			w0 |= aura << 20;
 
 			if ((w0 & BIT_ULL(19)) == 0)
-				RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, &cookie, 1, 0);
+				RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
 		}
 
 		*senddesc23_w0 = vsetq_lane_u64(w0, *senddesc23_w0, 1);
 #ifndef RTE_LIBRTE_MEMPOOL_DEBUG
@@ -515,7 +515,7 @@ cn9k_nix_xmit_prepare(struct cn9k_eth_txq *txq,
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		/* Mark mempool object as "put" since it is freed by NIX */
 		if (!send_hdr->w0.df)
-			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
 #else
 		RTE_SET_USED(cookie);
 #endif
@@ -639,10 +639,14 @@ cn9k_nix_prepare_mseg(struct cn9k_eth_txq *txq,
 	/* Mark mempool object as "put" since it is freed by NIX */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 	if (!(sg_u & (1ULL << 55)))
-		RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
+		RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
 	rte_io_wmb();
 #else
 	RTE_SET_USED(cookie);
+#endif
+#ifdef RTE_ENABLE_ASSERT
+	m->next = NULL;
+	m->nb_segs = 1;
 #endif
 	m = m_next;
 	if (!m)
@@ -653,6 +657,7 @@ cn9k_nix_prepare_mseg(struct cn9k_eth_txq *txq,
 		m_next = m->next;
 		sg_u = sg_u | ((uint64_t)m->data_len << (i << 4));
 		*slist = rte_mbuf_data_iova(m);
+		cookie = RTE_MBUF_DIRECT(m) ? m : rte_mbuf_from_indirect(m);
 		/* Set invert df if buffer is not to be freed by H/W */
 		if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
 			sg_u |= (cn9k_nix_prefree_seg(m, txq, send_hdr, NULL) << (i + 55));
@@ -662,7 +667,7 @@ cn9k_nix_prepare_mseg(struct cn9k_eth_txq *txq,
 		/* Mark mempool object as "put" since it is freed by NIX */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 		if (!(sg_u & (1ULL << (i + 55))))
-			RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
+			RTE_MEMPOOL_CHECK_COOKIES(cookie->pool, (void **)&cookie, 1, 0);
 		rte_io_wmb();
 #endif
 		slist++;
@@ -678,6 +683,9 @@ cn9k_nix_prepare_mseg(struct cn9k_eth_txq *txq,
 			sg_u = sg->u;
 			slist++;
 		}
+#ifdef RTE_ENABLE_ASSERT
+		m->next = NULL;
+#endif
 		m = m_next;
 	} while (nb_segs);
@@ -691,6 +699,9 @@ cn9k_nix_prepare_mseg(struct cn9k_eth_txq *txq,
 	segdw += (off >> 1) + 1 + !!(flags & NIX_TX_OFFLOAD_TSTAMP_F);
 	send_hdr->w0.sizem1 = segdw - 1;
 
+#ifdef RTE_ENABLE_ASSERT
+	rte_io_wmb();
+#endif
 	return segdw;
 }
@@ -907,6 +918,10 @@ cn9k_nix_prepare_mseg_vec_list(struct cn9k_eth_txq *txq,
 	RTE_SET_USED(cookie);
 #endif
 
+#ifdef RTE_ENABLE_ASSERT
+	m->next = NULL;
+	m->nb_segs = 1;
+#endif
 	m = m_next;
 	/* Fill mbuf segments */
 	do {
@@ -937,6 +952,9 @@ cn9k_nix_prepare_mseg_vec_list(struct cn9k_eth_txq *txq,
 			sg_u = sg->u;
 			slist++;
 		}
+#ifdef RTE_ENABLE_ASSERT
+		m->next = NULL;
+#endif
 		m = m_next;
 	} while (nb_segs);
@@ -952,6 +970,9 @@ cn9k_nix_prepare_mseg_vec_list(struct cn9k_eth_txq *txq,
 		      !!(flags & NIX_TX_OFFLOAD_TSTAMP_F);
 	send_hdr->w0.sizem1 = segdw - 1;
 
+#ifdef RTE_ENABLE_ASSERT
+	rte_io_wmb();
+#endif
 	return segdw;
 }
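
A note for readers on the warning being fixed: with RTE_LIBRTE_MEMPOOL_DEBUG enabled,
RTE_MEMPOOL_CHECK_COOKIES() ends up calling a debug helper whose object-table argument is a
table of generic void pointers, so passing the address of a struct rte_mbuf pointer directly
is an incompatible-pointer-types constraint violation in C; the explicit (void **) cast added
throughout the patch is what resolves it. The sketch below reproduces the same warning and fix
outside of DPDK; check_cookies() and struct mbuf are hypothetical stand-ins for
rte_mempool_check_cookies() and struct rte_mbuf, not the real definitions.

/*
 * Standalone illustration only -- not DPDK sources. check_cookies() and
 * struct mbuf are invented stand-ins used to reproduce the warning this
 * patch silences. Build with: cc -Wall sketch.c
 */
#include <stdio.h>

struct mbuf {
	void *pool;
};

/* The debug helper walks a generic object table, not an mbuf table. */
static void check_cookies(void *const *obj_table, unsigned int n)
{
	for (unsigned int i = 0; i < n; i++)
		printf("checking cookie %p\n", obj_table[i]);
}

int main(void)
{
	struct mbuf m = { .pool = NULL };
	struct mbuf *cookie = &m;

	/*
	 * check_cookies(&cookie, 1);
	 * ^ warns: 'struct mbuf **' does not implicitly convert to
	 *   'void *const *' in C, which is the -Werror failure seen in
	 *   RTE_LIBRTE_MEMPOOL_DEBUG builds.
	 */
	check_cookies((void **)&cookie, 1);	/* explicit cast, as in the patch */

	return 0;
}

The second half of the fix, the RTE_ENABLE_ASSERT blocks, clears m->next and resets m->nb_segs
on segments handed to the hardware, presumably so later debug-time mbuf checks do not trip over
stale chain state once the NIX has freed the buffers.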