From patchwork Sat Feb 24 08:21:59 2024
X-Patchwork-Submitter: Tyler Retzlaff
X-Patchwork-Id: 137159
X-Patchwork-Delegate: thomas@monjalon.net
From: Tyler Retzlaff
To: dev@dpdk.org
Cc: Ajit Khaparde, Andrew Boyer, Andrew Rybchenko, Bruce Richardson,
 Chenbo Xia, Chengwen Feng, Dariusz Sosnowski, David Christensen,
 Hyong Youb Kim, Jerin Jacob, Jie Hai, Jingjing Wu, John Daley,
 Kevin Laatz, Kiran Kumar K, Konstantin Ananyev, Maciej Czekaj,
 Matan Azrad, Maxime Coquelin, Nithin Dabilpuram, Ori Kam,
 Ruifeng Wang, Satha Rao, Somnath Kotur, Suanming Mou,
 Sunil Kumar Kori, Viacheslav Ovsiienko, Yisen Zhuang, Yuying Zhang,
 mb@smartsharesystems.com, Tyler Retzlaff
Subject: [PATCH v5 14/22] net/iavf: use mbuf descriptor accessors
Date: Sat, 24 Feb 2024 00:21:59 -0800
Message-Id: <1708762927-14126-15-git-send-email-roretzla@linux.microsoft.com>
In-Reply-To: <1708762927-14126-1-git-send-email-roretzla@linux.microsoft.com>
References: <1706657173-26166-1-git-send-email-roretzla@linux.microsoft.com>
 <1708762927-14126-1-git-send-email-roretzla@linux.microsoft.com>

RTE_MARKER typedefs are a GCC extension that MSVC does not support. Use
the new rte_mbuf_rearm_data and rte_mbuf_rx_descriptor_fields1 accessors,
which return a pointer of a compatible type without going through the
marker fields.
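The conversion applied throughout this patch follows a single pattern. As an
illustration only (the helper below is hypothetical and not part of the diff;
it assumes the accessors added earlier in this series are visible via
rte_mbuf.h):

#include <rte_mbuf.h>

/* Illustrative sketch of the before/after pattern; not code from this patch. */
static inline void
write_rearm_sketch(struct rte_mbuf *m, uint64_t rearm)
{
	/*
	 * Before: take the address of the RTE_MARKER64 field directly,
	 * a zero-length typedef that MSVC cannot compile:
	 *
	 *     *(uint64_t *)&m->rearm_data = rearm;
	 */

	/* After: go through the typed accessor; no marker field involved. */
	*rte_mbuf_rearm_data(m) = rearm;
}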
Signed-off-by: Tyler Retzlaff
---
 drivers/net/iavf/iavf_rxtx_vec_avx2.c   | 32 ++++++++++++++++----------------
 drivers/net/iavf/iavf_rxtx_vec_avx512.c | 32 ++++++++++++++++----------------
 drivers/net/iavf/iavf_rxtx_vec_common.h |  4 +---
 drivers/net/iavf/iavf_rxtx_vec_neon.c   | 16 ++++++++--------
 drivers/net/iavf/iavf_rxtx_vec_sse.c    | 32 ++++++++++++++++----------------
 5 files changed, 57 insertions(+), 59 deletions(-)

diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 510b4d8..0211e83 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -398,13 +398,13 @@
 		rearm2 = _mm256_permute2f128_si256(rearm2, mb2_3, 0x20);
 		rearm0 = _mm256_permute2f128_si256(rearm0, mb0_1, 0x20);
 		/* write to mbuf */
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 6]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 6]),
 				rearm6);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 4]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 4]),
 				rearm4);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 2]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 2]),
 				rearm2);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 0]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 0]),
 				rearm0);

 		/* repeat for the odd mbufs */
@@ -427,13 +427,13 @@
 		rearm3 = _mm256_blend_epi32(rearm3, mb2_3, 0xF0);
 		rearm1 = _mm256_blend_epi32(rearm1, mb0_1, 0xF0);
 		/* again write to mbufs */
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 7]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 7]),
 				rearm7);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 5]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 5]),
 				rearm5);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 3]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 3]),
 				rearm3);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 1]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 1]),
 				rearm1);

 		/* extract and record EOP bit */
@@ -1305,13 +1305,13 @@
 		rearm2 = _mm256_permute2f128_si256(rearm2, mb2_3, 0x20);
 		rearm0 = _mm256_permute2f128_si256(rearm0, mb0_1, 0x20);
 		/* write to mbuf */
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 6]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 6]),
 				rearm6);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 4]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 4]),
 				rearm4);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 2]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 2]),
 				rearm2);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 0]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 0]),
 				rearm0);

 		/* repeat for the odd mbufs */
@@ -1334,13 +1334,13 @@
 		rearm3 = _mm256_blend_epi32(rearm3, mb2_3, 0xF0);
 		rearm1 = _mm256_blend_epi32(rearm1, mb0_1, 0xF0);
 		/* again write to mbufs */
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 7]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 7]),
 				rearm7);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 5]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 5]),
 				rearm5);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 3]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 3]),
 				rearm3);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 1]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 1]),
 				rearm1);

 		/* extract and record EOP bit */
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 3bb6f30..950ec91 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -450,13 +450,13 @@
 			rearm0 = _mm256_permute2f128_si256(mbuf_init, mb0_1, 0x20);
 		}
 		/* write to mbuf */
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 6]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 6]),
 				rearm6);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 4]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 4]),
 				rearm4);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 2]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 2]),
 				rearm2);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 0]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 0]),
 				rearm0);

 		/* repeat for the odd mbufs */
@@ -486,13 +486,13 @@
 			rearm1 = _mm256_blend_epi32(mbuf_init, mb0_1, 0xF0);
 		}
 		/* again write to mbufs */
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 7]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 7]),
 				rearm7);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 5]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 5]),
 				rearm5);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 3]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 3]),
 				rearm3);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 1]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 1]),
 				rearm1);

 		/* extract and record EOP bit */
@@ -1461,13 +1461,13 @@
 		rearm2 = _mm256_permute2f128_si256(rearm2, mb2_3, 0x20);
 		rearm0 = _mm256_permute2f128_si256(rearm0, mb0_1, 0x20);
 		/* write to mbuf */
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 6]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 6]),
 				rearm6);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 4]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 4]),
 				rearm4);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 2]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 2]),
 				rearm2);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 0]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 0]),
 				rearm0);

 		/* repeat for the odd mbufs */
@@ -1490,13 +1490,13 @@
 		rearm3 = _mm256_blend_epi32(rearm3, mb2_3, 0xF0);
 		rearm1 = _mm256_blend_epi32(rearm1, mb0_1, 0xF0);
 		/* again write to mbufs */
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 7]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 7]),
 				rearm7);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 5]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 5]),
 				rearm5);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 3]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 3]),
 				rearm3);
-		_mm256_storeu_si256((__m256i *)&rx_pkts[i + 1]->rearm_data,
+		_mm256_storeu_si256((__m256i *)rte_mbuf_rearm_data(rx_pkts[i + 1]),
 				rearm1);

 		/* extract and record EOP bit */
diff --git a/drivers/net/iavf/iavf_rxtx_vec_common.h b/drivers/net/iavf/iavf_rxtx_vec_common.h
index 5c52200..71e3644 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_common.h
+++ b/drivers/net/iavf/iavf_rxtx_vec_common.h
@@ -197,7 +197,6 @@
 static inline int
 iavf_rxq_vec_setup_default(struct iavf_rx_queue *rxq)
 {
-	uintptr_t p;
 	struct rte_mbuf mb_def = { .buf_addr = 0 }; /* zeroed mbuf */

 	mb_def.nb_segs = 1;
@@ -207,8 +206,7 @@

 	/* prevent compiler reordering: rearm_data covers previous fields */
 	rte_compiler_barrier();
-	p = (uintptr_t)&mb_def.rearm_data;
-	rxq->mbuf_initializer = *(uint64_t *)p;
+	rxq->mbuf_initializer = *rte_mbuf_rearm_data(&mb_def);
 	return 0;
 }

diff --git a/drivers/net/iavf/iavf_rxtx_vec_neon.c b/drivers/net/iavf/iavf_rxtx_vec_neon.c
index 83825aa..d7ea940 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_neon.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_neon.c
@@ -159,10 +159,10 @@
 	rearm2 = vsetq_lane_u64(vgetq_lane_u32(vlan0, 2), mbuf_init, 1);
 	rearm3 = vsetq_lane_u64(vgetq_lane_u32(vlan0, 3), mbuf_init, 1);

-	vst1q_u64((uint64_t *)&rx_pkts[0]->rearm_data, rearm0);
-	vst1q_u64((uint64_t *)&rx_pkts[1]->rearm_data, rearm1);
-	vst1q_u64((uint64_t *)&rx_pkts[2]->rearm_data, rearm2);
-	vst1q_u64((uint64_t *)&rx_pkts[3]->rearm_data, rearm3);
+	vst1q_u64(rte_mbuf_rearm_data(rx_pkts[0]), rearm0);
+	vst1q_u64(rte_mbuf_rearm_data(rx_pkts[1]), rearm1);
+	vst1q_u64(rte_mbuf_rearm_data(rx_pkts[2]), rearm2);
+	vst1q_u64(rte_mbuf_rearm_data(rx_pkts[3]), rearm3);
 }

 #define PKTLEN_SHIFT 10
@@ -332,13 +332,13 @@
 		pkt_mb1 = vreinterpretq_u8_u16(tmp);

 		/* D.3 copy final data to rx_pkts */
-		vst1q_u8((void *)&rx_pkts[pos + 3]->rx_descriptor_fields1,
+		vst1q_u8(rte_mbuf_rx_descriptor_fields1(rx_pkts[pos + 3]),
 				pkt_mb4);
-		vst1q_u8((void *)&rx_pkts[pos + 2]->rx_descriptor_fields1,
+		vst1q_u8(rte_mbuf_rx_descriptor_fields1(rx_pkts[pos + 2]),
 				pkt_mb3);
-		vst1q_u8((void *)&rx_pkts[pos + 1]->rx_descriptor_fields1,
+		vst1q_u8(rte_mbuf_rx_descriptor_fields1(rx_pkts[pos + 1]),
 				pkt_mb2);
-		vst1q_u8((void *)&rx_pkts[pos]->rx_descriptor_fields1,
+		vst1q_u8(rte_mbuf_rx_descriptor_fields1(rx_pkts[pos]),
 				pkt_mb1);

 		desc_to_ptype_v(descs, &rx_pkts[pos], ptype_tbl);
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index 96f187f..634d9f5 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -183,10 +183,10 @@
 			offsetof(struct rte_mbuf, rearm_data) + 8);
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, rearm_data) !=
 			RTE_ALIGN(offsetof(struct rte_mbuf, rearm_data), 16));
-	_mm_store_si128((__m128i *)&rx_pkts[0]->rearm_data, rearm0);
-	_mm_store_si128((__m128i *)&rx_pkts[1]->rearm_data, rearm1);
-	_mm_store_si128((__m128i *)&rx_pkts[2]->rearm_data, rearm2);
-	_mm_store_si128((__m128i *)&rx_pkts[3]->rearm_data, rearm3);
+	_mm_store_si128((__m128i *)rte_mbuf_rearm_data(rx_pkts[0]), rearm0);
+	_mm_store_si128((__m128i *)rte_mbuf_rearm_data(rx_pkts[1]), rearm1);
+	_mm_store_si128((__m128i *)rte_mbuf_rearm_data(rx_pkts[2]), rearm2);
+	_mm_store_si128((__m128i *)rte_mbuf_rearm_data(rx_pkts[3]), rearm3);
 }

 static inline __m128i
@@ -416,10 +416,10 @@
 			offsetof(struct rte_mbuf, rearm_data) + 8);
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, rearm_data) !=
 			RTE_ALIGN(offsetof(struct rte_mbuf, rearm_data), 16));
-	_mm_store_si128((__m128i *)&rx_pkts[0]->rearm_data, rearm0);
-	_mm_store_si128((__m128i *)&rx_pkts[1]->rearm_data, rearm1);
-	_mm_store_si128((__m128i *)&rx_pkts[2]->rearm_data, rearm2);
-	_mm_store_si128((__m128i *)&rx_pkts[3]->rearm_data, rearm3);
+	_mm_store_si128((__m128i *)rte_mbuf_rearm_data(rx_pkts[0]), rearm0);
+	_mm_store_si128((__m128i *)rte_mbuf_rearm_data(rx_pkts[1]), rearm1);
+	_mm_store_si128((__m128i *)rte_mbuf_rearm_data(rx_pkts[2]), rearm2);
+	_mm_store_si128((__m128i *)rte_mbuf_rearm_data(rx_pkts[3]), rearm3);
 }

 #define PKTLEN_SHIFT 10
@@ -651,10 +651,10 @@

 		/* D.3 copy final 3,4 data to rx_pkts */
 		_mm_storeu_si128(
-			(void *)&rx_pkts[pos + 3]->rx_descriptor_fields1,
+			rte_mbuf_rx_descriptor_fields1(rx_pkts[pos + 3]),
 			pkt_mb4);
 		_mm_storeu_si128(
-			(void *)&rx_pkts[pos + 2]->rx_descriptor_fields1,
+			rte_mbuf_rx_descriptor_fields1(rx_pkts[pos + 2]),
 			pkt_mb3);

 		/* D.2 pkt 1,2 remove crc */
@@ -689,9 +689,9 @@

 		/* D.3 copy final 1,2 data to rx_pkts */
 		_mm_storeu_si128(
-			(void *)&rx_pkts[pos + 1]->rx_descriptor_fields1,
+			rte_mbuf_rx_descriptor_fields1(rx_pkts[pos + 1]),
 			pkt_mb2);
-		_mm_storeu_si128((void *)&rx_pkts[pos]->rx_descriptor_fields1,
+		_mm_storeu_si128(rte_mbuf_rx_descriptor_fields1(rx_pkts[pos]),
 				pkt_mb1);
 		desc_to_ptype_v(descs, &rx_pkts[pos], ptype_tbl);
 		/* C.4 calc available number of desc */
@@ -1089,10 +1089,10 @@

 		/* D.3 copy final 3,4 data to rx_pkts */
 		_mm_storeu_si128
-			((void *)&rx_pkts[pos + 3]->rx_descriptor_fields1,
+			(rte_mbuf_rx_descriptor_fields1(rx_pkts[pos + 3]),
 			 pkt_mb3);
 		_mm_storeu_si128
-			((void *)&rx_pkts[pos + 2]->rx_descriptor_fields1,
+			(rte_mbuf_rx_descriptor_fields1(rx_pkts[pos + 2]),
 			 pkt_mb2);

 		/* C* extract and record EOP bit */
@@ -1116,9 +1116,9 @@

 		/* D.3 copy final 1,2 data to rx_pkts */
 		_mm_storeu_si128
-			((void *)&rx_pkts[pos + 1]->rx_descriptor_fields1,
+			(rte_mbuf_rx_descriptor_fields1(rx_pkts[pos + 1]),
 			 pkt_mb1);
-		_mm_storeu_si128((void *)&rx_pkts[pos]->rx_descriptor_fields1,
+		_mm_storeu_si128(rte_mbuf_rx_descriptor_fields1(rx_pkts[pos]),
 				pkt_mb0);
 		flex_desc_to_ptype_v(descs, &rx_pkts[pos], ptype_tbl);
 		/* C.4 calc available number of desc */
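
Reviewer note, not part of the patch: the accessors are MSVC-friendly because
they return a pointer computed from the first real field of each region rather
than the address of a zero-size RTE_MARKER field. A rough sketch of that shape,
using hypothetical names and assuming the regions begin at data_off and
packet_type as in the current struct rte_mbuf layout:

/* Hypothetical sketches; the real definitions come from the mbuf
 * library patch earlier in this series. */
static inline uint64_t *
sketch_mbuf_rearm_data(struct rte_mbuf *m)
{
	/* rearm region: data_off, refcnt, nb_segs, port (8 bytes) */
	return (uint64_t *)(uintptr_t)&m->data_off;
}

static inline void *
sketch_mbuf_rx_descriptor_fields1(struct rte_mbuf *m)
{
	/* Rx descriptor region begins at packet_type */
	return &m->packet_type;
}

Either way, only the expression that produces the destination pointer changes;
the vector loads, shuffles, and stores themselves are untouched by this patch.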