From patchwork Tue Dec 11 05:55:11 2018
X-Patchwork-Submitter: Qi Zhang <qi.z.zhang@intel.com>
X-Patchwork-Id: 48628
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Qi Zhang <qi.z.zhang@intel.com>
To: ferruh.yigit@intel.com, bruce.richardson@intel.com,
	keith.wiles@intel.com, konstantin.ananyev@intel.com
Cc: dev@dpdk.org, wenzhuo.lu@intel.com, bernard.iremonger@intel.com,
	Qi Zhang <qi.z.zhang@intel.com>
Date: Tue, 11 Dec 2018 13:55:11 +0800
Message-Id: <20181211055511.32284-4-qi.z.zhang@intel.com>
In-Reply-To: <20181211055511.32284-1-qi.z.zhang@intel.com>
References: <20181122172632.6229-1-qi.z.zhang@intel.com>
	<20181211055511.32284-1-qi.z.zhang@intel.com>
Subject: [dpdk-dev] [PATCH v2 3/3] app/testpmd: further improve MAC swap
	performance for x86

Do the MAC swap for four packets in the same loop iteration to squeeze
out more CPU cycles.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
 app/test-pmd/macswap_sse.h | 62 +++++++++++++++++++++++++++++++++++++---------
 1 file changed, 50 insertions(+), 12 deletions(-)

diff --git a/app/test-pmd/macswap_sse.h b/app/test-pmd/macswap_sse.h
index 79f4f9a7c..df2875ace 100644
--- a/app/test-pmd/macswap_sse.h
+++ b/app/test-pmd/macswap_sse.h
@@ -11,11 +11,12 @@ static inline void
 do_macswap(struct rte_mbuf *pkts[], uint16_t nb,
 		struct rte_port *txp)
 {
-	struct ether_hdr *eth_hdr;
-	struct rte_mbuf *mb;
+	struct ether_hdr *eth_hdr[4];
+	struct rte_mbuf *mb[4];
 	uint64_t ol_flags;
 	int i;
-	__m128i addr;
+	int r;
+	__m128i addr0, addr1, addr2, addr3;
 	__m128i shfl_msk = _mm_set_epi8(15, 14, 13, 12,
 					5, 4, 3, 2,
 					1, 0, 11, 10,
@@ -25,19 +26,56 @@ do_macswap(struct rte_mbuf *pkts[], uint16_t nb,
 	ol_flags = ol_flags_init(txp->dev_conf.txmode.offloads);
 	vlan_qinq_set(pkts, nb, ol_flags,
 			txp->tx_vlan_id, txp->tx_vlan_id_outer);
 
-	for (i = 0; i < nb; i++) {
-		if (likely(i < nb - 1))
-			rte_prefetch0(rte_pktmbuf_mtod(pkts[i+1], void *));
-		mb = pkts[i];
+	i = 0;
+	r = nb;
+
+	while (r >= 4) {
+		mb[0] = pkts[i++];
+		eth_hdr[0] = rte_pktmbuf_mtod(mb[0], struct ether_hdr *);
+		addr0 = _mm_loadu_si128((__m128i *)eth_hdr[0]);
+
+		mb[1] = pkts[i++];
+		eth_hdr[1] = rte_pktmbuf_mtod(mb[1], struct ether_hdr *);
+		addr1 = _mm_loadu_si128((__m128i *)eth_hdr[1]);
+
+
+		mb[2] = pkts[i++];
+		eth_hdr[2] = rte_pktmbuf_mtod(mb[2], struct ether_hdr *);
+		addr2 = _mm_loadu_si128((__m128i *)eth_hdr[2]);
+
+		mb[3] = pkts[i++];
+		eth_hdr[3] = rte_pktmbuf_mtod(mb[3], struct ether_hdr *);
+		addr3 = _mm_loadu_si128((__m128i *)eth_hdr[3]);
 
-		eth_hdr = rte_pktmbuf_mtod(mb, struct ether_hdr *);
+		addr0 = _mm_shuffle_epi8(addr0, shfl_msk);
+		addr1 = _mm_shuffle_epi8(addr1, shfl_msk);
+		addr2 = _mm_shuffle_epi8(addr2, shfl_msk);
+		addr3 = _mm_shuffle_epi8(addr3, shfl_msk);
+
+		_mm_storeu_si128((__m128i *)eth_hdr[0], addr0);
+		_mm_storeu_si128((__m128i *)eth_hdr[1], addr1);
+		_mm_storeu_si128((__m128i *)eth_hdr[2], addr2);
+		_mm_storeu_si128((__m128i *)eth_hdr[3], addr3);
+
+		mbuf_field_set(mb[0], ol_flags);
+		mbuf_field_set(mb[1], ol_flags);
+		mbuf_field_set(mb[2], ol_flags);
+		mbuf_field_set(mb[3], ol_flags);
+		r -= 4;
+	}
+
+	for ( ; i < nb; i++) {
+		if (i < nb - 1)
+			rte_prefetch0(rte_pktmbuf_mtod(pkts[i+1], void *));
+		mb[0] = pkts[i];
+		eth_hdr[0] = rte_pktmbuf_mtod(mb[0], struct ether_hdr *);
 
 		/* Swap dest and src mac addresses. */
-		addr = _mm_loadu_si128((__m128i *)eth_hdr);
-		addr = _mm_shuffle_epi8(addr, shfl_msk);
-		_mm_storeu_si128((__m128i *)eth_hdr, addr);
+		addr0 = _mm_loadu_si128((__m128i *)eth_hdr[0]);
+		addr0 = _mm_shuffle_epi8(addr0, shfl_msk);
+		_mm_storeu_si128((__m128i *)eth_hdr[0], addr0);
 
-		mbuf_field_set(mb, ol_flags);
+		mbuf_field_set(mb[0], ol_flags);
 	}
 }
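
For reference, below is a minimal standalone sketch (illustration only, not
part of the patch) of what the shfl_msk byte shuffle does to a single
Ethernet header. The last four mask lanes (9, 8, 7, 6) are the unchanged
tail of the mask in macswap_sse.h, which falls outside the context shown in
the first hunk; the header bytes and the file name macswap_demo.c are made
up for the demo. Build with an SSSE3-capable compiler, e.g.
gcc -mssse3 macswap_demo.c.

/* macswap_demo.c - swap the dst/src MAC of one dummy Ethernet header
 * using the same _mm_shuffle_epi8 mask as do_macswap().
 */
#include <stdio.h>
#include <stdint.h>
#include <tmmintrin.h>		/* SSSE3: _mm_shuffle_epi8 */

int main(void)
{
	uint8_t hdr[16] = {
		0x00, 0x01, 0x02, 0x03, 0x04, 0x05,	/* destination MAC */
		0x10, 0x11, 0x12, 0x13, 0x14, 0x15,	/* source MAC */
		0x08, 0x00,				/* EtherType (IPv4) */
		0xaa, 0xbb				/* first payload bytes */
	};
	/* Bytes 0-5 and 6-11 trade places; bytes 12-15 stay put. */
	__m128i shfl_msk = _mm_set_epi8(15, 14, 13, 12,
					5, 4, 3, 2,
					1, 0, 11, 10,
					9, 8, 7, 6);
	__m128i addr = _mm_loadu_si128((const __m128i *)hdr);
	int i;

	addr = _mm_shuffle_epi8(addr, shfl_msk);
	_mm_storeu_si128((__m128i *)hdr, addr);

	for (i = 0; i < 16; i++)
		printf("%02x ", hdr[i]);
	printf("\n");	/* 10 11 12 13 14 15 00 01 02 03 04 05 08 00 aa bb */
	return 0;
}

The unrolled while (r >= 4) loop in the patch issues four such independent
load/shuffle/store chains per iteration, so the shuffles for different
packets can overlap in the pipeline instead of being handled one packet at
a time.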