From patchwork Thu Apr 29 01:33:57 2021
X-Patchwork-Submitter: Wenzhuo Lu
X-Patchwork-Id: 92359
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Wenzhuo Lu
To: dev@dpdk.org
Cc: Wenzhuo Lu
Date: Thu, 29 Apr 2021 09:33:57 +0800
Message-Id: <1619660037-33334-1-git-send-email-wenzhuo.lu@intel.com>
In-Reply-To: <1619414983-131070-1-git-send-email-wenzhuo.lu@intel.com>
References: <1619414983-131070-1-git-send-email-wenzhuo.lu@intel.com>
Subject: [dpdk-dev] [PATCH] net/iavf: fix performance drop
List-Id: DPDK patches and
discussions

The performance drop is caused by the Rx scalar path being selected
when AVX512 is disabled and some HW offload is enabled. Actually,
the HW offload is supported by AVX2 and SSE, so in this scenario the
AVX2 path should be chosen.

This patch removes the offload-related check for SSE and AVX2,
as SSE and AVX2 do support the offload features. No functional
change to the data path.

Fixes: eff56a7b9f97 ("net/iavf: add offload path for Rx AVX512")

Signed-off-by: Wenzhuo Lu
Acked-by: Qi Zhang
---
 drivers/net/iavf/iavf_rxtx.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)

diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 3f3cf63..0ba19dbf 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -2401,13 +2401,11 @@
 	check_ret = iavf_rx_vec_dev_check(dev);
 	if (check_ret >= 0 &&
 	    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) {
-		if (check_ret == IAVF_VECTOR_PATH) {
-			use_sse = true;
-			if ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1 ||
-			     rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1) &&
-			    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)
-				use_avx2 = true;
-		}
+		use_sse = true;
+		if ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1 ||
+		     rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1) &&
+		    rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256)
+			use_avx2 = true;
 #ifdef CC_AVX512_SUPPORT
 		if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) == 1 &&