From patchwork Wed Jan 18 07:33:03 2023
X-Patchwork-Submitter: "Liu, Mingxia"
X-Patchwork-Id: 122273
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Mingxia Liu
To: dev@dpdk.org, qi.z.zhang@intel.com, jingjing.wu@intel.com, beilei.xing@intel.com
Cc: wenjun1.wu@intel.com, Mingxia Liu
Subject: [PATCH v3 17/21] net/cpfl: add AVX512 data path for split queue model
Date: Wed, 18 Jan 2023 07:33:03 +0000
Message-Id: <20230118073304.903093-2-mingxia.liu@intel.com>
In-Reply-To: <20230118073304.903093-1-mingxia.liu@intel.com>
References: <20230113081931.221576-1-mingxia.liu@intel.com>
 <20230118073304.903093-1-mingxia.liu@intel.com>
List-Id: DPDK patches and discussions

Add support of AVX512 data path for split queue model.
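
The AVX512 split queue burst functions are only installed when the vector
path is allowed for the vport (and, for Rx, after per-queue vector setup);
otherwise the scalar split queue functions remain in place. For the split
queue model, the device-level Rx check gains an extra per-queue condition:
buffer queue 2 must be able to hold a complete packet. A condensed sketch
of the combined check (derived from the cpfl_rxtx_vec_common.h hunk below,
not literal driver code; it assumes CPFL_SCALAR_PATH == 0 and
CPFL_VECTOR_PATH == 1, as in the idpf common code, so the logical AND
selects the vector path only when both checks agree):

	/* Per-queue decision inside cpfl_rx_vec_dev_check_default() (sketch). */
	default_ret = cpfl_rx_vec_queue_default(rxq);        /* generic per-queue checks */
	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
		/* extra split queue check: bufq2 buffers must fit a full packet */
		splitq_ret = (rxq->bufq2->rx_buf_len < rxq->max_pkt_len) ?
			     CPFL_SCALAR_PATH : CPFL_VECTOR_PATH;
		ret = splitq_ret && default_ret;              /* vector only if both agree */
	} else {
		ret = default_ret;
	}
	if (ret == CPFL_SCALAR_PATH)
		return CPFL_SCALAR_PATH;                      /* whole device falls back to scalar */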
Signed-off-by: Wenjun Wu
Signed-off-by: Mingxia Liu
---
 drivers/net/cpfl/cpfl_rxtx.c            | 25 +++++++++++++++++++++++++
 drivers/net/cpfl/cpfl_rxtx_vec_common.h | 19 +++++++++++++++++--
 2 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/drivers/net/cpfl/cpfl_rxtx.c b/drivers/net/cpfl/cpfl_rxtx.c
index 559b10cb85..5cd25278d6 100644
--- a/drivers/net/cpfl/cpfl_rxtx.c
+++ b/drivers/net/cpfl/cpfl_rxtx.c
@@ -772,6 +772,20 @@ cpfl_set_rx_function(struct rte_eth_dev *dev)
 
 #ifdef RTE_ARCH_X86
 	if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+#ifdef RTE_ARCH_X86
+		if (vport->rx_vec_allowed) {
+			for (i = 0; i < dev->data->nb_rx_queues; i++) {
+				rxq = dev->data->rx_queues[i];
+				(void)idpf_splitq_rx_vec_setup(rxq);
+			}
+#ifdef CC_AVX512_SUPPORT
+			if (vport->rx_use_avx512) {
+				dev->rx_pkt_burst = idpf_splitq_recv_pkts_avx512;
+				return;
+			}
+#endif
+		}
+#endif
 		dev->rx_pkt_burst = idpf_splitq_recv_pkts;
 	} else {
 		if (vport->rx_vec_allowed) {
@@ -833,6 +847,17 @@ cpfl_set_tx_function(struct rte_eth_dev *dev)
 #endif /* RTE_ARCH_X86 */
 
 	if (vport->txq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+#ifdef RTE_ARCH_X86
+		if (vport->tx_vec_allowed) {
+#ifdef CC_AVX512_SUPPORT
+			if (vport->tx_use_avx512) {
+				dev->tx_pkt_burst = idpf_splitq_xmit_pkts_avx512;
+				dev->tx_pkt_prepare = idpf_prep_pkts;
+				return;
+			}
+#endif
+		}
+#endif
 		dev->tx_pkt_burst = idpf_splitq_xmit_pkts;
 		dev->tx_pkt_prepare = idpf_prep_pkts;
 	} else {
diff --git a/drivers/net/cpfl/cpfl_rxtx_vec_common.h b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
index 503bc87f21..1f01cd40c5 100644
--- a/drivers/net/cpfl/cpfl_rxtx_vec_common.h
+++ b/drivers/net/cpfl/cpfl_rxtx_vec_common.h
@@ -64,15 +64,30 @@ cpfl_tx_vec_queue_default(struct idpf_tx_queue *txq)
 	return CPFL_VECTOR_PATH;
 }
 
+static inline int
+cpfl_rx_splitq_vec_default(struct idpf_rx_queue *rxq)
+{
+	if (rxq->bufq2->rx_buf_len < rxq->max_pkt_len)
+		return CPFL_SCALAR_PATH;
+
+	return CPFL_VECTOR_PATH;
+}
+
 static inline int
 cpfl_rx_vec_dev_check_default(struct rte_eth_dev *dev)
 {
+	struct idpf_vport *vport = dev->data->dev_private;
 	struct idpf_rx_queue *rxq;
-	int i, ret = 0;
+	int i, default_ret, splitq_ret, ret = CPFL_SCALAR_PATH;
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
-		ret = (cpfl_rx_vec_queue_default(rxq));
+		default_ret = cpfl_rx_vec_queue_default(rxq);
+		if (vport->rxq_model == VIRTCHNL2_QUEUE_MODEL_SPLIT) {
+			splitq_ret = cpfl_rx_splitq_vec_default(rxq);
+			ret = splitq_ret && default_ret;
+		} else
+			ret = default_ret;
 		if (ret == CPFL_SCALAR_PATH)
 			return CPFL_SCALAR_PATH;
 	}