From patchwork Wed Apr 13 17:10:09 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kevin Liu X-Patchwork-Id: 109669 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5FB04A0508; Wed, 13 Apr 2022 11:11:53 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D9A40427FD; Wed, 13 Apr 2022 11:11:48 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by mails.dpdk.org (Postfix) with ESMTP id CE2A34068B for ; Wed, 13 Apr 2022 11:11:47 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649841108; x=1681377108; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=J4Wo64ItQBH4N6CEJHCXCjbSj6gRI5z9jkTbPzFPbys=; b=I0Qmx3ffkwyEeeEUmH31VzZpYG5ACnSjjh+Fgt7fMrH11e2l4XkDydjH Ppyn3PsSvfL16uOOwH5a7nYlxoIBsokySL6XBM+hlfxYwRKPRXOtBzyqp 62zYMbBp8N4tqkHT55DERBMzA77hxKxzLA8w+ULzezZ+Rm/gXiARo6EgY hV8zhjJVkUp8MOmN6DMztXzDPz1HnJdyBUM1Pyjh5k5ZYhQ4I7Q5a1ff4 laUZcaH/oGFRqqgkyKOJWoQnJ9QjA6rEhOutbi4ZiMjD7KBKQM7S3PM1d WqASOBr5TQwcj+Mi7OszS4gykHtrwYvAemrEagqoo66bYsyNSCDZDi3XA w==; X-IronPort-AV: E=McAfee;i="6400,9594,10315"; a="262058330" X-IronPort-AV: E=Sophos;i="5.90,256,1643702400"; d="scan'208";a="262058330" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Apr 2022 02:11:47 -0700 X-IronPort-AV: E=Sophos;i="5.90,256,1643702400"; d="scan'208";a="552122698" Received: from intel-cd-odc-kevin.cd.intel.com ([10.240.178.195]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Apr 2022 02:11:45 -0700 From: Kevin Liu To: dev@dpdk.org Cc: 
qiming.yang@intel.com, qi.z.zhang@intel.com, stevex.yang@intel.com, Kevin Liu Subject: [PATCH v3 01/22] net/ice: enable RSS RETA ops for DCF hardware Date: Wed, 13 Apr 2022 17:10:09 +0000 Message-Id: <20220413171030.2231163-2-kevinx.liu@intel.com> X-Mailer: git-send-email 2.33.1 In-Reply-To: <20220413171030.2231163-1-kevinx.liu@intel.com> References: <20220413160932.2074781-1-kevinx.liu@intel.com> <20220413171030.2231163-1-kevinx.liu@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Steve Yang RSS RETA should be updated and queried by application, Add related ops ('.reta_update', '.reta_query') for DCF. Signed-off-by: Steve Yang Signed-off-by: Kevin Liu --- drivers/net/ice/ice_dcf.c | 2 +- drivers/net/ice/ice_dcf.h | 1 + drivers/net/ice/ice_dcf_ethdev.c | 77 ++++++++++++++++++++++++++++++++ 3 files changed, 79 insertions(+), 1 deletion(-) diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index 7f0c074b01..070d1b71ac 100644 --- a/drivers/net/ice/ice_dcf.c +++ b/drivers/net/ice/ice_dcf.c @@ -790,7 +790,7 @@ ice_dcf_configure_rss_key(struct ice_dcf_hw *hw) return err; } -static int +int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw) { struct virtchnl_rss_lut *rss_lut; diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h index 6ec766ebda..b2c6aa2684 100644 --- a/drivers/net/ice/ice_dcf.h +++ b/drivers/net/ice/ice_dcf.h @@ -122,6 +122,7 @@ int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc, int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw); int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw); void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw); +int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw); int ice_dcf_init_rss(struct ice_dcf_hw *hw); int ice_dcf_configure_queues(struct 
ice_dcf_hw *hw); int ice_dcf_config_irq_map(struct ice_dcf_hw *hw); diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index 59610e058f..1ac66ed990 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -761,6 +761,81 @@ ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev, return 0; } +static int +ice_dcf_dev_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct ice_dcf_adapter *adapter = dev->data->dev_private; + struct ice_dcf_hw *hw = &adapter->real_hw; + uint8_t *lut; + uint16_t i, idx, shift; + int ret; + + if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) + return -ENOTSUP; + + if (reta_size != hw->vf_res->rss_lut_size) { + PMD_DRV_LOG(ERR, "The size of hash lookup table configured " + "(%d) doesn't match the number of hardware can " + "support (%d)", reta_size, hw->vf_res->rss_lut_size); + return -EINVAL; + } + + lut = rte_zmalloc("rss_lut", reta_size, 0); + if (!lut) { + PMD_DRV_LOG(ERR, "No memory can be allocated"); + return -ENOMEM; + } + /* store the old lut table temporarily */ + rte_memcpy(lut, hw->rss_lut, reta_size); + + for (i = 0; i < reta_size; i++) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + shift = i % RTE_ETH_RETA_GROUP_SIZE; + if (reta_conf[idx].mask & (1ULL << shift)) + lut[i] = reta_conf[idx].reta[shift]; + } + + rte_memcpy(hw->rss_lut, lut, reta_size); + /* send virtchnnl ops to configure rss*/ + ret = ice_dcf_configure_rss_lut(hw); + if (ret) /* revert back */ + rte_memcpy(hw->rss_lut, lut, reta_size); + rte_free(lut); + + return ret; +} + +static int +ice_dcf_dev_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct ice_dcf_adapter *adapter = dev->data->dev_private; + struct ice_dcf_hw *hw = &adapter->real_hw; + uint16_t i, idx, shift; + + if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) + return -ENOTSUP; + + if (reta_size != 
hw->vf_res->rss_lut_size) { + PMD_DRV_LOG(ERR, "The size of hash lookup table configured " + "(%d) doesn't match the number of hardware can " + "support (%d)", reta_size, hw->vf_res->rss_lut_size); + return -EINVAL; + } + + for (i = 0; i < reta_size; i++) { + idx = i / RTE_ETH_RETA_GROUP_SIZE; + shift = i % RTE_ETH_RETA_GROUP_SIZE; + if (reta_conf[idx].mask & (1ULL << shift)) + reta_conf[idx].reta[shift] = hw->rss_lut[i]; + } + + return 0; +} + #define ICE_DCF_32_BIT_WIDTH (CHAR_BIT * 4) #define ICE_DCF_48_BIT_WIDTH (CHAR_BIT * 6) #define ICE_DCF_48_BIT_MASK RTE_LEN2MASK(ICE_DCF_48_BIT_WIDTH, uint64_t) @@ -1107,6 +1182,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = { .udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add, .udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del, .tm_ops_get = ice_dcf_tm_ops_get, + .reta_update = ice_dcf_dev_rss_reta_update, + .reta_query = ice_dcf_dev_rss_reta_query, }; static int From patchwork Wed Apr 13 17:10:10 2022
From: Kevin Liu To: dev@dpdk.org Cc: qiming.yang@intel.com, qi.z.zhang@intel.com, stevex.yang@intel.com, Kevin Liu Subject: [PATCH v3 02/22] net/ice: enable RSS HASH ops for DCF hardware Date: Wed, 13 Apr 2022 17:10:10 +0000 Message-Id: <20220413171030.2231163-3-kevinx.liu@intel.com> In-Reply-To: <20220413171030.2231163-1-kevinx.liu@intel.com> References: <20220413160932.2074781-1-kevinx.liu@intel.com> <20220413171030.2231163-1-kevinx.liu@intel.com> From: Steve Yang RSS hash should be updated and queried by the application. Add the related ops ('.rss_hash_update', '.rss_hash_conf_get') for DCF. Because DCF doesn't support configuring the RSS hash, only the hash key can be updated within the '.rss_hash_update' op.
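The '.rss_hash_update' path introduced by this patch only accepts a key whose length matches what the VF resources report, and silently ignores a request with no key at all. A minimal standalone sketch of that validation logic (plain C; `HW_RSS_KEY_SIZE` is a hypothetical stand-in for `hw->vf_res->rss_key_size`, and the virtchnl send is omitted):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical key size standing in for hw->vf_res->rss_key_size. */
#define HW_RSS_KEY_SIZE 52

static uint8_t hw_rss_key[HW_RSS_KEY_SIZE];

/* Mirrors the checks in the patch's ice_dcf_dev_rss_hash_update():
 * an absent key is ignored, a key of the wrong length is rejected,
 * and a valid key is copied into the device-private buffer before
 * it would be pushed to hardware over virtchnl. */
static int rss_hash_update(const uint8_t *key, uint16_t key_len)
{
	if (key == NULL || key_len == 0)
		return 0;	/* no key to configure */
	if (key_len != HW_RSS_KEY_SIZE)
		return -22;	/* -EINVAL */
	memcpy(hw_rss_key, key, key_len);
	return 0;
}
```

Note the asymmetry the commit message calls out: the hash functions (`rss_hf`) cannot be changed on DCF, so only the key portion of `struct rte_eth_rss_conf` is consumed.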
Signed-off-by: Steve Yang Signed-off-by: Kevin Liu --- drivers/net/ice/ice_dcf.c | 2 +- drivers/net/ice/ice_dcf.h | 1 + drivers/net/ice/ice_dcf_ethdev.c | 51 ++++++++++++++++++++++++++++++++ 3 files changed, 53 insertions(+), 1 deletion(-) diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index 070d1b71ac..89c0203ba3 100644 --- a/drivers/net/ice/ice_dcf.c +++ b/drivers/net/ice/ice_dcf.c @@ -758,7 +758,7 @@ ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw) hw->ets_config = NULL; } -static int +int ice_dcf_configure_rss_key(struct ice_dcf_hw *hw) { struct virtchnl_rss_key *rss_key; diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h index b2c6aa2684..f0b45af5ae 100644 --- a/drivers/net/ice/ice_dcf.h +++ b/drivers/net/ice/ice_dcf.h @@ -122,6 +122,7 @@ int ice_dcf_send_aq_cmd(void *dcf_hw, struct ice_aq_desc *desc, int ice_dcf_handle_vsi_update_event(struct ice_dcf_hw *hw); int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw); void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw); +int ice_dcf_configure_rss_key(struct ice_dcf_hw *hw); int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw); int ice_dcf_init_rss(struct ice_dcf_hw *hw); int ice_dcf_configure_queues(struct ice_dcf_hw *hw); diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index 1ac66ed990..ccad7fc304 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -836,6 +836,55 @@ ice_dcf_dev_rss_reta_query(struct rte_eth_dev *dev, return 0; } +static int +ice_dcf_dev_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct ice_dcf_adapter *adapter = dev->data->dev_private; + struct ice_dcf_hw *hw = &adapter->real_hw; + + if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) + return -ENOTSUP; + + /* HENA setting, it is enabled by default, no change */ + if (!rss_conf->rss_key || rss_conf->rss_key_len == 0) { + 
PMD_DRV_LOG(DEBUG, "No key to be configured"); + return 0; + } else if (rss_conf->rss_key_len != hw->vf_res->rss_key_size) { + PMD_DRV_LOG(ERR, "The size of hash key configured " + "(%d) doesn't match the size of hardware can " + "support (%d)", rss_conf->rss_key_len, + hw->vf_res->rss_key_size); + return -EINVAL; + } + + rte_memcpy(hw->rss_key, rss_conf->rss_key, rss_conf->rss_key_len); + + return ice_dcf_configure_rss_key(hw); +} + +static int +ice_dcf_dev_rss_hash_conf_get(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct ice_dcf_adapter *adapter = dev->data->dev_private; + struct ice_dcf_hw *hw = &adapter->real_hw; + + if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_RSS_PF)) + return -ENOTSUP; + + /* Just set it to default value now. */ + rss_conf->rss_hf = ICE_RSS_OFFLOAD_ALL; + + if (!rss_conf->rss_key) + return 0; + + rss_conf->rss_key_len = hw->vf_res->rss_key_size; + rte_memcpy(rss_conf->rss_key, hw->rss_key, rss_conf->rss_key_len); + + return 0; +} + #define ICE_DCF_32_BIT_WIDTH (CHAR_BIT * 4) #define ICE_DCF_48_BIT_WIDTH (CHAR_BIT * 6) #define ICE_DCF_48_BIT_MASK RTE_LEN2MASK(ICE_DCF_48_BIT_WIDTH, uint64_t) @@ -1184,6 +1233,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = { .tm_ops_get = ice_dcf_tm_ops_get, .reta_update = ice_dcf_dev_rss_reta_update, .reta_query = ice_dcf_dev_rss_reta_query, + .rss_hash_update = ice_dcf_dev_rss_hash_update, + .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get, }; static int From patchwork Wed Apr 13 17:10:11 2022
From: Kevin Liu To: dev@dpdk.org Cc: qiming.yang@intel.com, qi.z.zhang@intel.com, stevex.yang@intel.com, Robin Zhang , Kevin Liu Subject: [PATCH v3 03/22] net/ice: cleanup Tx buffers Date: Wed, 13 Apr 2022 17:10:11 +0000 Message-Id: <20220413171030.2231163-4-kevinx.liu@intel.com> In-Reply-To: <20220413171030.2231163-1-kevinx.liu@intel.com> References: <20220413160932.2074781-1-kevinx.liu@intel.com> <20220413171030.2231163-1-kevinx.liu@intel.com>
From: Robin Zhang Add support for the rte_eth_tx_done_cleanup op in DCF. Signed-off-by: Robin Zhang Signed-off-by: Kevin Liu --- drivers/net/ice/ice_dcf_ethdev.c | 1 + 1 file changed, 1 insertion(+) diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index ccad7fc304..d8b5961514 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -1235,6 +1235,7 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = { .reta_query = ice_dcf_dev_rss_reta_query, .rss_hash_update = ice_dcf_dev_rss_hash_update, .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get, + .tx_done_cleanup = ice_tx_done_cleanup, }; static int From patchwork Wed Apr 13 17:10:12 2022
From: Kevin Liu To: dev@dpdk.org Cc: qiming.yang@intel.com, qi.z.zhang@intel.com, stevex.yang@intel.com, Jie Wang , Kevin Liu Subject: [PATCH v3 04/22] net/ice: add ops MTU-SET to dcf Date: Wed, 13 Apr 2022 17:10:12 +0000 Message-Id: <20220413171030.2231163-5-kevinx.liu@intel.com> In-Reply-To: <20220413171030.2231163-1-kevinx.liu@intel.com> References: <20220413160932.2074781-1-kevinx.liu@intel.com> <20220413171030.2231163-1-kevinx.liu@intel.com> From: Jie Wang Add the "mtu_set" API to DCF so that the port MTU can be configured through the command line.
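Beyond the stopped-port check, this patch defines the DCF frame-size constants in ice_dcf_ethdev.h. How they relate can be sketched in self-contained C (the `RTE_ETHER_*` values are written out numerically here — 14-byte Ethernet header, 4-byte CRC, 4-byte VLAN tag, 1500-byte standard MTU; the max-MTU arithmetic is an illustration, not code from the patch):

```c
#include <stdint.h>

/* Constants as defined by the patch, with RTE_ETHER_* values inlined. */
#define DCF_FRAME_SIZE_MAX 9728
#define DCF_VLAN_TAG_SIZE  4
#define DCF_ETH_OVERHEAD   (14 + 4 + DCF_VLAN_TAG_SIZE * 2)   /* hdr + CRC + 2 VLAN tags */
#define DCF_ETH_MAX_LEN    (1500 + DCF_ETH_OVERHEAD)

/* A frame carrying an MTU-sized payload occupies mtu + overhead bytes on
 * the wire, so the largest configurable MTU is the maximum frame size
 * minus that per-frame overhead. */
static uint16_t dcf_max_mtu(void)
{
	return DCF_FRAME_SIZE_MAX - DCF_ETH_OVERHEAD;
}
```

With double-VLAN accounted for, the overhead is 26 bytes, so a standard 1500-byte MTU yields the 1526-byte `ICE_DCF_ETH_MAX_LEN` from the patch.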
Signed-off-by: Jie Wang Signed-off-by: Kevin Liu --- drivers/net/ice/ice_dcf_ethdev.c | 14 ++++++++++++++ drivers/net/ice/ice_dcf_ethdev.h | 6 ++++++ 2 files changed, 20 insertions(+) diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index d8b5961514..06d752fd61 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -1081,6 +1081,19 @@ ice_dcf_link_update(struct rte_eth_dev *dev, return rte_eth_linkstatus_set(dev, &new_link); } +static int +ice_dcf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu __rte_unused) +{ + /* mtu setting is forbidden if port is start */ + if (dev->data->dev_started != 0) { + PMD_DRV_LOG(ERR, "port %d must be stopped before configuration", + dev->data->port_id); + return -EBUSY; + } + + return 0; +} + bool ice_dcf_adminq_need_retry(struct ice_adapter *ad) { @@ -1236,6 +1249,7 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = { .rss_hash_update = ice_dcf_dev_rss_hash_update, .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get, .tx_done_cleanup = ice_tx_done_cleanup, + .mtu_set = ice_dcf_dev_mtu_set, }; static int diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h index 11a1305038..f2faf26f58 100644 --- a/drivers/net/ice/ice_dcf_ethdev.h +++ b/drivers/net/ice/ice_dcf_ethdev.h @@ -15,6 +15,12 @@ #define ICE_DCF_MAX_RINGS 1 +#define ICE_DCF_FRAME_SIZE_MAX 9728 +#define ICE_DCF_VLAN_TAG_SIZE 4 +#define ICE_DCF_ETH_OVERHEAD \ + (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + ICE_DCF_VLAN_TAG_SIZE * 2) +#define ICE_DCF_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_DCF_ETH_OVERHEAD) + struct ice_dcf_queue { uint64_t dummy; }; From patchwork Wed Apr 13 17:10:13 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kevin Liu X-Patchwork-Id: 109673 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from 
From: Kevin Liu To: dev@dpdk.org Cc: qiming.yang@intel.com, qi.z.zhang@intel.com, stevex.yang@intel.com, Jie Wang , Kevin Liu Subject: [PATCH v3 05/22] net/ice: add ops dev-supported-ptypes-get to dcf Date: Wed, 13 Apr 2022 17:10:13 +0000 Message-Id: <20220413171030.2231163-6-kevinx.liu@intel.com> In-Reply-To:
<20220413171030.2231163-1-kevinx.liu@intel.com> References: <20220413160932.2074781-1-kevinx.liu@intel.com> <20220413171030.2231163-1-kevinx.liu@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Jie Wang add API "dev_supported_ptypes_get" to dcf, that dcf pmd can get ptypes through the new API. Signed-off-by: Jie Wang Signed-off-by: Kevin Liu --- drivers/net/ice/ice_dcf_ethdev.c | 80 +++++++++++++++++++------------- 1 file changed, 49 insertions(+), 31 deletions(-) diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index 06d752fd61..6a577a6582 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -1218,38 +1218,56 @@ ice_dcf_dev_reset(struct rte_eth_dev *dev) return ret; } +static const uint32_t * +ice_dcf_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused) +{ + static const uint32_t ptypes[] = { + RTE_PTYPE_L2_ETHER, + RTE_PTYPE_L3_IPV4_EXT_UNKNOWN, + RTE_PTYPE_L4_FRAG, + RTE_PTYPE_L4_ICMP, + RTE_PTYPE_L4_NONFRAG, + RTE_PTYPE_L4_SCTP, + RTE_PTYPE_L4_TCP, + RTE_PTYPE_L4_UDP, + RTE_PTYPE_UNKNOWN + }; + return ptypes; +} + static const struct eth_dev_ops ice_dcf_eth_dev_ops = { - .dev_start = ice_dcf_dev_start, - .dev_stop = ice_dcf_dev_stop, - .dev_close = ice_dcf_dev_close, - .dev_reset = ice_dcf_dev_reset, - .dev_configure = ice_dcf_dev_configure, - .dev_infos_get = ice_dcf_dev_info_get, - .rx_queue_setup = ice_rx_queue_setup, - .tx_queue_setup = ice_tx_queue_setup, - .rx_queue_release = ice_dev_rx_queue_release, - .tx_queue_release = ice_dev_tx_queue_release, - .rx_queue_start = ice_dcf_rx_queue_start, - .tx_queue_start = ice_dcf_tx_queue_start, - .rx_queue_stop = ice_dcf_rx_queue_stop, - .tx_queue_stop = ice_dcf_tx_queue_stop, - .link_update = ice_dcf_link_update, - .stats_get = 
ice_dcf_stats_get, - .stats_reset = ice_dcf_stats_reset, - .promiscuous_enable = ice_dcf_dev_promiscuous_enable, - .promiscuous_disable = ice_dcf_dev_promiscuous_disable, - .allmulticast_enable = ice_dcf_dev_allmulticast_enable, - .allmulticast_disable = ice_dcf_dev_allmulticast_disable, - .flow_ops_get = ice_dcf_dev_flow_ops_get, - .udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add, - .udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del, - .tm_ops_get = ice_dcf_tm_ops_get, - .reta_update = ice_dcf_dev_rss_reta_update, - .reta_query = ice_dcf_dev_rss_reta_query, - .rss_hash_update = ice_dcf_dev_rss_hash_update, - .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get, - .tx_done_cleanup = ice_tx_done_cleanup, - .mtu_set = ice_dcf_dev_mtu_set, + .dev_start = ice_dcf_dev_start, + .dev_stop = ice_dcf_dev_stop, + .dev_close = ice_dcf_dev_close, + .dev_reset = ice_dcf_dev_reset, + .dev_configure = ice_dcf_dev_configure, + .dev_infos_get = ice_dcf_dev_info_get, + .dev_supported_ptypes_get = ice_dcf_dev_supported_ptypes_get, + .rx_queue_setup = ice_rx_queue_setup, + .tx_queue_setup = ice_tx_queue_setup, + .rx_queue_release = ice_dev_rx_queue_release, + .tx_queue_release = ice_dev_tx_queue_release, + .rx_queue_start = ice_dcf_rx_queue_start, + .tx_queue_start = ice_dcf_tx_queue_start, + .rx_queue_stop = ice_dcf_rx_queue_stop, + .tx_queue_stop = ice_dcf_tx_queue_stop, + .link_update = ice_dcf_link_update, + .stats_get = ice_dcf_stats_get, + .stats_reset = ice_dcf_stats_reset, + .promiscuous_enable = ice_dcf_dev_promiscuous_enable, + .promiscuous_disable = ice_dcf_dev_promiscuous_disable, + .allmulticast_enable = ice_dcf_dev_allmulticast_enable, + .allmulticast_disable = ice_dcf_dev_allmulticast_disable, + .flow_ops_get = ice_dcf_dev_flow_ops_get, + .udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add, + .udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del, + .tm_ops_get = ice_dcf_tm_ops_get, + .reta_update = ice_dcf_dev_rss_reta_update, + .reta_query = 
ice_dcf_dev_rss_reta_query, + .rss_hash_update = ice_dcf_dev_rss_hash_update, + .rss_hash_conf_get = ice_dcf_dev_rss_hash_conf_get, + .tx_done_cleanup = ice_tx_done_cleanup, + .mtu_set = ice_dcf_dev_mtu_set, }; static int From patchwork Wed Apr 13 17:10:14 2022
From: Kevin Liu To: dev@dpdk.org Cc: qiming.yang@intel.com, qi.z.zhang@intel.com, stevex.yang@intel.com, Alvin Zhang , Kevin Liu Subject: [PATCH v3 06/22] net/ice: support dcf promisc configuration Date: Wed, 13 Apr 2022 17:10:14 +0000 Message-Id: <20220413171030.2231163-7-kevinx.liu@intel.com> In-Reply-To: <20220413171030.2231163-1-kevinx.liu@intel.com> References: <20220413160932.2074781-1-kevinx.liu@intel.com> <20220413171030.2231163-1-kevinx.liu@intel.com> From: Alvin Zhang Support configuring unicast and multicast promiscuous mode on DCF.
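The patch's `dcf_config_promisc()` carries both promiscuous modes in a single flags word of the `VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE` message, which is why enabling one mode must re-send the current state of the other. The flag composition can be sketched standalone (the numeric values restate the virtchnl flag definitions locally for this sketch):

```c
#include <stdint.h>

/* virtchnl promiscuous-mode flags, restated locally. */
#define FLAG_VF_UNICAST_PROMISC   0x00000001u
#define FLAG_VF_MULTICAST_PROMISC 0x00000002u

/* Mirrors dcf_config_promisc(): both modes travel in one flags word,
 * so toggling unicast promisc passes the cached multicast state along
 * unchanged, and vice versa. */
static uint32_t promisc_flags(int enable_unicast, int enable_multicast)
{
	uint32_t flags = 0;

	if (enable_unicast)
		flags |= FLAG_VF_UNICAST_PROMISC;
	if (enable_multicast)
		flags |= FLAG_VF_MULTICAST_PROMISC;
	return flags;
}
```

This is also why the patch caches `promisc_unicast_enabled` and `promisc_multicast_enabled` in the adapter: the driver needs the last-applied state of the mode it is not currently changing.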
Signed-off-by: Alvin Zhang Signed-off-by: Kevin Liu --- drivers/net/ice/ice_dcf_ethdev.c | 77 ++++++++++++++++++++++++++++++-- drivers/net/ice/ice_dcf_ethdev.h | 3 ++ 2 files changed, 76 insertions(+), 4 deletions(-) diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index 6a577a6582..87d281ee93 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -727,27 +727,95 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev, } static int -ice_dcf_dev_promiscuous_enable(__rte_unused struct rte_eth_dev *dev) +dcf_config_promisc(struct ice_dcf_adapter *adapter, + bool enable_unicast, + bool enable_multicast) { + struct ice_dcf_hw *hw = &adapter->real_hw; + struct virtchnl_promisc_info promisc; + struct dcf_virtchnl_cmd args; + int err; + + promisc.flags = 0; + promisc.vsi_id = hw->vsi_res->vsi_id; + + if (enable_unicast) + promisc.flags |= FLAG_VF_UNICAST_PROMISC; + + if (enable_multicast) + promisc.flags |= FLAG_VF_MULTICAST_PROMISC; + + memset(&args, 0, sizeof(args)); + args.v_op = VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE; + args.req_msg = (uint8_t *)&promisc; + args.req_msglen = sizeof(promisc); + + err = ice_dcf_execute_virtchnl_cmd(hw, &args); + if (err) { + PMD_DRV_LOG(ERR, + "fail to execute command VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE"); + return err; + } + + adapter->promisc_unicast_enabled = enable_unicast; + adapter->promisc_multicast_enabled = enable_multicast; return 0; } +static int +ice_dcf_dev_promiscuous_enable(__rte_unused struct rte_eth_dev *dev) +{ + struct ice_dcf_adapter *adapter = dev->data->dev_private; + + if (adapter->promisc_unicast_enabled) { + PMD_DRV_LOG(INFO, "promiscuous has been enabled"); + return 0; + } + + return dcf_config_promisc(adapter, true, + adapter->promisc_multicast_enabled); +} + static int ice_dcf_dev_promiscuous_disable(__rte_unused struct rte_eth_dev *dev) { - return 0; + struct ice_dcf_adapter *adapter = dev->data->dev_private; + + if (!adapter->promisc_unicast_enabled) { + 
PMD_DRV_LOG(INFO, "promiscuous has been disabled"); + return 0; + } + + return dcf_config_promisc(adapter, false, + adapter->promisc_multicast_enabled); } static int ice_dcf_dev_allmulticast_enable(__rte_unused struct rte_eth_dev *dev) { - return 0; + struct ice_dcf_adapter *adapter = dev->data->dev_private; + + if (adapter->promisc_multicast_enabled) { + PMD_DRV_LOG(INFO, "allmulticast has been enabled"); + return 0; + } + + return dcf_config_promisc(adapter, adapter->promisc_unicast_enabled, + true); } static int ice_dcf_dev_allmulticast_disable(__rte_unused struct rte_eth_dev *dev) { - return 0; + struct ice_dcf_adapter *adapter = dev->data->dev_private; + + if (!adapter->promisc_multicast_enabled) { + PMD_DRV_LOG(INFO, "allmulticast has been disabled"); + return 0; + } + + return dcf_config_promisc(adapter, adapter->promisc_unicast_enabled, + false); } static int @@ -1299,6 +1367,7 @@ ice_dcf_dev_init(struct rte_eth_dev *eth_dev) return -1; } + dcf_config_promisc(adapter, false, false); return 0; } diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h index f2faf26f58..22e450527b 100644 --- a/drivers/net/ice/ice_dcf_ethdev.h +++ b/drivers/net/ice/ice_dcf_ethdev.h @@ -33,6 +33,9 @@ struct ice_dcf_adapter { struct ice_adapter parent; /* Must be first */ struct ice_dcf_hw real_hw; + bool promisc_unicast_enabled; + bool promisc_multicast_enabled; + int num_reprs; struct ice_dcf_repr_info *repr_infos; }; From patchwork Wed Apr 13 17:10:15 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kevin Liu X-Patchwork-Id: 109675 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A57FDA0508; Wed, 13 Apr 2022 11:12:31 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by 
mails.dpdk.org (Postfix) with ESMTP id 92F0342803; Wed, 13 Apr 2022 11:12:18 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by mails.dpdk.org (Postfix) with ESMTP id BCFC4410DD for ; Wed, 13 Apr 2022 11:12:15 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649841136; x=1681377136; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=E+Pxz1IGnkGogeqtg63gIrWEg+zBHJGlVEYLUId0Yg0=; b=YAnc5KPFBJY1mf0Hv02PaKtpoKIdEKDGt9T4z42LY043Wp29wHc/YvJL OF1lvSALXeJ+KVr7G+PZv9tOZgTgQKl7cvt2T7Nkt+nHWcbReBDtUfxkt m7mMubcIiSff7Z0OHroVsLbr4saQCrjqT+X69XPUhAoHnSTH/fTF5y/mV Qdth9ywKez4xxAL9qcOV4TOsQEYQMXxX/BgMzNUPt3G/11BacEc9tBUg+ jW8wcgrHWrj3d+DYMhZtE2VeV7Smdd2BM7ICmSPFeyKktHzAzGqxoDKBH Bx1P6iSViMbdU9yZ8XwhrKEOI2ANT4msvxHsPoQ/NLoXNIHgH+wobDKZo g==; X-IronPort-AV: E=McAfee;i="6400,9594,10315"; a="262058367" X-IronPort-AV: E=Sophos;i="5.90,256,1643702400"; d="scan'208";a="262058367" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Apr 2022 02:12:00 -0700 X-IronPort-AV: E=Sophos;i="5.90,256,1643702400"; d="scan'208";a="552122778" Received: from intel-cd-odc-kevin.cd.intel.com ([10.240.178.195]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Apr 2022 02:11:58 -0700 From: Kevin Liu To: dev@dpdk.org Cc: qiming.yang@intel.com, qi.z.zhang@intel.com, stevex.yang@intel.com, Kevin Liu , Alvin Zhang Subject: [PATCH v3 07/22] net/ice: support dcf MAC configuration Date: Wed, 13 Apr 2022 17:10:15 +0000 Message-Id: <20220413171030.2231163-8-kevinx.liu@intel.com> X-Mailer: git-send-email 2.33.1 In-Reply-To: <20220413171030.2231163-1-kevinx.liu@intel.com> References: <20220413160932.2074781-1-kevinx.liu@intel.com> <20220413171030.2231163-1-kevinx.liu@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list 
List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Below PMD ops are supported in this patch: .mac_addr_add = dcf_dev_add_mac_addr .mac_addr_remove = dcf_dev_del_mac_addr .set_mc_addr_list = dcf_set_mc_addr_list .mac_addr_set = dcf_dev_set_default_mac_addr Signed-off-by: Alvin Zhang Signed-off-by: Kevin Liu --- drivers/net/ice/ice_dcf.c | 9 +- drivers/net/ice/ice_dcf.h | 4 +- drivers/net/ice/ice_dcf_ethdev.c | 218 ++++++++++++++++++++++++++++++- drivers/net/ice/ice_dcf_ethdev.h | 5 +- 4 files changed, 226 insertions(+), 10 deletions(-) diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index 89c0203ba3..55ae68c456 100644 --- a/drivers/net/ice/ice_dcf.c +++ b/drivers/net/ice/ice_dcf.c @@ -1089,10 +1089,11 @@ ice_dcf_query_stats(struct ice_dcf_hw *hw, } int -ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add) +ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, + struct rte_ether_addr *addr, + bool add, uint8_t type) { struct virtchnl_ether_addr_list *list; - struct rte_ether_addr *addr; struct dcf_virtchnl_cmd args; int len, err = 0; @@ -1105,7 +1106,6 @@ ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add) } len = sizeof(struct virtchnl_ether_addr_list); - addr = hw->eth_dev->data->mac_addrs; len += sizeof(struct virtchnl_ether_addr); list = rte_zmalloc(NULL, len, 0); @@ -1116,9 +1116,10 @@ ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add) rte_memcpy(list->list[0].addr, addr->addr_bytes, sizeof(addr->addr_bytes)); + PMD_DRV_LOG(DEBUG, "add/rm mac:" RTE_ETHER_ADDR_PRT_FMT, RTE_ETHER_ADDR_BYTES(addr)); - + list->list[0].type = type; list->vsi_id = hw->vsi_res->vsi_id; list->num_elements = 1; diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h index f0b45af5ae..78df202a77 100644 --- a/drivers/net/ice/ice_dcf.h +++ b/drivers/net/ice/ice_dcf.h @@ -131,7 +131,9 @@ int ice_dcf_switch_queue(struct ice_dcf_hw *hw, 
uint16_t qid, bool rx, bool on); int ice_dcf_disable_queues(struct ice_dcf_hw *hw); int ice_dcf_query_stats(struct ice_dcf_hw *hw, struct virtchnl_eth_stats *pstats); -int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, bool add); +int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, + struct rte_ether_addr *addr, bool add, + uint8_t type); int ice_dcf_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete); void ice_dcf_tm_conf_init(struct rte_eth_dev *dev); diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index 87d281ee93..0d944f9fd2 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -26,6 +26,12 @@ #include "ice_dcf_ethdev.h" #include "ice_rxtx.h" +#define DCF_NUM_MACADDR_MAX 64 + +static int dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw, + struct rte_ether_addr *mc_addrs, + uint32_t mc_addrs_num, bool add); + static int ice_dcf_dev_udp_tunnel_port_add(struct rte_eth_dev *dev, struct rte_eth_udp_tunnel *udp_tunnel); @@ -561,12 +567,22 @@ ice_dcf_dev_start(struct rte_eth_dev *dev) return ret; } - ret = ice_dcf_add_del_all_mac_addr(hw, true); + ret = ice_dcf_add_del_all_mac_addr(hw, hw->eth_dev->data->mac_addrs, + true, VIRTCHNL_ETHER_ADDR_PRIMARY); if (ret) { PMD_DRV_LOG(ERR, "Failed to add mac addr"); return ret; } + if (dcf_ad->mc_addrs_num) { + /* flush previous addresses */ + ret = dcf_add_del_mc_addr_list(hw, dcf_ad->mc_addrs, + dcf_ad->mc_addrs_num, true); + if (ret) + return ret; + } + + dev->data->dev_link.link_status = RTE_ETH_LINK_UP; return 0; @@ -625,7 +641,16 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev) rte_intr_efd_disable(intr_handle); rte_intr_vec_list_free(intr_handle); - ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false); + ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, + dcf_ad->real_hw.eth_dev->data->mac_addrs, + false, VIRTCHNL_ETHER_ADDR_PRIMARY); + + if (dcf_ad->mc_addrs_num) + /* flush previous addresses */ + 
(void)dcf_add_del_mc_addr_list(&dcf_ad->real_hw, + dcf_ad->mc_addrs, + dcf_ad->mc_addrs_num, false); + dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN; ad->pf.adapter_stopped = 1; hw->tm_conf.committed = false; @@ -655,7 +680,7 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev, struct ice_dcf_adapter *adapter = dev->data->dev_private; struct ice_dcf_hw *hw = &adapter->real_hw; - dev_info->max_mac_addrs = 1; + dev_info->max_mac_addrs = DCF_NUM_MACADDR_MAX; dev_info->max_rx_queues = hw->vsi_res->num_queue_pairs; dev_info->max_tx_queues = hw->vsi_res->num_queue_pairs; dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN; @@ -818,6 +843,189 @@ ice_dcf_dev_allmulticast_disable(__rte_unused struct rte_eth_dev *dev) false); } +static int +dcf_dev_add_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *addr, + __rte_unused uint32_t index, + __rte_unused uint32_t pool) +{ + struct ice_dcf_adapter *adapter = dev->data->dev_private; + int err; + + if (rte_is_zero_ether_addr(addr)) { + PMD_DRV_LOG(ERR, "Invalid Ethernet Address"); + return -EINVAL; + } + + err = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, addr, true, + VIRTCHNL_ETHER_ADDR_EXTRA); + if (err) { + PMD_DRV_LOG(ERR, "fail to add MAC address"); + return err; + } + + return 0; +} + +static void +dcf_dev_del_mac_addr(struct rte_eth_dev *dev, uint32_t index) +{ + struct ice_dcf_adapter *adapter = dev->data->dev_private; + struct rte_ether_addr *addr = &dev->data->mac_addrs[index]; + int err; + + err = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, addr, false, + VIRTCHNL_ETHER_ADDR_EXTRA); + if (err) + PMD_DRV_LOG(ERR, "fail to remove MAC address"); +} + +static int +dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw, + struct rte_ether_addr *mc_addrs, + uint32_t mc_addrs_num, bool add) +{ + struct virtchnl_ether_addr_list *list; + struct dcf_virtchnl_cmd args; + uint32_t i; + int len, err = 0; + + len = sizeof(struct virtchnl_ether_addr_list); + len += sizeof(struct virtchnl_ether_addr) * mc_addrs_num; + + list = 
rte_zmalloc(NULL, len, 0); + if (!list) { + PMD_DRV_LOG(ERR, "fail to allocate memory"); + return -ENOMEM; + } + + for (i = 0; i < mc_addrs_num; i++) { + memcpy(list->list[i].addr, mc_addrs[i].addr_bytes, + sizeof(list->list[i].addr)); + list->list[i].type = VIRTCHNL_ETHER_ADDR_EXTRA; + } + + list->vsi_id = hw->vsi_res->vsi_id; + list->num_elements = mc_addrs_num; + + memset(&args, 0, sizeof(args)); + args.v_op = add ? VIRTCHNL_OP_ADD_ETH_ADDR : + VIRTCHNL_OP_DEL_ETH_ADDR; + args.req_msg = (uint8_t *)list; + args.req_msglen = len; + err = ice_dcf_execute_virtchnl_cmd(hw, &args); + if (err) + PMD_DRV_LOG(ERR, "fail to execute command %s", + add ? "OP_ADD_ETHER_ADDRESS" : + "OP_DEL_ETHER_ADDRESS"); + rte_free(list); + return err; +} + +static int +dcf_set_mc_addr_list(struct rte_eth_dev *dev, + struct rte_ether_addr *mc_addrs, + uint32_t mc_addrs_num) +{ + struct ice_dcf_adapter *adapter = dev->data->dev_private; + struct ice_dcf_hw *hw = &adapter->real_hw; + uint32_t i; + int ret; + + + if (mc_addrs_num > DCF_NUM_MACADDR_MAX) { + PMD_DRV_LOG(ERR, + "can't add more than a limited number (%u) of addresses.", + (uint32_t)DCF_NUM_MACADDR_MAX); + return -EINVAL; + } + + for (i = 0; i < mc_addrs_num; i++) { + if (!rte_is_multicast_ether_addr(&mc_addrs[i])) { + const uint8_t *mac = mc_addrs[i].addr_bytes; + + PMD_DRV_LOG(ERR, + "Invalid mac: %02x:%02x:%02x:%02x:%02x:%02x", + mac[0], mac[1], mac[2], mac[3], mac[4], + mac[5]); + return -EINVAL; + } + } + + if (adapter->mc_addrs_num) { + /* flush previous addresses */ + ret = dcf_add_del_mc_addr_list(hw, adapter->mc_addrs, + adapter->mc_addrs_num, false); + if (ret) + return ret; + } + if (!mc_addrs_num) { + adapter->mc_addrs_num = 0; + return 0; + } + + /* add new ones */ + ret = dcf_add_del_mc_addr_list(hw, mc_addrs, mc_addrs_num, true); + if (ret) { + /* if adding mac address list fails, should add the + * previous addresses back. 
+ */ + if (adapter->mc_addrs_num) + (void)dcf_add_del_mc_addr_list(hw, adapter->mc_addrs, + adapter->mc_addrs_num, + true); + return ret; + } + adapter->mc_addrs_num = mc_addrs_num; + memcpy(adapter->mc_addrs, + mc_addrs, mc_addrs_num * sizeof(*mc_addrs)); + + return 0; +} + +static int +dcf_dev_set_default_mac_addr(struct rte_eth_dev *dev, + struct rte_ether_addr *mac_addr) +{ + struct ice_dcf_adapter *adapter = dev->data->dev_private; + struct ice_dcf_hw *hw = &adapter->real_hw; + struct rte_ether_addr *old_addr; + int ret; + + old_addr = hw->eth_dev->data->mac_addrs; + if (rte_is_same_ether_addr(old_addr, mac_addr)) + return 0; + + ret = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, old_addr, false, + VIRTCHNL_ETHER_ADDR_PRIMARY); + if (ret) + PMD_DRV_LOG(ERR, "Fail to delete old MAC:" + " %02X:%02X:%02X:%02X:%02X:%02X", + old_addr->addr_bytes[0], + old_addr->addr_bytes[1], + old_addr->addr_bytes[2], + old_addr->addr_bytes[3], + old_addr->addr_bytes[4], + old_addr->addr_bytes[5]); + + ret = ice_dcf_add_del_all_mac_addr(&adapter->real_hw, mac_addr, true, + VIRTCHNL_ETHER_ADDR_PRIMARY); + if (ret) + PMD_DRV_LOG(ERR, "Fail to add new MAC:" + " %02X:%02X:%02X:%02X:%02X:%02X", + mac_addr->addr_bytes[0], + mac_addr->addr_bytes[1], + mac_addr->addr_bytes[2], + mac_addr->addr_bytes[3], + mac_addr->addr_bytes[4], + mac_addr->addr_bytes[5]); + + if (ret) + return -EIO; + + rte_ether_addr_copy(mac_addr, hw->eth_dev->data->mac_addrs); + return 0; +} + static int ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev, const struct rte_flow_ops **ops) @@ -1326,6 +1534,10 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = { .promiscuous_disable = ice_dcf_dev_promiscuous_disable, .allmulticast_enable = ice_dcf_dev_allmulticast_enable, .allmulticast_disable = ice_dcf_dev_allmulticast_disable, + .mac_addr_add = dcf_dev_add_mac_addr, + .mac_addr_remove = dcf_dev_del_mac_addr, + .set_mc_addr_list = dcf_set_mc_addr_list, + .mac_addr_set = dcf_dev_set_default_mac_addr, 
.flow_ops_get = ice_dcf_dev_flow_ops_get, .udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add, .udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del, diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h index 22e450527b..27f6402786 100644 --- a/drivers/net/ice/ice_dcf_ethdev.h +++ b/drivers/net/ice/ice_dcf_ethdev.h @@ -14,7 +14,7 @@ #include "ice_dcf.h" #define ICE_DCF_MAX_RINGS 1 - +#define DCF_NUM_MACADDR_MAX 64 #define ICE_DCF_FRAME_SIZE_MAX 9728 #define ICE_DCF_VLAN_TAG_SIZE 4 #define ICE_DCF_ETH_OVERHEAD \ @@ -35,7 +35,8 @@ struct ice_dcf_adapter { bool promisc_unicast_enabled; bool promisc_multicast_enabled; - + uint32_t mc_addrs_num; + struct rte_ether_addr mc_addrs[DCF_NUM_MACADDR_MAX]; int num_reprs; struct ice_dcf_repr_info *repr_infos; }; From patchwork Wed Apr 13 17:10:16 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kevin Liu X-Patchwork-Id: 109676 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7B619A0508; Wed, 13 Apr 2022 11:12:41 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id CA6CD42822; Wed, 13 Apr 2022 11:12:19 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by mails.dpdk.org (Postfix) with ESMTP id 41AAB4068B for ; Wed, 13 Apr 2022 11:12:15 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649841136; x=1681377136; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=llY6IOz8dlyDhNchvrE++z5xtL8Zh+wU7YCAs0cdukc=; b=CWGZhNsD3ho84UQG0hIZlN0aR9HAS/P95hEcewM094uAr4isqlpFNGOf K0q7R8zfhRjUeeeRde8D/9EDrpEW0W4yTYEbWeC+sV/k5LaTfOdo233nv 
hMoHrAx8QZGeDbvJdWUkNd6q0VmmZn6WnG5vqKbqJ7KMh4oXJ3PlTT9O1 1Y/jhWS9T7o+gdYZ/HyPd7ftoB4yHVC1uUwxGrabg7hA9C00W4XY2PMlc KAkdQxNNdkpMbB1TwfWfd4F/r9C0p/82gizNcBKjCidO1TD5dVeyNVofp O+gOUXkAlRJA1/S3MEpKWNdzTSMyTau5tTQeGKPcemH2jSamPZI5ZQygr A==; X-IronPort-AV: E=McAfee;i="6400,9594,10315"; a="262058373" X-IronPort-AV: E=Sophos;i="5.90,256,1643702400"; d="scan'208";a="262058373" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Apr 2022 02:12:03 -0700 X-IronPort-AV: E=Sophos;i="5.90,256,1643702400"; d="scan'208";a="552122804" Received: from intel-cd-odc-kevin.cd.intel.com ([10.240.178.195]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Apr 2022 02:12:00 -0700 From: Kevin Liu To: dev@dpdk.org Cc: qiming.yang@intel.com, qi.z.zhang@intel.com, stevex.yang@intel.com, Alvin Zhang , Kevin Liu Subject: [PATCH v3 08/22] net/ice: support dcf VLAN filter and offload configuration Date: Wed, 13 Apr 2022 17:10:16 +0000 Message-Id: <20220413171030.2231163-9-kevinx.liu@intel.com> X-Mailer: git-send-email 2.33.1 In-Reply-To: <20220413171030.2231163-1-kevinx.liu@intel.com> References: <20220413160932.2074781-1-kevinx.liu@intel.com> <20220413171030.2231163-1-kevinx.liu@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Alvin Zhang Below PMD ops are supported in this patch: .vlan_filter_set = dcf_dev_vlan_filter_set .vlan_offload_set = dcf_dev_vlan_offload_set Signed-off-by: Alvin Zhang Signed-off-by: Kevin Liu --- drivers/net/ice/ice_dcf_ethdev.c | 101 +++++++++++++++++++++++++++++++ 1 file changed, 101 insertions(+) diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index 0d944f9fd2..e58cdf47d2 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ 
b/drivers/net/ice/ice_dcf_ethdev.c @@ -1026,6 +1026,105 @@ dcf_dev_set_default_mac_addr(struct rte_eth_dev *dev, return 0; } +static int +dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t vlanid, bool add) +{ + struct virtchnl_vlan_filter_list *vlan_list; + uint8_t cmd_buffer[sizeof(struct virtchnl_vlan_filter_list) + + sizeof(uint16_t)]; + struct dcf_virtchnl_cmd args; + int err; + + vlan_list = (struct virtchnl_vlan_filter_list *)cmd_buffer; + vlan_list->vsi_id = hw->vsi_res->vsi_id; + vlan_list->num_elements = 1; + vlan_list->vlan_id[0] = vlanid; + + memset(&args, 0, sizeof(args)); + args.v_op = add ? VIRTCHNL_OP_ADD_VLAN : VIRTCHNL_OP_DEL_VLAN; + args.req_msg = cmd_buffer; + args.req_msglen = sizeof(cmd_buffer); + err = ice_dcf_execute_virtchnl_cmd(hw, &args); + if (err) + PMD_DRV_LOG(ERR, "fail to execute command %s", + add ? "OP_ADD_VLAN" : "OP_DEL_VLAN"); + + return err; +} + +static int +dcf_enable_vlan_strip(struct ice_dcf_hw *hw) +{ + struct dcf_virtchnl_cmd args; + int ret; + + memset(&args, 0, sizeof(args)); + args.v_op = VIRTCHNL_OP_ENABLE_VLAN_STRIPPING; + ret = ice_dcf_execute_virtchnl_cmd(hw, &args); + if (ret) + PMD_DRV_LOG(ERR, + "Failed to execute command of OP_ENABLE_VLAN_STRIPPING"); + + return ret; +} + +static int +dcf_disable_vlan_strip(struct ice_dcf_hw *hw) +{ + struct dcf_virtchnl_cmd args; + int ret; + + memset(&args, 0, sizeof(args)); + args.v_op = VIRTCHNL_OP_DISABLE_VLAN_STRIPPING; + ret = ice_dcf_execute_virtchnl_cmd(hw, &args); + if (ret) + PMD_DRV_LOG(ERR, + "Failed to execute command of OP_DISABLE_VLAN_STRIPPING"); + + return ret; +} + +static int +dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) +{ + struct ice_dcf_adapter *adapter = dev->data->dev_private; + struct ice_dcf_hw *hw = &adapter->real_hw; + int err; + + if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN)) + return -ENOTSUP; + + err = dcf_add_del_vlan(hw, vlan_id, on); + if (err) + return -EIO; + return 0; +} + +static int 
+dcf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask) +{ + struct ice_dcf_adapter *adapter = dev->data->dev_private; + struct ice_dcf_hw *hw = &adapter->real_hw; + struct rte_eth_conf *dev_conf = &dev->data->dev_conf; + int err; + + if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN)) + return -ENOTSUP; + + /* Vlan stripping setting */ + if (mask & RTE_ETH_VLAN_STRIP_MASK) { + /* Enable or disable VLAN stripping */ + if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) + err = dcf_enable_vlan_strip(hw); + else + err = dcf_disable_vlan_strip(hw); + + if (err) + return -EIO; + } + return 0; +} + static int ice_dcf_dev_flow_ops_get(struct rte_eth_dev *dev, const struct rte_flow_ops **ops) @@ -1538,6 +1637,8 @@ static const struct eth_dev_ops ice_dcf_eth_dev_ops = { .mac_addr_remove = dcf_dev_del_mac_addr, .set_mc_addr_list = dcf_set_mc_addr_list, .mac_addr_set = dcf_dev_set_default_mac_addr, + .vlan_filter_set = dcf_dev_vlan_filter_set, + .vlan_offload_set = dcf_dev_vlan_offload_set, .flow_ops_get = ice_dcf_dev_flow_ops_get, .udp_tunnel_port_add = ice_dcf_dev_udp_tunnel_port_add, .udp_tunnel_port_del = ice_dcf_dev_udp_tunnel_port_del, From patchwork Wed Apr 13 17:10:17 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Kevin Liu X-Patchwork-Id: 109678 X-Patchwork-Delegate: qi.z.zhang@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A95B5A0508; Wed, 13 Apr 2022 11:12:53 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A33B842827; Wed, 13 Apr 2022 11:12:21 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by mails.dpdk.org (Postfix) with ESMTP id 993FD42803 for ; Wed, 13 Apr 2022 11:12:16 +0200 (CEST) DKIM-Signature: v=1; 
a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649841136; x=1681377136; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=czQYGmqdwsiJjeNH0C2rD08JOg7Y4c6jT9LjqLzrkTE=; b=DmUt03zUIEgolb8yGaPmFyKinl36J/Bk7oDK2PsRMMYrlznJbTigdHY7 YDvNzGsdqFN0DMzzr+DQuXhcHInubNOK9oo65fYShIx5sq9Ymhu4Xm6+S m0Xi6u0nOrKFRQrcwTqzjPHJMUpJih/ZgDDHG6jnYXb73lEGnmSo+WOjH fPrhVtTiGMlCi1ogBdNJr7R023dtPYvu+Z8wc4mVkOwNd+AlAZdxZVHZo rnh1psq4Z6IB16lYTuD7h28joHaEyd41hd3HUQv1fh79TBMzEniPRlTrb gNXhYN7Dat78PUT4WoEFj1PgExM+3ng8Zm8NlEQ3SfpJ0+02KmqS5+aR8 A==; X-IronPort-AV: E=McAfee;i="6400,9594,10315"; a="262058377" X-IronPort-AV: E=Sophos;i="5.90,256,1643702400"; d="scan'208";a="262058377" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Apr 2022 02:12:05 -0700 X-IronPort-AV: E=Sophos;i="5.90,256,1643702400"; d="scan'208";a="552122814" Received: from intel-cd-odc-kevin.cd.intel.com ([10.240.178.195]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Apr 2022 02:12:03 -0700 From: Kevin Liu To: dev@dpdk.org Cc: qiming.yang@intel.com, qi.z.zhang@intel.com, stevex.yang@intel.com, Alvin Zhang , Kevin Liu Subject: [PATCH v3 09/22] net/ice: support DCF new VLAN capabilities Date: Wed, 13 Apr 2022 17:10:17 +0000 Message-Id: <20220413171030.2231163-10-kevinx.liu@intel.com> X-Mailer: git-send-email 2.33.1 In-Reply-To: <20220413171030.2231163-1-kevinx.liu@intel.com> References: <20220413160932.2074781-1-kevinx.liu@intel.com> <20220413171030.2231163-1-kevinx.liu@intel.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Alvin Zhang The new VLAN virtchnl opcodes introduce new capabilities like VLAN filtering, stripping and insertion. 
The DCF needs to query the VLAN capabilities based on current device configuration firstly. DCF is able to configure inner VLAN filter when port VLAN is enabled base on negotiation; and DCF is able to configure outer VLAN (0x8100) if port VLAN is disabled to be compatible with legacy mode. When port VLAN is updated by DCF, the DCF needs to reset to query the new VLAN capabilities. Signed-off-by: Alvin Zhang Signed-off-by: Kevin Liu --- drivers/net/ice/ice_dcf.c | 27 +++++ drivers/net/ice/ice_dcf.h | 1 + drivers/net/ice/ice_dcf_ethdev.c | 171 ++++++++++++++++++++++++++++--- 3 files changed, 182 insertions(+), 17 deletions(-) diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index 55ae68c456..885d58c0f4 100644 --- a/drivers/net/ice/ice_dcf.c +++ b/drivers/net/ice/ice_dcf.c @@ -587,6 +587,29 @@ ice_dcf_get_supported_rxdid(struct ice_dcf_hw *hw) return 0; } +static int +dcf_get_vlan_offload_caps_v2(struct ice_dcf_hw *hw) +{ + struct virtchnl_vlan_caps vlan_v2_caps; + struct dcf_virtchnl_cmd args; + int ret; + + memset(&args, 0, sizeof(args)); + args.v_op = VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS; + args.rsp_msgbuf = (uint8_t *)&vlan_v2_caps; + args.rsp_buflen = sizeof(vlan_v2_caps); + + ret = ice_dcf_execute_virtchnl_cmd(hw, &args); + if (ret) { + PMD_DRV_LOG(ERR, + "Failed to execute command of VIRTCHNL_OP_GET_OFFLOAD_VLAN_V2_CAPS"); + return ret; + } + + rte_memcpy(&hw->vlan_v2_caps, &vlan_v2_caps, sizeof(vlan_v2_caps)); + return 0; +} + int ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw) { @@ -701,6 +724,10 @@ ice_dcf_init_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw) rte_intr_enable(pci_dev->intr_handle); ice_dcf_enable_irq0(hw); + if ((hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) && + dcf_get_vlan_offload_caps_v2(hw)) + goto err_rss; + return 0; err_rss: diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h index 78df202a77..32e6031bd9 100644 --- a/drivers/net/ice/ice_dcf.h +++ 
b/drivers/net/ice/ice_dcf.h @@ -107,6 +107,7 @@ struct ice_dcf_hw { uint16_t nb_msix; uint16_t rxq_map[16]; struct virtchnl_eth_stats eth_stats_offset; + struct virtchnl_vlan_caps vlan_v2_caps; /* Link status */ bool link_up; diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index e58cdf47d2..d4bfa182a4 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -1026,6 +1026,46 @@ dcf_dev_set_default_mac_addr(struct rte_eth_dev *dev, return 0; } +static int +dcf_add_del_vlan_v2(struct ice_dcf_hw *hw, uint16_t vlanid, bool add) +{ + struct virtchnl_vlan_supported_caps *supported_caps = + &hw->vlan_v2_caps.filtering.filtering_support; + struct virtchnl_vlan *vlan_setting; + struct virtchnl_vlan_filter_list_v2 vlan_filter; + struct dcf_virtchnl_cmd args; + uint32_t filtering_caps; + int err; + + if (supported_caps->outer) { + filtering_caps = supported_caps->outer; + vlan_setting = &vlan_filter.filters[0].outer; + } else { + filtering_caps = supported_caps->inner; + vlan_setting = &vlan_filter.filters[0].inner; + } + + if (!(filtering_caps & VIRTCHNL_VLAN_ETHERTYPE_8100)) + return -ENOTSUP; + + memset(&vlan_filter, 0, sizeof(vlan_filter)); + vlan_filter.vport_id = hw->vsi_res->vsi_id; + vlan_filter.num_elements = 1; + vlan_setting->tpid = RTE_ETHER_TYPE_VLAN; + vlan_setting->tci = vlanid; + + memset(&args, 0, sizeof(args)); + args.v_op = add ? VIRTCHNL_OP_ADD_VLAN_V2 : VIRTCHNL_OP_DEL_VLAN_V2; + args.req_msg = (uint8_t *)&vlan_filter; + args.req_msglen = sizeof(vlan_filter); + err = ice_dcf_execute_virtchnl_cmd(hw, &args); + if (err) + PMD_DRV_LOG(ERR, "fail to execute command %s", + add ? 
"OP_ADD_VLAN_V2" : "OP_DEL_VLAN_V2"); + + return err; +} + static int dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t vlanid, bool add) { @@ -1052,6 +1092,116 @@ dcf_add_del_vlan(struct ice_dcf_hw *hw, uint16_t vlanid, bool add) return err; } +static int +dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) +{ + struct ice_dcf_adapter *adapter = dev->data->dev_private; + struct ice_dcf_hw *hw = &adapter->real_hw; + int err; + + if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) { + err = dcf_add_del_vlan_v2(hw, vlan_id, on); + if (err) + return -EIO; + return 0; + } + + if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN)) + return -ENOTSUP; + + err = dcf_add_del_vlan(hw, vlan_id, on); + if (err) + return -EIO; + return 0; +} + +static void +dcf_iterate_vlan_filters_v2(struct rte_eth_dev *dev, bool enable) +{ + struct rte_vlan_filter_conf *vfc = &dev->data->vlan_filter_conf; + struct ice_dcf_adapter *adapter = dev->data->dev_private; + struct ice_dcf_hw *hw = &adapter->real_hw; + uint32_t i, j; + uint64_t ids; + + for (i = 0; i < RTE_DIM(vfc->ids); i++) { + if (vfc->ids[i] == 0) + continue; + + ids = vfc->ids[i]; + for (j = 0; ids != 0 && j < 64; j++, ids >>= 1) { + if (ids & 1) + dcf_add_del_vlan_v2(hw, 64 * i + j, enable); + } + } +} + +static int +dcf_config_vlan_strip_v2(struct ice_dcf_hw *hw, bool enable) +{ + struct virtchnl_vlan_supported_caps *stripping_caps = + &hw->vlan_v2_caps.offloads.stripping_support; + struct virtchnl_vlan_setting vlan_strip; + struct dcf_virtchnl_cmd args; + uint32_t *ethertype; + int ret; + + if ((stripping_caps->outer & VIRTCHNL_VLAN_ETHERTYPE_8100) && + (stripping_caps->outer & VIRTCHNL_VLAN_TOGGLE)) + ethertype = &vlan_strip.outer_ethertype_setting; + else if ((stripping_caps->inner & VIRTCHNL_VLAN_ETHERTYPE_8100) && + (stripping_caps->inner & VIRTCHNL_VLAN_TOGGLE)) + ethertype = &vlan_strip.inner_ethertype_setting; + else + return -ENOTSUP; + + memset(&vlan_strip, 0, sizeof(vlan_strip)); + 
vlan_strip.vport_id = hw->vsi_res->vsi_id; + *ethertype = VIRTCHNL_VLAN_ETHERTYPE_8100; + + memset(&args, 0, sizeof(args)); + args.v_op = enable ? VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2 : + VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2; + args.req_msg = (uint8_t *)&vlan_strip; + args.req_msglen = sizeof(vlan_strip); + ret = ice_dcf_execute_virtchnl_cmd(hw, &args); + if (ret) + PMD_DRV_LOG(ERR, "fail to execute command %s", + enable ? "VIRTCHNL_OP_ENABLE_VLAN_STRIPPING_V2" : + "VIRTCHNL_OP_DISABLE_VLAN_STRIPPING_V2"); + + return ret; +} + +static int +dcf_dev_vlan_offload_set_v2(struct rte_eth_dev *dev, int mask) +{ + struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode; + struct ice_dcf_adapter *adapter = dev->data->dev_private; + struct ice_dcf_hw *hw = &adapter->real_hw; + bool enable; + int err; + + if (mask & RTE_ETH_VLAN_FILTER_MASK) { + enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER); + + dcf_iterate_vlan_filters_v2(dev, enable); + } + + if (mask & RTE_ETH_VLAN_STRIP_MASK) { + enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP); + + err = dcf_config_vlan_strip_v2(hw, enable); + /* If not support, the stripping is already disabled by PF */ + if (err == -ENOTSUP && !enable) + err = 0; + if (err) + return -EIO; + } + + return 0; +} + static int dcf_enable_vlan_strip(struct ice_dcf_hw *hw) { @@ -1084,30 +1234,17 @@ dcf_disable_vlan_strip(struct ice_dcf_hw *hw) return ret; } -static int -dcf_dev_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) -{ - struct ice_dcf_adapter *adapter = dev->data->dev_private; - struct ice_dcf_hw *hw = &adapter->real_hw; - int err; - - if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN)) - return -ENOTSUP; - - err = dcf_add_del_vlan(hw, vlan_id, on); - if (err) - return -EIO; - return 0; -} - static int dcf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask) { + struct rte_eth_conf *dev_conf = &dev->data->dev_conf; struct ice_dcf_adapter *adapter = dev->data->dev_private; struct 
ice_dcf_hw *hw = &adapter->real_hw; - struct rte_eth_conf *dev_conf = &dev->data->dev_conf; int err; + if (hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN_V2) + return dcf_dev_vlan_offload_set_v2(dev, mask); + if (!(hw->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_VLAN)) return -ENOTSUP;

From patchwork Wed Apr 13 17:10:18 2022
X-Patchwork-Submitter: Kevin Liu
X-Patchwork-Id: 109677
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Kevin Liu
To: dev@dpdk.org
Cc: qiming.yang@intel.com, qi.z.zhang@intel.com, stevex.yang@intel.com, Dapeng Yu, Kevin Liu
Subject: [PATCH v3 10/22] net/ice: enable CVL DCF device reset API
Date: Wed, 13 Apr 2022 17:10:18 +0000
Message-Id: <20220413171030.2231163-11-kevinx.liu@intel.com>

From: Dapeng Yu

Enable CVL DCF device reset API.
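The reset helper added in this patch tears down the DCF interrupt path, disables DCF mode, re-queries the VF resources, and re-arms the interrupt path even when the reset fails partway (via the `goto err` fall-through). A minimal standalone sketch of that ordering — all DPDK/ice calls are mocked as hypothetical stubs, not the real APIs:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical stand-ins for the calls made by ice_dcf_cap_reset(). */
static char log_buf[16];          /* records the order of steps taken */
static int  mode_disable_ret;     /* injected result of DCF mode disable */

static void trace(const char *step) { strcat(log_buf, step); }

static void irq_disable(void)     { trace("D"); } /* irq0 off + intr disable */
static int  mode_disable(void)    { trace("M"); return mode_disable_ret; }
static int  get_vf_resource(void) { trace("G"); return 0; }
static void irq_enable(void)      { trace("E"); } /* intr enable + irq0 on */

/* Same control flow as the patch: a mode-disable failure skips the
 * resource re-query but still falls through to re-enable interrupts. */
static int cap_reset(void)
{
	int ret;

	irq_disable();
	ret = mode_disable();
	if (ret)
		goto err;
	ret = get_vf_resource();
err:
	irq_enable();
	return ret;
}
```

The point of the sketch is the error path: interrupts are always re-armed, whether or not the DCF mode disable succeeded.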
Signed-off-by: Dapeng Yu Signed-off-by: Kevin Liu --- drivers/net/ice/ice_dcf.c | 24 ++++++++++++++++++++++++ drivers/net/ice/ice_dcf.h | 1 + 2 files changed, 25 insertions(+) diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index 885d58c0f4..9c2f13cf72 100644 --- a/drivers/net/ice/ice_dcf.c +++ b/drivers/net/ice/ice_dcf.c @@ -1163,3 +1163,27 @@ ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, rte_free(list); return err; } + +int +ice_dcf_cap_reset(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw) +{ + int ret; + + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + + ice_dcf_disable_irq0(hw); + rte_intr_disable(intr_handle); + rte_intr_callback_unregister(intr_handle, ice_dcf_dev_interrupt_handler, + hw); + ret = ice_dcf_mode_disable(hw); + if (ret) + goto err; + ret = ice_dcf_get_vf_resource(hw); +err: + rte_intr_callback_register(intr_handle, ice_dcf_dev_interrupt_handler, + hw); + rte_intr_enable(intr_handle); + ice_dcf_enable_irq0(hw); + return ret; +} diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h index 32e6031bd9..8cf17e7700 100644 --- a/drivers/net/ice/ice_dcf.h +++ b/drivers/net/ice/ice_dcf.h @@ -137,6 +137,7 @@ int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, uint8_t type); int ice_dcf_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete); +int ice_dcf_cap_reset(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw); void ice_dcf_tm_conf_init(struct rte_eth_dev *dev); void ice_dcf_tm_conf_uninit(struct rte_eth_dev *dev); int ice_dcf_replay_vf_bw(struct ice_dcf_hw *hw, uint16_t vf_id);

From patchwork Wed Apr 13 17:10:19 2022
X-Patchwork-Submitter: Kevin Liu
X-Patchwork-Id: 109680
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Kevin Liu
To: dev@dpdk.org
Cc: qiming.yang@intel.com, qi.z.zhang@intel.com, stevex.yang@intel.com, Alvin Zhang, Junfeng Guo, Kevin Liu
Subject: [PATCH v3 11/22] net/ice: support IPv6 NVGRE tunnel
Date: Wed, 13 Apr 2022 17:10:19 +0000
Message-Id: <20220413171030.2231163-12-kevinx.liu@intel.com>
From: Alvin Zhang

Add protocol definition and pattern matching for IPv6 NVGRE tunnel.

Signed-off-by: Junfeng Guo Signed-off-by: Alvin Zhang Signed-off-by: Kevin Liu --- drivers/net/ice/ice_switch_filter.c | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c index 36c9bffb73..c04547235c 100644 --- a/drivers/net/ice/ice_switch_filter.c +++ b/drivers/net/ice/ice_switch_filter.c @@ -31,6 +31,7 @@ #define ICE_PPP_IPV4_PROTO 0x0021 #define ICE_PPP_IPV6_PROTO 0x0057 #define ICE_IPV4_PROTO_NVGRE 0x002F +#define ICE_IPV6_PROTO_NVGRE 0x002F #define ICE_SW_PRI_BASE 6 #define ICE_SW_INSET_ETHER ( \ @@ -763,6 +764,10 @@ ice_switch_parse_pattern(const struct rte_flow_item pattern[], break; } } + if ((ipv6_spec->hdr.proto & + ipv6_mask->hdr.proto) == + ICE_IPV6_PROTO_NVGRE) + *tun_type = ICE_SW_TUN_AND_NON_TUN; if (ipv6_mask->hdr.proto) *input |= ICE_INSET_IPV6_NEXT_HDR; if (ipv6_mask->hdr.hop_limits)

From patchwork Wed Apr 13 17:10:20 2022
X-Patchwork-Submitter: Kevin Liu
X-Patchwork-Id: 109679
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Kevin Liu
To: dev@dpdk.org
Cc: qiming.yang@intel.com, qi.z.zhang@intel.com, stevex.yang@intel.com, Alvin Zhang, Junfeng Guo, Kevin Liu
Subject: [PATCH v3 12/22] net/ice: support new pattern of IPv4
Date: Wed, 13 Apr 2022 17:10:20 +0000
Message-Id: <20220413171030.2231163-13-kevinx.liu@intel.com>
From: Alvin Zhang

Add the inset definition and pattern entry for the IPv4 pattern MAC/VLAN/IPv4.

Signed-off-by: Junfeng Guo Signed-off-by: Alvin Zhang Signed-off-by: Kevin Liu --- drivers/net/ice/ice_switch_filter.c | 3 +++ 1 file changed, 3 insertions(+) diff --git a/drivers/net/ice/ice_switch_filter.c b/drivers/net/ice/ice_switch_filter.c index c04547235c..4db7021e3f 100644 --- a/drivers/net/ice/ice_switch_filter.c +++ b/drivers/net/ice/ice_switch_filter.c @@ -38,6 +38,8 @@ ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_ETHERTYPE) #define ICE_SW_INSET_MAC_VLAN ( \ ICE_SW_INSET_ETHER | ICE_INSET_VLAN_INNER) +#define ICE_SW_INSET_MAC_VLAN_IPV4 ( \ + ICE_SW_INSET_MAC_VLAN | ICE_SW_INSET_MAC_IPV4) #define ICE_SW_INSET_MAC_QINQ ( \ ICE_INSET_DMAC | ICE_INSET_SMAC | ICE_INSET_VLAN_INNER | \ ICE_INSET_VLAN_OUTER) @@ -215,6 +217,7 @@ ice_pattern_match_item ice_switch_pattern_dist_list[] = { {pattern_eth_ipv4, ICE_SW_INSET_MAC_IPV4, ICE_INSET_NONE, ICE_INSET_NONE}, {pattern_eth_ipv4_udp, ICE_SW_INSET_MAC_IPV4_UDP, ICE_INSET_NONE, ICE_INSET_NONE}, {pattern_eth_ipv4_tcp, ICE_SW_INSET_MAC_IPV4_TCP, ICE_INSET_NONE, ICE_INSET_NONE}, + {pattern_eth_vlan_ipv4, ICE_SW_INSET_MAC_VLAN_IPV4, ICE_INSET_NONE, ICE_INSET_NONE}, {pattern_eth_ipv6, ICE_SW_INSET_MAC_IPV6, ICE_INSET_NONE, ICE_INSET_NONE}, {pattern_eth_ipv6_udp, ICE_SW_INSET_MAC_IPV6_UDP, ICE_INSET_NONE, ICE_INSET_NONE}, {pattern_eth_ipv6_tcp, ICE_SW_INSET_MAC_IPV6_TCP, ICE_INSET_NONE, ICE_INSET_NONE},

From patchwork Wed Apr 13 17:10:21 2022
X-Patchwork-Submitter: Kevin Liu
X-Patchwork-Id: 109681
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Kevin Liu
To: dev@dpdk.org
Cc: qiming.yang@intel.com, qi.z.zhang@intel.com, stevex.yang@intel.com, Alvin Zhang, Kevin Liu
Subject: [PATCH v3 13/22] net/ice: treat unknown package as OS default package
Date: Wed, 13 Apr 2022 17:10:21 +0000
Message-Id: <20220413171030.2231163-14-kevinx.liu@intel.com>
From: Alvin Zhang

In order to use a custom package, an unknown package should be treated as the OS default package.

Signed-off-by: Alvin Zhang Signed-off-by: Kevin Liu --- drivers/net/ice/ice_ethdev.c | 13 ++++++++----- 1 file changed, 8 insertions(+), 5 deletions(-) diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index 73e550f5fb..ad9b09d081 100644 --- a/drivers/net/ice/ice_ethdev.c +++ b/drivers/net/ice/ice_ethdev.c @@ -1710,13 +1710,16 @@ ice_load_pkg_type(struct ice_hw *hw) /* store the activated package type (OS default or Comms) */ if (!strncmp((char *)hw->active_pkg_name, ICE_OS_DEFAULT_PKG_NAME, - ICE_PKG_NAME_SIZE)) + ICE_PKG_NAME_SIZE)) { package_type = ICE_PKG_TYPE_OS_DEFAULT; - else if (!strncmp((char *)hw->active_pkg_name, ICE_COMMS_PKG_NAME, - ICE_PKG_NAME_SIZE)) + } else if (!strncmp((char *)hw->active_pkg_name, ICE_COMMS_PKG_NAME, + ICE_PKG_NAME_SIZE)) { package_type = ICE_PKG_TYPE_COMMS; - else - package_type = ICE_PKG_TYPE_UNKNOWN; + } else { + PMD_INIT_LOG(WARNING, + "The package type is not identified, treated as OS default type"); + package_type = ICE_PKG_TYPE_OS_DEFAULT; + } PMD_INIT_LOG(NOTICE, "Active package is: %d.%d.%d.%d, %s (%s VLAN mode)", hw->active_pkg_ver.major, hw->active_pkg_ver.minor,

From patchwork Wed Apr 13 17:10:22 2022
X-Patchwork-Submitter: Kevin Liu
X-Patchwork-Id: 109682
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Kevin Liu
To: dev@dpdk.org
Cc: qiming.yang@intel.com, qi.z.zhang@intel.com, stevex.yang@intel.com, Kevin Liu
Subject: [PATCH v3 14/22] net/ice: handle virtchnl event message without interrupt
Date: Wed, 13 Apr 2022 17:10:22 +0000
Message-Id: <20220413171030.2231163-15-kevinx.liu@intel.com>
From: Steve Yang

Currently, the VF handles virtchnl event messages only in the interrupt handler. That is not sufficient in two cases: 1. If an event message arrives during VF initialization, before the interrupt is enabled, the message will not be handled correctly. 2. Some virtchnl commands need to receive and handle the event message while the interrupt is disabled. To solve this issue, add virtchnl event message handling to the path that reads virtchnl messages from the PF adminq.

Signed-off-by: Steve Yang Signed-off-by: Kevin Liu --- drivers/net/ice/ice_dcf.c | 25 +++++++++++++++++++++++-- 1 file changed, 23 insertions(+), 2 deletions(-) diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index 9c2f13cf72..1415f26ac3 100644 --- a/drivers/net/ice/ice_dcf.c +++ b/drivers/net/ice/ice_dcf.c @@ -63,11 +63,32 @@ ice_dcf_recv_cmd_rsp_no_irq(struct ice_dcf_hw *hw, enum virtchnl_ops op, goto again; v_op = rte_le_to_cpu_32(event.desc.cookie_high); - if (v_op != op) - goto again; + + if (v_op == VIRTCHNL_OP_EVENT) { + struct virtchnl_pf_event *vpe = + (struct virtchnl_pf_event *)event.msg_buf; + switch (vpe->event) { + case VIRTCHNL_EVENT_RESET_IMPENDING: + hw->resetting = true; + if (rsp_msglen) + *rsp_msglen = 0; + return IAVF_SUCCESS; + default: + goto again; + } + } else { + /* async reply msg on command issued by vf previously */ + if (v_op != op) { + PMD_DRV_LOG(WARNING, + "command mismatch, expect %u, get %u", + op, v_op); + goto again; + } + } if (rsp_msglen != NULL) *rsp_msglen = event.msg_len; + return rte_le_to_cpu_32(event.desc.cookie_low); again:

From patchwork Wed Apr 13 17:10:23 2022
X-Patchwork-Submitter: Kevin Liu
X-Patchwork-Id: 109683
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Kevin Liu
To: dev@dpdk.org
Cc: qiming.yang@intel.com, qi.z.zhang@intel.com, stevex.yang@intel.com, Kevin Liu
Subject: [PATCH v3 15/22] net/ice: add DCF request queues function
Date: Wed, 13 Apr 2022 17:10:23 +0000
Message-Id: <20220413171030.2231163-16-kevinx.liu@intel.com>
From: Steve Yang

Add a new virtchnl function to request additional queues from the PF. The current default number of queue pairs is 16; to support a DCF port with up to 256 queue pairs, enable this request-queues function.

Signed-off-by: Steve Yang Signed-off-by: Kevin Liu --- drivers/net/ice/ice_dcf.c | 98 +++++++++++++++++++++++++++++++++------ drivers/net/ice/ice_dcf.h | 1 + 2 files changed, 86 insertions(+), 13 deletions(-) diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index 1415f26ac3..6aeafa6681 100644 --- a/drivers/net/ice/ice_dcf.c +++ b/drivers/net/ice/ice_dcf.c @@ -257,7 +257,7 @@ ice_dcf_get_vf_resource(struct ice_dcf_hw *hw) VIRTCHNL_VF_CAP_ADV_LINK_SPEED | VIRTCHNL_VF_CAP_DCF | VIRTCHNL_VF_OFFLOAD_VLAN_V2 | VF_BASE_MODE_OFFLOADS | VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC | - VIRTCHNL_VF_OFFLOAD_QOS; + VIRTCHNL_VF_OFFLOAD_QOS | VIRTCHNL_VF_OFFLOAD_REQ_QUEUES; err = ice_dcf_send_cmd_req_no_irq(hw, VIRTCHNL_OP_GET_VF_RESOURCES, (uint8_t *)&caps, sizeof(caps)); @@ -468,18 +468,38 @@ ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw, goto ret; } - do { - if (!cmd->pending) - break; - - rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME); - } while (i++ < ICE_DCF_ARQ_MAX_RETRIES); - - if (cmd->v_ret != IAVF_SUCCESS) { - err = -1; - PMD_DRV_LOG(ERR, - "No response (%d times) or return failure (%d) for cmd %d", - i, cmd->v_ret, cmd->v_op); + switch (cmd->v_op) { + case VIRTCHNL_OP_REQUEST_QUEUES: + err = ice_dcf_recv_cmd_rsp_no_irq(hw, + VIRTCHNL_OP_REQUEST_QUEUES, + cmd->rsp_msgbuf, + 
cmd->rsp_buflen, + NULL); + if (err != IAVF_SUCCESS || !hw->resetting) { + err = -1; + PMD_DRV_LOG(ERR, + "Failed to get response of " + "VIRTCHNL_OP_REQUEST_QUEUES %d", + err); + } + break; + default: + /* For other virtchnl ops in running time, + * wait for the cmd done flag. + */ + do { + if (!cmd->pending) + break; + rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME); + } while (i++ < ICE_DCF_ARQ_MAX_RETRIES); + + if (cmd->v_ret != IAVF_SUCCESS) { + err = -1; + PMD_DRV_LOG(ERR, + "No response (%d times) or " + "return failure (%d) for cmd %d", + i, cmd->v_ret, cmd->v_op); + } } ret: @@ -1011,6 +1031,58 @@ ice_dcf_configure_queues(struct ice_dcf_hw *hw) return err; } +int +ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num) +{ + struct virtchnl_vf_res_request vfres; + struct dcf_virtchnl_cmd args; + uint16_t num_queue_pairs; + int err; + + if (!(hw->vf_res->vf_cap_flags & + VIRTCHNL_VF_OFFLOAD_REQ_QUEUES)) { + PMD_DRV_LOG(ERR, "request queues not supported"); + return -1; + } + + if (num == 0) { + PMD_DRV_LOG(ERR, "queue number cannot be zero"); + return -1; + } + vfres.num_queue_pairs = num; + + memset(&args, 0, sizeof(args)); + args.v_op = VIRTCHNL_OP_REQUEST_QUEUES; + + args.req_msg = (u8 *)&vfres; + args.req_msglen = sizeof(vfres); + + args.rsp_msgbuf = hw->arq_buf; + args.rsp_msglen = ICE_DCF_AQ_BUF_SZ; + args.rsp_buflen = ICE_DCF_AQ_BUF_SZ; + + /* + * disable interrupt to avoid the admin queue message to be read + * before iavf_read_msg_from_pf. 
+ */ + rte_intr_disable(hw->eth_dev->intr_handle); + err = ice_dcf_execute_virtchnl_cmd(hw, &args); + rte_intr_enable(hw->eth_dev->intr_handle); + if (err) { + PMD_DRV_LOG(ERR, "fail to execute command OP_REQUEST_QUEUES"); + return err; + } + + /* request additional queues failed, return available number */ + num_queue_pairs = ((struct virtchnl_vf_res_request *) + args.rsp_msgbuf)->num_queue_pairs; + PMD_DRV_LOG(ERR, + "request queues failed, only %u queues available", + num_queue_pairs); + + return -1; +} + int ice_dcf_config_irq_map(struct ice_dcf_hw *hw) { diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h index 8cf17e7700..99498e2184 100644 --- a/drivers/net/ice/ice_dcf.h +++ b/drivers/net/ice/ice_dcf.h @@ -127,6 +127,7 @@ int ice_dcf_configure_rss_key(struct ice_dcf_hw *hw); int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw); int ice_dcf_init_rss(struct ice_dcf_hw *hw); int ice_dcf_configure_queues(struct ice_dcf_hw *hw); +int ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num); int ice_dcf_config_irq_map(struct ice_dcf_hw *hw); int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on); int ice_dcf_disable_queues(struct ice_dcf_hw *hw);

From patchwork Wed Apr 13 17:10:24 2022
X-Patchwork-Submitter: Kevin Liu
X-Patchwork-Id: 109684
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Kevin Liu
To: dev@dpdk.org
Cc: qiming.yang@intel.com, qi.z.zhang@intel.com, stevex.yang@intel.com, Kevin Liu
Subject: [PATCH v3 16/22] net/ice: negotiate large VF and request more queues
Date: Wed, 13 Apr 2022 17:10:24 +0000
Message-Id: <20220413171030.2231163-17-kevinx.liu@intel.com>

From: Steve Yang

Negotiate large VF capability with PF during VF initialization.
If large VF is supported and the number of queues larger than 16 is required, VF requests additional queues from PF. Mark the state that large VF is supported. If the allocated queues number is larger than 16, the max RSS queue region cannot be 16 anymore. Add the function to query max RSS queue region from PF, use it in the RSS initialization and future filters configuration. Signed-off-by: Steve Yang Signed-off-by: Kevin Liu --- drivers/net/ice/ice_dcf.c | 34 +++++++++++++++- drivers/net/ice/ice_dcf.h | 4 ++ drivers/net/ice/ice_dcf_ethdev.c | 69 +++++++++++++++++++++++++++++++- drivers/net/ice/ice_dcf_ethdev.h | 2 + 4 files changed, 106 insertions(+), 3 deletions(-) diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index 6aeafa6681..7091658841 100644 --- a/drivers/net/ice/ice_dcf.c +++ b/drivers/net/ice/ice_dcf.c @@ -257,7 +257,8 @@ ice_dcf_get_vf_resource(struct ice_dcf_hw *hw) VIRTCHNL_VF_CAP_ADV_LINK_SPEED | VIRTCHNL_VF_CAP_DCF | VIRTCHNL_VF_OFFLOAD_VLAN_V2 | VF_BASE_MODE_OFFLOADS | VIRTCHNL_VF_OFFLOAD_RX_FLEX_DESC | - VIRTCHNL_VF_OFFLOAD_QOS | VIRTCHNL_VF_OFFLOAD_REQ_QUEUES; + VIRTCHNL_VF_OFFLOAD_QOS | VIRTCHNL_VF_OFFLOAD_REQ_QUEUES | + VIRTCHNL_VF_LARGE_NUM_QPAIRS; err = ice_dcf_send_cmd_req_no_irq(hw, VIRTCHNL_OP_GET_VF_RESOURCES, (uint8_t *)&caps, sizeof(caps)); @@ -1083,6 +1084,37 @@ ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num) return -1; } +int +ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw) +{ + struct dcf_virtchnl_cmd args; + uint16_t qregion_width; + int err; + + memset(&args, 0, sizeof(args)); + args.v_op = VIRTCHNL_OP_GET_MAX_RSS_QREGION; + args.req_msg = NULL; + args.req_msglen = 0; + args.rsp_msgbuf = hw->arq_buf; + args.rsp_msglen = ICE_DCF_AQ_BUF_SZ; + args.rsp_buflen = ICE_DCF_AQ_BUF_SZ; + + err = ice_dcf_execute_virtchnl_cmd(hw, &args); + if (err) { + PMD_DRV_LOG(ERR, + "Failed to execute command of " + "VIRTCHNL_OP_GET_MAX_RSS_QREGION"); + return err; + } + + qregion_width = ((struct 
virtchnl_max_rss_qregion *) + args.rsp_msgbuf)->qregion_width; + hw->max_rss_qregion = (uint16_t)(1 << qregion_width); + + return 0; +} + + int ice_dcf_config_irq_map(struct ice_dcf_hw *hw) { diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h index 99498e2184..05ea91d2a5 100644 --- a/drivers/net/ice/ice_dcf.h +++ b/drivers/net/ice/ice_dcf.h @@ -105,6 +105,7 @@ struct ice_dcf_hw { uint16_t msix_base; uint16_t nb_msix; + uint16_t max_rss_qregion; /* max RSS queue region supported by PF */ uint16_t rxq_map[16]; struct virtchnl_eth_stats eth_stats_offset; struct virtchnl_vlan_caps vlan_v2_caps; @@ -114,6 +115,8 @@ struct ice_dcf_hw { uint32_t link_speed; bool resetting; + /* Indicate large VF support enabled or not */ + bool lv_enabled; }; int ice_dcf_execute_virtchnl_cmd(struct ice_dcf_hw *hw, @@ -128,6 +131,7 @@ int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw); int ice_dcf_init_rss(struct ice_dcf_hw *hw); int ice_dcf_configure_queues(struct ice_dcf_hw *hw); int ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num); +int ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw); int ice_dcf_config_irq_map(struct ice_dcf_hw *hw); int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on); int ice_dcf_disable_queues(struct ice_dcf_hw *hw); diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index d4bfa182a4..a43c5a320d 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -39,6 +39,8 @@ static int ice_dcf_dev_udp_tunnel_port_del(struct rte_eth_dev *dev, struct rte_eth_udp_tunnel *udp_tunnel); +static int ice_dcf_queues_req_reset(struct rte_eth_dev *dev, uint16_t num); + static int ice_dcf_dev_init(struct rte_eth_dev *eth_dev); @@ -663,6 +665,11 @@ ice_dcf_dev_configure(struct rte_eth_dev *dev) { struct ice_dcf_adapter *dcf_ad = dev->data->dev_private; struct ice_adapter *ad = &dcf_ad->parent; + struct ice_dcf_hw *hw = &dcf_ad->real_hw; + int ret; + + uint16_t 
num_queue_pairs = + RTE_MAX(dev->data->nb_rx_queues, dev->data->nb_tx_queues); ad->rx_bulk_alloc_allowed = true; ad->tx_simple_allowed = true; @@ -670,6 +677,47 @@ ice_dcf_dev_configure(struct rte_eth_dev *dev) if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH; + /* Large VF setting */ + if (num_queue_pairs > ICE_DCF_MAX_NUM_QUEUES_DFLT) { + if (!(hw->vf_res->vf_cap_flags & + VIRTCHNL_VF_LARGE_NUM_QPAIRS)) { + PMD_DRV_LOG(ERR, "large VF is not supported"); + return -1; + } + + if (num_queue_pairs > ICE_DCF_MAX_NUM_QUEUES_LV) { + PMD_DRV_LOG(ERR, + "queue pairs number cannot be larger than %u", + ICE_DCF_MAX_NUM_QUEUES_LV); + return -1; + } + + ret = ice_dcf_queues_req_reset(dev, num_queue_pairs); + if (ret) + return ret; + + ret = ice_dcf_get_max_rss_queue_region(hw); + if (ret) { + PMD_INIT_LOG(ERR, "get max rss queue region failed"); + return ret; + } + + hw->lv_enabled = true; + } else { + /* Check if large VF is already enabled. If so, disable and + * release redundant queue resource. 
+ */ + if (hw->lv_enabled) { + ret = ice_dcf_queues_req_reset(dev, num_queue_pairs); + if (ret) + return ret; + + hw->lv_enabled = false; + } + /* if large VF is not required, use default rss queue region */ + hw->max_rss_qregion = ICE_DCF_MAX_NUM_QUEUES_DFLT; + } + return 0; } @@ -681,8 +729,8 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev, struct ice_dcf_hw *hw = &adapter->real_hw; dev_info->max_mac_addrs = DCF_NUM_MACADDR_MAX; - dev_info->max_rx_queues = hw->vsi_res->num_queue_pairs; - dev_info->max_tx_queues = hw->vsi_res->num_queue_pairs; + dev_info->max_rx_queues = ICE_DCF_MAX_NUM_QUEUES_LV; + dev_info->max_tx_queues = ICE_DCF_MAX_NUM_QUEUES_LV; dev_info->min_rx_bufsize = ICE_BUF_SIZE_MIN; dev_info->max_rx_pktlen = ICE_FRAME_SIZE_MAX; dev_info->hash_key_size = hw->vf_res->rss_key_size; @@ -1829,6 +1877,23 @@ ice_dcf_dev_uninit(struct rte_eth_dev *eth_dev) return 0; } +static int ice_dcf_queues_req_reset(struct rte_eth_dev *dev, uint16_t num) +{ + struct ice_dcf_adapter *adapter = dev->data->dev_private; + struct ice_dcf_hw *hw = &adapter->real_hw; + int ret; + + ret = ice_dcf_request_queues(hw, num); + if (ret) { + PMD_DRV_LOG(ERR, "request queues from PF failed"); + return ret; + } + PMD_DRV_LOG(INFO, "change queue pairs from %u to %u", + hw->vsi_res->num_queue_pairs, num); + + return ice_dcf_dev_reset(dev); +} + static int ice_dcf_cap_check_handler(__rte_unused const char *key, const char *value, __rte_unused void *opaque) diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h index 27f6402786..4a08d32e0c 100644 --- a/drivers/net/ice/ice_dcf_ethdev.h +++ b/drivers/net/ice/ice_dcf_ethdev.h @@ -20,6 +20,8 @@ #define ICE_DCF_ETH_OVERHEAD \ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + ICE_DCF_VLAN_TAG_SIZE * 2) #define ICE_DCF_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_DCF_ETH_OVERHEAD) +#define ICE_DCF_MAX_NUM_QUEUES_DFLT 16 +#define ICE_DCF_MAX_NUM_QUEUES_LV 256 struct ice_dcf_queue { uint64_t dummy; From patchwork Wed Apr 13 17:10:25 2022 
From: Kevin Liu To: dev@dpdk.org Cc: qiming.yang@intel.com, qi.z.zhang@intel.com,
stevex.yang@intel.com, Kevin Liu Subject: [PATCH v3 17/22] net/ice: enable multiple queues configurations for large VF Date: Wed, 13 Apr 2022 17:10:25 +0000 Message-Id: <20220413171030.2231163-18-kevinx.liu@intel.com> From: Steve Yang Since the adminq buffer is limited to 4K, a single VIRTCHNL_OP_CONFIG_VSI_QUEUES message cannot carry the configuration for up to 256 queues. Send the message multiple times instead, keeping each buffer below the 4K limit. Signed-off-by: Steve Yang Signed-off-by: Kevin Liu --- drivers/net/ice/ice_dcf.c | 11 ++++++----- drivers/net/ice/ice_dcf.h | 3 ++- drivers/net/ice/ice_dcf_ethdev.c | 20 ++++++++++++++++++-- drivers/net/ice/ice_dcf_ethdev.h | 1 + 4 files changed, 27 insertions(+), 8 deletions(-) diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index 7091658841..7004c00f1c 100644 --- a/drivers/net/ice/ice_dcf.c +++ b/drivers/net/ice/ice_dcf.c @@ -949,7 +949,8 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw) #define IAVF_RXDID_COMMS_OVS_1 22 int -ice_dcf_configure_queues(struct ice_dcf_hw *hw) +ice_dcf_configure_queues(struct ice_dcf_hw *hw, + uint16_t num_queue_pairs, uint16_t index) { struct ice_rx_queue **rxq = (struct ice_rx_queue **)hw->eth_dev->data->rx_queues; @@ -962,16 +963,16 @@ ice_dcf_configure_queues(struct ice_dcf_hw *hw) int err; size = sizeof(*vc_config) + - sizeof(vc_config->qpair[0]) * hw->num_queue_pairs; + sizeof(vc_config->qpair[0]) * num_queue_pairs; vc_config = rte_zmalloc("cfg_queue", size, 0); if (!vc_config)
return -ENOMEM; vc_config->vsi_id = hw->vsi_res->vsi_id; - vc_config->num_queue_pairs = hw->num_queue_pairs; + vc_config->num_queue_pairs = num_queue_pairs; - for (i = 0, vc_qp = vc_config->qpair; - i < hw->num_queue_pairs; + for (i = index, vc_qp = vc_config->qpair; + i < index + num_queue_pairs; i++, vc_qp++) { vc_qp->txq.vsi_id = hw->vsi_res->vsi_id; vc_qp->txq.queue_id = i; diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h index 05ea91d2a5..e36428a92a 100644 --- a/drivers/net/ice/ice_dcf.h +++ b/drivers/net/ice/ice_dcf.h @@ -129,7 +129,8 @@ void ice_dcf_uninit_hw(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw); int ice_dcf_configure_rss_key(struct ice_dcf_hw *hw); int ice_dcf_configure_rss_lut(struct ice_dcf_hw *hw); int ice_dcf_init_rss(struct ice_dcf_hw *hw); -int ice_dcf_configure_queues(struct ice_dcf_hw *hw); +int ice_dcf_configure_queues(struct ice_dcf_hw *hw, + uint16_t num_queue_pairs, uint16_t index); int ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num); int ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw); int ice_dcf_config_irq_map(struct ice_dcf_hw *hw); diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index a43c5a320d..78df82d5b5 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -513,6 +513,8 @@ ice_dcf_dev_start(struct rte_eth_dev *dev) struct rte_intr_handle *intr_handle = dev->intr_handle; struct ice_adapter *ad = &dcf_ad->parent; struct ice_dcf_hw *hw = &dcf_ad->real_hw; + uint16_t num_queue_pairs; + uint16_t index = 0; int ret; if (hw->resetting) { @@ -531,6 +533,7 @@ ice_dcf_dev_start(struct rte_eth_dev *dev) hw->num_queue_pairs = RTE_MAX(dev->data->nb_rx_queues, dev->data->nb_tx_queues); + num_queue_pairs = hw->num_queue_pairs; ret = ice_dcf_init_rx_queues(dev); if (ret) { @@ -546,7 +549,20 @@ ice_dcf_dev_start(struct rte_eth_dev *dev) } } - ret = ice_dcf_configure_queues(hw); + /* If needed, send configure queues msg multiple times 
to make the + * adminq buffer length smaller than the 4K limitation. + */ + while (num_queue_pairs > ICE_DCF_CFG_Q_NUM_PER_BUF) { + if (ice_dcf_configure_queues(hw, + ICE_DCF_CFG_Q_NUM_PER_BUF, index) != 0) { + PMD_DRV_LOG(ERR, "configure queues failed"); + goto err_queue; + } + num_queue_pairs -= ICE_DCF_CFG_Q_NUM_PER_BUF; + index += ICE_DCF_CFG_Q_NUM_PER_BUF; + } + + ret = ice_dcf_configure_queues(hw, num_queue_pairs, index); if (ret) { PMD_DRV_LOG(ERR, "Fail to config queues"); return ret; @@ -586,7 +602,7 @@ ice_dcf_dev_start(struct rte_eth_dev *dev) dev->data->dev_link.link_status = RTE_ETH_LINK_UP; - +err_queue: return 0; } diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h index 4a08d32e0c..2fac1e5b21 100644 --- a/drivers/net/ice/ice_dcf_ethdev.h +++ b/drivers/net/ice/ice_dcf_ethdev.h @@ -22,6 +22,7 @@ #define ICE_DCF_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_DCF_ETH_OVERHEAD) #define ICE_DCF_MAX_NUM_QUEUES_DFLT 16 #define ICE_DCF_MAX_NUM_QUEUES_LV 256 +#define ICE_DCF_CFG_Q_NUM_PER_BUF 32 struct ice_dcf_queue { uint64_t dummy; From patchwork Wed Apr 13 17:10:26 2022
From: Kevin Liu To: dev@dpdk.org Cc: qiming.yang@intel.com, qi.z.zhang@intel.com, stevex.yang@intel.com, Kevin Liu Subject: [PATCH v3 18/22] net/ice: enable IRQ mapping configuration for large VF Date: Wed, 13 Apr 2022 17:10:26 +0000 Message-Id: <20220413171030.2231163-19-kevinx.liu@intel.com> From: Steve Yang The current IRQ mapping configuration supports at most 16 queues and 16 MSI-X vectors. Change the queue-vector mapping structure to support up to 256 queues.
A new opcode is used to handle the case with a large number of queues. To stay within the adminq buffer size limitation, the virtchnl message is sent multiple times if needed. Signed-off-by: Steve Yang Signed-off-by: Kevin Liu --- drivers/net/ice/ice_dcf.c | 50 +++++++++++++++++++++++++++---- drivers/net/ice/ice_dcf.h | 10 ++++++- drivers/net/ice/ice_dcf_ethdev.c | 51 +++++++++++++++++++++++++++----- drivers/net/ice/ice_dcf_ethdev.h | 1 + 4 files changed, 99 insertions(+), 13 deletions(-) diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index 7004c00f1c..290f754049 100644 --- a/drivers/net/ice/ice_dcf.c +++ b/drivers/net/ice/ice_dcf.c @@ -1115,7 +1115,6 @@ ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw) return 0; } - int ice_dcf_config_irq_map(struct ice_dcf_hw *hw) { @@ -1132,13 +1131,14 @@ ice_dcf_config_irq_map(struct ice_dcf_hw *hw) return -ENOMEM; map_info->num_vectors = hw->nb_msix; - for (i = 0; i < hw->nb_msix; i++) { - vecmap = &map_info->vecmap[i]; + for (i = 0; i < hw->eth_dev->data->nb_rx_queues; i++) { + vecmap = + &map_info->vecmap[hw->qv_map[i].vector_id - hw->msix_base]; vecmap->vsi_id = hw->vsi_res->vsi_id; vecmap->rxitr_idx = 0; - vecmap->vector_id = hw->msix_base + i; + vecmap->vector_id = hw->qv_map[i].vector_id; vecmap->txq_map = 0; - vecmap->rxq_map = hw->rxq_map[hw->msix_base + i]; + vecmap->rxq_map |= 1 << hw->qv_map[i].queue_id; } memset(&args, 0, sizeof(args)); @@ -1154,6 +1154,46 @@ ice_dcf_config_irq_map(struct ice_dcf_hw *hw) return err; } +int +ice_dcf_config_irq_map_lv(struct ice_dcf_hw *hw, + uint16_t num, uint16_t index) +{ + struct virtchnl_queue_vector_maps *map_info; + struct virtchnl_queue_vector *qv_maps; + struct dcf_virtchnl_cmd args; + int len, i, err; + int count = 0; + + len = sizeof(struct virtchnl_queue_vector_maps) + + sizeof(struct virtchnl_queue_vector) * (num - 1); + + map_info = rte_zmalloc("map_info", len, 0); + if (!map_info) + return -ENOMEM; + + map_info->vport_id = hw->vsi_res->vsi_id;
+ map_info->num_qv_maps = num; + for (i = index; i < index + map_info->num_qv_maps; i++) { + qv_maps = &map_info->qv_maps[count++]; + qv_maps->itr_idx = VIRTCHNL_ITR_IDX_0; + qv_maps->queue_type = VIRTCHNL_QUEUE_TYPE_RX; + qv_maps->queue_id = hw->qv_map[i].queue_id; + qv_maps->vector_id = hw->qv_map[i].vector_id; + } + + args.v_op = VIRTCHNL_OP_MAP_QUEUE_VECTOR; + args.req_msg = (u8 *)map_info; + args.req_msglen = len; + args.rsp_msgbuf = hw->arq_buf; + args.req_msglen = ICE_DCF_AQ_BUF_SZ; + err = ice_dcf_execute_virtchnl_cmd(hw, &args); + if (err) + PMD_DRV_LOG(ERR, "fail to execute command OP_MAP_QUEUE_VECTOR"); + + rte_free(map_info); + return err; +} + int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on) { diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h index e36428a92a..ce57a687ab 100644 --- a/drivers/net/ice/ice_dcf.h +++ b/drivers/net/ice/ice_dcf.h @@ -74,6 +74,11 @@ struct ice_dcf_tm_conf { bool committed; }; +struct ice_dcf_qv_map { + uint16_t queue_id; + uint16_t vector_id; +}; + struct ice_dcf_hw { struct iavf_hw avf; @@ -106,7 +111,8 @@ struct ice_dcf_hw { uint16_t msix_base; uint16_t nb_msix; uint16_t max_rss_qregion; /* max RSS queue region supported by PF */ - uint16_t rxq_map[16]; + + struct ice_dcf_qv_map *qv_map; /* queue vector mapping */ struct virtchnl_eth_stats eth_stats_offset; struct virtchnl_vlan_caps vlan_v2_caps; @@ -134,6 +140,8 @@ int ice_dcf_configure_queues(struct ice_dcf_hw *hw, int ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num); int ice_dcf_get_max_rss_queue_region(struct ice_dcf_hw *hw); int ice_dcf_config_irq_map(struct ice_dcf_hw *hw); +int ice_dcf_config_irq_map_lv(struct ice_dcf_hw *hw, + uint16_t num, uint16_t index); int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on); int ice_dcf_disable_queues(struct ice_dcf_hw *hw); int ice_dcf_query_stats(struct ice_dcf_hw *hw, diff --git a/drivers/net/ice/ice_dcf_ethdev.c 
b/drivers/net/ice/ice_dcf_ethdev.c index 78df82d5b5..1ddba02ebb 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -143,6 +143,7 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev, { struct ice_dcf_adapter *adapter = dev->data->dev_private; struct ice_dcf_hw *hw = &adapter->real_hw; + struct ice_dcf_qv_map *qv_map; uint16_t interval, i; int vec; @@ -161,6 +162,14 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev, } } + qv_map = rte_zmalloc("qv_map", + dev->data->nb_rx_queues * sizeof(struct ice_dcf_qv_map), 0); + if (!qv_map) { + PMD_DRV_LOG(ERR, "Failed to allocate %d queue-vector map", + dev->data->nb_rx_queues); + return -1; + } + if (!dev->data->dev_conf.intr_conf.rxq || !rte_intr_dp_is_en(intr_handle)) { /* Rx interrupt disabled, Map interrupt only for writeback */ @@ -196,17 +205,22 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev, } IAVF_WRITE_FLUSH(&hw->avf); /* map all queues to the same interrupt */ - for (i = 0; i < dev->data->nb_rx_queues; i++) - hw->rxq_map[hw->msix_base] |= 1 << i; + for (i = 0; i < dev->data->nb_rx_queues; i++) { + qv_map[i].queue_id = i; + qv_map[i].vector_id = hw->msix_base; + } + hw->qv_map = qv_map; } else { if (!rte_intr_allow_others(intr_handle)) { hw->nb_msix = 1; hw->msix_base = IAVF_MISC_VEC_ID; for (i = 0; i < dev->data->nb_rx_queues; i++) { - hw->rxq_map[hw->msix_base] |= 1 << i; + qv_map[i].queue_id = i; + qv_map[i].vector_id = hw->msix_base; rte_intr_vec_list_index_set(intr_handle, i, IAVF_MISC_VEC_ID); } + hw->qv_map = qv_map; PMD_DRV_LOG(DEBUG, "vector %u are mapping to all Rx queues", hw->msix_base); @@ -219,21 +233,44 @@ ice_dcf_config_rx_queues_irqs(struct rte_eth_dev *dev, hw->msix_base = IAVF_MISC_VEC_ID; vec = IAVF_MISC_VEC_ID; for (i = 0; i < dev->data->nb_rx_queues; i++) { - hw->rxq_map[vec] |= 1 << i; + qv_map[i].queue_id = i; + qv_map[i].vector_id = vec; rte_intr_vec_list_index_set(intr_handle, i, vec++); if (vec >= hw->nb_msix) vec = 
IAVF_RX_VEC_START; } + hw->qv_map = qv_map; PMD_DRV_LOG(DEBUG, "%u vectors are mapping to %u Rx queues", hw->nb_msix, dev->data->nb_rx_queues); } } - if (ice_dcf_config_irq_map(hw)) { - PMD_DRV_LOG(ERR, "config interrupt mapping failed"); - return -1; + if (!hw->lv_enabled) { + if (ice_dcf_config_irq_map(hw)) { + PMD_DRV_LOG(ERR, "config interrupt mapping failed"); + return -1; + } + } else { + uint16_t num_qv_maps = dev->data->nb_rx_queues; + uint16_t index = 0; + + while (num_qv_maps > ICE_DCF_IRQ_MAP_NUM_PER_BUF) { + if (ice_dcf_config_irq_map_lv(hw, + ICE_DCF_IRQ_MAP_NUM_PER_BUF, index)) { + PMD_DRV_LOG(ERR, "config interrupt mapping for large VF failed"); + return -1; + } + num_qv_maps -= ICE_DCF_IRQ_MAP_NUM_PER_BUF; + index += ICE_DCF_IRQ_MAP_NUM_PER_BUF; + } + + if (ice_dcf_config_irq_map_lv(hw, num_qv_maps, index)) { + PMD_DRV_LOG(ERR, "config interrupt mapping for large VF failed"); + return -1; + } + } return 0; } diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h index 2fac1e5b21..9ef524c97c 100644 --- a/drivers/net/ice/ice_dcf_ethdev.h +++ b/drivers/net/ice/ice_dcf_ethdev.h @@ -23,6 +23,7 @@ #define ICE_DCF_MAX_NUM_QUEUES_DFLT 16 #define ICE_DCF_MAX_NUM_QUEUES_LV 256 #define ICE_DCF_CFG_Q_NUM_PER_BUF 32 +#define ICE_DCF_IRQ_MAP_NUM_PER_BUF 128 struct ice_dcf_queue { uint64_t dummy; From patchwork Wed Apr 13 17:10:27 2022
From: Kevin Liu To: dev@dpdk.org Cc: qiming.yang@intel.com, qi.z.zhang@intel.com, stevex.yang@intel.com, Kevin Liu Subject: [PATCH v3 19/22] net/ice: add enable/disable queues for DCF large VF Date: Wed, 13 Apr 2022 17:10:27 +0000 Message-Id: <20220413171030.2231163-20-kevinx.liu@intel.com>
The current virtchnl structure for enabling/disabling queues supports at most 32 queue pairs. Use a new opcode and structure that can describe up to 256 queue pairs, so that queues can be enabled and disabled in the large VF case. Signed-off-by: Steve Yang Signed-off-by: Kevin Liu --- drivers/net/ice/ice_dcf.c | 99 +++++++++++++++++++++++++++++++- drivers/net/ice/ice_dcf.h | 5 ++ drivers/net/ice/ice_dcf_ethdev.c | 26 +++++++-- drivers/net/ice/ice_dcf_ethdev.h | 8 +-- 4 files changed, 125 insertions(+), 13 deletions(-) diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index 290f754049..23edfd09b1 100644 --- a/drivers/net/ice/ice_dcf.c +++ b/drivers/net/ice/ice_dcf.c @@ -90,7 +90,6 @@ ice_dcf_recv_cmd_rsp_no_irq(struct ice_dcf_hw *hw, enum virtchnl_ops op, *rsp_msglen = event.msg_len; return rte_le_to_cpu_32(event.desc.cookie_low); - again: rte_delay_ms(ICE_DCF_ARQ_CHECK_TIME); } while (i++ < ICE_DCF_ARQ_MAX_RETRIES); @@ -896,7 +895,7 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw) { struct rte_eth_dev *dev = hw->eth_dev; struct rte_eth_rss_conf *rss_conf; - uint8_t i, j, nb_q; + uint16_t i, j, nb_q; int ret; rss_conf = &dev->data->dev_conf.rx_adv_conf.rss_conf; @@ -1075,6 +1074,12 @@ ice_dcf_request_queues(struct ice_dcf_hw *hw, uint16_t num) return err; } + /* request queues succeeded, vf is resetting */ + if (hw->resetting) { + PMD_DRV_LOG(INFO, "vf is resetting"); + return 0; + } + /* request additional queues failed, return available number */ num_queue_pairs = ((struct virtchnl_vf_res_request *) args.rsp_msgbuf)->num_queue_pairs; @@ -1185,7 +1190,8 @@ ice_dcf_config_irq_map_lv(struct ice_dcf_hw *hw, args.req_msg = (u8 *)map_info; args.req_msglen = len; args.rsp_msgbuf = hw->arq_buf; - args.req_msglen = ICE_DCF_AQ_BUF_SZ; + args.rsp_msglen = ICE_DCF_AQ_BUF_SZ; + args.rsp_buflen = ICE_DCF_AQ_BUF_SZ; err = ice_dcf_execute_virtchnl_cmd(hw, &args); if (err) PMD_DRV_LOG(ERR, "fail to execute command OP_MAP_QUEUE_VECTOR"); @@
-1225,6 +1231,50 @@ ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on) return err; } +int +ice_dcf_switch_queue_lv(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on) +{ + struct virtchnl_del_ena_dis_queues *queue_select; + struct virtchnl_queue_chunk *queue_chunk; + struct dcf_virtchnl_cmd args; + int err, len; + + len = sizeof(struct virtchnl_del_ena_dis_queues); + queue_select = rte_zmalloc("queue_select", len, 0); + if (!queue_select) + return -ENOMEM; + + queue_chunk = queue_select->chunks.chunks; + queue_select->chunks.num_chunks = 1; + queue_select->vport_id = hw->vsi_res->vsi_id; + + if (rx) { + queue_chunk->type = VIRTCHNL_QUEUE_TYPE_RX; + queue_chunk->start_queue_id = qid; + queue_chunk->num_queues = 1; + } else { + queue_chunk->type = VIRTCHNL_QUEUE_TYPE_TX; + queue_chunk->start_queue_id = qid; + queue_chunk->num_queues = 1; + } + + if (on) + args.v_op = VIRTCHNL_OP_ENABLE_QUEUES_V2; + else + args.v_op = VIRTCHNL_OP_DISABLE_QUEUES_V2; + args.req_msg = (u8 *)queue_select; + args.req_msglen = len; + args.rsp_msgbuf = hw->arq_buf; + args.rsp_msglen = ICE_DCF_AQ_BUF_SZ; + args.rsp_buflen = ICE_DCF_AQ_BUF_SZ; + err = ice_dcf_execute_virtchnl_cmd(hw, &args); + if (err) + PMD_DRV_LOG(ERR, "Failed to execute command of %s", + on ? 
"OP_ENABLE_QUEUES_V2" : "OP_DISABLE_QUEUES_V2"); + rte_free(queue_select); + return err; +} + int ice_dcf_disable_queues(struct ice_dcf_hw *hw) { @@ -1254,6 +1304,49 @@ ice_dcf_disable_queues(struct ice_dcf_hw *hw) return err; } +int +ice_dcf_disable_queues_lv(struct ice_dcf_hw *hw) +{ + struct virtchnl_del_ena_dis_queues *queue_select; + struct virtchnl_queue_chunk *queue_chunk; + struct dcf_virtchnl_cmd args; + int err, len; + + len = sizeof(struct virtchnl_del_ena_dis_queues) + + sizeof(struct virtchnl_queue_chunk) * + (ICE_DCF_RXTX_QUEUE_CHUNKS_NUM - 1); + queue_select = rte_zmalloc("queue_select", len, 0); + if (!queue_select) + return -ENOMEM; + + queue_chunk = queue_select->chunks.chunks; + queue_select->chunks.num_chunks = ICE_DCF_RXTX_QUEUE_CHUNKS_NUM; + queue_select->vport_id = hw->vsi_res->vsi_id; + + queue_chunk[VIRTCHNL_QUEUE_TYPE_TX].type = VIRTCHNL_QUEUE_TYPE_TX; + queue_chunk[VIRTCHNL_QUEUE_TYPE_TX].start_queue_id = 0; + queue_chunk[VIRTCHNL_QUEUE_TYPE_TX].num_queues = + hw->eth_dev->data->nb_tx_queues; + + queue_chunk[VIRTCHNL_QUEUE_TYPE_RX].type = VIRTCHNL_QUEUE_TYPE_RX; + queue_chunk[VIRTCHNL_QUEUE_TYPE_RX].start_queue_id = 0; + queue_chunk[VIRTCHNL_QUEUE_TYPE_RX].num_queues = + hw->eth_dev->data->nb_rx_queues; + + args.v_op = VIRTCHNL_OP_DISABLE_QUEUES_V2; + args.req_msg = (u8 *)queue_select; + args.req_msglen = len; + args.rsp_msgbuf = hw->arq_buf; + args.rsp_msglen = ICE_DCF_AQ_BUF_SZ; + args.rsp_buflen = ICE_DCF_AQ_BUF_SZ; + err = ice_dcf_execute_virtchnl_cmd(hw, &args); + if (err) + PMD_DRV_LOG(ERR, + "Failed to execute command of OP_DISABLE_QUEUES_V2"); + rte_free(queue_select); + return err; +} + int ice_dcf_query_stats(struct ice_dcf_hw *hw, struct virtchnl_eth_stats *pstats) diff --git a/drivers/net/ice/ice_dcf.h b/drivers/net/ice/ice_dcf.h index ce57a687ab..78ab23aaa6 100644 --- a/drivers/net/ice/ice_dcf.h +++ b/drivers/net/ice/ice_dcf.h @@ -15,6 +15,8 @@ #include "base/ice_type.h" #include "ice_logs.h" +#define 
ICE_DCF_RXTX_QUEUE_CHUNKS_NUM 2 + struct dcf_virtchnl_cmd { TAILQ_ENTRY(dcf_virtchnl_cmd) next; @@ -143,7 +145,10 @@ int ice_dcf_config_irq_map(struct ice_dcf_hw *hw); int ice_dcf_config_irq_map_lv(struct ice_dcf_hw *hw, uint16_t num, uint16_t index); int ice_dcf_switch_queue(struct ice_dcf_hw *hw, uint16_t qid, bool rx, bool on); +int ice_dcf_switch_queue_lv(struct ice_dcf_hw *hw, + uint16_t qid, bool rx, bool on); int ice_dcf_disable_queues(struct ice_dcf_hw *hw); +int ice_dcf_disable_queues_lv(struct ice_dcf_hw *hw); int ice_dcf_query_stats(struct ice_dcf_hw *hw, struct virtchnl_eth_stats *pstats); int ice_dcf_add_del_all_mac_addr(struct ice_dcf_hw *hw, diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index 1ddba02ebb..e46c8405aa 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -317,6 +317,7 @@ static int ice_dcf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) { struct ice_dcf_adapter *ad = dev->data->dev_private; + struct ice_dcf_hw *dcf_hw = &ad->real_hw; struct iavf_hw *hw = &ad->real_hw.avf; struct ice_rx_queue *rxq; int err = 0; @@ -339,7 +340,11 @@ ice_dcf_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id) IAVF_WRITE_FLUSH(hw); /* Ready to switch the queue on */ - err = ice_dcf_switch_queue(&ad->real_hw, rx_queue_id, true, true); + if (!dcf_hw->lv_enabled) + err = ice_dcf_switch_queue(dcf_hw, rx_queue_id, true, true); + else + err = ice_dcf_switch_queue_lv(dcf_hw, rx_queue_id, true, true); + if (err) { PMD_DRV_LOG(ERR, "Failed to switch RX queue %u on", rx_queue_id); @@ -448,6 +453,7 @@ static int ice_dcf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) { struct ice_dcf_adapter *ad = dev->data->dev_private; + struct ice_dcf_hw *dcf_hw = &ad->real_hw; struct iavf_hw *hw = &ad->real_hw.avf; struct ice_tx_queue *txq; int err = 0; @@ -463,7 +469,10 @@ ice_dcf_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id) IAVF_WRITE_FLUSH(hw); /* Ready to 
switch the queue on */ - err = ice_dcf_switch_queue(&ad->real_hw, tx_queue_id, false, true); + if (!dcf_hw->lv_enabled) + err = ice_dcf_switch_queue(dcf_hw, tx_queue_id, false, true); + else + err = ice_dcf_switch_queue_lv(dcf_hw, tx_queue_id, false, true); if (err) { PMD_DRV_LOG(ERR, "Failed to switch TX queue %u on", @@ -650,12 +659,17 @@ ice_dcf_stop_queues(struct rte_eth_dev *dev) struct ice_dcf_hw *hw = &ad->real_hw; struct ice_rx_queue *rxq; struct ice_tx_queue *txq; - int ret, i; + int i; /* Stop All queues */ - ret = ice_dcf_disable_queues(hw); - if (ret) - PMD_DRV_LOG(WARNING, "Fail to stop queues"); + if (!hw->lv_enabled) { + if (ice_dcf_disable_queues(hw)) + PMD_DRV_LOG(WARNING, "Fail to stop queues"); + } else { + if (ice_dcf_disable_queues_lv(hw)) + PMD_DRV_LOG(WARNING, + "Fail to stop queues for large VF"); + } for (i = 0; i < dev->data->nb_tx_queues; i++) { txq = dev->data->tx_queues[i]; diff --git a/drivers/net/ice/ice_dcf_ethdev.h b/drivers/net/ice/ice_dcf_ethdev.h index 9ef524c97c..3f740e2c7b 100644 --- a/drivers/net/ice/ice_dcf_ethdev.h +++ b/drivers/net/ice/ice_dcf_ethdev.h @@ -20,10 +20,10 @@ #define ICE_DCF_ETH_OVERHEAD \ (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + ICE_DCF_VLAN_TAG_SIZE * 2) #define ICE_DCF_ETH_MAX_LEN (RTE_ETHER_MTU + ICE_DCF_ETH_OVERHEAD) -#define ICE_DCF_MAX_NUM_QUEUES_DFLT 16 -#define ICE_DCF_MAX_NUM_QUEUES_LV 256 -#define ICE_DCF_CFG_Q_NUM_PER_BUF 32 -#define ICE_DCF_IRQ_MAP_NUM_PER_BUF 128 +#define ICE_DCF_MAX_NUM_QUEUES_DFLT 16 +#define ICE_DCF_MAX_NUM_QUEUES_LV 256 +#define ICE_DCF_CFG_Q_NUM_PER_BUF 32 +#define ICE_DCF_IRQ_MAP_NUM_PER_BUF 128 struct ice_dcf_queue { uint64_t dummy; From patchwork Wed Apr 13 17:10:28 2022
mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 565F9A0508; Wed, 13 Apr 2022 11:13:55 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E5F0F42862; Wed, 13 Apr 2022 11:12:30 +0200 (CEST) Received: from mga09.intel.com (mga09.intel.com [134.134.136.24]) by mails.dpdk.org (Postfix) with ESMTP id CC40E42816 for ; Wed, 13 Apr 2022 11:12:28 +0200 (CEST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=intel.com; i=@intel.com; q=dns/txt; s=Intel; t=1649841149; x=1681377149; h=from:to:cc:subject:date:message-id:in-reply-to: references:mime-version:content-transfer-encoding; bh=m3Dz+PXSuG+/roERL3yRoif24R1mp9WbHEukhvaUExo=; b=jxqEfnd+RkrW7xIhaheqsNFwo0seYG4Zlnxu6uamayFpV2lFqSXIR4Ya 43P+ZZe0s1IPX4XYjDgqH+TeDSJZCFPDiFIGXol4fdwJMgizFFTK2ubGf wUeYntJhVyDRWD2m+6KPsKlxUuNHlCmIvo3x52wvz+f8qjjXE0LEeyf9l 44GKprHM2H7UfKmnW2kiAJozFl6fuqUE4U0FnCQROfMpQUjRBf+7PDTOa vYjx5WTyXfoxI2/3R+Thy+2OBolRL6ZlByx4ZfUfce4/Jp19Bj3Ky7GuK Oe/QbsCSEJmZFBOaxRlhQ9HIGWdb0d+Y9aVLdp31J0NcPeUdtFfwijwKO w==; X-IronPort-AV: E=McAfee;i="6400,9594,10315"; a="262058426" X-IronPort-AV: E=Sophos;i="5.90,256,1643702400"; d="scan'208";a="262058426" Received: from orsmga007.jf.intel.com ([10.7.209.58]) by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Apr 2022 02:12:28 -0700 X-IronPort-AV: E=Sophos;i="5.90,256,1643702400"; d="scan'208";a="552122900" Received: from intel-cd-odc-kevin.cd.intel.com ([10.240.178.195]) by orsmga007-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 13 Apr 2022 02:12:26 -0700 From: Kevin Liu To: dev@dpdk.org Cc: qiming.yang@intel.com, qi.z.zhang@intel.com, stevex.yang@intel.com, Alvin Zhang , Kevin Liu Subject: [PATCH v3 20/22] net/ice: fix DCF ACL flow engine Date: Wed, 13 Apr 2022 17:10:28 +0000 Message-Id: <20220413171030.2231163-21-kevinx.liu@intel.com> X-Mailer: git-send-email 2.33.1 In-Reply-To: <20220413171030.2231163-1-kevinx.liu@intel.com> 
References: <20220413160932.2074781-1-kevinx.liu@intel.com> <20220413171030.2231163-1-kevinx.liu@intel.com>

From: Alvin Zhang

ACL is not a necessary feature for DCF and may not be supported by the ice kernel driver. With this patch, when ACL initialization fails, the driver no longer returns the failure to higher-level functions; instead, it logs the error, cleans up the related resources, and unregisters the ACL engine.

Fixes: 40d466fa9f76 ("net/ice: support ACL filter in DCF")

Signed-off-by: Alvin Zhang
Signed-off-by: Kevin Liu
---
 drivers/net/ice/ice_acl_filter.c   | 20 ++++++++++++++----
 drivers/net/ice/ice_generic_flow.c | 34 +++++++++++++++++++++++-------
 2 files changed, 42 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ice/ice_acl_filter.c b/drivers/net/ice/ice_acl_filter.c index 8fe6f5aeb0..20a1f86c43 100644 --- a/drivers/net/ice/ice_acl_filter.c +++ b/drivers/net/ice/ice_acl_filter.c @@ -56,6 +56,8 @@ ice_pattern_match_item ice_acl_pattern[] = { {pattern_eth_ipv4_sctp, ICE_ACL_INSET_ETH_IPV4_SCTP, ICE_INSET_NONE, ICE_INSET_NONE}, }; +static void ice_acl_prof_free(struct ice_hw *hw); + static int ice_acl_prof_alloc(struct ice_hw *hw) { @@ -1007,17 +1009,27 @@ ice_acl_init(struct ice_adapter *ad) ret = ice_acl_setup(pf); if (ret) - return ret; + goto deinit_acl; ret = ice_acl_bitmap_init(pf); if (ret) - return ret; + goto deinit_acl; ret = ice_acl_prof_init(pf); if (ret) - return ret; + goto deinit_acl; - return ice_register_parser(parser, ad); + ret = ice_register_parser(parser, ad); + if (ret) + goto deinit_acl; + + return 0; + +deinit_acl: + ice_deinit_acl(pf); + ice_acl_prof_free(hw); + PMD_DRV_LOG(ERR, "ACL init failed, may not supported!"); + return ret; } static void diff --git a/drivers/net/ice/ice_generic_flow.c 
b/drivers/net/ice/ice_generic_flow.c index 53b1c0b69a..205ba5d21b 100644 --- a/drivers/net/ice/ice_generic_flow.c +++ b/drivers/net/ice/ice_generic_flow.c @@ -1817,6 +1817,12 @@ ice_register_flow_engine(struct ice_flow_engine *engine) TAILQ_INSERT_TAIL(&engine_list, engine, node); } +static void +ice_unregister_flow_engine(struct ice_flow_engine *engine) +{ + TAILQ_REMOVE(&engine_list, engine, node); +} + int ice_flow_init(struct ice_adapter *ad) { @@ -1840,9 +1846,18 @@ ice_flow_init(struct ice_adapter *ad) ret = engine->init(ad); if (ret) { - PMD_INIT_LOG(ERR, "Failed to initialize engine %d", - engine->type); - return ret; + /** + * ACL may not supported in kernel driver, + * so just unregister the engine. + */ + if (engine->type == ICE_FLOW_ENGINE_ACL) { + ice_unregister_flow_engine(engine); + } else { + PMD_INIT_LOG(ERR, + "Failed to initialize engine %d", + engine->type); + return ret; + } } } return 0; @@ -1929,7 +1944,7 @@ ice_register_parser(struct ice_flow_parser *parser, list = ice_get_parser_list(parser, ad); if (list == NULL) - return -EINVAL; + goto err; if (ad->devargs.pipe_mode_support) { TAILQ_INSERT_TAIL(list, parser_node, node); @@ -1941,7 +1956,7 @@ ice_register_parser(struct ice_flow_parser *parser, ICE_FLOW_ENGINE_ACL) { TAILQ_INSERT_AFTER(list, existing_node, parser_node, node); - goto DONE; + return 0; } } TAILQ_INSERT_HEAD(list, parser_node, node); @@ -1952,7 +1967,7 @@ ice_register_parser(struct ice_flow_parser *parser, ICE_FLOW_ENGINE_SWITCH) { TAILQ_INSERT_AFTER(list, existing_node, parser_node, node); - goto DONE; + return 0; } } TAILQ_INSERT_HEAD(list, parser_node, node); @@ -1961,11 +1976,14 @@ ice_register_parser(struct ice_flow_parser *parser, } else if (parser->engine->type == ICE_FLOW_ENGINE_ACL) { TAILQ_INSERT_HEAD(list, parser_node, node); } else { - return -EINVAL; + goto err; } } -DONE: return 0; +err: + rte_free(parser_node); + PMD_DRV_LOG(ERR, "%s failed.", __func__); + return -EINVAL; } void From patchwork Wed Apr 13 
17:10:29 2022
X-Patchwork-Submitter: Kevin Liu
X-Patchwork-Id: 109689
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Kevin Liu
To: dev@dpdk.org
Cc: qiming.yang@intel.com, 
qi.z.zhang@intel.com, stevex.yang@intel.com, Kevin Liu
Subject: [PATCH v3 21/22] testpmd: force flow flush
Date: Wed, 13 Apr 2022 17:10:29 +0000
Message-Id: <20220413171030.2231163-22-kevinx.liu@intel.com>
In-Reply-To: <20220413171030.2231163-1-kevinx.liu@intel.com>
References: <20220413160932.2074781-1-kevinx.liu@intel.com> <20220413171030.2231163-1-kevinx.liu@intel.com>

From: Qi Zhang

For MDCF, rte_flow_flush still needs to be invoked even when no flows have been created in the current instance.

Signed-off-by: Qi Zhang
Signed-off-by: Kevin Liu
---
 app/test-pmd/config.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index cc8e7aa138..3d40e3e43d 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -2923,15 +2923,15 @@ port_flow_flush(portid_t port_id) port = &ports[port_id]; - if (port->flow_list == NULL) - return ret; - /* Poisoning to make sure PMDs update it in case of error. 
*/ memset(&error, 0x44, sizeof(error)); if (rte_flow_flush(port_id, &error)) { port_flow_complain(&error); } + if (port->flow_list == NULL) + return ret; + while (port->flow_list) { struct port_flow *pf = port->flow_list->next;

From patchwork Wed Apr 13 17:10:30 2022
X-Patchwork-Submitter: Kevin Liu
X-Patchwork-Id: 109690
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Kevin Liu
To: dev@dpdk.org
Cc: qiming.yang@intel.com, qi.z.zhang@intel.com, stevex.yang@intel.com, Kevin Liu , Alvin Zhang
Subject: [PATCH v3 22/22] net/ice: fix DCF reset
Date: Wed, 13 Apr 2022 17:10:30 +0000
Message-Id: <20220413171030.2231163-23-kevinx.liu@intel.com>
In-Reply-To: <20220413171030.2231163-1-kevinx.liu@intel.com>
References: <20220413160932.2074781-1-kevinx.liu@intel.com> <20220413171030.2231163-1-kevinx.liu@intel.com>

After the PF triggers a VF reset, the VF PMD must reinitialize all of its resources before it can perform any operation on the hardware. This patch adds a flag indicating whether the VF has been reset by the PF, and updates the DCF reset operations according to this flag. 
Fixes: 1a86f4dbdf42 ("net/ice: support DCF device reset") Signed-off-by: Alvin Zhang Signed-off-by: Kevin Liu --- drivers/net/ice/base/ice_common.c | 4 +++- drivers/net/ice/ice_dcf.c | 2 +- drivers/net/ice/ice_dcf_ethdev.c | 17 ++++++++++++++++- drivers/net/ice/ice_dcf_parent.c | 3 +++ 4 files changed, 23 insertions(+), 3 deletions(-) diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c index db87bacd97..13feb55469 100644 --- a/drivers/net/ice/base/ice_common.c +++ b/drivers/net/ice/base/ice_common.c @@ -755,6 +755,7 @@ enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw) status = ice_init_def_sw_recp(hw, &hw->switch_info->recp_list); if (status) { ice_free(hw, hw->switch_info); + hw->switch_info = NULL; return status; } return ICE_SUCCESS; @@ -823,7 +824,6 @@ ice_cleanup_fltr_mgmt_single(struct ice_hw *hw, struct ice_switch_info *sw) } ice_rm_sw_replay_rule_info(hw, sw); ice_free(hw, sw->recp_list); - ice_free(hw, sw); } /** @@ -833,6 +833,8 @@ ice_cleanup_fltr_mgmt_single(struct ice_hw *hw, struct ice_switch_info *sw) void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw) { ice_cleanup_fltr_mgmt_single(hw, hw->switch_info); + ice_free(hw, hw->switch_info); + hw->switch_info = NULL; } /** diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c index 23edfd09b1..35773e2acd 100644 --- a/drivers/net/ice/ice_dcf.c +++ b/drivers/net/ice/ice_dcf.c @@ -1429,7 +1429,7 @@ ice_dcf_cap_reset(struct rte_eth_dev *eth_dev, struct ice_dcf_hw *hw) int ret; struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); - struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; ice_dcf_disable_irq0(hw); rte_intr_disable(intr_handle); diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c index e46c8405aa..0315e694d7 100644 --- a/drivers/net/ice/ice_dcf_ethdev.c +++ b/drivers/net/ice/ice_dcf_ethdev.c @@ -1004,6 +1004,15 @@ 
dcf_add_del_mc_addr_list(struct ice_dcf_hw *hw, uint32_t i; int len, err = 0; + if (hw->resetting) { + if (!add) + return 0; + + PMD_DRV_LOG(ERR, + "fail to add multicast MACs for VF resetting"); + return -EIO; + } + len = sizeof(struct virtchnl_ether_addr_list); len += sizeof(struct virtchnl_ether_addr) * mc_addrs_num; @@ -1642,7 +1651,13 @@ ice_dcf_dev_close(struct rte_eth_dev *dev) if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; - (void)ice_dcf_dev_stop(dev); + if (adapter->parent.pf.adapter_stopped) + (void)ice_dcf_dev_stop(dev); + + if (adapter->real_hw.resetting) { + ice_dcf_uninit_hw(dev, &adapter->real_hw); + ice_dcf_init_hw(dev, &adapter->real_hw); + } ice_free_queues(dev); diff --git a/drivers/net/ice/ice_dcf_parent.c b/drivers/net/ice/ice_dcf_parent.c index 2f96dedcce..7f7ed796e2 100644 --- a/drivers/net/ice/ice_dcf_parent.c +++ b/drivers/net/ice/ice_dcf_parent.c @@ -240,6 +240,9 @@ ice_dcf_handle_pf_event_msg(struct ice_dcf_hw *dcf_hw, case VIRTCHNL_EVENT_RESET_IMPENDING: PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_RESET_IMPENDING event"); dcf_hw->resetting = true; + rte_eth_dev_callback_process(dcf_hw->eth_dev, + RTE_ETH_EVENT_INTR_RESET, + NULL); break; case VIRTCHNL_EVENT_LINK_CHANGE: PMD_DRV_LOG(DEBUG, "VIRTCHNL_EVENT_LINK_CHANGE event");
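The resetting-flag guard in patch 22/22 above (set when VIRTCHNL_EVENT_RESET_IMPENDING arrives, checked before touching the hardware) can be sketched in plain C. This is an illustrative toy, not the driver's real code: `struct dcf_hw_sim` and `dcf_sim_add_del_mc_addrs()` are hypothetical stand-ins for `ice_dcf_hw` and `dcf_add_del_mc_addr_list()`.

```c
#include <errno.h>
#include <stdbool.h>

/* Hypothetical stand-in for struct ice_dcf_hw; only the field relevant
 * to the reset guard is modeled here. */
struct dcf_hw_sim {
	bool resetting; /* set when VIRTCHNL_EVENT_RESET_IMPENDING arrives */
};

/* Mirrors the guard added to dcf_add_del_mc_addr_list(): while a VF reset
 * is pending, deleting multicast addresses is a harmless no-op (the reset
 * clears them anyway), but adding must fail so the caller retries after
 * the device has been reinitialized. */
int dcf_sim_add_del_mc_addrs(struct dcf_hw_sim *hw, bool add)
{
	if (hw->resetting) {
		if (!add)
			return 0;   /* nothing to delete once the VF resets */
		return -EIO;        /* refuse to program hardware mid-reset */
	}
	/* Normal path: the real driver would build and send the
	 * virtchnl address-list message here. */
	return 0;
}
```

An application that receives the RTE_ETH_EVENT_INTR_RESET callback added by this patch would typically stop and reset the port, after which the flag is cleared and the add path succeeds again.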