From patchwork Mon Mar 23 07:17:24 2020
X-Patchwork-Id: 66989
From: Qi Zhang
Date: Mon, 23 Mar 2020 15:17:24 +0800
Subject: [dpdk-dev] [PATCH v2 01/36] net/ice/base: fix uninitialized stack variables

Via code inspection, I found that some partially initialized stack variables
were being passed along to called functions, which could eventually result in
those uninitialized members being used. To fix this, make sure the local
variables are zeroed out before partially initializing them. This should
prevent any unintended consequences from using stack memory that might have
junk in it. In addition to the memsets, this patch also initializes one
member in one function that needed to be set to a non-zero value.
Fixes: fed0c5ca5f19 ("net/ice/base: support programming a new switch recipe")
Cc: stable@dpdk.org

Signed-off-by: Jesse Brandeburg
Signed-off-by: Paul M Stillwell Jr
Signed-off-by: Qi Zhang
---
 drivers/net/ice/base/ice_switch.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 085f34406..e88d0f7fe 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -6227,9 +6227,12 @@ ice_adv_add_update_vsi_list(struct ice_hw *hw,
 	if (status)
 		return status;

+	ice_memset(&tmp_fltr, 0, sizeof(tmp_fltr), ICE_NONDMA_MEM);
 	tmp_fltr.fltr_rule_id = cur_fltr->fltr_rule_id;
 	tmp_fltr.fltr_act = ICE_FWD_TO_VSI_LIST;
 	tmp_fltr.fwd_id.vsi_list_id = vsi_list_id;
+	tmp_fltr.lkup_type = ICE_SW_LKUP_LAST;
+
 	/* Update the previous switch rule of "forward to VSI" to
 	 * "fwd to VSI list"
 	 */
@@ -6473,6 +6476,7 @@ ice_add_adv_rule(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups,
 	if (rinfo->sw_act.fltr_act == ICE_FWD_TO_VSI) {
 		struct ice_fltr_info tmp_fltr;

+		ice_memset(&tmp_fltr, 0, sizeof(tmp_fltr), ICE_NONDMA_MEM);
 		tmp_fltr.fltr_rule_id =
 			LE16_TO_CPU(s_rule->pdata.lkup_tx_rx.index);
 		tmp_fltr.fltr_act = ICE_FWD_TO_VSI;
@@ -6557,6 +6561,8 @@ ice_adv_rem_update_vsi_list(struct ice_hw *hw, u16 vsi_handle,
 					  lkup_type);
 	if (status)
 		return status;
+
+	ice_memset(&tmp_fltr, 0, sizeof(tmp_fltr), ICE_NONDMA_MEM);
 	tmp_fltr.fltr_rule_id = fm_list->rule_info.fltr_rule_id;
 	fm_list->rule_info.sw_act.fltr_act = ICE_FWD_TO_VSI;
 	tmp_fltr.fltr_act = ICE_FWD_TO_VSI;
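For reference, a minimal standalone C sketch of the bug class this patch
addresses. The struct and field names are illustrative, not the driver's;
plain memset stands in for ice_memset. Without the zeroing step, any member
the caller does not assign (lkup_type here) is read as stack garbage by the
callee.

#include <stdio.h>
#include <string.h>

struct fltr_info {
    unsigned short rule_id;
    unsigned short act;
    unsigned short vsi_list_id;
    unsigned short lkup_type;   /* left as stack garbage before the fix */
};

static void consume(const struct fltr_info *f)
{
    /* A callee that trusts every member will act on junk if the
     * caller only partially initialized the struct. */
    printf("rule %u act %u list %u lkup %u\n",
           f->rule_id, f->act, f->vsi_list_id, f->lkup_type);
}

int main(void)
{
    struct fltr_info tmp;

    /* The fix: zero the whole struct before the partial assignments. */
    memset(&tmp, 0, sizeof(tmp));
    tmp.rule_id = 1;
    tmp.act = 2;
    tmp.vsi_list_id = 3;
    consume(&tmp);              /* lkup_type now reads as 0, not junk */
    return 0;
}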
From patchwork Mon Mar 23 07:17:25 2020
X-Patchwork-Id: 66990
From: Qi Zhang
Date: Mon, 23 Mar 2020 15:17:25 +0800
Subject: [dpdk-dev] [PATCH v2 02/36] net/ice/base: add and update E822 device IDs

Add the device IDs for the Intel(R) Ethernet Connection E822-L and E822-X
SKUs. Update the codenames and branding strings for the previous C822N
device IDs, which should be using E822-C.

Signed-off-by: Jacob Keller
Signed-off-by: Paul M Stillwell Jr
Signed-off-by: Qi Zhang
---
 drivers/net/ice/base/ice_devids.h | 28 ++++++++++++++++++----------
 drivers/net/ice/base/ice_nvm.c    | 16 +++++++++++++---
 drivers/net/ice/ice_ethdev.c      | 10 +++++-----
 3 files changed, 36 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ice/base/ice_devids.h b/drivers/net/ice/base/ice_devids.h
index 46ffdee2d..890a47b24 100644
--- a/drivers/net/ice/base/ice_devids.h
+++ b/drivers/net/ice/base/ice_devids.h
@@ -18,15 +18,23 @@
 #define ICE_DEV_ID_E810_XXV_QSFP	0x159A
 /* Intel(R) Ethernet Controller E810-XXV for SFP */
 #define ICE_DEV_ID_E810_XXV_SFP		0x159B
-/* Intel(R) Ethernet Connection C822N for backplane */
-#define ICE_DEV_ID_C822N_BACKPLANE	0x1890
-/* Intel(R) Ethernet Connection C822N for QSFP */
-#define ICE_DEV_ID_C822N_QSFP		0x1891
-/* Intel(R) Ethernet Connection C822N for SFP */
-#define ICE_DEV_ID_C822N_SFP		0x1892
-/* Intel(R) Ethernet Connection C822N/X557-AT 10GBASE-T */
-#define ICE_DEV_ID_C822N_10G_BASE_T	0x1893
-/* Intel(R) Ethernet Connection C822N 1GbE */
-#define ICE_DEV_ID_C822N_SGMII		0x1894
+/* Intel(R) Ethernet Connection E822-C for backplane */
+#define ICE_DEV_ID_E822C_BACKPLANE	0x1890
+/* Intel(R) Ethernet Connection E822-C for QSFP */
+#define ICE_DEV_ID_E822C_QSFP		0x1891
+/* Intel(R) Ethernet Connection E822-C for SFP */
+#define ICE_DEV_ID_E822C_SFP		0x1892
+/* Intel(R) Ethernet Connection E822-C/X557-AT 10GBASE-T */
+#define ICE_DEV_ID_E822C_10G_BASE_T	0x1893
+/* Intel(R) Ethernet Connection E822-C 1GbE */
+#define ICE_DEV_ID_E822C_SGMII		0x1894
+/* Intel(R) Ethernet Connection E822-L for backplane */
+#define ICE_DEV_ID_E822L_BACKPLANE	0x1897
+/* Intel(R) Ethernet Connection E822-L for SFP */
+#define ICE_DEV_ID_E822L_SFP		0x1898
+/* Intel(R) Ethernet Connection E822-L/X557-AT 10GBASE-T */
+#define ICE_DEV_ID_E822L_10G_BASE_T	0x1899
+/* Intel(R) Ethernet Connection E822-L 1GbE */
+#define ICE_DEV_ID_E822L_SGMII		0x189A
 #endif /* _ICE_DEVIDS_H_ */
diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c
index 2d92524f2..5dd702db3 100644
--- a/drivers/net/ice/base/ice_nvm.c
+++ b/drivers/net/ice/base/ice_nvm.c
@@ -310,11 +310,21 @@ enum ice_status ice_init_nvm(struct ice_hw *hw)

 	nvm->eetrack = (eetrack_hi << 16) | eetrack_lo;

+	switch (hw->device_id) {
 	/* the following devices do not have boot_cfg_tlv yet */
-	if (hw->device_id == ICE_DEV_ID_C822N_BACKPLANE ||
-	    hw->device_id == ICE_DEV_ID_C822N_QSFP ||
-	    hw->device_id == ICE_DEV_ID_C822N_SFP)
+	case ICE_DEV_ID_E822C_BACKPLANE:
+	case ICE_DEV_ID_E822C_QSFP:
+	case ICE_DEV_ID_E822C_10G_BASE_T:
+	case ICE_DEV_ID_E822C_SGMII:
+	case ICE_DEV_ID_E822C_SFP:
+	case ICE_DEV_ID_E822L_BACKPLANE:
+	case ICE_DEV_ID_E822L_SFP:
+	case ICE_DEV_ID_E822L_10G_BASE_T:
+	case ICE_DEV_ID_E822L_SGMII:
 		return status;
+	default:
+		break;
+	}

 	status = ice_get_pfa_module_tlv(hw, &boot_cfg_tlv, &boot_cfg_tlv_len,
 					ICE_SR_BOOT_CFG_PTR);
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index e59761c22..4763770f5 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -163,11 +163,11 @@ static const struct rte_pci_id pci_id_ice_map[] = {
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810_XXV_BACKPLANE) },
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810_XXV_QSFP) },
 	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E810_XXV_SFP) },
-	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_C822N_BACKPLANE) },
-	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_C822N_QSFP) },
-	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_C822N_SFP) },
-	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_C822N_10G_BASE_T) },
-	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_C822N_SGMII) },
+	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E822C_BACKPLANE) },
+	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E822C_QSFP) },
+	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E822C_SFP) },
+	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E822C_10G_BASE_T) },
+	{ RTE_PCI_DEVICE(ICE_INTEL_VENDOR_ID, ICE_DEV_ID_E822C_SGMII) },
 	{ .vendor_id = 0, /* sentinel */ },
 };

From patchwork Mon Mar 23 07:17:26 2020
X-Patchwork-Id: 66991
From: Qi Zhang
Date: Mon, 23 Mar 2020 15:17:26 +0800
Subject: [dpdk-dev] [PATCH v2 03/36] net/ice/base: fix removing MAC rule

Send the correct recp_list to ice_remove_mac_rule: the ICE_SW_LKUP_ETHERTYPE
rule list was being passed instead of ICE_SW_LKUP_MAC. This caused a problem
when adding a new MAC rule on a VF, because the old rule was not removed
correctly.

Fixes: c7dd15931183 ("net/ice/base: add virtual switch code")
Cc: stable@dpdk.org

Signed-off-by: Michal Swiatkowski
Signed-off-by: Paul M Stillwell Jr
Signed-off-by: Qi Zhang
---
 drivers/net/ice/base/ice_switch.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index e88d0f7fe..57b50085d 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -4398,7 +4398,7 @@ ice_remove_vsi_lkup_fltr(struct ice_hw *hw, u16 vsi_handle,

 	switch (lkup) {
 	case ICE_SW_LKUP_MAC:
-		ice_remove_mac_rule(hw, &remove_list_head, recp_list);
+		ice_remove_mac_rule(hw, &remove_list_head, &recp_list[lkup]);
 		break;
 	case ICE_SW_LKUP_VLAN:
 		ice_remove_vlan(hw, &remove_list_head);
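The shape of this bug and its fix, as a standalone sketch (types and names
are illustrative, not the driver's): a recipe table is indexed by lookup
type, and the remover must be handed the element for the rule's own lookup
type rather than whatever pointer happens to be in scope.

#include <stdio.h>

enum lkup { LKUP_MAC, LKUP_VLAN, LKUP_ETHERTYPE, LKUP_MAX };

struct recipe { const char *name; };

/* Hypothetical remover: operates on exactly the recipe it is given. */
static void remove_rule(const struct recipe *r)
{
    printf("removing from %s list\n", r->name);
}

int main(void)
{
    struct recipe recp_list[LKUP_MAX] = {
        [LKUP_MAC]       = { "MAC" },
        [LKUP_VLAN]      = { "VLAN" },
        [LKUP_ETHERTYPE] = { "ETHERTYPE" },
    };
    enum lkup lkup = LKUP_MAC;

    /* Bug shape: a stale or mis-advanced recp_list pointer selects the
     * wrong filter list. Fix shape, as in the patch: index the table by
     * the lookup type being processed. */
    remove_rule(&recp_list[lkup]);
    return 0;
}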
From patchwork Mon Mar 23 07:17:27 2020
X-Patchwork-Id: 66992
From: Qi Zhang
Date: Mon, 23 Mar 2020 15:17:27 +0800
Subject: [dpdk-dev] [PATCH v2 04/36] net/ice/base: read PSM clock frequency from register

Read the GLGEN_CLKSTAT_SRC register to determine which PSM clock frequency
is selected. This ensures that the rate limiter profile calculations will
be correct.
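The decode step this implements, as a minimal standalone sketch. The field
position and width below are illustrative (the driver's
GLGEN_CLKSTAT_SRC_PSM_CLK_SRC_S/_M macros play that role), and the register
read is simulated; the source-code-to-frequency mapping mirrors the values
the diff introduces.

#include <stdio.h>
#include <stdint.h>

/* Illustrative field layout: a 2-bit clock-source selector. */
#define PSM_CLK_SRC_S  0
#define PSM_CLK_SRC_M  (0x3u << PSM_CLK_SRC_S)

static uint32_t decode_psm_clk_hz(uint32_t reg)
{
    uint32_t src = (reg & PSM_CLK_SRC_M) >> PSM_CLK_SRC_S;

    switch (src) {
    case 0x0: return 367647059u;  /* 367 MHz */
    case 0x1: return 416666667u;  /* 416 MHz */
    case 0x2: return 446428571u;  /* 446 MHz */
    case 0x3: return 390625000u;  /* 390 MHz */
    default:  return 446428571u;  /* safe default, as in the patch */
    }
}

int main(void)
{
    /* Pretend the register read returned source code 0x1. */
    printf("PSM clock: %u Hz\n", decode_psm_clk_hz(0x1));
    return 0;
}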
Fixes: 453d087ccaff ("net/ice/base: add common functions")
Cc: stable@dpdk.org

Signed-off-by: Ben Shelton
Signed-off-by: Paul M Stillwell Jr
Signed-off-by: Qi Zhang
---
 drivers/net/ice/base/ice_common.c |  1 +
 drivers/net/ice/base/ice_sched.c  | 57 ++++++++++++++++++++++++++++++++++-----
 drivers/net/ice/base/ice_sched.h  |  7 ++++-
 drivers/net/ice/base/ice_type.h   |  4 ++-
 4 files changed, 60 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 786e99d21..9ef1aeef2 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -672,6 +672,7 @@ enum ice_status ice_init_hw(struct ice_hw *hw)
 			  "Failed to get scheduler allocated resources\n");
 		goto err_unroll_alloc;
 	}
+	ice_sched_get_psm_clk_freq(hw);

 	/* Initialize port_info struct with scheduler data */
 	status = ice_sched_init_port(hw->port_info);
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 553fc28ff..740f7c3ff 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -1369,6 +1369,46 @@ enum ice_status ice_sched_query_res_alloc(struct ice_hw *hw)
 }

 /**
+ * ice_sched_get_psm_clk_freq - determine the PSM clock frequency
+ * @hw: pointer to the HW struct
+ *
+ * Determine the PSM clock frequency and store in HW struct
+ */
+void ice_sched_get_psm_clk_freq(struct ice_hw *hw)
+{
+	u32 val, clk_src;
+
+	val = rd32(hw, GLGEN_CLKSTAT_SRC);
+	clk_src = (val & GLGEN_CLKSTAT_SRC_PSM_CLK_SRC_M) >>
+		GLGEN_CLKSTAT_SRC_PSM_CLK_SRC_S;
+
+#define PSM_CLK_SRC_367_MHZ 0x0
+#define PSM_CLK_SRC_416_MHZ 0x1
+#define PSM_CLK_SRC_446_MHZ 0x2
+#define PSM_CLK_SRC_390_MHZ 0x3
+
+	switch (clk_src) {
+	case PSM_CLK_SRC_367_MHZ:
+		hw->psm_clk_freq = ICE_PSM_CLK_367MHZ_IN_HZ;
+		break;
+	case PSM_CLK_SRC_416_MHZ:
+		hw->psm_clk_freq = ICE_PSM_CLK_416MHZ_IN_HZ;
+		break;
+	case PSM_CLK_SRC_446_MHZ:
+		hw->psm_clk_freq = ICE_PSM_CLK_446MHZ_IN_HZ;
+		break;
+	case PSM_CLK_SRC_390_MHZ:
+		hw->psm_clk_freq = ICE_PSM_CLK_390MHZ_IN_HZ;
+		break;
+	default:
+		ice_debug(hw, ICE_DBG_SCHED, "PSM clk_src unexpected %u\n",
+			  clk_src);
+		/* fall back to a safe default */
+		hw->psm_clk_freq = ICE_PSM_CLK_446MHZ_IN_HZ;
+	}
+}
+
+/**
 * ice_sched_find_node_in_subtree - Find node in part of base node subtree
 * @hw: pointer to the HW struct
 * @base: pointer to the base node
@@ -3671,11 +3711,12 @@ ice_cfg_agg_bw_alloc(struct ice_port_info *pi, u32 agg_id, u8 ena_tcmap,

 /**
 * ice_sched_calc_wakeup - calculate RL profile wakeup parameter
+ * @hw: pointer to the HW struct
 * @bw: bandwidth in Kbps
 *
 * This function calculates the wakeup parameter of RL profile.
 */
-static u16 ice_sched_calc_wakeup(s32 bw)
+static u16 ice_sched_calc_wakeup(struct ice_hw *hw, s32 bw)
 {
 	s64 bytes_per_sec, wakeup_int, wakeup_a, wakeup_b, wakeup_f;
 	s32 wakeup_f_int;
@@ -3683,7 +3724,7 @@ static u16 ice_sched_calc_wakeup(s32 bw)

 	/* Get the wakeup integer value */
 	bytes_per_sec = DIV_64BIT(((s64)bw * 1000), BITS_PER_BYTE);
-	wakeup_int = DIV_64BIT(ICE_RL_PROF_FREQUENCY, bytes_per_sec);
+	wakeup_int = DIV_64BIT(hw->psm_clk_freq, bytes_per_sec);
 	if (wakeup_int > 63) {
 		wakeup = (u16)((1 << 15) | wakeup_int);
 	} else {
@@ -3692,7 +3733,7 @@ static u16 ice_sched_calc_wakeup(s32 bw)
 		 */
 		wakeup_b = (s64)ICE_RL_PROF_MULTIPLIER * wakeup_int;
 		wakeup_a = DIV_64BIT((s64)ICE_RL_PROF_MULTIPLIER *
-				     ICE_RL_PROF_FREQUENCY, bytes_per_sec);
+				     hw->psm_clk_freq, bytes_per_sec);

 		/* Get Fraction value */
 		wakeup_f = wakeup_a - wakeup_b;
@@ -3712,13 +3753,15 @@ static u16 ice_sched_calc_wakeup(s32 bw)

 /**
 * ice_sched_bw_to_rl_profile - convert BW to profile parameters
+ * @hw: pointer to the HW struct
 * @bw: bandwidth in Kbps
 * @profile: profile parameters to return
 *
 * This function converts the BW to profile structure format.
 */
 static enum ice_status
-ice_sched_bw_to_rl_profile(u32 bw, struct ice_aqc_rl_profile_elem *profile)
+ice_sched_bw_to_rl_profile(struct ice_hw *hw, u32 bw,
+			   struct ice_aqc_rl_profile_elem *profile)
 {
 	enum ice_status status = ICE_ERR_PARAM;
 	s64 bytes_per_sec, ts_rate, mv_tmp;
@@ -3738,7 +3781,7 @@ ice_sched_bw_to_rl_profile(u32 bw, struct ice_aqc_rl_profile_elem *profile)
 	for (i = 0; i < 64; i++) {
 		u64 pow_result = BIT_ULL(i);

-		ts_rate = DIV_64BIT((s64)ICE_RL_PROF_FREQUENCY,
+		ts_rate = DIV_64BIT((s64)hw->psm_clk_freq,
 				    pow_result * ICE_RL_PROF_TS_MULTIPLIER);
 		if (ts_rate <= 0)
 			continue;
@@ -3762,7 +3805,7 @@ ice_sched_bw_to_rl_profile(u32 bw, struct ice_aqc_rl_profile_elem *profile)
 	if (found) {
 		u16 wm;

-		wm = ice_sched_calc_wakeup(bw);
+		wm = ice_sched_calc_wakeup(hw, bw);
 		profile->rl_multiply = CPU_TO_LE16(mv);
 		profile->wake_up_calc = CPU_TO_LE16(wm);
 		profile->rl_encode = CPU_TO_LE16(encode);
@@ -3831,7 +3874,7 @@ ice_sched_add_rl_profile(struct ice_port_info *pi,
 	if (!rl_prof_elem)
 		return NULL;

-	status = ice_sched_bw_to_rl_profile(bw, &rl_prof_elem->profile);
+	status = ice_sched_bw_to_rl_profile(hw, bw, &rl_prof_elem->profile);
 	if (status != ICE_SUCCESS)
 		goto exit_add_rl_prof;

diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h
index d6b467477..1a8549931 100644
--- a/drivers/net/ice/base/ice_sched.h
+++ b/drivers/net/ice/base/ice_sched.h
@@ -25,12 +25,16 @@
 	((BIT(11) - 1) * 64) /* In Bytes */
 #define ICE_MAX_BURST_SIZE_KBYTE_GRANULARITY	ICE_MAX_BURST_SIZE_ALLOWED

-#define ICE_RL_PROF_FREQUENCY 446000000
 #define ICE_RL_PROF_ACCURACY_BYTES 128
 #define ICE_RL_PROF_MULTIPLIER 10000
 #define ICE_RL_PROF_TS_MULTIPLIER 32
 #define ICE_RL_PROF_FRACTION 512

+#define ICE_PSM_CLK_367MHZ_IN_HZ 367647059
+#define ICE_PSM_CLK_416MHZ_IN_HZ 416666667
+#define ICE_PSM_CLK_446MHZ_IN_HZ 446428571
+#define ICE_PSM_CLK_390MHZ_IN_HZ 390625000
+
 struct rl_profile_params {
 	u32 bw;	/* in Kbps */
 	u16 rl_multiplier;
@@ -83,6 +87,7 @@ ice_aq_query_sched_elems(struct ice_hw *hw, u16 elems_req,
 			 u16 *elems_ret, struct ice_sq_cd *cd);
 enum ice_status ice_sched_init_port(struct ice_port_info *pi);
 enum ice_status ice_sched_query_res_alloc(struct ice_hw *hw);
+void ice_sched_get_psm_clk_freq(struct ice_hw *hw);

 /* Functions to cleanup scheduler SW DB */
 void ice_sched_clear_port(struct ice_port_info *pi);
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 9773a549f..237220ee8 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -524,7 +524,7 @@ struct ice_sched_node {
 #define ICE_TXSCHED_GET_EIR_BWALLOC(x)	\
 	LE16_TO_CPU((x)->info.eir_bw.bw_alloc)

-struct ice_sched_rl_profle {
+struct ice_sched_rl_profile {
 	u32 rate; /* In Kbps */
 	struct ice_aqc_rl_profile_elem info;
 };
@@ -741,6 +741,8 @@ struct ice_hw {
 	struct ice_sched_rl_profile **cir_profiles;
 	struct ice_sched_rl_profile **eir_profiles;
 	struct ice_sched_rl_profile **srl_profiles;
+	/* PSM clock frequency for calculating RL profile params */
+	u32 psm_clk_freq;
 	u64 debug_mask;	/* BITMAP for debug mask */
 	enum ice_mac_type mac_type;
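A worked instance of the integer branch of the wakeup calculation above,
using the 446 MHz constant the patch introduces; the fractional branch
(wakeup_int <= 63) is omitted here. This is a sketch of the arithmetic
only, not driver code.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Inputs: a 10 Mbps shaped rate on the 446 MHz PSM clock. */
    int64_t bw_kbps = 10000;
    int64_t psm_clk_freq = 446428571;  /* ICE_PSM_CLK_446MHZ_IN_HZ */

    int64_t bytes_per_sec = (bw_kbps * 1000) / 8;       /* 1,250,000 */
    int64_t wakeup_int = psm_clk_freq / bytes_per_sec;  /* 357 */

    /* The integer part exceeds 63, so the coarse path applies:
     * set bit 15 and store the integer value directly. */
    uint16_t wakeup = (uint16_t)((1 << 15) | wakeup_int);
    printf("wakeup = 0x%04x\n", wakeup);                /* 0x8165 */
    return 0;
}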
From patchwork Mon Mar 23 07:17:28 2020
X-Patchwork-Id: 66993
From: Qi Zhang
Date: Mon, 23 Mar 2020 15:17:28 +0800
Subject: [dpdk-dev] [PATCH v2 05/36] net/ice/base: allow VLAN and ethertype filter for port

Add new API functions that allow the caller to choose the port on which a
VLAN or ethertype rule is going to be added.
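The refactor pattern the diff below applies, as a standalone sketch (names
and types are illustrative): the existing public entry point becomes a thin
wrapper that validates arguments and delegates to a worker parameterized on
the switch info and logical port, so other callers can target a specific
port.

#include <stddef.h>

struct sw_info { int dummy; /* per-switch recipe lists would live here */ };
struct hw { struct sw_info *switch_info; unsigned char lport; };

/* Worker: takes switch info and logical port explicitly, mirroring the
 * new *_rule functions in the patch. */
static int add_eth_rule(struct hw *hw, struct sw_info *sw, unsigned char lport)
{
    (void)hw; (void)sw; (void)lport;
    return 0;   /* rule programming would happen here */
}

/* Thin public wrapper: keeps the old signature and default behavior by
 * passing this HW's own switch info and port to the worker. */
int add_eth(struct hw *hw)
{
    if (!hw)
        return -1;
    return add_eth_rule(hw, hw->switch_info, hw->lport);
}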
Signed-off-by: Michal Swiatkowski
Signed-off-by: Paul M Stillwell Jr
Signed-off-by: Qi Zhang
---
 drivers/net/ice/base/ice_switch.c | 162 ++++++++++++++++++++++++++------------
 1 file changed, 112 insertions(+), 50 deletions(-)

diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 57b50085d..0f2a5b3e9 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -2704,8 +2704,7 @@ ice_find_rule_entry(struct LIST_HEAD_TYPE *list_head,

 /**
 * ice_find_vsi_list_entry - Search VSI list map with VSI count 1
- * @hw: pointer to the hardware structure
- * @recp_id: lookup type for which VSI lists needs to be searched
+ * @recp_list: VSI lists needs to be searched
 * @vsi_handle: VSI handle to be found in VSI list
 * @vsi_list_id: VSI list ID found containing vsi_handle
 *
@@ -2714,15 +2713,14 @@ ice_find_rule_entry(struct LIST_HEAD_TYPE *list_head,
 * than 1 vsi_count. Returns pointer to VSI list entry if found.
 */
 static struct ice_vsi_list_map_info *
-ice_find_vsi_list_entry(struct ice_hw *hw, u8 recp_id, u16 vsi_handle,
+ice_find_vsi_list_entry(struct ice_sw_recipe *recp_list, u16 vsi_handle,
			u16 *vsi_list_id)
 {
 	struct ice_vsi_list_map_info *map_info = NULL;
-	struct ice_switch_info *sw = hw->switch_info;
 	struct LIST_HEAD_TYPE *list_head;

-	list_head = &sw->recp_list[recp_id].filt_rules;
-	if (sw->recp_list[recp_id].adv_rule) {
+	list_head = &recp_list->filt_rules;
+	if (recp_list->adv_rule) {
 		struct ice_adv_fltr_mgmt_list_entry *list_itr;

 		LIST_FOR_EACH_ENTRY(list_itr, list_head,
@@ -3267,7 +3265,7 @@ ice_add_mac_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list,
 * @hw: pointer to the hardware structure
 * @m_list: list of MAC addresses and forwarding information
 *
- * Function add mac rule for logical port from hw struct
+ * Function add MAC rule for logical port from HW struct
 */
 enum ice_status
 ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list)
@@ -3282,15 +3280,15 @@ ice_add_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list)
 /**
 * ice_add_vlan_internal - Add one VLAN based filter rule
 * @hw: pointer to the hardware structure
+ * @recp_list: recipe list for which rule has to be added
 * @f_entry: filter entry containing one VLAN information
 */
 static enum ice_status
-ice_add_vlan_internal(struct ice_hw *hw, struct ice_fltr_list_entry *f_entry)
+ice_add_vlan_internal(struct ice_hw *hw, struct ice_sw_recipe *recp_list,
+		      struct ice_fltr_list_entry *f_entry)
 {
-	struct ice_switch_info *sw = hw->switch_info;
 	struct ice_fltr_mgmt_list_entry *v_list_itr;
 	struct ice_fltr_info *new_fltr, *cur_fltr;
-	struct ice_sw_recipe *recp_list;
 	enum ice_sw_lkup_type lkup_type;
 	u16 vsi_list_id = 0, vsi_handle;
 	struct ice_lock *rule_lock; /* Lock to protect filter rule list */
@@ -3313,7 +3311,6 @@ ice_add_vlan_internal(struct ice_hw *hw, struct ice_fltr_list_entry *f_entry)
 	new_fltr->src = new_fltr->fwd_id.hw_vsi_id;
 	lkup_type = new_fltr->lkup_type;
 	vsi_handle = new_fltr->vsi_handle;
-	recp_list = &sw->recp_list[ICE_SW_LKUP_VLAN];
 	rule_lock = &recp_list->filt_rule_lock;
 	ice_acquire_lock(rule_lock);
 	v_list_itr = ice_find_rule_entry(&recp_list->filt_rules, new_fltr);
@@ -3326,7 +3323,7 @@ ice_add_vlan_internal(struct ice_hw *hw, struct ice_fltr_list_entry *f_entry)
 		 * want to add. If found, use the same vsi_list_id for
 		 * this new VLAN rule or else create a new list.
 		 */
-		map_info = ice_find_vsi_list_entry(hw, ICE_SW_LKUP_VLAN,
+		map_info = ice_find_vsi_list_entry(recp_list,
						   vsi_handle,
						   &vsi_list_id);
 		if (!map_info) {
@@ -3436,24 +3433,26 @@ ice_add_vlan_internal(struct ice_hw *hw, struct ice_fltr_list_entry *f_entry)
 }

 /**
- * ice_add_vlan - Add VLAN based filter rule
+ * ice_add_vlan_rule - Add VLAN based filter rule
 * @hw: pointer to the hardware structure
 * @v_list: list of VLAN entries and forwarding information
+ * @sw: pointer to switch info struct for which function add rule
 */
-enum ice_status
-ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
+static enum ice_status
+ice_add_vlan_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list,
+		  struct ice_switch_info *sw)
 {
 	struct ice_fltr_list_entry *v_list_itr;
+	struct ice_sw_recipe *recp_list;

-	if (!v_list || !hw)
-		return ICE_ERR_PARAM;
-
+	recp_list = &sw->recp_list[ICE_SW_LKUP_VLAN];
 	LIST_FOR_EACH_ENTRY(v_list_itr, v_list, ice_fltr_list_entry,
			    list_entry) {
 		if (v_list_itr->fltr_info.lkup_type != ICE_SW_LKUP_VLAN)
 			return ICE_ERR_PARAM;
 		v_list_itr->fltr_info.flag = ICE_FLTR_TX;
-		v_list_itr->status = ice_add_vlan_internal(hw, v_list_itr);
+		v_list_itr->status = ice_add_vlan_internal(hw, recp_list,
							   v_list_itr);
 		if (v_list_itr->status)
 			return v_list_itr->status;
 	}
@@ -3461,6 +3460,22 @@ ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
 }

 /**
+ * ice_add_vlan - Add a VLAN based filter rule
+ * @hw: pointer to the hardware structure
+ * @v_list: list of VLAN and forwarding information
+ *
+ * Function add VLAN rule for logical port from HW struct
+ */
+enum ice_status
+ice_add_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
+{
+	if (!v_list || !hw)
+		return ICE_ERR_PARAM;
+
+	return ice_add_vlan_rule(hw, v_list, hw->switch_info);
+}
+
+/**
 * ice_add_mac_vlan - Add MAC and VLAN pair based filter rule
 * @hw: pointer to the hardware structure
 * @mv_list: list of MAC and VLAN filters
@@ -3499,31 +3514,29 @@ ice_add_mac_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *mv_list)
 }

 /**
- * ice_add_eth_mac - Add ethertype and MAC based filter rule
+ * ice_add_eth_mac_rule - Add ethertype and MAC based filter rule
 * @hw: pointer to the hardware structure
 * @em_list: list of ether type MAC filter, MAC is optional
+ * @sw: pointer to switch info struct for which function add rule
+ * @lport: logic port number on which function add rule
 *
 * This function requires the caller to populate the entries in
 * the filter list with the necessary fields (including flags to
 * indicate Tx or Rx rules).
 */
-enum ice_status
-ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list)
+static enum ice_status
+ice_add_eth_mac_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list,
+		     struct ice_switch_info *sw, u8 lport)
 {
 	struct ice_fltr_list_entry *em_list_itr;
-	u8 lport;
-
-	if (!em_list || !hw)
-		return ICE_ERR_PARAM;

-	lport = hw->port_info->lport;
 	LIST_FOR_EACH_ENTRY(em_list_itr, em_list, ice_fltr_list_entry,
			    list_entry) {
 		struct ice_sw_recipe *recp_list;
 		enum ice_sw_lkup_type l_type;

 		l_type = em_list_itr->fltr_info.lkup_type;
-		recp_list = &hw->switch_info->recp_list[l_type];

 		if (l_type != ICE_SW_LKUP_ETHERTYPE_MAC &&
 		    l_type != ICE_SW_LKUP_ETHERTYPE)
@@ -3538,30 +3551,47 @@ ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list)
 	return ICE_SUCCESS;
 }

+enum ice_status
 /**
- * ice_remove_eth_mac - Remove an ethertype (or MAC) based filter rule
+ * ice_add_eth_mac - Add a ethertype based filter rule
 * @hw: pointer to the hardware structure
- * @em_list: list of ethertype or ethertype MAC entries
+ * @em_list: list of ethertype and forwarding information
+ *
+ * Function add ethertype rule for logical port from HW struct
 */
-enum ice_status
-ice_remove_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list)
+ice_add_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list)
 {
-	struct ice_fltr_list_entry *em_list_itr, *tmp;
-	struct ice_sw_recipe *recp_list;
-
 	if (!em_list || !hw)
 		return ICE_ERR_PARAM;

+	return ice_add_eth_mac_rule(hw, em_list, hw->switch_info,
+				    hw->port_info->lport);
+}
+
+/**
+ * ice_remove_eth_mac_rule - Remove an ethertype (or MAC) based filter rule
+ * @hw: pointer to the hardware structure
+ * @em_list: list of ethertype or ethertype MAC entries
+ * @sw: pointer to switch info struct for which function add rule
+ */
+static enum ice_status
+ice_remove_eth_mac_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list,
+			struct ice_switch_info *sw)
+{
+	struct ice_fltr_list_entry *em_list_itr, *tmp;
+
 	LIST_FOR_EACH_ENTRY_SAFE(em_list_itr, tmp, em_list, ice_fltr_list_entry,
				 list_entry) {
-		enum ice_sw_lkup_type l_type =
-			em_list_itr->fltr_info.lkup_type;
+		struct ice_sw_recipe *recp_list;
+		enum ice_sw_lkup_type l_type;
+
+		l_type = em_list_itr->fltr_info.lkup_type;

 		if (l_type != ICE_SW_LKUP_ETHERTYPE_MAC &&
 		    l_type != ICE_SW_LKUP_ETHERTYPE)
 			return ICE_ERR_PARAM;

-		recp_list = &hw->switch_info->recp_list[l_type];
+		recp_list = &sw->recp_list[l_type];
 		em_list_itr->status = ice_remove_rule_internal(hw, recp_list,
							       em_list_itr);
 		if (em_list_itr->status)
@@ -3571,6 +3601,21 @@ ice_remove_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list)
 }

 /**
+ * ice_remove_eth_mac - remove a ethertype based filter rule
+ * @hw: pointer to the hardware structure
+ * @em_list: list of ethertype and forwarding information
+ *
+ */
+enum ice_status
+ice_remove_eth_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *em_list)
+{
+	if (!em_list || !hw)
+		return ICE_ERR_PARAM;
+
+	return ice_remove_eth_mac_rule(hw, em_list, hw->switch_info);
+}
+
+/**
 * ice_rem_sw_rule_info
 * @hw: pointer to the hardware structure
 * @rule_head: pointer to the switch list structure that we want to delete
@@ -3826,20 +3871,17 @@ ice_remove_mac(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list)
 }

 /**
- * ice_remove_vlan - Remove VLAN based filter rule
+ * ice_remove_vlan_rule - Remove VLAN based filter rule
 * @hw: pointer to the hardware structure
 * @v_list: list of VLAN entries and forwarding information
+ * @recp_list: list from which function remove VLAN
 */
-enum ice_status
-ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
+static enum ice_status
+ice_remove_vlan_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list,
+		     struct ice_sw_recipe *recp_list)
 {
 	struct ice_fltr_list_entry *v_list_itr, *tmp;
-	struct ice_sw_recipe *recp_list;
-
-	if (!v_list || !hw)
-		return ICE_ERR_PARAM;

-	recp_list = &hw->switch_info->recp_list[ICE_SW_LKUP_VLAN];
 	LIST_FOR_EACH_ENTRY_SAFE(v_list_itr, tmp, v_list, ice_fltr_list_entry,
				 list_entry) {
 		enum ice_sw_lkup_type l_type = v_list_itr->fltr_info.lkup_type;
@@ -3855,6 +3897,24 @@ ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
 }

 /**
+ * ice_remove_vlan - remove a VLAN address based filter rule
+ * @hw: pointer to the hardware structure
+ * @v_list: list of VLAN and forwarding information
+ *
+ */
+enum ice_status
+ice_remove_vlan(struct ice_hw *hw, struct LIST_HEAD_TYPE *v_list)
+{
+	struct ice_sw_recipe *recp_list;
+
+	if (!v_list || !hw)
+		return ICE_ERR_PARAM;
+
+	recp_list = &hw->switch_info->recp_list[ICE_SW_LKUP_VLAN];
+	return ice_remove_vlan_rule(hw, v_list, recp_list);
+}
+
+/**
 * ice_remove_mac_vlan - Remove MAC VLAN based filter rule
 * @hw: pointer to the hardware structure
 * @v_list: list of MAC VLAN entries and forwarding information
@@ -4401,7 +4461,7 @@ ice_remove_vsi_lkup_fltr(struct ice_hw *hw, u16 vsi_handle,
 		ice_remove_mac_rule(hw, &remove_list_head, &recp_list[lkup]);
 		break;
 	case ICE_SW_LKUP_VLAN:
-		ice_remove_vlan(hw, &remove_list_head);
+		ice_remove_vlan_rule(hw, &remove_list_head, &recp_list[lkup]);
 		break;
 	case ICE_SW_LKUP_PROMISC:
 	case ICE_SW_LKUP_PROMISC_VLAN:
@@ -6770,7 +6830,8 @@ ice_rem_adv_rule_for_vsi(struct ice_hw *hw, u16 vsi_handle)
 		map_info = NULL;
 		LIST_FOR_EACH_ENTRY(list_itr, list_head,
				    ice_adv_fltr_mgmt_list_entry, list_entry) {
-			map_info = ice_find_vsi_list_entry(hw, rid, vsi_handle,
+			map_info = ice_find_vsi_list_entry(&sw->recp_list[rid],
							   vsi_handle,
							   &vsi_list_id);
 			if (!map_info)
 				continue;
@@ -6843,7 +6904,8 @@ ice_replay_fltr(struct ice_hw *hw, u8 recp_id, struct LIST_HEAD_TYPE *list_head)
			ice_get_hw_vsi_num(hw, vsi_handle);
 		f_entry.fltr_info.fltr_act = ICE_FWD_TO_VSI;
 		if (recp_id == ICE_SW_LKUP_VLAN)
-			status = ice_add_vlan_internal(hw, &f_entry);
+			status = ice_add_vlan_internal(hw, recp_list,
						       &f_entry);
 		else
 			status = ice_add_rule_internal(hw, recp_list,
						       lport,
@@ -6933,7 +6995,7 @@ ice_replay_vsi_fltr(struct ice_hw *hw, u16 vsi_handle, u8 recp_id,
 		if (f_entry.fltr_info.src_id == ICE_SRC_ID_VSI)
 			f_entry.fltr_info.src = hw_vsi_id;
 		if (recp_id == ICE_SW_LKUP_VLAN)
-			status = ice_add_vlan_internal(hw, &f_entry);
+			status = ice_add_vlan_internal(hw, recp_list, &f_entry);
 		else
 			status = ice_add_rule_internal(hw, recp_list,
						       hw->port_info->lport,
From patchwork Mon Mar 23 07:17:29 2020
X-Patchwork-Id: 66994
From: Qi Zhang
Date: Mon, 23 Mar 2020 15:17:29 +0800
Subject: [dpdk-dev] [PATCH v2 06/36] net/ice/base: replace u16 with enum

Use enum ice_flow_field directly so that it does not need to be converted
from u16 for ice_flow_xtract_fld.

Signed-off-by: Paul M Stillwell Jr
Signed-off-by: Qi Zhang
---
 drivers/net/ice/base/ice_flow.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 02f169808..6c413e307 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -937,15 +937,14 @@ ice_flow_create_xtrct_seq(struct ice_hw *hw,

 	for (i = 0; i < params->prof->segs_cnt; i++) {
 		u64 match = params->prof->segs[i].match;
-		u16 j;
+		enum ice_flow_field j;

 		for (j = 0; j < ICE_FLOW_FIELD_IDX_MAX && match; j++) {
 			const u64 bit = BIT_ULL(j);

 			if (match & bit) {
-				status = ice_flow_xtract_fld
-					(hw, params, i, (enum ice_flow_field)j,
-					 match);
+				status = ice_flow_xtract_fld(hw, params, i, j,
							     match);
 				if (status)
 					return status;
 				match &= ~bit;
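The loop shape after this change, as a standalone C sketch (enum and field
names are illustrative): the loop counter is the enum type itself, so no
cast is needed at the call site, and each handled bit is cleared from the
match bitmap so the loop can stop early.

#include <stdio.h>
#include <stdint.h>

enum flow_field { FLD_IPV4_SRC, FLD_IPV4_DST, FLD_TCP_SRC, FLD_MAX };

#define BIT_ULL(n) (1ULL << (n))

int main(void)
{
    uint64_t match = BIT_ULL(FLD_IPV4_DST) | BIT_ULL(FLD_TCP_SRC);
    enum flow_field j;

    /* Iterate with the enum type directly, clearing each handled bit. */
    for (j = 0; j < FLD_MAX && match; j++) {
        uint64_t bit = BIT_ULL(j);

        if (match & bit) {
            printf("extract field %d\n", (int)j);
            match &= ~bit;
        }
    }
    return 0;
}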
From patchwork Mon Mar 23 07:17:30 2020
X-Patchwork-Id: 66995
From: Qi Zhang
Date: Mon, 23 Mar 2020 15:17:30 +0800
Subject: [dpdk-dev] [PATCH v2 07/36] net/ice/base: use struct size helper

For structures using the common C "struct hack" technique to create a
flexible length structure member at the end of the structure, use the
ice_struct_size macro to determine the length of the structure instead of
open coding the calculation.

Signed-off-by: Bruce Allan
Signed-off-by: Paul M Stillwell Jr
Signed-off-by: Qi Zhang
---
 drivers/net/ice/base/ice_common.c    | 4 ++--
 drivers/net/ice/base/ice_flex_pipe.c | 5 ++---
 drivers/net/ice/base/ice_sched.c     | 2 +-
 drivers/net/ice/base/ice_type.h      | 3 +++
 4 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c
index 9ef1aeef2..4b1b31066 100644
--- a/drivers/net/ice/base/ice_common.c
+++ b/drivers/net/ice/base/ice_common.c
@@ -1680,7 +1680,7 @@ ice_alloc_hw_res(struct ice_hw *hw, u16 type, u16 num, bool btm, u16 *res)
 	enum ice_status status;
 	u16 buf_len;

-	buf_len = sizeof(*buf) + sizeof(buf->elem) * (num - 1);
+	buf_len = ice_struct_size(buf, elem, num - 1);
 	buf = (struct ice_aqc_alloc_free_res_elem *)
		ice_malloc(hw, buf_len);
 	if (!buf)
@@ -1720,7 +1720,7 @@ ice_free_hw_res(struct ice_hw *hw, u16 type, u16 num, u16 *res)
 	enum ice_status status;
 	u16 buf_len;

-	buf_len = sizeof(*buf) + sizeof(buf->elem) * (num - 1);
+	buf_len = ice_struct_size(buf, elem, num - 1);
 	buf = (struct ice_aqc_alloc_free_res_elem *)ice_malloc(hw, buf_len);
 	if (!buf)
 		return ICE_ERR_NO_MEMORY;
diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 1598efd67..82b27de0e 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1136,8 +1136,7 @@ static enum ice_status ice_get_pkg_info(struct ice_hw *hw)

 	ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__);

-	size = sizeof(*pkg_info) + (sizeof(pkg_info->pkg_info[0]) *
-				    (ICE_PKG_CNT - 1));
+	size = ice_struct_size(pkg_info, pkg_info, ICE_PKG_CNT - 1);
 	pkg_info = (struct ice_aqc_get_pkg_info_resp *)ice_malloc(hw, size);
 	if (!pkg_info)
 		return ICE_ERR_NO_MEMORY;
@@ -1209,7 +1208,7 @@ static enum ice_status ice_verify_pkg(struct ice_pkg_hdr *pkg, u32 len)
 		return ICE_ERR_CFG;

 	/* make sure segment array fits in package length */
-	if (len < sizeof(*pkg) + ((seg_count - 1) * sizeof(pkg->seg_offset)))
+	if (len < ice_struct_size(pkg, seg_offset, seg_count - 1))
 		return ICE_ERR_BUF_TOO_SHORT;

 	/* all segments must fit within length */
diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c
index 740f7c3ff..03885d027 100644
--- a/drivers/net/ice/base/ice_sched.c
+++ b/drivers/net/ice/base/ice_sched.c
@@ -899,7 +899,7 @@ ice_sched_add_elems(struct ice_port_info *pi, struct ice_sched_node *tc_node,
 	u16 buf_size;
 	u32 teid;

-	buf_size = sizeof(*buf) + sizeof(*buf->generic) * (num_nodes - 1);
+	buf_size = ice_struct_size(buf, generic, num_nodes - 1);
 	buf = (struct ice_aqc_add_elem *)ice_malloc(hw, buf_size);
 	if (!buf)
 		return ICE_ERR_NO_MEMORY;
diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h
index 237220ee8..59dce32fa 100644
--- a/drivers/net/ice/base/ice_type.h
+++ b/drivers/net/ice/base/ice_type.h
@@ -34,6 +34,9 @@

 #define IS_ASCII(_ch)	((_ch) < 0x80)

+#define ice_struct_size(ptr, field, num) \
+	(sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num))
+
 #include "ice_status.h"
 #include "ice_hw_autogen.h"
 #include "ice_devids.h"
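How a macro of this shape is used, as a standalone sketch (struct and macro
names here are generic stand-ins; the driver's version is the
ice_struct_size definition in the hunk above). The pre-C99 "struct hack"
declares a one-element trailing array, so one element is already counted in
sizeof(*ptr) and callers pass num - 1.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Size of the fixed header plus `num` additional trailing elements. */
#define struct_size(ptr, field, num) \
    (sizeof(*(ptr)) + sizeof(*(ptr)->field) * (num))

struct elem_resp {
    unsigned short num_elems;
    unsigned short elem[1];     /* flexible tail, pre-C99 style */
};

int main(void)
{
    unsigned short want = 8;    /* total elements desired */
    struct elem_resp *buf;

    /* One allocation sized for the header plus 8 elements in total;
     * sizeof in the macro does not evaluate buf, so this is safe. */
    buf = malloc(struct_size(buf, elem, want - 1));
    if (!buf)
        return 1;
    memset(buf, 0, struct_size(buf, elem, want - 1));
    buf->num_elems = want;
    printf("allocated %zu bytes\n", struct_size(buf, elem, want - 1));
    free(buf);
    return 0;
}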
From patchwork Mon Mar 23 07:17:31 2020
X-Patchwork-Id: 66996
From: Qi Zhang
Date: Mon, 23 Mar 2020 15:17:31 +0800
Subject: [dpdk-dev] [PATCH v2 08/36] net/ice/base: use more descriptive variable name than 'type'

The variable name 'type' is not very descriptive. Replace such instances
with a more descriptive name, or remove the variable where it is not
needed.

Signed-off-by: Bruce Allan
Signed-off-by: Paul M Stillwell Jr
Signed-off-by: Qi Zhang
---
 drivers/net/ice/base/ice_flex_pipe.c |  8 ++++----
 drivers/net/ice/base/ice_flow.c      |  8 ++++----
 drivers/net/ice/base/ice_switch.c    | 38 ++++++++++++++++++------------------
 3 files changed, 27 insertions(+), 27 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index 82b27de0e..c18ccea48 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -1506,11 +1506,11 @@ ice_get_sw_prof_type(struct ice_hw *hw, struct ice_fv *fv)
 /**
 * ice_get_sw_fv_bitmap - Get switch field vector bitmap based on profile type
 * @hw: pointer to hardware structure
- * @type: type of profiles requested
+ * @req_profs: type of profiles requested
 * @bm: pointer to memory for returning the bitmap of field vectors
 */
 void
-ice_get_sw_fv_bitmap(struct ice_hw *hw, enum ice_prof_type type,
+ice_get_sw_fv_bitmap(struct ice_hw *hw, enum ice_prof_type req_profs,
		     ice_bitmap_t *bm)
 {
 	struct ice_pkg_enum state;
@@ -1519,7 +1519,7 @@ ice_get_sw_fv_bitmap(struct ice_hw *hw, enum ice_prof_type type,

 	ice_memset(&state, 0, sizeof(state), ICE_NONDMA_MEM);

-	if (type == ICE_PROF_ALL) {
+	if (req_profs == ICE_PROF_ALL) {
 		u16 i;

 		for (i = 0; i < ICE_MAX_NUM_PROFILES; i++)
@@ -1543,7 +1543,7 @@ ice_get_sw_fv_bitmap(struct ice_hw *hw, enum ice_prof_type type,
			/* Determine field vector type */
			prof_type = ice_get_sw_prof_type(hw, fv);

-			if (type & prof_type)
+			if (req_profs & prof_type)
				ice_set_bit((u16)offset, bm);
		}
	} while (fv);
diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c
index 6c413e307..d52bce1ce 100644
--- a/drivers/net/ice/base/ice_flow.c
+++ b/drivers/net/ice/base/ice_flow.c
@@ -1615,7 +1615,7 @@ enum ice_status ice_flow_rem_entry(struct ice_hw *hw, u64 entry_h)
 * ice_flow_set_fld_ext - specifies locations of field from entry's input buffer
 * @seg: packet segment the field being set belongs to
 * @fld: field to be set
- * @type: type of the field
+ * @field_type: type of the field
 * @val_loc: if not ICE_FLOW_FLD_OFF_INVAL, location of the value to match from
 *           entry's input buffer
 * @mask_loc: if not ICE_FLOW_FLD_OFF_INVAL, location of mask value from entry's
@@ -1636,16 +1636,16 @@ enum ice_status ice_flow_rem_entry(struct ice_hw *hw, u64 entry_h)
 */
 static void
 ice_flow_set_fld_ext(struct ice_flow_seg_info *seg, enum ice_flow_field fld,
-		     enum ice_flow_fld_match_type type, u16 val_loc,
+		     enum ice_flow_fld_match_type field_type, u16 val_loc,
		     u16 mask_loc, u16 last_loc)
 {
 	u64 bit = BIT_ULL(fld);

 	seg->match |= bit;
-	if (type == ICE_FLOW_FLD_TYPE_RANGE)
+	if (field_type == ICE_FLOW_FLD_TYPE_RANGE)
 		seg->range |= bit;

-	seg->fields[fld].type = type;
+	seg->fields[fld].type = field_type;
 	seg->fields[fld].src.val = val_loc;
 	seg->fields[fld].src.mask = mask_loc;
 	seg->fields[fld].src.last = last_loc;
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index 0f2a5b3e9..adcda9645 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -1872,7 +1872,7 @@ enum ice_status ice_get_initial_sw_cfg(struct ice_hw *hw)
			struct ice_aqc_get_sw_cfg_resp_elem *ele;
			u16 pf_vf_num, swid, vsi_port_num;
			bool is_vf = false;
-			u8 type;
+			u8 res_type;

			ele = rbuf[i].elements;
			vsi_port_num = LE16_TO_CPU(ele->vsi_port_num) &
@@ -1887,10 +1887,10 @@ enum ice_status ice_get_initial_sw_cfg(struct ice_hw *hw)
			    ICE_AQC_GET_SW_CONF_RESP_IS_VF)
				is_vf = true;

-			type = LE16_TO_CPU(ele->vsi_port_num) >>
-				ICE_AQC_GET_SW_CONF_RESP_TYPE_S;
+			res_type = (u8)(LE16_TO_CPU(ele->vsi_port_num) >>
+					ICE_AQC_GET_SW_CONF_RESP_TYPE_S);

-			switch (type) {
+			switch (res_type) {
			case ICE_AQC_GET_SW_CONF_RESP_PHYS_PORT:
			case ICE_AQC_GET_SW_CONF_RESP_VIRT_PORT:
				if (j == num_total_ports) {
@@ -1900,7 +1900,7 @@ enum ice_status ice_get_initial_sw_cfg(struct ice_hw *hw)
					goto out;
				}
				ice_init_port_info(hw->port_info,
-						   vsi_port_num, type, swid,
+						   vsi_port_num, res_type, swid,
						   pf_vf_num, is_vf);
				j++;
				break;
@@ -2355,7 +2355,7 @@ ice_update_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
 	struct ice_aqc_sw_rules_elem *s_rule;
 	enum ice_status status;
 	u16 s_rule_size;
-	u16 type;
+	u16 rule_type;
 	int i;

 	if (!num_vsi)
@@ -2368,11 +2368,11 @@ ice_update_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
 	    lkup_type == ICE_SW_LKUP_PROMISC ||
 	    lkup_type == ICE_SW_LKUP_PROMISC_VLAN ||
 	    lkup_type == ICE_SW_LKUP_LAST)
-		type = remove ? ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR :
-			ICE_AQC_SW_RULES_T_VSI_LIST_SET;
+		rule_type = remove ? ICE_AQC_SW_RULES_T_VSI_LIST_CLEAR :
+			ICE_AQC_SW_RULES_T_VSI_LIST_SET;
 	else if (lkup_type == ICE_SW_LKUP_VLAN)
-		type = remove ? ICE_AQC_SW_RULES_T_PRUNE_LIST_CLEAR :
-			ICE_AQC_SW_RULES_T_PRUNE_LIST_SET;
+		rule_type = remove ? ICE_AQC_SW_RULES_T_PRUNE_LIST_CLEAR :
+			ICE_AQC_SW_RULES_T_PRUNE_LIST_SET;
 	else
 		return ICE_ERR_PARAM;

@@ -2390,7 +2390,7 @@ ice_update_vsi_list_rule(struct ice_hw *hw, u16 *vsi_handle_arr, u16 num_vsi,
			CPU_TO_LE16(ice_get_hw_vsi_num(hw, vsi_handle_arr[i]));
 	}

-	s_rule->type = CPU_TO_LE16(type);
+	s_rule->type = CPU_TO_LE16(rule_type);
 	s_rule->pdata.vsi_list.number_vsi = CPU_TO_LE16(num_vsi);
 	s_rule->pdata.vsi_list.index = CPU_TO_LE16(vsi_list_id);

@@ -5671,35 +5671,35 @@ static void
 ice_get_compat_fv_bitmap(struct ice_hw *hw, struct ice_adv_rule_info *rinfo,
			 ice_bitmap_t *bm)
 {
-	enum ice_prof_type type;
+	enum ice_prof_type prof_type;

 	switch (rinfo->tun_type) {
 	case ICE_NON_TUN:
-		type = ICE_PROF_NON_TUN;
+		prof_type = ICE_PROF_NON_TUN;
 		break;
 	case ICE_ALL_TUNNELS:
-		type = ICE_PROF_TUN_ALL;
+		prof_type = ICE_PROF_TUN_ALL;
 		break;
 	case ICE_SW_TUN_VXLAN_GPE:
 	case ICE_SW_TUN_GENEVE:
 	case ICE_SW_TUN_VXLAN:
 	case ICE_SW_TUN_UDP:
 	case ICE_SW_TUN_GTP:
-		type = ICE_PROF_TUN_UDP;
+		prof_type = ICE_PROF_TUN_UDP;
 		break;
 	case ICE_SW_TUN_NVGRE:
-		type = ICE_PROF_TUN_GRE;
+		prof_type = ICE_PROF_TUN_GRE;
 		break;
 	case ICE_SW_TUN_PPPOE:
-		type = ICE_PROF_TUN_PPPOE;
+		prof_type = ICE_PROF_TUN_PPPOE;
 		break;
 	case ICE_SW_TUN_AND_NON_TUN:
 	default:
-		type = ICE_PROF_ALL;
+		prof_type = ICE_PROF_ALL;
 		break;
 	}

-	ice_get_sw_fv_bitmap(hw, type, bm);
+	ice_get_sw_fv_bitmap(hw, prof_type, bm);
 }
From patchwork Mon Mar 23 07:17:32 2020
X-Patchwork-Id: 66997
From: Qi Zhang
Date: Mon, 23 Mar 2020 15:17:32 +0800
Subject: [dpdk-dev] [PATCH v2 09/36] net/ice/base: refactor a function

Refactor the function ice_prof_bld_xlt2: a switch statement is better
suited to this situation and eliminates the need for the "found" variable.

Signed-off-by: Tony Nguyen
Signed-off-by: Paul M Stillwell Jr
Signed-off-by: Qi Zhang
---
 drivers/net/ice/base/ice_flex_pipe.c | 21 +++++++++------------
 1 file changed, 9 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c
index c18ccea48..5dd7a0d38 100644
--- a/drivers/net/ice/base/ice_flex_pipe.c
+++ b/drivers/net/ice/base/ice_flex_pipe.c
@@ -4051,19 +4051,13 @@ ice_prof_bld_xlt2(enum ice_block blk, struct ice_buf_build *bld,
 	struct ice_chs_chg *tmp;

 	LIST_FOR_EACH_ENTRY(tmp, chgs, ice_chs_chg, list_entry) {
-		bool found = false;
-
-		if (tmp->type == ICE_VSIG_ADD)
-			found = true;
-		else if (tmp->type == ICE_VSI_MOVE)
-			found = true;
-		else if (tmp->type == ICE_VSIG_REM)
-			found = true;
-
-		if (found) {
-			struct ice_xlt2_section *p;
-			u32 id;
+		struct ice_xlt2_section *p;
+		u32 id;

+		switch (tmp->type) {
+		case ICE_VSIG_ADD:
+		case ICE_VSI_MOVE:
+		case ICE_VSIG_REM:
			id = ice_sect_id(blk, ICE_XLT2);
			p = (struct ice_xlt2_section *)
				ice_pkg_buf_alloc_section(bld, id, sizeof(*p));
@@ -4074,6 +4068,9 @@ ice_prof_bld_xlt2(enum ice_block blk, struct ice_buf_build *bld,
			p->count = CPU_TO_LE16(1);
			p->offset = CPU_TO_LE16(tmp->vsi);
			p->value[0] = CPU_TO_LE16(tmp->vsig);
+			break;
+		default:
+			break;
		}
	}
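The refactor in miniature, as a standalone sketch (enum and names are
illustrative): grouped case labels replace a chain of if/else assignments
to a boolean flag, and the shared body runs once for any matching tag.

#include <stdio.h>

enum chg_type { CHG_VSIG_ADD, CHG_VSI_MOVE, CHG_VSIG_REM, CHG_OTHER };

static void handle(enum chg_type t)
{
    switch (t) {
    case CHG_VSIG_ADD:
    case CHG_VSI_MOVE:
    case CHG_VSIG_REM:
        /* Shared body for all three tags -- no 'found' flag needed. */
        printf("emit XLT2 section for change %d\n", (int)t);
        break;
    default:
        /* Everything else is intentionally ignored. */
        break;
    }
}

int main(void)
{
    handle(CHG_VSI_MOVE);   /* emits a section */
    handle(CHG_OTHER);      /* silently skipped */
    return 0;
}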
in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:14:55 -0700 IronPort-SDR: 60yAat8a0yoAVLd16hwVNf+39neiy6SglS9pZw/qtUV8rIb/r2C4M/W8+8Cjdd3hJSQWwenhUT E6GY00kXubDg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111529" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:14:53 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:33 +0800 Message-Id: <20200323071759.13075-11-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 10/36] net/ice/base: add NVM netlist macros X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" As title, these macros are added for future use. Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_adminq_cmd.h | 26 ++++++++++++++++++++++++++ 1 file changed, 26 insertions(+) diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h index 9a79c7645..bcb2dd783 100644 --- a/drivers/net/ice/base/ice_adminq_cmd.h +++ b/drivers/net/ice/base/ice_adminq_cmd.h @@ -1720,6 +1720,32 @@ struct ice_aqc_nvm { #define ICE_AQC_NVM_LLDP_STATUS_M_LEN 4 /* In Bits */ #define ICE_AQC_NVM_LLDP_STATUS_RD_LEN 4 /* In Bytes */ +/* The result of netlist NVM read comes in a TLV format. The actual data + * (netlist header) starts from word offset 1 (byte 2). The FW strips + * out the type field from the TLV header so all the netlist fields + * should adjust their offset value by 1 word (2 bytes) in order to map + * their correct location. 
+ */ +#define ICE_AQC_NVM_LINK_TOPO_NETLIST_MOD_ID 0x11B +#define ICE_AQC_NVM_LINK_TOPO_NETLIST_LEN_OFFSET 1 +#define ICE_AQC_NVM_LINK_TOPO_NETLIST_LEN 2 /* In bytes */ +#define ICE_AQC_NVM_NETLIST_NODE_COUNT_OFFSET 2 +#define ICE_AQC_NVM_NETLIST_NODE_COUNT_LEN 2 /* In bytes */ +#define ICE_AQC_NVM_NETLIST_ID_BLK_START_OFFSET 5 +#define ICE_AQC_NVM_NETLIST_ID_BLK_LEN 0x30 /* In words */ + +/* netlist ID block field offsets (word offsets) */ +#define ICE_AQC_NVM_NETLIST_ID_BLK_MAJOR_VER_LOW 2 +#define ICE_AQC_NVM_NETLIST_ID_BLK_MAJOR_VER_HIGH 3 +#define ICE_AQC_NVM_NETLIST_ID_BLK_MINOR_VER_LOW 4 +#define ICE_AQC_NVM_NETLIST_ID_BLK_MINOR_VER_HIGH 5 +#define ICE_AQC_NVM_NETLIST_ID_BLK_TYPE_LOW 6 +#define ICE_AQC_NVM_NETLIST_ID_BLK_TYPE_HIGH 7 +#define ICE_AQC_NVM_NETLIST_ID_BLK_REV_LOW 8 +#define ICE_AQC_NVM_NETLIST_ID_BLK_REV_HIGH 9 +#define ICE_AQC_NVM_NETLIST_ID_BLK_SHA_HASH 0xA +#define ICE_AQC_NVM_NETLIST_ID_BLK_CUST_VER 0x2F + /* Used for 0x0704 as well as for 0x0705 commands */ struct ice_aqc_nvm_cfg { u8 cmd_flags; From patchwork Mon Mar 23 07:17:34 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 66999 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 32486A0563; Mon, 23 Mar 2020 08:16:35 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 705E61C0CD; Mon, 23 Mar 2020 08:15:11 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id 2F2641C068 for ; Mon, 23 Mar 2020 08:14:59 +0100 (CET) IronPort-SDR: xU7KRVaAqErtz1Mx3CAJsxSijNn/7wLF7/BF/09NuqkxRWgyXUQQg/RDAaWBWEQNsT3qVh3t9Q x6xbzCzeQ0kQ== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:14:58 -0700 IronPort-SDR: hcPqoQZUVnmsIacVslNgNm+IDL2PgAs0/w/zPenQgqcBwkneqebRwZ1ET67TIAxijjvq5NBSPh 0XdJ61klyjCw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111540" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:14:55 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Tony Nguyen , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:34 +0800 Message-Id: <20200323071759.13075-12-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 11/36] net/ice/base: minor fixes X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This is a collection of minor fixes that were found during code review. Changes are: - Call ice_hweight8() instead of calculating it ourselves in ice_bits_max_set(). - Call ice_test_and_clear_bit() over calling ice_is_bit_set() then ice_clear_bit() in ice_rem_vsi_rss_list(). - Remove 'chrs' variable in ice_add_prof_id_flow() as it's not being used for anything. 
- Return result directly instead of assigning to variable then returning the variable in ice_rem_vsig(). - Reduce scope, and don't initialize, 'or_vsig' in ice_add_prof_id_flow(). - Return error immediately in ice_add_prof_id_vsig(). Since the memory wasn't allocated, there is no need to goto and attempt to free memory. - Show that values 37-38 are reserved in ice_flow_avf_hdr_field as the other reserved values are shown. - Fix RCT ordering - Remove initialization of values that aren't needed - Fix function headers to match function names - Use offsetof instead of calculating ourselves in ice_pkg_buf_alloc() - In ice_rem_prof(), do not set status to ICE_SUCCESS as, due to code flow, this will always be ICE_SUCCESS. - Remove unnecessary semicolon in ice_prof_gen_key() - Remove unnecessary initializations - correct bw_alloc type in ice_sched_add_root_node Signed-off-by: Tony Nguyen Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_flex_pipe.c | 52 +++++++++++++++--------------------- drivers/net/ice/base/ice_flex_pipe.h | 2 +- drivers/net/ice/base/ice_flow.c | 7 ++--- drivers/net/ice/base/ice_flow.h | 1 + drivers/net/ice/base/ice_sched.c | 2 +- drivers/net/ice/base/ice_switch.c | 2 +- 6 files changed, 27 insertions(+), 39 deletions(-) diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c index 5dd7a0d38..0cbd15f92 100644 --- a/drivers/net/ice/base/ice_flex_pipe.c +++ b/drivers/net/ice/base/ice_flex_pipe.c @@ -610,7 +610,7 @@ ice_gen_key_word(u8 val, u8 valid, u8 dont_care, u8 nvr_mtch, u8 *key, static bool ice_bits_max_set(const u8 *mask, u16 size, u16 max) { u16 count = 0; - u16 i, j; + u16 i; /* check each byte */ for (i = 0; i < size; i++) { @@ -626,11 +626,9 @@ static bool ice_bits_max_set(const u8 *mask, u16 size, u16 max) return false; /* count the bits in this byte, checking threshold */ - for (j = 0; j < BITS_PER_BYTE; j++) { - count += (mask[i] & (0x1 << j)) ? 1 : 0; - if (count > max) - return false; - } + count += ice_hweight8(mask[i]); + if (count > max) + return false; } return true; @@ -914,9 +912,8 @@ ice_update_pkg(struct ice_hw *hw, struct ice_buf *bufs, u32 count) return status; for (i = 0; i < count; i++) { - bool last = ((i + 1) == count); - struct ice_buf_hdr *bh = (struct ice_buf_hdr *)(bufs + i); + bool last = ((i + 1) == count); status = ice_aq_update_pkg(hw, bh, LE16_TO_CPU(bh->data_end), last, &offset, &info, NULL); @@ -1565,7 +1562,7 @@ ice_get_sw_fv_bitmap(struct ice_hw *hw, enum ice_prof_type req_profs, * allocated for every list entry. 
*/ enum ice_status -ice_get_sw_fv_list(struct ice_hw *hw, u16 *prot_ids, u8 ids_cnt, +ice_get_sw_fv_list(struct ice_hw *hw, u16 *prot_ids, u16 ids_cnt, ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list) { struct ice_sw_fv_list_entry *fvl; @@ -1582,7 +1579,7 @@ ice_get_sw_fv_list(struct ice_hw *hw, u16 *prot_ids, u8 ids_cnt, ice_seg = hw->seg; do { - u8 i; + u16 i; fv = (struct ice_fv *) ice_pkg_enum_entry(ice_seg, &state, ICE_SID_FLD_VEC_SW, @@ -1806,7 +1803,7 @@ static u16 ice_pkg_buf_get_active_sections(struct ice_buf_build *bld) } /** - * ice_pkg_buf_header + * ice_pkg_buf * @bld: pointer to pkg build (allocated by ice_pkg_buf_alloc()) * * Return a pointer to the buffer's header @@ -1915,9 +1912,11 @@ ice_get_open_tunnel_port(struct ice_hw *hw, enum ice_tunnel_type type, * ice_create_tunnel * @hw: pointer to the HW structure * @type: type of tunnel - * @port: port to use for vxlan tunnel + * @port: port of tunnel to create * - * Creates a tunnel + * Create a tunnel by updating the parse graph in the parser. We do that by + * creating a package buffer with the tunnel info and issuing an update package + * command. */ enum ice_status ice_create_tunnel(struct ice_hw *hw, enum ice_tunnel_type type, u16 port) @@ -3887,6 +3886,7 @@ ice_vsig_get_ref(struct ice_hw *hw, enum ice_block blk, u16 vsig, u16 *refs) { u16 idx = vsig & ICE_VSIG_IDX_M; struct ice_vsig_vsi *ptr; + *refs = 0; if (!hw->blk[blk].xlt2.vsig_tbl[idx].in_use) @@ -4090,12 +4090,12 @@ ice_upd_prof_hw(struct ice_hw *hw, enum ice_block blk, struct ice_buf_build *b; struct ice_chs_chg *tmp; enum ice_status status; - u16 pkg_sects = 0; - u16 sects = 0; + u16 pkg_sects; u16 xlt1 = 0; u16 xlt2 = 0; u16 tcam = 0; u16 es = 0; + u16 sects; /* count number of sections we need */ LIST_FOR_EACH_ENTRY(tmp, chgs, ice_chs_chg, list_entry) { @@ -4194,8 +4194,6 @@ static void ice_update_fd_mask(struct ice_hw *hw, u16 prof_id, u32 mask_sel) GLQF_FDMASK_SEL(prof_id), mask_sel); } -#define ICE_SRC_DST_MAX_COUNT 8 - struct ice_fd_src_dst_pair { u8 prot_id; u8 count; @@ -4754,9 +4752,7 @@ ice_rem_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsig, } while (vsi_cur); } - status = ice_vsig_free(hw, blk, vsig); - - return status; + return ice_vsig_free(hw, blk, vsig); } /** @@ -4974,8 +4970,8 @@ static enum ice_status ice_add_prof_to_lst(struct ice_hw *hw, enum ice_block blk, struct LIST_HEAD_TYPE *lst, u64 hdl) { - struct ice_vsig_prof *p; struct ice_prof_map *map; + struct ice_vsig_prof *p; u16 i; map = ice_search_prof_id(hw, blk, hdl); @@ -5252,7 +5248,7 @@ ice_add_prof_id_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsig, u64 hdl, /* new VSIG profile structure */ t = (struct ice_vsig_prof *)ice_malloc(hw, sizeof(*t)); if (!t) - goto err_ice_add_prof_id_vsig; + return ICE_ERR_NO_MEMORY; t->profile_cookie = map->profile_cookie; t->prof_id = map->prof_id; @@ -5371,7 +5367,7 @@ ice_create_prof_id_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl, } /** - * ice_create_vsig_from_list - create a new VSIG with a list of profiles + * ice_create_vsig_from_lst - create a new VSIG with a list of profiles * @hw: pointer to the HW struct * @blk: hardware block * @vsi: the initial VSI that will be in VSIG @@ -5498,13 +5494,11 @@ ice_add_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl) struct ice_vsig_prof *tmp1, *del1; struct LIST_HEAD_TYPE union_lst; struct ice_chs_chg *tmp, *del; - struct LIST_HEAD_TYPE chrs; struct LIST_HEAD_TYPE chg; enum ice_status status; - u16 vsig, or_vsig = 0; + u16 vsig; INIT_LIST_HEAD(&union_lst); - 
INIT_LIST_HEAD(&chrs); INIT_LIST_HEAD(&chg); /* Get profile */ @@ -5516,6 +5510,7 @@ ice_add_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl) status = ice_vsig_find_vsi(hw, blk, vsi, &vsig); if (!status && vsig) { bool only_vsi; + u16 or_vsig; u16 ref; /* found in vsig */ @@ -5625,11 +5620,6 @@ ice_add_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl) ice_free(hw, del1); } - LIST_FOR_EACH_ENTRY_SAFE(del1, tmp1, &chrs, ice_vsig_prof, list) { - LIST_DEL(&del1->list); - ice_free(hw, del1); - } - return status; } diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h index fa72e386d..e3ee882da 100644 --- a/drivers/net/ice/base/ice_flex_pipe.h +++ b/drivers/net/ice/base/ice_flex_pipe.h @@ -36,7 +36,7 @@ ice_get_sw_fv_bitmap(struct ice_hw *hw, enum ice_prof_type type, void ice_init_prof_result_bm(struct ice_hw *hw); enum ice_status -ice_get_sw_fv_list(struct ice_hw *hw, u16 *prot_ids, u8 ids_cnt, +ice_get_sw_fv_list(struct ice_hw *hw, u16 *prot_ids, u16 ids_cnt, ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list); bool ice_get_open_tunnel_port(struct ice_hw *hw, enum ice_tunnel_type type, diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c index d52bce1ce..0838b3bd2 100644 --- a/drivers/net/ice/base/ice_flow.c +++ b/drivers/net/ice/base/ice_flow.c @@ -1152,7 +1152,7 @@ ice_flow_add_prof_sync(struct ice_hw *hw, enum ice_block blk, struct ice_flow_prof **prof) { struct ice_flow_prof_params params; - enum ice_status status = ICE_SUCCESS; + enum ice_status status; u8 i; if (!prof || (acts_cnt && !acts)) @@ -1825,14 +1825,11 @@ void ice_rem_vsi_rss_list(struct ice_hw *hw, u16 vsi_handle) ice_acquire_lock(&hw->rss_locks); LIST_FOR_EACH_ENTRY_SAFE(r, tmp, &hw->rss_list_head, ice_rss_cfg, l_entry) { - if (ice_is_bit_set(r->vsis, vsi_handle)) { - ice_clear_bit(vsi_handle, r->vsis); - + if (ice_test_and_clear_bit(vsi_handle, r->vsis)) if (!ice_is_any_bit_set(r->vsis, ICE_MAX_VSI)) { LIST_DEL(&r->l_entry); ice_free(hw, r); } - } } ice_release_lock(&hw->rss_locks); } diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h index d7b10ccc3..4c2067f0c 100644 --- a/drivers/net/ice/base/ice_flow.h +++ b/drivers/net/ice/base/ice_flow.h @@ -187,6 +187,7 @@ enum ice_flow_avf_hdr_field { ICE_AVF_FLOW_FIELD_IPV4_SCTP, ICE_AVF_FLOW_FIELD_IPV4_OTHER, ICE_AVF_FLOW_FIELD_FRAG_IPV4, + /* Values 37-38 are reserved */ ICE_AVF_FLOW_FIELD_UNICAST_IPV6_UDP = 39, ICE_AVF_FLOW_FIELD_MULTICAST_IPV6_UDP, ICE_AVF_FLOW_FIELD_IPV6_UDP, diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c index 03885d027..575c2ab58 100644 --- a/drivers/net/ice/base/ice_sched.c +++ b/drivers/net/ice/base/ice_sched.c @@ -2907,7 +2907,7 @@ ice_sched_update_elem(struct ice_hw *hw, struct ice_sched_node *node, */ static enum ice_status ice_sched_cfg_node_bw_alloc(struct ice_hw *hw, struct ice_sched_node *node, - enum ice_rl_type rl_type, u8 bw_alloc) + enum ice_rl_type rl_type, u16 bw_alloc) { struct ice_aqc_txsched_elem_data buf; struct ice_aqc_txsched_elem *data; diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c index adcda9645..796390e93 100644 --- a/drivers/net/ice/base/ice_switch.c +++ b/drivers/net/ice/base/ice_switch.c @@ -743,7 +743,7 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid, /* Complete initialization of the root recipe entry */ lkup_exts->n_val_words = fv_word_idx; recps[rid].big_recp = (num_recps > 1); - recps[rid].n_grp_count = 
num_recps; + recps[rid].n_grp_count = (u8)num_recps; recps[rid].root_buf = (struct ice_aqc_recipe_data_elem *) ice_memdup(hw, tmp, recps[rid].n_grp_count * sizeof(*recps[rid].root_buf), ICE_NONDMA_TO_NONDMA); From patchwork Mon Mar 23 07:17:35 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67000 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3DF55A0563; Mon, 23 Mar 2020 08:16:48 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 122951C0D0; Mon, 23 Mar 2020 08:15:13 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id 9F34E1C065 for ; Mon, 23 Mar 2020 08:15:01 +0100 (CET) IronPort-SDR: wK6EIEtifNJ1NnXT5xn7hBFpnVRrJ/toCfbMgyKYFj3+obCfkYzDQ8bLk2gahtdSs/f04bwvw7 ZEaChs5/mPuQ== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:03 -0700 IronPort-SDR: xMtLA7XY7OmpDW8hJb8ivCuuvcX0vyq0FwBqaYmAEGs4iqdjgrIb/sDncH9ahj2CL2eo5j49FC RIcxSumO2bwA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111571" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:01 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Dan Nowlin , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:35 +0800 Message-Id: <20200323071759.13075-13-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 12/36] net/ice/base: support GTPU uplink and downlink X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Enable GTPU uplink and downlink flag usage. TCAM entries with different GTPU extension header flags can be separated.
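For illustration, a minimal self-contained sketch (not part of the patch) of the matching rule that the new ice_ptg_attr_in_use() helper in the diff below applies: a TCAM entry only counts as a duplicate when both its PTG and its attribute flags/mask are already in use, so uplink and downlink GTPU profiles can share a PTG. The struct and flag values here are simplified stand-ins, not driver definitions.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* simplified stand-in for struct ice_tcam_inf */
struct tcam_inf {
	uint8_t ptg;    /* packet type group */
	uint16_t flags; /* hypothetical direction flag (uplink/downlink) */
	uint16_t mask;
};

/* same spirit as ice_ptg_attr_in_use(): the pair is in use only if the
 * PTG and the exact attribute flags/mask already appear in "used"
 */
static bool ptg_attr_in_use(const struct tcam_inf *cand,
			    const struct tcam_inf *used, int cnt)
{
	int i;

	for (i = 0; i < cnt; i++)
		if (used[i].ptg == cand->ptg &&
		    used[i].flags == cand->flags &&
		    used[i].mask == cand->mask)
			return true;
	return false;
}

int main(void)
{
	/* illustrative flag values only */
	struct tcam_inf up   = { .ptg = 7, .flags = 0x1, .mask = 0x3 };
	struct tcam_inf down = { .ptg = 7, .flags = 0x2, .mask = 0x3 };
	struct tcam_inf used[2];
	int cnt = 0;

	used[cnt++] = up;
	/* same PTG but a different GTPU direction flag: no conflict */
	printf("downlink conflicts: %s\n",
	       ptg_attr_in_use(&down, used, cnt) ? "yes" : "no");
	return 0;
}

Keying the duplicate check on the (PTG, flags, mask) triple instead of the PTG alone is what lets the uplink and downlink attribute tables added below coexist within one profile.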
Signed-off-by: Dan Nowlin Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_flex_pipe.c | 68 +++++++++++++++++++++++++++------- drivers/net/ice/base/ice_flow.c | 71 ++++++++++++++++++++++++++++++++++-- 2 files changed, 122 insertions(+), 17 deletions(-) diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c index 0cbd15f92..d8a3f47e8 100644 --- a/drivers/net/ice/base/ice_flex_pipe.c +++ b/drivers/net/ice/base/ice_flex_pipe.c @@ -1435,8 +1435,8 @@ static struct ice_buf_build *ice_pkg_buf_alloc(struct ice_hw *hw) return NULL; buf = (struct ice_buf_hdr *)bld; - buf->data_end = CPU_TO_LE16(sizeof(*buf) - - sizeof(buf->section_entry[0])); + buf->data_end = CPU_TO_LE16(offsetof(struct ice_buf_hdr, + section_entry)); return bld; } @@ -3834,7 +3834,7 @@ ice_prof_gen_key(struct ice_hw *hw, enum ice_block blk, u8 ptg, u16 vsig, default: ice_debug(hw, ICE_DBG_PKG, "Error in profile config\n"); break; - }; + } return ice_set_key(key, ICE_TCAM_KEY_SZ, (u8 *)&inkey, vl_msk, dc_msk, nm_msk, 0, ICE_TCAM_KEY_SZ / 2); @@ -4863,8 +4863,6 @@ enum ice_status ice_rem_prof(struct ice_hw *hw, enum ice_block blk, u64 id) LIST_DEL(&pmap->list); ice_free(hw, pmap); - status = ICE_SUCCESS; - err_ice_rem_prof: ice_release_lock(&hw->blk[blk].es.prof_map_lock); return status; @@ -5146,6 +5144,32 @@ ice_prof_tcam_ena_dis(struct ice_hw *hw, enum ice_block blk, bool enable, } /** + * ice_ptg_attr_in_use - determine if PTG and attribute pair is in use + * @ptg_attr: pointer to the PTG and attribute pair to check + * @ptgs_used: bitmap that denotes which PTGs are in use + * @attr_used: array of PTG and attributes pairs already used + * @attr_cnt: count of entries in the attr_used array + */ +static bool +ice_ptg_attr_in_use(struct ice_tcam_inf *ptg_attr, ice_bitmap_t *ptgs_used, + struct ice_tcam_inf *attr_used[], u16 attr_cnt) +{ + u16 i; + + if (!ice_is_bit_set(ptgs_used, ptg_attr->ptg)) + return false; + + /* the PTG is used, so now look for correct attributes */ + for (i = 0; i < attr_cnt; i++) + if (attr_used[i]->ptg == ptg_attr->ptg && + attr_used[i]->attr.flags == ptg_attr->attr.flags && + attr_used[i]->attr.mask == ptg_attr->attr.mask) + return true; + + return false; +} + +/** * ice_adj_prof_priorities - adjust profile based on priorities * @hw: pointer to the HW struct * @blk: hardware block @@ -5157,10 +5181,18 @@ ice_adj_prof_priorities(struct ice_hw *hw, enum ice_block blk, u16 vsig, struct LIST_HEAD_TYPE *chg) { ice_declare_bitmap(ptgs_used, ICE_XLT1_CNT); + struct ice_tcam_inf **attr_used; + enum ice_status status = ICE_SUCCESS; struct ice_vsig_prof *t; - enum ice_status status; + u16 attr_used_cnt = 0; u16 idx; +#define ICE_MAX_PTG_ATTRS 1024 + attr_used = (struct ice_tcam_inf **)ice_calloc(hw, ICE_MAX_PTG_ATTRS, + sizeof(*attr_used)); + if (!attr_used) + return ICE_ERR_NO_MEMORY; + ice_zero_bitmap(ptgs_used, ICE_XLT1_CNT); idx = vsig & ICE_VSIG_IDX_M; @@ -5178,11 +5210,15 @@ ice_adj_prof_priorities(struct ice_hw *hw, enum ice_block blk, u16 vsig, u16 i; for (i = 0; i < t->tcam_count; i++) { + bool used; + /* Scan the priorities from newest to oldest. * Make sure that the newest profiles take priority. 
*/ - if (ice_is_bit_set(ptgs_used, t->tcam[i].ptg) && - t->tcam[i].in_use) { + used = ice_ptg_attr_in_use(&t->tcam[i], ptgs_used, + attr_used, attr_used_cnt); + + if (used && t->tcam[i].in_use) { /* need to mark this PTG as never match, as it * was already in use and therefore duplicate * (and lower priority) @@ -5192,9 +5228,8 @@ ice_adj_prof_priorities(struct ice_hw *hw, enum ice_block blk, u16 vsig, &t->tcam[i], chg); if (status) - return status; - } else if (!ice_is_bit_set(ptgs_used, t->tcam[i].ptg) && - !t->tcam[i].in_use) { + goto err_ice_adj_prof_priorities; + } else if (!used && !t->tcam[i].in_use) { /* need to enable this PTG, as it in not in use * and not enabled (highest priority) */ @@ -5203,15 +5238,22 @@ ice_adj_prof_priorities(struct ice_hw *hw, enum ice_block blk, u16 vsig, &t->tcam[i], chg); if (status) - return status; + goto err_ice_adj_prof_priorities; } /* keep track of used ptgs */ ice_set_bit(t->tcam[i].ptg, ptgs_used); + if (attr_used_cnt < ICE_MAX_PTG_ATTRS) + attr_used[attr_used_cnt++] = &t->tcam[i]; + else + ice_debug(hw, ICE_DBG_INIT, + "Warn: ICE_MAX_PTG_ATTRS exceeded\n"); } } - return ICE_SUCCESS; +err_ice_adj_prof_priorities: + ice_free(hw, attr_used); + return status; } /** diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c index 0838b3bd2..17fd2423e 100644 --- a/drivers/net/ice/base/ice_flow.c +++ b/drivers/net/ice/base/ice_flow.c @@ -152,7 +152,7 @@ struct ice_flow_field_info ice_flds_info[ICE_FLOW_FIELD_IDX_MAX] = { static const u32 ice_ptypes_mac_ofos[] = { 0xFDC00846, 0xBFBF7F7E, 0xF70001DF, 0xFEFDFDFB, 0x0000077E, 0x00000000, 0x00000000, 0x00000000, - 0x00000000, 0x00003000, 0x00000000, 0x00000000, + 0x00000000, 0x03FFF000, 0x7FFFFFE0, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, @@ -366,6 +366,52 @@ static const struct ice_ptype_attributes ice_attr_gtpu_eh[] = { { ICE_MAC_IPV6_GTPU_IPV6_ICMPV6, ICE_PTYPE_ATTR_GTP_PDU_EH }, }; +static const struct ice_ptype_attributes ice_attr_gtpu_down[] = { + { ICE_MAC_IPV4_GTPU_IPV4_FRAG, ICE_PTYPE_ATTR_GTP_DOWNLINK }, + { ICE_MAC_IPV4_GTPU_IPV4_PAY, ICE_PTYPE_ATTR_GTP_DOWNLINK }, + { ICE_MAC_IPV4_GTPU_IPV4_UDP_PAY, ICE_PTYPE_ATTR_GTP_DOWNLINK }, + { ICE_MAC_IPV4_GTPU_IPV4_TCP, ICE_PTYPE_ATTR_GTP_DOWNLINK }, + { ICE_MAC_IPV4_GTPU_IPV4_ICMP, ICE_PTYPE_ATTR_GTP_DOWNLINK }, + { ICE_MAC_IPV6_GTPU_IPV4_FRAG, ICE_PTYPE_ATTR_GTP_DOWNLINK }, + { ICE_MAC_IPV6_GTPU_IPV4_PAY, ICE_PTYPE_ATTR_GTP_DOWNLINK }, + { ICE_MAC_IPV6_GTPU_IPV4_UDP_PAY, ICE_PTYPE_ATTR_GTP_DOWNLINK }, + { ICE_MAC_IPV6_GTPU_IPV4_TCP, ICE_PTYPE_ATTR_GTP_DOWNLINK }, + { ICE_MAC_IPV6_GTPU_IPV4_ICMP, ICE_PTYPE_ATTR_GTP_DOWNLINK }, + { ICE_MAC_IPV4_GTPU_IPV6_FRAG, ICE_PTYPE_ATTR_GTP_DOWNLINK }, + { ICE_MAC_IPV4_GTPU_IPV6_PAY, ICE_PTYPE_ATTR_GTP_DOWNLINK }, + { ICE_MAC_IPV4_GTPU_IPV6_UDP_PAY, ICE_PTYPE_ATTR_GTP_DOWNLINK }, + { ICE_MAC_IPV4_GTPU_IPV6_TCP, ICE_PTYPE_ATTR_GTP_DOWNLINK }, + { ICE_MAC_IPV4_GTPU_IPV6_ICMPV6, ICE_PTYPE_ATTR_GTP_DOWNLINK }, + { ICE_MAC_IPV6_GTPU_IPV6_FRAG, ICE_PTYPE_ATTR_GTP_DOWNLINK }, + { ICE_MAC_IPV6_GTPU_IPV6_PAY, ICE_PTYPE_ATTR_GTP_DOWNLINK }, + { ICE_MAC_IPV6_GTPU_IPV6_UDP_PAY, ICE_PTYPE_ATTR_GTP_DOWNLINK }, + { ICE_MAC_IPV6_GTPU_IPV6_TCP, ICE_PTYPE_ATTR_GTP_DOWNLINK }, + { ICE_MAC_IPV6_GTPU_IPV6_ICMPV6, ICE_PTYPE_ATTR_GTP_DOWNLINK }, +}; + +static const struct ice_ptype_attributes ice_attr_gtpu_up[] = { + { ICE_MAC_IPV4_GTPU_IPV4_FRAG, ICE_PTYPE_ATTR_GTP_UPLINK }, + { 
ICE_MAC_IPV4_GTPU_IPV4_PAY, ICE_PTYPE_ATTR_GTP_UPLINK }, + { ICE_MAC_IPV4_GTPU_IPV4_UDP_PAY, ICE_PTYPE_ATTR_GTP_UPLINK }, + { ICE_MAC_IPV4_GTPU_IPV4_TCP, ICE_PTYPE_ATTR_GTP_UPLINK }, + { ICE_MAC_IPV4_GTPU_IPV4_ICMP, ICE_PTYPE_ATTR_GTP_UPLINK }, + { ICE_MAC_IPV6_GTPU_IPV4_FRAG, ICE_PTYPE_ATTR_GTP_UPLINK }, + { ICE_MAC_IPV6_GTPU_IPV4_PAY, ICE_PTYPE_ATTR_GTP_UPLINK }, + { ICE_MAC_IPV6_GTPU_IPV4_UDP_PAY, ICE_PTYPE_ATTR_GTP_UPLINK }, + { ICE_MAC_IPV6_GTPU_IPV4_TCP, ICE_PTYPE_ATTR_GTP_UPLINK }, + { ICE_MAC_IPV6_GTPU_IPV4_ICMP, ICE_PTYPE_ATTR_GTP_UPLINK }, + { ICE_MAC_IPV4_GTPU_IPV6_FRAG, ICE_PTYPE_ATTR_GTP_UPLINK }, + { ICE_MAC_IPV4_GTPU_IPV6_PAY, ICE_PTYPE_ATTR_GTP_UPLINK }, + { ICE_MAC_IPV4_GTPU_IPV6_UDP_PAY, ICE_PTYPE_ATTR_GTP_UPLINK }, + { ICE_MAC_IPV4_GTPU_IPV6_TCP, ICE_PTYPE_ATTR_GTP_UPLINK }, + { ICE_MAC_IPV4_GTPU_IPV6_ICMPV6, ICE_PTYPE_ATTR_GTP_UPLINK }, + { ICE_MAC_IPV6_GTPU_IPV6_FRAG, ICE_PTYPE_ATTR_GTP_UPLINK }, + { ICE_MAC_IPV6_GTPU_IPV6_PAY, ICE_PTYPE_ATTR_GTP_UPLINK }, + { ICE_MAC_IPV6_GTPU_IPV6_UDP_PAY, ICE_PTYPE_ATTR_GTP_UPLINK }, + { ICE_MAC_IPV6_GTPU_IPV6_TCP, ICE_PTYPE_ATTR_GTP_UPLINK }, + { ICE_MAC_IPV6_GTPU_IPV6_ICMPV6, ICE_PTYPE_ATTR_GTP_UPLINK }, +}; + static const u32 ice_ptypes_gtpu[] = { 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, @@ -586,6 +632,22 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params) src = (const ice_bitmap_t *)ice_ptypes_gtpc_tid; ice_and_bitmap(params->ptypes, params->ptypes, src, ICE_FLOW_PTYPE_MAX); + } else if (hdrs & ICE_FLOW_SEG_HDR_GTPU_DWN) { + src = (const ice_bitmap_t *)ice_ptypes_gtpu; + ice_and_bitmap(params->ptypes, params->ptypes, + src, ICE_FLOW_PTYPE_MAX); + + /* Attributes for GTP packet with downlink */ + params->attr = ice_attr_gtpu_down; + params->attr_cnt = ARRAY_SIZE(ice_attr_gtpu_down); + } else if (hdrs & ICE_FLOW_SEG_HDR_GTPU_UP) { + src = (const ice_bitmap_t *)ice_ptypes_gtpu; + ice_and_bitmap(params->ptypes, params->ptypes, + src, ICE_FLOW_PTYPE_MAX); + + /* Attributes for GTP packet with uplink */ + params->attr = ice_attr_gtpu_up; + params->attr_cnt = ARRAY_SIZE(ice_attr_gtpu_up); } else if (hdrs & ICE_FLOW_SEG_HDR_GTPU_EH) { src = (const ice_bitmap_t *)ice_ptypes_gtpu; ice_and_bitmap(params->ptypes, params->ptypes, @@ -594,7 +656,8 @@ ice_flow_proc_seg_hdrs(struct ice_flow_prof_params *params) /* Attributes for GTP packet with Extension Header */ params->attr = ice_attr_gtpu_eh; params->attr_cnt = ARRAY_SIZE(ice_attr_gtpu_eh); - } else if (hdrs & ICE_FLOW_SEG_HDR_GTPU_IP) { + } else if ((hdrs & ICE_FLOW_SEG_HDR_GTPU) == + ICE_FLOW_SEG_HDR_GTPU) { src = (const ice_bitmap_t *)ice_ptypes_gtpu; ice_and_bitmap(params->ptypes, params->ptypes, src, ICE_FLOW_PTYPE_MAX); @@ -1238,7 +1301,7 @@ static enum ice_status ice_flow_rem_prof_sync(struct ice_hw *hw, enum ice_block blk, struct ice_flow_prof *prof) { - enum ice_status status = ICE_SUCCESS; + enum ice_status status; /* Remove all remaining flow entries before removing the flow profile */ if (!LIST_EMPTY(&prof->entries)) { @@ -2080,7 +2143,7 @@ ice_add_rss_cfg_sync(struct ice_hw *hw, u16 vsi_handle, u64 hashed_flds, const enum ice_block blk = ICE_BLK_RSS; struct ice_flow_prof *prof = NULL; struct ice_flow_seg_info *segs; - enum ice_status status = ICE_SUCCESS; + enum ice_status status; if (!segs_cnt || segs_cnt > ICE_FLOW_SEG_MAX) return ICE_ERR_PARAM; From patchwork Mon Mar 23 07:17:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang 
X-Patchwork-Id: 67001 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id EAB78A0563; Mon, 23 Mar 2020 08:16:58 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id BC71A1C0D6; Mon, 23 Mar 2020 08:15:14 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id 9E6A81B949 for ; Mon, 23 Mar 2020 08:15:03 +0100 (CET) IronPort-SDR: RSjPmkj/+/3BHl9EqE++X6WmINTAI8U9NSQAiWNNyr0032bCr7nUO6FYYSi66YB37JivMl3Rf2 tTpx4VSj5NlA== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:03 -0700 IronPort-SDR: l8EXuHWv3E98XhuELGn5Bd6/6p5PfK4yYf39a70xPvGzKUIJVE9EAyxTRNe8SYe+k/N03X7Ofh hZWStbvfmmPg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111571" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:01 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Evan Swanson , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:36 +0800 Message-Id: <20200323071759.13075-14-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 13/36] net/ice/base: add link default override support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Adds functions to check for link override firmware support and get the override settings for a port. Link override allows a user to force link settings that are not normally supported. Firmware support is version dependent so a function to check support has been added. The link FC settings will use the override if available. 
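Since firmware support is gated on the FW API version, the check reduces to a three-level version comparison. A standalone sketch of that gate (mirroring the ice_fw_supports_link_override() logic added below, with the ICE_FW_API_LINK_OVERRIDE_* constants inlined as plain defines) looks like this:

#include <stdbool.h>
#include <stdio.h>

/* mirrors ICE_FW_API_LINK_OVERRIDE_{MAJ,MIN,PATCH}: supported from 1.5.2 on */
#define FW_API_LO_MAJ   1
#define FW_API_LO_MIN   5
#define FW_API_LO_PATCH 2

static bool fw_supports_link_override(unsigned int maj, unsigned int min,
				      unsigned int patch)
{
	if (maj > FW_API_LO_MAJ)
		return true;
	if (maj == FW_API_LO_MAJ) {
		if (min > FW_API_LO_MIN)
			return true;
		if (min == FW_API_LO_MIN && patch >= FW_API_LO_PATCH)
			return true;
	}
	return false;
}

int main(void)
{
	printf("1.4.9: %d\n", fw_supports_link_override(1, 4, 9)); /* 0 */
	printf("1.5.2: %d\n", fw_supports_link_override(1, 5, 2)); /* 1 */
	printf("2.0.0: %d\n", fw_supports_link_override(2, 0, 0)); /* 1 */
	return 0;
}

In the driver itself the major/minor/patch values come from hw->api_maj_ver, hw->api_min_ver and hw->api_patch, as the diff below shows.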
Signed-off-by: Evan Swanson Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_adminq_cmd.h | 5 +- drivers/net/ice/base/ice_common.c | 113 +++++++++++++++++++++++++++++++++- drivers/net/ice/base/ice_common.h | 5 ++ drivers/net/ice/base/ice_type.h | 34 ++++++++++ 4 files changed, 154 insertions(+), 3 deletions(-) diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h index bcb2dd783..34c05815f 100644 --- a/drivers/net/ice/base/ice_adminq_cmd.h +++ b/drivers/net/ice/base/ice_adminq_cmd.h @@ -1363,7 +1363,8 @@ struct ice_aqc_get_phy_caps_data { #define ICE_AQC_PHY_FEC_25G_RS_CLAUSE91_EN BIT(6) #define ICE_AQC_PHY_FEC_25G_KR_CLAUSE74_EN BIT(7) #define ICE_AQC_PHY_FEC_MASK MAKEMASK(0xdf, 0) - u8 rsvd1; /* Byte 35 reserved */ + u8 module_compliance_enforcement; +#define ICE_AQC_MOD_ENFORCE_STRICT_MODE BIT(0) u8 extended_compliance_code; #define ICE_MODULE_TYPE_TOTAL_BYTE 3 u8 module_type[ICE_MODULE_TYPE_TOTAL_BYTE]; @@ -1416,7 +1417,7 @@ struct ice_aqc_set_phy_cfg_data { __le16 eee_cap; /* Value from ice_aqc_get_phy_caps */ __le16 eeer_value; u8 link_fec_opt; /* Use defines from ice_aqc_get_phy_caps */ - u8 rsvd1; + u8 module_compliance_enforcement; }; /* Set MAC Config command data structure (direct 0x0603) */ diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c index 4b1b31066..3fae6e731 100644 --- a/drivers/net/ice/base/ice_common.c +++ b/drivers/net/ice/base/ice_common.c @@ -2526,6 +2526,7 @@ ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update) { struct ice_aqc_set_phy_cfg_data cfg = { 0 }; struct ice_phy_cache_mode_data cache_data; + struct ice_link_default_override_tlv tlv; struct ice_aqc_get_phy_caps_data *pcaps; enum ice_status status; u8 pause_mask = 0x0; @@ -2573,7 +2574,18 @@ ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update) ICE_AQC_PHY_EN_RX_LINK_PAUSE); /* set the new capabilities */ - cfg.caps |= pause_mask; + if (pi->fc.req_mode == ICE_FC_AUTO && + ice_fw_supports_link_override(hw)) { + status = ice_get_link_default_override(&tlv, pi); + if (status) + return status; + + if (!(tlv.options & ICE_LINK_OVERRIDE_STRICT_MODE) && + (tlv.options & ICE_LINK_OVERRIDE_EN)) + cfg.caps |= tlv.phy_config & ICE_LINK_OVERRIDE_PAUSE_M; + } else { + cfg.caps |= pause_mask; + } /* If the capabilities have changed, then set the new config */ if (cfg.caps != pcaps->caps) { @@ -4269,3 +4281,102 @@ enum ice_fw_modes ice_get_fw_mode(struct ice_hw *hw) else return ICE_FW_MODE_NORMAL; } + +/** + * ice_fw_supports_link_override + * @hw: pointer to the hardware structure + * + * Checks if the firmware supports link override + */ +bool ice_fw_supports_link_override(struct ice_hw *hw) +{ + if (hw->api_maj_ver == ICE_FW_API_LINK_OVERRIDE_MAJ) { + if (hw->api_min_ver > ICE_FW_API_LINK_OVERRIDE_MIN) + return true; + if (hw->api_min_ver == ICE_FW_API_LINK_OVERRIDE_MIN && + hw->api_patch >= ICE_FW_API_LINK_OVERRIDE_PATCH) + return true; + } else if (hw->api_maj_ver > ICE_FW_API_LINK_OVERRIDE_MAJ) { + return true; + } + + return false; +} + +/** + * ice_get_link_default_override + * @ldo: pointer to the link default override struct + * @pi: pointer to the port info struct + * + * Gets the link default override for a port + */ +enum ice_status +ice_get_link_default_override(struct ice_link_default_override_tlv *ldo, + struct ice_port_info *pi) +{ + u16 i, tlv, tlv_len, tlv_start, buf, offset; + struct ice_hw *hw = pi->hw; + enum ice_status status; + + status 
= ice_get_pfa_module_tlv(hw, &tlv, &tlv_len, + ICE_SR_LINK_DEFAULT_OVERRIDE_PTR); + if (status) { + ice_debug(hw, ICE_DBG_INIT, + "Failed to read link override TLV.\n"); + return status; + } + + /* Each port has its own config; calculate for our port */ + tlv_start = tlv + pi->lport * ICE_SR_PFA_LINK_OVERRIDE_WORDS + + ICE_SR_PFA_LINK_OVERRIDE_OFFSET; + + /* link options first */ + status = ice_read_sr_word(hw, tlv_start, &buf); + if (status) { + ice_debug(hw, ICE_DBG_INIT, + "Failed to read override link options.\n"); + return status; + } + ldo->options = buf & ICE_LINK_OVERRIDE_OPT_M; + ldo->phy_config = (buf & ICE_LINK_OVERRIDE_PHY_CFG_M) >> + ICE_LINK_OVERRIDE_PHY_CFG_S; + + /* link PHY config */ + offset = tlv_start + ICE_SR_PFA_LINK_OVERRIDE_FEC_OFFSET; + status = ice_read_sr_word(hw, offset, &buf); + if (status) { + ice_debug(hw, ICE_DBG_INIT, + "Failed to read override phy config.\n"); + return status; + } + ldo->fec_options = buf & ICE_LINK_OVERRIDE_FEC_OPT_M; + + /* PHY types low */ + offset = tlv_start + ICE_SR_PFA_LINK_OVERRIDE_PHY_OFFSET; + for (i = 0; i < ICE_SR_PFA_LINK_OVERRIDE_PHY_WORDS; i++) { + status = ice_read_sr_word(hw, (offset + i), &buf); + if (status) { + ice_debug(hw, ICE_DBG_INIT, + "Failed to read override link options.\n"); + return status; + } + /* shift 16 bits at a time to fill 64 bits */ + ldo->phy_type_low |= ((u64)buf << (i * 16)); + } + + /* PHY types high */ + offset = tlv_start + ICE_SR_PFA_LINK_OVERRIDE_PHY_OFFSET + + ICE_SR_PFA_LINK_OVERRIDE_PHY_WORDS; + for (i = 0; i < ICE_SR_PFA_LINK_OVERRIDE_PHY_WORDS; i++) { + status = ice_read_sr_word(hw, (offset + i), &buf); + if (status) { + ice_debug(hw, ICE_DBG_INIT, + "Failed to read override link options.\n"); + return status; + } + /* shift 16 bits at a time to fill 64 bits */ + ldo->phy_type_high |= ((u64)buf << (i * 16)); + } + + return status; +} diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h index c73184499..bbff17536 100644 --- a/drivers/net/ice/base/ice_common.h +++ b/drivers/net/ice/base/ice_common.h @@ -140,6 +140,11 @@ enum ice_status ice_clear_pf_cfg(struct ice_hw *hw); enum ice_status ice_aq_set_phy_cfg(struct ice_hw *hw, struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg, struct ice_sq_cd *cd); +bool ice_fw_supports_link_override(struct ice_hw *hw); +enum ice_status +ice_get_link_default_override(struct ice_link_default_override_tlv *ldo, + struct ice_port_info *pi); + enum ice_fc_mode ice_caps_to_fc_mode(u8 caps); enum ice_fec_mode ice_caps_to_fec_mode(u8 caps, u8 fec_options); enum ice_status diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h index 59dce32fa..29fa34fc0 100644 --- a/drivers/net/ice/base/ice_type.h +++ b/drivers/net/ice/base/ice_type.h @@ -168,6 +168,7 @@ enum ice_fc_mode { ICE_FC_RX_PAUSE, ICE_FC_TX_PAUSE, ICE_FC_FULL, + ICE_FC_AUTO, ICE_FC_PFC, ICE_FC_DFLT }; @@ -483,6 +484,28 @@ struct ice_nvm_info { u8 blank_nvm_mode; /* is NVM empty (no FW present)*/ }; +struct ice_link_default_override_tlv { + u8 options; +#define ICE_LINK_OVERRIDE_OPT_M 0x3F +#define ICE_LINK_OVERRIDE_STRICT_MODE BIT(0) +#define ICE_LINK_OVERRIDE_EPCT_DIS BIT(1) +#define ICE_LINK_OVERRIDE_PORT_DIS BIT(2) +#define ICE_LINK_OVERRIDE_EN BIT(3) +#define ICE_LINK_OVERRIDE_AUTO_LINK_DIS BIT(4) +#define ICE_LINK_OVERRIDE_EEE_EN BIT(5) + u8 phy_config; +#define ICE_LINK_OVERRIDE_PHY_CFG_S 8 +#define ICE_LINK_OVERRIDE_PHY_CFG_M (0xC3 << ICE_LINK_OVERRIDE_PHY_CFG_S) +#define ICE_LINK_OVERRIDE_PAUSE_M 0x3 +#define 
ICE_LINK_OVERRIDE_LESM_EN BIT(6) +#define ICE_LINK_OVERRIDE_AUTO_FEC_EN BIT(7) + u8 fec_options; +#define ICE_LINK_OVERRIDE_FEC_OPT_M 0xFF + u8 rsvd1; + u64 phy_type_low; + u64 phy_type_high; +}; + #define ICE_NVM_VER_LEN 32 /* Max number of port to queue branches w.r.t topology */ @@ -1003,6 +1026,7 @@ enum ice_sw_fwd_act_type { #define ICE_SR_EMP_SR_SETTINGS_PTR 0x48 #define ICE_SR_CONFIGURATION_METADATA_PTR 0x4D #define ICE_SR_IMMEDIATE_VALUES_PTR 0x4E +#define ICE_SR_LINK_DEFAULT_OVERRIDE_PTR 0x134 #define ICE_SR_POR_REGISTERS_AUTOLOAD_PTR 0x118 /* Auxiliary field, mask and shift definition for Shadow RAM and NVM Flash */ @@ -1020,6 +1044,16 @@ enum ice_sw_fwd_act_type { */ #define ICE_SR_SW_CHECKSUM_BASE 0xBABA +/* Link override related */ +#define ICE_SR_PFA_LINK_OVERRIDE_WORDS 10 +#define ICE_SR_PFA_LINK_OVERRIDE_PHY_WORDS 4 +#define ICE_SR_PFA_LINK_OVERRIDE_OFFSET 2 +#define ICE_SR_PFA_LINK_OVERRIDE_FEC_OFFSET 1 +#define ICE_SR_PFA_LINK_OVERRIDE_PHY_OFFSET 2 +#define ICE_FW_API_LINK_OVERRIDE_MAJ 1 +#define ICE_FW_API_LINK_OVERRIDE_MIN 5 +#define ICE_FW_API_LINK_OVERRIDE_PATCH 2 + #define ICE_PBA_FLAG_DFLT 0xFAFA /* Hash redirection LUT for VSI - maximum array size */ #define ICE_VSIQF_HLUT_ARRAY_SIZE ((VSIQF_HLUT_MAX_INDEX + 1) * 4) From patchwork Mon Mar 23 07:17:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67002 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id CB0C0A0563; Mon, 23 Mar 2020 08:17:13 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 51DC11C10A; Mon, 23 Mar 2020 08:15:25 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id 6BBD01BF6D for ; Mon, 23 Mar 2020 08:15:05 +0100 (CET) IronPort-SDR: p4n+C9lsZt9iunWazBqMIrJ1J5NbMpkcSloacna3r2jQrZWg8wnCB2/rXQuwERoU3yLVRJm06h gPrwVQTWNdsw== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:05 -0700 IronPort-SDR: /CW8u9W9Ga5PH2I7dOqgFbACCH54mG5RP4pg35g5G/Qf9fgPyk9VAIlSwPXF0geQX5ogKQXqK1 J5up/i5c1xwg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111587" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:03 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Anirudh Venkataramanan , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:37 +0800 Message-Id: <20200323071759.13075-15-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 14/36] net/ice/base: add dedicate MAC type for E810 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add a new MAC type ICE_MAC_E810 to distinguish E810 devices from other devices. 
MAC types for all other devices will be ICE_MAC_GENERIC till there's a need to distinguish further between devices. Signed-off-by: Anirudh Venkataramanan Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_common.c | 42 ++++++++++++++++++++++++++------------- drivers/net/ice/base/ice_type.h | 1 + 2 files changed, 29 insertions(+), 14 deletions(-) diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c index 3fae6e731..3fa2256e8 100644 --- a/drivers/net/ice/base/ice_common.c +++ b/drivers/net/ice/base/ice_common.c @@ -20,24 +20,38 @@ */ static enum ice_status ice_set_mac_type(struct ice_hw *hw) { - enum ice_status status = ICE_SUCCESS; - ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); - if (hw->vendor_id == ICE_INTEL_VENDOR_ID) { - switch (hw->device_id) { - default: - hw->mac_type = ICE_MAC_GENERIC; - break; - } - } else { - status = ICE_ERR_DEVICE_NOT_SUPPORTED; + if (hw->vendor_id != ICE_INTEL_VENDOR_ID) + return ICE_ERR_DEVICE_NOT_SUPPORTED; + + switch (hw->device_id) { + case ICE_DEV_ID_E810C_BACKPLANE: + case ICE_DEV_ID_E810C_QSFP: + case ICE_DEV_ID_E810C_SFP: + case ICE_DEV_ID_E810_XXV_BACKPLANE: + case ICE_DEV_ID_E810_XXV_QSFP: + case ICE_DEV_ID_E810_XXV_SFP: + hw->mac_type = ICE_MAC_E810; + break; + case ICE_DEV_ID_E822C_10G_BASE_T: + case ICE_DEV_ID_E822C_BACKPLANE: + case ICE_DEV_ID_E822C_QSFP: + case ICE_DEV_ID_E822C_SFP: + case ICE_DEV_ID_E822C_SGMII: + case ICE_DEV_ID_E822L_10G_BASE_T: + case ICE_DEV_ID_E822L_BACKPLANE: + case ICE_DEV_ID_E822L_SFP: + case ICE_DEV_ID_E822L_SGMII: + hw->mac_type = ICE_MAC_GENERIC; + break; + default: + hw->mac_type = ICE_MAC_UNKNOWN; + break; } - ice_debug(hw, ICE_DBG_INIT, "found mac_type: %d, status: %d\n", - hw->mac_type, status); - - return status; + ice_debug(hw, ICE_DBG_INIT, "mac_type: %d\n", hw->mac_type); + return ICE_SUCCESS; } /** diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h index 29fa34fc0..3e24bb1dc 100644 --- a/drivers/net/ice/base/ice_type.h +++ b/drivers/net/ice/base/ice_type.h @@ -205,6 +205,7 @@ enum ice_set_fc_aq_failures { /* MAC types */ enum ice_mac_type { ICE_MAC_UNKNOWN = 0, + ICE_MAC_E810, ICE_MAC_GENERIC, }; From patchwork Mon Mar 23 07:17:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67003 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id AD5FBA0563; Mon, 23 Mar 2020 08:17:20 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 8CB131C113; Mon, 23 Mar 2020 08:15:26 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id CAC641C0BE for ; Mon, 23 Mar 2020 08:15:07 +0100 (CET) IronPort-SDR: o8LdDd1lKIGU8oR/ZspZ/qJ/jQBPt8Ovnsm+UCfGSJfRKUmLmO5TAVqFZ9JpgW0i7apYOojyDi vxtJnscQagrA== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:07 -0700 IronPort-SDR: 2M7S3Wizr7UclmETa8fT7nks0f28baYIEPDcQfbJYsT24SuR0eZWxlNkJCSqIK9jmKTOCFaZsa JV/y/O6/oSpw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111607" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by 
orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:05 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Tony Nguyen , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:38 +0800 Message-Id: <20200323071759.13075-16-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 15/36] net/ice/base: capitalize abbreviations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Fix abbreviations as found by abbrevcheck Signed-off-by: Tony Nguyen Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_dcb.c | 8 +++--- drivers/net/ice/base/ice_fdir.c | 6 ++--- drivers/net/ice/base/ice_flex_pipe.c | 52 ++++++++++++++++++------------------ drivers/net/ice/base/ice_flex_type.h | 6 ++--- drivers/net/ice/base/ice_switch.c | 1 - 5 files changed, 36 insertions(+), 37 deletions(-) diff --git a/drivers/net/ice/base/ice_dcb.c b/drivers/net/ice/base/ice_dcb.c index 7048dbd02..63c981b8e 100644 --- a/drivers/net/ice/base/ice_dcb.c +++ b/drivers/net/ice/base/ice_dcb.c @@ -1323,13 +1323,13 @@ enum ice_status ice_set_dcb_cfg(struct ice_port_info *pi) } /** - * ice_aq_query_port_ets - query port ets configuration + * ice_aq_query_port_ets - query port ETS configuration * @pi: port information structure * @buf: pointer to buffer * @buf_size: buffer size in bytes * @cd: pointer to command details structure or NULL * - * query current port ets configuration + * query current port ETS configuration */ enum ice_status ice_aq_query_port_ets(struct ice_port_info *pi, @@ -1416,13 +1416,13 @@ ice_update_port_tc_tree_cfg(struct ice_port_info *pi, } /** - * ice_query_port_ets - query port ets configuration + * ice_query_port_ets - query port ETS configuration * @pi: port information structure * @buf: pointer to buffer * @buf_size: buffer size in bytes * @cd: pointer to command details structure or NULL * - * query current port ets configuration and update the + * query current port ETS configuration and update the * SW DB with the TC changes */ enum ice_status diff --git a/drivers/net/ice/base/ice_fdir.c b/drivers/net/ice/base/ice_fdir.c index ba002586b..90e7e082f 100644 --- a/drivers/net/ice/base/ice_fdir.c +++ b/drivers/net/ice/base/ice_fdir.c @@ -628,13 +628,13 @@ static void ice_pkt_insert_u8(u8 *pkt, int offset, u8 data) } /** - * ice_pkt_insert_u8_tc - insert a u8 value into a memory buffer for tc ipv6. + * ice_pkt_insert_u8_tc - insert a u8 value into a memory buffer for TC ipv6. * @pkt: packet buffer * @offset: offset into buffer * @data: 8 bit value to convert and insert into pkt at offset * - * This function is designed for inserting Traffic Class (tc) for IPv6, - * since that tc is not aligned in number of bytes. Here we split it out + * This function is designed for inserting Traffic Class (TC) for IPv6, + * since that TC is not aligned in number of bytes. Here we split it out * into two part and fill each byte with data copy from pkt, then insert * the two bytes data one by one. 
*/ diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c index d8a3f47e8..2d29e986c 100644 --- a/drivers/net/ice/base/ice_flex_pipe.c +++ b/drivers/net/ice/base/ice_flex_pipe.c @@ -2107,7 +2107,7 @@ ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u8 fv_idx, * @ptg: pointer to variable that receives the PTG * * This function will search the PTGs for a particular ptype, returning the - * PTG ID that contains it through the ptg parameter, with the value of + * PTG ID that contains it through the PTG parameter, with the value of * ICE_DEFAULT_PTG (0) meaning it is part the default PTG. */ static enum ice_status @@ -2124,9 +2124,9 @@ ice_ptg_find_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 *ptg) * ice_ptg_alloc_val - Allocates a new packet type group ID by value * @hw: pointer to the hardware structure * @blk: HW block - * @ptg: the ptg to allocate + * @ptg: the PTG to allocate * - * This function allocates a given packet type group ID specified by the ptg + * This function allocates a given packet type group ID specified by the PTG * parameter. */ static void ice_ptg_alloc_val(struct ice_hw *hw, enum ice_block blk, u8 ptg) @@ -2139,9 +2139,9 @@ static void ice_ptg_alloc_val(struct ice_hw *hw, enum ice_block blk, u8 ptg) * @hw: pointer to the hardware structure * @blk: HW block * @ptype: the ptype to remove - * @ptg: the ptg to remove the ptype from + * @ptg: the PTG to remove the ptype from * - * This function will remove the ptype from the specific ptg, and move it to + * This function will remove the ptype from the specific PTG, and move it to * the default PTG (ICE_DEFAULT_PTG). */ static enum ice_status @@ -2184,7 +2184,7 @@ ice_ptg_remove_ptype(struct ice_hw *hw, enum ice_block blk, u16 ptype, u8 ptg) * @hw: pointer to the hardware structure * @blk: HW block * @ptype: the ptype to add or move - * @ptg: the ptg to add or move the ptype to + * @ptg: the PTG to add or move the ptype to * * This function will either add or move a ptype to a particular PTG depending * on if the ptype is already part of another group. Note that using a @@ -2237,7 +2237,7 @@ struct ice_blk_size_details { u16 xlt2; /* # XLT2 entries */ u16 prof_tcam; /* # profile ID TCAM entries */ u16 prof_id; /* # profile IDs */ - u8 prof_cdid_bits; /* # cdid one-hot bits used in key */ + u8 prof_cdid_bits; /* # CDID one-hot bits used in key */ u16 prof_redir; /* # profile redirection entries */ u16 es; /* # extraction sequence entries */ u16 fvw; /* # field vector words */ @@ -2356,9 +2356,9 @@ ice_vsig_find_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 *vsig) * ice_vsig_alloc_val - allocate a new VSIG by value * @hw: pointer to the hardware structure * @blk: HW block - * @vsig: the vsig to allocate + * @vsig: the VSIG to allocate * - * This function will allocate a given VSIG specified by the vsig parameter. + * This function will allocate a given VSIG specified by the VSIG parameter. 
*/ static u16 ice_vsig_alloc_val(struct ice_hw *hw, enum ice_block blk, u16 vsig) { @@ -3791,7 +3791,7 @@ enum ice_status ice_init_hw_tbls(struct ice_hw *hw) * @blk: the block in which to write profile ID to * @ptg: packet type group (PTG) portion of key * @vsig: VSIG portion of key - * @cdid: cdid portion of key + * @cdid: CDID portion of key * @flags: flag portion of key * @vl_msk: valid mask * @dc_msk: don't care mask @@ -3848,7 +3848,7 @@ ice_prof_gen_key(struct ice_hw *hw, enum ice_block blk, u8 ptg, u16 vsig, * @prof_id: profile ID * @ptg: packet type group (PTG) portion of key * @vsig: VSIG portion of key - * @cdid: cdid portion of key + * @cdid: CDID: portion of key * @flags: flag portion of key * @vl_msk: valid mask * @dc_msk: don't care mask @@ -4406,8 +4406,8 @@ ice_get_ptype_attrib_info(enum ice_ptype_attrib_type type, } /** - * ice_add_prof_attrib - add any ptg with attributes to profile - * @prof: pointer to the profile to which ptg entries will be added + * ice_add_prof_attrib - add any PTG with attributes to profile + * @prof: pointer to the profile to which PTG entries will be added * @ptg: PTG to be added * @ptype: PTYPE that needs to be looked up * @attr: array of attributes that will be considered @@ -4549,7 +4549,7 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[], if (status == ICE_ERR_MAX_LIMIT) break; if (status) { - /* This is simple a ptype/ptg with no + /* This is simple a ptype/PTG with no * attribute */ prof->ptg[prof->ptg_cnt] = ptg; @@ -5036,7 +5036,7 @@ ice_move_vsi(struct ice_hw *hw, enum ice_block blk, u16 vsi, u16 vsig, } /** - * ice_set_tcam_flags - set tcam flag don't care mask + * ice_set_tcam_flags - set TCAM flag don't care mask * @mask: mask for flags * @dc_mask: pointer to the don't care mask */ @@ -5050,9 +5050,9 @@ static void ice_set_tcam_flags(u16 mask, u8 dc_mask[ICE_TCAM_KEY_VAL_SZ]) } /** - * ice_rem_chg_tcam_ent - remove a specific tcam entry from change list + * ice_rem_chg_tcam_ent - remove a specific TCAM entry from change list * @hw: pointer to the HW struct - * @idx: the index of the tcam entry to remove + * @idx: the index of the TCAM entry to remove * @chg: the list of change structures to search */ static void @@ -5073,7 +5073,7 @@ ice_rem_chg_tcam_ent(struct ice_hw *hw, u16 idx, struct LIST_HEAD_TYPE *chg) * @hw: pointer to the HW struct * @blk: hardware block * @enable: true to enable, false to disable - * @vsig: the vsig of the TCAM entry + * @vsig: the VSIG of the TCAM entry * @tcam: pointer the TCAM info structure of the TCAM to disable * @chg: the change list * @@ -5091,13 +5091,13 @@ ice_prof_tcam_ena_dis(struct ice_hw *hw, enum ice_block blk, bool enable, u8 dc_msk[ICE_TCAM_KEY_VAL_SZ] = { 0xFF, 0xFF, 0x00, 0x00, 0x00 }; u8 nm_msk[ICE_TCAM_KEY_VAL_SZ] = { 0x00, 0x00, 0x00, 0x00, 0x00 }; - /* if disabling, free the tcam */ + /* if disabling, free the TCAM */ if (!enable) { status = ice_rel_tcam_idx(hw, blk, tcam->tcam_idx); - /* if we have already created a change for this tcam entry, then + /* if we have already created a change for this TCAM entry, then * we need to remove that entry, in order to prevent writing to - * a tcam entry we no longer will have ownership of. + * a TCAM entry we no longer will have ownership of. 
*/ ice_rem_chg_tcam_ent(hw, tcam->tcam_idx, chg); tcam->tcam_idx = 0; @@ -5105,7 +5105,7 @@ ice_prof_tcam_ena_dis(struct ice_hw *hw, enum ice_block blk, bool enable, return status; } - /* for re-enabling, reallocate a tcam */ + /* for re-enabling, reallocate a TCAM */ status = ice_alloc_tcam_ent(hw, blk, &tcam->tcam_idx); if (status) return status; @@ -5115,7 +5115,7 @@ ice_prof_tcam_ena_dis(struct ice_hw *hw, enum ice_block blk, bool enable, if (!p) return ICE_ERR_NO_MEMORY; - /* set don't care masks for tcam flags */ + /* set don't care masks for TCAM flags */ ice_set_tcam_flags(tcam->attr.mask, dc_msk); status = ice_tcam_write_entry(hw, blk, tcam->tcam_idx, tcam->prof_id, @@ -5326,7 +5326,7 @@ ice_add_prof_id_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsig, u64 hdl, p->vsig = vsig; p->tcam_idx = t->tcam[i].tcam_idx; - /* set don't care masks for tcam flags */ + /* set don't care masks for TCAM flags */ ice_set_tcam_flags(t->tcam[i].attr.mask, dc_msk); /* write the TCAM entry */ @@ -5414,7 +5414,7 @@ ice_create_prof_id_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl, * @blk: hardware block * @vsi: the initial VSI that will be in VSIG * @lst: the list of profile that will be added to the VSIG - * @new_vsig: return of new vsig + * @new_vsig: return of new VSIG * @chg: the change list */ static enum ice_status @@ -5555,7 +5555,7 @@ ice_add_prof_id_flow(struct ice_hw *hw, enum ice_block blk, u16 vsi, u64 hdl) u16 or_vsig; u16 ref; - /* found in vsig */ + /* found in VSIG */ or_vsig = vsig; /* make sure that there is no overlap/conflict between the new diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h index 1be98ea52..c122196e9 100644 --- a/drivers/net/ice/base/ice_flex_type.h +++ b/drivers/net/ice/base/ice_flex_type.h @@ -642,8 +642,8 @@ struct ice_xlt1 { #define ICE_XLT2_CNT 768 #define ICE_MAX_VSIGS 768 -/* Vsig bit layout: - * [0:12]: incremental vsig index 1 to ICE_MAX_VSIGS +/* VSIG bit layout: + * [0:12]: incremental VSIG index 1 to ICE_MAX_VSIGS * [13:15]: PF number of device */ #define ICE_VSIG_IDX_M (0x1FFF) @@ -713,7 +713,7 @@ struct ice_prof_tcam { u16 count; u16 max_prof_id; struct ice_prof_tcam_entry *t; - u8 cdid_bits; /* # cdid bits to use in key, 0, 2, 4, or 8 */ + u8 cdid_bits; /* # CDID bits to use in key, 0, 2, 4, or 8 */ }; struct ice_prof_redir { diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c index 796390e93..f4e020dfc 100644 --- a/drivers/net/ice/base/ice_switch.c +++ b/drivers/net/ice/base/ice_switch.c @@ -3180,7 +3180,6 @@ ice_add_mac_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list, goto ice_add_mac_exit; } - /* Allocate switch rule buffer for the bulk update for unicast */ s_rule_size = ICE_SW_RULE_RX_TX_ETH_HDR_SIZE; s_rule = (struct ice_aqc_sw_rules_elem *) From patchwork Mon Mar 23 07:17:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67004 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id D674FA0563; Mon, 23 Mar 2020 08:17:32 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 15D591C117; Mon, 23 Mar 2020 08:15:28 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id 94B381B949 for ; 
Mon, 23 Mar 2020 08:15:09 +0100 (CET) IronPort-SDR: a+mNw+n5a/wxsSSDvnwsiy82+AKBuOsToQAOwJEDQI4GhBjnvPBM/qNunXoDXd57HYljmlscit jpK61Hwl26tg== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:09 -0700 IronPort-SDR: 0RuVvwjUR/P9Tua6QktR6HDdVpp9wjNGSzWwa4wQhlo3cmJp0wkP4d4WV28DFH+zfyscMoxCee Ns9oOPguCrng== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111620" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:07 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:39 +0800 Message-Id: <20200323071759.13075-17-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 16/36] net/ice/base: add PHY number definition values X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" As title. Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_type.h | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h index 3e24bb1dc..4979580ec 100644 --- a/drivers/net/ice/base/ice_type.h +++ b/drivers/net/ice/base/ice_type.h @@ -838,6 +838,14 @@ struct ice_hw { u8 ucast_shared; /* true if VSIs can share unicast addr */ +#define ICE_PHY_PER_NAC 1 +#define ICE_MAX_QUAD 2 +#define ICE_NUM_QUAD_TYPE 2 +#define ICE_PORTS_PER_QUAD 4 +#define ICE_PHY_0_LAST_QUAD 1 +#define ICE_PORTS_PER_PHY 8 +#define ICE_NUM_EXTERNAL_PORTS ICE_PORTS_PER_PHY + /* Active package version (currently active) */ struct ice_pkg_ver active_pkg_ver; u8 active_pkg_name[ICE_PKG_NAME_SIZE]; From patchwork Mon Mar 23 07:17:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67005 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7F645A0563; Mon, 23 Mar 2020 08:17:43 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 7DC431C11C; Mon, 23 Mar 2020 08:15:29 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id 6DCCF1C0CC for ; Mon, 23 Mar 2020 08:15:11 +0100 (CET) IronPort-SDR: OOMjTk72ZwdG69githJS0HzhI9V4pgyFn8RuI/lmU34nuI4Ilskc7x2teoMO/wHwSM8KTLROsH trwQvtxCiAxQ== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:11 -0700 IronPort-SDR: 8IOL9iZR2JF+qPWxqCKkiQOZM1TwvwdE0ql0RDhO/oGb9c+lwyHoqFQuLY9pkZlGUG4BI0z6Fe 2ZesbdSPdjaA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111632" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 
Mar 2020 00:15:09 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Lev Faerman , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:40 +0800 Message-Id: <20200323071759.13075-18-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 17/36] net/ice/base: add shared driver parameter command X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Adds the Driver Shared Parameters (0x0C90) AQ command. Signed-off-by: Lev Faerman Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_adminq_cmd.h | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h index 34c05815f..f6068a123 100644 --- a/drivers/net/ice/base/ice_adminq_cmd.h +++ b/drivers/net/ice/base/ice_adminq_cmd.h @@ -2181,6 +2181,20 @@ struct ice_aqc_get_pkg_info_resp { struct ice_aqc_get_pkg_info pkg_info[1]; }; +/* Driver Shared Parameters (direct, 0x0C90) */ +struct ice_aqc_driver_shared_params { + u8 set_or_get_op; +#define ICE_AQC_DRIVER_PARAM_OP_MASK BIT(0) +#define ICE_AQC_DRIVER_PARAM_SET 0 +#define ICE_AQC_DRIVER_PARAM_GET 1 + u8 param_indx; +#define ICE_AQC_DRIVER_PARAM_MAX_IDX 15 + u8 rsvd[2]; + __le32 param_val; + __le32 addr_high; + __le32 addr_low; +}; + /* Lan Queue Overflow Event (direct, 0x1001) */ struct ice_aqc_event_lan_overflow { __le32 prtdcb_ruptq; @@ -2269,6 +2283,7 @@ struct ice_aq_desc { struct ice_aqc_get_vsi_resp get_vsi_resp; struct ice_aqc_download_pkg download_pkg; struct ice_aqc_get_pkg_info_list get_pkg_info_list; + struct ice_aqc_driver_shared_params drv_shared_params; struct ice_aqc_set_mac_lb set_mac_lb; struct ice_aqc_alloc_free_res_cmd sw_res_ctrl; struct ice_aqc_get_res_alloc get_res; @@ -2491,6 +2506,8 @@ enum ice_adminq_opc { ice_aqc_opc_update_pkg = 0x0C42, ice_aqc_opc_get_pkg_info_list = 0x0C43, + ice_aqc_opc_driver_shared_params = 0x0C90, + /* Standalone Commands/Events */ ice_aqc_opc_event_lan_overflow = 0x1001, }; From patchwork Mon Mar 23 07:17:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67006 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1BBC3A0563; Mon, 23 Mar 2020 08:17:54 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id BB74B1C121; Mon, 23 Mar 2020 08:15:30 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id 7CD0C1C0D2 for ; Mon, 23 Mar 2020 08:15:13 +0100 (CET) IronPort-SDR: 3AwDRmgfNamS8+PO78xnW6FYFx4MS+F4K/Hp+HSLUas0msoUrbjvgUo1UQXZPcCFIKtx7JEr4x 3P4EKhslRu/A== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:13 -0700 IronPort-SDR: 
tOhD4iIjOSWDxoaCNq+FM2jyuLaLqcnBirsCyhh2FJwhsY8vm0H2VEHi5Pv3wD2iDMOy0o+R82 twJ/Jpsnk2Fw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111649" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:11 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Lev Faerman , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:41 +0800 Message-Id: <20200323071759.13075-19-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 18/36] net/ice/base: add AN masks to Get PHY Caps X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Adds masks indicating AN clauses to the Get PHY Capabilities command. Changes the name of the low_power_ctrl field to be properly descriptive of it being an AN field. Signed-off-by: Lev Faerman Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_adminq_cmd.h | 7 +++++-- drivers/net/ice/base/ice_common.c | 10 +++++----- drivers/net/ice/ice_ethdev.c | 2 +- 3 files changed, 11 insertions(+), 8 deletions(-) diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h index f6068a123..9375d615e 100644 --- a/drivers/net/ice/base/ice_adminq_cmd.h +++ b/drivers/net/ice/base/ice_adminq_cmd.h @@ -1337,8 +1337,11 @@ struct ice_aqc_get_phy_caps_data { #define ICE_AQC_PHY_EN_LESM BIT(6) #define ICE_AQC_PHY_EN_AUTO_FEC BIT(7) #define ICE_AQC_PHY_CAPS_MASK MAKEMASK(0xff, 0) - u8 low_power_ctrl; + u8 low_power_ctrl_an; #define ICE_AQC_PHY_EN_D3COLD_LOW_POWER_AUTONEG BIT(0) +#define ICE_AQC_PHY_AN_EN_CLAUSE28 BIT(1) +#define ICE_AQC_PHY_AN_EN_CLAUSE73 BIT(2) +#define ICE_AQC_PHY_AN_EN_CLAUSE37 BIT(3) __le16 eee_cap; #define ICE_AQC_PHY_EEE_EN_100BASE_TX BIT(0) #define ICE_AQC_PHY_EEE_EN_1000BASE_T BIT(1) @@ -1413,7 +1416,7 @@ struct ice_aqc_set_phy_cfg_data { #define ICE_AQ_PHY_ENA_AUTO_LINK_UPDT BIT(5) #define ICE_AQ_PHY_ENA_LESM BIT(6) #define ICE_AQ_PHY_ENA_AUTO_FEC BIT(7) - u8 low_power_ctrl; + u8 low_power_ctrl_an; __le16 eee_cap; /* Value from ice_aqc_get_phy_caps */ __le16 eeer_value; u8 link_fec_opt; /* Use defines from ice_aqc_get_phy_caps */ diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c index 3fa2256e8..99f696211 100644 --- a/drivers/net/ice/base/ice_common.c +++ b/drivers/net/ice/base/ice_common.c @@ -2393,8 +2393,8 @@ ice_aq_set_phy_cfg(struct ice_hw *hw, struct ice_port_info *pi, ice_debug(hw, ICE_DBG_LINK, "phy_type_high = 0x%llx\n", (unsigned long long)LE64_TO_CPU(cfg->phy_type_high)); ice_debug(hw, ICE_DBG_LINK, "caps = 0x%x\n", cfg->caps); - ice_debug(hw, ICE_DBG_LINK, "low_power_ctrl = 0x%x\n", - cfg->low_power_ctrl); + ice_debug(hw, ICE_DBG_LINK, "low_power_ctrl_an = 0x%x\n", + cfg->low_power_ctrl_an); ice_debug(hw, ICE_DBG_LINK, "eee_cap = 0x%x\n", cfg->eee_cap); ice_debug(hw, ICE_DBG_LINK, "eeer_value = 0x%x\n", cfg->eeer_value); ice_debug(hw, ICE_DBG_LINK, "link_fec_opt = 0x%x\n", cfg->link_fec_opt); @@ -2611,7 +2611,7 @@ ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update) /* Copy over all the old settings */ cfg.phy_type_high = pcaps->phy_type_high; 
cfg.phy_type_low = pcaps->phy_type_low; - cfg.low_power_ctrl = pcaps->low_power_ctrl; + cfg.low_power_ctrl_an = pcaps->low_power_ctrl_an; cfg.eee_cap = pcaps->eee_cap; cfg.eeer_value = pcaps->eeer_value; cfg.link_fec_opt = pcaps->link_fec_options; @@ -2672,7 +2672,7 @@ ice_phy_caps_equals_cfg(struct ice_aqc_get_phy_caps_data *phy_caps, if (phy_caps->phy_type_low != phy_cfg->phy_type_low || phy_caps->phy_type_high != phy_cfg->phy_type_high || ((phy_caps->caps & caps_mask) != (phy_cfg->caps & cfg_mask)) || - phy_caps->low_power_ctrl != phy_cfg->low_power_ctrl || + phy_caps->low_power_ctrl_an != phy_cfg->low_power_ctrl_an || phy_caps->eee_cap != phy_cfg->eee_cap || phy_caps->eeer_value != phy_cfg->eeer_value || phy_caps->link_fec_options != phy_cfg->link_fec_opt) @@ -2699,7 +2699,7 @@ ice_copy_phy_caps_to_cfg(struct ice_aqc_get_phy_caps_data *caps, cfg->phy_type_low = caps->phy_type_low; cfg->phy_type_high = caps->phy_type_high; cfg->caps = caps->caps; - cfg->low_power_ctrl = caps->low_power_ctrl; + cfg->low_power_ctrl_an = caps->low_power_ctrl_an; cfg->eee_cap = caps->eee_cap; cfg->eeer_value = caps->eeer_value; cfg->link_fec_opt = caps->link_fec_options; diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index 4763770f5..7e7bb6954 100644 --- a/drivers/net/ice/ice_ethdev.c +++ b/drivers/net/ice/ice_ethdev.c @@ -3170,7 +3170,7 @@ ice_force_phys_link_state(struct ice_hw *hw, bool link_up) cfg.phy_type_low = pcaps->phy_type_low; cfg.phy_type_high = pcaps->phy_type_high; cfg.caps = pcaps->caps | ICE_AQ_PHY_ENA_AUTO_LINK_UPDT; - cfg.low_power_ctrl = pcaps->low_power_ctrl; + cfg.low_power_ctrl_an = pcaps->low_power_ctrl_an; cfg.eee_cap = pcaps->eee_cap; cfg.eeer_value = pcaps->eeer_value; cfg.link_fec_opt = pcaps->link_fec_options; From patchwork Mon Mar 23 07:17:42 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67007 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id E6240A0563; Mon, 23 Mar 2020 08:18:00 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 5716C1C127; Mon, 23 Mar 2020 08:15:32 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id 952E61C0D9 for ; Mon, 23 Mar 2020 08:15:15 +0100 (CET) IronPort-SDR: XRtZDZ589R8/eAags5dTFxcCBUxKdBdNAbDxIKanNHyrBA8tOC0Buw2YZoHOZb5oObs1B0tz1U mxVJdLKMFS3A== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:15 -0700 IronPort-SDR: tz+u4zV/slEnTKZ9J/0UJt1VzQq15MkicWDJF/uCfCoRJL9jYTSHP5rMk1gZxrEZN0WCltUlyQ DEB1qgpINUkg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111675" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:13 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Jacob Keller , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:42 +0800 Message-Id: <20200323071759.13075-20-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> 
<20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 19/36] net/ice/base: xtract logic of flat NVM read to function X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The ice_read_sr_buf_aq function implements logic to correctly break apart NVM reads into 4Kb chunks. Additionally, it ensures that each read never crosses a Shadow RAM sector boundary. This logic is useful when reading the flat NVM as a byte-addressable stream. Extract that logic in terms of bytes and implement it as ice_read_flat_nvm. Use this new function to implement ice_read_sr_buf_aq function. Signed-off-by: Jacob Keller Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_nvm.c | 114 ++++++++++++++++++++++++++++------------- drivers/net/ice/base/ice_nvm.h | 3 ++ 2 files changed, 81 insertions(+), 36 deletions(-) diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c index 5dd702db3..80420afd3 100644 --- a/drivers/net/ice/base/ice_nvm.c +++ b/drivers/net/ice/base/ice_nvm.c @@ -50,6 +50,74 @@ ice_aq_read_nvm(struct ice_hw *hw, u16 module_typeid, u32 offset, u16 length, } /** + * ice_read_flat_nvm - Read portion of NVM by flat offset + * @hw: pointer to the HW struct + * @offset: offset from beginning of NVM + * @length: (in) number of bytes to read; (out) number of bytes actually read + * @data: buffer to return data in (sized to fit the specified length) + * @read_shadow_ram: if true, read from shadow RAM instead of NVM + * + * Reads a portion of the NVM, as a flat memory space. This function correctly + * breaks read requests across Shadow RAM sectors and ensures that no single + * read request exceeds the maximum 4Kb read for a single AdminQ command. + * + * Returns a status code on failure. Note that the data pointer may be + * partially updated if some reads succeed before a failure. + */ +enum ice_status +ice_read_flat_nvm(struct ice_hw *hw, u32 offset, u32 *length, u8 *data, + bool read_shadow_ram) +{ + enum ice_status status; + u32 inlen = *length; + u32 bytes_read = 0; + bool last_cmd; + + ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); + + *length = 0; + + /* Verify the length of the read if this is for the Shadow RAM */ + if (read_shadow_ram && ((offset + inlen) > (hw->nvm.sr_words * 2u))) { + ice_debug(hw, ICE_DBG_NVM, + "NVM error: requested data is beyond Shadow RAM limit\n"); + return ICE_ERR_PARAM; + } + + do { + u32 read_size, sector_offset; + + /* ice_aq_read_nvm cannot read more than 4Kb at a time. + * Additionally, a read from the Shadow RAM may not cross over + * a sector boundary. Conveniently, the sector size is also + * 4Kb. + */ + sector_offset = offset % ICE_AQ_MAX_BUF_LEN; + read_size = MIN_T(u32, ICE_AQ_MAX_BUF_LEN - sector_offset, + inlen - bytes_read); + + last_cmd = !(bytes_read + read_size < inlen); + + /* ice_aq_read_nvm takes the length as a u16. Our read_size is + * calculated using a u32, but the ICE_AQ_MAX_BUF_LEN maximum + * size guarantees that it will fit within the 2 bytes. 
+ */ + status = ice_aq_read_nvm(hw, ICE_AQC_NVM_START_POINT, + offset, (u16)read_size, + data + bytes_read, last_cmd, + read_shadow_ram, NULL); + if (status) + break; + + bytes_read += read_size; + offset += read_size; + } while (!last_cmd); + + *length = bytes_read; + return status; +} + +/** * ice_check_sr_access_params - verify params for Shadow RAM R/W operations. * @hw: pointer to the HW structure * @offset: offset in words from module start @@ -144,55 +212,29 @@ ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data) * @words: (in) number of words to read; (out) number of words actually read * @data: words read from the Shadow RAM * - * Reads 16 bit words (data buf) from the SR using the ice_read_sr_aq - * method. Ownership of the NVM is taken before reading the buffer and later - * released. + * Reads 16 bit words (data buf) from the Shadow RAM. Ownership of the NVM is + * taken before reading the buffer and later released. */ static enum ice_status ice_read_sr_buf_aq(struct ice_hw *hw, u16 offset, u16 *words, u16 *data) { + u32 bytes = *words * 2, i; enum ice_status status; - bool last_cmd = false; - u16 words_read = 0; - u16 i = 0; ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); - do { - u16 read_size, off_w; - - /* Calculate number of bytes we should read in this step. - * It's not allowed to read more than one page at a time or - * to cross page boundaries. - */ - off_w = offset % ICE_SR_SECTOR_SIZE_IN_WORDS; - read_size = off_w ? - MIN_T(u16, *words, - (ICE_SR_SECTOR_SIZE_IN_WORDS - off_w)) : - MIN_T(u16, (*words - words_read), - ICE_SR_SECTOR_SIZE_IN_WORDS); - - /* Check if this is last command, if so set proper flag */ - if ((words_read + read_size) >= *words) - last_cmd = true; - - status = ice_read_sr_aq(hw, offset, read_size, - data + words_read, last_cmd); - if (status) - goto read_nvm_buf_aq_exit; + /* ice_read_flat_nvm takes into account the 4Kb AdminQ and Shadow RAM + * sector restrictions necessary when reading from the NVM. 
+ */ + status = ice_read_flat_nvm(hw, offset * 2, &bytes, (u8 *)data, true); - /* Increment counter for words already read and move offset to - * new read location - */ - words_read += read_size; - offset += read_size; - } while (words_read < *words); + /* Report the number of words successfully read */ + *words = bytes / 2; + /* Byte swap the words up to the amount we actually read */ for (i = 0; i < *words; i++) data[i] = LE16_TO_CPU(((_FORCE_ __le16 *)data)[i]); -read_nvm_buf_aq_exit: - *words = words_read; return status; } diff --git a/drivers/net/ice/base/ice_nvm.h b/drivers/net/ice/base/ice_nvm.h index d5b7b2d19..8dbda8242 100644 --- a/drivers/net/ice/base/ice_nvm.h +++ b/drivers/net/ice/base/ice_nvm.h @@ -84,6 +84,9 @@ ice_nvm_access_get_features(struct ice_nvm_access_cmd *cmd, enum ice_status ice_handle_nvm_access(struct ice_hw *hw, struct ice_nvm_access_cmd *cmd, union ice_nvm_access_data *data); +enum ice_status +ice_read_flat_nvm(struct ice_hw *hw, u32 offset, u32 *length, u8 *data, + bool read_shadow_ram); enum ice_status ice_init_nvm(struct ice_hw *hw); enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data); enum ice_status From patchwork Mon Mar 23 07:17:43 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67008 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 10253A0563; Mon, 23 Mar 2020 08:18:13 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 8AB231C12E; Mon, 23 Mar 2020 08:15:34 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id 693D31C0D9 for ; Mon, 23 Mar 2020 08:15:17 +0100 (CET) IronPort-SDR: coOOizFtaDU3+jsYq35v9KwoPanXoYrT57hVfimKUYvtPq2n8FRdRHVNGrs4FvF68FNxxUSe+N ylwbkATvnQeA== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:17 -0700 IronPort-SDR: vSelgU1ktJneC6ST4Ek99dUDdIROiJrm/heD3ZNNcyIoouY2AvuiGB46r35ctIj4NUC0vFayMw Oyl8Ru8S2UPA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111693" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:15 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Jacob Keller , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:43 +0800 Message-Id: <20200323071759.13075-21-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 20/36] net/ice/base: add macro specifying max NVM offset X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The ice_aq_read_nvm function uses a somewhat weird construction for verifying that the incoming offset is valid. 
Replace this construction with a simple greater-than expression, and define the maximum value (24bits) in the ice_adminq_cmd.h By providing a macro, the check becomes more clear. Additionally the maximum offset can be used in other locations. Signed-off-by: Jacob Keller Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_adminq_cmd.h | 3 ++- drivers/net/ice/base/ice_nvm.c | 3 +-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h index 9375d615e..3ab76a3fd 100644 --- a/drivers/net/ice/base/ice_adminq_cmd.h +++ b/drivers/net/ice/base/ice_adminq_cmd.h @@ -1678,8 +1678,9 @@ struct ice_aqc_sff_eeprom { * NVM Shadow RAM Dump commands (direct 0x0707) */ struct ice_aqc_nvm { +#define ICE_AQC_NVM_MAX_OFFSET 0xFFFFFF __le16 offset_low; - u8 offset_high; + u8 offset_high; /* For Write Activate offset_high is used as flags2 */ u8 cmd_flags; #define ICE_AQC_NVM_LAST_CMD BIT(0) #define ICE_AQC_NVM_PCIR_REQ BIT(0) /* Used by NVM Write reply */ diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c index 80420afd3..a5b990ff8 100644 --- a/drivers/net/ice/base/ice_nvm.c +++ b/drivers/net/ice/base/ice_nvm.c @@ -29,8 +29,7 @@ ice_aq_read_nvm(struct ice_hw *hw, u16 module_typeid, u32 offset, u16 length, cmd = &desc.params.nvm; - /* In offset the highest byte must be zeroed. */ - if (offset & 0xFF000000) + if (offset > ICE_AQC_NVM_MAX_OFFSET) return ICE_ERR_PARAM; ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_nvm_read); From patchwork Mon Mar 23 07:17:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67009 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2F47CA0563; Mon, 23 Mar 2020 08:18:21 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 26DD21C134; Mon, 23 Mar 2020 08:15:37 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id 728D01C10E for ; Mon, 23 Mar 2020 08:15:19 +0100 (CET) IronPort-SDR: NiF4n/kWurJnqJVwA9xSOLLq/7/vbfse/4V/qcsWUlolpWJ5LoNNcp/q9OUV06JOFGW5RssZYn kbXSX+x9FOuw== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:19 -0700 IronPort-SDR: wmceMtAwBAs01rHgzn3c7zNH/vnaBBp5Tqdo3EjZfmA6lasW/SxdmP7HKkFa6qwggsZncduP2L OADFaEdMLbuQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111706" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:17 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Jacob Keller , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:44 +0800 Message-Id: <20200323071759.13075-22-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 21/36] net/ice/base: implement new sr read functions X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list 
List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Remove the ice_read_sr_aq function and implement ice_read_sr_word_aq directly in terms of the new ice_read_flat_nvm function. This simplifies the code by reducing a now unnecessary reading function. Signed-off-by: Jacob Keller Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_nvm.c | 84 +++++++----------------------------------- 1 file changed, 13 insertions(+), 71 deletions(-) diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c index a5b990ff8..b679f43d7 100644 --- a/drivers/net/ice/base/ice_nvm.c +++ b/drivers/net/ice/base/ice_nvm.c @@ -117,91 +117,33 @@ ice_read_flat_nvm(struct ice_hw *hw, u32 offset, u32 *length, u8 *data, } /** - * ice_check_sr_access_params - verify params for Shadow RAM R/W operations. - * @hw: pointer to the HW structure - * @offset: offset in words from module start - * @words: number of words to access - */ -static enum ice_status -ice_check_sr_access_params(struct ice_hw *hw, u32 offset, u16 words) -{ - if ((offset + words) > hw->nvm.sr_words) { - ice_debug(hw, ICE_DBG_NVM, - "NVM error: offset beyond SR lmt.\n"); - return ICE_ERR_PARAM; - } - - if (words > ICE_SR_SECTOR_SIZE_IN_WORDS) { - /* We can access only up to 4KB (one sector), in one AQ write */ - ice_debug(hw, ICE_DBG_NVM, - "NVM error: tried to access %d words, limit is %d.\n", - words, ICE_SR_SECTOR_SIZE_IN_WORDS); - return ICE_ERR_PARAM; - } - - if (((offset + (words - 1)) / ICE_SR_SECTOR_SIZE_IN_WORDS) != - (offset / ICE_SR_SECTOR_SIZE_IN_WORDS)) { - /* A single access cannot spread over two sectors */ - ice_debug(hw, ICE_DBG_NVM, - "NVM error: cannot spread over two sectors.\n"); - return ICE_ERR_PARAM; - } - - return ICE_SUCCESS; -} - -/** - * ice_read_sr_aq - Read Shadow RAM. - * @hw: pointer to the HW structure - * @offset: offset in words from module start - * @words: number of words to read - * @data: buffer for words reads from Shadow RAM - * @last_command: tells the AdminQ that this is the last command - * - * Reads 16-bit word buffers from the Shadow RAM using the admin command. - */ -static enum ice_status -ice_read_sr_aq(struct ice_hw *hw, u32 offset, u16 words, u16 *data, - bool last_command) -{ - enum ice_status status; - - ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); - - status = ice_check_sr_access_params(hw, offset, words); - - /* values in "offset" and "words" parameters are sized as words - * (16 bits) but ice_aq_read_nvm expects these values in bytes. - * So do this conversion while calling ice_aq_read_nvm. - */ - if (!status) - status = ice_aq_read_nvm(hw, ICE_AQC_NVM_START_POINT, - 2 * offset, 2 * words, data, - last_command, true, NULL); - - return status; -} - -/** * ice_read_sr_word_aq - Reads Shadow RAM via AQ * @hw: pointer to the HW structure * @offset: offset of the Shadow RAM word to read (0x000000 - 0x001FFF) * @data: word read from the Shadow RAM * - * Reads one 16 bit word from the Shadow RAM using the ice_read_sr_aq method. + * Reads one 16 bit word from the Shadow RAM using ice_read_flat_nvm. 
*/ static enum ice_status ice_read_sr_word_aq(struct ice_hw *hw, u16 offset, u16 *data) { + u32 bytes = sizeof(u16); enum ice_status status; + __le16 data_local; ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); - status = ice_read_sr_aq(hw, offset, 1, data, true); - if (!status) - *data = LE16_TO_CPU(*(_FORCE_ __le16 *)data); + /* Note that ice_read_flat_nvm checks if the read is past the Shadow + * RAM size, and ensures we don't read across a Shadow RAM sector + * boundary + */ + status = ice_read_flat_nvm(hw, offset * sizeof(u16), &bytes, + (u8 *)&data_local, true); + if (status) + return status; - return status; + *data = LE16_TO_CPU(data_local); + return ICE_SUCCESS; } /** From patchwork Mon Mar 23 07:17:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67010 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id C69B9A0563; Mon, 23 Mar 2020 08:18:29 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 7FC521C137; Mon, 23 Mar 2020 08:15:39 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id A2B5F1BEDE for ; Mon, 23 Mar 2020 08:15:21 +0100 (CET) IronPort-SDR: 7p52/OXhh4NAlgKuUGaY2mHBmI2+QSpjKwpy6DEgbEErz24H3F/ZkORZFm3AVQqwKpTtY4XBjO DxP5XmnWCSAQ== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:21 -0700 IronPort-SDR: V5kPdrX/EedbvFHsgHR+8ptCuSn8Q87IlR3qf3oyC7C8OLZ5E3v4g496bjR6O54o7x+wJhpdUX Zjxb+IBDlTWQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111717" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:19 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:45 +0800 Message-Id: <20200323071759.13075-23-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 22/36] net/ice/base: couple casting issue fixes X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Adjust variable size between u8 and u16 to fix casting issues Also fix couple coding style issues Karol Kolacinski Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_adminq_cmd.h | 9 ++++---- drivers/net/ice/base/ice_common.c | 4 ++-- drivers/net/ice/base/ice_common.h | 2 +- drivers/net/ice/base/ice_controlq.c | 18 +++++++-------- drivers/net/ice/base/ice_flex_pipe.c | 22 ++++++++++--------- drivers/net/ice/base/ice_flex_pipe.h | 4 ++-- drivers/net/ice/base/ice_switch.c | 41 +++++++++++++++++------------------ drivers/net/ice/base/ice_type.h | 4 ++-- 8 files changed, 52 insertions(+), 52 deletions(-) diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h index 
3ab76a3fd..73f5e7090 100644 --- a/drivers/net/ice/base/ice_adminq_cmd.h +++ b/drivers/net/ice/base/ice_adminq_cmd.h @@ -682,7 +682,7 @@ struct ice_aqc_recipe_content { #define ICE_AQ_RECIPE_ID_S 0 #define ICE_AQ_RECIPE_ID_M (0x3F << ICE_AQ_RECIPE_ID_S) #define ICE_AQ_RECIPE_ID_IS_ROOT BIT(7) -#define ICE_AQ_SW_ID_LKUP_IDX 0 +#define ICE_AQ_SW_ID_LKUP_IDX 0 u8 lkup_indx[5]; #define ICE_AQ_RECIPE_LKUP_DATA_S 0 #define ICE_AQ_RECIPE_LKUP_DATA_M (0x3F << ICE_AQ_RECIPE_LKUP_DATA_S) @@ -813,7 +813,7 @@ struct ice_sw_rule_lkup_rx_tx { #define ICE_SINGLE_ACT_OTHER_ACTS 0x3 #define ICE_SINGLE_OTHER_ACT_IDENTIFIER_S 17 #define ICE_SINGLE_OTHER_ACT_IDENTIFIER_M \ - (0x3 << \ ICE_SINGLE_OTHER_ACT_IDENTIFIER_S) + (0x3 << ICE_SINGLE_OTHER_ACT_IDENTIFIER_S) /* Bit 17:18 - Defines other actions */ /* Other action = 0 - Mirror VSI */ @@ -2118,9 +2118,7 @@ struct ice_aqc_move_txqs { __le32 addr_low; }; -/* This is the descriptor of each queue entry for the move Tx LAN Queues - * command (0x0C32). - */ +/* Per-queue data buffer for the Move Tx LAN Queues command/response */ struct ice_aqc_move_txqs_elem { __le16 txq_id; u8 q_cgd; @@ -2128,6 +2126,7 @@ struct ice_aqc_move_txqs_elem { __le32 q_teid; }; +/* Indirect data buffer for the Move Tx LAN Queues command/response */ struct ice_aqc_move_txqs_data { __le32 src_teid; __le32 dest_teid; diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c index 99f696211..25e205944 100644 --- a/drivers/net/ice/base/ice_common.c +++ b/drivers/net/ice/base/ice_common.c @@ -4004,7 +4004,7 @@ ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues, * This function adds/updates the VSI queues per TC. */ static enum ice_status -ice_cfg_vsi_qs(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap, +ice_cfg_vsi_qs(struct ice_port_info *pi, u16 vsi_handle, u16 tc_bitmap, u16 *maxqs, u8 owner) { enum ice_status status = ICE_SUCCESS; @@ -4043,7 +4043,7 @@ ice_cfg_vsi_qs(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap, * This function adds/updates the VSI LAN queues per TC. 
*/ enum ice_status -ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap, +ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u16 tc_bitmap, u16 *max_lanqs) { return ice_cfg_vsi_qs(pi, vsi_handle, tc_bitmap, max_lanqs, diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h index bbff17536..4e2e25744 100644 --- a/drivers/net/ice/base/ice_common.h +++ b/drivers/net/ice/base/ice_common.h @@ -188,7 +188,7 @@ ice_dis_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u8 num_queues, enum ice_disq_rst_src rst_src, u16 vmvf_num, struct ice_sq_cd *cd); enum ice_status -ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u8 tc_bitmap, +ice_cfg_vsi_lan(struct ice_port_info *pi, u16 vsi_handle, u16 tc_bitmap, u16 *max_lanqs); enum ice_status ice_ena_vsi_txq(struct ice_port_info *pi, u16 vsi_handle, u8 tc, u16 q_handle, diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c index 8a65fae40..e7752fca2 100644 --- a/drivers/net/ice/base/ice_controlq.c +++ b/drivers/net/ice/base/ice_controlq.c @@ -624,18 +624,18 @@ static enum ice_status ice_init_ctrlq(struct ice_hw *hw, enum ice_ctl_q q_type) */ enum ice_status ice_init_all_ctrlq(struct ice_hw *hw) { - enum ice_status ret_code; + enum ice_status status; ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); /* Init FW admin queue */ - ret_code = ice_init_ctrlq(hw, ICE_CTL_Q_ADMIN); - if (ret_code) - return ret_code; + status = ice_init_ctrlq(hw, ICE_CTL_Q_ADMIN); + if (status) + return status; - ret_code = ice_init_check_adminq(hw); - if (ret_code) - return ret_code; + status = ice_init_check_adminq(hw); + if (status) + return status; /* Init Mailbox queue */ return ice_init_ctrlq(hw, ICE_CTL_Q_MAILBOX); } @@ -832,7 +832,7 @@ static void ice_debug_cq(struct ice_hw *hw, void *desc, void *buf, u16 buf_len) flags & ICE_AQ_FLAG_RD)) { ice_debug(hw, ICE_DBG_AQ_DESC_BUF, "Buffer:\n"); ice_debug_array(hw, ICE_DBG_AQ_DESC_BUF, 16, 1, (u8 *)buf, - min(buf_len, datalen)); + MIN_T(u16, buf_len, datalen)); } } @@ -1140,7 +1140,7 @@ ice_clean_rq_elem(struct ice_hw *hw, struct ice_ctl_q_info *cq, } ice_memcpy(&e->desc, desc, sizeof(e->desc), ICE_DMA_TO_NONDMA); datalen = LE16_TO_CPU(desc->datalen); - e->msg_len = min(datalen, e->buf_len); + e->msg_len = MIN_T(u16, datalen, e->buf_len); if (e->msg_buf && e->msg_len) ice_memcpy(e->msg_buf, cq->rq.r.rq_bi[desc_idx].va, e->msg_len, ICE_DMA_TO_NONDMA); diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c index 2d29e986c..0c64bf681 100644 --- a/drivers/net/ice/base/ice_flex_pipe.c +++ b/drivers/net/ice/base/ice_flex_pipe.c @@ -1562,7 +1562,7 @@ ice_get_sw_fv_bitmap(struct ice_hw *hw, enum ice_prof_type req_profs, * allocated for every list entry. 
*/ enum ice_status -ice_get_sw_fv_list(struct ice_hw *hw, u16 *prot_ids, u16 ids_cnt, +ice_get_sw_fv_list(struct ice_hw *hw, u8 *prot_ids, u16 ids_cnt, ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list) { struct ice_sw_fv_list_entry *fvl; @@ -1963,7 +1963,7 @@ ice_create_tunnel(struct ice_hw *hw, enum ice_tunnel_type type, u16 port) */ ice_set_key((u8 *)§_rx->tcam[0].key, sizeof(sect_rx->tcam[0].key), (u8 *)&port, NULL, NULL, NULL, - offsetof(struct ice_boost_key_value, hv_dst_port_key), + (u16)offsetof(struct ice_boost_key_value, hv_dst_port_key), sizeof(sect_rx->tcam[0].key.key.hv_dst_port_key)); /* exact copy of entry to Tx section entry */ @@ -2011,7 +2011,7 @@ enum ice_status ice_destroy_tunnel(struct ice_hw *hw, u16 port, bool all) return ICE_ERR_PARAM; /* size of section - there is at least one entry */ - size = (count - 1) * sizeof(*sect_rx->tcam) + sizeof(*sect_rx); + size = ice_struct_size(sect_rx, tcam, count - 1); bld = ice_pkg_buf_alloc(hw); if (!bld) @@ -2078,7 +2078,7 @@ enum ice_status ice_destroy_tunnel(struct ice_hw *hw, u16 port, bool all) * @off: variable to receive the protocol offset */ enum ice_status -ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u8 fv_idx, +ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u16 fv_idx, u8 *prot, u16 *off) { struct ice_fv_word *fv_ext; @@ -2680,9 +2680,9 @@ ice_find_prof_id_with_mask(struct ice_hw *hw, enum ice_block blk, struct ice_fv_word *fv, u16 *masks, u8 *prof_id) { struct ice_es *es = &hw->blk[blk].es; - u16 i; + u8 i; - for (i = 0; i < es->count; i++) { + for (i = 0; i < (u8)es->count; i++) { u16 off = i * es->fvw; if (memcmp(&es->t[off], fv, es->fvw * sizeof(*fv))) @@ -4464,7 +4464,7 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[], ice_declare_bitmap(ptgs_used, ICE_XLT1_CNT); struct ice_prof_map *prof; enum ice_status status; - u32 byte = 0; + u8 byte = 0; u8 prof_id; ice_zero_bitmap(ptgs_used, ICE_XLT1_CNT); @@ -4513,7 +4513,7 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[], /* build list of ptgs */ while (bytes && prof->ptg_cnt < ICE_MAX_PTG_PER_PROFILE) { - u32 bit; + u8 bit; if (!ptypes[byte]) { bytes--; @@ -4562,7 +4562,7 @@ ice_add_prof(struct ice_hw *hw, enum ice_block blk, u64 id, u8 ptypes[], } /* nothing left in byte, then exit */ - m = ~((1 << (bit + 1)) - 1); + m = ~(u8)((1 << (bit + 1)) - 1); if (!(ptypes[byte] & m)) break; } @@ -5335,8 +5335,10 @@ ice_add_prof_id_vsig(struct ice_hw *hw, enum ice_block blk, u16 vsig, u64 hdl, t->tcam[i].ptg, vsig, 0, t->tcam[i].attr.flags, vl_msk, dc_msk, nm_msk); - if (status) + if (status) { + ice_free(hw, p); goto err_ice_add_prof_id_vsig; + } /* log change */ LIST_ADD(&p->list_entry, chg); diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h index e3ee882da..bba66c48a 100644 --- a/drivers/net/ice/base/ice_flex_pipe.h +++ b/drivers/net/ice/base/ice_flex_pipe.h @@ -25,7 +25,7 @@ enum ice_status ice_acquire_change_lock(struct ice_hw *hw, enum ice_aq_res_access_type access); void ice_release_change_lock(struct ice_hw *hw); enum ice_status -ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u8 fv_idx, +ice_find_prot_off(struct ice_hw *hw, enum ice_block blk, u8 prof, u16 fv_idx, u8 *prot, u16 *off); enum ice_status ice_find_label_value(struct ice_seg *ice_seg, char const *name, u32 type, @@ -36,7 +36,7 @@ ice_get_sw_fv_bitmap(struct ice_hw *hw, enum ice_prof_type type, void ice_init_prof_result_bm(struct ice_hw *hw); enum ice_status 
-ice_get_sw_fv_list(struct ice_hw *hw, u16 *prot_ids, u16 ids_cnt, +ice_get_sw_fv_list(struct ice_hw *hw, u8 *prot_ids, u16 ids_cnt, ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list); bool ice_get_open_tunnel_port(struct ice_hw *hw, enum ice_tunnel_type type, diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c index f4e020dfc..8e594d99b 100644 --- a/drivers/net/ice/base/ice_switch.c +++ b/drivers/net/ice/base/ice_switch.c @@ -623,8 +623,9 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid, struct ice_aqc_recipe_data_elem *tmp; u16 num_recps = ICE_MAX_NUM_RECIPES; struct ice_prot_lkup_ext *lkup_exts; - u16 i, sub_recps, fv_word_idx = 0; enum ice_status status; + u8 fv_word_idx = 0; + u16 sub_recps; ice_zero_bitmap(result_bm, ICE_MAX_FV_WORDS); @@ -662,7 +663,7 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid, for (sub_recps = 0; sub_recps < num_recps; sub_recps++) { struct ice_aqc_recipe_data_elem root_bufs = tmp[sub_recps]; struct ice_recp_grp_entry *rg_entry; - u8 prof, idx, prot = 0; + u8 i, prof, idx, prot = 0; bool is_root; u16 off = 0; @@ -718,7 +719,7 @@ ice_get_recp_frm_fw(struct ice_hw *hw, struct ice_sw_recipe *recps, u8 rid, LIST_ADD(&rg_entry->l_entry, &recps[rid].rg_list); /* Propagate some data to the recipe database */ - recps[idx].is_root = is_root; + recps[idx].is_root = !!is_root; recps[idx].priority = root_bufs.content.act_ctrl_fwd_priority; ice_zero_bitmap(recps[idx].res_idxs, ICE_MAX_FV_WORDS); if (root_bufs.content.result_indx & ICE_AQ_RECIPE_RESULT_EN) { @@ -1842,10 +1843,10 @@ enum ice_status ice_get_initial_sw_cfg(struct ice_hw *hw) { struct ice_aqc_get_sw_cfg_resp *rbuf; enum ice_status status; - u16 num_total_ports; + u8 num_total_ports; u16 req_desc = 0; u16 num_elems; - u16 j = 0; + u8 j = 0; u16 i; num_total_ports = 1; @@ -3124,11 +3125,11 @@ ice_add_mac_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list, struct ice_aqc_sw_rules_elem *s_rule, *r_iter; struct ice_fltr_list_entry *m_list_itr; struct LIST_HEAD_TYPE *rule_head; - u16 elem_sent, total_elem_left; + u16 total_elem_left, s_rule_size; struct ice_lock *rule_lock; /* Lock to protect filter rule list */ enum ice_status status = ICE_SUCCESS; u16 num_unicast = 0; - u16 s_rule_size; + u8 elem_sent; s_rule = NULL; rule_lock = &recp_list->filt_rule_lock; @@ -3210,8 +3211,8 @@ ice_add_mac_rule(struct ice_hw *hw, struct LIST_HEAD_TYPE *m_list, total_elem_left -= elem_sent) { struct ice_aqc_sw_rules_elem *entry = r_iter; - elem_sent = min(total_elem_left, - (u16)(ICE_AQ_MAX_BUF_LEN / s_rule_size)); + elem_sent = MIN_T(u8, total_elem_left, + (ICE_AQ_MAX_BUF_LEN / s_rule_size)); status = ice_aq_sw_rules(hw, entry, elem_sent * s_rule_size, elem_sent, ice_aqc_opc_add_sw_rules, NULL); @@ -4943,7 +4944,7 @@ static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts) { bool refresh_required = true; struct ice_sw_recipe *recp; - u16 i; + u8 i; /* Walk through existing recipes to find a match */ recp = hw->switch_info->recp_list; @@ -5009,9 +5010,9 @@ static u16 ice_find_recp(struct ice_hw *hw, struct ice_prot_lkup_ext *lkup_exts) * * Returns true if found, false otherwise */ -static bool ice_prot_type_to_id(enum ice_protocol_type type, u16 *id) +static bool ice_prot_type_to_id(enum ice_protocol_type type, u8 *id) { - u16 i; + u8 i; for (i = 0; i < ARRAY_SIZE(ice_prot_id_tbl); i++) if (ice_prot_id_tbl[i].type == type) { @@ -5028,13 +5029,11 @@ static bool ice_prot_type_to_id(enum ice_protocol_type type, u16 *id) * 
* calculate valid words in a lookup rule using mask value */ -static u16 +static u8 ice_fill_valid_words(struct ice_adv_lkup_elem *rule, struct ice_prot_lkup_ext *lkup_exts) { - u16 j, word = 0; - u16 prot_id; - u16 ret_val; + u8 j, word, prot_id, ret_val; if (!ice_prot_type_to_id(rule->type, &prot_id)) return 0; @@ -5043,7 +5042,7 @@ ice_fill_valid_words(struct ice_adv_lkup_elem *rule, for (j = 0; j < sizeof(rule->m_u) / sizeof(u16); j++) if (((u16 *)&rule->m_u)[j] && - (unsigned long)rule->type < ARRAY_SIZE(ice_prot_ext)) { + rule->type < ARRAY_SIZE(ice_prot_ext)) { /* No more space to accommodate */ if (word >= ICE_MAX_CHAIN_WORDS) return 0; @@ -5612,10 +5611,10 @@ ice_get_fv(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, ice_bitmap_t *bm, struct LIST_HEAD_TYPE *fv_list) { enum ice_status status; - u16 *prot_ids; + u8 *prot_ids; u16 i; - prot_ids = (u16 *)ice_calloc(hw, lkups_cnt, sizeof(*prot_ids)); + prot_ids = (u8 *)ice_calloc(hw, lkups_cnt, sizeof(*prot_ids)); if (!prot_ids) return ICE_ERR_NO_MEMORY; @@ -5791,7 +5790,7 @@ ice_add_adv_recipe(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, match_tun = true; /* set the recipe priority if specified */ - rm->priority = rinfo->priority ? rinfo->priority : 0; + rm->priority = (u8)rinfo->priority; /* Find offsets from the field vector. Pick the first one for all the * recipes. @@ -6197,7 +6196,7 @@ ice_fill_adv_packet_tun(struct ice_hw *hw, enum ice_sw_tunnel_type tun_type, */ static struct ice_adv_fltr_mgmt_list_entry * ice_find_adv_rule_entry(struct ice_hw *hw, struct ice_adv_lkup_elem *lkups, - u16 lkups_cnt, u8 recp_id, + u16 lkups_cnt, u16 recp_id, struct ice_adv_rule_info *rinfo) { struct ice_adv_fltr_mgmt_list_entry *list_itr; diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h index 4979580ec..00459bcc8 100644 --- a/drivers/net/ice/base/ice_type.h +++ b/drivers/net/ice/base/ice_type.h @@ -786,8 +786,8 @@ struct ice_hw { u16 max_burst_size; /* driver sets this value */ /* Tx Scheduler values */ - u16 num_tx_sched_layers; - u16 num_tx_sched_phys_layers; + u8 num_tx_sched_layers; + u8 num_tx_sched_phys_layers; u8 flattened_layers; u8 max_cgds; u8 sw_entry_point_layer; From patchwork Mon Mar 23 07:17:46 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67011 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id DA86BA0563; Mon, 23 Mar 2020 08:18:39 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 12D041C190; Mon, 23 Mar 2020 08:15:42 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id A37141BEDE for ; Mon, 23 Mar 2020 08:15:23 +0100 (CET) IronPort-SDR: ggJIiPcP0Dyj06PJgy/N//olGV9HXgnLuL0EDARH6cIUufLpdmKxOVlQMK9ypKzwYyevRfmFd/ 943BjlsizkFg== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:23 -0700 IronPort-SDR: RNchHtRptmsVigQdrqS1IXTZP22qKreE+0so+MrwRHN/Dzqfb3lyaSsRzMmmQp5QV6JunFl1BO g+9wbY5P8UfQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111731" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by 
orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:21 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Chinh T Cao , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:46 +0800 Message-Id: <20200323071759.13075-24-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 23/36] net/ice/base: support PHY persistent feature X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" In this patch, we will modify the ice_copy_phy_caps_to_cfg(...) function to conditionally fill up the ice_aqc_set_phy_cfg_data.module_compliance_enforcement with correct value, based on the PHY persistent feature. Apply the ice_copy_phy_caps_to_cfg() function inside ice_set_fc() Signed-off-by: Chinh T Cao Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_common.c | 124 +++++++++++++++++++++++++++----------- drivers/net/ice/base/ice_common.h | 8 ++- 2 files changed, 93 insertions(+), 39 deletions(-) diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c index 25e205944..756868a5a 100644 --- a/drivers/net/ice/base/ice_common.c +++ b/drivers/net/ice/base/ice_common.c @@ -2540,14 +2540,14 @@ ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update) { struct ice_aqc_set_phy_cfg_data cfg = { 0 }; struct ice_phy_cache_mode_data cache_data; - struct ice_link_default_override_tlv tlv; struct ice_aqc_get_phy_caps_data *pcaps; enum ice_status status; u8 pause_mask = 0x0; struct ice_hw *hw; - if (!pi) + if (!pi || !aq_failures) return ICE_ERR_PARAM; + hw = pi->hw; *aq_failures = ICE_SET_FC_AQ_FAIL_NONE; @@ -2555,7 +2555,26 @@ ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update) cache_data.data.curr_user_fc_req = pi->fc.req_mode; ice_cache_phy_user_req(pi, cache_data, ICE_FC_MODE); + pcaps = (struct ice_aqc_get_phy_caps_data *) + ice_malloc(hw, sizeof(*pcaps)); + if (!pcaps) + return ICE_ERR_NO_MEMORY; + switch (pi->fc.req_mode) { + case ICE_FC_AUTO: + /* Query the value of FC that both the NIC and attached media + * can do. 
+ */ + status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP, + pcaps, NULL); + if (status) { + *aq_failures = ICE_SET_FC_AQ_FAIL_GET; + goto out; + } + + pause_mask |= pcaps->caps & ICE_AQC_PHY_EN_TX_LINK_PAUSE; + pause_mask |= pcaps->caps & ICE_AQC_PHY_EN_RX_LINK_PAUSE; + break; case ICE_FC_FULL: pause_mask |= ICE_AQC_PHY_EN_TX_LINK_PAUSE; pause_mask |= ICE_AQC_PHY_EN_RX_LINK_PAUSE; @@ -2570,12 +2589,8 @@ ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update) break; } - pcaps = (struct ice_aqc_get_phy_caps_data *) - ice_malloc(hw, sizeof(*pcaps)); - if (!pcaps) - return ICE_ERR_NO_MEMORY; - /* Get the current PHY config */ + ice_memset(pcaps, 0, sizeof(*pcaps), ICE_NONDMA_MEM); status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_SW_CFG, pcaps, NULL); if (status) { @@ -2583,23 +2598,14 @@ ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update) goto out; } + ice_copy_phy_caps_to_cfg(pi, pcaps, &cfg); + /* clear the old pause settings */ - cfg.caps = pcaps->caps & ~(ICE_AQC_PHY_EN_TX_LINK_PAUSE | - ICE_AQC_PHY_EN_RX_LINK_PAUSE); + cfg.caps &= ~(ICE_AQC_PHY_EN_TX_LINK_PAUSE | + ICE_AQC_PHY_EN_RX_LINK_PAUSE); /* set the new capabilities */ - if (pi->fc.req_mode == ICE_FC_AUTO && - ice_fw_supports_link_override(hw)) { - status = ice_get_link_default_override(&tlv, pi); - if (status) - return status; - - if (!(tlv.options & ICE_LINK_OVERRIDE_STRICT_MODE) && - (tlv.options & ICE_LINK_OVERRIDE_EN)) - cfg.caps |= tlv.phy_config & ICE_LINK_OVERRIDE_PAUSE_M; - } else { - cfg.caps |= pause_mask; - } + cfg.caps |= pause_mask; /* If the capabilities have changed, then set the new config */ if (cfg.caps != pcaps->caps) { @@ -2608,13 +2614,6 @@ ice_set_fc(struct ice_port_info *pi, u8 *aq_failures, bool ena_auto_link_update) /* Auto restart link so settings take effect */ if (ena_auto_link_update) cfg.caps |= ICE_AQ_PHY_ENA_AUTO_LINK_UPDT; - /* Copy over all the old settings */ - cfg.phy_type_high = pcaps->phy_type_high; - cfg.phy_type_low = pcaps->phy_type_low; - cfg.low_power_ctrl_an = pcaps->low_power_ctrl_an; - cfg.eee_cap = pcaps->eee_cap; - cfg.eeer_value = pcaps->eeer_value; - cfg.link_fec_opt = pcaps->link_fec_options; status = ice_aq_set_phy_cfg(hw, pi, &cfg, NULL); if (status) { @@ -2683,6 +2682,7 @@ ice_phy_caps_equals_cfg(struct ice_aqc_get_phy_caps_data *phy_caps, /** * ice_copy_phy_caps_to_cfg - Copy PHY ability data to configuration data + * @pi: port information structure * @caps: PHY ability structure to copy date from * @cfg: PHY configuration structure to copy data to * @@ -2690,12 +2690,14 @@ ice_phy_caps_equals_cfg(struct ice_aqc_get_phy_caps_data *phy_caps, * data structure */ void -ice_copy_phy_caps_to_cfg(struct ice_aqc_get_phy_caps_data *caps, +ice_copy_phy_caps_to_cfg(struct ice_port_info *pi, + struct ice_aqc_get_phy_caps_data *caps, struct ice_aqc_set_phy_cfg_data *cfg) { - if (!caps || !cfg) + if (!pi || !caps || !cfg) return; + ice_memset(cfg, 0, sizeof(*cfg), ICE_NONDMA_MEM); cfg->phy_type_low = caps->phy_type_low; cfg->phy_type_high = caps->phy_type_high; cfg->caps = caps->caps; @@ -2703,20 +2705,50 @@ ice_copy_phy_caps_to_cfg(struct ice_aqc_get_phy_caps_data *caps, cfg->eee_cap = caps->eee_cap; cfg->eeer_value = caps->eeer_value; cfg->link_fec_opt = caps->link_fec_options; + cfg->module_compliance_enforcement = + caps->module_compliance_enforcement; + + if (ice_fw_supports_link_override(pi->hw)) { + struct ice_link_default_override_tlv tlv; + + if (ice_get_link_default_override(&tlv, pi)) + return; + + if 
(tlv.options & ICE_LINK_OVERRIDE_STRICT_MODE) + cfg->module_compliance_enforcement |= + ICE_LINK_OVERRIDE_STRICT_MODE; + } } /** * ice_cfg_phy_fec - Configure PHY FEC data based on FEC mode + * @pi: port information structure * @cfg: PHY configuration data to set FEC mode * @fec: FEC mode to configure - * - * Caller should copy ice_aqc_get_phy_caps_data.caps ICE_AQC_PHY_EN_AUTO_FEC - * (bit 7) and ice_aqc_get_phy_caps_data.link_fec_options to cfg.caps - * ICE_AQ_PHY_ENA_AUTO_FEC (bit 7) and cfg.link_fec_options before calling. */ -void -ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec) +enum ice_status +ice_cfg_phy_fec(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg, + enum ice_fec_mode fec) { + struct ice_aqc_get_phy_caps_data *pcaps; + enum ice_status status = ICE_SUCCESS; + struct ice_hw *hw; + + if (!pi || !cfg) + return ICE_ERR_BAD_PTR; + + hw = pi->hw; + + pcaps = (struct ice_aqc_get_phy_caps_data *) + ice_malloc(hw, sizeof(*pcaps)); + if (!pcaps) + return ICE_ERR_NO_MEMORY; + + status = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP, pcaps, + NULL); + if (status) + goto out; + switch (fec) { case ICE_FEC_BASER: /* Clear RS bits, and AND BASE-R ability @@ -2742,8 +2774,28 @@ ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec) case ICE_FEC_AUTO: /* AND auto FEC bit, and all caps bits. */ cfg->caps &= ICE_AQC_PHY_CAPS_MASK; + cfg->link_fec_opt |= pcaps->link_fec_options; + break; + default: + status = ICE_ERR_PARAM; break; } + + if (fec == ICE_FEC_AUTO && ice_fw_supports_link_override(pi->hw)) { + struct ice_link_default_override_tlv tlv; + + if (ice_get_link_default_override(&tlv, pi)) + goto out; + + if (!(tlv.options & ICE_LINK_OVERRIDE_STRICT_MODE) && + (tlv.options & ICE_LINK_OVERRIDE_EN)) + cfg->link_fec_opt = tlv.fec_options; + } + +out: + ice_free(hw, pcaps); + + return status; } /** diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h index 4e2e25744..ffe4e9f77 100644 --- a/drivers/net/ice/base/ice_common.h +++ b/drivers/net/ice/base/ice_common.h @@ -154,10 +154,12 @@ bool ice_phy_caps_equals_cfg(struct ice_aqc_get_phy_caps_data *caps, struct ice_aqc_set_phy_cfg_data *cfg); void -ice_copy_phy_caps_to_cfg(struct ice_aqc_get_phy_caps_data *caps, +ice_copy_phy_caps_to_cfg(struct ice_port_info *pi, + struct ice_aqc_get_phy_caps_data *caps, struct ice_aqc_set_phy_cfg_data *cfg); -void -ice_cfg_phy_fec(struct ice_aqc_set_phy_cfg_data *cfg, enum ice_fec_mode fec); +enum ice_status +ice_cfg_phy_fec(struct ice_port_info *pi, struct ice_aqc_set_phy_cfg_data *cfg, + enum ice_fec_mode fec); enum ice_status ice_aq_set_link_restart_an(struct ice_port_info *pi, bool ena_link, struct ice_sq_cd *cd); From patchwork Mon Mar 23 07:17:47 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67012 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id B3A60A0563; Mon, 23 Mar 2020 08:18:49 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 7A52C1C067; Mon, 23 Mar 2020 08:15:55 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id 04F0C1C10F for ; Mon, 23 Mar 2020 08:15:25 +0100 (CET) IronPort-SDR: 
7pKn7HxjAhfnI/e2Yblz4MfL5g0GSMZIuGLZUwIimi8bOXuqCu9Y5dEV0o4Krfk3NX/snOMjxA MJv9PS6vYMlA== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:25 -0700 IronPort-SDR: Zu1FVP0BqZcSpdSETv5R3L5VZf1oDJsxwu8s976aB45ljoLtsU28PDCBGLWDjS3UMnd36ceYEP 6Cj7pXpaOteQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111744" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:23 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Jacob Keller , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:47 +0800 Message-Id: <20200323071759.13075-25-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 24/36] net/ice/base: store NVM version info in extracted format X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Currently the NVM and Option ROM version information is stored in a minimal format. The ice_get_nvm_version function exists to extract this information for display. This needlessly complicates using these fields as the extraction function must be called to parse the NVM and Option ROM data. Further confusion occurs because the prefix of "oem_" is used for the Option ROM version. This appears to have been done because the Option ROM data was requested for display by OEMs. Refactor this code so that the NVM version and Option ROM version components are extracted immediately. Introduce a new struct ice_orom_info which will store the Option ROM major, build, and patch numbers. Introduce the new major_ver and minor_ver fields to store the NVM version in its high and low byte components. Remove the ice_get_nvm_version function. Instead, use the same logic to convert the fields read from the NVM into the extracted format. This simplifies use of these fields as they will be stored already parsed, without needing to use the bit masks or call ice_get_nvm_version. 
Signed-off-by: Jacob Keller Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_common.c | 36 +++------------ drivers/net/ice/base/ice_common.h | 3 -- drivers/net/ice/base/ice_nvm.c | 94 ++++++++++++++++++++++++++------------- drivers/net/ice/base/ice_type.h | 26 +++++++---- drivers/net/ice/ice_ethdev.c | 16 +++---- 5 files changed, 93 insertions(+), 82 deletions(-) diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c index 756868a5a..192cfdbf7 100644 --- a/drivers/net/ice/base/ice_common.c +++ b/drivers/net/ice/base/ice_common.c @@ -569,42 +569,20 @@ static void ice_get_itr_intrl_gran(struct ice_hw *hw) } /** - * ice_get_nvm_version - get cached NVM version data - * @hw: pointer to the hardware structure - * @oem_ver: 8 bit NVM version - * @oem_build: 16 bit NVM build number - * @oem_patch: 8 NVM patch number - * @ver_hi: high 16 bits of the NVM version - * @ver_lo: low 16 bits of the NVM version - */ -void -ice_get_nvm_version(struct ice_hw *hw, u8 *oem_ver, u16 *oem_build, - u8 *oem_patch, u8 *ver_hi, u8 *ver_lo) -{ - struct ice_nvm_info *nvm = &hw->nvm; - - *oem_ver = (u8)((nvm->oem_ver & ICE_OEM_VER_MASK) >> ICE_OEM_VER_SHIFT); - *oem_patch = (u8)(nvm->oem_ver & ICE_OEM_VER_PATCH_MASK); - *oem_build = (u16)((nvm->oem_ver & ICE_OEM_VER_BUILD_MASK) >> - ICE_OEM_VER_BUILD_SHIFT); - *ver_hi = (nvm->ver & ICE_NVM_VER_HI_MASK) >> ICE_NVM_VER_HI_SHIFT; - *ver_lo = (nvm->ver & ICE_NVM_VER_LO_MASK) >> ICE_NVM_VER_LO_SHIFT; -} - -/** * ice_print_rollback_msg - print FW rollback message * @hw: pointer to the hardware structure */ void ice_print_rollback_msg(struct ice_hw *hw) { char nvm_str[ICE_NVM_VER_LEN] = { 0 }; - u8 oem_ver, oem_patch, ver_hi, ver_lo; - u16 oem_build; + struct ice_nvm_info *nvm = &hw->nvm; + struct ice_orom_info *orom; + + orom = &nvm->orom; - ice_get_nvm_version(hw, &oem_ver, &oem_build, &oem_patch, &ver_hi, - &ver_lo); - SNPRINTF(nvm_str, sizeof(nvm_str), "%x.%02x 0x%x %d.%d.%d", ver_hi, - ver_lo, hw->nvm.eetrack, oem_ver, oem_build, oem_patch); + SNPRINTF(nvm_str, sizeof(nvm_str), "%x.%02x 0x%x %d.%d.%d", + nvm->major_ver, nvm->minor_ver, nvm->eetrack, orom->major, + orom->build, orom->patch); ice_warn(hw, "Firmware rollback mode detected. Current version is NVM: %s, FW: %d.%d. Device may exhibit limited functionality. 
Refer to the Intel(R) Ethernet Adapters and Devices User Guide for details on firmware rollback mode\n", nvm_str, hw->fw_maj_ver, hw->fw_min_ver); diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h index ffe4e9f77..ccd33b944 100644 --- a/drivers/net/ice/base/ice_common.h +++ b/drivers/net/ice/base/ice_common.h @@ -215,9 +215,6 @@ ice_stat_update32(struct ice_hw *hw, u32 reg, bool prev_stat_loaded, void ice_stat_update_repc(struct ice_hw *hw, u16 vsi_handle, bool prev_stat_loaded, struct ice_eth_stats *cur_stats); -void -ice_get_nvm_version(struct ice_hw *hw, u8 *oem_ver, u16 *oem_build, - u8 *oem_patch, u8 *ver_hi, u8 *ver_lo); enum ice_fw_modes ice_get_fw_mode(struct ice_hw *hw); void ice_print_rollback_msg(struct ice_hw *hw); enum ice_status diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c index b679f43d7..1c7050c31 100644 --- a/drivers/net/ice/base/ice_nvm.c +++ b/drivers/net/ice/base/ice_nvm.c @@ -235,6 +235,62 @@ enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data) } /** + * ice_get_orom_ver_info - Read Option ROM version information + * @hw: pointer to the HW struct + * + * Read the Combo Image version data from the Boot Configuration TLV and fill + * in the option ROM version data. + */ +static enum ice_status ice_get_orom_ver_info(struct ice_hw *hw) +{ + u16 combo_hi, combo_lo, boot_cfg_tlv, boot_cfg_tlv_len; + struct ice_orom_info *orom = &hw->nvm.orom; + enum ice_status status; + u32 combo_ver; + + status = ice_get_pfa_module_tlv(hw, &boot_cfg_tlv, &boot_cfg_tlv_len, + ICE_SR_BOOT_CFG_PTR); + if (status) { + ice_debug(hw, ICE_DBG_INIT, + "Failed to read Boot Configuration Block TLV.\n"); + return status; + } + + /* Boot Configuration Block must have length at least 2 words + * (Combo Image Version High and Combo Image Version Low) + */ + if (boot_cfg_tlv_len < 2) { + ice_debug(hw, ICE_DBG_INIT, + "Invalid Boot Configuration Block TLV size.\n"); + return ICE_ERR_INVAL_SIZE; + } + + status = ice_read_sr_word(hw, (boot_cfg_tlv + ICE_NVM_OROM_VER_OFF), + &combo_hi); + if (status) { + ice_debug(hw, ICE_DBG_INIT, "Failed to read OROM_VER hi.\n"); + return status; + } + + status = ice_read_sr_word(hw, (boot_cfg_tlv + ICE_NVM_OROM_VER_OFF + 1), + &combo_lo); + if (status) { + ice_debug(hw, ICE_DBG_INIT, "Failed to read OROM_VER lo.\n"); + return status; + } + + combo_ver = ((u32)combo_hi << 16) | combo_lo; + + orom->major = (u8)((combo_ver & ICE_OROM_VER_MASK) >> + ICE_OROM_VER_SHIFT); + orom->patch = (u8)(combo_ver & ICE_OROM_VER_PATCH_MASK); + orom->build = (u16)((combo_ver & ICE_OROM_VER_BUILD_MASK) >> + ICE_OROM_VER_BUILD_SHIFT); + + return ICE_SUCCESS; +} + +/** * ice_init_nvm - initializes NVM setting * @hw: pointer to the HW struct * @@ -243,9 +299,8 @@ enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data) */ enum ice_status ice_init_nvm(struct ice_hw *hw) { - u16 oem_hi, oem_lo, boot_cfg_tlv, boot_cfg_tlv_len; struct ice_nvm_info *nvm = &hw->nvm; - u16 eetrack_lo, eetrack_hi; + u16 eetrack_lo, eetrack_hi, ver; enum ice_status status; u32 fla, gens_stat; u8 sr_size; @@ -273,12 +328,14 @@ enum ice_status ice_init_nvm(struct ice_hw *hw) return ICE_ERR_NVM_BLANK_MODE; } - status = ice_read_sr_word(hw, ICE_SR_NVM_DEV_STARTER_VER, &nvm->ver); + status = ice_read_sr_word(hw, ICE_SR_NVM_DEV_STARTER_VER, &ver); if (status) { ice_debug(hw, ICE_DBG_INIT, "Failed to read DEV starter version.\n"); return status; } + nvm->major_ver = (ver & ICE_NVM_VER_HI_MASK) >> ICE_NVM_VER_HI_SHIFT; + 
nvm->minor_ver = (ver & ICE_NVM_VER_LO_MASK) >> ICE_NVM_VER_LO_SHIFT; status = ice_read_sr_word(hw, ICE_SR_NVM_EETRACK_LO, &eetrack_lo); if (status) { @@ -309,39 +366,12 @@ enum ice_status ice_init_nvm(struct ice_hw *hw) break; } - status = ice_get_pfa_module_tlv(hw, &boot_cfg_tlv, &boot_cfg_tlv_len, - ICE_SR_BOOT_CFG_PTR); + status = ice_get_orom_ver_info(hw); if (status) { - ice_debug(hw, ICE_DBG_INIT, - "Failed to read Boot Configuration Block TLV.\n"); + ice_debug(hw, ICE_DBG_INIT, "Failed to read Option ROM info.\n"); return status; } - /* Boot Configuration Block must have length at least 2 words - * (Combo Image Version High and Combo Image Version Low) - */ - if (boot_cfg_tlv_len < 2) { - ice_debug(hw, ICE_DBG_INIT, - "Invalid Boot Configuration Block TLV size.\n"); - return ICE_ERR_INVAL_SIZE; - } - - status = ice_read_sr_word(hw, (boot_cfg_tlv + ICE_NVM_OEM_VER_OFF), - &oem_hi); - if (status) { - ice_debug(hw, ICE_DBG_INIT, "Failed to read OEM_VER hi.\n"); - return status; - } - - status = ice_read_sr_word(hw, (boot_cfg_tlv + ICE_NVM_OEM_VER_OFF + 1), - &oem_lo); - if (status) { - ice_debug(hw, ICE_DBG_INIT, "Failed to read OEM_VER lo.\n"); - return status; - } - - nvm->oem_ver = ((u32)oem_hi << 16) | oem_lo; - return ICE_SUCCESS; } diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h index 00459bcc8..89d476482 100644 --- a/drivers/net/ice/base/ice_type.h +++ b/drivers/net/ice/base/ice_type.h @@ -476,12 +476,20 @@ struct ice_fc_info { enum ice_fc_mode req_mode; /* FC mode requested by caller */ }; +/* Option ROM version information */ +struct ice_orom_info { + u8 major; /* Major version of OROM */ + u8 patch; /* Patch version of OROM */ + u16 build; /* Build version of OROM */ +}; + /* NVM Information */ struct ice_nvm_info { + struct ice_orom_info orom; /* Option ROM version info */ u32 eetrack; /* NVM data version */ - u32 oem_ver; /* OEM version info */ u16 sr_words; /* Shadow RAM size in words */ - u16 ver; /* dev starter version */ + u8 major_ver; /* major version of dev starter */ + u8 minor_ver; /* minor version of dev starter */ u8 blank_nvm_mode; /* is NVM empty (no FW present)*/ }; @@ -991,7 +999,7 @@ enum ice_sw_fwd_act_type { #define ICE_SR_PBA_BLOCK_PTR 0x16 #define ICE_SR_BOOT_CFG_PTR 0x132 #define ICE_SR_NVM_WOL_CFG 0x19 -#define ICE_NVM_OEM_VER_OFF 0x02 +#define ICE_NVM_OROM_VER_OFF 0x02 #define ICE_SR_NVM_DEV_STARTER_VER 0x18 #define ICE_SR_ALTERNATE_SAN_MAC_ADDR_PTR 0x27 #define ICE_SR_PERMANENT_SAN_MAC_ADDR_PTR 0x28 @@ -1005,12 +1013,12 @@ enum ice_sw_fwd_act_type { #define ICE_NVM_VER_HI_SHIFT 12 #define ICE_NVM_VER_HI_MASK (0xf << ICE_NVM_VER_HI_SHIFT) #define ICE_OEM_EETRACK_ID 0xffffffff -#define ICE_OEM_VER_PATCH_SHIFT 0 -#define ICE_OEM_VER_PATCH_MASK (0xff << ICE_OEM_VER_PATCH_SHIFT) -#define ICE_OEM_VER_BUILD_SHIFT 8 -#define ICE_OEM_VER_BUILD_MASK (0xffff << ICE_OEM_VER_BUILD_SHIFT) -#define ICE_OEM_VER_SHIFT 24 -#define ICE_OEM_VER_MASK (0xff << ICE_OEM_VER_SHIFT) +#define ICE_OROM_VER_PATCH_SHIFT 0 +#define ICE_OROM_VER_PATCH_MASK (0xff << ICE_OROM_VER_PATCH_SHIFT) +#define ICE_OROM_VER_BUILD_SHIFT 8 +#define ICE_OROM_VER_BUILD_MASK (0xffff << ICE_OROM_VER_BUILD_SHIFT) +#define ICE_OROM_VER_SHIFT 24 +#define ICE_OROM_VER_MASK (0xff << ICE_OROM_VER_SHIFT) #define ICE_SR_VPD_PTR 0x2F #define ICE_SR_PXE_SETUP_PTR 0x30 #define ICE_SR_PXE_CFG_CUST_OPTIONS_PTR 0x31 diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index 7e7bb6954..3498a5075 100644 --- a/drivers/net/ice/ice_ethdev.c +++ 
b/drivers/net/ice/ice_ethdev.c @@ -3846,21 +3846,19 @@ static int ice_fw_version_get(struct rte_eth_dev *dev, char *fw_version, size_t fw_size) { struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private); - u32 full_ver; u8 ver, patch; u16 build; int ret; - full_ver = hw->nvm.oem_ver; - ver = (u8)(full_ver >> 24); - build = (u16)((full_ver >> 8) & 0xffff); - patch = (u8)(full_ver & 0xff); + ver = hw->nvm.orom.major; + patch = hw->nvm.orom.patch; + build = hw->nvm.orom.build; ret = snprintf(fw_version, fw_size, - "%d.%d%d 0x%08x %d.%d.%d", - ((hw->nvm.ver >> 12) & 0xf), - ((hw->nvm.ver >> 4) & 0xff), - (hw->nvm.ver & 0xf), hw->nvm.eetrack, + "%d.%d 0x%08x %d.%d.%d", + hw->nvm.major_ver, + hw->nvm.minor_ver, + hw->nvm.eetrack, ver, build, patch); /* add the size of '\0' */ From patchwork Mon Mar 23 07:17:48 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67013 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 56CFBA0563; Mon, 23 Mar 2020 08:19:00 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id D2D5E1C120; Mon, 23 Mar 2020 08:15:56 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id 7462A1C11A for ; Mon, 23 Mar 2020 08:15:29 +0100 (CET) IronPort-SDR: oNKLCTc0qIKSEeIgCUDmo+iSP5BpeNQKmiaMGfx+ZyyE5SfP3FNjlZ6E/W3GTGDQ99cAOP2SxD eoeMmtCE+byQ== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:28 -0700 IronPort-SDR: assL3ktQELn4SwWTSYqPAMlcw+7JJu+Y4GxCQy3Q/PDs+MjDRtH7U50VRdA1gG8hz74sSGkWx+ bB6Vl8iP0ibQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111755" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:25 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Real Valiquette , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:48 +0800 Message-Id: <20200323071759.13075-26-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 25/36] net/ice/base: add ACL module X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add all ACL related code. 
Signed-off-by: Real Valiquette Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/Makefile | 2 + drivers/net/ice/base/ice_acl.c | 629 +++++++++++++++++ drivers/net/ice/base/ice_acl.h | 206 ++++++ drivers/net/ice/base/ice_acl_ctrl.c | 1185 +++++++++++++++++++++++++++++++++ drivers/net/ice/base/ice_adminq_cmd.h | 459 ++++++++++++- drivers/net/ice/base/ice_fdir.c | 14 +- drivers/net/ice/base/ice_fdir.h | 5 +- drivers/net/ice/base/ice_flex_pipe.c | 3 +- drivers/net/ice/base/ice_flow.c | 1112 ++++++++++++++++++++++++++++++- drivers/net/ice/base/ice_flow.h | 11 +- drivers/net/ice/base/ice_type.h | 7 + drivers/net/ice/base/meson.build | 2 + drivers/net/ice/ice_fdir_filter.c | 4 +- 13 files changed, 3620 insertions(+), 19 deletions(-) create mode 100644 drivers/net/ice/base/ice_acl.c create mode 100644 drivers/net/ice/base/ice_acl.h create mode 100644 drivers/net/ice/base/ice_acl_ctrl.c diff --git a/drivers/net/ice/Makefile b/drivers/net/ice/Makefile index 6c4d15526..e22c34287 100644 --- a/drivers/net/ice/Makefile +++ b/drivers/net/ice/Makefile @@ -52,6 +52,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_flex_pipe.c SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_flow.c SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_dcb.c SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_fdir.c +SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_acl.c +SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_acl_ctrl.c SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_ethdev.c SRCS-$(CONFIG_RTE_LIBRTE_ICE_PMD) += ice_rxtx.c diff --git a/drivers/net/ice/base/ice_acl.c b/drivers/net/ice/base/ice_acl.c new file mode 100644 index 000000000..26e03aa33 --- /dev/null +++ b/drivers/net/ice/base/ice_acl.c @@ -0,0 +1,629 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2001-2020 + */ + +#include "ice_acl.h" +#include "ice_adminq_cmd.h" + +/** + * ice_aq_alloc_acl_tbl - allocate ACL table + * @hw: pointer to the HW struct + * @tbl: pointer to ice_acl_alloc_tbl struct + * @cd: pointer to command details structure or NULL + * + * Allocate ACL table (indirect 0x0C10) + */ +enum ice_status +ice_aq_alloc_acl_tbl(struct ice_hw *hw, struct ice_acl_alloc_tbl *tbl, + struct ice_sq_cd *cd) +{ + struct ice_aqc_acl_alloc_table *cmd; + struct ice_aq_desc desc; + + if (!tbl->act_pairs_per_entry) + return ICE_ERR_PARAM; + + if (tbl->act_pairs_per_entry > ICE_AQC_MAX_ACTION_MEMORIES) + return ICE_ERR_MAX_LIMIT; + + /* If this is concurrent table, then buffer shall be valid and + * contain DependentAllocIDs, 'num_dependent_alloc_ids' should be valid + * and within limit + */ + if (tbl->concurr) { + if (!tbl->num_dependent_alloc_ids) + return ICE_ERR_PARAM; + if (tbl->num_dependent_alloc_ids > + ICE_AQC_MAX_CONCURRENT_ACL_TBL) + return ICE_ERR_INVAL_SIZE; + } + + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_alloc_acl_tbl); + desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD); + + cmd = &desc.params.alloc_table; + cmd->table_width = CPU_TO_LE16(tbl->width * BITS_PER_BYTE); + cmd->table_depth = CPU_TO_LE16(tbl->depth); + cmd->act_pairs_per_entry = tbl->act_pairs_per_entry; + if (tbl->concurr) + cmd->table_type = tbl->num_dependent_alloc_ids; + + return ice_aq_send_cmd(hw, &desc, &tbl->buf, sizeof(tbl->buf), cd); +} + +/** + * ice_aq_dealloc_acl_tbl - deallocate ACL table + * @hw: pointer to the HW struct + * @alloc_id: allocation ID of the table being released + * @buf: address of indirect data buffer + * @cd: pointer to command details structure or NULL + * + * Deallocate ACL table (indirect 0x0C11) + * + * NOTE: This command has no buffer format for command itself 
but response + * format is 'struct ice_aqc_acl_generic', pass ptr to that struct + * as 'buf' and its size as 'buf_size' + */ +enum ice_status +ice_aq_dealloc_acl_tbl(struct ice_hw *hw, u16 alloc_id, + struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd) +{ + struct ice_aqc_acl_tbl_actpair *cmd; + struct ice_aq_desc desc; + + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_dealloc_acl_tbl); + cmd = &desc.params.tbl_actpair; + cmd->alloc_id = CPU_TO_LE16(alloc_id); + + return ice_aq_send_cmd(hw, &desc, buf, sizeof(*buf), cd); +} + +static enum ice_status +ice_aq_acl_entry(struct ice_hw *hw, u16 opcode, u8 tcam_idx, u16 entry_idx, + struct ice_aqc_acl_data *buf, struct ice_sq_cd *cd) +{ + struct ice_aqc_acl_entry *cmd; + struct ice_aq_desc desc; + + ice_fill_dflt_direct_cmd_desc(&desc, opcode); + + if (opcode == ice_aqc_opc_program_acl_entry) + desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD); + + cmd = &desc.params.program_query_entry; + cmd->tcam_index = tcam_idx; + cmd->entry_index = CPU_TO_LE16(entry_idx); + + return ice_aq_send_cmd(hw, &desc, buf, sizeof(*buf), cd); +} + +/** + * ice_aq_program_acl_entry - program ACL entry + * @hw: pointer to the HW struct + * @tcam_idx: Updated TCAM block index + * @entry_idx: updated entry index + * @buf: address of indirect data buffer + * @cd: pointer to command details structure or NULL + * + * Program ACL entry (direct 0x0C20) + */ +enum ice_status +ice_aq_program_acl_entry(struct ice_hw *hw, u8 tcam_idx, u16 entry_idx, + struct ice_aqc_acl_data *buf, struct ice_sq_cd *cd) +{ + return ice_aq_acl_entry(hw, ice_aqc_opc_program_acl_entry, tcam_idx, + entry_idx, buf, cd); +} + +/** + * ice_aq_query_acl_entry - query ACL entry + * @hw: pointer to the HW struct + * @tcam_idx: Updated TCAM block index + * @entry_idx: updated entry index + * @buf: address of indirect data buffer + * @cd: pointer to command details structure or NULL + * + * Query ACL entry (direct 0x0C24) + * + * NOTE: Caller of this API to parse 'buf' appropriately since it contains + * response (key and key invert) + */ +enum ice_status +ice_aq_query_acl_entry(struct ice_hw *hw, u8 tcam_idx, u16 entry_idx, + struct ice_aqc_acl_data *buf, struct ice_sq_cd *cd) +{ + return ice_aq_acl_entry(hw, ice_aqc_opc_query_acl_entry, tcam_idx, + entry_idx, buf, cd); +} + +/* Helper function to alloc/dealloc ACL action pair */ +static enum ice_status +ice_aq_actpair_a_d(struct ice_hw *hw, u16 opcode, u16 alloc_id, + struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd) +{ + struct ice_aqc_acl_tbl_actpair *cmd; + struct ice_aq_desc desc; + + ice_fill_dflt_direct_cmd_desc(&desc, opcode); + cmd = &desc.params.tbl_actpair; + cmd->alloc_id = CPU_TO_LE16(alloc_id); + + return ice_aq_send_cmd(hw, &desc, buf, sizeof(*buf), cd); +} + +/** + * ice_aq_alloc_actpair - allocate actionpair for specified ACL table + * @hw: pointer to the HW struct + * @alloc_id: allocation ID of the table being associated with the actionpair + * @buf: address of indirect data buffer + * @cd: pointer to command details structure or NULL + * + * Allocate ACL actionpair (direct 0x0C12) + * + * This command doesn't need and doesn't have its own command buffer + * but for response format is as specified in 'struct ice_aqc_acl_generic' + */ +enum ice_status +ice_aq_alloc_actpair(struct ice_hw *hw, u16 alloc_id, + struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd) +{ + return ice_aq_actpair_a_d(hw, ice_aqc_opc_alloc_acl_actpair, alloc_id, + buf, cd); +} + +/** + * ice_aq_dealloc_actpair - dealloc actionpair for specified ACL table + * 
@hw: pointer to the HW struct + * @alloc_id: allocation ID of the table being associated with the actionpair + * @buf: address of indirect data buffer + * @cd: pointer to command details structure or NULL + * + * Deallocate ACL actionpair (direct 0x0C13) + */ +enum ice_status +ice_aq_dealloc_actpair(struct ice_hw *hw, u16 alloc_id, + struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd) +{ + return ice_aq_actpair_a_d(hw, ice_aqc_opc_dealloc_acl_actpair, alloc_id, + buf, cd); +} + +/* Helper function to program/query ACL action pair */ +static enum ice_status +ice_aq_actpair_p_q(struct ice_hw *hw, u16 opcode, u8 act_mem_idx, + u16 act_entry_idx, struct ice_aqc_actpair *buf, + struct ice_sq_cd *cd) +{ + struct ice_aqc_acl_actpair *cmd; + struct ice_aq_desc desc; + + ice_fill_dflt_direct_cmd_desc(&desc, opcode); + + if (opcode == ice_aqc_opc_program_acl_actpair) + desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD); + + cmd = &desc.params.program_query_actpair; + cmd->act_mem_index = act_mem_idx; + cmd->act_entry_index = CPU_TO_LE16(act_entry_idx); + + return ice_aq_send_cmd(hw, &desc, buf, sizeof(*buf), cd); +} + +/** + * ice_aq_program_actpair - program ACL actionpair + * @hw: pointer to the HW struct + * @act_mem_idx: action memory index to program/update/query + * @act_entry_idx: the entry index in action memory to be programmed/updated + * @buf: address of indirect data buffer + * @cd: pointer to command details structure or NULL + * + * Program action entries (indirect 0x0C1C) + */ +enum ice_status +ice_aq_program_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx, + struct ice_aqc_actpair *buf, struct ice_sq_cd *cd) +{ + return ice_aq_actpair_p_q(hw, ice_aqc_opc_program_acl_actpair, + act_mem_idx, act_entry_idx, buf, cd); +} + +/** + * ice_aq_query_actpair - query ACL actionpair + * @hw: pointer to the HW struct + * @act_mem_idx: action memory index to program/update/query + * @act_entry_idx: the entry index in action memory to be programmed/updated + * @buf: address of indirect data buffer + * @cd: pointer to command details structure or NULL + * + * Query ACL actionpair (indirect 0x0C25) + */ +enum ice_status +ice_aq_query_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx, + struct ice_aqc_actpair *buf, struct ice_sq_cd *cd) +{ + return ice_aq_actpair_p_q(hw, ice_aqc_opc_query_acl_actpair, + act_mem_idx, act_entry_idx, buf, cd); +} + +/** + * ice_aq_dealloc_acl_res - deallocate ACL resources + * @hw: pointer to the HW struct + * @cd: pointer to command details structure or NULL + * + * ACL - de-allocate (direct 0x0C1A) resources. 
Used by SW to release all the + * resources allocated for it using a single command + */ +enum ice_status ice_aq_dealloc_acl_res(struct ice_hw *hw, struct ice_sq_cd *cd) +{ + struct ice_aq_desc desc; + + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_dealloc_acl_res); + + return ice_aq_send_cmd(hw, &desc, NULL, 0, cd); +} + +/** + * ice_acl_prof_aq_send - sending acl profile aq commands + * @hw: pointer to the HW struct + * @opc: command opcode + * @prof_id: profile ID + * @buf: ptr to buffer + * @cd: pointer to command details structure or NULL + * + * This function sends ACL profile commands + */ +static enum ice_status +ice_acl_prof_aq_send(struct ice_hw *hw, u16 opc, u8 prof_id, + struct ice_aqc_acl_prof_generic_frmt *buf, + struct ice_sq_cd *cd) +{ + struct ice_aq_desc desc; + + ice_fill_dflt_direct_cmd_desc(&desc, opc); + desc.params.profile.profile_id = prof_id; + if (opc == ice_aqc_opc_program_acl_prof_extraction || + opc == ice_aqc_opc_program_acl_prof_ranges) + desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD); + return ice_aq_send_cmd(hw, &desc, buf, sizeof(*buf), cd); +} + +/** + * ice_prgm_acl_prof_extrt - program ACL profile extraction sequence + * @hw: pointer to the HW struct + * @prof_id: profile ID + * @buf: ptr to buffer + * @cd: pointer to command details structure or NULL + * + * ACL - program ACL profile extraction (indirect 0x0C1D) + */ +enum ice_status +ice_prgm_acl_prof_extrt(struct ice_hw *hw, u8 prof_id, + struct ice_aqc_acl_prof_generic_frmt *buf, + struct ice_sq_cd *cd) +{ + return ice_acl_prof_aq_send(hw, ice_aqc_opc_program_acl_prof_extraction, + prof_id, buf, cd); +} + +/** + * ice_query_acl_prof - query ACL profile + * @hw: pointer to the HW struct + * @prof_id: profile ID + * @buf: ptr to buffer (which will contain response of this command) + * @cd: pointer to command details structure or NULL + * + * ACL - query ACL profile (indirect 0x0C21) + */ +enum ice_status +ice_query_acl_prof(struct ice_hw *hw, u8 prof_id, + struct ice_aqc_acl_prof_generic_frmt *buf, + struct ice_sq_cd *cd) +{ + return ice_acl_prof_aq_send(hw, ice_aqc_opc_query_acl_prof, prof_id, + buf, cd); +} + +/** + * ice_aq_acl_cntrs_chk_params - Checks ACL counter parameters + * @cntrs: ptr to buffer describing input and output params + * + * This function checks the counter bank range for counter type and returns + * success or failure. + */ +static enum ice_status ice_aq_acl_cntrs_chk_params(struct ice_acl_cntrs *cntrs) +{ + enum ice_status status = ICE_SUCCESS; + + if (!cntrs || !cntrs->amount) + return ICE_ERR_PARAM; + + switch (cntrs->type) { + case ICE_AQC_ACL_CNT_TYPE_SINGLE: + /* Single counter type - configured to count either bytes + * or packets, the valid values for byte or packet counters + * shall be 0-3. + */ + if (cntrs->bank > ICE_AQC_ACL_MAX_CNT_SINGLE) + status = ICE_ERR_OUT_OF_RANGE; + break; + case ICE_AQC_ACL_CNT_TYPE_DUAL: + /* Pair counter type - counts number of bytes and packets + * The valid values for byte/packet counter duals shall be 0-1 + */ + if (cntrs->bank > ICE_AQC_ACL_MAX_CNT_DUAL) + status = ICE_ERR_OUT_OF_RANGE; + break; + default: + /* Unspecified counter type - Invalid or error*/ + status = ICE_ERR_PARAM; + } + + return status; +} + +/** + * ice_aq_alloc_acl_cntrs - allocate ACL counters + * @hw: pointer to the HW struct + * @cntrs: ptr to buffer describing input and output params + * @cd: pointer to command details structure or NULL + * + * ACL - allocate (indirect 0x0C16) counters. This function attempts to + * allocate a contiguous block of counters. 
In case of failures, caller can + * attempt to allocate a smaller chunk. The allocation is considered + * unsuccessful if returned counter value is invalid. In this case it returns + * an error otherwise success. + */ +enum ice_status +ice_aq_alloc_acl_cntrs(struct ice_hw *hw, struct ice_acl_cntrs *cntrs, + struct ice_sq_cd *cd) +{ + struct ice_aqc_acl_alloc_counters *cmd; + u16 first_cntr, last_cntr; + struct ice_aq_desc desc; + enum ice_status status; + + /* check for invalid params */ + status = ice_aq_acl_cntrs_chk_params(cntrs); + if (status) + return status; + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_alloc_acl_counters); + cmd = &desc.params.alloc_counters; + cmd->counter_amount = cntrs->amount; + cmd->counters_type = cntrs->type; + cmd->bank_alloc = cntrs->bank; + status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd); + if (!status) { + first_cntr = LE16_TO_CPU(cmd->ops.resp.first_counter); + last_cntr = LE16_TO_CPU(cmd->ops.resp.last_counter); + if (first_cntr == ICE_AQC_ACL_ALLOC_CNT_INVAL || + last_cntr == ICE_AQC_ACL_ALLOC_CNT_INVAL) + return ICE_ERR_OUT_OF_RANGE; + cntrs->first_cntr = first_cntr; + cntrs->last_cntr = last_cntr; + } + return status; +} + +/** + * ice_aq_dealloc_acl_cntrs - deallocate ACL counters + * @hw: pointer to the HW struct + * @cntrs: ptr to buffer describing input and output params + * @cd: pointer to command details structure or NULL + * + * ACL - de-allocate (direct 0x0C17) counters. + * This function deallocate ACL counters. + */ +enum ice_status +ice_aq_dealloc_acl_cntrs(struct ice_hw *hw, struct ice_acl_cntrs *cntrs, + struct ice_sq_cd *cd) +{ + struct ice_aqc_acl_dealloc_counters *cmd; + struct ice_aq_desc desc; + enum ice_status status; + + /* check for invalid params */ + status = ice_aq_acl_cntrs_chk_params(cntrs); + if (status) + return status; + + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_dealloc_acl_counters); + cmd = &desc.params.dealloc_counters; + cmd->first_counter = CPU_TO_LE16(cntrs->first_cntr); + cmd->last_counter = CPU_TO_LE16(cntrs->last_cntr); + cmd->counters_type = cntrs->type; + cmd->bank_alloc = cntrs->bank; + return ice_aq_send_cmd(hw, &desc, NULL, 0, cd); +} + +/** + * ice_aq_query_acl_cntrs - query ACL counter + * @hw: pointer to the HW struct + * @bank: queries counter bank + * @index: queried counter index + * @cntr_val: pointer to counter or packet counter value + * @cd: pointer to command details structure or NULL + * + * ACL - query ACL counter (direct 0x0C27) + */ +enum ice_status +ice_aq_query_acl_cntrs(struct ice_hw *hw, u8 bank, u16 index, u64 *cntr_val, + struct ice_sq_cd *cd) +{ + struct ice_aqc_acl_query_counter *cmd; + struct ice_aq_desc desc; + enum ice_status status; + + if (!cntr_val) + return ICE_ERR_PARAM; + + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_query_acl_counter); + cmd = &desc.params.query_counter; + cmd->counter_index = CPU_TO_LE16(index); + cmd->counter_bank = bank; + status = ice_aq_send_cmd(hw, &desc, NULL, 0, cd); + if (!status) { + __le64 resp_val = 0; + + ice_memcpy(&resp_val, cmd->ops.resp.val, + sizeof(cmd->ops.resp.val), ICE_NONDMA_TO_NONDMA); + *cntr_val = LE64_TO_CPU(resp_val); + } + return status; +} + +/** + * ice_prog_acl_prof_ranges - program ACL profile ranges + * @hw: pointer to the HW struct + * @prof_id: programmed or updated profile ID + * @buf: pointer to input buffer + * @cd: pointer to command details structure or NULL + * + * ACL - program ACL profile ranges (indirect 0x0C1E) + */ +enum ice_status +ice_prog_acl_prof_ranges(struct ice_hw *hw, u8 prof_id, + 
struct ice_aqc_acl_profile_ranges *buf, + struct ice_sq_cd *cd) +{ + struct ice_aq_desc desc; + + ice_fill_dflt_direct_cmd_desc(&desc, + ice_aqc_opc_program_acl_prof_ranges); + desc.params.profile.profile_id = prof_id; + desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD); + return ice_aq_send_cmd(hw, &desc, buf, sizeof(*buf), cd); +} + +/** + * ice_query_acl_prof_ranges - query ACL profile ranges + * @hw: pointer to the HW struct + * @prof_id: programmed or updated profile ID + * @buf: pointer to response buffer + * @cd: pointer to command details structure or NULL + * + * ACL - query ACL profile ranges (indirect 0x0C22) + */ +enum ice_status +ice_query_acl_prof_ranges(struct ice_hw *hw, u8 prof_id, + struct ice_aqc_acl_profile_ranges *buf, + struct ice_sq_cd *cd) +{ + struct ice_aq_desc desc; + + ice_fill_dflt_direct_cmd_desc(&desc, + ice_aqc_opc_query_acl_prof_ranges); + desc.params.profile.profile_id = prof_id; + return ice_aq_send_cmd(hw, &desc, buf, sizeof(*buf), cd); +} + +/** + * ice_aq_alloc_acl_scen - allocate ACL scenario + * @hw: pointer to the HW struct + * @scen_id: memory location to receive allocated scenario ID + * @buf: address of indirect data buffer + * @cd: pointer to command details structure or NULL + * + * Allocate ACL scenario (indirect 0x0C14) + */ +enum ice_status +ice_aq_alloc_acl_scen(struct ice_hw *hw, u16 *scen_id, + struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd) +{ + struct ice_aqc_acl_alloc_scen *cmd; + struct ice_aq_desc desc; + enum ice_status status; + + if (!scen_id) + return ICE_ERR_PARAM; + + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_alloc_acl_scen); + desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD); + cmd = &desc.params.alloc_scen; + + status = ice_aq_send_cmd(hw, &desc, buf, sizeof(*buf), cd); + if (!status) + *scen_id = LE16_TO_CPU(cmd->ops.resp.scen_id); + + return status; +} + +/** + * ice_aq_dealloc_acl_scen - deallocate ACL scenario + * @hw: pointer to the HW struct + * @scen_id: scen_id to be deallocated (input and output field) + * @cd: pointer to command details structure or NULL + * + * Deallocate ACL scenario (direct 0x0C15) + */ +enum ice_status +ice_aq_dealloc_acl_scen(struct ice_hw *hw, u16 scen_id, struct ice_sq_cd *cd) +{ + struct ice_aqc_acl_dealloc_scen *cmd; + struct ice_aq_desc desc; + + ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_dealloc_acl_scen); + cmd = &desc.params.dealloc_scen; + cmd->scen_id = CPU_TO_LE16(scen_id); + + return ice_aq_send_cmd(hw, &desc, NULL, 0, cd); +} + +/** + * ice_aq_update_query_scen - update or query ACL scenario + * @hw: pointer to the HW struct + * @opcode: aq command opcode for either query or update scenario + * @scen_id: scen_id to be updated or queried + * @buf: address of indirect data buffer + * @cd: pointer to command details structure or NULL + * + * Calls update or query ACL scenario + */ +static enum ice_status +ice_aq_update_query_scen(struct ice_hw *hw, u16 opcode, u16 scen_id, + struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd) +{ + struct ice_aqc_acl_update_query_scen *cmd; + struct ice_aq_desc desc; + + ice_fill_dflt_direct_cmd_desc(&desc, opcode); + if (opcode == ice_aqc_opc_update_acl_scen) + desc.flags |= CPU_TO_LE16(ICE_AQ_FLAG_RD); + cmd = &desc.params.update_query_scen; + cmd->scen_id = CPU_TO_LE16(scen_id); + + return ice_aq_send_cmd(hw, &desc, buf, sizeof(*buf), cd); +} + +/** + * ice_aq_update_acl_scen - update ACL scenario + * @hw: pointer to the HW struct + * @scen_id: scen_id to be updated + * @buf: address of indirect data buffer + * @cd: pointer to command details 
structure or NULL + * + * Update ACL scenario (indirect 0x0C1B) + */ +enum ice_status +ice_aq_update_acl_scen(struct ice_hw *hw, u16 scen_id, + struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd) +{ + return ice_aq_update_query_scen(hw, ice_aqc_opc_update_acl_scen, + scen_id, buf, cd); +} + +/** + * ice_aq_query_acl_scen - query ACL scenario + * @hw: pointer to the HW struct + * @scen_id: scen_id to be queried + * @buf: address of indirect data buffer + * @cd: pointer to command details structure or NULL + * + * Query ACL scenario (indirect 0x0C23) + */ +enum ice_status +ice_aq_query_acl_scen(struct ice_hw *hw, u16 scen_id, + struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd) +{ + return ice_aq_update_query_scen(hw, ice_aqc_opc_query_acl_scen, + scen_id, buf, cd); +} diff --git a/drivers/net/ice/base/ice_acl.h b/drivers/net/ice/base/ice_acl.h new file mode 100644 index 000000000..00296300b --- /dev/null +++ b/drivers/net/ice/base/ice_acl.h @@ -0,0 +1,206 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2001-2020 + */ + +#ifndef _ICE_ACL_H_ +#define _ICE_ACL_H_ + +#include "ice_common.h" +#include "ice_adminq_cmd.h" + +struct ice_acl_tbl_params { + u16 width; /* Select/match bytes */ + u16 depth; /* Number of entries */ + +#define ICE_ACL_TBL_MAX_DEP_TBLS 15 + u16 dep_tbls[ICE_ACL_TBL_MAX_DEP_TBLS]; + + u8 entry_act_pairs; /* Action pairs per entry */ + u8 concurr; /* Concurrent table lookup enable */ +}; + +struct ice_acl_act_mem { + u8 act_mem; +#define ICE_ACL_ACT_PAIR_MEM_INVAL 0xff + u8 member_of_tcam; +}; + +struct ice_acl_tbl { + /* TCAM configuration */ + u8 first_tcam; /* Index of the first TCAM block */ + u8 last_tcam; /* Index of the last TCAM block */ + /* Index of the first entry in the first TCAM */ + u16 first_entry; + /* Index of the last entry in the last TCAM */ + u16 last_entry; + + /* List of active scenarios */ + struct LIST_HEAD_TYPE scens; + + struct ice_acl_tbl_params info; + struct ice_acl_act_mem act_mems[ICE_AQC_MAX_ACTION_MEMORIES]; + + /* Keep track of available 64-entry chunks in TCAMs */ + ice_declare_bitmap(avail, ICE_AQC_ACL_ALLOC_UNITS); + + u16 id; +}; + +#define ICE_MAX_ACL_TCAM_ENTRY (ICE_AQC_ACL_TCAM_DEPTH * ICE_AQC_ACL_SLICES) +enum ice_acl_entry_prior { + ICE_LOW = 0, + ICE_NORMAL, + ICE_HIGH, + ICE_MAX_PRIOR +}; + +/* Scenario structure + * A scenario is a logical partition within an ACL table. It can span more + * than one TCAM in cascade mode to support select/mask key widths larger. + * than the width of a TCAM. It can also span more than one TCAM in stacked + * mode to support larger number of entries than what a TCAM can hold. It is + * used to select values from selection bases (field vectors holding extract + * protocol header fields) to form lookup keys, and to associate action memory + * banks to the TCAMs used. 
+ */ +struct ice_acl_scen { + struct LIST_ENTRY_TYPE list_entry; + /* If nth bit of act_mem_bitmap is set, then nth action memory will + * participate in this scenario + */ + ice_declare_bitmap(act_mem_bitmap, ICE_AQC_MAX_ACTION_MEMORIES); + + /* If nth bit of entry_bitmap is set, then nth entry will + * be available in this scenario + */ + ice_declare_bitmap(entry_bitmap, ICE_MAX_ACL_TCAM_ENTRY); + u16 first_idx[ICE_MAX_PRIOR]; + u16 last_idx[ICE_MAX_PRIOR]; + + u16 id; + u16 start; /* Number of entry from the start of the parent table */ +#define ICE_ACL_SCEN_MIN_WIDTH 0x3 + u16 width; /* Number of select/mask bytes */ + u16 num_entry; /* Number of scenario entry */ + u16 end; /* Last addressable entry from start of table */ + u8 eff_width; /* Available width in bytes to match */ +#define ICE_ACL_SCEN_PKT_DIR_IDX_IN_TCAM 0x2 +#define ICE_ACL_SCEN_PID_IDX_IN_TCAM 0x3 +#define ICE_ACL_SCEN_RNG_CHK_IDX_IN_TCAM 0x4 + u8 pid_idx; /* Byte index used to match profile ID */ + u8 rng_chk_idx; /* Byte index used to match range checkers result */ + u8 pkt_dir_idx; /* Byte index used to match packet direction */ +}; + +/* This structure represents input fields needed to allocate ACL table */ +struct ice_acl_alloc_tbl { + /* Table's width in number of bytes matched */ + u16 width; + /* Table's depth in number of entries. */ + u16 depth; + u8 num_dependent_alloc_ids; /* number of depdendent alloc IDs */ + u8 concurr; /* true for concurrent table type */ + + /* Amount of action pairs per table entry. Minimal valid + * value for this field is 1 (e.g. single pair of actions) + */ + u8 act_pairs_per_entry; + union { + struct ice_aqc_acl_alloc_table_data data_buf; + struct ice_aqc_acl_generic resp_buf; + } buf; +}; + +/* This structure is used to communicate input and output params for + * [de]allocate_acl_counters + */ +struct ice_acl_cntrs { + u8 amount; + u8 type; + u8 bank; + + /* Next 2 variables are used for output in case of alloc_acl_counters + * and input in case of deallocate_acl_counters + */ + u16 first_cntr; + u16 last_cntr; +}; + +enum ice_status +ice_acl_create_tbl(struct ice_hw *hw, struct ice_acl_tbl_params *params); +enum ice_status ice_acl_destroy_tbl(struct ice_hw *hw); +enum ice_status +ice_acl_create_scen(struct ice_hw *hw, u16 match_width, u16 num_entries, + u16 *scen_id); +enum ice_status ice_acl_destroy_scen(struct ice_hw *hw, u16 scen_id); +enum ice_status +ice_aq_alloc_acl_tbl(struct ice_hw *hw, struct ice_acl_alloc_tbl *tbl, + struct ice_sq_cd *cd); +enum ice_status +ice_aq_dealloc_acl_tbl(struct ice_hw *hw, u16 alloc_id, + struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd); +enum ice_status +ice_aq_program_acl_entry(struct ice_hw *hw, u8 tcam_idx, u16 entry_idx, + struct ice_aqc_acl_data *buf, struct ice_sq_cd *cd); +enum ice_status +ice_aq_query_acl_entry(struct ice_hw *hw, u8 tcam_idx, u16 entry_idx, + struct ice_aqc_acl_data *buf, struct ice_sq_cd *cd); +enum ice_status +ice_aq_alloc_actpair(struct ice_hw *hw, u16 alloc_id, + struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd); +enum ice_status +ice_aq_dealloc_actpair(struct ice_hw *hw, u16 alloc_id, + struct ice_aqc_acl_generic *buf, struct ice_sq_cd *cd); +enum ice_status +ice_aq_program_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx, + struct ice_aqc_actpair *buf, struct ice_sq_cd *cd); +enum ice_status +ice_aq_query_actpair(struct ice_hw *hw, u8 act_mem_idx, u16 act_entry_idx, + struct ice_aqc_actpair *buf, struct ice_sq_cd *cd); +enum ice_status ice_aq_dealloc_acl_res(struct ice_hw *hw, struct 
ice_sq_cd *cd); +enum ice_status +ice_prgm_acl_prof_extrt(struct ice_hw *hw, u8 prof_id, + struct ice_aqc_acl_prof_generic_frmt *buf, + struct ice_sq_cd *cd); +enum ice_status +ice_query_acl_prof(struct ice_hw *hw, u8 prof_id, + struct ice_aqc_acl_prof_generic_frmt *buf, + struct ice_sq_cd *cd); +enum ice_status +ice_aq_alloc_acl_cntrs(struct ice_hw *hw, struct ice_acl_cntrs *cntrs, + struct ice_sq_cd *cd); +enum ice_status +ice_aq_dealloc_acl_cntrs(struct ice_hw *hw, struct ice_acl_cntrs *cntrs, + struct ice_sq_cd *cd); +enum ice_status +ice_aq_query_acl_cntrs(struct ice_hw *hw, u8 bank, u16 index, u64 *cntr_val, + struct ice_sq_cd *cd); +enum ice_status +ice_prog_acl_prof_ranges(struct ice_hw *hw, u8 prof_id, + struct ice_aqc_acl_profile_ranges *buf, + struct ice_sq_cd *cd); +enum ice_status +ice_query_acl_prof_ranges(struct ice_hw *hw, u8 prof_id, + struct ice_aqc_acl_profile_ranges *buf, + struct ice_sq_cd *cd); +enum ice_status +ice_aq_alloc_acl_scen(struct ice_hw *hw, u16 *scen_id, + struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd); +enum ice_status +ice_aq_dealloc_acl_scen(struct ice_hw *hw, u16 scen_id, struct ice_sq_cd *cd); +enum ice_status +ice_aq_update_acl_scen(struct ice_hw *hw, u16 scen_id, + struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd); +enum ice_status +ice_aq_query_acl_scen(struct ice_hw *hw, u16 scen_id, + struct ice_aqc_acl_scen *buf, struct ice_sq_cd *cd); +enum ice_status +ice_acl_add_entry(struct ice_hw *hw, struct ice_acl_scen *scen, + enum ice_acl_entry_prior prior, u8 *keys, u8 *inverts, + struct ice_acl_act_entry *acts, u8 acts_cnt, u16 *entry_idx); +enum ice_status +ice_acl_prog_act(struct ice_hw *hw, struct ice_acl_scen *scen, + struct ice_acl_act_entry *acts, u8 acts_cnt, u16 entry_idx); +enum ice_status +ice_acl_rem_entry(struct ice_hw *hw, struct ice_acl_scen *scen, u16 entry_idx); +#endif /* _ICE_ACL_H_ */ diff --git a/drivers/net/ice/base/ice_acl_ctrl.c b/drivers/net/ice/base/ice_acl_ctrl.c new file mode 100644 index 000000000..7dfe0eda3 --- /dev/null +++ b/drivers/net/ice/base/ice_acl_ctrl.c @@ -0,0 +1,1185 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2001-2020 + */ + +#include "ice_acl.h" +#include "ice_flow.h" + +/* Determine the TCAM index of entry 'e' within the ACL table */ +#define ICE_ACL_TBL_TCAM_IDX(e) ((e) / ICE_AQC_ACL_TCAM_DEPTH) + +/* Determine the entry index within the TCAM */ +#define ICE_ACL_TBL_TCAM_ENTRY_IDX(e) ((e) % ICE_AQC_ACL_TCAM_DEPTH) + +#define ICE_ACL_SCEN_ENTRY_INVAL 0xFFFF +/** + * ice_acl_init_entry + * @scen: pointer to the scenario struct + * + * Initialize the scenario control structure. 
+ */ +static void ice_acl_init_entry(struct ice_acl_scen *scen) +{ + /** + * low priority: start from the highest index, 25% of total entries + * normal priority: start from the highest index, 50% of total entries + * high priority: start from the lowest index, 25% of total entries + */ + scen->first_idx[ICE_LOW] = scen->num_entry - 1; + scen->first_idx[ICE_NORMAL] = scen->num_entry - scen->num_entry / 4 - 1; + scen->first_idx[ICE_HIGH] = 0; + + scen->last_idx[ICE_LOW] = scen->num_entry - scen->num_entry / 4; + scen->last_idx[ICE_NORMAL] = scen->num_entry / 4; + scen->last_idx[ICE_HIGH] = scen->num_entry / 4 - 1; +} + +/** + * ice_acl_scen_assign_entry_idx + * @scen: pointer to the scenario struct + * @prior: the priority of the flow entry being allocated + * + * To find the index of an available entry in scenario + * + * Returns ICE_ACL_SCEN_ENTRY_INVAL if fails + * Returns index on success + */ +static u16 ice_acl_scen_assign_entry_idx(struct ice_acl_scen *scen, + enum ice_acl_entry_prior prior) +{ + u16 first_idx, last_idx, i; + s8 step; + + if (prior >= ICE_MAX_PRIOR) + return ICE_ACL_SCEN_ENTRY_INVAL; + + first_idx = scen->first_idx[prior]; + last_idx = scen->last_idx[prior]; + step = first_idx <= last_idx ? 1 : -1; + + for (i = first_idx; i != last_idx + step; i += step) + if (!ice_test_and_set_bit(i, scen->entry_bitmap)) + return i; + + return ICE_ACL_SCEN_ENTRY_INVAL; +} + +/** + * ice_acl_scen_free_entry_idx + * @scen: pointer to the scenario struct + * @idx: the index of the flow entry being de-allocated + * + * To mark an entry available in scenario + */ +static enum ice_status +ice_acl_scen_free_entry_idx(struct ice_acl_scen *scen, u16 idx) +{ + if (idx >= scen->num_entry) + return ICE_ERR_MAX_LIMIT; + + if (!ice_test_and_clear_bit(idx, scen->entry_bitmap)) + return ICE_ERR_DOES_NOT_EXIST; + + return ICE_SUCCESS; +} + +/** + * ice_acl_tbl_calc_end_idx + * @start: start index of the TCAM entry of this partition + * @num_entries: number of entries in this partition + * @width: width of a partition in number of TCAMs + * + * Calculate the end entry index for a partition with starting entry index + * 'start', entries 'num_entries', and width 'width'. + */ +static u16 ice_acl_tbl_calc_end_idx(u16 start, u16 num_entries, u16 width) +{ + u16 end_idx, add_entries = 0; + + end_idx = start + (num_entries - 1); + + /* In case that our ACL partition requires cascading TCAMs */ + if (width > 1) { + u16 num_stack_level; + + /* Figure out the TCAM stacked level in this ACL scenario */ + num_stack_level = (start % ICE_AQC_ACL_TCAM_DEPTH) + + num_entries; + num_stack_level = DIVIDE_AND_ROUND_UP(num_stack_level, + ICE_AQC_ACL_TCAM_DEPTH); + + /* In this case, each entries in our ACL partition span + * multiple TCAMs. Thus, we will need to add + * ((width - 1) * num_stack_level) TCAM's entries to + * end_idx. + * + * For example : In our case, our scenario is 2x2: + * [TCAM 0] [TCAM 1] + * [TCAM 2] [TCAM 3] + * Assuming that a TCAM will have 512 entries. If "start" + * is 500, "num_entries" is 3 and "width" = 2, then end_idx + * should be 1024 (belongs to TCAM 2). + * Before going to this if statement, end_idx will have the + * value of 512. If "width" is 1, then the final value of + * end_idx is 512. However, in our case, width is 2, then we + * will need add (2 - 1) * 1 * 512. As result, end_idx will + * have the value of 1024. 
+ */ + add_entries = (width - 1) * num_stack_level * + ICE_AQC_ACL_TCAM_DEPTH; + } + + return end_idx + add_entries; +} + +/** + * ice_acl_init_tbl + * @hw: pointer to the hardware structure + * + * Initialize the ACL table by invalidating TCAM entries and action pairs. + */ +static enum ice_status ice_acl_init_tbl(struct ice_hw *hw) +{ + struct ice_aqc_actpair act_buf; + struct ice_aqc_acl_data buf; + enum ice_status status = ICE_SUCCESS; + struct ice_acl_tbl *tbl; + u8 tcam_idx, i; + u16 idx; + + tbl = hw->acl_tbl; + if (!tbl) { + status = ICE_ERR_CFG; + return status; + } + + ice_memset(&buf, 0, sizeof(buf), ICE_NONDMA_MEM); + ice_memset(&act_buf, 0, sizeof(act_buf), ICE_NONDMA_MEM); + + tcam_idx = tbl->first_tcam; + idx = tbl->first_entry; + while (tcam_idx < tbl->last_tcam || + (tcam_idx == tbl->last_tcam && idx <= tbl->last_entry)) { + /* Use the same value for entry_key and entry_key_inv since + * we are initializing the fields to 0 + */ + status = ice_aq_program_acl_entry(hw, tcam_idx, idx, &buf, + NULL); + if (status) + return status; + + if (++idx > tbl->last_entry) { + tcam_idx++; + idx = tbl->first_entry; + } + } + + for (i = 0; i < ICE_AQC_MAX_ACTION_MEMORIES; i++) { + u16 act_entry_idx, start, end; + + if (tbl->act_mems[i].act_mem == ICE_ACL_ACT_PAIR_MEM_INVAL) + continue; + + start = tbl->first_entry; + end = tbl->last_entry; + + for (act_entry_idx = start; act_entry_idx <= end; + act_entry_idx++) { + /* Invalidate all allocated action pairs */ + status = ice_aq_program_actpair(hw, i, act_entry_idx, + &act_buf, NULL); + if (status) + return status; + } + } + + return status; +} + +/** + * ice_acl_assign_act_mems_to_tcam + * @tbl: pointer to acl table structure + * @cur_tcam: Index of current TCAM. Value = 0 to (ICE_AQC_ACL_SLICES - 1) + * @cur_mem_idx: Index of current action memory bank. Value = 0 to + * (ICE_AQC_MAX_ACTION_MEMORIES - 1) + * @num_mem: Number of action memory banks for this TCAM + * + * Assign "num_mem" valid action memory banks from "curr_mem_idx" to + * "curr_tcam" TCAM. + */ +static void +ice_acl_assign_act_mems_to_tcam(struct ice_acl_tbl *tbl, u8 cur_tcam, + u8 *cur_mem_idx, u8 num_mem) +{ + u8 mem_cnt; + + for (mem_cnt = 0; + *cur_mem_idx < ICE_AQC_MAX_ACTION_MEMORIES && mem_cnt < num_mem; + (*cur_mem_idx)++) { + struct ice_acl_act_mem *p_mem = &tbl->act_mems[*cur_mem_idx]; + + if (p_mem->act_mem == ICE_ACL_ACT_PAIR_MEM_INVAL) + continue; + + p_mem->member_of_tcam = cur_tcam; + + mem_cnt++; + } +} + +/** + * ice_acl_divide_act_mems_to_tcams + * @tbl: pointer to acl table structure + * + * Figure out how to divide given action memory banks to given TCAMs. This + * division is for SW book keeping. In the time when scenario is created, + * an action memory bank can be used for different TCAM. + * + * For example, given that we have 2x2 ACL table with each table entry has + * 2 action memory pairs. As the result, we will have 4 TCAMs (T1,T2,T3,T4) + * and 4 action memory banks (A1,A2,A3,A4) + * [T1 - T2] { A1 - A2 } + * [T3 - T4] { A3 - A4 } + * In the time when we need to create a scenario, for example, 2x1 scenario, + * we will use [T3,T4] in a cascaded layout. As it is a requirement that all + * action memory banks in a cascaded TCAM's row will need to associate with + * the last TCAM. Thus, we will associate action memory banks [A3] and [A4] + * for TCAM [T4]. + * For SW book-keeping purpose, we will keep theoretical maps between TCAM + * [Tn] to action memory bank [An]. 
+ */ +static void ice_acl_divide_act_mems_to_tcams(struct ice_acl_tbl *tbl) +{ + u16 num_cscd, stack_level, stack_idx, min_act_mem; + u8 tcam_idx = tbl->first_tcam; + u16 max_idx_to_get_extra; + u8 mem_idx = 0; + + /* Determine number of stacked TCAMs */ + stack_level = DIVIDE_AND_ROUND_UP(tbl->info.depth, + ICE_AQC_ACL_TCAM_DEPTH); + + /* Determine number of cascaded TCAMs */ + num_cscd = DIVIDE_AND_ROUND_UP(tbl->info.width, + ICE_AQC_ACL_KEY_WIDTH_BYTES); + + /* In a line of cascaded TCAM, given the number of action memory + * banks per ACL table entry, we want to fairly divide these action + * memory banks between these TCAMs. + * + * For example, there are 3 TCAMs (TCAM 3,4,5) in a line of + * cascaded TCAM, and there are 7 act_mems for each ACL table entry. + * The result is: + * [TCAM_3 will have 3 act_mems] + * [TCAM_4 will have 2 act_mems] + * [TCAM_5 will have 2 act_mems] + */ + min_act_mem = tbl->info.entry_act_pairs / num_cscd; + max_idx_to_get_extra = tbl->info.entry_act_pairs % num_cscd; + + for (stack_idx = 0; stack_idx < stack_level; stack_idx++) { + u16 i; + + for (i = 0; i < num_cscd; i++) { + u8 total_act_mem = min_act_mem; + + if (i < max_idx_to_get_extra) + total_act_mem++; + + ice_acl_assign_act_mems_to_tcam(tbl, tcam_idx, + &mem_idx, + total_act_mem); + + tcam_idx++; + } + } +} + +/** + * ice_acl_create_tbl + * @hw: pointer to the HW struct + * @params: parameters for the table to be created + * + * Create a LEM table for ACL usage. We are currently starting with some fixed + * values for the size of the table, but this will need to grow as more flow + * entries are added by the user level. + */ +enum ice_status +ice_acl_create_tbl(struct ice_hw *hw, struct ice_acl_tbl_params *params) +{ + u16 width, depth, first_e, last_e, i; + struct ice_aqc_acl_generic *resp_buf; + struct ice_acl_alloc_tbl tbl_alloc; + struct ice_acl_tbl *tbl; + enum ice_status status; + + if (hw->acl_tbl) + return ICE_ERR_ALREADY_EXISTS; + + if (!params) + return ICE_ERR_PARAM; + + /* round up the width to the next TCAM width boundary. */ + width = ROUND_UP(params->width, (u16)ICE_AQC_ACL_KEY_WIDTH_BYTES); + /* depth should be provided in chunk (64 entry) increments */ + depth = ICE_ALIGN(params->depth, ICE_ACL_ENTRY_ALLOC_UNIT); + + if (params->entry_act_pairs < width / ICE_AQC_ACL_KEY_WIDTH_BYTES) { + params->entry_act_pairs = width / ICE_AQC_ACL_KEY_WIDTH_BYTES; + + if (params->entry_act_pairs > ICE_AQC_TBL_MAX_ACTION_PAIRS) + params->entry_act_pairs = ICE_AQC_TBL_MAX_ACTION_PAIRS; + } + + /* Validate that width*depth will not exceed the TCAM limit */ + if ((DIVIDE_AND_ROUND_UP(depth, ICE_AQC_ACL_TCAM_DEPTH) * + (width / ICE_AQC_ACL_KEY_WIDTH_BYTES)) > ICE_AQC_ACL_SLICES) + return ICE_ERR_MAX_LIMIT; + + ice_memset(&tbl_alloc, 0, sizeof(tbl_alloc), ICE_NONDMA_MEM); + tbl_alloc.width = width; + tbl_alloc.depth = depth; + tbl_alloc.act_pairs_per_entry = params->entry_act_pairs; + tbl_alloc.concurr = params->concurr; + /* Set dependent_alloc_id only for concurrent table type */ + if (params->concurr) { + tbl_alloc.num_dependent_alloc_ids = + ICE_AQC_MAX_CONCURRENT_ACL_TBL; + + for (i = 0; i < ICE_AQC_MAX_CONCURRENT_ACL_TBL; i++) + tbl_alloc.buf.data_buf.alloc_ids[i] = + CPU_TO_LE16(params->dep_tbls[i]); + } + + /* call the aq command to create the ACL table with these values */ + status = ice_aq_alloc_acl_tbl(hw, &tbl_alloc, NULL); + + if (status) { + if (LE16_TO_CPU(tbl_alloc.buf.resp_buf.alloc_id) < + ICE_AQC_ALLOC_ID_LESS_THAN_4K) + ice_debug(hw, ICE_DBG_ACL, + "Alloc ACL table failed. 
Unavailable resource.\n"); + else + ice_debug(hw, ICE_DBG_ACL, + "AQ allocation of ACL failed with error. status: %d\n", + status); + return status; + } + + tbl = (struct ice_acl_tbl *)ice_malloc(hw, sizeof(*tbl)); + if (!tbl) { + status = ICE_ERR_NO_MEMORY; + + goto out; + } + + resp_buf = &tbl_alloc.buf.resp_buf; + + /* Retrieve information of the allocated table */ + tbl->id = LE16_TO_CPU(resp_buf->alloc_id); + tbl->first_tcam = resp_buf->ops.table.first_tcam; + tbl->last_tcam = resp_buf->ops.table.last_tcam; + tbl->first_entry = LE16_TO_CPU(resp_buf->first_entry); + tbl->last_entry = LE16_TO_CPU(resp_buf->last_entry); + + tbl->info = *params; + tbl->info.width = width; + tbl->info.depth = depth; + hw->acl_tbl = tbl; + + for (i = 0; i < ICE_AQC_MAX_ACTION_MEMORIES; i++) + tbl->act_mems[i].act_mem = resp_buf->act_mem[i]; + + /* Figure out which TCAMs that these newly allocated action memories + * belong to. + */ + ice_acl_divide_act_mems_to_tcams(tbl); + + /* Initialize the resources allocated by invalidating all TCAM entries + * and all the action pairs + */ + status = ice_acl_init_tbl(hw); + if (status) { + ice_free(hw, tbl); + hw->acl_tbl = NULL; + ice_debug(hw, ICE_DBG_ACL, + "Initialization of TCAM entries failed. status: %d\n", + status); + goto out; + } + + first_e = (tbl->first_tcam * ICE_AQC_MAX_TCAM_ALLOC_UNITS) + + (tbl->first_entry / ICE_ACL_ENTRY_ALLOC_UNIT); + last_e = (tbl->last_tcam * ICE_AQC_MAX_TCAM_ALLOC_UNITS) + + (tbl->last_entry / ICE_ACL_ENTRY_ALLOC_UNIT); + + /* Indicate available entries in the table */ + for (i = first_e; i <= last_e; i++) + ice_set_bit(i, tbl->avail); + + INIT_LIST_HEAD(&tbl->scens); +out: + + return status; +} + +/** + * ice_acl_alloc_partition - Allocate a partition from the ACL table + * @hw: pointer to the hardware structure + * @req: info of partition being allocated + */ +static enum ice_status +ice_acl_alloc_partition(struct ice_hw *hw, struct ice_acl_scen *req) +{ + u16 start = 0, cnt = 0, off = 0; + u16 width, r_entries, row; + bool done = false; + int dir; + + /* Determine the number of TCAMs each entry overlaps */ + width = DIVIDE_AND_ROUND_UP(req->width, ICE_AQC_ACL_KEY_WIDTH_BYTES); + + /* Check if we have enough TCAMs to accommodate the width */ + if (width > hw->acl_tbl->last_tcam - hw->acl_tbl->first_tcam + 1) + return ICE_ERR_MAX_LIMIT; + + /* Number of entries must be multiple of ICE_ACL_ENTRY_ALLOC_UNIT's */ + r_entries = ICE_ALIGN(req->num_entry, ICE_ACL_ENTRY_ALLOC_UNIT); + + /* To look for an available partition that can accommodate the request, + * the process first logically arranges available TCAMs in rows such + * that each row produces entries with the requested width. It then + * scans the TCAMs' available bitmap, one bit at a time, and + * accumulates contiguous available 64-entry chunks until there are + * enough of them or when all TCAM configurations have been checked. + * + * For width of 1 TCAM, the scanning process starts from the top most + * TCAM, and goes downward. Available bitmaps are examined from LSB + * to MSB. + * + * For width of multiple TCAMs, the process starts from the bottom-most + * row of TCAMs, and goes upward. Available bitmaps are examined from + * the MSB to the LSB. + * + * To make sure that adjacent TCAMs can be logically arranged in the + * same row, the scanning process may have multiple passes. In each + * pass, the first TCAM of the bottom-most row is displaced by one + * additional TCAM. The width of the row and the number of the TCAMs + * available determine the number of passes. 
When the displacement is + * more than the size of width, the TCAM row configurations will + * repeat. The process will terminate when the configurations repeat. + * + * Available partitions can span more than one row of TCAMs. + */ + if (width == 1) { + row = hw->acl_tbl->first_tcam; + dir = 1; + } else { + /* Start with the bottom-most row, and scan for available + * entries upward + */ + row = hw->acl_tbl->last_tcam + 1 - width; + dir = -1; + } + + do { + u16 i; + + /* Scan all 64-entry chunks, one chunk at a time, in the + * current TCAM row + */ + for (i = 0; + i < ICE_AQC_MAX_TCAM_ALLOC_UNITS && cnt < r_entries; + i++) { + bool avail = true; + u16 w, p; + + /* Compute the cumulative available mask across the + * TCAM row to determine if the current 64-entry chunk + * is available. + */ + p = dir > 0 ? i : ICE_AQC_MAX_TCAM_ALLOC_UNITS - i - 1; + for (w = row; w < row + width && avail; w++) { + u16 b; + + b = (w * ICE_AQC_MAX_TCAM_ALLOC_UNITS) + p; + avail &= ice_is_bit_set(hw->acl_tbl->avail, b); + } + + if (!avail) { + cnt = 0; + } else { + /* Compute the starting index of the newly + * found partition. When 'dir' is negative, the + * scan processes is going upward. If so, the + * starting index needs to be updated for every + * available 64-entry chunk found. + */ + if (!cnt || dir < 0) + start = (row * ICE_AQC_ACL_TCAM_DEPTH) + + (p * ICE_ACL_ENTRY_ALLOC_UNIT); + cnt += ICE_ACL_ENTRY_ALLOC_UNIT; + } + } + + if (cnt >= r_entries) { + req->start = start; + req->num_entry = r_entries; + req->end = ice_acl_tbl_calc_end_idx(start, r_entries, + width); + break; + } + + row = (dir > 0) ? (row + width) : (row - width); + if (row > hw->acl_tbl->last_tcam || + row < hw->acl_tbl->first_tcam) { + /* All rows have been checked. Increment 'off' that + * will help yield a different TCAM configuration in + * which adjacent TCAMs can be alternatively in the + * same row. + */ + off++; + + /* However, if the new 'off' value yields previously + * checked configurations, then exit. + */ + if (off >= width) + done = true; + else + row = dir > 0 ? off : + hw->acl_tbl->last_tcam + 1 - off - + width; + } + } while (!done); + + return cnt >= r_entries ? ICE_SUCCESS : ICE_ERR_MAX_LIMIT; +} + +/** + * ice_acl_fill_tcam_select + * @scen_buf: Pointer to the scenario buffer that needs to be populated + * @scen: Pointer to the available space for the scenario + * @tcam_idx: Index of the TCAM used for this scenario + * @tcam_idx_in_cascade : Local index of the TCAM in the cascade scenario + * + * For all TCAM that participate in this scenario, fill out the tcam_select + * value. + */ +static void +ice_acl_fill_tcam_select(struct ice_aqc_acl_scen *scen_buf, + struct ice_acl_scen *scen, u16 tcam_idx, + u16 tcam_idx_in_cascade) +{ + u16 cascade_cnt, idx; + u8 j; + + idx = tcam_idx_in_cascade * ICE_AQC_ACL_KEY_WIDTH_BYTES; + cascade_cnt = DIVIDE_AND_ROUND_UP(scen->width, + ICE_AQC_ACL_KEY_WIDTH_BYTES); + + /* For each scenario, we reserved last three bytes of scenario width for + * profile ID, range checker, and packet direction. Thus, the last three + * bytes of the last cascaded TCAMs will have value of 1st, 31st and + * 32nd byte location of BYTE selection base. 
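+ * (These correspond to the ICE_AQC_ACL_BYTE_SEL_BASE_PKT_DIR, _PID
+ * and _RNG_CHK select values assigned to the last cascaded TCAM
+ * below.)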
+ * + * For other bytes in the TCAMs: + * For non-cascade mode (1 TCAM wide) scenario, TCAM[x]'s Select {0-1} + * select indices 0-1 of the Byte Selection Base + * For cascade mode, the leftmost TCAM of the first cascade row selects + * indices 0-4 of the Byte Selection Base; the second TCAM in the + * cascade row selects indices starting with 5-n + */ + for (j = 0; j < ICE_AQC_ACL_KEY_WIDTH_BYTES; j++) { + /* PKT DIR uses the 1st location of Byte Selection Base: + 1 */ + u8 val = ICE_AQC_ACL_BYTE_SEL_BASE + 1 + idx; + + if (tcam_idx_in_cascade == cascade_cnt - 1) { + if (j == ICE_ACL_SCEN_RNG_CHK_IDX_IN_TCAM) + val = ICE_AQC_ACL_BYTE_SEL_BASE_RNG_CHK; + else if (j == ICE_ACL_SCEN_PID_IDX_IN_TCAM) + val = ICE_AQC_ACL_BYTE_SEL_BASE_PID; + else if (j == ICE_ACL_SCEN_PKT_DIR_IDX_IN_TCAM) + val = ICE_AQC_ACL_BYTE_SEL_BASE_PKT_DIR; + } + + /* In case that scenario's width is greater than the width of + * the Byte selection base, we will not assign a value to the + * tcam_select[j]. As a result, the tcam_select[j] will have + * default value which is zero. + */ + if (val > ICE_AQC_ACL_BYTE_SEL_BASE_RNG_CHK) + continue; + + scen_buf->tcam_cfg[tcam_idx].tcam_select[j] = val; + + idx++; + } +} + +/** + * ice_acl_set_scen_chnk_msk + * @scen_buf: Pointer to the scenario buffer that needs to be populated + * @scen: pointer to the available space for the scenario + * + * Set the chunk mask for the entries that will be used by this scenario + */ +static void +ice_acl_set_scen_chnk_msk(struct ice_aqc_acl_scen *scen_buf, + struct ice_acl_scen *scen) +{ + u16 tcam_idx, num_cscd, units, cnt; + u8 chnk_offst; + + /* Determine the starting TCAM index and offset of the start entry */ + tcam_idx = ICE_ACL_TBL_TCAM_IDX(scen->start); + chnk_offst = (u8)((scen->start % ICE_AQC_ACL_TCAM_DEPTH) / + ICE_ACL_ENTRY_ALLOC_UNIT); + + /* Entries are allocated and tracked in multiple of 64's */ + units = scen->num_entry / ICE_ACL_ENTRY_ALLOC_UNIT; + + /* Determine number of cascaded TCAMs */ + num_cscd = scen->width / ICE_AQC_ACL_KEY_WIDTH_BYTES; + + for (cnt = 0; cnt < units; cnt++) { + u16 i; + + /* Set the corresponding bitmap of individual 64-entry + * chunk spans across a cascade of 1 or more TCAMs + * For each TCAM, there will be (ICE_AQC_ACL_TCAM_DEPTH + * / ICE_ACL_ENTRY_ALLOC_UNIT) or 8 chunks. + */ + for (i = tcam_idx; i < tcam_idx + num_cscd; i++) + scen_buf->tcam_cfg[i].chnk_msk |= BIT(chnk_offst); + + chnk_offst = (chnk_offst + 1) % ICE_AQC_MAX_TCAM_ALLOC_UNITS; + if (!chnk_offst) + tcam_idx += num_cscd; + } +} + +/** + * ice_acl_assign_act_mem_for_scen + * @tbl: pointer to acl table structure + * @scen: pointer to the scenario struct + * @scen_buf: pointer to the available space for the scenario + * @current_tcam_idx: theoretical index of the TCAM that we associated those + * action memory banks with, at the table creation time. + * @target_tcam_idx: index of the TCAM that we want to associate those action + * memory banks with. 
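+ *
+ * When TCAMs are cascaded, the caller passes the index of the last
+ * TCAM in the cascade as target_tcam_idx, so every action memory of
+ * the cascade ends up following that TCAM (see ice_acl_create_scen()).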
+ */ +static void +ice_acl_assign_act_mem_for_scen(struct ice_acl_tbl *tbl, + struct ice_acl_scen *scen, + struct ice_aqc_acl_scen *scen_buf, + u8 current_tcam_idx, + u8 target_tcam_idx) +{ + u8 i; + + for (i = 0; i < ICE_AQC_MAX_ACTION_MEMORIES; i++) { + struct ice_acl_act_mem *p_mem = &tbl->act_mems[i]; + + if (p_mem->act_mem == ICE_ACL_ACT_PAIR_MEM_INVAL || + p_mem->member_of_tcam != current_tcam_idx) + continue; + + scen_buf->act_mem_cfg[i] = target_tcam_idx; + scen_buf->act_mem_cfg[i] |= ICE_AQC_ACL_SCE_ACT_MEM_EN; + ice_set_bit(i, scen->act_mem_bitmap); + } +} + +/** + * ice_acl_commit_partition - Indicate if the specified partition is active + * @hw: pointer to the hardware structure + * @scen: pointer to the scenario struct + * @commit: true if the partition is being commit + */ +static void +ice_acl_commit_partition(struct ice_hw *hw, struct ice_acl_scen *scen, + bool commit) +{ + u16 tcam_idx, off, num_cscd, units, cnt; + + /* Determine the starting TCAM index and offset of the start entry */ + tcam_idx = ICE_ACL_TBL_TCAM_IDX(scen->start); + off = (scen->start % ICE_AQC_ACL_TCAM_DEPTH) / + ICE_ACL_ENTRY_ALLOC_UNIT; + + /* Entries are allocated and tracked in multiple of 64's */ + units = scen->num_entry / ICE_ACL_ENTRY_ALLOC_UNIT; + + /* Determine number of cascaded TCAM */ + num_cscd = scen->width / ICE_AQC_ACL_KEY_WIDTH_BYTES; + + for (cnt = 0; cnt < units; cnt++) { + u16 w; + + /* Set/clear the corresponding bitmap of individual 64-entry + * chunk spans across a row of 1 or more TCAMs + */ + for (w = 0; w < num_cscd; w++) { + u16 b; + + b = ((tcam_idx + w) * ICE_AQC_MAX_TCAM_ALLOC_UNITS) + + off; + if (commit) + ice_set_bit(b, hw->acl_tbl->avail); + else + ice_clear_bit(b, hw->acl_tbl->avail); + } + + off = (off + 1) % ICE_AQC_MAX_TCAM_ALLOC_UNITS; + if (!off) + tcam_idx += num_cscd; + } +} + +/** + * ice_acl_create_scen + * @hw: pointer to the hardware structure + * @match_width: number of bytes to be matched in this scenario + * @num_entries: number of entries to be allocated for the scenario + * @scen_id: holds returned scenario ID if successful + */ +enum ice_status +ice_acl_create_scen(struct ice_hw *hw, u16 match_width, u16 num_entries, + u16 *scen_id) +{ + u8 cascade_cnt, first_tcam, last_tcam, i, k; + struct ice_aqc_acl_scen scen_buf; + struct ice_acl_scen *scen; + enum ice_status status; + + if (!hw->acl_tbl) + return ICE_ERR_DOES_NOT_EXIST; + + scen = (struct ice_acl_scen *)ice_malloc(hw, sizeof(*scen)); + if (!scen) + return ICE_ERR_NO_MEMORY; + + scen->start = hw->acl_tbl->first_entry; + scen->width = ICE_AQC_ACL_KEY_WIDTH_BYTES * + DIVIDE_AND_ROUND_UP(match_width, ICE_AQC_ACL_KEY_WIDTH_BYTES); + scen->num_entry = num_entries; + + status = ice_acl_alloc_partition(hw, scen); + if (status) { + ice_free(hw, scen); + return status; + } + + ice_memset(&scen_buf, 0, sizeof(scen_buf), ICE_NONDMA_MEM); + + /* Determine the number of cascade TCAMs, given the scenario's width */ + cascade_cnt = DIVIDE_AND_ROUND_UP(scen->width, + ICE_AQC_ACL_KEY_WIDTH_BYTES); + first_tcam = ICE_ACL_TBL_TCAM_IDX(scen->start); + last_tcam = ICE_ACL_TBL_TCAM_IDX(scen->end); + + /* For each scenario, we reserved last three bytes of scenario width for + * packet direction flag, profile ID and range checker. Thus, we want to + * return back to the caller the eff_width, pkt_dir_idx, rng_chk_idx and + * pid_idx. 
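+ *
+ * For example, assuming ICE_ACL_SCEN_MIN_WIDTH covers exactly those
+ * three reserved bytes, a two-TCAM cascade has a 10-byte key of which
+ * 7 bytes carry match data, with the three reserved indices falling
+ * in the last three bytes of the key.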
+ */ + scen->eff_width = cascade_cnt * ICE_AQC_ACL_KEY_WIDTH_BYTES - + ICE_ACL_SCEN_MIN_WIDTH; + scen->rng_chk_idx = (cascade_cnt - 1) * ICE_AQC_ACL_KEY_WIDTH_BYTES + + ICE_ACL_SCEN_RNG_CHK_IDX_IN_TCAM; + scen->pid_idx = (cascade_cnt - 1) * ICE_AQC_ACL_KEY_WIDTH_BYTES + + ICE_ACL_SCEN_PID_IDX_IN_TCAM; + scen->pkt_dir_idx = (cascade_cnt - 1) * ICE_AQC_ACL_KEY_WIDTH_BYTES + + ICE_ACL_SCEN_PKT_DIR_IDX_IN_TCAM; + + /* set the chunk mask for the tcams */ + ice_acl_set_scen_chnk_msk(&scen_buf, scen); + + /* set the TCAM select and start_cmp and start_set bits */ + k = first_tcam; + /* set the START_SET bit at the beginning of the stack */ + scen_buf.tcam_cfg[k].start_cmp_set |= ICE_AQC_ACL_ALLOC_SCE_START_SET; + while (k <= last_tcam) { + u8 last_tcam_idx_cascade = cascade_cnt + k - 1; + + /* set start_cmp for the first cascaded TCAM */ + scen_buf.tcam_cfg[k].start_cmp_set |= + ICE_AQC_ACL_ALLOC_SCE_START_CMP; + + /* cascade TCAMs up to the width of the scenario */ + for (i = k; i < cascade_cnt + k; i++) { + ice_acl_fill_tcam_select(&scen_buf, scen, i, i - k); + ice_acl_assign_act_mem_for_scen(hw->acl_tbl, scen, + &scen_buf, + i, + last_tcam_idx_cascade); + } + + k = i; + } + + /* We need to set the start_cmp bit for the unused TCAMs. */ + i = 0; + while (i < first_tcam) + scen_buf.tcam_cfg[i++].start_cmp_set = + ICE_AQC_ACL_ALLOC_SCE_START_CMP; + + i = last_tcam + 1; + while (i < ICE_AQC_ACL_SLICES) + scen_buf.tcam_cfg[i++].start_cmp_set = + ICE_AQC_ACL_ALLOC_SCE_START_CMP; + + status = ice_aq_alloc_acl_scen(hw, scen_id, &scen_buf, NULL); + if (status) { + ice_debug(hw, ICE_DBG_ACL, + "AQ allocation of ACL scenario failed. status: %d\n", + status); + ice_free(hw, scen); + return status; + } + + scen->id = *scen_id; + ice_acl_commit_partition(hw, scen, false); + ice_acl_init_entry(scen); + LIST_ADD(&scen->list_entry, &hw->acl_tbl->scens); + + return status; +} + +/** + * ice_acl_destroy_tbl - Destroy a previously created LEM table for ACL + * @hw: pointer to the HW struct + */ +enum ice_status ice_acl_destroy_tbl(struct ice_hw *hw) +{ + struct ice_acl_scen *pos_scen, *tmp_scen; + struct ice_aqc_acl_generic resp_buf; + struct ice_aqc_acl_scen buf; + enum ice_status status; + u8 i; + + if (!hw->acl_tbl) + return ICE_ERR_DOES_NOT_EXIST; + + /* Mark all the created scenario's TCAM to stop the packet lookup and + * delete them afterward + */ + LIST_FOR_EACH_ENTRY_SAFE(pos_scen, tmp_scen, &hw->acl_tbl->scens, + ice_acl_scen, list_entry) { + status = ice_aq_query_acl_scen(hw, pos_scen->id, &buf, NULL); + if (status) { + ice_debug(hw, ICE_DBG_ACL, "ice_aq_query_acl_scen() failed. status: %d\n", + status); + return status; + } + + for (i = 0; i < ICE_AQC_ACL_SLICES; i++) { + buf.tcam_cfg[i].chnk_msk = 0; + buf.tcam_cfg[i].start_cmp_set = + ICE_AQC_ACL_ALLOC_SCE_START_CMP; + } + + for (i = 0; i < ICE_AQC_MAX_ACTION_MEMORIES; i++) + buf.act_mem_cfg[i] = 0; + + status = ice_aq_update_acl_scen(hw, pos_scen->id, &buf, NULL); + if (status) { + ice_debug(hw, ICE_DBG_ACL, "ice_aq_update_acl_scen() failed. status: %d\n", + status); + return status; + } + + status = ice_acl_destroy_scen(hw, pos_scen->id); + if (status) { + ice_debug(hw, ICE_DBG_ACL, "deletion of scenario failed. status: %d\n", + status); + return status; + } + } + + /* call the aq command to destroy the ACL table */ + status = ice_aq_dealloc_acl_tbl(hw, hw->acl_tbl->id, &resp_buf, NULL); + + if (status) { + ice_debug(hw, ICE_DBG_ACL, + "AQ de-allocation of ACL failed. 
status: %d\n", + status); + return status; + } + + ice_free(hw, hw->acl_tbl); + hw->acl_tbl = NULL; + + return ICE_SUCCESS; +} + +/** + * ice_acl_add_entry - Add a flow entry to an ACL scenario + * @hw: pointer to the HW struct + * @scen: scenario to add the entry to + * @prior: priority level of the entry being added + * @keys: buffer of the value of the key to be programmed to the ACL entry + * @inverts: buffer of the value of the key inverts to be programmed + * @acts: pointer to a buffer containing formatted actions + * @acts_cnt: indicates the number of actions stored in "acts" + * @entry_idx: returned scenario relative index of the added flow entry + * + * Given an ACL table and a scenario, to add the specified key and key invert + * to an available entry in the specified scenario. + * The "keys" and "inverts" buffers must be of the size which is the same as + * the scenario's width + */ +enum ice_status +ice_acl_add_entry(struct ice_hw *hw, struct ice_acl_scen *scen, + enum ice_acl_entry_prior prior, u8 *keys, u8 *inverts, + struct ice_acl_act_entry *acts, u8 acts_cnt, u16 *entry_idx) +{ + u8 i, entry_tcam, num_cscd, idx, offset; + struct ice_aqc_acl_data buf; + enum ice_status status = ICE_SUCCESS; + + if (!scen) + return ICE_ERR_DOES_NOT_EXIST; + + *entry_idx = ice_acl_scen_assign_entry_idx(scen, prior); + if (*entry_idx >= scen->num_entry) { + *entry_idx = 0; + return ICE_ERR_MAX_LIMIT; + } + + /* Determine number of cascaded TCAMs */ + num_cscd = DIVIDE_AND_ROUND_UP(scen->width, + ICE_AQC_ACL_KEY_WIDTH_BYTES); + + entry_tcam = ICE_ACL_TBL_TCAM_IDX(scen->start); + idx = ICE_ACL_TBL_TCAM_ENTRY_IDX(scen->start + *entry_idx); + + ice_memset(&buf, 0, sizeof(buf), ICE_NONDMA_MEM); + for (i = 0; i < num_cscd; i++) { + /* If the key spans more than one TCAM in the case of cascaded + * TCAMs, the key and key inverts need to be properly split + * among TCAMs.E.g.bytes 0 - 4 go to an index in the first TCAM + * and bytes 5 - 9 go to the same index in the next TCAM, etc. + * If the entry spans more than one TCAM in a cascaded TCAM + * mode, the programming of the entries in the TCAMs must be in + * reversed order - the TCAM entry of the rightmost TCAM should + * be programmed first; the TCAM entry of the leftmost TCAM + * should be programmed last. 
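+ *
+ * E.g. with two cascaded TCAMs, iteration i = 0 programs key bytes
+ * 5..9 into TCAM (entry_tcam + 1) and iteration i = 1 programs key
+ * bytes 0..4 into TCAM entry_tcam, both at the same entry index.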
+ */ + offset = num_cscd - i - 1; + ice_memcpy(&buf.entry_key.val, + &keys[offset * sizeof(buf.entry_key.val)], + sizeof(buf.entry_key.val), ICE_NONDMA_TO_NONDMA); + ice_memcpy(&buf.entry_key_invert.val, + &inverts[offset * sizeof(buf.entry_key_invert.val)], + sizeof(buf.entry_key_invert.val), + ICE_NONDMA_TO_NONDMA); + status = ice_aq_program_acl_entry(hw, entry_tcam + offset, idx, + &buf, NULL); + if (status) { + ice_debug(hw, ICE_DBG_ACL, + "aq program acl entry failed status: %d\n", + status); + goto out; + } + } + + /* Program the action memory */ + status = ice_acl_prog_act(hw, scen, acts, acts_cnt, *entry_idx); + +out: + if (status) { + ice_acl_rem_entry(hw, scen, *entry_idx); + *entry_idx = 0; + } + + return status; +} + +/** + * ice_acl_prog_act - Program a scenario's action memory + * @hw: pointer to the HW struct + * @scen: scenario to add the entry to + * @acts: pointer to a buffer containing formatted actions + * @acts_cnt: indicates the number of actions stored in "acts" + * @entry_idx: scenario relative index of the added flow entry + * + * Program a scenario's action memory + */ +enum ice_status +ice_acl_prog_act(struct ice_hw *hw, struct ice_acl_scen *scen, + struct ice_acl_act_entry *acts, u8 acts_cnt, + u16 entry_idx) +{ + u8 entry_tcam, num_cscd, i, actx_idx = 0; + struct ice_aqc_actpair act_buf; + enum ice_status status = ICE_SUCCESS; + u16 idx; + + if (entry_idx >= scen->num_entry) + return ICE_ERR_MAX_LIMIT; + + ice_memset(&act_buf, 0, sizeof(act_buf), ICE_NONDMA_MEM); + + /* Determine number of cascaded TCAMs */ + num_cscd = DIVIDE_AND_ROUND_UP(scen->width, + ICE_AQC_ACL_KEY_WIDTH_BYTES); + + entry_tcam = ICE_ACL_TBL_TCAM_IDX(scen->start); + idx = ICE_ACL_TBL_TCAM_ENTRY_IDX(scen->start + entry_idx); + + i = ice_find_first_bit(scen->act_mem_bitmap, + ICE_AQC_MAX_ACTION_MEMORIES); + while (i < ICE_AQC_MAX_ACTION_MEMORIES) { + struct ice_acl_act_mem *mem = &hw->acl_tbl->act_mems[i]; + + if (actx_idx >= acts_cnt) + break; + if (mem->member_of_tcam >= entry_tcam && + mem->member_of_tcam < entry_tcam + num_cscd) { + ice_memcpy(&act_buf.act[0], &acts[actx_idx], + sizeof(struct ice_acl_act_entry), + ICE_NONDMA_TO_NONDMA); + + if (++actx_idx < acts_cnt) { + ice_memcpy(&act_buf.act[1], &acts[actx_idx], + sizeof(struct ice_acl_act_entry), + ICE_NONDMA_TO_NONDMA); + } + + status = ice_aq_program_actpair(hw, i, idx, &act_buf, + NULL); + if (status) { + ice_debug(hw, ICE_DBG_ACL, + "program actpair failed status: %d\n", + status); + break; + } + actx_idx++; + } + + i = ice_find_next_bit(scen->act_mem_bitmap, + ICE_AQC_MAX_ACTION_MEMORIES, i + 1); + } + + if (!status && actx_idx < acts_cnt) + status = ICE_ERR_MAX_LIMIT; + + return status; +} + +/** + * ice_acl_rem_entry - Remove a flow entry from an ACL scenario + * @hw: pointer to the HW struct + * @scen: scenario to remove the entry from + * @entry_idx: the scenario-relative index of the flow entry being removed + */ +enum ice_status +ice_acl_rem_entry(struct ice_hw *hw, struct ice_acl_scen *scen, u16 entry_idx) +{ + struct ice_aqc_actpair act_buf; + struct ice_aqc_acl_data buf; + u8 entry_tcam, num_cscd, i; + enum ice_status status = ICE_SUCCESS; + u16 idx; + + if (!scen) + return ICE_ERR_DOES_NOT_EXIST; + + if (entry_idx >= scen->num_entry) + return ICE_ERR_MAX_LIMIT; + + if (!ice_is_bit_set(scen->entry_bitmap, entry_idx)) + return ICE_ERR_DOES_NOT_EXIST; + + /* Determine number of cascaded TCAMs */ + num_cscd = DIVIDE_AND_ROUND_UP(scen->width, + ICE_AQC_ACL_KEY_WIDTH_BYTES); + + entry_tcam = ICE_ACL_TBL_TCAM_IDX(scen->start); 
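+ /* Convert the scenario-relative entry index to an index within the TCAM row */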
+ idx = ICE_ACL_TBL_TCAM_ENTRY_IDX(scen->start + entry_idx); + + /* invalidate the flow entry */ + ice_memset(&buf, 0, sizeof(buf), ICE_NONDMA_MEM); + for (i = 0; i < num_cscd; i++) { + status = ice_aq_program_acl_entry(hw, entry_tcam + i, idx, &buf, + NULL); + if (status) + ice_debug(hw, ICE_DBG_ACL, + "aq program acl entry failed status: %d\n", + status); + } + + ice_memset(&act_buf, 0, sizeof(act_buf), ICE_NONDMA_MEM); + i = ice_find_first_bit(scen->act_mem_bitmap, + ICE_AQC_MAX_ACTION_MEMORIES); + while (i < ICE_AQC_MAX_ACTION_MEMORIES) { + struct ice_acl_act_mem *mem = &hw->acl_tbl->act_mems[i]; + + if (mem->member_of_tcam >= entry_tcam && + mem->member_of_tcam < entry_tcam + num_cscd) { + /* Invalidate allocated action pairs */ + status = ice_aq_program_actpair(hw, i, idx, &act_buf, + NULL); + if (status) + ice_debug(hw, ICE_DBG_ACL, + "program actpair failed.status: %d\n", + status); + } + + i = ice_find_next_bit(scen->act_mem_bitmap, + ICE_AQC_MAX_ACTION_MEMORIES, i + 1); + } + + ice_acl_scen_free_entry_idx(scen, entry_idx); + + return status; +} + +/** + * ice_acl_destroy_scen - Destroy an ACL scenario + * @hw: pointer to the HW struct + * @scen_id: ID of the remove scenario + */ +enum ice_status ice_acl_destroy_scen(struct ice_hw *hw, u16 scen_id) +{ + struct ice_acl_scen *scen, *tmp_scen; + struct ice_flow_prof *p, *tmp; + enum ice_status status; + + if (!hw->acl_tbl) + return ICE_ERR_DOES_NOT_EXIST; + + /* Remove profiles that use "scen_id" scenario */ + LIST_FOR_EACH_ENTRY_SAFE(p, tmp, &hw->fl_profs[ICE_BLK_ACL], + ice_flow_prof, l_entry) + if (p->cfg.scen && p->cfg.scen->id == scen_id) { + status = ice_flow_rem_prof(hw, ICE_BLK_ACL, p->id); + if (status) { + ice_debug(hw, ICE_DBG_ACL, + "ice_flow_rem_prof failed. status: %d\n", + status); + goto exit; + } + } + + /* Call the aq command to destroy the targeted scenario */ + status = ice_aq_dealloc_acl_scen(hw, scen_id, NULL); + + if (status) { + ice_debug(hw, ICE_DBG_ACL, + "AQ de-allocation of scenario failed. 
status: %d\n", + status); + goto exit; + } + + /* Remove scenario from hw->acl_tbl->scens */ + LIST_FOR_EACH_ENTRY_SAFE(scen, tmp_scen, &hw->acl_tbl->scens, + ice_acl_scen, list_entry) + if (scen->id == scen_id) { + LIST_DEL(&scen->list_entry); + ice_free(hw, scen); + } +exit: + return status; +} diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h index 73f5e7090..2c899b90a 100644 --- a/drivers/net/ice/base/ice_adminq_cmd.h +++ b/drivers/net/ice/base/ice_adminq_cmd.h @@ -419,6 +419,7 @@ struct ice_aqc_vsi_props { #define ICE_AQ_VSI_PROP_RXQ_MAP_VALID BIT(6) #define ICE_AQ_VSI_PROP_Q_OPT_VALID BIT(7) #define ICE_AQ_VSI_PROP_OUTER_UP_VALID BIT(8) +#define ICE_AQ_VSI_PROP_ACL_VALID BIT(10) #define ICE_AQ_VSI_PROP_FLOW_DIR_VALID BIT(11) #define ICE_AQ_VSI_PROP_PASID_VALID BIT(12) /* switch section */ @@ -534,8 +535,16 @@ struct ice_aqc_vsi_props { u8 q_opt_reserved[3]; /* outer up section */ __le32 outer_up_table; /* same structure and defines as ingress tbl */ - /* section 10 */ - __le16 sect_10_reserved; + /* acl section */ + __le16 acl_def_act; +#define ICE_AQ_VSI_ACL_DEF_RX_PROF_S 0 +#define ICE_AQ_VSI_ACL_DEF_RX_PROF_M (0xF << ICE_AQ_VSI_ACL_DEF_RX_PROF_S) +#define ICE_AQ_VSI_ACL_DEF_RX_TABLE_S 4 +#define ICE_AQ_VSI_ACL_DEF_RX_TABLE_M (0xF << ICE_AQ_VSI_ACL_DEF_RX_TABLE_S) +#define ICE_AQ_VSI_ACL_DEF_TX_PROF_S 8 +#define ICE_AQ_VSI_ACL_DEF_TX_PROF_M (0xF << ICE_AQ_VSI_ACL_DEF_TX_PROF_S) +#define ICE_AQ_VSI_ACL_DEF_TX_TABLE_S 12 +#define ICE_AQ_VSI_ACL_DEF_TX_TABLE_M (0xF << ICE_AQ_VSI_ACL_DEF_TX_TABLE_S) /* flow director section */ __le16 fd_options; #define ICE_AQ_VSI_FD_ENABLE BIT(0) @@ -1694,6 +1703,7 @@ struct ice_aqc_nvm { #define ICE_AQC_NVM_ACTIV_SEL_OROM BIT(4) #define ICE_AQC_NVM_ACTIV_SEL_NETLIST BIT(5) #define ICE_AQC_NVM_SPECIAL_UPDATE BIT(6) +#define ICE_AQC_NVM_REVERT_LAST_ACTIV BIT(6) /* Write Activate only */ #define ICE_AQC_NVM_ACTIV_SEL_MASK MAKEMASK(0x7, 3) #define ICE_AQC_NVM_FLASH_ONLY BIT(7) __le16 module_typeid; @@ -2010,6 +2020,418 @@ struct ice_aqc_clear_fd_table { u8 reserved[12]; }; +/* ACL - allocate (indirect 0x0C10) table */ +#define ICE_AQC_ACL_KEY_WIDTH 40 +#define ICE_AQC_ACL_KEY_WIDTH_BYTES 5 +#define ICE_AQC_ACL_TCAM_DEPTH 512 +#define ICE_ACL_ENTRY_ALLOC_UNIT 64 +#define ICE_AQC_MAX_CONCURRENT_ACL_TBL 15 +#define ICE_AQC_MAX_ACTION_MEMORIES 20 +#define ICE_AQC_MAX_ACTION_ENTRIES 512 +#define ICE_AQC_ACL_SLICES 16 +#define ICE_AQC_ALLOC_ID_LESS_THAN_4K 0x1000 +/* The ACL block supports up to 8 actions per a single output. */ +#define ICE_AQC_TBL_MAX_ACTION_PAIRS 4 + +#define ICE_AQC_MAX_TCAM_ALLOC_UNITS (ICE_AQC_ACL_TCAM_DEPTH / \ + ICE_ACL_ENTRY_ALLOC_UNIT) +#define ICE_AQC_ACL_ALLOC_UNITS (ICE_AQC_ACL_SLICES * \ + ICE_AQC_MAX_TCAM_ALLOC_UNITS) + +struct ice_aqc_acl_alloc_table { + __le16 table_width; + __le16 table_depth; + u8 act_pairs_per_entry; + /* For non-concurrent table allocation, this field needs + * to be set to zero(0) otherwise it shall specify the + * amount of concurrent tables whose AllocIDs are + * specified in buffer. Thus the newly allocated table + * is concurrent with table IDs specified in AllocIDs. + */ +#define ICE_AQC_ACL_ALLOC_TABLE_TYPE_NONCONCURR 0 + u8 table_type; + __le16 reserved; + __le32 addr_high; + __le32 addr_low; +}; + +/* Allocate ACL table command buffer format */ +struct ice_aqc_acl_alloc_table_data { + /* Dependent table AllocIDs. Each word in this 15 word array specifies + * a dependent table AllocID according to the amount specified in the + * "table_type" field. 
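+ * E.g. a table allocated concurrently with two existing tables
+ * carries those two AllocIDs in words 0-1 of this array.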
All unused words shall be set to 0xFFFF + */ +#define ICE_AQC_CONCURR_ID_INVALID 0xffff + __le16 alloc_ids[ICE_AQC_MAX_CONCURRENT_ACL_TBL]; +}; + +/* ACL - deallocate (indirect 0x0C11) table + * ACL - allocate (indirect 0x0C12) action-pair + * ACL - deallocate (indirect 0x0C13) action-pair + */ + +/* Following structure is common and used in case of deallocation + * of ACL table and action-pair + */ +struct ice_aqc_acl_tbl_actpair { + /* Alloc ID of the table being released */ + __le16 alloc_id; + u8 reserved[6]; + __le32 addr_high; + __le32 addr_low; +}; + +/* This response structure is same in case of alloc/dealloc table, + * alloc/dealloc action-pair + */ +struct ice_aqc_acl_generic { + /* if alloc_id is below 0x1000 then alllocation failed due to + * unavailable resources, else this is set by FW to identify + * table allocation + */ + __le16 alloc_id; + + union { + /* to be used only in case of alloc/dealloc table */ + struct { + /* Index of the first TCAM block, otherwise set to 0xFF + * for a failed allocation + */ + u8 first_tcam; + /* Index of the last TCAM block. This index shall be + * set to the value of first_tcam for single TCAM block + * allocation, otherwise set to 0xFF for a failed + * allocation + */ + u8 last_tcam; + } table; + /* reserved in case of alloc/dealloc action-pair */ + struct { + __le16 reserved; + } act_pair; + } ops; + + /* index of first entry (in both TCAM and action memories), + * otherwise set to 0xFF for a failed allocation + */ + __le16 first_entry; + /* index of last entry (in both TCAM and action memories), + * otherwise set to 0xFF for a failed allocation + */ + __le16 last_entry; + + /* Each act_mem element specifies the order of the memory + * otherwise 0xFF + */ + u8 act_mem[ICE_AQC_MAX_ACTION_MEMORIES]; +}; + +/* ACL - allocate (indirect 0x0C14) scenario. This command doesn't have separate + * response buffer since original command buffer gets updated with + * 'scen_id' in case of success + */ +struct ice_aqc_acl_alloc_scen { + union { + struct { + u8 reserved[8]; + } cmd; + struct { + __le16 scen_id; + u8 reserved[6]; + } resp; + } ops; + __le32 addr_high; + __le32 addr_low; +}; + +/* ACL - de-allocate (direct 0x0C15) scenario. This command doesn't need + * separate response buffer since nothing to be returned as a response + * except status. + */ +struct ice_aqc_acl_dealloc_scen { + __le16 scen_id; + u8 reserved[14]; +}; + +/* ACL - update (direct 0x0C1B) scenario */ +/* ACL - query (direct 0x0C23) scenario */ +struct ice_aqc_acl_update_query_scen { + __le16 scen_id; + u8 reserved[6]; + __le32 addr_high; + __le32 addr_low; +}; + +/* Input buffer format in case allocate/update ACL scenario and same format + * is used for response buffer in case of query ACL scenario. + * NOTE: de-allocate ACL scenario is direct command and doesn't require + * "buffer", hence no buffer format. + */ +struct ice_aqc_acl_scen { + struct { + /* Byte [x] selection for the TCAM key. This value must be set + * set to 0x0 for unusued TCAM. + * Only Bit 6..0 is used in each byte and MSB is reserved + */ +#define ICE_AQC_ACL_ALLOC_SCE_SELECT_M 0x7F +#define ICE_AQC_ACL_BYTE_SEL_BASE 0x20 +#define ICE_AQC_ACL_BYTE_SEL_BASE_PID 0x3E +#define ICE_AQC_ACL_BYTE_SEL_BASE_PKT_DIR ICE_AQC_ACL_BYTE_SEL_BASE +#define ICE_AQC_ACL_BYTE_SEL_BASE_RNG_CHK 0x3F + u8 tcam_select[5]; + /* TCAM Block entry masking. 
This value should be set to 0x0 for + * unused TCAM + */ + u8 chnk_msk; + /* Bit 0 : masks TCAM entries 0-63 + * Bit 1 : masks TCAM entries 64-127 + * Bit 2 to 7 : follow the pattern of bit 0 and 1 + */ +#define ICE_AQC_ACL_ALLOC_SCE_START_CMP BIT(0) +#define ICE_AQC_ACL_ALLOC_SCE_START_SET BIT(1) + u8 start_cmp_set; + + } tcam_cfg[ICE_AQC_ACL_SLICES]; + + /* Each byte, Bit 6..0: Action memory association to a TCAM block, + * otherwise it shall be set to 0x0 for disabled memory action. + * Bit 7 : Action memory enable for this scenario + */ +#define ICE_AQC_ACL_SCE_ACT_MEM_TCAM_ASSOC_M 0x7F +#define ICE_AQC_ACL_SCE_ACT_MEM_EN BIT(7) + u8 act_mem_cfg[ICE_AQC_MAX_ACTION_MEMORIES]; +}; + +/* ACL - allocate (indirect 0x0C16) counters */ +struct ice_aqc_acl_alloc_counters { + /* Amount of contiguous counters requested. Min value is 1 and + * max value is 255 + */ +#define ICE_AQC_ACL_ALLOC_CNT_MIN_AMT 0x1 +#define ICE_AQC_ACL_ALLOC_CNT_MAX_AMT 0xFF + u8 counter_amount; + + /* Counter type: 'single counter' which can be configured to count + * either bytes or packets + */ +#define ICE_AQC_ACL_CNT_TYPE_SINGLE 0x0 + + /* Counter type: 'counter pair' which counts number of bytes and number + * of packets. + */ +#define ICE_AQC_ACL_CNT_TYPE_DUAL 0x1 + /* requested counter type, single/dual */ + u8 counters_type; + + /* counter bank allocation shall be 0-3 for 'byte or packet counter' */ +#define ICE_AQC_ACL_MAX_CNT_SINGLE 0x3 +/* counter bank allocation shall be 0-1 for 'byte and packet counter dual' */ +#define ICE_AQC_ACL_MAX_CNT_DUAL 0x1 + /* requested counter bank allocation */ + u8 bank_alloc; + + u8 reserved; + + union { + /* Applicable only in case of command */ + struct { + u8 reserved[12]; + } cmd; + /* Applicable only in case of response */ +#define ICE_AQC_ACL_ALLOC_CNT_INVAL 0xFFFF + struct { + /* Index of first allocated counter. 0xFFFF in case + * of unsuccessful allocation + */ + __le16 first_counter; + /* Index of last allocated counter. 0xFFFF in case + * of unsuccessful allocation + */ + __le16 last_counter; + u8 rsvd[8]; + } resp; + } ops; +}; + +/* ACL - de-allocate (direct 0x0C17) counters */ +struct ice_aqc_acl_dealloc_counters { + /* first counter being released */ + __le16 first_counter; + /* last counter being released */ + __le16 last_counter; + /* requested counter type, single/dual */ + u8 counters_type; + /* requested counter bank allocation */ + u8 bank_alloc; + u8 reserved[10]; +}; + +/* ACL - de-allocate (direct 0x0C1A) resources. Used by SW to release all the + * resources allocated for it using a single command + */ +struct ice_aqc_acl_dealloc_res { + u8 reserved[16]; +}; + +/* ACL - program actionpair (indirect 0x0C1C) */ +/* ACL - query actionpair (indirect 0x0C25) */ +struct ice_aqc_acl_actpair { + /* action mem index to program/update */ + u8 act_mem_index; + u8 reserved; + /* The entry index in action memory to be programmed/updated */ + __le16 act_entry_index; + __le32 reserved2; + __le32 addr_high; + __le32 addr_low; +}; + +/* Input buffer format for program/query action-pair admin command */ +struct ice_acl_act_entry { + /* Action priority, values must be between 0..7 */ +#define ICE_AQC_ACT_PRIO_VALID_MAX 7 +#define ICE_AQC_ACT_PRIO_MSK MAKEMASK(0xff, 0) + u8 prio; + /* Action meta-data identifier. 
This field should be set to 0x0 + * for a NOP action + */ +#define ICE_AQC_ACT_MDID_S 8 +#define ICE_AQC_ACT_MDID_MSK MAKEMASK(0xff00, ICE_AQC_ACT_MDID_S) + u8 mdid; + /* Action value */ +#define ICE_AQC_ACT_VALUE_S 16 +#define ICE_AQC_ACT_VALUE_MSK MAKEMASK(0xffff0000, 16) + __le16 value; +}; + +#define ICE_ACL_NUM_ACT_PER_ACT_PAIR 2 +struct ice_aqc_actpair { + struct ice_acl_act_entry act[ICE_ACL_NUM_ACT_PER_ACT_PAIR]; +}; + +/* Generic format used to describe either input or response buffer + * for admin commands related to ACL profile + */ +struct ice_aqc_acl_prof_generic_frmt { + /* The first byte of the byte selection base is reserved to keep the + * first byte of the field vector where the packet direction info is + * available. Thus we should start at index 1 of the field vector to + * map its entries to the byte selection base. + */ +#define ICE_AQC_ACL_PROF_BYTE_SEL_START_IDX 1 + /* In each byte: + * Bit 0..5 = Byte selection for the byte selection base from the + * extracted fields (expressed as byte offset in extracted fields). + * Applicable values are 0..63 + * Bit 6..7 = Reserved + */ +#define ICE_AQC_ACL_PROF_BYTE_SEL_ELEMS 30 + u8 byte_selection[ICE_AQC_ACL_PROF_BYTE_SEL_ELEMS]; + /* In each byte: + * Bit 0..4 = Word selection for the word selection base from the + * extracted fields (expressed as word offset in extracted fields). + * Applicable values are 0..31 + * Bit 5..7 = Reserved + */ +#define ICE_AQC_ACL_PROF_WORD_SEL_ELEMS 32 + u8 word_selection[ICE_AQC_ACL_PROF_WORD_SEL_ELEMS]; + /* In each byte: + * Bit 0..3 = Double word selection for the double-word selection base + * from the extracted fields (expressed as double-word offset in + * extracted fields). + * Applicable values are 0..15 + * Bit 4..7 = Reserved + */ +#define ICE_AQC_ACL_PROF_DWORD_SEL_ELEMS 15 + u8 dword_selection[ICE_AQC_ACL_PROF_DWORD_SEL_ELEMS]; + /* Scenario numbers for individual Physical Function's */ +#define ICE_AQC_ACL_PROF_PF_SCEN_NUM_ELEMS 8 + u8 pf_scenario_num[ICE_AQC_ACL_PROF_PF_SCEN_NUM_ELEMS]; +}; + +/* ACL - program ACL profile extraction (indirect 0x0C1D) */ +/* ACL - program ACL profile ranges (indirect 0x0C1E) */ +/* ACL - query ACL profile (indirect 0x0C21) */ +/* ACL - query ACL profile ranges (indirect 0x0C22) */ +struct ice_aqc_acl_profile { + u8 profile_id; /* Programmed/Updated profile ID */ + u8 reserved[7]; + __le32 addr_high; + __le32 addr_low; +}; + +/* Input buffer format for program profile extraction admin command and + * response buffer format for query profile admin command is as defined + * in struct ice_aqc_acl_prof_generic_frmt + */ + +/* Input buffer format for program profile ranges and query profile ranges + * admin commands. 
Same format is used for response buffer in case of query + * profile ranges command + */ +struct ice_acl_rng_data { + /* The range checker output shall be sent when the value + * related to this range checker is lower than low boundary + */ + __be16 low_boundary; + /* The range checker output shall be sent when the value + * related to this range checker is higher than high boundary + */ + __be16 high_boundary; + /* A value of '0' in bit shall clear the relevant bit input + * to the range checker + */ + __be16 mask; +}; + +struct ice_aqc_acl_profile_ranges { +#define ICE_AQC_ACL_PROF_RANGES_NUM_CFG 8 + struct ice_acl_rng_data checker_cfg[ICE_AQC_ACL_PROF_RANGES_NUM_CFG]; +}; + +/* ACL - program ACL entry (indirect 0x0C20) */ +/* ACL - query ACL entry (indirect 0x0C24) */ +struct ice_aqc_acl_entry { + u8 tcam_index; /* Updated TCAM block index */ + u8 reserved; + __le16 entry_index; /* Updated entry index */ + __le32 reserved2; + __le32 addr_high; + __le32 addr_low; +}; + +/* Input buffer format in case of program ACL entry and response buffer format + * in case of query ACL entry + */ +struct ice_aqc_acl_data { + /* Entry key and entry key invert are 40 bits wide. + * Byte 0..4 : entry key and Byte 5..7 are reserved + * Byte 8..12: entry key invert and Byte 13..15 are reserved + */ + struct { + u8 val[5]; + u8 reserved[3]; + } entry_key, entry_key_invert; +}; + +/* ACL - query ACL counter (direct 0x0C27) */ +struct ice_aqc_acl_query_counter { + /* Queried counter index */ + __le16 counter_index; + /* Queried counter bank */ + u8 counter_bank; + union { + struct { + u8 reserved[13]; + } cmd; + struct { + /* Holds counter value/packet counter value */ + u8 val[5]; + u8 reserved[8]; + } resp; + } ops; +}; + /* Add Tx LAN Queues (indirect 0x0C30) */ struct ice_aqc_add_txqs { u8 num_qgrps; @@ -2277,6 +2699,18 @@ struct ice_aq_desc { struct ice_aqc_get_set_rss_lut get_set_rss_lut; struct ice_aqc_get_set_rss_key get_set_rss_key; struct ice_aqc_clear_fd_table clear_fd_table; + struct ice_aqc_acl_alloc_table alloc_table; + struct ice_aqc_acl_tbl_actpair tbl_actpair; + struct ice_aqc_acl_alloc_scen alloc_scen; + struct ice_aqc_acl_dealloc_scen dealloc_scen; + struct ice_aqc_acl_update_query_scen update_query_scen; + struct ice_aqc_acl_alloc_counters alloc_counters; + struct ice_aqc_acl_dealloc_counters dealloc_counters; + struct ice_aqc_acl_dealloc_res dealloc_res; + struct ice_aqc_acl_entry program_query_entry; + struct ice_aqc_acl_actpair program_query_actpair; + struct ice_aqc_acl_profile profile; + struct ice_aqc_acl_query_counter query_counter; struct ice_aqc_add_txqs add_txqs; struct ice_aqc_dis_txqs dis_txqs; struct ice_aqc_move_txqs move_txqs; @@ -2496,6 +2930,27 @@ enum ice_adminq_opc { ice_aqc_opc_get_rss_key = 0x0B04, ice_aqc_opc_get_rss_lut = 0x0B05, ice_aqc_opc_clear_fd_table = 0x0B06, + /* ACL commands */ + ice_aqc_opc_alloc_acl_tbl = 0x0C10, + ice_aqc_opc_dealloc_acl_tbl = 0x0C11, + ice_aqc_opc_alloc_acl_actpair = 0x0C12, + ice_aqc_opc_dealloc_acl_actpair = 0x0C13, + ice_aqc_opc_alloc_acl_scen = 0x0C14, + ice_aqc_opc_dealloc_acl_scen = 0x0C15, + ice_aqc_opc_alloc_acl_counters = 0x0C16, + ice_aqc_opc_dealloc_acl_counters = 0x0C17, + ice_aqc_opc_dealloc_acl_res = 0x0C1A, + ice_aqc_opc_update_acl_scen = 0x0C1B, + ice_aqc_opc_program_acl_actpair = 0x0C1C, + ice_aqc_opc_program_acl_prof_extraction = 0x0C1D, + ice_aqc_opc_program_acl_prof_ranges = 0x0C1E, + ice_aqc_opc_program_acl_entry = 0x0C20, + ice_aqc_opc_query_acl_prof = 0x0C21, + ice_aqc_opc_query_acl_prof_ranges = 0x0C22, + 
ice_aqc_opc_query_acl_scen = 0x0C23, + ice_aqc_opc_query_acl_entry = 0x0C24, + ice_aqc_opc_query_acl_actpair = 0x0C25, + ice_aqc_opc_query_acl_counter = 0x0C27, /* Tx queue handling commands/events */ ice_aqc_opc_add_txqs = 0x0C30, diff --git a/drivers/net/ice/base/ice_fdir.c b/drivers/net/ice/base/ice_fdir.c index 90e7e082f..c78782e5a 100644 --- a/drivers/net/ice/base/ice_fdir.c +++ b/drivers/net/ice/base/ice_fdir.c @@ -956,19 +956,25 @@ void ice_fdir_list_add_fltr(struct ice_hw *hw, struct ice_fdir_fltr *fltr) * ice_fdir_update_cntrs - increment / decrement filter counter * @hw: pointer to hardware structure * @flow: filter flow type + * @acl_fltr: true indicates an ACL filter * @add: true implies filters added */ void -ice_fdir_update_cntrs(struct ice_hw *hw, enum ice_fltr_ptype flow, bool add) +ice_fdir_update_cntrs(struct ice_hw *hw, enum ice_fltr_ptype flow, + bool acl_fltr, bool add) { int incr; incr = (add) ? 1 : -1; hw->fdir_active_fltr += incr; - if (flow == ICE_FLTR_PTYPE_NONF_NONE || flow >= ICE_FLTR_PTYPE_MAX) + if (flow == ICE_FLTR_PTYPE_NONF_NONE || flow >= ICE_FLTR_PTYPE_MAX) { ice_debug(hw, ICE_DBG_SW, "Unknown filter type %d\n", flow); - else - hw->fdir_fltr_cnt[flow] += incr; + } else { + if (acl_fltr) + hw->acl_fltr_cnt[flow] += incr; + else + hw->fdir_fltr_cnt[flow] += incr; + } } /** diff --git a/drivers/net/ice/base/ice_fdir.h b/drivers/net/ice/base/ice_fdir.h index c811f7606..ff42d2e34 100644 --- a/drivers/net/ice/base/ice_fdir.h +++ b/drivers/net/ice/base/ice_fdir.h @@ -204,6 +204,8 @@ struct ice_fdir_fltr { u16 cnt_index; u8 fdid_prio; u32 fltr_id; + /* Set to true for an ACL filter */ + bool acl_fltr; }; /* Dummy packet filter definition structure. */ @@ -234,6 +236,7 @@ bool ice_fdir_has_frag(enum ice_fltr_ptype flow); struct ice_fdir_fltr * ice_fdir_find_fltr_by_idx(struct ice_hw *hw, u32 fltr_idx); void -ice_fdir_update_cntrs(struct ice_hw *hw, enum ice_fltr_ptype flow, bool add); +ice_fdir_update_cntrs(struct ice_hw *hw, enum ice_fltr_ptype flow, + bool acl_fltr, bool add); void ice_fdir_list_add_fltr(struct ice_hw *hw, struct ice_fdir_fltr *input); #endif /* _ICE_FDIR_H_ */ diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c index 0c64bf681..077325ad5 100644 --- a/drivers/net/ice/base/ice_flex_pipe.c +++ b/drivers/net/ice/base/ice_flex_pipe.c @@ -3528,7 +3528,8 @@ static void ice_free_flow_profs(struct ice_hw *hw, u8 blk_idx) LIST_FOR_EACH_ENTRY_SAFE(e, t, &p->entries, ice_flow_entry, l_entry) - ice_flow_rem_entry(hw, ICE_FLOW_ENTRY_HNDL(e)); + ice_flow_rem_entry(hw, (enum ice_block)blk_idx, + ICE_FLOW_ENTRY_HNDL(e)); LIST_DEL(&p->l_entry); if (p->acts) diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c index 17fd2423e..f480153f7 100644 --- a/drivers/net/ice/base/ice_flow.c +++ b/drivers/net/ice/base/ice_flow.c @@ -1024,6 +1024,126 @@ ice_flow_create_xtrct_seq(struct ice_hw *hw, } /** + * ice_flow_sel_acl_scen - returns the specific scenario + * @hw: pointer to the hardware structure + * @params: information about the flow to be processed + * + * This function will return the specific scenario based on the + * params passed to it + */ +static enum ice_status +ice_flow_sel_acl_scen(struct ice_hw *hw, struct ice_flow_prof_params *params) +{ + /* Find the best-fit scenario for the provided match width */ + struct ice_acl_scen *cand_scen = NULL, *scen; + + if (!hw->acl_tbl) + return ICE_ERR_DOES_NOT_EXIST; + + /* Loop through each scenario and match against the scenario width + * to select the 
specific scenario + */ + LIST_FOR_EACH_ENTRY(scen, &hw->acl_tbl->scens, ice_acl_scen, list_entry) + if (scen->eff_width >= params->entry_length && + (!cand_scen || cand_scen->eff_width > scen->eff_width)) + cand_scen = scen; + if (!cand_scen) + return ICE_ERR_DOES_NOT_EXIST; + + params->prof->cfg.scen = cand_scen; + + return ICE_SUCCESS; +} + +/** + * ice_flow_acl_def_entry_frmt - Determine the layout of flow entries + * @params: information about the flow to be processed + */ +static enum ice_status +ice_flow_acl_def_entry_frmt(struct ice_flow_prof_params *params) +{ + u16 index, i, range_idx = 0; + + index = ICE_AQC_ACL_PROF_BYTE_SEL_START_IDX; + + for (i = 0; i < params->prof->segs_cnt; i++) { + struct ice_flow_seg_info *seg = ¶ms->prof->segs[i]; + u64 match = seg->match; + u8 j; + + for (j = 0; j < ICE_FLOW_FIELD_IDX_MAX && match; j++) { + struct ice_flow_fld_info *fld; + const u64 bit = BIT_ULL(j); + + if (!(match & bit)) + continue; + + fld = &seg->fields[j]; + fld->entry.mask = ICE_FLOW_FLD_OFF_INVAL; + + if (fld->type == ICE_FLOW_FLD_TYPE_RANGE) { + fld->entry.last = ICE_FLOW_FLD_OFF_INVAL; + + /* Range checking only supported for single + * words + */ + if (DIVIDE_AND_ROUND_UP(ice_flds_info[j].size + + fld->xtrct.disp, + BITS_PER_BYTE * 2) > 1) + return ICE_ERR_PARAM; + + /* Ranges must define low and high values */ + if (fld->src.val == ICE_FLOW_FLD_OFF_INVAL || + fld->src.last == ICE_FLOW_FLD_OFF_INVAL) + return ICE_ERR_PARAM; + + fld->entry.val = range_idx++; + } else { + /* Store adjusted byte-length of field for later + * use, taking into account potential + * non-byte-aligned displacement + */ + fld->entry.last = DIVIDE_AND_ROUND_UP + (ice_flds_info[j].size + + (fld->xtrct.disp % BITS_PER_BYTE), + BITS_PER_BYTE); + fld->entry.val = index; + index += fld->entry.last; + } + + match &= ~bit; + } + + for (j = 0; j < seg->raws_cnt; j++) { + struct ice_flow_seg_fld_raw *raw = &seg->raws[j]; + + raw->info.entry.mask = ICE_FLOW_FLD_OFF_INVAL; + raw->info.entry.val = index; + raw->info.entry.last = raw->info.src.last; + index += raw->info.entry.last; + } + } + + /* Currently only support using the byte selection base, which only + * allows for an effective entry size of 30 bytes. Reject anything + * larger. 
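+ * ('index' started at ICE_AQC_ACL_PROF_BYTE_SEL_START_IDX, so at
+ * most ICE_AQC_ACL_PROF_BYTE_SEL_ELEMS - 1 bytes remain for match
+ * data.)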
+ */ + if (index > ICE_AQC_ACL_PROF_BYTE_SEL_ELEMS) + return ICE_ERR_PARAM; + + /* Only 8 range checkers per profile, reject anything trying to use + * more + */ + if (range_idx > ICE_AQC_ACL_PROF_RANGES_NUM_CFG) + return ICE_ERR_PARAM; + + /* Store # bytes required for entry for later use */ + params->entry_length = index - ICE_AQC_ACL_PROF_BYTE_SEL_START_IDX; + + return ICE_SUCCESS; +} + +/** * ice_flow_proc_segs - process all packet segments associated with a profile * @hw: pointer to the HW struct * @params: information about the flow to be processed @@ -1048,6 +1168,14 @@ ice_flow_proc_segs(struct ice_hw *hw, struct ice_flow_prof_params *params) */ status = ICE_SUCCESS; break; + case ICE_BLK_ACL: + status = ice_flow_acl_def_entry_frmt(params); + if (status) + return status; + status = ice_flow_sel_acl_scen(hw, params); + if (status) + return status; + break; case ICE_BLK_FD: status = ICE_SUCCESS; break; @@ -1166,6 +1294,11 @@ ice_dealloc_flow_entry(struct ice_hw *hw, struct ice_flow_entry *entry) if (entry->entry) ice_free(hw, entry->entry); + if (entry->range_buf) { + ice_free(hw, entry->range_buf); + entry->range_buf = NULL; + } + if (entry->acts) { ice_free(hw, entry->acts); entry->acts = NULL; @@ -1175,17 +1308,155 @@ ice_dealloc_flow_entry(struct ice_hw *hw, struct ice_flow_entry *entry) ice_free(hw, entry); } +#define ICE_ACL_INVALID_SCEN 0x3f + +/** + * ice_flow_acl_is_prof_in_use - Verify if the profile is associated to any pf + * @hw: pointer to the hardware structure + * @prof: pointer to flow profile + * @buf: destination buffer function writes partial xtrct sequence to + * + * returns ICE_SUCCESS if no pf is associated to the given profile + * returns ICE_ERR_IN_USE if at least one pf is associated to the given profile + * returns other error code for real error + */ +static enum ice_status +ice_flow_acl_is_prof_in_use(struct ice_hw *hw, struct ice_flow_prof *prof, + struct ice_aqc_acl_prof_generic_frmt *buf) +{ + enum ice_status status; + u8 prof_id = 0; + + status = ice_flow_get_hw_prof(hw, ICE_BLK_ACL, prof->id, &prof_id); + if (status) + return status; + + status = ice_query_acl_prof(hw, prof_id, buf, NULL); + if (status) + return status; + + /* If all pf's associated scenarios are all 0 or all + * ICE_ACL_INVALID_SCEN (63) for the given profile then the latter has + * not been configured yet. 
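+ * (A newly allocated profile reads back as all zeros, while a
+ * profile released by every PF reads back as all
+ * ICE_ACL_INVALID_SCEN.)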
+ */ + if (buf->pf_scenario_num[0] == 0 && buf->pf_scenario_num[1] == 0 && + buf->pf_scenario_num[2] == 0 && buf->pf_scenario_num[3] == 0 && + buf->pf_scenario_num[4] == 0 && buf->pf_scenario_num[5] == 0 && + buf->pf_scenario_num[6] == 0 && buf->pf_scenario_num[7] == 0) + return ICE_SUCCESS; + + if (buf->pf_scenario_num[0] == ICE_ACL_INVALID_SCEN && + buf->pf_scenario_num[1] == ICE_ACL_INVALID_SCEN && + buf->pf_scenario_num[2] == ICE_ACL_INVALID_SCEN && + buf->pf_scenario_num[3] == ICE_ACL_INVALID_SCEN && + buf->pf_scenario_num[4] == ICE_ACL_INVALID_SCEN && + buf->pf_scenario_num[5] == ICE_ACL_INVALID_SCEN && + buf->pf_scenario_num[6] == ICE_ACL_INVALID_SCEN && + buf->pf_scenario_num[7] == ICE_ACL_INVALID_SCEN) + return ICE_SUCCESS; + else + return ICE_ERR_IN_USE; +} + +/** + * ice_flow_acl_free_act_cntr - Free the acl rule's actions + * @hw: pointer to the hardware structure + * @acts: array of actions to be performed on a match + * @acts_cnt: number of actions + */ +static enum ice_status +ice_flow_acl_free_act_cntr(struct ice_hw *hw, struct ice_flow_action *acts, + u8 acts_cnt) +{ + int i; + + for (i = 0; i < acts_cnt; i++) { + if (acts[i].type == ICE_FLOW_ACT_CNTR_PKT || + acts[i].type == ICE_FLOW_ACT_CNTR_BYTES || + acts[i].type == ICE_FLOW_ACT_CNTR_PKT_BYTES) { + struct ice_acl_cntrs cntrs; + enum ice_status status; + + cntrs.bank = 0; /* Only bank0 for the moment */ + cntrs.first_cntr = + LE16_TO_CPU(acts[i].data.acl_act.value); + cntrs.last_cntr = + LE16_TO_CPU(acts[i].data.acl_act.value); + + if (acts[i].type == ICE_FLOW_ACT_CNTR_PKT_BYTES) + cntrs.type = ICE_AQC_ACL_CNT_TYPE_DUAL; + else + cntrs.type = ICE_AQC_ACL_CNT_TYPE_SINGLE; + + status = ice_aq_dealloc_acl_cntrs(hw, &cntrs, NULL); + if (status) + return status; + } + } + return ICE_SUCCESS; +} + +/** + * ice_flow_acl_disassoc_scen - Disassociate the scenario to the Profile + * @hw: pointer to the hardware structure + * @prof: pointer to flow profile + * + * Disassociate the scenario to the Profile for the PF of the VSI. + */ +static enum ice_status +ice_flow_acl_disassoc_scen(struct ice_hw *hw, struct ice_flow_prof *prof) +{ + struct ice_aqc_acl_prof_generic_frmt buf; + enum ice_status status = ICE_SUCCESS; + u8 prof_id = 0; + + ice_memset(&buf, 0, sizeof(buf), ICE_NONDMA_MEM); + + status = ice_flow_get_hw_prof(hw, ICE_BLK_ACL, prof->id, &prof_id); + if (status) + return status; + + status = ice_query_acl_prof(hw, prof_id, &buf, NULL); + if (status) + return status; + + /* Clear scenario for this pf */ + buf.pf_scenario_num[hw->pf_id] = ICE_ACL_INVALID_SCEN; + status = ice_prgm_acl_prof_extrt(hw, prof_id, &buf, NULL); + + return status; +} + /** * ice_flow_rem_entry_sync - Remove a flow entry * @hw: pointer to the HW struct + * @blk: classification stage * @entry: flow entry to be removed */ static enum ice_status -ice_flow_rem_entry_sync(struct ice_hw *hw, struct ice_flow_entry *entry) +ice_flow_rem_entry_sync(struct ice_hw *hw, enum ice_block blk, + struct ice_flow_entry *entry) { if (!entry) return ICE_ERR_BAD_PTR; + if (blk == ICE_BLK_ACL) { + enum ice_status status; + + if (!entry->prof) + return ICE_ERR_BAD_PTR; + + status = ice_acl_rem_entry(hw, entry->prof->cfg.scen, + entry->scen_entry_idx); + if (status) + return status; + + /* Checks if we need to release an ACL counter. 
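+ * It would have been allocated by ice_flow_acl_check_actions()
+ * when the entry was added.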
*/ + if (entry->acts_cnt && entry->acts) + ice_flow_acl_free_act_cntr(hw, entry->acts, + entry->acts_cnt); + } + LIST_DEL(&entry->l_entry); ice_dealloc_flow_entry(hw, entry); @@ -1311,7 +1582,7 @@ ice_flow_rem_prof_sync(struct ice_hw *hw, enum ice_block blk, LIST_FOR_EACH_ENTRY_SAFE(e, t, &prof->entries, ice_flow_entry, l_entry) { - status = ice_flow_rem_entry_sync(hw, e); + status = ice_flow_rem_entry_sync(hw, blk, e); if (status) break; } @@ -1319,6 +1590,40 @@ ice_flow_rem_prof_sync(struct ice_hw *hw, enum ice_block blk, ice_release_lock(&prof->entries_lock); } + if (blk == ICE_BLK_ACL) { + struct ice_aqc_acl_profile_ranges query_rng_buf; + struct ice_aqc_acl_prof_generic_frmt buf; + u8 prof_id = 0; + + /* Deassociate the scenario to the Profile for the PF */ + status = ice_flow_acl_disassoc_scen(hw, prof); + if (status) + return status; + + /* Clear the range-checker if the profile ID is no longer + * used by any PF + */ + status = ice_flow_acl_is_prof_in_use(hw, prof, &buf); + if (status && status != ICE_ERR_IN_USE) { + return status; + } else if (!status) { + /* Clear the range-checker value for profile ID */ + ice_memset(&query_rng_buf, 0, + sizeof(struct ice_aqc_acl_profile_ranges), + ICE_NONDMA_MEM); + + status = ice_flow_get_hw_prof(hw, blk, prof->id, + &prof_id); + if (status) + return status; + + status = ice_prog_acl_prof_ranges(hw, prof_id, + &query_rng_buf, NULL); + if (status) + return status; + } + } + /* Remove all hardware profiles associated with this flow profile */ status = ice_rem_prof(hw, blk, prof->id); if (!status) { @@ -1333,6 +1638,99 @@ ice_flow_rem_prof_sync(struct ice_hw *hw, enum ice_block blk, } /** + * ice_flow_acl_set_xtrct_seq_fld - Populate xtrct seq for single field + * @buf: Destination buffer function writes partial xtrct sequence to + * @info: Info about field + */ +static void +ice_flow_acl_set_xtrct_seq_fld(struct ice_aqc_acl_prof_generic_frmt *buf, + struct ice_flow_fld_info *info) +{ + u16 dst, i; + u8 src; + + src = info->xtrct.idx * ICE_FLOW_FV_EXTRACT_SZ + + info->xtrct.disp / BITS_PER_BYTE; + dst = info->entry.val; + for (i = 0; i < info->entry.last; i++) + /* HW stores field vector words in LE, convert words back to BE + * so constructed entries will end up in network order + */ + buf->byte_selection[dst++] = src++ ^ 1; +} + +/** + * ice_flow_acl_set_xtrct_seq - Program ACL extraction sequence + * @hw: pointer to the hardware structure + * @prof: pointer to flow profile + */ +static enum ice_status +ice_flow_acl_set_xtrct_seq(struct ice_hw *hw, struct ice_flow_prof *prof) +{ + struct ice_aqc_acl_prof_generic_frmt buf; + struct ice_flow_fld_info *info; + enum ice_status status; + u8 prof_id = 0; + u16 i; + + ice_memset(&buf, 0, sizeof(buf), ICE_NONDMA_MEM); + + status = ice_flow_get_hw_prof(hw, ICE_BLK_ACL, prof->id, &prof_id); + if (status) + return status; + + status = ice_flow_acl_is_prof_in_use(hw, prof, &buf); + if (status && status != ICE_ERR_IN_USE) + return status; + + if (!status) { + /* Program the profile dependent configuration. 
This is done + * only once regardless of the number of PFs using that profile + */ + ice_memset(&buf, 0, sizeof(buf), ICE_NONDMA_MEM); + + for (i = 0; i < prof->segs_cnt; i++) { + struct ice_flow_seg_info *seg = &prof->segs[i]; + u64 match = seg->match; + u16 j; + + for (j = 0; j < ICE_FLOW_FIELD_IDX_MAX && match; j++) { + const u64 bit = BIT_ULL(j); + + if (!(match & bit)) + continue; + + info = &seg->fields[j]; + + if (info->type == ICE_FLOW_FLD_TYPE_RANGE) + buf.word_selection[info->entry.val] = + info->xtrct.idx; + else + ice_flow_acl_set_xtrct_seq_fld(&buf, + info); + + match &= ~bit; + } + + for (j = 0; j < seg->raws_cnt; j++) { + info = &seg->raws[j].info; + ice_flow_acl_set_xtrct_seq_fld(&buf, info); + } + } + + ice_memset(&buf.pf_scenario_num[0], ICE_ACL_INVALID_SCEN, + ICE_AQC_ACL_PROF_PF_SCEN_NUM_ELEMS, + ICE_NONDMA_MEM); + } + + /* Update the current PF */ + buf.pf_scenario_num[hw->pf_id] = (u8)prof->cfg.scen->id; + status = ice_prgm_acl_prof_extrt(hw, prof_id, &buf, NULL); + + return status; +} + +/** * ice_flow_assoc_vsig_vsi - associate a VSI with VSIG * @hw: pointer to the hardware structure * @blk: classification stage @@ -1377,6 +1775,11 @@ ice_flow_assoc_prof(struct ice_hw *hw, enum ice_block blk, enum ice_status status = ICE_SUCCESS; if (!ice_is_bit_set(prof->vsis, vsi_handle)) { + if (blk == ICE_BLK_ACL) { + status = ice_flow_acl_set_xtrct_seq(hw, prof); + if (status) + return status; + } status = ice_add_prof_id_flow(hw, blk, ice_get_hw_vsi_num(hw, vsi_handle), @@ -1559,6 +1962,682 @@ u64 ice_flow_find_entry(struct ice_hw *hw, enum ice_block blk, u64 entry_id) } /** + * ice_flow_acl_check_actions - Checks the acl rule's actions + * @hw: pointer to the hardware structure + * @acts: array of actions to be performed on a match + * @acts_cnt: number of actions + * @cnt_alloc: indicates if a ACL counter has been allocated. + */ +static enum ice_status +ice_flow_acl_check_actions(struct ice_hw *hw, struct ice_flow_action *acts, + u8 acts_cnt, bool *cnt_alloc) +{ + ice_declare_bitmap(dup_check, ICE_AQC_TBL_MAX_ACTION_PAIRS * 2); + int i; + + ice_zero_bitmap(dup_check, ICE_AQC_TBL_MAX_ACTION_PAIRS * 2); + *cnt_alloc = false; + + if (acts_cnt > ICE_FLOW_ACL_MAX_NUM_ACT) + return ICE_ERR_OUT_OF_RANGE; + + for (i = 0; i < acts_cnt; i++) { + if (acts[i].type != ICE_FLOW_ACT_NOP && + acts[i].type != ICE_FLOW_ACT_DROP && + acts[i].type != ICE_FLOW_ACT_CNTR_PKT && + acts[i].type != ICE_FLOW_ACT_FWD_QUEUE) + return ICE_ERR_CFG; + + /* If the caller want to add two actions of the same type, then + * it is considered invalid configuration. + */ + if (ice_test_and_set_bit(acts[i].type, dup_check)) + return ICE_ERR_PARAM; + } + + /* Checks if ACL counters are needed. 
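+ * For each counter action, the bank-relative index of the first
+ * allocated counter is written back into acts[i].data.acl_act.value.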
*/ + for (i = 0; i < acts_cnt; i++) { + if (acts[i].type == ICE_FLOW_ACT_CNTR_PKT || + acts[i].type == ICE_FLOW_ACT_CNTR_BYTES || + acts[i].type == ICE_FLOW_ACT_CNTR_PKT_BYTES) { + struct ice_acl_cntrs cntrs; + enum ice_status status; + + cntrs.amount = 1; + cntrs.bank = 0; /* Only bank0 for the moment */ + + if (acts[i].type == ICE_FLOW_ACT_CNTR_PKT_BYTES) + cntrs.type = ICE_AQC_ACL_CNT_TYPE_DUAL; + else + cntrs.type = ICE_AQC_ACL_CNT_TYPE_SINGLE; + + status = ice_aq_alloc_acl_cntrs(hw, &cntrs, NULL); + if (status) + return status; + /* Counter index within the bank */ + acts[i].data.acl_act.value = + CPU_TO_LE16(cntrs.first_cntr); + *cnt_alloc = true; + } + } + + return ICE_SUCCESS; +} + +/** + * ice_flow_acl_frmt_entry_range - Format an acl range checker for a given field + * @fld: number of the given field + * @info: info about field + * @range_buf: range checker configuration buffer + * @data: pointer to a data buffer containing flow entry's match values/masks + * @range: Input/output param indicating which range checkers are being used + */ +static void +ice_flow_acl_frmt_entry_range(u16 fld, struct ice_flow_fld_info *info, + struct ice_aqc_acl_profile_ranges *range_buf, + u8 *data, u8 *range) +{ + u16 new_mask; + + /* If not specified, default mask is all bits in field */ + new_mask = (info->src.mask == ICE_FLOW_FLD_OFF_INVAL ? + BIT(ice_flds_info[fld].size) - 1 : + (*(u16 *)(data + info->src.mask))) << info->xtrct.disp; + + /* If the mask is 0, then we don't need to worry about this input + * range checker value. + */ + if (new_mask) { + u16 new_high = + (*(u16 *)(data + info->src.last)) << info->xtrct.disp; + u16 new_low = + (*(u16 *)(data + info->src.val)) << info->xtrct.disp; + u8 range_idx = info->entry.val; + + range_buf->checker_cfg[range_idx].low_boundary = + CPU_TO_BE16(new_low); + range_buf->checker_cfg[range_idx].high_boundary = + CPU_TO_BE16(new_high); + range_buf->checker_cfg[range_idx].mask = CPU_TO_BE16(new_mask); + + /* Indicate which range checker is being used */ + *range |= BIT(range_idx); + } +} + +/** + * ice_flow_acl_frmt_entry_fld - Partially format acl entry for a given field + * @fld: number of the given field + * @info: info about the field + * @buf: buffer containing the entry + * @dontcare: buffer containing don't care mask for entry + * @data: pointer to a data buffer containing flow entry's match values/masks + */ +static void +ice_flow_acl_frmt_entry_fld(u16 fld, struct ice_flow_fld_info *info, u8 *buf, + u8 *dontcare, u8 *data) +{ + u16 dst, src, mask, k, end_disp, tmp_s = 0, tmp_m = 0; + bool use_mask = false; + u8 disp; + + src = info->src.val; + mask = info->src.mask; + dst = info->entry.val - ICE_AQC_ACL_PROF_BYTE_SEL_START_IDX; + disp = info->xtrct.disp % BITS_PER_BYTE; + + if (mask != ICE_FLOW_FLD_OFF_INVAL) + use_mask = true; + + for (k = 0; k < info->entry.last; k++, dst++) { + /* Add overflow bits from previous byte */ + buf[dst] = (tmp_s & 0xff00) >> 8; + + /* If mask is not valid, tmp_m is always zero, so just setting + * dontcare to 0 (no masked bits). 
If mask is valid, pulls in + * overflow bits of mask from prev byte + */ + dontcare[dst] = (tmp_m & 0xff00) >> 8; + + /* If there is displacement, last byte will only contain + * displaced data, but there is no more data to read from user + * buffer, so skip so as not to potentially read beyond end of + * user buffer + */ + if (!disp || k < info->entry.last - 1) { + /* Store shifted data to use in next byte */ + tmp_s = data[src++] << disp; + + /* Add current (shifted) byte */ + buf[dst] |= tmp_s & 0xff; + + /* Handle mask if valid */ + if (use_mask) { + tmp_m = (~data[mask++] & 0xff) << disp; + dontcare[dst] |= tmp_m & 0xff; + } + } + } + + /* Fill in don't care bits at beginning of field */ + if (disp) { + dst = info->entry.val - ICE_AQC_ACL_PROF_BYTE_SEL_START_IDX; + for (k = 0; k < disp; k++) + dontcare[dst] |= BIT(k); + } + + end_disp = (disp + ice_flds_info[fld].size) % BITS_PER_BYTE; + + /* Fill in don't care bits at end of field */ + if (end_disp) { + dst = info->entry.val - ICE_AQC_ACL_PROF_BYTE_SEL_START_IDX + + info->entry.last - 1; + for (k = end_disp; k < BITS_PER_BYTE; k++) + dontcare[dst] |= BIT(k); + } +} + +/** + * ice_flow_acl_frmt_entry - Format acl entry + * @hw: pointer to the hardware structure + * @prof: pointer to flow profile + * @e: pointer to the flow entry + * @data: pointer to a data buffer containing flow entry's match values/masks + * @acts: array of actions to be performed on a match + * @acts_cnt: number of actions + * + * Formats the key (and key_inverse) to be matched from the data passed in, + * along with data from the flow profile. This key/key_inverse pair makes up + * the 'entry' for an acl flow entry. + */ +static enum ice_status +ice_flow_acl_frmt_entry(struct ice_hw *hw, struct ice_flow_prof *prof, + struct ice_flow_entry *e, u8 *data, + struct ice_flow_action *acts, u8 acts_cnt) +{ + u8 *buf = NULL, *dontcare = NULL, *key = NULL, range = 0, dir_flag_msk; + struct ice_aqc_acl_profile_ranges *range_buf = NULL; + enum ice_status status; + bool cnt_alloc; + u8 prof_id = 0; + u16 i, buf_sz; + + status = ice_flow_get_hw_prof(hw, ICE_BLK_ACL, prof->id, &prof_id); + if (status) + return status; + + /* Format the result action */ + + status = ice_flow_acl_check_actions(hw, acts, acts_cnt, &cnt_alloc); + if (status) + return status; + + status = ICE_ERR_NO_MEMORY; + + e->acts = (struct ice_flow_action *) + ice_memdup(hw, acts, acts_cnt * sizeof(*acts), + ICE_NONDMA_TO_NONDMA); + + if (!e->acts) + goto out; + + e->acts_cnt = acts_cnt; + + /* Format the matching data */ + buf_sz = prof->cfg.scen->width; + buf = (u8 *)ice_malloc(hw, buf_sz); + if (!buf) + goto out; + + dontcare = (u8 *)ice_malloc(hw, buf_sz); + if (!dontcare) + goto out; + + /* 'key' buffer will store both key and key_inverse, so must be twice + * size of buf + */ + key = (u8 *)ice_malloc(hw, buf_sz * 2); + if (!key) + goto out; + + range_buf = (struct ice_aqc_acl_profile_ranges *) + ice_malloc(hw, sizeof(struct ice_aqc_acl_profile_ranges)); + if (!range_buf) + goto out; + + /* Set don't care mask to all 1's to start, will zero out used bytes */ + ice_memset(dontcare, 0xff, buf_sz, ICE_NONDMA_MEM); + + for (i = 0; i < prof->segs_cnt; i++) { + struct ice_flow_seg_info *seg = &prof->segs[i]; + u64 match = seg->match; + u16 j; + + for (j = 0; j < ICE_FLOW_FIELD_IDX_MAX && match; j++) { + struct ice_flow_fld_info *info; + const u64 bit = BIT_ULL(j); + + if (!(match & bit)) + continue; + + info = &seg->fields[j]; + + if (info->type == ICE_FLOW_FLD_TYPE_RANGE) + ice_flow_acl_frmt_entry_range(j, 
info, + range_buf, data, + &range); + else + ice_flow_acl_frmt_entry_fld(j, info, buf, + dontcare, data); + + match &= ~bit; + } + + for (j = 0; j < seg->raws_cnt; j++) { + struct ice_flow_fld_info *info = &seg->raws[j].info; + u16 dst, src, mask, k; + bool use_mask = false; + + src = info->src.val; + dst = info->entry.val - + ICE_AQC_ACL_PROF_BYTE_SEL_START_IDX; + mask = info->src.mask; + + if (mask != ICE_FLOW_FLD_OFF_INVAL) + use_mask = true; + + for (k = 0; k < info->entry.last; k++, dst++) { + buf[dst] = data[src++]; + if (use_mask) + dontcare[dst] = ~data[mask++]; + else + dontcare[dst] = 0; + } + } + } + + buf[prof->cfg.scen->pid_idx] = (u8)prof_id; + dontcare[prof->cfg.scen->pid_idx] = 0; + + /* Format the buffer for direction flags */ + dir_flag_msk = BIT(ICE_FLG_PKT_DIR); + + if (prof->dir == ICE_FLOW_RX) + buf[prof->cfg.scen->pkt_dir_idx] = dir_flag_msk; + + if (range) { + buf[prof->cfg.scen->rng_chk_idx] = range; + /* Mark any unused range checkers as don't care */ + dontcare[prof->cfg.scen->rng_chk_idx] = ~range; + e->range_buf = range_buf; + } else { + ice_free(hw, range_buf); + } + + status = ice_set_key(key, buf_sz * 2, buf, NULL, dontcare, NULL, 0, + buf_sz); + if (status) + goto out; + + e->entry = key; + e->entry_sz = buf_sz * 2; + +out: + if (buf) + ice_free(hw, buf); + + if (dontcare) + ice_free(hw, dontcare); + + if (status && key) + ice_free(hw, key); + + if (status && range_buf) { + ice_free(hw, range_buf); + e->range_buf = NULL; + } + + if (status && e->acts) { + ice_free(hw, e->acts); + e->acts = NULL; + e->acts_cnt = 0; + } + + if (status && cnt_alloc) + ice_flow_acl_free_act_cntr(hw, acts, acts_cnt); + + return status; +} + +/** + * ice_flow_acl_find_scen_entry_cond - Find an ACL scenario entry that matches + * the compared data. + * @prof: pointer to flow profile + * @e: pointer to the comparing flow entry + * @do_chg_action: decide if we want to change the ACL action + * @do_add_entry: decide if we want to add the new ACL entry + * @do_rem_entry: decide if we want to remove the current ACL entry + * + * Find an ACL scenario entry that matches the compared data. In the same time, + * this function also figure out: + * a/ If we want to change the ACL action + * b/ If we want to add the new ACL entry + * c/ If we want to remove the current ACL entry + */ +static struct ice_flow_entry * +ice_flow_acl_find_scen_entry_cond(struct ice_flow_prof *prof, + struct ice_flow_entry *e, bool *do_chg_action, + bool *do_add_entry, bool *do_rem_entry) +{ + struct ice_flow_entry *p, *return_entry = NULL; + u8 i, j; + + /* Check if: + * a/ There exists an entry with same matching data, but different + * priority, then we remove this existing ACL entry. Then, we + * will add the new entry to the ACL scenario. + * b/ There exists an entry with same matching data, priority, and + * result action, then we do nothing + * c/ There exists an entry with same matching data, priority, but + * different, action, then do only change the action's entry. + * d/ Else, we add this new entry to the ACL scenario. + */ + *do_chg_action = false; + *do_add_entry = true; + *do_rem_entry = false; + LIST_FOR_EACH_ENTRY(p, &prof->entries, ice_flow_entry, l_entry) { + if (memcmp(p->entry, e->entry, p->entry_sz)) + continue; + + /* From this point, we have the same matching_data. 
*/ + *do_add_entry = false; + return_entry = p; + + if (p->priority != e->priority) { + /* matching data && !priority */ + *do_add_entry = true; + *do_rem_entry = true; + break; + } + + /* From this point, we will have matching_data && priority */ + if (p->acts_cnt != e->acts_cnt) + *do_chg_action = true; + for (i = 0; i < p->acts_cnt; i++) { + bool found_not_match = false; + + for (j = 0; j < e->acts_cnt; j++) + if (memcmp(&p->acts[i], &e->acts[j], + sizeof(struct ice_flow_action))) { + found_not_match = true; + break; + } + + if (found_not_match) { + *do_chg_action = true; + break; + } + } + + /* (do_chg_action = true) means : + * matching_data && priority && !result_action + * (do_chg_action = false) means : + * matching_data && priority && result_action + */ + break; + } + + return return_entry; +} + +/** + * ice_flow_acl_convert_to_acl_prior - Convert to ACL priority + * @p: flow priority + */ +static enum ice_acl_entry_prior +ice_flow_acl_convert_to_acl_prior(enum ice_flow_priority p) +{ + enum ice_acl_entry_prior acl_prior; + + switch (p) { + case ICE_FLOW_PRIO_LOW: + acl_prior = ICE_LOW; + break; + case ICE_FLOW_PRIO_NORMAL: + acl_prior = ICE_NORMAL; + break; + case ICE_FLOW_PRIO_HIGH: + acl_prior = ICE_HIGH; + break; + default: + acl_prior = ICE_NORMAL; + break; + } + + return acl_prior; +} + +/** + * ice_flow_acl_union_rng_chk - Perform union operation between two + * range-range checker buffers + * @dst_buf: pointer to destination range checker buffer + * @src_buf: pointer to source range checker buffer + * + * For this function, we do the union between dst_buf and src_buf + * range checker buffer, and we will save the result back to dst_buf + */ +static enum ice_status +ice_flow_acl_union_rng_chk(struct ice_aqc_acl_profile_ranges *dst_buf, + struct ice_aqc_acl_profile_ranges *src_buf) +{ + u8 i, j; + + if (!dst_buf || !src_buf) + return ICE_ERR_BAD_PTR; + + for (i = 0; i < ICE_AQC_ACL_PROF_RANGES_NUM_CFG; i++) { + struct ice_acl_rng_data *cfg_data = NULL, *in_data; + bool will_populate = false; + + in_data = &src_buf->checker_cfg[i]; + + if (!in_data->mask) + break; + + for (j = 0; j < ICE_AQC_ACL_PROF_RANGES_NUM_CFG; j++) { + cfg_data = &dst_buf->checker_cfg[j]; + + if (!cfg_data->mask || + !memcmp(cfg_data, in_data, + sizeof(struct ice_acl_rng_data))) { + will_populate = true; + break; + } + } + + if (will_populate) { + ice_memcpy(cfg_data, in_data, + sizeof(struct ice_acl_rng_data), + ICE_NONDMA_TO_NONDMA); + } else { + /* No available slot left to program range checker */ + return ICE_ERR_MAX_LIMIT; + } + } + + return ICE_SUCCESS; +} + +/** + * ice_flow_acl_add_scen_entry_sync - Add entry to ACL scenario sync + * @hw: pointer to the hardware structure + * @prof: pointer to flow profile + * @entry: double pointer to the flow entry + * + * For this function, we will look at the current added entries in the + * corresponding ACL scenario. Then, we will perform matching logic to + * see if we want to add/modify/do nothing with this new entry. 
+ */ +static enum ice_status +ice_flow_acl_add_scen_entry_sync(struct ice_hw *hw, struct ice_flow_prof *prof, + struct ice_flow_entry **entry) +{ + bool do_add_entry, do_rem_entry, do_chg_action, do_chg_rng_chk; + struct ice_aqc_acl_profile_ranges query_rng_buf, cfg_rng_buf; + struct ice_acl_act_entry *acts = NULL; + struct ice_flow_entry *exist; + enum ice_status status = ICE_SUCCESS; + struct ice_flow_entry *e; + u8 i; + + if (!entry || !(*entry) || !prof) + return ICE_ERR_BAD_PTR; + + e = *(entry); + + do_chg_rng_chk = false; + if (e->range_buf) { + u8 prof_id = 0; + + status = ice_flow_get_hw_prof(hw, ICE_BLK_ACL, prof->id, + &prof_id); + if (status) + return status; + + /* Query the current range-checker value in FW */ + status = ice_query_acl_prof_ranges(hw, prof_id, &query_rng_buf, + NULL); + if (status) + return status; + ice_memcpy(&cfg_rng_buf, &query_rng_buf, + sizeof(struct ice_aqc_acl_profile_ranges), + ICE_NONDMA_TO_NONDMA); + + /* Generate the new range-checker value */ + status = ice_flow_acl_union_rng_chk(&cfg_rng_buf, e->range_buf); + if (status) + return status; + + /* Reconfigure the range check if the buffer is changed. */ + do_chg_rng_chk = false; + if (memcmp(&query_rng_buf, &cfg_rng_buf, + sizeof(struct ice_aqc_acl_profile_ranges))) { + status = ice_prog_acl_prof_ranges(hw, prof_id, + &cfg_rng_buf, NULL); + if (status) + return status; + + do_chg_rng_chk = true; + } + } + + /* Figure out if we want to (change the ACL action) and/or + * (Add the new ACL entry) and/or (Remove the current ACL entry) + */ + exist = ice_flow_acl_find_scen_entry_cond(prof, e, &do_chg_action, + &do_add_entry, &do_rem_entry); + + if (do_rem_entry) { + status = ice_flow_rem_entry_sync(hw, ICE_BLK_ACL, exist); + if (status) + return status; + } + + /* Prepare the result action buffer */ + acts = (struct ice_acl_act_entry *)ice_calloc + (hw, e->entry_sz, sizeof(struct ice_acl_act_entry)); + for (i = 0; i < e->acts_cnt; i++) + ice_memcpy(&acts[i], &e->acts[i].data.acl_act, + sizeof(struct ice_acl_act_entry), + ICE_NONDMA_TO_NONDMA); + + if (do_add_entry) { + enum ice_acl_entry_prior prior; + u8 *keys, *inverts; + u16 entry_idx; + + keys = (u8 *)e->entry; + inverts = keys + (e->entry_sz / 2); + prior = ice_flow_acl_convert_to_acl_prior(e->priority); + + status = ice_acl_add_entry(hw, prof->cfg.scen, prior, keys, + inverts, acts, e->acts_cnt, + &entry_idx); + if (status) + goto out; + + e->scen_entry_idx = entry_idx; + LIST_ADD(&e->l_entry, &prof->entries); + } else { + if (do_chg_action) { + /* For the action memory info, update the SW's copy of + * exist entry with e's action memory info + */ + ice_free(hw, exist->acts); + exist->acts_cnt = e->acts_cnt; + exist->acts = (struct ice_flow_action *) + ice_calloc(hw, exist->acts_cnt, + sizeof(struct ice_flow_action)); + + if (!exist->acts) { + status = ICE_ERR_NO_MEMORY; + goto out; + } + + ice_memcpy(exist->acts, e->acts, + sizeof(struct ice_flow_action) * e->acts_cnt, + ICE_NONDMA_TO_NONDMA); + + status = ice_acl_prog_act(hw, prof->cfg.scen, acts, + e->acts_cnt, + exist->scen_entry_idx); + if (status) + goto out; + } + + if (do_chg_rng_chk) { + /* In this case, we want to update the range checker + * information of the exist entry + */ + status = ice_flow_acl_union_rng_chk(exist->range_buf, + e->range_buf); + if (status) + goto out; + } + + /* As we don't add the new entry to our SW DB, deallocate its + * memories, and return the exist entry to the caller + */ + ice_dealloc_flow_entry(hw, e); + *(entry) = exist; + } +out: + if (acts) + 
ice_free(hw, acts); + + return status; +} + +/** + * ice_flow_acl_add_scen_entry - Add entry to ACL scenario + * @hw: pointer to the hardware structure + * @prof: pointer to flow profile + * @e: double pointer to the flow entry + */ +static enum ice_status +ice_flow_acl_add_scen_entry(struct ice_hw *hw, struct ice_flow_prof *prof, + struct ice_flow_entry **e) +{ + enum ice_status status; + + ice_acquire_lock(&prof->entries_lock); + status = ice_flow_acl_add_scen_entry_sync(hw, prof, e); + ice_release_lock(&prof->entries_lock); + + return status; +} + +/** * ice_flow_add_entry - Add a flow entry * @hw: pointer to the HW struct * @blk: classification stage @@ -1581,7 +2660,8 @@ ice_flow_add_entry(struct ice_hw *hw, enum ice_block blk, u64 prof_id, struct ice_flow_entry *e = NULL; enum ice_status status = ICE_SUCCESS; - if (acts_cnt && !acts) + /* ACL entries must indicate an action */ + if (blk == ICE_BLK_ACL && (!acts || !acts_cnt)) return ICE_ERR_PARAM; /* No flow entry data is expected for RSS */ @@ -1620,6 +2700,18 @@ ice_flow_add_entry(struct ice_hw *hw, enum ice_block blk, u64 prof_id, case ICE_BLK_RSS: /* RSS will add only one entry per VSI per profile */ break; + case ICE_BLK_ACL: + /* ACL will handle the entry management */ + status = ice_flow_acl_frmt_entry(hw, prof, e, (u8 *)data, acts, + acts_cnt); + if (status) + goto out; + + status = ice_flow_acl_add_scen_entry(hw, prof, &e); + if (status) + goto out; + + break; case ICE_BLK_FD: break; case ICE_BLK_SW: @@ -1651,13 +2743,15 @@ ice_flow_add_entry(struct ice_hw *hw, enum ice_block blk, u64 prof_id, /** * ice_flow_rem_entry - Remove a flow entry * @hw: pointer to the HW struct + * @blk: classification stage * @entry_h: handle to the flow entry to be removed */ -enum ice_status ice_flow_rem_entry(struct ice_hw *hw, u64 entry_h) +enum ice_status ice_flow_rem_entry(struct ice_hw *hw, enum ice_block blk, + u64 entry_h) { struct ice_flow_entry *entry; struct ice_flow_prof *prof; - enum ice_status status; + enum ice_status status = ICE_SUCCESS; if (entry_h == ICE_FLOW_ENTRY_HANDLE_INVAL) return ICE_ERR_PARAM; @@ -1667,9 +2761,11 @@ enum ice_status ice_flow_rem_entry(struct ice_hw *hw, u64 entry_h) /* Retain the pointer to the flow profile as the entry will be freed */ prof = entry->prof; - ice_acquire_lock(&prof->entries_lock); - status = ice_flow_rem_entry_sync(hw, entry); - ice_release_lock(&prof->entries_lock); + if (prof) { + ice_acquire_lock(&prof->entries_lock); + status = ice_flow_rem_entry_sync(hw, blk, entry); + ice_release_lock(&prof->entries_lock); + } return status; } diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h index 4c2067f0c..ec50b85ac 100644 --- a/drivers/net/ice/base/ice_flow.h +++ b/drivers/net/ice/base/ice_flow.h @@ -6,6 +6,7 @@ #define _ICE_FLOW_H_ #include "ice_flex_type.h" +#include "ice_acl.h" #define ICE_IPV4_MAKE_PREFIX_MASK(prefix) ((u32)(~0) << (32 - (prefix))) #define ICE_FLOW_PROF_ID_INVAL 0xfffffffffffffffful #define ICE_FLOW_PROF_ID_BYPASS 0 @@ -308,9 +309,14 @@ struct ice_flow_entry { struct ice_flow_action *acts; /* Flow entry's content */ void *entry; + /* Range buffer (For ACL only) */ + struct ice_aqc_acl_profile_ranges *range_buf; enum ice_flow_priority priority; u16 vsi_handle; u16 entry_sz; + /* Entry index in the ACL's scenario */ + u16 scen_entry_idx; +#define ICE_FLOW_ACL_MAX_NUM_ACT 2 u8 acts_cnt; }; @@ -336,6 +342,7 @@ struct ice_flow_prof { union { /* struct sw_recipe */ + struct ice_acl_scen *scen; /* struct fd */ u32 data; /* Symmetric Hash for RSS */ @@ 
-381,6 +388,7 @@ enum ice_flow_action_type { struct ice_flow_action { enum ice_flow_action_type type; union { + struct ice_acl_act_entry acl_act; u32 dummy; } data; }; @@ -408,7 +416,8 @@ ice_flow_add_entry(struct ice_hw *hw, enum ice_block blk, u64 prof_id, u64 entry_id, u16 vsi, enum ice_flow_priority prio, void *data, struct ice_flow_action *acts, u8 acts_cnt, u64 *entry_h); -enum ice_status ice_flow_rem_entry(struct ice_hw *hw, u64 entry_h); +enum ice_status ice_flow_rem_entry(struct ice_hw *hw, enum ice_block blk, + u64 entry_h); void ice_flow_set_fld(struct ice_flow_seg_info *seg, enum ice_flow_field fld, u16 val_loc, u16 mask_loc, u16 last_loc, bool range); diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h index 89d476482..a89c6572b 100644 --- a/drivers/net/ice/base/ice_type.h +++ b/drivers/net/ice/base/ice_type.h @@ -99,6 +99,7 @@ static inline u32 ice_round_to_num(u32 N, u32 R) #define ICE_HI_DWORD(x) ((u32)((((x) >> 16) >> 16) & 0xFFFFFFFF)) #define ICE_LO_DWORD(x) ((u32)((x) & 0xFFFFFFFF)) #define ICE_HI_WORD(x) ((u16)(((x) >> 16) & 0xFFFF)) +#define ICE_LO_WORD(x) ((u16)((x) & 0xFFFF)) /* debug masks - set these bits in hw->debug_mask to control output */ #define ICE_DBG_TRACE BIT_ULL(0) /* for function-trace only */ @@ -119,6 +120,7 @@ static inline u32 ice_round_to_num(u32 N, u32 R) #define ICE_DBG_PKG BIT_ULL(16) #define ICE_DBG_RES BIT_ULL(17) +#define ICE_DBG_ACL BIT_ULL(18) #define ICE_DBG_AQ_MSG BIT_ULL(24) #define ICE_DBG_AQ_DESC BIT_ULL(25) #define ICE_DBG_AQ_DESC_BUF BIT_ULL(26) @@ -389,6 +391,8 @@ struct ice_hw_common_caps { u8 apm_wol_support; u8 acpi_prog_mthd; u8 proxy_support; + bool nvm_unified_update; +#define ICE_NVM_MGMT_UNIFIED_UPD_SUPPORT BIT(3) }; /* Function specific capabilities */ @@ -879,6 +883,9 @@ struct ice_hw { /* tunneling info */ struct ice_tunnel_table tnl; + struct ice_acl_tbl *acl_tbl; + struct ice_fd_hw_prof **acl_prof; + u16 acl_fltr_cnt[ICE_FLTR_PTYPE_MAX]; /* HW block tables */ struct ice_blk_info blk[ICE_BLK_COUNT]; struct ice_lock fl_profs_locks[ICE_BLK_COUNT]; /* lock fltr profiles */ diff --git a/drivers/net/ice/base/meson.build b/drivers/net/ice/base/meson.build index eff155574..100630c5f 100644 --- a/drivers/net/ice/base/meson.build +++ b/drivers/net/ice/base/meson.build @@ -11,6 +11,8 @@ sources = [ 'ice_flow.c', 'ice_dcb.c', 'ice_fdir.c', + 'ice_acl.c', + 'ice_acl_ctrl.c', ] error_cflags = ['-Wno-unused-value', diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c index 6342b560c..a082a13df 100644 --- a/drivers/net/ice/ice_fdir_filter.c +++ b/drivers/net/ice/ice_fdir_filter.c @@ -584,7 +584,7 @@ ice_fdir_prof_rm(struct ice_pf *pf, enum ice_fltr_ptype ptype, bool is_tunnel) hw_prof->vsi_h[i]); ice_rem_prof_id_flow(hw, ICE_BLK_FD, vsi_num, ptype); - ice_flow_rem_entry(hw, + ice_flow_rem_entry(hw, ICE_BLK_FD, hw_prof->entry_h[i][is_tunnel]); hw_prof->entry_h[i][is_tunnel] = 0; } @@ -876,7 +876,7 @@ ice_fdir_hw_tbl_conf(struct ice_pf *pf, struct ice_vsi *vsi, err_add_entry: vsi_num = ice_get_hw_vsi_num(hw, vsi->idx); ice_rem_prof_id_flow(hw, ICE_BLK_FD, vsi_num, prof_id); - ice_flow_rem_entry(hw, entry_1); + ice_flow_rem_entry(hw, ICE_BLK_FD, entry_1); err_add_prof: ice_flow_rem_prof(hw, ICE_BLK_FD, prof_id); From patchwork Mon Mar 23 07:17:49 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67014 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: 
patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8451CA0563; Mon, 23 Mar 2020 08:19:13 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 3440E1C193; Mon, 23 Mar 2020 08:15:58 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id 136F61C124 for ; Mon, 23 Mar 2020 08:15:30 +0100 (CET) IronPort-SDR: /TwUU6CVveSi7Ss+cHtzDmto852JAIt6x5ZISFu1w3cIZRQVphn9+frAUTPIYGNwYsp3MICnDq Bt0J03Q8ey1A== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:30 -0700 IronPort-SDR: VTSgKY7Wp8cvNwKa1K8Fk3YC2cn0Pg/RkY6y5p5DUHq/TbW3as3S8XNDHP8h30yY84wgDPJxFW BluFBOE8c8DA== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111762" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:28 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:49 +0800 Message-Id: <20200323071759.13075-27-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 26/36] net/ice/base: update copyright date X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Update copyright date to 2020. 
Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_adminq_cmd.h | 2 +- drivers/net/ice/base/ice_alloc.h | 2 +- drivers/net/ice/base/ice_bitops.h | 2 +- drivers/net/ice/base/ice_common.c | 2 +- drivers/net/ice/base/ice_common.h | 2 +- drivers/net/ice/base/ice_controlq.c | 2 +- drivers/net/ice/base/ice_controlq.h | 2 +- drivers/net/ice/base/ice_dcb.c | 2 +- drivers/net/ice/base/ice_dcb.h | 2 +- drivers/net/ice/base/ice_devids.h | 2 +- drivers/net/ice/base/ice_fdir.c | 2 +- drivers/net/ice/base/ice_fdir.h | 2 +- drivers/net/ice/base/ice_flex_pipe.c | 2 +- drivers/net/ice/base/ice_flex_pipe.h | 2 +- drivers/net/ice/base/ice_flex_type.h | 2 +- drivers/net/ice/base/ice_flow.c | 2 +- drivers/net/ice/base/ice_flow.h | 2 +- drivers/net/ice/base/ice_hw_autogen.h | 2 +- drivers/net/ice/base/ice_lan_tx_rx.h | 2 +- drivers/net/ice/base/ice_nvm.c | 2 +- drivers/net/ice/base/ice_nvm.h | 2 +- drivers/net/ice/base/ice_protocol_type.h | 2 +- drivers/net/ice/base/ice_sbq_cmd.h | 2 +- drivers/net/ice/base/ice_sched.c | 2 +- drivers/net/ice/base/ice_sched.h | 2 +- drivers/net/ice/base/ice_status.h | 2 +- drivers/net/ice/base/ice_switch.c | 2 +- drivers/net/ice/base/ice_switch.h | 2 +- drivers/net/ice/base/ice_type.h | 2 +- 29 files changed, 29 insertions(+), 29 deletions(-) diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h index 2c899b90a..9f25d34f5 100644 --- a/drivers/net/ice/base/ice_adminq_cmd.h +++ b/drivers/net/ice/base/ice_adminq_cmd.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #ifndef _ICE_ADMINQ_CMD_H_ diff --git a/drivers/net/ice/base/ice_alloc.h b/drivers/net/ice/base/ice_alloc.h index cf823a2c2..2a192f613 100644 --- a/drivers/net/ice/base/ice_alloc.h +++ b/drivers/net/ice/base/ice_alloc.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #ifndef _ICE_ALLOC_H_ diff --git a/drivers/net/ice/base/ice_bitops.h b/drivers/net/ice/base/ice_bitops.h index 32f64cac0..bcdee63c0 100644 --- a/drivers/net/ice/base/ice_bitops.h +++ b/drivers/net/ice/base/ice_bitops.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #ifndef _ICE_BITOPS_H_ diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c index 192cfdbf7..b526a7738 100644 --- a/drivers/net/ice/base/ice_common.c +++ b/drivers/net/ice/base/ice_common.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #include "ice_common.h" diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h index ccd33b944..c7095e727 100644 --- a/drivers/net/ice/base/ice_common.h +++ b/drivers/net/ice/base/ice_common.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #ifndef _ICE_COMMON_H_ diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c index e7752fca2..0fcf62e20 100644 --- a/drivers/net/ice/base/ice_controlq.c +++ b/drivers/net/ice/base/ice_controlq.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #include "ice_common.h" diff --git a/drivers/net/ice/base/ice_controlq.h b/drivers/net/ice/base/ice_controlq.h index 8b6046547..464a2adb7 100644 --- a/drivers/net/ice/base/ice_controlq.h +++ b/drivers/net/ice/base/ice_controlq.h @@ -1,5 
+1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #ifndef _ICE_CONTROLQ_H_ diff --git a/drivers/net/ice/base/ice_dcb.c b/drivers/net/ice/base/ice_dcb.c index 63c981b8e..b238b966c 100644 --- a/drivers/net/ice/base/ice_dcb.c +++ b/drivers/net/ice/base/ice_dcb.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #include "ice_common.h" diff --git a/drivers/net/ice/base/ice_dcb.h b/drivers/net/ice/base/ice_dcb.h index 9a0968f5b..fda7150c6 100644 --- a/drivers/net/ice/base/ice_dcb.h +++ b/drivers/net/ice/base/ice_dcb.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #ifndef _ICE_DCB_H_ diff --git a/drivers/net/ice/base/ice_devids.h b/drivers/net/ice/base/ice_devids.h index 890a47b24..21cc915a5 100644 --- a/drivers/net/ice/base/ice_devids.h +++ b/drivers/net/ice/base/ice_devids.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #ifndef _ICE_DEVIDS_H_ diff --git a/drivers/net/ice/base/ice_fdir.c b/drivers/net/ice/base/ice_fdir.c index c78782e5a..ae82c4d99 100644 --- a/drivers/net/ice/base/ice_fdir.c +++ b/drivers/net/ice/base/ice_fdir.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #include "ice_common.h" diff --git a/drivers/net/ice/base/ice_fdir.h b/drivers/net/ice/base/ice_fdir.h index ff42d2e34..86b532b73 100644 --- a/drivers/net/ice/base/ice_fdir.h +++ b/drivers/net/ice/base/ice_fdir.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #ifndef _ICE_FDIR_H_ diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c index 077325ad5..fa4e90b08 100644 --- a/drivers/net/ice/base/ice_flex_pipe.c +++ b/drivers/net/ice/base/ice_flex_pipe.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #include "ice_common.h" diff --git a/drivers/net/ice/base/ice_flex_pipe.h b/drivers/net/ice/base/ice_flex_pipe.h index bba66c48a..72225ac4b 100644 --- a/drivers/net/ice/base/ice_flex_pipe.h +++ b/drivers/net/ice/base/ice_flex_pipe.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #ifndef _ICE_FLEX_PIPE_H_ diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h index c122196e9..8f2f6ed32 100644 --- a/drivers/net/ice/base/ice_flex_type.h +++ b/drivers/net/ice/base/ice_flex_type.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #ifndef _ICE_FLEX_TYPE_H_ diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c index f480153f7..cdb4b004f 100644 --- a/drivers/net/ice/base/ice_flow.c +++ b/drivers/net/ice/base/ice_flow.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #include "ice_common.h" diff --git a/drivers/net/ice/base/ice_flow.h b/drivers/net/ice/base/ice_flow.h index ec50b85ac..fd30b1a68 100644 --- a/drivers/net/ice/base/ice_flow.h +++ b/drivers/net/ice/base/ice_flow.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #ifndef _ICE_FLOW_H_ diff --git a/drivers/net/ice/base/ice_hw_autogen.h b/drivers/net/ice/base/ice_hw_autogen.h index 
92d432044..6b92d3ce9 100644 --- a/drivers/net/ice/base/ice_hw_autogen.h +++ b/drivers/net/ice/base/ice_hw_autogen.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ /* Machine-generated file; do not edit */ diff --git a/drivers/net/ice/base/ice_lan_tx_rx.h b/drivers/net/ice/base/ice_lan_tx_rx.h index a97c63cc9..331297462 100644 --- a/drivers/net/ice/base/ice_lan_tx_rx.h +++ b/drivers/net/ice/base/ice_lan_tx_rx.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #ifndef _ICE_LAN_TX_RX_H_ diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c index 1c7050c31..72fe7f7d5 100644 --- a/drivers/net/ice/base/ice_nvm.c +++ b/drivers/net/ice/base/ice_nvm.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #include "ice_common.h" diff --git a/drivers/net/ice/base/ice_nvm.h b/drivers/net/ice/base/ice_nvm.h index 8dbda8242..068356557 100644 --- a/drivers/net/ice/base/ice_nvm.h +++ b/drivers/net/ice/base/ice_nvm.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #ifndef _ICE_NVM_H_ diff --git a/drivers/net/ice/base/ice_protocol_type.h b/drivers/net/ice/base/ice_protocol_type.h index fdcbb2cad..eda71722d 100644 --- a/drivers/net/ice/base/ice_protocol_type.h +++ b/drivers/net/ice/base/ice_protocol_type.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #ifndef _ICE_PROTOCOL_TYPE_H_ diff --git a/drivers/net/ice/base/ice_sbq_cmd.h b/drivers/net/ice/base/ice_sbq_cmd.h index 70a019292..937b76d18 100644 --- a/drivers/net/ice/base/ice_sbq_cmd.h +++ b/drivers/net/ice/base/ice_sbq_cmd.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #ifndef _ICE_SBQ_CMD_H_ diff --git a/drivers/net/ice/base/ice_sched.c b/drivers/net/ice/base/ice_sched.c index 575c2ab58..1dd563e5e 100644 --- a/drivers/net/ice/base/ice_sched.c +++ b/drivers/net/ice/base/ice_sched.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #include "ice_sched.h" diff --git a/drivers/net/ice/base/ice_sched.h b/drivers/net/ice/base/ice_sched.h index 1a8549931..77c22e196 100644 --- a/drivers/net/ice/base/ice_sched.h +++ b/drivers/net/ice/base/ice_sched.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #ifndef _ICE_SCHED_H_ diff --git a/drivers/net/ice/base/ice_status.h b/drivers/net/ice/base/ice_status.h index ac120fa30..9a9984dfa 100644 --- a/drivers/net/ice/base/ice_status.h +++ b/drivers/net/ice/base/ice_status.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #ifndef _ICE_STATUS_H_ diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c index 8e594d99b..29fa9cf7e 100644 --- a/drivers/net/ice/base/ice_switch.c +++ b/drivers/net/ice/base/ice_switch.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #include "ice_switch.h" diff --git a/drivers/net/ice/base/ice_switch.h b/drivers/net/ice/base/ice_switch.h index 598e9c939..c01fa8a6d 100644 --- a/drivers/net/ice/base/ice_switch.h +++ b/drivers/net/ice/base/ice_switch.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - 
* Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #ifndef _ICE_SWITCH_H_ diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h index a89c6572b..127e8224d 100644 --- a/drivers/net/ice/base/ice_type.h +++ b/drivers/net/ice/base/ice_type.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2001-2019 + * Copyright(c) 2001-2020 */ #ifndef _ICE_TYPE_H_ From patchwork Mon Mar 23 07:17:50 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67015 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id B3D1AA0563; Mon, 23 Mar 2020 08:19:26 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 65E6E1C197; Mon, 23 Mar 2020 08:15:59 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id 3AC111C129 for ; Mon, 23 Mar 2020 08:15:33 +0100 (CET) IronPort-SDR: gkiYEmLlGoBkJ5IlJuZ6RK1ub+FEJ8gJeZLi6c4nGQA1Q466zWVssg5tqoiP+mkJAefuskuZt1 CgW0kCqG3WoQ== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:32 -0700 IronPort-SDR: USG8t+Nv505Z6ak8gqXtk5jpPfueR5iYMOQk9ZTp76xy43S8fyHovyulnVivU9GwcK6skSsudS RE9y0po795Tg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111772" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:30 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Haiyue Wang , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:50 +0800 Message-Id: <20200323071759.13075-28-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 27/36] net/ice/base: add the hook to send AdminQ command X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add the hook to send the PF's AdminQ command in another path, like not directly to the firmware. If the AdminQ command is sent through the hook path, it needs to save the AQ error codes from firmware as the last status for admin control queue, so that the AdminQ command function can use it to do exception handling like the buffer size is not enough accorindg to error ENOMEM. And convert explicitly the hook path result to the ice_status type. 
Signed-off-by: Haiyue Wang Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_common.c | 22 ++++++++++++++++++++++ drivers/net/ice/base/ice_type.h | 4 ++++ 2 files changed, 26 insertions(+) diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c index b526a7738..61973286f 100644 --- a/drivers/net/ice/base/ice_common.c +++ b/drivers/net/ice/base/ice_common.c @@ -1315,6 +1315,28 @@ enum ice_status ice_aq_send_cmd(struct ice_hw *hw, struct ice_aq_desc *desc, void *buf, u16 buf_size, struct ice_sq_cd *cd) { + if (hw->aq_send_cmd_fn) { + enum ice_status status = ICE_ERR_NOT_READY; + u16 retval = ICE_AQ_RC_OK; + + ice_acquire_lock(&hw->adminq.sq_lock); + if (!hw->aq_send_cmd_fn(hw->aq_send_cmd_param, desc, + buf, buf_size)) { + retval = LE16_TO_CPU(desc->retval); + /* strip off FW internal code */ + if (retval) + retval &= 0xff; + if (retval == ICE_AQ_RC_OK) + status = ICE_SUCCESS; + else + status = ICE_ERR_AQ_ERROR; + } + + hw->adminq.sq_last_status = (enum ice_aq_err)retval; + ice_release_lock(&hw->adminq.sq_lock); + + return status; + } return ice_sq_send_cmd(hw, &hw->adminq, desc, buf, buf_size, cd); } diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h index 127e8224d..478940225 100644 --- a/drivers/net/ice/base/ice_type.h +++ b/drivers/net/ice/base/ice_type.h @@ -818,6 +818,10 @@ struct ice_hw { /* Control Queue info */ struct ice_ctl_q_info adminq; struct ice_ctl_q_info mailboxq; + /* Additional function to send AdminQ command */ + int (*aq_send_cmd_fn)(void *param, struct ice_aq_desc *desc, + void *buf, u16 buf_size); + void *aq_send_cmd_param; u8 api_branch; /* API branch version */ u8 api_maj_ver; /* API major version */ From patchwork Mon Mar 23 07:17:51 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67016 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 532DBA0563; Mon, 23 Mar 2020 08:19:37 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id A5E201C19F; Mon, 23 Mar 2020 08:16:00 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id C322C1C130 for ; Mon, 23 Mar 2020 08:15:34 +0100 (CET) IronPort-SDR: AZ8TFq9Ba+7M5W7LJBnQDHpUSavv8re0L6N0lxfpqcAPvtvI851kepZWeCMAYIunkWxoAxghOC xMGm1Tnep8pw== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:34 -0700 IronPort-SDR: a432yjE39of2t4Hf4l1Me1021woPTHSLaffk1JwD2vZut3Cw4uonB7Q77IMaKrK2cHop/2cfqH Hzx36h7KZOFQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111781" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:32 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Haiyue Wang , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:51 +0800 Message-Id: <20200323071759.13075-29-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> 
<20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 28/36] net/ice/base: don't access some hardware registers in DCF X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" DCF runs as a VF so it can't access PF registers. And export the filter management list static functions as public for make DCF initialization. Signed-off-by: Haiyue Wang Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_common.c | 7 +++++-- drivers/net/ice/base/ice_common.h | 3 ++- drivers/net/ice/base/ice_flex_pipe.c | 8 ++++++-- drivers/net/ice/base/ice_type.h | 1 + 4 files changed, 14 insertions(+), 5 deletions(-) diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c index 61973286f..0d5a4e3e4 100644 --- a/drivers/net/ice/base/ice_common.c +++ b/drivers/net/ice/base/ice_common.c @@ -463,12 +463,13 @@ ice_aq_set_mac_cfg(struct ice_hw *hw, u16 max_frame_size, struct ice_sq_cd *cd) * ice_init_fltr_mgmt_struct - initializes filter management list and locks * @hw: pointer to the HW struct */ -static enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw) +enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw) { struct ice_switch_info *sw; hw->switch_info = (struct ice_switch_info *) ice_malloc(hw, sizeof(*hw->switch_info)); + sw = hw->switch_info; if (!sw) @@ -483,7 +484,7 @@ static enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw) * ice_cleanup_fltr_mgmt_struct - cleanup filter management list and locks * @hw: pointer to the HW struct */ -static void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw) +void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw) { struct ice_switch_info *sw = hw->switch_info; struct ice_vsi_list_map_info *v_pos_map; @@ -1914,6 +1915,8 @@ ice_parse_caps(struct ice_hw *hw, void *buf, u32 cap_count, dev_p->num_flow_director_fltr); } if (func_p) { + if (hw->dcf_enabled) + break; reg_val = rd32(hw, GLQF_FD_SIZE); val = (reg_val & GLQF_FD_SIZE_FD_GSIZE_M) >> GLQF_FD_SIZE_FD_GSIZE_S; diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h index c7095e727..8f6a33b91 100644 --- a/drivers/net/ice/base/ice_common.h +++ b/drivers/net/ice/base/ice_common.h @@ -19,7 +19,8 @@ enum ice_fw_modes { }; enum ice_status ice_nvm_validate_checksum(struct ice_hw *hw); - +enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw); +void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw); enum ice_status ice_init_hw(struct ice_hw *hw); void ice_deinit_hw(struct ice_hw *hw); enum ice_status diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c index fa4e90b08..dd0c18324 100644 --- a/drivers/net/ice/base/ice_flex_pipe.c +++ b/drivers/net/ice/base/ice_flex_pipe.c @@ -1253,6 +1253,8 @@ static void ice_init_pkg_regs(struct ice_hw *hw) #define ICE_SW_BLK_INP_MASK_L 0xFFFFFFFF #define ICE_SW_BLK_INP_MASK_H 0x0000FFFF #define ICE_SW_BLK_IDX 0 + if (hw->dcf_enabled) + return; /* setup Switch block input mask, which is 48-bits in two parts */ wr32(hw, GL_PREEXT_L2_PMASK0(ICE_SW_BLK_IDX), ICE_SW_BLK_INP_MASK_L); @@ -3602,7 +3604,8 @@ void ice_free_hw_tbls(struct ice_hw *hw) ice_free(hw, r); } ice_destroy_lock(&hw->rss_locks); - ice_shutdown_all_prof_masks(hw); + if (!hw->dcf_enabled) + ice_shutdown_all_prof_masks(hw); ice_memset(hw->blk, 0, sizeof(hw->blk), ICE_NONDMA_MEM); } @@ -3682,7 +3685,8 
@@ enum ice_status ice_init_hw_tbls(struct ice_hw *hw) ice_init_lock(&hw->rss_locks); INIT_LIST_HEAD(&hw->rss_list_head); - ice_init_all_prof_masks(hw); + if (!hw->dcf_enabled) + ice_init_all_prof_masks(hw); for (i = 0; i < ICE_BLK_COUNT; i++) { struct ice_prof_redir *prof_redir = &hw->blk[i].prof_redir; struct ice_prof_tcam *prof = &hw->blk[i].prof; diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h index 478940225..c14188f4c 100644 --- a/drivers/net/ice/base/ice_type.h +++ b/drivers/net/ice/base/ice_type.h @@ -822,6 +822,7 @@ struct ice_hw { int (*aq_send_cmd_fn)(void *param, struct ice_aq_desc *desc, void *buf, u16 buf_size); void *aq_send_cmd_param; + u8 dcf_enabled; /* Device Config Function */ u8 api_branch; /* API branch version */ u8 api_maj_ver; /* API major version */ From patchwork Mon Mar 23 07:17:52 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67017 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id D7964A0563; Mon, 23 Mar 2020 08:19:47 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id E43711C1A7; Mon, 23 Mar 2020 08:16:01 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id D83761C12B for ; Mon, 23 Mar 2020 08:15:36 +0100 (CET) IronPort-SDR: Go2QkNrCwOdlBcSCMt7N5helF4z1JzL3fn8jzWofeHiR9S0iMr+9u3lsERyDos9piZgpYXGtF/ uFpRGePoq9bA== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:36 -0700 IronPort-SDR: pfvZmE/2s6OvlWksMy5lP1DjnVZy8AeAIqQ/AXxbMjF9qW3Oa6tZjV6LZI7+LihdveHO6D34QW BB0HiglkMhMg== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111789" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:34 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Jacob Keller , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:52 +0800 Message-Id: <20200323071759.13075-30-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 29/36] net/ice/base: move functions from common to NVM module X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The ice_get_pfa_module_tlv and ice_read_pba_string functions primarily deal with reading from the Shadow RAM portion of the NVM contents. As these functions are NVM focused, move them into the ice_nvm.c file. 
Signed-off-by: Jacob Keller Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_common.c | 66 --------------------------------------- drivers/net/ice/base/ice_common.h | 3 -- drivers/net/ice/base/ice_nvm.c | 66 +++++++++++++++++++++++++++++++++++++++ drivers/net/ice/base/ice_nvm.h | 3 ++ 4 files changed, 69 insertions(+), 69 deletions(-) diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c index 0d5a4e3e4..ed4dfb3e3 100644 --- a/drivers/net/ice/base/ice_common.c +++ b/drivers/net/ice/base/ice_common.c @@ -917,72 +917,6 @@ enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req) } /** - * ice_get_pfa_module_tlv - Reads sub module TLV from NVM PFA - * @hw: pointer to hardware structure - * @module_tlv: pointer to module TLV to return - * @module_tlv_len: pointer to module TLV length to return - * @module_type: module type requested - * - * Finds the requested sub module TLV type from the Preserved Field - * Area (PFA) and returns the TLV pointer and length. The caller can - * use these to read the variable length TLV value. - */ -enum ice_status -ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len, - u16 module_type) -{ - enum ice_status status; - u16 pfa_len, pfa_ptr; - u16 next_tlv; - - status = ice_read_sr_word(hw, ICE_SR_PFA_PTR, &pfa_ptr); - if (status != ICE_SUCCESS) { - ice_debug(hw, ICE_DBG_INIT, "Preserved Field Array pointer.\n"); - return status; - } - status = ice_read_sr_word(hw, pfa_ptr, &pfa_len); - if (status != ICE_SUCCESS) { - ice_debug(hw, ICE_DBG_INIT, "Failed to read PFA length.\n"); - return status; - } - /* Starting with first TLV after PFA length, iterate through the list - * of TLVs to find the requested one. - */ - next_tlv = pfa_ptr + 1; - while (next_tlv < pfa_ptr + pfa_len) { - u16 tlv_sub_module_type; - u16 tlv_len; - - /* Read TLV type */ - status = ice_read_sr_word(hw, next_tlv, &tlv_sub_module_type); - if (status != ICE_SUCCESS) { - ice_debug(hw, ICE_DBG_INIT, "Failed to read TLV type.\n"); - break; - } - /* Read TLV length */ - status = ice_read_sr_word(hw, next_tlv + 1, &tlv_len); - if (status != ICE_SUCCESS) { - ice_debug(hw, ICE_DBG_INIT, "Failed to read TLV length.\n"); - break; - } - if (tlv_sub_module_type == module_type) { - if (tlv_len) { - *module_tlv = next_tlv; - *module_tlv_len = tlv_len; - return ICE_SUCCESS; - } - return ICE_ERR_INVAL_SIZE; - } - /* Check next TLV, i.e. 
current TLV pointer + length + 2 words - * (for current TLV's type and length) - */ - next_tlv = next_tlv + tlv_len + 2; - } - /* Module does not exist */ - return ICE_ERR_DOES_NOT_EXIST; -} - -/** * ice_copy_rxq_ctx_to_hw * @hw: pointer to the hardware structure * @ice_rxq_ctx: pointer to the rxq context diff --git a/drivers/net/ice/base/ice_common.h b/drivers/net/ice/base/ice_common.h index 8f6a33b91..9ca1f75e9 100644 --- a/drivers/net/ice/base/ice_common.h +++ b/drivers/net/ice/base/ice_common.h @@ -23,9 +23,6 @@ enum ice_status ice_init_fltr_mgmt_struct(struct ice_hw *hw); void ice_cleanup_fltr_mgmt_struct(struct ice_hw *hw); enum ice_status ice_init_hw(struct ice_hw *hw); void ice_deinit_hw(struct ice_hw *hw); -enum ice_status -ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len, - u16 module_type); enum ice_status ice_check_reset(struct ice_hw *hw); enum ice_status ice_reset(struct ice_hw *hw, enum ice_reset_req req); diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c index 72fe7f7d5..51915cb7c 100644 --- a/drivers/net/ice/base/ice_nvm.c +++ b/drivers/net/ice/base/ice_nvm.c @@ -235,6 +235,72 @@ enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data) } /** + * ice_get_pfa_module_tlv - Reads sub module TLV from NVM PFA + * @hw: pointer to hardware structure + * @module_tlv: pointer to module TLV to return + * @module_tlv_len: pointer to module TLV length to return + * @module_type: module type requested + * + * Finds the requested sub module TLV type from the Preserved Field + * Area (PFA) and returns the TLV pointer and length. The caller can + * use these to read the variable length TLV value. + */ +enum ice_status +ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len, + u16 module_type) +{ + enum ice_status status; + u16 pfa_len, pfa_ptr; + u16 next_tlv; + + status = ice_read_sr_word(hw, ICE_SR_PFA_PTR, &pfa_ptr); + if (status != ICE_SUCCESS) { + ice_debug(hw, ICE_DBG_INIT, "Preserved Field Array pointer.\n"); + return status; + } + status = ice_read_sr_word(hw, pfa_ptr, &pfa_len); + if (status != ICE_SUCCESS) { + ice_debug(hw, ICE_DBG_INIT, "Failed to read PFA length.\n"); + return status; + } + /* Starting with first TLV after PFA length, iterate through the list + * of TLVs to find the requested one. + */ + next_tlv = pfa_ptr + 1; + while (next_tlv < pfa_ptr + pfa_len) { + u16 tlv_sub_module_type; + u16 tlv_len; + + /* Read TLV type */ + status = ice_read_sr_word(hw, next_tlv, &tlv_sub_module_type); + if (status != ICE_SUCCESS) { + ice_debug(hw, ICE_DBG_INIT, "Failed to read TLV type.\n"); + break; + } + /* Read TLV length */ + status = ice_read_sr_word(hw, next_tlv + 1, &tlv_len); + if (status != ICE_SUCCESS) { + ice_debug(hw, ICE_DBG_INIT, "Failed to read TLV length.\n"); + break; + } + if (tlv_sub_module_type == module_type) { + if (tlv_len) { + *module_tlv = next_tlv; + *module_tlv_len = tlv_len; + return ICE_SUCCESS; + } + return ICE_ERR_INVAL_SIZE; + } + /* Check next TLV, i.e. 
current TLV pointer + length + 2 words + * (for current TLV's type and length) + */ + next_tlv = next_tlv + tlv_len + 2; + } + /* Module does not exist */ + return ICE_ERR_DOES_NOT_EXIST; +} + +/** * ice_get_orom_ver_info - Read Option ROM version information * @hw: pointer to the HW struct * diff --git a/drivers/net/ice/base/ice_nvm.h b/drivers/net/ice/base/ice_nvm.h index 068356557..e5f8888e3 100644 --- a/drivers/net/ice/base/ice_nvm.h +++ b/drivers/net/ice/base/ice_nvm.h @@ -87,6 +87,9 @@ ice_handle_nvm_access(struct ice_hw *hw, struct ice_nvm_access_cmd *cmd, enum ice_status ice_read_flat_nvm(struct ice_hw *hw, u32 offset, u32 *length, u8 *data, bool read_shadow_ram); +enum ice_status +ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len, + u16 module_type); enum ice_status ice_init_nvm(struct ice_hw *hw); enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data); enum ice_status From patchwork Mon Mar 23 07:17:53 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67018 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0376EA0563; Mon, 23 Mar 2020 08:19:58 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 604401C1AE; Mon, 23 Mar 2020 08:16:03 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id 05EC31C131 for ; Mon, 23 Mar 2020 08:15:38 +0100 (CET) IronPort-SDR: dJdabvfd4w542FxFo0YOpnHwXi1HTESbbkwMqXbluhHM0Rwc4QpfH9kWSUdhvyVh79xuwGRq56 u5SWKhkBzriA== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:38 -0700 IronPort-SDR: J9l648hKdmBqsuArjTR1AHeyRLUOwdieNb55eV/QisP8+k61s5Y74v6q0r+Xcn6UHcULpq3nBV tVHQvUt4h9MQ== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111796" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:36 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Jacob Keller , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:53 +0800 Message-Id: <20200323071759.13075-31-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 30/36] net/ice/base: discover and store size of available flash X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" When reading from the NVM using a flat address, it is useful to know the upper bound on the size of the flash contents. This value is not stored within the NVM. We can determine the size by performing a bisection between upper and lower bounds. It is known that the size cannot exceed 16 MB (offset of 0xFFFFFF). Use a while loop to bisect the upper and lower bounds by reading one byte at a time. On a failed read, lower the maximum bound. 
On a successful read, increase the lower bound. Save this as the flash_size in the ice_nvm_info structure that contains data related to the NVM. Signed-off-by: Jacob Keller Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_nvm.c | 61 +++++++++++++++++++++++++++++++++++++++++ drivers/net/ice/base/ice_type.h | 1 + 2 files changed, 62 insertions(+) diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c index 51915cb7c..1bbd6e209 100644 --- a/drivers/net/ice/base/ice_nvm.c +++ b/drivers/net/ice/base/ice_nvm.c @@ -357,6 +357,60 @@ static enum ice_status ice_get_orom_ver_info(struct ice_hw *hw) } /** + * ice_discover_flash_size - Discover the available flash size. + * @hw: pointer to the HW struct + * + * The device flash could be up to 16MB in size. However, it is possible that + * the actual size is smaller. Use bisection to determine the accessible size + * of flash memory. + */ +static enum ice_status ice_discover_flash_size(struct ice_hw *hw) +{ + u32 min_size = 0, max_size = ICE_AQC_NVM_MAX_OFFSET + 1; + enum ice_status status; + + ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); + + status = ice_acquire_nvm(hw, ICE_RES_READ); + if (status) + return status; + + while ((max_size - min_size) > 1) { + u32 offset = (max_size + min_size) / 2; + u32 len = 1; + u8 data; + + status = ice_read_flat_nvm(hw, offset, &len, &data, false); + if (status == ICE_ERR_AQ_ERROR && + hw->adminq.sq_last_status == ICE_AQ_RC_EINVAL) { + ice_debug(hw, ICE_DBG_NVM, + "%s: New upper bound of %u bytes\n", + __func__, offset); + status = ICE_SUCCESS; + max_size = offset; + } else if (!status) { + ice_debug(hw, ICE_DBG_NVM, + "%s: New lower bound of %u bytes\n", + __func__, offset); + min_size = offset; + } else { + /* an unexpected error occurred */ + goto err_read_flat_nvm; + } + } + + ice_debug(hw, ICE_DBG_NVM, + "Predicted flash size is %u bytes\n", max_size); + + hw->nvm.flash_size = max_size; + +err_read_flat_nvm: + ice_release_nvm(hw); + + return status; +} + +/** * ice_init_nvm - initializes NVM setting * @hw: pointer to the HW struct * @@ -416,6 +470,13 @@ enum ice_status ice_init_nvm(struct ice_hw *hw) nvm->eetrack = (eetrack_hi << 16) | eetrack_lo; + status = ice_discover_flash_size(hw); + if (status) { + ice_debug(hw, ICE_DBG_NVM, + "NVM init error: failed to discover flash size.\n"); + return status; + } + switch (hw->device_id) { /* the following devices do not have boot_cfg_tlv yet */ case ICE_DEV_ID_E822C_BACKPLANE: diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h index c14188f4c..9ee0d405f 100644 --- a/drivers/net/ice/base/ice_type.h +++ b/drivers/net/ice/base/ice_type.h @@ -492,6 +492,7 @@ struct ice_nvm_info { struct ice_orom_info orom; /* Option ROM version info */ u32 eetrack; /* NVM data version */ u16 sr_words; /* Shadow RAM size in words */ + u32 flash_size; /* Size of available flash in bytes */ u8 major_ver; /* major version of dev starter */ u8 minor_ver; /* minor version of dev starter */ u8 blank_nvm_mode; /* is NVM empty (no FW present)*/ From patchwork Mon Mar 23 07:17:54 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67019 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2B6BDA0563; Mon, 23 Mar 2020 08:20:09 +0100 (CET) 
Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id B02291C1B4; Mon, 23 Mar 2020 08:16:04 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id 87C491C0B8 for ; Mon, 23 Mar 2020 08:15:41 +0100 (CET) IronPort-SDR: s+bE+tkfteWxh/mPt/BuiRbw0I+mHK2gyJd2zmERS+E2yanScITuWGnacox5j5QKxk+HbaVmRw VYHkcTHsDJfA== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:41 -0700 IronPort-SDR: 9zVlqj3ynBfV/NYscCnBv+jCrtRy7TYNSDT0YORyMzwIVN24kfu2P7GHmON9lk3PMJKWPBfrDZ +z/KccmzjSaw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111805" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:38 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Victor Raj , Dan Nowlin , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:54 +0800 Message-Id: <20200323071759.13075-32-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 31/36] net/ice/base: check DDP package compatibility X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Check the OS and NVM package versions before downloading the package. If the OS package version is not compatible with NVM then return an appropriate error. Split the 32-byte segment name into a 28-byte segment name and a 4-byte Track-ID. Older packages will still work with this change because no package has a name that will take up more than 28 bytes; in this case the Track-ID will be 0. Note that the driver will store the segment name as 32-bytes in the ice_hw structure, in order to normalize the length of the various package name strings that it uses. Also add section ID and structure for the segment metadata section. 
Signed-off-by: Victor Raj Signed-off-by: Dan Nowlin Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_adminq_cmd.h | 4 +- drivers/net/ice/base/ice_flex_pipe.c | 114 ++++++++++++++++++++++++++-------- drivers/net/ice/base/ice_flex_type.h | 8 +-- drivers/net/ice/base/ice_status.h | 1 + drivers/net/ice/base/ice_type.h | 1 + 5 files changed, 98 insertions(+), 30 deletions(-) diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h index 9f25d34f5..d4c899dea 100644 --- a/drivers/net/ice/base/ice_adminq_cmd.h +++ b/drivers/net/ice/base/ice_adminq_cmd.h @@ -2590,10 +2590,12 @@ struct ice_pkg_ver { }; #define ICE_PKG_NAME_SIZE 32 +#define ICE_SEG_NAME_SIZE 28 struct ice_aqc_get_pkg_info { struct ice_pkg_ver ver; - char name[ICE_PKG_NAME_SIZE]; + char name[ICE_SEG_NAME_SIZE]; + __le32 track_id; u8 is_in_nvm; u8 is_active; u8 is_active_at_boot; diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c index dd0c18324..851f0273b 100644 --- a/drivers/net/ice/base/ice_flex_pipe.c +++ b/drivers/net/ice/base/ice_flex_pipe.c @@ -876,8 +876,9 @@ ice_find_seg_in_pkg(struct ice_hw *hw, u32 seg_type, ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); ice_debug(hw, ICE_DBG_PKG, "Package format version: %d.%d.%d.%d\n", - pkg_hdr->format_ver.major, pkg_hdr->format_ver.minor, - pkg_hdr->format_ver.update, pkg_hdr->format_ver.draft); + pkg_hdr->pkg_format_ver.major, pkg_hdr->pkg_format_ver.minor, + pkg_hdr->pkg_format_ver.update, + pkg_hdr->pkg_format_ver.draft); /* Search all package segments for the requested segment type */ for (i = 0; i < LE32_TO_CPU(pkg_hdr->seg_count); i++) { @@ -1048,13 +1049,15 @@ ice_download_pkg(struct ice_hw *hw, struct ice_seg *ice_seg) struct ice_buf_table *ice_buf_tbl; ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); - ice_debug(hw, ICE_DBG_PKG, "Segment version: %d.%d.%d.%d\n", - ice_seg->hdr.seg_ver.major, ice_seg->hdr.seg_ver.minor, - ice_seg->hdr.seg_ver.update, ice_seg->hdr.seg_ver.draft); + ice_debug(hw, ICE_DBG_PKG, "Segment format version: %d.%d.%d.%d\n", + ice_seg->hdr.seg_format_ver.major, + ice_seg->hdr.seg_format_ver.minor, + ice_seg->hdr.seg_format_ver.update, + ice_seg->hdr.seg_format_ver.draft); ice_debug(hw, ICE_DBG_PKG, "Seg: type 0x%X, size %d, name %s\n", LE32_TO_CPU(ice_seg->hdr.seg_type), - LE32_TO_CPU(ice_seg->hdr.seg_size), ice_seg->hdr.seg_name); + LE32_TO_CPU(ice_seg->hdr.seg_size), ice_seg->hdr.seg_id); ice_buf_tbl = ice_find_buf_table(ice_seg); @@ -1101,14 +1104,16 @@ ice_init_pkg_info(struct ice_hw *hw, struct ice_pkg_hdr *pkg_hdr) seg_hdr = ice_find_seg_in_pkg(hw, SEGMENT_TYPE_ICE, pkg_hdr); if (seg_hdr) { - hw->ice_pkg_ver = seg_hdr->seg_ver; - ice_memcpy(hw->ice_pkg_name, seg_hdr->seg_name, + hw->ice_pkg_ver = seg_hdr->seg_format_ver; + ice_memcpy(hw->ice_pkg_name, seg_hdr->seg_id, sizeof(hw->ice_pkg_name), ICE_NONDMA_TO_NONDMA); - ice_debug(hw, ICE_DBG_PKG, "Ice Pkg: %d.%d.%d.%d, %s\n", - seg_hdr->seg_ver.major, seg_hdr->seg_ver.minor, - seg_hdr->seg_ver.update, seg_hdr->seg_ver.draft, - seg_hdr->seg_name); + ice_debug(hw, ICE_DBG_PKG, "Ice Seg: %d.%d.%d.%d, %s\n", + seg_hdr->seg_format_ver.major, + seg_hdr->seg_format_ver.minor, + seg_hdr->seg_format_ver.update, + seg_hdr->seg_format_ver.draft, + seg_hdr->seg_id); } else { ice_debug(hw, ICE_DBG_INIT, "Did not find ice segment in driver package\n"); @@ -1150,9 +1155,11 @@ static enum ice_status ice_get_pkg_info(struct ice_hw *hw) if (pkg_info->pkg_info[i].is_active) { flags[place++] = 'A'; 
hw->active_pkg_ver = pkg_info->pkg_info[i].ver; + hw->active_track_id = + LE32_TO_CPU(pkg_info->pkg_info[i].track_id); ice_memcpy(hw->active_pkg_name, pkg_info->pkg_info[i].name, - sizeof(hw->active_pkg_name), + sizeof(pkg_info->pkg_info[i].name), ICE_NONDMA_TO_NONDMA); hw->active_pkg_in_nvm = pkg_info->pkg_info[i].is_in_nvm; } @@ -1193,10 +1200,10 @@ static enum ice_status ice_verify_pkg(struct ice_pkg_hdr *pkg, u32 len) if (len < sizeof(*pkg)) return ICE_ERR_BUF_TOO_SHORT; - if (pkg->format_ver.major != ICE_PKG_FMT_VER_MAJ || - pkg->format_ver.minor != ICE_PKG_FMT_VER_MNR || - pkg->format_ver.update != ICE_PKG_FMT_VER_UPD || - pkg->format_ver.draft != ICE_PKG_FMT_VER_DFT) + if (pkg->pkg_format_ver.major != ICE_PKG_FMT_VER_MAJ || + pkg->pkg_format_ver.minor != ICE_PKG_FMT_VER_MNR || + pkg->pkg_format_ver.update != ICE_PKG_FMT_VER_UPD || + pkg->pkg_format_ver.draft != ICE_PKG_FMT_VER_DFT) return ICE_ERR_CFG; /* pkg must have at least one segment */ @@ -1280,6 +1287,70 @@ static enum ice_status ice_chk_pkg_version(struct ice_pkg_ver *pkg_ver) } /** + * ice_chk_pkg_compat + * @hw: pointer to the hardware structure + * @ospkg: pointer to the package hdr + * @seg: pointer to the package segment hdr + * + * This function checks the package version compatibility with driver and NVM + */ +static enum ice_status +ice_chk_pkg_compat(struct ice_hw *hw, struct ice_pkg_hdr *ospkg, + struct ice_seg **seg) +{ + struct ice_aqc_get_pkg_info_resp *pkg; + enum ice_status status; + u16 size; + u32 i; + + ice_debug(hw, ICE_DBG_TRACE, "%s\n", __func__); + + /* Check package version compatibility */ + status = ice_chk_pkg_version(&hw->pkg_ver); + if (status) { + ice_debug(hw, ICE_DBG_INIT, "Package version check failed.\n"); + return status; + } + + /* find ICE segment in given package */ + *seg = (struct ice_seg *)ice_find_seg_in_pkg(hw, SEGMENT_TYPE_ICE, + ospkg); + if (!*seg) { + ice_debug(hw, ICE_DBG_INIT, "no ice segment in package.\n"); + return ICE_ERR_CFG; + } + + /* Check if FW is compatible with the OS package */ + size = ice_struct_size(pkg, pkg_info, ICE_PKG_CNT - 1); + pkg = (struct ice_aqc_get_pkg_info_resp *)ice_malloc(hw, size); + if (!pkg) + return ICE_ERR_NO_MEMORY; + + status = ice_aq_get_pkg_info_list(hw, pkg, size, NULL); + if (status) + goto fw_ddp_compat_free_alloc; + + for (i = 0; i < LE32_TO_CPU(pkg->count); i++) { + /* loop till we find the NVM package */ + if (!pkg->pkg_info[i].is_in_nvm) + continue; + if ((*seg)->hdr.seg_format_ver.major != + pkg->pkg_info[i].ver.major || + (*seg)->hdr.seg_format_ver.minor > + pkg->pkg_info[i].ver.minor) { + status = ICE_ERR_FW_DDP_MISMATCH; + ice_debug(hw, ICE_DBG_INIT, + "OS package is not compatible with NVM.\n"); + } + /* done processing NVM package so break */ + break; + } +fw_ddp_compat_free_alloc: + ice_free(hw, pkg); + return status; +} + +/** * ice_init_pkg - initialize/download package * @hw: pointer to the hardware structure * @buf: pointer to the package buffer @@ -1329,17 +1400,10 @@ enum ice_status ice_init_pkg(struct ice_hw *hw, u8 *buf, u32 len) /* before downloading the package, check package version for * compatibility with driver */ - status = ice_chk_pkg_version(&hw->pkg_ver); + status = ice_chk_pkg_compat(hw, pkg, &seg); if (status) return status; - /* find segment in given package */ - seg = (struct ice_seg *)ice_find_seg_in_pkg(hw, SEGMENT_TYPE_ICE, pkg); - if (!seg) { - ice_debug(hw, ICE_DBG_INIT, "no ice segment in package.\n"); - return ICE_ERR_CFG; - } - /* initialize package hints and then download package */ 
ice_init_pkg_hints(hw, seg); status = ice_download_pkg(hw, seg); diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h index 8f2f6ed32..2c5860887 100644 --- a/drivers/net/ice/base/ice_flex_type.h +++ b/drivers/net/ice/base/ice_flex_type.h @@ -25,7 +25,7 @@ struct ice_fv { /* Package and segment headers and tables */ struct ice_pkg_hdr { - struct ice_pkg_ver format_ver; + struct ice_pkg_ver pkg_format_ver; __le32 seg_count; __le32 seg_offset[1]; }; @@ -35,9 +35,9 @@ struct ice_generic_seg_hdr { #define SEGMENT_TYPE_METADATA 0x00000001 #define SEGMENT_TYPE_ICE 0x00000010 __le32 seg_type; - struct ice_pkg_ver seg_ver; + struct ice_pkg_ver seg_format_ver; __le32 seg_size; - char seg_name[ICE_PKG_NAME_SIZE]; + char seg_id[ICE_PKG_NAME_SIZE]; }; /* ice specific segment */ @@ -80,7 +80,7 @@ struct ice_buf_table { struct ice_global_metadata_seg { struct ice_generic_seg_hdr hdr; struct ice_pkg_ver pkg_ver; - __le32 track_id; + __le32 rsvd; char pkg_name[ICE_PKG_NAME_SIZE]; }; diff --git a/drivers/net/ice/base/ice_status.h b/drivers/net/ice/base/ice_status.h index 9a9984dfa..7bcccd39a 100644 --- a/drivers/net/ice/base/ice_status.h +++ b/drivers/net/ice/base/ice_status.h @@ -28,6 +28,7 @@ enum ice_status { ICE_ERR_MAX_LIMIT = -17, ICE_ERR_RESET_ONGOING = -18, ICE_ERR_HW_TABLE = -19, + ICE_ERR_FW_DDP_MISMATCH = -20, /* NVM specific error codes: Range -50..-59 */ ICE_ERR_NVM = -50, diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h index 9ee0d405f..394867f35 100644 --- a/drivers/net/ice/base/ice_type.h +++ b/drivers/net/ice/base/ice_type.h @@ -866,6 +866,7 @@ struct ice_hw { /* Active package version (currently active) */ struct ice_pkg_ver active_pkg_ver; + u32 active_track_id; u8 active_pkg_name[ICE_PKG_NAME_SIZE]; u8 active_pkg_in_nvm; From patchwork Mon Mar 23 07:17:55 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67020 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id F228FA0563; Mon, 23 Mar 2020 08:20:21 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 225D41C1BE; Mon, 23 Mar 2020 08:16:06 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id 1961D1C0D9; Mon, 23 Mar 2020 08:15:43 +0100 (CET) IronPort-SDR: XvhvilBHZ47aBL94w2ML+b1tGuuPFteh+hcZwdtyHCER2CIwEF2/pqw6gJ4b5LKsTuCpT7XHl9 V+jwG+z/slOA== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:43 -0700 IronPort-SDR: Xp8AwD9W5tt7+GcEHeRBONlYHFFCZ2QAVE8JdKTKnKcaH1dvXv9/zZaJq0ulrSLz4gFBTSPD9E oSGVVIijnFfw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111829" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:41 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , stable@dpdk.org, Jesse Brandeburg , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:55 +0800 Message-Id: <20200323071759.13075-33-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> 
References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 32/36] net/ice/base: fix MAC write command X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" The manage MAC write command was implemented in an overly complex way that actually didn't work, as it wasn't symmetric to the manage MAC read command, and was feeding bytes out of order to the firmware. Fix the implementation by just using a simple array to represent the MAC address when it is being written via firmware command. Fixes: a90fae1d0755 ("net/ice/base: add admin queue structures and commands") Cc: stable@dpdk.org Signed-off-by: Jesse Brandeburg Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_adminq_cmd.h | 10 ++++------ drivers/net/ice/base/ice_common.c | 5 +---- 2 files changed, 5 insertions(+), 10 deletions(-) diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h index d4c899dea..3344481d6 100644 --- a/drivers/net/ice/base/ice_adminq_cmd.h +++ b/drivers/net/ice/base/ice_adminq_cmd.h @@ -158,13 +158,11 @@ struct ice_aqc_manage_mac_write { #define ICE_AQC_MAN_MAC_WR_MC_MAG_EN BIT(0) #define ICE_AQC_MAN_MAC_WR_WOL_LAA_PFR_KEEP BIT(1) #define ICE_AQC_MAN_MAC_WR_S 6 -#define ICE_AQC_MAN_MAC_WR_M (3 << ICE_AQC_MAN_MAC_WR_S) +#define ICE_AQC_MAN_MAC_WR_M MAKEMASK(3, ICE_AQC_MAN_MAC_WR_S) #define ICE_AQC_MAN_MAC_UPDATE_LAA 0 -#define ICE_AQC_MAN_MAC_UPDATE_LAA_WOL (BIT(0) << ICE_AQC_MAN_MAC_WR_S) - /* High 16 bits of MAC address in big endian order */ - __be16 sah; - /* Low 32 bits of MAC address in big endian order */ - __be32 sal; +#define ICE_AQC_MAN_MAC_UPDATE_LAA_WOL BIT(ICE_AQC_MAN_MAC_WR_S) + /* byte stream in network order */ + u8 mac_addr[ETH_ALEN]; __le32 addr_high; __le32 addr_low; }; diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c index ed4dfb3e3..3fdc93ce9 100644 --- a/drivers/net/ice/base/ice_common.c +++ b/drivers/net/ice/base/ice_common.c @@ -2077,10 +2077,7 @@ ice_aq_manage_mac_write(struct ice_hw *hw, const u8 *mac_addr, u8 flags, ice_fill_dflt_direct_cmd_desc(&desc, ice_aqc_opc_manage_mac_write); cmd->flags = flags; - - /* Prep values for flags, sah, sal */ - cmd->sah = HTONS(*((const u16 *)mac_addr)); - cmd->sal = HTONL(*((const u32 *)(mac_addr + 2))); + ice_memcpy(cmd->mac_addr, mac_addr, ETH_ALEN, ICE_NONDMA_TO_DMA); return ice_aq_send_cmd(hw, &desc, NULL, 0, cd); } From patchwork Mon Mar 23 07:17:56 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67021 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8A5EBA0563; Mon, 23 Mar 2020 08:20:32 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 8D2BD1C1C4; Mon, 23 Mar 2020 08:16:07 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id 0A26C1C0D9 for ; Mon, 23 Mar 2020 08:15:45 +0100 (CET) IronPort-SDR: GHud6fpJOM6f7yDnAIJgRj8rgCG5ZaIyxBanek8Ol/ZufiPuFfKu6AT17w/lUk3cTKsRytAL1E okiwMf5ejCKQ== X-Amp-Result: SKIPPED(no attachment in 
message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:45 -0700 IronPort-SDR: TZM2BID9wzfbOWUknKeGsisCnpRCsBdJV+RkuNFCa8zDnV5AD4iTofVVJYPgyQiTy2B2QUJ6lH Dua257Ookv3w== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111838" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:43 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Tony Nguyen , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:56 +0800 Message-Id: <20200323071759.13075-34-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 33/36] net/ice/base: misc cleanups for Flow Director X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Cleanup some things found while doing code review: - Remove unnececcary initializations, parenthesis, and braces - Fix a couple of function headers Signed-off-by: Tony Nguyen Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_fdir.c | 42 ++++++++++++++++++------------------ drivers/net/ice/base/ice_flex_pipe.c | 2 +- drivers/net/ice/base/ice_flow.c | 2 +- 3 files changed, 23 insertions(+), 23 deletions(-) diff --git a/drivers/net/ice/base/ice_fdir.c b/drivers/net/ice/base/ice_fdir.c index ae82c4d99..6dc8d54da 100644 --- a/drivers/net/ice/base/ice_fdir.c +++ b/drivers/net/ice/base/ice_fdir.c @@ -916,7 +916,7 @@ bool ice_fdir_has_frag(enum ice_fltr_ptype flow) struct ice_fdir_fltr * ice_fdir_find_fltr_by_idx(struct ice_hw *hw, u32 fltr_idx) { - struct ice_fdir_fltr *rule = NULL; + struct ice_fdir_fltr *rule; LIST_FOR_EACH_ENTRY(rule, &hw->fdir_list_head, ice_fdir_fltr, fltr_node) { @@ -965,7 +965,7 @@ ice_fdir_update_cntrs(struct ice_hw *hw, enum ice_fltr_ptype flow, { int incr; - incr = (add) ? 1 : -1; + incr = add ? 
1 : -1; hw->fdir_active_fltr += incr; if (flow == ICE_FLTR_PTYPE_NONF_NONE || flow >= ICE_FLTR_PTYPE_MAX) { ice_debug(hw, ICE_DBG_SW, "Unknown filter type %d\n", flow); @@ -990,7 +990,7 @@ static int ice_cmp_ipv6_addr(__be32 *a, __be32 *b) } /** - * ice_fdir_comp_ipv6_rules - compare 2 filters + * ice_fdir_comp_rules - compare 2 filters * @a: a Flow Director filter data structure * @b: a Flow Director filter data structure * @v6: bool true if v6 filter @@ -1053,30 +1053,30 @@ ice_fdir_comp_rules(struct ice_fdir_fltr *a, struct ice_fdir_fltr *b, bool v6) */ bool ice_fdir_is_dup_fltr(struct ice_hw *hw, struct ice_fdir_fltr *input) { - enum ice_fltr_ptype flow_type; struct ice_fdir_fltr *rule; bool ret = false; - rule = NULL; - LIST_FOR_EACH_ENTRY(rule, &hw->fdir_list_head, ice_fdir_fltr, fltr_node) { - if (rule->flow_type == input->flow_type) { - flow_type = input->flow_type; - if (flow_type == ICE_FLTR_PTYPE_NONF_IPV4_TCP || - flow_type == ICE_FLTR_PTYPE_NONF_IPV4_UDP || - flow_type == ICE_FLTR_PTYPE_NONF_IPV4_SCTP || - flow_type == ICE_FLTR_PTYPE_NONF_IPV4_OTHER) - ret = ice_fdir_comp_rules(rule, input, false); + enum ice_fltr_ptype flow_type; + + if (rule->flow_type != input->flow_type) + continue; + + flow_type = input->flow_type; + if (flow_type == ICE_FLTR_PTYPE_NONF_IPV4_TCP || + flow_type == ICE_FLTR_PTYPE_NONF_IPV4_UDP || + flow_type == ICE_FLTR_PTYPE_NONF_IPV4_SCTP || + flow_type == ICE_FLTR_PTYPE_NONF_IPV4_OTHER) + ret = ice_fdir_comp_rules(rule, input, false); + else + ret = ice_fdir_comp_rules(rule, input, true); + if (ret) { + if (rule->fltr_id == input->fltr_id && + rule->q_index != input->q_index) + ret = false; else - ret = ice_fdir_comp_rules(rule, input, true); - if (ret) { - if (rule->fltr_id == input->fltr_id && - rule->q_index != input->q_index) - ret = false; - else - break; - } + break; } } diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c index 851f0273b..213aceef5 100644 --- a/drivers/net/ice/base/ice_flex_pipe.c +++ b/drivers/net/ice/base/ice_flex_pipe.c @@ -1953,7 +1953,7 @@ ice_find_free_tunnel_entry(struct ice_hw *hw, enum ice_tunnel_type type, } /** - * ice_get_tunnel_port - retrieve an open tunnel port + * ice_get_open_tunnel_port - retrieve an open tunnel port * @hw: pointer to the HW structure * @type: tunnel type (TNL_ALL will return any open port) * @port: returns open port diff --git a/drivers/net/ice/base/ice_flow.c b/drivers/net/ice/base/ice_flow.c index cdb4b004f..e523b8f45 100644 --- a/drivers/net/ice/base/ice_flow.c +++ b/drivers/net/ice/base/ice_flow.c @@ -2656,8 +2656,8 @@ ice_flow_add_entry(struct ice_hw *hw, enum ice_block blk, u64 prof_id, void *data, struct ice_flow_action *acts, u8 acts_cnt, u64 *entry_h) { - struct ice_flow_prof *prof = NULL; struct ice_flow_entry *e = NULL; + struct ice_flow_prof *prof; enum ice_status status = ICE_SUCCESS; /* ACL entries must indicate an action */ From patchwork Mon Mar 23 07:17:57 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67022 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 29BAFA0563; Mon, 23 Mar 2020 08:20:43 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id BFD541C1C9; Mon, 23 Mar 2020 08:16:08 +0100 (CET) Received: from 
mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id D58001C038 for ; Mon, 23 Mar 2020 08:15:47 +0100 (CET) IronPort-SDR: TdbzZCfLWhZ92r+ynad8FKWCbklRF4iKsQYrlc+0w3N/DwzomRDAx48EVugvjoxoLcMi1RAnez OSZbfbXii4qg== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:47 -0700 IronPort-SDR: RjeF8MzewUAT88MDBtcu8TWXjD4gdz2ENcFbLuHeiBiuDCPCt5QoKUWk1c94JFUY5MX/oq7twI 3zXLPt6ytCLw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111846" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:45 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Wei Zhao , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:57 +0800 Message-Id: <20200323071759.13075-35-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 34/36] net/ice/base: add check to ipv4 next protocol X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" In order to support switch rule for NVGRE packets, it need to check ipv4 next protocol number, if it is 0x2F, which means next payload is NVGRE, we need to use NVGRE format dummy packet. Signed-off-by: Wei Zhao Signed-off-by: Qi Zhang Signed-off-by: Paul M Stillwell Jr --- drivers/net/ice/base/ice_switch.c | 10 +++++++++- 1 file changed, 9 insertions(+), 1 deletion(-) diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c index 29fa9cf7e..a7fb30b05 100644 --- a/drivers/net/ice/base/ice_switch.c +++ b/drivers/net/ice/base/ice_switch.c @@ -10,6 +10,7 @@ #define ICE_ETH_ETHTYPE_OFFSET 12 #define ICE_ETH_VLAN_TCI_OFFSET 14 #define ICE_MAX_VLAN_ID 0xFFF +#define ICE_IPV4_NVGRE_PROTO_ID 0x002F /* Dummy ethernet header needed in the ice_aqc_sw_rules_elem * struct to configure any switch filter rules. 
@@ -5908,6 +5909,7 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, const struct ice_dummy_pkt_offsets **offsets) { bool tcp = false, udp = false, ipv6 = false, vlan = false; + bool gre = false; u16 i; if (tun_type == ICE_SW_TUN_GTP) { @@ -5931,6 +5933,12 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, ipv6 = true; else if (lkups[i].type == ICE_VLAN_OFOS) vlan = true; + else if (lkups[i].type == ICE_IPV4_OFOS && + lkups[i].h_u.ipv4_hdr.protocol == + ICE_IPV4_NVGRE_PROTO_ID && + lkups[i].m_u.ipv4_hdr.protocol == + 0xFF) + gre = true; } if (tun_type == ICE_ALL_TUNNELS) { @@ -5940,7 +5948,7 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, return; } - if (tun_type == ICE_SW_TUN_NVGRE) { + if (tun_type == ICE_SW_TUN_NVGRE || gre) { if (tcp) { *pkt = dummy_gre_tcp_packet; *pkt_len = sizeof(dummy_gre_tcp_packet); From patchwork Mon Mar 23 07:17:58 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67023 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8AE2DA0563; Mon, 23 Mar 2020 08:20:53 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 0C9DE1C1C0; Mon, 23 Mar 2020 08:16:10 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id F022E1C067 for ; Mon, 23 Mar 2020 08:15:49 +0100 (CET) IronPort-SDR: 1Dk728T2bckndVj48/gkHaoRM19ShlDctboHw23hLd/VHENXd3V34mCpJIgqDLTWJkJ1Zx0APw BIJt2U0i2p4w== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:49 -0700 IronPort-SDR: Up3FxM+t+aUpoJCFWEe6W8pozHa7faSd7P/KWS7XQa84MiNCjIlZXa24J5JuBJWw9nnoLF1NC0 JV9FxJQAONjw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111852" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:47 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Dan Nowlin , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:58 +0800 Message-Id: <20200323071759.13075-36-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 35/36] net/ice/base: add reference count to tunnels X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add a lock for protecting the tunnel table while adding, removing and searching tunnels. Add reference counting to tunnels so that multiple instances of the same tunnel port can be created. Only physically destroy the tunnel when all instances of that tunnel have been destroyed. 
Signed-off-by: Dan Nowlin Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_common.c | 2 + drivers/net/ice/base/ice_flex_pipe.c | 95 ++++++++++++++++++++++++++++++------ drivers/net/ice/base/ice_flex_type.h | 1 + drivers/net/ice/base/ice_type.h | 1 + 4 files changed, 83 insertions(+), 16 deletions(-) diff --git a/drivers/net/ice/base/ice_common.c b/drivers/net/ice/base/ice_common.c index 3fdc93ce9..0cf578c34 100644 --- a/drivers/net/ice/base/ice_common.c +++ b/drivers/net/ice/base/ice_common.c @@ -728,6 +728,7 @@ enum ice_status ice_init_hw(struct ice_hw *hw) status = ice_init_hw_tbls(hw); if (status) goto err_unroll_fltr_mgmt_struct; + ice_init_lock(&hw->tnl_lock); return ICE_SUCCESS; err_unroll_fltr_mgmt_struct: @@ -759,6 +760,7 @@ void ice_deinit_hw(struct ice_hw *hw) ice_sched_clear_agg(hw); ice_free_seg(hw); ice_free_hw_tbls(hw); + ice_destroy_lock(&hw->tnl_lock); if (hw->port_info) { ice_free(hw, hw->port_info); diff --git a/drivers/net/ice/base/ice_flex_pipe.c b/drivers/net/ice/base/ice_flex_pipe.c index 213aceef5..de14d9d9d 100644 --- a/drivers/net/ice/base/ice_flex_pipe.c +++ b/drivers/net/ice/base/ice_flex_pipe.c @@ -1883,7 +1883,7 @@ static struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld) } /** - * ice_tunnel_port_in_use + * ice_tunnel_port_in_use_hlpr - helper function to determine tunnel usage * @hw: pointer to the HW structure * @port: port to search for * @index: optionally returns index @@ -1891,7 +1891,7 @@ static struct ice_buf *ice_pkg_buf(struct ice_buf_build *bld) * Returns whether a port is already in use as a tunnel, and optionally its * index */ -bool ice_tunnel_port_in_use(struct ice_hw *hw, u16 port, u16 *index) +static bool ice_tunnel_port_in_use_hlpr(struct ice_hw *hw, u16 port, u16 *index) { u16 i; @@ -1906,6 +1906,26 @@ bool ice_tunnel_port_in_use(struct ice_hw *hw, u16 port, u16 *index) } /** + * ice_tunnel_port_in_use + * @hw: pointer to the HW structure + * @port: port to search for + * @index: optionally returns index + * + * Returns whether a port is already in use as a tunnel, and optionally its + * index + */ +bool ice_tunnel_port_in_use(struct ice_hw *hw, u16 port, u16 *index) +{ + bool res; + + ice_acquire_lock(&hw->tnl_lock); + res = ice_tunnel_port_in_use_hlpr(hw, port, index); + ice_release_lock(&hw->tnl_lock); + + return res; +} + +/** * ice_tunnel_get_type * @hw: pointer to the HW structure * @port: port to search for @@ -1916,15 +1936,21 @@ bool ice_tunnel_port_in_use(struct ice_hw *hw, u16 port, u16 *index) bool ice_tunnel_get_type(struct ice_hw *hw, u16 port, enum ice_tunnel_type *type) { + bool res = false; u16 i; + ice_acquire_lock(&hw->tnl_lock); + for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++) if (hw->tnl.tbl[i].in_use && hw->tnl.tbl[i].port == port) { *type = hw->tnl.tbl[i].type; - return true; + res = true; + break; } - return false; + ice_release_lock(&hw->tnl_lock); + + return res; } /** @@ -1962,16 +1988,22 @@ bool ice_get_open_tunnel_port(struct ice_hw *hw, enum ice_tunnel_type type, u16 *port) { + bool res = false; u16 i; + ice_acquire_lock(&hw->tnl_lock); + for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++) if (hw->tnl.tbl[i].valid && hw->tnl.tbl[i].in_use && (type == TNL_ALL || hw->tnl.tbl[i].type == type)) { *port = hw->tnl.tbl[i].port; - return true; + res = true; + break; } - return false; + ice_release_lock(&hw->tnl_lock); + + return res; } /** @@ -1992,15 +2024,24 @@ ice_create_tunnel(struct ice_hw *hw, enum ice_tunnel_type type, u16 port) struct 
ice_buf_build *bld; u16 index; - if (ice_tunnel_port_in_use(hw, port, NULL)) - return ICE_ERR_ALREADY_EXISTS; + ice_acquire_lock(&hw->tnl_lock); - if (!ice_find_free_tunnel_entry(hw, type, &index)) - return ICE_ERR_OUT_OF_RANGE; + if (ice_tunnel_port_in_use_hlpr(hw, port, &index)) { + hw->tnl.tbl[index].ref++; + status = ICE_SUCCESS; + goto ice_create_tunnel_end; + } + + if (!ice_find_free_tunnel_entry(hw, type, &index)) { + status = ICE_ERR_OUT_OF_RANGE; + goto ice_create_tunnel_end; + } bld = ice_pkg_buf_alloc(hw); - if (!bld) - return ICE_ERR_NO_MEMORY; + if (!bld) { + status = ICE_ERR_NO_MEMORY; + goto ice_create_tunnel_end; + } /* allocate 2 sections, one for Rx parser, one for Tx parser */ if (ice_pkg_buf_reserve_section(bld, 2)) @@ -2040,11 +2081,15 @@ ice_create_tunnel(struct ice_hw *hw, enum ice_tunnel_type type, u16 port) if (!status) { hw->tnl.tbl[index].port = port; hw->tnl.tbl[index].in_use = true; + hw->tnl.tbl[index].ref = 1; } ice_create_tunnel_err: ice_pkg_buf_free(hw, bld); +ice_create_tunnel_end: + ice_release_lock(&hw->tnl_lock); + return status; } @@ -2064,24 +2109,38 @@ enum ice_status ice_destroy_tunnel(struct ice_hw *hw, u16 port, bool all) enum ice_status status = ICE_ERR_MAX_LIMIT; struct ice_buf_build *bld; u16 count = 0; + u16 index; u16 size; u16 i; + ice_acquire_lock(&hw->tnl_lock); + + if (!all && ice_tunnel_port_in_use_hlpr(hw, port, &index)) + if (hw->tnl.tbl[index].ref > 1) { + hw->tnl.tbl[index].ref--; + status = ICE_SUCCESS; + goto ice_destroy_tunnel_end; + } + /* determine count */ for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++) if (hw->tnl.tbl[i].valid && hw->tnl.tbl[i].in_use && (all || hw->tnl.tbl[i].port == port)) count++; - if (!count) - return ICE_ERR_PARAM; + if (!count) { + status = ICE_ERR_PARAM; + goto ice_destroy_tunnel_end; + } /* size of section - there is at least one entry */ size = ice_struct_size(sect_rx, tcam, count - 1); bld = ice_pkg_buf_alloc(hw); - if (!bld) - return ICE_ERR_NO_MEMORY; + if (!bld) { + status = ICE_ERR_NO_MEMORY; + goto ice_destroy_tunnel_end; + } /* allocate 2 sections, one for Rx parser, one for Tx parser */ if (ice_pkg_buf_reserve_section(bld, 2)) @@ -2123,6 +2182,7 @@ enum ice_status ice_destroy_tunnel(struct ice_hw *hw, u16 port, bool all) for (i = 0; i < hw->tnl.count && i < ICE_TUNNEL_MAX_ENTRIES; i++) if (hw->tnl.tbl[i].marked) { + hw->tnl.tbl[i].ref = 0; hw->tnl.tbl[i].port = 0; hw->tnl.tbl[i].in_use = false; hw->tnl.tbl[i].marked = false; @@ -2131,6 +2191,9 @@ enum ice_status ice_destroy_tunnel(struct ice_hw *hw, u16 port, bool all) ice_destroy_tunnel_err: ice_pkg_buf_free(hw, bld); +ice_destroy_tunnel_end: + ice_release_lock(&hw->tnl_lock); + return status; } diff --git a/drivers/net/ice/base/ice_flex_type.h b/drivers/net/ice/base/ice_flex_type.h index 2c5860887..4e4ba4deb 100644 --- a/drivers/net/ice/base/ice_flex_type.h +++ b/drivers/net/ice/base/ice_flex_type.h @@ -531,6 +531,7 @@ struct ice_tunnel_entry { enum ice_tunnel_type type; u16 boost_addr; u16 port; + u16 ref; struct ice_boost_tcam_entry *boost_entry; u8 valid; u8 in_use; diff --git a/drivers/net/ice/base/ice_type.h b/drivers/net/ice/base/ice_type.h index 394867f35..da2b3548d 100644 --- a/drivers/net/ice/base/ice_type.h +++ b/drivers/net/ice/base/ice_type.h @@ -888,6 +888,7 @@ struct ice_hw { u32 pkg_size; /* tunneling info */ + struct ice_lock tnl_lock; struct ice_tunnel_table tnl; struct ice_acl_tbl *acl_tbl; From patchwork Mon Mar 23 07:17:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 
Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Qi Zhang X-Patchwork-Id: 67024 X-Patchwork-Delegate: xiaolong.ye@intel.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id B2FDAA0563; Mon, 23 Mar 2020 08:21:03 +0100 (CET) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 62A931C1D2; Mon, 23 Mar 2020 08:16:11 +0100 (CET) Received: from mga04.intel.com (mga04.intel.com [192.55.52.120]) by dpdk.org (Postfix) with ESMTP id F03361C038 for ; Mon, 23 Mar 2020 08:15:51 +0100 (CET) IronPort-SDR: 7KfUfyLcxBIgxf6CBc5gGK6hccMA0NDQRClE6NQY9WoNZweTxJTmE7XZ6XgKbRnXtXVGNeGRbj 3p/vXkz/hhLw== X-Amp-Result: SKIPPED(no attachment in message) X-Amp-File-Uploaded: False Received: from orsmga003.jf.intel.com ([10.7.209.27]) by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384; 23 Mar 2020 00:15:51 -0700 IronPort-SDR: QePyDlt2H9iBr7Bhk57ewxmwgvxjhbqGFG8pXMtKVUzyNEWraW3BzxTnK0dnlP3mplwtmcz14w yNvOai/jGZzw== X-ExtLoop1: 1 X-IronPort-AV: E=Sophos;i="5.72,295,1580803200"; d="scan'208";a="246111857" Received: from dpdk51.sh.intel.com ([10.67.110.245]) by orsmga003.jf.intel.com with ESMTP; 23 Mar 2020 00:15:49 -0700 From: Qi Zhang To: qiming.yang@intel.com Cc: dev@dpdk.org, xiaolong.ye@intel.com, Qi Zhang , Wei Zhao , Paul M Stillwell Jr Date: Mon, 23 Mar 2020 15:17:59 +0800 Message-Id: <20200323071759.13075-37-qi.z.zhang@intel.com> X-Mailer: git-send-email 2.13.6 In-Reply-To: <20200323071759.13075-1-qi.z.zhang@intel.com> References: <20200309114357.31800-1-qi.z.zhang@intel.com> <20200323071759.13075-1-qi.z.zhang@intel.com> Subject: [dpdk-dev] [PATCH v2 36/36] net/ice/base: add pppoe ipv6 dummy packet X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" In order to support switch rule for pppoe packet with ipv6 payload, it has to use a new dummy packet with ipv6 format. Signed-off-by: Wei Zhao Signed-off-by: Paul M Stillwell Jr Signed-off-by: Qi Zhang --- drivers/net/ice/base/ice_switch.c | 67 +++++++++++++++++++++++++++++++-------- 1 file changed, 54 insertions(+), 13 deletions(-) diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c index a7fb30b05..3d83ded6e 100644 --- a/drivers/net/ice/base/ice_switch.c +++ b/drivers/net/ice/base/ice_switch.c @@ -11,6 +11,7 @@ #define ICE_ETH_VLAN_TCI_OFFSET 14 #define ICE_MAX_VLAN_ID 0xFFF #define ICE_IPV4_NVGRE_PROTO_ID 0x002F +#define ICE_PPP_IPV6_PROTO_ID 0x0057 /* Dummy ethernet header needed in the ice_aqc_sw_rules_elem * struct to configure any switch filter rules. 
@@ -559,7 +560,7 @@ static const struct ice_dummy_pkt_offsets dummy_pppoe_packet_offsets[] = { { ICE_PROTOCOL_LAST, 0 }, }; -static const u8 dummy_pppoe_packet[] = { +static const u8 dummy_pppoe_ipv4_packet[] = { 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */ 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, @@ -582,6 +583,34 @@ static const u8 dummy_pppoe_packet[] = { 0x00, 0x00, /* 2 bytes for 4 bytes alignment */ }; +static const u8 dummy_pppoe_ipv6_packet[] = { + 0x00, 0x00, 0x00, 0x00, /* ICE_MAC_OFOS 0 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x81, 0x00, /* ICE_ETYPE_OL 12 */ + + 0x00, 0x00, 0x88, 0x64, /* ICE_VLAN_OFOS 14 */ + + 0x11, 0x00, 0x00, 0x00, /* ICE_PPPOE 18 */ + 0x00, 0x2a, + + 0x00, 0x57, /* PPP Link Layer 24 */ + + 0x60, 0x00, 0x00, 0x00, /* ICE_IPV6_OFOS 26 */ + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, + + 0x00, 0x00, /* 2 bytes for 4 bytes alignment */ +}; + /* this is a recipe to profile association bitmap */ static ice_declare_bitmap(recipe_to_profile[ICE_MAX_NUM_RECIPES], ICE_MAX_NUM_PROFILES); @@ -5912,18 +5941,6 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, bool gre = false; u16 i; - if (tun_type == ICE_SW_TUN_GTP) { - *pkt = dummy_udp_gtp_packet; - *pkt_len = sizeof(dummy_udp_gtp_packet); - *offsets = dummy_udp_gtp_packet_offsets; - return; - } - if (tun_type == ICE_SW_TUN_PPPOE) { - *pkt = dummy_pppoe_packet; - *pkt_len = sizeof(dummy_pppoe_packet); - *offsets = dummy_pppoe_packet_offsets; - return; - } for (i = 0; i < lkups_cnt; i++) { if (lkups[i].type == ICE_UDP_ILOS) udp = true; @@ -5939,6 +5956,30 @@ ice_find_dummy_packet(struct ice_adv_lkup_elem *lkups, u16 lkups_cnt, lkups[i].m_u.ipv4_hdr.protocol == 0xFF) gre = true; + else if (lkups[i].type == ICE_PPPOE && + lkups[i].h_u.pppoe_hdr.ppp_prot_id == + CPU_TO_BE16(ICE_PPP_IPV6_PROTO_ID) && + lkups[i].m_u.pppoe_hdr.ppp_prot_id == + 0xFFFF) + ipv6 = true; + } + + if (tun_type == ICE_SW_TUN_GTP) { + *pkt = dummy_udp_gtp_packet; + *pkt_len = sizeof(dummy_udp_gtp_packet); + *offsets = dummy_udp_gtp_packet_offsets; + return; + } + if (tun_type == ICE_SW_TUN_PPPOE && ipv6) { + *pkt = dummy_pppoe_ipv6_packet; + *pkt_len = sizeof(dummy_pppoe_ipv6_packet); + *offsets = dummy_pppoe_packet_offsets; + return; + } else if (tun_type == ICE_SW_TUN_PPPOE) { + *pkt = dummy_pppoe_ipv4_packet; + *pkt_len = sizeof(dummy_pppoe_ipv4_packet); + *offsets = dummy_pppoe_packet_offsets; + return; } if (tun_type == ICE_ALL_TUNNELS) {