From patchwork Fri Sep 1 09:34:43 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131046 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id F17F44221E; Fri, 1 Sep 2023 11:35:39 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7BFAC40285; Fri, 1 Sep 2023 11:35:39 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.unionmem.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id 5831B4014F for ; Fri, 1 Sep 2023 11:35:37 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZN9v069800; Fri, 1 Sep 2023 17:35:23 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:22 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 01/32] net/sssnic: add build and doc infrastructure Date: Fri, 1 Sep 2023 17:34:43 +0800 Message-ID: <20230901093514.224824-2-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZN9v069800 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Adding minimum PMD code, doc and build infrastructure for sssnic. Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- v2: * Fixed 'Title underline too short' in doc/guides/nics/sssnic.rst. * Removed error.h from including files. 
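The new sssnic_ethdev.c in this patch boils down to the standard DPDK PCI PMD registration pattern. The condensed sketch below mirrors the stub added by the patch; the header names are an assumption (the includes are elided in the diff and the driver-facing headers differ between DPDK releases), everything else follows the file as posted.

/*
 * Condensed sketch of the skeleton added by this patch.
 * The #include names are assumptions; the probe/remove stubs and the
 * registration mirror drivers/net/sssnic/sssnic_ethdev.c below.
 */
#include <rte_common.h>
#include <rte_pci.h>
#include <bus_pci_driver.h>	/* assumed header providing struct rte_pci_driver */

static int
sssnic_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
{
	RTE_SET_USED(pci_drv);
	RTE_SET_USED(pci_dev);
	return -EINVAL;	/* real probing arrives in later patches */
}

static int
sssnic_pci_remove(struct rte_pci_device *pci_dev)
{
	RTE_SET_USED(pci_dev);
	return -EINVAL;
}

static struct rte_pci_driver sssnic_pmd = {
	.probe = sssnic_pci_probe,
	.remove = sssnic_pci_remove,
};

/* Registers sssnic_pmd with the PCI bus from an init-time constructor. */
RTE_PMD_REGISTER_PCI(net_sssnic, sssnic_pmd);

RTE_PMD_REGISTER_PCI() expands to a constructor that hands sssnic_pmd to the PCI bus at startup, so once an id_table is added (patch 03) EAL can match the driver against scanned devices.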
--- .mailmap | 5 +- MAINTAINERS | 8 ++++ doc/guides/nics/features/sssnic.ini | 9 ++++ doc/guides/nics/index.rst | 1 + doc/guides/nics/sssnic.rst | 73 +++++++++++++++++++++++++++++ drivers/net/meson.build | 1 + drivers/net/sssnic/meson.build | 18 +++++++ drivers/net/sssnic/sssnic_ethdev.c | 28 +++++++++++ 8 files changed, 140 insertions(+), 3 deletions(-) create mode 100644 doc/guides/nics/features/sssnic.ini create mode 100644 doc/guides/nics/sssnic.rst create mode 100644 drivers/net/sssnic/meson.build create mode 100644 drivers/net/sssnic/sssnic_ethdev.c diff --git a/.mailmap b/.mailmap index 864d33ee46..6fa73d3b79 100644 --- a/.mailmap +++ b/.mailmap @@ -151,7 +151,6 @@ Bao-Long Tran Bar Neuman Barak Enat Barry Cao -Bartosz Staszewski Baruch Siach Bassam Zaid AlKilani Beilei Xing @@ -496,7 +495,6 @@ Helin Zhang Hemant Agrawal Heng Ding Hengjian Zhang -Heng Jiang Heng Wang Henning Schild Henry Cai @@ -630,7 +628,6 @@ Jie Liu Jie Pan Jie Wang Jie Zhou -Jieqiang Wang Jijiang Liu Jilei Chen Jim Harris @@ -1156,6 +1153,7 @@ Rebecca Troy Remi Pommarel Remy Horton Renata Saiakhova +Renyong Wan Reshma Pattan Ricardo Roldan Ricardo Salveti @@ -1329,6 +1327,7 @@ Stephen Hurd Steve Capper Steven Lariau Steven Luong +Steven Song Steven Webster Steven Zou Steve Rempe diff --git a/MAINTAINERS b/MAINTAINERS index a926155f26..1e57d29aa3 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -633,6 +633,13 @@ F: drivers/net/af_xdp/ F: doc/guides/nics/af_xdp.rst F: doc/guides/nics/features/af_xdp.ini +3SNIC sssnic +M: Renyong Wan +M: Steven Song +F: driver/net/sssnic/ +F: doc/guides/nics/sssnic.rst +F: doc/guides/nics/features/sssnic.ini + Amazon ENA M: Michal Krawczyk M: Shai Brandes @@ -1793,6 +1800,7 @@ F: doc/guides/tools/img/eventdev_* F: app/test/test_event_ring.c Procinfo tool +M: Maryam Tahhan M: Reshma Pattan F: app/proc-info/ F: doc/guides/tools/proc_info.rst diff --git a/doc/guides/nics/features/sssnic.ini b/doc/guides/nics/features/sssnic.ini new file mode 100644 index 0000000000..6d9786db7e --- /dev/null +++ b/doc/guides/nics/features/sssnic.ini @@ -0,0 +1,9 @@ +; +; Supported features of the 'sssnic' network poll mode driver. +; +; Refer to default.ini for the full list of available PMD features. +; +[Features] +Linux = Y +ARMv8 = Y +x86-64 = Y diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst index 7bfcac880f..9d2b29383b 100644 --- a/doc/guides/nics/index.rst +++ b/doc/guides/nics/index.rst @@ -61,6 +61,7 @@ Network Interface Controller Drivers qede sfc_efx softnic + sssnic tap thunderx txgbe diff --git a/doc/guides/nics/sssnic.rst b/doc/guides/nics/sssnic.rst new file mode 100644 index 0000000000..fe0180c2e6 --- /dev/null +++ b/doc/guides/nics/sssnic.rst @@ -0,0 +1,73 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright(c) 2022 Shenzhen 3SNIC Information Technology Co., Ltd. + +SSSNIC Poll Mode Driver +======================= + +The sssnic PMD (**librte_pmd_sssnic**) provides poll mode driver support +for 3SNIC 9x0 serials family of Ethernet adapters. 
+ + +Supported NICs +-------------- + +- 3S910 Dual Port SFP28 10/25GbE Ethernet adapter +- 3S920 Quad Port SFP28 10/25GbE Ethernet adapter +- 3S920 Quad Port QSFP28 100GbE Ethernet adapter + + +Features +-------- + +Features of sssnic PMD are: + +- Link status +- Link status event +- Queue start/stop +- Rx interrupt +- Scattered Rx +- TSO +- LRO +- Promiscuous mode +- Allmulticast mode +- Unicast MAC filter +- Multicast MAC filter +- RSS hash +- RSS key update +- RSS reta update +- Inner RSS +- VLAN filter +- VLAN offload +- L3 checksum offload +- L4 checksum offload +- Inner L3 checksum +- Inner L4 checksum +- Basic stats +- Extended stats +- Stats per queue +- Flow control +- FW version +- Generic flow API + + +Prerequisites +------------- + +- Learn about 3SNIC Ethernet NICs using + ``_. + +- Follow the DPDK :ref:`Getting Started Guide for Linux ` to set up the basic DPDK environment. + + +Driver compilation and testing +------------------------------ + +Refer to the document :ref:`compiling and testing a PMD for a NIC ` +for details. + + +Limitations or Known issues +--------------------------- + +Build with ICC is not supported yet. +Power8, ARMv7 and BSD are not supported yet. diff --git a/drivers/net/meson.build b/drivers/net/meson.build index bd38b533c5..224eab99a7 100644 --- a/drivers/net/meson.build +++ b/drivers/net/meson.build @@ -54,6 +54,7 @@ drivers = [ 'ring', 'sfc', 'softnic', + 'sssnic', 'tap', 'thunderx', 'txgbe', diff --git a/drivers/net/sssnic/meson.build b/drivers/net/sssnic/meson.build new file mode 100644 index 0000000000..fda65aa380 --- /dev/null +++ b/drivers/net/sssnic/meson.build @@ -0,0 +1,18 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. + +if not is_linux + build = false + reason = 'only supported on Linux' + subdir_done() +endif + +if (arch_subdir != 'x86' and arch_subdir != 'arm') or (not dpdk_conf.get('RTE_ARCH_64')) + build = false + reason = 'only supported on x86_64 and aarch64' + subdir_done() +endif + +sources = files( + 'sssnic_ethdev.c', +) diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c new file mode 100644 index 0000000000..dcda01eeb8 --- /dev/null +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -0,0 +1,28 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd.
+ */ + +#include +#include + +static int +sssnic_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev) +{ + RTE_SET_USED(pci_drv); + RTE_SET_USED(pci_dev); + return -EINVAL; +} + +static int +sssnic_pci_remove(struct rte_pci_device *pci_dev) +{ + RTE_SET_USED(pci_dev); + return -EINVAL; +} + +static struct rte_pci_driver sssnic_pmd = { + .probe = sssnic_pci_probe, + .remove = sssnic_pci_remove, +}; + +RTE_PMD_REGISTER_PCI(net_sssnic, sssnic_pmd); From patchwork Fri Sep 1 09:34:44 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131047 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id EDC9D4221E; Fri, 1 Sep 2023 11:35:54 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A3C91402A6; Fri, 1 Sep 2023 11:35:44 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.ramaxel.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id 441BD402A5 for ; Fri, 1 Sep 2023 11:35:42 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZN9w069800; Fri, 1 Sep 2023 17:35:24 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:23 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 02/32] net/sssnic: add log type and log macros Date: Fri, 1 Sep 2023 17:34:44 +0800 Message-ID: <20230901093514.224824-3-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZN9w069800 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Adding log macros to print runtime messages and trace functions. 
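Once registered, the log types behave like any other DPDK PMD logging helpers; messages can be raised at runtime per type (typically named pmd.net.sssnic.driver, pmd.net.sssnic.init and so on, depending on the logtype prefix set by the driver's build) via the EAL --log-level option. A small usage sketch follows; the function is hypothetical and not part of this patch, only the sssnic_log.h added below is assumed. One detail worth noting in the header below: the RTE_ETHDEV_DEBUG_TX variant of SSSNIC_TX_LOG passes sssnic_logtype_rx to rte_log(); sssnic_logtype_tx is presumably the intended log type.

/* Hypothetical caller, shown only to illustrate the macros from sssnic_log.h. */
#include <rte_log.h>
#include "sssnic_log.h"

static int
sssnic_example_configure(int nb_queues)
{
	PMD_INIT_FUNC_TRACE();	/* ">>" trace on the init log type */

	if (nb_queues <= 0) {
		PMD_DRV_LOG(ERR, "Invalid queue count %d", nb_queues);
		return -EINVAL;
	}

	SSSNIC_DEBUG("configuring %d queues", nb_queues);
	/* Compiled to a no-op unless RTE_ETHDEV_DEBUG_RX is defined. */
	SSSNIC_RX_LOG(DEBUG, "rx debug logging enabled");

	return 0;
}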
Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- drivers/net/sssnic/sssnic_ethdev.c | 13 ++++++++ drivers/net/sssnic/sssnic_log.h | 51 ++++++++++++++++++++++++++++++ 2 files changed, 64 insertions(+) create mode 100644 drivers/net/sssnic/sssnic_log.h diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index dcda01eeb8..0f1017af9d 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -5,11 +5,14 @@ #include #include +#include "sssnic_log.h" + static int sssnic_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev) { RTE_SET_USED(pci_drv); RTE_SET_USED(pci_dev); + PMD_INIT_FUNC_TRACE(); return -EINVAL; } @@ -17,6 +20,7 @@ static int sssnic_pci_remove(struct rte_pci_device *pci_dev) { RTE_SET_USED(pci_dev); + PMD_INIT_FUNC_TRACE(); return -EINVAL; } @@ -26,3 +30,12 @@ static struct rte_pci_driver sssnic_pmd = { }; RTE_PMD_REGISTER_PCI(net_sssnic, sssnic_pmd); + +RTE_LOG_REGISTER_SUFFIX(sssnic_logtype_driver, driver, INFO); +RTE_LOG_REGISTER_SUFFIX(sssnic_logtype_init, init, NOTICE); +#ifdef RTE_ETHDEV_DEBUG_RX +RTE_LOG_REGISTER_SUFFIX(sssnic_logtype_rx, rx, DEBUG); +#endif /*RTE_ETHDEV_DEBUG_RX*/ +#ifdef RTE_ETHDEV_DEBUG_TX +RTE_LOG_REGISTER_SUFFIX(sssnic_logtype_tx, tx, DEBUG); +#endif /*RTE_ETHDEV_DEBUG_TX*/ diff --git a/drivers/net/sssnic/sssnic_log.h b/drivers/net/sssnic/sssnic_log.h new file mode 100644 index 0000000000..629e12100c --- /dev/null +++ b/drivers/net/sssnic/sssnic_log.h @@ -0,0 +1,51 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. + */ + +#ifndef _SSSNIC_LOG_H_ +#define _SSSNIC_LOG_H_ + +#include + +extern int sssnic_logtype_driver; +extern int sssnic_logtype_init; + +#define SSSNIC_LOG_NAME "sssnic" +#define PMD_DRV_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_##level, sssnic_logtype_driver, \ + SSSNIC_LOG_NAME ": " fmt "\n", ##args) +#define PMD_INIT_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_##level, sssnic_logtype_init, "%s(): " fmt "\n", \ + __func__, ##args) + +#define SSSNIC_DEBUG(fmt, args...) \ + PMD_DRV_LOG(DEBUG, "[%s():%d] " fmt, __func__, __LINE__, ##args) + +/* + * Trace driver init and uninit. + */ +#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>") + +#ifdef RTE_ETHDEV_DEBUG_RX +extern int sssnic_logtype_rx; +#define SSSNIC_RX_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_##level, sssnic_logtype_rx, \ + "sssnic_rx: [%s():%d] " fmt "\n", __func__, __LINE__, ##args) +#else +#define SSSNIC_RX_LOG(level, fmt, args...) \ + do { \ + } while (0) +#endif /*RTE_ETHDEV_DEBUG_RX*/ + +#ifdef RTE_ETHDEV_DEBUG_TX +extern int sssnic_logtype_tx; +#define SSSNIC_TX_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_##level, sssnic_logtype_rx, \ + "sssnic_tx: [%s():%d] " fmt "\n", __func__, __LINE__, ##args) +#else +#define SSSNIC_TX_LOG(level, fmt, args...) 
\ + do { \ + } while (0) +#endif /*RTE_ETHDEV_DEBUG_TX*/ + +#endif /*_SSSNIC_LOG_H_*/ From patchwork Fri Sep 1 09:34:45 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131049 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 34D644221E; Fri, 1 Sep 2023 11:36:10 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 3BFAC402A4; Fri, 1 Sep 2023 11:35:47 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.ramaxel.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id 6B8F8402A5 for ; Fri, 1 Sep 2023 11:35:44 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZN9x069800; Fri, 1 Sep 2023 17:35:24 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:23 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 03/32] net/sssnic: support probe and remove Date: Fri, 1 Sep 2023 17:34:45 +0800 Message-ID: <20230901093514.224824-4-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZN9x069800 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Register device ID for 3SNIC ethernet adapter to support PCI ethdev probe and remove. Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- drivers/net/sssnic/base/sssnic_hw.h | 11 +++++++++ drivers/net/sssnic/sssnic_ethdev.c | 37 +++++++++++++++++++++++++---- 2 files changed, 44 insertions(+), 4 deletions(-) create mode 100644 drivers/net/sssnic/base/sssnic_hw.h diff --git a/drivers/net/sssnic/base/sssnic_hw.h b/drivers/net/sssnic/base/sssnic_hw.h new file mode 100644 index 0000000000..db916b1977 --- /dev/null +++ b/drivers/net/sssnic/base/sssnic_hw.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. 
+ */ + +#ifndef _SSSNIC_HW_H_ +#define _SSSNIC_HW_H_ + +#define SSSNIC_PCI_VENDOR_ID 0x1F3F +#define SSSNIC_DEVICE_ID_STD 0x9020 + +#endif /* _SSSNIC_HW_H_ */ diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index 0f1017af9d..4f8b5c2684 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -6,25 +6,54 @@ #include #include "sssnic_log.h" +#include "base/sssnic_hw.h" + +static int +sssnic_ethdev_init(struct rte_eth_dev *ethdev) +{ + RTE_SET_USED(ethdev); + PMD_INIT_FUNC_TRACE(); + + return -EINVAL; +} + +static int +sssnic_ethdev_uninit(struct rte_eth_dev *ethdev) +{ + RTE_SET_USED(ethdev); + PMD_INIT_FUNC_TRACE(); + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + return -EINVAL; +} static int sssnic_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev) { RTE_SET_USED(pci_drv); - RTE_SET_USED(pci_dev); PMD_INIT_FUNC_TRACE(); - return -EINVAL; + + return rte_eth_dev_pci_generic_probe(pci_dev, 0, sssnic_ethdev_init); } static int sssnic_pci_remove(struct rte_pci_device *pci_dev) { - RTE_SET_USED(pci_dev); PMD_INIT_FUNC_TRACE(); - return -EINVAL; + + return rte_eth_dev_pci_generic_remove(pci_dev, sssnic_ethdev_uninit); } +static const struct rte_pci_id sssnic_pci_id_map[] = { + { RTE_PCI_DEVICE(SSSNIC_PCI_VENDOR_ID, SSSNIC_DEVICE_ID_STD) }, + { .vendor_id = 0 }, +}; + static struct rte_pci_driver sssnic_pmd = { + .id_table = sssnic_pci_id_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING, .probe = sssnic_pci_probe, .remove = sssnic_pci_remove, }; From patchwork Fri Sep 1 09:34:46 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131048 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C4C9A4221E; Fri, 1 Sep 2023 11:36:01 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E2536402A9; Fri, 1 Sep 2023 11:35:45 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.ramaxel.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id 10D054014F for ; Fri, 1 Sep 2023 11:35:43 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZOMw069818; Fri, 1 Sep 2023 17:35:24 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:24 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 04/32] net/sssnic: initialize hardware base Date: Fri, 1 Sep 2023 17:34:46 +0800 Message-ID: <20230901093514.224824-5-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZOMw069818 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , 
Errors-To: dev-bounces@dpdk.org From: Renyong Wan Initializing hardware base make hardware ready to be access. Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- drivers/net/sssnic/base/meson.build | 13 ++ drivers/net/sssnic/base/sssnic_hw.c | 207 +++++++++++++++++++++++++++ drivers/net/sssnic/base/sssnic_hw.h | 49 +++++++ drivers/net/sssnic/base/sssnic_reg.h | 169 ++++++++++++++++++++++ drivers/net/sssnic/meson.build | 3 + drivers/net/sssnic/sssnic_ethdev.c | 46 +++++- drivers/net/sssnic/sssnic_ethdev.h | 18 +++ 7 files changed, 501 insertions(+), 4 deletions(-) create mode 100644 drivers/net/sssnic/base/meson.build create mode 100644 drivers/net/sssnic/base/sssnic_hw.c create mode 100644 drivers/net/sssnic/base/sssnic_reg.h create mode 100644 drivers/net/sssnic/sssnic_ethdev.h diff --git a/drivers/net/sssnic/base/meson.build b/drivers/net/sssnic/base/meson.build new file mode 100644 index 0000000000..3e64112c72 --- /dev/null +++ b/drivers/net/sssnic/base/meson.build @@ -0,0 +1,13 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. + +sources = [ + 'sssnic_hw.c', +] + +c_args = cflags +base_lib = static_library('sssnic_base', sources, + dependencies: [static_rte_eal, static_rte_ethdev, static_rte_bus_pci, static_rte_net], + c_args: c_args) + +base_objs = base_lib.extract_all_objects() diff --git a/drivers/net/sssnic/base/sssnic_hw.c b/drivers/net/sssnic/base/sssnic_hw.c new file mode 100644 index 0000000000..8b7bba7644 --- /dev/null +++ b/drivers/net/sssnic/base/sssnic_hw.c @@ -0,0 +1,207 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. + */ + +#include +#include +#include + +#include "../sssnic_log.h" +#include "sssnic_hw.h" +#include "sssnic_reg.h" + +static int +wait_for_sssnic_hw_ready(struct sssnic_hw *hw) +{ + struct sssnic_attr1_reg reg; + uint32_t timeout_ms = 10; + + do { + reg.u32 = sssnic_cfg_reg_read(hw, SSSNIC_ATTR1_REG); + if (reg.u32 != 0xffffffff && reg.mgmt_init_status != 0) + return 0; + rte_delay_ms(1); + } while (--timeout_ms); + + return -EBUSY; +} + +static int +wait_for_sssnic_db_enabled(struct sssnic_hw *hw) +{ + struct sssnic_attr4_reg r4; + struct sssnic_attr5_reg r5; + uint32_t timeout_ms = 60000; + + do { + r4.u32 = sssnic_cfg_reg_read(hw, SSSNIC_ATTR4_REG); + r5.u32 = sssnic_cfg_reg_read(hw, SSSNIC_ATTR5_REG); + if (r4.db_ctrl == SSSNIC_DB_CTRL_ENABLE && + r5.outbound_ctrl == SSSNIC_DB_CTRL_ENABLE) + return 0; + rte_delay_ms(1); + } while (--timeout_ms); + + return -EBUSY; +} + +static void +sssnic_attr_setup(struct sssnic_hw *hw) +{ + struct sssnic_attr0_reg attr0; + struct sssnic_attr1_reg attr1; + struct sssnic_attr2_reg attr2; + struct sssnic_attr3_reg attr3; + struct sssnic_hw_attr *attr = &hw->attr; + + attr0.u32 = sssnic_cfg_reg_read(hw, SSSNIC_ATTR0_REG); + attr1.u32 = sssnic_cfg_reg_read(hw, SSSNIC_ATTR1_REG); + attr2.u32 = sssnic_cfg_reg_read(hw, SSSNIC_ATTR2_REG); + attr3.u32 = sssnic_cfg_reg_read(hw, SSSNIC_ATTR3_REG); + + attr->func_idx = attr0.func_idx; + attr->pf_idx = attr0.pf_idx; + attr->pci_idx = attr0.pci_idx; + attr->vf_off = attr0.vf_off; + attr->func_type = attr0.func_type; + attr->af_idx = attr1.af_idx; + attr->num_aeq = RTE_BIT32(attr1.num_aeq); + attr->num_ceq = attr2.num_ceq; + attr->num_irq = attr2.num_irq; + attr->global_vf_off = attr3.global_vf_off; + + PMD_DRV_LOG(DEBUG, "attr0=0x%x, attr1=0x%x, attr2=0x%x, attr3=0x%x", + attr0.u32, attr1.u32, attr2.u32, attr3.u32); +} + +/* AF and MF 
election */ +static void +sssnic_af_setup(struct sssnic_hw *hw) +{ + struct sssnic_af_election_reg reg0; + struct sssnic_mf_election_reg reg1; + + /* AF election */ + reg0.u32 = sssnic_mgmt_reg_read(hw, SSSNIC_AF_ELECTION_REG); + reg0.func_idx = hw->attr.func_idx; + sssnic_mgmt_reg_write(hw, SSSNIC_AF_ELECTION_REG, reg0.u32); + reg0.u32 = sssnic_mgmt_reg_read(hw, SSSNIC_AF_ELECTION_REG); + hw->attr.af_idx = reg0.func_idx; + if (hw->attr.af_idx == hw->attr.func_idx) { + hw->attr.func_type = SSSNIC_FUNC_TYPE_AF; + PMD_DRV_LOG(INFO, "Elected PF %d as AF", hw->attr.func_idx); + + /* MF election */ + reg1.u32 = sssnic_mgmt_reg_read(hw, SSSNIC_MF_ELECTION_REG); + reg1.func_idx = hw->attr.func_idx; + sssnic_mgmt_reg_write(hw, SSSNIC_MF_ELECTION_REG, reg1.u32); + reg1.u32 = sssnic_mgmt_reg_read(hw, SSSNIC_MF_ELECTION_REG); + hw->attr.mf_idx = reg1.func_idx; + if (hw->attr.mf_idx == hw->attr.func_idx) + PMD_DRV_LOG(INFO, "Elected PF %d as MF", + hw->attr.func_idx); + } +} + +void +sssnic_msix_state_set(struct sssnic_hw *hw, uint16_t msix_id, int state) +{ + struct sssnic_msix_ctrl_reg reg; + + reg.u32 = 0; + if (state == SSSNIC_MSIX_ENABLE) + reg.int_msk_clr = 1; + else + reg.int_msk_set = 1; + reg.msxi_idx = msix_id; + sssnic_cfg_reg_write(hw, SSSNIC_MSIX_CTRL_REG, reg.u32); +} + +static void +sssnic_msix_all_disable(struct sssnic_hw *hw) +{ + uint16_t i; + int num_irqs = hw->attr.num_irq; + + for (i = 0; i < num_irqs; i++) + sssnic_msix_state_set(hw, i, SSSNIC_MSIX_DISABLE); +} + +static void +sssnic_pf_status_set(struct sssnic_hw *hw, enum sssnic_pf_status status) +{ + struct sssnic_attr6_reg reg; + + reg.u32 = sssnic_cfg_reg_read(hw, SSSNIC_ATTR6_REG); + reg.pf_status = status; + sssnic_cfg_reg_write(hw, SSSNIC_ATTR6_REG, reg.u32); +} + +static int +sssnic_base_init(struct sssnic_hw *hw) +{ + int ret; + struct rte_pci_device *pci_dev; + + PMD_INIT_FUNC_TRACE(); + + pci_dev = hw->pci_dev; + + /* get base addresses of hw registers */ + hw->cfg_base_addr = + (uint8_t *)pci_dev->mem_resource[SSSNIC_PCI_BAR_CFG].addr; + hw->mgmt_base_addr = + (uint8_t *)pci_dev->mem_resource[SSSNIC_PCI_BAR_MGMT].addr; + hw->db_base_addr = + (uint8_t *)pci_dev->mem_resource[SSSNIC_PCI_BAR_DB].addr; + hw->db_mem_len = + (uint8_t *)pci_dev->mem_resource[SSSNIC_PCI_BAR_DB].len; + + ret = wait_for_sssnic_hw_ready(hw); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Hardware is not ready!"); + return -EBUSY; + } + sssnic_attr_setup(hw); + ret = wait_for_sssnic_db_enabled(hw); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Doorbell is not enabled!"); + return -EBUSY; + } + sssnic_af_setup(hw); + sssnic_msix_all_disable(hw); + sssnic_pf_status_set(hw, SSSNIC_PF_STATUS_INIT); + + PMD_DRV_LOG(DEBUG, + "func_idx:%d, func_type:%d, pci_idx:%d, vf_off:%d, global_vf_off:%d " + "pf_idx:%d, af_idx:%d, mf_idx:%d, num_aeq:%d, num_ceq:%d, num_irq:%d", + hw->attr.func_idx, hw->attr.func_type, hw->attr.pci_idx, + hw->attr.vf_off, hw->attr.global_vf_off, hw->attr.pf_idx, + hw->attr.af_idx, hw->attr.mf_idx, hw->attr.num_aeq, + hw->attr.num_ceq, hw->attr.num_irq); + + return 0; +} + +int +sssnic_hw_init(struct sssnic_hw *hw) +{ + int ret; + + PMD_INIT_FUNC_TRACE(); + + ret = sssnic_base_init(hw); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to initialize hardware base"); + return ret; + } + + return -EINVAL; +} + +void +sssnic_hw_shutdown(struct sssnic_hw *hw) +{ + RTE_SET_USED(hw); + PMD_INIT_FUNC_TRACE(); +} diff --git a/drivers/net/sssnic/base/sssnic_hw.h b/drivers/net/sssnic/base/sssnic_hw.h index db916b1977..65d4d562b4 100644 --- 
a/drivers/net/sssnic/base/sssnic_hw.h +++ b/drivers/net/sssnic/base/sssnic_hw.h @@ -8,4 +8,53 @@ #define SSSNIC_PCI_VENDOR_ID 0x1F3F #define SSSNIC_DEVICE_ID_STD 0x9020 +#define SSSNIC_PCI_BAR_CFG 1 +#define SSSNIC_PCI_BAR_MGMT 3 +#define SSSNIC_PCI_BAR_DB 4 + +#define SSSNIC_FUNC_TYPE_PF 0 +#define SSSNIC_FUNC_TYPE_VF 1 +#define SSSNIC_FUNC_TYPE_AF 2 +#define SSSNIC_FUNC_TYPE_INVALID 3 + +#define SSSNIC_DB_CTRL_ENABLE 0x0 +#define SSSNIC_DB_CTRL_DISABLE 0x1 + +#define SSSNIC_MSIX_ENABLE 0 +#define SSSNIC_MSIX_DISABLE 1 + +enum sssnic_pf_status { + SSSNIC_PF_STATUS_INIT = 0x0, + SSSNIC_PF_STATUS_ACTIVE = 0x11, + SSSNIC_PF_STATUS_START = 0x12, + SSSNIC_PF_STATUS_FINI = 0x13, +}; + +struct sssnic_hw_attr { + uint16_t func_idx; + uint8_t pf_idx; + uint8_t pci_idx; + uint8_t vf_off; /* vf offset in pf */ + uint8_t global_vf_off; + uint8_t func_type; + uint8_t af_idx; + uint8_t mf_idx; + uint8_t num_aeq; + uint16_t num_ceq; + uint16_t num_irq; +}; + +struct sssnic_hw { + struct rte_pci_device *pci_dev; + uint8_t *cfg_base_addr; + uint8_t *mgmt_base_addr; + uint8_t *db_base_addr; + uint8_t *db_mem_len; + struct sssnic_hw_attr attr; +}; + +int sssnic_hw_init(struct sssnic_hw *hw); +void sssnic_hw_shutdown(struct sssnic_hw *hw); +void sssnic_msix_state_set(struct sssnic_hw *hw, uint16_t msix_id, int state); + #endif /* _SSSNIC_HW_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_reg.h b/drivers/net/sssnic/base/sssnic_reg.h new file mode 100644 index 0000000000..77d83292eb --- /dev/null +++ b/drivers/net/sssnic/base/sssnic_reg.h @@ -0,0 +1,169 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. + */ + +#ifndef _SSSNIC_REG_H_ +#define _SSSNIC_REG_H_ + +#include + +/* registers of config */ +#define SSSNIC_ATTR0_REG 0x0 +#define SSSNIC_ATTR1_REG 0x4 +#define SSSNIC_ATTR2_REG 0x8 +#define SSSNIC_ATTR3_REG 0xC +#define SSSNIC_ATTR4_REG 0x10 +#define SSSNIC_ATTR5_REG 0x14 +#define SSSNIC_ATTR6_REG 0x18 + +#define SSSNIC_MSIX_CTRL_REG 0x58 + +/* registers of mgmt */ +#define SSSNIC_AF_ELECTION_REG 0x6000 +#define SSSNIC_MF_ELECTION_REG 0x6020 + +struct sssnic_attr0_reg { + union { + uint32_t u32; + struct { + uint32_t func_idx : 12; + uint32_t pf_idx : 5; + uint32_t pci_idx : 3; + uint32_t vf_off : 8; /* vf offset in pf */ + uint32_t func_type : 1; + uint32_t resvd_0 : 4; + }; + }; +}; + +struct sssnic_attr1_reg { + union { + uint32_t u32; + struct { + uint32_t af_idx : 6; + uint32_t resvd_0 : 2; + uint32_t num_aeq : 2; + uint32_t resvd_1 : 20; + uint32_t mgmt_init_status : 1; + uint32_t pf_init_status : 1; + }; + }; +}; + +struct sssnic_attr2_reg { + union { + uint32_t u32; + struct { + uint32_t num_ceq : 9; + uint32_t num_dma_attr : 3; + uint32_t resvd_0 : 4; + uint32_t num_irq : 11; + uint32_t resvd_1 : 5; + }; + }; +}; + +struct sssnic_attr3_reg { + union { + uint32_t u32; + struct { + uint32_t global_vf_off1 : 12; + uint32_t resvd_0 : 4; + uint32_t global_vf_off : 12; /*global vf offset*/ + uint32_t resvd_1 : 4; + }; + }; +}; + +struct sssnic_attr4_reg { + union { + uint32_t u32; + struct { + uint32_t db_ctrl : 1; + uint32_t resvd_0 : 31; + }; + }; +}; + +struct sssnic_attr5_reg { + union { + uint32_t u32; + struct { + uint32_t outbound_ctrl : 1; + uint32_t resvd_0 : 31; + }; + }; +}; + +struct sssnic_attr6_reg { + union { + uint32_t u32; + struct { + uint32_t pf_status : 16; + uint32_t resvd_0 : 6; + uint32_t msix_en : 1; + uint32_t max_queues : 9; + }; + }; +}; + +struct sssnic_af_election_reg { + union { + uint32_t u32; + struct 
{ + uint32_t func_idx : 6; + uint32_t resvd_0 : 26; + }; + }; +}; + +struct sssnic_mf_election_reg { + union { + uint32_t u32; + struct { + uint32_t func_idx : 5; + uint32_t resvd_0 : 27; + }; + }; +}; + +struct sssnic_msix_ctrl_reg { + union { + uint32_t u32; + struct { + uint32_t resend_timer_clr : 1; + uint32_t int_msk_set : 1; + uint32_t int_msk_clr : 1; + uint32_t auto_msk_set : 1; + uint32_t auto_msk_clr : 1; + uint32_t resvd_0 : 17; + uint32_t msxi_idx : 10; + }; + }; +}; + +static inline uint32_t +sssnic_cfg_reg_read(struct sssnic_hw *hw, uint32_t reg) +{ + return rte_be_to_cpu_32(rte_read32(hw->cfg_base_addr + reg)); +} + +static inline void +sssnic_cfg_reg_write(struct sssnic_hw *hw, uint32_t reg, uint32_t val) +{ + rte_write32(rte_cpu_to_be_32(val), hw->cfg_base_addr + reg); +} + +static inline uint32_t +sssnic_mgmt_reg_read(struct sssnic_hw *hw, uint32_t reg) +{ + return rte_be_to_cpu_32(rte_read32(hw->mgmt_base_addr + reg)); +} + +static inline void +sssnic_mgmt_reg_write(struct sssnic_hw *hw, uint32_t reg, uint32_t val) +{ + rte_write32(rte_cpu_to_be_32(val), hw->mgmt_base_addr + reg); +} + +#endif /*_SSSNIC_REG_H_*/ diff --git a/drivers/net/sssnic/meson.build b/drivers/net/sssnic/meson.build index fda65aa380..328bb41436 100644 --- a/drivers/net/sssnic/meson.build +++ b/drivers/net/sssnic/meson.build @@ -13,6 +13,9 @@ if (arch_subdir != 'x86' and arch_subdir != 'arm') or (not dpdk_conf.get('RTE_AR subdir_done() endif +subdir('base') +objs = [base_objs] + sources = files( 'sssnic_ethdev.c', ) diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index 4f8b5c2684..e198b1e1d0 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -7,25 +7,62 @@ #include "sssnic_log.h" #include "base/sssnic_hw.h" +#include "sssnic_ethdev.h" + +static void +sssnic_ethdev_release(struct rte_eth_dev *ethdev) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + + sssnic_hw_shutdown(hw); + rte_free(hw); +} static int sssnic_ethdev_init(struct rte_eth_dev *ethdev) { - RTE_SET_USED(ethdev); + int ret; + struct sssnic_hw *hw; + struct sssnic_netdev *netdev; + struct rte_pci_device *pci_dev; + PMD_INIT_FUNC_TRACE(); - return -EINVAL; + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + pci_dev = RTE_ETH_DEV_TO_PCI(ethdev); + hw = rte_zmalloc("sssnic_hw", sizeof(struct sssnic_hw), 0); + if (hw == NULL) { + PMD_DRV_LOG(ERR, "Failed to alloc memory for hw"); + return -ENOMEM; + } + netdev->hw = hw; + hw->pci_dev = pci_dev; + ret = sssnic_hw_init(hw); + if (ret != 0) { + rte_free(hw); + return ret; + } + + return 0; } static int sssnic_ethdev_uninit(struct rte_eth_dev *ethdev) { - RTE_SET_USED(ethdev); PMD_INIT_FUNC_TRACE(); if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; + /* ethdev port has been released */ + if (ethdev->state == RTE_ETH_DEV_UNUSED) + return 0; + + sssnic_ethdev_release(ethdev); + return -EINVAL; } @@ -35,7 +72,8 @@ sssnic_pci_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev) RTE_SET_USED(pci_drv); PMD_INIT_FUNC_TRACE(); - return rte_eth_dev_pci_generic_probe(pci_dev, 0, sssnic_ethdev_init); + return rte_eth_dev_pci_generic_probe(pci_dev, + sizeof(struct sssnic_netdev), sssnic_ethdev_init); } static int diff --git a/drivers/net/sssnic/sssnic_ethdev.h b/drivers/net/sssnic/sssnic_ethdev.h new file mode 100644 index 0000000000..5d951134cc --- /dev/null +++ b/drivers/net/sssnic/sssnic_ethdev.h @@ -0,0 +1,18 @@ +/* SPDX-License-Identifier: 
BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. + */ + +#ifndef _SSSNIC_ETHDEV_H_ +#define _SSSNIC_ETHDEV_H_ + +struct sssnic_netdev { + void *hw; +}; + +#define SSSNIC_ETHDEV_PRIVATE(eth_dev) \ + ((struct sssnic_netdev *)(eth_dev)->data->dev_private) +#define SSSNIC_NETDEV_TO_HW(netdev) ((struct sssnic_hw *)(netdev)->hw) +#define SSSNIC_ETHDEV_TO_HW(eth_dev) \ + SSSNIC_NETDEV_TO_HW(SSSNIC_ETHDEV_PRIVATE(eth_dev)) + +#endif /*_SSSNIC_ETHDEV_H_*/ From patchwork Fri Sep 1 09:34:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131051 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D46A14221E; Fri, 1 Sep 2023 11:36:29 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 3ACE8402C1; Fri, 1 Sep 2023 11:35:50 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.ramaxel.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id 447DA402B0 for ; Fri, 1 Sep 2023 11:35:48 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZPIm069820; Fri, 1 Sep 2023 17:35:25 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:24 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 05/32] net/sssnic: add event queue Date: Fri, 1 Sep 2023 17:34:47 +0800 Message-ID: <20230901093514.224824-6-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZPIm069820 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Event queue is intended for receiving event from hardware as well as mailbox response message. Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- v4: * Fixed dereferencing type-punned pointer. * Fixed coding style issue of COMPLEX_MACRO. 
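The queue below detects new entries with a phase bit rather than by reading a producer index on every poll: the producer writes a 'wrapped' bit into each descriptor, the consumer keeps the phase value it expects, and it flips that value whenever its consumer index wraps past the queue depth. The self-contained sketch below models only that convention; the types and names are illustrative, not the driver's own structures.

/*
 * Simplified model of the wrapped-bit (phase-bit) polling used by the
 * event queue. Illustrative types only.
 */
#include <stdint.h>
#include <stddef.h>

struct toy_event {
	uint32_t wrapped : 1;	/* phase bit written by the producer */
	uint32_t code    : 7;
	uint32_t resvd   : 24;
};

struct toy_eventq {
	struct toy_event *ring;
	uint32_t depth;
	uint32_t ci;		/* software consumer index */
	uint32_t wrapped;	/* phase the consumer currently expects */
};

/* Return the next new event, or NULL if the queue is currently empty. */
static struct toy_event *
toy_eventq_poll(struct toy_eventq *eq)
{
	struct toy_event *ev = &eq->ring[eq->ci];

	/* An entry is new when its phase differs from the consumer's phase. */
	if (ev->wrapped == eq->wrapped)
		return NULL;

	if (++eq->ci == eq->depth) {
		eq->ci = 0;
		eq->wrapped = !eq->wrapped;	/* crossed the end: flip phase */
	}
	return ev;
}

In the driver itself the descriptor is stored big-endian, so it is converted with rte_be_to_cpu_32() before the wrapped bit is compared, and the consumer index is synced back to hardware through the event queue CI control register.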
--- drivers/net/sssnic/base/meson.build | 1 + drivers/net/sssnic/base/sssnic_eventq.c | 432 ++++++++++++++++++++++++ drivers/net/sssnic/base/sssnic_eventq.h | 84 +++++ drivers/net/sssnic/base/sssnic_hw.c | 9 +- drivers/net/sssnic/base/sssnic_hw.h | 5 + drivers/net/sssnic/base/sssnic_reg.h | 51 +++ drivers/net/sssnic/sssnic_ethdev.c | 1 + 7 files changed, 582 insertions(+), 1 deletion(-) create mode 100644 drivers/net/sssnic/base/sssnic_eventq.c create mode 100644 drivers/net/sssnic/base/sssnic_eventq.h diff --git a/drivers/net/sssnic/base/meson.build b/drivers/net/sssnic/base/meson.build index 3e64112c72..7758faa482 100644 --- a/drivers/net/sssnic/base/meson.build +++ b/drivers/net/sssnic/base/meson.build @@ -3,6 +3,7 @@ sources = [ 'sssnic_hw.c', + 'sssnic_eventq.c' ] c_args = cflags diff --git a/drivers/net/sssnic/base/sssnic_eventq.c b/drivers/net/sssnic/base/sssnic_eventq.c new file mode 100644 index 0000000000..a74b74f756 --- /dev/null +++ b/drivers/net/sssnic/base/sssnic_eventq.c @@ -0,0 +1,432 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../sssnic_log.h" +#include "sssnic_hw.h" +#include "sssnic_reg.h" +#include "sssnic_eventq.h" + +#define SSSNIC_EVENTQ_DEF_DEPTH 64 +#define SSSNIC_EVENTQ_NUM_PAGES 4 +#define SSSNIC_EVENTQ_MAX_PAGE_SZ 0x400000 +#define SSSNIC_EVENTQ_MIN_PAGE_SZ 0x1000 + +#define SSSNIC_EVENT_ADDR(base_addr, event_sz, idx) \ + ((struct sssnic_event *)(((uint8_t *)(base_addr)) + ((idx) * (event_sz)))) + +static inline struct sssnic_event * +sssnic_eventq_peek(struct sssnic_eventq *eq) +{ + uint16_t page = eq->ci / eq->page_len; + uint16_t idx = eq->ci % eq->page_len; + + return SSSNIC_EVENT_ADDR(eq->pages[page]->addr, eq->entry_size, idx); +} + +static inline void +sssnic_eventq_reg_write(struct sssnic_eventq *eq, uint32_t reg, uint32_t val) +{ + sssnic_cfg_reg_write(eq->hw, reg, val); +} + +static inline uint32_t +sssnic_eventq_reg_read(struct sssnic_eventq *eq, uint32_t reg) +{ + return sssnic_cfg_reg_read(eq->hw, reg); +} + +static inline void +sssnic_eventq_reg_write64(struct sssnic_eventq *eq, uint32_t reg, uint64_t val) +{ + sssnic_cfg_reg_write(eq->hw, reg, (uint32_t)((val >> 16) >> 16)); + sssnic_cfg_reg_write(eq->hw, reg + sizeof(uint32_t), (uint32_t)val); +} + +/* all eventq registers that to be access must be selected first */ +static inline void +sssnic_eventq_reg_select(struct sssnic_eventq *eq) +{ + sssnic_eventq_reg_write(eq, SSSNIC_EVENTQ_IDX_SEL_REG, eq->qid); +} + +static const struct rte_memzone * +sssnic_eventq_page_alloc(struct sssnic_eventq *eq, int page_idx) +{ + const struct rte_memzone *mz = NULL; + char mz_name[RTE_MEMZONE_NAMESIZE]; + + snprintf(mz_name, sizeof(mz_name), "sssnic%u_eq%d_page%d", + SSSNIC_ETH_PORT_ID(eq->hw), eq->qid, page_idx); + mz = rte_memzone_reserve_aligned(mz_name, eq->page_size, SOCKET_ID_ANY, + RTE_MEMZONE_IOVA_CONTIG, eq->page_size); + return mz; +} + +static uint32_t +sssnic_eventq_page_size_calc(uint32_t depth, uint32_t entry_size) +{ + uint32_t pages = SSSNIC_EVENTQ_NUM_PAGES; + uint32_t size; + + size = RTE_ALIGN(depth * entry_size, SSSNIC_EVENTQ_MIN_PAGE_SZ); + if (size <= pages * SSSNIC_EVENTQ_MIN_PAGE_SZ) { + /* use minimum page size */ + return SSSNIC_EVENTQ_MIN_PAGE_SZ; + } + + /* Calculate how many pages of minimum size page the big size page covers */ + size = RTE_ALIGN(size / pages, SSSNIC_EVENTQ_MIN_PAGE_SZ); + pages = 
rte_fls_u32(size / SSSNIC_EVENTQ_MIN_PAGE_SZ); + + return SSSNIC_EVENTQ_MIN_PAGE_SZ * pages; +} + +static int +sssnic_eventq_pages_setup(struct sssnic_eventq *eq) +{ + const struct rte_memzone *mz; + struct sssnic_event *ev; + int i, j; + + eq->pages = rte_zmalloc(NULL, + eq->num_pages * sizeof(struct rte_memzone *), 1); + if (eq->pages == NULL) { + PMD_DRV_LOG(ERR, "Could not alloc memory for pages"); + return -ENOMEM; + } + + for (i = 0; i < eq->num_pages; i++) { + mz = sssnic_eventq_page_alloc(eq, i); + if (mz == NULL) { + PMD_DRV_LOG(ERR, + "Could not alloc DMA memory for eventq page %d", + i); + goto alloc_dma_fail; + } + /* init eventq entries */ + for (j = 0; j < eq->page_len; j++) { + ev = SSSNIC_EVENT_ADDR(mz->addr, eq->entry_size, j); + ev->desc.u32 = 0; + } + eq->pages[i] = mz; + sssnic_eventq_reg_write64(eq, + SSSNIC_EVENTQ_PAGE_ADDR_REG + i * sizeof(uint64_t), + mz->iova); + } + + return 0; + +alloc_dma_fail: + while (i--) + rte_memzone_free(eq->pages[i]); + rte_free(eq->pages); + return -ENOMEM; +} + +static void +sssnic_eventq_pages_cleanup(struct sssnic_eventq *eq) +{ + int i; + + if (eq->pages == NULL) + return; + for (i = 0; i < eq->num_pages; i++) + rte_memzone_free(eq->pages[i]); + rte_free(eq->pages); + eq->pages = NULL; +} + +static void +sssnic_eventq_ctrl_setup(struct sssnic_eventq *eq) +{ + struct sssnic_hw *hw = eq->hw; + struct sssnic_eventq_ctrl0_reg ctrl_0; + struct sssnic_eventq_ctrl1_reg ctrl_1; + + ctrl_0.u32 = sssnic_eventq_reg_read(eq, SSSNIC_EVENTQ_CTRL0_REG); + ctrl_0.intr_idx = eq->msix_entry; + ctrl_0.dma_attr = SSSNIC_REG_EVENTQ_DEF_DMA_ATTR; + ctrl_0.pci_idx = hw->attr.pci_idx; + ctrl_0.intr_mode = SSSNIC_REG_EVENTQ_INTR_MODE_0; + sssnic_eventq_reg_write(eq, SSSNIC_EVENTQ_CTRL0_REG, ctrl_0.u32); + + ctrl_1.page_size = rte_log2_u32(eq->page_size >> 12); + ctrl_1.depth = eq->depth; + ctrl_1.entry_size = rte_log2_u32(eq->entry_size >> 5); + sssnic_eventq_reg_write(eq, SSSNIC_EVENTQ_CTRL1_REG, ctrl_1.u32); +} + +/* synchronize current software CI to hardware. + * @ informed: indate event will be informed by interrupt. 
+ * 0: not to be informed + * 1: informed by interrupt + */ +static void +sssnic_eventq_ci_update(struct sssnic_eventq *eq, int informed) +{ + struct sssnic_eventq_ci_ctrl_reg reg; + + reg.u32 = 0; + if (eq->qid == 0) + reg.informed = !!informed; + reg.qid = eq->qid; + reg.ci = eq->ci_wrapped; + sssnic_eventq_reg_write(eq, SSSNIC_EVENTQ_CI_CTRL_REG, reg.u32); +} + +static int +sssnic_eventq_init(struct sssnic_hw *hw, struct sssnic_eventq *eq, uint16_t qid) +{ + int ret; + + if (hw == NULL || eq == NULL) { + PMD_DRV_LOG(ERR, + "Bad parameter for event queue initialization."); + return -EINVAL; + } + + eq->hw = hw; + eq->msix_entry = 0; /* eventq uses msix 0 in PMD driver */ + eq->qid = qid; + eq->depth = SSSNIC_EVENTQ_DEF_DEPTH; + eq->entry_size = SSSNIC_EVENT_SIZE; + eq->page_size = sssnic_eventq_page_size_calc(eq->depth, eq->entry_size); + eq->page_len = eq->page_size / eq->entry_size; + if (eq->page_len & (eq->page_len - 1)) { + PMD_DRV_LOG(ERR, "Invalid page length: %d, must be power of 2", + eq->page_len); + return -EINVAL; + } + eq->num_pages = RTE_ALIGN((eq->depth * eq->entry_size), eq->page_size) / + eq->page_size; + if (eq->num_pages > SSSNIC_EVENTQ_NUM_PAGES) { + PMD_DRV_LOG(ERR, + "Invalid number of pages: %d, can't be more than %d pages.", + eq->num_pages, SSSNIC_EVENTQ_NUM_PAGES); + return -EINVAL; + } + + /* select the eq which registers to be acesss */ + sssnic_eventq_reg_select(eq); + rte_wmb(); + /* clear entries in eventq */ + sssnic_eventq_reg_write(eq, SSSNIC_EVENTQ_CTRL1_REG, 0); + rte_wmb(); + /* reset pi to 0 */ + sssnic_eventq_reg_write(eq, SSSNIC_EVENTQ_PROD_IDX_REG, 0); + + ret = sssnic_eventq_pages_setup(eq); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to setup eventq pages!"); + return ret; + } + sssnic_eventq_ctrl_setup(eq); + sssnic_eventq_ci_update(eq, 1); + if (qid == 0) + sssnic_msix_state_set(eq->hw, 0, SSSNIC_MSIX_ENABLE); + + PMD_DRV_LOG(DEBUG, + "eventq %u: q_depth=%u, entry_size=%u, num_pages=%u, page_size=%u, page_len=%u", + qid, eq->depth, eq->entry_size, eq->num_pages, eq->page_size, + eq->page_len); + + return 0; +} + +static void +sssnic_eventq_shutdown(struct sssnic_eventq *eq) +{ + if (eq->qid == 0) + sssnic_msix_state_set(eq->hw, 0, SSSNIC_MSIX_DISABLE); + + sssnic_eventq_reg_select(eq); + rte_wmb(); + + sssnic_eventq_reg_write(eq, SSSNIC_EVENTQ_CTRL1_REG, 0); + eq->ci = sssnic_eventq_reg_read(eq, SSSNIC_EVENTQ_PROD_IDX_REG); + sssnic_eventq_ci_update(eq, 0); + sssnic_eventq_pages_cleanup(eq); +} + +static void +sssnic_event_be_to_cpu_32(struct sssnic_event *in, struct sssnic_event *out) +{ + uint32_t i; + uint32_t count; + uint32_t *dw_in = (uint32_t *)in; + uint32_t *dw_out = (uint32_t *)out; + + count = SSSNIC_EVENT_SIZE / sizeof(uint32_t); + for (i = 0; i < count; i++) { + *dw_out = rte_be_to_cpu_32(*dw_in); + dw_out++; + dw_in++; + } +} + +static int +sssinc_event_handle(struct sssnic_eventq *eq, struct sssnic_event *event) +{ + struct sssnic_event ev; + sssnic_event_handler_func_t *func; + void *data; + + sssnic_event_be_to_cpu_32(event, &ev); + if (ev.desc.code < SSSNIC_EVENT_CODE_MIN || + ev.desc.code > SSSNIC_EVENT_CODE_MAX) { + PMD_DRV_LOG(ERR, "Event code %d is not supported", + ev.desc.code); + return -1; + } + + func = eq->handlers[ev.desc.code].func; + data = eq->handlers[ev.desc.code].data; + if (func == NULL) { + PMD_DRV_LOG(NOTICE, + "Could not find handler for event qid:%u code:%d", + eq->qid, ev.desc.code); + return -1; + } + + return func(eq, &ev, data); +} + +/* Poll one valid event in timeout_ms */ +static struct 
sssnic_event * +sssnic_eventq_poll(struct sssnic_eventq *eq, uint32_t timeout_ms) +{ + struct sssnic_event *event; + struct sssnic_eventd desc; + uint64_t end; + + if (timeout_ms > 0) + end = rte_get_timer_cycles() + + rte_get_timer_hz() * timeout_ms / 1000; + + do { + event = sssnic_eventq_peek(eq); + desc.u32 = rte_be_to_cpu_32(event->desc.u32); + if (desc.wrapped != eq->wrapped) + return event; + + if (timeout_ms > 0) + rte_delay_us_sleep(1000); + } while ((timeout_ms > 0) && + (((long)(rte_get_timer_cycles() - end)) < 0)); + + return NULL; +} + +/* Take one or more events to handle. */ +int +sssnic_eventq_flush(struct sssnic_hw *hw, uint16_t qid, uint32_t timeout_ms) +{ + int found = 0; + uint32_t i = 0; + int done = 0; + struct sssnic_event *event; + struct sssnic_eventq *eq; + + if (qid >= hw->num_eventqs) { + PMD_DRV_LOG(ERR, + "Bad parameter, event queue id must be less than %u", + hw->num_eventqs); + return -EINVAL; + } + + eq = &hw->eventqs[qid]; + for (i = 0; i < eq->depth; i++) { + event = sssnic_eventq_poll(eq, timeout_ms); + if (event == NULL) + break; + done = sssinc_event_handle(eq, event); + eq->ci++; + if (eq->ci == eq->depth) { + eq->ci = 0; + eq->wrapped = !eq->wrapped; + } + + found++; + if (done == SSSNIC_EVENT_DONE) + break; + } + + SSSNIC_DEBUG("found:%d, done:%d, ci:%u, depth:%u, wrapped:%u", found, + done, eq->ci, eq->depth, eq->wrapped); + + if (!found) + return -ETIME; + + sssnic_eventq_ci_update(eq, 1); + + if (event == NULL || done != SSSNIC_EVENT_DONE) + return -ETIME; + + return 0; +} + +int +sssnic_eventq_all_init(struct sssnic_hw *hw) +{ + struct sssnic_eventq *eventqs; + int num_eventqs; + int i = 0; + int ret; + + PMD_INIT_FUNC_TRACE(); + + num_eventqs = hw->attr.num_aeq; + eventqs = rte_zmalloc(NULL, sizeof(struct sssnic_eventq) * num_eventqs, + 1); + if (eventqs == NULL) { + PMD_DRV_LOG(ERR, "Could not alloc memory for event queue"); + return -ENOMEM; + } + + for (i = 0; i < num_eventqs; i++) { + ret = sssnic_eventq_init(hw, &eventqs[i], i); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to initialize event queue: %d", + i); + goto init_eventq_fail; + } + } + hw->eventqs = eventqs; + hw->num_eventqs = num_eventqs; + + PMD_DRV_LOG(INFO, "Initialized %d event queues", num_eventqs); + + return 0; + +init_eventq_fail: + while (i--) + sssnic_eventq_shutdown(&eventqs[i]); + rte_free(eventqs); + return ret; +} + +void +sssnic_eventq_all_shutdown(struct sssnic_hw *hw) +{ + int i; + + PMD_INIT_FUNC_TRACE(); + + if (hw->eventqs == NULL) + return; + + for (i = 0; i < hw->num_eventqs; i++) + sssnic_eventq_shutdown(&hw->eventqs[i]); + rte_free(hw->eventqs); + hw->eventqs = NULL; +} diff --git a/drivers/net/sssnic/base/sssnic_eventq.h b/drivers/net/sssnic/base/sssnic_eventq.h new file mode 100644 index 0000000000..a196c10f48 --- /dev/null +++ b/drivers/net/sssnic/base/sssnic_eventq.h @@ -0,0 +1,84 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. 
+ */ + +#ifndef _SSSNIC_EVENTQ_H_ +#define _SSSNIC_EVENTQ_H_ + +#define SSSNIC_MAX_NUM_EVENTQ 4 +#define SSSNIC_MIN_NUM_EVENTQ 2 + +#define SSSNIC_EVENT_DESC_SIZE sizeof(uint32_t) +#define SSSNIC_EVENT_SIZE 64 +#define SSSNIC_EVENT_DATA_SIZE (SSSNIC_EVENT_SIZE - SSSNIC_EVENT_DESC_SIZE) + +enum sssnic_event_code { + SSSNIC_EVENT_CODE_RESVD = 0, + SSSNIC_EVENT_FROM_FUNC = 1, /* event from PF and VF */ + SSSNIC_EVENT_FROM_MPU = 2, /* event form management processor unit*/ +}; +#define SSSNIC_EVENT_CODE_MIN SSSNIC_EVENT_FROM_FUNC +#define SSSNIC_EVENT_CODE_MAX SSSNIC_EVENT_FROM_MPU + +struct sssnic_eventq; +struct sssnic_event; + +/* Indicate that sssnic event has been finished to handle */ +#define SSSNIC_EVENT_DONE 1 + +typedef int sssnic_event_handler_func_t(struct sssnic_eventq *eq, + struct sssnic_event *ev, void *data); + +struct sssnic_event_handler { + sssnic_event_handler_func_t *func; + void *data; +}; + +struct sssnic_eventq { + struct sssnic_hw *hw; + uint16_t qid; + uint16_t entry_size; + uint32_t depth; /* max number of entries in eventq */ + uint16_t page_len; /* number of entries in a page */ + uint16_t num_pages; /* number pages to store event entries */ + uint32_t page_size; + const struct rte_memzone **pages; + union { + uint32_t ci_wrapped; + struct { + uint32_t ci : 19; + uint32_t wrapped : 1; + uint32_t resvd : 12; + }; + }; + uint16_t msix_entry; + struct sssnic_event_handler handlers[SSSNIC_EVENT_CODE_MAX + 1]; +}; + +/* event descriptor */ +struct sssnic_eventd { + union { + uint32_t u32; + struct { + uint32_t code : 7; + uint32_t src : 1; + uint32_t size : 8; + uint32_t resvd : 15; + uint32_t wrapped : 1; + }; + }; +}; + +/* event entry */ +struct sssnic_event { + uint8_t data[SSSNIC_EVENT_DATA_SIZE]; + struct sssnic_eventd desc; +}; + +int sssnic_eventq_flush(struct sssnic_hw *hw, uint16_t qid, + uint32_t timeout_ms); + +int sssnic_eventq_all_init(struct sssnic_hw *hw); +void sssnic_eventq_all_shutdown(struct sssnic_hw *hw); + +#endif /* _SSSNIC_EVENTQ_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_hw.c b/drivers/net/sssnic/base/sssnic_hw.c index 8b7bba7644..44e04486a5 100644 --- a/drivers/net/sssnic/base/sssnic_hw.c +++ b/drivers/net/sssnic/base/sssnic_hw.c @@ -9,6 +9,7 @@ #include "../sssnic_log.h" #include "sssnic_hw.h" #include "sssnic_reg.h" +#include "sssnic_eventq.h" static int wait_for_sssnic_hw_ready(struct sssnic_hw *hw) @@ -196,12 +197,18 @@ sssnic_hw_init(struct sssnic_hw *hw) return ret; } + ret = sssnic_eventq_all_init(hw); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to initialize event queues"); + return ret; + } + return -EINVAL; } void sssnic_hw_shutdown(struct sssnic_hw *hw) { - RTE_SET_USED(hw); PMD_INIT_FUNC_TRACE(); + sssnic_eventq_all_shutdown(hw); } diff --git a/drivers/net/sssnic/base/sssnic_hw.h b/drivers/net/sssnic/base/sssnic_hw.h index 65d4d562b4..6caf3a6d66 100644 --- a/drivers/net/sssnic/base/sssnic_hw.h +++ b/drivers/net/sssnic/base/sssnic_hw.h @@ -51,8 +51,13 @@ struct sssnic_hw { uint8_t *db_base_addr; uint8_t *db_mem_len; struct sssnic_hw_attr attr; + struct sssnic_eventq *eventqs; + uint8_t num_eventqs; + uint16_t eth_port_id; }; +#define SSSNIC_ETH_PORT_ID(hw) ((hw)->eth_port_id) + int sssnic_hw_init(struct sssnic_hw *hw); void sssnic_hw_shutdown(struct sssnic_hw *hw); void sssnic_msix_state_set(struct sssnic_hw *hw, uint16_t msix_id, int state); diff --git a/drivers/net/sssnic/base/sssnic_reg.h b/drivers/net/sssnic/base/sssnic_reg.h index 77d83292eb..e38d39a691 100644 --- a/drivers/net/sssnic/base/sssnic_reg.h +++ 
b/drivers/net/sssnic/base/sssnic_reg.h @@ -18,6 +18,14 @@ #define SSSNIC_MSIX_CTRL_REG 0x58 +#define SSSNIC_EVENTQ_CI_CTRL_REG 0x50 +#define SSSNIC_EVENTQ_IDX_SEL_REG 0x210 +#define SSSNIC_EVENTQ_CTRL0_REG 0x200 +#define SSSNIC_EVENTQ_CTRL1_REG 0x204 +#define SSSNIC_EVENTQ_CONS_IDX_REG 0x208 +#define SSSNIC_EVENTQ_PROD_IDX_REG 0x20c +#define SSSNIC_EVENTQ_PAGE_ADDR_REG 0x240 + /* registers of mgmt */ #define SSSNIC_AF_ELECTION_REG 0x6000 #define SSSNIC_MF_ELECTION_REG 0x6020 @@ -142,6 +150,49 @@ struct sssnic_msix_ctrl_reg { }; }; +#define SSSNIC_REG_EVENTQ_INTR_MODE_0 0 /* armed mode */ +#define SSSNIC_REG_EVENTQ_INTR_MODE_1 1 /* allway mode */ +#define SSSNIC_REG_EVENTQ_DEF_DMA_ATTR 0 +struct sssnic_eventq_ctrl0_reg { + union { + uint32_t u32; + struct { + uint32_t intr_idx : 10; + uint32_t resvd_0 : 2; + uint32_t dma_attr : 6; + uint32_t resvd_1 : 2; + uint32_t pci_idx : 1; + uint32_t resvd_2 : 8; + uint32_t intr_mode : 1; + }; + }; +}; + +struct sssnic_eventq_ctrl1_reg { + union { + uint32_t u32; + struct { + uint32_t depth : 21; + uint32_t resvd_0 : 3; + uint32_t entry_size : 2; + uint32_t resvd_1 : 2; + uint32_t page_size : 4; + }; + }; +}; + +struct sssnic_eventq_ci_ctrl_reg { + union { + uint32_t u32; + struct { + uint32_t ci : 21; + uint32_t informed : 1; + uint32_t resvd_0 : 8; + uint32_t qid : 2; + }; + }; +}; + static inline uint32_t sssnic_cfg_reg_read(struct sssnic_hw *hw, uint32_t reg) { diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index e198b1e1d0..460ff604aa 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -40,6 +40,7 @@ sssnic_ethdev_init(struct rte_eth_dev *ethdev) } netdev->hw = hw; hw->pci_dev = pci_dev; + hw->eth_port_id = ethdev->data->port_id; ret = sssnic_hw_init(hw); if (ret != 0) { rte_free(hw); From patchwork Fri Sep 1 09:34:48 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131050 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id E48274221E; Fri, 1 Sep 2023 11:36:20 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id EE4BF402BE; Fri, 1 Sep 2023 11:35:48 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.unionmem.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id A714D402A0 for ; Fri, 1 Sep 2023 11:35:45 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZP7A069821; Fri, 1 Sep 2023 17:35:25 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:25 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 06/32] net/sssnic/base: add message definition and utility Date: Fri, 1 Sep 2023 17:34:48 +0800 Message-ID: <20230901093514.224824-7-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local 
(172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZP7A069821 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan sssnic message is used to encapsulate sssnic command for transmission between driver and firmware. sssnic message is sent by driver via mail box and is received by driver via event queue. Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- v2: * Removed error.h from including files. --- drivers/net/sssnic/base/meson.build | 3 +- drivers/net/sssnic/base/sssnic_eventq.c | 29 +++ drivers/net/sssnic/base/sssnic_hw.c | 15 +- drivers/net/sssnic/base/sssnic_hw.h | 2 + drivers/net/sssnic/base/sssnic_msg.c | 254 ++++++++++++++++++++++++ drivers/net/sssnic/base/sssnic_msg.h | 166 ++++++++++++++++ 6 files changed, 467 insertions(+), 2 deletions(-) create mode 100644 drivers/net/sssnic/base/sssnic_msg.c create mode 100644 drivers/net/sssnic/base/sssnic_msg.h diff --git a/drivers/net/sssnic/base/meson.build b/drivers/net/sssnic/base/meson.build index 7758faa482..dd4dd08fc1 100644 --- a/drivers/net/sssnic/base/meson.build +++ b/drivers/net/sssnic/base/meson.build @@ -3,7 +3,8 @@ sources = [ 'sssnic_hw.c', - 'sssnic_eventq.c' + 'sssnic_eventq.c', + 'sssnic_msg.c', ] c_args = cflags diff --git a/drivers/net/sssnic/base/sssnic_eventq.c b/drivers/net/sssnic/base/sssnic_eventq.c index a74b74f756..e90d24bb6b 100644 --- a/drivers/net/sssnic/base/sssnic_eventq.c +++ b/drivers/net/sssnic/base/sssnic_eventq.c @@ -14,6 +14,7 @@ #include "../sssnic_log.h" #include "sssnic_hw.h" #include "sssnic_reg.h" +#include "sssnic_msg.h" #include "sssnic_eventq.h" #define SSSNIC_EVENTQ_DEF_DEPTH 64 @@ -184,6 +185,32 @@ sssnic_eventq_ci_update(struct sssnic_eventq *eq, int informed) sssnic_eventq_reg_write(eq, SSSNIC_EVENTQ_CI_CTRL_REG, reg.u32); } +static int +sssnic_event_default_handler_func(struct sssnic_eventq *eq, + struct sssnic_event *ev, __rte_unused void *data) +{ + struct sssnic_hw *hw; + int ret; + + hw = eq->hw; + ret = sssnic_msg_rx_handle(hw, (struct sssnic_msg_hdr *)(ev->data)); + if (ret != SSSNIC_MSG_DONE) + return -1; + + return SSSNIC_EVENT_DONE; +} + +static void +sssnic_eventq_handlers_init(struct sssnic_eventq *eq) +{ + int i; + + for (i = SSSNIC_EVENT_CODE_MIN; i <= SSSNIC_EVENT_CODE_MAX; i++) { + eq->handlers[i].func = sssnic_event_default_handler_func; + eq->handlers[i].data = NULL; + } +} + static int sssnic_eventq_init(struct sssnic_hw *hw, struct sssnic_eventq *eq, uint16_t qid) { @@ -230,6 +257,8 @@ sssnic_eventq_init(struct sssnic_hw *hw, struct sssnic_eventq *eq, uint16_t qid) PMD_DRV_LOG(ERR, "Failed to setup eventq pages!"); return ret; } + + sssnic_eventq_handlers_init(eq); sssnic_eventq_ctrl_setup(eq); sssnic_eventq_ci_update(eq, 1); if (qid == 0) diff --git a/drivers/net/sssnic/base/sssnic_hw.c b/drivers/net/sssnic/base/sssnic_hw.c index 44e04486a5..387c823c7e 100644 --- a/drivers/net/sssnic/base/sssnic_hw.c +++ b/drivers/net/sssnic/base/sssnic_hw.c @@ -10,6 +10,7 @@ #include "sssnic_hw.h" #include "sssnic_reg.h" #include "sssnic_eventq.h" +#include "sssnic_msg.h" static int wait_for_sssnic_hw_ready(struct sssnic_hw *hw) @@ -197,18 +198,30 @@ sssnic_hw_init(struct sssnic_hw *hw) return ret; } + ret = sssnic_msg_inbox_init(hw); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to initialize message inbox."); + return ret; + } + ret = sssnic_eventq_all_init(hw); if (ret != 
0) { PMD_DRV_LOG(ERR, "Failed to initialize event queues"); - return ret; + goto eventq_init_fail; } return -EINVAL; + +eventq_init_fail: + sssnic_msg_inbox_shutdown(hw); + return ret; } void sssnic_hw_shutdown(struct sssnic_hw *hw) { PMD_INIT_FUNC_TRACE(); + sssnic_eventq_all_shutdown(hw); + sssnic_msg_inbox_shutdown(hw); } diff --git a/drivers/net/sssnic/base/sssnic_hw.h b/drivers/net/sssnic/base/sssnic_hw.h index 6caf3a6d66..38fb9ac1ac 100644 --- a/drivers/net/sssnic/base/sssnic_hw.h +++ b/drivers/net/sssnic/base/sssnic_hw.h @@ -52,11 +52,13 @@ struct sssnic_hw { uint8_t *db_mem_len; struct sssnic_hw_attr attr; struct sssnic_eventq *eventqs; + struct sssnic_msg_inbox *msg_inbox; uint8_t num_eventqs; uint16_t eth_port_id; }; #define SSSNIC_ETH_PORT_ID(hw) ((hw)->eth_port_id) +#define SSSNIC_MPU_FUNC_IDX 0x1fff int sssnic_hw_init(struct sssnic_hw *hw); void sssnic_hw_shutdown(struct sssnic_hw *hw); diff --git a/drivers/net/sssnic/base/sssnic_msg.c b/drivers/net/sssnic/base/sssnic_msg.c new file mode 100644 index 0000000000..4b98fee75b --- /dev/null +++ b/drivers/net/sssnic/base/sssnic_msg.c @@ -0,0 +1,254 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. + */ + +#include +#include +#include +#include + +#include "../sssnic_log.h" +#include "sssnic_hw.h" +#include "sssnic_msg.h" + +/* Receive message segment based on message header + * @param msghdr + * message header + * @param msg + * message where segment store + * @return + * SSSNIC_MSG_REJECT - Message segment was not received because of bad + * parameter of message header. + * SSSNIC_MSG_ACCEPT - Message segment was received. + * SSSNIC_MSG_DONE - The last message segment was received. + */ +static int +sssnic_msg_rx_seg(struct sssnic_msg_hdr *msghdr, struct sssnic_msg *msg) +{ + if (msghdr->seg_id > SSSNIC_MSG_MAX_SEG_ID || + msghdr->seg_len > SSSNIC_MSG_MAX_SEG_SIZE) { + PMD_DRV_LOG(ERR, + "Bad segment id or segment size of message header"); + return SSSNIC_MSG_REJECT; + } + + if (msghdr->seg_id == 0) { + msg->command = msghdr->command; + msg->type = msghdr->type; + msg->module = msghdr->module; + msg->id = msghdr->id; + } else { + if (msghdr->seg_id != (msg->seg + 1) || msghdr->id != msg->id || + msghdr->module != msg->module || + msghdr->command != msg->command) { + PMD_DRV_LOG(ERR, "Bad parameters of message header"); + return SSSNIC_MSG_REJECT; + } + } + rte_memcpy(msg->data_buf + (SSSNIC_MSG_MAX_SEG_SIZE * msghdr->seg_id), + SSSNIC_MSG_DATA(msghdr), msghdr->seg_len); + + if (!msghdr->last_seg) { + msg->seg = msghdr->seg_id; + return SSSNIC_MSG_ACCEPT; + } + + msg->ack = !msghdr->no_response; + msg->status = msghdr->status; + msg->data_len = msghdr->length; + msg->func = msghdr->function; + msg->seg = SSSNIC_MSG_MAX_SEG_ID; + + return SSSNIC_MSG_DONE; +} + +static int +sssnic_msg_buf_alloc(struct sssnic_msg *msg, size_t size) +{ + msg->data_buf = rte_zmalloc("sssnic_msg_data", size, 1); + if (msg->data_buf == NULL) { + PMD_DRV_LOG(ERR, "Could not all message data buffer!"); + return -ENOMEM; + } + + return 0; +} + +static void +sssnic_msg_buf_free(struct sssnic_msg *msg) +{ + rte_free(msg->data_buf); +} + +int +sssnic_msg_rx_handle(struct sssnic_hw *hw, struct sssnic_msg_hdr *msghdr) +{ + struct sssnic_msg *msg; + struct sssnic_msg_handler *msg_handler; + int msg_src; + int msg_chan; + int msg_type; + int ret; + + msg_src = SSSNIC_MSG_SRC(msghdr->function); + msg_chan = msghdr->channel; + msg_type = msghdr->type; + msg = SSSNIC_MSG_LOCATE(hw, msg_chan, 
msg_type, msg_src); + + ret = sssnic_msg_rx_seg(msghdr, msg); + if (ret != SSSNIC_MSG_DONE) + return ret; + + msg_handler = SSSNIC_MSG_HANDLER(hw, msg_chan, msg_type); + if (msg_handler->func == NULL) { + PMD_DRV_LOG(NOTICE, + "No message handler, message channel:%d, type:%d.", + msg_chan, msg_type); + return SSSNIC_MSG_REJECT; + } + ret = msg_handler->func(msg, msg_chan, msg_handler->priv); + + return ret; +} + +int +sssnic_msg_rx_handler_register(struct sssnic_hw *hw, + enum sssnic_msg_chann_id chann_id, enum sssnic_msg_type msg_type, + sssnic_msg_handler_func_t *func, void *priv) +{ + struct sssnic_msg_handler *msg_handler; + + if (chann_id >= SSSNIC_MSG_CHAN_COUNT || + msg_type >= SSSNIC_MSG_TYPE_CONUT || func == NULL) { + PMD_DRV_LOG(ERR, + "Bad parameters for register rx message handler."); + return -EINVAL; + } + + msg_handler = SSSNIC_MSG_HANDLER(hw, chann_id, msg_type); + if (msg_handler->func != NULL) + PMD_DRV_LOG(WARNING, + "RX message handler has existed, chann_id:%u, msg_type:%u", + chann_id, msg_type); + + msg_handler->func = func; + msg_handler->priv = priv; + + return 0; +} + +static int +sssnic_msg_channel_init(struct sssnic_hw *hw, struct sssnic_msg_channel *chan) +{ + struct sssnic_msg *msg; + int i; + int ret; + + PMD_INIT_FUNC_TRACE(); + + for (i = 0; i < SSSNIC_MSG_TYPE_CONUT; i++) { + msg = &chan->msg[i][SSSNIC_MSG_SRC_MPU]; + ret = sssnic_msg_buf_alloc(msg, SSSNIC_MSG_BUF_SIZE); + if (ret) { + PMD_DRV_LOG(ERR, + "Could not alloc MPU message buf for message inbox channel %d of sssnic%u.", + SSSNIC_ETH_PORT_ID(hw), chan->id); + goto msg_buf_alloc_fail; + } + msg = &chan->msg[i][SSSNIC_MSG_SRC_PF]; + ret = sssnic_msg_buf_alloc(msg, SSSNIC_MSG_BUF_SIZE); + if (ret) { + PMD_DRV_LOG(ERR, + "Could not alloc PF message buf for message inbox channel %d of sssnic%u.", + SSSNIC_ETH_PORT_ID(hw), chan->id); + msg = &chan->msg[i][SSSNIC_MSG_SRC_MPU]; + sssnic_msg_buf_free(msg); + goto msg_buf_alloc_fail; + } + } + + return 0; + +msg_buf_alloc_fail: + while (i--) { + msg = &chan->msg[i][SSSNIC_MSG_SRC_MPU]; + sssnic_msg_buf_free(msg); + msg = &chan->msg[i][SSSNIC_MSG_SRC_PF]; + sssnic_msg_buf_free(msg); + } + return ret; +} + +static void +sssnic_msg_channel_shutdown(__rte_unused struct sssnic_hw *hw, + struct sssnic_msg_channel *chan) +{ + struct sssnic_msg *msg; + int i; + + PMD_INIT_FUNC_TRACE(); + + for (i = 0; i < SSSNIC_MSG_TYPE_CONUT; i++) { + msg = &chan->msg[i][SSSNIC_MSG_SRC_MPU]; + sssnic_msg_buf_free(msg); + msg = &chan->msg[i][SSSNIC_MSG_SRC_PF]; + sssnic_msg_buf_free(msg); + } +} + +int +sssnic_msg_inbox_init(struct sssnic_hw *hw) +{ + struct sssnic_msg_inbox *inbox; + struct sssnic_msg_channel *chan; + int i; + int ret; + + PMD_INIT_FUNC_TRACE(); + + inbox = rte_zmalloc(NULL, sizeof(struct sssnic_msg_inbox), 1); + if (inbox == NULL) { + PMD_DRV_LOG(ERR, "Could not alloc memory for message inbox"); + return -ENOMEM; + } + + inbox->hw = hw; + hw->msg_inbox = inbox; + + for (i = 0; i < SSSNIC_MSG_CHAN_COUNT; i++) { + chan = &inbox->channel[i]; + ret = sssnic_msg_channel_init(hw, chan); + if (ret) { + PMD_DRV_LOG(ERR, + "Failed to initialize channel%u of message inbox", + i); + goto init_msg_channel_fail; + } + chan->id = i; + } + + return 0; + +init_msg_channel_fail: + while (i--) { + chan = &inbox->channel[i]; + sssnic_msg_channel_shutdown(hw, chan); + } + rte_free(inbox); + return ret; +} + +void +sssnic_msg_inbox_shutdown(struct sssnic_hw *hw) +{ + struct sssnic_msg_channel *chan; + int i; + + PMD_INIT_FUNC_TRACE(); + + for (i = 0; i < SSSNIC_MSG_CHAN_COUNT; i++) { 
+ chan = &hw->msg_inbox->channel[i]; + sssnic_msg_channel_shutdown(hw, chan); + } + rte_free(hw->msg_inbox); +} diff --git a/drivers/net/sssnic/base/sssnic_msg.h b/drivers/net/sssnic/base/sssnic_msg.h new file mode 100644 index 0000000000..6580f4bb37 --- /dev/null +++ b/drivers/net/sssnic/base/sssnic_msg.h @@ -0,0 +1,166 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. + */ + +#ifndef _SSSNIC_MSG_H_ +#define _SSSNIC_MSG_H_ + +#include + +enum sssnic_msg_chann_id { + SSSNIC_MSG_CHAN_MPU = 0, /* Message comes from MPU directly */ + SSSNIC_MSG_CHAN_MBOX = 1, /* Message comes from MBOX */ + SSSNIC_MSG_CHAN_COUNT = 2, +}; + +enum sssnic_msg_src { + SSSNIC_MSG_SRC_MPU, /* mbox message from MPU */ + SSSNIC_MSG_SRC_PF, /* mbox message from PF */ + SSSNIC_MSG_SRC_COUNT, +}; + +#define SSSNIC_MSG_SRC(func_id) \ + (((func_id) == SSSNIC_MPU_FUNC_IDX) ? SSSNIC_MSG_SRC_MPU : \ + SSSNIC_MSG_SRC_PF) + +enum sssnic_msg_type { + SSSNIC_MSG_TYPE_REQ, /* Request message*/ + SSSNIC_MSG_TYPE_RESP, /* Response message */ + SSSNIC_MSG_TYPE_CONUT, +}; + +#define SSSNIC_MSG_TRANS_MODE_DMA 1 +#define SSSNIC_MSG_TRANS_MODE_INLINE 0 + +/* hardware format of sssnic message header */ +struct sssnic_msg_hdr { + union { + uint64_t u64; + struct { + uint32_t dw0; + uint32_t dw1; + }; + struct { + /* Id of the function the message comes from or send to */ + uint64_t function : 13; + /* indicate the result of command */ + uint64_t status : 2; + /* Mbox channel that message receive from or send to */ + uint64_t channel : 1; + /* ID of the EventQ that response message is informed by */ + uint64_t eventq : 2; + /* Message ID */ + uint64_t id : 4; + /* Command ID of the message */ + uint64_t command : 10; + /* total length of message data */ + uint64_t length : 11; + /* Module of message comes from or send to */ + uint64_t module : 5; + /* Length of this data segment */ + uint64_t seg_len : 6; + /* needless response indication */ + uint64_t no_response : 1; + /* Message data transmission mode, 0: inline 1:dma */ + uint64_t trans_mode : 1; + /* Segment sequence of message data */ + uint64_t seg_id : 6; + /* Last data segment indication, 1: Last segment */ + uint64_t last_seg : 1; + /* Message type, see sssnic_mbox_msg_type */ + uint64_t type : 1; + }; + }; +}; +#define SSSNIC_MSG_HDR_SIZE sizeof(struct sssnic_msg_hdr) +#define SSSNIC_MSG_DATA(hdr) (((uint8_t *)hdr) + SSSNIC_MSG_HDR_SIZE) + +#define SSSNIC_MSG_BUF_SIZE 2048UL +#define SSSNIC_MSG_MAX_SEG_SIZE 48 +#define SSSNIC_MSG_MIN_SGE_ID 0 +#define SSSNIC_MSG_MAX_SEG_ID 42 +#define SSSNIC_MSG_MAX_DATA_SIZE (SSSNIC_MSG_BUF_SIZE - SSSNIC_MSG_HDR_SIZE) + +struct sssnic_msg { + /* message command ID */ + uint16_t command; + /* function id of that message send to or receive from */ + uint16_t func; + /* module id of that message send to or receive from */ + uint32_t module; + /* message is request or response*/ + enum sssnic_msg_type type; + /* message data */ + uint8_t *data_buf; + /* data length */ + uint32_t data_len; + /* need response indication */ + uint8_t ack; + /* the id of last received segment*/ + uint8_t seg; + /* indicate the result of request in response message, request failed if not 0 */ + uint8_t status; + /* generated by sender if dir == SSSNIC_MSG_TYPE_REQ */ + uint8_t id; +}; + +#define SSSNIC_MSG_REJECT -1 +#define SSSNIC_MSG_ACCEPT 0 +#define SSSNIC_MSG_DONE 1 + +/* sssnic message handler function + * @return + * SSSNIC_MSG_REJECT - Message failed to handle + * SSSNIC_MSG_DONE - Message 
succeed to handle + */ +typedef int sssnic_msg_handler_func_t(struct sssnic_msg *msg, + enum sssnic_msg_chann_id chan_id, void *priv); + +struct sssnic_msg_handler { + sssnic_msg_handler_func_t *func; + void *priv; +}; + +struct sssnic_msg_channel { + enum sssnic_msg_chann_id id; + struct sssnic_msg msg[SSSNIC_MSG_TYPE_CONUT][SSSNIC_MSG_SRC_COUNT]; + struct sssnic_msg_handler handler[SSSNIC_MSG_TYPE_CONUT]; +}; + +struct sssnic_msg_inbox { + struct sssnic_hw *hw; + struct sssnic_msg_channel channel[SSSNIC_MSG_CHAN_COUNT]; +}; + +#define SSSNIC_MSG_INBOX(hw) ((hw)->msg_inbox) +#define SSSNIC_MSG_CHANNEL(hw, chann_id) \ + (&(SSSNIC_MSG_INBOX(hw)->channel[chann_id])) +#define SSSNIC_MSG_LOCATE(hw, chann_id, type, src) \ + (&SSSNIC_MSG_CHANNEL(hw, chann_id)->msg[type][src]) +#define SSSNIC_MSG_HANDLER(hw, chann_id, type) \ + (&SSSNIC_MSG_CHANNEL(hw, chann_id)->handler[type]) + +static inline void +sssnic_msg_init(struct sssnic_msg *msg, uint8_t *data, uint32_t data_len, + uint16_t command, uint16_t func, uint32_t module, + enum sssnic_msg_type type) +{ + memset(msg, 0, sizeof(struct sssnic_msg)); + msg->data_buf = data; + msg->data_len = data_len; + msg->command = command; + msg->module = module; + msg->func = func; + msg->type = type; +} + +int sssnic_msg_rx_handler_register(struct sssnic_hw *hw, + enum sssnic_msg_chann_id chann_id, enum sssnic_msg_type msg_type, + sssnic_msg_handler_func_t *func, void *priv); +int sssnic_msg_rx_handle(struct sssnic_hw *hw, struct sssnic_msg_hdr *msghdr); +int sssnic_msg_inbox_init(struct sssnic_hw *hw); +void sssnic_msg_inbox_shutdown(struct sssnic_hw *hw); +int sssnic_msg_rx(struct sssnic_msg_hdr *msghdr, uint16_t max_seg_len, + uint16_t max_seg_id, struct sssnic_msg *msg); + +#endif /* _SSSNIC_MSG_H_ */ From patchwork Fri Sep 1 09:34:49 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131053 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id BE8054221E; Fri, 1 Sep 2023 11:36:49 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 35825402B6; Fri, 1 Sep 2023 11:35:55 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.ramaxel.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id CCF56402C4 for ; Fri, 1 Sep 2023 11:35:50 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZQwI069823; Fri, 1 Sep 2023 17:35:26 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:25 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 07/32] net/sssnic/base: add mailbox support Date: Fri, 1 Sep 2023 17:34:49 +0800 Message-ID: <20230901093514.224824-8-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 
3819ZQwI069823 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Mailbox is a message channel used to communicate between PF and VF as well as driver and hardware functions. Mailbox messages are received by driver through event queue, and sent by driver through registers of mailbox. There are two transfer modes for sending mailbox message, one is DMA mode used to send message to PF, another is inline mode used to send message to VF. Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- v3: * Fixed dereferencing type-punned pointer. --- drivers/net/sssnic/base/meson.build | 1 + drivers/net/sssnic/base/sssnic_hw.c | 10 + drivers/net/sssnic/base/sssnic_hw.h | 4 + drivers/net/sssnic/base/sssnic_mbox.c | 615 ++++++++++++++++++++++++++ drivers/net/sssnic/base/sssnic_mbox.h | 45 ++ drivers/net/sssnic/base/sssnic_misc.h | 11 + drivers/net/sssnic/base/sssnic_reg.h | 47 ++ 7 files changed, 733 insertions(+) create mode 100644 drivers/net/sssnic/base/sssnic_mbox.c create mode 100644 drivers/net/sssnic/base/sssnic_mbox.h create mode 100644 drivers/net/sssnic/base/sssnic_misc.h diff --git a/drivers/net/sssnic/base/meson.build b/drivers/net/sssnic/base/meson.build index dd4dd08fc1..4abd1a0daf 100644 --- a/drivers/net/sssnic/base/meson.build +++ b/drivers/net/sssnic/base/meson.build @@ -5,6 +5,7 @@ sources = [ 'sssnic_hw.c', 'sssnic_eventq.c', 'sssnic_msg.c', + 'sssnic_mbox.c', ] c_args = cflags diff --git a/drivers/net/sssnic/base/sssnic_hw.c b/drivers/net/sssnic/base/sssnic_hw.c index 387c823c7e..ff527b2c7f 100644 --- a/drivers/net/sssnic/base/sssnic_hw.c +++ b/drivers/net/sssnic/base/sssnic_hw.c @@ -11,6 +11,7 @@ #include "sssnic_reg.h" #include "sssnic_eventq.h" #include "sssnic_msg.h" +#include "sssnic_mbox.h" static int wait_for_sssnic_hw_ready(struct sssnic_hw *hw) @@ -210,8 +211,16 @@ sssnic_hw_init(struct sssnic_hw *hw) goto eventq_init_fail; } + ret = sssnic_mbox_init(hw); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to initialize mailbox"); + goto mbox_init_fail; + } + return -EINVAL; +mbox_init_fail: + sssnic_eventq_all_shutdown(hw); eventq_init_fail: sssnic_msg_inbox_shutdown(hw); return ret; @@ -222,6 +231,7 @@ sssnic_hw_shutdown(struct sssnic_hw *hw) { PMD_INIT_FUNC_TRACE(); + sssnic_mbox_shutdown(hw); sssnic_eventq_all_shutdown(hw); sssnic_msg_inbox_shutdown(hw); } diff --git a/drivers/net/sssnic/base/sssnic_hw.h b/drivers/net/sssnic/base/sssnic_hw.h index 38fb9ac1ac..41e65f5880 100644 --- a/drivers/net/sssnic/base/sssnic_hw.h +++ b/drivers/net/sssnic/base/sssnic_hw.h @@ -53,12 +53,16 @@ struct sssnic_hw { struct sssnic_hw_attr attr; struct sssnic_eventq *eventqs; struct sssnic_msg_inbox *msg_inbox; + struct sssnic_mbox *mbox; uint8_t num_eventqs; uint16_t eth_port_id; }; +#define SSSNIC_FUNC_IDX(hw) ((hw)->attr.func_idx) #define SSSNIC_ETH_PORT_ID(hw) ((hw)->eth_port_id) #define SSSNIC_MPU_FUNC_IDX 0x1fff +#define SSSNIC_FUNC_TYPE(hw) ((hw)->attr.func_type) +#define SSSNIC_AF_FUNC_IDX(hw) ((hw)->attr.af_idx) int sssnic_hw_init(struct sssnic_hw *hw); void sssnic_hw_shutdown(struct sssnic_hw *hw); diff --git a/drivers/net/sssnic/base/sssnic_mbox.c b/drivers/net/sssnic/base/sssnic_mbox.c new file mode 100644 index 0000000000..02957137ea --- /dev/null +++ b/drivers/net/sssnic/base/sssnic_mbox.c @@ -0,0 +1,615 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information 
Technology Co., Ltd. + */ + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "../sssnic_log.h" +#include "sssnic_hw.h" +#include "sssnic_reg.h" +#include "sssnic_misc.h" +#include "sssnic_eventq.h" +#include "sssnic_mbox.h" + +#define SSSNIC_MBOX_SEND_RESULT_SIZE 16 +#define SSSNIC_MBOX_SEND_BUF_SIZE 2048UL +#define SSSNIC_MBOX_RESP_MSG_EVENTQ 1 +#define SSSNIC_MBOX_SEND_DONE_TIMEOUT 500000 /* uint is 10us */ +#define SSSNIC_MBOX_DEF_REQ_TIMEOUT 4000 /* millisecond */ +#define SSSNIC_MBOX_REQ_ID_MASK 0xf /* request id only 4 bits*/ + +struct sssnic_sendbox { + struct sssnic_mbox *mbox; + /* Send data memory */ + uint8_t *data; + /* Send result DMA memory zone */ + const struct rte_memzone *result_mz; + /* Send result DMA virtual address */ + volatile uint64_t *result_addr; + /* DMA buffer mz */ + const struct rte_memzone *buf_mz; + /* DMA buffer virtual address */ + uint8_t *buf_addr; + pthread_mutex_t lock; +}; + +struct sssnic_mbox_msg_dma_desc { + /* 32bit xor checksum for DMA data */ + uint32_t checksum; + /* dword of high DMA address */ + uint32_t dma_addr_hi; + /* dword of low DMA address */ + uint32_t dma_addr_lo; + /* DMA data length */ + uint32_t len; + uint32_t resvd[2]; +}; +#define SSSNIC_MBOX_MSG_DMA_DESC_SIZE 16 + +struct sssnic_mbox_send_result { + union { + uint16_t u16; + struct { + /* SSSNIC_MBOX_SEND_STATUS_xx */ + uint16_t status : 8; + uint16_t errcode : 8; + }; + }; +}; + +#define SSSNIC_MBOX_SEND_STATUS_DONE 0xff +#define SSSNIC_MBOX_SEND_STATUS_ERR 0xfe +#define SSSNIC_MBOX_SEND_ERR_NONE 0x0 + +static inline uint16_t +sssnic_sendbox_result_get(struct sssnic_sendbox *sendbox) +{ + uint64_t result = rte_be_to_cpu_64(rte_read64(sendbox->result_addr)); + return (uint16_t)(result & 0xffff); +} + +static inline void +sssnic_sendbox_result_clear(struct sssnic_sendbox *sendbox) +{ + rte_write64(0, sendbox->result_addr); +} + +/* Wait send status to done */ +static int +sssnic_sendbox_result_wait(struct sssnic_sendbox *sendbox, uint32_t timeout) +{ + int ret; + struct sssnic_mbox_send_result result; + + do { + result.u16 = sssnic_sendbox_result_get(sendbox); + if (result.status == SSSNIC_MBOX_SEND_STATUS_DONE) { + return 0; + } else if (result.status == SSSNIC_MBOX_SEND_STATUS_ERR) { + PMD_DRV_LOG(ERR, + "Failed to send mbox segment data, error code=%u", + result.errcode); + ret = -EFAULT; + goto err_return; + } + if (timeout == 0) + break; + rte_delay_us(10); + } while (--timeout); + + PMD_DRV_LOG(ERR, "Mbox segment data sent time out"); + ret = -ETIMEDOUT; + +err_return: + PMD_DRV_LOG(ERR, "MBOX_SEND_CTRL0_REG=0x%x, SEND_CTRL1_REG=0x%x", + sssnic_cfg_reg_read(sendbox->mbox->hw, + SSSNIC_MBOX_SEND_CTRL0_REG), + sssnic_cfg_reg_read(sendbox->mbox->hw, + SSSNIC_MBOX_SEND_CTRL1_REG)); + + return ret; +} + +static void +sssnic_mbox_send_ctrl_set(struct sssnic_mbox *mbox, uint16_t func, + uint16_t dst_eq, uint16_t len) +{ + struct sssnic_mbox_send_ctrl0_reg ctrl_0; + struct sssnic_mbox_send_ctrl1_reg ctrl_1; + + ctrl_1.u32 = 0; + ctrl_1.dma_attr = 0; + ctrl_1.ordering = 0; + ctrl_1.dst_eq = dst_eq; + ctrl_1.src_eq = 0; + ctrl_1.tx_size = RTE_ALIGN(len + SSSNIC_MSG_HDR_SIZE, 4) >> 2; + ctrl_1.wb = 1; + sssnic_cfg_reg_write(mbox->hw, SSSNIC_MBOX_SEND_CTRL1_REG, ctrl_1.u32); + rte_wmb(); + + if (SSSNIC_FUNC_TYPE(mbox->hw) == SSSNIC_FUNC_TYPE_VF && + func != SSSNIC_MPU_FUNC_IDX) { + if (func == SSSNIC_AF_FUNC_IDX(mbox->hw)) + func = 1; + else + func = 0; + } + + ctrl_0.u32 = 0; + ctrl_0.func = func; + ctrl_0.src_eq_en = 0; + ctrl_0.tx_status 
= SSSNIC_REG_MBOX_TX_READY; + sssnic_cfg_reg_write(mbox->hw, SSSNIC_MBOX_SEND_CTRL0_REG, ctrl_0.u32); +} + +static void +sssnic_mbox_state_set(struct sssnic_mbox *mbox, enum sssnic_mbox_state state) +{ + rte_spinlock_lock(&mbox->state_lock); + mbox->state = state; + rte_spinlock_unlock(&mbox->state_lock); +} + +static void +sssnic_sendbox_write(struct sssnic_sendbox *sendbox, uint16_t offset, + uint8_t *data, uint16_t data_len) +{ + uint32_t *send_addr; + uint32_t send_data; + uint32_t remain_data[3] = { 0 }; + uint16_t remain; + uint16_t i; + uint16_t len; + uint16_t num_dw; + + len = data_len; + remain = len & 0x3; + if (remain > 0) { + len = len - remain; + for (i = 0; i < remain; i++) + remain_data[i] = data[len + i]; + } + num_dw = len / sizeof(uint32_t); + send_addr = (uint32_t *)(sendbox->data + offset); + + SSSNIC_DEBUG("data_buf=%p, data_len=%u, aligned_len=%u, remain=%u, " + "num_dw=%u send_addr=%p", + data, data_len, len, remain, num_dw, send_addr); + + for (i = 0; i < num_dw; i++) { + send_data = *(((uint32_t *)data) + i); + rte_write32(rte_cpu_to_be_32(send_data), send_addr + i); + } + if (remain > 0) { + send_data = remain_data[0] << 24; + send_data |= remain_data[1] << 16; + send_data |= remain_data[2] << 8; + rte_write32(send_data, send_addr + i); + } +} + +static inline void +sssnic_mbox_msg_hdr_init(struct sssnic_msg_hdr *msghdr, struct sssnic_msg *msg) +{ + msghdr->u64 = 0; + if (msg == NULL) + return; + if (msg->func == SSSNIC_MPU_FUNC_IDX) { + msghdr->trans_mode = SSSNIC_MSG_TRANS_MODE_DMA; + msghdr->length = SSSNIC_MBOX_MSG_DMA_DESC_SIZE; + msghdr->seg_len = SSSNIC_MBOX_MSG_DMA_DESC_SIZE; + msghdr->last_seg = 1; + } else { + msghdr->trans_mode = SSSNIC_MSG_TRANS_MODE_INLINE; + msghdr->length = msg->data_len; + if (msg->data_len > SSSNIC_MSG_MAX_SEG_SIZE) { + msghdr->seg_len = SSSNIC_MSG_MAX_SEG_SIZE; + msghdr->last_seg = 0; + } else { + msghdr->seg_len = msg->data_len; + msghdr->last_seg = 1; + } + } + msghdr->module = msg->module; + msghdr->no_response = !msg->ack; + msghdr->seg_id = SSSNIC_MSG_MIN_SGE_ID; + msghdr->type = msg->type; + msghdr->command = msg->command; + msghdr->id = msg->id; + msghdr->eventq = SSSNIC_MBOX_RESP_MSG_EVENTQ; + msghdr->channel = SSSNIC_MSG_CHAN_MBOX; + msghdr->status = msg->status; +} + +/* Calculate data checksum with XOR */ +static uint32_t +sssnic_mbox_dma_data_csum(uint32_t *data, uint16_t data_len) +{ + uint32_t xor = 0x5a5a5a5a; + uint16_t dw = data_len / sizeof(uint32_t); + uint16_t i; + + for (i = 0; i < dw; i++) + xor ^= data[i]; + return xor; +} + +static int +sssnic_mbox_dma_send(struct sssnic_mbox *mbox, struct sssnic_msg *msg) +{ + int ret; + struct sssnic_mbox_msg_dma_desc dma_desc = { 0 }; + struct sssnic_msg_hdr msghdr; + struct sssnic_sendbox *sendbox = mbox->sendbox; + + /* Init DMA description */ + dma_desc.checksum = sssnic_mbox_dma_data_csum((uint32_t *)msg->data_buf, + msg->data_len); + dma_desc.dma_addr_hi = (uint32_t)((sendbox->buf_mz->iova >> 16) >> 16); + dma_desc.dma_addr_lo = (uint32_t)(sendbox->buf_mz->iova); + dma_desc.len = msg->data_len; + /* Copy message data to DMA buffer */ + rte_memcpy(sendbox->buf_addr, msg->data_buf, msg->data_len); + /* Init message header */ + sssnic_mbox_msg_hdr_init(&msghdr, msg); + msghdr.function = SSSNIC_FUNC_IDX(mbox->hw); + /* Clear send result */ + sssnic_sendbox_result_clear(sendbox); + /* write mbox message header */ + sssnic_sendbox_write(sendbox, 0, (uint8_t *)&msghdr, + SSSNIC_MSG_HDR_SIZE); + /* write DMA description*/ + sssnic_sendbox_write(sendbox, 
SSSNIC_MSG_HDR_SIZE, (void *)&dma_desc, + sizeof(struct sssnic_mbox_msg_dma_desc)); + /* mbox send control set */ + sssnic_mbox_send_ctrl_set(mbox, msg->func, + msg->type == SSSNIC_MSG_TYPE_REQ ? 0 : + SSSNIC_MBOX_RESP_MSG_EVENTQ, + SSSNIC_MBOX_MSG_DMA_DESC_SIZE); + + rte_wmb(); + /* Wait for send status becomes done */ + ret = sssnic_sendbox_result_wait(sendbox, + SSSNIC_MBOX_SEND_DONE_TIMEOUT); + if (ret != 0) + PMD_DRV_LOG(ERR, "Failed to send mbox DMA data"); + + return ret; +} + +static int +sssnic_mbox_inline_send(struct sssnic_mbox *mbox, struct sssnic_msg *msg) +{ + int ret; + uint16_t remain; + uint16_t send; + struct sssnic_msg_hdr msghdr; + struct sssnic_sendbox *sendbox = mbox->sendbox; + + /* Init message header */ + sssnic_mbox_msg_hdr_init(&msghdr, msg); + send = 0; + remain = msg->data_len; + do { + /* Clear send result */ + sssnic_sendbox_result_clear(sendbox); + /* write mbox message header */ + sssnic_sendbox_write(sendbox, 0, (uint8_t *)&msghdr, + SSSNIC_MSG_HDR_SIZE); + /* write mbox message data */ + sssnic_sendbox_write(sendbox, SSSNIC_MSG_HDR_SIZE, + msg->data_buf + send, msghdr.seg_len); + /* mbox send control set */ + sssnic_mbox_send_ctrl_set(mbox, msg->func, + msg->type == SSSNIC_MSG_TYPE_REQ ? + 0 : + SSSNIC_MBOX_RESP_MSG_EVENTQ, + msghdr.seg_len); + + rte_wmb(); + /* Wait for send status becomes done */ + ret = sssnic_sendbox_result_wait(sendbox, + SSSNIC_MBOX_SEND_DONE_TIMEOUT); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox inline data"); + return ret; + } + /*last segment has been sent*/ + if (msghdr.last_seg) + break; + + remain -= SSSNIC_MSG_MAX_SEG_SIZE; + send += SSSNIC_MSG_MAX_SEG_SIZE; + if (remain <= SSSNIC_MSG_MAX_SEG_SIZE) { + msghdr.seg_len = remain; + msghdr.last_seg = 1; + } + msghdr.seg_id++; + } while (remain > 0); + + return 0; +} + +static int +sssnic_sendbox_init(struct sssnic_mbox *mbox) +{ + int ret; + struct sssnic_sendbox *sendbox; + struct sssnic_hw *hw; + char m_name[RTE_MEMZONE_NAMESIZE]; + + PMD_INIT_FUNC_TRACE(); + + hw = mbox->hw; + + sendbox = rte_zmalloc(NULL, sizeof(struct sssnic_sendbox), 1); + if (sendbox == NULL) { + PMD_DRV_LOG(ERR, "Could not alloc memory for sendbox"); + return -ENOMEM; + } + + hw = mbox->hw; + mbox->sendbox = sendbox; + sendbox->mbox = mbox; + + snprintf(m_name, sizeof(m_name), "sssnic%u_mbox_send_result", + SSSNIC_ETH_PORT_ID(hw)); + sendbox->result_mz = rte_memzone_reserve_aligned(m_name, + SSSNIC_MBOX_SEND_RESULT_SIZE, SOCKET_ID_ANY, + RTE_MEMZONE_IOVA_CONTIG, RTE_CACHE_LINE_SIZE); + if (sendbox->result_mz == NULL) { + PMD_DRV_LOG(ERR, "Could not alloc memzone for %s", m_name); + ret = -ENOMEM; + goto alloc_send_result_fail; + } + sssnic_cfg_reg_write(hw, SSSNIC_MBOX_SEND_RESULT_ADDR_H_REG, + SSSNIC_UPPER_32_BITS(sendbox->result_mz->iova)); + sssnic_cfg_reg_write(hw, SSSNIC_MBOX_SEND_RESULT_ADDR_L_REG, + SSSNIC_LOWER_32_BITS(sendbox->result_mz->iova)); + sendbox->result_addr = sendbox->result_mz->addr; + + snprintf(m_name, sizeof(m_name), "sssnic%u_mbox_sendbuf", + SSSNIC_ETH_PORT_ID(hw)); + sendbox->buf_mz = rte_memzone_reserve_aligned(m_name, + SSSNIC_MBOX_SEND_BUF_SIZE, SOCKET_ID_ANY, + RTE_MEMZONE_IOVA_CONTIG, SSSNIC_MBOX_SEND_BUF_SIZE); + if (sendbox->buf_mz == NULL) { + PMD_DRV_LOG(ERR, "Could not alloc memzone for %s", m_name); + ret = -ENOMEM; + goto alloc_send_buf_fail; + }; + sendbox->buf_addr = sendbox->buf_mz->addr; + + sendbox->data = hw->cfg_base_addr + SSSNIC_MBOX_SEND_DATA_BASE_REG; + + pthread_mutex_init(&sendbox->lock, NULL); + + return 0; + +alloc_send_buf_fail: + 
rte_memzone_free(sendbox->result_mz); +alloc_send_result_fail: + rte_free(sendbox); + return ret; +} + +static void +sssnic_sendbox_shutdown(struct sssnic_mbox *mbox) +{ + struct sssnic_sendbox *sendbox = mbox->sendbox; + + PMD_INIT_FUNC_TRACE(); + + rte_memzone_free(sendbox->buf_mz); + sssnic_cfg_reg_write(mbox->hw, SSSNIC_MBOX_SEND_RESULT_ADDR_H_REG, 0); + sssnic_cfg_reg_write(mbox->hw, SSSNIC_MBOX_SEND_RESULT_ADDR_L_REG, 0); + rte_memzone_free(sendbox->result_mz); + pthread_mutex_destroy(&sendbox->lock); + rte_free(sendbox); +} + +static int +sssnic_mbox_response_handle(struct sssnic_msg *msg, + __rte_unused enum sssnic_msg_chann_id chan_id, void *priv) +{ + int ret; + struct sssnic_mbox *mbox = priv; + ; + + rte_spinlock_lock(&mbox->state_lock); + if (msg->id == mbox->req_id && + mbox->state == SSSNIC_MBOX_STATE_RUNNING) { + mbox->state = SSSNIC_MBOX_STATE_READY; + ret = SSSNIC_MSG_DONE; + } else { + PMD_DRV_LOG(ERR, + "Failed to handle mbox response message, msg_id=%u, " + "req_id=%u, msg_status=%u, mbox_state=%u", + msg->id, mbox->req_id, msg->status, mbox->state); + ret = SSSNIC_MSG_REJECT; + } + rte_spinlock_unlock(&mbox->state_lock); + + return ret; +} + +static int +sssnic_mbox_msg_tx(struct sssnic_mbox *mbox, struct sssnic_msg *msg) +{ + int ret; + + if (mbox == NULL || msg == NULL || msg->data_buf == NULL || + msg->data_len == 0 || + msg->data_len > SSSNIC_MSG_MAX_DATA_SIZE) { + PMD_DRV_LOG(ERR, "Bad parameter for mbox message tx"); + return -EINVAL; + } + + SSSNIC_DEBUG("command=%u, func=%u module=%u, type=%u, ack=%u, seq=%u, " + "status=%u, id=%u data_buf=%p, data_len=%u", + msg->command, msg->func, msg->module, msg->type, msg->ack, + msg->seg, msg->status, msg->id, msg->data_buf, msg->data_len); + + pthread_mutex_lock(&mbox->sendbox->lock); + if (msg->func == SSSNIC_MPU_FUNC_IDX) + ret = sssnic_mbox_dma_send(mbox, msg); + else + ret = sssnic_mbox_inline_send(mbox, msg); + pthread_mutex_unlock(&mbox->sendbox->lock); + + return ret; +} + +static int +sssnic_mbox_send_internal(struct sssnic_mbox *mbox, struct sssnic_msg *msg, + uint8_t *resp_data, uint32_t *resp_data_len, uint32_t timeout_ms) +{ + int ret; + struct sssnic_msg *resp_msg = NULL; + + if (resp_data != NULL) { + /* the function of request message equls to response message */ + resp_msg = SSSNIC_MSG_LOCATE(mbox->hw, SSSNIC_MSG_CHAN_MBOX, + SSSNIC_MSG_TYPE_RESP, SSSNIC_MSG_SRC(msg->func)); + mbox->req_id++; + mbox->req_id &= SSSNIC_MBOX_REQ_ID_MASK; + msg->id = mbox->req_id; + msg->ack = 1; + sssnic_mbox_state_set(mbox, SSSNIC_MBOX_STATE_RUNNING); + } + ret = sssnic_mbox_msg_tx(mbox, msg); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to transmit mbox message, ret=%d", + ret); + if (resp_data != NULL) + sssnic_mbox_state_set(mbox, SSSNIC_MBOX_STATE_FAILED); + return ret; + } + + if (resp_data == NULL) + return 0; + + ret = sssnic_eventq_flush(mbox->hw, SSSNIC_MBOX_RESP_MSG_EVENTQ, + timeout_ms ? 
timeout_ms : SSSNIC_MBOX_DEF_REQ_TIMEOUT); + if (ret != 0) { + PMD_DRV_LOG(ERR, "No response message, ret=%d", ret); + sssnic_mbox_state_set(mbox, SSSNIC_MBOX_STATE_TIMEOUT); + return ret; + } + if (resp_msg->module != msg->module || + resp_msg->command != msg->command) { + PMD_DRV_LOG(ERR, + "Received invalid response message, module=%x, command=%x, expected message module=%x, command=%x", + resp_msg->module, resp_msg->command, msg->module, + msg->command); + sssnic_mbox_state_set(mbox, SSSNIC_MBOX_STATE_FAILED); + return ret; + } + sssnic_mbox_state_set(mbox, SSSNIC_MBOX_STATE_READY); + + if (resp_msg->status != 0) { + PMD_DRV_LOG(ERR, "Bad response status"); + return -EFAULT; + } + + if (*resp_data_len < resp_msg->data_len) { + PMD_DRV_LOG(ERR, + "Invalid response data size %u, expected less than %u for module %x command %x", + resp_msg->data_len, *resp_data_len, msg->module, + msg->command); + return -EFAULT; + } + + rte_memcpy(resp_data, resp_msg->data_buf, resp_msg->data_len); + *resp_data_len = resp_msg->data_len; + return 0; +} + +int +sssnic_mbox_send(struct sssnic_hw *hw, struct sssnic_msg *msg, + uint8_t *resp_data, uint32_t *resp_data_len, uint32_t timeout_ms) +{ + int ret; + struct sssnic_mbox *mbox; + + if (hw == NULL || msg == NULL || + (resp_data != NULL && resp_data_len == NULL)) { + PMD_DRV_LOG(ERR, "Bad parameter for mbox request"); + return -EINVAL; + } + + mbox = hw->mbox; + + if (resp_data != NULL) { + ret = pthread_mutex_lock(&mbox->req_lock); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to lock mbox request lock"); + return ret; + } + } + ret = sssnic_mbox_send_internal(mbox, msg, resp_data, resp_data_len, + timeout_ms); + + if (resp_data != NULL) + pthread_mutex_unlock(&mbox->req_lock); + + return ret; +} + +int +sssnic_mbox_init(struct sssnic_hw *hw) +{ + int ret; + struct sssnic_mbox *mbox; + + PMD_INIT_FUNC_TRACE(); + + mbox = rte_zmalloc(NULL, sizeof(struct sssnic_mbox), 1); + if (mbox == NULL) { + PMD_DRV_LOG(ERR, "Could not alloc memory for mailbox struct"); + return -ENOMEM; + } + + pthread_mutex_init(&mbox->req_lock, NULL); + rte_spinlock_init(&mbox->state_lock); + + mbox->hw = hw; + hw->mbox = mbox; + ret = sssnic_sendbox_init(mbox); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to initialize sendbox!"); + goto sendbox_init_fail; + } + + sssnic_msg_rx_handler_register(hw, SSSNIC_MSG_CHAN_MBOX, + SSSNIC_MSG_TYPE_RESP, sssnic_mbox_response_handle, mbox); + + return 0; + +sendbox_init_fail: + pthread_mutex_destroy(&mbox->req_lock); + rte_free(mbox); + return ret; +} + +void +sssnic_mbox_shutdown(struct sssnic_hw *hw) +{ + struct sssnic_mbox *mbox = hw->mbox; + + PMD_INIT_FUNC_TRACE(); + + if (mbox == NULL) + return; + + sssnic_sendbox_shutdown(mbox); + pthread_mutex_destroy(&mbox->req_lock); + rte_free(mbox); +} diff --git a/drivers/net/sssnic/base/sssnic_mbox.h b/drivers/net/sssnic/base/sssnic_mbox.h new file mode 100644 index 0000000000..00fa02ea78 --- /dev/null +++ b/drivers/net/sssnic/base/sssnic_mbox.h @@ -0,0 +1,45 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. 
+ */ + +#ifndef _SSSNIC_MBOX_H_ +#define _SSSNIC_MBOX_H_ + +#include + +#include "sssnic_msg.h" + +enum sssnic_mbox_state { + /* Mbox is sending message or waiting for response */ + SSSNIC_MBOX_STATE_RUNNING, + /* Waiting for response timed out*/ + SSSNIC_MBOX_STATE_TIMEOUT, + /* Mbox failed to send message */ + SSSNIC_MBOX_STATE_FAILED, + /* Response is ready */ + SSSNIC_MBOX_STATE_READY, + /* Mbox is idle, it can send message */ + SSSNIC_MBOX_STATE_IDLE, +}; + +struct sssnic_sendbox; + +struct sssnic_mbox { + struct sssnic_hw *hw; + /* just be used for sending request msg*/ + pthread_mutex_t req_lock; + /* request msg id*/ + uint8_t req_id; + struct sssnic_sendbox *sendbox; + /*current state*/ + enum sssnic_mbox_state state; + rte_spinlock_t state_lock; +}; + +int sssnic_mbox_send(struct sssnic_hw *hw, struct sssnic_msg *msg, + uint8_t *resp_data, uint32_t *resp_data_len, uint32_t timeout_ms); + +int sssnic_mbox_init(struct sssnic_hw *hw); +void sssnic_mbox_shutdown(struct sssnic_hw *hw); + +#endif /* _SSSNIC_MBOX_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_misc.h b/drivers/net/sssnic/base/sssnic_misc.h new file mode 100644 index 0000000000..ac1bbd9c73 --- /dev/null +++ b/drivers/net/sssnic/base/sssnic_misc.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. + */ + +#ifndef _SSSNIC_MISC_H_ +#define _SSSNIC_MISC_H_ + +#define SSSNIC_LOWER_32_BITS(x) ((uint32_t)(x)) +#define SSSNIC_UPPER_32_BITS(x) ((uint32_t)(((x) >> 16) >> 16)) + +#endif /* _SSSNIC_MISC_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_reg.h b/drivers/net/sssnic/base/sssnic_reg.h index e38d39a691..859654087d 100644 --- a/drivers/net/sssnic/base/sssnic_reg.h +++ b/drivers/net/sssnic/base/sssnic_reg.h @@ -26,6 +26,12 @@ #define SSSNIC_EVENTQ_PROD_IDX_REG 0x20c #define SSSNIC_EVENTQ_PAGE_ADDR_REG 0x240 +#define SSSNIC_MBOX_SEND_DATA_BASE_REG 0x80 +#define SSSNIC_MBOX_SEND_CTRL0_REG 0x100 +#define SSSNIC_MBOX_SEND_CTRL1_REG 0x104 +#define SSSNIC_MBOX_SEND_RESULT_ADDR_H_REG 0x108 +#define SSSNIC_MBOX_SEND_RESULT_ADDR_L_REG 0x10c + /* registers of mgmt */ #define SSSNIC_AF_ELECTION_REG 0x6000 #define SSSNIC_MF_ELECTION_REG 0x6020 @@ -193,6 +199,47 @@ struct sssnic_eventq_ci_ctrl_reg { }; }; +#define SSSNIC_REG_MBOX_TX_DONE 0 /* Mailbox transmission is done */ +#define SSSNIC_REG_MBOX_TX_READY 1 /* Mailbox is ready to transmit */ +struct sssnic_mbox_send_ctrl0_reg { + union { + uint32_t u32; + struct { + /* enable to inform source eventq if tx done */ + uint32_t src_eq_en : 1; + /* mailbox tx result, see SSSNIC_REG_MBOX_TX_XX */ + uint32_t tx_status : 1; + uint32_t resvd0 : 14; + /* destination function where the mbox send to */ + uint32_t func : 13; + uint32_t resvd1 : 3; + }; + }; +}; + +struct sssnic_mbox_send_ctrl1_reg { + union { + uint32_t u32; + struct { + uint32_t resvd0 : 10; + /* Destination eventq in the mgmt cpu */ + uint32_t dst_eq : 2; + /* eventq that will be informed if tx done */ + uint32_t src_eq : 2; + uint32_t dma_attr : 6; + /* mailbox message size include header and body + * must 4byte align and unit is 4byte + */ + uint32_t tx_size : 5; + uint32_t ordering : 2; + uint32_t resvd1 : 1; + /*write result back to DMA address of sending result */ + uint32_t wb : 1; + uint32_t resvd2 : 3; + }; + }; +}; + static inline uint32_t sssnic_cfg_reg_read(struct sssnic_hw *hw, uint32_t reg) { From patchwork Fri Sep 1 09:34:50 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit 
X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131052 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3F7E64221E; Fri, 1 Sep 2023 11:36:42 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id DF5B9402C4; Fri, 1 Sep 2023 11:35:52 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.unionmem.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id 66913402B4 for ; Fri, 1 Sep 2023 11:35:51 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZQA2069833; Fri, 1 Sep 2023 17:35:26 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:25 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 08/32] net/sssnic/base: add work queue Date: Fri, 1 Sep 2023 17:34:50 +0800 Message-ID: <20230901093514.224824-9-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZQA2069833 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Work queue is used to maintain hardware queue information by driver, it is usually used in control queue, rx queue and tx queue. Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- v2: * Removed error.h from including files. --- drivers/net/sssnic/base/meson.build | 1 + drivers/net/sssnic/base/sssnic_workq.c | 141 +++++++++++++++++++++++++ drivers/net/sssnic/base/sssnic_workq.h | 108 +++++++++++++++++++ 3 files changed, 250 insertions(+) create mode 100644 drivers/net/sssnic/base/sssnic_workq.c create mode 100644 drivers/net/sssnic/base/sssnic_workq.h diff --git a/drivers/net/sssnic/base/meson.build b/drivers/net/sssnic/base/meson.build index 4abd1a0daf..7c23a82ff3 100644 --- a/drivers/net/sssnic/base/meson.build +++ b/drivers/net/sssnic/base/meson.build @@ -6,6 +6,7 @@ sources = [ 'sssnic_eventq.c', 'sssnic_msg.c', 'sssnic_mbox.c', + 'sssnic_workq.c', ] c_args = cflags diff --git a/drivers/net/sssnic/base/sssnic_workq.c b/drivers/net/sssnic/base/sssnic_workq.c new file mode 100644 index 0000000000..25b7585246 --- /dev/null +++ b/drivers/net/sssnic/base/sssnic_workq.c @@ -0,0 +1,141 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. 
+ */ + +#include +#include +#include +#include + +#include "../sssnic_log.h" +#include "sssnic_workq.h" + +/* Consume num_entries and increase CI + * Return the first entry address of previous CI position + */ +void * +sssnic_workq_consume(struct sssnic_workq *workq, uint16_t num_entries, + uint16_t *ci) +{ + void *e; + uint16_t current_ci; + + if (workq->idle_entries + num_entries > workq->num_entries) + return NULL; + + current_ci = sssnic_workq_ci_get(workq); + e = (void *)sssnic_workq_entry_get(workq, current_ci); + workq->idle_entries += num_entries; + workq->ci += num_entries; + if (ci != NULL) + *ci = current_ci; + + return e; +} + +/* Produce num_entries and increase pi. + * Return the first entry address of previous PI position + */ +void * +sssnic_workq_produce(struct sssnic_workq *workq, uint16_t num_entries, + uint16_t *pi) +{ + void *e; + uint16_t current_pi; + + if (workq->idle_entries < num_entries) + return NULL; + + current_pi = sssnic_workq_pi_get(workq); + e = (void *)sssnic_workq_entry_get(workq, current_pi); + workq->idle_entries -= num_entries; + workq->pi += num_entries; + if (pi != NULL) + *pi = current_pi; + + return e; +} + +static int +sssnic_workq_init(struct sssnic_workq *workq, const char *name, int socket_id, + uint32_t entry_size, uint32_t depth) +{ + char zname[RTE_MEMZONE_NAMESIZE]; + + if (!rte_is_power_of_2(entry_size)) { + PMD_DRV_LOG(ERR, + "The entry size(%u) of workq(%s) is not power of 2", + entry_size, name); + return -EINVAL; + } + + if (!rte_is_power_of_2(depth)) { + PMD_DRV_LOG(ERR, "The depth(%u) of workq(%s) is not power of 2", + depth, name); + return -EINVAL; + } + + workq->buf_size = entry_size * depth; + workq->entry_size = entry_size; + workq->entry_shift = rte_log2_u32(entry_size); + workq->num_entries = depth; + workq->idle_entries = depth; + workq->index_mask = depth - 1; + + snprintf(zname, sizeof(zname), "%s_mz", name); + workq->buf_mz = rte_memzone_reserve_aligned(zname, workq->buf_size, + socket_id, RTE_MEMZONE_IOVA_CONTIG, RTE_PGSIZE_256K); + if (workq->buf_mz == NULL) { + PMD_DRV_LOG(ERR, "Failed to alloc DMA memory for %s", name); + return -ENOMEM; + } + workq->buf_addr = workq->buf_mz->addr; + workq->buf_phyaddr = workq->buf_mz->iova; + + return 0; +} + +static void +sssnic_workq_cleanup(struct sssnic_workq *workq) +{ + if (workq != NULL && workq->buf_mz != NULL) + rte_memzone_free(workq->buf_mz); +} + +/* Cleanup a work queue and free it*/ +void +sssnic_workq_destroy(struct sssnic_workq *workq) +{ + if (workq != NULL) { + sssnic_workq_cleanup(workq); + rte_free(workq); + } +} + +/*Create a work queue and initialize*/ +struct sssnic_workq * +sssnic_workq_new(const char *name, int socket_id, uint32_t entry_size, + uint32_t depth) +{ + int ret; + struct sssnic_workq *workq; + + if (name == NULL) { + PMD_DRV_LOG(ERR, "Bad parameter, workq name is NULL"); + return NULL; + } + + workq = rte_zmalloc(name, sizeof(struct sssnic_workq), 0); + if (workq == NULL) { + PMD_DRV_LOG(ERR, "Could not alloc memory for %s", name); + return NULL; + } + ret = sssnic_workq_init(workq, name, socket_id, entry_size, depth); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to initialize %s", name); + rte_free(workq); + return NULL; + } + + return workq; +} diff --git a/drivers/net/sssnic/base/sssnic_workq.h b/drivers/net/sssnic/base/sssnic_workq.h new file mode 100644 index 0000000000..470aef6409 --- /dev/null +++ b/drivers/net/sssnic/base/sssnic_workq.h @@ -0,0 +1,108 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC 
Information Technology Co., Ltd. + */ + +#ifndef _SSSNIC_WORKQ_H_ +#define _SSSNIC_WORKQ_H_ + +struct sssnic_workq { + /* DMA buffer of entries*/ + const struct rte_memzone *buf_mz; + /* Virtual address of DMA buffer */ + uint8_t *buf_addr; + /* Physic address of DMA buffer */ + uint64_t buf_phyaddr; + /* DMA buffer size */ + uint32_t buf_size; + /* element size */ + uint32_t entry_size; + /* number of bits of entry size */ + uint16_t entry_shift; + /* Max number of entries in buf */ + uint16_t num_entries; + /* Number of entries not be used */ + uint16_t idle_entries; + /* Consumer index */ + uint16_t ci; + /* Producer index */ + uint16_t pi; + /* CI and PI mask */ + uint16_t index_mask; +} __rte_cache_aligned; + +#define SSSNIC_WORKQ_ENTRY_CAST(workq, idx, type) \ + (((type *)((workq)->buf_addr)) + (idx)) +#define SSSNIC_WORKQ_BUF_PHYADDR(workq) ((workq)->buf_phyaddr) + +static inline void * +sssnic_workq_entry_get(struct sssnic_workq *workq, uint32_t index) +{ + return (void *)(workq->buf_addr + (index << workq->entry_shift)); +} + +/* Return the entry address of current CI position. */ +static inline void * +sssnic_workq_peek(struct sssnic_workq *workq) +{ + if ((workq->idle_entries + 1) > workq->num_entries) + return NULL; + + return sssnic_workq_entry_get(workq, workq->ci & workq->index_mask); +} + +static inline uint16_t +sssnic_workq_num_used_entries(struct sssnic_workq *workq) +{ + return workq->num_entries - workq->idle_entries; +} + +static inline uint16_t +sssnic_workq_num_idle_entries(struct sssnic_workq *workq) +{ + return workq->idle_entries; +} + +static inline uint16_t +sssnic_workq_ci_get(struct sssnic_workq *workq) +{ + return workq->ci & workq->index_mask; +} + +static inline uint16_t +sssnic_workq_pi_get(struct sssnic_workq *workq) +{ + return workq->pi & workq->index_mask; +} + +static inline void +sssnic_workq_consume_fast(struct sssnic_workq *workq, uint16_t num_entries) +{ + workq->idle_entries += num_entries; + workq->ci += num_entries; +} + +static inline void +sssnic_workq_produce_fast(struct sssnic_workq *workq, uint16_t num_entries) +{ + workq->idle_entries -= num_entries; + workq->pi += num_entries; +} + +static inline void +sssnic_workq_reset(struct sssnic_workq *workq) +{ + workq->ci = 0; + workq->pi = 0; + workq->idle_entries = workq->num_entries; +} + +void *sssnic_workq_consume(struct sssnic_workq *workq, uint16_t num_entries, + uint16_t *ci); +void *sssnic_workq_produce(struct sssnic_workq *workq, uint16_t num_entries, + uint16_t *pi); + +struct sssnic_workq *sssnic_workq_new(const char *name, int socket_id, + uint32_t entry_size, uint32_t depth); +void sssnic_workq_destroy(struct sssnic_workq *workq); + +#endif /* _SSSNIC_WORKQ_H_ */ From patchwork Fri Sep 1 09:34:51 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131055 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C78C94221E; Fri, 1 Sep 2023 11:37:04 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8D93A402D4; Fri, 1 Sep 2023 11:35:57 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.unionmem.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id 010D8402B3 for ; Fri, 1 Sep 2023 11:35:53 +0200 (CEST) Received: from 
V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZR2p069834; Fri, 1 Sep 2023 17:35:27 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:26 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 09/32] net/sssnic/base: add control queue Date: Fri, 1 Sep 2023 17:34:51 +0800 Message-ID: <20230901093514.224824-10-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZR2p069834 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Control queue is used for communication between driver and datapath code of firmware. Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- v2: * Fixed variable 'cmd_len' is uninitialized when used. --- drivers/net/sssnic/base/meson.build | 2 + drivers/net/sssnic/base/sssnic_api.c | 102 +++++ drivers/net/sssnic/base/sssnic_api.h | 23 ++ drivers/net/sssnic/base/sssnic_cmd.h | 114 ++++++ drivers/net/sssnic/base/sssnic_ctrlq.c | 521 +++++++++++++++++++++++++ drivers/net/sssnic/base/sssnic_ctrlq.h | 58 +++ drivers/net/sssnic/base/sssnic_hw.c | 149 +++++++ drivers/net/sssnic/base/sssnic_hw.h | 8 + 8 files changed, 977 insertions(+) create mode 100644 drivers/net/sssnic/base/sssnic_api.c create mode 100644 drivers/net/sssnic/base/sssnic_api.h create mode 100644 drivers/net/sssnic/base/sssnic_cmd.h create mode 100644 drivers/net/sssnic/base/sssnic_ctrlq.c create mode 100644 drivers/net/sssnic/base/sssnic_ctrlq.h diff --git a/drivers/net/sssnic/base/meson.build b/drivers/net/sssnic/base/meson.build index 7c23a82ff3..e93ca7b24b 100644 --- a/drivers/net/sssnic/base/meson.build +++ b/drivers/net/sssnic/base/meson.build @@ -7,6 +7,8 @@ sources = [ 'sssnic_msg.c', 'sssnic_mbox.c', 'sssnic_workq.c', + 'sssnic_ctrlq.c', + 'sssnic_api.c', ] c_args = cflags diff --git a/drivers/net/sssnic/base/sssnic_api.c b/drivers/net/sssnic/base/sssnic_api.c new file mode 100644 index 0000000000..51a59f0f25 --- /dev/null +++ b/drivers/net/sssnic/base/sssnic_api.c @@ -0,0 +1,102 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. 
+ */ + +#include +#include +#include +#include + +#include "../sssnic_log.h" +#include "sssnic_hw.h" +#include "sssnic_cmd.h" +#include "sssnic_mbox.h" +#include "sssnic_api.h" + +int +sssnic_msix_attr_get(struct sssnic_hw *hw, uint16_t msix_idx, + struct sssnic_msix_attr *attr) +{ + int ret; + struct sssnic_msg msg; + struct sssnic_msix_ctrl_cmd cmd; + uint32_t cmd_len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.func_id = SSSNIC_FUNC_IDX(hw); + cmd.opcode = SSSNIC_CMD_OPCODE_GET; + cmd.idx = msix_idx; + cmd_len = sizeof(cmd); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, SSSNIC_MSIX_CTRL_CMD, + SSSNIC_MPU_FUNC_IDX, SSSNIC_COMM_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to MSIX_CTRL_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + attr->lli_credit = cmd.lli_credit; + attr->lli_timer = cmd.lli_timer; + attr->pending_limit = cmd.pending_count; + attr->coalescing_timer = cmd.coalescing_timer; + attr->resend_timer = cmd.resend_timer; + + return 0; +} + +int +sssnic_msix_attr_set(struct sssnic_hw *hw, uint16_t msix_idx, + struct sssnic_msix_attr *attr) +{ + int ret; + struct sssnic_msg msg; + struct sssnic_msix_ctrl_cmd cmd; + struct sssnic_msix_attr tmp; + uint32_t cmd_len; + + ret = sssnic_msix_attr_get(hw, msix_idx, &tmp); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to get interrupt configuration"); + return ret; + } + + memset(&cmd, 0, sizeof(cmd)); + cmd.func_id = SSSNIC_FUNC_IDX(hw); + cmd.opcode = SSSNIC_CMD_OPCODE_SET; + cmd.idx = msix_idx; + cmd.lli_credit = tmp.lli_credit; + cmd.lli_timer = tmp.lli_timer; + cmd.pending_count = tmp.pending_limit; + cmd.coalescing_timer = tmp.coalescing_timer; + cmd.resend_timer = tmp.resend_timer; + if (attr->lli_set != 0) { + cmd.lli_credit = attr->lli_credit; + cmd.lli_timer = attr->lli_timer; + } + if (attr->coalescing_set != 0) { + cmd.pending_count = attr->pending_limit; + cmd.coalescing_timer = attr->coalescing_timer; + cmd.resend_timer = attr->resend_timer; + } + cmd_len = sizeof(cmd); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, SSSNIC_MSIX_CTRL_CMD, + SSSNIC_MPU_FUNC_IDX, SSSNIC_COMM_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to MSIX_CTRL_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} diff --git a/drivers/net/sssnic/base/sssnic_api.h b/drivers/net/sssnic/base/sssnic_api.h new file mode 100644 index 0000000000..3d54eb826a --- /dev/null +++ b/drivers/net/sssnic/base/sssnic_api.h @@ -0,0 +1,23 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. 
+ */ + +#ifndef _SSSNIC_API_H_ +#define _SSSNIC_API_H_ + +struct sssnic_msix_attr { + uint32_t lli_set; + uint32_t coalescing_set; + uint8_t lli_credit; + uint8_t lli_timer; + uint8_t pending_limit; + uint8_t coalescing_timer; + uint8_t resend_timer; +}; + +int sssnic_msix_attr_get(struct sssnic_hw *hw, uint16_t msix_idx, + struct sssnic_msix_attr *attr); +int sssnic_msix_attr_set(struct sssnic_hw *hw, uint16_t msix_idx, + struct sssnic_msix_attr *attr); + +#endif /* _SSSNIC_API_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_cmd.h b/drivers/net/sssnic/base/sssnic_cmd.h new file mode 100644 index 0000000000..ee9f536ac2 --- /dev/null +++ b/drivers/net/sssnic/base/sssnic_cmd.h @@ -0,0 +1,114 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. + */ + +#ifndef _SSSNIC_CMD_H_ +#define _SSSNIC_CMD_H_ + +#define SSSNIC_CMD_OPCODE_SET 1 +#define SSSNIC_CMD_OPCODE_GET 0 + +enum sssnic_mgmt_cmd_id { + SSSNIC_RESET_FUNC_CMD = 0, + SSSNIC_SET_CTRLQ_CTX_CMD = 20, + SSSNIC_SET_ROOT_CTX_CMD = 21, + SSSNIC_PAGESIZE_CFG_CMD = 22, + SSSNIC_MSIX_CTRL_CMD = 23, + SSSNIC_SET_DMA_ATTR_CMD = 25, + SSSNIC_GET_FW_VERSION_CMD = 60, +}; + +struct sssnic_cmd_common { + uint8_t status; + uint8_t version; + uint8_t resvd[6]; +}; + +struct sssnic_set_ctrlq_ctx_cmd { + struct sssnic_cmd_common common; + uint16_t func_id; + /* CtrlQ ID, here always is 0 */ + uint8_t qid; + uint8_t resvd[5]; + union { + uint64_t data[2]; + struct { + /* Page frame number*/ + uint64_t pfn : 52; + uint64_t resvd0 : 4; + /* Completion event queue ID*/ + uint64_t eq_id : 5; + /* Interrupt enable indication */ + uint64_t informed : 1; + /* Completion event queue enable indication */ + uint64_t eq_en : 1; + /* Entries wrapped indication */ + uint64_t wrapped : 1; + uint64_t block_pfn : 52; + uint64_t start_ci : 12; + }; + }; +}; + +struct sssnic_dma_attr_set_cmd { + struct sssnic_cmd_common common; + uint16_t func_id; + uint8_t idx; + uint8_t st; + uint8_t at; + uint8_t ph; + uint8_t no_snooping; + uint8_t tph; + uint32_t resvd0; +}; + +struct sssnic_func_reset_cmd { + struct sssnic_cmd_common common; + uint16_t func_id; + uint16_t resvd[3]; + /* Mask of reource to be reset */ + uint64_t res_mask; +}; + +struct sssnic_root_ctx_cmd { + struct sssnic_cmd_common common; + uint16_t func_id; + /* set ctrlq depth enable */ + uint8_t set_ctrlq_depth; + /* real depth is 2^ctrlq_depth */ + uint8_t ctrlq_depth; + uint16_t rx_buf; + uint8_t lro_enable; + uint8_t resvd0; + uint16_t txq_depth; + uint16_t rxq_depth; + uint64_t resvd1; +}; + +struct sssnic_pagesize_cmd { + struct sssnic_cmd_common common; + uint16_t func_id; + /* SSSNIC_CMD_OPCODE_xx */ + uint8_t opcode; + /* real size is (2^pagesz)*4KB */ + uint8_t pagesz; + uint32_t resvd0; +}; + +struct sssnic_msix_ctrl_cmd { + struct sssnic_cmd_common common; + uint16_t func_id; + /* SSSNIC_CMD_OPCODE_xx */ + uint8_t opcode; + uint8_t resvd0; + /* MSIX index */ + uint16_t idx; + uint8_t pending_count; + uint8_t coalescing_timer; + uint8_t resend_timer; + uint8_t lli_timer; + uint8_t lli_credit; + uint8_t resvd1[5]; +}; + +#endif /* _SSSNIC_CMD_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_ctrlq.c b/drivers/net/sssnic/base/sssnic_ctrlq.c new file mode 100644 index 0000000000..d0b507125c --- /dev/null +++ b/drivers/net/sssnic/base/sssnic_ctrlq.c @@ -0,0 +1,521 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. 
+ */ + +#include +#include +#include +#include +#include + +#include "../sssnic_log.h" +#include "sssnic_hw.h" +#include "sssnic_reg.h" +#include "sssnic_cmd.h" +#include "sssnic_mbox.h" +#include "sssnic_ctrlq.h" + +#define SSSNIC_CTRLQ_DOORBELL_OFFSET 0 +#define SSSNIC_CTRLQ_BUF_SIZE 4096 +#define SSSNIC_CTRLQ_ENTRY_SIZE 64 +#define SSSNIC_CTRLQ_DEPTH 64 + +#define SSSNIC_CTRLQ_RESP_TIMEOUT 5000 /* Default response timeout */ + +enum sssnic_ctrlq_response_fmt { + /* return result and write it back into corresponding field of ctrlq entry */ + SSSNIC_CTRLQ_RESPONSE_RESULT, + /* return data write it into DMA memory that usually is pktmbuf*/ + SSSNIC_CTRLQ_RESPONSE_DATA, +}; + +struct sssnic_ctrlq_entry_desc_section { + union { + uint32_t dword; + struct { + /* buffer section length, always 2*/ + uint32_t buf_sec_len : 8; + uint32_t resvd0 : 7; + /* response fmt, 0:result 1:data */ + uint32_t resp_fmt : 1; + uint32_t resvd1 : 6; + /* buffer data format,always 0 */ + uint32_t buf_fmt : 1; + /* always 1 */ + uint32_t need_resp : 1; + uint32_t resvd2 : 3; + /* response section length, always 3 */ + uint32_t resp_sec_len : 2; + /* control section length, always 1 */ + uint32_t ctrl_sec_len : 2; + /* wrapped bit */ + uint32_t wrapped : 1; + }; + }; +}; + +struct sssnic_ctrlq_entry_status_section { + union { + uint32_t dword; + struct { + /* status value, usually it saves error code */ + uint32_t value : 31; + uint32_t resvd0 : 1; + }; + }; +}; + +struct sssnic_ctrlq_entry_ctrl_section { + union { + uint32_t dword; + struct { + /* producer index*/ + uint32_t pi : 16; + /* command ID */ + uint32_t cmd : 8; + /* hardware module */ + uint32_t module : 5; + uint32_t resvd0 : 2; + /* Indication of command done */ + uint32_t done : 1; + }; + }; +}; + +struct sssnic_ctrlq_entry_response_section { + union { + struct { + uint32_t hi_addr; + uint32_t lo_addr; + uint32_t len; + uint32_t resvd0; + } data; + struct { + uint64_t value; + uint64_t resvd0; + } result; + }; +}; + +struct sssnic_ctrlq_entry_buf_section { + struct { + uint32_t hi_addr; + uint32_t lo_addr; + uint32_t len; + uint32_t resvd0; + } sge; + uint64_t resvd0[2]; +}; + +/* Hardware format of control queue entry */ +struct sssnic_ctrlq_entry { + union { + uint32_t dword[16]; + struct { + struct sssnic_ctrlq_entry_desc_section desc; + uint32_t resvd0; + struct sssnic_ctrlq_entry_status_section status; + struct sssnic_ctrlq_entry_ctrl_section ctrl; + struct sssnic_ctrlq_entry_response_section response; + struct sssnic_ctrlq_entry_buf_section buf; + }; + }; +}; + +/* Hardware format of control queue doorbell */ +struct sssnic_ctrlq_doorbell { + union { + uint64_t u64; + struct { + uint64_t resvd0 : 23; + /* ctrlq type is always 1*/ + uint64_t qtype : 1; + /* cltrq id is always 0*/ + uint64_t qid : 3; + uint64_t resvd1 : 5; + /* most significant byte of pi*/ + uint64_t pi_msb : 8; + uint64_t resvd2 : 24; + }; + }; +}; +static int +sssnic_ctrlq_wait_response(struct sssnic_ctrlq *ctrlq, int *err_code, + uint32_t timeout_ms) +{ + struct sssnic_ctrlq_entry *entry; + struct sssnic_workq *workq; + uint64_t end; + int done = 0; + + workq = ctrlq->workq; + entry = (struct sssnic_ctrlq_entry *)sssnic_workq_peek(workq); + if (entry == NULL) { + PMD_DRV_LOG(ERR, "Not found executing ctrlq command"); + return -EINVAL; + } + if (timeout_ms == 0) + timeout_ms = SSSNIC_CTRLQ_RESP_TIMEOUT; + end = rte_get_timer_cycles() + rte_get_timer_hz() * timeout_ms / 1000; + do { + done = entry->ctrl.done; + if (done) + break; + rte_delay_us(1); + } while 
(((long)(rte_get_timer_cycles() - end)) < 0); + + if (!done) { + PMD_DRV_LOG(ERR, "Waiting ctrlq response timeout, ci=%u", + workq->ci); + return -ETIMEDOUT; + } + if (err_code) + *err_code = entry->status.value; + sssnic_workq_consume(workq, 1, NULL); + return 0; +} + +static void +sssnic_ctrlq_doorbell_ring(struct sssnic_ctrlq *ctrlq, uint16_t next_pi) +{ + struct sssnic_ctrlq_doorbell db; + + db.u64 = 0; + db.qtype = 1; + db.qid = 0; + db.pi_msb = (next_pi >> 8) & 0xff; + rte_wmb(); + rte_write64(db.u64, ctrlq->doorbell + ((next_pi & 0xff) << 3)); +} + +static void +sssnic_ctrlq_entry_init(struct sssnic_ctrlq_entry *entry, struct rte_mbuf *mbuf, + struct sssnic_ctrlq_cmd *cmd, uint16_t pi, uint16_t wrapped) +{ + struct sssnic_ctrlq_entry tmp_entry; + void *buf_addr; + rte_iova_t buf_iova; + + /* Fill the temporary ctrlq entry */ + memset(&tmp_entry, 0, sizeof(tmp_entry)); + tmp_entry.desc.buf_fmt = 0; + tmp_entry.desc.buf_sec_len = 2; + tmp_entry.desc.need_resp = 1; + tmp_entry.desc.resp_sec_len = 3; + tmp_entry.desc.ctrl_sec_len = 1; + tmp_entry.desc.wrapped = wrapped; + + tmp_entry.status.value = 0; + + tmp_entry.ctrl.cmd = cmd->cmd; + tmp_entry.ctrl.pi = pi; + tmp_entry.ctrl.module = cmd->module; + tmp_entry.ctrl.done = 0; + + buf_iova = rte_mbuf_data_iova(mbuf); + if (cmd->mbuf == NULL && cmd->data != NULL) { + /* cmd data is not allocated in mbuf*/ + buf_addr = rte_pktmbuf_mtod(mbuf, void *); + rte_memcpy(buf_addr, cmd->data, cmd->data_len); + } + tmp_entry.buf.sge.hi_addr = (uint32_t)((buf_iova >> 16) >> 16); + tmp_entry.buf.sge.lo_addr = (uint32_t)buf_iova; + tmp_entry.buf.sge.len = cmd->data_len; + + if (cmd->response_len == 0) { + tmp_entry.desc.resp_fmt = SSSNIC_CTRLQ_RESPONSE_RESULT; + tmp_entry.response.result.value = 0; + } else { + tmp_entry.desc.resp_fmt = SSSNIC_CTRLQ_RESPONSE_DATA; + /* response sge shares cmd mbuf */ + tmp_entry.response.data.hi_addr = + (uint32_t)((buf_iova >> 16) >> 16); + tmp_entry.response.data.lo_addr = (uint32_t)buf_iova; + tmp_entry.response.data.len = SSSNIC_CTRLQ_MBUF_SIZE; + } + + /* write temporary entry to real ctrlq entry + * the first 64bits must be copied last + */ + rte_memcpy(((uint8_t *)entry) + sizeof(uint64_t), + ((uint8_t *)&tmp_entry) + sizeof(uint64_t), + SSSNIC_CTRLQ_ENTRY_SIZE - sizeof(sizeof(uint64_t))); + rte_wmb(); + *((uint64_t *)entry) = *((uint64_t *)&tmp_entry); +} + +static int +sssnic_ctrlq_cmd_exec_internal(struct sssnic_ctrlq *ctrlq, + struct sssnic_ctrlq_cmd *cmd, uint32_t timeout_ms) +{ + struct rte_mbuf *mbuf; + struct sssnic_ctrlq_entry *entry; + struct sssnic_workq *workq; + uint16_t pi; /* current pi */ + uint16_t next_pi; + uint16_t wrapped; + int ret; + int err_code; + + /* Allocate cmd mbuf */ + if (cmd->mbuf == NULL) { + mbuf = rte_pktmbuf_alloc(ctrlq->mbuf_pool); + if (mbuf == NULL) { + PMD_DRV_LOG(ERR, "Could not alloc mbuf for ctrlq cmd"); + return -ENOMEM; + } + } else { + mbuf = cmd->mbuf; + } + + /* allocate ctrlq entry */ + workq = ctrlq->workq; + wrapped = ctrlq->wrapped; + entry = (struct sssnic_ctrlq_entry *)sssnic_workq_produce(workq, 1, + &pi); + if (entry == NULL) { + PMD_DRV_LOG(ERR, "No enough control queue entry"); + ret = -EBUSY; + goto out; + } + /* workq->pi will be the next pi, the next pi could not exceed workq + * depth else must recalculate next pi, and reverse wrapped bit. 
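+ * In other words, when workq->pi reaches workq->num_entries the
+ * producer index wraps back to the start of the ring and ctrlq->wrapped
+ * is inverted; the wrapped bit carried in each entry descriptor lets the
+ * hardware distinguish newly produced entries from stale ones.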
+ */ + if (workq->pi >= workq->num_entries) { + ctrlq->wrapped = !ctrlq->wrapped; + workq->pi -= workq->num_entries; + } + next_pi = workq->pi; + + /* fill ctrlq entry */ + sssnic_ctrlq_entry_init(entry, mbuf, cmd, pi, wrapped); + + /* Ring doorbell */ + sssnic_ctrlq_doorbell_ring(ctrlq, next_pi); + + /* Wait response */ + ret = sssnic_ctrlq_wait_response(ctrlq, &err_code, timeout_ms); + if (ret != 0) + goto out; + + if (err_code) { + PMD_DRV_LOG(ERR, + "Found error while control queue command executing, error code:%x.", + err_code); + ret = err_code; + goto out; + } + + if (cmd->response_len == 0) { + cmd->result = entry->response.result.value; + } else if ((cmd->mbuf != NULL && cmd->response_data != cmd->data) || + cmd->mbuf == NULL) { + /* cmd data may be as same as response data if mbuf is not null */ + rte_memcpy(cmd->response_data, rte_pktmbuf_mtod(mbuf, void *), + cmd->response_len); + } +out: + if (cmd->mbuf == NULL) + rte_pktmbuf_free(mbuf); + return ret; +} + +int +sssnic_ctrlq_cmd_exec(struct sssnic_hw *hw, struct sssnic_ctrlq_cmd *cmd, + uint32_t timeout_ms) +{ + int ret; + struct sssnic_ctrlq *ctrlq; + + if (hw == NULL || hw->ctrlq == NULL || cmd == NULL || + (cmd->response_len != 0 && cmd->response_data == NULL)) { + PMD_DRV_LOG(ERR, "Bad parameter to execute ctrlq command"); + return -EINVAL; + } + + SSSNIC_DEBUG("module=%u, cmd=%u, data=%p, data_len=%u, response_len=%u", + cmd->module, cmd->cmd, cmd->data, cmd->data_len, + cmd->response_len); + + ctrlq = hw->ctrlq; + rte_spinlock_lock(&ctrlq->lock); + ret = sssnic_ctrlq_cmd_exec_internal(ctrlq, cmd, timeout_ms); + rte_spinlock_unlock(&ctrlq->lock); + + return ret; +} + +static int +sssnic_ctrlq_depth_set(struct sssnic_hw *hw, uint32_t depth) +{ + int ret; + struct sssnic_msg msg; + struct sssnic_root_ctx_cmd cmd; + uint32_t cmd_len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.func_id = SSSNIC_FUNC_IDX(hw); + cmd.set_ctrlq_depth = 1; + cmd.ctrlq_depth = (uint8_t)rte_log2_u32(depth); + cmd_len = sizeof(cmd); + + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, SSSNIC_SET_ROOT_CTX_CMD, + SSSNIC_MPU_FUNC_IDX, SSSNIC_COMM_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + if (!cmd_len || cmd.common.status) { + PMD_DRV_LOG(ERR, + "Bad response to SET_ROOT_CTX_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + return 0; +} + +static int +sssnic_ctrlq_ctx_setup(struct sssnic_ctrlq *ctrlq) +{ + int ret; + struct sssnic_msg msg; + struct sssnic_set_ctrlq_ctx_cmd cmd; + uint32_t cmd_len; + struct sssnic_hw *hw = ctrlq->hw; + + memset(&cmd, 0, sizeof(cmd)); + cmd.func_id = SSSNIC_FUNC_IDX(hw); + cmd.qid = 0; + cmd.pfn = ctrlq->workq->buf_phyaddr / RTE_PGSIZE_4K; + cmd.wrapped = !!ctrlq->wrapped; + cmd.start_ci = 0; + cmd.block_pfn = cmd.pfn; + + cmd_len = sizeof(cmd); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_SET_CTRLQ_CTX_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_COMM_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send SSSNIC_SET_CTRLQ_CTX_CMD"); + return ret; + } + return 0; +} + +struct sssnic_ctrlq_cmd * +sssnic_ctrlq_cmd_alloc(struct sssnic_hw *hw) +{ + struct sssnic_ctrlq_cmd *cmd; + + cmd = rte_zmalloc(NULL, sizeof(struct sssnic_ctrlq_cmd), 0); + if (cmd == NULL) { + PMD_DRV_LOG(ERR, "Failed to allocate sssnic_ctrlq_cmd"); + return NULL; + } + + cmd->mbuf = 
rte_pktmbuf_alloc(hw->ctrlq->mbuf_pool); + if (cmd->mbuf == NULL) { + PMD_DRV_LOG(ERR, "Failed to allocate sssnic_ctrlq_cmd mbuf"); + rte_free(cmd); + return NULL; + } + + cmd->data = rte_pktmbuf_mtod(cmd->mbuf, void *); + cmd->response_data = cmd->data; + + return cmd; +} + +void +sssnic_ctrlq_cmd_destroy(__rte_unused struct sssnic_hw *hw, + struct sssnic_ctrlq_cmd *cmd) +{ + if (cmd != NULL) { + if (cmd->mbuf != NULL) + rte_pktmbuf_free(cmd->mbuf); + + rte_free(cmd); + } +} + +int +sssnic_ctrlq_init(struct sssnic_hw *hw) +{ + int ret; + struct sssnic_ctrlq *ctrlq; + char m_name[RTE_MEMPOOL_NAMESIZE]; + + PMD_INIT_FUNC_TRACE(); + + ctrlq = rte_zmalloc(NULL, sizeof(struct sssnic_ctrlq), 0); + if (ctrlq == NULL) { + PMD_DRV_LOG(ERR, "Could not alloc memory for ctrlq"); + return -ENOMEM; + } + + ctrlq->hw = hw; + rte_spinlock_init(&ctrlq->lock); + ctrlq->doorbell = hw->db_base_addr + SSSNIC_CTRLQ_DOORBELL_OFFSET; + + snprintf(m_name, sizeof(m_name), "sssnic%u_ctrlq_wq", + SSSNIC_ETH_PORT_ID(hw)); + ctrlq->workq = sssnic_workq_new(m_name, rte_socket_id(), + SSSNIC_CTRLQ_ENTRY_SIZE, SSSNIC_CTRLQ_DEPTH); + if (ctrlq->workq == NULL) { + PMD_DRV_LOG(ERR, "Failed to alloc work queue for ctrlq"); + ret = -ENOMEM; + goto new_workq_fail; + } + ctrlq->wrapped = 1; + + snprintf(m_name, sizeof(m_name), "sssnic%u_ctrlq_mbuf", + SSSNIC_ETH_PORT_ID(hw)); + ctrlq->mbuf_pool = rte_pktmbuf_pool_create(m_name, SSSNIC_CTRLQ_DEPTH, + 0, 0, SSSNIC_CTRLQ_MBUF_SIZE, rte_socket_id()); + if (ctrlq->mbuf_pool == NULL) { + PMD_DRV_LOG(ERR, "Failed to alloc mbuf for %s", m_name); + ret = -ENOMEM; + goto alloc_mbuf_fail; + } + + ret = sssnic_ctrlq_ctx_setup(ctrlq); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to setup control queue context"); + goto setup_ctrlq_ctx_fail; + } + + ret = sssnic_ctrlq_depth_set(hw, SSSNIC_CTRLQ_DEPTH); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to initialize control queue depth"); + goto setup_ctrlq_ctx_fail; + } + + hw->ctrlq = ctrlq; + + return 0; + +setup_ctrlq_ctx_fail: + rte_mempool_free(ctrlq->mbuf_pool); +alloc_mbuf_fail: + sssnic_workq_destroy(ctrlq->workq); +new_workq_fail: + rte_free(ctrlq); + return ret; +} + +void +sssnic_ctrlq_shutdown(struct sssnic_hw *hw) +{ + struct sssnic_ctrlq *ctrlq; + + PMD_INIT_FUNC_TRACE(); + + if (hw == NULL || hw->ctrlq == NULL) + return; + ctrlq = hw->ctrlq; + rte_mempool_free(ctrlq->mbuf_pool); + sssnic_workq_destroy(ctrlq->workq); + rte_free(ctrlq); +} diff --git a/drivers/net/sssnic/base/sssnic_ctrlq.h b/drivers/net/sssnic/base/sssnic_ctrlq.h new file mode 100644 index 0000000000..61b182e9f4 --- /dev/null +++ b/drivers/net/sssnic/base/sssnic_ctrlq.h @@ -0,0 +1,58 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. + */ + +#ifndef _SSSNIC_CTRLQ_H_ +#define _SSSNIC_CTRLQ_H_ + +#include "sssnic_workq.h" + +#define SSSNIC_CTRLQ_MBUF_SIZE 2048 +#define SSSNIC_CTRLQ_MAX_CMD_DATA_LEN \ + (SSSNIC_CTRLQ_MBUF_SIZE - RTE_PKTMBUF_HEADROOM) + +struct sssnic_ctrlq_cmd { + uint32_t module; + /* Command ID */ + uint32_t cmd; + /* Command data */ + void *data; + /* mbuf is just used for dynamic allocation of ctrlq cmd, + * cmd data will point to mbuf data to reduce data copying + * as well as response_data. 
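+ * If mbuf is NULL (the caller supplies its own data buffer), the data
+ * is copied into a temporary mbuf from the ctrlq mbuf pool before the
+ * entry is posted; if mbuf is set (e.g. allocated through
+ * sssnic_ctrlq_cmd_alloc()), data and response_data already point into
+ * the mbuf and no extra copy is made.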
+ */ + struct rte_mbuf *mbuf; + union { + /* response data buffer */ + void *response_data; + /* result of command executing */ + uint64_t result; + }; + /* command data length */ + uint32_t data_len; + /* length of response data buffer, return result of command + * if response_len=0, else return response_data + */ + uint32_t response_len; +}; + +struct sssnic_ctrlq { + struct sssnic_hw *hw; + struct sssnic_workq *workq; + struct rte_mempool *mbuf_pool; + uint8_t *doorbell; + uint32_t wrapped; + uint32_t resvd0; + rte_spinlock_t lock; +}; + +struct sssnic_ctrlq_cmd *sssnic_ctrlq_cmd_alloc(struct sssnic_hw *hw); +void sssnic_ctrlq_cmd_destroy(__rte_unused struct sssnic_hw *hw, + struct sssnic_ctrlq_cmd *cmd); + +int sssnic_ctrlq_cmd_exec(struct sssnic_hw *hw, struct sssnic_ctrlq_cmd *cmd, + uint32_t timeout_ms); +int sssnic_ctrlq_init(struct sssnic_hw *hw); +void sssnic_ctrlq_shutdown(struct sssnic_hw *hw); + +#endif /* _SSSNIC_CTRLQ_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_hw.c b/drivers/net/sssnic/base/sssnic_hw.c index ff527b2c7f..4ca75208af 100644 --- a/drivers/net/sssnic/base/sssnic_hw.c +++ b/drivers/net/sssnic/base/sssnic_hw.c @@ -9,9 +9,12 @@ #include "../sssnic_log.h" #include "sssnic_hw.h" #include "sssnic_reg.h" +#include "sssnic_cmd.h" +#include "sssnic_api.h" #include "sssnic_eventq.h" #include "sssnic_msg.h" #include "sssnic_mbox.h" +#include "sssnic_ctrlq.h" static int wait_for_sssnic_hw_ready(struct sssnic_hw *hw) @@ -140,6 +143,116 @@ sssnic_pf_status_set(struct sssnic_hw *hw, enum sssnic_pf_status status) sssnic_cfg_reg_write(hw, SSSNIC_ATTR6_REG, reg.u32); } +static int +sssnic_dma_attr_init(struct sssnic_hw *hw) +{ + int ret; + struct sssnic_msg msg; + struct sssnic_dma_attr_set_cmd cmd; + uint32_t cmd_len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.func_id = SSSNIC_FUNC_IDX(hw); + cmd_len = sizeof(cmd); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, SSSNIC_SET_DMA_ATTR_CMD, + SSSNIC_MPU_FUNC_IDX, SSSNIC_COMM_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SET_DMA_ATTR_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +static int +sssnic_func_reset(struct sssnic_hw *hw) +{ + int ret; + struct sssnic_msg msg; + struct sssnic_func_reset_cmd cmd; + uint32_t cmd_len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.func_id = SSSNIC_FUNC_IDX(hw); + cmd.res_mask = RTE_BIT64(0) | RTE_BIT64(1) | RTE_BIT64(2) | + RTE_BIT64(10) | RTE_BIT64(12) | RTE_BIT64(13); + cmd_len = sizeof(cmd); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, SSSNIC_RESET_FUNC_CMD, + SSSNIC_MPU_FUNC_IDX, SSSNIC_COMM_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to RESET_FUNC_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +static int +sssnic_pagesize_set(struct sssnic_hw *hw, uint32_t pagesize) +{ + int ret; + struct sssnic_msg msg; + struct sssnic_pagesize_cmd cmd; + uint32_t cmd_len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.func_id = SSSNIC_FUNC_IDX(hw); + cmd.pagesz = (uint8_t)rte_log2_u32(pagesize >> 12); + cmd.opcode = SSSNIC_CMD_OPCODE_SET; + cmd_len = 
sizeof(cmd); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, SSSNIC_PAGESIZE_CFG_CMD, + SSSNIC_MPU_FUNC_IDX, SSSNIC_COMM_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to PAGESIZE_CFG_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +/* Only initialize msix 0 attributes */ +static int +sssnic_msix_attr_init(struct sssnic_hw *hw) +{ + int ret; + struct sssnic_msix_attr attr; + + attr.lli_set = 0; + attr.coalescing_set = 1; + attr.pending_limit = 0; + attr.coalescing_timer = 0xff; + attr.resend_timer = 0x7; + + ret = sssnic_msix_attr_set(hw, 0, &attr); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to set msix0 attributes."); + return ret; + } + + return 0; +} + static int sssnic_base_init(struct sssnic_hw *hw) { @@ -217,8 +330,42 @@ sssnic_hw_init(struct sssnic_hw *hw) goto mbox_init_fail; } + ret = sssnic_func_reset(hw); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to reset function resources"); + goto mbox_init_fail; + } + + ret = sssnic_dma_attr_init(hw); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to initialize DMA attributes"); + goto mbox_init_fail; + } + + ret = sssnic_msix_attr_init(hw); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to initialize msix attributes"); + goto mbox_init_fail; + } + + ret = sssnic_pagesize_set(hw, 0x100000); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to set page size to 0x100000"); + goto mbox_init_fail; + } + + ret = sssnic_ctrlq_init(hw); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to initialize control queue"); + goto ctrlq_init_fail; + } + + sssnic_pf_status_set(hw, SSSNIC_PF_STATUS_ACTIVE); + return -EINVAL; +ctrlq_init_fail: + sssnic_mbox_shutdown(hw); mbox_init_fail: sssnic_eventq_all_shutdown(hw); eventq_init_fail: @@ -231,6 +378,8 @@ sssnic_hw_shutdown(struct sssnic_hw *hw) { PMD_INIT_FUNC_TRACE(); + sssnic_pf_status_set(hw, SSSNIC_PF_STATUS_INIT); + sssnic_ctrlq_shutdown(hw); sssnic_mbox_shutdown(hw); sssnic_eventq_all_shutdown(hw); sssnic_msg_inbox_shutdown(hw); diff --git a/drivers/net/sssnic/base/sssnic_hw.h b/drivers/net/sssnic/base/sssnic_hw.h index 41e65f5880..c1b9539015 100644 --- a/drivers/net/sssnic/base/sssnic_hw.h +++ b/drivers/net/sssnic/base/sssnic_hw.h @@ -54,6 +54,7 @@ struct sssnic_hw { struct sssnic_eventq *eventqs; struct sssnic_msg_inbox *msg_inbox; struct sssnic_mbox *mbox; + struct sssnic_ctrlq *ctrlq; uint8_t num_eventqs; uint16_t eth_port_id; }; @@ -64,6 +65,13 @@ struct sssnic_hw { #define SSSNIC_FUNC_TYPE(hw) ((hw)->attr.func_type) #define SSSNIC_AF_FUNC_IDX(hw) ((hw)->attr.af_idx) +enum sssnic_module { + SSSNIC_COMM_MODULE = 0, + SSSNIC_LAN_MODULE = 1, + SSSNIC_CFG_MODULE = 7, + SSSNIC_NETIF_MODULE = 14, +}; + int sssnic_hw_init(struct sssnic_hw *hw); void sssnic_hw_shutdown(struct sssnic_hw *hw); void sssnic_msix_state_set(struct sssnic_hw *hw, uint16_t msix_id, int state); From patchwork Fri Sep 1 09:34:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131054 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A91484221E; Fri, 1 Sep 2023 11:36:57 +0200 (CEST) 
Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5F4D8402CF; Fri, 1 Sep 2023 11:35:56 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.unionmem.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id 0A15C402B4 for ; Fri, 1 Sep 2023 11:35:51 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZR2q069834; Fri, 1 Sep 2023 17:35:27 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:26 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 10/32] net/sssnic: add dev configure and infos get Date: Fri, 1 Sep 2023 17:34:52 +0800 Message-ID: <20230901093514.224824-11-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZR2q069834 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- drivers/net/sssnic/base/sssnic_api.c | 33 ++++++++++++ drivers/net/sssnic/base/sssnic_api.h | 8 +++ drivers/net/sssnic/base/sssnic_cmd.h | 14 +++++ drivers/net/sssnic/base/sssnic_hw.c | 33 +++++++++++- drivers/net/sssnic/base/sssnic_hw.h | 6 +++ drivers/net/sssnic/sssnic_ethdev.c | 76 ++++++++++++++++++++++++++++ drivers/net/sssnic/sssnic_ethdev.h | 53 +++++++++++++++++++ 7 files changed, 222 insertions(+), 1 deletion(-) diff --git a/drivers/net/sssnic/base/sssnic_api.c b/drivers/net/sssnic/base/sssnic_api.c index 51a59f0f25..bf0859cd63 100644 --- a/drivers/net/sssnic/base/sssnic_api.c +++ b/drivers/net/sssnic/base/sssnic_api.c @@ -100,3 +100,36 @@ sssnic_msix_attr_set(struct sssnic_hw *hw, uint16_t msix_idx, return 0; } + +int +sssnic_capability_get(struct sssnic_hw *hw, struct sssnic_capability *capa) +{ + struct sssnic_capability_get_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + int ret; + + memset(&cmd, 0, sizeof(cmd)); + cmd_len = sizeof(cmd); + cmd.function = SSSNIC_FUNC_IDX(hw); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_GET_CAPABILITY_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_CFG_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_GET_CAPABILITY_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + capa->phy_port = cmd.phy_port; + capa->max_num_rxq = cmd.rxq_max_id + 1; + capa->max_num_txq = cmd.txq_max_id + 1; + + return 0; +} diff --git a/drivers/net/sssnic/base/sssnic_api.h b/drivers/net/sssnic/base/sssnic_api.h index 3d54eb826a..8011cc8b0f 100644 --- a/drivers/net/sssnic/base/sssnic_api.h +++ b/drivers/net/sssnic/base/sssnic_api.h @@ -15,9 +15,17 @@ struct sssnic_msix_attr { uint8_t resend_timer; }; +struct 
sssnic_capability { + uint16_t max_num_txq; + uint16_t max_num_rxq; + uint8_t phy_port; + uint8_t cos; +}; + int sssnic_msix_attr_get(struct sssnic_hw *hw, uint16_t msix_idx, struct sssnic_msix_attr *attr); int sssnic_msix_attr_set(struct sssnic_hw *hw, uint16_t msix_idx, struct sssnic_msix_attr *attr); +int sssnic_capability_get(struct sssnic_hw *hw, struct sssnic_capability *capa); #endif /* _SSSNIC_API_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_cmd.h b/drivers/net/sssnic/base/sssnic_cmd.h index ee9f536ac2..79192affbc 100644 --- a/drivers/net/sssnic/base/sssnic_cmd.h +++ b/drivers/net/sssnic/base/sssnic_cmd.h @@ -18,6 +18,8 @@ enum sssnic_mgmt_cmd_id { SSSNIC_GET_FW_VERSION_CMD = 60, }; +#define SSSNIC_GET_CAPABILITY_CMD 0 + struct sssnic_cmd_common { uint8_t status; uint8_t version; @@ -111,4 +113,16 @@ struct sssnic_msix_ctrl_cmd { uint8_t resvd1[5]; }; +struct sssnic_capability_get_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint16_t resvd0; + uint8_t resvd1[3]; + uint8_t phy_port; + uint32_t resvd2[16]; + uint16_t txq_max_id; + uint16_t rxq_max_id; + uint32_t resvd3[63]; +}; + #endif /* _SSSNIC_CMD_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_hw.c b/drivers/net/sssnic/base/sssnic_hw.c index 4ca75208af..8f5f556bde 100644 --- a/drivers/net/sssnic/base/sssnic_hw.c +++ b/drivers/net/sssnic/base/sssnic_hw.c @@ -253,6 +253,29 @@ sssnic_msix_attr_init(struct sssnic_hw *hw) return 0; } +static int +sssnic_capability_init(struct sssnic_hw *hw) +{ + struct sssnic_capability cap; + int ret; + + ret = sssnic_capability_get(hw, &cap); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to get sssnic capability"); + return ret; + } + + PMD_DRV_LOG(INFO, + "Initialized capability, physic port:%u, max %u txqs, max %u rxqs", + cap.phy_port, cap.max_num_txq, cap.max_num_rxq); + + hw->phy_port = cap.phy_port; + hw->max_num_rxq = cap.max_num_rxq; + hw->max_num_txq = cap.max_num_txq; + + return 0; +} + static int sssnic_base_init(struct sssnic_hw *hw) { @@ -360,10 +383,18 @@ sssnic_hw_init(struct sssnic_hw *hw) goto ctrlq_init_fail; } + ret = sssnic_capability_init(hw); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to initialize capability"); + goto capbility_init_fail; + } + sssnic_pf_status_set(hw, SSSNIC_PF_STATUS_ACTIVE); - return -EINVAL; + return 0; +capbility_init_fail: + sssnic_ctrlq_shutdown(hw); ctrlq_init_fail: sssnic_mbox_shutdown(hw); mbox_init_fail: diff --git a/drivers/net/sssnic/base/sssnic_hw.h b/drivers/net/sssnic/base/sssnic_hw.h index c1b9539015..5f20a9465d 100644 --- a/drivers/net/sssnic/base/sssnic_hw.h +++ b/drivers/net/sssnic/base/sssnic_hw.h @@ -56,12 +56,18 @@ struct sssnic_hw { struct sssnic_mbox *mbox; struct sssnic_ctrlq *ctrlq; uint8_t num_eventqs; + uint8_t phy_port; uint16_t eth_port_id; + uint16_t max_num_rxq; + uint16_t max_num_txq; }; #define SSSNIC_FUNC_IDX(hw) ((hw)->attr.func_idx) #define SSSNIC_ETH_PORT_ID(hw) ((hw)->eth_port_id) #define SSSNIC_MPU_FUNC_IDX 0x1fff +#define SSSNIC_MAX_NUM_RXQ(hw) ((hw)->max_num_rxq) +#define SSSNIC_MAX_NUM_TXQ(hw) ((hw)->max_num_txq) +#define SSSNIC_PHY_PORT(hw) ((hw)->phy_port) #define SSSNIC_FUNC_TYPE(hw) ((hw)->attr.func_type) #define SSSNIC_AF_FUNC_IDX(hw) ((hw)->attr.af_idx) diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index 460ff604aa..2d7ed7d60b 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -9,6 +9,72 @@ #include "base/sssnic_hw.h" #include "sssnic_ethdev.h" +static int +sssnic_ethdev_infos_get(struct rte_eth_dev 
*ethdev, + struct rte_eth_dev_info *devinfo) +{ + struct sssnic_netdev *netdev; + + netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + + devinfo->max_rx_queues = netdev->max_num_rxq; + devinfo->max_tx_queues = netdev->max_num_txq; + + devinfo->max_mtu = SSSNIC_ETHDEV_MAX_MTU; + devinfo->min_mtu = SSSNIC_ETHDEV_MIN_MTU; + devinfo->min_rx_bufsize = SSSNIC_ETHDEV_MIN_RXBUF_SZ; + devinfo->max_rx_pktlen = SSSNIC_ETHDEV_MAX_RXPKT_LEN; + devinfo->max_lro_pkt_size = SSSNIC_ETHDEV_MAX_LRO_PKT_SZ; + + devinfo->max_mac_addrs = SSSNIC_ETHDEV_MAX_NUM_UC_MAC; + + devinfo->rx_queue_offload_capa = 0; + devinfo->tx_queue_offload_capa = 0; + devinfo->rx_offload_capa = SSSNIC_ETHDEV_RX_OFFLOAD_CAPA; + devinfo->tx_offload_capa = SSSNIC_ETHDEV_TX_OFFLOAD_CAPA; + + devinfo->hash_key_size = SSSNIC_ETHDEV_RSS_KEY_SZ; + devinfo->reta_size = SSSNIC_ETHDEV_RSS_RETA_SZ; + devinfo->flow_type_rss_offloads = SSSNIC_ETHDEV_RSS_OFFLOAD_FLOW_TYPES; + + devinfo->rx_desc_lim = (struct rte_eth_desc_lim){ + .nb_max = SSSNIC_ETHDEV_MAX_NUM_Q_DESC, + .nb_min = SSSNIC_ETHDEV_MIN_NUM_Q_DESC, + .nb_align = SSSNIC_ETHDEV_NUM_Q_DESC_ALGIN, + }; + devinfo->tx_desc_lim = (struct rte_eth_desc_lim){ + .nb_max = SSSNIC_ETHDEV_MAX_NUM_Q_DESC, + .nb_min = SSSNIC_ETHDEV_MIN_NUM_Q_DESC, + .nb_align = SSSNIC_ETHDEV_NUM_Q_DESC_ALGIN, + }; + + devinfo->default_rxportconf = (struct rte_eth_dev_portconf){ + .burst_size = SSSNIC_ETHDEV_DEF_BURST_SZ, + .ring_size = SSSNIC_ETHDEV_DEF_RING_SZ, + .nb_queues = SSSNIC_ETHDEV_DEF_NUM_QUEUES, + }; + + devinfo->default_txportconf = (struct rte_eth_dev_portconf){ + .burst_size = SSSNIC_ETHDEV_DEF_BURST_SZ, + .ring_size = SSSNIC_ETHDEV_DEF_RING_SZ, + .nb_queues = SSSNIC_ETHDEV_DEF_NUM_QUEUES, + }; + + return 0; +} + +static int +sssnic_ethdev_configure(struct rte_eth_dev *ethdev) +{ + if (ethdev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) + ethdev->data->dev_conf.rxmode.offloads |= + RTE_ETH_RX_OFFLOAD_RSS_HASH; + + PMD_DRV_LOG(INFO, "Port %u is configured", ethdev->data->port_id); + + return 0; +} + static void sssnic_ethdev_release(struct rte_eth_dev *ethdev) { @@ -18,6 +84,11 @@ sssnic_ethdev_release(struct rte_eth_dev *ethdev) rte_free(hw); } +static const struct eth_dev_ops sssnic_ethdev_ops = { + .dev_configure = sssnic_ethdev_configure, + .dev_infos_get = sssnic_ethdev_infos_get, +}; + static int sssnic_ethdev_init(struct rte_eth_dev *ethdev) { @@ -47,6 +118,11 @@ sssnic_ethdev_init(struct rte_eth_dev *ethdev) return ret; } + netdev->max_num_rxq = SSSNIC_MAX_NUM_RXQ(hw); + netdev->max_num_txq = SSSNIC_MAX_NUM_TXQ(hw); + + ethdev->dev_ops = &sssnic_ethdev_ops; + return 0; } diff --git a/drivers/net/sssnic/sssnic_ethdev.h b/drivers/net/sssnic/sssnic_ethdev.h index 5d951134cc..70dd43d5c0 100644 --- a/drivers/net/sssnic/sssnic_ethdev.h +++ b/drivers/net/sssnic/sssnic_ethdev.h @@ -5,8 +5,61 @@ #ifndef _SSSNIC_ETHDEV_H_ #define _SSSNIC_ETHDEV_H_ +#define SSSNIC_ETHDEV_MIN_MTU 384 +#define SSSNIC_ETHDEV_MAX_MTU 9600 +#define SSSNIC_ETHDEV_OVERHEAD_LEN \ + (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + RTE_VLAN_HLEN * 2) +#define SSSNIC_ETHDEV_MIN_FRAME_SZ \ + (SSSNIC_ETHDEV_MIN_MTU + SSSNIC_ETHDEV_OVERHEAD_LEN) +#define SSSNIC_ETHDEV_MAX_FRAME_SZ \ + (SSSNIC_ETHDEV_MAX_MTU + SSSNIC_ETHDEV_OVERHEAD_LEN) + +#define SSSNIC_ETHDEV_MIN_RXBUF_SZ 1024 +#define SSSNIC_ETHDEV_MAX_RXPKT_LEN SSSNIC_ETHDEV_MAX_FRAME_SZ +#define SSSNIC_ETHDEV_MAX_LRO_PKT_SZ 65536 + +#define SSSNIC_ETHDEV_RSS_KEY_SZ 40 +#define SSSNIC_ETHDEV_RSS_RETA_SZ 256 + +#define SSSNIC_ETHDEV_MAX_NUM_Q_DESC 16384 +#define SSSNIC_ETHDEV_MIN_NUM_Q_DESC 
128 +#define SSSNIC_ETHDEV_NUM_Q_DESC_ALGIN 1 + +#define SSSNIC_ETHDEV_RX_OFFLOAD_CAPA \ + (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \ + RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \ + RTE_ETH_RX_OFFLOAD_SCATTER | RTE_ETH_RX_OFFLOAD_TCP_LRO | \ + RTE_ETH_RX_OFFLOAD_RSS_HASH) + +#define SSSNIC_ETHDEV_TX_OFFLOAD_CAPA \ + (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \ + RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \ + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \ + RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_TSO | \ + RTE_ETH_TX_OFFLOAD_MULTI_SEGS) + +#define SSSNIC_ETHDEV_RSS_OFFLOAD_FLOW_TYPES \ + (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \ + RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \ + RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV4_TCP | \ + RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_IPV6 | \ + RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \ + RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_NONFRAG_IPV6_UDP | \ + RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_IPV6_TCP_EX | \ + RTE_ETH_RSS_IPV6_UDP_EX) + +#define SSSNIC_ETHDEV_DEF_BURST_SZ 32 +#define SSSNIC_ETHDEV_DEF_NUM_QUEUES 1 +#define SSSNIC_ETHDEV_DEF_RING_SZ 1024 + +#define SSSNIC_ETHDEV_MAX_NUM_UC_MAC 128 +#define SSSNIC_ETHDEV_MAX_NUM_MC_MAC 2048 + struct sssnic_netdev { void *hw; + uint16_t max_num_txq; + uint16_t max_num_rxq; }; #define SSSNIC_ETHDEV_PRIVATE(eth_dev) \ From patchwork Fri Sep 1 09:34:53 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131058 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id F3CD64221E; Fri, 1 Sep 2023 11:37:30 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 18994402F1; Fri, 1 Sep 2023 11:36:02 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.ramaxel.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id 15684402CD for ; Fri, 1 Sep 2023 11:35:57 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZR78069836; Fri, 1 Sep 2023 17:35:28 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:27 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 11/32] net/sssnic: add dev MAC ops Date: Fri, 1 Sep 2023 17:34:53 +0800 Message-ID: <20230901093514.224824-12-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZR78069836 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Signed-off-by: Steven Song Signed-off-by: Renyong Wan 
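Below is a minimal usage sketch (illustration only, not part of this
patch) of how the MAC ops added here are reached through the generic
ethdev API; port_id and the example address are hypothetical:

    #include <rte_ethdev.h>

    /* Set the default MAC and add a second unicast filter on a probed
     * sssnic port (port_id is hypothetical).
     */
    static int
    example_set_macs(uint16_t port_id)
    {
            struct rte_ether_addr addr = {{ 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 }};
            int ret;

            /* dispatched to the PMD's .mac_addr_set (sssnic_ethdev_mac_addr_set) */
            ret = rte_eth_dev_default_mac_addr_set(port_id, &addr);
            if (ret != 0)
                    return ret;

            addr.addr_bytes[5] = 0x02;
            /* dispatched to the PMD's .mac_addr_add (sssnic_ethdev_mac_addr_add) */
            return rte_eth_dev_mac_addr_add(port_id, &addr, 0);
    }

Multicast filters are exercised the same way through
rte_eth_dev_set_mc_addr_list(), which maps to .set_mc_addr_list.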
--- doc/guides/nics/features/sssnic.ini | 2 + drivers/net/sssnic/base/sssnic_api.c | 174 +++++++++++++++++ drivers/net/sssnic/base/sssnic_api.h | 4 + drivers/net/sssnic/base/sssnic_cmd.h | 25 +++ drivers/net/sssnic/base/sssnic_hw.h | 1 + drivers/net/sssnic/sssnic_ethdev.c | 270 ++++++++++++++++++++++++++- drivers/net/sssnic/sssnic_ethdev.h | 2 + 7 files changed, 477 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/features/sssnic.ini b/doc/guides/nics/features/sssnic.ini index 6d9786db7e..602839d301 100644 --- a/doc/guides/nics/features/sssnic.ini +++ b/doc/guides/nics/features/sssnic.ini @@ -4,6 +4,8 @@ ; Refer to default.ini for the full list of available PMD features. ; [Features] +Unicast MAC filter = Y +Multicast MAC filter = Y Linux = Y ARMv8 = Y x86-64 = Y diff --git a/drivers/net/sssnic/base/sssnic_api.c b/drivers/net/sssnic/base/sssnic_api.c index bf0859cd63..76266dfbae 100644 --- a/drivers/net/sssnic/base/sssnic_api.c +++ b/drivers/net/sssnic/base/sssnic_api.c @@ -133,3 +133,177 @@ sssnic_capability_get(struct sssnic_hw *hw, struct sssnic_capability *capa) return 0; } + +int +sssnic_mac_addr_get(struct sssnic_hw *hw, uint8_t *addr) +{ + int ret; + struct sssnic_mac_addr_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + uint16_t func; + + if (hw == NULL || addr == NULL) + return -EINVAL; + + if (SSSNIC_FUNC_TYPE(hw) == SSSNIC_FUNC_TYPE_VF) + func = SSSNIC_PF_FUNC_IDX(hw); + else + func = SSSNIC_MPU_FUNC_IDX; + + memset(&cmd, 0, sizeof(cmd)); + cmd_len = sizeof(cmd); + cmd.function = SSSNIC_FUNC_IDX(hw); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, SSSNIC_GET_MAC_ADDR_CMD, + func, SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_GET_DEF_MAC_ADDR_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + rte_memcpy(addr, cmd.addr, 6); + + return 0; +} + +int +sssnic_mac_addr_update(struct sssnic_hw *hw, uint8_t *new, uint8_t *old) +{ + int ret; + struct sssnic_mac_addr_update_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + uint16_t func; + + if (hw == NULL || new == NULL || old == NULL) + return -EINVAL; + + if (SSSNIC_FUNC_TYPE(hw) == SSSNIC_FUNC_TYPE_VF) + func = SSSNIC_PF_FUNC_IDX(hw); + else + func = SSSNIC_MPU_FUNC_IDX; + + memset(&cmd, 0, sizeof(cmd)); + cmd_len = sizeof(cmd); + cmd.function = SSSNIC_FUNC_IDX(hw); + rte_memcpy(cmd.new_addr, new, 6); + rte_memcpy(cmd.old_addr, old, 6); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_UPDATE_MAC_ADDR_CMD, func, SSSNIC_LAN_MODULE, + SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + if (cmd.common.status == SSSNIC_MAC_ADDR_CMD_STATUS_IGNORED) { + PMD_DRV_LOG(WARNING, + "MAC address operation is ignored"); + return 0; + } + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_UPDATE_MAC_ADDR_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +int +sssnic_mac_addr_add(struct sssnic_hw *hw, uint8_t *addr) +{ + int ret; + struct sssnic_mac_addr_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + uint16_t func; + + if (hw == NULL || addr == NULL) + return -EINVAL; + + if (SSSNIC_FUNC_TYPE(hw) == 
SSSNIC_FUNC_TYPE_VF) + func = SSSNIC_PF_FUNC_IDX(hw); + else + func = SSSNIC_MPU_FUNC_IDX; + + memset(&cmd, 0, sizeof(cmd)); + cmd_len = sizeof(cmd); + cmd.function = SSSNIC_FUNC_IDX(hw); + rte_memcpy(cmd.addr, addr, 6); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, SSSNIC_ADD_MAC_ADDR_CMD, + func, SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + if (cmd.common.status == SSSNIC_MAC_ADDR_CMD_STATUS_IGNORED) { + PMD_DRV_LOG(WARNING, + "MAC address operation is ignored"); + return 0; + } + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_ADD_MAC_ADDR_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +int +sssnic_mac_addr_del(struct sssnic_hw *hw, uint8_t *addr) +{ + int ret; + struct sssnic_mac_addr_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + uint16_t func; + + if (hw == NULL || addr == NULL) + return -EINVAL; + + if (SSSNIC_FUNC_TYPE(hw) == SSSNIC_FUNC_TYPE_VF) + func = SSSNIC_PF_FUNC_IDX(hw); + else + func = SSSNIC_MPU_FUNC_IDX; + + memset(&cmd, 0, sizeof(cmd)); + cmd_len = sizeof(cmd); + cmd.function = SSSNIC_FUNC_IDX(hw); + rte_memcpy(cmd.addr, addr, 6); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, SSSNIC_DEL_MAC_ADDR_CMD, + func, SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + if (cmd.common.status == SSSNIC_MAC_ADDR_CMD_STATUS_IGNORED) { + PMD_DRV_LOG(WARNING, + "MAC address operation is ignored"); + return 0; + } + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_DEL_MAC_ADDR_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} diff --git a/drivers/net/sssnic/base/sssnic_api.h b/drivers/net/sssnic/base/sssnic_api.h index 8011cc8b0f..b985230595 100644 --- a/drivers/net/sssnic/base/sssnic_api.h +++ b/drivers/net/sssnic/base/sssnic_api.h @@ -27,5 +27,9 @@ int sssnic_msix_attr_get(struct sssnic_hw *hw, uint16_t msix_idx, int sssnic_msix_attr_set(struct sssnic_hw *hw, uint16_t msix_idx, struct sssnic_msix_attr *attr); int sssnic_capability_get(struct sssnic_hw *hw, struct sssnic_capability *capa); +int sssnic_mac_addr_get(struct sssnic_hw *hw, uint8_t *addr); +int sssnic_mac_addr_update(struct sssnic_hw *hw, uint8_t *new, uint8_t *old); +int sssnic_mac_addr_add(struct sssnic_hw *hw, uint8_t *addr); +int sssnic_mac_addr_del(struct sssnic_hw *hw, uint8_t *addr); #endif /* _SSSNIC_API_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_cmd.h b/drivers/net/sssnic/base/sssnic_cmd.h index 79192affbc..e70184866e 100644 --- a/drivers/net/sssnic/base/sssnic_cmd.h +++ b/drivers/net/sssnic/base/sssnic_cmd.h @@ -20,6 +20,14 @@ enum sssnic_mgmt_cmd_id { #define SSSNIC_GET_CAPABILITY_CMD 0 +#define SSSNIC_MAC_ADDR_CMD_STATUS_IGNORED 0x4 +enum sssnic_mac_addr_cmd_id { + SSSNIC_GET_MAC_ADDR_CMD = 20, + SSSNIC_ADD_MAC_ADDR_CMD, + SSSNIC_DEL_MAC_ADDR_CMD, + SSSNIC_UPDATE_MAC_ADDR_CMD, +}; + struct sssnic_cmd_common { uint8_t status; uint8_t version; @@ -125,4 +133,21 @@ struct sssnic_capability_get_cmd { uint32_t resvd3[63]; }; +struct sssnic_mac_addr_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint16_t vlan; + uint16_t resvd; + uint8_t addr[6]; +}; + +struct sssnic_mac_addr_update_cmd { 
+ struct sssnic_cmd_common common; + uint16_t function; + uint16_t vlan; + uint16_t resvd0; + uint8_t old_addr[6]; + uint16_t resvd1; + uint8_t new_addr[6]; +}; #endif /* _SSSNIC_CMD_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_hw.h b/drivers/net/sssnic/base/sssnic_hw.h index 5f20a9465d..f8fe2ac7e1 100644 --- a/drivers/net/sssnic/base/sssnic_hw.h +++ b/drivers/net/sssnic/base/sssnic_hw.h @@ -70,6 +70,7 @@ struct sssnic_hw { #define SSSNIC_PHY_PORT(hw) ((hw)->phy_port) #define SSSNIC_FUNC_TYPE(hw) ((hw)->attr.func_type) #define SSSNIC_AF_FUNC_IDX(hw) ((hw)->attr.af_idx) +#define SSSNIC_PF_FUNC_IDX(hw) ((hw)->attr.pf_idx) enum sssnic_module { SSSNIC_COMM_MODULE = 0, diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index 2d7ed7d60b..92a20649fc 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -7,6 +7,7 @@ #include "sssnic_log.h" #include "base/sssnic_hw.h" +#include "base/sssnic_api.h" #include "sssnic_ethdev.h" static int @@ -63,6 +64,258 @@ sssnic_ethdev_infos_get(struct rte_eth_dev *ethdev, return 0; } +static int +sssnic_ethdev_mac_addr_set(struct rte_eth_dev *ethdev, + struct rte_ether_addr *mac_addr) +{ + int ret; + struct sssnic_netdev *netdev; + struct sssnic_hw *hw; + char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + + netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + hw = SSSNIC_NETDEV_TO_HW(netdev); + + rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, mac_addr); + + ret = sssnic_mac_addr_update(hw, mac_addr->addr_bytes, + netdev->default_addr.addr_bytes); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to update default MAC address: %s", + mac_str); + return ret; + } + rte_ether_addr_copy(mac_addr, &netdev->default_addr); + + PMD_DRV_LOG(INFO, "Updated default MAC address %s of port %u", mac_str, + ethdev->data->port_id); + + return 0; +} + +static void +sssnic_ethdev_mac_addr_remove(struct rte_eth_dev *ethdev, uint32_t index) +{ + int ret; + struct sssnic_netdev *netdev; + struct sssnic_hw *hw; + struct rte_ether_addr *mac; + char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + + netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + hw = SSSNIC_NETDEV_TO_HW(netdev); + + mac = ðdev->data->mac_addrs[index]; + ret = sssnic_mac_addr_del(hw, mac->addr_bytes); + if (ret != 0) { + rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, mac); + PMD_DRV_LOG(ERR, "Failed to delete MAC address %s", mac_str); + } +} + +static int +sssnic_ethdev_mac_addr_add(struct rte_eth_dev *ethdev, + struct rte_ether_addr *mac_addr, __rte_unused uint32_t index, + __rte_unused uint32_t vmdq) +{ + int ret; + struct sssnic_netdev *netdev; + struct sssnic_hw *hw; + char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + + netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + hw = SSSNIC_NETDEV_TO_HW(netdev); + + if (rte_is_multicast_ether_addr(mac_addr)) { + rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, + mac_addr); + PMD_DRV_LOG(ERR, + "Invalid MAC address:%s, cannot be multicast address", + mac_str); + } + + ret = sssnic_mac_addr_add(hw, mac_addr->addr_bytes); + if (ret != 0) { + rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, + mac_addr); + PMD_DRV_LOG(ERR, "Failed to add MAC address %s", mac_str); + return ret; + } + + return 0; +} + +static void +sssnic_ethdev_mcast_addrs_clean(struct rte_eth_dev *ethdev) +{ + int ret; + struct sssnic_netdev *netdev; + struct sssnic_hw *hw; + int i; + + netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + hw = SSSNIC_NETDEV_TO_HW(netdev); + + for (i = 0; i < SSSNIC_ETHDEV_MAX_NUM_MC_MAC; i++) { + if 
(rte_is_zero_ether_addr(&netdev->mcast_addrs[i])) + break; + + ret = sssnic_mac_addr_del(hw, + netdev->mcast_addrs[i].addr_bytes); + if (ret != 0) + PMD_DRV_LOG(WARNING, "Failed to delete MAC address"); + + memset(&netdev->mcast_addrs[i], 0, + sizeof(struct rte_ether_addr)); + } +} + +static int +sssnic_ethdev_set_mc_addr_list(struct rte_eth_dev *ethdev, + struct rte_ether_addr *mc_addr_set, uint32_t nb_mc_addr) +{ + int ret; + struct sssnic_netdev *netdev; + struct sssnic_hw *hw; + char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + uint32_t i; + + netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + hw = SSSNIC_NETDEV_TO_HW(netdev); + + if (nb_mc_addr > SSSNIC_ETHDEV_MAX_NUM_MC_MAC) { + PMD_DRV_LOG(ERR, + "Failed to set mcast address list to port %u, excceds max number:%u", + ethdev->data->port_id, SSSNIC_ETHDEV_MAX_NUM_MC_MAC); + return -EINVAL; + } + + for (i = 0; i < nb_mc_addr; i++) { + if (!rte_is_multicast_ether_addr(&mc_addr_set[i])) { + rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, + &mc_addr_set[i]); + PMD_DRV_LOG(ERR, "Invalid Multicast MAC address: %s", + mac_str); + return -EINVAL; + } + } + + sssnic_ethdev_mcast_addrs_clean(ethdev); + + for (i = 0; i < nb_mc_addr; i++) { + ret = sssnic_mac_addr_add(hw, mc_addr_set[i].addr_bytes); + if (ret != 0) { + sssnic_ethdev_mcast_addrs_clean(ethdev); + rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, + &mc_addr_set[i]); + PMD_DRV_LOG(ERR, + "Failed to add Multicast MAC address: %s", + mac_str); + return ret; + } + rte_ether_addr_copy(&mc_addr_set[i], &netdev->mcast_addrs[i]); + } + + return 0; +} + +static int +sssnic_ethdev_mac_addrs_init(struct rte_eth_dev *ethdev) +{ + int ret; + struct sssnic_netdev *netdev; + struct sssnic_hw *hw; + struct rte_ether_addr default_addr; + char mac_str[RTE_ETHER_ADDR_FMT_SIZE]; + + PMD_INIT_FUNC_TRACE(); + + netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + hw = SSSNIC_NETDEV_TO_HW(netdev); + + ethdev->data->mac_addrs = rte_zmalloc(NULL, + SSSNIC_ETHDEV_MAX_NUM_UC_MAC * sizeof(struct rte_ether_addr), + 0); + if (ethdev->data->mac_addrs == NULL) { + PMD_DRV_LOG(ERR, + "Failed to allocate memory to store %u mac addresses", + SSSNIC_ETHDEV_MAX_NUM_UC_MAC); + return -ENOMEM; + } + + netdev->mcast_addrs = rte_zmalloc(NULL, + SSSNIC_ETHDEV_MAX_NUM_MC_MAC * sizeof(struct rte_ether_addr), + 0); + if (netdev->mcast_addrs == NULL) { + PMD_DRV_LOG(ERR, + "Failed to allocate memory to store %u mcast addresses", + SSSNIC_ETHDEV_MAX_NUM_MC_MAC); + ret = -ENOMEM; + goto alloc_mcast_addr_fail; + } + + /* initialize default MAC address */ + memset(&default_addr, 0, sizeof(default_addr)); + ret = sssnic_mac_addr_get(hw, default_addr.addr_bytes); + if (ret != 0) + PMD_DRV_LOG(NOTICE, + "Could not get default MAC address, will use random address"); + + if (rte_is_zero_ether_addr(&default_addr)) + rte_eth_random_addr(default_addr.addr_bytes); + + rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, &default_addr); + + ret = sssnic_mac_addr_add(hw, default_addr.addr_bytes); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to add default MAC address: %s", + mac_str); + goto add_ether_addr_fail; + } + + rte_ether_addr_copy(&default_addr, ðdev->data->mac_addrs[0]); + rte_ether_addr_copy(&default_addr, &netdev->default_addr); + + PMD_DRV_LOG(INFO, "Port %u default MAC address: %s", + ethdev->data->port_id, mac_str); + + return 0; + +add_ether_addr_fail: + rte_free(netdev->mcast_addrs); + netdev->mcast_addrs = NULL; +alloc_mcast_addr_fail: + rte_free(ethdev->data->mac_addrs); + ethdev->data->mac_addrs = NULL; + return ret; +} + +static void 
+sssnic_ethdev_mac_addrs_clean(struct rte_eth_dev *ethdev) +{ + int ret; + struct sssnic_netdev *netdev; + struct sssnic_hw *hw; + int i; + + netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + hw = SSSNIC_NETDEV_TO_HW(netdev); + + for (i = 0; i < SSSNIC_ETHDEV_MAX_NUM_UC_MAC; i++) { + if (rte_is_zero_ether_addr(ðdev->data->mac_addrs[i])) + continue; + + ret = sssnic_mac_addr_del(hw, + ethdev->data->mac_addrs[i].addr_bytes); + if (ret != 0) + PMD_DRV_LOG(ERR, + "Failed to delete MAC address from port %u", + ethdev->data->port_id); + } + + sssnic_ethdev_mcast_addrs_clean(ethdev); +} + static int sssnic_ethdev_configure(struct rte_eth_dev *ethdev) { @@ -80,6 +333,7 @@ sssnic_ethdev_release(struct rte_eth_dev *ethdev) { struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + sssnic_ethdev_mac_addrs_clean(ethdev); sssnic_hw_shutdown(hw); rte_free(hw); } @@ -87,6 +341,10 @@ sssnic_ethdev_release(struct rte_eth_dev *ethdev) static const struct eth_dev_ops sssnic_ethdev_ops = { .dev_configure = sssnic_ethdev_configure, .dev_infos_get = sssnic_ethdev_infos_get, + .mac_addr_set = sssnic_ethdev_mac_addr_set, + .mac_addr_remove = sssnic_ethdev_mac_addr_remove, + .mac_addr_add = sssnic_ethdev_mac_addr_add, + .set_mc_addr_list = sssnic_ethdev_set_mc_addr_list, }; static int @@ -118,12 +376,22 @@ sssnic_ethdev_init(struct rte_eth_dev *ethdev) return ret; } + ret = sssnic_ethdev_mac_addrs_init(ethdev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to initialize MAC addresses"); + goto mac_addrs_init_fail; + } + netdev->max_num_rxq = SSSNIC_MAX_NUM_RXQ(hw); netdev->max_num_txq = SSSNIC_MAX_NUM_TXQ(hw); ethdev->dev_ops = &sssnic_ethdev_ops; return 0; + +mac_addrs_init_fail: + sssnic_hw_shutdown(0); + return ret; } static int @@ -140,7 +408,7 @@ sssnic_ethdev_uninit(struct rte_eth_dev *ethdev) sssnic_ethdev_release(ethdev); - return -EINVAL; + return 0; } static int diff --git a/drivers/net/sssnic/sssnic_ethdev.h b/drivers/net/sssnic/sssnic_ethdev.h index 70dd43d5c0..636b3fd04c 100644 --- a/drivers/net/sssnic/sssnic_ethdev.h +++ b/drivers/net/sssnic/sssnic_ethdev.h @@ -58,6 +58,8 @@ struct sssnic_netdev { void *hw; + struct rte_ether_addr *mcast_addrs; + struct rte_ether_addr default_addr; uint16_t max_num_txq; uint16_t max_num_rxq; }; From patchwork Fri Sep 1 09:34:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131057 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0B55B4221E; Fri, 1 Sep 2023 11:37:24 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E3269402EF; Fri, 1 Sep 2023 11:36:00 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.unionmem.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id E9BF1402CD for ; Fri, 1 Sep 2023 11:35:56 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZR79069836; Fri, 1 Sep 2023 17:35:29 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:27 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 12/32] net/sssnic: 
support dev link status Date: Fri, 1 Sep 2023 17:34:54 +0800 Message-ID: <20230901093514.224824-13-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZR79069836 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- v2: * Removed error.h from including files. --- doc/guides/nics/features/sssnic.ini | 1 + drivers/net/sssnic/base/sssnic_api.c | 127 ++++++++++++++++++++++++ drivers/net/sssnic/base/sssnic_api.h | 14 +++ drivers/net/sssnic/base/sssnic_cmd.h | 37 +++++++ drivers/net/sssnic/meson.build | 1 + drivers/net/sssnic/sssnic_ethdev.c | 4 + drivers/net/sssnic/sssnic_ethdev_link.c | 111 +++++++++++++++++++++ drivers/net/sssnic/sssnic_ethdev_link.h | 12 +++ 8 files changed, 307 insertions(+) create mode 100644 drivers/net/sssnic/sssnic_ethdev_link.c create mode 100644 drivers/net/sssnic/sssnic_ethdev_link.h diff --git a/doc/guides/nics/features/sssnic.ini b/doc/guides/nics/features/sssnic.ini index 602839d301..a0688e70ef 100644 --- a/doc/guides/nics/features/sssnic.ini +++ b/doc/guides/nics/features/sssnic.ini @@ -4,6 +4,7 @@ ; Refer to default.ini for the full list of available PMD features. ; [Features] +Link status = Y Unicast MAC filter = Y Multicast MAC filter = Y Linux = Y diff --git a/drivers/net/sssnic/base/sssnic_api.c b/drivers/net/sssnic/base/sssnic_api.c index 76266dfbae..3bb31009c5 100644 --- a/drivers/net/sssnic/base/sssnic_api.c +++ b/drivers/net/sssnic/base/sssnic_api.c @@ -307,3 +307,130 @@ sssnic_mac_addr_del(struct sssnic_hw *hw, uint8_t *addr) return 0; } + +int +sssnic_netif_link_status_get(struct sssnic_hw *hw, uint8_t *status) +{ + int ret; + struct sssnic_netif_link_status_get_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + uint16_t func; + + if (hw == NULL || status == NULL) + return -EINVAL; + + if (SSSNIC_FUNC_TYPE(hw) == SSSNIC_FUNC_TYPE_VF) + func = SSSNIC_PF_FUNC_IDX(hw); + else + func = SSSNIC_MPU_FUNC_IDX; + + memset(&cmd, 0, sizeof(cmd)); + cmd.port = SSSNIC_PHY_PORT(hw); + cmd_len = sizeof(cmd); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_GET_NETIF_LINK_STATUS_CMD, func, SSSNIC_NETIF_MODULE, + SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_GET_NETIF_LINK_STATUS_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + *status = cmd.status; + + return 0; +} + +int +sssnic_netif_link_info_get(struct sssnic_hw *hw, + struct sssnic_netif_link_info *info) +{ + int ret; + struct sssnic_netif_link_info_get_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + + if (hw == NULL || info == NULL) + return -EINVAL; + + ret = sssnic_netif_link_status_get(hw, &info->status); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to get netif link state!"); + return ret; + } + + memset(&cmd, 0, sizeof(cmd)); + cmd.port = SSSNIC_PHY_PORT(hw); + 
cmd_len = sizeof(cmd); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_GET_NETIF_LINK_INFO_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_NETIF_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_GET_NETIF_LINK_INFO_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + info->speed = cmd.speed; + info->duplex = cmd.duplex; + info->fec = cmd.fec; + info->type = cmd.type; + info->autoneg_capa = cmd.autoneg_capa; + info->autoneg = cmd.autoneg; + + return 0; +} + +int +sssnic_netif_enable_set(struct sssnic_hw *hw, uint8_t state) +{ + int ret; + struct sssnic_netif_enable_set_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + + if (hw == NULL) + return -EINVAL; + + if (SSSNIC_FUNC_TYPE(hw) == SSSNIC_FUNC_TYPE_VF) + return 0; + + memset(&cmd, 0, sizeof(cmd)); + cmd.function = SSSNIC_FUNC_IDX(hw); + if (state != 0) + cmd.flag = SSSNIC_SET_NETIF_ENABLE_CMD_FLAG_RX_EN | + SSSNIC_SET_NETIF_ENABLE_CMD_FLAG_TX_EN; + cmd_len = sizeof(cmd); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_SET_NETIF_ENABLE_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_NETIF_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_SET_NETIF_ENABLE_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} diff --git a/drivers/net/sssnic/base/sssnic_api.h b/drivers/net/sssnic/base/sssnic_api.h index b985230595..168aa152b9 100644 --- a/drivers/net/sssnic/base/sssnic_api.h +++ b/drivers/net/sssnic/base/sssnic_api.h @@ -22,6 +22,16 @@ struct sssnic_capability { uint8_t cos; }; +struct sssnic_netif_link_info { + uint8_t status; + uint8_t type; + uint8_t autoneg_capa; + uint8_t autoneg; + uint8_t duplex; + uint8_t speed; + uint8_t fec; +}; + int sssnic_msix_attr_get(struct sssnic_hw *hw, uint16_t msix_idx, struct sssnic_msix_attr *attr); int sssnic_msix_attr_set(struct sssnic_hw *hw, uint16_t msix_idx, @@ -31,5 +41,9 @@ int sssnic_mac_addr_get(struct sssnic_hw *hw, uint8_t *addr); int sssnic_mac_addr_update(struct sssnic_hw *hw, uint8_t *new, uint8_t *old); int sssnic_mac_addr_add(struct sssnic_hw *hw, uint8_t *addr); int sssnic_mac_addr_del(struct sssnic_hw *hw, uint8_t *addr); +int sssnic_netif_link_status_get(struct sssnic_hw *hw, uint8_t *status); +int sssnic_netif_link_info_get(struct sssnic_hw *hw, + struct sssnic_netif_link_info *info); +int sssnic_netif_enable_set(struct sssnic_hw *hw, uint8_t state); #endif /* _SSSNIC_API_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_cmd.h b/drivers/net/sssnic/base/sssnic_cmd.h index e70184866e..6957b742fc 100644 --- a/drivers/net/sssnic/base/sssnic_cmd.h +++ b/drivers/net/sssnic/base/sssnic_cmd.h @@ -27,6 +27,13 @@ enum sssnic_mac_addr_cmd_id { SSSNIC_DEL_MAC_ADDR_CMD, SSSNIC_UPDATE_MAC_ADDR_CMD, }; +enum sssnic_netif_cmd_id { + SSSNIC_SET_NETIF_ENABLE_CMD = 6, + SSSNIC_GET_NETIF_LINK_STATUS_CMD = 7, + SSSNIC_GET_NETIF_MAC_STATS_CMD = 151, + SSSNIC_CLEAR_NETIF_MAC_STATS_CMD = 152, + SSSNIC_GET_NETIF_LINK_INFO_CMD = 153, +}; struct sssnic_cmd_common { uint8_t status; @@ -150,4 +157,34 @@ struct sssnic_mac_addr_update_cmd { uint16_t resvd1; uint8_t 
new_addr[6]; }; +struct sssnic_netif_link_status_get_cmd { + struct sssnic_cmd_common common; + uint8_t port; + uint8_t status; + uint16_t rsvd; +}; + +struct sssnic_netif_link_info_get_cmd { + struct sssnic_cmd_common common; + uint8_t port; + uint8_t resvd0[3]; + uint8_t type; + uint8_t autoneg_capa; + uint8_t autoneg; + uint8_t duplex; + uint8_t speed; + uint8_t fec; + uint8_t resvd1[18]; +}; + +#define SSSNIC_SET_NETIF_ENABLE_CMD_FLAG_TX_EN 0x1 +#define SSSNIC_SET_NETIF_ENABLE_CMD_FLAG_RX_EN 0x2 +struct sssnic_netif_enable_set_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint16_t resvd0; + uint8_t flag; + uint8_t resvd1[3]; +}; + #endif /* _SSSNIC_CMD_H_ */ diff --git a/drivers/net/sssnic/meson.build b/drivers/net/sssnic/meson.build index 328bb41436..12e967722a 100644 --- a/drivers/net/sssnic/meson.build +++ b/drivers/net/sssnic/meson.build @@ -18,4 +18,5 @@ objs = [base_objs] sources = files( 'sssnic_ethdev.c', + 'sssnic_ethdev_link.c', ) diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index 92a20649fc..4e546cca66 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -9,6 +9,7 @@ #include "base/sssnic_hw.h" #include "base/sssnic_api.h" #include "sssnic_ethdev.h" +#include "sssnic_ethdev_link.h" static int sssnic_ethdev_infos_get(struct rte_eth_dev *ethdev, @@ -339,6 +340,9 @@ sssnic_ethdev_release(struct rte_eth_dev *ethdev) } static const struct eth_dev_ops sssnic_ethdev_ops = { + .dev_set_link_up = sssnic_ethdev_set_link_up, + .dev_set_link_down = sssnic_ethdev_set_link_down, + .link_update = sssnic_ethdev_link_update, .dev_configure = sssnic_ethdev_configure, .dev_infos_get = sssnic_ethdev_infos_get, .mac_addr_set = sssnic_ethdev_mac_addr_set, diff --git a/drivers/net/sssnic/sssnic_ethdev_link.c b/drivers/net/sssnic/sssnic_ethdev_link.c new file mode 100644 index 0000000000..04e2d2e5d2 --- /dev/null +++ b/drivers/net/sssnic/sssnic_ethdev_link.c @@ -0,0 +1,111 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. 
+ */ + +#include +#include + +#include "sssnic_log.h" +#include "sssnic_ethdev.h" +#include "sssnic_ethdev_link.h" +#include "base/sssnic_hw.h" +#include "base/sssnic_api.h" + +static const uint32_t sssnic_ethdev_speed_map[] = { RTE_ETH_SPEED_NUM_NONE, + RTE_ETH_SPEED_NUM_10M, RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G, + RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G, RTE_ETH_SPEED_NUM_40G, + RTE_ETH_SPEED_NUM_50G, RTE_ETH_SPEED_NUM_100G, RTE_ETH_SPEED_NUM_200G }; + +#define SSSNIC_ETHDEV_NUM_SPEED_TYPE RTE_DIM(sssnic_ethdev_speed_map) + +static int +sssnic_ethdev_link_get(struct rte_eth_dev *ethdev, struct rte_eth_link *link) +{ + int ret; + struct sssnic_netif_link_info info; + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + + memset(&info, 0, sizeof(info)); + ret = sssnic_netif_link_info_get(hw, &info); + if (ret != 0) { + link->link_status = RTE_ETH_LINK_DOWN; + link->link_speed = RTE_ETH_SPEED_NUM_NONE; + link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX; + link->link_autoneg = RTE_ETH_LINK_FIXED; + PMD_DRV_LOG(ERR, "Failed to get netif link info"); + return ret; + } + + if (!info.status) { + link->link_status = RTE_ETH_LINK_DOWN; + link->link_speed = RTE_ETH_SPEED_NUM_NONE; + link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX; + link->link_autoneg = RTE_ETH_LINK_FIXED; + } else { + link->link_status = RTE_ETH_LINK_UP; + link->link_duplex = info.duplex; + link->link_autoneg = info.autoneg; + if (info.speed >= SSSNIC_ETHDEV_NUM_SPEED_TYPE) + link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN; + else + link->link_speed = sssnic_ethdev_speed_map[info.speed]; + } + + return 0; +} + +int +sssnic_ethdev_set_link_up(struct rte_eth_dev *ethdev) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + int ret; + + ret = sssnic_netif_enable_set(hw, 1); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to enable netif"); + return ret; + } + + return 0; +} + +int +sssnic_ethdev_set_link_down(struct rte_eth_dev *ethdev) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + int ret; + + ret = sssnic_netif_enable_set(hw, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to disable netif"); + return ret; + } + + return 0; +} + +int +sssnic_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete) +{ +#define CHECK_INTERVAL 100 /* 100ms */ +#define MAX_REPEAT_TIME 10 /* 1s (10 * 100ms) in total */ + int ret; + struct rte_eth_link link; + unsigned int rep_cnt = MAX_REPEAT_TIME; + + memset(&link, 0, sizeof(link)); + do { + ret = sssnic_ethdev_link_get(ethdev, &link); + if (ret != 0) + goto out; + + if (!wait_to_complete || link.link_status) + goto out; + + rte_delay_ms(CHECK_INTERVAL); + + } while (--rep_cnt); + +out: + return rte_eth_linkstatus_set(ethdev, &link); +} diff --git a/drivers/net/sssnic/sssnic_ethdev_link.h b/drivers/net/sssnic/sssnic_ethdev_link.h new file mode 100644 index 0000000000..00ad13fe9b --- /dev/null +++ b/drivers/net/sssnic/sssnic_ethdev_link.h @@ -0,0 +1,12 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. 
+ */ + +#ifndef _SSSNIC_ETHDEV_LINK_H_ +#define _SSSNIC_ETHDEV_LINK_H_ + +int sssnic_ethdev_set_link_up(struct rte_eth_dev *ethdev); +int sssnic_ethdev_set_link_down(struct rte_eth_dev *ethdev); +int sssnic_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete); + +#endif /* _SSSNIC_ETHDEV_LINK_H_ */ From patchwork Fri Sep 1 09:34:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131060 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 61C394221E; Fri, 1 Sep 2023 11:37:48 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 20623402E1; Fri, 1 Sep 2023 11:36:05 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.unionmem.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id 9922E402ED for ; Fri, 1 Sep 2023 11:36:00 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZR7A069836; Fri, 1 Sep 2023 17:35:29 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:28 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 13/32] net/sssnic: support link status event Date: Fri, 1 Sep 2023 17:34:55 +0800 Message-ID: <20230901093514.224824-14-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZR7A069836 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- v2: * Fixed 'EINVAL' undeclared. 
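
Illustration (not part of this patch): with RTE_PCI_DRV_INTR_LSC set and the
link interrupt path below in place, an application can be notified of link
changes through the standard ethdev event API. A minimal sketch, assuming a
hypothetical callback name and printf-based reporting:

    #include <stdio.h>
    #include <rte_ethdev.h>

    static int
    lsc_event_cb(uint16_t port_id, enum rte_eth_event_type event,
                 void *cb_arg, void *ret_param)
    {
            struct rte_eth_link link;

            (void)event;
            (void)cb_arg;
            (void)ret_param;

            /* Non-blocking read of the link status cached by the PMD. */
            if (rte_eth_link_get_nowait(port_id, &link) == 0)
                    printf("port %u link %s, speed %s\n", port_id,
                           link.link_status == RTE_ETH_LINK_UP ? "up" : "down",
                           rte_eth_link_speed_to_str(link.link_speed));

            return 0;
    }

    /* Registered once per port, typically before rte_eth_dev_start():
     *     rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_LSC,
     *                                   lsc_event_cb, NULL);
     */
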
--- doc/guides/nics/features/sssnic.ini | 1 + drivers/net/sssnic/base/meson.build | 1 + drivers/net/sssnic/base/sssnic_exception.c | 116 +++++++++++++++++++++ drivers/net/sssnic/base/sssnic_exception.h | 10 ++ drivers/net/sssnic/base/sssnic_hw.c | 46 ++++++++ drivers/net/sssnic/base/sssnic_hw.h | 22 ++++ drivers/net/sssnic/sssnic_ethdev.c | 6 +- drivers/net/sssnic/sssnic_ethdev_link.c | 98 +++++++++++++++++ drivers/net/sssnic/sssnic_ethdev_link.h | 2 + 9 files changed, 301 insertions(+), 1 deletion(-) create mode 100644 drivers/net/sssnic/base/sssnic_exception.c create mode 100644 drivers/net/sssnic/base/sssnic_exception.h diff --git a/doc/guides/nics/features/sssnic.ini b/doc/guides/nics/features/sssnic.ini index a0688e70ef..82b527ba26 100644 --- a/doc/guides/nics/features/sssnic.ini +++ b/doc/guides/nics/features/sssnic.ini @@ -5,6 +5,7 @@ ; [Features] Link status = Y +Link status event = Y Unicast MAC filter = Y Multicast MAC filter = Y Linux = Y diff --git a/drivers/net/sssnic/base/meson.build b/drivers/net/sssnic/base/meson.build index e93ca7b24b..cf0c177cab 100644 --- a/drivers/net/sssnic/base/meson.build +++ b/drivers/net/sssnic/base/meson.build @@ -9,6 +9,7 @@ sources = [ 'sssnic_workq.c', 'sssnic_ctrlq.c', 'sssnic_api.c', + 'sssnic_exception.c', ] c_args = cflags diff --git a/drivers/net/sssnic/base/sssnic_exception.c b/drivers/net/sssnic/base/sssnic_exception.c new file mode 100644 index 0000000000..738ca9c6cd --- /dev/null +++ b/drivers/net/sssnic/base/sssnic_exception.c @@ -0,0 +1,116 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. + */ + +#include +#include +#include + +#include "../sssnic_log.h" +#include "sssnic_hw.h" +#include "sssnic_reg.h" +#include "sssnic_cmd.h" +#include "sssnic_msg.h" +#include "sssnic_eventq.h" +#include "sssnic_mbox.h" +#include "sssnic_exception.h" + +static void +sssnic_link_event_msg_handle(struct sssnic_hw *hw, struct sssnic_msg *msg, + __rte_unused enum sssnic_msg_chann_id chan_id) +{ + struct sssnic_netif_link_status_get_cmd *cmd; + sssnic_link_event_cb_t *cb; + enum sssnic_link_status status; + void *priv; + + cb = hw->link_event_handler.cb; + priv = hw->link_event_handler.priv; + + cmd = (struct sssnic_netif_link_status_get_cmd *)msg->data_buf; + if (cb != NULL) { + if (cmd->status) + status = SSSNIC_LINK_STATUS_UP; + else + status = SSSNIC_LINK_STATUS_DOWN; + + PMD_DRV_LOG(DEBUG, "Received sssnic%u link %s event", + SSSNIC_ETH_PORT_ID(hw), status ? 
"up" : "down"); + + return cb(cmd->port, status, priv); + } + + PMD_DRV_LOG(WARNING, "Link event was not processed, port=%u, status=%u", + cmd->port, cmd->status); +} + +static void +sssnic_netif_vf_link_status_msg_handle(struct sssnic_hw *hw, + struct sssnic_msg *msg) +{ + int ret; + + if (msg->ack != 0) { + msg->ack = 0; + msg->type = SSSNIC_MSG_TYPE_RESP; + msg->data_len = 1; /* indicate no data */ + ret = sssnic_mbox_send(hw, msg, NULL, 0, 0); + if (ret != 0) + PMD_DRV_LOG(ERR, + "Failed to send VF link status response, ret=%d", + ret); + } +} + +static void +sssnic_netif_exception_msg_handle(struct sssnic_hw *hw, struct sssnic_msg *msg, + enum sssnic_msg_chann_id chan_id) +{ + if (msg->command == SSSNIC_GET_NETIF_LINK_STATUS_CMD) { + if (SSSNIC_FUNC_TYPE(hw) == SSSNIC_FUNC_TYPE_VF) + sssnic_netif_vf_link_status_msg_handle(hw, msg); + + sssnic_link_event_msg_handle(hw, msg, chan_id); + return; + } + + PMD_DRV_LOG(WARNING, + "Netif exception message was not processed, cmd=%u", + msg->command); +} + +static int +sssnic_exception_msg_process(struct sssnic_msg *msg, + enum sssnic_msg_chann_id chan_id, void *priv) +{ + struct sssnic_hw *hw = (struct sssnic_hw *)priv; + + SSSNIC_DEBUG("command=%u, func=%u module=%u, type=%u, ack=%u, seq=%u, " + "status=%u, id=%u data_buf=%p, data_len=%u", + msg->command, msg->func, msg->module, msg->type, msg->ack, + msg->seg, msg->status, msg->id, msg->data_buf, msg->data_len); + + if (msg->module == SSSNIC_NETIF_MODULE) { + sssnic_netif_exception_msg_handle(hw, msg, chan_id); + return SSSNIC_MSG_DONE; + } + + PMD_DRV_LOG(WARNING, "Exception message was not processed, moule=%u", + msg->module); + + return SSSNIC_MSG_DONE; +} + +int +sssnic_exception_process_init(struct sssnic_hw *hw) +{ + if (hw == NULL) + return -EINVAL; + + sssnic_msg_rx_handler_register(hw, SSSNIC_MSG_CHAN_MPU, + SSSNIC_MSG_TYPE_REQ, sssnic_exception_msg_process, hw); + sssnic_msg_rx_handler_register(hw, SSSNIC_MSG_CHAN_MBOX, + SSSNIC_MSG_TYPE_REQ, sssnic_exception_msg_process, hw); + + return 0; +} diff --git a/drivers/net/sssnic/base/sssnic_exception.h b/drivers/net/sssnic/base/sssnic_exception.h new file mode 100644 index 0000000000..46f2f7465b --- /dev/null +++ b/drivers/net/sssnic/base/sssnic_exception.h @@ -0,0 +1,10 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. 
+ */ + +#ifndef _SSSNIC_EXCEPTION_H_ +#define _SSSNIC_EXCEPTION_H_ + +int sssnic_exception_process_init(struct sssnic_hw *hw); + +#endif /* _SSSNIC_EXCEPTION_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_hw.c b/drivers/net/sssnic/base/sssnic_hw.c index 8f5f556bde..82eb4ea295 100644 --- a/drivers/net/sssnic/base/sssnic_hw.c +++ b/drivers/net/sssnic/base/sssnic_hw.c @@ -15,6 +15,7 @@ #include "sssnic_msg.h" #include "sssnic_mbox.h" #include "sssnic_ctrlq.h" +#include "sssnic_exception.h" static int wait_for_sssnic_hw_ready(struct sssnic_hw *hw) @@ -133,6 +134,17 @@ sssnic_msix_all_disable(struct sssnic_hw *hw) sssnic_msix_state_set(hw, i, SSSNIC_MSIX_DISABLE); } +void +sssnic_msix_resend_disable(struct sssnic_hw *hw, uint16_t msix_id) +{ + struct sssnic_msix_ctrl_reg reg; + + reg.u32 = 0; + reg.resend_timer_clr = SSSNIC_MSIX_DISABLE; + reg.msxi_idx = msix_id; + sssnic_cfg_reg_write(hw, SSSNIC_MSIX_CTRL_REG, reg.u32); +} + static void sssnic_pf_status_set(struct sssnic_hw *hw, enum sssnic_pf_status status) { @@ -276,6 +288,38 @@ sssnic_capability_init(struct sssnic_hw *hw) return 0; } +int +sssnic_link_event_callback_register(struct sssnic_hw *hw, + sssnic_link_event_cb_t *cb, void *priv) +{ + if (hw == NULL || cb == NULL) + return -EINVAL; + + hw->link_event_handler.cb = cb; + hw->link_event_handler.priv = priv; + + return 0; +} + +void +sssnic_link_event_callback_unregister(struct sssnic_hw *hw) +{ + if (hw != NULL) { + hw->link_event_handler.cb = NULL; + hw->link_event_handler.priv = NULL; + } +} + +void +sssnic_link_intr_handle(struct sssnic_hw *hw) +{ + PMD_DRV_LOG(DEBUG, "sssnic%u link interrupt triggered!", + SSSNIC_ETH_PORT_ID(hw)); + + sssnic_msix_resend_disable(hw, SSSNIC_LINK_INTR_MSIX_ID); + sssnic_eventq_flush(hw, SSSNIC_LINK_INTR_EVENTQ, 0); +} + static int sssnic_base_init(struct sssnic_hw *hw) { @@ -389,6 +433,8 @@ sssnic_hw_init(struct sssnic_hw *hw) goto capbility_init_fail; } + sssnic_exception_process_init(hw); + sssnic_pf_status_set(hw, SSSNIC_PF_STATUS_ACTIVE); return 0; diff --git a/drivers/net/sssnic/base/sssnic_hw.h b/drivers/net/sssnic/base/sssnic_hw.h index f8fe2ac7e1..e25f5595e6 100644 --- a/drivers/net/sssnic/base/sssnic_hw.h +++ b/drivers/net/sssnic/base/sssnic_hw.h @@ -44,6 +44,22 @@ struct sssnic_hw_attr { uint16_t num_irq; }; +enum sssnic_link_status { + SSSNIC_LINK_STATUS_DOWN, + SSSNIC_LINK_STATUS_UP, +}; + +#define SSSNIC_LINK_INTR_MSIX_ID 0 +#define SSSNIC_LINK_INTR_EVENTQ 0 + +typedef void sssnic_link_event_cb_t(uint8_t port, + enum sssnic_link_status status, void *priv); + +struct sssnic_link_event_handler { + sssnic_link_event_cb_t *cb; + void *priv; +}; + struct sssnic_hw { struct rte_pci_device *pci_dev; uint8_t *cfg_base_addr; @@ -55,6 +71,7 @@ struct sssnic_hw { struct sssnic_msg_inbox *msg_inbox; struct sssnic_mbox *mbox; struct sssnic_ctrlq *ctrlq; + struct sssnic_link_event_handler link_event_handler; uint8_t num_eventqs; uint8_t phy_port; uint16_t eth_port_id; @@ -82,5 +99,10 @@ enum sssnic_module { int sssnic_hw_init(struct sssnic_hw *hw); void sssnic_hw_shutdown(struct sssnic_hw *hw); void sssnic_msix_state_set(struct sssnic_hw *hw, uint16_t msix_id, int state); +void sssnic_msix_resend_disable(struct sssnic_hw *hw, uint16_t msix_id); +int sssnic_link_event_callback_register(struct sssnic_hw *hw, + sssnic_link_event_cb_t *cb, void *priv); +void sssnic_link_event_callback_unregister(struct sssnic_hw *hw); +void sssnic_link_intr_handle(struct sssnic_hw *hw); #endif /* _SSSNIC_HW_H_ */ diff --git a/drivers/net/sssnic/sssnic_ethdev.c 
b/drivers/net/sssnic/sssnic_ethdev.c index 4e546cca66..1e2b5c0450 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -334,6 +334,7 @@ sssnic_ethdev_release(struct rte_eth_dev *ethdev) { struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + sssnic_ethdev_link_intr_disable(ethdev); sssnic_ethdev_mac_addrs_clean(ethdev); sssnic_hw_shutdown(hw); rte_free(hw); @@ -391,6 +392,9 @@ sssnic_ethdev_init(struct rte_eth_dev *ethdev) ethdev->dev_ops = &sssnic_ethdev_ops; + sssnic_ethdev_link_update(ethdev, 0); + sssnic_ethdev_link_intr_enable(ethdev); + return 0; mac_addrs_init_fail: @@ -440,7 +444,7 @@ static const struct rte_pci_id sssnic_pci_id_map[] = { static struct rte_pci_driver sssnic_pmd = { .id_table = sssnic_pci_id_map, - .drv_flags = RTE_PCI_DRV_NEED_MAPPING, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC, .probe = sssnic_pci_probe, .remove = sssnic_pci_remove, }; diff --git a/drivers/net/sssnic/sssnic_ethdev_link.c b/drivers/net/sssnic/sssnic_ethdev_link.c index 04e2d2e5d2..9e932c51dd 100644 --- a/drivers/net/sssnic/sssnic_ethdev_link.c +++ b/drivers/net/sssnic/sssnic_ethdev_link.c @@ -109,3 +109,101 @@ sssnic_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete) out: return rte_eth_linkstatus_set(ethdev, &link); } + +static void +sssnic_ethdev_link_status_print(struct rte_eth_dev *ethdev, + struct rte_eth_link *link) +{ + if (link->link_status) { + PMD_DRV_LOG(INFO, "Port %u Link Up - speed %s - %s", + ethdev->data->port_id, + rte_eth_link_speed_to_str(link->link_speed), + link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ? + "full-duplex" : + "half-duplex"); + } else { + PMD_DRV_LOG(INFO, "Port %u Link Down", ethdev->data->port_id); + } +} + +static void +sssnic_ethdev_link_event_cb(__rte_unused uint8_t port, + __rte_unused enum sssnic_link_status status, void *arg) +{ + struct rte_eth_dev *ethdev = (struct rte_eth_dev *)arg; + struct rte_eth_link link; + int ret; + + memset(&link, 0, sizeof(link)); + sssnic_ethdev_link_get(ethdev, &link); + + ret = rte_eth_linkstatus_set(ethdev, &link); + if (ret == 0) { + sssnic_ethdev_link_status_print(ethdev, &link); + rte_eth_dev_callback_process(ethdev, RTE_ETH_EVENT_INTR_LSC, + NULL); + } +} + +static void +sssnic_ethdev_link_intr_callback(void *arg) +{ + struct rte_eth_dev *ethdev = (struct rte_eth_dev *)arg; + struct sssnic_hw *hw; + + hw = SSSNIC_ETHDEV_TO_HW(ethdev); + + sssnic_link_intr_handle(hw); +} + +void +sssnic_ethdev_link_intr_enable(struct rte_eth_dev *ethdev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(ethdev); + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + int ret; + + sssnic_link_event_callback_register(hw, sssnic_ethdev_link_event_cb, + ethdev); + + ret = rte_intr_callback_register(pci_dev->intr_handle, + sssnic_ethdev_link_intr_callback, ethdev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to register port %u intr callback!", + ethdev->data->port_id); + + sssnic_link_event_callback_unregister(hw); + + return; + } + + ret = rte_intr_enable(pci_dev->intr_handle); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to enable port %u interrupt!", + ethdev->data->port_id); + + rte_intr_callback_unregister(pci_dev->intr_handle, + sssnic_ethdev_link_intr_callback, ethdev); + sssnic_link_event_callback_unregister(hw); + + return; + } + + sssnic_msix_state_set(hw, SSSNIC_LINK_INTR_MSIX_ID, SSSNIC_MSIX_ENABLE); +} + +void +sssnic_ethdev_link_intr_disable(struct rte_eth_dev *ethdev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(ethdev); + struct 
sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + + sssnic_msix_state_set(hw, SSSNIC_LINK_INTR_MSIX_ID, + SSSNIC_MSIX_DISABLE); + + rte_intr_disable(pci_dev->intr_handle); + rte_intr_callback_unregister(pci_dev->intr_handle, + sssnic_ethdev_link_intr_callback, ethdev); + + sssnic_link_event_callback_unregister(hw); +} diff --git a/drivers/net/sssnic/sssnic_ethdev_link.h b/drivers/net/sssnic/sssnic_ethdev_link.h index 00ad13fe9b..52a86b4771 100644 --- a/drivers/net/sssnic/sssnic_ethdev_link.h +++ b/drivers/net/sssnic/sssnic_ethdev_link.h @@ -8,5 +8,7 @@ int sssnic_ethdev_set_link_up(struct rte_eth_dev *ethdev); int sssnic_ethdev_set_link_down(struct rte_eth_dev *ethdev); int sssnic_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete); +void sssnic_ethdev_link_intr_enable(struct rte_eth_dev *ethdev); +void sssnic_ethdev_link_intr_disable(struct rte_eth_dev *ethdev); #endif /* _SSSNIC_ETHDEV_LINK_H_ */ From patchwork Fri Sep 1 09:34:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131062 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8019F4221E; Fri, 1 Sep 2023 11:38:04 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 34A18406FF; Fri, 1 Sep 2023 11:36:08 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.ramaxel.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id 200BB402F2 for ; Fri, 1 Sep 2023 11:36:01 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZR7B069836; Fri, 1 Sep 2023 17:35:29 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:28 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 14/32] net/sssnic: support Rx queue setup and release Date: Fri, 1 Sep 2023 17:34:56 +0800 Message-ID: <20230901093514.224824-15-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZR7B069836 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- v2: * Removed error.h from including files. 
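
Illustration (not part of this patch): once the rx_queue_setup/rx_queue_release
ops below are wired up, queues are configured through the usual ethdev call. A
minimal application-side sketch with arbitrary example pool name and sizes;
note the PMD adjusts a non-power-of-two descriptor count and derives its Rx
buffer size from the mempool data room minus RTE_PKTMBUF_HEADROOM:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    static int
    setup_one_rxq(uint16_t port_id, uint16_t queue_id, unsigned int socket_id)
    {
            struct rte_mempool *mp;

            /* 2048 bytes of data room after the headroom matches the
             * driver's default Rx buffer size. */
            mp = rte_pktmbuf_pool_create("example_rx_pool", 4096, 256, 0,
                                         RTE_MBUF_DEFAULT_BUF_SIZE, socket_id);
            if (mp == NULL)
                    return -1;

            /* 1024 descriptors, default rx_conf (NULL). */
            return rte_eth_rx_queue_setup(port_id, queue_id, 1024, socket_id,
                                          NULL, mp);
    }
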
--- drivers/net/sssnic/meson.build | 1 + drivers/net/sssnic/sssnic_ethdev.c | 4 + drivers/net/sssnic/sssnic_ethdev.h | 2 + drivers/net/sssnic/sssnic_ethdev_rx.c | 420 ++++++++++++++++++++++++++ drivers/net/sssnic/sssnic_ethdev_rx.h | 24 ++ 5 files changed, 451 insertions(+) create mode 100644 drivers/net/sssnic/sssnic_ethdev_rx.c create mode 100644 drivers/net/sssnic/sssnic_ethdev_rx.h diff --git a/drivers/net/sssnic/meson.build b/drivers/net/sssnic/meson.build index 12e967722a..7c3516a279 100644 --- a/drivers/net/sssnic/meson.build +++ b/drivers/net/sssnic/meson.build @@ -19,4 +19,5 @@ objs = [base_objs] sources = files( 'sssnic_ethdev.c', 'sssnic_ethdev_link.c', + 'sssnic_ethdev_rx.c', ) diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index 1e2b5c0450..f98510a55d 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -10,6 +10,7 @@ #include "base/sssnic_api.h" #include "sssnic_ethdev.h" #include "sssnic_ethdev_link.h" +#include "sssnic_ethdev_rx.h" static int sssnic_ethdev_infos_get(struct rte_eth_dev *ethdev, @@ -335,6 +336,7 @@ sssnic_ethdev_release(struct rte_eth_dev *ethdev) struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); sssnic_ethdev_link_intr_disable(ethdev); + sssnic_ethdev_rx_queue_all_release(ethdev); sssnic_ethdev_mac_addrs_clean(ethdev); sssnic_hw_shutdown(hw); rte_free(hw); @@ -350,6 +352,8 @@ static const struct eth_dev_ops sssnic_ethdev_ops = { .mac_addr_remove = sssnic_ethdev_mac_addr_remove, .mac_addr_add = sssnic_ethdev_mac_addr_add, .set_mc_addr_list = sssnic_ethdev_set_mc_addr_list, + .rx_queue_setup = sssnic_ethdev_rx_queue_setup, + .rx_queue_release = sssnic_ethdev_rx_queue_release, }; static int diff --git a/drivers/net/sssnic/sssnic_ethdev.h b/drivers/net/sssnic/sssnic_ethdev.h index 636b3fd04c..51740413c6 100644 --- a/drivers/net/sssnic/sssnic_ethdev.h +++ b/drivers/net/sssnic/sssnic_ethdev.h @@ -56,6 +56,8 @@ #define SSSNIC_ETHDEV_MAX_NUM_UC_MAC 128 #define SSSNIC_ETHDEV_MAX_NUM_MC_MAC 2048 +#define SSSNIC_ETHDEV_DEF_RX_FREE_THRESH 32 + struct sssnic_netdev { void *hw; struct rte_ether_addr *mcast_addrs; diff --git a/drivers/net/sssnic/sssnic_ethdev_rx.c b/drivers/net/sssnic/sssnic_ethdev_rx.c new file mode 100644 index 0000000000..0f489cf82f --- /dev/null +++ b/drivers/net/sssnic/sssnic_ethdev_rx.c @@ -0,0 +1,420 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. 
+ */ + +#include +#include +#include +#include + +#include "sssnic_log.h" +#include "sssnic_ethdev.h" +#include "sssnic_ethdev_rx.h" +#include "base/sssnic_hw.h" +#include "base/sssnic_workq.h" +#include "base/sssnic_api.h" +#include "base/sssnic_misc.h" + +/* hardware format of rx descriptor*/ +struct sssnic_ethdev_rx_desc { + /* status field */ + union { + uint32_t dword0; + struct { + uint32_t ip_csum_err : 1; + uint32_t tcp_csum_err : 1; + uint32_t udp_csum_err : 1; + uint32_t igmp_csum_err : 1; + uint32_t icmpv4_csum_err : 1; + uint32_t icmpv6_csum_err : 1; + uint32_t sctp_crc_err : 1; + uint32_t hw_crc_err : 1; + uint32_t other_err : 1; + uint32_t err_resvd0 : 7; + uint32_t lro_num : 8; + uint32_t resvd1 : 1; + uint32_t lro_push : 1; + uint32_t lro_enter : 1; + uint32_t lro_intr : 1; + uint32_t flush : 1; + uint32_t decry : 1; + uint32_t bp_en : 1; + uint32_t done : 1; + }; + struct { + uint32_t status_err : 16; + uint32_t status_rest : 16; + }; + }; + + /* VLAN and length field */ + union { + uint32_t dword1; + struct { + uint32_t vlan : 16; + uint32_t len : 16; + }; + }; + + /* offload field */ + union { + uint32_t dword2; + struct { + uint32_t pkt_type : 12; + uint32_t dword2_resvd0 : 9; + uint32_t vlan_en : 1; + uint32_t dword2_resvd1 : 2; + uint32_t rss_type : 8; + }; + }; + + /* rss hash field */ + union { + uint32_t dword3; + uint32_t rss_hash; + }; + + uint32_t dword4; + uint32_t dword5; + uint32_t dword6; + uint32_t dword7; +} __rte_cache_aligned; + +struct sssnic_ethdev_rx_entry { + struct rte_mbuf *pktmbuf; +}; + +struct sssnic_ethdev_rxq { + struct rte_eth_dev *ethdev; + struct sssnic_workq *workq; + volatile struct sssnic_ethdev_rx_desc *desc; + const struct rte_memzone *pi_mz; + const struct rte_memzone *desc_mz; + struct rte_mempool *mp; + struct sssnic_ethdev_rx_entry *rxe; + volatile uint16_t *hw_pi_addr; + uint8_t *doorbell; + struct sssnic_ethdev_rxq_stats stats; + uint16_t port; + uint16_t qid; + uint16_t depth; + uint16_t rx_buf_size; + uint16_t rx_free_thresh; + struct { + uint16_t enable : 1; + uint16_t msix_id : 15; + } intr; + uint32_t resvd0; +} __rte_cache_aligned; + +/* Hardware format of rxq entry */ +struct sssnic_ethdev_rxq_entry { + uint32_t buf_hi_addr; + uint32_t buf_lo_addr; + uint32_t desc_hi_addr; + uint32_t desc_lo_addr; +}; + +#define SSSNIC_ETHDEV_RX_ENTRY_SZ_BITS 4 +#define SSSNIC_ETHDEV_RXQ_ENTRY_SZ (RTE_BIT32(SSSNIC_ETHDEV_RX_ENTRY_SZ_BITS)) + +#define SSSNIC_ETHDEV_RXQ_ENTRY(rxq, idx) \ + SSSNIC_WORKQ_ENTRY_CAST((rxq)->workq, idx, \ + struct sssnic_ethdev_rxq_entry) + +static const uint16_t sssnic_ethdev_rx_buf_size_tbl[] = { 32, 64, 96, 128, 192, + 256, 384, 512, 768, 1024, 1536, 2048, 3072, 4096, 8192, 16384 }; + +#define SSSNIC_ETHDEV_RX_BUF_SIZE_COUNT (RTE_DIM(sssnic_ethdev_rx_buf_size_tbl)) + +#define SSSNIC_ETHDEV_MIN_RX_BUF_SIZE (sssnic_ethdev_rx_buf_size_tbl[0]) +#define SSSNIC_ETHDEV_MAX_RX_BUF_SIZE \ + (sssnic_ethdev_rx_buf_size_tbl[SSSNIC_ETHDEV_RX_BUF_SIZE_COUNT - 1]) + +#define SSSNIC_ETHDEV_DEF_RX_BUF_SIZE_IDX 11 /* 2048 Bytes */ + +/* Doorbell offset 8192 */ +#define SSSNIC_ETHDEV_RXQ_DB_OFFSET 0x2000 + +struct sssnic_ethdev_rxq_doorbell { + union { + uint64_t u64; + struct { + union { + uint32_t dword0; + struct { + uint32_t qid : 13; + uint32_t resvd0 : 9; + uint32_t nf : 1; + uint32_t cf : 1; + uint32_t cos : 3; + uint32_t service : 5; + }; + }; + union { + uint32_t dword1; + struct { + uint32_t pi_hi : 8; + uint32_t resvd1 : 24; + }; + }; + }; + }; +}; + +static inline uint16_t +sssnic_ethdev_rxq_num_used_entries(struct 
sssnic_ethdev_rxq *rxq) +{ + return sssnic_workq_num_used_entries(rxq->workq); +} + +static inline uint16_t +sssnic_ethdev_rxq_ci_get(struct sssnic_ethdev_rxq *rxq) +{ + return sssnic_workq_ci_get(rxq->workq); +} + +static inline void +sssnic_ethdev_rxq_consume(struct sssnic_ethdev_rxq *rxq, uint16_t num_entries) +{ + sssnic_workq_consume_fast(rxq->workq, num_entries); +} + +static void +sssnic_ethdev_rx_buf_size_optimize(uint32_t orig_size, uint16_t *new_size) +{ + uint32_t i; + uint16_t size; + + if (orig_size >= SSSNIC_ETHDEV_MAX_RX_BUF_SIZE) { + *new_size = SSSNIC_ETHDEV_MAX_RX_BUF_SIZE; + return; + } + + size = SSSNIC_ETHDEV_MIN_RX_BUF_SIZE; + for (i = 0; i < SSSNIC_ETHDEV_RX_BUF_SIZE_COUNT; i++) { + if (orig_size == sssnic_ethdev_rx_buf_size_tbl[i]) { + *new_size = sssnic_ethdev_rx_buf_size_tbl[i]; + return; + } + + if (orig_size < sssnic_ethdev_rx_buf_size_tbl[i]) { + *new_size = size; + return; + } + size = sssnic_ethdev_rx_buf_size_tbl[i]; + } + *new_size = size; +} + +static void +sssnic_ethdev_rxq_entries_init(struct sssnic_ethdev_rxq *rxq) +{ + struct sssnic_ethdev_rxq_entry *rqe; + rte_iova_t rxd_iova; + int i; + + rxd_iova = rxq->desc_mz->iova; + + for (i = 0; i < rxq->depth; i++) { + rqe = SSSNIC_ETHDEV_RXQ_ENTRY(rxq, i); + rqe->desc_hi_addr = SSSNIC_UPPER_32_BITS(rxd_iova); + rqe->desc_lo_addr = SSSNIC_LOWER_32_BITS(rxd_iova); + rxd_iova += sizeof(struct sssnic_ethdev_rx_desc); + } +} + +int +sssnic_ethdev_rx_queue_setup(struct rte_eth_dev *ethdev, uint16_t rx_queue_id, + uint16_t nb_rx_desc, unsigned int socket_id, + const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mb_pool) +{ + int ret; + struct sssnic_hw *hw; + struct sssnic_ethdev_rxq *rxq; + uint16_t q_depth; + uint16_t rx_buf_size; + uint16_t rx_free_thresh; + char m_name[RTE_MEMZONE_NAMESIZE]; + + hw = SSSNIC_ETHDEV_TO_HW(ethdev); + + q_depth = nb_rx_desc; + /* Adjust q_depth to power of 2 */ + if (!rte_is_power_of_2(nb_rx_desc)) { + q_depth = 1 << rte_log2_u32(nb_rx_desc); + PMD_DRV_LOG(NOTICE, + "nb_rx_desc(%u) is not power of 2, adjust to %u", + nb_rx_desc, q_depth); + } + + if (q_depth > SSSNIC_ETHDEV_MAX_NUM_Q_DESC) { + PMD_DRV_LOG(ERR, "nb_rx_desc(%u) is out of range(max. 
%u)", + q_depth, SSSNIC_ETHDEV_MAX_NUM_Q_DESC); + return -EINVAL; + } + + rx_buf_size = + rte_pktmbuf_data_room_size(mb_pool) - RTE_PKTMBUF_HEADROOM; + if (rx_buf_size < SSSNIC_ETHDEV_MIN_RX_BUF_SIZE) { + PMD_DRV_LOG(ERR, + "Bad data_room_size(%u), must be great than %u", + rte_pktmbuf_data_room_size(mb_pool), + RTE_PKTMBUF_HEADROOM + SSSNIC_ETHDEV_MIN_RX_BUF_SIZE); + return -EINVAL; + } + sssnic_ethdev_rx_buf_size_optimize(rx_buf_size, &rx_buf_size); + + if (rx_conf->rx_free_thresh > 0) + rx_free_thresh = rx_conf->rx_free_thresh; + else + rx_free_thresh = SSSNIC_ETHDEV_DEF_RX_FREE_THRESH; + if (rx_free_thresh >= q_depth - 1) { + PMD_DRV_LOG(ERR, + "rx_free_thresh(%u) must be less than nb_rx_desc(%u)-1", + rx_free_thresh, q_depth); + return -EINVAL; + } + + snprintf(m_name, sizeof(m_name), "sssnic_p%u_rq%u", + ethdev->data->port_id, rx_queue_id); + + rxq = rte_zmalloc_socket(m_name, sizeof(struct sssnic_ethdev_rxq), + RTE_CACHE_LINE_SIZE, (int)socket_id); + if (rxq == NULL) { + PMD_DRV_LOG(ERR, + "Failed to allocate memory for sssnic port %u, rxq %u", + ethdev->data->port_id, rx_queue_id); + return -ENOMEM; + } + + rxq->ethdev = ethdev; + rxq->mp = mb_pool; + rxq->doorbell = hw->db_base_addr + SSSNIC_ETHDEV_RXQ_DB_OFFSET; + rxq->port = ethdev->data->port_id; + rxq->qid = rx_queue_id; + rxq->depth = q_depth; + rxq->rx_buf_size = rx_buf_size; + rxq->rx_free_thresh = rx_free_thresh; + + snprintf(m_name, sizeof(m_name), "sssnic_p%u_rq%u_wq", + ethdev->data->port_id, rx_queue_id); + rxq->workq = sssnic_workq_new(m_name, (int)socket_id, + SSSNIC_ETHDEV_RXQ_ENTRY_SZ, q_depth); + if (rxq->workq == NULL) { + PMD_DRV_LOG(ERR, + "Failed to create workq for sssnic port %u, rxq %u", + ethdev->data->port_id, rx_queue_id); + ret = -ENOMEM; + goto new_workq_fail; + } + + rxq->pi_mz = rte_eth_dma_zone_reserve(ethdev, "sssnic_rxpi_mz", + rxq->qid, RTE_PGSIZE_4K, RTE_CACHE_LINE_SIZE, (int)socket_id); + if (rxq->pi_mz == NULL) { + PMD_DRV_LOG(ERR, + "Failed to alloc DMA memory for rx pi of sssnic port %u rxq %u", + ethdev->data->port_id, rx_queue_id); + ret = -ENOMEM; + goto alloc_pi_mz_fail; + } + rxq->hw_pi_addr = (uint16_t *)rxq->pi_mz->addr; + + rxq->desc_mz = rte_eth_dma_zone_reserve(ethdev, "sssnic_rxd_mz", + rxq->qid, sizeof(struct sssnic_ethdev_rx_desc) * rxq->depth, + RTE_CACHE_LINE_SIZE, (int)socket_id); + if (rxq->pi_mz == NULL) { + PMD_DRV_LOG(ERR, + "Failed to alloc DMA memory for rx desc of sssnic port %u rxq %u", + ethdev->data->port_id, rx_queue_id); + ret = -ENOMEM; + goto alloc_rxd_mz_fail; + } + rxq->desc = (struct sssnic_ethdev_rx_desc *)rxq->desc_mz->addr; + + snprintf(m_name, sizeof(m_name), "sssnic_p%u_rq%u_rxe", + ethdev->data->port_id, rx_queue_id); + + rxq->rxe = rte_zmalloc_socket(m_name, + sizeof(struct sssnic_ethdev_rx_entry) * rxq->depth, + RTE_CACHE_LINE_SIZE, (int)socket_id); + if (rxq->rxe == NULL) { + PMD_DRV_LOG(ERR, + "Failed to alloc memory for rx entries of sssnic port %u rxq %u", + ethdev->data->port_id, rx_queue_id); + ret = -ENOMEM; + goto alloc_pktmbuf_fail; + } + + sssnic_ethdev_rxq_entries_init(rxq); + + ethdev->data->rx_queues[rx_queue_id] = rxq; + + return 0; + +alloc_pktmbuf_fail: + rte_memzone_free(rxq->desc_mz); +alloc_rxd_mz_fail: + rte_memzone_free(rxq->pi_mz); +alloc_pi_mz_fail: + sssnic_workq_destroy(rxq->workq); +new_workq_fail: + rte_free(rxq); + + return ret; +} + +static void +sssnic_ethdev_rxq_pktmbufs_release(struct sssnic_ethdev_rxq *rxq) +{ + struct sssnic_ethdev_rx_entry *rxe; + volatile struct sssnic_ethdev_rx_desc *rxd; + uint16_t num_entries; + 
uint16_t ci; + uint16_t i; + + num_entries = sssnic_ethdev_rxq_num_used_entries(rxq); + for (i = 0; i < num_entries; i++) { + ci = sssnic_ethdev_rxq_ci_get(rxq); + rxd = &rxq->desc[ci]; + rxd->dword0 = 0; + rxe = &rxq->rxe[ci]; + rte_pktmbuf_free(rxe->pktmbuf); + rxe->pktmbuf = NULL; + sssnic_ethdev_rxq_consume(rxq, 1); + } +} + +static void +sssnic_ethdev_rxq_free(struct sssnic_ethdev_rxq *rxq) +{ + if (rxq == NULL) + return; + + sssnic_ethdev_rxq_pktmbufs_release(rxq); + rte_free(rxq->rxe); + rte_memzone_free(rxq->desc_mz); + rte_memzone_free(rxq->pi_mz); + sssnic_workq_destroy(rxq->workq); + rte_free(rxq); +} + +void +sssnic_ethdev_rx_queue_release(struct rte_eth_dev *ethdev, uint16_t queue_id) +{ + struct sssnic_ethdev_rxq *rxq = ethdev->data->rx_queues[queue_id]; + + if (rxq == NULL) + return; + sssnic_ethdev_rxq_free(rxq); + ethdev->data->rx_queues[queue_id] = NULL; +} + +void +sssnic_ethdev_rx_queue_all_release(struct rte_eth_dev *ethdev) +{ + uint16_t qid; + + for (qid = 0; qid < ethdev->data->nb_rx_queues; qid++) + sssnic_ethdev_rx_queue_release(ethdev, qid); +} diff --git a/drivers/net/sssnic/sssnic_ethdev_rx.h b/drivers/net/sssnic/sssnic_ethdev_rx.h new file mode 100644 index 0000000000..dc41811a2f --- /dev/null +++ b/drivers/net/sssnic/sssnic_ethdev_rx.h @@ -0,0 +1,24 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. + */ + +#ifndef _SSSNIC_ETHDEV_RX_H_ +#define _SSSNIC_ETHDEV_RX_H_ + +struct sssnic_ethdev_rxq_stats { + uint64_t packets; + uint64_t bytes; + uint64_t csum_errors; + uint64_t other_errors; + uint64_t nombuf; + uint64_t burst; +}; + +int sssnic_ethdev_rx_queue_setup(struct rte_eth_dev *ethdev, + uint16_t rx_queue_id, uint16_t nb_rx_desc, unsigned int socket_id, + const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mb_pool); +void sssnic_ethdev_rx_queue_release(struct rte_eth_dev *ethdev, + uint16_t queue_id); +void sssnic_ethdev_rx_queue_all_release(struct rte_eth_dev *ethdev); + +#endif From patchwork Fri Sep 1 09:34:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131056 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 16E014221E; Fri, 1 Sep 2023 11:37:17 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 9E94E402EA; Fri, 1 Sep 2023 11:35:59 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.unionmem.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id 66162402D0 for ; Fri, 1 Sep 2023 11:35:56 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZR7C069836; Fri, 1 Sep 2023 17:35:29 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:28 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 15/32] net/sssnic: support Tx queue setup and release Date: Fri, 1 Sep 2023 17:34:57 +0800 Message-ID: <20230901093514.224824-16-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: 
<20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZR7C069836 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- v2: * Removed error.h from including files. v4: * Fixed coding style issue of REPEATED_WORD. --- drivers/net/sssnic/meson.build | 1 + drivers/net/sssnic/sssnic_ethdev.c | 4 + drivers/net/sssnic/sssnic_ethdev.h | 1 + drivers/net/sssnic/sssnic_ethdev_tx.c | 354 ++++++++++++++++++++++++++ drivers/net/sssnic/sssnic_ethdev_tx.h | 27 ++ 5 files changed, 387 insertions(+) create mode 100644 drivers/net/sssnic/sssnic_ethdev_tx.c create mode 100644 drivers/net/sssnic/sssnic_ethdev_tx.h diff --git a/drivers/net/sssnic/meson.build b/drivers/net/sssnic/meson.build index 7c3516a279..0c6e21310d 100644 --- a/drivers/net/sssnic/meson.build +++ b/drivers/net/sssnic/meson.build @@ -20,4 +20,5 @@ sources = files( 'sssnic_ethdev.c', 'sssnic_ethdev_link.c', 'sssnic_ethdev_rx.c', + 'sssnic_ethdev_tx.c', ) diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index f98510a55d..732fddfcf7 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -11,6 +11,7 @@ #include "sssnic_ethdev.h" #include "sssnic_ethdev_link.h" #include "sssnic_ethdev_rx.h" +#include "sssnic_ethdev_tx.h" static int sssnic_ethdev_infos_get(struct rte_eth_dev *ethdev, @@ -336,6 +337,7 @@ sssnic_ethdev_release(struct rte_eth_dev *ethdev) struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); sssnic_ethdev_link_intr_disable(ethdev); + sssnic_ethdev_tx_queue_all_release(ethdev); sssnic_ethdev_rx_queue_all_release(ethdev); sssnic_ethdev_mac_addrs_clean(ethdev); sssnic_hw_shutdown(hw); @@ -354,6 +356,8 @@ static const struct eth_dev_ops sssnic_ethdev_ops = { .set_mc_addr_list = sssnic_ethdev_set_mc_addr_list, .rx_queue_setup = sssnic_ethdev_rx_queue_setup, .rx_queue_release = sssnic_ethdev_rx_queue_release, + .tx_queue_setup = sssnic_ethdev_tx_queue_setup, + .tx_queue_release = sssnic_ethdev_tx_queue_release, }; static int diff --git a/drivers/net/sssnic/sssnic_ethdev.h b/drivers/net/sssnic/sssnic_ethdev.h index 51740413c6..ab832d179f 100644 --- a/drivers/net/sssnic/sssnic_ethdev.h +++ b/drivers/net/sssnic/sssnic_ethdev.h @@ -57,6 +57,7 @@ #define SSSNIC_ETHDEV_MAX_NUM_MC_MAC 2048 #define SSSNIC_ETHDEV_DEF_RX_FREE_THRESH 32 +#define SSSNIC_ETHDEV_DEF_TX_FREE_THRESH 32 struct sssnic_netdev { void *hw; diff --git a/drivers/net/sssnic/sssnic_ethdev_tx.c b/drivers/net/sssnic/sssnic_ethdev_tx.c new file mode 100644 index 0000000000..e80bf8c396 --- /dev/null +++ b/drivers/net/sssnic/sssnic_ethdev_tx.c @@ -0,0 +1,354 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. 
+ */ + +#include +#include +#include + +#include "sssnic_log.h" +#include "sssnic_ethdev.h" +#include "sssnic_ethdev_tx.h" +#include "base/sssnic_hw.h" +#include "base/sssnic_workq.h" +#include "base/sssnic_api.h" +#include "base/sssnic_misc.h" + +/* Hardware format of tx desc */ +struct sssnic_ethdev_tx_desc { + union { + uint32_t dw0; + struct { + /* length of the first tx seg data */ + uint32_t data_len : 18; + uint32_t dw0_resvd0 : 1; + /* number of tx segments in tx entry */ + uint32_t num_segs : 8; + /* offload desc enable */ + uint32_t offload_en : 1; + /* data format, use SGL if 0 else inline */ + uint32_t data_fmt : 1; + /* DN, always set 0 */ + uint32_t dw0_resvd1 : 1; + /* refer sssnic_ethdev_txq_entry_type */ + uint32_t entry_type : 1; + uint32_t owner : 1; + }; + }; + union { + uint32_t dw1; + struct { + uint32_t pkt_type : 2; + uint32_t payload_off : 8; + /* UFO, not used, always set 0 */ + uint32_t dw1_resvd0 : 1; + uint32_t tso_en : 1; + /* TCP/UDP checksum offload enable flag */ + uint32_t csum_en : 1; + uint32_t mss : 14; + uint32_t sctp_crc_en : 1; + /* set 1 if entry type is not compact else set 0 */ + uint32_t uc : 1; + /* PRI, not used, always set 0 */ + uint32_t dw1_resvd1 : 3; + }; + }; + union { + uint32_t dw2; + /* high 32bit of DMA address of the first tx seg data */ + uint32_t data_addr_hi; + }; + union { + uint32_t dw3; + /* low 32bit of DMA address of the first tx seg data */ + uint32_t data_addr_lo; + }; +}; + +/* Hardware format of tx offload */ +struct sssnic_ethdev_tx_offload { + union { + uint32_t dw0; + struct { + uint32_t dw0_resvd0 : 19; + /* indicate a tunnel packet or normal packet */ + uint32_t tunnel_flag : 1; + uint32_t dw0_resvd1 : 2; + /* not used, always set 0 */ + uint32_t esp_next_proto : 2; + /* indicate inner L4 csum offload enable */ + uint32_t inner_l4_csum_en : 1; + /* indicate inner L3 csum offload enable */ + uint32_t inner_l3_csum_en : 1; + /* indicate inner L4 header with pseudo csum */ + uint32_t inner_l4_pseudo_csum : 1; + /* indicate outer L4 csum offload enable*/ + uint32_t l4_csum_en : 1; + /* indicate outer L3 csum offload enable*/ + uint32_t l3_csum_en : 1; + /* indicate outer L4 header with pseudo csum */ + uint32_t l4_pseudo_csum : 1; + /* indicate ESP offload */ + uint32_t esp_en : 1; + /* indicate IPSEC offload */ + uint32_t ipsec_en : 1; + }; + }; + uint32_t dw1; + uint32_t dw2; + union { + uint32_t dw3; + struct { + uint32_t vlan_tag : 16; + /* Always set 0 */ + uint32_t vlan_type : 3; + /* indicate VLAN offload enable */ + uint32_t vlan_en : 1; + uint32_t dw3_resvd0 : 12; + }; + }; +}; + +/* Hardware format of tx seg */ +struct sssnic_ethdev_tx_seg { + uint32_t len; + uint32_t resvd; + uint32_t buf_hi_addr; + uint32_t buf_lo_addr; +}; + +/* hardware format of txq doobell register*/ +struct sssnic_ethdev_txq_doorbell { + union { + uint64_t u64; + struct { + union { + uint32_t dword0; + struct { + uint32_t qid : 13; + uint32_t resvd0 : 9; + uint32_t nf : 1; + uint32_t cf : 1; + uint32_t cos : 3; + uint32_t service : 5; + }; + }; + union { + uint32_t dword1; + struct { + uint32_t pi_hi : 8; + uint32_t resvd1 : 24; + }; + }; + }; + }; +}; + +struct sssnic_ethdev_tx_entry { + struct rte_mbuf *pktmbuf; + uint16_t num_workq_entries; +}; + +struct sssnic_ethdev_txq { + struct rte_eth_dev *ethdev; + struct sssnic_workq *workq; + const struct rte_memzone *ci_mz; + volatile uint16_t *hw_ci_addr; + uint8_t *doorbell; + struct sssnic_ethdev_tx_entry *txe; + struct sssnic_ethdev_txq_stats stats; + uint16_t port; + uint16_t qid; + 
uint16_t depth; + uint16_t idx_mask; + uint16_t tx_free_thresh; + uint8_t owner; + uint8_t cos; +} __rte_cache_aligned; + +enum sssnic_ethdev_txq_entry_type { + SSSNIC_ETHDEV_TXQ_ENTRY_COMPACT = 0, + SSSNIC_ETHDEV_TXQ_ENTRY_EXTEND = 1, +}; + +#define SSSNIC_ETHDEV_TXQ_ENTRY_SZ_BITS 4 +#define SSSNIC_ETHDEV_TXQ_ENTRY_SZ (RTE_BIT32(SSSNIC_ETHDEV_TXQ_ENTRY_SZ_BITS)) + +#define SSSNIC_ETHDEV_TX_HW_CI_SIZE 64 + +/* Doorbell offset 4096 */ +#define SSSNIC_ETHDEV_TXQ_DB_OFFSET 0x1000 + +static inline uint16_t +sssnic_ethdev_txq_num_used_entries(struct sssnic_ethdev_txq *txq) +{ + return sssnic_workq_num_used_entries(txq->workq); +} + +static inline uint16_t +sssnic_ethdev_txq_ci_get(struct sssnic_ethdev_txq *txq) +{ + return sssnic_workq_ci_get(txq->workq); +} + +static inline void +sssnic_ethdev_txq_consume(struct sssnic_ethdev_txq *txq, uint16_t num_entries) +{ + sssnic_workq_consume_fast(txq->workq, num_entries); +} + +int +sssnic_ethdev_tx_queue_setup(struct rte_eth_dev *ethdev, uint16_t tx_queue_id, + uint16_t nb_tx_desc, unsigned int socket_id, + const struct rte_eth_txconf *tx_conf) +{ + int ret; + struct sssnic_hw *hw; + struct sssnic_ethdev_txq *txq; + uint16_t q_depth; + uint16_t tx_free_thresh; + char m_name[RTE_MEMZONE_NAMESIZE]; + + hw = SSSNIC_ETHDEV_TO_HW(ethdev); + + q_depth = nb_tx_desc; + /* Adjust q_depth to power of 2 */ + if (!rte_is_power_of_2(nb_tx_desc)) { + q_depth = 1 << rte_log2_u32(nb_tx_desc); + PMD_DRV_LOG(NOTICE, + "nb_tx_desc(%u) is not power of 2, adjust to %u", + nb_tx_desc, q_depth); + } + + if (q_depth > SSSNIC_ETHDEV_MAX_NUM_Q_DESC) { + PMD_DRV_LOG(ERR, "nb_tx_desc(%u) is out of range(max. %u)", + q_depth, SSSNIC_ETHDEV_MAX_NUM_Q_DESC); + return -EINVAL; + } + + if (tx_conf->tx_free_thresh > 0) + tx_free_thresh = tx_conf->tx_free_thresh; + else + tx_free_thresh = SSSNIC_ETHDEV_DEF_TX_FREE_THRESH; + if (tx_free_thresh >= q_depth - 1) { + PMD_DRV_LOG(ERR, + "tx_free_thresh(%u) must be less than nb_tx_desc(%u)-1", + tx_free_thresh, q_depth); + return -EINVAL; + } + + snprintf(m_name, sizeof(m_name), "sssnic_p%u_sq%u", + ethdev->data->port_id, tx_queue_id); + + txq = rte_zmalloc_socket(m_name, sizeof(struct sssnic_ethdev_txq), + RTE_CACHE_LINE_SIZE, (int)socket_id); + + if (txq == NULL) { + PMD_DRV_LOG(ERR, + "Failed to allocate memory for sssnic port %u, txq %u", + ethdev->data->port_id, tx_queue_id); + return -ENOMEM; + } + + txq->ethdev = ethdev; + txq->depth = q_depth; + txq->port = ethdev->data->port_id; + txq->qid = tx_queue_id; + txq->tx_free_thresh = tx_free_thresh; + txq->idx_mask = q_depth - 1; + txq->owner = 1; + txq->doorbell = hw->db_base_addr + SSSNIC_ETHDEV_TXQ_DB_OFFSET; + + snprintf(m_name, sizeof(m_name), "sssnic_p%u_sq%u_wq", + ethdev->data->port_id, tx_queue_id); + + txq->workq = sssnic_workq_new(m_name, (int)socket_id, + SSSNIC_ETHDEV_TXQ_ENTRY_SZ, q_depth); + if (txq->workq == NULL) { + PMD_DRV_LOG(ERR, + "Failed to create workq for sssnic port %u, txq %u", + ethdev->data->port_id, tx_queue_id); + ret = -ENOMEM; + goto new_workq_fail; + } + + txq->ci_mz = rte_eth_dma_zone_reserve(ethdev, "sssnic_txci_mz", + txq->qid, SSSNIC_ETHDEV_TX_HW_CI_SIZE, + SSSNIC_ETHDEV_TX_HW_CI_SIZE, (int)socket_id); + if (txq->ci_mz == NULL) { + PMD_DRV_LOG(ERR, + "Failed to alloc DMA memory for tx ci of sssnic port %u rxq %u", + ethdev->data->port_id, tx_queue_id); + ret = -ENOMEM; + goto alloc_ci_mz_fail; + } + txq->hw_ci_addr = (volatile uint16_t *)txq->ci_mz->addr; + + snprintf(m_name, sizeof(m_name), "sssnic_p%u_sq%u_txe", + ethdev->data->port_id, tx_queue_id); 
+ txq->txe = rte_zmalloc_socket(m_name, + sizeof(struct sssnic_ethdev_tx_entry) * q_depth, + RTE_CACHE_LINE_SIZE, (int)socket_id); + if (txq->txe == NULL) { + PMD_DRV_LOG(ERR, "Failed to allocate memory for %s", m_name); + ret = -ENOMEM; + goto alloc_txe_fail; + } + + ethdev->data->tx_queues[tx_queue_id] = txq; + + return 0; + +alloc_txe_fail: + rte_memzone_free(txq->ci_mz); +alloc_ci_mz_fail: + sssnic_workq_destroy(txq->workq); +new_workq_fail: + rte_free(txq); + + return ret; +} + +static void +sssnic_ethdev_txq_pktmbufs_release(struct sssnic_ethdev_txq *txq) +{ + struct sssnic_ethdev_tx_entry *txe; + uint16_t num_entries; + uint16_t ci; + uint16_t i; + + num_entries = sssnic_ethdev_txq_num_used_entries(txq); + for (i = 0; i < num_entries; i++) { + ci = sssnic_ethdev_txq_ci_get(txq); + txe = &txq->txe[ci]; + rte_pktmbuf_free(txe->pktmbuf); + txe->pktmbuf = NULL; + sssnic_ethdev_txq_consume(txq, txe->num_workq_entries); + txe->num_workq_entries = 0; + } +} + +void +sssnic_ethdev_tx_queue_release(struct rte_eth_dev *ethdev, uint16_t queue_id) +{ + struct sssnic_ethdev_txq *txq = ethdev->data->tx_queues[queue_id]; + + if (txq == NULL) + return; + + sssnic_ethdev_txq_pktmbufs_release(txq); + rte_free(txq->txe); + rte_memzone_free(txq->ci_mz); + sssnic_workq_destroy(txq->workq); + rte_free(txq); + ethdev->data->tx_queues[queue_id] = NULL; +} + +void +sssnic_ethdev_tx_queue_all_release(struct rte_eth_dev *ethdev) +{ + uint16_t qid; + + for (qid = 0; qid < ethdev->data->nb_tx_queues; qid++) + sssnic_ethdev_tx_queue_release(ethdev, qid); +} diff --git a/drivers/net/sssnic/sssnic_ethdev_tx.h b/drivers/net/sssnic/sssnic_ethdev_tx.h new file mode 100644 index 0000000000..bd1d721e37 --- /dev/null +++ b/drivers/net/sssnic/sssnic_ethdev_tx.h @@ -0,0 +1,27 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. 
+ */ + +#ifndef _SSSNIC_ETHDEV_TX_H_ +#define _SSSNIC_ETHDEV_TX_H_ + +struct sssnic_ethdev_txq_stats { + uint64_t packets; + uint64_t bytes; + uint64_t nobuf; + uint64_t zero_len_segs; + uint64_t too_large_pkts; + uint64_t too_many_segs; + uint64_t null_segs; + uint64_t offload_errors; + uint64_t burst; +}; + +int sssnic_ethdev_tx_queue_setup(struct rte_eth_dev *ethdev, + uint16_t tx_queue_id, uint16_t nb_tx_desc, unsigned int socket_id, + const struct rte_eth_txconf *tx_conf); +void sssnic_ethdev_tx_queue_release(struct rte_eth_dev *ethdev, + uint16_t queue_id); +void sssnic_ethdev_tx_queue_all_release(struct rte_eth_dev *ethdev); + +#endif /* _SSSNIC_ETHDEV_TX_H_ */ From patchwork Fri Sep 1 09:34:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131059 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id DD7624221E; Fri, 1 Sep 2023 11:37:40 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C1B1B4067B; Fri, 1 Sep 2023 11:36:03 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.unionmem.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id 1CC87402D6 for ; Fri, 1 Sep 2023 11:35:57 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZR7D069836; Fri, 1 Sep 2023 17:35:29 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:29 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 16/32] net/sssnic: support Rx queue start and stop Date: Fri, 1 Sep 2023 17:34:58 +0800 Message-ID: <20230901093514.224824-17-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZR7D069836 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- drivers/net/sssnic/base/sssnic_api.c | 63 +++++ drivers/net/sssnic/base/sssnic_api.h | 2 + drivers/net/sssnic/base/sssnic_cmd.h | 49 ++++ drivers/net/sssnic/sssnic_ethdev.c | 2 + drivers/net/sssnic/sssnic_ethdev.h | 2 + drivers/net/sssnic/sssnic_ethdev_rx.c | 332 ++++++++++++++++++++++++++ drivers/net/sssnic/sssnic_ethdev_rx.h | 4 + 7 files changed, 454 insertions(+) diff --git a/drivers/net/sssnic/base/sssnic_api.c b/drivers/net/sssnic/base/sssnic_api.c index 3bb31009c5..3050d573bf 100644 --- a/drivers/net/sssnic/base/sssnic_api.c +++ b/drivers/net/sssnic/base/sssnic_api.c @@ -11,6 +11,7 @@ #include "sssnic_hw.h" #include "sssnic_cmd.h" #include "sssnic_mbox.h" +#include "sssnic_ctrlq.h" #include "sssnic_api.h" int @@ -434,3 +435,65 @@ sssnic_netif_enable_set(struct 
sssnic_hw *hw, uint8_t state) return 0; } + +int +sssnic_port_enable_set(struct sssnic_hw *hw, bool state) +{ + int ret; + struct sssnic_port_enable_set_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + + if (hw == NULL) + return -EINVAL; + + memset(&cmd, 0, sizeof(cmd)); + cmd.function = SSSNIC_FUNC_IDX(hw); + cmd.state = state ? 1 : 0; + cmd_len = sizeof(cmd); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_SET_PORT_ENABLE_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_SET_PORT_ENABLE_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +int +sssnic_rxq_flush(struct sssnic_hw *hw, uint16_t qid) +{ + struct sssnic_ctrlq_cmd cmd; + struct sssnic_rxq_flush_cmd data; + int ret; + + data.u32 = 0; + data.qid = qid; + data.u32 = rte_cpu_to_be_32(data.u32); + + memset(&cmd, 0, sizeof(cmd)); + cmd.data = &data; + cmd.module = SSSNIC_LAN_MODULE; + cmd.data_len = sizeof(data); + cmd.cmd = SSSNIC_FLUSH_RXQ_CMD; + + ret = sssnic_ctrlq_cmd_exec(hw, &cmd, 0); + if (ret != 0 || cmd.result != 0) { + PMD_DRV_LOG(ERR, + "Failed to execulte ctrlq command %s, ret=%d, result=%" PRIu64, + "SSSNIC_FLUSH_RXQ_CMD", ret, cmd.result); + return -EIO; + } + + return 0; +} diff --git a/drivers/net/sssnic/base/sssnic_api.h b/drivers/net/sssnic/base/sssnic_api.h index 168aa152b9..29962aabf8 100644 --- a/drivers/net/sssnic/base/sssnic_api.h +++ b/drivers/net/sssnic/base/sssnic_api.h @@ -45,5 +45,7 @@ int sssnic_netif_link_status_get(struct sssnic_hw *hw, uint8_t *status); int sssnic_netif_link_info_get(struct sssnic_hw *hw, struct sssnic_netif_link_info *info); int sssnic_netif_enable_set(struct sssnic_hw *hw, uint8_t state); +int sssnic_port_enable_set(struct sssnic_hw *hw, bool state); +int sssnic_rxq_flush(struct sssnic_hw *hw, uint16_t qid); #endif /* _SSSNIC_API_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_cmd.h b/drivers/net/sssnic/base/sssnic_cmd.h index 6957b742fc..6364058d36 100644 --- a/drivers/net/sssnic/base/sssnic_cmd.h +++ b/drivers/net/sssnic/base/sssnic_cmd.h @@ -35,6 +35,37 @@ enum sssnic_netif_cmd_id { SSSNIC_GET_NETIF_LINK_INFO_CMD = 153, }; +enum sssnic_port_cmd_id { + SSSNIC_REGISTER_VF_PORT_CMD = 0, + SSSNIC_SET_PORT_RXTX_SIZE_CMD = 5, + SSSNIC_SET_PORT_ENABLE_CMD = 6, + SSSNIC_SET_PORT_RX_MODE_CMD = 7, + SSSNIC_SET_PORT_TX_CI_ATTR_CMD = 8, + SSSNIC_GET_PORT_STATS_CMD = 9, + SSSNIC_CLEAR_PORT_STATS_CMD = 10, + + SSSNIC_CLEAN_PORT_RES_CMD = 11, + + SSSNIC_PORT_LRO_CFG_CMD = 13, + SSSNIC_PORT_LRO_TIMER_CMD = 14, + SSSNIC_PORT_FEATURE_CMD = 15, + + SSSNIC_SET_PORT_VLAN_FILTER_CMD = 25, + SSSNIC_ENABLE_PORT_VLAN_FILTER_CMD = 26, + SSSNIC_ENABLE_PORT_VLAN_STRIP_CMD = 27, + + SSSNIC_PORT_FLOW_CTRL_CMD = 101, +}; + +enum sssnic_ctrlq_cmd_id { + SSSNIC_SET_RXTXQ_CTX_CMD = 0, + SSSNIC_RESET_OFFLOAD_CTX_CMD = 1, + SSSNIC_SET_RSS_INDIR_TABLE_CMD = 4, + SSSNIC_SET_RSS_KEY_CTRLQ_CMD = 5, + SSSNIC_GET_RSS_INDIR_TABLE_CMD = 6, + SSSNIC_FLUSH_RXQ_CMD = 10, +}; + struct sssnic_cmd_common { uint8_t status; uint8_t version; @@ -187,4 +218,22 @@ struct sssnic_netif_enable_set_cmd { uint8_t resvd1[3]; }; +struct sssnic_port_enable_set_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint16_t resvd0; + uint8_t state; + uint8_t resvd1[3]; +}; + +struct 
sssnic_rxq_flush_cmd { + union { + struct { + uint16_t resvd0; + uint16_t qid; + }; + uint32_t u32; + }; +}; + #endif /* _SSSNIC_CMD_H_ */ diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index 732fddfcf7..208f0db402 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -358,6 +358,8 @@ static const struct eth_dev_ops sssnic_ethdev_ops = { .rx_queue_release = sssnic_ethdev_rx_queue_release, .tx_queue_setup = sssnic_ethdev_tx_queue_setup, .tx_queue_release = sssnic_ethdev_tx_queue_release, + .rx_queue_start = sssnic_ethdev_rx_queue_start, + .rx_queue_stop = sssnic_ethdev_rx_queue_stop, }; static int diff --git a/drivers/net/sssnic/sssnic_ethdev.h b/drivers/net/sssnic/sssnic_ethdev.h index ab832d179f..38e6dc0d62 100644 --- a/drivers/net/sssnic/sssnic_ethdev.h +++ b/drivers/net/sssnic/sssnic_ethdev.h @@ -65,6 +65,8 @@ struct sssnic_netdev { struct rte_ether_addr default_addr; uint16_t max_num_txq; uint16_t max_num_rxq; + uint16_t num_started_rxqs; + uint16_t num_started_txqs; }; #define SSSNIC_ETHDEV_PRIVATE(eth_dev) \ diff --git a/drivers/net/sssnic/sssnic_ethdev_rx.c b/drivers/net/sssnic/sssnic_ethdev_rx.c index 0f489cf82f..d8429e734d 100644 --- a/drivers/net/sssnic/sssnic_ethdev_rx.c +++ b/drivers/net/sssnic/sssnic_ethdev_rx.c @@ -162,24 +162,64 @@ struct sssnic_ethdev_rxq_doorbell { }; }; +static inline void +sssnic_ethdev_rxq_doorbell_ring(struct sssnic_ethdev_rxq *rxq, uint16_t pi) +{ + uint64_t *db_addr; + struct sssnic_ethdev_rxq_doorbell db; + uint16_t hw_pi; + static const struct sssnic_ethdev_rxq_doorbell default_db = { + .cf = 1, + .service = 1, + }; + + hw_pi = pi << 1; + + db.u64 = default_db.u64; + db.qid = rxq->qid; + db.pi_hi = (hw_pi >> 8) & 0xff; + + db_addr = (uint64_t *)(rxq->doorbell + (hw_pi & 0xff)); + + rte_write64(db.u64, db_addr); +} + static inline uint16_t sssnic_ethdev_rxq_num_used_entries(struct sssnic_ethdev_rxq *rxq) { return sssnic_workq_num_used_entries(rxq->workq); } +static inline uint16_t +sssnic_ethdev_rxq_num_idle_entries(struct sssnic_ethdev_rxq *rxq) +{ + return rxq->workq->idle_entries; +} + static inline uint16_t sssnic_ethdev_rxq_ci_get(struct sssnic_ethdev_rxq *rxq) { return sssnic_workq_ci_get(rxq->workq); } +static inline uint16_t +sssnic_ethdev_rxq_pi_get(struct sssnic_ethdev_rxq *rxq) +{ + return sssnic_workq_pi_get(rxq->workq); +} + static inline void sssnic_ethdev_rxq_consume(struct sssnic_ethdev_rxq *rxq, uint16_t num_entries) { sssnic_workq_consume_fast(rxq->workq, num_entries); } +static inline void +sssnic_ethdev_rxq_produce(struct sssnic_ethdev_rxq *rxq, uint16_t num_entries) +{ + sssnic_workq_produce_fast(rxq->workq, num_entries); +} + static void sssnic_ethdev_rx_buf_size_optimize(uint32_t orig_size, uint16_t *new_size) { @@ -418,3 +458,295 @@ sssnic_ethdev_rx_queue_all_release(struct rte_eth_dev *ethdev) for (qid = 0; qid < ethdev->data->nb_rx_queues; qid++) sssnic_ethdev_rx_queue_release(ethdev, qid); } + +static void +sssnic_ethdev_rxq_pktmbufs_fill(struct sssnic_ethdev_rxq *rxq) +{ + struct rte_mbuf **pktmbuf; + rte_iova_t buf_iova; + struct sssnic_ethdev_rxq_entry *rqe; + uint16_t idle_entries; + uint16_t bulk_entries; + uint16_t pi; + uint16_t i; + int ret; + + idle_entries = sssnic_ethdev_rxq_num_idle_entries(rxq) - 1; + pi = sssnic_ethdev_rxq_pi_get(rxq); + + while (idle_entries > 0) { + /* calculate number of continuous entries */ + bulk_entries = rxq->depth - pi; + if (idle_entries < bulk_entries) + bulk_entries = idle_entries; + + pktmbuf = (struct 
rte_mbuf **)(&rxq->rxe[pi]); + + ret = rte_pktmbuf_alloc_bulk(rxq->mp, pktmbuf, bulk_entries); + if (ret != 0) { + rxq->stats.nombuf += idle_entries; + return; + } + + for (i = 0; i < bulk_entries; i++) { + rqe = SSSNIC_ETHDEV_RXQ_ENTRY(rxq, pi); + buf_iova = rte_mbuf_data_iova(pktmbuf[i]); + rqe->buf_hi_addr = SSSNIC_UPPER_32_BITS(buf_iova); + rqe->buf_lo_addr = SSSNIC_LOWER_32_BITS(buf_iova); + sssnic_ethdev_rxq_produce(rxq, 1); + pi = sssnic_ethdev_rxq_pi_get(rxq); + } + + idle_entries -= bulk_entries; + sssnic_ethdev_rxq_doorbell_ring(rxq, pi); + } +} + +static uint16_t +sssnic_ethdev_rxq_pktmbufs_cleanup(struct sssnic_ethdev_rxq *rxq) +{ + struct sssnic_ethdev_rx_entry *rxe; + volatile struct sssnic_ethdev_rx_desc *rxd; + uint16_t ci, count = 0; + uint32_t pktlen = 0; + uint32_t buflen = rxq->rx_buf_size; + uint16_t num_entries; + + num_entries = sssnic_ethdev_rxq_num_used_entries(rxq); + + ci = sssnic_ethdev_rxq_ci_get(rxq); + rxe = &rxq->rxe[ci]; + rxd = &rxq->desc[ci]; + + while (num_entries > 0) { + if (pktlen > 0) + pktlen = pktlen > buflen ? (pktlen - buflen) : 0; + else if (rxd->flush != 0) + pktlen = 0; + else if (rxd->done != 0) + pktlen = rxd->len > buflen ? (rxd->len - buflen) : 0; + else + break; + + rte_pktmbuf_free(rxe->pktmbuf); + rxe->pktmbuf = NULL; + + count++; + num_entries--; + + sssnic_ethdev_rxq_consume(rxq, 1); + ci = sssnic_ethdev_rxq_ci_get(rxq); + rxe = &rxq->rxe[ci]; + rxd = &rxq->desc[ci]; + } + + PMD_DRV_LOG(DEBUG, + "%u rx packets cleanned up (Port:%u rxq:%u), ci=%u, pi=%u", + count, rxq->port, rxq->qid, ci, sssnic_ethdev_rxq_pi_get(rxq)); + + return count; +} + +#define SSSNIC_ETHDEV_RXQ_FUSH_TIMEOUT 3000 /* 3000 ms */ +static int +sssnic_ethdev_rxq_flush(struct sssnic_ethdev_rxq *rxq) +{ + struct sssnic_hw *hw; + uint64_t timeout; + uint16_t used_entries; + int ret; + + hw = SSSNIC_ETHDEV_TO_HW(rxq->ethdev); + + ret = sssnic_rxq_flush(hw, rxq->qid); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to flush rxq:%u, port:%u", rxq->qid, + rxq->port); + return ret; + } + + timeout = rte_get_timer_cycles() + + rte_get_timer_hz() * SSSNIC_ETHDEV_RXQ_FUSH_TIMEOUT / 1000; + + do { + sssnic_ethdev_rxq_pktmbufs_cleanup(rxq); + used_entries = sssnic_ethdev_rxq_num_used_entries(rxq); + if (used_entries == 0) + return 0; + rte_delay_us_sleep(1000); + } while (((long)(rte_get_timer_cycles() - timeout)) < 0); + + PMD_DRV_LOG(ERR, "Flush port:%u rxq:%u timeout, used_rxq_entries:%u", + rxq->port, rxq->qid, sssnic_ethdev_rxq_num_used_entries(rxq)); + + return -ETIMEDOUT; +} + +static int +sssnic_ethdev_rxq_enable(struct rte_eth_dev *ethdev, uint16_t queue_id) +{ + struct sssnic_ethdev_rxq *rxq = ethdev->data->rx_queues[queue_id]; + + sssnic_ethdev_rxq_pktmbufs_fill(rxq); + + return 0; +} + +static int +sssnic_ethdev_rxq_disable(struct rte_eth_dev *ethdev, uint16_t queue_id) +{ + struct sssnic_ethdev_rxq *rxq = ethdev->data->rx_queues[queue_id]; + int ret; + + ret = sssnic_ethdev_rxq_flush(rxq); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to flush rxq:%u, port:%u", queue_id, + ethdev->data->port_id); + return ret; + } + + return 0; +} +int +sssnic_ethdev_rx_queue_start(struct rte_eth_dev *ethdev, uint16_t queue_id) +{ + struct sssnic_netdev *netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + int ret; + + ret = sssnic_ethdev_rxq_enable(ethdev, queue_id); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to start rxq:%u, port:%u", queue_id, + ethdev->data->port_id); + return ret; + } + + if (netdev->num_started_rxqs == 0) { + ret = 
sssnic_port_enable_set(hw, true); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to enable sssnic port:%u", + ethdev->data->port_id); + sssnic_ethdev_rxq_disable(ethdev, queue_id); + return ret; + } + } + + netdev->num_started_rxqs++; + ethdev->data->rx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + + PMD_DRV_LOG(DEBUG, "port %u rxq %u started", ethdev->data->port_id, + queue_id); + + return 0; +} + +int +sssnic_ethdev_rx_queue_stop(struct rte_eth_dev *ethdev, uint16_t queue_id) +{ + struct sssnic_netdev *netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + int ret; + + if (netdev->num_started_rxqs == 1) { + ret = sssnic_port_enable_set(hw, false); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to disable sssnic port:%u", + ethdev->data->port_id); + return ret; + } + } + + ret = sssnic_ethdev_rxq_disable(ethdev, queue_id); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to disable rxq:%u, port:%u", queue_id, + ethdev->data->port_id); + sssnic_port_enable_set(hw, true); + return ret; + } + + netdev->num_started_rxqs--; + ethdev->data->rx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + + PMD_DRV_LOG(DEBUG, "port %u rxq %u stopped", ethdev->data->port_id, + queue_id); + + return 0; +} + +int +sssnic_ethdev_rx_queue_all_start(struct rte_eth_dev *ethdev) +{ + struct sssnic_netdev *netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + uint16_t numq = ethdev->data->nb_rx_queues; + uint16_t qid; + + int ret; + + for (qid = 0; qid < numq; qid++) { + ret = sssnic_ethdev_rxq_enable(ethdev, qid); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to enable rxq:%u, port:%u", + qid, ethdev->data->port_id); + goto fail_out; + } + + ethdev->data->rx_queue_state[qid] = RTE_ETH_QUEUE_STATE_STARTED; + netdev->num_started_rxqs++; + + PMD_DRV_LOG(DEBUG, "port %u rxq %u started", + ethdev->data->port_id, qid); + } + + ret = sssnic_port_enable_set(hw, true); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to enable port:%u", + ethdev->data->port_id); + goto fail_out; + } + + return 0; + +fail_out: + while (qid--) { + sssnic_ethdev_rxq_disable(ethdev, qid); + ethdev->data->rx_queue_state[qid] = RTE_ETH_QUEUE_STATE_STOPPED; + netdev->num_started_rxqs--; + } + + return ret; +} + +int +sssnic_ethdev_rx_queue_all_stop(struct rte_eth_dev *ethdev) +{ + struct sssnic_netdev *netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + uint16_t numq = ethdev->data->nb_rx_queues; + uint16_t qid; + int ret; + + ret = sssnic_port_enable_set(hw, false); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to disable port:%u", + ethdev->data->port_id); + return ret; + } + + for (qid = 0; qid < numq; qid++) { + ret = sssnic_ethdev_rxq_disable(ethdev, qid); + if (ret != 0) { + PMD_DRV_LOG(WARNING, "Failed to disable rxq:%u, port:%u", + qid, ethdev->data->port_id); + continue; + } + + ethdev->data->rx_queue_state[qid] = RTE_ETH_QUEUE_STATE_STOPPED; + netdev->num_started_rxqs--; + + PMD_DRV_LOG(DEBUG, "port %u rxq %u stopped", + ethdev->data->port_id, qid); + } + + return 0; +} diff --git a/drivers/net/sssnic/sssnic_ethdev_rx.h b/drivers/net/sssnic/sssnic_ethdev_rx.h index dc41811a2f..c6ddc366d5 100644 --- a/drivers/net/sssnic/sssnic_ethdev_rx.h +++ b/drivers/net/sssnic/sssnic_ethdev_rx.h @@ -20,5 +20,9 @@ int sssnic_ethdev_rx_queue_setup(struct rte_eth_dev *ethdev, void sssnic_ethdev_rx_queue_release(struct rte_eth_dev *ethdev, uint16_t queue_id); void sssnic_ethdev_rx_queue_all_release(struct rte_eth_dev *ethdev); +int 
sssnic_ethdev_rx_queue_start(struct rte_eth_dev *ethdev, uint16_t queue_id); +int sssnic_ethdev_rx_queue_stop(struct rte_eth_dev *ethdev, uint16_t queue_id); +int sssnic_ethdev_rx_queue_all_start(struct rte_eth_dev *ethdev); +int sssnic_ethdev_rx_queue_all_stop(struct rte_eth_dev *ethdev); #endif From patchwork Fri Sep 1 09:34:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131061 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id EBA114221E; Fri, 1 Sep 2023 11:37:57 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E5787406BA; Fri, 1 Sep 2023 11:36:06 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.unionmem.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id 5AA8C402E5 for ; Fri, 1 Sep 2023 11:36:02 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZUj7069844; Fri, 1 Sep 2023 17:35:30 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:29 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 17/32] net/sssnic: support Tx queue start and stop Date: Fri, 1 Sep 2023 17:34:59 +0800 Message-ID: <20230901093514.224824-18-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZUj7069844 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- doc/guides/nics/features/sssnic.ini | 1 + drivers/net/sssnic/sssnic_ethdev.c | 2 + drivers/net/sssnic/sssnic_ethdev_tx.c | 155 ++++++++++++++++++++++++++ drivers/net/sssnic/sssnic_ethdev_tx.h | 4 + 4 files changed, 162 insertions(+) diff --git a/doc/guides/nics/features/sssnic.ini b/doc/guides/nics/features/sssnic.ini index 82b527ba26..b75c68cb33 100644 --- a/doc/guides/nics/features/sssnic.ini +++ b/doc/guides/nics/features/sssnic.ini @@ -6,6 +6,7 @@ [Features] Link status = Y Link status event = Y +Queue start/stop = Y Unicast MAC filter = Y Multicast MAC filter = Y Linux = Y diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index 208f0db402..8a18f25889 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -360,6 +360,8 @@ static const struct eth_dev_ops sssnic_ethdev_ops = { .tx_queue_release = sssnic_ethdev_tx_queue_release, .rx_queue_start = sssnic_ethdev_rx_queue_start, .rx_queue_stop = sssnic_ethdev_rx_queue_stop, + .tx_queue_start = sssnic_ethdev_tx_queue_start, + .tx_queue_stop = sssnic_ethdev_tx_queue_stop, }; static int diff --git 
a/drivers/net/sssnic/sssnic_ethdev_tx.c b/drivers/net/sssnic/sssnic_ethdev_tx.c index e80bf8c396..47d7e3f343 100644 --- a/drivers/net/sssnic/sssnic_ethdev_tx.c +++ b/drivers/net/sssnic/sssnic_ethdev_tx.c @@ -191,6 +191,18 @@ sssnic_ethdev_txq_ci_get(struct sssnic_ethdev_txq *txq) return sssnic_workq_ci_get(txq->workq); } +static inline int +sssnic_ethdev_txq_pi_get(struct sssnic_ethdev_txq *txq) +{ + return sssnic_workq_pi_get(txq->workq); +} + +static inline uint16_t +sssnic_ethdev_txq_hw_ci_get(struct sssnic_ethdev_txq *txq) +{ + return *txq->hw_ci_addr & txq->idx_mask; +} + static inline void sssnic_ethdev_txq_consume(struct sssnic_ethdev_txq *txq, uint16_t num_entries) { @@ -352,3 +364,146 @@ sssnic_ethdev_tx_queue_all_release(struct rte_eth_dev *ethdev) for (qid = 0; qid < ethdev->data->nb_tx_queues; qid++) sssnic_ethdev_tx_queue_release(ethdev, qid); } + +#define SSSNIC_ETHDEV_TX_FREE_BULK 64 +static inline int +sssnic_ethdev_txq_pktmbufs_cleanup(struct sssnic_ethdev_txq *txq) +{ + struct sssnic_ethdev_tx_entry *txe; + struct rte_mbuf *free_pkts[SSSNIC_ETHDEV_TX_FREE_BULK]; + uint16_t num_free_pkts = 0; + uint16_t hw_ci, ci, id_mask; + uint16_t count = 0; + int num_entries; + + ci = sssnic_ethdev_txq_ci_get(txq); + hw_ci = sssnic_ethdev_txq_hw_ci_get(txq); + id_mask = txq->idx_mask; + num_entries = sssnic_ethdev_txq_num_used_entries(txq); + + while (num_entries > 0) { + txe = &txq->txe[ci]; + + /* HW has not consumed enough entries of current packet */ + if (((hw_ci - ci) & id_mask) < txe->num_workq_entries) + break; + + num_entries -= txe->num_workq_entries; + count += txe->num_workq_entries; + ci = (ci + txe->num_workq_entries) & id_mask; + + if (likely(txe->pktmbuf->nb_segs == 1)) { + struct rte_mbuf *pkt = + rte_pktmbuf_prefree_seg(txe->pktmbuf); + txe->pktmbuf = NULL; + + if (unlikely(pkt == NULL)) + continue; + + free_pkts[num_free_pkts++] = pkt; + if (unlikely(pkt->pool != free_pkts[0]->pool || + num_free_pkts >= + SSSNIC_ETHDEV_TX_FREE_BULK)) { + rte_mempool_put_bulk(free_pkts[0]->pool, + (void **)free_pkts, num_free_pkts - 1); + num_free_pkts = 0; + free_pkts[num_free_pkts++] = pkt; + } + } else { + rte_pktmbuf_free(txe->pktmbuf); + txe->pktmbuf = NULL; + } + } + + if (num_free_pkts > 0) + rte_mempool_put_bulk(free_pkts[0]->pool, (void **)free_pkts, + num_free_pkts); + + sssnic_ethdev_txq_consume(txq, count); + + return count; +} + +#define SSSNIC_ETHDEV_TXQ_FUSH_TIMEOUT 3000 /* 3 seconds */ +static int +sssnic_ethdev_txq_flush(struct sssnic_ethdev_txq *txq) +{ + uint64_t timeout; + uint16_t used_entries; + + timeout = rte_get_timer_cycles() + + rte_get_timer_hz() * SSSNIC_ETHDEV_TXQ_FUSH_TIMEOUT / 1000; + + do { + sssnic_ethdev_txq_pktmbufs_cleanup(txq); + used_entries = sssnic_ethdev_txq_num_used_entries(txq); + if (used_entries == 0) + return 0; + + rte_delay_us_sleep(1000); + } while (((long)(rte_get_timer_cycles() - timeout)) < 0); + + PMD_DRV_LOG(ERR, "Flush port:%u txq:%u timeout, used_txq_entries:%u", + txq->port, txq->qid, sssnic_ethdev_txq_num_used_entries(txq)); + + return -ETIMEDOUT; +} + +int +sssnic_ethdev_tx_queue_start(struct rte_eth_dev *ethdev, uint16_t queue_id) +{ + struct sssnic_netdev *netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + + ethdev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + netdev->num_started_txqs++; + + PMD_DRV_LOG(DEBUG, "port %u txq %u started", ethdev->data->port_id, + queue_id); + + return 0; +} + +int +sssnic_ethdev_tx_queue_stop(struct rte_eth_dev *ethdev, uint16_t queue_id) +{ + int ret; + struct sssnic_netdev *netdev 
= SSSNIC_ETHDEV_PRIVATE(ethdev); + struct sssnic_ethdev_txq *txq = ethdev->data->tx_queues[queue_id]; + + ret = sssnic_ethdev_txq_flush(txq); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to flush port %u txq %u", + ethdev->data->port_id, queue_id); + return ret; + } + + ethdev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + netdev->num_started_txqs--; + + PMD_DRV_LOG(DEBUG, "port %u txq %u stopped", ethdev->data->port_id, + queue_id); + + return 0; +} + +int +sssnic_ethdev_tx_queue_all_start(struct rte_eth_dev *ethdev) +{ + uint16_t qid; + uint16_t numq = ethdev->data->nb_tx_queues; + + for (qid = 0; qid < numq; qid++) + sssnic_ethdev_tx_queue_start(ethdev, qid); + + return 0; +} + +void +sssnic_ethdev_tx_queue_all_stop(struct rte_eth_dev *ethdev) +{ + uint16_t qid; + uint16_t numq = ethdev->data->nb_tx_queues; + + for (qid = 0; qid < numq; qid++) + sssnic_ethdev_tx_queue_stop(ethdev, qid); +} diff --git a/drivers/net/sssnic/sssnic_ethdev_tx.h b/drivers/net/sssnic/sssnic_ethdev_tx.h index bd1d721e37..3de9e899a0 100644 --- a/drivers/net/sssnic/sssnic_ethdev_tx.h +++ b/drivers/net/sssnic/sssnic_ethdev_tx.h @@ -23,5 +23,9 @@ int sssnic_ethdev_tx_queue_setup(struct rte_eth_dev *ethdev, void sssnic_ethdev_tx_queue_release(struct rte_eth_dev *ethdev, uint16_t queue_id); void sssnic_ethdev_tx_queue_all_release(struct rte_eth_dev *ethdev); +int sssnic_ethdev_tx_queue_start(struct rte_eth_dev *ethdev, uint16_t queue_id); +int sssnic_ethdev_tx_queue_stop(struct rte_eth_dev *ethdev, uint16_t queue_id); +int sssnic_ethdev_tx_queue_all_start(struct rte_eth_dev *ethdev); +void sssnic_ethdev_tx_queue_all_stop(struct rte_eth_dev *ethdev); #endif /* _SSSNIC_ETHDEV_TX_H_ */ From patchwork Fri Sep 1 09:35:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131063 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id B94AF4221E; Fri, 1 Sep 2023 11:38:12 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 86A2040647; Fri, 1 Sep 2023 11:36:09 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.unionmem.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id D011F4067E for ; Fri, 1 Sep 2023 11:36:03 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZUKZ069845; Fri, 1 Sep 2023 17:35:30 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:30 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 18/32] net/sssnic: add Rx interrupt support Date: Fri, 1 Sep 2023 17:35:00 +0800 Message-ID: <20230901093514.224824-19-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZUKZ069845 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 
Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- doc/guides/nics/features/sssnic.ini | 1 + drivers/net/sssnic/base/sssnic_hw.c | 14 +++ drivers/net/sssnic/base/sssnic_hw.h | 2 + drivers/net/sssnic/sssnic_ethdev.c | 2 + drivers/net/sssnic/sssnic_ethdev_rx.c | 135 ++++++++++++++++++++++++++ drivers/net/sssnic/sssnic_ethdev_rx.h | 6 ++ 6 files changed, 160 insertions(+) diff --git a/doc/guides/nics/features/sssnic.ini b/doc/guides/nics/features/sssnic.ini index b75c68cb33..e3b8166629 100644 --- a/doc/guides/nics/features/sssnic.ini +++ b/doc/guides/nics/features/sssnic.ini @@ -7,6 +7,7 @@ Link status = Y Link status event = Y Queue start/stop = Y +Rx interrupt = Y Unicast MAC filter = Y Multicast MAC filter = Y Linux = Y diff --git a/drivers/net/sssnic/base/sssnic_hw.c b/drivers/net/sssnic/base/sssnic_hw.c index 82eb4ea295..651a0aa7ef 100644 --- a/drivers/net/sssnic/base/sssnic_hw.c +++ b/drivers/net/sssnic/base/sssnic_hw.c @@ -145,6 +145,20 @@ sssnic_msix_resend_disable(struct sssnic_hw *hw, uint16_t msix_id) sssnic_cfg_reg_write(hw, SSSNIC_MSIX_CTRL_REG, reg.u32); } +void +sssnic_msix_auto_mask_set(struct sssnic_hw *hw, uint16_t msix_id, int state) +{ + struct sssnic_msix_ctrl_reg reg; + + reg.u32 = 0; + if (state == SSSNIC_MSIX_ENABLE) + reg.auto_msk_set = 1; + else + reg.auto_msk_clr = 1; + reg.msxi_idx = msix_id; + sssnic_cfg_reg_write(hw, SSSNIC_MSIX_CTRL_REG, reg.u32); +} + static void sssnic_pf_status_set(struct sssnic_hw *hw, enum sssnic_pf_status status) { diff --git a/drivers/net/sssnic/base/sssnic_hw.h b/drivers/net/sssnic/base/sssnic_hw.h index e25f5595e6..4820212543 100644 --- a/drivers/net/sssnic/base/sssnic_hw.h +++ b/drivers/net/sssnic/base/sssnic_hw.h @@ -100,6 +100,8 @@ int sssnic_hw_init(struct sssnic_hw *hw); void sssnic_hw_shutdown(struct sssnic_hw *hw); void sssnic_msix_state_set(struct sssnic_hw *hw, uint16_t msix_id, int state); void sssnic_msix_resend_disable(struct sssnic_hw *hw, uint16_t msix_id); +void sssnic_msix_auto_mask_set(struct sssnic_hw *hw, uint16_t msix_id, + int state); int sssnic_link_event_callback_register(struct sssnic_hw *hw, sssnic_link_event_cb_t *cb, void *priv); void sssnic_link_event_callback_unregister(struct sssnic_hw *hw); diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index 8a18f25889..35bb26a0b1 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -362,6 +362,8 @@ static const struct eth_dev_ops sssnic_ethdev_ops = { .rx_queue_stop = sssnic_ethdev_rx_queue_stop, .tx_queue_start = sssnic_ethdev_tx_queue_start, .tx_queue_stop = sssnic_ethdev_tx_queue_stop, + .rx_queue_intr_enable = sssnic_ethdev_rx_queue_intr_enable, + .rx_queue_intr_disable = sssnic_ethdev_rx_queue_intr_disable, }; static int diff --git a/drivers/net/sssnic/sssnic_ethdev_rx.c b/drivers/net/sssnic/sssnic_ethdev_rx.c index d8429e734d..9c1b2f20d1 100644 --- a/drivers/net/sssnic/sssnic_ethdev_rx.c +++ b/drivers/net/sssnic/sssnic_ethdev_rx.c @@ -136,6 +136,12 @@ static const uint16_t sssnic_ethdev_rx_buf_size_tbl[] = { 32, 64, 96, 128, 192, /* Doorbell offset 8192 */ #define SSSNIC_ETHDEV_RXQ_DB_OFFSET 0x2000 +#define SSSNIC_ETHDEV_RX_MSIX_ID_START 1 +#define SSSNIC_ETHDEV_RX_MSIX_ID_INVAL 0 +#define SSSNIC_ETHDEV_RX_MSIX_PENDING_LIMIT 2 +#define SSSNIC_ETHDEV_RX_MSIX_COALESCING_TIMER 2 +#define 
SSSNIC_ETHDEV_RX_MSIX_RESENDING_TIMER 7 + struct sssnic_ethdev_rxq_doorbell { union { uint64_t u64; @@ -750,3 +756,132 @@ sssnic_ethdev_rx_queue_all_stop(struct rte_eth_dev *ethdev) return 0; } + +static int +sssnic_ethdev_rxq_intr_attr_init(struct sssnic_ethdev_rxq *rxq) +{ + int ret; + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(rxq->ethdev); + struct sssnic_msix_attr attr; + + attr.lli_set = 0; + attr.coalescing_set = 1; + attr.pending_limit = SSSNIC_ETHDEV_RX_MSIX_PENDING_LIMIT; + attr.coalescing_timer = SSSNIC_ETHDEV_RX_MSIX_COALESCING_TIMER; + attr.resend_timer = SSSNIC_ETHDEV_RX_MSIX_RESENDING_TIMER; + + ret = sssnic_msix_attr_set(hw, rxq->intr.msix_id, &attr); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to set msix attributes"); + return ret; + } + + return 0; +} + +int +sssnic_ethdev_rx_queue_intr_enable(struct rte_eth_dev *ethdev, uint16_t qid) +{ + struct sssnic_ethdev_rxq *rxq = ethdev->data->rx_queues[qid]; + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + + if (rxq->intr.enable) + return 0; + + sssnic_msix_auto_mask_set(hw, rxq->intr.msix_id, SSSNIC_MSIX_ENABLE); + sssnic_msix_state_set(hw, rxq->intr.msix_id, SSSNIC_MSIX_ENABLE); + rxq->intr.enable = 1; + + return 0; +} + +int +sssnic_ethdev_rx_queue_intr_disable(struct rte_eth_dev *ethdev, uint16_t qid) +{ + struct sssnic_ethdev_rxq *rxq = ethdev->data->rx_queues[qid]; + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + + if (!rxq->intr.enable) + return 0; + + sssnic_msix_auto_mask_set(hw, rxq->intr.msix_id, SSSNIC_MSIX_DISABLE); + sssnic_msix_state_set(hw, rxq->intr.msix_id, SSSNIC_MSIX_DISABLE); + sssnic_msix_resend_disable(hw, rxq->intr.msix_id); + rxq->intr.enable = 0; + + return 0; +} + +int +sssnic_ethdev_rx_intr_init(struct rte_eth_dev *ethdev) +{ + struct rte_intr_handle *intr_handle; + struct sssnic_ethdev_rxq *rxq; + uint32_t nb_rxq, i; + int vec; + int ret; + + if (!ethdev->data->dev_conf.intr_conf.rxq) + return 0; + + intr_handle = ethdev->intr_handle; + + if (!rte_intr_cap_multiple(intr_handle)) { + PMD_DRV_LOG(ERR, + "Rx interrupts require MSI-X interrupts (vfio-pci driver)"); + return -ENOTSUP; + } + + rte_intr_efd_disable(intr_handle); + + nb_rxq = ethdev->data->nb_rx_queues; + + ret = rte_intr_efd_enable(intr_handle, nb_rxq); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to enable intr efd"); + return ret; + } + + ret = rte_intr_vec_list_alloc(intr_handle, NULL, nb_rxq); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to allocate rx intr vec list"); + rte_intr_efd_disable(intr_handle); + return ret; + } + + for (i = 0; i < nb_rxq; i++) { + vec = (int)(i + SSSNIC_ETHDEV_RX_MSIX_ID_START); + rte_intr_vec_list_index_set(intr_handle, i, vec); + rxq = ethdev->data->rx_queues[i]; + rxq->intr.msix_id = vec; + + ret = sssnic_ethdev_rxq_intr_attr_init(rxq); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "Failed to initialize rxq %u (port %u) msix attribute.", + rxq->qid, rxq->port); + goto intr_attr_init_fail; + } + } + + return 0; + +intr_attr_init_fail: + rte_intr_vec_list_free(intr_handle); + rte_intr_efd_disable(intr_handle); + + return ret; +} + +void +sssnic_ethdev_rx_intr_shutdown(struct rte_eth_dev *ethdev) +{ + struct rte_intr_handle *intr_handle = ethdev->intr_handle; + uint16_t i; + + for (i = 0; i < ethdev->data->nb_rx_queues; i++) + sssnic_ethdev_rx_queue_intr_disable(ethdev, i); + + rte_intr_efd_disable(intr_handle); + rte_intr_vec_list_free(intr_handle); +} diff --git a/drivers/net/sssnic/sssnic_ethdev_rx.h b/drivers/net/sssnic/sssnic_ethdev_rx.h index c6ddc366d5..77a116f4a5 100644 --- 
a/drivers/net/sssnic/sssnic_ethdev_rx.h +++ b/drivers/net/sssnic/sssnic_ethdev_rx.h @@ -24,5 +24,11 @@ int sssnic_ethdev_rx_queue_start(struct rte_eth_dev *ethdev, uint16_t queue_id); int sssnic_ethdev_rx_queue_stop(struct rte_eth_dev *ethdev, uint16_t queue_id); int sssnic_ethdev_rx_queue_all_start(struct rte_eth_dev *ethdev); int sssnic_ethdev_rx_queue_all_stop(struct rte_eth_dev *ethdev); +int sssnic_ethdev_rx_queue_intr_enable(struct rte_eth_dev *ethdev, + uint16_t qid); +int sssnic_ethdev_rx_queue_intr_disable(struct rte_eth_dev *ethdev, + uint16_t qid); +int sssnic_ethdev_rx_intr_init(struct rte_eth_dev *ethdev); +void sssnic_ethdev_rx_intr_shutdown(struct rte_eth_dev *ethdev); #endif From patchwork Fri Sep 1 09:35:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131065 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 105644221E; Fri, 1 Sep 2023 11:38:30 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8897A40A87; Fri, 1 Sep 2023 11:36:12 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.ramaxel.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id B2C5B402A3 for ; Fri, 1 Sep 2023 11:36:05 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZVPK069847; Fri, 1 Sep 2023 17:35:31 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:30 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 19/32] net/sssnic: support dev start and stop Date: Fri, 1 Sep 2023 17:35:01 +0800 Message-ID: <20230901093514.224824-20-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZVPK069847 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- drivers/net/sssnic/base/sssnic_api.c | 508 ++++++++++++++++++++++++++ drivers/net/sssnic/base/sssnic_api.h | 257 +++++++++++++ drivers/net/sssnic/base/sssnic_cmd.h | 100 +++++ drivers/net/sssnic/base/sssnic_misc.h | 34 ++ drivers/net/sssnic/sssnic_ethdev.c | 284 ++++++++++++++ drivers/net/sssnic/sssnic_ethdev.h | 21 ++ drivers/net/sssnic/sssnic_ethdev_rx.c | 270 +++++++++++++- drivers/net/sssnic/sssnic_ethdev_rx.h | 8 + drivers/net/sssnic/sssnic_ethdev_tx.c | 163 +++++++++ drivers/net/sssnic/sssnic_ethdev_tx.h | 6 + 10 files changed, 1650 insertions(+), 1 deletion(-) diff --git a/drivers/net/sssnic/base/sssnic_api.c b/drivers/net/sssnic/base/sssnic_api.c index 3050d573bf..81020387bd 100644 --- a/drivers/net/sssnic/base/sssnic_api.c +++ 
b/drivers/net/sssnic/base/sssnic_api.c @@ -13,6 +13,7 @@ #include "sssnic_mbox.h" #include "sssnic_ctrlq.h" #include "sssnic_api.h" +#include "sssnic_misc.h" int sssnic_msix_attr_get(struct sssnic_hw *hw, uint16_t msix_idx, @@ -497,3 +498,510 @@ sssnic_rxq_flush(struct sssnic_hw *hw, uint16_t qid) return 0; } + +static int +sssnic_rxtx_size_set(struct sssnic_hw *hw, uint16_t rx_size, uint16_t tx_size, + uint32_t flags) +{ + int ret; + struct sssnic_rxtx_size_set_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + + if (hw == NULL) + return -EINVAL; + + memset(&cmd, 0, sizeof(cmd)); + cmd.function = SSSNIC_FUNC_IDX(hw); + cmd.rx_size = rx_size; + cmd.tx_size = tx_size; + cmd.flags = flags; + cmd_len = sizeof(cmd); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_SET_PORT_RXTX_SIZE_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_SET_PORT_RXTX_SIZE_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +int +sssnic_rxtx_max_size_init(struct sssnic_hw *hw, uint16_t rx_size, + uint16_t tx_size) +{ + return sssnic_rxtx_size_set(hw, rx_size, tx_size, + SSSNIC_CMD_INIT_RXTX_SIZE_FLAG | SSSNIC_CMD_SET_RX_SIZE_FLAG | + SSSNIC_CMD_SET_TX_SIZE_FLAG); +} + +int +sssnic_tx_max_size_set(struct sssnic_hw *hw, uint16_t tx_size) +{ + return sssnic_rxtx_size_set(hw, 0, tx_size, + SSSNIC_CMD_SET_TX_SIZE_FLAG); +} + +int +sssnic_port_features_get(struct sssnic_hw *hw, uint64_t *features) +{ + int ret; + struct sssnic_port_feature_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.function = SSSNIC_FUNC_IDX(hw); + cmd.opcode = SSSNIC_CMD_OPCODE_GET; + cmd_len = sizeof(cmd); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, SSSNIC_PORT_FEATURE_CMD, + SSSNIC_MPU_FUNC_IDX, SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_PORT_FEATURE_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + *features = cmd.features; + + return 0; +} + +int +sssnic_port_features_set(struct sssnic_hw *hw, uint64_t features) +{ + int ret; + struct sssnic_port_feature_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.function = SSSNIC_FUNC_IDX(hw); + cmd.features = features; + cmd.opcode = SSSNIC_CMD_OPCODE_SET; + cmd_len = sizeof(cmd); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, SSSNIC_PORT_FEATURE_CMD, + SSSNIC_MPU_FUNC_IDX, SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_PORT_FEATURE_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +#define SSSNIC_MAX_NUM_RXTXQ_CTX_SET_IN_BULK \ + ((SSSNIC_CTRLQ_MAX_CMD_DATA_LEN - SSSNIC_RXTXQ_CTX_CMD_INFO_LEN) / \ + SSSNIC_RXTXQ_CTX_SIZE) + +static int +sssnic_rxtxq_ctx_set(struct sssnic_hw *hw, struct 
sssnic_rxtxq_ctx *q_ctx, + uint16_t q_start, enum sssnic_rxtxq_ctx_type type, uint16_t count) +{ + struct sssnic_ctrlq_cmd *cmd; + struct sssnic_rxtxq_ctx_cmd *data; + struct sssnic_rxtxq_ctx *ctx; + uint32_t num, i; + uint32_t max_num; + struct sssnic_rxtxq_ctx_cmd_info cmd_info; + int ret = 0; + + cmd = sssnic_ctrlq_cmd_alloc(hw); + if (cmd == NULL) { + PMD_DRV_LOG(ERR, "Failed to alloc ctrlq command"); + return -ENOMEM; + } + + data = cmd->data; + ctx = (struct sssnic_rxtxq_ctx *)(data + 1); + max_num = SSSNIC_MAX_NUM_RXTXQ_CTX_SET_IN_BULK; + + while (count > 0) { + num = RTE_MIN(count, max_num); + + cmd_info.q_count = num; + cmd_info.q_type = type; + cmd_info.q_start = q_start; + cmd_info.resvd0 = 0; + sssnic_mem_cpu_to_be_32(&cmd_info, &data->info, + sizeof(struct sssnic_rxtxq_ctx_cmd_info)); + + for (i = 0; i < num; i++) + sssnic_mem_cpu_to_be_32(q_ctx + i, ctx + i, + SSSNIC_RXTXQ_CTX_SIZE); + + cmd->data_len = sizeof(struct sssnic_rxtxq_ctx_cmd_info) + + (SSSNIC_RXTXQ_CTX_SIZE * num); + cmd->module = SSSNIC_LAN_MODULE; + cmd->cmd = SSSNIC_SET_RXTXQ_CTX_CMD; + + rte_wmb(); + + ret = sssnic_ctrlq_cmd_exec(hw, cmd, 0); + if (ret != 0 || cmd->result != 0) { + PMD_DRV_LOG(ERR, + "Failed to execulte ctrlq command %s, ret=%d, result=%" PRIu64, + "SSSNIC_SET_RXTXQ_CTX_CMD", ret, cmd->result); + ret = -EIO; + goto out; + } + + count -= num; + q_ctx += num; + q_start += num; + } + +out: + sssnic_ctrlq_cmd_destroy(hw, cmd); + return ret; +} + +int +sssnic_txq_ctx_set(struct sssnic_hw *hw, struct sssnic_txq_ctx *ctx, + uint16_t qstart, uint16_t count) +{ + return sssnic_rxtxq_ctx_set(hw, (struct sssnic_rxtxq_ctx *)ctx, qstart, + SSSNIC_TXQ_CTX, count); +} + +int +sssnic_rxq_ctx_set(struct sssnic_hw *hw, struct sssnic_rxq_ctx *ctx, + uint16_t qstart, uint16_t count) +{ + return sssnic_rxtxq_ctx_set(hw, (struct sssnic_rxtxq_ctx *)ctx, qstart, + SSSNIC_RXQ_CTX, count); +} + +static int +sssnic_offload_ctx_reset(struct sssnic_hw *hw, uint16_t q_start, + enum sssnic_rxtxq_ctx_type q_type, uint16_t count) +{ + struct sssnic_ctrlq_cmd cmd; + struct sssnic_offload_ctx_reset_cmd data; + int ret; + + memset(&cmd, 0, sizeof(cmd)); + memset(&data, 0, sizeof(data)); + + data.info.q_count = count; + data.info.q_start = q_start; + data.info.q_type = q_type; + + cmd.data = &data; + cmd.module = SSSNIC_LAN_MODULE; + cmd.data_len = sizeof(data); + cmd.cmd = SSSNIC_RESET_OFFLOAD_CTX_CMD; + + sssnic_mem_cpu_to_be_32(&data, &data, sizeof(data)); + + ret = sssnic_ctrlq_cmd_exec(hw, &cmd, 0); + if (ret != 0 || cmd.result != 0) { + PMD_DRV_LOG(ERR, + "Failed to execulte ctrlq command %s, ret=%d, result=%" PRIu64, + "SSSNIC_RESET_OFFLOAD_CTX_CMD", ret, cmd.result); + + return -EIO; + } + + return 0; +} + +int +sssnic_rx_offload_ctx_reset(struct sssnic_hw *hw) +{ + return sssnic_offload_ctx_reset(hw, 0, SSSNIC_RXQ_CTX, + SSSNIC_MAX_NUM_RXQ(hw)); +} + +int +sssnic_tx_offload_ctx_reset(struct sssnic_hw *hw) +{ + return sssnic_offload_ctx_reset(hw, 0, SSSNIC_TXQ_CTX, + SSSNIC_MAX_NUM_TXQ(hw)); +} + +int +sssnic_rxtx_ctx_set(struct sssnic_hw *hw, bool lro_en, uint16_t rxq_depth, + uint16_t rx_buf, uint16_t txq_depth) +{ + int ret; + struct sssnic_msg msg; + struct sssnic_root_ctx_cmd cmd; + uint32_t cmd_len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.func_id = SSSNIC_FUNC_IDX(hw); + cmd.lro_enable = lro_en ? 
1 : 0; + cmd.rx_buf = rx_buf; + cmd.rxq_depth = (uint16_t)rte_log2_u32(rxq_depth); + cmd.txq_depth = (uint16_t)rte_log2_u32(txq_depth); + cmd_len = sizeof(cmd); + + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, SSSNIC_SET_ROOT_CTX_CMD, + SSSNIC_MPU_FUNC_IDX, SSSNIC_COMM_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SET_ROOT_CTX_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +int +sssnic_port_tx_ci_attr_set(struct sssnic_hw *hw, uint16_t tx_qid, + uint8_t pending_limit, uint8_t coalescing_time, uint64_t dma_addr) +{ + int ret; + struct sssnic_port_tx_ci_attr_set_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.function = SSSNIC_FUNC_IDX(hw); + cmd.coalescing_time = coalescing_time; + cmd.pending_limit = pending_limit; + cmd.qid = tx_qid; + cmd.dma_addr = dma_addr >> 2; + cmd_len = sizeof(cmd); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_SET_PORT_TX_CI_ATTR_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_SET_PORT_TX_CI_ATTR_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +int +sssnic_port_rx_mode_set(struct sssnic_hw *hw, uint32_t mode) +{ + int ret; + struct sssnic_port_rx_mode_set_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.function = SSSNIC_FUNC_IDX(hw); + cmd.mode = mode; + cmd_len = sizeof(cmd); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_SET_PORT_RX_MODE_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_SET_PORT_RX_MODE_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +int +sssnic_lro_enable_set(struct sssnic_hw *hw, bool ipv4_en, bool ipv6_en, + uint8_t nb_lro_bufs) +{ + int ret; + struct sssnic_lro_cfg_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.function = SSSNIC_FUNC_IDX(hw); + cmd.ipv4_en = ipv4_en ? 1 : 0; + cmd.opcode = SSSNIC_CMD_OPCODE_SET; + cmd.ipv6_en = ipv6_en ? 
1 : 0; + cmd.nb_bufs = nb_lro_bufs; + cmd_len = sizeof(cmd); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, SSSNIC_PORT_LRO_CFG_CMD, + SSSNIC_MPU_FUNC_IDX, SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_PORT_LRO_CFG_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +int +sssnic_lro_timer_set(struct sssnic_hw *hw, uint32_t timer) +{ + int ret; + struct sssnic_lro_timer_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + + if (SSSNIC_FUNC_TYPE(hw) == SSSNIC_FUNC_TYPE_VF) + return 0; + + memset(&cmd, 0, sizeof(cmd)); + cmd.opcode = SSSNIC_CMD_OPCODE_SET; + cmd.timer = timer; + cmd_len = sizeof(cmd); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_PORT_LRO_TIMER_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_PORT_LRO_TIMER_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +int +sssnic_vlan_filter_enable_set(struct sssnic_hw *hw, bool state) +{ + int ret; + struct sssnic_vlan_filter_enable_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.function = SSSNIC_FUNC_IDX(hw); + cmd.state = state ? 1 : 0; + cmd_len = sizeof(cmd); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_ENABLE_PORT_VLAN_FILTER_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_ENABLE_PORT_VLAN_FILTER_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +int +sssnic_vlan_strip_enable_set(struct sssnic_hw *hw, bool state) +{ + int ret; + struct sssnic_vlan_strip_enable_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.function = SSSNIC_FUNC_IDX(hw); + cmd.state = state ? 
1 : 0; + cmd_len = sizeof(cmd); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_ENABLE_PORT_VLAN_STRIP_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_ENABLE_PORT_VLAN_STRIP_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +int +sssnic_port_resource_clean(struct sssnic_hw *hw) +{ + int ret; + struct sssnic_port_resource_clean_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.function = SSSNIC_FUNC_IDX(hw); + cmd_len = sizeof(cmd); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_CLEAN_PORT_RES_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_CLEAN_PORT_RES_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} diff --git a/drivers/net/sssnic/base/sssnic_api.h b/drivers/net/sssnic/base/sssnic_api.h index 29962aabf8..49336f70cf 100644 --- a/drivers/net/sssnic/base/sssnic_api.h +++ b/drivers/net/sssnic/base/sssnic_api.h @@ -32,6 +32,241 @@ struct sssnic_netif_link_info { uint8_t fec; }; +struct sssnic_rxq_ctx { + union { + uint32_t dword0; + struct { + /* producer index of workq */ + uint32_t pi : 16; + /* consumer index of workq */ + uint32_t ci : 16; + }; + }; + + union { + uint32_t dword1; + struct { + uint32_t dw1_resvd0 : 21; + uint32_t msix_id : 10; + uint32_t intr_dis : 1; + }; + }; + + union { + uint32_t dword2; + struct { + uint32_t wq_pfn_hi : 20; + uint32_t dw2_resvd0 : 8; + /* DPDK PMD always set to 2, + * represent 16 bytes workq entry + */ + uint32_t wqe_type : 2; + uint32_t dw2_resvd1 : 1; + /* DPDK PMD always set to 1 */ + uint32_t wq_owner : 1; + }; + }; + + union { + uint32_t dword3; + uint32_t wq_pfn_lo; + }; + + uint32_t dword4; + uint32_t dword5; + uint32_t dword6; + + union { + uint32_t dword7; + struct { + uint32_t dw7_resvd0 : 28; + /* PMD always set to 1, represent 32 bytes CQE*/ + uint32_t rxd_len : 2; + uint32_t dw7_resvd1 : 2; + }; + }; + + union { + uint32_t dword8; + struct { + uint32_t pre_cache_thd : 14; + uint32_t pre_cache_max : 11; + uint32_t pre_cache_min : 7; + }; + }; + + union { + uint32_t dword9; + struct { + uint32_t pre_ci_hi : 4; + uint32_t pre_owner : 1; + uint32_t dw9_resvd0 : 27; + }; + }; + + union { + uint32_t dword10; + struct { + uint32_t pre_wq_pfn_hi : 20; + uint32_t pre_ci_lo : 12; + }; + }; + + union { + uint32_t dword11; + uint32_t pre_wq_pfn_lo; + }; + + union { + uint32_t dword12; + /* high 32it of PI DMA address */ + uint32_t pi_addr_hi; + }; + + union { + uint32_t dword13; + /* low 32it of PI DMA address */ + uint32_t pi_addr_lo; + }; + + union { + uint32_t dword14; + struct { + uint32_t wq_blk_pfn_hi : 23; + uint32_t dw14_resvd0 : 9; + }; + }; + + union { + uint32_t dword15; + uint32_t wq_blk_pfn_lo; + }; +}; + +struct sssnic_txq_ctx { + union { + uint32_t dword0; + struct { + uint32_t pi : 16; + uint32_t ci : 16; + }; + }; + + union { + uint32_t dword1; + struct { + uint32_t sp : 1; + uint32_t drop : 1; + uint32_t dw_resvd0 : 30; + }; + 
}; + + union { + uint32_t dword2; + struct { + uint32_t wq_pfn_hi : 20; + uint32_t dw2_resvd0 : 3; + uint32_t wq_owner : 1; + uint32_t dw2_resvd1 : 8; + }; + }; + + union { + uint32_t dword3; + uint32_t wq_pfn_lo; + }; + + uint32_t dword4; + + union { + uint32_t dword5; + struct { + uint32_t drop_on_thd : 16; + uint32_t drop_off_thd : 16; + }; + }; + union { + uint32_t dword6; + struct { + uint32_t qid : 13; + uint32_t dw6_resvd0 : 19; + }; + }; + + union { + uint32_t dword7; + struct { + uint32_t vlan_tag : 16; + uint32_t vlan_type : 3; + uint32_t insert_mode : 2; + uint32_t dw7_resvd0 : 2; + uint32_t ceq_en : 1; + uint32_t dw7_resvd1 : 8; + }; + }; + + union { + uint32_t dword8; + struct { + uint32_t pre_cache_thd : 14; + uint32_t pre_cache_max : 11; + uint32_t pre_cache_min : 7; + }; + }; + + union { + uint32_t dword9; + struct { + uint32_t pre_ci_hi : 4; + uint32_t pre_owner : 1; + uint32_t dw9_resvd0 : 27; + }; + }; + + union { + uint32_t dword10; + struct { + uint32_t pre_wq_pfn_hi : 20; + uint32_t pre_ci_lo : 12; + }; + }; + + union { + uint32_t dword11; + uint32_t pre_wq_pfn_lo; + }; + + uint32_t dword12; + uint32_t dword13; + + union { + uint32_t dword14; + struct { + uint32_t wq_blk_pfn_hi : 23; + uint32_t dw14_resvd0 : 9; + }; + }; + + union { + uint32_t dword15; + uint32_t wq_blk_pfn_lo; + }; +}; + +enum sssnic_rxtxq_ctx_type { + SSSNIC_TXQ_CTX, + SSSNIC_RXQ_CTX, +}; + +struct sssnic_rxtxq_ctx { + union { + struct sssnic_rxq_ctx rxq; + struct sssnic_txq_ctx txq; + }; +}; + +#define SSSNIC_RXTXQ_CTX_SIZE (sizeof(struct sssnic_rxtxq_ctx)) + int sssnic_msix_attr_get(struct sssnic_hw *hw, uint16_t msix_idx, struct sssnic_msix_attr *attr); int sssnic_msix_attr_set(struct sssnic_hw *hw, uint16_t msix_idx, @@ -47,5 +282,27 @@ int sssnic_netif_link_info_get(struct sssnic_hw *hw, int sssnic_netif_enable_set(struct sssnic_hw *hw, uint8_t state); int sssnic_port_enable_set(struct sssnic_hw *hw, bool state); int sssnic_rxq_flush(struct sssnic_hw *hw, uint16_t qid); +int sssnic_rxtx_max_size_init(struct sssnic_hw *hw, uint16_t rx_size, + uint16_t tx_size); +int sssnic_tx_max_size_set(struct sssnic_hw *hw, uint16_t tx_size); +int sssnic_port_features_get(struct sssnic_hw *hw, uint64_t *features); +int sssnic_port_features_set(struct sssnic_hw *hw, uint64_t features); +int sssnic_txq_ctx_set(struct sssnic_hw *hw, struct sssnic_txq_ctx *ctx, + uint16_t qstart, uint16_t count); +int sssnic_rxq_ctx_set(struct sssnic_hw *hw, struct sssnic_rxq_ctx *ctx, + uint16_t qstart, uint16_t count); +int sssnic_rx_offload_ctx_reset(struct sssnic_hw *hw); +int sssnic_tx_offload_ctx_reset(struct sssnic_hw *hw); +int sssnic_rxtx_ctx_set(struct sssnic_hw *hw, bool lro_en, uint16_t rxq_depth, + uint16_t rx_buf, uint16_t txq_depth); +int sssnic_port_tx_ci_attr_set(struct sssnic_hw *hw, uint16_t tx_qid, + uint8_t pending_limit, uint8_t coalescing_time, uint64_t dma_addr); +int sssnic_port_rx_mode_set(struct sssnic_hw *hw, uint32_t mode); +int sssnic_lro_enable_set(struct sssnic_hw *hw, bool ipv4_en, bool ipv6_en, + uint8_t nb_lro_bufs); +int sssnic_lro_timer_set(struct sssnic_hw *hw, uint32_t timer); +int sssnic_vlan_filter_enable_set(struct sssnic_hw *hw, bool state); +int sssnic_vlan_strip_enable_set(struct sssnic_hw *hw, bool state); +int sssnic_port_resource_clean(struct sssnic_hw *hw); #endif /* _SSSNIC_API_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_cmd.h b/drivers/net/sssnic/base/sssnic_cmd.h index 6364058d36..e89719b0de 100644 --- a/drivers/net/sssnic/base/sssnic_cmd.h +++ 
b/drivers/net/sssnic/base/sssnic_cmd.h @@ -236,4 +236,104 @@ struct sssnic_rxq_flush_cmd { }; }; +#define SSSNIC_CMD_INIT_RXTX_SIZE_FLAG (RTE_BIT32(0)) +#define SSSNIC_CMD_SET_RX_SIZE_FLAG (RTE_BIT32(1)) +#define SSSNIC_CMD_SET_TX_SIZE_FLAG (RTE_BIT32(2)) + +struct sssnic_rxtx_size_set_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint16_t resvd0; + uint32_t flags; + uint16_t rx_size; + uint16_t tx_size; + uint32_t resvd1[9]; +}; + +struct sssnic_port_feature_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint8_t opcode; + uint8_t resvd0; + uint64_t features; + uint64_t resvd1[3]; +}; + +struct sssnic_rxtxq_ctx_cmd_info { + uint16_t q_count; + uint16_t q_type; + uint16_t q_start; + uint16_t resvd0; +}; + +#define SSSNIC_RXTXQ_CTX_CMD_INFO_LEN (sizeof(struct sssnic_rxtxq_ctx_cmd_info)) + +struct sssnic_rxtxq_ctx_cmd { + struct sssnic_rxtxq_ctx_cmd_info info; + uint32_t ctx[0]; +}; + +struct sssnic_offload_ctx_reset_cmd { + struct sssnic_rxtxq_ctx_cmd_info info; + uint32_t resvd; +}; + +struct sssnic_port_tx_ci_attr_set_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint8_t resvd0; + uint8_t pending_limit; + uint8_t coalescing_time; + uint8_t resvd1; + uint16_t resvd2; + uint16_t qid; + /* ci DMA address right shift 2 */ + uint64_t dma_addr; +}; + +struct sssnic_port_rx_mode_set_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint16_t resvd; + uint32_t mode; +}; + +struct sssnic_lro_cfg_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint8_t opcode; + uint8_t resvd0; + uint8_t ipv4_en; + uint8_t ipv6_en; + uint8_t nb_bufs; + uint8_t resvd1[13]; +}; + +struct sssnic_lro_timer_cmd { + struct sssnic_cmd_common common; + uint8_t opcode; + uint8_t resvd[3]; + uint32_t timer; +}; + +struct sssnic_vlan_filter_enable_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint16_t resvd; + uint32_t state; /* 0: disabled 1: enabled */ +}; + +struct sssnic_vlan_strip_enable_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint8_t state; /* 0: disabled 1: enabled */ + uint8_t resvd[5]; +}; + +struct sssnic_port_resource_clean_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint16_t resvd; +}; + #endif /* _SSSNIC_CMD_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_misc.h b/drivers/net/sssnic/base/sssnic_misc.h index ac1bbd9c73..e30691caef 100644 --- a/drivers/net/sssnic/base/sssnic_misc.h +++ b/drivers/net/sssnic/base/sssnic_misc.h @@ -8,4 +8,38 @@ #define SSSNIC_LOWER_32_BITS(x) ((uint32_t)(x)) #define SSSNIC_UPPER_32_BITS(x) ((uint32_t)(((x) >> 16) >> 16)) +static inline void +sssnic_mem_cpu_to_be_32(void *in, void *out, int size) +{ + uint32_t i; + uint32_t num; + uint32_t *data_in = (uint32_t *)in; + uint32_t *data_out = (uint32_t *)out; + + num = size / sizeof(uint32_t); + + for (i = 0; i < num; i++) { + *data_out = rte_cpu_to_be_32(*data_in); + data_in++; + data_out++; + } +} + +static inline void +sssnic_mem_be_to_cpu_32(void *in, void *out, int size) +{ + uint32_t i; + uint32_t num; + uint32_t *data_in = (uint32_t *)in; + uint32_t *data_out = (uint32_t *)out; + + num = size / sizeof(uint32_t); + + for (i = 0; i < num; i++) { + *data_out = rte_be_to_cpu_32(*data_in); + data_in++; + data_out++; + } +} + #endif /* _SSSNIC_MISC_H_ */ diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index 35bb26a0b1..8201a1e3c4 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -344,7 +344,287 @@ 
sssnic_ethdev_release(struct rte_eth_dev *ethdev) rte_free(hw); } +static int +sssnic_ethdev_rxtx_max_size_init(struct rte_eth_dev *ethdev) +{ + int ret; + struct sssnic_netdev *netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + + netdev->max_rx_size = sssnic_ethdev_rx_max_size_determine(ethdev); + + ret = sssnic_rxtx_max_size_init(hw, netdev->max_rx_size, 0x3fff); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to initialize max rx and tx size"); + return ret; + } + + return 0; +} + +static int +sssnic_ethdev_features_setup(struct rte_eth_dev *ethdev) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + uint64_t features; + int ret; + + ret = sssnic_port_features_get(hw, &features); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to get features"); + return ret; + } + + features &= SSSNIC_ETHDEV_DEFAULT_FEATURES; + + ret = sssnic_port_features_set(hw, features); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to set features to %" PRIx64, + features); + return ret; + } + + PMD_DRV_LOG(DEBUG, "Set features to %" PRIx64, features); + + return 0; +} + +static int +sssnic_ethdev_queues_ctx_setup(struct rte_eth_dev *ethdev) +{ + int ret; + + ret = sssnic_ethdev_tx_queues_ctx_init(ethdev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to initialize tx queues context"); + return ret; + } + + ret = sssnic_ethdev_rx_queues_ctx_init(ethdev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to initialize rx queues context"); + return ret; + } + + ret = sssnic_ethdev_rx_offload_ctx_reset(ethdev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to initialize rx offload context"); + return ret; + } + + ret = sssnic_ethdev_tx_offload_ctx_reset(ethdev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to initialize tx offload context"); + return ret; + } + + return 0; +} + +static int +sssnic_ethdev_rxtx_ctx_setup(struct rte_eth_dev *ethdev) +{ + struct sssnic_netdev *netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + uint16_t rxq_depth; + uint16_t txq_depth; + uint16_t rx_buf_idx; + int ret; + + /* queue 0 as default depth */ + rxq_depth = sssnic_ethdev_rx_queue_depth_get(ethdev, 0); + rxq_depth = rxq_depth << 1; + txq_depth = sssnic_ethdev_tx_queue_depth_get(ethdev, 0); + + rx_buf_idx = sssnic_ethdev_rx_buf_size_index_get(netdev->max_rx_size); + + ret = sssnic_rxtx_ctx_set(hw, true, rxq_depth, rx_buf_idx, txq_depth); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to set rxtx context"); + return ret; + } + + PMD_DRV_LOG(INFO, + "Setup rxq_depth: %u, max_rx_size: %u, rx_buf_idx: %u, txq_depth: %u", + rxq_depth >> 1, netdev->max_rx_size, rx_buf_idx, txq_depth); + + return 0; +} + +static void +sssnic_ethdev_rxtx_ctx_clean(struct rte_eth_dev *ethdev) +{ + sssnic_rxtx_ctx_set(SSSNIC_ETHDEV_TO_HW(ethdev), 0, 0, 0, 0); +} + +static int +sssnic_ethdev_resource_clean(struct rte_eth_dev *ethdev) +{ + return sssnic_port_resource_clean(SSSNIC_ETHDEV_TO_HW(ethdev)); +} + +static int +sssnic_ethdev_start(struct rte_eth_dev *ethdev) +{ + int ret; + + /* disable link event */ + sssnic_ethdev_link_intr_disable(ethdev); + + /* Allocate rx intr vec */ + ret = sssnic_ethdev_rx_intr_init(ethdev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to initialize rx initr of port %u", + ethdev->data->port_id); + goto link_intr_enable; + } + + /* Initialize rx and tx max size */ + ret = sssnic_ethdev_rxtx_max_size_init(ethdev); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "Failed to initialize rxtx max size of port %u", + ethdev->data->port_id); + goto 
rx_intr_shutdown; + } + + /* Setup default features for port */ + ret = sssnic_ethdev_features_setup(ethdev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to setup features"); + goto rx_intr_shutdown; + } + + /* Setup txqs and rxqs context */ + ret = sssnic_ethdev_queues_ctx_setup(ethdev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to setup queues context"); + goto rx_intr_shutdown; + } + + /* Setup tx and rx root context */ + ret = sssnic_ethdev_rxtx_ctx_setup(ethdev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to setup rxtx context"); + goto rx_intr_shutdown; + } + + /* Initialize tx ci attributes */ + ret = sssnic_ethdev_tx_ci_attr_init(ethdev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to initialize tx ci attributes"); + goto rxtx_ctx_clean; + } + + /* Set MTU */ + ret = sssnic_ethdev_tx_max_size_set(ethdev, ethdev->data->mtu); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to set tx max size to %u", + ethdev->data->mtu); + goto rxtx_ctx_clean; + } + + /* init rx mode */ + ret = sssnic_ethdev_rx_mode_set(ethdev, SSSNIC_ETHDEV_DEF_RX_MODE); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to set rx mode to %x", + SSSNIC_ETHDEV_DEF_RX_MODE); + goto rxtx_ctx_clean; + } + + /* setup rx offload */ + ret = sssnic_ethdev_rx_offload_setup(ethdev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to setup rx offload"); + goto rx_mode_reset; + } + + /* start all rx queues */ + ret = sssnic_ethdev_rx_queue_all_start(ethdev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to start all rx queues"); + goto clean_port_res; + } + + /* start all tx queues */ + sssnic_ethdev_tx_queue_all_start(ethdev); + + /* enable link event */ + sssnic_ethdev_link_intr_enable(ethdev); + + /* set port link up */ + ret = sssnic_ethdev_set_link_up(ethdev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to set port link up"); + goto stop_queues; + } + + PMD_DRV_LOG(INFO, "Port %u is started", ethdev->data->port_id); + + return 0; + +stop_queues: + sssnic_ethdev_tx_queue_all_stop(ethdev); + sssnic_ethdev_rx_queue_all_stop(ethdev); +clean_port_res: + sssnic_ethdev_resource_clean(ethdev); +rx_mode_reset: + sssnic_ethdev_rx_mode_set(ethdev, SSSNIC_ETHDEV_RX_MODE_NONE); +rxtx_ctx_clean: + sssnic_ethdev_rxtx_ctx_clean(ethdev); +rx_intr_shutdown: + sssnic_ethdev_rx_intr_shutdown(ethdev); +link_intr_enable: + sssnic_ethdev_link_intr_enable(ethdev); + return ret; +} + +static int +sssnic_ethdev_stop(struct rte_eth_dev *ethdev) +{ + struct rte_eth_link linkstatus = { 0 }; + int ret; + + /* disable link event */ + sssnic_ethdev_link_intr_disable(ethdev); + + /* set link down */ + ret = sssnic_ethdev_set_link_down(ethdev); + if (ret != 0) + PMD_DRV_LOG(WARNING, "Failed to set port %u link down", + ethdev->data->port_id); + + rte_eth_linkstatus_set(ethdev, &linkstatus); + + /* wait for hw to stop rx and tx packet */ + rte_delay_ms(100); + + /* stop all tx queues */ + sssnic_ethdev_tx_queue_all_stop(ethdev); + + /* stop all rx queues */ + sssnic_ethdev_rx_queue_all_stop(ethdev); + + /* clean hardware resource */ + sssnic_ethdev_resource_clean(ethdev); + + /* shut down rx queue interrupt */ + sssnic_ethdev_rx_intr_shutdown(ethdev); + + /* clean rxtx context */ + sssnic_ethdev_rxtx_ctx_clean(ethdev); + + /* enable link event */ + sssnic_ethdev_link_intr_enable(ethdev); + + PMD_DRV_LOG(INFO, "Port %u is stopped", ethdev->data->port_id); + + return 0; +} + static const struct eth_dev_ops sssnic_ethdev_ops = { + .dev_start = sssnic_ethdev_start, + .dev_stop = sssnic_ethdev_stop, .dev_set_link_up = sssnic_ethdev_set_link_up, .dev_set_link_down = 
sssnic_ethdev_set_link_down, .link_update = sssnic_ethdev_link_update, @@ -428,6 +708,10 @@ sssnic_ethdev_uninit(struct rte_eth_dev *ethdev) if (ethdev->state == RTE_ETH_DEV_UNUSED) return 0; + /* stop ethdev first */ + if (ethdev->data->dev_started) + sssnic_ethdev_stop(ethdev); + sssnic_ethdev_release(ethdev); return 0; diff --git a/drivers/net/sssnic/sssnic_ethdev.h b/drivers/net/sssnic/sssnic_ethdev.h index 38e6dc0d62..1f1e991780 100644 --- a/drivers/net/sssnic/sssnic_ethdev.h +++ b/drivers/net/sssnic/sssnic_ethdev.h @@ -59,6 +59,25 @@ #define SSSNIC_ETHDEV_DEF_RX_FREE_THRESH 32 #define SSSNIC_ETHDEV_DEF_TX_FREE_THRESH 32 +#define SSSNIC_ETHDEV_DEFAULT_FEATURES 0x3fff + +enum sssnic_ethdev_rx_mode { + SSSNIC_ETHDEV_RX_MODE_NONE = 0, + SSSNIC_ETHDEV_RX_UCAST = RTE_BIT32(0), + SSSNIC_ETHDEV_RX_MCAST = RTE_BIT32(1), + SSSNIC_ETHDEV_RX_BCAST = RTE_BIT32(2), + SSSNIC_ETHDEV_RX_ALL_MCAST = RTE_BIT32(3), + SSSNIC_ETHDEV_RX_PROMISC = RTE_BIT32(4), + SSSNIC_ETHDEV_RX_MODE_INVAL = RTE_BIT32(5), +}; + +#define SSSNIC_ETHDEV_DEF_RX_MODE \ + (SSSNIC_ETHDEV_RX_UCAST | SSSNIC_ETHDEV_RX_MCAST | \ + SSSNIC_ETHDEV_RX_BCAST) + +#define SSSNIC_ETHDEV_LRO_BUF_SIZE 1024 +#define SSSNIC_ETHDEV_LRO_TIMER 16 + struct sssnic_netdev { void *hw; struct rte_ether_addr *mcast_addrs; @@ -67,6 +86,8 @@ struct sssnic_netdev { uint16_t max_num_rxq; uint16_t num_started_rxqs; uint16_t num_started_txqs; + uint16_t max_rx_size; + uint32_t rx_mode; }; #define SSSNIC_ETHDEV_PRIVATE(eth_dev) \ diff --git a/drivers/net/sssnic/sssnic_ethdev_rx.c b/drivers/net/sssnic/sssnic_ethdev_rx.c index 9c1b2f20d1..fd4975dfd5 100644 --- a/drivers/net/sssnic/sssnic_ethdev_rx.c +++ b/drivers/net/sssnic/sssnic_ethdev_rx.c @@ -185,7 +185,7 @@ sssnic_ethdev_rxq_doorbell_ring(struct sssnic_ethdev_rxq *rxq, uint16_t pi) db.qid = rxq->qid; db.pi_hi = (hw_pi >> 8) & 0xff; - db_addr = (uint64_t *)(rxq->doorbell + (hw_pi & 0xff)); + db_addr = ((uint64_t *)rxq->doorbell) + (hw_pi & 0xff); rte_write64(db.u64, db_addr); } @@ -885,3 +885,271 @@ sssnic_ethdev_rx_intr_shutdown(struct rte_eth_dev *ethdev) rte_intr_efd_disable(intr_handle); rte_intr_vec_list_free(intr_handle); } + +uint16_t +sssnic_ethdev_rx_max_size_determine(struct rte_eth_dev *ethdev) +{ + struct sssnic_ethdev_rxq *rxq; + uint16_t max_size = 0; + uint16_t i; + + for (i = 0; i < ethdev->data->nb_rx_queues; i++) { + rxq = ethdev->data->rx_queues[i]; + if (rxq->rx_buf_size > max_size) + max_size = rxq->rx_buf_size; + } + + return max_size; +} + +static void +sssnic_ethdev_rxq_ctx_build(struct sssnic_ethdev_rxq *rxq, + struct sssnic_rxq_ctx *rxq_ctx) +{ + uint16_t hw_ci, hw_pi; + uint64_t pfn; + + hw_ci = sssnic_ethdev_rxq_ci_get(rxq) << 1; + hw_pi = sssnic_ethdev_rxq_pi_get(rxq) << 1; + + /* dw0 */ + rxq_ctx->pi = hw_pi; + rxq_ctx->ci = hw_ci; + + /* dw1 */ + rxq_ctx->msix_id = rxq->intr.msix_id; + rxq_ctx->intr_dis = !rxq->intr.enable; + + /* workq buf phyaddress PFN, size = 4K */ + pfn = SSSNIC_WORKQ_BUF_PHYADDR(rxq->workq) >> 12; + + /* dw2 */ + rxq_ctx->wq_pfn_hi = SSSNIC_UPPER_32_BITS(pfn); + rxq_ctx->wqe_type = 2; + rxq_ctx->wq_owner = 1; + + /* dw3 */ + rxq_ctx->wq_pfn_lo = SSSNIC_LOWER_32_BITS(pfn); + + /* dw4, dw5, dw6 are reserved */ + + /* dw7 */ + rxq_ctx->rxd_len = 1; + + /* dw8 */ + rxq_ctx->pre_cache_thd = 256; + rxq_ctx->pre_cache_max = 6; + rxq_ctx->pre_cache_min = 1; + + /* dw9 */ + rxq_ctx->pre_ci_hi = (hw_ci >> 12) & 0xf; + rxq_ctx->pre_owner = 1; + + /* dw10 */ + rxq_ctx->pre_wq_pfn_hi = SSSNIC_UPPER_32_BITS(pfn); + rxq_ctx->pre_ci_lo = hw_ci & 0xfff; + + /* dw11 */ + 
rxq_ctx->pre_wq_pfn_lo = SSSNIC_LOWER_32_BITS(pfn); + + /* dw12 */ + rxq_ctx->pi_addr_hi = SSSNIC_UPPER_32_BITS(rxq->pi_mz->iova); + + /* dw13 */ + rxq_ctx->pi_addr_lo = SSSNIC_LOWER_32_BITS(rxq->pi_mz->iova); + + /* workq buf block PFN, size = 512B */ + pfn = SSSNIC_WORKQ_BUF_PHYADDR(rxq->workq) >> 9; + + /* dw14 */ + rxq_ctx->wq_blk_pfn_hi = SSSNIC_UPPER_32_BITS(pfn); + + /* dw15 */ + rxq_ctx->wq_blk_pfn_lo = SSSNIC_LOWER_32_BITS(pfn); +} + +int +sssnic_ethdev_rx_queues_ctx_init(struct rte_eth_dev *ethdev) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + struct sssnic_ethdev_rxq *rxq; + struct sssnic_rxq_ctx *qctx; + uint16_t qid, numq; + int ret; + + numq = ethdev->data->nb_rx_queues; + + qctx = rte_zmalloc(NULL, numq * sizeof(struct sssnic_rxq_ctx), 0); + if (qctx == NULL) { + PMD_DRV_LOG(ERR, "Failed to alloc memory for rxq ctx"); + return -EINVAL; + } + + for (qid = 0; qid < numq; qid++) { + rxq = ethdev->data->rx_queues[qid]; + + /* reset ci and pi */ + sssnic_workq_reset(rxq->workq); + + sssnic_ethdev_rxq_ctx_build(rxq, &qctx[qid]); + } + + ret = sssnic_rxq_ctx_set(hw, qctx, 0, numq); + rte_free(qctx); + + return ret; +} + +int +sssnic_ethdev_rx_offload_ctx_reset(struct rte_eth_dev *ethdev) +{ + return sssnic_rx_offload_ctx_reset(SSSNIC_ETHDEV_TO_HW(ethdev)); +} + +uint16_t +sssnic_ethdev_rx_queue_depth_get(struct rte_eth_dev *ethdev, uint16_t qid) +{ + struct sssnic_ethdev_rxq *rxq; + + if (qid >= ethdev->data->nb_rx_queues) + return 0; + + rxq = ethdev->data->rx_queues[qid]; + + return rxq->depth; +}; + +uint32_t +sssnic_ethdev_rx_buf_size_index_get(uint16_t rx_buf_size) +{ + uint32_t i; + + for (i = 0; i < SSSNIC_ETHDEV_RX_BUF_SIZE_COUNT; i++) { + if (rx_buf_size == sssnic_ethdev_rx_buf_size_tbl[i]) + return i; + } + + return SSSNIC_ETHDEV_DEF_RX_BUF_SIZE_IDX; +} + +int +sssnic_ethdev_rx_mode_set(struct rte_eth_dev *ethdev, uint32_t mode) +{ + struct sssnic_netdev *netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + int ret; + + ret = sssnic_port_rx_mode_set(SSSNIC_ETHDEV_TO_HW(ethdev), mode); + if (ret != 0) + return ret; + + netdev->rx_mode = mode; + + PMD_DRV_LOG(DEBUG, "Set rx_mode to %x", mode); + + return 0; +} + +static int +sssnic_ethdev_lro_setup(struct rte_eth_dev *ethdev) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + struct rte_eth_conf *dev_conf = ðdev->data->dev_conf; + bool enable; + uint8_t num_lro_bufs; + uint32_t max_lro_pkt_size; + uint32_t timer = SSSNIC_ETHDEV_LRO_TIMER; + int ret; + + if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) + enable = true; + else + enable = false; + + max_lro_pkt_size = dev_conf->rxmode.max_lro_pkt_size; + num_lro_bufs = max_lro_pkt_size / SSSNIC_ETHDEV_LRO_BUF_SIZE; + + if (num_lro_bufs == 0) + num_lro_bufs = 1; + + ret = sssnic_lro_enable_set(hw, enable, enable, num_lro_bufs); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to %s LRO", + enable ? "enable" : "disable"); + return ret; + } + + ret = sssnic_lro_timer_set(hw, timer); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to set lro timer to %u", timer); + return ret; + } + + PMD_DRV_LOG(INFO, + "%s LRO, max_lro_pkt_size: %u, num_lro_bufs: %u, lro_timer: %u", + enable ? 
"Enabled" : "Disabled", max_lro_pkt_size, num_lro_bufs, + timer); + + return 0; +} + +static int +sssnic_ethdev_rx_vlan_offload_setup(struct rte_eth_dev *ethdev) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + struct rte_eth_conf *dev_conf = ðdev->data->dev_conf; + bool vlan_strip_en; + uint32_t vlan_filter_en; + int ret; + + if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) + vlan_strip_en = true; + else + vlan_strip_en = false; + + ret = sssnic_vlan_strip_enable_set(hw, vlan_strip_en); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to %s VLAN strip", + vlan_strip_en ? "enable" : "disable"); + return ret; + } + + PMD_DRV_LOG(INFO, "%s VLAN strip", + vlan_strip_en ? "Enabled" : "Disabled"); + + if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) + vlan_filter_en = true; + else + vlan_filter_en = false; + + ret = sssnic_vlan_filter_enable_set(hw, vlan_filter_en); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to %s VLAN filter", + vlan_filter_en ? "enable" : "disable"); + return ret; + } + + PMD_DRV_LOG(ERR, "%s VLAN filter", + vlan_filter_en ? "Enabled" : "Disabled"); + + return 0; +} + +int +sssnic_ethdev_rx_offload_setup(struct rte_eth_dev *ethdev) +{ + int ret; + + ret = sssnic_ethdev_lro_setup(ethdev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to setup LRO"); + return ret; + } + + ret = sssnic_ethdev_rx_vlan_offload_setup(ethdev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to setup rx vlan offload"); + return ret; + } + + return 0; +} diff --git a/drivers/net/sssnic/sssnic_ethdev_rx.h b/drivers/net/sssnic/sssnic_ethdev_rx.h index 77a116f4a5..f4b4545944 100644 --- a/drivers/net/sssnic/sssnic_ethdev_rx.h +++ b/drivers/net/sssnic/sssnic_ethdev_rx.h @@ -30,5 +30,13 @@ int sssnic_ethdev_rx_queue_intr_disable(struct rte_eth_dev *ethdev, uint16_t qid); int sssnic_ethdev_rx_intr_init(struct rte_eth_dev *ethdev); void sssnic_ethdev_rx_intr_shutdown(struct rte_eth_dev *ethdev); +uint16_t sssnic_ethdev_rx_max_size_determine(struct rte_eth_dev *ethdev); +int sssnic_ethdev_rx_queues_ctx_init(struct rte_eth_dev *ethdev); +int sssnic_ethdev_rx_offload_ctx_reset(struct rte_eth_dev *ethdev); +uint16_t sssnic_ethdev_rx_queue_depth_get(struct rte_eth_dev *ethdev, + uint16_t qid); +uint32_t sssnic_ethdev_rx_buf_size_index_get(uint16_t rx_buf_size); +int sssnic_ethdev_rx_mode_set(struct rte_eth_dev *ethdev, uint32_t mode); +int sssnic_ethdev_rx_offload_setup(struct rte_eth_dev *ethdev); #endif diff --git a/drivers/net/sssnic/sssnic_ethdev_tx.c b/drivers/net/sssnic/sssnic_ethdev_tx.c index 47d7e3f343..b9c4f97cb3 100644 --- a/drivers/net/sssnic/sssnic_ethdev_tx.c +++ b/drivers/net/sssnic/sssnic_ethdev_tx.c @@ -179,6 +179,9 @@ enum sssnic_ethdev_txq_entry_type { /* Doorbell offset 4096 */ #define SSSNIC_ETHDEV_TXQ_DB_OFFSET 0x1000 +#define SSSNIC_ETHDEV_TX_CI_DEF_COALESCING_TIME 16 +#define SSSNIC_ETHDEV_TX_CI_DEF_PENDING_TIME 4 + static inline uint16_t sssnic_ethdev_txq_num_used_entries(struct sssnic_ethdev_txq *txq) { @@ -507,3 +510,163 @@ sssnic_ethdev_tx_queue_all_stop(struct rte_eth_dev *ethdev) for (qid = 0; qid < numq; qid++) sssnic_ethdev_tx_queue_stop(ethdev, qid); } + +static void +sssnic_ethdev_txq_ctx_build(struct sssnic_ethdev_txq *txq, + struct sssnic_txq_ctx *qctx) +{ + uint64_t pfn; + + /* dw0 */ + qctx->pi = sssnic_ethdev_txq_pi_get(txq); + qctx->ci = sssnic_ethdev_txq_ci_get(txq); + + /* dw1 */ + qctx->sp = 0; + qctx->drop = 0; + + /* workq buf phyaddress PFN */ + pfn = SSSNIC_WORKQ_BUF_PHYADDR(txq->workq) >> 12; + + /* dw2 */ + qctx->wq_pfn_hi = 
SSSNIC_UPPER_32_BITS(pfn); + qctx->wq_owner = 1; + + /* dw3 */ + qctx->wq_pfn_lo = SSSNIC_LOWER_32_BITS(pfn); + + /* dw4 reserved */ + + /* dw5 */ + qctx->drop_on_thd = 0xffff; + qctx->drop_off_thd = 0; + + /* dw6 */ + qctx->qid = txq->qid; + + /* dw7 */ + qctx->insert_mode = 1; + + /* dw8 */ + qctx->pre_cache_thd = 256; + qctx->pre_cache_max = 6; + qctx->pre_cache_min = 1; + + /* dw9 */ + qctx->pre_ci_hi = sssnic_ethdev_txq_ci_get(txq) >> 12; + qctx->pre_owner = 1; + + /* dw10 */ + qctx->pre_wq_pfn_hi = SSSNIC_UPPER_32_BITS(pfn); + qctx->pre_ci_lo = sssnic_ethdev_txq_ci_get(txq); + + /* dw11 */ + qctx->pre_wq_pfn_lo = SSSNIC_LOWER_32_BITS(pfn); + + /* dw12,dw13 are reserved */ + + /* workq buf block PFN */ + pfn = SSSNIC_WORKQ_BUF_PHYADDR(txq->workq) >> 9; + + /* dw14 */ + qctx->wq_blk_pfn_hi = SSSNIC_UPPER_32_BITS(pfn); + + /* dw15 */ + qctx->wq_blk_pfn_lo = SSSNIC_LOWER_32_BITS(pfn); +} + +int +sssnic_ethdev_tx_queues_ctx_init(struct rte_eth_dev *ethdev) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + struct sssnic_ethdev_txq *txq; + struct sssnic_txq_ctx *qctx; + uint16_t qid, numq; + int ret; + + numq = ethdev->data->nb_tx_queues; + + qctx = rte_zmalloc(NULL, numq * sizeof(struct sssnic_txq_ctx), 0); + if (qctx == NULL) { + PMD_DRV_LOG(ERR, "Failed to alloc memory for txq ctx"); + return -EINVAL; + } + + for (qid = 0; qid < numq; qid++) { + txq = ethdev->data->tx_queues[qid]; + + /* reset ci and pi */ + sssnic_workq_reset(txq->workq); + + *txq->hw_ci_addr = 0; + txq->owner = 1; + + sssnic_ethdev_txq_ctx_build(txq, &qctx[qid]); + } + + ret = sssnic_txq_ctx_set(hw, qctx, 0, numq); + rte_free(qctx); + + return ret; +} + +int +sssnic_ethdev_tx_offload_ctx_reset(struct rte_eth_dev *ethdev) +{ + return sssnic_tx_offload_ctx_reset(SSSNIC_ETHDEV_TO_HW(ethdev)); +} + +uint16_t +sssnic_ethdev_tx_queue_depth_get(struct rte_eth_dev *ethdev, uint16_t qid) +{ + struct sssnic_ethdev_txq *txq; + + if (qid >= ethdev->data->nb_tx_queues) + return 0; + + txq = ethdev->data->tx_queues[qid]; + + return txq->depth; +} + +int +sssnic_ethdev_tx_ci_attr_init(struct rte_eth_dev *ethdev) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + struct sssnic_ethdev_txq *txq; + uint16_t i; + int ret; + + for (i = 0; i < ethdev->data->nb_tx_queues; i++) { + txq = ethdev->data->tx_queues[i]; + + ret = sssnic_port_tx_ci_attr_set(hw, i, + SSSNIC_ETHDEV_TX_CI_DEF_PENDING_TIME, + SSSNIC_ETHDEV_TX_CI_DEF_COALESCING_TIME, + txq->ci_mz->iova); + + if (ret != 0) { + PMD_DRV_LOG(ERR, + "Failed to initialize tx ci attributes of queue %u", + i); + return ret; + } + } + + return 0; +} + +int +sssnic_ethdev_tx_max_size_set(struct rte_eth_dev *ethdev, uint16_t size) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + int ret; + + ret = sssnic_tx_max_size_set(hw, size); + if (ret != 0) + return ret; + + PMD_DRV_LOG(INFO, "Set tx_max_size to %u", size); + + return 0; +}; diff --git a/drivers/net/sssnic/sssnic_ethdev_tx.h b/drivers/net/sssnic/sssnic_ethdev_tx.h index 3de9e899a0..88ad82a055 100644 --- a/drivers/net/sssnic/sssnic_ethdev_tx.h +++ b/drivers/net/sssnic/sssnic_ethdev_tx.h @@ -27,5 +27,11 @@ int sssnic_ethdev_tx_queue_start(struct rte_eth_dev *ethdev, uint16_t queue_id); int sssnic_ethdev_tx_queue_stop(struct rte_eth_dev *ethdev, uint16_t queue_id); int sssnic_ethdev_tx_queue_all_start(struct rte_eth_dev *ethdev); void sssnic_ethdev_tx_queue_all_stop(struct rte_eth_dev *ethdev); +int sssnic_ethdev_tx_queues_ctx_init(struct rte_eth_dev *ethdev); +int sssnic_ethdev_tx_offload_ctx_reset(struct rte_eth_dev 
*ethdev); +uint16_t sssnic_ethdev_tx_queue_depth_get(struct rte_eth_dev *ethdev, + uint16_t qid); +int sssnic_ethdev_tx_ci_attr_init(struct rte_eth_dev *ethdev); +int sssnic_ethdev_tx_max_size_set(struct rte_eth_dev *ethdev, uint16_t size); #endif /* _SSSNIC_ETHDEV_TX_H_ */ From patchwork Fri Sep 1 09:35:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131064 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id E4F874221E; Fri, 1 Sep 2023 11:38:19 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id F0FA240A6C; Fri, 1 Sep 2023 11:36:10 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.unionmem.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id EB508402E2 for ; Fri, 1 Sep 2023 11:36:04 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZVPL069847; Fri, 1 Sep 2023 17:35:31 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:30 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 20/32] net/sssnic: support dev close and reset Date: Fri, 1 Sep 2023 17:35:02 +0800 Message-ID: <20230901093514.224824-21-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZVPL069847 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- drivers/net/sssnic/sssnic_ethdev.c | 32 ++++++++++++++++++++++++++++++ 1 file changed, 32 insertions(+) diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index 8201a1e3c4..b59c4fd3ad 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -13,6 +13,8 @@ #include "sssnic_ethdev_rx.h" #include "sssnic_ethdev_tx.h" +static int sssnic_ethdev_init(struct rte_eth_dev *ethdev); + static int sssnic_ethdev_infos_get(struct rte_eth_dev *ethdev, struct rte_eth_dev_info *devinfo) @@ -622,9 +624,39 @@ sssnic_ethdev_stop(struct rte_eth_dev *ethdev) return 0; } +static int +sssnic_ethdev_close(struct rte_eth_dev *ethdev) +{ + sssnic_ethdev_release(ethdev); + + PMD_DRV_LOG(INFO, "Port %u is closed", ethdev->data->port_id); + + return 0; +} + +static int +sssnic_ethdev_reset(struct rte_eth_dev *ethdev) +{ + int ret; + + sssnic_ethdev_release(ethdev); + + ret = sssnic_ethdev_init(ethdev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to initialize sssnic ethdev"); + return ret; + } + + PMD_DRV_LOG(INFO, "Port %u is reset", ethdev->data->port_id); + + return 0; +} + static const struct eth_dev_ops sssnic_ethdev_ops = 
{ .dev_start = sssnic_ethdev_start, .dev_stop = sssnic_ethdev_stop, + .dev_close = sssnic_ethdev_close, + .dev_reset = sssnic_ethdev_reset, .dev_set_link_up = sssnic_ethdev_set_link_up, .dev_set_link_down = sssnic_ethdev_set_link_down, .link_update = sssnic_ethdev_link_update, From patchwork Fri Sep 1 09:35:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131068 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 18E464221E; Fri, 1 Sep 2023 11:39:01 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 54CC140E36; Fri, 1 Sep 2023 11:36:17 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.unionmem.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id 1BFE5402A3 for ; Fri, 1 Sep 2023 11:36:08 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZVg7069862; Fri, 1 Sep 2023 17:35:32 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:31 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 21/32] net/sssnic: add allmulticast and promiscuous ops Date: Fri, 1 Sep 2023 17:35:03 +0800 Message-ID: <20230901093514.224824-22-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZVg7069862 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- doc/guides/nics/features/sssnic.ini | 2 + drivers/net/sssnic/sssnic_ethdev.c | 72 +++++++++++++++++++++++++++++ 2 files changed, 74 insertions(+) diff --git a/doc/guides/nics/features/sssnic.ini b/doc/guides/nics/features/sssnic.ini index e3b8166629..359834ce4c 100644 --- a/doc/guides/nics/features/sssnic.ini +++ b/doc/guides/nics/features/sssnic.ini @@ -8,6 +8,8 @@ Link status = Y Link status event = Y Queue start/stop = Y Rx interrupt = Y +Promiscuous mode = Y +Allmulticast mode = Y Unicast MAC filter = Y Multicast MAC filter = Y Linux = Y diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index b59c4fd3ad..e1c805aeea 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -652,6 +652,74 @@ sssnic_ethdev_reset(struct rte_eth_dev *ethdev) return 0; } +static int +sssnic_ethdev_allmulticast_enable(struct rte_eth_dev *ethdev) +{ + struct sssnic_netdev *netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + uint32_t rx_mode; + int ret; + + rx_mode = netdev->rx_mode | SSSNIC_ETHDEV_RX_ALL_MCAST; + ret = sssnic_ethdev_rx_mode_set(ethdev, rx_mode); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to set 
rx_mode: %x", rx_mode); + return ret; + } + + return 0; +} + +static int +sssnic_ethdev_allmulticast_disable(struct rte_eth_dev *ethdev) +{ + struct sssnic_netdev *netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + uint32_t rx_mode; + int ret; + + rx_mode = netdev->rx_mode & (~SSSNIC_ETHDEV_RX_ALL_MCAST); + ret = sssnic_ethdev_rx_mode_set(ethdev, rx_mode); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to set rx_mode: %x", rx_mode); + return ret; + } + + return 0; +} + +static int +sssnic_ethdev_promiscuous_enable(struct rte_eth_dev *ethdev) +{ + struct sssnic_netdev *netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + uint32_t rx_mode; + int ret; + + rx_mode = netdev->rx_mode | SSSNIC_ETHDEV_RX_PROMISC; + ret = sssnic_ethdev_rx_mode_set(ethdev, rx_mode); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to set rx_mode: %x", rx_mode); + return ret; + } + + return 0; +} + +static int +sssnic_ethdev_promiscuous_disable(struct rte_eth_dev *ethdev) +{ + struct sssnic_netdev *netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + uint32_t rx_mode; + int ret; + + rx_mode = netdev->rx_mode & (~SSSNIC_ETHDEV_RX_PROMISC); + ret = sssnic_ethdev_rx_mode_set(ethdev, rx_mode); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to set rx_mode: %x", rx_mode); + return ret; + } + + return 0; +} + static const struct eth_dev_ops sssnic_ethdev_ops = { .dev_start = sssnic_ethdev_start, .dev_stop = sssnic_ethdev_stop, @@ -676,6 +744,10 @@ static const struct eth_dev_ops sssnic_ethdev_ops = { .tx_queue_stop = sssnic_ethdev_tx_queue_stop, .rx_queue_intr_enable = sssnic_ethdev_rx_queue_intr_enable, .rx_queue_intr_disable = sssnic_ethdev_rx_queue_intr_disable, + .allmulticast_enable = sssnic_ethdev_allmulticast_enable, + .allmulticast_disable = sssnic_ethdev_allmulticast_disable, + .promiscuous_enable = sssnic_ethdev_promiscuous_enable, + .promiscuous_disable = sssnic_ethdev_promiscuous_disable, }; static int From patchwork Fri Sep 1 09:35:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131067 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D47E14221E; Fri, 1 Sep 2023 11:38:49 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id BB1CD40DDE; Fri, 1 Sep 2023 11:36:15 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.ramaxel.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id 61F3B402A3 for ; Fri, 1 Sep 2023 11:36:07 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZWVr069866; Fri, 1 Sep 2023 17:35:32 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:31 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 22/32] net/sssnic: add basic and extended stats ops Date: Fri, 1 Sep 2023 17:35:04 +0800 Message-ID: <20230901093514.224824-23-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) 
To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZWVr069866 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- v2: * Removed error.h from including files. --- doc/guides/nics/features/sssnic.ini | 3 + drivers/net/sssnic/base/sssnic_api.c | 154 +++++++++ drivers/net/sssnic/base/sssnic_api.h | 116 +++++++ drivers/net/sssnic/base/sssnic_cmd.h | 12 + drivers/net/sssnic/meson.build | 1 + drivers/net/sssnic/sssnic_ethdev.c | 6 + drivers/net/sssnic/sssnic_ethdev_rx.c | 30 ++ drivers/net/sssnic/sssnic_ethdev_rx.h | 4 + drivers/net/sssnic/sssnic_ethdev_stats.c | 391 +++++++++++++++++++++++ drivers/net/sssnic/sssnic_ethdev_stats.h | 18 ++ drivers/net/sssnic/sssnic_ethdev_tx.c | 36 +++ drivers/net/sssnic/sssnic_ethdev_tx.h | 4 + 12 files changed, 775 insertions(+) create mode 100644 drivers/net/sssnic/sssnic_ethdev_stats.c create mode 100644 drivers/net/sssnic/sssnic_ethdev_stats.h diff --git a/doc/guides/nics/features/sssnic.ini b/doc/guides/nics/features/sssnic.ini index 359834ce4c..aba0b78c95 100644 --- a/doc/guides/nics/features/sssnic.ini +++ b/doc/guides/nics/features/sssnic.ini @@ -12,6 +12,9 @@ Promiscuous mode = Y Allmulticast mode = Y Unicast MAC filter = Y Multicast MAC filter = Y +Basic stats = Y +Extended stats = Y +Stats per queue = Y Linux = Y ARMv8 = Y x86-64 = Y diff --git a/drivers/net/sssnic/base/sssnic_api.c b/drivers/net/sssnic/base/sssnic_api.c index 81020387bd..9f063112f2 100644 --- a/drivers/net/sssnic/base/sssnic_api.c +++ b/drivers/net/sssnic/base/sssnic_api.c @@ -1005,3 +1005,157 @@ sssnic_port_resource_clean(struct sssnic_hw *hw) return 0; } + +int +sssnic_port_stats_get(struct sssnic_hw *hw, struct sssnic_port_stats *stats) +{ + int ret; + struct sssnic_port_stats_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len, resp_len; + struct { + struct sssnic_cmd_common common; + uint32_t size; + uint32_t resvd0; + struct sssnic_port_stats stats; + uint64_t rsvd1[6]; + } resp; + + memset(&cmd, 0, sizeof(cmd)); + cmd.function = SSSNIC_FUNC_IDX(hw); + cmd_len = sizeof(cmd); + + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_GET_PORT_STATS_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + memset(&resp, 0, sizeof(resp)); + resp_len = sizeof(resp); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&resp, &resp_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (resp_len == 0 || resp.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_GET_PORT_STATS_CMD, len=%u, status=%u", + resp_len, resp.common.status); + return -EIO; + } + + memcpy(stats, &resp.stats, sizeof(resp.stats)); + + return 0; +} + +int +sssnic_port_stats_clear(struct sssnic_hw *hw) +{ + int ret; + struct sssnic_port_stats_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.function = SSSNIC_FUNC_IDX(hw); + cmd_len = sizeof(cmd); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_CLEAR_PORT_STATS_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status 
!= 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_CLEAN_PORT_RES_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +int +sssnic_mac_stats_get(struct sssnic_hw *hw, struct sssnic_mac_stats *stats) +{ + int ret; + struct sssnic_msg msg; + uint32_t cmd_len, resp_len; + struct sssnic_mac_stats_cmd cmd; + struct { + struct sssnic_cmd_common common; + struct sssnic_mac_stats stats; + uint64_t resvd[15]; + } *resp; + + memset(&cmd, 0, sizeof(cmd)); + cmd.port = SSSNIC_PHY_PORT(hw); + cmd_len = sizeof(cmd); + + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_GET_NETIF_MAC_STATS_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_NETIF_MODULE, SSSNIC_MSG_TYPE_REQ); + + resp_len = sizeof(*resp); + resp = rte_zmalloc(NULL, resp_len, 0); + if (resp == NULL) { + PMD_DRV_LOG(ERR, + "Failed to alloc memory for mac stats response cmd"); + return -ENOMEM; + } + + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)resp, &resp_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + goto out; + } + + if (resp_len == 0 || resp->common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_GET_NETIF_MAC_STATS_CMD, len=%u, status=%u", + resp_len, resp->common.status); + ret = -EIO; + goto out; + } + + memcpy(stats, &resp->stats, sizeof(resp->stats)); + +out: + rte_free(resp); + return ret; +} + +int +sssnic_mac_stats_clear(struct sssnic_hw *hw) +{ + int ret; + struct sssnic_mac_stats_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.port = SSSNIC_PHY_PORT(hw); + cmd_len = sizeof(cmd); + + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_CLEAR_NETIF_MAC_STATS_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_NETIF_MODULE, SSSNIC_MSG_TYPE_REQ); + + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_CLEAR_NETIF_MAC_STATS_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} diff --git a/drivers/net/sssnic/base/sssnic_api.h b/drivers/net/sssnic/base/sssnic_api.h index 49336f70cf..c2f4f90209 100644 --- a/drivers/net/sssnic/base/sssnic_api.h +++ b/drivers/net/sssnic/base/sssnic_api.h @@ -267,6 +267,117 @@ struct sssnic_rxtxq_ctx { #define SSSNIC_RXTXQ_CTX_SIZE (sizeof(struct sssnic_rxtxq_ctx)) +struct sssnic_port_stats { + uint64_t tx_ucast_pkts; + uint64_t tx_ucast_bytes; + uint64_t tx_mcast_pkts; + uint64_t tx_mcast_bytes; + uint64_t tx_bcast_pkts; + uint64_t tx_bcast_bytes; + + uint64_t rx_ucast_pkts; + uint64_t rx_ucast_bytes; + uint64_t rx_mcast_pkts; + uint64_t rx_mcast_bytes; + uint64_t rx_bcast_pkts; + uint64_t rx_bcast_bytes; + + uint64_t tx_discards; + uint64_t rx_discards; + uint64_t tx_errors; + uint64_t rx_errors; +}; + +struct sssnic_mac_stats { + uint64_t tx_fragment_pkts; + uint64_t tx_undersize_pkts; + uint64_t tx_undermin_pkts; + uint64_t tx_64b_pkts; + uint64_t tx_65b_127b_pkt; + uint64_t tx_128b_255b_pkts; + uint64_t tx_256b_511b_pkts; + uint64_t tx_512b_1023b_pkts; + uint64_t tx_1024b_1518b_pkts; + uint64_t tx_1519b_2047b_pkts; + uint64_t tx_2048b_4095b_pkts; + uint64_t tx_4096b_8191b_pkts; + uint64_t tx_8192b_9216b_pkts; + uint64_t tx_9217b_12287b_pkts; + uint64_t tx_12288b_16383b_pkts; + uint64_t tx_1519b_bad_pkts; + uint64_t tx_1519b_good_pkts; + uint64_t tx_oversize_pkts; + uint64_t tx_jabber_pkts; + uint64_t tx_bad_pkts; + uint64_t 
tx_bad_bytes; + uint64_t tx_good_pkts; + uint64_t tx_good_bytes; + uint64_t tx_total_pkts; + uint64_t tx_total_bytes; + uint64_t tx_unicast_pkts; + uint64_t tx_multicast_bytes; + uint64_t tx_broadcast_pkts; + uint64_t tx_pause_pkts; + uint64_t tx_pfc_pkts; + uint64_t tx_pfc_pri0_pkts; + uint64_t tx_pfc_pri1_pkts; + uint64_t tx_pfc_pri2_pkts; + uint64_t tx_pfc_pri3_pkts; + uint64_t tx_pfc_pri4_pkts; + uint64_t tx_pfc_pri5_pkts; + uint64_t tx_pfc_pri6_pkts; + uint64_t tx_pfc_pri7_pkts; + uint64_t tx_control_pkts; + uint64_t tx_total_error_pkts; + uint64_t tx_debug_good_pkts; + uint64_t tx_debug_bad_pkts; + + uint64_t rx_fragment_pkts; + uint64_t rx_undersize_pkts; + uint64_t rx_undermin_pkts; + uint64_t rx_64b_pkts; + uint64_t rx_65b_127b_pkt; + uint64_t rx_128b_255b_pkts; + uint64_t rx_256b_511b_pkts; + uint64_t rx_512b_1023b_pkts; + uint64_t rx_1024b_1518b_pkts; + uint64_t rx_1519b_2047b_pkts; + uint64_t rx_2048b_4095b_pkts; + uint64_t rx_4096b_8191b_pkts; + uint64_t rx_8192b_9216b_pkts; + uint64_t rx_9217b_12287b_pkts; + uint64_t rx_12288b_16383b_pkts; + uint64_t rx_1519b_bad_pkts; + uint64_t rx_1519b_good_pkts; + uint64_t rx_oversize_pkts; + uint64_t rx_jabber_pkts; + uint64_t rx_bad_pkts; + uint64_t rx_bad_bytes; + uint64_t rx_good_pkts; + uint64_t rx_good_bytes; + uint64_t rx_total_pkts; + uint64_t rx_total_bytes; + uint64_t rx_unicast_pkts; + uint64_t rx_multicast_bytes; + uint64_t rx_broadcast_pkts; + uint64_t rx_pause_pkts; + uint64_t rx_pfc_pkts; + uint64_t rx_pfc_pri0_pkts; + uint64_t rx_pfc_pri1_pkts; + uint64_t rx_pfc_pri2_pkts; + uint64_t rx_pfc_pri3_pkts; + uint64_t rx_pfc_pri4_pkts; + uint64_t rx_pfc_pri5_pkts; + uint64_t rx_pfc_pri6_pkts; + uint64_t rx_pfc_pri7_pkts; + uint64_t rx_control_pkts; + uint64_t rx_symbol_error_pkts; + uint64_t rx_fcs_error_pkts; + uint64_t rx_debug_good_pkts; + uint64_t rx_debug_bad_pkts; + uint64_t rx_unfilter_pkts; +}; + int sssnic_msix_attr_get(struct sssnic_hw *hw, uint16_t msix_idx, struct sssnic_msix_attr *attr); int sssnic_msix_attr_set(struct sssnic_hw *hw, uint16_t msix_idx, @@ -304,5 +415,10 @@ int sssnic_lro_timer_set(struct sssnic_hw *hw, uint32_t timer); int sssnic_vlan_filter_enable_set(struct sssnic_hw *hw, bool state); int sssnic_vlan_strip_enable_set(struct sssnic_hw *hw, bool state); int sssnic_port_resource_clean(struct sssnic_hw *hw); +int sssnic_port_stats_get(struct sssnic_hw *hw, + struct sssnic_port_stats *stats); +int sssnic_port_stats_clear(struct sssnic_hw *hw); +int sssnic_mac_stats_get(struct sssnic_hw *hw, struct sssnic_mac_stats *stats); +int sssnic_mac_stats_clear(struct sssnic_hw *hw); #endif /* _SSSNIC_API_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_cmd.h b/drivers/net/sssnic/base/sssnic_cmd.h index e89719b0de..bc7303ff57 100644 --- a/drivers/net/sssnic/base/sssnic_cmd.h +++ b/drivers/net/sssnic/base/sssnic_cmd.h @@ -336,4 +336,16 @@ struct sssnic_port_resource_clean_cmd { uint16_t resvd; }; +struct sssnic_port_stats_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint16_t resvd; +}; + +struct sssnic_mac_stats_cmd { + struct sssnic_cmd_common common; + uint8_t port; + uint8_t resvd[3]; +}; + #endif /* _SSSNIC_CMD_H_ */ diff --git a/drivers/net/sssnic/meson.build b/drivers/net/sssnic/meson.build index 0c6e21310d..dea24f4b06 100644 --- a/drivers/net/sssnic/meson.build +++ b/drivers/net/sssnic/meson.build @@ -21,4 +21,5 @@ sources = files( 'sssnic_ethdev_link.c', 'sssnic_ethdev_rx.c', 'sssnic_ethdev_tx.c', + 'sssnic_ethdev_stats.c', ) diff --git a/drivers/net/sssnic/sssnic_ethdev.c 
b/drivers/net/sssnic/sssnic_ethdev.c index e1c805aeea..99e6d6152a 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -12,6 +12,7 @@ #include "sssnic_ethdev_link.h" #include "sssnic_ethdev_rx.h" #include "sssnic_ethdev_tx.h" +#include "sssnic_ethdev_stats.h" static int sssnic_ethdev_init(struct rte_eth_dev *ethdev); @@ -748,6 +749,11 @@ static const struct eth_dev_ops sssnic_ethdev_ops = { .allmulticast_disable = sssnic_ethdev_allmulticast_disable, .promiscuous_enable = sssnic_ethdev_promiscuous_enable, .promiscuous_disable = sssnic_ethdev_promiscuous_disable, + .stats_get = sssnic_ethdev_stats_get, + .stats_reset = sssnic_ethdev_stats_reset, + .xstats_get_names = sssnic_ethdev_xstats_get_names, + .xstats_get = sssnic_ethdev_xstats_get, + .xstats_reset = sssnic_ethdev_xstats_reset, }; static int diff --git a/drivers/net/sssnic/sssnic_ethdev_rx.c b/drivers/net/sssnic/sssnic_ethdev_rx.c index fd4975dfd5..66045f7a98 100644 --- a/drivers/net/sssnic/sssnic_ethdev_rx.c +++ b/drivers/net/sssnic/sssnic_ethdev_rx.c @@ -1153,3 +1153,33 @@ sssnic_ethdev_rx_offload_setup(struct rte_eth_dev *ethdev) return 0; } + +int +sssnic_ethdev_rx_queue_stats_get(struct rte_eth_dev *ethdev, uint16_t qid, + struct sssnic_ethdev_rxq_stats *stats) +{ + struct sssnic_ethdev_rxq *rxq; + + if (qid >= ethdev->data->nb_rx_queues) { + PMD_DRV_LOG(ERR, + "Invalid qid, qid must less than nb_rx_queues(%u)", + ethdev->data->nb_rx_queues); + return -EINVAL; + } + + rxq = ethdev->data->rx_queues[qid]; + memcpy(stats, &rxq->stats, sizeof(rxq->stats)); + + return 0; +} + +void +sssnic_ethdev_rx_queue_stats_clear(struct rte_eth_dev *ethdev, uint16_t qid) +{ + struct sssnic_ethdev_rxq *rxq; + + if (qid < ethdev->data->nb_rx_queues) { + rxq = ethdev->data->rx_queues[qid]; + memset(&rxq->stats, 0, sizeof(rxq->stats)); + } +}; diff --git a/drivers/net/sssnic/sssnic_ethdev_rx.h b/drivers/net/sssnic/sssnic_ethdev_rx.h index f4b4545944..5532aced4e 100644 --- a/drivers/net/sssnic/sssnic_ethdev_rx.h +++ b/drivers/net/sssnic/sssnic_ethdev_rx.h @@ -38,5 +38,9 @@ uint16_t sssnic_ethdev_rx_queue_depth_get(struct rte_eth_dev *ethdev, uint32_t sssnic_ethdev_rx_buf_size_index_get(uint16_t rx_buf_size); int sssnic_ethdev_rx_mode_set(struct rte_eth_dev *ethdev, uint32_t mode); int sssnic_ethdev_rx_offload_setup(struct rte_eth_dev *ethdev); +int sssnic_ethdev_rx_queue_stats_get(struct rte_eth_dev *ethdev, uint16_t qid, + struct sssnic_ethdev_rxq_stats *stats); +void sssnic_ethdev_rx_queue_stats_clear(struct rte_eth_dev *ethdev, + uint16_t qid); #endif diff --git a/drivers/net/sssnic/sssnic_ethdev_stats.c b/drivers/net/sssnic/sssnic_ethdev_stats.c new file mode 100644 index 0000000000..dd91aef5f7 --- /dev/null +++ b/drivers/net/sssnic/sssnic_ethdev_stats.c @@ -0,0 +1,391 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. 
+ */ + +#include +#include + +#include "sssnic_log.h" +#include "sssnic_ethdev.h" +#include "sssnic_ethdev_rx.h" +#include "sssnic_ethdev_tx.h" +#include "sssnic_ethdev_stats.h" +#include "base/sssnic_hw.h" +#include "base/sssnic_api.h" + +struct sssnic_ethdev_xstats_name_off { + char name[RTE_ETH_XSTATS_NAME_SIZE]; + uint32_t offset; +}; + +#define SSSNIC_ETHDEV_XSTATS_STR_OFF(stats_type, field) \ + { #field, offsetof(struct stats_type, field) } + +#define SSSNIC_ETHDEV_XSTATS_VALUE(data, idx, name_off) \ + (*(uint64_t *)(((uint8_t *)(data)) + (name_off)[idx].offset)) + +#define SSSNIC_ETHDEV_RXQ_STATS_STR_OFF(field) \ + SSSNIC_ETHDEV_XSTATS_STR_OFF(sssnic_ethdev_rxq_stats, field) + +#define SSSNIC_ETHDEV_TXQ_STATS_STR_OFF(field) \ + SSSNIC_ETHDEV_XSTATS_STR_OFF(sssnic_ethdev_txq_stats, field) + +#define SSSNIC_ETHDEV_PORT_STATS_STR_OFF(field) \ + SSSNIC_ETHDEV_XSTATS_STR_OFF(sssnic_port_stats, field) + +#define SSSNIC_ETHDEV_MAC_STATS_STR_OFF(field) \ + SSSNIC_ETHDEV_XSTATS_STR_OFF(sssnic_mac_stats, field) + +static const struct sssnic_ethdev_xstats_name_off rxq_stats_strings[] = { + SSSNIC_ETHDEV_RXQ_STATS_STR_OFF(packets), + SSSNIC_ETHDEV_RXQ_STATS_STR_OFF(bytes), + SSSNIC_ETHDEV_RXQ_STATS_STR_OFF(csum_errors), + SSSNIC_ETHDEV_RXQ_STATS_STR_OFF(other_errors), + SSSNIC_ETHDEV_RXQ_STATS_STR_OFF(nombuf), + SSSNIC_ETHDEV_RXQ_STATS_STR_OFF(burst), +}; +#define SSSNIC_ETHDEV_NB_RXQ_XSTATS RTE_DIM(rxq_stats_strings) + +static const struct sssnic_ethdev_xstats_name_off txq_stats_strings[] = { + SSSNIC_ETHDEV_TXQ_STATS_STR_OFF(packets), + SSSNIC_ETHDEV_TXQ_STATS_STR_OFF(bytes), + SSSNIC_ETHDEV_TXQ_STATS_STR_OFF(nobuf), + SSSNIC_ETHDEV_TXQ_STATS_STR_OFF(zero_len_segs), + SSSNIC_ETHDEV_TXQ_STATS_STR_OFF(too_large_pkts), + SSSNIC_ETHDEV_TXQ_STATS_STR_OFF(too_many_segs), + SSSNIC_ETHDEV_TXQ_STATS_STR_OFF(null_segs), + SSSNIC_ETHDEV_TXQ_STATS_STR_OFF(offload_errors), + SSSNIC_ETHDEV_TXQ_STATS_STR_OFF(burst), +}; +#define SSSNIC_ETHDEV_NB_TXQ_XSTATS RTE_DIM(txq_stats_strings) + +static const struct sssnic_ethdev_xstats_name_off port_stats_strings[] = { + SSSNIC_ETHDEV_PORT_STATS_STR_OFF(rx_ucast_pkts), + SSSNIC_ETHDEV_PORT_STATS_STR_OFF(rx_ucast_bytes), + SSSNIC_ETHDEV_PORT_STATS_STR_OFF(rx_mcast_pkts), + SSSNIC_ETHDEV_PORT_STATS_STR_OFF(rx_mcast_bytes), + SSSNIC_ETHDEV_PORT_STATS_STR_OFF(rx_bcast_pkts), + SSSNIC_ETHDEV_PORT_STATS_STR_OFF(rx_bcast_bytes), + SSSNIC_ETHDEV_PORT_STATS_STR_OFF(rx_discards), + SSSNIC_ETHDEV_PORT_STATS_STR_OFF(rx_errors), + + SSSNIC_ETHDEV_PORT_STATS_STR_OFF(tx_ucast_pkts), + SSSNIC_ETHDEV_PORT_STATS_STR_OFF(tx_ucast_bytes), + SSSNIC_ETHDEV_PORT_STATS_STR_OFF(tx_mcast_pkts), + SSSNIC_ETHDEV_PORT_STATS_STR_OFF(tx_mcast_bytes), + SSSNIC_ETHDEV_PORT_STATS_STR_OFF(tx_bcast_pkts), + SSSNIC_ETHDEV_PORT_STATS_STR_OFF(tx_bcast_bytes), + SSSNIC_ETHDEV_PORT_STATS_STR_OFF(tx_discards), + SSSNIC_ETHDEV_PORT_STATS_STR_OFF(tx_errors), +}; +#define SSSNIC_ETHDEV_NB_PORT_XSTATS RTE_DIM(port_stats_strings) + +static const struct sssnic_ethdev_xstats_name_off mac_stats_strings[] = { + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_fragment_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_undersize_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_undermin_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_64b_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_65b_127b_pkt), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_128b_255b_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_256b_511b_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_512b_1023b_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_1024b_1518b_pkts), + 
SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_1519b_2047b_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_2048b_4095b_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_4096b_8191b_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_8192b_9216b_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_9217b_12287b_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_12288b_16383b_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_1519b_bad_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_1519b_good_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_oversize_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_jabber_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_bad_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_bad_bytes), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_good_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_good_bytes), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_total_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_total_bytes), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_unicast_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_multicast_bytes), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_broadcast_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_pause_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_pfc_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_pfc_pri0_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_pfc_pri1_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_pfc_pri2_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_pfc_pri3_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_pfc_pri4_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_pfc_pri5_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_pfc_pri6_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_pfc_pri7_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_control_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_symbol_error_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_fcs_error_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(rx_unfilter_pkts), + + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_fragment_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_undersize_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_undermin_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_64b_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_65b_127b_pkt), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_128b_255b_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_256b_511b_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_512b_1023b_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_1024b_1518b_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_1519b_2047b_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_2048b_4095b_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_4096b_8191b_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_8192b_9216b_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_9217b_12287b_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_12288b_16383b_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_1519b_bad_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_1519b_good_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_oversize_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_jabber_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_bad_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_bad_bytes), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_good_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_good_bytes), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_total_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_total_bytes), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_unicast_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_multicast_bytes), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_broadcast_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_pause_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_pfc_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_pfc_pri0_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_pfc_pri1_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_pfc_pri2_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_pfc_pri3_pkts), + 
SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_pfc_pri4_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_pfc_pri5_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_pfc_pri6_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_pfc_pri7_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_control_pkts), + SSSNIC_ETHDEV_MAC_STATS_STR_OFF(tx_debug_bad_pkts), +}; +#define SSSNIC_ETHDEV_NB_MAC_XSTATS RTE_DIM(mac_stats_strings) + +int +sssnic_ethdev_stats_get(struct rte_eth_dev *ethdev, struct rte_eth_stats *stats) +{ + struct sssnic_port_stats port_stats; + struct sssnic_ethdev_rxq_stats rxq_stats; + struct sssnic_ethdev_txq_stats txq_stats; + int ret; + uint16_t numq, qid; + + ret = sssnic_port_stats_get(SSSNIC_ETHDEV_TO_HW(ethdev), &port_stats); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to get port stats"); + return ret; + } + + stats->ipackets = port_stats.rx_ucast_pkts + port_stats.rx_mcast_pkts + + port_stats.rx_bcast_pkts; + stats->ibytes = port_stats.rx_ucast_bytes + port_stats.rx_mcast_bytes + + port_stats.rx_bcast_bytes; + stats->opackets = port_stats.tx_ucast_pkts + port_stats.tx_mcast_pkts + + port_stats.tx_bcast_pkts; + stats->obytes = port_stats.tx_ucast_bytes + port_stats.tx_mcast_bytes + + port_stats.tx_bcast_bytes; + + stats->imissed = port_stats.rx_discards; + stats->oerrors = port_stats.tx_discards; + + ethdev->data->rx_mbuf_alloc_failed = 0; + + numq = RTE_MIN(ethdev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS); + for (qid = 0; qid < numq; qid++) { + sssnic_ethdev_rx_queue_stats_get(ethdev, qid, &rxq_stats); + stats->q_ipackets[qid] = rxq_stats.packets; + stats->q_ibytes[qid] = rxq_stats.bytes; + stats->ierrors += + rxq_stats.csum_errors + rxq_stats.other_errors; + ethdev->data->rx_mbuf_alloc_failed += rxq_stats.nombuf; + } + + numq = RTE_MIN(ethdev->data->nb_tx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS); + for (qid = 0; qid < numq; qid++) { + sssnic_ethdev_tx_queue_stats_get(ethdev, qid, &txq_stats); + stats->q_opackets[qid] = txq_stats.packets; + stats->q_obytes[qid] = txq_stats.bytes; + stats->oerrors += txq_stats.nobuf + txq_stats.too_large_pkts + + txq_stats.zero_len_segs + + txq_stats.offload_errors + + txq_stats.null_segs + txq_stats.too_many_segs; + } + + return 0; +} + +int +sssnic_ethdev_stats_reset(struct rte_eth_dev *ethdev) +{ + int ret; + uint16_t numq, qid; + + ret = sssnic_port_stats_clear(SSSNIC_ETHDEV_TO_HW(ethdev)); + if (ret) + PMD_DRV_LOG(ERR, "Failed to clear port stats"); + + numq = RTE_MIN(ethdev->data->nb_rx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS); + for (qid = 0; qid < numq; qid++) + sssnic_ethdev_rx_queue_stats_clear(ethdev, qid); + + numq = RTE_MIN(ethdev->data->nb_tx_queues, RTE_ETHDEV_QUEUE_STAT_CNTRS); + for (qid = 0; qid < numq; qid++) + sssnic_ethdev_tx_queue_stats_clear(ethdev, qid); + + return 0; +} + +static uint32_t +sssnic_ethdev_xstats_num_calc(struct rte_eth_dev *ethdev) +{ + return SSSNIC_ETHDEV_NB_PORT_XSTATS + SSSNIC_ETHDEV_NB_MAC_XSTATS + + (SSSNIC_ETHDEV_NB_TXQ_XSTATS * ethdev->data->nb_tx_queues) + + (SSSNIC_ETHDEV_NB_RXQ_XSTATS * ethdev->data->nb_rx_queues); +} + +int +sssnic_ethdev_xstats_get_names(struct rte_eth_dev *ethdev, + struct rte_eth_xstat_name *xstats_names, + __rte_unused unsigned int limit) +{ + uint16_t i, qid, count = 0; + + if (xstats_names == NULL) + return sssnic_ethdev_xstats_num_calc(ethdev); + + for (qid = 0; qid < ethdev->data->nb_rx_queues; qid++) { + for (i = 0; i < SSSNIC_ETHDEV_NB_RXQ_XSTATS; i++) { + snprintf(xstats_names[count].name, + RTE_ETH_XSTATS_NAME_SIZE, "rx_q%u_%s", qid, + rxq_stats_strings[i].name); + count++; + } + } + + for (qid = 0; 
qid < ethdev->data->nb_tx_queues; qid++) { + for (i = 0; i < SSSNIC_ETHDEV_NB_TXQ_XSTATS; i++) { + snprintf(xstats_names[count].name, + RTE_ETH_XSTATS_NAME_SIZE, "tx_q%u_%s", qid, + txq_stats_strings[i].name); + count++; + } + } + + for (i = 0; i < SSSNIC_ETHDEV_NB_PORT_XSTATS; i++) { + snprintf(xstats_names[count].name, RTE_ETH_XSTATS_NAME_SIZE, + "port_%s", port_stats_strings[i].name); + count++; + } + + for (i = 0; i < SSSNIC_ETHDEV_NB_MAC_XSTATS; i++) { + snprintf(xstats_names[count].name, RTE_ETH_XSTATS_NAME_SIZE, + "mac_%s", mac_stats_strings[i].name); + count++; + } + + return count; +} + +int +sssnic_ethdev_xstats_get(struct rte_eth_dev *ethdev, + struct rte_eth_xstat *xstats, unsigned int n) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + int ret; + uint16_t i, qid, count = 0; + struct { + struct sssnic_ethdev_rxq_stats rxq; + struct sssnic_ethdev_txq_stats txq; + struct sssnic_port_stats port; + struct sssnic_mac_stats mac; + } *stats; + + if (n < sssnic_ethdev_xstats_num_calc(ethdev)) + return count; + + stats = rte_zmalloc(NULL, sizeof(*stats), 0); + if (stats == NULL) { + PMD_DRV_LOG(ERR, "Failed to alloc memory for xstats"); + return -ENOMEM; + } + + for (qid = 0; qid < ethdev->data->nb_rx_queues; qid++) { + sssnic_ethdev_rx_queue_stats_get(ethdev, qid, &stats->rxq); + for (i = 0; i < SSSNIC_ETHDEV_NB_RXQ_XSTATS; i++) { + xstats[count].value = + SSSNIC_ETHDEV_XSTATS_VALUE(&stats->rxq, i, + rxq_stats_strings); + count++; + } + } + + for (qid = 0; qid < ethdev->data->nb_tx_queues; qid++) { + sssnic_ethdev_tx_queue_stats_get(ethdev, qid, &stats->txq); + for (i = 0; i < SSSNIC_ETHDEV_NB_TXQ_XSTATS; i++) { + xstats[count].value = + SSSNIC_ETHDEV_XSTATS_VALUE(&stats->txq, + i, txq_stats_strings); + count++; + } + } + + ret = sssnic_port_stats_get(hw, &stats->port); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to get port %u stats", + ethdev->data->port_id); + goto out; + } + + for (i = 0; i < SSSNIC_ETHDEV_NB_PORT_XSTATS; i++) { + xstats[count].value = SSSNIC_ETHDEV_XSTATS_VALUE(&stats->port, + i, port_stats_strings); + count++; + } + + ret = sssnic_mac_stats_get(hw, &stats->mac); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to get port %u mac stats", + ethdev->data->port_id); + goto out; + } + + for (i = 0; i < SSSNIC_ETHDEV_NB_MAC_XSTATS; i++) { + xstats[count].value = SSSNIC_ETHDEV_XSTATS_VALUE(&stats->mac, i, + mac_stats_strings); + count++; + } + + ret = count; + +out: + rte_free(stats); + return ret; +} + +int +sssnic_ethdev_xstats_reset(struct rte_eth_dev *ethdev) +{ + int ret; + + ret = sssnic_ethdev_stats_reset(ethdev); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to clear port %u basic stats", + ethdev->data->port_id); + return ret; + } + + ret = sssnic_mac_stats_clear(SSSNIC_ETHDEV_TO_HW(ethdev)); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to clear port %u MAC stats", + ethdev->data->port_id); + return ret; + } + + return 0; +} diff --git a/drivers/net/sssnic/sssnic_ethdev_stats.h b/drivers/net/sssnic/sssnic_ethdev_stats.h new file mode 100644 index 0000000000..2fdc419e60 --- /dev/null +++ b/drivers/net/sssnic/sssnic_ethdev_stats.h @@ -0,0 +1,18 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. 
+ */ + +#ifndef _SSSNIC_ETHDEV_STATS_H_ +#define _SSSNIC_ETHDEV_STATS_H_ + +int sssnic_ethdev_stats_get(struct rte_eth_dev *ethdev, + struct rte_eth_stats *stats); +int sssnic_ethdev_stats_reset(struct rte_eth_dev *ethdev); +int sssnic_ethdev_xstats_get_names(struct rte_eth_dev *ethdev, + struct rte_eth_xstat_name *xstats_names, + __rte_unused unsigned int limit); +int sssnic_ethdev_xstats_get(struct rte_eth_dev *ethdev, + struct rte_eth_xstat *xstats, unsigned int n); +int sssnic_ethdev_xstats_reset(struct rte_eth_dev *ethdev); + +#endif /* _SSSNIC_ETHDEV_STATS_H_ */ diff --git a/drivers/net/sssnic/sssnic_ethdev_tx.c b/drivers/net/sssnic/sssnic_ethdev_tx.c index b9c4f97cb3..d167e3f307 100644 --- a/drivers/net/sssnic/sssnic_ethdev_tx.c +++ b/drivers/net/sssnic/sssnic_ethdev_tx.c @@ -670,3 +670,39 @@ sssnic_ethdev_tx_max_size_set(struct rte_eth_dev *ethdev, uint16_t size) return 0; }; + +int +sssnic_ethdev_tx_queue_stats_get(struct rte_eth_dev *ethdev, uint16_t qid, + struct sssnic_ethdev_txq_stats *stats) +{ + struct sssnic_ethdev_txq *txq; + + if (qid >= ethdev->data->nb_tx_queues) { + PMD_DRV_LOG(ERR, + "Invalid qid, qid must less than nb_tx_queues(%u)", + ethdev->data->nb_tx_queues); + return -EINVAL; + } + + txq = ethdev->data->tx_queues[qid]; + memcpy(stats, &txq->stats, sizeof(txq->stats)); + + return 0; +} + +void +sssnic_ethdev_tx_queue_stats_clear(struct rte_eth_dev *ethdev, uint16_t qid) +{ + struct sssnic_ethdev_txq *txq; + uint64_t *stat; + int i, len; + + len = sizeof(struct sssnic_ethdev_txq_stats) / sizeof(uint64_t); + + if (qid < ethdev->data->nb_tx_queues) { + txq = ethdev->data->tx_queues[qid]; + stat = (uint64_t *)&txq->stats; + for (i = 0; i < len; i++) + *(stat++) = 0; + } +} diff --git a/drivers/net/sssnic/sssnic_ethdev_tx.h b/drivers/net/sssnic/sssnic_ethdev_tx.h index 88ad82a055..f04c3d5be8 100644 --- a/drivers/net/sssnic/sssnic_ethdev_tx.h +++ b/drivers/net/sssnic/sssnic_ethdev_tx.h @@ -33,5 +33,9 @@ uint16_t sssnic_ethdev_tx_queue_depth_get(struct rte_eth_dev *ethdev, uint16_t qid); int sssnic_ethdev_tx_ci_attr_init(struct rte_eth_dev *ethdev); int sssnic_ethdev_tx_max_size_set(struct rte_eth_dev *ethdev, uint16_t size); +int sssnic_ethdev_tx_queue_stats_get(struct rte_eth_dev *ethdev, uint16_t qid, + struct sssnic_ethdev_txq_stats *stats); +void sssnic_ethdev_tx_queue_stats_clear(struct rte_eth_dev *ethdev, + uint16_t qid); #endif /* _SSSNIC_ETHDEV_TX_H_ */ From patchwork Fri Sep 1 09:35:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131066 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5B1F94221E; Fri, 1 Sep 2023 11:38:42 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 89C2140DCD; Fri, 1 Sep 2023 11:36:14 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.ramaxel.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id D3E0C40647 for ; Fri, 1 Sep 2023 11:36:07 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZWVs069866; Fri, 1 Sep 2023 17:35:32 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, 
cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:32 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 23/32] net/sssnic: support Rx packet burst Date: Fri, 1 Sep 2023 17:35:05 +0800 Message-ID: <20230901093514.224824-24-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZWVs069866 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- v2: * Fixed wrong format of printing uint64_t. --- doc/guides/nics/features/sssnic.ini | 2 + drivers/net/sssnic/sssnic_ethdev.c | 2 + drivers/net/sssnic/sssnic_ethdev_rx.c | 167 ++++++++++++++++++++++++++ drivers/net/sssnic/sssnic_ethdev_rx.h | 2 + 4 files changed, 173 insertions(+) diff --git a/doc/guides/nics/features/sssnic.ini b/doc/guides/nics/features/sssnic.ini index aba0b78c95..320ac4533d 100644 --- a/doc/guides/nics/features/sssnic.ini +++ b/doc/guides/nics/features/sssnic.ini @@ -8,6 +8,8 @@ Link status = Y Link status event = Y Queue start/stop = Y Rx interrupt = Y +Scattered Rx = Y +LRO = Y Promiscuous mode = Y Allmulticast mode = Y Unicast MAC filter = Y diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index 99e6d6152a..021fabcbe5 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -769,6 +769,8 @@ sssnic_ethdev_init(struct rte_eth_dev *ethdev) if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; + ethdev->rx_pkt_burst = sssnic_ethdev_rx_pkt_burst; + netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); pci_dev = RTE_ETH_DEV_TO_PCI(ethdev); hw = rte_zmalloc("sssnic_hw", sizeof(struct sssnic_hw), 0); diff --git a/drivers/net/sssnic/sssnic_ethdev_rx.c b/drivers/net/sssnic/sssnic_ethdev_rx.c index 66045f7a98..82e65f2482 100644 --- a/drivers/net/sssnic/sssnic_ethdev_rx.c +++ b/drivers/net/sssnic/sssnic_ethdev_rx.c @@ -1183,3 +1183,170 @@ sssnic_ethdev_rx_queue_stats_clear(struct rte_eth_dev *ethdev, uint16_t qid) memset(&rxq->stats, 0, sizeof(rxq->stats)); } }; + +static inline void +sssnic_ethdev_rx_csum_offload(struct sssnic_ethdev_rxq *rxq, + struct rte_mbuf *rxm, volatile struct sssnic_ethdev_rx_desc *rxd) +{ + /* no errors */ + if (likely(rxd->status_err == 0)) { + rxm->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD | + RTE_MBUF_F_RX_L4_CKSUM_GOOD; + return; + } + + /* bypass hw crc error*/ + if (unlikely(rxd->hw_crc_err)) { + rxm->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN; + return; + } + + if (rxd->ip_csum_err) { + rxm->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD; + rxq->stats.csum_errors++; + } + + if (rxd->tcp_csum_err || rxd->udp_csum_err || rxd->sctp_crc_err) { + rxm->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD; + rxq->stats.csum_errors++; + } + + if (unlikely(rxd->other_err)) + rxq->stats.other_errors++; +} + +static inline void +sssnic_ethdev_rx_vlan_offload(struct rte_mbuf *rxm, + volatile struct sssnic_ethdev_rx_desc *rxd) +{ + if (rxd->vlan_en == 0 || rxd->vlan == 0) { + rxm->vlan_tci = 0; + return; + } + + rxm->vlan_tci = rxd->vlan; + rxm->ol_flags |= 
RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED; +} + +static inline void +sssnic_ethdev_rx_segments(struct sssnic_ethdev_rxq *rxq, struct rte_mbuf *head, + uint32_t remain_size) +{ + struct sssnic_ethdev_rx_entry *rxe; + struct rte_mbuf *curr, *prev = head; + uint16_t rx_buf_size = rxq->rx_buf_size; + uint16_t ci; + uint32_t rx_size; + + while (remain_size > 0) { + ci = sssnic_ethdev_rxq_ci_get(rxq); + rxe = &rxq->rxe[ci]; + curr = rxe->pktmbuf; + + sssnic_ethdev_rxq_consume(rxq, 1); + + rx_size = RTE_MIN(remain_size, rx_buf_size); + remain_size -= rx_size; + + curr->data_len = rx_size; + curr->next = NULL; + prev->next = curr; + prev = curr; + head->nb_segs++; + } +} + +uint16_t +sssnic_ethdev_rx_pkt_burst(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) +{ + struct sssnic_ethdev_rxq *rxq = (struct sssnic_ethdev_rxq *)rx_queue; + struct sssnic_ethdev_rx_entry *rxe; + struct rte_mbuf *rxm; + struct sssnic_ethdev_rx_desc *rxd, rx_desc; + uint16_t ci, idle_entries; + uint16_t rx_buf_size; + uint32_t rx_size; + uint64_t nb_rx = 0; + uint64_t rx_bytes = 0; + + ci = sssnic_ethdev_rxq_ci_get(rxq); + rx_buf_size = rxq->rx_buf_size; + rxd = &rx_desc; + + while (nb_rx < nb_pkts) { + rxd->dword0 = __atomic_load_n(&rxq->desc[ci].dword0, + __ATOMIC_ACQUIRE); + /* check rx done */ + if (!rxd->done) + break; + + rxd->dword1 = rxq->desc[ci].dword1; + rxd->dword2 = rxq->desc[ci].dword2; + rxd->dword3 = rxq->desc[ci].dword3; + + /* reset rx desc status */ + rxq->desc[ci].dword0 = 0; + + /* get current pktmbuf */ + rxe = &rxq->rxe[ci]; + rxm = rxe->pktmbuf; + + /* prefetch next packet */ + sssnic_ethdev_rxq_consume(rxq, 1); + ci = sssnic_ethdev_rxq_ci_get(rxq); + rte_prefetch0(rxq->rxe[ci].pktmbuf); + + /* set pktmbuf len */ + rx_size = rxd->len; + rxm->pkt_len = rx_size; + if (likely(rx_size <= rx_buf_size)) { + rxm->data_len = rx_size; + } else { + rxm->data_len = rx_buf_size; + sssnic_ethdev_rx_segments(rxq, rxm, + rx_size - rx_buf_size); + } + rxm->data_off = RTE_PKTMBUF_HEADROOM; + rxm->port = rxq->port; + + /* process checksum offload*/ + sssnic_ethdev_rx_csum_offload(rxq, rxm, rxd); + + /* process vlan offload */ + sssnic_ethdev_rx_vlan_offload(rxm, rxd); + + /* process lro */ + if (unlikely(rxd->lro_num != 0)) { + rxm->ol_flags |= RTE_MBUF_F_RX_LRO; + rxm->tso_segsz = rx_size / rxd->lro_num; + } + + /* process RSS offload */ + if (likely(rxd->rss_type != 0)) { + rxm->hash.rss = rxd->rss_hash; + rxm->ol_flags |= RTE_MBUF_F_RX_RSS_HASH; + } + + rx_pkts[nb_rx++] = rxm; + rx_bytes += rx_size; + + SSSNIC_RX_LOG(DEBUG, + "Received one packet on port %u, len=%u, nb_seg=%u, tso_segsz=%u, ol_flags=%" + PRIx64, rxq->port, rxm->pkt_len, rxm->nb_segs, rxm->tso_segsz, + rxm->ol_flags); + } + + if (nb_rx > 0) { + rxq->stats.packets += nb_rx; + rxq->stats.bytes += rx_bytes; + rxq->stats.burst = nb_rx; + + /* refill packet mbuf */ + idle_entries = sssnic_ethdev_rxq_num_idle_entries(rxq) - 1; + if (idle_entries >= rxq->rx_free_thresh) + sssnic_ethdev_rxq_pktmbufs_fill(rxq); + } + + return nb_rx; +} diff --git a/drivers/net/sssnic/sssnic_ethdev_rx.h b/drivers/net/sssnic/sssnic_ethdev_rx.h index 5532aced4e..b0b35dee73 100644 --- a/drivers/net/sssnic/sssnic_ethdev_rx.h +++ b/drivers/net/sssnic/sssnic_ethdev_rx.h @@ -42,5 +42,7 @@ int sssnic_ethdev_rx_queue_stats_get(struct rte_eth_dev *ethdev, uint16_t qid, struct sssnic_ethdev_rxq_stats *stats); void sssnic_ethdev_rx_queue_stats_clear(struct rte_eth_dev *ethdev, uint16_t qid); +uint16_t sssnic_ethdev_rx_pkt_burst(void *rx_queue, struct rte_mbuf 
**rx_pkts, + uint16_t nb_pkts); #endif From patchwork Fri Sep 1 09:35:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131069 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1D7D44221E; Fri, 1 Sep 2023 11:39:07 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 989E240E54; Fri, 1 Sep 2023 11:36:18 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.ramaxel.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id 3F2DF40689 for ; Fri, 1 Sep 2023 11:36:10 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZXv6069868; Fri, 1 Sep 2023 17:35:33 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:32 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 24/32] net/sssnic: support Tx packet burst Date: Fri, 1 Sep 2023 17:35:06 +0800 Message-ID: <20230901093514.224824-25-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZXv6069868 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- v2: * Fixed wrong format of printing uint64_t. 
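For context, the Rx patch above and this Tx patch install the PMD burst callbacks that applications reach through the generic ethdev API. A minimal sketch of that application side is shown below; it is not part of the driver, and the port/queue ids and the BURST_SIZE value are illustrative assumptions only.

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  #define BURST_SIZE 32 /* illustrative burst size */

  /* Poll one Rx queue and echo the packets back out of the same queue.
   * rte_eth_rx_burst()/rte_eth_tx_burst() dispatch to the PMD's
   * rx_pkt_burst/tx_pkt_burst callbacks (here, sssnic_ethdev_rx_pkt_burst()
   * and sssnic_ethdev_tx_pkt_burst()). The port is assumed to be already
   * configured and started.
   */
  static void
  forward_one_burst(uint16_t port_id, uint16_t queue_id)
  {
          struct rte_mbuf *pkts[BURST_SIZE];
          uint16_t nb_rx, nb_tx, i;

          nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, BURST_SIZE);
          nb_tx = rte_eth_tx_burst(port_id, queue_id, pkts, nb_rx);

          /* free any mbufs the Tx queue could not accept */
          for (i = nb_tx; i < nb_rx; i++)
                  rte_pktmbuf_free(pkts[i]);
  }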
--- doc/guides/nics/features/sssnic.ini | 5 + drivers/net/sssnic/sssnic_ethdev.c | 1 + drivers/net/sssnic/sssnic_ethdev_tx.c | 404 ++++++++++++++++++++++++++ drivers/net/sssnic/sssnic_ethdev_tx.h | 2 + 4 files changed, 412 insertions(+) diff --git a/doc/guides/nics/features/sssnic.ini b/doc/guides/nics/features/sssnic.ini index 320ac4533d..7e6b70684a 100644 --- a/doc/guides/nics/features/sssnic.ini +++ b/doc/guides/nics/features/sssnic.ini @@ -9,11 +9,16 @@ Link status event = Y Queue start/stop = Y Rx interrupt = Y Scattered Rx = Y +TSO = Y LRO = Y Promiscuous mode = Y Allmulticast mode = Y Unicast MAC filter = Y Multicast MAC filter = Y +L3 checksum offload = Y +L4 checksum offload = Y +Inner L3 checksum = Y +Inner L4 checksum = Y Basic stats = Y Extended stats = Y Stats per queue = Y diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index 021fabcbe5..328fb85d30 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -770,6 +770,7 @@ sssnic_ethdev_init(struct rte_eth_dev *ethdev) return 0; ethdev->rx_pkt_burst = sssnic_ethdev_rx_pkt_burst; + ethdev->tx_pkt_burst = sssnic_ethdev_tx_pkt_burst; netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); pci_dev = RTE_ETH_DEV_TO_PCI(ethdev); diff --git a/drivers/net/sssnic/sssnic_ethdev_tx.c b/drivers/net/sssnic/sssnic_ethdev_tx.c index d167e3f307..533befb6ea 100644 --- a/drivers/net/sssnic/sssnic_ethdev_tx.c +++ b/drivers/net/sssnic/sssnic_ethdev_tx.c @@ -171,6 +171,17 @@ enum sssnic_ethdev_txq_entry_type { SSSNIC_ETHDEV_TXQ_ENTRY_EXTEND = 1, }; +struct sssnic_ethdev_tx_info { + /* offload enable flag */ + uint16_t offload_en; + /*l4 payload offset*/ + uint16_t payload_off; + /* number of txq entries */ + uint16_t nb_entries; + /* number of tx segs */ + uint16_t nb_segs; +}; + #define SSSNIC_ETHDEV_TXQ_ENTRY_SZ_BITS 4 #define SSSNIC_ETHDEV_TXQ_ENTRY_SZ (RTE_BIT32(SSSNIC_ETHDEV_TXQ_ENTRY_SZ_BITS)) @@ -182,12 +193,44 @@ enum sssnic_ethdev_txq_entry_type { #define SSSNIC_ETHDEV_TX_CI_DEF_COALESCING_TIME 16 #define SSSNIC_ETHDEV_TX_CI_DEF_PENDING_TIME 4 +#define SSSNIC_ETHDEV_TX_CSUM_OFFLOAD_MASK \ + (RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM | \ + RTE_MBUF_F_TX_UDP_CKSUM | RTE_MBUF_F_TX_SCTP_CKSUM | \ + RTE_MBUF_F_TX_OUTER_IP_CKSUM | RTE_MBUF_F_TX_TCP_SEG) + +#define SSSNIC_ETHDEV_TX_OFFLOAD_MASK \ + (RTE_MBUF_F_TX_VLAN | SSSNIC_ETHDEV_TX_CSUM_OFFLOAD_MASK) + +#define SSSNIC_ETHDEV_TX_MAX_NUM_SEGS 38 +#define SSSNIC_ETHDEV_TX_MAX_SEG_SIZE 65535 +#define SSSNIC_ETHDEV_TX_MAX_PAYLOAD_OFF 221 +#define SSSNIC_ETHDEV_TX_DEF_MSS 0x3e00 +#define SSSNIC_ETHDEV_TX_MIN_MSS 0x50 +#define SSSNIC_ETHDEV_TX_COMPACT_SEG_MAX_SIZE 0x3fff + +#define SSSNIC_ETHDEV_TXQ_DESC_ENTRY(txq, idx) \ + (SSSNIC_WORKQ_ENTRY_CAST((txq)->workq, idx, \ + struct sssnic_ethdev_tx_desc)) + +#define SSSNIC_ETHDEV_TXQ_OFFLOAD_ENTRY(txq, idx) \ + SSSNIC_WORKQ_ENTRY_CAST((txq)->workq, idx, \ + struct sssnic_ethdev_tx_offload) + +#define SSSNIC_ETHDEV_TXQ_SEG_ENTRY(txq, idx) \ + SSSNIC_WORKQ_ENTRY_CAST((txq)->workq, idx, struct sssnic_ethdev_tx_seg) + static inline uint16_t sssnic_ethdev_txq_num_used_entries(struct sssnic_ethdev_txq *txq) { return sssnic_workq_num_used_entries(txq->workq); } +static inline uint16_t +sssnic_ethdev_txq_num_idle_entries(struct sssnic_ethdev_txq *txq) +{ + return sssnic_workq_num_idle_entries(txq->workq); +} + static inline uint16_t sssnic_ethdev_txq_ci_get(struct sssnic_ethdev_txq *txq) { @@ -212,6 +255,12 @@ sssnic_ethdev_txq_consume(struct sssnic_ethdev_txq *txq, uint16_t num_entries) 
sssnic_workq_consume_fast(txq->workq, num_entries); }
+static inline void +sssnic_ethdev_txq_produce(struct sssnic_ethdev_txq *txq, uint16_t num_entries) +{ + sssnic_workq_produce_fast(txq->workq, num_entries); +} +
int sssnic_ethdev_tx_queue_setup(struct rte_eth_dev *ethdev, uint16_t tx_queue_id, uint16_t nb_tx_desc, unsigned int socket_id,
@@ -706,3 +755,358 @@ sssnic_ethdev_tx_queue_stats_clear(struct rte_eth_dev *ethdev, uint16_t qid) *(stat++) = 0; } } +
+static inline uint16_t +sssnic_ethdev_tx_payload_calc(struct rte_mbuf *tx_mbuf) +{ + if ((tx_mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) != 0) { + uint64_t mask = RTE_MBUF_F_TX_OUTER_IPV6 | + RTE_MBUF_F_TX_OUTER_IP_CKSUM | + RTE_MBUF_F_TX_TCP_SEG; + + if ((tx_mbuf->ol_flags & mask) != 0) + return tx_mbuf->outer_l2_len + tx_mbuf->outer_l3_len + + tx_mbuf->l2_len + tx_mbuf->l3_len + + tx_mbuf->l4_len; + } + + return tx_mbuf->l2_len + tx_mbuf->l3_len + tx_mbuf->l4_len; +} +
+static inline int +sssnic_ethdev_tx_offload_check(struct rte_mbuf *tx_mbuf, + struct sssnic_ethdev_tx_info *tx_info) +{ + uint64_t ol_flags = tx_mbuf->ol_flags; + + if ((ol_flags & SSSNIC_ETHDEV_TX_OFFLOAD_MASK) == 0) { + tx_info->offload_en = 0; + return 0; + } + +#ifdef RTE_LIBRTE_ETHDEV_DEBUG + if (rte_validate_tx_offload(tx_mbuf) != 0) { + SSSNIC_TX_LOG(ERR, "Bad tx mbuf offload flags: %" PRIx64, ol_flags); + return -EINVAL; + } +#endif + + tx_info->offload_en = 1; + + if (unlikely(((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) != 0) && + ((ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN) == 0))) { + SSSNIC_TX_LOG(ERR, "Only VXLAN tunnel offload is supported"); + return -EINVAL; + } + + if (unlikely((ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0)) { + uint16_t off = sssnic_ethdev_tx_payload_calc(tx_mbuf); + if (unlikely((off >> 1) > SSSNIC_ETHDEV_TX_MAX_PAYLOAD_OFF)) { + SSSNIC_TX_LOG(ERR, "Bad tx payload offset: %u", off); + return -EINVAL; + } + tx_info->payload_off = off; + } + + return 0; +} +
+static inline int +sssnic_ethdev_tx_num_segs_calc(struct rte_mbuf *tx_mbuf, + struct sssnic_ethdev_tx_info *tx_info) +{ + uint16_t nb_segs = tx_mbuf->nb_segs; + + if (tx_info->offload_en == 0) { + /* offload not enabled, no offload entry is needed, + * so the number of txq entries equals the number of tx segs + */ + tx_info->nb_entries = nb_segs; + } else { + if (unlikely(nb_segs > SSSNIC_ETHDEV_TX_MAX_NUM_SEGS)) { + SSSNIC_TX_LOG(ERR, "Too many segments for TSO"); + return -EINVAL; + } + + /* offload enabled, an offload entry is needed, + * so the number of txq entries equals the number of tx segs + 1 + */ + tx_info->nb_entries = nb_segs + 1; + } + + tx_info->nb_segs = nb_segs; + + return 0; +} +
+static inline int +sssnic_ethdev_tx_info_init(struct sssnic_ethdev_txq *txq, + struct rte_mbuf *tx_mbuf, struct sssnic_ethdev_tx_info *tx_info) +{ + int ret; + + /* check whether tx offload is valid and enabled */ + ret = sssnic_ethdev_tx_offload_check(tx_mbuf, tx_info); + if (unlikely(ret != 0)) { + txq->stats.offload_errors++; + return ret; + } + + /* calculate how many tx segs and txq entries are required */ + ret = sssnic_ethdev_tx_num_segs_calc(tx_mbuf, tx_info); + if (unlikely(ret != 0)) { + txq->stats.too_many_segs++; + return ret; + } + + return 0; +} +
+static inline void +sssnic_ethdev_tx_offload_setup(struct sssnic_ethdev_txq *txq, + struct sssnic_ethdev_tx_desc *tx_desc, uint16_t pi, + struct rte_mbuf *tx_mbuf, struct sssnic_ethdev_tx_info *tx_info) +{ + struct sssnic_ethdev_tx_offload *offload; + + /* reset offload settings */ + offload = SSSNIC_ETHDEV_TXQ_OFFLOAD_ENTRY(txq, pi); + offload->dw0 = 0; + offload->dw1 = 0; + offload->dw2 = 0; + offload->dw3 = 0;
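+	/* The assignments below translate mbuf ol_flags into the hardware
+	 * offload entry: VLAN tag insertion, inner/outer L3/L4 checksum
+	 * enables and, for TSO, the MSS and the L4 payload offset
+	 * (programmed in 2-byte units).
+	 * (Descriptive comment added for clarity; not part of the
+	 * submitted patch.)
+	 */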
+ + if (unlikely((tx_mbuf->ol_flags & RTE_MBUF_F_TX_VLAN) != 0)) { + offload->vlan_en = 1; + offload->vlan_tag = tx_mbuf->vlan_tci; + } + + if ((tx_mbuf->ol_flags & SSSNIC_ETHDEV_TX_CSUM_OFFLOAD_MASK) == 0) + return; + + if ((tx_mbuf->ol_flags & RTE_MBUF_F_TX_TCP_SEG) != 0) { + offload->inner_l3_csum_en = 1; + offload->inner_l4_csum_en = 1; + + tx_desc->tso_en = 1; + tx_desc->payload_off = tx_info->payload_off >> 1; + tx_desc->mss = tx_mbuf->tso_segsz; + } else { + if ((tx_mbuf->ol_flags & RTE_MBUF_F_TX_IP_CKSUM) != 0) + offload->inner_l3_csum_en = 1; + + if ((tx_mbuf->ol_flags & RTE_MBUF_F_TX_L4_MASK) != 0) + offload->inner_l4_csum_en = 1; + } + + if (tx_mbuf->ol_flags & RTE_MBUF_F_TX_TUNNEL_VXLAN) + offload->tunnel_flag = 1; + + if (tx_mbuf->ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM) + offload->l3_csum_en = 1; +} +
+static inline int +sssnic_ethdev_tx_segs_setup(struct sssnic_ethdev_txq *txq, + struct sssnic_ethdev_tx_desc *tx_desc, uint16_t pi, + struct rte_mbuf *tx_mbuf, struct sssnic_ethdev_tx_info *tx_info) +{ + struct sssnic_ethdev_tx_seg *tx_seg; + uint16_t idx_mask = txq->idx_mask; + uint16_t nb_segs, i; + rte_iova_t seg_iova; + + nb_segs = tx_info->nb_segs; + + /* fill the first segment info into the tx desc entry */ + seg_iova = rte_mbuf_data_iova(tx_mbuf); + tx_desc->data_addr_hi = SSSNIC_UPPER_32_BITS(seg_iova); + tx_desc->data_addr_lo = SSSNIC_LOWER_32_BITS(seg_iova); + tx_desc->data_len = tx_mbuf->data_len; + + /* next tx segment */ + tx_mbuf = tx_mbuf->next; + + for (i = 1; i < nb_segs; i++) { + if (unlikely(tx_mbuf == NULL)) { + txq->stats.null_segs++; + SSSNIC_TX_LOG(DEBUG, "Tx mbuf segment is NULL"); + return -EINVAL; + } + + if (unlikely(tx_mbuf->data_len == 0)) { + txq->stats.zero_len_segs++; + SSSNIC_TX_LOG(DEBUG, + "Length of tx mbuf segment is zero"); + return -EINVAL; + } + + seg_iova = rte_mbuf_data_iova(tx_mbuf); + tx_seg = SSSNIC_ETHDEV_TXQ_SEG_ENTRY(txq, pi); + tx_seg->buf_hi_addr = SSSNIC_UPPER_32_BITS(seg_iova); + tx_seg->buf_lo_addr = SSSNIC_LOWER_32_BITS(seg_iova); + tx_seg->len = tx_mbuf->data_len; + tx_seg->resvd = 0; + + pi = (pi + 1) & idx_mask; + tx_mbuf = tx_mbuf->next; + } + + return 0; +} +
+static inline int +sssnic_ethdev_txq_entries_setup(struct sssnic_ethdev_txq *txq, uint16_t pi, + struct rte_mbuf *tx_mbuf, struct sssnic_ethdev_tx_info *tx_info) +{ + struct sssnic_ethdev_tx_desc *tx_desc; + uint16_t idx_mask = txq->idx_mask; + + /* reset tx desc entry */ + tx_desc = SSSNIC_ETHDEV_TXQ_DESC_ENTRY(txq, pi); + tx_desc->dw0 = 0; + tx_desc->dw1 = 0; + tx_desc->dw2 = 0; + tx_desc->dw3 = 0; + tx_desc->owner = txq->owner; + tx_desc->uc = 1; + + if (tx_info->offload_en != 0) { + /* next_pi points to tx offload entry */ + pi = (pi + 1) & idx_mask; + sssnic_ethdev_tx_offload_setup(txq, tx_desc, pi, tx_mbuf, + tx_info); + + tx_desc->entry_type = SSSNIC_ETHDEV_TXQ_ENTRY_EXTEND; + tx_desc->offload_en = 1; + tx_desc->num_segs = tx_info->nb_segs; + + if (tx_desc->mss == 0) + tx_desc->mss = SSSNIC_ETHDEV_TX_DEF_MSS; + else if (tx_desc->mss < SSSNIC_ETHDEV_TX_MIN_MSS) + tx_desc->mss = SSSNIC_ETHDEV_TX_MIN_MSS; + + } else { + /* + * if offload is disabled and nb_segs > 1, use an extended tx entry, + * else use the default compact entry + */ + if (tx_info->nb_segs > 1) { + tx_desc->num_segs = tx_info->nb_segs; + tx_desc->entry_type = SSSNIC_ETHDEV_TXQ_ENTRY_EXTEND; + } else { + if (unlikely(tx_mbuf->data_len > + SSSNIC_ETHDEV_TX_COMPACT_SEG_MAX_SIZE)) { + txq->stats.too_large_pkts++; + SSSNIC_TX_LOG(ERR, + "Too large packet (size=%u) for compact tx entry", + tx_mbuf->data_len); + return
-EINVAL; + } + } + } + + /* get next_pi that points to tx seg entry */ + pi = (pi + 1) & idx_mask; + + return sssnic_ethdev_tx_segs_setup(txq, tx_desc, pi, tx_mbuf, tx_info); +} + +static inline void +sssnic_ethdev_txq_doorbell_ring(struct sssnic_ethdev_txq *txq, uint16_t pi) +{ + uint64_t *db_addr; + struct sssnic_ethdev_txq_doorbell db; + static const struct sssnic_ethdev_txq_doorbell default_db = { + .cf = 0, + .service = 1, + }; + + db.u64 = default_db.u64; + db.qid = txq->qid; + db.cos = txq->cos; + db.pi_hi = (pi >> 8) & 0xff; + + db_addr = ((uint64_t *)txq->doorbell) + (pi & 0xff); + + rte_write64(db.u64, db_addr); +} + +uint16_t +sssnic_ethdev_tx_pkt_burst(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + struct sssnic_ethdev_txq *txq = (struct sssnic_ethdev_txq *)tx_queue; + struct sssnic_ethdev_tx_entry *txe; + struct rte_mbuf *txm; + struct sssnic_ethdev_tx_info tx_info; + uint64_t tx_bytes = 0; + uint16_t nb_tx = 0; + uint16_t idle_entries; + uint16_t pi; + int ret; + + /* cleanup previous xmit if idle entries is less than tx_free_thresh*/ + idle_entries = sssnic_ethdev_txq_num_idle_entries(txq) - 1; + if (unlikely(idle_entries < txq->tx_free_thresh)) + sssnic_ethdev_txq_pktmbufs_cleanup(txq); + + pi = sssnic_ethdev_txq_pi_get(txq); + + while (nb_tx < nb_pkts) { + txm = tx_pkts[nb_tx]; + + ret = sssnic_ethdev_tx_info_init(txq, txm, &tx_info); + if (unlikely(ret != 0)) + break; + + idle_entries = sssnic_ethdev_txq_num_idle_entries(txq) - 1; + + /* check if there are enough txq entries to xmit one packet */ + if (unlikely(idle_entries < tx_info.nb_entries)) { + sssnic_ethdev_txq_pktmbufs_cleanup(txq); + idle_entries = + sssnic_ethdev_txq_num_idle_entries(txq) - 1; + if (idle_entries < tx_info.nb_entries) { + SSSNIC_TX_LOG(ERR, + "No tx entries, idle_entries: %u, expect %u", + idle_entries, tx_info.nb_entries); + txq->stats.nobuf++; + break; + } + } + + /* setup txq entries, include tx_desc, offload, seg */ + ret = sssnic_ethdev_txq_entries_setup(txq, pi, txm, &tx_info); + if (unlikely(ret != 0)) + break; + + txe = &txq->txe[pi]; + txe->pktmbuf = txm; + txe->num_workq_entries = tx_info.nb_entries; + + if (unlikely((pi + tx_info.nb_entries) >= txq->depth)) + txq->owner = !txq->owner; + + sssnic_ethdev_txq_produce(txq, tx_info.nb_entries); + + pi = sssnic_ethdev_txq_pi_get(txq); + nb_tx++; + tx_bytes += txm->pkt_len; + + SSSNIC_TX_LOG(DEBUG, + "Transmitted one packet on port %u, len=%u, nb_seg=%u, tso_segsz=%u, ol_flags=%" + PRIx64, txq->port, txm->pkt_len, txm->nb_segs, txm->tso_segsz, + txm->ol_flags); + } + + if (likely(nb_tx > 0)) { + sssnic_ethdev_txq_doorbell_ring(txq, pi); + txq->stats.packets += nb_tx; + txq->stats.bytes += tx_bytes; + txq->stats.burst = nb_tx; + } + + return nb_tx; +} diff --git a/drivers/net/sssnic/sssnic_ethdev_tx.h b/drivers/net/sssnic/sssnic_ethdev_tx.h index f04c3d5be8..3a7cd47080 100644 --- a/drivers/net/sssnic/sssnic_ethdev_tx.h +++ b/drivers/net/sssnic/sssnic_ethdev_tx.h @@ -37,5 +37,7 @@ int sssnic_ethdev_tx_queue_stats_get(struct rte_eth_dev *ethdev, uint16_t qid, struct sssnic_ethdev_txq_stats *stats); void sssnic_ethdev_tx_queue_stats_clear(struct rte_eth_dev *ethdev, uint16_t qid); +uint16_t sssnic_ethdev_tx_pkt_burst(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts); #endif /* _SSSNIC_ETHDEV_TX_H_ */ From patchwork Fri Sep 1 09:35:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131071 X-Patchwork-Delegate: 
thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D96FB4221E; Fri, 1 Sep 2023 11:39:20 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0D9D840E8A; Fri, 1 Sep 2023 11:36:21 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.unionmem.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id C23C24064C for ; Fri, 1 Sep 2023 11:36:11 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZXhD069869; Fri, 1 Sep 2023 17:35:33 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:32 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 25/32] net/sssnic: add RSS support Date: Fri, 1 Sep 2023 17:35:07 +0800 Message-ID: <20230901093514.224824-26-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZXhD069869 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- v2: * Removed error.h from including files. --- doc/guides/nics/features/sssnic.ini | 4 + drivers/net/sssnic/base/sssnic_api.c | 338 ++++++++++++++++++++++ drivers/net/sssnic/base/sssnic_api.h | 36 +++ drivers/net/sssnic/base/sssnic_cmd.h | 58 ++++ drivers/net/sssnic/meson.build | 1 + drivers/net/sssnic/sssnic_ethdev.c | 16 ++ drivers/net/sssnic/sssnic_ethdev.h | 2 + drivers/net/sssnic/sssnic_ethdev_rss.c | 377 +++++++++++++++++++++++++ drivers/net/sssnic/sssnic_ethdev_rss.h | 20 ++ drivers/net/sssnic/sssnic_ethdev_rx.c | 13 + 10 files changed, 865 insertions(+) create mode 100644 drivers/net/sssnic/sssnic_ethdev_rss.c create mode 100644 drivers/net/sssnic/sssnic_ethdev_rss.h diff --git a/doc/guides/nics/features/sssnic.ini b/doc/guides/nics/features/sssnic.ini index 7e6b70684a..020a9e7056 100644 --- a/doc/guides/nics/features/sssnic.ini +++ b/doc/guides/nics/features/sssnic.ini @@ -15,6 +15,10 @@ Promiscuous mode = Y Allmulticast mode = Y Unicast MAC filter = Y Multicast MAC filter = Y +RSS hash = Y +RSS key update = Y +RSS reta update = Y +Inner RSS = Y L3 checksum offload = Y L4 checksum offload = Y Inner L3 checksum = Y diff --git a/drivers/net/sssnic/base/sssnic_api.c b/drivers/net/sssnic/base/sssnic_api.c index 9f063112f2..32b24e841c 100644 --- a/drivers/net/sssnic/base/sssnic_api.c +++ b/drivers/net/sssnic/base/sssnic_api.c @@ -1159,3 +1159,341 @@ sssnic_mac_stats_clear(struct sssnic_hw *hw) return 0; } + +int +sssnic_rss_enable_set(struct sssnic_hw *hw, bool state) +{ + int ret; + struct sssnic_rss_enable_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.state = state ? 
1 : 0; + cmd.function = SSSNIC_FUNC_IDX(hw); + cmd_len = sizeof(cmd); + + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, SSSNIC_ENABLE_RSS_CMD, + SSSNIC_MPU_FUNC_IDX, SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_ENABLE_RSS_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +static int +sssnic_rss_profile_config(struct sssnic_hw *hw, bool new) +{ + int ret; + struct sssnic_rss_profile_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.opcode = new ? SSSNIC_RSS_PROFILE_CMD_OP_NEW : + SSSNIC_RSS_PROFILE_CMD_OP_DEL; + cmd.function = SSSNIC_FUNC_IDX(hw); + cmd_len = sizeof(cmd); + + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, SSSNIC_RSS_PROFILE_CMD, + SSSNIC_MPU_FUNC_IDX, SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_RSS_PROFILE_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +int +sssnic_rss_profile_create(struct sssnic_hw *hw) +{ + return sssnic_rss_profile_config(hw, true); +} + +int +sssnic_rss_profile_destroy(struct sssnic_hw *hw) +{ + return sssnic_rss_profile_config(hw, false); +} + +int +sssnic_rss_hash_key_set(struct sssnic_hw *hw, uint8_t *key, uint16_t len) +{ + int ret; + struct sssnic_rss_hash_key_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + + if (len > sizeof(cmd.key)) { + PMD_DRV_LOG(ERR, "Invalid rss hash key length: %u", len); + return -EINVAL; + } + + memset(&cmd, 0, sizeof(cmd)); + cmd.opcode = SSSNIC_CMD_OPCODE_SET; + cmd.function = SSSNIC_FUNC_IDX(hw); + rte_memcpy(cmd.key, key, len); + cmd_len = sizeof(cmd); + + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, SSSNIC_RSS_HASH_KEY_CMD, + SSSNIC_MPU_FUNC_IDX, SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_RSS_PROFILE_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +static int +sssnic_rss_type_set_by_mbox(struct sssnic_hw *hw, struct sssnic_rss_type *type) +{ + int ret; + struct sssnic_rss_type_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.function = SSSNIC_FUNC_IDX(hw); + cmd.mask = type->mask; + cmd_len = sizeof(cmd); + + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, SSSNIC_SET_RSS_TYPE_CMD, + SSSNIC_MPU_FUNC_IDX, SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd.common.status == 0xff) + return -EOPNOTSUPP; + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_SET_RSS_TYPE_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +static int +sssnic_rss_type_set_by_ctrlq(struct 
sssnic_hw *hw, struct sssnic_rss_type *type) +{ + struct sssnic_ctrlq_cmd cmd; + struct sssnic_rss_hash_type_ctrlq_cmd data; + int ret; + + memset(&data, 0, sizeof(data)); + data.mask = rte_cpu_to_be_32(type->mask); + + memset(&cmd, 0, sizeof(cmd)); + cmd.data = &data; + cmd.module = SSSNIC_LAN_MODULE; + cmd.data_len = sizeof(data); + cmd.cmd = SSSNIC_SET_RSS_KEY_CTRLQ_CMD; + + ret = sssnic_ctrlq_cmd_exec(hw, &cmd, 0); + if (ret || cmd.result) { + PMD_DRV_LOG(ERR, + "Failed to execulte ctrlq command %s, ret=%d, result=%" PRIu64, + "SSSNIC_SET_RSS_KEY_CTRLQ_CMD", ret, cmd.result); + return -EIO; + } + + return 0; +} + +int +sssnic_rss_type_set(struct sssnic_hw *hw, struct sssnic_rss_type *type) +{ + int ret; + + ret = sssnic_rss_type_set_by_mbox(hw, type); + if (ret == -EOPNOTSUPP) + ret = sssnic_rss_type_set_by_ctrlq(hw, type); + + return ret; +} + +int +sssnic_rss_type_get(struct sssnic_hw *hw, struct sssnic_rss_type *type) +{ + int ret; + struct sssnic_rss_type_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.function = SSSNIC_FUNC_IDX(hw); + cmd_len = sizeof(cmd); + + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, SSSNIC_GET_RSS_TYPE_CMD, + SSSNIC_MPU_FUNC_IDX, SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_GET_RSS_TYPE_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + type->mask = cmd.mask; + + return 0; +} + +int +sssnic_rss_hash_engine_set(struct sssnic_hw *hw, + enum sssnic_rss_hash_engine_type engine) +{ + int ret; + struct sssnic_rss_hash_engine_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.function = SSSNIC_FUNC_IDX(hw); + cmd.engine = engine; + cmd.opcode = SSSNIC_CMD_OPCODE_SET; + cmd_len = sizeof(cmd); + + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_RSS_HASH_ENGINE_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_RSS_HASH_ENGINE_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +int +sssnic_rss_indir_table_set(struct sssnic_hw *hw, const uint16_t *entry, + uint32_t num_entries) +{ + struct sssnic_ctrlq_cmd *cmd; + struct sssnic_rss_indir_table_cmd *data; + uint32_t i; + int ret; + + cmd = sssnic_ctrlq_cmd_alloc(hw); + if (cmd == NULL) { + PMD_DRV_LOG(ERR, "Failed to alloc ctrlq command"); + return -ENOMEM; + } + + data = cmd->data; + memset(data, 0, sizeof(struct sssnic_rss_indir_table_cmd)); + for (i = 0; i < num_entries; i++) + data->entry[i] = entry[i]; + + rte_wmb(); + + sssnic_mem_cpu_to_be_32(data->entry, data->entry, sizeof(data->entry)); + + cmd->data_len = sizeof(struct sssnic_rss_indir_table_cmd); + cmd->module = SSSNIC_LAN_MODULE; + cmd->cmd = SSSNIC_SET_RSS_INDIR_TABLE_CMD; + + ret = sssnic_ctrlq_cmd_exec(hw, cmd, 0); + if (ret != 0 || cmd->result != 0) { + PMD_DRV_LOG(ERR, + "Failed to execulte ctrlq command %s, ret=%d, result=%" PRIu64, + "SSSNIC_SET_RSS_INDIR_TABLE_CMD", ret, cmd->result); + ret = -EIO; + } + + sssnic_ctrlq_cmd_destroy(hw, cmd); + + return 
ret; +} + +int +sssnic_rss_indir_table_get(struct sssnic_hw *hw, uint16_t *entry, + uint32_t num_entries) +{ + struct sssnic_ctrlq_cmd *cmd; + struct sssnic_rss_indir_table_cmd *data; + uint32_t i; + int ret = 0; + + cmd = sssnic_ctrlq_cmd_alloc(hw); + if (cmd == NULL) { + PMD_DRV_LOG(ERR, "Failed to alloc ctrlq command"); + return -ENOMEM; + } + + data = cmd->data; + memset(data, 0, sizeof(struct sssnic_rss_indir_table_cmd)); + cmd->data_len = sizeof(struct sssnic_rss_indir_table_cmd); + cmd->module = SSSNIC_LAN_MODULE; + cmd->cmd = SSSNIC_GET_RSS_INDIR_TABLE_CMD; + cmd->response_len = sizeof(data->entry); + cmd->response_data = data->entry; + + ret = sssnic_ctrlq_cmd_exec(hw, cmd, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "Failed to execulte ctrlq command %s, ret=%d, result=%" PRIu64, + "SSSNIC_GET_RSS_INDIR_TABLE_CMD", ret, cmd->result); + ret = -EIO; + goto out; + } + + for (i = 0; i < num_entries; i++) + entry[i] = data->entry[i]; + +out: + sssnic_ctrlq_cmd_destroy(hw, cmd); + return ret; +} diff --git a/drivers/net/sssnic/base/sssnic_api.h b/drivers/net/sssnic/base/sssnic_api.h index c2f4f90209..1d80b93e38 100644 --- a/drivers/net/sssnic/base/sssnic_api.h +++ b/drivers/net/sssnic/base/sssnic_api.h @@ -378,6 +378,30 @@ struct sssnic_mac_stats { uint64_t rx_unfilter_pkts; }; +struct sssnic_rss_type { + union { + uint32_t mask; + struct { + uint32_t resvd : 23; + uint32_t valid : 1; + uint32_t ipv6_tcp_ex : 1; + uint32_t ipv6_ex : 1; + uint32_t ipv6_tcp : 1; + uint32_t ipv6 : 1; + uint32_t ipv4_tcp : 1; + uint32_t ipv4 : 1; + uint32_t ipv6_udp : 1; + uint32_t ipv4_udp : 1; + }; + }; +}; + +enum sssnic_rss_hash_engine_type { + SSSNIC_RSS_HASH_ENGINE_XOR, + SSSNIC_RSS_HASH_ENGINE_TOEP, + SSSNIC_RSS_HASH_ENGINE_COUNT, +}; + int sssnic_msix_attr_get(struct sssnic_hw *hw, uint16_t msix_idx, struct sssnic_msix_attr *attr); int sssnic_msix_attr_set(struct sssnic_hw *hw, uint16_t msix_idx, @@ -420,5 +444,17 @@ int sssnic_port_stats_get(struct sssnic_hw *hw, int sssnic_port_stats_clear(struct sssnic_hw *hw); int sssnic_mac_stats_get(struct sssnic_hw *hw, struct sssnic_mac_stats *stats); int sssnic_mac_stats_clear(struct sssnic_hw *hw); +int sssnic_rss_enable_set(struct sssnic_hw *hw, bool state); +int sssnic_rss_profile_create(struct sssnic_hw *hw); +int sssnic_rss_profile_destroy(struct sssnic_hw *hw); +int sssnic_rss_hash_key_set(struct sssnic_hw *hw, uint8_t *key, uint16_t len); +int sssnic_rss_type_set(struct sssnic_hw *hw, struct sssnic_rss_type *type); +int sssnic_rss_type_get(struct sssnic_hw *hw, struct sssnic_rss_type *type); +int sssnic_rss_hash_engine_set(struct sssnic_hw *hw, + enum sssnic_rss_hash_engine_type engine); +int sssnic_rss_indir_table_set(struct sssnic_hw *hw, const uint16_t *entry, + uint32_t num_entries); +int sssnic_rss_indir_table_get(struct sssnic_hw *hw, uint16_t *entry, + uint32_t num_entries); #endif /* _SSSNIC_API_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_cmd.h b/drivers/net/sssnic/base/sssnic_cmd.h index bc7303ff57..56818471b6 100644 --- a/drivers/net/sssnic/base/sssnic_cmd.h +++ b/drivers/net/sssnic/base/sssnic_cmd.h @@ -66,6 +66,15 @@ enum sssnic_ctrlq_cmd_id { SSSNIC_FLUSH_RXQ_CMD = 10, }; +enum sssnic_rss_cmd_id { + SSSNIC_ENABLE_RSS_CMD = 60, + SSSNIC_RSS_PROFILE_CMD = 61, + SSSNIC_GET_RSS_TYPE_CMD = 62, + SSSNIC_RSS_HASH_KEY_CMD = 63, + SSSNIC_RSS_HASH_ENGINE_CMD = 64, + SSSNIC_SET_RSS_TYPE_CMD = 65, +}; + struct sssnic_cmd_common { uint8_t status; uint8_t version; @@ -348,4 +357,53 @@ struct sssnic_mac_stats_cmd { uint8_t resvd[3]; }; +struct 
sssnic_rss_enable_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint8_t state; + uint8_t resvd[13]; +}; + +#define SSSNIC_RSS_PROFILE_CMD_OP_NEW 1 /* Allocate RSS profile */ +#define SSSNIC_RSS_PROFILE_CMD_OP_DEL 2 /* Delete RSS profile */ +struct sssnic_rss_profile_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint8_t opcode; /* see SSSNIC_RSS_PROFILE_CMD_OP_xx */ + uint8_t profile; + uint32_t resvd[4]; +}; + +struct sssnic_rss_hash_key_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint8_t opcode; + uint8_t resvd; + uint8_t key[40]; +}; + +struct sssnic_rss_type_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint16_t resvd0; + uint32_t mask; /* mask of types */ +}; + +struct sssnic_rss_hash_type_ctrlq_cmd { + uint32_t resvd[4]; + uint32_t mask; +}; +struct sssnic_rss_hash_engine_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint8_t opcode; + uint8_t engine; + uint8_t resvd[4]; +}; + +struct sssnic_rss_indir_table_cmd { + uint32_t resvd[4]; + uint16_t entry[256]; +}; + #endif /* _SSSNIC_CMD_H_ */ diff --git a/drivers/net/sssnic/meson.build b/drivers/net/sssnic/meson.build index dea24f4b06..3541b75c30 100644 --- a/drivers/net/sssnic/meson.build +++ b/drivers/net/sssnic/meson.build @@ -22,4 +22,5 @@ sources = files( 'sssnic_ethdev_rx.c', 'sssnic_ethdev_tx.c', 'sssnic_ethdev_stats.c', + 'sssnic_ethdev_rss.c', ) diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index 328fb85d30..a00e96bebe 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -13,6 +13,7 @@ #include "sssnic_ethdev_rx.h" #include "sssnic_ethdev_tx.h" #include "sssnic_ethdev_stats.h" +#include "sssnic_ethdev_rss.h" static int sssnic_ethdev_init(struct rte_eth_dev *ethdev); @@ -542,6 +543,13 @@ sssnic_ethdev_start(struct rte_eth_dev *ethdev) goto rx_mode_reset; } + /* setup RSS */ + ret = sssnic_ethdev_rss_setup(ethdev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to setup RSS"); + goto rx_mode_reset; + } + /* start all rx queues */ ret = sssnic_ethdev_rx_queue_all_start(ethdev); if (ret != 0) { @@ -572,6 +580,7 @@ sssnic_ethdev_start(struct rte_eth_dev *ethdev) clean_port_res: sssnic_ethdev_resource_clean(ethdev); rx_mode_reset: + sssnic_ethdev_rss_shutdown(ethdev); sssnic_ethdev_rx_mode_set(ethdev, SSSNIC_ETHDEV_RX_MODE_NONE); rxtx_ctx_clean: sssnic_ethdev_rxtx_ctx_clean(ethdev); @@ -614,6 +623,9 @@ sssnic_ethdev_stop(struct rte_eth_dev *ethdev) /* shut down rx queue interrupt */ sssnic_ethdev_rx_intr_shutdown(ethdev); + /* Disable RSS */ + sssnic_ethdev_rss_shutdown(ethdev); + /* clean rxtx context */ sssnic_ethdev_rxtx_ctx_clean(ethdev); @@ -754,6 +766,10 @@ static const struct eth_dev_ops sssnic_ethdev_ops = { .xstats_get_names = sssnic_ethdev_xstats_get_names, .xstats_get = sssnic_ethdev_xstats_get, .xstats_reset = sssnic_ethdev_xstats_reset, + .rss_hash_conf_get = sssnic_ethdev_rss_hash_config_get, + .rss_hash_update = sssnic_ethdev_rss_hash_update, + .reta_update = sssnic_ethdev_rss_reta_update, + .reta_query = sssnic_ethdev_rss_reta_query, }; static int diff --git a/drivers/net/sssnic/sssnic_ethdev.h b/drivers/net/sssnic/sssnic_ethdev.h index 1f1e991780..f19b2bd88f 100644 --- a/drivers/net/sssnic/sssnic_ethdev.h +++ b/drivers/net/sssnic/sssnic_ethdev.h @@ -88,6 +88,8 @@ struct sssnic_netdev { uint16_t num_started_txqs; uint16_t max_rx_size; uint32_t rx_mode; + uint32_t rss_enable; + uint8_t rss_hash_key[SSSNIC_ETHDEV_RSS_KEY_SZ]; }; #define 
SSSNIC_ETHDEV_PRIVATE(eth_dev) \ diff --git a/drivers/net/sssnic/sssnic_ethdev_rss.c b/drivers/net/sssnic/sssnic_ethdev_rss.c new file mode 100644 index 0000000000..690a30d7bc --- /dev/null +++ b/drivers/net/sssnic/sssnic_ethdev_rss.c @@ -0,0 +1,377 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. + */ + +#include +#include + +#include "sssnic_log.h" +#include "sssnic_ethdev.h" +#include "sssnic_ethdev_rx.h" +#include "sssnic_ethdev_tx.h" +#include "sssnic_ethdev_stats.h" +#include "sssnic_ethdev_rss.h" +#include "base/sssnic_hw.h" +#include "base/sssnic_api.h" + +static uint8_t default_rss_hash_key[SSSNIC_ETHDEV_RSS_KEY_SZ] = { 0x6d, 0x5a, + 0x56, 0xda, 0x25, 0x5b, 0x0e, 0xc2, 0x41, 0x67, 0x25, 0x3d, 0x43, 0xa3, + 0x8f, 0xb0, 0xd0, 0xca, 0x2b, 0xcb, 0xae, 0x7b, 0x30, 0xb4, 0x77, 0xcb, + 0x2d, 0xa3, 0x80, 0x30, 0xf2, 0x0c, 0x6a, 0x42, 0xb7, 0x3b, 0xbe, 0xac, + 0x01, 0xfa }; + +#define SSSNIC_ETHDEV_RSS_IPV4 \ + (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \ + RTE_ETH_RSS_NONFRAG_IPV4_OTHER) +#define SSSNIC_ETHDEV_RSS_IPV6 \ + (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \ + RTE_ETH_RSS_NONFRAG_IPV6_OTHER) + +static inline void +sssnic_ethdev_rss_type_from_rss_hf(struct sssnic_rss_type *rss_type, + uint64_t rss_hf) +{ + rss_type->mask = 0; + rss_type->ipv4 = (rss_hf & SSSNIC_ETHDEV_RSS_IPV4) ? 1 : 0; + rss_type->ipv6 = (rss_hf & SSSNIC_ETHDEV_RSS_IPV6) ? 1 : 0; + rss_type->ipv6_ex = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0; + rss_type->ipv4_tcp = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0; + rss_type->ipv6_tcp = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0; + rss_type->ipv6_tcp_ex = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0; + rss_type->ipv4_udp = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0; + rss_type->ipv6_udp = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0; +} + +static inline uint64_t +sssnic_ethdev_rss_type_to_rss_hf(struct sssnic_rss_type *rss_type) +{ + uint64_t rss_hf = 0; + + rss_hf |= (rss_type->ipv4 == 0) ? 0 : SSSNIC_ETHDEV_RSS_IPV4; + rss_hf |= (rss_type->ipv6 == 0) ? 0 : SSSNIC_ETHDEV_RSS_IPV6; + rss_hf |= (rss_type->ipv6_ex == 0) ? 0 : RTE_ETH_RSS_IPV6_EX; + rss_hf |= (rss_type->ipv4_tcp == 0) ? 0 : RTE_ETH_RSS_NONFRAG_IPV4_TCP; + rss_hf |= (rss_type->ipv6_tcp == 0) ? 0 : RTE_ETH_RSS_NONFRAG_IPV6_TCP; + rss_hf |= (rss_type->ipv6_tcp_ex == 0) ? 0 : RTE_ETH_RSS_IPV6_TCP_EX; + rss_hf |= (rss_type->ipv4_udp == 0) ? 0 : RTE_ETH_RSS_NONFRAG_IPV4_UDP; + rss_hf |= (rss_type->ipv6_udp == 0) ? 
0 : RTE_ETH_RSS_NONFRAG_IPV6_UDP; + + return rss_hf; +} + +int +sssnic_ethdev_rss_hash_update(struct rte_eth_dev *ethdev, + struct rte_eth_rss_conf *rss_conf) +{ + struct sssnic_netdev *netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + struct sssnic_rss_type rss_type; + uint64_t rss_hf; + uint8_t *rss_key; + uint16_t rss_key_len; + int ret; + + rss_key = rss_conf->rss_key; + rss_key_len = rss_conf->rss_key_len; + if (rss_key == NULL) { + rss_key = default_rss_hash_key; + rss_key_len = SSSNIC_ETHDEV_RSS_KEY_SZ; + } else if (rss_key_len > SSSNIC_ETHDEV_RSS_KEY_SZ) { + PMD_DRV_LOG(ERR, "RSS hash key length too long"); + return -EINVAL; + } + + ret = sssnic_rss_hash_key_set(hw, rss_key, rss_key_len); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to set RSS hash key"); + return ret; + } + + rte_memcpy(netdev->rss_hash_key, rss_key, rss_key_len); + + rss_hf = rss_conf->rss_hf; + + if (rss_hf == 0) + rss_hf = SSSNIC_ETHDEV_RSS_OFFLOAD_FLOW_TYPES; + else + rss_hf &= SSSNIC_ETHDEV_RSS_OFFLOAD_FLOW_TYPES; + + sssnic_ethdev_rss_type_from_rss_hf(&rss_type, rss_hf); + rss_type.valid = 1; + ret = sssnic_rss_type_set(hw, &rss_type); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to set RSS type: %x", rss_type.mask); + return ret; + } + + return 0; +} + +int +sssnic_ethdev_rss_hash_config_get(struct rte_eth_dev *ethdev, + struct rte_eth_rss_conf *rss_conf) +{ + struct sssnic_netdev *netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + struct sssnic_hw *hw; + struct sssnic_rss_type rss_type; + int ret; + + hw = SSSNIC_NETDEV_TO_HW(netdev); + + if (!netdev->rss_enable) { + PMD_DRV_LOG(NOTICE, "Port %u RSS is not enabled", + ethdev->data->port_id); + rss_conf->rss_hf = 0; + return 0; + } + + ret = sssnic_rss_type_get(hw, &rss_type); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to get RSS type"); + return ret; + } + rss_conf->rss_hf = sssnic_ethdev_rss_type_to_rss_hf(&rss_type); + + if (rss_conf->rss_key != NULL && + rss_conf->rss_key_len >= SSSNIC_ETHDEV_RSS_KEY_SZ) { + rte_memcpy(rss_conf->rss_key, netdev->rss_hash_key, + SSSNIC_ETHDEV_RSS_KEY_SZ); + rss_conf->rss_key_len = SSSNIC_ETHDEV_RSS_KEY_SZ; + } + + return 0; +} + +int +sssnic_ethdev_rss_reta_update(struct rte_eth_dev *ethdev, + struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) +{ + struct sssnic_netdev *netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + struct sssnic_hw *hw; + uint16_t *entries; + int i, group, idx; + int ret; + + if (!netdev->rss_enable) { + PMD_DRV_LOG(ERR, "Port %u RSS is not enabled", + ethdev->data->port_id); + return -EINVAL; + } + + if (reta_size != SSSNIC_ETHDEV_RSS_RETA_SZ) { + PMD_DRV_LOG(ERR, "Invalid reta size:%u, expected reta size:%u ", + reta_size, SSSNIC_ETHDEV_RSS_RETA_SZ); + return -EINVAL; + } + + hw = SSSNIC_NETDEV_TO_HW(netdev); + + entries = rte_zmalloc(NULL, + SSSNIC_ETHDEV_RSS_RETA_SZ * sizeof(uint16_t), 0); + if (entries == NULL) { + PMD_DRV_LOG(ERR, "Could not allocate memory"); + return -ENOMEM; + } + + ret = sssnic_rss_indir_table_get(hw, entries, + SSSNIC_ETHDEV_RSS_RETA_SZ); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to get RSS indirect table"); + goto out; + } + + for (i = 0; i < SSSNIC_ETHDEV_RSS_RETA_SZ; i++) { + group = i / RTE_ETH_RETA_GROUP_SIZE; + idx = i % RTE_ETH_RETA_GROUP_SIZE; + if ((reta_conf[group].mask & RTE_BIT64(idx)) != 0) + entries[i] = reta_conf[group].reta[idx]; + } + + ret = sssnic_rss_indir_table_set(hw, entries, + SSSNIC_ETHDEV_RSS_RETA_SZ); + if (ret != 0) + PMD_DRV_LOG(ERR, "Failed to set RSS indirect table"); + +out: + rte_free(entries); 
+ return ret; +} + +int +sssnic_ethdev_rss_reta_query(struct rte_eth_dev *ethdev, + struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) +{ + struct sssnic_netdev *netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + struct sssnic_hw *hw; + uint16_t *entries; + int i, group, idx; + int ret; + + if (!netdev->rss_enable) { + PMD_DRV_LOG(ERR, "Port %u RSS is not enabled", + ethdev->data->port_id); + return -EINVAL; + } + + if (reta_size != SSSNIC_ETHDEV_RSS_RETA_SZ) { + PMD_DRV_LOG(ERR, "Invalid reta size:%u, expected reta size:%u ", + reta_size, SSSNIC_ETHDEV_RSS_RETA_SZ); + return -EINVAL; + } + + hw = SSSNIC_NETDEV_TO_HW(netdev); + + entries = rte_zmalloc(NULL, + SSSNIC_ETHDEV_RSS_RETA_SZ * sizeof(uint16_t), 0); + if (entries == NULL) { + PMD_DRV_LOG(ERR, "Could not allocate memory"); + return -ENOMEM; + } + + ret = sssnic_rss_indir_table_get(hw, entries, + SSSNIC_ETHDEV_RSS_RETA_SZ); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to get RSS indirect table"); + goto out; + } + + for (i = 0; i < SSSNIC_ETHDEV_RSS_RETA_SZ; i++) { + group = i / RTE_ETH_RETA_GROUP_SIZE; + idx = i % RTE_ETH_RETA_GROUP_SIZE; + if ((reta_conf[group].mask & RTE_BIT64(idx)) != 0) + reta_conf[group].reta[idx] = entries[i]; + } + +out: + rte_free(entries); + return ret; +} + +int +sssnic_ethdev_rss_reta_reset(struct rte_eth_dev *ethdev) +{ + struct sssnic_netdev *netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + uint16_t *entries; + uint16_t nb_rxq; + uint8_t rxq_state; + uint16_t qid, i = 0; + int ret; + + if (!netdev->rss_enable) + return 0; + + entries = rte_zmalloc(NULL, + SSSNIC_ETHDEV_RSS_RETA_SZ * sizeof(uint16_t), 0); + if (entries == NULL) { + PMD_DRV_LOG(ERR, "Could not allocate memory"); + return -ENOMEM; + } + + nb_rxq = ethdev->data->nb_rx_queues; + + if (netdev->num_started_rxqs == 0) { + while (i < SSSNIC_ETHDEV_RSS_RETA_SZ) + entries[i++] = 0xffff; + } else { + while (i < SSSNIC_ETHDEV_RSS_RETA_SZ) { + for (qid = 0; qid < nb_rxq; qid++) { + if (i >= SSSNIC_ETHDEV_RSS_RETA_SZ) + break; + rxq_state = ethdev->data->rx_queue_state[qid]; + if (rxq_state == RTE_ETH_QUEUE_STATE_STARTED) + entries[i++] = qid; + } + } + } + + ret = sssnic_rss_indir_table_set(hw, entries, + SSSNIC_ETHDEV_RSS_RETA_SZ); + if (ret != 0) + PMD_DRV_LOG(ERR, "Failed to set RSS indirect table"); + + rte_free(entries); + + return ret; +} + +int +sssnic_ethdev_rss_setup(struct rte_eth_dev *ethdev) +{ + struct sssnic_netdev *netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + struct rte_eth_conf *dev_conf = ðdev->data->dev_conf; + struct rte_eth_rss_conf *rss_conf; + int ret; + + if (!((dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH) && + ethdev->data->nb_rx_queues > 1)) { + PMD_DRV_LOG(INFO, "RSS is not enabled"); + return 0; + } + + if (netdev->rss_enable) + return 0; + + ret = sssnic_rss_profile_create(hw); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to create RSS profile"); + return ret; + } + + rss_conf = &dev_conf->rx_adv_conf.rss_conf; + ret = sssnic_ethdev_rss_hash_update(ethdev, rss_conf); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to setup RSS config"); + goto err_out; + } + + ret = sssnic_rss_hash_engine_set(hw, SSSNIC_RSS_HASH_ENGINE_TOEP); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to set RSS hash engine"); + goto err_out; + } + + ret = sssnic_rss_enable_set(hw, true); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to enable RSS"); + goto err_out; + } + + netdev->rss_enable = true; + + PMD_DRV_LOG(INFO, "Enabled RSS"); + + 
return 0; + +err_out: + sssnic_rss_profile_destroy(hw); + return ret; +} + +int +sssnic_ethdev_rss_shutdown(struct rte_eth_dev *ethdev) +{ + struct sssnic_netdev *netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + int ret; + + if (!netdev->rss_enable) + return 0; + + ret = sssnic_rss_enable_set(hw, false); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to disable rss"); + return ret; + } + + ret = sssnic_rss_profile_destroy(hw); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to delete rss profile"); + return ret; + } + + netdev->rss_enable = false; + + return 0; +} diff --git a/drivers/net/sssnic/sssnic_ethdev_rss.h b/drivers/net/sssnic/sssnic_ethdev_rss.h new file mode 100644 index 0000000000..559722eec7 --- /dev/null +++ b/drivers/net/sssnic/sssnic_ethdev_rss.h @@ -0,0 +1,20 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. + */ + +#ifndef _SSSNIC_ETHDEV_RSS_H_ +#define _SSSNIC_ETHDEV_RSS_H_ + +int sssnic_ethdev_rss_hash_update(struct rte_eth_dev *ethdev, + struct rte_eth_rss_conf *rss_conf); +int sssnic_ethdev_rss_hash_config_get(struct rte_eth_dev *ethdev, + struct rte_eth_rss_conf *rss_conf); +int sssnic_ethdev_rss_reta_update(struct rte_eth_dev *ethdev, + struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size); +int sssnic_ethdev_rss_reta_query(struct rte_eth_dev *ethdev, + struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size); +int sssnic_ethdev_rss_reta_reset(struct rte_eth_dev *ethdev); +int sssnic_ethdev_rss_setup(struct rte_eth_dev *ethdev); +int sssnic_ethdev_rss_shutdown(struct rte_eth_dev *ethdev); + +#endif /* _SSSNIC_ETHDEV_RSS_H_ */ diff --git a/drivers/net/sssnic/sssnic_ethdev_rx.c b/drivers/net/sssnic/sssnic_ethdev_rx.c index 82e65f2482..2874a93a54 100644 --- a/drivers/net/sssnic/sssnic_ethdev_rx.c +++ b/drivers/net/sssnic/sssnic_ethdev_rx.c @@ -10,6 +10,7 @@ #include "sssnic_log.h" #include "sssnic_ethdev.h" #include "sssnic_ethdev_rx.h" +#include "sssnic_ethdev_rss.h" #include "base/sssnic_hw.h" #include "base/sssnic_workq.h" #include "base/sssnic_api.h" @@ -640,6 +641,10 @@ sssnic_ethdev_rx_queue_start(struct rte_eth_dev *ethdev, uint16_t queue_id) netdev->num_started_rxqs++; ethdev->data->rx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + ret = sssnic_ethdev_rss_reta_reset(ethdev); + if (ret) + PMD_DRV_LOG(WARNING, "Failed to reset RSS reta"); + PMD_DRV_LOG(DEBUG, "port %u rxq %u started", ethdev->data->port_id, queue_id); @@ -673,6 +678,10 @@ sssnic_ethdev_rx_queue_stop(struct rte_eth_dev *ethdev, uint16_t queue_id) netdev->num_started_rxqs--; ethdev->data->rx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + ret = sssnic_ethdev_rss_reta_reset(ethdev); + if (ret) + PMD_DRV_LOG(WARNING, "Failed to reset RSS reta"); + PMD_DRV_LOG(DEBUG, "port %u rxq %u stopped", ethdev->data->port_id, queue_id); @@ -704,6 +713,10 @@ sssnic_ethdev_rx_queue_all_start(struct rte_eth_dev *ethdev) ethdev->data->port_id, qid); } + ret = sssnic_ethdev_rss_reta_reset(ethdev); + if (ret) + PMD_DRV_LOG(WARNING, "Failed to reset RSS reta"); + ret = sssnic_port_enable_set(hw, true); if (ret) { PMD_DRV_LOG(ERR, "Failed to enable port:%u", From patchwork Fri Sep 1 09:35:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131070 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org 
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2ABA24221E; Fri, 1 Sep 2023 11:39:14 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id CDBC240E68; Fri, 1 Sep 2023 11:36:19 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.ramaxel.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id 32FC14064E for ; Fri, 1 Sep 2023 11:36:09 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZY8H069870; Fri, 1 Sep 2023 17:35:34 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:33 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 26/32] net/sssnic: support dev MTU set Date: Fri, 1 Sep 2023 17:35:08 +0800 Message-ID: <20230901093514.224824-27-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZY8H069870 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- drivers/net/sssnic/sssnic_ethdev.c | 7 +++++++ 1 file changed, 7 insertions(+) diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index a00e96bebe..b086e91d2d 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -733,6 +733,12 @@ sssnic_ethdev_promiscuous_disable(struct rte_eth_dev *ethdev) return 0; } +static int +sssnic_ethdev_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu) +{ + return sssnic_ethdev_tx_max_size_set(ethdev, mtu); +} + static const struct eth_dev_ops sssnic_ethdev_ops = { .dev_start = sssnic_ethdev_start, .dev_stop = sssnic_ethdev_stop, @@ -770,6 +776,7 @@ static const struct eth_dev_ops sssnic_ethdev_ops = { .rss_hash_update = sssnic_ethdev_rss_hash_update, .reta_update = sssnic_ethdev_rss_reta_update, .reta_query = sssnic_ethdev_rss_reta_query, + .mtu_set = sssnic_ethdev_mtu_set, }; static int From patchwork Fri Sep 1 09:35:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131072 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1FBEA4221E; Fri, 1 Sep 2023 11:39:27 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 45BA940A71; Fri, 1 Sep 2023 11:36:22 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.ramaxel.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id 3A48E40A75 for ; Fri, 1 Sep 2023 11:36:12 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP 
id 3819ZY91069883; Fri, 1 Sep 2023 17:35:34 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:33 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 27/32] net/sssnic: support dev queue info get Date: Fri, 1 Sep 2023 17:35:09 +0800 Message-ID: <20230901093514.224824-28-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZY91069883 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- drivers/net/sssnic/sssnic_ethdev.c | 2 ++ drivers/net/sssnic/sssnic_ethdev_rx.c | 13 +++++++++++++ drivers/net/sssnic/sssnic_ethdev_rx.h | 2 ++ drivers/net/sssnic/sssnic_ethdev_tx.c | 11 +++++++++++ drivers/net/sssnic/sssnic_ethdev_tx.h | 2 ++ 5 files changed, 30 insertions(+) diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index b086e91d2d..bde8d89ddc 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -777,6 +777,8 @@ static const struct eth_dev_ops sssnic_ethdev_ops = { .reta_update = sssnic_ethdev_rss_reta_update, .reta_query = sssnic_ethdev_rss_reta_query, .mtu_set = sssnic_ethdev_mtu_set, + .rxq_info_get = sssnic_ethdev_rx_queue_info_get, + .txq_info_get = sssnic_ethdev_tx_queue_info_get, }; static int diff --git a/drivers/net/sssnic/sssnic_ethdev_rx.c b/drivers/net/sssnic/sssnic_ethdev_rx.c index 2874a93a54..6c5f209262 100644 --- a/drivers/net/sssnic/sssnic_ethdev_rx.c +++ b/drivers/net/sssnic/sssnic_ethdev_rx.c @@ -1363,3 +1363,16 @@ sssnic_ethdev_rx_pkt_burst(void *rx_queue, struct rte_mbuf **rx_pkts, return nb_rx; } + +void +sssnic_ethdev_rx_queue_info_get(struct rte_eth_dev *ethdev, + uint16_t rx_queue_id, struct rte_eth_rxq_info *qinfo) +{ + struct sssnic_ethdev_rxq *rxq = ethdev->data->rx_queues[rx_queue_id]; + + qinfo->rx_buf_size = rxq->rx_buf_size; + qinfo->nb_desc = rxq->depth; + qinfo->queue_state = ethdev->data->rx_queue_state[rx_queue_id]; + qinfo->mp = rxq->mp; + qinfo->conf.rx_free_thresh = rxq->rx_free_thresh; +} diff --git a/drivers/net/sssnic/sssnic_ethdev_rx.h b/drivers/net/sssnic/sssnic_ethdev_rx.h index b0b35dee73..20e4d1ac0e 100644 --- a/drivers/net/sssnic/sssnic_ethdev_rx.h +++ b/drivers/net/sssnic/sssnic_ethdev_rx.h @@ -44,5 +44,7 @@ void sssnic_ethdev_rx_queue_stats_clear(struct rte_eth_dev *ethdev, uint16_t qid); uint16_t sssnic_ethdev_rx_pkt_burst(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); +void sssnic_ethdev_rx_queue_info_get(struct rte_eth_dev *ethdev, + uint16_t rx_queue_id, struct rte_eth_rxq_info *qinfo); #endif diff --git a/drivers/net/sssnic/sssnic_ethdev_tx.c b/drivers/net/sssnic/sssnic_ethdev_tx.c index 533befb6ea..51931df645 100644 --- a/drivers/net/sssnic/sssnic_ethdev_tx.c +++ b/drivers/net/sssnic/sssnic_ethdev_tx.c @@ -1110,3 +1110,14 @@ sssnic_ethdev_tx_pkt_burst(void 
*tx_queue, struct rte_mbuf **tx_pkts, return nb_tx; } + +void +sssnic_ethdev_tx_queue_info_get(struct rte_eth_dev *ethdev, + uint16_t tx_queue_id, struct rte_eth_txq_info *qinfo) +{ + struct sssnic_ethdev_txq *txq = ethdev->data->tx_queues[tx_queue_id]; + + qinfo->nb_desc = txq->depth; + qinfo->queue_state = ethdev->data->tx_queue_state[tx_queue_id]; + qinfo->conf.tx_free_thresh = txq->tx_free_thresh; +} diff --git a/drivers/net/sssnic/sssnic_ethdev_tx.h b/drivers/net/sssnic/sssnic_ethdev_tx.h index 3a7cd47080..6130ade4d1 100644 --- a/drivers/net/sssnic/sssnic_ethdev_tx.h +++ b/drivers/net/sssnic/sssnic_ethdev_tx.h @@ -39,5 +39,7 @@ void sssnic_ethdev_tx_queue_stats_clear(struct rte_eth_dev *ethdev, uint16_t qid); uint16_t sssnic_ethdev_tx_pkt_burst(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); +void sssnic_ethdev_tx_queue_info_get(struct rte_eth_dev *ethdev, + uint16_t tx_queue_id, struct rte_eth_txq_info *qinfo); #endif /* _SSSNIC_ETHDEV_TX_H_ */ From patchwork Fri Sep 1 09:35:10 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131074 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A25D74221E; Fri, 1 Sep 2023 11:39:41 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 2F149410E3; Fri, 1 Sep 2023 11:36:25 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.unionmem.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id AD24740A8B for ; Fri, 1 Sep 2023 11:36:13 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZYAe069884; Fri, 1 Sep 2023 17:35:35 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:34 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 28/32] net/sssnic: support dev firmware version get Date: Fri, 1 Sep 2023 17:35:10 +0800 Message-ID: <20230901093514.224824-29-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZYAe069884 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- doc/guides/nics/features/sssnic.ini | 1 + drivers/net/sssnic/base/sssnic_api.c | 36 ++++++++++++++++++++++++++++ drivers/net/sssnic/base/sssnic_api.h | 9 +++++++ drivers/net/sssnic/base/sssnic_cmd.h | 8 +++++++ drivers/net/sssnic/sssnic_ethdev.c | 20 ++++++++++++++++ 5 files changed, 74 insertions(+) diff --git a/doc/guides/nics/features/sssnic.ini b/doc/guides/nics/features/sssnic.ini index 020a9e7056..a40b509558 100644 --- a/doc/guides/nics/features/sssnic.ini +++ 
b/doc/guides/nics/features/sssnic.ini @@ -26,6 +26,7 @@ Inner L4 checksum = Y Basic stats = Y Extended stats = Y Stats per queue = Y +FW version = Y Linux = Y ARMv8 = Y x86-64 = Y diff --git a/drivers/net/sssnic/base/sssnic_api.c b/drivers/net/sssnic/base/sssnic_api.c index 32b24e841c..12aca16995 100644 --- a/drivers/net/sssnic/base/sssnic_api.c +++ b/drivers/net/sssnic/base/sssnic_api.c @@ -1497,3 +1497,39 @@ sssnic_rss_indir_table_get(struct sssnic_hw *hw, uint16_t *entry, sssnic_ctrlq_cmd_destroy(hw, cmd); return ret; } + +int +sssnic_fw_version_get(struct sssnic_hw *hw, struct sssnic_fw_version *version) +{ + int ret; + struct sssnic_msg msg; + struct sssnic_fw_version_get_cmd cmd; + uint32_t cmdlen = sizeof(cmd); + int len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.type = 1; /* get MPU firmware version */ + + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmdlen, + SSSNIC_GET_FW_VERSION_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_COMM_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmdlen, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmdlen == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_GET_FW_VERSION_CMD, len=%u, status=%u", + cmdlen, cmd.common.status); + return -EIO; + } + + len = RTE_MIN(sizeof(version->version), sizeof(cmd.version)); + rte_memcpy(version->version, cmd.version, len); + len = RTE_MIN(sizeof(version->time), sizeof(cmd.time)); + rte_memcpy(version->time, cmd.time, len); + + return 0; +} diff --git a/drivers/net/sssnic/base/sssnic_api.h b/drivers/net/sssnic/base/sssnic_api.h index 1d80b93e38..bd4f01d388 100644 --- a/drivers/net/sssnic/base/sssnic_api.h +++ b/drivers/net/sssnic/base/sssnic_api.h @@ -402,6 +402,13 @@ enum sssnic_rss_hash_engine_type { SSSNIC_RSS_HASH_ENGINE_COUNT, }; +#define SSSNIC_FW_VERSION_LEN 16 +#define SSSNIC_FW_TIME_LEN 20 +struct sssnic_fw_version { + char version[SSSNIC_FW_VERSION_LEN]; + char time[SSSNIC_FW_VERSION_LEN]; +}; + int sssnic_msix_attr_get(struct sssnic_hw *hw, uint16_t msix_idx, struct sssnic_msix_attr *attr); int sssnic_msix_attr_set(struct sssnic_hw *hw, uint16_t msix_idx, @@ -456,5 +463,7 @@ int sssnic_rss_indir_table_set(struct sssnic_hw *hw, const uint16_t *entry, uint32_t num_entries); int sssnic_rss_indir_table_get(struct sssnic_hw *hw, uint16_t *entry, uint32_t num_entries); +int sssnic_fw_version_get(struct sssnic_hw *hw, + struct sssnic_fw_version *version); #endif /* _SSSNIC_API_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_cmd.h b/drivers/net/sssnic/base/sssnic_cmd.h index 56818471b6..9da07770b1 100644 --- a/drivers/net/sssnic/base/sssnic_cmd.h +++ b/drivers/net/sssnic/base/sssnic_cmd.h @@ -406,4 +406,12 @@ struct sssnic_rss_indir_table_cmd { uint16_t entry[256]; }; +struct sssnic_fw_version_get_cmd { + struct sssnic_cmd_common common; + uint16_t type; + uint16_t resvd; + uint8_t version[16]; + uint8_t time[20]; +}; + #endif /* _SSSNIC_CMD_H_ */ diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index bde8d89ddc..cceaa5c8be 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -739,6 +739,25 @@ sssnic_ethdev_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu) return sssnic_ethdev_tx_max_size_set(ethdev, mtu); } +static int +sssnic_ethdev_fw_version_get(struct rte_eth_dev *ethdev, char *fw_version, + size_t fw_size) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + struct sssnic_fw_version version; + int ret; + + ret = 
sssnic_fw_version_get(hw, &version); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to get firmware version"); + return ret; + } + + snprintf(fw_version, fw_size, "%s", version.version); + + return 0; +} + static const struct eth_dev_ops sssnic_ethdev_ops = { .dev_start = sssnic_ethdev_start, .dev_stop = sssnic_ethdev_stop, @@ -779,6 +798,7 @@ static const struct eth_dev_ops sssnic_ethdev_ops = { .mtu_set = sssnic_ethdev_mtu_set, .rxq_info_get = sssnic_ethdev_rx_queue_info_get, .txq_info_get = sssnic_ethdev_tx_queue_info_get, + .fw_version_get = sssnic_ethdev_fw_version_get, }; static int From patchwork Fri Sep 1 09:35:11 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131073 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id CF1604221E; Fri, 1 Sep 2023 11:39:34 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E168C40F35; Fri, 1 Sep 2023 11:36:23 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.unionmem.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id AA39740A8A for ; Fri, 1 Sep 2023 11:36:13 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZZK7069893; Fri, 1 Sep 2023 17:35:35 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:34 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 29/32] net/sssnic: add dev flow control ops Date: Fri, 1 Sep 2023 17:35:11 +0800 Message-ID: <20230901093514.224824-30-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZZK7069893 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- doc/guides/nics/features/sssnic.ini | 1 + drivers/net/sssnic/base/sssnic_api.c | 68 ++++++++++++++++++++++++++++ drivers/net/sssnic/base/sssnic_api.h | 4 ++ drivers/net/sssnic/base/sssnic_cmd.h | 11 +++++ drivers/net/sssnic/sssnic_ethdev.c | 65 ++++++++++++++++++++++++++ 5 files changed, 149 insertions(+) diff --git a/doc/guides/nics/features/sssnic.ini b/doc/guides/nics/features/sssnic.ini index a40b509558..9bf05cb968 100644 --- a/doc/guides/nics/features/sssnic.ini +++ b/doc/guides/nics/features/sssnic.ini @@ -26,6 +26,7 @@ Inner L4 checksum = Y Basic stats = Y Extended stats = Y Stats per queue = Y +Flow control = P FW version = Y Linux = Y ARMv8 = Y diff --git a/drivers/net/sssnic/base/sssnic_api.c b/drivers/net/sssnic/base/sssnic_api.c index 12aca16995..d91896cdd2 100644 --- a/drivers/net/sssnic/base/sssnic_api.c +++ b/drivers/net/sssnic/base/sssnic_api.c @@ 
-1533,3 +1533,71 @@ sssnic_fw_version_get(struct sssnic_hw *hw, struct sssnic_fw_version *version) return 0; } + +int +sssnic_flow_ctrl_set(struct sssnic_hw *hw, bool autoneg, bool rx_en, bool tx_en) +{ + int ret; + struct sssnic_msg msg; + struct sssnic_flow_ctrl_cmd cmd; + uint32_t cmdlen = sizeof(cmd); + + memset(&cmd, 0, sizeof(cmd)); + cmd.auto_neg = autoneg ? 1 : 0; + cmd.rx_en = rx_en ? 1 : 0; + cmd.tx_en = tx_en ? 1 : 0; + cmd.opcode = SSSNIC_CMD_OPCODE_SET; + + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmdlen, + SSSNIC_PORT_FLOW_CTRL_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmdlen, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmdlen == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_PORT_FLOW_CTRL_CMD, len=%u, status=%u", + cmdlen, cmd.common.status); + return -EIO; + } + + return 0; +} + +int +sssnic_flow_ctrl_get(struct sssnic_hw *hw, bool *autoneg, bool *rx_en, + bool *tx_en) +{ + int ret; + struct sssnic_msg msg; + struct sssnic_flow_ctrl_cmd cmd; + uint32_t cmdlen = sizeof(cmd); + + memset(&cmd, 0, sizeof(cmd)); + cmd.opcode = SSSNIC_CMD_OPCODE_GET; + + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmdlen, + SSSNIC_PORT_FLOW_CTRL_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmdlen, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmdlen == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_PORT_FLOW_CTRL_CMD, len=%u, status=%u", + cmdlen, cmd.common.status); + return -EIO; + } + + *autoneg = cmd.auto_neg; + *rx_en = cmd.rx_en; + *tx_en = cmd.tx_en; + + return 0; +} diff --git a/drivers/net/sssnic/base/sssnic_api.h b/drivers/net/sssnic/base/sssnic_api.h index bd4f01d388..36544a5dc3 100644 --- a/drivers/net/sssnic/base/sssnic_api.h +++ b/drivers/net/sssnic/base/sssnic_api.h @@ -465,5 +465,9 @@ int sssnic_rss_indir_table_get(struct sssnic_hw *hw, uint16_t *entry, uint32_t num_entries); int sssnic_fw_version_get(struct sssnic_hw *hw, struct sssnic_fw_version *version); +int sssnic_flow_ctrl_set(struct sssnic_hw *hw, bool autoneg, bool rx_en, + bool tx_en); +int sssnic_flow_ctrl_get(struct sssnic_hw *hw, bool *autoneg, bool *rx_en, + bool *tx_en); #endif /* _SSSNIC_API_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_cmd.h b/drivers/net/sssnic/base/sssnic_cmd.h index 9da07770b1..d2054fad5a 100644 --- a/drivers/net/sssnic/base/sssnic_cmd.h +++ b/drivers/net/sssnic/base/sssnic_cmd.h @@ -414,4 +414,15 @@ struct sssnic_fw_version_get_cmd { uint8_t time[20]; }; +struct sssnic_flow_ctrl_cmd { + struct sssnic_cmd_common common; + uint8_t port; + uint8_t opcode; + uint16_t resvd0; + uint8_t auto_neg; + uint8_t rx_en; + uint8_t tx_en; + uint8_t resvd1[5]; +}; + #endif /* _SSSNIC_CMD_H_ */ diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index cceaa5c8be..8999693027 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -758,6 +758,69 @@ sssnic_ethdev_fw_version_get(struct rte_eth_dev *ethdev, char *fw_version, return 0; } +static int +sssnic_ethdev_flow_ctrl_set(struct rte_eth_dev *ethdev, + struct rte_eth_fc_conf *fc_conf) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + bool autoneg, rx_en, tx_en; + int ret; + + if (fc_conf->autoneg != 0) + autoneg = true; + else + autoneg = 
false; + + if (fc_conf->mode == RTE_ETH_FC_FULL || + fc_conf->mode == RTE_ETH_FC_RX_PAUSE) + rx_en = true; + else + rx_en = false; + + if (fc_conf->mode == RTE_ETH_FC_FULL || + fc_conf->mode == RTE_ETH_FC_TX_PAUSE) + tx_en = true; + else + tx_en = false; + + ret = sssnic_flow_ctrl_set(hw, autoneg, rx_en, tx_en); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to set flow conctrol"); + return ret; + } + + return 0; +} + +static int +sssnic_ethdev_flow_ctrl_get(struct rte_eth_dev *ethdev, + struct rte_eth_fc_conf *fc_conf) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + bool autoneg, rx_en, tx_en; + int ret; + + ret = sssnic_flow_ctrl_get(hw, &autoneg, &rx_en, &tx_en); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to get flow conctrol"); + return ret; + } + + if (autoneg) + fc_conf->autoneg = true; + + if (rx_en && tx_en) + fc_conf->mode = RTE_ETH_FC_FULL; + else if (rx_en) + fc_conf->mode = RTE_ETH_FC_RX_PAUSE; + else if (tx_en) + fc_conf->mode = RTE_ETH_FC_TX_PAUSE; + else + fc_conf->mode = RTE_ETH_FC_NONE; + + return 0; +} + static const struct eth_dev_ops sssnic_ethdev_ops = { .dev_start = sssnic_ethdev_start, .dev_stop = sssnic_ethdev_stop, @@ -799,6 +862,8 @@ static const struct eth_dev_ops sssnic_ethdev_ops = { .rxq_info_get = sssnic_ethdev_rx_queue_info_get, .txq_info_get = sssnic_ethdev_tx_queue_info_get, .fw_version_get = sssnic_ethdev_fw_version_get, + .flow_ctrl_set = sssnic_ethdev_flow_ctrl_set, + .flow_ctrl_get = sssnic_ethdev_flow_ctrl_get, }; static int From patchwork Fri Sep 1 09:35:12 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131075 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 651BA4221E; Fri, 1 Sep 2023 11:39:48 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 6A4CC410FD; Fri, 1 Sep 2023 11:36:26 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.ramaxel.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id DF3A640DCF for ; Fri, 1 Sep 2023 11:36:14 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZZ1k069902; Fri, 1 Sep 2023 17:35:35 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:35 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 30/32] net/sssnic: support VLAN offload and filter Date: Fri, 1 Sep 2023 17:35:12 +0800 Message-ID: <20230901093514.224824-31-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZZ1k069902 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Signed-off-by: Steven Song 
Signed-off-by: Renyong Wan --- doc/guides/nics/features/sssnic.ini | 2 + drivers/net/sssnic/base/sssnic_api.c | 34 +++++++++++ drivers/net/sssnic/base/sssnic_api.h | 1 + drivers/net/sssnic/base/sssnic_cmd.h | 9 +++ drivers/net/sssnic/sssnic_ethdev.c | 87 ++++++++++++++++++++++++++++ 5 files changed, 133 insertions(+) diff --git a/doc/guides/nics/features/sssnic.ini b/doc/guides/nics/features/sssnic.ini index 9bf05cb968..f5738ac934 100644 --- a/doc/guides/nics/features/sssnic.ini +++ b/doc/guides/nics/features/sssnic.ini @@ -19,6 +19,8 @@ RSS hash = Y RSS key update = Y RSS reta update = Y Inner RSS = Y +VLAN filter = Y +VLAN offload = Y L3 checksum offload = Y L4 checksum offload = Y Inner L3 checksum = Y diff --git a/drivers/net/sssnic/base/sssnic_api.c b/drivers/net/sssnic/base/sssnic_api.c index d91896cdd2..68c16c9c1e 100644 --- a/drivers/net/sssnic/base/sssnic_api.c +++ b/drivers/net/sssnic/base/sssnic_api.c @@ -1601,3 +1601,37 @@ sssnic_flow_ctrl_get(struct sssnic_hw *hw, bool *autoneg, bool *rx_en, return 0; } + +int +sssnic_vlan_filter_set(struct sssnic_hw *hw, uint16_t vid, bool add) +{ + int ret; + struct sssnic_vlan_filter_set_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.function = SSSNIC_FUNC_IDX(hw); + cmd.add = add ? 1 : 0; + cmd.vid = vid; + cmd_len = sizeof(cmd); + + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_SET_PORT_VLAN_FILTER_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_SET_PORT_VLAN_FILTER_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} diff --git a/drivers/net/sssnic/base/sssnic_api.h b/drivers/net/sssnic/base/sssnic_api.h index 36544a5dc3..28b235dda2 100644 --- a/drivers/net/sssnic/base/sssnic_api.h +++ b/drivers/net/sssnic/base/sssnic_api.h @@ -469,5 +469,6 @@ int sssnic_flow_ctrl_set(struct sssnic_hw *hw, bool autoneg, bool rx_en, bool tx_en); int sssnic_flow_ctrl_get(struct sssnic_hw *hw, bool *autoneg, bool *rx_en, bool *tx_en); +int sssnic_vlan_filter_set(struct sssnic_hw *hw, uint16_t vid, bool add); #endif /* _SSSNIC_API_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_cmd.h b/drivers/net/sssnic/base/sssnic_cmd.h index d2054fad5a..3e70d0e223 100644 --- a/drivers/net/sssnic/base/sssnic_cmd.h +++ b/drivers/net/sssnic/base/sssnic_cmd.h @@ -425,4 +425,13 @@ struct sssnic_flow_ctrl_cmd { uint8_t resvd1[5]; }; +struct sssnic_vlan_filter_set_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint8_t add; + uint8_t resvd0; + uint16_t vid; + uint16_t resvd1; +}; + #endif /* _SSSNIC_CMD_H_ */ diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index 8999693027..8a1ccff70b 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -16,6 +16,7 @@ #include "sssnic_ethdev_rss.h" static int sssnic_ethdev_init(struct rte_eth_dev *ethdev); +static void sssnic_ethdev_vlan_filter_clean(struct rte_eth_dev *ethdev); static int sssnic_ethdev_infos_get(struct rte_eth_dev *ethdev, @@ -340,6 +341,7 @@ sssnic_ethdev_release(struct rte_eth_dev *ethdev) { struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + sssnic_ethdev_vlan_filter_clean(ethdev); sssnic_ethdev_link_intr_disable(ethdev); 
sssnic_ethdev_tx_queue_all_release(ethdev); sssnic_ethdev_rx_queue_all_release(ethdev); @@ -821,6 +823,89 @@ sssnic_ethdev_flow_ctrl_get(struct rte_eth_dev *ethdev, return 0; } +static int +sssnic_ethdev_vlan_offload_set(struct rte_eth_dev *ethdev, int mask) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + struct rte_eth_conf *dev_conf = ðdev->data->dev_conf; + uint8_t vlan_strip_en; + uint32_t vlan_filter_en; + int ret; + + if (mask & RTE_ETH_VLAN_STRIP_MASK) { + if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) + vlan_strip_en = 1; + else + vlan_strip_en = 0; + + ret = sssnic_vlan_strip_enable_set(hw, vlan_strip_en); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to %s vlan strip offload", + vlan_strip_en ? "enable" : "disable"); + return ret; + } + } + + if (mask & RTE_ETH_VLAN_FILTER_MASK) { + if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) + vlan_filter_en = 1; + else + vlan_filter_en = 0; + + ret = sssnic_vlan_filter_enable_set(hw, vlan_filter_en); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to %s vlan filter offload", + vlan_filter_en ? "enable" : "disable"); + return ret; + } + } + + return 0; +} + +static int +sssnic_ethdev_vlan_filter_get(struct rte_eth_dev *ethdev, uint16_t vlan_id) +{ + struct rte_vlan_filter_conf *vfc = ðdev->data->vlan_filter_conf; + int vidx = vlan_id / 64; + int vbit = vlan_id % 64; + + return !!(vfc->ids[vidx] & RTE_BIT64(vbit)); +} + +static int +sssnic_ethdev_vlan_filter_set(struct rte_eth_dev *ethdev, uint16_t vlan_id, + int on) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + int ret; + + if (sssnic_ethdev_vlan_filter_get(ethdev, vlan_id) == !!on) + return 0; + + ret = sssnic_vlan_filter_set(hw, vlan_id, !!on); + if (ret) { + PMD_DRV_LOG(ERR, + "Failed to %s VLAN filter, vlan_id: %u, port: %u", + on ? "add" : "remove", vlan_id, ethdev->data->port_id); + return ret; + } + + PMD_DRV_LOG(DEBUG, "%s VLAN %u filter to port %u", + on ? 
"Added" : "Removed", vlan_id, ethdev->data->port_id); + + return 0; +} + +static void +sssnic_ethdev_vlan_filter_clean(struct rte_eth_dev *ethdev) +{ + uint16_t vlan_id; + + for (vlan_id = 0; vlan_id <= RTE_ETHER_MAX_VLAN_ID; vlan_id++) + sssnic_ethdev_vlan_filter_set(ethdev, vlan_id, 0); +} + static const struct eth_dev_ops sssnic_ethdev_ops = { .dev_start = sssnic_ethdev_start, .dev_stop = sssnic_ethdev_stop, @@ -864,6 +949,8 @@ static const struct eth_dev_ops sssnic_ethdev_ops = { .fw_version_get = sssnic_ethdev_fw_version_get, .flow_ctrl_set = sssnic_ethdev_flow_ctrl_set, .flow_ctrl_get = sssnic_ethdev_flow_ctrl_get, + .vlan_offload_set = sssnic_ethdev_vlan_offload_set, + .vlan_filter_set = sssnic_ethdev_vlan_filter_set, }; static int From patchwork Fri Sep 1 09:35:13 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131076 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 883224221E; Fri, 1 Sep 2023 11:39:55 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A24A641109; Fri, 1 Sep 2023 11:36:27 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.ramaxel.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id A46C940E4A for ; Fri, 1 Sep 2023 11:36:17 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZaEc069918; Fri, 1 Sep 2023 17:35:36 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:35 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 31/32] net/sssnic: add generic flow ops Date: Fri, 1 Sep 2023 17:35:13 +0800 Message-ID: <20230901093514.224824-32-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 3819ZaEc069918 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- v2: * Fixed 'mask->hdr.src_addr' will always evaluate to 'true'. * Removed error.h from including files. 
--- doc/guides/nics/features/sssnic.ini | 12 + drivers/net/sssnic/base/sssnic_api.c | 264 ++++++ drivers/net/sssnic/base/sssnic_api.h | 22 + drivers/net/sssnic/base/sssnic_cmd.h | 71 ++ drivers/net/sssnic/base/sssnic_hw.h | 3 + drivers/net/sssnic/base/sssnic_misc.h | 7 + drivers/net/sssnic/meson.build | 2 + drivers/net/sssnic/sssnic_ethdev.c | 12 + drivers/net/sssnic/sssnic_ethdev.h | 1 + drivers/net/sssnic/sssnic_ethdev_fdir.c | 1017 +++++++++++++++++++++++ drivers/net/sssnic/sssnic_ethdev_fdir.h | 332 ++++++++ drivers/net/sssnic/sssnic_ethdev_flow.c | 981 ++++++++++++++++++++++ drivers/net/sssnic/sssnic_ethdev_flow.h | 11 + drivers/net/sssnic/sssnic_ethdev_rx.c | 18 + 14 files changed, 2753 insertions(+) create mode 100644 drivers/net/sssnic/sssnic_ethdev_fdir.c create mode 100644 drivers/net/sssnic/sssnic_ethdev_fdir.h create mode 100644 drivers/net/sssnic/sssnic_ethdev_flow.c create mode 100644 drivers/net/sssnic/sssnic_ethdev_flow.h diff --git a/doc/guides/nics/features/sssnic.ini b/doc/guides/nics/features/sssnic.ini index f5738ac934..57e7440d86 100644 --- a/doc/guides/nics/features/sssnic.ini +++ b/doc/guides/nics/features/sssnic.ini @@ -33,3 +33,15 @@ FW version = Y Linux = Y ARMv8 = Y x86-64 = Y + +[rte_flow items] +any = Y +eth = Y +ipv4 = Y +ipv6 = Y +tcp = Y +udp = Y +vxlan = Y + +[rte_flow actions] +queue = Y diff --git a/drivers/net/sssnic/base/sssnic_api.c b/drivers/net/sssnic/base/sssnic_api.c index 68c16c9c1e..0e965442fd 100644 --- a/drivers/net/sssnic/base/sssnic_api.c +++ b/drivers/net/sssnic/base/sssnic_api.c @@ -1635,3 +1635,267 @@ sssnic_vlan_filter_set(struct sssnic_hw *hw, uint16_t vid, bool add) return 0; } + +int +sssnic_tcam_enable_set(struct sssnic_hw *hw, bool enabled) +{ + struct sssnic_tcam_enable_set_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + int ret; + + memset(&cmd, 0, sizeof(cmd)); + cmd_len = sizeof(cmd); + cmd.function = SSSNIC_FUNC_IDX(hw); + cmd.enabled = enabled ? 
1 : 0; + + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_SET_TCAM_ENABLE_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + if (cmd.common.status == SSSNIC_TCAM_CMD_STATUS_UNSUPPORTED) + PMD_DRV_LOG(WARNING, + "SSSNIC_SET_TCAM_ENABLED_CMD is unsupported"); + else + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_SET_TCAM_ENABLE_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +int +sssnic_tcam_flush(struct sssnic_hw *hw) +{ + struct sssnic_tcam_flush_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + int ret; + + memset(&cmd, 0, sizeof(cmd)); + cmd_len = sizeof(cmd); + cmd.function = SSSNIC_FUNC_IDX(hw); + + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, SSSNIC_FLUSH_TCAM_CMD, + SSSNIC_MPU_FUNC_IDX, SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + if (cmd.common.status == SSSNIC_TCAM_CMD_STATUS_UNSUPPORTED) + PMD_DRV_LOG(WARNING, + "SSSNIC_FLUSH_TCAM_CMD is unsupported"); + else + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_FLUSH_TCAM_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + return 0; +} + +int +sssnic_tcam_disable_and_flush(struct sssnic_hw *hw) +{ + int ret; + + ret = sssnic_tcam_enable_set(hw, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Could not disable TCAM"); + return ret; + } + + ret = sssnic_tcam_flush(hw); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Could not flush TCAM"); + return ret; + } + + return 0; +} + +static int +sssnic_tcam_block_cfg(struct sssnic_hw *hw, uint8_t flag, uint16_t *block_idx) +{ + struct sssnic_tcam_block_cfg_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + int ret; + + memset(&cmd, 0, sizeof(cmd)); + cmd_len = sizeof(cmd); + cmd.function = SSSNIC_FUNC_IDX(hw); + cmd.flag = flag; + if (flag == SSSNIC_TCAM_BLOCK_CFG_CMD_FLAG_FREE) + cmd.idx = *block_idx; + + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_TCAM_CFG_BLOCK_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + if (cmd.common.status == SSSNIC_TCAM_CMD_STATUS_UNSUPPORTED) + PMD_DRV_LOG(WARNING, + "SSSNIC_CFG_TCAM_BLOCK_CMD is unsupported"); + else + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_CFG_TCAM_BLOCK_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + if (flag == SSSNIC_TCAM_BLOCK_CFG_CMD_FLAG_ALLOC) + *block_idx = cmd.idx; + + return 0; +} + +int +sssnic_tcam_block_alloc(struct sssnic_hw *hw, uint16_t *block_idx) +{ + if (block_idx == NULL) + return -EINVAL; + + return sssnic_tcam_block_cfg(hw, SSSNIC_TCAM_BLOCK_CFG_CMD_FLAG_ALLOC, + block_idx); +} + +int +sssnic_tcam_block_free(struct sssnic_hw *hw, uint16_t block_idx) +{ + return sssnic_tcam_block_cfg(hw, SSSNIC_TCAM_BLOCK_CFG_CMD_FLAG_FREE, + &block_idx); +} + +int +sssnic_tcam_packet_type_filter_set(struct sssnic_hw *hw, uint8_t ptype, + uint16_t qid, bool enabled) +{ + struct sssnic_tcam_ptype_filter_set_cmd cmd; + struct sssnic_msg msg; + 
uint32_t cmd_len; + int ret; + + memset(&cmd, 0, sizeof(cmd)); + cmd_len = sizeof(cmd); + cmd.function = SSSNIC_FUNC_IDX(hw); + cmd.ptype = ptype; + cmd.qid = qid; + cmd.enable = enabled ? 1 : 0; + + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_TCAM_SET_PTYPE_FILTER_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + if (cmd.common.status == SSSNIC_TCAM_CMD_STATUS_UNSUPPORTED) + PMD_DRV_LOG(WARNING, + "SSSNIC_TCAM_SET_PTYPE_FILTER_CMD is unsupported"); + else + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_TCAM_SET_PTYPE_FILTER_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +int +sssnic_tcam_entry_add(struct sssnic_hw *hw, struct sssnic_tcam_entry *entry) +{ + struct sssnic_tcam_entry_add_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + int ret; + + memset(&cmd, 0, sizeof(cmd)); + cmd_len = sizeof(cmd); + cmd.function = SSSNIC_FUNC_IDX(hw); + rte_memcpy(&cmd.data, entry, sizeof(cmd.data)); + + if (entry->index >= SSSNIC_TCAM_MAX_ENTRY_NUM) { + PMD_DRV_LOG(ERR, "Invalid TCAM entry index: %u", entry->index); + return -EINVAL; + } + + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_ADD_TCAM_ENTRY_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + if (cmd.common.status == SSSNIC_TCAM_CMD_STATUS_UNSUPPORTED) + PMD_DRV_LOG(WARNING, + "SSSNIC_ADD_TCAM_ENTRY_CMD is unsupported"); + else + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_ADD_TCAM_ENTRY_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +int +sssnic_tcam_entry_del(struct sssnic_hw *hw, uint32_t entry_idx) +{ + struct sssnic_tcam_entry_del_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + int ret; + + memset(&cmd, 0, sizeof(cmd)); + cmd_len = sizeof(cmd); + cmd.function = SSSNIC_FUNC_IDX(hw); + cmd.start = entry_idx; + cmd.num = 1; + + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_DEL_TCAM_ENTRY_CMD, SSSNIC_MPU_FUNC_IDX, + SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + if (cmd.common.status == SSSNIC_TCAM_CMD_STATUS_UNSUPPORTED) + PMD_DRV_LOG(WARNING, + "SSSNIC_ADD_TCAM_ENTRY_CMD is unsupported"); + else + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_ADD_TCAM_ENTRY_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} diff --git a/drivers/net/sssnic/base/sssnic_api.h b/drivers/net/sssnic/base/sssnic_api.h index 28b235dda2..7a02ec61ee 100644 --- a/drivers/net/sssnic/base/sssnic_api.h +++ b/drivers/net/sssnic/base/sssnic_api.h @@ -409,6 +409,18 @@ struct sssnic_fw_version { char time[SSSNIC_FW_VERSION_LEN]; }; +struct sssnic_tcam_entry { + uint32_t index; + struct { + uint32_t qid; + uint32_t resvd; + } result; + struct { + uint8_t data0[SSSNIC_TCAM_KEY_SIZE]; + uint8_t data1[SSSNIC_TCAM_KEY_SIZE]; + } key; +}; + int sssnic_msix_attr_get(struct sssnic_hw *hw, uint16_t msix_idx, struct sssnic_msix_attr 
*attr); int sssnic_msix_attr_set(struct sssnic_hw *hw, uint16_t msix_idx, @@ -470,5 +482,15 @@ int sssnic_flow_ctrl_set(struct sssnic_hw *hw, bool autoneg, bool rx_en, int sssnic_flow_ctrl_get(struct sssnic_hw *hw, bool *autoneg, bool *rx_en, bool *tx_en); int sssnic_vlan_filter_set(struct sssnic_hw *hw, uint16_t vid, bool add); +int sssnic_tcam_enable_set(struct sssnic_hw *hw, bool enabled); +int sssnic_tcam_flush(struct sssnic_hw *hw); +int sssnic_tcam_disable_and_flush(struct sssnic_hw *hw); +int sssnic_tcam_block_alloc(struct sssnic_hw *hw, uint16_t *block_idx); +int sssnic_tcam_block_free(struct sssnic_hw *hw, uint16_t block_idx); +int sssnic_tcam_packet_type_filter_set(struct sssnic_hw *hw, uint8_t ptype, + uint16_t qid, bool enabled); +int sssnic_tcam_entry_add(struct sssnic_hw *hw, + struct sssnic_tcam_entry *entry); +int sssnic_tcam_entry_del(struct sssnic_hw *hw, uint32_t entry_idx); #endif /* _SSSNIC_API_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_cmd.h b/drivers/net/sssnic/base/sssnic_cmd.h index 3e70d0e223..c75cb0dad3 100644 --- a/drivers/net/sssnic/base/sssnic_cmd.h +++ b/drivers/net/sssnic/base/sssnic_cmd.h @@ -75,6 +75,16 @@ enum sssnic_rss_cmd_id { SSSNIC_SET_RSS_TYPE_CMD = 65, }; +#define SSSNIC_TCAM_CMD_STATUS_UNSUPPORTED 0xff +enum sssnic_tcam_cmd_id { + SSSNIC_ADD_TCAM_ENTRY_CMD = 80, + SSSNIC_DEL_TCAM_ENTRY_CMD = 81, + SSSNIC_FLUSH_TCAM_CMD = 83, + SSSNIC_TCAM_CFG_BLOCK_CMD = 84, + SSSNIC_SET_TCAM_ENABLE_CMD = 85, + SSSNIC_TCAM_SET_PTYPE_FILTER_CMD = 91, +}; + struct sssnic_cmd_common { uint8_t status; uint8_t version; @@ -434,4 +444,65 @@ struct sssnic_vlan_filter_set_cmd { uint16_t resvd1; }; +struct sssnic_tcam_enable_set_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint8_t enabled; + uint8_t resvd[5]; +}; + +struct sssnic_tcam_flush_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint16_t resvd; +}; + +#define SSSNIC_TCAM_BLOCK_CFG_CMD_FLAG_ALLOC 1 +#define SSSNIC_TCAM_BLOCK_CFG_CMD_FLAG_FREE 0 +struct sssnic_tcam_block_cfg_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint8_t flag; /* SSSNIC_TCAM_BLOCK_CFG_CMD_FLAG_XX */ + uint8_t type; + uint16_t idx; + uint16_t resvd; +}; + +struct sssnic_tcam_ptype_filter_set_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint16_t resvd0; + uint8_t enable; + uint8_t ptype; + uint8_t qid; + uint8_t resvd1; +}; + +struct sssnic_tcam_entry_add_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint8_t type; + uint8_t resv; + struct { + uint32_t index; + struct { + uint32_t qid; + uint32_t resvd; + } result; + struct { + uint8_t d0[SSSNIC_TCAM_KEY_SIZE]; + uint8_t d1[SSSNIC_TCAM_KEY_SIZE]; + } key; + } data; +}; + +struct sssnic_tcam_entry_del_cmd { + struct sssnic_cmd_common common; + uint16_t function; + uint8_t type; + uint8_t resv; + uint32_t start; /* start index of entry to be deleted */ + uint32_t num; /* number of entries to be deleted */ +}; + #endif /* _SSSNIC_CMD_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_hw.h b/drivers/net/sssnic/base/sssnic_hw.h index 4820212543..6a2d980d5a 100644 --- a/drivers/net/sssnic/base/sssnic_hw.h +++ b/drivers/net/sssnic/base/sssnic_hw.h @@ -96,6 +96,9 @@ enum sssnic_module { SSSNIC_NETIF_MODULE = 14, }; +#define SSSNIC_TCAM_KEY_SIZE 44 +#define SSSNIC_TCAM_MAX_ENTRY_NUM 4096 + int sssnic_hw_init(struct sssnic_hw *hw); void sssnic_hw_shutdown(struct sssnic_hw *hw); void sssnic_msix_state_set(struct sssnic_hw *hw, uint16_t msix_id, int state); diff --git a/drivers/net/sssnic/base/sssnic_misc.h 
b/drivers/net/sssnic/base/sssnic_misc.h index e30691caef..a1e268710e 100644 --- a/drivers/net/sssnic/base/sssnic_misc.h +++ b/drivers/net/sssnic/base/sssnic_misc.h @@ -42,4 +42,11 @@ sssnic_mem_be_to_cpu_32(void *in, void *out, int size) } } +static inline bool +sssnic_is_zero_ipv6_addr(const void *ipv6_addr) +{ + const uint64_t *ddw = ipv6_addr; + return ddw[0] == 0 && ddw[1] == 0; +} + #endif /* _SSSNIC_MISC_H_ */ diff --git a/drivers/net/sssnic/meson.build b/drivers/net/sssnic/meson.build index 3541b75c30..03d60f08ec 100644 --- a/drivers/net/sssnic/meson.build +++ b/drivers/net/sssnic/meson.build @@ -23,4 +23,6 @@ sources = files( 'sssnic_ethdev_tx.c', 'sssnic_ethdev_stats.c', 'sssnic_ethdev_rss.c', + 'sssnic_ethdev_fdir.c', + 'sssnic_ethdev_flow.c', ) diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c index 8a1ccff70b..545833fb55 100644 --- a/drivers/net/sssnic/sssnic_ethdev.c +++ b/drivers/net/sssnic/sssnic_ethdev.c @@ -14,6 +14,8 @@ #include "sssnic_ethdev_tx.h" #include "sssnic_ethdev_stats.h" #include "sssnic_ethdev_rss.h" +#include "sssnic_ethdev_fdir.h" +#include "sssnic_ethdev_flow.h" static int sssnic_ethdev_init(struct rte_eth_dev *ethdev); static void sssnic_ethdev_vlan_filter_clean(struct rte_eth_dev *ethdev); @@ -345,6 +347,7 @@ sssnic_ethdev_release(struct rte_eth_dev *ethdev) sssnic_ethdev_link_intr_disable(ethdev); sssnic_ethdev_tx_queue_all_release(ethdev); sssnic_ethdev_rx_queue_all_release(ethdev); + sssnic_ethdev_fdir_shutdown(ethdev); sssnic_ethdev_mac_addrs_clean(ethdev); sssnic_hw_shutdown(hw); rte_free(hw); @@ -951,6 +954,7 @@ static const struct eth_dev_ops sssnic_ethdev_ops = { .flow_ctrl_get = sssnic_ethdev_flow_ctrl_get, .vlan_offload_set = sssnic_ethdev_vlan_offload_set, .vlan_filter_set = sssnic_ethdev_vlan_filter_set, + .flow_ops_get = sssnic_ethdev_flow_ops_get, }; static int @@ -991,6 +995,12 @@ sssnic_ethdev_init(struct rte_eth_dev *ethdev) goto mac_addrs_init_fail; } + ret = sssnic_ethdev_fdir_init(ethdev); + if (ret) { + PMD_DRV_LOG(ERR, "Failed to initialize fdir info"); + goto fdir_init_fail; + } + netdev->max_num_rxq = SSSNIC_MAX_NUM_RXQ(hw); netdev->max_num_txq = SSSNIC_MAX_NUM_TXQ(hw); @@ -1001,6 +1011,8 @@ sssnic_ethdev_init(struct rte_eth_dev *ethdev) return 0; +fdir_init_fail: + sssnic_ethdev_mac_addrs_clean(ethdev); mac_addrs_init_fail: sssnic_hw_shutdown(0); return ret; diff --git a/drivers/net/sssnic/sssnic_ethdev.h b/drivers/net/sssnic/sssnic_ethdev.h index f19b2bd88f..0ca933b53b 100644 --- a/drivers/net/sssnic/sssnic_ethdev.h +++ b/drivers/net/sssnic/sssnic_ethdev.h @@ -82,6 +82,7 @@ struct sssnic_netdev { void *hw; struct rte_ether_addr *mcast_addrs; struct rte_ether_addr default_addr; + struct sssnic_ethdev_fdir_info *fdir_info; uint16_t max_num_txq; uint16_t max_num_rxq; uint16_t num_started_rxqs; diff --git a/drivers/net/sssnic/sssnic_ethdev_fdir.c b/drivers/net/sssnic/sssnic_ethdev_fdir.c new file mode 100644 index 0000000000..cec9fb219f --- /dev/null +++ b/drivers/net/sssnic/sssnic_ethdev_fdir.c @@ -0,0 +1,1017 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. 
+ */ + +#include +#include +#include + +#include "sssnic_log.h" +#include "sssnic_ethdev.h" +#include "sssnic_ethdev_fdir.h" +#include "base/sssnic_hw.h" +#include "base/sssnic_api.h" + +#define SSSNIC_NETDEV_FDIR_INFO(netdev) ((netdev)->fdir_info) +#define SSSNIC_ETHDEV_FDIR_INFO(ethdev) \ + (SSSNIC_NETDEV_FDIR_INFO(SSSNIC_ETHDEV_PRIVATE(ethdev))) + +enum { + SSSNIC_ETHDEV_PTYPE_INVAL = 0, + SSSNIC_ETHDEV_PTYPE_ARP = 1, + SSSNIC_ETHDEV_PTYPE_ARP_REQ = 2, + SSSNIC_ETHDEV_PTYPE_ARP_REP = 3, + SSSNIC_ETHDEV_PTYPE_RARP = 4, + SSSNIC_ETHDEV_PTYPE_LACP = 5, + SSSNIC_ETHDEV_PTYPE_LLDP = 6, + SSSNIC_ETHDEV_PTYPE_OAM = 7, + SSSNIC_ETHDEV_PTYPE_CDCP = 8, + SSSNIC_ETHDEV_PTYPE_CNM = 9, + SSSNIC_ETHDEV_PTYPE_ECP = 10, +}; + +#define SSSNIC_ETHDEV_TCAM_ENTRY_INVAL_IDX 0xffff +struct sssnic_ethdev_fdir_entry { + TAILQ_ENTRY(sssnic_ethdev_fdir_entry) node; + struct sssnic_ethdev_tcam_block *tcam_block; + uint32_t tcam_entry_idx; + int enabled; + struct sssnic_ethdev_fdir_rule *rule; +}; + +#define SSSNIC_ETHDEV_TCAM_BLOCK_SZ 16 +struct sssnic_ethdev_tcam_block { + TAILQ_ENTRY(sssnic_ethdev_tcam_block) node; + uint16_t id; + uint16_t used_entries; + uint8_t entries_status[SSSNIC_ETHDEV_TCAM_BLOCK_SZ]; /* 0: IDLE, 1: USED */ +}; + +struct sssnic_ethdev_tcam { + TAILQ_HEAD(, sssnic_ethdev_tcam_block) block_list; + uint16_t num_blocks; + uint16_t used_entries; /* Count of used entries */ + int enabled; +}; + +struct sssnic_ethdev_fdir_info { + struct rte_eth_dev *ethdev; + struct sssnic_ethdev_tcam tcam; + uint32_t num_entries; + TAILQ_HEAD(, sssnic_ethdev_fdir_entry) ethertype_entry_list; + TAILQ_HEAD(, sssnic_ethdev_fdir_entry) flow_entry_list; +}; + +static int +sssnic_ethdev_tcam_init(struct rte_eth_dev *ethdev) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + struct sssnic_ethdev_fdir_info *fdir_info; + + fdir_info = SSSNIC_ETHDEV_FDIR_INFO(ethdev); + TAILQ_INIT(&fdir_info->tcam.block_list); + + sssnic_tcam_disable_and_flush(hw); + + return 0; +} + +static void +sssnic_ethdev_tcam_shutdown(struct rte_eth_dev *ethdev) +{ + struct sssnic_ethdev_fdir_info *fdir_info; + struct sssnic_ethdev_tcam *tcam; + struct sssnic_ethdev_tcam_block *block, *tmp; + + fdir_info = SSSNIC_ETHDEV_FDIR_INFO(ethdev); + tcam = &fdir_info->tcam; + + RTE_TAILQ_FOREACH_SAFE(block, &tcam->block_list, node, tmp) + { + TAILQ_REMOVE(&tcam->block_list, block, node); + rte_free(block); + } +} + +static int +sssnic_ethdev_tcam_enable(struct rte_eth_dev *ethdev) +{ + struct sssnic_ethdev_fdir_info *fdir_info; + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + int ret; + + fdir_info = SSSNIC_ETHDEV_FDIR_INFO(ethdev); + + if (!fdir_info->tcam.enabled) { + ret = sssnic_tcam_enable_set(hw, 1); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to enable TCAM"); + return ret; + } + + fdir_info->tcam.enabled = 1; + } + + return 0; +} + +static int +sssnic_ethdev_tcam_disable(struct rte_eth_dev *ethdev) +{ + struct sssnic_ethdev_fdir_info *fdir_info; + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + int ret; + + fdir_info = SSSNIC_ETHDEV_FDIR_INFO(ethdev); + + if (fdir_info->tcam.enabled) { + ret = sssnic_tcam_enable_set(hw, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to enable TCAM"); + return ret; + } + + fdir_info->tcam.enabled = 0; + } + + return 0; +} + +static int +sssnic_ethdev_tcam_block_alloc(struct rte_eth_dev *ethdev, + struct sssnic_ethdev_tcam_block **block) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + struct sssnic_ethdev_fdir_info *fdir_info = + SSSNIC_ETHDEV_FDIR_INFO(ethdev); + struct 
sssnic_ethdev_tcam_block *new; + int ret; + + new = rte_zmalloc("sssnic_tcam_block", sizeof(*new), 0); + if (new == NULL) { + PMD_DRV_LOG(ERR, + "Failed to allocate memory for tcam block struct!"); + return -ENOMEM; + } + + ret = sssnic_tcam_block_alloc(hw, &new->id); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to alloc tcam block!"); + rte_free(new); + return ret; + } + + TAILQ_INSERT_HEAD(&fdir_info->tcam.block_list, new, node); + fdir_info->tcam.num_blocks++; + + if (block != NULL) + *block = new; + + return 0; +} + +static int +sssnic_ethdev_tcam_block_free(struct rte_eth_dev *ethdev, + struct sssnic_ethdev_tcam_block *block) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + struct sssnic_ethdev_fdir_info *fdir_info = + SSSNIC_ETHDEV_FDIR_INFO(ethdev); + int ret; + + ret = sssnic_tcam_block_free(hw, block->id); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to free tcam block:%u!", block->id); + return ret; + } + + TAILQ_REMOVE(&fdir_info->tcam.block_list, block, node); + fdir_info->tcam.num_blocks--; + rte_free(block); + + return 0; +} + +static struct sssnic_ethdev_tcam_block * +sssnic_ethdev_available_tcam_block_lookup(struct sssnic_ethdev_tcam *tcam) +{ + struct sssnic_ethdev_tcam_block *block; + + TAILQ_FOREACH(block, &tcam->block_list, node) + { + if (block->used_entries < SSSNIC_ETHDEV_TCAM_BLOCK_SZ) + return block; + } + + return NULL; +} + +static int +sssnic_ethdev_tcam_block_entry_alloc(struct sssnic_ethdev_tcam_block *block, + uint32_t *entry_idx) +{ + uint32_t i; + + for (i = 0; i < SSSNIC_ETHDEV_TCAM_BLOCK_SZ; i++) { + if (block->entries_status[i] == 0) { + *entry_idx = i; + block->entries_status[i] = 1; + block->used_entries++; + return 0; + } + } + + return -ENOMEM; +} + +static int +sssnic_ethdev_tcam_block_entry_free(struct sssnic_ethdev_tcam_block *block, + uint32_t entry_idx) +{ + if (block != NULL && entry_idx < SSSNIC_ETHDEV_TCAM_BLOCK_SZ) { + if (block->entries_status[entry_idx] == 1) { + block->entries_status[entry_idx] = 0; + block->used_entries--; + return 0; /* found and freed */ + } + } + return -1; /* not found */ +} + +static int +sssnic_ethdev_tcam_entry_alloc(struct rte_eth_dev *ethdev, + struct sssnic_ethdev_tcam_block **block, uint32_t *entry_idx) +{ + struct sssnic_ethdev_fdir_info *fdir_info = + SSSNIC_ETHDEV_FDIR_INFO(ethdev); + struct sssnic_ethdev_tcam *tcam; + struct sssnic_ethdev_tcam_block *tcam_block; + int new_block = 0; + uint32_t eid; + int ret; + + tcam = &fdir_info->tcam; + + if (tcam->num_blocks == 0 || + tcam->used_entries >= + tcam->num_blocks * SSSNIC_ETHDEV_TCAM_BLOCK_SZ) { + ret = sssnic_ethdev_tcam_block_alloc(ethdev, &tcam_block); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "No TCAM memory, used block count: %u, used entries count:%u", + tcam->num_blocks, tcam->used_entries); + return ret; + } + new_block = 1; + } else { + tcam_block = sssnic_ethdev_available_tcam_block_lookup(tcam); + if (tcam_block == NULL) { + PMD_DRV_LOG(CRIT, + "No available TCAM block, used block count:%u, used entries count:%u", + tcam->num_blocks, tcam->used_entries); + return -ENOMEM; + } + } + + ret = sssnic_ethdev_tcam_block_entry_alloc(tcam_block, &eid); + if (ret != 0) { + PMD_DRV_LOG(CRIT, + "No available entry in TCAM block, block idx:%u, used entries:%u", + tcam_block->id, tcam_block->used_entries); + if (unlikely(new_block)) + sssnic_ethdev_tcam_block_free(ethdev, tcam_block); + + return -ENOMEM; + } + + tcam->used_entries++; + + *block = tcam_block; + *entry_idx = eid; + + return 0; +} + +static int +sssnic_ethdev_tcam_entry_free(struct 
rte_eth_dev *ethdev, + struct sssnic_ethdev_tcam_block *tcam_block, uint32_t entry_idx) +{ + int ret; + struct sssnic_ethdev_fdir_info *fdir_info = + SSSNIC_ETHDEV_FDIR_INFO(ethdev); + struct sssnic_ethdev_tcam *tcam; + + tcam = &fdir_info->tcam; + + ret = sssnic_ethdev_tcam_block_entry_free(tcam_block, entry_idx); + if (ret != 0) + return 0; /* not found was considered as success */ + + if (tcam_block->used_entries == 0) { + ret = sssnic_ethdev_tcam_block_free(ethdev, tcam_block); + if (ret != 0) + PMD_DRV_LOG(ERR, "Failed to free TCAM block:%u", + tcam_block->id); + } + + tcam->used_entries--; + return 0; +} + +static void +sssnic_ethdev_tcam_entry_init(struct sssnic_ethdev_fdir_flow_match *flow, + struct sssnic_tcam_entry *entry) +{ + uint8_t i; + uint8_t *flow_key; + uint8_t *flow_mask; + + flow_key = (uint8_t *)&flow->key; + flow_mask = (uint8_t *)&flow->mask; + + for (i = 0; i < sizeof(entry->key.data0); i++) { + entry->key.data1[i] = flow_key[i] & flow_mask[i]; + entry->key.data0[i] = + entry->key.data1[i] ^ flow_mask[i]; + } +} + + +static struct sssnic_ethdev_fdir_entry * +sssnic_ethdev_fdir_entry_lookup(struct sssnic_ethdev_fdir_info *fdir_info, + struct sssnic_ethdev_fdir_rule *rule) +{ + struct sssnic_ethdev_fdir_entry *e; + struct sssnic_ethdev_fdir_match *m; + struct sssnic_ethdev_fdir_match *match = &rule->match; + + /* fast lookup */ + if (rule->cookie != NULL) + return (struct sssnic_ethdev_fdir_entry *)rule->cookie; + + if (rule->match.type == SSSNIC_ETHDEV_FDIR_MATCH_FLOW) { + TAILQ_FOREACH(e, &fdir_info->flow_entry_list, node) + { + m = &e->rule->match; + if (memcmp(&match->flow, &m->flow, sizeof(m->flow)) == + 0) + return e; + } + } else if (rule->match.type == SSSNIC_ETHDEV_FDIR_MATCH_ETHERTYPE) { + TAILQ_FOREACH(e, &fdir_info->ethertype_entry_list, node) + { + m = &e->rule->match; + if (match->ethertype.key.ether_type == + m->ethertype.key.ether_type) + return e; + } + } + + return NULL; +} + +static inline void +sssnic_ethdev_fdir_entry_add(struct sssnic_ethdev_fdir_info *fdir_info, + struct sssnic_ethdev_fdir_entry *entry) +{ + if (entry->rule->match.type == SSSNIC_ETHDEV_FDIR_MATCH_ETHERTYPE) + TAILQ_INSERT_TAIL(&fdir_info->ethertype_entry_list, entry, + node); + else + TAILQ_INSERT_TAIL(&fdir_info->flow_entry_list, entry, node); + + fdir_info->num_entries++; +} + +static inline void +sssnic_ethdev_fdir_entry_del(struct sssnic_ethdev_fdir_info *fdir_info, + struct sssnic_ethdev_fdir_entry *entry) +{ + if (entry->rule->match.type == SSSNIC_ETHDEV_FDIR_MATCH_ETHERTYPE) + TAILQ_REMOVE(&fdir_info->ethertype_entry_list, entry, node); + else + TAILQ_REMOVE(&fdir_info->flow_entry_list, entry, node); + + fdir_info->num_entries--; +} + +static int +sssnic_ethdev_fdir_arp_pkt_filter_set(struct rte_eth_dev *ethdev, uint16_t qid, + int enabled) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + int ret; + + ret = sssnic_tcam_packet_type_filter_set(hw, SSSNIC_ETHDEV_PTYPE_ARP, + qid, enabled); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to %s ARP packet filter!", + enabled ? "enable" : "disable"); + return ret; + } + + ret = sssnic_tcam_packet_type_filter_set(hw, + SSSNIC_ETHDEV_PTYPE_ARP_REQ, qid, enabled); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to %s ARP request packet filter!", + enabled ? "enable" : "disable"); + goto set_arp_req_fail; + } + + ret = sssnic_tcam_packet_type_filter_set(hw, + SSSNIC_ETHDEV_PTYPE_ARP_REP, qid, enabled); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to %s ARP reply packet filter!", + enabled ? 
"enable" : "disable"); + goto set_arp_rep_fail; + } + + return 0; + +set_arp_rep_fail: + sssnic_tcam_packet_type_filter_set(hw, SSSNIC_ETHDEV_PTYPE_ARP_REQ, qid, + !enabled); +set_arp_req_fail: + sssnic_tcam_packet_type_filter_set(hw, SSSNIC_ETHDEV_PTYPE_ARP, qid, + !enabled); + + return ret; +} + +static int +sssnic_ethdev_fdir_slow_pkt_filter_set(struct rte_eth_dev *ethdev, uint16_t qid, + int enabled) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + int ret; + + ret = sssnic_tcam_packet_type_filter_set(hw, SSSNIC_ETHDEV_PTYPE_LACP, + qid, enabled); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to %s LACP packet filter!", + enabled ? "enable" : "disable"); + return ret; + } + + ret = sssnic_tcam_packet_type_filter_set(hw, SSSNIC_ETHDEV_PTYPE_OAM, + qid, enabled); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to %s OAM packet filter!", + enabled ? "enable" : "disable"); + + sssnic_tcam_packet_type_filter_set(hw, SSSNIC_ETHDEV_PTYPE_LACP, + qid, !enabled); + } + + return ret; +} + +static int +sssnic_ethdev_fdir_lldp_pkt_filter_set(struct rte_eth_dev *ethdev, uint16_t qid, + int enabled) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + int ret; + + ret = sssnic_tcam_packet_type_filter_set(hw, SSSNIC_ETHDEV_PTYPE_LLDP, + qid, enabled); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to %s LLDP packet filter!", + enabled ? "enable" : "disable"); + return ret; + } + + ret = sssnic_tcam_packet_type_filter_set(hw, SSSNIC_ETHDEV_PTYPE_CDCP, + qid, enabled); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to %s CDCP packet filter!", + enabled ? "enable" : "disable"); + + sssnic_tcam_packet_type_filter_set(hw, SSSNIC_ETHDEV_PTYPE_LLDP, + qid, !enabled); + } + + return ret; +} + +static int +sssnic_ethdev_fdir_pkt_filter_set(struct rte_eth_dev *ethdev, + uint16_t ether_type, uint16_t qid, int enabled) +{ + int ret; + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + + switch (ether_type) { + case RTE_ETHER_TYPE_ARP: + ret = sssnic_ethdev_fdir_arp_pkt_filter_set(ethdev, qid, + enabled); + break; + case RTE_ETHER_TYPE_RARP: + ret = sssnic_tcam_packet_type_filter_set(hw, + SSSNIC_ETHDEV_PTYPE_RARP, qid, enabled); + break; + case RTE_ETHER_TYPE_SLOW: + ret = sssnic_ethdev_fdir_slow_pkt_filter_set(ethdev, qid, + enabled); + break; + case RTE_ETHER_TYPE_LLDP: + ret = sssnic_ethdev_fdir_lldp_pkt_filter_set(ethdev, qid, + enabled); + break; + case 0x22e7: /* CNM ether type */ + ret = sssnic_tcam_packet_type_filter_set(hw, + SSSNIC_ETHDEV_PTYPE_CNM, qid, enabled); + break; + case 0x8940: /* ECP ether type */ + ret = sssnic_tcam_packet_type_filter_set(hw, + SSSNIC_ETHDEV_PTYPE_ECP, qid, enabled); + break; + default: + PMD_DRV_LOG(ERR, "Ethertype 0x%x is not supported to filter!", + ether_type); + return -EINVAL; + } + + if (ret != 0) + PMD_DRV_LOG(ERR, "Failed to %s filter for ether type: %x.", + enabled ? 
"enable" : "disable", ether_type); + + return ret; +} + +static inline struct sssnic_ethdev_fdir_entry * +sssnic_ethdev_fdir_entry_alloc(void) +{ + struct sssnic_ethdev_fdir_entry *e; + + e = rte_zmalloc("sssnic_fdir_entry", sizeof(*e), 0); + if (e != NULL) + e->tcam_entry_idx = SSSNIC_ETHDEV_TCAM_ENTRY_INVAL_IDX; + else + PMD_DRV_LOG(ERR, + "Failed to allocate memory for fdir entry struct!"); + + return e; +} + +static inline void +sssnic_ethdev_fdir_entry_free(struct sssnic_ethdev_fdir_entry *e) +{ + if (e != NULL) + rte_free(e); +} + +/* Apply fdir rule to HW */ +static int +sssnic_ethdev_fdir_entry_enable(struct rte_eth_dev *ethdev, + struct sssnic_ethdev_fdir_entry *entry) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + struct sssnic_tcam_entry tcam_entry; + int ret; + + if (unlikely(entry->rule == NULL)) { + PMD_DRV_LOG(ERR, "fdir rule is null!"); + return -EINVAL; + } + + if (entry->enabled) + return 0; + + if (entry->tcam_entry_idx != SSSNIC_ETHDEV_TCAM_ENTRY_INVAL_IDX) { + memset(&tcam_entry, 0, sizeof(tcam_entry)); + sssnic_ethdev_tcam_entry_init(&entry->rule->match.flow, + &tcam_entry); + tcam_entry.result.qid = entry->rule->action.qid; + tcam_entry.index = + entry->tcam_entry_idx + + (entry->tcam_block->id * SSSNIC_ETHDEV_TCAM_BLOCK_SZ); + + ret = sssnic_tcam_entry_add(hw, &tcam_entry); + if (ret != 0) + PMD_DRV_LOG(ERR, + "Failed to add TCAM entry, block:%u, entry:%u, tcam_entry:%u", + entry->tcam_block->id, entry->tcam_entry_idx, + tcam_entry.index); + + } else { + ret = sssnic_ethdev_fdir_pkt_filter_set(ethdev, + entry->rule->match.ethertype.key.ether_type, + entry->rule->action.qid, 1); + if (ret != 0) + PMD_DRV_LOG(ERR, "Failed to enable ethertype(%x) filter", + entry->rule->match.ethertype.key.ether_type); + } + + entry->enabled = 1; + + return ret; +} + +/* remove fdir rule from HW */ +static int +sssnic_ethdev_fdir_entry_disable(struct rte_eth_dev *ethdev, + struct sssnic_ethdev_fdir_entry *entry) +{ + struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev); + uint32_t tcam_entry_idx; + int ret; + + if (unlikely(entry->rule == NULL)) { + PMD_DRV_LOG(ERR, "fdir rule is null!"); + return -EINVAL; + } + + if (!entry->enabled) + return 0; + + if (entry->tcam_entry_idx != SSSNIC_ETHDEV_TCAM_ENTRY_INVAL_IDX) { + tcam_entry_idx = + entry->tcam_entry_idx + + (entry->tcam_block->id * SSSNIC_ETHDEV_TCAM_BLOCK_SZ); + + ret = sssnic_tcam_entry_del(hw, tcam_entry_idx); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "Failed to del TCAM entry, block:%u, entry:%u", + entry->tcam_block->id, entry->tcam_entry_idx); + return ret; + } + } else { + ret = sssnic_ethdev_fdir_pkt_filter_set(ethdev, + entry->rule->match.ethertype.key.ether_type, + entry->rule->action.qid, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "Failed to disable ethertype(%x) filter", + entry->rule->match.ethertype.key.ether_type); + return ret; + } + } + + entry->enabled = 0; + + return 0; +} + +static int +sssnic_ethdev_fdir_ethertype_rule_add(struct sssnic_ethdev_fdir_info *fdir_info, + struct sssnic_ethdev_fdir_rule *rule) +{ + struct sssnic_ethdev_fdir_entry *fdir_entry; + int ret; + + fdir_entry = sssnic_ethdev_fdir_entry_alloc(); + if (fdir_entry == NULL) + return -ENOMEM; + + fdir_entry->rule = rule; + + ret = sssnic_ethdev_fdir_entry_enable(fdir_info->ethdev, fdir_entry); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to enable ethertype(%u) entry", + rule->match.ethertype.key.ether_type); + + sssnic_ethdev_fdir_entry_free(fdir_entry); + + return ret; + } + + rule->cookie = fdir_entry; + 
sssnic_ethdev_fdir_entry_add(fdir_info, fdir_entry); + + return 0; +} + +static int +sssnic_ethdev_fdir_ethertype_rule_del(struct sssnic_ethdev_fdir_info *fdir_info, + struct sssnic_ethdev_fdir_rule *rule) +{ + struct sssnic_ethdev_fdir_entry *fdir_entry; + int ret; + + fdir_entry = (struct sssnic_ethdev_fdir_entry *)rule->cookie; + + ret = sssnic_ethdev_fdir_entry_disable(fdir_info->ethdev, fdir_entry); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to disable ethertype(%u) entry", + rule->match.ethertype.key.ether_type); + return ret; + } + + rule->cookie = NULL; + sssnic_ethdev_fdir_entry_del(fdir_info, fdir_entry); + sssnic_ethdev_fdir_entry_free(fdir_entry); + + return 0; +} + +static int +sssnic_ethdev_fdir_flow_rule_add(struct sssnic_ethdev_fdir_info *fdir_info, + struct sssnic_ethdev_fdir_rule *rule) +{ + struct sssnic_ethdev_fdir_entry *fdir_entry; + int ret; + + fdir_entry = sssnic_ethdev_fdir_entry_alloc(); + if (fdir_entry == NULL) + return -ENOMEM; + + fdir_entry->rule = rule; + + ret = sssnic_ethdev_tcam_entry_alloc(fdir_info->ethdev, + &fdir_entry->tcam_block, &fdir_entry->tcam_entry_idx); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to alloc TCAM entry"); + goto tcam_entry_alloc_fail; + } + + ret = sssnic_ethdev_fdir_entry_enable(fdir_info->ethdev, fdir_entry); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to enable fdir flow entry"); + goto fdir_entry_enable_fail; + } + + rule->cookie = fdir_entry; + sssnic_ethdev_fdir_entry_add(fdir_info, fdir_entry); + + return 0; + +fdir_entry_enable_fail: + sssnic_ethdev_tcam_entry_free(fdir_info->ethdev, fdir_entry->tcam_block, + fdir_entry->tcam_entry_idx); +tcam_entry_alloc_fail: + sssnic_ethdev_fdir_entry_free(fdir_entry); + + return ret; +} + +static int +sssnic_ethdev_fdir_flow_rule_del(struct sssnic_ethdev_fdir_info *fdir_info, + struct sssnic_ethdev_fdir_rule *rule) +{ + struct sssnic_ethdev_fdir_entry *fdir_entry; + int ret; + + fdir_entry = (struct sssnic_ethdev_fdir_entry *)rule->cookie; + + ret = sssnic_ethdev_fdir_entry_disable(fdir_info->ethdev, fdir_entry); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to disable fdir flow entry"); + return ret; + } + + rule->cookie = NULL; + sssnic_ethdev_fdir_entry_del(fdir_info, fdir_entry); + sssnic_ethdev_fdir_entry_free(fdir_entry); + + return 0; +} + +int +sssnic_ethdev_fdir_rule_add(struct rte_eth_dev *ethdev, + struct sssnic_ethdev_fdir_rule *rule) +{ + struct sssnic_ethdev_fdir_info *fdir_info; + int ret; + + fdir_info = SSSNIC_ETHDEV_FDIR_INFO(ethdev); + + if (sssnic_ethdev_fdir_entry_lookup(fdir_info, rule) != NULL) { + PMD_DRV_LOG(ERR, "FDIR rule exists!"); + return -EEXIST; + } + + if (rule->match.type == SSSNIC_ETHDEV_FDIR_MATCH_ETHERTYPE) { + ret = sssnic_ethdev_fdir_ethertype_rule_add(fdir_info, rule); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to add fdir ethertype rule"); + return ret; + } + PMD_DRV_LOG(DEBUG, + "Added fdir ethertype rule, total number of rules: %u", + fdir_info->num_entries); + } else { + ret = sssnic_ethdev_fdir_flow_rule_add(fdir_info, rule); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to add fdir flow rule"); + return ret; + } + PMD_DRV_LOG(DEBUG, + "Added fdir flow rule, total number of rules: %u", + fdir_info->num_entries); + } + + ret = sssnic_ethdev_tcam_enable(ethdev); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to enable TCAM"); + sssnic_ethdev_fdir_flow_rule_del(fdir_info, rule); + } + + return ret; +} + +int +sssnic_ethdev_fdir_rule_del(struct rte_eth_dev *ethdev, + struct sssnic_ethdev_fdir_rule *fdir_rule) +{ + struct 
sssnic_ethdev_fdir_info *fdir_info; + struct sssnic_ethdev_fdir_entry *entry; + struct sssnic_ethdev_fdir_rule *rule; + int ret; + + fdir_info = SSSNIC_ETHDEV_FDIR_INFO(ethdev); + + entry = sssnic_ethdev_fdir_entry_lookup(fdir_info, fdir_rule); + if (entry == NULL) + return 0; + + rule = entry->rule; + if (rule != fdir_rule) + return 0; + + if (rule->match.type == SSSNIC_ETHDEV_FDIR_MATCH_ETHERTYPE) { + ret = sssnic_ethdev_fdir_ethertype_rule_del(fdir_info, rule); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "Failed to delete fdir ethertype rule!"); + return ret; + } + PMD_DRV_LOG(DEBUG, + "Deleted fdir ethertype rule, total number of rules: %u", + fdir_info->num_entries); + } else { + ret = sssnic_ethdev_fdir_flow_rule_del(fdir_info, rule); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to delete fdir flow rule!"); + return ret; + } + PMD_DRV_LOG(DEBUG, + "Deleted fdir flow rule, total number of rules: %u", + fdir_info->num_entries); + } + + /* if there are no added rules, then disable TCAM */ + if (fdir_info->num_entries == 0) { + ret = sssnic_ethdev_tcam_disable(ethdev); + if (ret != 0) { + PMD_DRV_LOG(NOTICE, + "There are no added rules, but failed to disable TCAM"); + ret = 0; + } + } + + return ret; +} + +int +sssnic_ethdev_fdir_rules_disable_by_queue(struct rte_eth_dev *ethdev, + uint16_t qid) +{ + struct sssnic_ethdev_fdir_info *fdir_info; + struct sssnic_ethdev_fdir_entry *entry; + int ret; + + fdir_info = SSSNIC_ETHDEV_FDIR_INFO(ethdev); + + TAILQ_FOREACH(entry, &fdir_info->flow_entry_list, node) + { + if (entry->rule->action.qid == qid) { + ret = sssnic_ethdev_fdir_entry_disable(ethdev, entry); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "Failed to disable flow rule of queue:%u", + qid); + + return ret; + } + } + } + + return 0; +} + +int +sssnic_ethdev_fdir_rules_enable_by_queue(struct rte_eth_dev *ethdev, + uint16_t qid) +{ + struct sssnic_ethdev_fdir_info *fdir_info; + struct sssnic_ethdev_fdir_entry *entry; + int ret; + + fdir_info = SSSNIC_ETHDEV_FDIR_INFO(ethdev); + + TAILQ_FOREACH(entry, &fdir_info->flow_entry_list, node) + { + if (entry->rule->action.qid == qid) { + ret = sssnic_ethdev_fdir_entry_enable(ethdev, entry); + if (ret != 0) { + PMD_DRV_LOG(ERR, + "Failed to enable flow rule of queue:%u", + qid); + + return ret; + } + } + } + + return 0; +} + +int +sssnic_ethdev_fdir_rules_flush(struct rte_eth_dev *ethdev) +{ + struct sssnic_netdev *netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + struct sssnic_ethdev_fdir_entry *entry, *tmp; + struct sssnic_ethdev_fdir_rule *rule; + int ret; + + RTE_TAILQ_FOREACH_SAFE(entry, &netdev->fdir_info->flow_entry_list, node, + tmp) + { + rule = entry->rule; + ret = sssnic_ethdev_fdir_entry_disable(ethdev, entry); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to disable fdir flow entry"); + return ret; + } + TAILQ_REMOVE(&netdev->fdir_info->flow_entry_list, entry, node); + sssnic_ethdev_fdir_entry_free(entry); + sssnic_ethdev_fdir_rule_free(rule); + } + + RTE_TAILQ_FOREACH_SAFE(entry, &netdev->fdir_info->ethertype_entry_list, + node, tmp) + { + rule = entry->rule; + ret = sssnic_ethdev_fdir_entry_disable(ethdev, entry); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to disable ethertype(%u) entry", + rule->match.ethertype.key.ether_type); + return ret; + } + TAILQ_REMOVE(&netdev->fdir_info->ethertype_entry_list, entry, + node); + sssnic_ethdev_fdir_entry_free(entry); + sssnic_ethdev_fdir_rule_free(rule); + } + + return 0; +} + +int +sssnic_ethdev_fdir_init(struct rte_eth_dev *ethdev) +{ + struct sssnic_netdev *netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + + 
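/* fdir_info, allocated below, holds the per-port TCAM state and the two entry lists (flow and ethertype) that back the rte_flow ops. */ + 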
PMD_INIT_FUNC_TRACE(); + + netdev->fdir_info = rte_zmalloc("sssnic_fdir_info", + sizeof(struct sssnic_ethdev_fdir_info), 0); + + if (netdev->fdir_info == NULL) { + PMD_DRV_LOG(ERR, "Failed to alloc fdir info memory for port %u", + ethdev->data->port_id); + return -ENOMEM; + } + + netdev->fdir_info->ethdev = ethdev; + + TAILQ_INIT(&netdev->fdir_info->ethertype_entry_list); + TAILQ_INIT(&netdev->fdir_info->flow_entry_list); + + sssnic_ethdev_tcam_init(ethdev); + + return 0; +} + +void +sssnic_ethdev_fdir_shutdown(struct rte_eth_dev *ethdev) +{ + struct sssnic_netdev *netdev = SSSNIC_ETHDEV_PRIVATE(ethdev); + struct sssnic_ethdev_fdir_entry *entry, *tmp; + + PMD_INIT_FUNC_TRACE(); + + if (netdev->fdir_info == NULL) + return; + + RTE_TAILQ_FOREACH_SAFE(entry, &netdev->fdir_info->flow_entry_list, node, + tmp) + { + TAILQ_REMOVE(&netdev->fdir_info->flow_entry_list, entry, node); + sssnic_ethdev_fdir_entry_free(entry); + } + + RTE_TAILQ_FOREACH_SAFE(entry, &netdev->fdir_info->ethertype_entry_list, + node, tmp) + { + TAILQ_REMOVE(&netdev->fdir_info->ethertype_entry_list, entry, + node); + sssnic_ethdev_fdir_entry_free(entry); + } + + sssnic_ethdev_tcam_shutdown(ethdev); + + rte_free(netdev->fdir_info); +} diff --git a/drivers/net/sssnic/sssnic_ethdev_fdir.h b/drivers/net/sssnic/sssnic_ethdev_fdir.h new file mode 100644 index 0000000000..aaf426b8f2 --- /dev/null +++ b/drivers/net/sssnic/sssnic_ethdev_fdir.h @@ -0,0 +1,332 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. + */ + +#ifndef _SSSNIC_ETHDEV_FDIR_H_ +#define _SSSNIC_ETHDEV_FDIR_H_ + +#define SSSINC_ETHDEV_FDIR_FLOW_KEY_SIZE 44 +#define SSSNIC_ETHDEV_FDIR_FLOW_KEY_NUM_DW \ + (SSSINC_ETHDEV_FDIR_FLOW_KEY_SIZE / sizeof(uint32_t)) + +enum sssnic_ethdev_fdir_match_type { + SSSNIC_ETHDEV_FDIR_MATCH_ETHERTYPE = RTE_ETH_FILTER_ETHERTYPE, + SSSNIC_ETHDEV_FDIR_MATCH_FLOW = RTE_ETH_FILTER_FDIR, +}; + +enum sssnic_ethdev_fdir_flow_ip_type { + SSSNIC_ETHDEV_FDIR_FLOW_IPV4 = 0, + SSSNIC_ETHDEV_FDIR_FLOW_IPV6 = 1, +}; + +enum sssnic_ethdev_fdir_flow_tunnel_type { + SSSNIC_ETHDEV_FDIR_FLOW_TUNNEL_NONE = 0, + SSSNIC_ETHDEV_FDIR_FLOW_TUNNEL_VXLAN = 1, +}; + +#define SSSNIC_ETHDEV_FDIR_FLOW_FUNC_ID_MASK 0x7fff +#define SSSNIC_ETHDEV_FDIR_FLOW_IP_TYPE_MASK 0x1 +#define SSSNIC_ETHDEV_FDIR_FLOW_TUNNEL_TYPE_MASK 0xf + +struct sssnic_ethdev_fdir_ethertype_key { + uint16_t ether_type; +}; + +struct sssnic_ethdev_fdir_ipv4_flow_key { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t resvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 4; + uint32_t resvd1 : 4; + + uint32_t func_id : 15; + uint32_t ip_type : 1; + uint32_t sip_w1 : 16; + + uint32_t sip_w0 : 16; + uint32_t dip_w1 : 16; + + uint32_t dip_w0 : 16; + uint32_t resvd2 : 16; + + uint32_t resvd3; + + uint32_t resvd4 : 16; + uint32_t dport : 16; + + uint32_t sport : 16; + uint32_t resvd5 : 16; + + uint32_t resvd6 : 16; + uint32_t outer_sip_w1 : 16; + + uint32_t outer_sip_w0 : 16; + uint32_t outer_dip_w1 : 16; + + uint32_t outer_dip_w0 : 16; + uint32_t vni_w1 : 16; + + uint32_t vni_w0 : 16; + uint32_t resvd7 : 16; +#else + uint32_t resvd1 : 4; + uint32_t tunnel_type : 4; + uint32_t ip_proto : 8; + uint32_t resvd0 : 16; + + uint32_t sip_w1 : 16; + uint32_t ip_type : 1; + uint32_t func_id : 15; + + uint32_t dip_w1 : 16; + uint32_t sip_w0 : 16; + + uint32_t resvd2 : 16; + uint32_t dip_w0 : 16; + + uint32_t rsvd3; + + uint32_t dport : 16; + uint32_t resvd4 : 16; + + uint32_t resvd5 : 16; + uint32_t sport : 16; + + uint32_t 
outer_sip_w1 : 16; + uint32_t resvd6 : 16; + + uint32_t outer_dip_w1 : 16; + uint32_t outer_sip_w0 : 16; + + uint32_t vni_w1 : 16; + uint32_t outer_dip_w0 : 16; + + uint32_t resvd7 : 16; + uint32_t vni_w0 : 16; +#endif +}; + +struct sssnic_ethdev_fdir_ipv6_flow_key { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t resvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 4; + uint32_t resvd1 : 4; + + uint32_t func_id : 15; + uint32_t ip_type : 1; + uint32_t sip6_w0 : 16; + + uint32_t sip6_w1 : 16; + uint32_t sip6_w2 : 16; + + uint32_t sip6_w3 : 16; + uint32_t sip6_w4 : 16; + + uint32_t sip6_w5 : 16; + uint32_t sip6_w6 : 16; + + uint32_t sip6_w7 : 16; + uint32_t dport : 16; + + uint32_t sport : 16; + uint32_t dip6_w0 : 16; + + uint32_t dip6_w1 : 16; + uint32_t dip6_w2 : 16; + + uint32_t dip6_w3 : 16; + uint32_t dip6_w4 : 16; + + uint32_t dip6_w5 : 16; + uint32_t dip6_w6 : 16; + + uint32_t dip6_w7 : 16; + uint32_t resvd2 : 16; +#else + uint32_t resvd1 : 4; + uint32_t tunnel_type : 4; + uint32_t ip_proto : 8; + uint32_t resvd0 : 16; + + uint32_t sip6_w0 : 16; + uint32_t ip_type : 1; + uint32_t func_id : 15; + + uint32_t sip6_w2 : 16; + uint32_t sip6_w1 : 16; + + uint32_t sip6_w4 : 16; + uint32_t sip6_w3 : 16; + + uint32_t sip6_w6 : 16; + uint32_t sip6_w5 : 16; + + uint32_t dport : 16; + uint32_t sip6_w7 : 16; + + uint32_t dip6_w0 : 16; + uint32_t sport : 16; + + uint32_t dip6_w2 : 16; + uint32_t dip6_w1 : 16; + + uint32_t dip6_w4 : 16; + uint32_t dip6_w3 : 16; + + uint32_t dip6_w6 : 16; + uint32_t dip6_w5 : 16; + + uint32_t resvd2 : 16; + uint32_t dip6_w7 : 16; +#endif +}; + +struct sssnic_ethdev_fdir_vxlan_ipv6_flow_key { +#if (RTE_BYTE_ORDER == RTE_BIG_ENDIAN) + uint32_t resvd0 : 16; + uint32_t ip_proto : 8; + uint32_t tunnel_type : 4; + uint32_t resvd1 : 4; + + uint32_t func_id : 15; + uint32_t ip_type : 1; + uint32_t dip6_w0 : 16; + + uint32_t dip6_w1 : 16; + uint32_t dip6_w2 : 16; + + uint32_t dip6_w3 : 16; + uint32_t dip6_w4 : 16; + + uint32_t dip6_w5 : 16; + uint32_t dip6_w6 : 16; + + uint32_t dip6_w7 : 16; + uint32_t dport : 16; + + uint32_t sport : 16; + uint32_t resvd2 : 16; + + uint32_t resvd3 : 16; + uint32_t outer_sip_w1 : 16; + + uint32_t outer_sip_w0 : 16; + uint32_t outer_dip_w1 : 16; + + uint32_t outer_dip_w0 : 16; + uint32_t vni_w1 : 16; + + uint32_t vni_w0 : 16; + uint32_t resvd4 : 16; +#else + uint32_t rsvd1 : 4; + uint32_t tunnel_type : 4; + uint32_t ip_proto : 8; + uint32_t resvd0 : 16; + + uint32_t dip6_w0 : 16; + uint32_t ip_type : 1; + uint32_t function_id : 15; + + uint32_t dip6_w2 : 16; + uint32_t dip6_w1 : 16; + + uint32_t dip6_w4 : 16; + uint32_t dip6_w3 : 16; + + uint32_t dip6_w6 : 16; + uint32_t dip6_w5 : 16; + + uint32_t dport : 16; + uint32_t dip6_w7 : 16; + + uint32_t resvd2 : 16; + uint32_t sport : 16; + + uint32_t outer_sip_w1 : 16; + uint32_t resvd3 : 16; + + uint32_t outer_dip_w1 : 16; + uint32_t outer_sip_w0 : 16; + + uint32_t vni_w1 : 16; + uint32_t outer_dip_w0 : 16; + + uint32_t resvd4 : 16; + uint32_t vni_w0 : 16; +#endif +}; + +struct sssnic_ethdev_fdir_flow_key { + union { + uint32_t dword[SSSNIC_ETHDEV_FDIR_FLOW_KEY_NUM_DW]; + struct { + struct sssnic_ethdev_fdir_ipv4_flow_key ipv4; + struct sssnic_ethdev_fdir_ipv6_flow_key ipv6; + struct sssnic_ethdev_fdir_vxlan_ipv6_flow_key vxlan_ipv6; + }; + }; +}; + +struct sssnic_ethdev_fdir_flow_match { + struct sssnic_ethdev_fdir_flow_key key; + struct sssnic_ethdev_fdir_flow_key mask; +}; + +struct sssnic_ethdev_fdir_ethertype_match { + struct sssnic_ethdev_fdir_ethertype_key key; +}; + +struct 
sssnic_ethdev_fdir_match { + enum sssnic_ethdev_fdir_match_type type; + union { + struct sssnic_ethdev_fdir_flow_match flow; + struct sssnic_ethdev_fdir_ethertype_match ethertype; + }; +}; + +struct sssnic_ethdev_fdir_action { + uint16_t qid; +}; + +/* struct sssnic_ethdev_fdir_rule must be dynamically allocated in the heap */ +struct sssnic_ethdev_fdir_rule { + struct sssnic_ethdev_fdir_match match; + struct sssnic_ethdev_fdir_action action; + void *cookie; /* low level data, initial value must be set to NULL*/ +}; + +struct sssnic_ethdev_fdir_info; + +static inline struct sssnic_ethdev_fdir_rule * +sssnic_ethdev_fdir_rule_alloc(void) +{ + struct sssnic_ethdev_fdir_rule *rule; + + rule = rte_zmalloc("sssnic_fdir_rule", + sizeof(struct sssnic_ethdev_fdir_rule), 0); + + return rule; +} + +static inline void +sssnic_ethdev_fdir_rule_free(struct sssnic_ethdev_fdir_rule *rule) +{ + if (rule != NULL) + rte_free(rule); +} + +int sssnic_ethdev_fdir_rules_disable_by_queue(struct rte_eth_dev *ethdev, + uint16_t qid); +int sssnic_ethdev_fdir_rules_enable_by_queue(struct rte_eth_dev *ethdev, + uint16_t qid); +int sssnic_ethdev_fdir_rule_add(struct rte_eth_dev *ethdev, + struct sssnic_ethdev_fdir_rule *rule); +int sssnic_ethdev_fdir_rule_del(struct rte_eth_dev *ethdev, + struct sssnic_ethdev_fdir_rule *fdir_rule); +int sssnic_ethdev_fdir_rules_flush(struct rte_eth_dev *ethdev); +int sssnic_ethdev_fdir_init(struct rte_eth_dev *ethdev); +void sssnic_ethdev_fdir_shutdown(struct rte_eth_dev *ethdev); + +#endif /* _SSSNIC_ETHDEV_FDIR_H_ */ diff --git a/drivers/net/sssnic/sssnic_ethdev_flow.c b/drivers/net/sssnic/sssnic_ethdev_flow.c new file mode 100644 index 0000000000..372a5bed6b --- /dev/null +++ b/drivers/net/sssnic/sssnic_ethdev_flow.c @@ -0,0 +1,981 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. 
+ */ + +#include +#include +#include + +#include "sssnic_log.h" +#include "sssnic_ethdev.h" +#include "sssnic_ethdev_fdir.h" +#include "sssnic_ethdev_flow.h" +#include "base/sssnic_hw.h" +#include "base/sssnic_api.h" +#include "base/sssnic_misc.h" + +struct rte_flow { + struct sssnic_ethdev_fdir_rule rule; +}; + +static enum rte_flow_item_type pattern_ethertype[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_END, +}; + +static enum rte_flow_item_type pattern_eth_ipv4[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_END, +}; + +static enum rte_flow_item_type pattern_eth_ipv4_udp[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END, +}; + +static enum rte_flow_item_type pattern_eth_ipv4_tcp[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_TCP, + RTE_FLOW_ITEM_TYPE_END, +}; + +static enum rte_flow_item_type pattern_eth_ipv4_any[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_ANY, + RTE_FLOW_ITEM_TYPE_END, +}; + +static enum rte_flow_item_type pattern_eth_ipv4_udp_vxlan[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_VXLAN, + RTE_FLOW_ITEM_TYPE_END, +}; + +static enum rte_flow_item_type pattern_eth_ipv4_udp_vxlan_eth_ipv4[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_VXLAN, + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_END, +}; + +static enum rte_flow_item_type pattern_eth_ipv4_udp_vxlan_udp[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_VXLAN, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END, +}; + +static enum rte_flow_item_type pattern_eth_ipv4_udp_vxlan_tcp[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_VXLAN, + RTE_FLOW_ITEM_TYPE_TCP, + RTE_FLOW_ITEM_TYPE_END, +}; + +static enum rte_flow_item_type pattern_eth_ipv4_udp_vxlan_any[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_VXLAN, + RTE_FLOW_ITEM_TYPE_ANY, + RTE_FLOW_ITEM_TYPE_END, +}; + +static enum rte_flow_item_type pattern_eth_ipv4_udp_vxlan_eth_ipv4_tcp[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_VXLAN, + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_TCP, + RTE_FLOW_ITEM_TYPE_END, +}; + +static enum rte_flow_item_type pattern_eth_ipv4_udp_vxlan_eth_ipv4_udp[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_VXLAN, + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END, +}; + +static enum rte_flow_item_type pattern_eth_ipv4_udp_vxlan_eth_ipv6[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_VXLAN, + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV6, + RTE_FLOW_ITEM_TYPE_END, +}; + +static enum rte_flow_item_type pattern_eth_ipv4_udp_vxlan_eth_ipv6_tcp[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_VXLAN, + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV6, + RTE_FLOW_ITEM_TYPE_TCP, + RTE_FLOW_ITEM_TYPE_END, +}; + +static enum rte_flow_item_type pattern_eth_ipv4_udp_vxlan_eth_ipv6_udp[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + 
RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_VXLAN, + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV6, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END, +}; + +static enum rte_flow_item_type pattern_eth_ipv6[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV6, + RTE_FLOW_ITEM_TYPE_END, +}; + +static enum rte_flow_item_type pattern_eth_ipv6_udp[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV6, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END, +}; + +static enum rte_flow_item_type pattern_eth_ipv6_tcp[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV6, + RTE_FLOW_ITEM_TYPE_TCP, + RTE_FLOW_ITEM_TYPE_END, +}; + +enum sssnic_ethdev_flow_type { + SSSNIC_ETHDEV_FLOW_TYPE_UNKNOWN = -1, + SSSNIC_ETHDEV_FLOW_TYPE_ETHERTYPE, + SSSNIC_ETHDEV_FLOW_TYPE_FDIR, + SSSNIC_ETHDEV_FLOW_TYPE_COUNT, +}; + +struct sssnic_ethdev_flow_pattern { + enum rte_flow_item_type *flow_items; + enum sssnic_ethdev_flow_type type; + bool is_tunnel; +}; + +static struct sssnic_ethdev_flow_pattern supported_flow_patterns[] = { + { pattern_ethertype, SSSNIC_ETHDEV_FLOW_TYPE_ETHERTYPE, false }, + { pattern_eth_ipv4, SSSNIC_ETHDEV_FLOW_TYPE_FDIR, false }, + { pattern_eth_ipv4_udp, SSSNIC_ETHDEV_FLOW_TYPE_FDIR, false }, + { pattern_eth_ipv4_tcp, SSSNIC_ETHDEV_FLOW_TYPE_FDIR, false }, + { pattern_eth_ipv4_any, SSSNIC_ETHDEV_FLOW_TYPE_FDIR, false }, + { pattern_eth_ipv4_udp_vxlan, SSSNIC_ETHDEV_FLOW_TYPE_FDIR, true }, + { pattern_eth_ipv4_udp_vxlan_udp, SSSNIC_ETHDEV_FLOW_TYPE_FDIR, true }, + { pattern_eth_ipv4_udp_vxlan_tcp, SSSNIC_ETHDEV_FLOW_TYPE_FDIR, true }, + { pattern_eth_ipv4_udp_vxlan_any, SSSNIC_ETHDEV_FLOW_TYPE_FDIR, true }, + { pattern_eth_ipv4_udp_vxlan_eth_ipv4, SSSNIC_ETHDEV_FLOW_TYPE_FDIR, + true }, + { pattern_eth_ipv4_udp_vxlan_eth_ipv4_tcp, SSSNIC_ETHDEV_FLOW_TYPE_FDIR, + true }, + { pattern_eth_ipv4_udp_vxlan_eth_ipv4_udp, SSSNIC_ETHDEV_FLOW_TYPE_FDIR, + true }, + { pattern_eth_ipv4_udp_vxlan_eth_ipv6, SSSNIC_ETHDEV_FLOW_TYPE_FDIR, + true }, + { pattern_eth_ipv4_udp_vxlan_eth_ipv6_tcp, SSSNIC_ETHDEV_FLOW_TYPE_FDIR, + true }, + { pattern_eth_ipv4_udp_vxlan_eth_ipv6_udp, SSSNIC_ETHDEV_FLOW_TYPE_FDIR, + true }, + { pattern_eth_ipv6, SSSNIC_ETHDEV_FLOW_TYPE_FDIR, false }, + { pattern_eth_ipv6_udp, SSSNIC_ETHDEV_FLOW_TYPE_FDIR, false }, + { pattern_eth_ipv6_tcp, SSSNIC_ETHDEV_FLOW_TYPE_FDIR, false }, +}; + +static bool +sssnic_ethdev_flow_pattern_match(enum rte_flow_item_type *item_array, + const struct rte_flow_item *pattern) +{ + const struct rte_flow_item *item = pattern; + + /* skip void items in the head of pattern */ + while (item->type == RTE_FLOW_ITEM_TYPE_VOID) + item++; + + while ((*item_array == item->type) && + (*item_array != RTE_FLOW_ITEM_TYPE_END)) { + item_array++; + item++; + } + + return (*item_array == RTE_FLOW_ITEM_TYPE_END && + item->type == RTE_FLOW_ITEM_TYPE_END); +} + +static struct sssnic_ethdev_flow_pattern * +sssnic_ethdev_flow_pattern_lookup(const struct rte_flow_item *pattern) +{ + struct sssnic_ethdev_flow_pattern *flow_pattern; + enum rte_flow_item_type *flow_items; + size_t i; + + for (i = 0; i < RTE_DIM(supported_flow_patterns); i++) { + flow_pattern = &supported_flow_patterns[i]; + flow_items = flow_pattern->flow_items; + if (sssnic_ethdev_flow_pattern_match(flow_items, pattern)) + return flow_pattern; + } + + return NULL; +} + +static int +sssnic_ethdev_flow_action_parse(struct rte_eth_dev *ethdev, + const struct rte_flow_action *actions, struct rte_flow_error *error, + struct sssnic_ethdev_fdir_rule *fdir_rule) +{ + const struct rte_flow_action_queue 
*action_queue; + const struct rte_flow_action *action = actions; + + if (action->type != RTE_FLOW_ACTION_TYPE_QUEUE) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + NULL, + "Unsupported action type, only the queue action is supported"); + return -EINVAL; + } + + action_queue = (const struct rte_flow_action_queue *)action->conf; + if (action_queue->index >= ethdev->data->nb_rx_queues) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION, + NULL, "Invalid queue index"); + return -EINVAL; + } + + if (fdir_rule != NULL) + fdir_rule->action.qid = action_queue->index; + + return 0; +} + +static int +sssnic_ethdev_flow_ethertype_pattern_parse(const struct rte_flow_item *pattern, + struct rte_flow_error *error, struct sssnic_ethdev_fdir_rule *fdir_rule) +{ + const struct rte_flow_item *item = pattern; + const struct rte_flow_item_eth *spec, *mask; + struct sssnic_ethdev_fdir_ethertype_match *fdir_match; + + while (item->type != RTE_FLOW_ITEM_TYPE_ETH) + item++; + + spec = (const struct rte_flow_item_eth *)item->spec; + mask = (const struct rte_flow_item_eth *)item->mask; + + if (item->last != NULL) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_LAST, + item, "Range matching is not supported"); + return -rte_errno; + } + + if (spec == NULL || mask == NULL) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, + item, "Ether mask or spec is NULL"); + return -rte_errno; + } + + if (!rte_is_zero_ether_addr(&mask->src) || + !rte_is_zero_ether_addr(&mask->dst)) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid ether address mask"); + return -rte_errno; + } + + if (mask->type != 0xffff) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM_MASK, + item, "Invalid ether type mask"); + return -rte_errno; + } + + if (fdir_rule != NULL) { + fdir_rule->match.type = SSSNIC_ETHDEV_FDIR_MATCH_ETHERTYPE; + fdir_match = &fdir_rule->match.ethertype; + fdir_match->key.ether_type = rte_be_to_cpu_16(spec->type); + } + + return 0; +} + +static int +sssnic_ethdev_flow_eth_parse(const struct rte_flow_item *item, + struct rte_flow_error *error) +{ + if (item->spec != NULL || item->mask != NULL) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, + item, "Eth match is not supported in fdir flow"); + return -rte_errno; + } + + return 0; +} + +static int +sssnic_ethdev_flow_ipv4_parse(const struct rte_flow_item *item, + struct rte_flow_error *error, bool outer, + struct sssnic_ethdev_fdir_flow_match *fdir_match) +{ + const struct rte_flow_item_ipv4 *spec, *mask; + uint32_t ip_addr; + + spec = (const struct rte_flow_item_ipv4 *)item->spec; + mask = (const struct rte_flow_item_ipv4 *)item->mask; + + if (outer) { + /* only tunnel flow has outer ipv4 */ + if (spec == NULL && mask == NULL) + return 0; + + if (spec == NULL || mask == NULL) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "Invalid IPV4 spec or mask"); + return -rte_errno; + } + + if (mask->hdr.version_ihl || mask->hdr.type_of_service || + mask->hdr.total_length || mask->hdr.packet_id || + mask->hdr.fragment_offset || mask->hdr.time_to_live || + mask->hdr.next_proto_id || mask->hdr.hdr_checksum) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "Only outer IPv4 src and dest address are supported for tunnel flow"); + return -rte_errno; + } + + if (fdir_match != NULL) { + ip_addr = rte_be_to_cpu_32(spec->hdr.src_addr); + fdir_match->key.ipv4.outer_sip_w0 = (uint16_t)ip_addr; + fdir_match->key.ipv4.outer_sip_w1 = + (uint16_t)(ip_addr >> 16); 
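+ /* The TCAM flow key carries each 32-bit IPv4 address as two 16-bit halves: *_w0 holds the low 16 bits and *_w1 the high 16 bits, for both key and mask. */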
+ + ip_addr = rte_be_to_cpu_32(mask->hdr.src_addr); + fdir_match->mask.ipv4.outer_sip_w0 = (uint16_t)ip_addr; + fdir_match->mask.ipv4.outer_sip_w1 = + (uint16_t)(ip_addr >> 16); + } + } else { + /* inner ip of tunnel flow or ip of non tunnel flow */ + if (spec == NULL && mask == NULL) + return 0; + + if (spec == NULL || mask == NULL) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "Invalid IPV4 spec or mask"); + return -rte_errno; + } + + if (mask->hdr.version_ihl || mask->hdr.type_of_service || + mask->hdr.total_length || mask->hdr.packet_id || + mask->hdr.fragment_offset || mask->hdr.time_to_live || + mask->hdr.hdr_checksum) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "Only support IPv4 address and ipproto"); + return -rte_errno; + } + + if (fdir_match != NULL) { + ip_addr = rte_be_to_cpu_32(spec->hdr.src_addr); + fdir_match->key.ipv4.sip_w0 = (uint16_t)ip_addr; + fdir_match->key.ipv4.sip_w1 = (uint16_t)(ip_addr >> 16); + + ip_addr = rte_be_to_cpu_32(mask->hdr.src_addr); + fdir_match->mask.ipv4.sip_w0 = (uint16_t)ip_addr; + fdir_match->mask.ipv4.sip_w1 = + (uint16_t)(ip_addr >> 16); + + fdir_match->key.ipv4.ip_proto = spec->hdr.next_proto_id; + fdir_match->mask.ipv4.ip_proto = + mask->hdr.next_proto_id; + + fdir_match->key.ipv4.ip_type = + SSSNIC_ETHDEV_FDIR_FLOW_IPV4; + fdir_match->mask.ipv4.ip_type = 0x1; + } + } + + return 0; +} + +static int +sssnic_ethdev_flow_ipv6_parse(const struct rte_flow_item *item, + struct rte_flow_error *error, bool is_tunnel, + struct sssnic_ethdev_fdir_flow_match *fdir_match) +{ + const struct rte_flow_item_ipv6 *spec, *mask; + uint32_t ipv6_addr[4]; + int i; + + mask = (const struct rte_flow_item_ipv6 *)item->mask; + spec = (const struct rte_flow_item_ipv6 *)item->spec; + + if (fdir_match != NULL) { + /* ip_type of ipv6 flow_match can share with other flow_matches */ + fdir_match->key.ipv6.ip_type = SSSNIC_ETHDEV_FDIR_FLOW_IPV6; + fdir_match->mask.ipv6.ip_type = 0x1; + } + + if (is_tunnel) { + if (mask == NULL && spec == NULL) + return 0; + + if (spec == NULL || mask == NULL) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "Invalid IPV6 spec or mask"); + return -rte_errno; + } + + if (mask->hdr.vtc_flow || mask->hdr.payload_len || + mask->hdr.hop_limits || + !sssnic_is_zero_ipv6_addr(mask->hdr.src_addr)) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "Only support IPv6 dest_addr and ipproto in tunnel flow"); + return -rte_errno; + } + + if (fdir_match != NULL) { + rte_memcpy(ipv6_addr, spec->hdr.dst_addr, + sizeof(ipv6_addr)); + for (i = 0; i < 4; i++) + ipv6_addr[i] = rte_be_to_cpu_32(ipv6_addr[i]); + + fdir_match->key.vxlan_ipv6.dip6_w0 = + (uint16_t)ipv6_addr[0]; + fdir_match->key.vxlan_ipv6.dip6_w1 = + (uint16_t)(ipv6_addr[0] >> 16); + fdir_match->key.vxlan_ipv6.dip6_w2 = + (uint16_t)ipv6_addr[1]; + fdir_match->key.vxlan_ipv6.dip6_w3 = + (uint16_t)(ipv6_addr[1] >> 16); + fdir_match->key.vxlan_ipv6.dip6_w4 = + (uint16_t)ipv6_addr[2]; + fdir_match->key.vxlan_ipv6.dip6_w5 = + (uint16_t)(ipv6_addr[2] >> 16); + fdir_match->key.vxlan_ipv6.dip6_w6 = + (uint16_t)ipv6_addr[3]; + fdir_match->key.vxlan_ipv6.dip6_w7 = + (uint16_t)(ipv6_addr[3] >> 16); + + rte_memcpy(ipv6_addr, mask->hdr.dst_addr, + sizeof(ipv6_addr)); + for (i = 0; i < 4; i++) + ipv6_addr[i] = rte_be_to_cpu_32(ipv6_addr[i]); + + fdir_match->mask.vxlan_ipv6.dip6_w0 = + (uint16_t)ipv6_addr[0]; + fdir_match->mask.vxlan_ipv6.dip6_w1 = + (uint16_t)(ipv6_addr[0] >> 16); + 
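/* dip6_w0..dip6_w7 carry the 128-bit IPv6 address as eight 16-bit words, the low half of each CPU-order 32-bit word first, for both key and mask. */ + 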
fdir_match->mask.vxlan_ipv6.dip6_w2 = + (uint16_t)ipv6_addr[1]; + fdir_match->mask.vxlan_ipv6.dip6_w3 = + (uint16_t)(ipv6_addr[1] >> 16); + fdir_match->mask.vxlan_ipv6.dip6_w4 = + (uint16_t)ipv6_addr[2]; + fdir_match->mask.vxlan_ipv6.dip6_w5 = + (uint16_t)(ipv6_addr[2] >> 16); + fdir_match->mask.vxlan_ipv6.dip6_w6 = + (uint16_t)ipv6_addr[3]; + fdir_match->mask.vxlan_ipv6.dip6_w7 = + (uint16_t)(ipv6_addr[3] >> 16); + + fdir_match->key.vxlan_ipv6.ip_proto = spec->hdr.proto; + fdir_match->mask.vxlan_ipv6.ip_proto = mask->hdr.proto; + } + } else { /* non tunnel */ + if (spec == NULL || mask == NULL) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "Invalid IPV6 spec or mask"); + return -rte_errno; + } + + if (mask->hdr.vtc_flow || mask->hdr.payload_len || + mask->hdr.hop_limits) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "Only support IPv6 addr and ipproto"); + return -rte_errno; + } + + if (fdir_match != NULL) { + rte_memcpy(ipv6_addr, spec->hdr.dst_addr, + sizeof(ipv6_addr)); + for (i = 0; i < 4; i++) + ipv6_addr[i] = rte_be_to_cpu_32(ipv6_addr[i]); + + fdir_match->key.ipv6.dip6_w0 = (uint16_t)ipv6_addr[0]; + fdir_match->key.ipv6.dip6_w1 = + (uint16_t)(ipv6_addr[0] >> 16); + fdir_match->key.ipv6.dip6_w2 = (uint16_t)ipv6_addr[1]; + fdir_match->key.ipv6.dip6_w3 = + (uint16_t)(ipv6_addr[1] >> 16); + fdir_match->key.ipv6.dip6_w4 = (uint16_t)ipv6_addr[2]; + fdir_match->key.ipv6.dip6_w5 = + (uint16_t)(ipv6_addr[2] >> 16); + fdir_match->key.ipv6.dip6_w6 = (uint16_t)ipv6_addr[3]; + fdir_match->key.ipv6.dip6_w7 = + (uint16_t)(ipv6_addr[3] >> 16); + + rte_memcpy(ipv6_addr, spec->hdr.src_addr, + sizeof(ipv6_addr)); + for (i = 0; i < 4; i++) + ipv6_addr[i] = rte_be_to_cpu_32(ipv6_addr[i]); + + fdir_match->key.ipv6.sip6_w0 = (uint16_t)ipv6_addr[0]; + fdir_match->key.ipv6.sip6_w1 = + (uint16_t)(ipv6_addr[0] >> 16); + fdir_match->key.ipv6.sip6_w2 = (uint16_t)ipv6_addr[1]; + fdir_match->key.ipv6.sip6_w3 = + (uint16_t)(ipv6_addr[1] >> 16); + fdir_match->key.ipv6.sip6_w4 = (uint16_t)ipv6_addr[2]; + fdir_match->key.ipv6.sip6_w5 = + (uint16_t)(ipv6_addr[2] >> 16); + fdir_match->key.ipv6.sip6_w6 = (uint16_t)ipv6_addr[3]; + fdir_match->key.ipv6.sip6_w7 = + (uint16_t)(ipv6_addr[3] >> 16); + + rte_memcpy(ipv6_addr, mask->hdr.dst_addr, + sizeof(ipv6_addr)); + for (i = 0; i < 4; i++) + ipv6_addr[i] = rte_be_to_cpu_32(ipv6_addr[i]); + + fdir_match->mask.ipv6.dip6_w0 = (uint16_t)ipv6_addr[0]; + fdir_match->mask.ipv6.dip6_w1 = + (uint16_t)(ipv6_addr[0] >> 16); + fdir_match->mask.ipv6.dip6_w2 = (uint16_t)ipv6_addr[1]; + fdir_match->mask.ipv6.dip6_w3 = + (uint16_t)(ipv6_addr[1] >> 16); + fdir_match->mask.ipv6.dip6_w4 = (uint16_t)ipv6_addr[2]; + fdir_match->mask.ipv6.dip6_w5 = + (uint16_t)(ipv6_addr[2] >> 16); + fdir_match->mask.ipv6.dip6_w6 = (uint16_t)ipv6_addr[3]; + fdir_match->mask.ipv6.dip6_w7 = + (uint16_t)(ipv6_addr[3] >> 16); + + rte_memcpy(ipv6_addr, mask->hdr.src_addr, + sizeof(ipv6_addr)); + for (i = 0; i < 4; i++) + ipv6_addr[i] = rte_be_to_cpu_32(ipv6_addr[i]); + + fdir_match->mask.ipv6.sip6_w0 = (uint16_t)ipv6_addr[0]; + fdir_match->mask.ipv6.sip6_w1 = + (uint16_t)(ipv6_addr[0] >> 16); + fdir_match->mask.ipv6.sip6_w2 = (uint16_t)ipv6_addr[1]; + fdir_match->mask.ipv6.sip6_w3 = + (uint16_t)(ipv6_addr[1] >> 16); + fdir_match->mask.ipv6.sip6_w4 = (uint16_t)ipv6_addr[2]; + fdir_match->mask.ipv6.sip6_w5 = + (uint16_t)(ipv6_addr[2] >> 16); + fdir_match->mask.ipv6.sip6_w6 = (uint16_t)ipv6_addr[3]; + fdir_match->mask.ipv6.sip6_w7 = + (uint16_t)(ipv6_addr[3] >> 
16); + + fdir_match->key.ipv6.ip_proto = spec->hdr.proto; + fdir_match->mask.ipv6.ip_proto = mask->hdr.proto; + } + } + + return 0; +} + +static int +sssnic_ethdev_flow_udp_parse(const struct rte_flow_item *item, + struct rte_flow_error *error, bool outer, + struct sssnic_ethdev_fdir_flow_match *fdir_match) +{ + const struct rte_flow_item_udp *spec, *mask; + + spec = (const struct rte_flow_item_udp *)item->spec; + mask = (const struct rte_flow_item_udp *)item->mask; + + if (outer) { + if (spec != NULL || mask != NULL) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "Both of outer UDP spec and mask must be NULL in tunnel flow"); + return -rte_errno; + } + + return 0; + } + + if (fdir_match != NULL) { + /* ipv6 match can share ip_proto with ipv4 match */ + fdir_match->key.ipv4.ip_proto = IPPROTO_UDP; + fdir_match->mask.ipv4.ip_proto = 0xff; + } + + if (spec == NULL && mask == NULL) + return 0; + + if (spec == NULL || mask == NULL) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid UDP spec or mask"); + return -rte_errno; + } + + if (fdir_match != NULL) { + /* Other types of fdir match can share sport and dport with ipv4 match */ + fdir_match->key.ipv4.sport = + rte_be_to_cpu_16(spec->hdr.src_port); + fdir_match->mask.ipv4.sport = + rte_be_to_cpu_16(mask->hdr.src_port); + fdir_match->key.ipv4.dport = + rte_be_to_cpu_16(spec->hdr.dst_port); + fdir_match->mask.ipv4.dport = + rte_be_to_cpu_16(mask->hdr.dst_port); + } + + return 0; +} + +static int +sssnic_ethdev_flow_tcp_parse(const struct rte_flow_item *item, + struct rte_flow_error *error, bool outer, + struct sssnic_ethdev_fdir_flow_match *fdir_match) +{ + const struct rte_flow_item_tcp *spec, *mask; + + spec = (const struct rte_flow_item_tcp *)item->spec; + mask = (const struct rte_flow_item_tcp *)item->mask; + + if (outer) { + if (spec != NULL || mask != NULL) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "Both of outer TCP spec and mask must be NULL in tunnel flow"); + return -rte_errno; + } + + return 0; + } + + if (fdir_match != NULL) { + /* ipv6 match can share ip_proto with ipv4 match */ + fdir_match->key.ipv4.ip_proto = IPPROTO_TCP; + fdir_match->mask.ipv6.ip_proto = 0xff; + } + + if (spec == NULL && mask == NULL) + return 0; + + if (spec == NULL || mask == NULL) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid TCP spec or mask."); + return -rte_errno; + } + + if (mask->hdr.sent_seq || mask->hdr.recv_ack || mask->hdr.data_off || + mask->hdr.rx_win || mask->hdr.tcp_flags || mask->hdr.cksum || + mask->hdr.tcp_urp) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, + item, + "Invalid TCP item, support src_port and dst_port only"); + return -rte_errno; + } + + if (fdir_match != NULL) { + /* Other types of fdir match can share sport and dport with ipv4 match */ + fdir_match->key.ipv4.sport = + rte_be_to_cpu_16(spec->hdr.src_port); + fdir_match->mask.ipv4.sport = + rte_be_to_cpu_16(mask->hdr.src_port); + fdir_match->key.ipv4.dport = + rte_be_to_cpu_16(spec->hdr.dst_port); + fdir_match->mask.ipv4.dport = + rte_be_to_cpu_16(mask->hdr.dst_port); + } + + return 0; +} + +static int +sssnic_ethdev_flow_vxlan_parse(const struct rte_flow_item *item, + struct rte_flow_error *error, + struct sssnic_ethdev_fdir_flow_match *fdir_match) +{ + const struct rte_flow_item_vxlan *spec, *mask; + uint32_t vni; + + spec = (const struct rte_flow_item_vxlan *)item->spec; + mask = (const struct rte_flow_item_vxlan *)item->mask; + + if 
(spec == NULL && mask == NULL) + return 0; + + if (spec == NULL || mask == NULL) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, + item, "Invalid VXLAN spec or mask"); + return -rte_errno; + } + + /* vxlan-ipv6 match can share vni with vxlan-ipv4 match */ + if (fdir_match != NULL) { + vni = 0; + rte_memcpy(((uint8_t *)&vni) + 1, spec->vni, 3); + vni = rte_be_to_cpu_32(vni); + fdir_match->key.ipv4.vni_w0 = (uint16_t)vni; + fdir_match->key.ipv4.vni_w1 = (uint16_t)(vni >> 16); + vni = 0; + rte_memcpy(((uint8_t *)&vni) + 1, mask->vni, 3); + vni = rte_be_to_cpu_32(vni); + fdir_match->mask.ipv4.vni_w0 = (uint16_t)vni; + fdir_match->mask.ipv4.vni_w1 = (uint16_t)(vni >> 16); + } + + return 0; +} + +static int +sssnic_ethdev_flow_fdir_pattern_parse(const struct rte_flow_item *pattern, + struct rte_flow_error *error, bool is_tunnel, + struct sssnic_ethdev_fdir_rule *fdir_rule) +{ + struct sssnic_ethdev_fdir_flow_match *fdir_match = NULL; + const struct rte_flow_item *flow_item; + bool outer_ip; + int ret = 0; + + if (fdir_rule != NULL) { + fdir_rule->match.type = SSSNIC_ETHDEV_FDIR_MATCH_FLOW; + fdir_match = &fdir_rule->match.flow; + } + + outer_ip = is_tunnel; + + flow_item = pattern; + while (flow_item->type != RTE_FLOW_ITEM_TYPE_END) { + switch (flow_item->type) { + case RTE_FLOW_ITEM_TYPE_ETH: + ret = sssnic_ethdev_flow_eth_parse(flow_item, error); + break; + case RTE_FLOW_ITEM_TYPE_IPV4: + ret = sssnic_ethdev_flow_ipv4_parse(flow_item, error, + outer_ip, fdir_match); + break; + case RTE_FLOW_ITEM_TYPE_IPV6: + ret = sssnic_ethdev_flow_ipv6_parse(flow_item, error, + is_tunnel, fdir_match); + break; + case RTE_FLOW_ITEM_TYPE_UDP: + ret = sssnic_ethdev_flow_udp_parse(flow_item, error, + outer_ip, fdir_match); + break; + case RTE_FLOW_ITEM_TYPE_TCP: + ret = sssnic_ethdev_flow_tcp_parse(flow_item, error, + outer_ip, fdir_match); + break; + case RTE_FLOW_ITEM_TYPE_VXLAN: + ret = sssnic_ethdev_flow_vxlan_parse(flow_item, error, + fdir_match); + outer_ip = false; /* next parsing is inner_ip */ + break; + default: + break; + } + + if (ret != 0) + return ret; + + flow_item++; + } + + if (is_tunnel) { + if (fdir_match != NULL) { + /* tunnel_type of ipv4 flow_match can share with other flow_matches */ + fdir_match->key.ipv4.tunnel_type = + SSSNIC_ETHDEV_FDIR_FLOW_TUNNEL_VXLAN; + fdir_match->mask.ipv4.tunnel_type = 0x1; + } + } + + return 0; +} + +static int +sssnic_ethdev_flow_attr_parse(const struct rte_flow_attr *attr, + struct rte_flow_error *error) +{ + if (attr->egress != 0 || attr->priority != 0 || attr->group != 0) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR, + attr, "Invalid flow attr, only ingress is supported"); + return -rte_errno; + } + + if (attr->ingress == 0) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, attr, + "Ingress of flow attr is not set"); + return -rte_errno; + } + + return 0; +} + +static int +sssnic_ethdev_flow_parse(struct rte_eth_dev *ethdev, + const struct rte_flow_attr *attr, const struct rte_flow_item *pattern, + const struct rte_flow_action *actions, struct rte_flow_error *error, + struct sssnic_ethdev_fdir_rule *fdir_rule) +{ + int ret; + struct sssnic_ethdev_flow_pattern *flow_pattern; + + flow_pattern = sssnic_ethdev_flow_pattern_lookup(pattern); + if (flow_pattern == NULL) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, + NULL, "Unsupported pattern"); + return -rte_errno; + } + + if (flow_pattern->type == SSSNIC_ETHDEV_FLOW_TYPE_FDIR) + ret = sssnic_ethdev_flow_fdir_pattern_parse(pattern, 
error, + flow_pattern->is_tunnel, fdir_rule); + else + ret = sssnic_ethdev_flow_ethertype_pattern_parse(pattern, error, + fdir_rule); + if (ret != 0) + return ret; + + ret = sssnic_ethdev_flow_action_parse(ethdev, actions, error, + fdir_rule); + if (ret != 0) + return ret; + + ret = sssnic_ethdev_flow_attr_parse(attr, error); + if (ret != 0) + return ret; + + return 0; +} + +static struct rte_flow * +sssnic_ethdev_flow_create(struct rte_eth_dev *ethdev, + const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], struct rte_flow_error *error) +{ + struct sssnic_ethdev_fdir_rule *rule; + int ret; + + rule = sssnic_ethdev_fdir_rule_alloc(); + if (rule == NULL) { + rte_flow_error_set(error, ENOMEM, RTE_FLOW_ERROR_TYPE_HANDLE, + NULL, "Failed to allocate fdir rule memory"); + return NULL; + } + + ret = sssnic_ethdev_flow_parse(ethdev, attr, pattern, actions, error, + rule); + if (ret != 0) { + sssnic_ethdev_fdir_rule_free(rule); + return NULL; + } + + ret = sssnic_ethdev_fdir_rule_add(ethdev, rule); + if (ret != 0) { + sssnic_ethdev_fdir_rule_free(rule); + rte_flow_error_set(error, ret, RTE_FLOW_ERROR_TYPE_HANDLE, NULL, + "Failed to add fdir rule"); + return NULL; + } + + return (struct rte_flow *)rule; +} + +static int +sssnic_ethdev_flow_destroy(struct rte_eth_dev *ethdev, struct rte_flow *flow, + struct rte_flow_error *error) +{ + int ret; + + if (flow == NULL) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_HANDLE, + NULL, "Invalid parameter"); + return -rte_errno; + } + + ret = sssnic_ethdev_fdir_rule_del(ethdev, + (struct sssnic_ethdev_fdir_rule *)flow); + + if (ret != 0) { + rte_flow_error_set(error, EIO, RTE_FLOW_ERROR_TYPE_HANDLE, NULL, + "Failed to delete fdir rule"); + return -rte_errno; + } + + sssnic_ethdev_fdir_rule_free((struct sssnic_ethdev_fdir_rule *)flow); + + return 0; +} + +static int +sssnic_ethdev_flow_validate(struct rte_eth_dev *ethdev, + const struct rte_flow_attr *attr, const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], struct rte_flow_error *error) +{ + return sssnic_ethdev_flow_parse(ethdev, attr, pattern, actions, error, + NULL); +} + +static int +sssnic_ethdev_flow_flush(struct rte_eth_dev *ethdev, + struct rte_flow_error *error) +{ + int ret; + + ret = sssnic_ethdev_fdir_rules_flush(ethdev); + if (ret != 0) { + rte_flow_error_set(error, EIO, RTE_FLOW_ERROR_TYPE_HANDLE, NULL, + "Failed to flush fdir rules"); + return -rte_errno; + } + + return 0; +} + +static const struct rte_flow_ops sssnic_ethdev_flow_ops = { + .validate = sssnic_ethdev_flow_validate, + .create = sssnic_ethdev_flow_create, + .destroy = sssnic_ethdev_flow_destroy, + .flush = sssnic_ethdev_flow_flush, +}; + +int +sssnic_ethdev_flow_ops_get(struct rte_eth_dev *ethdev, + const struct rte_flow_ops **ops) +{ + RTE_SET_USED(ethdev); + + *ops = &sssnic_ethdev_flow_ops; + + return 0; +} diff --git a/drivers/net/sssnic/sssnic_ethdev_flow.h b/drivers/net/sssnic/sssnic_ethdev_flow.h new file mode 100644 index 0000000000..2812b783e2 --- /dev/null +++ b/drivers/net/sssnic/sssnic_ethdev_flow.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018-2022 Shenzhen 3SNIC Information Technology Co., Ltd. 
+ */ + +#ifndef _SSSNIC_ETHDEV_FLOW_H_ +#define _SSSNIC_ETHDEV_FLOW_H_ + +int sssnic_ethdev_flow_ops_get(struct rte_eth_dev *ethdev, + const struct rte_flow_ops **ops); + +#endif /* _SSSNIC_ETHDEV_FLOW_H_ */ diff --git a/drivers/net/sssnic/sssnic_ethdev_rx.c b/drivers/net/sssnic/sssnic_ethdev_rx.c index 6c5f209262..46a1d5fd23 100644 --- a/drivers/net/sssnic/sssnic_ethdev_rx.c +++ b/drivers/net/sssnic/sssnic_ethdev_rx.c @@ -11,6 +11,7 @@ #include "sssnic_ethdev.h" #include "sssnic_ethdev_rx.h" #include "sssnic_ethdev_rss.h" +#include "sssnic_ethdev_fdir.h" #include "base/sssnic_hw.h" #include "base/sssnic_workq.h" #include "base/sssnic_api.h" @@ -593,9 +594,18 @@ static int sssnic_ethdev_rxq_enable(struct rte_eth_dev *ethdev, uint16_t queue_id) { struct sssnic_ethdev_rxq *rxq = ethdev->data->rx_queues[queue_id]; + int ret; sssnic_ethdev_rxq_pktmbufs_fill(rxq); + pthread_mutex_lock(ðdev->data->flow_ops_mutex); + ret = sssnic_ethdev_fdir_rules_enable_by_queue(ethdev, queue_id); + if (ret) + PMD_DRV_LOG(WARNING, + "Failed to enable fdir rules of rxq:%u, port:%u", + queue_id, ethdev->data->port_id); + pthread_mutex_unlock(ðdev->data->flow_ops_mutex); + return 0; } @@ -605,6 +615,14 @@ sssnic_ethdev_rxq_disable(struct rte_eth_dev *ethdev, uint16_t queue_id) struct sssnic_ethdev_rxq *rxq = ethdev->data->rx_queues[queue_id]; int ret; + pthread_mutex_lock(ðdev->data->flow_ops_mutex); + ret = sssnic_ethdev_fdir_rules_disable_by_queue(ethdev, queue_id); + if (ret != 0) + PMD_DRV_LOG(WARNING, + "Failed to disable fdir rules of rxq:%u, port:%u", + queue_id, ethdev->data->port_id); + pthread_mutex_unlock(ðdev->data->flow_ops_mutex); + ret = sssnic_ethdev_rxq_flush(rxq); if (ret != 0) { PMD_DRV_LOG(ERR, "Failed to flush rxq:%u, port:%u", queue_id, From patchwork Fri Sep 1 09:35:14 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Renyong Wan X-Patchwork-Id: 131077 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5A1E04221E; Fri, 1 Sep 2023 11:40:05 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7DDBC41156; Fri, 1 Sep 2023 11:36:29 +0200 (CEST) Received: from VLXDG1SPAM1.ramaxel.com (email.unionmem.com [221.4.138.186]) by mails.dpdk.org (Postfix) with ESMTP id ADD5F40E68 for ; Fri, 1 Sep 2023 11:36:18 +0200 (CEST) Received: from V12DG1MBS03.ramaxel.local ([172.26.18.33]) by VLXDG1SPAM1.ramaxel.com with ESMTP id 3819ZbnD069920; Fri, 1 Sep 2023 17:35:37 +0800 (GMT-8) (envelope-from wanry@3snic.com) Received: from localhost.localdomain (10.64.136.151) by V12DG1MBS03.ramaxel.local (172.26.18.33) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2375.17; Fri, 1 Sep 2023 17:35:36 +0800 From: To: CC: , Renyong Wan , Steven Song Subject: [PATCH v4 32/32] net/sssnic: add VF dev support Date: Fri, 1 Sep 2023 17:35:14 +0800 Message-ID: <20230901093514.224824-33-wanry@3snic.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230901093514.224824-1-wanry@3snic.com> References: <20230901093514.224824-1-wanry@3snic.com> MIME-Version: 1.0 X-Originating-IP: [10.64.136.151] X-ClientProxiedBy: V12DG1MBS03.ramaxel.local (172.26.18.33) To V12DG1MBS03.ramaxel.local (172.26.18.33) X-DNSRBL: X-SPAM-SOURCE-CHECK: pass X-MAIL: VLXDG1SPAM1.ramaxel.com 
3819ZbnD069920 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Renyong Wan In comparison to PF, VF PMD does not support the following features: 1. link up and link down set 2. promiscuous enable and disable 3. MAC stats in extend xstats Signed-off-by: Steven Song Signed-off-by: Renyong Wan --- drivers/net/sssnic/base/sssnic_api.c | 42 +++++++++++++++++ drivers/net/sssnic/base/sssnic_api.h | 2 + drivers/net/sssnic/base/sssnic_cmd.h | 6 +++ drivers/net/sssnic/base/sssnic_hw.c | 19 ++++++-- drivers/net/sssnic/base/sssnic_hw.h | 4 ++ drivers/net/sssnic/sssnic_ethdev.c | 60 +++++++++++++++++++++++- drivers/net/sssnic/sssnic_ethdev_stats.c | 29 ++++++++++-- 7 files changed, 153 insertions(+), 9 deletions(-) diff --git a/drivers/net/sssnic/base/sssnic_api.c b/drivers/net/sssnic/base/sssnic_api.c index 0e965442fd..2d829bc884 100644 --- a/drivers/net/sssnic/base/sssnic_api.c +++ b/drivers/net/sssnic/base/sssnic_api.c @@ -1899,3 +1899,45 @@ sssnic_tcam_entry_del(struct sssnic_hw *hw, uint32_t entry_idx) return 0; } + +static int +sssnic_vf_port_register_op(struct sssnic_hw *hw, bool op) +{ + int ret; + struct sssnic_vf_port_register_cmd cmd; + struct sssnic_msg msg; + uint32_t cmd_len; + + memset(&cmd, 0, sizeof(cmd)); + cmd.op = op ? 1 : 0; + cmd_len = sizeof(cmd); + sssnic_msg_init(&msg, (uint8_t *)&cmd, cmd_len, + SSSNIC_REGISTER_VF_PORT_CMD, SSSNIC_PF_FUNC_IDX(hw), + SSSNIC_LAN_MODULE, SSSNIC_MSG_TYPE_REQ); + ret = sssnic_mbox_send(hw, &msg, (uint8_t *)&cmd, &cmd_len, 0); + if (ret != 0) { + PMD_DRV_LOG(ERR, "Failed to send mbox message, ret=%d", ret); + return ret; + } + + if (cmd_len == 0 || cmd.common.status != 0) { + PMD_DRV_LOG(ERR, + "Bad response to SSSNIC_REGISTER_VF_PORT_CMD, len=%u, status=%u", + cmd_len, cmd.common.status); + return -EIO; + } + + return 0; +} + +int +sssnic_vf_port_register(struct sssnic_hw *hw) +{ + return sssnic_vf_port_register_op(hw, true); +} + +int +sssnic_vf_port_unregister(struct sssnic_hw *hw) +{ + return sssnic_vf_port_register_op(hw, false); +} diff --git a/drivers/net/sssnic/base/sssnic_api.h b/drivers/net/sssnic/base/sssnic_api.h index 7a02ec61ee..2506682821 100644 --- a/drivers/net/sssnic/base/sssnic_api.h +++ b/drivers/net/sssnic/base/sssnic_api.h @@ -492,5 +492,7 @@ int sssnic_tcam_packet_type_filter_set(struct sssnic_hw *hw, uint8_t ptype, int sssnic_tcam_entry_add(struct sssnic_hw *hw, struct sssnic_tcam_entry *entry); int sssnic_tcam_entry_del(struct sssnic_hw *hw, uint32_t entry_idx); +int sssnic_vf_port_register(struct sssnic_hw *hw); +int sssnic_vf_port_unregister(struct sssnic_hw *hw); #endif /* _SSSNIC_API_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_cmd.h b/drivers/net/sssnic/base/sssnic_cmd.h index c75cb0dad3..058ab298f3 100644 --- a/drivers/net/sssnic/base/sssnic_cmd.h +++ b/drivers/net/sssnic/base/sssnic_cmd.h @@ -505,4 +505,10 @@ struct sssnic_tcam_entry_del_cmd { uint32_t num; /* number of entries to be deleted */ }; +struct sssnic_vf_port_register_cmd { + struct sssnic_cmd_common common; + uint8_t op; /* 0: unregister, 1: register */ + uint8_t resvd[39]; +}; + #endif /* _SSSNIC_CMD_H_ */ diff --git a/drivers/net/sssnic/base/sssnic_hw.c b/drivers/net/sssnic/base/sssnic_hw.c index 651a0aa7ef..0edd5b9508 100644 --- a/drivers/net/sssnic/base/sssnic_hw.c +++ b/drivers/net/sssnic/base/sssnic_hw.c @@ -345,10 +345,16 @@ sssnic_base_init(struct sssnic_hw *hw) pci_dev = 
hw->pci_dev;
 
 	/* get base addresses of hw registers */
-	hw->cfg_base_addr =
-		(uint8_t *)pci_dev->mem_resource[SSSNIC_PCI_BAR_CFG].addr;
-	hw->mgmt_base_addr =
-		(uint8_t *)pci_dev->mem_resource[SSSNIC_PCI_BAR_MGMT].addr;
+	if (pci_dev->id.device_id == SSSNIC_VF_DEVICE_ID) {
+		uint8_t *addr =
+			(uint8_t *)pci_dev->mem_resource[SSSNIC_VF_PCI_BAR_CFG].addr;
+		hw->cfg_base_addr = addr + SSSNIC_VF_CFG_ADDR_OFFSET;
+	} else {
+		hw->cfg_base_addr =
+			(uint8_t *)pci_dev->mem_resource[SSSNIC_PCI_BAR_CFG].addr;
+		hw->mgmt_base_addr =
+			(uint8_t *)pci_dev->mem_resource[SSSNIC_PCI_BAR_MGMT].addr;
+	}
 	hw->db_base_addr =
 		(uint8_t *)pci_dev->mem_resource[SSSNIC_PCI_BAR_DB].addr;
 	hw->db_mem_len =
@@ -365,7 +371,10 @@ sssnic_base_init(struct sssnic_hw *hw)
 		PMD_DRV_LOG(ERR, "Doorbell is not enabled!");
 		return -EBUSY;
 	}
-	sssnic_af_setup(hw);
+
+	if (SSSNIC_FUNC_TYPE(hw) != SSSNIC_FUNC_TYPE_VF)
+		sssnic_af_setup(hw);
+
 	sssnic_msix_all_disable(hw);
 
 	sssnic_pf_status_set(hw, SSSNIC_PF_STATUS_INIT);
diff --git a/drivers/net/sssnic/base/sssnic_hw.h b/drivers/net/sssnic/base/sssnic_hw.h
index 6a2d980d5a..9d8a3653a7 100644
--- a/drivers/net/sssnic/base/sssnic_hw.h
+++ b/drivers/net/sssnic/base/sssnic_hw.h
@@ -7,11 +7,15 @@
 
 #define SSSNIC_PCI_VENDOR_ID 0x1F3F
 #define SSSNIC_DEVICE_ID_STD 0x9020
+#define SSSNIC_VF_DEVICE_ID 0x9001
 
+#define SSSNIC_VF_PCI_BAR_CFG 0
 #define SSSNIC_PCI_BAR_CFG 1
 #define SSSNIC_PCI_BAR_MGMT 3
 #define SSSNIC_PCI_BAR_DB 4
 
+#define SSSNIC_VF_CFG_ADDR_OFFSET 0x2000
+
 #define SSSNIC_FUNC_TYPE_PF 0
 #define SSSNIC_FUNC_TYPE_VF 1
 #define SSSNIC_FUNC_TYPE_AF 2
diff --git a/drivers/net/sssnic/sssnic_ethdev.c b/drivers/net/sssnic/sssnic_ethdev.c
index 545833fb55..18d9ba7ac1 100644
--- a/drivers/net/sssnic/sssnic_ethdev.c
+++ b/drivers/net/sssnic/sssnic_ethdev.c
@@ -349,6 +349,8 @@ sssnic_ethdev_release(struct rte_eth_dev *ethdev)
 	sssnic_ethdev_rx_queue_all_release(ethdev);
 	sssnic_ethdev_fdir_shutdown(ethdev);
 	sssnic_ethdev_mac_addrs_clean(ethdev);
+	if (SSSNIC_FUNC_TYPE(hw) == SSSNIC_FUNC_TYPE_VF)
+		sssnic_vf_port_unregister(hw);
 	sssnic_hw_shutdown(hw);
 	rte_free(hw);
 }
@@ -957,6 +959,47 @@ static const struct eth_dev_ops sssnic_ethdev_ops = {
 	.flow_ops_get = sssnic_ethdev_flow_ops_get,
 };
 
+static const struct eth_dev_ops sssnic_vf_ethdev_ops = {
+	.dev_start = sssnic_ethdev_start,
+	.dev_stop = sssnic_ethdev_stop,
+	.dev_close = sssnic_ethdev_close,
+	.link_update = sssnic_ethdev_link_update,
+	.dev_configure = sssnic_ethdev_configure,
+	.dev_infos_get = sssnic_ethdev_infos_get,
+	.mtu_set = sssnic_ethdev_mtu_set,
+	.mac_addr_set = sssnic_ethdev_mac_addr_set,
+	.mac_addr_remove = sssnic_ethdev_mac_addr_remove,
+	.mac_addr_add = sssnic_ethdev_mac_addr_add,
+	.set_mc_addr_list = sssnic_ethdev_set_mc_addr_list,
+	.rx_queue_setup = sssnic_ethdev_rx_queue_setup,
+	.rx_queue_release = sssnic_ethdev_rx_queue_release,
+	.tx_queue_setup = sssnic_ethdev_tx_queue_setup,
+	.tx_queue_release = sssnic_ethdev_tx_queue_release,
+	.rx_queue_start = sssnic_ethdev_rx_queue_start,
+	.rx_queue_stop = sssnic_ethdev_rx_queue_stop,
+	.tx_queue_start = sssnic_ethdev_tx_queue_start,
+	.tx_queue_stop = sssnic_ethdev_tx_queue_stop,
+	.rx_queue_intr_enable = sssnic_ethdev_rx_queue_intr_enable,
+	.rx_queue_intr_disable = sssnic_ethdev_rx_queue_intr_disable,
+	.allmulticast_enable = sssnic_ethdev_allmulticast_enable,
+	.allmulticast_disable = sssnic_ethdev_allmulticast_disable,
+	.stats_get = sssnic_ethdev_stats_get,
+	.stats_reset = sssnic_ethdev_stats_reset,
+	.xstats_get_names = sssnic_ethdev_xstats_get_names,
+	.xstats_get = sssnic_ethdev_xstats_get,
+	.xstats_reset = sssnic_ethdev_xstats_reset,
+	.rss_hash_conf_get = sssnic_ethdev_rss_hash_config_get,
+	.rss_hash_update = sssnic_ethdev_rss_hash_update,
+	.reta_update = sssnic_ethdev_rss_reta_update,
+	.reta_query = sssnic_ethdev_rss_reta_query,
+	.rxq_info_get = sssnic_ethdev_rx_queue_info_get,
+	.txq_info_get = sssnic_ethdev_tx_queue_info_get,
+	.fw_version_get = sssnic_ethdev_fw_version_get,
+	.vlan_offload_set = sssnic_ethdev_vlan_offload_set,
+	.vlan_filter_set = sssnic_ethdev_vlan_filter_set,
+	.flow_ops_get = sssnic_ethdev_flow_ops_get,
+};
+
 static int
 sssnic_ethdev_init(struct rte_eth_dev *ethdev)
 {
@@ -989,6 +1032,14 @@ sssnic_ethdev_init(struct rte_eth_dev *ethdev)
 		return ret;
 	}
 
+	if (SSSNIC_FUNC_TYPE(hw) == SSSNIC_FUNC_TYPE_VF) {
+		ret = sssnic_vf_port_register(hw);
+		if (ret != 0) {
+			PMD_DRV_LOG(ERR, "Failed to register VF device");
+			goto vf_register_fail;
+		}
+	}
+
 	ret = sssnic_ethdev_mac_addrs_init(ethdev);
 	if (ret != 0) {
 		PMD_DRV_LOG(ERR, "Failed to initialize MAC addresses");
@@ -1004,7 +1055,10 @@ sssnic_ethdev_init(struct rte_eth_dev *ethdev)
 	netdev->max_num_rxq = SSSNIC_MAX_NUM_RXQ(hw);
 	netdev->max_num_txq = SSSNIC_MAX_NUM_TXQ(hw);
 
-	ethdev->dev_ops = &sssnic_ethdev_ops;
+	if (SSSNIC_FUNC_TYPE(hw) == SSSNIC_FUNC_TYPE_VF)
+		ethdev->dev_ops = &sssnic_vf_ethdev_ops;
+	else
+		ethdev->dev_ops = &sssnic_ethdev_ops;
 
 	sssnic_ethdev_link_update(ethdev, 0);
 	sssnic_ethdev_link_intr_enable(ethdev);
@@ -1014,6 +1068,9 @@ sssnic_ethdev_init(struct rte_eth_dev *ethdev)
 fdir_init_fail:
 	sssnic_ethdev_mac_addrs_clean(ethdev);
 mac_addrs_init_fail:
+	if (SSSNIC_FUNC_TYPE(hw) == SSSNIC_FUNC_TYPE_VF)
+		sssnic_vf_port_unregister(hw);
+vf_register_fail:
 	sssnic_hw_shutdown(0);
 	return ret;
 }
@@ -1059,6 +1116,7 @@ sssnic_pci_remove(struct rte_pci_device *pci_dev)
 
 static const struct rte_pci_id sssnic_pci_id_map[] = {
 	{ RTE_PCI_DEVICE(SSSNIC_PCI_VENDOR_ID, SSSNIC_DEVICE_ID_STD) },
+	{ RTE_PCI_DEVICE(SSSNIC_PCI_VENDOR_ID, SSSNIC_VF_DEVICE_ID) },
 	{ .vendor_id = 0 },
 };
 
diff --git a/drivers/net/sssnic/sssnic_ethdev_stats.c b/drivers/net/sssnic/sssnic_ethdev_stats.c
index dd91aef5f7..dc7d912dbf 100644
--- a/drivers/net/sssnic/sssnic_ethdev_stats.c
+++ b/drivers/net/sssnic/sssnic_ethdev_stats.c
@@ -244,9 +244,19 @@ sssnic_ethdev_stats_reset(struct rte_eth_dev *ethdev)
 static uint32_t
 sssnic_ethdev_xstats_num_calc(struct rte_eth_dev *ethdev)
 {
-	return SSSNIC_ETHDEV_NB_PORT_XSTATS + SSSNIC_ETHDEV_NB_MAC_XSTATS +
-		(SSSNIC_ETHDEV_NB_TXQ_XSTATS * ethdev->data->nb_tx_queues) +
-		(SSSNIC_ETHDEV_NB_RXQ_XSTATS * ethdev->data->nb_rx_queues);
+	struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev);
+	uint32_t num;
+
+	num = SSSNIC_ETHDEV_NB_PORT_XSTATS;
+	num += SSSNIC_ETHDEV_NB_TXQ_XSTATS * ethdev->data->nb_tx_queues;
+	num += SSSNIC_ETHDEV_NB_RXQ_XSTATS * ethdev->data->nb_rx_queues;
+
+	if (SSSNIC_FUNC_TYPE(hw) == SSSNIC_FUNC_TYPE_VF)
+		return num;
+
+	num += SSSNIC_ETHDEV_NB_MAC_XSTATS;
+
+	return num;
 }
 
 int
@@ -255,6 +265,7 @@ sssnic_ethdev_xstats_get_names(struct rte_eth_dev *ethdev,
 	__rte_unused unsigned int limit)
 {
 	uint16_t i, qid, count = 0;
+	struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev);
 
 	if (xstats_names == NULL)
 		return sssnic_ethdev_xstats_num_calc(ethdev);
@@ -283,6 +294,9 @@ sssnic_ethdev_xstats_get_names(struct rte_eth_dev *ethdev,
 		count++;
 	}
 
+	if (SSSNIC_FUNC_TYPE(hw) == SSSNIC_FUNC_TYPE_VF)
+		return count;
+
 	for (i = 0; i < SSSNIC_ETHDEV_NB_MAC_XSTATS; i++) {
 		snprintf(xstats_names[count].name, RTE_ETH_XSTATS_NAME_SIZE,
 			"mac_%s", mac_stats_strings[i].name);
@@ -348,6 +362,11 @@ sssnic_ethdev_xstats_get(struct rte_eth_dev *ethdev,
 		count++;
 	}
 
+	if (SSSNIC_FUNC_TYPE(hw) == SSSNIC_FUNC_TYPE_VF) {
+		ret = count;
+		goto out;
+	}
+
 	ret = sssnic_mac_stats_get(hw, &stats->mac);
 	if (ret) {
 		PMD_DRV_LOG(ERR, "Failed to get port %u mac stats",
@@ -372,6 +391,7 @@ int
 sssnic_ethdev_xstats_reset(struct rte_eth_dev *ethdev)
 {
 	int ret;
+	struct sssnic_hw *hw = SSSNIC_ETHDEV_TO_HW(ethdev);
 
 	ret = sssnic_ethdev_stats_reset(ethdev);
 	if (ret) {
@@ -380,6 +400,9 @@ sssnic_ethdev_xstats_reset(struct rte_eth_dev *ethdev)
 		return ret;
 	}
 
+	if (SSSNIC_FUNC_TYPE(hw) == SSSNIC_FUNC_TYPE_VF)
+		return 0;
+
 	ret = sssnic_mac_stats_clear(SSSNIC_ETHDEV_TO_HW(ethdev));
 	if (ret) {
 		PMD_DRV_LOG(ERR, "Failed to clear port %u MAC stats",