From patchwork Thu Dec 31 07:22:33 2020
X-Patchwork-Submitter: Pradeep Nalla X-Patchwork-Id: 85924 X-Patchwork-Delegate: ferruh.yigit@amd.com
From: "Nalla, Pradeep" To: Thomas Monjalon, "Nalla, Pradeep", Radha Mohan Chintakuntla, Veerasenareddy Burru, Ray Kinsella, Neil Horman Date: Thu, 31 Dec 2020 07:22:33 +0000 Message-ID: <20201231072247.5719-2-pnalla@marvell.com> In-Reply-To: <20201231072247.5719-1-pnalla@marvell.com> References: <20201231072247.5719-1-pnalla@marvell.com>
Subject: [dpdk-dev] [PATCH 01/15] net/octeontx_ep: add build and doc infrastructure
From: "Nalla Pradeep"
Add the bare-minimum PMD library and documentation build infrastructure, and claim maintainership of the OCTEON TX end point PMD.
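As a quick sanity check that the new driver library is built and registered, a minimal EAL application can list the probed ports and their driver names; ports backed by this PMD are expected to report the "net_otx_ep" name registered later in this series. The following is only an illustrative sketch using generic ethdev APIs, not part of the patch:

#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int
main(int argc, char **argv)
{
	uint16_t port_id;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* Every successfully probed ethdev reports its PMD name;
	 * OCTEON TX EP VFs should show up as "net_otx_ep".
	 */
	RTE_ETH_FOREACH_DEV(port_id) {
		struct rte_eth_dev_info dev_info;

		if (rte_eth_dev_info_get(port_id, &dev_info) == 0)
			printf("port %u: driver %s\n", port_id,
			       dev_info.driver_name);
	}

	rte_eal_cleanup();
	return 0;
}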
Signed-off-by: Nalla Pradeep --- MAINTAINERS | 9 +++++++ doc/guides/nics/features/octeontx_ep.ini | 8 ++++++ doc/guides/nics/index.rst | 1 + doc/guides/nics/octeontx_ep.rst | 32 ++++++++++++++++++++++++ drivers/net/meson.build | 1 + drivers/net/octeontx_ep/meson.build | 8 ++++++ drivers/net/octeontx_ep/otx_ep_ethdev.c | 3 +++ drivers/net/octeontx_ep/version.map | 4 +++ 8 files changed, 66 insertions(+) create mode 100644 doc/guides/nics/features/octeontx_ep.ini create mode 100644 doc/guides/nics/octeontx_ep.rst create mode 100644 drivers/net/octeontx_ep/meson.build create mode 100644 drivers/net/octeontx_ep/otx_ep_ethdev.c create mode 100644 drivers/net/octeontx_ep/version.map diff --git a/MAINTAINERS b/MAINTAINERS index 6787b15dcc..923c92bda2 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -762,6 +762,15 @@ T: git://dpdk.org/next/dpdk-next-crypto F: drivers/common/octeontx2/otx2_sec* F: drivers/net/octeontx2/otx2_ethdev_sec* +Marvell OCTEON TX EP - endpoint +M: Nalla Pradeep +M: Radha Mohan Chintakuntla +M: Veerasenareddy Burru +T: git://dpdk.org/next/dpdk-next-net-mrvl +F: drivers/net/octeontx_ep/ +F: doc/guides/nics/features/octeontx_ep.ini +F: doc/guides/nics/octeontx_ep.rst + Mellanox mlx4 M: Matan Azrad M: Shahaf Shuler diff --git a/doc/guides/nics/features/octeontx_ep.ini b/doc/guides/nics/features/octeontx_ep.ini new file mode 100644 index 0000000000..95d6585222 --- /dev/null +++ b/doc/guides/nics/features/octeontx_ep.ini @@ -0,0 +1,8 @@ +; +; Supported features of the 'octeontx_ep' network poll mode driver. +; +; Refer to default.ini for the full list of available PMD features. +; +[Features] +Linux VFIO = Y +Usage doc = Y diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst index 3443617755..799697caf0 100644 --- a/doc/guides/nics/index.rst +++ b/doc/guides/nics/index.rst @@ -50,6 +50,7 @@ Network Interface Controller Drivers null octeontx octeontx2 + octeontx_ep pfe qede sfc_efx diff --git a/doc/guides/nics/octeontx_ep.rst b/doc/guides/nics/octeontx_ep.rst new file mode 100644 index 0000000000..dba0847b06 --- /dev/null +++ b/doc/guides/nics/octeontx_ep.rst @@ -0,0 +1,32 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright(C) 2019 Marvell International Ltd. + +OCTEON TX EP Poll Mode driver +=========================== + +The OCTEON TX EP ETHDEV PMD (**librte_pmd_octeontx_ep**) provides poll mode ethdev +driver support for **Marvell OCTEON TX2** and **Cavium OCTEON TX** families of +adapters as well as for their virtual functions (VF) in SR-IOV context. + +More information can be found at `Marvell Official Website +`_. + +Features +-------- + +Features of the OCTEON TX EP Ethdev PMD are: + + +Prerequisites +------------- + +See :doc:`../platform/octeontx2` and `../platform/octeontx` for setup information. + +Compile time Config Options +--------------------------- + +The following options may be modified in the ``config`` file. + +- ``CONFIG_RTE_LIBRTE_OCTEONTX_EP_PMD`` (default ``y``) + + Toggle compilation of the ``librte_pmd_octeontx_ep`` driver. 
diff --git a/drivers/net/meson.build b/drivers/net/meson.build index 6e4aa6bf3f..87475d8fc3 100644 --- a/drivers/net/meson.build +++ b/drivers/net/meson.build @@ -42,6 +42,7 @@ drivers = ['af_packet', 'null', 'octeontx', 'octeontx2', + 'octeontx_ep', 'pcap', 'pfe', 'qede', diff --git a/drivers/net/octeontx_ep/meson.build b/drivers/net/octeontx_ep/meson.build new file mode 100644 index 0000000000..46462c8efe --- /dev/null +++ b/drivers/net/octeontx_ep/meson.build @@ -0,0 +1,8 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(C) 2019 Marvell International Ltd. +# + +sources = files( + 'otx_ep_ethdev.c', + ) + diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c new file mode 100644 index 0000000000..d26535deec --- /dev/null +++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c @@ -0,0 +1,3 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ diff --git a/drivers/net/octeontx_ep/version.map b/drivers/net/octeontx_ep/version.map new file mode 100644 index 0000000000..f4db678dd5 --- /dev/null +++ b/drivers/net/octeontx_ep/version.map @@ -0,0 +1,4 @@ +DPDK_20.0 { + + local: *; +}; From patchwork Thu Dec 31 07:22:34 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pradeep Nalla X-Patchwork-Id: 85915 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (xvm-189-124.dc0.ghst.net [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D21E6A0A00; Thu, 31 Dec 2020 08:23:15 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E985C140CF5; Thu, 31 Dec 2020 08:23:01 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 52434140CDE for ; Thu, 31 Dec 2020 08:22:58 +0100 (CET) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 0BV7G5RB022206 for ; Wed, 30 Dec 2020 23:22:57 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=wQIk5wRqBrJoPm5pEbRVL4zoxySLuh8wxm3zTbhOsMQ=; b=gPjHVy490YknCFDXb2xIzCkzdFMOunuRYjHqKo3PMdiWebRMWYJvWIW7rqrvMU+ImFLX BuHQ9ulCsRfFOn5BebxWBR2wUluUCvi8/Cdl/mSIOL/Cm8IIrSVYzyXTD4VNIj/MFFUs 6kH6nWsYTRqg8zqbcFV+vcwQM++5dsLkEIIRk0vEtk+n85mphkXC5SbaiVgsJce3Tyzd iVHLwMMjXSZiDknq91HYNJvJWF3J3gl52MpWvD0ZRpCB8kCven1jg2oMlEWZpqoZNPrf HbCgVSQQMNkWq2BTfRhrcAyk3LmdBsPsx1oW9HuaX8cd/eW7Kyv7dwViG8jYEiFwjxnP 4A== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0a-0016f401.pphosted.com with ESMTP id 35rqgehx54-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Wed, 30 Dec 2020 23:22:57 -0800 Received: from SC-EXCH04.marvell.com (10.93.176.84) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 30 Dec 2020 23:22:56 -0800 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 30 Dec 2020 23:22:55 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 30 Dec 2020 23:22:55 -0800 Received: from 
localhost.localdomain (unknown [10.111.145.157]) by maili.marvell.com (Postfix) with ESMTP id 2ECFB3F7041; Wed, 30 Dec 2020 23:22:55 -0800 (PST) From: "Nalla, Pradeep" To: Jerin Jacob , Nithin Dabilpuram , "Nalla, Pradeep" , "Radha Mohan Chintakuntla" , Veerasenareddy Burru CC: , Date: Thu, 31 Dec 2020 07:22:34 +0000 Message-ID: <20201231072247.5719-3-pnalla@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201231072247.5719-1-pnalla@marvell.com> References: <20201231072247.5719-1-pnalla@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.343, 18.0.737 definitions=2020-12-31_02:2020-12-30, 2020-12-31 signatures=0 Subject: [dpdk-dev] [PATCH 02/15] net/octeontx_ep: add ethdev probe and remove X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: "Nalla Pradeep" add basic PCIe ethdev probe and remove. Signed-off-by: Nalla Pradeep --- drivers/common/octeontx2/otx2_common.h | 3 ++ drivers/net/octeontx_ep/meson.build | 13 ++++++ drivers/net/octeontx_ep/otx_ep_common.h | 14 ++++++ drivers/net/octeontx_ep/otx_ep_ethdev.c | 62 +++++++++++++++++++++++++ drivers/net/octeontx_ep/otx_ep_vf.h | 9 ++++ 5 files changed, 101 insertions(+) create mode 100644 drivers/net/octeontx_ep/otx_ep_common.h create mode 100644 drivers/net/octeontx_ep/otx_ep_vf.h diff --git a/drivers/common/octeontx2/otx2_common.h b/drivers/common/octeontx2/otx2_common.h index b6779f7104..e119222ec2 100644 --- a/drivers/common/octeontx2/otx2_common.h +++ b/drivers/common/octeontx2/otx2_common.h @@ -136,6 +136,9 @@ extern int otx2_logtype_ree; #define PCI_DEVID_OCTEONTX2_RVU_CPT_VF 0xA0FE #define PCI_DEVID_OCTEONTX2_RVU_AF_VF 0xA0f8 #define PCI_DEVID_OCTEONTX2_DPI_VF 0xA081 +#define PCI_DEVID_OCTEONTX2_EP_NET_VF 0xB203 /* OCTEON TX2 EP mode */ +/* OCTEON TX2 98xx EP mode */ +#define PCI_DEVID_98XX_EP_NET_VF 0xB103 #define PCI_DEVID_OCTEONTX2_EP_VF 0xB203 /* OCTEON TX2 EP mode */ #define PCI_DEVID_OCTEONTX2_RVU_SDP_PF 0xA0f6 #define PCI_DEVID_OCTEONTX2_RVU_SDP_VF 0xA0f7 diff --git a/drivers/net/octeontx_ep/meson.build b/drivers/net/octeontx_ep/meson.build index 46462c8efe..42eab9b648 100644 --- a/drivers/net/octeontx_ep/meson.build +++ b/drivers/net/octeontx_ep/meson.build @@ -6,3 +6,16 @@ sources = files( 'otx_ep_ethdev.c', ) +extra_flags = [] +# This integrated controller runs only on a arm64 machine, remove 32bit warnings +if not dpdk_conf.get('RTE_ARCH_64') + extra_flags += ['-Wno-int-to-pointer-cast', '-Wno-pointer-to-int-cast'] +endif + +foreach flag: extra_flags + if cc.has_argument(flag) + cflags += flag + endif +endforeach + +includes += include_directories('../../common/octeontx2') diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h new file mode 100644 index 0000000000..7a1484f1aa --- /dev/null +++ b/drivers/net/octeontx_ep/otx_ep_common.h @@ -0,0 +1,14 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ +#ifndef _OTX_EP_COMMON_H_ +#define _OTX_EP_COMMON_H_ + +/* OTX_EP EP VF device data structure */ +struct otx_ep_device { + /* PCI device pointer */ + struct rte_pci_device *pdev; + + struct rte_eth_dev *eth_dev; +}; +#endif /* _OTX_EP_COMMON_H_ */ diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c index d26535deec..960c4f321e 100644 --- a/drivers/net/octeontx_ep/otx_ep_ethdev.c +++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c @@ -1,3 +1,65 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright(C) 2019 Marvell International Ltd. */ + +#include +#include +#include + +#include "otx2_common.h" +#include "otx_ep_common.h" +#include "otx_ep_vf.h" + +static int +otx_ep_eth_dev_uninit(struct rte_eth_dev *eth_dev) +{ + RTE_SET_USED(eth_dev); + + return -ENODEV; +} + +static int +otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev) +{ + RTE_SET_USED(eth_dev); + + return -ENODEV; +} + +static int +otx_ep_eth_dev_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) +{ + return rte_eth_dev_pci_generic_probe(pci_dev, + sizeof(struct otx_ep_device), + otx_ep_eth_dev_init); +} + +static int +otx_ep_eth_dev_pci_remove(struct rte_pci_device *pci_dev) +{ + return rte_eth_dev_pci_generic_remove(pci_dev, + otx_ep_eth_dev_uninit); +} + + +/* Set of PCI devices this driver supports */ +static const struct rte_pci_id pci_id_otx_ep_map[] = { + { RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX_EP_VF) }, + { RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_EP_NET_VF) }, + { RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_98XX_EP_NET_VF) }, + { .vendor_id = 0, /* sentinel */ } +}; + + + +static struct rte_pci_driver rte_otx_ep_pmd = { + .id_table = pci_id_otx_ep_map, + .drv_flags = RTE_PCI_DRV_NEED_MAPPING, + .probe = otx_ep_eth_dev_pci_probe, + .remove = otx_ep_eth_dev_pci_remove, +}; + +RTE_PMD_REGISTER_PCI(net_otx_ep, rte_otx_ep_pmd); +RTE_PMD_REGISTER_PCI_TABLE(net_otx_ep, pci_id_otx_ep_map); +RTE_PMD_REGISTER_KMOD_DEP(net_otx_ep, "* igb_uio | vfio-pci"); diff --git a/drivers/net/octeontx_ep/otx_ep_vf.h b/drivers/net/octeontx_ep/otx_ep_vf.h new file mode 100644 index 0000000000..bee8a26727 --- /dev/null +++ b/drivers/net/octeontx_ep/otx_ep_vf.h @@ -0,0 +1,9 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ +#ifndef _OTX_EP_VF_H_ +#define _OTX_EP_VF_H_ + +#define PCI_DEVID_OCTEONTX_EP_VF 0xa303 + +#endif /*_OTX_EP_VF_H_ */
From patchwork Thu Dec 31 07:22:35 2020
X-Patchwork-Submitter: Pradeep Nalla X-Patchwork-Id: 85914 X-Patchwork-Delegate: ferruh.yigit@amd.com
From: "Nalla, Pradeep" To: "Nalla, Pradeep", Radha Mohan Chintakuntla, Veerasenareddy Burru, Anatoly Burakov Date: Thu, 31 Dec 2020 07:22:35 +0000 Message-ID: <20201231072247.5719-4-pnalla@marvell.com> In-Reply-To: <20201231072247.5719-1-pnalla@marvell.com> References: <20201231072247.5719-1-pnalla@marvell.com>
Subject: [dpdk-dev] [PATCH 03/15] net/octeontx_ep: add device init and uninit
From: "Nalla Pradeep"
Add basic ethdev init and uninit functions, which initialize the fields of the ethdev private structure.
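For reference, the VF index computed in otx_ep_eth_dev_init() below is derived purely from the PCI device and function numbers of the probed address, with the raw function index decremented by one so that the first VF maps to index 0. A standalone sketch of the same arithmetic (the helper name is hypothetical, the address in the comment is an example only):

#include <stdint.h>
#include <rte_pci.h>

/* Mirrors the vf_id calculation in otx_ep_eth_dev_init():
 * e.g. address 0000:02:00.1 -> devid 0x00, function 1
 *      -> ((0 << 3) | 1) - 1 = VF index 0.
 */
static inline uint16_t
otx_ep_vf_index(const struct rte_pci_addr *addr)
{
	return (((addr->devid & 0x1F) << 3) | (addr->function & 0x7)) - 1;
}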
Signed-off-by: Nalla Pradeep --- drivers/net/octeontx_ep/otx_ep_common.h | 22 ++++++ drivers/net/octeontx_ep/otx_ep_ethdev.c | 99 ++++++++++++++++++++++++- 2 files changed, 117 insertions(+), 4 deletions(-) diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h index 7a1484f1aa..fca0c79a43 100644 --- a/drivers/net/octeontx_ep/otx_ep_common.h +++ b/drivers/net/octeontx_ep/otx_ep_common.h @@ -4,11 +4,33 @@ #ifndef _OTX_EP_COMMON_H_ #define _OTX_EP_COMMON_H_ +#define otx_ep_printf(level, fmt, args...) \ + rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \ + fmt, ##args) + +#define otx_ep_info(fmt, args...) \ + otx_ep_printf(INFO, fmt, ##args) + +#define otx_ep_err(fmt, args...) \ + otx_ep_printf(ERR, fmt, ##args) + +#define otx_ep_dbg(fmt, args...) \ + otx_ep_printf(DEBUG, fmt, ##args) + /* OTX_EP EP VF device data structure */ struct otx_ep_device { /* PCI device pointer */ struct rte_pci_device *pdev; + uint16_t chip_id; + uint16_t vf_num; struct rte_eth_dev *eth_dev; + + int port_id; + + /* Memory mapped h/w address */ + uint8_t *hw_addr; + + int port_configured; }; #endif /* _OTX_EP_COMMON_H_ */ diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c index 960c4f321e..6012c3fe9d 100644 --- a/drivers/net/octeontx_ep/otx_ep_ethdev.c +++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c @@ -10,20 +10,111 @@ #include "otx_ep_common.h" #include "otx_ep_vf.h" +#define OTX_EP_DEV(_eth_dev) ((_eth_dev)->data->dev_private) +static int +otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf) +{ + struct rte_pci_device *pdev = otx_epvf->pdev; + uint32_t dev_id = pdev->id.device_id; + int ret; + + switch (dev_id) { + case PCI_DEVID_OCTEONTX_EP_VF: + otx_epvf->chip_id = PCI_DEVID_OCTEONTX_EP_VF; + break; + case PCI_DEVID_OCTEONTX2_EP_NET_VF: + case PCI_DEVID_98XX_EP_NET_VF: + otx_epvf->chip_id = dev_id; + break; + default: + otx_ep_err("Unsupported device\n"); + ret = -EINVAL; + } + + if (!ret) + otx_ep_info("OTX_EP dev_id[%d]\n", dev_id); + + return ret; +} + +/* OTX_EP VF device initialization */ +static int +otx_epdev_init(struct otx_ep_device *otx_epvf) +{ + if (otx_ep_chip_specific_setup(otx_epvf)) { + otx_ep_err("Chip specific setup failed\n"); + goto setup_fail; + } + + return 0; + +setup_fail: + return -ENOMEM; +} + + static int otx_ep_eth_dev_uninit(struct rte_eth_dev *eth_dev) { - RTE_SET_USED(eth_dev); + struct otx_ep_device *otx_epvf = OTX_EP_DEV(eth_dev); + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + otx_epvf->port_configured = 0; - return -ENODEV; + if (eth_dev->data->mac_addrs != NULL) + rte_free(eth_dev->data->mac_addrs); + + return 0; } + + static int otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev) { - RTE_SET_USED(eth_dev); + struct rte_pci_device *pdev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct otx_ep_device *otx_epvf = OTX_EP_DEV(eth_dev); + int vf_id; + unsigned char vf_mac_addr[RTE_ETHER_ADDR_LEN]; + + /* Single process support */ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + rte_eth_copy_pci_info(eth_dev, pdev); + + if (pdev->mem_resource[0].addr) { + otx_ep_info("OTX_EP_EP BAR0 is mapped:\n"); + } else { + otx_ep_err("OTX_EP_EP: Failed to map device BARs\n"); + otx_ep_err("BAR0 %p\n BAR2 %p", + pdev->mem_resource[0].addr, + pdev->mem_resource[2].addr); + return -ENODEV; + } + otx_epvf->eth_dev = eth_dev; + otx_epvf->port_id = eth_dev->data->port_id; + eth_dev->data->mac_addrs = rte_zmalloc("otx_ep", RTE_ETHER_ADDR_LEN, 0); + if (eth_dev->data->mac_addrs == NULL) { + 
otx_ep_err("MAC addresses memory allocation failed\n"); + return -ENOMEM; + } + rte_eth_random_addr(vf_mac_addr); + memcpy(eth_dev->data->mac_addrs, vf_mac_addr, RTE_ETHER_ADDR_LEN); + otx_epvf->hw_addr = pdev->mem_resource[0].addr; + otx_epvf->pdev = pdev; + + /* Discover the VF number being probed */ + vf_id = ((pdev->addr.devid & 0x1F) << 3) | + (pdev->addr.function & 0x7); + + vf_id -= 1; + otx_epvf->vf_num = vf_id; + otx_epdev_init(otx_epvf); + otx_epvf->port_configured = 0; - return -ENODEV; + return 0; } static int From patchwork Thu Dec 31 07:22:36 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pradeep Nalla X-Patchwork-Id: 85917 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (xvm-189-124.dc0.ghst.net [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6F6D5A0A00; Thu, 31 Dec 2020 08:23:36 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8875C140D04; Thu, 31 Dec 2020 08:23:04 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id C4E88140CD3 for ; Thu, 31 Dec 2020 08:22:58 +0100 (CET) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 0BV7G5RD022206 for ; Wed, 30 Dec 2020 23:22:58 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=2FkBeJTS+BZrLR3HeX0KwI/zBLyMubGCUOg/TVUgVhA=; b=CMDpZxaV9+eH4DFMDTyziK47xUVESphQU3rnVFIki7QZxjkjahhWXf7L0+8N9jSYxuMS PL4LvySqxCetLcxtIVRh5ZfP0KBf/r0d6NfyTyP2/pf6J7u3eQ6XB2gw0+f5WSX2teom UHgNIkSTJL9LqKv0DSWgb1dTVh1usfKEF0PRxOimk44Jb/CxhUYZ980prWoEB0UYsI6r 03oYXjNJfTSCVNya78dTFAVA+dwFC9Qj20Dj0CS9OY1x93np2gMMuBde3J5N9Y4u1b8N L817C1w4/C/upy/WdL7wfkiFlK68wKwK2sMepY2P54bNprda8Np9RJUrIxRjavFNETWc vw== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0a-0016f401.pphosted.com with ESMTP id 35rqgehx54-4 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Wed, 30 Dec 2020 23:22:57 -0800 Received: from SC-EXCH02.marvell.com (10.93.176.82) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 30 Dec 2020 23:22:56 -0800 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 30 Dec 2020 23:22:55 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 30 Dec 2020 23:22:56 -0800 Received: from localhost.localdomain (unknown [10.111.145.157]) by maili.marvell.com (Postfix) with ESMTP id CDF753F7040; Wed, 30 Dec 2020 23:22:55 -0800 (PST) From: "Nalla, Pradeep" To: "Nalla, Pradeep" , Radha Mohan Chintakuntla , Veerasenareddy Burru CC: , , Date: Thu, 31 Dec 2020 07:22:36 +0000 Message-ID: <20201231072247.5719-5-pnalla@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201231072247.5719-1-pnalla@marvell.com> References: <20201231072247.5719-1-pnalla@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.343, 18.0.737 definitions=2020-12-31_02:2020-12-30, 2020-12-31 signatures=0 Subject: [dpdk-dev] 
[PATCH 04/15] net/octeontx_ep: Added basic device setup. X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: "Nalla Pradeep" Functions to setup device, basic IQ and OQ registers are added. Signed-off-by: Nalla Pradeep --- drivers/net/octeontx_ep/meson.build | 2 + drivers/net/octeontx_ep/otx2_ep_vf.c | 138 +++++++++++++++++++++ drivers/net/octeontx_ep/otx2_ep_vf.h | 11 ++ drivers/net/octeontx_ep/otx_ep_common.h | 87 +++++++++++++ drivers/net/octeontx_ep/otx_ep_ethdev.c | 10 ++ drivers/net/octeontx_ep/otx_ep_vf.c | 154 ++++++++++++++++++++++++ drivers/net/octeontx_ep/otx_ep_vf.h | 33 +++++ 7 files changed, 435 insertions(+) create mode 100644 drivers/net/octeontx_ep/otx2_ep_vf.c create mode 100644 drivers/net/octeontx_ep/otx2_ep_vf.h create mode 100644 drivers/net/octeontx_ep/otx_ep_vf.c diff --git a/drivers/net/octeontx_ep/meson.build b/drivers/net/octeontx_ep/meson.build index 42eab9b648..c7a7aa84bb 100644 --- a/drivers/net/octeontx_ep/meson.build +++ b/drivers/net/octeontx_ep/meson.build @@ -4,6 +4,8 @@ sources = files( 'otx_ep_ethdev.c', + 'otx_ep_vf.c', + 'otx2_ep_vf.c', ) extra_flags = [] diff --git a/drivers/net/octeontx_ep/otx2_ep_vf.c b/drivers/net/octeontx_ep/otx2_ep_vf.c new file mode 100644 index 0000000000..f8be2f4864 --- /dev/null +++ b/drivers/net/octeontx_ep/otx2_ep_vf.c @@ -0,0 +1,138 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include "otx2_common.h" +#include "otx_ep_common.h" +#include "otx2_ep_vf.h" + +static void +otx2_vf_setup_global_iq_reg(struct otx_ep_device *otx_ep, int q_no) +{ + volatile uint64_t reg_val = 0ull; + + /* Select ES, RO, NS, RDSIZE,DPTR Fomat#0 for IQs + * IS_64B is by default enabled. 
+ */ + reg_val = otx2_read64(otx_ep->hw_addr + SDP_VF_R_IN_CONTROL(q_no)); + + reg_val |= SDP_VF_R_IN_CTL_RDSIZE; + reg_val |= SDP_VF_R_IN_CTL_IS_64B; + reg_val |= SDP_VF_R_IN_CTL_ESR; + + otx2_write64(reg_val, otx_ep->hw_addr + SDP_VF_R_IN_CONTROL(q_no)); +} + +static void +otx2_vf_setup_global_oq_reg(struct otx_ep_device *otx_ep, int q_no) +{ + volatile uint64_t reg_val = 0ull; + + reg_val = otx2_read64(otx_ep->hw_addr + SDP_VF_R_OUT_CONTROL(q_no)); + +#if defined(BUFPTR_ONLY_MODE) + reg_val &= ~(SDP_VF_R_OUT_CTL_IMODE); +#else + reg_val |= (SDP_VF_R_OUT_CTL_IMODE); +#endif + + reg_val &= ~(SDP_VF_R_OUT_CTL_ROR_P); + reg_val &= ~(SDP_VF_R_OUT_CTL_NSR_P); + reg_val &= ~(SDP_VF_R_OUT_CTL_ROR_I); + reg_val &= ~(SDP_VF_R_OUT_CTL_NSR_I); + reg_val &= ~(SDP_VF_R_OUT_CTL_ES_I); + reg_val &= ~(SDP_VF_R_OUT_CTL_ROR_D); + reg_val &= ~(SDP_VF_R_OUT_CTL_NSR_D); + reg_val &= ~(SDP_VF_R_OUT_CTL_ES_D); + + /* INFO/DATA ptr swap is required */ + reg_val |= (SDP_VF_R_OUT_CTL_ES_P); + + otx2_write64(reg_val, otx_ep->hw_addr + SDP_VF_R_OUT_CONTROL(q_no)); +} + +static void +otx2_vf_setup_global_input_regs(struct otx_ep_device *otx_ep) +{ + uint64_t q_no = 0ull; + + for (q_no = 0; q_no < (otx_ep->sriov_info.rings_per_vf); q_no++) + otx2_vf_setup_global_iq_reg(otx_ep, q_no); +} + +static void +otx2_vf_setup_global_output_regs(struct otx_ep_device *otx_ep) +{ + uint32_t q_no; + + for (q_no = 0; q_no < (otx_ep->sriov_info.rings_per_vf); q_no++) + otx2_vf_setup_global_oq_reg(otx_ep, q_no); +} + +static int +otx2_vf_setup_device_regs(struct otx_ep_device *otx_ep) +{ + otx2_vf_setup_global_input_regs(otx_ep); + otx2_vf_setup_global_output_regs(otx_ep); + + return 0; +} + +static const struct otx_ep_config default_otx2_ep_conf = { + /* IQ attributes */ + .iq = { + .max_iqs = OTX_EP_CFG_IO_QUEUES, + .instr_type = OTX_EP_64BYTE_INSTR, + .pending_list_size = (OTX_EP_MAX_IQ_DESCRIPTORS * + OTX_EP_CFG_IO_QUEUES), + }, + + /* OQ attributes */ + .oq = { + .max_oqs = OTX_EP_CFG_IO_QUEUES, + .info_ptr = OTX_EP_OQ_INFOPTR_MODE, + .refill_threshold = OTX_EP_OQ_REFIL_THRESHOLD, + }, + + .num_iqdef_descs = OTX_EP_MAX_IQ_DESCRIPTORS, + .num_oqdef_descs = OTX_EP_MAX_OQ_DESCRIPTORS, + .oqdef_buf_size = OTX_EP_OQ_BUF_SIZE, +}; + +static const struct otx_ep_config* +otx2_ep_get_defconf(struct otx_ep_device *otx_ep_dev __rte_unused) +{ + const struct otx_ep_config *default_conf = NULL; + + default_conf = &default_otx2_ep_conf; + + return default_conf; +} + +int +otx2_ep_vf_setup_device(struct otx_ep_device *otx_ep) +{ + uint64_t reg_val = 0ull; + + /* If application doesn't provide its conf, use driver default conf */ + if (otx_ep->conf == NULL) { + otx_ep->conf = otx2_ep_get_defconf(otx_ep); + if (otx_ep->conf == NULL) { + otx2_err("SDP VF default config not found"); + return -ENOMEM; + } + otx2_info("Default config is used"); + } + + /* Get IOQs (RPVF] count */ + reg_val = otx2_read64(otx_ep->hw_addr + SDP_VF_R_IN_CONTROL(0)); + + otx_ep->sriov_info.rings_per_vf = ((reg_val >> SDP_VF_R_IN_CTL_RPVF_POS) + & SDP_VF_R_IN_CTL_RPVF_MASK); + + otx2_info("SDP RPVF: %d", otx_ep->sriov_info.rings_per_vf); + + otx_ep->fn_list.setup_device_regs = otx2_vf_setup_device_regs; + + return 0; +} diff --git a/drivers/net/octeontx_ep/otx2_ep_vf.h b/drivers/net/octeontx_ep/otx2_ep_vf.h new file mode 100644 index 0000000000..52d6487548 --- /dev/null +++ b/drivers/net/octeontx_ep/otx2_ep_vf.h @@ -0,0 +1,11 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ +#ifndef _OTX2_EP_VF_H_ +#define _OTX2_EP_VF_H_ + +int +otx2_ep_vf_setup_device(struct otx_ep_device *sdpvf); + +#endif /*_OTX2_EP_VF_H_ */ + diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h index fca0c79a43..f096bec1c0 100644 --- a/drivers/net/octeontx_ep/otx_ep_common.h +++ b/drivers/net/octeontx_ep/otx_ep_common.h @@ -4,6 +4,15 @@ #ifndef _OTX_EP_COMMON_H_ #define _OTX_EP_COMMON_H_ +#define OTX_EP_MAX_RINGS_PER_VF (8) +#define OTX_EP_CFG_IO_QUEUES OTX_EP_MAX_RINGS_PER_VF +#define OTX_EP_64BYTE_INSTR (64) +#define OTX_EP_MAX_IQ_DESCRIPTORS (8192) +#define OTX_EP_MAX_OQ_DESCRIPTORS (8192) +#define OTX_EP_OQ_BUF_SIZE (2048) + +#define OTX_EP_OQ_INFOPTR_MODE (0) +#define OTX_EP_OQ_REFIL_THRESHOLD (16) #define otx_ep_printf(level, fmt, args...) \ rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \ fmt, ##args) @@ -17,6 +26,76 @@ #define otx_ep_dbg(fmt, args...) \ otx_ep_printf(DEBUG, fmt, ##args) +#define otx_ep_write64(value, base_addr, reg_off) \ + {\ + typeof(value) val = (value); \ + typeof(reg_off) off = (reg_off); \ + otx_ep_dbg("octeon_write_csr64: reg: 0x%08lx val: 0x%016llx\n", \ + (unsigned long)off, (unsigned long long)val); \ + rte_write64(val, ((base_addr) + off)); \ + } + +struct otx_ep_device; + +/* Structure to define the configuration attributes for each Input queue. */ +struct otx_ep_iq_config { + /* Max number of IQs available */ + uint16_t max_iqs; + + /* Command size - 32 or 64 bytes */ + uint16_t instr_type; + + /* Pending list size, usually set to the sum of the size of all IQs */ + uint32_t pending_list_size; +}; + +/* Structure to define the configuration attributes for each Output queue. */ +struct otx_ep_oq_config { + /* Max number of OQs available */ + uint16_t max_oqs; + + /* If set, the Output queue uses info-pointer mode. (Default: 1 ) */ + uint16_t info_ptr; + + /** The number of buffers that were consumed during packet processing by + * the driver on this Output queue before the driver attempts to + * replenish the descriptor ring with new buffers. + */ + uint32_t refill_threshold; +}; + +/* Structure to define the configuration. */ +struct otx_ep_config { + /* Input Queue attributes. */ + struct otx_ep_iq_config iq; + + /* Output Queue attributes. 
*/ + struct otx_ep_oq_config oq; + + /* Num of desc for IQ rings */ + uint32_t num_iqdef_descs; + + /* Num of desc for OQ rings */ + uint32_t num_oqdef_descs; + + /* OQ buffer size */ + uint32_t oqdef_buf_size; +}; + +/* SRIOV information */ +struct otx_ep_sriov_info { + /* Number of rings assigned to VF */ + uint32_t rings_per_vf; + + /* Number of VF devices enabled */ + uint32_t num_vfs; +}; + +/* Required functions for each VF device */ +struct otx_ep_fn_list { + int (*setup_device_regs)(struct otx_ep_device *otx_ep); +}; + /* OTX_EP EP VF device data structure */ struct otx_ep_device { /* PCI device pointer */ @@ -31,6 +110,14 @@ struct otx_ep_device { /* Memory mapped h/w address */ uint8_t *hw_addr; + struct otx_ep_fn_list fn_list; + + /* SR-IOV info */ + struct otx_ep_sriov_info sriov_info; + + /* Device configuration */ + const struct otx_ep_config *conf; + int port_configured; }; #endif /* _OTX_EP_COMMON_H_ */ diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c index 6012c3fe9d..7ae9618e72 100644 --- a/drivers/net/octeontx_ep/otx_ep_ethdev.c +++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c @@ -9,6 +9,7 @@ #include "otx2_common.h" #include "otx_ep_common.h" #include "otx_ep_vf.h" +#include "otx2_ep_vf.h" #define OTX_EP_DEV(_eth_dev) ((_eth_dev)->data->dev_private) static int @@ -21,10 +22,12 @@ otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf) switch (dev_id) { case PCI_DEVID_OCTEONTX_EP_VF: otx_epvf->chip_id = PCI_DEVID_OCTEONTX_EP_VF; + ret = otx_ep_vf_setup_device(otx_epvf); break; case PCI_DEVID_OCTEONTX2_EP_NET_VF: case PCI_DEVID_98XX_EP_NET_VF: otx_epvf->chip_id = dev_id; + ret = otx2_ep_vf_setup_device(otx_epvf); break; default: otx_ep_err("Unsupported device\n"); @@ -46,6 +49,13 @@ otx_epdev_init(struct otx_ep_device *otx_epvf) goto setup_fail; } + if (otx_epvf->fn_list.setup_device_regs(otx_epvf)) { + otx_ep_err("Failed to configure device registers\n"); + goto setup_fail; + } + + otx_ep_info("OTX_EP Device is Ready\n"); + return 0; setup_fail: diff --git a/drivers/net/octeontx_ep/otx_ep_vf.c b/drivers/net/octeontx_ep/otx_ep_vf.c new file mode 100644 index 0000000000..9d9be66258 --- /dev/null +++ b/drivers/net/octeontx_ep/otx_ep_vf.c @@ -0,0 +1,154 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include +#include +#include +#include +#include + +#include "otx_ep_common.h" +#include "otx_ep_vf.h" + + +static void +otx_ep_setup_global_iq_reg(struct otx_ep_device *otx_ep, int q_no) +{ + volatile uint64_t reg_val = 0ull; + + /* Select ES, RO, NS, RDSIZE,DPTR Fomat#0 for IQs + * IS_64B is by default enabled. 
+ */ + reg_val = rte_read64(otx_ep->hw_addr + OTX_EP_R_IN_CONTROL(q_no)); + + reg_val |= OTX_EP_R_IN_CTL_RDSIZE; + reg_val |= OTX_EP_R_IN_CTL_IS_64B; + reg_val |= OTX_EP_R_IN_CTL_ESR; + + otx_ep_write64(reg_val, otx_ep->hw_addr, OTX_EP_R_IN_CONTROL(q_no)); + reg_val = rte_read64(otx_ep->hw_addr + OTX_EP_R_IN_CONTROL(q_no)); + + if (!(reg_val & OTX_EP_R_IN_CTL_IDLE)) { + do { + reg_val = rte_read64(otx_ep->hw_addr + + OTX_EP_R_IN_CONTROL(q_no)); + } while (!(reg_val & OTX_EP_R_IN_CTL_IDLE)); + } +} + +static void +otx_ep_setup_global_oq_reg(struct otx_ep_device *otx_ep, int q_no) +{ + volatile uint64_t reg_val = 0ull; + + reg_val = rte_read64(otx_ep->hw_addr + OTX_EP_R_OUT_CONTROL(q_no)); + +#if defined(BUFPTR_ONLY_MODE) + reg_val &= ~(OTX_EP_R_OUT_CTL_IMODE); +#else + reg_val |= (OTX_EP_R_OUT_CTL_IMODE); +#endif + reg_val &= ~(OTX_EP_R_OUT_CTL_ROR_P); + reg_val &= ~(OTX_EP_R_OUT_CTL_NSR_P); + reg_val &= ~(OTX_EP_R_OUT_CTL_ROR_I); + reg_val &= ~(OTX_EP_R_OUT_CTL_NSR_I); + reg_val &= ~(OTX_EP_R_OUT_CTL_ES_I); + reg_val &= ~(OTX_EP_R_OUT_CTL_ROR_D); + reg_val &= ~(OTX_EP_R_OUT_CTL_NSR_D); + reg_val &= ~(OTX_EP_R_OUT_CTL_ES_D); + + /* INFO/DATA ptr swap is required */ + reg_val |= (OTX_EP_R_OUT_CTL_ES_P); + + otx_ep_write64(reg_val, otx_ep->hw_addr, OTX_EP_R_OUT_CONTROL(q_no)); +} + +static void +otx_ep_setup_global_input_regs(struct otx_ep_device *otx_ep) +{ + uint64_t q_no = 0ull; + + for (q_no = 0; q_no < (otx_ep->sriov_info.rings_per_vf); q_no++) + otx_ep_setup_global_iq_reg(otx_ep, q_no); +} + +static void +otx_ep_setup_global_output_regs(struct otx_ep_device *otx_ep) +{ + uint32_t q_no; + + for (q_no = 0; q_no < (otx_ep->sriov_info.rings_per_vf); q_no++) + otx_ep_setup_global_oq_reg(otx_ep, q_no); +} + +static int +otx_ep_setup_device_regs(struct otx_ep_device *otx_ep) +{ + otx_ep_setup_global_input_regs(otx_ep); + otx_ep_setup_global_output_regs(otx_ep); + + return 0; +} + +/* OTX_EP default configuration */ +static const struct otx_ep_config default_otx_ep_conf = { + /* IQ attributes */ + .iq = { + .max_iqs = OTX_EP_CFG_IO_QUEUES, + .instr_type = OTX_EP_64BYTE_INSTR, + .pending_list_size = (OTX_EP_MAX_IQ_DESCRIPTORS * + OTX_EP_CFG_IO_QUEUES), + }, + + /* OQ attributes */ + .oq = { + .max_oqs = OTX_EP_CFG_IO_QUEUES, + .info_ptr = OTX_EP_OQ_INFOPTR_MODE, + .refill_threshold = OTX_EP_OQ_REFIL_THRESHOLD, + }, + + .num_iqdef_descs = OTX_EP_MAX_IQ_DESCRIPTORS, + .num_oqdef_descs = OTX_EP_MAX_OQ_DESCRIPTORS, + .oqdef_buf_size = OTX_EP_OQ_BUF_SIZE, + +}; + + +static const struct otx_ep_config* +otx_ep_get_defconf(struct otx_ep_device *otx_ep_dev __rte_unused) +{ + const struct otx_ep_config *default_conf = NULL; + + default_conf = &default_otx_ep_conf; + + return default_conf; +} + +int +otx_ep_vf_setup_device(struct otx_ep_device *otx_ep) +{ + uint64_t reg_val = 0ull; + + /* If application doesn't provide its conf, use driver default conf */ + if (otx_ep->conf == NULL) { + otx_ep->conf = otx_ep_get_defconf(otx_ep); + if (otx_ep->conf == NULL) { + otx_ep_err("OTX_EP VF default config not found\n"); + return -ENOMEM; + } + otx_ep_info("Default config is used\n"); + } + + /* Get IOQs (RPVF] count */ + reg_val = rte_read64(otx_ep->hw_addr + OTX_EP_R_IN_CONTROL(0)); + + otx_ep->sriov_info.rings_per_vf = ((reg_val >> OTX_EP_R_IN_CTL_RPVF_POS) + & OTX_EP_R_IN_CTL_RPVF_MASK); + + otx_ep_info("OTX_EP RPVF: %d\n", otx_ep->sriov_info.rings_per_vf); + + otx_ep->fn_list.setup_device_regs = otx_ep_setup_device_regs; + + return 0; +} diff --git a/drivers/net/octeontx_ep/otx_ep_vf.h 
b/drivers/net/octeontx_ep/otx_ep_vf.h index bee8a26727..ff248c37c6 100644 --- a/drivers/net/octeontx_ep/otx_ep_vf.h +++ b/drivers/net/octeontx_ep/otx_ep_vf.h @@ -4,6 +4,39 @@ #ifndef _OTX_EP_VF_H_ #define _OTX_EP_VF_H_ +#define OTX_EP_RING_OFFSET (0x1ull << 17) + +/* OTX_EP VF IQ Registers */ +#define OTX_EP_R_IN_CONTROL_START (0x10000) +#define OTX_EP_R_IN_CONTROL(ring) \ + (OTX_EP_R_IN_CONTROL_START + ((ring) * OTX_EP_RING_OFFSET)) + +/* OTX_EP VF IQ Masks */ +#define OTX_EP_R_IN_CTL_RPVF_MASK (0xF) +#define OTX_EP_R_IN_CTL_RPVF_POS (48) + +#define OTX_EP_R_IN_CTL_IDLE (0x1ull << 28) +#define OTX_EP_R_IN_CTL_RDSIZE (0x3ull << 25) /* Setting to max(4) */ +#define OTX_EP_R_IN_CTL_IS_64B (0x1ull << 24) +#define OTX_EP_R_IN_CTL_ESR (0x1ull << 1) +/* OTX_EP VF OQ Registers */ +#define OTX_EP_R_OUT_CONTROL_START (0x10150) +#define OTX_EP_R_OUT_CONTROL(ring) \ + (OTX_EP_R_OUT_CONTROL_START + ((ring) * OTX_EP_RING_OFFSET)) +/* OTX_EP VF OQ Masks */ +#define OTX_EP_R_OUT_CTL_ES_I (1ull << 34) +#define OTX_EP_R_OUT_CTL_NSR_I (1ull << 33) +#define OTX_EP_R_OUT_CTL_ROR_I (1ull << 32) +#define OTX_EP_R_OUT_CTL_ES_D (1ull << 30) +#define OTX_EP_R_OUT_CTL_NSR_D (1ull << 29) +#define OTX_EP_R_OUT_CTL_ROR_D (1ull << 28) +#define OTX_EP_R_OUT_CTL_ES_P (1ull << 26) +#define OTX_EP_R_OUT_CTL_NSR_P (1ull << 25) +#define OTX_EP_R_OUT_CTL_ROR_P (1ull << 24) +#define OTX_EP_R_OUT_CTL_IMODE (1ull << 23) + #define PCI_DEVID_OCTEONTX_EP_VF 0xa303 +int +otx_ep_vf_setup_device(struct otx_ep_device *otx_ep); #endif /*_OTX_EP_VF_H_ */ From patchwork Thu Dec 31 07:22:37 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pradeep Nalla X-Patchwork-Id: 85916 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (xvm-189-124.dc0.ghst.net [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3853EA0A00; Thu, 31 Dec 2020 08:23:26 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 34022140CF9; Thu, 31 Dec 2020 08:23:03 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 7B01E140CE4 for ; Thu, 31 Dec 2020 08:22:58 +0100 (CET) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 0BV7G5RC022206 for ; Wed, 30 Dec 2020 23:22:57 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=TZcqjor1jPSuJZ5fyRz4kyV1x/WOcBHyBvln5BnrHH8=; b=RepPiZpFQrBcCn3j3n5NxSv+0VIzD7hAs0Nyf7usARJPXyP41Hj/XO8dhBBbNCYpXGOC 3z0+uss2ghoi/7dFzV/Qb5ss3EXYOfBfjENnD9l/QEY4ykzdkBIQJjSROpBuXp88UJgi SVtiqx5ZsG5XcYRZfrbyGox4Ail4aixAlIqi0PYLhpENwRW6E710oW3yS2YfzFfj36wQ QnD6ET1qLLqhQ3CqHoqJXXoP8SpqFeuAoa7nhd70thWX7pzQqqKuf2qUQ6E2cy0Xn/1n mJipI+cfPvD4VUaRqC4DOlFC/0KnjMjp8qS2ue9wFxqaoSEPC98gjY5KOAShrvus45gw 4A== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0a-0016f401.pphosted.com with ESMTP id 35rqgehx54-3 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Wed, 30 Dec 2020 23:22:57 -0800 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 30 Dec 2020 23:22:56 -0800 Received: from maili.marvell.com 
(10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 30 Dec 2020 23:22:56 -0800 Received: from localhost.localdomain (unknown [10.111.145.157]) by maili.marvell.com (Postfix) with ESMTP id 227C13F7041; Wed, 30 Dec 2020 23:22:56 -0800 (PST) From: "Nalla, Pradeep" To: "Nalla, Pradeep" , Radha Mohan Chintakuntla , Veerasenareddy Burru CC: , , Date: Thu, 31 Dec 2020 07:22:37 +0000 Message-ID: <20201231072247.5719-6-pnalla@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201231072247.5719-1-pnalla@marvell.com> References: <20201231072247.5719-1-pnalla@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.343, 18.0.737 definitions=2020-12-31_02:2020-12-30, 2020-12-31 signatures=0 Subject: [dpdk-dev] [PATCH 05/15] net/octeontx_ep: Add dev info get and configure X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: "Nalla Pradeep" Add device information get and device configure operations. Signed-off-by: Nalla Pradeep --- drivers/net/octeontx_ep/otx_ep_common.h | 15 +++++ drivers/net/octeontx_ep/otx_ep_ethdev.c | 80 +++++++++++++++++++++++++ drivers/net/octeontx_ep/otx_ep_rxtx.h | 10 ++++ 3 files changed, 105 insertions(+) create mode 100644 drivers/net/octeontx_ep/otx_ep_rxtx.h diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h index f096bec1c0..a56a68bbec 100644 --- a/drivers/net/octeontx_ep/otx_ep_common.h +++ b/drivers/net/octeontx_ep/otx_ep_common.h @@ -7,9 +7,12 @@ #define OTX_EP_MAX_RINGS_PER_VF (8) #define OTX_EP_CFG_IO_QUEUES OTX_EP_MAX_RINGS_PER_VF #define OTX_EP_64BYTE_INSTR (64) +#define OTX_EP_MIN_IQ_DESCRIPTORS (128) +#define OTX_EP_MIN_OQ_DESCRIPTORS (128) #define OTX_EP_MAX_IQ_DESCRIPTORS (8192) #define OTX_EP_MAX_OQ_DESCRIPTORS (8192) #define OTX_EP_OQ_BUF_SIZE (2048) +#define OTX_EP_MIN_RX_BUF_SIZE (64) #define OTX_EP_OQ_INFOPTR_MODE (0) #define OTX_EP_OQ_REFIL_THRESHOLD (16) @@ -112,6 +115,10 @@ struct otx_ep_device { struct otx_ep_fn_list fn_list; + uint32_t max_tx_queues; + + uint32_t max_rx_queues; + /* SR-IOV info */ struct otx_ep_sriov_info sriov_info; @@ -119,5 +126,13 @@ struct otx_ep_device { const struct otx_ep_config *conf; int port_configured; + + uint64_t rx_offloads; + uint64_t tx_offloads; }; + +#define OTX_EP_MAX_PKT_SZ 64000U + +#define OTX_EP_MAX_MAC_ADDRS 1 + #endif /* _OTX_EP_COMMON_H_ */ diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c index 7ae9618e72..908bed1f60 100644 --- a/drivers/net/octeontx_ep/otx_ep_ethdev.c +++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c @@ -10,8 +10,56 @@ #include "otx_ep_common.h" #include "otx_ep_vf.h" #include "otx2_ep_vf.h" +#include "otx_ep_rxtx.h" #define OTX_EP_DEV(_eth_dev) ((_eth_dev)->data->dev_private) + +static const struct rte_eth_desc_lim otx_ep_rx_desc_lim = { + .nb_max = OTX_EP_MAX_OQ_DESCRIPTORS, + .nb_min = OTX_EP_MIN_OQ_DESCRIPTORS, + .nb_align = OTX_EP_RXD_ALIGN, +}; + +static const struct rte_eth_desc_lim otx_ep_tx_desc_lim = { + .nb_max = OTX_EP_MAX_IQ_DESCRIPTORS, + .nb_min = OTX_EP_MIN_IQ_DESCRIPTORS, + .nb_align = OTX_EP_TXD_ALIGN, +}; + +static int +otx_ep_dev_info_get(struct rte_eth_dev *eth_dev, + struct rte_eth_dev_info *devinfo) +{ + struct otx_ep_device *otx_epvf; + struct rte_pci_device *pdev; + uint32_t dev_id; + 
+ otx_epvf = (struct otx_ep_device *)OTX_EP_DEV(eth_dev); + pdev = otx_epvf->pdev; + dev_id = pdev->id.device_id; + + devinfo->speed_capa = ETH_LINK_SPEED_10G; + devinfo->max_rx_queues = otx_epvf->max_rx_queues; + devinfo->max_tx_queues = otx_epvf->max_tx_queues; + + devinfo->min_rx_bufsize = OTX_EP_MIN_RX_BUF_SIZE; + if (dev_id == PCI_DEVID_OCTEONTX_EP_VF || + dev_id == PCI_DEVID_OCTEONTX2_EP_NET_VF || + dev_id == PCI_DEVID_98XX_EP_NET_VF) { + devinfo->max_rx_pktlen = OTX_EP_MAX_PKT_SZ; + devinfo->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME; + devinfo->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER; + devinfo->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS; + } + + devinfo->max_mac_addrs = OTX_EP_MAX_MAC_ADDRS; + + devinfo->rx_desc_lim = otx_ep_rx_desc_lim; + devinfo->tx_desc_lim = otx_ep_tx_desc_lim; + + return 0; +} + static int otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf) { @@ -62,6 +110,37 @@ otx_epdev_init(struct otx_ep_device *otx_epvf) return -ENOMEM; } +static int +otx_ep_dev_configure(struct rte_eth_dev *eth_dev) +{ + struct otx_ep_device *otx_epvf = OTX_EP_DEV(eth_dev); + struct rte_eth_dev_data *data = eth_dev->data; + struct rte_eth_conf *conf = &data->dev_conf; + struct rte_eth_rxmode *rxmode = &conf->rxmode; + struct rte_eth_txmode *txmode = &conf->txmode; + uint32_t ethdev_queues; + + ethdev_queues = (uint32_t)(otx_epvf->sriov_info.rings_per_vf); + if (eth_dev->data->nb_rx_queues > ethdev_queues || + eth_dev->data->nb_tx_queues > ethdev_queues) { + otx_ep_err("invalid num queues\n"); + return -ENOMEM; + } + otx_ep_info("OTX_EP Device is configured with num_txq %d num_rxq %d\n", + eth_dev->data->nb_rx_queues, eth_dev->data->nb_tx_queues); + + otx_epvf->port_configured = 1; + otx_epvf->rx_offloads = rxmode->offloads; + otx_epvf->tx_offloads = txmode->offloads; + + return 0; +} + +/* Define our ethernet definitions */ +static const struct eth_dev_ops otx_ep_eth_dev_ops = { + .dev_configure = otx_ep_dev_configure, + .dev_infos_get = otx_ep_dev_info_get, +}; static int otx_ep_eth_dev_uninit(struct rte_eth_dev *eth_dev) @@ -105,6 +184,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev) } otx_epvf->eth_dev = eth_dev; otx_epvf->port_id = eth_dev->data->port_id; + eth_dev->dev_ops = &otx_ep_eth_dev_ops; eth_dev->data->mac_addrs = rte_zmalloc("otx_ep", RTE_ETHER_ADDR_LEN, 0); if (eth_dev->data->mac_addrs == NULL) { otx_ep_err("MAC addresses memory allocation failed\n"); diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.h b/drivers/net/octeontx_ep/otx_ep_rxtx.h new file mode 100644 index 0000000000..819204a763 --- /dev/null +++ b/drivers/net/octeontx_ep/otx_ep_rxtx.h @@ -0,0 +1,10 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#ifndef _OTX_EP_RXTX_H_ +#define _OTX_EP_RXTX_H_ + +#define OTX_EP_RXD_ALIGN 1 +#define OTX_EP_TXD_ALIGN 1 +#endif
From patchwork Thu Dec 31 07:22:38 2020
X-Patchwork-Submitter: Pradeep Nalla X-Patchwork-Id: 85919 X-Patchwork-Delegate: ferruh.yigit@amd.com
From: "Nalla, Pradeep" To: "Nalla, Pradeep", Radha Mohan Chintakuntla, Veerasenareddy Burru Date: Thu, 31 Dec 2020 07:22:38 +0000 Message-ID: <20201231072247.5719-7-pnalla@marvell.com> In-Reply-To: <20201231072247.5719-1-pnalla@marvell.com> References: <20201231072247.5719-1-pnalla@marvell.com>
Subject: [dpdk-dev] [PATCH 06/15] net/octeontx_ep: Added rxq setup and release
From: "Nalla Pradeep"
Receive queue setup involves allocating memory for the queue, initializing the data structure
representing the queue and filling queue with receive buffers of rx descriptor count. Receive queues are referred as droq. Hardware fills the receive buffers in queue with the packet. It can use same receive buffer (BUFPTR_ONLY_MODE) or separate buffer (INFOPTR_MODE) to fill with packet metadata. BUFPTR_ONLY_MODE is supported now. In receive queue release, receive buffers are freed along with the receive queue. Signed-off-by: Nalla Pradeep --- drivers/net/octeontx_ep/meson.build | 1 + drivers/net/octeontx_ep/otx_ep_common.h | 160 ++++++++++++++++- drivers/net/octeontx_ep/otx_ep_ethdev.c | 133 ++++++++++++++ drivers/net/octeontx_ep/otx_ep_rxtx.c | 222 ++++++++++++++++++++++++ drivers/net/octeontx_ep/otx_ep_vf.h | 6 + 5 files changed, 517 insertions(+), 5 deletions(-) create mode 100644 drivers/net/octeontx_ep/otx_ep_rxtx.c diff --git a/drivers/net/octeontx_ep/meson.build b/drivers/net/octeontx_ep/meson.build index c7a7aa84bb..8d804a0398 100644 --- a/drivers/net/octeontx_ep/meson.build +++ b/drivers/net/octeontx_ep/meson.build @@ -6,6 +6,7 @@ sources = files( 'otx_ep_ethdev.c', 'otx_ep_vf.c', 'otx2_ep_vf.c', + 'otx_ep_rxtx.c', ) extra_flags = [] diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h index a56a68bbec..5d2f62f45a 100644 --- a/drivers/net/octeontx_ep/otx_ep_common.h +++ b/drivers/net/octeontx_ep/otx_ep_common.h @@ -16,6 +16,10 @@ #define OTX_EP_OQ_INFOPTR_MODE (0) #define OTX_EP_OQ_REFIL_THRESHOLD (16) +#define OTX_EP_PCI_RING_ALIGN 65536 +#define SDP_PKIND 40 +#define SDP_OTX2_PKIND 57 +#define OTX_EP_MAX_IOQS_PER_VF 8 #define otx_ep_printf(level, fmt, args...) \ rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \ fmt, ##args) @@ -52,6 +56,65 @@ struct otx_ep_iq_config { uint32_t pending_list_size; }; +/** Descriptor format. + * The descriptor ring is made of descriptors which have 2 64-bit values: + * -# Physical (bus) address of the data buffer. + * -# Physical (bus) address of a otx_ep_droq_info structure. + * The device DMA's incoming packets and its information at the address + * given by these descriptor fields. + */ +struct otx_ep_droq_desc { + /* The buffer pointer */ + uint64_t buffer_ptr; + + /* The Info pointer */ + uint64_t info_ptr; +}; +#define OTX_EP_DROQ_DESC_SIZE (sizeof(struct otx_ep_droq_desc)) + +/* Receive Header */ +union otx_ep_rh { + uint64_t rh64; +}; +#define OTX_EP_RH_SIZE (sizeof(union otx_ep_rh)) + +/** Information about packet DMA'ed by OCTEON TX2. + * The format of the information available at Info Pointer after OCTEON TX2 + * has posted a packet. Not all descriptors have valid information. Only + * the Info field of the first descriptor for a packet has information + * about the packet. + */ +struct otx_ep_droq_info { + /* The Length of the packet. */ + uint64_t length; + + /* The Output Receive Header. */ + union otx_ep_rh rh; +}; +#define OTX_EP_DROQ_INFO_SIZE (sizeof(struct otx_ep_droq_info)) + + +/* DROQ statistics. Each output queue has four stats fields. */ +struct otx_ep_droq_stats { + /* Number of packets received in this queue. */ + uint64_t pkts_received; + + /* Bytes received by this queue. */ + uint64_t bytes_received; + + /* Num of failures of rte_pktmbuf_alloc() */ + uint64_t rx_alloc_failure; + + /* Rx error */ + uint64_t rx_err; + + /* packets with data got ready after interrupt arrived */ + uint64_t pkts_delayed_data; + + /* packets dropped due to zero length */ + uint64_t dropped_zlp; +}; + /* Structure to define the configuration attributes for each Output queue. 
*/ struct otx_ep_oq_config { /* Max number of OQs available */ @@ -67,6 +130,74 @@ struct otx_ep_oq_config { uint32_t refill_threshold; }; +/* The Descriptor Ring Output Queue(DROQ) structure. */ +struct otx_ep_droq { + struct otx_ep_device *otx_ep_dev; + /* The 8B aligned descriptor ring starts at this address. */ + struct otx_ep_droq_desc *desc_ring; + + uint32_t q_no; + uint64_t last_pkt_count; + + struct rte_mempool *mpool; + + /* Driver should read the next packet at this index */ + uint32_t read_idx; + + /* OCTEON TX2 will write the next packet at this index */ + uint32_t write_idx; + + /* At this index, the driver will refill the descriptor's buffer */ + uint32_t refill_idx; + + /* Packets pending to be processed */ + uint64_t pkts_pending; + + /* Number of descriptors in this ring. */ + uint32_t nb_desc; + + /* The number of descriptors pending to refill. */ + uint32_t refill_count; + + uint32_t refill_threshold; + + /* The 8B aligned info ptrs begin from this address. */ + struct otx_ep_droq_info *info_list; + + /* receive buffer list contains mbuf ptr list */ + struct rte_mbuf **recv_buf_list; + + /* The size of each buffer pointed by the buffer pointer. */ + uint32_t buffer_size; + + /* Statistics for this DROQ. */ + struct otx_ep_droq_stats stats; + + /* DMA mapped address of the DROQ descriptor ring. */ + size_t desc_ring_dma; + + /* Info_ptr list is allocated at this virtual address. */ + size_t info_base_addr; + + /* DMA mapped address of the info list */ + size_t info_list_dma; + + /* Allocated size of info list. */ + uint32_t info_alloc_size; + + /* Memory zone **/ + const struct rte_memzone *desc_ring_mz; + const struct rte_memzone *info_mz; +}; +#define OTX_EP_DROQ_SIZE (sizeof(struct otx_ep_droq)) + +/* IQ/OQ mask */ +struct otx_ep_io_enable { + uint64_t iq; + uint64_t oq; + uint64_t iq64B; +}; + /* Structure to define the configuration. */ struct otx_ep_config { /* Input Queue attributes. 
*/ @@ -85,6 +216,15 @@ struct otx_ep_config { uint32_t oqdef_buf_size; }; +/* Required functions for each VF device */ +struct otx_ep_fn_list { + void (*setup_oq_regs)(struct otx_ep_device *otx_ep, uint32_t q_no); + + int (*setup_device_regs)(struct otx_ep_device *otx_ep); + + void (*disable_io_queues)(struct otx_ep_device *otx_ep); +}; + /* SRIOV information */ struct otx_ep_sriov_info { /* Number of rings assigned to VF */ @@ -94,11 +234,6 @@ struct otx_ep_sriov_info { uint32_t num_vfs; }; -/* Required functions for each VF device */ -struct otx_ep_fn_list { - int (*setup_device_regs)(struct otx_ep_device *otx_ep); -}; - /* OTX_EP EP VF device data structure */ struct otx_ep_device { /* PCI device pointer */ @@ -106,6 +241,8 @@ struct otx_ep_device { uint16_t chip_id; uint16_t vf_num; + uint32_t pkind; + struct rte_eth_dev *eth_dev; int port_id; @@ -119,6 +256,15 @@ struct otx_ep_device { uint32_t max_rx_queues; + /* Num OQs */ + uint32_t nb_rx_queues; + + /* The DROQ output queues */ + struct otx_ep_droq *droq[OTX_EP_MAX_IOQS_PER_VF]; + + /* IOQ mask */ + struct otx_ep_io_enable io_qmask; + /* SR-IOV info */ struct otx_ep_sriov_info sriov_info; @@ -131,6 +277,10 @@ struct otx_ep_device { uint64_t tx_offloads; }; +int otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs, + int desc_size, struct rte_mempool *mpool, + unsigned int socket_id); +int otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no); #define OTX_EP_MAX_PKT_SZ 64000U #define OTX_EP_MAX_MAC_ADDRS 1 diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c index 908bed1f60..72178c20f0 100644 --- a/drivers/net/octeontx_ep/otx_ep_ethdev.c +++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c @@ -71,11 +71,13 @@ otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf) case PCI_DEVID_OCTEONTX_EP_VF: otx_epvf->chip_id = PCI_DEVID_OCTEONTX_EP_VF; ret = otx_ep_vf_setup_device(otx_epvf); + otx_epvf->fn_list.disable_io_queues(otx_epvf); break; case PCI_DEVID_OCTEONTX2_EP_NET_VF: case PCI_DEVID_98XX_EP_NET_VF: otx_epvf->chip_id = dev_id; ret = otx2_ep_vf_setup_device(otx_epvf); + otx_epvf->fn_list.disable_io_queues(otx_epvf); break; default: otx_ep_err("Unsupported device\n"); @@ -92,6 +94,8 @@ otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf) static int otx_epdev_init(struct otx_ep_device *otx_epvf) { + uint32_t ethdev_queues; + if (otx_ep_chip_specific_setup(otx_epvf)) { otx_ep_err("Chip specific setup failed\n"); goto setup_fail; @@ -102,6 +106,10 @@ otx_epdev_init(struct otx_ep_device *otx_epvf) goto setup_fail; } + ethdev_queues = (uint32_t)(otx_epvf->sriov_info.rings_per_vf); + otx_epvf->max_rx_queues = ethdev_queues; + otx_epvf->max_tx_queues = ethdev_queues; + otx_ep_info("OTX_EP Device is Ready\n"); return 0; @@ -136,12 +144,125 @@ otx_ep_dev_configure(struct rte_eth_dev *eth_dev) return 0; } +/** + * Setup our receive queue/ringbuffer. This is the + * queue the Octeon uses to send us packets and + * responses. We are given a memory pool for our + * packet buffers that are used to populate the receive + * queue. 
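For this PMD the descriptor count passed in must be a power of two and at least 8 * SDP_GBL_WMARK (2048 with the watermark value defined in this patch), which the checks further below enforce. A minimal application-side sketch that satisfies those constraints, assuming a configured port port_id with <rte_ethdev.h> and <rte_mbuf.h> included (pool name and sizes are illustrative):

    struct rte_mempool *mb_pool;

    mb_pool = rte_pktmbuf_pool_create("rx_pool", 8192, 256, 0,
                                      RTE_MBUF_DEFAULT_BUF_SIZE,
                                      rte_socket_id());
    if (mb_pool == NULL)
        rte_exit(EXIT_FAILURE, "mbuf pool creation failed\n");

    /* 2048 descriptors: power of two and >= 8 * SDP_GBL_WMARK. */
    if (rte_eth_rx_queue_setup(port_id, 0, 2048,
                               rte_eth_dev_socket_id(port_id),
                               NULL, mb_pool) != 0)
        rte_exit(EXIT_FAILURE, "rx queue setup failed\n");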
+ * + * @param eth_dev + * Pointer to the structure rte_eth_dev + * @param q_no + * Queue number + * @param num_rx_descs + * Number of entries in the queue + * @param socket_id + * Where to allocate memory + * @param rx_conf + * Pointer to the struction rte_eth_rxconf + * @param mp + * Pointer to the packet pool + * + * @return + * - On success, return 0 + * - On failure, return -1 + */ +static int +otx_ep_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no, + uint16_t num_rx_descs, unsigned int socket_id, + const struct rte_eth_rxconf *rx_conf __rte_unused, + struct rte_mempool *mp) +{ + struct otx_ep_device *otx_epvf = OTX_EP_DEV(eth_dev); + struct rte_pktmbuf_pool_private *mbp_priv; + uint16_t buf_size; + + if (q_no >= otx_epvf->max_rx_queues) { + otx_ep_err("Invalid rx queue number %u\n", q_no); + return -EINVAL; + } + + if (num_rx_descs & (num_rx_descs - 1)) { + otx_ep_err("Invalid rx desc number should be pow 2 %u\n", + num_rx_descs); + return -EINVAL; + } + if (num_rx_descs < (SDP_GBL_WMARK * 8)) { + otx_ep_err("Invalid rx desc number should at least be greater than 8xwmark %u\n", + num_rx_descs); + return -EINVAL; + } + + otx_ep_dbg("setting up rx queue %u\n", q_no); + + mbp_priv = rte_mempool_get_priv(mp); + buf_size = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM; + + if (otx_ep_setup_oqs(otx_epvf, q_no, num_rx_descs, buf_size, mp, + socket_id)) { + otx_ep_err("droq allocation failed\n"); + return -1; + } + + eth_dev->data->rx_queues[q_no] = otx_epvf->droq[q_no]; + + return 0; +} + +/** + * Release the receive queue/ringbuffer. Called by + * the upper layers. + * + * @param rxq + * Opaque pointer to the receive queue to release + * + * @return + * - nothing + */ +static void +otx_ep_rx_queue_release(void *rxq) +{ + struct otx_ep_droq *rq = (struct otx_ep_droq *)rxq; + int q_id = rq->q_no; + struct otx_ep_device *otx_epvf = rq->otx_ep_dev; + + if (otx_ep_delete_oqs(otx_epvf, q_id)) + otx_ep_err("Failed to delete OQ:%d\n", q_id); +} + /* Define our ethernet definitions */ static const struct eth_dev_ops otx_ep_eth_dev_ops = { .dev_configure = otx_ep_dev_configure, + .rx_queue_setup = otx_ep_rx_queue_setup, + .rx_queue_release = otx_ep_rx_queue_release, .dev_infos_get = otx_ep_dev_info_get, }; + + +static int +otx_epdev_exit(struct rte_eth_dev *eth_dev) +{ + struct otx_ep_device *otx_epvf; + uint32_t num_queues, q; + + otx_ep_info("%s:\n", __func__); + + otx_epvf = OTX_EP_DEV(eth_dev); + + num_queues = otx_epvf->nb_rx_queues; + for (q = 0; q < num_queues; q++) { + if (otx_ep_delete_oqs(otx_epvf, q)) { + otx_ep_err("Failed to delete OQ:%d\n", q); + return -ENOMEM; + } + } + otx_ep_info("Num OQs:%d freed\n", otx_epvf->nb_rx_queues); + + return 0; +} + static int otx_ep_eth_dev_uninit(struct rte_eth_dev *eth_dev) { @@ -149,11 +270,15 @@ otx_ep_eth_dev_uninit(struct rte_eth_dev *eth_dev) if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; + otx_epdev_exit(eth_dev); + otx_epvf->port_configured = 0; if (eth_dev->data->mac_addrs != NULL) rte_free(eth_dev->data->mac_addrs); + eth_dev->dev_ops = NULL; + return 0; } @@ -188,6 +313,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev) eth_dev->data->mac_addrs = rte_zmalloc("otx_ep", RTE_ETHER_ADDR_LEN, 0); if (eth_dev->data->mac_addrs == NULL) { otx_ep_err("MAC addresses memory allocation failed\n"); + eth_dev->dev_ops = NULL; return -ENOMEM; } rte_eth_random_addr(vf_mac_addr); @@ -202,6 +328,13 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev) vf_id -= 1; otx_epvf->vf_num = vf_id; otx_epdev_init(otx_epvf); + if 
(pdev->id.device_id == PCI_DEVID_OCTEONTX2_EP_NET_VF) + otx_epvf->pkind = SDP_OTX2_PKIND; + else + otx_epvf->pkind = SDP_PKIND + + (vf_id * otx_epvf->sriov_info.rings_per_vf); + otx_ep_info("vfid %d using pkind %d\n", vf_id, otx_epvf->pkind); + otx_epvf->port_configured = 0; return 0; diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c new file mode 100644 index 0000000000..5100424d7c --- /dev/null +++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c @@ -0,0 +1,222 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include + +#include +#include +#include +#include + +#include "otx_ep_common.h" +#include "otx_ep_vf.h" +#include "otx2_ep_vf.h" +#include "otx_ep_rxtx.h" + +static void +otx_ep_dmazone_free(const struct rte_memzone *mz) +{ + const struct rte_memzone *mz_tmp; + int ret = 0; + + if (mz == NULL) { + otx_ep_err("Memzone %s : NULL\n", mz->name); + return; + } + + mz_tmp = rte_memzone_lookup(mz->name); + if (mz_tmp == NULL) { + otx_ep_err("Memzone %s Not Found\n", mz->name); + return; + } + + ret = rte_memzone_free(mz); + if (ret) + otx_ep_err("Memzone free failed : ret = %d\n", ret); +} + +static void +otx_ep_droq_reset_indices(struct otx_ep_droq *droq) +{ + droq->read_idx = 0; + droq->write_idx = 0; + droq->refill_idx = 0; + droq->refill_count = 0; + droq->last_pkt_count = 0; + droq->pkts_pending = 0; +} + +static void +otx_ep_droq_destroy_ring_buffers(struct otx_ep_droq *droq) +{ + uint32_t idx; + + for (idx = 0; idx < droq->nb_desc; idx++) { + if (droq->recv_buf_list[idx]) { + rte_pktmbuf_free(droq->recv_buf_list[idx]); + droq->recv_buf_list[idx] = NULL; + } + } + + otx_ep_droq_reset_indices(droq); +} + +/* Free OQs resources */ +int +otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no) +{ + struct otx_ep_droq *droq; + + droq = otx_ep->droq[oq_no]; + if (droq == NULL) { + otx_ep_err("Invalid droq[%d]\n", oq_no); + return -ENOMEM; + } + + otx_ep_droq_destroy_ring_buffers(droq); + rte_free(droq->recv_buf_list); + droq->recv_buf_list = NULL; + + if (droq->desc_ring_mz) { + otx_ep_dmazone_free(droq->desc_ring_mz); + droq->desc_ring_mz = NULL; + } + + memset(droq, 0, OTX_EP_DROQ_SIZE); + + rte_free(otx_ep->droq[oq_no]); + otx_ep->droq[oq_no] = NULL; + + otx_ep->nb_rx_queues--; + + otx_ep_info("OQ[%d] is deleted\n", oq_no); + return 0; +} + +static int +otx_ep_droq_setup_ring_buffers(struct otx_ep_droq *droq) +{ + struct otx_ep_droq_desc *desc_ring = droq->desc_ring; + uint32_t idx; + struct rte_mbuf *buf; + struct otx_ep_droq_info *info; + + for (idx = 0; idx < droq->nb_desc; idx++) { + buf = rte_pktmbuf_alloc(droq->mpool); + if (buf == NULL) { + otx_ep_err("OQ buffer alloc failed\n"); + droq->stats.rx_alloc_failure++; + /* otx_ep_droq_destroy_ring_buffers(droq);*/ + return -ENOMEM; + } + + droq->recv_buf_list[idx] = buf; + info = rte_pktmbuf_mtod(buf, struct otx_ep_droq_info *); + memset(info, 0, sizeof(*info)); + desc_ring[idx].buffer_ptr = rte_mbuf_data_iova_default(buf); + } + + otx_ep_droq_reset_indices(droq); + + return 0; +} + +/* OQ initialization */ +static int +otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no, + uint32_t num_descs, uint32_t desc_size, + struct rte_mempool *mpool, unsigned int socket_id) +{ + const struct otx_ep_config *conf = otx_ep->conf; + uint32_t c_refill_threshold; + uint32_t desc_ring_size; + struct otx_ep_droq *droq; + + otx_ep_info("OQ[%d] Init start\n", q_no); + + droq = otx_ep->droq[q_no]; + droq->otx_ep_dev = otx_ep; + droq->q_no = q_no; + 
droq->mpool = mpool; + + droq->nb_desc = num_descs; + droq->buffer_size = desc_size; + c_refill_threshold = RTE_MAX(conf->oq.refill_threshold, + droq->nb_desc / 2); + + /* OQ desc_ring set up */ + desc_ring_size = droq->nb_desc * OTX_EP_DROQ_DESC_SIZE; + droq->desc_ring_mz = rte_eth_dma_zone_reserve(otx_ep->eth_dev, "droq", + q_no, desc_ring_size, + OTX_EP_PCI_RING_ALIGN, + socket_id); + + if (droq->desc_ring_mz == NULL) { + otx_ep_err("OQ:%d desc_ring allocation failed\n", q_no); + goto init_droq_fail; + } + + droq->desc_ring_dma = droq->desc_ring_mz->iova; + droq->desc_ring = (struct otx_ep_droq_desc *)droq->desc_ring_mz->addr; + + otx_ep_dbg("OQ[%d]: desc_ring: virt: 0x%p, dma: %lx\n", + q_no, droq->desc_ring, (unsigned long)droq->desc_ring_dma); + otx_ep_dbg("OQ[%d]: num_desc: %d\n", q_no, droq->nb_desc); + + /* OQ buf_list set up */ + droq->recv_buf_list = rte_zmalloc_socket("recv_buf_list", + (droq->nb_desc * sizeof(struct rte_mbuf *)), + RTE_CACHE_LINE_SIZE, socket_id); + if (droq->recv_buf_list == NULL) { + otx_ep_err("OQ recv_buf_list alloc failed\n"); + goto init_droq_fail; + } + + if (otx_ep_droq_setup_ring_buffers(droq)) + goto init_droq_fail; + + droq->refill_threshold = c_refill_threshold; + + /* Set up OQ registers */ + otx_ep->fn_list.setup_oq_regs(otx_ep, q_no); + + otx_ep->io_qmask.oq |= (1ull << q_no); + + return 0; + +init_droq_fail: + return -ENOMEM; +} + +/* OQ configuration and setup */ +int +otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs, + int desc_size, struct rte_mempool *mpool, unsigned int socket_id) +{ + struct otx_ep_droq *droq; + + /* Allocate new droq. */ + droq = (struct otx_ep_droq *)rte_zmalloc("otx_ep_OQ", + sizeof(*droq), RTE_CACHE_LINE_SIZE); + if (droq == NULL) { + otx_ep_err("Droq[%d] Creation Failed\n", oq_no); + return -ENOMEM; + } + otx_ep->droq[oq_no] = droq; + + if (otx_ep_init_droq(otx_ep, oq_no, num_descs, desc_size, mpool, + socket_id)) { + otx_ep_err("Droq[%d] Initialization failed\n", oq_no); + goto delete_OQ; + } + otx_ep_info("OQ[%d] is created.\n", oq_no); + + otx_ep->nb_rx_queues++; + + return 0; + +delete_OQ: + otx_ep_delete_oqs(otx_ep, oq_no); + return -ENOMEM; +} diff --git a/drivers/net/octeontx_ep/otx_ep_vf.h b/drivers/net/octeontx_ep/otx_ep_vf.h index ff248c37c6..fa224b4de1 100644 --- a/drivers/net/octeontx_ep/otx_ep_vf.h +++ b/drivers/net/octeontx_ep/otx_ep_vf.h @@ -37,6 +37,12 @@ #define PCI_DEVID_OCTEONTX_EP_VF 0xa303 +/* this is a static value set by SLI PF driver in octeon + * No handshake is available + * Change this if changing the value in SLI PF driver + */ +#define SDP_GBL_WMARK 0x100 + int otx_ep_vf_setup_device(struct otx_ep_device *otx_ep); #endif /*_OTX_EP_VF_H_ */ From patchwork Thu Dec 31 07:22:39 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pradeep Nalla X-Patchwork-Id: 85918 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (xvm-189-124.dc0.ghst.net [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 20069A0A00; Thu, 31 Dec 2020 08:23:47 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id BBFBD140D0B; Thu, 31 Dec 2020 08:23:05 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 49E04140CDE for ; Thu, 31 Dec 2020 08:22:59 +0100 (CET) Received: from 
pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 0BV7Fo0L022166 for ; Wed, 30 Dec 2020 23:22:58 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=NNs1/ctwnBID6oboJfqIgfXTHs9jQdIy6KU7M9d5qJw=; b=OLHYut+flBsPqAOMnj5ZH3sjQBkFGBKt2fL7ABQoQA3kwPSlJ3VAaV/PMvRx8fy34ZS7 qtGwWskVX0kBlg7jqmYKPRDXXS09aJjAbLj8bQGONMFX/ukrgzY6ZO7Zk1igCsn3/78m /JrArAOm4IFnn7OW1K+A8oL4IEwIW1LdXaxvD5800Wqi28sen/fcAyDrFBMRcROYHzoU l/jWUEfMhT6emUwZ6QXpPJ2X5AtGyAUGksHIjMzbTYDfoETsNn0KuKpajdCT+UkWmW6m rGr6YkFGfJ1F2Lroco0/4VNvua07BpdN0VupxGDFn4tlC5s6hLMEP6CadWcmKfNjk1TZ JQ== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com with ESMTP id 35rqgehx55-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Wed, 30 Dec 2020 23:22:58 -0800 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 30 Dec 2020 23:22:57 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 30 Dec 2020 23:22:57 -0800 Received: from localhost.localdomain (unknown [10.111.145.157]) by maili.marvell.com (Postfix) with ESMTP id B51993F703F; Wed, 30 Dec 2020 23:22:56 -0800 (PST) From: "Nalla, Pradeep" To: "Nalla, Pradeep" , Radha Mohan Chintakuntla , Veerasenareddy Burru CC: , , Date: Thu, 31 Dec 2020 07:22:39 +0000 Message-ID: <20201231072247.5719-8-pnalla@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201231072247.5719-1-pnalla@marvell.com> References: <20201231072247.5719-1-pnalla@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.343, 18.0.737 definitions=2020-12-31_02:2020-12-30, 2020-12-31 signatures=0 Subject: [dpdk-dev] [PATCH 07/15] net/octeontx_ep: Added tx queue setup and release X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: "Nalla Pradeep" Transmit queue setup involves allocating memory for the command queue considering tx descriptor count and initializing data structure representing the queue. Transmit queue release function frees the command queue. Signed-off-by: Nalla Pradeep --- drivers/net/octeontx_ep/otx_ep_common.h | 89 +++++++++++++++- drivers/net/octeontx_ep/otx_ep_ethdev.c | 81 ++++++++++++++ drivers/net/octeontx_ep/otx_ep_rxtx.c | 135 ++++++++++++++++++++++++ 3 files changed, 303 insertions(+), 2 deletions(-) diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h index 5d2f62f45a..2f0296d847 100644 --- a/drivers/net/octeontx_ep/otx_ep_common.h +++ b/drivers/net/octeontx_ep/otx_ep_common.h @@ -42,7 +42,21 @@ rte_write64(val, ((base_addr) + off)); \ } -struct otx_ep_device; +/* OTX_EP IQ request list */ +struct otx_ep_instr_list { + void *buf; + uint32_t reqtype; +}; +#define OTX_EP_IQREQ_LIST_SIZE (sizeof(struct otx_ep_instr_list)) + +/* Input Queue statistics. Each input queue has four stats fields. */ +struct otx_ep_iq_stats { + uint64_t instr_posted; /* Instructions posted to this queue. */ + uint64_t instr_processed; /* Instructions processed in this queue. 
*/ + uint64_t instr_dropped; /* Instructions that could not be processed */ + uint64_t tx_pkts; + uint64_t tx_bytes; +}; /* Structure to define the configuration attributes for each Input queue. */ struct otx_ep_iq_config { @@ -56,6 +70,66 @@ struct otx_ep_iq_config { uint32_t pending_list_size; }; +/** The instruction (input) queue. + * The input queue is used to post raw (instruction) mode data or packet data + * to OCTEON TX2 device from the host. Each IQ of a OTX_EP EP VF device has one + * such structure to represent it. + */ +struct otx_ep_instr_queue { + struct otx_ep_device *otx_ep_dev; + + uint32_t q_no; + uint32_t pkt_in_done; + + /* Flag for 64 byte commands. */ + uint32_t iqcmd_64B:1; + uint32_t rsvd:17; + uint32_t status:8; + + /* Number of descriptors in this ring. */ + uint32_t nb_desc; + + /* Input ring index, where the driver should write the next packet */ + uint32_t host_write_index; + + /* Input ring index, where the OCTEON TX2 should read the next packet */ + uint32_t otx_read_index; + + uint32_t reset_instr_cnt; + + /** This index aids in finding the window in the queue where OCTEON TX2 + * has read the commands. + */ + uint32_t flush_index; + + /* This keeps track of the instructions pending in this queue. */ + uint64_t instr_pending; + + /* Pointer to the Virtual Base addr of the input ring. */ + uint8_t *base_addr; + + /* This IQ request list */ + struct otx_ep_instr_list *req_list; + + /* OTX_EP doorbell register for the ring. */ + void *doorbell_reg; + + /* OTX_EP instruction count register for this ring. */ + void *inst_cnt_reg; + + /* Number of instructions pending to be posted to OCTEON TX2. */ + uint32_t fill_cnt; + + /* Statistics for this input queue. */ + struct otx_ep_iq_stats stats; + + /* DMA mapped base address of the input descriptor ring. */ + uint64_t base_addr_dma; + + /* Memory zone */ + const struct rte_memzone *iq_mz; +}; + /** Descriptor format. * The descriptor ring is made of descriptors which have 2 64-bit values: * -# Physical (bus) address of the data buffer. 
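The req_list added above gives every command slot a place to remember the buffer that backed it, so the buffer can be freed once the device has fetched the instruction. A rough sketch of the bookkeeping these fields make possible (illustrative only, not taken from the driver):

    /* Record a posted command; nb_desc is a power of two, so the write
     * index can wrap with a simple mask.
     */
    static inline void
    iq_post_record(struct otx_ep_instr_queue *iq, void *buf, uint32_t reqtype)
    {
        iq->req_list[iq->host_write_index].buf = buf;
        iq->req_list[iq->host_write_index].reqtype = reqtype;
        iq->host_write_index = (iq->host_write_index + 1) & (iq->nb_desc - 1);
        iq->instr_pending++;
    }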
@@ -218,6 +292,7 @@ struct otx_ep_config { /* Required functions for each VF device */ struct otx_ep_fn_list { + void (*setup_iq_regs)(struct otx_ep_device *otx_ep, uint32_t q_no); void (*setup_oq_regs)(struct otx_ep_device *otx_ep, uint32_t q_no); int (*setup_device_regs)(struct otx_ep_device *otx_ep); @@ -256,6 +331,12 @@ struct otx_ep_device { uint32_t max_rx_queues; + /* Num IQs */ + uint32_t nb_tx_queues; + + /* The input instruction queues */ + struct otx_ep_instr_queue *instr_queue[OTX_EP_MAX_IOQS_PER_VF]; + /* Num OQs */ uint32_t nb_rx_queues; @@ -277,12 +358,16 @@ struct otx_ep_device { uint64_t tx_offloads; }; +int otx_ep_setup_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no, + int num_descs, unsigned int socket_id); +int otx_ep_delete_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no); + int otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs, int desc_size, struct rte_mempool *mpool, unsigned int socket_id); int otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no); -#define OTX_EP_MAX_PKT_SZ 64000U +#define OTX_EP_MAX_PKT_SZ 64000U #define OTX_EP_MAX_MAC_ADDRS 1 #endif /* _OTX_EP_COMMON_H_ */ diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c index 72178c20f0..dc419d3447 100644 --- a/drivers/net/octeontx_ep/otx_ep_ethdev.c +++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c @@ -231,11 +231,83 @@ otx_ep_rx_queue_release(void *rxq) otx_ep_err("Failed to delete OQ:%d\n", q_id); } +/** + * Allocate and initialize SW ring. Initialize associated HW registers. + * + * @param eth_dev + * Pointer to structure rte_eth_dev + * + * @param q_no + * Queue number + * + * @param num_tx_descs + * Number of ringbuffer descriptors + * + * @param socket_id + * NUMA socket id, used for memory allocations + * + * @param tx_conf + * Pointer to the structure rte_eth_txconf + * + * @return + * - On success, return 0 + * - On failure, return -errno value + */ +static int +otx_ep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no, + uint16_t num_tx_descs, unsigned int socket_id, + const struct rte_eth_txconf *tx_conf __rte_unused) +{ + struct otx_ep_device *otx_epvf = OTX_EP_DEV(eth_dev); + int retval; + + if (q_no >= otx_epvf->max_tx_queues) { + otx_ep_err("Invalid tx queue number %u\n", q_no); + return -EINVAL; + } + if (num_tx_descs & (num_tx_descs - 1)) { + otx_ep_err("Invalid tx desc number should be pow 2 %u\n", + num_tx_descs); + return -EINVAL; + } + + retval = otx_ep_setup_iqs(otx_epvf, q_no, num_tx_descs, socket_id); + + if (retval) { + otx_ep_err("IQ(TxQ) creation failed.\n"); + return retval; + } + + eth_dev->data->tx_queues[q_no] = otx_epvf->instr_queue[q_no]; + otx_ep_dbg("tx queue[%d] setup\n", q_no); + return 0; +} + +/** + * Release the transmit queue/ringbuffer. Called by + * the upper layers. 
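As with the receive side, applications reach the transmit setup path above through the generic ethdev API, and the descriptor count must again be a power of two. A minimal sketch, assuming a configured port port_id (the value 2048 is illustrative):

    if (rte_eth_tx_queue_setup(port_id, 0, 2048,
                               rte_eth_dev_socket_id(port_id), NULL) != 0)
        rte_exit(EXIT_FAILURE, "tx queue setup failed\n");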
+ * + * @param txq + * Opaque pointer to the transmit queue to release + * + * @return + * - nothing + */ +static void +otx_ep_tx_queue_release(void *txq) +{ + struct otx_ep_instr_queue *tq = (struct otx_ep_instr_queue *)txq; + + otx_ep_delete_iqs(tq->otx_ep_dev, tq->q_no); +} + /* Define our ethernet definitions */ static const struct eth_dev_ops otx_ep_eth_dev_ops = { .dev_configure = otx_ep_dev_configure, .rx_queue_setup = otx_ep_rx_queue_setup, .rx_queue_release = otx_ep_rx_queue_release, + .tx_queue_setup = otx_ep_tx_queue_setup, + .tx_queue_release = otx_ep_tx_queue_release, .dev_infos_get = otx_ep_dev_info_get, }; @@ -260,6 +332,15 @@ otx_epdev_exit(struct rte_eth_dev *eth_dev) } otx_ep_info("Num OQs:%d freed\n", otx_epvf->nb_rx_queues); + num_queues = otx_epvf->nb_tx_queues; + for (q = 0; q < num_queues; q++) { + if (otx_ep_delete_iqs(otx_epvf, q)) { + otx_ep_err("Failed to delete IQ:%d\n", q); + return -ENOMEM; + } + } + otx_ep_dbg("Num IQs:%d freed\n", otx_epvf->nb_tx_queues); + return 0; } diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c index 5100424d7c..a13c180c99 100644 --- a/drivers/net/octeontx_ep/otx_ep_rxtx.c +++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c @@ -36,6 +36,141 @@ otx_ep_dmazone_free(const struct rte_memzone *mz) otx_ep_err("Memzone free failed : ret = %d\n", ret); } +/* Free IQ resources */ +int +otx_ep_delete_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no) +{ + struct otx_ep_instr_queue *iq; + + iq = otx_ep->instr_queue[iq_no]; + if (iq == NULL) { + otx_ep_err("Invalid IQ[%d]\n", iq_no); + return -ENOMEM; + } + + rte_free(iq->req_list); + iq->req_list = NULL; + + if (iq->iq_mz) { + otx_ep_dmazone_free(iq->iq_mz); + iq->iq_mz = NULL; + } + + rte_free(otx_ep->instr_queue[iq_no]); + otx_ep->instr_queue[iq_no] = NULL; + + otx_ep->nb_tx_queues--; + + otx_ep_info("IQ[%d] is deleted\n", iq_no); + + return 0; +} + +/* IQ initialization */ +static int +otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs, + unsigned int socket_id) +{ + const struct otx_ep_config *conf; + struct otx_ep_instr_queue *iq; + uint32_t q_size; + + conf = otx_ep->conf; + iq = otx_ep->instr_queue[iq_no]; + q_size = conf->iq.instr_type * num_descs; + + /* IQ memory creation for Instruction submission to OCTEON TX2 */ + iq->iq_mz = rte_eth_dma_zone_reserve(otx_ep->eth_dev, + "instr_queue", iq_no, q_size, + OTX_EP_PCI_RING_ALIGN, + socket_id); + if (iq->iq_mz == NULL) { + otx_ep_err("IQ[%d] memzone alloc failed\n", iq_no); + goto iq_init_fail; + } + + iq->base_addr_dma = iq->iq_mz->iova; + iq->base_addr = (uint8_t *)iq->iq_mz->addr; + + if (num_descs & (num_descs - 1)) { + otx_ep_err("IQ[%d] descs not in power of 2\n", iq_no); + goto iq_init_fail; + } + + iq->nb_desc = num_descs; + + /* Create a IQ request list to hold requests that have been + * posted to OCTEON TX2. This list will be used for freeing the IQ + * data buffer(s) later once the OCTEON TX2 fetched the requests. 
+ */ + iq->req_list = rte_zmalloc_socket("request_list", + (iq->nb_desc * OTX_EP_IQREQ_LIST_SIZE), + RTE_CACHE_LINE_SIZE, + rte_socket_id()); + if (iq->req_list == NULL) { + otx_ep_err("IQ[%d] req_list alloc failed\n", iq_no); + goto iq_init_fail; + } + + otx_ep_info("IQ[%d]: base: %p basedma: %lx count: %d\n", + iq_no, iq->base_addr, (unsigned long)iq->base_addr_dma, + iq->nb_desc); + + iq->otx_ep_dev = otx_ep; + iq->q_no = iq_no; + iq->fill_cnt = 0; + iq->host_write_index = 0; + iq->otx_read_index = 0; + iq->flush_index = 0; + iq->instr_pending = 0; + + + + otx_ep->io_qmask.iq |= (1ull << iq_no); + + /* Set 32B/64B mode for each input queue */ + if (conf->iq.instr_type == 64) + otx_ep->io_qmask.iq64B |= (1ull << iq_no); + + iq->iqcmd_64B = (conf->iq.instr_type == 64); + + /* Set up IQ registers */ + otx_ep->fn_list.setup_iq_regs(otx_ep, iq_no); + + return 0; + +iq_init_fail: + return -ENOMEM; +} + +int +otx_ep_setup_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no, int num_descs, + unsigned int socket_id) +{ + struct otx_ep_instr_queue *iq; + + iq = (struct otx_ep_instr_queue *)rte_zmalloc("otx_ep_IQ", sizeof(*iq), + RTE_CACHE_LINE_SIZE); + if (iq == NULL) + return -ENOMEM; + + otx_ep->instr_queue[iq_no] = iq; + + if (otx_ep_init_instr_queue(otx_ep, iq_no, num_descs, socket_id)) { + otx_ep_err("IQ init is failed\n"); + goto delete_IQ; + } + otx_ep->nb_tx_queues++; + + otx_ep_info("IQ[%d] is created.\n", iq_no); + + return 0; + +delete_IQ: + otx_ep_delete_iqs(otx_ep, iq_no); + return -ENOMEM; +} + static void otx_ep_droq_reset_indices(struct otx_ep_droq *droq) { From patchwork Thu Dec 31 07:22:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pradeep Nalla X-Patchwork-Id: 85920 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (xvm-189-124.dc0.ghst.net [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id B38A4A0A00; Thu, 31 Dec 2020 08:24:08 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 363D8140D18; Thu, 31 Dec 2020 08:23:08 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id F0474140CE8 for ; Thu, 31 Dec 2020 08:22:59 +0100 (CET) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 0BV7HB59023824 for ; Wed, 30 Dec 2020 23:22:59 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=EYXXa4jRmJ5iZifKjvECdlRQ/jAJDCpiF4p23w6+GWM=; b=aLO+rY42T1jYW7do1TD3hPxyzSy1Cz1M3C09586sY1U0LULhfwLfZE9CToZ0b7vyfPwi x5ROMDsHhZS1LHHIQ1SaszPsvRNhOnMd9fy2qv1diLG8CCz6OMV0AGyzlfrqESZsRWI2 n2qkLS1LnoIr9L1wVmQUDz89shem1vATHXrbRpwHUFfyVkceh4cfw7nc3oqUaqgjOUoN +FLpGjpYOmS7XLoLYxIast7ts46yk51p9balvoVV35p44/BIa+Kp3L4RpW9vmM6gRyUQ ONUiTpy71GwOSwqHBwPDW+TrkICiRrF5cpZDs0h9marRzToYy5+gtu46vrRtnE0YZ7Cc nA== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com with ESMTP id 35rqgehx57-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Wed, 30 Dec 2020 23:22:59 -0800 Received: from SC-EXCH04.marvell.com (10.93.176.84) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 30 
Dec 2020 23:22:58 -0800 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 30 Dec 2020 23:22:57 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 30 Dec 2020 23:22:57 -0800 Received: from localhost.localdomain (unknown [10.111.145.157]) by maili.marvell.com (Postfix) with ESMTP id 09E853F7040; Wed, 30 Dec 2020 23:22:57 -0800 (PST) From: "Nalla, Pradeep" To: "Nalla, Pradeep" , Radha Mohan Chintakuntla , Veerasenareddy Burru CC: , , Date: Thu, 31 Dec 2020 07:22:40 +0000 Message-ID: <20201231072247.5719-9-pnalla@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201231072247.5719-1-pnalla@marvell.com> References: <20201231072247.5719-1-pnalla@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.343, 18.0.737 definitions=2020-12-31_02:2020-12-30, 2020-12-31 signatures=0 Subject: [dpdk-dev] [PATCH 08/15] net/octeontx_ep: Setting up iq and oq registers X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: "Nalla Pradeep" Configuring hardware registers with command queue(iq) and droq(oq) parameters. Signed-off-by: Nalla Pradeep --- drivers/net/octeontx_ep/otx2_ep_vf.c | 124 ++++++++++++++++++++++ drivers/net/octeontx_ep/otx_ep_common.h | 65 ++++++++++++ drivers/net/octeontx_ep/otx_ep_vf.c | 132 ++++++++++++++++++++++++ drivers/net/octeontx_ep/otx_ep_vf.h | 53 ++++++++++ 4 files changed, 374 insertions(+) diff --git a/drivers/net/octeontx_ep/otx2_ep_vf.c b/drivers/net/octeontx_ep/otx2_ep_vf.c index f8be2f4864..f2cd442e97 100644 --- a/drivers/net/octeontx_ep/otx2_ep_vf.c +++ b/drivers/net/octeontx_ep/otx2_ep_vf.c @@ -78,6 +78,127 @@ otx2_vf_setup_device_regs(struct otx_ep_device *otx_ep) return 0; } +static void +otx2_vf_setup_iq_regs(struct otx_ep_device *otx_ep, uint32_t iq_no) +{ + struct otx_ep_instr_queue *iq = otx_ep->instr_queue[iq_no]; + volatile uint64_t reg_val = 0ull; + + reg_val = otx2_read64(otx_ep->hw_addr + SDP_VF_R_IN_CONTROL(iq_no)); + + /* Wait till IDLE to set to 1, not supposed to configure BADDR + * as long as IDLE is 0 + */ + if (!(reg_val & SDP_VF_R_IN_CTL_IDLE)) { + do { + reg_val = otx2_read64(otx_ep->hw_addr + + SDP_VF_R_IN_CONTROL(iq_no)); + } while (!(reg_val & SDP_VF_R_IN_CTL_IDLE)); + } + + /* Write the start of the input queue's ring and its size */ + otx2_write64(iq->base_addr_dma, otx_ep->hw_addr + + SDP_VF_R_IN_INSTR_BADDR(iq_no)); + otx2_write64(iq->nb_desc, otx_ep->hw_addr + + SDP_VF_R_IN_INSTR_RSIZE(iq_no)); + + /* Remember the doorbell & instruction count register addr + * for this queue + */ + iq->doorbell_reg = (uint8_t *)otx_ep->hw_addr + + SDP_VF_R_IN_INSTR_DBELL(iq_no); + iq->inst_cnt_reg = (uint8_t *)otx_ep->hw_addr + + SDP_VF_R_IN_CNTS(iq_no); + + otx_ep_dbg("InstQ[%d]:dbell reg @ 0x%p instcnt_reg @ 0x%p", + iq_no, iq->doorbell_reg, iq->inst_cnt_reg); + + do { + reg_val = rte_read32(iq->inst_cnt_reg); + rte_write32(reg_val, iq->inst_cnt_reg); + } while (reg_val != 0); + + /* IN INTR_THRESHOLD is set to max(FFFFFFFF) which disable the IN INTR + * to raise + */ + otx2_write64(0x3FFFFFFFFFFFFFUL, + otx_ep->hw_addr + SDP_VF_R_IN_INT_LEVELS(iq_no)); +} + +static void +otx2_vf_setup_oq_regs(struct otx_ep_device *otx_ep, 
uint32_t oq_no) +{ + volatile uint64_t reg_val = 0ull; + uint64_t oq_ctl = 0ull; + struct otx_ep_droq *droq = otx_ep->droq[oq_no]; + + /* Wait on IDLE to set to 1, supposed to configure BADDR + * as log as IDLE is 0 + */ + reg_val = otx2_read64(otx_ep->hw_addr + SDP_VF_R_OUT_CONTROL(oq_no)); + + while (!(reg_val & SDP_VF_R_OUT_CTL_IDLE)) { + reg_val = otx2_read64(otx_ep->hw_addr + + SDP_VF_R_OUT_CONTROL(oq_no)); + } + + otx2_write64(droq->desc_ring_dma, otx_ep->hw_addr + + SDP_VF_R_OUT_SLIST_BADDR(oq_no)); + otx2_write64(droq->nb_desc, otx_ep->hw_addr + + SDP_VF_R_OUT_SLIST_RSIZE(oq_no)); + + oq_ctl = otx2_read64(otx_ep->hw_addr + SDP_VF_R_OUT_CONTROL(oq_no)); + + /* Clear the ISIZE and BSIZE (22-0) */ + oq_ctl &= ~(0x7fffffull); + + /* Populate the BSIZE (15-0) */ + oq_ctl |= (droq->buffer_size & 0xffff); + +#ifndef BUFPTR_ONLY_MODE + /* Populate ISIZE(22-16) */ + oq_ctl |= ((OTX_EP_RH_SIZE << 16) & 0x7fffff); +#endif + otx2_write64(oq_ctl, otx_ep->hw_addr + SDP_VF_R_OUT_CONTROL(oq_no)); + + /* Mapped address of the pkt_sent and pkts_credit regs */ + droq->pkts_sent_reg = (uint8_t *)otx_ep->hw_addr + + SDP_VF_R_OUT_CNTS(oq_no); + droq->pkts_credit_reg = (uint8_t *)otx_ep->hw_addr + + SDP_VF_R_OUT_SLIST_DBELL(oq_no); + + rte_write64(0x3FFFFFFFFFFFFFUL, + otx_ep->hw_addr + SDP_VF_R_OUT_INT_LEVELS(oq_no)); + + /* Clear PKT_CNT register */ + rte_write64(0xFFFFFFFFF, (uint8_t *)otx_ep->hw_addr + + SDP_VF_R_OUT_PKT_CNT(oq_no)); + + /* Clear the OQ doorbell */ + rte_write32(0xFFFFFFFF, droq->pkts_credit_reg); + while ((rte_read32(droq->pkts_credit_reg) != 0ull)) { + rte_write32(0xFFFFFFFF, droq->pkts_credit_reg); + rte_delay_ms(1); + } + otx_ep_dbg("SDP_R[%d]_credit:%x", oq_no, + rte_read32(droq->pkts_credit_reg)); + + /* Clear the OQ_OUT_CNTS doorbell */ + reg_val = rte_read32(droq->pkts_sent_reg); + rte_write32((uint32_t)reg_val, droq->pkts_sent_reg); + + otx_ep_dbg("SDP_R[%d]_sent: %x", oq_no, + rte_read32(droq->pkts_sent_reg)); + + while (((rte_read32(droq->pkts_sent_reg)) != 0ull)) { + reg_val = rte_read32(droq->pkts_sent_reg); + rte_write32((uint32_t)reg_val, droq->pkts_sent_reg); + rte_delay_ms(1); + } + otx_ep_dbg("SDP_R[%d]_sent: %x", oq_no, + rte_read32(droq->pkts_sent_reg)); +} + static const struct otx_ep_config default_otx2_ep_conf = { /* IQ attributes */ .iq = { @@ -132,6 +253,9 @@ otx2_ep_vf_setup_device(struct otx_ep_device *otx_ep) otx2_info("SDP RPVF: %d", otx_ep->sriov_info.rings_per_vf); + otx_ep->fn_list.setup_iq_regs = otx2_vf_setup_iq_regs; + otx_ep->fn_list.setup_oq_regs = otx2_vf_setup_oq_regs; + otx_ep->fn_list.setup_device_regs = otx2_vf_setup_device_regs; return 0; diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h index 2f0296d847..e7e393ef00 100644 --- a/drivers/net/octeontx_ep/otx_ep_common.h +++ b/drivers/net/octeontx_ep/otx_ep_common.h @@ -33,6 +33,33 @@ #define otx_ep_dbg(fmt, args...) \ otx_ep_printf(DEBUG, fmt, ##args) +/* Input Request Header format */ +union otx_ep_instr_irh { + uint64_t u64; + struct { + /* Request ID */ + uint64_t rid:16; + + /* PCIe port to use for response */ + uint64_t pcie_port:3; + + /* Scatter indicator 1=scatter */ + uint64_t scatter:1; + + /* Size of Expected result OR no. 
of entries in scatter list */ + uint64_t rlenssz:14; + + /* Desired destination port for result */ + uint64_t dport:6; + + /* Opcode Specific parameters */ + uint64_t param:8; + + /* Opcode for the return packet */ + uint64_t opcode:16; + } s; +}; + #define otx_ep_write64(value, base_addr, reg_off) \ {\ typeof(value) val = (value); \ @@ -42,6 +69,33 @@ rte_write64(val, ((base_addr) + off)); \ } +/* Instruction Header - for OCTEON-TX models */ +typedef union otx_ep_instr_ih { + uint64_t u64; + struct { + /** Data Len */ + uint64_t tlen:16; + + /** Reserved */ + uint64_t rsvd:20; + + /** PKIND for OTX_EP */ + uint64_t pkind:6; + + /** Front Data size */ + uint64_t fsz:6; + + /** No. of entries in gather list */ + uint64_t gsz:14; + + /** Gather indicator 1=gather*/ + uint64_t gather:1; + + /** Reserved3 */ + uint64_t reserved3:1; + } s; +} otx_ep_instr_ih_t; + /* OTX_EP IQ request list */ struct otx_ep_instr_list { void *buf; @@ -244,6 +298,16 @@ struct otx_ep_droq { /* The size of each buffer pointed by the buffer pointer. */ uint32_t buffer_size; + /** Pointer to the mapped packet credit register. + * Host writes number of info/buffer ptrs available to this register + */ + void *pkts_credit_reg; + + /** Pointer to the mapped packet sent register. OCTEON TX2 writes the + * number of packets DMA'ed to host memory in this register. + */ + void *pkts_sent_reg; + /* Statistics for this DROQ. */ struct otx_ep_droq_stats stats; @@ -259,6 +323,7 @@ struct otx_ep_droq { /* Allocated size of info list. */ uint32_t info_alloc_size; + /* Memory zone **/ const struct rte_memzone *desc_ring_mz; const struct rte_memzone *info_mz; diff --git a/drivers/net/octeontx_ep/otx_ep_vf.c b/drivers/net/octeontx_ep/otx_ep_vf.c index 9d9be66258..3d990a488f 100644 --- a/drivers/net/octeontx_ep/otx_ep_vf.c +++ b/drivers/net/octeontx_ep/otx_ep_vf.c @@ -91,6 +91,135 @@ otx_ep_setup_device_regs(struct otx_ep_device *otx_ep) return 0; } +static void +otx_ep_setup_iq_regs(struct otx_ep_device *otx_ep, uint32_t iq_no) +{ + struct otx_ep_instr_queue *iq = otx_ep->instr_queue[iq_no]; + volatile uint64_t reg_val = 0ull; + + reg_val = rte_read64(otx_ep->hw_addr + OTX_EP_R_IN_CONTROL(iq_no)); + + /* Wait till IDLE to set to 1, not supposed to configure BADDR + * as long as IDLE is 0 + */ + if (!(reg_val & OTX_EP_R_IN_CTL_IDLE)) { + do { + reg_val = rte_read64(otx_ep->hw_addr + + OTX_EP_R_IN_CONTROL(iq_no)); + } while (!(reg_val & OTX_EP_R_IN_CTL_IDLE)); + } + + /* Write the start of the input queue's ring and its size */ + otx_ep_write64(iq->base_addr_dma, otx_ep->hw_addr, + OTX_EP_R_IN_INSTR_BADDR(iq_no)); + otx_ep_write64(iq->nb_desc, otx_ep->hw_addr, + OTX_EP_R_IN_INSTR_RSIZE(iq_no)); + + /* Remember the doorbell & instruction count register addr + * for this queue + */ + iq->doorbell_reg = (uint8_t *)otx_ep->hw_addr + + OTX_EP_R_IN_INSTR_DBELL(iq_no); + iq->inst_cnt_reg = (uint8_t *)otx_ep->hw_addr + + OTX_EP_R_IN_CNTS(iq_no); + + otx_ep_dbg("InstQ[%d]:dbell reg @ 0x%p instcnt_reg @ 0x%p\n", + iq_no, iq->doorbell_reg, iq->inst_cnt_reg); + + do { + reg_val = rte_read32(iq->inst_cnt_reg); + rte_write32(reg_val, iq->inst_cnt_reg); + } while (reg_val != 0); + + /* IN INTR_THRESHOLD is set to max(FFFFFFFF) which disable the IN INTR + * to raise + */ + /* reg_val = rte_read64(otx_ep->hw_addr + + * OTX_EP_R_IN_INT_LEVELS(iq_no)); + */ + reg_val = 0xffffffff; + + otx_ep_write64(reg_val, otx_ep->hw_addr, OTX_EP_R_IN_INT_LEVELS(iq_no)); +} + +static void +otx_ep_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no) +{ + volatile 
uint64_t reg_val = 0ull; + uint64_t oq_ctl = 0ull; + + struct otx_ep_droq *droq = otx_ep->droq[oq_no]; + + /* Wait on IDLE to set to 1, supposed to configure BADDR + * as log as IDLE is 0 + */ + otx_ep_write64(0ULL, otx_ep->hw_addr, OTX_EP_R_OUT_ENABLE(oq_no)); + + reg_val = rte_read64(otx_ep->hw_addr + OTX_EP_R_OUT_CONTROL(oq_no)); + + while (!(reg_val & OTX_EP_R_OUT_CTL_IDLE)) { + reg_val = rte_read64(otx_ep->hw_addr + + OTX_EP_R_OUT_CONTROL(oq_no)); + } + + otx_ep_write64(droq->desc_ring_dma, otx_ep->hw_addr, + OTX_EP_R_OUT_SLIST_BADDR(oq_no)); + otx_ep_write64(droq->nb_desc, otx_ep->hw_addr, + OTX_EP_R_OUT_SLIST_RSIZE(oq_no)); + + oq_ctl = rte_read64(otx_ep->hw_addr + OTX_EP_R_OUT_CONTROL(oq_no)); + + /* Clear the ISIZE and BSIZE (22-0) */ + oq_ctl &= ~(0x7fffffull); + + /* Populate the BSIZE (15-0) */ + oq_ctl |= (droq->buffer_size & 0xffff); + +#ifndef BUFPTR_ONLY_MODE + oq_ctl |= ((OTX_EP_RH_SIZE << 16) & 0x7fffff);/*populate ISIZE(22-16)*/ +#endif + otx_ep_write64(oq_ctl, otx_ep->hw_addr, OTX_EP_R_OUT_CONTROL(oq_no)); + + /* Mapped address of the pkt_sent and pkts_credit regs */ + droq->pkts_sent_reg = (uint8_t *)otx_ep->hw_addr + + OTX_EP_R_OUT_CNTS(oq_no); + droq->pkts_credit_reg = (uint8_t *)otx_ep->hw_addr + + OTX_EP_R_OUT_SLIST_DBELL(oq_no); + + /* reg_val = rte_read64(otx_ep->hw_addr + + * OTX_EP_R_OUT_INT_LEVELS(oq_no)); + */ + otx_ep_write64(0x3fffffffffffffULL, otx_ep->hw_addr, + OTX_EP_R_OUT_INT_LEVELS(oq_no)); + + /* Clear PKT_CNT register */ + /* otx_ep_write64(0xFFFFFFFFF, (uint8_t *)otx_ep->hw_addr, + * OTX_EP_R_OUT_PKT_CNT(oq_no)); + */ + + /* Clear the OQ doorbell */ + rte_write32(0xFFFFFFFF, droq->pkts_credit_reg); + while ((rte_read32(droq->pkts_credit_reg) != 0ull)) { + rte_write32(0xFFFFFFFF, droq->pkts_credit_reg); + rte_delay_ms(1); + } + otx_ep_dbg("OTX_EP_R[%d]_credit:%x\n", oq_no, + rte_read32(droq->pkts_credit_reg)); + + /* Clear the OQ_OUT_CNTS doorbell */ + reg_val = rte_read32(droq->pkts_sent_reg); + rte_write32((uint32_t)reg_val, droq->pkts_sent_reg); + + otx_ep_dbg("OTX_EP_R[%d]_sent: %x\n", oq_no, + rte_read32(droq->pkts_sent_reg)); + + while (((rte_read32(droq->pkts_sent_reg)) != 0ull)) { + reg_val = rte_read32(droq->pkts_sent_reg); + rte_write32((uint32_t)reg_val, droq->pkts_sent_reg); + rte_delay_ms(1); + } +} + /* OTX_EP default configuration */ static const struct otx_ep_config default_otx_ep_conf = { /* IQ attributes */ @@ -148,6 +277,9 @@ otx_ep_vf_setup_device(struct otx_ep_device *otx_ep) otx_ep_info("OTX_EP RPVF: %d\n", otx_ep->sriov_info.rings_per_vf); + otx_ep->fn_list.setup_iq_regs = otx_ep_setup_iq_regs; + otx_ep->fn_list.setup_oq_regs = otx_ep_setup_oq_regs; + otx_ep->fn_list.setup_device_regs = otx_ep_setup_device_regs; return 0; diff --git a/drivers/net/octeontx_ep/otx_ep_vf.h b/drivers/net/octeontx_ep/otx_ep_vf.h index fa224b4de1..d6aa326dc3 100644 --- a/drivers/net/octeontx_ep/otx_ep_vf.h +++ b/drivers/net/octeontx_ep/otx_ep_vf.h @@ -4,13 +4,38 @@ #ifndef _OTX_EP_VF_H_ #define _OTX_EP_VF_H_ + + + + #define OTX_EP_RING_OFFSET (0x1ull << 17) /* OTX_EP VF IQ Registers */ #define OTX_EP_R_IN_CONTROL_START (0x10000) +#define OTX_EP_R_IN_INSTR_BADDR_START (0x10020) +#define OTX_EP_R_IN_INSTR_RSIZE_START (0x10030) +#define OTX_EP_R_IN_INSTR_DBELL_START (0x10040) +#define OTX_EP_R_IN_CNTS_START (0x10050) +#define OTX_EP_R_IN_INT_LEVELS_START (0x10060) + #define OTX_EP_R_IN_CONTROL(ring) \ (OTX_EP_R_IN_CONTROL_START + ((ring) * OTX_EP_RING_OFFSET)) +#define OTX_EP_R_IN_INSTR_BADDR(ring) \ + (OTX_EP_R_IN_INSTR_BADDR_START + ((ring) * 
OTX_EP_RING_OFFSET)) + +#define OTX_EP_R_IN_INSTR_RSIZE(ring) \ + (OTX_EP_R_IN_INSTR_RSIZE_START + ((ring) * OTX_EP_RING_OFFSET)) + +#define OTX_EP_R_IN_INSTR_DBELL(ring) \ + (OTX_EP_R_IN_INSTR_DBELL_START + ((ring) * OTX_EP_RING_OFFSET)) + +#define OTX_EP_R_IN_CNTS(ring) \ + (OTX_EP_R_IN_CNTS_START + ((ring) * OTX_EP_RING_OFFSET)) + +#define OTX_EP_R_IN_INT_LEVELS(ring) \ + (OTX_EP_R_IN_INT_LEVELS_START + ((ring) * OTX_EP_RING_OFFSET)) + /* OTX_EP VF IQ Masks */ #define OTX_EP_R_IN_CTL_RPVF_MASK (0xF) #define OTX_EP_R_IN_CTL_RPVF_POS (48) @@ -20,10 +45,38 @@ #define OTX_EP_R_IN_CTL_IS_64B (0x1ull << 24) #define OTX_EP_R_IN_CTL_ESR (0x1ull << 1) /* OTX_EP VF OQ Registers */ +#define OTX_EP_R_OUT_CNTS_START (0x10100) +#define OTX_EP_R_OUT_INT_LEVELS_START (0x10110) +#define OTX_EP_R_OUT_SLIST_BADDR_START (0x10120) +#define OTX_EP_R_OUT_SLIST_RSIZE_START (0x10130) +#define OTX_EP_R_OUT_SLIST_DBELL_START (0x10140) #define OTX_EP_R_OUT_CONTROL_START (0x10150) +#define OTX_EP_R_OUT_ENABLE_START (0x10160) + #define OTX_EP_R_OUT_CONTROL(ring) \ (OTX_EP_R_OUT_CONTROL_START + ((ring) * OTX_EP_RING_OFFSET)) + +#define OTX_EP_R_OUT_ENABLE(ring) \ + (OTX_EP_R_OUT_ENABLE_START + ((ring) * OTX_EP_RING_OFFSET)) + +#define OTX_EP_R_OUT_SLIST_BADDR(ring) \ + (OTX_EP_R_OUT_SLIST_BADDR_START + ((ring) * OTX_EP_RING_OFFSET)) + +#define OTX_EP_R_OUT_SLIST_RSIZE(ring) \ + (OTX_EP_R_OUT_SLIST_RSIZE_START + ((ring) * OTX_EP_RING_OFFSET)) + +#define OTX_EP_R_OUT_SLIST_DBELL(ring) \ + (OTX_EP_R_OUT_SLIST_DBELL_START + ((ring) * OTX_EP_RING_OFFSET)) + +#define OTX_EP_R_OUT_CNTS(ring) \ + (OTX_EP_R_OUT_CNTS_START + ((ring) * OTX_EP_RING_OFFSET)) + +#define OTX_EP_R_OUT_INT_LEVELS(ring) \ + (OTX_EP_R_OUT_INT_LEVELS_START + ((ring) * OTX_EP_RING_OFFSET)) + /* OTX_EP VF OQ Masks */ + +#define OTX_EP_R_OUT_CTL_IDLE (1ull << 36) #define OTX_EP_R_OUT_CTL_ES_I (1ull << 34) #define OTX_EP_R_OUT_CTL_NSR_I (1ull << 33) #define OTX_EP_R_OUT_CTL_ROR_I (1ull << 32) From patchwork Thu Dec 31 07:22:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pradeep Nalla X-Patchwork-Id: 85926 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (xvm-189-124.dc0.ghst.net [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C960BA0A00; Thu, 31 Dec 2020 08:25:24 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A623F140D51; Thu, 31 Dec 2020 08:23:16 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 475C5140CF5 for ; Thu, 31 Dec 2020 08:23:01 +0100 (CET) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 0BV7G0IP022182 for ; Wed, 30 Dec 2020 23:23:00 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=FYooKExqCew6z63sX5XboHmeaGm21uIo8dHCueoL9bo=; b=jIDdUUq0e0daBTB7sFIoaZ09Agd87/5Ol3frY7gD/qCgy73NMQBHQxzvACMSYlqiJBKj frESvDFKqe+Ka8z4YlvNmO1eIF3GA2OcgzNyRTQncH4QMpjnsM9redTQHMBNPVTeM2gm aRXknSHcNxl5oz2qpFJn4Eiuk0kHMRU1qFZeill3yAUOQ4+MkoPJWocYxti8xWWaUJVy 2z/tDPS4Qby4M7zdxdy5nA0N1vqrdcuKj8jUNe6l4a5F1e+yuF+6bzz/XQd9FoXbpa/n Hu6BwhFaKQ+MF3rDKiXS1Q5iJjs0c8ZI7ZnDu5qz0zzxNEPnWXv1RFccIlRW0dJOM2t1 8g== 
Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com with ESMTP id 35rqgehx58-3 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Wed, 30 Dec 2020 23:23:00 -0800 Received: from SC-EXCH04.marvell.com (10.93.176.84) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 30 Dec 2020 23:22:58 -0800 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 30 Dec 2020 23:22:57 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 30 Dec 2020 23:22:57 -0800 Received: from localhost.localdomain (unknown [10.111.145.157]) by maili.marvell.com (Postfix) with ESMTP id 531903F7041; Wed, 30 Dec 2020 23:22:57 -0800 (PST) From: "Nalla, Pradeep" To: "Nalla, Pradeep" , Radha Mohan Chintakuntla , Veerasenareddy Burru CC: , , Date: Thu, 31 Dec 2020 07:22:41 +0000 Message-ID: <20201231072247.5719-10-pnalla@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201231072247.5719-1-pnalla@marvell.com> References: <20201231072247.5719-1-pnalla@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.343, 18.0.737 definitions=2020-12-31_02:2020-12-30, 2020-12-31 signatures=0 Subject: [dpdk-dev] [PATCH 09/15] net/octeontx_ep: Added dev start and stop X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: "Nalla Pradeep" Dev start and stop operations are added. To accomplish this internal functions to enable or disable io queues are incorporated. Signed-off-by: Nalla Pradeep --- drivers/net/octeontx_ep/otx2_ep_vf.c | 92 +++++++++++++++++++ drivers/net/octeontx_ep/otx_ep_common.h | 13 +++ drivers/net/octeontx_ep/otx_ep_ethdev.c | 45 ++++++++++ drivers/net/octeontx_ep/otx_ep_vf.c | 114 ++++++++++++++++++++++++ drivers/net/octeontx_ep/otx_ep_vf.h | 4 + 5 files changed, 268 insertions(+) diff --git a/drivers/net/octeontx_ep/otx2_ep_vf.c b/drivers/net/octeontx_ep/otx2_ep_vf.c index f2cd442e97..b570a49566 100644 --- a/drivers/net/octeontx_ep/otx2_ep_vf.c +++ b/drivers/net/octeontx_ep/otx2_ep_vf.c @@ -199,6 +199,89 @@ otx2_vf_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no) rte_read32(droq->pkts_sent_reg)); } +static void +otx2_vf_enable_iq(struct otx_ep_device *otx_ep, uint32_t q_no) +{ + volatile uint64_t reg_val = 0ull; + uint64_t loop = SDP_VF_BUSY_LOOP_COUNT; + + /* Resetting doorbells during IQ enabling also to handle abrupt + * guest reboot. IQ reset does not clear the doorbells. 
+ */ + otx2_write64(0xFFFFFFFF, otx_ep->hw_addr + + SDP_VF_R_IN_INSTR_DBELL(q_no)); + + while (((otx2_read64(otx_ep->hw_addr + + SDP_VF_R_IN_INSTR_DBELL(q_no))) != 0ull) && loop--) { + rte_delay_ms(1); + } + + reg_val = otx2_read64(otx_ep->hw_addr + SDP_VF_R_IN_ENABLE(q_no)); + reg_val |= 0x1ull; + + otx2_write64(reg_val, otx_ep->hw_addr + SDP_VF_R_IN_ENABLE(q_no)); + + otx2_info("IQ[%d] enable done", q_no); +} + +static void +otx2_vf_enable_oq(struct otx_ep_device *otx_ep, uint32_t q_no) +{ + volatile uint64_t reg_val = 0ull; + + reg_val = otx2_read64(otx_ep->hw_addr + SDP_VF_R_OUT_ENABLE(q_no)); + reg_val |= 0x1ull; + otx2_write64(reg_val, otx_ep->hw_addr + SDP_VF_R_OUT_ENABLE(q_no)); + + otx2_info("OQ[%d] enable done", q_no); +} + +static void +otx2_vf_enable_io_queues(struct otx_ep_device *otx_ep) +{ + uint32_t q_no = 0; + + for (q_no = 0; q_no < otx_ep->nb_tx_queues; q_no++) + otx2_vf_enable_iq(otx_ep, q_no); + + for (q_no = 0; q_no < otx_ep->nb_rx_queues; q_no++) + otx2_vf_enable_oq(otx_ep, q_no); +} + +static void +otx2_vf_disable_iq(struct otx_ep_device *otx_ep, uint32_t q_no) +{ + volatile uint64_t reg_val = 0ull; + + /* Reset the doorbell register for this Input Queue. */ + reg_val = otx2_read64(otx_ep->hw_addr + SDP_VF_R_IN_ENABLE(q_no)); + reg_val &= ~0x1ull; + + otx2_write64(reg_val, otx_ep->hw_addr + SDP_VF_R_IN_ENABLE(q_no)); +} + +static void +otx2_vf_disable_oq(struct otx_ep_device *otx_ep, uint32_t q_no) +{ + volatile uint64_t reg_val = 0ull; + + reg_val = otx2_read64(otx_ep->hw_addr + SDP_VF_R_OUT_ENABLE(q_no)); + reg_val &= ~0x1ull; + + otx2_write64(reg_val, otx_ep->hw_addr + SDP_VF_R_OUT_ENABLE(q_no)); +} + +static void +otx2_vf_disable_io_queues(struct otx_ep_device *otx_ep) +{ + uint32_t q_no = 0; + + for (q_no = 0; q_no < otx_ep->sriov_info.rings_per_vf; q_no++) { + otx2_vf_disable_iq(otx_ep, q_no); + otx2_vf_disable_oq(otx_ep, q_no); + } +} + static const struct otx_ep_config default_otx2_ep_conf = { /* IQ attributes */ .iq = { @@ -258,5 +341,14 @@ otx2_ep_vf_setup_device(struct otx_ep_device *otx_ep) otx_ep->fn_list.setup_device_regs = otx2_vf_setup_device_regs; + otx_ep->fn_list.enable_io_queues = otx2_vf_enable_io_queues; + otx_ep->fn_list.disable_io_queues = otx2_vf_disable_io_queues; + + otx_ep->fn_list.enable_iq = otx2_vf_enable_iq; + otx_ep->fn_list.disable_iq = otx2_vf_disable_iq; + + otx_ep->fn_list.enable_oq = otx2_vf_enable_oq; + otx_ep->fn_list.disable_oq = otx2_vf_disable_oq; + return 0; } diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h index e7e393ef00..50c9e2daa3 100644 --- a/drivers/net/octeontx_ep/otx_ep_common.h +++ b/drivers/net/octeontx_ep/otx_ep_common.h @@ -19,6 +19,8 @@ #define OTX_EP_PCI_RING_ALIGN 65536 #define SDP_PKIND 40 #define SDP_OTX2_PKIND 57 +#define OTX_EP_BUSY_LOOP_COUNT (10000) + #define OTX_EP_MAX_IOQS_PER_VF 8 #define otx_ep_printf(level, fmt, args...) 
\ rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \ @@ -362,7 +364,14 @@ struct otx_ep_fn_list { int (*setup_device_regs)(struct otx_ep_device *otx_ep); + void (*enable_io_queues)(struct otx_ep_device *otx_ep); void (*disable_io_queues)(struct otx_ep_device *otx_ep); + + void (*enable_iq)(struct otx_ep_device *otx_ep, uint32_t q_no); + void (*disable_iq)(struct otx_ep_device *otx_ep, uint32_t q_no); + + void (*enable_oq)(struct otx_ep_device *otx_ep, uint32_t q_no); + void (*disable_oq)(struct otx_ep_device *otx_ep, uint32_t q_no); }; /* SRIOV information */ @@ -417,6 +426,10 @@ struct otx_ep_device { /* Device configuration */ const struct otx_ep_config *conf; + int started; + + int linkup; + int port_configured; uint64_t rx_offloads; diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c index dc419d3447..f782c90ad7 100644 --- a/drivers/net/octeontx_ep/otx_ep_ethdev.c +++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c @@ -60,6 +60,47 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev, return 0; } +static int +otx_ep_dev_start(struct rte_eth_dev *eth_dev) +{ + struct otx_ep_device *otx_epvf; + unsigned int q; + + otx_epvf = (struct otx_ep_device *)OTX_EP_DEV(eth_dev); + /* Enable IQ/OQ for this device */ + otx_epvf->fn_list.enable_io_queues(otx_epvf); + + for (q = 0; q < otx_epvf->nb_rx_queues; q++) { + rte_write32(otx_epvf->droq[q]->nb_desc, + otx_epvf->droq[q]->pkts_credit_reg); + + rte_wmb(); + otx_ep_info("OQ[%d] dbells [%d]\n", q, + rte_read32(otx_epvf->droq[q]->pkts_credit_reg)); + } + + otx_epvf->started = 1; + otx_epvf->linkup = 1; + + rte_wmb(); + otx_ep_info("dev started\n"); + + return 0; +} + +/* Stop device and disable input/output functions */ +static int +otx_ep_dev_stop(struct rte_eth_dev *eth_dev) +{ + struct otx_ep_device *otx_epvf = OTX_EP_DEV(eth_dev); + + otx_epvf->fn_list.disable_io_queues(otx_epvf); + otx_epvf->started = 0; + otx_epvf->linkup = 0; + + return 0; +} + static int otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf) { @@ -304,6 +345,8 @@ otx_ep_tx_queue_release(void *txq) /* Define our ethernet definitions */ static const struct eth_dev_ops otx_ep_eth_dev_ops = { .dev_configure = otx_ep_dev_configure, + .dev_start = otx_ep_dev_start, + .dev_stop = otx_ep_dev_stop, .rx_queue_setup = otx_ep_rx_queue_setup, .rx_queue_release = otx_ep_rx_queue_release, .tx_queue_setup = otx_ep_tx_queue_setup, @@ -323,6 +366,8 @@ otx_epdev_exit(struct rte_eth_dev *eth_dev) otx_epvf = OTX_EP_DEV(eth_dev); + otx_epvf->fn_list.disable_io_queues(otx_epvf); + num_queues = otx_epvf->nb_rx_queues; for (q = 0; q < num_queues; q++) { if (otx_ep_delete_oqs(otx_epvf, q)) { diff --git a/drivers/net/octeontx_ep/otx_ep_vf.c b/drivers/net/octeontx_ep/otx_ep_vf.c index 3d990a488f..4a00736dab 100644 --- a/drivers/net/octeontx_ep/otx_ep_vf.c +++ b/drivers/net/octeontx_ep/otx_ep_vf.c @@ -220,6 +220,110 @@ otx_ep_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no) } } +static void +otx_ep_enable_iq(struct otx_ep_device *otx_ep, uint32_t q_no) +{ + volatile uint64_t reg_val = 0ull; + uint64_t loop = OTX_EP_BUSY_LOOP_COUNT; + + /* Resetting doorbells during IQ enabling also to handle abrupt + * guest reboot. IQ reset does not clear the doorbells. 
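The enable helpers in this patch follow a common sequence: clear the ring doorbell, poll until the device reports it as zero (bounded by OTX_EP_BUSY_LOOP_COUNT), then set bit 0 of the per-ring ENABLE register. A condensed sketch of the bounded wait (helper name illustrative; the real code below operates on the OTX_EP_R_IN_* and OTX_EP_R_OUT_* registers directly):

    /* Poll a doorbell register until it reads zero or the retry budget runs out. */
    static int
    wait_dbell_clear(void *addr)
    {
        uint64_t loop;

        for (loop = 0; loop < OTX_EP_BUSY_LOOP_COUNT; loop++) {
            if (rte_read64(addr) == 0)
                return 0;
            rte_delay_ms(1);
        }
        return -ETIMEDOUT;
    }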
+ */ + otx_ep_write64(0xFFFFFFFF, otx_ep->hw_addr, + OTX_EP_R_IN_INSTR_DBELL(q_no)); + + while (((rte_read64(otx_ep->hw_addr + + OTX_EP_R_IN_INSTR_DBELL(q_no))) != 0ull) && loop--) { + rte_delay_ms(1); + } + if (loop == 0) { + otx_ep_err("dbell reset failed\n"); + return; + } + + + reg_val = rte_read64(otx_ep->hw_addr + OTX_EP_R_IN_ENABLE(q_no)); + reg_val |= 0x1ull; + + otx_ep_write64(reg_val, otx_ep->hw_addr, OTX_EP_R_IN_ENABLE(q_no)); + + otx_ep_info("IQ[%d] enable done\n", q_no); +} + +static void +otx_ep_enable_oq(struct otx_ep_device *otx_ep, uint32_t q_no) +{ + volatile uint64_t reg_val = 0ull; + uint64_t loop = OTX_EP_BUSY_LOOP_COUNT; + + /* Resetting doorbells during IQ enabling also to handle abrupt + * guest reboot. IQ reset does not clear the doorbells. + */ + otx_ep_write64(0xFFFFFFFF, otx_ep->hw_addr, + OTX_EP_R_OUT_SLIST_DBELL(q_no)); + while (((rte_read64(otx_ep->hw_addr + + OTX_EP_R_OUT_SLIST_DBELL(q_no))) != 0ull) && loop--) { + rte_delay_ms(1); + } + if (loop == 0) { + otx_ep_err("dbell reset failed\n"); + return; + } + + + reg_val = rte_read64(otx_ep->hw_addr + OTX_EP_R_OUT_ENABLE(q_no)); + reg_val |= 0x1ull; + otx_ep_write64(reg_val, otx_ep->hw_addr, OTX_EP_R_OUT_ENABLE(q_no)); + + otx_ep_info("OQ[%d] enable done\n", q_no); +} + +static void +otx_ep_enable_io_queues(struct otx_ep_device *otx_ep) +{ + uint32_t q_no = 0; + + for (q_no = 0; q_no < otx_ep->nb_tx_queues; q_no++) + otx_ep_enable_iq(otx_ep, q_no); + + for (q_no = 0; q_no < otx_ep->nb_rx_queues; q_no++) + otx_ep_enable_oq(otx_ep, q_no); +} + +static void +otx_ep_disable_iq(struct otx_ep_device *otx_ep, uint32_t q_no) +{ + volatile uint64_t reg_val = 0ull; + + /* Reset the doorbell register for this Input Queue. */ + reg_val = rte_read64(otx_ep->hw_addr + OTX_EP_R_IN_ENABLE(q_no)); + reg_val &= ~0x1ull; + + otx_ep_write64(reg_val, otx_ep->hw_addr, OTX_EP_R_IN_ENABLE(q_no)); +} + +static void +otx_ep_disable_oq(struct otx_ep_device *otx_ep, uint32_t q_no) +{ + volatile uint64_t reg_val = 0ull; + + reg_val = rte_read64(otx_ep->hw_addr + OTX_EP_R_OUT_ENABLE(q_no)); + reg_val &= ~0x1ull; + + otx_ep_write64(reg_val, otx_ep->hw_addr, OTX_EP_R_OUT_ENABLE(q_no)); +} + +static void +otx_ep_disable_io_queues(struct otx_ep_device *otx_ep) +{ + uint32_t q_no = 0; + + for (q_no = 0; q_no < otx_ep->sriov_info.rings_per_vf; q_no++) { + otx_ep_disable_iq(otx_ep, q_no); + otx_ep_disable_oq(otx_ep, q_no); + } +} + /* OTX_EP default configuration */ static const struct otx_ep_config default_otx_ep_conf = { /* IQ attributes */ @@ -282,5 +386,15 @@ otx_ep_vf_setup_device(struct otx_ep_device *otx_ep) otx_ep->fn_list.setup_device_regs = otx_ep_setup_device_regs; + otx_ep->fn_list.enable_io_queues = otx_ep_enable_io_queues; + otx_ep->fn_list.disable_io_queues = otx_ep_disable_io_queues; + + otx_ep->fn_list.enable_iq = otx_ep_enable_iq; + otx_ep->fn_list.disable_iq = otx_ep_disable_iq; + + otx_ep->fn_list.enable_oq = otx_ep_enable_oq; + otx_ep->fn_list.disable_oq = otx_ep_disable_oq; + + return 0; } diff --git a/drivers/net/octeontx_ep/otx_ep_vf.h b/drivers/net/octeontx_ep/otx_ep_vf.h index d6aa326dc3..d2128712aa 100644 --- a/drivers/net/octeontx_ep/otx_ep_vf.h +++ b/drivers/net/octeontx_ep/otx_ep_vf.h @@ -12,6 +12,7 @@ /* OTX_EP VF IQ Registers */ #define OTX_EP_R_IN_CONTROL_START (0x10000) +#define OTX_EP_R_IN_ENABLE_START (0x10010) #define OTX_EP_R_IN_INSTR_BADDR_START (0x10020) #define OTX_EP_R_IN_INSTR_RSIZE_START (0x10030) #define OTX_EP_R_IN_INSTR_DBELL_START (0x10040) @@ -21,6 +22,9 @@ #define OTX_EP_R_IN_CONTROL(ring) \ 
(OTX_EP_R_IN_CONTROL_START + ((ring) * OTX_EP_RING_OFFSET)) +#define OTX_EP_R_IN_ENABLE(ring) \ + (OTX_EP_R_IN_ENABLE_START + ((ring) * OTX_EP_RING_OFFSET)) + #define OTX_EP_R_IN_INSTR_BADDR(ring) \ (OTX_EP_R_IN_INSTR_BADDR_START + ((ring) * OTX_EP_RING_OFFSET)) From patchwork Thu Dec 31 07:22:42 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pradeep Nalla X-Patchwork-Id: 85922 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (xvm-189-124.dc0.ghst.net [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2E87DA0A00; Thu, 31 Dec 2020 08:24:31 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A240F140D25; Thu, 31 Dec 2020 08:23:10 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id DAA67140CEE for ; Thu, 31 Dec 2020 08:23:00 +0100 (CET) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 0BV7LGmR010253 for ; Wed, 30 Dec 2020 23:23:00 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=qHlBAdG+JhjQei+ysUY7qqkIpMP5B8th63BoCo+RkYI=; b=YLWpfwXUJX84KqcahcLH5zZ0sNvsVRk/V1BfaXEKx0YIPUMih+ZUO1VNM1RZ8tRcX7EX DBioHb2e+fAMVhagHwsN2rwxYTKKoOiJkKCeZOAUaJhLW411yxYVSRs7mU9lFRb+TwWB ExXqZI7I5Bp2eRip+f536JgRxEI0vFhAFirSl096hsVoN0GhUU/j+1sC01bMH2Qh1/PK 1WAh4P5HkUflzdItKZmkDYvncj1fhlttYWsZj+l2SkgfSSRvEPV0suEVTrFwNWbt8r/9 jLfb/GbvLFpIHUu+TO49CW1ys0nulZ6g4S2ww0L5zy07Sv+dxgnsS1UIYdlWSoftImQc fA== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com with ESMTP id 35s80806ff-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Wed, 30 Dec 2020 23:23:00 -0800 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 30 Dec 2020 23:22:58 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 30 Dec 2020 23:22:58 -0800 Received: from localhost.localdomain (unknown [10.111.145.157]) by maili.marvell.com (Postfix) with ESMTP id 9B9EC3F7040; Wed, 30 Dec 2020 23:22:57 -0800 (PST) From: "Nalla, Pradeep" To: "Nalla, Pradeep" , Radha Mohan Chintakuntla , Veerasenareddy Burru CC: , , Date: Thu, 31 Dec 2020 07:22:42 +0000 Message-ID: <20201231072247.5719-11-pnalla@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201231072247.5719-1-pnalla@marvell.com> References: <20201231072247.5719-1-pnalla@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.343, 18.0.737 definitions=2020-12-31_02:2020-12-30, 2020-12-31 signatures=0 Subject: [dpdk-dev] [PATCH 10/15] net/octeontx_ep: Receive data path function added X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: "Nalla Pradeep" Function to deliver packets from DROQ to application is added. 
It also fills DROQ with receive buffers timely such that device can fill them with incoming packets. Signed-off-by: Nalla Pradeep --- drivers/net/octeontx_ep/otx_ep_common.h | 5 + drivers/net/octeontx_ep/otx_ep_ethdev.c | 3 + drivers/net/octeontx_ep/otx_ep_rxtx.c | 292 ++++++++++++++++++++++++ drivers/net/octeontx_ep/otx_ep_rxtx.h | 17 +- 4 files changed, 316 insertions(+), 1 deletion(-) diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h index 50c9e2daa3..e3819213dd 100644 --- a/drivers/net/octeontx_ep/otx_ep_common.h +++ b/drivers/net/octeontx_ep/otx_ep_common.h @@ -22,6 +22,11 @@ #define OTX_EP_BUSY_LOOP_COUNT (10000) #define OTX_EP_MAX_IOQS_PER_VF 8 + +#define OTX_CUST_META_DATA 64 +#define OTX_CUST_PRIV_TAG 2 +#define OTX_CUST_DATA_LEN (OTX_CUST_META_DATA + OTX_CUST_PRIV_TAG) + #define otx_ep_printf(level, fmt, args...) \ rte_log(RTE_LOG_ ## level, RTE_LOGTYPE_PMD, \ fmt, ##args) diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c index f782c90ad7..9ec8bc52c9 100644 --- a/drivers/net/octeontx_ep/otx_ep_ethdev.c +++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c @@ -147,6 +147,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf) goto setup_fail; } + otx_epvf->eth_dev->rx_pkt_burst = &otx_ep_recv_pkts; ethdev_queues = (uint32_t)(otx_epvf->sriov_info.rings_per_vf); otx_epvf->max_rx_queues = ethdev_queues; otx_epvf->max_tx_queues = ethdev_queues; @@ -404,6 +405,8 @@ otx_ep_eth_dev_uninit(struct rte_eth_dev *eth_dev) rte_free(eth_dev->data->mac_addrs); eth_dev->dev_ops = NULL; + eth_dev->rx_pkt_burst = NULL; + eth_dev->tx_pkt_burst = NULL; return 0; } diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c index a13c180c99..3e77d579c2 100644 --- a/drivers/net/octeontx_ep/otx_ep_rxtx.c +++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c @@ -7,6 +7,8 @@ #include #include #include +#include +#include #include #include "otx_ep_common.h" @@ -14,6 +16,8 @@ #include "otx2_ep_vf.h" #include "otx_ep_rxtx.h" +/* SDP_LENGTH_S specifies packet length and is of 8-byte size */ +#define INFO_SIZE 8 static void otx_ep_dmazone_free(const struct rte_memzone *mz) { @@ -355,3 +359,291 @@ otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs, otx_ep_delete_oqs(otx_ep, oq_no); return -ENOMEM; } + +static uint32_t +otx_ep_droq_refill(struct otx_ep_droq *droq) +{ + struct otx_ep_droq_desc *desc_ring; + uint32_t desc_refilled = 0; + struct rte_mbuf *buf = NULL; + struct otx_ep_droq_info *info; + + desc_ring = droq->desc_ring; + + while (droq->refill_count && (desc_refilled < droq->nb_desc)) { + /* If a valid buffer exists (happens if there is no dispatch), + * reuse the buffer, else allocate. 
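The refill, read and flush indices throughout this patch advance through otx_ep_incr_index(), added to otx_ep_rxtx.h below, which wraps at the ring size instead of taking a modulo (and assumes the increment never exceeds the ring size). A short worked example with an arbitrary 1024-entry ring:

/* Worked example of the wrap-around index arithmetic used by the Rx/Tx
 * rings; the ring size here is arbitrary.
 */
uint32_t idx = 1023;

idx = otx_ep_incr_index(idx, 1, 1024);  /* 1023 + 1 wraps back to 0 */
idx = otx_ep_incr_index(idx, 3, 1024);  /* 0 + 3 stays at 3, no wrap */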
+ */ + if (droq->recv_buf_list[droq->refill_idx] != NULL) + break; + + buf = rte_pktmbuf_alloc(droq->mpool); + /* If a buffer could not be allocated, no point in + * continuing + */ + if (buf == NULL) { + droq->stats.rx_alloc_failure++; + break; + } + info = rte_pktmbuf_mtod(buf, struct otx_ep_droq_info *); + memset(info, 0, sizeof(*info)); + + droq->recv_buf_list[droq->refill_idx] = buf; + desc_ring[droq->refill_idx].buffer_ptr = + rte_mbuf_data_iova_default(buf); + + + droq->refill_idx = otx_ep_incr_index(droq->refill_idx, 1, + droq->nb_desc); + + desc_refilled++; + droq->refill_count--; + } + + return desc_refilled; +} + +static struct rte_mbuf * +otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, + struct otx_ep_droq *droq, int next_fetch) +{ + struct rte_net_hdr_lens hdr_lens; + volatile struct otx_ep_droq_info *info; + struct otx_ep_droq_info *info2; + uint32_t pkt_len = 0; + uint64_t total_pkt_len; + struct rte_mbuf *droq_pkt = NULL; + struct rte_mbuf *droq_pkt2 = NULL; + int next_idx; + + droq_pkt = droq->recv_buf_list[droq->read_idx]; + droq_pkt2 = droq->recv_buf_list[droq->read_idx]; + info = rte_pktmbuf_mtod(droq_pkt, struct otx_ep_droq_info *); + /* make sure info is available */ + rte_rmb(); + if (unlikely(!info->length)) { + int retry = OTX_EP_MAX_DELAYED_PKT_RETRIES; + /* otx_ep_dbg("OCTEON DROQ[%d]: read_idx: %d; Data not ready " + * "yet, Retry; pending=%lu\n", droq->q_no, droq->read_idx, + * droq->pkts_pending); + */ + droq->stats.pkts_delayed_data++; + while (retry && !info->length) + retry--; + if (!retry && !info->length) { + otx_ep_err("OCTEON DROQ[%d]: read_idx: %d; Retry failed !!\n", + droq->q_no, droq->read_idx); + /* May be zero length packet; drop it */ + rte_pktmbuf_free(droq_pkt); + droq->recv_buf_list[droq->read_idx] = NULL; + droq->read_idx = otx_ep_incr_index(droq->read_idx, 1, + droq->nb_desc); + droq->stats.dropped_zlp++; + droq->refill_count++; + goto oq_read_fail; + } + } + if (next_fetch) { + next_idx = otx_ep_incr_index(droq->read_idx, 1, droq->nb_desc); + droq_pkt2 = droq->recv_buf_list[next_idx]; + info2 = rte_pktmbuf_mtod(droq_pkt2, struct otx_ep_droq_info *); + rte_prefetch_non_temporal((const void *)info2); + } + + info->length = rte_bswap64(info->length); + /* Deduce the actual data size */ + total_pkt_len = info->length + INFO_SIZE; + if (total_pkt_len <= droq->buffer_size) { + info->length -= OTX_EP_RH_SIZE; + droq_pkt = droq->recv_buf_list[droq->read_idx]; + if (likely(droq_pkt != NULL)) { + droq_pkt->data_off += OTX_EP_DROQ_INFO_SIZE; + /* otx_ep_dbg("OQ: pkt_len[%ld], buffer_size %d\n", + * (long)info->length, droq->buffer_size); + */ + pkt_len = (uint32_t)info->length; + droq_pkt->pkt_len = pkt_len; + droq_pkt->data_len = pkt_len; + droq_pkt->port = otx_ep->port_id; + droq->recv_buf_list[droq->read_idx] = NULL; + droq->read_idx = otx_ep_incr_index(droq->read_idx, 1, + droq->nb_desc); + droq->refill_count++; + } + } else { + struct rte_mbuf *first_buf = NULL; + struct rte_mbuf *last_buf = NULL; + + while (pkt_len < total_pkt_len) { + int cpy_len = 0; + + cpy_len = ((pkt_len + droq->buffer_size) > + total_pkt_len) + ? 
((uint32_t)total_pkt_len - + pkt_len) + : droq->buffer_size; + + droq_pkt = droq->recv_buf_list[droq->read_idx]; + droq->recv_buf_list[droq->read_idx] = NULL; + + if (likely(droq_pkt != NULL)) { + /* Note the first seg */ + if (!pkt_len) + first_buf = droq_pkt; + + droq_pkt->port = otx_ep->port_id; + if (!pkt_len) { + droq_pkt->data_off += + OTX_EP_DROQ_INFO_SIZE; + droq_pkt->pkt_len = + cpy_len - OTX_EP_DROQ_INFO_SIZE; + droq_pkt->data_len = + cpy_len - OTX_EP_DROQ_INFO_SIZE; + } else { + droq_pkt->pkt_len = cpy_len; + droq_pkt->data_len = cpy_len; + } + + if (pkt_len) { + first_buf->nb_segs++; + first_buf->pkt_len += droq_pkt->pkt_len; + } + + if (last_buf) + last_buf->next = droq_pkt; + + last_buf = droq_pkt; + } else { + otx_ep_err("no buf\n"); + } + + pkt_len += cpy_len; + droq->read_idx = otx_ep_incr_index(droq->read_idx, 1, + droq->nb_desc); + droq->refill_count++; + } + droq_pkt = first_buf; + } + droq_pkt->packet_type = rte_net_get_ptype(droq_pkt, &hdr_lens, + RTE_PTYPE_ALL_MASK); + droq_pkt->l2_len = hdr_lens.l2_len; + droq_pkt->l3_len = hdr_lens.l3_len; + droq_pkt->l4_len = hdr_lens.l4_len; + + if ((droq_pkt->pkt_len > (RTE_ETHER_MAX_LEN + OTX_CUST_DATA_LEN)) && + !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)) { + rte_pktmbuf_free(droq_pkt); + goto oq_read_fail; + } + + if (droq_pkt->nb_segs > 1 && + !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) { + rte_pktmbuf_free(droq_pkt); + goto oq_read_fail; + } + + return droq_pkt; + +oq_read_fail: + return NULL; +} + +static inline uint32_t +otx_ep_check_droq_pkts(struct otx_ep_droq *droq) +{ + uint32_t new_pkts; + volatile uint64_t pkt_count; + + /* Latest available OQ packets */ + pkt_count = rte_read32(droq->pkts_sent_reg); + rte_write32(pkt_count, droq->pkts_sent_reg); + new_pkts = pkt_count; + /* otx_ep_dbg("Recvd [%d] new OQ pkts\n", new_pkts); */ + droq->pkts_pending += new_pkts; + return new_pkts; +} + + +/* Check for response arrival from OCTEON TX2 + * returns number of requests completed + */ +uint16_t +otx_ep_recv_pkts(void *rx_queue, + struct rte_mbuf **rx_pkts, + uint16_t budget) +{ + struct rte_mbuf *oq_pkt; + struct otx_ep_device *otx_ep; + struct otx_ep_droq *droq = rx_queue; + + uint32_t pkts = 0; + uint32_t new_pkts = 0; + int next_fetch; + + otx_ep = droq->otx_ep_dev; + + if (droq->pkts_pending > budget) { + new_pkts = budget; + } else { + new_pkts = droq->pkts_pending; + new_pkts += otx_ep_check_droq_pkts(droq); + if (new_pkts > budget) + new_pkts = budget; + } + if (!new_pkts) { + /* otx_ep_dbg("Zero new_pkts:%d\n", new_pkts); */ + goto update_credit; /* No pkts at this moment */ + } + + /* otx_ep_dbg("Received new_pkts = %d\n", new_pkts); */ + + for (pkts = 0; pkts < new_pkts; pkts++) { + /* Push the received pkt to application */ + next_fetch = (pkts == new_pkts - 1) ? 
0 : 1; + oq_pkt = otx_ep_droq_read_packet(otx_ep, droq, next_fetch); + if (!oq_pkt) { + otx_ep_err("DROQ read pkt failed pending %lu last_pkt_count %lu new_pkts %d.\n", + droq->pkts_pending, droq->last_pkt_count, + new_pkts); + droq->pkts_pending -= pkts; + droq->stats.rx_err++; + goto finish; + } + /* rte_pktmbuf_dump(stdout, oq_pkt, + * rte_pktmbuf_pkt_len(oq_pkt)); + */ + rx_pkts[pkts] = oq_pkt; + /* Stats */ + droq->stats.pkts_received++; + droq->stats.bytes_received += oq_pkt->pkt_len; + } + droq->pkts_pending -= pkts; + /* otx_ep_dbg("DROQ pkts[%d] pushed to application\n", pkts); */ + + /* Refill DROQ buffers */ +update_credit: + if (droq->refill_count >= 16 /* droq->refill_threshold */) { + int desc_refilled = otx_ep_droq_refill(droq); + + /* Flush the droq descriptor data to memory to be sure + * that when we update the credits the data in memory is + * accurate. + */ + rte_wmb(); + rte_write32(desc_refilled, droq->pkts_credit_reg); + /* otx_ep_dbg("Refilled count = %d\n", desc_refilled); */ + } else { + /* + * SDP output goes into DROP state when output doorbell count + * goes below drop count. When door bell count is written with + * a value greater than drop count SDP output should come out + * of DROP state. Due to a race condition this is not happening. + * Writing doorbell register with 0 again may make SDP output + * come out of this state. + */ + + rte_write32(0, droq->pkts_credit_reg); + } +finish: + return pkts; +} diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.h b/drivers/net/octeontx_ep/otx_ep_rxtx.h index 819204a763..602534cf0b 100644 --- a/drivers/net/octeontx_ep/otx_ep_rxtx.h +++ b/drivers/net/octeontx_ep/otx_ep_rxtx.h @@ -7,4 +7,19 @@ #define OTX_EP_RXD_ALIGN 1 #define OTX_EP_TXD_ALIGN 1 -#endif +#define OTX_EP_MAX_DELAYED_PKT_RETRIES 10000 +static inline uint32_t +otx_ep_incr_index(uint32_t index, uint32_t count, uint32_t max) +{ + if ((index + count) >= max) + index = index + count - max; + else + index += count; + + return index; +} +uint16_t +otx_ep_recv_pkts(void *rx_queue, + struct rte_mbuf **rx_pkts, + uint16_t budget); +#endif /* _OTX_EP_RXTX_H_ */ From patchwork Thu Dec 31 07:22:43 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pradeep Nalla X-Patchwork-Id: 85925 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (xvm-189-124.dc0.ghst.net [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id CD59EA0A00; Thu, 31 Dec 2020 08:25:11 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 72B2D140D4B; Thu, 31 Dec 2020 08:23:15 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 1A680140CF2 for ; Thu, 31 Dec 2020 08:23:00 +0100 (CET) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 0BV7G0IO022182 for ; Wed, 30 Dec 2020 23:23:00 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=bF75Fx897bIazAu7bmZrm4x7nVu3b4iFrB5Niwo5OK4=; b=JQUZ+m3Ft1mU6EB/rZxPN9CAvtnOI1ylQZ0oobbczUfDkvHdgPBydDMkQTs1dHcpYeAg 4fRLUJ6vzbc83XJ6RQTpLAku5Z8u/CLwqOuYXH0vGz9fQtKbTw6Ibg8u5KwqJJGGMyRd 
0THE99fOss37HCrl8ZVUu3UygEjmdbgtcxixaMXG1k7U0un+thHDcBMWCWWXwS6dOWXY wx1BGDi3odgqKXV4kEQ6Jri84bgh7124fh0MvGJRsPWyiHWpwujlV1HC5BQilVlhZxyc /jmmVgZ86qE4jR4etyjxM74XRDEjdqy51YmdoDYfhqprNFjiPobHSvM5X9Ozpih5L8/A Xg== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com with ESMTP id 35rqgehx58-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Wed, 30 Dec 2020 23:23:00 -0800 Received: from SC-EXCH02.marvell.com (10.93.176.82) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 30 Dec 2020 23:22:58 -0800 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 30 Dec 2020 23:22:57 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 30 Dec 2020 23:22:58 -0800 Received: from localhost.localdomain (unknown [10.111.145.157]) by maili.marvell.com (Postfix) with ESMTP id E63883F703F; Wed, 30 Dec 2020 23:22:57 -0800 (PST) From: "Nalla, Pradeep" To: "Nalla, Pradeep" , Radha Mohan Chintakuntla , Veerasenareddy Burru CC: , , Date: Thu, 31 Dec 2020 07:22:43 +0000 Message-ID: <20201231072247.5719-12-pnalla@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201231072247.5719-1-pnalla@marvell.com> References: <20201231072247.5719-1-pnalla@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.343, 18.0.737 definitions=2020-12-31_02:2020-12-30, 2020-12-31 signatures=0 Subject: [dpdk-dev] [PATCH 11/15] net/octeontx_ep: Transmit data path function added X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: "Nalla Pradeep" 1. Packet transmit function for both otx and otx2 are added. 2. Flushing trasmit(command) queue when pending commands are more than maximum allowed value (currently 16). 3. Scatter gather support if the packet spans multiple buffers. Signed-off-by: Nalla Pradeep --- drivers/net/octeontx_ep/otx2_ep_vf.h | 19 + drivers/net/octeontx_ep/otx_ep_common.h | 51 +++ drivers/net/octeontx_ep/otx_ep_ethdev.c | 5 + drivers/net/octeontx_ep/otx_ep_rxtx.c | 448 +++++++++++++++++++++++- drivers/net/octeontx_ep/otx_ep_rxtx.h | 26 ++ drivers/net/octeontx_ep/otx_ep_vf.h | 68 ++++ 6 files changed, 615 insertions(+), 2 deletions(-) diff --git a/drivers/net/octeontx_ep/otx2_ep_vf.h b/drivers/net/octeontx_ep/otx2_ep_vf.h index 52d6487548..3a3b3413b2 100644 --- a/drivers/net/octeontx_ep/otx2_ep_vf.h +++ b/drivers/net/octeontx_ep/otx2_ep_vf.h @@ -7,5 +7,24 @@ int otx2_ep_vf_setup_device(struct otx_ep_device *sdpvf); +struct otx2_ep_instr_64B { + /* Pointer where the input data is available. */ + uint64_t dptr; + + /* OTX_EP Instruction Header. */ + union otx_ep_instr_ih ih; + + /** Pointer where the response for a RAW mode packet + * will be written by OCTEON TX. + */ + uint64_t rptr; + + /* Input Request Header. */ + union otx_ep_instr_irh irh; + + /* Additional headers available in a 64-byte instruction. 
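Commands are copied into the instruction queue with a fixed 64-byte stride (post_iqcmd(), later in this patch, picks the slot as host_write_index << 6), so this layout must stay exactly 64 bytes: 8 B dptr + 8 B ih + 8 B rptr + 8 B irh + 4 x 8 B exhdr. A compile-time guard along these lines would make the invariant explicit; the wrapper function is hypothetical, not part of the patch:

#include <rte_common.h>

/* Hypothetical guard: a 64-byte stride copy only works if the instruction
 * format really is 64 bytes.
 */
static inline void
otx2_ep_instr_size_check(void)
{
        RTE_BUILD_BUG_ON(sizeof(struct otx2_ep_instr_64B) != 64);
}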
*/ + uint64_t exhdr[4]; +}; + #endif /*_OTX2_EP_VF_H_ */ diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h index e3819213dd..978cceab01 100644 --- a/drivers/net/octeontx_ep/otx_ep_common.h +++ b/drivers/net/octeontx_ep/otx_ep_common.h @@ -4,6 +4,10 @@ #ifndef _OTX_EP_COMMON_H_ #define _OTX_EP_COMMON_H_ + +#define OTX_EP_NW_PKT_OP 0x1220 +#define OTX_EP_NW_CMD_OP 0x1221 + #define OTX_EP_MAX_RINGS_PER_VF (8) #define OTX_EP_CFG_IO_QUEUES OTX_EP_MAX_RINGS_PER_VF #define OTX_EP_64BYTE_INSTR (64) @@ -16,9 +20,24 @@ #define OTX_EP_OQ_INFOPTR_MODE (0) #define OTX_EP_OQ_REFIL_THRESHOLD (16) + +/* IQ instruction req types */ +#define OTX_EP_REQTYPE_NONE (0) +#define OTX_EP_REQTYPE_NORESP_INSTR (1) +#define OTX_EP_REQTYPE_NORESP_NET_DIRECT (2) +#define OTX_EP_REQTYPE_NORESP_NET OTX_EP_REQTYPE_NORESP_NET_DIRECT +#define OTX_EP_REQTYPE_NORESP_GATHER (3) +#define OTX_EP_NORESP_OHSM_SEND (4) +#define OTX_EP_NORESP_LAST (4) #define OTX_EP_PCI_RING_ALIGN 65536 #define SDP_PKIND 40 #define SDP_OTX2_PKIND 57 + +#define ORDERED_TAG 0 +#define ATOMIC_TAG 1 +#define NULL_TAG 2 +#define NULL_NULL_TAG 3 + #define OTX_EP_BUSY_LOOP_COUNT (10000) #define OTX_EP_MAX_IOQS_PER_VF 8 @@ -450,7 +469,39 @@ int otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs, unsigned int socket_id); int otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no); +struct otx_ep_sg_entry { + /** The first 64 bit gives the size of data in each dptr. */ + union { + uint16_t size[4]; + uint64_t size64; + } u; + + /** The 4 dptr pointers for this entry. */ + uint64_t ptr[4]; +}; + +#define OTX_EP_SG_ENTRY_SIZE (sizeof(struct otx_ep_sg_entry)) + +/** Structure of a node in list of gather components maintained by + * driver for each network device. + */ +struct otx_ep_gather { + /** number of gather entries. */ + int num_sg; + + /** Gather component that can accommodate max sized fragment list + * received from the IP layer. 
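Each otx_ep_sg_entry packs four buffer pointers plus four 16-bit lengths, so the transmit path later in this patch allocates (nb_segs + 3) / 4 entries per multi-segment mbuf, and set_sg_size() mirrors the length index on little-endian hosts so the sizes land where the device expects them inside the 64-bit size word. A worked example with a hypothetical 6-segment packet:

/* Worked example: a 6-segment mbuf needs two gather entries.
 *   num_sg = (6 + 3) / 4 = 2
 *   sg[0].ptr[0..3] <- IOVAs of segments 0..3
 *   sg[1].ptr[0..1] <- IOVAs of segments 4..5 (remaining slots unused)
 * Segment j lands in entry (j >> 2), pointer slot (j & 3), exactly as the
 * xmit loop computes.
 */
int nb_segs = 6;
int num_sg = (nb_segs + 3) / 4;         /* == 2 */
int j = 5;                              /* last segment...        */
int entry = j >> 2;                     /* ...goes into sg[1]...  */
int slot = j & 3;                       /* ...at pointer slot 1   */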
+ */ + struct otx_ep_sg_entry *sg; +}; + +struct otx_ep_buf_free_info { + struct rte_mbuf *mbuf; + struct otx_ep_gather g; +}; + #define OTX_EP_MAX_PKT_SZ 64000U #define OTX_EP_MAX_MAC_ADDRS 1 +#define OTX_EP_SG_ALIGN 8 #endif /* _OTX_EP_COMMON_H_ */ diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c index 9ec8bc52c9..d8da457d86 100644 --- a/drivers/net/octeontx_ep/otx_ep_ethdev.c +++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c @@ -148,6 +148,11 @@ otx_epdev_init(struct otx_ep_device *otx_epvf) } otx_epvf->eth_dev->rx_pkt_burst = &otx_ep_recv_pkts; + if (otx_epvf->chip_id == PCI_DEVID_OCTEONTX_EP_VF) + otx_epvf->eth_dev->tx_pkt_burst = &otx_ep_xmit_pkts; + else if (otx_epvf->chip_id == PCI_DEVID_OCTEONTX2_EP_NET_VF || + otx_epvf->chip_id == PCI_DEVID_98XX_EP_NET_VF) + otx_epvf->eth_dev->tx_pkt_burst = &otx2_ep_xmit_pkts; ethdev_queues = (uint32_t)(otx_epvf->sriov_info.rings_per_vf); otx_epvf->max_rx_queues = ethdev_queues; otx_epvf->max_tx_queues = ethdev_queues; diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c index 3e77d579c2..4ffe0b8546 100644 --- a/drivers/net/octeontx_ep/otx_ep_rxtx.c +++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c @@ -128,8 +128,6 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs, iq->flush_index = 0; iq->instr_pending = 0; - - otx_ep->io_qmask.iq |= (1ull << iq_no); /* Set 32B/64B mode for each input queue */ @@ -360,6 +358,452 @@ otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs, return -ENOMEM; } +static inline void +otx_ep_iqreq_delete(struct otx_ep_instr_queue *iq, uint32_t idx) +{ + uint32_t reqtype; + void *buf; + struct otx_ep_buf_free_info *finfo; + + buf = iq->req_list[idx].buf; + reqtype = iq->req_list[idx].reqtype; + + switch (reqtype) { + case OTX_EP_REQTYPE_NORESP_NET: + rte_pktmbuf_free((struct rte_mbuf *)buf); + otx_ep_dbg("IQ buffer freed at idx[%d]\n", idx); + break; + + case OTX_EP_REQTYPE_NORESP_GATHER: + finfo = (struct otx_ep_buf_free_info *)buf; + /* This will take care of multiple segments also */ + rte_pktmbuf_free(finfo->mbuf); + rte_free(finfo->g.sg); + rte_free(finfo); + break; + + case OTX_EP_REQTYPE_NONE: + default: + otx_ep_info("This iqreq mode is not supported:%d\n", reqtype); + } + + /* Reset the request list at this index */ + iq->req_list[idx].buf = NULL; + iq->req_list[idx].reqtype = 0; +} + +static inline void +otx_ep_iqreq_add(struct otx_ep_instr_queue *iq, void *buf, + uint32_t reqtype, int index) +{ + iq->req_list[index].buf = buf; + iq->req_list[index].reqtype = reqtype; + + /*otx_ep_dbg("IQ buffer added at idx[%d]\n", iq->host_write_index);*/ +} + +static uint32_t +otx_vf_update_read_index(struct otx_ep_instr_queue *iq) +{ + uint32_t new_idx = rte_read32(iq->inst_cnt_reg); + if (unlikely(new_idx == 0xFFFFFFFFU)) { + /*otx2_sdp_dbg("%s Going to reset IQ index\n", __func__);*/ + rte_write32(new_idx, iq->inst_cnt_reg); + } + /* Modulo of the new index with the IQ size will give us + * the new index. 
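The masking that immediately follows (new_idx &= iq->nb_desc - 1) is only equivalent to a modulo when the ring size is a power of two; the otx2 variant added later in this series uses a plain % instead. A small helper that makes the assumption explicit might look like this (name and shape are illustrative only):

/* Illustrative: reduce a raw hardware instruction count to a ring slot.
 * The '&' fast path is valid only for power-of-two ring sizes.
 */
static inline uint32_t
otx_ep_count_to_slot(uint32_t hw_count, uint32_t nb_desc)
{
        if ((nb_desc & (nb_desc - 1)) == 0)     /* power of two */
                return hw_count & (nb_desc - 1);
        return hw_count % nb_desc;
}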
+ */ + new_idx &= (iq->nb_desc - 1); + + return new_idx; +} + +static void +otx_ep_flush_iq(struct otx_ep_instr_queue *iq) +{ + uint32_t instr_processed = 0; + + iq->otx_read_index = otx_vf_update_read_index(iq); + while (iq->flush_index != iq->otx_read_index) { + /* Free the IQ data buffer to the pool */ + otx_ep_iqreq_delete(iq, iq->flush_index); + iq->flush_index = + otx_ep_incr_index(iq->flush_index, 1, iq->nb_desc); + + instr_processed++; + } + + iq->stats.instr_processed = instr_processed; + iq->instr_pending -= instr_processed; +} + +static inline void +otx_ep_ring_doorbell(struct otx_ep_device *otx_ep __rte_unused, + struct otx_ep_instr_queue *iq) +{ + rte_wmb(); + rte_write64(iq->fill_cnt, iq->doorbell_reg); + iq->fill_cnt = 0; +} + +static inline int +post_iqcmd(struct otx_ep_instr_queue *iq, uint8_t *iqcmd) +{ + uint8_t *iqptr, cmdsize; + + /* This ensures that the read index does not wrap around to + * the same position if queue gets full before OCTEON TX2 could + * fetch any instr. + */ + if (iq->instr_pending > (iq->nb_desc - 1)) + return OTX_EP_IQ_SEND_FAILED; + + /* Copy cmd into iq */ + cmdsize = 64; + iqptr = iq->base_addr + (iq->host_write_index << 6); + + rte_memcpy(iqptr, iqcmd, cmdsize); + + /* Increment the host write index */ + iq->host_write_index = + otx_ep_incr_index(iq->host_write_index, 1, iq->nb_desc); + + iq->fill_cnt++; + + /* Flush the command into memory. We need to be sure the data + * is in memory before indicating that the instruction is + * pending. + */ + iq->instr_pending++; + /* OTX_EP_IQ_SEND_SUCCESS */ + return 0; +} + + +static int +otx_ep_send_data(struct otx_ep_device *otx_ep, struct otx_ep_instr_queue *iq, + void *cmd, int dbell) +{ + uint32_t ret; + + /* Submit IQ command */ + ret = post_iqcmd(iq, cmd); + + if (ret == OTX_EP_IQ_SEND_SUCCESS) { + if (dbell) + otx_ep_ring_doorbell(otx_ep, iq); + iq->stats.instr_posted++; + + } else { + iq->stats.instr_dropped++; + if (iq->fill_cnt) + otx_ep_ring_doorbell(otx_ep, iq); + } + return ret; +} + +static inline void +set_sg_size(struct otx_ep_sg_entry *sg_entry, uint16_t size, uint32_t pos) +{ +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + sg_entry->u.size[pos] = size; +#elif RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN + sg_entry->u.size[3 - pos] = size; +#endif +} + +/* Enqueue requests/packets to OTX_EP IQ queue. 
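otx_ep_send_data() only rings the doorbell when asked to, which lets the burst functions below batch one doorbell write per call instead of one per packet; on a ring-full failure it flushes whatever was already queued. A reduced caller-side sketch of that contract (command preparation omitted, function name hypothetical):

/* Reduced sketch of the doorbell-batching contract used by the xmit loops:
 * only the last command of a burst requests the doorbell write.
 */
static uint16_t
otx_ep_xmit_sketch(struct otx_ep_device *otx_ep, struct otx_ep_instr_queue *iq,
                   struct otx_ep_instr_64B *cmds, uint16_t nb_pkts)
{
        uint16_t i;

        for (i = 0; i < nb_pkts; i++) {
                int dbell = (i == nb_pkts - 1);

                if (otx_ep_send_data(otx_ep, iq, &cmds[i], dbell))
                        break;  /* ring full; queued commands already flushed */
        }
        return i;       /* commands accepted */
}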
+ * returns number of requests enqueued successfully + */ +uint16_t +otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts) +{ + struct otx_ep_instr_64B iqcmd; + struct otx_ep_instr_queue *iq; + struct otx_ep_device *otx_ep; + struct rte_mbuf *m; + + uint32_t iqreq_type, sgbuf_sz; + int dbell, index, count = 0; + unsigned int pkt_len, i; + int gather, gsz; + void *iqreq_buf; + uint64_t dptr; + + iq = (struct otx_ep_instr_queue *)tx_queue; + otx_ep = iq->otx_ep_dev; + + /* if (!otx_ep->started || !otx_ep->linkup) { + * goto xmit_fail; + * } + */ + + iqcmd.ih.u64 = 0; + iqcmd.pki_ih3.u64 = 0; + iqcmd.irh.u64 = 0; + + /* ih invars */ + iqcmd.ih.s.fsz = OTX_EP_FSZ; + iqcmd.ih.s.pkind = otx_ep->pkind; /* The SDK decided PKIND value */ + + /* pki ih3 invars */ + iqcmd.pki_ih3.s.w = 1; + iqcmd.pki_ih3.s.utt = 1; + iqcmd.pki_ih3.s.tagtype = ORDERED_TAG; + /* sl will be sizeof(pki_ih3) */ + iqcmd.pki_ih3.s.sl = OTX_EP_FSZ + OTX_CUST_DATA_LEN; + + /* irh invars */ + iqcmd.irh.s.opcode = OTX_EP_NW_PKT_OP; + + for (i = 0; i < nb_pkts; i++) { + m = pkts[i]; + if (m->nb_segs == 1) { + /* dptr */ + dptr = rte_mbuf_data_iova(m); + pkt_len = rte_pktmbuf_data_len(m); + iqreq_buf = m; + iqreq_type = OTX_EP_REQTYPE_NORESP_NET; + gather = 0; + gsz = 0; + } else { + struct otx_ep_buf_free_info *finfo; + int j, frags, num_sg; + + if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)) + goto xmit_fail; + + finfo = (struct otx_ep_buf_free_info *)rte_malloc(NULL, + sizeof(*finfo), 0); + if (finfo == NULL) { + otx_ep_err("free buffer alloc failed\n"); + goto xmit_fail; + } + num_sg = (m->nb_segs + 3) / 4; + sgbuf_sz = sizeof(struct otx_ep_sg_entry) * num_sg; + finfo->g.sg = + rte_zmalloc(NULL, sgbuf_sz, OTX_EP_SG_ALIGN); + if (finfo->g.sg == NULL) { + rte_free(finfo); + otx_ep_err("sg entry alloc failed\n"); + goto xmit_fail; + } + gather = 1; + gsz = m->nb_segs; + finfo->g.num_sg = num_sg; + finfo->g.sg[0].ptr[0] = rte_mbuf_data_iova(m); + set_sg_size(&finfo->g.sg[0], m->data_len, 0); + pkt_len = m->data_len; + finfo->mbuf = m; + + frags = m->nb_segs - 1; + j = 1; + m = m->next; + while (frags--) { + finfo->g.sg[(j >> 2)].ptr[(j & 3)] = + rte_mbuf_data_iova(m); + set_sg_size(&finfo->g.sg[(j >> 2)], + m->data_len, (j & 3)); + pkt_len += m->data_len; + j++; + m = m->next; + } + dptr = rte_mem_virt2iova(finfo->g.sg); + iqreq_buf = finfo; + iqreq_type = OTX_EP_REQTYPE_NORESP_GATHER; + if (pkt_len > OTX_EP_MAX_PKT_SZ) { + rte_free(finfo->g.sg); + rte_free(finfo); + otx_ep_err("failed\n"); + goto xmit_fail; + } + } + /* ih vars */ + iqcmd.ih.s.tlen = pkt_len + iqcmd.ih.s.fsz; + iqcmd.ih.s.gather = gather; + iqcmd.ih.s.gsz = gsz; + /* PKI_IH3 vars */ + /* irh vars */ + /* irh.rlenssz = ; */ + + iqcmd.dptr = dptr; + /* Swap FSZ(front data) here, to avoid swapping on + * OCTEON TX side rprt is not used so not swapping + */ + /* otx_ep_swap_8B_data(&iqcmd.rptr, 1); */ + otx_ep_swap_8B_data(&iqcmd.irh.u64, 1); + +#ifdef OTX_EP_IO_DEBUG + otx_ep_dbg("After swapping\n"); + otx_ep_dbg("Word0 [dptr]: 0x%016lx\n", + (unsigned long)iqcmd.dptr); + otx_ep_dbg("Word1 [ihtx]: 0x%016lx\n", (unsigned long)iqcmd.ih); + otx_ep_dbg("Word2 [pki_ih3]: 0x%016lx\n", + (unsigned long)iqcmd.pki_ih3); + otx_ep_dbg("Word3 [rptr]: 0x%016lx\n", + (unsigned long)iqcmd.rptr); + otx_ep_dbg("Word4 [irh]: 0x%016lx\n", (unsigned long)iqcmd.irh); + otx_ep_dbg("Word5 [exhdr[0]]: 0x%016lx\n", + (unsigned long)iqcmd.exhdr[0]); + rte_pktmbuf_dump(stdout, m, rte_pktmbuf_pkt_len(m)); +#endif + dbell = (i == (unsigned int)(nb_pkts - 1)) ? 
1 : 0; + index = iq->host_write_index; + if (otx_ep_send_data(otx_ep, iq, &iqcmd, dbell)) + goto xmit_fail; + otx_ep_iqreq_add(iq, iqreq_buf, iqreq_type, index); + iq->stats.tx_pkts++; + iq->stats.tx_bytes += pkt_len; + count++; + } + +xmit_fail: + if (iq->instr_pending >= OTX_EP_MAX_INSTR) + otx_ep_flush_iq(iq); + + /* Return no# of instructions posted successfully. */ + return count; +} + +/* Enqueue requests/packets to OTX_EP IQ queue. + * returns number of requests enqueued successfully + */ +uint16_t +otx2_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts) +{ + struct otx2_ep_instr_64B iqcmd2; + struct otx_ep_instr_queue *iq; + struct otx_ep_device *otx_ep; + uint64_t dptr; + int count = 0; + unsigned int i; + struct rte_mbuf *m; + unsigned int pkt_len; + void *iqreq_buf; + uint32_t iqreq_type, sgbuf_sz; + int gather, gsz; + int dbell; + int index; + + iq = (struct otx_ep_instr_queue *)tx_queue; + otx_ep = iq->otx_ep_dev; + + iqcmd2.ih.u64 = 0; + iqcmd2.irh.u64 = 0; + + /* ih invars */ + iqcmd2.ih.s.fsz = OTX2_EP_FSZ; + iqcmd2.ih.s.pkind = otx_ep->pkind; /* The SDK decided PKIND value */ + /* irh invars */ + iqcmd2.irh.s.opcode = OTX_EP_NW_PKT_OP; + + for (i = 0; i < nb_pkts; i++) { + m = pkts[i]; + if (m->nb_segs == 1) { + /* dptr */ + dptr = rte_mbuf_data_iova(m); + pkt_len = rte_pktmbuf_data_len(m); + iqreq_buf = m; + iqreq_type = OTX_EP_REQTYPE_NORESP_NET; + gather = 0; + gsz = 0; + } else { + struct otx_ep_buf_free_info *finfo; + int j, frags, num_sg; + + if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)) + goto xmit_fail; + + finfo = (struct otx_ep_buf_free_info *) + rte_malloc(NULL, sizeof(*finfo), 0); + if (finfo == NULL) { + otx_ep_err("free buffer alloc failed\n"); + goto xmit_fail; + } + num_sg = (m->nb_segs + 3) / 4; + sgbuf_sz = sizeof(struct otx_ep_sg_entry) * num_sg; + finfo->g.sg = + rte_zmalloc(NULL, sgbuf_sz, OTX_EP_SG_ALIGN); + if (finfo->g.sg == NULL) { + rte_free(finfo); + otx_ep_err("sg entry alloc failed\n"); + goto xmit_fail; + } + gather = 1; + gsz = m->nb_segs; + finfo->g.num_sg = num_sg; + finfo->g.sg[0].ptr[0] = rte_mbuf_data_iova(m); + set_sg_size(&finfo->g.sg[0], m->data_len, 0); + pkt_len = m->data_len; + finfo->mbuf = m; + + frags = m->nb_segs - 1; + j = 1; + m = m->next; + while (frags--) { + finfo->g.sg[(j >> 2)].ptr[(j & 3)] = + rte_mbuf_data_iova(m); + set_sg_size(&finfo->g.sg[(j >> 2)], + m->data_len, (j & 3)); + pkt_len += m->data_len; + j++; + m = m->next; + } + dptr = rte_mem_virt2iova(finfo->g.sg); + iqreq_buf = finfo; + iqreq_type = OTX_EP_REQTYPE_NORESP_GATHER; + if (pkt_len > OTX_EP_MAX_PKT_SZ) { + rte_free(finfo->g.sg); + rte_free(finfo); + otx_ep_err("failed\n"); + goto xmit_fail; + } + } + /* ih vars */ + iqcmd2.ih.s.tlen = pkt_len + iqcmd2.ih.s.fsz; + iqcmd2.ih.s.gather = gather; + iqcmd2.ih.s.gsz = gsz; + /* irh vars */ + /* irh.rlenssz = ; */ + iqcmd2.dptr = dptr; + /* Swap FSZ(front data) here, to avoid swapping on + * OCTEON TX side rptr is not used so not swapping. 
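The only per-packet byte swapping is on the front-data words, done through otx_ep_swap_8B_data() (added to otx_ep_rxtx.h in this patch), which applies rte_bswap64() to each 64-bit word in place. Its effect on two arbitrary values, for reference:

uint64_t w[2] = { 0x0102030405060708ull, 0x1112131415161718ull };

otx_ep_swap_8B_data(w, 2);
/* w[0] == 0x0807060504030201, w[1] == 0x1817161514131211 */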
+ */ + /* otx_ep_swap_8B_data(&iqcmd2.rptr, 1); */ + otx_ep_swap_8B_data(&iqcmd2.irh.u64, 1); + +#ifdef OTX_EP_IO_DEBUG + otx_ep_dbg("After swapping\n"); + otx_ep_dbg("Word0 [dptr]: 0x%016lx\n", + (unsigned long)iqcmd.dptr); + otx_ep_dbg("Word1 [ihtx]: 0x%016lx\n", (unsigned long)iqcmd.ih); + otx_ep_dbg("Word2 [pki_ih3]: 0x%016lx\n", + (unsigned long)iqcmd.pki_ih3); + otx_ep_dbg("Word3 [rptr]: 0x%016lx\n", + (unsigned long)iqcmd.rptr); + otx_ep_dbg("Word4 [irh]: 0x%016lx\n", (unsigned long)iqcmd.irh); + otx_ep_dbg("Word5 [exhdr[0]]: 0x%016lx\n", + (unsigned long)iqcmd.exhdr[0]); +#endif + /* rte_pktmbuf_dump(stdout, m, rte_pktmbuf_pkt_len(m)); */ + index = iq->host_write_index; + dbell = (i == (unsigned int)(nb_pkts - 1)) ? 1 : 0; + if (otx_ep_send_data(otx_ep, iq, &iqcmd2, dbell)) + goto xmit_fail; + otx_ep_iqreq_add(iq, iqreq_buf, iqreq_type, index); + iq->stats.tx_pkts++; + iq->stats.tx_bytes += pkt_len; + count++; + } + +xmit_fail: + if (iq->instr_pending >= OTX_EP_MAX_INSTR) + otx_ep_flush_iq(iq); + + /* Return no# of instructions posted successfully. */ + return count; +} + static uint32_t otx_ep_droq_refill(struct otx_ep_droq *droq) { diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.h b/drivers/net/octeontx_ep/otx_ep_rxtx.h index 602534cf0b..78c7fe27e9 100644 --- a/drivers/net/octeontx_ep/otx_ep_rxtx.h +++ b/drivers/net/octeontx_ep/otx_ep_rxtx.h @@ -5,9 +5,31 @@ #ifndef _OTX_EP_RXTX_H_ #define _OTX_EP_RXTX_H_ +#include + #define OTX_EP_RXD_ALIGN 1 #define OTX_EP_TXD_ALIGN 1 + +#define OTX_EP_IQ_SEND_FAILED (-1) +#define OTX_EP_IQ_SEND_SUCCESS (0) + #define OTX_EP_MAX_DELAYED_PKT_RETRIES 10000 + +#define OTX_EP_FSZ 28 +#define OTX2_EP_FSZ 24 +#define OTX_EP_MAX_INSTR 16 + +static inline void +otx_ep_swap_8B_data(uint64_t *data, uint32_t blocks) +{ + /* Swap 8B blocks */ + while (blocks) { + *data = rte_bswap64(*data); + blocks--; + data++; + } +} + static inline uint32_t otx_ep_incr_index(uint32_t index, uint32_t count, uint32_t max) { @@ -19,6 +41,10 @@ otx_ep_incr_index(uint32_t index, uint32_t count, uint32_t max) return index; } uint16_t +otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts); +uint16_t +otx2_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts); +uint16_t otx_ep_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t budget); diff --git a/drivers/net/octeontx_ep/otx_ep_vf.h b/drivers/net/octeontx_ep/otx_ep_vf.h index d2128712aa..f91251865b 100644 --- a/drivers/net/octeontx_ep/otx_ep_vf.h +++ b/drivers/net/octeontx_ep/otx_ep_vf.h @@ -100,6 +100,74 @@ */ #define SDP_GBL_WMARK 0x100 + +/* Optional PKI Instruction Header(PKI IH) */ +typedef union { + uint64_t u64; + struct { + /** Tag Value */ + uint64_t tag:32; + + /** QPG Value */ + uint64_t qpg:11; + + /** Reserved1 */ + uint64_t reserved1:2; + + /** Tag type */ + uint64_t tagtype:2; + + /** Use Tag Type */ + uint64_t utt:1; + + /** Skip Length */ + uint64_t sl:8; + + /** Parse Mode */ + uint64_t pm:3; + + /** Reserved2 */ + uint64_t reserved2:1; + + /** Use QPG */ + uint64_t uqpg:1; + + /** Use Tag */ + uint64_t utag:1; + + /** Raw mode indicator 1 = RAW */ + uint64_t raw:1; + + /** Wider bit */ + uint64_t w:1; + } s; +} otx_ep_instr_pki_ih3_t; + + +/* OTX_EP 64B instruction format */ +struct otx_ep_instr_64B { + /* Pointer where the input data is available. */ + uint64_t dptr; + + /* OTX_EP Instruction Header. */ + union otx_ep_instr_ih ih; + + /* PKI Optional Instruction Header. 
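The PKI_IH3 word above is laid out as twelve bit-fields that are expected to tile exactly one 64-bit word (32+11+2+2+1+8+3+1+1+1+1+1 = 64). A hypothetical compile-time guard for that packing assumption, not part of the patch:

#include <rte_common.h>

static inline void
otx_ep_pki_ih3_size_check(void)
{
        RTE_BUILD_BUG_ON(sizeof(otx_ep_instr_pki_ih3_t) != sizeof(uint64_t));
}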
*/ + otx_ep_instr_pki_ih3_t pki_ih3; + + /** Pointer where the response for a RAW mode packet + * will be written by OCTEON TX. + */ + uint64_t rptr; + + /* Input Request Header. */ + union otx_ep_instr_irh irh; + + /* Additional headers available in a 64-byte instruction. */ + uint64_t exhdr[3]; +}; +#define OTX_EP_64B_INSTR_SIZE (sizeof(otx_ep_instr_64B)) + int otx_ep_vf_setup_device(struct otx_ep_device *otx_ep); #endif /*_OTX_EP_VF_H_ */ From patchwork Thu Dec 31 07:22:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pradeep Nalla X-Patchwork-Id: 85921 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (xvm-189-124.dc0.ghst.net [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id DFFF5A0A00; Thu, 31 Dec 2020 08:24:19 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 71649140D1E; Thu, 31 Dec 2020 08:23:09 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id B3A20140CEC for ; Thu, 31 Dec 2020 08:23:00 +0100 (CET) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 0BV7G0IN022182 for ; Wed, 30 Dec 2020 23:22:59 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=Z51CNQwmNsdNzopUOCF7wMeGef+w+2YAtWW7EguuFTI=; b=j/aaTvkfk+2QKk9ZmK9W7qeRnH2Yn8H2ierLC5QWj547ru7QoKeao/xrICCnuM9icAnN wvsIZvu3gqb1IPoxr3mW1DDw7JPSHYzsAM2jgOgGxq9hyQuCOASftnAikpaSN7JVRZsb zgmcHH7LVoTdn1AUCzzLuv0XqaolLQLkJhLuhRa+qW8AMrbdl9glZjdz1Kiy/V83QP8B xY8ya43dYIeN46jfXd/DZzma4SS9ve5t7MjTcGqYuDKbLw04e2PrXDcO103txXXw9zqX fXx7V+Og4gVmqGG2+0KRd89A2QaR9ordoGAehtIt9lltT1Bpg+yHg6s8tWDVDGRiPDvQ 6w== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com with ESMTP id 35rqgehx58-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Wed, 30 Dec 2020 23:22:59 -0800 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 30 Dec 2020 23:22:58 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 30 Dec 2020 23:22:58 -0800 Received: from localhost.localdomain (unknown [10.111.145.157]) by maili.marvell.com (Postfix) with ESMTP id 3968C3F7041; Wed, 30 Dec 2020 23:22:58 -0800 (PST) From: "Nalla, Pradeep" To: "Nalla, Pradeep" , Radha Mohan Chintakuntla , Veerasenareddy Burru CC: , , Date: Thu, 31 Dec 2020 07:22:44 +0000 Message-ID: <20201231072247.5719-13-pnalla@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201231072247.5719-1-pnalla@marvell.com> References: <20201231072247.5719-1-pnalla@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.343, 18.0.737 definitions=2020-12-31_02:2020-12-30, 2020-12-31 signatures=0 Subject: [dpdk-dev] [PATCH 12/15] net/octeontx_ep: INFO PTR mode support added. 
X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: "Nalla Pradeep" Hardware can be programmed to write the meta data of incoming packet in the same buffer it uses to fill the packet(BUF PTR mode) or a different buffer (INFO PTR mode). Signed-off-by: Nalla Pradeep --- drivers/net/octeontx_ep/meson.build | 2 +- drivers/net/octeontx_ep/otx_ep_common.h | 8 ++++ drivers/net/octeontx_ep/otx_ep_rxtx.c | 55 ++++++++++++++++++++++++- 3 files changed, 63 insertions(+), 2 deletions(-) diff --git a/drivers/net/octeontx_ep/meson.build b/drivers/net/octeontx_ep/meson.build index 8d804a0398..08e8131bfe 100644 --- a/drivers/net/octeontx_ep/meson.build +++ b/drivers/net/octeontx_ep/meson.build @@ -9,7 +9,7 @@ sources = files( 'otx_ep_rxtx.c', ) -extra_flags = [] +extra_flags = ['-DBUFPTR_ONLY_MODE'] # This integrated controller runs only on a arm64 machine, remove 32bit warnings if not dpdk_conf.get('RTE_ARCH_64') extra_flags += ['-Wno-int-to-pointer-cast', '-Wno-pointer-to-int-cast'] diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h index 978cceab01..0b6e7e2042 100644 --- a/drivers/net/octeontx_ep/otx_ep_common.h +++ b/drivers/net/octeontx_ep/otx_ep_common.h @@ -239,11 +239,19 @@ union otx_ep_rh { * about the packet. */ struct otx_ep_droq_info { +#ifndef BUFPTR_ONLY_MODE + /* The Output Receive Header. */ + union otx_ep_rh rh; + + /* The Length of the packet. */ + uint64_t length; +#else /* The Length of the packet. */ uint64_t length; /* The Output Receive Header. */ union otx_ep_rh rh; +#endif }; #define OTX_EP_DROQ_INFO_SIZE (sizeof(struct otx_ep_droq_info)) diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c index 4ffe0b8546..279ab9f6d6 100644 --- a/drivers/net/octeontx_ep/otx_ep_rxtx.c +++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c @@ -215,6 +215,13 @@ otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no) rte_free(droq->recv_buf_list); droq->recv_buf_list = NULL; +#ifndef BUFPTR_ONLY_MODE + if (droq->info_mz) { + otx_ep_dmazone_free(droq->info_mz); + droq->info_mz = NULL; + } +#endif + if (droq->desc_ring_mz) { otx_ep_dmazone_free(droq->desc_ring_mz); droq->desc_ring_mz = NULL; @@ -249,6 +256,13 @@ otx_ep_droq_setup_ring_buffers(struct otx_ep_droq *droq) } droq->recv_buf_list[idx] = buf; +#ifndef BUFPTR_ONLY_MODE + droq->info_list[idx].length = 0; + + /* Map ring buffers into memory */ + desc_ring[idx].info_ptr = (uint64_t)(droq->info_list_dma + + (idx * OTX_EP_DROQ_INFO_SIZE)); +#endif info = rte_pktmbuf_mtod(buf, struct otx_ep_droq_info *); memset(info, 0, sizeof(*info)); desc_ring[idx].buffer_ptr = rte_mbuf_data_iova_default(buf); @@ -259,6 +273,28 @@ otx_ep_droq_setup_ring_buffers(struct otx_ep_droq *droq) return 0; } +#ifndef BUFPTR_ONLY_MODE +static void * +otx_ep_alloc_info_buffer(struct otx_ep_device *otx_ep __rte_unused, + struct otx_ep_droq *droq, unsigned int socket_id) +{ + droq->info_mz = rte_memzone_reserve_aligned("OQ_info_list", + (droq->nb_desc * OTX_EP_DROQ_INFO_SIZE), + socket_id, + RTE_MEMZONE_IOVA_CONTIG, + OTX_EP_PCI_RING_ALIGN); + + if (droq->info_mz == NULL) + return NULL; + + droq->info_list_dma = droq->info_mz->iova; + droq->info_alloc_size = droq->info_mz->len; + droq->info_base_addr = (size_t)droq->info_mz->addr; + + return droq->info_mz->addr; +} +#endif + /* OQ initialization */ static int 
otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no, @@ -301,6 +337,16 @@ otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no, q_no, droq->desc_ring, (unsigned long)droq->desc_ring_dma); otx_ep_dbg("OQ[%d]: num_desc: %d\n", q_no, droq->nb_desc); +#ifndef BUFPTR_ONLY_MODE + /* OQ info_list set up */ + droq->info_list = otx_ep_alloc_info_buffer(otx_ep, droq, socket_id); + if (droq->info_list == NULL) { + otx_ep_err("memory allocation failed for OQ[%d] info_list\n", + q_no); + goto init_droq_fail; + } + +#endif /* OQ buf_list set up */ droq->recv_buf_list = rte_zmalloc_socket("recv_buf_list", (droq->nb_desc * sizeof(struct rte_mbuf *)), @@ -836,7 +882,10 @@ otx_ep_droq_refill(struct otx_ep_droq *droq) desc_ring[droq->refill_idx].buffer_ptr = rte_mbuf_data_iova_default(buf); - +#ifndef BUFPTR_ONLY_MODE + /* Reset any previous values in the length field. */ + droq->info_list[droq->refill_idx].length = 0; +#endif droq->refill_idx = otx_ep_incr_index(droq->refill_idx, 1, droq->nb_desc); @@ -862,6 +911,9 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, droq_pkt = droq->recv_buf_list[droq->read_idx]; droq_pkt2 = droq->recv_buf_list[droq->read_idx]; +#ifndef BUFPTR_ONLY_MODE + info = &droq->info_list[droq->read_idx]; +#else info = rte_pktmbuf_mtod(droq_pkt, struct otx_ep_droq_info *); /* make sure info is available */ rte_rmb(); @@ -893,6 +945,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, info2 = rte_pktmbuf_mtod(droq_pkt2, struct otx_ep_droq_info *); rte_prefetch_non_temporal((const void *)info2); } +#endif info->length = rte_bswap64(info->length); /* Deduce the actual data size */ From patchwork Thu Dec 31 07:22:45 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pradeep Nalla X-Patchwork-Id: 85923 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (xvm-189-124.dc0.ghst.net [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A6EFDA0A00; Thu, 31 Dec 2020 08:24:42 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E61AC140D2D; Thu, 31 Dec 2020 08:23:11 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 16486140CF1 for ; Thu, 31 Dec 2020 08:23:00 +0100 (CET) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 0BV7LGmS010253 for ; Wed, 30 Dec 2020 23:23:00 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=c+s2b3uqKgOXLYGxlBI3/RO/oWJvuOkSWi0NZmHiozk=; b=SRIMVHi+PJPdZDMv9UpmirqTfrm43DiPY0BMNzFbU/uya6wXm7cosE8HNIEPdHHi4UXB 37R9VrtoQoMoVlghtdg33DhGBr2mXj3J+NWqa6OUNZ/+BZTDNEJvQaPpzbn9iV654afq TA396wmNr43er0y3cmHlL81hSwRO2gglPYgp7f7mFySDyMuaQRkf6j0XFSOLWyyqmU/d dMpRk+asvzn2VaoU8Z6yIFX5ix1l4dfyK9HiJ0xnDwqxLGVPM8ctdQU4F0mTIJvnTAGA LPsXVkhYHSF/rB+QvsiKIoViVjNiRFxHTrhK4uv24w/r/nRT1wEyc3nuARWU11Ge6t0h TA== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com with ESMTP id 35s80806ff-3 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Wed, 30 Dec 2020 23:23:00 -0800 Received: from SC-EXCH01.marvell.com (10.93.176.81) by DC5-EXCH02.marvell.com (10.69.176.39) with 
Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 30 Dec 2020 23:22:59 -0800 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 30 Dec 2020 23:22:58 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 30 Dec 2020 23:22:58 -0800 Received: from localhost.localdomain (unknown [10.111.145.157]) by maili.marvell.com (Postfix) with ESMTP id 7EDAF3F703F; Wed, 30 Dec 2020 23:22:58 -0800 (PST) From: "Nalla, Pradeep" To: "Nalla, Pradeep" , Radha Mohan Chintakuntla , Veerasenareddy Burru CC: , , Date: Thu, 31 Dec 2020 07:22:45 +0000 Message-ID: <20201231072247.5719-14-pnalla@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201231072247.5719-1-pnalla@marvell.com> References: <20201231072247.5719-1-pnalla@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.343, 18.0.737 definitions=2020-12-31_02:2020-12-30, 2020-12-31 signatures=0 Subject: [dpdk-dev] [PATCH 13/15] net/octeontx_ep: stats get/reset and link update X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: "Nalla Pradeep" Added stats get, stats reset and link update operations. Following stats are reported currently 1. ibytes, ipackets and ierrors. 2. obytes, opackets and oerrors. Signed-off-by: Nalla Pradeep --- drivers/net/octeontx_ep/otx_ep_ethdev.c | 84 +++++++++++++++++++++++++ 1 file changed, 84 insertions(+) diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c index d8da457d86..1739bae765 100644 --- a/drivers/net/octeontx_ep/otx_ep_ethdev.c +++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c @@ -60,6 +60,27 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev, return 0; } +static int +otx_ep_dev_link_update(struct rte_eth_dev *eth_dev, + int wait_to_complete __rte_unused) +{ + struct otx_ep_device *otx_epvf; + struct rte_eth_link link; + + otx_epvf = (struct otx_ep_device *)OTX_EP_DEV(eth_dev); + memset(&link, 0, sizeof(link)); + link.link_status = ETH_LINK_DOWN; + link.link_speed = ETH_SPEED_NUM_NONE; + link.link_duplex = ETH_LINK_HALF_DUPLEX; + link.link_autoneg = ETH_LINK_AUTONEG; + if (otx_epvf->linkup) { + link.link_status = ETH_LINK_UP; + link.link_speed = ETH_SPEED_NUM_10G; + link.link_duplex = ETH_LINK_FULL_DUPLEX; + } + return rte_eth_linkstatus_set(eth_dev, &link); +} + static int otx_ep_dev_start(struct rte_eth_dev *eth_dev) { @@ -348,6 +369,66 @@ otx_ep_tx_queue_release(void *txq) otx_ep_delete_iqs(tq->otx_ep_dev, tq->q_no); } +static int +otx_ep_dev_stats_get(struct rte_eth_dev *eth_dev, + struct rte_eth_stats *stats) +{ + struct otx_ep_device *otx_epvf = OTX_EP_DEV(eth_dev); + struct otx_ep_instr_queue *iq; + struct otx_ep_droq *droq; + int i; + uint64_t bytes = 0; + uint64_t pkts = 0; + uint64_t drop = 0; + + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) { + iq = otx_epvf->instr_queue[i]; + pkts += iq->stats.tx_pkts; + bytes += iq->stats.tx_bytes; + drop += iq->stats.instr_dropped; + } + stats->opackets = pkts; + stats->obytes = bytes; + stats->oerrors = drop; + + pkts = 0; + drop = 0; + bytes = 0; + + for (i = 0; i < eth_dev->data->nb_rx_queues; i++) { + droq = otx_epvf->droq[i]; + pkts += droq->stats.pkts_received; + bytes += 
droq->stats.bytes_received; + drop += droq->stats.rx_alloc_failure + droq->stats.rx_err; + } + stats->ibytes = bytes; + stats->ipackets = pkts; + stats->ierrors = drop; + + return 0; +} + +static int +otx_ep_dev_stats_reset(struct rte_eth_dev *eth_dev) +{ + struct otx_ep_device *otx_epvf = OTX_EP_DEV(eth_dev); + struct otx_ep_instr_queue *iq; + struct otx_ep_droq *droq; + int i; + + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) { + iq = otx_epvf->instr_queue[i]; + iq->stats.tx_pkts = 0; + iq->stats.tx_bytes = 0; + } + for (i = 0; i < eth_dev->data->nb_rx_queues; i++) { + droq = otx_epvf->droq[i]; + droq->stats.pkts_received = 0; + droq->stats.bytes_received = 0; + } + return 0; +} + /* Define our ethernet definitions */ static const struct eth_dev_ops otx_ep_eth_dev_ops = { .dev_configure = otx_ep_dev_configure, @@ -357,6 +438,9 @@ static const struct eth_dev_ops otx_ep_eth_dev_ops = { .rx_queue_release = otx_ep_rx_queue_release, .tx_queue_setup = otx_ep_tx_queue_setup, .tx_queue_release = otx_ep_tx_queue_release, + .link_update = otx_ep_dev_link_update, + .stats_get = otx_ep_dev_stats_get, + .stats_reset = otx_ep_dev_stats_reset, .dev_infos_get = otx_ep_dev_info_get, }; From patchwork Thu Dec 31 07:22:46 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pradeep Nalla X-Patchwork-Id: 85927 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (xvm-189-124.dc0.ghst.net [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0E811A0A00; Thu, 31 Dec 2020 08:25:36 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D82DA140D56; Thu, 31 Dec 2020 08:23:17 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 7EB5A140CEE for ; Thu, 31 Dec 2020 08:23:01 +0100 (CET) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 0BV7G0IQ022182 for ; Wed, 30 Dec 2020 23:23:00 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=JWM7Ibp+hGXAZFYkQ0+b2Y7eUz+J7TGFPqbK5FOnHzI=; b=gdDraC++5v9IwCvbCaBNLP6birfwYkvUZVh6PpIEtJhtxyrv6qekRDIES69ej+5vIvot vC5i0buKK6mr5TgljVSwZIkHc9jmDC47x380enEgYUvXGsNEBuzB2911Jil61xoLZfEj IoDF1SdWcMobCdCNEgCuIB+m6HuMXkWUSE+xUa+ab6LEmyb4uXmJbdyz1TpLVp2h+Gkc nuPa7jnHoVxSwTMjB4hwuVzQp7iYWFe3tJQuctUC0ZLX0Q334aWdKK1JmmnoBLS74MXB dIsO7VvobeViATOBzel/k5miEw2bMicbt42uJp+hFH/nT8ZuNsA/yj1QUgLMtF8JKU3n 3Q== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com with ESMTP id 35rqgehx58-4 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Wed, 30 Dec 2020 23:23:00 -0800 Received: from SC-EXCH02.marvell.com (10.93.176.82) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 30 Dec 2020 23:22:59 -0800 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 30 Dec 2020 23:22:58 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 30 Dec 2020 23:22:59 -0800 Received: 
from localhost.localdomain (unknown [10.111.145.157]) by maili.marvell.com (Postfix) with ESMTP id C925C3F7044; Wed, 30 Dec 2020 23:22:58 -0800 (PST) From: "Nalla, Pradeep" To: "Nalla, Pradeep" , Radha Mohan Chintakuntla , Veerasenareddy Burru CC: , , Date: Thu, 31 Dec 2020 07:22:46 +0000 Message-ID: <20201231072247.5719-15-pnalla@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201231072247.5719-1-pnalla@marvell.com> References: <20201231072247.5719-1-pnalla@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.343, 18.0.737 definitions=2020-12-31_02:2020-12-30, 2020-12-31 signatures=0 Subject: [dpdk-dev] [PATCH 14/15] net/octeontx_ep: rx queue interrupt X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: "Nalla Pradeep" Added rx queue interrupt enable and disable operations. These functions are supported on both otx and otx2 platforms. Application can make use of these functions and wait on event at packet reception. Signed-off-by: Nalla Pradeep --- drivers/net/octeontx_ep/otx2_ep_vf.c | 63 ++++++ drivers/net/octeontx_ep/otx2_ep_vf.h | 26 +++ drivers/net/octeontx_ep/otx_ep_common.h | 53 +++++ drivers/net/octeontx_ep/otx_ep_ethdev.c | 255 ++++++++++++++++++++++++ drivers/net/octeontx_ep/otx_ep_vf.c | 66 ++++++ drivers/net/octeontx_ep/otx_ep_vf.h | 25 +++ 6 files changed, 488 insertions(+) diff --git a/drivers/net/octeontx_ep/otx2_ep_vf.c b/drivers/net/octeontx_ep/otx2_ep_vf.c index b570a49566..0fb8c26a5e 100644 --- a/drivers/net/octeontx_ep/otx2_ep_vf.c +++ b/drivers/net/octeontx_ep/otx2_ep_vf.c @@ -2,6 +2,7 @@ * Copyright(C) 2019 Marvell International Ltd. */ +#include #include "otx2_common.h" #include "otx_ep_common.h" #include "otx2_ep_vf.h" @@ -282,6 +283,33 @@ otx2_vf_disable_io_queues(struct otx_ep_device *otx_ep) } } +static uint32_t +otx2_vf_update_read_index(struct otx_ep_instr_queue *iq) +{ + uint32_t new_idx = rte_read32(iq->inst_cnt_reg); + + if (new_idx == 0xFFFFFFFF) { + otx_ep_dbg("%s Going to reset IQ index\n", __func__); + rte_write32(new_idx, iq->inst_cnt_reg); + } + + /* The new instr cnt reg is a 32-bit counter that can roll over. + * We have noted the counter's initial value at init time into + * reset_instr_cnt + */ + if (iq->reset_instr_cnt < new_idx) + new_idx -= iq->reset_instr_cnt; + else + new_idx += (0xffffffff - iq->reset_instr_cnt) + 1; + + /* Modulo of the new index with the IQ size will give us + * the new index. 
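Unlike the otx variant, this read-index update subtracts the counter value sampled at init (reset_instr_cnt) and must cope with the 32-bit hardware counter rolling over. A worked trace of that arithmetic with made-up values:

/* Counter sampled at init = 0xFFFFFFF0, counter read now = 0x00000010. */
uint32_t reset_instr_cnt = 0xFFFFFFF0u;
uint32_t new_idx = 0x00000010u;

if (reset_instr_cnt < new_idx)
        new_idx -= reset_instr_cnt;
else
        new_idx += (0xffffffffu - reset_instr_cnt) + 1;

/* new_idx == 0x20: 32 instructions fetched since init; the final
 * 'new_idx %= iq->nb_desc' then maps that count onto a ring slot.
 */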
+ */ + new_idx %= iq->nb_desc; + + return new_idx; +} + static const struct otx_ep_config default_otx2_ep_conf = { /* IQ attributes */ .iq = { @@ -313,6 +341,38 @@ otx2_ep_get_defconf(struct otx_ep_device *otx_ep_dev __rte_unused) return default_conf; } +static int otx2_vf_enable_rxq_intr(struct otx_ep_device *otx_epvf, + uint16_t q_no) +{ + union out_int_lvl_t out_int_lvl; + union out_cnts_t out_cnts; + + out_int_lvl.s.time_cnt_en = 1; + out_int_lvl.s.cnt = 0; + otx2_write64(out_int_lvl.d64, otx_epvf->hw_addr + + SDP_VF_R_OUT_INT_LEVELS(q_no)); + out_cnts.d64 = 0; + out_cnts.s.resend = 1; + otx2_write64(out_cnts.d64, otx_epvf->hw_addr + SDP_VF_R_OUT_CNTS(q_no)); + return 0; +} + +static int otx2_vf_disable_rxq_intr(struct otx_ep_device *otx_epvf, + uint16_t q_no) +{ + union out_int_lvl_t out_int_lvl; + + /* Disable the interrupt for this queue */ + out_int_lvl.d64 = otx2_read64(otx_epvf->hw_addr + + SDP_VF_R_OUT_INT_LEVELS(q_no)); + out_int_lvl.s.time_cnt_en = 0; + out_int_lvl.s.cnt = 0; + otx2_write64(out_int_lvl.d64, otx_epvf->hw_addr + + SDP_VF_R_OUT_INT_LEVELS(q_no)); + + return 0; +} + int otx2_ep_vf_setup_device(struct otx_ep_device *otx_ep) { @@ -340,6 +400,7 @@ otx2_ep_vf_setup_device(struct otx_ep_device *otx_ep) otx_ep->fn_list.setup_oq_regs = otx2_vf_setup_oq_regs; otx_ep->fn_list.setup_device_regs = otx2_vf_setup_device_regs; + otx_ep->fn_list.update_iq_read_idx = otx2_vf_update_read_index; otx_ep->fn_list.enable_io_queues = otx2_vf_enable_io_queues; otx_ep->fn_list.disable_io_queues = otx2_vf_disable_io_queues; @@ -349,6 +410,8 @@ otx2_ep_vf_setup_device(struct otx_ep_device *otx_ep) otx_ep->fn_list.enable_oq = otx2_vf_enable_oq; otx_ep->fn_list.disable_oq = otx2_vf_disable_oq; + otx_ep->fn_list.enable_rxq_intr = otx2_vf_enable_rxq_intr; + otx_ep->fn_list.disable_rxq_intr = otx2_vf_disable_rxq_intr; return 0; } diff --git a/drivers/net/octeontx_ep/otx2_ep_vf.h b/drivers/net/octeontx_ep/otx2_ep_vf.h index 3a3b3413b2..64a505afdb 100644 --- a/drivers/net/octeontx_ep/otx2_ep_vf.h +++ b/drivers/net/octeontx_ep/otx2_ep_vf.h @@ -26,5 +26,31 @@ struct otx2_ep_instr_64B { uint64_t exhdr[4]; }; +union out_int_lvl_t { + uint64_t d64; + struct { + uint64_t cnt:32; + uint64_t timet:22; + uint64_t max_len:7; + uint64_t max_len_en:1; + uint64_t time_cnt_en:1; + uint64_t bmode:1; + } s; +}; + +union out_cnts_t { + uint64_t d64; + struct { + uint64_t cnt:32; + uint64_t timer:22; + uint64_t rsvd:5; + uint64_t resend:1; + uint64_t mbox_int:1; + uint64_t in_int:1; + uint64_t out_int:1; + uint64_t send_ism:1; + } s; +}; + #endif /*_OTX2_EP_VF_H_ */ diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h index 0b6e7e2042..fdd31c889d 100644 --- a/drivers/net/octeontx_ep/otx_ep_common.h +++ b/drivers/net/octeontx_ep/otx_ep_common.h @@ -122,6 +122,37 @@ typedef union otx_ep_instr_ih { } s; } otx_ep_instr_ih_t; + + +typedef union otx_ep_resp_hdr { + uint64_t u64; + struct { + /** The request id for a packet thats in response + * to pkt sent by host. + */ + uint64_t request_id:16; + + /** Reserved. */ + uint64_t reserved:2; + + /** checksum verified. */ + uint64_t csum_verified:2; + + /** The destination Queue port. */ + uint64_t dest_qport:22; + + /** The source port for a packet thats in response + * to pkt sent by host. + */ + uint64_t src_port:6; + + /** Opcode for this packet. 
*/ + uint64_t opcode:16; + } s; +} otx_ep_resp_hdr_t; + +#define OTX_EP_RESP_HDR_SIZE (sizeof(otx_ep_resp_hdr_t)) + /* OTX_EP IQ request list */ struct otx_ep_instr_list { void *buf; @@ -210,6 +241,17 @@ struct otx_ep_instr_queue { const struct rte_memzone *iq_mz; }; +/* DROQ packet format for application i/f. */ +struct otx_ep_droq_pkt { + /* DROQ packet data buffer pointer. */ + uint8_t *data; + + /* DROQ packet data length */ + uint32_t len; + + uint32_t misc; +}; + /** Descriptor format. * The descriptor ring is made of descriptors which have 2 64-bit values: * -# Physical (bus) address of the data buffer. @@ -395,6 +437,7 @@ struct otx_ep_fn_list { void (*setup_oq_regs)(struct otx_ep_device *otx_ep, uint32_t q_no); int (*setup_device_regs)(struct otx_ep_device *otx_ep); + uint32_t (*update_iq_read_idx)(struct otx_ep_instr_queue *iq); void (*enable_io_queues)(struct otx_ep_device *otx_ep); void (*disable_io_queues)(struct otx_ep_device *otx_ep); @@ -404,6 +447,8 @@ struct otx_ep_fn_list { void (*enable_oq)(struct otx_ep_device *otx_ep, uint32_t q_no); void (*disable_oq)(struct otx_ep_device *otx_ep, uint32_t q_no); + int (*enable_rxq_intr)(struct otx_ep_device *otx_epvf, uint16_t q_no); + int (*disable_rxq_intr)(struct otx_ep_device *otx_epvf, uint16_t q_no); }; /* SRIOV information */ @@ -508,8 +553,16 @@ struct otx_ep_buf_free_info { struct otx_ep_gather g; }; +int +otx_ep_register_irq(struct rte_intr_handle *intr_handle, unsigned int vec); + +void +otx_ep_unregister_irq(struct rte_intr_handle *intr_handle, unsigned int vec); + #define OTX_EP_MAX_PKT_SZ 64000U #define OTX_EP_MAX_MAC_ADDRS 1 #define OTX_EP_SG_ALIGN 8 +#define SDP_VF_R_MSIX_START (0x0) +#define SDP_VF_R_MSIX(ring) (SDP_VF_R_MSIX_START + (ring)) #endif /* _OTX_EP_COMMON_H_ */ diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c index 1739bae765..d37a4c1c5a 100644 --- a/drivers/net/octeontx_ep/otx_ep_ethdev.c +++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c @@ -2,6 +2,7 @@ * Copyright(C) 2019 Marvell International Ltd. */ +#include #include #include #include @@ -12,6 +13,14 @@ #include "otx2_ep_vf.h" #include "otx_ep_rxtx.h" +#include +#include +#include +#include + +#define MAX_INTR_VEC_ID RTE_MAX_RXTX_INTR_VEC_ID +#define MSIX_IRQ_SET_BUF_LEN (sizeof(struct vfio_irq_set) + \ + sizeof(int) * (MAX_INTR_VEC_ID)) #define OTX_EP_DEV(_eth_dev) ((_eth_dev)->data->dev_private) static const struct rte_eth_desc_lim otx_ep_rx_desc_lim = { @@ -186,6 +195,55 @@ otx_epdev_init(struct otx_ep_device *otx_epvf) return -ENOMEM; } +static int otx_epvf_setup_rxq_intr(struct otx_ep_device *otx_epvf, + uint16_t q_no) +{ + struct rte_eth_dev *eth_dev = otx_epvf->eth_dev; + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct rte_intr_handle *handle = &pci_dev->intr_handle; + int rc, vec; + + vec = SDP_VF_R_MSIX(q_no); + + rc = otx_ep_register_irq(handle, vec); + if (rc) { + otx_ep_err("Fail to register Rx irq, rc=%d", rc); + return rc; + } + + if (!handle->intr_vec) { + handle->intr_vec = rte_zmalloc("intr_vec", + otx_epvf->max_rx_queues * + sizeof(int), 0); + if (!handle->intr_vec) { + otx_ep_err("Failed to allocate %d rx intr_vec", + otx_epvf->max_rx_queues); + return -ENOMEM; + } + } + + /* VFIO vector zero is resereved for misc interrupt so + * doing required adjustment. 
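+ * rte_intr_rx_ctl() later subtracts RTE_INTR_VEC_RXTX_OFFSET from
+ * intr_vec[q_no] again to locate this queue's eventfd in efds[].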
+ */ + handle->intr_vec[q_no] = RTE_INTR_VEC_RXTX_OFFSET + vec; + + return rc; +} + +static void otx_epvf_unset_rxq_intr(struct otx_ep_device *otx_epvf, + uint16_t q_no) +{ + /* Not yet implemented */ + struct rte_eth_dev *eth_dev = otx_epvf->eth_dev; + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct rte_intr_handle *handle = &pci_dev->intr_handle; + int vec; + + vec = SDP_VF_R_MSIX(q_no); + otx_epvf->fn_list.disable_rxq_intr(otx_epvf, q_no); + otx_ep_unregister_irq(handle, vec); +} + static int otx_ep_dev_configure(struct rte_eth_dev *eth_dev) { @@ -195,6 +253,7 @@ otx_ep_dev_configure(struct rte_eth_dev *eth_dev) struct rte_eth_rxmode *rxmode = &conf->rxmode; struct rte_eth_txmode *txmode = &conf->txmode; uint32_t ethdev_queues; + uint16_t q; ethdev_queues = (uint32_t)(otx_epvf->sriov_info.rings_per_vf); if (eth_dev->data->nb_rx_queues > ethdev_queues || @@ -209,9 +268,177 @@ otx_ep_dev_configure(struct rte_eth_dev *eth_dev) otx_epvf->rx_offloads = rxmode->offloads; otx_epvf->tx_offloads = txmode->offloads; + if (eth_dev->data->dev_conf.intr_conf.rxq) { + for (q = 0; q < eth_dev->data->nb_rx_queues; q++) + otx_epvf_setup_rxq_intr(otx_epvf, q); + } return 0; } +static int +irq_get_info(struct rte_intr_handle *intr_handle) +{ + struct vfio_irq_info irq = { .argsz = sizeof(irq) }; + int rc; + + irq.index = VFIO_PCI_MSIX_IRQ_INDEX; + + rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq); + if (rc < 0) { + otx_ep_err("Failed to get IRQ info rc=%d errno=%d", rc, errno); + return rc; + } + + otx_ep_dbg("Flags=0x%x index=0x%x count=0x%x max_intr_vec_id=0x%x", + irq.flags, irq.index, irq.count, MAX_INTR_VEC_ID); + + if (irq.count > MAX_INTR_VEC_ID) { + otx_ep_err("HW max=%d > MAX_INTR_VEC_ID: %d", + intr_handle->max_intr, MAX_INTR_VEC_ID); + intr_handle->max_intr = MAX_INTR_VEC_ID; + } else { + intr_handle->max_intr = irq.count; + } + + return 0; +} + +static int +irq_init(struct rte_intr_handle *intr_handle) +{ + char irq_set_buf[MSIX_IRQ_SET_BUF_LEN]; + struct vfio_irq_set *irq_set; + int32_t *fd_ptr; + int len, rc; + uint32_t i; + + if (intr_handle->max_intr > MAX_INTR_VEC_ID) { + otx_ep_err("Max_intr=%d greater than MAX_INTR_VEC_ID=%d", + intr_handle->max_intr, MAX_INTR_VEC_ID); + return -ERANGE; + } + + len = sizeof(struct vfio_irq_set) + + sizeof(int32_t) * intr_handle->max_intr; + + irq_set = (struct vfio_irq_set *)irq_set_buf; + irq_set->argsz = len; + irq_set->start = 0; + irq_set->count = intr_handle->max_intr; + irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | + VFIO_IRQ_SET_ACTION_TRIGGER; + irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX; + + fd_ptr = (int32_t *)&irq_set->data[0]; + for (i = 0; i < irq_set->count; i++) + fd_ptr[i] = -1; + + rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); + if (rc) + otx_ep_err("Failed to set irqs vector rc=%d", rc); + + return rc; +} + +static int +irq_config(struct rte_intr_handle *intr_handle, unsigned int vec) +{ + char irq_set_buf[MSIX_IRQ_SET_BUF_LEN]; + struct vfio_irq_set *irq_set; + int32_t *fd_ptr; + int len, rc; + + if (vec > intr_handle->max_intr) { + otx_ep_err("vector=%d greater than max_intr=%d", vec, + intr_handle->max_intr); + return -EINVAL; + } + + len = sizeof(struct vfio_irq_set) + sizeof(int32_t); + irq_set = (struct vfio_irq_set *)irq_set_buf; + irq_set->argsz = len; + irq_set->start = vec; + irq_set->count = 1; + irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | + VFIO_IRQ_SET_ACTION_TRIGGER; + irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX; + + /* Use vec fd to set interrupt vectors */ + 
fd_ptr = (int32_t *)&irq_set->data[0]; + fd_ptr[0] = intr_handle->efds[vec]; + + rc = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); + if (rc) + otx_ep_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc); + + return rc; +} + +int +otx_ep_register_irq(struct rte_intr_handle *intr_handle, unsigned int vec) +{ + struct rte_intr_handle tmp_handle; + + /* If no max_intr read from VFIO */ + if (intr_handle->max_intr == 0) { + irq_get_info(intr_handle); + irq_init(intr_handle); + } + + if (vec > intr_handle->max_intr) { + otx_ep_err("Vector=%d greater than max_intr=%d", vec, + intr_handle->max_intr); + return -EINVAL; + } + + tmp_handle = *intr_handle; + /* Create new eventfd for interrupt vector */ + tmp_handle.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC); + if (tmp_handle.fd == -1) + return -ENODEV; + + intr_handle->efds[vec] = tmp_handle.fd; + intr_handle->nb_efd = ((vec + 1) > intr_handle->nb_efd) ? + (vec + 1) : intr_handle->nb_efd; + intr_handle->max_intr = RTE_MAX(intr_handle->nb_efd + 1, + intr_handle->max_intr); + + otx_ep_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", + vec, intr_handle->nb_efd, intr_handle->max_intr); + + /* Enable MSIX vectors to VFIO */ + return irq_config(intr_handle, vec); +} + +/** + * @internal + * Unregister IRQ + */ +void +otx_ep_unregister_irq(struct rte_intr_handle *intr_handle, unsigned int vec) +{ + struct rte_intr_handle tmp_handle; + + if (vec > intr_handle->max_intr) { + otx_ep_err("Error unregistering MSI-X interrupts vec:%d > %d", + vec, intr_handle->max_intr); + return; + } + + tmp_handle = *intr_handle; + tmp_handle.fd = intr_handle->efds[vec]; + if (tmp_handle.fd == -1) + return; + + otx_ep_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", + vec, intr_handle->nb_efd, intr_handle->max_intr); + + if (intr_handle->efds[vec] != -1) + close(intr_handle->efds[vec]); + /* Disable MSIX vectors from VFIO */ + intr_handle->efds[vec] = -1; + irq_config(intr_handle, vec); +} /** * Setup our receive queue/ringbuffer. 
This is the * queue the Octeon uses to send us packets and @@ -429,6 +656,26 @@ otx_ep_dev_stats_reset(struct rte_eth_dev *eth_dev) return 0; } +static int otx_ep_dev_rxq_irq_enable(struct rte_eth_dev *dev, + uint16_t rx_queue_id) +{ + struct otx_ep_device *otx_epvf = OTX_EP_DEV(dev); + int rc; + + rc = otx_epvf->fn_list.enable_rxq_intr(otx_epvf, rx_queue_id); + return rc; +} + +static int otx_ep_dev_rxq_irq_disable(struct rte_eth_dev *dev, + uint16_t rx_queue_id) +{ + struct otx_ep_device *otx_epvf = OTX_EP_DEV(dev); + int rc; + + rc = otx_epvf->fn_list.disable_rxq_intr(otx_epvf, rx_queue_id); + return rc; +} + /* Define our ethernet definitions */ static const struct eth_dev_ops otx_ep_eth_dev_ops = { .dev_configure = otx_ep_dev_configure, @@ -442,6 +689,8 @@ static const struct eth_dev_ops otx_ep_eth_dev_ops = { .stats_get = otx_ep_dev_stats_get, .stats_reset = otx_ep_dev_stats_reset, .dev_infos_get = otx_ep_dev_info_get, + .rx_queue_intr_enable = otx_ep_dev_rxq_irq_enable, + .rx_queue_intr_disable = otx_ep_dev_rxq_irq_disable, }; @@ -483,11 +732,17 @@ static int otx_ep_eth_dev_uninit(struct rte_eth_dev *eth_dev) { struct otx_ep_device *otx_epvf = OTX_EP_DEV(eth_dev); + uint16_t q; if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; otx_epdev_exit(eth_dev); + if (eth_dev->data->dev_conf.intr_conf.rxq) { + for (q = 0; q < eth_dev->data->nb_rx_queues; q++) + otx_epvf_unset_rxq_intr(otx_epvf, q); + } + otx_epvf->port_configured = 0; if (eth_dev->data->mac_addrs != NULL) diff --git a/drivers/net/octeontx_ep/otx_ep_vf.c b/drivers/net/octeontx_ep/otx_ep_vf.c index 4a00736dab..a7c9d48dbc 100644 --- a/drivers/net/octeontx_ep/otx_ep_vf.c +++ b/drivers/net/octeontx_ep/otx_ep_vf.c @@ -324,6 +324,33 @@ otx_ep_disable_io_queues(struct otx_ep_device *otx_ep) } } +static uint32_t +otx_ep_update_read_index(struct otx_ep_instr_queue *iq) +{ + uint32_t new_idx = rte_read32(iq->inst_cnt_reg); + + if (new_idx == 0xFFFFFFFF) { + otx_ep_dbg("%s Going to reset IQ index\n", __func__); + rte_write32(new_idx, iq->inst_cnt_reg); + } + + /* The new instr cnt reg is a 32-bit counter that can roll over. + * We have noted the counter's initial value at init time into + * reset_instr_cnt + */ + if (iq->reset_instr_cnt < new_idx) + new_idx -= iq->reset_instr_cnt; + else + new_idx += (0xffffffff - iq->reset_instr_cnt) + 1; + + /* Modulo of the new index with the IQ size will give us + * the new index. 
+ */ + new_idx %= iq->nb_desc; + + return new_idx; +} + /* OTX_EP default configuration */ static const struct otx_ep_config default_otx_ep_conf = { /* IQ attributes */ @@ -358,6 +385,41 @@ otx_ep_get_defconf(struct otx_ep_device *otx_ep_dev __rte_unused) return default_conf; } +static int otx_vf_enable_rxq_intr(struct otx_ep_device *otx_epvf __rte_unused, + uint16_t q_no __rte_unused) +{ + union otx_out_int_lvl_t out_int_lvl; + union otx_out_cnts_t out_cnts; + + out_int_lvl.d64 = rte_read64(otx_epvf->hw_addr + + OTX_EP_R_OUT_INT_LEVELS(q_no)); + out_int_lvl.s.cnt = 0; + otx_ep_write64(out_int_lvl.d64, otx_epvf->hw_addr, + OTX_EP_R_OUT_INT_LEVELS(q_no)); + + out_cnts.d64 = 0; + out_cnts.s.resend = 1; + otx_ep_write64(out_cnts.d64, otx_epvf->hw_addr, + OTX_EP_R_OUT_CNTS(q_no)); + + return 0; +} + +static int otx_vf_disable_rxq_intr(struct otx_ep_device *otx_epvf __rte_unused, + uint16_t q_no __rte_unused) +{ + union otx_out_int_lvl_t out_int_lvl; + + /* Increase the int level so that you get no more interrupts */ + out_int_lvl.d64 = rte_read64(otx_epvf->hw_addr + + OTX_EP_R_OUT_INT_LEVELS(q_no)); + out_int_lvl.s.cnt = 0xFFFFFFFF; + otx_ep_write64(out_int_lvl.d64, otx_epvf->hw_addr, + OTX_EP_R_OUT_INT_LEVELS(q_no)); + + return 0; +} + int otx_ep_vf_setup_device(struct otx_ep_device *otx_ep) { @@ -385,6 +447,7 @@ otx_ep_vf_setup_device(struct otx_ep_device *otx_ep) otx_ep->fn_list.setup_oq_regs = otx_ep_setup_oq_regs; otx_ep->fn_list.setup_device_regs = otx_ep_setup_device_regs; + otx_ep->fn_list.update_iq_read_idx = otx_ep_update_read_index; otx_ep->fn_list.enable_io_queues = otx_ep_enable_io_queues; otx_ep->fn_list.disable_io_queues = otx_ep_disable_io_queues; @@ -394,7 +457,10 @@ otx_ep_vf_setup_device(struct otx_ep_device *otx_ep) otx_ep->fn_list.enable_oq = otx_ep_enable_oq; otx_ep->fn_list.disable_oq = otx_ep_disable_oq; + otx_ep->fn_list.enable_rxq_intr = otx_vf_enable_rxq_intr; + otx_ep->fn_list.disable_rxq_intr = otx_vf_disable_rxq_intr; return 0; } + diff --git a/drivers/net/octeontx_ep/otx_ep_vf.h b/drivers/net/octeontx_ep/otx_ep_vf.h index f91251865b..da1893bc1f 100644 --- a/drivers/net/octeontx_ep/otx_ep_vf.h +++ b/drivers/net/octeontx_ep/otx_ep_vf.h @@ -170,4 +170,29 @@ struct otx_ep_instr_64B { int otx_ep_vf_setup_device(struct otx_ep_device *otx_ep); + +union otx_out_int_lvl_t { + uint64_t d64; + struct { + uint64_t cnt:32; + uint64_t timet:22; + uint64_t raz:9; + uint64_t bmode:1; + } s; +}; + +union otx_out_cnts_t { + uint64_t d64; + struct { + uint64_t cnt:32; + uint64_t timer:22; + uint64_t rsvd0:5; + uint64_t resend:1; + uint64_t mbox_int:1; + uint64_t in_int:1; + uint64_t out_int:1; + uint64_t rsvd1:1; + } s; +}; + #endif /*_OTX_EP_VF_H_ */ From patchwork Thu Dec 31 07:22:47 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pradeep Nalla X-Patchwork-Id: 85928 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (xvm-189-124.dc0.ghst.net [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 4CB04A0A00; Thu, 31 Dec 2020 08:25:47 +0100 (CET) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1A78B140D5E; Thu, 31 Dec 2020 08:23:19 +0100 (CET) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 9F21A140CF9 for ; Thu, 31 Dec 2020 08:23:02 +0100 (CET) Received: from 
pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 0BV7G0IR022182 for ; Wed, 30 Dec 2020 23:23:01 -0800 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type; s=pfpt0220; bh=/Hh+tASQ29NcEtRh6iSucHzAcyghwKoCjv0if48wz8U=; b=BE5pN2yCvKycTH08x2iiNWd3zpnrto+bqD505CTlsk6G8M6+xb2c7sCvYmYyhUcrgyKg K9/clf2M/7oWi4ufTvqtungyE3qnV1ByEZdELK10YYIeFRH41NMbP+tPNuhDLYQpsZhY /mbi8WqtsQ1IsKd8SYm5QKJpB7iU/im7Pn5LJu64IFfvY8IK3wKQEte8px/X+6BGcoOC twIhpRGR/2/Dbume3vyxrmn6/KUAszMKzBUhZBVSuJu1vQnT8YP7aNH6lMl4I8Jr//JS os7GA69uXWcOFM/ZDbGT7vQ1udYaJeKjHB8gUyoZdAwOLdAgs1KS5YURjy9zD0qZc7sQ BA== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com with ESMTP id 35rqgehx5e-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT) for ; Wed, 30 Dec 2020 23:23:01 -0800 Received: from SC-EXCH02.marvell.com (10.93.176.82) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 30 Dec 2020 23:23:00 -0800 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 30 Dec 2020 23:22:58 -0800 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 30 Dec 2020 23:22:59 -0800 Received: from localhost.localdomain (unknown [10.111.145.157]) by maili.marvell.com (Postfix) with ESMTP id 1AECB3F703F; Wed, 30 Dec 2020 23:22:59 -0800 (PST) From: "Nalla, Pradeep" To: "Nalla, Pradeep" , Radha Mohan Chintakuntla , Veerasenareddy Burru CC: , , Date: Thu, 31 Dec 2020 07:22:47 +0000 Message-ID: <20201231072247.5719-16-pnalla@marvell.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20201231072247.5719-1-pnalla@marvell.com> References: <20201231072247.5719-1-pnalla@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.343, 18.0.737 definitions=2020-12-31_02:2020-12-30, 2020-12-31 signatures=0 Subject: [dpdk-dev] [PATCH 15/15] net/octeontx_ep: Input output reset. X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: "Nalla Pradeep" Function to allow resetting input and output queues are added. Supports both otx and otx2 endpoints. Signed-off-by: Nalla Pradeep --- drivers/net/octeontx_ep/otx2_ep_vf.c | 120 ++++++++++++++++++++++ drivers/net/octeontx_ep/otx_ep_vf.c | 143 +++++++++++++++++++++++++++ 2 files changed, 263 insertions(+) diff --git a/drivers/net/octeontx_ep/otx2_ep_vf.c b/drivers/net/octeontx_ep/otx2_ep_vf.c index 0fb8c26a5e..095c43b05a 100644 --- a/drivers/net/octeontx_ep/otx2_ep_vf.c +++ b/drivers/net/octeontx_ep/otx2_ep_vf.c @@ -7,6 +7,96 @@ #include "otx_ep_common.h" #include "otx2_ep_vf.h" +static int +otx2_vf_reset_iq(struct otx_ep_device *otx_ep, int q_no) +{ + uint64_t loop = SDP_VF_BUSY_LOOP_COUNT; + volatile uint64_t d64 = 0ull; + + /* There is no RST for a ring. 
+ * Clear all registers one by one after disabling the ring + */ + + otx2_write64(d64, otx_ep->hw_addr + SDP_VF_R_IN_ENABLE(q_no)); + otx2_write64(d64, otx_ep->hw_addr + SDP_VF_R_IN_INSTR_BADDR(q_no)); + otx2_write64(d64, otx_ep->hw_addr + SDP_VF_R_IN_INSTR_RSIZE(q_no)); + + d64 = 0xFFFFFFFF; /* ~0ull */ + otx2_write64(d64, otx_ep->hw_addr + SDP_VF_R_IN_INSTR_DBELL(q_no)); + d64 = otx2_read64(otx_ep->hw_addr + SDP_VF_R_IN_INSTR_DBELL(q_no)); + + while ((d64 != 0) && loop--) { + otx2_write64(d64, otx_ep->hw_addr + + SDP_VF_R_IN_INSTR_DBELL(q_no)); + + rte_delay_ms(1); + + d64 = otx2_read64(otx_ep->hw_addr + + SDP_VF_R_IN_INSTR_DBELL(q_no)); + } + + loop = SDP_VF_BUSY_LOOP_COUNT; + d64 = otx2_read64(otx_ep->hw_addr + SDP_VF_R_IN_CNTS(q_no)); + while ((d64 != 0) && loop--) { + otx2_write64(d64, otx_ep->hw_addr + SDP_VF_R_IN_CNTS(q_no)); + + rte_delay_ms(1); + + d64 = otx2_read64(otx_ep->hw_addr + SDP_VF_R_IN_CNTS(q_no)); + } + + d64 = 0ull; + otx2_write64(d64, otx_ep->hw_addr + SDP_VF_R_IN_INT_LEVELS(q_no)); + otx2_write64(d64, otx_ep->hw_addr + SDP_VF_R_IN_PKT_CNT(q_no)); + otx2_write64(d64, otx_ep->hw_addr + SDP_VF_R_IN_BYTE_CNT(q_no)); + + return 0; +} + +static int +otx2_vf_reset_oq(struct otx_ep_device *otx_ep, int q_no) +{ + uint64_t loop = SDP_VF_BUSY_LOOP_COUNT; + volatile uint64_t d64 = 0ull; + + otx2_write64(d64, otx_ep->hw_addr + SDP_VF_R_OUT_ENABLE(q_no)); + + otx2_write64(d64, otx_ep->hw_addr + SDP_VF_R_OUT_SLIST_BADDR(q_no)); + + otx2_write64(d64, otx_ep->hw_addr + SDP_VF_R_OUT_SLIST_RSIZE(q_no)); + + d64 = 0xFFFFFFFF; + otx2_write64(d64, otx_ep->hw_addr + SDP_VF_R_OUT_SLIST_DBELL(q_no)); + d64 = otx2_read64(otx_ep->hw_addr + SDP_VF_R_OUT_SLIST_DBELL(q_no)); + + while ((d64 != 0) && loop--) { + otx2_write64(d64, otx_ep->hw_addr + + SDP_VF_R_OUT_SLIST_DBELL(q_no)); + + rte_delay_ms(1); + + d64 = otx2_read64(otx_ep->hw_addr + + SDP_VF_R_OUT_SLIST_DBELL(q_no)); + } + + loop = SDP_VF_BUSY_LOOP_COUNT; + d64 = otx2_read64(otx_ep->hw_addr + SDP_VF_R_OUT_CNTS(q_no)); + while ((d64 != 0) && (loop--)) { + otx2_write64(d64, otx_ep->hw_addr + SDP_VF_R_OUT_CNTS(q_no)); + + rte_delay_ms(1); + + d64 = otx2_read64(otx_ep->hw_addr + SDP_VF_R_OUT_CNTS(q_no)); + } + + d64 = 0ull; + otx2_write64(d64, otx_ep->hw_addr + SDP_VF_R_OUT_INT_LEVELS(q_no)); + otx2_write64(d64, otx_ep->hw_addr + SDP_VF_R_OUT_PKT_CNT(q_no)); + otx2_write64(d64, otx_ep->hw_addr + SDP_VF_R_OUT_BYTE_CNT(q_no)); + + return 0; +} + static void otx2_vf_setup_global_iq_reg(struct otx_ep_device *otx_ep, int q_no) { @@ -52,11 +142,39 @@ otx2_vf_setup_global_oq_reg(struct otx_ep_device *otx_ep, int q_no) otx2_write64(reg_val, otx_ep->hw_addr + SDP_VF_R_OUT_CONTROL(q_no)); } +static int +otx2_vf_reset_input_queues(struct otx_ep_device *otx_ep) +{ + uint32_t q_no = 0; + + otx_ep_dbg("%s :", __func__); + + for (q_no = 0; q_no < otx_ep->sriov_info.rings_per_vf; q_no++) + otx2_vf_reset_iq(otx_ep, q_no); + + return 0; +} + +static int +otx2_vf_reset_output_queues(struct otx_ep_device *otx_ep) +{ + uint64_t q_no = 0ull; + + otx_ep_dbg(" %s :", __func__); + + for (q_no = 0; q_no < otx_ep->sriov_info.rings_per_vf; q_no++) + otx2_vf_reset_oq(otx_ep, q_no); + + return 0; +} + static void otx2_vf_setup_global_input_regs(struct otx_ep_device *otx_ep) { uint64_t q_no = 0ull; + otx2_vf_reset_input_queues(otx_ep); + for (q_no = 0; q_no < (otx_ep->sriov_info.rings_per_vf); q_no++) otx2_vf_setup_global_iq_reg(otx_ep, q_no); } @@ -66,6 +184,8 @@ otx2_vf_setup_global_output_regs(struct otx_ep_device *otx_ep) { uint32_t q_no; + 
otx2_vf_reset_output_queues(otx_ep); + for (q_no = 0; q_no < (otx_ep->sriov_info.rings_per_vf); q_no++) otx2_vf_setup_global_oq_reg(otx_ep, q_no); } diff --git a/drivers/net/octeontx_ep/otx_ep_vf.c b/drivers/net/octeontx_ep/otx_ep_vf.c index a7c9d48dbc..0280802aa1 100644 --- a/drivers/net/octeontx_ep/otx_ep_vf.c +++ b/drivers/net/octeontx_ep/otx_ep_vf.c @@ -11,6 +11,114 @@ #include "otx_ep_common.h" #include "otx_ep_vf.h" +#ifdef OTX_EP_RESET_IOQ +static int +otx_ep_reset_iq(struct otx_ep_device *otx_ep, int q_no) +{ + uint64_t loop = OTX_EP_BUSY_LOOP_COUNT; + volatile uint64_t d64 = 0ull; + + /* There is no RST for a ring. + * Clear all registers one by one after disabling the ring + */ + + otx_ep_write64(d64, otx_ep->hw_addr, OTX_EP_R_IN_ENABLE(q_no)); + otx_ep_write64(d64, otx_ep->hw_addr, OTX_EP_R_IN_INSTR_BADDR(q_no)); + otx_ep_write64(d64, otx_ep->hw_addr, OTX_EP_R_IN_INSTR_RSIZE(q_no)); + + d64 = 0xFFFFFFFF; /* ~0ull */ + otx_ep_write64(d64, otx_ep->hw_addr, OTX_EP_R_IN_INSTR_DBELL(q_no)); + d64 = rte_read64(otx_ep->hw_addr + OTX_EP_R_IN_INSTR_DBELL(q_no)); + + while ((d64 != 0) && loop--) { + otx_ep_write64(d64, otx_ep->hw_addr, + OTX_EP_R_IN_INSTR_DBELL(q_no)); + + rte_delay_ms(1); + + d64 = rte_read64(otx_ep->hw_addr + + OTX_EP_R_IN_INSTR_DBELL(q_no)); + } + if (loop == 0) { + otx_ep_err("dbell reset failed\n"); + return -1; + } + + loop = OTX_EP_BUSY_LOOP_COUNT; + d64 = rte_read64(otx_ep->hw_addr + OTX_EP_R_IN_CNTS(q_no)); + while ((d64 != 0) && loop--) { + otx_ep_write64(d64, otx_ep->hw_addr, OTX_EP_R_IN_CNTS(q_no)); + + rte_delay_ms(1); + + d64 = rte_read64(otx_ep->hw_addr + OTX_EP_R_IN_CNTS(q_no)); + } + if (loop == 0) { + otx_ep_err("cnt reset failed\n"); + return -1; + } + + d64 = 0ull; + otx_ep_write64(d64, otx_ep->hw_addr, OTX_EP_R_IN_INT_LEVELS(q_no)); + otx_ep_write64(d64, otx_ep->hw_addr, OTX_EP_R_IN_PKT_CNT(q_no)); + otx_ep_write64(d64, otx_ep->hw_addr, OTX_EP_R_IN_BYTE_CNT(q_no)); + + return 0; +} + +static int +otx_ep_reset_oq(struct otx_ep_device *otx_ep, int q_no) +{ + uint64_t loop = OTX_EP_BUSY_LOOP_COUNT; + volatile uint64_t d64 = 0ull; + + otx_ep_write64(d64, otx_ep->hw_addr, OTX_EP_R_OUT_ENABLE(q_no)); + + otx_ep_write64(d64, otx_ep->hw_addr, OTX_EP_R_OUT_SLIST_BADDR(q_no)); + + otx_ep_write64(d64, otx_ep->hw_addr, OTX_EP_R_OUT_SLIST_RSIZE(q_no)); + + d64 = 0xFFFFFFFF; + otx_ep_write64(d64, otx_ep->hw_addr, OTX_EP_R_OUT_SLIST_DBELL(q_no)); + d64 = rte_read64(otx_ep->hw_addr + OTX_EP_R_OUT_SLIST_DBELL(q_no)); + + while ((d64 != 0) && loop--) { + otx_ep_write64(d64, otx_ep->hw_addr, + OTX_EP_R_OUT_SLIST_DBELL(q_no)); + + rte_delay_ms(1); + + d64 = rte_read64(otx_ep->hw_addr + + OTX_EP_R_OUT_SLIST_DBELL(q_no)); + } + if (loop == 0) { + otx_ep_err("dbell reset failed\n"); + return -1; + } + + loop = OTX_EP_BUSY_LOOP_COUNT; + d64 = rte_read64(otx_ep->hw_addr + OTX_EP_R_OUT_CNTS(q_no)); + while ((d64 != 0) && (loop--)) { + otx_ep_write64(d64, otx_ep->hw_addr, OTX_EP_R_OUT_CNTS(q_no)); + + rte_delay_ms(1); + + d64 = rte_read64(otx_ep->hw_addr + OTX_EP_R_OUT_CNTS(q_no)); + } + if (loop == 0) { + otx_ep_err("cnt reset failed\n"); + return -1; + } + + + d64 = 0ull; + otx_ep_write64(d64, otx_ep->hw_addr, OTX_EP_R_OUT_INT_LEVELS(q_no)); + otx_ep_write64(d64, otx_ep->hw_addr, OTX_EP_R_OUT_PKT_CNT(q_no)); + otx_ep_write64(d64, otx_ep->hw_addr, OTX_EP_R_OUT_BYTE_CNT(q_no)); + + return 0; +} +#endif static void otx_ep_setup_global_iq_reg(struct otx_ep_device *otx_ep, int q_no) @@ -64,11 +172,42 @@ otx_ep_setup_global_oq_reg(struct otx_ep_device *otx_ep, int q_no) 
otx_ep_write64(reg_val, otx_ep->hw_addr, OTX_EP_R_OUT_CONTROL(q_no)); } +#ifdef OTX_EP_RESET_IOQ +static int +otx_ep_reset_input_queues(struct otx_ep_device *otx_ep) +{ + uint32_t q_no = 0; + + otx_ep_dbg("%s :\n", __func__); + + for (q_no = 0; q_no < otx_ep->sriov_info.rings_per_vf; q_no++) + otx_ep_reset_iq(otx_ep, q_no); + + return 0; +} + +static int +otx_ep_reset_output_queues(struct otx_ep_device *otx_ep) +{ + uint64_t q_no = 0ull; + + otx_ep_dbg(" %s :\n", __func__); + + for (q_no = 0; q_no < otx_ep->sriov_info.rings_per_vf; q_no++) + otx_ep_reset_oq(otx_ep, q_no); + + return 0; +} +#endif + static void otx_ep_setup_global_input_regs(struct otx_ep_device *otx_ep) { uint64_t q_no = 0ull; +#ifdef OTX_EP_RESET_IOQ + otx_ep_reset_input_queues(otx_ep); +#endif for (q_no = 0; q_no < (otx_ep->sriov_info.rings_per_vf); q_no++) otx_ep_setup_global_iq_reg(otx_ep, q_no); } @@ -78,8 +217,12 @@ otx_ep_setup_global_output_regs(struct otx_ep_device *otx_ep) { uint32_t q_no; +#ifdef OTX_EP_RESET_IOQ + otx_ep_reset_output_queues(otx_ep); +#endif for (q_no = 0; q_no < (otx_ep->sriov_info.rings_per_vf); q_no++) otx_ep_setup_global_oq_reg(otx_ep, q_no); + } static int