From patchwork Sun Jun 2 15:23:37 2019
X-Patchwork-Submitter: Jerin Jacob Kollanukkaran
X-Patchwork-Id: 54057
From: Jerin Jacob <jerinj@marvell.com>
To: dev@dpdk.org, Thomas Monjalon, John McNamara, Marko Kovacevic,
 Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: Pavan Nikhilesh
Date: Sun, 2 Jun 2019 20:53:37 +0530
Message-ID: <20190602152434.23996-2-jerinj@marvell.com>
In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com>
Subject: [dpdk-dev] [PATCH v1 01/58] net/octeontx2: add build infrastructure

From: Jerin Jacob <jerinj@marvell.com>

Add the bare minimum PMD library and documentation build infrastructure.

Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Kiran Kumar K
Signed-off-by: Pavan Nikhilesh
---
 config/common_base                          |  5 +++
 doc/guides/nics/features/octeontx2.ini      |  8 ++++
 doc/guides/nics/features/octeontx2_vec.ini  |  8 ++++
 doc/guides/nics/features/octeontx2_vf.ini   |  8 ++++
 drivers/net/Makefile                        |  1 +
 drivers/net/meson.build                     |  2 +-
 drivers/net/octeontx2/Makefile              | 38 +++++++++++++++++++
 drivers/net/octeontx2/meson.build           | 24 ++++++++++++
 drivers/net/octeontx2/otx2_ethdev.c         |  3 ++
 .../octeontx2/rte_pmd_octeontx2_version.map |  4 ++
 mk/rte.app.mk                               |  2 +
 11 files changed, 102 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/nics/features/octeontx2.ini
 create mode 100644 doc/guides/nics/features/octeontx2_vec.ini
 create mode 100644 doc/guides/nics/features/octeontx2_vf.ini
 create mode 100644 drivers/net/octeontx2/Makefile
 create mode 100644 drivers/net/octeontx2/meson.build
 create mode 100644 drivers/net/octeontx2/otx2_ethdev.c
 create mode 100644 drivers/net/octeontx2/rte_pmd_octeontx2_version.map

diff --git a/config/common_base b/config/common_base
index 4a3de0360..38edad355 100644
--- a/config/common_base
+++ b/config/common_base
@@ -405,6 +405,11 @@ CONFIG_RTE_LIBRTE_LIO_DEBUG_REGS=n
 #
 CONFIG_RTE_LIBRTE_OCTEONTX_PMD=y
 
+#
+# Compile burst-oriented Cavium OCTEONTX2 network PMD driver
+#
+CONFIG_RTE_LIBRTE_OCTEONTX2_PMD=y
+
 #
 # Compile WRS accelerated virtual port (AVP) guest PMD driver
 #
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
new file mode 100644
index 000000000..0ec3b6983
--- /dev/null
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -0,0 +1,8 @@
+;
+; Supported features of the 'octeontx2' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux VFIO           = Y
+ARMv8                = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
new file mode 100644
index 000000000..774f136c1
--- /dev/null
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -0,0 +1,8 @@
+;
+; Supported features of the 'octeontx2_vec' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux VFIO           = Y
+ARMv8                = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
new file mode 100644
index 000000000..36642354e
--- /dev/null
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -0,0 +1,8 @@
+;
+; Supported features of the 'octeontx2_vf' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Linux VFIO           = Y
+ARMv8                = Y
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 3a72cf38c..5bb618b21 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -45,6 +45,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_NFP_PMD) += nfp
 DIRS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL) += null
 DIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX_PMD) += octeontx
+DIRS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += octeontx2
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += pcap
 DIRS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += qede
 DIRS-$(CONFIG_RTE_LIBRTE_PMD_RING) += ring
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index ed99896c3..086a2f4cd 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -31,7 +31,7 @@ drivers = ['af_packet',
 	'netvsc',
 	'nfb',
 	'nfp',
-	'null', 'octeontx', 'pcap', 'qede', 'ring',
+	'null', 'octeontx', 'octeontx2', 'pcap', 'ring',
 	'sfc',
 	'softnic',
 	'szedata2',
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
new file mode 100644
index 000000000..0a606d27b
--- /dev/null
+++ b/drivers/net/octeontx2/Makefile
@@ -0,0 +1,38 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+#
+# library name
+#
+LIB = librte_pmd_octeontx2.a
+
+CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -I$(RTE_SDK)/drivers/common/octeontx2
+CFLAGS += -I$(RTE_SDK)/drivers/mempool/octeontx2
+CFLAGS += -I$(RTE_SDK)/drivers/net/octeontx2
+CFLAGS += -O3
+CFLAGS += -DALLOW_EXPERIMENTAL_API
+CFLAGS += -flax-vector-conversions
+
+ifneq ($(CONFIG_RTE_ARCH_64),y)
+CFLAGS += -Wno-int-to-pointer-cast
+CFLAGS += -Wno-pointer-to-int-cast
+endif
+
+EXPORT_MAP := rte_pmd_octeontx2_version.map
+
+LIBABIVER := 1
+
+#
+# all source are stored in SRCS-y
+#
+SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
+	otx2_ethdev.c
+
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_common_octeontx2 -lm
+LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs -lrte_bus_pci -lrte_mempool_octeontx2
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
new file mode 100644
index 000000000..0bd32446b
--- /dev/null
+++ b/drivers/net/octeontx2/meson.build
@@ -0,0 +1,24 @@
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(C) 2019 Marvell International Ltd.
+#
+
+sources = files(
+		'otx2_ethdev.c',
+		)
+
+allow_experimental_apis = true
+deps += ['bus_pci', 'common_octeontx2', 'mempool_octeontx2']
+
+cflags += ['-flax-vector-conversions', '-DALLOW_EXPERIMENTAL_API']
+
+extra_flags = []
+# This integrated controller runs only on an arm64 machine; remove 32-bit warnings
+if not dpdk_conf.get('RTE_ARCH_64')
+	extra_flags += ['-Wno-int-to-pointer-cast', '-Wno-pointer-to-int-cast']
+endif
+
+foreach flag: extra_flags
+	if cc.has_argument(flag)
+		cflags += flag
+	endif
+endforeach
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
new file mode 100644
index 000000000..d26535dee
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -0,0 +1,3 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
diff --git a/drivers/net/octeontx2/rte_pmd_octeontx2_version.map b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
new file mode 100644
index 000000000..fc8c95e91
--- /dev/null
+++ b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
@@ -0,0 +1,4 @@
+DPDK_19.05 {
+
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index cd89ccfd5..3dff91190 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -127,6 +127,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_COMMON_DPAAX) += -lrte_common_dpaax
 endif
 
 OCTEONTX2-y := $(CONFIG_RTE_LIBRTE_OCTEONTX2_MEMPOOL)
+OCTEONTX2-y += $(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD)
 ifeq ($(findstring y,$(OCTEONTX2-y)),y)
 _LDLIBS-y += -lrte_common_octeontx2
 endif
@@ -197,6 +198,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MVPP2_PMD) += -lrte_pmd_mvpp2
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MVNETA_PMD) += -lrte_pmd_mvneta
 _LDLIBS-$(CONFIG_RTE_LIBRTE_NFP_PMD) += -lrte_pmd_nfp
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL) += -lrte_pmd_null
+_LDLIBS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += -lrte_pmd_octeontx2 -lm
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += -lrte_pmd_pcap -lpcap
 _LDLIBS-$(CONFIG_RTE_LIBRTE_QEDE_PMD) += -lrte_pmd_qede
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_RING) += -lrte_pmd_ring
From patchwork Sun Jun 2 15:23:38 2019
X-Patchwork-Submitter: Jerin Jacob Kollanukkaran
X-Patchwork-Id: 54058
From: Jerin Jacob <jerinj@marvell.com>
To: dev@dpdk.org, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
 Anatoly Burakov
Date: Sun, 2 Jun 2019 20:53:38 +0530
Message-ID: <20190602152434.23996-3-jerinj@marvell.com>
In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com>
Subject: [dpdk-dev] [PATCH v1 02/58] net/octeontx2: add ethdev probe and remove

From: Jerin Jacob <jerinj@marvell.com>

Add basic PCIe ethdev probe and remove operations.

Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram
---
 drivers/net/octeontx2/otx2_ethdev.c | 93 +++++++++++++++++++++++++++++
 drivers/net/octeontx2/otx2_ethdev.h | 27 +++++++++
 2 files changed, 120 insertions(+)
 create mode 100644 drivers/net/octeontx2/otx2_ethdev.h

diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index d26535dee..05fa8988e 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -1,3 +1,96 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright(C) 2019 Marvell International Ltd.
  */
+
+#include
+#include
+#include
+
+#include "otx2_ethdev.h"
+
+static int
+otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
+{
+	RTE_SET_USED(eth_dev);
+
+	return -ENODEV;
+}
+
+static int
+otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
+{
+	RTE_SET_USED(eth_dev);
+	RTE_SET_USED(mbox_close);
+
+	return -ENODEV;
+}
+
+static int
+nix_remove(struct rte_pci_device *pci_dev)
+{
+	struct rte_eth_dev *eth_dev;
+	int rc;
+
+	eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
+	if (eth_dev) {
+		/* Cleanup eth dev */
+		rc = otx2_eth_dev_uninit(eth_dev, true);
+		if (rc)
+			return rc;
+
+		rte_eth_dev_pci_release(eth_dev);
+	}
+
+	/* Nothing to be done for secondary processes */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	return 0;
+}
+
+static int
+nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
+{
+	int rc;
+
+	RTE_SET_USED(pci_drv);
+
+	rc = rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct otx2_eth_dev),
+					   otx2_eth_dev_init);
+
+	/* On error on secondary, recheck if port exists in primary or
+	 * is in the middle of a detach.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY && rc)
+		if (!rte_eth_dev_allocated(pci_dev->device.name))
+			return 0;
+	return rc;
+}
+
+static const struct rte_pci_id pci_nix_map[] = {
+	{
+		RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_PF)
+	},
+	{
+		RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_VF)
+	},
+	{
+		RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
+			       PCI_DEVID_OCTEONTX2_RVU_AF_VF)
+	},
+	{
+		.vendor_id = 0,
+	},
+};
+
+static struct rte_pci_driver pci_nix = {
+	.id_table = pci_nix_map,
+	.drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_IOVA_AS_VA |
+			RTE_PCI_DRV_INTR_LSC,
+	.probe = nix_probe,
+	.remove = nix_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_octeontx2, pci_nix);
+RTE_PMD_REGISTER_PCI_TABLE(net_octeontx2, pci_nix_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_octeontx2, "vfio-pci");
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
new file mode 100644
index 000000000..fd01a3254
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_ETHDEV_H__
+#define __OTX2_ETHDEV_H__
+
+#include
+
+#include
+
+#include "otx2_common.h"
+#include "otx2_dev.h"
+#include "otx2_irq.h"
+#include "otx2_mempool.h"
+
+struct otx2_eth_dev {
+	OTX2_DEV; /* Base class */
+} __rte_cache_aligned;
+
+static inline struct otx2_eth_dev *
+otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
+{
+	return eth_dev->data->dev_private;
+}
+
+#endif /* __OTX2_ETHDEV_H__ */
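
For context, a minimal application-side sketch (not part of the patch) of how a
device bound to vfio-pci and matched against pci_nix_map surfaces as an ethdev
port once nix_probe() succeeds. Only core DPDK APIs of this era are used; the
EAL arguments are whatever the deployment passes in:

#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int
main(int argc, char **argv)
{
	/* rte_eal_init() scans the PCI bus; for each id in pci_nix_map it
	 * calls nix_probe(), which allocates a struct otx2_eth_dev as the
	 * port's private data and runs otx2_eth_dev_init().
	 */
	if (rte_eal_init(argc, argv) < 0)
		return 1;

	/* Every successfully probed NIX LF is now an ethdev port */
	printf("%u port(s) available\n", rte_eth_dev_count_avail());
	return 0;
}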
From patchwork Sun Jun 2 15:23:39 2019
X-Patchwork-Submitter: Jerin Jacob Kollanukkaran
X-Patchwork-Id: 54059
From: Jerin Jacob <jerinj@marvell.com>
To: dev@dpdk.org, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K,
 Anatoly Burakov
Cc: Sunil Kumar Kori, Vamsi Attunuru
Date: Sun, 2 Jun 2019 20:53:39 +0530
Message-ID: <20190602152434.23996-4-jerinj@marvell.com>
In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com>
Subject: [dpdk-dev] [PATCH v1 03/58] net/octeontx2: add device init and uninit

From: Jerin Jacob <jerinj@marvell.com>

Add basic init and uninit functions, which include attaching the
LF device to the probed PCIe device.

Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram
Signed-off-by: Sunil Kumar Kori
Signed-off-by: Vamsi Attunuru
---
 drivers/net/octeontx2/Makefile      |   1 +
 drivers/net/octeontx2/meson.build   |   1 +
 drivers/net/octeontx2/otx2_ethdev.c | 277 +++++++++++++++++++++++++++-
 drivers/net/octeontx2/otx2_ethdev.h |  72 ++++++++
 drivers/net/octeontx2/otx2_mac.c    |  72 ++++++++
 5 files changed, 418 insertions(+), 5 deletions(-)
 create mode 100644 drivers/net/octeontx2/otx2_mac.c

diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 0a606d27b..9ca1eea99 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -30,6 +30,7 @@ LIBABIVER := 1
 # all source are stored in SRCS-y
 #
 SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
+	otx2_mac.c \
 	otx2_ethdev.c
 
 LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_common_octeontx2 -lm
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 0bd32446b..6cdd036e9 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -3,6 +3,7 @@
 #
 
 sources = files(
+		'otx2_mac.c',
 		'otx2_ethdev.c',
 		)
 
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 05fa8988e..08f03b4c3 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -8,27 +8,277 @@
 
 #include "otx2_ethdev.h"
 
+static inline void
+otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev)
+{
+	RTE_SET_USED(eth_dev);
+}
+
+static inline void
+otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev)
+{
+	RTE_SET_USED(eth_dev);
+}
+
+static inline uint64_t
+nix_get_rx_offload_capa(struct otx2_eth_dev *dev)
+{
+	uint64_t capa = NIX_RX_OFFLOAD_CAPA;
+
+	if (otx2_dev_is_vf(dev))
+		capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+
+	return capa;
+}
+
+static inline uint64_t
+nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
+{
+	RTE_SET_USED(dev);
+
+	return NIX_TX_OFFLOAD_CAPA;
+}
+
+static int
+nix_lf_free(struct otx2_eth_dev *dev)
+{
+	struct otx2_mbox *mbox = dev->mbox;
+	struct nix_lf_free_req *req;
+	struct ndc_sync_op *ndc_req;
+	int rc;
+
+	/* Sync NDC-NIX for LF */
+	ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox);
+	ndc_req->nix_lf_tx_sync = 1;
+	ndc_req->nix_lf_rx_sync = 1;
+	rc = otx2_mbox_process(mbox);
+	if (rc)
+		otx2_err("Error on NDC-NIX-[TX, RX] LF sync, rc %d", rc);
+
+	req = otx2_mbox_alloc_msg_nix_lf_free(mbox);
+	/* Let AF driver free all this nix lf's
+	 * NPC entries allocated using NPC MBOX.
+	 */
+	req->flags = 0;
+
+	return otx2_mbox_process(mbox);
+}
+
+static inline int
+nix_lf_attach(struct otx2_eth_dev *dev)
+{
+	struct otx2_mbox *mbox = dev->mbox;
+	struct rsrc_attach_req *req;
+
+	/* Attach NIX(lf) */
+	req = otx2_mbox_alloc_msg_attach_resources(mbox);
+	req->modify = true;
+	req->nixlf = true;
+
+	return otx2_mbox_process(mbox);
+}
+
+static inline int
+nix_lf_get_msix_offset(struct otx2_eth_dev *dev)
+{
+	struct otx2_mbox *mbox = dev->mbox;
+	struct msix_offset_rsp *msix_rsp;
+	int rc;
+
+	/* Get NPA and NIX MSIX vector offsets */
+	otx2_mbox_alloc_msg_msix_offset(mbox);
+
+	rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp);
+
+	dev->nix_msixoff = msix_rsp->nix_msixoff;
+
+	return rc;
+}
+
+static inline int
+otx2_eth_dev_lf_detach(struct otx2_mbox *mbox)
+{
+	struct rsrc_detach_req *req;
+
+	req = otx2_mbox_alloc_msg_detach_resources(mbox);
+
+	/* Detach all except npa lf */
+	req->partial = true;
+	req->nixlf = true;
+	req->sso = true;
+	req->ssow = true;
+	req->timlfs = true;
+	req->cptlfs = true;
+
+	return otx2_mbox_process(mbox);
+}
+
 static int
 otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
 {
-	RTE_SET_USED(eth_dev);
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct rte_pci_device *pci_dev;
+	int rc, max_entries;
 
-	return -ENODEV;
+	/* For secondary processes, the primary has done all the work */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+		/* Setup callbacks for secondary process */
+		otx2_eth_set_tx_function(eth_dev);
+		otx2_eth_set_rx_function(eth_dev);
+		return 0;
+	}
+
+	pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+	rte_eth_copy_pci_info(eth_dev, pci_dev);
+	eth_dev->data->dev_flags |= RTE_ETH_DEV_CLOSE_REMOVE;
+
+	/* Zero out everything after OTX2_DEV to allow proper dev_reset() */
+	memset(&dev->otx2_eth_dev_data_start, 0, sizeof(*dev) -
+		offsetof(struct otx2_eth_dev, otx2_eth_dev_data_start));
+
+	if (!dev->mbox_active) {
+		/* Initialize the base otx2_dev object
+		 * only if not already present
+		 */
+		rc = otx2_dev_init(pci_dev, dev);
+		if (rc) {
+			otx2_err("Failed to initialize otx2_dev rc=%d", rc);
+			goto error;
+		}
+	}
+
+	/* Grab the NPA LF if required */
+	rc = otx2_npa_lf_init(pci_dev, dev);
+	if (rc)
+		goto otx2_dev_uninit;
+
+	dev->configured = 0;
+	dev->drv_inited = true;
+	dev->base = dev->bar2 + (RVU_BLOCK_ADDR_NIX0 << 20);
+	dev->lmt_addr = dev->bar2 + (RVU_BLOCK_ADDR_LMT << 20);
+
+	/* Attach NIX LF */
+	rc = nix_lf_attach(dev);
+	if (rc)
+		goto otx2_npa_uninit;
+
+	/* Get NIX MSIX offset */
+	rc = nix_lf_get_msix_offset(dev);
+	if (rc)
+		goto otx2_npa_uninit;
+
+	/* Get maximum number of supported MAC entries */
+	max_entries = otx2_cgx_mac_max_entries_get(dev);
+	if (max_entries < 0) {
+		otx2_err("Failed to get max entries for mac addr");
+		rc = -ENOTSUP;
+		goto mbox_detach;
+	}
+
+	/* For VFs, returned max_entries will be 0. But to keep default MAC
+	 * address, one entry must be allocated. So setting up to 1.
+	 */
+	if (max_entries == 0)
+		max_entries = 1;
+
+	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", max_entries *
+					       RTE_ETHER_ADDR_LEN, 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		otx2_err("Failed to allocate memory for mac addr");
+		rc = -ENOMEM;
+		goto mbox_detach;
+	}
+
+	dev->max_mac_entries = max_entries;
+
+	rc = otx2_nix_mac_addr_get(eth_dev, dev->mac_addr);
+	if (rc)
+		goto free_mac_addrs;
+
+	/* Update the mac address */
+	memcpy(eth_dev->data->mac_addrs, dev->mac_addr, RTE_ETHER_ADDR_LEN);
+
+	/* Also sync same MAC address to CGX table */
+	otx2_cgx_mac_addr_set(eth_dev, &eth_dev->data->mac_addrs[0]);
+
+	dev->tx_offload_capa = nix_get_tx_offload_capa(dev);
+	dev->rx_offload_capa = nix_get_rx_offload_capa(dev);
+
+	if (otx2_dev_is_A0(dev)) {
+		dev->hwcap |= OTX2_FIXUP_F_MIN_4K_Q;
+		dev->hwcap |= OTX2_FIXUP_F_LIMIT_CQ_FULL;
+	}
+
+	otx2_nix_dbg("Port=%d pf=%d vf=%d ver=%s msix_off=%d hwcap=0x%" PRIx64
+		     " rxoffload_capa=0x%" PRIx64 " txoffload_capa=0x%" PRIx64,
+		     eth_dev->data->port_id, dev->pf, dev->vf,
+		     OTX2_ETH_DEV_PMD_VERSION, dev->nix_msixoff, dev->hwcap,
+		     dev->rx_offload_capa, dev->tx_offload_capa);
+	return 0;
+
+free_mac_addrs:
+	rte_free(eth_dev->data->mac_addrs);
+mbox_detach:
+	otx2_eth_dev_lf_detach(dev->mbox);
+otx2_npa_uninit:
+	otx2_npa_lf_fini();
+otx2_dev_uninit:
+	otx2_dev_fini(pci_dev, dev);
+error:
+	otx2_err("Failed to init nix eth_dev rc=%d", rc);
+	return rc;
 }
 
 static int
 otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
 {
-	RTE_SET_USED(eth_dev);
-	RTE_SET_USED(mbox_close);
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct rte_pci_device *pci_dev;
+	int rc;
 
-	return -ENODEV;
+	/* Nothing to be done for secondary processes */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	rc = nix_lf_free(dev);
+	if (rc)
+		otx2_err("Failed to free nix lf, rc=%d", rc);
+
+	rc = otx2_npa_lf_fini();
+	if (rc)
+		otx2_err("Failed to cleanup npa lf, rc=%d", rc);
+
+	rte_free(eth_dev->data->mac_addrs);
+	eth_dev->data->mac_addrs = NULL;
+	dev->drv_inited = false;
+
+	pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+
+	rc = otx2_eth_dev_lf_detach(dev->mbox);
+	if (rc)
+		otx2_err("Failed to detach resources, rc=%d", rc);
+
+	/* Check if mbox close is needed */
+	if (!mbox_close)
+		return 0;
+
+	if (otx2_npa_lf_active(dev) || otx2_dev_active_vfs(dev)) {
+		/* Will be freed later by PMD */
+		eth_dev->data->dev_private = NULL;
+		return 0;
+	}
+
+	otx2_dev_fini(pci_dev, dev);
+	return 0;
 }
 
 static int
 nix_remove(struct rte_pci_device *pci_dev)
 {
 	struct rte_eth_dev *eth_dev;
+	struct otx2_idev_cfg *idev;
+	struct otx2_dev *otx2_dev;
 	int rc;
 
 	eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
@@ -45,7 +295,24 @@ nix_remove(struct rte_pci_device *pci_dev)
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
+	/* Check for common resources */
+	idev = otx2_intra_dev_get_cfg();
+	if (!idev || !idev->npa_lf || idev->npa_lf->pci_dev != pci_dev)
+		return 0;
+
+	otx2_dev = container_of(idev->npa_lf, struct otx2_dev, npalf);
+
+	if (otx2_npa_lf_active(otx2_dev) || otx2_dev_active_vfs(otx2_dev))
+		goto exit;
+
+	/* Safe to cleanup mbox as no more users */
+	otx2_dev_fini(pci_dev, otx2_dev);
+	rte_free(otx2_dev);
 	return 0;
+
+exit:
+	otx2_info("%s: common resource in use by other devices", pci_dev->name);
+	return -EAGAIN;
 }
 
 static int
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index fd01a3254..d9f72686a 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -8,14 +8,76 @@
 
 #include
 #include
+#include
 
 #include "otx2_common.h"
 #include "otx2_dev.h"
 #include "otx2_irq.h"
 #include "otx2_mempool.h"
 
+#define OTX2_ETH_DEV_PMD_VERSION	"1.0"
+
+/* Ethdev HWCAP and Fixup flags. Use from MSB bits to avoid conflict with dev */
+
+/* Minimum CQ size should be 4K */
+#define OTX2_FIXUP_F_MIN_4K_Q		BIT_ULL(63)
+#define otx2_ethdev_fixup_is_min_4k_q(dev) \
+	((dev)->hwcap & OTX2_FIXUP_F_MIN_4K_Q)
+/* Limit CQ being full */
+#define OTX2_FIXUP_F_LIMIT_CQ_FULL	BIT_ULL(62)
+#define otx2_ethdev_fixup_is_limit_cq_full(dev) \
+	((dev)->hwcap & OTX2_FIXUP_F_LIMIT_CQ_FULL)
+
+/* Used for struct otx2_eth_dev::flags */
+#define OTX2_LINK_CFG_IN_PROGRESS_F	BIT_ULL(0)
+
+#define NIX_TX_OFFLOAD_CAPA ( \
+	DEV_TX_OFFLOAD_MBUF_FAST_FREE	| \
+	DEV_TX_OFFLOAD_MT_LOCKFREE	| \
+	DEV_TX_OFFLOAD_VLAN_INSERT	| \
+	DEV_TX_OFFLOAD_QINQ_INSERT	| \
+	DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM	| \
+	DEV_TX_OFFLOAD_OUTER_UDP_CKSUM	| \
+	DEV_TX_OFFLOAD_TCP_CKSUM	| \
+	DEV_TX_OFFLOAD_UDP_CKSUM	| \
+	DEV_TX_OFFLOAD_SCTP_CKSUM	| \
+	DEV_TX_OFFLOAD_MULTI_SEGS	| \
+	DEV_TX_OFFLOAD_IPV4_CKSUM)
+
+#define NIX_RX_OFFLOAD_CAPA ( \
+	DEV_RX_OFFLOAD_CHECKSUM		| \
+	DEV_RX_OFFLOAD_SCTP_CKSUM	| \
+	DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM	| \
+	DEV_RX_OFFLOAD_SCATTER		| \
+	DEV_RX_OFFLOAD_JUMBO_FRAME	| \
+	DEV_RX_OFFLOAD_OUTER_UDP_CKSUM	| \
+	DEV_RX_OFFLOAD_VLAN_STRIP	| \
+	DEV_RX_OFFLOAD_VLAN_FILTER	| \
+	DEV_RX_OFFLOAD_QINQ_STRIP	| \
+	DEV_RX_OFFLOAD_TIMESTAMP)
+
 struct otx2_eth_dev {
 	OTX2_DEV; /* Base class */
+	MARKER otx2_eth_dev_data_start;
+	uint16_t sqb_size;
+	uint16_t rx_chan_base;
+	uint16_t tx_chan_base;
+	uint8_t rx_chan_cnt;
+	uint8_t tx_chan_cnt;
+	uint8_t lso_tsov4_idx;
+	uint8_t lso_tsov6_idx;
+	uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
+	uint8_t max_mac_entries;
+	uint8_t configured;
+	uint16_t nix_msixoff;
+	uintptr_t base;
+	uintptr_t lmt_addr;
+	uint16_t rx_offload_flags; /* Selected Rx offload flags(NIX_RX_*_F) */
+	uint64_t rx_offloads;
+	uint16_t tx_offload_flags; /* Selected Tx offload flags(NIX_TX_*_F) */
+	uint64_t tx_offloads;
+	uint64_t rx_offload_capa;
+	uint64_t tx_offload_capa;
 } __rte_cache_aligned;
 
 static inline struct otx2_eth_dev *
@@ -24,4 +86,14 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
 	return eth_dev->data->dev_private;
 }
 
+/* CGX */
+int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
+int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
+int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
+			  struct rte_ether_addr *addr);
+
+/* Mac address handling */
+int otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr);
+int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev);
+
 #endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_mac.c b/drivers/net/octeontx2/otx2_mac.c
new file mode 100644
index 000000000..89b0ca6b0
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_mac.c
@@ -0,0 +1,72 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include
+
+#include "otx2_dev.h"
+#include "otx2_ethdev.h"
+
+int
+otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct cgx_mac_addr_set_or_get *req;
+	struct otx2_mbox *mbox = dev->mbox;
+	int rc;
+
+	if (otx2_dev_is_vf(dev))
+		return -ENOTSUP;
+
+	if (otx2_dev_active_vfs(dev))
+		return -ENOTSUP;
+
+	req = otx2_mbox_alloc_msg_cgx_mac_addr_set(mbox);
+	otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
+
+	rc = otx2_mbox_process(mbox);
+	if (rc)
+		otx2_err("Failed to set mac address in CGX, rc=%d", rc);
+
+	return 0;
+}
+
+int
+otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev)
+{
+	struct cgx_max_dmac_entries_get_rsp *rsp;
+	struct otx2_mbox *mbox = dev->mbox;
+	int rc;
+
+	if (otx2_dev_is_vf(dev))
+		return 0;
+
+	otx2_mbox_alloc_msg_cgx_mac_max_entries_get(mbox);
+	rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+	if (rc)
+		return rc;
+
+	return rsp->max_dmac_filters;
+}
+
+int
+otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_mbox *mbox = dev->mbox;
+	struct nix_get_mac_addr_rsp *rsp;
+	int rc;
+
+	otx2_mbox_alloc_msg_nix_get_mac_addr(mbox);
+	otx2_mbox_msg_send(mbox, 0);
+	rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
+	if (rc) {
+		otx2_err("Failed to get mac address, rc=%d", rc);
+		goto done;
+	}
+
+	otx2_mbox_memcpy(addr, rsp->mac_addr, RTE_ETHER_ADDR_LEN);
+
+done:
+	return rc;
+}
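
The memset() in otx2_eth_dev_init() relies on a marker member plus offsetof()
so a later reset wipes driver state while preserving the OTX2_DEV base class.
A self-contained sketch of that pattern, with illustrative names rather than
the driver's own:

#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

struct dev_priv {
	uint64_t base_state;	/* base-class state, survives a soft reset */
	char data_start;	/* marker: everything from here is resettable */
	uint16_t configured;
	uint64_t rx_offloads;
};

static void
dev_soft_reset(struct dev_priv *d)
{
	/* Zero from the marker to the end of the struct, as the patch does
	 * with otx2_eth_dev_data_start.
	 */
	memset(&d->data_start, 0,
	       sizeof(*d) - offsetof(struct dev_priv, data_start));
}

int
main(void)
{
	struct dev_priv d = { .base_state = 7, .configured = 1 };

	dev_soft_reset(&d);
	assert(d.base_state == 7 && d.configured == 0);
	return 0;
}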
From patchwork Sun Jun 2 15:23:40 2019
X-Patchwork-Submitter: Jerin Jacob Kollanukkaran
X-Patchwork-Id: 54060
From: Jerin Jacob <jerinj@marvell.com>
To: dev@dpdk.org, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: Pavan Nikhilesh
Date: Sun, 2 Jun 2019 20:53:40 +0530
Message-ID: <20190602152434.23996-5-jerinj@marvell.com>
In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com>
Subject: [dpdk-dev] [PATCH v1 04/58] net/octeontx2: add devargs parsing functions

From: Jerin Jacob <jerinj@marvell.com>

Add the various devargs command-line options supported by this driver.

Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram
Signed-off-by: Pavan Nikhilesh
Signed-off-by: Kiran Kumar K
---
 drivers/net/octeontx2/Makefile              |   3 +-
 drivers/net/octeontx2/meson.build           |   1 +
 drivers/net/octeontx2/otx2_ethdev.c         |   7 +
 drivers/net/octeontx2/otx2_ethdev.h         |  20 +++
 drivers/net/octeontx2/otx2_ethdev_devargs.c | 143 ++++++++++++++++++++
 drivers/net/octeontx2/otx2_rx.h             |  10 ++
 6 files changed, 183 insertions(+), 1 deletion(-)
 create mode 100644 drivers/net/octeontx2/otx2_ethdev_devargs.c
 create mode 100644 drivers/net/octeontx2/otx2_rx.h

diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 9ca1eea99..dbcfec5b4 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -31,7 +31,8 @@ LIBABIVER := 1
 #
 SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
 	otx2_mac.c \
-	otx2_ethdev.c
+	otx2_ethdev.c \
+	otx2_ethdev_devargs.c
 
 LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_common_octeontx2 -lm
 LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs -lrte_bus_pci -lrte_mempool_octeontx2
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 6cdd036e9..57657de3d 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -5,6 +5,7 @@
 sources = files(
 		'otx2_mac.c',
 		'otx2_ethdev.c',
+		'otx2_ethdev_devargs.c'
 		)
 
 allow_experimental_apis = true
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 08f03b4c3..eeba0c2c6 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -137,6 +137,13 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
 	memset(&dev->otx2_eth_dev_data_start, 0, sizeof(*dev) -
 		offsetof(struct otx2_eth_dev, otx2_eth_dev_data_start));
 
+	/* Parse devargs string */
+	rc = otx2_ethdev_parse_devargs(eth_dev->device->devargs, dev);
+	if (rc) {
+		otx2_err("Failed to parse devargs rc=%d", rc);
+		goto error;
+	}
+
 	if (!dev->mbox_active) {
 		/* Initialize the base otx2_dev object
 		 * only if not already present
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index d9f72686a..f91e5fcac 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -9,11 +9,13 @@
 
 #include
 #include
+#include
 
 #include "otx2_common.h"
 #include "otx2_dev.h"
 #include "otx2_irq.h"
 #include "otx2_mempool.h"
+#include "otx2_rx.h"
 
 #define OTX2_ETH_DEV_PMD_VERSION	"1.0"
 
@@ -31,6 +33,8 @@
 /* Used for struct otx2_eth_dev::flags */
 #define OTX2_LINK_CFG_IN_PROGRESS_F	BIT_ULL(0)
 
+#define NIX_RSS_RETA_SIZE		64
+
 #define NIX_TX_OFFLOAD_CAPA ( \
 	DEV_TX_OFFLOAD_MBUF_FAST_FREE	| \
 	DEV_TX_OFFLOAD_MT_LOCKFREE	| \
@@ -56,6 +60,15 @@
 	DEV_RX_OFFLOAD_QINQ_STRIP	| \
 	DEV_RX_OFFLOAD_TIMESTAMP)
 
+struct otx2_rss_info {
+	uint16_t rss_size;
+};
+
+struct otx2_npc_flow_info {
+	uint16_t flow_prealloc_size;
+	uint16_t flow_max_priority;
+};
+
 struct otx2_eth_dev {
 	OTX2_DEV; /* Base class */
 	MARKER otx2_eth_dev_data_start;
@@ -72,12 +85,15 @@ struct otx2_eth_dev {
 	uint16_t nix_msixoff;
 	uintptr_t base;
 	uintptr_t lmt_addr;
+	uint16_t scalar_ena;
 	uint16_t rx_offload_flags; /* Selected Rx offload flags(NIX_RX_*_F) */
 	uint64_t rx_offloads;
 	uint16_t tx_offload_flags; /* Selected Tx offload flags(NIX_TX_*_F) */
 	uint64_t tx_offloads;
 	uint64_t rx_offload_capa;
 	uint64_t tx_offload_capa;
+	struct otx2_rss_info rss_info;
+	struct otx2_npc_flow_info npc_flow;
 } __rte_cache_aligned;
 
@@ -96,4 +112,8 @@ int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
 int otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr);
 int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev);
 
+/* Devargs */
+int otx2_ethdev_parse_devargs(struct rte_devargs *devargs,
+			      struct otx2_eth_dev *dev);
+
 #endif /* __OTX2_ETHDEV_H__ */
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
new file mode 100644
index 000000000..0b3e7c145
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -0,0 +1,143 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include
+#include
+
+#include "otx2_ethdev.h"
+
+static int
+parse_flow_max_priority(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint16_t val;
+
+	val = atoi(value);
+
+	/* Limit the max priority to 32 */
+	if (val < 1 || val > 32)
+		return -EINVAL;
+
+	*(uint16_t *)extra_args = val;
+
+	return 0;
+}
+
+static int
+parse_flow_prealloc_size(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint16_t val;
+
+	val = atoi(value);
+
+	/* Limit the prealloc size to 32 */
+	if (val < 1 || val > 32)
+		return -EINVAL;
+
+	*(uint16_t *)extra_args = val;
+
+	return 0;
+}
+
+static int
+parse_reta_size(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint32_t val;
+
+	val = atoi(value);
+
+	if (val <= ETH_RSS_RETA_SIZE_64)
+		val = ETH_RSS_RETA_SIZE_64;
+	else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
+		val = ETH_RSS_RETA_SIZE_128;
+	else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
+		val = ETH_RSS_RETA_SIZE_256;
+	else
+		val = NIX_RSS_RETA_SIZE;
+
+	*(uint16_t *)extra_args = val;
+
+	return 0;
+}
+
+static int
+parse_ptype_flag(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+	uint32_t val;
+
+	val = atoi(value);
+	if (val)
+		val = 0; /* Disable NIX_RX_OFFLOAD_PTYPE_F */
+
+	*(uint16_t *)extra_args = val;
+
+	return 0;
+}
+
+static int
+parse_flag(const char *key, const char *value, void *extra_args)
+{
+	RTE_SET_USED(key);
+
+	*(uint16_t *)extra_args = atoi(value);
+
+	return 0;
+}
+
+#define OTX2_RSS_RETA_SIZE "reta_size"
+#define OTX2_PTYPE_DISABLE "ptype_disable"
+#define OTX2_SCL_ENABLE "scalar_enable"
+#define OTX2_FLOW_PREALLOC_SIZE "flow_prealloc_size"
+#define OTX2_FLOW_MAX_PRIORITY "flow_max_priority"
+
+int
+otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
+{
+	uint16_t offload_flag = NIX_RX_OFFLOAD_PTYPE_F;
+	uint16_t rss_size = NIX_RSS_RETA_SIZE;
+	uint16_t flow_prealloc_size = 8;
+	uint16_t flow_max_priority = 3;
+	uint16_t scalar_enable = 0;
+	struct rte_kvargs *kvlist;
+
+	if (devargs == NULL)
+		goto null_devargs;
+
+	kvlist = rte_kvargs_parse(devargs->args, NULL);
+	if (kvlist == NULL)
+		goto exit;
+
+	rte_kvargs_process(kvlist, OTX2_PTYPE_DISABLE,
+			   &parse_ptype_flag, &offload_flag);
+	rte_kvargs_process(kvlist, OTX2_RSS_RETA_SIZE,
+			   &parse_reta_size, &rss_size);
+	rte_kvargs_process(kvlist, OTX2_SCL_ENABLE,
+			   &parse_flag, &scalar_enable);
+	rte_kvargs_process(kvlist, OTX2_FLOW_PREALLOC_SIZE,
+			   &parse_flow_prealloc_size, &flow_prealloc_size);
+	rte_kvargs_process(kvlist, OTX2_FLOW_MAX_PRIORITY,
+			   &parse_flow_max_priority, &flow_max_priority);
+	rte_kvargs_free(kvlist);
+
+null_devargs:
+	dev->rx_offload_flags = offload_flag;
+	dev->scalar_ena = scalar_enable;
+	dev->rss_info.rss_size = rss_size;
+	dev->npc_flow.flow_prealloc_size = flow_prealloc_size;
+	dev->npc_flow.flow_max_priority = flow_max_priority;
+	return 0;
+
+exit:
+	return -EINVAL;
+}
+
+RTE_PMD_REGISTER_PARAM_STRING(net_octeontx2,
+			      OTX2_RSS_RETA_SIZE "=<64|128|256>"
+			      OTX2_PTYPE_DISABLE "=1"
+			      OTX2_SCL_ENABLE "=1"
+			      OTX2_FLOW_PREALLOC_SIZE "=<1-32>"
+			      OTX2_FLOW_MAX_PRIORITY "=<1-32>");
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
new file mode 100644
index 000000000..1749c43ff
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_rx.h
@@ -0,0 +1,10 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#ifndef __OTX2_RX_H__
+#define __OTX2_RX_H__
+
+#define NIX_RX_OFFLOAD_PTYPE_F	BIT(1)
+
+#endif /* __OTX2_RX_H__ */
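
A minimal sketch of the rte_kvargs flow these handlers plug into; the key is
one registered above, while the sample string and default value are
illustrative. At runtime the same keys ride on the device argument, e.g. a
whitelist option of the form -w <pci-addr>,reta_size=256,scalar_enable=1
(PCI address hypothetical):

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <rte_kvargs.h>

static int
handle_reta_size(const char *key, const char *value, void *opaque)
{
	(void)key;
	/* Mirrors parse_reta_size(): the string value becomes a uint16_t */
	*(uint16_t *)opaque = (uint16_t)atoi(value);
	return 0;
}

int
main(void)
{
	uint16_t reta_size = 64; /* driver default, NIX_RSS_RETA_SIZE */
	struct rte_kvargs *kvlist;

	/* The devargs tail after the PCI address is a "key=value,..." list */
	kvlist = rte_kvargs_parse("reta_size=256", NULL);
	if (kvlist == NULL)
		return 1;

	rte_kvargs_process(kvlist, "reta_size", handle_reta_size, &reta_size);
	rte_kvargs_free(kvlist);

	printf("reta_size=%u\n", reta_size);
	return 0;
}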
From patchwork Sun Jun 2 15:23:41 2019
X-Patchwork-Submitter: Jerin Jacob Kollanukkaran
X-Patchwork-Id: 54061
From: Jerin Jacob <jerinj@marvell.com>
To: dev@dpdk.org, Jerin Jacob, Nithin Dabilpuram, Kiran Kumar K
Cc: Harman Kalra
Date: Sun, 2 Jun 2019 20:53:41 +0530
Message-ID: <20190602152434.23996-6-jerinj@marvell.com>
In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com>
Subject: [dpdk-dev] [PATCH v1 05/58] net/octeontx2: handle device error interrupts

From: Jerin Jacob <jerinj@marvell.com>

Handle device-specific error and RAS interrupts.

Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Nithin Dabilpuram
Signed-off-by: Harman Kalra
---
 drivers/net/octeontx2/Makefile          |   1 +
 drivers/net/octeontx2/meson.build       |   1 +
 drivers/net/octeontx2/otx2_ethdev.c     |  12 +-
 drivers/net/octeontx2/otx2_ethdev.h     |   4 +
 drivers/net/octeontx2/otx2_ethdev_irq.c | 140 ++++++++++++++++++++++++
 5 files changed, 156 insertions(+), 2 deletions(-)
 create mode 100644 drivers/net/octeontx2/otx2_ethdev_irq.c

diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index dbcfec5b4..a56143dcd 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -32,6 +32,7 @@ LIBABIVER := 1
 SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
 	otx2_mac.c \
 	otx2_ethdev.c \
+	otx2_ethdev_irq.c \
 	otx2_ethdev_devargs.c
 
 LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_common_octeontx2 -lm
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 57657de3d..c49e1cb80 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -5,6 +5,7 @@
 sources = files(
 		'otx2_mac.c',
 		'otx2_ethdev.c',
+		'otx2_ethdev_irq.c',
 		'otx2_ethdev_devargs.c'
 		)
 
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index eeba0c2c6..67a7ebb36 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -175,12 +175,17 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
 	if (rc)
 		goto otx2_npa_uninit;
 
+	/* Register LF irq handlers */
+	rc = otx2_nix_register_irqs(eth_dev);
+	if (rc)
+		goto mbox_detach;
+
 	/* Get maximum number of supported MAC entries */
 	max_entries = otx2_cgx_mac_max_entries_get(dev);
 	if (max_entries < 0) {
 		otx2_err("Failed to get max entries for mac addr");
 		rc = -ENOTSUP;
-		goto mbox_detach;
+		goto unregister_irq;
 	}
 
 	/* For VFs, returned max_entries will be 0. But to keep default MAC
@@ -194,7 +199,7 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
 	if (eth_dev->data->mac_addrs == NULL) {
 		otx2_err("Failed to allocate memory for mac addr");
 		rc = -ENOMEM;
-		goto mbox_detach;
+		goto unregister_irq;
 	}
 
 	dev->max_mac_entries = max_entries;
@@ -226,6 +231,8 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
 
 free_mac_addrs:
 	rte_free(eth_dev->data->mac_addrs);
+unregister_irq:
+	otx2_nix_unregister_irqs(eth_dev);
 mbox_detach:
 	otx2_eth_dev_lf_detach(dev->mbox);
 otx2_npa_uninit:
@@ -261,6 +268,7 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
 	dev->drv_inited = false;
 
 	pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+	otx2_nix_unregister_irqs(eth_dev);
 
 	rc = otx2_eth_dev_lf_detach(dev->mbox);
 	if (rc)
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index f91e5fcac..670d1ff0b 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -102,6 +102,10 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
 	return eth_dev->data->dev_private;
 }
 
+/* IRQ */
+int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev);
+void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
+
 /* CGX */
 int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
 int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
new file mode 100644
index 000000000..33fed93c4
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev_irq.c
@@ -0,0 +1,140 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include
+
+#include
+
+#include "otx2_ethdev.h"
+
+static void
+nix_lf_err_irq(void *param)
+{
+	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	uint64_t intr;
+
+	intr = otx2_read64(dev->base + NIX_LF_ERR_INT);
+	if (intr == 0)
+		return;
+
+	otx2_err("Err_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
+
+	/* Clear interrupt */
+	otx2_write64(intr, dev->base + NIX_LF_ERR_INT);
+}
+
+static int
+nix_lf_register_err_irq(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+	struct rte_intr_handle *handle = &pci_dev->intr_handle;
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	int rc, vec;
+
+	vec = dev->nix_msixoff + NIX_LF_INT_VEC_ERR_INT;
+
+	/* Clear err interrupt */
+	otx2_write64(~0ull, dev->base + NIX_LF_ERR_INT_ENA_W1C);
+	/* Set used interrupt vectors */
+	rc = otx2_register_irq(handle, nix_lf_err_irq, eth_dev, vec);
+	/* Enable all dev interrupt except for RQ_DISABLED */
+	otx2_write64(~BIT_ULL(11), dev->base + NIX_LF_ERR_INT_ENA_W1S);
+
+	return rc;
+}
+
+static void
+nix_lf_unregister_err_irq(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+	struct rte_intr_handle *handle = &pci_dev->intr_handle;
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	int vec;
+
+	vec = dev->nix_msixoff + NIX_LF_INT_VEC_ERR_INT;
+
+	/* Clear err interrupt */
+	otx2_write64(~0ull, dev->base + NIX_LF_ERR_INT_ENA_W1C);
+	otx2_unregister_irq(handle, nix_lf_err_irq, eth_dev, vec);
+}
+
+static void
+nix_lf_ras_irq(void *param)
+{
+	struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	uint64_t intr;
+
+	intr = otx2_read64(dev->base + NIX_LF_RAS);
+	if (intr == 0)
+		return;
+
+	otx2_err("Ras_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
+
+	/* Clear interrupt */
+	otx2_write64(intr, dev->base + NIX_LF_RAS);
+}
+
+static int
+nix_lf_register_ras_irq(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+	struct rte_intr_handle *handle = &pci_dev->intr_handle;
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	int rc, vec;
+
+	vec = dev->nix_msixoff + NIX_LF_INT_VEC_POISON;
+
+	/* Clear err interrupt */
+	otx2_write64(~0ull, dev->base + NIX_LF_RAS_ENA_W1C);
+	/* Set used interrupt vectors */
+	rc = otx2_register_irq(handle, nix_lf_ras_irq, eth_dev, vec);
+	/* Enable dev interrupt */
+	otx2_write64(~0ull, dev->base + NIX_LF_RAS_ENA_W1S);
+
+	return rc;
+}
+
+static void
+nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+	struct rte_intr_handle *handle = &pci_dev->intr_handle;
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	int vec;
+
+	vec = dev->nix_msixoff + NIX_LF_INT_VEC_POISON;
+
+	/* Clear err interrupt */
+	otx2_write64(~0ull, dev->base + NIX_LF_RAS_ENA_W1C);
+	otx2_unregister_irq(handle, nix_lf_ras_irq, eth_dev, vec);
+}
+
+int
+otx2_nix_register_irqs(struct rte_eth_dev *eth_dev)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	int rc;
+
+	if (dev->nix_msixoff == MSIX_VECTOR_INVALID) {
+		otx2_err("Invalid NIXLF MSIX vector offset vector: 0x%x",
+			 dev->nix_msixoff);
+		return -EINVAL;
+	}
+
+	/* Register lf err interrupt */
+	rc = nix_lf_register_err_irq(eth_dev);
+	/* Register RAS interrupt */
+	rc |= nix_lf_register_ras_irq(eth_dev);
+
+	return rc;
+}
+
+void
+otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev)
+{
+	nix_lf_unregister_err_irq(eth_dev);
+	nix_lf_unregister_ras_irq(eth_dev);
+}
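
The *_ENA_W1C/*_ENA_W1S register pairs written above follow the usual
write-one-to-clear / write-one-to-set convention, which lets individual enable
bits be changed without a read-modify-write cycle. A generic sketch of that
convention (the plain variable stands in for a volatile MMIO register and is
not the NIX register layout):

#include <stdint.h>

static uint64_t irq_ena; /* stands in for an MMIO interrupt-enable register */

static void w1s(uint64_t m) { irq_ena |= m;  } /* set enables where bit = 1 */
static void w1c(uint64_t m) { irq_ena &= ~m; } /* clear enables where bit = 1 */

int
main(void)
{
	w1c(~0ULL);		/* mask everything first, as the patch does */
	w1s(~(1ULL << 11));	/* enable all sources except bit 11 (RQ_DISABLED) */
	return irq_ena == ~(1ULL << 11) ? 0 : 1;
}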
From patchwork Sun Jun 2 15:23:42 2019
X-Patchwork-Submitter: Jerin Jacob Kollanukkaran
X-Patchwork-Id: 54062
From: Jerin Jacob <jerinj@marvell.com>
To: dev@dpdk.org, John McNamara, Marko Kovacevic, Jerin Jacob,
 Nithin Dabilpuram, Kiran Kumar K
Cc: Vamsi Attunuru, Harman Kalra
Date: Sun, 2 Jun 2019 20:53:42 +0530
Message-ID: <20190602152434.23996-7-jerinj@marvell.com>
In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com>
Subject: [dpdk-dev] [PATCH v1 06/58] net/octeontx2: add info get operation

From: Jerin Jacob <jerinj@marvell.com>

Add device information get operation.

Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Vamsi Attunuru
Signed-off-by: Harman Kalra
---
 doc/guides/nics/features/octeontx2.ini     |  4 ++
 doc/guides/nics/features/octeontx2_vec.ini |  4 ++
 doc/guides/nics/features/octeontx2_vf.ini  |  3 +
 drivers/net/octeontx2/Makefile             |  1 +
 drivers/net/octeontx2/meson.build          |  1 +
 drivers/net/octeontx2/otx2_ethdev.c        |  7 +++
 drivers/net/octeontx2/otx2_ethdev.h        | 27 +++++++++
 drivers/net/octeontx2/otx2_ethdev_ops.c    | 64 ++++++++++++++++++++++
 8 files changed, 111 insertions(+)
 create mode 100644 drivers/net/octeontx2/otx2_ethdev_ops.c

diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 0ec3b6983..1f0148669 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -4,5 +4,9 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Speed capabilities   = Y
 Linux VFIO           = Y
 ARMv8                = Y
+Lock-free Tx queue   = Y
+SR-IOV               = Y
+Multiprocess aware   = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 774f136c1..2b0644ee5 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -4,5 +4,9 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Speed capabilities   = Y
 Linux VFIO           = Y
 ARMv8                = Y
+Lock-free Tx queue   = Y
+SR-IOV               = Y
+Multiprocess aware   = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 36642354e..80f0d5c95 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -4,5 +4,8 @@
 ; Refer to default.ini for the full list of available PMD features.
 ;
 [Features]
+Speed capabilities   = Y
 Linux VFIO           = Y
 ARMv8                = Y
+Lock-free Tx queue   = Y
+Multiprocess aware   = Y
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index a56143dcd..820202eb2 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -33,6 +33,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
 	otx2_mac.c \
 	otx2_ethdev.c \
 	otx2_ethdev_irq.c \
+	otx2_ethdev_ops.c \
 	otx2_ethdev_devargs.c
 
 LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_common_octeontx2 -lm
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index c49e1cb80..a2dc983e3 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -6,6 +6,7 @@ sources = files(
 		'otx2_mac.c',
 		'otx2_ethdev.c',
 		'otx2_ethdev_irq.c',
+		'otx2_ethdev_ops.c',
 		'otx2_ethdev_devargs.c'
 		)
 
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 67a7ebb36..6e3c70559 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -64,6 +64,11 @@ nix_lf_free(struct otx2_eth_dev *dev)
 	return otx2_mbox_process(mbox);
 }
 
+/* Initialize and register driver with DPDK Application */
+static const struct eth_dev_ops otx2_eth_dev_ops = {
+	.dev_infos_get = otx2_nix_info_get,
+};
+
 static inline int
 nix_lf_attach(struct otx2_eth_dev *dev)
 {
@@ -120,6 +125,8 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
 	struct rte_pci_device *pci_dev;
 	int rc, max_entries;
 
+	eth_dev->dev_ops = &otx2_eth_dev_ops;
+
 	/* For secondary processes, the primary has done all the work */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
 		/* Setup callbacks for secondary process */
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 670d1ff0b..00baabaac 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -33,7 +33,30 @@
 /* Used for struct otx2_eth_dev::flags */
 #define OTX2_LINK_CFG_IN_PROGRESS_F	BIT_ULL(0)
 
+#define VLAN_TAG_SIZE			4
+#define NIX_HW_L2_OVERHEAD		22
+/* ETH_HLEN+2*VLAN_HLEN */
+#define NIX_MAX_HW_MTU			9190
+#define NIX_MAX_HW_FRS			(NIX_MAX_HW_MTU + NIX_HW_L2_OVERHEAD)
+#define NIX_MIN_HW_FRS			60
+#define NIX_HASH_KEY_SIZE		48 /* 352 Bits */
 #define NIX_RSS_RETA_SIZE		64
+#define NIX_RX_MIN_DESC			16
+#define NIX_RX_MIN_DESC_ALIGN		16
+#define NIX_RX_NB_SEG_MAX		6
+
+/* If PTP is enabled additional SEND MEM DESC is required which
+ * takes 2 words, hence max 7 iova address are possible
+ */
+#if defined(RTE_LIBRTE_IEEE1588)
+#define NIX_TX_NB_SEG_MAX		7
+#else
+#define NIX_TX_NB_SEG_MAX		9
+#endif
+
+#define NIX_RSS_OFFLOAD		(ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\
+				 ETH_RSS_TCP | ETH_RSS_SCTP | \
+				 ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD)
 
 #define NIX_TX_OFFLOAD_CAPA ( \
 	DEV_TX_OFFLOAD_MBUF_FAST_FREE	| \
@@ -102,6 +125,10 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
 	return eth_dev->data->dev_private;
 }
 
+/* Ops */
+void otx2_nix_info_get(struct rte_eth_dev *eth_dev,
+		       struct rte_eth_dev_info *dev_info);
+
 /* IRQ */
 int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev);
 void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
new file mode 100644
index 000000000..9f86635d4
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -0,0 +1,64 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include "otx2_ethdev.h"
+
+void
+otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
+{
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+	devinfo->min_rx_bufsize = NIX_MIN_HW_FRS;
+	devinfo->max_rx_pktlen = NIX_MAX_HW_FRS;
+	devinfo->max_rx_queues = RTE_MAX_QUEUES_PER_PORT;
+	devinfo->max_tx_queues = RTE_MAX_QUEUES_PER_PORT;
+	devinfo->max_mac_addrs = dev->max_mac_entries;
+	devinfo->max_vfs = pci_dev->max_vfs;
+	devinfo->max_mtu = devinfo->max_rx_pktlen - NIX_HW_L2_OVERHEAD;
+	devinfo->min_mtu = devinfo->min_rx_bufsize - NIX_HW_L2_OVERHEAD;
+
+	devinfo->rx_offload_capa = dev->rx_offload_capa;
+	devinfo->tx_offload_capa = dev->tx_offload_capa;
+	devinfo->rx_queue_offload_capa = 0;
+	devinfo->tx_queue_offload_capa = 0;
+
+	devinfo->reta_size = dev->rss_info.rss_size;
+	devinfo->hash_key_size = NIX_HASH_KEY_SIZE;
+	devinfo->flow_type_rss_offloads = NIX_RSS_OFFLOAD;
+
+	devinfo->default_rxconf = (struct rte_eth_rxconf) {
+		.rx_drop_en = 0,
+		.offloads = 0,
+	};
+
+	devinfo->default_txconf = (struct rte_eth_txconf) {
+		.offloads = 0,
+	};
+
+	devinfo->rx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = UINT16_MAX,
+		.nb_min = NIX_RX_MIN_DESC,
+		.nb_align = NIX_RX_MIN_DESC_ALIGN,
+		.nb_seg_max = NIX_RX_NB_SEG_MAX,
+		.nb_mtu_seg_max = NIX_RX_NB_SEG_MAX,
+	};
+	devinfo->rx_desc_lim.nb_max =
+		RTE_ALIGN_MUL_FLOOR(devinfo->rx_desc_lim.nb_max,
+				    NIX_RX_MIN_DESC_ALIGN);
+
+	devinfo->tx_desc_lim = (struct rte_eth_desc_lim) {
+		.nb_max = UINT16_MAX,
+		.nb_min = 1,
+		.nb_align = 1,
+		.nb_seg_max = NIX_TX_NB_SEG_MAX,
+		.nb_mtu_seg_max = NIX_TX_NB_SEG_MAX,
+	};
+
+	/* Auto negotiation disabled */
+	devinfo->speed_capa = ETH_LINK_SPEED_FIXED;
+	devinfo->speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
+				ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
+				ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+}
2019 08:24:59 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:24:57 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:24:57 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 388473F703F; Sun, 2 Jun 2019 08:24:56 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Vamsi Attunuru Date: Sun, 2 Jun 2019 20:53:43 +0530 Message-ID: <20190602152434.23996-8-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 07/58] net/octeontx2: add device configure operation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Add device configure operation. This calls the lf_alloc mailbox to allocate a NIX LF; on return, the AF supplies the attributes of the selected LF. Signed-off-by: Jerin Jacob Signed-off-by: Vamsi Attunuru Signed-off-by: Nithin Dabilpuram --- drivers/net/octeontx2/otx2_ethdev.c | 151 ++++++++++++++++++++++ drivers/net/octeontx2/otx2_ethdev.h | 11 ++ 2 files changed, 162 insertions(+) diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 6e3c70559..65d72a47f 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -39,6 +39,52 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev) return NIX_TX_OFFLOAD_CAPA; } +static int +nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq) +{ + struct otx2_mbox *mbox = dev->mbox; + struct nix_lf_alloc_req *req; + struct nix_lf_alloc_rsp *rsp; + int rc; + + req = otx2_mbox_alloc_msg_nix_lf_alloc(mbox); + req->rq_cnt = nb_rxq; + req->sq_cnt = nb_txq; + req->cq_cnt = nb_rxq; + /* XQE_SZ should be in sync with NIX_CQ_ENTRY_SZ */ + RTE_BUILD_BUG_ON(NIX_CQ_ENTRY_SZ != 128); + req->xqe_sz = NIX_XQESZ_W16; + req->rss_sz = dev->rss_info.rss_size; + req->rss_grps = NIX_RSS_GRPS; + req->npa_func = otx2_npa_pf_func_get(); + req->sso_func = otx2_sso_pf_func_get(); + req->rx_cfg = BIT_ULL(35 /* DIS_APAD */); + if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM | + DEV_RX_OFFLOAD_UDP_CKSUM)) { + req->rx_cfg |= BIT_ULL(37 /* CSUM_OL4 */); + req->rx_cfg |= BIT_ULL(36 /* CSUM_IL4 */); + } + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + dev->sqb_size = rsp->sqb_size; + dev->tx_chan_base = rsp->tx_chan_base; + dev->rx_chan_base = rsp->rx_chan_base; + dev->rx_chan_cnt = rsp->rx_chan_cnt; + dev->tx_chan_cnt = rsp->tx_chan_cnt; + dev->lso_tsov4_idx = rsp->lso_tsov4_idx; + dev->lso_tsov6_idx = rsp->lso_tsov6_idx; + dev->lf_tx_stats = rsp->lf_tx_stats; + dev->lf_rx_stats = rsp->lf_rx_stats; + dev->cints = rsp->cints; + dev->qints = rsp->qints; + dev->npc_flow.channel = dev->rx_chan_base; + + return 0; +} + static int nix_lf_free(struct otx2_eth_dev *dev) { @@ -64,9 +110,114 @@ nix_lf_free(struct otx2_eth_dev *dev) return otx2_mbox_process(mbox); } +static
int +otx2_nix_configure(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct rte_eth_dev_data *data = eth_dev->data; + struct rte_eth_conf *conf = &data->dev_conf; + struct rte_eth_rxmode *rxmode = &conf->rxmode; + struct rte_eth_txmode *txmode = &conf->txmode; + char ea_fmt[RTE_ETHER_ADDR_FMT_SIZE]; + struct rte_ether_addr *ea; + uint8_t nb_rxq, nb_txq; + int rc; + + rc = -EINVAL; + + /* Sanity checks */ + if (rte_eal_has_hugepages() == 0) { + otx2_err("Huge page is not configured"); + goto fail; + } + + if (rte_eal_iova_mode() != RTE_IOVA_VA) { + otx2_err("iova mode should be va"); + goto fail; + } + + if (conf->link_speeds & ETH_LINK_SPEED_FIXED) { + otx2_err("Setting link speed/duplex not supported"); + goto fail; + } + + if (conf->dcb_capability_en == 1) { + otx2_err("dcb enable is not supported"); + goto fail; + } + + if (conf->fdir_conf.mode != RTE_FDIR_MODE_NONE) { + otx2_err("Flow director is not supported"); + goto fail; + } + + if (rxmode->mq_mode != ETH_MQ_RX_NONE && + rxmode->mq_mode != ETH_MQ_RX_RSS) { + otx2_err("Unsupported mq rx mode %d", rxmode->mq_mode); + goto fail; + } + + if (txmode->mq_mode != ETH_MQ_TX_NONE) { + otx2_err("Unsupported mq tx mode %d", txmode->mq_mode); + goto fail; + } + + /* Free the resources allocated from the previous configure */ + if (dev->configured == 1) + nix_lf_free(dev); + + if (otx2_dev_is_A0(dev) && + (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) && + ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) || + (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) { + otx2_err("Outer IP and SCTP checksum unsupported"); + rc = -EINVAL; + goto fail; + } + + dev->rx_offloads = rxmode->offloads; + dev->tx_offloads = txmode->offloads; + dev->rss_info.rss_grps = NIX_RSS_GRPS; + + nb_rxq = RTE_MAX(data->nb_rx_queues, 1); + nb_txq = RTE_MAX(data->nb_tx_queues, 1); + + /* Alloc a nix lf */ + rc = nix_lf_alloc(dev, nb_rxq, nb_txq); + if (rc) { + otx2_err("Failed to init nix_lf rc=%d", rc); + goto fail; + } + + /* Update the mac address */ + ea = eth_dev->data->mac_addrs; + memcpy(ea, dev->mac_addr, RTE_ETHER_ADDR_LEN); + if (rte_is_zero_ether_addr(ea)) + rte_eth_random_addr((uint8_t *)ea); + + rte_ether_format_addr(ea_fmt, RTE_ETHER_ADDR_FMT_SIZE, ea); + + otx2_nix_dbg("Configured port%d mac=%s nb_rxq=%d nb_txq=%d" + " rx_offloads=0x%" PRIx64 " tx_offloads=0x%" PRIx64 "" + " rx_flags=0x%x tx_flags=0x%x", + eth_dev->data->port_id, ea_fmt, nb_rxq, + nb_txq, dev->rx_offloads, dev->tx_offloads, + dev->rx_offload_flags, dev->tx_offload_flags); + + /* All good */ + dev->configured = 1; + dev->configured_nb_rx_qs = data->nb_rx_queues; + dev->configured_nb_tx_qs = data->nb_tx_queues; + return 0; + +fail: + return rc; +} + /* Initialize and register driver with DPDK Application */ static const struct eth_dev_ops otx2_eth_dev_ops = { .dev_infos_get = otx2_nix_info_get, + .dev_configure = otx2_nix_configure, }; static inline int diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 00baabaac..27cad971c 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -39,11 +39,14 @@ #define NIX_MAX_HW_MTU 9190 #define NIX_MAX_HW_FRS (NIX_MAX_HW_MTU + NIX_HW_L2_OVERHEAD) #define NIX_MIN_HW_FRS 60 +/* Group 0 will be used for RSS, 1 -7 will be used for rte_flow RSS action*/ +#define NIX_RSS_GRPS 8 #define NIX_HASH_KEY_SIZE 48 /* 352 Bits */ #define NIX_RSS_RETA_SIZE 64 #define NIX_RX_MIN_DESC 16 #define NIX_RX_MIN_DESC_ALIGN 16 #define 
NIX_RX_NB_SEG_MAX 6 +#define NIX_CQ_ENTRY_SZ 128 /* If PTP is enabled additional SEND MEM DESC is required which * takes 2 words, hence max 7 iova address are possible @@ -85,9 +88,11 @@ struct otx2_rss_info { uint16_t rss_size; + uint8_t rss_grps; }; struct otx2_npc_flow_info { + uint16_t channel; /*rx channel */ uint16_t flow_prealloc_size; uint16_t flow_max_priority; }; @@ -104,7 +109,13 @@ struct otx2_eth_dev { uint8_t lso_tsov6_idx; uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; uint8_t max_mac_entries; + uint8_t lf_tx_stats; + uint8_t lf_rx_stats; + uint16_t cints; + uint16_t qints; uint8_t configured; + uint8_t configured_nb_rx_qs; + uint8_t configured_nb_tx_qs; uint16_t nix_msixoff; uintptr_t base; uintptr_t lmt_addr; From patchwork Sun Jun 2 15:23:44 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54064 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 6884A1B9A0; Sun, 2 Jun 2019 17:25:03 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 7BA201B99E for ; Sun, 2 Jun 2019 17:25:02 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FK7Go020263; Sun, 2 Jun 2019 08:25:02 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=URmNuRDHtFmB6eHuLOiyUBYAX5YRh5QL3MsjsR3laQQ=; b=KUr9nRQPoxTGWSX2+SB9LwnzfDRG5kdmQWkdzsWwo/lDIXrtUMA3Pyx7jR9/saW45OVq snReYLl1S8xkQ2D1V1UZI0z4RJAR4DQSmxcEPZMT1Iqe6zoBl8wLZRD6Q5q9a+PXR9+i 8hz9x2i9z7BnHV5TjDwQKd5t+a1oubYdWxqKMlerEX7rm82RU8jl/a1Amnlrc5I2+6IN pL139RR/ldzvp0M9bDT9EMM080x33fVkRfwnXhCVlEVmk1mOz6agZ93eQ5PLX9nXylUj M8GW44eRKvlpF14Lao1wtB/SJMLDYgDDTH6Rv1UfUpw8gwFVjhwgcKfAkia1/rHZiJAl Xg== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk491n-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:25:01 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:25:00 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:25:00 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 224543F703F; Sun, 2 Jun 2019 08:24:58 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: Date: Sun, 2 Jun 2019 20:53:44 +0530 Message-ID: <20190602152434.23996-9-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 08/58] net/octeontx2: handle queue specific error interrupts X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , 
List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Handle queue specific error interrupts. Signed-off-by: Jerin Jacob Signed-off-by: Nithin Dabilpuram --- drivers/net/octeontx2/otx2_ethdev.c | 16 +- drivers/net/octeontx2/otx2_ethdev.h | 9 ++ drivers/net/octeontx2/otx2_ethdev_irq.c | 191 ++++++++++++++++++++++++ 3 files changed, 215 insertions(+), 1 deletion(-) diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 65d72a47f..045855c2e 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -163,8 +163,10 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) } /* Free the resources allocated from the previous configure */ - if (dev->configured == 1) + if (dev->configured == 1) { + oxt2_nix_unregister_queue_irqs(eth_dev); nix_lf_free(dev); + } if (otx2_dev_is_A0(dev) && (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) && @@ -189,6 +191,13 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) goto fail; } + /* Register queue IRQs */ + rc = oxt2_nix_register_queue_irqs(eth_dev); + if (rc) { + otx2_err("Failed to register queue interrupts rc=%d", rc); + goto free_nix_lf; + } + /* Update the mac address */ ea = eth_dev->data->mac_addrs; memcpy(ea, dev->mac_addr, RTE_ETHER_ADDR_LEN); @@ -210,6 +219,8 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) dev->configured_nb_tx_qs = data->nb_tx_queues; return 0; +free_nix_lf: + rc = nix_lf_free(dev); fail: return rc; } @@ -413,6 +424,9 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; + /* Unregister queue irqs */ + oxt2_nix_unregister_queue_irqs(eth_dev); + rc = nix_lf_free(dev); if (rc) otx2_err("Failed to free nix lf, rc=%d", rc); diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 27cad971c..ca0587a63 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -86,6 +86,11 @@ DEV_RX_OFFLOAD_QINQ_STRIP | \ DEV_RX_OFFLOAD_TIMESTAMP) +struct otx2_qint { + struct rte_eth_dev *eth_dev; + uint8_t qintx; +}; + struct otx2_rss_info { uint16_t rss_size; uint8_t rss_grps; @@ -114,6 +119,7 @@ struct otx2_eth_dev { uint16_t cints; uint16_t qints; uint8_t configured; + uint8_t configured_qints; uint8_t configured_nb_rx_qs; uint8_t configured_nb_tx_qs; uint16_t nix_msixoff; @@ -126,6 +132,7 @@ struct otx2_eth_dev { uint64_t tx_offloads; uint64_t rx_offload_capa; uint64_t tx_offload_capa; + struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT]; struct otx2_rss_info rss_info; struct otx2_npc_flow_info npc_flow; } __rte_cache_aligned; @@ -142,7 +149,9 @@ void otx2_nix_info_get(struct rte_eth_dev *eth_dev, /* IRQ */ int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev); +int oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev); void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev); +void oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev); /* CGX */ int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev); diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c index 33fed93c4..476c7ea78 100644 --- a/drivers/net/octeontx2/otx2_ethdev_irq.c +++ b/drivers/net/octeontx2/otx2_ethdev_irq.c @@ -112,6 +112,197 @@ nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev) otx2_unregister_irq(handle, nix_lf_ras_irq, eth_dev, vec); } +static inline uint8_t +nix_lf_q_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t q, + uint32_t off, uint64_t 
mask) +{ + uint64_t reg, wdata; + uint8_t qint; + + wdata = (uint64_t)q << 44; + reg = otx2_atomic64_add_nosync(wdata, (int64_t *)(dev->base + off)); + + if (reg & BIT_ULL(42) /* OP_ERR */) { + otx2_err("Failed execute irq get off=0x%x", off); + return 0; + } + + qint = reg & 0xff; + wdata &= mask; + otx2_write64(wdata, dev->base + off); + + return qint; +} + +static inline uint8_t +nix_lf_rq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t rq) +{ + return nix_lf_q_irq_get_and_clear(dev, rq, NIX_LF_RQ_OP_INT, ~0xff00); +} + +static inline uint8_t +nix_lf_cq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t cq) +{ + return nix_lf_q_irq_get_and_clear(dev, cq, NIX_LF_CQ_OP_INT, ~0xff00); +} + +static inline uint8_t +nix_lf_sq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t sq) +{ + return nix_lf_q_irq_get_and_clear(dev, sq, NIX_LF_SQ_OP_INT, ~0x1ff00); +} + +static inline void +nix_lf_sq_debug_reg(struct otx2_eth_dev *dev, uint32_t off) +{ + uint64_t reg; + + reg = otx2_read64(dev->base + off); + if (reg & BIT_ULL(44)) + otx2_err("SQ=%d err_code=0x%x", + (int)((reg >> 8) & 0xfffff), (uint8_t)(reg & 0xff)); +} + +static void +nix_lf_q_irq(void *param) +{ + struct otx2_qint *qint = (struct otx2_qint *)param; + struct rte_eth_dev *eth_dev = qint->eth_dev; + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + uint8_t irq, qintx = qint->qintx; + int q, cq, rq, sq; + uint64_t intr; + + intr = otx2_read64(dev->base + NIX_LF_QINTX_INT(qintx)); + if (intr == 0) + return; + + otx2_err("Queue_intr=0x%" PRIx64 " qintx=%d pf=%d, vf=%d", + intr, qintx, dev->pf, dev->vf); + + /* Handle RQ interrupts */ + for (q = 0; q < eth_dev->data->nb_rx_queues; q++) { + rq = q % dev->qints; + irq = nix_lf_rq_irq_get_and_clear(dev, rq); + + if (irq & BIT_ULL(NIX_RQINT_DROP)) + otx2_err("RQ=%d NIX_RQINT_DROP", rq); + + if (irq & BIT_ULL(NIX_RQINT_RED)) + otx2_err("RQ=%d NIX_RQINT_RED", rq); + } + + /* Handle CQ interrupts */ + for (q = 0; q < eth_dev->data->nb_rx_queues; q++) { + cq = q % dev->qints; + irq = nix_lf_cq_irq_get_and_clear(dev, cq); + + if (irq & BIT_ULL(NIX_CQERRINT_DOOR_ERR)) + otx2_err("CQ=%d NIX_CQERRINT_DOOR_ERR", cq); + + if (irq & BIT_ULL(NIX_CQERRINT_WR_FULL)) + otx2_err("CQ=%d NIX_CQERRINT_WR_FULL", cq); + + if (irq & BIT_ULL(NIX_CQERRINT_CQE_FAULT)) + otx2_err("CQ=%d NIX_CQERRINT_CQE_FAULT", cq); + } + + /* Handle SQ interrupts */ + for (q = 0; q < eth_dev->data->nb_tx_queues; q++) { + sq = q % dev->qints; + irq = nix_lf_sq_irq_get_and_clear(dev, sq); + + if (irq & BIT_ULL(NIX_SQINT_LMT_ERR)) { + otx2_err("SQ=%d NIX_SQINT_LMT_ERR", sq); + nix_lf_sq_debug_reg(dev, NIX_LF_SQ_OP_ERR_DBG); + } + if (irq & BIT_ULL(NIX_SQINT_MNQ_ERR)) { + otx2_err("SQ=%d NIX_SQINT_MNQ_ERR", sq); + nix_lf_sq_debug_reg(dev, NIX_LF_MNQ_ERR_DBG); + } + if (irq & BIT_ULL(NIX_SQINT_SEND_ERR)) { + otx2_err("SQ=%d NIX_SQINT_SEND_ERR", sq); + nix_lf_sq_debug_reg(dev, NIX_LF_SEND_ERR_DBG); + } + if (irq & BIT_ULL(NIX_SQINT_SQB_ALLOC_FAIL)) { + otx2_err("SQ=%d NIX_SQINT_SQB_ALLOC_FAIL", sq); + nix_lf_sq_debug_reg(dev, NIX_LF_SEND_ERR_DBG); + } + } + + /* Clear interrupt */ + otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx)); +} + +int +oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int vec, q, sqs, rqs, qs, rc = 0; + + /* Figure out max qintx required */ + rqs = RTE_MIN(dev->qints, eth_dev->data->nb_rx_queues); + sqs = 
RTE_MIN(dev->qints, eth_dev->data->nb_tx_queues); + qs = RTE_MAX(rqs, sqs); + + dev->configured_qints = qs; + + for (q = 0; q < qs; q++) { + vec = dev->nix_msixoff + NIX_LF_INT_VEC_QINT_START + q; + + /* Clear QINT CNT */ + otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q)); + + /* Clear interrupt */ + otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1C(q)); + + dev->qints_mem[q].eth_dev = eth_dev; + dev->qints_mem[q].qintx = q; + + /* Sync qints_mem update */ + rte_smp_wmb(); + + /* Register queue irq vector */ + rc = otx2_register_irq(handle, nix_lf_q_irq, + &dev->qints_mem[q], vec); + if (rc) + break; + + otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q)); + otx2_write64(0, dev->base + NIX_LF_QINTX_INT(q)); + /* Enable QINT interrupt */ + otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1S(q)); + } + + return rc; +} + +void +oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); + struct rte_intr_handle *handle = &pci_dev->intr_handle; + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int vec, q; + + for (q = 0; q < dev->configured_qints; q++) { + vec = dev->nix_msixoff + NIX_LF_INT_VEC_QINT_START + q; + + /* Clear QINT CNT */ + otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q)); + otx2_write64(0, dev->base + NIX_LF_QINTX_INT(q)); + + /* Clear interrupt */ + otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1C(q)); + + /* Unregister queue irq vector */ + otx2_unregister_irq(handle, nix_lf_q_irq, + &dev->qints_mem[q], vec); + } +} + int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev) { From patchwork Sun Jun 2 15:23:45 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54065 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id D90771B99A; Sun, 2 Jun 2019 17:25:06 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 6A58E1B99A for ; Sun, 2 Jun 2019 17:25:05 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FKKoo020361; Sun, 2 Jun 2019 08:25:04 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=llYJjqYCf05BCr9m7wZQAEqCi2vvv8qAKicalLZgA/I=; b=M5HfEuoNi4Mf5Ksydoqq0CS7nitbhuB57T0yHfPuFgiL6dwdDLBOwA4ilsKtu5nThfdL XcDfH33I7358W59XREkvh3t5B1y0JQavE45vERJNKXx3pyFPjq/yjr7jOqXwr3rUm3Gn abc9m1WOdky3bLabl7Eqvg3A99K1JPuiEEaq6U6m+eVfjcWRSMpxzYAb9OMmRIBjm/Di OBSbI/3DLz9qD4u9HayH8pNJN13iR4ph1yS6T1rHzyk8Ek4rJzruDyQEVHNjMy5p8WIJ nXQDbcymxkSaNVkw0M1+R47VWBACZogazEO6VRU9A6k0eVk34pqVDasPRwGyyM4MZJLj /Q== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk491w-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:25:04 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:25:03 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via 
Frontend Transport; Sun, 2 Jun 2019 08:25:03 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id C885E3F703F; Sun, 2 Jun 2019 08:25:01 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Vivek Sharma Date: Sun, 2 Jun 2019 20:53:45 +0530 Message-ID: <20190602152434.23996-10-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 09/58] net/octeontx2: add context debug utils X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Add RQ,SQ,CQ context and CQE structure dump utils. Signed-off-by: Jerin Jacob Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_ethdev.h | 4 + drivers/net/octeontx2/otx2_ethdev_debug.c | 272 ++++++++++++++++++++++ drivers/net/octeontx2/otx2_ethdev_irq.c | 9 + 5 files changed, 287 insertions(+) create mode 100644 drivers/net/octeontx2/otx2_ethdev_debug.c diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index 820202eb2..0dfd43f4f 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -34,6 +34,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ otx2_ethdev.c \ otx2_ethdev_irq.c \ otx2_ethdev_ops.c \ + otx2_ethdev_debug.c \ otx2_ethdev_devargs.c LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_common_octeontx2 -lm diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index a2dc983e3..1c010c342 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -7,6 +7,7 @@ sources = files( 'otx2_ethdev.c', 'otx2_ethdev_irq.c', 'otx2_ethdev_ops.c', + 'otx2_ethdev_debug.c', 'otx2_ethdev_devargs.c' ) diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index ca0587a63..ff14a0129 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -153,6 +153,10 @@ int oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev); void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev); void oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev); +/* Debug */ +int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev); +void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq); + /* CGX */ int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev); int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev); diff --git a/drivers/net/octeontx2/otx2_ethdev_debug.c b/drivers/net/octeontx2/otx2_ethdev_debug.c new file mode 100644 index 000000000..39cda7637 --- /dev/null +++ b/drivers/net/octeontx2/otx2_ethdev_debug.c @@ -0,0 +1,272 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include "otx2_ethdev.h" + +#define nix_dump(fmt, ...) 
fprintf(stderr, fmt "\n", ##__VA_ARGS__) + +static inline void +nix_lf_sq_dump(struct nix_sq_ctx_s *ctx) +{ + nix_dump("W0: sqe_way_mask \t\t%d\nW0: cq \t\t\t\t%d", + ctx->sqe_way_mask, ctx->cq); + nix_dump("W0: sdp_mcast \t\t\t%d\nW0: substream \t\t\t0x%03x", + ctx->sdp_mcast, ctx->substream); + nix_dump("W0: qint_idx \t\t\t%d\nW0: ena \t\t\t%d\n", + ctx->qint_idx, ctx->ena); + + nix_dump("W1: sqb_count \t\t\t%d\nW1: default_chan \t\t%d", + ctx->sqb_count, ctx->default_chan); + nix_dump("W1: smq_rr_quantum \t\t%d\nW1: sso_ena \t\t\t%d", + ctx->smq_rr_quantum, ctx->sso_ena); + nix_dump("W1: xoff \t\t\t%d\nW1: cq_ena \t\t\t%d\nW1: smq\t\t\t\t%d\n", + ctx->xoff, ctx->cq_ena, ctx->smq); + + nix_dump("W2: sqe_stype \t\t\t%d\nW2: sq_int_ena \t\t\t%d", + ctx->sqe_stype, ctx->sq_int_ena); + nix_dump("W2: sq_int \t\t\t%d\nW2: sqb_aura \t\t\t%d", + ctx->sq_int, ctx->sqb_aura); + nix_dump("W2: smq_rr_count \t\t%d\n", ctx->smq_rr_count); + + nix_dump("W3: smq_next_sq_vld\t\t%d\nW3: smq_pend\t\t\t%d", + ctx->smq_next_sq_vld, ctx->smq_pend); + nix_dump("W3: smenq_next_sqb_vld \t%d\nW3: head_offset\t\t\t%d", + ctx->smenq_next_sqb_vld, ctx->head_offset); + nix_dump("W3: smenq_offset\t\t%d\nW3: tail_offset \t\t%d", + ctx->smenq_offset, ctx->tail_offset); + nix_dump("W3: smq_lso_segnum \t\t%d\nW3: smq_next_sq \t\t%d", + ctx->smq_lso_segnum, ctx->smq_next_sq); + nix_dump("W3: mnq_dis \t\t\t%d\nW3: lmt_dis \t\t\t%d", + ctx->mnq_dis, ctx->lmt_dis); + nix_dump("W3: cq_limit\t\t\t%d\nW3: max_sqe_size\t\t%d\n", + ctx->cq_limit, ctx->max_sqe_size); + + nix_dump("W4: next_sqb \t\t\t0x%" PRIx64 "", ctx->next_sqb); + nix_dump("W5: tail_sqb \t\t\t0x%" PRIx64 "", ctx->tail_sqb); + nix_dump("W6: smenq_sqb \t\t\t0x%" PRIx64 "", ctx->smenq_sqb); + nix_dump("W7: smenq_next_sqb \t\t0x%" PRIx64 "", ctx->smenq_next_sqb); + nix_dump("W8: head_sqb \t\t\t0x%" PRIx64 "", ctx->head_sqb); + + nix_dump("W9: vfi_lso_vld \t\t%d\nW9: vfi_lso_vlan1_ins_ena\t%d", + ctx->vfi_lso_vld, ctx->vfi_lso_vlan1_ins_ena); + nix_dump("W9: vfi_lso_vlan0_ins_ena\t%d\nW9: vfi_lso_mps\t\t\t%d", + ctx->vfi_lso_vlan0_ins_ena, ctx->vfi_lso_mps); + nix_dump("W9: vfi_lso_sb \t\t\t%d\nW9: vfi_lso_sizem1\t\t%d", + ctx->vfi_lso_sb, ctx->vfi_lso_sizem1); + nix_dump("W9: vfi_lso_total\t\t%d", ctx->vfi_lso_total); + + nix_dump("W10: scm_lso_rem \t\t0x%" PRIx64 "", + (uint64_t)ctx->scm_lso_rem); + nix_dump("W11: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs); + nix_dump("W12: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts); + nix_dump("W14: dropped_octs \t\t0x%" PRIx64 "", + (uint64_t)ctx->drop_octs); + nix_dump("W15: dropped_pkts \t\t0x%" PRIx64 "", + (uint64_t)ctx->drop_pkts); +} + +static inline void +nix_lf_rq_dump(struct nix_rq_ctx_s *ctx) +{ + nix_dump("W0: wqe_aura \t\t\t%d\nW0: substream \t\t\t0x%03x", + ctx->wqe_aura, ctx->substream); + nix_dump("W0: cq \t\t\t\t%d\nW0: ena_wqwd \t\t\t%d", + ctx->cq, ctx->ena_wqwd); + nix_dump("W0: ipsech_ena \t\t\t%d\nW0: sso_ena \t\t\t%d", + ctx->ipsech_ena, ctx->sso_ena); + nix_dump("W0: ena \t\t\t%d\n", ctx->ena); + + nix_dump("W1: lpb_drop_ena \t\t%d\nW1: spb_drop_ena \t\t%d", + ctx->lpb_drop_ena, ctx->spb_drop_ena); + nix_dump("W1: xqe_drop_ena \t\t%d\nW1: wqe_caching \t\t%d", + ctx->xqe_drop_ena, ctx->wqe_caching); + nix_dump("W1: pb_caching \t\t\t%d\nW1: sso_tt \t\t\t%d", + ctx->pb_caching, ctx->sso_tt); + nix_dump("W1: sso_grp \t\t\t%d\nW1: lpb_aura \t\t\t%d", + ctx->sso_grp, ctx->lpb_aura); + nix_dump("W1: spb_aura \t\t\t%d\n", ctx->spb_aura); + + nix_dump("W2: xqe_hdr_split \t\t%d\nW2: xqe_imm_copy \t\t%d", + 
ctx->xqe_hdr_split, ctx->xqe_imm_copy); + nix_dump("W2: xqe_imm_size \t\t%d\nW2: later_skip \t\t\t%d", + ctx->xqe_imm_size, ctx->later_skip); + nix_dump("W2: first_skip \t\t\t%d\nW2: lpb_sizem1 \t\t\t%d", + ctx->first_skip, ctx->lpb_sizem1); + nix_dump("W2: spb_ena \t\t\t%d\nW2: wqe_skip \t\t\t%d", + ctx->spb_ena, ctx->wqe_skip); + nix_dump("W2: spb_sizem1 \t\t\t%d\n", ctx->spb_sizem1); + + nix_dump("W3: spb_pool_pass \t\t%d\nW3: spb_pool_drop \t\t%d", + ctx->spb_pool_pass, ctx->spb_pool_drop); + nix_dump("W3: spb_aura_pass \t\t%d\nW3: spb_aura_drop \t\t%d", + ctx->spb_aura_pass, ctx->spb_aura_drop); + nix_dump("W3: wqe_pool_pass \t\t%d\nW3: wqe_pool_drop \t\t%d", + ctx->wqe_pool_pass, ctx->wqe_pool_drop); + nix_dump("W3: xqe_pass \t\t\t%d\nW3: xqe_drop \t\t\t%d\n", + ctx->xqe_pass, ctx->xqe_drop); + + nix_dump("W4: qint_idx \t\t\t%d\nW4: rq_int_ena \t\t\t%d", + ctx->qint_idx, ctx->rq_int_ena); + nix_dump("W4: rq_int \t\t\t%d\nW4: lpb_pool_pass \t\t%d", + ctx->rq_int, ctx->lpb_pool_pass); + nix_dump("W4: lpb_pool_drop \t\t%d\nW4: lpb_aura_pass \t\t%d", + ctx->lpb_pool_drop, ctx->lpb_aura_pass); + nix_dump("W4: lpb_aura_drop \t\t%d\n", ctx->lpb_aura_drop); + + nix_dump("W5: flow_tagw \t\t\t%d\nW5: bad_utag \t\t\t%d", + ctx->flow_tagw, ctx->bad_utag); + nix_dump("W5: good_utag \t\t\t%d\nW5: ltag \t\t\t%d\n", + ctx->good_utag, ctx->ltag); + + nix_dump("W6: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs); + nix_dump("W7: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts); + nix_dump("W8: drop_octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_octs); + nix_dump("W9: drop_pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_pkts); + nix_dump("W10: re_pkts \t\t\t0x%" PRIx64 "\n", (uint64_t)ctx->re_pkts); +} + +static inline void +nix_lf_cq_dump(struct nix_cq_ctx_s *ctx) +{ + nix_dump("W0: base \t\t\t0x%" PRIx64 "\n", ctx->base); + + nix_dump("W1: wrptr \t\t\t%" PRIx64 "", (uint64_t)ctx->wrptr); + nix_dump("W1: avg_con \t\t\t%d\nW1: cint_idx \t\t\t%d", + ctx->avg_con, ctx->cint_idx); + nix_dump("W1: cq_err \t\t\t%d\nW1: qint_idx \t\t\t%d", + ctx->cq_err, ctx->qint_idx); + nix_dump("W1: bpid \t\t\t%d\nW1: bp_ena \t\t\t%d\n", + ctx->bpid, ctx->bp_ena); + + nix_dump("W2: update_time \t\t%d\nW2: avg_level \t\t\t%d", + ctx->update_time, ctx->avg_level); + nix_dump("W2: head \t\t\t%d\nW2: tail \t\t\t%d\n", + ctx->head, ctx->tail); + + nix_dump("W3: cq_err_int_ena \t\t%d\nW3: cq_err_int \t\t\t%d", + ctx->cq_err_int_ena, ctx->cq_err_int); + nix_dump("W3: qsize \t\t\t%d\nW3: caching \t\t\t%d", + ctx->qsize, ctx->caching); + nix_dump("W3: substream \t\t\t0x%03x\nW3: ena \t\t\t%d", + ctx->substream, ctx->ena); + nix_dump("W3: drop_ena \t\t\t%d\nW3: drop \t\t\t%d", + ctx->drop_ena, ctx->drop); + nix_dump("W3: bp \t\t\t\t%d\n", ctx->bp); +} + +int +otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int rc, q, rq = eth_dev->data->nb_rx_queues; + int sq = eth_dev->data->nb_tx_queues; + struct otx2_mbox *mbox = dev->mbox; + struct nix_aq_enq_rsp *rsp; + struct nix_aq_enq_req *aq; + + for (q = 0; q < rq; q++) { + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = q; + aq->ctype = NIX_AQ_CTYPE_CQ; + aq->op = NIX_AQ_INSTOP_READ; + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) { + otx2_err("Failed to get cq context"); + goto fail; + } + nix_dump("============== port=%d cq=%d ===============", + eth_dev->data->port_id, q); + nix_lf_cq_dump(&rsp->cq); + } + + for (q = 0; q < rq; q++) { + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = q; + 
aq->ctype = NIX_AQ_CTYPE_RQ; + aq->op = NIX_AQ_INSTOP_READ; + + rc = otx2_mbox_process_msg(mbox, (void **)&rsp); + if (rc) { + otx2_err("Failed to get rq context"); + goto fail; + } + nix_dump("============== port=%d rq=%d ===============", + eth_dev->data->port_id, q); + nix_lf_rq_dump(&rsp->rq); + } + for (q = 0; q < sq; q++) { + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = q; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_READ; + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) { + otx2_err("Failed to get sq context"); + goto fail; + } + nix_dump("============== port=%d sq=%d ===============", + eth_dev->data->port_id, q); + nix_lf_sq_dump(&rsp->sq); + } + +fail: + return rc; +} + +/* Dumps struct nix_cqe_hdr_s and struct nix_rx_parse_s */ +void +otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq) +{ + const struct nix_rx_parse_s *rx = + (const struct nix_rx_parse_s *)((const uint64_t *)cq + 1); + + nix_dump("tag \t\t0x%x\tq \t\t%d\t\tnode \t\t%d\tcqe_type \t%d", + cq->tag, cq->q, cq->node, cq->cqe_type); + + nix_dump("W0: chan \t%d\t\tdesc_sizem1 \t%d", + rx->chan, rx->desc_sizem1); + nix_dump("W0: imm_copy \t%d\t\texpress \t%d", + rx->imm_copy, rx->express); + nix_dump("W0: wqwd \t%d\t\terrlev \t\t%d\t\terrcode \t%d", + rx->wqwd, rx->errlev, rx->errcode); + nix_dump("W0: latype \t%d\t\tlbtype \t\t%d\t\tlctype \t\t%d", + rx->latype, rx->lbtype, rx->lctype); + nix_dump("W0: ldtype \t%d\t\tletype \t\t%d\t\tlftype \t\t%d", + rx->ldtype, rx->letype, rx->lftype); + nix_dump("W0: lgtype \t%d \t\tlhtype \t\t%d", + rx->lgtype, rx->lhtype); + + nix_dump("W1: pkt_lenm1 \t%d", rx->pkt_lenm1); + nix_dump("W1: l2m \t%d\t\tl2b \t\t%d\t\tl3m \t\t%d\tl3b \t\t%d", + rx->l2m, rx->l2b, rx->l3m, rx->l3b); + nix_dump("W1: vtag0_valid %d\t\tvtag0_gone \t%d", + rx->vtag0_valid, rx->vtag0_gone); + nix_dump("W1: vtag1_valid %d\t\tvtag1_gone \t%d", + rx->vtag1_valid, rx->vtag1_gone); + nix_dump("W1: pkind \t%d", rx->pkind); + nix_dump("W1: vtag0_tci \t%d\t\tvtag1_tci \t%d", + rx->vtag0_tci, rx->vtag1_tci); + + nix_dump("W2: laflags \t%d\t\tlbflags\t\t%d\t\tlcflags \t%d", + rx->laflags, rx->lbflags, rx->lcflags); + nix_dump("W2: ldflags \t%d\t\tleflags\t\t%d\t\tlfflags \t%d", + rx->ldflags, rx->leflags, rx->lfflags); + nix_dump("W2: lgflags \t%d\t\tlhflags \t%d", + rx->lgflags, rx->lhflags); + + nix_dump("W3: eoh_ptr \t%d\t\twqe_aura \t%d\t\tpb_aura \t%d", + rx->eoh_ptr, rx->wqe_aura, rx->pb_aura); + nix_dump("W3: match_id \t%d", rx->match_id); + + nix_dump("W4: laptr \t%d\t\tlbptr \t\t%d\t\tlcptr \t\t%d", + rx->laptr, rx->lbptr, rx->lcptr); + nix_dump("W4: ldptr \t%d\t\tleptr \t\t%d\t\tlfptr \t\t%d", + rx->ldptr, rx->leptr, rx->lfptr); + nix_dump("W4: lgptr \t%d\t\tlhptr \t\t%d", rx->lgptr, rx->lhptr); + + nix_dump("W5: vtag0_ptr \t%d\t\tvtag1_ptr \t%d\t\tflow_key_alg \t%d", + rx->vtag0_ptr, rx->vtag1_ptr, rx->flow_key_alg); +} diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c index 476c7ea78..9bc9d99f8 100644 --- a/drivers/net/octeontx2/otx2_ethdev_irq.c +++ b/drivers/net/octeontx2/otx2_ethdev_irq.c @@ -23,6 +23,9 @@ nix_lf_err_irq(void *param) /* Clear interrupt */ otx2_write64(intr, dev->base + NIX_LF_ERR_INT); + + otx2_nix_queues_ctx_dump(eth_dev); + rte_panic("nix_lf_error_interrupt\n"); } static int @@ -75,6 +78,9 @@ nix_lf_ras_irq(void *param) /* Clear interrupt */ otx2_write64(intr, dev->base + NIX_LF_RAS); + + otx2_nix_queues_ctx_dump(eth_dev); + rte_panic("nix_lf_ras_interrupt\n"); } static int @@ -232,6 +238,9 @@ 
nix_lf_q_irq(void *param) /* Clear interrupt */ otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx)); + + otx2_nix_queues_ctx_dump(eth_dev); + rte_panic("nix_lf_q_interrupt\n"); } int From patchwork Sun Jun 2 15:23:46 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54066 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 5AA141B995; Sun, 2 Jun 2019 17:25:10 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 8139B1B9A8 for ; Sun, 2 Jun 2019 17:25:08 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FLOCR021032; Sun, 2 Jun 2019 08:25:08 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=9WSrop22NU4mZwCEoXpwqs5lsvRAjGf/hQgDMBnnrsY=; b=kHCasG39w9d9kRSeTz0bm9G1T4E1a7LKDywY4turLHvtD4UugEeIXROT+3wj1XXKFpcb gNwg9l3XsOHkJfz3DQC9bN2S9o0wdBaa/ekXX8HapOlrZpmn/eEzDey3sco9sATQ1hGY u43HJSYJ5ccvN3qa/sXsVaneIqjde8m14Y3PJiopACCh7cIWRiqjSculXEAjzo8sv5aY K/t224tec7rEs4KEsh/o/KdHG8nnp1lm0Za/VwF2lGEO0SdCs+Aek35TMikAJSVX/3lc BArFOEWKnDfu+gbOWfrm8mbdfPJR1/Gfmb15hXdmsgVI82wUvN3NR2CRyAnVw+3wybS1 JA== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk492a-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:25:07 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:25:06 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:25:06 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 940843F7041; Sun, 2 Jun 2019 08:25:04 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: Date: Sun, 2 Jun 2019 20:53:46 +0530 Message-ID: <20190602152434.23996-11-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 10/58] net/octeontx2: add register dump support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Kiran Kumar K Add register dump support and mark Registers dump in features. 
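The get_reg callback follows ethdev's two-call contract: when regs->data is NULL the driver only reports the register count in regs->length and the word size in regs->width, and a second call with an allocated buffer performs the dump itself. A minimal usage sketch from an application's point of view (the helper name, buffer handling and error paths are illustrative assumptions, not part of this patch):

#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <rte_ethdev.h>

/* Illustrative only: dump all NIX LF registers of a port through the
 * two-call get_reg contract that otx2_nix_dev_get_reg() implements.
 */
static int
dump_port_regs(uint16_t port_id)
{
        struct rte_dev_reg_info info;
        int rc;

        memset(&info, 0, sizeof(info));

        /* First call: data == NULL, driver fills in length and width */
        rc = rte_eth_dev_get_reg_info(port_id, &info);
        if (rc)
                return rc;

        info.data = calloc(info.length, info.width);
        if (info.data == NULL)
                return -ENOMEM;

        /* Second call: driver copies one value per register into data[] */
        rc = rte_eth_dev_get_reg_info(port_id, &info);

        free(info.data);
        return rc;
}

Note that otx2_nix_reg_dump() doubles as a stdout dumper when passed a NULL data pointer, which is how the NIX error-interrupt handlers in this patch reuse it.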
Signed-off-by: Kiran Kumar K Signed-off-by: Jerin Jacob --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + doc/guides/nics/features/octeontx2_vf.ini | 1 + drivers/net/octeontx2/otx2_ethdev.c | 1 + drivers/net/octeontx2/otx2_ethdev.h | 3 + drivers/net/octeontx2/otx2_ethdev_debug.c | 228 +++++++++++++++++++++ drivers/net/octeontx2/otx2_ethdev_irq.c | 6 + 7 files changed, 241 insertions(+) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 1f0148669..ce3067596 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -10,3 +10,4 @@ ARMv8 = Y Lock-free Tx queue = Y SR-IOV = Y Multiprocess aware = Y +Registers dump = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index 2b0644ee5..b2be52ccb 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -10,3 +10,4 @@ ARMv8 = Y Lock-free Tx queue = Y SR-IOV = Y Multiprocess aware = Y +Registers dump = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 80f0d5c95..76b0c3c10 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -9,3 +9,4 @@ Linux VFIO = Y ARMv8 = Y Lock-free Tx queue = Y Multiprocess aware = Y +Registers dump = Y diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 045855c2e..48d5a15d6 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -229,6 +229,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) static const struct eth_dev_ops otx2_eth_dev_ops = { .dev_infos_get = otx2_nix_info_get, .dev_configure = otx2_nix_configure, + .get_reg = otx2_nix_dev_get_reg, }; static inline int diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index ff14a0129..c01fe0211 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -154,6 +154,9 @@ void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev); void oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev); /* Debug */ +int otx2_nix_reg_dump(struct otx2_eth_dev *dev, uint64_t *data); +int otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev, + struct rte_dev_reg_info *regs); int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev); void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq); diff --git a/drivers/net/octeontx2/otx2_ethdev_debug.c b/drivers/net/octeontx2/otx2_ethdev_debug.c index 39cda7637..9f06e5505 100644 --- a/drivers/net/octeontx2/otx2_ethdev_debug.c +++ b/drivers/net/octeontx2/otx2_ethdev_debug.c @@ -5,6 +5,234 @@ #include "otx2_ethdev.h" #define nix_dump(fmt, ...) 
fprintf(stderr, fmt "\n", ##__VA_ARGS__) +#define NIX_REG_INFO(reg) {reg, #reg} + +struct nix_lf_reg_info { + uint32_t offset; + const char *name; +}; + +static const struct +nix_lf_reg_info nix_lf_reg[] = { + NIX_REG_INFO(NIX_LF_RX_SECRETX(0)), + NIX_REG_INFO(NIX_LF_RX_SECRETX(1)), + NIX_REG_INFO(NIX_LF_RX_SECRETX(2)), + NIX_REG_INFO(NIX_LF_RX_SECRETX(3)), + NIX_REG_INFO(NIX_LF_RX_SECRETX(4)), + NIX_REG_INFO(NIX_LF_RX_SECRETX(5)), + NIX_REG_INFO(NIX_LF_CFG), + NIX_REG_INFO(NIX_LF_GINT), + NIX_REG_INFO(NIX_LF_GINT_W1S), + NIX_REG_INFO(NIX_LF_GINT_ENA_W1C), + NIX_REG_INFO(NIX_LF_GINT_ENA_W1S), + NIX_REG_INFO(NIX_LF_ERR_INT), + NIX_REG_INFO(NIX_LF_ERR_INT_W1S), + NIX_REG_INFO(NIX_LF_ERR_INT_ENA_W1C), + NIX_REG_INFO(NIX_LF_ERR_INT_ENA_W1S), + NIX_REG_INFO(NIX_LF_RAS), + NIX_REG_INFO(NIX_LF_RAS_W1S), + NIX_REG_INFO(NIX_LF_RAS_ENA_W1C), + NIX_REG_INFO(NIX_LF_RAS_ENA_W1S), + NIX_REG_INFO(NIX_LF_SQ_OP_ERR_DBG), + NIX_REG_INFO(NIX_LF_MNQ_ERR_DBG), + NIX_REG_INFO(NIX_LF_SEND_ERR_DBG), +}; + +static int +nix_lf_get_reg_count(struct otx2_eth_dev *dev) +{ + int reg_count = 0; + + reg_count = RTE_DIM(nix_lf_reg); + /* NIX_LF_TX_STATX */ + reg_count += dev->lf_tx_stats; + /* NIX_LF_RX_STATX */ + reg_count += dev->lf_rx_stats; + /* NIX_LF_QINTX_CNT*/ + reg_count += dev->qints; + /* NIX_LF_QINTX_INT */ + reg_count += dev->qints; + /* NIX_LF_QINTX_ENA_W1S */ + reg_count += dev->qints; + /* NIX_LF_QINTX_ENA_W1C */ + reg_count += dev->qints; + /* NIX_LF_CINTX_CNT */ + reg_count += dev->cints; + /* NIX_LF_CINTX_WAIT */ + reg_count += dev->cints; + /* NIX_LF_CINTX_INT */ + reg_count += dev->cints; + /* NIX_LF_CINTX_INT_W1S */ + reg_count += dev->cints; + /* NIX_LF_CINTX_ENA_W1S */ + reg_count += dev->cints; + /* NIX_LF_CINTX_ENA_W1C */ + reg_count += dev->cints; + + return reg_count; +} + +int +otx2_nix_reg_dump(struct otx2_eth_dev *dev, uint64_t *data) +{ + uintptr_t nix_lf_base = dev->base; + bool dump_stdout; + uint64_t reg; + uint32_t i; + + dump_stdout = data ? 
0 : 1; + + for (i = 0; i < RTE_DIM(nix_lf_reg); i++) { + reg = otx2_read64(nix_lf_base + nix_lf_reg[i].offset); + if (dump_stdout && reg) + nix_dump("%32s = 0x%" PRIx64, + nix_lf_reg[i].name, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_TX_STATX */ + for (i = 0; i < dev->lf_tx_stats; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_TX_STATX(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_TX_STATX", i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_RX_STATX */ + for (i = 0; i < dev->lf_rx_stats; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_RX_STATX(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_RX_STATX", i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_QINTX_CNT*/ + for (i = 0; i < dev->qints; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_CNT(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_QINTX_CNT", i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_QINTX_INT */ + for (i = 0; i < dev->qints; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_INT(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_QINTX_INT", i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_QINTX_ENA_W1S */ + for (i = 0; i < dev->qints; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1S(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_QINTX_ENA_W1S", i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_QINTX_ENA_W1C */ + for (i = 0; i < dev->qints; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1C(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_QINTX_ENA_W1C", i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_CINTX_CNT */ + for (i = 0; i < dev->cints; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_CNT(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_CINTX_CNT", i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_CINTX_WAIT */ + for (i = 0; i < dev->cints; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_WAIT(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_CINTX_WAIT", i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_CINTX_INT */ + for (i = 0; i < dev->cints; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_INT(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_CINTX_INT", i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_CINTX_INT_W1S */ + for (i = 0; i < dev->cints; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_INT_W1S(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_CINTX_INT_W1S", i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_CINTX_ENA_W1S */ + for (i = 0; i < dev->cints; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1S(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_CINTX_ENA_W1S", i, reg); + if (data) + *data++ = reg; + } + + /* NIX_LF_CINTX_ENA_W1C */ + for (i = 0; i < dev->cints; i++) { + reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1C(i)); + if (dump_stdout && reg) + nix_dump("%32s_%d = 0x%" PRIx64, + "NIX_LF_CINTX_ENA_W1C", i, reg); + if (data) + *data++ = reg; + } + return 0; +} + +int +otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev, struct rte_dev_reg_info *regs) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + uint64_t *data = regs->data; + + if (data == NULL) { + regs->length = nix_lf_get_reg_count(dev); + regs->width = 8; + return 0; + } + + if 
(!regs->length || + regs->length == (uint32_t)nix_lf_get_reg_count(dev)) { + otx2_nix_reg_dump(dev, data); + return 0; + } + + return -ENOTSUP; +} static inline void nix_lf_sq_dump(struct nix_sq_ctx_s *ctx) diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c index 9bc9d99f8..7bb0ef35e 100644 --- a/drivers/net/octeontx2/otx2_ethdev_irq.c +++ b/drivers/net/octeontx2/otx2_ethdev_irq.c @@ -24,6 +24,8 @@ nix_lf_err_irq(void *param) /* Clear interrupt */ otx2_write64(intr, dev->base + NIX_LF_ERR_INT); + /* Dump registers to std out */ + otx2_nix_reg_dump(dev, NULL); otx2_nix_queues_ctx_dump(eth_dev); rte_panic("nix_lf_error_interrupt\n"); } @@ -79,6 +81,8 @@ nix_lf_ras_irq(void *param) /* Clear interrupt */ otx2_write64(intr, dev->base + NIX_LF_RAS); + /* Dump registers to std out */ + otx2_nix_reg_dump(dev, NULL); otx2_nix_queues_ctx_dump(eth_dev); rte_panic("nix_lf_ras_interrupt\n"); } @@ -239,6 +243,8 @@ nix_lf_q_irq(void *param) /* Clear interrupt */ otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx)); + /* Dump registers to std out */ + otx2_nix_reg_dump(dev, NULL); otx2_nix_queues_ctx_dump(eth_dev); rte_panic("nix_lf_q_interrupt\n"); } From patchwork Sun Jun 2 15:23:47 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54067 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id F328C1B9B1; Sun, 2 Jun 2019 17:25:12 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id B3A691B99F for ; Sun, 2 Jun 2019 17:25:11 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FJxgo021326; Sun, 2 Jun 2019 08:25:10 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=C7Ai0qlykO2hM6tCM5rTd9YjWbVIJD8chlNuE8O6/ng=; b=Y8nxCK2kAXWcI2rIFpVrcTPhmUNs4X5bXcPUit2UkqghQMKKJ8oonMMq4I2UzBrwZzSy fFtrGXxe9kGiB3w18uxeDoMo+EeM9c8EShRW13HXDtV0qp87otF+gpGbgKrZqyoP4fEp MpzC01jskjUM4NmfOFhSAtcQ/obrVhQ211U2VkNreQR8Qt8mi18sXACW9jrkjg6pQgQH GmukM2Xo5aCFk7hPGUAtME1eImO+5jE8DZrpc2UBPup0agbcjrSwgdM+2MZlqkZ62PBt pj4lgaaL82Zb0NifbjqSkz3BD1Ul/p5iSj0H7iCuTlQzjhSTZv6HM8jfC5kiE9S22Ui8 jA== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0a-0016f401.pphosted.com with ESMTP id 2supqkvqf0-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:25:10 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:25:09 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:25:09 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id A5DF03F703F; Sun, 2 Jun 2019 08:25:07 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: , Vamsi Attunuru Date: Sun, 2 Jun 2019 20:53:47 +0530 Message-ID: 
<20190602152434.23996-12-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 11/58] net/octeontx2: add link stats operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vamsi Attunuru Add link stats related operations and mark respective items in the documentation. Signed-off-by: Vamsi Attunuru Signed-off-by: Nithin Dabilpuram --- doc/guides/nics/features/octeontx2.ini | 2 + doc/guides/nics/features/octeontx2_vec.ini | 2 + doc/guides/nics/features/octeontx2_vf.ini | 2 + drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_ethdev.c | 8 ++ drivers/net/octeontx2/otx2_ethdev.h | 8 ++ drivers/net/octeontx2/otx2_link.c | 108 +++++++++++++++++++++ 8 files changed, 132 insertions(+) create mode 100644 drivers/net/octeontx2/otx2_link.c diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index ce3067596..60009ab6d 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -10,4 +10,6 @@ ARMv8 = Y Lock-free Tx queue = Y SR-IOV = Y Multiprocess aware = Y +Link status = Y +Link status event = Y Registers dump = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index b2be52ccb..3a859edd1 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -10,4 +10,6 @@ ARMv8 = Y Lock-free Tx queue = Y SR-IOV = Y Multiprocess aware = Y +Link status = Y +Link status event = Y Registers dump = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 76b0c3c10..e1cbd18b1 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -9,4 +9,6 @@ Linux VFIO = Y ARMv8 = Y Lock-free Tx queue = Y Multiprocess aware = Y +Link status = Y +Link status event = Y Registers dump = Y diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index 0dfd43f4f..aa428fe6a 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -31,6 +31,7 @@ LIBABIVER := 1 # SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ otx2_mac.c \ + otx2_link.c \ otx2_ethdev.c \ otx2_ethdev_irq.c \ otx2_ethdev_ops.c \ diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index 1c010c342..117d038ab 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -4,6 +4,7 @@ sources = files( 'otx2_mac.c', + 'otx2_link.c', 'otx2_ethdev.c', 'otx2_ethdev_irq.c', 'otx2_ethdev_ops.c', diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 48d5a15d6..cb4f6ebb9 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -39,6 +39,10 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev) return NIX_TX_OFFLOAD_CAPA; } +static const struct otx2_dev_ops otx2_dev_ops = { + .link_status_update = otx2_eth_dev_link_status_update, +}; + static int nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq) { @@ -229,6 +233,7 @@ 
otx2_nix_configure(struct rte_eth_dev *eth_dev) static const struct eth_dev_ops otx2_eth_dev_ops = { .dev_infos_get = otx2_nix_info_get, .dev_configure = otx2_nix_configure, + .link_update = otx2_nix_link_update, .get_reg = otx2_nix_dev_get_reg, }; @@ -324,6 +329,9 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev) goto error; } } + /* Device generic callbacks */ + dev->ops = &otx2_dev_ops; + dev->eth_dev = eth_dev; /* Grab the NPA LF if required */ rc = otx2_npa_lf_init(pci_dev, dev); diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index c01fe0211..8a099817d 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -116,6 +116,7 @@ struct otx2_eth_dev { uint8_t max_mac_entries; uint8_t lf_tx_stats; uint8_t lf_rx_stats; + uint16_t flags; uint16_t cints; uint16_t qints; uint8_t configured; @@ -135,6 +136,7 @@ struct otx2_eth_dev { struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT]; struct otx2_rss_info rss_info; struct otx2_npc_flow_info npc_flow; + struct rte_eth_dev *eth_dev; } __rte_cache_aligned; static inline struct otx2_eth_dev * @@ -147,6 +149,12 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev) void otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info); +/* Link */ +void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set); +int otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete); +void otx2_eth_dev_link_status_update(struct otx2_dev *dev, + struct cgx_link_user_info *link); + /* IRQ */ int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev); int oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev); diff --git a/drivers/net/octeontx2/otx2_link.c b/drivers/net/octeontx2/otx2_link.c new file mode 100644 index 000000000..228a0cd8e --- /dev/null +++ b/drivers/net/octeontx2/otx2_link.c @@ -0,0 +1,108 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include +#include + +#include "otx2_ethdev.h" + +void +otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set) +{ + if (set) + dev->flags |= OTX2_LINK_CFG_IN_PROGRESS_F; + else + dev->flags &= ~OTX2_LINK_CFG_IN_PROGRESS_F; + + rte_wmb(); +} + +static inline int +nix_wait_for_link_cfg(struct otx2_eth_dev *dev) +{ + uint16_t wait = 1000; + + do { + rte_rmb(); + if (!(dev->flags & OTX2_LINK_CFG_IN_PROGRESS_F)) + break; + wait--; + rte_delay_ms(1); + } while (wait); + + return wait ? 0 : -1; +} + +static void +nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link) +{ + if (link && link->link_status) + otx2_info("Port %d: Link Up - speed %u Mbps - %s", + (int)(eth_dev->data->port_id), + (uint32_t)link->link_speed, + link->link_duplex == ETH_LINK_FULL_DUPLEX ? 
+ "full-duplex" : "half-duplex"); + else + otx2_info("Port %d: Link Down", (int)(eth_dev->data->port_id)); +} + +void +otx2_eth_dev_link_status_update(struct otx2_dev *dev, + struct cgx_link_user_info *link) +{ + struct otx2_eth_dev *otx2_dev = (struct otx2_eth_dev *)dev; + struct rte_eth_dev *eth_dev = otx2_dev->eth_dev; + struct rte_eth_link eth_link; + + if (!link || !dev || !eth_dev->data->dev_conf.intr_conf.lsc) + return; + + if (nix_wait_for_link_cfg(otx2_dev)) { + otx2_err("Timeout waiting for link_cfg to complete"); + return; + } + + eth_link.link_status = link->link_up; + eth_link.link_speed = link->speed; + eth_link.link_autoneg = ETH_LINK_AUTONEG; + eth_link.link_duplex = link->full_duplex; + + /* Print link info */ + nix_link_status_print(eth_dev, ð_link); + + /* Update link info */ + rte_eth_linkstatus_set(eth_dev, ð_link); + + /* Set the flag and execute application callbacks */ + _rte_eth_dev_callback_process(eth_dev, RTE_ETH_EVENT_INTR_LSC, NULL); +} + +int +otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct cgx_link_info_msg *rsp; + struct rte_eth_link link; + int rc; + + RTE_SET_USED(wait_to_complete); + + if (otx2_dev_is_lbk(dev)) + return 0; + + otx2_mbox_alloc_msg_cgx_get_linkinfo(mbox); + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + link.link_status = rsp->link_info.link_up; + link.link_speed = rsp->link_info.speed; + link.link_autoneg = ETH_LINK_AUTONEG; + + if (rsp->link_info.full_duplex) + link.link_duplex = rsp->link_info.full_duplex; + + return rte_eth_linkstatus_set(eth_dev, &link); +} From patchwork Sun Jun 2 15:23:48 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54068 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 69FA71B9B5; Sun, 2 Jun 2019 17:25:16 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id CCE1E1B99F for ; Sun, 2 Jun 2019 17:25:14 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FK4Ye020248; Sun, 2 Jun 2019 08:25:14 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=VyuQV82IJWEVbIZAviCUHe8cqwSI1TP35/8qu0BXSgE=; b=AbRWf4i4jCK2P4MWgVHWlBoOaLsOjnagwVaqm9rKI/cVx8NPFqEPJBnahbus7F1MM2IA D6hlLF2JVFRvBuLNw4BEEWvUkb0J24SQyyyw6qm4SkWhpwQBZwpTmxOZMugImk7JQd9U 77T+3CINn9Tnp+2kxRu8ADS/2ds9/0KCVKIrFvqtxWy4a7LW8vR/yGk9nNoGr2dOaEef /Y+bHEhtZMO8g7VmrlDIEEBcy43gnHj1fiQJJYBPn3jNk3Mw7zKUIGDEEfuXk4tOngOU IWk+R0E6NbBA6rjvB++Ny2XCVKjLumsm1Vgm7BcirdndsUNEV/YGRHMHfjJnsXf0Ut86 yg== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk492h-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:25:14 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:25:12 -0700 Received: from maili.marvell.com 
(10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:25:12 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id D44253F703F; Sun, 2 Jun 2019 08:25:10 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: , Vamsi Attunuru Date: Sun, 2 Jun 2019 20:53:48 +0530 Message-ID: <20190602152434.23996-13-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 12/58] net/octeontx2: add basic stats operation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Kiran Kumar K Add basic stat operation and updated the feature list. Signed-off-by: Kiran Kumar K Signed-off-by: Vamsi Attunuru --- doc/guides/nics/features/octeontx2.ini | 2 + doc/guides/nics/features/octeontx2_vec.ini | 2 + doc/guides/nics/features/octeontx2_vf.ini | 2 + drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_ethdev.c | 3 + drivers/net/octeontx2/otx2_ethdev.h | 17 +++ drivers/net/octeontx2/otx2_stats.c | 117 +++++++++++++++++++++ 8 files changed, 145 insertions(+) create mode 100644 drivers/net/octeontx2/otx2_stats.c diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 60009ab6d..72336ae15 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -12,4 +12,6 @@ SR-IOV = Y Multiprocess aware = Y Link status = Y Link status event = Y +Basic stats = Y +Stats per queue = Y Registers dump = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index 3a859edd1..0f3850188 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -12,4 +12,6 @@ SR-IOV = Y Multiprocess aware = Y Link status = Y Link status event = Y +Basic stats = Y +Stats per queue = Y Registers dump = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index e1cbd18b1..8bc72c4fb 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -11,4 +11,6 @@ Lock-free Tx queue = Y Multiprocess aware = Y Link status = Y Link status event = Y +Basic stats = Y +Stats per queue = Y Registers dump = Y diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index aa428fe6a..dcd692b7b 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -32,6 +32,7 @@ LIBABIVER := 1 SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ otx2_mac.c \ otx2_link.c \ + otx2_stats.c \ otx2_ethdev.c \ otx2_ethdev_irq.c \ otx2_ethdev_ops.c \ diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index 117d038ab..384237104 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -5,6 +5,7 @@ sources = files( 'otx2_mac.c', 'otx2_link.c', + 'otx2_stats.c', 'otx2_ethdev.c', 'otx2_ethdev_irq.c', 'otx2_ethdev_ops.c', diff --git 
a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index cb4f6ebb9..5787029d9 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -234,7 +234,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .dev_infos_get = otx2_nix_info_get, .dev_configure = otx2_nix_configure, .link_update = otx2_nix_link_update, + .stats_get = otx2_nix_dev_stats_get, + .stats_reset = otx2_nix_dev_stats_reset, .get_reg = otx2_nix_dev_get_reg, + .queue_stats_mapping_set = otx2_nix_queue_stats_mapping, }; static inline int diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 8a099817d..c9366a9ed 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -57,6 +57,12 @@ #define NIX_TX_NB_SEG_MAX 9 #endif +#define CQ_OP_STAT_OP_ERR 63 +#define CQ_OP_STAT_CQ_ERR 46 + +#define OP_ERR BIT_ULL(CQ_OP_STAT_OP_ERR) +#define CQ_ERR BIT_ULL(CQ_OP_STAT_CQ_ERR) + #define NIX_RSS_OFFLOAD (ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\ ETH_RSS_TCP | ETH_RSS_SCTP | \ ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD) @@ -135,6 +141,8 @@ struct otx2_eth_dev { uint64_t tx_offload_capa; struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT]; struct otx2_rss_info rss_info; + uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS]; + uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS]; struct otx2_npc_flow_info npc_flow; struct rte_eth_dev *eth_dev; } __rte_cache_aligned; @@ -168,6 +176,15 @@ int otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev, int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev); void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq); +/* Stats */ +int otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev, + struct rte_eth_stats *stats); +void otx2_nix_dev_stats_reset(struct rte_eth_dev *eth_dev); + +int otx2_nix_queue_stats_mapping(struct rte_eth_dev *dev, + uint16_t queue_id, uint8_t stat_idx, + uint8_t is_rx); + /* CGX */ int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev); int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev); diff --git a/drivers/net/octeontx2/otx2_stats.c b/drivers/net/octeontx2/otx2_stats.c new file mode 100644 index 000000000..ade0f6ad6 --- /dev/null +++ b/drivers/net/octeontx2/otx2_stats.c @@ -0,0 +1,117 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */
+
+#include <inttypes.h>
+
+#include "otx2_ethdev.h"
+
+int
+otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev,
+		       struct rte_eth_stats *stats)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	uint64_t reg, val;
+	uint32_t qidx, i;
+	int64_t *addr;
+
+	stats->opackets = otx2_read64(dev->base +
+			NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_UCAST));
+	stats->opackets += otx2_read64(dev->base +
+			NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_MCAST));
+	stats->opackets += otx2_read64(dev->base +
+			NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_BCAST));
+	stats->oerrors = otx2_read64(dev->base +
+			NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_DROP));
+	stats->obytes = otx2_read64(dev->base +
+			NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_OCTS));
+
+	stats->ipackets = otx2_read64(dev->base +
+			NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_UCAST));
+	stats->ipackets += otx2_read64(dev->base +
+			NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_MCAST));
+	stats->ipackets += otx2_read64(dev->base +
+			NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_BCAST));
+	stats->imissed = otx2_read64(dev->base +
+			NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_DROP));
+	stats->ibytes = otx2_read64(dev->base +
+			NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_OCTS));
+	stats->ierrors = otx2_read64(dev->base +
+			NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_ERR));
+
+	/* SQ counters are Tx-side, so they feed the output queue stats */
+	for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
+		if (dev->txmap[i] & (1U << 31)) {
+			qidx = dev->txmap[i] & 0xFFFF;
+			reg = (((uint64_t)qidx) << 32);
+
+			addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS);
+			val = otx2_atomic64_add_nosync(reg, addr);
+			if (val & OP_ERR)
+				val = 0;
+			stats->q_opackets[i] = val;
+
+			addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_OCTS);
+			val = otx2_atomic64_add_nosync(reg, addr);
+			if (val & OP_ERR)
+				val = 0;
+			stats->q_obytes[i] = val;
+
+			addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_DROP_PKTS);
+			val = otx2_atomic64_add_nosync(reg, addr);
+			if (val & OP_ERR)
+				val = 0;
+			stats->q_errors[i] = val;
+		}
+	}
+
+	/* RQ counters are Rx-side, so they feed the input queue stats */
+	for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
+		if (dev->rxmap[i] & (1U << 31)) {
+			qidx = dev->rxmap[i] & 0xFFFF;
+			reg = (((uint64_t)qidx) << 32);
+
+			addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_PKTS);
+			val = otx2_atomic64_add_nosync(reg, addr);
+			if (val & OP_ERR)
+				val = 0;
+			stats->q_ipackets[i] = val;
+
+			addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_OCTS);
+			val = otx2_atomic64_add_nosync(reg, addr);
+			if (val & OP_ERR)
+				val = 0;
+			stats->q_ibytes[i] = val;
+
+			addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_DROP_PKTS);
+			val = otx2_atomic64_add_nosync(reg, addr);
+			if (val & OP_ERR)
+				val = 0;
+			stats->q_errors[i] += val;
+		}
+	}
+
+	return 0;
+}
+
+void
+otx2_nix_dev_stats_reset(struct rte_eth_dev *eth_dev)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	struct otx2_mbox *mbox = dev->mbox;
+
+	otx2_mbox_alloc_msg_nix_stats_rst(mbox);
+	otx2_mbox_process(mbox);
+}
+
+int
+otx2_nix_queue_stats_mapping(struct rte_eth_dev *eth_dev, uint16_t queue_id,
+			     uint8_t stat_idx, uint8_t is_rx)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+	if (is_rx)
+		dev->rxmap[stat_idx] = ((1U << 31) | queue_id);
+	else
+		dev->txmap[stat_idx] = ((1U << 31) | queue_id);
+
+	return 0;
+}
From patchwork Sun Jun 2 15:23:49 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jerin Jacob Kollanukkaran
X-Patchwork-Id: 54069
X-Patchwork-Delegate: ferruh.yigit@amd.com
Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org
Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id DDA121B956; Sun, 2 Jun 2019 17:25:19 +0200 (CEST)
Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id BBFD51B956 for ; Sun, 2 Jun 2019 17:25:18 +0200 (CEST)
Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FLOCS021032; Sun, 2 Jun 2019 08:25:18 -0700
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=lxc6PC4TyuWV/HS4BW2ZzYwqA/xjR9gK9Cd4+C7wIOQ=; b=pkGPjr+BJRJ+tVwS+sw7vaTXSwZD+pvE/vM6PsJqejLTw6Zet4logSLHWg7VRk6T3P8j WHZGIb4aFX3onpXO+6VngdScd/czoGS2TQoLR9v6LV0Wyiuk/1ytohPwv4SFXwq7Dwmu lexaC2ddvjPyK4aVcbTdW1OqqeMr7+XBx/iPgWGyceYXRY4m3DBD6OsxoRHTcJJRjQ7A 4a67lmS5nDnhNEK6RGoRpMjlrLijCNR2SP5f/dy1FZnivjGCshr+kd3dppGWQvT2Og3w 0B7uvB2HX33ohkIZ7pSKLVckiALRBfdneumxtueJAIvyr8G+jU2IOF4p+Z/I82jyhH5y OA==
Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk492x-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:25:18 -0700
Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:25:16 -0700
Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:25:16 -0700
Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 4BC2B3F7040; Sun, 2 Jun 2019 08:25:14 -0700 (PDT)
From:
To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K
CC: , Vamsi Attunuru
Date: Sun, 2 Jun 2019 20:53:49 +0530
Message-ID: <20190602152434.23996-14-jerinj@marvell.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com>
References: <20190602152434.23996-1-jerinj@marvell.com>
MIME-Version: 1.0
X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0
Subject: [dpdk-dev] [PATCH v1 13/58] net/octeontx2: add extended stats operations
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

From: Kiran Kumar K

Add extended stats operations and update the feature list.
Signed-off-by: Kiran Kumar K Signed-off-by: Vamsi Attunuru --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + doc/guides/nics/features/octeontx2_vf.ini | 1 + drivers/net/octeontx2/otx2_ethdev.c | 5 + drivers/net/octeontx2/otx2_ethdev.h | 13 + drivers/net/octeontx2/otx2_stats.c | 270 +++++++++++++++++++++ 6 files changed, 291 insertions(+) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 72336ae15..3835b5069 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -14,4 +14,5 @@ Link status = Y Link status event = Y Basic stats = Y Stats per queue = Y +Extended stats = Y Registers dump = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index 0f3850188..e18443742 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -13,5 +13,6 @@ Multiprocess aware = Y Link status = Y Link status event = Y Basic stats = Y +Extended stats = Y Stats per queue = Y Registers dump = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 8bc72c4fb..89df760b3 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -12,5 +12,6 @@ Multiprocess aware = Y Link status = Y Link status event = Y Basic stats = Y +Extended stats = Y Stats per queue = Y Registers dump = Y diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 5787029d9..937ba6399 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -238,6 +238,11 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .stats_reset = otx2_nix_dev_stats_reset, .get_reg = otx2_nix_dev_get_reg, .queue_stats_mapping_set = otx2_nix_queue_stats_mapping, + .xstats_get = otx2_nix_xstats_get, + .xstats_get_names = otx2_nix_xstats_get_names, + .xstats_reset = otx2_nix_xstats_reset, + .xstats_get_by_id = otx2_nix_xstats_get_by_id, + .xstats_get_names_by_id = otx2_nix_xstats_get_names_by_id, }; static inline int diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index c9366a9ed..223dd5a5a 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -184,6 +184,19 @@ void otx2_nix_dev_stats_reset(struct rte_eth_dev *eth_dev); int otx2_nix_queue_stats_mapping(struct rte_eth_dev *dev, uint16_t queue_id, uint8_t stat_idx, uint8_t is_rx); +int otx2_nix_xstats_get(struct rte_eth_dev *eth_dev, + struct rte_eth_xstat *xstats, unsigned int n); +int otx2_nix_xstats_get_names(struct rte_eth_dev *eth_dev, + struct rte_eth_xstat_name *xstats_names, + unsigned int limit); +void otx2_nix_xstats_reset(struct rte_eth_dev *eth_dev); + +int otx2_nix_xstats_get_by_id(struct rte_eth_dev *eth_dev, + const uint64_t *ids, + uint64_t *values, unsigned int n); +int otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev, + struct rte_eth_xstat_name *xstats_names, + const uint64_t *ids, unsigned int limit); /* CGX */ int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev); diff --git a/drivers/net/octeontx2/otx2_stats.c b/drivers/net/octeontx2/otx2_stats.c index ade0f6ad6..deb83b704 100644 --- a/drivers/net/octeontx2/otx2_stats.c +++ b/drivers/net/octeontx2/otx2_stats.c @@ -6,6 +6,45 @@ #include "otx2_ethdev.h" +struct otx2_nix_xstats_name { + char name[RTE_ETH_XSTATS_NAME_SIZE]; + uint32_t offset; +}; + +static const struct 
otx2_nix_xstats_name nix_tx_xstats[] = { + {"tx_ucast", NIX_STAT_LF_TX_TX_UCAST}, + {"tx_bcast", NIX_STAT_LF_TX_TX_BCAST}, + {"tx_mcast", NIX_STAT_LF_TX_TX_MCAST}, + {"tx_drop", NIX_STAT_LF_TX_TX_DROP}, + {"tx_octs", NIX_STAT_LF_TX_TX_OCTS}, +}; + +static const struct otx2_nix_xstats_name nix_rx_xstats[] = { + {"rx_octs", NIX_STAT_LF_RX_RX_OCTS}, + {"rx_ucast", NIX_STAT_LF_RX_RX_UCAST}, + {"rx_bcast", NIX_STAT_LF_RX_RX_BCAST}, + {"rx_mcast", NIX_STAT_LF_RX_RX_MCAST}, + {"rx_drop", NIX_STAT_LF_RX_RX_DROP}, + {"rx_drop_octs", NIX_STAT_LF_RX_RX_DROP_OCTS}, + {"rx_fcs", NIX_STAT_LF_RX_RX_FCS}, + {"rx_err", NIX_STAT_LF_RX_RX_ERR}, + {"rx_drp_bcast", NIX_STAT_LF_RX_RX_DRP_BCAST}, + {"rx_drp_mcast", NIX_STAT_LF_RX_RX_DRP_MCAST}, + {"rx_drp_l3bcast", NIX_STAT_LF_RX_RX_DRP_L3BCAST}, + {"rx_drp_l3mcast", NIX_STAT_LF_RX_RX_DRP_L3MCAST}, +}; + +static const struct otx2_nix_xstats_name nix_q_xstats[] = { + {"rq_op_re_pkts", NIX_LF_RQ_OP_RE_PKTS}, +}; + +#define OTX2_NIX_NUM_RX_XSTATS RTE_DIM(nix_rx_xstats) +#define OTX2_NIX_NUM_TX_XSTATS RTE_DIM(nix_tx_xstats) +#define OTX2_NIX_NUM_QUEUE_XSTATS RTE_DIM(nix_q_xstats) + +#define OTX2_NIX_NUM_XSTATS_REG (OTX2_NIX_NUM_RX_XSTATS + \ + OTX2_NIX_NUM_TX_XSTATS + OTX2_NIX_NUM_QUEUE_XSTATS) + int otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev, struct rte_eth_stats *stats) @@ -115,3 +154,234 @@ otx2_nix_queue_stats_mapping(struct rte_eth_dev *eth_dev, uint16_t queue_id, return 0; } + +int +otx2_nix_xstats_get(struct rte_eth_dev *eth_dev, + struct rte_eth_xstat *xstats, + unsigned int n) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + unsigned int i, count = 0; + uint64_t reg, val; + + if (n < OTX2_NIX_NUM_XSTATS_REG) + return OTX2_NIX_NUM_XSTATS_REG; + + if (xstats == NULL) + return 0; + + for (i = 0; i < OTX2_NIX_NUM_TX_XSTATS; i++) { + xstats[count].value = otx2_read64(dev->base + + NIX_LF_TX_STATX(nix_tx_xstats[i].offset)); + xstats[count].id = count; + count++; + } + + for (i = 0; i < OTX2_NIX_NUM_RX_XSTATS; i++) { + xstats[count].value = otx2_read64(dev->base + + NIX_LF_RX_STATX(nix_rx_xstats[i].offset)); + xstats[count].id = count; + count++; + } + + for (i = 0; i < eth_dev->data->nb_rx_queues; i++) { + reg = (((uint64_t)i) << 32); + val = otx2_atomic64_add_nosync(reg, (int64_t *)(dev->base + + nix_q_xstats[0].offset)); + if (val & OP_ERR) + val = 0; + xstats[count].value += val; + } + xstats[count].id = count; + count++; + + return count; +} + +int +otx2_nix_xstats_get_names(struct rte_eth_dev *eth_dev, + struct rte_eth_xstat_name *xstats_names, + unsigned int limit) +{ + unsigned int i, count = 0; + + RTE_SET_USED(eth_dev); + + if (limit < OTX2_NIX_NUM_XSTATS_REG && xstats_names != NULL) + return -ENOMEM; + + if (xstats_names) { + for (i = 0; i < OTX2_NIX_NUM_TX_XSTATS; i++) { + snprintf(xstats_names[count].name, + sizeof(xstats_names[count].name), + "%s", nix_tx_xstats[i].name); + count++; + } + + for (i = 0; i < OTX2_NIX_NUM_RX_XSTATS; i++) { + snprintf(xstats_names[count].name, + sizeof(xstats_names[count].name), + "%s", nix_rx_xstats[i].name); + count++; + } + + for (i = 0; i < OTX2_NIX_NUM_QUEUE_XSTATS; i++) { + snprintf(xstats_names[count].name, + sizeof(xstats_names[count].name), + "%s", nix_q_xstats[i].name); + count++; + } + } + + return OTX2_NIX_NUM_XSTATS_REG; +} + +int +otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev, + struct rte_eth_xstat_name *xstats_names, + const uint64_t *ids, unsigned int limit) +{ + struct rte_eth_xstat_name xstats_names_copy[OTX2_NIX_NUM_XSTATS_REG]; + uint16_t i; + + if (limit < 
OTX2_NIX_NUM_XSTATS_REG && ids == NULL) + return OTX2_NIX_NUM_XSTATS_REG; + + if (limit > OTX2_NIX_NUM_XSTATS_REG) + return -EINVAL; + + if (xstats_names == NULL) + return -ENOMEM; + + otx2_nix_xstats_get_names(eth_dev, xstats_names_copy, limit); + + for (i = 0; i < OTX2_NIX_NUM_XSTATS_REG; i++) { + if (ids[i] >= OTX2_NIX_NUM_XSTATS_REG) { + otx2_err("Invalid id value"); + return -EINVAL; + } + strncpy(xstats_names[i].name, xstats_names_copy[ids[i]].name, + sizeof(xstats_names[i].name)); + } + + return limit; +} + +int +otx2_nix_xstats_get_by_id(struct rte_eth_dev *eth_dev, const uint64_t *ids, + uint64_t *values, unsigned int n) +{ + struct rte_eth_xstat xstats[OTX2_NIX_NUM_XSTATS_REG]; + uint16_t i; + + if (n < OTX2_NIX_NUM_XSTATS_REG && ids == NULL) + return OTX2_NIX_NUM_XSTATS_REG; + + if (n > OTX2_NIX_NUM_XSTATS_REG) + return -EINVAL; + + if (values == NULL) + return -ENOMEM; + + otx2_nix_xstats_get(eth_dev, xstats, n); + + for (i = 0; i < OTX2_NIX_NUM_XSTATS_REG; i++) { + if (ids[i] >= OTX2_NIX_NUM_XSTATS_REG) { + otx2_err("Invalid id value"); + return -EINVAL; + } + values[i] = xstats[ids[i]].value; + } + + return n; +} + +static void +nix_queue_stats_reset(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct nix_aq_enq_rsp *rsp; + struct nix_aq_enq_req *aq; + uint32_t i; + int rc; + + for (i = 0; i < eth_dev->data->nb_rx_queues; i++) { + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = i; + aq->ctype = NIX_AQ_CTYPE_RQ; + aq->op = NIX_AQ_INSTOP_READ; + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) { + otx2_err("Failed to read rq context"); + return; + } + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = i; + aq->ctype = NIX_AQ_CTYPE_RQ; + aq->op = NIX_AQ_INSTOP_WRITE; + otx2_mbox_memcpy(&aq->rq, &rsp->rq, sizeof(rsp->rq)); + otx2_mbox_memset(&aq->rq_mask, 0, sizeof(aq->rq_mask)); + aq->rq.octs = 0; + aq->rq.pkts = 0; + aq->rq.drop_octs = 0; + aq->rq.drop_pkts = 0; + aq->rq.re_pkts = 0; + + aq->rq_mask.octs = ~(aq->rq_mask.octs); + aq->rq_mask.pkts = ~(aq->rq_mask.pkts); + aq->rq_mask.drop_octs = ~(aq->rq_mask.drop_octs); + aq->rq_mask.drop_pkts = ~(aq->rq_mask.drop_pkts); + aq->rq_mask.re_pkts = ~(aq->rq_mask.re_pkts); + rc = otx2_mbox_process(mbox); + if (rc) { + otx2_err("Failed to write rq context"); + return; + } + } + + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) { + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = i; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_READ; + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) { + otx2_err("Failed to read sq context"); + return; + } + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = i; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_WRITE; + otx2_mbox_memcpy(&aq->sq, &rsp->sq, sizeof(rsp->sq)); + otx2_mbox_memset(&aq->sq_mask, 0, sizeof(aq->sq_mask)); + aq->sq.octs = 0; + aq->sq.pkts = 0; + aq->sq.drop_octs = 0; + aq->sq.drop_pkts = 0; + + aq->sq_mask.octs = ~(aq->sq_mask.octs); + aq->sq_mask.pkts = ~(aq->sq_mask.pkts); + aq->sq_mask.drop_octs = ~(aq->sq_mask.drop_octs); + aq->sq_mask.drop_pkts = ~(aq->sq_mask.drop_pkts); + rc = otx2_mbox_process(mbox); + if (rc) { + otx2_err("Failed to write sq context"); + return; + } + } +} + +void +otx2_nix_xstats_reset(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + + otx2_mbox_alloc_msg_nix_stats_rst(mbox); + otx2_mbox_process(mbox); + + /* Reset queue stats */ + 
nix_queue_stats_reset(eth_dev); +} From patchwork Sun Jun 2 15:23:50 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54070 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id C65E11B9C4; Sun, 2 Jun 2019 17:25:22 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id A37C41B9C2 for ; Sun, 2 Jun 2019 17:25:21 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FKKou020361; Sun, 2 Jun 2019 08:25:21 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=+tY0Iz222VfNMiYKexpD1J8eM3PhTF2dLeJH4n5yL1k=; b=QNYNjFRNOFZubwmDmySbm+Zyzpq9E1ni99Kxz+0P6yb36WPi+p93m374RL3mEbcN7dBP oQiyzXTmqogyTO4D+sE0tmJ3lNUzgV5zGF+1lqST0mV8RGQNPRPU2QMNqEDS+oaUAV66 tgwpS8QwGaRZvVKMtRawmXwJiRMY4ffnNpAU8y41kuLyZ6k0hIwUkL+hoHuElW10fZm0 d8/uaIm/Sp+roV/4ImMD/4kWcUkpkdXNmPju8H/8+I3lsekKp48mmdQZftY2s+3BJJdQ k6i9Io2rKf11I9saQGaKUiaQmQyVXwF/EKObAr3HiFK7OtzGsgOZBXd4yhYcTT+/MQgy Hg== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk4936-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:25:21 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:25:19 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:25:19 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 8A59E3F7040; Sun, 2 Jun 2019 08:25:17 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: , Vamsi Attunuru , "Sunil Kumar Kori" Date: Sun, 2 Jun 2019 20:53:50 +0530 Message-ID: <20190602152434.23996-15-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 14/58] net/octeontx2: add promiscuous and allmulticast mode X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vamsi Attunuru Add promiscuous and allmulticast mode for PF devices and update the respective feature list. 
Signed-off-by: Vamsi Attunuru Signed-off-by: Sunil Kumar Kori --- doc/guides/nics/features/octeontx2.ini | 2 + doc/guides/nics/features/octeontx2_vec.ini | 2 + drivers/net/octeontx2/otx2_ethdev.c | 4 ++ drivers/net/octeontx2/otx2_ethdev.h | 6 ++ drivers/net/octeontx2/otx2_ethdev_ops.c | 82 ++++++++++++++++++++++ 5 files changed, 96 insertions(+) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 3835b5069..40da1bb68 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -12,6 +12,8 @@ SR-IOV = Y Multiprocess aware = Y Link status = Y Link status event = Y +Promiscuous mode = Y +Allmulticast mode = Y Basic stats = Y Stats per queue = Y Extended stats = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index e18443742..1b89be452 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -12,6 +12,8 @@ SR-IOV = Y Multiprocess aware = Y Link status = Y Link status event = Y +Promiscuous mode = Y +Allmulticast mode = Y Basic stats = Y Extended stats = Y Stats per queue = Y diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 937ba6399..826ce7f4e 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -237,6 +237,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .stats_get = otx2_nix_dev_stats_get, .stats_reset = otx2_nix_dev_stats_reset, .get_reg = otx2_nix_dev_get_reg, + .promiscuous_enable = otx2_nix_promisc_enable, + .promiscuous_disable = otx2_nix_promisc_disable, + .allmulticast_enable = otx2_nix_allmulticast_enable, + .allmulticast_disable = otx2_nix_allmulticast_disable, .queue_stats_mapping_set = otx2_nix_queue_stats_mapping, .xstats_get = otx2_nix_xstats_get, .xstats_get_names = otx2_nix_xstats_get_names, diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 223dd5a5a..549bc26e4 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -157,6 +157,12 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev) void otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info); +void otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en); +void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev); +void otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev); +void otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev); +void otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev); + /* Link */ void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set); int otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete); diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c index 9f86635d4..77cfa2cec 100644 --- a/drivers/net/octeontx2/otx2_ethdev_ops.c +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c @@ -4,6 +4,88 @@ #include "otx2_ethdev.h" +static void +nix_cgx_promisc_config(struct rte_eth_dev *eth_dev, int en) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + + if (otx2_dev_is_vf(dev)) + return; + + if (en) + otx2_mbox_alloc_msg_cgx_promisc_enable(mbox); + else + otx2_mbox_alloc_msg_cgx_promisc_disable(mbox); + + otx2_mbox_process(mbox); +} + +void +otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + 
struct nix_rx_mode *req; + + if (otx2_dev_is_vf(dev)) + return; + + req = otx2_mbox_alloc_msg_nix_set_rx_mode(mbox); + + if (en) + req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_PROMISC; + + otx2_mbox_process(mbox); + eth_dev->data->promiscuous = en; +} + +void +otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev) +{ + otx2_nix_promisc_config(eth_dev, 1); + nix_cgx_promisc_config(eth_dev, 1); +} + +void +otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev) +{ + otx2_nix_promisc_config(eth_dev, 0); + nix_cgx_promisc_config(eth_dev, 0); +} + +static void +nix_allmulticast_config(struct rte_eth_dev *eth_dev, int en) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct nix_rx_mode *req; + + if (otx2_dev_is_vf(dev)) + return; + + req = otx2_mbox_alloc_msg_nix_set_rx_mode(mbox); + + if (en) + req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_ALLMULTI; + else if (eth_dev->data->promiscuous) + req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_PROMISC; + + otx2_mbox_process(mbox); +} + +void +otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev) +{ + nix_allmulticast_config(eth_dev, 1); +} + +void +otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev) +{ + nix_allmulticast_config(eth_dev, 0); +} + void otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo) { From patchwork Sun Jun 2 15:23:51 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54071 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 4D3F01B9AC; Sun, 2 Jun 2019 17:25:27 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 753631B99D for ; Sun, 2 Jun 2019 17:25:25 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FKPIE020378; Sun, 2 Jun 2019 08:25:25 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=YTSKExr5ILbvfh1ZsRZNjIGkxJBFvLFA5VZpMU5cCQE=; b=R4jQ6Hm7tfrtlZpX2I3sWzNQ72RBjoOOSAfxIBNqnF/VW96ifBr4d6dbbZmPIbcHumZ/ KIo0XDDDaCOW0KKN91fXbv+HONkcBpv4RTiBWpNe6zncL5PbpzY6VfZfIusBjJrc4JW4 BHhfiKiNg67PagEx+rFssMy7wR+tlWPjrr7KOkQJeDm2Z2iDGukCPjpr3/7Dfu0Elqqh KQFfX+8uwCmlgsK6PK1QRYqb8N2wx1F1z9Vwuixvew/cvs+u4Onrp9ybU3m0LkfmIda7 b2om11+Xsg25LSEgdWEnqvEb1YFUUXDYDFZE8YNRBn0tFfNgbiBgirzV+/tavaHDiUZE qg== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk493d-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:25:24 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:25:23 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:25:23 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id D7E943F703F; Sun, 2 Jun 2019 08:25:20 -0700 (PDT) From: To: , John McNamara , Marko 
Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: , Sunil Kumar Kori , "Vamsi Attunuru" Date: Sun, 2 Jun 2019 20:53:51 +0530 Message-ID: <20190602152434.23996-16-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 15/58] net/octeontx2: add unicast MAC filter X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Sunil Kumar Kori Add unicast MAC filter for PF device and update the respective feature list. Signed-off-by: Sunil Kumar Kori Signed-off-by: Vamsi Attunuru --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + drivers/net/octeontx2/otx2_ethdev.c | 3 + drivers/net/octeontx2/otx2_ethdev.h | 6 ++ drivers/net/octeontx2/otx2_mac.c | 77 ++++++++++++++++++++++ 5 files changed, 88 insertions(+) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 40da1bb68..cb77ab0fc 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -14,6 +14,7 @@ Link status = Y Link status event = Y Promiscuous mode = Y Allmulticast mode = Y +Unicast MAC filter = Y Basic stats = Y Stats per queue = Y Extended stats = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index 1b89be452..a51291158 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -14,6 +14,7 @@ Link status = Y Link status event = Y Promiscuous mode = Y Allmulticast mode = Y +Unicast MAC filter = Y Basic stats = Y Extended stats = Y Stats per queue = Y diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 826ce7f4e..a72c901f4 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -237,6 +237,9 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .stats_get = otx2_nix_dev_stats_get, .stats_reset = otx2_nix_dev_stats_reset, .get_reg = otx2_nix_dev_get_reg, + .mac_addr_add = otx2_nix_mac_addr_add, + .mac_addr_remove = otx2_nix_mac_addr_del, + .mac_addr_set = otx2_nix_mac_addr_set, .promiscuous_enable = otx2_nix_promisc_enable, .promiscuous_disable = otx2_nix_promisc_disable, .allmulticast_enable = otx2_nix_allmulticast_enable, diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 549bc26e4..8d0147afb 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -211,7 +211,13 @@ int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr); /* Mac address handling */ +int otx2_nix_mac_addr_set(struct rte_eth_dev *eth_dev, + struct rte_ether_addr *addr); int otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr); +int otx2_nix_mac_addr_add(struct rte_eth_dev *eth_dev, + struct rte_ether_addr *addr, + uint32_t index, uint32_t pool); +void otx2_nix_mac_addr_del(struct rte_eth_dev *eth_dev, uint32_t index); int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev); /* Devargs */ diff --git a/drivers/net/octeontx2/otx2_mac.c b/drivers/net/octeontx2/otx2_mac.c index 89b0ca6b0..b4bcc61f8 
100644 --- a/drivers/net/octeontx2/otx2_mac.c +++ b/drivers/net/octeontx2/otx2_mac.c @@ -49,6 +49,83 @@ otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev) return rsp->max_dmac_filters; } +int +otx2_nix_mac_addr_add(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr, + uint32_t index __rte_unused, uint32_t pool __rte_unused) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct cgx_mac_addr_add_req *req; + struct cgx_mac_addr_add_rsp *rsp; + int rc; + + if (otx2_dev_is_vf(dev)) + return -ENOTSUP; + + if (otx2_dev_active_vfs(dev)) + return -ENOTSUP; + + req = otx2_mbox_alloc_msg_cgx_mac_addr_add(mbox); + otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN); + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) { + otx2_err("Failed to add mac address, rc=%d", rc); + goto done; + } + + /* Enable promiscuous mode at NIX level */ + otx2_nix_promisc_config(eth_dev, 1); + +done: + return rc; +} + +void +otx2_nix_mac_addr_del(struct rte_eth_dev *eth_dev, uint32_t index) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct cgx_mac_addr_del_req *req; + int rc; + + if (otx2_dev_is_vf(dev)) + return; + + req = otx2_mbox_alloc_msg_cgx_mac_addr_del(mbox); + req->index = index; + + rc = otx2_mbox_process(mbox); + if (rc) + otx2_err("Failed to delete mac address, rc=%d", rc); +} + +int +otx2_nix_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct nix_set_mac_addr *req; + int rc; + + req = otx2_mbox_alloc_msg_nix_set_mac_addr(mbox); + otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN); + + rc = otx2_mbox_process(mbox); + if (rc) { + otx2_err("Failed to set mac address, rc=%d", rc); + goto done; + } + + otx2_mbox_memcpy(dev->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN); + + /* Install the same entry into CGX DMAC filter table too. 
*/ + otx2_cgx_mac_addr_set(eth_dev, addr); + +done: + return rc; +} + int otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr) { From patchwork Sun Jun 2 15:23:52 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54072 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id DF38E1B9CA; Sun, 2 Jun 2019 17:25:28 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 5AA4E1B9C9 for ; Sun, 2 Jun 2019 17:25:28 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FKPIG020378; Sun, 2 Jun 2019 08:25:27 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=hwf5kRqYT0WTUf3eBx006rkXpAOT5u9mNmpOZEiawjY=; b=a/gkQF3vJ6lGnHk24kaJGs6tjFrp4KFNr5tnCO81usIUWYHYb76Cx7QbGs4dBiTqRiok uJMr3lwnW+oEThgBlEX4sCiaAtXfxK40gLq1JKgozipgyZF41ZQ+Z+4shkk6bbDzsCrJ e5Mvv9aXK6+3l0vFzwhTfzdrmGRgpzwHg1+ce2BrOAEmIkoCQpbntKujpVa7fueR9vyS et38CxTLp1AodSIHGws1FL/ArGE6BKMrceV9w+891UeJX/D1zAERHIWJi7NasM9XdXtj +Na75yKvwY88lulx/CCzfX0YxLF2XzqB+A+rEaPnE8B7NtQosGs8YHJQ8xPMbhClYkDq Jg== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk493n-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:25:27 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:25:26 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:25:26 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 380033F7040; Sun, 2 Jun 2019 08:25:24 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: , Vamsi Attunuru Date: Sun, 2 Jun 2019 20:53:52 +0530 Message-ID: <20190602152434.23996-17-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 16/58] net/octeontx2: add RSS support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vamsi Attunuru Add RSS support and expose RSS related functions to implement RSS action for rte_flow driver. 
Signed-off-by: Vamsi Attunuru Signed-off-by: Kiran Kumar K --- doc/guides/nics/features/octeontx2.ini | 4 + doc/guides/nics/features/octeontx2_vec.ini | 4 + doc/guides/nics/features/octeontx2_vf.ini | 4 + drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_ethdev.c | 11 + drivers/net/octeontx2/otx2_ethdev.h | 35 ++ drivers/net/octeontx2/otx2_rss.c | 378 +++++++++++++++++++++ 8 files changed, 438 insertions(+) create mode 100644 drivers/net/octeontx2/otx2_rss.c diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index cb77ab0fc..48ac58b3a 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -15,6 +15,10 @@ Link status event = Y Promiscuous mode = Y Allmulticast mode = Y Unicast MAC filter = Y +RSS hash = Y +RSS key update = Y +RSS reta update = Y +Inner RSS = Y Basic stats = Y Stats per queue = Y Extended stats = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index a51291158..6fc647af4 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -15,6 +15,10 @@ Link status event = Y Promiscuous mode = Y Allmulticast mode = Y Unicast MAC filter = Y +RSS hash = Y +RSS key update = Y +RSS reta update = Y +Inner RSS = Y Basic stats = Y Extended stats = Y Stats per queue = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 89df760b3..af3c70269 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -11,6 +11,10 @@ Lock-free Tx queue = Y Multiprocess aware = Y Link status = Y Link status event = Y +RSS hash = Y +RSS key update = Y +RSS reta update = Y +Inner RSS = Y Basic stats = Y Extended stats = Y Stats per queue = Y diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index dcd692b7b..67352ec81 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -30,6 +30,7 @@ LIBABIVER := 1 # all source are stored in SRCS-y # SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ + otx2_rss.c \ otx2_mac.c \ otx2_link.c \ otx2_stats.c \ diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index 384237104..b7e56e2ca 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -3,6 +3,7 @@ # sources = files( + 'otx2_rss.c', 'otx2_mac.c', 'otx2_link.c', 'otx2_stats.c', diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index a72c901f4..5289c79e8 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -195,6 +195,13 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) goto fail; } + /* Configure RSS */ + rc = otx2_nix_rss_config(eth_dev); + if (rc) { + otx2_err("Failed to configure rss rc=%d", rc); + goto free_nix_lf; + } + /* Register queue IRQs */ rc = oxt2_nix_register_queue_irqs(eth_dev); if (rc) { @@ -245,6 +252,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .allmulticast_enable = otx2_nix_allmulticast_enable, .allmulticast_disable = otx2_nix_allmulticast_disable, .queue_stats_mapping_set = otx2_nix_queue_stats_mapping, + .reta_update = otx2_nix_dev_reta_update, + .reta_query = otx2_nix_dev_reta_query, + .rss_hash_update = otx2_nix_rss_hash_update, + .rss_hash_conf_get = otx2_nix_rss_hash_conf_get, .xstats_get = otx2_nix_xstats_get, .xstats_get_names = otx2_nix_xstats_get_names, 
.xstats_reset = otx2_nix_xstats_reset, diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 8d0147afb..67b164740 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -39,6 +39,9 @@ #define NIX_MAX_HW_MTU 9190 #define NIX_MAX_HW_FRS (NIX_MAX_HW_MTU + NIX_HW_L2_OVERHEAD) #define NIX_MIN_HW_FRS 60 +#define NIX_MIN_SQB 512 +#define NIX_SQB_LIST_SPACE 2 +#define NIX_RSS_RETA_SIZE_MAX 256 /* Group 0 will be used for RSS, 1 -7 will be used for rte_flow RSS action*/ #define NIX_RSS_GRPS 8 #define NIX_HASH_KEY_SIZE 48 /* 352 Bits */ @@ -92,14 +95,22 @@ DEV_RX_OFFLOAD_QINQ_STRIP | \ DEV_RX_OFFLOAD_TIMESTAMP) +#define NIX_DEFAULT_RSS_CTX_GROUP 0 +#define NIX_DEFAULT_RSS_MCAM_IDX -1 + struct otx2_qint { struct rte_eth_dev *eth_dev; uint8_t qintx; }; struct otx2_rss_info { + uint64_t nix_rss; + uint32_t flowkey_cfg; uint16_t rss_size; uint8_t rss_grps; + uint8_t alg_idx; /* Selected algo index */ + uint16_t ind_tbl[NIX_RSS_RETA_SIZE_MAX]; + uint8_t key[NIX_HASH_KEY_SIZE]; }; struct otx2_npc_flow_info { @@ -204,6 +215,30 @@ int otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev, struct rte_eth_xstat_name *xstats_names, const uint64_t *ids, unsigned int limit); +/* RSS */ +void otx2_nix_rss_set_key(struct otx2_eth_dev *dev, + uint8_t *key, uint32_t key_len); +uint32_t otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, + uint64_t ethdev_rss, uint8_t rss_level); +int otx2_rss_set_hf(struct otx2_eth_dev *dev, + uint32_t flowkey_cfg, uint8_t *alg_idx, + uint8_t group, int mcam_index); +int otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev, uint8_t group, + uint16_t *ind_tbl); +int otx2_nix_rss_config(struct rte_eth_dev *eth_dev); + +int otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev, + struct rte_eth_rss_conf *rss_conf); + +int otx2_nix_rss_hash_conf_get(struct rte_eth_dev *eth_dev, + struct rte_eth_rss_conf *rss_conf); + /* CGX */ int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev); int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev); diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c new file mode 100644 index 000000000..089846da7 --- /dev/null +++ b/drivers/net/octeontx2/otx2_rss.c @@ -0,0 +1,378 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include "otx2_ethdev.h" + +int +otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev, + uint8_t group, uint16_t *ind_tbl) +{ + struct otx2_rss_info *rss = &dev->rss_info; + struct otx2_mbox *mbox = dev->mbox; + struct nix_aq_enq_req *req; + int rc, idx; + + for (idx = 0; idx < rss->rss_size; idx++) { + req = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + if (!req) { + /* The shared memory buffer can be full. 
+ * Flush it and retry + */ + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_wait_for_rsp(mbox, 0); + if (rc < 0) + return rc; + + req = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + if (!req) + return -ENOMEM; + } + req->rss.rq = ind_tbl[idx]; + /* Fill AQ info */ + req->qidx = (group * rss->rss_size) + idx; + req->ctype = NIX_AQ_CTYPE_RSS; + req->op = NIX_AQ_INSTOP_INIT; + } + + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_wait_for_rsp(mbox, 0); + if (rc < 0) + return rc; + + return 0; +} + +int +otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_rss_info *rss = &dev->rss_info; + int rc, i, j; + int idx = 0; + + rc = -EINVAL; + if (reta_size != dev->rss_info.rss_size) { + otx2_err("Size of the configured hash lookup table " + "(%d) doesn't match the size supported by hardware " + "(%d)", reta_size, dev->rss_info.rss_size); + goto fail; + } + + /* Copy RETA table */ + for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) { + for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) { + if ((reta_conf[i].mask >> j) & 0x01) + rss->ind_tbl[idx] = reta_conf[i].reta[j]; + idx++; + } + } + + return otx2_nix_rss_tbl_init(dev, 0, dev->rss_info.ind_tbl); + +fail: + return rc; +} + +int +otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_rss_info *rss = &dev->rss_info; + int rc, i, j; + + rc = -EINVAL; + + if (reta_size != dev->rss_info.rss_size) { + otx2_err("Size of the configured hash lookup table " + "(%d) doesn't match the size supported by hardware " + "(%d)", reta_size, dev->rss_info.rss_size); + goto fail; + } + + /* Copy RETA table */ + for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) { + for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) + if ((reta_conf[i].mask >> j) & 0x01) + reta_conf[i].reta[j] = rss->ind_tbl[(i * RTE_RETA_GROUP_SIZE) + j]; + } + + return 0; + +fail: + return rc; +} + +void +otx2_nix_rss_set_key(struct otx2_eth_dev *dev, uint8_t *key, + uint32_t key_len) +{ + const uint8_t default_key[NIX_HASH_KEY_SIZE] = { + 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD, + 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD, + 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD, + 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD, + 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD, + 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD + }; + struct otx2_rss_info *rss = &dev->rss_info; + uint64_t *keyptr; + uint64_t val; + uint32_t idx; + + if (key == NULL || key_len == 0) { + keyptr = (uint64_t *)(uintptr_t)default_key; + key_len = NIX_HASH_KEY_SIZE; + memset(rss->key, 0, key_len); + } else { + memcpy(rss->key, key, key_len); + keyptr = (uint64_t *)rss->key; + } + + for (idx = 0; idx < (key_len >> 3); idx++) { + val = rte_cpu_to_be_64(*keyptr); + otx2_write64(val, dev->base + NIX_LF_RX_SECRETX(idx)); + keyptr++; + } +} + +static void +rss_get_key(struct otx2_eth_dev *dev, uint8_t *key) +{ + uint64_t *keyptr = (uint64_t *)key; + uint64_t val; + int idx; + + for (idx = 0; idx < (NIX_HASH_KEY_SIZE >> 3); idx++) { + val = otx2_read64(dev->base + NIX_LF_RX_SECRETX(idx)); + *keyptr = rte_be_to_cpu_64(val); + keyptr++; + } +} + +#define RSS_IPV4_ENABLE ( \ + ETH_RSS_IPV4 | \ + ETH_RSS_FRAG_IPV4 | \ + ETH_RSS_NONFRAG_IPV4_UDP | \ + ETH_RSS_NONFRAG_IPV4_TCP | \ + ETH_RSS_NONFRAG_IPV4_SCTP) + +#define RSS_IPV6_ENABLE ( \ + ETH_RSS_IPV6 | \ +
ETH_RSS_FRAG_IPV6 | \ + ETH_RSS_NONFRAG_IPV6_UDP | \ + ETH_RSS_NONFRAG_IPV6_TCP | \ + ETH_RSS_NONFRAG_IPV6_SCTP) + +#define RSS_IPV6_EX_ENABLE ( \ + ETH_RSS_IPV6_EX | \ + ETH_RSS_IPV6_TCP_EX | \ + ETH_RSS_IPV6_UDP_EX) + +#define RSS_MAX_LEVELS 3 + +#define RSS_IPV4_INDEX 0 +#define RSS_IPV6_INDEX 1 +#define RSS_TCP_INDEX 2 +#define RSS_UDP_INDEX 3 +#define RSS_SCTP_INDEX 4 +#define RSS_DMAC_INDEX 5 + +uint32_t +otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss, + uint8_t rss_level) +{ + uint32_t flow_key_type[RSS_MAX_LEVELS][6] = { + { + FLOW_KEY_TYPE_IPV4, FLOW_KEY_TYPE_IPV6, + FLOW_KEY_TYPE_TCP, FLOW_KEY_TYPE_UDP, + FLOW_KEY_TYPE_SCTP, FLOW_KEY_TYPE_ETH_DMAC + }, + { + FLOW_KEY_TYPE_INNR_IPV4, FLOW_KEY_TYPE_INNR_IPV6, + FLOW_KEY_TYPE_INNR_TCP, FLOW_KEY_TYPE_INNR_UDP, + FLOW_KEY_TYPE_INNR_SCTP, FLOW_KEY_TYPE_INNR_ETH_DMAC + }, + { + FLOW_KEY_TYPE_IPV4 | FLOW_KEY_TYPE_INNR_IPV4, + FLOW_KEY_TYPE_IPV6 | FLOW_KEY_TYPE_INNR_IPV6, + FLOW_KEY_TYPE_TCP | FLOW_KEY_TYPE_INNR_TCP, + FLOW_KEY_TYPE_UDP | FLOW_KEY_TYPE_INNR_UDP, + FLOW_KEY_TYPE_SCTP | FLOW_KEY_TYPE_INNR_SCTP, + FLOW_KEY_TYPE_ETH_DMAC | FLOW_KEY_TYPE_INNR_ETH_DMAC + } + }; + uint32_t flowkey_cfg = 0; + + dev->rss_info.nix_rss = ethdev_rss; + + if (ethdev_rss & RSS_IPV4_ENABLE) + flowkey_cfg |= flow_key_type[rss_level][RSS_IPV4_INDEX]; + + if (ethdev_rss & RSS_IPV6_ENABLE) + flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX]; + + if (ethdev_rss & ETH_RSS_TCP) + flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX]; + + if (ethdev_rss & ETH_RSS_UDP) + flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX]; + + if (ethdev_rss & ETH_RSS_SCTP) + flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX]; + + if (ethdev_rss & ETH_RSS_L2_PAYLOAD) + flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX]; + + if (ethdev_rss & RSS_IPV6_EX_ENABLE) + flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT; + + if (ethdev_rss & ETH_RSS_PORT) + flowkey_cfg |= FLOW_KEY_TYPE_PORT; + + if (ethdev_rss & ETH_RSS_NVGRE) + flowkey_cfg |= FLOW_KEY_TYPE_NVGRE; + + if (ethdev_rss & ETH_RSS_VXLAN) { + flowkey_cfg |= FLOW_KEY_TYPE_VXLAN; + if (flowkey_cfg & FLOW_KEY_TYPE_UDP) + flowkey_cfg |= FLOW_KEY_TYPE_UDP_VXLAN; + } + + if (ethdev_rss & ETH_RSS_GENEVE) { + flowkey_cfg |= FLOW_KEY_TYPE_GENEVE; + if (flowkey_cfg & FLOW_KEY_TYPE_UDP) + flowkey_cfg |= FLOW_KEY_TYPE_UDP_GENEVE; + } + + return flowkey_cfg; +} + +int +otx2_rss_set_hf(struct otx2_eth_dev *dev, uint32_t flowkey_cfg, + uint8_t *alg_idx, uint8_t group, int mcam_index) +{ + struct nix_rss_flowkey_cfg_rsp *rss_rsp; + struct otx2_mbox *mbox = dev->mbox; + struct nix_rss_flowkey_cfg *cfg; + int rc; + + rc = -EINVAL; + + dev->rss_info.flowkey_cfg = flowkey_cfg; + + cfg = otx2_mbox_alloc_msg_nix_rss_flowkey_cfg(mbox); + + cfg->flowkey_cfg = flowkey_cfg; + cfg->mcam_index = mcam_index; /* -1 indicates default group */ + cfg->group = group; /* 0 is default group */ + + rc = otx2_mbox_process_msg(mbox, (void *)&rss_rsp); + if (rc) + return rc; + + if (alg_idx) + *alg_idx = rss_rsp->alg_idx; + + return rc; +} + +int +otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + uint32_t flowkey_cfg; + uint8_t alg_idx; + int rc; + + rc = -EINVAL; + + if (rss_conf->rss_key && rss_conf->rss_key_len != NIX_HASH_KEY_SIZE) { + otx2_err("Hash key size mismatch %d vs %d", + rss_conf->rss_key_len, NIX_HASH_KEY_SIZE); + goto fail; + } + + if (rss_conf->rss_key) + otx2_nix_rss_set_key(dev, rss_conf->rss_key, + 
(uint32_t)rss_conf->rss_key_len); + + flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_conf->rss_hf, 0); + + rc = otx2_rss_set_hf(dev, flowkey_cfg, &alg_idx, + NIX_DEFAULT_RSS_CTX_GROUP, + NIX_DEFAULT_RSS_MCAM_IDX); + if (rc) { + otx2_err("Failed to set RSS hash function rc=%d", rc); + return rc; + } + + dev->rss_info.alg_idx = alg_idx; + +fail: + return rc; +} + +int +otx2_nix_rss_hash_conf_get(struct rte_eth_dev *eth_dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + + if (rss_conf->rss_key) + rss_get_key(dev, rss_conf->rss_key); + + rss_conf->rss_key_len = NIX_HASH_KEY_SIZE; + rss_conf->rss_hf = dev->rss_info.nix_rss; + + return 0; +} + +int +otx2_nix_rss_config(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + uint32_t idx, qcnt = eth_dev->data->nb_rx_queues; + uint32_t flowkey_cfg; + uint64_t rss_hf; + uint8_t alg_idx; + int rc; + + /* Skip further configuration if selected mode is not RSS */ + if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) + return 0; + + /* Update default RSS key and cfg */ + otx2_nix_rss_set_key(dev, NULL, 0); + + /* Update default RSS RETA */ + for (idx = 0; idx < dev->rss_info.rss_size; idx++) + dev->rss_info.ind_tbl[idx] = idx % qcnt; + + /* Init RSS table context */ + rc = otx2_nix_rss_tbl_init(dev, 0, dev->rss_info.ind_tbl); + if (rc) { + otx2_err("Failed to init RSS table rc=%d", rc); + return rc; + } + + rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf; + flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_hf, 0); + + rc = otx2_rss_set_hf(dev, flowkey_cfg, &alg_idx, + NIX_DEFAULT_RSS_CTX_GROUP, + NIX_DEFAULT_RSS_MCAM_IDX); + if (rc) { + otx2_err("Failed to set RSS hash function rc=%d", rc); + return rc; + } + + dev->rss_info.alg_idx = alg_idx; + + return 0; +} From patchwork Sun Jun 2 15:23:53 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54073 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id C9A661B964; Sun, 2 Jun 2019 17:25:32 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id CC9FD1B9D4 for ; Sun, 2 Jun 2019 17:25:30 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FJtoI021289; Sun, 2 Jun 2019 08:25:30 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=PZ/VTjpZ8udtpLgEPqxD/fiEyVoVkqKrPRSGALshVWo=; b=QJCOl25WChnmbyvB8lqYrB+J1ZXzsfcp5569TXCcqiVVCCuVrUATwUwU9UBkGmPYPQde LH/37iVBYp0+Y/lTG/KvhAww6CBV+wCLi5pRj7qD5oIZzuhsOj9PuOhS8SPLWuQYBQfQ URo2gnkcPgd8pRaUT0hk2DpLH0b/nZj1iHuuR3SFmt0wVcSckLk/J20VvV82tYxS0SrC ocbfs5slIw1ZQkWkszLUrSIsR+g7SPh618Pqb7ODHJqOpxsRrLvIWHf7dx7lYTV6AE49 3oRnaZR18q0OJXfiAZhwchKTq25pFhmfJiVcpyK2R+uFXrZPqLd5pswHg+LbDOjz20z0 mg== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0a-0016f401.pphosted.com with ESMTP id 2supqkvqfk-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:25:29 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by 
SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:25:29 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:25:29 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 6471F3F703F; Sun, 2 Jun 2019 08:25:27 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Vamsi Attunuru Date: Sun, 2 Jun 2019 20:53:53 +0530 Message-ID: <20190602152434.23996-18-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 17/58] net/octeontx2: add Rx queue setup and release X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Add Rx queue setup and release. Signed-off-by: Jerin Jacob Signed-off-by: Nithin Dabilpuram Signed-off-by: Vamsi Attunuru --- drivers/net/octeontx2/otx2_ethdev.c | 310 ++++++++++++++++++++++++++++ drivers/net/octeontx2/otx2_ethdev.h | 51 +++++ 2 files changed, 361 insertions(+) diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 5289c79e8..dbbc2263d 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -2,9 +2,15 @@ * Copyright(C) 2019 Marvell International Ltd. 
*/ +#include +#include + #include #include #include +#include +#include +#include #include "otx2_ethdev.h" @@ -114,6 +120,308 @@ nix_lf_free(struct otx2_eth_dev *dev) return otx2_mbox_process(mbox); } +static inline void +nix_rx_queue_reset(struct otx2_eth_rxq *rxq) +{ + rxq->head = 0; + rxq->available = 0; +} + +static inline uint32_t +nix_qsize_to_val(enum nix_q_size_e qsize) +{ + return (16UL << (qsize * 2)); +} + +static inline enum nix_q_size_e +nix_qsize_clampup_get(struct otx2_eth_dev *dev, uint32_t val) +{ + int i; + + if (otx2_ethdev_fixup_is_min_4k_q(dev)) + i = nix_q_size_4K; + else + i = nix_q_size_16; + + for (; i < nix_q_size_max; i++) + if (val <= nix_qsize_to_val(i)) + break; + + if (i >= nix_q_size_max) + i = nix_q_size_max - 1; + + return i; +} + +static int +nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev, + uint16_t qid, struct otx2_eth_rxq *rxq, struct rte_mempool *mp) +{ + struct otx2_mbox *mbox = dev->mbox; + const struct rte_memzone *rz; + uint32_t ring_size, cq_size; + struct nix_aq_enq_req *aq; + uint16_t first_skip; + int rc; + + cq_size = rxq->qlen; + ring_size = cq_size * NIX_CQ_ENTRY_SZ; + rz = rte_eth_dma_zone_reserve(eth_dev, "cq", qid, ring_size, + NIX_CQ_ALIGN, dev->node); + if (rz == NULL) { + otx2_err("Failed to allocate mem for cq hw ring"); + rc = -ENOMEM; + goto fail; + } + memset(rz->addr, 0, rz->len); + rxq->desc = (uintptr_t)rz->addr; + rxq->qmask = cq_size - 1; + + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = qid; + aq->ctype = NIX_AQ_CTYPE_CQ; + aq->op = NIX_AQ_INSTOP_INIT; + + aq->cq.ena = 1; + aq->cq.caching = 1; + aq->cq.qsize = rxq->qsize; + aq->cq.base = rz->iova; + aq->cq.avg_level = 0xff; + aq->cq.cq_err_int_ena = BIT(NIX_CQERRINT_CQE_FAULT); + aq->cq.cq_err_int_ena |= BIT(NIX_CQERRINT_DOOR_ERR); + + /* Many to one reduction */ + aq->cq.qint_idx = qid % dev->qints; + + if (otx2_ethdev_fixup_is_limit_cq_full(dev)) { + uint16_t min_rx_drop; + const float rx_cq_skid = 1024 * 256; + + min_rx_drop = ceil(rx_cq_skid / (float)cq_size); + aq->cq.drop = min_rx_drop; + aq->cq.drop_ena = 1; + } + + rc = otx2_mbox_process(mbox); + if (rc) { + otx2_err("Failed to init cq context"); + goto fail; + } + + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = qid; + aq->ctype = NIX_AQ_CTYPE_RQ; + aq->op = NIX_AQ_INSTOP_INIT; + + aq->rq.sso_ena = 0; + aq->rq.cq = qid; /* RQ to CQ 1:1 mapped */ + aq->rq.spb_ena = 0; + aq->rq.lpb_aura = npa_lf_aura_handle_to_aura(mp->pool_id); + first_skip = (sizeof(struct rte_mbuf)); + first_skip += RTE_PKTMBUF_HEADROOM; + first_skip += rte_pktmbuf_priv_size(mp); + rxq->data_off = first_skip; + + first_skip /= 8; /* Expressed in number of dwords */ + aq->rq.first_skip = first_skip; + aq->rq.later_skip = (sizeof(struct rte_mbuf) / 8); + aq->rq.flow_tagw = 32; /* 32-bits */ + aq->rq.lpb_sizem1 = rte_pktmbuf_data_room_size(mp); + aq->rq.lpb_sizem1 += rte_pktmbuf_priv_size(mp); + aq->rq.lpb_sizem1 += sizeof(struct rte_mbuf); + aq->rq.lpb_sizem1 /= 8; + aq->rq.lpb_sizem1 -= 1; /* Expressed in size minus one */ + aq->rq.ena = 1; + aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */ + aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */ + aq->rq.rq_int_ena = 0; + /* Many to one reduction */ + aq->rq.qint_idx = qid % dev->qints; + + if (otx2_ethdev_fixup_is_limit_cq_full(dev)) + aq->rq.xqe_drop_ena = 1; + + rc = otx2_mbox_process(mbox); + if (rc) { + otx2_err("Failed to init rq context"); + goto fail; + } + + return 0; +fail: + return rc; +} + +static int +nix_cq_rq_uninit(struct 
rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct nix_aq_enq_req *aq; + int rc; + + /* RQ is already disabled */ + /* Disable CQ */ + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = rxq->rq; + aq->ctype = NIX_AQ_CTYPE_CQ; + aq->op = NIX_AQ_INSTOP_WRITE; + + aq->cq.ena = 0; + aq->cq_mask.ena = ~(aq->cq_mask.ena); + + rc = otx2_mbox_process(mbox); + if (rc < 0) { + otx2_err("Failed to disable cq context"); + return rc; + } + + return 0; +} + +static inline int +nix_get_data_off(struct otx2_eth_dev *dev) +{ + RTE_SET_USED(dev); + + return 0; +} + +uint64_t +otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id) +{ + struct rte_mbuf mb_def; + uint64_t *tmp; + + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_off) % 8 != 0); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, refcnt) - + offsetof(struct rte_mbuf, data_off) != 2); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, nb_segs) - + offsetof(struct rte_mbuf, data_off) != 4); + RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, port) - + offsetof(struct rte_mbuf, data_off) != 6); + mb_def.nb_segs = 1; + mb_def.data_off = RTE_PKTMBUF_HEADROOM + nix_get_data_off(dev); + mb_def.port = port_id; + rte_mbuf_refcnt_set(&mb_def, 1); + + /* Prevent compiler reordering: rearm_data covers previous fields */ + rte_compiler_barrier(); + tmp = (uint64_t *)&mb_def.rearm_data; + + return *tmp; +} + +static void +otx2_nix_rx_queue_release(void *rx_queue) +{ + struct otx2_eth_rxq *rxq = rx_queue; + + if (!rxq) + return; + + otx2_nix_dbg("Releasing rxq %u", rxq->rq); + nix_cq_rq_uninit(rxq->eth_dev, rxq); + rte_free(rx_queue); +} + +static int +otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq, + uint16_t nb_desc, unsigned int socket, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct rte_mempool_ops *ops; + struct otx2_eth_rxq *rxq; + const char *platform_ops; + enum nix_q_size_e qsize; + uint64_t offloads; + int rc; + + rc = -EINVAL; + + /* Compile time check to make sure all fast path elements in a CL */ + RTE_BUILD_BUG_ON(offsetof(struct otx2_eth_rxq, slow_path_start) >= 128); + + /* Sanity checks */ + if (rx_conf->rx_deferred_start == 1) { + otx2_err("Deferred Rx start is not supported"); + goto fail; + } + + platform_ops = rte_mbuf_platform_mempool_ops(); + /* This driver needs octeontx2_npa mempool ops to work */ + ops = rte_mempool_get_ops(mp->ops_index); + if (strncmp(ops->name, platform_ops, RTE_MEMPOOL_OPS_NAMESIZE)) { + otx2_err("mempool ops should be of octeontx2_npa type"); + goto fail; + } + + if (mp->pool_id == 0) { + otx2_err("Invalid pool_id"); + goto fail; + } + + /* Free memory prior to re-allocation if needed */ + if (eth_dev->data->rx_queues[rq] != NULL) { + otx2_nix_dbg("Freeing memory prior to re-allocation %d", rq); + otx2_nix_rx_queue_release(eth_dev->data->rx_queues[rq]); + eth_dev->data->rx_queues[rq] = NULL; + } + + offloads = rx_conf->offloads | eth_dev->data->dev_conf.rxmode.offloads; + dev->rx_offloads |= offloads; + + /* Find the CQ queue size */ + qsize = nix_qsize_clampup_get(dev, nb_desc); + /* Allocate rxq memory */ + rxq = rte_zmalloc_socket("otx2 rxq", sizeof(*rxq), OTX2_ALIGN, socket); + if (rxq == NULL) { + otx2_err("Failed to allocate rq=%d", rq); + rc = -ENOMEM; + goto fail; + } + + rxq->eth_dev = eth_dev; + rxq->rq = rq; + rxq->cq_door = dev->base + NIX_LF_CQ_OP_DOOR; + rxq->cq_status = (int64_t *)(dev->base + 
NIX_LF_CQ_OP_STATUS); + rxq->wdata = (uint64_t)rq << 32; + rxq->aura = npa_lf_aura_handle_to_aura(mp->pool_id); + rxq->mbuf_initializer = otx2_nix_rxq_mbuf_setup(dev, + eth_dev->data->port_id); + rxq->offloads = offloads; + rxq->pool = mp; + rxq->qlen = nix_qsize_to_val(qsize); + rxq->qsize = qsize; + + /* Alloc completion queue */ + rc = nix_cq_rq_init(eth_dev, dev, rq, rxq, mp); + if (rc) { + otx2_err("Failed to allocate rxq=%u", rq); + goto free_rxq; + } + + rxq->qconf.socket_id = socket; + rxq->qconf.nb_desc = nb_desc; + rxq->qconf.mempool = mp; + memcpy(&rxq->qconf.conf.rx, rx_conf, sizeof(struct rte_eth_rxconf)); + + nix_rx_queue_reset(rxq); + otx2_nix_dbg("rq=%d pool=%s qsize=%d nb_desc=%d->%d", + rq, mp->name, qsize, nb_desc, rxq->qlen); + + eth_dev->data->rx_queues[rq] = rxq; + eth_dev->data->rx_queue_state[rq] = RTE_ETH_QUEUE_STATE_STOPPED; + return 0; + +free_rxq: + otx2_nix_rx_queue_release(rxq); +fail: + return rc; +} + static int otx2_nix_configure(struct rte_eth_dev *eth_dev) { @@ -241,6 +549,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .dev_infos_get = otx2_nix_info_get, .dev_configure = otx2_nix_configure, .link_update = otx2_nix_link_update, + .rx_queue_setup = otx2_nix_rx_queue_setup, + .rx_queue_release = otx2_nix_rx_queue_release, .stats_get = otx2_nix_dev_stats_get, .stats_reset = otx2_nix_dev_stats_reset, .get_reg = otx2_nix_dev_get_reg, diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 67b164740..562724b4e 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -10,6 +10,9 @@ #include #include #include +#include +#include +#include #include "otx2_common.h" #include "otx2_dev.h" @@ -50,6 +53,7 @@ #define NIX_RX_MIN_DESC_ALIGN 16 #define NIX_RX_NB_SEG_MAX 6 #define NIX_CQ_ENTRY_SZ 128 +#define NIX_CQ_ALIGN 512 /* If PTP is enabled additional SEND MEM DESC is required which * takes 2 words, hence max 7 iova address are possible @@ -98,6 +102,19 @@ #define NIX_DEFAULT_RSS_CTX_GROUP 0 #define NIX_DEFAULT_RSS_MCAM_IDX -1 +enum nix_q_size_e { + nix_q_size_16, /* 16 entries */ + nix_q_size_64, /* 64 entries */ + nix_q_size_256, + nix_q_size_1K, + nix_q_size_4K, + nix_q_size_16K, + nix_q_size_64K, + nix_q_size_256K, + nix_q_size_1M, /* Million entries */ + nix_q_size_max +}; + struct otx2_qint { struct rte_eth_dev *eth_dev; uint8_t qintx; @@ -113,6 +130,16 @@ struct otx2_rss_info { uint8_t key[NIX_HASH_KEY_SIZE]; }; +struct otx2_eth_qconf { + union { + struct rte_eth_txconf tx; + struct rte_eth_rxconf rx; + } conf; + void *mempool; + uint32_t socket_id; + uint16_t nb_desc; +}; + struct otx2_npc_flow_info { uint16_t channel; /*rx channel */ uint16_t flow_prealloc_size; @@ -158,6 +185,29 @@ struct otx2_eth_dev { struct rte_eth_dev *eth_dev; } __rte_cache_aligned; +struct otx2_eth_rxq { + uint64_t mbuf_initializer; + uint64_t data_off; + uintptr_t desc; + void *lookup_mem; + uintptr_t cq_door; + uint64_t wdata; + int64_t *cq_status; + uint32_t head; + uint32_t qmask; + uint32_t available; + uint16_t rq; + struct otx2_timesync_info *tstamp; + MARKER slow_path_start; + uint64_t aura; + uint64_t offloads; + uint32_t qlen; + struct rte_mempool *pool; + enum nix_q_size_e qsize; + struct rte_eth_dev *eth_dev; + struct otx2_eth_qconf qconf; +} __rte_cache_aligned; + static inline struct otx2_eth_dev * otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev) { @@ -173,6 +223,7 @@ void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev); void otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev); void 
otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev); void otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev); +uint64_t otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id); /* Link */ void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set); From patchwork Sun Jun 2 15:23:54 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54074 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 5B4221B9D8; Sun, 2 Jun 2019 17:25:35 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 1D4111B9D6 for ; Sun, 2 Jun 2019 17:25:34 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FK6kH020260; Sun, 2 Jun 2019 08:25:33 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=NIRJZTqLLxDVmnZmwKdvWlIMgJUFvP0VT1WkL3ywm0k=; b=Mzmk0/6+7GIg/BLLeOWZTRESwxwYZJbsphEaevDva3zRbaxJ+c3z6HhKFKM28Q2aONZ7 fANyOZP0rFjpMVR/311wccCoubuYhqyXyUwKgQIvas4e2bVpBphIgKi3UX7DzpjBfTja JRkrCgF1aPcqL8usr8KAC/UG6d+xiHtRZYYgU2FtRLey0OaTykVAdbITFRjQCrorfabd vP7W6fb5mnhondUjRsnwY/AlVhOo2azrCcq8BjIxOSxKz0+HRPztrIJO1vEcp5n7nOVw O7l9O1U3VkOL+5knUl1D0iW9CWeIsSsJm9+QhUiSqfAnlLH5U0dVgjAHwRV7wEolo1gi fQ== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk493x-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:25:33 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:25:31 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:25:31 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 64EE63F703F; Sun, 2 Jun 2019 08:25:30 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: Date: Sun, 2 Jun 2019 20:53:54 +0530 Message-ID: <20190602152434.23996-19-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 18/58] net/octeontx2: add Tx queue setup and release X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Add Tx queue setup and release. 
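For context, the SQ is backed by a mempool of send queue buffers (SQBs); how many SQEs fit in one SQB depends on the SQE size picked by nix_sq_max_sqe_sz() below (W8 for single segment, W16 when multi-segment offload is enabled). The following is a minimal standalone sketch of the SQB sizing math used by nix_alloc_sqb_pool() in this patch; sqb_size, nb_desc and multi_seg are assumed inputs standing in for dev->sqb_size, the requested ring size and the DEV_TX_OFFLOAD_MULTI_SEGS check:

#include <stdint.h>

#define NIX_MIN_SQB		512	/* Clamp-up floor, from otx2_ethdev.h */
#define NIX_SQB_LIST_SPACE	2	/* Extra SQBs kept as list headroom */

/* Sketch only: derive the SQB buffer count from the descriptor count. */
static uint16_t
nb_sqb_bufs_for(uint32_t sqb_size, uint16_t nb_desc, int multi_seg)
{
	/* Each SQE occupies 16 dwords (W16) or 8 dwords (W8) */
	uint16_t sqes_per_sqb = (sqb_size / 8) / (multi_seg ? 16 : 8);
	uint16_t nb_sqb_bufs = nb_desc / sqes_per_sqb + NIX_SQB_LIST_SPACE;

	/* Never size the pool below the minimum SQB count */
	return nb_sqb_bufs < NIX_MIN_SQB ? NIX_MIN_SQB : nb_sqb_bufs;
}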
Signed-off-by: Jerin Jacob Signed-off-by: Nithin Dabilpuram --- drivers/net/octeontx2/otx2_ethdev.c | 384 +++++++++++++++++++++++++++- drivers/net/octeontx2/otx2_ethdev.h | 24 ++ drivers/net/octeontx2/otx2_tx.h | 28 ++ 3 files changed, 435 insertions(+), 1 deletion(-) create mode 100644 drivers/net/octeontx2/otx2_tx.h diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index dbbc2263d..b501ba865 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -422,6 +422,372 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq, return rc; } +static inline uint8_t +nix_sq_max_sqe_sz(struct otx2_eth_txq *txq) +{ + /* + * Maximum three segments can be supported with W8, Choose + * NIX_MAXSQESZ_W16 for multi segment offload. + */ + if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS) + return NIX_MAXSQESZ_W16; + else + return NIX_MAXSQESZ_W8; +} + +static int +nix_sq_init(struct otx2_eth_txq *txq) +{ + struct otx2_eth_dev *dev = txq->dev; + struct otx2_mbox *mbox = dev->mbox; + struct nix_aq_enq_req *sq; + + if (txq->sqb_pool->pool_id == 0) + return -EINVAL; + + sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + sq->qidx = txq->sq; + sq->ctype = NIX_AQ_CTYPE_SQ; + sq->op = NIX_AQ_INSTOP_INIT; + sq->sq.max_sqe_size = nix_sq_max_sqe_sz(txq); + + sq->sq.default_chan = dev->tx_chan_base; + sq->sq.sqe_stype = NIX_STYPE_STF; + sq->sq.ena = 1; + if (sq->sq.max_sqe_size == NIX_MAXSQESZ_W8) + sq->sq.sqe_stype = NIX_STYPE_STP; + sq->sq.sqb_aura = + npa_lf_aura_handle_to_aura(txq->sqb_pool->pool_id); + sq->sq.sq_int_ena = BIT(NIX_SQINT_LMT_ERR); + sq->sq.sq_int_ena |= BIT(NIX_SQINT_SQB_ALLOC_FAIL); + sq->sq.sq_int_ena |= BIT(NIX_SQINT_SEND_ERR); + sq->sq.sq_int_ena |= BIT(NIX_SQINT_MNQ_ERR); + + /* Many to one reduction */ + sq->sq.qint_idx = txq->sq % dev->qints; + + return otx2_mbox_process(mbox); +} + +static int +nix_sq_uninit(struct otx2_eth_txq *txq) +{ + struct otx2_eth_dev *dev = txq->dev; + struct otx2_mbox *mbox = dev->mbox; + struct ndc_sync_op *ndc_req; + struct nix_aq_enq_rsp *rsp; + struct nix_aq_enq_req *aq; + uint16_t sqes_per_sqb; + void *sqb_buf; + int rc, count; + + otx2_nix_dbg("Cleaning up sq %u", txq->sq); + + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = txq->sq; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_READ; + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + /* Check if sq is already cleaned up */ + if (!rsp->sq.ena) + return 0; + + /* Disable sq */ + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = txq->sq; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_WRITE; + + aq->sq_mask.ena = ~aq->sq_mask.ena; + aq->sq.ena = 0; + + rc = otx2_mbox_process(mbox); + if (rc) + return rc; + + /* Read SQ and free sqb's */ + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = txq->sq; + aq->ctype = NIX_AQ_CTYPE_SQ; + aq->op = NIX_AQ_INSTOP_READ; + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + if (aq->sq.smq_pend) + rte_panic("otx2: sq has pending sqe's"); + + count = aq->sq.sqb_count; + sqes_per_sqb = 1 << txq->sqes_per_sqb_log2; + /* Free SQB's that are used */ + sqb_buf = (void *)rsp->sq.head_sqb; + while (count) { + void *next_sqb; + + next_sqb = *(void **)((uintptr_t)sqb_buf + ((sqes_per_sqb - 1) * + nix_sq_max_sqe_sz(txq))); + npa_lf_aura_op_free(txq->sqb_pool->pool_id, 1, + (uint64_t)sqb_buf); + sqb_buf = next_sqb; + count--; + } + + /* Free next to use sqb */ + if (rsp->sq.next_sqb) + 
npa_lf_aura_op_free(txq->sqb_pool->pool_id, 1, + rsp->sq.next_sqb); + + /* Sync NDC-NIX-TX for LF */ + ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox); + ndc_req->nix_lf_tx_sync = 1; + rc = otx2_mbox_process(mbox); + if (rc) + otx2_err("Error on NDC-NIX-TX LF sync, rc %d", rc); + + return rc; +} + +static int +nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc) +{ + struct otx2_eth_dev *dev = txq->dev; + uint16_t sqes_per_sqb, nb_sqb_bufs; + char name[RTE_MEMPOOL_NAMESIZE]; + struct rte_mempool_objsz sz; + struct npa_aura_s *aura; + uint32_t tmp, blk_sz; + + aura = (struct npa_aura_s *)((uintptr_t)txq->fc_mem + OTX2_ALIGN); + snprintf(name, sizeof(name), "otx2_sqb_pool_%d_%d", port, txq->sq); + blk_sz = dev->sqb_size; + + if (nix_sq_max_sqe_sz(txq) == NIX_MAXSQESZ_W16) + sqes_per_sqb = (dev->sqb_size / 8) / 16; + else + sqes_per_sqb = (dev->sqb_size / 8) / 8; + + nb_sqb_bufs = nb_desc / sqes_per_sqb; + /* Clamp up to minimum SQB buffers */ + nb_sqb_bufs = RTE_MAX(NIX_MIN_SQB, nb_sqb_bufs + NIX_SQB_LIST_SPACE); + + txq->sqb_pool = rte_mempool_create_empty(name, nb_sqb_bufs, blk_sz, + 0, 0, dev->node, + MEMPOOL_F_NO_SPREAD); + txq->nb_sqb_bufs = nb_sqb_bufs; + txq->sqes_per_sqb_log2 = (uint16_t)rte_log2_u32(sqes_per_sqb); + txq->nb_sqb_bufs_adj = nb_sqb_bufs - + RTE_ALIGN_MUL_CEIL(nb_sqb_bufs, sqes_per_sqb) / sqes_per_sqb; + txq->nb_sqb_bufs_adj = + (NIX_SQB_LOWER_THRESH * txq->nb_sqb_bufs_adj) / 100; + + if (txq->sqb_pool == NULL) { + otx2_err("Failed to allocate sqe mempool"); + goto fail; + } + + memset(aura, 0, sizeof(*aura)); + aura->fc_ena = 1; + aura->fc_addr = txq->fc_iova; + aura->fc_hyst_bits = 0; /* Store count on all updates */ + if (rte_mempool_set_ops_byname(txq->sqb_pool, "octeontx2_npa", aura)) { + otx2_err("Failed to set ops for sqe mempool"); + goto fail; + } + if (rte_mempool_populate_default(txq->sqb_pool) < 0) { + otx2_err("Failed to populate sqe mempool"); + goto fail; + } + + tmp = rte_mempool_calc_obj_size(blk_sz, MEMPOOL_F_NO_SPREAD, &sz); + if (dev->sqb_size != sz.elt_size) { + otx2_err("sqe pool block size is not expected %d != %d", + dev->sqb_size, tmp); + goto fail; + } + + return 0; +fail: + return -ENOMEM; +} + +void +otx2_nix_form_default_desc(struct otx2_eth_txq *txq) +{ + struct nix_send_ext_s *send_hdr_ext; + struct nix_send_hdr_s *send_hdr; + struct nix_send_mem_s *send_mem; + union nix_send_sg_s *sg; + + /* Initialize the fields based on basic single segment packet */ + memset(&txq->cmd, 0, sizeof(txq->cmd)); + + if (txq->dev->tx_offload_flags & NIX_TX_NEED_EXT_HDR) { + send_hdr = (struct nix_send_hdr_s *)&txq->cmd[0]; + /* 2(HDR) + 2(EXT_HDR) + 1(SG) + 1(IOVA) = 6/2 - 1 = 2 */ + send_hdr->w0.sizem1 = 2; + + send_hdr_ext = (struct nix_send_ext_s *)&txq->cmd[2]; + send_hdr_ext->w0.subdc = NIX_SUBDC_EXT; + if (txq->dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F) { + /* Default: one seg packet would have: + * 2(HDR) + 2(EXT) + 1(SG) + 1(IOVA) + 2(MEM) + * => 8/2 - 1 = 3 + */ + send_hdr->w0.sizem1 = 3; + send_hdr_ext->w0.tstmp = 1; + + /* To calculate the offset for send_mem, + * send_hdr->w0.sizem1 * 2 + */ + send_mem = (struct nix_send_mem_s *)(txq->cmd + + (send_hdr->w0.sizem1 << 1)); + send_mem->subdc = NIX_SUBDC_MEM; + send_mem->dsz = 0x0; + send_mem->wmem = 0x1; + send_mem->alg = NIX_SENDMEMALG_SETTSTMP; + } + sg = (union nix_send_sg_s *)&txq->cmd[4]; + } else { + send_hdr = (struct nix_send_hdr_s *)&txq->cmd[0]; + /* 2(HDR) + 1(SG) + 1(IOVA) = 4/2 - 1 = 1 */ + send_hdr->w0.sizem1 = 1; + sg = (union nix_send_sg_s *)&txq->cmd[2]; + 
} + + send_hdr->w0.sq = txq->sq; + sg->subdc = NIX_SUBDC_SG; + sg->segs = 1; + sg->ld_type = NIX_SENDLDTYPE_LDD; + + rte_smp_wmb(); +} + +static void +otx2_nix_tx_queue_release(void *_txq) +{ + struct otx2_eth_txq *txq = _txq; + + if (!txq) + return; + + otx2_nix_dbg("Releasing txq %u", txq->sq); + + /* Free sqb's and disable sq */ + nix_sq_uninit(txq); + + if (txq->sqb_pool) { + rte_mempool_free(txq->sqb_pool); + txq->sqb_pool = NULL; + } + rte_free(txq); +} + + +static int +otx2_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t sq, + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_txconf *tx_conf) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + const struct rte_memzone *fc; + struct otx2_eth_txq *txq; + uint64_t offloads; + int rc; + + rc = -EINVAL; + + /* Compile time check to make sure all fast path elements in a CL */ + RTE_BUILD_BUG_ON(offsetof(struct otx2_eth_txq, slow_path_start) >= 128); + + if (tx_conf->tx_deferred_start) { + otx2_err("Tx deferred start is not supported"); + goto fail; + } + + /* Free memory prior to re-allocation if needed. */ + if (eth_dev->data->tx_queues[sq] != NULL) { + otx2_nix_dbg("Freeing memory prior to re-allocation %d", sq); + otx2_nix_tx_queue_release(eth_dev->data->tx_queues[sq]); + eth_dev->data->tx_queues[sq] = NULL; + } + + /* Find the expected offloads for this queue */ + offloads = tx_conf->offloads | eth_dev->data->dev_conf.txmode.offloads; + + /* Allocating tx queue data structure */ + txq = rte_zmalloc_socket("otx2_ethdev TX queue", sizeof(*txq), + OTX2_ALIGN, socket_id); + if (txq == NULL) { + otx2_err("Failed to alloc txq=%d", sq); + rc = -ENOMEM; + goto fail; + } + txq->sq = sq; + txq->dev = dev; + txq->sqb_pool = NULL; + txq->offloads = offloads; + dev->tx_offloads |= offloads; + + /* + * Allocate memory for flow control updates from HW. + * Alloc one cache line, so that fits all FC_STYPE modes. 
+ */ + fc = rte_eth_dma_zone_reserve(eth_dev, "fcmem", sq, + OTX2_ALIGN + sizeof(struct npa_aura_s), + OTX2_ALIGN, dev->node); + if (fc == NULL) { + otx2_err("Failed to allocate mem for fcmem"); + rc = -ENOMEM; + goto free_txq; + } + txq->fc_iova = fc->iova; + txq->fc_mem = fc->addr; + + /* Initialize the aura sqb pool */ + rc = nix_alloc_sqb_pool(eth_dev->data->port_id, txq, nb_desc); + if (rc) { + otx2_err("Failed to alloc sqe pool rc=%d", rc); + goto free_txq; + } + + /* Initialize the SQ */ + rc = nix_sq_init(txq); + if (rc) { + otx2_err("Failed to init sq=%d context", sq); + goto free_txq; + } + + txq->fc_cache_pkts = 0; + txq->io_addr = dev->base + NIX_LF_OP_SENDX(0); + /* Evenly distribute LMT slot for each sq */ + txq->lmt_addr = (void *)(dev->lmt_addr + ((sq & LMT_SLOT_MASK) << 12)); + + txq->qconf.socket_id = socket_id; + txq->qconf.nb_desc = nb_desc; + memcpy(&txq->qconf.conf.tx, tx_conf, sizeof(struct rte_eth_txconf)); + + otx2_nix_form_default_desc(txq); + + otx2_nix_dbg("sq=%d fc=%p offload=0x%" PRIx64 " sqb=0x%" PRIx64 "" + " lmt_addr=%p nb_sqb_bufs=%d sqes_per_sqb_log2=%d", sq, + fc->addr, offloads, txq->sqb_pool->pool_id, txq->lmt_addr, + txq->nb_sqb_bufs, txq->sqes_per_sqb_log2); + eth_dev->data->tx_queues[sq] = txq; + eth_dev->data->tx_queue_state[sq] = RTE_ETH_QUEUE_STATE_STOPPED; + return 0; + +free_txq: + otx2_nix_tx_queue_release(txq); +fail: + return rc; +} + + static int otx2_nix_configure(struct rte_eth_dev *eth_dev) { @@ -549,6 +915,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .dev_infos_get = otx2_nix_info_get, .dev_configure = otx2_nix_configure, .link_update = otx2_nix_link_update, + .tx_queue_setup = otx2_nix_tx_queue_setup, + .tx_queue_release = otx2_nix_tx_queue_release, .rx_queue_setup = otx2_nix_rx_queue_setup, .rx_queue_release = otx2_nix_rx_queue_release, .stats_get = otx2_nix_dev_stats_get, @@ -763,12 +1131,26 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) { struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); struct rte_pci_device *pci_dev; - int rc; + int rc, i; /* Nothing to be done for secondary processes */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; + /* Free up SQs */ + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) { + otx2_nix_tx_queue_release(eth_dev->data->tx_queues[i]); + eth_dev->data->tx_queues[i] = NULL; + } + eth_dev->data->nb_tx_queues = 0; + + /* Free up RQ's and CQ's */ + for (i = 0; i < eth_dev->data->nb_rx_queues; i++) { + otx2_nix_rx_queue_release(eth_dev->data->rx_queues[i]); + eth_dev->data->rx_queues[i] = NULL; + } + eth_dev->data->nb_rx_queues = 0; + /* Unregister queue irqs */ oxt2_nix_unregister_queue_irqs(eth_dev); diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 562724b4e..4ec950100 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -19,6 +19,7 @@ #include "otx2_irq.h" #include "otx2_mempool.h" #include "otx2_rx.h" +#include "otx2_tx.h" #define OTX2_ETH_DEV_PMD_VERSION "1.0" @@ -54,6 +55,8 @@ #define NIX_RX_NB_SEG_MAX 6 #define NIX_CQ_ENTRY_SZ 128 #define NIX_CQ_ALIGN 512 +#define NIX_SQB_LOWER_THRESH 90 +#define LMT_SLOT_MASK 0x7f /* If PTP is enabled additional SEND MEM DESC is required which * takes 2 words, hence max 7 iova address are possible @@ -185,6 +188,24 @@ struct otx2_eth_dev { struct rte_eth_dev *eth_dev; } __rte_cache_aligned; +struct otx2_eth_txq { + uint64_t cmd[8]; + int64_t fc_cache_pkts; + uint64_t *fc_mem; + void *lmt_addr; + rte_iova_t io_addr; + rte_iova_t 
fc_iova; + uint16_t sqes_per_sqb_log2; + int16_t nb_sqb_bufs_adj; + MARKER slow_path_start; + uint16_t nb_sqb_bufs; + uint16_t sq; + uint64_t offloads; + struct otx2_eth_dev *dev; + struct rte_mempool *sqb_pool; + struct otx2_eth_qconf qconf; +} __rte_cache_aligned; + struct otx2_eth_rxq { uint64_t mbuf_initializer; uint64_t data_off; @@ -310,4 +331,7 @@ int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev); int otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev); +/* Rx and Tx routines */ +void otx2_nix_form_default_desc(struct otx2_eth_txq *txq); + #endif /* __OTX2_ETHDEV_H__ */ diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h new file mode 100644 index 000000000..4d0993f87 --- /dev/null +++ b/drivers/net/octeontx2/otx2_tx.h @@ -0,0 +1,28 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#ifndef __OTX2_TX_H__ +#define __OTX2_TX_H__ + +#define NIX_TX_OFFLOAD_NONE (0) +#define NIX_TX_OFFLOAD_L3_L4_CSUM_F BIT(0) +#define NIX_TX_OFFLOAD_OL3_OL4_CSUM_F BIT(1) +#define NIX_TX_OFFLOAD_VLAN_QINQ_F BIT(2) +#define NIX_TX_OFFLOAD_MBUF_NOFF_F BIT(3) +#define NIX_TX_OFFLOAD_TSTAMP_F BIT(4) + +/* Flags to control xmit_prepare function. + * Defining it from backwards to denote its been + * not used as offload flags to pick function + */ +#define NIX_TX_MULTI_SEG_F BIT(15) + +#define NIX_TX_NEED_SEND_HDR_W1 \ + (NIX_TX_OFFLOAD_L3_L4_CSUM_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F | \ + NIX_TX_OFFLOAD_VLAN_QINQ_F) + +#define NIX_TX_NEED_EXT_HDR \ + (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSTAMP_F) + +#endif /* __OTX2_TX_H__ */ From patchwork Sun Jun 2 15:23:55 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54075 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 6C4DF1B9D0; Sun, 2 Jun 2019 17:25:38 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 962AD1B9DC for ; Sun, 2 Jun 2019 17:25:36 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FKK4d020364; Sun, 2 Jun 2019 08:25:36 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=/oOWJepU5UtLYYd4xeubYpSbzzmFx3IkEW/3nBzYjYM=; b=J6WiUaHptKzZFUJmGB63x707eeAzoP1B1RLi98xcPOGytYq69xU8tEZPly0HhC8m83Cj joMOEWju/WrULcEJDH4fuCfoYQRuXGgCU6pkcWoMOu0rLPG/zL5EGl87Mhb1kpvV+Fq5 ELPuJepq0wJ2Kxe0GNGAWHgntC5v8FQ9d0I/9pcdVUKZ1E9bDUt0LAY/Gb+UN7muGRFN 4XtvMW2Y2ditrueJuIH1Q5SNsiR8p/qspp0/O8XcV6rtj7fgowNcjLGqwPmXrfiocdzu lNZziTOafrHANWEsv70XkLgvDfSXRqVK3UcTtIeii3EHUw1hcv6LPuNtOELIIOMyOpnv Pw== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk4944-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:25:35 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:25:34 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com 
(10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:25:34 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id DC0D23F703F; Sun, 2 Jun 2019 08:25:32 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Vamsi Attunuru Date: Sun, 2 Jun 2019 20:53:55 +0530 Message-ID: <20190602152434.23996-20-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 19/58] net/octeontx2: handle port reconfigure X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vamsi Attunuru setup tx & rx queues with the previous configuration during port reconfig, it handles cases like port reconfigure without reconfiguring tx & rx queues. Signed-off-by: Vamsi Attunuru Signed-off-by: Nithin Dabilpuram --- drivers/net/octeontx2/otx2_ethdev.c | 180 ++++++++++++++++++++++++++++ drivers/net/octeontx2/otx2_ethdev.h | 2 + 2 files changed, 182 insertions(+) diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index b501ba865..6e14e12f0 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -787,6 +787,172 @@ otx2_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t sq, return rc; } +static int +nix_store_queue_cfg_and_then_release(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_eth_qconf *tx_qconf = NULL; + struct otx2_eth_qconf *rx_qconf = NULL; + struct otx2_eth_txq **txq; + struct otx2_eth_rxq **rxq; + int i, nb_rxq, nb_txq; + + nb_rxq = RTE_MIN(dev->configured_nb_rx_qs, eth_dev->data->nb_rx_queues); + nb_txq = RTE_MIN(dev->configured_nb_tx_qs, eth_dev->data->nb_tx_queues); + + tx_qconf = malloc(nb_txq * sizeof(*tx_qconf)); + if (tx_qconf == NULL) { + otx2_err("Failed to allocate memory for tx_qconf"); + goto fail; + } + + rx_qconf = malloc(nb_rxq * sizeof(*rx_qconf)); + if (rx_qconf == NULL) { + otx2_err("Failed to allocate memory for rx_qconf"); + goto fail; + } + + txq = (struct otx2_eth_txq **)eth_dev->data->tx_queues; + for (i = 0; i < nb_txq; i++) { + if (txq[i] == NULL) { + otx2_err("txq[%d] is already released", i); + goto fail; + } + memcpy(&tx_qconf[i], &txq[i]->qconf, sizeof(*tx_qconf)); + otx2_nix_tx_queue_release(txq[i]); + eth_dev->data->tx_queues[i] = NULL; + } + + rxq = (struct otx2_eth_rxq **)eth_dev->data->rx_queues; + for (i = 0; i < nb_rxq; i++) { + if (rxq[i] == NULL) { + otx2_err("rxq[%d] is already released", i); + goto fail; + } + memcpy(&rx_qconf[i], &rxq[i]->qconf, sizeof(*rx_qconf)); + otx2_nix_rx_queue_release(rxq[i]); + eth_dev->data->rx_queues[i] = NULL; + } + + dev->tx_qconf = tx_qconf; + dev->rx_qconf = rx_qconf; + return 0; + +fail: + if (tx_qconf) + free(tx_qconf); + if (rx_qconf) + free(rx_qconf); + + return -ENOMEM; +} + +static int +nix_restore_queue_cfg(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_eth_qconf *tx_qconf = dev->tx_qconf; + struct otx2_eth_qconf *rx_qconf = dev->rx_qconf; + struct otx2_eth_txq **txq; + struct 
otx2_eth_rxq **rxq; + int rc, i, nb_rxq, nb_txq; + + nb_rxq = RTE_MIN(dev->configured_nb_rx_qs, eth_dev->data->nb_rx_queues); + nb_txq = RTE_MIN(dev->configured_nb_tx_qs, eth_dev->data->nb_tx_queues); + + rc = -ENOMEM; + /* Setup tx & rx queues with previous configuration so + * that the queues can be functional in cases like ports + * are started without re configuring queues. + * + * Usual re config sequence is like below: + * port_configure() { + * if(reconfigure) { + * queue_release() + * queue_setup() + * } + * queue_configure() { + * queue_release() + * queue_setup() + * } + * } + * port_start() + * + * In some application's control path, queue_configure() would + * NOT be invoked for TXQs/RXQs in port_configure(). + * In such cases, queues can be functional after start as the + * queues are already setup in port_configure(). + */ + for (i = 0; i < nb_txq; i++) { + rc = otx2_nix_tx_queue_setup(eth_dev, i, tx_qconf[i].nb_desc, + tx_qconf[i].socket_id, + &tx_qconf[i].conf.tx); + if (rc) { + otx2_err("Failed to setup tx queue rc=%d", rc); + txq = (struct otx2_eth_txq **)eth_dev->data->tx_queues; + for (i -= 1; i >= 0; i--) + otx2_nix_tx_queue_release(txq[i]); + goto fail; + } + } + + free(tx_qconf); tx_qconf = NULL; + + for (i = 0; i < nb_rxq; i++) { + rc = otx2_nix_rx_queue_setup(eth_dev, i, rx_qconf[i].nb_desc, + rx_qconf[i].socket_id, + &rx_qconf[i].conf.rx, + rx_qconf[i].mempool); + if (rc) { + otx2_err("Failed to setup rx queue rc=%d", rc); + rxq = (struct otx2_eth_rxq **)eth_dev->data->rx_queues; + for (i -= 1; i >= 0; i--) + otx2_nix_rx_queue_release(rxq[i]); + goto release_tx_queues; + } + } + + free(rx_qconf); rx_qconf = NULL; + + return 0; + +release_tx_queues: + txq = (struct otx2_eth_txq **)eth_dev->data->tx_queues; + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) + otx2_nix_tx_queue_release(txq[i]); +fail: + if (tx_qconf) + free(tx_qconf); + if (rx_qconf) + free(rx_qconf); + + return rc; +} + +static uint16_t +nix_eth_nop_burst(void *queue, struct rte_mbuf **mbufs, uint16_t pkts) +{ + RTE_SET_USED(queue); + RTE_SET_USED(mbufs); + RTE_SET_USED(pkts); + + return 0; +} + +static void +nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev) +{ + /* These dummy functions are required for supporting + * some applications which reconfigure queues without + * stopping tx burst and rx burst threads(eg kni app) + * When the queues context is saved, txq/rxqs are released + * which caused app crash since rx/tx burst is still + * on different lcores + */ + eth_dev->tx_pkt_burst = nix_eth_nop_burst; + eth_dev->rx_pkt_burst = nix_eth_nop_burst; + rte_mb(); +} static int otx2_nix_configure(struct rte_eth_dev *eth_dev) @@ -843,6 +1009,10 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) /* Free the resources allocated from the previous configure */ if (dev->configured == 1) { oxt2_nix_unregister_queue_irqs(eth_dev); + nix_set_nop_rxtx_function(eth_dev); + rc = nix_store_queue_cfg_and_then_release(eth_dev); + if (rc) + goto fail; nix_lf_free(dev); } @@ -883,6 +1053,16 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) goto free_nix_lf; } + /* + * Restore queue config when reconfigure followed by + * reconfigure and no queue configure invoked from application case. 
+ */ + if (dev->configured == 1) { + rc = nix_restore_queue_cfg(eth_dev); + if (rc) + goto free_nix_lf; + } + /* Update the mac address */ ea = eth_dev->data->mac_addrs; memcpy(ea, dev->mac_addr, RTE_ETHER_ADDR_LEN); diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 4ec950100..c0568dcd1 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -185,6 +185,8 @@ struct otx2_eth_dev { uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS]; uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS]; struct otx2_npc_flow_info npc_flow; + struct otx2_eth_qconf *tx_qconf; + struct otx2_eth_qconf *rx_qconf; struct rte_eth_dev *eth_dev; } __rte_cache_aligned; From patchwork Sun Jun 2 15:23:56 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54076 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 1302B1B9E6; Sun, 2 Jun 2019 17:25:40 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 986CC1B95B for ; Sun, 2 Jun 2019 17:25:39 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FK4jt020253; Sun, 2 Jun 2019 08:25:39 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=QPncdV8ruv6j086dRAmI93+FfoZctL6AiKknNyOUTGs=; b=P2OZaWQSTBC5mdfXE3eHnLgYbdmtn82R3B7zQKMmUtwdhrSA8wZBXd7hRnxSwqBokum4 +6wRcGpDhIZLkGdWYW34x6VQQUic7Uayv4PxeyWl9cRS1DAbzpQDD3hiLw6Y0Vh3DQ1y ITuALl2MP4qShSBYBc6arqICEgCTwog6GWDmcaxmhwb2AZCMM8TO8qoZQ5RlOuT+68Y7 wYVa8G4QXazqoy29rkeguEbiTsiPEE8NGzAy1yMC2u4QMJoFif8/yDt/nR5QjzRO1jiR 0K+ieaDieHWcNuUIwyV39GSvc23e6hojvhvmvxIXksQFohpOrjeK9d9yNg7y1McCeM6p gA== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk494c-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:25:38 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:25:37 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:25:37 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id A65D03F703F; Sun, 2 Jun 2019 08:25:35 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: , Vamsi Attunuru Date: Sun, 2 Jun 2019 20:53:56 +0530 Message-ID: <20190602152434.23996-21-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 20/58] net/octeontx2: add queue start and stop operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and 
discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Nithin Dabilpuram Add queue start and stop operations. Tx queue start also needs to update the flow control value, which will be added in a subsequent patch. Signed-off-by: Nithin Dabilpuram Signed-off-by: Vamsi Attunuru --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + doc/guides/nics/features/octeontx2_vf.ini | 1 + drivers/net/octeontx2/otx2_ethdev.c | 92 ++++++++++++++++++++++ drivers/net/octeontx2/otx2_ethdev.h | 2 + 5 files changed, 97 insertions(+) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 48ac58b3a..31816a183 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -12,6 +12,7 @@ SR-IOV = Y Multiprocess aware = Y Link status = Y Link status event = Y +Queue start/stop = Y Promiscuous mode = Y Allmulticast mode = Y Unicast MAC filter = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index 6fc647af4..d79428652 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -12,6 +12,7 @@ SR-IOV = Y Multiprocess aware = Y Link status = Y Link status event = Y +Queue start/stop = Y Promiscuous mode = Y Allmulticast mode = Y Unicast MAC filter = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index af3c70269..d4deb52af 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -11,6 +11,7 @@ Lock-free Tx queue = Y Multiprocess aware = Y Link status = Y Link status event = Y +Queue start/stop = Y RSS hash = Y RSS key update = Y RSS reta update = Y diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 6e14e12f0..04a953441 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -252,6 +252,26 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev, return rc; } +static int +nix_rq_enb_dis(struct rte_eth_dev *eth_dev, + struct otx2_eth_rxq *rxq, const bool enb) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct nix_aq_enq_req *aq; + + /* Pkts will be dropped silently if RQ is disabled */ + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + aq->qidx = rxq->rq; + aq->ctype = NIX_AQ_CTYPE_RQ; + aq->op = NIX_AQ_INSTOP_WRITE; + + aq->rq.ena = enb; + aq->rq_mask.ena = ~(aq->rq_mask.ena); + + return otx2_mbox_process(mbox); +} + static int nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq) { @@ -1090,6 +1110,74 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) return rc; } +int +otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx) +{ + struct rte_eth_dev_data *data = eth_dev->data; + + if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED) + return 0; + + data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED; + return 0; +} + +int +otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx) +{ + struct rte_eth_dev_data *data = eth_dev->data; + + if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED) + return 0; + + data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED; + return 0; +} + +static int +otx2_nix_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx) +{ + struct otx2_eth_rxq *rxq =
eth_dev->data->rx_queues[qidx]; + struct rte_eth_dev_data *data = eth_dev->data; + int rc; + + if (data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED) + return 0; + + rc = nix_rq_enb_dis(rxq->eth_dev, rxq, true); + if (rc) { + otx2_err("Failed to enable rxq=%u, rc=%d", qidx, rc); + goto done; + } + + data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED; + +done: + return rc; +} + +static int +otx2_nix_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx) +{ + struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[qidx]; + struct rte_eth_dev_data *data = eth_dev->data; + int rc; + + if (data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED) + return 0; + + rc = nix_rq_enb_dis(rxq->eth_dev, rxq, false); + if (rc) { + otx2_err("Failed to disable rxq=%u, rc=%d", qidx, rc); + goto done; + } + + data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED; + +done: + return rc; +} + /* Initialize and register driver with DPDK Application */ static const struct eth_dev_ops otx2_eth_dev_ops = { .dev_infos_get = otx2_nix_info_get, @@ -1099,6 +1187,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .tx_queue_release = otx2_nix_tx_queue_release, .rx_queue_setup = otx2_nix_rx_queue_setup, .rx_queue_release = otx2_nix_rx_queue_release, + .tx_queue_start = otx2_nix_tx_queue_start, + .tx_queue_stop = otx2_nix_tx_queue_stop, + .rx_queue_start = otx2_nix_rx_queue_start, + .rx_queue_stop = otx2_nix_rx_queue_stop, .stats_get = otx2_nix_dev_stats_get, .stats_reset = otx2_nix_dev_stats_reset, .get_reg = otx2_nix_dev_get_reg, diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index c0568dcd1..7b8c7e1e5 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -246,6 +246,8 @@ void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev); void otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev); void otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev); void otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev); +int otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx); +int otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx); uint64_t otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id); /* Link */ From patchwork Sun Jun 2 15:23:57 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54077 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id A02331B970; Sun, 2 Jun 2019 17:25:43 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 0FF531B945 for ; Sun, 2 Jun 2019 17:25:41 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FJtoJ021289; Sun, 2 Jun 2019 08:25:41 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=U23Ew+HiV3Hdkb+3wPeEsfC1ErY0GFP8NQWJ0T/YcEY=; b=yZzIvbZemveyU3c00P46alHCkQ/AOxvdnIKnl4jRMuYdMmsK8e41FqK+1GarrLFVXqYB F8SvsVPl+h0OKNU7RHLuvH8Pwoekz0WvjJCX2l6DGJWiLEO93/mtoyxXgZZYZAPvDpLL r0oLjXFXHcOBPvQIqeuOAfUv6nGfpHA4yRu5VmnyOUaWfTbQfQ+L9BqMV98jL+d1s/mW 
tIbgl8xK+Hor5Mly0df84yVhOucvy61UsadTU6m/z4oR++4onAA/FHScgzpoTztms5e4 bhYJz6xI0MY9I7aQrxQzQVoMpBc3pLk2J82nWBXOXCZYFgkRMC9rdJRzQIFuyCz+QSqN cA== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0a-0016f401.pphosted.com with ESMTP id 2supqkvqg5-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:25:41 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:25:40 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:25:40 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id CA7CA3F703F; Sun, 2 Jun 2019 08:25:38 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Krzysztof Kanas Date: Sun, 2 Jun 2019 20:53:57 +0530 Message-ID: <20190602152434.23996-22-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 21/58] net/octeontx2: introduce traffic manager X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Nithin Dabilpuram Introduce traffic manager infra and default hierarchy creation. Upon ethdev configure, a default hierarchy is created with one-to-one mapped tm nodes. This topology will be overridden when user explicitly creates and commits a new hierarchy using rte_tm interface. 
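For reference, overriding the default tree needs nothing OCTEON TX2 specific from the application; it goes through the generic (at this point still experimental) rte_tm API. Below is a minimal sketch, assuming the PMD exposes the rte_tm ops; the node id, shaper profile id, rate and level choices are illustrative, not values mandated by this driver, and error/capability handling is elided:

#include <string.h>
#include <rte_tm.h>

/* Build a one-root, nb_txq-leaf tree and commit it over the default one */
static int
override_default_tree(uint16_t port_id, uint16_t nb_txq)
{
	struct rte_tm_shaper_params sp;
	struct rte_tm_node_params np;
	struct rte_tm_error err;
	uint32_t root_id = 1000;	/* app-chosen id, outside 0..nb_txq-1 */
	uint16_t q;
	int rc;

	/* 1 Gbps committed rate, 8 KB burst -- example numbers only */
	memset(&sp, 0, sizeof(sp));
	sp.committed.rate = 1000000000 / 8;
	sp.committed.size = 8192;
	rc = rte_tm_shaper_profile_add(port_id, 1, &sp, &err);
	if (rc)
		return rc;

	/* Non-leaf root; RTE_TM_NODE_ID_NULL as parent marks the tree root */
	memset(&np, 0, sizeof(np));
	np.shaper_profile_id = 1;
	rc = rte_tm_node_add(port_id, root_id, RTE_TM_NODE_ID_NULL,
			     0 /* priority */, 1 /* weight */,
			     RTE_TM_NODE_LEVEL_ID_ANY, &np, &err);
	if (rc)
		return rc;

	/* Leaf node ids 0..nb_txq-1 map one-to-one to Tx queues */
	np.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
	for (q = 0; q < nb_txq; q++) {
		rc = rte_tm_node_add(port_id, q, root_id, 0, 1,
				     RTE_TM_NODE_LEVEL_ID_ANY, &np, &err);
		if (rc)
			return rc;
	}

	/* Committing replaces the PMD's default hierarchy */
	return rte_tm_hierarchy_commit(port_id, 1 /* clear_on_fail */, &err);
}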
Signed-off-by: Nithin Dabilpuram Signed-off-by: Krzysztof Kanas --- drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_ethdev.c | 16 ++ drivers/net/octeontx2/otx2_ethdev.h | 14 ++ drivers/net/octeontx2/otx2_tm.c | 252 ++++++++++++++++++++++++++++ drivers/net/octeontx2/otx2_tm.h | 67 ++++++++ 6 files changed, 351 insertions(+) create mode 100644 drivers/net/octeontx2/otx2_tm.c create mode 100644 drivers/net/octeontx2/otx2_tm.h diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index 67352ec81..cf2ba0e0e 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -30,6 +30,7 @@ LIBABIVER := 1 # all source are stored in SRCS-y # SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ + otx2_tm.c \ otx2_rss.c \ otx2_mac.c \ otx2_link.c \ diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index b7e56e2ca..14e8e78f8 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -3,6 +3,7 @@ # sources = files( + 'otx2_tm.c', 'otx2_rss.c', 'otx2_mac.c', 'otx2_link.c', diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 04a953441..2808058a8 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1033,6 +1033,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) rc = nix_store_queue_cfg_and_then_release(eth_dev); if (rc) goto fail; + otx2_nix_tm_fini(eth_dev); nix_lf_free(dev); } @@ -1066,6 +1067,13 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) goto free_nix_lf; } + /* Init the default TM scheduler hierarchy */ + rc = otx2_nix_tm_init_default(eth_dev); + if (rc) { + otx2_err("Failed to init traffic manager rc=%d", rc); + goto free_nix_lf; + } + /* Register queue IRQs */ rc = oxt2_nix_register_queue_irqs(eth_dev); if (rc) { @@ -1368,6 +1376,9 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev) /* Also sync same MAC address to CGX table */ otx2_cgx_mac_addr_set(eth_dev, ð_dev->data->mac_addrs[0]); + /* Initialize the tm data structures */ + otx2_nix_tm_conf_init(eth_dev); + dev->tx_offload_capa = nix_get_tx_offload_capa(dev); dev->rx_offload_capa = nix_get_rx_offload_capa(dev); @@ -1423,6 +1434,11 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) } eth_dev->data->nb_rx_queues = 0; + /* Free tm resources */ + rc = otx2_nix_tm_fini(eth_dev); + if (rc) + otx2_err("Failed to cleanup tm, rc=%d", rc); + /* Unregister queue irqs */ oxt2_nix_unregister_queue_irqs(eth_dev); diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 7b8c7e1e5..b2b7d4186 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -19,6 +19,7 @@ #include "otx2_irq.h" #include "otx2_mempool.h" #include "otx2_rx.h" +#include "otx2_tm.h" #include "otx2_tx.h" #define OTX2_ETH_DEV_PMD_VERSION "1.0" @@ -181,6 +182,19 @@ struct otx2_eth_dev { uint64_t rx_offload_capa; uint64_t tx_offload_capa; struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT]; + uint16_t txschq[NIX_TXSCH_LVL_CNT]; + uint16_t txschq_contig[NIX_TXSCH_LVL_CNT]; + uint16_t txschq_index[NIX_TXSCH_LVL_CNT]; + uint16_t txschq_contig_index[NIX_TXSCH_LVL_CNT]; + /* Dis-contiguous queues */ + uint16_t txschq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC]; + /* Contiguous queues */ + uint16_t txschq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC]; + uint16_t otx2_tm_root_lvl; + uint16_t tm_flags; + uint16_t tm_leaf_cnt; + struct otx2_nix_tm_node_list node_list; + 
struct otx2_nix_tm_shaper_profile_list shaper_profile_list; struct otx2_rss_info rss_info; uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS]; uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS]; diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c new file mode 100644 index 000000000..bc0474242 --- /dev/null +++ b/drivers/net/octeontx2/otx2_tm.c @@ -0,0 +1,252 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include + +#include "otx2_ethdev.h" +#include "otx2_tm.h" + +/* Use last LVL_CNT nodes as default nodes */ +#define NIX_DEFAULT_NODE_ID_START (RTE_TM_NODE_ID_NULL - NIX_TXSCH_LVL_CNT) + +enum otx2_tm_node_level { + OTX2_TM_LVL_ROOT = 0, + OTX2_TM_LVL_SCH1, + OTX2_TM_LVL_SCH2, + OTX2_TM_LVL_SCH3, + OTX2_TM_LVL_SCH4, + OTX2_TM_LVL_QUEUE, + OTX2_TM_LVL_MAX, +}; + +static bool +nix_tm_have_tl1_access(struct otx2_eth_dev *dev) +{ + bool is_lbk = otx2_dev_is_lbk(dev); + return otx2_dev_is_pf(dev) && !otx2_dev_is_A0(dev) && + !is_lbk && !dev->maxvf; +} + +static struct otx2_nix_tm_shaper_profile * +nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id) +{ + struct otx2_nix_tm_shaper_profile *tm_shaper_profile; + + TAILQ_FOREACH(tm_shaper_profile, &dev->shaper_profile_list, shaper) { + if (tm_shaper_profile->shaper_profile_id == shaper_id) + return tm_shaper_profile; + } + return NULL; +} + +static struct otx2_nix_tm_node * +nix_tm_node_search(struct otx2_eth_dev *dev, + uint32_t node_id, bool user) +{ + struct otx2_nix_tm_node *tm_node; + + TAILQ_FOREACH(tm_node, &dev->node_list, node) { + if (tm_node->id == node_id && + (user == !!(tm_node->flags & NIX_TM_NODE_USER))) + return tm_node; + } + return NULL; +} + +static int +nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id, + uint32_t parent_node_id, uint32_t priority, + uint32_t weight, uint16_t hw_lvl_id, + uint16_t level_id, bool user, + struct rte_tm_node_params *params) +{ + struct otx2_nix_tm_shaper_profile *shaper_profile; + struct otx2_nix_tm_node *tm_node, *parent_node; + uint32_t shaper_profile_id; + + shaper_profile_id = params->shaper_profile_id; + shaper_profile = nix_tm_shaper_profile_search(dev, shaper_profile_id); + + parent_node = nix_tm_node_search(dev, parent_node_id, user); + + tm_node = rte_zmalloc("otx2_nix_tm_node", + sizeof(struct otx2_nix_tm_node), 0); + if (!tm_node) + return -ENOMEM; + + tm_node->level_id = level_id; + tm_node->hw_lvl_id = hw_lvl_id; + + tm_node->id = node_id; + tm_node->priority = priority; + tm_node->weight = weight; + tm_node->rr_prio = 0xf; + tm_node->max_prio = UINT32_MAX; + tm_node->hw_id = UINT32_MAX; + tm_node->flags = 0; + if (user) + tm_node->flags = NIX_TM_NODE_USER; + rte_memcpy(&tm_node->params, params, sizeof(struct rte_tm_node_params)); + + if (shaper_profile) + shaper_profile->reference_count++; + tm_node->parent = parent_node; + tm_node->parent_hw_id = UINT32_MAX; + + TAILQ_INSERT_TAIL(&dev->node_list, tm_node, node); + + return 0; +} + +static int +nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev) +{ + struct otx2_nix_tm_shaper_profile *shaper_profile; + + while ((shaper_profile = TAILQ_FIRST(&dev->shaper_profile_list))) { + if (shaper_profile->reference_count) + otx2_tm_dbg("Shaper profile %u has non zero references", + shaper_profile->shaper_profile_id); + TAILQ_REMOVE(&dev->shaper_profile_list, shaper_profile, shaper); + rte_free(shaper_profile); + } + + return 0; +} + +static int +nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = 
otx2_eth_pmd_priv(eth_dev); + uint32_t def = eth_dev->data->nb_tx_queues; + struct rte_tm_node_params params; + uint32_t leaf_parent, i; + int rc = 0; + + /* Default params */ + memset(¶ms, 0, sizeof(params)); + params.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE; + + if (nix_tm_have_tl1_access(dev)) { + dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL1; + rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0, + DEFAULT_RR_WEIGHT, + NIX_TXSCH_LVL_TL1, + OTX2_TM_LVL_ROOT, false, ¶ms); + if (rc) + goto exit; + rc = nix_tm_node_add_to_list(dev, def + 1, def, 0, + DEFAULT_RR_WEIGHT, + NIX_TXSCH_LVL_TL2, + OTX2_TM_LVL_SCH1, false, ¶ms); + if (rc) + goto exit; + + rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0, + DEFAULT_RR_WEIGHT, + NIX_TXSCH_LVL_TL3, + OTX2_TM_LVL_SCH2, false, ¶ms); + if (rc) + goto exit; + + rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0, + DEFAULT_RR_WEIGHT, + NIX_TXSCH_LVL_TL4, + OTX2_TM_LVL_SCH3, false, ¶ms); + if (rc) + goto exit; + + rc = nix_tm_node_add_to_list(dev, def + 4, def + 3, 0, + DEFAULT_RR_WEIGHT, + NIX_TXSCH_LVL_SMQ, + OTX2_TM_LVL_SCH4, false, ¶ms); + if (rc) + goto exit; + + leaf_parent = def + 4; + } else { + dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL2; + rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0, + DEFAULT_RR_WEIGHT, + NIX_TXSCH_LVL_TL2, + OTX2_TM_LVL_ROOT, false, ¶ms); + if (rc) + goto exit; + + rc = nix_tm_node_add_to_list(dev, def + 1, def, 0, + DEFAULT_RR_WEIGHT, + NIX_TXSCH_LVL_TL3, + OTX2_TM_LVL_SCH1, false, ¶ms); + if (rc) + goto exit; + + rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0, + DEFAULT_RR_WEIGHT, + NIX_TXSCH_LVL_TL4, + OTX2_TM_LVL_SCH2, false, ¶ms); + if (rc) + goto exit; + + rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0, + DEFAULT_RR_WEIGHT, + NIX_TXSCH_LVL_SMQ, + OTX2_TM_LVL_SCH3, false, ¶ms); + if (rc) + goto exit; + + leaf_parent = def + 3; + } + + /* Add leaf nodes */ + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) { + rc = nix_tm_node_add_to_list(dev, i, leaf_parent, 0, + DEFAULT_RR_WEIGHT, + NIX_TXSCH_LVL_CNT, + OTX2_TM_LVL_QUEUE, false, ¶ms); + if (rc) + break; + } + +exit: + return rc; +} + +void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + + TAILQ_INIT(&dev->node_list); + TAILQ_INIT(&dev->shaper_profile_list); +} + +int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + uint16_t sq_cnt = eth_dev->data->nb_tx_queues; + int rc; + + /* Clear shaper profiles */ + nix_tm_clear_shaper_profiles(dev); + dev->tm_flags = NIX_TM_DEFAULT_TREE; + + rc = nix_tm_prepare_default_tree(eth_dev); + if (rc != 0) + return rc; + + dev->tm_leaf_cnt = sq_cnt; + + return 0; +} + +int +otx2_nix_tm_fini(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + + /* Clear shaper profiles */ + nix_tm_clear_shaper_profiles(dev); + + dev->tm_flags = 0; + return 0; +} diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h new file mode 100644 index 000000000..94023fa99 --- /dev/null +++ b/drivers/net/octeontx2/otx2_tm.h @@ -0,0 +1,67 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#ifndef __OTX2_TM_H__ +#define __OTX2_TM_H__ + +#include + +#include + +#define NIX_TM_DEFAULT_TREE BIT_ULL(0) + +struct otx2_eth_dev; + +void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev); +int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev); +int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev); + +struct otx2_nix_tm_node { + TAILQ_ENTRY(otx2_nix_tm_node) node; + uint32_t id; + uint32_t hw_id; + uint32_t priority; + uint32_t weight; + uint16_t level_id; + uint16_t hw_lvl_id; + uint32_t rr_prio; + uint32_t rr_num; + uint32_t max_prio; + uint32_t parent_hw_id; + uint32_t flags; +#define NIX_TM_NODE_HWRES BIT_ULL(0) +#define NIX_TM_NODE_ENABLED BIT_ULL(1) +#define NIX_TM_NODE_USER BIT_ULL(2) + struct otx2_nix_tm_node *parent; + struct rte_tm_node_params params; +}; + +struct otx2_nix_tm_shaper_profile { + TAILQ_ENTRY(otx2_nix_tm_shaper_profile) shaper; + uint32_t shaper_profile_id; + uint32_t reference_count; + struct rte_tm_shaper_params profile; +}; + +struct shaper_params { + uint64_t burst_exponent; + uint64_t burst_mantissa; + uint64_t div_exp; + uint64_t exponent; + uint64_t mantissa; + uint64_t burst; + uint64_t rate; +}; + +TAILQ_HEAD(otx2_nix_tm_node_list, otx2_nix_tm_node); +TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile); + +#define MAX_SCHED_WEIGHT ((uint8_t)~0) +#define NIX_TM_RR_QUANTUM_MAX ((1 << 24) - 1) + +/* DEFAULT_RR_WEIGHT * NIX_TM_RR_QUANTUM_MAX / MAX_SCHED_WEIGHT */ +/* = NIX_MAX_HW_MTU */ +#define DEFAULT_RR_WEIGHT 71 + +#endif /* __OTX2_TM_H__ */ From patchwork Sun Jun 2 15:23:58 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54078 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id A5BA31B9F0; Sun, 2 Jun 2019 17:25:46 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 2F4D41B9EB for ; Sun, 2 Jun 2019 17:25:45 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FK4jv020253; Sun, 2 Jun 2019 08:25:44 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=ZjXhgFVEu4kYcft6J/VFZjSZcNANJ/iTLYSxZ3MhKVE=; b=E3FevbtaIQnBzrYCvHaFtQzJCSNcZOIb8v7Y6+3Ck1gpdjLptF7gGwLQdkyC6WsSiMT0 tNSRNoO+DyIY4JIEmOmJoOG8Kr7Mjhfcf05NPQSPQKfWp2g5sqVh3jNNFnyIy5TFffuR o75wQc1g/KDO3VVmowAC5DDJPKzYKXZbEgg6QiJbjRZKPb8EsMCrAmILOVOAFxrbJWcP e+0pyIg1UsJXpzsyrZedTyq+7pUig2aWjz2nvxjwqQeZzKp10FgLA7Nw5Vh8bnNp5oXe DgYhvxdNbZ23LCELVTuZtwpvNuBJcCnIjuJbwQzSzXZHNLjR4aWUOqapn05sOu2+hFib hg== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk494s-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:25:44 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:25:43 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:25:43 -0700 Received: from 
jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id A719B3F703F; Sun, 2 Jun 2019 08:25:41 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Krzysztof Kanas Date: Sun, 2 Jun 2019 20:53:58 +0530 Message-ID: <20190602152434.23996-23-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 22/58] net/octeontx2: alloc and free TM HW resources X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Krzysztof Kanas Allocate and free shaper/scheduler hardware resources for the nodes of the hierarchy levels maintained in software. Signed-off-by: Krzysztof Kanas Signed-off-by: Nithin Dabilpuram --- drivers/net/octeontx2/otx2_tm.c | 350 ++++++++++++++++++++++++++++ 1 file changed, 350 insertions(+) diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c index bc0474242..91f31df05 100644 --- a/drivers/net/octeontx2/otx2_tm.c +++ b/drivers/net/octeontx2/otx2_tm.c @@ -54,6 +54,69 @@ nix_tm_node_search(struct otx2_eth_dev *dev, return NULL; } +static uint32_t +check_rr(struct otx2_eth_dev *dev, uint32_t priority, uint32_t parent_id) +{ + struct otx2_nix_tm_node *tm_node; + uint32_t rr_num = 0; + + TAILQ_FOREACH(tm_node, &dev->node_list, node) { + if (!tm_node->parent) + continue; + + if (!(tm_node->parent->id == parent_id)) + continue; + + if (tm_node->priority == priority) + rr_num++; + } + return rr_num; } + +static int +nix_tm_update_parent_info(struct otx2_eth_dev *dev) +{ + struct otx2_nix_tm_node *tm_node_child; + struct otx2_nix_tm_node *tm_node; + struct otx2_nix_tm_node *parent; + uint32_t rr_num = 0; + uint32_t priority; + + TAILQ_FOREACH(tm_node, &dev->node_list, node) { + if (!tm_node->parent) + continue; + /* Count group of children of same priority i.e are RR */ + parent = tm_node->parent; + priority = tm_node->priority; + rr_num = check_rr(dev, priority, parent->id); + + /* Assuming that multiple RR groups are + * not configured based on capability.
+ */ + if (rr_num > 1) { + parent->rr_prio = priority; + parent->rr_num = rr_num; + } + + /* Find out static priority children that are not in RR */ + TAILQ_FOREACH(tm_node_child, &dev->node_list, node) { + if (!tm_node_child->parent) + continue; + if (parent->id != tm_node_child->parent->id) + continue; + if (parent->max_prio == UINT32_MAX && + tm_node_child->priority != parent->rr_prio) + parent->max_prio = 0; + + if (parent->max_prio < tm_node_child->priority && + parent->rr_prio != tm_node_child->priority) + parent->max_prio = tm_node_child->priority; + } + } + + return 0; +} + static int nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id, uint32_t parent_node_id, uint32_t priority, @@ -115,6 +178,274 @@ nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev) return 0; } +static int +nix_tm_free_resources(struct otx2_eth_dev *dev, uint32_t flags_mask, + uint32_t flags, bool hw_only) +{ + struct otx2_nix_tm_shaper_profile *shaper_profile; + struct otx2_nix_tm_node *tm_node, *next_node; + struct otx2_mbox *mbox = dev->mbox; + struct nix_txsch_free_req *req; + uint32_t shaper_profile_id; + bool skip_node = false; + int rc = 0; + + next_node = TAILQ_FIRST(&dev->node_list); + while (next_node) { + tm_node = next_node; + next_node = TAILQ_NEXT(tm_node, node); + + /* Check for only requested nodes */ + if ((tm_node->flags & flags_mask) != flags) + continue; + + if (nix_tm_have_tl1_access(dev) && + tm_node->hw_lvl_id == NIX_TXSCH_LVL_TL1) + skip_node = true; + + otx2_tm_dbg("Free hwres for node %u, hwlvl %u, hw_id %u (%p)", + tm_node->id, tm_node->hw_lvl_id, + tm_node->hw_id, tm_node); + /* Free specific HW resource if requested */ + if (!skip_node && flags_mask && + tm_node->flags & NIX_TM_NODE_HWRES) { + req = otx2_mbox_alloc_msg_nix_txsch_free(mbox); + req->flags = 0; + req->schq_lvl = tm_node->hw_lvl_id; + req->schq = tm_node->hw_id; + rc = otx2_mbox_process(mbox); + if (rc) + break; + } else { + skip_node = false; + } + tm_node->flags &= ~NIX_TM_NODE_HWRES; + + /* Leave software elements if needed */ + if (hw_only) + continue; + + shaper_profile_id = tm_node->params.shaper_profile_id; + shaper_profile = + nix_tm_shaper_profile_search(dev, shaper_profile_id); + if (shaper_profile) + shaper_profile->reference_count--; + + TAILQ_REMOVE(&dev->node_list, tm_node, node); + rte_free(tm_node); + } + + if (!flags_mask) { + /* Free all hw resources */ + req = otx2_mbox_alloc_msg_nix_txsch_free(mbox); + req->flags = TXSCHQ_FREE_ALL; + + return otx2_mbox_process(mbox); + } + + return rc; +} + +static uint8_t +nix_tm_copy_rsp_to_dev(struct otx2_eth_dev *dev, + struct nix_txsch_alloc_rsp *rsp) +{ + uint16_t schq; + uint8_t lvl; + + for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) { + for (schq = 0; schq < MAX_TXSCHQ_PER_FUNC; schq++) { + dev->txschq_list[lvl][schq] = rsp->schq_list[lvl][schq]; + dev->txschq_contig_list[lvl][schq] = + rsp->schq_contig_list[lvl][schq]; + } + + dev->txschq[lvl] = rsp->schq[lvl]; + dev->txschq_contig[lvl] = rsp->schq_contig[lvl]; + } + return 0; +} + +static int +nix_tm_assign_id_to_node(struct otx2_eth_dev *dev, + struct otx2_nix_tm_node *child, + struct otx2_nix_tm_node *parent) +{ + uint32_t hw_id, schq_con_index, prio_offset; + uint32_t l_id, schq_index; + + otx2_tm_dbg("Assign hw id for child node %u, lvl %u, hw_lvl %u (%p)", + child->id, child->level_id, child->hw_lvl_id, child); + + child->flags |= NIX_TM_NODE_HWRES; + + /* Process root nodes */ + if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 && + child->hw_lvl_id == dev->otx2_tm_root_lvl && !parent) { 
+ int idx = 0; + uint32_t tschq_con_index; + + l_id = child->hw_lvl_id; + tschq_con_index = dev->txschq_contig_index[l_id]; + hw_id = dev->txschq_contig_list[l_id][tschq_con_index]; + child->hw_id = hw_id; + dev->txschq_contig_index[l_id]++; + /* Update TL1 hw_id for its parent for config purpose */ + idx = dev->txschq_index[NIX_TXSCH_LVL_TL1]++; + hw_id = dev->txschq_list[NIX_TXSCH_LVL_TL1][idx]; + child->parent_hw_id = hw_id; + return 0; + } + if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL1 && + child->hw_lvl_id == dev->otx2_tm_root_lvl && !parent) { + uint32_t tschq_con_index; + + l_id = child->hw_lvl_id; + tschq_con_index = dev->txschq_index[l_id]; + hw_id = dev->txschq_list[l_id][tschq_con_index]; + child->hw_id = hw_id; + dev->txschq_index[l_id]++; + return 0; + } + + /* Process children with parents */ + l_id = child->hw_lvl_id; + schq_index = dev->txschq_index[l_id]; + schq_con_index = dev->txschq_contig_index[l_id]; + + if (child->priority == parent->rr_prio) { + hw_id = dev->txschq_list[l_id][schq_index]; + child->hw_id = hw_id; + child->parent_hw_id = parent->hw_id; + dev->txschq_index[l_id]++; + } else { + prio_offset = schq_con_index + child->priority; + hw_id = dev->txschq_contig_list[l_id][prio_offset]; + child->hw_id = hw_id; + } + return 0; +} + +static int +nix_tm_assign_hw_id(struct otx2_eth_dev *dev) +{ + struct otx2_nix_tm_node *parent, *child; + uint32_t child_hw_lvl, con_index_inc, i; + + for (i = NIX_TXSCH_LVL_TL1; i > 0; i--) { + TAILQ_FOREACH(parent, &dev->node_list, node) { + child_hw_lvl = parent->hw_lvl_id - 1; + if (parent->hw_lvl_id != i) + continue; + TAILQ_FOREACH(child, &dev->node_list, node) { + if (!child->parent) + continue; + if (child->parent->id != parent->id) + continue; + nix_tm_assign_id_to_node(dev, child, parent); + } + + con_index_inc = parent->max_prio + 1; + dev->txschq_contig_index[child_hw_lvl] += con_index_inc; + + /* + * Explicitly assign id to parent node if it + * doesn't have a parent + */ + if (parent->hw_lvl_id == dev->otx2_tm_root_lvl) + nix_tm_assign_id_to_node(dev, parent, NULL); + } + } + return 0; +} + +static uint8_t +nix_tm_count_req_schq(struct otx2_eth_dev *dev, + struct nix_txsch_alloc_req *req, uint8_t lvl) +{ + struct otx2_nix_tm_node *tm_node; + uint8_t contig_count; + + TAILQ_FOREACH(tm_node, &dev->node_list, node) { + if (lvl == tm_node->hw_lvl_id) { + req->schq[lvl - 1] += tm_node->rr_num; + if (tm_node->max_prio != UINT32_MAX) { + contig_count = tm_node->max_prio + 1; + req->schq_contig[lvl - 1] += contig_count; + } + } + if (lvl == dev->otx2_tm_root_lvl && + dev->otx2_tm_root_lvl && lvl == NIX_TXSCH_LVL_TL2 && + tm_node->hw_lvl_id == dev->otx2_tm_root_lvl) { + req->schq_contig[dev->otx2_tm_root_lvl]++; + } + } + + req->schq[NIX_TXSCH_LVL_TL1] = 1; + req->schq_contig[NIX_TXSCH_LVL_TL1] = 0; + + return 0; +} + +static int +nix_tm_prepare_txschq_req(struct otx2_eth_dev *dev, + struct nix_txsch_alloc_req *req) +{ + uint8_t i; + + for (i = NIX_TXSCH_LVL_TL1; i > 0; i--) + nix_tm_count_req_schq(dev, req, i); + + for (i = 0; i < NIX_TXSCH_LVL_CNT; i++) { + dev->txschq_index[i] = 0; + dev->txschq_contig_index[i] = 0; + } + return 0; +} + +static int +nix_tm_send_txsch_alloc_msg(struct otx2_eth_dev *dev) +{ + struct otx2_mbox *mbox = dev->mbox; + struct nix_txsch_alloc_req *req; + struct nix_txsch_alloc_rsp *rsp; + int rc; + + req = otx2_mbox_alloc_msg_nix_txsch_alloc(mbox); + + rc = nix_tm_prepare_txschq_req(dev, req); + if (rc) + return rc; + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + 
nix_tm_copy_rsp_to_dev(dev, rsp); + + nix_tm_assign_hw_id(dev); + return 0; +} + +static int +nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int rc; + + RTE_SET_USED(xmit_enable); + + nix_tm_update_parent_info(dev); + + rc = nix_tm_send_txsch_alloc_msg(dev); + if (rc) { + otx2_err("TM failed to alloc tm resources=%d", rc); + return rc; + } + + return 0; +} + static int nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev) { @@ -226,6 +557,13 @@ int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev) uint16_t sq_cnt = eth_dev->data->nb_tx_queues; int rc; + /* Free up all resources already held */ + rc = nix_tm_free_resources(dev, 0, 0, false); + if (rc) { + otx2_err("Failed to freeup existing resources,rc=%d", rc); + return rc; + } + /* Clear shaper profiles */ nix_tm_clear_shaper_profiles(dev); dev->tm_flags = NIX_TM_DEFAULT_TREE; @@ -234,6 +572,9 @@ int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev) if (rc != 0) return rc; + rc = nix_tm_alloc_resources(eth_dev, false); + if (rc != 0) + return rc; dev->tm_leaf_cnt = sq_cnt; return 0; @@ -243,6 +584,15 @@ int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev) { struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int rc; + + /* Xmit is assumed to be disabled */ + /* Free up resources already held */ + rc = nix_tm_free_resources(dev, 0, 0, false); + if (rc) { + otx2_err("Failed to freeup existing resources,rc=%d", rc); + return rc; + } /* Clear shaper profiles */ nix_tm_clear_shaper_profiles(dev); From patchwork Sun Jun 2 15:23:59 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54079 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 7BFC21B9F5; Sun, 2 Jun 2019 17:25:49 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id D10751B9F4 for ; Sun, 2 Jun 2019 17:25:47 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FJmwN021277; Sun, 2 Jun 2019 08:25:47 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=7wEtV/VJ5MBWwSqspU14lupfJwTjPkQFX/L7glJlQog=; b=lHv4sh8Jp/Nsu1vTN8yGOmLtsdFZMz02nCm0QKXg4M4kCFBNY1mSLOyJAefrYVbZvt4h 7oP1FtYF3mxEOe2P36mf6yb/N21sUuJhI8B+cLOZ1lV/fwUW1eUipr8k2v4jZ1KNyho7 lkqz9AkesdttLMLXRVGYa56b4XWx+PNLZN9sT/qFgbMY13aElDsx0gNEHggCJs+5oRJ1 8Qn6XiaCCm+ood0TKNIi1V9PdnzcA5ooTLUJHbar6QaS9E1O6GwM8IvDnqZDMF85T7EF niuTELSLHHAVPC9jufkZxnshJA6WvVxqXhHgFVo5N2AexonJWtiDwHIrHqNXGX7hhikK Kg== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0a-0016f401.pphosted.com with ESMTP id 2supqkvqgk-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:25:46 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:25:46 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 
08:25:46 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id A03CA3F703F; Sun, 2 Jun 2019 08:25:44 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Krzysztof Kanas Date: Sun, 2 Jun 2019 20:53:59 +0530 Message-ID: <20190602152434.23996-24-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 23/58] net/octeontx2: configure TM HW resources X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Nithin Dabilpuram This patch sets up and configures the hierarchy in the hardware nodes. Since all these registers are owned by the RVU AF, register configuration is also done through mailbox communication. Signed-off-by: Nithin Dabilpuram Signed-off-by: Krzysztof Kanas --- drivers/net/octeontx2/otx2_tm.c | 504 ++++++++++++++++++++++++++++++++ drivers/net/octeontx2/otx2_tm.h | 82 ++++++ 2 files changed, 586 insertions(+) diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c index 91f31df05..463f90acd 100644 --- a/drivers/net/octeontx2/otx2_tm.c +++ b/drivers/net/octeontx2/otx2_tm.c @@ -20,6 +20,41 @@ enum otx2_tm_node_level { OTX2_TM_LVL_MAX, }; +static inline +uint64_t shaper2regval(struct shaper_params *shaper) +{ + return (shaper->burst_exponent << 37) | (shaper->burst_mantissa << 29) | + (shaper->div_exp << 13) | (shaper->exponent << 9) | + (shaper->mantissa << 1); +} + +static int +nix_get_link(struct otx2_eth_dev *dev) +{ + int link = 13 /* SDP */; + uint16_t lmac_chan; + uint16_t map; + + lmac_chan = dev->tx_chan_base; + + /* CGX lmac link */ + if (lmac_chan >= 0x800) { + map = lmac_chan & 0x7FF; + link = 4 * ((map >> 8) & 0xF) + ((map >> 4) & 0xF); + } else if (lmac_chan < 0x700) { + /* LBK channel */ + link = 12; + } + + return link; +} + +static uint8_t +nix_get_relchan(struct otx2_eth_dev *dev) +{ + return dev->tx_chan_base & 0xff; +} + static bool nix_tm_have_tl1_access(struct otx2_eth_dev *dev) { @@ -28,6 +63,24 @@ nix_tm_have_tl1_access(struct otx2_eth_dev *dev) !is_lbk && !dev->maxvf; } +static int +find_prio_anchor(struct otx2_eth_dev *dev, uint32_t node_id) +{ + struct otx2_nix_tm_node *child_node; + + TAILQ_FOREACH(child_node, &dev->node_list, node) { + if (!child_node->parent) + continue; + if (!(child_node->parent->id == node_id)) + continue; + if (child_node->priority == child_node->parent->rr_prio) + continue; + return child_node->hw_id - child_node->priority; + } + return 0; +} + + static struct otx2_nix_tm_shaper_profile * nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id) { @@ -40,6 +93,451 @@ nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id) return NULL; } +static inline uint64_t +shaper_rate_to_nix(uint64_t cclk_hz, uint64_t cclk_ticks, + uint64_t value, uint64_t *exponent_p, + uint64_t *mantissa_p, uint64_t *div_exp_p) +{ + uint64_t div_exp, exponent, mantissa; + + /* Boundary checks */ + if (value < MIN_SHAPER_RATE(cclk_hz, cclk_ticks) || + value > MAX_SHAPER_RATE(cclk_hz, cclk_ticks)) + return 0; + + if (value <= SHAPER_RATE(cclk_hz, cclk_ticks, 0, 0, 0)) { + /* Calculate rate div_exp and
mantissa using + * the following formula: + * + * value = (cclk_hz * (256 + mantissa) + * / ((cclk_ticks << div_exp) * 256) + */ + div_exp = 0; + exponent = 0; + mantissa = MAX_RATE_MANTISSA; + + while (value < (cclk_hz / (cclk_ticks << div_exp))) + div_exp += 1; + + while (value < + ((cclk_hz * (256 + mantissa)) / + ((cclk_ticks << div_exp) * 256))) + mantissa -= 1; + } else { + /* Calculate rate exponent and mantissa using + * the following formula: + * + * value = (cclk_hz * ((256 + mantissa) << exponent) + * / (cclk_ticks * 256) + * + */ + div_exp = 0; + exponent = MAX_RATE_EXPONENT; + mantissa = MAX_RATE_MANTISSA; + + while (value < (cclk_hz * (1 << exponent)) / cclk_ticks) + exponent -= 1; + + while (value < (cclk_hz * ((256 + mantissa) << exponent)) / + (cclk_ticks * 256)) + mantissa -= 1; + } + + if (div_exp > MAX_RATE_DIV_EXP || + exponent > MAX_RATE_EXPONENT || mantissa > MAX_RATE_MANTISSA) + return 0; + + if (div_exp_p) + *div_exp_p = div_exp; + if (exponent_p) + *exponent_p = exponent; + if (mantissa_p) + *mantissa_p = mantissa; + + /* Calculate real rate value */ + return SHAPER_RATE(cclk_hz, cclk_ticks, exponent, mantissa, div_exp); +} + +static inline uint64_t +lx_shaper_rate_to_nix(uint64_t cclk_hz, uint32_t hw_lvl, + uint64_t value, uint64_t *exponent, + uint64_t *mantissa, uint64_t *div_exp) +{ + if (hw_lvl == NIX_TXSCH_LVL_TL1) + return shaper_rate_to_nix(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS, + value, exponent, mantissa, div_exp); + else + return shaper_rate_to_nix(cclk_hz, LX_TIME_WHEEL_CCLK_TICKS, + value, exponent, mantissa, div_exp); +} + +static inline uint64_t +shaper_burst_to_nix(uint64_t value, uint64_t *exponent_p, + uint64_t *mantissa_p) +{ + uint64_t exponent, mantissa; + + if (value < MIN_SHAPER_BURST || value > MAX_SHAPER_BURST) + return 0; + + /* Calculate burst exponent and mantissa using + * the following formula: + * + * value = (((256 + mantissa) << (exponent + 1) + / 256) + * + */ + exponent = MAX_BURST_EXPONENT; + mantissa = MAX_BURST_MANTISSA; + + while (value < (1ull << (exponent + 1))) + exponent -= 1; + + while (value < ((256 + mantissa) << (exponent + 1)) / 256) + mantissa -= 1; + + if (exponent > MAX_BURST_EXPONENT || mantissa > MAX_BURST_MANTISSA) + return 0; + + if (exponent_p) + *exponent_p = exponent; + if (mantissa_p) + *mantissa_p = mantissa; + + return SHAPER_BURST(exponent, mantissa); +} + +static int +configure_shaper_cir_pir_reg(struct otx2_eth_dev *dev, + struct otx2_nix_tm_node *tm_node, + struct shaper_params *cir, + struct shaper_params *pir) +{ + uint32_t shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE; + struct otx2_nix_tm_shaper_profile *shaper_profile = NULL; + struct rte_tm_shaper_params *param; + + shaper_profile_id = tm_node->params.shaper_profile_id; + + shaper_profile = nix_tm_shaper_profile_search(dev, shaper_profile_id); + if (shaper_profile) { + param = &shaper_profile->profile; + /* Calculate CIR exponent and mantissa */ + if (param->committed.rate) + cir->rate = lx_shaper_rate_to_nix(CCLK_HZ, + tm_node->hw_lvl_id, + param->committed.rate, + &cir->exponent, + &cir->mantissa, + &cir->div_exp); + + /* Calculate PIR exponent and mantissa */ + if (param->peak.rate) + pir->rate = lx_shaper_rate_to_nix(CCLK_HZ, + tm_node->hw_lvl_id, + param->peak.rate, + &pir->exponent, + &pir->mantissa, + &pir->div_exp); + + /* Calculate CIR burst exponent and mantissa */ + if (param->committed.size) + cir->burst = shaper_burst_to_nix(param->committed.size, + &cir->burst_exponent, + &cir->burst_mantissa); + + /* Calculate PIR burst exponent 
and mantissa */ + if (param->peak.size) + pir->burst = shaper_burst_to_nix(param->peak.size, + &pir->burst_exponent, + &pir->burst_mantissa); + } + + return 0; +} + +static int +send_tm_reqval(struct otx2_mbox *mbox, struct nix_txschq_config *req) +{ + int rc; + + if (req->num_regs > MAX_REGS_PER_MBOX_MSG) + return -ERANGE; + + rc = otx2_mbox_process(mbox); + if (rc) + return rc; + + req->num_regs = 0; + return 0; +} + +static int +populate_tm_registers(struct otx2_eth_dev *dev, + struct otx2_nix_tm_node *tm_node) +{ + uint64_t strict_schedul_prio, rr_prio; + struct otx2_mbox *mbox = dev->mbox; + volatile uint64_t *reg, *regval; + uint64_t parent = 0, child = 0; + struct shaper_params cir, pir; + struct nix_txschq_config *req; + uint64_t rr_quantum; + uint32_t hw_lvl; + uint32_t schq; + int rc; + + memset(&cir, 0, sizeof(cir)); + memset(&pir, 0, sizeof(pir)); + + /* Skip leaf nodes */ + if (tm_node->hw_lvl_id == NIX_TXSCH_LVL_CNT) + return 0; + + /* Root node will not have a parent node */ + if (tm_node->hw_lvl_id == dev->otx2_tm_root_lvl) + parent = tm_node->parent_hw_id; + else + parent = tm_node->parent->hw_id; + + /* Do we need this trigger to configure TL1 */ + if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 && + tm_node->hw_lvl_id == dev->otx2_tm_root_lvl) { + schq = parent; + /* + * Default config for TL1. + * For VF this is always ignored. + */ + + req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox); + req->lvl = NIX_TXSCH_LVL_TL1; + + /* Set DWRR quantum */ + req->reg[0] = NIX_AF_TL1X_SCHEDULE(schq); + req->regval[0] = TXSCH_TL1_DFLT_RR_QTM; + req->num_regs++; + + req->reg[1] = NIX_AF_TL1X_TOPOLOGY(schq); + req->regval[1] = (TXSCH_TL1_DFLT_RR_PRIO << 1); + req->num_regs++; + + req->reg[2] = NIX_AF_TL1X_CIR(schq); + req->regval[2] = 0; + req->num_regs++; + + rc = send_tm_reqval(mbox, req); + if (rc) + goto error; + } + + if (tm_node->hw_lvl_id != NIX_TXSCH_LVL_SMQ) + child = find_prio_anchor(dev, tm_node->id); + + rr_prio = tm_node->rr_prio; + hw_lvl = tm_node->hw_lvl_id; + strict_schedul_prio = tm_node->priority; + schq = tm_node->hw_id; + rr_quantum = (tm_node->weight * NIX_TM_RR_QUANTUM_MAX) / + MAX_SCHED_WEIGHT; + + configure_shaper_cir_pir_reg(dev, tm_node, &cir, &pir); + + otx2_tm_dbg("Configure node %p, lvl %u hw_lvl %u, id %u, hw_id %u," + "parent_hw_id %" PRIx64 ", pir %" PRIx64 ", cir %" PRIx64, + tm_node, tm_node->level_id, hw_lvl, + tm_node->id, schq, parent, pir.rate, cir.rate); + + rc = -EFAULT; + + switch (hw_lvl) { + case NIX_TXSCH_LVL_SMQ: + req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox); + req->lvl = hw_lvl; + reg = req->reg; + regval = req->regval; + req->num_regs = 0; + + /* Set xoff which will be cleared later */ + *reg++ = NIX_AF_SMQX_CFG(schq); + *regval++ = BIT_ULL(50) | + (NIX_MAX_HW_FRS << 8) | NIX_MIN_HW_FRS; + req->num_regs++; + *reg++ = NIX_AF_MDQX_PARENT(schq); + *regval++ = parent << 16; + req->num_regs++; + *reg++ = NIX_AF_MDQX_SCHEDULE(schq); + *regval++ = (strict_schedul_prio << 24) | rr_quantum; + req->num_regs++; + if (pir.rate && pir.burst) { + *reg++ = NIX_AF_MDQX_PIR(schq); + *regval++ = shaper2regval(&pir) | 1; + req->num_regs++; + } + + if (cir.rate && cir.burst) { + *reg++ = NIX_AF_MDQX_CIR(schq); + *regval++ = shaper2regval(&cir) | 1; + req->num_regs++; + } + + rc = send_tm_reqval(mbox, req); + if (rc) + goto error; + break; + case NIX_TXSCH_LVL_TL4: + req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox); + req->lvl = hw_lvl; + req->num_regs = 0; + reg = req->reg; + regval = req->regval; + + *reg++ = NIX_AF_TL4X_PARENT(schq); + *regval++ = parent << 
16; + req->num_regs++; + *reg++ = NIX_AF_TL4X_TOPOLOGY(schq); + *regval++ = (child << 32) | (rr_prio << 1); + req->num_regs++; + *reg++ = NIX_AF_TL4X_SCHEDULE(schq); + *regval++ = (strict_schedul_prio << 24) | rr_quantum; + req->num_regs++; + if (pir.rate && pir.burst) { + *reg++ = NIX_AF_TL4X_PIR(schq); + *regval++ = shaper2regval(&pir) | 1; + req->num_regs++; + } + if (cir.rate && cir.burst) { + *reg++ = NIX_AF_TL4X_CIR(schq); + *regval++ = shaper2regval(&cir) | 1; + req->num_regs++; + } + + rc = send_tm_reqval(mbox, req); + if (rc) + goto error; + break; + case NIX_TXSCH_LVL_TL3: + req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox); + req->lvl = hw_lvl; + req->num_regs = 0; + reg = req->reg; + regval = req->regval; + + *reg++ = NIX_AF_TL3X_PARENT(schq); + *regval++ = parent << 16; + req->num_regs++; + *reg++ = NIX_AF_TL3X_TOPOLOGY(schq); + *regval++ = (child << 32) | (rr_prio << 1); + req->num_regs++; + *reg++ = NIX_AF_TL3X_SCHEDULE(schq); + *regval++ = (strict_schedul_prio << 24) | rr_quantum; + req->num_regs++; + if (pir.rate && pir.burst) { + *reg++ = NIX_AF_TL3X_PIR(schq); + *regval++ = shaper2regval(&pir) | 1; + req->num_regs++; + } + if (cir.rate && cir.burst) { + *reg++ = NIX_AF_TL3X_CIR(schq); + *regval++ = shaper2regval(&cir) | 1; + req->num_regs++; + } + + rc = send_tm_reqval(mbox, req); + if (rc) + goto error; + break; + case NIX_TXSCH_LVL_TL2: + req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox); + req->lvl = hw_lvl; + req->num_regs = 0; + reg = req->reg; + regval = req->regval; + + *reg++ = NIX_AF_TL2X_PARENT(schq); + *regval++ = parent << 16; + req->num_regs++; + *reg++ = NIX_AF_TL2X_TOPOLOGY(schq); + *regval++ = (child << 32) | (rr_prio << 1); + req->num_regs++; + *reg++ = NIX_AF_TL2X_SCHEDULE(schq); + if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2) + *regval++ = (1 << 24) | rr_quantum; + else + *regval++ = (strict_schedul_prio << 24) | rr_quantum; + req->num_regs++; + *reg++ = NIX_AF_TL3_TL2X_LINKX_CFG(schq, nix_get_link(dev)); + *regval++ = BIT_ULL(12) | nix_get_relchan(dev); + req->num_regs++; + if (pir.rate && pir.burst) { + *reg++ = NIX_AF_TL2X_PIR(schq); + *regval++ = shaper2regval(&pir) | 1; + req->num_regs++; + } + if (cir.rate && cir.burst) { + *reg++ = NIX_AF_TL2X_CIR(schq); + *regval++ = shaper2regval(&cir) | 1; + req->num_regs++; + } + + rc = send_tm_reqval(mbox, req); + if (rc) + goto error; + break; + case NIX_TXSCH_LVL_TL1: + req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox); + req->lvl = hw_lvl; + req->num_regs = 0; + reg = req->reg; + regval = req->regval; + + *reg++ = NIX_AF_TL1X_SCHEDULE(schq); + *regval++ = rr_quantum; + req->num_regs++; + *reg++ = NIX_AF_TL1X_TOPOLOGY(schq); + *regval++ = (child << 32) | (rr_prio << 1 /*RR_PRIO*/); + req->num_regs++; + if (cir.rate && cir.burst) { + *reg++ = NIX_AF_TL1X_CIR(schq); + *regval++ = shaper2regval(&cir) | 1; + req->num_regs++; + } + + rc = send_tm_reqval(mbox, req); + if (rc) + goto error; + break; + } + + return 0; +error: + otx2_err("Txschq cfg request failed for node %p, rc=%d", tm_node, rc); + return rc; +} + + +static int +nix_tm_txsch_reg_config(struct otx2_eth_dev *dev) +{ + struct otx2_nix_tm_node *tm_node; + uint32_t lvl; + int rc = 0; + + if (nix_get_link(dev) == 13) + return -EPERM; + + for (lvl = 0; lvl < (uint32_t)dev->otx2_tm_root_lvl + 1; lvl++) { + TAILQ_FOREACH(tm_node, &dev->node_list, node) { + if (tm_node->hw_lvl_id == lvl) { + rc = populate_tm_registers(dev, tm_node); + if (rc) + goto exit; + } + } + } +exit: + return rc; +} + static struct otx2_nix_tm_node * nix_tm_node_search(struct otx2_eth_dev 
*dev, uint32_t node_id, bool user) @@ -443,6 +941,12 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable) return rc; } + rc = nix_tm_txsch_reg_config(dev); + if (rc) { + otx2_err("TM failed to configure sched registers=%d", rc); + return rc; + } + return 0; } diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h index 94023fa99..af1bb1862 100644 --- a/drivers/net/octeontx2/otx2_tm.h +++ b/drivers/net/octeontx2/otx2_tm.h @@ -64,4 +64,86 @@ TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile); /* = NIX_MAX_HW_MTU */ #define DEFAULT_RR_WEIGHT 71 +/** NIX rate limits */ +#define MAX_RATE_DIV_EXP 12 +#define MAX_RATE_EXPONENT 0xf +#define MAX_RATE_MANTISSA 0xff + +/** NIX rate limiter time-wheel resolution */ +#define L1_TIME_WHEEL_CCLK_TICKS 240 +#define LX_TIME_WHEEL_CCLK_TICKS 860 + +#define CCLK_HZ 1000000000 + +/* NIX rate calculation + * CCLK = coprocessor-clock frequency in MHz + * CCLK_TICKS = rate limiter time-wheel resolution + * + * PIR_ADD = ((256 + NIX_*_PIR[RATE_MANTISSA]) + * << NIX_*_PIR[RATE_EXPONENT]) / 256 + * PIR = (CCLK / (CCLK_TICKS << NIX_*_PIR[RATE_DIVIDER_EXPONENT])) + * * PIR_ADD + * + * CIR_ADD = ((256 + NIX_*_CIR[RATE_MANTISSA]) + * << NIX_*_CIR[RATE_EXPONENT]) / 256 + * CIR = (CCLK / (CCLK_TICKS << NIX_*_CIR[RATE_DIVIDER_EXPONENT])) + * * CIR_ADD + */ +#define SHAPER_RATE(cclk_hz, cclk_ticks, \ + exponent, mantissa, div_exp) \ + (((uint64_t)(cclk_hz) * ((256 + (mantissa)) << (exponent))) \ + / (((cclk_ticks) << (div_exp)) * 256)) + +#define L1_SHAPER_RATE(cclk_hz, exponent, mantissa, div_exp) \ + SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS, \ + exponent, mantissa, div_exp) + +#define LX_SHAPER_RATE(cclk_hz, exponent, mantissa, div_exp) \ + SHAPER_RATE(cclk_hz, LX_TIME_WHEEL_CCLK_TICKS, \ + exponent, mantissa, div_exp) + +/* Shaper rate limits */ +#define MIN_SHAPER_RATE(cclk_hz, cclk_ticks) \ + SHAPER_RATE(cclk_hz, cclk_ticks, 0, 0, MAX_RATE_DIV_EXP) + +#define MAX_SHAPER_RATE(cclk_hz, cclk_ticks) \ + SHAPER_RATE(cclk_hz, cclk_ticks, MAX_RATE_EXPONENT, \ + MAX_RATE_MANTISSA, 0) + +#define MIN_L1_SHAPER_RATE(cclk_hz) \ + MIN_SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS) + +#define MAX_L1_SHAPER_RATE(cclk_hz) \ + MAX_SHAPER_RATE(cclk_hz, L1_TIME_WHEEL_CCLK_TICKS) + +/** TM Shaper - low level operations */ + +/** NIX burst limits */ +#define MAX_BURST_EXPONENT 0xf +#define MAX_BURST_MANTISSA 0xff + +/* NIX burst calculation + * PIR_BURST = ((256 + NIX_*_PIR[BURST_MANTISSA]) + * << (NIX_*_PIR[BURST_EXPONENT] + 1)) + * / 256 + * + * CIR_BURST = ((256 + NIX_*_CIR[BURST_MANTISSA]) + * << (NIX_*_CIR[BURST_EXPONENT] + 1)) + * / 256 + */ +#define SHAPER_BURST(exponent, mantissa) \ + (((256 + (mantissa)) << ((exponent) + 1)) / 256) + +/** Shaper burst limits */ +#define MIN_SHAPER_BURST \ + SHAPER_BURST(0, 0) + +#define MAX_SHAPER_BURST \ + SHAPER_BURST(MAX_BURST_EXPONENT,\ + MAX_BURST_MANTISSA) + +/* Default TL1 priority and Quantum from AF */ +#define TXSCH_TL1_DFLT_RR_QTM ((1 << 24) - 1) +#define TXSCH_TL1_DFLT_RR_PRIO 1 + #endif /* __OTX2_TM_H__ */ From patchwork Sun Jun 2 15:24:00 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54080 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 529FE1B9F4; Sun, 2 Jun 2019 17:25:54 +0200 (CEST) 
Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id B94691B9ED for ; Sun, 2 Jun 2019 17:25:51 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FK6kP020260; Sun, 2 Jun 2019 08:25:51 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=x0ZJb+2v7w2swDEdwmN50/urVUEqmKmI3dng0DYMGNA=; b=MB03QRcbV+tXZOTH9+7EmrZHoXA73yWBJ1WllxaqKkLj3yIJvW3pmmUFnvuh6vyw8fWQ 6gLticZurBwguqkF2pyHrGdmQ1w01BNTOxM210l5OPNymu0sT626HjYCc/eyoePkT8rQ mRld7i/5kd3+47IgHmQS7GFmS6U+KDy5bbyeo+F7txagoBliuNyz7tUWUG+BD0h7M3vz B+xyORmL1PowmPbmt37f11ym5WERppUGffidM2W5vf0uN01k5kW8JxL9lp2yiAdw+CrI G8apNDcNsQDdTOkV234UvnxTa7RP9Opn1UquB6XiXVLkoha4/Nlt6PLVdvNpu1D7xIkQ QQ== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk495a-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:25:51 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:25:49 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:25:49 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id A2CBF3F7040; Sun, 2 Jun 2019 08:25:47 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Krzysztof Kanas , "Vamsi Attunuru" Date: Sun, 2 Jun 2019 20:54:00 +0530 Message-ID: <20190602152434.23996-25-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 24/58] net/octeontx2: enable Tx through traffic manager X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Krzysztof Kanas This patch enables pkt transmit through traffic manager hierarchy by clearing software XOFF on the nodes and linking tx queues to corresponding leaf nodes. It also adds support to start and stop tx queue using traffic manager. 
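From the application's point of view these operations are driven purely through the standard ethdev queue start/stop API. A usage sketch follows; the queue id, descriptor count and the deferred-start choice are illustrative, and error paths are abbreviated:

#include <rte_ethdev.h>

static int
txq_start_stop_demo(uint16_t port_id, uint16_t qid)
{
	struct rte_eth_dev_info info;
	struct rte_eth_txconf txconf;
	int rc;

	rte_eth_dev_info_get(port_id, &info);
	txconf = info.default_txconf;
	/* Keep the queue stopped across dev_start; start it explicitly */
	txconf.tx_deferred_start = 1;

	rc = rte_eth_tx_queue_setup(port_id, qid, 512 /* nb_desc */,
				    rte_eth_dev_socket_id(port_id), &txconf);
	if (rc)
		return rc;
	rc = rte_eth_dev_start(port_id);
	if (rc)
		return rc;

	/* PMD re-enables the SQB aura flow control for this queue */
	rc = rte_eth_dev_tx_queue_start(port_id, qid);
	if (rc)
		return rc;

	/* ... transmit with rte_eth_tx_burst() on (port_id, qid) ... */

	/* PMD clears the fc cache and disables the SQB aura flow control */
	return rte_eth_dev_tx_queue_stop(port_id, qid);
}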
Signed-off-by: Krzysztof Kanas Signed-off-by: Nithin Dabilpuram Signed-off-by: Vamsi Attunuru --- drivers/net/octeontx2/otx2_ethdev.c | 72 ++++++- drivers/net/octeontx2/otx2_tm.c | 295 +++++++++++++++++++++++++++- drivers/net/octeontx2/otx2_tm.h | 4 + 3 files changed, 366 insertions(+), 5 deletions(-) diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 2808058a8..a269e1be6 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -120,6 +120,32 @@ nix_lf_free(struct otx2_eth_dev *dev) return otx2_mbox_process(mbox); } +int +otx2_cgx_rxtx_start(struct otx2_eth_dev *dev) +{ + struct otx2_mbox *mbox = dev->mbox; + + if (otx2_dev_is_vf(dev)) + return 0; + + otx2_mbox_alloc_msg_cgx_start_rxtx(mbox); + + return otx2_mbox_process(mbox); +} + +int +otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev) +{ + struct otx2_mbox *mbox = dev->mbox; + + if (otx2_dev_is_vf(dev)) + return 0; + + otx2_mbox_alloc_msg_cgx_stop_rxtx(mbox); + + return otx2_mbox_process(mbox); +} + static inline void nix_rx_queue_reset(struct otx2_eth_rxq *rxq) { @@ -461,16 +487,27 @@ nix_sq_init(struct otx2_eth_txq *txq) struct otx2_eth_dev *dev = txq->dev; struct otx2_mbox *mbox = dev->mbox; struct nix_aq_enq_req *sq; + uint32_t rr_quantum; + uint16_t smq; + int rc; if (txq->sqb_pool->pool_id == 0) return -EINVAL; + rc = otx2_nix_tm_get_leaf_data(dev, txq->sq, &rr_quantum, &smq); + if (rc) { + otx2_err("Failed to get sq->smq(leaf node), rc=%d", rc); + return rc; + } + sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); sq->qidx = txq->sq; sq->ctype = NIX_AQ_CTYPE_SQ; sq->op = NIX_AQ_INSTOP_INIT; sq->sq.max_sqe_size = nix_sq_max_sqe_sz(txq); + sq->sq.smq = smq; + sq->sq.smq_rr_quantum = rr_quantum; sq->sq.default_chan = dev->tx_chan_base; sq->sq.sqe_stype = NIX_STYPE_STF; sq->sq.ena = 1; @@ -697,6 +734,9 @@ otx2_nix_tx_queue_release(void *_txq) otx2_nix_dbg("Releasing txq %u", txq->sq); + /* Flush and disable tm */ + otx2_nix_tm_sw_xoff(txq, false); + /* Free sqb's and disable sq */ nix_sq_uninit(txq); @@ -1122,24 +1162,52 @@ int otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx) { struct rte_eth_dev_data *data = eth_dev->data; + struct otx2_eth_txq *txq; + int rc = -EINVAL; + + txq = eth_dev->data->tx_queues[qidx]; if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED) return 0; + rc = otx2_nix_sq_sqb_aura_fc(txq, true); + if (rc) { + otx2_err("Failed to enable sqb aura fc, txq=%u, rc=%d", + qidx, rc); + goto done; + } + data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED; - return 0; + +done: + return rc; } int otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx) { struct rte_eth_dev_data *data = eth_dev->data; + struct otx2_eth_txq *txq; + int rc; + + txq = eth_dev->data->tx_queues[qidx]; if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED) return 0; + txq->fc_cache_pkts = 0; + + rc = otx2_nix_sq_sqb_aura_fc(txq, false); + if (rc) { + otx2_err("Failed to disable sqb aura fc, txq=%u, rc=%d", + qidx, rc); + goto done; + } + data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED; - return 0; + +done: + return rc; } static int diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c index 463f90acd..4439389b8 100644 --- a/drivers/net/octeontx2/otx2_tm.c +++ b/drivers/net/octeontx2/otx2_tm.c @@ -676,6 +676,223 @@ nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev) return 0; } +static int +nix_smq_xoff(struct otx2_eth_dev *dev, uint16_t smq, bool enable) +{ + struct otx2_mbox *mbox = dev->mbox; 
+ struct nix_txschq_config *req; + + req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox); + req->lvl = NIX_TXSCH_LVL_SMQ; + req->num_regs = 1; + + req->reg[0] = NIX_AF_SMQX_CFG(smq); + /* Unmodified fields */ + req->regval[0] = (NIX_MAX_HW_FRS << 8) | NIX_MIN_HW_FRS; + + if (enable) + req->regval[0] |= BIT_ULL(50) | BIT_ULL(49); + else + req->regval[0] |= 0; + + return otx2_mbox_process(mbox); +} + +int +otx2_nix_sq_sqb_aura_fc(void *__txq, bool enable) +{ + struct otx2_eth_txq *txq = __txq; + struct npa_aq_enq_req *req; + struct npa_aq_enq_rsp *rsp; + struct otx2_npa_lf *lf; + struct otx2_mbox *mbox; + uint64_t aura_handle; + int rc; + + lf = otx2_npa_lf_obj_get(); + if (!lf) + return -EFAULT; + mbox = lf->mbox; + /* Set/clear sqb aura fc_ena */ + aura_handle = txq->sqb_pool->pool_id; + req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); + + req->aura_id = npa_lf_aura_handle_to_aura(aura_handle); + req->ctype = NPA_AQ_CTYPE_AURA; + req->op = NPA_AQ_INSTOP_WRITE; + /* Below is not needed for aura writes but AF driver needs it */ + /* AF will translate to associated poolctx */ + req->aura.pool_addr = req->aura_id; + + req->aura.fc_ena = enable; + req->aura_mask.fc_ena = 1; + + rc = otx2_mbox_process(mbox); + if (rc) + return rc; + + /* Read back npa aura ctx */ + req = otx2_mbox_alloc_msg_npa_aq_enq(mbox); + + req->aura_id = npa_lf_aura_handle_to_aura(aura_handle); + req->ctype = NPA_AQ_CTYPE_AURA; + req->op = NPA_AQ_INSTOP_READ; + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + /* Init when enabled as there might be no triggers */ + if (enable) + *(volatile uint64_t *)txq->fc_mem = rsp->aura.count; + else + *(volatile uint64_t *)txq->fc_mem = txq->nb_sqb_bufs; + /* Sync write barrier */ + rte_wmb(); + + return 0; +} + +static void +nix_txq_flush_sq_spin(struct otx2_eth_txq *txq) +{ + uint16_t sqb_cnt, head_off, tail_off; + struct otx2_eth_dev *dev = txq->dev; + uint16_t sq = txq->sq; + uint64_t reg, val; + int64_t *regaddr; + + while (true) { + reg = ((uint64_t)sq << 32); + regaddr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS); + val = otx2_atomic64_add_nosync(reg, regaddr); + + regaddr = (int64_t *)(dev->base + NIX_LF_SQ_OP_STATUS); + val = otx2_atomic64_add_nosync(reg, regaddr); + sqb_cnt = val & 0xFFFF; + head_off = (val >> 20) & 0x3F; + tail_off = (val >> 28) & 0x3F; + + /* SQ reached quiescent state */ + if (sqb_cnt <= 1 && head_off == tail_off && + (*txq->fc_mem == txq->nb_sqb_bufs)) { + break; + } + + rte_pause(); + } +} + +int +otx2_nix_tm_sw_xoff(void *__txq, bool dev_started) +{ + struct otx2_eth_txq *txq = __txq; + struct otx2_eth_dev *dev = txq->dev; + struct otx2_mbox *mbox = dev->mbox; + struct nix_aq_enq_req *req; + struct nix_aq_enq_rsp *rsp; + uint16_t smq; + int rc; + + /* Get smq from sq */ + req = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + req->qidx = txq->sq; + req->ctype = NIX_AQ_CTYPE_SQ; + req->op = NIX_AQ_INSTOP_READ; + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) { + otx2_err("Failed to get smq, rc=%d", rc); + return -EIO; + } + + /* Check if sq is enabled */ + if (!rsp->sq.ena) + return 0; + + smq = rsp->sq.smq; + + /* Enable CGX RXTX to drain pkts */ + if (!dev_started) { + rc = otx2_cgx_rxtx_start(dev); + if (rc) + return rc; + } + + rc = otx2_nix_sq_sqb_aura_fc(txq, false); + if (rc < 0) { + otx2_err("Failed to disable sqb aura fc, rc=%d", rc); + goto cleanup; + } + + /* Disable smq xoff for case it was enabled earlier */ + rc = nix_smq_xoff(dev, smq, false); + if (rc) { + otx2_err("Failed to enable smq for sq %u, rc=%d", txq->sq, rc); 
+ goto cleanup; + } + + /* Wait for sq entries to be flushed */ + nix_txq_flush_sq_spin(txq); + + /* Flush and enable smq xoff */ + rc = nix_smq_xoff(dev, smq, true); + if (rc) { + otx2_err("Failed to disable smq for sq %u, rc=%d", txq->sq, rc); + return rc; + } + +cleanup: + /* Restore cgx state */ + if (!dev_started) + rc |= otx2_cgx_rxtx_stop(dev); + + return rc; +} + +static int +nix_tm_sw_xon(struct otx2_eth_txq *txq, + uint16_t smq, uint32_t rr_quantum) +{ + struct otx2_eth_dev *dev = txq->dev; + struct otx2_mbox *mbox = dev->mbox; + struct nix_aq_enq_req *req; + int rc; + + otx2_tm_dbg("Enabling sq(%u)->smq(%u), rr_quantum %u", + txq->sq, txq->sq, rr_quantum); + /* Set smq from sq */ + req = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + req->qidx = txq->sq; + req->ctype = NIX_AQ_CTYPE_SQ; + req->op = NIX_AQ_INSTOP_WRITE; + req->sq.smq = smq; + req->sq.smq_rr_quantum = rr_quantum; + req->sq_mask.smq = ~req->sq_mask.smq; + req->sq_mask.smq_rr_quantum = ~req->sq_mask.smq_rr_quantum; + + rc = otx2_mbox_process(mbox); + if (rc) { + otx2_err("Failed to set smq, rc=%d", rc); + return -EIO; + } + + /* Enable sqb_aura fc */ + rc = otx2_nix_sq_sqb_aura_fc(txq, true); + if (rc < 0) { + otx2_err("Failed to enable sqb aura fc, rc=%d", rc); + return rc; + } + + /* Disable smq xoff */ + rc = nix_smq_xoff(dev, smq, false); + if (rc) { + otx2_err("Failed to enable smq for sq %u", txq->sq); + return rc; + } + + return 0; +} + static int nix_tm_free_resources(struct otx2_eth_dev *dev, uint32_t flags_mask, uint32_t flags, bool hw_only) @@ -929,10 +1146,11 @@ static int nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable) { struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_nix_tm_node *tm_node; + uint16_t sq, smq, rr_quantum; + struct otx2_eth_txq *txq; int rc; - RTE_SET_USED(xmit_enable); - nix_tm_update_parent_info(dev); rc = nix_tm_send_txsch_alloc_msg(dev); @@ -947,7 +1165,43 @@ nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable) return rc; } - return 0; + /* Enable xmit as all the topology is ready */ + TAILQ_FOREACH(tm_node, &dev->node_list, node) { + if (tm_node->flags & NIX_TM_NODE_ENABLED) + continue; + + /* Enable xmit on sq */ + if (tm_node->level_id != OTX2_TM_LVL_QUEUE) { + tm_node->flags |= NIX_TM_NODE_ENABLED; + continue; + } + + /* Don't enable SMQ or mark as enable */ + if (!xmit_enable) + continue; + + sq = tm_node->id; + if (sq > eth_dev->data->nb_tx_queues) { + rc = -EFAULT; + break; + } + + txq = eth_dev->data->tx_queues[sq]; + + smq = tm_node->parent->hw_id; + rr_quantum = (tm_node->weight * + NIX_TM_RR_QUANTUM_MAX) / MAX_SCHED_WEIGHT; + + rc = nix_tm_sw_xon(txq, smq, rr_quantum); + if (rc) + break; + tm_node->flags |= NIX_TM_NODE_ENABLED; + } + + if (rc) + otx2_err("TM failed to enable xmit on sq %u, rc=%d", sq, rc); + + return rc; } static int @@ -1104,3 +1358,38 @@ otx2_nix_tm_fini(struct rte_eth_dev *eth_dev) dev->tm_flags = 0; return 0; } + +int +otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq, + uint32_t *rr_quantum, uint16_t *smq) +{ + struct otx2_nix_tm_node *tm_node; + int rc; + + /* 0..sq_cnt-1 are leaf nodes */ + if (sq >= dev->tm_leaf_cnt) + return -EINVAL; + + /* Search for internal node first */ + tm_node = nix_tm_node_search(dev, sq, false); + if (!tm_node) + tm_node = nix_tm_node_search(dev, sq, true); + + /* Check if we found a valid leaf node */ + if (!tm_node || tm_node->level_id != OTX2_TM_LVL_QUEUE || + !tm_node->parent || tm_node->parent->hw_id == UINT32_MAX) { + return -EIO; + } + + /* Get SMQ Id 
of leaf node's parent */ + *smq = tm_node->parent->hw_id; + *rr_quantum = (tm_node->weight * NIX_TM_RR_QUANTUM_MAX) + / MAX_SCHED_WEIGHT; + + rc = nix_smq_xoff(dev, *smq, false); + if (rc) + return rc; + tm_node->flags |= NIX_TM_NODE_ENABLED; + + return 0; +} diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h index af1bb1862..2a009eece 100644 --- a/drivers/net/octeontx2/otx2_tm.h +++ b/drivers/net/octeontx2/otx2_tm.h @@ -16,6 +16,10 @@ struct otx2_eth_dev; void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev); int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev); int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev); +int otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq, + uint32_t *rr_quantum, uint16_t *smq); +int otx2_nix_tm_sw_xoff(void *_txq, bool dev_started); +int otx2_nix_sq_sqb_aura_fc(void *_txq, bool enable); struct otx2_nix_tm_node { TAILQ_ENTRY(otx2_nix_tm_node) node; From patchwork Sun Jun 2 15:24:01 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54099 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 96BBD1BA59; Sun, 2 Jun 2019 17:25:58 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id C49CE1BA56 for ; Sun, 2 Jun 2019 17:25:54 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FLOCZ021032; Sun, 2 Jun 2019 08:25:54 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=vscKQO4sF2vl6d471hKbv4L3S9mt3Ssh403ApMLfhuY=; b=tfuLbDIwjyMEcbd4wuX5vi8+Xzuz7Ip52QbDNEmBYabfbBvnlIEAagdnX6J77QVpN7Wj q7QbYPKP1/KQP+5locfiZnUopnriaJ5flCGGcU8T0t/xDik9j5WH3OTdJgXoq9Q78iQr IquEdkXKbMsPL4GWP91YN2vMQpqPfQHzprwULm3O9VG8xeUWSzBY7KjT6tB6bQ2Dduw+ Sj0O7TqQFhfsAkYA/eJSzFxSal5xZWecH7E2pA1D++CKZ40FdPIg8MB+Y/4d6rnVgGVt i500ME3x9nWH9b1oCbzL8mqzadDHiwUA3R3YjlTNRnyruFY/bI8dI6vABMWuxR+bFJB2 9Q== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk495f-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:25:54 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:25:52 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:25:52 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id C51EB3F703F; Sun, 2 Jun 2019 08:25:50 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: , Harman Kalra Date: Sun, 2 Jun 2019 20:54:01 +0530 Message-ID: <20190602152434.23996-26-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , 
definitions=2019-06-02_09:, , signatures=0
Subject: [dpdk-dev] [PATCH v1 25/58] net/octeontx2: add ptype support
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.15
Precedence: list
List-Id: DPDK patches and discussions
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

From: Jerin Jacob

The fields from the CQE need to be converted to ptype and Rx ol_flags
in the mbuf. This patch creates the lookup memory for those items, to
be used in the fast path.

Signed-off-by: Jerin Jacob
Signed-off-by: Kiran Kumar K
Signed-off-by: Harman Kalra
---
 doc/guides/nics/features/octeontx2.ini      |   1 +
 doc/guides/nics/features/octeontx2_vec.ini  |   1 +
 doc/guides/nics/features/octeontx2_vf.ini   |   1 +
 drivers/net/octeontx2/Makefile              |   1 +
 drivers/net/octeontx2/meson.build           |   1 +
 drivers/net/octeontx2/otx2_ethdev.c         |   2 +
 drivers/net/octeontx2/otx2_ethdev.h         |   6 +
 drivers/net/octeontx2/otx2_lookup.c         | 279 ++++++++++++++++++
 drivers/net/octeontx2/otx2_rx.h             |   7 +
 .../octeontx2/rte_pmd_octeontx2_version.map |   3 +
 10 files changed, 302 insertions(+)
 create mode 100644 drivers/net/octeontx2/otx2_lookup.c

diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 31816a183..221fc84d8 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -20,6 +20,7 @@ RSS hash = Y
 RSS key update = Y
 RSS reta update = Y
 Inner RSS = Y
+Packet type parsing = Y
 Basic stats = Y
 Stats per queue = Y
 Extended stats = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index d79428652..e11327c7a 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -20,6 +20,7 @@ RSS hash = Y
 RSS key update = Y
 RSS reta update = Y
 Inner RSS = Y
+Packet type parsing = Y
 Basic stats = Y
 Extended stats = Y
 Stats per queue = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index d4deb52af..b2115cea4 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -16,6 +16,7 @@ RSS hash = Y
 RSS key update = Y
 RSS reta update = Y
 Inner RSS = Y
+Packet type parsing = Y
 Basic stats = Y
 Extended stats = Y
 Stats per queue = Y
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index cf2ba0e0e..00f61c354 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -35,6 +35,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
	otx2_mac.c	\
	otx2_link.c	\
	otx2_stats.c	\
+	otx2_lookup.c	\
	otx2_ethdev.c	\
	otx2_ethdev_irq.c \
	otx2_ethdev_ops.c \
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 14e8e78f8..eb5206ea1 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -8,6 +8,7 @@ sources = files(
	'otx2_mac.c',
	'otx2_link.c',
	'otx2_stats.c',
+	'otx2_lookup.c',
	'otx2_ethdev.c',
	'otx2_ethdev_irq.c',
	'otx2_ethdev_ops.c',
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index a269e1be6..9fbade075 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -441,6 +441,7 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
 	rxq->pool = mp;
 	rxq->qlen = nix_qsize_to_val(qsize);
 	rxq->qsize = qsize;
+	rxq->lookup_mem = otx2_nix_fastpath_lookup_mem_get();
 
 	/* Alloc completion queue */
 	rc = nix_cq_rq_init(eth_dev, dev, rq, rxq, mp);
@@ -1267,6 +1268,7 @@
 static const struct eth_dev_ops otx2_eth_dev_ops = {
 	.tx_queue_stop = otx2_nix_tx_queue_stop,
 	.rx_queue_start = otx2_nix_rx_queue_start,
 	.rx_queue_stop = otx2_nix_rx_queue_stop,
+	.dev_supported_ptypes_get = otx2_nix_supported_ptypes_get,
 	.stats_get = otx2_nix_dev_stats_get,
 	.stats_reset = otx2_nix_dev_stats_reset,
 	.get_reg = otx2_nix_dev_get_reg,
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index b2b7d4186..83d6b2dc2 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -335,6 +335,12 @@ int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
 int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
			   struct rte_ether_addr *addr);
 
+/* Lookup configuration */
+void *otx2_nix_fastpath_lookup_mem_get(void);
+
+/* PTYPES */
+const uint32_t *otx2_nix_supported_ptypes_get(struct rte_eth_dev *dev);
+
 /* Mac address handling */
 int otx2_nix_mac_addr_set(struct rte_eth_dev *eth_dev,
			   struct rte_ether_addr *addr);
diff --git a/drivers/net/octeontx2/otx2_lookup.c b/drivers/net/octeontx2/otx2_lookup.c
new file mode 100644
index 000000000..025933efa
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_lookup.c
@@ -0,0 +1,279 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include <rte_common.h>
+#include <rte_memzone.h>
+
+#include "otx2_ethdev.h"
+
+/* NIX_RX_PARSE_S's ERRCODE + ERRLEV (12 bits) */
+#define ERRCODE_ERRLEN_WIDTH	12
+#define ERR_ARRAY_SZ		((BIT(ERRCODE_ERRLEN_WIDTH)) *\
+				sizeof(uint32_t))
+
+#define LOOKUP_ARRAY_SZ		(PTYPE_ARRAY_SZ + ERR_ARRAY_SZ)
+
+const uint32_t *
+otx2_nix_supported_ptypes_get(struct rte_eth_dev *eth_dev)
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+	static const uint32_t ptypes[] = {
+		RTE_PTYPE_L2_ETHER_QINQ, /* LB */
+		RTE_PTYPE_L2_ETHER_VLAN, /* LB */
+		RTE_PTYPE_L2_ETHER_TIMESYNC, /* LB */
+		RTE_PTYPE_L2_ETHER_ARP,	 /* LC */
+		RTE_PTYPE_L2_ETHER_NSH,	 /* LC */
+		RTE_PTYPE_L2_ETHER_FCOE, /* LC */
+		RTE_PTYPE_L2_ETHER_MPLS, /* LC */
+		RTE_PTYPE_L3_IPV4,	 /* LC */
+		RTE_PTYPE_L3_IPV4_EXT,	 /* LC */
+		RTE_PTYPE_L3_IPV6,	 /* LC */
+		RTE_PTYPE_L3_IPV6_EXT,	 /* LC */
+		RTE_PTYPE_L4_TCP,	 /* LD */
+		RTE_PTYPE_L4_UDP,	 /* LD */
+		RTE_PTYPE_L4_SCTP,	 /* LD */
+		RTE_PTYPE_L4_ICMP,	 /* LD */
+		RTE_PTYPE_L4_IGMP,	 /* LD */
+		RTE_PTYPE_TUNNEL_GRE,	 /* LD */
+		RTE_PTYPE_TUNNEL_ESP,	 /* LD */
+		RTE_PTYPE_INNER_L2_ETHER,/* LE */
+		RTE_PTYPE_INNER_L3_IPV4, /* LF */
+		RTE_PTYPE_INNER_L3_IPV6, /* LF */
+		RTE_PTYPE_INNER_L4_TCP,	 /* LG */
+		RTE_PTYPE_INNER_L4_UDP,	 /* LG */
+		RTE_PTYPE_INNER_L4_SCTP, /* LG */
+		RTE_PTYPE_INNER_L4_ICMP, /* LG */
+	};
+
+	if (dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)
+		return ptypes;
+	else
+		return NULL;
+}
+
+/*
+ * +-------------------+-------------------+
+ * |  | IL4 | IL3| IL2 | TU | L4 | L3 | L2 |
+ * +-------------------+-------------------+
+ *
+ * +-------------------+-------------------+
+ * |  | LG  | LF | LE  | LD | LC | LB |    |
+ * +-------------------+-------------------+
+ *
+ * ptype       [LD - LC - LB]  = TU  - L4 -  L3  - L2
+ * ptype_tunnel[LG - LF - LE]  = IL4 - IL3 - IL2 - TU
+ *
+ */
+static void
+nix_create_non_tunnel_ptype_array(uint16_t *ptype)
+{
+	uint8_t lb, lc, ld;
+	uint16_t idx, val;
+
+	for (idx = 0; idx < PTYPE_NON_TUNNEL_ARRAY_SZ; idx++) {
+		lb = idx & 0xF;
+		lc = (idx & 0xF0) >> 4;
+		ld = (idx & 0xF00) >> 8;
+		val = RTE_PTYPE_UNKNOWN;
+
+		switch (lb) {
+		case NPC_LT_LB_QINQ:
+			val |= RTE_PTYPE_L2_ETHER_QINQ;
+			break;
+		case NPC_LT_LB_CTAG:
+			val |= RTE_PTYPE_L2_ETHER_VLAN;
+			break;
+		}
+
+		switch (lc) {
+		case NPC_LT_LC_ARP:
+			val |=
RTE_PTYPE_L2_ETHER_ARP; + break; + case NPC_LT_LC_NSH: + val |= RTE_PTYPE_L2_ETHER_NSH; + break; + case NPC_LT_LC_FCOE: + val |= RTE_PTYPE_L2_ETHER_FCOE; + break; + case NPC_LT_LC_MPLS: + val |= RTE_PTYPE_L2_ETHER_MPLS; + break; + case NPC_LT_LC_IP: + val |= RTE_PTYPE_L3_IPV4; + break; + case NPC_LT_LC_IP_OPT: + val |= RTE_PTYPE_L3_IPV4_EXT; + break; + case NPC_LT_LC_IP6: + val |= RTE_PTYPE_L3_IPV6; + break; + case NPC_LT_LC_IP6_EXT: + val |= RTE_PTYPE_L3_IPV6_EXT; + break; + case NPC_LT_LC_PTP: + val |= RTE_PTYPE_L2_ETHER_TIMESYNC; + break; + } + + switch (ld) { + case NPC_LT_LD_TCP: + val |= RTE_PTYPE_L4_TCP; + break; + case NPC_LT_LD_UDP: + val |= RTE_PTYPE_L4_UDP; + break; + case NPC_LT_LD_SCTP: + val |= RTE_PTYPE_L4_SCTP; + break; + case NPC_LT_LD_ICMP: + val |= RTE_PTYPE_L4_ICMP; + break; + case NPC_LT_LD_IGMP: + val |= RTE_PTYPE_L4_IGMP; + break; + case NPC_LT_LD_GRE: + val |= RTE_PTYPE_TUNNEL_GRE; + break; + case NPC_LT_LD_ESP: + val |= RTE_PTYPE_TUNNEL_ESP; + break; + } + ptype[idx] = val; + } +} + +#define TU_SHIFT(x) ((x) >> PTYPE_WIDTH) +static void +nix_create_tunnel_ptype_array(uint16_t *ptype) +{ + uint8_t le, lf, lg; + uint16_t idx, val; + + /* Skip non tunnel ptype array memory */ + ptype = ptype + PTYPE_NON_TUNNEL_ARRAY_SZ; + + for (idx = 0; idx < PTYPE_TUNNEL_ARRAY_SZ; idx++) { + le = idx & 0xF; + lf = (idx & 0xF0) >> 4; + lg = (idx & 0xF00) >> 8; + val = RTE_PTYPE_UNKNOWN; + + switch (le) { + case NPC_LT_LE_TU_ETHER: + val |= TU_SHIFT(RTE_PTYPE_INNER_L2_ETHER); + break; + } + switch (lf) { + case NPC_LT_LF_TU_IP: + val |= TU_SHIFT(RTE_PTYPE_INNER_L3_IPV4); + break; + case NPC_LT_LF_TU_IP6: + val |= TU_SHIFT(RTE_PTYPE_INNER_L3_IPV6); + break; + } + switch (lg) { + case NPC_LT_LG_TU_TCP: + val |= TU_SHIFT(RTE_PTYPE_INNER_L4_TCP); + break; + case NPC_LT_LG_TU_UDP: + val |= TU_SHIFT(RTE_PTYPE_INNER_L4_UDP); + break; + case NPC_LT_LG_TU_SCTP: + val |= TU_SHIFT(RTE_PTYPE_INNER_L4_SCTP); + break; + case NPC_LT_LG_TU_ICMP: + val |= TU_SHIFT(RTE_PTYPE_INNER_L4_ICMP); + break; + } + + ptype[idx] = val; + } +} + +static void +nix_create_rx_ol_flags_array(void *mem) +{ + uint16_t idx, errcode, errlev; + uint32_t val, *ol_flags; + + /* Skip ptype array memory */ + ol_flags = (uint32_t *)((uint8_t *)mem + PTYPE_ARRAY_SZ); + + for (idx = 0; idx < BIT(ERRCODE_ERRLEN_WIDTH); idx++) { + errlev = idx & 0xf; + errcode = (idx & 0xff0) >> 4; + + val = PKT_RX_IP_CKSUM_UNKNOWN; + val |= PKT_RX_L4_CKSUM_UNKNOWN; + val |= PKT_RX_OUTER_L4_CKSUM_UNKNOWN; + + switch (errlev) { + case NPC_ERRLEV_RE: + /* Mark all errors as BAD checksum errors */ + if (errcode) { + val |= PKT_RX_IP_CKSUM_BAD; + val |= PKT_RX_L4_CKSUM_BAD; + } else { + val |= PKT_RX_IP_CKSUM_GOOD; + val |= PKT_RX_L4_CKSUM_GOOD; + } + break; + case NPC_ERRLEV_LC: + if (errcode == NPC_EC_OIP4_CSUM || + errcode == NPC_EC_IP_FRAG_OFFSET_1) { + val |= PKT_RX_IP_CKSUM_BAD; + val |= PKT_RX_EIP_CKSUM_BAD; + } else { + val |= PKT_RX_IP_CKSUM_GOOD; + } + break; + case NPC_ERRLEV_LF: + if (errcode == NPC_EC_IIP4_CSUM) + val |= PKT_RX_IP_CKSUM_BAD; + else + val |= PKT_RX_IP_CKSUM_GOOD; + break; + case NPC_ERRLEV_NIX: + if (errcode == NIX_RX_PERRCODE_OL4_CHK) { + val |= PKT_RX_OUTER_L4_CKSUM_BAD; + val |= PKT_RX_L4_CKSUM_BAD; + } else if (errcode == NIX_RX_PERRCODE_IL4_CHK) { + val |= PKT_RX_L4_CKSUM_BAD; + } else { + val |= PKT_RX_IP_CKSUM_GOOD; + val |= PKT_RX_L4_CKSUM_GOOD; + } + break; + } + + ol_flags[idx] = val; + } +} + +void * +otx2_nix_fastpath_lookup_mem_get(void) +{ + const char name[] = "otx2_nix_fastpath_lookup_mem"; + const struct 
rte_memzone *mz; + void *mem; + + mz = rte_memzone_lookup(name); + if (mz != NULL) + return mz->addr; + + /* Request for the first time */ + mz = rte_memzone_reserve_aligned(name, LOOKUP_ARRAY_SZ, + SOCKET_ID_ANY, 0, OTX2_ALIGN); + if (mz != NULL) { + mem = mz->addr; + /* Form the ptype array lookup memory */ + nix_create_non_tunnel_ptype_array(mem); + nix_create_tunnel_ptype_array(mem); + /* Form the rx ol_flags based on errcode */ + nix_create_rx_ol_flags_array(mem); + return mem; + } + return NULL; +} diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h index 1749c43ff..1283fdf37 100644 --- a/drivers/net/octeontx2/otx2_rx.h +++ b/drivers/net/octeontx2/otx2_rx.h @@ -5,6 +5,13 @@ #ifndef __OTX2_RX_H__ #define __OTX2_RX_H__ +#define PTYPE_WIDTH 12 +#define PTYPE_NON_TUNNEL_ARRAY_SZ BIT(PTYPE_WIDTH) +#define PTYPE_TUNNEL_ARRAY_SZ BIT(PTYPE_WIDTH) +#define PTYPE_ARRAY_SZ ((PTYPE_NON_TUNNEL_ARRAY_SZ +\ + PTYPE_TUNNEL_ARRAY_SZ) *\ + sizeof(uint16_t)) + #define NIX_RX_OFFLOAD_PTYPE_F BIT(1) #endif /* __OTX2_RX_H__ */ diff --git a/drivers/net/octeontx2/rte_pmd_octeontx2_version.map b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map index fc8c95e91..3cfd37715 100644 --- a/drivers/net/octeontx2/rte_pmd_octeontx2_version.map +++ b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map @@ -1,4 +1,7 @@ DPDK_19.05 { + global: + + otx2_nix_fastpath_lookup_mem_get; local: *; }; From patchwork Sun Jun 2 15:24:02 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54100 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 314941BA90; Sun, 2 Jun 2019 17:26:02 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id F30401BA56 for ; Sun, 2 Jun 2019 17:25:57 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FKKp3020361; Sun, 2 Jun 2019 08:25:57 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=WiiKubifMQcDY+t3Qzx7kzodLc95n1rC7FTfTjEa5xU=; b=l5JzTDbJs6YVtuGoOYBPqTHk8wRj7wpLbiWAa2A+nA6DRX5TOj1OmrIx7Q2+oYaj6Z6k ZwcgClVJ1XMHDq4WMa595P7FPc1ov0NUDIYcvTFWIzaCOuLuSkbi8rtXybKy18vVn++R dEVIrVCfgdC8FfwfzHGGrp6nem9xqy0JbdhlgS1Rljgw2DngFM+CeCrUBaIJedNnO24d r9t/m/lJ9QROUWdxNkCV5IckC3JXIzxeYTMOTvQACrYCIQrx+R4kvhNbu7xEOuYhCr1F 08ecJsuBvnBJeMtRNZ2/CoZdemdPSJOTcP2qYEMrzfrWJBrL3qj7qo+m1rUP0Iyhh0yE Yw== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk495q-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:25:57 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:25:55 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:25:55 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 22FF23F703F; Sun, 
2 Jun 2019 08:25:53 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Vamsi Attunuru Date: Sun, 2 Jun 2019 20:54:02 +0530 Message-ID: <20190602152434.23996-27-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 26/58] net/octeontx2: add link status set operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vamsi Attunuru Add support for setting the link up and down. Signed-off-by: Vamsi Attunuru Signed-off-by: Nithin Dabilpuram --- drivers/net/octeontx2/otx2_ethdev.c | 2 ++ drivers/net/octeontx2/otx2_ethdev.h | 2 ++ drivers/net/octeontx2/otx2_link.c | 49 +++++++++++++++++++++++++++++ 3 files changed, 53 insertions(+) diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 9fbade075..9ceeb6ffa 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1268,6 +1268,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .tx_queue_stop = otx2_nix_tx_queue_stop, .rx_queue_start = otx2_nix_rx_queue_start, .rx_queue_stop = otx2_nix_rx_queue_stop, + .dev_set_link_up = otx2_nix_dev_set_link_up, + .dev_set_link_down = otx2_nix_dev_set_link_down, .dev_supported_ptypes_get = otx2_nix_supported_ptypes_get, .stats_get = otx2_nix_dev_stats_get, .stats_reset = otx2_nix_dev_stats_reset, diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 83d6b2dc2..7bd3e83e4 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -269,6 +269,8 @@ void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set); int otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete); void otx2_eth_dev_link_status_update(struct otx2_dev *dev, struct cgx_link_user_info *link); +int otx2_nix_dev_set_link_up(struct rte_eth_dev *eth_dev); +int otx2_nix_dev_set_link_down(struct rte_eth_dev *eth_dev); /* IRQ */ int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev); diff --git a/drivers/net/octeontx2/otx2_link.c b/drivers/net/octeontx2/otx2_link.c index 228a0cd8e..8fcbdc9b7 100644 --- a/drivers/net/octeontx2/otx2_link.c +++ b/drivers/net/octeontx2/otx2_link.c @@ -106,3 +106,52 @@ otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete) return rte_eth_linkstatus_set(eth_dev, &link); } + +static int +nix_dev_set_link_state(struct rte_eth_dev *eth_dev, uint8_t enable) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct cgx_set_link_state_msg *req; + + req = otx2_mbox_alloc_msg_cgx_set_link_state(mbox); + req->enable = enable; + return otx2_mbox_process(mbox); +} + +int +otx2_nix_dev_set_link_up(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int rc, i; + + if (otx2_dev_is_vf(dev)) + return -ENOTSUP; + + rc = nix_dev_set_link_state(eth_dev, 1); + if (rc) + goto done; + + /* Start tx queues */ + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) + otx2_nix_tx_queue_start(eth_dev, i); + +done: + return rc; +} + +int +otx2_nix_dev_set_link_down(struct rte_eth_dev *eth_dev) +{ + struct 
otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int i; + + if (otx2_dev_is_vf(dev)) + return -ENOTSUP; + + /* Stop tx queues */ + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) + otx2_nix_tx_queue_stop(eth_dev, i); + + return nix_dev_set_link_state(eth_dev, 0); +} From patchwork Sun Jun 2 15:24:03 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54101 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id AE21C1B9FD; Sun, 2 Jun 2019 17:26:04 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 3E5511B9D6 for ; Sun, 2 Jun 2019 17:26:00 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FK4Yp020248; Sun, 2 Jun 2019 08:25:59 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=tnjcfrjcGVLOeKdOFoPq1NXqnQ09g0X+PsRjOPech+o=; b=Bc15VwZ8C8WqMgDwA8bPDafL+TPTW3ud2hw+8n7uzntqJOJaxMrgBSK3xB/oLjIsVupw 2xD5T/LBjumOBboG9RKq+MuQoe3oey/YKVrgcRtQGkIMILAOKkXr9I4L94XH3+blrkGV GyJ1cuiAC1mHxt49ZIt1KYNIgyvtWnlynd9mNdQGXXKD2JUxdQjqoVr41nYnz5G3eEB8 WIPgE8kMVnPPxpudeV1JKOWXl8E4YPyJrD/v14l7FpB7KJc7NsBjHedN0DHjbVFMRV+S vWOxSNBs3p7snwcWvMbWNCxCIAaCLLMc1AtOb3C89tnwm+9V3DD/CkbKOAqdruiO2xHG SQ== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk495t-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:25:59 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:25:58 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:25:58 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 22CB33F703F; Sun, 2 Jun 2019 08:25:56 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: Date: Sun, 2 Jun 2019 20:54:03 +0530 Message-ID: <20190602152434.23996-28-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 27/58] net/octeontx2: add queue info and pool supported operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Nithin Dabilpuram Add Rx and Tx queue info get and pool ops supported operations. 
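
For reference, these callbacks are consumed through the generic ethdev API. A minimal application-side sketch (illustrative only, not part of this patch; port_id and queue_id are placeholder values):

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Dump the Rx/Tx queue info the new rxq_info_get/txq_info_get
     * callbacks start reporting.
     */
    static void
    dump_queue_info(uint16_t port_id, uint16_t queue_id)
    {
            struct rte_eth_rxq_info rx_qinfo;
            struct rte_eth_txq_info tx_qinfo;

            if (rte_eth_rx_queue_info_get(port_id, queue_id, &rx_qinfo) == 0)
                    printf("rxq %u: nb_desc=%u scattered=%d pool=%s\n",
                           queue_id, rx_qinfo.nb_desc, rx_qinfo.scattered_rx,
                           rx_qinfo.mp ? rx_qinfo.mp->name : "none");

            if (rte_eth_tx_queue_info_get(port_id, queue_id, &tx_qinfo) == 0)
                    printf("txq %u: nb_desc=%u offloads=0x%" PRIx64 "\n",
                           queue_id, tx_qinfo.nb_desc, tx_qinfo.conf.offloads);
    }

Note that the pool_ops_supported callback returns 0 only for rte_mbuf_platform_mempool_ops(), so mempools intended for this PMD should be created with the platform mempool ops.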
Signed-off-by: Nithin Dabilpuram Signed-off-by: Kiran Kumar K --- drivers/net/octeontx2/otx2_ethdev.c | 3 ++ drivers/net/octeontx2/otx2_ethdev.h | 5 +++ drivers/net/octeontx2/otx2_ethdev_ops.c | 51 +++++++++++++++++++++++++ 3 files changed, 59 insertions(+) diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 9ceeb6ffa..e9af48c8d 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1291,6 +1291,9 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .xstats_reset = otx2_nix_xstats_reset, .xstats_get_by_id = otx2_nix_xstats_get_by_id, .xstats_get_names_by_id = otx2_nix_xstats_get_names_by_id, + .rxq_info_get = otx2_nix_rxq_info_get, + .txq_info_get = otx2_nix_txq_info_get, + .pool_ops_supported = otx2_nix_pool_ops_supported, }; static inline int diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 7bd3e83e4..594021285 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -254,6 +254,11 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev) /* Ops */ void otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info); +int otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool); +void otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id, + struct rte_eth_rxq_info *qinfo); +void otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id, + struct rte_eth_txq_info *qinfo); void otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en); void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev); diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c index 77cfa2cec..95a5eb6ed 100644 --- a/drivers/net/octeontx2/otx2_ethdev_ops.c +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c @@ -2,6 +2,8 @@ * Copyright(C) 2019 Marvell International Ltd. 
*/ +#include + #include "otx2_ethdev.h" static void @@ -86,6 +88,55 @@ otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev) nix_allmulticast_config(eth_dev, 0); } +void +otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id, + struct rte_eth_rxq_info *qinfo) +{ + struct otx2_eth_rxq *rxq; + + rxq = eth_dev->data->rx_queues[queue_id]; + + qinfo->mp = rxq->pool; + qinfo->scattered_rx = eth_dev->data->scattered_rx; + qinfo->nb_desc = rxq->qconf.nb_desc; + + qinfo->conf.rx_free_thresh = 0; + qinfo->conf.rx_drop_en = 0; + qinfo->conf.rx_deferred_start = 0; + qinfo->conf.offloads = rxq->offloads; +} + +void +otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id, + struct rte_eth_txq_info *qinfo) +{ + struct otx2_eth_txq *txq; + + txq = eth_dev->data->tx_queues[queue_id]; + + qinfo->nb_desc = txq->qconf.nb_desc; + + qinfo->conf.tx_thresh.pthresh = 0; + qinfo->conf.tx_thresh.hthresh = 0; + qinfo->conf.tx_thresh.wthresh = 0; + + qinfo->conf.tx_free_thresh = 0; + qinfo->conf.tx_rs_thresh = 0; + qinfo->conf.offloads = txq->offloads; + qinfo->conf.tx_deferred_start = 0; +} + +int +otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool) +{ + RTE_SET_USED(eth_dev); + + if (!strcmp(pool, rte_mbuf_platform_mempool_ops())) + return 0; + + return -ENOTSUP; +} + void otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo) { From patchwork Sun Jun 2 15:24:04 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54102 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 7DA621BACB; Sun, 2 Jun 2019 17:26:07 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id BE7B81B9AD for ; Sun, 2 Jun 2019 17:26:03 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FKKp4020361; Sun, 2 Jun 2019 08:26:03 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=FomXw+6zzeNbnt/iLXcyvSKCjdGeCd00vtnmHp8qxG0=; b=Pct5pGBTS2kza6+AtY+2SgQB6GVB59gIfszGWcz+7khvxx6AimG8ujPhQnFNaASMes6D 0sb7I0iRHsYRPWZqsTUFxDHMud8zjtcRokdtl3GtOS3lN2WGoKiDJbQuh7S/VdEqOxW4 5G7Dfl8e4r97lA0+Wufp7xn2V8nvcuClzxh5LoLOS3dgGDUMV1HwdYtcuX2u+7uul/pH cAZIk9YulcE7JrEUdld/OkmF1NbC7TF4zvHdBBau9T7DRKDh8Z15koJEw7PmLOvyAhwi frAjR91aY0eU2G2WP93rEnXvkW5xtPKVc88cgc72ngbPendgTk3PjBO1gViNX3AARhjZ oA== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk495x-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:26:03 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:26:01 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:26:01 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id B25C93F703F; Sun, 2 Jun 
2019 08:25:59 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: Date: Sun, 2 Jun 2019 20:54:04 +0530 Message-ID: <20190602152434.23996-29-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 28/58] net/octeontx2: add Rx and Tx descriptor operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Add Rx and Tx queue descriptor related operations. Signed-off-by: Jerin Jacob Signed-off-by: Kiran Kumar K --- doc/guides/nics/features/octeontx2.ini | 2 + doc/guides/nics/features/octeontx2_vec.ini | 2 + doc/guides/nics/features/octeontx2_vf.ini | 2 + drivers/net/octeontx2/otx2_ethdev.c | 4 ++ drivers/net/octeontx2/otx2_ethdev.h | 4 ++ drivers/net/octeontx2/otx2_ethdev_ops.c | 83 ++++++++++++++++++++++ 6 files changed, 97 insertions(+) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 221fc84d8..79b49bf66 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -12,6 +12,7 @@ SR-IOV = Y Multiprocess aware = Y Link status = Y Link status event = Y +Free Tx mbuf on demand = Y Queue start/stop = Y Promiscuous mode = Y Allmulticast mode = Y @@ -21,6 +22,7 @@ RSS key update = Y RSS reta update = Y Inner RSS = Y Packet type parsing = Y +Rx descriptor status = Y Basic stats = Y Stats per queue = Y Extended stats = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index e11327c7a..fc0390dac 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -12,6 +12,7 @@ SR-IOV = Y Multiprocess aware = Y Link status = Y Link status event = Y +Free Tx mbuf on demand = Y Queue start/stop = Y Promiscuous mode = Y Allmulticast mode = Y @@ -21,6 +22,7 @@ RSS key update = Y RSS reta update = Y Inner RSS = Y Packet type parsing = Y +Rx descriptor status = Y Basic stats = Y Extended stats = Y Stats per queue = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index b2115cea4..6c63e12d0 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -11,12 +11,14 @@ Lock-free Tx queue = Y Multiprocess aware = Y Link status = Y Link status event = Y +Free Tx mbuf on demand = Y Queue start/stop = Y RSS hash = Y RSS key update = Y RSS reta update = Y Inner RSS = Y Packet type parsing = Y +Rx descriptor status = Y Basic stats = Y Extended stats = Y Stats per queue = Y diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index e9af48c8d..41adc6858 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1293,6 +1293,10 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .xstats_get_names_by_id = otx2_nix_xstats_get_names_by_id, .rxq_info_get = otx2_nix_rxq_info_get, .txq_info_get = otx2_nix_txq_info_get, + .rx_queue_count = otx2_nix_rx_queue_count, + .rx_descriptor_done = otx2_nix_rx_descriptor_done, + .rx_descriptor_status = 
otx2_nix_rx_descriptor_status,
+	.tx_done_cleanup = otx2_nix_tx_done_cleanup,
 	.pool_ops_supported = otx2_nix_pool_ops_supported,
 };
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 594021285..c849231d0 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -259,6 +259,10 @@ void otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
			   struct rte_eth_rxq_info *qinfo);
 void otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
			    struct rte_eth_txq_info *qinfo);
+uint32_t otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t qidx);
+int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
+int otx2_nix_rx_descriptor_done(void *rxq, uint16_t offset);
+int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset);
 
 void otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en);
 void otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 95a5eb6ed..627f20cf5 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -126,6 +126,89 @@ otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
 	qinfo->conf.tx_deferred_start = 0;
 }
 
+static void
+nix_rx_head_tail_get(struct otx2_eth_dev *dev,
+		     uint32_t *head, uint32_t *tail, uint16_t queue_idx)
+{
+	uint64_t reg, val;
+
+	if (head == NULL || tail == NULL)
+		return;
+
+	reg = (((uint64_t)queue_idx) << 32);
+	val = otx2_atomic64_add_nosync(reg, (int64_t *)
+				       (dev->base + NIX_LF_CQ_OP_STATUS));
+	if (val & (OP_ERR | CQ_ERR))
+		val = 0;
+
+	*tail = (uint32_t)(val & 0xFFFFF);
+	*head = (uint32_t)((val >> 20) & 0xFFFFF);
+}
+
+uint32_t
+otx2_nix_rx_queue_count(struct rte_eth_dev *eth_dev, uint16_t queue_idx)
+{
+	struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[queue_idx];
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+	uint32_t head, tail;
+
+	nix_rx_head_tail_get(dev, &head, &tail, queue_idx);
+	return (tail - head) % rxq->qlen;
+}
+
+static inline int
+nix_offset_has_packet(uint32_t head, uint32_t tail, uint16_t offset)
+{
+	/* Check given offset(queue index) has packet filled by HW */
+	if (tail > head && offset <= tail && offset >= head)
+		return 1;
+	/* Wrap around case */
+	if (head > tail && (offset >= head || offset <= tail))
+		return 1;
+
+	return 0;
+}
+
+int
+otx2_nix_rx_descriptor_done(void *rx_queue, uint16_t offset)
+{
+	struct otx2_eth_rxq *rxq = rx_queue;
+	uint32_t head, tail;
+
+	nix_rx_head_tail_get(otx2_eth_pmd_priv(rxq->eth_dev),
+			     &head, &tail, rxq->rq);
+
+	return nix_offset_has_packet(head, tail, offset);
+}
+
+int
+otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset)
+{
+	struct otx2_eth_rxq *rxq = rx_queue;
+	uint32_t head, tail;
+
+	if (offset >= rxq->qlen)
+		return -EINVAL;
+
+	nix_rx_head_tail_get(otx2_eth_pmd_priv(rxq->eth_dev),
+			     &head, &tail, rxq->rq);
+
+	if (nix_offset_has_packet(head, tail, offset))
+		return RTE_ETH_RX_DESC_DONE;
+	else
+		return RTE_ETH_RX_DESC_AVAIL;
+}
+
+/* It is a NOP for octeontx2 as HW frees the buffer on xmit */
+int
+otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt)
+{
+	RTE_SET_USED(txq);
+	RTE_SET_USED(free_cnt);
+
+	return 0;
+}
+
 int
 otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool)
 {
From patchwork Sun Jun 2 15:24:05 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jerin Jacob Kollanukkaran
X-Patchwork-Id: 54103 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 3C3901BAC7; Sun, 2 Jun 2019 17:26:10 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id D01561B9AD for ; Sun, 2 Jun 2019 17:26:06 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FKKp5020361; Sun, 2 Jun 2019 08:26:06 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=F9v00PY06nmfsoS61sXCxJ2qTjsC0lwTpfDeIwgF1IY=; b=XGbd5Ebqlx/Mrfw3sEBmxpKP/vEwStGPu78osTHTbDOVIYcyl+K4UqTJzDOAL0AZKfl2 tIemSMtcqLyjEy6WnYOslReTBwVfPMmH1lQdJV9RW8dYt+alpkZSUO/iogyV1IhgbZ5m sc0H7QCAHComMVs8C9uZZe4WPrs6Fid7pFWpHQ4Im13/KbjXlSKl1Sr3VE7a1H4x3mDa pNU0aj+KI7egtIRLXvTNB8fs1LfEpU+qH8/4hcZsWN16jHIkziLNbllh2L+8kdS3un8V nKlS5rVrqJHoN2YyT0GiZmAZqswhcxqkhAB43w8bBfzIRnxjTwyew8rvUn0bVXl3jPIX SQ== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk4964-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:26:06 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:26:04 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:26:04 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id C07A53F703F; Sun, 2 Jun 2019 08:26:02 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: , Vamsi Attunuru Date: Sun, 2 Jun 2019 20:54:05 +0530 Message-ID: <20190602152434.23996-30-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 29/58] net/octeontx2: add module EEPROM dump X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vamsi Attunuru add module EEPROM dump operation. 
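
These ops back the generic rte_eth_dev_get_module_info()/rte_eth_dev_get_module_eeprom() API. A minimal application-side sketch (illustrative only, not part of this patch; port_id is a placeholder):

    #include <errno.h>
    #include <stdlib.h>
    #include <string.h>
    #include <rte_ethdev.h>

    /* Read the full SFP EEPROM exposed by the port's module. */
    static int
    read_module_eeprom(uint16_t port_id)
    {
            struct rte_eth_dev_module_info modinfo;
            struct rte_dev_eeprom_info eeprom;
            int rc;

            rc = rte_eth_dev_get_module_info(port_id, &modinfo);
            if (rc)
                    return rc;

            memset(&eeprom, 0, sizeof(eeprom));
            eeprom.length = modinfo.eeprom_len;
            eeprom.data = calloc(1, modinfo.eeprom_len);
            if (eeprom.data == NULL)
                    return -ENOMEM;

            rc = rte_eth_dev_get_module_eeprom(port_id, &eeprom);
            /* ... parse eeprom.data on success ... */
            free(eeprom.data);
            return rc;
    }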
Signed-off-by: Vamsi Attunuru --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + doc/guides/nics/features/octeontx2_vf.ini | 1 + drivers/net/octeontx2/otx2_ethdev.c | 2 + drivers/net/octeontx2/otx2_ethdev.h | 4 ++ drivers/net/octeontx2/otx2_ethdev_ops.c | 51 ++++++++++++++++++++++ 6 files changed, 60 insertions(+) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 79b49bf66..18daccc49 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -26,4 +26,5 @@ Rx descriptor status = Y Basic stats = Y Stats per queue = Y Extended stats = Y +Module EEPROM dump = Y Registers dump = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index fc0390dac..ccf4dac42 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -26,4 +26,5 @@ Rx descriptor status = Y Basic stats = Y Extended stats = Y Stats per queue = Y +Module EEPROM dump = Y Registers dump = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 6c63e12d0..812d5d649 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -22,4 +22,5 @@ Rx descriptor status = Y Basic stats = Y Extended stats = Y Stats per queue = Y +Module EEPROM dump = Y Registers dump = Y diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 41adc6858..0df487983 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1298,6 +1298,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .rx_descriptor_status = otx2_nix_rx_descriptor_status, .tx_done_cleanup = otx2_nix_tx_done_cleanup, .pool_ops_supported = otx2_nix_pool_ops_supported, + .get_module_info = otx2_nix_get_module_info, + .get_module_eeprom = otx2_nix_get_module_eeprom, }; static inline int diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index c849231d0..8fbd4532e 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -254,6 +254,10 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev) /* Ops */ void otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info); +int otx2_nix_get_module_info(struct rte_eth_dev *eth_dev, + struct rte_eth_dev_module_info *modinfo); +int otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev, + struct rte_dev_eeprom_info *info); int otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool); void otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id, struct rte_eth_rxq_info *qinfo); diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c index 627f20cf5..51c156786 100644 --- a/drivers/net/octeontx2/otx2_ethdev_ops.c +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c @@ -220,6 +220,57 @@ otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool) return -ENOTSUP; } +static struct cgx_fw_data * +nix_get_fwdata(struct otx2_eth_dev *dev) +{ + struct otx2_mbox *mbox = dev->mbox; + struct cgx_fw_data *rsp = NULL; + + otx2_mbox_alloc_msg_cgx_get_aux_link_info(mbox); + + otx2_mbox_process_msg(mbox, (void *)&rsp); + + return rsp; +} + +int +otx2_nix_get_module_info(struct rte_eth_dev *eth_dev, + struct rte_eth_dev_module_info *modinfo) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct 
cgx_fw_data *rsp; + + rsp = nix_get_fwdata(dev); + if (rsp == NULL) + return -EIO; + + modinfo->type = rsp->fwdata.sfp_eeprom.sff_id; + modinfo->eeprom_len = SFP_EEPROM_SIZE; + + return 0; +} + +int +otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev, + struct rte_dev_eeprom_info *info) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct cgx_fw_data *rsp; + + if (!info->data || !info->length || + (info->offset + info->length > SFP_EEPROM_SIZE)) + return -EINVAL; + + rsp = nix_get_fwdata(dev); + if (rsp == NULL) + return -EIO; + + otx2_mbox_memcpy(info->data, rsp->fwdata.sfp_eeprom.buf + info->offset, + info->length); + + return 0; +} + void otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo) { From patchwork Sun Jun 2 15:24:06 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54104 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 690CF1BAE4; Sun, 2 Jun 2019 17:26:14 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 9F9601BACE for ; Sun, 2 Jun 2019 17:26:10 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FKK4k020364; Sun, 2 Jun 2019 08:26:10 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=UdX/E1T53oNXe8WvwAJIbJhI/clmkjV54i0L7JoKxPk=; b=qtM0O/w13Qh7vF2hxwrQ3G4cY1WQCk1w7YdQfLQCQfjxQ2e/QeYoPUknRIVJgRozS2xx VYhmzB5obvRpnvs6PKB3g67Ks3EBbPRjKXt8bx0Ch7pWuyer6ydqygBshoUtjD2jvy16 pfHjFVz4ZHTV6brQjKfmgZ03Ymn3foDQu6Nb4egWYoV84mMuRQKEoMv4LwDDYQSnzQx2 gtnntP1WmswWHanZF1IgpkgfjV/i7XQizs6RL4NJSCPf68NiB+efKewOOvHcEfk5/eoB 3vCmP1G2+xqDbOFyF0CRaItBa0uM3AgwK10z5yJ8r5liNVMxhYfW903tX+SaTn5UebeP JA== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk496a-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:26:09 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:26:08 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:26:08 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 0B8ED3F703F; Sun, 2 Jun 2019 08:26:05 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: , Vamsi Attunuru Date: Sun, 2 Jun 2019 20:54:06 +0530 Message-ID: <20190602152434.23996-31-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 30/58] net/octeontx2: add flow control support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list 
List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vamsi Attunuru Add flow control operations and exposed otx2_nix_update_flow_ctrl_mode() to enable on the configured mode in dev_start(). Signed-off-by: Vamsi Attunuru Signed-off-by: Nithin Dabilpuram --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_ethdev.c | 20 ++ drivers/net/octeontx2/otx2_ethdev.h | 23 +++ drivers/net/octeontx2/otx2_flow_ctrl.c | 230 +++++++++++++++++++++ 7 files changed, 277 insertions(+) create mode 100644 drivers/net/octeontx2/otx2_flow_ctrl.c diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 18daccc49..ba7fdc868 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -21,6 +21,7 @@ RSS hash = Y RSS key update = Y RSS reta update = Y Inner RSS = Y +Flow control = Y Packet type parsing = Y Rx descriptor status = Y Basic stats = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index ccf4dac42..b909918ce 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -21,6 +21,7 @@ RSS hash = Y RSS key update = Y RSS reta update = Y Inner RSS = Y +Flow control = Y Packet type parsing = Y Rx descriptor status = Y Basic stats = Y diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index 00f61c354..1d3788466 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -37,6 +37,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ otx2_stats.c \ otx2_lookup.c \ otx2_ethdev.c \ + otx2_flow_ctrl.c \ otx2_ethdev_irq.c \ otx2_ethdev_ops.c \ otx2_ethdev_debug.c \ diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index eb5206ea1..e4fcac763 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -10,6 +10,7 @@ sources = files( 'otx2_stats.c', 'otx2_lookup.c', 'otx2_ethdev.c', + 'otx2_flow_ctrl.c', 'otx2_ethdev_irq.c', 'otx2_ethdev_ops.c', 'otx2_ethdev_debug.c', diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 0df487983..97e0e3465 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -216,6 +216,14 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev, aq->cq.cq_err_int_ena = BIT(NIX_CQERRINT_CQE_FAULT); aq->cq.cq_err_int_ena |= BIT(NIX_CQERRINT_DOOR_ERR); + /* TX pause frames enable flowctrl on RX side */ + if (dev->fc_info.tx_pause) { + /* Single bpid is allocated for all rx channels for now */ + aq->cq.bpid = dev->fc_info.bpid[0]; + aq->cq.bp = NIX_CQ_BP_LEVEL; + aq->cq.bp_ena = 1; + } + /* Many to one reduction */ aq->cq.qint_idx = qid % dev->qints; @@ -1069,6 +1077,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) /* Free the resources allocated from the previous configure */ if (dev->configured == 1) { + otx2_nix_rxchan_bpid_cfg(eth_dev, false); oxt2_nix_unregister_queue_irqs(eth_dev); nix_set_nop_rxtx_function(eth_dev); rc = nix_store_queue_cfg_and_then_release(eth_dev); @@ -1122,6 +1131,12 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) goto free_nix_lf; } + rc = otx2_nix_rxchan_bpid_cfg(eth_dev, true); + if (rc) { + otx2_err("Failed to configure nix rx chan 
bpid cfg rc=%d", rc); + goto free_nix_lf; + } + /* * Restore queue config when reconfigure followed by * reconfigure and no queue configure invoked from application case. @@ -1300,6 +1315,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .pool_ops_supported = otx2_nix_pool_ops_supported, .get_module_info = otx2_nix_get_module_info, .get_module_eeprom = otx2_nix_get_module_eeprom, + .flow_ctrl_get = otx2_nix_flow_ctrl_get, + .flow_ctrl_set = otx2_nix_flow_ctrl_set, }; static inline int @@ -1501,6 +1518,9 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; + /* Disable nix bpid config */ + otx2_nix_rxchan_bpid_cfg(eth_dev, false); + /* Free up SQs */ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) { otx2_nix_tx_queue_release(eth_dev->data->tx_queues[i]); diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 8fbd4532e..fad151b54 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -68,6 +68,9 @@ #define NIX_TX_NB_SEG_MAX 9 #endif +/* Apply BP when CQ is 75% full */ +#define NIX_CQ_BP_LEVEL (25 * 256 / 100) + #define CQ_OP_STAT_OP_ERR 63 #define CQ_OP_STAT_CQ_ERR 46 @@ -150,6 +153,14 @@ struct otx2_npc_flow_info { uint16_t flow_max_priority; }; +struct otx2_fc_info { + enum rte_eth_fc_mode mode; /**< Link flow control mode */ + uint8_t rx_pause; + uint8_t tx_pause; + uint8_t chan_cnt; + uint16_t bpid[NIX_MAX_CHAN]; +}; + struct otx2_eth_dev { OTX2_DEV; /* Base class */ MARKER otx2_eth_dev_data_start; @@ -196,6 +207,7 @@ struct otx2_eth_dev { struct otx2_nix_tm_node_list node_list; struct otx2_nix_tm_shaper_profile_list shaper_profile_list; struct otx2_rss_info rss_info; + struct otx2_fc_info fc_info; uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS]; uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS]; struct otx2_npc_flow_info npc_flow; @@ -350,6 +362,17 @@ int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev); int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr); +/* Flow Control */ +int otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev, + struct rte_eth_fc_conf *fc_conf); + +int otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev, + struct rte_eth_fc_conf *fc_conf); + +int otx2_nix_rxchan_bpid_cfg(struct rte_eth_dev *eth_dev, bool enb); + +int otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev); + /* Lookup configuration */ void *otx2_nix_fastpath_lookup_mem_get(void); diff --git a/drivers/net/octeontx2/otx2_flow_ctrl.c b/drivers/net/octeontx2/otx2_flow_ctrl.c new file mode 100644 index 000000000..bd3cda594 --- /dev/null +++ b/drivers/net/octeontx2/otx2_flow_ctrl.c @@ -0,0 +1,230 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#include "otx2_ethdev.h" + +int +otx2_nix_rxchan_bpid_cfg(struct rte_eth_dev *eth_dev, bool enb) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_fc_info *fc = &dev->fc_info; + struct otx2_mbox *mbox = dev->mbox; + struct nix_bp_cfg_req *req; + struct nix_bp_cfg_rsp *rsp; + int rc; + + if (otx2_dev_is_vf(dev)) + return 0; + + if (enb) { + req = otx2_mbox_alloc_msg_nix_bp_enable(mbox); + req->chan_base = 0; + req->chan_cnt = 1; + req->bpid_per_chan = 0; + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc || req->chan_cnt != rsp->chan_cnt) { + otx2_err("Insufficient BPIDs, alloc=%u < req=%u rc=%d", + rsp->chan_cnt, req->chan_cnt, rc); + return rc; + } + + fc->bpid[0] = rsp->chan_bpid[0]; + } else { + req = otx2_mbox_alloc_msg_nix_bp_disable(mbox); + req->chan_base = 0; + req->chan_cnt = 1; + + rc = otx2_mbox_process(mbox); + + memset(fc->bpid, 0, sizeof(uint16_t) * NIX_MAX_CHAN); + } + + return rc; +} + +int +otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev, + struct rte_eth_fc_conf *fc_conf) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct cgx_pause_frm_cfg *req, *rsp; + struct otx2_mbox *mbox = dev->mbox; + int rc; + + if (otx2_dev_is_vf(dev)) + return -ENOTSUP; + + req = otx2_mbox_alloc_msg_cgx_cfg_pause_frm(mbox); + req->set = 0; + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + goto done; + + if (rsp->rx_pause && rsp->tx_pause) + fc_conf->mode = RTE_FC_FULL; + else if (rsp->rx_pause) + fc_conf->mode = RTE_FC_RX_PAUSE; + else if (rsp->tx_pause) + fc_conf->mode = RTE_FC_TX_PAUSE; + else + fc_conf->mode = RTE_FC_NONE; + +done: + return rc; +} + +static int +otx2_nix_cq_bp_cfg(struct rte_eth_dev *eth_dev, bool enb) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_fc_info *fc = &dev->fc_info; + struct otx2_mbox *mbox = dev->mbox; + struct nix_aq_enq_req *aq; + struct otx2_eth_rxq *rxq; + int i, rc; + + for (i = 0; i < eth_dev->data->nb_rx_queues; i++) { + rxq = eth_dev->data->rx_queues[i]; + + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + if (!aq) { + /* The shared memory buffer can be full. 
+ * flush it and retry + */ + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_wait_for_rsp(mbox, 0); + if (rc < 0) + return rc; + + aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox); + if (!aq) + return -ENOMEM; + } + aq->qidx = rxq->rq; + aq->ctype = NIX_AQ_CTYPE_CQ; + aq->op = NIX_AQ_INSTOP_WRITE; + + if (enb) { + aq->cq.bpid = fc->bpid[0]; + aq->cq_mask.bpid = ~(aq->cq_mask.bpid); + aq->cq.bp = NIX_CQ_BP_LEVEL; + aq->cq_mask.bp = ~(aq->cq_mask.bp); + } + + aq->cq.bp_ena = !!enb; + aq->cq_mask.bp_ena = ~(aq->cq_mask.bp_ena); + } + + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_wait_for_rsp(mbox, 0); + if (rc < 0) + return rc; + + return 0; +} + +static int +otx2_nix_rx_fc_cfg(struct rte_eth_dev *eth_dev, bool enb) +{ + return otx2_nix_cq_bp_cfg(eth_dev, enb); +} + +int +otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev, + struct rte_eth_fc_conf *fc_conf) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_fc_info *fc = &dev->fc_info; + struct otx2_mbox *mbox = dev->mbox; + struct cgx_pause_frm_cfg *req; + uint8_t tx_pause, rx_pause; + int rc = 0; + + if (otx2_dev_is_vf(dev)) + return -ENOTSUP; + + if (fc_conf->high_water || fc_conf->low_water || fc_conf->pause_time || + fc_conf->mac_ctrl_frame_fwd || fc_conf->autoneg) { + otx2_info("Flowctrl parameter is not supported"); + return -EINVAL; + } + + if (fc_conf->mode == fc->mode) + return 0; + + rx_pause = (fc_conf->mode == RTE_FC_FULL) || + (fc_conf->mode == RTE_FC_RX_PAUSE); + tx_pause = (fc_conf->mode == RTE_FC_FULL) || + (fc_conf->mode == RTE_FC_TX_PAUSE); + + /* Check if TX pause frame is already enabled or not */ + if (fc->tx_pause ^ tx_pause) { + if (otx2_dev_is_A0(dev) && eth_dev->data->dev_started) { + /* on A0, CQ should be in disabled state + * while setting flow control configuration. + */ + otx2_info("Stop the port=%d for setting flow control\n", + eth_dev->data->port_id); + return 0; + } + /* TX pause frames, enable/disable flowctrl on RX side. */ + rc = otx2_nix_rx_fc_cfg(eth_dev, tx_pause); + if (rc) + return rc; + } + + req = otx2_mbox_alloc_msg_cgx_cfg_pause_frm(mbox); + req->set = 1; + req->rx_pause = rx_pause; + req->tx_pause = tx_pause; + + rc = otx2_mbox_process(mbox); + if (rc) + return rc; + + fc->tx_pause = tx_pause; + fc->rx_pause = rx_pause; + fc->mode = fc_conf->mode; + + return rc; +} + +int +otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_fc_info *fc = &dev->fc_info; + struct rte_eth_fc_conf fc_conf; + + if (otx2_dev_is_vf(dev)) + return 0; + + memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf)); + /* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW + * by AF driver, update those info in PMD structure. + */ + otx2_nix_flow_ctrl_get(eth_dev, &fc_conf); + + if (fc_conf.mode != fc->mode && fc->mode == RTE_FC_NONE) { + /* PMD disables HW flow control in the initial application's call + * to dev_start(), application uses flow_ctrl_set() API to set + * flow control later. + */ + fc->mode = fc_conf.mode; + fc_conf.mode = RTE_FC_NONE; + } + + /* To avoid Link credit deadlock on A0, disable Tx FC if it's enabled */ + if (otx2_dev_is_A0(dev) && + (fc_conf.mode == RTE_FC_FULL || fc_conf.mode == RTE_FC_RX_PAUSE)) { + fc_conf.mode = + (fc_conf.mode == RTE_FC_FULL || + fc_conf.mode == RTE_FC_TX_PAUSE) ? 
+ RTE_FC_TX_PAUSE : RTE_FC_NONE; + } + + return otx2_nix_flow_ctrl_set(eth_dev, &fc_conf); +} From patchwork Sun Jun 2 15:24:07 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54105 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id AF8B71B9EC; Sun, 2 Jun 2019 17:26:17 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id E76D21BAB9 for ; Sun, 2 Jun 2019 17:26:12 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FJjgP021271; Sun, 2 Jun 2019 08:26:12 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=fkRWhlPJul17g2hVf8eNFQhQV944MB+bCYH4DXwTpbQ=; b=Ssk75/ZYkkY7bpExIa2V50W8EkfvIfCYctCsUQJqSaPgqW4yeS/5+0GybK1JC9zi9dfl Pr4ZGiav4YMb7CPqO22IY8K7RJ9epYdAJFLnQ+1BthDgs7jA/J32tu3Lj8FRlgU/ydTt FrOwr5WhyGUetr6RdArBQMUpNb9oo/boqDnZhDe6QSAfOy/3OK4sMtk/tE87jRZu6eOq quGG9+DnTAphK/S35SZaTU/Gldz7Et8zbCbmFzHqrswWzDwSex8rsQWpDSigvLyzzHQe WM0NI00LcJjt9WvUOgPpTFSAngVmQYg7K/Aq0biUKvpffOMXn3C61y6KPKgRIbWPGZQ8 gw== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0a-0016f401.pphosted.com with ESMTP id 2supqkvqhy-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:26:12 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:26:11 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:26:11 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 512973F7041; Sun, 2 Jun 2019 08:26:09 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Harman Kalra , Zyta Szpak Date: Sun, 2 Jun 2019 20:54:07 +0530 Message-ID: <20190602152434.23996-32-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 31/58] net/octeontx2: add PTP base support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Harman Kalra Add PTP enable and disable operations. 
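As a usage sketch only (not part of this patch): once these dev_ops are wired up, an application enables PTP through the generic ethdev API. The helper below is hypothetical and assumes the port is already configured; it relies only on rte_eth_timesync_enable(), which this patch backs with the new .timesync_enable op.

#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical helper, illustrative only */
static int
app_enable_ptp(uint16_t port_id)
{
        /* Backed by the PMD's timesync_enable op; per this series the PMD
         * also sets DEV_RX_OFFLOAD_TIMESTAMP and its Rx/Tx tstamp flags
         * internally, so no extra offload configuration is shown here.
         */
        int rc = rte_eth_timesync_enable(port_id);

        if (rc)
                printf("timesync enable failed on port %u: %d\n", port_id, rc);
        return rc;
}

Disabling is symmetrical via rte_eth_timesync_disable(), which maps to the new .timesync_disable op.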
Signed-off-by: Harman Kalra Signed-off-by: Zyta Szpak --- drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_ethdev.c | 22 ++++- drivers/net/octeontx2/otx2_ethdev.h | 17 ++++ drivers/net/octeontx2/otx2_ptp.c | 135 ++++++++++++++++++++++++++++ drivers/net/octeontx2/otx2_rx.h | 11 +++ 6 files changed, 184 insertions(+), 3 deletions(-) create mode 100644 drivers/net/octeontx2/otx2_ptp.c diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index 1d3788466..b1c8e4e52 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -33,6 +33,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ otx2_tm.c \ otx2_rss.c \ otx2_mac.c \ + otx2_ptp.c \ otx2_link.c \ otx2_stats.c \ otx2_lookup.c \ diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index e4fcac763..57d6c0a58 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -6,6 +6,7 @@ sources = files( 'otx2_tm.c', 'otx2_rss.c', 'otx2_mac.c', + 'otx2_ptp.c', 'otx2_link.c', 'otx2_stats.c', 'otx2_lookup.c', diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 97e0e3465..683aecd4e 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -336,9 +336,7 @@ nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq) static inline int nix_get_data_off(struct otx2_eth_dev *dev) { - RTE_SET_USED(dev); - - return 0; + return otx2_ethdev_is_ptp_en(dev) ? NIX_TIMESYNC_RX_OFFSET : 0; } uint64_t @@ -450,6 +448,7 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq, rxq->qlen = nix_qsize_to_val(qsize); rxq->qsize = qsize; rxq->lookup_mem = otx2_nix_fastpath_lookup_mem_get(); + rxq->tstamp = &dev->tstamp; /* Alloc completion queue */ rc = nix_cq_rq_init(eth_dev, dev, rq, rxq, mp); @@ -716,6 +715,7 @@ otx2_nix_form_default_desc(struct otx2_eth_txq *txq) send_mem->dsz = 0x0; send_mem->wmem = 0x1; send_mem->alg = NIX_SENDMEMALG_SETTSTMP; + send_mem->addr = txq->dev->tstamp.tx_tstamp_iova; } sg = (union nix_send_sg_s *)&txq->cmd[4]; } else { @@ -1137,6 +1137,16 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) goto free_nix_lf; } + /* Enable PTP if it was requested by the app or if it is already + * enabled in PF owning this VF + */ + memset(&dev->tstamp, 0, sizeof(struct otx2_timesync_info)); + if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || + otx2_ethdev_is_ptp_en(dev)) + otx2_nix_timesync_enable(eth_dev); + else + otx2_nix_timesync_disable(eth_dev); + /* * Restore queue config when reconfigure followed by * reconfigure and no queue configure invoked from application case. 
@@ -1317,6 +1327,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .get_module_eeprom = otx2_nix_get_module_eeprom, .flow_ctrl_get = otx2_nix_flow_ctrl_get, .flow_ctrl_set = otx2_nix_flow_ctrl_set, + .timesync_enable = otx2_nix_timesync_enable, + .timesync_disable = otx2_nix_timesync_disable, }; static inline int @@ -1521,6 +1533,10 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) /* Disable nix bpid config */ otx2_nix_rxchan_bpid_cfg(eth_dev, false); + /* Disable PTP if already enabled */ + if (otx2_ethdev_is_ptp_en(dev)) + otx2_nix_timesync_disable(eth_dev); + /* Free up SQs */ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) { otx2_nix_tx_queue_release(eth_dev->data->tx_queues[i]); diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index fad151b54..809a9656f 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -13,6 +13,7 @@ #include #include #include +#include #include "otx2_common.h" #include "otx2_dev.h" @@ -109,6 +110,12 @@ #define NIX_DEFAULT_RSS_CTX_GROUP 0 #define NIX_DEFAULT_RSS_MCAM_IDX -1 +#define otx2_ethdev_is_ptp_en(dev) ((dev)->ptp_en) + +#define NIX_TIMESYNC_TX_CMD_LEN 8 +/* Additional timesync values. */ +#define OTX2_CYCLECOUNTER_MASK 0xffffffffffffffffULL + enum nix_q_size_e { nix_q_size_16, /* 16 entries */ nix_q_size_64, /* 64 entries */ @@ -214,6 +221,12 @@ struct otx2_eth_dev { struct otx2_eth_qconf *tx_qconf; struct otx2_eth_qconf *rx_qconf; struct rte_eth_dev *eth_dev; + /* PTP counters */ + bool ptp_en; + struct otx2_timesync_info tstamp; + struct rte_timecounter systime_tc; + struct rte_timecounter rx_tstamp_tc; + struct rte_timecounter tx_tstamp_tc; } __rte_cache_aligned; struct otx2_eth_txq { @@ -396,4 +409,8 @@ int otx2_ethdev_parse_devargs(struct rte_devargs *devargs, /* Rx and Tx routines */ void otx2_nix_form_default_desc(struct otx2_eth_txq *txq); +/* Timesync - PTP routines */ +int otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev); +int otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev); + #endif /* __OTX2_ETHDEV_H__ */ diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c new file mode 100644 index 000000000..105067949 --- /dev/null +++ b/drivers/net/octeontx2/otx2_ptp.c @@ -0,0 +1,135 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include + +#include "otx2_ethdev.h" + +#define PTP_FREQ_ADJUST (1 << 9) + +static void +nix_start_timecounters(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + + memset(&dev->systime_tc, 0, sizeof(struct rte_timecounter)); + memset(&dev->rx_tstamp_tc, 0, sizeof(struct rte_timecounter)); + memset(&dev->tx_tstamp_tc, 0, sizeof(struct rte_timecounter)); + + dev->systime_tc.cc_mask = OTX2_CYCLECOUNTER_MASK; + dev->rx_tstamp_tc.cc_mask = OTX2_CYCLECOUNTER_MASK; + dev->tx_tstamp_tc.cc_mask = OTX2_CYCLECOUNTER_MASK; +} + +static int +nix_ptp_config(struct rte_eth_dev *eth_dev, int en) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + uint8_t rc = 0; + + if (otx2_dev_is_vf(dev)) + return rc; + + if (en) { + /* Enable time stamping of sent PTP packets. */ + otx2_mbox_alloc_msg_nix_lf_ptp_tx_enable(mbox); + rc = otx2_mbox_process(mbox); + if (rc) { + otx2_err("MBOX ptp tx conf enable failed: err %d", rc); + return rc; + } + /* Enable time stamping of received PTP packets. 
*/ + otx2_mbox_alloc_msg_cgx_ptp_rx_enable(mbox); + } else { + /* Disable time stamping of sent PTP packets. */ + otx2_mbox_alloc_msg_nix_lf_ptp_tx_disable(mbox); + rc = otx2_mbox_process(mbox); + if (rc) { + otx2_err("MBOX ptp tx conf disable failed: err %d", rc); + return rc; + } + /* Disable time stamping of received PTP packets. */ + otx2_mbox_alloc_msg_cgx_ptp_rx_disable(mbox); + } + + return otx2_mbox_process(mbox); +} + +int +otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int i, rc = 0; + + if (otx2_ethdev_is_ptp_en(dev)) { + otx2_info("PTP mode is already enabled"); + return -EINVAL; + } + + /* If we are VF, no further action can be taken */ + if (otx2_dev_is_vf(dev)) + return -EINVAL; + + if (!(dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)) { + otx2_err("Ptype offload is disabled, it should be enabled"); + return -EINVAL; + } + + /* Allocate an IOVA address for the tx timestamp */ + const struct rte_memzone *ts; + ts = rte_eth_dma_zone_reserve(eth_dev, "otx2_ts", + 0, OTX2_ALIGN, OTX2_ALIGN, + dev->node); + if (ts == NULL) { + otx2_err("Failed to allocate mem for tx tstamp addr"); + return -ENOMEM; + } + + dev->tstamp.tx_tstamp_iova = ts->iova; + dev->tstamp.tx_tstamp = ts->addr; + + /* System time should be already on by default */ + nix_start_timecounters(eth_dev); + + dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP; + dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F; + dev->tx_offload_flags |= NIX_TX_OFFLOAD_TSTAMP_F; + + rc = nix_ptp_config(eth_dev, 1); + if (!rc) { + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) { + struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i]; + otx2_nix_form_default_desc(txq); + } + } + return rc; +} + +int +otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int i, rc = 0; + + if (!otx2_ethdev_is_ptp_en(dev)) { + otx2_nix_dbg("PTP mode is disabled"); + return -EINVAL; + } + + /* If we are VF, nothing else can be done */ + if (otx2_dev_is_vf(dev)) + return -EINVAL; + + dev->rx_offloads &= ~DEV_RX_OFFLOAD_TIMESTAMP; + dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_TSTAMP_F; + dev->tx_offload_flags &= ~NIX_TX_OFFLOAD_TSTAMP_F; + + rc = nix_ptp_config(eth_dev, 0); + if (!rc) { + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) { + struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i]; + otx2_nix_form_default_desc(txq); + } + } + return rc; +} diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h index 1283fdf37..0c3627c12 100644 --- a/drivers/net/octeontx2/otx2_rx.h +++ b/drivers/net/octeontx2/otx2_rx.h @@ -13,5 +13,16 @@ sizeof(uint16_t)) #define NIX_RX_OFFLOAD_PTYPE_F BIT(1) +#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5) + +#define NIX_TIMESYNC_RX_OFFSET 8 + +struct otx2_timesync_info { + uint64_t rx_tstamp; + rte_iova_t tx_tstamp_iova; + uint64_t *tx_tstamp; + uint8_t tx_ready; + uint8_t rx_ready; +} __rte_cache_aligned; #endif /* __OTX2_RX_H__ */ From patchwork Sun Jun 2 15:24:08 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54106 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 4A77F1BB22; Sun, 2 Jun 2019 17:26:21 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id
BB4DB1BB06 for ; Sun, 2 Jun 2019 17:26:16 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FK7H6020263; Sun, 2 Jun 2019 08:26:16 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=CY1mK70G51Uc2m0AV5xLD1N6aNk5bYUPX5j3/04evAI=; b=QKBnfWjNAI06yuetvyry5ugwA0rmdTDhlIVYOxyPCuBYHt2db+2woMzQPS1CRjR6wweA XzAnpVw0+rbjetWXYQI+/dBFPJF0yGS6bE/WISWidD8NxKez70CLjr5rGErr2baXGzGK RCXu975dKHhHoTFEP+2BQFRFPFyBHbPMsvnB0XImZLE0DNMpx6FWXpwkFGedIMdujLU/ 7inQWlvuFoaeIEREhFQhT+qi3SRLRA4cCQ2mQGGKAO5G5gtWji0xnimGogWhPBdGsNhO 14+a5n5Plg0Xr9rVqupxGeAOIjENr4G+TNz9xAAif7bNRBnvilbVu4DS1GrhIw711kNM 2Q== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk496n-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:26:16 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:26:14 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:26:14 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 60C363F703F; Sun, 2 Jun 2019 08:26:12 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: , Harman Kalra , Zyta Szpak Date: Sun, 2 Jun 2019 20:54:08 +0530 Message-ID: <20190602152434.23996-33-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 32/58] net/octeontx2: add remaining PTP operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Harman Kalra Add remaining PTP configuration/slowpath operations. Timesync feature is available only for PF devices. 
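A short sketch of the application-side calls these slowpath ops serve (illustrative only, not part of the patch; assumes timesync was already enabled on a started PF port):

#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical polling helper, illustrative only */
static void
app_poll_ptp(uint16_t port_id)
{
        struct timespec ts;

        /* Maps to .timesync_read_rx_timestamp; succeeds only after the Rx
         * path has latched a PTP packet timestamp (rx_ready != 0). */
        if (rte_eth_timesync_read_rx_timestamp(port_id, &ts, 0) == 0)
                printf("Rx PTP tstamp: %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);

        /* Maps to .timesync_read_tx_timestamp; succeeds once the SEND_MEM
         * descriptor has written the Tx timestamp back to memory. */
        if (rte_eth_timesync_read_tx_timestamp(port_id, &ts) == 0)
                printf("Tx PTP tstamp: %ld.%09ld\n", (long)ts.tv_sec, ts.tv_nsec);

        /* Device clock read and a small (fine) adjustment, served by the
         * PTP_OP_GET_CLOCK / PTP_OP_ADJFINE mailbox requests. */
        if (rte_eth_timesync_read_time(port_id, &ts) == 0)
                rte_eth_timesync_adjust_time(port_id, 1000);
}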
Signed-off-by: Harman Kalra Signed-off-by: Zyta Szpak --- doc/guides/nics/features/octeontx2.ini | 2 + drivers/net/octeontx2/otx2_ethdev.c | 6 ++ drivers/net/octeontx2/otx2_ethdev.h | 11 +++ drivers/net/octeontx2/otx2_ptp.c | 130 +++++++++++++++++++++++++ 4 files changed, 149 insertions(+) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index ba7fdc868..0f416ee4b 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -23,6 +23,8 @@ RSS reta update = Y Inner RSS = Y Flow control = Y Packet type parsing = Y +Timesync = Y +Timestamp offload = Y Rx descriptor status = Y Basic stats = Y Stats per queue = Y diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 683aecd4e..9cd3ce407 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -47,6 +47,7 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev) static const struct otx2_dev_ops otx2_dev_ops = { .link_status_update = otx2_eth_dev_link_status_update, + .ptp_info_update = otx2_eth_dev_ptp_info_update }; static int @@ -1329,6 +1330,11 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .flow_ctrl_set = otx2_nix_flow_ctrl_set, .timesync_enable = otx2_nix_timesync_enable, .timesync_disable = otx2_nix_timesync_disable, + .timesync_read_rx_timestamp = otx2_nix_timesync_read_rx_timestamp, + .timesync_read_tx_timestamp = otx2_nix_timesync_read_tx_timestamp, + .timesync_adjust_time = otx2_nix_timesync_adjust_time, + .timesync_read_time = otx2_nix_timesync_read_time, + .timesync_write_time = otx2_nix_timesync_write_time, }; static inline int diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 809a9656f..ba6d1736e 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -412,5 +412,16 @@ void otx2_nix_form_default_desc(struct otx2_eth_txq *txq); /* Timesync - PTP routines */ int otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev); int otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev); +int otx2_nix_timesync_read_rx_timestamp(struct rte_eth_dev *eth_dev, + struct timespec *timestamp, + uint32_t flags); +int otx2_nix_timesync_read_tx_timestamp(struct rte_eth_dev *eth_dev, + struct timespec *timestamp); +int otx2_nix_timesync_adjust_time(struct rte_eth_dev *eth_dev, int64_t delta); +int otx2_nix_timesync_write_time(struct rte_eth_dev *eth_dev, + const struct timespec *ts); +int otx2_nix_timesync_read_time(struct rte_eth_dev *eth_dev, + struct timespec *ts); +int otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en); #endif /* __OTX2_ETHDEV_H__ */ diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c index 105067949..5291da241 100644 --- a/drivers/net/octeontx2/otx2_ptp.c +++ b/drivers/net/octeontx2/otx2_ptp.c @@ -57,6 +57,23 @@ nix_ptp_config(struct rte_eth_dev *eth_dev, int en) return otx2_mbox_process(mbox); } +int +otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en) +{ + struct otx2_eth_dev *otx2_dev = (struct otx2_eth_dev *)dev; + struct rte_eth_dev *eth_dev = otx2_dev->eth_dev; + int i; + + otx2_dev->ptp_en = ptp_en; + for (i = 0; i < eth_dev->data->nb_rx_queues; i++) { + struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[i]; + rxq->mbuf_initializer = + otx2_nix_rxq_mbuf_setup(otx2_dev, + eth_dev->data->port_id); + } + return 0; +} + int otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev) { @@ -133,3 +150,116 @@ otx2_nix_timesync_disable(struct 
rte_eth_dev *eth_dev) } return rc; } + +int +otx2_nix_timesync_read_rx_timestamp(struct rte_eth_dev *eth_dev, + struct timespec *timestamp, + uint32_t __rte_unused flags) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_timesync_info *tstamp = &dev->tstamp; + uint64_t ns; + + if (!tstamp->rx_ready) + return -EINVAL; + + ns = rte_timecounter_update(&dev->rx_tstamp_tc, tstamp->rx_tstamp); + *timestamp = rte_ns_to_timespec(ns); + tstamp->rx_ready = 0; + + otx2_nix_dbg("rx timestamp: %llu sec: %lu nsec %lu", + (unsigned long long)tstamp->rx_tstamp, timestamp->tv_sec, + timestamp->tv_nsec); + + return 0; +} + +int +otx2_nix_timesync_read_tx_timestamp(struct rte_eth_dev *eth_dev, + struct timespec *timestamp) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_timesync_info *tstamp = &dev->tstamp; + uint64_t ns; + + if (*tstamp->tx_tstamp == 0) + return -EINVAL; + + ns = rte_timecounter_update(&dev->tx_tstamp_tc, *tstamp->tx_tstamp); + *timestamp = rte_ns_to_timespec(ns); + + otx2_nix_dbg("tx timestamp: %llu sec: %lu nsec %lu", + *(unsigned long long *)tstamp->tx_tstamp, + timestamp->tv_sec, timestamp->tv_nsec); + + *tstamp->tx_tstamp = 0; + rte_wmb(); + + return 0; +} + +int +otx2_nix_timesync_adjust_time(struct rte_eth_dev *eth_dev, int64_t delta) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct ptp_req *req; + struct ptp_rsp *rsp; + int rc; + + /* For a small delta, adjust the frequency so that the clock keeps + * ticking at 10^9 ticks per second (fine adjustment). + */ + if (delta < PTP_FREQ_ADJUST && delta > -PTP_FREQ_ADJUST) { + req = otx2_mbox_alloc_msg_ptp_op(mbox); + req->op = PTP_OP_ADJFINE; + req->scaled_ppm = delta; + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + } + dev->systime_tc.nsec += delta; + dev->rx_tstamp_tc.nsec += delta; + dev->tx_tstamp_tc.nsec += delta; + + return 0; +} + +int +otx2_nix_timesync_write_time(struct rte_eth_dev *eth_dev, + const struct timespec *ts) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + uint64_t ns; + + ns = rte_timespec_to_ns(ts); + /* Set the time counters to a new value.
*/ + dev->systime_tc.nsec = ns; + dev->rx_tstamp_tc.nsec = ns; + dev->tx_tstamp_tc.nsec = ns; + + return 0; +} + +int +otx2_nix_timesync_read_time(struct rte_eth_dev *eth_dev, struct timespec *ts) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct ptp_req *req; + struct ptp_rsp *rsp; + uint64_t ns; + int rc; + + req = otx2_mbox_alloc_msg_ptp_op(mbox); + req->op = PTP_OP_GET_CLOCK; + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + ns = rte_timecounter_update(&dev->systime_tc, rsp->clk); + *ts = rte_ns_to_timespec(ns); + + otx2_nix_dbg("PTP time read: %ld.%09ld", ts->tv_sec, ts->tv_nsec); + + return 0; +} From patchwork Sun Jun 2 15:24:09 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54107 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 8DA861BB32; Sun, 2 Jun 2019 17:26:24 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 63E661BAB6 for ; Sun, 2 Jun 2019 17:26:19 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FK4Ys020248; Sun, 2 Jun 2019 08:26:18 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=T/dnx3asGbNATPiQ1YkAfCjoH7QAOYHx78XmIPkX2wo=; b=TI1HhWTWn5rM4hmKb8MAwO1e1DTeF20X1hz/HROewLo0jM4ayTxpuyEnMTZv9yTYzAMi kkWE6n4ZuXD6sgOb7t9Fr6nBV+Amtsk8ukn+tVPs0bG+rfnTInIJMs1GmyFHOn+FqrvE VQaA3BsiHw9aFhRqnCvlbigYGDRqUTWWPxjBe44whq6+hvYHYM/9kbH29OGX77q05uLf +kxbFVooD79Fo0hgp/JsJ7h7NEPjKECs3DnZDonf5vmXbRTPNgnaLTZelifHZIxutUfX 6UOYgKiRU/B91xCsIs0Hduw1FmTgabVnICHWRMA4TVONXyuqg/Csxl78nyq2/RZHdlkC YQ== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk496s-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:26:18 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:26:17 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:26:17 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id CCB703F703F; Sun, 2 Jun 2019 08:26:15 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Vivek Sharma Date: Sun, 2 Jun 2019 20:54:09 +0530 Message-ID: <20190602152434.23996-34-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 33/58] net/octeontx2: introducing flow driver X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: 
List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Kiran Kumar K Introducing flow infra for octeontx2. This will be used to maintain rte_flow rules. Create, destroy, validate,query, flush, isolate flow operations will be supported. Signed-off-by: Kiran Kumar K Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/otx2_ethdev.h | 7 +- drivers/net/octeontx2/otx2_flow.h | 384 ++++++++++++++++++++++++++++ 2 files changed, 385 insertions(+), 6 deletions(-) create mode 100644 drivers/net/octeontx2/otx2_flow.h diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index ba6d1736e..1edc7da29 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -17,6 +17,7 @@ #include "otx2_common.h" #include "otx2_dev.h" +#include "otx2_flow.h" #include "otx2_irq.h" #include "otx2_mempool.h" #include "otx2_rx.h" @@ -154,12 +155,6 @@ struct otx2_eth_qconf { uint16_t nb_desc; }; -struct otx2_npc_flow_info { - uint16_t channel; /*rx channel */ - uint16_t flow_prealloc_size; - uint16_t flow_max_priority; -}; - struct otx2_fc_info { enum rte_eth_fc_mode mode; /**< Link flow control mode */ uint8_t rx_pause; diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h new file mode 100644 index 000000000..07d9e9fd6 --- /dev/null +++ b/drivers/net/octeontx2/otx2_flow.h @@ -0,0 +1,384 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#ifndef __OTX2_FLOW_H__ +#define __OTX2_FLOW_H__ + +#include + +#include +#include +#include + +#include "otx2_common.h" +#include "otx2_ethdev.h" +#include "otx2_mbox.h" + +struct otx2_eth_dev; + +int otx2_flow_init(struct otx2_eth_dev *hw); +int otx2_flow_fini(struct otx2_eth_dev *hw); +extern const struct rte_flow_ops otx2_flow_ops; + +enum { + OTX2_INTF_RX = 0, + OTX2_INTF_TX = 1, + OTX2_INTF_MAX = 2, +}; + +#define NPC_COUNTER_NONE (-1) +/* 32 bytes from LDATA_CFG & 32 bytes from FLAGS_CFG */ +#define NPC_MAX_EXTRACT_DATA_LEN (64) +#define NPC_LDATA_LFLAG_LEN (16) +#define NPC_MCAM_TOT_ENTRIES (4096) +#define NPC_MAX_KEY_NIBBLES (31) +/* Bit offsets */ +#define NPC_LAYER_KEYX_SZ (12) +#define NPC_PARSE_KEX_S_LA_OFFSET (28) +#define NPC_PARSE_KEX_S_LID_OFFSET(lid) \ + ((((lid) - NPC_LID_LA) * NPC_LAYER_KEYX_SZ) \ + + NPC_PARSE_KEX_S_LA_OFFSET) + + +/* supported flow actions flags */ +#define OTX2_FLOW_ACT_MARK (1 << 0) +#define OTX2_FLOW_ACT_FLAG (1 << 1) +#define OTX2_FLOW_ACT_DROP (1 << 2) +#define OTX2_FLOW_ACT_QUEUE (1 << 3) +#define OTX2_FLOW_ACT_RSS (1 << 4) +#define OTX2_FLOW_ACT_DUP (1 << 5) +#define OTX2_FLOW_ACT_SEC (1 << 6) +#define OTX2_FLOW_ACT_COUNT (1 << 7) + +/* terminating actions */ +#define OTX2_FLOW_ACT_TERM (OTX2_FLOW_ACT_DROP | \ + OTX2_FLOW_ACT_QUEUE | \ + OTX2_FLOW_ACT_RSS | \ + OTX2_FLOW_ACT_DUP | \ + OTX2_FLOW_ACT_SEC) + +/* This mark value indicates flag action */ +#define OTX2_FLOW_FLAG_VAL (0xffff) + +#define NIX_RX_ACT_MATCH_OFFSET (40) +#define NIX_RX_ACT_MATCH_MASK (0xFFFF) + +#define NIX_RSS_ACT_GRP_OFFSET (20) +#define NIX_RSS_ACT_ALG_OFFSET (56) +#define NIX_RSS_ACT_GRP_MASK (0xFFFFF) +#define NIX_RSS_ACT_ALG_MASK (0x1F) + +/* PMD-specific definition of the opaque struct rte_flow */ +#define OTX2_MAX_MCAM_WIDTH_DWORDS 7 + +enum npc_mcam_intf { + NPC_MCAM_RX, + NPC_MCAM_TX +}; + +struct npc_xtract_info { + /* Length in bytes of pkt data extracted. len = 0 + * indicates that extraction is disabled. 
+ */ + uint8_t len; + uint8_t hdr_off; /* Byte offset of proto hdr: extract_src */ + uint8_t key_off; /* Byte offset in MCAM key where data is placed */ + uint8_t enable; /* Extraction enabled or disabled */ +}; + +/* Information for a given {LAYER, LTYPE} */ +struct npc_lid_lt_xtract_info { + /* Info derived from parser configuration */ + uint16_t npc_proto; /* Network protocol identified */ + uint8_t valid_flags_mask; /* Flags applicable */ + uint8_t is_terminating:1; /* No more parsing */ + struct npc_xtract_info xtract[NPC_MAX_LD]; +}; + +union npc_kex_ldata_flags_cfg { + struct { + #if defined(__BIG_ENDIAN_BITFIELD) + uint64_t rvsd_62_1 : 61; + uint64_t lid : 3; + #else + uint64_t lid : 3; + uint64_t rvsd_62_1 : 61; + #endif + } s; + + uint64_t i; +}; + +typedef struct npc_lid_lt_xtract_info + otx2_dxcfg_t[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT]; +typedef struct npc_lid_lt_xtract_info + otx2_fxcfg_t[NPC_MAX_INTF][NPC_MAX_LD][NPC_MAX_LFL]; +typedef union npc_kex_ldata_flags_cfg otx2_ld_flags_t[NPC_MAX_LD]; + + +/* MBOX_MSG_NPC_GET_DATAX_CFG Response */ +struct npc_get_datax_cfg { + /* NPC_AF_KEX_LDATA(0..1)_FLAGS_CFG */ + union npc_kex_ldata_flags_cfg ld_flags[NPC_MAX_LD]; + /* Extract information indexed with [LID][LTYPE] */ + struct npc_lid_lt_xtract_info lid_lt_xtract[NPC_MAX_LID][NPC_MAX_LT]; + /* Flags based extract indexed with [LDATA][FLAGS_LOWER_NIBBLE] + * Fields flags_ena_ld0, flags_ena_ld1 in + * struct npc_lid_lt_xtract_info indicate if this is applicable + * for a given {LAYER, LTYPE} + */ + struct npc_xtract_info flag_xtract[NPC_MAX_LD][NPC_MAX_LT]; +}; + +struct otx2_mcam_ents_info { + /* Current max & min values of mcam index */ + uint32_t max_id; + uint32_t min_id; + uint32_t free_ent; + uint32_t live_ent; +}; + +struct rte_flow { + uint8_t nix_intf; + uint32_t mcam_id; + int32_t ctr_id; + uint32_t priority; + /* Contiguous match string */ + uint64_t mcam_data[OTX2_MAX_MCAM_WIDTH_DWORDS]; + uint64_t mcam_mask[OTX2_MAX_MCAM_WIDTH_DWORDS]; + uint64_t npc_action; + TAILQ_ENTRY(rte_flow) next; +}; + +TAILQ_HEAD(otx2_flow_list, rte_flow); + +/* Accessed from ethdev private - otx2_eth_dev */ +struct otx2_npc_flow_info { + rte_atomic32_t mark_actions; + uint32_t keyx_supp_nmask[NPC_MAX_INTF];/* nibble mask */ + uint32_t keyx_len[NPC_MAX_INTF]; /* per intf key len in bits */ + uint32_t datax_len[NPC_MAX_INTF]; /* per intf data len in bits */ + uint32_t keyw[NPC_MAX_INTF]; /* max key + data len bits */ + uint32_t mcam_entries; /* mcam entries supported */ + otx2_dxcfg_t prx_dxcfg; /* intf, lid, lt, extract */ + otx2_fxcfg_t prx_fxcfg; /* Flag extract */ + otx2_ld_flags_t prx_lfcfg; /* KEX LD_Flags CFG */ + /* mcam entry info per priority level: both free & in-use */ + struct otx2_mcam_ents_info *flow_entry_info; + /* Bitmap of free preallocated entries in ascending index & + * descending priority + */ + struct rte_bitmap **free_entries; + /* Bitmap of free preallocated entries in descending index & + * ascending priority + */ + struct rte_bitmap **free_entries_rev; + /* Bitmap of live entries in ascending index & descending priority */ + struct rte_bitmap **live_entries; + /* Bitmap of live entries in descending index & ascending priority */ + struct rte_bitmap **live_entries_rev; + /* Priority bucket wise tail queue of all rte_flow resources */ + struct otx2_flow_list *flow_list; + uint32_t rss_grps; /* rss groups supported */ + struct rte_bitmap *rss_grp_entries; + uint16_t channel; /*rx channel */ + uint16_t flow_prealloc_size; + uint16_t flow_max_priority; +}; + +struct 
otx2_parse_state { + struct otx2_npc_flow_info *npc; + const struct rte_flow_item *pattern; + const struct rte_flow_item *last_pattern; /* Temp usage */ + struct rte_flow_error *error; + struct rte_flow *flow; + uint8_t tunnel; + uint8_t terminate; + uint8_t layer_mask; + uint8_t lt[NPC_MAX_LID]; + uint8_t flags[NPC_MAX_LID]; + uint8_t *mcam_data; /* point to flow->mcam_data + key_len */ + uint8_t *mcam_mask; /* point to flow->mcam_mask + key_len */ +}; + +struct otx2_flow_item_info { + const void *def_mask; /* rte_flow default mask */ + void *hw_mask; /* hardware supported mask */ + int len; /* length of item */ + const void *spec; /* spec to use, NULL implies match any */ + const void *mask; /* mask to use */ +}; + +struct otx2_idev_kex_cfg { + struct npc_get_kex_cfg_rsp kex_cfg; + rte_atomic16_t kex_refcnt; +}; + +enum npc_kpu_parser_flag { + NPC_F_NA = 0, + NPC_F_PKI, + NPC_F_PKI_VLAN, + NPC_F_PKI_ETAG, + NPC_F_PKI_ITAG, + NPC_F_PKI_MPLS, + NPC_F_PKI_NSH, + NPC_F_ETYPE_UNK, + NPC_F_ETHER_VLAN, + NPC_F_ETHER_ETAG, + NPC_F_ETHER_ITAG, + NPC_F_ETHER_MPLS, + NPC_F_ETHER_NSH, + NPC_F_STAG_CTAG, + NPC_F_STAG_CTAG_UNK, + NPC_F_STAG_STAG_CTAG, + NPC_F_STAG_STAG_STAG, + NPC_F_QINQ_CTAG, + NPC_F_QINQ_CTAG_UNK, + NPC_F_QINQ_QINQ_CTAG, + NPC_F_QINQ_QINQ_QINQ, + NPC_F_BTAG_ITAG, + NPC_F_BTAG_ITAG_STAG, + NPC_F_BTAG_ITAG_CTAG, + NPC_F_BTAG_ITAG_UNK, + NPC_F_ETAG_CTAG, + NPC_F_ETAG_BTAG_ITAG, + NPC_F_ETAG_STAG, + NPC_F_ETAG_QINQ, + NPC_F_ETAG_ITAG, + NPC_F_ETAG_ITAG_STAG, + NPC_F_ETAG_ITAG_CTAG, + NPC_F_ETAG_ITAG_UNK, + NPC_F_ITAG_STAG_CTAG, + NPC_F_ITAG_STAG, + NPC_F_ITAG_CTAG, + NPC_F_MPLS_4_LABELS, + NPC_F_MPLS_3_LABELS, + NPC_F_MPLS_2_LABELS, + NPC_F_IP_HAS_OPTIONS, + NPC_F_IP_IP_IN_IP, + NPC_F_IP_6TO4, + NPC_F_IP_MPLS_IN_IP, + NPC_F_IP_UNK_PROTO, + NPC_F_IP_IP_IN_IP_HAS_OPTIONS, + NPC_F_IP_6TO4_HAS_OPTIONS, + NPC_F_IP_MPLS_IN_IP_HAS_OPTIONS, + NPC_F_IP_UNK_PROTO_HAS_OPTIONS, + NPC_F_IP6_HAS_EXT, + NPC_F_IP6_TUN_IP6, + NPC_F_IP6_MPLS_IN_IP, + NPC_F_TCP_HAS_OPTIONS, + NPC_F_TCP_HTTP, + NPC_F_TCP_HTTPS, + NPC_F_TCP_PPTP, + NPC_F_TCP_UNK_PORT, + NPC_F_TCP_HTTP_HAS_OPTIONS, + NPC_F_TCP_HTTPS_HAS_OPTIONS, + NPC_F_TCP_PPTP_HAS_OPTIONS, + NPC_F_TCP_UNK_PORT_HAS_OPTIONS, + NPC_F_UDP_VXLAN, + NPC_F_UDP_VXLAN_NOVNI, + NPC_F_UDP_VXLAN_NOVNI_NSH, + NPC_F_UDP_VXLANGPE, + NPC_F_UDP_VXLANGPE_NSH, + NPC_F_UDP_VXLANGPE_MPLS, + NPC_F_UDP_VXLANGPE_NOVNI, + NPC_F_UDP_VXLANGPE_NOVNI_NSH, + NPC_F_UDP_VXLANGPE_NOVNI_MPLS, + NPC_F_UDP_VXLANGPE_UNK, + NPC_F_UDP_VXLANGPE_NONP, + NPC_F_UDP_GTP_GTPC, + NPC_F_UDP_GTP_GTPU_G_PDU, + NPC_F_UDP_GTP_GTPU_UNK, + NPC_F_UDP_UNK_PORT, + NPC_F_UDP_GENEVE, + NPC_F_UDP_GENEVE_OAM, + NPC_F_UDP_GENEVE_CRI_OPT, + NPC_F_UDP_GENEVE_OAM_CRI_OPT, + NPC_F_GRE_NVGRE, + NPC_F_GRE_HAS_SRE, + NPC_F_GRE_HAS_CSUM, + NPC_F_GRE_HAS_KEY, + NPC_F_GRE_HAS_SEQ, + NPC_F_GRE_HAS_CSUM_KEY, + NPC_F_GRE_HAS_CSUM_SEQ, + NPC_F_GRE_HAS_KEY_SEQ, + NPC_F_GRE_HAS_CSUM_KEY_SEQ, + NPC_F_GRE_HAS_ROUTE, + NPC_F_GRE_UNK_PROTO, + NPC_F_GRE_VER1, + NPC_F_GRE_VER1_HAS_SEQ, + NPC_F_GRE_VER1_HAS_ACK, + NPC_F_GRE_VER1_HAS_SEQ_ACK, + NPC_F_GRE_VER1_UNK_PROTO, + NPC_F_TU_ETHER_UNK, + NPC_F_TU_ETHER_CTAG, + NPC_F_TU_ETHER_CTAG_UNK, + NPC_F_TU_ETHER_STAG_CTAG, + NPC_F_TU_ETHER_STAG_CTAG_UNK, + NPC_F_TU_ETHER_STAG, + NPC_F_TU_ETHER_STAG_UNK, + NPC_F_TU_ETHER_QINQ_CTAG, + NPC_F_TU_ETHER_QINQ_CTAG_UNK, + NPC_F_TU_ETHER_QINQ, + NPC_F_TU_ETHER_QINQ_UNK, + NPC_F_LAST /* has to be the last item */ +}; + + +int otx2_flow_mcam_free_counter(struct otx2_mbox *mbox, uint16_t ctr_id); + +int otx2_flow_mcam_read_counter(struct otx2_mbox *mbox, 
uint32_t ctr_id, + uint64_t *count); + +int otx2_flow_mcam_clear_counter(struct otx2_mbox *mbox, uint32_t ctr_id); + +int otx2_flow_mcam_free_entry(struct otx2_mbox *mbox, uint32_t entry); + +int otx2_flow_mcam_free_all_entries(struct otx2_mbox *mbox); + +int otx2_flow_update_parse_state(struct otx2_parse_state *pst, + struct otx2_flow_item_info *info, + int lid, int lt, uint8_t flags); + +int otx2_flow_parse_item_basic(const struct rte_flow_item *item, + struct otx2_flow_item_info *info, + struct rte_flow_error *error); + +void otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask); + +int otx2_flow_mcam_alloc_and_write(struct rte_flow *flow, + struct otx2_mbox *mbox, + struct otx2_parse_state *pst, + struct otx2_npc_flow_info *flow_info); + +void otx2_flow_get_hw_supp_mask(struct otx2_parse_state *pst, + struct otx2_flow_item_info *info, + int lid, int lt); + +const struct rte_flow_item * +otx2_flow_skip_void_and_any_items(const struct rte_flow_item *pattern); + +int otx2_flow_parse_lh(struct otx2_parse_state *pst); + +int otx2_flow_parse_lg(struct otx2_parse_state *pst); + +int otx2_flow_parse_lf(struct otx2_parse_state *pst); + +int otx2_flow_parse_le(struct otx2_parse_state *pst); + +int otx2_flow_parse_ld(struct otx2_parse_state *pst); + +int otx2_flow_parse_lc(struct otx2_parse_state *pst); + +int otx2_flow_parse_lb(struct otx2_parse_state *pst); + +int otx2_flow_parse_la(struct otx2_parse_state *pst); + +int otx2_flow_parse_actions(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + const struct rte_flow_action actions[], + struct rte_flow_error *error, + struct rte_flow *flow); + +#endif /* __OTX2_FLOW_H__ */ From patchwork Sun Jun 2 15:24:10 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54081 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id DB37C1BB42; Sun, 2 Jun 2019 17:26:28 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id AFEFC1B9A4 for ; Sun, 2 Jun 2019 17:26:22 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FKK4l020364; Sun, 2 Jun 2019 08:26:22 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=PB4f3J2hons1wojb3dpqGrFDiIi8QBRGUHGgsuAoudk=; b=Jmq3SPnmztwvf36wndV3aVktJWumpM8X41v3HEcu4q9VxMIprb3Tf9AS9XQqdCZvc5Az WoHEetUtQNd5yGzax8/G+/arjM0giJ35oMr/fKaYWCwoyyEn7ew4lYan2TGzdSeXq4x/ me9MCjVJ2CgOvJbI3pNmFsICpLT7B02ARaLUa8Hi4UnEar3Yf4UVtud9VzXIlVfvBj2n 6xIObByAeQIgovXb0eTNTWp+jefA3mm37nqMLki3xDiKivRkvZXn0AXQ7iam5spGzyGc 6XrqUvjo2uwnhK/lHGFOM57AfVahXEbteGMJm7ruXDKIS6Ljpi6VXYp2YlrDVEngWmnu 7g== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk4975-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:26:22 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:26:20 -0700 Received: from maili.marvell.com (10.93.176.43) by 
SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:26:20 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id C46433F703F; Sun, 2 Jun 2019 08:26:18 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Vivek Sharma Date: Sun, 2 Jun 2019 20:54:10 +0530 Message-ID: <20190602152434.23996-35-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 34/58] net/octeontx2: flow utility functions X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Kiran Kumar K First pass rte_flow utility functions for octeontx2. These will be used to communicate with AF driver. Signed-off-by: Kiran Kumar K Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_flow_utils.c | 369 ++++++++++++++++++++++++ 3 files changed, 371 insertions(+) create mode 100644 drivers/net/octeontx2/otx2_flow_utils.c diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index b1c8e4e52..7773643af 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -39,6 +39,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ otx2_lookup.c \ otx2_ethdev.c \ otx2_flow_ctrl.c \ + otx2_flow_utils.c \ otx2_ethdev_irq.c \ otx2_ethdev_ops.c \ otx2_ethdev_debug.c \ diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index 57d6c0a58..cd168c32f 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -12,6 +12,7 @@ sources = files( 'otx2_lookup.c', 'otx2_ethdev.c', 'otx2_flow_ctrl.c', + 'otx2_flow_utils.c', 'otx2_ethdev_irq.c', 'otx2_ethdev_ops.c', 'otx2_ethdev_debug.c', diff --git a/drivers/net/octeontx2/otx2_flow_utils.c b/drivers/net/octeontx2/otx2_flow_utils.c new file mode 100644 index 000000000..bf20d7319 --- /dev/null +++ b/drivers/net/octeontx2/otx2_flow_utils.c @@ -0,0 +1,369 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#include "otx2_ethdev.h" +#include "otx2_flow.h" + +int +otx2_flow_mcam_free_counter(struct otx2_mbox *mbox, uint16_t ctr_id) +{ + struct npc_mcam_oper_counter_req *req; + int rc; + + req = otx2_mbox_alloc_msg_npc_mcam_free_counter(mbox); + req->cntr = ctr_id; + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_get_rsp(mbox, 0, NULL); + + return rc; +} + +int +otx2_flow_mcam_read_counter(struct otx2_mbox *mbox, uint32_t ctr_id, + uint64_t *count) +{ + struct npc_mcam_oper_counter_req *req; + struct npc_mcam_oper_counter_rsp *rsp; + int rc; + + req = otx2_mbox_alloc_msg_npc_mcam_counter_stats(mbox); + req->cntr = ctr_id; + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp); + + *count = rsp->stat; + return rc; +} + +int +otx2_flow_mcam_clear_counter(struct otx2_mbox *mbox, uint32_t ctr_id) +{ + struct npc_mcam_oper_counter_req *req; + int rc; + + req = otx2_mbox_alloc_msg_npc_mcam_clear_counter(mbox); + req->cntr = ctr_id; + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_get_rsp(mbox, 0, NULL); + + return rc; +} + +int +otx2_flow_mcam_free_entry(struct otx2_mbox *mbox, uint32_t entry) +{ + struct npc_mcam_free_entry_req *req; + int rc; + + req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox); + req->entry = entry; + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_get_rsp(mbox, 0, NULL); + + return rc; +} + +int +otx2_flow_mcam_free_all_entries(struct otx2_mbox *mbox) +{ + struct npc_mcam_free_entry_req *req; + int rc; + + req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox); + req->all = 1; + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_get_rsp(mbox, 0, NULL); + + return rc; +} + +static void +flow_prep_mcam_ldata(uint8_t *ptr, const uint8_t *data, int len) +{ + int idx; + + for (idx = 0; idx < len; idx++) + ptr[idx] = data[len - 1 - idx]; +} + +static size_t +flow_check_copysz(size_t size, size_t len) +{ + if (len <= size) + return len; + + rte_panic("String op-overflow"); +} + +static inline int +flow_mem_is_zero(const void *mem, int len) +{ + const char *m = mem; + int i; + + for (i = 0; i < len; i++) { + if (m[i] != 0) + return 0; + } + return 1; +} + +void +otx2_flow_get_hw_supp_mask(struct otx2_parse_state *pst, + struct otx2_flow_item_info *info, int lid, int lt) +{ + struct npc_xtract_info *xinfo; + char *hw_mask = info->hw_mask; + int i, j; + int intf; + + intf = pst->flow->nix_intf; + xinfo = pst->npc->prx_dxcfg[intf][lid][lt].xtract; + memset(hw_mask, 0, info->len); + + for (i = 0; i < NPC_MAX_LD; i++) { + int max_off = xinfo[i].hdr_off + xinfo[i].len; + + if (xinfo[i].enable == 0) + continue; + + if (max_off > info->len) + max_off = info->len; + + for (j = xinfo[i].hdr_off; j < max_off; j++) + hw_mask[j] = 0xff; + } +} + +int +otx2_flow_update_parse_state(struct otx2_parse_state *pst, + struct otx2_flow_item_info *info, int lid, int lt, + uint8_t flags) +{ + uint8_t int_info_mask[NPC_MAX_EXTRACT_DATA_LEN]; + uint8_t int_info[NPC_MAX_EXTRACT_DATA_LEN]; + struct npc_lid_lt_xtract_info *xinfo; + int len = 0; + int intf; + int i; + + otx2_npc_dbg("Parse state function info mask total %s", + (const uint8_t *)info->mask); + + pst->layer_mask |= lid; + pst->lt[lid] = lt; + pst->flags[lid] = flags; + + intf = pst->flow->nix_intf; + xinfo = &pst->npc->prx_dxcfg[intf][lid][lt]; + otx2_npc_dbg("Is_terminating = %d", xinfo->is_terminating); + if (xinfo->is_terminating) + pst->terminate = 1; + + /* Need to check if flags are supported but in latest + * KPU profile, flags are used as enumeration! 
No way, + * it can be validated unless MBOX is changed to return + * set of valid values out of 2**8 possible values. + */ + if (info->spec == NULL) { /* Nothing to match */ + otx2_npc_dbg("Info spec NULL"); + goto done; + } + + /* Copy spec and mask into mcam match string, mask. + * Since both RTE FLOW and OTX2 MCAM use network-endianness + * for data, we are saved from nasty conversions. + */ + for (i = 0; i < NPC_MAX_LD; i++) { + struct npc_xtract_info *x; + int k, idx; + + x = &xinfo->xtract[i]; + len = x->len; + + if (x->enable == 0) + continue; + + otx2_npc_dbg("x->hdr_off = %d, len = %d, info->len = %d," + "x->key_off = %d", x->hdr_off, len, info->len, + x->key_off); + + if (x->hdr_off + len > info->len) + len = info->len - x->hdr_off; + + /* Check for over-write of previous layer */ + if (!flow_mem_is_zero(pst->mcam_mask + x->key_off, + len)) { + /* Cannot support this data match */ + rte_flow_error_set(pst->error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, + pst->pattern, + "Extraction unsupported"); + return -rte_errno; + } + + len = flow_check_copysz((OTX2_MAX_MCAM_WIDTH_DWORDS * 8) + - x->key_off, + len); + /* Need to reverse complete structure so that dest addr is at + * MSB so as to program the MCAM using mcam_data & mcam_mask + * arrays + */ + flow_prep_mcam_ldata(int_info, + (const uint8_t *)info->spec + x->hdr_off, + x->len); + flow_prep_mcam_ldata(int_info_mask, + (const uint8_t *)info->mask + x->hdr_off, + x->len); + + otx2_npc_dbg("Spec: "); + for (k = 0; k < info->len; k++) + otx2_npc_dbg("0x%.2x ", + ((const uint8_t *)info->spec)[k]); + + otx2_npc_dbg("Int_info: "); + for (k = 0; k < info->len; k++) + otx2_npc_dbg("0x%.2x ", int_info[k]); + + memcpy(pst->mcam_mask + x->key_off, int_info_mask, len); + memcpy(pst->mcam_data + x->key_off, int_info, len); + + otx2_npc_dbg("Parse state mcam data & mask"); + for (idx = 0; idx < len ; idx++) + otx2_npc_dbg("data[%d]: 0x%x, mask[%d]: 0x%x", idx, + *(pst->mcam_data + idx + x->key_off), idx, + *(pst->mcam_mask + idx + x->key_off)); + } + +done: + /* Next pattern to parse by subsequent layers */ + pst->pattern++; + return 0; +} + +static inline int +flow_range_is_valid(const char *spec, const char *last, const char *mask, + int len) +{ + /* Mask must be zero or equal to spec as we do not support + * non-contiguous ranges. + */ + while (len--) { + if (last[len] && + (spec[len] & mask[len]) != (last[len] & mask[len])) + return 0; /* False */ + } + return 1; +} + + +static inline int +flow_mask_is_supported(const char *mask, const char *hw_mask, int len) +{ + /* + * If no hw_mask, assume nothing is supported. + * mask is never NULL + */ + if (hw_mask == NULL) + return flow_mem_is_zero(mask, len); + + while (len--) { + if ((mask[len] | hw_mask[len]) != hw_mask[len]) + return 0; /* False */ + } + return 1; +} + +int +otx2_flow_parse_item_basic(const struct rte_flow_item *item, + struct otx2_flow_item_info *info, + struct rte_flow_error *error) +{ + /* Item must not be NULL */ + if (item == NULL) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "Item is NULL"); + return -rte_errno; + } + /* If spec is NULL, both mask and last must be NULL, this + * makes it to match ANY value (eq to mask = 0). 
+ * Setting either mask or last without spec is an error + */ + if (item->spec == NULL) { + if (item->last == NULL && item->mask == NULL) { + info->spec = NULL; + return 0; + } + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "mask or last set without spec"); + return -rte_errno; + } + + /* We have valid spec */ + info->spec = item->spec; + + /* If mask is not set, use default mask, err if default mask is + * also NULL. + */ + if (item->mask == NULL) { + otx2_npc_dbg("Item mask null, using default mask"); + if (info->def_mask == NULL) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "No mask or default mask given"); + return -rte_errno; + } + info->mask = info->def_mask; + } else { + info->mask = item->mask; + } + + /* mask specified must be subset of hw supported mask + * mask | hw_mask == hw_mask + */ + if (!flow_mask_is_supported(info->mask, info->hw_mask, info->len)) { + rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, + item, "Unsupported field in the mask"); + return -rte_errno; + } + + /* Now we have spec and mask. OTX2 does not support non-contiguous + * range. We should have either: + * - spec & mask == last & mask or, + * - last == 0 or, + * - last == NULL + */ + if (item->last != NULL && !flow_mem_is_zero(item->last, info->len)) { + if (!flow_range_is_valid(item->spec, item->last, info->mask, + info->len)) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "Unsupported range for match"); + return -rte_errno; + } + } + + return 0; +} + +void +otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask) +{ + uint64_t cdata[2] = {0ULL, 0ULL}, nibble; + int i, j = 0; + + for (i = 0; i < NPC_MAX_KEY_NIBBLES; i++) { + if (nibble_mask & (1 << i)) { + nibble = (data[i / 16] >> ((i & 0xf) * 4)) & 0xf; + cdata[j / 16] |= (nibble << ((j & 0xf) * 4)); + j += 1; + } + } + + data[0] = cdata[0]; + data[1] = cdata[1]; +} + From patchwork Sun Jun 2 15:24:11 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54108 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id BF9541BB4D; Sun, 2 Jun 2019 17:26:32 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id F2CEC1BB3B for ; Sun, 2 Jun 2019 17:26:24 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FJxgx021326; Sun, 2 Jun 2019 08:26:24 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=Sbnve039aroqaAUyC/xE0cLXyt4ckc2C9yd9FCbDBZ0=; b=XsG9nQRiMik5mFI1s6fjQQj/E01kkyAD0+WpltA2g4eOtY5OFmrIZrdiWGiHWR/3+plA vDMOojW4OR5s/+XFjA9dyt6BIRP4NLEiBX/YOXIvvM78DyMH25UZawOUFHpLqxOUARv5 wTZTgGVaIi0WzKuVSboJemBvI4u2t/+udrOWfL2POxgIAe9x7N7IPJXn70t+8YjY9AmJ YZggn5JgPKAOYYtk8YOmxNqCQTIjtr75HEnG6YhCvam5OCx+O2VAZJhwesS0V7mWr2lw bNa2tYZyLemUNY7q5d/vJNDJomqVvpTc+DfJVNheZSgv9Ybt33cKv5b5PJsPwK3mvm+h 5Q== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0a-0016f401.pphosted.com with ESMTP id 2supqkvqju-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 
02 Jun 2019 08:26:24 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:26:23 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:26:23 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 95C6F3F703F; Sun, 2 Jun 2019 08:26:21 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Vivek Sharma Date: Sun, 2 Jun 2019 20:54:11 +0530 Message-ID: <20190602152434.23996-36-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 35/58] net/octeontx2: flow mailbox utility X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Kiran Kumar K Adding mailbox utility functions for rte_flow. These will be used to alloc, reserve and write the entries to the device on request. Signed-off-by: Kiran Kumar K Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/otx2_flow.h | 6 +- drivers/net/octeontx2/otx2_flow_utils.c | 259 ++++++++++++++++++++++++ 2 files changed, 264 insertions(+), 1 deletion(-) diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h index 07d9e9fd6..04c5e487f 100644 --- a/drivers/net/octeontx2/otx2_flow.h +++ b/drivers/net/octeontx2/otx2_flow.h @@ -380,5 +380,9 @@ int otx2_flow_parse_actions(struct rte_eth_dev *dev, const struct rte_flow_action actions[], struct rte_flow_error *error, struct rte_flow *flow); - +int +flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow, + struct otx2_npc_flow_info *flow_info, + struct npc_mcam_alloc_entry_rsp *rsp, + int req_prio); #endif /* __OTX2_FLOW_H__ */ diff --git a/drivers/net/octeontx2/otx2_flow_utils.c b/drivers/net/octeontx2/otx2_flow_utils.c index bf20d7319..288f5776e 100644 --- a/drivers/net/octeontx2/otx2_flow_utils.c +++ b/drivers/net/octeontx2/otx2_flow_utils.c @@ -367,3 +367,262 @@ otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask) data[1] = cdata[1]; } +static int +flow_first_set_bit(uint64_t slab) +{ + int num = 0; + + if ((slab & 0xffffffff) == 0) { + num += 32; + slab >>= 32; + } + if ((slab & 0xffff) == 0) { + num += 16; + slab >>= 16; + } + if ((slab & 0xff) == 0) { + num += 8; + slab >>= 8; + } + if ((slab & 0xf) == 0) { + num += 4; + slab >>= 4; + } + if ((slab & 0x3) == 0) { + num += 2; + slab >>= 2; + } + if ((slab & 0x1) == 0) + num += 1; + + return num; +} + +static int +flow_shift_lv_ent(struct otx2_mbox *mbox, struct rte_flow *flow, + struct otx2_npc_flow_info *flow_info, + uint32_t old_ent, uint32_t new_ent) +{ + struct npc_mcam_shift_entry_req *req; + struct npc_mcam_shift_entry_rsp *rsp; + struct otx2_flow_list *list; + struct rte_flow *flow_iter; + int rc = 0; + + otx2_npc_dbg("Old ent:%u new ent:%u priority:%u", old_ent, new_ent, + flow->priority); + + list = &flow_info->flow_list[flow->priority]; + + /* Old entry is disabled & it's contents are moved to new_entry, + * new entry is enabled 
finally. + */ + req = otx2_mbox_alloc_msg_npc_mcam_shift_entry(mbox); + req->curr_entry[0] = old_ent; + req->new_entry[0] = new_ent; + req->shift_count = 1; + + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp); + if (rc) + return rc; + + /* Remove old node from list */ + TAILQ_FOREACH(flow_iter, list, next) { + if (flow_iter->mcam_id == old_ent) + TAILQ_REMOVE(list, flow_iter, next); + } + + /* Insert node with new mcam id at right place */ + TAILQ_FOREACH(flow_iter, list, next) { + if (flow_iter->mcam_id > new_ent) + TAILQ_INSERT_BEFORE(flow_iter, flow, next); + } + return rc; +} + +/* Exchange all required entries with a given priority level */ +static int +flow_shift_ent(struct otx2_mbox *mbox, struct rte_flow *flow, + struct otx2_npc_flow_info *flow_info, + struct npc_mcam_alloc_entry_rsp *rsp, int dir, int prio_lvl) +{ + struct rte_bitmap *fr_bmp, *fr_bmp_rev, *lv_bmp, *lv_bmp_rev, *bmp; + uint32_t e_fr = 0, e_lv = 0, e, e_id = 0, mcam_entries; + uint64_t fr_bit_pos = 0, lv_bit_pos = 0, bit_pos = 0; + /* Bit position within the slab */ + uint32_t sl_fr_bit_off = 0, sl_lv_bit_off = 0; + /* Overall bit position of the start of slab */ + /* free & live entry index */ + int rc_fr = 0, rc_lv = 0, rc = 0, idx = 0; + struct otx2_mcam_ents_info *ent_info; + /* free & live bitmap slab */ + uint64_t sl_fr = 0, sl_lv = 0, *sl; + + fr_bmp = flow_info->free_entries[prio_lvl]; + fr_bmp_rev = flow_info->free_entries_rev[prio_lvl]; + lv_bmp = flow_info->live_entries[prio_lvl]; + lv_bmp_rev = flow_info->live_entries_rev[prio_lvl]; + ent_info = &flow_info->flow_entry_info[prio_lvl]; + mcam_entries = flow_info->mcam_entries; + + + /* New entries allocated are always contiguous, but older entries + * already in free/live bitmap can be non-contiguous: so return + * shifted entries should be in non-contiguous format. + */ + while (idx <= rsp->count) { + if (!sl_fr && !sl_lv) { + /* Lower index elements to be exchanged */ + if (dir < 0) { + rc_fr = rte_bitmap_scan(fr_bmp, &e_fr, &sl_fr); + rc_lv = rte_bitmap_scan(lv_bmp, &e_lv, &sl_lv); + otx2_npc_dbg("Fwd slab rc fr %u rc lv %u " + "e_fr %u e_lv %u", rc_fr, rc_lv, + e_fr, e_lv); + } else { + rc_fr = rte_bitmap_scan(fr_bmp_rev, + &sl_fr_bit_off, + &sl_fr); + rc_lv = rte_bitmap_scan(lv_bmp_rev, + &sl_lv_bit_off, + &sl_lv); + + otx2_npc_dbg("Rev slab rc fr %u rc lv %u " + "e_fr %u e_lv %u", rc_fr, rc_lv, + e_fr, e_lv); + } + } + + if (rc_fr) { + fr_bit_pos = flow_first_set_bit(sl_fr); + e_fr = sl_fr_bit_off + fr_bit_pos; + otx2_npc_dbg("Fr_bit_pos 0x%" PRIx64, fr_bit_pos); + } else { + e_fr = ~(0); + } + + if (rc_lv) { + lv_bit_pos = flow_first_set_bit(sl_lv); + e_lv = sl_lv_bit_off + lv_bit_pos; + otx2_npc_dbg("Lv_bit_pos 0x%" PRIx64, lv_bit_pos); + } else { + e_lv = ~(0); + } + + /* First entry is from free_bmap */ + if (e_fr < e_lv) { + bmp = fr_bmp; + e = e_fr; + sl = &sl_fr; + bit_pos = fr_bit_pos; + if (dir > 0) + e_id = mcam_entries - e - 1; + else + e_id = e; + otx2_npc_dbg("Fr e %u e_id %u", e, e_id); + } else { + bmp = lv_bmp; + e = e_lv; + sl = &sl_lv; + bit_pos = lv_bit_pos; + if (dir > 0) + e_id = mcam_entries - e - 1; + else + e_id = e; + + otx2_npc_dbg("Lv e %u e_id %u", e, e_id); + if (idx < rsp->count) + rc = + flow_shift_lv_ent(mbox, flow, + flow_info, e_id, + rsp->entry + idx); + } + + rte_bitmap_clear(bmp, e); + rte_bitmap_set(bmp, rsp->entry + idx); + /* Update entry list, use non-contiguous + * list now. 
+ */ + rsp->entry_list[idx] = e_id; + *sl &= ~(1 << bit_pos); + + /* Update min & max entry identifiers in current + * priority level. + */ + if (dir < 0) { + ent_info->max_id = rsp->entry + idx; + ent_info->min_id = e_id; + } else { + ent_info->max_id = e_id; + ent_info->min_id = rsp->entry; + } + + idx++; + } + return rc; +} + +/* Validate if newly allocated entries lie in the correct priority zone + * since NPC_MCAM_LOWER_PRIO & NPC_MCAM_HIGHER_PRIO don't ensure zone accuracy. + * If not properly aligned, shift entries to do so + */ +int +flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow, + struct otx2_npc_flow_info *flow_info, + struct npc_mcam_alloc_entry_rsp *rsp, + int req_prio) +{ + int prio_idx = 0, rc = 0, needs_shift = 0, idx, prio = flow->priority; + struct otx2_mcam_ents_info *info = flow_info->flow_entry_info; + int dir = (req_prio == NPC_MCAM_HIGHER_PRIO) ? 1 : -1; + uint32_t tot_ent = 0; + + otx2_npc_dbg("Dir %d, priority = %d", dir, prio); + + if (dir < 0) + prio_idx = flow_info->flow_max_priority - 1; + + /* Only live entries needs to be shifted, free entries can just be + * moved by bits manipulation. + */ + + /* For dir = -1(NPC_MCAM_LOWER_PRIO), when shifting, + * NPC_MAX_PREALLOC_ENT are exchanged with adjoining higher priority + * level entries(lower indexes). + * + * For dir = +1(NPC_MCAM_HIGHER_PRIO), during shift, + * NPC_MAX_PREALLOC_ENT are exchanged with adjoining lower priority + * level entries(higher indexes) with highest indexes. + */ + do { + tot_ent = info[prio_idx].free_ent + info[prio_idx].live_ent; + + if (dir < 0 && prio_idx != prio && + rsp->entry > info[prio_idx].max_id && tot_ent) { + otx2_npc_dbg("Rsp entry %u prio idx %u " + "max id %u", rsp->entry, prio_idx, + info[prio_idx].max_id); + + needs_shift = 1; + } else if ((dir > 0) && (prio_idx != prio) && + (rsp->entry < info[prio_idx].min_id) && tot_ent) { + otx2_npc_dbg("Rsp entry %u prio idx %u " + "min id %u", rsp->entry, prio_idx, + info[prio_idx].min_id); + needs_shift = 1; + } + + otx2_npc_dbg("Needs_shift = %d", needs_shift); + if (needs_shift) { + needs_shift = 0; + rc = flow_shift_ent(mbox, flow, flow_info, rsp, dir, + prio_idx); + } else { + for (idx = 0; idx < rsp->count; idx++) + rsp->entry_list[idx] = rsp->entry + idx; + } + } while ((prio_idx != prio) && (prio_idx += dir)); + + return rc; +} From patchwork Sun Jun 2 15:24:12 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54109 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id BBDAB1BB72; Sun, 2 Jun 2019 17:26:34 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 9B5E61BB37 for ; Sun, 2 Jun 2019 17:26:28 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FK4Yu020248; Sun, 2 Jun 2019 08:26:28 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=j4XGr9z6+AJtLC9TFFoewQaDJlNAiX237ioOdoaEmWI=; b=xoNgMoyIicV7CAI4pxdLT6e60tUixGAnLX/d1scYGiHDZEl4UL0TY/XhpM+2RyheUjkU 
ZzafvgHdBe7UupL6Rno1L3oQ5kQ9jE1Nq/yN6GJqiztXN7LuYwTFoVGlkKrFjuvueNv3 iaMPcVy/DxFKTD8+RCrxnhvPX/8Y/fVrTmqGHBbg0aGaUlr0mvDqlQDtcm+A/Li6iUXq w1U0qLxOAiS/M52SdAQY1QZlZqhNuIuespi1ARjmO8+GH7Uo12Jow2hQw1hpHuNpu2Xt iNoySKSMGtlLrL31obC5oBjxowHXiDGSea8lRIkvM2gf0XXMjtKVLmDZB96qHjtRpnKL Bg== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk497j-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:26:27 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:26:26 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:26:26 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id A94D43F703F; Sun, 2 Jun 2019 08:26:24 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Vivek Sharma Date: Sun, 2 Jun 2019 20:54:12 +0530 Message-ID: <20190602152434.23996-37-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 36/58] net/octeontx2: add flow MCAM utility functions X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Kiran Kumar K Adding MCAM utility functions to alloc and write the entries. These will be used to arrange the flow rules based on priority. 
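For context, a rough application-side sketch of the contract these helpers back (the helper name, port and queue numbers are illustrative, not part of this driver): a rule created with a numerically lower rte_flow priority must take precedence, which the PMD preserves by keeping it at a lower MCAM index and shifting live entries when an allocation lands in the wrong priority zone.

#include <rte_flow.h>

static struct rte_flow *
add_rule(uint16_t port, uint32_t prio, uint16_t queue,
	 const struct rte_flow_item pattern[])
{
	struct rte_flow_attr attr = { .priority = prio, .ingress = 1 };
	struct rte_flow_action_queue q = { .index = queue };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &q },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;

	/* A prio 0 rule must match before a prio 1 rule regardless
	 * of creation order; the MCAM utilities below restore that
	 * ordering when a fresh allocation lands in the wrong zone.
	 */
	return rte_flow_create(port, &attr, pattern, actions, &err);
}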
Signed-off-by: Kiran Kumar K Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/otx2_flow.h | 6 +- drivers/net/octeontx2/otx2_flow_utils.c | 258 +++++++++++++++++++++++- 2 files changed, 258 insertions(+), 6 deletions(-) diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h index 04c5e487f..07d9e9fd6 100644 --- a/drivers/net/octeontx2/otx2_flow.h +++ b/drivers/net/octeontx2/otx2_flow.h @@ -380,9 +380,5 @@ int otx2_flow_parse_actions(struct rte_eth_dev *dev, const struct rte_flow_action actions[], struct rte_flow_error *error, struct rte_flow *flow); -int -flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow, - struct otx2_npc_flow_info *flow_info, - struct npc_mcam_alloc_entry_rsp *rsp, - int req_prio); + #endif /* __OTX2_FLOW_H__ */ diff --git a/drivers/net/octeontx2/otx2_flow_utils.c b/drivers/net/octeontx2/otx2_flow_utils.c index 288f5776e..1dd57cc0f 100644 --- a/drivers/net/octeontx2/otx2_flow_utils.c +++ b/drivers/net/octeontx2/otx2_flow_utils.c @@ -5,6 +5,22 @@ #include "otx2_ethdev.h" #include "otx2_flow.h" +static int +flow_mcam_alloc_counter(struct otx2_mbox *mbox, uint16_t *ctr) +{ + struct npc_mcam_alloc_counter_req *req; + struct npc_mcam_alloc_counter_rsp *rsp; + int rc; + + req = otx2_mbox_alloc_msg_npc_mcam_alloc_counter(mbox); + req->count = 1; + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp); + + *ctr = rsp->cntr_list[0]; + return rc; +} + int otx2_flow_mcam_free_counter(struct otx2_mbox *mbox, uint16_t ctr_id) { @@ -567,7 +583,7 @@ flow_shift_ent(struct otx2_mbox *mbox, struct rte_flow *flow, * since NPC_MCAM_LOWER_PRIO & NPC_MCAM_HIGHER_PRIO don't ensure zone accuracy. * If not properly aligned, shift entries to do so */ -int +static int flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow, struct otx2_npc_flow_info *flow_info, struct npc_mcam_alloc_entry_rsp *rsp, @@ -626,3 +642,243 @@ flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow, return rc; } + +static int +flow_find_ref_entry(struct otx2_npc_flow_info *flow_info, int *prio, + int prio_lvl) +{ + struct otx2_mcam_ents_info *info = flow_info->flow_entry_info; + int step = 1; + + while (step < flow_info->flow_max_priority) { + if (((prio_lvl + step) < flow_info->flow_max_priority) && + info[prio_lvl + step].live_ent) { + *prio = NPC_MCAM_HIGHER_PRIO; + return info[prio_lvl + step].min_id; + } + + if (((prio_lvl - step) >= 0) && + info[prio_lvl - step].live_ent) { + otx2_npc_dbg("Prio_lvl %u live %u", prio_lvl - step, + info[prio_lvl - step].live_ent); + *prio = NPC_MCAM_LOWER_PRIO; + return info[prio_lvl - step].max_id; + } + step++; + } + *prio = NPC_MCAM_ANY_PRIO; + return 0; +} + +static int +flow_fill_entry_cache(struct otx2_mbox *mbox, struct rte_flow *flow, + struct otx2_npc_flow_info *flow_info, uint32_t *free_ent) +{ + struct rte_bitmap *free_bmp, *free_bmp_rev, *live_bmp, *live_bmp_rev; + struct npc_mcam_alloc_entry_rsp rsp_local; + struct npc_mcam_alloc_entry_rsp *rsp_cmd; + struct npc_mcam_alloc_entry_req *req; + struct npc_mcam_alloc_entry_rsp *rsp; + struct otx2_mcam_ents_info *info; + uint16_t ref_ent, idx; + int rc, prio; + + info = &flow_info->flow_entry_info[flow->priority]; + free_bmp = flow_info->free_entries[flow->priority]; + free_bmp_rev = flow_info->free_entries_rev[flow->priority]; + live_bmp = flow_info->live_entries[flow->priority]; + live_bmp_rev = flow_info->live_entries_rev[flow->priority]; + + ref_ent = flow_find_ref_entry(flow_info, &prio, 
flow->priority); + + req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(mbox); + req->contig = 1; + req->count = flow_info->flow_prealloc_size; + req->priority = prio; + req->ref_entry = ref_ent; + + otx2_npc_dbg("Fill cache ref entry %u prio %u", ref_ent, prio); + + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp_cmd); + if (rc) + return rc; + + rsp = &rsp_local; + memcpy(rsp, rsp_cmd, sizeof(*rsp)); + + otx2_npc_dbg("Alloc entry %u count %u , prio = %d", rsp->entry, + rsp->count, prio); + + /* Non-first ent cache fill */ + if (prio != NPC_MCAM_ANY_PRIO) { + flow_validate_and_shift_prio_ent(mbox, flow, flow_info, rsp, + prio); + } else { + /* Copy into response entry list */ + for (idx = 0; idx < rsp->count; idx++) + rsp->entry_list[idx] = rsp->entry + idx; + } + + otx2_npc_dbg("Fill entry cache rsp count %u", rsp->count); + /* Update free entries, reverse free entries list, + * min & max entry ids. + */ + for (idx = 0; idx < rsp->count; idx++) { + if (unlikely(rsp->entry_list[idx] < info->min_id)) + info->min_id = rsp->entry_list[idx]; + + if (unlikely(rsp->entry_list[idx] > info->max_id)) + info->max_id = rsp->entry_list[idx]; + + /* Skip entry to be returned, not to be part of free + * list. + */ + if (prio == NPC_MCAM_HIGHER_PRIO) { + if (unlikely(idx == (rsp->count - 1))) { + *free_ent = rsp->entry_list[idx]; + continue; + } + } else { + if (unlikely(!idx)) { + *free_ent = rsp->entry_list[idx]; + continue; + } + } + info->free_ent++; + rte_bitmap_set(free_bmp, rsp->entry_list[idx]); + rte_bitmap_set(free_bmp_rev, flow_info->mcam_entries - + rsp->entry_list[idx] - 1); + + otx2_npc_dbg("Final rsp entry %u rsp entry rev %u", + rsp->entry_list[idx], + flow_info->mcam_entries - rsp->entry_list[idx] - 1); + } + + otx2_npc_dbg("Cache free entry %u, rev = %u", *free_ent, + flow_info->mcam_entries - *free_ent - 1); + info->live_ent++; + rte_bitmap_set(live_bmp, *free_ent); + rte_bitmap_set(live_bmp_rev, flow_info->mcam_entries - *free_ent - 1); + + return 0; +} + +static int +flow_check_preallocated_entry_cache(struct otx2_mbox *mbox, + struct rte_flow *flow, + struct otx2_npc_flow_info *flow_info) +{ + struct rte_bitmap *free, *free_rev, *live, *live_rev; + uint32_t pos = 0, free_ent = 0, mcam_entries; + struct otx2_mcam_ents_info *info; + uint64_t slab = 0; + int rc; + + otx2_npc_dbg("Flow priority %u", flow->priority); + + info = &flow_info->flow_entry_info[flow->priority]; + + free_rev = flow_info->free_entries_rev[flow->priority]; + free = flow_info->free_entries[flow->priority]; + live_rev = flow_info->live_entries_rev[flow->priority]; + live = flow_info->live_entries[flow->priority]; + mcam_entries = flow_info->mcam_entries; + + if (info->free_ent) { + rc = rte_bitmap_scan(free, &pos, &slab); + if (rc) { + /* Get free_ent from free entry bitmap */ + free_ent = pos + __builtin_ctzll(slab); + otx2_npc_dbg("Allocated from cache entry %u", free_ent); + /* Remove from free bitmaps and add to live ones */ + rte_bitmap_clear(free, free_ent); + rte_bitmap_set(live, free_ent); + rte_bitmap_clear(free_rev, + mcam_entries - free_ent - 1); + rte_bitmap_set(live_rev, + mcam_entries - free_ent - 1); + + info->free_ent--; + info->live_ent++; + return free_ent; + } + + otx2_npc_dbg("No free entry:its a mess"); + return -1; + } + + rc = flow_fill_entry_cache(mbox, flow, flow_info, &free_ent); + if (rc) + return rc; + + return free_ent; +} + +int +otx2_flow_mcam_alloc_and_write(struct rte_flow *flow, struct otx2_mbox *mbox, + __rte_unused struct otx2_parse_state *pst, + struct 
otx2_npc_flow_info *flow_info) +{ + int use_ctr = (flow->ctr_id == NPC_COUNTER_NONE ? 0 : 1); + struct npc_mcam_write_entry_req *req; + struct mbox_msghdr *rsp; + uint16_t ctr = ~(0); + int rc, idx; + int entry; + + if (use_ctr) { + rc = flow_mcam_alloc_counter(mbox, &ctr); + if (rc) + return rc; + } + + entry = flow_check_preallocated_entry_cache(mbox, flow, flow_info); + if (entry < 0) { + otx2_err("Prealloc failed"); + otx2_flow_mcam_free_counter(mbox, ctr); + return NPC_MCAM_ALLOC_FAILED; + } + req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox); + req->set_cntr = use_ctr; + req->cntr = ctr; + req->entry = entry; + otx2_npc_dbg("Alloc & write entry %u", entry); + + req->intf = + (flow->nix_intf == OTX2_INTF_RX) ? NPC_MCAM_RX : NPC_MCAM_TX; + req->enable_entry = 1; + req->entry_data.action = flow->npc_action; + + /* + * DPDK sets vtag action on per interface basis, not + * per flow basis. It is a matter of how we decide to support + * this pmd specific behavior. There are two ways: + * 1. Inherit the vtag action from the one configured + * for this interface. This can be read from the + * vtag_action configured for default mcam entry of + * this pf_func. + * 2. Do not support vtag action with rte_flow. + * + * Second approach is used now. + */ + req->entry_data.vtag_action = 0ULL; + + for (idx = 0; idx < OTX2_MAX_MCAM_WIDTH_DWORDS; idx++) { + req->entry_data.kw[idx] = flow->mcam_data[idx]; + req->entry_data.kw_mask[idx] = flow->mcam_mask[idx]; + } + + req->entry_data.kw[0] |= flow_info->channel; + req->entry_data.kw_mask[0] |= (BIT_ULL(12) - 1); + + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp); + if (rc != 0) + return rc; + + flow->mcam_id = entry; + if (use_ctr) + flow->ctr_id = ctr; + return 0; +} From patchwork Sun Jun 2 15:24:13 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54110 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 7FFBC1BB5C; Sun, 2 Jun 2019 17:26:36 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 2F6A61B996 for ; Sun, 2 Jun 2019 17:26:31 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FK7HA020263; Sun, 2 Jun 2019 08:26:30 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=pbjNelSzR3nMu73qoY3Ne5pkZ6Jgs9LIdf4V5MZPmLw=; b=i9dKKy21/pGswet6OMog8xEQRCbSGGb7IVVwRzFm77VaodBAiHP3z0JtB/LdNEX4cMVY 9Uv0H2336SpbiU8sZByIh9vD+l0XRT1I8Y2SdA+qRg9Er7Y9gXNqELnTEabFTyRMlKv6 kd/pO505+FdIqWwNmhxRFqcRrlRei7Shi4M5mww+0aNl4Fa3wDX8oXVbUiAaSO29l5mR 3M/NUrKN1tC4shCz4alS3wjvDNXlhemIZm3M6jwKLZSMsE6GagexZM9BTL6BI/nm4jD6 6ezT9pjeeC1SI76sgMaDgh7AnJF1p49431074m3Z3cg/aBVs9+X3w1pmELzj60YDEmWD 5Q== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk497p-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:26:30 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; 
Sun, 2 Jun 2019 08:26:29 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:26:29 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id A1B9E3F703F; Sun, 2 Jun 2019 08:26:27 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Vivek Sharma Date: Sun, 2 Jun 2019 20:54:13 +0530 Message-ID: <20190602152434.23996-38-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 37/58] net/octeontx2: add flow parsing for outer layers X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Kiran Kumar K Adding functionality to parse outer layers from ld to lh. These will be used parse outer layers L2, L3, L4 and tunnel types. Signed-off-by: Kiran Kumar K Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_flow_parse.c | 463 ++++++++++++++++++++++++ 3 files changed, 465 insertions(+) create mode 100644 drivers/net/octeontx2/otx2_flow_parse.c diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index 7773643af..f38901b89 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -39,6 +39,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ otx2_lookup.c \ otx2_ethdev.c \ otx2_flow_ctrl.c \ + otx2_flow_parse.c \ otx2_flow_utils.c \ otx2_ethdev_irq.c \ otx2_ethdev_ops.c \ diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index cd168c32f..cbab77f7b 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -12,6 +12,7 @@ sources = files( 'otx2_lookup.c', 'otx2_ethdev.c', 'otx2_flow_ctrl.c', + 'otx2_flow_parse.c', 'otx2_flow_utils.c', 'otx2_ethdev_irq.c', 'otx2_ethdev_ops.c', diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c new file mode 100644 index 000000000..2d0fa439a --- /dev/null +++ b/drivers/net/octeontx2/otx2_flow_parse.c @@ -0,0 +1,463 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#include "otx2_ethdev.h" +#include "otx2_flow.h" + +const struct rte_flow_item * +otx2_flow_skip_void_and_any_items(const struct rte_flow_item *pattern) +{ + while ((pattern->type == RTE_FLOW_ITEM_TYPE_VOID) || + (pattern->type == RTE_FLOW_ITEM_TYPE_ANY)) + pattern++; + + return pattern; +} + +int +otx2_flow_parse_lh(struct otx2_parse_state *pst __rte_unused) +{ + return 0; +} + +/* + * Tunnel+ESP, Tunnel+ICMP4/6, Tunnel+TCP, Tunnel+UDP, + * Tunnel+SCTP + */ +int +otx2_flow_parse_lg(struct otx2_parse_state *pst) +{ + struct otx2_flow_item_info info; + char hw_mask[64]; + int lid, lt; + int rc; + + if (!pst->tunnel) + return 0; + + info.hw_mask = &hw_mask; + info.spec = NULL; + info.mask = NULL; + lid = NPC_LID_LG; + + switch (pst->pattern->type) { + case RTE_FLOW_ITEM_TYPE_UDP: + lt = NPC_LT_LG_TU_UDP; + info.def_mask = &rte_flow_item_udp_mask; + info.len = sizeof(struct rte_flow_item_udp); + break; + case RTE_FLOW_ITEM_TYPE_TCP: + lt = NPC_LT_LG_TU_TCP; + info.def_mask = &rte_flow_item_tcp_mask; + info.len = sizeof(struct rte_flow_item_tcp); + break; + case RTE_FLOW_ITEM_TYPE_SCTP: + lt = NPC_LT_LG_TU_SCTP; + info.def_mask = &rte_flow_item_sctp_mask; + info.len = sizeof(struct rte_flow_item_sctp); + break; + case RTE_FLOW_ITEM_TYPE_ESP: + lt = NPC_LT_LG_TU_ESP; + info.def_mask = &rte_flow_item_esp_mask; + info.len = sizeof(struct rte_flow_item_esp); + break; + default: + return 0; + } + + otx2_flow_get_hw_supp_mask(pst, &info, lid, lt); + rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error); + if (rc != 0) + return rc; + + return otx2_flow_update_parse_state(pst, &info, lid, lt, 0); +} + +/* Tunnel+IPv4, Tunnel+IPv6 */ +int +otx2_flow_parse_lf(struct otx2_parse_state *pst) +{ + struct otx2_flow_item_info info; + char hw_mask[64]; + int lid, lt; + int rc; + + if (!pst->tunnel) + return 0; + + info.hw_mask = &hw_mask; + info.spec = NULL; + info.mask = NULL; + lid = NPC_LID_LF; + + if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_IPV4) { + lt = NPC_LT_LF_TU_IP; + info.def_mask = &rte_flow_item_ipv4_mask; + info.len = sizeof(struct rte_flow_item_ipv4); + } else if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_IPV6) { + lt = NPC_LT_LF_TU_IP6; + info.def_mask = &rte_flow_item_ipv6_mask; + info.len = sizeof(struct rte_flow_item_ipv6); + } else { + /* There is no tunneled IP header */ + return 0; + } + + otx2_flow_get_hw_supp_mask(pst, &info, lid, lt); + rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error); + if (rc != 0) + return rc; + + return otx2_flow_update_parse_state(pst, &info, lid, lt, 0); +} + +/* Tunnel+Ether */ +int +otx2_flow_parse_le(struct otx2_parse_state *pst) +{ + const struct rte_flow_item *pattern, *last_pattern; + struct rte_flow_item_eth hw_mask; + struct otx2_flow_item_info info; + int lid, lt, lflags; + int nr_vlans = 0; + int rc; + + /* We hit this layer if there is a tunneling protocol */ + if (!pst->tunnel) + return 0; + + if (pst->pattern->type != RTE_FLOW_ITEM_TYPE_ETH) + return 0; + + lid = NPC_LID_LE; + lt = NPC_LT_LE_TU_ETHER; + lflags = 0; + + info.def_mask = &rte_flow_item_vlan_mask; + /* No match support for vlan tags */ + info.hw_mask = NULL; + info.len = sizeof(struct rte_flow_item_vlan); + info.spec = NULL; + info.mask = NULL; + + /* Look ahead and find out any VLAN tags. These can be + * detected but no data matching is available. 
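+	 * E.g. a tunneled ETH / VLAN / VLAN pattern counts two tags
+	 * and maps to NPC_F_TU_ETHER_STAG_CTAG below; the tags are
+	 * only counted, never matched on.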
+ */ + last_pattern = pst->pattern; + pattern = pst->pattern + 1; + pattern = otx2_flow_skip_void_and_any_items(pattern); + while (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) { + nr_vlans++; + rc = otx2_flow_parse_item_basic(pattern, &info, pst->error); + if (rc != 0) + return rc; + last_pattern = pattern; + pattern++; + pattern = otx2_flow_skip_void_and_any_items(pattern); + } + otx2_npc_dbg("Nr_vlans = %d", nr_vlans); + switch (nr_vlans) { + case 0: + break; + case 1: + lflags = NPC_F_TU_ETHER_CTAG; + break; + case 2: + lflags = NPC_F_TU_ETHER_STAG_CTAG; + break; + default: + rte_flow_error_set(pst->error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, + last_pattern, + "more than 2 vlans with tunneled Ethernet " + "not supported"); + return -rte_errno; + } + + info.def_mask = &rte_flow_item_eth_mask; + info.hw_mask = &hw_mask; + info.len = sizeof(struct rte_flow_item_eth); + otx2_flow_get_hw_supp_mask(pst, &info, lid, lt); + info.spec = NULL; + info.mask = NULL; + + rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error); + if (rc != 0) + return rc; + + pst->pattern = last_pattern; + + return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags); +} + +static int +otx2_flow_parse_ld_udp_tunnel(struct otx2_parse_state *pst) +{ + /* + * We are positioned at UDP. Scan ahead and look for + * UDP encapsulated tunnel protocols. If available, + * parse them. In that case handle this: + * - RTE spec assumes we point to tunnel header. + * - NPC parser provides offset from UDP header. + */ + + /* + * Note: Add support to GENEVE, VXLAN_GPE when we + * upgrade DPDK + * + * Note: Better to split flags into two nibbles: + * - Higher nibble can have flags + * - Lower nibble to further enumerate protocols + * and have flags based extraction + */ + const struct rte_flow_item *pattern = pst->pattern + 1; + struct otx2_flow_item_info info; + int lid, lt, lflags; + char hw_mask[64]; + int rc; + + info.spec = NULL; + info.mask = NULL; + info.hw_mask = NULL; + info.def_mask = NULL; + info.len = 0; + lid = NPC_LID_LD; + lt = NPC_LT_LD_UDP; + lflags = 0; + + /* Ensure we are not matching anything in UDP */ + rc = otx2_flow_parse_item_basic(pattern, &info, pst->error); + if (rc) + return rc; + + info.hw_mask = &hw_mask; + pattern = otx2_flow_skip_void_and_any_items(pattern); + otx2_npc_dbg("Pattern->type = %d", pattern->type); + switch (pattern->type) { + case RTE_FLOW_ITEM_TYPE_VXLAN: + lflags = NPC_F_UDP_VXLAN; + info.def_mask = &rte_flow_item_vxlan_mask; + info.len = sizeof(struct rte_flow_item_vxlan); + lt = NPC_LT_LD_UDP_VXLAN; + break; + case RTE_FLOW_ITEM_TYPE_GTPC: + lflags = NPC_F_UDP_GTP_GTPC; + info.def_mask = &rte_flow_item_gtp_mask; + info.len = sizeof(struct rte_flow_item_gtp); + break; + case RTE_FLOW_ITEM_TYPE_GTPU: + lflags = NPC_F_UDP_GTP_GTPU_G_PDU; + info.def_mask = &rte_flow_item_gtp_mask; + info.len = sizeof(struct rte_flow_item_gtp); + break; + default: + return 0; + } + + /* Now pst->pattern must point to tunnel header */ + pst->pattern = pattern; + pst->tunnel = 1; + + otx2_flow_get_hw_supp_mask(pst, &info, lid, lt); + /* Get past UDP header */ + rc = otx2_flow_parse_item_basic(pattern, &info, pst->error); + if (rc != 0) + return rc; + + return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags); +} + +static int +flow_parse_mpls_label_stack(struct otx2_parse_state *pst, int *flag) +{ + int nr_labels = 0; + const struct rte_flow_item *pattern = pst->pattern; + struct otx2_flow_item_info info; + int rc; + uint8_t flag_list[] = {0, NPC_F_MPLS_2_LABELS, + NPC_F_MPLS_3_LABELS, 
NPC_F_MPLS_4_LABELS}; + + /* + * pst->pattern points to first MPLS label. We only check + * that subsequent labels do not have anything to match. + */ + info.def_mask = &rte_flow_item_mpls_mask; + info.hw_mask = NULL; + info.len = sizeof(struct rte_flow_item_mpls); + info.spec = NULL; + info.mask = NULL; + + while (pattern->type == RTE_FLOW_ITEM_TYPE_MPLS) { + nr_labels++; + + /* Basic validation of 2nd/3rd/4th mpls item */ + if (nr_labels > 1) { + rc = otx2_flow_parse_item_basic(pattern, &info, + pst->error); + if (rc != 0) + return rc; + } + pst->last_pattern = pattern; + pattern = otx2_flow_skip_void_and_any_items(pattern); + } + + if (nr_labels > 4) { + rte_flow_error_set(pst->error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, + pst->last_pattern, + "more than 4 mpls labels not supported"); + return -rte_errno; + } + + *flag = flag_list[nr_labels - 1]; + return 0; +} + +static int +otx2_flow_parse_lc_ld_mpls(struct otx2_parse_state *pst, int lid) +{ + /* Find number of MPLS labels */ + struct rte_flow_item_mpls hw_mask; + struct otx2_flow_item_info info; + int lt, lflags; + int rc; + + lflags = 0; + + if (lid == NPC_LID_LC) + lt = NPC_LT_LC_MPLS; + else + lt = NPC_LT_LD_TU_MPLS; + + /* Prepare for parsing the first item */ + info.def_mask = &rte_flow_item_mpls_mask; + info.hw_mask = &hw_mask; + info.len = sizeof(struct rte_flow_item_mpls); + info.spec = NULL; + info.mask = NULL; + + otx2_flow_get_hw_supp_mask(pst, &info, lid, lt); + rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error); + if (rc != 0) + return rc; + + /* + * Parse for more labels. + * This sets lflags and pst->last_pattern correctly. + */ + rc = flow_parse_mpls_label_stack(pst, &lflags); + if (rc != 0) + return rc; + + pst->tunnel = 1; + pst->pattern = pst->last_pattern; + + return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags); +} + +/* + * ICMP, ICMP6, UDP, TCP, SCTP, VXLAN, GRE, NVGRE, + * GTP, GTPC, GTPU, ESP + * + * Note: UDP tunnel protocols are identified by flags. + * LPTR for these protocol still points to UDP + * header. Need flag based extraction to support + * this. + */ +int +otx2_flow_parse_ld(struct otx2_parse_state *pst) +{ + char hw_mask[NPC_MAX_EXTRACT_DATA_LEN]; + struct otx2_flow_item_info info; + int lid, lt, lflags; + int rc; + + if (pst->tunnel) { + /* We have already parsed MPLS or IPv4/v6 followed + * by MPLS or IPv4/v6. Subsequent TCP/UDP etc + * would be parsed as tunneled versions. Skip + * this layer, except for tunneled MPLS. If LC is + * MPLS, we have anyway skipped all stacked MPLS + * labels. + */ + if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS) + return otx2_flow_parse_lc_ld_mpls(pst, NPC_LID_LD); + return 0; + } + info.hw_mask = &hw_mask; + info.spec = NULL; + info.mask = NULL; + info.def_mask = NULL; + info.len = 0; + + lid = NPC_LID_LD; + lflags = 0; + + otx2_npc_dbg("Pst->pattern->type = %d", pst->pattern->type); + switch (pst->pattern->type) { + case RTE_FLOW_ITEM_TYPE_ICMP: + if (pst->lt[NPC_LID_LC] == NPC_LT_LC_IP6) + lt = NPC_LT_LD_ICMP6; + else + lt = NPC_LT_LD_ICMP; + info.def_mask = &rte_flow_item_icmp_mask; + info.len = sizeof(struct rte_flow_item_icmp); + break; + case RTE_FLOW_ITEM_TYPE_UDP: + /* Check if a tunnel follows. If yes, we do not + * match anything in UDP spec but process the + * tunnel spec. + */ + rc = otx2_flow_parse_ld_udp_tunnel(pst); + if (rc != 0) + return rc; + + /* If tunnel was present and processed, we are done. 
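+		 * (otx2_flow_parse_ld_udp_tunnel sets pst->tunnel and
+		 * leaves pst->pattern pointing at the tunnel item when
+		 * it finds one.)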
*/ + if (pst->tunnel) + return 0; + + /* This is UDP without tunnel */ + lt = NPC_LT_LD_UDP; + info.def_mask = &rte_flow_item_udp_mask; + info.len = sizeof(struct rte_flow_item_udp); + break; + case RTE_FLOW_ITEM_TYPE_TCP: + lt = NPC_LT_LD_TCP; + info.def_mask = &rte_flow_item_tcp_mask; + info.len = sizeof(struct rte_flow_item_tcp); + break; + case RTE_FLOW_ITEM_TYPE_SCTP: + lt = NPC_LT_LD_SCTP; + info.def_mask = &rte_flow_item_sctp_mask; + info.len = sizeof(struct rte_flow_item_sctp); + break; + case RTE_FLOW_ITEM_TYPE_ESP: + lt = NPC_LT_LD_ESP; + info.def_mask = &rte_flow_item_esp_mask; + info.len = sizeof(struct rte_flow_item_esp); + break; + case RTE_FLOW_ITEM_TYPE_GRE: + lt = NPC_LT_LD_GRE; + info.def_mask = &rte_flow_item_gre_mask; + info.len = sizeof(struct rte_flow_item_gre); + break; + case RTE_FLOW_ITEM_TYPE_NVGRE: + lt = NPC_LT_LD_GRE; + lflags = NPC_F_GRE_NVGRE; + info.def_mask = &rte_flow_item_nvgre_mask; + info.len = sizeof(struct rte_flow_item_nvgre); + /* Further IP/Ethernet are parsed as tunneled */ + pst->tunnel = 1; + break; + default: + return 0; + } + + otx2_flow_get_hw_supp_mask(pst, &info, lid, lt); + rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error); + if (rc != 0) + return rc; + + return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags); +} From patchwork Sun Jun 2 15:24:14 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54082 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 234641BBD2; Sun, 2 Jun 2019 17:26:38 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id BD5981B9A8 for ; Sun, 2 Jun 2019 17:26:33 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FK7HB020263; Sun, 2 Jun 2019 08:26:33 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=dnzZsBI9vJWMbOAHEAGkKq5mg2VB0v4wv+6lG/2/aqE=; b=EB2a1vpEuporo46gjYDLwRwiJaqwUv1GBbAyCrFRN4yZPXkItCQzdyBmOj4c816oVHEE nRUVAqpM0Kl8U92FCoMbf/Z4e0YVrj9GTjHfOOsmlKehfdZ0mdUtVwwNze7yoB0wlImT k+5ALmucyUJ0cSCAinFLhP1zSp/a0JQctphG307FlVC+l6PMvWWznwHHQzdFWWKZaeWV gViwvoDy2DUY1w1q8hunLzm/o9XHxfp9CVhSsLJsRu6BieFmIqg2GbCvtoElh2qg6xWr xRCsqagU7dDfwV8c4MgYzaQ8lwR3WmI6uJlpwPXIpmmnDaaSnK+gp6AZPHqM//kUAL32 Jw== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk4983-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:26:33 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:26:31 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:26:31 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 7879F3F703F; Sun, 2 Jun 2019 08:26:30 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: Date: 
Sun, 2 Jun 2019 20:54:14 +0530 Message-ID: <20190602152434.23996-39-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 38/58] net/octeontx2: adding flow parsing for inner layers X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Kiran Kumar K Adding functionality to parse inner layers from la to lc. These will be used to parse inner layers L2, L3, L4 types. Signed-off-by: Kiran Kumar K --- drivers/net/octeontx2/otx2_flow_parse.c | 202 ++++++++++++++++++++++++ 1 file changed, 202 insertions(+) diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c index 2d0fa439a..1351dff4c 100644 --- a/drivers/net/octeontx2/otx2_flow_parse.c +++ b/drivers/net/octeontx2/otx2_flow_parse.c @@ -461,3 +461,205 @@ otx2_flow_parse_ld(struct otx2_parse_state *pst) return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags); } + +static inline void +flow_check_lc_ip_tunnel(struct otx2_parse_state *pst) +{ + const struct rte_flow_item *pattern = pst->pattern + 1; + + pattern = otx2_flow_skip_void_and_any_items(pattern); + if (pattern->type == RTE_FLOW_ITEM_TYPE_MPLS || + pattern->type == RTE_FLOW_ITEM_TYPE_IPV4 || + pattern->type == RTE_FLOW_ITEM_TYPE_IPV6) + pst->tunnel = 1; +} + +/* Outer IPv4, Outer IPv6, MPLS, ARP */ +int +otx2_flow_parse_lc(struct otx2_parse_state *pst) +{ + uint8_t hw_mask[NPC_MAX_EXTRACT_DATA_LEN]; + struct otx2_flow_item_info info; + int lid, lt; + int rc; + + if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS) + return otx2_flow_parse_lc_ld_mpls(pst, NPC_LID_LC); + + info.hw_mask = &hw_mask; + info.spec = NULL; + info.mask = NULL; + lid = NPC_LID_LC; + + switch (pst->pattern->type) { + case RTE_FLOW_ITEM_TYPE_IPV4: + lt = NPC_LT_LC_IP; + info.def_mask = &rte_flow_item_ipv4_mask; + info.len = sizeof(struct rte_flow_item_ipv4); + break; + case RTE_FLOW_ITEM_TYPE_IPV6: + lid = NPC_LID_LC; + lt = NPC_LT_LC_IP6; + info.def_mask = &rte_flow_item_ipv6_mask; + info.len = sizeof(struct rte_flow_item_ipv6); + break; + case RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4: + lt = NPC_LT_LC_ARP; + info.def_mask = &rte_flow_item_arp_eth_ipv4_mask; + info.len = sizeof(struct rte_flow_item_arp_eth_ipv4); + break; + default: + /* No match at this layer */ + return 0; + } + + /* Identify if IP tunnels MPLS or IPv4/v6 */ + flow_check_lc_ip_tunnel(pst); + + otx2_flow_get_hw_supp_mask(pst, &info, lid, lt); + rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error); + if (rc != 0) + return rc; + + return otx2_flow_update_parse_state(pst, &info, lid, lt, 0); +} + +/* VLAN, ETAG */ +int +otx2_flow_parse_lb(struct otx2_parse_state *pst) +{ + const struct rte_flow_item *pattern = pst->pattern; + const struct rte_flow_item *last_pattern; + char hw_mask[NPC_MAX_EXTRACT_DATA_LEN]; + struct otx2_flow_item_info info; + int lid, lt, lflags; + int nr_vlans = 0; + int rc; + + info.spec = NULL; + info.mask = NULL; + + lid = NPC_LID_LB; + lflags = 0; + last_pattern = pattern; + + if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) { + /* RTE vlan is either 802.1q or 802.1ad, + * this maps to either CTAG/STAG. 
We need to decide + * based on number of VLANS present. Matching is + * supported on first tag only. + */ + info.def_mask = &rte_flow_item_vlan_mask; + info.hw_mask = NULL; + info.len = sizeof(struct rte_flow_item_vlan); + + pattern = pst->pattern; + while (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) { + nr_vlans++; + + /* Basic validation of 2nd/3rd vlan item */ + if (nr_vlans > 1) { + otx2_npc_dbg("Vlans = %d", nr_vlans); + rc = otx2_flow_parse_item_basic(pattern, &info, + pst->error); + if (rc != 0) + return rc; + } + last_pattern = pattern; + pattern++; + pattern = otx2_flow_skip_void_and_any_items(pattern); + } + + switch (nr_vlans) { + case 1: + lt = NPC_LT_LB_CTAG; + break; + case 2: + lt = NPC_LT_LB_STAG; + lflags = NPC_F_STAG_CTAG; + break; + case 3: + lt = NPC_LT_LB_STAG; + lflags = NPC_F_STAG_STAG_CTAG; + break; + default: + rte_flow_error_set(pst->error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, + last_pattern, + "more than 3 vlans not supported"); + return -rte_errno; + } + } else if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_E_TAG) { + /* we can support ETAG and match a subsequent CTAG + * without any matching support. + */ + lt = NPC_LT_LB_ETAG; + lflags = 0; + + last_pattern = pst->pattern; + pattern = otx2_flow_skip_void_and_any_items(pst->pattern + 1); + if (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) { + info.def_mask = &rte_flow_item_vlan_mask; + /* set supported mask to NULL for vlan tag */ + info.hw_mask = NULL; + info.len = sizeof(struct rte_flow_item_vlan); + rc = otx2_flow_parse_item_basic(pattern, &info, + pst->error); + if (rc != 0) + return rc; + + lflags = NPC_F_ETAG_CTAG; + last_pattern = pattern; + } + + info.def_mask = &rte_flow_item_e_tag_mask; + info.len = sizeof(struct rte_flow_item_e_tag); + } else { + return 0; + } + + info.hw_mask = &hw_mask; + info.spec = NULL; + info.mask = NULL; + otx2_flow_get_hw_supp_mask(pst, &info, lid, lt); + + rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error); + if (rc != 0) + return rc; + + /* Point pattern to last item consumed */ + pst->pattern = last_pattern; + return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags); +} + +int +otx2_flow_parse_la(struct otx2_parse_state *pst) +{ + struct rte_flow_item_eth hw_mask; + struct otx2_flow_item_info info; + int lid, lt; + int rc; + + /* Identify the pattern type into lid, lt */ + if (pst->pattern->type != RTE_FLOW_ITEM_TYPE_ETH) + return 0; + + lid = NPC_LID_LA; + lt = NPC_LT_LA_ETHER; + + /* Prepare for parsing the item */ + info.def_mask = &rte_flow_item_eth_mask; + info.hw_mask = &hw_mask; + info.len = sizeof(struct rte_flow_item_eth); + otx2_flow_get_hw_supp_mask(pst, &info, lid, lt); + info.spec = NULL; + info.mask = NULL; + + /* Basic validation of item parameters */ + rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error); + if (rc) + return rc; + + /* Update pst if not validate only? clash check? 
*/ + return otx2_flow_update_parse_state(pst, &info, lid, lt, 0); +} From patchwork Sun Jun 2 15:24:15 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54111 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id A715D1BBD6; Sun, 2 Jun 2019 17:26:39 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 3F1E81BA9F for ; Sun, 2 Jun 2019 17:26:37 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FK6kZ020260; Sun, 2 Jun 2019 08:26:36 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=zTt4ftydRsv6FStfkIG4HY5WQ+QtxzmSoaklxMwyn2c=; b=yEDoZVx+KdR5E5DFBE/jp/xLFSZ4KMl7RL+cXDUYrpe/hW3TamZDXU2Zg0S5OYRxJ3MH RHafPIuOlbDh6kiAGfX2arUc4oz1OTpxeOJ+luNlY85LadeP2SqfO8VMj88AsUCsas9X jauOUWMiDPrhluTxlEZHYlOHwwRY+0SAVwI45w3y1RJTJhMocG9uhVw1Bfjm8v/wDCJv HfE1n28sPJSEFzOsyagYJlE3vriK7OW8vlk2yuXvehSOeNakn+M3qJhE8Q2cUTciF0jy qhz5XPQ/1KE3oQc4MEwt64TEqCXLOT8IVbVUhYHDgY05PS4IQXH/rjOtbcofDDHbrmyP iA== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk4989-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:26:36 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:26:35 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:26:35 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 7A6D03F703F; Sun, 2 Jun 2019 08:26:33 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Vivek Sharma Date: Sun, 2 Jun 2019 20:54:15 +0530 Message-ID: <20190602152434.23996-40-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 39/58] net/octeontx2: add flow actions support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Kiran Kumar K Adding support to parse flow actions like drop, count, mark, rss, queue. On egress side, only drop and count actions were supported. 
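As a rough sketch of what the ingress parser accepts (queue index and mark id below are illustrative): exactly one terminating action such as QUEUE, RSS or DROP, optionally combined with COUNT and one of MARK/FLAG.

	struct rte_flow_action_mark mark = { .id = 0x1234 };
	struct rte_flow_action_queue queue = { .index = 4 };
	struct rte_flow_action_count count = { .shared = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_MARK,  .conf = &mark },
		{ .type = RTE_FLOW_ACTION_TYPE_COUNT, .conf = &count },
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

The mark id must stay below 0xfffe, shared counters are rejected, and DROP pairs only with COUNT.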
Signed-off-by: Kiran Kumar K Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/otx2_flow_parse.c | 276 ++++++++++++++++++++++++ drivers/net/octeontx2/otx2_rx.h | 1 + 2 files changed, 277 insertions(+) diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c index 1351dff4c..cf13813d8 100644 --- a/drivers/net/octeontx2/otx2_flow_parse.c +++ b/drivers/net/octeontx2/otx2_flow_parse.c @@ -663,3 +663,279 @@ otx2_flow_parse_la(struct otx2_parse_state *pst) /* Update pst if not validate only? clash check? */ return otx2_flow_update_parse_state(pst, &info, lid, lt, 0); } + +static int +parse_rss_action(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + const struct rte_flow_action *act, + struct rte_flow_error *error) +{ + struct otx2_eth_dev *hw = dev->data->dev_private; + struct otx2_rss_info *rss_info = &hw->rss_info; + const struct rte_flow_action_rss *rss; + uint32_t i; + + rss = (const struct rte_flow_action_rss *)act->conf; + + /* Not supported */ + if (attr->egress) { + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, + attr, "No support of RSS in egress"); + } + + if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, + act, "multi-queue mode is disabled"); + + /* Parse RSS related parameters from configuration */ + if (!rss || !rss->queue_num) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, "no valid queues"); + + if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, act, + "non-default RSS hash functions" + " are not supported"); + + if (rss->key_len && rss->key_len > RTE_DIM(rss_info->key)) + return rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, act, + "RSS hash key too large"); + + if (rss->queue_num > rss_info->rss_size) + return rte_flow_error_set + (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act, + "too many queues for RSS context"); + + for (i = 0; i < rss->queue_num; i++) { + if (rss->queue[i] >= dev->data->nb_rx_queues) + return rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ACTION, + act, + "queue id > max number" + " of queues"); + } + + return 0; +} + +int +otx2_flow_parse_actions(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + const struct rte_flow_action actions[], + struct rte_flow_error *error, + struct rte_flow *flow) +{ + struct otx2_eth_dev *hw = dev->data->dev_private; + struct otx2_npc_flow_info *npc = &hw->npc_flow; + const struct rte_flow_action_count *act_count; + const struct rte_flow_action_mark *act_mark; + const struct rte_flow_action_queue *act_q; + const char *errmsg = NULL; + int sel_act, req_act = 0; + uint16_t pf_func; + int errcode = 0; + int mark = 0; + int rq = 0; + + /* Initialize actions */ + flow->ctr_id = NPC_COUNTER_NONE; + + for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { + otx2_npc_dbg("Action type = %d", actions->type); + + switch (actions->type) { + case RTE_FLOW_ACTION_TYPE_VOID: + break; + case RTE_FLOW_ACTION_TYPE_MARK: + act_mark = + (const struct rte_flow_action_mark *)actions->conf; + + /* We have only 16 bits. 
Use highest val for flag */ + if (act_mark->id > (OTX2_FLOW_FLAG_VAL - 2)) { + errmsg = "mark value must be < 0xfffe"; + errcode = ENOTSUP; + goto err_exit; + } + mark = act_mark->id + 1; + req_act |= OTX2_FLOW_ACT_MARK; + rte_atomic32_inc(&npc->mark_actions); + break; + + case RTE_FLOW_ACTION_TYPE_FLAG: + mark = OTX2_FLOW_FLAG_VAL; + req_act |= OTX2_FLOW_ACT_FLAG; + rte_atomic32_inc(&npc->mark_actions); + break; + + case RTE_FLOW_ACTION_TYPE_COUNT: + act_count = + (const struct rte_flow_action_count *) + actions->conf; + + if (act_count->shared == 1) { + errmsg = "Shared Counters not supported"; + errcode = ENOTSUP; + goto err_exit; + } + /* Indicates, need a counter */ + flow->ctr_id = 1; + req_act |= OTX2_FLOW_ACT_COUNT; + break; + + case RTE_FLOW_ACTION_TYPE_DROP: + req_act |= OTX2_FLOW_ACT_DROP; + break; + + case RTE_FLOW_ACTION_TYPE_QUEUE: + /* Applicable only to ingress flow */ + act_q = (const struct rte_flow_action_queue *) + actions->conf; + rq = act_q->index; + if (rq >= dev->data->nb_rx_queues) { + errmsg = "invalid queue index"; + errcode = EINVAL; + goto err_exit; + } + req_act |= OTX2_FLOW_ACT_QUEUE; + break; + + case RTE_FLOW_ACTION_TYPE_RSS: + errcode = parse_rss_action(dev, attr, actions, error); + if (errcode) + return -rte_errno; + + req_act |= OTX2_FLOW_ACT_RSS; + break; + + case RTE_FLOW_ACTION_TYPE_SECURITY: + /* Assumes user has already configured security + * session for this flow. Associated conf is + * opaque. When RTE security is implemented for otx2, + * we need to verify that for specified security + * session: + * action_type == + * RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL && + * session_protocol == + * RTE_SECURITY_PROTOCOL_IPSEC + * + * RSS is not supported with inline ipsec. Get the + * rq from associated conf, or make + * RTE_FLOW_ACTION_TYPE_QUEUE compulsory with this + * action. + * Currently, rq = 0 is assumed. + */ + req_act |= OTX2_FLOW_ACT_SEC; + rq = 0; + break; + default: + errmsg = "Unsupported action specified"; + errcode = ENOTSUP; + goto err_exit; + } + } + + /* Check if actions specified are compatible */ + if (attr->egress) { + /* Only DROP/COUNT is supported */ + if (!(req_act & OTX2_FLOW_ACT_DROP)) { + errmsg = "DROP is required action for egress"; + errcode = EINVAL; + goto err_exit; + } else if (req_act & ~(OTX2_FLOW_ACT_DROP | + OTX2_FLOW_ACT_COUNT)) { + errmsg = "Unsupported action specified"; + errcode = ENOTSUP; + goto err_exit; + } + flow->npc_action = NIX_TX_ACTIONOP_DROP; + return 0; + } + + /* We have already verified the attr, this is ingress. + * - Exactly one terminating action is supported + * - Exactly one of MARK or FLAG is supported + * - If terminating action is DROP, only count is valid. 
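+	 * E.g. MARK + COUNT + QUEUE is accepted; MARK combined with
+	 * FLAG, or DROP combined with anything other than COUNT, is
+	 * rejected.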
+ */ + sel_act = req_act & OTX2_FLOW_ACT_TERM; + if ((sel_act & (sel_act - 1)) != 0) { + errmsg = "Only one terminating action supported"; + errcode = EINVAL; + goto err_exit; + } + + if (req_act & OTX2_FLOW_ACT_DROP) { + sel_act = req_act & ~OTX2_FLOW_ACT_COUNT; + if ((sel_act & (sel_act - 1)) != 0) { + errmsg = "Only COUNT action is supported " + "with DROP ingress action"; + errcode = ENOTSUP; + goto err_exit; + } + } + + if ((req_act & (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK)) + == (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK)) { + errmsg = "Only one of FLAG or MARK action is supported"; + errcode = ENOTSUP; + goto err_exit; + } + + /* Set NIX_RX_ACTIONOP */ + if (req_act & OTX2_FLOW_ACT_DROP) { + flow->npc_action = NIX_RX_ACTIONOP_DROP; + } else if (req_act & OTX2_FLOW_ACT_QUEUE) { + flow->npc_action = NIX_RX_ACTIONOP_UCAST; + flow->npc_action |= (uint64_t)rq << 20; + } else if (req_act & OTX2_FLOW_ACT_RSS) { + /* When user added a rule for rss, first we will add the + *rule in MCAM and then update the action, once if we have + *FLOW_KEY_ALG index. So, till we update the action with + *flow_key_alg index, set the action to drop. + */ + if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) + flow->npc_action = NIX_RX_ACTIONOP_DROP; + else + flow->npc_action = NIX_RX_ACTIONOP_UCAST; + } else if (req_act & OTX2_FLOW_ACT_SEC) { + flow->npc_action = NIX_RX_ACTIONOP_UCAST_IPSEC; + flow->npc_action |= (uint64_t)rq << 20; + } else if (req_act & (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK)) { + flow->npc_action = NIX_RX_ACTIONOP_UCAST; + } else if (req_act & OTX2_FLOW_ACT_COUNT) { + /* Keep OTX2_FLOW_ACT_COUNT always at the end + * This is default action, when user specify only + * COUNT ACTION + */ + flow->npc_action = NIX_RX_ACTIONOP_UCAST; + } else { + /* Should never reach here */ + errmsg = "Invalid action specified"; + errcode = EINVAL; + goto err_exit; + } + + if (mark) + flow->npc_action |= (uint64_t)mark << 40; + + if (rte_atomic32_read(&npc->mark_actions) == 1) + hw->rx_offload_flags |= NIX_RX_OFFLOAD_MARK_UPDATE_F; + + + /* Ideally AF must ensure that correct pf_func is set */ + pf_func = otx2_pfvf_func(hw->pf, hw->vf); + flow->npc_action |= (uint64_t)pf_func << 4; + + return 0; + +err_exit: + rte_flow_error_set(error, errcode, + RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL, + errmsg); + return -rte_errno; +} + diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h index 0c3627c12..b9c9ff3cc 100644 --- a/drivers/net/octeontx2/otx2_rx.h +++ b/drivers/net/octeontx2/otx2_rx.h @@ -14,6 +14,7 @@ #define NIX_RX_OFFLOAD_PTYPE_F BIT(1) #define NIX_RX_OFFLOAD_TSTAMP_F BIT(5) +#define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4) #define NIX_TIMESYNC_RX_OFFSET 8 From patchwork Sun Jun 2 15:24:16 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54083 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 70C841BBDC; Sun, 2 Jun 2019 17:26:42 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 679971BBA5 for ; Sun, 2 Jun 2019 17:26:40 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FLOCh021032; Sun, 2 Jun 2019 08:26:39 -0700 DKIM-Signature: v=1; 
a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=++1c8RXjP2yecHY1BpLLy/vNd/g7HxKN9AVU5fPF/ZM=; b=uW2y2SDhz1Mm8PqbwUnI0tuM/xv//AS6enaTrrlkkMW6D+4d/tLRUyW/9yu4bOFK1oko 9+9Oe/JRhlIaiqhLjz+unhAy1j2rO0hfZVhQ6XMXKVlAgDSrzVZatxMfgwmujgF0DL4k 34dVD76CkNjceAfaGmqihek3nLGOsAycVmkedyDo2yrHi9GLfK+VuNjv+GOtLWr73ywJ 3HYQ9LuqJsKWbl1GiuBhWnYYcZeLKgK1/CNdXUoToWoRP9191Iy29slXYexem9TVzv0Z NemK/ndHzEJaun1eRkUr0dhgYPJKKMbs8mExKmIo7Z0JkD6hXdlehz4tsvJqY4EPgHrO xA== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk498f-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:26:39 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:26:38 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:26:38 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 7C5F23F703F; Sun, 2 Jun 2019 08:26:36 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Vivek Sharma Date: Sun, 2 Jun 2019 20:54:16 +0530 Message-ID: <20190602152434.23996-41-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 40/58] net/octeontx2: add flow operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Kiran Kumar K Adding the initial flow ops like flow_create and flow_validate. These will be used to alloc and write flow rule to the device and validate the flow rule. Signed-off-by: Kiran Kumar K Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_flow.c | 430 ++++++++++++++++++++++++++++++ 3 files changed, 432 insertions(+) create mode 100644 drivers/net/octeontx2/otx2_flow.c diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index f38901b89..d651c8c50 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -34,6 +34,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ otx2_rss.c \ otx2_mac.c \ otx2_ptp.c \ + otx2_flow.c \ otx2_link.c \ otx2_stats.c \ otx2_lookup.c \ diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index cbab77f7b..a2c494bb4 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -7,6 +7,7 @@ sources = files( 'otx2_rss.c', 'otx2_mac.c', 'otx2_ptp.c', + 'otx2_flow.c', 'otx2_link.c', 'otx2_stats.c', 'otx2_lookup.c', diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c new file mode 100644 index 000000000..d1e1c4411 --- /dev/null +++ b/drivers/net/octeontx2/otx2_flow.c @@ -0,0 +1,430 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#include "otx2_ethdev.h" +#include "otx2_flow.h" + +static int +flow_program_npc(struct otx2_parse_state *pst, struct otx2_mbox *mbox, + struct otx2_npc_flow_info *flow_info) +{ + /* This is non-LDATA part in search key */ + uint64_t key_data[2] = {0ULL, 0ULL}; + uint64_t key_mask[2] = {0ULL, 0ULL}; + int intf = pst->flow->nix_intf; + uint64_t lt, flags; + int off, idx; + uint64_t val; + int key_len; + uint8_t lid; + + for (lid = 0; lid < NPC_MAX_LID; lid++) { + /* Offset in key */ + off = NPC_PARSE_KEX_S_LID_OFFSET(lid); + lt = pst->lt[lid] & 0xf; + flags = pst->flags[lid] & 0xff; + /* NPC_LAYER_KEX_S */ + val = (lt << 8) | flags; + key_data[off / UINT64_BIT] |= (val << (off & 0x3f)); + val = (flags == 0 ? 0 : 0xffULL); + if (lt) + val |= 0xf00ULL; + key_mask[off / UINT64_BIT] |= (val << (off & 0x3f)); + }; + + otx2_npc_dbg("Npc prog key data0: 0x%" PRIx64 ", data1: 0x%" PRIx64, + key_data[0], key_data[1]); + /* + * Channel, errlev, errcode, l2_l3_bc_mc + * AF must set the channel. For time being, it can be + * hard-coded + * Rest of the fields are zero for now. + */ + + /* + * Compress key_data and key_mask, skipping any disabled + * nibbles. + */ + otx2_flow_keyx_compress(key_data, pst->npc->keyx_supp_nmask[intf]); + otx2_flow_keyx_compress(key_mask, pst->npc->keyx_supp_nmask[intf]); + + /* Copy this into mcam string */ + key_len = (pst->npc->keyx_len[intf] + 7) / 8; + otx2_npc_dbg("Key_len = %d", key_len); + memcpy(pst->flow->mcam_data, key_data, key_len); + memcpy(pst->flow->mcam_mask, key_mask, key_len); + + otx2_npc_dbg("Final flow data"); + for (idx = 0; idx < OTX2_MAX_MCAM_WIDTH_DWORDS; idx++) { + otx2_npc_dbg("data[%d]: 0x%" PRIx64 ", mask[%d]: 0x%" PRIx64, + idx, pst->flow->mcam_data[idx], + idx, pst->flow->mcam_mask[idx]); + } + + /* + * Now we have mcam data and mask formatted as + * [Key_len/4 nibbles][0 or 1 nibble hole][data] + * hole is present if key_len is odd number of nibbles. + * mcam data must be split into 64 bits + 48 bits segments + * for each back W0, W1. + */ + + return otx2_flow_mcam_alloc_and_write(pst->flow, mbox, pst, flow_info); +} + +static int +flow_parse_attr(struct rte_eth_dev *eth_dev, + const struct rte_flow_attr *attr, + struct rte_flow_error *error, + struct rte_flow *flow) +{ + struct otx2_eth_dev *dev = eth_dev->data->dev_private; + const char *errmsg = NULL; + + if (attr == NULL) + errmsg = "Attribute can't be empty"; + else if (attr->group) + errmsg = "Groups are not supported"; + else if (attr->priority >= dev->npc_flow.flow_max_priority) + errmsg = "Priority should be with in specified range"; + else if ((!attr->egress && !attr->ingress) || + (attr->egress && attr->ingress)) + errmsg = "Exactly one of ingress or egress must be set"; + + if (errmsg != NULL) { + rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ATTR, + attr, errmsg); + return -ENOTSUP; + } + + if (attr->ingress) + flow->nix_intf = OTX2_INTF_RX; + else + flow->nix_intf = OTX2_INTF_TX; + + flow->priority = attr->priority; + return 0; +} + +static inline int +flow_get_free_rss_grp(struct rte_bitmap *bmap, + uint32_t size, uint32_t *pos) +{ + for (*pos = 0; *pos < size; ++*pos) { + if (!rte_bitmap_get(bmap, *pos)) + break; + } + + return *pos < size ? 
0 : -1; +} + +static int +flow_configure_rss_action(struct otx2_eth_dev *dev, + const struct rte_flow_action_rss *rss, + uint8_t *alg_idx, uint32_t *rss_grp, + int mcam_index) +{ + struct otx2_npc_flow_info *flow_info = &dev->npc_flow; + uint16_t reta[NIX_RSS_RETA_SIZE_MAX]; + uint32_t flowkey_cfg, grp_aval, i; + uint16_t *ind_tbl = NULL; + uint8_t flowkey_algx; + int rc; + + rc = flow_get_free_rss_grp(flow_info->rss_grp_entries, + flow_info->rss_grps, &grp_aval); + /* RSS group :0 is not usable for flow rss action */ + if (rc < 0 || grp_aval == 0) + return -ENOSPC; + + *rss_grp = grp_aval; + + otx2_nix_rss_set_key(dev, (uint8_t *)(uintptr_t)rss->key, + rss->key_len); + + /* If queue count passed in the rss action is less than + * HW configured reta size, replicate rss action reta + * across HW reta table. + */ + if (dev->rss_info.rss_size > rss->queue_num) { + ind_tbl = reta; + + for (i = 0; i < (dev->rss_info.rss_size / rss->queue_num); i++) + memcpy(reta + i * rss->queue_num, rss->queue, + sizeof(uint16_t) * rss->queue_num); + + i = dev->rss_info.rss_size % rss->queue_num; + if (i) + memcpy(&reta[dev->rss_info.rss_size] - i, + rss->queue, i * sizeof(uint16_t)); + } else { + ind_tbl = (uint16_t *)(uintptr_t)rss->queue; + } + + rc = otx2_nix_rss_tbl_init(dev, *rss_grp, ind_tbl); + if (rc) { + otx2_err("Failed to init rss table rc = %d", rc); + return rc; + } + + flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss->types, rss->level); + + rc = otx2_rss_set_hf(dev, flowkey_cfg, &flowkey_algx, + *rss_grp, mcam_index); + if (rc) { + otx2_err("Failed to set rss hash function rc = %d", rc); + return rc; + } + + *alg_idx = flowkey_algx; + + rte_bitmap_set(flow_info->rss_grp_entries, *rss_grp); + + return 0; +} + + +static int +flow_program_rss_action(struct rte_eth_dev *eth_dev, + const struct rte_flow_action actions[], + struct rte_flow *flow) +{ + struct otx2_eth_dev *dev = eth_dev->data->dev_private; + const struct rte_flow_action_rss *rss; + uint32_t rss_grp; + uint8_t alg_idx; + int rc; + + for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) { + if (actions->type == RTE_FLOW_ACTION_TYPE_RSS) { + rss = (const struct rte_flow_action_rss *)actions->conf; + + rc = flow_configure_rss_action(dev, + rss, &alg_idx, &rss_grp, + flow->mcam_id); + if (rc) + return rc; + + flow->npc_action |= + ((uint64_t)(alg_idx & NIX_RSS_ACT_ALG_MASK) << + NIX_RSS_ACT_ALG_OFFSET) | + ((uint64_t)(rss_grp & NIX_RSS_ACT_GRP_MASK) << + NIX_RSS_ACT_GRP_OFFSET); + } + } + return 0; +} + +static int +flow_parse_meta_items(__rte_unused struct otx2_parse_state *pst) +{ + otx2_npc_dbg("Meta Item"); + return 0; +} + +/* + * Parse function of each layer: + * - Consume one or more patterns that are relevant. + * - Update parse_state + * - Set parse_state.pattern = last item consumed + * - Set appropriate error code/message when returning error. 
+ */ +typedef int (*flow_parse_stage_func_t)(struct otx2_parse_state *pst); + +static int +flow_parse_pattern(struct rte_eth_dev *dev, + const struct rte_flow_item pattern[], + struct rte_flow_error *error, + struct rte_flow *flow, + struct otx2_parse_state *pst) +{ + flow_parse_stage_func_t parse_stage_funcs[] = { + flow_parse_meta_items, + otx2_flow_parse_la, + otx2_flow_parse_lb, + otx2_flow_parse_lc, + otx2_flow_parse_ld, + otx2_flow_parse_le, + otx2_flow_parse_lf, + otx2_flow_parse_lg, + otx2_flow_parse_lh, + }; + struct otx2_eth_dev *hw = dev->data->dev_private; + uint8_t layer = 0; + int key_offset; + int rc; + + if (pattern == NULL) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM_NUM, NULL, + "pattern is NULL"); + return -EINVAL; + } + + memset(pst, 0, sizeof(*pst)); + pst->npc = &hw->npc_flow; + pst->error = error; + pst->flow = flow; + + /* Use integral byte offset */ + key_offset = pst->npc->keyx_len[flow->nix_intf]; + key_offset = (key_offset + 7) / 8; + + /* Location where LDATA would begin */ + pst->mcam_data = (uint8_t *)flow->mcam_data; + pst->mcam_mask = (uint8_t *)flow->mcam_mask; + + while (pattern->type != RTE_FLOW_ITEM_TYPE_END && + layer < RTE_DIM(parse_stage_funcs)) { + otx2_npc_dbg("Pattern type = %d", pattern->type); + + /* Skip place-holders */ + pattern = otx2_flow_skip_void_and_any_items(pattern); + + pst->pattern = pattern; + otx2_npc_dbg("Is tunnel = %d, layer = %d", pst->tunnel, layer); + rc = parse_stage_funcs[layer](pst); + if (rc != 0) + return -rte_errno; + + layer++; + + /* + * Parse stage function sets pst->pattern to + * 1 past the last item it consumed. + */ + pattern = pst->pattern; + + if (pst->terminate) + break; + } + + /* Skip trailing place-holders */ + pattern = otx2_flow_skip_void_and_any_items(pattern); + + /* Are there more items than what we can handle? */ + if (pattern->type != RTE_FLOW_ITEM_TYPE_END) { + rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, pattern, + "unsupported item in the sequence"); + return -ENOTSUP; + } + + return 0; +} + +static int +flow_parse_rule(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error, + struct rte_flow *flow, + struct otx2_parse_state *pst) +{ + int err; + + /* Check attributes */ + err = flow_parse_attr(dev, attr, error, flow); + if (err) + return err; + + /* Check actions */ + err = otx2_flow_parse_actions(dev, attr, actions, error, flow); + if (err) + return err; + + /* Check pattern */ + err = flow_parse_pattern(dev, pattern, error, flow, pst); + if (err) + return err; + + /* Check for overlaps? 
*/ + return 0; +} + +static int +otx2_flow_validate(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct otx2_parse_state parse_state; + struct rte_flow flow; + + memset(&flow, 0, sizeof(flow)); + return flow_parse_rule(dev, attr, pattern, actions, error, &flow, + &parse_state); +} + +static struct rte_flow * +otx2_flow_create(struct rte_eth_dev *dev, + const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow_error *error) +{ + struct otx2_eth_dev *hw = dev->data->dev_private; + struct otx2_parse_state parse_state; + struct otx2_mbox *mbox = hw->mbox; + struct rte_flow *flow, *flow_iter; + struct otx2_flow_list *list; + int rc; + + flow = rte_zmalloc("otx2_rte_flow", sizeof(*flow), 0); + if (flow == NULL) { + rte_flow_error_set(error, ENOMEM, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Memory allocation failed"); + return NULL; + } + memset(flow, 0, sizeof(*flow)); + + rc = flow_parse_rule(dev, attr, pattern, actions, error, flow, + &parse_state); + if (rc != 0) + goto err_exit; + + rc = flow_program_npc(&parse_state, mbox, &hw->npc_flow); + if (rc != 0) { + rte_flow_error_set(error, EIO, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Failed to insert filter"); + goto err_exit; + } + + rc = flow_program_rss_action(dev, actions, flow); + if (rc != 0) { + rte_flow_error_set(error, EIO, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Failed to program rss action"); + goto err_exit; + } + + + list = &hw->npc_flow.flow_list[flow->priority]; + /* List in ascending order of mcam entries */ + TAILQ_FOREACH(flow_iter, list, next) { + if (flow_iter->mcam_id > flow->mcam_id) { + TAILQ_INSERT_BEFORE(flow_iter, flow, next); + return flow; + } + } + + TAILQ_INSERT_TAIL(list, flow, next); + return flow; + +err_exit: + rte_free(flow); + return NULL; +} + +const struct rte_flow_ops otx2_flow_ops = { + .validate = otx2_flow_validate, + .create = otx2_flow_create, +}; From patchwork Sun Jun 2 15:24:17 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54084 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 09A6B1BB76; Sun, 2 Jun 2019 17:26:45 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 34F141BBB4 for ; Sun, 2 Jun 2019 17:26:43 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FKKpB020361; Sun, 2 Jun 2019 08:26:42 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=XaE337EoDpopfagSitHYv3xXR7VUbJWyZfMr7ACZmVA=; b=s3KwIN402FalW/7mkOYqepZ0oYVo+5sLdZgV1Nof/sR780RXYZE3YaQGgWkxts4e2Pn5 INnrrNFrFf0Gpn3HA3sFS3r4CdRuakEwSyeEvG6H1kVH9JBqbYq0AIs4vf81Ygbe92kl UzH/H6EFbkdNdosMlJ0cs3mU8Vl0R2eBM43Kn8Br9LOJdCeAKXpqYtkCvV4y+wcl8/5m dRHjXxE20NFtJbHwfAPPtXh8pq5Y1eW/F3ZtHRSPDuBQhPN3WPrTgujG3jkIQNicB9Bm hqx9ZnYMAVZuwdw+4LLX4W5mE5Ob/xDXEUweATkKjEmTllvV5VT6SW6jMqsYUiFBbf3c Gg== Received: from 
sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk498k-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:26:42 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:26:41 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:26:41 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id A43663F703F; Sun, 2 Jun 2019 08:26:39 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Vivek Sharma Date: Sun, 2 Jun 2019 20:54:17 +0530 Message-ID: <20190602152434.23996-42-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 41/58] net/octeontx2: add additional flow operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Kiran Kumar K Adding the additional flow ops like flow_destroy, flow_flush, flow_query and flow_isolate. These will be used to destroy and flush the installed flow rules and to query the flow rule counters. Signed-off-by: Kiran Kumar K Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/otx2_flow.c | 197 ++++++++++++++++++++++++++++++ drivers/net/octeontx2/otx2_rx.h | 3 + 2 files changed, 200 insertions(+) diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c index d1e1c4411..33fdafeb7 100644 --- a/drivers/net/octeontx2/otx2_flow.c +++ b/drivers/net/octeontx2/otx2_flow.c @@ -5,6 +5,39 @@ #include "otx2_ethdev.h" #include "otx2_flow.h" +static int +flow_free_all_resources(struct otx2_eth_dev *hw) +{ + struct otx2_npc_flow_info *npc = &hw->npc_flow; + struct otx2_mbox *mbox = hw->mbox; + struct otx2_mcam_ents_info *info; + struct rte_bitmap *bmap; + struct rte_flow *flow; + int rc, idx; + + /* Free all MCAM entries allocated */ + rc = otx2_flow_mcam_free_all_entries(mbox); + + /* Free any MCAM counters and delete flow list */ + for (idx = 0; idx < npc->flow_max_priority; idx++) { + while ((flow = TAILQ_FIRST(&npc->flow_list[idx])) != NULL) { + if (flow->ctr_id != NPC_COUNTER_NONE) + rc |= otx2_flow_mcam_free_counter(mbox, + flow->ctr_id); + + TAILQ_REMOVE(&npc->flow_list[idx], flow, next); + /* Release the bitmap slot before freeing the flow + * entry that it refers to. + */ + bmap = npc->live_entries[flow->priority]; + rte_bitmap_clear(bmap, flow->mcam_id); + rte_free(flow); + } + info = &npc->flow_entry_info[idx]; + info->free_ent = 0; + info->live_ent = 0; + } + return rc; +} + + static int flow_program_npc(struct otx2_parse_state *pst, struct otx2_mbox *mbox, struct otx2_npc_flow_info *flow_info) @@ -216,6 +249,27 @@ flow_program_rss_action(struct rte_eth_dev *eth_dev, return 0; } +static int +flow_free_rss_action(struct rte_eth_dev *eth_dev, + struct rte_flow *flow) +{ + struct otx2_eth_dev *dev = eth_dev->data->dev_private; + struct otx2_npc_flow_info *npc = &dev->npc_flow; + uint32_t rss_grp; + + if (flow->npc_action & NIX_RX_ACTIONOP_RSS) { + rss_grp = (flow->npc_action >> NIX_RSS_ACT_GRP_OFFSET) & + NIX_RSS_ACT_GRP_MASK;
+ if (rss_grp == 0 || rss_grp >= npc->rss_grps) + return -EINVAL; + + rte_bitmap_clear(npc->rss_grp_entries, rss_grp); + } + + return 0; +} + + static int flow_parse_meta_items(__rte_unused struct otx2_parse_state *pst) { @@ -424,7 +478,150 @@ otx2_flow_create(struct rte_eth_dev *dev, return NULL; } +static int +otx2_flow_destroy(struct rte_eth_dev *dev, + struct rte_flow *flow, + struct rte_flow_error *error) +{ + struct otx2_eth_dev *hw = dev->data->dev_private; + struct otx2_npc_flow_info *npc = &hw->npc_flow; + struct otx2_mbox *mbox = hw->mbox; + struct rte_bitmap *bmap; + uint16_t match_id; + int rc; + + match_id = (flow->npc_action >> NIX_RX_ACT_MATCH_OFFSET) & + NIX_RX_ACT_MATCH_MASK; + + if (match_id && match_id < OTX2_FLOW_ACTION_FLAG_DEFAULT) { + if (rte_atomic32_read(&npc->mark_actions) == 0) + return -EINVAL; + + /* Clear mark offload flag if there are no more mark actions */ + if (rte_atomic32_sub_return(&npc->mark_actions, 1) == 0) + hw->rx_offload_flags &= ~NIX_RX_OFFLOAD_MARK_UPDATE_F; + } + + rc = flow_free_rss_action(dev, flow); + if (rc != 0) { + rte_flow_error_set(error, EIO, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Failed to free rss action"); + } + + rc = otx2_flow_mcam_free_entry(mbox, flow->mcam_id); + if (rc != 0) { + rte_flow_error_set(error, EIO, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Failed to destroy filter"); + } + + TAILQ_REMOVE(&npc->flow_list[flow->priority], flow, next); + + bmap = npc->live_entries[flow->priority]; + rte_bitmap_clear(bmap, flow->mcam_id); + + rte_free(flow); + return 0; +} + +static int +otx2_flow_flush(struct rte_eth_dev *dev, + struct rte_flow_error *error) +{ + struct otx2_eth_dev *hw = dev->data->dev_private; + int rc; + + rc = flow_free_all_resources(hw); + if (rc) { + otx2_err("Error when deleting NPC MCAM entries, counters"); + rte_flow_error_set(error, EIO, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Failed to flush filter"); + return -rte_errno; + } + + return 0; +} + +static int +otx2_flow_isolate(struct rte_eth_dev *dev __rte_unused, + int enable __rte_unused, + struct rte_flow_error *error) +{ + /* + * If we ever support flow isolation, we need to un-install + * the default mcam entry for this port.
+ */ + + rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + "Flow isolation not supported"); + + return -rte_errno; +} + +static int +otx2_flow_query(struct rte_eth_dev *dev, + struct rte_flow *flow, + const struct rte_flow_action *action, + void *data, + struct rte_flow_error *error) +{ + struct otx2_eth_dev *hw = dev->data->dev_private; + struct rte_flow_query_count *query = data; + struct otx2_mbox *mbox = hw->mbox; + const char *errmsg = NULL; + int errcode = ENOTSUP; + int rc; + + if (action->type != RTE_FLOW_ACTION_TYPE_COUNT) { + errmsg = "Only COUNT is supported in query"; + goto err_exit; + } + + if (flow->ctr_id == NPC_COUNTER_NONE) { + errmsg = "Counter is not available"; + goto err_exit; + } + + rc = otx2_flow_mcam_read_counter(mbox, flow->ctr_id, &query->hits); + if (rc != 0) { + errcode = EIO; + errmsg = "Error reading flow counter"; + goto err_exit; + } + query->hits_set = 1; + query->bytes_set = 0; + + if (query->reset) + rc = otx2_flow_mcam_clear_counter(mbox, flow->ctr_id); + if (rc != 0) { + errcode = EIO; + errmsg = "Error clearing flow counter"; + goto err_exit; + } + + return 0; + +err_exit: + rte_flow_error_set(error, errcode, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, + errmsg); + return -rte_errno; +} + const struct rte_flow_ops otx2_flow_ops = { .validate = otx2_flow_validate, .create = otx2_flow_create, + .destroy = otx2_flow_destroy, + .flush = otx2_flow_flush, + .query = otx2_flow_query, + .isolate = otx2_flow_isolate, }; diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h index b9c9ff3cc..687cf2b40 100644 --- a/drivers/net/octeontx2/otx2_rx.h +++ b/drivers/net/octeontx2/otx2_rx.h @@ -5,6 +5,9 @@ #ifndef __OTX2_RX_H__ #define __OTX2_RX_H__ +/* Default mark value used when none is provided. 
*/ +#define OTX2_FLOW_ACTION_FLAG_DEFAULT 0xffff + #define PTYPE_WIDTH 12 #define PTYPE_NON_TUNNEL_ARRAY_SZ BIT(PTYPE_WIDTH) #define PTYPE_TUNNEL_ARRAY_SZ BIT(PTYPE_WIDTH) From patchwork Sun Jun 2 15:24:18 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54085 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 774FF1BADD; Sun, 2 Jun 2019 17:26:48 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 8B5CB1BBE1 for ; Sun, 2 Jun 2019 17:26:46 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FK7HD020263; Sun, 2 Jun 2019 08:26:46 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=s5uel4FLt/4NH1XGIYSEU0d3HWV/dEQkkfKVUaCXTQs=; b=GG8XYDR/NGzV114ZucuC+IN/NiX3BjI683QZTPpQ540mESQgjO6Gfbxzv2QLC29DY+mu hrkZYqlx2SBJxDRMbpusu1do1O9sTbCiVyNMBI+Hg5TKRIv4AVJ0ZMUnxV0rc/sULnLQ kldmfB7PLaQIiCFy0w/aV5XN4483Pa0YKV+kp5lKon5x5w2G5KXfCBM4lh/M61QeXQvP iWOcgqM6NecrkjYuGm4RMYY/J7TeJKfOHcnShbl1Hy1pRUrtMFUckaA6a3b/ZOja/W3H atBcLCV5hR55lqPvHjQRv+BX7AOEs7SBbGaGU4ZKI9RQc8p6WGSD/XPn/XiWpsecNyGk fw== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk498p-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:26:45 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:26:44 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:26:44 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 973AA3F703F; Sun, 2 Jun 2019 08:26:42 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Vivek Sharma Date: Sun, 2 Jun 2019 20:54:18 +0530 Message-ID: <20190602152434.23996-43-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 42/58] net/octeontx2: add flow init and fini X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Kiran Kumar K Adding the flow init and fini functionality. These APIs will be called from device init/uninit and will initialize and de-initialize the flow-related memory.
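For context, a minimal sketch of how these entry points are meant to be driven (illustrative only; the real ethdev hookup lands in a later patch of this series, and the example_* wrappers below are assumed names, not driver code):

/* Sketch: device init/uninit driving the flow module added here.
 * otx2_flow_init()/otx2_flow_fini() are the entry points from this
 * patch; the example_* wrappers are illustrative stand-ins.
 */
static int
example_eth_dev_init(struct rte_eth_dev *eth_dev)
{
	struct otx2_eth_dev *dev = eth_dev->data->dev_private;

	/* Fetch the NPC kex config and allocate MCAM entry bitmaps,
	 * per-priority flow lists and RSS group bookkeeping.
	 */
	return otx2_flow_init(dev);
}

static int
example_eth_dev_uninit(struct rte_eth_dev *eth_dev)
{
	struct otx2_eth_dev *dev = eth_dev->data->dev_private;

	/* Free all MCAM entries/counters and the flow bookkeeping memory */
	return otx2_flow_fini(dev);
}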
Signed-off-by: Kiran Kumar K Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/otx2_flow.c | 315 ++++++++++++++++++++++++++++++ 1 file changed, 315 insertions(+) diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c index 33fdafeb7..1fbe6b86e 100644 --- a/drivers/net/octeontx2/otx2_flow.c +++ b/drivers/net/octeontx2/otx2_flow.c @@ -625,3 +625,318 @@ const struct rte_flow_ops otx2_flow_ops = { .query = otx2_flow_query, .isolate = otx2_flow_isolate, }; + +static int +flow_supp_key_len(uint32_t supp_mask) +{ + int nib_count = 0; + while (supp_mask) { + nib_count++; + supp_mask &= (supp_mask - 1); + } + return nib_count * 4; +} + +/* Refer HRM register: + * NPC_AF_INTF(0..1)_LID(0..7)_LT(0..15)_LD(0..1)_CFG + * and + * NPC_AF_INTF(0..1)_LDATA(0..1)_FLAGS(0..15)_CFG + **/ +#define BYTESM1_SHIFT 16 +#define HDR_OFF_SHIFT 8 +static void +flow_update_kex_info(struct npc_xtract_info *xtract_info, + uint64_t val) +{ + xtract_info->len = ((val >> BYTESM1_SHIFT) & 0xf) + 1; + xtract_info->hdr_off = (val >> HDR_OFF_SHIFT) & 0xff; + xtract_info->key_off = val & 0x3f; + xtract_info->enable = ((val >> 7) & 0x1); +} + +static void +flow_process_mkex_cfg(struct otx2_npc_flow_info *npc, + struct npc_get_kex_cfg_rsp *kex_rsp) +{ + volatile uint64_t (*q)[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT] + [NPC_MAX_LD]; + struct npc_xtract_info *x_info = NULL; + int lid, lt, ld, fl, ix; + otx2_dxcfg_t *p; + uint64_t keyw; + uint64_t val; + + npc->keyx_supp_nmask[NPC_MCAM_RX] = + kex_rsp->rx_keyx_cfg & 0x7fffffffULL; + npc->keyx_supp_nmask[NPC_MCAM_TX] = + kex_rsp->tx_keyx_cfg & 0x7fffffffULL; + npc->keyx_len[NPC_MCAM_RX] = + flow_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_RX]); + npc->keyx_len[NPC_MCAM_TX] = + flow_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_TX]); + + keyw = (kex_rsp->rx_keyx_cfg >> 32) & 0x7ULL; + npc->keyw[NPC_MCAM_RX] = keyw; + keyw = (kex_rsp->tx_keyx_cfg >> 32) & 0x7ULL; + npc->keyw[NPC_MCAM_TX] = keyw; + + /* Update KEX_LD_FLAG */ + for (ix = 0; ix < NPC_MAX_INTF; ix++) { + for (ld = 0; ld < NPC_MAX_LD; ld++) { + for (fl = 0; fl < NPC_MAX_LFL; fl++) { + x_info = + &npc->prx_fxcfg[ix][ld][fl].xtract[0]; + val = kex_rsp->intf_ld_flags[ix][ld][fl]; + flow_update_kex_info(x_info, val); + } + } + } + + /* Update LID, LT and LDATA cfg */ + p = &npc->prx_dxcfg; + q = (volatile uint64_t (*)[][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD]) + (&kex_rsp->intf_lid_lt_ld); + for (ix = 0; ix < NPC_MAX_INTF; ix++) { + for (lid = 0; lid < NPC_MAX_LID; lid++) { + for (lt = 0; lt < NPC_MAX_LT; lt++) { + for (ld = 0; ld < NPC_MAX_LD; ld++) { + x_info = &(*p)[ix][lid][lt].xtract[ld]; + val = (*q)[ix][lid][lt][ld]; + flow_update_kex_info(x_info, val); + } + } + } + } + /* Update LDATA Flags cfg */ + npc->prx_lfcfg[0].i = kex_rsp->kex_ld_flags[0]; + npc->prx_lfcfg[1].i = kex_rsp->kex_ld_flags[1]; +} + +static struct otx2_idev_kex_cfg * +flow_intra_dev_kex_cfg(void) +{ + static const char name[] = "octeontx2_intra_device_kex_conf"; + struct otx2_idev_kex_cfg *idev; + const struct rte_memzone *mz; + + mz = rte_memzone_lookup(name); + if (mz) + return mz->addr; + + /* Request for the first time */ + mz = rte_memzone_reserve_aligned(name, sizeof(struct otx2_idev_kex_cfg), + SOCKET_ID_ANY, 0, OTX2_ALIGN); + if (mz) { + idev = mz->addr; + rte_atomic16_set(&idev->kex_refcnt, 0); + return idev; + } + return NULL; +} + +static int +flow_fetch_kex_cfg(struct otx2_eth_dev *dev) +{ + struct otx2_npc_flow_info *npc = &dev->npc_flow; + struct npc_get_kex_cfg_rsp *kex_rsp; + struct otx2_mbox *mbox = dev->mbox; + 
struct otx2_idev_kex_cfg *idev; + int rc = 0; + + idev = flow_intra_dev_kex_cfg(); + if (!idev) + return -ENOMEM; + + /* Is kex_cfg read by any other driver? */ + if (rte_atomic16_add_return(&idev->kex_refcnt, 1) == 1) { + /* Call mailbox to get key & data size */ + (void)otx2_mbox_alloc_msg_npc_get_kex_cfg(mbox); + otx2_mbox_msg_send(mbox, 0); + rc = otx2_mbox_get_rsp(mbox, 0, (void *)&kex_rsp); + if (rc) { + otx2_err("Failed to fetch NPC keyx config"); + goto done; + } + memcpy(&idev->kex_cfg, kex_rsp, + sizeof(struct npc_get_kex_cfg_rsp)); + } + + flow_process_mkex_cfg(npc, &idev->kex_cfg); + +done: + return rc; +} + +int +otx2_flow_init(struct otx2_eth_dev *hw) +{ + uint8_t *mem = NULL, *nix_mem = NULL, *npc_mem = NULL; + struct otx2_npc_flow_info *npc = &hw->npc_flow; + uint32_t bmap_sz; + int rc = 0, idx; + + rc = flow_fetch_kex_cfg(hw); + if (rc) { + otx2_err("Failed to fetch NPC keyx config from idev"); + return rc; + } + + rte_atomic32_init(&npc->mark_actions); + + npc->mcam_entries = NPC_MCAM_TOT_ENTRIES >> npc->keyw[NPC_MCAM_RX]; + /* Free, free_rev, live and live_rev entries */ + bmap_sz = rte_bitmap_get_memory_footprint(npc->mcam_entries); + mem = rte_zmalloc(NULL, 4 * bmap_sz * npc->flow_max_priority, + RTE_CACHE_LINE_SIZE); + if (mem == NULL) { + otx2_err("Bmap alloc failed"); + rc = -ENOMEM; + return rc; + } + /* Remember the base allocation so the error path can free it */ + npc_mem = mem; + + npc->flow_entry_info = rte_zmalloc(NULL, npc->flow_max_priority + * sizeof(struct otx2_mcam_ents_info), + 0); + if (npc->flow_entry_info == NULL) { + otx2_err("flow_entry_info alloc failed"); + rc = -ENOMEM; + goto err; + } + + npc->free_entries = rte_zmalloc(NULL, npc->flow_max_priority + * sizeof(struct rte_bitmap), + 0); + if (npc->free_entries == NULL) { + otx2_err("free_entries alloc failed"); + rc = -ENOMEM; + goto err; + } + + npc->free_entries_rev = rte_zmalloc(NULL, npc->flow_max_priority + * sizeof(struct rte_bitmap), + 0); + if (npc->free_entries_rev == NULL) { + otx2_err("free_entries_rev alloc failed"); + rc = -ENOMEM; + goto err; + } + + npc->live_entries = rte_zmalloc(NULL, npc->flow_max_priority + * sizeof(struct rte_bitmap), + 0); + if (npc->live_entries == NULL) { + otx2_err("live_entries alloc failed"); + rc = -ENOMEM; + goto err; + } + + npc->live_entries_rev = rte_zmalloc(NULL, npc->flow_max_priority + * sizeof(struct rte_bitmap), + 0); + if (npc->live_entries_rev == NULL) { + otx2_err("live_entries_rev alloc failed"); + rc = -ENOMEM; + goto err; + } + + npc->flow_list = rte_zmalloc(NULL, npc->flow_max_priority + * sizeof(struct otx2_flow_list), + 0); + if (npc->flow_list == NULL) { + otx2_err("flow_list alloc failed"); + rc = -ENOMEM; + goto err; + } + + for (idx = 0; idx < npc->flow_max_priority; idx++) { + TAILQ_INIT(&npc->flow_list[idx]); + + npc->free_entries[idx] = + rte_bitmap_init(npc->mcam_entries, mem, bmap_sz); + mem += bmap_sz; + + npc->free_entries_rev[idx] = + rte_bitmap_init(npc->mcam_entries, mem, bmap_sz); + mem += bmap_sz; + + npc->live_entries[idx] = + rte_bitmap_init(npc->mcam_entries, mem, bmap_sz); + mem += bmap_sz; + + npc->live_entries_rev[idx] = + rte_bitmap_init(npc->mcam_entries, mem, bmap_sz); + mem += bmap_sz; + + npc->flow_entry_info[idx].free_ent = 0; + npc->flow_entry_info[idx].live_ent = 0; + npc->flow_entry_info[idx].max_id = 0; + npc->flow_entry_info[idx].min_id = ~(0); + } + + npc->rss_grps = NIX_RSS_GRPS; + + bmap_sz = rte_bitmap_get_memory_footprint(npc->rss_grps); + nix_mem = rte_zmalloc(NULL, bmap_sz, RTE_CACHE_LINE_SIZE); + if (nix_mem == NULL) { + otx2_err("Bmap alloc failed"); + rc =
-ENOMEM; + goto err; + } + + npc->rss_grp_entries = rte_bitmap_init(npc->rss_grps, nix_mem, bmap_sz); + + /* Group 0 will be used for RSS, + * 1 -7 will be used for rte_flow RSS action + */ + rte_bitmap_set(npc->rss_grp_entries, 0); + + return 0; + +err: + if (npc->flow_list) + rte_free(npc->flow_list); + if (npc->live_entries_rev) + rte_free(npc->live_entries_rev); + if (npc->live_entries) + rte_free(npc->live_entries); + if (npc->free_entries_rev) + rte_free(npc->free_entries_rev); + if (npc->free_entries) + rte_free(npc->free_entries); + if (npc->flow_entry_info) + rte_free(npc->flow_entry_info); + if (npc_mem) + rte_free(npc_mem); + if (nix_mem) + rte_free(nix_mem); + return rc; +} + +int +otx2_flow_fini(struct otx2_eth_dev *hw) +{ + struct otx2_npc_flow_info *npc = &hw->npc_flow; + int rc; + + rc = flow_free_all_resources(hw); + if (rc) { + otx2_err("Error when deleting NPC MCAM entries, counters"); + return rc; + } + + if (npc->flow_list) + rte_free(npc->flow_list); + if (npc->live_entries_rev) + rte_free(npc->live_entries_rev); + if (npc->live_entries) + rte_free(npc->live_entries); + if (npc->free_entries_rev) + rte_free(npc->free_entries_rev); + if (npc->free_entries) + rte_free(npc->free_entries); + if (npc->flow_entry_info) + rte_free(npc->flow_entry_info); + + return 0; +} From patchwork Sun Jun 2 15:24:19 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54086 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 706921BBEA; Sun, 2 Jun 2019 17:26:51 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id 70B9F1BBE3 for ; Sun, 2 Jun 2019 17:26:49 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FJmwT021277; Sun, 2 Jun 2019 08:26:48 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=oKoKocb0+JGZf/9yBzhsTns/Xtb+DVs22jCbX8oacKw=; b=EXCtYTmwPi15LOgks/kCisH9x2jRLNayJUc96H/l9iVlPpPoKITaWrslWdLxtK/lsL5Q WACPIY3acuBUNZ2BEk4UkfybgQ8GsHQwzi3DAZCz6uxOGBWQbRE5GBe7Ub5Pi/JNqDLF CZtARqcqM/Dqm0M281vSmc2hbya9sAezNccp3DeYSbHw2MCeEUvrjrZwNjfGffR757B8 +wNYBMEn9uR0D7GwOpLJ6QAy2O5fYUcIcP1IC0gTSvZH9slbS3aux01bQihV4MAvrHpQ k+7cCaWM/2y/0LfA1HJZC8c34XLTJXHF+Qo+lAmKm53iBKRrLKRe31gFfD8CgrL9hKG3 3w== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0a-0016f401.pphosted.com with ESMTP id 2supqkvqmk-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:26:48 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:26:47 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:26:47 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id B06293F703F; Sun, 2 Jun 2019 08:26:45 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin 
Dabilpuram" , Kiran Kumar K CC: , Vivek Sharma Date: Sun, 2 Jun 2019 20:54:19 +0530 Message-ID: <20190602152434.23996-44-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 43/58] net/octeontx2: connect flow API to ethdev ops X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vivek Sharma Connect rte_flow driver ops to ethdev via .filter_ctrl op. Signed-off-by: Vivek Sharma Signed-off-by: Kiran Kumar K --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + doc/guides/nics/features/octeontx2_vf.ini | 1 + drivers/net/octeontx2/otx2_ethdev.c | 10 ++++++++++ drivers/net/octeontx2/otx2_ethdev.h | 3 +++ drivers/net/octeontx2/otx2_ethdev_ops.c | 21 +++++++++++++++++++++ 6 files changed, 37 insertions(+) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 0f416ee4b..4917057f6 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -22,6 +22,7 @@ RSS key update = Y RSS reta update = Y Inner RSS = Y Flow control = Y +Flow API = Y Packet type parsing = Y Timesync = Y Timestamp offload = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index b909918ce..9049e8e99 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -22,6 +22,7 @@ RSS key update = Y RSS reta update = Y Inner RSS = Y Flow control = Y +Flow API = Y Packet type parsing = Y Rx descriptor status = Y Basic stats = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 812d5d649..735b7447a 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -17,6 +17,7 @@ RSS hash = Y RSS key update = Y RSS reta update = Y Inner RSS = Y +Flow API = Y Packet type parsing = Y Rx descriptor status = Y Basic stats = Y diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 9cd3ce407..bda5b4aa4 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1079,6 +1079,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) /* Free the resources allocated from the previous configure */ if (dev->configured == 1) { otx2_nix_rxchan_bpid_cfg(eth_dev, false); + otx2_flow_fini(dev); oxt2_nix_unregister_queue_irqs(eth_dev); nix_set_nop_rxtx_function(eth_dev); rc = nix_store_queue_cfg_and_then_release(eth_dev); @@ -1324,6 +1325,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .rx_descriptor_status = otx2_nix_rx_descriptor_status, .tx_done_cleanup = otx2_nix_tx_done_cleanup, .pool_ops_supported = otx2_nix_pool_ops_supported, + .filter_ctrl = otx2_nix_dev_filter_ctrl, .get_module_info = otx2_nix_get_module_info, .get_module_eeprom = otx2_nix_get_module_eeprom, .flow_ctrl_get = otx2_nix_flow_ctrl_get, @@ -1503,6 +1505,11 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev) dev->hwcap |= OTX2_FIXUP_F_LIMIT_CQ_FULL; } + /* Initialize rte-flow */ + rc = otx2_flow_init(dev); + if (rc) + goto free_mac_addrs; + 
otx2_nix_dbg("Port=%d pf=%d vf=%d ver=%s msix_off=%d hwcap=0x%" PRIx64 " rxoffload_capa=0x%" PRIx64 " txoffload_capa=0x%" PRIx64, eth_dev->data->port_id, dev->pf, dev->vf, @@ -1539,6 +1546,9 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) /* Disable nix bpid config */ otx2_nix_rxchan_bpid_cfg(eth_dev, false); + /* Disable other rte_flow entries */ + otx2_flow_fini(dev); + /* Disable PTP if already enabled */ if (otx2_ethdev_is_ptp_en(dev)) otx2_nix_timesync_disable(eth_dev); diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 1edc7da29..e9123641c 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -274,6 +274,9 @@ otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev) /* Ops */ void otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info); +int otx2_nix_dev_filter_ctrl(struct rte_eth_dev *eth_dev, + enum rte_filter_type filter_type, + enum rte_filter_op filter_op, void *arg); int otx2_nix_get_module_info(struct rte_eth_dev *eth_dev, struct rte_eth_dev_module_info *modinfo); int otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev, diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c index 51c156786..1da9222b7 100644 --- a/drivers/net/octeontx2/otx2_ethdev_ops.c +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c @@ -220,6 +220,27 @@ otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool) return -ENOTSUP; } +int +otx2_nix_dev_filter_ctrl(struct rte_eth_dev *eth_dev, + enum rte_filter_type filter_type, + enum rte_filter_op filter_op, void *arg) +{ + RTE_SET_USED(eth_dev); + + if (filter_type != RTE_ETH_FILTER_GENERIC) { + otx2_err("Unsupported filter type %d", filter_type); + return -ENOTSUP; + } + + if (filter_op == RTE_ETH_FILTER_GET) { + *(const void **)arg = &otx2_flow_ops; + return 0; + } + + otx2_err("Invalid filter_op %d", filter_op); + return -EINVAL; +} + static struct cgx_fw_data * nix_get_fwdata(struct otx2_eth_dev *dev) { From patchwork Sun Jun 2 15:24:20 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54087 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id B6D321BB47; Sun, 2 Jun 2019 17:26:54 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id EEA651BBEE for ; Sun, 2 Jun 2019 17:26:52 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FLOCk021032; Sun, 2 Jun 2019 08:26:52 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=xodww+tvfl7gKh1+d9KO9LYdPMLvKIq+XkxkVEOby3c=; b=vLXB/zMnWQG1nY3Rk23mGr2wi93WaMPW9ChwurWDmB3n2hwf/R1WfU0HhlFl4ODsQaeF rwCiHGzq3dCttcddr4QYGB4N4gjIH1v3GbXq3v+bvYnRAlyWuNWwx2SMwKjGL60gRYCa tOdlUTYmJA+kNx5sR4hUGopYanq0x14GWBUANVwcJPGgBdMr1+kUym3zWYomtDXGDE8T 81556ZmepHL8XnXlLjAIVqEu0/eg2DYr8zon1JXfprAxdvpPDLASdx6oqNXKhVAVr+8x chAkWjQGnq/FWFmgNEKAOjwX3KMeHa3chzRsGFeqKPDhguw4mMLC9lAQ/+yfUBYb0SUe VQ== Received: from sc-exch01.marvell.com ([199.233.58.181]) by 
mx0b-0016f401.pphosted.com with ESMTP id 2survk4997-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:26:52 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:26:50 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:26:50 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 2305C3F703F; Sun, 2 Jun 2019 08:26:48 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Vivek Sharma Date: Sun, 2 Jun 2019 20:54:20 +0530 Message-ID: <20190602152434.23996-45-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 44/58] net/octeontx2: implement VLAN utility functions X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vivek Sharma Implement utility functions needed for VLAN functionality. Introduce VLAN-related structures as well. The maximum vtag insertion size is controlled by the SMQ configuration, so this patch also configures SMQ to support up to double vtag insertion. Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 1 + drivers/net/octeontx2/otx2_ethdev.c | 10 ++ drivers/net/octeontx2/otx2_ethdev.h | 48 +++++++ drivers/net/octeontx2/otx2_tm.c | 5 +- drivers/net/octeontx2/otx2_vlan.c | 190 ++++++++++++++++++++++ 6 files changed, 253 insertions(+), 2 deletions(-) create mode 100644 drivers/net/octeontx2/otx2_vlan.c diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index d651c8c50..b1cc6d83b 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -36,6 +36,7 @@ SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ otx2_ptp.c \ otx2_flow.c \ otx2_link.c \ + otx2_vlan.c \ otx2_stats.c \ otx2_lookup.c \ otx2_ethdev.c \ diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index a2c494bb4..d5f272c8b 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -9,6 +9,7 @@ sources = files( 'otx2_ptp.c', 'otx2_flow.c', 'otx2_link.c', + 'otx2_vlan.c', 'otx2_stats.c', 'otx2_lookup.c', 'otx2_ethdev.c', diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index bda5b4aa4..cfc22a2da 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1079,6 +1079,7 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) /* Free the resources allocated from the previous configure */ if (dev->configured == 1) { otx2_nix_rxchan_bpid_cfg(eth_dev, false); + otx2_nix_vlan_fini(eth_dev); otx2_flow_fini(dev); oxt2_nix_unregister_queue_irqs(eth_dev); nix_set_nop_rxtx_function(eth_dev); @@ -1126,6 +1127,12 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) goto free_nix_lf; } + rc = otx2_nix_vlan_offload_init(eth_dev); + if (rc) { + otx2_err("Failed to init vlan offload rc=%d", rc); + goto
free_nix_lf; + } + /* Register queue IRQs */ rc = oxt2_nix_register_queue_irqs(eth_dev); if (rc) { @@ -1546,6 +1553,9 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) /* Disable nix bpid config */ otx2_nix_rxchan_bpid_cfg(eth_dev, false); + /* Disable vlan offloads */ + otx2_nix_vlan_fini(eth_dev); + /* Disable other rte_flow entries */ otx2_flow_fini(dev); diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index e9123641c..b54018ae0 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -40,6 +40,7 @@ /* Used for struct otx2_eth_dev::flags */ #define OTX2_LINK_CFG_IN_PROGRESS_F BIT_ULL(0) +#define NIX_MAX_VTAG_INS 2 #define VLAN_TAG_SIZE 4 #define NIX_HW_L2_OVERHEAD 22 /* ETH_HLEN+2*VLAN_HLEN */ @@ -163,6 +164,47 @@ struct otx2_fc_info { uint16_t bpid[NIX_MAX_CHAN]; }; +struct vlan_mkex_info { + struct npc_xtract_info la_xtract; + struct npc_xtract_info lb_xtract; + uint64_t lb_lt_offset; +}; + +struct vlan_entry { + uint32_t mcam_idx; + uint16_t vlan_id; + TAILQ_ENTRY(vlan_entry) next; +}; + +TAILQ_HEAD(otx2_vlan_filter_tbl, vlan_entry); + +struct otx2_vlan_info { + struct otx2_vlan_filter_tbl fltr_tbl; + /* MKEX layer info */ + struct mcam_entry def_tx_mcam_ent; + struct mcam_entry def_rx_mcam_ent; + struct vlan_mkex_info mkex; + /* Default mcam entry that matches vlan packets */ + uint32_t def_rx_mcam_idx; + uint32_t def_tx_mcam_idx; + /* MCAM entry that matches double vlan packets */ + uint32_t qinq_mcam_idx; + /* Indices of tx_vtag def registers */ + uint32_t outer_vlan_idx; + uint32_t inner_vlan_idx; + uint16_t outer_vlan_tpid; + uint16_t inner_vlan_tpid; + uint16_t pvid; + /* QinQ entry allocated before default one */ + uint8_t qinq_before_def; + uint8_t pvid_insert_on; + /* Rx vtag action type */ + uint8_t vtag_type_idx; + uint8_t filter_on; + uint8_t strip_on; + uint8_t qinq_on; +}; + struct otx2_eth_dev { OTX2_DEV; /* Base class */ MARKER otx2_eth_dev_data_start; @@ -222,6 +264,7 @@ struct otx2_eth_dev { struct rte_timecounter systime_tc; struct rte_timecounter rx_tstamp_tc; struct rte_timecounter tx_tstamp_tc; + struct otx2_vlan_info vlan_info; } __rte_cache_aligned; struct otx2_eth_txq { @@ -422,4 +465,9 @@ int otx2_nix_timesync_read_time(struct rte_eth_dev *eth_dev, struct timespec *ts); int otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en); +/* VLAN */ +int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev); +int otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev); + + #endif /* __OTX2_ETHDEV_H__ */ diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c index 4439389b8..246920695 100644 --- a/drivers/net/octeontx2/otx2_tm.c +++ b/drivers/net/octeontx2/otx2_tm.c @@ -359,7 +359,7 @@ populate_tm_registers(struct otx2_eth_dev *dev, /* Set xoff which will be cleared later */ *reg++ = NIX_AF_SMQX_CFG(schq); - *regval++ = BIT_ULL(50) | + *regval++ = BIT_ULL(50) | ((uint64_t)NIX_MAX_VTAG_INS << 36) | (NIX_MAX_HW_FRS << 8) | NIX_MIN_HW_FRS; req->num_regs++; *reg++ = NIX_AF_MDQX_PARENT(schq); @@ -688,7 +688,8 @@ nix_smq_xoff(struct otx2_eth_dev *dev, uint16_t smq, bool enable) req->reg[0] = NIX_AF_SMQX_CFG(smq); /* Unmodified fields */ - req->regval[0] = (NIX_MAX_HW_FRS << 8) | NIX_MIN_HW_FRS; + req->regval[0] = ((uint64_t)NIX_MAX_VTAG_INS << 36) | + (NIX_MAX_HW_FRS << 8) | NIX_MIN_HW_FRS; if (enable) req->regval[0] |= BIT_ULL(50) | BIT_ULL(49); diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c new file mode 100644 index 
000000000..b3136d2cf --- /dev/null +++ b/drivers/net/octeontx2/otx2_vlan.c @@ -0,0 +1,190 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. + */ + +#include +#include + +#include "otx2_ethdev.h" +#include "otx2_flow.h" + + +#define VLAN_ID_MATCH 0x1 +#define VTAG_F_MATCH 0x2 +#define MAC_ADDR_MATCH 0x4 +#define QINQ_F_MATCH 0x8 +#define VLAN_DROP 0x10 + +enum vtag_cfg_dir { + VTAG_TX, + VTAG_RX +}; + +static int +__rte_unused nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev, + uint32_t entry, const int enable) +{ + struct npc_mcam_ena_dis_entry_req *req; + struct otx2_mbox *mbox = dev->mbox; + int rc = -EINVAL; + + if (enable) + req = otx2_mbox_alloc_msg_npc_mcam_ena_entry(mbox); + else + req = otx2_mbox_alloc_msg_npc_mcam_dis_entry(mbox); + + req->entry = entry; + + rc = otx2_mbox_process_msg(mbox, NULL); + return rc; +} + +static int +__rte_unused nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry) +{ + struct npc_mcam_free_entry_req *req; + struct otx2_mbox *mbox = dev->mbox; + int rc = -EINVAL; + + req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox); + req->entry = entry; + + rc = otx2_mbox_process_msg(mbox, NULL); + return rc; +} + +static int +__rte_unused nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx, + struct mcam_entry *entry, uint8_t intf) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct npc_mcam_write_entry_req *req; + struct otx2_mbox *mbox = dev->mbox; + struct msghdr *rsp; + int rc = -EINVAL; + + req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox); + + req->entry = ent_idx; + req->intf = intf; + req->enable_entry = 1; + memcpy(&req->entry_data, entry, sizeof(struct mcam_entry)); + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + return rc; +} + +static int +__rte_unused nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev, + struct mcam_entry *entry, + uint8_t intf, bool drop) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct npc_mcam_alloc_and_write_entry_req *req; + struct npc_mcam_alloc_and_write_entry_rsp *rsp; + struct otx2_mbox *mbox = dev->mbox; + int rc = -EINVAL; + + req = otx2_mbox_alloc_msg_npc_mcam_alloc_and_write_entry(mbox); + + if (intf == NPC_MCAM_RX) { + if (!drop && dev->vlan_info.def_rx_mcam_idx) { + req->priority = NPC_MCAM_HIGHER_PRIO; + req->ref_entry = dev->vlan_info.def_rx_mcam_idx; + } else if (drop && dev->vlan_info.qinq_mcam_idx) { + req->priority = NPC_MCAM_LOWER_PRIO; + req->ref_entry = dev->vlan_info.qinq_mcam_idx; + } else { + req->priority = NPC_MCAM_ANY_PRIO; + req->ref_entry = 0; + } + } else { + req->priority = NPC_MCAM_ANY_PRIO; + req->ref_entry = 0; + } + + req->intf = intf; + req->enable_entry = 1; + memcpy(&req->entry_data, entry, sizeof(struct mcam_entry)); + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + return rsp->entry; +} + +static int +nix_vlan_rx_mkex_offset(uint64_t mask) +{ + int nib_count = 0; + + while (mask) { + nib_count += mask & 1; + mask >>= 1; + } + + return nib_count * 4; +} + +static int +nix_vlan_get_mkex_info(struct otx2_eth_dev *dev) +{ + struct vlan_mkex_info *mkex = &dev->vlan_info.mkex; + struct otx2_npc_flow_info *npc = &dev->npc_flow; + struct npc_xtract_info *x_info = NULL; + uint64_t rx_keyx; + otx2_dxcfg_t *p; + int rc = -EINVAL; + + if (npc == NULL) { + otx2_err("Missing npc mkex configuration"); + return rc; + } + +#define NPC_KEX_CHAN_NIBBLE_ENA 0x7ULL +#define NPC_KEX_LB_LTYPE_NIBBLE_ENA 0x1000ULL +#define NPC_KEX_LB_LTYPE_NIBBLE_MASK 0xFFFULL + + rx_keyx = 
npc->keyx_supp_nmask[NPC_MCAM_RX]; + if ((rx_keyx & NPC_KEX_CHAN_NIBBLE_ENA) != NPC_KEX_CHAN_NIBBLE_ENA) + return rc; + + if ((rx_keyx & NPC_KEX_LB_LTYPE_NIBBLE_ENA) != + NPC_KEX_LB_LTYPE_NIBBLE_ENA) + return rc; + + mkex->lb_lt_offset = + nix_vlan_rx_mkex_offset(rx_keyx & NPC_KEX_LB_LTYPE_NIBBLE_MASK); + + p = &npc->prx_dxcfg; + x_info = &(*p)[NPC_MCAM_RX][NPC_LID_LA][NPC_LT_LA_ETHER].xtract[0]; + memcpy(&mkex->la_xtract, x_info, sizeof(struct npc_xtract_info)); + x_info = &(*p)[NPC_MCAM_RX][NPC_LID_LB][NPC_LT_LB_CTAG].xtract[0]; + memcpy(&mkex->lb_xtract, x_info, sizeof(struct npc_xtract_info)); + + return 0; +} + +int +otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int rc; + + /* Port initialized for first time or restarted */ + if (!dev->configured) { + rc = nix_vlan_get_mkex_info(dev); + if (rc) { + otx2_err("Failed to get vlan mkex info rc=%d", rc); + return rc; + } + } + return 0; +} + +int +otx2_nix_vlan_fini(__rte_unused struct rte_eth_dev *eth_dev) +{ + return 0; +} From patchwork Sun Jun 2 15:24:21 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54088 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 1387B1B9B6; Sun, 2 Jun 2019 17:26:58 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 9F5761BAC5 for ; Sun, 2 Jun 2019 17:26:56 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FK4Z1020248; Sun, 2 Jun 2019 08:26:56 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=SXVX08q9E3zl2LtI/BQ0rkZ62qwYqHW4iKqwZTIQGdw=; b=fDcDbQv+mt62nwTCnJbtPjbyKStWfkFGp9UzFXrGT2ZchXOaZubUOdjYZltLc0v5Q6sE +vUcSJ2Bnd77CyWLZ0Q3EXXbJ1PLKVNj1kmSZG8tJm2KyD9aGY2hP/+Cxf3Dv+5mFBBm EqG9JkZfojRmKDutRgdtLCr6U5Tw/CWT7HUkajEPoyB3J0nnYFL26TVx/rGRFD7fqozs bHSop/QaVM1yNMNGGgDJqZrjTh13ZHaRierj5t7w8IgW/p1gMpDBSOrsI2wb6jx7RPj/ nYcP/WqfetfFTuVTTv3mDCAl/G0jU6qogJkbaTFYnnXMENK6LTJJ25RjuCyNkQf+IhOQ Nw== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk4998-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:26:55 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:26:54 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:26:54 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id F349B3F703F; Sun, 2 Jun 2019 08:26:51 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: , Vivek Sharma Date: Sun, 2 Jun 2019 20:54:21 +0530 Message-ID: <20190602152434.23996-46-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: 
<20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 45/58] net/octeontx2: support VLAN offloads X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vivek Sharma Support configuring VLAN offloads for an ethernet device. Signed-off-by: Vivek Sharma --- doc/guides/nics/features/octeontx2.ini | 2 + doc/guides/nics/features/octeontx2_vec.ini | 2 + doc/guides/nics/features/octeontx2_vf.ini | 2 + drivers/net/octeontx2/otx2_ethdev.c | 1 + drivers/net/octeontx2/otx2_ethdev.h | 1 + drivers/net/octeontx2/otx2_rx.h | 1 + drivers/net/octeontx2/otx2_vlan.c | 424 ++++++++++++++++++++- 7 files changed, 425 insertions(+), 8 deletions(-) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 4917057f6..f811c38e3 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -23,6 +23,8 @@ RSS reta update = Y Inner RSS = Y Flow control = Y Flow API = Y +VLAN offload = Y +QinQ offload = Y Packet type parsing = Y Timesync = Y Timestamp offload = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index 9049e8e99..77c3a5637 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -23,6 +23,8 @@ RSS reta update = Y Inner RSS = Y Flow control = Y Flow API = Y +VLAN offload = Y +QinQ offload = Y Packet type parsing = Y Rx descriptor status = Y Basic stats = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 735b7447a..4571a1e78 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -18,6 +18,8 @@ RSS key update = Y RSS reta update = Y Inner RSS = Y Flow API = Y +VLAN offload = Y +QinQ offload = Y Packet type parsing = Y Rx descriptor status = Y Basic stats = Y diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index cfc22a2da..362e46941 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1344,6 +1344,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .timesync_adjust_time = otx2_nix_timesync_adjust_time, .timesync_read_time = otx2_nix_timesync_read_time, .timesync_write_time = otx2_nix_timesync_write_time, + .vlan_offload_set = otx2_nix_vlan_offload_set, }; static inline int diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index b54018ae0..816371c37 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -468,6 +468,7 @@ int otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en); /* VLAN */ int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev); int otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev); +int otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask); #endif /* __OTX2_ETHDEV_H__ */ diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h index 687cf2b40..763dc402e 100644 --- a/drivers/net/octeontx2/otx2_rx.h +++ b/drivers/net/octeontx2/otx2_rx.h @@ -16,6 +16,7 @@ sizeof(uint16_t)) #define NIX_RX_OFFLOAD_PTYPE_F BIT(1) +#define NIX_RX_OFFLOAD_VLAN_STRIP_F BIT(3) #define NIX_RX_OFFLOAD_TSTAMP_F 
BIT(5) #define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4) diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c index b3136d2cf..d9880d069 100644 --- a/drivers/net/octeontx2/otx2_vlan.c +++ b/drivers/net/octeontx2/otx2_vlan.c @@ -39,8 +39,50 @@ __rte_unused nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev, return rc; } +static void +nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev, + struct mcam_entry *entry, bool qinq, bool drop) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int pcifunc = otx2_pfvf_func(dev->pf, dev->vf); + uint64_t action = 0, vtag_action = 0; + + action = NIX_RX_ACTIONOP_UCAST; + + if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) { + action = NIX_RX_ACTIONOP_RSS; + action |= (uint64_t)(dev->rss_info.alg_idx) << 56; + } + + action |= (uint64_t)pcifunc << 4; + entry->action = action; + + if (drop) { + entry->action &= ~((uint64_t)0xF); + entry->action |= NIX_RX_ACTIONOP_DROP; + return; + } + + if (!qinq) { + /* VTAG0 fields denote CTAG in single vlan case */ + vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15); + vtag_action |= (NPC_LID_LB << 8); + vtag_action |= NIX_RX_VTAGACTION_VTAG0_RELPTR; + } else { + /* VTAG0 & VTAG1 fields denote CTAG & STAG respectively */ + vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15); + vtag_action |= (NPC_LID_LB << 8); + vtag_action |= NIX_RX_VTAGACTION_VTAG1_RELPTR; + vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 47); + vtag_action |= ((uint64_t)(NPC_LID_LB) << 40); + vtag_action |= (NIX_RX_VTAGACTION_VTAG0_RELPTR << 32); + } + + entry->vtag_action = vtag_action; +} + static int -__rte_unused nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry) +nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry) { struct npc_mcam_free_entry_req *req; struct otx2_mbox *mbox = dev->mbox; @@ -54,8 +96,8 @@ __rte_unused nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry) } static int -__rte_unused nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx, - struct mcam_entry *entry, uint8_t intf) +nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx, + struct mcam_entry *entry, uint8_t intf) { struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); struct npc_mcam_write_entry_req *req; @@ -75,9 +117,9 @@ __rte_unused nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx, } static int -__rte_unused nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev, - struct mcam_entry *entry, - uint8_t intf, bool drop) +nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev, + struct mcam_entry *entry, + uint8_t intf, bool drop) { struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); struct npc_mcam_alloc_and_write_entry_req *req; @@ -114,6 +156,347 @@ __rte_unused nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev, return rsp->entry; } +/* Configure mcam entry with required MCAM search rules */ +static int +nix_vlan_mcam_config(struct rte_eth_dev *eth_dev, + uint16_t vlan_id, uint16_t flags) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct vlan_mkex_info *mkex = &dev->vlan_info.mkex; + volatile uint8_t *key_data, *key_mask; + uint64_t mcam_data, mcam_mask; + struct mcam_entry entry; + uint8_t *mac_addr; + int idx, kwi = 0; + + memset(&entry, 0, sizeof(struct mcam_entry)); + key_data = (volatile uint8_t *)entry.kw; + key_mask = (volatile uint8_t *)entry.kw_mask; + + /* Channel base extracted to KW0[11:0] */ + entry.kw[kwi] = dev->rx_chan_base; + entry.kw_mask[kwi] = BIT_ULL(12) - 1; + + /* Adds vlan_id & LB CTAG flag to MCAM 
KW */ + if (flags & VLAN_ID_MATCH) { + entry.kw[kwi] |= NPC_LT_LB_CTAG << mkex->lb_lt_offset; + entry.kw_mask[kwi] |= 0xFULL << mkex->lb_lt_offset; + + mcam_data = (vlan_id << 16); + mcam_mask = BIT_ULL(32) - 1; + otx2_mbox_memcpy(key_data + mkex->lb_xtract.key_off, + &mcam_data, mkex->lb_xtract.len + 1); + otx2_mbox_memcpy(key_mask + mkex->lb_xtract.key_off, + &mcam_mask, mkex->lb_xtract.len + 1); + } + + /* Adds LB STAG flag to MCAM KW */ + if (flags & QINQ_F_MATCH) { + entry.kw[kwi] |= NPC_LT_LB_STAG << mkex->lb_lt_offset; + entry.kw_mask[kwi] |= 0xFULL << mkex->lb_lt_offset; + } + + /* Adds LB CTAG & LB STAG flags to MCAM KW */ + if (flags & VTAG_F_MATCH) { + entry.kw[kwi] |= (NPC_LT_LB_CTAG | NPC_LT_LB_STAG) + << mkex->lb_lt_offset; + entry.kw_mask[kwi] |= (NPC_LT_LB_CTAG & NPC_LT_LB_STAG) + << mkex->lb_lt_offset; + } + + /* Adds port MAC address to MCAM KW */ + if (flags & MAC_ADDR_MATCH) { + mcam_data = 0ULL; + mac_addr = dev->mac_addr; + for (idx = RTE_ETHER_ADDR_LEN - 1; idx >= 0; idx--) + mcam_data |= ((uint64_t)*mac_addr++) << (8 * idx); + + mcam_mask = BIT_ULL(48) - 1; + otx2_mbox_memcpy(key_data + mkex->la_xtract.key_off, + &mcam_data, mkex->la_xtract.len + 1); + otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off, + &mcam_mask, mkex->la_xtract.len + 1); + } + + /* VLAN_DROP: for drop action for all vlan packets when filter is on. + * For QinQ, enable vtag action for both outer & inner tags + */ + if (flags & VLAN_DROP) { + nix_set_rx_vlan_action(eth_dev, &entry, false, true); + dev->vlan_info.def_rx_mcam_ent = entry; + } else if (flags & QINQ_F_MATCH) { + nix_set_rx_vlan_action(eth_dev, &entry, true, false); + } else { + nix_set_rx_vlan_action(eth_dev, &entry, false, false); + } + + return nix_vlan_mcam_alloc_and_write(eth_dev, &entry, NIX_INTF_RX, + flags & VLAN_DROP); +} + +/* Installs/Removes/Modifies default rx entry */ +static int +nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip, + bool filter, bool enable) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_vlan_info *vlan = &dev->vlan_info; + uint16_t flags = 0; + int mcam_idx, rc; + + /* Use default mcam entry to either drop vlan traffic when + * vlan filter is on or strip vtag when strip is enabled. + * Allocate default entry which matches port mac address + * and vtag(ctag/stag) flags with drop action. + */ + if (!vlan->def_rx_mcam_idx) { + if (filter && enable) + flags = MAC_ADDR_MATCH | VTAG_F_MATCH | VLAN_DROP; + else if (strip && enable) + flags = MAC_ADDR_MATCH | VTAG_F_MATCH; + else + return 0; + + mcam_idx = nix_vlan_mcam_config(eth_dev, 0, flags); + if (mcam_idx < 0) { + otx2_err("Failed to config vlan mcam"); + return -mcam_idx; + } + + vlan->def_rx_mcam_idx = mcam_idx; + return 0; + } + + /* Filter is already enabled, so packets would be dropped anyways. No + * processing needed for enabling strip wrt mcam entry. + */ + + /* Filter disable request */ + if (vlan->filter_on && filter && !enable) { + vlan->def_rx_mcam_ent.action &= ~((uint64_t)0xF); + + /* Free default rx entry only when + * 1. strip is not on and + * 2. qinq entry is allocated before default entry. 
+ */ + if (vlan->strip_on || + (vlan->qinq_on && !vlan->qinq_before_def)) { + if (eth_dev->data->dev_conf.rxmode.mq_mode == + ETH_MQ_RX_RSS) + vlan->def_rx_mcam_ent.action |= + NIX_RX_ACTIONOP_RSS; + else + vlan->def_rx_mcam_ent.action |= + NIX_RX_ACTIONOP_UCAST; + return nix_vlan_mcam_write(eth_dev, + vlan->def_rx_mcam_idx, + &vlan->def_rx_mcam_ent, + NIX_INTF_RX); + } else { + rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx); + if (rc) + return rc; + vlan->def_rx_mcam_idx = 0; + } + } + + /* Filter enable request */ + if (!vlan->filter_on && filter && enable) { + vlan->def_rx_mcam_ent.action &= ~((uint64_t)0xF); + vlan->def_rx_mcam_ent.action |= NIX_RX_ACTIONOP_DROP; + return nix_vlan_mcam_write(eth_dev, vlan->def_rx_mcam_idx, + &vlan->def_rx_mcam_ent, NIX_INTF_RX); + } + + /* Strip disable request */ + if (vlan->strip_on && strip && !enable) { + if (!vlan->filter_on && + !(vlan->qinq_on && !vlan->qinq_before_def)) { + rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx); + if (rc) + return rc; + vlan->def_rx_mcam_idx = 0; + } + } + + return 0; +} + +/* Configure vlan stripping on or off */ +static int +nix_vlan_hw_strip(struct rte_eth_dev *eth_dev, const uint8_t enable) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_mbox *mbox = dev->mbox; + struct nix_vtag_config *vtag_cfg; + int rc = -EINVAL; + + rc = nix_vlan_handle_default_rx_entry(eth_dev, true, false, enable); + if (rc) { + otx2_err("Failed to config default rx entry"); + return rc; + } + + vtag_cfg = otx2_mbox_alloc_msg_nix_vtag_cfg(mbox); + /* cfg_type = 1 for rx vlan cfg */ + vtag_cfg->cfg_type = VTAG_RX; + + if (enable) + vtag_cfg->rx.strip_vtag = 1; + else + vtag_cfg->rx.strip_vtag = 0; + + /* Always capture */ + vtag_cfg->rx.capture_vtag = 1; + vtag_cfg->vtag_size = NIX_VTAGSIZE_T4; + /* Use rx vtag type index[0] for now */ + vtag_cfg->rx.vtag_type = 0; + + rc = otx2_mbox_process(mbox); + if (rc) + return rc; + + dev->vlan_info.strip_on = enable; + return rc; +} + +/* Configure vlan filtering on or off for all vlans if vlan_id == 0 */ +static int +nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable, + uint16_t vlan_id) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int rc = -EINVAL; + + if (!vlan_id && enable) { + rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true, + enable); + if (rc) { + otx2_err("Failed to config vlan mcam"); + return rc; + } + dev->vlan_info.filter_on = enable; + return 0; + } + + if (!vlan_id && !enable) { + rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true, + enable); + if (rc) { + otx2_err("Failed to config vlan mcam"); + return rc; + } + dev->vlan_info.filter_on = enable; + return 0; + } + + return 0; +} + +/* Configure double vlan(qinq) on or off */ +static int +otx2_nix_config_double_vlan(struct rte_eth_dev *eth_dev, + const uint8_t enable) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_vlan_info *vlan_info; + int mcam_idx; + int rc; + + vlan_info = &dev->vlan_info; + + if (!enable) { + if (!vlan_info->qinq_mcam_idx) + return 0; + + rc = nix_vlan_mcam_free(dev, vlan_info->qinq_mcam_idx); + if (rc) + return rc; + + vlan_info->qinq_mcam_idx = 0; + dev->vlan_info.qinq_on = 0; + vlan_info->qinq_before_def = 0; + return 0; + } + + mcam_idx = + nix_vlan_mcam_config(eth_dev, 0, QINQ_F_MATCH | MAC_ADDR_MATCH); + if (mcam_idx < 0) + return mcam_idx; + + if (!vlan_info->def_rx_mcam_idx) + vlan_info->qinq_before_def = 1; + + vlan_info->qinq_mcam_idx = mcam_idx; + dev->vlan_info.qinq_on = 1; + return 0; +} 
+ +int +otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + uint64_t offloads = dev->rx_offloads; + struct rte_eth_rxmode *rxmode; + int rc; + + rxmode = ð_dev->data->dev_conf.rxmode; + + if (mask & ETH_VLAN_EXTEND_MASK) { + otx2_err("Extend offload not supported"); + return -ENOTSUP; + } + + if (mask & ETH_VLAN_STRIP_MASK) { + if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) { + offloads |= DEV_RX_OFFLOAD_VLAN_STRIP; + rc = nix_vlan_hw_strip(eth_dev, true); + } else { + offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP; + rc = nix_vlan_hw_strip(eth_dev, false); + } + if (rc) + goto done; + } + + if (mask & ETH_VLAN_FILTER_MASK) { + if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) { + offloads |= DEV_RX_OFFLOAD_VLAN_FILTER; + rc = nix_vlan_hw_filter(eth_dev, true, 0); + } else { + offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER; + rc = nix_vlan_hw_filter(eth_dev, false, 0); + } + if (rc) + goto done; + } + + if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP) { + if (!dev->vlan_info.qinq_on) { + offloads |= DEV_RX_OFFLOAD_QINQ_STRIP; + rc = otx2_nix_config_double_vlan(eth_dev, true); + if (rc) + goto done; + } + } else { + if (dev->vlan_info.qinq_on) { + offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP; + rc = otx2_nix_config_double_vlan(eth_dev, false); + if (rc) + goto done; + } + } + + if (offloads & (DEV_RX_OFFLOAD_VLAN_STRIP | + DEV_RX_OFFLOAD_QINQ_STRIP)) { + dev->rx_offloads |= offloads; + dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F; + } + +done: + return rc; +} + static int nix_vlan_rx_mkex_offset(uint64_t mask) { @@ -170,7 +553,7 @@ int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev) { struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); - int rc; + int rc, mask; /* Port initialized for first time or restarted */ if (!dev->configured) { @@ -179,12 +562,37 @@ otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev) otx2_err("Failed to get vlan mkex info rc=%d", rc); return rc; } + + TAILQ_INIT(&dev->vlan_info.fltr_tbl); } + + mask = + ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK; + rc = otx2_nix_vlan_offload_set(eth_dev, mask); + if (rc) { + otx2_err("Failed to set vlan offload rc=%d", rc); + return rc; + } + return 0; } int -otx2_nix_vlan_fini(__rte_unused struct rte_eth_dev *eth_dev) +otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev) { + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_vlan_info *vlan = &dev->vlan_info; + int rc; + + if (!dev->configured) { + if (vlan->def_rx_mcam_idx) { + rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx); + if (rc) + return rc; + } + } + + otx2_nix_config_double_vlan(eth_dev, false); + vlan->def_rx_mcam_idx = 0; return 0; } From patchwork Sun Jun 2 15:24:22 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54089 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 003AC1BA6F; Sun, 2 Jun 2019 17:27:01 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id E2B471BBF8 for ; Sun, 2 Jun 2019 17:26:59 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FK6kb020260; Sun, 2 Jun 2019 08:26:59 -0700 DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=XuNX2pMqfsD0vas+fBWUzwq5MgllqS0xyAhtQ4D3jQc=; b=i23AGyCN+WEjpQ5XgsFf+kfJdoOx5kVft2B0HEw6f/XVA57DNQm9qABIsS4oqIC0Zt5E Vk+lbvAr4sF1tXqS2JhON7o4lgxAbrjhsO9lbxvN8JK3Xw0wgHuOh8iSn7B/bCFtms6z 31uANkYFrBPpo95M8YxYg7K/+6klMT/4CYngCaF7Lp8voP/jO8u8hZGT8M+piBVX+ivI +ou+McIP0ADFAFq91/knoL7zHbv+yRE/lCaGlr0rCaf0ESU3wARcIwymWjGmpHpgh95a xbISAevzyTCVedJceTzLkGYj6CxwJyjuoBeKiEPD2r2XtA/J9qX8KfyzRUIloY+Uw61B VA== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk499h-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:26:59 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:26:57 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:26:57 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 3C2BA3F703F; Sun, 2 Jun 2019 08:26:55 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: , Vivek Sharma Date: Sun, 2 Jun 2019 20:54:22 +0530 Message-ID: <20190602152434.23996-47-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 46/58] net/octeontx2: support VLAN filters X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vivek Sharma Support setting up VLAN filters so as to allow tagged packet's reception after VLAN HW Filter offload is enabled. 
Signed-off-by: Vivek Sharma --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + doc/guides/nics/features/octeontx2_vf.ini | 1 + drivers/net/octeontx2/otx2_ethdev.c | 2 + drivers/net/octeontx2/otx2_ethdev.h | 5 +- drivers/net/octeontx2/otx2_vlan.c | 147 ++++++++++++++++++++- 6 files changed, 154 insertions(+), 3 deletions(-) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index f811c38e3..3567e3f63 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -21,6 +21,7 @@ RSS hash = Y RSS key update = Y RSS reta update = Y Inner RSS = Y +VLAN filter = Y Flow control = Y Flow API = Y VLAN offload = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index 77c3a5637..7edc80348 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -21,6 +21,7 @@ RSS hash = Y RSS key update = Y RSS reta update = Y Inner RSS = Y +VLAN filter = Y Flow control = Y Flow API = Y VLAN offload = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 4571a1e78..fcc1ddc03 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -17,6 +17,7 @@ RSS hash = Y RSS key update = Y RSS reta update = Y Inner RSS = Y +VLAN filter = Y Flow API = Y VLAN offload = Y QinQ offload = Y diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 362e46941..175e80e44 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1345,6 +1345,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .timesync_read_time = otx2_nix_timesync_read_time, .timesync_write_time = otx2_nix_timesync_write_time, .vlan_offload_set = otx2_nix_vlan_offload_set, + .vlan_filter_set = otx2_nix_vlan_filter_set, + .vlan_strip_queue_set = otx2_nix_vlan_strip_queue_set, }; static inline int diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 816371c37..a3babe51a 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -469,6 +469,9 @@ int otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en); int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev); int otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev); int otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask); - +int otx2_nix_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id, + int on); +void otx2_nix_vlan_strip_queue_set(struct rte_eth_dev *dev, + uint16_t queue, int on); #endif /* __OTX2_ETHDEV_H__ */ diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c index d9880d069..3e60da099 100644 --- a/drivers/net/octeontx2/otx2_vlan.c +++ b/drivers/net/octeontx2/otx2_vlan.c @@ -21,8 +21,8 @@ enum vtag_cfg_dir { }; static int -__rte_unused nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev, - uint32_t entry, const int enable) +nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev, + uint32_t entry, const int enable) { struct npc_mcam_ena_dis_entry_req *req; struct otx2_mbox *mbox = dev->mbox; @@ -366,6 +366,8 @@ nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable, uint16_t vlan_id) { struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_vlan_info *vlan = &dev->vlan_info; + struct vlan_entry *entry; int rc = -EINVAL; if (!vlan_id && enable) { @@ -379,6 +381,24 @@ 
nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable, return 0; } + /* Enable/disable existing vlan filter entries */ + TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) { + if (vlan_id) { + if (entry->vlan_id == vlan_id) { + rc = nix_vlan_mcam_enb_dis(dev, + entry->mcam_idx, + enable); + if (rc) + return rc; + } + } else { + rc = nix_vlan_mcam_enb_dis(dev, entry->mcam_idx, + enable); + if (rc) + return rc; + } + } + if (!vlan_id && !enable) { rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true, enable); @@ -393,6 +413,80 @@ nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable, return 0; } +/* Enable/disable vlan filtering for the given vlan_id */ +int +otx2_nix_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id, + int on) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_vlan_info *vlan = &dev->vlan_info; + struct vlan_entry *entry; + int entry_exists = 0; + int rc = -EINVAL; + int mcam_idx; + + if (!vlan_id) { + otx2_err("VLAN ID can't be zero"); + return rc; + } + + if (!vlan->def_rx_mcam_idx) { + otx2_err("VLAN filtering is disabled, enable it first"); + return rc; + } + + if (on) { + TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) { + if (entry->vlan_id == vlan_id) { + /* Vlan entry already exists */ + entry_exists = 1; + /* mcam entry already allocated */ + if (entry->mcam_idx) { + rc = nix_vlan_hw_filter(eth_dev, on, + vlan_id); + return rc; + } + } + } + + if (!entry_exists) { + entry = rte_zmalloc("otx2_nix_vlan_entry", + sizeof(struct vlan_entry), 0); + if (!entry) { + otx2_err("Failed to allocate memory"); + return -ENOMEM; + } + } + + /* Enables vlan_id & mac address based filtering */ + mcam_idx = nix_vlan_mcam_config(eth_dev, vlan_id, + VLAN_ID_MATCH | + MAC_ADDR_MATCH); + if (mcam_idx < 0) { + otx2_err("Failed to config vlan mcam"); + if (!entry_exists) + rte_free(entry); + return mcam_idx; + } + + entry->mcam_idx = mcam_idx; + if (!entry_exists) { + entry->vlan_id = vlan_id; + TAILQ_INSERT_HEAD(&vlan->fltr_tbl, entry, next); + } + } else { + TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) { + if (entry->vlan_id == vlan_id) { + nix_vlan_mcam_free(dev, entry->mcam_idx); + TAILQ_REMOVE(&vlan->fltr_tbl, entry, next); + rte_free(entry); + break; + } + } + } + return 0; +} + /* Configure double vlan(qinq) on or off */ static int otx2_nix_config_double_vlan(struct rte_eth_dev *eth_dev, @@ -497,6 +591,13 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask) return rc; } +void otx2_nix_vlan_strip_queue_set(__rte_unused struct rte_eth_dev *dev, + __rte_unused uint16_t queue, + __rte_unused int on) +{ + otx2_err("Not Supported"); +} + static int nix_vlan_rx_mkex_offset(uint64_t mask) { @@ -549,6 +650,27 @@ nix_vlan_get_mkex_info(struct otx2_eth_dev *dev) return 0; } +static void nix_vlan_reinstall_vlan_filters(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct vlan_entry *entry; + int rc; + + /* VLAN filters can't be set without the filter offload enabled */ + rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true, true); + if (rc) { + otx2_err("Failed to reinstall vlan filters"); + return; + } + + TAILQ_FOREACH(entry, &dev->vlan_info.fltr_tbl, next) { + rc = otx2_nix_vlan_filter_set(eth_dev, entry->vlan_id, true); + if (rc) + otx2_err("Failed to reinstall filter for vlan:%d", + entry->vlan_id); + } +} + int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev) { @@ -564,6 +686,11 @@ otx2_nix_vlan_offload_init(struct rte_eth_dev 
*eth_dev) } TAILQ_INIT(&dev->vlan_info.fltr_tbl); + } else { + /* Reinstall all mcam entries now if filter offload is set */ + if (eth_dev->data->dev_conf.rxmode.offloads & + DEV_RX_OFFLOAD_VLAN_FILTER) + nix_vlan_reinstall_vlan_filters(eth_dev); } mask = @@ -582,8 +709,24 @@ otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev) { struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); struct otx2_vlan_info *vlan = &dev->vlan_info; + struct vlan_entry *entry; int rc; + while ((entry = TAILQ_FIRST(&vlan->fltr_tbl)) != NULL) { + if (!dev->configured) { + rc = nix_vlan_mcam_free(dev, entry->mcam_idx); + if (rc) + return rc; + TAILQ_REMOVE(&vlan->fltr_tbl, entry, next); + rte_free(entry); + } else { + /* MCAM entries freed by flow_fini & lf_free on + * port stop. + */ + entry->mcam_idx = 0; + } + } + if (!dev->configured) { if (vlan->def_rx_mcam_idx) { rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx); From patchwork Sun Jun 2 15:24:23 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54090 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 563B41BBFC; Sun, 2 Jun 2019 17:27:03 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 546531BBFC for ; Sun, 2 Jun 2019 17:27:02 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FLOCm021032; Sun, 2 Jun 2019 08:27:01 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=16sK1n1CdAHNzc6Q1/0q31JoM0moar82hEVaeqb1Av4=; b=YVhnng/HPGfAPXem3tGWxy8Z9MW297pl9NJ+fsgUXufTXc3aA5rq5rS/5kcyNTaq7GC7 JMQxKIO+SJB26V6VISSmXziLXH/qdimFYzQmKUDNDHnzexLArecNiVjJjdBwPNv3R40Z JVrCAgBUtwQcRmnk0Q8PP0CXkxqo3iWO6T/xz5IiDDAmixYx+zATlcgAF6dQYPT+qXDv HVBvvysF3JyDvUGKXcfdOQqGrBZCLXhCAFm3RsABhlDY+A1dONP8t2SEXPco8JWuXEGf TKBYa6HCHWFwhjdS8dO7VhG2wNUODA7BoWIu2xzvsMeFrktJSkCAId9EUhDsEgChX/7t ow== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk499j-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:27:01 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:27:00 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:27:00 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 7A6E53F703F; Sun, 2 Jun 2019 08:26:58 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Vivek Sharma Date: Sun, 2 Jun 2019 20:54:23 +0530 Message-ID: <20190602152434.23996-48-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 
47/58] net/octeontx2: support VLAN TPID and PVID for Tx X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vivek Sharma Implement support for setting VLAN TPID and PVID for Tx packets. Signed-off-by: Vivek Sharma --- drivers/net/octeontx2/otx2_ethdev.c | 2 + drivers/net/octeontx2/otx2_ethdev.h | 5 + drivers/net/octeontx2/otx2_vlan.c | 191 ++++++++++++++++++++++++++++ 3 files changed, 198 insertions(+) diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 175e80e44..c5dcdc21c 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1347,6 +1347,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .vlan_offload_set = otx2_nix_vlan_offload_set, .vlan_filter_set = otx2_nix_vlan_filter_set, .vlan_strip_queue_set = otx2_nix_vlan_strip_queue_set, + .vlan_tpid_set = otx2_nix_vlan_tpid_set, + .vlan_pvid_set = otx2_nix_vlan_pvid_set, }; static inline int diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index a3babe51a..3f11802eb 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -473,5 +473,10 @@ int otx2_nix_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id, int on); void otx2_nix_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on); +int +otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev, + enum rte_vlan_type type, uint16_t tpid); +int +otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on); #endif /* __OTX2_ETHDEV_H__ */ diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c index 3e60da099..3c0d40553 100644 --- a/drivers/net/octeontx2/otx2_vlan.c +++ b/drivers/net/octeontx2/otx2_vlan.c @@ -81,6 +81,37 @@ nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev, entry->vtag_action = vtag_action; } +static void +nix_set_tx_vlan_action(struct mcam_entry *entry, enum rte_vlan_type type, + int vtag_index) +{ + union { + uint64_t reg; + struct nix_tx_vtag_action_s act; + } vtag_action; + + uint64_t action; + + action = NIX_TX_ACTIONOP_UCAST_DEFAULT; + + if (type == ETH_VLAN_TYPE_OUTER) { + vtag_action.act.vtag0_def = vtag_index; + vtag_action.act.vtag0_lid = NPC_LID_LA; + vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT; + vtag_action.act.vtag0_relptr = sizeof(struct nix_inst_hdr_s) + + 2 * RTE_ETHER_ADDR_LEN + NIX_RX_VTAGACTION_VTAG0_RELPTR; + } else { + vtag_action.act.vtag1_def = vtag_index; + vtag_action.act.vtag1_lid = NPC_LID_LA; + vtag_action.act.vtag1_op = NIX_TX_VTAGOP_INSERT; + vtag_action.act.vtag1_relptr = sizeof(struct nix_inst_hdr_s) + + 2 * RTE_ETHER_ADDR_LEN + NIX_RX_VTAGACTION_VTAG1_RELPTR; + } + + entry->action = action; + entry->vtag_action = vtag_action.reg; +} + static int nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry) { @@ -322,6 +353,46 @@ nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip, return 0; } +/* Installs/Removes default tx entry */ +static int +nix_vlan_handle_default_tx_entry(struct rte_eth_dev *eth_dev, + enum rte_vlan_type type, int vtag_index, + int enable) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct otx2_vlan_info *vlan = &dev->vlan_info; + struct mcam_entry entry; + uint16_t pf_func; + int rc; + + if (!vlan->def_tx_mcam_idx && enable) { + memset(&entry, 0, sizeof(struct mcam_entry)); + + /* Only 
pf_func is matched, swap its bytes */ + pf_func = (dev->pf_func & 0xff) << 8; + pf_func |= (dev->pf_func >> 8) & 0xff; + + /* PF Func extracted to KW1[63:48] */ + entry.kw[1] = (uint64_t)pf_func << 48; + entry.kw_mask[1] = (BIT_ULL(16) - 1) << 48; + + nix_set_tx_vlan_action(&entry, type, vtag_index); + vlan->def_tx_mcam_ent = entry; + + return nix_vlan_mcam_alloc_and_write(eth_dev, &entry, + NIX_INTF_TX, 0); + } + + if (vlan->def_tx_mcam_idx && !enable) { + rc = nix_vlan_mcam_free(dev, vlan->def_tx_mcam_idx); + if (rc) + return rc; + vlan->def_tx_mcam_idx = 0; + } + + return 0; +} + /* Configure vlan stripping on or off */ static int nix_vlan_hw_strip(struct rte_eth_dev *eth_dev, const uint8_t enable) @@ -591,6 +662,126 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask) return rc; } +int +otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev, + enum rte_vlan_type type, uint16_t tpid) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct nix_set_vlan_tpid *tpid_cfg; + struct otx2_mbox *mbox = dev->mbox; + int rc; + + tpid_cfg = otx2_mbox_alloc_msg_nix_set_vlan_tpid(mbox); + + tpid_cfg->tpid = tpid; + if (type == ETH_VLAN_TYPE_OUTER) + tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER; + else + tpid_cfg->vlan_type = NIX_VLAN_TYPE_INNER; + + rc = otx2_mbox_process(mbox); + if (rc) + return rc; + + if (type == ETH_VLAN_TYPE_OUTER) + dev->vlan_info.outer_vlan_tpid = tpid; + else + dev->vlan_info.inner_vlan_tpid = tpid; + return 0; +} + +int +otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) +{ + struct otx2_eth_dev *otx2_dev = otx2_eth_pmd_priv(dev); + struct otx2_mbox *mbox = otx2_dev->mbox; + struct nix_vtag_config *vtag_cfg; + struct nix_vtag_config_rsp *rsp; + struct otx2_vlan_info *vlan; + int rc, rc1, vtag_index = 0; + + if (vlan_id == 0) { + otx2_err("vlan id can't be zero"); + return -EINVAL; + } + + vlan = &otx2_dev->vlan_info; + + if (on && vlan->pvid_insert_on && vlan->pvid == vlan_id) { + otx2_err("pvid %d is already enabled", vlan_id); + return -EINVAL; + } + + if (on && vlan->pvid_insert_on && vlan->pvid != vlan_id) { + otx2_err("another pvid is enabled, disable that first"); + return -EINVAL; + } + + /* No pvid active */ + if (!on && !vlan->pvid_insert_on) + return 0; + + /* Given pvid already disabled */ + if (!on && vlan->pvid != vlan_id) + return 0; + + vtag_cfg = otx2_mbox_alloc_msg_nix_vtag_cfg(mbox); + + if (on) { + vtag_cfg->cfg_type = VTAG_TX; + vtag_cfg->vtag_size = NIX_VTAGSIZE_T4; + + if (vlan->outer_vlan_tpid) + vtag_cfg->tx.vtag0 = + (vlan->outer_vlan_tpid << 16) | vlan_id; + else + vtag_cfg->tx.vtag0 = + ((RTE_ETHER_TYPE_VLAN << 16) | vlan_id); + vtag_cfg->tx.cfg_vtag0 = 1; + } else { + vtag_cfg->cfg_type = VTAG_TX; + vtag_cfg->vtag_size = NIX_VTAGSIZE_T4; + + vtag_cfg->tx.vtag0_idx = vlan->outer_vlan_idx; + vtag_cfg->tx.free_vtag0 = 1; + } + + rc = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc) + return rc; + + if (on) { + vtag_index = rsp->vtag0_idx; + } else { + vlan->pvid = 0; + vlan->pvid_insert_on = 0; + vlan->outer_vlan_idx = 0; + } + + rc = nix_vlan_handle_default_tx_entry(dev, ETH_VLAN_TYPE_OUTER, + vtag_index, on); + if (rc < 0) { + otx2_err("Default tx entry failed with rc %d", rc); + vtag_cfg->tx.vtag0_idx = vtag_index; + vtag_cfg->tx.free_vtag0 = 1; + vtag_cfg->tx.cfg_vtag0 = 0; + + rc1 = otx2_mbox_process_msg(mbox, (void *)&rsp); + if (rc1) + otx2_err("Vtag free failed"); + + return rc; + } + + if (on) { + vlan->pvid = vlan_id; + vlan->pvid_insert_on = 1; + vlan->outer_vlan_idx = vtag_index; + } + + 
return 0; +} + void otx2_nix_vlan_strip_queue_set(__rte_unused struct rte_eth_dev *dev, __rte_unused uint16_t queue, __rte_unused int on) From patchwork Sun Jun 2 15:24:24 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54091 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id BD5B61BC03; Sun, 2 Jun 2019 17:27:06 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by dpdk.org (Postfix) with ESMTP id D1C5B1BC01 for ; Sun, 2 Jun 2019 17:27:04 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FJu61021294; Sun, 2 Jun 2019 08:27:04 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=5EkofkpT+vL4Ow16JeRpMLFWHaYX4K4YLF4eJAVYSVI=; b=loHQ0IOarcP6Z7ueVJ8ghTRWajh4HwBn2DkgJcLg7Xu3j9Lw7g5Y1MjwEas6nLcCdmZN nVyeZHqbTEjeJu0bEow/1X1yctbzeR0GgjuxWXPTIjltp02ScCA3gejgyXiY1pk+ZgbX HB8bC8uK1THZmoNA/zb9hmk967waoO/oIfxgeDG8MSN5N19RjprygFNEUEsaB6LYX48N zdxCB93NkEhUpLEz1BKQfhztBYHTPlMCXVHBhUyAmXvMqZrpnXIVxXpEtfofOhSBHjNO xzar8l1o3qhZnluzIgV1E2twhSHhOh+ITGLwy+jsN8ouBQcYo8SooYnPM8u6wsBO8vtW nw== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0a-0016f401.pphosted.com with ESMTP id 2supqkvqna-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:27:04 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:27:03 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:27:03 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 5AAC73F7040; Sun, 2 Jun 2019 08:27:01 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: , Vamsi Attunuru Date: Sun, 2 Jun 2019 20:54:24 +0530 Message-ID: <20190602152434.23996-49-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 48/58] net/octeontx2: add FW version get operation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vamsi Attunuru Add firmware version get operation. 
Signed-off-by: Vamsi Attunuru --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + doc/guides/nics/features/octeontx2_vf.ini | 1 + drivers/net/octeontx2/otx2_ethdev.c | 1 + drivers/net/octeontx2/otx2_ethdev.h | 3 +++ drivers/net/octeontx2/otx2_ethdev_ops.c | 22 ++++++++++++++++++++++ drivers/net/octeontx2/otx2_flow.c | 7 +++++++ 7 files changed, 36 insertions(+) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 3567e3f63..6117e1edf 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -33,5 +33,6 @@ Rx descriptor status = Y Basic stats = Y Stats per queue = Y Extended stats = Y +FW version = Y Module EEPROM dump = Y Registers dump = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index 7edc80348..66c327cfc 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -31,5 +31,6 @@ Rx descriptor status = Y Basic stats = Y Extended stats = Y Stats per queue = Y +FW version = Y Module EEPROM dump = Y Registers dump = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index fcc1ddc03..3aa0491e1 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -26,5 +26,6 @@ Rx descriptor status = Y Basic stats = Y Extended stats = Y Stats per queue = Y +FW version = Y Module EEPROM dump = Y Registers dump = Y diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index c5dcdc21c..b449bb032 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1335,6 +1335,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .filter_ctrl = otx2_nix_dev_filter_ctrl, .get_module_info = otx2_nix_get_module_info, .get_module_eeprom = otx2_nix_get_module_eeprom, + .fw_version_get = otx2_nix_fw_version_get, .flow_ctrl_get = otx2_nix_flow_ctrl_get, .flow_ctrl_set = otx2_nix_flow_ctrl_set, .timesync_enable = otx2_nix_timesync_enable, diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 3f11802eb..7bb42be8d 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -216,6 +216,7 @@ struct otx2_eth_dev { uint8_t lso_tsov4_idx; uint8_t lso_tsov6_idx; uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; + uint8_t mkex_pfl_name[MKEX_NAME_LEN]; uint8_t max_mac_entries; uint8_t lf_tx_stats; uint8_t lf_rx_stats; @@ -320,6 +321,8 @@ void otx2_nix_info_get(struct rte_eth_dev *eth_dev, int otx2_nix_dev_filter_ctrl(struct rte_eth_dev *eth_dev, enum rte_filter_type filter_type, enum rte_filter_op filter_op, void *arg); +int otx2_nix_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version, + size_t fw_size); int otx2_nix_get_module_info(struct rte_eth_dev *eth_dev, struct rte_eth_dev_module_info *modinfo); int otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev, diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c index 1da9222b7..d2cb5ba1c 100644 --- a/drivers/net/octeontx2/otx2_ethdev_ops.c +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c @@ -209,6 +209,28 @@ otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt) return 0; } +int +otx2_nix_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version, + size_t fw_size) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int rc = (int)fw_size; + + if (fw_size > 
sizeof(dev->mkex_pfl_name)) + rc = sizeof(dev->mkex_pfl_name); + + rc = strlcpy(fw_version, (char *)dev->mkex_pfl_name, rc); + + rc += 1; /* Add the size of '\0' */ + if (fw_size < (uint32_t)rc) + goto done; + else + return 0; + +done: + return rc; +} + int otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool) { diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c index 1fbe6b86e..270433cd6 100644 --- a/drivers/net/octeontx2/otx2_flow.c +++ b/drivers/net/octeontx2/otx2_flow.c @@ -740,6 +740,7 @@ flow_fetch_kex_cfg(struct otx2_eth_dev *dev) struct otx2_npc_flow_info *npc = &dev->npc_flow; struct npc_get_kex_cfg_rsp *kex_rsp; struct otx2_mbox *mbox = dev->mbox; + char mkex_pfl_name[MKEX_NAME_LEN]; struct otx2_idev_kex_cfg *idev; int rc = 0; @@ -761,6 +762,12 @@ flow_fetch_kex_cfg(struct otx2_eth_dev *dev) sizeof(struct npc_get_kex_cfg_rsp)); } + otx2_mbox_memcpy(mkex_pfl_name, + idev->kex_cfg.mkex_pfl_name, MKEX_NAME_LEN); + + strlcpy((char *)dev->mkex_pfl_name, + mkex_pfl_name, sizeof(dev->mkex_pfl_name)); + flow_process_mkex_cfg(npc, &idev->kex_cfg); done: From patchwork Sun Jun 2 15:24:25 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54092 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 119FB1B9BC; Sun, 2 Jun 2019 17:27:10 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 64AE11B9EB for ; Sun, 2 Jun 2019 17:27:08 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FK6ke020260; Sun, 2 Jun 2019 08:27:07 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=JMmwHsuWSsBgefd9k5yb9julsA9eb+Y2oaX1uHwQKqo=; b=ViLjmIc0jtW3/7cNhUmveAZIHpHq5oLw0WzQsLGWvbJki0GsUYSCjC+PSZYKH8cVCYGs KLtWkUgV7DWHNy2N50/Qw6zLKV5uW32QhdUSQuJV/5ZESMODkniWNh750dqNFZww/IIb hh8iMD+JBI4wYHXfcOqcau8H64fD8VkuOt0gFvJo6/8Xqyt5IKrfdgc1fdCg23qwTEqf k4BgN5rkjFb1+UMZB5n4ObkV5acomcqRpnV/WkwOrooInPnE5nxU+R/AeCgKyMK2Fomc JBpVFWQJ5UhezDWUO6FgiAApW5zBaX1me4rzG5JVmBfNHM8GesdwTBIi0W9KRmhyHgvP 3w== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk499t-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:27:07 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:27:06 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:27:06 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 85D0B3F7040; Sun, 2 Jun 2019 08:27:04 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Pavan Nikhilesh , Harman Kalra Date: Sun, 2 Jun 2019 20:54:25 +0530 Message-ID: <20190602152434.23996-50-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: 
<20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 49/58] net/octeontx2: add Rx burst support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Add Rx burst support. Signed-off-by: Jerin Jacob Signed-off-by: Nithin Dabilpuram Signed-off-by: Pavan Nikhilesh Signed-off-by: Harman Kalra --- drivers/net/octeontx2/Makefile | 1 + drivers/net/octeontx2/meson.build | 2 +- drivers/net/octeontx2/otx2_ethdev.c | 6 - drivers/net/octeontx2/otx2_ethdev.h | 2 + drivers/net/octeontx2/otx2_rx.c | 128 ++++++++++++++ drivers/net/octeontx2/otx2_rx.h | 249 +++++++++++++++++++++++++++- 6 files changed, 380 insertions(+), 8 deletions(-) create mode 100644 drivers/net/octeontx2/otx2_rx.c diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile index b1cc6d83b..76847b2c2 100644 --- a/drivers/net/octeontx2/Makefile +++ b/drivers/net/octeontx2/Makefile @@ -30,6 +30,7 @@ LIBABIVER := 1 # all source are stored in SRCS-y # SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \ + otx2_rx.c \ otx2_tm.c \ otx2_rss.c \ otx2_mac.c \ diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build index d5f272c8b..1361f1707 100644 --- a/drivers/net/octeontx2/meson.build +++ b/drivers/net/octeontx2/meson.build @@ -2,7 +2,7 @@ # Copyright(C) 2019 Marvell International Ltd. # -sources = files( +sources = files('otx2_rx.c', 'otx2_tm.c', 'otx2_rss.c', 'otx2_mac.c', diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index b449bb032..9b55e757e 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -14,12 +14,6 @@ #include "otx2_ethdev.h" -static inline void -otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev) -{ - RTE_SET_USED(eth_dev); -} - static inline void otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev) { diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index 7bb42be8d..3ba47f6ab 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -259,6 +259,7 @@ struct otx2_eth_dev { struct otx2_eth_qconf *tx_qconf; struct otx2_eth_qconf *rx_qconf; struct rte_eth_dev *eth_dev; + eth_rx_burst_t rx_pkt_burst_no_offload; /* PTP counters */ bool ptp_en; struct otx2_timesync_info tstamp; @@ -451,6 +452,7 @@ int otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev); /* Rx and Tx routines */ +void otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev); void otx2_nix_form_default_desc(struct otx2_eth_txq *txq); /* Timesync - PTP routines */ diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c new file mode 100644 index 000000000..b4a3e9d55 --- /dev/null +++ b/drivers/net/octeontx2/otx2_rx.c @@ -0,0 +1,128 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2019 Marvell International Ltd. 
+ */ + +#include + +#include "otx2_ethdev.h" +#include "otx2_rx.h" + +#define NIX_DESCS_PER_LOOP 4 +#define CQE_CAST(x) ((struct nix_cqe_hdr_s *)(x)) +#define CQE_SZ(x) ((x) * NIX_CQ_ENTRY_SZ) + +static inline uint16_t +nix_rx_nb_pkts(struct otx2_eth_rxq *rxq, const uint64_t wdata, + const uint16_t pkts, const uint32_t qmask) +{ + uint32_t available = rxq->available; + + /* Update the available count if cached value is not enough */ + if (unlikely(available < pkts)) { + uint64_t reg, head, tail; + + /* Use LDADDA version to avoid reorder */ + reg = otx2_atomic64_add_sync(wdata, rxq->cq_status); + /* CQ_OP_STATUS operation error */ + if (reg & BIT_ULL(CQ_OP_STAT_OP_ERR) || + reg & BIT_ULL(CQ_OP_STAT_CQ_ERR)) + return 0; + + tail = reg & 0xFFFFF; + head = (reg >> 20) & 0xFFFFF; + if (tail < head) + available = tail - head + qmask + 1; + else + available = tail - head; + + rxq->available = available; + } + + return RTE_MIN(pkts, available); +} + +static __rte_always_inline uint16_t +nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t pkts, const uint16_t flags) +{ + struct otx2_eth_rxq *rxq = rx_queue; + const uint64_t mbuf_init = rxq->mbuf_initializer; + const void *lookup_mem = rxq->lookup_mem; + const uint64_t data_off = rxq->data_off; + const uintptr_t desc = rxq->desc; + const uint64_t wdata = rxq->wdata; + const uint32_t qmask = rxq->qmask; + uint16_t packets = 0, nb_pkts; + uint32_t head = rxq->head; + struct nix_cqe_hdr_s *cq; + struct rte_mbuf *mbuf; + + nb_pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask); + + while (packets < nb_pkts) { + /* Prefetch N desc ahead */ + rte_prefetch_non_temporal((void *)(desc + (CQE_SZ(head + 2)))); + cq = (struct nix_cqe_hdr_s *)(desc + CQE_SZ(head)); + + mbuf = nix_get_mbuf_from_cqe(cq, data_off); + + otx2_nix_cqe_to_mbuf(cq, mbuf, lookup_mem, mbuf_init, flags); + otx2_nix_mbuf_to_tstamp(mbuf, rxq->tstamp, flags); + rx_pkts[packets++] = mbuf; + otx2_prefetch_store_keep(mbuf); + head++; + head &= qmask; + } + + rxq->head = head; + rxq->available -= nb_pkts; + + /* Free all the CQs that we've processed */ + otx2_write64((wdata | nb_pkts), rxq->cq_door); + + return nb_pkts; +} + + +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ +static uint16_t __rte_noinline __hot \ +otx2_nix_recv_pkts_ ## name(void *rx_queue, \ + struct rte_mbuf **rx_pkts, uint16_t pkts) \ +{ \ + return nix_recv_pkts(rx_queue, rx_pkts, pkts, (flags)); \ +} \ + +NIX_RX_FASTPATH_MODES +#undef R + +static inline void +pick_rx_func(struct rte_eth_dev *eth_dev, + const eth_rx_burst_t rx_burst[2][2][2][2][2][2]) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + + /* [TSTMP] [MARK] [VLAN] [CKSUM] [PTYPE] [RSS] */ + eth_dev->rx_pkt_burst = rx_burst + [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_TSTAMP_F)] + [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_MARK_UPDATE_F)] + [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_VLAN_STRIP_F)] + [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_CHECKSUM_F)] + [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)] + [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_RSS_F)]; +} + +void +otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev) +{ + const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2] = { +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ + [f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_ ## name, + +NIX_RX_FASTPATH_MODES +#undef R + }; + + pick_rx_func(eth_dev, nix_eth_rx_burst); + + rte_mb(); +} diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h index 763dc402e..fc0e87d14 100644 --- 
a/drivers/net/octeontx2/otx2_rx.h +++ b/drivers/net/octeontx2/otx2_rx.h @@ -15,10 +15,13 @@ PTYPE_TUNNEL_ARRAY_SZ) *\ sizeof(uint16_t)) +#define NIX_RX_OFFLOAD_NONE (0) +#define NIX_RX_OFFLOAD_RSS_F BIT(0) #define NIX_RX_OFFLOAD_PTYPE_F BIT(1) +#define NIX_RX_OFFLOAD_CHECKSUM_F BIT(2) #define NIX_RX_OFFLOAD_VLAN_STRIP_F BIT(3) -#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5) #define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4) +#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5) #define NIX_TIMESYNC_RX_OFFSET 8 @@ -30,4 +33,248 @@ struct otx2_timesync_info { uint8_t rx_ready; } __rte_cache_aligned; +union mbuf_initializer { + struct { + uint16_t data_off; + uint16_t refcnt; + uint16_t nb_segs; + uint16_t port; + } fields; + uint64_t value; +}; + +static __rte_always_inline void +otx2_nix_mbuf_to_tstamp(struct rte_mbuf *mbuf, + struct otx2_timesync_info *tstamp, const uint16_t flag) +{ + if ((flag & NIX_RX_OFFLOAD_TSTAMP_F) && + mbuf->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC && + (mbuf->data_off == RTE_PKTMBUF_HEADROOM + + NIX_TIMESYNC_RX_OFFSET)) { + uint64_t *tstamp_ptr; + + /* Deal with rx timestamp */ + tstamp_ptr = rte_pktmbuf_mtod_offset(mbuf, uint64_t *, + -NIX_TIMESYNC_RX_OFFSET); + mbuf->timestamp = rte_be_to_cpu_64(*tstamp_ptr); + tstamp->rx_tstamp = mbuf->timestamp; + tstamp->rx_ready = 1; + mbuf->ol_flags |= PKT_RX_IEEE1588_PTP | PKT_RX_IEEE1588_TMST + | PKT_RX_TIMESTAMP; + } +} + +static __rte_always_inline uint64_t +nix_clear_data_off(uint64_t oldval) +{ + union mbuf_initializer mbuf_init = { .value = oldval }; + + mbuf_init.fields.data_off = 0; + return mbuf_init.value; +} + +static __rte_always_inline struct rte_mbuf * +nix_get_mbuf_from_cqe(void *cq, const uint64_t data_off) +{ + rte_iova_t buff; + + /* Skip CQE, NIX_RX_PARSE_S and SG HDR(9 DWORDs) and peek buff addr */ + buff = *((rte_iova_t *)((uint64_t *)cq + 9)); + return (struct rte_mbuf *)(buff - data_off); +} + + +static __rte_always_inline uint32_t +nix_ptype_get(const void * const lookup_mem, const uint64_t in) +{ + const uint16_t * const ptype = lookup_mem; + const uint16_t lg_lf_le = (in & 0xFFF000000000000) >> 48; + const uint16_t tu_l2 = ptype[(in & 0x000FFF000000000) >> 36]; + const uint16_t il4_tu = ptype[PTYPE_NON_TUNNEL_ARRAY_SZ + lg_lf_le]; + + return (il4_tu << PTYPE_WIDTH) | tu_l2; +} + +static __rte_always_inline uint32_t +nix_rx_olflags_get(const void * const lookup_mem, const uint64_t in) +{ + const uint32_t * const ol_flags = (const uint32_t * const) + ((const uint8_t * const)lookup_mem + PTYPE_ARRAY_SZ); + + return ol_flags[(in & 0xfff00000) >> 20]; +} + +static inline uint64_t +nix_update_match_id(const uint16_t match_id, uint64_t ol_flags, + struct rte_mbuf *mbuf) +{ + /* There is no separate bit to check match_id + * is valid or not? and no flag to identify it is an + * RTE_FLOW_ACTION_TYPE_FLAG vs RTE_FLOW_ACTION_TYPE_MARK + * action. The former case addressed through 0 being invalid + * value and inc/dec match_id pair when MARK is activated. + * The later case addressed through defining + * OTX2_FLOW_MARK_DEFAULT as value for + * RTE_FLOW_ACTION_TYPE_MARK. + * This would translate to not use + * OTX2_FLOW_ACTION_FLAG_DEFAULT - 1 and + * OTX2_FLOW_ACTION_FLAG_DEFAULT for match_id. 
+ * i.e valid mark_id's are from + * 0 to OTX2_FLOW_ACTION_FLAG_DEFAULT - 2 + */ + if (likely(match_id)) { + ol_flags |= PKT_RX_FDIR; + if (match_id != OTX2_FLOW_ACTION_FLAG_DEFAULT) { + ol_flags |= PKT_RX_FDIR_ID; + mbuf->hash.fdir.hi = match_id - 1; + } + } + + return ol_flags; +} + +static __rte_always_inline void +otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *mbuf, + const void *lookup_mem, const uint64_t val, + const uint16_t flag) +{ + const struct nix_rx_parse_s *rx = + (const struct nix_rx_parse_s *)((const uint64_t *)cq + 1); + const uint64_t w1 = *(const uint64_t *)rx; + const uint16_t len = rx->pkt_lenm1 + 1; + uint16_t ol_flags = 0; + + /* Mark mempool obj as "get" as it is alloc'ed by NIX */ + __mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1); + + if (flag & NIX_RX_OFFLOAD_PTYPE_F) + mbuf->packet_type = nix_ptype_get(lookup_mem, w1); + else + mbuf->packet_type = 0; + + if (flag & NIX_RX_OFFLOAD_RSS_F) { + mbuf->hash.rss = cq->tag; + ol_flags |= PKT_RX_RSS_HASH; + } + + if (flag & NIX_RX_OFFLOAD_CHECKSUM_F) + ol_flags |= nix_rx_olflags_get(lookup_mem, w1); + + if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) { + if (rx->vtag0_gone) { + ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED; + mbuf->vlan_tci = rx->vtag0_tci; + } + if (rx->vtag1_gone) { + ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED; + mbuf->vlan_tci_outer = rx->vtag1_tci; + } + } + + if (flag & NIX_RX_OFFLOAD_MARK_UPDATE_F) + ol_flags = nix_update_match_id(rx->match_id, ol_flags, mbuf); + + mbuf->ol_flags = ol_flags; + *(uint64_t *)(&mbuf->rearm_data) = val; + mbuf->pkt_len = len; + + mbuf->data_len = len; +} + +#define CKSUM_F NIX_RX_OFFLOAD_CHECKSUM_F +#define PTYPE_F NIX_RX_OFFLOAD_PTYPE_F +#define RSS_F NIX_RX_OFFLOAD_RSS_F +#define RX_VLAN_F NIX_RX_OFFLOAD_VLAN_STRIP_F +#define MARK_F NIX_RX_OFFLOAD_MARK_UPDATE_F +#define TS_F NIX_RX_OFFLOAD_TSTAMP_F + +/* [TSMP] [MARK] [VLAN] [CKSUM] [PTYPE] [RSS] */ +#define NIX_RX_FASTPATH_MODES \ +R(no_offload, 0, 0, 0, 0, 0, 0, NIX_RX_OFFLOAD_NONE) \ +R(rss, 0, 0, 0, 0, 0, 1, RSS_F) \ +R(ptype, 0, 0, 0, 0, 1, 0, PTYPE_F) \ +R(ptype_rss, 0, 0, 0, 0, 1, 1, PTYPE_F | RSS_F) \ +R(cksum, 0, 0, 0, 1, 0, 0, CKSUM_F) \ +R(cksum_rss, 0, 0, 0, 1, 0, 1, CKSUM_F | RSS_F) \ +R(cksum_ptype, 0, 0, 0, 1, 1, 0, CKSUM_F | PTYPE_F) \ +R(cksum_ptype_rss, 0, 0, 0, 1, 1, 1, CKSUM_F | PTYPE_F | RSS_F)\ +R(vlan, 0, 0, 1, 0, 0, 0, RX_VLAN_F) \ +R(vlan_rss, 0, 0, 1, 0, 0, 1, RX_VLAN_F | RSS_F) \ +R(vlan_ptype, 0, 0, 1, 0, 1, 0, RX_VLAN_F | PTYPE_F) \ +R(vlan_ptype_rss, 0, 0, 1, 0, 1, 1, RX_VLAN_F | PTYPE_F | RSS_F)\ +R(vlan_cksum, 0, 0, 1, 1, 0, 0, RX_VLAN_F | CKSUM_F) \ +R(vlan_cksum_rss, 0, 0, 1, 1, 0, 1, RX_VLAN_F | CKSUM_F | RSS_F)\ +R(vlan_cksum_ptype, 0, 0, 1, 1, 1, 0, \ + RX_VLAN_F | CKSUM_F | PTYPE_F) \ +R(vlan_cksum_ptype_rss, 0, 0, 1, 1, 1, 1, \ + RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \ +R(mark, 0, 1, 0, 0, 0, 0, MARK_F) \ +R(mark_rss, 0, 1, 0, 0, 0, 1, MARK_F | RSS_F) \ +R(mark_ptype, 0, 1, 0, 0, 1, 0, MARK_F | PTYPE_F) \ +R(mark_ptype_rss, 0, 1, 0, 0, 1, 1, MARK_F | PTYPE_F | RSS_F)\ +R(mark_cksum, 0, 1, 0, 1, 0, 0, MARK_F | CKSUM_F) \ +R(mark_cksum_rss, 0, 1, 0, 1, 0, 1, MARK_F | CKSUM_F | RSS_F)\ +R(mark_cksum_ptype, 0, 1, 0, 1, 1, 0, MARK_F | CKSUM_F | PTYPE_F)\ +R(mark_cksum_ptype_rss, 0, 1, 0, 1, 1, 1, \ + MARK_F | CKSUM_F | PTYPE_F | RSS_F) \ +R(mark_vlan, 0, 1, 1, 0, 0, 0, MARK_F | RX_VLAN_F) \ +R(mark_vlan_rss, 0, 1, 1, 0, 0, 1, MARK_F | RX_VLAN_F | RSS_F)\ +R(mark_vlan_ptype, 0, 1, 1, 0, 1, 0, \ + MARK_F | RX_VLAN_F | PTYPE_F) \ +R(mark_vlan_ptype_rss, 0, 
+
+/* [TSMP] [MARK] [VLAN] [CKSUM] [PTYPE] [RSS] */
+#define NIX_RX_FASTPATH_MODES						\
+R(no_offload,            0, 0, 0, 0, 0, 0, NIX_RX_OFFLOAD_NONE)	\
+R(rss,                   0, 0, 0, 0, 0, 1, RSS_F)			\
+R(ptype,                 0, 0, 0, 0, 1, 0, PTYPE_F)			\
+R(ptype_rss,             0, 0, 0, 0, 1, 1, PTYPE_F | RSS_F)		\
+R(cksum,                 0, 0, 0, 1, 0, 0, CKSUM_F)			\
+R(cksum_rss,             0, 0, 0, 1, 0, 1, CKSUM_F | RSS_F)		\
+R(cksum_ptype,           0, 0, 0, 1, 1, 0, CKSUM_F | PTYPE_F)		\
+R(cksum_ptype_rss,       0, 0, 0, 1, 1, 1, CKSUM_F | PTYPE_F | RSS_F)	\
+R(vlan,                  0, 0, 1, 0, 0, 0, RX_VLAN_F)			\
+R(vlan_rss,              0, 0, 1, 0, 0, 1, RX_VLAN_F | RSS_F)		\
+R(vlan_ptype,            0, 0, 1, 0, 1, 0, RX_VLAN_F | PTYPE_F)	\
+R(vlan_ptype_rss,        0, 0, 1, 0, 1, 1, RX_VLAN_F | PTYPE_F | RSS_F)\
+R(vlan_cksum,            0, 0, 1, 1, 0, 0, RX_VLAN_F | CKSUM_F)	\
+R(vlan_cksum_rss,        0, 0, 1, 1, 0, 1, RX_VLAN_F | CKSUM_F | RSS_F)\
+R(vlan_cksum_ptype,      0, 0, 1, 1, 1, 0, RX_VLAN_F | CKSUM_F | PTYPE_F)\
+R(vlan_cksum_ptype_rss,  0, 0, 1, 1, 1, 1, RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)\
+R(mark,                  0, 1, 0, 0, 0, 0, MARK_F)			\
+R(mark_rss,              0, 1, 0, 0, 0, 1, MARK_F | RSS_F)		\
+R(mark_ptype,            0, 1, 0, 0, 1, 0, MARK_F | PTYPE_F)		\
+R(mark_ptype_rss,        0, 1, 0, 0, 1, 1, MARK_F | PTYPE_F | RSS_F)	\
+R(mark_cksum,            0, 1, 0, 1, 0, 0, MARK_F | CKSUM_F)		\
+R(mark_cksum_rss,        0, 1, 0, 1, 0, 1, MARK_F | CKSUM_F | RSS_F)	\
+R(mark_cksum_ptype,      0, 1, 0, 1, 1, 0, MARK_F | CKSUM_F | PTYPE_F)	\
+R(mark_cksum_ptype_rss,  0, 1, 0, 1, 1, 1, MARK_F | CKSUM_F | PTYPE_F | RSS_F)\
+R(mark_vlan,             0, 1, 1, 0, 0, 0, MARK_F | RX_VLAN_F)		\
+R(mark_vlan_rss,         0, 1, 1, 0, 0, 1, MARK_F | RX_VLAN_F | RSS_F)	\
+R(mark_vlan_ptype,       0, 1, 1, 0, 1, 0, MARK_F | RX_VLAN_F | PTYPE_F)\
+R(mark_vlan_ptype_rss,   0, 1, 1, 0, 1, 1, MARK_F | RX_VLAN_F | PTYPE_F | RSS_F)\
+R(mark_vlan_cksum,       0, 1, 1, 1, 0, 0, MARK_F | RX_VLAN_F | CKSUM_F)\
+R(mark_vlan_cksum_rss,   0, 1, 1, 1, 0, 1, MARK_F | RX_VLAN_F | CKSUM_F | RSS_F)\
+R(mark_vlan_cksum_ptype, 0, 1, 1, 1, 1, 0, MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F)\
+R(mark_vlan_cksum_ptype_rss, 0, 1, 1, 1, 1, 1, MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)\
+R(ts,                    1, 0, 0, 0, 0, 0, TS_F)			\
+R(ts_rss,                1, 0, 0, 0, 0, 1, TS_F | RSS_F)		\
+R(ts_ptype,              1, 0, 0, 0, 1, 0, TS_F | PTYPE_F)		\
+R(ts_ptype_rss,          1, 0, 0, 0, 1, 1, TS_F | PTYPE_F | RSS_F)	\
+R(ts_cksum,              1, 0, 0, 1, 0, 0, TS_F | CKSUM_F)		\
+R(ts_cksum_rss,          1, 0, 0, 1, 0, 1, TS_F | CKSUM_F | RSS_F)	\
+R(ts_cksum_ptype,        1, 0, 0, 1, 1, 0, TS_F | CKSUM_F | PTYPE_F)	\
+R(ts_cksum_ptype_rss,    1, 0, 0, 1, 1, 1, TS_F | CKSUM_F | PTYPE_F | RSS_F)\
+R(ts_vlan,               1, 0, 1, 0, 0, 0, TS_F | RX_VLAN_F)		\
+R(ts_vlan_rss,           1, 0, 1, 0, 0, 1, TS_F | RX_VLAN_F | RSS_F)	\
+R(ts_vlan_ptype,         1, 0, 1, 0, 1, 0, TS_F | RX_VLAN_F | PTYPE_F)	\
+R(ts_vlan_ptype_rss,     1, 0, 1, 0, 1, 1, TS_F | RX_VLAN_F | PTYPE_F | RSS_F)\
+R(ts_vlan_cksum,         1, 0, 1, 1, 0, 0, TS_F | RX_VLAN_F | CKSUM_F)	\
+R(ts_vlan_cksum_rss,     1, 0, 1, 1, 0, 1, TS_F | RX_VLAN_F | CKSUM_F | RSS_F)\
+R(ts_vlan_cksum_ptype,   1, 0, 1, 1, 1, 0, TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F)\
+R(ts_vlan_cksum_ptype_rss, 1, 0, 1, 1, 1, 1, TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)\
+R(ts_mark,               1, 1, 0, 0, 0, 0, TS_F | MARK_F)		\
+R(ts_mark_rss,           1, 1, 0, 0, 0, 1, TS_F | MARK_F | RSS_F)	\
+R(ts_mark_ptype,         1, 1, 0, 0, 1, 0, TS_F | MARK_F | PTYPE_F)	\
+R(ts_mark_ptype_rss,     1, 1, 0, 0, 1, 1, TS_F | MARK_F | PTYPE_F | RSS_F)\
+R(ts_mark_cksum,         1, 1, 0, 1, 0, 0, TS_F | MARK_F | CKSUM_F)	\
+R(ts_mark_cksum_rss,     1, 1, 0, 1, 0, 1, TS_F | MARK_F | CKSUM_F | RSS_F)\
+R(ts_mark_cksum_ptype,   1, 1, 0, 1, 1, 0, TS_F | MARK_F | CKSUM_F | PTYPE_F)\
+R(ts_mark_cksum_ptype_rss, 1, 1, 0, 1, 1, 1, TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F)\
+R(ts_mark_vlan,          1, 1, 1, 0, 0, 0, TS_F | MARK_F | RX_VLAN_F)	\
+R(ts_mark_vlan_rss,      1, 1, 1, 0, 0, 1, TS_F | MARK_F | RX_VLAN_F | RSS_F)\
+R(ts_mark_vlan_ptype,    1, 1, 1, 0, 1, 0, TS_F | MARK_F | RX_VLAN_F | PTYPE_F)\
+R(ts_mark_vlan_ptype_rss, 1, 1, 1, 0, 1, 1, TS_F | MARK_F | RX_VLAN_F | PTYPE_F | RSS_F)\
+R(ts_mark_vlan_cksum,    1, 1, 1, 1, 0, 0, TS_F | MARK_F | RX_VLAN_F | CKSUM_F)\
+R(ts_mark_vlan_cksum_rss, 1, 1, 1, 1, 0, 1, TS_F | MARK_F | RX_VLAN_F | CKSUM_F | RSS_F)\
+R(ts_mark_vlan_cksum_ptype, 1, 1, 1, 1, 1, 0, TS_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F)\
+R(ts_mark_vlan_cksum_ptype_rss, 1, 1, 1, 1, 1, 1, TS_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F)
+
 #endif /* __OTX2_RX_H__ */

From patchwork Sun Jun 2 15:24:26 2019
X-Patchwork-Submitter: Jerin Jacob Kollanukkaran
X-Patchwork-Id: 54093
X-Patchwork-Delegate: ferruh.yigit@amd.com
Subject: [dpdk-dev] [PATCH v1 50/58] net/octeontx2: add Rx multi segment version
Date: Sun, 2 Jun 2019 20:54:26 +0530
Message-ID: <20190602152434.23996-51-jerinj@marvell.com>

From: Nithin Dabilpuram

Add a multi-segment version of the packet receive function.
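As a reading aid for the diff below: a receive completion carries one or more
NIX_RX_SG_S sub-descriptors, each describing up to three segments. A minimal
sketch of the field extraction performed by the new nix_cqe_xtract_mseg()
(field positions are taken from the shifts and masks in the code; the helper
names are hypothetical):

    #include <stdint.h>

    /* Segment count lives in bits 49:48 of NIX_RX_SG_S */
    static inline uint8_t sg_nb_segs(uint64_t sg)
    {
        return (sg >> 48) & 0x3;
    }

    /* Three 16-bit segment sizes are packed into bits 47:0 */
    static inline uint16_t sg_seg_size(uint64_t sg, unsigned int i)
    {
        return (sg >> (16 * i)) & 0xFFFF;
    }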
Signed-off-by: Nithin Dabilpuram Signed-off-by: Pavan Nikhilesh --- doc/guides/nics/features/octeontx2.ini | 2 + doc/guides/nics/features/octeontx2_vec.ini | 1 + doc/guides/nics/features/octeontx2_vf.ini | 2 + drivers/net/octeontx2/otx2_rx.c | 25 ++++++++++ drivers/net/octeontx2/otx2_rx.h | 55 +++++++++++++++++++++- 5 files changed, 84 insertions(+), 1 deletion(-) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 6117e1edf..18bcf81cf 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -24,6 +24,8 @@ Inner RSS = Y VLAN filter = Y Flow control = Y Flow API = Y +Jumbo frame = Y +Scattered Rx = Y VLAN offload = Y QinQ offload = Y Packet type parsing = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index 66c327cfc..97a24671e 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -24,6 +24,7 @@ Inner RSS = Y VLAN filter = Y Flow control = Y Flow API = Y +Jumbo frame = Y VLAN offload = Y QinQ offload = Y Packet type parsing = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 3aa0491e1..916a6d7b0 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -19,6 +19,8 @@ RSS reta update = Y Inner RSS = Y VLAN filter = Y Flow API = Y +Jumbo frame = Y +Scattered Rx = Y VLAN offload = Y QinQ offload = Y Packet type parsing = Y diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c index b4a3e9d55..0f0919338 100644 --- a/drivers/net/octeontx2/otx2_rx.c +++ b/drivers/net/octeontx2/otx2_rx.c @@ -91,6 +91,14 @@ otx2_nix_recv_pkts_ ## name(void *rx_queue, \ { \ return nix_recv_pkts(rx_queue, rx_pkts, pkts, (flags)); \ } \ + \ +static uint16_t __rte_noinline __hot \ +otx2_nix_recv_pkts_mseg_ ## name(void *rx_queue, \ + struct rte_mbuf **rx_pkts, uint16_t pkts) \ +{ \ + return nix_recv_pkts(rx_queue, rx_pkts, pkts, \ + (flags) | NIX_RX_MULTI_SEG_F); \ +} \ NIX_RX_FASTPATH_MODES #undef R @@ -114,15 +122,32 @@ pick_rx_func(struct rte_eth_dev *eth_dev, void otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev) { + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_ ## name, +NIX_RX_FASTPATH_MODES +#undef R + }; + + const eth_rx_burst_t nix_eth_rx_burst_mseg[2][2][2][2][2][2] = { +#define R(name, f5, f4, f3, f2, f1, f0, flags) \ + [f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_mseg_ ## name, + NIX_RX_FASTPATH_MODES #undef R }; pick_rx_func(eth_dev, nix_eth_rx_burst); + if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER) + pick_rx_func(eth_dev, nix_eth_rx_burst_mseg); + + /* Copy multi seg version with no offload for tear down sequence */ + if (rte_eal_process_type() == RTE_PROC_PRIMARY) + dev->rx_pkt_burst_no_offload = + nix_eth_rx_burst_mseg[0][0][0][0][0][0]; rte_mb(); } diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h index fc0e87d14..1d1150786 100644 --- a/drivers/net/octeontx2/otx2_rx.h +++ b/drivers/net/octeontx2/otx2_rx.h @@ -23,6 +23,11 @@ #define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4) #define NIX_RX_OFFLOAD_TSTAMP_F BIT(5) +/* Flags to control cqe_to_mbuf conversion function. 
+ * It is defined from the MSB end to denote that it is not one of the
+ * offload flags used to pick the Rx burst function.
+ */
+#define NIX_RX_MULTI_SEG_F BIT(15)
 
 #define NIX_TIMESYNC_RX_OFFSET		8
 
 struct otx2_timesync_info {
@@ -133,6 +138,51 @@ nix_update_match_id(const uint16_t match_id, uint64_t ol_flags,
 	return ol_flags;
 }
 
+static __rte_always_inline void
+nix_cqe_xtract_mseg(const struct nix_rx_parse_s *rx,
+		    struct rte_mbuf *mbuf, uint64_t rearm)
+{
+	const rte_iova_t *iova_list;
+	struct rte_mbuf *head;
+	const rte_iova_t *eol;
+	uint8_t nb_segs;
+	uint64_t sg;
+
+	sg = *(const uint64_t *)(rx + 1);
+	nb_segs = (sg >> 48) & 0x3;
+	mbuf->nb_segs = nb_segs;
+	mbuf->data_len = sg & 0xFFFF;
+	sg = sg >> 16;
+
+	eol = ((const rte_iova_t *)(rx + 1) + ((rx->desc_sizem1 + 1) << 1));
+	/* Skip SG_S and first IOVA */
+	iova_list = ((const rte_iova_t *)(rx + 1)) + 2;
+	nb_segs--;
+
+	rearm = rearm & ~0xFFFF;
+
+	head = mbuf;
+	while (nb_segs) {
+		mbuf->next = ((struct rte_mbuf *)*iova_list) - 1;
+		mbuf = mbuf->next;
+
+		__mempool_check_cookies(mbuf->pool, (void **)&mbuf, 1, 1);
+
+		mbuf->data_len = sg & 0xFFFF;
+		sg = sg >> 16;
+		*(uint64_t *)(&mbuf->rearm_data) = rearm;
+		nb_segs--;
+		iova_list++;
+
+		if (!nb_segs && (iova_list + 1 < eol)) {
+			sg = *(const uint64_t *)(iova_list);
+			nb_segs = (sg >> 48) & 0x3;
+			head->nb_segs += nb_segs;
+			iova_list = (const rte_iova_t *)(iova_list + 1);
+		}
+	}
+}
+
 static __rte_always_inline void
 otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *mbuf,
 		     const void *lookup_mem, const uint64_t val,
@@ -178,7 +228,10 @@ otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, struct rte_mbuf *mbuf,
 	*(uint64_t *)(&mbuf->rearm_data) = val;
 	mbuf->pkt_len = len;
 
-	mbuf->data_len = len;
+	if (flag & NIX_RX_MULTI_SEG_F)
+		nix_cqe_xtract_mseg(rx, mbuf, val);
+	else
+		mbuf->data_len = len;
 }
 
 #define CKSUM_F NIX_RX_OFFLOAD_CHECKSUM_F
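A note on the flag placement above: the six offload flags used to index the
Rx dispatch table occupy BIT(0) through BIT(5), so parking NIX_RX_MULTI_SEG_F
at BIT(15) keeps it out of the table index while still traveling in the same
16-bit flags word. A compile-time check along these lines would express the
invariant (a sketch, not in the patch; the _ALL_F macro is assumed):

    #include <assert.h>

    #define NIX_RX_OFFLOAD_ALL_F 0x3f      /* union of the six BIT(0..5) flags */
    #define NIX_RX_MULTI_SEG_F   (1 << 15)

    static_assert((NIX_RX_MULTI_SEG_F & NIX_RX_OFFLOAD_ALL_F) == 0,
                  "multi-seg control flag must not alias an offload flag");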
From patchwork Sun Jun 2 15:24:27 2019
X-Patchwork-Submitter: Jerin Jacob Kollanukkaran
X-Patchwork-Id: 54112
X-Patchwork-Delegate: ferruh.yigit@amd.com
Subject: [dpdk-dev] [PATCH v1 51/58] net/octeontx2: add Rx vector version
Date: Sun, 2 Jun 2019 20:54:27 +0530
Message-ID: <20190602152434.23996-52-jerinj@marvell.com>

From: Jerin Jacob

Add a vector version of the packet receive function.

Signed-off-by: Jerin Jacob
Signed-off-by: Nithin Dabilpuram
---
 drivers/net/octeontx2/otx2_rx.c | 259 +++++++++++++++++++++++++++++++-
 1 file changed, 258 insertions(+), 1 deletion(-)

diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index 0f0919338..4ba881ffb 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -83,6 +83,239 @@ nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	return nb_pkts;
 }
 
+#if defined(RTE_ARCH_ARM64)
+
+static __rte_always_inline uint64_t
+nix_vlan_update(const uint64_t w2, uint64_t ol_flags, uint8x16_t *f)
+{
+	if (w2 & BIT_ULL(21) /* vtag0_gone */) {
+		ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
+		*f = vsetq_lane_u16((uint16_t)(w2 >> 32), *f, 5);
+	}
+
+	return ol_flags;
+}
+
+static __rte_always_inline uint64_t
+nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
+{
+	if (w2 & BIT_ULL(23) /* vtag1_gone */) {
+		ol_flags |= PKT_RX_QINQ | PKT_RX_QINQ_STRIPPED;
+		mbuf->vlan_tci_outer = (uint16_t)(w2 >> 48);
+	}
+
+	return ol_flags;
+}
+
+static __rte_always_inline uint16_t
+nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
+		     uint16_t pkts, const uint16_t flags)
+{
+	struct otx2_eth_rxq *rxq = rx_queue;
+	uint16_t packets = 0;
+	uint64x2_t cq0_w8, cq1_w8, cq2_w8, cq3_w8, mbuf01, mbuf23;
+	const uint64_t mbuf_initializer = rxq->mbuf_initializer;
+	const uint64x2_t data_off = vdupq_n_u64(rxq->data_off);
+	uint64_t ol_flags0, ol_flags1, ol_flags2, ol_flags3;
+	uint64x2_t rearm0 = vdupq_n_u64(mbuf_initializer);
+	uint64x2_t rearm1 = vdupq_n_u64(mbuf_initializer);
+	uint64x2_t rearm2 = vdupq_n_u64(mbuf_initializer);
+	uint64x2_t rearm3 = vdupq_n_u64(mbuf_initializer);
+	struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
+	const uint16_t *lookup_mem = rxq->lookup_mem;
+	const uint32_t qmask = rxq->qmask;
+	const uint64_t wdata = rxq->wdata;
+	const uintptr_t desc = rxq->desc;
+	uint8x16_t f0, f1, f2, f3;
+	uint32_t head = rxq->head;
+
+	pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
+	/* Packets have to be floor-aligned to NIX_DESCS_PER_LOOP */
+	pkts = RTE_ALIGN_FLOOR(pkts, NIX_DESCS_PER_LOOP);
+
+	while (packets < pkts) {
+		/* Get the CQ pointers, since the ring size is multiple of
4, We can avoid checking the wrap around of head + * value after the each access unlike scalar version. + */ + const uintptr_t cq0 = desc + CQE_SZ(head); + + /* Prefetch N desc ahead */ + rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(8))); + rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(9))); + rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(10))); + rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(11))); + + /* Get NIX_RX_SG_S for size and buffer pointer */ + cq0_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(0) + 64)); + cq1_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(1) + 64)); + cq2_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(2) + 64)); + cq3_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(3) + 64)); + + /* Extract mbuf from NIX_RX_SG_S */ + mbuf01 = vzip2q_u64(cq0_w8, cq1_w8); + mbuf23 = vzip2q_u64(cq2_w8, cq3_w8); + mbuf01 = vqsubq_u64(mbuf01, data_off); + mbuf23 = vqsubq_u64(mbuf23, data_off); + + /* Move mbufs to scalar registers for future use */ + mbuf0 = (struct rte_mbuf *)vgetq_lane_u64(mbuf01, 0); + mbuf1 = (struct rte_mbuf *)vgetq_lane_u64(mbuf01, 1); + mbuf2 = (struct rte_mbuf *)vgetq_lane_u64(mbuf23, 0); + mbuf3 = (struct rte_mbuf *)vgetq_lane_u64(mbuf23, 1); + + /* Mask to get packet len from NIX_RX_SG_S */ + const uint8x16_t shuf_msk = { + 0xFF, 0xFF, /* pkt_type set as unknown */ + 0xFF, 0xFF, /* pkt_type set as unknown */ + 0, 1, /* octet 1~0, low 16 bits pkt_len */ + 0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */ + 0, 1, /* octet 1~0, 16 bits data_len */ + 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF + }; + + /* Form the rx_descriptor_fields1 with pkt_len and data_len */ + f0 = vqtbl1q_u8(cq0_w8, shuf_msk); + f1 = vqtbl1q_u8(cq1_w8, shuf_msk); + f2 = vqtbl1q_u8(cq2_w8, shuf_msk); + f3 = vqtbl1q_u8(cq3_w8, shuf_msk); + + /* Load CQE word0 and word 1 */ + uint64x2_t cq0_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(0))); + uint64x2_t cq1_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(1))); + uint64x2_t cq2_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(2))); + uint64x2_t cq3_w0 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(3))); + + if (flags & NIX_RX_OFFLOAD_RSS_F) { + /* Fill rss in the rx_descriptor_fields1 */ + f0 = vsetq_lane_u32(vgetq_lane_u32(cq0_w0, 0), f0, 3); + f1 = vsetq_lane_u32(vgetq_lane_u32(cq1_w0, 0), f1, 3); + f2 = vsetq_lane_u32(vgetq_lane_u32(cq2_w0, 0), f2, 3); + f3 = vsetq_lane_u32(vgetq_lane_u32(cq3_w0, 0), f3, 3); + ol_flags0 = PKT_RX_RSS_HASH; + ol_flags1 = PKT_RX_RSS_HASH; + ol_flags2 = PKT_RX_RSS_HASH; + ol_flags3 = PKT_RX_RSS_HASH; + } else { + ol_flags0 = 0; ol_flags1 = 0; + ol_flags2 = 0; ol_flags3 = 0; + } + + if (flags & NIX_RX_OFFLOAD_PTYPE_F) { + /* Fill packet_type in the rx_descriptor_fields1 */ + f0 = vsetq_lane_u32(nix_ptype_get(lookup_mem, + vgetq_lane_u64(cq0_w0, 1)), f0, 0); + f1 = vsetq_lane_u32(nix_ptype_get(lookup_mem, + vgetq_lane_u64(cq1_w0, 1)), f1, 0); + f2 = vsetq_lane_u32(nix_ptype_get(lookup_mem, + vgetq_lane_u64(cq2_w0, 1)), f2, 0); + f3 = vsetq_lane_u32(nix_ptype_get(lookup_mem, + vgetq_lane_u64(cq3_w0, 1)), f3, 0); + } + + if (flags & NIX_RX_OFFLOAD_CHECKSUM_F) { + ol_flags0 |= nix_rx_olflags_get(lookup_mem, + vgetq_lane_u64(cq0_w0, 1)); + ol_flags1 |= nix_rx_olflags_get(lookup_mem, + vgetq_lane_u64(cq1_w0, 1)); + ol_flags2 |= nix_rx_olflags_get(lookup_mem, + vgetq_lane_u64(cq2_w0, 1)); + ol_flags3 |= nix_rx_olflags_get(lookup_mem, + vgetq_lane_u64(cq3_w0, 1)); + } + + if (flags & NIX_RX_OFFLOAD_VLAN_STRIP_F) { + uint64_t cq0_w2 = *(uint64_t *)(cq0 + CQE_SZ(0) + 16); + uint64_t cq1_w2 = *(uint64_t *)(cq0 + CQE_SZ(1) + 16); + uint64_t cq2_w2 = *(uint64_t *)(cq0 
+ CQE_SZ(2) + 16); + uint64_t cq3_w2 = *(uint64_t *)(cq0 + CQE_SZ(3) + 16); + + ol_flags0 = nix_vlan_update(cq0_w2, ol_flags0, &f0); + ol_flags1 = nix_vlan_update(cq1_w2, ol_flags1, &f1); + ol_flags2 = nix_vlan_update(cq2_w2, ol_flags2, &f2); + ol_flags3 = nix_vlan_update(cq3_w2, ol_flags3, &f3); + + ol_flags0 = nix_qinq_update(cq0_w2, ol_flags0, mbuf0); + ol_flags1 = nix_qinq_update(cq1_w2, ol_flags1, mbuf1); + ol_flags2 = nix_qinq_update(cq2_w2, ol_flags2, mbuf2); + ol_flags3 = nix_qinq_update(cq3_w2, ol_flags3, mbuf3); + } + + if (flags & NIX_RX_OFFLOAD_MARK_UPDATE_F) { + ol_flags0 = nix_update_match_id(*(uint16_t *) + (cq0 + CQE_SZ(0) + 38), ol_flags0, mbuf0); + ol_flags1 = nix_update_match_id(*(uint16_t *) + (cq0 + CQE_SZ(1) + 38), ol_flags1, mbuf1); + ol_flags2 = nix_update_match_id(*(uint16_t *) + (cq0 + CQE_SZ(2) + 38), ol_flags2, mbuf2); + ol_flags3 = nix_update_match_id(*(uint16_t *) + (cq0 + CQE_SZ(3) + 38), ol_flags3, mbuf3); + } + + /* Form rearm_data with ol_flags */ + rearm0 = vsetq_lane_u64(ol_flags0, rearm0, 1); + rearm1 = vsetq_lane_u64(ol_flags1, rearm1, 1); + rearm2 = vsetq_lane_u64(ol_flags2, rearm2, 1); + rearm3 = vsetq_lane_u64(ol_flags3, rearm3, 1); + + /* Update rx_descriptor_fields1 */ + vst1q_u64((uint64_t *)mbuf0->rx_descriptor_fields1, f0); + vst1q_u64((uint64_t *)mbuf1->rx_descriptor_fields1, f1); + vst1q_u64((uint64_t *)mbuf2->rx_descriptor_fields1, f2); + vst1q_u64((uint64_t *)mbuf3->rx_descriptor_fields1, f3); + + /* Update rearm_data */ + vst1q_u64((uint64_t *)mbuf0->rearm_data, rearm0); + vst1q_u64((uint64_t *)mbuf1->rearm_data, rearm1); + vst1q_u64((uint64_t *)mbuf2->rearm_data, rearm2); + vst1q_u64((uint64_t *)mbuf3->rearm_data, rearm3); + + /* Store the mbufs to rx_pkts */ + vst1q_u64((uint64_t *)&rx_pkts[packets], mbuf01); + vst1q_u64((uint64_t *)&rx_pkts[packets + 2], mbuf23); + + /* Prefetch mbufs */ + otx2_prefetch_store_keep(mbuf0); + otx2_prefetch_store_keep(mbuf1); + otx2_prefetch_store_keep(mbuf2); + otx2_prefetch_store_keep(mbuf3); + + /* Mark mempool obj as "get" as it is alloc'ed by NIX */ + __mempool_check_cookies(mbuf0->pool, (void **)&mbuf0, 1, 1); + __mempool_check_cookies(mbuf1->pool, (void **)&mbuf1, 1, 1); + __mempool_check_cookies(mbuf2->pool, (void **)&mbuf2, 1, 1); + __mempool_check_cookies(mbuf3->pool, (void **)&mbuf3, 1, 1); + + /* Advance head pointer and packets */ + head += NIX_DESCS_PER_LOOP; head &= qmask; + packets += NIX_DESCS_PER_LOOP; + } + + rxq->head = head; + rxq->available -= packets; + + rte_cio_wmb(); + /* Free all the CQs that we've processed */ + otx2_write64((rxq->wdata | packets), rxq->cq_door); + + return packets; +} + +#else + +static inline uint16_t +nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts, + uint16_t pkts, const uint16_t flags) +{ + RTE_SET_USED(rx_queue); + RTE_SET_USED(rx_pkts); + RTE_SET_USED(pkts); + RTE_SET_USED(flags); + + return 0; +} + +#endif #define R(name, f5, f4, f3, f2, f1, f0, flags) \ static uint16_t __rte_noinline __hot \ @@ -99,6 +332,16 @@ otx2_nix_recv_pkts_mseg_ ## name(void *rx_queue, \ return nix_recv_pkts(rx_queue, rx_pkts, pkts, \ (flags) | NIX_RX_MULTI_SEG_F); \ } \ + \ +static uint16_t __rte_noinline __hot \ +otx2_nix_recv_pkts_vec_ ## name(void *rx_queue, \ + struct rte_mbuf **rx_pkts, uint16_t pkts) \ +{ \ + /* TSTMP is not supported by vector */ \ + if ((flags) & NIX_RX_OFFLOAD_TSTAMP_F) \ + return 0; \ + return nix_recv_pkts_vector(rx_queue, rx_pkts, pkts, (flags)); \ +} \ NIX_RX_FASTPATH_MODES #undef R @@ -140,7 +383,21 @@ NIX_RX_FASTPATH_MODES #undef R 
 };
 
-	pick_rx_func(eth_dev, nix_eth_rx_burst);
+	const eth_rx_burst_t nix_eth_rx_vec_burst[2][2][2][2][2][2] = {
+#define R(name, f5, f4, f3, f2, f1, f0, flags)				\
+	[f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_vec_ ## name,
+
+NIX_RX_FASTPATH_MODES
+#undef R
+	};
+
+	/* When PTP is enabled, the scalar Rx function should be chosen, as
+	 * most PTP applications receive a single packet per burst.
+	 */
+	if (dev->scalar_ena || dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+		pick_rx_func(eth_dev, nix_eth_rx_burst);
+	else
+		pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
 
 	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
 		pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);

From patchwork Sun Jun 2 15:24:28 2019
X-Patchwork-Submitter: Jerin Jacob Kollanukkaran
X-Patchwork-Id: 54094
X-Patchwork-Delegate: ferruh.yigit@amd.com
Subject: [dpdk-dev] [PATCH v1 52/58] net/octeontx2: add Tx burst support
Date: Sun, 2 Jun 2019 20:54:28 +0530
Message-ID: <20190602152434.23996-53-jerinj@marvell.com>

From: Jerin Jacob

Add Tx burst support.

Signed-off-by: Jerin Jacob
Signed-off-by: Nithin Dabilpuram
Signed-off-by: Pavan Nikhilesh
Signed-off-by: Harman Kalra
---
 doc/guides/nics/features/octeontx2.ini     |   5 +
 doc/guides/nics/features/octeontx2_vec.ini |   5 +
 doc/guides/nics/features/octeontx2_vf.ini  |   5 +
 drivers/net/octeontx2/Makefile             |   1 +
 drivers/net/octeontx2/meson.build          |   1 +
 drivers/net/octeontx2/otx2_ethdev.c        |   6 -
 drivers/net/octeontx2/otx2_ethdev.h        |   1 +
 drivers/net/octeontx2/otx2_tx.c            |  94 ++++++++
 drivers/net/octeontx2/otx2_tx.h            | 261 +++++++++++++++++++++
 9 files changed, 373 insertions(+), 6 deletions(-)
 create mode 100644 drivers/net/octeontx2/otx2_tx.c

diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
index 18bcf81cf..396979451 100644
--- a/doc/guides/nics/features/octeontx2.ini
+++ b/doc/guides/nics/features/octeontx2.ini
@@ -12,6 +12,7 @@ SR-IOV               = Y
 Multiprocess aware   = Y
 Link status          = Y
 Link status event    = Y
+Fast mbuf free       = Y
 Free Tx mbuf on demand = Y
 Queue start/stop     = Y
 Promiscuous mode     = Y
@@ -28,6 +29,10 @@ Jumbo frame          = Y
 Scattered Rx         = Y
 VLAN offload         = Y
 QinQ offload         = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
+Inner L3 checksum    = Y
+Inner L4 checksum    = Y
 Packet type parsing  = Y
 Timesync             = Y
 Timestamp offload    = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
index 97a24671e..1435fd91e 100644
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ b/doc/guides/nics/features/octeontx2_vec.ini
@@ -12,6 +12,7 @@ SR-IOV               = Y
 Multiprocess aware   = Y
 Link status          = Y
 Link status event    = Y
+Fast mbuf free       = Y
 Free Tx mbuf on demand = Y
 Queue start/stop     = Y
 Promiscuous mode     = Y
@@ -27,6 +28,10 @@ Flow API             = Y
 Jumbo frame          = Y
 VLAN offload         = Y
 QinQ offload         = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
+Inner L3 checksum    = Y
+Inner L4 checksum    = Y
 Packet type parsing  = Y
 Rx descriptor status = Y
 Basic stats          = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
index 916a6d7b0..0d5137316 100644
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ b/doc/guides/nics/features/octeontx2_vf.ini
@@ -11,6 +11,7 @@ Lock-free Tx queue   = Y
 Multiprocess aware   = Y
 Link status          = Y
 Link status event    = Y
+Fast mbuf free       = Y
 Free Tx mbuf on demand = Y
 Queue start/stop     = Y
 RSS hash             = Y
@@ -23,6 +24,10 @@ Jumbo frame          = Y
 Scattered Rx         = Y
 VLAN offload         = Y
 QinQ offload         = Y
+L3 checksum offload  = Y
+L4 checksum offload  = Y
+Inner L3 checksum    = Y
+Inner L4 checksum    = Y
 Packet type parsing  = Y
 Rx descriptor status = Y
 Basic stats          = Y
diff --git a/drivers/net/octeontx2/Makefile b/drivers/net/octeontx2/Makefile
index 76847b2c2..102bf49d7 100644
--- a/drivers/net/octeontx2/Makefile
+++ b/drivers/net/octeontx2/Makefile
@@ -31,6 +31,7 @@ LIBABIVER := 1
 #
 SRCS-$(CONFIG_RTE_LIBRTE_OCTEONTX2_PMD) += \
 	otx2_rx.c 	\
+	otx2_tx.c 	\
 	otx2_tm.c 	\
 	otx2_rss.c 	\
 	otx2_mac.c 	\
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
index 1361f1707..f9b796b5c 100644
--- a/drivers/net/octeontx2/meson.build
+++ b/drivers/net/octeontx2/meson.build
@@ -3,6 +3,7 @@
 #
 sources = files('otx2_rx.c',
+		'otx2_tx.c',
 		'otx2_tm.c',
 		'otx2_rss.c',
 		'otx2_mac.c',
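Before the new otx2_tx.c below: the transmit path throttles on send-queue
buffer (SQB) availability via the NIX_XMIT_FC_OR_RETURN macro. A worked
sketch of its arithmetic, assuming *fc_mem is the hardware-maintained count
of consumed SQBs (the example numbers are illustrative, not from the patch):

    #include <stdint.h>

    /* Room left, expressed in packets: free SQBs * SQEs per SQB */
    static inline uint64_t fc_room_pkts(uint64_t nb_sqb_bufs_adj,
                                        uint64_t consumed_sqbs,
                                        uint64_t sqes_per_sqb_log2)
    {
        return (nb_sqb_bufs_adj - consumed_sqbs) << sqes_per_sqb_log2;
    }

    /* e.g. 512 adjusted SQBs, 500 consumed, 32 SQEs per SQB (log2 = 5):
     * (512 - 500) << 5 = 384 packets may still be queued before returning 0.
     */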
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 9b55e757e..fdcab89b8 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -14,12 +14,6 @@
 #include "otx2_ethdev.h"
 
-static inline void
-otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev)
-{
-	RTE_SET_USED(eth_dev);
-}
-
 static inline uint64_t
 nix_get_rx_offload_capa(struct otx2_eth_dev *dev)
 {
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 3ba47f6ab..bcc351b76 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -453,6 +453,7 @@ int otx2_ethdev_parse_devargs(struct rte_devargs *devargs,
 
 /* Rx and Tx routines */
 void otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev);
+void otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev);
 void otx2_nix_form_default_desc(struct otx2_eth_txq *txq);
 
 /* Timesync - PTP routines */
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
new file mode 100644
index 000000000..16d69b74f
--- /dev/null
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -0,0 +1,94 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2019 Marvell International Ltd.
+ */
+
+#include
+
+#include "otx2_ethdev.h"
+
+#define NIX_XMIT_FC_OR_RETURN(txq, pkts) do {				\
+	/* Cached value is low, update the fc_cache_pkts */		\
+	if (unlikely((txq)->fc_cache_pkts < (pkts))) {			\
+		/* Multiply with sqe_per_sqb to express in pkts */	\
+		(txq)->fc_cache_pkts =					\
+			((txq)->nb_sqb_bufs_adj - *(txq)->fc_mem) <<	\
+			(txq)->sqes_per_sqb_log2;			\
+		/* Check it again for the room */			\
+		if (unlikely((txq)->fc_cache_pkts < (pkts)))		\
+			return 0;					\
+	}								\
+} while (0)
+
+
+static __rte_always_inline uint16_t
+nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
+	      uint16_t pkts, uint64_t *cmd, const uint16_t flags)
+{
+	struct otx2_eth_txq *txq = tx_queue;
+	uint16_t i;
+	const rte_iova_t io_addr = txq->io_addr;
+	void *lmt_addr = txq->lmt_addr;
+
+	NIX_XMIT_FC_OR_RETURN(txq, pkts);
+
+	otx2_lmt_mov(cmd, &txq->cmd[0], otx2_nix_tx_ext_subs(flags));
+
+	/* Let's commit any changes in the packet */
+	rte_cio_wmb();
+
+	for (i = 0; i < pkts; i++) {
+		otx2_nix_xmit_prepare(tx_pkts[i], cmd, flags);
+		/* Passing no. of segdw as 4: HDR + EXT + SG + SMEM */
+		otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
+					     tx_pkts[i]->ol_flags, 4, flags);
+		otx2_nix_xmit_one(cmd, lmt_addr, io_addr, flags);
+	}
+
+	/* Reduce the cached count */
+	txq->fc_cache_pkts -= pkts;
+
+	return pkts;
+}
+
+#define T(name, f4, f3, f2, f1, f0, sz, flags)				\
+static uint16_t __rte_noinline __hot					\
+otx2_nix_xmit_pkts_ ## name(void *tx_queue,				\
+			struct rte_mbuf **tx_pkts, uint16_t pkts)	\
+{									\
+	uint64_t cmd[sz];						\
+									\
+	return nix_xmit_pkts(tx_queue, tx_pkts, pkts, cmd, flags);	\
+}
+
+NIX_TX_FASTPATH_MODES
+#undef T
+
+static inline void
+pick_tx_func(struct rte_eth_dev *eth_dev,
+	     const eth_tx_burst_t tx_burst[2][2][2][2][2])
+{
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
+	/* [TSTMP] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */
+	eth_dev->tx_pkt_burst = tx_burst
+		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F)]
+		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
+		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
+		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
+		[!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
+}
+
+void
+otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev)
+{
+	const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags)				\
+	[f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_ ## name,
+
+NIX_TX_FASTPATH_MODES
+#undef T
+	};
+
+	pick_tx_func(eth_dev, nix_eth_tx_burst);
+
+	rte_mb();
+}
diff --git
a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h index 4d0993f87..db4c1f70f 100644 --- a/drivers/net/octeontx2/otx2_tx.h +++ b/drivers/net/octeontx2/otx2_tx.h @@ -25,4 +25,265 @@ #define NIX_TX_NEED_EXT_HDR \ (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSTAMP_F) +/* Function to determine no of tx subdesc required in case ext + * sub desc is enabled. + */ +static __rte_always_inline int +otx2_nix_tx_ext_subs(const uint16_t flags) +{ + return (flags & NIX_TX_OFFLOAD_TSTAMP_F) ? 2 : + ((flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) ? 1 : 0); +} + +static __rte_always_inline void +otx2_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc, + const uint64_t ol_flags, const uint16_t no_segdw, + const uint16_t flags) +{ + if (flags & NIX_TX_OFFLOAD_TSTAMP_F) { + struct nix_send_mem_s *send_mem; + uint16_t off = (no_segdw - 1) << 1; + + send_mem = (struct nix_send_mem_s *)(cmd + off); + if (flags & NIX_TX_MULTI_SEG_F) + /* Retrieving the default desc values */ + cmd[off] = send_mem_desc[6]; + + /* Packets for which PKT_TX_IEEE1588_TMST is not set, tx tstamp + * should not be updated at tx tstamp registered address, rather + * a dummy address which is eight bytes ahead would be updated + */ + send_mem->addr = (rte_iova_t)((uint64_t *)send_mem_desc[7] + + !(ol_flags & PKT_TX_IEEE1588_TMST)); + } +} + +static inline void +otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags) +{ + struct nix_send_ext_s *send_hdr_ext; + struct nix_send_hdr_s *send_hdr; + uint64_t ol_flags = 0, mask; + union nix_send_hdr_w1_u w1; + union nix_send_sg_s *sg; + + send_hdr = (struct nix_send_hdr_s *)cmd; + if (flags & NIX_TX_NEED_EXT_HDR) { + send_hdr_ext = (struct nix_send_ext_s *)(cmd + 2); + sg = (union nix_send_sg_s *)(cmd + 4); + /* Clear previous markings */ + send_hdr_ext->w0.lso = 0; + send_hdr_ext->w1.u = 0; + } else { + sg = (union nix_send_sg_s *)(cmd + 2); + } + + if (flags & NIX_TX_NEED_SEND_HDR_W1) { + ol_flags = m->ol_flags; + w1.u = 0; + } + + if (!(flags & NIX_TX_MULTI_SEG_F)) { + send_hdr->w0.total = m->data_len; + send_hdr->w0.aura = + npa_lf_aura_handle_to_aura(m->pool->pool_id); + } + + /* + * L3type: 2 => IPV4 + * 3 => IPV4 with csum + * 4 => IPV6 + * L3type and L3ptr needs to be set for either + * L3 csum or L4 csum or LSO + * + */ + + if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) && + (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) { + const uint8_t csum = !!(ol_flags & PKT_TX_OUTER_UDP_CKSUM); + const uint8_t ol3type = + ((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) + + ((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) + + !!(ol_flags & PKT_TX_OUTER_IP_CKSUM); + + /* Outer L3 */ + w1.ol3type = ol3type; + mask = 0xffffull << ((!!ol3type) << 4); + w1.ol3ptr = ~mask & m->outer_l2_len; + w1.ol4ptr = ~mask & (w1.ol3ptr + m->outer_l3_len); + + /* Outer L4 */ + w1.ol4type = csum + (csum << 1); + + /* Inner L3 */ + w1.il3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) + + ((!!(ol_flags & PKT_TX_IPV6)) << 2); + w1.il3ptr = w1.ol4ptr + m->l2_len; + w1.il4ptr = w1.il3ptr + m->l3_len; + /* Increment it by 1 if it is IPV4 as 3 is with csum */ + w1.il3type = w1.il3type + !!(ol_flags & PKT_TX_IP_CKSUM); + + /* Inner L4 */ + w1.il4type = (ol_flags & PKT_TX_L4_MASK) >> 52; + + /* In case of no tunnel header use only + * shift IL3/IL4 fields a bit to use + * OL3/OL4 for header checksum + */ + mask = !ol3type; + w1.u = ((w1.u & 0xFFFFFFFF00000000) >> (mask << 3)) | + ((w1.u & 0X00000000FFFFFFFF) >> (mask << 4)); + + } else if (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) { + const uint8_t csum = 
!!(ol_flags & PKT_TX_OUTER_UDP_CKSUM);
+		const uint8_t outer_l2_len = m->outer_l2_len;
+
+		/* Outer L3 */
+		w1.ol3ptr = outer_l2_len;
+		w1.ol4ptr = outer_l2_len + m->outer_l3_len;
+		/* Increment it by 1 if it is IPV4 as 3 is with csum */
+		w1.ol3type = ((!!(ol_flags & PKT_TX_OUTER_IPV4)) << 1) +
+			((!!(ol_flags & PKT_TX_OUTER_IPV6)) << 2) +
+			!!(ol_flags & PKT_TX_OUTER_IP_CKSUM);
+
+		/* Outer L4 */
+		w1.ol4type = csum + (csum << 1);
+
+	} else if (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) {
+		const uint8_t l2_len = m->l2_len;
+
+		/* Always use OLXPTR and OLXTYPE when only
+		 * one header is present
+		 */
+
+		/* Inner L3 */
+		w1.ol3ptr = l2_len;
+		w1.ol4ptr = l2_len + m->l3_len;
+		/* Increment it by 1 if it is IPV4 as 3 is with csum */
+		w1.ol3type = ((!!(ol_flags & PKT_TX_IPV4)) << 1) +
+			((!!(ol_flags & PKT_TX_IPV6)) << 2) +
+			!!(ol_flags & PKT_TX_IP_CKSUM);
+
+		/* Inner L4 */
+		w1.ol4type = (ol_flags & PKT_TX_L4_MASK) >> 52;
+	}
+
+	if (flags & NIX_TX_NEED_EXT_HDR &&
+	    flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
+		send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & PKT_TX_VLAN);
+		/* HW will update ptr after vlan0 update */
+		send_hdr_ext->w1.vlan1_ins_ptr = 12;
+		send_hdr_ext->w1.vlan1_ins_tci = m->vlan_tci;
+
+		send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & PKT_TX_QINQ);
+		/* 2B before end of l2 header */
+		send_hdr_ext->w1.vlan0_ins_ptr = 12;
+		send_hdr_ext->w1.vlan0_ins_tci = m->vlan_tci_outer;
+	}
+
+	if (flags & NIX_TX_NEED_SEND_HDR_W1)
+		send_hdr->w1.u = w1.u;
+
+	if (!(flags & NIX_TX_MULTI_SEG_F)) {
+		sg->seg1_size = m->data_len;
+		*(rte_iova_t *)(++sg) = rte_mbuf_data_iova(m);
+
+		if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
+			/* Set don't free bit if reference count > 1 */
+			if (rte_pktmbuf_prefree_seg(m) == NULL)
+				send_hdr->w0.df = 1; /* SET DF */
+		}
+		/* Mark mempool object as "put" since it is freed by NIX */
+		if (!send_hdr->w0.df)
+			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+	}
+}
+
+
+static __rte_always_inline void
+otx2_nix_xmit_one(uint64_t *cmd, void *lmt_addr,
+		  const rte_iova_t io_addr, const uint32_t flags)
+{
+	uint64_t lmt_status;
+
+	do {
+		otx2_lmt_mov(lmt_addr, cmd, otx2_nix_tx_ext_subs(flags));
+		lmt_status = otx2_lmt_submit(io_addr);
+	} while (lmt_status == 0);
+}
+
+
+#define L3L4CSUM_F   NIX_TX_OFFLOAD_L3_L4_CSUM_F
+#define OL3OL4CSUM_F NIX_TX_OFFLOAD_OL3_OL4_CSUM_F
+#define VLAN_F       NIX_TX_OFFLOAD_VLAN_QINQ_F
+#define NOFF_F       NIX_TX_OFFLOAD_MBUF_NOFF_F
+#define TSP_F        NIX_TX_OFFLOAD_TSTAMP_F
+
+/* [TSTMP] [NOFF] [VLAN] [OL3OL4CSUM] [L3L4CSUM] */
+#define NIX_TX_FASTPATH_MODES						\
+T(no_offload,                    0, 0, 0, 0, 0, 4, NIX_TX_OFFLOAD_NONE)\
+T(l3l4csum,                      0, 0, 0, 0, 1, 4, L3L4CSUM_F)		\
+T(ol3ol4csum,                    0, 0, 0, 1, 0, 4, OL3OL4CSUM_F)	\
+T(ol3ol4csum_l3l4csum,           0, 0, 0, 1, 1, 4, OL3OL4CSUM_F | L3L4CSUM_F)\
+T(vlan,                          0, 0, 1, 0, 0, 6, VLAN_F)		\
+T(vlan_l3l4csum,                 0, 0, 1, 0, 1, 6, VLAN_F | L3L4CSUM_F)\
+T(vlan_ol3ol4csum,               0, 0, 1, 1, 0, 6, VLAN_F | OL3OL4CSUM_F)\
+T(vlan_ol3ol4csum_l3l4csum,      0, 0, 1, 1, 1, 6, VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(noff,                          0, 1, 0, 0, 0, 4, NOFF_F)		\
+T(noff_l3l4csum,                 0, 1, 0, 0, 1, 4, NOFF_F | L3L4CSUM_F)\
+T(noff_ol3ol4csum,               0, 1, 0, 1, 0, 4, NOFF_F | OL3OL4CSUM_F)\
+T(noff_ol3ol4csum_l3l4csum,      0, 1, 0, 1, 1, 4, NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(noff_vlan,                     0, 1, 1, 0, 0, 6, NOFF_F | VLAN_F)	\
+T(noff_vlan_l3l4csum,            0, 1, 1, 0, 1, 6, NOFF_F | VLAN_F | L3L4CSUM_F)\
+T(noff_vlan_ol3ol4csum,          0, 1, 1, 1, 0, 6, NOFF_F | VLAN_F | OL3OL4CSUM_F)\
+T(noff_vlan_ol3ol4csum_l3l4csum, 0, 1, 1, 1, 1, 6, NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(ts,                            1, 0, 0, 0, 0, 8, TSP_F)		\
+T(ts_l3l4csum,                   1, 0, 0, 0, 1, 8, TSP_F | L3L4CSUM_F)	\
+T(ts_ol3ol4csum,                 1, 0, 0, 1, 0, 8, TSP_F | OL3OL4CSUM_F)\
+T(ts_ol3ol4csum_l3l4csum,        1, 0, 0, 1, 1, 8, TSP_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(ts_vlan,                       1, 0, 1, 0, 0, 8, TSP_F | VLAN_F)	\
+T(ts_vlan_l3l4csum,              1, 0, 1, 0, 1, 8, TSP_F | VLAN_F | L3L4CSUM_F)\
+T(ts_vlan_ol3ol4csum,            1, 0, 1, 1, 0, 8, TSP_F | VLAN_F | OL3OL4CSUM_F)\
+T(ts_vlan_ol3ol4csum_l3l4csum,   1, 0, 1, 1, 1, 8, TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(ts_noff,                       1, 1, 0, 0, 0, 8, TSP_F | NOFF_F)	\
+T(ts_noff_l3l4csum,              1, 1, 0, 0, 1, 8, TSP_F | NOFF_F | L3L4CSUM_F)\
+T(ts_noff_ol3ol4csum,            1, 1, 0, 1, 0, 8, TSP_F | NOFF_F | OL3OL4CSUM_F)\
+T(ts_noff_ol3ol4csum_l3l4csum,   1, 1, 0, 1, 1, 8, TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F)\
+T(ts_noff_vlan,                  1, 1, 1, 0, 0, 8, TSP_F | NOFF_F | VLAN_F)\
+T(ts_noff_vlan_l3l4csum,         1, 1, 1, 0, 1, 8, TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F)\
+T(ts_noff_vlan_ol3ol4csum,       1, 1, 1, 1, 0, 8, TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F)\
+T(ts_noff_vlan_ol3ol4csum_l3l4csum, 1, 1, 1, 1, 1, 8, TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F)
+
 #endif /* __OTX2_TX_H__ */

From patchwork Sun Jun 2 15:24:29 2019
X-Patchwork-Submitter: Jerin Jacob Kollanukkaran
X-Patchwork-Id: 54113
X-Patchwork-Delegate: ferruh.yigit@amd.com
Subject: [dpdk-dev] [PATCH v1 53/58] net/octeontx2: add Tx multi segment version
Date: Sun, 2 Jun 2019 20:54:29 +0530
Message-ID: <20190602152434.23996-54-jerinj@marvell.com>

From: Nithin Dabilpuram

Add a multi-segment version of the packet transmit function.

Signed-off-by: Nithin Dabilpuram
Signed-off-by: Pavan Nikhilesh
---
 drivers/net/octeontx2/otx2_ethdev.h |  4 ++
 drivers/net/octeontx2/otx2_tx.c     | 58 +++++++++++++++++++++
 drivers/net/octeontx2/otx2_tx.h     | 81 +++++++++++++++++++++++++++++
 3 files changed, 143 insertions(+)

diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index bcc351b76..dff4de250 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -71,6 +71,10 @@
 #define NIX_TX_NB_SEG_MAX		9
 #endif
 
+#define NIX_TX_MSEG_SG_DWORDS				\
+	((RTE_ALIGN_MUL_CEIL(NIX_TX_NB_SEG_MAX, 3) / 3)	\
+	 + NIX_TX_NB_SEG_MAX)
+
 /* Apply BP when CQ is 75% full */
 #define NIX_CQ_BP_LEVEL (25 * 256 / 100)
 
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index 16d69b74f..0ac5ea652 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -49,6 +49,37 @@ nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	return pkts;
 }
 
+static __rte_always_inline uint16_t
+nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
+		   uint16_t pkts, uint64_t *cmd, const uint16_t flags)
+{
+	struct otx2_eth_txq *txq = tx_queue;
+	uint64_t i;
+	const rte_iova_t io_addr = txq->io_addr;
+	void *lmt_addr = txq->lmt_addr;
+	uint16_t segdw;
+
+	NIX_XMIT_FC_OR_RETURN(txq, pkts);
+
+	otx2_lmt_mov(cmd, &txq->cmd[0], otx2_nix_tx_ext_subs(flags));
+
+	/* Let's commit any changes in the packet */
+	rte_cio_wmb();
+
+	for (i = 0; i < pkts; i++) {
+		otx2_nix_xmit_prepare(tx_pkts[i], cmd, flags);
+		segdw = otx2_nix_prepare_mseg(tx_pkts[i], cmd, flags);
+		otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
+					     tx_pkts[i]->ol_flags, segdw,
+					     flags);
+		otx2_nix_xmit_mseg_one(cmd, lmt_addr, io_addr, segdw);
+	}
+
+	/* Reduce the cached count */
+	txq->fc_cache_pkts -= pkts;
+
+	return pkts;
+}
+
 #define T(name, f4, f3, f2, f1, f0, sz, flags)				\
 static uint16_t __rte_noinline __hot					\
 otx2_nix_xmit_pkts_ ## name(void *tx_queue,				\
@@ -62,6 +93,20 @@ otx2_nix_xmit_pkts_ ## name(void *tx_queue,			\
 NIX_TX_FASTPATH_MODES
 #undef T
 
+#define T(name, f4, f3, f2, f1, f0, sz, flags)				\
+static uint16_t __rte_noinline __hot					\
+otx2_nix_xmit_pkts_mseg_ ## name(void *tx_queue,			\
+			struct rte_mbuf **tx_pkts, uint16_t pkts)	\
+{									\
+	uint64_t cmd[(sz) + NIX_TX_MSEG_SG_DWORDS - 2];			\
+									\
+	return nix_xmit_pkts_mseg(tx_queue, tx_pkts, pkts, cmd,		\
+				  (flags) | NIX_TX_MULTI_SEG_F);	\
+}
+
+NIX_TX_FASTPATH_MODES
+#undef T
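The cmd[] sizing above deserves a note: NIX_TX_MSEG_SG_DWORDS, defined in the
otx2_ethdev.h hunk earlier in this patch, is ceil(NIX_TX_NB_SEG_MAX / 3) SG
sub-descriptors plus NIX_TX_NB_SEG_MAX IOVA words. A worked sketch for the
default NIX_TX_NB_SEG_MAX of 9 (the reading of the "- 2" is an inference, not
stated in the patch):

    /* ceil(9 / 3) = 3 SG sub-descriptors + 9 IOVA dwords = 12 dwords.
     * The base sz already counts one SG dword and one IOVA dword for the
     * single-segment case, so the mseg variants allocate cmd[(sz) + 12 - 2].
     */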
 static inline void
 pick_tx_func(struct rte_eth_dev *eth_dev,
	      const eth_tx_burst_t tx_burst[2][2][2][2][2])
@@ -80,15 +125,28 @@ pick_tx_func(struct rte_eth_dev *eth_dev,
 void
 otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev)
 {
+	struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
+
 	const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2] = {
 #define T(name, f4, f3, f2, f1, f0, sz, flags)				\
 	[f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_ ## name,
 
+NIX_TX_FASTPATH_MODES
+#undef T
+	};
+
+	const eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2] = {
+#define T(name, f4, f3, f2, f1, f0, sz, flags)				\
+	[f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_mseg_ ## name,
+
 NIX_TX_FASTPATH_MODES
 #undef T
 	};
 
 	pick_tx_func(eth_dev, nix_eth_tx_burst);
 
+	if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
+
 	rte_mb();
 }
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
index db4c1f70f..b75a220ea 100644
--- a/drivers/net/octeontx2/otx2_tx.h
+++ b/drivers/net/octeontx2/otx2_tx.h
@@ -212,6 +212,87 @@ otx2_nix_xmit_one(uint64_t *cmd, void *lmt_addr,
 	} while (lmt_status == 0);
 }
 
+static __rte_always_inline uint16_t
+otx2_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
+{
+	struct nix_send_hdr_s *send_hdr;
+	union nix_send_sg_s *sg;
+	struct rte_mbuf *m_next;
+	uint64_t *slist, sg_u;
+	uint64_t nb_segs;
+	uint64_t segdw;
+	uint8_t off, i;
+
+	send_hdr = (struct nix_send_hdr_s *)cmd;
+	send_hdr->w0.total = m->pkt_len;
+	send_hdr->w0.aura = npa_lf_aura_handle_to_aura(m->pool->pool_id);
+
+	if (flags & NIX_TX_NEED_EXT_HDR)
+		off = 2;
+	else
+		off = 0;
+
+	sg = (union nix_send_sg_s *)&cmd[2 + off];
+	sg_u = sg->u;
+	slist = &cmd[3 + off];
+
+	i = 0;
+	nb_segs = m->nb_segs;
+
+	/* Fill mbuf segments */
+	do {
+		m_next = m->next;
+		sg_u = sg_u | ((uint64_t)m->data_len << (i << 4));
+		*slist = rte_mbuf_data_iova(m);
+		/* Set invert df if reference count > 1 */
+		if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
+			sg_u |=
+			((uint64_t)(rte_pktmbuf_prefree_seg(m) == NULL) <<
+			 (i + 55));
+		/* Mark mempool object as "put" since it is freed by NIX */
+		if (!(sg_u & (1ULL << (i + 55)))) {
+			m->next = NULL;
+			__mempool_check_cookies(m->pool, (void **)&m, 1, 0);
+		}
+		slist++;
+		i++;
+		nb_segs--;
+		if (i > 2 && nb_segs) {
+			i = 0;
+			/* Next SG subdesc */
+			*(uint64_t *)slist = sg_u & 0xFC00000000000000;
+			sg->u = sg_u;
+			sg->segs = 3;
+			sg = (union nix_send_sg_s *)slist;
+			sg_u = sg->u;
+			slist++;
+		}
+		m = m_next;
+	} while (nb_segs);
+
+	sg->u = sg_u;
+	sg->segs = i;
+	segdw = (uint64_t *)slist - (uint64_t *)&cmd[2 + off];
+	/* Round up extra dwords to a multiple of 2 */
+	segdw = (segdw >> 1) + (segdw & 0x1);
+	/* Default dwords */
+	segdw += (off >> 1) + 1 + !!(flags & NIX_TX_OFFLOAD_TSTAMP_F);
+	send_hdr->w0.sizem1 = segdw - 1;
+
+	return segdw;
+}
+
+static __rte_always_inline void
+otx2_nix_xmit_mseg_one(uint64_t *cmd, void *lmt_addr,
+		       rte_iova_t io_addr, uint16_t segdw)
+{
+	uint64_t lmt_status;
+
+	do {
+		otx2_lmt_mov_seg(lmt_addr, (const void *)cmd, segdw);
+		lmt_status = otx2_lmt_submit(io_addr);
+	} while (lmt_status == 0);
+}
 
 #define L3L4CSUM_F	NIX_TX_OFFLOAD_L3_L4_CSUM_F
 #define OL3OL4CSUM_F	NIX_TX_OFFLOAD_OL3_OL4_CSUM_F

From patchwork Sun Jun 2 15:24:30 2019
X-Patchwork-Submitter: Jerin Jacob Kollanukkaran
X-Patchwork-Id: 54095
X-Patchwork-Delegate: ferruh.yigit@amd.com
Subject: [dpdk-dev] [PATCH v1 54/58] net/octeontx2: add Tx vector version
Date: Sun, 2 Jun 2019 20:54:30 +0530
Message-ID: <20190602152434.23996-55-jerinj@marvell.com>

From: Nithin Dabilpuram

Add a vector version of the packet transmit function.
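As a reading aid for the diff below: the vector path builds four send
descriptors per iteration, keeping mbuf pointers and descriptor words in
128-bit NEON registers. A minimal sketch of the pointer handling only
(assumes ARM64 and DPDK headers; not the actual driver code):

    #include <arm_neon.h>
    #include <stdint.h>

    struct rte_mbuf;

    /* Load four mbuf pointers, two per 128-bit register, as the loop does */
    static inline void load_mbuf_pairs(struct rte_mbuf **tx_pkts,
                                       uint64x2_t *mbuf01, uint64x2_t *mbuf23)
    {
        *mbuf01 = vld1q_u64((const uint64_t *)tx_pkts);
        *mbuf23 = vld1q_u64((const uint64_t *)(tx_pkts + 2));
    }

The burst count is floor-aligned to NIX_DESCS_PER_LOOP (4), so any remainder
packets are left for the caller's next burst.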
Signed-off-by: Nithin Dabilpuram Signed-off-by: Pavan Nikhilesh --- drivers/net/octeontx2/otx2_tx.c | 883 +++++++++++++++++++++++++++++++- 1 file changed, 882 insertions(+), 1 deletion(-) diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c index 0ac5ea652..6bce55112 100644 --- a/drivers/net/octeontx2/otx2_tx.c +++ b/drivers/net/octeontx2/otx2_tx.c @@ -80,6 +80,859 @@ nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts, return pkts; } +#if defined(RTE_ARCH_ARM64) + +#define NIX_DESCS_PER_LOOP 4 +static __rte_always_inline uint16_t +nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t pkts, const uint16_t flags) +{ + uint64x2_t dataoff_iova0, dataoff_iova1, dataoff_iova2, dataoff_iova3; + uint64x2_t len_olflags0, len_olflags1, len_olflags2, len_olflags3; + uint64_t *mbuf0, *mbuf1, *mbuf2, *mbuf3; + uint64x2_t senddesc01_w0, senddesc23_w0; + uint64x2_t senddesc01_w1, senddesc23_w1; + uint64x2_t sgdesc01_w0, sgdesc23_w0; + uint64x2_t sgdesc01_w1, sgdesc23_w1; + struct otx2_eth_txq *txq = tx_queue; + uint64_t *lmt_addr = txq->lmt_addr; + rte_iova_t io_addr = txq->io_addr; + uint64x2_t ltypes01, ltypes23; + uint64x2_t xtmp128, ytmp128; + uint64x2_t xmask01, xmask23; + uint64x2_t mbuf01, mbuf23; + uint64x2_t cmd00, cmd01; + uint64x2_t cmd10, cmd11; + uint64x2_t cmd20, cmd21; + uint64x2_t cmd30, cmd31; + uint64_t lmt_status, i; + + pkts = RTE_ALIGN_FLOOR(pkts, NIX_DESCS_PER_LOOP); + + NIX_XMIT_FC_OR_RETURN(txq, pkts); + + /* Reduce the cached count */ + txq->fc_cache_pkts -= pkts; + + /* Lets commit any changes in the packet */ + rte_cio_wmb(); + + senddesc01_w0 = vld1q_dup_u64(&txq->cmd[0]); + senddesc23_w0 = senddesc01_w0; + senddesc01_w1 = vdupq_n_u64(0); + senddesc23_w1 = senddesc01_w1; + sgdesc01_w0 = vld1q_dup_u64(&txq->cmd[2]); + sgdesc23_w0 = sgdesc01_w0; + + for (i = 0; i < pkts; i += NIX_DESCS_PER_LOOP) { + mbuf01 = vld1q_u64((uint64_t *)tx_pkts); + mbuf23 = vld1q_u64((uint64_t *)(tx_pkts + 2)); + + /* Clear lower 32bit of SEND_HDR_W0 and SEND_SG_W0 */ + senddesc01_w0 = vbicq_u64(senddesc01_w0, + vdupq_n_u64(0xFFFFFFFF)); + sgdesc01_w0 = vbicq_u64(sgdesc01_w0, + vdupq_n_u64(0xFFFFFFFF)); + + senddesc23_w0 = senddesc01_w0; + sgdesc23_w0 = sgdesc01_w0; + + tx_pkts = tx_pkts + NIX_DESCS_PER_LOOP; + + /* Move mbufs to iova */ + mbuf0 = (uint64_t *)vgetq_lane_u64(mbuf01, 0); + mbuf1 = (uint64_t *)vgetq_lane_u64(mbuf01, 1); + mbuf2 = (uint64_t *)vgetq_lane_u64(mbuf23, 0); + mbuf3 = (uint64_t *)vgetq_lane_u64(mbuf23, 1); + + mbuf0 = (uint64_t *)((uintptr_t)mbuf0 + + offsetof(struct rte_mbuf, buf_iova)); + mbuf1 = (uint64_t *)((uintptr_t)mbuf1 + + offsetof(struct rte_mbuf, buf_iova)); + mbuf2 = (uint64_t *)((uintptr_t)mbuf2 + + offsetof(struct rte_mbuf, buf_iova)); + mbuf3 = (uint64_t *)((uintptr_t)mbuf3 + + offsetof(struct rte_mbuf, buf_iova)); + /* + * Get mbuf's, olflags, iova, pktlen, dataoff + * dataoff_iovaX.D[0] = iova, + * dataoff_iovaX.D[1](15:0) = mbuf->dataoff + * len_olflagsX.D[0] = ol_flags, + * len_olflagsX.D[1](63:32) = mbuf->pkt_len + */ + dataoff_iova0 = vld1q_u64(mbuf0); + len_olflags0 = vld1q_u64(mbuf0 + 2); + dataoff_iova1 = vld1q_u64(mbuf1); + len_olflags1 = vld1q_u64(mbuf1 + 2); + dataoff_iova2 = vld1q_u64(mbuf2); + len_olflags2 = vld1q_u64(mbuf2 + 2); + dataoff_iova3 = vld1q_u64(mbuf3); + len_olflags3 = vld1q_u64(mbuf3 + 2); + + if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) { + struct rte_mbuf *mbuf; + /* Set don't free bit if reference count > 1 */ + xmask01 = vdupq_n_u64(0); + xmask23 = xmask01; + + mbuf = (struct 
rte_mbuf *)((uintptr_t)mbuf0 -
+				offsetof(struct rte_mbuf, buf_iova));
+
+			if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+				xmask01 = vsetq_lane_u64(0x80000, xmask01, 0);
+			else
+				__mempool_check_cookies(mbuf->pool,
+							(void **)&mbuf,
+							1, 0);
+
+			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 -
+				offsetof(struct rte_mbuf, buf_iova));
+			if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+				xmask01 = vsetq_lane_u64(0x80000, xmask01, 1);
+			else
+				__mempool_check_cookies(mbuf->pool,
+							(void **)&mbuf,
+							1, 0);
+
+			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 -
+				offsetof(struct rte_mbuf, buf_iova));
+			if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+				xmask23 = vsetq_lane_u64(0x80000, xmask23, 0);
+			else
+				__mempool_check_cookies(mbuf->pool,
+							(void **)&mbuf,
+							1, 0);
+
+			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 -
+				offsetof(struct rte_mbuf, buf_iova));
+			if (rte_pktmbuf_prefree_seg(mbuf) == NULL)
+				xmask23 = vsetq_lane_u64(0x80000, xmask23, 1);
+			else
+				__mempool_check_cookies(mbuf->pool,
+							(void **)&mbuf,
+							1, 0);
+			senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
+			senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
+		} else {
+			struct rte_mbuf *mbuf;
+			/* Mark mempool object as "put" since
+			 * it is freed by NIX
+			 */
+			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 -
+				offsetof(struct rte_mbuf, buf_iova));
+			__mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+						1, 0);
+
+			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 -
+				offsetof(struct rte_mbuf, buf_iova));
+			__mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+						1, 0);
+
+			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 -
+				offsetof(struct rte_mbuf, buf_iova));
+			__mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+						1, 0);
+
+			mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 -
+				offsetof(struct rte_mbuf, buf_iova));
+			__mempool_check_cookies(mbuf->pool, (void **)&mbuf,
+						1, 0);
+			RTE_SET_USED(mbuf);
+		}
+
+		/* Move mbufs to point to pool */
+		mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
+			offsetof(struct rte_mbuf, pool) -
+			offsetof(struct rte_mbuf, buf_iova));
+		mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
+			offsetof(struct rte_mbuf, pool) -
+			offsetof(struct rte_mbuf, buf_iova));
+		mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
+			offsetof(struct rte_mbuf, pool) -
+			offsetof(struct rte_mbuf, buf_iova));
+		mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
+			offsetof(struct rte_mbuf, pool) -
+			offsetof(struct rte_mbuf, buf_iova));
+
+		if (flags &
+		    (NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
+		     NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
+			/* Get tx_offload for ol2, ol3, l2, l3 lengths */
+			/*
+			 * E(8):OL2_LEN(7):OL3_LEN(9):E(24):L3_LEN(9):L2_LEN(7)
+			 * E(8):OL2_LEN(7):OL3_LEN(9):E(24):L3_LEN(9):L2_LEN(7)
+			 */
+
+			asm volatile ("LD1 {%[a].D}[0],[%[in]]\n\t" :
+				      [a]"+w"(senddesc01_w1) :
+				      [in]"r"(mbuf0 + 2) : "memory");
+
+			asm volatile ("LD1 {%[a].D}[1],[%[in]]\n\t" :
+				      [a]"+w"(senddesc01_w1) :
+				      [in]"r"(mbuf1 + 2) : "memory");
+
+			asm volatile ("LD1 {%[b].D}[0],[%[in]]\n\t" :
+				      [b]"+w"(senddesc23_w1) :
+				      [in]"r"(mbuf2 + 2) : "memory");
+
+			asm volatile ("LD1 {%[b].D}[1],[%[in]]\n\t" :
+				      [b]"+w"(senddesc23_w1) :
+				      [in]"r"(mbuf3 + 2) : "memory");
+
+			/* Get pool pointer alone */
+			mbuf0 = (uint64_t *)*mbuf0;
+			mbuf1 = (uint64_t *)*mbuf1;
+			mbuf2 = (uint64_t *)*mbuf2;
+			mbuf3 = (uint64_t *)*mbuf3;
+		} else {
+			/* Get pool pointer alone */
+			mbuf0 = (uint64_t *)*mbuf0;
+			mbuf1 = (uint64_t *)*mbuf1;
+			mbuf2 = (uint64_t *)*mbuf2;
+			mbuf3 = (uint64_t *)*mbuf3;
+		}
+
+		const uint8x16_t shuf_mask2 = {
+			0x4, 0x5, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+			0xc, 0xd, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
+		};
+		xtmp128 = vzip2q_u64(len_olflags0, len_olflags1);
+		ytmp128 =
vzip2q_u64(len_olflags2, len_olflags3); + + /* Clear dataoff_iovaX.D[1] bits other than dataoff(15:0) */ + const uint64x2_t and_mask0 = { + 0xFFFFFFFFFFFFFFFF, + 0x000000000000FFFF, + }; + + dataoff_iova0 = vandq_u64(dataoff_iova0, and_mask0); + dataoff_iova1 = vandq_u64(dataoff_iova1, and_mask0); + dataoff_iova2 = vandq_u64(dataoff_iova2, and_mask0); + dataoff_iova3 = vandq_u64(dataoff_iova3, and_mask0); + + /* + * Pick only 16 bits of pktlen preset at bits 63:32 + * and place them at bits 15:0. + */ + xtmp128 = vqtbl1q_u8(xtmp128, shuf_mask2); + ytmp128 = vqtbl1q_u8(ytmp128, shuf_mask2); + + /* Add pairwise to get dataoff + iova in sgdesc_w1 */ + sgdesc01_w1 = vpaddq_u64(dataoff_iova0, dataoff_iova1); + sgdesc23_w1 = vpaddq_u64(dataoff_iova2, dataoff_iova3); + + /* Orr both sgdesc_w0 and senddesc_w0 with 16 bits of + * pktlen at 15:0 position. + */ + sgdesc01_w0 = vorrq_u64(sgdesc01_w0, xtmp128); + sgdesc23_w0 = vorrq_u64(sgdesc23_w0, ytmp128); + senddesc01_w0 = vorrq_u64(senddesc01_w0, xtmp128); + senddesc23_w0 = vorrq_u64(senddesc23_w0, ytmp128); + + if ((flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) && + !(flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) { + /* + * Lookup table to translate ol_flags to + * il3/il4 types. But we still use ol3/ol4 types in + * senddesc_w1 as only one header processing is enabled. + */ + const uint8x16_t tbl = { + /* [0-15] = il4type:il3type */ + 0x04, /* none (IPv6 assumed) */ + 0x14, /* PKT_TX_TCP_CKSUM (IPv6 assumed) */ + 0x24, /* PKT_TX_SCTP_CKSUM (IPv6 assumed) */ + 0x34, /* PKT_TX_UDP_CKSUM (IPv6 assumed) */ + 0x03, /* PKT_TX_IP_CKSUM */ + 0x13, /* PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM */ + 0x23, /* PKT_TX_IP_CKSUM | PKT_TX_SCTP_CKSUM */ + 0x33, /* PKT_TX_IP_CKSUM | PKT_TX_UDP_CKSUM */ + 0x02, /* PKT_TX_IPV4 */ + 0x12, /* PKT_TX_IPV4 | PKT_TX_TCP_CKSUM */ + 0x22, /* PKT_TX_IPV4 | PKT_TX_SCTP_CKSUM */ + 0x32, /* PKT_TX_IPV4 | PKT_TX_UDP_CKSUM */ + 0x03, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM */ + 0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM | + * PKT_TX_TCP_CKSUM + */ + 0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM | + * PKT_TX_SCTP_CKSUM + */ + 0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM | + * PKT_TX_UDP_CKSUM + */ + }; + + /* Extract olflags to translate to iltypes */ + xtmp128 = vzip1q_u64(len_olflags0, len_olflags1); + ytmp128 = vzip1q_u64(len_olflags2, len_olflags3); + + /* + * E(47):L3_LEN(9):L2_LEN(7+z) + * E(47):L3_LEN(9):L2_LEN(7+z) + */ + senddesc01_w1 = vshlq_n_u64(senddesc01_w1, 1); + senddesc23_w1 = vshlq_n_u64(senddesc23_w1, 1); + + /* Move OLFLAGS bits 55:52 to 51:48 + * with zeros preprended on the byte and rest + * don't care + */ + xtmp128 = vshrq_n_u8(xtmp128, 4); + ytmp128 = vshrq_n_u8(ytmp128, 4); + /* + * E(48):L3_LEN(8):L2_LEN(z+7) + * E(48):L3_LEN(8):L2_LEN(z+7) + */ + const int8x16_t tshft3 = { + -1, 0, 8, 8, 8, 8, 8, 8, + -1, 0, 8, 8, 8, 8, 8, 8, + }; + + senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3); + senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3); + + /* Do the lookup */ + ltypes01 = vqtbl1q_u8(tbl, xtmp128); + ltypes23 = vqtbl1q_u8(tbl, ytmp128); + + /* Just use ld1q to retrieve aura + * when we don't need tx_offload + */ + mbuf0 = (uint64_t *)((uintptr_t)mbuf0 + + offsetof(struct rte_mempool, pool_id)); + mbuf1 = (uint64_t *)((uintptr_t)mbuf1 + + offsetof(struct rte_mempool, pool_id)); + mbuf2 = (uint64_t *)((uintptr_t)mbuf2 + + offsetof(struct rte_mempool, pool_id)); + mbuf3 = (uint64_t *)((uintptr_t)mbuf3 + + offsetof(struct rte_mempool, pool_id)); + + /* Pick only relevant fields i.e Bit 48:55 of iltype + * and place it in ol3/ol4type of senddesc_w1 + */ + 
const uint8x16_t shuf_mask0 = { + 0xFF, 0xFF, 0xFF, 0xFF, 0x6, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xE, 0xFF, 0xFF, 0xFF, + }; + + ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0); + ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0); + + /* Prepare ol4ptr, ol3ptr from ol3len, ol2len. + * a [E(32):E(16):OL3(8):OL2(8)] + * a = a + (a << 8) + * a [E(32):E(16):(OL3+OL2):OL2] + * => E(32):E(16)::OL4PTR(8):OL3PTR(8) + */ + senddesc01_w1 = vaddq_u8(senddesc01_w1, + vshlq_n_u16(senddesc01_w1, 8)); + senddesc23_w1 = vaddq_u8(senddesc23_w1, + vshlq_n_u16(senddesc23_w1, 8)); + + /* Create first half of 4W cmd for 4 mbufs (sgdesc) */ + cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1); + cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1); + cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1); + cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1); + + xmask01 = vdupq_n_u64(0); + xmask23 = xmask01; + asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" : + [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory"); + + asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" : + [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory"); + + asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" : + [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory"); + + asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" : + [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory"); + xmask01 = vshlq_n_u64(xmask01, 20); + xmask23 = vshlq_n_u64(xmask23, 20); + + senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01); + senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23); + /* Move ltypes to senddesc*_w1 */ + senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01); + senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23); + + /* Create first half of 4W cmd for 4 mbufs (sendhdr) */ + cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1); + cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1); + cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1); + cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1); + + } else if (!(flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) && + (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) { + /* + * Lookup table to translate ol_flags to + * ol3/ol4 types. 
+ */ + + const uint8x16_t tbl = { + /* [0-15] = ol4type:ol3type */ + 0x00, /* none */ + 0x03, /* OUTER_IP_CKSUM */ + 0x02, /* OUTER_IPV4 */ + 0x03, /* OUTER_IPV4 | OUTER_IP_CKSUM */ + 0x04, /* OUTER_IPV6 */ + 0x00, /* OUTER_IPV6 | OUTER_IP_CKSUM */ + 0x00, /* OUTER_IPV6 | OUTER_IPV4 */ + 0x00, /* OUTER_IPV6 | OUTER_IPV4 | + * OUTER_IP_CKSUM + */ + 0x00, /* OUTER_UDP_CKSUM */ + 0x33, /* OUTER_UDP_CKSUM | OUTER_IP_CKSUM */ + 0x32, /* OUTER_UDP_CKSUM | OUTER_IPV4 */ + 0x33, /* OUTER_UDP_CKSUM | OUTER_IPV4 | + * OUTER_IP_CKSUM + */ + 0x34, /* OUTER_UDP_CKSUM | OUTER_IPV6 */ + 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 | + * OUTER_IP_CKSUM + */ + 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 | + * OUTER_IPV4 + */ + 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 | + * OUTER_IPV4 | OUTER_IP_CKSUM + */ + }; + + /* Extract olflags to translate to iltypes */ + xtmp128 = vzip1q_u64(len_olflags0, len_olflags1); + ytmp128 = vzip1q_u64(len_olflags2, len_olflags3); + + /* + * E(47):OL3_LEN(9):OL2_LEN(7+z) + * E(47):OL3_LEN(9):OL2_LEN(7+z) + */ + const uint8x16_t shuf_mask5 = { + 0x6, 0x5, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + 0xE, 0xD, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, + }; + senddesc01_w1 = vqtbl1q_u8(senddesc01_w1, shuf_mask5); + senddesc23_w1 = vqtbl1q_u8(senddesc23_w1, shuf_mask5); + + /* Extract outer ol flags only */ + const uint64x2_t o_cksum_mask = { + 0x1C00020000000000, + 0x1C00020000000000, + }; + + xtmp128 = vandq_u64(xtmp128, o_cksum_mask); + ytmp128 = vandq_u64(ytmp128, o_cksum_mask); + + /* Extract OUTER_UDP_CKSUM bit 41 and + * move it to bit 61 + */ + + xtmp128 = xtmp128 | vshlq_n_u64(xtmp128, 20); + ytmp128 = ytmp128 | vshlq_n_u64(ytmp128, 20); + + /* Shift oltype by 2 to start nibble from BIT(56) + * instead of BIT(58) + */ + xtmp128 = vshrq_n_u8(xtmp128, 2); + ytmp128 = vshrq_n_u8(ytmp128, 2); + /* + * E(48):L3_LEN(8):L2_LEN(z+7) + * E(48):L3_LEN(8):L2_LEN(z+7) + */ + const int8x16_t tshft3 = { + -1, 0, 8, 8, 8, 8, 8, 8, + -1, 0, 8, 8, 8, 8, 8, 8, + }; + + senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3); + senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3); + + /* Do the lookup */ + ltypes01 = vqtbl1q_u8(tbl, xtmp128); + ltypes23 = vqtbl1q_u8(tbl, ytmp128); + + /* Just use ld1q to retrieve aura + * when we don't need tx_offload + */ + mbuf0 = (uint64_t *)((uintptr_t)mbuf0 + + offsetof(struct rte_mempool, pool_id)); + mbuf1 = (uint64_t *)((uintptr_t)mbuf1 + + offsetof(struct rte_mempool, pool_id)); + mbuf2 = (uint64_t *)((uintptr_t)mbuf2 + + offsetof(struct rte_mempool, pool_id)); + mbuf3 = (uint64_t *)((uintptr_t)mbuf3 + + offsetof(struct rte_mempool, pool_id)); + + /* Pick only relevant fields i.e Bit 56:63 of oltype + * and place it in ol3/ol4type of senddesc_w1 + */ + const uint8x16_t shuf_mask0 = { + 0xFF, 0xFF, 0xFF, 0xFF, 0x7, 0xFF, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xF, 0xFF, 0xFF, 0xFF, + }; + + ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0); + ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0); + + /* Prepare ol4ptr, ol3ptr from ol3len, ol2len. 
+ * a [E(32):E(16):OL3(8):OL2(8)] + * a = a + (a << 8) + * a [E(32):E(16):(OL3+OL2):OL2] + * => E(32):E(16)::OL4PTR(8):OL3PTR(8) + */ + senddesc01_w1 = vaddq_u8(senddesc01_w1, + vshlq_n_u16(senddesc01_w1, 8)); + senddesc23_w1 = vaddq_u8(senddesc23_w1, + vshlq_n_u16(senddesc23_w1, 8)); + + /* Create second half of 4W cmd for 4 mbufs (sgdesc) */ + cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1); + cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1); + cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1); + cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1); + + xmask01 = vdupq_n_u64(0); + xmask23 = xmask01; + asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" : + [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory"); + + asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" : + [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory"); + + asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" : + [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory"); + + asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" : + [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory"); + xmask01 = vshlq_n_u64(xmask01, 20); + xmask23 = vshlq_n_u64(xmask23, 20); + + senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01); + senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23); + /* Move ltypes to senddesc*_w1 */ + senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01); + senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23); + + /* Create first half of 4W cmd for 4 mbufs (sendhdr) */ + cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1); + cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1); + cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1); + cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1); + + } else if ((flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) && + (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) { + /* Lookup table to translate ol_flags to + * ol4type, ol3type, il4type, il3type of senddesc_w1 + */ + const uint8x16x2_t tbl = { + { + { + /* [0-15] = il4type:il3type */ + 0x04, /* none (IPv6) */ + 0x14, /* PKT_TX_TCP_CKSUM (IPv6) */ + 0x24, /* PKT_TX_SCTP_CKSUM (IPv6) */ + 0x34, /* PKT_TX_UDP_CKSUM (IPv6) */ + 0x03, /* PKT_TX_IP_CKSUM */ + 0x13, /* PKT_TX_IP_CKSUM | + * PKT_TX_TCP_CKSUM + */ + 0x23, /* PKT_TX_IP_CKSUM | + * PKT_TX_SCTP_CKSUM + */ + 0x33, /* PKT_TX_IP_CKSUM | + * PKT_TX_UDP_CKSUM + */ + 0x02, /* PKT_TX_IPV4 */ + 0x12, /* PKT_TX_IPV4 | + * PKT_TX_TCP_CKSUM + */ + 0x22, /* PKT_TX_IPV4 | + * PKT_TX_SCTP_CKSUM + */ + 0x32, /* PKT_TX_IPV4 | + * PKT_TX_UDP_CKSUM + */ + 0x03, /* PKT_TX_IPV4 | + * PKT_TX_IP_CKSUM + */ + 0x13, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM | + * PKT_TX_TCP_CKSUM + */ + 0x23, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM | + * PKT_TX_SCTP_CKSUM + */ + 0x33, /* PKT_TX_IPV4 | PKT_TX_IP_CKSUM | + * PKT_TX_UDP_CKSUM + */ + }, + + { + /* [16-31] = ol4type:ol3type */ + 0x00, /* none */ + 0x03, /* OUTER_IP_CKSUM */ + 0x02, /* OUTER_IPV4 */ + 0x03, /* OUTER_IPV4 | OUTER_IP_CKSUM */ + 0x04, /* OUTER_IPV6 */ + 0x00, /* OUTER_IPV6 | OUTER_IP_CKSUM */ + 0x00, /* OUTER_IPV6 | OUTER_IPV4 */ + 0x00, /* OUTER_IPV6 | OUTER_IPV4 | + * OUTER_IP_CKSUM + */ + 0x00, /* OUTER_UDP_CKSUM */ + 0x33, /* OUTER_UDP_CKSUM | + * OUTER_IP_CKSUM + */ + 0x32, /* OUTER_UDP_CKSUM | + * OUTER_IPV4 + */ + 0x33, /* OUTER_UDP_CKSUM | + * OUTER_IPV4 | OUTER_IP_CKSUM + */ + 0x34, /* OUTER_UDP_CKSUM | + * OUTER_IPV6 + */ + 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 | + * OUTER_IP_CKSUM + */ + 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 | + * OUTER_IPV4 + */ + 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 | + * OUTER_IPV4 | OUTER_IP_CKSUM + */ + }, + } + }; + + /* Extract olflags to translate to oltype & iltype */ + xtmp128 = vzip1q_u64(len_olflags0, len_olflags1); + ytmp128 = 
vzip1q_u64(len_olflags2, len_olflags3); + + /* + * E(8):OL2_LN(7):OL3_LN(9):E(23):L3_LN(9):L2_LN(7+z) + * E(8):OL2_LN(7):OL3_LN(9):E(23):L3_LN(9):L2_LN(7+z) + */ + const uint32x4_t tshft_4 = { + 1, 0, + 1, 0, + }; + senddesc01_w1 = vshlq_u32(senddesc01_w1, tshft_4); + senddesc23_w1 = vshlq_u32(senddesc23_w1, tshft_4); + + /* + * E(32):L3_LEN(8):L2_LEN(7+Z):OL3_LEN(8):OL2_LEN(7+Z) + * E(32):L3_LEN(8):L2_LEN(7+Z):OL3_LEN(8):OL2_LEN(7+Z) + */ + const uint8x16_t shuf_mask5 = { + 0x6, 0x5, 0x0, 0x1, 0xFF, 0xFF, 0xFF, 0xFF, + 0xE, 0xD, 0x8, 0x9, 0xFF, 0xFF, 0xFF, 0xFF, + }; + senddesc01_w1 = vqtbl1q_u8(senddesc01_w1, shuf_mask5); + senddesc23_w1 = vqtbl1q_u8(senddesc23_w1, shuf_mask5); + + /* Extract outer and inner header ol_flags */ + const uint64x2_t oi_cksum_mask = { + 0x1CF0020000000000, + 0x1CF0020000000000, + }; + + xtmp128 = vandq_u64(xtmp128, oi_cksum_mask); + ytmp128 = vandq_u64(ytmp128, oi_cksum_mask); + + /* Extract OUTER_UDP_CKSUM bit 41 and + * move it to bit 61 + */ + + xtmp128 = xtmp128 | vshlq_n_u64(xtmp128, 20); + ytmp128 = ytmp128 | vshlq_n_u64(ytmp128, 20); + + /* Shift right oltype by 2 and iltype by 4 + * to start oltype nibble from BIT(58) + * instead of BIT(56) and iltype nibble from BIT(48) + * instead of BIT(52). + */ + const int8x16_t tshft5 = { + 8, 8, 8, 8, 8, 8, -4, -2, + 8, 8, 8, 8, 8, 8, -4, -2, + }; + + xtmp128 = vshlq_u8(xtmp128, tshft5); + ytmp128 = vshlq_u8(ytmp128, tshft5); + /* + * E(32):L3_LEN(8):L2_LEN(8):OL3_LEN(8):OL2_LEN(8) + * E(32):L3_LEN(8):L2_LEN(8):OL3_LEN(8):OL2_LEN(8) + */ + const int8x16_t tshft3 = { + -1, 0, -1, 0, 0, 0, 0, 0, + -1, 0, -1, 0, 0, 0, 0, 0, + }; + + senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3); + senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3); + + /* Mark Bit(4) of oltype */ + const uint64x2_t oi_cksum_mask2 = { + 0x1000000000000000, + 0x1000000000000000, + }; + + xtmp128 = vorrq_u64(xtmp128, oi_cksum_mask2); + ytmp128 = vorrq_u64(ytmp128, oi_cksum_mask2); + + /* Do the lookup */ + ltypes01 = vqtbl2q_u8(tbl, xtmp128); + ltypes23 = vqtbl2q_u8(tbl, ytmp128); + + /* Just use ld1q to retrieve aura + * when we don't need tx_offload + */ + mbuf0 = (uint64_t *)((uintptr_t)mbuf0 + + offsetof(struct rte_mempool, pool_id)); + mbuf1 = (uint64_t *)((uintptr_t)mbuf1 + + offsetof(struct rte_mempool, pool_id)); + mbuf2 = (uint64_t *)((uintptr_t)mbuf2 + + offsetof(struct rte_mempool, pool_id)); + mbuf3 = (uint64_t *)((uintptr_t)mbuf3 + + offsetof(struct rte_mempool, pool_id)); + + /* Pick only relevant fields i.e Bit 48:55 of iltype and + * Bit 56:63 of oltype and place it in corresponding + * place in senddesc_w1. + */ + const uint8x16_t shuf_mask0 = { + 0xFF, 0xFF, 0xFF, 0xFF, 0x7, 0x6, 0xFF, 0xFF, + 0xFF, 0xFF, 0xFF, 0xFF, 0xF, 0xE, 0xFF, 0xFF, + }; + + ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0); + ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0); + + /* Prepare l4ptr, l3ptr, ol4ptr, ol3ptr from + * l3len, l2len, ol3len, ol2len. 
+ * a [E(32):L3(8):L2(8):OL3(8):OL2(8)] + * a = a + (a << 8) + * a [E:(L3+L2):(L2+OL3):(OL3+OL2):OL2] + * a = a + (a << 16) + * a [E:(L3+L2+OL3+OL2):(L2+OL3+OL2):(OL3+OL2):OL2] + * => E(32):IL4PTR(8):IL3PTR(8):OL4PTR(8):OL3PTR(8) + */ + senddesc01_w1 = vaddq_u8(senddesc01_w1, + vshlq_n_u32(senddesc01_w1, 8)); + senddesc23_w1 = vaddq_u8(senddesc23_w1, + vshlq_n_u32(senddesc23_w1, 8)); + + /* Create second half of 4W cmd for 4 mbufs (sgdesc) */ + cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1); + cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1); + cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1); + cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1); + + /* Continue preparing l4ptr, l3ptr, ol4ptr, ol3ptr */ + senddesc01_w1 = vaddq_u8(senddesc01_w1, + vshlq_n_u32(senddesc01_w1, 16)); + senddesc23_w1 = vaddq_u8(senddesc23_w1, + vshlq_n_u32(senddesc23_w1, 16)); + + xmask01 = vdupq_n_u64(0); + xmask23 = xmask01; + asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" : + [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory"); + + asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" : + [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory"); + + asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" : + [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory"); + + asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" : + [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory"); + xmask01 = vshlq_n_u64(xmask01, 20); + xmask23 = vshlq_n_u64(xmask23, 20); + + senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01); + senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23); + /* Move ltypes to senddesc*_w1 */ + senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01); + senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23); + + /* Create first half of 4W cmd for 4 mbufs (sendhdr) */ + cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1); + cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1); + cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1); + cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1); + } else { + /* Just use ld1q to retrieve aura + * when we don't need tx_offload + */ + mbuf0 = (uint64_t *)((uintptr_t)mbuf0 + + offsetof(struct rte_mempool, pool_id)); + mbuf1 = (uint64_t *)((uintptr_t)mbuf1 + + offsetof(struct rte_mempool, pool_id)); + mbuf2 = (uint64_t *)((uintptr_t)mbuf2 + + offsetof(struct rte_mempool, pool_id)); + mbuf3 = (uint64_t *)((uintptr_t)mbuf3 + + offsetof(struct rte_mempool, pool_id)); + xmask01 = vdupq_n_u64(0); + xmask23 = xmask01; + asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" : + [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory"); + + asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" : + [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory"); + + asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" : + [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory"); + + asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" : + [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory"); + xmask01 = vshlq_n_u64(xmask01, 20); + xmask23 = vshlq_n_u64(xmask23, 20); + + senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01); + senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23); + + /* Create 4W cmd for 4 mbufs (sendhdr, sgdesc) */ + cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1); + cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1); + cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1); + cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1); + cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1); + cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1); + cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1); + cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1); + } + + do { + vst1q_u64(lmt_addr, cmd00); + vst1q_u64(lmt_addr + 2, cmd01); + vst1q_u64(lmt_addr + 4, cmd10); + vst1q_u64(lmt_addr + 6, cmd11); + 
vst1q_u64(lmt_addr + 8, cmd20); + vst1q_u64(lmt_addr + 10, cmd21); + vst1q_u64(lmt_addr + 12, cmd30); + vst1q_u64(lmt_addr + 14, cmd31); + lmt_status = otx2_lmt_submit(io_addr); + + } while (lmt_status == 0); + } + + return pkts; +} + +#else +static __rte_always_inline uint16_t +nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t pkts, const uint16_t flags) +{ + RTE_SET_USED(tx_queue); + RTE_SET_USED(tx_pkts); + RTE_SET_USED(pkts); + RTE_SET_USED(flags); + return 0; +} +#endif + #define T(name, f4, f3, f2, f1, f0, sz, flags) \ static uint16_t __rte_noinline __hot \ otx2_nix_xmit_pkts_ ## name(void *tx_queue, \ @@ -107,6 +960,21 @@ otx2_nix_xmit_pkts_mseg_ ## name(void *tx_queue, \ NIX_TX_FASTPATH_MODES #undef T +#define T(name, f4, f3, f2, f1, f0, sz, flags) \ +static uint16_t __rte_noinline __hot \ +otx2_nix_xmit_pkts_vec_ ## name(void *tx_queue, \ + struct rte_mbuf **tx_pkts, uint16_t pkts) \ +{ \ + /* VLAN and TSTMP is not supported by vec */ \ + if ((flags) & NIX_TX_OFFLOAD_VLAN_QINQ_F || \ + (flags) & NIX_TX_OFFLOAD_TSTAMP_F) \ + return 0; \ + return nix_xmit_pkts_vector(tx_queue, tx_pkts, pkts, (flags)); \ +} + +NIX_TX_FASTPATH_MODES +#undef T + static inline void pick_tx_func(struct rte_eth_dev *eth_dev, const eth_tx_burst_t tx_burst[2][2][2][2][2]) @@ -143,7 +1011,20 @@ NIX_TX_FASTPATH_MODES #undef T }; - pick_tx_func(eth_dev, nix_eth_tx_burst); + const eth_tx_burst_t nix_eth_tx_vec_burst[2][2][2][2][2] = { +#define T(name, f4, f3, f2, f1, f0, sz, flags) \ + [f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_vec_ ## name, + +NIX_TX_FASTPATH_MODES +#undef T + }; + + if (dev->scalar_ena || + (dev->tx_offload_flags & + (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSTAMP_F))) + pick_tx_func(eth_dev, nix_eth_tx_burst); + else + pick_tx_func(eth_dev, nix_eth_tx_vec_burst); if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS) pick_tx_func(eth_dev, nix_eth_tx_burst_mseg); From patchwork Sun Jun 2 15:24:31 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54114 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 9255F1BC28; Sun, 2 Jun 2019 17:27:27 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id B388D1BC25 for ; Sun, 2 Jun 2019 17:27:26 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FK7HI020263; Sun, 2 Jun 2019 08:27:26 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=L/0ljATpVlB/HPy59chuNzur1qzjmQBoM7dAoGcD+s0=; b=SMvUNLkKtYF77UDQwpux99jW7VDFk3qONhvBkJIIQbXymemErFCx2xLQcqI1UC6vcwWZ DHzphGK4CW2ybx2LamzlqPP/QoxeigRve2XyGiZlu37XBXIv9EGYahgHxBPDwxwduPlu 7chX5AKhmUHgdOwMSYYzNjDVvv/EGQD7MJ8lU+V+44+9N0nHcG88z6njHNlqepZfBe/j SJJJd7lbvDjExp/98nAoblUyLwFXdjefUhQUCYF9SJ+7nvhDtVHxXmLdDfhKDGdQHxyH C9cuenhv0OX1Xuc7D+oPEij6VYPG7510iWWPXtQ3bm2FtEMiOSxvAUnalTWz90VdUE06 WA== Received: from sc-exch01.marvell.com ([199.233.58.181]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk49ag-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 
08:27:26 -0700 Received: from SC-EXCH01.marvell.com (10.93.176.81) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:27:24 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH01.marvell.com (10.93.176.81) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:27:24 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 015883F703F; Sun, 2 Jun 2019 08:27:22 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Vamsi Attunuru Date: Sun, 2 Jun 2019 20:54:31 +0530 Message-ID: <20190602152434.23996-56-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 55/58] net/octeontx2: add device start operation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Nithin Dabilpuram Add device start operation and update the correct function pointers for Rx and Tx burst functions. Signed-off-by: Nithin Dabilpuram Signed-off-by: Vamsi Attunuru Signed-off-by: Jerin Jacob --- drivers/net/octeontx2/otx2_ethdev.c | 180 ++++++++++++++++++++++++ drivers/net/octeontx2/otx2_flow.c | 4 +- drivers/net/octeontx2/otx2_flow_parse.c | 7 +- drivers/net/octeontx2/otx2_ptp.c | 8 ++ drivers/net/octeontx2/otx2_vlan.c | 1 + 5 files changed, 197 insertions(+), 3 deletions(-) diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index fdcab89b8..bdf291996 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -135,6 +135,55 @@ otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev) return otx2_mbox_process(mbox); } +static int +npc_rx_enable(struct otx2_eth_dev *dev) +{ + struct otx2_mbox *mbox = dev->mbox; + + otx2_mbox_alloc_msg_nix_lf_start_rx(mbox); + + return otx2_mbox_process(mbox); +} + +static int +npc_rx_disable(struct otx2_eth_dev *dev) +{ + struct otx2_mbox *mbox = dev->mbox; + + otx2_mbox_alloc_msg_nix_lf_stop_rx(mbox); + + return otx2_mbox_process(mbox); +} + +static int +nix_cgx_start_link_event(struct otx2_eth_dev *dev) +{ + struct otx2_mbox *mbox = dev->mbox; + + if (otx2_dev_is_vf(dev)) + return 0; + + otx2_mbox_alloc_msg_cgx_start_linkevents(mbox); + + return otx2_mbox_process(mbox); +} + +static int +cgx_intlbk_enable(struct otx2_eth_dev *dev, bool en) +{ + struct otx2_mbox *mbox = dev->mbox; + + if (otx2_dev_is_vf(dev)) + return 0; + + if (en) + otx2_mbox_alloc_msg_cgx_intlbk_enable(mbox); + else + otx2_mbox_alloc_msg_cgx_intlbk_disable(mbox); + + return otx2_mbox_process(mbox); +} + static inline void nix_rx_queue_reset(struct otx2_eth_rxq *rxq) { @@ -478,6 +527,74 @@ nix_sq_max_sqe_sz(struct otx2_eth_txq *txq) return NIX_MAXSQESZ_W8; } +static uint16_t +nix_rx_offload_flags(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct rte_eth_dev_data *data = eth_dev->data; + struct rte_eth_conf *conf = &data->dev_conf; + struct rte_eth_rxmode *rxmode = &conf->rxmode; + uint16_t flags = 0; + + if (rxmode->mq_mode == ETH_MQ_RX_RSS) + flags |= NIX_RX_OFFLOAD_RSS_F; + + if 
(dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM | + DEV_RX_OFFLOAD_UDP_CKSUM)) + flags |= NIX_RX_OFFLOAD_CHECKSUM_F; + + if (dev->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM | + DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM)) + flags |= NIX_RX_OFFLOAD_CHECKSUM_F; + + if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER) + flags |= NIX_RX_MULTI_SEG_F; + + if (dev->rx_offloads & (DEV_RX_OFFLOAD_VLAN_STRIP | + DEV_RX_OFFLOAD_QINQ_STRIP)) + flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F; + + if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)) + flags |= NIX_RX_OFFLOAD_TSTAMP_F; + + return flags; +} + +static uint16_t +nix_tx_offload_flags(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + uint64_t conf = dev->tx_offloads; + uint16_t flags = 0; + + /* Fastpath is dependent on these enums */ + RTE_BUILD_BUG_ON(PKT_TX_TCP_CKSUM != (1ULL << 52)); + RTE_BUILD_BUG_ON(PKT_TX_SCTP_CKSUM != (2ULL << 52)); + RTE_BUILD_BUG_ON(PKT_TX_UDP_CKSUM != (3ULL << 52)); + + if (conf & DEV_TX_OFFLOAD_VLAN_INSERT || + conf & DEV_TX_OFFLOAD_QINQ_INSERT) + flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F; + + if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM || + conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) + flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F; + + if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM || + conf & DEV_TX_OFFLOAD_TCP_CKSUM || + conf & DEV_TX_OFFLOAD_UDP_CKSUM || + conf & DEV_TX_OFFLOAD_SCTP_CKSUM) + flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F; + + if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE)) + flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F; + + if (conf & DEV_TX_OFFLOAD_MULTI_SEGS) + flags |= NIX_TX_MULTI_SEG_F; + + return flags; +} + static int nix_sq_init(struct otx2_eth_txq *txq) { @@ -1089,6 +1206,8 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) dev->rx_offloads = rxmode->offloads; dev->tx_offloads = txmode->offloads; + dev->rx_offload_flags |= nix_rx_offload_flags(eth_dev); + dev->tx_offload_flags |= nix_tx_offload_flags(eth_dev); dev->rss_info.rss_grps = NIX_RSS_GRPS; nb_rxq = RTE_MAX(data->nb_rx_queues, 1); @@ -1128,6 +1247,13 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev) goto free_nix_lf; } + /* Configure loop back mode */ + rc = cgx_intlbk_enable(dev, eth_dev->data->dev_conf.lpbk_mode); + if (rc) { + otx2_err("Failed to configure cgx loop back mode rc=%d", rc); + goto free_nix_lf; + } + rc = otx2_nix_rxchan_bpid_cfg(eth_dev, true); if (rc) { otx2_err("Failed to configure nix rx chan bpid cfg rc=%d", rc); @@ -1277,6 +1403,59 @@ otx2_nix_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx) return rc; } +static int +otx2_nix_dev_start(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + int rc, i; + + /* Start rx queues */ + for (i = 0; i < eth_dev->data->nb_rx_queues; i++) { + rc = otx2_nix_rx_queue_start(eth_dev, i); + if (rc) + return rc; + } + + /* Start tx queues */ + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) { + rc = otx2_nix_tx_queue_start(eth_dev, i); + if (rc) + return rc; + } + + rc = otx2_nix_update_flow_ctrl_mode(eth_dev); + if (rc) { + otx2_err("Failed to update flow ctrl mode %d", rc); + return rc; + } + + rc = npc_rx_enable(dev); + if (rc) { + otx2_err("Failed to enable NPC rx %d", rc); + return rc; + } + + otx2_nix_toggle_flag_link_cfg(dev, true); + + rc = nix_cgx_start_link_event(dev); + if (rc) { + otx2_err("Failed to start cgx link event %d", rc); + goto rx_disable; + } + + otx2_nix_toggle_flag_link_cfg(dev, false); + otx2_eth_set_tx_function(eth_dev); + otx2_eth_set_rx_function(eth_dev); + + return 0; + +rx_disable: + npc_rx_disable(dev); + 
otx2_nix_toggle_flag_link_cfg(dev, false); + return rc; +} + + /* Initialize and register driver with DPDK Application */ static const struct eth_dev_ops otx2_eth_dev_ops = { .dev_infos_get = otx2_nix_info_get, @@ -1286,6 +1465,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .tx_queue_release = otx2_nix_tx_queue_release, .rx_queue_setup = otx2_nix_rx_queue_setup, .rx_queue_release = otx2_nix_rx_queue_release, + .dev_start = otx2_nix_dev_start, .tx_queue_start = otx2_nix_tx_queue_start, .tx_queue_stop = otx2_nix_tx_queue_stop, .rx_queue_start = otx2_nix_rx_queue_start, diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c index 270433cd6..68337631d 100644 --- a/drivers/net/octeontx2/otx2_flow.c +++ b/drivers/net/octeontx2/otx2_flow.c @@ -498,8 +498,10 @@ otx2_flow_destroy(struct rte_eth_dev *dev, return -EINVAL; /* Clear mark offload flag if there are no more mark actions */ - if (rte_atomic32_sub_return(&npc->mark_actions, 1) == 0) + if (rte_atomic32_sub_return(&npc->mark_actions, 1) == 0) { hw->rx_offload_flags &= ~NIX_RX_OFFLOAD_MARK_UPDATE_F; + otx2_eth_set_rx_function(dev); + } } rc = flow_free_rss_action(dev, flow); diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c index cf13813d8..cebae645e 100644 --- a/drivers/net/octeontx2/otx2_flow_parse.c +++ b/drivers/net/octeontx2/otx2_flow_parse.c @@ -922,8 +922,11 @@ otx2_flow_parse_actions(struct rte_eth_dev *dev, if (mark) flow->npc_action |= (uint64_t)mark << 40; - if (rte_atomic32_read(&npc->mark_actions) == 1) - hw->rx_offload_flags |= NIX_RX_OFFLOAD_MARK_UPDATE_F; + if (rte_atomic32_read(&npc->mark_actions) == 1) { + hw->rx_offload_flags |= + NIX_RX_OFFLOAD_MARK_UPDATE_F; + otx2_eth_set_rx_function(dev); + } /* Ideally AF must ensure that correct pf_func is set */ diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c index 5291da241..0186c629a 100644 --- a/drivers/net/octeontx2/otx2_ptp.c +++ b/drivers/net/octeontx2/otx2_ptp.c @@ -118,6 +118,10 @@ otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev) struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i]; otx2_nix_form_default_desc(txq); } + + /* Setting up the function pointers as per new offload flags */ + otx2_eth_set_rx_function(eth_dev); + otx2_eth_set_tx_function(eth_dev); } return rc; } @@ -147,6 +151,10 @@ otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev) struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i]; otx2_nix_form_default_desc(txq); } + + /* Setting up the function pointers as per new offload flags */ + otx2_eth_set_rx_function(eth_dev); + otx2_eth_set_tx_function(eth_dev); } return rc; } diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c index 3c0d40553..4f56cefd9 100644 --- a/drivers/net/octeontx2/otx2_vlan.c +++ b/drivers/net/octeontx2/otx2_vlan.c @@ -656,6 +656,7 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask) DEV_RX_OFFLOAD_QINQ_STRIP)) { dev->rx_offloads |= offloads; dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F; + otx2_eth_set_rx_function(eth_dev); } done: From patchwork Sun Jun 2 15:24:32 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54096 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 2E4441BC76; Sun, 2 Jun 
2019 17:27:31 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id AEF9B1BC76 for ; Sun, 2 Jun 2019 17:27:29 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FKPIO020378; Sun, 2 Jun 2019 08:27:29 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=oqIXB+fCFKkNjh34kvAjuhynxSqibC6TzYHGKZIo36I=; b=jfMKGdZVgA1W3VhLPdnhaQNdOzgy4sHBQZjvXbmveJk11RpqhEvlm7EXBJDYL4NImyi/ 3EgoaAKDo6uHVuoQzZCg2sPvqJ5zN4PgHrxQ4iJAfRhrsd9s77Ca9frlzQhuMAa4AsqS 37FVzgYvb8vPUNCd27vm3E4FIndaL0dfMEI/te4qk7fOpHxF0m5NMn9eRnBFGwVn/t7i hioQat8yILaYdM4jZvGlYQ2nDVqYpbwxS2POjuocDOKeVuvnFH5szcw9W0hh3Q2uSUpo E3t+ZK2t98duqW1DXXZa6mzRk6ocztTMo4YgW5PjZshz70amuJIz9LLYFC8jnXKB0JM2 Eg== Received: from sc-exch03.marvell.com ([199.233.58.183]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk49an-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:27:29 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:27:27 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:27:27 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 165123F7041; Sun, 2 Jun 2019 08:27:25 -0700 (PDT) From: To: , Jerin Jacob , Nithin Dabilpuram , Kiran Kumar K CC: , Vamsi Attunuru Date: Sun, 2 Jun 2019 20:54:32 +0530 Message-ID: <20190602152434.23996-57-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 56/58] net/octeontx2: add device stop and close operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Nithin Dabilpuram Add device stop, close and reset operations. 
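As context for this change: applications reach the new callbacks through the generic ethdev API. The sketch below is illustrative only and not part of the patch (the port_teardown() helper and its error handling are hypothetical); rte_eth_dev_stop(), rte_eth_dev_close() and rte_eth_dev_reset() are the standard ethdev entry points that dispatch to the dev_stop, dev_close and dev_reset ops registered here.

#include <stdbool.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical helper (illustration only): exercises the dev_stop,
 * dev_close and dev_reset callbacks added by this patch.
 */
static void
port_teardown(uint16_t port_id, bool do_reset)
{
        /* Dispatches to otx2_nix_dev_stop(): stops CGX link events,
         * disables NPC Rx and frees packets pending in the Rx queues.
         */
        rte_eth_dev_stop(port_id);

        if (do_reset) {
                /* Dispatches to otx2_nix_dev_reset(): uninit without
                 * closing the AF mailbox, then re-init; the port must
                 * be reconfigured before it can be started again.
                 */
                if (rte_eth_dev_reset(port_id) != 0)
                        printf("port %u: reset failed\n", port_id);
                return;
        }

        /* Dispatches to otx2_nix_dev_close(): full uninit, i.e.
         * otx2_eth_dev_uninit(eth_dev, true).
         */
        rte_eth_dev_close(port_id);
}

Note that in this DPDK release rte_eth_dev_stop() and rte_eth_dev_close() return void, so only the reset path reports an error code.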
Signed-off-by: Nithin Dabilpuram Signed-off-by: Vamsi Attunuru --- drivers/net/octeontx2/otx2_ethdev.c | 70 +++++++++++++++++++++++++++++ 1 file changed, 70 insertions(+) diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index bdf291996..6c67cecd5 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -184,6 +184,19 @@ cgx_intlbk_enable(struct otx2_eth_dev *dev, bool en) return otx2_mbox_process(mbox); } +static int +nix_cgx_stop_link_event(struct otx2_eth_dev *dev) +{ + struct otx2_mbox *mbox = dev->mbox; + + if (otx2_dev_is_vf(dev)) + return 0; + + otx2_mbox_alloc_msg_cgx_stop_linkevents(mbox); + + return otx2_mbox_process(mbox); +} + static inline void nix_rx_queue_reset(struct otx2_eth_rxq *rxq) { @@ -1403,6 +1416,37 @@ otx2_nix_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx) return rc; } +static void +otx2_nix_dev_stop(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct rte_mbuf *rx_pkts[32]; + struct otx2_eth_rxq *rxq; + int count, i, j, rc; + + nix_cgx_stop_link_event(dev); + npc_rx_disable(dev); + + /* Stop rx queues and free up pkts pending */ + for (i = 0; i < eth_dev->data->nb_rx_queues; i++) { + rc = otx2_nix_rx_queue_stop(eth_dev, i); + if (rc) + continue; + + rxq = eth_dev->data->rx_queues[i]; + count = dev->rx_pkt_burst_no_offload(rxq, rx_pkts, 32); + while (count) { + for (j = 0; j < count; j++) + rte_pktmbuf_free(rx_pkts[j]); + count = dev->rx_pkt_burst_no_offload(rxq, rx_pkts, 32); + } + } + + /* Stop tx queues */ + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) + otx2_nix_tx_queue_stop(eth_dev, i); +} + static int otx2_nix_dev_start(struct rte_eth_dev *eth_dev) { @@ -1455,6 +1499,8 @@ otx2_nix_dev_start(struct rte_eth_dev *eth_dev) return rc; } +static int otx2_nix_dev_reset(struct rte_eth_dev *eth_dev); +static void otx2_nix_dev_close(struct rte_eth_dev *eth_dev); /* Initialize and register driver with DPDK Application */ static const struct eth_dev_ops otx2_eth_dev_ops = { @@ -1466,6 +1512,8 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .rx_queue_setup = otx2_nix_rx_queue_setup, .rx_queue_release = otx2_nix_rx_queue_release, .dev_start = otx2_nix_dev_start, + .dev_stop = otx2_nix_dev_stop, + .dev_close = otx2_nix_dev_close, .tx_queue_start = otx2_nix_tx_queue_start, .tx_queue_stop = otx2_nix_tx_queue_stop, .rx_queue_start = otx2_nix_rx_queue_start, @@ -1473,6 +1521,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .dev_set_link_up = otx2_nix_dev_set_link_up, .dev_set_link_down = otx2_nix_dev_set_link_down, .dev_supported_ptypes_get = otx2_nix_supported_ptypes_get, + .dev_reset = otx2_nix_dev_reset, .stats_get = otx2_nix_dev_stats_get, .stats_reset = otx2_nix_dev_stats_reset, .get_reg = otx2_nix_dev_get_reg, @@ -1727,6 +1776,7 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) /* Disable nix bpid config */ otx2_nix_rxchan_bpid_cfg(eth_dev, false); + npc_rx_disable(dev); /* Disable vlan offloads */ otx2_nix_vlan_fini(eth_dev); @@ -1737,6 +1787,8 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) if (otx2_ethdev_is_ptp_en(dev)) otx2_nix_timesync_disable(eth_dev); + nix_cgx_stop_link_event(dev); + /* Free up SQs */ for (i = 0; i < eth_dev->data->nb_tx_queues; i++) { otx2_nix_tx_queue_release(eth_dev->data->tx_queues[i]); @@ -1792,6 +1844,24 @@ otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close) return 0; } +static void +otx2_nix_dev_close(struct rte_eth_dev *eth_dev) +{ + 
otx2_eth_dev_uninit(eth_dev, true); +} + +static int +otx2_nix_dev_reset(struct rte_eth_dev *eth_dev) +{ + int rc; + + rc = otx2_eth_dev_uninit(eth_dev, false); + if (rc) + return rc; + + return otx2_eth_dev_init(eth_dev); +} + static int nix_remove(struct rte_pci_device *pci_dev) { From patchwork Sun Jun 2 15:24:33 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54097 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 97CEF1BC1B; Sun, 2 Jun 2019 17:27:34 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 2F45B1BC71 for ; Sun, 2 Jun 2019 17:27:33 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FKKpJ020361; Sun, 2 Jun 2019 08:27:32 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=I7lG+v8KhMdwuGlsn8g96AHyyufEkLaQdRNKvQyfMUo=; b=B1om2C/FgirL7ocLdqG2WN2Dci2uOeu0n1Kz5BUPYh92fv4E+xoxNAHiVsrabi5zYbp4 EnPYEY6IsP+cQZe7qy+gM+TmZt6KnrDYV28WWdsrNzsEW+BjsxTV59t+4/tYr6idPxBP +adtGXxKJUrMWtfjQ5RDuFg3orNawP2SCxsqPwosaAxlpAM9qDc86MWQN8JR9MPOKkvw BM7Rk6WmdAWxA14RxZoSMRek/pa88eMnZSydfdf+oR1T3wm8dJdVHNQdeKtKF2IpLamM xO1Rhe868woadPx7a+T1Uznzbrdt1n5zNGYUGczMwFjs2GBhPk5nhSy60IM9eptUqLU1 AQ== Received: from sc-exch04.marvell.com ([199.233.58.184]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk49au-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:27:32 -0700 Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH04.marvell.com (10.93.176.84) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:27:31 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:27:31 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id D61913F703F; Sun, 2 Jun 2019 08:27:28 -0700 (PDT) From: To: , John McNamara , Marko Kovacevic , Jerin Jacob , "Nithin Dabilpuram" , Kiran Kumar K CC: , Vamsi Attunuru , "Sunil Kumar Kori" Date: Sun, 2 Jun 2019 20:54:33 +0530 Message-ID: <20190602152434.23996-58-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 57/58] net/octeontx2: add MTU set operation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vamsi Attunuru Add MTU set operation and MTU update feature. 
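For illustration only (not part of this patch): the snippet below shows how an application reaches the new callback; the set_port_mtu() helper name and its logging are hypothetical. Internally, otx2_nix_mtu_set() converts the MTU to a hardware frame size (mtu + NIX_HW_L2_OVERHEAD), checks it against the NIX_MIN_HW_FRS to NIX_MAX_HW_FRS range and against the Rx buffer size when scatter is not enabled, and then programs the hardware through the nix_set_hw_frs mailbox request.

#include <stdio.h>
#include <rte_ethdev.h>

/* Hypothetical helper (illustration only). rte_eth_dev_set_mtu()
 * dispatches to the mtu_set op wired up by this patch.
 */
static int
set_port_mtu(uint16_t port_id, uint16_t mtu)
{
        int rc = rte_eth_dev_set_mtu(port_id, mtu);

        if (rc != 0)
                printf("port %u: MTU %u rejected (rc=%d)\n",
                       port_id, mtu, rc);
        return rc;
}

otx2_nix_recalc_mtu() reuses the same path at device start to enable scatter when needed and keep the hardware frame size in sync with max_rx_pkt_len.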
Signed-off-by: Vamsi Attunuru Signed-off-by: Sunil Kumar Kori --- doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + drivers/net/octeontx2/otx2_ethdev.c | 7 ++ drivers/net/octeontx2/otx2_ethdev.h | 4 ++ drivers/net/octeontx2/otx2_ethdev_ops.c | 80 ++++++++++++++++++++++ 5 files changed, 93 insertions(+) diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index 396979451..e96c588fa 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -15,6 +15,7 @@ Link status event = Y Fast mbuf free = Y Free Tx mbuf on demand = Y Queue start/stop = Y +MTU update = Y Promiscuous mode = Y Allmulticast mode = Y Unicast MAC filter = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index 1435fd91e..7ad097df4 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -15,6 +15,7 @@ Link status event = Y Fast mbuf free = Y Free Tx mbuf on demand = Y Queue start/stop = Y +MTU update = Y Promiscuous mode = Y Allmulticast mode = Y Unicast MAC filter = Y diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c index 6c67cecd5..ddd924ce8 100644 --- a/drivers/net/octeontx2/otx2_ethdev.c +++ b/drivers/net/octeontx2/otx2_ethdev.c @@ -1453,6 +1453,12 @@ otx2_nix_dev_start(struct rte_eth_dev *eth_dev) struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); int rc, i; + if (eth_dev->data->nb_rx_queues != 0) { + rc = otx2_nix_recalc_mtu(eth_dev); + if (rc) + return rc; + } + /* Start rx queues */ for (i = 0; i < eth_dev->data->nb_rx_queues; i++) { rc = otx2_nix_rx_queue_start(eth_dev, i); @@ -1525,6 +1531,7 @@ static const struct eth_dev_ops otx2_eth_dev_ops = { .stats_get = otx2_nix_dev_stats_get, .stats_reset = otx2_nix_dev_stats_reset, .get_reg = otx2_nix_dev_get_reg, + .mtu_set = otx2_nix_mtu_set, .mac_addr_add = otx2_nix_mac_addr_add, .mac_addr_remove = otx2_nix_mac_addr_del, .mac_addr_set = otx2_nix_mac_addr_set, diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h index dff4de250..862a1ccbb 100644 --- a/drivers/net/octeontx2/otx2_ethdev.h +++ b/drivers/net/octeontx2/otx2_ethdev.h @@ -351,6 +351,10 @@ int otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx); int otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx); uint64_t otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id); +/* MTU */ +int otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu); +int otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev); + /* Link */ void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set); int otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete); diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c index d2cb5ba1c..e8959e179 100644 --- a/drivers/net/octeontx2/otx2_ethdev_ops.c +++ b/drivers/net/octeontx2/otx2_ethdev_ops.c @@ -6,6 +6,86 @@ #include "otx2_ethdev.h" +int +otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu) +{ + uint32_t buffsz, frame_size = mtu + NIX_HW_L2_OVERHEAD; + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct rte_eth_dev_data *data = eth_dev->data; + struct otx2_mbox *mbox = dev->mbox; + struct nix_frs_cfg *req; + int rc; + + /* Check if MTU is within the allowed range */ + if (frame_size < NIX_MIN_HW_FRS || frame_size > NIX_MAX_HW_FRS) + return -EINVAL; + + buffsz = 
data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM; + + /* Refuse MTU that requires the support of scattered packets + * when this feature has not been enabled before. + */ + if (data->dev_started && frame_size > buffsz && + !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) + return -EINVAL; + + /* Check * >= max_frame */ + if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER) && + (frame_size > buffsz * NIX_RX_NB_SEG_MAX)) + return -EINVAL; + + req = otx2_mbox_alloc_msg_nix_set_hw_frs(mbox); + req->update_smq = true; + req->maxlen = frame_size; + + rc = otx2_mbox_process(mbox); + if (rc) + return rc; + + if (frame_size > RTE_ETHER_MAX_LEN) + dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME; + else + dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME; + + /* Update max_rx_pkt_len */ + data->dev_conf.rxmode.max_rx_pkt_len = frame_size; + + return rc; +} + +int +otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev) +{ + struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev); + struct rte_eth_dev_data *data = eth_dev->data; + struct rte_pktmbuf_pool_private *mbp_priv; + struct otx2_eth_rxq *rxq; + uint32_t buffsz; + uint16_t mtu; + int rc; + + /* Get rx buffer size */ + rxq = data->rx_queues[0]; + mbp_priv = rte_mempool_get_priv(rxq->pool); + buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM; + + /* Setup scatter mode if needed by jumbo */ + if (data->dev_conf.rxmode.max_rx_pkt_len > buffsz) + dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER; + + /* Setup MTU based on max_rx_pkt_len or default */ + mtu = ((dev->rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) || + (data->dev_conf.rxmode.max_rx_pkt_len < RTE_ETHER_MAX_LEN)) ? + data->dev_conf.rxmode.max_rx_pkt_len - NIX_HW_L2_OVERHEAD : + RTE_ETHER_MTU; + + rc = otx2_nix_mtu_set(eth_dev, mtu); + if (rc) + otx2_err("Failed to set default MTU size %d", rc); + + return rc; +} + static void nix_cgx_promisc_config(struct rte_eth_dev *eth_dev, int en) { From patchwork Sun Jun 2 15:24:34 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Jerin Jacob Kollanukkaran X-Patchwork-Id: 54098 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@dpdk.org Delivered-To: patchwork@dpdk.org Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id 185851BB7A; Sun, 2 Jun 2019 17:27:38 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by dpdk.org (Postfix) with ESMTP id 123FA1B9FE for ; Sun, 2 Jun 2019 17:27:36 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.27/8.16.0.27) with SMTP id x52FK7HJ020263; Sun, 2 Jun 2019 08:27:36 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0818; bh=uP1tn4vCqkDEAvdALAgJEkMotbx6hhPjFE/lS/C/XWs=; b=ADdUU/WQ/GyhZ+Q28emyekbkCSMJaPZK/ZKNZH+PIzvZU/FQ45/NYIB4/Jz8Is6tb6M9 djq1T8jnM2PnvDH8jFjpQOWcPSz6Oy73JJWbZrvAmwbWyTE0+Nre1f5VHyJvEcLD05oQ j3PhsfJ6tD9ZsqfeiuHIr6oVSyeyz8XLl37D9Eys4Nn/BzHDatN+xxjnZpLeuMxrX1fr HytgnTQtsr/pNfv9t09D1rlm3QpZdxVcCsXPgD6JPVkCQgVRUTAZgDLumENevwgtnO0R erGS6hnuB2k70JxkIVsQO+tN9SdjtFRihQXQ+2XCDc7Tu51lhRRey2oR1+SA61UqaWxC xg== Received: from sc-exch02.marvell.com ([199.233.58.182]) by mx0b-0016f401.pphosted.com with ESMTP id 2survk49b3-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Sun, 02 Jun 2019 08:27:36 -0700 
Received: from SC-EXCH03.marvell.com (10.93.176.83) by SC-EXCH02.marvell.com (10.93.176.82) with Microsoft SMTP Server (TLS) id 15.0.1367.3; Sun, 2 Jun 2019 08:27:34 -0700 Received: from maili.marvell.com (10.93.176.43) by SC-EXCH03.marvell.com (10.93.176.83) with Microsoft SMTP Server id 15.0.1367.3 via Frontend Transport; Sun, 2 Jun 2019 08:27:34 -0700 Received: from jerin-lab.marvell.com (jerin-lab.marvell.com [10.28.34.14]) by maili.marvell.com (Postfix) with ESMTP id 808463F703F; Sun, 2 Jun 2019 08:27:32 -0700 (PDT) From: To: , Thomas Monjalon , John McNamara , Marko Kovacevic , "Jerin Jacob" , Nithin Dabilpuram , Kiran Kumar K , Vamsi Attunuru CC: Date: Sun, 2 Jun 2019 20:54:34 +0530 Message-ID: <20190602152434.23996-59-jerinj@marvell.com> X-Mailer: git-send-email 2.21.0 In-Reply-To: <20190602152434.23996-1-jerinj@marvell.com> References: <20190602152434.23996-1-jerinj@marvell.com> MIME-Version: 1.0 X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:, , definitions=2019-06-02_09:, , signatures=0 Subject: [dpdk-dev] [PATCH v1 58/58] doc: add Marvell OCTEON TX2 ethdev documentation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Jerin Jacob Add Marvell OCTEON TX2 ethdev documentation. This patch also updates the MAINTAINERS file and shared library versions in release_19_05.rst. Cc: John McNamara Cc: Thomas Monjalon Signed-off-by: Jerin Jacob Signed-off-by: Vamsi Attunuru Signed-off-by: Kiran Kumar K Signed-off-by: Nithin Dabilpuram --- MAINTAINERS | 8 + doc/guides/nics/features/octeontx2.ini | 1 + doc/guides/nics/features/octeontx2_vec.ini | 1 + doc/guides/nics/features/octeontx2_vf.ini | 1 + doc/guides/nics/index.rst | 1 + doc/guides/nics/octeontx2.rst | 288 +++++++++++++++++++++ doc/guides/platform/octeontx2.rst | 3 + doc/guides/rel_notes/release_19_05.rst | 1 + 8 files changed, 304 insertions(+) create mode 100644 doc/guides/nics/octeontx2.rst diff --git a/MAINTAINERS b/MAINTAINERS index 74ac6d41f..fe509c1f9 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -668,6 +668,14 @@ F: drivers/net/mvneta/ F: doc/guides/nics/mvneta.rst F: doc/guides/nics/features/mvneta.ini +Marvell OCTEON TX2 +M: Jerin Jacob +M: Nithin Dabilpuram +M: Kiran Kumar K +F: drivers/net/octeontx2/ +F: doc/guides/nics/features/octeontx2*.ini +F: doc/guides/nics/octeontx2.rst + Mellanox mlx4 M: Matan Azrad M: Shahaf Shuler diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini index e96c588fa..ef1a638e9 100644 --- a/doc/guides/nics/features/octeontx2.ini +++ b/doc/guides/nics/features/octeontx2.ini @@ -44,3 +44,4 @@ Extended stats = Y FW version = Y Module EEPROM dump = Y Registers dump = Y +Usage doc = Y diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini index 7ad097df4..8f95727f7 100644 --- a/doc/guides/nics/features/octeontx2_vec.ini +++ b/doc/guides/nics/features/octeontx2_vec.ini @@ -41,3 +41,4 @@ Stats per queue = Y FW version = Y Module EEPROM dump = Y Registers dump = Y +Usage doc = Y diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini index 0d5137316..e78385bb2 100644 --- a/doc/guides/nics/features/octeontx2_vf.ini +++ b/doc/guides/nics/features/octeontx2_vf.ini @@ -36,3 +36,4 @@ Stats per queue = Y FW version = Y Module EEPROM dump = Y Registers dump = Y +Usage doc = Y diff --git 
a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst index 2221c35f2..6fa075594 100644 --- a/doc/guides/nics/index.rst +++ b/doc/guides/nics/index.rst @@ -44,6 +44,7 @@ Network Interface Controller Drivers nfb nfp octeontx + octeontx2 qede sfc_efx softnic diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst new file mode 100644 index 000000000..2f14a4a1c --- /dev/null +++ b/doc/guides/nics/octeontx2.rst @@ -0,0 +1,288 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright(C) 2019 Marvell International Ltd. + +OCTEON TX2 Poll Mode driver +=========================== + +The OCTEON TX2 ETHDEV PMD (**librte_pmd_octeontx2**) provides poll mode ethdev +driver support for the inbuilt network device found in **Marvell OCTEON TX2** +SoC family as well as for their virtual functions (VF) in SR-IOV context. + +More information can be found at `Marvell Official Website +`_. + +Features +-------- + +Features of the OCTEON TX2 Ethdev PMD are: + +- Packet type information +- Promiscuous mode +- Port hardware statistics +- Jumbo frames +- SR-IOV VF +- Lock-free Tx queue +- Multiple queues for TX and RX +- Receive Side Scaling (RSS) +- MAC/VLAN filtering +- Generic flow API +- Inner and Outer Checksum offload +- VLAN/QinQ stripping and insertion +- Link state information +- Link flow control +- MTU update +- Scatter-Gather IO support +- Vector Poll mode driver +- Debug utilities - Context dump and error interrupt support +- IEEE1588 timestamping +- HW offloaded `ethdev Rx queue` to `eventdev event queue` packet injection + +Prerequisites +------------- + +See :doc:`../platform/octeontx2` for setup information. + +Compile time Config Options +--------------------------- + +The following options may be modified in the ``config`` file. + +- ``CONFIG_RTE_LIBRTE_OCTEONTX2_PMD`` (default ``y``) + + Toggle compilation of the ``librte_pmd_octeontx2`` driver. + +Driver compilation and testing +------------------------------ + +Refer to the document :ref:`compiling and testing a PMD for a NIC ` +for details. + +To compile the OCTEON TX2 PMD for Linux arm64 gcc, +use arm64-octeontx2-linux-gcc as target. + +#. Running testpmd: + + Follow instructions available in the document + :ref:`compiling and testing a PMD for a NIC ` + to run testpmd. + + Example output: + + .. code-block:: console + + ./build/app/testpmd -c 0x300 -w 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1 + EAL: Detected 24 lcore(s) + EAL: Detected 1 NUMA nodes + EAL: Multi-process socket /var/run/dpdk/rte/mp_socket + EAL: No available hugepages reported in hugepages-2048kB + EAL: Probing VFIO support... + EAL: VFIO support initialized + EAL: PCI device 0002:02:00.0 on NUMA socket 0 + EAL: probe driver: 177d:a063 net_octeontx2 + EAL: using IOMMU type 1 (Type 1) + testpmd: create a new mbuf pool : n=267456, size=2176, socket=0 + testpmd: preferred mempool ops selected: octeontx2_npa + Configuring Port 0 (socket 0) + PMD: Port 0: Link Up - speed 40000 Mbps - full-duplex + + Port 0: link state change event + Port 0: 36:10:66:88:7A:57 + Checking link statuses... 
+ Done + No commandline core given, start packet forwarding + io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native + Logical Core 9 (socket 0) forwards packets on 1 streams: + RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00 + + io packet forwarding packets/burst=32 + nb forwarding cores=1 - nb forwarding ports=1 + port 0: RX queue number: 1 Tx queue number: 1 + Rx offloads=0x0 Tx offloads=0x10000 + RX queue: 0 + RX desc=512 - RX free threshold=0 + RX threshold registers: pthresh=0 hthresh=0 wthresh=0 + RX Offloads=0x0 + TX queue: 0 + TX desc=512 - TX free threshold=0 + TX threshold registers: pthresh=0 hthresh=0 wthresh=0 + TX offloads=0x10000 - TX RS bit threshold=0 + Press enter to exit +Runtime Config Options +---------------------- + +- ``HW offload ptype parsing disable`` (default ``0``) + + Packet type parsing is HW offloaded by default; this feature may be toggled + using the ``ptype_disable`` ``devargs`` parameter. + +- ``Rx scalar mode enable`` (default ``0``) + + Ethdev Rx supports both scalar and vector modes; the scalar mode may be + selected at runtime using the ``scalar_enable`` ``devargs`` parameter. + +- ``RSS reta size`` (default ``64``) + + The RSS redirection table size may be configured at runtime using the + ``reta_size`` ``devargs`` parameter. + + For example:: + + -w 0002:02:00.0,reta_size=256 + + With the above configuration, a RETA table of size 256 is populated. + +- ``Flow priority levels`` (default ``3``) + + The number of RTE Flow priority levels can be configured at runtime using + the ``flow_max_priority`` ``devargs`` parameter. + + For example:: + + -w 0002:02:00.0,flow_max_priority=10 + + With the above configuration, the number of priority levels is set to 10 + (0-9). The maximum supported is 32. + +- ``Reserve Flow entries`` (default ``8``) + + RTE flow entries can be preallocated, and the preallocation size can be + selected at runtime using the ``flow_prealloc_size`` ``devargs`` parameter. + + For example:: + + -w 0002:02:00.0,flow_prealloc_size=4 + + With the above configuration, the preallocation size is set to 4. The + maximum supported size is 32. + +.. note:: + + The above ``devargs`` parameters are configurable per device; if a setting + is required on all ethdev ports, the application must pass the parameter + to every PCIe device. + +Limitations +----------- + +``mempool_octeontx2`` external mempool handler dependency +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The OCTEON TX2 SoC family NIC has an inbuilt HW-assisted external mempool +manager. The ``net_octeontx2`` PMD works only with the ``mempool_octeontx2`` +mempool handler, as it is the most efficient option for packet allocation and +Tx buffer recycling on the OCTEON TX2 SoC platform. + +CRC stripping +~~~~~~~~~~~~~ + +The OCTEON TX2 SoC family NICs strip the CRC for every packet being received by +the host interface irrespective of the offload configuration. + + +Debugging Options +----------------- + +.. _table_octeontx2_ethdev_debug_options: + +.. 
+Limitations
+-----------
+
+``mempool_octeontx2`` external mempool handler dependency
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The OCTEON TX2 SoC family NIC has an inbuilt HW-assisted external mempool
+manager. The ``net_octeontx2`` PMD only works with the ``mempool_octeontx2``
+mempool handler, as it is the most performance-effective way of packet
+allocation and Tx buffer recycling on the OCTEON TX2 SoC platform.
+
+CRC striping
+~~~~~~~~~~~~
+
+The OCTEON TX2 SoC family NICs strip the CRC for every packet received by the
+host interface, irrespective of the offload configuration.
+
+
+Debugging Options
+-----------------
+
+.. _table_octeontx2_ethdev_debug_options:
+
+.. table:: OCTEON TX2 ethdev debug options
+
+   +---+------------+--------------------------------------------+
+   | # | Component  | EAL log command                            |
+   +===+============+============================================+
+   | 1 | NIX        | --log-level='pmd\.net\.octeontx2,8'        |
+   +---+------------+--------------------------------------------+
+   | 2 | NPC        | --log-level='pmd\.net\.octeontx2\.flow,8'  |
+   +---+------------+--------------------------------------------+
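+
+For example, NIX debug logging could be enabled on the testpmd command line
+shown earlier (a sketch only; any EAL-based application accepts the same
+option):
+
+.. code-block:: console
+
+   ./build/app/testpmd --log-level='pmd\.net\.octeontx2,8' -c 0x300 -w 0002:02:00.0 -- --portmask=0x1
+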
+RTE Flow Support
+----------------
+
+The OCTEON TX2 SoC family NIC has support for the following patterns and
+actions.
+
+Patterns:
+
+.. _table_octeontx2_supported_flow_item_types:
+
+.. table:: Item types
+
+   +----+--------------------------------+
+   | #  | Pattern Type                   |
+   +====+================================+
+   | 1  | RTE_FLOW_ITEM_TYPE_ETH         |
+   +----+--------------------------------+
+   | 2  | RTE_FLOW_ITEM_TYPE_VLAN        |
+   +----+--------------------------------+
+   | 3  | RTE_FLOW_ITEM_TYPE_E_TAG       |
+   +----+--------------------------------+
+   | 4  | RTE_FLOW_ITEM_TYPE_IPV4        |
+   +----+--------------------------------+
+   | 5  | RTE_FLOW_ITEM_TYPE_IPV6        |
+   +----+--------------------------------+
+   | 6  | RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4|
+   +----+--------------------------------+
+   | 7  | RTE_FLOW_ITEM_TYPE_MPLS        |
+   +----+--------------------------------+
+   | 8  | RTE_FLOW_ITEM_TYPE_ICMP        |
+   +----+--------------------------------+
+   | 9  | RTE_FLOW_ITEM_TYPE_UDP         |
+   +----+--------------------------------+
+   | 10 | RTE_FLOW_ITEM_TYPE_TCP         |
+   +----+--------------------------------+
+   | 11 | RTE_FLOW_ITEM_TYPE_SCTP        |
+   +----+--------------------------------+
+   | 12 | RTE_FLOW_ITEM_TYPE_ESP         |
+   +----+--------------------------------+
+   | 13 | RTE_FLOW_ITEM_TYPE_GRE         |
+   +----+--------------------------------+
+   | 14 | RTE_FLOW_ITEM_TYPE_NVGRE       |
+   +----+--------------------------------+
+   | 15 | RTE_FLOW_ITEM_TYPE_VXLAN       |
+   +----+--------------------------------+
+   | 16 | RTE_FLOW_ITEM_TYPE_GTPC        |
+   +----+--------------------------------+
+   | 17 | RTE_FLOW_ITEM_TYPE_GTPU        |
+   +----+--------------------------------+
+   | 18 | RTE_FLOW_ITEM_TYPE_VOID        |
+   +----+--------------------------------+
+   | 19 | RTE_FLOW_ITEM_TYPE_ANY         |
+   +----+--------------------------------+
+
+Actions:
+
+.. _table_octeontx2_supported_ingress_action_types:
+
+.. table:: Ingress action types
+
+   +----+--------------------------------+
+   | #  | Action Type                    |
+   +====+================================+
+   | 1  | RTE_FLOW_ACTION_TYPE_VOID      |
+   +----+--------------------------------+
+   | 2  | RTE_FLOW_ACTION_TYPE_MARK      |
+   +----+--------------------------------+
+   | 3  | RTE_FLOW_ACTION_TYPE_FLAG      |
+   +----+--------------------------------+
+   | 4  | RTE_FLOW_ACTION_TYPE_COUNT     |
+   +----+--------------------------------+
+   | 5  | RTE_FLOW_ACTION_TYPE_DROP      |
+   +----+--------------------------------+
+   | 6  | RTE_FLOW_ACTION_TYPE_QUEUE     |
+   +----+--------------------------------+
+   | 7  | RTE_FLOW_ACTION_TYPE_RSS       |
+   +----+--------------------------------+
+   | 8  | RTE_FLOW_ACTION_TYPE_SECURITY  |
+   +----+--------------------------------+
+
+.. _table_octeontx2_supported_egress_action_types:
+
+.. table:: Egress action types
+
+   +----+--------------------------------+
+   | #  | Action Type                    |
+   +====+================================+
+   | 1  | RTE_FLOW_ACTION_TYPE_COUNT     |
+   +----+--------------------------------+
+   | 2  | RTE_FLOW_ACTION_TYPE_DROP      |
+   +----+--------------------------------+
diff --git a/doc/guides/platform/octeontx2.rst b/doc/guides/platform/octeontx2.rst
index c9ea45647..d2592f119 100644
--- a/doc/guides/platform/octeontx2.rst
+++ b/doc/guides/platform/octeontx2.rst
@@ -98,6 +98,9 @@ HW Offload Drivers
 
 This section lists dataplane H/W block(s) available in OCTEON TX2 SoC.
 
+#. **Ethdev Driver**
+   See :doc:`../nics/octeontx2` for NIX Ethdev driver information.
+
 #. **Mempool Driver**
    See :doc:`../mempool/octeontx2` for NPA mempool driver information.
 
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index b4c6972e3..e925ccf0e 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -386,6 +386,7 @@ The libraries prepended with a plus sign were incremented in this version.
      librte_pmd_i40e.so.2
      librte_pmd_ixgbe.so.2
      librte_pmd_dpaa2_qdma.so.1
+   + librte_pmd_octeontx2.so.1
      librte_pmd_ring.so.2
      librte_pmd_softnic.so.1
      librte_pmd_vhost.so.2